diff --git a/DEVNOTES.txt b/DEVNOTES.txt index 7785ee780a4b4..307189059ae70 100644 --- a/DEVNOTES.txt +++ b/DEVNOTES.txt @@ -1,4 +1,4 @@ -Ignite Fabric Maven Build Instructions +Apache Ignite Maven Build Instructions ====================================== 1) Optional: build Apache Ignite.NET as described at modules/platforms/dotnet/DEVNOTES.txt. @@ -15,14 +15,14 @@ Ignite Fabric Maven Build Instructions mvn initialize -Pjavadoc -4) Assembly Apache Ignite fabric: +4) Assembly Apache Ignite: mvn initialize -Prelease -Look for apache-ignite-fabric--bin.zip in ./target/bin directory. +Look for apache-ignite--bin.zip in ./target/bin directory. -Ignite Fabric with LGPL Maven Build Instructions +Apache Ignite with LGPL Maven Build Instructions ================================================ 1) Optional: build Apache Ignite.NET as described at modules/platforms/dotnet/DEVNOTES.txt. @@ -39,11 +39,11 @@ Ignite Fabric with LGPL Maven Build Instructions mvn initialize -Pjavadoc,lgpl -4) Assembly Apache Ignite fabric with LGPL dependencies: +4) Assembly Apache Ignite with LGPL dependencies: - mvn initialize -Prelease,lgpl -Dignite.edition=fabric-lgpl + mvn initialize -Prelease,lgpl -Dignite.edition=apache-ignite-lgpl - Look for apache-ignite-fabric-lgpl--bin.zip in ./target/bin directory. + Look for apache-ignite-lgpl--bin.zip in ./target/bin directory. Ignite Hadoop Accelerator Maven Build Instructions @@ -60,7 +60,7 @@ Ignite Hadoop Accelerator Maven Build Instructions 2) Assembly Hadoop Accelerator: - mvn initialize -Prelease -Dignite.edition=hadoop + mvn initialize -Prelease -Dignite.edition=apache-ignite-hadoop Look for apache-ignite-hadoop--bin.zip in ./target/bin directory. Resulting binary assembly will also include integration module for Apache Spark. @@ -147,26 +147,21 @@ Ignite Release Instructions 3) Deploy Ignite release candidate to maven repository and dev-svn, make tag: - 3.1) Deploy Ignite to maven repository, prepares sources and fabric edition binaries. 
+ 3.1) Deploy Ignite to maven repository, prepares sources and binaries. - mvn deploy -Papache-release,gpg,all-java,all-scala,licenses,deploy-ignite-site -Dignite.edition=fabric -DskipTests + mvn deploy -Papache-release,gpg,all-java,all-scala,licenses,deploy-ignite-site -Dignite.edition=apache-ignite -DskipTests 3.2) Javadoc generation: mvn initialize -Pjavadoc - 3.3) Assembly Apache Ignite Fabric: + 3.3) Assembly Apache Ignite: mvn initialize -Prelease 3.4) Assembly Hadoop Accelerator: - mvn initialize -Prelease -Dignite.edition=hadoop - - NOTE: Binary artifact name can be changed by setting additional property -Dignite.zip.pattern. Binary artifact will be - created inside /target/bin folder when release profile is used. - - NOTE: Sources artifact name is fixed. Sources artifact will be created inside /target dir when apache-release profile is used. + mvn initialize -Prelease -Dignite.edition=apache-ignite-hadoop NOTE: Nexus staging (repository.apache.org) should be closed with appropriate comment contains release version and release candidate number, for example "Apache Ignite 1.0.0-rc7", when mvn deploy finished. diff --git a/NOTICE b/NOTICE index 4c99a05109185..f98670a851d1b 100644 --- a/NOTICE +++ b/NOTICE @@ -1,5 +1,5 @@ Apache Ignite -Copyright 2018 The Apache Software Foundation +Copyright 2020 The Apache Software Foundation This product includes software developed at The Apache Software Foundation (http://www.apache.org/). diff --git a/README.txt b/README.txt index 7cfdaad01ed50..2e6add5125ea3 100644 --- a/README.txt +++ b/README.txt @@ -1,5 +1,5 @@ -Apache Ignite In-Memory Data Fabric -=================================== +Apache Ignite In-Memory Database and Caching Platform +===================================================== Ignite is a memory-centric distributed database, caching, and processing platform for transactional, analytical, and streaming workloads delivering in-memory speeds at petabyte scale. 
diff --git a/RELEASE_NOTES.txt b/RELEASE_NOTES.txt index d4d3bf0514ce5..9e28404eb2145 100644 --- a/RELEASE_NOTES.txt +++ b/RELEASE_NOTES.txt @@ -1,8 +1,160 @@ Apache Ignite Release Notes =========================== -Apache Ignite In-Memory Data Fabric 2.6 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 2.7 +--------------------------------------------------------- + +Ignite: +* Added experimental support for multi-version concurrency control with snapshot isolation + - available for both cache API and SQL + - use CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT to enable it + - not production ready, data consistency is not guaranteed in case of node failures +* Implemented Transparent Data Encryption based on JKS certificates +* Implemented Node.JS Thin Client +* Implemented Python Thin Client +* Implemented PHP Thin Client +* Ignite start scripts now support Java 9 and higher +* Added ability to set WAL history size in bytes +* Added SslContextFactory.protocols and SslContextFactory.cipherSuites properties to control which SSL encryption algorithms can be used +* Added JCache 1.1 compliance +* Added IgniteCompute.withNoResultCache method with semantics similar to ComputeTaskNoResultCache annotation +* Spring Data 2.0 is now supported in the separate module 'ignite-spring-data_2.0' +* Added monitoring of critical system workers +* Added ability to provide custom implementations of ExceptionListener for JmsStreamer +* Ignite KafkaStreamer was upgraded to use new KafkaConsumer configuration +* S3 IP Finder now supports subfolder usage instead of bucket root +* Improved dynamic cache start speed +* Improved checkpoint performance by decreasing mark duration. +* Added ability to manage compression level for compressed WAL archives. +* Added metrics for Entry Processor invocations.
+* Added JMX metrics: ClusterMetricsMXBean.getTotalBaselineNodes and ClusterMetricsMXBean.getActiveBaselineNodes +* Node uptime metric now includes days count +* Exposed info about thin client connections through JMX +* Introduced new system property IGNITE_REUSE_MEMORY_ON_DEACTIVATE to enable reuse of allocated memory on node deactivation (disabled by default) +* Optimistic transactions will now be properly rolled back if waiting too long for a new topology on remap +* ScanQuery with setLocal flag now checks if the partition is actually present on local node +* Improved cluster behaviour when a left node does not cause partition affinity assignment changes +* Interrupting user thread during partition initialization will no longer cause the node to stop +* Fixed problem when partition lost event was not triggered if multiple nodes left cluster +* Fixed massive node drop from the cluster on temporary network issues +* Fixed service redeployment on cluster reactivation +* Fixed client node stability under ZooKeeper discovery +* Massive performance and stability improvements + +Ignite .Net: +* Added .NET Core 2.1 support +* Added thin client connection failover + +Ignite C++: +* Implemented Thin Client with base cache operations +* Implemented smart affinity routing for Thin Client to send requests directly to nodes containing data when possible +* Added Clang compiler support + +SQL: +* Added experimental support for fully ACID transactional SQL with snapshot isolation: + - use CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT to enable it + - a transaction can be started through native API (IgniteTransactions), thin JDBC driver or ODBC driver + - not production ready, data consistency is not guaranteed in case of node failures +* Added a set of system views located in "IGNITE" schema to view cluster information (NODES, NODE_ATTRIBUTES, NODE_METRICS, BASELINE_NODES) +* Added ability to create predefined SQL schemas +* Added GROUP_CONCAT function support +* Added string length
constraint +* Custom Java objects are now inlined into primary and secondary indexes, which may significantly improve performance when AFFINITY_KEY is used +* Added timeout to fail query execution in case it cannot be mapped to topology +* Restricted number of cores allocated for CREATE INDEX by default to 4 to avoid contention on index tree +* Fixed transaction hanging during runtime error on commit. +* Fixed possible memory leak when result set size is multiple of the page size +* Fixed situation when data may be returned from cache partitions in LOST state even when PartitionLossPolicy doesn't permit it +* Fixed "Caches have distinct sets of data nodes" during SQL JOIN query execution between REPLICATED and PARTITIONED caches +* Fixed wrong result for SQL queries when item size exceeds the page size +* Fixed error during SQL query from client node with the local flag set to "true" +* Fixed handling UUID as a column type + +JDBC: +* Implemented DataSource interface for the thin driver + +ODBC: +* Added streaming mode support +* Fixed crash in Linux when there are more than 1023 open file descriptors +* Fixed bug that prevented cursors on a server from being closed +* Fixed segmentation fault when reusing a closed connection + +Web Console: +* Added new metrics: WAL and Data size on disk +* Added support for "collocated" query mode on Query screen +* Added support for Java 9+ for Web Agent.
+* Added ability to show/hide password field value +* Implemented execution of selected part of SQL query +* Implemented explain of the selected part of SQL query +* Implemented connection to a secured cluster +* Implemented responsive full-screen layout +* Split "Sign In" page into three separate pages +* UI updated to modern look and feel +* Improved backend stability +* Fixed failure when working with web sockets + +REST: +* Added option IGNITE_REST_GETALL_AS_ARRAY for array format in "getAll" call + +Visor: +* Added output of node "Consistent ID" +* Visor now collects information about cache groups instead of separate caches to reduce memory consumption +* Improved help for "start" command +* Fixed output of cache metrics + +Control utility: +* Added information about transaction start time +* Added command to collect information about a distribution of partitions +* Added command to reset lost partitions +* Added support for empty label (control.sh --tx label null) +* Added atomicity mode to utility output. +* Added orphaned local and remote transactions and ability to roll them back +* Added "--dump" flag to dump current partition state to file.
+* Renamed command argument '--force' to '--yes' +* Removed "initOrder" and "loc keys" from the info output +* Fixed control utility hanging when connected to a joining node with PME + +ML: +* Added TensorFlow integration +* Added Estimator API support to TensorFlow cluster on top of Apache Ignite +* Added ANN algorithm based on ACD concept +* Added Random Forest algorithm +* Added OneHotEncoder for categorical features +* Added model estimation +* Added K-fold cross-validation for ML models +* Added splitter for splitting the dataset into test and train subsets +* Added ability to filter data during dataset creation +* Added encoding categorical features with One-of-K Encoder +* Added MinMax scaler preprocessor +* Added gradient boosting for trees +* Added indexing for decision trees +* Added GDB convergence by error support +* Added ability to build pipeline of data preprocessing and model training +* Added ability to start and maintain TensorFlow cluster on top of Apache Ignite +* Added Multi-Class support for Logistic Regression +* Implemented distributed binary logistic regression + +Dependency updates: +* Apache Camel updated to 2.22.0 +* Apache Commons Beanutils updated to 1.9.3 +* Apache Hadoop Yarn updated to 2.7.7 +* Apache Kafka updated to 1.1.0 +* Apache Lucene updated to 7.4.0 +* Apache Mesos updated to 1.5.0 +* Apache Tomcat updated to 9.0.10 +* Apache Zookeeper updated to 3.4.13 +* Guava updated to 25.1-jre +* Jackson Databind updated to 2.9.6 +* Jackson 1 usages replaced with Jackson 2 +* JCraft updated to 0.1.54 +* H2 version updated to 1.4.197 +* Log4j 2.x updated to 2.11.0 +* Netty updated to 4.1.27.Final +* RocketMQ updated to 4.3.0 +* Scala 2.10.x updated to 2.10.7 +* Scala 2.11.x updated to 2.11.12 + +Apache Ignite In-Memory Database and Caching Platform 2.6 +--------------------------------------------------------- Ignite: * Fixed incorrect calculation of client affinity assignment with baseline.
* Fixed incorrect calculation of switch segment record in WAL. @@ -11,8 +163,8 @@ Ignite: REST: * Fixed serialization of BinaryObjects to JSON. -Apache Ignite In-Memory Data Fabric 2.5 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 2.5 +--------------------------------------------------------- Ignite: * Implemented Zookeeper discovery SPI. * Added Java thin client. @@ -127,8 +279,8 @@ ML: * Implemented Linear SVM for binary classification. * Implemented distributed version of SVM (support vector machine) algoritm. -Apache Ignite In-Memory Data Fabric 2.4 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 2.4 +--------------------------------------------------------- Ignite: * Introduced Baseline Affinity Topology * Ability to disable WAL for cache in runtime through IgniteCluster API or ALTER TABLE command @@ -221,8 +373,8 @@ Visor: * Fixed reading last command line in batch mode * Updated eviction policy factory in configs -Apache Ignite In-Memory Data Fabric 2.3 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 2.3 +--------------------------------------------------------- Ignite: * Ability to enable persistence per data region. * Default page size is changed to 4KB. @@ -319,8 +471,8 @@ Visor: * Added missing configuration properties to "config" command. * Fixed script execution after alert throttling interval. 
-Apache Ignite In-Memory Data Fabric 2.2 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 2.2 +--------------------------------------------------------- Ignite: * Checkpointing algorithm optimized * Default max memory size changed from 80% to 20% @@ -329,8 +481,8 @@ Ignite CPP: * Now possible to start node with persistent store * Ignite.setActive method added to C++ API -Apache Ignite In-Memory Data Fabric 2.1 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 2.1 +--------------------------------------------------------- Ignite: * Persistent cache store * Added IgniteFuture.listenAsync() and IgniteFuture.chainAsync() mehtods @@ -362,8 +514,8 @@ Web Console: * Added option to show full stack trace on Queries screen * Added PK alias generation on Models screen. -Apache Ignite In-Memory Data Fabric 2.0 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 2.0 +--------------------------------------------------------- Ignite: * Introduced new page memory architecture * Machine Learning beta: distributed algebra support for dense and sparse data sets @@ -398,8 +550,8 @@ Web Console: * Possibility to configure Kubernetes IP finder * EnforceJoinOrder option on Queries screen -Apache Ignite In-Memory Data Fabric 1.9 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.9 +--------------------------------------------------------- Ignite: * Added Data streamer mode for DML * Added Discovery SPI Implementation for Ignite Kubernetes Pods @@ -418,8 +570,8 @@ Ignite CPP: * Implemented LoadCache * ContinuousQuery support -Apache Ignite In-Memory Data Fabric 1.8 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.8 +--------------------------------------------------------- Ignite: * SQL: Added DML operations support (INSERT, UPDATE, DELETE, MERGE) * SQL: 
Improved DISTINCT keyword handling in aggregates @@ -440,8 +592,8 @@ ODBC driver: * Added DSN support * Performance improvements -Apache Ignite In-Memory Data Fabric 1.7 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.7 +--------------------------------------------------------- Ignite: * Added distributed SQL JOIN. * Node can be assigned as primary only after preloading is finished. @@ -459,8 +611,8 @@ Ignite.NET: Ignite CPP: * Marshalling performance improvements. -Apache Ignite In-Memory Data Fabric 1.6 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.6 +--------------------------------------------------------- Ignite .NET: * Added LINQ Provider for cache SQL queries * Added native configuration mechanism (C#, app.config, web.config - instead of Spring XML) @@ -548,8 +700,8 @@ Ignite: * Web sessions: user session classes are no longer needed on server nodes. * A lot of stability and fault-tolerance fixes. -Apache Ignite In-Memory Data Fabric 1.5 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.5 +--------------------------------------------------------- * Ignite.NET: Initial Release. * Ignite C++: Initial Release. * Massive performance improvements for cache operations and SQL. @@ -571,8 +723,8 @@ Apache Ignite In-Memory Data Fabric 1.5 Complete list of closed issues: https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%20fixVersion%20%3D%201.5%20AND%20status%20%3D%20closed -Apache Ignite In-Memory Data Fabric 1.4 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.4 +--------------------------------------------------------- * Added SSL support to communication and discovery. * Added support for log4j2. * Added versioned entry to cache API. @@ -589,8 +741,8 @@ Apache Ignite In-Memory Data Fabric 1.4 * Fixed affinity routing in compute grid. 
* Many stability and fault-tolerance fixes. -Apache Ignite In-Memory Data Fabric 1.3 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.3 +--------------------------------------------------------- * Added auto-retries for cache operations in recoverable cases. * Added integration with Apache YARN. @@ -603,8 +755,8 @@ Apache Ignite In-Memory Data Fabric 1.3 * Bug fixes in In-Memory Accelerator For Apache Hadoop. * Many stability and fault-tolerance fixes. -Apache Ignite In-Memory Data Fabric 1.2 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.2 +--------------------------------------------------------- * Added client mode to TCP discovery SPI. * Added memory based evictions. @@ -615,8 +767,8 @@ Apache Ignite In-Memory Data Fabric 1.2 * Bug fixes in In-Memory Accelerator For Apache Hadoop. * Many stability and fault-tolerance fixes. -Apache Ignite In-Memory Data Fabric 1.1 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.1 +--------------------------------------------------------- * Added Google Compute Engine TCP discovery IP finder. * Added generic cloud TCP discovery IP finder (based on jclouds). @@ -635,8 +787,8 @@ Apache Ignite In-Memory Data Fabric 1.1 * Made deployment scanners for URI-based deployment pluggable. * Many stability and fault-tolerance fixes. -Apache Ignite In-Memory Data Fabric 1.0 ---------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.0 +--------------------------------------------------------- * Simplified query API. * Added automatic aggregation, grouping, and sorting support to SQL queries. @@ -651,14 +803,14 @@ Apache Ignite In-Memory Data Fabric 1.0 * Added ability to automatically exclude LGPL optional dependencies during build. 
-Apache Ignite In-Memory Data Fabric 1.0 RC3 -------------------------------------------- +Apache Ignite In-Memory Database and Caching Platform 1.0 RC3 +------------------------------------------------------------- This is the first release of Apache Ignite project. The source code in large part is based -on the 7 year old GridGain In-Memory Data Fabric, open source edition, v. 6.6.2, which was +on the 7 year old GridGain In-Memory Database and Caching Platform, open source edition, v. 6.6.2, which was donated to Apache Software Foundation in September 2014. -The main feature set of Ignite In-Memory Data Fabric includes: +The main feature set of Ignite In-Memory Database and Caching Platform includes: * Advanced Clustering * Compute Grid * Data Grid diff --git a/assembly/LICENSE_FABRIC b/assembly/LICENSE_IGNITE similarity index 100% rename from assembly/LICENSE_FABRIC rename to assembly/LICENSE_IGNITE diff --git a/assembly/NOTICE_HADOOP b/assembly/NOTICE_HADOOP index 4c99a05109185..f98670a851d1b 100644 --- a/assembly/NOTICE_HADOOP +++ b/assembly/NOTICE_HADOOP @@ -1,5 +1,5 @@ Apache Ignite -Copyright 2018 The Apache Software Foundation +Copyright 2020 The Apache Software Foundation This product includes software developed at The Apache Software Foundation (http://www.apache.org/). diff --git a/assembly/NOTICE_FABRIC b/assembly/NOTICE_IGNITE similarity index 89% rename from assembly/NOTICE_FABRIC rename to assembly/NOTICE_IGNITE index 964fd359ad713..6601807baffad 100644 --- a/assembly/NOTICE_FABRIC +++ b/assembly/NOTICE_IGNITE @@ -1,5 +1,5 @@ Apache Ignite -Copyright 2018 The Apache Software Foundation +Copyright 2020 The Apache Software Foundation This product includes software developed at The Apache Software Foundation (http://www.apache.org/). 
diff --git a/assembly/dependencies-hadoop.xml b/assembly/dependencies-apache-ignite-hadoop.xml similarity index 100% rename from assembly/dependencies-hadoop.xml rename to assembly/dependencies-apache-ignite-hadoop.xml diff --git a/assembly/dependencies-fabric-lgpl.xml b/assembly/dependencies-apache-ignite-lgpl.xml similarity index 99% rename from assembly/dependencies-fabric-lgpl.xml rename to assembly/dependencies-apache-ignite-lgpl.xml index fe2932e6db725..18efac94d4fff 100644 --- a/assembly/dependencies-fabric-lgpl.xml +++ b/assembly/dependencies-apache-ignite-lgpl.xml @@ -137,6 +137,7 @@ org.apache.ignite:ignite-extdata-platform org.apache.ignite:ignite-compatibility org.apache.ignite:ignite-sqlline + org.apache.ignite:ignite-h2 true diff --git a/assembly/dependencies-fabric.xml b/assembly/dependencies-apache-ignite.xml similarity index 99% rename from assembly/dependencies-fabric.xml rename to assembly/dependencies-apache-ignite.xml index 3bcae044e2a4e..c2546e5a942da 100644 --- a/assembly/dependencies-fabric.xml +++ b/assembly/dependencies-apache-ignite.xml @@ -142,6 +142,7 @@ org.apache.ignite:ignite-extdata-platform org.apache.ignite:ignite-compatibility org.apache.ignite:ignite-sqlline + org.apache.ignite:ignite-h2 true diff --git a/assembly/libs/README.txt b/assembly/libs/README.txt index 8e5f5194653f6..1365cc12f5d0d 100644 --- a/assembly/libs/README.txt +++ b/assembly/libs/README.txt @@ -22,10 +22,10 @@ Importing Ignite Dependencies In Maven Project If you are using Maven to manage dependencies of your project, there are two options: -1. Import fabric edition: - - ignite-fabric (all inclusive) +1. Import: + - apache-ignite (all inclusive) -Here is how 'ignite-fabric' can be added to your POM file (replace '${ignite.version}' +Here is how 'apache-ignite' can be added to your POM file (replace '${ignite.version}' with actual Ignite version you are interested in): org.apache.ignite - ignite-fabric + apache-ignite ${ignite.version} ... 
diff --git a/assembly/release-fabric-base.xml b/assembly/release-apache-ignite-base.xml similarity index 77% rename from assembly/release-fabric-base.xml rename to assembly/release-apache-ignite-base.xml index efc643e955d8f..91baf7327ae68 100644 --- a/assembly/release-fabric-base.xml +++ b/assembly/release-apache-ignite-base.xml @@ -47,15 +47,53 @@ Makefile.am + + + modules/platforms/nodejs/index.js + /platforms/nodejs + + + + modules/platforms/nodejs/package.json + /platforms/nodejs + + + + modules/platforms/nodejs/README.md + /platforms/nodejs + + + + + modules/platforms/php/composer.json + /platforms/php + + + + + modules/platforms/python/LICENSE + /platforms/python + + + + modules/platforms/python/README.md + /platforms/python + + + + modules/platforms/python/setup.py + /platforms/python + + - assembly/LICENSE_FABRIC + assembly/LICENSE_IGNITE LICENSE / - assembly/NOTICE_FABRIC + assembly/NOTICE_IGNITE NOTICE / @@ -192,6 +230,39 @@ /platforms/cpp/bin + + + modules/platforms/nodejs/lib + /platforms/nodejs/lib + + + + modules/platforms/nodejs/examples + /platforms/nodejs/examples + + + + + modules/platforms/php/src + /platforms/php/src + + + + modules/platforms/php/examples + /platforms/php/examples + + + + + modules/platforms/python/pyignite + /platforms/python/pyignite + + + + modules/platforms/python/requirements + /platforms/python/requirements + + bin diff --git a/assembly/release-hadoop.xml b/assembly/release-apache-ignite-hadoop.xml similarity index 100% rename from assembly/release-hadoop.xml rename to assembly/release-apache-ignite-hadoop.xml diff --git a/assembly/release-fabric-lgpl.xml b/assembly/release-apache-ignite-lgpl.xml similarity index 94% rename from assembly/release-fabric-lgpl.xml rename to assembly/release-apache-ignite-lgpl.xml index ff4d8c46952aa..8777ea3efba31 100644 --- a/assembly/release-fabric-lgpl.xml +++ b/assembly/release-apache-ignite-lgpl.xml @@ -21,7 +21,7 @@ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd"> - fabric-lgpl + apache-ignite-lgpl false @@ -31,7 +31,7 @@ release-base.xml - release-fabric-base.xml + release-apache-ignite-base.xml release-yardstick.xml diff --git a/assembly/release-fabric.xml b/assembly/release-apache-ignite.xml similarity index 94% rename from assembly/release-fabric.xml rename to assembly/release-apache-ignite.xml index 7536d4ec5118e..a8159167ac810 100644 --- a/assembly/release-fabric.xml +++ b/assembly/release-apache-ignite.xml @@ -21,7 +21,7 @@ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd"> - fabric + apache-ignite false @@ -31,7 +31,7 @@ release-base.xml - release-fabric-base.xml + release-apache-ignite-base.xml release-yardstick.xml diff --git a/assembly/release-base.xml b/assembly/release-base.xml index df8598f5453fe..e5e324aa43995 100644 --- a/assembly/release-base.xml +++ b/assembly/release-base.xml @@ -51,6 +51,11 @@ config/java.util.logging.properties /config + + + modules/h2/target/ignite-h2-${project.version}.jar + /libs/ignite-indexing + diff --git a/assembly/release-sources.xml b/assembly/release-sources.xml deleted file mode 100644 index cc33f3e837197..0000000000000 --- a/assembly/release-sources.xml +++ /dev/null @@ -1,80 +0,0 @@ - - - - - - source-release - - zip - - - - - - . 
- / - true - - - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/).*${project.build.directory}/.*] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/).*${project.build.directory}] - - - - - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?maven-eclipse\.xml] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?\.project] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?\.classpath] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?[^/]*\.iws] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?\.idea(/.*)?] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?out(/.*)?] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?[^/]*\.ipr] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?[^/]*\.iml] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?\.settings(/.*)?] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?\.externalToolBuilders(/.*)?] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?\.deployables(/.*)?] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?\.wtpmodules(/.*)?] - - - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?cobertura\.ser] - - - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?pom\.xml\.releaseBackup] - %regex[(?!((?!${project.build.directory}/)[^/]+/)*src/)(.*/)?release\.properties] - - - modules/platforms/dotnet/bin/** - - - - - ${project.build.directory}/maven-shared-archive-resources/META-INF - / - - - diff --git a/bin/control.bat b/bin/control.bat index 6b36a923d8105..fb5f11a352752 100644 --- a/bin/control.bat +++ b/bin/control.bat @@ -28,7 +28,7 @@ if "%OS%" == "Windows_NT" setlocal if defined JAVA_HOME goto checkJdk echo %0, ERROR: echo JAVA_HOME environment variable is not found. - echo Please point JAVA_HOME variable to location of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to location of JDK 1.8 or later. 
echo You can also download latest JDK at http://java.com/download. goto error_finish @@ -37,7 +37,7 @@ goto error_finish if exist "%JAVA_HOME%\bin\java.exe" goto checkJdkVersion echo %0, ERROR: echo JAVA is not found in JAVA_HOME=%JAVA_HOME%. - echo Please point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to installation of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. goto error_finish @@ -49,25 +49,18 @@ for /f "tokens=* USEBACKQ" %%f in (`%cmd% -version 2^>^&1`) do ( ) :LoopEscape -set var=%var:~14% -set var=%var:"=% -for /f "tokens=1,2 delims=." %%a in ("%var%") do set MAJOR_JAVA_VER=%%a & set MINOR_JAVA_VER=%%b +for /f "tokens=1-3 delims= " %%a in ("%var%") do set JAVA_VER_STR=%%c +set JAVA_VER_STR=%JAVA_VER_STR:"=% +for /f "tokens=1,2 delims=." %%a in ("%JAVA_VER_STR%.x") do set MAJOR_JAVA_VER=%%a& set MINOR_JAVA_VER=%%b if %MAJOR_JAVA_VER% == 1 set MAJOR_JAVA_VER=%MINOR_JAVA_VER% if %MAJOR_JAVA_VER% LSS 8 ( echo %0, ERROR: echo The version of JAVA installed in %JAVA_HOME% is incorrect. - echo Please point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. - echo You can also download latest JDK at http://java.com/download. - goto error_finish -) - -if %MAJOR_JAVA_VER% GTR 9 ( - echo %0, WARNING: - echo The version of JAVA installed in %JAVA_HOME% was not tested with Apache Ignite. - echo Run it on your own risk or point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to installation of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. + goto error_finish ) :: Check IGNITE_HOME. 
@@ -220,7 +213,37 @@ if "%MAIN_CLASS%" == "" set MAIN_CLASS=org.apache.ignite.internal.commandline.Co :: :: Final JVM_OPTS for Java 9+ compatibility :: -if %MAJOR_JAVA_VER% GEQ 9 set JVM_OPTS=--add-exports java.base/jdk.internal.misc=ALL-UNNAMED --add-exports java.base/sun.nio.ch=ALL-UNNAMED --add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-exports jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED --add-modules java.xml.bind %JVM_OPTS% +if %MAJOR_JAVA_VER% == 8 ( + set JVM_OPTS= ^ + -XX:+AggressiveOpts ^ + %JVM_OPTS% +) + +if %MAJOR_JAVA_VER% GEQ 9 if %MAJOR_JAVA_VER% LSS 11 ( + set JVM_OPTS= ^ + -XX:+AggressiveOpts ^ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED ^ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED ^ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED ^ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED ^ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED ^ + --illegal-access=permit ^ + --add-modules=java.transaction ^ + --add-modules=java.xml.bind ^ + %JVM_OPTS% +) + +if %MAJOR_JAVA_VER% == 11 ( + set JVM_OPTS= ^ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED ^ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED ^ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED ^ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED ^ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED ^ + --illegal-access=permit ^ + -Djdk.tls.client.protocols=TLSv1.2 ^ + %JVM_OPTS% +) if "%INTERACTIVE%" == "1" ( "%JAVA_HOME%\bin\java.exe" %JVM_OPTS% %QUIET% %RESTART_SUCCESS_OPT% %JMX_MON% ^ diff --git a/bin/control.sh b/bin/control.sh index 7f84696831c50..07193675b789f 100755 --- a/bin/control.sh +++ b/bin/control.sh @@ -1,4 +1,10 @@ -#!/bin/bash +#!/usr/bin/env bash +set -o nounset +set -o errexit +set -o pipefail +set -o errtrace +set -o functrace + # # Licensed to the Apache Software Foundation (ASF) under one or more # 
contributor license agreements. See the NOTICE file distributed with @@ -23,7 +29,7 @@ # # Import common functions. # -if [ "${IGNITE_HOME}" = "" ]; +if [ "${IGNITE_HOME:-}" = "" ]; then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")"; else IGNITE_HOME_TMP=${IGNITE_HOME}; fi @@ -45,7 +51,7 @@ checkJava # setIgniteHome -if [ "${DEFAULT_CONFIG}" == "" ]; then +if [ "${DEFAULT_CONFIG:-}" == "" ]; then DEFAULT_CONFIG=config/default-config.xml fi @@ -68,14 +74,14 @@ RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}" # # This is executed when -nojmx is not specified # -if [ "${NOJMX}" == "0" ] ; then +if [ "${NOJMX:-}" == "0" ] ; then findAvailableJmxPort fi # Mac OS specific support to display correct name in the dock. osname=`uname` -if [ "${DOCK_OPTS}" == "" ]; then +if [ "${DOCK_OPTS:-}" == "" ]; then DOCK_OPTS="-Xdock:name=Ignite Node" fi @@ -84,7 +90,7 @@ fi # # ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE # -if [ -z "$JVM_OPTS" ] ; then +if [ -z "${JVM_OPTS:-}" ] ; then if [[ `"$JAVA" -version 2>&1 | egrep "1\.[7]\."` ]]; then JVM_OPTS="-Xms256m -Xmx1g" else @@ -122,14 +128,14 @@ ENABLE_ASSERTIONS="1" # # Set '-ea' options if assertions are enabled. # -if [ "${ENABLE_ASSERTIONS}" = "1" ]; then +if [ "${ENABLE_ASSERTIONS:-}" = "1" ]; then JVM_OPTS="${JVM_OPTS} -ea" fi # # Set main class to start service (grid node by default). 
# -if [ "${MAIN_CLASS}" = "" ]; then +if [ "${MAIN_CLASS:-}" = "" ]; then MAIN_CLASS=org.apache.ignite.internal.commandline.CommandHandler fi @@ -140,45 +146,68 @@ fi # JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}" # -# Final JVM_OPTS for Java 9 compatibility -# -${JAVA_HOME}/bin/java -version 2>&1 | grep -qE 'java version "9.*"' && { -JVM_OPTS="--add-exports java.base/jdk.internal.misc=ALL-UNNAMED \ - --add-exports java.base/sun.nio.ch=ALL-UNNAMED \ - --add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ - --add-exports jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ - --add-modules java.xml.bind \ - ${JVM_OPTS}" -} || true +# Final JVM_OPTS for Java 9+ compatibility +# +javaMajorVersion "${JAVA_HOME}/bin/java" + +if [ $version -eq 8 ] ; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + ${JVM_OPTS}" + +elif [ $version -gt 8 ] && [ $version -lt 11 ]; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + --add-modules=java.transaction \ + --add-modules=java.xml.bind \ + ${JVM_OPTS}" + +elif [ $version -eq 11 ] ; then + JVM_OPTS="\ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + -Djdk.tls.client.protocols=TLSv1.2 \ + ${JVM_OPTS}" +fi ERRORCODE="-1" while [ "${ERRORCODE}" -ne "130" ] do - if [ "${INTERACTIVE}" == "1" ] ; then + if [ "${INTERACTIVE:-}" == "1" ] ; 
then case $osname in Darwin*) - "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \ + "$JAVA" ${JVM_OPTS} ${QUIET:-} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON:-} \ -DIGNITE_UPDATE_NOTIFIER=false -DIGNITE_HOME="${IGNITE_HOME}" \ - -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} $@ + -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS:-} -cp "${CP}" ${MAIN_CLASS} $@ ;; *) - "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \ + "$JAVA" ${JVM_OPTS} ${QUIET:-} "${RESTART_SUCCESS_OPT}" ${JMX_MON:-} \ -DIGNITE_UPDATE_NOTIFIER=false -DIGNITE_HOME="${IGNITE_HOME}" \ - -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} $@ + -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS:-} -cp "${CP}" ${MAIN_CLASS} $@ ;; esac else case $osname in Darwin*) - "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \ + "$JAVA" ${JVM_OPTS} ${QUIET:-} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON:-} \ -DIGNITE_UPDATE_NOTIFIER=false -DIGNITE_HOME="${IGNITE_HOME}" \ - -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} $@ + -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS:-} -cp "${CP}" ${MAIN_CLASS} $@ ;; *) - "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \ + "$JAVA" ${JVM_OPTS} ${QUIET:-} "${RESTART_SUCCESS_OPT}" ${JMX_MON:-} \ -DIGNITE_UPDATE_NOTIFIER=false -DIGNITE_HOME="${IGNITE_HOME}" \ - -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} $@ + -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS:-} -cp "${CP}" ${MAIN_CLASS} $@ ;; esac fi diff --git a/bin/ignite-tf.sh b/bin/ignite-tf.sh new file mode 100644 index 0000000000000..7b4d0d2fc30b3 --- /dev/null +++ b/bin/ignite-tf.sh @@ -0,0 +1,190 @@ +#!/usr/bin/env bash +set -o nounset +set -o errexit +set -o pipefail +set -o errtrace +set -o functrace + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# +# Command line tool for Tensorflow cluster management. +# + +# +# Import common functions. +# +if [ "${IGNITE_HOME:-}" = "" ]; + then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")"; + else IGNITE_HOME_TMP=${IGNITE_HOME}; +fi + +# +# Set SCRIPTS_HOME - base path to scripts. +# +SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin" + +source "${SCRIPTS_HOME}"/include/functions.sh + +# +# Discover path to Java executable and check it's version. +# +checkJava + +# +# Discover IGNITE_HOME environment variable. +# +setIgniteHome + +if [ "${DEFAULT_CONFIG:-}" == "" ]; then + DEFAULT_CONFIG=config/default-config.xml +fi + +# +# Set IGNITE_LIBS. +# +. "${SCRIPTS_HOME}"/include/setenv.sh +. "${SCRIPTS_HOME}"/include/build-classpath.sh # Will be removed in the binary release. +IGNITE_OPT_LIBS=${IGNITE_HOME}/libs/optional/ +CP="${IGNITE_LIBS}:${IGNITE_OPT_LIBS}/ignite-tensorflow/*:${IGNITE_OPT_LIBS}/ignite-slf4j/*" + +RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator) + +RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}" +RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}" + +# Mac OS specific support to display correct name in the dock. 
+osname=`uname` + +if [ "${DOCK_OPTS:-}" == "" ]; then + DOCK_OPTS="-Xdock:name=Ignite Node" +fi + +# +# JVM options. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp for more details. +# +# ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE +# +if [ -z "${JVM_OPTS:-}" ] ; then + JVM_OPTS="-Xms1g -Xmx1g -server -XX:MaxMetaspaceSize=256m" +fi + +# +# Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection. +# +# JVM_OPTS="$JVM_OPTS -XX:+UseG1GC" + +# +# Uncomment if you get StackOverflowError. +# On 64 bit systems this value can be larger, e.g. -Xss16m +# +# JVM_OPTS="${JVM_OPTS} -Xss4m" + +# +# Uncomment to set preference for IPv4 stack. +# +# JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true" + +# +# Assertions are disabled by default since version 3.5. +# If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'. +# +ENABLE_ASSERTIONS="1" + +# +# Set '-ea' options if assertions are enabled. +# +if [ "${ENABLE_ASSERTIONS:-}" = "1" ]; then + JVM_OPTS="${JVM_OPTS} -ea" +fi + +MAIN_CLASS=org.apache.ignite.tensorflow.submitter.JobSubmitter + +# +# Remote debugging (JPDA). +# Uncomment and change if remote debugging is required. 
+# +# JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}" + +# +# Final JVM_OPTS for Java 9+ compatibility +# +javaMajorVersion "${JAVA_HOME}/bin/java" + +if [ $version -eq 8 ] ; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + ${JVM_OPTS}" + +elif [ $version -gt 8 ] && [ $version -lt 11 ]; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + --add-modules=java.transaction \ + --add-modules=java.xml.bind \ + ${JVM_OPTS}" + +elif [ $version -eq 11 ] ; then + JVM_OPTS="\ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + -Djdk.tls.client.protocols=TLSv1.2 \ + ${JVM_OPTS}" +fi + + +ERRORCODE="-1" + +QUIET="-DIGNITE_QUIET=false" + +while [ "${ERRORCODE}" -ne "130" ] +do + case $osname in + Darwin*) + "$JAVA" ${JVM_OPTS:-} ${QUIET:-} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" \ + -DIGNITE_UPDATE_NOTIFIER=false -DIGNITE_HOME="${IGNITE_HOME}" \ + -DIGNITE_PROG_NAME="$0" -cp "${CP}" ${MAIN_CLASS} "${CONFIG:-}" "$@" + ;; + *) + "$JAVA" ${JVM_OPTS:-} ${QUIET:-} "${RESTART_SUCCESS_OPT}" \ + -DIGNITE_UPDATE_NOTIFIER=false -DIGNITE_HOME="${IGNITE_HOME}" \ + -DIGNITE_PROG_NAME="$0" -cp "${CP}" ${MAIN_CLASS} "$@" + ;; + esac + + ERRORCODE="$?" + + if [ ! 
-f "${RESTART_SUCCESS_FILE}" ] ; then + break + else + rm -f "${RESTART_SUCCESS_FILE}" + fi +done + +if [ -f "${RESTART_SUCCESS_FILE}" ] ; then + rm -f "${RESTART_SUCCESS_FILE}" +fi diff --git a/bin/ignite.bat b/bin/ignite.bat index 25c828fec9e04..915be598c9f34 100644 --- a/bin/ignite.bat +++ b/bin/ignite.bat @@ -28,7 +28,7 @@ if "%OS%" == "Windows_NT" setlocal if defined JAVA_HOME goto checkJdk echo %0, ERROR: echo JAVA_HOME environment variable is not found. - echo Please point JAVA_HOME variable to location of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to location of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. goto error_finish @@ -37,7 +37,7 @@ goto error_finish if exist "%JAVA_HOME%\bin\java.exe" goto checkJdkVersion echo %0, ERROR: echo JAVA is not found in JAVA_HOME=%JAVA_HOME%. - echo Please point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to installation of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. goto error_finish @@ -49,27 +49,20 @@ for /f "tokens=* USEBACKQ" %%f in (`%cmd% -version 2^>^&1`) do ( ) :LoopEscape -set var=%var:~14% -set var=%var:"=% -for /f "tokens=1,2 delims=." %%a in ("%var%") do set MAJOR_JAVA_VER=%%a & set MINOR_JAVA_VER=%%b +for /f "tokens=1-3 delims= " %%a in ("%var%") do set JAVA_VER_STR=%%c +set JAVA_VER_STR=%JAVA_VER_STR:"=% +for /f "tokens=1,2 delims=." %%a in ("%JAVA_VER_STR%.x") do set MAJOR_JAVA_VER=%%a& set MINOR_JAVA_VER=%%b if %MAJOR_JAVA_VER% == 1 set MAJOR_JAVA_VER=%MINOR_JAVA_VER% if %MAJOR_JAVA_VER% LSS 8 ( echo %0, ERROR: echo The version of JAVA installed in %JAVA_HOME% is incorrect. - echo Please point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to installation of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. 
goto error_finish ) -if %MAJOR_JAVA_VER% GTR 9 ( - echo %0, WARNING: - echo The version of JAVA installed in %JAVA_HOME% was not tested with Apache Ignite. - echo Run it on your own risk or point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. - echo You can also download latest JDK at http://java.com/download. -) - :: Check IGNITE_HOME. :checkIgniteHome1 if defined IGNITE_HOME goto checkIgniteHome2 @@ -181,9 +174,9 @@ if "%JMX_PORT%" == "" ( :: "%JAVA_HOME%\bin\java.exe" -version 2>&1 | findstr "1\.[7]\." > nul if %ERRORLEVEL% equ 0 ( - if "%JVM_OPTS%" == "" set JVM_OPTS=-Xms1g -Xmx1g -server -XX:+AggressiveOpts -XX:MaxPermSize=256m + if "%JVM_OPTS%" == "" set JVM_OPTS=-Xms1g -Xmx1g -server -XX:MaxPermSize=256m ) else ( - if "%JVM_OPTS%" == "" set JVM_OPTS=-Xms1g -Xmx1g -server -XX:+AggressiveOpts -XX:MaxMetaspaceSize=256m + if "%JVM_OPTS%" == "" set JVM_OPTS=-Xms1g -Xmx1g -server -XX:MaxMetaspaceSize=256m ) :: @@ -235,7 +228,37 @@ if "%MAIN_CLASS%" == "" set MAIN_CLASS=org.apache.ignite.startup.cmdline.Command :: :: Final JVM_OPTS for Java 9+ compatibility :: -if %MAJOR_JAVA_VER% GEQ 9 set JVM_OPTS=--add-exports java.base/jdk.internal.misc=ALL-UNNAMED --add-exports java.base/sun.nio.ch=ALL-UNNAMED --add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-exports jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED --add-modules java.xml.bind %JVM_OPTS% +if %MAJOR_JAVA_VER% == 8 ( + set JVM_OPTS= ^ + -XX:+AggressiveOpts ^ + %JVM_OPTS% +) + +if %MAJOR_JAVA_VER% GEQ 9 if %MAJOR_JAVA_VER% LSS 11 ( + set JVM_OPTS= ^ + -XX:+AggressiveOpts ^ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED ^ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED ^ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED ^ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED ^ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED ^ + --illegal-access=permit ^ + --add-modules=java.transaction ^ + 
--add-modules=java.xml.bind ^ + %JVM_OPTS% +) + +if %MAJOR_JAVA_VER% == 11 ( + set JVM_OPTS= ^ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED ^ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED ^ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED ^ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED ^ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED ^ + --illegal-access=permit ^ + -Djdk.tls.client.protocols=TLSv1.2 ^ + %JVM_OPTS% +) if "%INTERACTIVE%" == "1" ( "%JAVA_HOME%\bin\java.exe" %JVM_OPTS% %QUIET% %RESTART_SUCCESS_OPT% %JMX_MON% ^ diff --git a/bin/ignite.sh b/bin/ignite.sh index c7b7318190d09..f70880bb0a796 100755 --- a/bin/ignite.sh +++ b/bin/ignite.sh @@ -1,4 +1,10 @@ -#!/bin/bash +#!/usr/bin/env bash +set -o nounset +set -o errexit +set -o pipefail +set -o errtrace +set -o functrace + # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with @@ -23,7 +29,7 @@ # # Import common functions. # -if [ "${IGNITE_HOME}" = "" ]; +if [ "${IGNITE_HOME:-}" = "" ]; then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")"; else IGNITE_HOME_TMP=${IGNITE_HOME}; fi @@ -45,7 +51,7 @@ checkJava # setIgniteHome -if [ "${DEFAULT_CONFIG}" == "" ]; then +if [ "${DEFAULT_CONFIG:-}" == "" ]; then DEFAULT_CONFIG=config/default-config.xml fi @@ -80,7 +86,7 @@ fi # Mac OS specific support to display correct name in the dock. osname=`uname` -if [ "${DOCK_OPTS}" == "" ]; then +if [ "${DOCK_OPTS:-}" == "" ]; then DOCK_OPTS="-Xdock:name=Ignite Node" fi @@ -90,7 +96,7 @@ fi # ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE # if [ -z "$JVM_OPTS" ] ; then - JVM_OPTS="-Xms1g -Xmx1g -server -XX:+AggressiveOpts -XX:MaxMetaspaceSize=256m" + JVM_OPTS="-Xms1g -Xmx1g -server -XX:MaxMetaspaceSize=256m" fi # @@ -134,7 +140,7 @@ fi # # Set main class to start service (grid node by default). 
# -if [ "${MAIN_CLASS}" = "" ]; then +if [ "${MAIN_CLASS:-}" = "" ]; then MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup fi @@ -147,17 +153,39 @@ fi # # Final JVM_OPTS for Java 9+ compatibility # -javaMajorVersion "${JAVA_HOME}/bin/java" - -if [ $version -gt 8 ]; then - JVM_OPTS="--add-exports java.base/jdk.internal.misc=ALL-UNNAMED \ - --add-exports java.base/sun.nio.ch=ALL-UNNAMED \ - --add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ - --add-exports jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ - --add-modules java.xml.bind \ - ${JVM_OPTS}" +javaMajorVersion "${JAVA}" + +if [ $version -eq 8 ] ; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + ${JVM_OPTS}" + +elif [ $version -gt 8 ] && [ $version -lt 11 ]; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + --add-modules=java.transaction \ + --add-modules=java.xml.bind \ + ${JVM_OPTS}" + +elif [ $version -eq 11 ] ; then + JVM_OPTS="\ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + -Djdk.tls.client.protocols=TLSv1.2 \ + ${JVM_OPTS}" fi + ERRORCODE="-1" while [ "${ERRORCODE}" -ne "130" ] diff --git a/bin/ignitevisorcmd.bat b/bin/ignitevisorcmd.bat index 86e688f8698e2..0a92999b5b609 100644 --- a/bin/ignitevisorcmd.bat +++ b/bin/ignitevisorcmd.bat @@ -28,7 +28,7 @@ if "%OS%" == "Windows_NT" setlocal if defined JAVA_HOME goto 
checkJdk echo %0, ERROR: echo JAVA_HOME environment variable is not found. - echo Please point JAVA_HOME variable to location of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to location of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. goto error_finish @@ -37,7 +37,7 @@ goto error_finish if exist "%JAVA_HOME%\bin\java.exe" goto checkJdkVersion echo %0, ERROR: echo JAVA is not found in JAVA_HOME=%JAVA_HOME%. - echo Please point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. + echo Please point JAVA_HOME variable to installation of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. goto error_finish @@ -49,25 +49,18 @@ for /f "tokens=* USEBACKQ" %%f in (`%cmd% -version 2^>^&1`) do ( ) :LoopEscape -set var=%var:~14% -set var=%var:"=% -for /f "tokens=1,2 delims=." %%a in ("%var%") do set MAJOR_JAVA_VER=%%a & set MINOR_JAVA_VER=%%b +for /f "tokens=1-3 delims= " %%a in ("%var%") do set JAVA_VER_STR=%%c +set JAVA_VER_STR=%JAVA_VER_STR:"=% +for /f "tokens=1,2 delims=." %%a in ("%JAVA_VER_STR%.x") do set MAJOR_JAVA_VER=%%a& set MINOR_JAVA_VER=%%b if %MAJOR_JAVA_VER% == 1 set MAJOR_JAVA_VER=%MINOR_JAVA_VER% if %MAJOR_JAVA_VER% LSS 8 ( echo %0, ERROR: - echo The %MAJOR_JAVA_VER% version of JAVA installed in %JAVA_HOME% is incorrect. - echo Please point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. - echo You can also download latest JDK at http://java.com/download. - goto error_finish -) - -if %MAJOR_JAVA_VER% GTR 9 ( - echo %0, WARNING: - echo The %MAJOR_JAVA_VER% version of JAVA installed in %JAVA_HOME% was not tested with Apache Ignite. - echo Run it on your own risk or point JAVA_HOME variable to installation of JDK 1.8 or JDK 9. + echo The version of JAVA installed in %JAVA_HOME% is incorrect. + echo Please point JAVA_HOME variable to installation of JDK 1.8 or later. echo You can also download latest JDK at http://java.com/download. 
+ goto error_finish ) :: Check IGNITE_HOME. @@ -162,9 +155,39 @@ if %ENABLE_ASSERTIONS% == 1 set JVM_OPTS_VISOR=%JVM_OPTS_VISOR% -ea if "%ARGS%" == "" set ARGS=%* :: -:: Final JVM_OPTS for Java 9+ compatibility +:: Final JVM_OPTS_VISOR for Java 9+ compatibility :: -if %MAJOR_JAVA_VER% GEQ 9 set JVM_OPTS=--add-exports java.base/jdk.internal.misc=ALL-UNNAMED --add-exports java.base/sun.nio.ch=ALL-UNNAMED --add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-exports jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED --add-modules java.xml.bind %JVM_OPTS% +if %MAJOR_JAVA_VER% == 8 ( + set JVM_OPTS_VISOR= ^ + -XX:+AggressiveOpts ^ + %JVM_OPTS_VISOR% +) + +if %MAJOR_JAVA_VER% GEQ 9 if %MAJOR_JAVA_VER% LSS 11 ( + set JVM_OPTS_VISOR= ^ + -XX:+AggressiveOpts ^ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED ^ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED ^ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED ^ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED ^ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED ^ + --illegal-access=permit ^ + --add-modules=java.transaction ^ + --add-modules=java.xml.bind ^ + %JVM_OPTS_VISOR% +) + +if %MAJOR_JAVA_VER% == 11 ( + set JVM_OPTS_VISOR= ^ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED ^ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED ^ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED ^ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED ^ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED ^ + --illegal-access=permit ^ + -Djdk.tls.client.protocols=TLSv1.2 ^ + %JVM_OPTS_VISOR% +) :: :: Starts Visor console. 
diff --git a/bin/ignitevisorcmd.sh b/bin/ignitevisorcmd.sh index 1fcc127b4caf0..67f670c4ad3e8 100755 --- a/bin/ignitevisorcmd.sh +++ b/bin/ignitevisorcmd.sh @@ -1,4 +1,10 @@ -#!/bin/bash +#!/usr/bin/env bash +set -o nounset +set -o errexit +set -o pipefail +set -o errtrace +set -o functrace + # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with @@ -22,7 +28,7 @@ ARGS=$@ # # Import common functions. # -if [ "${IGNITE_HOME}" = "" ]; +if [ "${IGNITE_HOME:-}" = "" ]; then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")"; else IGNITE_HOME_TMP=${IGNITE_HOME}; fi @@ -66,7 +72,7 @@ JVM_OPTS="-Xms1g -Xmx1g -XX:MaxPermSize=128M -server ${JVM_OPTS}" # Mac OS specific support to display correct name in the dock. osname=`uname` -if [ "${DOCK_OPTS}" == "" ]; then +if [ "${DOCK_OPTS:-}" == "" ]; then DOCK_OPTS="-Xdock:name=Visor - Ignite Shell Console" fi @@ -106,17 +112,38 @@ function restoreSttySettings() { trap restoreSttySettings INT # -# Final JVM_OPTS for Java 9 compatibility +# Final JVM_OPTS for Java 9+ compatibility # javaMajorVersion "${JAVA_HOME}/bin/java" -if [ $version -gt 8 ]; then - JVM_OPTS="--add-exports java.base/jdk.internal.misc=ALL-UNNAMED \ - --add-exports java.base/sun.nio.ch=ALL-UNNAMED \ - --add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ - --add-exports jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ - --add-modules java.xml.bind \ - ${JVM_OPTS}" +if [ $version -eq 8 ] ; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + ${JVM_OPTS}" + +elif [ $version -gt 8 ] && [ $version -lt 11 ]; then + JVM_OPTS="\ + -XX:+AggressiveOpts \ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + 
--add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + --add-modules=java.transaction \ + --add-modules=java.xml.bind \ + ${JVM_OPTS}" + +elif [ $version -eq 11 ] ; then + JVM_OPTS="\ + --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \ + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \ + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \ + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \ + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \ + --illegal-access=permit \ + -Djdk.tls.client.protocols=TLSv1.2 \ + ${JVM_OPTS}" fi # diff --git a/bin/include/functions.sh b/bin/include/functions.sh index dbeee1112b0bc..199fd4e875113 100755 --- a/bin/include/functions.sh +++ b/bin/include/functions.sh @@ -1,4 +1,10 @@ -#!/bin/bash +#!/usr/bin/env bash +set -o nounset +set -o errexit +set -o pipefail +set -o errtrace +set -o functrace + # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with @@ -55,14 +61,14 @@ javaMajorVersion() { # checkJava() { # Check JAVA_HOME. - if [ "$JAVA_HOME" = "" ]; then + if [ "${JAVA_HOME:-}" = "" ]; then JAVA=`type -p java` RETCODE=$? if [ $RETCODE -ne 0 ]; then echo $0", ERROR:" echo "JAVA_HOME environment variable is not found." - echo "Please point JAVA_HOME variable to location of JDK 1.8 or JDK 9." + echo "Please point JAVA_HOME variable to location of JDK 1.8 or later." echo "You can also download latest JDK at http://java.com/download" exit 1 @@ -81,14 +87,9 @@ checkJava() { if [ $version -lt 8 ]; then echo "$0, ERROR:" echo "The $version version of JAVA installed in JAVA_HOME=$JAVA_HOME is incompatible." - echo "Please point JAVA_HOME variable to installation of JDK 1.8 or JDK 9." + echo "Please point JAVA_HOME variable to installation of JDK 1.8 or later." 
echo "You can also download latest JDK at http://java.com/download" exit 1 - elif [ $version -gt 9 ]; then - echo "$0, WARNING:" - echo "The $version version of JAVA installed in JAVA_HOME=$JAVA_HOME was not tested with Apache Ignite." - echo "Run it on your own risk or point JAVA_HOME variable to installation of JDK 1.8 or JDK 9." - echo "You can also download JDK at http://java.com/download" fi } @@ -101,7 +102,7 @@ setIgniteHome() { # # Set IGNITE_HOME, if needed. # - if [ "${IGNITE_HOME}" = "" ]; then + if [ "${IGNITE_HOME:-}" = "" ]; then export IGNITE_HOME=${IGNITE_HOME_TMP} fi diff --git a/bin/include/parseargs.sh b/bin/include/parseargs.sh index 3ab255eae1659..3b4a0702c24b9 100755 --- a/bin/include/parseargs.sh +++ b/bin/include/parseargs.sh @@ -1,4 +1,10 @@ -#!/bin/bash +#!/usr/bin/env bash +set -o nounset +set -o errexit +set -o pipefail +set -o errtrace +set -o functrace + # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with @@ -34,7 +40,7 @@ # in other scripts to parse common command lines parameters. # -CONFIG=${DEFAULT_CONFIG} +CONFIG=${DEFAULT_CONFIG:-} INTERACTIVE="0" NOJMX="0" QUIET="-DIGNITE_QUIET=true" @@ -51,3 +57,13 @@ do esac shift done + +# +# Set 'file.encoding' to UTF-8 default if not specified otherwise +# +case "${JVM_OPTS:-}" in + *-Dfile.encoding=*) + ;; + *) + JVM_OPTS="${JVM_OPTS:-} -Dfile.encoding=UTF-8";; +esac diff --git a/bin/include/setenv.sh b/bin/include/setenv.sh index 4b82cf9c9ba3f..7290ccd8e73e0 100755 --- a/bin/include/setenv.sh +++ b/bin/include/setenv.sh @@ -1,4 +1,10 @@ -#!/bin/bash +#!/usr/bin/env bash +set -o nounset +set -o errexit +set -o pipefail +set -o errtrace +set -o functrace + # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with @@ -27,7 +33,7 @@ # # Check IGNITE_HOME. 
# -if [ "${IGNITE_HOME}" = "" ]; then +if [ "${IGNITE_HOME:-}" = "" ]; then echo $0", ERROR: Ignite installation folder is not found." echo "Please create IGNITE_HOME variable pointing to location of" echo "Ignite installation folder." @@ -62,12 +68,12 @@ IFS=$(echo -en "\n\b") for file in ${IGNITE_HOME}/libs/* do if [ -d ${file} ] && [ "${file}" != "${IGNITE_HOME}"/libs/optional ]; then - IGNITE_LIBS=${IGNITE_LIBS}${SEP}${file}/* + IGNITE_LIBS=${IGNITE_LIBS:-}${SEP}${file}/* fi done IFS=$SAVEIFS -if [ "${USER_LIBS}" != "" ]; then - IGNITE_LIBS=${USER_LIBS}${SEP}${IGNITE_LIBS} +if [ "${USER_LIBS:-}" != "" ]; then + IGNITE_LIBS=${USER_LIBS:-}${SEP}${IGNITE_LIBS} fi diff --git a/config/fabric-lgpl/default-config.xml b/config/apache-ignite-lgpl/default-config.xml similarity index 100% rename from config/fabric-lgpl/default-config.xml rename to config/apache-ignite-lgpl/default-config.xml diff --git a/config/fabric/default-config.xml b/config/apache-ignite/default-config.xml similarity index 100% rename from config/fabric/default-config.xml rename to config/apache-ignite/default-config.xml diff --git a/bin/include/visorcmd/node_startup_by_ssh.sample.ini b/config/visor-cmd/node_startup_by_ssh.sample.ini similarity index 85% rename from bin/include/visorcmd/node_startup_by_ssh.sample.ini rename to config/visor-cmd/node_startup_by_ssh.sample.ini index e50ff29930459..f1d8e01b7b95e 100644 --- a/bin/include/visorcmd/node_startup_by_ssh.sample.ini +++ b/config/visor-cmd/node_startup_by_ssh.sample.ini @@ -13,6 +13,11 @@ ;See the License for the specific language governing permissions and ;limitations under the License. +# ================================================================== +# This is a sample file for Visor CMD to use with "start" command. 
+# More info: https://apacheignite-tools.readme.io/docs/start-command +# ================================================================== + # Section with settings for host1: [host1] # IP address or host name: diff --git a/doap_Ignite.rdf b/doap_Ignite.rdf index 16649c404f169..4d5daa41c8470 100644 --- a/doap_Ignite.rdf +++ b/doap_Ignite.rdf @@ -27,8 +27,8 @@ Apache Ignite - Apache Ignite is an In-Memory Data Fabric providing in-memory data caching, partitioning, processing, and querying components. - Apache Ignite In-Memory Data Fabric is designed to deliver uncompromised performance for a wide set of in-memory computing use cases from high performance computing, to the industry most advanced data grid, in-memory SQL, in-memory file system, streaming, and more. + Apache Ignite is an In-Memory Database and Caching Platform providing in-memory data caching, partitioning, processing, and querying components. + Apache Ignite In-Memory Database and Caching Platform is designed to deliver uncompromised performance for a wide set of in-memory computing use cases from high performance computing, to the industry most advanced data grid, in-memory SQL, in-memory file system, streaming, and more. 
diff --git a/docker/apache-ignite/Dockerfile b/docker/apache-ignite/Dockerfile index 15b5c3eafba55..148ce421f708f 100644 --- a/docker/apache-ignite/Dockerfile +++ b/docker/apache-ignite/Dockerfile @@ -19,7 +19,7 @@ FROM openjdk:8-jre-alpine # Settings -ENV IGNITE_HOME /opt/ignite/apache-ignite-fabric +ENV IGNITE_HOME /opt/ignite/apache-ignite WORKDIR /opt/ignite # Add missing software @@ -27,7 +27,7 @@ RUN apk --no-cache \ add bash # Copy main binary archive -COPY apache-ignite-fabric* apache-ignite-fabric +COPY apache-ignite* apache-ignite # Copy sh files and set permission COPY run.sh $IGNITE_HOME/ diff --git a/docker/apache-ignite/README.txt b/docker/apache-ignite/README.txt index 6912bc4745e4a..98d67b283d817 100644 --- a/docker/apache-ignite/README.txt +++ b/docker/apache-ignite/README.txt @@ -13,11 +13,11 @@ Build image 3) Copy Apache Ignite's binary archive to Docker module directory - cp -rfv ../../target/bin/apache-ignite-fabric-*.zip + cp -rfv ../../target/bin/apache-ignite-*.zip 4) Unpack Apache Ignite's binary archive - unzip apache-ignite-fabric-*.zip + unzip apache-ignite-*.zip 5) Build docker image diff --git a/modules/web-console/docker/standalone/Dockerfile b/docker/web-console/standalone/Dockerfile similarity index 53% rename from modules/web-console/docker/standalone/Dockerfile rename to docker/web-console/standalone/Dockerfile index 9b007349b6a62..dfcb188b2dac6 100644 --- a/modules/web-console/docker/standalone/Dockerfile +++ b/docker/web-console/standalone/Dockerfile @@ -15,62 +15,71 @@ # limitations under the License. # + +#~~~~~~~~~~~~~~~~~~# +# Frontend build # +#~~~~~~~~~~~~~~~~~~# FROM node:8-slim as frontend-build ENV NPM_CONFIG_LOGLEVEL error WORKDIR /opt/web-console -# Install node modules for frontend. -COPY frontend/package*.json frontend/ -RUN (cd frontend && npm install --no-optional) - -# Copy source. 
+# Install node modules and build sources COPY frontend frontend +RUN cd frontend && \ + npm install --no-optional && \ + npm run build -RUN (cd frontend && npm run build) +#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# +# Web Console Standalone assemble # +#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# FROM node:8-slim ENV NPM_CONFIG_LOGLEVEL error -# Update package list & install. -RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6 \ - && echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4 main" | tee /etc/apt/sources.list.d/mongodb-org-3.4.list - -# Update package list & install. -RUN apt-get update \ - && apt-get install -y nginx-light mongodb-org-server dos2unix \ - && apt-get clean \ - && rm -rf /var/lib/apt/lists/* - -# Install global node packages. +# Install global node packages RUN npm install -g pm2 -WORKDIR /opt/web-console - -COPY docker/standalone/docker-entrypoint.sh docker-entrypoint.sh +# Update software sources and install missing applications +RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6 && \ + echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4 main" > /etc/apt/sources.list.d/mongodb-org-3.4.list && \ + apt update && \ + apt install -y --no-install-recommends \ + nginx-light \ + mongodb-org-server \ + dos2unix && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* -RUN chmod +x docker-entrypoint.sh \ - && dos2unix docker-entrypoint.sh - -# Copy nginx config. -COPY docker/standalone/nginx/* /etc/nginx/ +WORKDIR /opt/web-console -# Install node modules for frontend and backend modules. +# Install node modules for backend COPY backend/package*.json backend/ -RUN (cd backend && npm install --no-optional --production) +RUN cd backend && \ + npm install --no-optional --production -# Copy source. 
+# Copy and build sources COPY backend backend +RUN cd backend && \ + npm run build -COPY web-agent/target/ignite-web-agent-*.zip backend/agent_dists +# Copy Ignite Web Agent module package +COPY ignite-web-agent-*.zip backend/agent_dists +# Copy previously built frontend COPY --from=frontend-build /opt/web-console/frontend/build static -VOLUME ["/etc/nginx"] -VOLUME ["/data/db"] -VOLUME ["/opt/web-console/serve/agent_dists"] +# Copy and fix entrypoint script +COPY docker-entrypoint.sh docker-entrypoint.sh +RUN chmod +x docker-entrypoint.sh \ + && dos2unix docker-entrypoint.sh + +# Copy nginx configuration +COPY nginx/* /etc/nginx/ EXPOSE 80 + ENTRYPOINT ["/opt/web-console/docker-entrypoint.sh"] + diff --git a/docker/web-console/standalone/README.txt b/docker/web-console/standalone/README.txt new file mode 100644 index 0000000000000..c97e7924fbdc8 --- /dev/null +++ b/docker/web-console/standalone/README.txt @@ -0,0 +1,35 @@ +Apache Ignite Web Console Standalone Docker module +================================================== +Apache Ignite Web Console Standalone Docker module provides Dockerfile and accompanying files +for building docker image of Web Console. + + +Ignite Web Console Standalone Docker Image Build Instructions +============================================================= +1) Build ignite-web-console module + + mvn clean install -P web-console -DskipTests -T 2C -pl :ignite-web-console -am + +2) Copy ignite-web-agent-.zip from 'modules/web-console/web-agent/target' + to 'docker/web-console/standalone' directory + + cp -rf modules/web-console/web-agent/target/ignite-web-agent-*.zip docker/web-console/standalone + +3) Go to Apache Ignite Web Console Docker module directory and copy Apache + Ignite Web Console's frontend and backend directory + + cd docker/web-console/standalone + cp -rf ../../../modules/web-console/backend ./ + cp -rf ../../../modules/web-console/frontend ./ + +4) Build docker image + + docker build . 
-t apacheignite/web-console-standalone:[:] + + The prepared image will be available in the local Docker registry (it can be + seen by issuing the `docker images` command) + +5) Clean up + + rm -rf backend frontend ignite-web-agent* + diff --git a/modules/web-console/docker/standalone/docker-entrypoint.sh b/docker/web-console/standalone/docker-entrypoint.sh similarity index 100% rename from modules/web-console/docker/standalone/docker-entrypoint.sh rename to docker/web-console/standalone/docker-entrypoint.sh diff --git a/modules/web-console/docker/standalone/nginx/nginx.conf b/docker/web-console/standalone/nginx/nginx.conf similarity index 100% rename from modules/web-console/docker/standalone/nginx/nginx.conf rename to docker/web-console/standalone/nginx/nginx.conf diff --git a/modules/web-console/docker/standalone/nginx/web-console.conf b/docker/web-console/standalone/nginx/web-console.conf similarity index 100% rename from modules/web-console/docker/standalone/nginx/web-console.conf rename to docker/web-console/standalone/nginx/web-console.conf diff --git a/examples/README.md b/examples/README.md new file mode 100644 index 0000000000000..6357bf90816a4 --- /dev/null +++ b/examples/README.md @@ -0,0 +1,30 @@ +# Apache Ignite Examples + +This module contains examples of how to run [Apache Ignite](ignite.apache.org) both standalone and with 3rd party components. + +Instructions on how to start the examples can be found in [README.txt](README.txt). + +For how to start the examples in a developer's environment, please see [DEVNOTES.txt](DEVNOTES.txt). + +## Running examples on JDK 9/10/11 +Ignite uses internal JDK APIs that are not accessible by default.
See also [How to run Ignite on JDK 9,10 and 11](https://apacheignite.readme.io/docs/getting-started#section-running-ignite-with-java-9-10-11) + +To make the examples easier to run from a local IDE, you can add the following options as defaults for all applications: + +``--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED + --add-exports=java.base/sun.nio.ch=ALL-UNNAMED + --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED + --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED + --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED + --illegal-access=permit`` + +In IntelliJ IDEA, for example, this can be done with an Application run-configuration template. + +Use the 'Run' -> 'Edit Configurations...' menu. + +## Contributing to Examples +*Notice:* When updating the examples' classpath, or whenever modifications to [pom.xml](pom.xml) are required, +please make sure that the corresponding changes are also applied to +[pom-standalone.xml](pom-standalone.xml). That pom file is finalized during a release and placed into the examples folder alongside the examples code. \ No newline at end of file diff --git a/examples/README.txt b/examples/README.txt index 9da74a8af2732..abb5d23b115d9 100644 --- a/examples/README.txt +++ b/examples/README.txt @@ -36,6 +36,6 @@ There are some ways to gain required libs from sources: 1) Run "mvn clean install -DskipTests -P lgpl" at Apache Ignite sources. This case will install lgpl-based libs to local maven repository. -2) Run "mvn clean package -DskipTests -Prelease,lgpl -Dignite.edition=fabric-lgpl" at Apache Ignite sources. +2) Run "mvn clean package -DskipTests -Prelease,lgpl -Dignite.edition=apache-ignite-lgpl" at Apache Ignite sources. Required libs will appear at /target/release-package/libs/optional subfolders. Found libs should be copied to global or project's classpath. 
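The JVM options quoted in the README above can also be supplied directly on the command line instead of through an IDE template; a minimal sketch (the classpath layout and the chosen example class are illustrative assumptions, not part of the patch):

```shell
#!/bin/sh
# Sketch: run one of the Ignite examples on JDK 9/10/11 with the internal
# JDK packages opened up. Adjust the classpath to your local build layout.
java \
  --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \
  --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \
  --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \
  --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \
  --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED \
  --illegal-access=permit \
  -cp "examples/target/classes:libs/*" \
  org.apache.ignite.examples.datagrid.CacheApiExample
```

On JDK 8 the extra flags are unnecessary and can simply be dropped.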
diff --git a/examples/config/encryption/example-encrypted-store.xml b/examples/config/encryption/example-encrypted-store.xml new file mode 100644 index 0000000000000..7ce5482ad5ca7 --- /dev/null +++ b/examples/config/encryption/example-encrypted-store.xml @@ -0,0 +1,71 @@ + [Spring XML bean definitions were stripped during text extraction; per the example that uses this file, it configures an IgniteConfiguration with an encryption SPI, Ignite persistence, and TCP discovery limited to the addresses below.] + 127.0.0.1:47500..47502 + [remainder of stripped XML] diff --git a/examples/config/encryption/example_keystore.jks b/examples/config/encryption/example_keystore.jks new file mode 100644 index 0000000000000..9d4476766822e Binary files /dev/null and b/examples/config/encryption/example_keystore.jks differ diff --git a/examples/pom-standalone-lgpl.xml b/examples/pom-standalone-lgpl.xml index 09698de49f5e7..29cac8bcf7464 100644 --- a/examples/pom-standalone-lgpl.xml +++ b/examples/pom-standalone-lgpl.xml @@ -74,7 +74,7 @@ <groupId>org.apache.ignite</groupId> - <artifactId>ignite-spring-data</artifactId> + <artifactId>ignite-spring-data_2.0</artifactId> <version>to_be_replaced_by_ignite_version</version> diff --git a/examples/pom-standalone.xml b/examples/pom-standalone.xml index 385c2cf461cc5..1490dabbc0804 100644 --- a/examples/pom-standalone.xml +++ b/examples/pom-standalone.xml @@ -74,7 +74,7 @@ <groupId>org.apache.ignite</groupId> - <artifactId>ignite-spring-data</artifactId> + <artifactId>ignite-spring-data_2.0</artifactId> <version>to_be_replaced_by_ignite_version</version> @@ -95,6 +95,31 @@ <artifactId>spymemcached</artifactId> <version>2.8.4</version> + + <dependency> + <groupId>org.jpmml</groupId> + <artifactId>pmml-model</artifactId> + <version>1.4.7</version> + </dependency> + + <dependency> + <groupId>com.fasterxml.jackson.core</groupId> + <artifactId>jackson-core</artifactId> + <version>2.7.3</version> + </dependency> + + <dependency> + <groupId>com.fasterxml.jackson.core</groupId> + <artifactId>jackson-databind</artifactId> + <version>2.7.3</version> + </dependency> + + <dependency> + <groupId>com.fasterxml.jackson.core</groupId> + <artifactId>jackson-annotations</artifactId> + <version>2.7.3</version> + </dependency> diff --git a/examples/pom.xml b/examples/pom.xml index e745beb7204d4..c6b0a5f59db0d 100644 --- a/examples/pom.xml +++ b/examples/pom.xml @@ -145,7 +145,7 @@ <groupId>org.scalatest</groupId> <artifactId>scalatest_2.11</artifactId> - <version>2.2.4</version> + <version>${scala.test.version}</version> <scope>test</scope> diff --git a/examples/sql/world.sql b/examples/sql/world.sql index a34ee71fbf5dd..560b0bd585395 100644 --- a/examples/sql/world.sql +++ b/examples/sql/world.sql @@ -2,40 +2,40 @@ DROP TABLE IF EXISTS Country; CREATE TABLE Country ( Code CHAR(3)
PRIMARY KEY, - Name CHAR(52), - Continent CHAR(50), - Region CHAR(26), + Name VARCHAR, + Continent VARCHAR, + Region VARCHAR, SurfaceArea DECIMAL(10,2), - IndepYear SMALLINT(6), - Population INT(11), + IndepYear SMALLINT, + Population INT, LifeExpectancy DECIMAL(3,1), GNP DECIMAL(10,2), GNPOld DECIMAL(10,2), - LocalName CHAR(45), - GovernmentForm CHAR(45), - HeadOfState CHAR(60), - Capital INT(11), + LocalName VARCHAR, + GovernmentForm VARCHAR, + HeadOfState VARCHAR, + Capital INT, Code2 CHAR(2) ) WITH "template=partitioned, backups=1, CACHE_NAME=Country, VALUE_TYPE=demo.model.Country"; DROP TABLE IF EXISTS City; CREATE TABLE City ( - ID INT(11), - Name CHAR(35), + ID INT, + Name VARCHAR, CountryCode CHAR(3), - District CHAR(20), - Population INT(11), + District VARCHAR, + Population INT, PRIMARY KEY (ID, CountryCode) ) WITH "template=partitioned, backups=1, affinityKey=CountryCode, CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City"; -CREATE INDEX idx_country_code ON city (CountryCode); +CREATE INDEX idx_country_code ON City (CountryCode); DROP TABLE IF EXISTS CountryLanguage; CREATE TABLE CountryLanguage ( CountryCode CHAR(3), - Language CHAR(30), + Language VARCHAR, IsOfficial CHAR(2), Percentage DECIMAL(4,1), PRIMARY KEY (CountryCode, Language) diff --git a/examples/src/main/java/org/apache/ignite/examples/encryption/EncryptedCacheExample.java b/examples/src/main/java/org/apache/ignite/examples/encryption/EncryptedCacheExample.java new file mode 100644 index 0000000000000..137d19471c551 --- /dev/null +++ b/examples/src/main/java/org/apache/ignite/examples/encryption/EncryptedCacheExample.java @@ -0,0 +1,106 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.examples.encryption; + +import javax.cache.Cache; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.Ignition; +import org.apache.ignite.cache.query.QueryCursor; +import org.apache.ignite.cache.query.ScanQuery; +import org.apache.ignite.configuration.CacheConfiguration; + +/** + * This example demonstrates the usage of the Apache Ignite Persistent Store. + * Data written to the persistent store will be encrypted on disk. + */ +public class EncryptedCacheExample { + /** */ + public static void main(String[] args) { + System.out.println(">>> Starting cluster."); + + // Starting Ignite with EncryptionSpi configured. + // Note: every server node in a cluster with encryption enabled must have the same keystore. + // The encryption feature can only be used in deployments with Ignite persistence enabled. + try (Ignite ignite = Ignition.start("examples/config/encryption/example-encrypted-store.xml")) { + // Activate the cluster. This is required when the persistent store is enabled, because + // the cluster may need to wait for all nodes that store a subset of data on disk to join. + ignite.cluster().active(true); + + CacheConfiguration<Long, BankAccount> ccfg = new CacheConfiguration<>("encrypted-cache"); + + // Enable encryption for the newly created cache. 
+ ccfg.setEncryptionEnabled(true); + + System.out.println(">>> Creating encrypted cache."); + + IgniteCache<Long, BankAccount> cache = ignite.createCache(ccfg); + + System.out.println(">>> Populating cache with data."); + + // Data in this cache will be encrypted on disk. + cache.put(1L, new BankAccount("Rich account", 1_000_000L)); + cache.put(2L, new BankAccount("Middle account", 1_000L)); + cache.put(3L, new BankAccount("One dollar account", 1L)); + } + + // After cluster shutdown, the data remains on disk in encrypted form. + + System.out.println(">>> Starting cluster again."); + + try (Ignite ignite = Ignition.start("examples/config/encryption/example-encrypted-store.xml")) { + ignite.cluster().active(true); + + // We can obtain the existing cache and load its data from disk. + IgniteCache<Long, BankAccount> cache = ignite.getOrCreateCache("encrypted-cache"); + + QueryCursor<Cache.Entry<Long, BankAccount>> cursor = cache.query(new ScanQuery<>()); + + System.out.println(">>> Saved data:"); + + // Iterating through the existing data. + for (Cache.Entry<Long, BankAccount> entry : cursor) { + System.out.println(">>> ID = " + entry.getKey() + + ", AccountName = " + entry.getValue().accountName + + ", Balance = " + entry.getValue().balance); + } + + } + } + + /** + * Test class with very secret data. + */ + private static class BankAccount { + /** + * Name. + */ + private String accountName; + + /** + * Balance. 
+ */ + private long balance; + + /** */ + BankAccount(String accountName, long balance) { + this.accountName = accountName; + this.balance = balance; + } + } +} diff --git a/examples/src/test/java-lgpl/org/apache/ignite/examples/HibernateL2CacheExampleSelfTest.java b/examples/src/test/java-lgpl/org/apache/ignite/examples/HibernateL2CacheExampleSelfTest.java index 68767d77c4198..93e24b0694c3b 100644 --- a/examples/src/test/java-lgpl/org/apache/ignite/examples/HibernateL2CacheExampleSelfTest.java +++ b/examples/src/test/java-lgpl/org/apache/ignite/examples/HibernateL2CacheExampleSelfTest.java @@ -19,15 +19,15 @@ import org.apache.ignite.examples.datagrid.hibernate.HibernateL2CacheExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * Tests the {@link HibernateL2CacheExample}. */ public class HibernateL2CacheExampleSelfTest extends GridAbstractExamplesTest { - /** - * @throws Exception If failed. - */ - public void testHibernateL2CacheExample() throws Exception { + /** */ + @Test + public void testHibernateL2CacheExample() { HibernateL2CacheExample.main(EMPTY_ARGS); } } diff --git a/examples/src/test/java-lgpl/org/apache/ignite/examples/SpatialQueryExampleSelfTest.java b/examples/src/test/java-lgpl/org/apache/ignite/examples/SpatialQueryExampleSelfTest.java index ac59d8e567b43..ea88cff99cbe6 100644 --- a/examples/src/test/java-lgpl/org/apache/ignite/examples/SpatialQueryExampleSelfTest.java +++ b/examples/src/test/java-lgpl/org/apache/ignite/examples/SpatialQueryExampleSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.examples.datagrid.SpatialQueryExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * Tests {@link SpatialQueryExample}. @@ -27,6 +28,7 @@ public class SpatialQueryExampleSelfTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. 
*/ + @Test public void testSpatialQueryExample() throws Exception { SpatialQueryExample.main(EMPTY_ARGS); } diff --git a/examples/src/test/java-lgpl/org/apache/ignite/testsuites/IgniteLgplExamplesSelfTestSuite.java b/examples/src/test/java-lgpl/org/apache/ignite/testsuites/IgniteLgplExamplesSelfTestSuite.java index 3c9101a31594c..a5e714abecada 100644 --- a/examples/src/test/java-lgpl/org/apache/ignite/testsuites/IgniteLgplExamplesSelfTestSuite.java +++ b/examples/src/test/java-lgpl/org/apache/ignite/testsuites/IgniteLgplExamplesSelfTestSuite.java @@ -17,36 +17,39 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.examples.HibernateL2CacheExampleMultiNodeSelfTest; import org.apache.ignite.examples.HibernateL2CacheExampleSelfTest; import org.apache.ignite.examples.SpatialQueryExampleMultiNodeSelfTest; import org.apache.ignite.examples.SpatialQueryExampleSelfTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import static org.apache.ignite.IgniteSystemProperties.IGNITE_OVERRIDE_MCAST_GRP; /** * Examples test suite.

Contains only Spring ignite examples tests. */ -public class IgniteLgplExamplesSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteLgplExamplesSelfTestSuite { /** * @return Suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, GridTestUtils.getNextMulticastGroup(IgniteLgplExamplesSelfTestSuite.class)); TestSuite suite = new TestSuite("Ignite Examples Test Suite"); - suite.addTest(new TestSuite(HibernateL2CacheExampleSelfTest.class)); - suite.addTest(new TestSuite(SpatialQueryExampleSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheExampleSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SpatialQueryExampleSelfTest.class)); // Multi-node. - suite.addTest(new TestSuite(HibernateL2CacheExampleMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(SpatialQueryExampleMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheExampleMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SpatialQueryExampleMultiNodeSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/examples/src/test/java/org/apache/ignite/examples/BasicExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/BasicExamplesSelfTest.java index 41ae90a634449..fa1e630230774 100644 --- a/examples/src/test/java/org/apache/ignite/examples/BasicExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/BasicExamplesSelfTest.java @@ -23,6 +23,7 @@ import org.apache.ignite.examples.computegrid.ComputeRunnableExample; import org.apache.ignite.examples.datastructures.IgniteExecutorServiceExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * Closure examples self test. 
@@ -31,6 +32,7 @@ public class BasicExamplesSelfTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. */ + @Test public void testBroadcastExample() throws Exception { ComputeBroadcastExample.main(EMPTY_ARGS); } @@ -38,6 +40,7 @@ public void testBroadcastExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCallableExample() throws Exception { ComputeCallableExample.main(EMPTY_ARGS); } @@ -45,6 +48,7 @@ public void testCallableExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClosureExample() throws Exception { ComputeClosureExample.main(EMPTY_ARGS); } @@ -52,6 +56,7 @@ public void testClosureExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExecutorExample() throws Exception { IgniteExecutorServiceExample.main(EMPTY_ARGS); } @@ -67,6 +72,7 @@ public void testExecutorExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRunnableExample() throws Exception { ComputeRunnableExample.main(EMPTY_ARGS); } diff --git a/examples/src/test/java/org/apache/ignite/examples/CacheClientBinaryExampleTest.java b/examples/src/test/java/org/apache/ignite/examples/CacheClientBinaryExampleTest.java index 01be0bc7e0552..db25c603fd4fb 100644 --- a/examples/src/test/java/org/apache/ignite/examples/CacheClientBinaryExampleTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/CacheClientBinaryExampleTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.examples.binary.datagrid.CacheClientBinaryPutGetExample; import org.apache.ignite.examples.binary.datagrid.CacheClientBinaryQueryExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * @@ -33,6 +34,7 @@ public class CacheClientBinaryExampleTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. 
*/ + @Test public void testBinaryPutGetExample() throws Exception { CacheClientBinaryPutGetExample.main(new String[] {}); } @@ -40,6 +42,7 @@ public void testBinaryPutGetExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryQueryExample() throws Exception { CacheClientBinaryQueryExample.main(new String[] {}); } diff --git a/examples/src/test/java/org/apache/ignite/examples/CacheContinuousQueryExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/CacheContinuousQueryExamplesSelfTest.java index 1a1ae4e8e8e9e..d3aa704edcd2d 100644 --- a/examples/src/test/java/org/apache/ignite/examples/CacheContinuousQueryExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/CacheContinuousQueryExamplesSelfTest.java @@ -21,6 +21,7 @@ import org.apache.ignite.examples.datagrid.CacheContinuousQueryExample; import org.apache.ignite.examples.datagrid.CacheContinuousQueryWithTransformerExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** */ @@ -28,6 +29,7 @@ public class CacheContinuousQueryExamplesSelfTest extends GridAbstractExamplesTe /** * @throws Exception If failed. */ + @Test public void testCacheContinuousAsyncQueryExample() throws Exception { CacheContinuousAsyncQueryExample.main(new String[] {}); } @@ -35,6 +37,7 @@ public void testCacheContinuousAsyncQueryExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheContinuousQueryExample() throws Exception { CacheContinuousQueryExample.main(new String[] {}); } @@ -42,6 +45,7 @@ public void testCacheContinuousQueryExample() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheContinuousQueryWithTransformerExample() throws Exception { CacheContinuousQueryWithTransformerExample.main(new String[] {}); } diff --git a/examples/src/test/java/org/apache/ignite/examples/CacheExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/CacheExamplesSelfTest.java index 0085573133ce8..258adbc5139eb 100644 --- a/examples/src/test/java/org/apache/ignite/examples/CacheExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/CacheExamplesSelfTest.java @@ -21,6 +21,7 @@ import org.apache.ignite.examples.datagrid.CacheEntryProcessorExample; import org.apache.ignite.examples.datagrid.CacheApiExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; //import org.apache.ignite.examples.datagrid.starschema.*; //import org.apache.ignite.examples.datagrid.store.dummy.*; @@ -33,6 +34,7 @@ public class CacheExamplesSelfTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. */ + @Test public void testCacheAffinityExample() throws Exception { CacheAffinityExample.main(EMPTY_ARGS); } @@ -40,6 +42,7 @@ public void testCacheAffinityExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheEntryProcessorExample() throws Exception { CacheEntryProcessorExample.main(EMPTY_ARGS); } @@ -112,6 +115,7 @@ public void testCacheEntryProcessorExample() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheApiExample() throws Exception { CacheApiExample.main(EMPTY_ARGS); } diff --git a/examples/src/test/java/org/apache/ignite/examples/CheckpointExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/CheckpointExamplesSelfTest.java index 03185c8a14f67..5327bed40e78c 100644 --- a/examples/src/test/java/org/apache/ignite/examples/CheckpointExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/CheckpointExamplesSelfTest.java @@ -18,10 +18,12 @@ package org.apache.ignite.examples; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * Checkpoint examples self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class CheckpointExamplesSelfTest extends GridAbstractExamplesTest { /** * TODO: IGNITE-711 next example(s) should be implemented for java 8 diff --git a/examples/src/test/java/org/apache/ignite/examples/ClusterGroupExampleSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/ClusterGroupExampleSelfTest.java index 35b251012018d..d6c5d49650529 100644 --- a/examples/src/test/java/org/apache/ignite/examples/ClusterGroupExampleSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/ClusterGroupExampleSelfTest.java @@ -18,10 +18,12 @@ package org.apache.ignite.examples; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class ClusterGroupExampleSelfTest extends GridAbstractExamplesTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { diff --git a/examples/src/test/java/org/apache/ignite/examples/ComputeClientBinaryExampleTest.java b/examples/src/test/java/org/apache/ignite/examples/ComputeClientBinaryExampleTest.java index 5dcad62d2ddb8..30480f0ea3a34 100644 --- 
a/examples/src/test/java/org/apache/ignite/examples/ComputeClientBinaryExampleTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/ComputeClientBinaryExampleTest.java @@ -18,6 +18,7 @@ import org.apache.ignite.examples.binary.computegrid.ComputeClientBinaryTaskExecutionExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * @@ -31,6 +32,7 @@ public class ComputeClientBinaryExampleTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. */ + @Test public void testBinaryTaskExecutionExample() throws Exception { ComputeClientBinaryTaskExecutionExample.main(new String[] {}); } diff --git a/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesMultiNodeSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesMultiNodeSelfTest.java index f73a74d3aa394..01aa4c973a2b9 100644 --- a/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesMultiNodeSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesMultiNodeSelfTest.java @@ -17,12 +17,15 @@ package org.apache.ignite.examples; +import org.junit.Ignore; + /** * Continuation example multi-node self test. 
*/ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class ContinuationExamplesMultiNodeSelfTest extends ContinuationExamplesSelfTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { startRemoteNodes(); } -} \ No newline at end of file +} diff --git a/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesSelfTest.java index 512a55b794df5..5ee93f6ea4053 100644 --- a/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/ContinuationExamplesSelfTest.java @@ -20,10 +20,12 @@ //import org.apache.ignite.examples.computegrid.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * Continuation example self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class ContinuationExamplesSelfTest extends GridAbstractExamplesTest { /** * TODO: IGNITE-711 next example(s) should be implemented for java 8 diff --git a/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesMultiNodeSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesMultiNodeSelfTest.java index e85e28186b79a..2495e6d15d74b 100644 --- a/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesMultiNodeSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesMultiNodeSelfTest.java @@ -17,12 +17,15 @@ package org.apache.ignite.examples; +import org.junit.Ignore; + /** * ContinuousMapperExample multi-node self test. 
*/ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class ContinuousMapperExamplesMultiNodeSelfTest extends ContinuationExamplesSelfTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { startRemoteNodes(); } -} \ No newline at end of file +} diff --git a/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesSelfTest.java index 0ae7dc1c420a7..0029a5cb5d84a 100644 --- a/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/ContinuousMapperExamplesSelfTest.java @@ -18,10 +18,12 @@ package org.apache.ignite.examples; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * ContinuousMapperExample self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class ContinuousMapperExamplesSelfTest extends GridAbstractExamplesTest { /** * TODO: IGNITE-711 next example(s) should be implemented for java 8 diff --git a/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesMultiNodeSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesMultiNodeSelfTest.java index c57f2851bd682..2edf75eb53ef2 100644 --- a/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesMultiNodeSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesMultiNodeSelfTest.java @@ -17,11 +17,14 @@ package org.apache.ignite.examples; +import org.junit.Ignore; + /** * Deployment examples multi-node self test. 
*/ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class DeploymentExamplesMultiNodeSelfTest extends DeploymentExamplesSelfTest { - // TODO: IGNITE-711 next example(s) should be implemented for java 8 + // TODO: IGNITE-711 next example(s) should be implemented for java 8 // or testing method(s) should be removed if example(s) does not applicable for java 8. /** {@inheritDoc} */ // @Override public void testDeploymentExample() throws Exception { @@ -29,4 +32,4 @@ public class DeploymentExamplesMultiNodeSelfTest extends DeploymentExamplesSelfT // // super.testDeploymentExample(); // } -} \ No newline at end of file +} diff --git a/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesSelfTest.java index 8a01b9b823605..2711d25cea928 100644 --- a/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/DeploymentExamplesSelfTest.java @@ -20,10 +20,12 @@ //import org.apache.ignite.examples.misc.deployment.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * Deployment examples self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class DeploymentExamplesSelfTest extends GridAbstractExamplesTest { // TODO: IGNITE-711 next example(s) should be implemented for java 8 // or testing method(s) should be removed if example(s) does not applicable for java 8. 
diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/MapReplicatedReservation.java b/examples/src/test/java/org/apache/ignite/examples/EncryptedCacheExampleSelfTest.java similarity index 60% rename from modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/MapReplicatedReservation.java rename to examples/src/test/java/org/apache/ignite/examples/EncryptedCacheExampleSelfTest.java index dd8237b6f946d..66cd4cf341ac9 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/MapReplicatedReservation.java +++ b/examples/src/test/java/org/apache/ignite/examples/EncryptedCacheExampleSelfTest.java @@ -15,24 +15,25 @@ * limitations under the License. */ -package org.apache.ignite.internal.processors.query.h2.twostep; +package org.apache.ignite.examples; -import org.apache.ignite.internal.processors.cache.distributed.dht.GridReservable; +import org.apache.ignite.examples.encryption.EncryptedCacheExample; +import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** - * Mapper fake reservation object for replicated caches. */ -class MapReplicatedReservation implements GridReservable { - /** */ - static final MapReplicatedReservation INSTANCE = new MapReplicatedReservation(); - +public class EncryptedCacheExampleSelfTest extends GridAbstractExamplesTest { /** {@inheritDoc} */ - @Override public boolean reserve() { - throw new IllegalStateException(); + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); } - /** {@inheritDoc} */ - @Override public void release() { - throw new IllegalStateException(); + /** + * @throws Exception If failed. 
+ */ + @Test + public void testBinaryPutGetExample() throws Exception { + EncryptedCacheExample.main(new String[] {}); } } diff --git a/examples/src/test/java/org/apache/ignite/examples/EventsExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/EventsExamplesSelfTest.java index 8e675a3c0be7b..635b247ad76f5 100644 --- a/examples/src/test/java/org/apache/ignite/examples/EventsExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/EventsExamplesSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.examples.events.EventsExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * Events examples self test. @@ -27,6 +28,7 @@ public class EventsExamplesSelfTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. */ + @Test public void testEventsExample() throws Exception { EventsExample.main(EMPTY_ARGS); } diff --git a/examples/src/test/java/org/apache/ignite/examples/IgfsExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/IgfsExamplesSelfTest.java index 1be686ca86882..d227acb79bb0d 100644 --- a/examples/src/test/java/org/apache/ignite/examples/IgfsExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/IgfsExamplesSelfTest.java @@ -21,10 +21,12 @@ //import org.apache.ignite.internal.util.typedef.internal.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * IGFS examples self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class IgfsExamplesSelfTest extends GridAbstractExamplesTest { /** IGFS config with shared memory IPC. 
*/ private static final String IGFS_SHMEM_CFG = "modules/core/src/test/config/igfs-shmem.xml"; diff --git a/examples/src/test/java/org/apache/ignite/examples/LifecycleExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/LifecycleExamplesSelfTest.java index b89a8f7fbc48c..2b80865943d5f 100644 --- a/examples/src/test/java/org/apache/ignite/examples/LifecycleExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/LifecycleExamplesSelfTest.java @@ -20,10 +20,12 @@ //import org.apache.ignite.examples.misc.lifecycle.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * LifecycleExample self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class LifecycleExamplesSelfTest extends GridAbstractExamplesTest { /** * TODO: IGNITE-711 next example(s) should be implemented for java 8 diff --git a/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesMultiNodeSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesMultiNodeSelfTest.java index fd57fbb0171a6..3972084e9786f 100644 --- a/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesMultiNodeSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesMultiNodeSelfTest.java @@ -19,15 +19,18 @@ //import org.apache.ignite.examples.misc.client.memcache.*; +import org.junit.Ignore; + /** * MemcacheRestExample multi-node self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class MemcacheRestExamplesMultiNodeSelfTest extends MemcacheRestExamplesSelfTest { - // TODO: IGNITE-711 next example(s) should be implemented for java 8 + // TODO: IGNITE-711 next example(s) should be implemented for java 8 // or testing method(s) should be removed if example(s) does not applicable for java 8. 
/** {@inheritDoc} */ // @Override protected void beforeTest() throws Exception { // for (int i = 0; i < RMT_NODES_CNT; i++) // startGrid("memcache-rest-examples-" + i, MemcacheRestExampleNodeStartup.configuration()); // } -} \ No newline at end of file +} diff --git a/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesSelfTest.java index 762343d31f261..f2b7b6e5019df 100644 --- a/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/MemcacheRestExamplesSelfTest.java @@ -20,10 +20,12 @@ //import org.apache.ignite.examples.misc.client.memcache.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * MemcacheRestExample self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class MemcacheRestExamplesSelfTest extends GridAbstractExamplesTest { // TODO: IGNITE-711 next example(s) should be implemented for java 8 // or testing method(s) should be removed if example(s) does not applicable for java 8. diff --git a/examples/src/test/java/org/apache/ignite/examples/MessagingExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/MessagingExamplesSelfTest.java index 3c94d3b20a6c2..0686dffc9c541 100644 --- a/examples/src/test/java/org/apache/ignite/examples/MessagingExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/MessagingExamplesSelfTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.examples.messaging.MessagingExample; import org.apache.ignite.examples.messaging.MessagingPingPongExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * Messaging examples self test. @@ -33,6 +34,7 @@ public class MessagingExamplesSelfTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. 
*/ + @Test public void testMessagingExample() throws Exception { MessagingExample.main(EMPTY_ARGS); } @@ -40,6 +42,7 @@ public void testMessagingExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMessagingPingPongExample() throws Exception { MessagingPingPongExample.main(EMPTY_ARGS); } diff --git a/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesMultiNodeSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesMultiNodeSelfTest.java index f641a45aaa2c3..135f8649f399f 100644 --- a/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesMultiNodeSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesMultiNodeSelfTest.java @@ -17,12 +17,15 @@ package org.apache.ignite.examples; +import org.junit.Ignore; + /** * PrimeExample multi-node self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class MonteCarloExamplesMultiNodeSelfTest extends MonteCarloExamplesSelfTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { startRemoteNodes(); } -} \ No newline at end of file +} diff --git a/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesSelfTest.java index beba341d5d2a1..66026b2f55b21 100644 --- a/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/MonteCarloExamplesSelfTest.java @@ -20,6 +20,7 @@ //import org.apache.ignite.examples.computegrid.montecarlo.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * Ignite examples self test. Excludes Ignite Spring tests. @@ -65,6 +66,7 @@ * Classpath should contain the {@code ${IGNITE_HOME}/modules/tests/config/aop/aspectj} folder. 
* */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class MonteCarloExamplesSelfTest extends GridAbstractExamplesTest { /** * TODO: IGNITE-711 next example(s) should be implemented for java 8 diff --git a/examples/src/test/java/org/apache/ignite/examples/SpringBeanExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/SpringBeanExamplesSelfTest.java index 4cc209f0e9cdf..c9c69dc52a488 100644 --- a/examples/src/test/java/org/apache/ignite/examples/SpringBeanExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/SpringBeanExamplesSelfTest.java @@ -20,10 +20,12 @@ //import org.apache.ignite.examples.misc.springbean.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * Spring bean examples self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class SpringBeanExamplesSelfTest extends GridAbstractExamplesTest { /** * TODO: IGNITE-711 next example(s) should be implemented for java 8 diff --git a/examples/src/test/java/org/apache/ignite/examples/SpringDataExampleSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/SpringDataExampleSelfTest.java index 516ad4546347c..bba21b3d66808 100644 --- a/examples/src/test/java/org/apache/ignite/examples/SpringDataExampleSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/SpringDataExampleSelfTest.java @@ -18,6 +18,7 @@ import org.apache.ignite.examples.springdata.SpringDataExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * Spring Data example test. @@ -26,6 +27,7 @@ public class SpringDataExampleSelfTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. 
*/ + @Test public void testSpringDataExample() throws Exception { SpringDataExample.main(EMPTY_ARGS); } diff --git a/examples/src/test/java/org/apache/ignite/examples/SqlExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/SqlExamplesSelfTest.java index 0bf01d8a534d2..c105335b9e73c 100644 --- a/examples/src/test/java/org/apache/ignite/examples/SqlExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/SqlExamplesSelfTest.java @@ -21,6 +21,7 @@ import org.apache.ignite.examples.sql.SqlDmlExample; import org.apache.ignite.examples.sql.SqlQueriesExample; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Test; /** * SQL examples self test. @@ -29,6 +30,7 @@ public class SqlExamplesSelfTest extends GridAbstractExamplesTest { /** * @throws Exception If failed. */ + @Test public void testSqlJavaExample() throws Exception { SqlQueriesExample.main(EMPTY_ARGS); } @@ -36,6 +38,7 @@ public void testSqlJavaExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlDmlExample() throws Exception { SqlDmlExample.main(EMPTY_ARGS); } @@ -43,6 +46,7 @@ public void testSqlDmlExample() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlDdlExample() throws Exception { SqlDdlExample.main(EMPTY_ARGS); } diff --git a/examples/src/test/java/org/apache/ignite/examples/TaskExamplesMultiNodeSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/TaskExamplesMultiNodeSelfTest.java index 06c4dd831fb86..676123425e5cf 100644 --- a/examples/src/test/java/org/apache/ignite/examples/TaskExamplesMultiNodeSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/TaskExamplesMultiNodeSelfTest.java @@ -17,12 +17,15 @@ package org.apache.ignite.examples; +import org.junit.Ignore; + /** * Hello world examples multi-node self test. 
*/ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class TaskExamplesMultiNodeSelfTest extends TaskExamplesSelfTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { startRemoteNodes(); } -} \ No newline at end of file +} diff --git a/examples/src/test/java/org/apache/ignite/examples/TaskExamplesSelfTest.java b/examples/src/test/java/org/apache/ignite/examples/TaskExamplesSelfTest.java index 72ad3c7f87e8d..a9b7f95c0a338 100644 --- a/examples/src/test/java/org/apache/ignite/examples/TaskExamplesSelfTest.java +++ b/examples/src/test/java/org/apache/ignite/examples/TaskExamplesSelfTest.java @@ -20,10 +20,12 @@ //import org.apache.ignite.examples.computegrid.*; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.Ignore; /** * Hello world examples self test. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-711") public class TaskExamplesSelfTest extends GridAbstractExamplesTest { // TODO: IGNITE-711 next example(s) should be implemented for java 8 // or testing method(s) should be removed if example(s) does not applicable for java 8. 
diff --git a/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesJ8SelfTestSuite.java b/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesJ8SelfTestSuite.java index f73d977916845..f281ec57252bf 100644 --- a/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesJ8SelfTestSuite.java +++ b/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesJ8SelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.examples.BasicExamplesMultiNodeSelfTest; import org.apache.ignite.examples.BasicExamplesSelfTest; @@ -45,35 +46,35 @@ public static TestSuite suite() throws Exception { TestSuite suite = new TestSuite("Ignite Examples Test Suite"); - suite.addTest(new TestSuite(CacheExamplesSelfTest.class)); - suite.addTest(new TestSuite(BasicExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BasicExamplesSelfTest.class)); -// suite.addTest(new TestSuite(ContinuationExamplesSelfTest.class)); -// suite.addTest(new TestSuite(ContinuousMapperExamplesSelfTest.class)); -// suite.addTest(new TestSuite(DeploymentExamplesSelfTest.class)); - suite.addTest(new TestSuite(EventsExamplesSelfTest.class)); -// suite.addTest(new TestSuite(LifecycleExamplesSelfTest.class)); - suite.addTest(new TestSuite(MessagingExamplesSelfTest.class)); -// suite.addTest(new TestSuite(MemcacheRestExamplesSelfTest.class)); -// suite.addTest(new TestSuite(MonteCarloExamplesSelfTest.class)); -// suite.addTest(new TestSuite(TaskExamplesSelfTest.class)); -// suite.addTest(new TestSuite(SpringBeanExamplesSelfTest.class)); -// suite.addTest(new TestSuite(IgfsExamplesSelfTest.class)); -// suite.addTest(new TestSuite(CheckpointExamplesSelfTest.class)); -// suite.addTest(new TestSuite(HibernateL2CacheExampleSelfTest.class)); -// suite.addTest(new 
TestSuite(ClusterGroupExampleSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(ContinuationExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(ContinuousMapperExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(DeploymentExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(EventsExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(LifecycleExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(MessagingExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(MemcacheRestExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(MonteCarloExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(TaskExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(SpringBeanExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(IgfsExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(CheckpointExamplesSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(HibernateL2CacheExampleSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(ClusterGroupExampleSelfTest.class)); // Multi-node. 
- suite.addTest(new TestSuite(CacheExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(BasicExamplesMultiNodeSelfTest.class)); -// suite.addTest(new TestSuite(ContinuationExamplesMultiNodeSelfTest.class)); -// suite.addTest(new TestSuite(ContinuousMapperExamplesMultiNodeSelfTest.class)); -// suite.addTest(new TestSuite(DeploymentExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(EventsExamplesMultiNodeSelfTest.class)); -// suite.addTest(new TestSuite(TaskExamplesMultiNodeSelfTest.class)); -// suite.addTest(new TestSuite(MemcacheRestExamplesMultiNodeSelfTest.class)); -// suite.addTest(new TestSuite(MonteCarloExamplesMultiNodeSelfTest.class)); -// suite.addTest(new TestSuite(HibernateL2CacheExampleMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BasicExamplesMultiNodeSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(ContinuationExamplesMultiNodeSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(ContinuousMapperExamplesMultiNodeSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(DeploymentExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(EventsExamplesMultiNodeSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(TaskExamplesMultiNodeSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(MemcacheRestExamplesMultiNodeSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(MonteCarloExamplesMultiNodeSelfTest.class)); +// suite.addTest(new JUnit4TestAdapter(HibernateL2CacheExampleMultiNodeSelfTest.class)); return suite; } diff --git a/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesMLTestSuite.java b/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesMLTestSuite.java index 6b41301ccef50..8551460dd76dd 100644 --- a/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesMLTestSuite.java +++ 
b/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesMLTestSuite.java @@ -27,12 +27,21 @@ import javassist.ClassClassPath; import javassist.ClassPool; import javassist.CtClass; +import javassist.CtMethod; import javassist.CtNewMethod; import javassist.NotFoundException; +import javassist.bytecode.AnnotationsAttribute; +import javassist.bytecode.ClassFile; +import javassist.bytecode.ConstPool; +import javassist.bytecode.annotation.Annotation; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.examples.ml.util.MLExamplesCommonArgs; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest; +import org.junit.BeforeClass; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import static org.apache.ignite.IgniteSystemProperties.IGNITE_OVERRIDE_MCAST_GRP; @@ -41,13 +50,21 @@ *

* Contains only ML Grid Ignite examples tests.

*/ -public class IgniteExamplesMLTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteExamplesMLTestSuite { /** Base package to create test classes in. */ private static final String basePkgForTests = "org.apache.ignite.examples.ml"; /** Test class name pattern. */ private static final String clsNamePtrn = ".*Example$"; + /** */ + @BeforeClass + public static void init() { + System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, + GridTestUtils.getNextMulticastGroup(IgniteExamplesMLTestSuite.class)); + } + /** * Creates test suite for Ignite ML examples. * @@ -55,13 +72,10 @@ public class IgniteExamplesMLTestSuite extends TestSuite { * @throws Exception If failed. */ public static TestSuite suite() throws Exception { - System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, - GridTestUtils.getNextMulticastGroup(IgniteExamplesMLTestSuite.class)); - TestSuite suite = new TestSuite("Ignite ML Examples Test Suite"); for (Class clazz : getClasses(basePkgForTests)) - suite.addTest(new TestSuite(makeTestClass(clazz))); + suite.addTest(new JUnit4TestAdapter(makeTestClass(clazz))); return suite; } @@ -79,15 +93,25 @@ private static Class makeTestClass(Class exampleCls) ClassPool cp = ClassPool.getDefault(); cp.insertClassPath(new ClassClassPath(IgniteExamplesMLTestSuite.class)); - CtClass cl = cp.makeClass(basePkgForTests + "." + exampleCls.getSimpleName() + "SelfName"); + CtClass cl = cp.makeClass(basePkgForTests + "." + exampleCls.getSimpleName() + "SelfTest"); cl.setSuperclass(cp.get(GridAbstractExamplesTest.class.getName())); - cl.addMethod(CtNewMethod.make("public void testExample() { " + CtMethod mtd = CtNewMethod.make("public void testExample() { " + exampleCls.getCanonicalName() + ".main(" + MLExamplesCommonArgs.class.getName() - + ".EMPTY_ARGS_ML); }", cl)); + + ".EMPTY_ARGS_ML); }", cl); + + // Create and add annotation. 
+ ClassFile ccFile = cl.getClassFile(); + ConstPool constpool = ccFile.getConstPool(); + AnnotationsAttribute attr = new AnnotationsAttribute(constpool, AnnotationsAttribute.visibleTag); + Annotation annot = new Annotation("org.junit.Test", constpool); + attr.addAnnotation(annot); + mtd.getMethodInfo().addAttribute(attr); + + cl.addMethod(mtd); return cl.toClass(); } diff --git a/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesSelfTestSuite.java b/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesSelfTestSuite.java index a55abdf7c9124..31e9a6c48135d 100644 --- a/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesSelfTestSuite.java +++ b/examples/src/test/java/org/apache/ignite/testsuites/IgniteExamplesSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.examples.BasicExamplesMultiNodeSelfTest; import org.apache.ignite.examples.BasicExamplesSelfTest; @@ -33,6 +34,7 @@ import org.apache.ignite.examples.ContinuousMapperExamplesSelfTest; import org.apache.ignite.examples.DeploymentExamplesMultiNodeSelfTest; import org.apache.ignite.examples.DeploymentExamplesSelfTest; +import org.apache.ignite.examples.EncryptedCacheExampleSelfTest; import org.apache.ignite.examples.EventsExamplesMultiNodeSelfTest; import org.apache.ignite.examples.EventsExamplesSelfTest; import org.apache.ignite.examples.IgfsExamplesSelfTest; @@ -47,59 +49,64 @@ import org.apache.ignite.examples.SqlExamplesSelfTest; import org.apache.ignite.examples.TaskExamplesMultiNodeSelfTest; import org.apache.ignite.examples.TaskExamplesSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Examples test suite. *

* Contains all Ignite examples tests.

*/ -public class IgniteExamplesSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteExamplesSelfTestSuite { /** * @return Suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { // System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, // GridTestUtils.getNextMulticastGroup(IgniteExamplesSelfTestSuite.class)); TestSuite suite = new TestSuite("Ignite Examples Test Suite"); - suite.addTest(new TestSuite(CacheExamplesSelfTest.class)); - suite.addTest(new TestSuite(SqlExamplesSelfTest.class)); - suite.addTest(new TestSuite(BasicExamplesSelfTest.class)); - suite.addTest(new TestSuite(ContinuationExamplesSelfTest.class)); - suite.addTest(new TestSuite(ContinuousMapperExamplesSelfTest.class)); - suite.addTest(new TestSuite(DeploymentExamplesSelfTest.class)); - suite.addTest(new TestSuite(EventsExamplesSelfTest.class)); - suite.addTest(new TestSuite(LifecycleExamplesSelfTest.class)); - suite.addTest(new TestSuite(MessagingExamplesSelfTest.class)); - suite.addTest(new TestSuite(MemcacheRestExamplesSelfTest.class)); - suite.addTest(new TestSuite(MonteCarloExamplesSelfTest.class)); - suite.addTest(new TestSuite(TaskExamplesSelfTest.class)); - suite.addTest(new TestSuite(SpringBeanExamplesSelfTest.class)); - suite.addTest(new TestSuite(SpringDataExampleSelfTest.class)); - suite.addTest(new TestSuite(IgfsExamplesSelfTest.class)); - suite.addTest(new TestSuite(CheckpointExamplesSelfTest.class)); - suite.addTest(new TestSuite(ClusterGroupExampleSelfTest.class)); - suite.addTest(new TestSuite(CacheContinuousQueryExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SqlExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BasicExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ContinuationExamplesSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(ContinuousMapperExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DeploymentExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(EventsExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(LifecycleExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(MessagingExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(MemcacheRestExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(MonteCarloExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TaskExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SpringBeanExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SpringDataExampleSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CheckpointExamplesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ClusterGroupExampleSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryExamplesSelfTest.class)); // Multi-node. 
- suite.addTest(new TestSuite(CacheExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(BasicExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(ContinuationExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(ContinuousMapperExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(DeploymentExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(EventsExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(TaskExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(MemcacheRestExamplesMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(MonteCarloExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BasicExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ContinuationExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ContinuousMapperExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DeploymentExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(EventsExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TaskExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(MemcacheRestExamplesMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(MonteCarloExamplesMultiNodeSelfTest.class)); // Binary. - suite.addTest(new TestSuite(CacheClientBinaryExampleTest.class)); - suite.addTest(new TestSuite(ComputeClientBinaryExampleTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheClientBinaryExampleTest.class)); + suite.addTest(new JUnit4TestAdapter(ComputeClientBinaryExampleTest.class)); // ML Grid. - suite.addTest(IgniteExamplesMLTestSuite.suite()); + suite.addTest(new JUnit4TestAdapter(IgniteExamplesMLTestSuite.class)); + + // Encryption. 
+ suite.addTest(new JUnit4TestAdapter(EncryptedCacheExampleSelfTest.class)); return suite; } diff --git a/examples/src/test/spark/org/apache/ignite/spark/testsuites/IgniteExamplesSparkSelfTestSuite.java b/examples/src/test/spark/org/apache/ignite/spark/testsuites/IgniteExamplesSparkSelfTestSuite.java index 6328ee241a190..43a1198ce569f 100644 --- a/examples/src/test/spark/org/apache/ignite/spark/testsuites/IgniteExamplesSparkSelfTestSuite.java +++ b/examples/src/test/spark/org/apache/ignite/spark/testsuites/IgniteExamplesSparkSelfTestSuite.java @@ -17,11 +17,14 @@ package org.apache.ignite.spark.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spark.examples.IgniteDataFrameSelfTest; import org.apache.ignite.spark.examples.JavaIgniteDataFrameSelfTest; import org.apache.ignite.spark.examples.SharedRDDExampleSelfTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import static org.apache.ignite.IgniteSystemProperties.IGNITE_OVERRIDE_MCAST_GRP; @@ -30,20 +33,20 @@ *

* Contains only Spring ignite examples tests. */ -public class IgniteExamplesSparkSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteExamplesSparkSelfTestSuite { /** * @return Suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, GridTestUtils.getNextMulticastGroup(IgniteExamplesSparkSelfTestSuite.class)); TestSuite suite = new TestSuite("Ignite Spark Examples Test Suite"); - suite.addTest(new TestSuite(SharedRDDExampleSelfTest.class)); - suite.addTest(new TestSuite(IgniteDataFrameSelfTest.class)); - suite.addTest(new TestSuite(JavaIgniteDataFrameSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SharedRDDExampleSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDataFrameSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JavaIgniteDataFrameSelfTest.class)); return suite; } diff --git a/modules/aop/src/test/java/org/apache/ignite/gridify/AbstractAopTest.java b/modules/aop/src/test/java/org/apache/ignite/gridify/AbstractAopTest.java index 33f2cddbfaddf..0bb60688734a3 100644 --- a/modules/aop/src/test/java/org/apache/ignite/gridify/AbstractAopTest.java +++ b/modules/aop/src/test/java/org/apache/ignite/gridify/AbstractAopTest.java @@ -28,11 +28,11 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.deployment.local.LocalDeploymentSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestClassLoader; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.events.EventType.EVT_CLASS_DEPLOYED; import static org.apache.ignite.events.EventType.EVT_TASK_DEPLOYED; @@ -40,11 +40,9 @@ /** * Abstract AOP test. */ -@SuppressWarnings( {"OverlyStrongTypeCast", "JUnitAbstractTestClassNamingConvention", "ProhibitedExceptionDeclared", "IfMayBeConditional"}) +@SuppressWarnings( {"OverlyStrongTypeCast", "ProhibitedExceptionDeclared", "IfMayBeConditional"}) +@RunWith(JUnit4.class) public abstract class AbstractAopTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private DeploymentMode depMode = DeploymentMode.PRIVATE; @@ -54,8 +52,6 @@ public abstract class AbstractAopTest extends GridCommonAbstractTest { cfg.setDeploymentSpi(new LocalDeploymentSpi()); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setMetricsUpdateFrequency(500); cfg.setDeploymentMode(depMode); @@ -65,6 +61,7 @@ public abstract class AbstractAopTest extends GridCommonAbstractTest { /** * @throws Exception If test failed. */ + @Test public void testDefaultPrivate() throws Exception { checkDefault(DeploymentMode.PRIVATE); } @@ -72,6 +69,7 @@ public void testDefaultPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultIsolated() throws Exception { checkDefault(DeploymentMode.ISOLATED); } @@ -79,6 +77,7 @@ public void testDefaultIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultContinuous() throws Exception { checkDefault(DeploymentMode.CONTINUOUS); } @@ -86,6 +85,7 @@ public void testDefaultContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultShared() throws Exception { checkDefault(DeploymentMode.SHARED); } @@ -93,6 +93,7 @@ public void testDefaultShared() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testDefaultWithUserClassLoaderPrivate() throws Exception { checkDefaultWithUserClassLoader(DeploymentMode.PRIVATE); } @@ -100,6 +101,7 @@ public void testDefaultWithUserClassLoaderPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultWithUserClassLoaderIsolated() throws Exception { checkDefaultWithUserClassLoader(DeploymentMode.ISOLATED); } @@ -107,6 +109,7 @@ public void testDefaultWithUserClassLoaderIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultWithUserClassLoaderContinuous() throws Exception { checkDefaultWithUserClassLoader(DeploymentMode.CONTINUOUS); } @@ -114,6 +117,7 @@ public void testDefaultWithUserClassLoaderContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultWithUserClassLoaderShared() throws Exception { checkDefaultWithUserClassLoader(DeploymentMode.SHARED); } @@ -121,6 +125,7 @@ public void testDefaultWithUserClassLoaderShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testSingleDeploymentWithUserClassLoaderPrivate() throws Exception { checkSingleDeploymentWithUserClassLoader(DeploymentMode.PRIVATE); } @@ -128,6 +133,7 @@ public void testSingleDeploymentWithUserClassLoaderPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testSingleDeploymentWithUserClassLoaderIsolated() throws Exception { checkSingleDeploymentWithUserClassLoader(DeploymentMode.ISOLATED); } @@ -135,6 +141,7 @@ public void testSingleDeploymentWithUserClassLoaderIsolated() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testSingleDeploymentWithUserClassLoaderContinuous() throws Exception { checkSingleDeploymentWithUserClassLoader(DeploymentMode.CONTINUOUS); } @@ -142,6 +149,7 @@ public void testSingleDeploymentWithUserClassLoaderContinuous() throws Exception /** * @throws Exception If test failed. */ + @Test public void testSingleDeploymentWithUserClassLoaderShared() throws Exception { checkSingleDeploymentWithUserClassLoader(DeploymentMode.SHARED); } @@ -149,6 +157,7 @@ public void testSingleDeploymentWithUserClassLoaderShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceWithUserClassLoaderPrivate() throws Exception { checkDefaultResourceWithUserClassLoader(DeploymentMode.PRIVATE); } @@ -156,6 +165,7 @@ public void testDefaultResourceWithUserClassLoaderPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceWithUserClassLoaderIsolated() throws Exception { checkDefaultResourceWithUserClassLoader(DeploymentMode.ISOLATED); } @@ -163,6 +173,7 @@ public void testDefaultResourceWithUserClassLoaderIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceWithUserClassLoaderContinuous() throws Exception { checkDefaultResourceWithUserClassLoader(DeploymentMode.CONTINUOUS); } @@ -170,6 +181,7 @@ public void testDefaultResourceWithUserClassLoaderContinuous() throws Exception /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceWithUserClassLoaderShared() throws Exception { checkDefaultResourceWithUserClassLoader(DeploymentMode.SHARED); } @@ -177,6 +189,7 @@ public void testDefaultResourceWithUserClassLoaderShared() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testNonDefaultClassPrivate() throws Exception { checkNonDefaultClass(DeploymentMode.PRIVATE); } @@ -184,6 +197,7 @@ public void testNonDefaultClassPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassIsolated() throws Exception { checkNonDefaultClass(DeploymentMode.ISOLATED); } @@ -191,6 +205,7 @@ public void testNonDefaultClassIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassContinuous() throws Exception { checkNonDefaultClass(DeploymentMode.CONTINUOUS); } @@ -198,6 +213,7 @@ public void testNonDefaultClassContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassShared() throws Exception { checkNonDefaultClass(DeploymentMode.SHARED); } @@ -205,6 +221,7 @@ public void testNonDefaultClassShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNamePrivate() throws Exception { checkNonDefaultName(DeploymentMode.PRIVATE); } @@ -212,6 +229,7 @@ public void testNonDefaultNamePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameIsolated() throws Exception { checkNonDefaultName(DeploymentMode.ISOLATED); } @@ -219,6 +237,7 @@ public void testNonDefaultNameIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameContinuous() throws Exception { checkNonDefaultName(DeploymentMode.CONTINUOUS); } @@ -226,6 +245,7 @@ public void testNonDefaultNameContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameShared() throws Exception { checkNonDefaultName(DeploymentMode.SHARED); } @@ -233,6 +253,7 @@ public void testNonDefaultNameShared() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testDefaultExceptionPrivate() throws Exception { checkDefaultException(DeploymentMode.PRIVATE); } @@ -240,6 +261,7 @@ public void testDefaultExceptionPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultExceptionIsolated() throws Exception { checkDefaultException(DeploymentMode.ISOLATED); } @@ -247,6 +269,7 @@ public void testDefaultExceptionIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultExceptionContinuous() throws Exception { checkDefaultException(DeploymentMode.CONTINUOUS); } @@ -254,6 +277,7 @@ public void testDefaultExceptionContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultExceptionShared() throws Exception { checkDefaultException(DeploymentMode.SHARED); } @@ -261,6 +285,7 @@ public void testDefaultExceptionShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourcePrivate() throws Exception { checkDefaultResource(DeploymentMode.PRIVATE); } @@ -268,6 +293,7 @@ public void testDefaultResourcePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceIsolated() throws Exception { checkDefaultResource(DeploymentMode.ISOLATED); } @@ -275,6 +301,7 @@ public void testDefaultResourceIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceContinuous() throws Exception { checkDefaultResource(DeploymentMode.CONTINUOUS); } @@ -282,6 +309,7 @@ public void testDefaultResourceContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceShared() throws Exception { checkDefaultResource(DeploymentMode.SHARED); } @@ -289,6 +317,7 @@ public void testDefaultResourceShared() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testNonDefaultClassResourcePrivate() throws Exception { checkNonDefaultClassResource(DeploymentMode.PRIVATE); } @@ -296,6 +325,7 @@ public void testNonDefaultClassResourcePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassResourceIsolated() throws Exception { checkNonDefaultClassResource(DeploymentMode.ISOLATED); } @@ -303,6 +333,7 @@ public void testNonDefaultClassResourceIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassResourceContinuous() throws Exception { checkNonDefaultClassResource(DeploymentMode.CONTINUOUS); } @@ -310,6 +341,7 @@ public void testNonDefaultClassResourceContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassResourceShared() throws Exception { checkNonDefaultClassResource(DeploymentMode.SHARED); } @@ -317,6 +349,7 @@ public void testNonDefaultClassResourceShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameResourcePrivate() throws Exception { checkNonDefaultNameResource(DeploymentMode.PRIVATE); } @@ -324,6 +357,7 @@ public void testNonDefaultNameResourcePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameResourceIsolated() throws Exception { checkNonDefaultNameResource(DeploymentMode.ISOLATED); } @@ -331,6 +365,7 @@ public void testNonDefaultNameResourceIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameResourceContinuous() throws Exception { checkNonDefaultNameResource(DeploymentMode.CONTINUOUS); } @@ -338,6 +373,7 @@ public void testNonDefaultNameResourceContinuous() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testNonDefaultNameResourceShared() throws Exception { checkNonDefaultNameResource(DeploymentMode.SHARED); } @@ -557,7 +593,6 @@ private void checkNonDefaultName(DeploymentMode depMode) throws Exception { * @param depMode Deployment mode to use. * @throws Exception If failed. */ - @SuppressWarnings({"CatchGenericClass"}) private void checkDefaultException(DeploymentMode depMode) throws Exception { this.depMode = depMode; diff --git a/modules/aop/src/test/java/org/apache/ignite/gridify/BasicAopSelfTest.java b/modules/aop/src/test/java/org/apache/ignite/gridify/BasicAopSelfTest.java index 54ac74fd2c4a3..024a5364717f7 100644 --- a/modules/aop/src/test/java/org/apache/ignite/gridify/BasicAopSelfTest.java +++ b/modules/aop/src/test/java/org/apache/ignite/gridify/BasicAopSelfTest.java @@ -26,6 +26,9 @@ import java.util.Collection; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tries to execute dummy gridified task. It should fail because grid is not started. @@ -33,10 +36,12 @@ * The main purpose of this test is to check that AOP is properly configured. It should * be included in all suites that require AOP. */ +@RunWith(JUnit4.class) public class BasicAopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testAop() throws Exception { try { gridify(); @@ -71,4 +76,4 @@ private static class TestTask extends GridifyTaskSplitAdapter { return null; } } -} \ No newline at end of file +} diff --git a/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXNonSpringAopSelfTest.java b/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXNonSpringAopSelfTest.java index 2ebd272e6c604..14a7bff5585fe 100644 --- a/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXNonSpringAopSelfTest.java +++ b/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXNonSpringAopSelfTest.java @@ -25,6 +25,9 @@ import java.util.List; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * To run this test with JBoss AOP make sure of the following: @@ -51,10 +54,12 @@ * 2. Classpath should contains the ${IGNITE_HOME}/modules/tests/config/aop/aspectj folder. */ @GridCommonTest(group="AOP") +@RunWith(JUnit4.class) public class GridifySetToXXXNonSpringAopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testGridifySetToSet() throws Exception { try { startGrid("GridifySetToSetTarget"); @@ -111,6 +116,7 @@ public void testGridifySetToSet() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGridifySetToValue() throws Exception { try { startGrid("GridifySetToValueTarget"); @@ -206,4 +212,4 @@ private static class MathEnumerationAdapter implements Enumeration { return iter.next(); } } -} \ No newline at end of file +} diff --git a/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXSpringAopSelfTest.java b/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXSpringAopSelfTest.java index 1dd499a031a16..7ff56d09a8b59 100644 --- a/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXSpringAopSelfTest.java +++ b/modules/aop/src/test/java/org/apache/ignite/gridify/GridifySetToXXXSpringAopSelfTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.compute.gridify.aop.spring.GridifySpringEnhancer; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * To run this test with JBoss AOP make sure of the following: @@ -52,10 +55,12 @@ * 2. Classpath should contains the ${IGNITE_HOME}/modules/tests/config/aop/aspectj folder. */ @GridCommonTest(group="AOP") +@RunWith(JUnit4.class) public class GridifySetToXXXSpringAopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testGridifySetToSet() throws Exception { try { startGrid("GridifySetToSetTarget"); @@ -112,6 +117,7 @@ public void testGridifySetToSet() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGridifySetToValue() throws Exception { try { startGrid("GridifySetToValueTarget"); @@ -207,4 +213,4 @@ private static class MathEnumerationAdapter implements Enumeration { return iter.next(); } } -} \ No newline at end of file +} diff --git a/modules/aop/src/test/java/org/apache/ignite/gridify/hierarchy/GridifyHierarchyTest.java b/modules/aop/src/test/java/org/apache/ignite/gridify/hierarchy/GridifyHierarchyTest.java index 3dc8d53b87326..232d92892a725 100644 --- a/modules/aop/src/test/java/org/apache/ignite/gridify/hierarchy/GridifyHierarchyTest.java +++ b/modules/aop/src/test/java/org/apache/ignite/gridify/hierarchy/GridifyHierarchyTest.java @@ -18,10 +18,14 @@ package org.apache.ignite.gridify.hierarchy; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Gridify hierarchy test. */ +@RunWith(JUnit4.class) public class GridifyHierarchyTest extends GridCommonAbstractTest { /** */ public GridifyHierarchyTest() { @@ -38,6 +42,7 @@ public void noneTestGridifyHierarchyProtected() { /** * @throws Exception If failed. 
*/ + @Test public void testGridifyHierarchyPrivate() throws Exception { Target target = new Target(); @@ -48,4 +53,4 @@ public void testGridifyHierarchyPrivate() throws Exception { @Override public String getTestIgniteInstanceName() { return "GridifyHierarchyTest"; } -} \ No newline at end of file +} diff --git a/modules/aop/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerAopTest.java b/modules/aop/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerAopTest.java index 8236f435f9c1d..672225f404afc 100644 --- a/modules/aop/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerAopTest.java +++ b/modules/aop/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerAopTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_TASK_FINISHED; @@ -44,6 +47,7 @@ * * */ +@RunWith(JUnit4.class) public class OptimizedMarshallerAopTest extends GridCommonAbstractTest { /** */ private static final AtomicInteger cntr = new AtomicInteger(); @@ -71,6 +75,7 @@ public OptimizedMarshallerAopTest() { * * @throws Exception If failed. 
*/ + @Test public void testUp() throws Exception { G.ignite().events().localListen(new IgnitePredicate() { @Override public boolean apply(Event evt) { @@ -95,4 +100,4 @@ public void testUp() throws Exception { private void gridify1() { X.println("Executes on grid"); } -} \ No newline at end of file +} diff --git a/modules/aop/src/test/java/org/apache/ignite/p2p/P2PGridifySelfTest.java b/modules/aop/src/test/java/org/apache/ignite/p2p/P2PGridifySelfTest.java index 7511d61b07fb4..42e527c08f613 100644 --- a/modules/aop/src/test/java/org/apache/ignite/p2p/P2PGridifySelfTest.java +++ b/modules/aop/src/test/java/org/apache/ignite/p2p/P2PGridifySelfTest.java @@ -28,12 +28,16 @@ import org.apache.ignite.testframework.GridTestClassLoader; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class P2PGridifySelfTest extends GridCommonAbstractTest { /** Current deployment mode. Used in {@link #getConfiguration(String)}. */ private DeploymentMode depMode; @@ -174,6 +178,7 @@ public Integer executeGridifyResource(int res) { * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.PRIVATE); } @@ -183,6 +188,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testIsolatedMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.ISOLATED); } @@ -192,6 +198,7 @@ public void testIsolatedMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testContinuousMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.CONTINUOUS); } @@ -201,6 +208,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSharedMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.SHARED); } @@ -210,6 +218,7 @@ public void testSharedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testResourcePrivateMode() throws Exception { processTestGridifyResource(DeploymentMode.PRIVATE); } @@ -219,6 +228,7 @@ public void testResourcePrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testResourceIsolatedMode() throws Exception { processTestGridifyResource(DeploymentMode.ISOLATED); } @@ -228,6 +238,7 @@ public void testResourceIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testResourceContinuousMode() throws Exception { processTestGridifyResource(DeploymentMode.CONTINUOUS); } @@ -237,7 +248,8 @@ public void testResourceContinuousMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testResourceSharedMode() throws Exception { processTestGridifyResource(DeploymentMode.SHARED); } -} \ No newline at end of file +} diff --git a/modules/aop/src/test/java/org/apache/ignite/testsuites/IgniteAopSelfTestSuite.java b/modules/aop/src/test/java/org/apache/ignite/testsuites/IgniteAopSelfTestSuite.java index b2dcea2f61633..399d759135de8 100644 --- a/modules/aop/src/test/java/org/apache/ignite/testsuites/IgniteAopSelfTestSuite.java +++ b/modules/aop/src/test/java/org/apache/ignite/testsuites/IgniteAopSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.gridify.BasicAopSelfTest; import org.apache.ignite.gridify.GridifySetToXXXNonSpringAopSelfTest; @@ -24,6 +25,8 @@ import org.apache.ignite.gridify.NonSpringAopSelfTest; import org.apache.ignite.gridify.SpringAopSelfTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import org.test.gridify.ExternalNonSpringAopSelfTest; import static org.apache.ignite.IgniteSystemProperties.IGNITE_OVERRIDE_MCAST_GRP; @@ -31,26 +34,26 @@ /** * AOP test suite. */ -public class IgniteAopSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteAopSelfTestSuite { /** * @return AOP test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite AOP Test Suite"); // Test configuration. 
- suite.addTestSuite(BasicAopSelfTest.class); + suite.addTest(new JUnit4TestAdapter(BasicAopSelfTest.class)); - suite.addTestSuite(SpringAopSelfTest.class); - suite.addTestSuite(NonSpringAopSelfTest.class); - suite.addTestSuite(GridifySetToXXXSpringAopSelfTest.class); - suite.addTestSuite(GridifySetToXXXNonSpringAopSelfTest.class); - suite.addTestSuite(ExternalNonSpringAopSelfTest.class); + suite.addTest(new JUnit4TestAdapter(SpringAopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(NonSpringAopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridifySetToXXXSpringAopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridifySetToXXXNonSpringAopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ExternalNonSpringAopSelfTest.class)); // Examples System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, GridTestUtils.getNextMulticastGroup(IgniteAopSelfTestSuite.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/aop/src/test/java/org/apache/loadtests/direct/singlesplit/SingleSplitsLoadTest.java b/modules/aop/src/test/java/org/apache/loadtests/direct/singlesplit/SingleSplitsLoadTest.java index 888a2f5f6353d..48a57bc092b3a 100644 --- a/modules/aop/src/test/java/org/apache/loadtests/direct/singlesplit/SingleSplitsLoadTest.java +++ b/modules/aop/src/test/java/org/apache/loadtests/direct/singlesplit/SingleSplitsLoadTest.java @@ -31,11 +31,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.apache.log4j.Level; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Single split load test. */ @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public class SingleSplitsLoadTest extends GridCommonAbstractTest { /** */ public SingleSplitsLoadTest() { @@ -81,6 +85,7 @@ private int getThreadCount() { * * @throws Exception If task execution failed. 
*/ + @Test public void testLoad() throws Exception { final Ignite ignite = G.ignite(getTestIgniteInstanceName()); diff --git a/modules/aop/src/test/java/org/apache/loadtests/gridify/GridifySingleSplitLoadTest.java b/modules/aop/src/test/java/org/apache/loadtests/gridify/GridifySingleSplitLoadTest.java index 9abeb6ed1767d..f417c3c0157c2 100644 --- a/modules/aop/src/test/java/org/apache/loadtests/gridify/GridifySingleSplitLoadTest.java +++ b/modules/aop/src/test/java/org/apache/loadtests/gridify/GridifySingleSplitLoadTest.java @@ -31,12 +31,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.apache.log4j.Level; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Gridify single split load test. */ -@SuppressWarnings({"CatchGenericClass"}) @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public class GridifySingleSplitLoadTest extends GridCommonAbstractTest { /** */ public GridifySingleSplitLoadTest() { @@ -45,7 +48,6 @@ public GridifySingleSplitLoadTest() { /** {@inheritDoc} */ - @SuppressWarnings("ConstantConditions") @Override public String getTestIgniteInstanceName() { // Gridify task has empty Ignite instance name by default so we need to change it // here. @@ -99,7 +101,7 @@ private int getThreadCount() { * * @throws Exception If task execution failed. 
*/ - @SuppressWarnings("unchecked") + @Test public void testGridifyLoad() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); diff --git a/modules/aop/src/test/java/org/test/gridify/ExternalNonSpringAopSelfTest.java b/modules/aop/src/test/java/org/test/gridify/ExternalNonSpringAopSelfTest.java index 44fa48d3b73ca..55017339e63cd 100644 --- a/modules/aop/src/test/java/org/test/gridify/ExternalNonSpringAopSelfTest.java +++ b/modules/aop/src/test/java/org/test/gridify/ExternalNonSpringAopSelfTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * To run this test with JBoss AOP make sure of the following: @@ -52,6 +55,7 @@ * 2. Classpath should contains the ${IGNITE_HOME}/modules/tests/config/aop/aspectj folder. */ @GridCommonTest(group="AOP") +@RunWith(JUnit4.class) public class ExternalNonSpringAopSelfTest extends GridCommonAbstractTest { /** */ private DeploymentMode depMode = DeploymentMode.PRIVATE; @@ -79,6 +83,7 @@ private void deployTask() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultPrivate() throws Exception { checkDefault(DeploymentMode.PRIVATE); } @@ -86,6 +91,7 @@ public void testDefaultPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultIsolated() throws Exception { checkDefault(DeploymentMode.ISOLATED); } @@ -93,6 +99,7 @@ public void testDefaultIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultContinuous() throws Exception { checkDefault(DeploymentMode.CONTINUOUS); } @@ -100,6 +107,7 @@ public void testDefaultContinuous() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testDefaultShared() throws Exception { checkDefault(DeploymentMode.SHARED); } @@ -107,6 +115,7 @@ public void testDefaultShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassPrivate() throws Exception { checkNonDefaultClass(DeploymentMode.PRIVATE); } @@ -114,6 +123,7 @@ public void testNonDefaultClassPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassIsolated() throws Exception { checkNonDefaultClass(DeploymentMode.ISOLATED); } @@ -121,6 +131,7 @@ public void testNonDefaultClassIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultContinuous() throws Exception { checkNonDefaultClass(DeploymentMode.CONTINUOUS); } @@ -128,6 +139,7 @@ public void testNonDefaultContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultShared() throws Exception { checkNonDefaultClass(DeploymentMode.SHARED); } @@ -135,6 +147,7 @@ public void testNonDefaultShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNamePrivate() throws Exception { checkNonDefaultName(DeploymentMode.PRIVATE); } @@ -142,6 +155,7 @@ public void testNonDefaultNamePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameIsolated() throws Exception { checkNonDefaultName(DeploymentMode.ISOLATED); } @@ -149,6 +163,7 @@ public void testNonDefaultNameIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameContinuous() throws Exception { checkNonDefaultName(DeploymentMode.CONTINUOUS); } @@ -156,6 +171,7 @@ public void testNonDefaultNameContinuous() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testNonDefaultNameShared() throws Exception { checkNonDefaultName(DeploymentMode.SHARED); } @@ -163,6 +179,7 @@ public void testNonDefaultNameShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testTaskNameAndTaskClassPrivate() throws Exception { checkTaskNameAndTaskClass(DeploymentMode.PRIVATE); } @@ -170,6 +187,7 @@ public void testTaskNameAndTaskClassPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testTaskNameAndTaskClassIsolated() throws Exception { checkTaskNameAndTaskClass(DeploymentMode.ISOLATED); } @@ -177,6 +195,7 @@ public void testTaskNameAndTaskClassIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testTaskNameAndTaskClassContinuous() throws Exception { checkTaskNameAndTaskClass(DeploymentMode.CONTINUOUS); } @@ -184,6 +203,7 @@ public void testTaskNameAndTaskClassContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testTaskNameAndTaskClassShared() throws Exception { checkTaskNameAndTaskClass(DeploymentMode.SHARED); } @@ -191,6 +211,7 @@ public void testTaskNameAndTaskClassShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultExceptionPrivate() throws Exception { checkDefaultException(DeploymentMode.PRIVATE); } @@ -198,6 +219,7 @@ public void testDefaultExceptionPrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultExceptionIsolated() throws Exception { checkDefaultException(DeploymentMode.ISOLATED); } @@ -205,6 +227,7 @@ public void testDefaultExceptionIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultExceptionContinuous() throws Exception { checkDefaultException(DeploymentMode.CONTINUOUS); } @@ -212,6 +235,7 @@ public void testDefaultExceptionContinuous() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testDefaultExceptionShared() throws Exception { checkDefaultException(DeploymentMode.SHARED); } @@ -219,6 +243,7 @@ public void testDefaultExceptionShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourcePrivate() throws Exception { checkDefaultResource(DeploymentMode.PRIVATE); } @@ -226,6 +251,7 @@ public void testDefaultResourcePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceIsolated() throws Exception { checkDefaultResource(DeploymentMode.ISOLATED); } @@ -233,6 +259,7 @@ public void testDefaultResourceIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceContinuous() throws Exception { checkDefaultResource(DeploymentMode.CONTINUOUS); } @@ -240,6 +267,7 @@ public void testDefaultResourceContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDefaultResourceShared() throws Exception { checkDefaultResource(DeploymentMode.SHARED); } @@ -247,6 +275,7 @@ public void testDefaultResourceShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassResourcePrivate() throws Exception { checkNonDefaultClassResource(DeploymentMode.PRIVATE); } @@ -254,6 +283,7 @@ public void testNonDefaultClassResourcePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassResourceIsolated() throws Exception { checkNonDefaultClassResource(DeploymentMode.ISOLATED); } @@ -261,6 +291,7 @@ public void testNonDefaultClassResourceIsolated() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testNonDefaultClassResourceContinuous() throws Exception { checkNonDefaultClassResource(DeploymentMode.CONTINUOUS); } @@ -268,6 +299,7 @@ public void testNonDefaultClassResourceContinuous() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultClassResourceShared() throws Exception { checkNonDefaultClassResource(DeploymentMode.SHARED); } @@ -275,6 +307,7 @@ public void testNonDefaultClassResourceShared() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameResourcePrivate() throws Exception { checkNonDefaultNameResource(DeploymentMode.PRIVATE); } @@ -282,6 +315,7 @@ public void testNonDefaultNameResourcePrivate() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameResourceIsolated() throws Exception { checkNonDefaultNameResource(DeploymentMode.ISOLATED); } @@ -289,6 +323,7 @@ public void testNonDefaultNameResourceIsolated() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testNonDefaultNameResourceContinuous() throws Exception { checkNonDefaultNameResource(DeploymentMode.CONTINUOUS); } @@ -296,6 +331,7 @@ public void testNonDefaultNameResourceContinuous() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testNonDefaultNameResourceShared() throws Exception { checkNonDefaultNameResource(DeploymentMode.SHARED); } diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointManagerSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointManagerSelfTest.java index acda385d5fb7e..7816e7ba0af47 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointManagerSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointManagerSelfTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.internal.managers.checkpoint.GridCheckpointManagerAbstractSelfTest; import org.apache.ignite.testsuites.IgniteIgnore; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checkpoint manager test using {@link S3CheckpointSpi}. */ +@RunWith(JUnit4.class) public class S3CheckpointManagerSelfTest extends GridCheckpointManagerAbstractSelfTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -52,6 +56,7 @@ public class S3CheckpointManagerSelfTest extends GridCheckpointManagerAbstractSe * @throws Exception Thrown if any exception occurs. */ @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Test public void testS3Based() throws Exception { retries = 6; @@ -62,9 +67,10 @@ public void testS3Based() throws Exception { * @throws Exception Thrown if any exception occurs. 
*/ @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Test public void testMultiNodeS3Based() throws Exception { retries = 6; doMultiNodeTest("s3"); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiConfigSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiConfigSelfTest.java index 727845b0d9915..446c715e0c96b 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiConfigSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiConfigSelfTest.java @@ -19,16 +19,21 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid S3 checkpoint SPI config self test. */ @GridSpiTest(spi = S3CheckpointSpi.class, group = "Checkpoint SPI") +@RunWith(JUnit4.class) public class S3CheckpointSpiConfigSelfTest extends GridSpiAbstractConfigTest { /** * @throws Exception If failed. 
*/ + @Test public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new S3CheckpointSpi(), "awsCredentials", null); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiSelfTest.java index cb38083506e69..a25b5be481a53 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiSelfTest.java @@ -39,11 +39,15 @@ import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testsuites.IgniteIgnore; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid S3 checkpoint SPI self test. */ @GridSpiTest(spi = S3CheckpointSpi.class, group = "Checkpoint SPI") +@RunWith(JUnit4.class) public class S3CheckpointSpiSelfTest extends GridSpiAbstractTest { /** */ private static final int CHECK_POINT_COUNT = 10; @@ -96,6 +100,7 @@ public class S3CheckpointSpiSelfTest extends GridSpiAbstractTest { - /** {@inheritDoc} */ @Override protected void spiConfigure(S3CheckpointSpi spi) throws Exception { AWSCredentials cred = new BasicAWSCredentials(IgniteS3TestSuite.getAccessKey(), @@ -43,8 +46,9 @@ public class S3CheckpointSpiStartStopBucketEndpointSelfTest extends GridSpiStart } /** {@inheritDoc} */ - @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Ignore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Test @Override public void testStartStop() throws Exception { super.testStartStop(); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSSEAlgorithmSelfTest.java 
b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSSEAlgorithmSelfTest.java index 7bfb75dd7888e..cf4495225abd8 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSSEAlgorithmSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSSEAlgorithmSelfTest.java @@ -21,13 +21,17 @@ import com.amazonaws.auth.BasicAWSCredentials; import org.apache.ignite.spi.GridSpiStartStopAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; -import org.apache.ignite.testsuites.IgniteIgnore; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid S3 checkpoint SPI start stop self test. */ @GridSpiTest(spi = S3CheckpointSpi.class, group = "Checkpoint SPI") +@RunWith(JUnit4.class) public class S3CheckpointSpiStartStopSSEAlgorithmSelfTest extends GridSpiStartStopAbstractTest { /** {@inheritDoc} */ @Override protected void spiConfigure(S3CheckpointSpi spi) throws Exception { @@ -42,8 +46,9 @@ public class S3CheckpointSpiStartStopSSEAlgorithmSelfTest extends GridSpiStartSt } /** {@inheritDoc} */ - @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Ignore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Test @Override public void testStartStop() throws Exception { super.testStartStop(); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSelfTest.java index a062b51f28e3f..5fec5027d1cdd 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSelfTest.java @@ 
-23,11 +23,15 @@ import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testsuites.IgniteIgnore; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid S3 checkpoint SPI start stop self test. */ @GridSpiTest(spi = S3CheckpointSpi.class, group = "Checkpoint SPI") +@RunWith(JUnit4.class) public class S3CheckpointSpiStartStopSelfTest extends GridSpiStartStopAbstractTest { /** {@inheritDoc} */ @Override protected void spiConfigure(S3CheckpointSpi spi) throws Exception { @@ -43,7 +47,8 @@ public class S3CheckpointSpiStartStopSelfTest extends GridSpiStartStopAbstractTe /** {@inheritDoc} */ @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Test @Override public void testStartStop() throws Exception { super.testStartStop(); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3SessionCheckpointSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3SessionCheckpointSelfTest.java index 54a7910d030f3..99a0cc5e8ba57 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3SessionCheckpointSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3SessionCheckpointSelfTest.java @@ -24,15 +24,20 @@ import org.apache.ignite.session.GridSessionCheckpointSelfTest; import org.apache.ignite.testsuites.IgniteIgnore; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid session checkpoint self test using {@link S3CheckpointSpi}. */ +@RunWith(JUnit4.class) public class S3SessionCheckpointSelfTest extends GridSessionCheckpointAbstractSelfTest { /** * @throws Exception If failed. 
*/ @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Test public void testS3Checkpoint() throws Exception { IgniteConfiguration cfg = getConfiguration(); @@ -51,4 +56,4 @@ public void testS3Checkpoint() throws Exception { checkCheckpoints(cfg); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/elb/TcpDiscoveryElbIpFinderSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/elb/TcpDiscoveryElbIpFinderSelfTest.java index 1217d8b4f98eb..85568f0f50540 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/elb/TcpDiscoveryElbIpFinderSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/elb/TcpDiscoveryElbIpFinderSelfTest.java @@ -19,10 +19,14 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAbstractSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * TcpDiscoveryElbIpFinderSelfTest test. */ +@RunWith(JUnit4.class) public class TcpDiscoveryElbIpFinderSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest { /** * Constructor. 
@@ -44,6 +48,7 @@ public TcpDiscoveryElbIpFinderSelfTest() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testIpFinder() throws Exception { TcpDiscoveryElbIpFinder ipFinder = new TcpDiscoveryElbIpFinder(); @@ -78,4 +83,4 @@ public TcpDiscoveryElbIpFinderSelfTest() throws Exception { assertTrue(e.getMessage().startsWith("One or more configuration parameters are invalid")); } } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAbstractSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAbstractSelfTest.java index 768e44da70f3d..37a0accc208f5 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAbstractSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAbstractSelfTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.testsuites.IgniteIgnore; import org.apache.ignite.testsuites.IgniteS3TestSuite; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Abstract TcpDiscoveryS3IpFinder to test with different ways of setting AWS credentials. 
*/ +@RunWith(JUnit4.class) abstract class TcpDiscoveryS3IpFinderAbstractSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest { /** Bucket endpoint */ @@ -83,6 +87,7 @@ abstract class TcpDiscoveryS3IpFinderAbstractSelfTest /** {@inheritDoc} */ @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-2420") + @Test @Override public void testIpFinder() throws Exception { super.testIpFinder(); } diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest.java index 9ff5571a26e5e..06524d24d8cd1 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest.java @@ -20,10 +20,14 @@ import com.amazonaws.auth.AWSStaticCredentialsProvider; import com.amazonaws.auth.BasicAWSCredentials; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * TcpDiscoveryS3IpFinder test using AWS credentials provider. */ +@RunWith(JUnit4.class) public class TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest extends TcpDiscoveryS3IpFinderAbstractSelfTest { /** * Constructor. 
@@ -41,7 +45,8 @@ public TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testIpFinder() throws Exception { super.testIpFinder(); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsSelfTest.java index 5bea2515530c5..db287108016c5 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderAwsCredentialsSelfTest.java @@ -19,10 +19,14 @@ import com.amazonaws.auth.BasicAWSCredentials; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * TcpDiscoveryS3IpFinder test using AWS credentials. */ +@RunWith(JUnit4.class) public class TcpDiscoveryS3IpFinderAwsCredentialsSelfTest extends TcpDiscoveryS3IpFinderAbstractSelfTest { /** * Constructor. 
@@ -40,7 +44,8 @@ public TcpDiscoveryS3IpFinderAwsCredentialsSelfTest() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testIpFinder() throws Exception { super.testIpFinder(); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderBucketEndpointSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderBucketEndpointSelfTest.java index 07d4839d959d7..11481d8f42286 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderBucketEndpointSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderBucketEndpointSelfTest.java @@ -19,12 +19,16 @@ import com.amazonaws.auth.BasicAWSCredentials; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * TcpDiscoveryS3IpFinder tests bucket endpoint for IP finder. * For information about possible endpoint names visit * docs.aws.amazon.com. */ +@RunWith(JUnit4.class) public class TcpDiscoveryS3IpFinderBucketEndpointSelfTest extends TcpDiscoveryS3IpFinderAbstractSelfTest { /** * Constructor. 
@@ -49,6 +53,7 @@ public TcpDiscoveryS3IpFinderBucketEndpointSelfTest() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testIpFinder() throws Exception { super.testIpFinder(); } diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderKeyPrefixSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderKeyPrefixSelfTest.java index 6e4960bdb3939..bb01f0b4e8503 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderKeyPrefixSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderKeyPrefixSelfTest.java @@ -19,12 +19,16 @@ import com.amazonaws.auth.BasicAWSCredentials; import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.client.DummyS3Client; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.mockito.Mockito; /** * TcpDiscoveryS3IpFinder tests key prefix for IP finder. For information about key prefix visit: * . */ +@RunWith(JUnit4.class) public class TcpDiscoveryS3IpFinderKeyPrefixSelfTest extends TcpDiscoveryS3IpFinderAbstractSelfTest { /** * Constructor. 
@@ -58,6 +62,7 @@ public TcpDiscoveryS3IpFinderKeyPrefixSelfTest() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testIpFinder() throws Exception { injectLogger(finder); diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest.java index 838a3c67f37c7..b909fac99adde 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest.java @@ -19,12 +19,16 @@ import com.amazonaws.auth.BasicAWSCredentials; import org.apache.ignite.testsuites.IgniteS3TestSuite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * TcpDiscoveryS3IpFinder tests server-side encryption algorithm for Amazon S3-managed encryption keys. * For information about possible S3-managed encryption keys visit * docs.aws.amazon.com. */ +@RunWith(JUnit4.class) public class TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest extends TcpDiscoveryS3IpFinderAbstractSelfTest { /** * Constructor. 
@@ -42,7 +46,8 @@ public TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testIpFinder() throws Exception { super.testIpFinder(); } -} \ No newline at end of file +} diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyObjectListingTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyObjectListingTest.java index 2598af0c7af58..9016b52f5de20 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyObjectListingTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyObjectListingTest.java @@ -23,14 +23,19 @@ import java.util.List; import java.util.Set; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Class to test {@link DummyObjectListing}. */ +@RunWith(JUnit4.class) public class DummyObjectListingTest extends GridCommonAbstractTest { /** * Test cases for various object listing functions for S3 bucket. 
*/ + @Test public void testDummyObjectListing() { Set fakeKeyPrefixSet = new HashSet<>(); diff --git a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyS3ClientTest.java b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyS3ClientTest.java index bd1b12fd548f6..88c205c629e3e 100644 --- a/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyS3ClientTest.java +++ b/modules/aws/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/client/DummyS3ClientTest.java @@ -26,10 +26,14 @@ import java.util.Map; import java.util.Set; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Class to test {@link DummyS3Client}. */ +@RunWith(JUnit4.class) public class DummyS3ClientTest extends GridCommonAbstractTest { /** Instance of {@link DummyS3Client} to be used for tests. */ private DummyS3Client s3; @@ -54,6 +58,7 @@ public class DummyS3ClientTest extends GridCommonAbstractTest { /** * Test cases to check the 'doesBucketExist' method. */ + @Test public void testDoesBucketExist() { assertTrue("The bucket 'testBucket' should exist", s3.doesBucketExist("testBucket")); assertFalse("The bucket 'nonExistentBucket' should not exist", s3.doesBucketExist("nonExistentBucket")); @@ -62,6 +67,7 @@ public void testDoesBucketExist() { /** * Test cases for various object listing functions for S3 bucket. */ + @Test public void testListObjects() { ObjectListing listing = s3.listObjects("testBucket"); @@ -98,6 +104,7 @@ public void testListObjects() { /** * Test cases for various object listing functions for S3 bucket and key prefix. */ + @Test public void testListObjectsWithAPrefix() { ObjectListing listing = s3.listObjects("testBucket", "/test"); @@ -149,6 +156,7 @@ public void testListObjectsWithAPrefix() { /** * Test case to check if a bucket is created properly. 
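The DummyS3ClientTest cases above exercise listObjects(bucket) and listObjects(bucket, prefix), where a key prefix restricts the listing to keys that start with that prefix. The semantics can be sketched with plain JDK collections; PrefixListingSketch and its sample keys below are hypothetical illustrations, not part of the patch or of the Ignite API:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

/** Minimal sketch of S3-style prefix listing, as emulated by a dummy client. */
public class PrefixListingSketch {
    /** Returns the keys in the bucket that start with the given prefix, in key order. */
    static List<String> listKeys(Map<String, byte[]> bucket, String prefix) {
        return bucket.keySet().stream()
            .filter(k -> k.startsWith(prefix))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // TreeMap keeps keys sorted, mirroring S3's lexicographic listing order.
        Map<String, byte[]> bucket = new TreeMap<>();
        bucket.put("/test/addr1", new byte[0]);
        bucket.put("/test/addr2", new byte[0]);
        bucket.put("/other/addr3", new byte[0]);

        System.out.println(listKeys(bucket, "/test")); // [/test/addr1, /test/addr2]
    }
}
```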
*/ + @Test public void testCreateBucket() { s3.createBucket("testBucket1"); diff --git a/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteElbTestSuite.java b/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteElbTestSuite.java index 28f7e0e47ec6c..5c6566da19418 100644 --- a/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteElbTestSuite.java +++ b/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteElbTestSuite.java @@ -17,22 +17,25 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.discovery.tcp.ipfinder.elb.TcpDiscoveryElbIpFinderSelfTest; import org.apache.ignite.testframework.IgniteTestSuite; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * ELB IP finder test suite. */ -public class IgniteElbTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteElbTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new IgniteTestSuite("ELB Integration Test Suite"); - suite.addTestSuite(TcpDiscoveryElbIpFinderSelfTest.class); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryElbIpFinderSelfTest.class)); return suite; } diff --git a/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteS3TestSuite.java b/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteS3TestSuite.java index a5b5eaabf12a2..1b28376b6d69e 100644 --- a/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteS3TestSuite.java +++ b/modules/aws/src/test/java/org/apache/ignite/testsuites/IgniteS3TestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.checkpoint.s3.S3CheckpointManagerSelfTest; import org.apache.ignite.spi.checkpoint.s3.S3CheckpointSpiConfigSelfTest; @@ -33,35 +34,37 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.client.DummyObjectListingTest; import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.client.DummyS3ClientTest; import org.apache.ignite.testframework.IgniteTestSuite; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * S3 integration tests. */ -public class IgniteS3TestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteS3TestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new IgniteTestSuite("S3 Integration Test Suite"); // Checkpoint SPI. 
- suite.addTestSuite(S3CheckpointSpiConfigSelfTest.class); - suite.addTestSuite(S3CheckpointSpiSelfTest.class); - suite.addTestSuite(S3CheckpointSpiStartStopSelfTest.class); - suite.addTestSuite(S3CheckpointManagerSelfTest.class); - suite.addTestSuite(S3SessionCheckpointSelfTest.class); - suite.addTestSuite(S3CheckpointSpiStartStopBucketEndpointSelfTest.class); - suite.addTestSuite(S3CheckpointSpiStartStopSSEAlgorithmSelfTest.class); + suite.addTest(new JUnit4TestAdapter(S3CheckpointSpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(S3CheckpointSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(S3CheckpointSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(S3CheckpointManagerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(S3SessionCheckpointSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(S3CheckpointSpiStartStopBucketEndpointSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(S3CheckpointSpiStartStopSSEAlgorithmSelfTest.class)); // S3 IP finder. 
- suite.addTestSuite(DummyS3ClientTest.class); - suite.addTestSuite(DummyObjectListingTest.class); - suite.addTestSuite(TcpDiscoveryS3IpFinderAwsCredentialsSelfTest.class); - suite.addTestSuite(TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest.class); - suite.addTestSuite(TcpDiscoveryS3IpFinderBucketEndpointSelfTest.class); - suite.addTestSuite(TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest.class); - suite.addTestSuite(TcpDiscoveryS3IpFinderKeyPrefixSelfTest.class); + suite.addTest(new JUnit4TestAdapter(DummyS3ClientTest.class)); + suite.addTest(new JUnit4TestAdapter(DummyObjectListingTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryS3IpFinderAwsCredentialsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryS3IpFinderAwsCredentialsProviderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryS3IpFinderBucketEndpointSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryS3IpFinderSSEAlgorithmSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryS3IpFinderKeyPrefixSelfTest.class)); return suite; } diff --git a/modules/benchmarks/pom.xml b/modules/benchmarks/pom.xml index 1ea984c6c544e..06e0e505cd2ae 100644 --- a/modules/benchmarks/pom.xml +++ b/modules/benchmarks/pom.xml @@ -62,6 +62,16 @@ ${jmh.version} provided + + org.mockito + mockito-all + ${mockito.version} + + + com.google.guava + guava + ${guava.version} + @@ -131,4 +141,4 @@ - \ No newline at end of file + diff --git a/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/algo/BenchmarkCRC.java b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/algo/BenchmarkCRC.java new file mode 100644 index 0000000000000..5c922fead0c80 --- /dev/null +++ b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/algo/BenchmarkCRC.java @@ -0,0 +1,95 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.benchmarks.jmh.algo; + +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.OutputTimeUnit; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Warmup; + +import java.nio.ByteBuffer; +import java.util.Random; + +import static java.util.concurrent.TimeUnit.NANOSECONDS; +import static org.openjdk.jmh.annotations.Mode.AverageTime; +import static org.openjdk.jmh.annotations.Scope.Thread; + +/** + * + */ +@State(Thread) +@OutputTimeUnit(NANOSECONDS) +@BenchmarkMode(AverageTime) +@Fork(value = 1, jvmArgsAppend = {"-XX:+UnlockDiagnosticVMOptions"}) +@Warmup(iterations = 5) +@Measurement(iterations = 5) +public class BenchmarkCRC { + /** */ + static final int SIZE = 1024; + + /** */ + static final int BUF_LEN = 4096; + + /** */ + @State(Thread) + public static class Context { + /** */ + final int[] results = new int[SIZE]; + + /** */ + final 
ByteBuffer bb = ByteBuffer.allocate(BUF_LEN); + + /** */ + @Setup + public void setup() { + new Random().ints(BUF_LEN, Byte.MIN_VALUE, Byte.MAX_VALUE).forEach(k -> bb.put((byte) k)); + } + } + + /** */ + @Benchmark + public int[] pureJavaCrc32(Context context) { + for (int i = 0; i < SIZE; i++) { + context.bb.rewind(); + + context.results[i] = PureJavaCrc32.calcCrc32(context.bb, BUF_LEN); + } + + return context.results; + } + + /** */ + @Benchmark + public int[] crc32(Context context) { + for (int i = 0; i < SIZE; i++) { + context.bb.rewind(); + + context.results[i] = FastCrc.calcCrc(context.bb, BUF_LEN); + } + + return context.results; + } +} + + diff --git a/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/encryption/JmhKeystoreEncryptionSpiBenchmark.java b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/encryption/JmhKeystoreEncryptionSpiBenchmark.java new file mode 100644 index 0000000000000..932d57e17261a --- /dev/null +++ b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/encryption/JmhKeystoreEncryptionSpiBenchmark.java @@ -0,0 +1,117 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
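An aside on the new BenchmarkCRC class above: assuming both PureJavaCrc32 and FastCrc implement the standard CRC-32 (ISO-HDLC) polynomial, the JDK's built-in java.util.zip.CRC32 makes a handy correctness oracle outside JMH, since the CRC-32 of the ASCII string "123456789" is the well-known check value 0xCBF43926. Crc32Check below is a hypothetical helper, not part of the patch:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/** Sanity-check helper for CRC-32 implementations, using the JDK as reference. */
public class Crc32Check {
    /** Computes the standard CRC-32 of the given bytes via java.util.zip.CRC32. */
    static long crcOf(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        // Standard CRC-32 check value: CRC("123456789") == 0xCBF43926.
        long crc = crcOf("123456789".getBytes(StandardCharsets.US_ASCII));
        System.out.printf("%08X%n", crc); // CBF43926
    }
}
```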
+ */
+
+package org.apache.ignite.internal.benchmarks.jmh.encryption;
+
+import java.nio.ByteBuffer;
+import java.util.concurrent.ThreadLocalRandom;
+import org.apache.ignite.internal.benchmarks.jmh.JmhAbstractBenchmark;
+import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionKey;
+import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.infra.Blackhole;
+import org.openjdk.jmh.runner.Runner;
+import org.openjdk.jmh.runner.options.Options;
+import org.openjdk.jmh.runner.options.OptionsBuilder;
+
+import static org.apache.ignite.internal.util.IgniteUtils.resolveIgnitePath;
+
+/**
+ * Encrypt/decrypt benchmark for {@link KeystoreEncryptionSpi}.
+ */
+public class JmhKeystoreEncryptionSpiBenchmark extends JmhAbstractBenchmark {
+    /** Data amount. */
+    private static final int DATA_AMOUNT = 100;
+
+    /** Page size. */
+    public static final int PAGE_SIZE = 1024 * 4;
+
+    /** */
+    @Benchmark
+    public void encryptBenchmark(EncryptionData d, Blackhole receiver) {
+        for (int i = 0; i < DATA_AMOUNT; i++) {
+            ByteBuffer[] dt = d.randomData[i];
+
+            KeystoreEncryptionKey key = d.keys[ThreadLocalRandom.current().nextInt(4)];
+
+            d.encSpi.encryptNoPadding(dt[0], key, dt[1]);
+
+            receiver.consume(d.res);
+
+            dt[0].rewind();
+            dt[1].rewind();
+
+            d.encSpi.decryptNoPadding(dt[1], key, dt[0]);
+        }
+    }
+
+    @State(Scope.Thread)
+    public static class EncryptionData {
+        KeystoreEncryptionSpi encSpi;
+
+        KeystoreEncryptionKey[] keys = new KeystoreEncryptionKey[4];
+
+        ByteBuffer[][] randomData = new ByteBuffer[DATA_AMOUNT][2];
+
+        ByteBuffer res = ByteBuffer.allocate(PAGE_SIZE);
+
+        public EncryptionData() {
+            encSpi = new KeystoreEncryptionSpi();
+
+            encSpi.setKeyStorePath(resolveIgnitePath("modules/core/src/test/resources/tde.jks").getAbsolutePath());
+            encSpi.setKeyStorePassword("love_sex_god".toCharArray());
+
+            encSpi.onBeforeStart();
+            encSpi.spiStart("test-instance");
+        }
+
+        @Setup(Level.Invocation)
+        public void prepareCollection() {
+            for (int i = 0; i < keys.length; i++)
+                keys[i] = encSpi.create();
+
+            for (int i = 0; i < DATA_AMOUNT; i++) {
+                byte[] dt = new byte[PAGE_SIZE - 16];
+
+                ThreadLocalRandom.current().nextBytes(dt);
+
+                randomData[i][0] = ByteBuffer.wrap(dt);
+                randomData[i][1] = ByteBuffer.allocate(PAGE_SIZE);
+            }
+        }
+
+        @TearDown(Level.Iteration)
+        public void tearDown() {
+            // No-op.
+        }
+    }
+
+    public static void main(String[] args) throws Exception {
+        Options opt = new OptionsBuilder()
+            .include(JmhKeystoreEncryptionSpiBenchmark.class.getSimpleName())
+            .threads(1)
+            .forks(1)
+            .warmupIterations(10)
+            .measurementIterations(20)
+            .build();
+
+        new Runner(opt).run();
+    }
+}
diff --git a/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/misc/GridDhtPartitionsStateValidatorBenchmark.java b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/misc/GridDhtPartitionsStateValidatorBenchmark.java
new file mode 100644
index 0000000000000..f3bbcb96d6cb9
--- /dev/null
+++ b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/misc/GridDhtPartitionsStateValidatorBenchmark.java
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.benchmarks.jmh.misc;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.ignite.internal.benchmarks.jmh.JmhAbstractBenchmark;
+import org.apache.ignite.internal.benchmarks.jmh.runner.JmhIdeBenchmarkRunner;
+import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
+import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionsStateValidator;
+import org.apache.ignite.internal.util.typedef.T2;
+import org.jetbrains.annotations.Nullable;
+import org.mockito.Matchers;
+import org.mockito.Mockito;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.stream.IntStream;
+
+import static org.openjdk.jmh.annotations.Scope.Thread;
+
+/** */
+@State(Scope.Benchmark)
+public class GridDhtPartitionsStateValidatorBenchmark extends JmhAbstractBenchmark {
+    /** */
+    @State(Thread)
+    public static class Context {
+        /** */
+        private final UUID localNodeId = UUID.randomUUID();
+
+        /** */
+        private GridCacheSharedContext cctxMock;
+
+        /** */
+        private GridDhtPartitionTopology topologyMock;
+
+        /** */
+        private GridDhtPartitionsStateValidator validator;
+
+        /** */
+        private Map<UUID, GridDhtPartitionsSingleMessage> messages = new HashMap<>();
+
+        /** */
+        private UUID ignoreNode = UUID.randomUUID();
+
+        /** */
+        private static final int NODES = 3;
+
+        /** */
+        private static final int PARTS = 100;
+
+        /**
+         * @return Partition mock with specified {@code id}, {@code updateCounter} and {@code size}.
+         */
+        private GridDhtLocalPartition partitionMock(int id, long updateCounter, long size) {
+            GridDhtLocalPartition partitionMock = Mockito.mock(GridDhtLocalPartition.class);
+            Mockito.when(partitionMock.id()).thenReturn(id);
+            Mockito.when(partitionMock.updateCounter()).thenReturn(updateCounter);
+            Mockito.when(partitionMock.fullSize()).thenReturn(size);
+            Mockito.when(partitionMock.state()).thenReturn(GridDhtPartitionState.OWNING);
+            return partitionMock;
+        }
+
+        /**
+         * @param countersMap Update counters map.
+         * @param sizesMap Sizes map.
+         * @return Message with specified {@code countersMap} and {@code sizesMap}.
+         */
+        private GridDhtPartitionsSingleMessage from(@Nullable Map<Integer, T2<Long, Long>> countersMap,
+            @Nullable Map<Integer, Long> sizesMap) {
+            GridDhtPartitionsSingleMessage msg = new GridDhtPartitionsSingleMessage();
+            if (countersMap != null)
+                msg.addPartitionUpdateCounters(0, countersMap);
+            if (sizesMap != null)
+                msg.addPartitionSizes(0, sizesMap);
+            return msg;
+        }
+
+        /** */
+        @Setup
+        public void setup() {
+            // Prepare mocks.
+            cctxMock = Mockito.mock(GridCacheSharedContext.class);
+            Mockito.when(cctxMock.localNodeId()).thenReturn(localNodeId);
+
+            topologyMock = Mockito.mock(GridDhtPartitionTopology.class);
+            Mockito.when(topologyMock.partitionState(Matchers.any(), Matchers.anyInt())).thenReturn(GridDhtPartitionState.OWNING);
+            Mockito.when(topologyMock.groupId()).thenReturn(0);
+
+            Mockito.when(topologyMock.partitions()).thenReturn(PARTS);
+
+            List<GridDhtLocalPartition> localPartitions = Lists.newArrayList();
+
+            Map<Integer, T2<Long, Long>> updateCountersMap = new HashMap<>();
+
+            Map<Integer, Long> cacheSizesMap = new HashMap<>();
+
+            IntStream.range(0, PARTS).forEach(k -> {
+                localPartitions.add(partitionMock(k, k + 1, k + 1));
+
+                long us = k > 20 && k <= 30 ? 0 : k + 2L;
+
+                updateCountersMap.put(k, new T2<>(k + 2L, us));
+                cacheSizesMap.put(k, us);
+            });
+
+            Mockito.when(topologyMock.localPartitions()).thenReturn(localPartitions);
+            Mockito.when(topologyMock.currentLocalPartitions()).thenReturn(localPartitions);
+
+            // Form single messages map.
+            for (int n = 0; n < NODES; ++n) {
+                UUID remoteNode = UUID.randomUUID();
+
+                messages.put(remoteNode, from(updateCountersMap, cacheSizesMap));
+            }
+
+            messages.put(ignoreNode, from(updateCountersMap, cacheSizesMap));
+
+            validator = new GridDhtPartitionsStateValidator(cctxMock);
+        }
+    }
+
+    /** */
+    @Benchmark
+    public void testValidatePartitionsUpdateCounters(Context context) {
+        context.validator.validatePartitionsUpdateCounters(context.topologyMock,
+            context.messages, Sets.newHashSet(context.ignoreNode));
+    }
+
+    /** */
+    @Benchmark
+    public void testValidatePartitionsSizes(Context context) {
+        context.validator.validatePartitionsSizes(context.topologyMock,
+            context.messages, Sets.newHashSet(context.ignoreNode));
+    }
+
+    /**
+     * Run benchmarks.
+     *
+     * @param args Arguments.
+     * @throws Exception If failed.
+     */
+    public static void main(String[] args) throws Exception {
+        run(1);
+    }
+
+    /**
+     * Run benchmark.
+     *
+     * @param threads Amount of threads.
+     * @throws Exception If failed.
+     */
+    private static void run(int threads) throws Exception {
+        JmhIdeBenchmarkRunner.create()
+            .forks(1)
+            .threads(threads)
+            .warmupIterations(5)
+            .measurementIterations(10)
+            .benchmarks(GridDhtPartitionsStateValidatorBenchmark.class.getSimpleName())
+            .jvmArguments("-XX:+UseG1GC", "-Xms4g", "-Xmx4g")
+            .run();
+    }
+}
diff --git a/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/tree/BPlusTreeBenchmark.java b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/tree/BPlusTreeBenchmark.java
index e80e13d52d4b3..15c47106a9a76 100644
--- a/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/tree/BPlusTreeBenchmark.java
+++ b/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/tree/BPlusTreeBenchmark.java
@@ -137,7 +137,7 @@ public void setup() throws Exception {
     public void tearDown() throws Exception {
         tree.destroy();
 
-        pageMem.stop();
+        pageMem.stop(true);
     }
 
     /**
diff --git a/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTest.java b/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTest.java
index 88b7eb8845757..b384d1a294202 100644
--- a/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTest.java
+++ b/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTest.java
@@ -56,12 +56,16 @@
 import org.apache.ignite.stream.StreamMultipleTupleExtractor;
 import org.apache.ignite.stream.StreamSingleTupleExtractor;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
 
 /**
  * Test class for {@link CamelStreamer}.
  */
+@RunWith(JUnit4.class)
 public class IgniteCamelStreamerTest extends GridCommonAbstractTest {
     /** text/plain media type. */
     private static final MediaType TEXT_PLAIN = MediaType.parse("text/plain;charset=utf-8");
@@ -127,6 +131,7 @@ public IgniteCamelStreamerTest() {
     /**
      * @throws Exception
      */
+    @Test
     public void testSendOneEntryPerMessage() throws Exception {
         streamer.setSingleTupleExtractor(singleTupleExtractor());
 
@@ -147,6 +152,7 @@ public void testSendOneEntryPerMessage() throws Exception {
     /**
      * @throws Exception
      */
+    @Test
     public void testMultipleEntriesInOneMessage() throws Exception {
         streamer.setMultipleTupleExtractor(multipleTupleExtractor());
 
@@ -167,6 +173,7 @@ public void testMultipleEntriesInOneMessage() throws Exception {
     /**
      * @throws Exception
      */
+    @Test
     public void testResponseProcessorIsCalled() throws Exception {
         streamer.setSingleTupleExtractor(singleTupleExtractor());
         streamer.setResponseProcessor(new Processor() {
@@ -195,6 +202,7 @@ public void testResponseProcessorIsCalled() throws Exception {
     /**
      * @throws Exception
      */
+    @Test
     public void testUserSpecifiedCamelContext() throws Exception {
         final AtomicInteger cnt = new AtomicInteger();
 
@@ -228,6 +236,7 @@ public void testUserSpecifiedCamelContext() throws Exception {
     /**
      * @throws Exception
      */
+    @Test
     public void testUserSpecifiedCamelContextWithPropertyPlaceholders() throws Exception {
         // Create a CamelContext with a custom property placeholder.
         CamelContext context = new DefaultCamelContext();
@@ -266,6 +275,7 @@ public void testUserSpecifiedCamelContextWithPropertyPlaceholders() throws Excep
     /**
      * @throws Exception
      */
+    @Test
     public void testInvalidEndpointUri() throws Exception {
         streamer.setSingleTupleExtractor(singleTupleExtractor());
         streamer.setEndpointUri("abc");
diff --git a/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTestSuite.java b/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTestSuite.java
index c45272ed00c57..8e9f0b10fa07d 100644
--- a/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTestSuite.java
+++ b/modules/camel/src/test/java/org/apache/ignite/stream/camel/IgniteCamelStreamerTestSuite.java
@@ -18,29 +18,31 @@
 package org.apache.ignite.stream.camel;
 
 import java.util.Set;
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Camel streamer tests. Included into 'Streamers' run configuration.
 */
-public class IgniteCamelStreamerTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCamelStreamerTestSuite {
    /**
     * @return {@link IgniteCamelStreamerTest} test suite.
-     * @throws Exception Thrown in case of the failure.
     */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
        return suite(null);
    }
 
    /**
-     * @param ignoredTests
+     * @param ignoredTests List of ignored tests.
     * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
     */
-    public static TestSuite suite(Set ignoredTests) throws Exception {
+    public static TestSuite suite(Set ignoredTests) {
        TestSuite suite = new TestSuite("IgniteCamelStreamer Test Suite");
 
-        suite.addTestSuite(IgniteCamelStreamerTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCamelStreamerTest.class));
 
        return suite;
    }
diff --git a/modules/cassandra/store/src/test/java/org/apache/ignite/tests/CassandraConfigTest.java b/modules/cassandra/store/src/test/java/org/apache/ignite/tests/CassandraConfigTest.java
index 98d7ef1bae92b..63ec90b446e86 100644
--- a/modules/cassandra/store/src/test/java/org/apache/ignite/tests/CassandraConfigTest.java
+++ b/modules/cassandra/store/src/test/java/org/apache/ignite/tests/CassandraConfigTest.java
@@ -17,22 +17,22 @@
 
 package org.apache.ignite.tests;
 
-import junit.framework.TestCase;
-import org.apache.ignite.cache.affinity.AffinityKeyMapped;
 import org.apache.ignite.cache.query.annotations.QuerySqlField;
 import org.apache.ignite.cache.store.cassandra.persistence.KeyPersistenceSettings;
 import org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings;
+import org.junit.Test;
+
+import static junit.framework.Assert.assertEquals;
 
 /**
  * Simple test for DDL generator.
 */
-public class CassandraConfigTest extends TestCase {
+public class CassandraConfigTest {
    /**
     * Check if same DDL generated for similar keys and same KeyPersistenceConfiguration.
-     *
-     * @throws Exception
     */
-    public void testDDLGeneration() throws Exception {
+    @Test
+    public void testDDLGeneration() {
        KeyPersistenceSettings keyPersistenceSettingsA = getKeyPersistenceSettings(KeyA.class);
        KeyPersistenceSettings keyPersistenceSettingsB = getKeyPersistenceSettings(KeyB.class);
diff --git a/modules/clients/pom.xml b/modules/clients/pom.xml
index 621582619e54c..34a5c2a442d74 100644
--- a/modules/clients/pom.xml
+++ b/modules/clients/pom.xml
@@ -113,9 +113,9 @@
         <dependency>
-            <groupId>com.h2database</groupId>
-            <artifactId>h2</artifactId>
-            <version>${h2.version}</version>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-h2</artifactId>
+            <version>${project.version}</version>
             <scope>test</scope>
         </dependency>
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/TaskEventSubjectIdSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/TaskEventSubjectIdSelfTest.java
index 46aaa6ba387b6..881122ed09f65 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/TaskEventSubjectIdSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/TaskEventSubjectIdSelfTest.java
@@ -45,6 +45,9 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static org.apache.ignite.events.EventType.EVTS_TASK_EXECUTION;
@@ -57,6 +60,7 @@
 /**
  * Tests for security subject ID in task events.
 */
+@RunWith(JUnit4.class)
 public class TaskEventSubjectIdSelfTest extends GridCommonAbstractTest {
    /** */
    private static final Collection<Event> evts = new ArrayList<>();
@@ -117,6 +121,7 @@ public class TaskEventSubjectIdSelfTest extends GridCommonAbstractTest {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testSimpleTask() throws Exception {
        latch = new CountDownLatch(3);
 
@@ -161,6 +166,7 @@ public void testSimpleTask() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testFailedTask() throws Exception {
        latch = new CountDownLatch(2);
 
@@ -207,6 +213,7 @@ public void testFailedTask() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testTimedOutTask() throws Exception {
        latch = new CountDownLatch(2);
 
@@ -262,6 +269,7 @@ public void testTimedOutTask() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testClosure() throws Exception {
        latch = new CountDownLatch(3);
 
@@ -308,8 +316,12 @@ public void testClosure() throws Exception {
    }
 
    /**
+     * Task events for tasks started from external clients should contain the client
+     * subject ID rather than the ID of the node they were started on. This test checks it.
+     *
     * @throws Exception If failed.
     */
+    @Test
    public void testClient() throws Exception {
        latch = new CountDownLatch(3);
 
@@ -328,7 +340,7 @@ public void testClient() throws Exception {
        assert evt != null;
 
        assertEquals(EVT_TASK_STARTED, evt.type());
-        assertEquals(nodeId, evt.subjectId());
+        assertEquals(client.id(), evt.subjectId());
 
        assert it.hasNext();
 
@@ -337,7 +349,7 @@ public void testClient() throws Exception {
        assert evt != null;
 
        assertEquals(EVT_TASK_REDUCED, evt.type());
-        assertEquals(nodeId, evt.subjectId());
+        assertEquals(client.id(), evt.subjectId());
 
        assert it.hasNext();
 
@@ -346,7 +358,7 @@ public void testClient() throws Exception {
        assert evt != null;
 
        assertEquals(EVT_TASK_FINISHED, evt.type());
-        assertEquals(nodeId, evt.subjectId());
+        assertEquals(client.id(), evt.subjectId());
 
        assert !it.hasNext();
    }
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientAbstractMultiThreadedSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientAbstractMultiThreadedSelfTest.java
index fb46bb2c87950..4a1967f0059a8 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientAbstractMultiThreadedSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientAbstractMultiThreadedSelfTest.java
@@ -43,12 +43,11 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.resources.IgniteInstanceResource;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.NotNull;
-import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.LOCAL;
@@ -59,10 +58,8 @@
 /**
  *
 */
+@RunWith(JUnit4.class)
 public abstract class ClientAbstractMultiThreadedSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
    /** Partitioned cache name. */
    protected static final String PARTITIONED_CACHE_NAME = "partitioned";
 
@@ -174,12 +171,6 @@ protected int cachePutCount() {
        c.setConnectorConfiguration(clientCfg);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        c.setDiscoverySpi(disco);
-
        c.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME), cacheConfiguration(PARTITIONED_CACHE_NAME),
            cacheConfiguration(REPLICATED_CACHE_NAME), cacheConfiguration(PARTITIONED_ASYNC_BACKUP_CACHE_NAME),
            cacheConfiguration(REPLICATED_ASYNC_CACHE_NAME));
 
@@ -246,6 +237,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String cacheName) throws
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testMultithreadedTaskRun() throws Exception {
        final AtomicLong cnt = new AtomicLong();
 
@@ -397,4 +389,4 @@ private static class TestTask extends ComputeTaskSplitAdapter {
            return locNodeId;
        }
    }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientDefaultCacheSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientDefaultCacheSelfTest.java
index c62cf8a547c92..5004b875f84f0 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientDefaultCacheSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientDefaultCacheSelfTest.java
@@ -37,23 +37,21 @@
 import org.apache.ignite.internal.processors.rest.GridRestCommand;
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.SB;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteSystemProperties.IGNITE_JETTY_PORT;
 
 /**
  * Tests that client is able to connect to a grid with only default cache enabled.
 */
+@RunWith(JUnit4.class)
 public class ClientDefaultCacheSelfTest extends GridCommonAbstractTest {
    /** Path to jetty config configured with SSL. */
    private static final String REST_JETTY_CFG = "modules/clients/src/test/resources/jetty/rest-jetty.xml";
 
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
    /** Host. */
    private static final String HOST = "127.0.0.1";
@@ -104,12 +102,6 @@ public class ClientDefaultCacheSelfTest extends GridCommonAbstractTest {
        cfg.setConnectorConfiguration(clientCfg);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
        CacheConfiguration cLoc = new CacheConfiguration(DEFAULT_CACHE_NAME);
 
        cLoc.setName(LOCAL_CACHE);
@@ -183,6 +175,7 @@ private JsonNode jsonResponse(String content) throws IOException {
    /**
     * Json format string in cache should not transform to Json object on get request.
     */
+    @Test
    public void testSkipString2JsonTransformation() throws Exception {
        String val = "{\"v\":\"my Value\",\"t\":1422559650154}";
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientReconnectionSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientReconnectionSelfTest.java
index f1085b3ec361a..453243c83f7a8 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientReconnectionSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientReconnectionSelfTest.java
@@ -25,10 +25,14 @@
 import org.apache.ignite.internal.util.typedef.X;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testsuites.IgniteIgnore;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class ClientReconnectionSelfTest extends GridCommonAbstractTest {
    /** */
    public static final String HOST = "127.0.0.1";
@@ -83,6 +87,7 @@ private GridClient client(String host) throws GridClientException {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testNoFailedReconnection() throws Exception {
        for (int i = 0; i < ClientTestRestServer.SERVERS_CNT; i++)
            runServer(i, false);
 
@@ -140,6 +145,7 @@ public void testNoFailedReconnection() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testCorrectInit() throws Exception {
        for (int i = 0; i < ClientTestRestServer.SERVERS_CNT; i++)
            runServer(i, i == 0);
 
@@ -157,6 +163,7 @@ public void testCorrectInit() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testFailedInit() throws Exception {
        for (int i = 0; i < ClientTestRestServer.SERVERS_CNT; i++)
            runServer(i, true);
 
@@ -184,9 +191,10 @@
     * @throws Exception If failed.
     */
    @IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-590", forceFailure = true)
+    @Test
    public void testIdleConnection() throws Exception {
        int srvsCnt = 4; // TODO: IGNITE-590 it may be wrong value. Need to investigate after IGNITE-590 will be fixed.
-
+
        for (int i = 0; i < srvsCnt; i++)
            runServer(i, false);
 
@@ -235,4 +243,4 @@ private ClientTestRestServer runServer(int idx, boolean failOnConnect) throws I
        return srv;
    }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientSslParametersTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientSslParametersTest.java
new file mode 100644
index 0000000000000..c2e10b8135557
--- /dev/null
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientSslParametersTest.java
@@ -0,0 +1,338 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.client;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.Callable;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.ConnectorConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.internal.client.ssl.GridSslBasicContextFactory;
+import org.apache.ignite.internal.util.typedef.F;
+import org.apache.ignite.ssl.SslContextFactory;
+import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.jetbrains.annotations.NotNull;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
+
+/**
+ * Tests cases when a node connects to the cluster with a different set of cipher suites.
+ */
+@RunWith(JUnit4.class)
+public class ClientSslParametersTest extends GridCommonAbstractTest {
+    /** */
+    public static final String TEST_CACHE_NAME = "TEST";
+
+    /** */
+    private volatile String[] cipherSuites;
+
+    /** */
+    private volatile String[] protocols;
+
+    /** {@inheritDoc} */
+    @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
+        IgniteConfiguration cfg = super.getConfiguration(gridName);
+
+        cfg.setSslContextFactory(createSslFactory());
+
+        cfg.setConnectorConfiguration(new ConnectorConfiguration()
+            .setSslEnabled(true)
+            .setSslClientAuth(true));
+
+        cfg.setCacheConfiguration(new CacheConfiguration(TEST_CACHE_NAME));
+
+        return cfg;
+    }
+
+    /**
+     * @return Client configuration.
+     */
+    protected GridClientConfiguration getClientConfiguration() {
+        GridClientConfiguration cfg = new GridClientConfiguration();
+
+        cfg.setServers(Collections.singleton("127.0.0.1:11211"));
+
+        cfg.setSslContextFactory(createOldSslFactory());
+
+        return cfg;
+    }
+
+    /**
+     * @return SSL factory.
+     */
+    @NotNull private SslContextFactory createSslFactory() {
+        SslContextFactory factory = (SslContextFactory)GridTestUtils.sslFactory();
+
+        factory.setCipherSuites(cipherSuites);
+
+        factory.setProtocols(protocols);
+
+        return factory;
+    }
+
+    /**
+     * @return SSL factory.
+     */
+    @NotNull private GridSslBasicContextFactory createOldSslFactory() {
+        GridSslBasicContextFactory factory = (GridSslBasicContextFactory)GridTestUtils.sslContextFactory();
+
+        factory.setCipherSuites(cipherSuites);
+
+        factory.setProtocols(protocols);
+
+        return factory;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void afterTest() throws Exception {
+        stopAllGrids();
+
+        protocols = null;
+
+        cipherSuites = null;
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testSameCipherSuite() throws Exception {
+        cipherSuites = new String[] {
+            "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
+            "TLS_RSA_WITH_AES_128_GCM_SHA256",
+            "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
+        };
+
+        startGrid();
+
+        checkSuccessfulClientStart(
+            new String[] {
+                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
+                "TLS_RSA_WITH_AES_128_GCM_SHA256",
+                "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
+            },
+            null
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testOneCommonCipherSuite() throws Exception {
+        cipherSuites = new String[] {
+            "TLS_RSA_WITH_AES_128_GCM_SHA256",
+            "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
+        };
+
+        startGrid();
+
+        checkSuccessfulClientStart(
+            new String[] {
+                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
+                "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
+            },
+            null
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testNoCommonCipherSuite() throws Exception {
+        cipherSuites = new String[] {
+            "TLS_RSA_WITH_AES_128_GCM_SHA256"
+        };
+
+        startGrid();
+
+        checkClientStartFailure(
+            new String[] {
+                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
+                "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
+            },
+            null
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10245")
+    public void testNonExistentCipherSuite() throws Exception {
+        fail("https://issues.apache.org/jira/browse/IGNITE-10245");
+
+        cipherSuites = new String[] {
+            "TLS_RSA_WITH_AES_128_GCM_SHA256"
+        };
+
+        startGrid();
+
+        checkClientStartFailure(
+            new String[] {
+                "TLC_FAKE_CIPHER",
+                "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
+            },
+            null,
+            "Unsupported ciphersuite"
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testNoCommonProtocols() throws Exception {
+        protocols = new String[] {
+            "TLSv1.1",
+            "SSLv3"
+        };
+
+        startGrid();
+
+        checkClientStartFailure(
+            null,
+            new String[] {
+                "TLSv1",
+                "TLSv1.2"
+            }
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10245")
+    public void testNonExistentProtocol() throws Exception {
+        fail("https://issues.apache.org/jira/browse/IGNITE-10245");
+
+        protocols = new String[] {
+            "SSLv3"
+        };
+
+        startGrid();
+
+        checkClientStartFailure(
+            null,
+            new String[] {
+                "SSLv3",
+                "SSLvDoesNotExist"
+            },
+            "SSLvDoesNotExist"
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testSameProtocols() throws Exception {
+        protocols = new String[] {
+            "TLSv1.1",
+            "TLSv1.2"
+        };
+
+        startGrid();
+
+        checkSuccessfulClientStart(
+            null,
+            new String[] {
+                "TLSv1.1",
+                "TLSv1.2"
+            }
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testOneCommonProtocol() throws Exception {
+        protocols = new String[] {
+            "TLSv1",
+            "TLSv1.1",
+            "TLSv1.2"
+        };
+
+        startGrid();
+
+        checkSuccessfulClientStart(
+            null,
+            new String[] {
+                "TLSv1.1",
+                "SSLv3"
+            }
+        );
+    }
+
+    /**
+     * @param cipherSuites List of cipher suites.
+     * @param protocols List of protocols.
+     * @throws Exception If failed.
+     */
+    private void checkSuccessfulClientStart(String[] cipherSuites, String[] protocols) throws Exception {
+        this.cipherSuites = F.isEmpty(cipherSuites) ? null : cipherSuites;
+        this.protocols = F.isEmpty(protocols) ? null : protocols;
+
+        try (GridClient client = GridClientFactory.start(getClientConfiguration())) {
+            List<GridClientNode> top = client.compute().refreshTopology(false, false);
+
+            assertEquals(1, top.size());
+        }
+    }
+
+    /**
+     * @param cipherSuites List of cipher suites.
+     * @param protocols List of protocols.
+     */
+    private void checkClientStartFailure(String[] cipherSuites, String[] protocols) {
+        checkClientStartFailure(cipherSuites, protocols, "Latest topology update failed.");
+    }
+
+    /**
+     * @param cipherSuites List of cipher suites.
+     * @param protocols List of protocols.
+     * @param msg Expected exception message.
+     */
+    private void checkClientStartFailure(String[] cipherSuites, String[] protocols, String msg) {
+        this.cipherSuites = F.isEmpty(cipherSuites) ? null : cipherSuites;
+        this.protocols = F.isEmpty(protocols) ? null : protocols;
+
+        GridTestUtils.assertThrows(
+            null,
+            new Callable<Object>() {
+                @Override public Object call() throws Exception {
+                    GridClient client = GridClientFactory.start(getClientConfiguration());
+
+                    client.compute().refreshTopology(false, false);
+
+                    return null;
+                }
+            },
+            GridClientException.class,
+            msg
+        );
+    }
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpSslAuthenticationSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpSslAuthenticationSelfTest.java
index 922526201b194..c43c3c0c4354f 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpSslAuthenticationSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpSslAuthenticationSelfTest.java
@@ -32,10 +32,14 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests
 */
+@RunWith(JUnit4.class)
 public class ClientTcpSslAuthenticationSelfTest extends GridCommonAbstractTest {
    /** REST TCP port. */
    private static final int REST_TCP_PORT = 12121;
@@ -112,6 +116,7 @@ private GridClientImpl createClient() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testServerAuthenticated() throws Exception {
        checkServerAuthenticatedByClient(false);
    }
@@ -119,6 +124,7 @@ public void testServerAuthenticated() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testServerNotAuthenticatedByClient() throws Exception {
        try {
            checkServerAuthenticatedByClient(true);
@@ -131,6 +137,7 @@ public void testServerNotAuthenticatedByClient() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testClientAuthenticated() throws Exception {
        checkClientAuthenticatedByServer(false);
    }
@@ -138,6 +145,7 @@ public void testClientAuthenticated() throws Exception {
    /**
     * @throws Exception If failed.
     */
+    @Test
    public void testClientNotAuthenticated() throws Exception {
        try {
            checkServerAuthenticatedByClient(true);
@@ -264,4 +272,4 @@ public void reset() {
            srvCheckCallCnt.set(0);
        }
    }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpTaskExecutionAfterTopologyRestartSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpTaskExecutionAfterTopologyRestartSelfTest.java
index 4b63fff1d925c..ca7347596bb5e 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpTaskExecutionAfterTopologyRestartSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/ClientTcpTaskExecutionAfterTopologyRestartSelfTest.java
@@ -21,10 +21,14 @@
 import org.apache.ignite.configuration.ConnectorConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import 
org.junit.runners.JUnit4; /** * Ensures */ +@RunWith(JUnit4.class) public class ClientTcpTaskExecutionAfterTopologyRestartSelfTest extends GridCommonAbstractTest { /** Port. */ private static final int PORT = 11211; @@ -54,6 +58,7 @@ public class ClientTcpTaskExecutionAfterTopologyRestartSelfTest extends GridComm /** * @throws Exception If failed. */ + @Test public void testTaskAfterRestart() throws Exception { startGrids(1); @@ -72,4 +77,4 @@ public void testTaskAfterRestart() throws Exception { cli.compute().execute(ClientTcpTask.class.getName(), Collections.singletonList("arg")); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientCacheFlagsCodecTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientCacheFlagsCodecTest.java index a9bda0aff1452..8d7b4b5c237d2 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientCacheFlagsCodecTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientCacheFlagsCodecTest.java @@ -21,18 +21,20 @@ import java.util.EnumSet; import java.util.Set; -import junit.framework.TestCase; - import org.apache.ignite.internal.client.GridClientCacheFlag; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; + +import static org.junit.Assert.assertTrue; /** * Tests conversions between GridClientCacheFlag. */ -public class ClientCacheFlagsCodecTest extends TestCase { +public class ClientCacheFlagsCodecTest { /** * Tests that each client flag will be correctly converted to server flag. */ + @Test public void testEncodingDecodingFullness() { for (GridClientCacheFlag f : GridClientCacheFlag.values()) { int bits = GridClientCacheFlag.encodeCacheFlags(EnumSet.of(f)); @@ -48,6 +50,7 @@ public void testEncodingDecodingFullness() { /** * Tests that groups of client flags can be correctly converted to corresponding server flag groups. 
*/ + @Test public void testGroupEncodingDecoding() { // All. doTestGroup(GridClientCacheFlag.values()); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientComputeImplSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientComputeImplSelfTest.java index 7777f334f56c9..7fd50574883c6 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientComputeImplSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientComputeImplSelfTest.java @@ -24,6 +24,9 @@ import org.apache.ignite.internal.client.GridClientNode; import org.apache.ignite.internal.client.GridClientPredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.assertThrows; @@ -31,6 +34,7 @@ * Simple unit test for GridClientComputeImpl which checks method parameters. * It tests only those methods that can produce assertion underneath upon incorrect arguments. */ +@RunWith(JUnit4.class) public class ClientComputeImplSelfTest extends GridCommonAbstractTest { /** Mocked client compute. */ private GridClientCompute compute = allocateInstance0(GridClientComputeImpl.class); @@ -38,6 +42,7 @@ public class ClientComputeImplSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testProjection_byGridClientNode() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -49,6 +54,7 @@ public void testProjection_byGridClientNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExecute() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -60,6 +66,7 @@ public void testExecute() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testExecuteAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -71,6 +78,7 @@ public void testExecuteAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityExecute() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -82,6 +90,7 @@ public void testAffinityExecute() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityExecuteAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -93,6 +102,7 @@ public void testAffinityExecuteAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNode() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -104,6 +114,7 @@ public void testNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodesByIds() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -115,6 +126,7 @@ public void testNodesByIds() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodesByFilter() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -126,6 +138,7 @@ public void testNodesByFilter() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRefreshNodeById() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -137,6 +150,7 @@ public void testRefreshNodeById() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRefreshNodeByIdAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -148,6 +162,7 @@ public void testRefreshNodeByIdAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRefreshNodeByIp() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -159,6 +174,7 @@ public void testRefreshNodeByIp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRefreshNodeByIpAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -166,4 +182,4 @@ public void testRefreshNodeByIpAsync() throws Exception { } }, NullPointerException.class, "Ouch! Argument cannot be null: ip"); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientDataImplSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientDataImplSelfTest.java index 1638f31239193..61379d1368134 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientDataImplSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientDataImplSelfTest.java @@ -20,12 +20,16 @@ import java.util.concurrent.Callable; import org.apache.ignite.internal.client.GridClientData; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.assertThrows; /** * Simple unit test for GridClientDataImpl which checks method parameters. */ +@RunWith(JUnit4.class) public class ClientDataImplSelfTest extends GridCommonAbstractTest { /** Mocked client data. 
*/ private GridClientData data = allocateInstance0(GridClientDataImpl.class); @@ -33,6 +37,7 @@ public class ClientDataImplSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -54,6 +59,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -75,6 +81,7 @@ public void testPutAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAll() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -88,6 +95,7 @@ public void testPutAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -101,6 +109,7 @@ public void testPutAllAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -112,6 +121,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -123,6 +133,7 @@ public void testGetAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAll() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -134,6 +145,7 @@ public void testGetAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAllAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -145,6 +157,7 @@ public void testGetAllAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemove() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -158,6 +171,7 @@ public void testRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -169,6 +183,7 @@ public void testRemoveAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAll() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -182,6 +197,7 @@ public void testRemoveAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAllAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -195,6 +211,7 @@ public void testRemoveAllAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplace() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -216,6 +233,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplaceAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -233,6 +251,7 @@ public void testReplaceAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCas() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -246,6 +265,7 @@ public void testCas() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCasAsync() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -257,6 +277,7 @@ public void testCasAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinity() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -264,4 +285,4 @@ public void testAffinity() throws Exception { } }, NullPointerException.class, "Ouch! Argument cannot be null: key"); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientFutureAdapterSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientFutureAdapterSelfTest.java index 67df048d13fe2..0d017cff11b5b 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientFutureAdapterSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientFutureAdapterSelfTest.java @@ -24,14 +24,19 @@ import org.apache.ignite.internal.client.GridClientFuture; import org.apache.ignite.internal.client.GridClientFutureTimeoutException; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid client future implementation self test. */ +@RunWith(JUnit4.class) public class ClientFutureAdapterSelfTest extends GridCommonAbstractTest { /** * Test finished futures. */ + @Test public void testFinished() { GridClientFutureAdapter fut = new GridClientFutureAdapter<>(); @@ -50,6 +55,7 @@ public void testFinished() { * * @throws org.apache.ignite.internal.client.GridClientException On any exception. */ + @Test public void testChains() throws GridClientException { // Synchronous notifications. 
testChains(1, 100); @@ -114,4 +120,4 @@ private void testChains(int chainSize, long waitDelay) throws GridClientExceptio info("Time consumption for " + chainSize + " chained futures: " + (System.currentTimeMillis() - start)); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientPropertiesConfigurationSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientPropertiesConfigurationSelfTest.java index 55aadfd01b51d..964b14e0de01f 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientPropertiesConfigurationSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/impl/ClientPropertiesConfigurationSelfTest.java @@ -39,6 +39,9 @@ import org.apache.ignite.internal.util.IgniteUtils; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.springframework.context.support.FileSystemXmlApplicationContext; import static org.apache.ignite.internal.client.GridClientConfiguration.DFLT_MAX_CONN_IDLE_TIME; @@ -47,6 +50,7 @@ /** * Properties-based configuration self test. */ +@RunWith(JUnit4.class) public class ClientPropertiesConfigurationSelfTest extends GridCommonAbstractTest { /** * Grid client spring configuration. @@ -73,6 +77,7 @@ public class ClientPropertiesConfigurationSelfTest extends GridCommonAbstractTes * * @throws Exception In case of exception. */ + @Test public void testCreation() throws Exception { // Validate default configuration. GridClientConfiguration cfg = new GridClientConfiguration(); @@ -131,6 +136,7 @@ public void testCreation() throws Exception { * * @throws Exception In case of any exception. 
*/ + @Test public void testSpringConfig() throws Exception { GridClientConfiguration cfg = new FileSystemXmlApplicationContext( GRID_CLIENT_SPRING_CONFIG.toString()).getBean(GridClientConfiguration.class); @@ -242,4 +248,4 @@ private void validateConfig(int expDataCfgs, GridClientConfiguration cfg) { assertEquals(null, cfg.getSslContextFactory(), null); assertEquals(DFLT_TOP_REFRESH_FREQ, cfg.getTopologyRefreshFrequency()); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractConnectivitySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractConnectivitySelfTest.java index 8207ccfb6f15c..327c56f95a4be 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractConnectivitySelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractConnectivitySelfTest.java @@ -38,10 +38,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests the REST client-server connectivity with various configurations. */ +@RunWith(JUnit4.class) public abstract class ClientAbstractConnectivitySelfTest extends GridCommonAbstractTest { /** */ private static final String WILDCARD_IP = "0.0.0.0"; @@ -122,6 +126,7 @@ protected GridClient startClient(String addr, int port) throws GridClientExcepti * * @throws Exception If failed. */ + @Test public void testOneNodeDefaultHostAndPort() throws Exception { startRestNode("grid1", null, null); @@ -136,6 +141,7 @@ public void testOneNodeDefaultHostAndPort() throws Exception { * Simple test of address list filtering. * @throws Exception If failed. 
*/ + @Test public void testResolveReachableOneAddress() throws Exception { InetAddress addr = InetAddress.getByAddress(new byte[] {127, 0, 0, 1} ); @@ -151,6 +157,7 @@ public void testResolveReachableOneAddress() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testOneNodeLoopbackHost() throws Exception { startRestNode("grid1", LOOPBACK_IP, defaultRestPort()); @@ -164,6 +171,7 @@ public void testOneNodeLoopbackHost() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testOneNodeZeroIpv4Address() throws Exception { startRestNode("grid1", WILDCARD_IP, defaultRestPort()); @@ -208,6 +216,7 @@ public void testOneNodeZeroIpv4Address() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testTwoNodesDefaultHostAndPort() throws Exception { startRestNode("grid1", null, null); startRestNode("grid2", null, null); @@ -256,6 +265,7 @@ public void testTwoNodesDefaultHostAndPort() throws Exception { * * @throws Exception If error occurs. 
*/ + @Test public void testRefreshTopologyOnNodeLeft() throws Exception { startRestNode("grid1", null, null); startRestNode("grid2", null, null); @@ -323,4 +333,4 @@ private static class IpV4AddressPredicate implements P1 { return s.matches("\\d+\\.\\d+\\.\\d+\\.\\d+"); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractMultiNodeSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractMultiNodeSelfTest.java index 3481f34cf17aa..5ee06375634de 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractMultiNodeSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractMultiNodeSelfTest.java @@ -68,13 +68,13 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -87,11 +87,8 @@ /** * Tests basic client behavior with multiple nodes. 
*/ -@SuppressWarnings("ThrowableResultOfMethodCallIgnored") +@RunWith(JUnit4.class) public abstract class ClientAbstractMultiNodeSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Partitioned cache name. */ private static final String PARTITIONED_CACHE_NAME = "partitioned"; @@ -178,12 +175,6 @@ public abstract class ClientAbstractMultiNodeSelfTest extends GridCommonAbstract c.setConnectorConfiguration(clientCfg); } - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - TestCommunicationSpi spi = new TestCommunicationSpi(); spi.setLocalPort(GridTestUtils.getNextCommPort(getClass())); @@ -255,6 +246,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String cacheName) throws /** * @throws Exception If failed. */ + @Test public void testEmptyProjections() throws Exception { final GridClientCompute dflt = client.compute(); @@ -290,6 +282,7 @@ public void testEmptyProjections() throws Exception { /** * @throws Exception If failed. */ + @Test public void testProjectionRun() throws Exception { GridClientCompute dflt = client.compute(); @@ -319,6 +312,7 @@ public void testProjectionRun() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTopologyListener() throws Exception { final Collection added = new ArrayList<>(1); final Collection rmvd = new ArrayList<>(1); @@ -431,7 +425,6 @@ private static class TestTask extends ComputeTaskSplitAdapter { for (int i = 0; i < gridSize; i++) { jobs.add(new ComputeJobAdapter() { - @SuppressWarnings("OverlyStrongTypeCast") @Override public Object execute() { try { Thread.sleep(1000); @@ -475,7 +468,6 @@ private static class TestTask extends ComputeTaskSplitAdapter { /** * Communication SPI which checks cache flags. 
*/ - @SuppressWarnings("unchecked") private static class TestCommunicationSpi extends TcpCommunicationSpi { /** {@inheritDoc} */ @Override public void sendMessage(ClusterNode node, Message msg, IgniteInClosure ackC) diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractSelfTest.java index 597121840e702..382c40e25a7a6 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientAbstractSelfTest.java @@ -66,12 +66,12 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_JETTY_PORT; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -83,10 +83,8 @@ * Tests for Java client. 
*/ @SuppressWarnings("deprecation") +@RunWith(JUnit4.class) public abstract class ClientAbstractSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "cache"; @@ -233,12 +231,6 @@ protected Object getTaskArgument() { cfg.setConnectorConfiguration(clientCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME), cacheConfiguration("replicated"), cacheConfiguration("partitioned"), cacheConfiguration(CACHE_NAME)); @@ -346,6 +338,7 @@ protected GridClientConfiguration clientConfiguration() throws GridClientExcepti /** * @throws Exception If failed. */ + @Test public void testConnectable() throws Exception { GridClient client = client(); @@ -359,6 +352,7 @@ public void testConnectable() throws Exception { * * @throws Exception If failed. */ + @Test public void testNoAsyncExceptions() throws Exception { GridClient client = client(); @@ -404,6 +398,7 @@ public void testNoAsyncExceptions() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGracefulShutdown() throws Exception { GridClientCompute compute = client.compute(); @@ -422,6 +417,7 @@ public void testGracefulShutdown() throws Exception { /** * @throws Exception If failed. */ + @Test public void testForceShutdown() throws Exception { GridClientCompute compute = client.compute(); @@ -445,6 +441,7 @@ public void testForceShutdown() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShutdown() throws Exception { GridClient c = client(); @@ -486,6 +483,7 @@ public void testShutdown() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testExecute() throws Exception { String taskName = getTaskName(); Object taskArg = getTaskArgument(); @@ -499,6 +497,7 @@ public void testExecute() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTopology() throws Exception { GridClientCompute compute = client.compute(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientPreferDirectSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientPreferDirectSelfTest.java index b012d3b85fe6f..40f35eb256b88 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientPreferDirectSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/integration/ClientPreferDirectSelfTest.java @@ -36,10 +36,10 @@ import org.apache.ignite.internal.client.balancer.GridClientRandomBalancer; import org.apache.ignite.internal.client.balancer.GridClientRoundRobinBalancer; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.client.integration.ClientAbstractMultiNodeSelfTest.HOST; import static org.apache.ignite.internal.client.integration.ClientAbstractMultiNodeSelfTest.REST_TCP_PORT_BASE; @@ -48,10 +48,8 @@ /** * */ +@RunWith(JUnit4.class) public class ClientPreferDirectSelfTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. 
  */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int NODES_CNT = 6;
 
@@ -64,12 +62,6 @@ public class ClientPreferDirectSelfTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        c.setDiscoverySpi(disco);
-
         c.setLocalHost(HOST);
 
         assert c.getConnectorConfiguration() == null;
@@ -86,6 +78,7 @@ public class ClientPreferDirectSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRandomBalancer() throws Exception {
         GridClientRandomBalancer b = new GridClientRandomBalancer();
 
@@ -97,6 +90,7 @@ public void testRandomBalancer() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRoundRobinBalancer() throws Exception {
         GridClientRoundRobinBalancer b = new GridClientRoundRobinBalancer();
 
@@ -195,4 +189,4 @@ private static class TestTask extends ComputeTaskSplitAdapter {
             return ignite.cluster().localNode().id().toString();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/ClientFailedInitSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/ClientFailedInitSelfTest.java
index 971dcb186b157..cc9fad1c0f30d 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/ClientFailedInitSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/ClientFailedInitSelfTest.java
@@ -38,10 +38,10 @@
 import org.apache.ignite.internal.client.GridServerUnreachableException;
 import org.apache.ignite.internal.client.impl.connection.GridClientConnectionResetException;
 import org.apache.ignite.internal.util.typedef.X;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteSystemProperties.IGNITE_JETTY_PORT;
 import static org.apache.ignite.internal.client.GridClientProtocol.TCP;
@@ -53,6 +53,7 @@
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class ClientFailedInitSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int RECONN_CNT = 3;
@@ -66,9 +67,6 @@ public class ClientFailedInitSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int ROUTER_JETTY_PORT = 8081;
 
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
         GridClientFactory.stopAll();
@@ -91,18 +89,13 @@ public class ClientFailedInitSelfTest extends GridCommonAbstractTest {
 
         cfg.setConnectorConfiguration(clientCfg);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
         return cfg;
     }
 
     /**
      *
      */
+    @Test
     public void testEmptyAddresses() {
         try {
             GridClientFactory.start(new GridClientConfiguration());
@@ -117,6 +110,7 @@ public void testEmptyAddresses() {
     /**
      *
      */
+    @Test
     public void testRoutersAndServersAddressesProvided() {
         try {
             GridClientConfiguration c = new GridClientConfiguration();
@@ -136,6 +130,7 @@ public void testRoutersAndServersAddressesProvided() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTcpClient() throws Exception {
         doTestClient(TCP);
     }
@@ -143,6 +138,7 @@ public void testTcpClient() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTcpRouter() throws Exception {
         doTestRouter(TCP);
     }
@@ -288,4 +284,4 @@ private static class TestTask extends ComputeTaskSplitAdapter {
             return results.get(0).getData();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/RouterFactorySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/RouterFactorySelfTest.java
index 5df424c4cabce..7a292a1ed032c 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/RouterFactorySelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/RouterFactorySelfTest.java
@@ -24,19 +24,18 @@
 import java.util.Iterator;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteSystemProperties.IGNITE_JETTY_PORT;
 
 /**
  * Test routers factory.
 */
+@RunWith(JUnit4.class)
 public class RouterFactorySelfTest extends GridCommonAbstractTest {
-    /** Shared IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int GRID_HTTP_PORT = 11087;
 
@@ -44,7 +43,7 @@ public class RouterFactorySelfTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
 
-        discoSpi.setIpFinder(IP_FINDER);
+        discoSpi.setIpFinder(sharedStaticIpFinder);
 
         IgniteConfiguration cfg = new IgniteConfiguration();
 
@@ -59,6 +58,7 @@ public class RouterFactorySelfTest extends GridCommonAbstractTest {
      *
      * @throws Exception In case of any exception.
      */
+    @Test
     public void testRouterFactory() throws Exception {
         try {
             System.setProperty(IGNITE_JETTY_PORT, String.valueOf(GRID_HTTP_PORT));
@@ -109,4 +109,4 @@ public void testRouterFactory() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpRouterAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpRouterAbstractSelfTest.java
index 628006ebfa2ac..c3ae3d2671aac 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpRouterAbstractSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpRouterAbstractSelfTest.java
@@ -29,10 +29,14 @@
 import org.apache.ignite.internal.client.router.impl.GridTcpRouterImpl;
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.logger.log4j.Log4JLogger;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Abstract base class for http routing tests.
 */
+@RunWith(JUnit4.class)
 public abstract class TcpRouterAbstractSelfTest extends ClientAbstractSelfTest {
     /** Port number to use by router. */
     private static final int ROUTER_PORT = BINARY_PORT + 1;
@@ -117,6 +121,7 @@ public GridTcpRouterConfiguration routerConfiguration() throws IgniteCheckedExce
     /**
      * @throws Exception If failed.
      */
+    @Test
     @Override public void testConnectable() throws Exception {
         GridClient client = client();
 
@@ -124,4 +129,4 @@ public GridTcpRouterConfiguration routerConfiguration() throws IgniteCheckedExce
         assertFalse(F.first(nodes).connectable());
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpSslRouterSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpSslRouterSelfTest.java
index 3b47ae5ca9d87..3e710f08796a5 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpSslRouterSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/router/TcpSslRouterSelfTest.java
@@ -20,12 +20,12 @@
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.internal.client.ssl.GridSslContextFactory;
 import org.apache.ignite.testframework.GridTestUtils;
-import org.apache.ignite.testsuites.IgniteIgnore;
+import org.junit.Ignore;
 
 /**
  *
 */
-@IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-433", forceFailure = true)
+@Ignore(value = "https://issues.apache.org/jira/browse/IGNITE-433")
 public class TcpSslRouterSelfTest extends TcpRouterAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected boolean useSsl() {
@@ -47,4 +47,4 @@ public class TcpSslRouterSelfTest extends TcpRouterAbstractSelfTest {
         return cfg;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/suite/IgniteClientTestSuite.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/suite/IgniteClientTestSuite.java
index 657fda4a80179..ae8ab987608d2 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/suite/IgniteClientTestSuite.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/suite/IgniteClientTestSuite.java
@@ -17,10 +17,12 @@
 
 package org.apache.ignite.internal.client.suite;
 
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.TaskEventSubjectIdSelfTest;
 import org.apache.ignite.internal.client.ClientDefaultCacheSelfTest;
 import org.apache.ignite.internal.client.ClientReconnectionSelfTest;
+import org.apache.ignite.internal.client.ClientSslParametersTest;
 import org.apache.ignite.internal.client.ClientTcpMultiThreadedSelfTest;
 import org.apache.ignite.internal.client.ClientTcpSslAuthenticationSelfTest;
 import org.apache.ignite.internal.client.ClientTcpSslMultiThreadedSelfTest;
@@ -53,6 +55,7 @@
 import org.apache.ignite.internal.processors.rest.ClientMemcachedProtocolSelfTest;
 import org.apache.ignite.internal.processors.rest.JettyRestProcessorAuthenticationWithCredsSelfTest;
 import org.apache.ignite.internal.processors.rest.JettyRestProcessorAuthenticationWithTokenSelfTest;
+import org.apache.ignite.internal.processors.rest.JettyRestProcessorBaselineSelfTest;
 import org.apache.ignite.internal.processors.rest.JettyRestProcessorGetAllAsArrayTest;
 import org.apache.ignite.internal.processors.rest.JettyRestProcessorSignedSelfTest;
 import org.apache.ignite.internal.processors.rest.JettyRestProcessorUnsignedSelfTest;
@@ -79,91 +82,95 @@ public class IgniteClientTestSuite extends TestSuite {
     public static TestSuite suite() {
         TestSuite suite = new IgniteTestSuite("Ignite Clients Test Suite");
 
-        suite.addTestSuite(RouterFactorySelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(RouterFactorySelfTest.class));
 
         // Parser standalone test.
-        suite.addTestSuite(TcpRestParserSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(TcpRestParserSelfTest.class));
 
         // Test memcache protocol with custom test client.
-        suite.addTestSuite(RestMemcacheProtocolSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(RestMemcacheProtocolSelfTest.class));
 
         // Test custom binary protocol with test client.
-        suite.addTestSuite(RestBinaryProtocolSelfTest.class);
-        suite.addTestSuite(TcpRestUnmarshalVulnerabilityTest.class);
+        suite.addTest(new JUnit4TestAdapter(RestBinaryProtocolSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(TcpRestUnmarshalVulnerabilityTest.class));
 
         // Test jetty rest processor
-        suite.addTestSuite(JettyRestProcessorSignedSelfTest.class);
-        suite.addTestSuite(JettyRestProcessorUnsignedSelfTest.class);
-        suite.addTestSuite(JettyRestProcessorAuthenticationWithCredsSelfTest.class);
-        suite.addTestSuite(JettyRestProcessorAuthenticationWithTokenSelfTest.class);
-        suite.addTestSuite(JettyRestProcessorGetAllAsArrayTest.class);
+        suite.addTest(new JUnit4TestAdapter(JettyRestProcessorSignedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(JettyRestProcessorUnsignedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(JettyRestProcessorAuthenticationWithCredsSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(JettyRestProcessorAuthenticationWithTokenSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(JettyRestProcessorGetAllAsArrayTest.class));
+        suite.addTest(new JUnit4TestAdapter(JettyRestProcessorBaselineSelfTest.class));
 
         // Test TCP rest processor with original memcache client.
-        suite.addTestSuite(ClientMemcachedProtocolSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientMemcachedProtocolSelfTest.class));
 
         // Test TCP rest processor with original REDIS client.
-        suite.addTestSuite(RedisProtocolStringSelfTest.class);
-        suite.addTestSuite(RedisProtocolGetAllAsArrayTest.class);
-        suite.addTestSuite(RedisProtocolConnectSelfTest.class);
-        suite.addTestSuite(RedisProtocolServerSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(RedisProtocolStringSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(RedisProtocolGetAllAsArrayTest.class));
+        suite.addTest(new JUnit4TestAdapter(RedisProtocolConnectSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(RedisProtocolServerSelfTest.class));
 
-        suite.addTestSuite(RestProcessorStartSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(RestProcessorStartSelfTest.class));
 
         // Test cache flag conversion.
-        suite.addTestSuite(ClientCacheFlagsCodecTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientCacheFlagsCodecTest.class));
 
         // Test multi-start.
-        suite.addTestSuite(RestProcessorMultiStartSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(RestProcessorMultiStartSelfTest.class));
 
         // Test clients.
-        suite.addTestSuite(ClientDataImplSelfTest.class);
-        suite.addTestSuite(ClientComputeImplSelfTest.class);
-        suite.addTestSuite(ClientTcpSelfTest.class);
-        suite.addTestSuite(ClientTcpDirectSelfTest.class);
-        suite.addTestSuite(ClientTcpSslSelfTest.class);
-        suite.addTestSuite(ClientTcpSslDirectSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientDataImplSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientComputeImplSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpDirectSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpSslSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpSslDirectSelfTest.class));
 
         // Test client with many nodes.
-        suite.addTestSuite(ClientTcpMultiNodeSelfTest.class);
-        suite.addTestSuite(ClientTcpDirectMultiNodeSelfTest.class);
-        suite.addTestSuite(ClientTcpSslMultiNodeSelfTest.class);
-        suite.addTestSuite(ClientTcpSslDirectMultiNodeSelfTest.class);
-        suite.addTestSuite(ClientTcpUnreachableMultiNodeSelfTest.class);
-        suite.addTestSuite(ClientPreferDirectSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientTcpMultiNodeSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpDirectMultiNodeSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpSslMultiNodeSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpSslDirectMultiNodeSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpUnreachableMultiNodeSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientPreferDirectSelfTest.class));
 
         // Test client with many nodes and in multithreaded scenarios
-        suite.addTestSuite(ClientTcpMultiThreadedSelfTest.class);
-        suite.addTestSuite(ClientTcpSslMultiThreadedSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientTcpMultiThreadedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientTcpSslMultiThreadedSelfTest.class));
 
         // Test client authentication.
-        suite.addTestSuite(ClientTcpSslAuthenticationSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientTcpSslAuthenticationSelfTest.class));
 
-        suite.addTestSuite(ClientTcpConnectivitySelfTest.class);
-        suite.addTestSuite(ClientReconnectionSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientTcpConnectivitySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientReconnectionSelfTest.class));
 
         // Rest task command handler test.
-        suite.addTestSuite(TaskCommandHandlerSelfTest.class);
-        suite.addTestSuite(ChangeStateCommandHandlerTest.class);
-        suite.addTestSuite(TaskEventSubjectIdSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(TaskCommandHandlerSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ChangeStateCommandHandlerTest.class));
+        suite.addTest(new JUnit4TestAdapter(TaskEventSubjectIdSelfTest.class));
 
         // Default cache only test.
-        suite.addTestSuite(ClientDefaultCacheSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientDefaultCacheSelfTest.class));
 
-        suite.addTestSuite(ClientFutureAdapterSelfTest.class);
-        suite.addTestSuite(ClientPropertiesConfigurationSelfTest.class);
-        suite.addTestSuite(ClientConsistentHashSelfTest.class);
-        suite.addTestSuite(ClientJavaHasherSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientFutureAdapterSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientPropertiesConfigurationSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientConsistentHashSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientJavaHasherSelfTest.class));
 
-        suite.addTestSuite(ClientByteUtilsTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientByteUtilsTest.class));
 
         // Router tests.
-        suite.addTestSuite(TcpRouterSelfTest.class);
-        suite.addTestSuite(TcpSslRouterSelfTest.class);
-        suite.addTestSuite(TcpRouterMultiNodeSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(TcpRouterSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(TcpSslRouterSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(TcpRouterMultiNodeSelfTest.class));
 
-        suite.addTestSuite(ClientFailedInitSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientFailedInitSelfTest.class));
 
-        suite.addTestSuite(ClientTcpTaskExecutionAfterTopologyRestartSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientTcpTaskExecutionAfterTopologyRestartSelfTest.class));
+
+        // SSL params.
+        suite.addTest(new JUnit4TestAdapter(ClientSslParametersTest.class));
 
         return suite;
     }
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientByteUtilsTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientByteUtilsTest.java
index 72112cb8e6fd9..93aa8cd17b310 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientByteUtilsTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientByteUtilsTest.java
@@ -25,6 +25,9 @@
 import org.apache.ignite.internal.util.GridClientByteUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.internal.util.GridClientByteUtils.bytesToInt;
 import static org.apache.ignite.internal.util.GridClientByteUtils.bytesToLong;
@@ -36,12 +39,14 @@
 /**
  * Test case for client's byte convertion utility.
 */
+@RunWith(JUnit4.class)
 public class ClientByteUtilsTest extends GridCommonAbstractTest {
     /**
      * Test UUID conversions from string to binary and back.
      *
      * @throws Exception On any exception.
      */
+    @Test
     public void testUuidConvertions() throws Exception {
         Map map = new LinkedHashMap<>();
@@ -92,6 +97,7 @@ public void testUuidConvertions() throws Exception {
         }
     }
 
+    @Test
     public void testShortToBytes() throws Exception {
         Map map = new HashMap<>();
@@ -116,6 +122,7 @@ public void testShortToBytes() throws Exception {
         }
     }
 
+    @Test
     public void testIntToBytes() throws Exception {
         Map map = new HashMap<>();
@@ -140,6 +147,7 @@ public void testIntToBytes() throws Exception {
         }
     }
 
+    @Test
     public void testLongToBytes() throws Exception {
         Map map = new LinkedHashMap<>();
@@ -177,4 +185,4 @@ private byte[] asByteArray(String text) {
         return b;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientConsistentHashSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientConsistentHashSelfTest.java
index fa9d4b49a4d89..b0c7217c72512 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientConsistentHashSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientConsistentHashSelfTest.java
@@ -31,10 +31,14 @@
 import java.util.TreeSet;
 import java.util.UUID;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for consistent hash management class.
 */
+@RunWith(JUnit4.class)
 public class ClientConsistentHashSelfTest extends GridCommonAbstractTest {
     /** Replicas count. */
     private static final int REPLICAS = 512;
@@ -44,6 +48,7 @@ public class ClientConsistentHashSelfTest extends GridCommonAbstractTest {
      *
      * @throws Exception In case of any exception.
      */
+    @Test
     public void testCollisions() throws Exception {
         Map> map = new HashMap<>();
@@ -95,6 +100,7 @@ public void testCollisions() throws Exception {
      *
      * @throws Exception In case of any exception.
      */
+    @Test
     public void testTreeSetRestrictions() throws Exception {
         // Constructs hash without explicit node's comparator.
         GridClientConsistentHash hash = new GridClientConsistentHash<>();
@@ -129,6 +135,7 @@ public void testTreeSetRestrictions() throws Exception {
      * Validate generated hashes.
      *
      * Note! This test should be ported into all supported platforms.
      */
+    @Test
     public void testHashGeneraton() {
         // Validate strings.
         checkHash("", -1484017934);
@@ -167,7 +174,7 @@ public void testHashGeneraton() {
     /**
      * Test mapping to nodes.
      */
-    @SuppressWarnings("UnaryPlus")
+    @Test
     public void testMappingToNodes() {
         String n1 = "node #1";
         String n2 = "node #2";
@@ -289,4 +296,4 @@ private void checkHash(Object o, int code) {
         assertEquals("Check affinity for object: " + o, code, i);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientJavaHasherSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientJavaHasherSelfTest.java
index 765c58d88eeca..432cd895ae43d 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientJavaHasherSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/client/util/ClientJavaHasherSelfTest.java
@@ -21,14 +21,19 @@
 import java.util.Map;
 import java.util.UUID;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for Java hash codes calculations - SHOULD BE PORTED to other languages.
 */
+@RunWith(JUnit4.class)
 public class ClientJavaHasherSelfTest extends GridCommonAbstractTest {
     /**
      * Validate known Java hash codes.
      */
+    @Test
     public void testPredefined() {
         Map map = new LinkedHashMap<>();
@@ -82,4 +87,4 @@ public void testPredefined() {
         fail("Java hash codes validation fails.");
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcAbstractDmlStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcAbstractDmlStatementSelfTest.java
index 1cc5740c2f761..60e680d5aff41 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcAbstractDmlStatementSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcAbstractDmlStatementSelfTest.java
@@ -131,7 +131,7 @@ protected String getCfgUrl() {
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
-        ((IgniteEx)ignite(0)).context().cache().dynamicDestroyCache(DEFAULT_CACHE_NAME, true, true, false);
+        ((IgniteEx)ignite(0)).context().cache().dynamicDestroyCache(DEFAULT_CACHE_NAME, true, true, false, null);
 
         if (conn != null) {
             conn.close();
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBlobTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBlobTest.java
index 9e0e0d2f6aab1..a8267aa238512 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBlobTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBlobTest.java
@@ -21,15 +21,20 @@
 import java.io.InputStream;
 import java.sql.SQLException;
 import java.util.Arrays;
-import junit.framework.TestCase;
+import org.junit.Test;
+
+import static junit.framework.Assert.assertEquals;
+import static junit.framework.Assert.assertTrue;
+import static junit.framework.Assert.fail;
 
 /**
  *
 */
-public class JdbcBlobTest extends TestCase {
+public class JdbcBlobTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLength() throws Exception {
         JdbcBlob blob = new JdbcBlob(new byte[16]);
@@ -50,6 +55,7 @@ public void testLength() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetBytes() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
@@ -124,6 +130,7 @@ public void testGetBytes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetBinaryStream() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
@@ -150,6 +157,7 @@ public void testGetBinaryStream() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetBinaryStreamWithParams() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
@@ -222,6 +230,7 @@ public void testGetBinaryStreamWithParams() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPositionBytePattern() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
@@ -256,6 +265,7 @@ public void testPositionBytePattern() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPositionBlobPattern() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
@@ -290,6 +300,7 @@ public void testPositionBlobPattern() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSetBytes() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7};
@@ -341,6 +352,7 @@ public void testSetBytes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSetBytesWithOffsetAndLength() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7};
@@ -419,6 +431,7 @@ public void testSetBytesWithOffsetAndLength() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTruncate() throws Exception {
         byte[] arr = new byte[] {0, 1, 2, 3, 4, 5, 6, 7};
@@ -482,4 +495,4 @@ private static byte[] readBytes(InputStream is) throws IOException {
         return res;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBulkLoadSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBulkLoadSelfTest.java
index 753a98c9cc775..df12cbec784f8 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBulkLoadSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcBulkLoadSelfTest.java
@@ -37,12 +37,16 @@
 import java.util.Collections;
 import java.util.Properties;
 import java.util.concurrent.Callable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
 
 /** COPY command test for the regular JDBC driver. */
+@RunWith(JUnit4.class)
 public class JdbcBulkLoadSelfTest extends GridCommonAbstractTest {
     /** JDBC URL. */
     private static final String BASE_URL = CFG_URL_PREFIX +
@@ -124,6 +128,7 @@ private Connection createConnection() throws Exception {
      *
      * @throws Exception if failed.
      */
+    @Test
     public void testBulkLoadThrows() throws Exception {
         GridTestUtils.assertThrows(null, new Callable() {
             @Override public Object call() throws Exception {
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcComplexQuerySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcComplexQuerySelfTest.java
index 8b1390e2c58d8..b006cb61c042f 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcComplexQuerySelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcComplexQuerySelfTest.java
@@ -28,11 +28,11 @@
 import org.apache.ignite.configuration.ConnectorConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.util.typedef.F;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.NotNull;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -42,10 +42,8 @@
 /**
  * Tests for complex queries (joins, etc.).
 */
+@RunWith(JUnit4.class)
 public class JdbcComplexQuerySelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** JDBC URL. */
     private static final String BASE_URL = CFG_URL_PREFIX + "cache=pers@modules/clients/src/test/config/jdbc-config.xml";
 
@@ -60,12 +58,6 @@ public class JdbcComplexQuerySelfTest extends GridCommonAbstractTest {
             cacheConfiguration("pers", AffinityKey.class, Person.class),
             cacheConfiguration("org", String.class, Organization.class));
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
         cfg.setConnectorConfiguration(new ConnectorConfiguration());
 
         return cfg;
@@ -131,6 +123,7 @@ protected CacheConfiguration cacheConfiguration(@NotNull String name, Class c
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testJoin() throws Exception {
         ResultSet rs = stmt.executeQuery(
             "select p.id, p.name, o.name as orgName from \"pers\".Person p, \"org\".Organization o where p.orgId = o.id");
@@ -166,6 +159,7 @@ else if (id == 3) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testJoinWithoutAlias() throws Exception {
         ResultSet rs = stmt.executeQuery(
             "select p.id, p.name, o.name from \"pers\".Person p, \"org\".Organization o where p.orgId = o.id");
@@ -204,6 +198,7 @@ else if (id == 3) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIn() throws Exception {
         ResultSet rs = stmt.executeQuery("select name from \"pers\".Person where age in (25, 35)");
@@ -224,6 +219,7 @@ public void testIn() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBetween() throws Exception {
         ResultSet rs = stmt.executeQuery("select name from \"pers\".Person where age between 24 and 36");
@@ -244,6 +240,7 @@ public void testBetween() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCalculatedValue() throws Exception {
         ResultSet rs = stmt.executeQuery("select age * 2 from \"pers\".Person");
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionReopenTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionReopenTest.java
index 531b4e52352f6..554626b059467 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionReopenTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionReopenTest.java
@@ -22,12 +22,16 @@
 import org.apache.ignite.Ignite;
 import org.apache.ignite.internal.IgnitionEx;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX;
 
 /**
  * Connection test.
 */
+@RunWith(JUnit4.class)
 public class JdbcConnectionReopenTest extends GridCommonAbstractTest {
     /**
      * @return Config URL to use in test.
@@ -39,6 +43,7 @@ private String configURL() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReopenSameInstanceName() throws Exception {
         String url = CFG_URL_PREFIX + configURL();
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionSelfTest.java
index d560d74a2c898..999ca2f38f0f0 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcConnectionSelfTest.java
@@ -25,22 +25,20 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteEx;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.NotNull;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX;
 
 /**
  * Connection test.
 */
+@RunWith(JUnit4.class)
 public class JdbcConnectionSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Custom cache name. */
     private static final String CUSTOM_CACHE_NAME = "custom-cache";
 
@@ -66,12 +64,6 @@ protected String configURL() {
         cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME), cacheConfiguration(CUSTOM_CACHE_NAME));
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
         cfg.setDaemon(daemon);
 
         cfg.setClientMode(client);
@@ -100,6 +92,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDefaults() throws Exception {
         String url = CFG_URL_PREFIX + configURL();
@@ -117,6 +110,7 @@ public void testDefaults() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNodeId() throws Exception {
         String url = CFG_URL_PREFIX + "nodeId=" + grid(0).localNode().id() + '@' + configURL();
@@ -134,6 +128,7 @@ public void testNodeId() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testWrongNodeId() throws Exception {
         UUID wrongId = UUID.randomUUID();
@@ -156,6 +151,7 @@ public void testWrongNodeId() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClientNodeId() throws Exception {
         client = true;
@@ -182,6 +178,7 @@ public void testClientNodeId() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDaemonNodeId() throws Exception {
         daemon = true;
@@ -208,6 +205,7 @@ public void testDaemonNodeId() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCustomCache() throws Exception {
         String url = CFG_URL_PREFIX + "cache=" + CUSTOM_CACHE_NAME + '@' + configURL();
@@ -219,6 +217,7 @@ public void testCustomCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testWrongCache() throws Exception {
         final String url = CFG_URL_PREFIX + "cache=wrongCacheName@" + configURL();
@@ -239,6 +238,7 @@ public void testWrongCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClose() throws Exception {
         String url = CFG_URL_PREFIX + configURL();
@@ -268,6 +268,7 @@ public void testClose() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxAllowedCommit() throws Exception {
         String url = CFG_URL_PREFIX + "transactionsAllowed=true@" + configURL();
@@ -285,6 +286,7 @@ public void testTxAllowedCommit() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxAllowedRollback() throws Exception {
         String url = CFG_URL_PREFIX + "transactionsAllowed=true@" + configURL();
@@ -302,6 +304,7 @@ public void testTxAllowedRollback() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSqlHints() throws Exception {
         try (final Connection conn = DriverManager.getConnection(CFG_URL_PREFIX + "enforceJoinOrder=true@" + configURL())) {
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDeleteStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDeleteStatementSelfTest.java
index 3eec5a025d947..5652fb1177368 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDeleteStatementSelfTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDeleteStatementSelfTest.java
@@ -21,14 +21,19 @@
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashSet;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class JdbcDeleteStatementSelfTest extends JdbcAbstractUpdateStatementSelfTest {
     /**
      *
      */
+    @Test
     public void testExecute() throws SQLException {
         conn.createStatement().execute("delete from Person where cast(substring(_key, 2, 1) as int) % 2 = 0");
@@ -39,6 +44,7 @@ public void testExecute() throws SQLException {
     /**
      *
      */
+    @Test
     public void testExecuteUpdate() throws SQLException {
         int res = conn.createStatement().executeUpdate("delete from Person where cast(substring(_key, 2, 1) as int) % 2 = 0");
@@ -51,6 +57,7 @@ public void testExecuteUpdate() throws SQLException {
     /**
      *
      */
+    @Test
     public void testBatch() throws SQLException {
         PreparedStatement ps = conn.prepareStatement("delete from Person where firstName = ?");
diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDistributedJoinsQueryTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDistributedJoinsQueryTest.java
index 2a58e0280676b..7972683129ed3 100644
--- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDistributedJoinsQueryTest.java
+++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDistributedJoinsQueryTest.java
@@ -27,10 +27,10 @@
 import org.apache.ignite.configuration.ConnectorConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.util.typedef.F;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -40,10 +40,8 @@
 /**
  * Tests for complex queries with distributed joins enabled (joins, etc.).
 */
+@RunWith(JUnit4.class)
 public class JdbcDistributedJoinsQueryTest extends GridCommonAbstractTest {
-    /** IP finder.
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** JDBC URL. */ private static final String BASE_URL = CFG_URL_PREFIX + "cache=default:distributedJoins=true@modules/clients/src/test/config/jdbc-config.xml"; @@ -64,12 +62,6 @@ public class JdbcDistributedJoinsQueryTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -116,6 +108,7 @@ public class JdbcDistributedJoinsQueryTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testJoin() throws Exception { ResultSet rs = stmt.executeQuery( "select p.id, p.name, o.name as orgName from Person p, Organization o where p.orgId = o.id"); @@ -151,6 +144,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testJoinWithoutAlias() throws Exception { ResultSet rs = stmt.executeQuery( "select p.id, p.name, o.name from Person p, Organization o where p.orgId = o.id"); @@ -189,6 +183,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testIn() throws Exception { ResultSet rs = stmt.executeQuery("select name from Person where age in (25, 35)"); @@ -209,6 +204,7 @@ public void testIn() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBetween() throws Exception { ResultSet rs = stmt.executeQuery("select name from Person where age between 24 and 36"); @@ -229,6 +225,7 @@ public void testBetween() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCalculatedValue() throws Exception { ResultSet rs = stmt.executeQuery("select age * 2 from Person"); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDynamicIndexAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDynamicIndexAbstractSelfTest.java index 9485d0d54212c..0ffc073d5cd55 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDynamicIndexAbstractSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcDynamicIndexAbstractSelfTest.java @@ -31,10 +31,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that checks indexes handling with JDBC. */ +@RunWith(JUnit4.class) public abstract class JdbcDynamicIndexAbstractSelfTest extends JdbcAbstractDmlStatementSelfTest { /** */ private final static String CREATE_INDEX = "create index idx on Person (id desc)"; @@ -136,6 +140,7 @@ private Object getSingleValue(ResultSet rs) throws SQLException { /** * Test that after index creation index is used by queries. */ + @Test public void testCreateIndex() throws SQLException { assertSize(3); @@ -165,6 +170,7 @@ public void testCreateIndex() throws SQLException { /** * Test that creating an index with duplicate name yields an error. */ + @Test public void testCreateIndexWithDuplicateName() throws SQLException { jdbcRun(CREATE_INDEX); @@ -179,6 +185,7 @@ public void testCreateIndexWithDuplicateName() throws SQLException { /** * Test that creating an index with duplicate name does not yield an error with {@code IF NOT EXISTS}. 
*/ + @Test public void testCreateIndexIfNotExists() throws SQLException { jdbcRun(CREATE_INDEX); @@ -189,6 +196,7 @@ public void testCreateIndexIfNotExists() throws SQLException { /** * Test that after index drop there are no attempts to use it, and data state remains intact. */ + @Test public void testDropIndex() throws SQLException { assertSize(3); @@ -218,6 +226,7 @@ public void testDropIndex() throws SQLException { /** * Test that dropping a non-existent index yields an error. */ + @Test public void testDropMissingIndex() { assertSqlException(new RunnableX() { /** {@inheritDoc} */ @@ -230,6 +239,7 @@ public void testDropMissingIndex() { /** * Test that dropping a non-existent index does not yield an error with {@code IF EXISTS}. */ + @Test public void testDropMissingIndexIfExists() throws SQLException { // Despite index missing, this does not yield an error. jdbcRun(DROP_INDEX_IF_EXISTS); @@ -238,6 +248,7 @@ public void testDropMissingIndexIfExists() throws SQLException { /** * Test that changes in cache affect index, and vice versa. 
*/ + @Test public void testIndexState() throws SQLException { IgniteCache cache = cache(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcEmptyCacheSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcEmptyCacheSelfTest.java index 25b97ea1c17c6..5fc9f751e9322 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcEmptyCacheSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcEmptyCacheSelfTest.java @@ -18,12 +18,12 @@ package org.apache.ignite.internal.jdbc2; import org.apache.ignite.configuration.*; -import org.apache.ignite.spi.discovery.tcp.*; -import org.apache.ignite.spi.discovery.tcp.ipfinder.*; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.*; import org.apache.ignite.testframework.junits.common.*; import java.sql.*; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.*; import static org.apache.ignite.cache.CacheMode.*; @@ -32,10 +32,8 @@ /** * Tests for empty cache. */ +@RunWith(JUnit4.class) public class JdbcEmptyCacheSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -62,12 +60,6 @@ public class JdbcEmptyCacheSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -99,6 +91,7 @@ public class JdbcEmptyCacheSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testSelectNumber() throws Exception { ResultSet rs = stmt.executeQuery("select 1"); @@ -117,6 +110,7 @@ public void testSelectNumber() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSelectString() throws Exception { ResultSet rs = stmt.executeQuery("select 'str'"); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcErrorsSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcErrorsSelfTest.java index 63f0c84a67f76..a701b91514670 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcErrorsSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcErrorsSelfTest.java @@ -22,10 +22,14 @@ import java.sql.SQLException; import org.apache.ignite.jdbc.JdbcErrorsAbstractSelfTest; import org.apache.ignite.lang.IgniteCallable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test SQLSTATE codes propagation with thick client driver. */ +@RunWith(JUnit4.class) public class JdbcErrorsSelfTest extends JdbcErrorsAbstractSelfTest { /** Path to JDBC configuration for node that is to start. */ private static final String CFG_PATH = "modules/clients/src/test/config/jdbc-config.xml"; @@ -40,6 +44,7 @@ public class JdbcErrorsSelfTest extends JdbcErrorsAbstractSelfTest { * due to communication problems (not due to clear misconfiguration). * @throws SQLException if failed. */ + @Test public void testConnectionError() throws SQLException { final String path = "jdbc:ignite:cfg://cache=test@/unknown/path"; @@ -56,6 +61,7 @@ public void testConnectionError() throws SQLException { * Test error code for the case when connection string is a mess. * @throws SQLException if failed.
*/ + @Test public void testInvalidConnectionStringFormat() throws SQLException { final String cfgPath = "cache="; diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcInsertStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcInsertStatementSelfTest.java index 44a45b7323b4a..70c1d2df1e52a 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcInsertStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcInsertStatementSelfTest.java @@ -29,10 +29,14 @@ import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Statement test. */ +@RunWith(JUnit4.class) public class JdbcInsertStatementSelfTest extends JdbcAbstractDmlStatementSelfTest { /** SQL query. */ private static final String SQL = "insert into Person(_key, id, firstName, lastName, age, data) values " + @@ -136,6 +140,7 @@ public class JdbcInsertStatementSelfTest extends JdbcAbstractDmlStatementSelfTes /** * @throws SQLException If failed. */ + @Test public void testExecuteUpdate() throws SQLException { int res = stmt.executeUpdate(SQL); @@ -145,6 +150,7 @@ public void testExecuteUpdate() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testExecute() throws SQLException { boolean res = stmt.execute(SQL); @@ -154,6 +160,7 @@ public void testExecute() throws SQLException { /** * */ + @Test public void testDuplicateKeys() { jcache(0).put("p2", new Person(2, "Joe", "Black", 35)); @@ -177,6 +184,7 @@ public void testDuplicateKeys() { /** * @throws SQLException if failed. 
*/ + @Test public void testBatch() throws SQLException { formBatch(1, 2); formBatch(3, 4); @@ -189,6 +197,7 @@ public void testBatch() throws SQLException { /** * @throws SQLException if failed. */ + @Test public void testSingleItemBatch() throws SQLException { formBatch(1, 2); @@ -200,6 +209,7 @@ public void testSingleItemBatch() throws SQLException { /** * @throws SQLException if failed. */ + @Test public void testSingleItemBatchError() throws SQLException { formBatch(1, 2); @@ -223,6 +233,7 @@ public void testSingleItemBatchError() throws SQLException { /** * @throws SQLException if failed. */ + @Test public void testErrorAmidstBatch() throws SQLException { formBatch(1, 2); formBatch(3, 1); // Duplicate key @@ -248,6 +259,7 @@ public void testErrorAmidstBatch() throws SQLException { /** * @throws Exception If failed. */ + @Test public void testClearBatch() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws SQLException { diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcLocalCachesSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcLocalCachesSelfTest.java index 46379cba70b81..4174975b9c854 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcLocalCachesSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcLocalCachesSelfTest.java @@ -25,10 +25,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; import static org.apache.ignite.IgniteJdbcDriver.PROP_NODE_ID; @@ -38,10 +38,8 @@ /** * Test JDBC with several local caches. */ +@RunWith(JUnit4.class) public class JdbcLocalCachesSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -65,12 +63,6 @@ public class JdbcLocalCachesSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -98,6 +90,7 @@ public class JdbcLocalCachesSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCache1() throws Exception { Properties cfg = new Properties(); @@ -129,6 +122,7 @@ public void testCache1() throws Exception { * * @throws Exception If failed. */ + @Test public void testCountAll() throws Exception { Properties cfg = new Properties(); @@ -154,6 +148,7 @@ public void testCountAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCache2() throws Exception { Properties cfg = new Properties(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMergeStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMergeStatementSelfTest.java index 489bacd8ea906..b9fc04ec24426 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMergeStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMergeStatementSelfTest.java @@ -23,10 +23,14 @@ import java.sql.Statement; import java.util.Arrays; import org.apache.ignite.cache.CachePeekMode; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * MERGE statement test. */ +@RunWith(JUnit4.class) public class JdbcMergeStatementSelfTest extends JdbcAbstractDmlStatementSelfTest { /** SQL query. */ private static final String SQL = "merge into Person(_key, id, firstName, lastName, age, data) values " + @@ -130,6 +134,7 @@ public class JdbcMergeStatementSelfTest extends JdbcAbstractDmlStatementSelfTest /** * @throws SQLException If failed. */ + @Test public void testExecuteUpdate() throws SQLException { int res = stmt.executeUpdate(SQL); @@ -139,6 +144,7 @@ public void testExecuteUpdate() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testExecute() throws SQLException { boolean res = stmt.execute(SQL); @@ -148,6 +154,7 @@ public void testExecute() throws SQLException { /** * @throws SQLException if failed. 
*/ + @Test public void testBatch() throws SQLException { prepStmt.setString(1, "p1"); prepStmt.setInt(2, 1); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMetadataSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMetadataSelfTest.java index c3d0824ff1230..fc4265f6f0081 100755 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMetadataSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcMetadataSelfTest.java @@ -44,11 +44,11 @@ import org.apache.ignite.internal.IgniteVersionUtils; import org.apache.ignite.internal.processors.query.QueryEntityEx; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Types.INTEGER; import static java.sql.Types.VARCHAR; @@ -60,10 +60,8 @@ /** * Metadata tests. */ +@RunWith(JUnit4.class) public class JdbcMetadataSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** JDBC URL. 
*/ private static final String BASE_URL = CFG_URL_PREFIX + "cache=pers@modules/clients/src/test/config/jdbc-config.xml"; @@ -91,12 +89,6 @@ public class JdbcMetadataSelfTest extends GridCommonAbstractTest { cacheConfiguration("org").setQueryEntities(Arrays.asList( new QueryEntity(AffinityKey.class, Organization.class)))); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -137,6 +129,7 @@ protected CacheConfiguration cacheConfiguration(@NotNull String name) { /** * @throws Exception If failed. */ + @Test public void testResultSetMetaData() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL)) { Statement stmt = conn.createStatement(); @@ -171,6 +164,7 @@ public void testResultSetMetaData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetTables() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -207,6 +201,7 @@ public void testGetTables() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetColumns() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -295,6 +290,7 @@ public void testGetColumns() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMetadataResultSetClose() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL); ResultSet tbls = conn.getMetaData().getTables(null, null, "%", null)) { @@ -313,6 +309,7 @@ public void testMetadataResultSetClose() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIndexMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL); ResultSet rs = conn.getMetaData().getIndexInfo(null, "pers", "PERSON", false, false)) { @@ -351,6 +348,7 @@ else if ("AGE".equals(field)) /** * @throws Exception If failed. */ + @Test public void testPrimaryKeyMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL); ResultSet rs = conn.getMetaData().getPrimaryKeys(null, "pers", "PERSON")) { @@ -370,6 +368,7 @@ public void testPrimaryKeyMetadata() throws Exception { /** * @throws Exception If failed. */ + @Test public void testParametersMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL)) { conn.setSchema("pers"); @@ -394,6 +393,7 @@ public void testParametersMetadata() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSchemasMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL)) { ResultSet rs = conn.getMetaData().getSchemas(); @@ -415,6 +415,7 @@ public void testSchemasMetadata() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testVersions() throws Exception { try (Connection conn = DriverManager.getConnection(BASE_URL)) { assertEquals("Apache Ignite", conn.getMetaData().getDatabaseProductName()); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoCacheStreamingSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoCacheStreamingSelfTest.java index e32e07036bb0a..e3bdfec4dded7 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoCacheStreamingSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoCacheStreamingSelfTest.java @@ -31,6 +31,9 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -39,6 +42,7 @@ /** * Data streaming test for thick driver and no explicit caches. */ +@RunWith(JUnit4.class) public class JdbcNoCacheStreamingSelfTest extends GridCommonAbstractTest { /** JDBC URL. */ private static final String BASE_URL = CFG_URL_PREFIX + @@ -123,6 +127,7 @@ protected Connection createConnection(boolean allowOverwrite) throws Exception { /** * @throws Exception if failed. */ + @Test public void testStreamedInsert() throws Exception { for (int i = 10; i <= 100; i += 10) ignite(0).cache(DEFAULT_CACHE_NAME).put(i, i * 100); @@ -152,6 +157,7 @@ public void testStreamedInsert() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testStreamedInsertWithOverwritesAllowed() throws Exception { for (int i = 10; i <= 100; i += 10) ignite(0).cache(DEFAULT_CACHE_NAME).put(i, i * 100); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoDefaultCacheTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoDefaultCacheTest.java index 5c549a82712b6..34207eb572f10 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoDefaultCacheTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcNoDefaultCacheTest.java @@ -26,22 +26,20 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; /** * */ +@RunWith(JUnit4.class) public class JdbcNoDefaultCacheTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** First cache name. 
*/ private static final String CACHE1_NAME = "cache1"; @@ -60,12 +58,6 @@ public class JdbcNoDefaultCacheTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheConfiguration(CACHE1_NAME), cacheConfiguration(CACHE2_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -102,6 +94,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep /** * @throws Exception If failed. */ + @Test public void testDefaults() throws Exception { String url = CFG_URL_PREFIX + CFG_URL; @@ -119,6 +112,7 @@ public void testDefaults() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoCacheNameQuery() throws Exception { try ( Connection conn = DriverManager.getConnection(CFG_URL_PREFIX + CFG_URL); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcPreparedStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcPreparedStatementSelfTest.java index 0a48961cfbf16..713d66ba0152a 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcPreparedStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcPreparedStatementSelfTest.java @@ -33,10 +33,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Types.BIGINT; import static java.sql.Types.BINARY; @@ -59,10 +59,8 @@ /** * Prepared 
statement test. */ +@RunWith(JUnit4.class) public class JdbcPreparedStatementSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** JDBC URL. */ private static final String BASE_URL = CFG_URL_PREFIX + "cache=default@modules/clients/src/test/config/jdbc-config.xml"; @@ -87,12 +85,6 @@ public class JdbcPreparedStatementSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -154,6 +146,7 @@ public class JdbcPreparedStatementSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testRepeatableUsage() throws Exception { stmt = conn.prepareStatement("select * from TestObject where id = ?"); @@ -189,6 +182,7 @@ public void testRepeatableUsage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { stmt = conn.prepareStatement("select * from TestObject where boolVal is not distinct from ?"); @@ -226,6 +220,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { stmt = conn.prepareStatement("select * from TestObject where byteVal is not distinct from ?"); @@ -263,6 +258,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { stmt = conn.prepareStatement("select * from TestObject where shortVal is not distinct from ?"); @@ -300,6 +296,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInteger() throws Exception { stmt = conn.prepareStatement("select * from TestObject where intVal is not distinct from ?"); @@ -337,6 +334,7 @@ public void testInteger() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { stmt = conn.prepareStatement("select * from TestObject where longVal is not distinct from ?"); @@ -374,6 +372,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { stmt = conn.prepareStatement("select * from TestObject where floatVal is not distinct from ?"); @@ -411,6 +410,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { stmt = conn.prepareStatement("select * from TestObject where doubleVal is not distinct from ?"); @@ -448,6 +448,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimal() throws Exception { stmt = conn.prepareStatement("select * from TestObject where bigVal is not distinct from ?"); @@ -485,6 +486,7 @@ public void testBigDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testString() throws Exception { stmt = conn.prepareStatement("select * from TestObject where strVal is not distinct from ?"); @@ -522,6 +524,7 @@ public void testString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArray() throws Exception { stmt = conn.prepareStatement("select * from TestObject where arrVal is not distinct from ?"); @@ -559,6 +562,7 @@ public void testArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlob() throws Exception { stmt = conn.prepareStatement("select * from TestObject where blobVal is not distinct from ?"); @@ -600,6 +604,7 @@ public void testBlob() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDate() throws Exception { stmt = conn.prepareStatement("select * from TestObject where dateVal is not distinct from ?"); @@ -637,6 +642,7 @@ public void testDate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTime() throws Exception { stmt = conn.prepareStatement("select * from TestObject where timeVal is not distinct from ?"); @@ -674,6 +680,7 @@ public void testTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { stmt = conn.prepareStatement("select * from TestObject where tsVal is not distinct from ?"); @@ -711,6 +718,7 @@ public void testTimestamp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUrl() throws Exception { stmt = conn.prepareStatement("select * from TestObject where urlVal is not distinct from ?"); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcResultSetSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcResultSetSelfTest.java index bd73bcd8d274d..b51ca530bb04e 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcResultSetSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcResultSetSelfTest.java @@ -40,11 +40,11 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -55,10 +55,8 @@ * Result set test. */ @SuppressWarnings("FloatingPointEquality") +@RunWith(JUnit4.class) public class JdbcResultSetSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** JDBC URL. */ private static final String BASE_URL = CFG_URL_PREFIX + "cache=default@modules/clients/src/test/config/jdbc-config.xml"; @@ -87,12 +85,6 @@ public class JdbcResultSetSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -164,6 +156,7 @@ private TestObject createObjectWithData(int id) throws MalformedURLException { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -244,6 +237,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean2() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -264,6 +258,7 @@ public void testBoolean2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean3() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -284,6 +279,7 @@ public void testBoolean3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean4() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -304,6 +300,7 @@ public void testBoolean4() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -344,6 +341,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testShort() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -384,6 +382,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInteger() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -424,6 +423,7 @@ public void testInteger() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -464,6 +464,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -504,6 +505,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -544,6 +546,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimal() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -584,6 +587,7 @@ public void testBigDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimalScale() throws Exception { assert "0.12".equals(convertStringToBigDecimalViaJdbc("0.1234", 2).toString()); assert "1.001".equals(convertStringToBigDecimalViaJdbc("1.0005", 3).toString()); @@ -608,6 +612,7 @@ private BigDecimal convertStringToBigDecimalViaJdbc(String strDec, int scale) th /** * @throws Exception If failed. */ + @Test public void testString() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -647,6 +652,7 @@ public void testString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArray() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -668,6 +674,7 @@ public void testArray() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("deprecation") + @Test public void testDate() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -696,6 +703,7 @@ public void testDate() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testTime() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -723,6 +731,7 @@ public void testTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -750,6 +759,7 @@ public void testTimestamp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUrl() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -770,6 +780,7 @@ public void testUrl() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObject() throws Exception { final Ignite ignite = ignite(0); final boolean binaryMarshaller = ignite.configuration().getMarshaller() instanceof BinaryMarshaller; @@ -804,6 +815,7 @@ public void testObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNavigation() throws Exception { ResultSet rs = stmt.executeQuery("select * from TestObject where id > 0"); @@ -849,6 +861,7 @@ public void testNavigation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFetchSize() throws Exception { stmt.setFetchSize(1); @@ -864,6 +877,7 @@ public void testFetchSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNewQueryTaskFetchSize() throws Exception { stmt.setFetchSize(1); @@ -883,6 +897,7 @@ public void testNewQueryTaskFetchSize() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFindColumn() throws Exception { final ResultSet rs = stmt.executeQuery(SQL); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcSpringSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcSpringSelfTest.java index 7a29b13d7ee5f..f5ab95ccc0ef0 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcSpringSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcSpringSelfTest.java @@ -31,12 +31,16 @@ import org.apache.ignite.internal.util.spring.IgniteSpringHelper; import org.apache.ignite.resources.SpringApplicationContextResource; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; /** * Test of cluster and JDBC driver with config that contains cache with POJO store and datasource bean. */ +@RunWith(JUnit4.class) public class JdbcSpringSelfTest extends JdbcConnectionSelfTest { /** Grid count. */ private static final int GRID_CNT = 2; @@ -61,6 +65,7 @@ public class JdbcSpringSelfTest extends JdbcConnectionSelfTest { } /** {@inheritDoc} */ + @Test @Override public void testClientNodeId() throws Exception { IgniteEx client = (IgniteEx) startGridWithSpringCtx(getTestIgniteInstanceName(), true, configURL()); @@ -96,6 +101,7 @@ private static class TestInjectTarget { * * @throws Exception If test failed. 
*/ + @Test public void testSpringBean() throws Exception { String url = CFG_URL_PREFIX + configURL(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementBatchingSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementBatchingSelfTest.java index c9169b98a6bd7..88fe2358cabb3 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementBatchingSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementBatchingSelfTest.java @@ -23,10 +23,14 @@ import java.sql.Statement; import java.util.concurrent.Callable; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Statement batch test. */ +@RunWith(JUnit4.class) public class JdbcStatementBatchingSelfTest extends JdbcAbstractDmlStatementSelfTest { /** {@inheritDoc} */ @@ -39,6 +43,7 @@ public class JdbcStatementBatchingSelfTest extends JdbcAbstractDmlStatementSelfT /** * @throws SQLException If failed. */ + @Test public void testDatabaseMetadataBatchSupportFlag() throws SQLException { DatabaseMetaData meta = conn.getMetaData(); @@ -50,6 +55,7 @@ public void testDatabaseMetadataBatchSupportFlag() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatch() throws SQLException { try (Statement stmt = conn.createStatement()) { stmt.addBatch("INSERT INTO Person(_key, id, firstName, lastName, age, data) " + @@ -78,6 +84,7 @@ public void testBatch() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testErrorAmidstBatch() throws SQLException { BatchUpdateException reason = (BatchUpdateException) GridTestUtils.assertThrows(log, @@ -110,6 +117,7 @@ public void testErrorAmidstBatch() throws SQLException { /** * @throws Exception If failed. 
*/ + @Test public void testClearBatch() throws Exception { try (Statement stmt = conn.createStatement()) { GridTestUtils.assertThrows(log, new Callable() { diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementSelfTest.java index f778fde2f3b5b..4812bd27928be 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStatementSelfTest.java @@ -28,10 +28,10 @@ import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -40,10 +40,8 @@ /** * Statement test. */ +@RunWith(JUnit4.class) public class JdbcStatementSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** JDBC URL. 
*/ private static final String BASE_URL = CFG_URL_PREFIX + "cache=default:multipleStatementsAllowed=true@modules/clients/src/test/config/jdbc-config.xml"; @@ -72,12 +70,6 @@ public class JdbcStatementSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -119,6 +111,7 @@ public class JdbcStatementSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testExecuteQuery() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -151,6 +144,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testExecute() throws Exception { assert stmt.execute(SQL); @@ -185,6 +179,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testMaxRows() throws Exception { stmt.setMaxRows(1); @@ -248,6 +243,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testExecuteQueryMultipleOnlyResultSets() throws Exception { assert conn.getMetaData().supportsMultipleResultSets(); @@ -276,6 +272,7 @@ public void testExecuteQueryMultipleOnlyResultSets() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExecuteQueryMultipleOnlyDml() throws Exception { assert conn.getMetaData().supportsMultipleResultSets(); @@ -312,6 +309,7 @@ public void testExecuteQueryMultipleOnlyDml() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testExecuteQueryMultipleMixed() throws Exception { assert conn.getMetaData().supportsMultipleResultSets(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingSelfTest.java index bc545ac70f1fa..bd390ba413154 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingSelfTest.java @@ -26,7 +26,6 @@ import java.util.Properties; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteJdbcDriver; -import org.apache.ignite.IgniteLogger; import org.apache.ignite.binary.BinaryObject; import org.apache.ignite.binary.BinaryObjectBuilder; import org.apache.ignite.configuration.CacheConfiguration; @@ -39,7 +38,9 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -48,6 +49,7 @@ /** * Data streaming test. */ +@RunWith(JUnit4.class) public class JdbcStreamingSelfTest extends JdbcThinAbstractSelfTest { /** JDBC URL. */ private static final String BASE_URL = CFG_URL_PREFIX + @@ -165,6 +167,7 @@ protected Connection createStreamedConnection(boolean allowOverwrite, long flush /** * @throws Exception if failed. */ + @Test public void testStreamedInsert() throws Exception { for (int i = 10; i <= 100; i += 10) put(i, nameForId(i * 100)); @@ -195,6 +198,7 @@ public void testStreamedInsert() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testStreamedInsertWithoutColumnsList() throws Exception { for (int i = 10; i <= 100; i += 10) put(i, nameForId(i * 100)); @@ -225,6 +229,7 @@ public void testStreamedInsertWithoutColumnsList() throws Exception { /** * @throws Exception if failed. */ + @Test public void testStreamedInsertWithOverwritesAllowed() throws Exception { for (int i = 10; i <= 100; i += 10) put(i, nameForId(i * 100)); @@ -250,6 +255,7 @@ public void testStreamedInsertWithOverwritesAllowed() throws Exception { } /** */ + @Test public void testOnlyInsertsAllowed() { assertStatementForbidden("CREATE TABLE PUBLIC.X (x int primary key, y int)"); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingToPublicCacheTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingToPublicCacheTest.java index 20fd0fbbe392f..e4491ae59d151 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingToPublicCacheTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcStreamingToPublicCacheTest.java @@ -31,6 +31,9 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -39,6 +42,7 @@ /** * Data streaming test. */ +@RunWith(JUnit4.class) public class JdbcStreamingToPublicCacheTest extends GridCommonAbstractTest { /** JDBC URL. */ private static final String BASE_URL = CFG_URL_PREFIX + "cache=%s@modules/clients/src/test/config/jdbc-config.xml"; @@ -105,6 +109,7 @@ private Connection createConnection(String cacheName, boolean streaming) throws /** * @throws Exception if failed. 
*/ + @Test public void testStreamedInsert() throws Exception { // Create table try (Connection conn = createConnection(DEFAULT_CACHE_NAME, false)) { diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcUpdateStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcUpdateStatementSelfTest.java index 07b5587ffedde..3da079fae5ec2 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcUpdateStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/jdbc2/JdbcUpdateStatementSelfTest.java @@ -22,14 +22,19 @@ import java.util.Arrays; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JdbcUpdateStatementSelfTest extends JdbcAbstractUpdateStatementSelfTest { /** * */ + @Test public void testExecute() throws SQLException { conn.createStatement().execute("update Person set firstName = 'Jack' where " + "cast(substring(_key, 2, 1) as int) % 2 = 0"); @@ -41,6 +46,7 @@ public void testExecute() throws SQLException { /** * */ + @Test public void testExecuteUpdate() throws SQLException { conn.createStatement().executeUpdate("update Person set firstName = 'Jack' where " + "cast(substring(_key, 2, 1) as int) % 2 = 0"); @@ -52,6 +58,7 @@ public void testExecuteUpdate() throws SQLException { /** * @throws SQLException If failed. 
*/ + @Test public void testBatch() throws SQLException { PreparedStatement ps = conn.prepareStatement("update Person set lastName = concat(firstName, 'son') " + "where firstName = ?"); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/AbstractRestProcessorSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/AbstractRestProcessorSelfTest.java index e5c658ccfae82..d765718d961fa 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/AbstractRestProcessorSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/AbstractRestProcessorSelfTest.java @@ -21,18 +21,12 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; /** * Abstract class for REST protocols tests. */ public abstract class AbstractRestProcessorSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Local host. 
*/ protected static final String LOC_HOST = "127.0.0.1"; @@ -84,12 +78,6 @@ public abstract class AbstractRestProcessorSelfTest extends GridCommonAbstractTe cfg.setConnectorConfiguration(clientCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - CacheConfiguration ccfg = defaultCacheConfiguration(); ccfg.setStatisticsEnabled(true); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ChangeStateCommandHandlerTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ChangeStateCommandHandlerTest.java index cb882e7393c8f..c73833a412dfa 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ChangeStateCommandHandlerTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ChangeStateCommandHandlerTest.java @@ -26,20 +26,18 @@ import org.apache.ignite.internal.client.GridClientConfiguration; import org.apache.ignite.internal.client.GridClientException; import org.apache.ignite.internal.client.GridClientFactory; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.client.GridClientProtocol.TCP; /** * */ +@RunWith(JUnit4.class) public class ChangeStateCommandHandlerTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ public static final String HOST = "127.0.0.1"; @@ -61,12 +59,6 @@ public class ChangeStateCommandHandlerTest extends GridCommonAbstractTest { cfg.setConnectorConfiguration(clientCfg); - TcpDiscoverySpi disco = 
new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -94,6 +86,7 @@ public class ChangeStateCommandHandlerTest extends GridCommonAbstractTest { /** * */ + @Test public void testActivateDeActivate() throws GridClientException { GridClientClusterState state = client.state(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ClientMemcachedProtocolSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ClientMemcachedProtocolSelfTest.java index f80b5e92a52e5..6f83f50e16b99 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ClientMemcachedProtocolSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/ClientMemcachedProtocolSelfTest.java @@ -29,12 +29,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.junit.Assert.assertArrayEquals; /** * Tests for TCP binary protocol. */ +@RunWith(JUnit4.class) public class ClientMemcachedProtocolSelfTest extends AbstractRestProcessorSelfTest { /** Grid count. */ private static final int GRID_CNT = 1; @@ -92,6 +96,7 @@ private MemcachedClientIF startClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { jcache().put("getKey1", "getVal1"); jcache().put("getKey2", "getVal2"); @@ -104,6 +109,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetBulk() throws Exception { jcache().put("getKey1", "getVal1"); jcache().put("getKey2", "getVal2"); @@ -122,6 +128,7 @@ public void testGetBulk() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSet() throws Exception { Assert.assertTrue(client.set("setKey", 0, "setVal").get()); @@ -131,6 +138,7 @@ public void testSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetWithExpiration() throws Exception { Assert.assertTrue(client.set("setKey", 2000, "setVal").get()); @@ -144,6 +152,7 @@ public void testSetWithExpiration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAdd() throws Exception { jcache().put("addKey1", "addVal1"); @@ -157,6 +166,7 @@ public void testAdd() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddWithExpiration() throws Exception { Assert.assertTrue(client.add("addKey", 2000, "addVal").get()); @@ -170,6 +180,7 @@ public void testAddWithExpiration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplace() throws Exception { Assert.assertFalse(client.replace("replaceKey", 0, "replaceVal").get()); @@ -184,6 +195,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplaceWithExpiration() throws Exception { jcache().put("replaceKey", "replaceVal"); @@ -199,6 +211,7 @@ public void testReplaceWithExpiration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDelete() throws Exception { Assert.assertFalse(client.delete("deleteKey").get()); @@ -212,6 +225,7 @@ public void testDelete() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrement() throws Exception { Assert.assertEquals(5, client.incr("incrKey", 3, 2)); @@ -225,6 +239,7 @@ public void testIncrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDecrement() throws Exception { Assert.assertEquals(5, client.decr("decrKey", 10, 15)); @@ -238,6 +253,7 @@ public void testDecrement() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFlush() throws Exception { jcache().put("flushKey1", "flushVal1"); jcache().put("flushKey2", "flushVal2"); @@ -252,6 +268,7 @@ public void testFlush() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStat() throws Exception { jcache().put("statKey1", "statVal1"); assertEquals("statVal1", jcache().get("statKey1")); @@ -283,6 +300,7 @@ public void testStat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAppend() throws Exception { Assert.assertFalse(client.append(0, "appendKey", "_suffix").get()); @@ -298,6 +316,7 @@ public void testAppend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrepend() throws Exception { Assert.assertFalse(client.append(0, "prependKey", "_suffix").get()); @@ -313,6 +332,7 @@ public void testPrepend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSpecialTypes() throws Exception { Assert.assertTrue(client.set("boolKey", 0, true).get()); @@ -361,6 +381,7 @@ public void testSpecialTypes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testComplexObject() throws Exception { Assert.assertTrue(client.set("objKey", 0, new ValueObject(10, "String")).get()); @@ -370,6 +391,7 @@ public void testComplexObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomPort() throws Exception { customPort = 11212; @@ -392,6 +414,7 @@ public void testCustomPort() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testVersion() throws Exception { Map map = client.getVersions(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAbstractSelfTest.java index a972bc33f1999..1f6d1391a1153 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAbstractSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAbstractSelfTest.java @@ -139,6 +139,9 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -154,6 +157,7 @@ * Tests for Jetty REST protocol. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public abstract class JettyRestProcessorAbstractSelfTest extends JettyRestProcessorCommonSelfTest { /** Used to sent request charset. */ private static final String CHARSET = StandardCharsets.UTF_8.name(); @@ -285,6 +289,7 @@ protected JsonNode jsonTaskResult(String content) throws IOException { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { jcache().put("getKey", "getVal"); @@ -313,6 +318,7 @@ private void checkJson(String json, Person p) throws IOException { /** * @throws Exception If failed. */ + @Test public void testGetBinaryObjects() throws Exception { Person p = new Person(1, "John", "Doe", 300); @@ -409,6 +415,7 @@ public void testGetBinaryObjects() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNullMapKeyAndValue() throws Exception { Map map1 = new HashMap<>(); map1.put(null, null); @@ -442,6 +449,7 @@ public void testNullMapKeyAndValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimpleObject() throws Exception { SimplePerson p = new SimplePerson(1, "Test", java.sql.Date.valueOf("1977-01-26"), 1000.55, 39, "CIO", 25); @@ -465,6 +473,7 @@ public void testSimpleObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDate() throws Exception { java.util.Date utilDate = new java.util.Date(); @@ -502,6 +511,7 @@ public void testDate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUUID() throws Exception { UUID uuid = UUID.randomUUID(); @@ -527,6 +537,7 @@ public void testUUID() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTuple() throws Exception { T2 t = new T2("key", "value"); @@ -545,6 +556,7 @@ public void testTuple() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheSize() throws Exception { jcache().removeAll(); @@ -560,6 +572,7 @@ public void testCacheSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteName() throws Exception { String ret = content(null, GridRestCommand.NAME); @@ -613,6 +626,7 @@ private void checkGetOrCreateAndDestroy( /** * @throws Exception If failed. */ + @Test public void testGetOrCreateCache() throws Exception { checkGetOrCreateAndDestroy("testCache", PARTITIONED, 0, FULL_SYNC, null, null); @@ -662,6 +676,7 @@ public void testGetOrCreateCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAll() throws Exception { final Map entries = F.asMap("getKey1", "getVal1", "getKey2", "getVal2"); @@ -684,6 +699,7 @@ public void testGetAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIncorrectPut() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.CACHE_PUT, "key", "key0"); @@ -694,6 +710,7 @@ public void testIncorrectPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContainsKey() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); @@ -705,6 +722,7 @@ public void testContainsKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContainsKeys() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); grid(0).cache(DEFAULT_CACHE_NAME).put("key1", "val1"); @@ -720,6 +738,7 @@ public void testContainsKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPut() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); @@ -736,6 +755,7 @@ public void testGetAndPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutIfAbsent() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); @@ -752,6 +772,7 @@ public void testGetAndPutIfAbsent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsent2() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.CACHE_PUT_IF_ABSENT, "key", "key0", @@ -766,6 +787,7 @@ public void testPutIfAbsent2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveValue() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); @@ -791,6 +813,7 @@ public void testRemoveValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndRemove() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); @@ -804,6 +827,7 @@ public void testGetAndRemove() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReplaceValue() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); @@ -831,6 +855,7 @@ public void testReplaceValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndReplace() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("key0", "val0"); @@ -847,6 +872,7 @@ public void testGetAndReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeactivateActivate() throws Exception { assertClusterState(true); @@ -863,6 +889,7 @@ public void testDeactivateActivate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.CACHE_PUT, "key", "putKey", @@ -879,6 +906,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutWithExpiration() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.CACHE_PUT, "key", "putKey", @@ -898,6 +926,7 @@ public void testPutWithExpiration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAdd() throws Exception { jcache().put("addKey1", "addVal1"); @@ -915,6 +944,7 @@ public void testAdd() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddWithExpiration() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.CACHE_ADD, "key", "addKey", @@ -934,6 +964,7 @@ public void testAddWithExpiration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAll() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.CACHE_PUT_ALL, "k1", "putKey1", @@ -953,6 +984,7 @@ public void testPutAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemove() throws Exception { jcache().put("rmvKey", "rmvVal"); @@ -970,6 +1002,7 @@ public void testRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAll() throws Exception { jcache().put("rmvKey1", "rmvVal1"); jcache().put("rmvKey2", "rmvVal2"); @@ -1011,6 +1044,7 @@ public void testRemoveAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCas() throws Exception { jcache().put("casKey", "casOldVal"); @@ -1034,6 +1068,7 @@ public void testCas() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplace() throws Exception { jcache().put("repKey", "repOldVal"); @@ -1054,6 +1089,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplaceWithExpiration() throws Exception { jcache().put("replaceKey", "replaceVal"); @@ -1078,6 +1114,7 @@ public void testReplaceWithExpiration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAppend() throws Exception { jcache().put("appendKey", "appendVal"); @@ -1094,6 +1131,7 @@ public void testAppend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrepend() throws Exception { jcache().put("prependKey", "prependVal"); @@ -1110,6 +1148,7 @@ public void testPrepend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrement() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.ATOMIC_INCREMENT, "key", "incrKey", @@ -1136,6 +1175,7 @@ public void testIncrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDecrement() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.ATOMIC_DECREMENT, "key", "decrKey", @@ -1162,6 +1202,7 @@ public void testDecrement() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCar() throws Exception { jcache().put("casKey", "casOldVal"); @@ -1182,6 +1223,7 @@ public void testCar() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsent() throws Exception { assertNull(jcache().localPeek("casKey")); @@ -1200,6 +1242,7 @@ public void testPutIfAbsent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCasRemove() throws Exception { jcache().put("casKey", "casVal"); @@ -1217,6 +1260,7 @@ public void testCasRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMetrics() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.CACHE_METRICS); @@ -1306,6 +1350,7 @@ private void testMetadata(Collection metas, JsonNode arr) /** * @throws Exception If failed. */ + @Test public void testMetadataLocal() throws Exception { IgniteCacheProxy cache = F.first(grid(0).context().cache().publicCaches()); @@ -1347,6 +1392,7 @@ public void testMetadataLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMetadataRemote() throws Exception { CacheConfiguration partialCacheCfg = new CacheConfiguration<>("partial"); @@ -1386,6 +1432,7 @@ public void testMetadataRemote() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTopology() throws Exception { String ret = content(null, GridRestCommand.TOPOLOGY, "attr", "false", @@ -1428,11 +1475,31 @@ public void testTopology() throws Exception { assertEquals(publicCache.getConfiguration(CacheConfiguration.class).getCacheMode(), cacheMode); } } + + // Test that caches not included. 
+ ret = content(null, GridRestCommand.TOPOLOGY, + "attr", "false", + "mtr", "false", + "caches", "false" + ); + + info("Topology command result: " + ret); + + res = jsonResponse(ret); + + assertEquals(gridCount(), res.size()); + + for (JsonNode node : res) { + assertTrue(node.get("attributes").isNull()); + assertTrue(node.get("metrics").isNull()); + assertTrue(node.get("caches").isNull()); + } } /** * @throws Exception If failed. */ + @Test public void testNode() throws Exception { String ret = content(null, GridRestCommand.NODE, "attr", "true", @@ -1447,6 +1514,12 @@ public void testNode() throws Exception { assertTrue(res.get("attributes").isObject()); assertTrue(res.get("metrics").isObject()); + JsonNode caches = res.get("caches"); + + assertTrue(caches.isArray()); + assertFalse(caches.isNull()); + assertEquals(grid(0).context().cache().publicCaches().size(), caches.size()); + ret = content(null, GridRestCommand.NODE, "attr", "false", "mtr", "false", @@ -1472,6 +1545,22 @@ public void testNode() throws Exception { res = jsonResponse(ret); assertTrue(res.isNull()); + + // Check that caches are not included. + ret = content(null, GridRestCommand.NODE, + "id", grid(0).localNode().id().toString(), + "attr", "false", + "mtr", "false", + "caches", "false" + ); + + info("Node command result: " + ret); + + res = jsonResponse(ret); + + assertTrue(res.get("attributes").isNull()); + assertTrue(res.get("metrics").isNull()); + assertTrue(res.get("caches").isNull()); } /** @@ -1481,6 +1570,7 @@ public void testNode() throws Exception { * * @throws Exception If failed. */ + @Test public void testExe() throws Exception { String ret = content(DEFAULT_CACHE_NAME, GridRestCommand.EXE); @@ -1526,6 +1616,7 @@ public void testExe() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testVisorGateway() throws Exception { ClusterNode locNode = grid(1).localNode(); @@ -1883,6 +1974,7 @@ public void testVisorGateway() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVersion() throws Exception { String ret = content(null, GridRestCommand.VERSION); @@ -1894,6 +1986,7 @@ public void testVersion() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryArgs() throws Exception { String qry = "salary > ? and salary <= ?"; @@ -1915,6 +2008,7 @@ public void testQueryArgs() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryScan() throws Exception { String ret = content("person", GridRestCommand.EXECUTE_SCAN_QUERY, "pageSize", "10", @@ -1931,6 +2025,7 @@ public void testQueryScan() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFilterQueryScan() throws Exception { String ret = content("person", GridRestCommand.EXECUTE_SCAN_QUERY, "pageSize", "10", @@ -1947,6 +2042,7 @@ public void testFilterQueryScan() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncorrectFilterQueryScan() throws Exception { String clsName = ScanFilter.class.getName() + 1; @@ -1961,6 +2057,7 @@ public void testIncorrectFilterQueryScan() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQuery() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).put("1", "1"); grid(0).cache(DEFAULT_CACHE_NAME).put("2", "2"); @@ -2006,6 +2103,7 @@ public void testQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedJoinsQuery() throws Exception { String qry = "select * from Person, \"organization\".Organization " + "where \"organization\".Organization.id = Person.orgId " + @@ -2029,6 +2127,7 @@ public void testDistributedJoinsQuery() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSqlFieldsQuery() throws Exception { String qry = "select concat(firstName, ' ', lastName) from Person"; @@ -2047,6 +2146,7 @@ public void testSqlFieldsQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedJoinsSqlFieldsQuery() throws Exception { String qry = "select * from \"person\".Person p, \"organization\".Organization o where o.id = p.orgId"; @@ -2066,6 +2166,7 @@ public void testDistributedJoinsSqlFieldsQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlFieldsMetadataQuery() throws Exception { String qry = "select firstName, lastName from Person"; @@ -2096,6 +2197,7 @@ public void testSqlFieldsMetadataQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryClose() throws Exception { String qry = "salary > ? and salary <= ?"; @@ -2125,6 +2227,7 @@ public void testQueryClose() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryDelay() throws Exception { String qry = "salary > ? and salary <= ?"; @@ -2187,6 +2290,7 @@ private void putTypedValue(String type, String k, String v, int status) throws E /** * @throws Exception If failed. */ + @Test public void testTypedPut() throws Exception { // Test boolean type. putTypedValue("boolean", "true", "false", STATUS_SUCCESS); @@ -2364,6 +2468,7 @@ private void getTypedValue(String keyType, String k, String exp) throws Exceptio /** * @throws Exception If failed. */ + @Test public void testTypedGet() throws Exception { // Test boolean type. 
IgniteCache cBool = typedCache(); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationAbstractTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationAbstractTest.java index 0ed9e95e9c62a..2e2982d136a76 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationAbstractTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationAbstractTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.configuration.WALMode.NONE; /** * Test REST with enabled authentication. */ +@RunWith(JUnit4.class) public abstract class JettyRestProcessorAuthenticationAbstractTest extends JettyRestProcessorUnsignedSelfTest { /** */ protected static final String DFLT_USER = "ignite"; @@ -90,6 +94,7 @@ public abstract class JettyRestProcessorAuthenticationAbstractTest extends Jetty /** * @throws Exception If failed. */ + @Test public void testAuthenticationCommand() throws Exception { String ret = content(null, GridRestCommand.AUTHENTICATE); @@ -99,6 +104,7 @@ public void testAuthenticationCommand() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddUpdateRemoveUser() throws Exception { // Add user. 
String ret = content(null, GridRestCommand.ADD_USER, diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationWithTokenSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationWithTokenSelfTest.java index 0a1b6b9d802f1..bf2ddac9b0f1b 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationWithTokenSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorAuthenticationWithTokenSelfTest.java @@ -18,10 +18,14 @@ package org.apache.ignite.internal.processors.rest; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test REST with enabled authentication and token. */ +@RunWith(JUnit4.class) public class JettyRestProcessorAuthenticationWithTokenSelfTest extends JettyRestProcessorAuthenticationAbstractTest { /** */ private String tok = ""; @@ -56,6 +60,7 @@ public class JettyRestProcessorAuthenticationWithTokenSelfTest extends JettyRest /** * @throws Exception If failed. */ + @Test public void testInvalidSessionToken() throws Exception { tok = null; diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorBaselineSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorBaselineSelfTest.java new file mode 100644 index 0000000000000..1a83a43cd743a --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorBaselineSelfTest.java @@ -0,0 +1,218 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.rest; + +import com.fasterxml.jackson.databind.JsonNode; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import org.apache.ignite.cluster.BaselineNode; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.processors.rest.handlers.cluster.GridBaselineCommandResponse; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.jetbrains.annotations.Nullable; +import org.junit.Test; + +import static org.apache.ignite.configuration.WALMode.NONE; +import static org.apache.ignite.internal.processors.rest.GridRestResponse.STATUS_SUCCESS; + +/** + * Test REST with baseline topology commands. + */ +public class JettyRestProcessorBaselineSelfTest extends JettyRestProcessorCommonSelfTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + U.resolveWorkDirectory(U.defaultWorkDirectory(), "db", true); + + super.beforeTestsStarted(); + + // We need to activate the cluster. 
+ grid(0).cluster().active(true); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + grid(0).cluster().setBaselineTopology(grid(0).cluster().topologyVersion()); + } + + /** {@inheritDoc} */ + @Override protected String signature() { + return null; + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + DataStorageConfiguration dsCfg = new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setMaxSize(100 * 1024 * 1024) + .setPersistenceEnabled(true)) + .setWalMode(NONE); + + cfg.setDataStorageConfiguration(dsCfg); + + return cfg; + } + + /** + * @param nodes Collection of grid nodes. + * @return Collection of node consistent IDs for given collection of grid nodes. + */ + private static Collection nodeConsistentIds(@Nullable Collection nodes) { + if (nodes == null || nodes.isEmpty()) + return Collections.emptyList(); + + return F.viewReadOnly(nodes, n -> String.valueOf(n.consistentId())); + } + + /** + * @param content Content to check. + * @param baselineSz Expected baseline size. + * @param srvsSz Expected server nodes count. 
+ */ + private void assertBaseline(String content, int baselineSz, int srvsSz) throws IOException { + assertNotNull(content); + assertFalse(content.isEmpty()); + + JsonNode node = JSON_MAPPER.readTree(content); + + assertEquals(STATUS_SUCCESS, node.get("successStatus").asInt()); + assertTrue(node.get("error").isNull()); + + assertNotSame(securityEnabled(), node.get("sessionToken").isNull()); + + JsonNode res = node.get("response"); + + assertFalse(res.isNull()); + + GridBaselineCommandResponse baseline = JSON_MAPPER.treeToValue(res, GridBaselineCommandResponse.class); + + assertTrue(baseline.isActive()); + assertEquals(grid(0).cluster().topologyVersion(), baseline.getTopologyVersion()); + assertEquals(baselineSz, baseline.getBaseline().size()); + assertEqualsCollections(nodeConsistentIds(grid(0).cluster().currentBaselineTopology()), baseline.getBaseline()); + assertEquals(srvsSz, baseline.getServers().size()); + assertEqualsCollections(nodeConsistentIds(grid(0).cluster().nodes()), baseline.getServers()); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testBaseline() throws Exception { + int sz = gridCount(); + + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz); + + // Stop one node. It will stay in baseline. + stopGrid(sz - 1); + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz - 1); + + // Start one node. Server node will be added, but baseline will not change. + startGrid(sz - 1); + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testBaselineSet() throws Exception { + int sz = gridCount(); + + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz); + + startGrid(sz); + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz + 1); + + assertBaseline(content(null, GridRestCommand.BASELINE_SET, "topVer", + String.valueOf(grid(0).cluster().topologyVersion())), sz + 1, sz + 1); + + stopGrid(sz); + + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz + 1, sz); + + assertBaseline(content(null, GridRestCommand.BASELINE_SET, "topVer", + String.valueOf(grid(0).cluster().topologyVersion())), sz, sz); + + startGrid(sz); + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz + 1); + + ArrayList params = new ArrayList<>(); + int i = 1; + + for (BaselineNode n : grid(0).cluster().nodes()) { + params.add("consistentId" + i++); + params.add(String.valueOf(n.consistentId())); + } + + assertBaseline(content(null, GridRestCommand.BASELINE_SET, params.toArray(new String[0])), + sz + 1, sz + 1); + + stopGrid(sz); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testBaselineAdd() throws Exception { + int sz = gridCount(); + + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz); + + startGrid(sz); + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz + 1); + + assertBaseline(content(null, GridRestCommand.BASELINE_ADD, "consistentId1", + grid(sz).localNode().consistentId().toString()), sz + 1, sz + 1); + + stopGrid(sz); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testBaselineRemove() throws Exception { + int sz = gridCount(); + + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz); + + startGrid(sz); + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz, sz + 1); + + assertBaseline(content(null, GridRestCommand.BASELINE_SET, "topVer", + String.valueOf(grid(0).cluster().topologyVersion())), sz + 1, sz + 1); + + String consistentId = grid(sz).localNode().consistentId().toString(); + + stopGrid(sz); + assertBaseline(content(null, GridRestCommand.BASELINE_CURRENT_STATE), sz + 1, sz); + + assertBaseline(content(null, GridRestCommand.BASELINE_REMOVE, "consistentId1", + consistentId), sz, sz); + } +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorGetAllAsArrayTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorGetAllAsArrayTest.java index 521d7c18e9916..3017d87a558ca 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorGetAllAsArrayTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorGetAllAsArrayTest.java @@ -22,11 +22,15 @@ import java.util.Map; import java.util.Set; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_REST_GETALL_AS_ARRAY; import static org.apache.ignite.internal.processors.rest.GridRestResponse.STATUS_SUCCESS; /** */ +@RunWith(JUnit4.class) public class JettyRestProcessorGetAllAsArrayTest extends JettyRestProcessorCommonSelfTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -45,6 +49,7 @@ public class JettyRestProcessorGetAllAsArrayTest extends JettyRestProcessorCommo /** * @throws Exception If failed. 
*/ + @Test public void testGetAll() throws Exception { final Map entries = F.asMap("getKey1", "getVal1", "getKey2", "getVal2"); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorSignedSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorSignedSelfTest.java index 3be99b489eef3..b5860dccd8344 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorSignedSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/JettyRestProcessorSignedSelfTest.java @@ -25,10 +25,14 @@ import java.util.Base64; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JettyRestProcessorSignedSelfTest extends JettyRestProcessorAbstractSelfTest { /** */ protected static final String REST_SECRET_KEY = "secret-key"; @@ -52,6 +56,7 @@ public class JettyRestProcessorSignedSelfTest extends JettyRestProcessorAbstract /** * @throws Exception If failed. 
*/ + @Test public void testUnauthorized() throws Exception { String addr = "http://" + LOC_HOST + ":" + restPort() + "/ignite?cacheName=default&cmd=top"; diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestBinaryProtocolSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestBinaryProtocolSelfTest.java index fb56b77a54332..249e273190b28 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestBinaryProtocolSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestBinaryProtocolSelfTest.java @@ -39,13 +39,12 @@ import org.apache.ignite.internal.processors.rest.client.message.GridClientNodeBean; import org.apache.ignite.internal.processors.rest.client.message.GridClientTaskResultBean; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; -import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -54,10 +53,8 @@ * TCP protocol test. 
*/ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class RestBinaryProtocolSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "cache"; @@ -102,12 +99,6 @@ public class RestBinaryProtocolSelfTest extends GridCommonAbstractTest { cfg.setConnectorConfiguration(clientCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME), cacheConfiguration(CACHE_NAME)); return cfg; @@ -140,6 +131,7 @@ private TestBinaryClient client() throws IgniteCheckedException { /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { assertTrue(client.cachePut(DEFAULT_CACHE_NAME, "key1", "val1")); assertEquals("val1", grid().cache(DEFAULT_CACHE_NAME).get("key1")); @@ -151,6 +143,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAll() throws Exception { client.cachePutAll(DEFAULT_CACHE_NAME, F.asMap("key1", "val1", "key2", "val2")); @@ -172,6 +165,7 @@ public void testPutAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { grid().cache(DEFAULT_CACHE_NAME).put("key", "val"); @@ -185,6 +179,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailure() throws Exception { IgniteKernal kernal = ((IgniteKernal)grid()); @@ -220,6 +215,7 @@ public void testFailure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAll() throws Exception { IgniteCache jcacheDflt = grid().cache(DEFAULT_CACHE_NAME); IgniteCache jcacheName = grid().cache(CACHE_NAME); @@ -264,6 +260,7 @@ public void testGetAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemove() throws Exception { IgniteCache jcacheDflt = grid().cache(DEFAULT_CACHE_NAME); IgniteCache jcacheName = grid().cache(CACHE_NAME); @@ -287,6 +284,7 @@ public void testRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAll() throws Exception { IgniteCache jcacheDflt = grid().cache(DEFAULT_CACHE_NAME); @@ -320,6 +318,7 @@ public void testRemoveAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplace() throws Exception { assertFalse(client.cacheReplace(DEFAULT_CACHE_NAME, "key1", "val1")); @@ -342,6 +341,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCompareAndSet() throws Exception { assertFalse(client.cacheCompareAndSet(DEFAULT_CACHE_NAME, "key", null, null)); @@ -404,6 +404,7 @@ public void testCompareAndSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMetrics() throws Exception { IgniteCache jcacheDft = grid().cache(DEFAULT_CACHE_NAME); IgniteCache jcacheName = grid().cache(CACHE_NAME); @@ -446,6 +447,7 @@ public void testMetrics() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAppend() throws Exception { grid().cache(DEFAULT_CACHE_NAME).remove("key"); grid().cache(CACHE_NAME).remove("key"); @@ -470,6 +472,7 @@ public void testAppend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrepend() throws Exception { grid().cache(DEFAULT_CACHE_NAME).remove("key"); grid().cache(CACHE_NAME).remove("key"); @@ -494,6 +497,7 @@ public void testPrepend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExecute() throws Exception { GridClientTaskResultBean res = client.execute(TestTask.class.getName(), Arrays.asList("executing", 3, "test", 5, "task")); @@ -505,6 +509,7 @@ public void testExecute() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNode() throws Exception { assertNull(client.node(UUID.randomUUID(), false, false)); assertNull(client.node("wrongHost", false, false)); @@ -549,6 +554,7 @@ public void testNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTopology() throws Exception { List top = client.topology(true, true); @@ -614,4 +620,4 @@ private static class TestTask extends ComputeTaskSplitAdapter, Inte return sum; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestMemcacheProtocolSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestMemcacheProtocolSelfTest.java index 9ac62703d04ac..6a4ee7ef7afce 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestMemcacheProtocolSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestMemcacheProtocolSelfTest.java @@ -23,12 +23,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; -import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -36,11 +35,8 @@ /** * TCP protocol test. 
*/ -@SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class RestMemcacheProtocolSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "cache"; @@ -85,12 +81,6 @@ public class RestMemcacheProtocolSelfTest extends GridCommonAbstractTest { cfg.setConnectorConfiguration(clientCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME), cacheConfiguration(CACHE_NAME)); return cfg; @@ -123,6 +113,7 @@ private TestMemcacheClient client() throws IgniteCheckedException { /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { assertTrue(client.cachePut(null, "key1", "val1")); assertEquals("val1", grid().cache(DEFAULT_CACHE_NAME).get("key1")); @@ -134,6 +125,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { grid().cache(DEFAULT_CACHE_NAME).put("key", "val"); @@ -147,6 +139,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemove() throws Exception { grid().cache(DEFAULT_CACHE_NAME).put("key", "val"); @@ -166,6 +159,7 @@ public void testRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAdd() throws Exception { assertTrue(client.cacheAdd(null, "key", "val")); assertEquals("val", grid().cache(DEFAULT_CACHE_NAME).get("key")); @@ -181,6 +175,7 @@ public void testAdd() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplace() throws Exception { assertFalse(client.cacheReplace(null, "key1", "val1")); grid().cache(DEFAULT_CACHE_NAME).put("key1", "val1"); @@ -200,6 +195,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMetrics() throws Exception { grid().cache(DEFAULT_CACHE_NAME).localMxBean().clear(); grid().cache(CACHE_NAME).localMxBean().clear(); @@ -238,6 +234,7 @@ public void testMetrics() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrement() throws Exception { assertEquals(15L, client().increment("key", 10L, 5L)); assertEquals(15L, grid().atomicLong("key", 0, true).get()); @@ -261,6 +258,7 @@ public void testIncrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDecrement() throws Exception { assertEquals(15L, client().decrement("key", 20L, 5L)); assertEquals(15L, grid().atomicLong("key", 0, true).get()); @@ -284,6 +282,7 @@ public void testDecrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAppend() throws Exception { assertFalse(client.cacheAppend(null, "wrongKey", "_suffix")); assertFalse(client.cacheAppend(CACHE_NAME, "wrongKey", "_suffix")); @@ -300,6 +299,7 @@ public void testAppend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrepend() throws Exception { assertFalse(client.cachePrepend(null, "wrongKey", "prefix_")); assertFalse(client.cachePrepend(CACHE_NAME, "wrongKey", "prefix_")); @@ -316,6 +316,7 @@ public void testPrepend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVersion() throws Exception { assertNotNull(client.version()); } @@ -323,6 +324,7 @@ public void testVersion() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoop() throws Exception { client.noop(); } @@ -330,7 +332,8 @@ public void testNoop() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQuit() throws Exception { client.quit(); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorMultiStartSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorMultiStartSelfTest.java index 24274d77d55b7..9be68b13d4085 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorMultiStartSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorMultiStartSelfTest.java @@ -22,10 +22,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Rest processor test. */ +@RunWith(JUnit4.class) public class RestProcessorMultiStartSelfTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 3; @@ -55,6 +59,7 @@ public class RestProcessorMultiStartSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testMultiStart() throws Exception { try { for (int i = 0; i < GRID_CNT; i++) @@ -72,6 +77,7 @@ public void testMultiStart() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMultiStartWithClient() throws Exception { try { int clnIdx = GRID_CNT - 1; diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorStartSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorStartSelfTest.java index 477c41ab35bbc..ba7551a54042e 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorStartSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/RestProcessorStartSelfTest.java @@ -30,19 +30,18 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class RestProcessorStartSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String HOST = "127.0.0.1"; @@ -71,7 +70,7 @@ public class RestProcessorStartSelfTest extends GridCommonAbstractTest { TestDiscoverySpi disc = new TestDiscoverySpi(); - disc.setIpFinder(IP_FINDER); + disc.setIpFinder(sharedStaticIpFinder); cfg.setDiscoverySpi(disc); @@ -92,6 +91,7 @@ public class RestProcessorStartSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testTcpStart() throws Exception { GridClientConfiguration clCfg = new GridClientConfiguration(); @@ -165,4 +165,4 @@ private class TestDiscoverySpi extends TcpDiscoverySpi { super.spiStart(igniteInstanceName); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TaskCommandHandlerSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TaskCommandHandlerSelfTest.java index 60e620be4edcd..b6e9c72f7ef51 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TaskCommandHandlerSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TaskCommandHandlerSelfTest.java @@ -41,13 +41,13 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; import org.jsr166.ConcurrentLinkedHashMap; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -58,10 +58,8 @@ /** * Test for {@code GridTaskCommandHandler} */ +@RunWith(JUnit4.class) public class TaskCommandHandlerSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "cache"; @@ -113,12 +111,6 @@ public class TaskCommandHandlerSelfTest extends GridCommonAbstractTest { 
cfg.setConnectorConfiguration(clientCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME), cacheConfiguration("replicated"), cacheConfiguration("partitioned"), cacheConfiguration(CACHE_NAME)); @@ -164,6 +156,7 @@ private GridClientConfiguration clientConfiguration() { /** * @throws Exception If failed. */ + @Test public void testManyTasksRun() throws Exception { GridClientCompute compute = client.compute(); @@ -221,4 +214,4 @@ private static class TestTask extends ComputeTaskSplitAdapter { return sum; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TcpRestUnmarshalVulnerabilityTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TcpRestUnmarshalVulnerabilityTest.java index 92d824be329db..cbb14ffaaa1c3 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TcpRestUnmarshalVulnerabilityTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/TcpRestUnmarshalVulnerabilityTest.java @@ -38,6 +38,9 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MARSHALLER_BLACKLIST; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MARSHALLER_WHITELIST; @@ -47,6 +50,7 @@ /** * Tests for whitelist and blacklist for avoiding deserialization vulnerability. */ +@RunWith(JUnit4.class) public class TcpRestUnmarshalVulnerabilityTest extends GridCommonAbstractTest { /** Marshaller.
*/ private static final GridClientJdkMarshaller MARSH = new GridClientJdkMarshaller(); @@ -89,6 +93,7 @@ public class TcpRestUnmarshalVulnerabilityTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNoLists() throws Exception { testExploit(true); } @@ -96,6 +101,7 @@ public void testNoLists() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWhiteListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); @@ -107,6 +113,7 @@ public void testWhiteListIncluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWhiteListExcluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_excluded.txt").getPath(); @@ -118,6 +125,7 @@ public void testWhiteListExcluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlackListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); @@ -129,6 +137,7 @@ public void testBlackListIncluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlackListExcluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_excluded.txt").getPath(); @@ -140,6 +149,7 @@ public void testBlackListExcluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBothListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); @@ -266,4 +276,4 @@ private void readObject(ObjectInputStream is) throws ClassNotFoundException, IOE // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/TcpRestParserSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/TcpRestParserSelfTest.java index fecd2b977bbdf..e230ad4414552 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/TcpRestParserSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/TcpRestParserSelfTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.rest.client.message.GridClientCacheRequest.GridCacheOperation.CAS; import static org.apache.ignite.internal.processors.rest.protocols.tcp.GridMemcachedMessage.IGNITE_HANDSHAKE_FLAG; @@ -45,6 +48,7 @@ * This class tests that the parser conforms to the memcached extended specification. */ @SuppressWarnings("TypeMayBeWeakened") +@RunWith(JUnit4.class) public class TcpRestParserSelfTest extends GridCommonAbstractTest { /** Marshaller. */ private GridClientMarshaller marshaller = new GridClientOptimizedMarshaller(); @@ -58,6 +62,7 @@ public class TcpRestParserSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSimplePacketParsing() throws Exception { GridNioSession ses = new MockNioSession(); @@ -93,6 +98,7 @@ public void testSimplePacketParsing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncorrectPackets() throws Exception { final GridNioSession ses = new MockNioSession(); @@ -142,6 +148,7 @@ public void testIncorrectPackets() throws Exception { /** * @throws Exception If failed.
*/ + @Test public void testCustomMessages() throws Exception { GridClientCacheRequest req = new GridClientCacheRequest(CAS); @@ -178,6 +185,7 @@ public void testCustomMessages() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMixedParsing() throws Exception { GridNioSession ses1 = new MockNioSession(); GridNioSession ses2 = new MockNioSession(); @@ -247,6 +255,7 @@ public void testMixedParsing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testParseContinuousSplit() throws Exception { ByteBuffer tmp = ByteBuffer.allocate(10 * 1024); @@ -301,6 +310,7 @@ public void testParseContinuousSplit() throws Exception { * * @throws Exception If failed. */ + @Test public void testParseClientHandshake() throws Exception { for (int splitPos = 1; splitPos < 5; splitPos++) { log.info("Checking split position: " + splitPos); @@ -453,4 +463,4 @@ private ByteBuffer rawPacket(byte magic, byte opCode, byte[] opaque, @Nullable b return res; } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisCommonAbstractTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisCommonAbstractTest.java index 3db7706042ece..241e239d08411 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisCommonAbstractTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisCommonAbstractTest.java @@ -21,9 +21,6 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import redis.clients.jedis.JedisPool; import redis.clients.jedis.JedisPoolConfig; @@ -35,9 +32,6 @@ public class RedisCommonAbstractTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 2; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Local host. */ protected static final String HOST = "127.0.0.1"; @@ -88,12 +82,6 @@ public class RedisCommonAbstractTest extends GridCommonAbstractTest { cfg.setConnectorConfiguration(redisCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - CacheConfiguration ccfg = defaultCacheConfiguration(); ccfg.setStatisticsEnabled(true); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolConnectSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolConnectSelfTest.java index f7fd69a6d58cf..ca3859f72a00e 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolConnectSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolConnectSelfTest.java @@ -18,15 +18,20 @@ package org.apache.ignite.internal.processors.rest.protocols.tcp.redis; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import redis.clients.jedis.Jedis; /** * Tests for Connection commands of Redis protocol. */ +@RunWith(JUnit4.class) public class RedisProtocolConnectSelfTest extends RedisCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testPing() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals("PONG", jedis.ping()); @@ -36,6 +41,7 @@ public void testPing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEcho() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals("Hello, grid!", jedis.echo("Hello, grid!")); @@ -45,6 +51,7 @@ public void testEcho() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSelect() throws Exception { try (Jedis jedis = pool.getResource()) { // connected to cache with index 0 diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolServerSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolServerSelfTest.java index a424d77e89c8f..f0f5f477cfb41 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolServerSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolServerSelfTest.java @@ -19,15 +19,20 @@ import java.util.HashMap; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import redis.clients.jedis.Jedis; /** * Tests for Server commands of Redis protocol. */ +@RunWith(JUnit4.class) public class RedisProtocolServerSelfTest extends RedisCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testDbSize() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(0, (long)jedis.dbSize()); @@ -46,6 +51,7 @@ public void testDbSize() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFlushDb() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(0, (long)jedis.dbSize()); @@ -82,6 +88,7 @@ public void testFlushDb() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFlushAll() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(0, (long)jedis.dbSize()); diff --git a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolStringSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolStringSelfTest.java index 21a988268c232..e192c004f5b22 100644 --- a/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolStringSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/internal/processors/rest/protocols/tcp/redis/RedisProtocolStringSelfTest.java @@ -21,16 +21,21 @@ import java.util.HashSet; import java.util.List; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import redis.clients.jedis.Jedis; import redis.clients.jedis.exceptions.JedisDataException; /** * Tests for String commands of Redis protocol. */ +@RunWith(JUnit4.class) public class RedisProtocolStringSelfTest extends RedisCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { try (Jedis jedis = pool.getResource()) { jcache().put("getKey1", "getVal1"); @@ -54,6 +59,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSet() throws Exception { try (Jedis jedis = pool.getResource()) { jcache().put("getSetKey1", "1"); @@ -77,6 +83,7 @@ public void testGetSet() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMGet() throws Exception { try (Jedis jedis = pool.getResource()) { jcache().put("getKey1", "getVal1"); @@ -96,6 +103,7 @@ public void testMGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSet() throws Exception { long EXPIRE_MS = 1000L; int EXPIRE_SEC = 1; @@ -131,6 +139,7 @@ public void testSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMSet() throws Exception { try (Jedis jedis = pool.getResource()) { jedis.mset("setKey1", "1", "setKey2", "2"); @@ -143,6 +152,7 @@ public void testMSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrDecr() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(1, (long)jedis.incr("newKeyIncr")); @@ -226,6 +236,7 @@ public void testIncrDecr() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrDecrBy() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(2, (long)jedis.incrBy("newKeyIncrBy", 2)); @@ -282,6 +293,7 @@ public void testIncrDecrBy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAppend() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(5, (long)jedis.append("appendKey1", "Hello")); @@ -303,6 +315,7 @@ public void testAppend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStrlen() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(0, (long)jedis.strlen("strlenKeyNonExisting")); @@ -327,6 +340,7 @@ public void testStrlen() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetRange() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals(0, (long)jedis.setrange("setRangeKey1", 0, "")); @@ -375,6 +389,7 @@ public void testSetRange() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetRange() throws Exception { try (Jedis jedis = pool.getResource()) { Assert.assertEquals("", jedis.getrange("getRangeKeyNonExisting", 0, 0)); @@ -402,6 +417,7 @@ public void testGetRange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDel() throws Exception { jcache().put("delKey1", "abc"); jcache().put("delKey2", "abcd"); @@ -415,6 +431,7 @@ public void testDel() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExists() throws Exception { jcache().put("existsKey1", "abc"); jcache().put("existsKey2", "abcd"); @@ -427,6 +444,7 @@ public void testExists() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExpire() throws Exception { testExpire(new Expiration() { @Override public long expire(Jedis jedis, String key) { @@ -438,6 +456,7 @@ public void testExpire() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExpireMs() throws Exception { testExpire(new Expiration() { @Override public long expire(Jedis jedis, String key) { diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/AbstractJdbcPojoQuerySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/AbstractJdbcPojoQuerySelfTest.java index cc8d72e71187e..a8dc8621dd336 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/AbstractJdbcPojoQuerySelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/AbstractJdbcPojoQuerySelfTest.java @@ -31,9 +31,6 @@ import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.config.GridTestProperties; 
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; @@ -44,9 +41,6 @@ * Test for Jdbc driver query without class on client */ public abstract class AbstractJdbcPojoQuerySelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** TestObject class name. */ protected static final String TEST_OBJECT = "org.apache.ignite.internal.JdbcTestObject"; @@ -70,11 +64,6 @@ public abstract class AbstractJdbcPojoQuerySelfTest extends GridCommonAbstractTe cache.setWriteSynchronizationMode(FULL_SYNC); cache.setAtomicityMode(TRANSACTIONAL); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); cfg.setConnectorConfiguration(new ConnectorConfiguration()); QueryEntity queryEntity = new QueryEntity(); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcComplexQuerySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcComplexQuerySelfTest.java index 3c00288e0fdae..a3be96920046c 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcComplexQuerySelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcComplexQuerySelfTest.java @@ -28,10 +28,10 @@ import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -40,10 
+40,8 @@ /** * Tests for complex queries (joins, etc.). */ +@RunWith(JUnit4.class) public class JdbcComplexQuerySelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. */ private static final String URL = "jdbc:ignite://127.0.0.1/pers"; @@ -56,12 +54,6 @@ public class JdbcComplexQuerySelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheConfiguration()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -126,6 +118,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testJoin() throws Exception { ResultSet rs = stmt.executeQuery( "select p.id, p.name, o.name as orgName from \"pers\".Person p, \"org\".Organization o where p.orgId = o.id"); @@ -161,6 +154,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testJoinWithoutAlias() throws Exception { ResultSet rs = stmt.executeQuery( "select p.id, p.name, o.name from \"pers\".Person p, \"org\".Organization o where p.orgId = o.id"); @@ -199,6 +193,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testIn() throws Exception { ResultSet rs = stmt.executeQuery("select name from \"pers\".Person where age in (25, 35)"); @@ -219,6 +214,7 @@ public void testIn() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBetween() throws Exception { ResultSet rs = stmt.executeQuery("select name from \"pers\".Person where age between 24 and 36"); @@ -239,6 +235,7 @@ public void testBetween() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCalculatedValue() throws Exception { ResultSet rs = stmt.executeQuery("select age * 2 from \"pers\".Person"); @@ -318,4 +315,4 @@ private Organization(int id, String name) { this.name = name; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcConnectionSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcConnectionSelfTest.java index 14d21469468bc..99d6b9d841e5c 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcConnectionSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcConnectionSelfTest.java @@ -24,21 +24,18 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; -import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Connection test. */ +@RunWith(JUnit4.class) public class JdbcConnectionSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Custom cache name. 
*/ private static final String CUSTOM_CACHE_NAME = "custom-cache"; @@ -57,12 +54,6 @@ public class JdbcConnectionSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME), cacheConfiguration(CUSTOM_CACHE_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - assert cfg.getConnectorConfiguration() == null; ConnectorConfiguration clientCfg = new ConnectorConfiguration(); @@ -96,6 +87,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep /** * @throws Exception If failed. */ + @Test public void testDefaults() throws Exception { String url = URL_PREFIX + HOST; @@ -106,6 +98,7 @@ public void testDefaults() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeId() throws Exception { String url = URL_PREFIX + HOST + "/?nodeId=" + grid(0).localNode().id(); @@ -119,6 +112,7 @@ public void testNodeId() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomCache() throws Exception { String url = URL_PREFIX + HOST + "/" + CUSTOM_CACHE_NAME; @@ -128,6 +122,7 @@ public void testCustomCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomPort() throws Exception { String url = URL_PREFIX + HOST + ":" + CUSTOM_PORT; @@ -138,6 +133,7 @@ public void testCustomPort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomCacheNameAndPort() throws Exception { String url = URL_PREFIX + HOST + ":" + CUSTOM_PORT + "/" + CUSTOM_CACHE_NAME; @@ -147,6 +143,7 @@ public void testCustomCacheNameAndPort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWrongCache() throws Exception { final String url = URL_PREFIX + HOST + "/wrongCacheName"; @@ -167,6 +164,7 @@ public void testWrongCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testWrongPort() throws Exception { final String url = URL_PREFIX + HOST + ":33333"; @@ -187,6 +185,7 @@ public void testWrongPort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClose() throws Exception { String url = URL_PREFIX + HOST; @@ -212,4 +211,4 @@ public void testClose() throws Exception { "Connection is closed." ); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcEmptyCacheSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcEmptyCacheSelfTest.java index 897f71e11cbca..7ae46ebf25e19 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcEmptyCacheSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcEmptyCacheSelfTest.java @@ -23,10 +23,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -34,10 +34,8 @@ /** * Tests for empty cache. */ +@RunWith(JUnit4.class) public class JdbcEmptyCacheSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. 
*/ private static final String CACHE_NAME = "cache"; @@ -63,12 +61,6 @@ public class JdbcEmptyCacheSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -100,6 +92,7 @@ public class JdbcEmptyCacheSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSelectNumber() throws Exception { ResultSet rs = stmt.executeQuery("select 1"); @@ -118,6 +111,7 @@ public void testSelectNumber() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSelectString() throws Exception { ResultSet rs = stmt.executeQuery("select 'str'"); @@ -131,4 +125,4 @@ public void testSelectString() throws Exception { assert cnt == 1; } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcErrorsAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcErrorsAbstractSelfTest.java index c44e00725731b..0f7c6c1fdb2c0 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcErrorsAbstractSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcErrorsAbstractSelfTest.java @@ -41,10 +41,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test SQLSTATE codes propagation with (any) Ignite JDBC driver. 
*/ +@RunWith(JUnit4.class) public abstract class JdbcErrorsAbstractSelfTest extends GridCommonAbstractTest { /** */ protected static final String CACHE_STORE_TEMPLATE = "cache_store"; @@ -73,6 +77,7 @@ public abstract class JdbcErrorsAbstractSelfTest extends GridCommonAbstractTest * Test that H2 specific error codes get propagated to Ignite SQL exceptions. * @throws SQLException if failed. */ + @Test public void testParsingErrors() throws SQLException { checkErrorState("gibberish", "42000", "Failed to parse query. Syntax error in SQL statement \"GIBBERISH[*] \""); @@ -82,6 +87,7 @@ public void testParsingErrors() throws SQLException { * Test that error codes from tables related DDL operations get propagated to Ignite SQL exceptions. * @throws SQLException if failed. */ + @Test public void testTableErrors() throws SQLException { checkErrorState("DROP TABLE \"PUBLIC\".missing", "42000", "Table doesn't exist: MISSING"); } @@ -90,6 +96,7 @@ public void testTableErrors() throws SQLException { * Test that error codes from indexes related DDL operations get propagated to Ignite SQL exceptions. * @throws SQLException if failed. */ + @Test public void testIndexErrors() throws SQLException { checkErrorState("DROP INDEX \"PUBLIC\".missing", "42000", "Index doesn't exist: MISSING"); } @@ -98,6 +105,7 @@ public void testIndexErrors() throws SQLException { * Test that error codes from DML operations get propagated to Ignite SQL exceptions. * @throws SQLException if failed. */ + @Test public void testDmlErrors() throws SQLException { checkErrorState("INSERT INTO \"test\".INTEGER(_key, _val) values(1, null)", "22004", "Value for INSERT, COPY, MERGE, or UPDATE must not be null"); @@ -110,6 +118,7 @@ public void testDmlErrors() throws SQLException { * Test error code for the case when user attempts to refer a future currently unsupported. * @throws SQLException if failed. 
*/ + @Test public void testUnsupportedSql() throws SQLException { checkErrorState("ALTER TABLE \"test\".Integer MODIFY COLUMN _key CHAR", "0A000", "ALTER COLUMN is not supported"); @@ -119,6 +128,7 @@ public void testUnsupportedSql() throws SQLException { * Test error code for the case when user attempts to use a closed connection. * @throws SQLException if failed. */ + @Test public void testConnectionClosed() throws SQLException { checkErrorState(new IgniteCallable() { @Override public Void call() throws Exception { @@ -231,6 +241,7 @@ public void testConnectionClosed() throws SQLException { * Test error code for the case when user attempts to use a closed result set. * @throws SQLException if failed. */ + @Test public void testResultSetClosed() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -252,6 +263,7 @@ public void testResultSetClosed() throws SQLException { * from column whose value can't be converted to an {@code int}. * @throws SQLException if failed. */ + @Test public void testInvalidIntFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -271,6 +283,7 @@ public void testInvalidIntFormat() throws SQLException { * from column whose value can't be converted to an {@code long}. * @throws SQLException if failed. */ + @Test public void testInvalidLongFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -290,6 +303,7 @@ public void testInvalidLongFormat() throws SQLException { * from column whose value can't be converted to an {@code float}. * @throws SQLException if failed. 
*/ + @Test public void testInvalidFloatFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -309,6 +323,7 @@ public void testInvalidFloatFormat() throws SQLException { * from column whose value can't be converted to an {@code double}. * @throws SQLException if failed. */ + @Test public void testInvalidDoubleFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -328,6 +343,7 @@ public void testInvalidDoubleFormat() throws SQLException { * from column whose value can't be converted to an {@code byte}. * @throws SQLException if failed. */ + @Test public void testInvalidByteFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -347,6 +363,7 @@ public void testInvalidByteFormat() throws SQLException { * from column whose value can't be converted to an {@code short}. * @throws SQLException if failed. */ + @Test public void testInvalidShortFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -366,6 +383,7 @@ public void testInvalidShortFormat() throws SQLException { * from column whose value can't be converted to an {@code BigDecimal}. * @throws SQLException if failed. */ + @Test public void testInvalidBigDecimalFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -385,6 +403,7 @@ public void testInvalidBigDecimalFormat() throws SQLException { * from column whose value can't be converted to an {@code boolean}. * @throws SQLException if failed. 
*/ + @Test public void testInvalidBooleanFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -404,6 +423,7 @@ public void testInvalidBooleanFormat() throws SQLException { * from column whose value can't be converted to an {@code boolean}. * @throws SQLException if failed. */ + @Test public void testInvalidObjectFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -423,6 +443,7 @@ public void testInvalidObjectFormat() throws SQLException { * from column whose value can't be converted to a {@link Date}. * @throws SQLException if failed. */ + @Test public void testInvalidDateFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -442,6 +463,7 @@ public void testInvalidDateFormat() throws SQLException { * from column whose value can't be converted to a {@link Time}. * @throws SQLException if failed. */ + @Test public void testInvalidTimeFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -461,6 +483,7 @@ public void testInvalidTimeFormat() throws SQLException { * from column whose value can't be converted to a {@link Timestamp}. * @throws SQLException if failed. */ + @Test public void testInvalidTimestampFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -480,6 +503,7 @@ public void testInvalidTimestampFormat() throws SQLException { * from column whose value can't be converted to a {@link URL}. * @throws SQLException if failed. 
*/ + @Test public void testInvalidUrlFormat() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -499,6 +523,7 @@ public void testInvalidUrlFormat() throws SQLException { * * @throws SQLException if failed. */ + @Test public void testNotNullViolation() throws SQLException { try (Connection conn = getConnection()) { conn.setSchema("PUBLIC"); @@ -528,6 +553,7 @@ public void testNotNullViolation() throws SQLException { * * @throws SQLException if failed. */ + @Test public void testNotNullRestrictionReadThroughCacheStore() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -548,6 +574,7 @@ public void testNotNullRestrictionReadThroughCacheStore() throws SQLException { * * @throws SQLException if failed. */ + @Test public void testNotNullRestrictionCacheInterceptor() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -566,6 +593,7 @@ public void testNotNullRestrictionCacheInterceptor() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testSelectWrongTable() throws SQLException { checkSqlErrorMessage("select from wrong", "42000", "Failed to parse query. Table \"WRONG\" not found"); @@ -576,6 +604,7 @@ public void testSelectWrongTable() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testSelectWrongColumnName() throws SQLException { checkSqlErrorMessage("select wrong from test", "42000", "Failed to parse query. Column \"WRONG\" not found"); @@ -586,6 +615,7 @@ public void testSelectWrongColumnName() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testSelectWrongSyntax() throws SQLException { checkSqlErrorMessage("select from test where", "42000", "Failed to parse query. 
Syntax error in SQL statement \"SELECT FROM TEST WHERE[*]"); @@ -596,6 +626,7 @@ public void testSelectWrongSyntax() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDmlWrongTable() throws SQLException { checkSqlErrorMessage("insert into wrong (id, val) values (3, 'val3')", "42000", "Failed to parse query. Table \"WRONG\" not found"); @@ -615,6 +646,7 @@ public void testDmlWrongTable() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDmlWrongColumnName() throws SQLException { checkSqlErrorMessage("insert into test (id, wrong) values (3, 'val3')", "42000", "Failed to parse query. Column \"WRONG\" not found"); @@ -634,6 +666,7 @@ public void testDmlWrongColumnName() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDmlWrongSyntax() throws SQLException { checkSqlErrorMessage("insert test (id, val) values (3, 'val3')", "42000", "Failed to parse query. Syntax error in SQL statement \"INSERT TEST[*] (ID, VAL)"); @@ -653,6 +686,7 @@ public void testDmlWrongSyntax() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDdlWrongTable() throws SQLException { checkSqlErrorMessage("create table test (id int primary key, val varchar)", "42000", "Table already exists: TEST"); @@ -675,6 +709,7 @@ public void testDdlWrongTable() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDdlWrongColumnName() throws SQLException { checkSqlErrorMessage("create index idx1 on test (wrong)", "42000", "Column doesn't exist: WRONG"); @@ -688,6 +723,7 @@ public void testDdlWrongColumnName() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDdlWrongSyntax() throws SQLException { checkSqlErrorMessage("create table test2 (id int wrong key, val varchar)", "42000", "Failed to parse query. 
Syntax error in SQL statement \"CREATE TABLE TEST2 (ID INT WRONG[*]"); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcLocalCachesSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcLocalCachesSelfTest.java index 9350e0d597241..639df29cdcc65 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcLocalCachesSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcLocalCachesSelfTest.java @@ -25,10 +25,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.PROP_NODE_ID; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -37,10 +37,8 @@ /** * Test JDBC with several local caches. */ +@RunWith(JUnit4.class) public class JdbcLocalCachesSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -62,12 +60,6 @@ public class JdbcLocalCachesSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -95,6 +87,7 @@ public class JdbcLocalCachesSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testCache1() throws Exception { Properties cfg = new Properties(); @@ -123,6 +116,7 @@ public void testCache1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCache2() throws Exception { Properties cfg = new Properties(); @@ -147,4 +141,4 @@ public void testCache2() throws Exception { conn.close(); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcMetadataSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcMetadataSelfTest.java index f270910a77949..f262ffe768f6a 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcMetadataSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcMetadataSelfTest.java @@ -34,10 +34,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Types.INTEGER; import static java.sql.Types.OTHER; @@ -48,10 +48,8 @@ /** * Metadata tests. */ +@RunWith(JUnit4.class) public class JdbcMetadataSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. 
*/ private static final String URL = "jdbc:ignite://127.0.0.1/pers"; @@ -59,12 +57,6 @@ public class JdbcMetadataSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -108,6 +100,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testResultSetMetaData() throws Exception { Statement stmt = DriverManager.getConnection(URL).createStatement(); @@ -176,6 +169,7 @@ public void testGetTables() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetColumns() throws Exception { final boolean primitivesInformationIsLostAfterStore = ignite(0).configuration().getMarshaller() instanceof BinaryMarshaller; try (Connection conn = DriverManager.getConnection(URL)) { @@ -269,6 +263,7 @@ public void testGetColumns() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMetadataResultSetClose() throws Exception { try (Connection conn = DriverManager.getConnection(URL); ResultSet tbls = conn.getMetaData().getTables(null, null, "%", null)) { diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcNoDefaultCacheTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcNoDefaultCacheTest.java index adb5c306aeb94..ac4142d1e84e9 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcNoDefaultCacheTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcNoDefaultCacheTest.java @@ -26,19 +26,17 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JdbcNoDefaultCacheTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** First cache name. */ private static final String CACHE1_NAME = "cache1"; @@ -57,12 +55,6 @@ public class JdbcNoDefaultCacheTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheConfiguration(CACHE1_NAME), cacheConfiguration(CACHE2_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -109,6 +101,7 @@ protected String getUrl() { /** * @throws Exception If failed. 
*/ + @Test public void testDefaults() throws Exception { String url = getUrl(); @@ -124,6 +117,7 @@ public void testDefaults() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoCacheNameQuery() throws Exception { Statement stmt; diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoLegacyQuerySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoLegacyQuerySelfTest.java index 4fa7ba5443870..4f63caf8bd2b0 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoLegacyQuerySelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoLegacyQuerySelfTest.java @@ -18,10 +18,14 @@ package org.apache.ignite.jdbc; import java.sql.ResultSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for Jdbc driver query without class on client */ +@RunWith(JUnit4.class) public class JdbcPojoLegacyQuerySelfTest extends AbstractJdbcPojoQuerySelfTest { /** URL. */ private static final String URL = "jdbc:ignite://127.0.0.1/"; @@ -29,6 +33,7 @@ public class JdbcPojoLegacyQuerySelfTest extends AbstractJdbcPojoQuerySelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testJdbcQuery() throws Exception { stmt.execute("select * from JdbcTestObject"); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoQuerySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoQuerySelfTest.java index 6729d0491c67d..31ac22f8206de 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoQuerySelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPojoQuerySelfTest.java @@ -18,12 +18,16 @@ package org.apache.ignite.jdbc; import java.sql.ResultSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteJdbcDriver.CFG_URL_PREFIX; /** * Test for Jdbc driver query without class on client */ +@RunWith(JUnit4.class) public class JdbcPojoQuerySelfTest extends AbstractJdbcPojoQuerySelfTest { /** URL. */ private static final String URL = CFG_URL_PREFIX + "cache=default@modules/clients/src/test/config/jdbc-bin-config.xml"; @@ -31,6 +35,7 @@ public class JdbcPojoQuerySelfTest extends AbstractJdbcPojoQuerySelfTest { /** * @throws Exception If failed. */ + @Test public void testJdbcQueryTask2() throws Exception { stmt.execute("select * from JdbcTestObject"); @@ -42,6 +47,7 @@ public void testJdbcQueryTask2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJdbcQueryTask1() throws Exception { ResultSet rs = stmt.executeQuery("select * from JdbcTestObject"); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPreparedStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPreparedStatementSelfTest.java index 9bdb7d804b2ae..a23280c7b3654 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPreparedStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcPreparedStatementSelfTest.java @@ -32,10 +32,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Types.BIGINT; import static java.sql.Types.BINARY; @@ -57,10 +57,8 @@ /** * Prepared statement test. */ +@RunWith(JUnit4.class) public class JdbcPreparedStatementSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. */ private static final String URL = "jdbc:ignite://127.0.0.1/"; @@ -85,12 +83,6 @@ public class JdbcPreparedStatementSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -151,6 +143,7 @@ public class JdbcPreparedStatementSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testRepeatableUsage() throws Exception { stmt = conn.prepareStatement("select * from TestObject where id = ?"); @@ -186,6 +179,7 @@ public void testRepeatableUsage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { stmt = conn.prepareStatement("select * from TestObject where boolVal is not distinct from ?"); @@ -223,6 +217,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { stmt = conn.prepareStatement("select * from TestObject where byteVal is not distinct from ?"); @@ -260,6 +255,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { stmt = conn.prepareStatement("select * from TestObject where shortVal is not distinct from ?"); @@ -297,6 +293,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInteger() throws Exception { stmt = conn.prepareStatement("select * from TestObject where intVal is not distinct from ?"); @@ -334,6 +331,7 @@ public void testInteger() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { stmt = conn.prepareStatement("select * from TestObject where longVal is not distinct from ?"); @@ -371,6 +369,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { stmt = conn.prepareStatement("select * from TestObject where floatVal is not distinct from ?"); @@ -408,6 +407,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { stmt = conn.prepareStatement("select * from TestObject where doubleVal is not distinct from ?"); @@ -445,6 +445,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBigDecimal() throws Exception { stmt = conn.prepareStatement("select * from TestObject where bigVal is not distinct from ?"); @@ -482,6 +483,7 @@ public void testBigDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testString() throws Exception { stmt = conn.prepareStatement("select * from TestObject where strVal is not distinct from ?"); @@ -519,6 +521,7 @@ public void testString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArray() throws Exception { stmt = conn.prepareStatement("select * from TestObject where arrVal is not distinct from ?"); @@ -556,6 +559,7 @@ public void testArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDate() throws Exception { stmt = conn.prepareStatement("select * from TestObject where dateVal is not distinct from ?"); @@ -593,6 +597,7 @@ public void testDate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTime() throws Exception { stmt = conn.prepareStatement("select * from TestObject where timeVal is not distinct from ?"); @@ -630,6 +635,7 @@ public void testTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { stmt = conn.prepareStatement("select * from TestObject where tsVal is not distinct from ?"); @@ -667,6 +673,7 @@ public void testTimestamp() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testUrl() throws Exception { stmt = conn.prepareStatement("select * from TestObject where urlVal is not distinct from ?"); @@ -773,4 +780,4 @@ private TestObject(int id) { this.id = id; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcResultSetSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcResultSetSelfTest.java index 0fe55f2b0a63f..467a1fcf56bb2 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcResultSetSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcResultSetSelfTest.java @@ -28,9 +28,6 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; @@ -44,6 +41,9 @@ import java.util.Arrays; import java.util.Objects; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -52,10 +52,8 @@ * Result set test. */ @SuppressWarnings("FloatingPointEquality") +@RunWith(JUnit4.class) public class JdbcResultSetSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. 
*/ private static final String URL = "jdbc:ignite://127.0.0.1/"; @@ -83,12 +81,6 @@ public class JdbcResultSetSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -156,6 +148,7 @@ private TestObject createObjectWithData(int id) throws MalformedURLException { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -176,6 +169,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -196,6 +190,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -216,6 +211,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInteger() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -236,6 +232,7 @@ public void testInteger() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -256,6 +253,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -276,6 +274,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -296,6 +295,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBigDecimal() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -316,6 +316,7 @@ public void testBigDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testString() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -336,6 +337,7 @@ public void testString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArray() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -357,6 +359,7 @@ public void testArray() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testDate() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -378,6 +381,7 @@ public void testDate() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testTime() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -398,6 +402,7 @@ public void testTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -418,6 +423,7 @@ public void testTimestamp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUrl() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -507,6 +513,7 @@ private static String removeIdHash(String str) { /** * @throws Exception If failed. */ + @Test public void testObject() throws Exception { final Ignite ignite = ignite(0); final boolean binaryMarshaller = ignite.configuration().getMarshaller() instanceof BinaryMarshaller; @@ -541,6 +548,7 @@ public void testObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNavigation() throws Exception { ResultSet rs = stmt.executeQuery("select * from TestObject where id > 0"); @@ -578,6 +586,7 @@ public void testNavigation() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFindColumn() throws Exception { final ResultSet rs = stmt.executeQuery(SQL); @@ -790,4 +799,4 @@ private TestObjectField(int a, String b) { return S.toString(TestObjectField.class, this); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcStatementSelfTest.java index 4d72b8e19720d..5f94238a09fec 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcStatementSelfTest.java @@ -28,10 +28,10 @@ import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -39,10 +39,8 @@ /** * Statement test. */ +@RunWith(JUnit4.class) public class JdbcStatementSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. 
*/ private static final String URL = "jdbc:ignite://127.0.0.1/"; @@ -70,12 +68,6 @@ public class JdbcStatementSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; @@ -117,6 +109,7 @@ public class JdbcStatementSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testExecuteQuery() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -149,6 +142,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testExecute() throws Exception { assert stmt.execute(SQL); @@ -185,6 +179,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testMaxRows() throws Exception { stmt.setMaxRows(1); @@ -283,4 +278,4 @@ private Person(int id, String firstName, String lastName, int age) { this.age = age; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcVersionMismatchSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcVersionMismatchSelfTest.java new file mode 100644 index 0000000000000..19a1be60bfc46 --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/JdbcVersionMismatchSelfTest.java @@ -0,0 +1,176 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.jdbc; + +import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; +import org.apache.ignite.internal.processors.odbc.SqlStateCode; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * JDBC version mismatch test. + */ +@RunWith(JUnit4.class) +public class JdbcVersionMismatchSelfTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + startGrid(); + + try (Connection conn = connect()) { + executeUpdate(conn, + "CREATE TABLE test (a INT PRIMARY KEY, b INT, c VARCHAR) WITH \"atomicity=TRANSACTIONAL_SNAPSHOT, cache_name=TEST\""); + + executeUpdate(conn, "INSERT INTO test VALUES (1, 1, 'test_1')"); + } + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testVersionMismatchJdbc() throws Exception { + try (Connection conn1 = connect(); Connection conn2 = connect()) { + conn1.setAutoCommit(false); + conn2.setAutoCommit(false); + + // Start first transaction and observe some values. 
+ assertEquals(1, executeQuery(conn1, "SELECT * FROM test").size()); + + // Change values while first transaction is still in progress. + executeUpdate(conn2, "INSERT INTO test VALUES (2, 2, 'test_2')"); + executeUpdate(conn2, "COMMIT"); + assertEquals(2, executeQuery(conn2, "SELECT * FROM test").size()); + + // Force version mismatch. + try { + executeUpdate(conn1, "INSERT INTO test VALUES (2, 2, 'test_2')"); + + fail(); + } + catch (SQLException e) { + assertEquals(SqlStateCode.SERIALIZATION_FAILURE, e.getSQLState()); + assertEquals(IgniteQueryErrorCode.TRANSACTION_SERIALIZATION_ERROR, e.getErrorCode()); + + assertNotNull(e.getMessage()); + assertTrue(e.getMessage().contains("Cannot serialize transaction due to write conflict")); + } + + // Subsequent call should cause exception due to TX being rolled back. + try { + executeQuery(conn1, "SELECT * FROM test").size(); + + fail(); + } + catch (SQLException e) { + assertEquals(SqlStateCode.TRANSACTION_STATE_EXCEPTION, e.getSQLState()); + assertEquals(IgniteQueryErrorCode.TRANSACTION_COMPLETED, e.getErrorCode()); + + assertNotNull(e.getMessage()); + assertTrue(e.getMessage().contains("Transaction is already completed")); + } + + // Commit should fail. + try { + conn1.commit(); + + fail(); + } + catch (SQLException e) { + // Cannot pass proper error codes for now + assertEquals(SqlStateCode.INTERNAL_ERROR, e.getSQLState()); + assertEquals(IgniteQueryErrorCode.UNKNOWN, e.getErrorCode()); + + assertNotNull(e.getMessage()); + assertTrue(e.getMessage().contains("Failed to finish transaction because it has been rolled back")); + } + + // Rollback should work. + conn1.rollback(); + + // Subsequent calls should work fine. + assertEquals(2, executeQuery(conn2, "SELECT * FROM test").size()); + } + } + + /** + * Establish JDBC connection. + * + * @return Connection. + * @throws Exception If failed. 
+ */ + private Connection connect() throws Exception { + return DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800"); + } + + /** + * Execute update statement. + * + * @param conn Connection. + * @param sql SQL. + * @throws Exception If failed. + */ + private static void executeUpdate(Connection conn, String sql) throws Exception { + try (Statement stmt = conn.createStatement()) { + stmt.executeUpdate(sql); + } + } + + /** + * Execute query. + * + * @param conn Connection. + * @param sql SQL. + * @return Result. + * @throws Exception If failed. + */ + private static List<List<Object>> executeQuery(Connection conn, String sql) throws Exception { + List<List<Object>> rows = new ArrayList<>(); + + try (Statement stmt = conn.createStatement()) { + try (ResultSet rs = stmt.executeQuery(sql)) { + int colCnt = rs.getMetaData().getColumnCount(); + + while (rs.next()) { + List<Object> row = new ArrayList<>(colCnt); + + for (int i = 0; i < colCnt; i++) + row.add(rs.getObject(i + 1)); + + rows.add(row); + } + } + } + + return rows; + } +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverMvccTestSuite.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverMvccTestSuite.java index 6d8933dfe143f..f89bc44dcc2a1 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverMvccTestSuite.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverMvccTestSuite.java @@ -17,30 +17,38 @@ package org.apache.ignite.jdbc.suite; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; +import org.apache.ignite.jdbc.JdbcVersionMismatchSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinConnectionMvccEnabledSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinTransactionsClientAutoCommitComplexSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinTransactionsClientNoAutoCommitComplexSelfTest; -import org.apache.ignite.jdbc.thin.JdbcThinTransactionsWithMvccEnabledSelfTest;
import org.apache.ignite.jdbc.thin.JdbcThinTransactionsServerAutoCommitComplexSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinTransactionsServerNoAutoCommitComplexSelfTest; +import org.apache.ignite.jdbc.thin.JdbcThinTransactionsWithMvccEnabledSelfTest; +import org.apache.ignite.jdbc.thin.MvccJdbcTransactionFinishOnDeactivatedClusterSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; -public class IgniteJdbcDriverMvccTestSuite extends TestSuite { +/** */ +@RunWith(AllTests.class) +public class IgniteJdbcDriverMvccTestSuite { /** * @return JDBC Driver Test Suite. - * @throws Exception In case of error. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite JDBC Driver Test Suite"); - suite.addTest(new TestSuite(JdbcThinConnectionMvccEnabledSelfTest.class)); - + suite.addTest(new JUnit4TestAdapter(JdbcThinConnectionMvccEnabledSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcVersionMismatchSelfTest.class)); + // Transactions - suite.addTest(new TestSuite(JdbcThinTransactionsWithMvccEnabledSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsClientAutoCommitComplexSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsServerAutoCommitComplexSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsClientNoAutoCommitComplexSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsServerNoAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsWithMvccEnabledSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsClientAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsServerAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsClientNoAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsServerNoAutoCommitComplexSelfTest.class)); 
+ suite.addTest(new JUnit4TestAdapter(MvccJdbcTransactionFinishOnDeactivatedClusterSelfTest.class)); return suite; } diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverTestSuite.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverTestSuite.java index 2e98d689deaa6..013734bc72942 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverTestSuite.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/suite/IgniteJdbcDriverTestSuite.java @@ -3,11 +3,12 @@ * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -17,6 +18,7 @@ package org.apache.ignite.jdbc.suite; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.jdbc2.JdbcBlobTest; import org.apache.ignite.internal.jdbc2.JdbcBulkLoadSelfTest; @@ -51,6 +53,7 @@ import org.apache.ignite.jdbc.thin.JdbcThinConnectionMvccEnabledSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinConnectionSSLTest; import org.apache.ignite.jdbc.thin.JdbcThinConnectionSelfTest; +import org.apache.ignite.jdbc.thin.JdbcThinConnectionTimeoutSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinDataSourceSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinDeleteStatementSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinDynamicIndexAtomicPartitionedNearSelfTest; @@ -66,14 +69,18 @@ import org.apache.ignite.jdbc.thin.JdbcThinLocalQueriesSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinMergeStatementSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinMergeStatementSkipReducerOnUpdateSelfTest; +import org.apache.ignite.jdbc.thin.JdbcThinMetadataPrimaryKeysSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinMetadataSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinMissingLongArrayResultsTest; import org.apache.ignite.jdbc.thin.JdbcThinNoDefaultSchemaTest; +import org.apache.ignite.jdbc.thin.JdbcThinPreparedStatementLeakTest; import org.apache.ignite.jdbc.thin.JdbcThinPreparedStatementSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinResultSetSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinSchemaCaseTest; import org.apache.ignite.jdbc.thin.JdbcThinSelectAfterAlterTable; +import org.apache.ignite.jdbc.thin.JdbcThinStatementCancelSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinStatementSelfTest; +import org.apache.ignite.jdbc.thin.JdbcThinStatementTimeoutSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinStreamingNotOrderedSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinStreamingOrderedSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinTcpIoTest; @@ -85,137 
+92,145 @@ import org.apache.ignite.jdbc.thin.JdbcThinUpdateStatementSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinUpdateStatementSkipReducerOnUpdateSelfTest; import org.apache.ignite.jdbc.thin.JdbcThinWalModeChangeSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * JDBC driver test suite. */ -public class IgniteJdbcDriverTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteJdbcDriverTestSuite { /** * @return JDBC Driver Test Suite. - * @throws Exception In case of error. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite JDBC Driver Test Suite"); // Thin client based driver tests. - suite.addTest(new TestSuite(JdbcConnectionSelfTest.class)); - suite.addTest(new TestSuite(JdbcStatementSelfTest.class)); - suite.addTest(new TestSuite(JdbcPreparedStatementSelfTest.class)); - suite.addTest(new TestSuite(JdbcResultSetSelfTest.class)); - suite.addTest(new TestSuite(JdbcComplexQuerySelfTest.class)); - suite.addTest(new TestSuite(JdbcMetadataSelfTest.class)); - suite.addTest(new TestSuite(JdbcEmptyCacheSelfTest.class)); - suite.addTest(new TestSuite(JdbcLocalCachesSelfTest.class)); - suite.addTest(new TestSuite(JdbcNoDefaultCacheTest.class)); - suite.addTest(new TestSuite(JdbcDefaultNoOpCacheTest.class)); - suite.addTest(new TestSuite(JdbcPojoQuerySelfTest.class)); - suite.addTest(new TestSuite(JdbcPojoLegacyQuerySelfTest.class)); - suite.addTest(new TestSuite(JdbcConnectionReopenTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcConnectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcPreparedStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcResultSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcComplexQuerySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcMetadataSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(JdbcEmptyCacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcLocalCachesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcNoDefaultCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcDefaultNoOpCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcPojoQuerySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcPojoLegacyQuerySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcConnectionReopenTest.class)); // Ignite client node based driver tests - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcConnectionSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcSpringSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcStatementSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcPreparedStatementSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcResultSetSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcComplexQuerySelfTest.class)); - suite.addTest(new TestSuite(JdbcDistributedJoinsQueryTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcMetadataSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcEmptyCacheSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcLocalCachesSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcNoDefaultCacheTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDefaultNoOpCacheTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcMergeStatementSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcBinaryMarshallerMergeStatementSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcUpdateStatementSelfTest.class)); - suite.addTest(new 
TestSuite(org.apache.ignite.internal.jdbc2.JdbcInsertStatementSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcBinaryMarshallerInsertStatementSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDeleteStatementSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcStatementBatchingSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcErrorsSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcStreamingToPublicCacheTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcNoCacheStreamingSelfTest.class)); - suite.addTest(new TestSuite(JdbcBulkLoadSelfTest.class)); - - suite.addTest(new TestSuite(JdbcBlobTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcStreamingSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinStreamingNotOrderedSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinStreamingOrderedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcConnectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcSpringSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcPreparedStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcResultSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcComplexQuerySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcDistributedJoinsQueryTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcMetadataSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcEmptyCacheSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcLocalCachesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcNoDefaultCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDefaultNoOpCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcMergeStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcBinaryMarshallerMergeStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcUpdateStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcInsertStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcBinaryMarshallerInsertStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDeleteStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcStatementBatchingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcErrorsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcStreamingToPublicCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcNoCacheStreamingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcBulkLoadSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(JdbcBlobTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcStreamingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinStreamingNotOrderedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinStreamingOrderedSelfTest.class)); // DDL tests. 
- suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexAtomicPartitionedNearSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexAtomicPartitionedSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexAtomicReplicatedSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexTransactionalPartitionedNearSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexTransactionalPartitionedSelfTest.class)); - suite.addTest(new TestSuite(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexTransactionalReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexAtomicPartitionedNearSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexAtomicPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexAtomicReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexTransactionalPartitionedNearSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexTransactionalPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(org.apache.ignite.internal.jdbc2.JdbcDynamicIndexTransactionalReplicatedSelfTest.class)); // New thin JDBC - suite.addTest(new TestSuite(JdbcThinConnectionSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinConnectionMvccEnabledSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinConnectionMultipleAddressesTest.class)); - suite.addTest(new TestSuite(JdbcThinTcpIoTest.class)); - suite.addTest(new TestSuite(JdbcThinConnectionSSLTest.class)); - suite.addTest(new TestSuite(JdbcThinDataSourceSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinPreparedStatementSelfTest.class)); - suite.addTest(new 
TestSuite(JdbcThinResultSetSelfTest.class)); - - suite.addTest(new TestSuite(JdbcThinStatementSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinComplexQuerySelfTest.class)); - suite.addTest(new TestSuite(JdbcThinNoDefaultSchemaTest.class)); - suite.addTest(new TestSuite(JdbcThinSchemaCaseTest.class)); - suite.addTest(new TestSuite(JdbcThinEmptyCacheSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinMetadataSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinErrorsSelfTest.class)); - - suite.addTest(new TestSuite(JdbcThinInsertStatementSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinUpdateStatementSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinMergeStatementSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinDeleteStatementSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinAutoCloseServerCursorTest.class)); - suite.addTest(new TestSuite(JdbcThinBatchSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinMissingLongArrayResultsTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinConnectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinConnectionMvccEnabledSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinConnectionMultipleAddressesTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTcpIoTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinConnectionSSLTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinDataSourceSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinPreparedStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinResultSetSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(JdbcThinStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinComplexQuerySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinNoDefaultSchemaTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinSchemaCaseTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinEmptyCacheSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(JdbcThinMetadataSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinMetadataPrimaryKeysSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinErrorsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinStatementCancelSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinConnectionTimeoutSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinStatementTimeoutSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(JdbcThinInsertStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinUpdateStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinMergeStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinDeleteStatementSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinAutoCloseServerCursorTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinBatchSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinMissingLongArrayResultsTest.class)); // New thin JDBC driver, DDL tests - suite.addTest(new TestSuite(JdbcThinDynamicIndexAtomicPartitionedNearSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinDynamicIndexAtomicPartitionedSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinDynamicIndexAtomicReplicatedSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinDynamicIndexTransactionalPartitionedNearSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinDynamicIndexTransactionalPartitionedSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinDynamicIndexTransactionalReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinDynamicIndexAtomicPartitionedNearSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinDynamicIndexAtomicPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinDynamicIndexAtomicReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinDynamicIndexTransactionalPartitionedNearSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(JdbcThinDynamicIndexTransactionalPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinDynamicIndexTransactionalReplicatedSelfTest.class)); // New thin JDBC driver, DML tests - suite.addTest(new TestSuite(JdbcThinBulkLoadAtomicPartitionedNearSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinBulkLoadAtomicPartitionedSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinBulkLoadAtomicReplicatedSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinBulkLoadTransactionalPartitionedNearSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinBulkLoadTransactionalPartitionedSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinBulkLoadTransactionalReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinBulkLoadAtomicPartitionedNearSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinBulkLoadAtomicPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinBulkLoadAtomicReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinBulkLoadTransactionalPartitionedNearSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinBulkLoadTransactionalPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinBulkLoadTransactionalReplicatedSelfTest.class)); // New thin JDBC driver, full SQL tests - suite.addTest(new TestSuite(JdbcThinComplexDmlDdlSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinComplexDmlDdlSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinSelectAfterAlterTable.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinSelectAfterAlterTable.class)); // Update on server - suite.addTest(new TestSuite(JdbcThinInsertStatementSkipReducerOnUpdateSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinUpdateStatementSkipReducerOnUpdateSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinMergeStatementSkipReducerOnUpdateSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinComplexDmlDdlSkipReducerOnUpdateSelfTest.class)); - 
suite.addTest(new TestSuite(JdbcThinComplexDmlDdlCustomSchemaSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinInsertStatementSkipReducerOnUpdateSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinUpdateStatementSkipReducerOnUpdateSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinMergeStatementSkipReducerOnUpdateSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinComplexDmlDdlSkipReducerOnUpdateSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinComplexDmlDdlCustomSchemaSelfTest.class)); // Transactions - suite.addTest(new TestSuite(JdbcThinTransactionsSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsClientAutoCommitComplexSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsServerAutoCommitComplexSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsClientNoAutoCommitComplexSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinTransactionsServerNoAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsClientAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsServerAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsClientNoAutoCommitComplexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinTransactionsServerNoAutoCommitComplexSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinLocalQueriesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinLocalQueriesSelfTest.class)); // Various commands. 
- suite.addTest(new TestSuite(JdbcThinWalModeChangeSelfTest.class)); - suite.addTest(new TestSuite(JdbcThinAuthenticateConnectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinWalModeChangeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcThinAuthenticateConnectionSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(JdbcThinPreparedStatementLeakTest.class)); return suite; } diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAbstractDmlStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAbstractDmlStatementSelfTest.java index 936346760e8dd..1fb009a7ffb3d 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAbstractDmlStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAbstractDmlStatementSelfTest.java @@ -29,9 +29,6 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -40,9 +37,6 @@ * Statement test. */ public abstract class JdbcThinAbstractDmlStatementSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** SQL SELECT query for verification. 
*/ static final String SQL_SELECT = "select _key, id, firstName, lastName, age from Person"; @@ -95,12 +89,6 @@ protected Connection createConnection() throws SQLException { private IgniteConfiguration getConfiguration0(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(new ConnectorConfiguration()); return cfg; diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAuthenticateConnectionSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAuthenticateConnectionSelfTest.java index cb4d7f3cf70c9..7184e0295567a 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAuthenticateConnectionSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAuthenticateConnectionSelfTest.java @@ -27,19 +27,17 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.authentication.AuthorizationContext; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for authenticated an non authenticated JDBC thin connection. */ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinAuthenticateConnectionSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String URL = "jdbc:ignite:thin://127.0.0.1"; @@ -48,12 +46,6 @@ public class JdbcThinAuthenticateConnectionSelfTest extends JdbcThinAbstractSelf @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setMarshaller(new BinaryMarshaller()); cfg.setAuthenticationEnabled(true); @@ -93,6 +85,7 @@ public class JdbcThinAuthenticateConnectionSelfTest extends JdbcThinAbstractSelf /** * @throws Exception If failed. */ + @Test public void testConnection() throws Exception { checkConnection(URL, "ignite", "ignite"); checkConnection(URL, "another_user", "passwd"); @@ -102,6 +95,7 @@ public void testConnection() throws Exception { /** */ + @Test public void testInvalidUserPassword() { String err = "Unauthenticated sessions are prohibited"; checkInvalidUserPassword(URL, null, null, err); @@ -119,6 +113,7 @@ public void testInvalidUserPassword() { /** * @throws SQLException On failed. */ + @Test public void testUserSqlOnAuthorized() throws SQLException { try (Connection conn = DriverManager.getConnection(URL, "ignite", "ignite")) { conn.createStatement().execute("CREATE USER test WITH PASSWORD 'test'"); @@ -139,6 +134,7 @@ public void testUserSqlOnAuthorized() throws SQLException { /** * @throws SQLException On error. */ + @Test public void testUserSqlWithNotIgniteUser() throws SQLException { try (Connection conn = DriverManager.getConnection(URL, "another_user", "passwd")) { String err = "User management operations are not allowed for user"; @@ -157,6 +153,7 @@ public void testUserSqlWithNotIgniteUser() throws SQLException { /** * @throws SQLException On error. 
*/ + @Test public void testQuotedUsername() throws SQLException { // Spaces checkUserPassword(" test", " "); @@ -235,4 +232,4 @@ private void checkUnauthorizedOperation(final Connection conn, final String sql, } }, SQLException.class, err); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAutoCloseServerCursorTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAutoCloseServerCursorTest.java index bb2696f570cc5..22768b64d5551 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAutoCloseServerCursorTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinAutoCloseServerCursorTest.java @@ -30,10 +30,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -42,10 +42,8 @@ * Tests an optional optimization that server cursor is closed automatically * when last result set page is transmitted. */ +@RunWith(JUnit4.class) public class JdbcThinAutoCloseServerCursorTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. 
*/ private static final String CACHE_NAME = "cache"; @@ -67,12 +65,6 @@ public class JdbcThinAutoCloseServerCursorTest extends JdbcThinAbstractSelfTest cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -95,7 +87,7 @@ public class JdbcThinAutoCloseServerCursorTest extends JdbcThinAbstractSelfTest * * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testQuery() throws Exception { IgniteCache cache = grid(0).cache(CACHE_NAME); @@ -189,6 +181,7 @@ public void testQuery() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsert() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { conn.setSchema('"' + CACHE_NAME + '"'); @@ -218,6 +211,7 @@ public void testInsert() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdate() throws Exception { IgniteCache cache = grid(0).cache(CACHE_NAME); @@ -243,6 +237,7 @@ public void testUpdate() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelete() throws Exception { IgniteCache cache = grid(0).cache(CACHE_NAME); @@ -288,7 +283,6 @@ private void checkResultSet(ResultSet rs, Person[] persons) throws Exception { /** * Person. */ - @SuppressWarnings("UnusedDeclaration") static class Person implements Serializable { /** ID. 
*/ @QuerySqlField diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBatchSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBatchSelfTest.java index fe7c170729595..376ec1d648f03 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBatchSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBatchSelfTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.odbc.SqlStateCode; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.junit.Assert.assertArrayEquals; /** * Statement test. */ +@RunWith(JUnit4.class) public class JdbcThinBatchSelfTest extends JdbcThinAbstractDmlStatementSelfTest { /** SQL query. */ private static final String SQL_PREPARED = "insert into Person(_key, id, firstName, lastName, age) values " + @@ -75,6 +79,7 @@ public class JdbcThinBatchSelfTest extends JdbcThinAbstractDmlStatementSelfTest /** * @throws SQLException If failed. */ + @Test public void testBatch() throws SQLException { final int BATCH_SIZE = 10; @@ -94,6 +99,7 @@ public void testBatch() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchOnClosedStatement() throws SQLException { final Statement stmt2 = conn.createStatement(); final PreparedStatement pstmt2 = conn.prepareStatement(""); @@ -153,6 +159,7 @@ public void testBatchOnClosedStatement() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchException() throws SQLException { final int BATCH_SIZE = 7; @@ -195,6 +202,7 @@ public void testBatchException() throws SQLException { /** * @throws SQLException If failed. 
*/ + @Test public void testBatchParseException() throws SQLException { final int BATCH_SIZE = 7; @@ -237,6 +245,7 @@ public void testBatchParseException() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchMerge() throws SQLException { final int BATCH_SIZE = 7; @@ -256,6 +265,7 @@ public void testBatchMerge() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchMergeParseException() throws SQLException { final int BATCH_SIZE = 7; @@ -295,10 +305,10 @@ public void testBatchMergeParseException() throws SQLException { } } - /** * @throws SQLException If failed. */ + @Test public void testBatchKeyDuplicatesException() throws SQLException { final int BATCH_SIZE = 7; @@ -343,6 +353,7 @@ public void testBatchKeyDuplicatesException() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testHeterogeneousBatch() throws SQLException { stmt.addBatch("insert into Person (_key, id, firstName, lastName, age) values ('p0', 0, 'Name0', 'Lastname0', 10)"); stmt.addBatch("insert into Person (_key, id, firstName, lastName, age) values ('p1', 1, 'Name1', 'Lastname1', 20), ('p2', 2, 'Name2', 'Lastname2', 30)"); @@ -360,6 +371,7 @@ public void testHeterogeneousBatch() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testHeterogeneousBatchException() throws SQLException { stmt.addBatch("insert into Person (_key, id, firstName, lastName, age) values ('p0', 0, 'Name0', 'Lastname0', 10)"); stmt.addBatch("insert into Person (_key, id, firstName, lastName, age) values ('p1', 1, 'Name1', 'Lastname1', 20), ('p2', 2, 'Name2', 'Lastname2', 30)"); @@ -390,6 +402,7 @@ public void testHeterogeneousBatchException() throws SQLException { /** * @throws SQLException If failed. 
*/ + @Test public void testBatchClear() throws SQLException { final int BATCH_SIZE = 7; @@ -412,6 +425,7 @@ public void testBatchClear() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchPrepared() throws SQLException { final int BATCH_SIZE = 10; @@ -438,6 +452,7 @@ public void testBatchPrepared() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchExceptionPrepared() throws SQLException { final int BATCH_SIZE = 7; @@ -502,6 +517,7 @@ public void testBatchExceptionPrepared() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchMergePrepared() throws SQLException { final int BATCH_SIZE = 10; @@ -531,6 +547,7 @@ public void testBatchMergePrepared() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchMergeExceptionPrepared() throws SQLException { final int BATCH_SIZE = 7; @@ -611,6 +628,7 @@ private void populateTable(int size) throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchUpdatePrepared() throws SQLException { final int BATCH_SIZE = 10; @@ -635,6 +653,7 @@ public void testBatchUpdatePrepared() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchUpdateExceptionPrepared() throws SQLException { final int BATCH_SIZE = 7; @@ -690,6 +709,7 @@ public void testBatchUpdateExceptionPrepared() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchDeletePrepared() throws SQLException { final int BATCH_SIZE = 10; @@ -714,6 +734,7 @@ public void testBatchDeletePrepared() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testBatchDeleteExceptionPrepared() throws SQLException { final int BATCH_SIZE = 7; @@ -769,6 +790,7 @@ public void testBatchDeleteExceptionPrepared() throws SQLException { /** * @throws SQLException If failed. 
*/ + @Test public void testBatchClearPrepared() throws SQLException { final int BATCH_SIZE = 10; diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBulkLoadAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBulkLoadAbstractSelfTest.java index 2a4c7995fbdf6..a1432fcdab76e 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBulkLoadAbstractSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinBulkLoadAbstractSelfTest.java @@ -17,22 +17,12 @@ package org.apache.ignite.jdbc.thin; -import org.apache.ignite.cache.CacheAtomicityMode; -import org.apache.ignite.cache.CacheMode; -import org.apache.ignite.cache.QueryEntity; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.internal.processors.bulkload.BulkLoadCsvFormat; -import org.apache.ignite.internal.processors.bulkload.BulkLoadCsvParser; -import org.apache.ignite.internal.processors.query.QueryUtils; -import org.apache.ignite.lang.IgniteClosure; -import org.apache.ignite.testframework.GridTestUtils; - import java.nio.ByteBuffer; import java.nio.charset.Charset; import java.nio.charset.CodingErrorAction; import java.nio.charset.UnsupportedCharsetException; import java.sql.BatchUpdateException; +import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; @@ -41,6 +31,20 @@ import java.util.Collections; import java.util.Objects; import java.util.concurrent.Callable; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.bulkload.BulkLoadCsvFormat; 
+import org.apache.ignite.internal.processors.bulkload.BulkLoadCsvParser; +import org.apache.ignite.internal.processors.query.QueryUtils; +import org.apache.ignite.lang.IgniteClosure; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -49,6 +53,7 @@ /** * COPY statement tests. */ +@RunWith(JUnit4.class) public abstract class JdbcThinBulkLoadAbstractSelfTest extends JdbcThinAbstractDmlStatementSelfTest { /** Subdirectory with CSV files */ private static final String CSV_FILE_SUBDIR = "/modules/clients/src/test/resources/"; @@ -194,6 +199,7 @@ private CacheConfiguration cacheConfigWithQueryEntity() { * * @throws SQLException If failed. */ + @Test public void testBasicStatement() throws SQLException { int updatesCnt = stmt.executeUpdate(BASIC_SQL_COPY_STMT); @@ -208,6 +214,7 @@ public void testBasicStatement() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testEmptyFile() throws SQLException { int updatesCnt = stmt.executeUpdate( "copy from '" + BULKLOAD_EMPTY_CSV_FILE + "' into " + TBL_NAME + @@ -224,6 +231,7 @@ public void testEmptyFile() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testOneLineFile() throws SQLException { int updatesCnt = stmt.executeUpdate( "copy from '" + BULKLOAD_ONE_LINE_CSV_FILE + "' into " + TBL_NAME + @@ -238,6 +246,7 @@ public void testOneLineFile() throws SQLException { /** * Verifies that error is reported for empty charset name. */ + @Test public void testEmptyCharset() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -254,6 +263,7 @@ public void testEmptyCharset() { /** * Verifies that error is reported for unsupported charset name. 
*/ + @Test public void testNotSupportedCharset() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -270,6 +280,7 @@ public void testNotSupportedCharset() { /** * Verifies that error is reported for unknown charset name. */ + @Test public void testUnknownCharset() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -288,6 +299,7 @@ public void testUnknownCharset() { * * @throws SQLException If failed. */ + @Test public void testAsciiCharset() throws SQLException { int updatesCnt = stmt.executeUpdate( "copy from '" + BULKLOAD_TWO_LINES_CSV_FILE + "'" + @@ -306,6 +318,7 @@ public void testAsciiCharset() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testUtf8Charset() throws SQLException { checkBulkLoadWithCharset(BULKLOAD_UTF8_CSV_FILE, "utf-8"); } @@ -315,6 +328,7 @@ public void testUtf8Charset() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testWin1251Charset() throws SQLException { checkBulkLoadWithCharset(BULKLOAD_CP1251_CSV_FILE, "windows-1251"); } @@ -344,6 +358,7 @@ private void checkBulkLoadWithCharset(String fileName, String charsetName) throw * * @throws SQLException If failed. */ + @Test public void testWrongCharset_Utf8AsWin1251() throws SQLException { checkBulkLoadWithWrongCharset(BULKLOAD_UTF8_CSV_FILE, "UTF-8", "windows-1251"); } @@ -354,6 +369,7 @@ public void testWrongCharset_Utf8AsWin1251() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testWrongCharset_Win1251AsUtf8() throws SQLException { checkBulkLoadWithWrongCharset(BULKLOAD_CP1251_CSV_FILE, "windows-1251", "UTF-8"); } @@ -364,6 +380,7 @@ public void testWrongCharset_Win1251AsUtf8() throws SQLException { * * @throws SQLException If failed. 
*/ + @Test public void testWrongCharset_Utf8AsAscii() throws SQLException { checkBulkLoadWithWrongCharset(BULKLOAD_UTF8_CSV_FILE, "UTF-8", "ascii"); } @@ -374,6 +391,7 @@ public void testWrongCharset_Utf8AsAscii() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testWrongCharset_Win1251AsAscii() throws SQLException { checkBulkLoadWithWrongCharset(BULKLOAD_CP1251_CSV_FILE, "windows-1251", "ascii"); } @@ -384,6 +402,7 @@ public void testWrongCharset_Win1251AsAscii() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testPacketSize_1() throws SQLException { int updatesCnt = stmt.executeUpdate(BASIC_SQL_COPY_STMT + " packet_size 1"); @@ -398,6 +417,7 @@ public void testPacketSize_1() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDefaultCharset() throws SQLException { int updatesCnt = stmt.executeUpdate( "copy from '" + BULKLOAD_UTF8_CSV_FILE + "' into " + TBL_NAME + @@ -409,6 +429,33 @@ public void testDefaultCharset() throws SQLException { checkNationalCacheContents(TBL_NAME); } + /** + * Imports a CSV file into a table through a non-affinity (client) node and checks the created entries using a SELECT statement. + * + * @throws Exception If failed.
+ */ + @Test + public void testBulkLoadToNonAffinityNode() throws Exception { + IgniteEx client = startGrid(getConfiguration("client").setClientMode(true)); + + try (Connection con = connect(client, null)) { + con.setSchema('"' + DEFAULT_CACHE_NAME + '"'); + + try (Statement stmt = con.createStatement()) { + int updatesCnt = stmt.executeUpdate( + "copy from '" + BULKLOAD_UTF8_CSV_FILE + "' into " + TBL_NAME + + " (_key, age, firstName, lastName)" + + " format csv"); + + assertEquals(2, updatesCnt); + + checkNationalCacheContents(TBL_NAME); + } + } + + stopGrid(client.name()); + } + /** * Imports two-entry CSV file with UTF-8 characters into a table using packet size of one byte * (thus splitting each two-byte UTF-8 character into two packets) @@ -416,6 +463,7 @@ public void testDefaultCharset() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testDefaultCharsetPacketSize1() throws SQLException { int updatesCnt = stmt.executeUpdate( "copy from '" + BULKLOAD_UTF8_CSV_FILE + "' into " + TBL_NAME + @@ -430,6 +478,7 @@ public void testDefaultCharsetPacketSize1() throws SQLException { /** * Checks that error is reported for a non-existent file. */ + @Test public void testWrongFileName() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -446,6 +495,7 @@ public void testWrongFileName() { /** * Checks that error is reported if the destination table is missing. */ + @Test public void testMissingTable() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -462,6 +512,7 @@ public void testMissingTable() { /** * Checks that error is reported when a non-existing column is specified in the SQL command. 
*/ + @Test public void testWrongColumnName() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -478,6 +529,7 @@ public void testWrongColumnName() { /** * Checks that error is reported if field read from CSV file cannot be converted to the type of the column. */ + @Test public void testWrongColumnType() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -496,6 +548,7 @@ public void testWrongColumnType() { * * @throws SQLException If failed. */ + @Test public void testFieldsSubset() throws SQLException { int updatesCnt = stmt.executeUpdate( "copy from '" + BULKLOAD_TWO_LINES_CSV_FILE + "'" + @@ -516,6 +569,7 @@ public void testFieldsSubset() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testCreateAndBulkLoadTable() throws SQLException { String tblName = QueryUtils.DFLT_SCHEMA + ".\"PersonTbl\""; @@ -541,6 +595,7 @@ public void testCreateAndBulkLoadTable() throws SQLException { * @throws SQLException If failed. */ @SuppressWarnings("unchecked") + @Test public void testConfigureQueryEntityAndBulkLoad() throws SQLException { ignite(0).getOrCreateCache(cacheConfigWithQueryEntity()); @@ -556,6 +611,7 @@ public void testConfigureQueryEntityAndBulkLoad() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testMultipleStatement() throws SQLException { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -581,6 +637,7 @@ public void testMultipleStatement() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testExecuteQuery() throws SQLException { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -596,6 +653,7 @@ public void testExecuteQuery() throws SQLException { * * @throws SQLException If failed. 
*/ + @Test public void testExecute() throws SQLException { boolean isRowSet = stmt.execute(BASIC_SQL_COPY_STMT); @@ -609,6 +667,7 @@ public void testExecute() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testPreparedStatementWithExecuteUpdate() throws SQLException { PreparedStatement pstmt = conn.prepareStatement(BASIC_SQL_COPY_STMT); @@ -624,6 +683,7 @@ public void testPreparedStatementWithExecuteUpdate() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testPreparedStatementWithParameter() throws SQLException { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -646,6 +706,7 @@ public void testPreparedStatementWithParameter() throws SQLException { * * @throws SQLException If failed. */ + @Test public void testPreparedStatementWithExecute() throws SQLException { PreparedStatement pstmt = conn.prepareStatement(BASIC_SQL_COPY_STMT); @@ -659,6 +720,7 @@ public void testPreparedStatementWithExecute() throws SQLException { /** * Verifies that COPY command is rejected by PreparedStatement.executeQuery(). 
*/ + @Test public void testPreparedStatementWithExecuteQuery() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlCustomSchemaSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlCustomSchemaSelfTest.java index 8fd9356533be5..e4d7d1519c2d2 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlCustomSchemaSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlCustomSchemaSelfTest.java @@ -22,10 +22,14 @@ import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base class for complex SQL tests based on JDBC driver. */ +@RunWith(JUnit4.class) public class JdbcThinComplexDmlDdlCustomSchemaSelfTest extends JdbcThinComplexDmlDdlSelfTest { /** Simple schema. */ private static final String SCHEMA_1 = "SCHEMA_1"; @@ -55,6 +59,7 @@ public class JdbcThinComplexDmlDdlCustomSchemaSelfTest extends JdbcThinComplexDm * * @throws Exception If failed. */ + @Test public void testCreateSelectDropEscapedSchema() throws Exception { try { curSchema = SCHEMA_2; @@ -71,8 +76,9 @@ public void testCreateSelectDropEscapedSchema() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMultiple() throws Exception { testCreateSelectDrop(); testCreateSelectDrop(); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlSelfTest.java index 36ee34a5fc950..ebadf8e59c0e6 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexDmlDdlSelfTest.java @@ -34,19 +34,17 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base class for complex SQL tests based on JDBC driver. */ +@RunWith(JUnit4.class) public class JdbcThinComplexDmlDdlSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache mode to test with. */ private final CacheMode cacheMode = CacheMode.PARTITIONED; @@ -68,12 +66,6 @@ public class JdbcThinComplexDmlDdlSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -127,7 +119,7 @@ protected Connection createConnection() throws SQLException { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCreateSelectDrop() throws Exception { conn = createConnection(); @@ -477,4 +469,4 @@ private Row(Object[] row) { return Arrays.toString(row); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexQuerySelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexQuerySelfTest.java index 692de7ca7de7c..fd87164ef655e 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexQuerySelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinComplexQuerySelfTest.java @@ -28,9 +28,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -39,10 +39,8 @@ /** * Tests for complex queries (joins, etc.). */ +@RunWith(JUnit4.class) public class JdbcThinComplexQuerySelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. 
*/ private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; @@ -55,12 +53,6 @@ public class JdbcThinComplexQuerySelfTest extends JdbcThinAbstractSelfTest { cfg.setCacheConfiguration(cacheConfiguration()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -130,6 +122,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testJoin() throws Exception { ResultSet rs = stmt.executeQuery( "select p.id, p.name, o.name as orgName from \"pers\".Person p, \"org\".Organization o where p.orgId = o.id"); @@ -165,6 +158,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testJoinWithoutAlias() throws Exception { ResultSet rs = stmt.executeQuery( "select p.id, p.name, o.name from \"pers\".Person p, \"org\".Organization o where p.orgId = o.id"); @@ -203,6 +197,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @Test public void testIn() throws Exception { ResultSet rs = stmt.executeQuery("select name from \"pers\".Person where age in (25, 35)"); @@ -223,6 +218,7 @@ public void testIn() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBetween() throws Exception { ResultSet rs = stmt.executeQuery("select name from \"pers\".Person where age between 24 and 36"); @@ -243,6 +239,7 @@ public void testBetween() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCalculatedValue() throws Exception { ResultSet rs = stmt.executeQuery("select age * 2 from \"pers\".Person"); @@ -322,4 +319,4 @@ private Organization(int id, String name) { this.name = name; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMultipleAddressesTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMultipleAddressesTest.java index 4f6651c2c0b7b..c8ad285a6d15b 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMultipleAddressesTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMultipleAddressesTest.java @@ -40,20 +40,18 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.mxbean.ClientProcessorMXBean; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * JDBC driver reconnect test with multiple addresses. */ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinConnectionMultipleAddressesTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Nodes count. 
*/ private static final int NODES_CNT = 3; @@ -83,12 +81,6 @@ private static String url() { cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setMarshaller(new BinaryMarshaller()); cfg.setClientConnectorConfiguration( @@ -131,6 +123,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep /** * @throws Exception If failed. */ + @Test public void testMultipleAddressesConnect() throws Exception { try (Connection conn = DriverManager.getConnection(url())) { try (Statement stmt = conn.createStatement()) { @@ -148,6 +141,7 @@ public void testMultipleAddressesConnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPortRangeConnect() throws Exception { try (Connection conn = DriverManager.getConnection(URL_PORT_RANGE)) { try (Statement stmt = conn.createStatement()) { @@ -165,6 +159,7 @@ public void testPortRangeConnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleAddressesOneNodeFailoverOnStatementExecute() throws Exception { checkReconnectOnStatementExecute(url(), false); } @@ -172,6 +167,7 @@ public void testMultipleAddressesOneNodeFailoverOnStatementExecute() throws Exce /** * @throws Exception If failed. */ + @Test public void testMultipleAddressesAllNodesFailoverOnStatementExecute() throws Exception { checkReconnectOnStatementExecute(url(), true); } @@ -179,6 +175,7 @@ public void testMultipleAddressesAllNodesFailoverOnStatementExecute() throws Exc /** * @throws Exception If failed. */ + @Test public void testPortRangeAllNodesFailoverOnStatementExecute() throws Exception { checkReconnectOnStatementExecute(URL_PORT_RANGE, true); } @@ -186,6 +183,7 @@ public void testPortRangeAllNodesFailoverOnStatementExecute() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultipleAddressesOneNodeFailoverOnResultSet() throws Exception { checkReconnectOnResultSet(url(), false); } @@ -193,6 +191,7 @@ public void testMultipleAddressesOneNodeFailoverOnResultSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleAddressesAllNodesFailoverOnResultSet() throws Exception { checkReconnectOnResultSet(url(), true); } @@ -200,6 +199,7 @@ public void testMultipleAddressesAllNodesFailoverOnResultSet() throws Exception /** * @throws Exception If failed. */ + @Test public void testPortRangeAllNodesFailoverOnResultSet() throws Exception { checkReconnectOnResultSet(URL_PORT_RANGE, true); } @@ -207,6 +207,7 @@ public void testPortRangeAllNodesFailoverOnResultSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleAddressesOneNodeFailoverOnMeta() throws Exception { checkReconnectOnMeta(url(), false); } @@ -214,6 +215,7 @@ public void testMultipleAddressesOneNodeFailoverOnMeta() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleAddressesAllNodesFailoverOnMeta() throws Exception { checkReconnectOnMeta(url(), true); } @@ -221,6 +223,7 @@ public void testMultipleAddressesAllNodesFailoverOnMeta() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPortRangeAllNodesFailoverOnMeta() throws Exception { checkReconnectOnMeta(URL_PORT_RANGE, true); } @@ -228,6 +231,7 @@ public void testPortRangeAllNodesFailoverOnMeta() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleAddressesOneNodeFailoverOnStreaming() throws Exception { checkReconnectOnStreaming(url(), false); } @@ -235,6 +239,7 @@ public void testMultipleAddressesOneNodeFailoverOnStreaming() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClientConnectionMXBean() throws Exception { Connection conn = DriverManager.getConnection(URL_PORT_RANGE); @@ -548,4 +553,4 @@ private void restart(boolean all) throws Exception { if (all) startGrids(NODES_CNT); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMvccEnabledSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMvccEnabledSelfTest.java index 0196cb2a73465..895e26a8fd4a4 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMvccEnabledSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionMvccEnabledSelfTest.java @@ -27,12 +27,12 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Connection.TRANSACTION_NONE; import static java.sql.Connection.TRANSACTION_READ_COMMITTED; @@ -44,28 +44,19 @@ * Connection test. */ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinConnectionMvccEnabledSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String URL = "jdbc:ignite:thin://127.0.0.1"; /** {@inheritDoc} */ - @SuppressWarnings("deprecation") + @SuppressWarnings({"deprecation", "unchecked"}) @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); + cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME).setNearConfiguration(null)); cfg.setMarshaller(new BinaryMarshaller()); - cfg.setGridLogger(new GridStringLogger()); return cfg; @@ -74,9 +65,8 @@ public class JdbcThinConnectionMvccEnabledSelfTest extends JdbcThinAbstractSelfT /** * @param name Cache name. * @return Cache configuration. - * @throws Exception In case of error. */ - private CacheConfiguration cacheConfiguration(@NotNull String name) throws Exception { + private CacheConfiguration cacheConfiguration(@NotNull String name) { CacheConfiguration cfg = defaultCacheConfiguration(); cfg.setName(name); @@ -101,6 +91,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep /** * @throws Exception If failed. */ + @Test public void testMetadataDefaults() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -119,6 +110,7 @@ public void testMetadataDefaults() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetAutoCommit() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assertTrue(conn.getMetaData().supportsTransactions()); @@ -147,6 +139,7 @@ public void testGetSetAutoCommit() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCommit() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assertTrue(conn.getMetaData().supportsTransactions()); @@ -182,6 +175,7 @@ public void testCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollback() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assertTrue(conn.getMetaData().supportsTransactions()); @@ -217,6 +211,7 @@ public void testRollback() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetSavepoint() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsSavepoints(); @@ -256,6 +251,7 @@ public void testSetSavepoint() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetSavepointName() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsSavepoints(); @@ -310,6 +306,7 @@ public void testSetSavepointName() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRollbackSavePoint() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsSavepoints(); @@ -375,4 +372,4 @@ private Savepoint getFakeSavepoint() { } }; } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSSLTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSSLTest.java index 355a198c56672..be7cbd6024606 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSSLTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSSLTest.java @@ -32,20 +32,18 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.ssl.SslContextFactory; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * SSL connection test. */ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinConnectionSSLTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Client key store path. 
*/ private static final String CLI_KEY_STORE_PATH = U.getIgniteHome() + "/modules/clients/src/test/keystore/client.jks"; @@ -72,12 +70,6 @@ public class JdbcThinConnectionSSLTest extends JdbcThinAbstractSelfTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setMarshaller(new BinaryMarshaller()); cfg.setClientConnectorConfiguration( @@ -95,6 +87,7 @@ public class JdbcThinConnectionSSLTest extends JdbcThinAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testConnection() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -118,6 +111,7 @@ public void testConnection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConnectionTrustAll() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -140,6 +134,7 @@ public void testConnectionTrustAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConnectionUseIgniteFactory() throws Exception { setSslCtxFactoryToIgnite = true; sslCtxFactory = getTestSslContextFactory(); @@ -163,6 +158,7 @@ public void testConnectionUseIgniteFactory() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDefaultContext() throws Exception { // Store exists default SSL context to restore after test. final SSLContext dfltSslCtx = SSLContext.getDefault(); @@ -200,6 +196,7 @@ public void testDefaultContext() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContextFactory() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -218,6 +215,7 @@ public void testContextFactory() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSslServerAndPlainClient() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -241,6 +239,7 @@ public void testSslServerAndPlainClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvalidKeystoreConfig() throws Exception { setSslCtxFactoryToCli = true; @@ -329,6 +328,7 @@ public void testInvalidKeystoreConfig() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUnknownClientCertificate() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -356,6 +356,7 @@ public void testUnknownClientCertificate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUnsupportedSslProtocol() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -384,6 +385,7 @@ public void testUnsupportedSslProtocol() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvalidKeyAlgorithm() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -412,6 +414,7 @@ public void testInvalidKeyAlgorithm() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvalidKeyStoreType() throws Exception { setSslCtxFactoryToCli = true; sslCtxFactory = getTestSslContextFactory(); @@ -476,4 +479,4 @@ public static class TestSSLFactory implements Factory { return getTestSslContextFactory().create().getSocketFactory(); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSelfTest.java index 80397e65e7aea..76bff4fa23ea5 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionSelfTest.java @@ -47,12 +47,12 @@ import org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo; import org.apache.ignite.internal.util.HostAndPortRange; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Connection.TRANSACTION_NONE; import static java.sql.Connection.TRANSACTION_READ_COMMITTED; @@ -66,15 +66,14 @@ import static java.sql.Statement.NO_GENERATED_KEYS; import static java.sql.Statement.RETURN_GENERATED_KEYS; import static org.apache.ignite.configuration.ClientConnectorConfiguration.DFLT_PORT; +import static org.apache.ignite.internal.processors.odbc.SqlStateCode.TRANSACTION_STATE_EXCEPTION; /** * Connection test. 
*/ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinConnectionSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String URL = "jdbc:ignite:thin://127.0.0.1"; @@ -93,12 +92,6 @@ public class JdbcThinConnectionSelfTest extends JdbcThinAbstractSelfTest { cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setMarshaller(new BinaryMarshaller()); cfg.setGridLogger(new GridStringLogger()); @@ -130,6 +123,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep * @throws Exception If failed. */ @SuppressWarnings({"EmptyTryBlock", "unused"}) + @Test public void testDefaults() throws Exception { try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) { // No-op. @@ -143,6 +137,7 @@ public void testDefaults() throws Exception { /** * Test invalid endpoint. */ + @Test public void testInvalidEndpoint() { assertInvalid("jdbc:ignite:thin://", "Host name is empty"); assertInvalid("jdbc:ignite:thin://:10000", "Host name is empty"); @@ -159,6 +154,7 @@ public void testInvalidEndpoint() { * * @throws Exception If failed. */ + @Test public void testSocketBuffers() throws Exception { final int dfltDufSize = 64 * 1024; @@ -196,6 +192,7 @@ public void testSocketBuffers() throws Exception { * * @throws Exception If failed. */ + @Test public void testSocketBuffersSemicolon() throws Exception { final int dfltDufSize = 64 * 1024; @@ -228,6 +225,7 @@ public void testSocketBuffersSemicolon() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSqlHints() throws Exception { try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) { assertHints(conn, false, false, false, false, false, false); @@ -268,6 +266,7 @@ public void testSqlHints() throws Exception { * * @throws Exception If failed. */ + @Test public void testSqlHintsSemicolon() throws Exception { try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1;distributedJoins=true")) { assertHints(conn, true, false, false, false, false, false); @@ -326,6 +325,7 @@ private void assertHints(Connection conn, boolean distributedJoins, boolean enfo * * @throws Exception If failed. */ + @Test public void testTcpNoDelay() throws Exception { assertInvalid("jdbc:ignite:thin://127.0.0.1?tcpNoDelay=0", "Invalid property value. [name=tcpNoDelay, val=0, choices=[true, false]]"); @@ -365,6 +365,7 @@ public void testTcpNoDelay() throws Exception { * * @throws Exception If failed. */ + @Test public void testTcpNoDelaySemicolon() throws Exception { assertInvalid("jdbc:ignite:thin://127.0.0.1;tcpNoDelay=0", "Invalid property value. [name=tcpNoDelay, val=0, choices=[true, false]]"); @@ -400,6 +401,7 @@ public void testTcpNoDelaySemicolon() throws Exception { * * @throws Exception If failed. */ + @Test public void testAutoCloseServerCursorProperty() throws Exception { String url = "jdbc:ignite:thin://127.0.0.1?autoCloseServerCursor"; @@ -436,6 +438,7 @@ public void testAutoCloseServerCursorProperty() throws Exception { * * @throws Exception If failed. */ + @Test public void testAutoCloseServerCursorPropertySemicolon() throws Exception { String url = "jdbc:ignite:thin://127.0.0.1;autoCloseServerCursor"; @@ -468,6 +471,7 @@ public void testAutoCloseServerCursorPropertySemicolon() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSchema() throws Exception { assertInvalid("jdbc:ignite:thin://127.0.0.1/qwe/qwe", "Invalid URL format (only schema name is allowed in URL path parameter 'host:port[/schemaName]')" ); @@ -490,6 +494,7 @@ public void testSchema() throws Exception { * * @throws Exception If failed. */ + @Test public void testSchemaSemicolon() throws Exception { try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1;schema=public")) { assertEquals("Invalid schema", "PUBLIC", conn.getSchema()); @@ -538,6 +543,7 @@ private void assertInvalid(final String url, String errMsg) { * @throws Exception If failed. */ @SuppressWarnings("ThrowableNotThrown") + @Test public void testClose() throws Exception { final Connection conn; @@ -564,6 +570,7 @@ public void testClose() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateStatement() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { try (Statement stmt = conn.createStatement()) { @@ -586,6 +593,7 @@ public void testCreateStatement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateStatement2() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { int [] rsTypes = new int[] @@ -639,6 +647,7 @@ public void testCreateStatement2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateStatement3() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { int [] rsTypes = new int[] @@ -698,6 +707,7 @@ public void testCreateStatement3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrepareStatement() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // null query text @@ -731,6 +741,7 @@ public void testPrepareStatement() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPrepareStatement3() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { final String sqlText = "select * from test where param = ?"; @@ -791,6 +802,7 @@ public void testPrepareStatement3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrepareStatement4() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { final String sqlText = "select * from test where param = ?"; @@ -856,6 +868,7 @@ public void testPrepareStatement4() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrepareStatementAutoGeneratedKeysUnsupported() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { final String sqlText = "insert into test (val) values (?)"; @@ -905,6 +918,7 @@ public void testPrepareStatementAutoGeneratedKeysUnsupported() throws Exception /** * @throws Exception If failed. */ + @Test public void testPrepareCallUnsupported() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { final String sqlText = "exec test()"; @@ -945,6 +959,7 @@ public void testPrepareCallUnsupported() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNativeSql() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // null query text @@ -976,31 +991,23 @@ public void testNativeSql() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetAutoCommit() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { - assertTrue(conn.getAutoCommit()); + boolean ac0 = conn.getAutoCommit(); - // Cannot disable autocommit when MVCC is disabled. 
- GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - conn.setAutoCommit(false); + conn.setAutoCommit(!ac0); + // assert no exception - return null; - } - }, - SQLException.class, - "MVCC must be enabled in order to invoke transactional operation: COMMIT" - ); - - assertTrue(conn.getAutoCommit()); + conn.setAutoCommit(ac0); + // assert no exception conn.close(); // Exception when called on closed connection checkConnectionClosed(new RunnableX() { @Override public void run() throws Exception { - conn.setAutoCommit(true); + conn.setAutoCommit(ac0); } }); } @@ -1009,6 +1016,7 @@ public void testGetSetAutoCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCommit() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Should not be called in auto-commit mode @@ -1024,19 +1032,6 @@ public void testCommit() throws Exception { "Transaction cannot be committed explicitly in auto-commit mode" ); - // Cannot disable autocommit when MVCC is disabled. - GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - conn.setAutoCommit(false); - - return null; - } - }, - SQLException.class, - "MVCC must be enabled in order to invoke transactional operation: COMMIT" - ); - assertTrue(conn.getAutoCommit()); // Should not be called in auto-commit mode @@ -1066,6 +1061,7 @@ public void testCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollback() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Should not be called in auto-commit mode @@ -1081,21 +1077,6 @@ public void testRollback() throws Exception { "Transaction cannot be rolled back explicitly in auto-commit mode." ); - // Cannot disable autocommit when MVCC is disabled. 
- GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - conn.setAutoCommit(false); - - return null; - } - }, - SQLException.class, - "MVCC must be enabled in order to invoke transactional operation: COMMIT" - ); - - assertTrue(conn.getAutoCommit()); - conn.close(); // Exception when called on closed connection @@ -1107,9 +1088,54 @@ public void testRollback() throws Exception { } } + /** + * @throws Exception if failed. + */ + @Test + public void testBeginFailsWhenMvccIsDisabled() throws Exception { + try (Connection conn = DriverManager.getConnection(URL)) { + conn.createStatement().execute("BEGIN"); + + fail("Exception is expected"); + } + catch (SQLException e) { + assertEquals(TRANSACTION_STATE_EXCEPTION, e.getSQLState()); + } + } + + /** + * @throws Exception if failed. + */ + @Test + public void testCommitIgnoredWhenMvccIsDisabled() throws Exception { + try (Connection conn = DriverManager.getConnection(URL)) { + conn.setAutoCommit(false); + conn.createStatement().execute("COMMIT"); + + conn.commit(); + } + // assert no exception + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRollbackIgnoredWhenMvccIsDisabled() throws Exception { + try (Connection conn = DriverManager.getConnection(URL)) { + conn.setAutoCommit(false); + + conn.createStatement().execute("ROLLBACK"); + + conn.rollback(); + } + // assert no exception + } + /** * @throws Exception If failed. */ + @Test public void testGetMetaData() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -1130,6 +1156,7 @@ public void testGetMetaData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetReadOnly() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { conn.close(); @@ -1153,6 +1180,7 @@ public void testGetSetReadOnly() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetSetCatalog() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsCatalogsInDataManipulation(); @@ -1184,6 +1212,7 @@ public void testGetSetCatalog() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetTransactionIsolation() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Invalid parameter value @@ -1234,6 +1263,7 @@ public void testGetSetTransactionIsolation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClearGetWarnings() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { SQLWarning warn = conn.getWarnings(); @@ -1267,6 +1297,7 @@ public void testClearGetWarnings() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetTypeMap() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { GridTestUtils.assertThrows(log, @@ -1322,6 +1353,7 @@ public void testGetSetTypeMap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetHoldability() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // default value @@ -1375,6 +1407,7 @@ public void testGetSetHoldability() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetSavepoint() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsSavepoints(); @@ -1392,21 +1425,6 @@ public void testSetSavepoint() throws Exception { "Savepoint cannot be set in auto-commit mode" ); - // Cannot disable autocommit when MVCC is disabled. 
- GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - conn.setAutoCommit(false); - - return null; - } - }, - SQLException.class, - "MVCC must be enabled in order to invoke transactional operation: COMMIT" - ); - - assertTrue(conn.getAutoCommit()); - conn.close(); checkConnectionClosed(new RunnableX() { @@ -1420,6 +1438,7 @@ public void testSetSavepoint() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetSavepointName() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsSavepoints(); @@ -1452,21 +1471,6 @@ public void testSetSavepointName() throws Exception { "Savepoint cannot be set in auto-commit mode" ); - // Cannot disable autocommit when MVCC is disabled. - GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - conn.setAutoCommit(false); - - return null; - } - }, - SQLException.class, - "MVCC must be enabled in order to invoke transactional operation: COMMIT" - ); - - assertTrue(conn.getAutoCommit()); - conn.close(); checkConnectionClosed(new RunnableX() { @@ -1480,6 +1484,7 @@ public void testSetSavepointName() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollbackSavePoint() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsSavepoints(); @@ -1512,21 +1517,6 @@ public void testRollbackSavePoint() throws Exception { "Auto-commit mode" ); - // Cannot disable autocommit when MVCC is disabled. 
- GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - conn.setAutoCommit(false); - - return null; - } - }, - SQLException.class, - "MVCC must be enabled in order to invoke transactional operation: COMMIT" - ); - - assertTrue(conn.getAutoCommit()); - conn.close(); checkConnectionClosed(new RunnableX() { @@ -1540,6 +1530,7 @@ public void testRollbackSavePoint() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReleaseSavepoint() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert !conn.getMetaData().supportsSavepoints(); @@ -1578,6 +1569,7 @@ public void testReleaseSavepoint() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateClob() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Unsupported @@ -1608,6 +1600,7 @@ public void testCreateClob() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateBlob() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Unsupported @@ -1638,6 +1631,7 @@ public void testCreateBlob() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateNClob() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Unsupported @@ -1668,6 +1662,7 @@ public void testCreateNClob() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateSQLXML() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Unsupported @@ -1698,6 +1693,7 @@ public void testCreateSQLXML() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetClientInfoPair() throws Exception { // fail("https://issues.apache.org/jira/browse/IGNITE-5425"); @@ -1733,6 +1729,7 @@ public void testGetSetClientInfoPair() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetSetClientInfoProperties() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { final String name = "ApplicationName"; @@ -1771,6 +1768,7 @@ public void testGetSetClientInfoProperties() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateArrayOf() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { final String typeName = "varchar"; @@ -1811,6 +1809,7 @@ public void testCreateArrayOf() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateStruct() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // Invalid typename @@ -1847,6 +1846,7 @@ public void testCreateStruct() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetSetSchema() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assertEquals("PUBLIC", conn.getSchema()); @@ -1880,6 +1880,7 @@ public void testGetSetSchema() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAbort() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { //Invalid executor @@ -1906,6 +1907,7 @@ public void testAbort() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetSetNetworkTimeout() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // default @@ -1915,19 +1917,6 @@ public void testGetSetNetworkTimeout() throws Exception { final int timeout = 1000; - //Invalid executor - GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - conn.setNetworkTimeout(null, timeout); - - return null; - } - }, - SQLException.class, - "Executor cannot be null" - ); - //Invalid timeout GridTestUtils.assertThrows(log, new Callable() { @@ -1964,7 +1953,7 @@ public void testGetSetNetworkTimeout() throws Exception { /** * Test that attempting to supply invalid nested TX mode to driver fails on the client. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testInvalidNestedTxMode() { GridTestUtils.assertThrows(null, new Callable() { @Override public Object call() throws Exception { @@ -1980,7 +1969,7 @@ public void testInvalidNestedTxMode() { * We have to do this without explicit {@link Connection} as long as there's no other way to bypass validation and * supply a malformed {@link ConnectionProperties} to {@link JdbcThinTcpIo}. */ - @SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "ThrowFromFinallyBlock"}) + @Test public void testInvalidNestedTxModeOnServerSide() throws SQLException, NoSuchMethodException, IllegalAccessException, InvocationTargetException, InstantiationException, IOException { ConnectionPropertiesImpl connProps = new ConnectionPropertiesImpl(); @@ -2015,6 +2004,7 @@ public void testInvalidNestedTxModeOnServerSide() throws SQLException, NoSuchMet /** */ + @Test public void testSslClientAndPlainServer() { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -2032,6 +2022,7 @@ public void testSslClientAndPlainServer() { /** * @throws Exception If failed. 
*/ + @Test public void testMultithreadingException() throws Exception { int threadCnt = 10; @@ -2091,4 +2082,4 @@ private Savepoint getFakeSavepoint() { } }; } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionTimeoutSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionTimeoutSelfTest.java new file mode 100644 index 0000000000000..bfe3519b2b675 --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinConnectionTimeoutSelfTest.java @@ -0,0 +1,284 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.jdbc.thin; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.SQLTimeoutException; +import java.sql.Statement; +import java.util.concurrent.Executor; +import org.apache.ignite.cache.query.annotations.QuerySqlFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.ClientConnectorConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Jdbc Thin Connection timeout tests. + */ +@RunWith(JUnit4.class) +@SuppressWarnings("ThrowableNotThrown") +public class JdbcThinConnectionTimeoutSelfTest extends JdbcThinAbstractSelfTest { + + /** IP finder. */ + private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + + /** URL. */ + private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; + + /** Server thread pool size. */ + private static final int SERVER_THREAD_POOL_SIZE = 4; + + /** Nodes count. */ + private static final byte NODES_COUNT = 3; + + /** Max table rows. */ + private static final int MAX_ROWS = 10000; + + /** Executor stub. */ + private static final Executor EXECUTOR_STUB = (Runnable command) -> {}; + + /** Connection. */ + private Connection conn; + + /** Statement. 
*/ + private Statement stmt; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration cache = defaultCacheConfiguration(); + + cache.setCacheMode(PARTITIONED); + cache.setBackups(1); + cache.setWriteSynchronizationMode(FULL_SYNC); + cache.setSqlFunctionClasses(JdbcThinConnectionTimeoutSelfTest.class); + cache.setIndexedTypes(Integer.class, Integer.class); + + cfg.setCacheConfiguration(cache); + + TcpDiscoverySpi disco = new TcpDiscoverySpi(); + + disco.setIpFinder(IP_FINDER); + + cfg.setDiscoverySpi(disco); + + cfg.setClientConnectorConfiguration(new ClientConnectorConfiguration().setThreadPoolSize(SERVER_THREAD_POOL_SIZE)); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + startGridsMultiThreaded(NODES_COUNT); + + for (int i = 0; i < MAX_ROWS; ++i) + grid(0).cache(DEFAULT_CACHE_NAME).put(i, i); + } + + /** + * Called before execution of every test method in class. + * + * @throws Exception If failed. + */ + @Before + public void before() throws Exception { + conn = DriverManager.getConnection(URL); + + conn.setSchema('"' + DEFAULT_CACHE_NAME + '"'); + + stmt = conn.createStatement(); + + assert stmt != null; + assert !stmt.isClosed(); + } + + /** + * Called after execution of every test method in class. + * + * @throws Exception If failed. 
+ */ + @After + public void after() throws Exception { + if (stmt != null && !stmt.isClosed()) { + stmt.close(); + + assert stmt.isClosed(); + } + + conn.close(); + + assert stmt.isClosed(); + assert conn.isClosed(); + } + + /** + * + */ + @Test + public void testSettingNegativeConnectionTimeout() { + + GridTestUtils.assertThrows(log, + () -> { + conn.setNetworkTimeout(EXECUTOR_STUB, -1); + return null; + }, + SQLException.class, "Network timeout cannot be negative."); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testConnectionTimeoutRetrieval() throws Exception { + conn.setNetworkTimeout(EXECUTOR_STUB, 2000); + assertEquals(2000, conn.getNetworkTimeout()); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testConnectionTimeout() throws Exception { + + conn.setNetworkTimeout(EXECUTOR_STUB, 1000); + + GridTestUtils.assertThrows(log, + () -> { + stmt.execute("select sleep_func(2000)"); + return null; + }, + SQLException.class, "Connection timed out."); + + GridTestUtils.assertThrows(log, + () -> { + stmt.execute("select 1"); + return null; + }, + SQLException.class, "Statement is closed."); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testQueryTimeoutOccursBeforeConnectionTimeout() throws Exception { + conn.setNetworkTimeout(EXECUTOR_STUB, 10_000); + + stmt.setQueryTimeout(1); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeQuery("select sleep_func(10) from Integer;"); + + return null; + }, SQLTimeoutException.class, "The query was cancelled while executing."); + + stmt.execute("select 1"); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testConnectionTimeoutUpdate() throws Exception { + conn.setNetworkTimeout(EXECUTOR_STUB, 5000); + + stmt.execute("select sleep_func(1000)"); + + conn.setNetworkTimeout(EXECUTOR_STUB, 500); + + GridTestUtils.assertThrows(log, () -> { + stmt.execute("select sleep_func(1000)"); + return null; + }, SQLException.class, "Connection timed out."); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testCancelingTimedOutStatement() throws Exception { + conn.setNetworkTimeout(EXECUTOR_STUB, 1); + + GridTestUtils.runAsync( + () -> { + try { + Thread.sleep(1000); + + GridTestUtils.assertThrows(log, + () -> { + stmt.cancel(); + return null; + }, + SQLException.class, "Statement is closed."); + } + catch (Exception e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception"); + } + }); + + GridTestUtils.runAsync(() -> { + try { + GridTestUtils.assertThrows(log, + () -> { + stmt.execute("select sleep_func(1000)"); + return null; + }, + SQLException.class, "Connection timed out."); + } + catch (Exception e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception"); + } + }); + } + + /** + * @param v amount of milliseconds to sleep + * @return amount of milliseconds to sleep + */ + @SuppressWarnings("unused") + @QuerySqlFunction + public static int sleep_func(int v) { + try { + Thread.sleep(v); + } + catch (InterruptedException ignored) { + // No-op + } + return v; + } +} \ No newline at end of file diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDataSourceSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDataSourceSelfTest.java index 6040bed3e2cf8..4ef4506893daf 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDataSourceSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDataSourceSelfTest.java @@ -39,19 +39,17 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; 
import org.apache.ignite.internal.jdbc.thin.JdbcThinConnection; import org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * DataSource test. */ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinDataSourceSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @SuppressWarnings("deprecation") @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -59,12 +57,6 @@ public class JdbcThinDataSourceSelfTest extends JdbcThinAbstractSelfTest { cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setMarshaller(new BinaryMarshaller()); return cfg; @@ -98,6 +90,7 @@ private CacheConfiguration cacheConfiguration(String name) throws Exception { /** * @throws Exception If failed. */ + @Test public void testJndi() throws Exception { IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource(); @@ -117,6 +110,7 @@ public void testJndi() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUrlCompose() throws Exception { IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource(); @@ -139,6 +133,7 @@ public void testUrlCompose() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testResetUrl() throws Exception { IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource(); @@ -157,6 +152,7 @@ public void testResetUrl() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlHints() throws Exception { IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource(); @@ -195,6 +191,7 @@ public void testSqlHints() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTcpNoDelay() throws Exception { IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource(); @@ -218,6 +215,7 @@ public void testTcpNoDelay() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSocketBuffers() throws Exception { final IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource(); @@ -426,4 +424,4 @@ public static class JndiMockContext implements Context { return null; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDeleteStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDeleteStatementSelfTest.java index 9d0665b8fe3d2..d11e5bb958615 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDeleteStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDeleteStatementSelfTest.java @@ -20,14 +20,19 @@ import java.sql.SQLException; import java.util.Arrays; import java.util.HashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JdbcThinDeleteStatementSelfTest extends JdbcThinAbstractUpdateStatementSelfTest { /** * @throws SQLException If failed. */ + @Test public void testExecute() throws SQLException { conn.createStatement().execute("delete from Person where cast(substring(_key, 2, 1) as int) % 2 = 0"); @@ -38,6 +43,7 @@ public void testExecute() throws SQLException { /** * @throws SQLException If failed. 
*/ + @Test public void testExecuteUpdate() throws SQLException { int res = conn.createStatement().executeUpdate("delete from Person where cast(substring(_key, 2, 1) as int) % 2 = 0"); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDynamicIndexAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDynamicIndexAbstractSelfTest.java index 539713aeb4cdb..765ca839d2759 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDynamicIndexAbstractSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDynamicIndexAbstractSelfTest.java @@ -33,10 +33,14 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that checks indexes handling with JDBC. */ +@RunWith(JUnit4.class) public abstract class JdbcThinDynamicIndexAbstractSelfTest extends JdbcThinAbstractDmlStatementSelfTest { /** */ private static final String CREATE_INDEX = "create index idx on Person (id desc)"; @@ -143,6 +147,7 @@ private Object getSingleValue(ResultSet rs) throws SQLException { * Test that after index creation index is used by queries. * @throws SQLException If failed. */ + @Test public void testCreateIndex() throws SQLException { assertSize(3); @@ -173,6 +178,7 @@ public void testCreateIndex() throws SQLException { * Test that creating an index with duplicate name yields an error. * @throws SQLException If failed. */ + @Test public void testCreateIndexWithDuplicateName() throws SQLException { jdbcRun(CREATE_INDEX); @@ -189,6 +195,7 @@ public void testCreateIndexWithDuplicateName() throws SQLException { * Test that creating an index with duplicate name does not yield an error with {@code IF NOT EXISTS}. * @throws SQLException If failed. 
*/ + @Test public void testCreateIndexIfNotExists() throws SQLException { jdbcRun(CREATE_INDEX); @@ -200,6 +207,7 @@ public void testCreateIndexIfNotExists() throws SQLException { * Test that after index drop there are no attempts to use it, and data state remains intact. * @throws SQLException If failed. */ + @Test public void testDropIndex() throws SQLException { assertSize(3); @@ -229,6 +237,7 @@ public void testDropIndex() throws SQLException { /** * Test that dropping a non-existent index yields an error. */ + @Test public void testDropMissingIndex() { GridTestUtils.assertThrowsAnyCause(log, new Callable() { @Override public Void call() throws Exception { @@ -243,6 +252,7 @@ public void testDropMissingIndex() { * Test that dropping a non-existent index does not yield an error with {@code IF EXISTS}. * @throws SQLException If failed. */ + @Test public void testDropMissingIndexIfExists() throws SQLException { // Despite index missing, this does not yield an error. jdbcRun(DROP_INDEX_IF_EXISTS); @@ -252,6 +262,7 @@ public void testDropMissingIndexIfExists() throws SQLException { * Test that changes in cache affect index, and vice versa. * @throws SQLException If failed. 
*/ + @Test public void testIndexState() throws SQLException { IgniteCache cache = cache(); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinEmptyCacheSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinEmptyCacheSelfTest.java index 87c1428f78b34..47708d85e95a5 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinEmptyCacheSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinEmptyCacheSelfTest.java @@ -23,9 +23,9 @@ import java.sql.Statement; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -33,10 +33,8 @@ /** * Tests for empty cache. */ +@RunWith(JUnit4.class) public class JdbcThinEmptyCacheSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -62,12 +60,6 @@ public class JdbcThinEmptyCacheSelfTest extends JdbcThinAbstractSelfTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -104,6 +96,7 @@ public class JdbcThinEmptyCacheSelfTest extends JdbcThinAbstractSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testSelectNumber() throws Exception { ResultSet rs = stmt.executeQuery("select 1"); @@ -122,6 +115,7 @@ public void testSelectNumber() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSelectString() throws Exception { ResultSet rs = stmt.executeQuery("select 'str'"); @@ -135,4 +129,4 @@ public void testSelectString() throws Exception { assert cnt == 1; } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinErrorsSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinErrorsSelfTest.java index 2ff3e9f83d1ee..e14feb34b7b88 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinErrorsSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinErrorsSelfTest.java @@ -24,12 +24,16 @@ import java.sql.Statement; import org.apache.ignite.jdbc.JdbcErrorsAbstractSelfTest; import org.apache.ignite.lang.IgniteCallable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.junit.Assert.assertArrayEquals; /** * Test SQLSTATE codes propagation with thin client driver. */ +@RunWith(JUnit4.class) public class JdbcThinErrorsSelfTest extends JdbcErrorsAbstractSelfTest { /** {@inheritDoc} */ @Override protected Connection getConnection() throws SQLException { @@ -41,6 +45,7 @@ public class JdbcThinErrorsSelfTest extends JdbcErrorsAbstractSelfTest { * due to communication problems (not due to clear misconfiguration). * @throws SQLException if failed. */ + @Test public void testConnectionError() throws SQLException { checkErrorState(new IgniteCallable() { @Override public Void call() throws Exception { @@ -55,6 +60,7 @@ public void testConnectionError() throws SQLException { * Test error code for the case when connection string is a mess. * @throws SQLException if failed. 
*/ + @Test public void testInvalidConnectionStringFormat() throws SQLException { checkErrorState(new IgniteCallable() { @Override public Void call() throws Exception { @@ -71,6 +77,7 @@ public void testInvalidConnectionStringFormat() throws SQLException { * @throws SQLException if failed. */ @SuppressWarnings("MagicConstant") + @Test public void testInvalidIsolationLevel() throws SQLException { checkErrorState(new ConnClosure() { @Override public void run(Connection conn) throws Exception { @@ -83,7 +90,7 @@ public void testInvalidIsolationLevel() throws SQLException { * Test error code for the case when error is caused on batch execution. * @throws SQLException if failed. */ - @SuppressWarnings("MagicConstant") + @Test public void testBatchUpdateException() throws SQLException { try (final Connection conn = getConnection()) { try (Statement stmt = conn.createStatement()) { diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinInsertStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinInsertStatementSelfTest.java index bf55da0879eb2..50c197aad5ffb 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinInsertStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinInsertStatementSelfTest.java @@ -25,10 +25,14 @@ import java.util.HashSet; import java.util.concurrent.Callable; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Statement test. */ +@RunWith(JUnit4.class) public class JdbcThinInsertStatementSelfTest extends JdbcThinAbstractDmlStatementSelfTest { /** SQL query. */ private static final String SQL = "insert into Person(_key, id, firstName, lastName, age) values " + @@ -140,6 +144,7 @@ public class JdbcThinInsertStatementSelfTest extends JdbcThinAbstractDmlStatemen /** * @throws SQLException If failed. 
*/ + @Test public void testExecuteUpdate() throws SQLException { assertEquals(3, stmt.executeUpdate(SQL)); } @@ -147,6 +152,7 @@ public void testExecuteUpdate() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testPreparedExecuteUpdate() throws SQLException { assertEquals(3, prepStmt.executeUpdate()); } @@ -154,6 +160,7 @@ public void testPreparedExecuteUpdate() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testExecute() throws SQLException { assertFalse(stmt.execute(SQL)); } @@ -161,6 +168,7 @@ public void testExecute() throws SQLException { /** * @throws SQLException If failed. */ + @Test public void testPreparedExecute() throws SQLException { assertFalse(prepStmt.execute()); } @@ -168,6 +176,7 @@ public void testPreparedExecute() throws SQLException { /** * */ + @Test public void testDuplicateKeys() { jcache(0).put("p2", new Person(2, "Joe", "Black", 35)); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinLocalQueriesSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinLocalQueriesSelfTest.java index 1e28e52d3e3d6..7bb7168e6e6e7 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinLocalQueriesSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinLocalQueriesSelfTest.java @@ -23,10 +23,14 @@ import java.util.Map; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that replicated-only query is executed locally. 
*/ +@RunWith(JUnit4.class) public class JdbcThinLocalQueriesSelfTest extends JdbcThinAbstractSelfTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { @@ -47,6 +51,7 @@ public class JdbcThinLocalQueriesSelfTest extends JdbcThinAbstractSelfTest { /** * */ + @Test public void testLocalThinJdbcQuery() throws SQLException { try (Connection c = connect(grid(0), "replicatedOnly=true")) { execute(c, "CREATE TABLE Company(id int primary key, name varchar) WITH " + diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMergeStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMergeStatementSelfTest.java index 9d9467f3bd754..11911ba83f07f 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMergeStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMergeStatementSelfTest.java @@ -21,10 +21,14 @@ import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * MERGE statement test. */ +@RunWith(JUnit4.class) public class JdbcThinMergeStatementSelfTest extends JdbcThinAbstractDmlStatementSelfTest { /** SQL query. */ private static final String SQL = "merge into Person(_key, id, firstName, lastName, age) values " + @@ -117,6 +121,7 @@ public class JdbcThinMergeStatementSelfTest extends JdbcThinAbstractDmlStatement /** * @throws SQLException If failed. */ + @Test public void testExecuteUpdate() throws SQLException { assertEquals(3, stmt.executeUpdate(SQL)); } @@ -124,6 +129,7 @@ public void testExecuteUpdate() throws SQLException { /** * @throws SQLException If failed. 
*/ + @Test public void testExecute() throws SQLException { assertFalse(stmt.execute(SQL)); } diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMetadataPrimaryKeysSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMetadataPrimaryKeysSelfTest.java new file mode 100644 index 0000000000000..a4ef3119b3c2b --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMetadataPrimaryKeysSelfTest.java @@ -0,0 +1,160 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.jdbc.thin; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Verifies that primary keys in the metadata are valid. + */ +@RunWith(JUnit4.class) +public class JdbcThinMetadataPrimaryKeysSelfTest extends GridCommonAbstractTest { + /** Url. 
*/ + private static final String URL = "jdbc:ignite:thin://127.0.0.1"; + + /** COLUMN_NAME column index in the metadata table. */ + private static final int COL_NAME_IDX = 4; + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + startGrid(1); + } + + /** + * Execute update sql operation using new connection. + * + * @param sql update SQL query. + * @return update count. + * @throws SQLException on error. + */ + private int executeUpdate(String sql) throws SQLException { + try (Connection conn = DriverManager.getConnection(URL)) { + try (PreparedStatement stmt = conn.prepareStatement(sql)) { + return stmt.executeUpdate(); + } + } + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + executeUpdate("DROP TABLE IF EXISTS TEST;"); + } + + /** + * Checks for PK that contains single unwrapped field. + */ + @Test + public void testSingleUnwrappedKey() throws Exception { + executeUpdate("CREATE TABLE TEST (ID LONG PRIMARY KEY, NAME VARCHAR);"); + + checkPKFields("TEST", "ID"); + } + + /** + * Checks for PK that contains single field. Key is forcibly wrapped. + */ + @Test + public void testSingleWrappedKey() throws Exception { + executeUpdate("CREATE TABLE TEST (" + + "ID LONG PRIMARY KEY, " + + "NAME VARCHAR) " + + "WITH \"wrap_key=true\";"); + + checkPKFields("TEST", "ID"); + } + + /** + * Checks for composite (so implicitly wrapped) primary key. + */ + @Test + public void testCompositeKey() throws Exception { + executeUpdate("CREATE TABLE TEST (" + + "ID LONG, " + + "SEC_ID LONG, " + + "NAME VARCHAR, " + + "PRIMARY KEY (ID, SEC_ID));"); + + checkPKFields("TEST", "ID", "SEC_ID"); + } + + /** + * Checks for composite (so implicitly wrapped) primary key. Additionally, affinity key is used. 
+ */ + @Test + public void testCompositeKeyWithAK() throws Exception { + final String tpl = "CREATE TABLE TEST (" + + "ID LONG, " + + "SEC_ID LONG, " + + "NAME VARCHAR, " + + "PRIMARY KEY (ID, SEC_ID)) " + + "WITH \"affinity_key=%s\";"; + + executeUpdate(String.format(tpl, "ID")); + + checkPKFields("TEST", "ID", "SEC_ID"); + + executeUpdate("DROP TABLE TEST;"); + + executeUpdate(String.format(tpl, "SEC_ID")); + + checkPKFields("TEST", "ID", "SEC_ID"); + } + + /** + * Checks that the field names in the metadata match the specified expected fields. + * + * @param tabName Table name. + * @param expPKFields Expected primary key fields. + */ + private void checkPKFields(String tabName, String... expPKFields) throws Exception { + try (Connection conn = DriverManager.getConnection(URL)) { + DatabaseMetaData md = conn.getMetaData(); + + ResultSet rs = md.getPrimaryKeys(conn.getCatalog(), "", tabName); + + List<String> colNames = new ArrayList<>(); + + while (rs.next()) + colNames.add(rs.getString(COL_NAME_IDX)); + + assertEquals("Field names in the primary key are not correct", + Arrays.asList(expPKFields), colNames); + } + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + + super.afterTestsStopped(); + } +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMetadataSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMetadataSelfTest.java index 59382f1854704..b8aeb3b6fa4c3 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMetadataSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMetadataSelfTest.java @@ -42,13 +42,12 @@ import org.apache.ignite.cache.affinity.AffinityKey; import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import
org.apache.ignite.internal.IgniteVersionUtils; import org.apache.ignite.internal.processors.query.QueryEntityEx; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Types.INTEGER; import static java.sql.Types.OTHER; @@ -59,26 +58,11 @@ /** * Metadata tests. */ +@RunWith(JUnit4.class) public class JdbcThinMetadataSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. */ private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - - return cfg; - } - /** * @param qryEntity Query entity. * @return Cache configuration. 
@@ -102,7 +86,7 @@ protected CacheConfiguration cacheConfiguration(QueryEntity qryEntity) { startGridsMultiThreaded(3); Map orgPrecision = new HashMap<>(); - + orgPrecision.put("name", 42); IgniteCache orgCache = jcache(grid(0), @@ -143,7 +127,7 @@ protected CacheConfiguration cacheConfiguration(QueryEntity qryEntity) { personCache.put(new AffinityKey<>("p2", "o1"), new Person("Joe Black", 35, 1)); personCache.put(new AffinityKey<>("p3", "o2"), new Person("Mike Green", 40, 2)); - IgniteCache departmentCache = jcache(grid(0), + IgniteCache departmentCache = jcache(grid(0), defaultCacheConfiguration().setIndexedTypes(Integer.class, Department.class), "dep"); try (Connection conn = DriverManager.getConnection(URL)) { @@ -155,12 +139,14 @@ protected CacheConfiguration cacheConfiguration(QueryEntity qryEntity) { stmt.execute("CREATE INDEX \"MyTestIndex quoted\" on \"Quoted\" (\"Id\" DESC)"); stmt.execute("CREATE INDEX IDX ON TEST (ID ASC)"); stmt.execute("CREATE TABLE TEST_DECIMAL_COLUMN (ID INT primary key, DEC_COL DECIMAL(8, 3))"); + stmt.execute("CREATE TABLE TEST_DECIMAL_COLUMN_PRECISION (ID INT primary key, DEC_COL DECIMAL(8))"); } } /** * @throws Exception If failed. */ + @Test public void testResultSetMetaData() throws Exception { Connection conn = DriverManager.getConnection(URL); @@ -197,6 +183,7 @@ public void testResultSetMetaData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetTables() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -241,6 +228,7 @@ public void testGetTables() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAllTables() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -253,7 +241,8 @@ public void testGetAllTables() throws Exception { "dep.DEPARTMENT", "PUBLIC.TEST", "PUBLIC.Quoted", - "PUBLIC.TEST_DECIMAL_COLUMN")); + "PUBLIC.TEST_DECIMAL_COLUMN", + "PUBLIC.TEST_DECIMAL_COLUMN_PRECISION")); Set actualTbls = new HashSet<>(expectedTbls.size()); @@ -270,6 +259,7 @@ public void testGetAllTables() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetColumns() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { conn.setSchema("pers"); @@ -382,6 +372,7 @@ else if ("_VAL".equals(name)) { /** * @throws Exception If failed. */ + @Test public void testGetAllColumns() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -403,7 +394,9 @@ public void testGetAllColumns() throws Exception { "PUBLIC.Quoted.Id.null", "PUBLIC.Quoted.Name.null.50", "PUBLIC.TEST_DECIMAL_COLUMN.ID.null", - "PUBLIC.TEST_DECIMAL_COLUMN.DEC_COL.null.8.3" + "PUBLIC.TEST_DECIMAL_COLUMN.DEC_COL.null.8.3", + "PUBLIC.TEST_DECIMAL_COLUMN_PRECISION.ID.null", + "PUBLIC.TEST_DECIMAL_COLUMN_PRECISION.DEC_COL.null.8" )); Set actualCols = new HashSet<>(expectedCols.size()); @@ -430,6 +423,7 @@ public void testGetAllColumns() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvalidCatalog() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { DatabaseMetaData meta = conn.getMetaData(); @@ -459,6 +453,7 @@ public void testInvalidCatalog() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIndexMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(URL); ResultSet rs = conn.getMetaData().getIndexInfo(null, "pers", "PERSON", false, false)) { @@ -497,6 +492,7 @@ else if ("AGE".equals(field)) /** * @throws Exception If failed. */ + @Test public void testGetAllIndexes() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { ResultSet rs = conn.getMetaData().getIndexInfo(null, null, null, false, false); @@ -525,6 +521,7 @@ public void testGetAllIndexes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimaryKeyMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(URL); ResultSet rs = conn.getMetaData().getPrimaryKeys(null, "pers", "PERSON")) { @@ -544,6 +541,7 @@ public void testPrimaryKeyMetadata() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAllPrimaryKeys() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { ResultSet rs = conn.getMetaData().getPrimaryKeys(null, null, null); @@ -555,7 +553,8 @@ public void testGetAllPrimaryKeys() throws Exception { "PUBLIC.TEST.PK_PUBLIC_TEST.ID", "PUBLIC.TEST.PK_PUBLIC_TEST.NAME", "PUBLIC.Quoted.PK_PUBLIC_Quoted.Id", - "PUBLIC.TEST_DECIMAL_COLUMN.ID._KEY")); + "PUBLIC.TEST_DECIMAL_COLUMN.ID.ID", + "PUBLIC.TEST_DECIMAL_COLUMN_PRECISION.ID.ID")); Set actualPks = new HashSet<>(expectedPks.size()); @@ -574,6 +573,7 @@ public void testGetAllPrimaryKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testParametersMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { conn.setSchema("\"pers\""); @@ -598,6 +598,7 @@ public void testParametersMetadata() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSchemasMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { ResultSet rs = conn.getMetaData().getSchemas(); @@ -620,6 +621,7 @@ public void testSchemasMetadata() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEmptySchemasMetadata() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { ResultSet rs = conn.getMetaData().getSchemas(null, "qqq"); @@ -631,6 +633,7 @@ public void testEmptySchemasMetadata() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVersions() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { assert conn.getMetaData().getDatabaseProductVersion().equals(IgniteVersionUtils.VER.toString()); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMissingLongArrayResultsTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMissingLongArrayResultsTest.java index 633c74bb52c91..157bf99996b21 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMissingLongArrayResultsTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinMissingLongArrayResultsTest.java @@ -31,18 +31,16 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JdbcThinMissingLongArrayResultsTest extends JdbcThinAbstractSelfTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** First cache name. */ private static final String CACHE_NAME = "test"; @@ -67,12 +65,6 @@ public class JdbcThinMissingLongArrayResultsTest extends JdbcThinAbstractSelfTes cfg.setCacheConfiguration(cacheConfiguration(CACHE_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -164,7 +156,8 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep /** * @throws Exception If failed. */ - @SuppressWarnings({"EmptyTryBlock", "unused"}) + @SuppressWarnings({"unused"}) + @Test public void testDefaults() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { conn.setSchema('"' + CACHE_NAME + '"'); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinNoDefaultSchemaTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinNoDefaultSchemaTest.java index 53425c8bbb054..254cbbd3c7fa9 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinNoDefaultSchemaTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinNoDefaultSchemaTest.java @@ -27,19 +27,17 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JdbcThinNoDefaultSchemaTest extends JdbcThinAbstractSelfTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** First cache name. */ private static final String CACHE1_NAME = "cache1"; @@ -58,12 +56,6 @@ public class JdbcThinNoDefaultSchemaTest extends JdbcThinAbstractSelfTest { cfg.setCacheConfiguration(cacheConfiguration(CACHE1_NAME), cacheConfiguration(CACHE2_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -104,6 +96,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep * @throws Exception If failed. */ @SuppressWarnings({"EmptyTryBlock", "unused"}) + @Test public void testDefaults() throws Exception { try (Connection conn = DriverManager.getConnection(URL)) { // No-op. @@ -117,6 +110,7 @@ public void testDefaults() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSchemaNameInQuery() throws Exception { Connection conn = DriverManager.getConnection(URL); @@ -155,6 +149,7 @@ public void testSchemaNameInQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSchemaInUrl() throws Exception { try(Connection conn = DriverManager.getConnection(URL + "/\"cache1\"")) { Statement stmt = conn.createStatement(); @@ -182,6 +177,7 @@ public void testSchemaInUrl() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSchemaInUrlAndInQuery() throws Exception { try(Connection conn = DriverManager.getConnection(URL + "/\"cache2\"")) { Statement stmt = conn.createStatement(); @@ -201,6 +197,7 @@ public void testSchemaInUrlAndInQuery() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSetSchema() throws Exception { try(Connection conn = DriverManager.getConnection(URL)) { // Try to execute query without set schema diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinPreparedStatementLeakTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinPreparedStatementLeakTest.java new file mode 100644 index 0000000000000..50ecae5cf7e0f --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinPreparedStatementLeakTest.java @@ -0,0 +1,76 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.jdbc.thin; + +import org.apache.ignite.IgniteJdbcThinDriver; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.util.Properties; +import java.util.Set; + +/** + * Prepared statement leaks test. + */ +@SuppressWarnings("ThrowableNotThrown") +public class JdbcThinPreparedStatementLeakTest extends JdbcThinAbstractSelfTest { + /** URL. 
*/ + private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; + + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + startGrid(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + super.afterTest(); + } + + /** + * @throws Exception If failed. + */ + @SuppressWarnings("StatementWithEmptyBody") + @Test + public void test() throws Exception { + try (Connection conn = new IgniteJdbcThinDriver().connect(URL, new Properties())) { + for (int i = 0; i < 50000; ++i) { + try (PreparedStatement st = conn.prepareStatement("select 1")) { + ResultSet rs = st.executeQuery(); + + while (rs.next()) { + // No-op. + } + + rs.close(); + } + } + + Set stmts = U.field(conn, "stmts"); + + assertEquals(0, stmts.size()); + } + } +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinPreparedStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinPreparedStatementSelfTest.java index 85efb4d487319..9660760f595da 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinPreparedStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinPreparedStatementSelfTest.java @@ -40,10 +40,10 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.sql.Types.BIGINT; import static java.sql.Types.BINARY; @@ -65,10 +65,8 @@ * Prepared statement test. 
*/ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinPreparedStatementSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. */ private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; @@ -99,12 +97,6 @@ public class JdbcThinPreparedStatementSelfTest extends JdbcThinAbstractSelfTest cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -167,6 +159,7 @@ public class JdbcThinPreparedStatementSelfTest extends JdbcThinAbstractSelfTest /** * @throws Exception If failed. */ + @Test public void testRepeatableUsage() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where id = ?"); @@ -202,6 +195,7 @@ public void testRepeatableUsage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryExecuteException() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where boolVal is not distinct from ?"); @@ -259,6 +253,7 @@ public void testQueryExecuteException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where boolVal is not distinct from ?"); @@ -298,6 +293,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where byteVal is not distinct from ?"); @@ -337,6 +333,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where shortVal is not distinct from ?"); @@ -376,6 +373,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInteger() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where intVal is not distinct from ?"); @@ -415,6 +413,7 @@ public void testInteger() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where longVal is not distinct from ?"); @@ -454,6 +453,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where floatVal is not distinct from ?"); @@ -493,6 +493,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where doubleVal is not distinct from ?"); @@ -532,6 +533,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimal() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where bigVal is not distinct from ?"); @@ -571,6 +573,7 @@ public void testBigDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testString() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where strVal is not distinct from ?"); @@ -610,6 +613,7 @@ public void testString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArray() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where arrVal is not distinct from ?"); @@ -649,6 +653,7 @@ public void testArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDate() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where dateVal is not distinct from ?"); @@ -688,6 +693,7 @@ public void testDate() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTime() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where timeVal is not distinct from ?"); @@ -727,6 +733,7 @@ public void testTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where tsVal is not distinct from ?"); @@ -766,6 +773,7 @@ public void testTimestamp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClearParameter() throws Exception { stmt = conn.prepareStatement(SQL_PART + " where boolVal is not distinct from ?"); @@ -789,6 +797,7 @@ public void testClearParameter() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotSupportedTypes() throws Exception { stmt = conn.prepareStatement(""); @@ -1050,4 +1059,4 @@ private TestObject(int id) { this.id = id; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinResultSetSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinResultSetSelfTest.java index 36a0a15e92dc3..93ac9ecb12818 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinResultSetSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinResultSetSelfTest.java @@ -42,10 +42,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -54,10 +54,8 @@ * Result set test. */ @SuppressWarnings({"FloatingPointEquality", "ThrowableNotThrown", "AssertWithSideEffects"}) +@RunWith(JUnit4.class) public class JdbcThinResultSetSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. */ private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; @@ -85,12 +83,6 @@ public class JdbcThinResultSetSelfTest extends JdbcThinAbstractSelfTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -163,6 +155,7 @@ private TestObject createObjectWithData(int id) throws MalformedURLException { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -243,6 +236,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -282,6 +276,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -321,6 +316,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInteger() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -360,6 +356,7 @@ public void testInteger() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -399,6 +396,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFloat() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -438,6 +436,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -477,6 +476,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimal() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -516,6 +516,7 @@ public void testBigDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimalScale() throws Exception { assert "0.12".equals(convertStringToBigDecimalViaJdbc("0.1234", 2).toString()); assert "1.001".equals(convertStringToBigDecimalViaJdbc("1.0005", 3).toString()); @@ -540,6 +541,7 @@ private BigDecimal convertStringToBigDecimalViaJdbc(String strDec, int scale) th /** * @throws Exception If failed. */ + @Test public void testString() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -579,6 +581,7 @@ public void testString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArray() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -600,6 +603,7 @@ public void testArray() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testDate() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -628,6 +632,7 @@ public void testDate() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testTime() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -655,6 +660,7 @@ public void testTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -682,6 +688,7 @@ public void testTimestamp() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testObjectNotSupported() throws Exception { GridTestUtils.assertThrowsAnyCause(log, new Callable() { @Override public Object call() throws Exception { @@ -695,6 +702,7 @@ public void testObjectNotSupported() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNavigation() throws Exception { ResultSet rs = stmt.executeQuery("select id from TestObject where id > 0"); @@ -736,6 +744,7 @@ public void testNavigation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFindColumn() throws Exception { final ResultSet rs = stmt.executeQuery(SQL); @@ -761,6 +770,7 @@ public void testFindColumn() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotSupportedTypes() throws Exception { final ResultSet rs = stmt.executeQuery(SQL); @@ -902,6 +912,7 @@ public void testNotSupportedTypes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateNotSupported() throws Exception { final ResultSet rs = stmt.executeQuery(SQL); @@ -1409,6 +1420,7 @@ public void testUpdateNotSupported() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testExceptionOnClosedResultSet() throws Exception { final ResultSet rs = stmt.executeQuery(SQL); @@ -1842,4 +1854,4 @@ private TestObjectField(int a, String b) { return S.toString(TestObjectField.class, this); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSchemaCaseTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSchemaCaseTest.java index 8f1108743cdd7..56e2fee73f1ff 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSchemaCaseTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSchemaCaseTest.java @@ -24,19 +24,17 @@ import java.util.concurrent.Callable; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JdbcThinSchemaCaseTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. */ private static final String URL = "jdbc:ignite:thin://127.0.0.1"; @@ -52,12 +50,6 @@ public class JdbcThinSchemaCaseTest extends JdbcThinAbstractSelfTest { cacheConfiguration("test1", "tEst1"), cacheConfiguration("test2", "\"TestCase\"")); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -90,7 +82,8 @@ private CacheConfiguration cacheConfiguration(@NotNull String name, @NotNull Str /** * @throws Exception If failed. 
*/ - @SuppressWarnings({"EmptyTryBlock", "unused"}) + @SuppressWarnings({"unused"}) + @Test public void testSchemaName() throws Exception { checkSchemaConnection("test0"); checkSchemaConnection("test1"); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSelectAfterAlterTable.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSelectAfterAlterTable.java index ef711dcfd1e92..887740658bca6 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSelectAfterAlterTable.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinSelectAfterAlterTable.java @@ -27,19 +27,17 @@ import org.apache.ignite.configuration.ClientConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base class for complex SQL tests based on JDBC driver. */ +@RunWith(JUnit4.class) public class JdbcThinSelectAfterAlterTable extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Client connection port. 
*/ private int cliPort = ClientConnectorConfiguration.DFLT_PORT; @@ -55,12 +53,6 @@ public class JdbcThinSelectAfterAlterTable extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setClientConnectorConfiguration(new ClientConnectorConfiguration().setPort(cliPort++)); return cfg; @@ -119,7 +111,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name) throws Excep /** * @throws Exception If failed. */ - @SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "unchecked"}) + @Test public void testSelectAfterAlterTableSingleNode() throws Exception { stmt.executeUpdate("alter table person add age int"); @@ -129,7 +121,7 @@ public void testSelectAfterAlterTableSingleNode() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "unchecked"}) + @Test public void testSelectAfterAlterTableMultiNode() throws Exception { try (Connection conn2 = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:" + (ClientConnectorConfiguration.DFLT_PORT + 1))) { @@ -165,4 +157,4 @@ public void checkNewColumn(Statement stmt) throws SQLException { assertTrue(newColExists); } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementCancelSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementCancelSelfTest.java new file mode 100644 index 0000000000000..4ebf0ab023447 --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementCancelSelfTest.java @@ -0,0 +1,768 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.jdbc.thin; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Objects; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicInteger; +import org.apache.ignite.cache.query.annotations.QuerySqlFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.ClientConnectorConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.processors.odbc.ClientListenerProcessor; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; +import static org.apache.ignite.internal.util.IgniteUtils.resolveIgnitePath; + +/** + * Statement cancel test. + */ +@SuppressWarnings({"ThrowableNotThrown", "AssertWithSideEffects"}) +@RunWith(JUnit4.class) +public class JdbcThinStatementCancelSelfTest extends JdbcThinAbstractSelfTest { + /** IP finder. */ + private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + + /** URL. */ + private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; + + /** A CSV file with 20,000 records. */ + private static final String BULKLOAD_20_000_LINE_CSV_FILE = + Objects.requireNonNull(resolveIgnitePath("/modules/clients/src/test/resources/bulkload20_000.csv")). + getAbsolutePath(); + + /** Max table rows. */ + private static final int MAX_ROWS = 10000; + + /** Server thread pool size. */ + private static final int SERVER_THREAD_POOL_SIZE = 4; + + /** Cancellation processing timeout. */ + public static final int TIMEOUT = 5000; + + /** Nodes count. */ + private static final byte NODES_COUNT = 3; + + /** Timeout for checking async result. */ + public static final int CHECK_RESULT_TIMEOUT = 1_000; + + /** Connection. */ + private Connection conn; + + /** Statement.
*/ + private Statement stmt; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration cache = defaultCacheConfiguration(); + + cache.setCacheMode(PARTITIONED); + cache.setBackups(1); + cache.setWriteSynchronizationMode(FULL_SYNC); + cache.setSqlFunctionClasses(TestSQLFunctions.class); + cache.setIndexedTypes(Integer.class, Integer.class, Long.class, Long.class, String.class, + JdbcThinAbstractDmlStatementSelfTest.Person.class); + + cfg.setCacheConfiguration(cache); + + TcpDiscoverySpi disco = new TcpDiscoverySpi(); + + disco.setIpFinder(IP_FINDER); + + cfg.setDiscoverySpi(disco); + + cfg.setClientConnectorConfiguration(new ClientConnectorConfiguration(). + setThreadPoolSize(SERVER_THREAD_POOL_SIZE)); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + startGridsMultiThreaded(NODES_COUNT); + + for (int i = 0; i < MAX_ROWS; ++i) + grid(0).cache(DEFAULT_CACHE_NAME).put(i, i); + + for (int i = 0; i < MAX_ROWS; ++i) + grid(0).cache(DEFAULT_CACHE_NAME).put((long)i, (long)i); + } + + /** + * Called before execution of every test method in class. + * + * @throws Exception If failed. + */ + @Before + public void before() throws Exception { + TestSQLFunctions.init(); + + conn = DriverManager.getConnection(URL); + + conn.setSchema('"' + DEFAULT_CACHE_NAME + '"'); + + stmt = conn.createStatement(); + + assert stmt != null; + assert !stmt.isClosed(); + } + + /** + * Called after execution of every test method in class. + * + * @throws Exception If failed. 
+ */ + @After + public void after() throws Exception { + if (stmt != null && !stmt.isClosed()) { + stmt.close(); + + assert stmt.isClosed(); + } + + conn.close(); + + assert stmt.isClosed(); + assert conn.isClosed(); + } + + /** + * Trying to cancel a statement without a query. In this case cancel is a no-op, so no exception is expected. + */ + @Test + public void testCancelingStmtWithoutQuery() { + try { + stmt.cancel(); + } + catch (Exception e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception"); + } + } + + /** + * Trying to retrieve the result set of a canceled query. + * SQLException with message "The query was cancelled while executing." expected. + * + * @throws Exception If failed. + */ + @Test + public void testResultSetRetrievalInCanceledStatement() throws Exception { + stmt.execute("SELECT 1; SELECT 2; SELECT 3;"); + + assertNotNull(stmt.getResultSet()); + + stmt.cancel(); + + GridTestUtils.assertThrows(log, () -> { + stmt.getResultSet(); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + } + + /** + * Trying to cancel an already cancelled query. + * No exceptions expected. + * + * @throws Exception If failed. + */ + @Test + public void testCancelCanceledQuery() throws Exception { + stmt.execute("SELECT 1;"); + + assertNotNull(stmt.getResultSet()); + + stmt.cancel(); + + stmt.cancel(); + + GridTestUtils.assertThrows(log, () -> { + stmt.getResultSet(); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + } + + /** + * Trying to cancel a closed statement. + * SQLException with message "Statement is closed." expected. + * + * @throws Exception If failed. + */ + @Test + public void testCancelClosedStmt() throws Exception { + stmt.close(); + + GridTestUtils.assertThrows(log, () -> { + stmt.cancel(); + + return null; + }, SQLException.class, "Statement is closed."); + } + + /** + * Trying to call resultSet.next() on a canceled query.
+ * SQLException with message "The query was cancelled while executing." expected. + * + * @throws Exception If failed. + */ + @Test + public void testResultSetNextAfterCanceling() throws Exception { + stmt.setFetchSize(10); + + ResultSet rs = stmt.executeQuery("select * from Integer"); + + assert rs.next(); + + stmt.cancel(); + + GridTestUtils.assertThrows(log, () -> { + rs.next(); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + } + + /** + * Ensure that it's possible to execute a new query on a cancelled statement. + * + * @throws Exception If failed. + */ + @Test + public void testCancelAnotherStmt() throws Exception { + stmt.setFetchSize(10); + + ResultSet rs = stmt.executeQuery("select * from Integer"); + + assert rs.next(); + + stmt.cancel(); + + ResultSet rs2 = stmt.executeQuery("select * from Integer order by _val"); + + assert rs2.next() : "The other cursor mustn't be closed"; + } + + /** + * Ensure that statement cancel doesn't affect the workflow of another statement created by the same connection. + * + * @throws Exception If failed. + */ + @Test + public void testCancelAnotherStmtResultSet() throws Exception { + try (Statement anotherStmt = conn.createStatement()) { + ResultSet rs1 = stmt.executeQuery("select * from Integer WHERE _key % 2 = 0"); + + ResultSet rs2 = anotherStmt.executeQuery("select * from Integer WHERE _key % 2 <> 0"); + + stmt.cancel(); + + GridTestUtils.assertThrows(log, () -> { + rs1.next(); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + + assert rs2.next() : "The other cursor mustn't be closed"; + } + } + + /** + * Trying to cancel a long running query. No exceptions expected. + * In order to guarantee correct concurrent processing of the query itself and its cancellation request, + * two latches and some other machinery are used. + * For more details see TestSQLFunctions#awaitLatchCancelled() + * and JdbcThinStatementCancelSelfTest#cancel(java.sql.Statement).
+ * + * @throws Exception If failed. + */ + @Test + public void testCancelQuery() throws Exception { + IgniteInternalFuture cancelRes = cancel(stmt); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeQuery("select * from Integer where _key in " + + "(select _key from Integer where awaitLatchCancelled() = 0) and shouldNotBeCalledInCaseOfCancellation()"); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + + // Ensures that there were no exceptions within async cancellation process. + cancelRes.get(CHECK_RESULT_TIMEOUT); + } + + /** + * Trying to close a query that is being cancelled. No exceptions expected. + * In order to guarantee correct concurrent processing of the query itself and its cancellation request, + * two latches and some other machinery are used. + * For more details see TestSQLFunctions#awaitLatchCancelled() + * and JdbcThinStatementCancelSelfTest#cancel(java.sql.Statement). + * + * @throws Exception If failed. + */ + @Test + public void testCloseCancelingQuery() throws Exception { + IgniteInternalFuture res = GridTestUtils.runAsync(() -> { + try { + TestSQLFunctions.cancelLatch.await(); + + long cancelCntrBeforeCancel = ClientListenerProcessor.CANCEL_COUNTER.get(); + + stmt.cancel(); + + try { + GridTestUtils.waitForCondition( + () -> ClientListenerProcessor.CANCEL_COUNTER.get() == cancelCntrBeforeCancel + 1, TIMEOUT); + } + catch (IgniteInterruptedCheckedException ignored) { + // No-op. + } + + assertEquals(cancelCntrBeforeCancel + 1, ClientListenerProcessor.CANCEL_COUNTER.get()); + + // Nothing expected here, because the query was already marked as canceled.
+ stmt.close(); + + TestSQLFunctions.reqLatch.countDown(); + } + catch (Exception e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception"); + } + }); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeQuery("select * from Integer where _key in " + + "(select _key from Integer where awaitLatchCancelled() = 0) and shouldNotBeCalledInCaseOfCancellation()"); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + + // Ensures that there were no exceptions within async cancellation process. + res.get(CHECK_RESULT_TIMEOUT); + } + + /** + * Trying to cancel a long running multi-statement query. No exceptions expected. + * In order to guarantee correct concurrent processing of the query itself and its cancellation request, + * two latches and some other machinery are used. + * For more details see TestSQLFunctions#awaitLatchCancelled() + * and JdbcThinStatementCancelSelfTest#cancel(java.sql.Statement). + * + * @throws Exception If failed. + */ + @Test + public void testCancelMultipleStatementsQuery() throws Exception { + try (Statement anotherStmt = conn.createStatement()) { + anotherStmt.setFetchSize(1); + + ResultSet rs = anotherStmt.executeQuery("select * from Integer"); + + assert rs.next(); + + IgniteInternalFuture cancelRes = cancel(stmt); + + GridTestUtils.assertThrows(log, () -> { + // Executes a multi-statement long running query. + stmt.execute( + "select 100 from Integer;" + + "select _key from Integer where awaitLatchCancelled() = 0;" + + "select 100 from Integer I1 join Integer I2;" + + "select * from Integer where shouldNotBeCalledInCaseOfCancellation()"); + return null; + }, SQLException.class, "The query was cancelled while executing"); + + assert rs.next() : "The other cursor mustn't be closed"; + + // Ensures that there were no exceptions within async cancellation process. + cancelRes.get(CHECK_RESULT_TIMEOUT); + } + } + + /** + * Trying to cancel a long running batch query. No exceptions expected.
+ * In order to guarantee correct concurrent processing of the query itself and its cancellation request, + * two latches and some other machinery are used. + * For more details see TestSQLFunctions#awaitLatchCancelled() + * and JdbcThinStatementCancelSelfTest#cancel(java.sql.Statement). + * + * @throws Exception If failed. + */ + @Test + public void testCancelBatchQuery() throws Exception { + try (Statement stmt2 = conn.createStatement()) { + stmt2.setFetchSize(1); + + ResultSet rs = stmt2.executeQuery("SELECT * from Integer"); + + assert rs.next(); + + IgniteInternalFuture cancelRes = cancel(stmt); + + GridTestUtils.assertThrows(log, () -> { + stmt.addBatch("update Long set _val = _val + 1 where _key < sleep_func (30)"); + stmt.addBatch("update Long set _val = _val + 1 where awaitLatchCancelled() = 0"); + stmt.addBatch("update Long set _val = _val + 1 where _key < sleep_func (30)"); + stmt.addBatch("update Long set _val = _val + 1 where shouldNotBeCalledInCaseOfCancellation()"); + + stmt.executeBatch(); + return null; + }, java.sql.SQLException.class, "The query was cancelled while executing"); + + assert rs.next() : "The other cursor mustn't be closed"; + + // Ensures that there were no exceptions within async cancellation process. + cancelRes.get(CHECK_RESULT_TIMEOUT); + } + } + + /** + * Trying to cancel a long running query in a situation where there's no worker for the cancel query + * because the server thread pool is full. No exceptions expected. + * In order to guarantee correct concurrent processing of the query itself and its cancellation request, + * three latches and some other machinery are used. + * For more details see TestSQLFunctions#awaitLatchCancelled(), + * TestSQLFunctions#awaitQuerySuspensionLatch() + * and JdbcThinStatementCancelSelfTest#cancel(java.sql.Statement). + * + * @throws Exception If failed.
+ */ + @Test + public void testCancelAgainstFullServerThreadPool() throws Exception { + List<Statement> statements = Collections.synchronizedList(new ArrayList<>()); + List<Connection> connections = Collections.synchronizedList(new ArrayList<>()); + + // Prepares connections and statements in order to use them for filling the thread pool with pseudo-infinite queries. + for (int i = 0; i < SERVER_THREAD_POOL_SIZE; i++) { + Connection yaConn = DriverManager.getConnection(URL); + + yaConn.setSchema('"' + DEFAULT_CACHE_NAME + '"'); + + connections.add(yaConn); + + Statement yaStmt = yaConn.createStatement(); + + statements.add(yaStmt); + } + + try { + IgniteInternalFuture cancelRes = cancel(statements.get(SERVER_THREAD_POOL_SIZE - 1)); + + // Completely fills server thread pool. + IgniteInternalFuture fillPoolRes = fillServerThreadPool(statements, SERVER_THREAD_POOL_SIZE - 1); + + GridTestUtils.assertThrows(log, () -> { + statements.get(SERVER_THREAD_POOL_SIZE - 1).executeQuery( + "select * from Integer where _key in " + + "(select _key from Integer where awaitLatchCancelled() = 0) and" + + " shouldNotBeCalledInCaseOfCancellation()"); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + + // Releases queries in thread pool. + TestSQLFunctions.suspendQryLatch.countDown(); + + // Ensures that there were no exceptions within async cancellation process. + cancelRes.get(CHECK_RESULT_TIMEOUT); + + // Ensures that there were no exceptions within async thread pool filling process. + fillPoolRes.get(CHECK_RESULT_TIMEOUT); + } + finally { + for (Statement statement : statements) + statement.close(); + + for (Connection connection : connections) + connection.close(); + } + } + + /** + * Trying to cancel a fetch query in a situation where there's no worker for the cancel query + * because the server thread pool is full. No exceptions expected.
+ * In order to guarantee correct concurrent processing of the query itself and its cancellation request, + * three latches and some other machinery are used. + * For more details see TestSQLFunctions#awaitLatchCancelled(), + * TestSQLFunctions#awaitQuerySuspensionLatch() + * and JdbcThinStatementCancelSelfTest#cancel(java.sql.Statement). + * + * @throws Exception If failed. + */ + @Test + public void testCancelFetchAgainstFullServerThreadPool() throws Exception { + stmt.setFetchSize(1); + + ResultSet rs = stmt.executeQuery("SELECT * from Integer"); + + rs.next(); + + List<Statement> statements = Collections.synchronizedList(new ArrayList<>()); + List<Connection> connections = Collections.synchronizedList(new ArrayList<>()); + + // Prepares connections and statements in order to use them for filling the thread pool with pseudo-infinite queries. + for (int i = 0; i < SERVER_THREAD_POOL_SIZE; i++) { + Connection yaConn = DriverManager.getConnection(URL); + + yaConn.setSchema('"' + DEFAULT_CACHE_NAME + '"'); + + connections.add(yaConn); + + Statement yaStmt = yaConn.createStatement(); + + statements.add(yaStmt); + } + + try { + // Completely fills server thread pool. + IgniteInternalFuture fillPoolRes = fillServerThreadPool(statements, + SERVER_THREAD_POOL_SIZE - 1); + + IgniteInternalFuture fetchRes = GridTestUtils.runAsync(() -> { + GridTestUtils.assertThrows(log, () -> { + rs.next(); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + }); + + stmt.cancel(); + + // Ensures that there were no exceptions within async data fetching process. + fetchRes.get(CHECK_RESULT_TIMEOUT); + + // Releases queries in thread pool. + TestSQLFunctions.suspendQryLatch.countDown(); + + // Ensure that there were no exceptions within async thread pool filling process.
+ fillPoolRes.get(CHECK_RESULT_TIMEOUT); + } + finally { + for (Statement statement : statements) + statement.close(); + + for (Connection connection : connections) + connection.close(); + } + } + + /** + * Trying to cancel a long running file upload. No exceptions expected. + * + * @throws Exception If failed. + */ + @Test + public void testCancellingLongRunningFileUpload() throws Exception { + IgniteInternalFuture cancelRes = GridTestUtils.runAsync(() -> { + try { + Thread.sleep(200); + + stmt.cancel(); + } + catch (Exception e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception"); + } + }); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeUpdate( + "copy from '" + BULKLOAD_20_000_LINE_CSV_FILE + "' into Person" + + " (_key, age, firstName, lastName)" + + " format csv"); + + return null; + }, SQLException.class, "The query was cancelled while executing."); + + // Ensure that there were no exceptions within async cancellation process. + cancelRes.get(CHECK_RESULT_TIMEOUT); + } + + /** + * Cancels the current query; the actual cancel will wait for cancelLatch to be released. + * + * @return IgniteInternalFuture to check whether exception was thrown. + */ + private IgniteInternalFuture cancel(Statement stmt) { + return GridTestUtils.runAsync(() -> { + try { + TestSQLFunctions.cancelLatch.await(); + + long cancelCntrBeforeCancel = ClientListenerProcessor.CANCEL_COUNTER.get(); + + stmt.cancel(); + + try { + GridTestUtils.waitForCondition( + () -> ClientListenerProcessor.CANCEL_COUNTER.get() == cancelCntrBeforeCancel + 1, TIMEOUT); + } + catch (IgniteInterruptedCheckedException ignored) { + // No-op. + } + + assertEquals(cancelCntrBeforeCancel + 1, ClientListenerProcessor.CANCEL_COUNTER.get()); + + TestSQLFunctions.reqLatch.countDown(); + } + catch (Exception e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception"); + } + }); + } + + /** + * Fills Server Thread Pool with qryCnt queries.
Given queries will wait for + * suspendQryLatch to be released. + * + * @param statements Statements. + * @param qryCnt Number of queries to execute. + * @return IgniteInternalFuture in order to check whether exception was thrown or not. + */ + private IgniteInternalFuture fillServerThreadPool(List<Statement> statements, int qryCnt) { + AtomicInteger idx = new AtomicInteger(0); + + return GridTestUtils.runMultiThreadedAsync(() -> { + try { + statements.get(idx.getAndIncrement()).executeQuery( + "select * from Integer where awaitQuerySuspensionLatch();"); + } + catch (SQLException e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception"); + } + }, qryCnt, "ThreadName"); + } + + /** + * Utility class with custom SQL functions. + */ + public static class TestSQLFunctions { + /** Request latch. */ + static CountDownLatch reqLatch; + + /** Cancel latch. */ + static CountDownLatch cancelLatch; + + /** Suspend query latch. */ + static CountDownLatch suspendQryLatch; + + /** + * Recreate latches. + */ + static void init() { + reqLatch = new CountDownLatch(1); + + cancelLatch = new CountDownLatch(1); + + suspendQryLatch = new CountDownLatch(1); + } + + /** + * Releases cancelLatch, which leads to sending a cancel query, and waits until the cancel query is fully processed. + * + * @return 0; + */ + @QuerySqlFunction + public static long awaitLatchCancelled() { + try { + cancelLatch.countDown(); + reqLatch.await(); + } + catch (Exception ignored) { + // No-op. + } + + return 0; + } + + /** + * Waits for latch release. + * + * @return 0; + */ + @QuerySqlFunction + public static long awaitQuerySuspensionLatch() { + try { + suspendQryLatch.await(); + } + catch (Exception ignored) { + // No-op. + } + + return 0; + } + + /** + * Fails with a corresponding message if called.
+ * + * @return 0; + */ + @QuerySqlFunction + public static long shouldNotBeCalledInCaseOfCancellation() { + fail("Query wasn't actually cancelled."); + + return 0; + } + + /** + * + * @param v amount of milliseconds to sleep + * @return amount of milliseconds to sleep + */ + @QuerySqlFunction + public static int sleep_func(int v) { + try { + Thread.sleep(v); + } + catch (InterruptedException ignored) { + // No-op + } + return v; + } + } +} \ No newline at end of file diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementSelfTest.java index 6d6e72db0b087..915f3a1243db4 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementSelfTest.java @@ -23,7 +23,6 @@ import java.sql.ResultSet; import java.sql.SQLException; import java.sql.SQLFeatureNotSupportedException; -import java.sql.SQLTimeoutException; import java.sql.Statement; import java.util.concurrent.Callable; import org.apache.ignite.IgniteCache; @@ -31,12 +30,10 @@ import org.apache.ignite.cache.query.annotations.QuerySqlFunction; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -44,11 +41,9 @@ /** * Statement test. 
*/ -@SuppressWarnings({"ThrowableNotThrown", "ThrowableResultOfMethodCallIgnored"}) +@SuppressWarnings({"ThrowableNotThrown"}) +@RunWith(JUnit4.class) public class JdbcThinStatementSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** URL. */ private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; @@ -77,12 +72,6 @@ public class JdbcThinStatementSelfTest extends JdbcThinAbstractSelfTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -124,6 +113,7 @@ public class JdbcThinStatementSelfTest extends JdbcThinAbstractSelfTest { /** * @throws Exception If failed. */ + @org.junit.Test public void testExecuteQuery0() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -156,6 +146,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @org.junit.Test public void testExecuteQuery1() throws Exception { final String sqlText = "select val from test"; @@ -182,6 +173,7 @@ public void testExecuteQuery1() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testExecute() throws Exception { assert stmt.execute(SQL); @@ -220,6 +212,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @org.junit.Test public void testMaxRows() throws Exception { stmt.setMaxRows(1); @@ -285,6 +278,7 @@ else if (id == 3) { /** * @throws Exception If failed. */ + @org.junit.Test public void testCloseResultSet0() throws Exception { ResultSet rs0 = stmt.executeQuery(SQL); ResultSet rs1 = stmt.executeQuery(SQL); @@ -303,6 +297,7 @@ public void testCloseResultSet0() throws Exception { /** * @throws Exception If failed. 
*/ + @org.junit.Test public void testCloseResultSet1() throws Exception { stmt.execute(SQL); @@ -316,6 +311,7 @@ public void testCloseResultSet1() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testCloseResultSetByConnectionClose() throws Exception { ResultSet rs = stmt.executeQuery(SQL); @@ -328,6 +324,7 @@ public void testCloseResultSetByConnectionClose() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testCloseOnCompletionAfterQuery() throws Exception { assert !stmt.isCloseOnCompletion() : "Invalid default closeOnCompletion"; @@ -357,6 +354,7 @@ public void testCloseOnCompletionAfterQuery() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testCloseOnCompletionBeforeQuery() throws Exception { assert !stmt.isCloseOnCompletion() : "Invalid default closeOnCompletion"; @@ -386,28 +384,7 @@ public void testCloseOnCompletionBeforeQuery() throws Exception { /** * @throws Exception If failed. */ - public void testExecuteQueryTimeout() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-5438"); - - final String sqlText = "select sleep_func(3)"; - - stmt.setQueryTimeout(1); - - // Timeout - GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - return stmt.executeQuery(sqlText); - } - }, - SQLTimeoutException.class, - "Timeout" - ); - } - - /** - * @throws Exception If failed. - */ + @org.junit.Test public void testExecuteQueryMultipleOnlyResultSets() throws Exception { assert conn.getMetaData().supportsMultipleResultSets(); @@ -436,6 +413,7 @@ public void testExecuteQueryMultipleOnlyResultSets() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testExecuteQueryMultipleOnlyDml() throws Exception { conn.setSchema(null); @@ -471,6 +449,7 @@ public void testExecuteQueryMultipleOnlyDml() throws Exception { /** * @throws Exception If failed. 
*/ + @org.junit.Test public void testExecuteQueryMultipleMixed() throws Exception { conn.setSchema(null); @@ -531,6 +510,7 @@ public void testExecuteQueryMultipleMixed() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testExecuteUpdate() throws Exception { final String sqlText = "update test set val=1 where _key=1"; @@ -548,6 +528,7 @@ public void testExecuteUpdate() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testExecuteUpdateProducesResultSet() throws Exception { final String sqlText = "select * from test"; @@ -565,28 +546,7 @@ public void testExecuteUpdateProducesResultSet() throws Exception { /** * @throws Exception If failed. */ - public void testExecuteUpdateTimeout() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-5438"); - - final String sqlText = "update test set val=1 where _key=sleep_func(3)"; - - stmt.setQueryTimeout(1); - - // Timeout - GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - return stmt.executeUpdate(sqlText); - } - }, - SQLTimeoutException.class, - "Timeout" - ); - } - - /** - * @throws Exception If failed. - */ + @org.junit.Test public void testClose() throws Exception { String sqlText = "select * from test"; @@ -609,6 +569,7 @@ public void testClose() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testGetSetMaxFieldSizeUnsupported() throws Exception { assertEquals(0, stmt.getMaxFieldSize()); @@ -646,6 +607,7 @@ public void testGetSetMaxFieldSizeUnsupported() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testGetSetMaxRows() throws Exception { assertEquals(0, stmt.getMaxRows()); @@ -696,6 +658,7 @@ public void testGetSetMaxRows() throws Exception { /** * @throws Exception If failed. 
*/ + @org.junit.Test public void testSetEscapeProcessing() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-5440"); @@ -733,6 +696,7 @@ public void testSetEscapeProcessing() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testGetSetQueryTimeout() throws Exception { assertEquals(0, stmt.getQueryTimeout()); @@ -777,6 +741,7 @@ public void testGetSetQueryTimeout() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testMaxFieldSize() throws Exception { assert stmt.getMaxFieldSize() >= 0; @@ -802,6 +767,7 @@ public void testMaxFieldSize() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testQueryTimeout() throws Exception { assert stmt.getQueryTimeout() == 0 : "Default timeout invalid: " + stmt.getQueryTimeout(); @@ -827,6 +793,7 @@ public void testQueryTimeout() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testWarningsOnClosedStatement() throws Exception { stmt.clearWarnings(); @@ -850,6 +817,7 @@ public void testWarningsOnClosedStatement() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testCursorName() throws Exception { checkNotSupported(new RunnableX() { @Override public void run() throws Exception { @@ -869,6 +837,7 @@ public void testCursorName() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testGetMoreResults() throws Exception { assert !stmt.getMoreResults(); @@ -894,6 +863,7 @@ public void testGetMoreResults() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testGetMoreResults1() throws Exception { assert !stmt.getMoreResults(Statement.CLOSE_CURRENT_RESULT); assert !stmt.getMoreResults(Statement.KEEP_CURRENT_RESULT); @@ -921,8 +891,11 @@ public void testGetMoreResults1() throws Exception { } /** + * Verifies that an empty batch can be performed.
+ * * @throws Exception If failed. */ + @org.junit.Test public void testBatchEmpty() throws Exception { assert conn.getMetaData().supportsBatchUpdates(); @@ -945,6 +918,7 @@ public void testBatchEmpty() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testFetchDirection() throws Exception { assert stmt.getFetchDirection() == ResultSet.FETCH_FORWARD; @@ -978,6 +952,7 @@ public void testFetchDirection() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testAutogenerated() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @@ -1049,52 +1024,7 @@ public void testAutogenerated() throws Exception { /** * @throws Exception If failed. */ - public void testCancel() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-5439"); - - GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - stmt.execute("select sleep_func(3)"); - - return null; - } - }, - SQLException.class, - "The query is canceled"); - - IgniteInternalFuture f = GridTestUtils.runAsync(new Runnable() { - @Override public void run() { - try { - stmt.cancel(); - } - catch (SQLException e) { - log.error("Unexpected exception", e); - - fail("Unexpected exception."); - } - } - }); - - f.get(); - - stmt.close(); - - GridTestUtils.assertThrows(log, - new Callable() { - @Override public Object call() throws Exception { - stmt.cancel(); - - return null; - } - }, - SQLException.class, - "Statement is closed"); - } - - /** - * @throws Exception If failed. - */ + @org.junit.Test public void testStatementTypeMismatchSelectForCachedQuery() throws Exception { // Put query to cache. stmt.executeQuery("select 1;"); @@ -1116,6 +1046,7 @@ public void testStatementTypeMismatchSelectForCachedQuery() throws Exception { /** * @throws Exception If failed. 
*/ + @org.junit.Test public void testStatementTypeMismatchUpdate() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @@ -1224,4 +1155,4 @@ private Person(int id, String firstName, String lastName, int age) { this.age = age; } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementTimeoutSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementTimeoutSelfTest.java new file mode 100644 index 0000000000000..ae63effa22f5e --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementTimeoutSelfTest.java @@ -0,0 +1,312 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.jdbc.thin; + +import java.io.File; +import java.io.FileWriter; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.SQLTimeoutException; +import java.sql.Statement; +import org.apache.ignite.cache.query.annotations.QuerySqlFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.ClientConnectorConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Statement timeout test. + */ +@SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) +public class JdbcThinStatementTimeoutSelfTest extends JdbcThinAbstractSelfTest { + /** IP finder. */ + private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + + /** URL. */ + private static final String URL = "jdbc:ignite:thin://127.0.0.1/"; + + /** Server thread pool size. */ + private static final int SERVER_THREAD_POOL_SIZE = 4; + + /** Connection. */ + private Connection conn; + + /** Statement.
*/ + private Statement stmt; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration cache = defaultCacheConfiguration(); + + cache.setCacheMode(PARTITIONED); + cache.setBackups(1); + cache.setWriteSynchronizationMode(FULL_SYNC); + cache.setSqlFunctionClasses(TestSQLFunctions.class); + cache.setIndexedTypes(Integer.class, Integer.class, Long.class, Long.class, String.class, + JdbcThinAbstractDmlStatementSelfTest.Person.class); + + cfg.setCacheConfiguration(cache); + + TcpDiscoverySpi disco = new TcpDiscoverySpi(); + + disco.setIpFinder(IP_FINDER); + + cfg.setDiscoverySpi(disco); + + cfg.setClientConnectorConfiguration(new ClientConnectorConfiguration().setThreadPoolSize(SERVER_THREAD_POOL_SIZE)); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + startGridsMultiThreaded(3); + + for (int i = 0; i < 10000; ++i) + grid(0).cache(DEFAULT_CACHE_NAME).put(i, i); + + for (int i = 0; i < 10000; ++i) + grid(0).cache(DEFAULT_CACHE_NAME).put((long)i, (long)i); + } + + /** + * Called before execution of every test method in class. + * + * @throws Exception If failed. + */ + @Before + public void before() throws Exception { + conn = DriverManager.getConnection(URL); + + conn.setSchema('"' + DEFAULT_CACHE_NAME + '"'); + + stmt = conn.createStatement(); + + assert stmt != null; + assert !stmt.isClosed(); + } + + /** + * Called after execution of every test method in class. + * + * @throws Exception If failed. + */ + @After + public void after() throws Exception { + if (stmt != null && !stmt.isClosed()) { + stmt.close(); + + assert stmt.isClosed(); + } + + conn.close(); + + assert stmt.isClosed(); + assert conn.isClosed(); + } + + /** + * Trying to set negative timeout. SQLException with message "Invalid timeout value." 
 is expected. + */ + @Test + public void testSettingNegativeQueryTimeout() { + GridTestUtils.assertThrows(log, () -> { + stmt.setQueryTimeout(-1); + + return null; + }, SQLException.class, "Invalid timeout value."); + } + + /** + * Trying to set zero timeout. Zero timeout means no timeout, so no exception is expected. + * + * @throws Exception If failed. + */ + @Test + public void testSettingZeroQueryTimeout() throws Exception { + stmt.setQueryTimeout(0); + + stmt.executeQuery("select sleep_func(1000);"); + } + + /** + * Setting timeout that is less than query execution time. SQLTimeoutException is expected. + * + * @throws Exception If failed. + */ + @Test + public void testQueryTimeout() throws Exception { + stmt.setQueryTimeout(2); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeQuery("select sleep_func(10) from Integer;"); + + return null; + }, SQLTimeoutException.class, "The query was cancelled while executing."); + } + + /** + * Setting timeout that is less than query execution time. Running the same query multiple times. + * SQLTimeoutException is expected in all cases. + * + * @throws Exception If failed. + */ + @SuppressWarnings("unchecked") + @Test + public void testQueryTimeoutRepeatable() throws Exception { + stmt.setQueryTimeout(2); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeQuery("select sleep_func(10) from Integer;"); + + return null; + }, SQLTimeoutException.class, "The query was cancelled while executing."); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeQuery("select sleep_func(10) from Integer;"); + + return null; + }, SQLTimeoutException.class, "The query was cancelled while executing."); + } + + /** + * Setting timeout that is less than file uploading execution time. + * SQLTimeoutException is expected. + * + * @throws Exception If failed.
+ */ + @SuppressWarnings("unchecked") + @Test + public void testFileUploadingTimeout() throws Exception { + + File file = File.createTempFile("bulkload", "csv"); + + FileWriter writer = new FileWriter(file); + + for (int i = 1; i <= 1_000_000; i++) + writer.write(String.format("%d,%d,\"FirstName%d MiddleName%d\",LastName%d", i, i, i, i, i)); + + writer.close(); + + stmt.setQueryTimeout(1); + + GridTestUtils.assertThrows(log, () -> { + stmt.executeUpdate( + "copy from '" + file.getAbsolutePath() + "' into Person" + + " (_key, age, firstName, lastName)" + + " format csv"); + + return null; + }, SQLTimeoutException.class, "The query was cancelled while executing."); + } + + /** + * Setting timeout that is less than batch query execution time. + * SQLTimeoutException is expected. + * + * @throws Exception If failed. + */ + @Test + public void testBatchQuery() throws Exception { + stmt.setQueryTimeout(1); + + GridTestUtils.assertThrows(log, () -> { + stmt.addBatch("update Long set _val = _val + 1 where _key < sleep_func (30)"); + stmt.addBatch("update Long set _val = _val + 1 where _key > sleep_func (10)"); + + stmt.executeBatch(); + + return null; + }, SQLTimeoutException.class, "The query was cancelled while executing."); + } + + /** + * Setting timeout that is less than multiple statements query execution time. + * SQLTimeoutException is expected. + * + * @throws Exception If failed.
+ */ + @Test + public void testMultipleStatementsQuery() throws Exception { + stmt.setQueryTimeout(1); + + GridTestUtils.assertThrows(log, () -> { + stmt.execute( + "update Long set _val = _val + 1 where _key > sleep_func (10);" + + "update Long set _val = _val + 1 where _key > sleep_func (10);" + + "update Long set _val = _val + 1 where _key > sleep_func (10);" + + "update Long set _val = _val + 1 where _key > sleep_func (10);" + + "select _val, sleep_func(10) as s from Integer limit 10"); + + return null; + }, SQLTimeoutException.class, "The query was cancelled while executing."); + } + + /** + * Setting timeout that is less than update query execution time. + * SQLTimeoutException is expected. + * + * @throws Exception If failed. + */ + @Test + public void testExecuteUpdateTimeout() throws Exception { + stmt.setQueryTimeout(1); + + GridTestUtils.assertThrows(log, () -> + stmt.executeUpdate("update Integer set _val=1 where _key > sleep_func(10)"), + SQLTimeoutException.class, "The query was cancelled while executing."); + } + + /** + * Utility class with custom SQL functions.
+ */ + public static class TestSQLFunctions { + /** + * @param v amount of milliseconds to sleep + * @return amount of milliseconds to sleep + */ + @SuppressWarnings("unused") + @QuerySqlFunction + public static int sleep_func(int v) { + try { + Thread.sleep(v); + } + catch (InterruptedException ignored) { + // No-op + } + return v; + } + } +} \ No newline at end of file diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStreamingAbstractSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStreamingAbstractSelfTest.java index c83977c692d19..70dc7815f8162 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStreamingAbstractSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStreamingAbstractSelfTest.java @@ -40,10 +40,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for streaming via thin driver. */ +@RunWith(JUnit4.class) public abstract class JdbcThinStreamingAbstractSelfTest extends JdbcStreamingSelfTest { /** */ protected int batchSize = 17; @@ -96,6 +100,7 @@ public abstract class JdbcThinStreamingAbstractSelfTest extends JdbcStreamingSel /** * @throws Exception if failed. */ + @Test public void testStreamedBatchedInsert() throws Exception { for (int i = 10; i <= 100; i += 10) put(i, nameForId(i * 100)); @@ -132,6 +137,7 @@ public void testStreamedBatchedInsert() throws Exception { /** * @throws SQLException if failed. 
*/ + @Test public void testSimultaneousStreaming() throws Exception { try (Connection anotherConn = createOrdinaryConnection()) { execute(anotherConn, "CREATE TABLE PUBLIC.T(x int primary key, y int) WITH " + @@ -212,6 +218,7 @@ public void testSimultaneousStreaming() throws Exception { /** * */ + @Test public void testStreamingWithMixedStatementTypes() throws Exception { String prepStmtStr = "insert into Person(\"id\", \"name\") values (?, ?)"; @@ -268,6 +275,7 @@ public void testStreamingWithMixedStatementTypes() throws Exception { /** * @throws SQLException if failed. */ + @Test public void testStreamingOffToOn() throws Exception { try (Connection conn = createOrdinaryConnection()) { assertStreamingState(false); @@ -281,6 +289,7 @@ public void testStreamingOffToOn() throws Exception { /** * @throws SQLException if failed. */ + @Test public void testStreamingOffToOff() throws Exception { try (Connection conn = createOrdinaryConnection()) { assertStreamingState(false); @@ -294,6 +303,7 @@ public void testStreamingOffToOff() throws Exception { /** * @throws SQLException if failed. */ + @Test public void testStreamingOnToOff() throws Exception { try (Connection conn = createStreamedConnection(false)) { assertStreamingState(true); @@ -307,6 +317,7 @@ public void testStreamingOnToOff() throws Exception { /** * @throws SQLException if failed. */ + @Test public void testFlush() throws Exception { try (Connection conn = createStreamedConnection(false, 10000)) { assertStreamingState(true); @@ -337,6 +348,7 @@ public void testFlush() throws Exception { /** * @throws SQLException if failed. 
*/ + @Test public void testStreamingReEnabled() throws Exception { try (Connection conn = createStreamedConnection(false, 10000)) { assertStreamingState(true); @@ -381,6 +393,7 @@ public void testStreamingReEnabled() throws Exception { * */ @SuppressWarnings("ThrowableNotThrown") + @Test public void testNonStreamedBatch() { GridTestUtils.assertThrows(null, new Callable() { @Override public Object call() throws Exception { @@ -407,6 +420,7 @@ public void testNonStreamedBatch() { * */ @SuppressWarnings("ThrowableNotThrown") + @Test public void testStreamingStatementInTheMiddleOfNonPreparedBatch() { GridTestUtils.assertThrows(null, new Callable() { @Override public Object call() throws Exception { @@ -428,6 +442,7 @@ public void testStreamingStatementInTheMiddleOfNonPreparedBatch() { * */ @SuppressWarnings("ThrowableNotThrown") + @Test public void testBatchingSetStreamingStatement() { GridTestUtils.assertThrows(null, new Callable() { @Override public Object call() throws Exception { @@ -513,4 +528,4 @@ static final class IndexingWithContext extends IgniteH2Indexing { return super.querySqlFields(schemaName, qry, cliCtx, keepBinary, failOnMultipleStmts, tracker, cancel); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTcpIoTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTcpIoTest.java index fc3704b6b7b50..dab2d88d95c8e 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTcpIoTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTcpIoTest.java @@ -32,11 +32,14 @@ import org.apache.ignite.internal.util.HostAndPortRange; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for JdbcThinTcpIo. 
*/ -@SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class JdbcThinTcpIoTest extends GridCommonAbstractTest { /** Server port range. */ private static final int[] SERVER_PORT_RANGE = {59000, 59020}; @@ -116,6 +119,7 @@ private JdbcThinTcpIo createTcpIo(String[] addrs, int port) throws SQLException * @throws SQLException On connection error or reject. * @throws IOException On IO error in handshake. */ + @Test public void testHostWithManyAddresses() throws SQLException, IOException, InterruptedException { CountDownLatch connectionAccepted = new CountDownLatch(1); @@ -142,6 +146,7 @@ public void testHostWithManyAddresses() throws SQLException, IOException, Interr * @throws SQLException On connection error or reject. * @throws IOException On IO error in handshake. */ + @Test public void testExceptionMessage() throws SQLException, IOException { try (ServerSocket sock = createServerSocket(null)) { String[] addrs = {INACCESSIBLE_ADDRESSES[0], INACCESSIBLE_ADDRESSES[1]}; diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsAbstractComplexSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsAbstractComplexSelfTest.java index 68ed36b08647d..d614fbc4bc59d 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsAbstractComplexSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsAbstractComplexSelfTest.java @@ -44,13 +44,18 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test to check various transactional scenarios. */ +@RunWith(JUnit4.class) public abstract class JdbcThinTransactionsAbstractComplexSelfTest extends JdbcThinAbstractSelfTest { /** Client node index. 
*/ - final static int CLI_IDX = 1; + static final int CLI_IDX = 1; /** * Closure to perform ordinary delete after repeatable read. @@ -227,6 +232,7 @@ public abstract class JdbcThinTransactionsAbstractComplexSelfTest extends JdbcTh /** * */ + @Test public void testSingleDmlStatement() throws SQLException { insertPerson(6, "John", "Doe", 2, 2); @@ -237,6 +243,7 @@ public void testSingleDmlStatement() throws SQLException { /** * */ + @Test public void testMultipleDmlStatements() throws SQLException { executeInTransaction(new TransactionClosure() { @Override public void apply(Connection conn) { @@ -259,6 +266,7 @@ public void testMultipleDmlStatements() throws SQLException { /** * */ + @Test public void testBatchDmlStatements() throws SQLException { doBatchedInsert(); @@ -271,7 +279,7 @@ public void testBatchDmlStatements() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testBatchDmlStatementsIntermediateFailure() throws SQLException { insertPerson(6, "John", "Doe", 2, 2); @@ -339,6 +347,8 @@ private void doBatchedInsert() throws SQLException { /** * */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10770") + @Test public void testInsertAndQueryMultipleCaches() throws SQLException { executeInTransaction(new TransactionClosure() { @Override public void apply(Connection conn) { @@ -360,8 +370,9 @@ public void testInsertAndQueryMultipleCaches() throws SQLException { /** * */ + @Test public void testColocatedJoinSelectAndInsertInTransaction() throws SQLException { - // We'd like to put some Google into cities with over 1K population which don't have it yet executeInTransaction(new TransactionClosure() { @Override public void apply(Connection conn) { List ids = flat(execute(conn, "SELECT distinct City.id from City left join Company c on " + @@ -383,6 +394,7 @@ public void
testColocatedJoinSelectAndInsertInTransaction() throws SQLException /** * */ + @Test public void testDistributedJoinSelectAndInsertInTransaction() throws SQLException { try (Connection c = connect("distributedJoins=true")) { // We'd like to put some Google into cities with over 1K population which don't have it yet @@ -408,6 +420,7 @@ public void testDistributedJoinSelectAndInsertInTransaction() throws SQLExceptio /** * */ + @Test public void testInsertFromExpression() throws SQLException { executeInTransaction(new TransactionClosure() { @Override public void apply(Connection conn) { @@ -420,6 +433,7 @@ public void testInsertFromExpression() throws SQLException { /** * */ + @Test public void testAutoRollback() throws SQLException { try (Connection c = connect()) { begin(c); @@ -435,6 +449,7 @@ public void testAutoRollback() throws SQLException { /** * */ + @Test public void testRepeatableReadWithConcurrentDelete() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -446,6 +461,7 @@ public void testRepeatableReadWithConcurrentDelete() throws Exception { /** * */ + @Test public void testRepeatableReadWithConcurrentFastDelete() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -457,6 +473,7 @@ public void testRepeatableReadWithConcurrentFastDelete() throws Exception { /** * */ + @Test public void testRepeatableReadWithConcurrentCacheRemove() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -468,6 +485,7 @@ public void testRepeatableReadWithConcurrentCacheRemove() throws Exception { /** * */ + @Test public void testRepeatableReadAndDeleteWithConcurrentDelete() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -479,6 +497,7 @@ public void testRepeatableReadAndDeleteWithConcurrentDelete() throws Exception { /** * */ + 
@Test public void testRepeatableReadAndDeleteWithConcurrentFastDelete() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -490,6 +509,7 @@ public void testRepeatableReadAndDeleteWithConcurrentFastDelete() throws Excepti /** * */ + @Test public void testRepeatableReadAndDeleteWithConcurrentCacheRemove() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -501,6 +521,7 @@ public void testRepeatableReadAndDeleteWithConcurrentCacheRemove() throws Except /** * */ + @Test public void testRepeatableReadAndFastDeleteWithConcurrentDelete() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -512,6 +533,7 @@ public void testRepeatableReadAndFastDeleteWithConcurrentDelete() throws Excepti /** * */ + @Test public void testRepeatableReadAndFastDeleteWithConcurrentFastDelete() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -523,6 +545,7 @@ public void testRepeatableReadAndFastDeleteWithConcurrentFastDelete() throws Exc /** * */ + @Test public void testRepeatableReadAndFastDeleteWithConcurrentCacheRemove() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -534,6 +557,7 @@ public void testRepeatableReadAndFastDeleteWithConcurrentCacheRemove() throws Ex /** * */ + @Test public void testRepeatableReadAndDeleteWithConcurrentDeleteAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -545,6 +569,7 @@ public void testRepeatableReadAndDeleteWithConcurrentDeleteAndRollback() throws /** * */ + @Test public void testRepeatableReadAndDeleteWithConcurrentFastDeleteAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -556,6 +581,7 @@ 
public void testRepeatableReadAndDeleteWithConcurrentFastDeleteAndRollback() thr /** * */ + @Test public void testRepeatableReadAndDeleteWithConcurrentCacheRemoveAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -567,6 +593,7 @@ public void testRepeatableReadAndDeleteWithConcurrentCacheRemoveAndRollback() th /** * */ + @Test public void testRepeatableReadAndFastDeleteWithConcurrentDeleteAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -578,6 +605,7 @@ public void testRepeatableReadAndFastDeleteWithConcurrentDeleteAndRollback() thr /** * */ + @Test public void testRepeatableReadAndFastDeleteWithConcurrentFastDeleteAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -589,6 +617,7 @@ public void testRepeatableReadAndFastDeleteWithConcurrentFastDeleteAndRollback() /** * */ + @Test public void testRepeatableReadAndFastDeleteWithConcurrentCacheRemoveAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -600,6 +629,7 @@ public void testRepeatableReadAndFastDeleteWithConcurrentCacheRemoveAndRollback( /** * */ + @Test public void testRepeatableReadWithConcurrentUpdate() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -611,6 +641,7 @@ public void testRepeatableReadWithConcurrentUpdate() throws Exception { /** * */ + @Test public void testRepeatableReadWithConcurrentCacheReplace() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -628,6 +659,7 @@ public void testRepeatableReadWithConcurrentCacheReplace() throws Exception { /** * */ + @Test public void testRepeatableReadAndUpdateWithConcurrentUpdate() throws Exception { doTestRepeatableRead(new 
IgniteInClosure() { @Override public void apply(Connection conn) { @@ -639,6 +671,7 @@ public void testRepeatableReadAndUpdateWithConcurrentUpdate() throws Exception { /** * */ + @Test public void testRepeatableReadAndUpdateWithConcurrentCacheReplace() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -656,6 +689,7 @@ public void testRepeatableReadAndUpdateWithConcurrentCacheReplace() throws Excep /** * */ + @Test public void testRepeatableReadAndUpdateWithConcurrentUpdateAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -667,6 +701,7 @@ public void testRepeatableReadAndUpdateWithConcurrentUpdateAndRollback() throws /** * */ + @Test public void testRepeatableReadAndUpdateWithConcurrentCacheReplaceAndRollback() throws Exception { doTestRepeatableRead(new IgniteInClosure() { @Override public void apply(Connection conn) { @@ -688,7 +723,6 @@ public void testRepeatableReadAndUpdateWithConcurrentCacheReplaceAndRollback() t * (must yield an exception). * @throws Exception if failed. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") private void doTestRepeatableRead(final IgniteInClosure concurrentWriteClo, final IgniteInClosure afterReadClo) throws Exception { final CountDownLatch repeatableReadLatch = new CountDownLatch(1); @@ -755,11 +789,11 @@ private void doTestRepeatableRead(final IgniteInClosure concurrentWr return null; } - }, IgniteCheckedException.class, "Mvcc version mismatch."); + }, IgniteCheckedException.class, "Cannot serialize transaction due to write conflict"); assertTrue(X.hasCause(ex, SQLException.class)); - assertTrue(X.getCause(ex).getMessage().contains("Mvcc version mismatch.")); + assertTrue(X.getCause(ex).getMessage().contains("Cannot serialize transaction due to write conflict")); } else readFut.get(); @@ -1016,7 +1050,7 @@ private void insertProduct(Connection c, int id, String name, int companyId) { /** * Person class. */ - private final static class Person { + private static final class Person { /** */ @QuerySqlField public int id; diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsSelfTest.java index a8fa47b7df2a7..bccca4d369854 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsSelfTest.java @@ -34,20 +34,18 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.query.NestedTxMode; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import 
org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests to check behavior with transactions on. */ +@RunWith(JUnit4.class) public class JdbcThinTransactionsSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String URL = "jdbc:ignite:thin://127.0.0.1"; @@ -55,17 +53,11 @@ public class JdbcThinTransactionsSelfTest extends JdbcThinAbstractSelfTest { private GridStringLogger log; /** {@inheritDoc} */ - @SuppressWarnings("deprecation") + @SuppressWarnings({"deprecation", "unchecked"}) @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); + cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME).setNearConfiguration(null)); cfg.setMarshaller(new BinaryMarshaller()); @@ -124,6 +116,7 @@ private static Connection c(boolean autoCommit, NestedTxMode nestedTxMode) throw /** * */ + @Test public void testTransactionsBeginCommitRollback() throws IgniteCheckedException { final AtomicBoolean stop = new AtomicBoolean(); @@ -160,6 +153,7 @@ public void testTransactionsBeginCommitRollback() throws IgniteCheckedException /** * */ + @Test public void testTransactionsBeginCommitRollbackAutocommit() throws IgniteCheckedException { GridTestUtils.runMultiThreadedAsync(new Runnable() { @Override public void run() { @@ -186,7 +180,7 @@ public void testTransactionsBeginCommitRollbackAutocommit() throws IgniteChecked /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIgnoreNestedTxAutocommitOff() throws SQLException { try (Connection 
c = c(false, NestedTxMode.IGNORE)) { doNestedTxStart(c, false); @@ -198,7 +192,7 @@ public void testIgnoreNestedTxAutocommitOff() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOff() throws SQLException { try (Connection c = c(false, NestedTxMode.COMMIT)) { doNestedTxStart(c, false); @@ -210,7 +204,7 @@ public void testCommitNestedTxAutocommitOff() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testErrorNestedTxAutocommitOff() throws SQLException { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { @@ -226,7 +220,7 @@ public void testErrorNestedTxAutocommitOff() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIgnoreNestedTxAutocommitOn() throws SQLException { try (Connection c = c(true, NestedTxMode.IGNORE)) { doNestedTxStart(c, false); @@ -238,7 +232,7 @@ public void testIgnoreNestedTxAutocommitOn() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOn() throws SQLException { try (Connection c = c(true, NestedTxMode.COMMIT)) { doNestedTxStart(c, false); @@ -250,7 +244,7 @@ public void testCommitNestedTxAutocommitOn() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testErrorNestedTxAutocommitOn() throws SQLException { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { @@ -266,7 +260,7 @@ public void testErrorNestedTxAutocommitOn() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIgnoreNestedTxAutocommitOffBatched() throws SQLException { try (Connection c = c(false, NestedTxMode.IGNORE)) { doNestedTxStart(c, true); @@ -278,7 +272,7 @@ public void 
testIgnoreNestedTxAutocommitOffBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOffBatched() throws SQLException { try (Connection c = c(false, NestedTxMode.COMMIT)) { doNestedTxStart(c, true); @@ -290,7 +284,7 @@ public void testCommitNestedTxAutocommitOffBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testErrorNestedTxAutocommitOffBatched() throws SQLException { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { @@ -306,7 +300,7 @@ public void testErrorNestedTxAutocommitOffBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIgnoreNestedTxAutocommitOnBatched() throws SQLException { try (Connection c = c(true, NestedTxMode.IGNORE)) { doNestedTxStart(c, true); @@ -318,7 +312,7 @@ public void testIgnoreNestedTxAutocommitOnBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOnBatched() throws SQLException { try (Connection c = c(true, NestedTxMode.COMMIT)) { doNestedTxStart(c, true); @@ -330,7 +324,7 @@ public void testCommitNestedTxAutocommitOnBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testErrorNestedTxAutocommitOnBatched() throws SQLException { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { @@ -371,6 +365,7 @@ private void doNestedTxStart(Connection conn, boolean batched) throws SQLExcepti /** * @throws SQLException if failed. */ + @Test public void testAutoCommitSingle() throws SQLException { doTestAutoCommit(false); } @@ -378,6 +373,7 @@ public void testAutoCommitSingle() throws SQLException { /** * @throws SQLException if failed. 
*/ + @Test public void testAutoCommitBatched() throws SQLException { doTestAutoCommit(true); } @@ -422,7 +418,7 @@ private void doTestAutoCommit(boolean batched) throws SQLException { * Test that exception in one of the statements does not kill connection worker altogether. * @throws SQLException if failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testExceptionHandling() throws SQLException { try (Connection c = c(true, NestedTxMode.ERROR)) { try (Statement s = c.createStatement()) { @@ -444,4 +440,32 @@ public void testExceptionHandling() throws SQLException { } } } + + /** + * Test that a statement parsing error does not roll back or otherwise affect the state of the current transaction. + * @throws SQLException if failed. + */ + @Test + public void testParsingErrorHasNoSideEffect() throws SQLException { + try (Connection c = c(false, NestedTxMode.ERROR)) { + try (Statement s = c.createStatement()) { + s.execute("INSERT INTO INTS(k, v) values(1, 1)"); + + GridTestUtils.assertThrows(null, new Callable() { + @Override public Void call() throws Exception { + s.execute("INSERT INTO INTS(k, v) values(1)"); + + return null; + } + }, SQLException.class, "Failed to parse query"); + + s.execute("INSERT INTO INTS(k, v) values(2, 2)"); + + c.commit(); + } + + assertEquals(1, grid(0).cache("ints").get(1)); + assertEquals(2, grid(0).cache("ints").get(2)); + } + } } diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsWithMvccEnabledSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsWithMvccEnabledSelfTest.java index e01a53dc59e9f..f1441b55c3f7d 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsWithMvccEnabledSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinTransactionsWithMvccEnabledSelfTest.java @@ -34,20 +34,18 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import
org.apache.ignite.internal.processors.query.NestedTxMode; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests to check behavior with transactions on. */ +@RunWith(JUnit4.class) public class JdbcThinTransactionsWithMvccEnabledSelfTest extends JdbcThinAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String URL = "jdbc:ignite:thin://127.0.0.1"; @@ -55,17 +53,11 @@ public class JdbcThinTransactionsWithMvccEnabledSelfTest extends JdbcThinAbstrac private GridStringLogger log; /** {@inheritDoc} */ - @SuppressWarnings("deprecation") + @SuppressWarnings({"deprecation", "unchecked"}) @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME)); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); + cfg.setCacheConfiguration(cacheConfiguration(DEFAULT_CACHE_NAME).setNearConfiguration(null)); cfg.setMarshaller(new BinaryMarshaller()); @@ -77,9 +69,8 @@ public class JdbcThinTransactionsWithMvccEnabledSelfTest extends JdbcThinAbstrac /** * @param name Cache name. * @return Cache configuration. - * @throws Exception In case of error. 
*/ - private CacheConfiguration cacheConfiguration(@NotNull String name) throws Exception { + private CacheConfiguration cacheConfiguration(@NotNull String name) { CacheConfiguration cfg = defaultCacheConfiguration(); cfg.setName(name); @@ -124,6 +115,7 @@ private static Connection c(boolean autoCommit, NestedTxMode nestedTxMode) throw /** * */ + @Test public void testTransactionsBeginCommitRollback() throws IgniteCheckedException { final AtomicBoolean stop = new AtomicBoolean(); @@ -160,6 +152,7 @@ public void testTransactionsBeginCommitRollback() throws IgniteCheckedException /** * */ + @Test public void testTransactionsBeginCommitRollbackAutocommit() throws IgniteCheckedException { GridTestUtils.runMultiThreadedAsync(new Runnable() { @Override public void run() { @@ -186,7 +179,7 @@ public void testTransactionsBeginCommitRollbackAutocommit() throws IgniteChecked /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIgnoreNestedTxAutocommitOff() throws SQLException { try (Connection c = c(false, NestedTxMode.IGNORE)) { doNestedTxStart(c, false); @@ -198,7 +191,7 @@ public void testIgnoreNestedTxAutocommitOff() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOff() throws SQLException { try (Connection c = c(false, NestedTxMode.COMMIT)) { doNestedTxStart(c, false); @@ -210,8 +203,8 @@ public void testCommitNestedTxAutocommitOff() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public void testErrorNestedTxAutocommitOff() throws SQLException { + @Test + public void testErrorNestedTxAutocommitOff() { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { try (Connection c = c(false, NestedTxMode.ERROR)) { @@ -226,7 +219,7 @@ public void testErrorNestedTxAutocommitOff() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + 
@Test public void testIgnoreNestedTxAutocommitOn() throws SQLException { try (Connection c = c(true, NestedTxMode.IGNORE)) { doNestedTxStart(c, false); @@ -238,7 +231,7 @@ public void testIgnoreNestedTxAutocommitOn() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOn() throws SQLException { try (Connection c = c(true, NestedTxMode.COMMIT)) { doNestedTxStart(c, false); @@ -250,8 +243,8 @@ public void testCommitNestedTxAutocommitOn() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public void testErrorNestedTxAutocommitOn() throws SQLException { + @Test + public void testErrorNestedTxAutocommitOn() { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { try (Connection c = c(true, NestedTxMode.ERROR)) { @@ -266,7 +259,7 @@ public void testErrorNestedTxAutocommitOn() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIgnoreNestedTxAutocommitOffBatched() throws SQLException { try (Connection c = c(false, NestedTxMode.IGNORE)) { doNestedTxStart(c, true); @@ -278,7 +271,7 @@ public void testIgnoreNestedTxAutocommitOffBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOffBatched() throws SQLException { try (Connection c = c(false, NestedTxMode.COMMIT)) { doNestedTxStart(c, true); @@ -290,8 +283,8 @@ public void testCommitNestedTxAutocommitOffBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public void testErrorNestedTxAutocommitOffBatched() throws SQLException { + @Test + public void testErrorNestedTxAutocommitOffBatched() { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { try (Connection c = c(false, NestedTxMode.ERROR)) { @@ -306,7 +299,7 @@ 
public void testErrorNestedTxAutocommitOffBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIgnoreNestedTxAutocommitOnBatched() throws SQLException { try (Connection c = c(true, NestedTxMode.IGNORE)) { doNestedTxStart(c, true); @@ -318,7 +311,7 @@ public void testIgnoreNestedTxAutocommitOnBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCommitNestedTxAutocommitOnBatched() throws SQLException { try (Connection c = c(true, NestedTxMode.COMMIT)) { doNestedTxStart(c, true); @@ -330,8 +323,8 @@ public void testCommitNestedTxAutocommitOnBatched() throws SQLException { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public void testErrorNestedTxAutocommitOnBatched() throws SQLException { + @Test + public void testErrorNestedTxAutocommitOnBatched() { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { try (Connection c = c(true, NestedTxMode.ERROR)) { @@ -371,6 +364,7 @@ private void doNestedTxStart(Connection conn, boolean batched) throws SQLExcepti /** * @throws SQLException if failed. */ + @Test public void testAutoCommitSingle() throws SQLException { doTestAutoCommit(false); } @@ -378,6 +372,7 @@ public void testAutoCommitSingle() throws SQLException { /** * @throws SQLException if failed. */ + @Test public void testAutoCommitBatched() throws SQLException { doTestAutoCommit(true); } @@ -422,7 +417,7 @@ private void doTestAutoCommit(boolean batched) throws SQLException { * Test that exception in one of the statements does not kill connection worker altogether. * @throws SQLException if failed. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testExceptionHandling() throws SQLException { try (Connection c = c(true, NestedTxMode.ERROR)) { try (Statement s = c.createStatement()) { diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinUpdateStatementSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinUpdateStatementSelfTest.java index f749dbeba7bf2..1f5fd8564b076 100644 --- a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinUpdateStatementSelfTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinUpdateStatementSelfTest.java @@ -21,14 +21,19 @@ import java.util.Arrays; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class JdbcThinUpdateStatementSelfTest extends JdbcThinAbstractUpdateStatementSelfTest { /** * @throws SQLException If failed. */ + @Test public void testExecute() throws SQLException { conn.createStatement().execute("update Person set firstName = 'Jack' where " + "cast(substring(_key, 2, 1) as int) % 2 = 0"); @@ -40,6 +45,7 @@ public void testExecute() throws SQLException { /** * @throws SQLException If failed. 
*/ + @Test public void testExecuteUpdate() throws SQLException { conn.createStatement().executeUpdate("update Person set firstName = 'Jack' where " + "cast(substring(_key, 2, 1) as int) % 2 = 0"); diff --git a/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/MvccJdbcTransactionFinishOnDeactivatedClusterSelfTest.java b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/MvccJdbcTransactionFinishOnDeactivatedClusterSelfTest.java new file mode 100644 index 0000000000000..dab44bfa38ba4 --- /dev/null +++ b/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/MvccJdbcTransactionFinishOnDeactivatedClusterSelfTest.java @@ -0,0 +1,162 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.jdbc.thin; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.Statement; +import java.util.Collections; +import java.util.concurrent.CountDownLatch; +import org.apache.ignite.configuration.ConnectorConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.client.GridClient; +import org.apache.ignite.internal.client.GridClientClusterState; +import org.apache.ignite.internal.client.GridClientConfiguration; +import org.apache.ignite.internal.client.GridClientFactory; +import org.apache.ignite.internal.util.lang.GridAbsPredicate; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** */ +@RunWith(JUnit4.class) +public class MvccJdbcTransactionFinishOnDeactivatedClusterSelfTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName) + .setConnectorConfiguration(new ConnectorConfiguration()) + .setDataStorageConfiguration(new DataStorageConfiguration().setDefaultDataRegionConfiguration( + new DataRegionConfiguration().setPersistenceEnabled(true)) + ); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testTxCommitAfterDeactivation() throws Exception { + checkTxFinishAfterDeactivation(true); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testTxRollbackAfterDeactivation() throws Exception { + checkTxFinishAfterDeactivation(false); + } + + /** */ + public void checkTxFinishAfterDeactivation(boolean commit) throws Exception { + IgniteEx node0 = startGrid(0); + + node0.cluster().active(true); + + try (Connection conn = connect()) { + execute(conn, "CREATE TABLE t1(a INT, b VARCHAR, PRIMARY KEY(a)) WITH \"atomicity=TRANSACTIONAL_SNAPSHOT,backups=1\""); + } + + final CountDownLatch enlistedLatch = new CountDownLatch(1); + + assert node0.cluster().active(); + + IgniteInternalFuture txFinishedFut = GridTestUtils.runAsync(() -> { + executeTransaction(commit, enlistedLatch, () -> !node0.context().state().publicApiActiveState(true)); + + return null; + }); + + enlistedLatch.await(); + + deactivateThroughClient(); + + log.info(">>> Cluster deactivated ..."); + + try { + txFinishedFut.get(); + } + catch (Exception e) { + e.printStackTrace(); + + fail("Exception is not expected here"); + } + } + + /** */ + private void executeTransaction(boolean commit, CountDownLatch enlistedLatch, + GridAbsPredicate beforeCommitCondition) throws Exception { + try (Connection conn = connect()) { + execute(conn, "BEGIN"); + + execute(conn, "INSERT INTO t1 VALUES (1, '1')"); + + log.info(">>> Started transaction and enlisted entries"); + + enlistedLatch.countDown(); + + GridTestUtils.waitForCondition(beforeCommitCondition, 5_000); + + log.info(">>> Attempting to finish transaction"); + + execute(conn, commit ? 
"COMMIT" : "ROLLBACK"); + } + } + + /** */ + private static Connection connect() throws Exception { + return DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1"); + } + + /** */ + private static void execute(Connection conn, String sql) throws Exception { + try (Statement stmt = conn.createStatement()) { + stmt.executeUpdate(sql); + } + } + + /** */ + private void deactivateThroughClient() throws Exception { + GridClientConfiguration clientCfg = new GridClientConfiguration(); + + clientCfg.setServers(Collections.singletonList("127.0.0.1:11211")); + + try (GridClient client = GridClientFactory.start(clientCfg)) { + GridClientClusterState state = client.state(); + + log.info(">>> Try to deactivate ..."); + + state.active(false); + } + } +} diff --git a/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientMarshallerBenchmarkTest.java b/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientMarshallerBenchmarkTest.java index 08c2cbe375213..3b9b494523f97 100644 --- a/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientMarshallerBenchmarkTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientMarshallerBenchmarkTest.java @@ -31,12 +31,16 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.marshaller.MarshallerUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.rest.client.message.GridClientCacheRequest.GridCacheOperation.CAS; /** * Tests basic performance of marshallers. */ +@RunWith(JUnit4.class) public class ClientMarshallerBenchmarkTest extends GridCommonAbstractTest { /** Marshallers to test. */ private GridClientMarshaller[] marshallers; @@ -58,6 +62,7 @@ public ClientMarshallerBenchmarkTest() { /** * @throws Exception If failed. 
*/ + @Test public void testCacheRequestTime() throws Exception { GridClientCacheRequest req = new GridClientCacheRequest(CAS); @@ -162,4 +167,4 @@ private T runMarshallUnmarshalLoop(T obj, int iterCnt, GridClientMarshaller return (T)res; } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientTcpSslLoadTest.java b/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientTcpSslLoadTest.java index 669c1104dde78..bd8d57d9482f9 100644 --- a/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientTcpSslLoadTest.java +++ b/modules/clients/src/test/java/org/apache/ignite/loadtests/client/ClientTcpSslLoadTest.java @@ -20,10 +20,14 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.internal.client.ClientTcpSslMultiThreadedSelfTest; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Makes a long run to ensure stability and absence of memory leaks. */ +@RunWith(JUnit4.class) public class ClientTcpSslLoadTest extends ClientTcpSslMultiThreadedSelfTest { /** Test duration. */ private static final long TEST_RUN_TIME = 8 * 60 * 60 * 1000; @@ -37,6 +41,7 @@ public class ClientTcpSslLoadTest extends ClientTcpSslMultiThreadedSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testLongRun() throws Exception { long start = System.currentTimeMillis(); @@ -83,4 +88,4 @@ private void clearCaches() { log.error("Cache clear failed.", e); } } -} \ No newline at end of file +} diff --git a/modules/clients/src/test/resources/bulkload20_000.csv b/modules/clients/src/test/resources/bulkload20_000.csv new file mode 100644 index 0000000000000..f04df111d0d45 --- /dev/null +++ b/modules/clients/src/test/resources/bulkload20_000.csv @@ -0,0 +1,20000 @@ +1,1,"FirstName1 MiddleName1",LastName1 +2,2,"FirstName2 MiddleName2",LastName2 +3,3,"FirstName3 MiddleName3",LastName3 +4,4,"FirstName4 MiddleName4",LastName4 +5,5,"FirstName5 MiddleName5",LastName5 +6,6,"FirstName6 MiddleName6",LastName6 +7,7,"FirstName7 MiddleName7",LastName7 +8,8,"FirstName8 MiddleName8",LastName8 +9,9,"FirstName9 MiddleName9",LastName9 +10,10,"FirstName10 MiddleName10",LastName10 +11,11,"FirstName11 MiddleName11",LastName11 +12,12,"FirstName12 MiddleName12",LastName12 +13,13,"FirstName13 MiddleName13",LastName13 +14,14,"FirstName14 MiddleName14",LastName14 +15,15,"FirstName15 MiddleName15",LastName15 +16,16,"FirstName16 MiddleName16",LastName16 +17,17,"FirstName17 MiddleName17",LastName17 +18,18,"FirstName18 MiddleName18",LastName18 +19,19,"FirstName19 MiddleName19",LastName19 +20,20,"FirstName20 MiddleName20",LastName20 +21,21,"FirstName21 MiddleName21",LastName21 +22,22,"FirstName22 MiddleName22",LastName22 +23,23,"FirstName23 MiddleName23",LastName23 +24,24,"FirstName24 MiddleName24",LastName24 +25,25,"FirstName25 MiddleName25",LastName25 +26,26,"FirstName26 MiddleName26",LastName26 +27,27,"FirstName27 MiddleName27",LastName27 +28,28,"FirstName28 MiddleName28",LastName28 +29,29,"FirstName29 MiddleName29",LastName29 +30,30,"FirstName30 MiddleName30",LastName30 +31,31,"FirstName31 MiddleName31",LastName31 +32,32,"FirstName32 MiddleName32",LastName32 +33,33,"FirstName33 MiddleName33",LastName33 +34,34,"FirstName34 MiddleName34",LastName34 +35,35,"FirstName35 
MiddleName35",LastName35 +36,36,"FirstName36 MiddleName36",LastName36 +37,37,"FirstName37 MiddleName37",LastName37 +38,38,"FirstName38 MiddleName38",LastName38 +39,39,"FirstName39 MiddleName39",LastName39 +40,40,"FirstName40 MiddleName40",LastName40 +41,41,"FirstName41 MiddleName41",LastName41 +42,42,"FirstName42 MiddleName42",LastName42 +43,43,"FirstName43 MiddleName43",LastName43 +44,44,"FirstName44 MiddleName44",LastName44 +45,45,"FirstName45 MiddleName45",LastName45 +46,46,"FirstName46 MiddleName46",LastName46 +47,47,"FirstName47 MiddleName47",LastName47 +48,48,"FirstName48 MiddleName48",LastName48 +49,49,"FirstName49 MiddleName49",LastName49 +50,50,"FirstName50 MiddleName50",LastName50 +51,51,"FirstName51 MiddleName51",LastName51 +52,52,"FirstName52 MiddleName52",LastName52 +53,53,"FirstName53 MiddleName53",LastName53 +54,54,"FirstName54 MiddleName54",LastName54 +55,55,"FirstName55 MiddleName55",LastName55 +56,56,"FirstName56 MiddleName56",LastName56 +57,57,"FirstName57 MiddleName57",LastName57 +58,58,"FirstName58 MiddleName58",LastName58 +59,59,"FirstName59 MiddleName59",LastName59 +60,60,"FirstName60 MiddleName60",LastName60 +61,61,"FirstName61 MiddleName61",LastName61 +62,62,"FirstName62 MiddleName62",LastName62 +63,63,"FirstName63 MiddleName63",LastName63 +64,64,"FirstName64 MiddleName64",LastName64 +65,65,"FirstName65 MiddleName65",LastName65 +66,66,"FirstName66 MiddleName66",LastName66 +67,67,"FirstName67 MiddleName67",LastName67 +68,68,"FirstName68 MiddleName68",LastName68 +69,69,"FirstName69 MiddleName69",LastName69 +70,70,"FirstName70 MiddleName70",LastName70 +71,71,"FirstName71 MiddleName71",LastName71 +72,72,"FirstName72 MiddleName72",LastName72 +73,73,"FirstName73 MiddleName73",LastName73 +74,74,"FirstName74 MiddleName74",LastName74 +75,75,"FirstName75 MiddleName75",LastName75 +76,76,"FirstName76 MiddleName76",LastName76 +77,77,"FirstName77 MiddleName77",LastName77 +78,78,"FirstName78 MiddleName78",LastName78 +79,79,"FirstName79 
MiddleName79",LastName79 +80,80,"FirstName80 MiddleName80",LastName80 +81,81,"FirstName81 MiddleName81",LastName81 +82,82,"FirstName82 MiddleName82",LastName82 +83,83,"FirstName83 MiddleName83",LastName83 +84,84,"FirstName84 MiddleName84",LastName84 +85,85,"FirstName85 MiddleName85",LastName85 +86,86,"FirstName86 MiddleName86",LastName86 +87,87,"FirstName87 MiddleName87",LastName87 +88,88,"FirstName88 MiddleName88",LastName88 +89,89,"FirstName89 MiddleName89",LastName89 +90,90,"FirstName90 MiddleName90",LastName90 +91,91,"FirstName91 MiddleName91",LastName91 +92,92,"FirstName92 MiddleName92",LastName92 +93,93,"FirstName93 MiddleName93",LastName93 +94,94,"FirstName94 MiddleName94",LastName94 +95,95,"FirstName95 MiddleName95",LastName95 +96,96,"FirstName96 MiddleName96",LastName96 +97,97,"FirstName97 MiddleName97",LastName97 +98,98,"FirstName98 MiddleName98",LastName98 +99,99,"FirstName99 MiddleName99",LastName99 +100,100,"FirstName100 MiddleName100",LastName100 +101,101,"FirstName101 MiddleName101",LastName101 +102,102,"FirstName102 MiddleName102",LastName102 +103,103,"FirstName103 MiddleName103",LastName103 +104,104,"FirstName104 MiddleName104",LastName104 +105,105,"FirstName105 MiddleName105",LastName105 +106,106,"FirstName106 MiddleName106",LastName106 +107,107,"FirstName107 MiddleName107",LastName107 +108,108,"FirstName108 MiddleName108",LastName108 +109,109,"FirstName109 MiddleName109",LastName109 +110,110,"FirstName110 MiddleName110",LastName110 +111,111,"FirstName111 MiddleName111",LastName111 +112,112,"FirstName112 MiddleName112",LastName112 +113,113,"FirstName113 MiddleName113",LastName113 +114,114,"FirstName114 MiddleName114",LastName114 +115,115,"FirstName115 MiddleName115",LastName115 +116,116,"FirstName116 MiddleName116",LastName116 +117,117,"FirstName117 MiddleName117",LastName117 +118,118,"FirstName118 MiddleName118",LastName118 +119,119,"FirstName119 MiddleName119",LastName119 +120,120,"FirstName120 MiddleName120",LastName120 +121,121,"FirstName121 
MiddleName121",LastName121 +122,122,"FirstName122 MiddleName122",LastName122 +123,123,"FirstName123 MiddleName123",LastName123 +124,124,"FirstName124 MiddleName124",LastName124 +125,125,"FirstName125 MiddleName125",LastName125 +126,126,"FirstName126 MiddleName126",LastName126 +127,127,"FirstName127 MiddleName127",LastName127 +128,128,"FirstName128 MiddleName128",LastName128 +129,129,"FirstName129 MiddleName129",LastName129 +130,130,"FirstName130 MiddleName130",LastName130 +131,131,"FirstName131 MiddleName131",LastName131 +132,132,"FirstName132 MiddleName132",LastName132 +133,133,"FirstName133 MiddleName133",LastName133 +134,134,"FirstName134 MiddleName134",LastName134 +135,135,"FirstName135 MiddleName135",LastName135 +136,136,"FirstName136 MiddleName136",LastName136 +137,137,"FirstName137 MiddleName137",LastName137 +138,138,"FirstName138 MiddleName138",LastName138 +139,139,"FirstName139 MiddleName139",LastName139 +140,140,"FirstName140 MiddleName140",LastName140 +141,141,"FirstName141 MiddleName141",LastName141 +142,142,"FirstName142 MiddleName142",LastName142 +143,143,"FirstName143 MiddleName143",LastName143 +144,144,"FirstName144 MiddleName144",LastName144 +145,145,"FirstName145 MiddleName145",LastName145 +146,146,"FirstName146 MiddleName146",LastName146 +147,147,"FirstName147 MiddleName147",LastName147 +148,148,"FirstName148 MiddleName148",LastName148 +149,149,"FirstName149 MiddleName149",LastName149 +150,150,"FirstName150 MiddleName150",LastName150 +151,151,"FirstName151 MiddleName151",LastName151 +152,152,"FirstName152 MiddleName152",LastName152 +153,153,"FirstName153 MiddleName153",LastName153 +154,154,"FirstName154 MiddleName154",LastName154 +155,155,"FirstName155 MiddleName155",LastName155 +156,156,"FirstName156 MiddleName156",LastName156 +157,157,"FirstName157 MiddleName157",LastName157 +158,158,"FirstName158 MiddleName158",LastName158 +159,159,"FirstName159 MiddleName159",LastName159 +160,160,"FirstName160 MiddleName160",LastName160 +161,161,"FirstName161 
MiddleName161",LastName161 +162,162,"FirstName162 MiddleName162",LastName162 +163,163,"FirstName163 MiddleName163",LastName163 +164,164,"FirstName164 MiddleName164",LastName164 +165,165,"FirstName165 MiddleName165",LastName165 +166,166,"FirstName166 MiddleName166",LastName166 +167,167,"FirstName167 MiddleName167",LastName167 +168,168,"FirstName168 MiddleName168",LastName168 +169,169,"FirstName169 MiddleName169",LastName169 +170,170,"FirstName170 MiddleName170",LastName170 +171,171,"FirstName171 MiddleName171",LastName171 +172,172,"FirstName172 MiddleName172",LastName172 +173,173,"FirstName173 MiddleName173",LastName173 +174,174,"FirstName174 MiddleName174",LastName174 +175,175,"FirstName175 MiddleName175",LastName175 +176,176,"FirstName176 MiddleName176",LastName176 +177,177,"FirstName177 MiddleName177",LastName177 +178,178,"FirstName178 MiddleName178",LastName178 +179,179,"FirstName179 MiddleName179",LastName179 +180,180,"FirstName180 MiddleName180",LastName180 +181,181,"FirstName181 MiddleName181",LastName181 +182,182,"FirstName182 MiddleName182",LastName182 +183,183,"FirstName183 MiddleName183",LastName183 +184,184,"FirstName184 MiddleName184",LastName184 +185,185,"FirstName185 MiddleName185",LastName185 +186,186,"FirstName186 MiddleName186",LastName186 +187,187,"FirstName187 MiddleName187",LastName187 +188,188,"FirstName188 MiddleName188",LastName188 +189,189,"FirstName189 MiddleName189",LastName189 +190,190,"FirstName190 MiddleName190",LastName190 +191,191,"FirstName191 MiddleName191",LastName191 +192,192,"FirstName192 MiddleName192",LastName192 +193,193,"FirstName193 MiddleName193",LastName193 +194,194,"FirstName194 MiddleName194",LastName194 +195,195,"FirstName195 MiddleName195",LastName195 +196,196,"FirstName196 MiddleName196",LastName196 +197,197,"FirstName197 MiddleName197",LastName197 +198,198,"FirstName198 MiddleName198",LastName198 +199,199,"FirstName199 MiddleName199",LastName199 +200,200,"FirstName200 MiddleName200",LastName200 +201,201,"FirstName201 
MiddleName201",LastName201 +202,202,"FirstName202 MiddleName202",LastName202 +203,203,"FirstName203 MiddleName203",LastName203 +204,204,"FirstName204 MiddleName204",LastName204 +205,205,"FirstName205 MiddleName205",LastName205 +206,206,"FirstName206 MiddleName206",LastName206 +207,207,"FirstName207 MiddleName207",LastName207 +208,208,"FirstName208 MiddleName208",LastName208 +209,209,"FirstName209 MiddleName209",LastName209 +210,210,"FirstName210 MiddleName210",LastName210 +211,211,"FirstName211 MiddleName211",LastName211 +212,212,"FirstName212 MiddleName212",LastName212 +213,213,"FirstName213 MiddleName213",LastName213 +214,214,"FirstName214 MiddleName214",LastName214 +215,215,"FirstName215 MiddleName215",LastName215 +216,216,"FirstName216 MiddleName216",LastName216 +217,217,"FirstName217 MiddleName217",LastName217 +218,218,"FirstName218 MiddleName218",LastName218 +219,219,"FirstName219 MiddleName219",LastName219 +220,220,"FirstName220 MiddleName220",LastName220 +221,221,"FirstName221 MiddleName221",LastName221 +222,222,"FirstName222 MiddleName222",LastName222 +223,223,"FirstName223 MiddleName223",LastName223 +224,224,"FirstName224 MiddleName224",LastName224 +225,225,"FirstName225 MiddleName225",LastName225 +226,226,"FirstName226 MiddleName226",LastName226 +227,227,"FirstName227 MiddleName227",LastName227 +228,228,"FirstName228 MiddleName228",LastName228 +229,229,"FirstName229 MiddleName229",LastName229 +230,230,"FirstName230 MiddleName230",LastName230 +231,231,"FirstName231 MiddleName231",LastName231 +232,232,"FirstName232 MiddleName232",LastName232 +233,233,"FirstName233 MiddleName233",LastName233 +234,234,"FirstName234 MiddleName234",LastName234 +235,235,"FirstName235 MiddleName235",LastName235 +236,236,"FirstName236 MiddleName236",LastName236 +237,237,"FirstName237 MiddleName237",LastName237 +238,238,"FirstName238 MiddleName238",LastName238 +239,239,"FirstName239 MiddleName239",LastName239 +240,240,"FirstName240 MiddleName240",LastName240 +241,241,"FirstName241 
MiddleName241",LastName241 +242,242,"FirstName242 MiddleName242",LastName242 +243,243,"FirstName243 MiddleName243",LastName243 +244,244,"FirstName244 MiddleName244",LastName244 +245,245,"FirstName245 MiddleName245",LastName245 +246,246,"FirstName246 MiddleName246",LastName246 +247,247,"FirstName247 MiddleName247",LastName247 +248,248,"FirstName248 MiddleName248",LastName248 +249,249,"FirstName249 MiddleName249",LastName249 +250,250,"FirstName250 MiddleName250",LastName250 +251,251,"FirstName251 MiddleName251",LastName251 +252,252,"FirstName252 MiddleName252",LastName252 +253,253,"FirstName253 MiddleName253",LastName253 +254,254,"FirstName254 MiddleName254",LastName254 +255,255,"FirstName255 MiddleName255",LastName255 +256,256,"FirstName256 MiddleName256",LastName256 +257,257,"FirstName257 MiddleName257",LastName257 +258,258,"FirstName258 MiddleName258",LastName258 +259,259,"FirstName259 MiddleName259",LastName259 +260,260,"FirstName260 MiddleName260",LastName260 +261,261,"FirstName261 MiddleName261",LastName261 +262,262,"FirstName262 MiddleName262",LastName262 +263,263,"FirstName263 MiddleName263",LastName263 +264,264,"FirstName264 MiddleName264",LastName264 +265,265,"FirstName265 MiddleName265",LastName265 +266,266,"FirstName266 MiddleName266",LastName266 +267,267,"FirstName267 MiddleName267",LastName267 +268,268,"FirstName268 MiddleName268",LastName268 +269,269,"FirstName269 MiddleName269",LastName269 +270,270,"FirstName270 MiddleName270",LastName270 +271,271,"FirstName271 MiddleName271",LastName271 +272,272,"FirstName272 MiddleName272",LastName272 +273,273,"FirstName273 MiddleName273",LastName273 +274,274,"FirstName274 MiddleName274",LastName274 +275,275,"FirstName275 MiddleName275",LastName275 +276,276,"FirstName276 MiddleName276",LastName276 +277,277,"FirstName277 MiddleName277",LastName277 +278,278,"FirstName278 MiddleName278",LastName278 +279,279,"FirstName279 MiddleName279",LastName279 +280,280,"FirstName280 MiddleName280",LastName280 +281,281,"FirstName281 
MiddleName281",LastName281 +282,282,"FirstName282 MiddleName282",LastName282 +283,283,"FirstName283 MiddleName283",LastName283 +284,284,"FirstName284 MiddleName284",LastName284 +285,285,"FirstName285 MiddleName285",LastName285 +286,286,"FirstName286 MiddleName286",LastName286 +287,287,"FirstName287 MiddleName287",LastName287 +288,288,"FirstName288 MiddleName288",LastName288 +289,289,"FirstName289 MiddleName289",LastName289 +290,290,"FirstName290 MiddleName290",LastName290 +291,291,"FirstName291 MiddleName291",LastName291 +292,292,"FirstName292 MiddleName292",LastName292 +293,293,"FirstName293 MiddleName293",LastName293 +294,294,"FirstName294 MiddleName294",LastName294 +295,295,"FirstName295 MiddleName295",LastName295 +296,296,"FirstName296 MiddleName296",LastName296 +297,297,"FirstName297 MiddleName297",LastName297 +298,298,"FirstName298 MiddleName298",LastName298 +299,299,"FirstName299 MiddleName299",LastName299 +300,300,"FirstName300 MiddleName300",LastName300 +301,301,"FirstName301 MiddleName301",LastName301 +302,302,"FirstName302 MiddleName302",LastName302 +303,303,"FirstName303 MiddleName303",LastName303 +304,304,"FirstName304 MiddleName304",LastName304 +305,305,"FirstName305 MiddleName305",LastName305 +306,306,"FirstName306 MiddleName306",LastName306 +307,307,"FirstName307 MiddleName307",LastName307 +308,308,"FirstName308 MiddleName308",LastName308 +309,309,"FirstName309 MiddleName309",LastName309 +310,310,"FirstName310 MiddleName310",LastName310 +311,311,"FirstName311 MiddleName311",LastName311 +312,312,"FirstName312 MiddleName312",LastName312 +313,313,"FirstName313 MiddleName313",LastName313 +314,314,"FirstName314 MiddleName314",LastName314 +315,315,"FirstName315 MiddleName315",LastName315 +316,316,"FirstName316 MiddleName316",LastName316 +317,317,"FirstName317 MiddleName317",LastName317 +318,318,"FirstName318 MiddleName318",LastName318 +319,319,"FirstName319 MiddleName319",LastName319 +320,320,"FirstName320 MiddleName320",LastName320 +321,321,"FirstName321 
MiddleName321",LastName321 +322,322,"FirstName322 MiddleName322",LastName322 +323,323,"FirstName323 MiddleName323",LastName323 +324,324,"FirstName324 MiddleName324",LastName324 +325,325,"FirstName325 MiddleName325",LastName325 +326,326,"FirstName326 MiddleName326",LastName326 +327,327,"FirstName327 MiddleName327",LastName327 +328,328,"FirstName328 MiddleName328",LastName328 +329,329,"FirstName329 MiddleName329",LastName329 +330,330,"FirstName330 MiddleName330",LastName330 +331,331,"FirstName331 MiddleName331",LastName331 +332,332,"FirstName332 MiddleName332",LastName332 +333,333,"FirstName333 MiddleName333",LastName333 +334,334,"FirstName334 MiddleName334",LastName334 +335,335,"FirstName335 MiddleName335",LastName335 +336,336,"FirstName336 MiddleName336",LastName336 +337,337,"FirstName337 MiddleName337",LastName337 +338,338,"FirstName338 MiddleName338",LastName338 +339,339,"FirstName339 MiddleName339",LastName339 +340,340,"FirstName340 MiddleName340",LastName340 +341,341,"FirstName341 MiddleName341",LastName341 +342,342,"FirstName342 MiddleName342",LastName342 +343,343,"FirstName343 MiddleName343",LastName343 +344,344,"FirstName344 MiddleName344",LastName344 +345,345,"FirstName345 MiddleName345",LastName345 +346,346,"FirstName346 MiddleName346",LastName346 +347,347,"FirstName347 MiddleName347",LastName347 +348,348,"FirstName348 MiddleName348",LastName348 +349,349,"FirstName349 MiddleName349",LastName349 +350,350,"FirstName350 MiddleName350",LastName350 +351,351,"FirstName351 MiddleName351",LastName351 +352,352,"FirstName352 MiddleName352",LastName352 +353,353,"FirstName353 MiddleName353",LastName353 +354,354,"FirstName354 MiddleName354",LastName354 +355,355,"FirstName355 MiddleName355",LastName355 +356,356,"FirstName356 MiddleName356",LastName356 +357,357,"FirstName357 MiddleName357",LastName357 +358,358,"FirstName358 MiddleName358",LastName358 +359,359,"FirstName359 MiddleName359",LastName359 +360,360,"FirstName360 MiddleName360",LastName360 +361,361,"FirstName361 
MiddleName361",LastName361 +362,362,"FirstName362 MiddleName362",LastName362 +363,363,"FirstName363 MiddleName363",LastName363 +364,364,"FirstName364 MiddleName364",LastName364 +365,365,"FirstName365 MiddleName365",LastName365 +366,366,"FirstName366 MiddleName366",LastName366 +367,367,"FirstName367 MiddleName367",LastName367 +368,368,"FirstName368 MiddleName368",LastName368 +369,369,"FirstName369 MiddleName369",LastName369 +370,370,"FirstName370 MiddleName370",LastName370 +371,371,"FirstName371 MiddleName371",LastName371 +372,372,"FirstName372 MiddleName372",LastName372 +373,373,"FirstName373 MiddleName373",LastName373 +374,374,"FirstName374 MiddleName374",LastName374 +375,375,"FirstName375 MiddleName375",LastName375 +376,376,"FirstName376 MiddleName376",LastName376 +377,377,"FirstName377 MiddleName377",LastName377 +378,378,"FirstName378 MiddleName378",LastName378 +379,379,"FirstName379 MiddleName379",LastName379 +380,380,"FirstName380 MiddleName380",LastName380 +381,381,"FirstName381 MiddleName381",LastName381 +382,382,"FirstName382 MiddleName382",LastName382 +383,383,"FirstName383 MiddleName383",LastName383 +384,384,"FirstName384 MiddleName384",LastName384 +385,385,"FirstName385 MiddleName385",LastName385 +386,386,"FirstName386 MiddleName386",LastName386 +387,387,"FirstName387 MiddleName387",LastName387 +388,388,"FirstName388 MiddleName388",LastName388 +389,389,"FirstName389 MiddleName389",LastName389 +390,390,"FirstName390 MiddleName390",LastName390 +391,391,"FirstName391 MiddleName391",LastName391 +392,392,"FirstName392 MiddleName392",LastName392 +393,393,"FirstName393 MiddleName393",LastName393 +394,394,"FirstName394 MiddleName394",LastName394 +395,395,"FirstName395 MiddleName395",LastName395 +396,396,"FirstName396 MiddleName396",LastName396 +397,397,"FirstName397 MiddleName397",LastName397 +398,398,"FirstName398 MiddleName398",LastName398 +399,399,"FirstName399 MiddleName399",LastName399 +400,400,"FirstName400 MiddleName400",LastName400 +401,401,"FirstName401 
MiddleName401",LastName401 +402,402,"FirstName402 MiddleName402",LastName402 +403,403,"FirstName403 MiddleName403",LastName403 +404,404,"FirstName404 MiddleName404",LastName404 +405,405,"FirstName405 MiddleName405",LastName405 +406,406,"FirstName406 MiddleName406",LastName406 +407,407,"FirstName407 MiddleName407",LastName407 +408,408,"FirstName408 MiddleName408",LastName408 +409,409,"FirstName409 MiddleName409",LastName409 +410,410,"FirstName410 MiddleName410",LastName410 +411,411,"FirstName411 MiddleName411",LastName411 +412,412,"FirstName412 MiddleName412",LastName412 +413,413,"FirstName413 MiddleName413",LastName413 +414,414,"FirstName414 MiddleName414",LastName414 +415,415,"FirstName415 MiddleName415",LastName415 +416,416,"FirstName416 MiddleName416",LastName416 +417,417,"FirstName417 MiddleName417",LastName417 +418,418,"FirstName418 MiddleName418",LastName418 +419,419,"FirstName419 MiddleName419",LastName419 +420,420,"FirstName420 MiddleName420",LastName420 +421,421,"FirstName421 MiddleName421",LastName421 +422,422,"FirstName422 MiddleName422",LastName422 +423,423,"FirstName423 MiddleName423",LastName423 +424,424,"FirstName424 MiddleName424",LastName424 +425,425,"FirstName425 MiddleName425",LastName425 +426,426,"FirstName426 MiddleName426",LastName426 +427,427,"FirstName427 MiddleName427",LastName427 +428,428,"FirstName428 MiddleName428",LastName428 +429,429,"FirstName429 MiddleName429",LastName429 +430,430,"FirstName430 MiddleName430",LastName430 +431,431,"FirstName431 MiddleName431",LastName431 +432,432,"FirstName432 MiddleName432",LastName432 +433,433,"FirstName433 MiddleName433",LastName433 +434,434,"FirstName434 MiddleName434",LastName434 +435,435,"FirstName435 MiddleName435",LastName435 +436,436,"FirstName436 MiddleName436",LastName436 +437,437,"FirstName437 MiddleName437",LastName437 +438,438,"FirstName438 MiddleName438",LastName438 +439,439,"FirstName439 MiddleName439",LastName439 +440,440,"FirstName440 MiddleName440",LastName440 +441,441,"FirstName441 
MiddleName441",LastName441 +442,442,"FirstName442 MiddleName442",LastName442 +443,443,"FirstName443 MiddleName443",LastName443 +444,444,"FirstName444 MiddleName444",LastName444 +445,445,"FirstName445 MiddleName445",LastName445 +446,446,"FirstName446 MiddleName446",LastName446 +447,447,"FirstName447 MiddleName447",LastName447 +448,448,"FirstName448 MiddleName448",LastName448 +449,449,"FirstName449 MiddleName449",LastName449 +450,450,"FirstName450 MiddleName450",LastName450 +451,451,"FirstName451 MiddleName451",LastName451 +452,452,"FirstName452 MiddleName452",LastName452 +453,453,"FirstName453 MiddleName453",LastName453 +454,454,"FirstName454 MiddleName454",LastName454 +455,455,"FirstName455 MiddleName455",LastName455 +456,456,"FirstName456 MiddleName456",LastName456 +457,457,"FirstName457 MiddleName457",LastName457 +458,458,"FirstName458 MiddleName458",LastName458 +459,459,"FirstName459 MiddleName459",LastName459 +460,460,"FirstName460 MiddleName460",LastName460 +461,461,"FirstName461 MiddleName461",LastName461 +462,462,"FirstName462 MiddleName462",LastName462 +463,463,"FirstName463 MiddleName463",LastName463 +464,464,"FirstName464 MiddleName464",LastName464 +465,465,"FirstName465 MiddleName465",LastName465 +466,466,"FirstName466 MiddleName466",LastName466 +467,467,"FirstName467 MiddleName467",LastName467 +468,468,"FirstName468 MiddleName468",LastName468 +469,469,"FirstName469 MiddleName469",LastName469 +470,470,"FirstName470 MiddleName470",LastName470 +471,471,"FirstName471 MiddleName471",LastName471 +472,472,"FirstName472 MiddleName472",LastName472 +473,473,"FirstName473 MiddleName473",LastName473 +474,474,"FirstName474 MiddleName474",LastName474 +475,475,"FirstName475 MiddleName475",LastName475 +476,476,"FirstName476 MiddleName476",LastName476 +477,477,"FirstName477 MiddleName477",LastName477 +478,478,"FirstName478 MiddleName478",LastName478 +479,479,"FirstName479 MiddleName479",LastName479 +480,480,"FirstName480 MiddleName480",LastName480 +481,481,"FirstName481 
MiddleName481",LastName481 +482,482,"FirstName482 MiddleName482",LastName482 +483,483,"FirstName483 MiddleName483",LastName483 +484,484,"FirstName484 MiddleName484",LastName484 +485,485,"FirstName485 MiddleName485",LastName485 +486,486,"FirstName486 MiddleName486",LastName486 +487,487,"FirstName487 MiddleName487",LastName487 +488,488,"FirstName488 MiddleName488",LastName488 +489,489,"FirstName489 MiddleName489",LastName489 +490,490,"FirstName490 MiddleName490",LastName490 +491,491,"FirstName491 MiddleName491",LastName491 +492,492,"FirstName492 MiddleName492",LastName492 +493,493,"FirstName493 MiddleName493",LastName493 +494,494,"FirstName494 MiddleName494",LastName494 +495,495,"FirstName495 MiddleName495",LastName495 +496,496,"FirstName496 MiddleName496",LastName496 +497,497,"FirstName497 MiddleName497",LastName497 +498,498,"FirstName498 MiddleName498",LastName498 +499,499,"FirstName499 MiddleName499",LastName499 +500,500,"FirstName500 MiddleName500",LastName500 +501,501,"FirstName501 MiddleName501",LastName501 +502,502,"FirstName502 MiddleName502",LastName502 +503,503,"FirstName503 MiddleName503",LastName503 +504,504,"FirstName504 MiddleName504",LastName504 +505,505,"FirstName505 MiddleName505",LastName505 +506,506,"FirstName506 MiddleName506",LastName506 +507,507,"FirstName507 MiddleName507",LastName507 +508,508,"FirstName508 MiddleName508",LastName508 +509,509,"FirstName509 MiddleName509",LastName509 +510,510,"FirstName510 MiddleName510",LastName510 +511,511,"FirstName511 MiddleName511",LastName511 +512,512,"FirstName512 MiddleName512",LastName512 +513,513,"FirstName513 MiddleName513",LastName513 +514,514,"FirstName514 MiddleName514",LastName514 +515,515,"FirstName515 MiddleName515",LastName515 +516,516,"FirstName516 MiddleName516",LastName516 +517,517,"FirstName517 MiddleName517",LastName517 +518,518,"FirstName518 MiddleName518",LastName518 +519,519,"FirstName519 MiddleName519",LastName519 +520,520,"FirstName520 MiddleName520",LastName520 +521,521,"FirstName521 
MiddleName521",LastName521 +522,522,"FirstName522 MiddleName522",LastName522 +523,523,"FirstName523 MiddleName523",LastName523 +524,524,"FirstName524 MiddleName524",LastName524 +525,525,"FirstName525 MiddleName525",LastName525 +526,526,"FirstName526 MiddleName526",LastName526 +527,527,"FirstName527 MiddleName527",LastName527 +528,528,"FirstName528 MiddleName528",LastName528 +529,529,"FirstName529 MiddleName529",LastName529 +530,530,"FirstName530 MiddleName530",LastName530 +531,531,"FirstName531 MiddleName531",LastName531 +532,532,"FirstName532 MiddleName532",LastName532 +533,533,"FirstName533 MiddleName533",LastName533 +534,534,"FirstName534 MiddleName534",LastName534 +535,535,"FirstName535 MiddleName535",LastName535 +536,536,"FirstName536 MiddleName536",LastName536 +537,537,"FirstName537 MiddleName537",LastName537 +538,538,"FirstName538 MiddleName538",LastName538 +539,539,"FirstName539 MiddleName539",LastName539 +540,540,"FirstName540 MiddleName540",LastName540 +541,541,"FirstName541 MiddleName541",LastName541 +542,542,"FirstName542 MiddleName542",LastName542 +543,543,"FirstName543 MiddleName543",LastName543 +544,544,"FirstName544 MiddleName544",LastName544 +545,545,"FirstName545 MiddleName545",LastName545 +546,546,"FirstName546 MiddleName546",LastName546 +547,547,"FirstName547 MiddleName547",LastName547 +548,548,"FirstName548 MiddleName548",LastName548 +549,549,"FirstName549 MiddleName549",LastName549 +550,550,"FirstName550 MiddleName550",LastName550 +551,551,"FirstName551 MiddleName551",LastName551 +552,552,"FirstName552 MiddleName552",LastName552 +553,553,"FirstName553 MiddleName553",LastName553 +554,554,"FirstName554 MiddleName554",LastName554 +555,555,"FirstName555 MiddleName555",LastName555 +556,556,"FirstName556 MiddleName556",LastName556 +557,557,"FirstName557 MiddleName557",LastName557 +558,558,"FirstName558 MiddleName558",LastName558 +559,559,"FirstName559 MiddleName559",LastName559 +560,560,"FirstName560 MiddleName560",LastName560 +561,561,"FirstName561 
MiddleName561",LastName561 +562,562,"FirstName562 MiddleName562",LastName562 +563,563,"FirstName563 MiddleName563",LastName563 +564,564,"FirstName564 MiddleName564",LastName564 +565,565,"FirstName565 MiddleName565",LastName565 +566,566,"FirstName566 MiddleName566",LastName566 +567,567,"FirstName567 MiddleName567",LastName567 +568,568,"FirstName568 MiddleName568",LastName568 +569,569,"FirstName569 MiddleName569",LastName569 +570,570,"FirstName570 MiddleName570",LastName570 +571,571,"FirstName571 MiddleName571",LastName571 +572,572,"FirstName572 MiddleName572",LastName572 +573,573,"FirstName573 MiddleName573",LastName573 +574,574,"FirstName574 MiddleName574",LastName574 +575,575,"FirstName575 MiddleName575",LastName575 +576,576,"FirstName576 MiddleName576",LastName576 +577,577,"FirstName577 MiddleName577",LastName577 +578,578,"FirstName578 MiddleName578",LastName578 +579,579,"FirstName579 MiddleName579",LastName579 +580,580,"FirstName580 MiddleName580",LastName580 +581,581,"FirstName581 MiddleName581",LastName581 +582,582,"FirstName582 MiddleName582",LastName582 +583,583,"FirstName583 MiddleName583",LastName583 +584,584,"FirstName584 MiddleName584",LastName584 +585,585,"FirstName585 MiddleName585",LastName585 +586,586,"FirstName586 MiddleName586",LastName586 +587,587,"FirstName587 MiddleName587",LastName587 +588,588,"FirstName588 MiddleName588",LastName588 +589,589,"FirstName589 MiddleName589",LastName589 +590,590,"FirstName590 MiddleName590",LastName590 +591,591,"FirstName591 MiddleName591",LastName591 +592,592,"FirstName592 MiddleName592",LastName592 +593,593,"FirstName593 MiddleName593",LastName593 +594,594,"FirstName594 MiddleName594",LastName594 +595,595,"FirstName595 MiddleName595",LastName595 +596,596,"FirstName596 MiddleName596",LastName596 +597,597,"FirstName597 MiddleName597",LastName597 +598,598,"FirstName598 MiddleName598",LastName598 +599,599,"FirstName599 MiddleName599",LastName599 +600,600,"FirstName600 MiddleName600",LastName600 +601,601,"FirstName601 
MiddleName601",LastName601 +602,602,"FirstName602 MiddleName602",LastName602 +603,603,"FirstName603 MiddleName603",LastName603 +604,604,"FirstName604 MiddleName604",LastName604 +605,605,"FirstName605 MiddleName605",LastName605 +606,606,"FirstName606 MiddleName606",LastName606 +607,607,"FirstName607 MiddleName607",LastName607 +608,608,"FirstName608 MiddleName608",LastName608 +609,609,"FirstName609 MiddleName609",LastName609 +610,610,"FirstName610 MiddleName610",LastName610 +611,611,"FirstName611 MiddleName611",LastName611 +612,612,"FirstName612 MiddleName612",LastName612 +613,613,"FirstName613 MiddleName613",LastName613 +614,614,"FirstName614 MiddleName614",LastName614 +615,615,"FirstName615 MiddleName615",LastName615 +616,616,"FirstName616 MiddleName616",LastName616 +617,617,"FirstName617 MiddleName617",LastName617 +618,618,"FirstName618 MiddleName618",LastName618 +619,619,"FirstName619 MiddleName619",LastName619 +620,620,"FirstName620 MiddleName620",LastName620 +621,621,"FirstName621 MiddleName621",LastName621 +622,622,"FirstName622 MiddleName622",LastName622 +623,623,"FirstName623 MiddleName623",LastName623 +624,624,"FirstName624 MiddleName624",LastName624 +625,625,"FirstName625 MiddleName625",LastName625 +626,626,"FirstName626 MiddleName626",LastName626 +627,627,"FirstName627 MiddleName627",LastName627 +628,628,"FirstName628 MiddleName628",LastName628 +629,629,"FirstName629 MiddleName629",LastName629 +630,630,"FirstName630 MiddleName630",LastName630 +631,631,"FirstName631 MiddleName631",LastName631 +632,632,"FirstName632 MiddleName632",LastName632 +633,633,"FirstName633 MiddleName633",LastName633 +634,634,"FirstName634 MiddleName634",LastName634 +635,635,"FirstName635 MiddleName635",LastName635 +636,636,"FirstName636 MiddleName636",LastName636 +637,637,"FirstName637 MiddleName637",LastName637 +638,638,"FirstName638 MiddleName638",LastName638 +639,639,"FirstName639 MiddleName639",LastName639 +640,640,"FirstName640 MiddleName640",LastName640 +641,641,"FirstName641 
MiddleName641",LastName641 +642,642,"FirstName642 MiddleName642",LastName642 +643,643,"FirstName643 MiddleName643",LastName643 +644,644,"FirstName644 MiddleName644",LastName644 +645,645,"FirstName645 MiddleName645",LastName645 +646,646,"FirstName646 MiddleName646",LastName646 +647,647,"FirstName647 MiddleName647",LastName647 +648,648,"FirstName648 MiddleName648",LastName648 +649,649,"FirstName649 MiddleName649",LastName649 +650,650,"FirstName650 MiddleName650",LastName650 +651,651,"FirstName651 MiddleName651",LastName651 +652,652,"FirstName652 MiddleName652",LastName652 +653,653,"FirstName653 MiddleName653",LastName653 +654,654,"FirstName654 MiddleName654",LastName654 +655,655,"FirstName655 MiddleName655",LastName655 +656,656,"FirstName656 MiddleName656",LastName656 +657,657,"FirstName657 MiddleName657",LastName657 +658,658,"FirstName658 MiddleName658",LastName658 +659,659,"FirstName659 MiddleName659",LastName659 +660,660,"FirstName660 MiddleName660",LastName660 +661,661,"FirstName661 MiddleName661",LastName661 +662,662,"FirstName662 MiddleName662",LastName662 +663,663,"FirstName663 MiddleName663",LastName663 +664,664,"FirstName664 MiddleName664",LastName664 +665,665,"FirstName665 MiddleName665",LastName665 +666,666,"FirstName666 MiddleName666",LastName666 +667,667,"FirstName667 MiddleName667",LastName667 +668,668,"FirstName668 MiddleName668",LastName668 +669,669,"FirstName669 MiddleName669",LastName669 +670,670,"FirstName670 MiddleName670",LastName670 +671,671,"FirstName671 MiddleName671",LastName671 +672,672,"FirstName672 MiddleName672",LastName672 +673,673,"FirstName673 MiddleName673",LastName673 +674,674,"FirstName674 MiddleName674",LastName674 +675,675,"FirstName675 MiddleName675",LastName675 +676,676,"FirstName676 MiddleName676",LastName676 +677,677,"FirstName677 MiddleName677",LastName677 +678,678,"FirstName678 MiddleName678",LastName678 +679,679,"FirstName679 MiddleName679",LastName679 +680,680,"FirstName680 MiddleName680",LastName680 +681,681,"FirstName681 
MiddleName681",LastName681 +682,682,"FirstName682 MiddleName682",LastName682 +683,683,"FirstName683 MiddleName683",LastName683 +684,684,"FirstName684 MiddleName684",LastName684 +685,685,"FirstName685 MiddleName685",LastName685 +686,686,"FirstName686 MiddleName686",LastName686 +687,687,"FirstName687 MiddleName687",LastName687 +688,688,"FirstName688 MiddleName688",LastName688 +689,689,"FirstName689 MiddleName689",LastName689 +690,690,"FirstName690 MiddleName690",LastName690 +691,691,"FirstName691 MiddleName691",LastName691 +692,692,"FirstName692 MiddleName692",LastName692 +693,693,"FirstName693 MiddleName693",LastName693 +694,694,"FirstName694 MiddleName694",LastName694 +695,695,"FirstName695 MiddleName695",LastName695 +696,696,"FirstName696 MiddleName696",LastName696 +697,697,"FirstName697 MiddleName697",LastName697 +698,698,"FirstName698 MiddleName698",LastName698 +699,699,"FirstName699 MiddleName699",LastName699 +700,700,"FirstName700 MiddleName700",LastName700 +701,701,"FirstName701 MiddleName701",LastName701 +702,702,"FirstName702 MiddleName702",LastName702 +703,703,"FirstName703 MiddleName703",LastName703 +704,704,"FirstName704 MiddleName704",LastName704 +705,705,"FirstName705 MiddleName705",LastName705 +706,706,"FirstName706 MiddleName706",LastName706 +707,707,"FirstName707 MiddleName707",LastName707 +708,708,"FirstName708 MiddleName708",LastName708 +709,709,"FirstName709 MiddleName709",LastName709 +710,710,"FirstName710 MiddleName710",LastName710 +711,711,"FirstName711 MiddleName711",LastName711 +712,712,"FirstName712 MiddleName712",LastName712 +713,713,"FirstName713 MiddleName713",LastName713 +714,714,"FirstName714 MiddleName714",LastName714 +715,715,"FirstName715 MiddleName715",LastName715 +716,716,"FirstName716 MiddleName716",LastName716 +717,717,"FirstName717 MiddleName717",LastName717 +718,718,"FirstName718 MiddleName718",LastName718 +719,719,"FirstName719 MiddleName719",LastName719 +720,720,"FirstName720 MiddleName720",LastName720 +721,721,"FirstName721 
MiddleName721",LastName721 +722,722,"FirstName722 MiddleName722",LastName722 +723,723,"FirstName723 MiddleName723",LastName723 +724,724,"FirstName724 MiddleName724",LastName724 +725,725,"FirstName725 MiddleName725",LastName725 +726,726,"FirstName726 MiddleName726",LastName726 +727,727,"FirstName727 MiddleName727",LastName727 +728,728,"FirstName728 MiddleName728",LastName728 +729,729,"FirstName729 MiddleName729",LastName729 +730,730,"FirstName730 MiddleName730",LastName730 +731,731,"FirstName731 MiddleName731",LastName731 +732,732,"FirstName732 MiddleName732",LastName732 +733,733,"FirstName733 MiddleName733",LastName733 +734,734,"FirstName734 MiddleName734",LastName734 +735,735,"FirstName735 MiddleName735",LastName735 +736,736,"FirstName736 MiddleName736",LastName736 +737,737,"FirstName737 MiddleName737",LastName737 +738,738,"FirstName738 MiddleName738",LastName738 +739,739,"FirstName739 MiddleName739",LastName739 +740,740,"FirstName740 MiddleName740",LastName740 +741,741,"FirstName741 MiddleName741",LastName741 +742,742,"FirstName742 MiddleName742",LastName742 +743,743,"FirstName743 MiddleName743",LastName743 +744,744,"FirstName744 MiddleName744",LastName744 +745,745,"FirstName745 MiddleName745",LastName745 +746,746,"FirstName746 MiddleName746",LastName746 +747,747,"FirstName747 MiddleName747",LastName747 +748,748,"FirstName748 MiddleName748",LastName748 +749,749,"FirstName749 MiddleName749",LastName749 +750,750,"FirstName750 MiddleName750",LastName750 +751,751,"FirstName751 MiddleName751",LastName751 +752,752,"FirstName752 MiddleName752",LastName752 +753,753,"FirstName753 MiddleName753",LastName753 +754,754,"FirstName754 MiddleName754",LastName754 +755,755,"FirstName755 MiddleName755",LastName755 +756,756,"FirstName756 MiddleName756",LastName756 +757,757,"FirstName757 MiddleName757",LastName757 +758,758,"FirstName758 MiddleName758",LastName758 +759,759,"FirstName759 MiddleName759",LastName759 +760,760,"FirstName760 MiddleName760",LastName760 +761,761,"FirstName761 
MiddleName761",LastName761 +762,762,"FirstName762 MiddleName762",LastName762 +763,763,"FirstName763 MiddleName763",LastName763 +764,764,"FirstName764 MiddleName764",LastName764 +765,765,"FirstName765 MiddleName765",LastName765 +766,766,"FirstName766 MiddleName766",LastName766 +767,767,"FirstName767 MiddleName767",LastName767 +768,768,"FirstName768 MiddleName768",LastName768 +769,769,"FirstName769 MiddleName769",LastName769 +770,770,"FirstName770 MiddleName770",LastName770 +771,771,"FirstName771 MiddleName771",LastName771 +772,772,"FirstName772 MiddleName772",LastName772 +773,773,"FirstName773 MiddleName773",LastName773 +774,774,"FirstName774 MiddleName774",LastName774 +775,775,"FirstName775 MiddleName775",LastName775 +776,776,"FirstName776 MiddleName776",LastName776 +777,777,"FirstName777 MiddleName777",LastName777 +778,778,"FirstName778 MiddleName778",LastName778 +779,779,"FirstName779 MiddleName779",LastName779 +780,780,"FirstName780 MiddleName780",LastName780 +781,781,"FirstName781 MiddleName781",LastName781 +782,782,"FirstName782 MiddleName782",LastName782 +783,783,"FirstName783 MiddleName783",LastName783 +784,784,"FirstName784 MiddleName784",LastName784 +785,785,"FirstName785 MiddleName785",LastName785 +786,786,"FirstName786 MiddleName786",LastName786 +787,787,"FirstName787 MiddleName787",LastName787 +788,788,"FirstName788 MiddleName788",LastName788 +789,789,"FirstName789 MiddleName789",LastName789 +790,790,"FirstName790 MiddleName790",LastName790 +791,791,"FirstName791 MiddleName791",LastName791 +792,792,"FirstName792 MiddleName792",LastName792 +793,793,"FirstName793 MiddleName793",LastName793 +794,794,"FirstName794 MiddleName794",LastName794 +795,795,"FirstName795 MiddleName795",LastName795 +796,796,"FirstName796 MiddleName796",LastName796 +797,797,"FirstName797 MiddleName797",LastName797 +798,798,"FirstName798 MiddleName798",LastName798 +799,799,"FirstName799 MiddleName799",LastName799 +800,800,"FirstName800 MiddleName800",LastName800 +801,801,"FirstName801 
MiddleName801",LastName801 +802,802,"FirstName802 MiddleName802",LastName802 +803,803,"FirstName803 MiddleName803",LastName803 +804,804,"FirstName804 MiddleName804",LastName804 +805,805,"FirstName805 MiddleName805",LastName805 +806,806,"FirstName806 MiddleName806",LastName806 +807,807,"FirstName807 MiddleName807",LastName807 +808,808,"FirstName808 MiddleName808",LastName808 +809,809,"FirstName809 MiddleName809",LastName809 +810,810,"FirstName810 MiddleName810",LastName810 +811,811,"FirstName811 MiddleName811",LastName811 +812,812,"FirstName812 MiddleName812",LastName812 +813,813,"FirstName813 MiddleName813",LastName813 +814,814,"FirstName814 MiddleName814",LastName814 +815,815,"FirstName815 MiddleName815",LastName815 +816,816,"FirstName816 MiddleName816",LastName816 +817,817,"FirstName817 MiddleName817",LastName817 +818,818,"FirstName818 MiddleName818",LastName818 +819,819,"FirstName819 MiddleName819",LastName819 +820,820,"FirstName820 MiddleName820",LastName820 +821,821,"FirstName821 MiddleName821",LastName821 +822,822,"FirstName822 MiddleName822",LastName822 +823,823,"FirstName823 MiddleName823",LastName823 +824,824,"FirstName824 MiddleName824",LastName824 +825,825,"FirstName825 MiddleName825",LastName825 +826,826,"FirstName826 MiddleName826",LastName826 +827,827,"FirstName827 MiddleName827",LastName827 +828,828,"FirstName828 MiddleName828",LastName828 +829,829,"FirstName829 MiddleName829",LastName829 +830,830,"FirstName830 MiddleName830",LastName830 +831,831,"FirstName831 MiddleName831",LastName831 +832,832,"FirstName832 MiddleName832",LastName832 +833,833,"FirstName833 MiddleName833",LastName833 +834,834,"FirstName834 MiddleName834",LastName834 +835,835,"FirstName835 MiddleName835",LastName835 +836,836,"FirstName836 MiddleName836",LastName836 +837,837,"FirstName837 MiddleName837",LastName837 +838,838,"FirstName838 MiddleName838",LastName838 +839,839,"FirstName839 MiddleName839",LastName839 +840,840,"FirstName840 MiddleName840",LastName840 +841,841,"FirstName841 
MiddleName841",LastName841
+842,842,"FirstName842 MiddleName842",LastName842
+843,843,"FirstName843 MiddleName843",LastName843
[... generated CSV fixture rows 844 through 1574 continue in the same pattern: +N,N,"FirstNameN MiddleNameN",LastNameN ...]
+1575,1575,"FirstName1575 MiddleName1575",LastName1575
+1576,1576,"FirstName1576 MiddleName1576",LastName1576
+1577,1577,"FirstName1577 MiddleName1577",LastName1577 +1578,1578,"FirstName1578 MiddleName1578",LastName1578 +1579,1579,"FirstName1579 MiddleName1579",LastName1579 +1580,1580,"FirstName1580 MiddleName1580",LastName1580 +1581,1581,"FirstName1581 MiddleName1581",LastName1581 +1582,1582,"FirstName1582 MiddleName1582",LastName1582 +1583,1583,"FirstName1583 MiddleName1583",LastName1583 +1584,1584,"FirstName1584 MiddleName1584",LastName1584 +1585,1585,"FirstName1585 MiddleName1585",LastName1585 +1586,1586,"FirstName1586 MiddleName1586",LastName1586 +1587,1587,"FirstName1587 MiddleName1587",LastName1587 +1588,1588,"FirstName1588 MiddleName1588",LastName1588 +1589,1589,"FirstName1589 MiddleName1589",LastName1589 +1590,1590,"FirstName1590 MiddleName1590",LastName1590 +1591,1591,"FirstName1591 MiddleName1591",LastName1591 +1592,1592,"FirstName1592 MiddleName1592",LastName1592 +1593,1593,"FirstName1593 MiddleName1593",LastName1593 +1594,1594,"FirstName1594 MiddleName1594",LastName1594 +1595,1595,"FirstName1595 MiddleName1595",LastName1595 +1596,1596,"FirstName1596 MiddleName1596",LastName1596 +1597,1597,"FirstName1597 MiddleName1597",LastName1597 +1598,1598,"FirstName1598 MiddleName1598",LastName1598 +1599,1599,"FirstName1599 MiddleName1599",LastName1599 +1600,1600,"FirstName1600 MiddleName1600",LastName1600 +1601,1601,"FirstName1601 MiddleName1601",LastName1601 +1602,1602,"FirstName1602 MiddleName1602",LastName1602 +1603,1603,"FirstName1603 MiddleName1603",LastName1603 +1604,1604,"FirstName1604 MiddleName1604",LastName1604 +1605,1605,"FirstName1605 MiddleName1605",LastName1605 +1606,1606,"FirstName1606 MiddleName1606",LastName1606 +1607,1607,"FirstName1607 MiddleName1607",LastName1607 +1608,1608,"FirstName1608 MiddleName1608",LastName1608 +1609,1609,"FirstName1609 MiddleName1609",LastName1609 +1610,1610,"FirstName1610 MiddleName1610",LastName1610 +1611,1611,"FirstName1611 MiddleName1611",LastName1611 +1612,1612,"FirstName1612 MiddleName1612",LastName1612 
+1613,1613,"FirstName1613 MiddleName1613",LastName1613 +1614,1614,"FirstName1614 MiddleName1614",LastName1614 +1615,1615,"FirstName1615 MiddleName1615",LastName1615 +1616,1616,"FirstName1616 MiddleName1616",LastName1616 +1617,1617,"FirstName1617 MiddleName1617",LastName1617 +1618,1618,"FirstName1618 MiddleName1618",LastName1618 +1619,1619,"FirstName1619 MiddleName1619",LastName1619 +1620,1620,"FirstName1620 MiddleName1620",LastName1620 +1621,1621,"FirstName1621 MiddleName1621",LastName1621 +1622,1622,"FirstName1622 MiddleName1622",LastName1622 +1623,1623,"FirstName1623 MiddleName1623",LastName1623 +1624,1624,"FirstName1624 MiddleName1624",LastName1624 +1625,1625,"FirstName1625 MiddleName1625",LastName1625 +1626,1626,"FirstName1626 MiddleName1626",LastName1626 +1627,1627,"FirstName1627 MiddleName1627",LastName1627 +1628,1628,"FirstName1628 MiddleName1628",LastName1628 +1629,1629,"FirstName1629 MiddleName1629",LastName1629 +1630,1630,"FirstName1630 MiddleName1630",LastName1630 +1631,1631,"FirstName1631 MiddleName1631",LastName1631 +1632,1632,"FirstName1632 MiddleName1632",LastName1632 +1633,1633,"FirstName1633 MiddleName1633",LastName1633 +1634,1634,"FirstName1634 MiddleName1634",LastName1634 +1635,1635,"FirstName1635 MiddleName1635",LastName1635 +1636,1636,"FirstName1636 MiddleName1636",LastName1636 +1637,1637,"FirstName1637 MiddleName1637",LastName1637 +1638,1638,"FirstName1638 MiddleName1638",LastName1638 +1639,1639,"FirstName1639 MiddleName1639",LastName1639 +1640,1640,"FirstName1640 MiddleName1640",LastName1640 +1641,1641,"FirstName1641 MiddleName1641",LastName1641 +1642,1642,"FirstName1642 MiddleName1642",LastName1642 +1643,1643,"FirstName1643 MiddleName1643",LastName1643 +1644,1644,"FirstName1644 MiddleName1644",LastName1644 +1645,1645,"FirstName1645 MiddleName1645",LastName1645 +1646,1646,"FirstName1646 MiddleName1646",LastName1646 +1647,1647,"FirstName1647 MiddleName1647",LastName1647 +1648,1648,"FirstName1648 MiddleName1648",LastName1648 
+1649,1649,"FirstName1649 MiddleName1649",LastName1649 +1650,1650,"FirstName1650 MiddleName1650",LastName1650 +1651,1651,"FirstName1651 MiddleName1651",LastName1651 +1652,1652,"FirstName1652 MiddleName1652",LastName1652 +1653,1653,"FirstName1653 MiddleName1653",LastName1653 +1654,1654,"FirstName1654 MiddleName1654",LastName1654 +1655,1655,"FirstName1655 MiddleName1655",LastName1655 +1656,1656,"FirstName1656 MiddleName1656",LastName1656 +1657,1657,"FirstName1657 MiddleName1657",LastName1657 +1658,1658,"FirstName1658 MiddleName1658",LastName1658 +1659,1659,"FirstName1659 MiddleName1659",LastName1659 +1660,1660,"FirstName1660 MiddleName1660",LastName1660 +1661,1661,"FirstName1661 MiddleName1661",LastName1661 +1662,1662,"FirstName1662 MiddleName1662",LastName1662 +1663,1663,"FirstName1663 MiddleName1663",LastName1663 +1664,1664,"FirstName1664 MiddleName1664",LastName1664 +1665,1665,"FirstName1665 MiddleName1665",LastName1665 +1666,1666,"FirstName1666 MiddleName1666",LastName1666 +1667,1667,"FirstName1667 MiddleName1667",LastName1667 +1668,1668,"FirstName1668 MiddleName1668",LastName1668 +1669,1669,"FirstName1669 MiddleName1669",LastName1669 +1670,1670,"FirstName1670 MiddleName1670",LastName1670 +1671,1671,"FirstName1671 MiddleName1671",LastName1671 +1672,1672,"FirstName1672 MiddleName1672",LastName1672 +1673,1673,"FirstName1673 MiddleName1673",LastName1673 +1674,1674,"FirstName1674 MiddleName1674",LastName1674 +1675,1675,"FirstName1675 MiddleName1675",LastName1675 +1676,1676,"FirstName1676 MiddleName1676",LastName1676 +1677,1677,"FirstName1677 MiddleName1677",LastName1677 +1678,1678,"FirstName1678 MiddleName1678",LastName1678 +1679,1679,"FirstName1679 MiddleName1679",LastName1679 +1680,1680,"FirstName1680 MiddleName1680",LastName1680 +1681,1681,"FirstName1681 MiddleName1681",LastName1681 +1682,1682,"FirstName1682 MiddleName1682",LastName1682 +1683,1683,"FirstName1683 MiddleName1683",LastName1683 +1684,1684,"FirstName1684 MiddleName1684",LastName1684 
+1685,1685,"FirstName1685 MiddleName1685",LastName1685 +1686,1686,"FirstName1686 MiddleName1686",LastName1686 +1687,1687,"FirstName1687 MiddleName1687",LastName1687 +1688,1688,"FirstName1688 MiddleName1688",LastName1688 +1689,1689,"FirstName1689 MiddleName1689",LastName1689 +1690,1690,"FirstName1690 MiddleName1690",LastName1690 +1691,1691,"FirstName1691 MiddleName1691",LastName1691 +1692,1692,"FirstName1692 MiddleName1692",LastName1692 +1693,1693,"FirstName1693 MiddleName1693",LastName1693 +1694,1694,"FirstName1694 MiddleName1694",LastName1694 +1695,1695,"FirstName1695 MiddleName1695",LastName1695 +1696,1696,"FirstName1696 MiddleName1696",LastName1696 +1697,1697,"FirstName1697 MiddleName1697",LastName1697 +1698,1698,"FirstName1698 MiddleName1698",LastName1698 +1699,1699,"FirstName1699 MiddleName1699",LastName1699 +1700,1700,"FirstName1700 MiddleName1700",LastName1700 +1701,1701,"FirstName1701 MiddleName1701",LastName1701 +1702,1702,"FirstName1702 MiddleName1702",LastName1702 +1703,1703,"FirstName1703 MiddleName1703",LastName1703 +1704,1704,"FirstName1704 MiddleName1704",LastName1704 +1705,1705,"FirstName1705 MiddleName1705",LastName1705 +1706,1706,"FirstName1706 MiddleName1706",LastName1706 +1707,1707,"FirstName1707 MiddleName1707",LastName1707 +1708,1708,"FirstName1708 MiddleName1708",LastName1708 +1709,1709,"FirstName1709 MiddleName1709",LastName1709 +1710,1710,"FirstName1710 MiddleName1710",LastName1710 +1711,1711,"FirstName1711 MiddleName1711",LastName1711 +1712,1712,"FirstName1712 MiddleName1712",LastName1712 +1713,1713,"FirstName1713 MiddleName1713",LastName1713 +1714,1714,"FirstName1714 MiddleName1714",LastName1714 +1715,1715,"FirstName1715 MiddleName1715",LastName1715 +1716,1716,"FirstName1716 MiddleName1716",LastName1716 +1717,1717,"FirstName1717 MiddleName1717",LastName1717 +1718,1718,"FirstName1718 MiddleName1718",LastName1718 +1719,1719,"FirstName1719 MiddleName1719",LastName1719 +1720,1720,"FirstName1720 MiddleName1720",LastName1720 
+1721,1721,"FirstName1721 MiddleName1721",LastName1721 +1722,1722,"FirstName1722 MiddleName1722",LastName1722 +1723,1723,"FirstName1723 MiddleName1723",LastName1723 +1724,1724,"FirstName1724 MiddleName1724",LastName1724 +1725,1725,"FirstName1725 MiddleName1725",LastName1725 +1726,1726,"FirstName1726 MiddleName1726",LastName1726 +1727,1727,"FirstName1727 MiddleName1727",LastName1727 +1728,1728,"FirstName1728 MiddleName1728",LastName1728 +1729,1729,"FirstName1729 MiddleName1729",LastName1729 +1730,1730,"FirstName1730 MiddleName1730",LastName1730 +1731,1731,"FirstName1731 MiddleName1731",LastName1731 +1732,1732,"FirstName1732 MiddleName1732",LastName1732 +1733,1733,"FirstName1733 MiddleName1733",LastName1733 +1734,1734,"FirstName1734 MiddleName1734",LastName1734 +1735,1735,"FirstName1735 MiddleName1735",LastName1735 +1736,1736,"FirstName1736 MiddleName1736",LastName1736 +1737,1737,"FirstName1737 MiddleName1737",LastName1737 +1738,1738,"FirstName1738 MiddleName1738",LastName1738 +1739,1739,"FirstName1739 MiddleName1739",LastName1739 +1740,1740,"FirstName1740 MiddleName1740",LastName1740 +1741,1741,"FirstName1741 MiddleName1741",LastName1741 +1742,1742,"FirstName1742 MiddleName1742",LastName1742 +1743,1743,"FirstName1743 MiddleName1743",LastName1743 +1744,1744,"FirstName1744 MiddleName1744",LastName1744 +1745,1745,"FirstName1745 MiddleName1745",LastName1745 +1746,1746,"FirstName1746 MiddleName1746",LastName1746 +1747,1747,"FirstName1747 MiddleName1747",LastName1747 +1748,1748,"FirstName1748 MiddleName1748",LastName1748 +1749,1749,"FirstName1749 MiddleName1749",LastName1749 +1750,1750,"FirstName1750 MiddleName1750",LastName1750 +1751,1751,"FirstName1751 MiddleName1751",LastName1751 +1752,1752,"FirstName1752 MiddleName1752",LastName1752 +1753,1753,"FirstName1753 MiddleName1753",LastName1753 +1754,1754,"FirstName1754 MiddleName1754",LastName1754 +1755,1755,"FirstName1755 MiddleName1755",LastName1755 +1756,1756,"FirstName1756 MiddleName1756",LastName1756 
+1757,1757,"FirstName1757 MiddleName1757",LastName1757 +1758,1758,"FirstName1758 MiddleName1758",LastName1758 +1759,1759,"FirstName1759 MiddleName1759",LastName1759 +1760,1760,"FirstName1760 MiddleName1760",LastName1760 +1761,1761,"FirstName1761 MiddleName1761",LastName1761 +1762,1762,"FirstName1762 MiddleName1762",LastName1762 +1763,1763,"FirstName1763 MiddleName1763",LastName1763 +1764,1764,"FirstName1764 MiddleName1764",LastName1764 +1765,1765,"FirstName1765 MiddleName1765",LastName1765 +1766,1766,"FirstName1766 MiddleName1766",LastName1766 +1767,1767,"FirstName1767 MiddleName1767",LastName1767 +1768,1768,"FirstName1768 MiddleName1768",LastName1768 +1769,1769,"FirstName1769 MiddleName1769",LastName1769 +1770,1770,"FirstName1770 MiddleName1770",LastName1770 +1771,1771,"FirstName1771 MiddleName1771",LastName1771 +1772,1772,"FirstName1772 MiddleName1772",LastName1772 +1773,1773,"FirstName1773 MiddleName1773",LastName1773 +1774,1774,"FirstName1774 MiddleName1774",LastName1774 +1775,1775,"FirstName1775 MiddleName1775",LastName1775 +1776,1776,"FirstName1776 MiddleName1776",LastName1776 +1777,1777,"FirstName1777 MiddleName1777",LastName1777 +1778,1778,"FirstName1778 MiddleName1778",LastName1778 +1779,1779,"FirstName1779 MiddleName1779",LastName1779 +1780,1780,"FirstName1780 MiddleName1780",LastName1780 +1781,1781,"FirstName1781 MiddleName1781",LastName1781 +1782,1782,"FirstName1782 MiddleName1782",LastName1782 +1783,1783,"FirstName1783 MiddleName1783",LastName1783 +1784,1784,"FirstName1784 MiddleName1784",LastName1784 +1785,1785,"FirstName1785 MiddleName1785",LastName1785 +1786,1786,"FirstName1786 MiddleName1786",LastName1786 +1787,1787,"FirstName1787 MiddleName1787",LastName1787 +1788,1788,"FirstName1788 MiddleName1788",LastName1788 +1789,1789,"FirstName1789 MiddleName1789",LastName1789 +1790,1790,"FirstName1790 MiddleName1790",LastName1790 +1791,1791,"FirstName1791 MiddleName1791",LastName1791 +1792,1792,"FirstName1792 MiddleName1792",LastName1792 
+1793,1793,"FirstName1793 MiddleName1793",LastName1793 +1794,1794,"FirstName1794 MiddleName1794",LastName1794 +1795,1795,"FirstName1795 MiddleName1795",LastName1795 +1796,1796,"FirstName1796 MiddleName1796",LastName1796 +1797,1797,"FirstName1797 MiddleName1797",LastName1797 +1798,1798,"FirstName1798 MiddleName1798",LastName1798 +1799,1799,"FirstName1799 MiddleName1799",LastName1799 +1800,1800,"FirstName1800 MiddleName1800",LastName1800 +1801,1801,"FirstName1801 MiddleName1801",LastName1801 +1802,1802,"FirstName1802 MiddleName1802",LastName1802 +1803,1803,"FirstName1803 MiddleName1803",LastName1803 +1804,1804,"FirstName1804 MiddleName1804",LastName1804 +1805,1805,"FirstName1805 MiddleName1805",LastName1805 +1806,1806,"FirstName1806 MiddleName1806",LastName1806 +1807,1807,"FirstName1807 MiddleName1807",LastName1807 +1808,1808,"FirstName1808 MiddleName1808",LastName1808 +1809,1809,"FirstName1809 MiddleName1809",LastName1809 +1810,1810,"FirstName1810 MiddleName1810",LastName1810 +1811,1811,"FirstName1811 MiddleName1811",LastName1811 +1812,1812,"FirstName1812 MiddleName1812",LastName1812 +1813,1813,"FirstName1813 MiddleName1813",LastName1813 +1814,1814,"FirstName1814 MiddleName1814",LastName1814 +1815,1815,"FirstName1815 MiddleName1815",LastName1815 +1816,1816,"FirstName1816 MiddleName1816",LastName1816 +1817,1817,"FirstName1817 MiddleName1817",LastName1817 +1818,1818,"FirstName1818 MiddleName1818",LastName1818 +1819,1819,"FirstName1819 MiddleName1819",LastName1819 +1820,1820,"FirstName1820 MiddleName1820",LastName1820 +1821,1821,"FirstName1821 MiddleName1821",LastName1821 +1822,1822,"FirstName1822 MiddleName1822",LastName1822 +1823,1823,"FirstName1823 MiddleName1823",LastName1823 +1824,1824,"FirstName1824 MiddleName1824",LastName1824 +1825,1825,"FirstName1825 MiddleName1825",LastName1825 +1826,1826,"FirstName1826 MiddleName1826",LastName1826 +1827,1827,"FirstName1827 MiddleName1827",LastName1827 +1828,1828,"FirstName1828 MiddleName1828",LastName1828 
+1829,1829,"FirstName1829 MiddleName1829",LastName1829 +1830,1830,"FirstName1830 MiddleName1830",LastName1830 +1831,1831,"FirstName1831 MiddleName1831",LastName1831 +1832,1832,"FirstName1832 MiddleName1832",LastName1832 +1833,1833,"FirstName1833 MiddleName1833",LastName1833 +1834,1834,"FirstName1834 MiddleName1834",LastName1834 +1835,1835,"FirstName1835 MiddleName1835",LastName1835 +1836,1836,"FirstName1836 MiddleName1836",LastName1836 +1837,1837,"FirstName1837 MiddleName1837",LastName1837 +1838,1838,"FirstName1838 MiddleName1838",LastName1838 +1839,1839,"FirstName1839 MiddleName1839",LastName1839 +1840,1840,"FirstName1840 MiddleName1840",LastName1840 +1841,1841,"FirstName1841 MiddleName1841",LastName1841 +1842,1842,"FirstName1842 MiddleName1842",LastName1842 +1843,1843,"FirstName1843 MiddleName1843",LastName1843 +1844,1844,"FirstName1844 MiddleName1844",LastName1844 +1845,1845,"FirstName1845 MiddleName1845",LastName1845 +1846,1846,"FirstName1846 MiddleName1846",LastName1846 +1847,1847,"FirstName1847 MiddleName1847",LastName1847 +1848,1848,"FirstName1848 MiddleName1848",LastName1848 +1849,1849,"FirstName1849 MiddleName1849",LastName1849 +1850,1850,"FirstName1850 MiddleName1850",LastName1850 +1851,1851,"FirstName1851 MiddleName1851",LastName1851 +1852,1852,"FirstName1852 MiddleName1852",LastName1852 +1853,1853,"FirstName1853 MiddleName1853",LastName1853 +1854,1854,"FirstName1854 MiddleName1854",LastName1854 +1855,1855,"FirstName1855 MiddleName1855",LastName1855 +1856,1856,"FirstName1856 MiddleName1856",LastName1856 +1857,1857,"FirstName1857 MiddleName1857",LastName1857 +1858,1858,"FirstName1858 MiddleName1858",LastName1858 +1859,1859,"FirstName1859 MiddleName1859",LastName1859 +1860,1860,"FirstName1860 MiddleName1860",LastName1860 +1861,1861,"FirstName1861 MiddleName1861",LastName1861 +1862,1862,"FirstName1862 MiddleName1862",LastName1862 +1863,1863,"FirstName1863 MiddleName1863",LastName1863 +1864,1864,"FirstName1864 MiddleName1864",LastName1864 
+1865,1865,"FirstName1865 MiddleName1865",LastName1865 +1866,1866,"FirstName1866 MiddleName1866",LastName1866 +1867,1867,"FirstName1867 MiddleName1867",LastName1867 +1868,1868,"FirstName1868 MiddleName1868",LastName1868 +1869,1869,"FirstName1869 MiddleName1869",LastName1869 +1870,1870,"FirstName1870 MiddleName1870",LastName1870 +1871,1871,"FirstName1871 MiddleName1871",LastName1871 +1872,1872,"FirstName1872 MiddleName1872",LastName1872 +1873,1873,"FirstName1873 MiddleName1873",LastName1873 +1874,1874,"FirstName1874 MiddleName1874",LastName1874 +1875,1875,"FirstName1875 MiddleName1875",LastName1875 +1876,1876,"FirstName1876 MiddleName1876",LastName1876 +1877,1877,"FirstName1877 MiddleName1877",LastName1877 +1878,1878,"FirstName1878 MiddleName1878",LastName1878 +1879,1879,"FirstName1879 MiddleName1879",LastName1879 +1880,1880,"FirstName1880 MiddleName1880",LastName1880 +1881,1881,"FirstName1881 MiddleName1881",LastName1881 +1882,1882,"FirstName1882 MiddleName1882",LastName1882 +1883,1883,"FirstName1883 MiddleName1883",LastName1883 +1884,1884,"FirstName1884 MiddleName1884",LastName1884 +1885,1885,"FirstName1885 MiddleName1885",LastName1885 +1886,1886,"FirstName1886 MiddleName1886",LastName1886 +1887,1887,"FirstName1887 MiddleName1887",LastName1887 +1888,1888,"FirstName1888 MiddleName1888",LastName1888 +1889,1889,"FirstName1889 MiddleName1889",LastName1889 +1890,1890,"FirstName1890 MiddleName1890",LastName1890 +1891,1891,"FirstName1891 MiddleName1891",LastName1891 +1892,1892,"FirstName1892 MiddleName1892",LastName1892 +1893,1893,"FirstName1893 MiddleName1893",LastName1893 +1894,1894,"FirstName1894 MiddleName1894",LastName1894 +1895,1895,"FirstName1895 MiddleName1895",LastName1895 +1896,1896,"FirstName1896 MiddleName1896",LastName1896 +1897,1897,"FirstName1897 MiddleName1897",LastName1897 +1898,1898,"FirstName1898 MiddleName1898",LastName1898 +1899,1899,"FirstName1899 MiddleName1899",LastName1899 +1900,1900,"FirstName1900 MiddleName1900",LastName1900 
+1901,1901,"FirstName1901 MiddleName1901",LastName1901 +1902,1902,"FirstName1902 MiddleName1902",LastName1902 +1903,1903,"FirstName1903 MiddleName1903",LastName1903 +1904,1904,"FirstName1904 MiddleName1904",LastName1904 +1905,1905,"FirstName1905 MiddleName1905",LastName1905 +1906,1906,"FirstName1906 MiddleName1906",LastName1906 +1907,1907,"FirstName1907 MiddleName1907",LastName1907 +1908,1908,"FirstName1908 MiddleName1908",LastName1908 +1909,1909,"FirstName1909 MiddleName1909",LastName1909 +1910,1910,"FirstName1910 MiddleName1910",LastName1910 +1911,1911,"FirstName1911 MiddleName1911",LastName1911 +1912,1912,"FirstName1912 MiddleName1912",LastName1912 +1913,1913,"FirstName1913 MiddleName1913",LastName1913 +1914,1914,"FirstName1914 MiddleName1914",LastName1914 +1915,1915,"FirstName1915 MiddleName1915",LastName1915 +1916,1916,"FirstName1916 MiddleName1916",LastName1916 +1917,1917,"FirstName1917 MiddleName1917",LastName1917 +1918,1918,"FirstName1918 MiddleName1918",LastName1918 +1919,1919,"FirstName1919 MiddleName1919",LastName1919 +1920,1920,"FirstName1920 MiddleName1920",LastName1920 +1921,1921,"FirstName1921 MiddleName1921",LastName1921 +1922,1922,"FirstName1922 MiddleName1922",LastName1922 +1923,1923,"FirstName1923 MiddleName1923",LastName1923 +1924,1924,"FirstName1924 MiddleName1924",LastName1924 +1925,1925,"FirstName1925 MiddleName1925",LastName1925 +1926,1926,"FirstName1926 MiddleName1926",LastName1926 +1927,1927,"FirstName1927 MiddleName1927",LastName1927 +1928,1928,"FirstName1928 MiddleName1928",LastName1928 +1929,1929,"FirstName1929 MiddleName1929",LastName1929 +1930,1930,"FirstName1930 MiddleName1930",LastName1930 +1931,1931,"FirstName1931 MiddleName1931",LastName1931 +1932,1932,"FirstName1932 MiddleName1932",LastName1932 +1933,1933,"FirstName1933 MiddleName1933",LastName1933 +1934,1934,"FirstName1934 MiddleName1934",LastName1934 +1935,1935,"FirstName1935 MiddleName1935",LastName1935 +1936,1936,"FirstName1936 MiddleName1936",LastName1936 
+1937,1937,"FirstName1937 MiddleName1937",LastName1937 +1938,1938,"FirstName1938 MiddleName1938",LastName1938 +1939,1939,"FirstName1939 MiddleName1939",LastName1939 +1940,1940,"FirstName1940 MiddleName1940",LastName1940 +1941,1941,"FirstName1941 MiddleName1941",LastName1941 +1942,1942,"FirstName1942 MiddleName1942",LastName1942 +1943,1943,"FirstName1943 MiddleName1943",LastName1943 +1944,1944,"FirstName1944 MiddleName1944",LastName1944 +1945,1945,"FirstName1945 MiddleName1945",LastName1945 +1946,1946,"FirstName1946 MiddleName1946",LastName1946 +1947,1947,"FirstName1947 MiddleName1947",LastName1947 +1948,1948,"FirstName1948 MiddleName1948",LastName1948 +1949,1949,"FirstName1949 MiddleName1949",LastName1949 +1950,1950,"FirstName1950 MiddleName1950",LastName1950 +1951,1951,"FirstName1951 MiddleName1951",LastName1951 +1952,1952,"FirstName1952 MiddleName1952",LastName1952 +1953,1953,"FirstName1953 MiddleName1953",LastName1953 +1954,1954,"FirstName1954 MiddleName1954",LastName1954 +1955,1955,"FirstName1955 MiddleName1955",LastName1955 +1956,1956,"FirstName1956 MiddleName1956",LastName1956 +1957,1957,"FirstName1957 MiddleName1957",LastName1957 +1958,1958,"FirstName1958 MiddleName1958",LastName1958 +1959,1959,"FirstName1959 MiddleName1959",LastName1959 +1960,1960,"FirstName1960 MiddleName1960",LastName1960 +1961,1961,"FirstName1961 MiddleName1961",LastName1961 +1962,1962,"FirstName1962 MiddleName1962",LastName1962 +1963,1963,"FirstName1963 MiddleName1963",LastName1963 +1964,1964,"FirstName1964 MiddleName1964",LastName1964 +1965,1965,"FirstName1965 MiddleName1965",LastName1965 +1966,1966,"FirstName1966 MiddleName1966",LastName1966 +1967,1967,"FirstName1967 MiddleName1967",LastName1967 +1968,1968,"FirstName1968 MiddleName1968",LastName1968 +1969,1969,"FirstName1969 MiddleName1969",LastName1969 +1970,1970,"FirstName1970 MiddleName1970",LastName1970 +1971,1971,"FirstName1971 MiddleName1971",LastName1971 +1972,1972,"FirstName1972 MiddleName1972",LastName1972 
+1973,1973,"FirstName1973 MiddleName1973",LastName1973 +1974,1974,"FirstName1974 MiddleName1974",LastName1974 +1975,1975,"FirstName1975 MiddleName1975",LastName1975 +1976,1976,"FirstName1976 MiddleName1976",LastName1976 +1977,1977,"FirstName1977 MiddleName1977",LastName1977 +1978,1978,"FirstName1978 MiddleName1978",LastName1978 +1979,1979,"FirstName1979 MiddleName1979",LastName1979 +1980,1980,"FirstName1980 MiddleName1980",LastName1980 +1981,1981,"FirstName1981 MiddleName1981",LastName1981 +1982,1982,"FirstName1982 MiddleName1982",LastName1982 +1983,1983,"FirstName1983 MiddleName1983",LastName1983 +1984,1984,"FirstName1984 MiddleName1984",LastName1984 +1985,1985,"FirstName1985 MiddleName1985",LastName1985 +1986,1986,"FirstName1986 MiddleName1986",LastName1986 +1987,1987,"FirstName1987 MiddleName1987",LastName1987 +1988,1988,"FirstName1988 MiddleName1988",LastName1988 +1989,1989,"FirstName1989 MiddleName1989",LastName1989 +1990,1990,"FirstName1990 MiddleName1990",LastName1990 +1991,1991,"FirstName1991 MiddleName1991",LastName1991 +1992,1992,"FirstName1992 MiddleName1992",LastName1992 +1993,1993,"FirstName1993 MiddleName1993",LastName1993 +1994,1994,"FirstName1994 MiddleName1994",LastName1994 +1995,1995,"FirstName1995 MiddleName1995",LastName1995 +1996,1996,"FirstName1996 MiddleName1996",LastName1996 +1997,1997,"FirstName1997 MiddleName1997",LastName1997 +1998,1998,"FirstName1998 MiddleName1998",LastName1998 +1999,1999,"FirstName1999 MiddleName1999",LastName1999 +2000,2000,"FirstName2000 MiddleName2000",LastName2000 +2001,2001,"FirstName2001 MiddleName2001",LastName2001 +2002,2002,"FirstName2002 MiddleName2002",LastName2002 +2003,2003,"FirstName2003 MiddleName2003",LastName2003 +2004,2004,"FirstName2004 MiddleName2004",LastName2004 +2005,2005,"FirstName2005 MiddleName2005",LastName2005 +2006,2006,"FirstName2006 MiddleName2006",LastName2006 +2007,2007,"FirstName2007 MiddleName2007",LastName2007 +2008,2008,"FirstName2008 MiddleName2008",LastName2008 
+2009,2009,"FirstName2009 MiddleName2009",LastName2009 +2010,2010,"FirstName2010 MiddleName2010",LastName2010 +2011,2011,"FirstName2011 MiddleName2011",LastName2011 +2012,2012,"FirstName2012 MiddleName2012",LastName2012 +2013,2013,"FirstName2013 MiddleName2013",LastName2013 +2014,2014,"FirstName2014 MiddleName2014",LastName2014 +2015,2015,"FirstName2015 MiddleName2015",LastName2015 +2016,2016,"FirstName2016 MiddleName2016",LastName2016 +2017,2017,"FirstName2017 MiddleName2017",LastName2017 +2018,2018,"FirstName2018 MiddleName2018",LastName2018 +2019,2019,"FirstName2019 MiddleName2019",LastName2019 +2020,2020,"FirstName2020 MiddleName2020",LastName2020 +2021,2021,"FirstName2021 MiddleName2021",LastName2021 +2022,2022,"FirstName2022 MiddleName2022",LastName2022 +2023,2023,"FirstName2023 MiddleName2023",LastName2023 +2024,2024,"FirstName2024 MiddleName2024",LastName2024 +2025,2025,"FirstName2025 MiddleName2025",LastName2025 +2026,2026,"FirstName2026 MiddleName2026",LastName2026 +2027,2027,"FirstName2027 MiddleName2027",LastName2027 +2028,2028,"FirstName2028 MiddleName2028",LastName2028 +2029,2029,"FirstName2029 MiddleName2029",LastName2029 +2030,2030,"FirstName2030 MiddleName2030",LastName2030 +2031,2031,"FirstName2031 MiddleName2031",LastName2031 +2032,2032,"FirstName2032 MiddleName2032",LastName2032 +2033,2033,"FirstName2033 MiddleName2033",LastName2033 +2034,2034,"FirstName2034 MiddleName2034",LastName2034 +2035,2035,"FirstName2035 MiddleName2035",LastName2035 +2036,2036,"FirstName2036 MiddleName2036",LastName2036 +2037,2037,"FirstName2037 MiddleName2037",LastName2037 +2038,2038,"FirstName2038 MiddleName2038",LastName2038 +2039,2039,"FirstName2039 MiddleName2039",LastName2039 +2040,2040,"FirstName2040 MiddleName2040",LastName2040 +2041,2041,"FirstName2041 MiddleName2041",LastName2041 +2042,2042,"FirstName2042 MiddleName2042",LastName2042 +2043,2043,"FirstName2043 MiddleName2043",LastName2043 +2044,2044,"FirstName2044 MiddleName2044",LastName2044 
+2045,2045,"FirstName2045 MiddleName2045",LastName2045 +2046,2046,"FirstName2046 MiddleName2046",LastName2046 +2047,2047,"FirstName2047 MiddleName2047",LastName2047 +2048,2048,"FirstName2048 MiddleName2048",LastName2048 +2049,2049,"FirstName2049 MiddleName2049",LastName2049 +2050,2050,"FirstName2050 MiddleName2050",LastName2050 +2051,2051,"FirstName2051 MiddleName2051",LastName2051 +2052,2052,"FirstName2052 MiddleName2052",LastName2052 +2053,2053,"FirstName2053 MiddleName2053",LastName2053 +2054,2054,"FirstName2054 MiddleName2054",LastName2054 +2055,2055,"FirstName2055 MiddleName2055",LastName2055 +2056,2056,"FirstName2056 MiddleName2056",LastName2056 +2057,2057,"FirstName2057 MiddleName2057",LastName2057 +2058,2058,"FirstName2058 MiddleName2058",LastName2058 +2059,2059,"FirstName2059 MiddleName2059",LastName2059 +2060,2060,"FirstName2060 MiddleName2060",LastName2060 +2061,2061,"FirstName2061 MiddleName2061",LastName2061 +2062,2062,"FirstName2062 MiddleName2062",LastName2062 +2063,2063,"FirstName2063 MiddleName2063",LastName2063 +2064,2064,"FirstName2064 MiddleName2064",LastName2064 +2065,2065,"FirstName2065 MiddleName2065",LastName2065 +2066,2066,"FirstName2066 MiddleName2066",LastName2066 +2067,2067,"FirstName2067 MiddleName2067",LastName2067 +2068,2068,"FirstName2068 MiddleName2068",LastName2068 +2069,2069,"FirstName2069 MiddleName2069",LastName2069 +2070,2070,"FirstName2070 MiddleName2070",LastName2070 +2071,2071,"FirstName2071 MiddleName2071",LastName2071 +2072,2072,"FirstName2072 MiddleName2072",LastName2072 +2073,2073,"FirstName2073 MiddleName2073",LastName2073 +2074,2074,"FirstName2074 MiddleName2074",LastName2074 +2075,2075,"FirstName2075 MiddleName2075",LastName2075 +2076,2076,"FirstName2076 MiddleName2076",LastName2076 +2077,2077,"FirstName2077 MiddleName2077",LastName2077 +2078,2078,"FirstName2078 MiddleName2078",LastName2078 +2079,2079,"FirstName2079 MiddleName2079",LastName2079 +2080,2080,"FirstName2080 MiddleName2080",LastName2080 
+2081,2081,"FirstName2081 MiddleName2081",LastName2081 +2082,2082,"FirstName2082 MiddleName2082",LastName2082 +2083,2083,"FirstName2083 MiddleName2083",LastName2083 +2084,2084,"FirstName2084 MiddleName2084",LastName2084 +2085,2085,"FirstName2085 MiddleName2085",LastName2085 +2086,2086,"FirstName2086 MiddleName2086",LastName2086 +2087,2087,"FirstName2087 MiddleName2087",LastName2087 +2088,2088,"FirstName2088 MiddleName2088",LastName2088 +2089,2089,"FirstName2089 MiddleName2089",LastName2089 +2090,2090,"FirstName2090 MiddleName2090",LastName2090 +2091,2091,"FirstName2091 MiddleName2091",LastName2091 +2092,2092,"FirstName2092 MiddleName2092",LastName2092 +2093,2093,"FirstName2093 MiddleName2093",LastName2093 +2094,2094,"FirstName2094 MiddleName2094",LastName2094 +2095,2095,"FirstName2095 MiddleName2095",LastName2095 +2096,2096,"FirstName2096 MiddleName2096",LastName2096 +2097,2097,"FirstName2097 MiddleName2097",LastName2097 +2098,2098,"FirstName2098 MiddleName2098",LastName2098 +2099,2099,"FirstName2099 MiddleName2099",LastName2099 +2100,2100,"FirstName2100 MiddleName2100",LastName2100 +2101,2101,"FirstName2101 MiddleName2101",LastName2101 +2102,2102,"FirstName2102 MiddleName2102",LastName2102 +2103,2103,"FirstName2103 MiddleName2103",LastName2103 +2104,2104,"FirstName2104 MiddleName2104",LastName2104 +2105,2105,"FirstName2105 MiddleName2105",LastName2105 +2106,2106,"FirstName2106 MiddleName2106",LastName2106 +2107,2107,"FirstName2107 MiddleName2107",LastName2107 +2108,2108,"FirstName2108 MiddleName2108",LastName2108 +2109,2109,"FirstName2109 MiddleName2109",LastName2109 +2110,2110,"FirstName2110 MiddleName2110",LastName2110 +2111,2111,"FirstName2111 MiddleName2111",LastName2111 +2112,2112,"FirstName2112 MiddleName2112",LastName2112 +2113,2113,"FirstName2113 MiddleName2113",LastName2113 +2114,2114,"FirstName2114 MiddleName2114",LastName2114 +2115,2115,"FirstName2115 MiddleName2115",LastName2115 +2116,2116,"FirstName2116 MiddleName2116",LastName2116 
+2117,2117,"FirstName2117 MiddleName2117",LastName2117 +2118,2118,"FirstName2118 MiddleName2118",LastName2118 +2119,2119,"FirstName2119 MiddleName2119",LastName2119 +2120,2120,"FirstName2120 MiddleName2120",LastName2120 +2121,2121,"FirstName2121 MiddleName2121",LastName2121 +2122,2122,"FirstName2122 MiddleName2122",LastName2122 +2123,2123,"FirstName2123 MiddleName2123",LastName2123 +2124,2124,"FirstName2124 MiddleName2124",LastName2124 +2125,2125,"FirstName2125 MiddleName2125",LastName2125 +2126,2126,"FirstName2126 MiddleName2126",LastName2126 +2127,2127,"FirstName2127 MiddleName2127",LastName2127 +2128,2128,"FirstName2128 MiddleName2128",LastName2128 +2129,2129,"FirstName2129 MiddleName2129",LastName2129 +2130,2130,"FirstName2130 MiddleName2130",LastName2130 +2131,2131,"FirstName2131 MiddleName2131",LastName2131 +2132,2132,"FirstName2132 MiddleName2132",LastName2132 +2133,2133,"FirstName2133 MiddleName2133",LastName2133 +2134,2134,"FirstName2134 MiddleName2134",LastName2134 +2135,2135,"FirstName2135 MiddleName2135",LastName2135 +2136,2136,"FirstName2136 MiddleName2136",LastName2136 +2137,2137,"FirstName2137 MiddleName2137",LastName2137 +2138,2138,"FirstName2138 MiddleName2138",LastName2138 +2139,2139,"FirstName2139 MiddleName2139",LastName2139 +2140,2140,"FirstName2140 MiddleName2140",LastName2140 +2141,2141,"FirstName2141 MiddleName2141",LastName2141 +2142,2142,"FirstName2142 MiddleName2142",LastName2142 +2143,2143,"FirstName2143 MiddleName2143",LastName2143 +2144,2144,"FirstName2144 MiddleName2144",LastName2144 +2145,2145,"FirstName2145 MiddleName2145",LastName2145 +2146,2146,"FirstName2146 MiddleName2146",LastName2146 +2147,2147,"FirstName2147 MiddleName2147",LastName2147 +2148,2148,"FirstName2148 MiddleName2148",LastName2148 +2149,2149,"FirstName2149 MiddleName2149",LastName2149 +2150,2150,"FirstName2150 MiddleName2150",LastName2150 +2151,2151,"FirstName2151 MiddleName2151",LastName2151 +2152,2152,"FirstName2152 MiddleName2152",LastName2152 
+2153,2153,"FirstName2153 MiddleName2153",LastName2153 +2154,2154,"FirstName2154 MiddleName2154",LastName2154 +2155,2155,"FirstName2155 MiddleName2155",LastName2155 +2156,2156,"FirstName2156 MiddleName2156",LastName2156 +2157,2157,"FirstName2157 MiddleName2157",LastName2157 +2158,2158,"FirstName2158 MiddleName2158",LastName2158 +2159,2159,"FirstName2159 MiddleName2159",LastName2159 +2160,2160,"FirstName2160 MiddleName2160",LastName2160 +2161,2161,"FirstName2161 MiddleName2161",LastName2161 +2162,2162,"FirstName2162 MiddleName2162",LastName2162 +2163,2163,"FirstName2163 MiddleName2163",LastName2163 +2164,2164,"FirstName2164 MiddleName2164",LastName2164 +2165,2165,"FirstName2165 MiddleName2165",LastName2165 +2166,2166,"FirstName2166 MiddleName2166",LastName2166 +2167,2167,"FirstName2167 MiddleName2167",LastName2167 +2168,2168,"FirstName2168 MiddleName2168",LastName2168 +2169,2169,"FirstName2169 MiddleName2169",LastName2169 +2170,2170,"FirstName2170 MiddleName2170",LastName2170 +2171,2171,"FirstName2171 MiddleName2171",LastName2171 +2172,2172,"FirstName2172 MiddleName2172",LastName2172 +2173,2173,"FirstName2173 MiddleName2173",LastName2173 +2174,2174,"FirstName2174 MiddleName2174",LastName2174 +2175,2175,"FirstName2175 MiddleName2175",LastName2175 +2176,2176,"FirstName2176 MiddleName2176",LastName2176 +2177,2177,"FirstName2177 MiddleName2177",LastName2177 +2178,2178,"FirstName2178 MiddleName2178",LastName2178 +2179,2179,"FirstName2179 MiddleName2179",LastName2179 +2180,2180,"FirstName2180 MiddleName2180",LastName2180 +2181,2181,"FirstName2181 MiddleName2181",LastName2181 +2182,2182,"FirstName2182 MiddleName2182",LastName2182 +2183,2183,"FirstName2183 MiddleName2183",LastName2183 +2184,2184,"FirstName2184 MiddleName2184",LastName2184 +2185,2185,"FirstName2185 MiddleName2185",LastName2185 +2186,2186,"FirstName2186 MiddleName2186",LastName2186 +2187,2187,"FirstName2187 MiddleName2187",LastName2187 +2188,2188,"FirstName2188 MiddleName2188",LastName2188 
+2189,2189,"FirstName2189 MiddleName2189",LastName2189 +2190,2190,"FirstName2190 MiddleName2190",LastName2190 +2191,2191,"FirstName2191 MiddleName2191",LastName2191 +2192,2192,"FirstName2192 MiddleName2192",LastName2192 +2193,2193,"FirstName2193 MiddleName2193",LastName2193 +2194,2194,"FirstName2194 MiddleName2194",LastName2194 +2195,2195,"FirstName2195 MiddleName2195",LastName2195 +2196,2196,"FirstName2196 MiddleName2196",LastName2196 +2197,2197,"FirstName2197 MiddleName2197",LastName2197 +2198,2198,"FirstName2198 MiddleName2198",LastName2198 +2199,2199,"FirstName2199 MiddleName2199",LastName2199 +2200,2200,"FirstName2200 MiddleName2200",LastName2200 +2201,2201,"FirstName2201 MiddleName2201",LastName2201 +2202,2202,"FirstName2202 MiddleName2202",LastName2202 +2203,2203,"FirstName2203 MiddleName2203",LastName2203 +2204,2204,"FirstName2204 MiddleName2204",LastName2204 +2205,2205,"FirstName2205 MiddleName2205",LastName2205 +2206,2206,"FirstName2206 MiddleName2206",LastName2206 +2207,2207,"FirstName2207 MiddleName2207",LastName2207 +2208,2208,"FirstName2208 MiddleName2208",LastName2208 +2209,2209,"FirstName2209 MiddleName2209",LastName2209 +2210,2210,"FirstName2210 MiddleName2210",LastName2210 +2211,2211,"FirstName2211 MiddleName2211",LastName2211 +2212,2212,"FirstName2212 MiddleName2212",LastName2212 +2213,2213,"FirstName2213 MiddleName2213",LastName2213 +2214,2214,"FirstName2214 MiddleName2214",LastName2214 +2215,2215,"FirstName2215 MiddleName2215",LastName2215 +2216,2216,"FirstName2216 MiddleName2216",LastName2216 +2217,2217,"FirstName2217 MiddleName2217",LastName2217 +2218,2218,"FirstName2218 MiddleName2218",LastName2218 +2219,2219,"FirstName2219 MiddleName2219",LastName2219 +2220,2220,"FirstName2220 MiddleName2220",LastName2220 +2221,2221,"FirstName2221 MiddleName2221",LastName2221 +2222,2222,"FirstName2222 MiddleName2222",LastName2222 +2223,2223,"FirstName2223 MiddleName2223",LastName2223 +2224,2224,"FirstName2224 MiddleName2224",LastName2224 
+2225,2225,"FirstName2225 MiddleName2225",LastName2225 +2226,2226,"FirstName2226 MiddleName2226",LastName2226 +2227,2227,"FirstName2227 MiddleName2227",LastName2227 +2228,2228,"FirstName2228 MiddleName2228",LastName2228 +2229,2229,"FirstName2229 MiddleName2229",LastName2229 +2230,2230,"FirstName2230 MiddleName2230",LastName2230 +2231,2231,"FirstName2231 MiddleName2231",LastName2231 +2232,2232,"FirstName2232 MiddleName2232",LastName2232 +2233,2233,"FirstName2233 MiddleName2233",LastName2233 +2234,2234,"FirstName2234 MiddleName2234",LastName2234 +2235,2235,"FirstName2235 MiddleName2235",LastName2235 +2236,2236,"FirstName2236 MiddleName2236",LastName2236 +2237,2237,"FirstName2237 MiddleName2237",LastName2237 +2238,2238,"FirstName2238 MiddleName2238",LastName2238 +2239,2239,"FirstName2239 MiddleName2239",LastName2239 +2240,2240,"FirstName2240 MiddleName2240",LastName2240 +2241,2241,"FirstName2241 MiddleName2241",LastName2241 +2242,2242,"FirstName2242 MiddleName2242",LastName2242 +2243,2243,"FirstName2243 MiddleName2243",LastName2243 +2244,2244,"FirstName2244 MiddleName2244",LastName2244 +2245,2245,"FirstName2245 MiddleName2245",LastName2245 +2246,2246,"FirstName2246 MiddleName2246",LastName2246 +2247,2247,"FirstName2247 MiddleName2247",LastName2247 +2248,2248,"FirstName2248 MiddleName2248",LastName2248 +2249,2249,"FirstName2249 MiddleName2249",LastName2249 +2250,2250,"FirstName2250 MiddleName2250",LastName2250 +2251,2251,"FirstName2251 MiddleName2251",LastName2251 +2252,2252,"FirstName2252 MiddleName2252",LastName2252 +2253,2253,"FirstName2253 MiddleName2253",LastName2253 +2254,2254,"FirstName2254 MiddleName2254",LastName2254 +2255,2255,"FirstName2255 MiddleName2255",LastName2255 +2256,2256,"FirstName2256 MiddleName2256",LastName2256 +2257,2257,"FirstName2257 MiddleName2257",LastName2257 +2258,2258,"FirstName2258 MiddleName2258",LastName2258 +2259,2259,"FirstName2259 MiddleName2259",LastName2259 +2260,2260,"FirstName2260 MiddleName2260",LastName2260 
+2261,2261,"FirstName2261 MiddleName2261",LastName2261 +2262,2262,"FirstName2262 MiddleName2262",LastName2262 +2263,2263,"FirstName2263 MiddleName2263",LastName2263 +2264,2264,"FirstName2264 MiddleName2264",LastName2264 +2265,2265,"FirstName2265 MiddleName2265",LastName2265 +2266,2266,"FirstName2266 MiddleName2266",LastName2266 +2267,2267,"FirstName2267 MiddleName2267",LastName2267 +2268,2268,"FirstName2268 MiddleName2268",LastName2268 +2269,2269,"FirstName2269 MiddleName2269",LastName2269 +2270,2270,"FirstName2270 MiddleName2270",LastName2270 +2271,2271,"FirstName2271 MiddleName2271",LastName2271 +2272,2272,"FirstName2272 MiddleName2272",LastName2272 +2273,2273,"FirstName2273 MiddleName2273",LastName2273 +2274,2274,"FirstName2274 MiddleName2274",LastName2274 +2275,2275,"FirstName2275 MiddleName2275",LastName2275 +2276,2276,"FirstName2276 MiddleName2276",LastName2276 +2277,2277,"FirstName2277 MiddleName2277",LastName2277 +2278,2278,"FirstName2278 MiddleName2278",LastName2278 +2279,2279,"FirstName2279 MiddleName2279",LastName2279 +2280,2280,"FirstName2280 MiddleName2280",LastName2280 +2281,2281,"FirstName2281 MiddleName2281",LastName2281 +2282,2282,"FirstName2282 MiddleName2282",LastName2282 +2283,2283,"FirstName2283 MiddleName2283",LastName2283 +2284,2284,"FirstName2284 MiddleName2284",LastName2284 +2285,2285,"FirstName2285 MiddleName2285",LastName2285 +2286,2286,"FirstName2286 MiddleName2286",LastName2286 +2287,2287,"FirstName2287 MiddleName2287",LastName2287 +2288,2288,"FirstName2288 MiddleName2288",LastName2288 +2289,2289,"FirstName2289 MiddleName2289",LastName2289 +2290,2290,"FirstName2290 MiddleName2290",LastName2290 +2291,2291,"FirstName2291 MiddleName2291",LastName2291 +2292,2292,"FirstName2292 MiddleName2292",LastName2292 +2293,2293,"FirstName2293 MiddleName2293",LastName2293 +2294,2294,"FirstName2294 MiddleName2294",LastName2294 +2295,2295,"FirstName2295 MiddleName2295",LastName2295 +2296,2296,"FirstName2296 MiddleName2296",LastName2296 
+2297,2297,"FirstName2297 MiddleName2297",LastName2297 +2298,2298,"FirstName2298 MiddleName2298",LastName2298 +2299,2299,"FirstName2299 MiddleName2299",LastName2299 +2300,2300,"FirstName2300 MiddleName2300",LastName2300 +2301,2301,"FirstName2301 MiddleName2301",LastName2301 +2302,2302,"FirstName2302 MiddleName2302",LastName2302 +2303,2303,"FirstName2303 MiddleName2303",LastName2303 +2304,2304,"FirstName2304 MiddleName2304",LastName2304 +2305,2305,"FirstName2305 MiddleName2305",LastName2305 +2306,2306,"FirstName2306 MiddleName2306",LastName2306 +2307,2307,"FirstName2307 MiddleName2307",LastName2307 +2308,2308,"FirstName2308 MiddleName2308",LastName2308 +2309,2309,"FirstName2309 MiddleName2309",LastName2309 +2310,2310,"FirstName2310 MiddleName2310",LastName2310 +2311,2311,"FirstName2311 MiddleName2311",LastName2311 +2312,2312,"FirstName2312 MiddleName2312",LastName2312 +2313,2313,"FirstName2313 MiddleName2313",LastName2313 +2314,2314,"FirstName2314 MiddleName2314",LastName2314 +2315,2315,"FirstName2315 MiddleName2315",LastName2315 +2316,2316,"FirstName2316 MiddleName2316",LastName2316 +2317,2317,"FirstName2317 MiddleName2317",LastName2317 +2318,2318,"FirstName2318 MiddleName2318",LastName2318 +2319,2319,"FirstName2319 MiddleName2319",LastName2319 +2320,2320,"FirstName2320 MiddleName2320",LastName2320 +2321,2321,"FirstName2321 MiddleName2321",LastName2321 +2322,2322,"FirstName2322 MiddleName2322",LastName2322 +2323,2323,"FirstName2323 MiddleName2323",LastName2323 +2324,2324,"FirstName2324 MiddleName2324",LastName2324 +2325,2325,"FirstName2325 MiddleName2325",LastName2325 +2326,2326,"FirstName2326 MiddleName2326",LastName2326 +2327,2327,"FirstName2327 MiddleName2327",LastName2327 +2328,2328,"FirstName2328 MiddleName2328",LastName2328 +2329,2329,"FirstName2329 MiddleName2329",LastName2329 +2330,2330,"FirstName2330 MiddleName2330",LastName2330 +2331,2331,"FirstName2331 MiddleName2331",LastName2331 +2332,2332,"FirstName2332 MiddleName2332",LastName2332 
+2333,2333,"FirstName2333 MiddleName2333",LastName2333 +2334,2334,"FirstName2334 MiddleName2334",LastName2334 +2335,2335,"FirstName2335 MiddleName2335",LastName2335 +2336,2336,"FirstName2336 MiddleName2336",LastName2336 +2337,2337,"FirstName2337 MiddleName2337",LastName2337 +2338,2338,"FirstName2338 MiddleName2338",LastName2338 +2339,2339,"FirstName2339 MiddleName2339",LastName2339 +2340,2340,"FirstName2340 MiddleName2340",LastName2340 +2341,2341,"FirstName2341 MiddleName2341",LastName2341 +2342,2342,"FirstName2342 MiddleName2342",LastName2342 +2343,2343,"FirstName2343 MiddleName2343",LastName2343 +2344,2344,"FirstName2344 MiddleName2344",LastName2344 +2345,2345,"FirstName2345 MiddleName2345",LastName2345 +2346,2346,"FirstName2346 MiddleName2346",LastName2346 +2347,2347,"FirstName2347 MiddleName2347",LastName2347 +2348,2348,"FirstName2348 MiddleName2348",LastName2348 +2349,2349,"FirstName2349 MiddleName2349",LastName2349 +2350,2350,"FirstName2350 MiddleName2350",LastName2350 +2351,2351,"FirstName2351 MiddleName2351",LastName2351 +2352,2352,"FirstName2352 MiddleName2352",LastName2352 +2353,2353,"FirstName2353 MiddleName2353",LastName2353 +2354,2354,"FirstName2354 MiddleName2354",LastName2354 +2355,2355,"FirstName2355 MiddleName2355",LastName2355 +2356,2356,"FirstName2356 MiddleName2356",LastName2356 +2357,2357,"FirstName2357 MiddleName2357",LastName2357 +2358,2358,"FirstName2358 MiddleName2358",LastName2358 +2359,2359,"FirstName2359 MiddleName2359",LastName2359 +2360,2360,"FirstName2360 MiddleName2360",LastName2360 +2361,2361,"FirstName2361 MiddleName2361",LastName2361 +2362,2362,"FirstName2362 MiddleName2362",LastName2362 +2363,2363,"FirstName2363 MiddleName2363",LastName2363 +2364,2364,"FirstName2364 MiddleName2364",LastName2364 +2365,2365,"FirstName2365 MiddleName2365",LastName2365 +2366,2366,"FirstName2366 MiddleName2366",LastName2366 +2367,2367,"FirstName2367 MiddleName2367",LastName2367 +2368,2368,"FirstName2368 MiddleName2368",LastName2368 
+2369,2369,"FirstName2369 MiddleName2369",LastName2369 +2370,2370,"FirstName2370 MiddleName2370",LastName2370 +2371,2371,"FirstName2371 MiddleName2371",LastName2371 +2372,2372,"FirstName2372 MiddleName2372",LastName2372 +2373,2373,"FirstName2373 MiddleName2373",LastName2373 +2374,2374,"FirstName2374 MiddleName2374",LastName2374 +2375,2375,"FirstName2375 MiddleName2375",LastName2375 +2376,2376,"FirstName2376 MiddleName2376",LastName2376 +2377,2377,"FirstName2377 MiddleName2377",LastName2377 +2378,2378,"FirstName2378 MiddleName2378",LastName2378 +2379,2379,"FirstName2379 MiddleName2379",LastName2379 +2380,2380,"FirstName2380 MiddleName2380",LastName2380 +2381,2381,"FirstName2381 MiddleName2381",LastName2381 +2382,2382,"FirstName2382 MiddleName2382",LastName2382 +2383,2383,"FirstName2383 MiddleName2383",LastName2383 +2384,2384,"FirstName2384 MiddleName2384",LastName2384 +2385,2385,"FirstName2385 MiddleName2385",LastName2385 +2386,2386,"FirstName2386 MiddleName2386",LastName2386 +2387,2387,"FirstName2387 MiddleName2387",LastName2387 +2388,2388,"FirstName2388 MiddleName2388",LastName2388 +2389,2389,"FirstName2389 MiddleName2389",LastName2389 +2390,2390,"FirstName2390 MiddleName2390",LastName2390 +2391,2391,"FirstName2391 MiddleName2391",LastName2391 +2392,2392,"FirstName2392 MiddleName2392",LastName2392 +2393,2393,"FirstName2393 MiddleName2393",LastName2393 +2394,2394,"FirstName2394 MiddleName2394",LastName2394 +2395,2395,"FirstName2395 MiddleName2395",LastName2395 +2396,2396,"FirstName2396 MiddleName2396",LastName2396 +2397,2397,"FirstName2397 MiddleName2397",LastName2397 +2398,2398,"FirstName2398 MiddleName2398",LastName2398 +2399,2399,"FirstName2399 MiddleName2399",LastName2399 +2400,2400,"FirstName2400 MiddleName2400",LastName2400 +2401,2401,"FirstName2401 MiddleName2401",LastName2401 +2402,2402,"FirstName2402 MiddleName2402",LastName2402 +2403,2403,"FirstName2403 MiddleName2403",LastName2403 +2404,2404,"FirstName2404 MiddleName2404",LastName2404 
+2405,2405,"FirstName2405 MiddleName2405",LastName2405 +2406,2406,"FirstName2406 MiddleName2406",LastName2406 +2407,2407,"FirstName2407 MiddleName2407",LastName2407 +2408,2408,"FirstName2408 MiddleName2408",LastName2408 +2409,2409,"FirstName2409 MiddleName2409",LastName2409 +2410,2410,"FirstName2410 MiddleName2410",LastName2410 +2411,2411,"FirstName2411 MiddleName2411",LastName2411 +2412,2412,"FirstName2412 MiddleName2412",LastName2412 +2413,2413,"FirstName2413 MiddleName2413",LastName2413 +2414,2414,"FirstName2414 MiddleName2414",LastName2414 +2415,2415,"FirstName2415 MiddleName2415",LastName2415 +2416,2416,"FirstName2416 MiddleName2416",LastName2416 +2417,2417,"FirstName2417 MiddleName2417",LastName2417 +2418,2418,"FirstName2418 MiddleName2418",LastName2418 +2419,2419,"FirstName2419 MiddleName2419",LastName2419 +2420,2420,"FirstName2420 MiddleName2420",LastName2420 +2421,2421,"FirstName2421 MiddleName2421",LastName2421 +2422,2422,"FirstName2422 MiddleName2422",LastName2422 +2423,2423,"FirstName2423 MiddleName2423",LastName2423 +2424,2424,"FirstName2424 MiddleName2424",LastName2424 +2425,2425,"FirstName2425 MiddleName2425",LastName2425 +2426,2426,"FirstName2426 MiddleName2426",LastName2426 +2427,2427,"FirstName2427 MiddleName2427",LastName2427 +2428,2428,"FirstName2428 MiddleName2428",LastName2428 +2429,2429,"FirstName2429 MiddleName2429",LastName2429 +2430,2430,"FirstName2430 MiddleName2430",LastName2430 +2431,2431,"FirstName2431 MiddleName2431",LastName2431 +2432,2432,"FirstName2432 MiddleName2432",LastName2432 +2433,2433,"FirstName2433 MiddleName2433",LastName2433 +2434,2434,"FirstName2434 MiddleName2434",LastName2434 +2435,2435,"FirstName2435 MiddleName2435",LastName2435 +2436,2436,"FirstName2436 MiddleName2436",LastName2436 +2437,2437,"FirstName2437 MiddleName2437",LastName2437 +2438,2438,"FirstName2438 MiddleName2438",LastName2438 +2439,2439,"FirstName2439 MiddleName2439",LastName2439 +2440,2440,"FirstName2440 MiddleName2440",LastName2440 
+2441,2441,"FirstName2441 MiddleName2441",LastName2441 +2442,2442,"FirstName2442 MiddleName2442",LastName2442 +2443,2443,"FirstName2443 MiddleName2443",LastName2443 +2444,2444,"FirstName2444 MiddleName2444",LastName2444 +2445,2445,"FirstName2445 MiddleName2445",LastName2445 +2446,2446,"FirstName2446 MiddleName2446",LastName2446 +2447,2447,"FirstName2447 MiddleName2447",LastName2447 +2448,2448,"FirstName2448 MiddleName2448",LastName2448 +2449,2449,"FirstName2449 MiddleName2449",LastName2449 +2450,2450,"FirstName2450 MiddleName2450",LastName2450 +2451,2451,"FirstName2451 MiddleName2451",LastName2451 +2452,2452,"FirstName2452 MiddleName2452",LastName2452 +2453,2453,"FirstName2453 MiddleName2453",LastName2453 +2454,2454,"FirstName2454 MiddleName2454",LastName2454 +2455,2455,"FirstName2455 MiddleName2455",LastName2455 +2456,2456,"FirstName2456 MiddleName2456",LastName2456 +2457,2457,"FirstName2457 MiddleName2457",LastName2457 +2458,2458,"FirstName2458 MiddleName2458",LastName2458 +2459,2459,"FirstName2459 MiddleName2459",LastName2459 +2460,2460,"FirstName2460 MiddleName2460",LastName2460 +2461,2461,"FirstName2461 MiddleName2461",LastName2461 +2462,2462,"FirstName2462 MiddleName2462",LastName2462 +2463,2463,"FirstName2463 MiddleName2463",LastName2463 +2464,2464,"FirstName2464 MiddleName2464",LastName2464 +2465,2465,"FirstName2465 MiddleName2465",LastName2465 +2466,2466,"FirstName2466 MiddleName2466",LastName2466 +2467,2467,"FirstName2467 MiddleName2467",LastName2467 +2468,2468,"FirstName2468 MiddleName2468",LastName2468 +2469,2469,"FirstName2469 MiddleName2469",LastName2469 +2470,2470,"FirstName2470 MiddleName2470",LastName2470 +2471,2471,"FirstName2471 MiddleName2471",LastName2471 +2472,2472,"FirstName2472 MiddleName2472",LastName2472 +2473,2473,"FirstName2473 MiddleName2473",LastName2473 +2474,2474,"FirstName2474 MiddleName2474",LastName2474 +2475,2475,"FirstName2475 MiddleName2475",LastName2475 +2476,2476,"FirstName2476 MiddleName2476",LastName2476 
+2477,2477,"FirstName2477 MiddleName2477",LastName2477 +2478,2478,"FirstName2478 MiddleName2478",LastName2478 +2479,2479,"FirstName2479 MiddleName2479",LastName2479 +2480,2480,"FirstName2480 MiddleName2480",LastName2480 +2481,2481,"FirstName2481 MiddleName2481",LastName2481 +2482,2482,"FirstName2482 MiddleName2482",LastName2482 +2483,2483,"FirstName2483 MiddleName2483",LastName2483 +2484,2484,"FirstName2484 MiddleName2484",LastName2484 +2485,2485,"FirstName2485 MiddleName2485",LastName2485 +2486,2486,"FirstName2486 MiddleName2486",LastName2486 +2487,2487,"FirstName2487 MiddleName2487",LastName2487 +2488,2488,"FirstName2488 MiddleName2488",LastName2488 +2489,2489,"FirstName2489 MiddleName2489",LastName2489 +2490,2490,"FirstName2490 MiddleName2490",LastName2490 +2491,2491,"FirstName2491 MiddleName2491",LastName2491 +2492,2492,"FirstName2492 MiddleName2492",LastName2492 +2493,2493,"FirstName2493 MiddleName2493",LastName2493 +2494,2494,"FirstName2494 MiddleName2494",LastName2494 +2495,2495,"FirstName2495 MiddleName2495",LastName2495 +2496,2496,"FirstName2496 MiddleName2496",LastName2496 +2497,2497,"FirstName2497 MiddleName2497",LastName2497 +2498,2498,"FirstName2498 MiddleName2498",LastName2498 +2499,2499,"FirstName2499 MiddleName2499",LastName2499 +2500,2500,"FirstName2500 MiddleName2500",LastName2500 +2501,2501,"FirstName2501 MiddleName2501",LastName2501 +2502,2502,"FirstName2502 MiddleName2502",LastName2502 +2503,2503,"FirstName2503 MiddleName2503",LastName2503 +2504,2504,"FirstName2504 MiddleName2504",LastName2504 +2505,2505,"FirstName2505 MiddleName2505",LastName2505 +2506,2506,"FirstName2506 MiddleName2506",LastName2506 +2507,2507,"FirstName2507 MiddleName2507",LastName2507 +2508,2508,"FirstName2508 MiddleName2508",LastName2508 +2509,2509,"FirstName2509 MiddleName2509",LastName2509 +2510,2510,"FirstName2510 MiddleName2510",LastName2510 +2511,2511,"FirstName2511 MiddleName2511",LastName2511 +2512,2512,"FirstName2512 MiddleName2512",LastName2512 
+2513,2513,"FirstName2513 MiddleName2513",LastName2513 +2514,2514,"FirstName2514 MiddleName2514",LastName2514 +2515,2515,"FirstName2515 MiddleName2515",LastName2515 +2516,2516,"FirstName2516 MiddleName2516",LastName2516 +2517,2517,"FirstName2517 MiddleName2517",LastName2517 +2518,2518,"FirstName2518 MiddleName2518",LastName2518 +2519,2519,"FirstName2519 MiddleName2519",LastName2519 +2520,2520,"FirstName2520 MiddleName2520",LastName2520 +2521,2521,"FirstName2521 MiddleName2521",LastName2521 +2522,2522,"FirstName2522 MiddleName2522",LastName2522 +2523,2523,"FirstName2523 MiddleName2523",LastName2523 +2524,2524,"FirstName2524 MiddleName2524",LastName2524 +2525,2525,"FirstName2525 MiddleName2525",LastName2525 +2526,2526,"FirstName2526 MiddleName2526",LastName2526 +2527,2527,"FirstName2527 MiddleName2527",LastName2527 +2528,2528,"FirstName2528 MiddleName2528",LastName2528 +2529,2529,"FirstName2529 MiddleName2529",LastName2529 +2530,2530,"FirstName2530 MiddleName2530",LastName2530 +2531,2531,"FirstName2531 MiddleName2531",LastName2531 +2532,2532,"FirstName2532 MiddleName2532",LastName2532 +2533,2533,"FirstName2533 MiddleName2533",LastName2533 +2534,2534,"FirstName2534 MiddleName2534",LastName2534 +2535,2535,"FirstName2535 MiddleName2535",LastName2535 +2536,2536,"FirstName2536 MiddleName2536",LastName2536 +2537,2537,"FirstName2537 MiddleName2537",LastName2537 +2538,2538,"FirstName2538 MiddleName2538",LastName2538 +2539,2539,"FirstName2539 MiddleName2539",LastName2539 +2540,2540,"FirstName2540 MiddleName2540",LastName2540 +2541,2541,"FirstName2541 MiddleName2541",LastName2541 +2542,2542,"FirstName2542 MiddleName2542",LastName2542 +2543,2543,"FirstName2543 MiddleName2543",LastName2543 +2544,2544,"FirstName2544 MiddleName2544",LastName2544 +2545,2545,"FirstName2545 MiddleName2545",LastName2545 +2546,2546,"FirstName2546 MiddleName2546",LastName2546 +2547,2547,"FirstName2547 MiddleName2547",LastName2547 +2548,2548,"FirstName2548 MiddleName2548",LastName2548 
+2549,2549,"FirstName2549 MiddleName2549",LastName2549 +2550,2550,"FirstName2550 MiddleName2550",LastName2550 +2551,2551,"FirstName2551 MiddleName2551",LastName2551 +2552,2552,"FirstName2552 MiddleName2552",LastName2552 +2553,2553,"FirstName2553 MiddleName2553",LastName2553 +2554,2554,"FirstName2554 MiddleName2554",LastName2554 +2555,2555,"FirstName2555 MiddleName2555",LastName2555 +2556,2556,"FirstName2556 MiddleName2556",LastName2556 +2557,2557,"FirstName2557 MiddleName2557",LastName2557 +2558,2558,"FirstName2558 MiddleName2558",LastName2558 +2559,2559,"FirstName2559 MiddleName2559",LastName2559 +2560,2560,"FirstName2560 MiddleName2560",LastName2560 +2561,2561,"FirstName2561 MiddleName2561",LastName2561 +2562,2562,"FirstName2562 MiddleName2562",LastName2562 +2563,2563,"FirstName2563 MiddleName2563",LastName2563 +2564,2564,"FirstName2564 MiddleName2564",LastName2564 +2565,2565,"FirstName2565 MiddleName2565",LastName2565 +2566,2566,"FirstName2566 MiddleName2566",LastName2566 +2567,2567,"FirstName2567 MiddleName2567",LastName2567 +2568,2568,"FirstName2568 MiddleName2568",LastName2568 +2569,2569,"FirstName2569 MiddleName2569",LastName2569 +2570,2570,"FirstName2570 MiddleName2570",LastName2570 +2571,2571,"FirstName2571 MiddleName2571",LastName2571 +2572,2572,"FirstName2572 MiddleName2572",LastName2572 +2573,2573,"FirstName2573 MiddleName2573",LastName2573 +2574,2574,"FirstName2574 MiddleName2574",LastName2574 +2575,2575,"FirstName2575 MiddleName2575",LastName2575 +2576,2576,"FirstName2576 MiddleName2576",LastName2576 +2577,2577,"FirstName2577 MiddleName2577",LastName2577 +2578,2578,"FirstName2578 MiddleName2578",LastName2578 +2579,2579,"FirstName2579 MiddleName2579",LastName2579 +2580,2580,"FirstName2580 MiddleName2580",LastName2580 +2581,2581,"FirstName2581 MiddleName2581",LastName2581 +2582,2582,"FirstName2582 MiddleName2582",LastName2582 +2583,2583,"FirstName2583 MiddleName2583",LastName2583 +2584,2584,"FirstName2584 MiddleName2584",LastName2584 
+2585,2585,"FirstName2585 MiddleName2585",LastName2585 +2586,2586,"FirstName2586 MiddleName2586",LastName2586 +2587,2587,"FirstName2587 MiddleName2587",LastName2587 +2588,2588,"FirstName2588 MiddleName2588",LastName2588 +2589,2589,"FirstName2589 MiddleName2589",LastName2589 +2590,2590,"FirstName2590 MiddleName2590",LastName2590 +2591,2591,"FirstName2591 MiddleName2591",LastName2591 +2592,2592,"FirstName2592 MiddleName2592",LastName2592 +2593,2593,"FirstName2593 MiddleName2593",LastName2593 +2594,2594,"FirstName2594 MiddleName2594",LastName2594 +2595,2595,"FirstName2595 MiddleName2595",LastName2595 +2596,2596,"FirstName2596 MiddleName2596",LastName2596 +2597,2597,"FirstName2597 MiddleName2597",LastName2597 +2598,2598,"FirstName2598 MiddleName2598",LastName2598 +2599,2599,"FirstName2599 MiddleName2599",LastName2599 +2600,2600,"FirstName2600 MiddleName2600",LastName2600 +2601,2601,"FirstName2601 MiddleName2601",LastName2601 +2602,2602,"FirstName2602 MiddleName2602",LastName2602 +2603,2603,"FirstName2603 MiddleName2603",LastName2603 +2604,2604,"FirstName2604 MiddleName2604",LastName2604 +2605,2605,"FirstName2605 MiddleName2605",LastName2605 +2606,2606,"FirstName2606 MiddleName2606",LastName2606 +2607,2607,"FirstName2607 MiddleName2607",LastName2607 +2608,2608,"FirstName2608 MiddleName2608",LastName2608 +2609,2609,"FirstName2609 MiddleName2609",LastName2609 +2610,2610,"FirstName2610 MiddleName2610",LastName2610 +2611,2611,"FirstName2611 MiddleName2611",LastName2611 +2612,2612,"FirstName2612 MiddleName2612",LastName2612 +2613,2613,"FirstName2613 MiddleName2613",LastName2613 +2614,2614,"FirstName2614 MiddleName2614",LastName2614 +2615,2615,"FirstName2615 MiddleName2615",LastName2615 +2616,2616,"FirstName2616 MiddleName2616",LastName2616 +2617,2617,"FirstName2617 MiddleName2617",LastName2617 +2618,2618,"FirstName2618 MiddleName2618",LastName2618 +2619,2619,"FirstName2619 MiddleName2619",LastName2619 +2620,2620,"FirstName2620 MiddleName2620",LastName2620 
+2621,2621,"FirstName2621 MiddleName2621",LastName2621 +2622,2622,"FirstName2622 MiddleName2622",LastName2622 +2623,2623,"FirstName2623 MiddleName2623",LastName2623 +2624,2624,"FirstName2624 MiddleName2624",LastName2624 +2625,2625,"FirstName2625 MiddleName2625",LastName2625 +2626,2626,"FirstName2626 MiddleName2626",LastName2626 +2627,2627,"FirstName2627 MiddleName2627",LastName2627 +2628,2628,"FirstName2628 MiddleName2628",LastName2628 +2629,2629,"FirstName2629 MiddleName2629",LastName2629 +2630,2630,"FirstName2630 MiddleName2630",LastName2630 +2631,2631,"FirstName2631 MiddleName2631",LastName2631 +2632,2632,"FirstName2632 MiddleName2632",LastName2632 +2633,2633,"FirstName2633 MiddleName2633",LastName2633 +2634,2634,"FirstName2634 MiddleName2634",LastName2634 +2635,2635,"FirstName2635 MiddleName2635",LastName2635 +2636,2636,"FirstName2636 MiddleName2636",LastName2636 +2637,2637,"FirstName2637 MiddleName2637",LastName2637 +2638,2638,"FirstName2638 MiddleName2638",LastName2638 +2639,2639,"FirstName2639 MiddleName2639",LastName2639 +2640,2640,"FirstName2640 MiddleName2640",LastName2640 +2641,2641,"FirstName2641 MiddleName2641",LastName2641 +2642,2642,"FirstName2642 MiddleName2642",LastName2642 +2643,2643,"FirstName2643 MiddleName2643",LastName2643 +2644,2644,"FirstName2644 MiddleName2644",LastName2644 +2645,2645,"FirstName2645 MiddleName2645",LastName2645 +2646,2646,"FirstName2646 MiddleName2646",LastName2646 +2647,2647,"FirstName2647 MiddleName2647",LastName2647 +2648,2648,"FirstName2648 MiddleName2648",LastName2648 +2649,2649,"FirstName2649 MiddleName2649",LastName2649 +2650,2650,"FirstName2650 MiddleName2650",LastName2650 +2651,2651,"FirstName2651 MiddleName2651",LastName2651 +2652,2652,"FirstName2652 MiddleName2652",LastName2652 +2653,2653,"FirstName2653 MiddleName2653",LastName2653 +2654,2654,"FirstName2654 MiddleName2654",LastName2654 +2655,2655,"FirstName2655 MiddleName2655",LastName2655 +2656,2656,"FirstName2656 MiddleName2656",LastName2656 
+2657,2657,"FirstName2657 MiddleName2657",LastName2657 +2658,2658,"FirstName2658 MiddleName2658",LastName2658 +2659,2659,"FirstName2659 MiddleName2659",LastName2659 +2660,2660,"FirstName2660 MiddleName2660",LastName2660 +2661,2661,"FirstName2661 MiddleName2661",LastName2661 +2662,2662,"FirstName2662 MiddleName2662",LastName2662 +2663,2663,"FirstName2663 MiddleName2663",LastName2663 +2664,2664,"FirstName2664 MiddleName2664",LastName2664 +2665,2665,"FirstName2665 MiddleName2665",LastName2665 +2666,2666,"FirstName2666 MiddleName2666",LastName2666 +2667,2667,"FirstName2667 MiddleName2667",LastName2667 +2668,2668,"FirstName2668 MiddleName2668",LastName2668 +2669,2669,"FirstName2669 MiddleName2669",LastName2669 +2670,2670,"FirstName2670 MiddleName2670",LastName2670 +2671,2671,"FirstName2671 MiddleName2671",LastName2671 +2672,2672,"FirstName2672 MiddleName2672",LastName2672 +2673,2673,"FirstName2673 MiddleName2673",LastName2673 +2674,2674,"FirstName2674 MiddleName2674",LastName2674 +2675,2675,"FirstName2675 MiddleName2675",LastName2675 +2676,2676,"FirstName2676 MiddleName2676",LastName2676 +2677,2677,"FirstName2677 MiddleName2677",LastName2677 +2678,2678,"FirstName2678 MiddleName2678",LastName2678 +2679,2679,"FirstName2679 MiddleName2679",LastName2679 +2680,2680,"FirstName2680 MiddleName2680",LastName2680 +2681,2681,"FirstName2681 MiddleName2681",LastName2681 +2682,2682,"FirstName2682 MiddleName2682",LastName2682 +2683,2683,"FirstName2683 MiddleName2683",LastName2683 +2684,2684,"FirstName2684 MiddleName2684",LastName2684 +2685,2685,"FirstName2685 MiddleName2685",LastName2685 +2686,2686,"FirstName2686 MiddleName2686",LastName2686 +2687,2687,"FirstName2687 MiddleName2687",LastName2687 +2688,2688,"FirstName2688 MiddleName2688",LastName2688 +2689,2689,"FirstName2689 MiddleName2689",LastName2689 +2690,2690,"FirstName2690 MiddleName2690",LastName2690 +2691,2691,"FirstName2691 MiddleName2691",LastName2691 +2692,2692,"FirstName2692 MiddleName2692",LastName2692 
+2693,2693,"FirstName2693 MiddleName2693",LastName2693 +2694,2694,"FirstName2694 MiddleName2694",LastName2694 +2695,2695,"FirstName2695 MiddleName2695",LastName2695 +2696,2696,"FirstName2696 MiddleName2696",LastName2696 +2697,2697,"FirstName2697 MiddleName2697",LastName2697 +2698,2698,"FirstName2698 MiddleName2698",LastName2698 +2699,2699,"FirstName2699 MiddleName2699",LastName2699 +2700,2700,"FirstName2700 MiddleName2700",LastName2700 +2701,2701,"FirstName2701 MiddleName2701",LastName2701 +2702,2702,"FirstName2702 MiddleName2702",LastName2702 +2703,2703,"FirstName2703 MiddleName2703",LastName2703 +2704,2704,"FirstName2704 MiddleName2704",LastName2704 +2705,2705,"FirstName2705 MiddleName2705",LastName2705 +2706,2706,"FirstName2706 MiddleName2706",LastName2706 +2707,2707,"FirstName2707 MiddleName2707",LastName2707 +2708,2708,"FirstName2708 MiddleName2708",LastName2708 +2709,2709,"FirstName2709 MiddleName2709",LastName2709 +2710,2710,"FirstName2710 MiddleName2710",LastName2710 +2711,2711,"FirstName2711 MiddleName2711",LastName2711 +2712,2712,"FirstName2712 MiddleName2712",LastName2712 +2713,2713,"FirstName2713 MiddleName2713",LastName2713 +2714,2714,"FirstName2714 MiddleName2714",LastName2714 +2715,2715,"FirstName2715 MiddleName2715",LastName2715 +2716,2716,"FirstName2716 MiddleName2716",LastName2716 +2717,2717,"FirstName2717 MiddleName2717",LastName2717 +2718,2718,"FirstName2718 MiddleName2718",LastName2718 +2719,2719,"FirstName2719 MiddleName2719",LastName2719 +2720,2720,"FirstName2720 MiddleName2720",LastName2720 +2721,2721,"FirstName2721 MiddleName2721",LastName2721 +2722,2722,"FirstName2722 MiddleName2722",LastName2722 +2723,2723,"FirstName2723 MiddleName2723",LastName2723 +2724,2724,"FirstName2724 MiddleName2724",LastName2724 +2725,2725,"FirstName2725 MiddleName2725",LastName2725 +2726,2726,"FirstName2726 MiddleName2726",LastName2726 +2727,2727,"FirstName2727 MiddleName2727",LastName2727 +2728,2728,"FirstName2728 MiddleName2728",LastName2728 
+2729,2729,"FirstName2729 MiddleName2729",LastName2729 +2730,2730,"FirstName2730 MiddleName2730",LastName2730 +2731,2731,"FirstName2731 MiddleName2731",LastName2731 +2732,2732,"FirstName2732 MiddleName2732",LastName2732 +2733,2733,"FirstName2733 MiddleName2733",LastName2733 +2734,2734,"FirstName2734 MiddleName2734",LastName2734 +2735,2735,"FirstName2735 MiddleName2735",LastName2735 +2736,2736,"FirstName2736 MiddleName2736",LastName2736 +2737,2737,"FirstName2737 MiddleName2737",LastName2737 +2738,2738,"FirstName2738 MiddleName2738",LastName2738 +2739,2739,"FirstName2739 MiddleName2739",LastName2739 +2740,2740,"FirstName2740 MiddleName2740",LastName2740 +2741,2741,"FirstName2741 MiddleName2741",LastName2741 +2742,2742,"FirstName2742 MiddleName2742",LastName2742 +2743,2743,"FirstName2743 MiddleName2743",LastName2743 +2744,2744,"FirstName2744 MiddleName2744",LastName2744 +2745,2745,"FirstName2745 MiddleName2745",LastName2745 +2746,2746,"FirstName2746 MiddleName2746",LastName2746 +2747,2747,"FirstName2747 MiddleName2747",LastName2747 +2748,2748,"FirstName2748 MiddleName2748",LastName2748 +2749,2749,"FirstName2749 MiddleName2749",LastName2749 +2750,2750,"FirstName2750 MiddleName2750",LastName2750 +2751,2751,"FirstName2751 MiddleName2751",LastName2751 +2752,2752,"FirstName2752 MiddleName2752",LastName2752 +2753,2753,"FirstName2753 MiddleName2753",LastName2753 +2754,2754,"FirstName2754 MiddleName2754",LastName2754 +2755,2755,"FirstName2755 MiddleName2755",LastName2755 +2756,2756,"FirstName2756 MiddleName2756",LastName2756 +2757,2757,"FirstName2757 MiddleName2757",LastName2757 +2758,2758,"FirstName2758 MiddleName2758",LastName2758 +2759,2759,"FirstName2759 MiddleName2759",LastName2759 +2760,2760,"FirstName2760 MiddleName2760",LastName2760 +2761,2761,"FirstName2761 MiddleName2761",LastName2761 +2762,2762,"FirstName2762 MiddleName2762",LastName2762 +2763,2763,"FirstName2763 MiddleName2763",LastName2763 +2764,2764,"FirstName2764 MiddleName2764",LastName2764 
+2765,2765,"FirstName2765 MiddleName2765",LastName2765 +2766,2766,"FirstName2766 MiddleName2766",LastName2766 +2767,2767,"FirstName2767 MiddleName2767",LastName2767 +2768,2768,"FirstName2768 MiddleName2768",LastName2768 +2769,2769,"FirstName2769 MiddleName2769",LastName2769 +2770,2770,"FirstName2770 MiddleName2770",LastName2770 +2771,2771,"FirstName2771 MiddleName2771",LastName2771 +2772,2772,"FirstName2772 MiddleName2772",LastName2772 +2773,2773,"FirstName2773 MiddleName2773",LastName2773 +2774,2774,"FirstName2774 MiddleName2774",LastName2774 +2775,2775,"FirstName2775 MiddleName2775",LastName2775 +2776,2776,"FirstName2776 MiddleName2776",LastName2776 +2777,2777,"FirstName2777 MiddleName2777",LastName2777 +2778,2778,"FirstName2778 MiddleName2778",LastName2778 +2779,2779,"FirstName2779 MiddleName2779",LastName2779 +2780,2780,"FirstName2780 MiddleName2780",LastName2780 +2781,2781,"FirstName2781 MiddleName2781",LastName2781 +2782,2782,"FirstName2782 MiddleName2782",LastName2782 +2783,2783,"FirstName2783 MiddleName2783",LastName2783 +2784,2784,"FirstName2784 MiddleName2784",LastName2784 +2785,2785,"FirstName2785 MiddleName2785",LastName2785 +2786,2786,"FirstName2786 MiddleName2786",LastName2786 +2787,2787,"FirstName2787 MiddleName2787",LastName2787 +2788,2788,"FirstName2788 MiddleName2788",LastName2788 +2789,2789,"FirstName2789 MiddleName2789",LastName2789 +2790,2790,"FirstName2790 MiddleName2790",LastName2790 +2791,2791,"FirstName2791 MiddleName2791",LastName2791 +2792,2792,"FirstName2792 MiddleName2792",LastName2792 +2793,2793,"FirstName2793 MiddleName2793",LastName2793 +2794,2794,"FirstName2794 MiddleName2794",LastName2794 +2795,2795,"FirstName2795 MiddleName2795",LastName2795 +2796,2796,"FirstName2796 MiddleName2796",LastName2796 +2797,2797,"FirstName2797 MiddleName2797",LastName2797 +2798,2798,"FirstName2798 MiddleName2798",LastName2798 +2799,2799,"FirstName2799 MiddleName2799",LastName2799 +2800,2800,"FirstName2800 MiddleName2800",LastName2800 
+2801,2801,"FirstName2801 MiddleName2801",LastName2801 +2802,2802,"FirstName2802 MiddleName2802",LastName2802 +2803,2803,"FirstName2803 MiddleName2803",LastName2803 +2804,2804,"FirstName2804 MiddleName2804",LastName2804 +2805,2805,"FirstName2805 MiddleName2805",LastName2805 +2806,2806,"FirstName2806 MiddleName2806",LastName2806 +2807,2807,"FirstName2807 MiddleName2807",LastName2807 +2808,2808,"FirstName2808 MiddleName2808",LastName2808 +2809,2809,"FirstName2809 MiddleName2809",LastName2809 +2810,2810,"FirstName2810 MiddleName2810",LastName2810 +2811,2811,"FirstName2811 MiddleName2811",LastName2811 +2812,2812,"FirstName2812 MiddleName2812",LastName2812 +2813,2813,"FirstName2813 MiddleName2813",LastName2813 +2814,2814,"FirstName2814 MiddleName2814",LastName2814 +2815,2815,"FirstName2815 MiddleName2815",LastName2815 +2816,2816,"FirstName2816 MiddleName2816",LastName2816 +2817,2817,"FirstName2817 MiddleName2817",LastName2817 +2818,2818,"FirstName2818 MiddleName2818",LastName2818 +2819,2819,"FirstName2819 MiddleName2819",LastName2819 +2820,2820,"FirstName2820 MiddleName2820",LastName2820 +2821,2821,"FirstName2821 MiddleName2821",LastName2821 +2822,2822,"FirstName2822 MiddleName2822",LastName2822 +2823,2823,"FirstName2823 MiddleName2823",LastName2823 +2824,2824,"FirstName2824 MiddleName2824",LastName2824 +2825,2825,"FirstName2825 MiddleName2825",LastName2825 +2826,2826,"FirstName2826 MiddleName2826",LastName2826 +2827,2827,"FirstName2827 MiddleName2827",LastName2827 +2828,2828,"FirstName2828 MiddleName2828",LastName2828 +2829,2829,"FirstName2829 MiddleName2829",LastName2829 +2830,2830,"FirstName2830 MiddleName2830",LastName2830 +2831,2831,"FirstName2831 MiddleName2831",LastName2831 +2832,2832,"FirstName2832 MiddleName2832",LastName2832 +2833,2833,"FirstName2833 MiddleName2833",LastName2833 +2834,2834,"FirstName2834 MiddleName2834",LastName2834 +2835,2835,"FirstName2835 MiddleName2835",LastName2835 +2836,2836,"FirstName2836 MiddleName2836",LastName2836 
+2837,2837,"FirstName2837 MiddleName2837",LastName2837 +2838,2838,"FirstName2838 MiddleName2838",LastName2838 +2839,2839,"FirstName2839 MiddleName2839",LastName2839 +2840,2840,"FirstName2840 MiddleName2840",LastName2840 +2841,2841,"FirstName2841 MiddleName2841",LastName2841 +2842,2842,"FirstName2842 MiddleName2842",LastName2842 +2843,2843,"FirstName2843 MiddleName2843",LastName2843 +2844,2844,"FirstName2844 MiddleName2844",LastName2844 +2845,2845,"FirstName2845 MiddleName2845",LastName2845 +2846,2846,"FirstName2846 MiddleName2846",LastName2846 +2847,2847,"FirstName2847 MiddleName2847",LastName2847 +2848,2848,"FirstName2848 MiddleName2848",LastName2848 +2849,2849,"FirstName2849 MiddleName2849",LastName2849 +2850,2850,"FirstName2850 MiddleName2850",LastName2850 +2851,2851,"FirstName2851 MiddleName2851",LastName2851 +2852,2852,"FirstName2852 MiddleName2852",LastName2852 +2853,2853,"FirstName2853 MiddleName2853",LastName2853 +2854,2854,"FirstName2854 MiddleName2854",LastName2854 +2855,2855,"FirstName2855 MiddleName2855",LastName2855 +2856,2856,"FirstName2856 MiddleName2856",LastName2856 +2857,2857,"FirstName2857 MiddleName2857",LastName2857 +2858,2858,"FirstName2858 MiddleName2858",LastName2858 +2859,2859,"FirstName2859 MiddleName2859",LastName2859 +2860,2860,"FirstName2860 MiddleName2860",LastName2860 +2861,2861,"FirstName2861 MiddleName2861",LastName2861 +2862,2862,"FirstName2862 MiddleName2862",LastName2862 +2863,2863,"FirstName2863 MiddleName2863",LastName2863 +2864,2864,"FirstName2864 MiddleName2864",LastName2864 +2865,2865,"FirstName2865 MiddleName2865",LastName2865 +2866,2866,"FirstName2866 MiddleName2866",LastName2866 +2867,2867,"FirstName2867 MiddleName2867",LastName2867 +2868,2868,"FirstName2868 MiddleName2868",LastName2868 +2869,2869,"FirstName2869 MiddleName2869",LastName2869 +2870,2870,"FirstName2870 MiddleName2870",LastName2870 +2871,2871,"FirstName2871 MiddleName2871",LastName2871 +2872,2872,"FirstName2872 MiddleName2872",LastName2872 
+2873,2873,"FirstName2873 MiddleName2873",LastName2873 +2874,2874,"FirstName2874 MiddleName2874",LastName2874 +2875,2875,"FirstName2875 MiddleName2875",LastName2875 +2876,2876,"FirstName2876 MiddleName2876",LastName2876 +2877,2877,"FirstName2877 MiddleName2877",LastName2877 +2878,2878,"FirstName2878 MiddleName2878",LastName2878 +2879,2879,"FirstName2879 MiddleName2879",LastName2879 +2880,2880,"FirstName2880 MiddleName2880",LastName2880 +2881,2881,"FirstName2881 MiddleName2881",LastName2881 +2882,2882,"FirstName2882 MiddleName2882",LastName2882 +2883,2883,"FirstName2883 MiddleName2883",LastName2883 +2884,2884,"FirstName2884 MiddleName2884",LastName2884 +2885,2885,"FirstName2885 MiddleName2885",LastName2885 +2886,2886,"FirstName2886 MiddleName2886",LastName2886 +2887,2887,"FirstName2887 MiddleName2887",LastName2887 +2888,2888,"FirstName2888 MiddleName2888",LastName2888 +2889,2889,"FirstName2889 MiddleName2889",LastName2889 +2890,2890,"FirstName2890 MiddleName2890",LastName2890 +2891,2891,"FirstName2891 MiddleName2891",LastName2891 +2892,2892,"FirstName2892 MiddleName2892",LastName2892 +2893,2893,"FirstName2893 MiddleName2893",LastName2893 +2894,2894,"FirstName2894 MiddleName2894",LastName2894 +2895,2895,"FirstName2895 MiddleName2895",LastName2895 +2896,2896,"FirstName2896 MiddleName2896",LastName2896 +2897,2897,"FirstName2897 MiddleName2897",LastName2897 +2898,2898,"FirstName2898 MiddleName2898",LastName2898 +2899,2899,"FirstName2899 MiddleName2899",LastName2899 +2900,2900,"FirstName2900 MiddleName2900",LastName2900 +2901,2901,"FirstName2901 MiddleName2901",LastName2901 +2902,2902,"FirstName2902 MiddleName2902",LastName2902 +2903,2903,"FirstName2903 MiddleName2903",LastName2903 +2904,2904,"FirstName2904 MiddleName2904",LastName2904 +2905,2905,"FirstName2905 MiddleName2905",LastName2905 +2906,2906,"FirstName2906 MiddleName2906",LastName2906 +2907,2907,"FirstName2907 MiddleName2907",LastName2907 +2908,2908,"FirstName2908 MiddleName2908",LastName2908 
+2909,2909,"FirstName2909 MiddleName2909",LastName2909 +2910,2910,"FirstName2910 MiddleName2910",LastName2910 +2911,2911,"FirstName2911 MiddleName2911",LastName2911 +2912,2912,"FirstName2912 MiddleName2912",LastName2912 +2913,2913,"FirstName2913 MiddleName2913",LastName2913 +2914,2914,"FirstName2914 MiddleName2914",LastName2914 +2915,2915,"FirstName2915 MiddleName2915",LastName2915 +2916,2916,"FirstName2916 MiddleName2916",LastName2916 +2917,2917,"FirstName2917 MiddleName2917",LastName2917 +2918,2918,"FirstName2918 MiddleName2918",LastName2918 +2919,2919,"FirstName2919 MiddleName2919",LastName2919 +2920,2920,"FirstName2920 MiddleName2920",LastName2920 +2921,2921,"FirstName2921 MiddleName2921",LastName2921 +2922,2922,"FirstName2922 MiddleName2922",LastName2922 +2923,2923,"FirstName2923 MiddleName2923",LastName2923 +2924,2924,"FirstName2924 MiddleName2924",LastName2924 +2925,2925,"FirstName2925 MiddleName2925",LastName2925 +2926,2926,"FirstName2926 MiddleName2926",LastName2926 +2927,2927,"FirstName2927 MiddleName2927",LastName2927 +2928,2928,"FirstName2928 MiddleName2928",LastName2928 +2929,2929,"FirstName2929 MiddleName2929",LastName2929 +2930,2930,"FirstName2930 MiddleName2930",LastName2930 +2931,2931,"FirstName2931 MiddleName2931",LastName2931 +2932,2932,"FirstName2932 MiddleName2932",LastName2932 +2933,2933,"FirstName2933 MiddleName2933",LastName2933 +2934,2934,"FirstName2934 MiddleName2934",LastName2934 +2935,2935,"FirstName2935 MiddleName2935",LastName2935 +2936,2936,"FirstName2936 MiddleName2936",LastName2936 +2937,2937,"FirstName2937 MiddleName2937",LastName2937 +2938,2938,"FirstName2938 MiddleName2938",LastName2938 +2939,2939,"FirstName2939 MiddleName2939",LastName2939 +2940,2940,"FirstName2940 MiddleName2940",LastName2940 +2941,2941,"FirstName2941 MiddleName2941",LastName2941 +2942,2942,"FirstName2942 MiddleName2942",LastName2942 +2943,2943,"FirstName2943 MiddleName2943",LastName2943 +2944,2944,"FirstName2944 MiddleName2944",LastName2944 
+2945,2945,"FirstName2945 MiddleName2945",LastName2945 +2946,2946,"FirstName2946 MiddleName2946",LastName2946 +2947,2947,"FirstName2947 MiddleName2947",LastName2947 +2948,2948,"FirstName2948 MiddleName2948",LastName2948 +2949,2949,"FirstName2949 MiddleName2949",LastName2949 +2950,2950,"FirstName2950 MiddleName2950",LastName2950 +2951,2951,"FirstName2951 MiddleName2951",LastName2951 +2952,2952,"FirstName2952 MiddleName2952",LastName2952 +2953,2953,"FirstName2953 MiddleName2953",LastName2953 +2954,2954,"FirstName2954 MiddleName2954",LastName2954 +2955,2955,"FirstName2955 MiddleName2955",LastName2955 +2956,2956,"FirstName2956 MiddleName2956",LastName2956 +2957,2957,"FirstName2957 MiddleName2957",LastName2957 +2958,2958,"FirstName2958 MiddleName2958",LastName2958 +2959,2959,"FirstName2959 MiddleName2959",LastName2959 +2960,2960,"FirstName2960 MiddleName2960",LastName2960 +2961,2961,"FirstName2961 MiddleName2961",LastName2961 +2962,2962,"FirstName2962 MiddleName2962",LastName2962 +2963,2963,"FirstName2963 MiddleName2963",LastName2963 +2964,2964,"FirstName2964 MiddleName2964",LastName2964 +2965,2965,"FirstName2965 MiddleName2965",LastName2965 +2966,2966,"FirstName2966 MiddleName2966",LastName2966 +2967,2967,"FirstName2967 MiddleName2967",LastName2967 +2968,2968,"FirstName2968 MiddleName2968",LastName2968 +2969,2969,"FirstName2969 MiddleName2969",LastName2969 +2970,2970,"FirstName2970 MiddleName2970",LastName2970 +2971,2971,"FirstName2971 MiddleName2971",LastName2971 +2972,2972,"FirstName2972 MiddleName2972",LastName2972 +2973,2973,"FirstName2973 MiddleName2973",LastName2973 +2974,2974,"FirstName2974 MiddleName2974",LastName2974 +2975,2975,"FirstName2975 MiddleName2975",LastName2975 +2976,2976,"FirstName2976 MiddleName2976",LastName2976 +2977,2977,"FirstName2977 MiddleName2977",LastName2977 +2978,2978,"FirstName2978 MiddleName2978",LastName2978 +2979,2979,"FirstName2979 MiddleName2979",LastName2979 +2980,2980,"FirstName2980 MiddleName2980",LastName2980 
+2981,2981,"FirstName2981 MiddleName2981",LastName2981 +2982,2982,"FirstName2982 MiddleName2982",LastName2982 +2983,2983,"FirstName2983 MiddleName2983",LastName2983 +2984,2984,"FirstName2984 MiddleName2984",LastName2984 +2985,2985,"FirstName2985 MiddleName2985",LastName2985 +2986,2986,"FirstName2986 MiddleName2986",LastName2986 +2987,2987,"FirstName2987 MiddleName2987",LastName2987 +2988,2988,"FirstName2988 MiddleName2988",LastName2988 +2989,2989,"FirstName2989 MiddleName2989",LastName2989 +2990,2990,"FirstName2990 MiddleName2990",LastName2990 +2991,2991,"FirstName2991 MiddleName2991",LastName2991 +2992,2992,"FirstName2992 MiddleName2992",LastName2992 +2993,2993,"FirstName2993 MiddleName2993",LastName2993 +2994,2994,"FirstName2994 MiddleName2994",LastName2994 +2995,2995,"FirstName2995 MiddleName2995",LastName2995 +2996,2996,"FirstName2996 MiddleName2996",LastName2996 +2997,2997,"FirstName2997 MiddleName2997",LastName2997 +2998,2998,"FirstName2998 MiddleName2998",LastName2998 +2999,2999,"FirstName2999 MiddleName2999",LastName2999 +3000,3000,"FirstName3000 MiddleName3000",LastName3000 +3001,3001,"FirstName3001 MiddleName3001",LastName3001 +3002,3002,"FirstName3002 MiddleName3002",LastName3002 +3003,3003,"FirstName3003 MiddleName3003",LastName3003 +3004,3004,"FirstName3004 MiddleName3004",LastName3004 +3005,3005,"FirstName3005 MiddleName3005",LastName3005 +3006,3006,"FirstName3006 MiddleName3006",LastName3006 +3007,3007,"FirstName3007 MiddleName3007",LastName3007 +3008,3008,"FirstName3008 MiddleName3008",LastName3008 +3009,3009,"FirstName3009 MiddleName3009",LastName3009 +3010,3010,"FirstName3010 MiddleName3010",LastName3010 +3011,3011,"FirstName3011 MiddleName3011",LastName3011 +3012,3012,"FirstName3012 MiddleName3012",LastName3012 +3013,3013,"FirstName3013 MiddleName3013",LastName3013 +3014,3014,"FirstName3014 MiddleName3014",LastName3014 +3015,3015,"FirstName3015 MiddleName3015",LastName3015 +3016,3016,"FirstName3016 MiddleName3016",LastName3016 
+3017,3017,"FirstName3017 MiddleName3017",LastName3017 +3018,3018,"FirstName3018 MiddleName3018",LastName3018 +3019,3019,"FirstName3019 MiddleName3019",LastName3019 +3020,3020,"FirstName3020 MiddleName3020",LastName3020 +3021,3021,"FirstName3021 MiddleName3021",LastName3021 +3022,3022,"FirstName3022 MiddleName3022",LastName3022 +3023,3023,"FirstName3023 MiddleName3023",LastName3023 +3024,3024,"FirstName3024 MiddleName3024",LastName3024 +3025,3025,"FirstName3025 MiddleName3025",LastName3025 +3026,3026,"FirstName3026 MiddleName3026",LastName3026 +3027,3027,"FirstName3027 MiddleName3027",LastName3027 +3028,3028,"FirstName3028 MiddleName3028",LastName3028 +3029,3029,"FirstName3029 MiddleName3029",LastName3029 +3030,3030,"FirstName3030 MiddleName3030",LastName3030 +3031,3031,"FirstName3031 MiddleName3031",LastName3031 +3032,3032,"FirstName3032 MiddleName3032",LastName3032 +3033,3033,"FirstName3033 MiddleName3033",LastName3033 +3034,3034,"FirstName3034 MiddleName3034",LastName3034 +3035,3035,"FirstName3035 MiddleName3035",LastName3035 +3036,3036,"FirstName3036 MiddleName3036",LastName3036 +3037,3037,"FirstName3037 MiddleName3037",LastName3037 +3038,3038,"FirstName3038 MiddleName3038",LastName3038 +3039,3039,"FirstName3039 MiddleName3039",LastName3039 +3040,3040,"FirstName3040 MiddleName3040",LastName3040 +3041,3041,"FirstName3041 MiddleName3041",LastName3041 +3042,3042,"FirstName3042 MiddleName3042",LastName3042 +3043,3043,"FirstName3043 MiddleName3043",LastName3043 +3044,3044,"FirstName3044 MiddleName3044",LastName3044 +3045,3045,"FirstName3045 MiddleName3045",LastName3045 +3046,3046,"FirstName3046 MiddleName3046",LastName3046 +3047,3047,"FirstName3047 MiddleName3047",LastName3047 +3048,3048,"FirstName3048 MiddleName3048",LastName3048 +3049,3049,"FirstName3049 MiddleName3049",LastName3049 +3050,3050,"FirstName3050 MiddleName3050",LastName3050 +3051,3051,"FirstName3051 MiddleName3051",LastName3051 +3052,3052,"FirstName3052 MiddleName3052",LastName3052 
+3053,3053,"FirstName3053 MiddleName3053",LastName3053 +3054,3054,"FirstName3054 MiddleName3054",LastName3054 +3055,3055,"FirstName3055 MiddleName3055",LastName3055 +3056,3056,"FirstName3056 MiddleName3056",LastName3056 +3057,3057,"FirstName3057 MiddleName3057",LastName3057 +3058,3058,"FirstName3058 MiddleName3058",LastName3058 +3059,3059,"FirstName3059 MiddleName3059",LastName3059 +3060,3060,"FirstName3060 MiddleName3060",LastName3060 +3061,3061,"FirstName3061 MiddleName3061",LastName3061 +3062,3062,"FirstName3062 MiddleName3062",LastName3062 +3063,3063,"FirstName3063 MiddleName3063",LastName3063 +3064,3064,"FirstName3064 MiddleName3064",LastName3064 +3065,3065,"FirstName3065 MiddleName3065",LastName3065 +3066,3066,"FirstName3066 MiddleName3066",LastName3066 +3067,3067,"FirstName3067 MiddleName3067",LastName3067 +3068,3068,"FirstName3068 MiddleName3068",LastName3068 +3069,3069,"FirstName3069 MiddleName3069",LastName3069 +3070,3070,"FirstName3070 MiddleName3070",LastName3070 +3071,3071,"FirstName3071 MiddleName3071",LastName3071 +3072,3072,"FirstName3072 MiddleName3072",LastName3072 +3073,3073,"FirstName3073 MiddleName3073",LastName3073 +3074,3074,"FirstName3074 MiddleName3074",LastName3074 +3075,3075,"FirstName3075 MiddleName3075",LastName3075 +3076,3076,"FirstName3076 MiddleName3076",LastName3076 +3077,3077,"FirstName3077 MiddleName3077",LastName3077 +3078,3078,"FirstName3078 MiddleName3078",LastName3078 +3079,3079,"FirstName3079 MiddleName3079",LastName3079 +3080,3080,"FirstName3080 MiddleName3080",LastName3080 +3081,3081,"FirstName3081 MiddleName3081",LastName3081 +3082,3082,"FirstName3082 MiddleName3082",LastName3082 +3083,3083,"FirstName3083 MiddleName3083",LastName3083 +3084,3084,"FirstName3084 MiddleName3084",LastName3084 +3085,3085,"FirstName3085 MiddleName3085",LastName3085 +3086,3086,"FirstName3086 MiddleName3086",LastName3086 +3087,3087,"FirstName3087 MiddleName3087",LastName3087 +3088,3088,"FirstName3088 MiddleName3088",LastName3088 
+3089,3089,"FirstName3089 MiddleName3089",LastName3089 +3090,3090,"FirstName3090 MiddleName3090",LastName3090 +3091,3091,"FirstName3091 MiddleName3091",LastName3091 +3092,3092,"FirstName3092 MiddleName3092",LastName3092 +3093,3093,"FirstName3093 MiddleName3093",LastName3093 +3094,3094,"FirstName3094 MiddleName3094",LastName3094 +3095,3095,"FirstName3095 MiddleName3095",LastName3095 +3096,3096,"FirstName3096 MiddleName3096",LastName3096 +3097,3097,"FirstName3097 MiddleName3097",LastName3097 +3098,3098,"FirstName3098 MiddleName3098",LastName3098 +3099,3099,"FirstName3099 MiddleName3099",LastName3099 +3100,3100,"FirstName3100 MiddleName3100",LastName3100 +3101,3101,"FirstName3101 MiddleName3101",LastName3101 +3102,3102,"FirstName3102 MiddleName3102",LastName3102 +3103,3103,"FirstName3103 MiddleName3103",LastName3103 +3104,3104,"FirstName3104 MiddleName3104",LastName3104 +3105,3105,"FirstName3105 MiddleName3105",LastName3105 +3106,3106,"FirstName3106 MiddleName3106",LastName3106 +3107,3107,"FirstName3107 MiddleName3107",LastName3107 +3108,3108,"FirstName3108 MiddleName3108",LastName3108 +3109,3109,"FirstName3109 MiddleName3109",LastName3109 +3110,3110,"FirstName3110 MiddleName3110",LastName3110 +3111,3111,"FirstName3111 MiddleName3111",LastName3111 +3112,3112,"FirstName3112 MiddleName3112",LastName3112 +3113,3113,"FirstName3113 MiddleName3113",LastName3113 +3114,3114,"FirstName3114 MiddleName3114",LastName3114 +3115,3115,"FirstName3115 MiddleName3115",LastName3115 +3116,3116,"FirstName3116 MiddleName3116",LastName3116 +3117,3117,"FirstName3117 MiddleName3117",LastName3117 +3118,3118,"FirstName3118 MiddleName3118",LastName3118 +3119,3119,"FirstName3119 MiddleName3119",LastName3119 +3120,3120,"FirstName3120 MiddleName3120",LastName3120 +3121,3121,"FirstName3121 MiddleName3121",LastName3121 +3122,3122,"FirstName3122 MiddleName3122",LastName3122 +3123,3123,"FirstName3123 MiddleName3123",LastName3123 +3124,3124,"FirstName3124 MiddleName3124",LastName3124 
+3125,3125,"FirstName3125 MiddleName3125",LastName3125 +3126,3126,"FirstName3126 MiddleName3126",LastName3126 +3127,3127,"FirstName3127 MiddleName3127",LastName3127 +3128,3128,"FirstName3128 MiddleName3128",LastName3128 +3129,3129,"FirstName3129 MiddleName3129",LastName3129 +3130,3130,"FirstName3130 MiddleName3130",LastName3130 +3131,3131,"FirstName3131 MiddleName3131",LastName3131 +3132,3132,"FirstName3132 MiddleName3132",LastName3132 +3133,3133,"FirstName3133 MiddleName3133",LastName3133 +3134,3134,"FirstName3134 MiddleName3134",LastName3134 +3135,3135,"FirstName3135 MiddleName3135",LastName3135 +3136,3136,"FirstName3136 MiddleName3136",LastName3136 +3137,3137,"FirstName3137 MiddleName3137",LastName3137 +3138,3138,"FirstName3138 MiddleName3138",LastName3138 +3139,3139,"FirstName3139 MiddleName3139",LastName3139 +3140,3140,"FirstName3140 MiddleName3140",LastName3140 +3141,3141,"FirstName3141 MiddleName3141",LastName3141 +3142,3142,"FirstName3142 MiddleName3142",LastName3142 +3143,3143,"FirstName3143 MiddleName3143",LastName3143 +3144,3144,"FirstName3144 MiddleName3144",LastName3144 +3145,3145,"FirstName3145 MiddleName3145",LastName3145 +3146,3146,"FirstName3146 MiddleName3146",LastName3146 +3147,3147,"FirstName3147 MiddleName3147",LastName3147 +3148,3148,"FirstName3148 MiddleName3148",LastName3148 +3149,3149,"FirstName3149 MiddleName3149",LastName3149 +3150,3150,"FirstName3150 MiddleName3150",LastName3150 +3151,3151,"FirstName3151 MiddleName3151",LastName3151 +3152,3152,"FirstName3152 MiddleName3152",LastName3152 +3153,3153,"FirstName3153 MiddleName3153",LastName3153 +3154,3154,"FirstName3154 MiddleName3154",LastName3154 +3155,3155,"FirstName3155 MiddleName3155",LastName3155 +3156,3156,"FirstName3156 MiddleName3156",LastName3156 +3157,3157,"FirstName3157 MiddleName3157",LastName3157 +3158,3158,"FirstName3158 MiddleName3158",LastName3158 +3159,3159,"FirstName3159 MiddleName3159",LastName3159 +3160,3160,"FirstName3160 MiddleName3160",LastName3160 
+3161,3161,"FirstName3161 MiddleName3161",LastName3161 +3162,3162,"FirstName3162 MiddleName3162",LastName3162 +3163,3163,"FirstName3163 MiddleName3163",LastName3163 +3164,3164,"FirstName3164 MiddleName3164",LastName3164 +3165,3165,"FirstName3165 MiddleName3165",LastName3165 +3166,3166,"FirstName3166 MiddleName3166",LastName3166 +3167,3167,"FirstName3167 MiddleName3167",LastName3167 +3168,3168,"FirstName3168 MiddleName3168",LastName3168 +3169,3169,"FirstName3169 MiddleName3169",LastName3169 +3170,3170,"FirstName3170 MiddleName3170",LastName3170 +3171,3171,"FirstName3171 MiddleName3171",LastName3171 +3172,3172,"FirstName3172 MiddleName3172",LastName3172 +3173,3173,"FirstName3173 MiddleName3173",LastName3173 +3174,3174,"FirstName3174 MiddleName3174",LastName3174 +3175,3175,"FirstName3175 MiddleName3175",LastName3175 +3176,3176,"FirstName3176 MiddleName3176",LastName3176 +3177,3177,"FirstName3177 MiddleName3177",LastName3177 +3178,3178,"FirstName3178 MiddleName3178",LastName3178 +3179,3179,"FirstName3179 MiddleName3179",LastName3179 +3180,3180,"FirstName3180 MiddleName3180",LastName3180 +3181,3181,"FirstName3181 MiddleName3181",LastName3181 +3182,3182,"FirstName3182 MiddleName3182",LastName3182 +3183,3183,"FirstName3183 MiddleName3183",LastName3183 +3184,3184,"FirstName3184 MiddleName3184",LastName3184 +3185,3185,"FirstName3185 MiddleName3185",LastName3185 +3186,3186,"FirstName3186 MiddleName3186",LastName3186 +3187,3187,"FirstName3187 MiddleName3187",LastName3187 +3188,3188,"FirstName3188 MiddleName3188",LastName3188 +3189,3189,"FirstName3189 MiddleName3189",LastName3189 +3190,3190,"FirstName3190 MiddleName3190",LastName3190 +3191,3191,"FirstName3191 MiddleName3191",LastName3191 +3192,3192,"FirstName3192 MiddleName3192",LastName3192 +3193,3193,"FirstName3193 MiddleName3193",LastName3193 +3194,3194,"FirstName3194 MiddleName3194",LastName3194 +3195,3195,"FirstName3195 MiddleName3195",LastName3195 +3196,3196,"FirstName3196 MiddleName3196",LastName3196 
+3197,3197,"FirstName3197 MiddleName3197",LastName3197 +3198,3198,"FirstName3198 MiddleName3198",LastName3198 +3199,3199,"FirstName3199 MiddleName3199",LastName3199 +3200,3200,"FirstName3200 MiddleName3200",LastName3200 +3201,3201,"FirstName3201 MiddleName3201",LastName3201 +3202,3202,"FirstName3202 MiddleName3202",LastName3202 +3203,3203,"FirstName3203 MiddleName3203",LastName3203 +3204,3204,"FirstName3204 MiddleName3204",LastName3204 +3205,3205,"FirstName3205 MiddleName3205",LastName3205 +3206,3206,"FirstName3206 MiddleName3206",LastName3206 +3207,3207,"FirstName3207 MiddleName3207",LastName3207 +3208,3208,"FirstName3208 MiddleName3208",LastName3208 +3209,3209,"FirstName3209 MiddleName3209",LastName3209 +3210,3210,"FirstName3210 MiddleName3210",LastName3210 +3211,3211,"FirstName3211 MiddleName3211",LastName3211 +3212,3212,"FirstName3212 MiddleName3212",LastName3212 +3213,3213,"FirstName3213 MiddleName3213",LastName3213 +3214,3214,"FirstName3214 MiddleName3214",LastName3214 +3215,3215,"FirstName3215 MiddleName3215",LastName3215 +3216,3216,"FirstName3216 MiddleName3216",LastName3216 +3217,3217,"FirstName3217 MiddleName3217",LastName3217 +3218,3218,"FirstName3218 MiddleName3218",LastName3218 +3219,3219,"FirstName3219 MiddleName3219",LastName3219 +3220,3220,"FirstName3220 MiddleName3220",LastName3220 +3221,3221,"FirstName3221 MiddleName3221",LastName3221 +3222,3222,"FirstName3222 MiddleName3222",LastName3222 +3223,3223,"FirstName3223 MiddleName3223",LastName3223 +3224,3224,"FirstName3224 MiddleName3224",LastName3224 +3225,3225,"FirstName3225 MiddleName3225",LastName3225 +3226,3226,"FirstName3226 MiddleName3226",LastName3226 +3227,3227,"FirstName3227 MiddleName3227",LastName3227 +3228,3228,"FirstName3228 MiddleName3228",LastName3228 +3229,3229,"FirstName3229 MiddleName3229",LastName3229 +3230,3230,"FirstName3230 MiddleName3230",LastName3230 +3231,3231,"FirstName3231 MiddleName3231",LastName3231 +3232,3232,"FirstName3232 MiddleName3232",LastName3232 
+3233,3233,"FirstName3233 MiddleName3233",LastName3233 +3234,3234,"FirstName3234 MiddleName3234",LastName3234 +3235,3235,"FirstName3235 MiddleName3235",LastName3235 +3236,3236,"FirstName3236 MiddleName3236",LastName3236 +3237,3237,"FirstName3237 MiddleName3237",LastName3237 +3238,3238,"FirstName3238 MiddleName3238",LastName3238 +3239,3239,"FirstName3239 MiddleName3239",LastName3239 +3240,3240,"FirstName3240 MiddleName3240",LastName3240 +3241,3241,"FirstName3241 MiddleName3241",LastName3241 +3242,3242,"FirstName3242 MiddleName3242",LastName3242 +3243,3243,"FirstName3243 MiddleName3243",LastName3243 +3244,3244,"FirstName3244 MiddleName3244",LastName3244 +3245,3245,"FirstName3245 MiddleName3245",LastName3245 +3246,3246,"FirstName3246 MiddleName3246",LastName3246 +3247,3247,"FirstName3247 MiddleName3247",LastName3247 +3248,3248,"FirstName3248 MiddleName3248",LastName3248 +3249,3249,"FirstName3249 MiddleName3249",LastName3249 +3250,3250,"FirstName3250 MiddleName3250",LastName3250 +3251,3251,"FirstName3251 MiddleName3251",LastName3251 +3252,3252,"FirstName3252 MiddleName3252",LastName3252 +3253,3253,"FirstName3253 MiddleName3253",LastName3253 +3254,3254,"FirstName3254 MiddleName3254",LastName3254 +3255,3255,"FirstName3255 MiddleName3255",LastName3255 +3256,3256,"FirstName3256 MiddleName3256",LastName3256 +3257,3257,"FirstName3257 MiddleName3257",LastName3257 +3258,3258,"FirstName3258 MiddleName3258",LastName3258 +3259,3259,"FirstName3259 MiddleName3259",LastName3259 +3260,3260,"FirstName3260 MiddleName3260",LastName3260 +3261,3261,"FirstName3261 MiddleName3261",LastName3261 +3262,3262,"FirstName3262 MiddleName3262",LastName3262 +3263,3263,"FirstName3263 MiddleName3263",LastName3263 +3264,3264,"FirstName3264 MiddleName3264",LastName3264 +3265,3265,"FirstName3265 MiddleName3265",LastName3265 +3266,3266,"FirstName3266 MiddleName3266",LastName3266 +3267,3267,"FirstName3267 MiddleName3267",LastName3267 +3268,3268,"FirstName3268 MiddleName3268",LastName3268 
+3269,3269,"FirstName3269 MiddleName3269",LastName3269 +3270,3270,"FirstName3270 MiddleName3270",LastName3270 +3271,3271,"FirstName3271 MiddleName3271",LastName3271 +3272,3272,"FirstName3272 MiddleName3272",LastName3272 +3273,3273,"FirstName3273 MiddleName3273",LastName3273 +3274,3274,"FirstName3274 MiddleName3274",LastName3274 +3275,3275,"FirstName3275 MiddleName3275",LastName3275 +3276,3276,"FirstName3276 MiddleName3276",LastName3276 +3277,3277,"FirstName3277 MiddleName3277",LastName3277 +3278,3278,"FirstName3278 MiddleName3278",LastName3278 +3279,3279,"FirstName3279 MiddleName3279",LastName3279 +3280,3280,"FirstName3280 MiddleName3280",LastName3280 +3281,3281,"FirstName3281 MiddleName3281",LastName3281 +3282,3282,"FirstName3282 MiddleName3282",LastName3282 +3283,3283,"FirstName3283 MiddleName3283",LastName3283 +3284,3284,"FirstName3284 MiddleName3284",LastName3284 +3285,3285,"FirstName3285 MiddleName3285",LastName3285 +3286,3286,"FirstName3286 MiddleName3286",LastName3286 +3287,3287,"FirstName3287 MiddleName3287",LastName3287 +3288,3288,"FirstName3288 MiddleName3288",LastName3288 +3289,3289,"FirstName3289 MiddleName3289",LastName3289 +3290,3290,"FirstName3290 MiddleName3290",LastName3290 +3291,3291,"FirstName3291 MiddleName3291",LastName3291 +3292,3292,"FirstName3292 MiddleName3292",LastName3292 +3293,3293,"FirstName3293 MiddleName3293",LastName3293 +3294,3294,"FirstName3294 MiddleName3294",LastName3294 +3295,3295,"FirstName3295 MiddleName3295",LastName3295 +3296,3296,"FirstName3296 MiddleName3296",LastName3296 +3297,3297,"FirstName3297 MiddleName3297",LastName3297 +3298,3298,"FirstName3298 MiddleName3298",LastName3298 +3299,3299,"FirstName3299 MiddleName3299",LastName3299 +3300,3300,"FirstName3300 MiddleName3300",LastName3300 +3301,3301,"FirstName3301 MiddleName3301",LastName3301 +3302,3302,"FirstName3302 MiddleName3302",LastName3302 +3303,3303,"FirstName3303 MiddleName3303",LastName3303 +3304,3304,"FirstName3304 MiddleName3304",LastName3304 
+3305,3305,"FirstName3305 MiddleName3305",LastName3305 +3306,3306,"FirstName3306 MiddleName3306",LastName3306 +3307,3307,"FirstName3307 MiddleName3307",LastName3307 +3308,3308,"FirstName3308 MiddleName3308",LastName3308 +3309,3309,"FirstName3309 MiddleName3309",LastName3309 +3310,3310,"FirstName3310 MiddleName3310",LastName3310 +3311,3311,"FirstName3311 MiddleName3311",LastName3311 +3312,3312,"FirstName3312 MiddleName3312",LastName3312 +3313,3313,"FirstName3313 MiddleName3313",LastName3313 +3314,3314,"FirstName3314 MiddleName3314",LastName3314 +3315,3315,"FirstName3315 MiddleName3315",LastName3315 +3316,3316,"FirstName3316 MiddleName3316",LastName3316 +3317,3317,"FirstName3317 MiddleName3317",LastName3317 +3318,3318,"FirstName3318 MiddleName3318",LastName3318 +3319,3319,"FirstName3319 MiddleName3319",LastName3319 +3320,3320,"FirstName3320 MiddleName3320",LastName3320 +3321,3321,"FirstName3321 MiddleName3321",LastName3321 +3322,3322,"FirstName3322 MiddleName3322",LastName3322 +3323,3323,"FirstName3323 MiddleName3323",LastName3323 +3324,3324,"FirstName3324 MiddleName3324",LastName3324 +3325,3325,"FirstName3325 MiddleName3325",LastName3325 +3326,3326,"FirstName3326 MiddleName3326",LastName3326 +3327,3327,"FirstName3327 MiddleName3327",LastName3327 +3328,3328,"FirstName3328 MiddleName3328",LastName3328 +3329,3329,"FirstName3329 MiddleName3329",LastName3329 +3330,3330,"FirstName3330 MiddleName3330",LastName3330 +3331,3331,"FirstName3331 MiddleName3331",LastName3331 +3332,3332,"FirstName3332 MiddleName3332",LastName3332 +3333,3333,"FirstName3333 MiddleName3333",LastName3333 +3334,3334,"FirstName3334 MiddleName3334",LastName3334 +3335,3335,"FirstName3335 MiddleName3335",LastName3335 +3336,3336,"FirstName3336 MiddleName3336",LastName3336 +3337,3337,"FirstName3337 MiddleName3337",LastName3337 +3338,3338,"FirstName3338 MiddleName3338",LastName3338 +3339,3339,"FirstName3339 MiddleName3339",LastName3339 +3340,3340,"FirstName3340 MiddleName3340",LastName3340 
+3341,3341,"FirstName3341 MiddleName3341",LastName3341 +3342,3342,"FirstName3342 MiddleName3342",LastName3342 +3343,3343,"FirstName3343 MiddleName3343",LastName3343 +3344,3344,"FirstName3344 MiddleName3344",LastName3344 +3345,3345,"FirstName3345 MiddleName3345",LastName3345 +3346,3346,"FirstName3346 MiddleName3346",LastName3346 +3347,3347,"FirstName3347 MiddleName3347",LastName3347 +3348,3348,"FirstName3348 MiddleName3348",LastName3348 +3349,3349,"FirstName3349 MiddleName3349",LastName3349 +3350,3350,"FirstName3350 MiddleName3350",LastName3350 +3351,3351,"FirstName3351 MiddleName3351",LastName3351 +3352,3352,"FirstName3352 MiddleName3352",LastName3352 +3353,3353,"FirstName3353 MiddleName3353",LastName3353 +3354,3354,"FirstName3354 MiddleName3354",LastName3354 +3355,3355,"FirstName3355 MiddleName3355",LastName3355 +3356,3356,"FirstName3356 MiddleName3356",LastName3356 +3357,3357,"FirstName3357 MiddleName3357",LastName3357 +3358,3358,"FirstName3358 MiddleName3358",LastName3358 +3359,3359,"FirstName3359 MiddleName3359",LastName3359 +3360,3360,"FirstName3360 MiddleName3360",LastName3360 +3361,3361,"FirstName3361 MiddleName3361",LastName3361 +3362,3362,"FirstName3362 MiddleName3362",LastName3362 +3363,3363,"FirstName3363 MiddleName3363",LastName3363 +3364,3364,"FirstName3364 MiddleName3364",LastName3364 +3365,3365,"FirstName3365 MiddleName3365",LastName3365 +3366,3366,"FirstName3366 MiddleName3366",LastName3366 +3367,3367,"FirstName3367 MiddleName3367",LastName3367 +3368,3368,"FirstName3368 MiddleName3368",LastName3368 +3369,3369,"FirstName3369 MiddleName3369",LastName3369 +3370,3370,"FirstName3370 MiddleName3370",LastName3370 +3371,3371,"FirstName3371 MiddleName3371",LastName3371 +3372,3372,"FirstName3372 MiddleName3372",LastName3372 +3373,3373,"FirstName3373 MiddleName3373",LastName3373 +3374,3374,"FirstName3374 MiddleName3374",LastName3374 +3375,3375,"FirstName3375 MiddleName3375",LastName3375 +3376,3376,"FirstName3376 MiddleName3376",LastName3376 
+3377,3377,"FirstName3377 MiddleName3377",LastName3377 +3378,3378,"FirstName3378 MiddleName3378",LastName3378 +3379,3379,"FirstName3379 MiddleName3379",LastName3379 +3380,3380,"FirstName3380 MiddleName3380",LastName3380 +3381,3381,"FirstName3381 MiddleName3381",LastName3381 +3382,3382,"FirstName3382 MiddleName3382",LastName3382 +3383,3383,"FirstName3383 MiddleName3383",LastName3383 +3384,3384,"FirstName3384 MiddleName3384",LastName3384 +3385,3385,"FirstName3385 MiddleName3385",LastName3385 +3386,3386,"FirstName3386 MiddleName3386",LastName3386 +3387,3387,"FirstName3387 MiddleName3387",LastName3387 +3388,3388,"FirstName3388 MiddleName3388",LastName3388 +3389,3389,"FirstName3389 MiddleName3389",LastName3389 +3390,3390,"FirstName3390 MiddleName3390",LastName3390 +3391,3391,"FirstName3391 MiddleName3391",LastName3391 +3392,3392,"FirstName3392 MiddleName3392",LastName3392 +3393,3393,"FirstName3393 MiddleName3393",LastName3393 +3394,3394,"FirstName3394 MiddleName3394",LastName3394 +3395,3395,"FirstName3395 MiddleName3395",LastName3395 +3396,3396,"FirstName3396 MiddleName3396",LastName3396 +3397,3397,"FirstName3397 MiddleName3397",LastName3397 +3398,3398,"FirstName3398 MiddleName3398",LastName3398 +3399,3399,"FirstName3399 MiddleName3399",LastName3399 +3400,3400,"FirstName3400 MiddleName3400",LastName3400 +3401,3401,"FirstName3401 MiddleName3401",LastName3401 +3402,3402,"FirstName3402 MiddleName3402",LastName3402 +3403,3403,"FirstName3403 MiddleName3403",LastName3403 +3404,3404,"FirstName3404 MiddleName3404",LastName3404 +3405,3405,"FirstName3405 MiddleName3405",LastName3405 +3406,3406,"FirstName3406 MiddleName3406",LastName3406 +3407,3407,"FirstName3407 MiddleName3407",LastName3407 +3408,3408,"FirstName3408 MiddleName3408",LastName3408 +3409,3409,"FirstName3409 MiddleName3409",LastName3409 +3410,3410,"FirstName3410 MiddleName3410",LastName3410 +3411,3411,"FirstName3411 MiddleName3411",LastName3411 +3412,3412,"FirstName3412 MiddleName3412",LastName3412 
+3413,3413,"FirstName3413 MiddleName3413",LastName3413 +3414,3414,"FirstName3414 MiddleName3414",LastName3414 +3415,3415,"FirstName3415 MiddleName3415",LastName3415 +3416,3416,"FirstName3416 MiddleName3416",LastName3416 +3417,3417,"FirstName3417 MiddleName3417",LastName3417 +3418,3418,"FirstName3418 MiddleName3418",LastName3418 +3419,3419,"FirstName3419 MiddleName3419",LastName3419 +3420,3420,"FirstName3420 MiddleName3420",LastName3420 +3421,3421,"FirstName3421 MiddleName3421",LastName3421 +3422,3422,"FirstName3422 MiddleName3422",LastName3422 +3423,3423,"FirstName3423 MiddleName3423",LastName3423 +3424,3424,"FirstName3424 MiddleName3424",LastName3424 +3425,3425,"FirstName3425 MiddleName3425",LastName3425 +3426,3426,"FirstName3426 MiddleName3426",LastName3426 +3427,3427,"FirstName3427 MiddleName3427",LastName3427 +3428,3428,"FirstName3428 MiddleName3428",LastName3428 +3429,3429,"FirstName3429 MiddleName3429",LastName3429 +3430,3430,"FirstName3430 MiddleName3430",LastName3430 +3431,3431,"FirstName3431 MiddleName3431",LastName3431 +3432,3432,"FirstName3432 MiddleName3432",LastName3432 +3433,3433,"FirstName3433 MiddleName3433",LastName3433 +3434,3434,"FirstName3434 MiddleName3434",LastName3434 +3435,3435,"FirstName3435 MiddleName3435",LastName3435 +3436,3436,"FirstName3436 MiddleName3436",LastName3436 +3437,3437,"FirstName3437 MiddleName3437",LastName3437 +3438,3438,"FirstName3438 MiddleName3438",LastName3438 +3439,3439,"FirstName3439 MiddleName3439",LastName3439 +3440,3440,"FirstName3440 MiddleName3440",LastName3440 +3441,3441,"FirstName3441 MiddleName3441",LastName3441 +3442,3442,"FirstName3442 MiddleName3442",LastName3442 +3443,3443,"FirstName3443 MiddleName3443",LastName3443 +3444,3444,"FirstName3444 MiddleName3444",LastName3444 +3445,3445,"FirstName3445 MiddleName3445",LastName3445 +3446,3446,"FirstName3446 MiddleName3446",LastName3446 +3447,3447,"FirstName3447 MiddleName3447",LastName3447 +3448,3448,"FirstName3448 MiddleName3448",LastName3448 
+3449,3449,"FirstName3449 MiddleName3449",LastName3449 +3450,3450,"FirstName3450 MiddleName3450",LastName3450 +3451,3451,"FirstName3451 MiddleName3451",LastName3451 +3452,3452,"FirstName3452 MiddleName3452",LastName3452 +3453,3453,"FirstName3453 MiddleName3453",LastName3453 +3454,3454,"FirstName3454 MiddleName3454",LastName3454 +3455,3455,"FirstName3455 MiddleName3455",LastName3455 +3456,3456,"FirstName3456 MiddleName3456",LastName3456 +3457,3457,"FirstName3457 MiddleName3457",LastName3457 +3458,3458,"FirstName3458 MiddleName3458",LastName3458 +3459,3459,"FirstName3459 MiddleName3459",LastName3459 +3460,3460,"FirstName3460 MiddleName3460",LastName3460 +3461,3461,"FirstName3461 MiddleName3461",LastName3461 +3462,3462,"FirstName3462 MiddleName3462",LastName3462 +3463,3463,"FirstName3463 MiddleName3463",LastName3463 +3464,3464,"FirstName3464 MiddleName3464",LastName3464 +3465,3465,"FirstName3465 MiddleName3465",LastName3465 +3466,3466,"FirstName3466 MiddleName3466",LastName3466 +3467,3467,"FirstName3467 MiddleName3467",LastName3467 +3468,3468,"FirstName3468 MiddleName3468",LastName3468 +3469,3469,"FirstName3469 MiddleName3469",LastName3469 +3470,3470,"FirstName3470 MiddleName3470",LastName3470 +3471,3471,"FirstName3471 MiddleName3471",LastName3471 +3472,3472,"FirstName3472 MiddleName3472",LastName3472 +3473,3473,"FirstName3473 MiddleName3473",LastName3473 +3474,3474,"FirstName3474 MiddleName3474",LastName3474 +3475,3475,"FirstName3475 MiddleName3475",LastName3475 +3476,3476,"FirstName3476 MiddleName3476",LastName3476 +3477,3477,"FirstName3477 MiddleName3477",LastName3477 +3478,3478,"FirstName3478 MiddleName3478",LastName3478 +3479,3479,"FirstName3479 MiddleName3479",LastName3479 +3480,3480,"FirstName3480 MiddleName3480",LastName3480 +3481,3481,"FirstName3481 MiddleName3481",LastName3481 +3482,3482,"FirstName3482 MiddleName3482",LastName3482 +3483,3483,"FirstName3483 MiddleName3483",LastName3483 +3484,3484,"FirstName3484 MiddleName3484",LastName3484 
+3485,3485,"FirstName3485 MiddleName3485",LastName3485 +3486,3486,"FirstName3486 MiddleName3486",LastName3486 +3487,3487,"FirstName3487 MiddleName3487",LastName3487 +3488,3488,"FirstName3488 MiddleName3488",LastName3488 +3489,3489,"FirstName3489 MiddleName3489",LastName3489 +3490,3490,"FirstName3490 MiddleName3490",LastName3490 +3491,3491,"FirstName3491 MiddleName3491",LastName3491 +3492,3492,"FirstName3492 MiddleName3492",LastName3492 +3493,3493,"FirstName3493 MiddleName3493",LastName3493 +3494,3494,"FirstName3494 MiddleName3494",LastName3494 +3495,3495,"FirstName3495 MiddleName3495",LastName3495 +3496,3496,"FirstName3496 MiddleName3496",LastName3496 +3497,3497,"FirstName3497 MiddleName3497",LastName3497 +3498,3498,"FirstName3498 MiddleName3498",LastName3498 +3499,3499,"FirstName3499 MiddleName3499",LastName3499 +3500,3500,"FirstName3500 MiddleName3500",LastName3500 +3501,3501,"FirstName3501 MiddleName3501",LastName3501 +3502,3502,"FirstName3502 MiddleName3502",LastName3502 +3503,3503,"FirstName3503 MiddleName3503",LastName3503 +3504,3504,"FirstName3504 MiddleName3504",LastName3504 +3505,3505,"FirstName3505 MiddleName3505",LastName3505 +3506,3506,"FirstName3506 MiddleName3506",LastName3506 +3507,3507,"FirstName3507 MiddleName3507",LastName3507 +3508,3508,"FirstName3508 MiddleName3508",LastName3508 +3509,3509,"FirstName3509 MiddleName3509",LastName3509 +3510,3510,"FirstName3510 MiddleName3510",LastName3510 +3511,3511,"FirstName3511 MiddleName3511",LastName3511 +3512,3512,"FirstName3512 MiddleName3512",LastName3512 +3513,3513,"FirstName3513 MiddleName3513",LastName3513 +3514,3514,"FirstName3514 MiddleName3514",LastName3514 +3515,3515,"FirstName3515 MiddleName3515",LastName3515 +3516,3516,"FirstName3516 MiddleName3516",LastName3516 +3517,3517,"FirstName3517 MiddleName3517",LastName3517 +3518,3518,"FirstName3518 MiddleName3518",LastName3518 +3519,3519,"FirstName3519 MiddleName3519",LastName3519 +3520,3520,"FirstName3520 MiddleName3520",LastName3520 
+3521,3521,"FirstName3521 MiddleName3521",LastName3521 +3522,3522,"FirstName3522 MiddleName3522",LastName3522 +3523,3523,"FirstName3523 MiddleName3523",LastName3523 +3524,3524,"FirstName3524 MiddleName3524",LastName3524 +3525,3525,"FirstName3525 MiddleName3525",LastName3525 +3526,3526,"FirstName3526 MiddleName3526",LastName3526 +3527,3527,"FirstName3527 MiddleName3527",LastName3527 +3528,3528,"FirstName3528 MiddleName3528",LastName3528 +3529,3529,"FirstName3529 MiddleName3529",LastName3529 +3530,3530,"FirstName3530 MiddleName3530",LastName3530 +3531,3531,"FirstName3531 MiddleName3531",LastName3531 +3532,3532,"FirstName3532 MiddleName3532",LastName3532 +3533,3533,"FirstName3533 MiddleName3533",LastName3533 +3534,3534,"FirstName3534 MiddleName3534",LastName3534 +3535,3535,"FirstName3535 MiddleName3535",LastName3535 +3536,3536,"FirstName3536 MiddleName3536",LastName3536 +3537,3537,"FirstName3537 MiddleName3537",LastName3537 +3538,3538,"FirstName3538 MiddleName3538",LastName3538 +3539,3539,"FirstName3539 MiddleName3539",LastName3539 +3540,3540,"FirstName3540 MiddleName3540",LastName3540 +3541,3541,"FirstName3541 MiddleName3541",LastName3541 +3542,3542,"FirstName3542 MiddleName3542",LastName3542 +3543,3543,"FirstName3543 MiddleName3543",LastName3543 +3544,3544,"FirstName3544 MiddleName3544",LastName3544 +3545,3545,"FirstName3545 MiddleName3545",LastName3545 +3546,3546,"FirstName3546 MiddleName3546",LastName3546 +3547,3547,"FirstName3547 MiddleName3547",LastName3547 +3548,3548,"FirstName3548 MiddleName3548",LastName3548 +3549,3549,"FirstName3549 MiddleName3549",LastName3549 +3550,3550,"FirstName3550 MiddleName3550",LastName3550 +3551,3551,"FirstName3551 MiddleName3551",LastName3551 +3552,3552,"FirstName3552 MiddleName3552",LastName3552 +3553,3553,"FirstName3553 MiddleName3553",LastName3553 +3554,3554,"FirstName3554 MiddleName3554",LastName3554 +3555,3555,"FirstName3555 MiddleName3555",LastName3555 +3556,3556,"FirstName3556 MiddleName3556",LastName3556 
+3557,3557,"FirstName3557 MiddleName3557",LastName3557 +3558,3558,"FirstName3558 MiddleName3558",LastName3558 +3559,3559,"FirstName3559 MiddleName3559",LastName3559 +3560,3560,"FirstName3560 MiddleName3560",LastName3560 +3561,3561,"FirstName3561 MiddleName3561",LastName3561 +3562,3562,"FirstName3562 MiddleName3562",LastName3562 +3563,3563,"FirstName3563 MiddleName3563",LastName3563 +3564,3564,"FirstName3564 MiddleName3564",LastName3564 +3565,3565,"FirstName3565 MiddleName3565",LastName3565 +3566,3566,"FirstName3566 MiddleName3566",LastName3566 +3567,3567,"FirstName3567 MiddleName3567",LastName3567 +3568,3568,"FirstName3568 MiddleName3568",LastName3568 +3569,3569,"FirstName3569 MiddleName3569",LastName3569 +3570,3570,"FirstName3570 MiddleName3570",LastName3570 +3571,3571,"FirstName3571 MiddleName3571",LastName3571 +3572,3572,"FirstName3572 MiddleName3572",LastName3572 +3573,3573,"FirstName3573 MiddleName3573",LastName3573 +3574,3574,"FirstName3574 MiddleName3574",LastName3574 +3575,3575,"FirstName3575 MiddleName3575",LastName3575 +3576,3576,"FirstName3576 MiddleName3576",LastName3576 +3577,3577,"FirstName3577 MiddleName3577",LastName3577 +3578,3578,"FirstName3578 MiddleName3578",LastName3578 +3579,3579,"FirstName3579 MiddleName3579",LastName3579 +3580,3580,"FirstName3580 MiddleName3580",LastName3580 +3581,3581,"FirstName3581 MiddleName3581",LastName3581 +3582,3582,"FirstName3582 MiddleName3582",LastName3582 +3583,3583,"FirstName3583 MiddleName3583",LastName3583 +3584,3584,"FirstName3584 MiddleName3584",LastName3584 +3585,3585,"FirstName3585 MiddleName3585",LastName3585 +3586,3586,"FirstName3586 MiddleName3586",LastName3586 +3587,3587,"FirstName3587 MiddleName3587",LastName3587 +3588,3588,"FirstName3588 MiddleName3588",LastName3588 +3589,3589,"FirstName3589 MiddleName3589",LastName3589 +3590,3590,"FirstName3590 MiddleName3590",LastName3590 +3591,3591,"FirstName3591 MiddleName3591",LastName3591 +3592,3592,"FirstName3592 MiddleName3592",LastName3592 
+3593,3593,"FirstName3593 MiddleName3593",LastName3593 +3594,3594,"FirstName3594 MiddleName3594",LastName3594 +3595,3595,"FirstName3595 MiddleName3595",LastName3595 +3596,3596,"FirstName3596 MiddleName3596",LastName3596 +3597,3597,"FirstName3597 MiddleName3597",LastName3597 +3598,3598,"FirstName3598 MiddleName3598",LastName3598 +3599,3599,"FirstName3599 MiddleName3599",LastName3599 +3600,3600,"FirstName3600 MiddleName3600",LastName3600 +3601,3601,"FirstName3601 MiddleName3601",LastName3601 +3602,3602,"FirstName3602 MiddleName3602",LastName3602 +3603,3603,"FirstName3603 MiddleName3603",LastName3603 +3604,3604,"FirstName3604 MiddleName3604",LastName3604 +3605,3605,"FirstName3605 MiddleName3605",LastName3605 +3606,3606,"FirstName3606 MiddleName3606",LastName3606 +3607,3607,"FirstName3607 MiddleName3607",LastName3607 +3608,3608,"FirstName3608 MiddleName3608",LastName3608 +3609,3609,"FirstName3609 MiddleName3609",LastName3609 +3610,3610,"FirstName3610 MiddleName3610",LastName3610 +3611,3611,"FirstName3611 MiddleName3611",LastName3611 +3612,3612,"FirstName3612 MiddleName3612",LastName3612 +3613,3613,"FirstName3613 MiddleName3613",LastName3613 +3614,3614,"FirstName3614 MiddleName3614",LastName3614 +3615,3615,"FirstName3615 MiddleName3615",LastName3615 +3616,3616,"FirstName3616 MiddleName3616",LastName3616 +3617,3617,"FirstName3617 MiddleName3617",LastName3617 +3618,3618,"FirstName3618 MiddleName3618",LastName3618 +3619,3619,"FirstName3619 MiddleName3619",LastName3619 +3620,3620,"FirstName3620 MiddleName3620",LastName3620 +3621,3621,"FirstName3621 MiddleName3621",LastName3621 +3622,3622,"FirstName3622 MiddleName3622",LastName3622 +3623,3623,"FirstName3623 MiddleName3623",LastName3623 +3624,3624,"FirstName3624 MiddleName3624",LastName3624 +3625,3625,"FirstName3625 MiddleName3625",LastName3625 +3626,3626,"FirstName3626 MiddleName3626",LastName3626 +3627,3627,"FirstName3627 MiddleName3627",LastName3627 +3628,3628,"FirstName3628 MiddleName3628",LastName3628 
+3629,3629,"FirstName3629 MiddleName3629",LastName3629 +3630,3630,"FirstName3630 MiddleName3630",LastName3630 +3631,3631,"FirstName3631 MiddleName3631",LastName3631 +3632,3632,"FirstName3632 MiddleName3632",LastName3632 +3633,3633,"FirstName3633 MiddleName3633",LastName3633 +3634,3634,"FirstName3634 MiddleName3634",LastName3634 +3635,3635,"FirstName3635 MiddleName3635",LastName3635 +3636,3636,"FirstName3636 MiddleName3636",LastName3636 +3637,3637,"FirstName3637 MiddleName3637",LastName3637 +3638,3638,"FirstName3638 MiddleName3638",LastName3638 +3639,3639,"FirstName3639 MiddleName3639",LastName3639 +3640,3640,"FirstName3640 MiddleName3640",LastName3640 +3641,3641,"FirstName3641 MiddleName3641",LastName3641 +3642,3642,"FirstName3642 MiddleName3642",LastName3642 +3643,3643,"FirstName3643 MiddleName3643",LastName3643 +3644,3644,"FirstName3644 MiddleName3644",LastName3644 +3645,3645,"FirstName3645 MiddleName3645",LastName3645 +3646,3646,"FirstName3646 MiddleName3646",LastName3646 +3647,3647,"FirstName3647 MiddleName3647",LastName3647 +3648,3648,"FirstName3648 MiddleName3648",LastName3648 +3649,3649,"FirstName3649 MiddleName3649",LastName3649 +3650,3650,"FirstName3650 MiddleName3650",LastName3650 +3651,3651,"FirstName3651 MiddleName3651",LastName3651 +3652,3652,"FirstName3652 MiddleName3652",LastName3652 +3653,3653,"FirstName3653 MiddleName3653",LastName3653 +3654,3654,"FirstName3654 MiddleName3654",LastName3654 +3655,3655,"FirstName3655 MiddleName3655",LastName3655 +3656,3656,"FirstName3656 MiddleName3656",LastName3656 +3657,3657,"FirstName3657 MiddleName3657",LastName3657 +3658,3658,"FirstName3658 MiddleName3658",LastName3658 +3659,3659,"FirstName3659 MiddleName3659",LastName3659 +3660,3660,"FirstName3660 MiddleName3660",LastName3660 +3661,3661,"FirstName3661 MiddleName3661",LastName3661 +3662,3662,"FirstName3662 MiddleName3662",LastName3662 +3663,3663,"FirstName3663 MiddleName3663",LastName3663 +3664,3664,"FirstName3664 MiddleName3664",LastName3664 
+3665,3665,"FirstName3665 MiddleName3665",LastName3665 +3666,3666,"FirstName3666 MiddleName3666",LastName3666 +3667,3667,"FirstName3667 MiddleName3667",LastName3667 +3668,3668,"FirstName3668 MiddleName3668",LastName3668 +3669,3669,"FirstName3669 MiddleName3669",LastName3669 +3670,3670,"FirstName3670 MiddleName3670",LastName3670 +3671,3671,"FirstName3671 MiddleName3671",LastName3671 +3672,3672,"FirstName3672 MiddleName3672",LastName3672 +3673,3673,"FirstName3673 MiddleName3673",LastName3673 +3674,3674,"FirstName3674 MiddleName3674",LastName3674 +3675,3675,"FirstName3675 MiddleName3675",LastName3675 +3676,3676,"FirstName3676 MiddleName3676",LastName3676 +3677,3677,"FirstName3677 MiddleName3677",LastName3677 +3678,3678,"FirstName3678 MiddleName3678",LastName3678 +3679,3679,"FirstName3679 MiddleName3679",LastName3679 +3680,3680,"FirstName3680 MiddleName3680",LastName3680 +3681,3681,"FirstName3681 MiddleName3681",LastName3681 +3682,3682,"FirstName3682 MiddleName3682",LastName3682 +3683,3683,"FirstName3683 MiddleName3683",LastName3683 +3684,3684,"FirstName3684 MiddleName3684",LastName3684 +3685,3685,"FirstName3685 MiddleName3685",LastName3685 +3686,3686,"FirstName3686 MiddleName3686",LastName3686 +3687,3687,"FirstName3687 MiddleName3687",LastName3687 +3688,3688,"FirstName3688 MiddleName3688",LastName3688 +3689,3689,"FirstName3689 MiddleName3689",LastName3689 +3690,3690,"FirstName3690 MiddleName3690",LastName3690 +3691,3691,"FirstName3691 MiddleName3691",LastName3691 +3692,3692,"FirstName3692 MiddleName3692",LastName3692 +3693,3693,"FirstName3693 MiddleName3693",LastName3693 +3694,3694,"FirstName3694 MiddleName3694",LastName3694 +3695,3695,"FirstName3695 MiddleName3695",LastName3695 +3696,3696,"FirstName3696 MiddleName3696",LastName3696 +3697,3697,"FirstName3697 MiddleName3697",LastName3697 +3698,3698,"FirstName3698 MiddleName3698",LastName3698 +3699,3699,"FirstName3699 MiddleName3699",LastName3699 +3700,3700,"FirstName3700 MiddleName3700",LastName3700 
+3701,3701,"FirstName3701 MiddleName3701",LastName3701 +3702,3702,"FirstName3702 MiddleName3702",LastName3702 +3703,3703,"FirstName3703 MiddleName3703",LastName3703 +3704,3704,"FirstName3704 MiddleName3704",LastName3704 +3705,3705,"FirstName3705 MiddleName3705",LastName3705 +3706,3706,"FirstName3706 MiddleName3706",LastName3706 +3707,3707,"FirstName3707 MiddleName3707",LastName3707 +3708,3708,"FirstName3708 MiddleName3708",LastName3708 +3709,3709,"FirstName3709 MiddleName3709",LastName3709 +3710,3710,"FirstName3710 MiddleName3710",LastName3710 +3711,3711,"FirstName3711 MiddleName3711",LastName3711 +3712,3712,"FirstName3712 MiddleName3712",LastName3712 +3713,3713,"FirstName3713 MiddleName3713",LastName3713 +3714,3714,"FirstName3714 MiddleName3714",LastName3714 +3715,3715,"FirstName3715 MiddleName3715",LastName3715 +3716,3716,"FirstName3716 MiddleName3716",LastName3716 +3717,3717,"FirstName3717 MiddleName3717",LastName3717 +3718,3718,"FirstName3718 MiddleName3718",LastName3718 +3719,3719,"FirstName3719 MiddleName3719",LastName3719 +3720,3720,"FirstName3720 MiddleName3720",LastName3720 +3721,3721,"FirstName3721 MiddleName3721",LastName3721 +3722,3722,"FirstName3722 MiddleName3722",LastName3722 +3723,3723,"FirstName3723 MiddleName3723",LastName3723 +3724,3724,"FirstName3724 MiddleName3724",LastName3724 +3725,3725,"FirstName3725 MiddleName3725",LastName3725 +3726,3726,"FirstName3726 MiddleName3726",LastName3726 +3727,3727,"FirstName3727 MiddleName3727",LastName3727 +3728,3728,"FirstName3728 MiddleName3728",LastName3728 +3729,3729,"FirstName3729 MiddleName3729",LastName3729 +3730,3730,"FirstName3730 MiddleName3730",LastName3730 +3731,3731,"FirstName3731 MiddleName3731",LastName3731 +3732,3732,"FirstName3732 MiddleName3732",LastName3732 +3733,3733,"FirstName3733 MiddleName3733",LastName3733 +3734,3734,"FirstName3734 MiddleName3734",LastName3734 +3735,3735,"FirstName3735 MiddleName3735",LastName3735 +3736,3736,"FirstName3736 MiddleName3736",LastName3736 
+3737,3737,"FirstName3737 MiddleName3737",LastName3737 +3738,3738,"FirstName3738 MiddleName3738",LastName3738 +3739,3739,"FirstName3739 MiddleName3739",LastName3739 +3740,3740,"FirstName3740 MiddleName3740",LastName3740 +3741,3741,"FirstName3741 MiddleName3741",LastName3741 +3742,3742,"FirstName3742 MiddleName3742",LastName3742 +3743,3743,"FirstName3743 MiddleName3743",LastName3743 +3744,3744,"FirstName3744 MiddleName3744",LastName3744 +3745,3745,"FirstName3745 MiddleName3745",LastName3745 +3746,3746,"FirstName3746 MiddleName3746",LastName3746 +3747,3747,"FirstName3747 MiddleName3747",LastName3747 +3748,3748,"FirstName3748 MiddleName3748",LastName3748 +3749,3749,"FirstName3749 MiddleName3749",LastName3749 +3750,3750,"FirstName3750 MiddleName3750",LastName3750 +3751,3751,"FirstName3751 MiddleName3751",LastName3751 +3752,3752,"FirstName3752 MiddleName3752",LastName3752 +3753,3753,"FirstName3753 MiddleName3753",LastName3753 +3754,3754,"FirstName3754 MiddleName3754",LastName3754 +3755,3755,"FirstName3755 MiddleName3755",LastName3755 +3756,3756,"FirstName3756 MiddleName3756",LastName3756 +3757,3757,"FirstName3757 MiddleName3757",LastName3757 +3758,3758,"FirstName3758 MiddleName3758",LastName3758 +3759,3759,"FirstName3759 MiddleName3759",LastName3759 +3760,3760,"FirstName3760 MiddleName3760",LastName3760 +3761,3761,"FirstName3761 MiddleName3761",LastName3761 +3762,3762,"FirstName3762 MiddleName3762",LastName3762 +3763,3763,"FirstName3763 MiddleName3763",LastName3763 +3764,3764,"FirstName3764 MiddleName3764",LastName3764 +3765,3765,"FirstName3765 MiddleName3765",LastName3765 +3766,3766,"FirstName3766 MiddleName3766",LastName3766 +3767,3767,"FirstName3767 MiddleName3767",LastName3767 +3768,3768,"FirstName3768 MiddleName3768",LastName3768 +3769,3769,"FirstName3769 MiddleName3769",LastName3769 +3770,3770,"FirstName3770 MiddleName3770",LastName3770 +3771,3771,"FirstName3771 MiddleName3771",LastName3771 +3772,3772,"FirstName3772 MiddleName3772",LastName3772 
+3773,3773,"FirstName3773 MiddleName3773",LastName3773 +3774,3774,"FirstName3774 MiddleName3774",LastName3774 +3775,3775,"FirstName3775 MiddleName3775",LastName3775 +3776,3776,"FirstName3776 MiddleName3776",LastName3776 +3777,3777,"FirstName3777 MiddleName3777",LastName3777 +3778,3778,"FirstName3778 MiddleName3778",LastName3778 +3779,3779,"FirstName3779 MiddleName3779",LastName3779 +3780,3780,"FirstName3780 MiddleName3780",LastName3780 +3781,3781,"FirstName3781 MiddleName3781",LastName3781 +3782,3782,"FirstName3782 MiddleName3782",LastName3782 +3783,3783,"FirstName3783 MiddleName3783",LastName3783 +3784,3784,"FirstName3784 MiddleName3784",LastName3784 +3785,3785,"FirstName3785 MiddleName3785",LastName3785 +3786,3786,"FirstName3786 MiddleName3786",LastName3786 +3787,3787,"FirstName3787 MiddleName3787",LastName3787 +3788,3788,"FirstName3788 MiddleName3788",LastName3788 +3789,3789,"FirstName3789 MiddleName3789",LastName3789 +3790,3790,"FirstName3790 MiddleName3790",LastName3790 +3791,3791,"FirstName3791 MiddleName3791",LastName3791 +3792,3792,"FirstName3792 MiddleName3792",LastName3792 +3793,3793,"FirstName3793 MiddleName3793",LastName3793 +3794,3794,"FirstName3794 MiddleName3794",LastName3794 +3795,3795,"FirstName3795 MiddleName3795",LastName3795 +3796,3796,"FirstName3796 MiddleName3796",LastName3796 +3797,3797,"FirstName3797 MiddleName3797",LastName3797 +3798,3798,"FirstName3798 MiddleName3798",LastName3798 +3799,3799,"FirstName3799 MiddleName3799",LastName3799 +3800,3800,"FirstName3800 MiddleName3800",LastName3800 +3801,3801,"FirstName3801 MiddleName3801",LastName3801 +3802,3802,"FirstName3802 MiddleName3802",LastName3802 +3803,3803,"FirstName3803 MiddleName3803",LastName3803 +3804,3804,"FirstName3804 MiddleName3804",LastName3804 +3805,3805,"FirstName3805 MiddleName3805",LastName3805 +3806,3806,"FirstName3806 MiddleName3806",LastName3806 +3807,3807,"FirstName3807 MiddleName3807",LastName3807 +3808,3808,"FirstName3808 MiddleName3808",LastName3808 
+3809,3809,"FirstName3809 MiddleName3809",LastName3809 +3810,3810,"FirstName3810 MiddleName3810",LastName3810 +3811,3811,"FirstName3811 MiddleName3811",LastName3811 +3812,3812,"FirstName3812 MiddleName3812",LastName3812 +3813,3813,"FirstName3813 MiddleName3813",LastName3813 +3814,3814,"FirstName3814 MiddleName3814",LastName3814 +3815,3815,"FirstName3815 MiddleName3815",LastName3815 +3816,3816,"FirstName3816 MiddleName3816",LastName3816 +3817,3817,"FirstName3817 MiddleName3817",LastName3817 +3818,3818,"FirstName3818 MiddleName3818",LastName3818 +3819,3819,"FirstName3819 MiddleName3819",LastName3819 +3820,3820,"FirstName3820 MiddleName3820",LastName3820 +3821,3821,"FirstName3821 MiddleName3821",LastName3821 +3822,3822,"FirstName3822 MiddleName3822",LastName3822 +3823,3823,"FirstName3823 MiddleName3823",LastName3823 +3824,3824,"FirstName3824 MiddleName3824",LastName3824 +3825,3825,"FirstName3825 MiddleName3825",LastName3825 +3826,3826,"FirstName3826 MiddleName3826",LastName3826 +3827,3827,"FirstName3827 MiddleName3827",LastName3827 +3828,3828,"FirstName3828 MiddleName3828",LastName3828 +3829,3829,"FirstName3829 MiddleName3829",LastName3829 +3830,3830,"FirstName3830 MiddleName3830",LastName3830 +3831,3831,"FirstName3831 MiddleName3831",LastName3831 +3832,3832,"FirstName3832 MiddleName3832",LastName3832 +3833,3833,"FirstName3833 MiddleName3833",LastName3833 +3834,3834,"FirstName3834 MiddleName3834",LastName3834 +3835,3835,"FirstName3835 MiddleName3835",LastName3835 +3836,3836,"FirstName3836 MiddleName3836",LastName3836 +3837,3837,"FirstName3837 MiddleName3837",LastName3837 +3838,3838,"FirstName3838 MiddleName3838",LastName3838 +3839,3839,"FirstName3839 MiddleName3839",LastName3839 +3840,3840,"FirstName3840 MiddleName3840",LastName3840 +3841,3841,"FirstName3841 MiddleName3841",LastName3841 +3842,3842,"FirstName3842 MiddleName3842",LastName3842 +3843,3843,"FirstName3843 MiddleName3843",LastName3843 +3844,3844,"FirstName3844 MiddleName3844",LastName3844 
+3845,3845,"FirstName3845 MiddleName3845",LastName3845 +3846,3846,"FirstName3846 MiddleName3846",LastName3846 +3847,3847,"FirstName3847 MiddleName3847",LastName3847 +3848,3848,"FirstName3848 MiddleName3848",LastName3848 +3849,3849,"FirstName3849 MiddleName3849",LastName3849 +3850,3850,"FirstName3850 MiddleName3850",LastName3850 +3851,3851,"FirstName3851 MiddleName3851",LastName3851 +3852,3852,"FirstName3852 MiddleName3852",LastName3852 +3853,3853,"FirstName3853 MiddleName3853",LastName3853 +3854,3854,"FirstName3854 MiddleName3854",LastName3854 +3855,3855,"FirstName3855 MiddleName3855",LastName3855 +3856,3856,"FirstName3856 MiddleName3856",LastName3856 +3857,3857,"FirstName3857 MiddleName3857",LastName3857 +3858,3858,"FirstName3858 MiddleName3858",LastName3858 +3859,3859,"FirstName3859 MiddleName3859",LastName3859 +3860,3860,"FirstName3860 MiddleName3860",LastName3860 +3861,3861,"FirstName3861 MiddleName3861",LastName3861 +3862,3862,"FirstName3862 MiddleName3862",LastName3862 +3863,3863,"FirstName3863 MiddleName3863",LastName3863 +3864,3864,"FirstName3864 MiddleName3864",LastName3864 +3865,3865,"FirstName3865 MiddleName3865",LastName3865 +3866,3866,"FirstName3866 MiddleName3866",LastName3866 +3867,3867,"FirstName3867 MiddleName3867",LastName3867 +3868,3868,"FirstName3868 MiddleName3868",LastName3868 +3869,3869,"FirstName3869 MiddleName3869",LastName3869 +3870,3870,"FirstName3870 MiddleName3870",LastName3870 +3871,3871,"FirstName3871 MiddleName3871",LastName3871 +3872,3872,"FirstName3872 MiddleName3872",LastName3872 +3873,3873,"FirstName3873 MiddleName3873",LastName3873 +3874,3874,"FirstName3874 MiddleName3874",LastName3874 +3875,3875,"FirstName3875 MiddleName3875",LastName3875 +3876,3876,"FirstName3876 MiddleName3876",LastName3876 +3877,3877,"FirstName3877 MiddleName3877",LastName3877 +3878,3878,"FirstName3878 MiddleName3878",LastName3878 +3879,3879,"FirstName3879 MiddleName3879",LastName3879 +3880,3880,"FirstName3880 MiddleName3880",LastName3880 
+3881,3881,"FirstName3881 MiddleName3881",LastName3881 +3882,3882,"FirstName3882 MiddleName3882",LastName3882 +3883,3883,"FirstName3883 MiddleName3883",LastName3883 +3884,3884,"FirstName3884 MiddleName3884",LastName3884 +3885,3885,"FirstName3885 MiddleName3885",LastName3885 +3886,3886,"FirstName3886 MiddleName3886",LastName3886 +3887,3887,"FirstName3887 MiddleName3887",LastName3887 +3888,3888,"FirstName3888 MiddleName3888",LastName3888 +3889,3889,"FirstName3889 MiddleName3889",LastName3889 +3890,3890,"FirstName3890 MiddleName3890",LastName3890 +3891,3891,"FirstName3891 MiddleName3891",LastName3891 +3892,3892,"FirstName3892 MiddleName3892",LastName3892 +3893,3893,"FirstName3893 MiddleName3893",LastName3893 +3894,3894,"FirstName3894 MiddleName3894",LastName3894 +3895,3895,"FirstName3895 MiddleName3895",LastName3895 +3896,3896,"FirstName3896 MiddleName3896",LastName3896 +3897,3897,"FirstName3897 MiddleName3897",LastName3897 +3898,3898,"FirstName3898 MiddleName3898",LastName3898 +3899,3899,"FirstName3899 MiddleName3899",LastName3899 +3900,3900,"FirstName3900 MiddleName3900",LastName3900 +3901,3901,"FirstName3901 MiddleName3901",LastName3901 +3902,3902,"FirstName3902 MiddleName3902",LastName3902 +3903,3903,"FirstName3903 MiddleName3903",LastName3903 +3904,3904,"FirstName3904 MiddleName3904",LastName3904 +3905,3905,"FirstName3905 MiddleName3905",LastName3905 +3906,3906,"FirstName3906 MiddleName3906",LastName3906 +3907,3907,"FirstName3907 MiddleName3907",LastName3907 +3908,3908,"FirstName3908 MiddleName3908",LastName3908 +3909,3909,"FirstName3909 MiddleName3909",LastName3909 +3910,3910,"FirstName3910 MiddleName3910",LastName3910 +3911,3911,"FirstName3911 MiddleName3911",LastName3911 +3912,3912,"FirstName3912 MiddleName3912",LastName3912 +3913,3913,"FirstName3913 MiddleName3913",LastName3913 +3914,3914,"FirstName3914 MiddleName3914",LastName3914 +3915,3915,"FirstName3915 MiddleName3915",LastName3915 +3916,3916,"FirstName3916 MiddleName3916",LastName3916 
+3917,3917,"FirstName3917 MiddleName3917",LastName3917 +3918,3918,"FirstName3918 MiddleName3918",LastName3918 +3919,3919,"FirstName3919 MiddleName3919",LastName3919 +3920,3920,"FirstName3920 MiddleName3920",LastName3920 +3921,3921,"FirstName3921 MiddleName3921",LastName3921 +3922,3922,"FirstName3922 MiddleName3922",LastName3922 +3923,3923,"FirstName3923 MiddleName3923",LastName3923 +3924,3924,"FirstName3924 MiddleName3924",LastName3924 +3925,3925,"FirstName3925 MiddleName3925",LastName3925 +3926,3926,"FirstName3926 MiddleName3926",LastName3926 +3927,3927,"FirstName3927 MiddleName3927",LastName3927 +3928,3928,"FirstName3928 MiddleName3928",LastName3928 +3929,3929,"FirstName3929 MiddleName3929",LastName3929 +3930,3930,"FirstName3930 MiddleName3930",LastName3930 +3931,3931,"FirstName3931 MiddleName3931",LastName3931 +3932,3932,"FirstName3932 MiddleName3932",LastName3932 +3933,3933,"FirstName3933 MiddleName3933",LastName3933 +3934,3934,"FirstName3934 MiddleName3934",LastName3934 +3935,3935,"FirstName3935 MiddleName3935",LastName3935 +3936,3936,"FirstName3936 MiddleName3936",LastName3936 +3937,3937,"FirstName3937 MiddleName3937",LastName3937 +3938,3938,"FirstName3938 MiddleName3938",LastName3938 +3939,3939,"FirstName3939 MiddleName3939",LastName3939 +3940,3940,"FirstName3940 MiddleName3940",LastName3940 +3941,3941,"FirstName3941 MiddleName3941",LastName3941 +3942,3942,"FirstName3942 MiddleName3942",LastName3942 +3943,3943,"FirstName3943 MiddleName3943",LastName3943 +3944,3944,"FirstName3944 MiddleName3944",LastName3944 +3945,3945,"FirstName3945 MiddleName3945",LastName3945 +3946,3946,"FirstName3946 MiddleName3946",LastName3946 +3947,3947,"FirstName3947 MiddleName3947",LastName3947 +3948,3948,"FirstName3948 MiddleName3948",LastName3948 +3949,3949,"FirstName3949 MiddleName3949",LastName3949 +3950,3950,"FirstName3950 MiddleName3950",LastName3950 +3951,3951,"FirstName3951 MiddleName3951",LastName3951 +3952,3952,"FirstName3952 MiddleName3952",LastName3952 
+3953,3953,"FirstName3953 MiddleName3953",LastName3953 +3954,3954,"FirstName3954 MiddleName3954",LastName3954 +3955,3955,"FirstName3955 MiddleName3955",LastName3955 +3956,3956,"FirstName3956 MiddleName3956",LastName3956 +3957,3957,"FirstName3957 MiddleName3957",LastName3957 +3958,3958,"FirstName3958 MiddleName3958",LastName3958 +3959,3959,"FirstName3959 MiddleName3959",LastName3959 +3960,3960,"FirstName3960 MiddleName3960",LastName3960 +3961,3961,"FirstName3961 MiddleName3961",LastName3961 +3962,3962,"FirstName3962 MiddleName3962",LastName3962 +3963,3963,"FirstName3963 MiddleName3963",LastName3963 +3964,3964,"FirstName3964 MiddleName3964",LastName3964 +3965,3965,"FirstName3965 MiddleName3965",LastName3965 +3966,3966,"FirstName3966 MiddleName3966",LastName3966 +3967,3967,"FirstName3967 MiddleName3967",LastName3967 +3968,3968,"FirstName3968 MiddleName3968",LastName3968 +3969,3969,"FirstName3969 MiddleName3969",LastName3969 +3970,3970,"FirstName3970 MiddleName3970",LastName3970 +3971,3971,"FirstName3971 MiddleName3971",LastName3971 +3972,3972,"FirstName3972 MiddleName3972",LastName3972 +3973,3973,"FirstName3973 MiddleName3973",LastName3973 +3974,3974,"FirstName3974 MiddleName3974",LastName3974 +3975,3975,"FirstName3975 MiddleName3975",LastName3975 +3976,3976,"FirstName3976 MiddleName3976",LastName3976 +3977,3977,"FirstName3977 MiddleName3977",LastName3977 +3978,3978,"FirstName3978 MiddleName3978",LastName3978 +3979,3979,"FirstName3979 MiddleName3979",LastName3979 +3980,3980,"FirstName3980 MiddleName3980",LastName3980 +3981,3981,"FirstName3981 MiddleName3981",LastName3981 +3982,3982,"FirstName3982 MiddleName3982",LastName3982 +3983,3983,"FirstName3983 MiddleName3983",LastName3983 +3984,3984,"FirstName3984 MiddleName3984",LastName3984 +3985,3985,"FirstName3985 MiddleName3985",LastName3985 +3986,3986,"FirstName3986 MiddleName3986",LastName3986 +3987,3987,"FirstName3987 MiddleName3987",LastName3987 +3988,3988,"FirstName3988 MiddleName3988",LastName3988 
+3989,3989,"FirstName3989 MiddleName3989",LastName3989 +3990,3990,"FirstName3990 MiddleName3990",LastName3990 +3991,3991,"FirstName3991 MiddleName3991",LastName3991 +3992,3992,"FirstName3992 MiddleName3992",LastName3992 +3993,3993,"FirstName3993 MiddleName3993",LastName3993 +3994,3994,"FirstName3994 MiddleName3994",LastName3994 +3995,3995,"FirstName3995 MiddleName3995",LastName3995 +3996,3996,"FirstName3996 MiddleName3996",LastName3996 +3997,3997,"FirstName3997 MiddleName3997",LastName3997 +3998,3998,"FirstName3998 MiddleName3998",LastName3998 +3999,3999,"FirstName3999 MiddleName3999",LastName3999 +4000,4000,"FirstName4000 MiddleName4000",LastName4000 +4001,4001,"FirstName4001 MiddleName4001",LastName4001 +4002,4002,"FirstName4002 MiddleName4002",LastName4002 +4003,4003,"FirstName4003 MiddleName4003",LastName4003 +4004,4004,"FirstName4004 MiddleName4004",LastName4004 +4005,4005,"FirstName4005 MiddleName4005",LastName4005 +4006,4006,"FirstName4006 MiddleName4006",LastName4006 +4007,4007,"FirstName4007 MiddleName4007",LastName4007 +4008,4008,"FirstName4008 MiddleName4008",LastName4008 +4009,4009,"FirstName4009 MiddleName4009",LastName4009 +4010,4010,"FirstName4010 MiddleName4010",LastName4010 +4011,4011,"FirstName4011 MiddleName4011",LastName4011 +4012,4012,"FirstName4012 MiddleName4012",LastName4012 +4013,4013,"FirstName4013 MiddleName4013",LastName4013 +4014,4014,"FirstName4014 MiddleName4014",LastName4014 +4015,4015,"FirstName4015 MiddleName4015",LastName4015 +4016,4016,"FirstName4016 MiddleName4016",LastName4016 +4017,4017,"FirstName4017 MiddleName4017",LastName4017 +4018,4018,"FirstName4018 MiddleName4018",LastName4018 +4019,4019,"FirstName4019 MiddleName4019",LastName4019 +4020,4020,"FirstName4020 MiddleName4020",LastName4020 +4021,4021,"FirstName4021 MiddleName4021",LastName4021 +4022,4022,"FirstName4022 MiddleName4022",LastName4022 +4023,4023,"FirstName4023 MiddleName4023",LastName4023 +4024,4024,"FirstName4024 MiddleName4024",LastName4024 
+4025,4025,"FirstName4025 MiddleName4025",LastName4025 +4026,4026,"FirstName4026 MiddleName4026",LastName4026 +4027,4027,"FirstName4027 MiddleName4027",LastName4027 +4028,4028,"FirstName4028 MiddleName4028",LastName4028 +4029,4029,"FirstName4029 MiddleName4029",LastName4029 +4030,4030,"FirstName4030 MiddleName4030",LastName4030 +4031,4031,"FirstName4031 MiddleName4031",LastName4031 +4032,4032,"FirstName4032 MiddleName4032",LastName4032 +4033,4033,"FirstName4033 MiddleName4033",LastName4033 +4034,4034,"FirstName4034 MiddleName4034",LastName4034 +4035,4035,"FirstName4035 MiddleName4035",LastName4035 +4036,4036,"FirstName4036 MiddleName4036",LastName4036 +4037,4037,"FirstName4037 MiddleName4037",LastName4037 +4038,4038,"FirstName4038 MiddleName4038",LastName4038 +4039,4039,"FirstName4039 MiddleName4039",LastName4039 +4040,4040,"FirstName4040 MiddleName4040",LastName4040 +4041,4041,"FirstName4041 MiddleName4041",LastName4041 +4042,4042,"FirstName4042 MiddleName4042",LastName4042 +4043,4043,"FirstName4043 MiddleName4043",LastName4043 +4044,4044,"FirstName4044 MiddleName4044",LastName4044 +4045,4045,"FirstName4045 MiddleName4045",LastName4045 +4046,4046,"FirstName4046 MiddleName4046",LastName4046 +4047,4047,"FirstName4047 MiddleName4047",LastName4047 +4048,4048,"FirstName4048 MiddleName4048",LastName4048 +4049,4049,"FirstName4049 MiddleName4049",LastName4049 +4050,4050,"FirstName4050 MiddleName4050",LastName4050 +4051,4051,"FirstName4051 MiddleName4051",LastName4051 +4052,4052,"FirstName4052 MiddleName4052",LastName4052 +4053,4053,"FirstName4053 MiddleName4053",LastName4053 +4054,4054,"FirstName4054 MiddleName4054",LastName4054 +4055,4055,"FirstName4055 MiddleName4055",LastName4055 +4056,4056,"FirstName4056 MiddleName4056",LastName4056 +4057,4057,"FirstName4057 MiddleName4057",LastName4057 +4058,4058,"FirstName4058 MiddleName4058",LastName4058 +4059,4059,"FirstName4059 MiddleName4059",LastName4059 +4060,4060,"FirstName4060 MiddleName4060",LastName4060 
+4061,4061,"FirstName4061 MiddleName4061",LastName4061 +4062,4062,"FirstName4062 MiddleName4062",LastName4062 +4063,4063,"FirstName4063 MiddleName4063",LastName4063 +4064,4064,"FirstName4064 MiddleName4064",LastName4064 +4065,4065,"FirstName4065 MiddleName4065",LastName4065 +4066,4066,"FirstName4066 MiddleName4066",LastName4066 +4067,4067,"FirstName4067 MiddleName4067",LastName4067 +4068,4068,"FirstName4068 MiddleName4068",LastName4068 +4069,4069,"FirstName4069 MiddleName4069",LastName4069 +4070,4070,"FirstName4070 MiddleName4070",LastName4070 +4071,4071,"FirstName4071 MiddleName4071",LastName4071 +4072,4072,"FirstName4072 MiddleName4072",LastName4072 +4073,4073,"FirstName4073 MiddleName4073",LastName4073 +4074,4074,"FirstName4074 MiddleName4074",LastName4074 +4075,4075,"FirstName4075 MiddleName4075",LastName4075 +4076,4076,"FirstName4076 MiddleName4076",LastName4076 +4077,4077,"FirstName4077 MiddleName4077",LastName4077 +4078,4078,"FirstName4078 MiddleName4078",LastName4078 +4079,4079,"FirstName4079 MiddleName4079",LastName4079 +4080,4080,"FirstName4080 MiddleName4080",LastName4080 +4081,4081,"FirstName4081 MiddleName4081",LastName4081 +4082,4082,"FirstName4082 MiddleName4082",LastName4082 +4083,4083,"FirstName4083 MiddleName4083",LastName4083 +4084,4084,"FirstName4084 MiddleName4084",LastName4084 +4085,4085,"FirstName4085 MiddleName4085",LastName4085 +4086,4086,"FirstName4086 MiddleName4086",LastName4086 +4087,4087,"FirstName4087 MiddleName4087",LastName4087 +4088,4088,"FirstName4088 MiddleName4088",LastName4088 +4089,4089,"FirstName4089 MiddleName4089",LastName4089 +4090,4090,"FirstName4090 MiddleName4090",LastName4090 +4091,4091,"FirstName4091 MiddleName4091",LastName4091 +4092,4092,"FirstName4092 MiddleName4092",LastName4092 +4093,4093,"FirstName4093 MiddleName4093",LastName4093 +4094,4094,"FirstName4094 MiddleName4094",LastName4094 +4095,4095,"FirstName4095 MiddleName4095",LastName4095 +4096,4096,"FirstName4096 MiddleName4096",LastName4096 
+4097,4097,"FirstName4097 MiddleName4097",LastName4097 +4098,4098,"FirstName4098 MiddleName4098",LastName4098 +4099,4099,"FirstName4099 MiddleName4099",LastName4099 +4100,4100,"FirstName4100 MiddleName4100",LastName4100 +4101,4101,"FirstName4101 MiddleName4101",LastName4101 +4102,4102,"FirstName4102 MiddleName4102",LastName4102 +4103,4103,"FirstName4103 MiddleName4103",LastName4103 +4104,4104,"FirstName4104 MiddleName4104",LastName4104 +4105,4105,"FirstName4105 MiddleName4105",LastName4105 +4106,4106,"FirstName4106 MiddleName4106",LastName4106 +4107,4107,"FirstName4107 MiddleName4107",LastName4107 +4108,4108,"FirstName4108 MiddleName4108",LastName4108 +4109,4109,"FirstName4109 MiddleName4109",LastName4109 +4110,4110,"FirstName4110 MiddleName4110",LastName4110 +4111,4111,"FirstName4111 MiddleName4111",LastName4111 +4112,4112,"FirstName4112 MiddleName4112",LastName4112 +4113,4113,"FirstName4113 MiddleName4113",LastName4113 +4114,4114,"FirstName4114 MiddleName4114",LastName4114 +4115,4115,"FirstName4115 MiddleName4115",LastName4115 +4116,4116,"FirstName4116 MiddleName4116",LastName4116 +4117,4117,"FirstName4117 MiddleName4117",LastName4117 +4118,4118,"FirstName4118 MiddleName4118",LastName4118 +4119,4119,"FirstName4119 MiddleName4119",LastName4119 +4120,4120,"FirstName4120 MiddleName4120",LastName4120 +4121,4121,"FirstName4121 MiddleName4121",LastName4121 +4122,4122,"FirstName4122 MiddleName4122",LastName4122 +4123,4123,"FirstName4123 MiddleName4123",LastName4123 +4124,4124,"FirstName4124 MiddleName4124",LastName4124 +4125,4125,"FirstName4125 MiddleName4125",LastName4125 +4126,4126,"FirstName4126 MiddleName4126",LastName4126 +4127,4127,"FirstName4127 MiddleName4127",LastName4127 +4128,4128,"FirstName4128 MiddleName4128",LastName4128 +4129,4129,"FirstName4129 MiddleName4129",LastName4129 +4130,4130,"FirstName4130 MiddleName4130",LastName4130 +4131,4131,"FirstName4131 MiddleName4131",LastName4131 +4132,4132,"FirstName4132 MiddleName4132",LastName4132 
+4133,4133,"FirstName4133 MiddleName4133",LastName4133 +4134,4134,"FirstName4134 MiddleName4134",LastName4134 +4135,4135,"FirstName4135 MiddleName4135",LastName4135 +4136,4136,"FirstName4136 MiddleName4136",LastName4136 +4137,4137,"FirstName4137 MiddleName4137",LastName4137 +4138,4138,"FirstName4138 MiddleName4138",LastName4138 +4139,4139,"FirstName4139 MiddleName4139",LastName4139 +4140,4140,"FirstName4140 MiddleName4140",LastName4140 +4141,4141,"FirstName4141 MiddleName4141",LastName4141 +4142,4142,"FirstName4142 MiddleName4142",LastName4142 +4143,4143,"FirstName4143 MiddleName4143",LastName4143 +4144,4144,"FirstName4144 MiddleName4144",LastName4144 +4145,4145,"FirstName4145 MiddleName4145",LastName4145 +4146,4146,"FirstName4146 MiddleName4146",LastName4146 +4147,4147,"FirstName4147 MiddleName4147",LastName4147 +4148,4148,"FirstName4148 MiddleName4148",LastName4148 +4149,4149,"FirstName4149 MiddleName4149",LastName4149 +4150,4150,"FirstName4150 MiddleName4150",LastName4150 +4151,4151,"FirstName4151 MiddleName4151",LastName4151 +4152,4152,"FirstName4152 MiddleName4152",LastName4152 +4153,4153,"FirstName4153 MiddleName4153",LastName4153 +4154,4154,"FirstName4154 MiddleName4154",LastName4154 +4155,4155,"FirstName4155 MiddleName4155",LastName4155 +4156,4156,"FirstName4156 MiddleName4156",LastName4156 +4157,4157,"FirstName4157 MiddleName4157",LastName4157 +4158,4158,"FirstName4158 MiddleName4158",LastName4158 +4159,4159,"FirstName4159 MiddleName4159",LastName4159 +4160,4160,"FirstName4160 MiddleName4160",LastName4160 +4161,4161,"FirstName4161 MiddleName4161",LastName4161 +4162,4162,"FirstName4162 MiddleName4162",LastName4162 +4163,4163,"FirstName4163 MiddleName4163",LastName4163 +4164,4164,"FirstName4164 MiddleName4164",LastName4164 +4165,4165,"FirstName4165 MiddleName4165",LastName4165 +4166,4166,"FirstName4166 MiddleName4166",LastName4166 +4167,4167,"FirstName4167 MiddleName4167",LastName4167 +4168,4168,"FirstName4168 MiddleName4168",LastName4168 
+4169,4169,"FirstName4169 MiddleName4169",LastName4169 +4170,4170,"FirstName4170 MiddleName4170",LastName4170 +4171,4171,"FirstName4171 MiddleName4171",LastName4171 +4172,4172,"FirstName4172 MiddleName4172",LastName4172 +4173,4173,"FirstName4173 MiddleName4173",LastName4173 +4174,4174,"FirstName4174 MiddleName4174",LastName4174 +4175,4175,"FirstName4175 MiddleName4175",LastName4175 +4176,4176,"FirstName4176 MiddleName4176",LastName4176 +4177,4177,"FirstName4177 MiddleName4177",LastName4177 +4178,4178,"FirstName4178 MiddleName4178",LastName4178 +4179,4179,"FirstName4179 MiddleName4179",LastName4179 +4180,4180,"FirstName4180 MiddleName4180",LastName4180 +4181,4181,"FirstName4181 MiddleName4181",LastName4181 +4182,4182,"FirstName4182 MiddleName4182",LastName4182 +4183,4183,"FirstName4183 MiddleName4183",LastName4183 +4184,4184,"FirstName4184 MiddleName4184",LastName4184 +4185,4185,"FirstName4185 MiddleName4185",LastName4185 +4186,4186,"FirstName4186 MiddleName4186",LastName4186 +4187,4187,"FirstName4187 MiddleName4187",LastName4187 +4188,4188,"FirstName4188 MiddleName4188",LastName4188 +4189,4189,"FirstName4189 MiddleName4189",LastName4189 +4190,4190,"FirstName4190 MiddleName4190",LastName4190 +4191,4191,"FirstName4191 MiddleName4191",LastName4191 +4192,4192,"FirstName4192 MiddleName4192",LastName4192 +4193,4193,"FirstName4193 MiddleName4193",LastName4193 +4194,4194,"FirstName4194 MiddleName4194",LastName4194 +4195,4195,"FirstName4195 MiddleName4195",LastName4195 +4196,4196,"FirstName4196 MiddleName4196",LastName4196 +4197,4197,"FirstName4197 MiddleName4197",LastName4197 +4198,4198,"FirstName4198 MiddleName4198",LastName4198 +4199,4199,"FirstName4199 MiddleName4199",LastName4199 +4200,4200,"FirstName4200 MiddleName4200",LastName4200 +4201,4201,"FirstName4201 MiddleName4201",LastName4201 +4202,4202,"FirstName4202 MiddleName4202",LastName4202 +4203,4203,"FirstName4203 MiddleName4203",LastName4203 +4204,4204,"FirstName4204 MiddleName4204",LastName4204 
+4205,4205,"FirstName4205 MiddleName4205",LastName4205 +4206,4206,"FirstName4206 MiddleName4206",LastName4206 +4207,4207,"FirstName4207 MiddleName4207",LastName4207 +4208,4208,"FirstName4208 MiddleName4208",LastName4208 +4209,4209,"FirstName4209 MiddleName4209",LastName4209 +4210,4210,"FirstName4210 MiddleName4210",LastName4210 +4211,4211,"FirstName4211 MiddleName4211",LastName4211 +4212,4212,"FirstName4212 MiddleName4212",LastName4212 +4213,4213,"FirstName4213 MiddleName4213",LastName4213 +4214,4214,"FirstName4214 MiddleName4214",LastName4214 +4215,4215,"FirstName4215 MiddleName4215",LastName4215 +4216,4216,"FirstName4216 MiddleName4216",LastName4216 +4217,4217,"FirstName4217 MiddleName4217",LastName4217 +4218,4218,"FirstName4218 MiddleName4218",LastName4218 +4219,4219,"FirstName4219 MiddleName4219",LastName4219 +4220,4220,"FirstName4220 MiddleName4220",LastName4220 +4221,4221,"FirstName4221 MiddleName4221",LastName4221 +4222,4222,"FirstName4222 MiddleName4222",LastName4222 +4223,4223,"FirstName4223 MiddleName4223",LastName4223 +4224,4224,"FirstName4224 MiddleName4224",LastName4224 +4225,4225,"FirstName4225 MiddleName4225",LastName4225 +4226,4226,"FirstName4226 MiddleName4226",LastName4226 +4227,4227,"FirstName4227 MiddleName4227",LastName4227 +4228,4228,"FirstName4228 MiddleName4228",LastName4228 +4229,4229,"FirstName4229 MiddleName4229",LastName4229 +4230,4230,"FirstName4230 MiddleName4230",LastName4230 +4231,4231,"FirstName4231 MiddleName4231",LastName4231 +4232,4232,"FirstName4232 MiddleName4232",LastName4232 +4233,4233,"FirstName4233 MiddleName4233",LastName4233 +4234,4234,"FirstName4234 MiddleName4234",LastName4234 +4235,4235,"FirstName4235 MiddleName4235",LastName4235 +4236,4236,"FirstName4236 MiddleName4236",LastName4236 +4237,4237,"FirstName4237 MiddleName4237",LastName4237 +4238,4238,"FirstName4238 MiddleName4238",LastName4238 +4239,4239,"FirstName4239 MiddleName4239",LastName4239 +4240,4240,"FirstName4240 MiddleName4240",LastName4240 
+4241,4241,"FirstName4241 MiddleName4241",LastName4241 +4242,4242,"FirstName4242 MiddleName4242",LastName4242 +4243,4243,"FirstName4243 MiddleName4243",LastName4243 +4244,4244,"FirstName4244 MiddleName4244",LastName4244 +4245,4245,"FirstName4245 MiddleName4245",LastName4245 +4246,4246,"FirstName4246 MiddleName4246",LastName4246 +4247,4247,"FirstName4247 MiddleName4247",LastName4247 +4248,4248,"FirstName4248 MiddleName4248",LastName4248 +4249,4249,"FirstName4249 MiddleName4249",LastName4249 +4250,4250,"FirstName4250 MiddleName4250",LastName4250 +4251,4251,"FirstName4251 MiddleName4251",LastName4251 +4252,4252,"FirstName4252 MiddleName4252",LastName4252 +4253,4253,"FirstName4253 MiddleName4253",LastName4253 +4254,4254,"FirstName4254 MiddleName4254",LastName4254 +4255,4255,"FirstName4255 MiddleName4255",LastName4255 +4256,4256,"FirstName4256 MiddleName4256",LastName4256 +4257,4257,"FirstName4257 MiddleName4257",LastName4257 +4258,4258,"FirstName4258 MiddleName4258",LastName4258 +4259,4259,"FirstName4259 MiddleName4259",LastName4259 +4260,4260,"FirstName4260 MiddleName4260",LastName4260 +4261,4261,"FirstName4261 MiddleName4261",LastName4261 +4262,4262,"FirstName4262 MiddleName4262",LastName4262 +4263,4263,"FirstName4263 MiddleName4263",LastName4263 +4264,4264,"FirstName4264 MiddleName4264",LastName4264 +4265,4265,"FirstName4265 MiddleName4265",LastName4265 +4266,4266,"FirstName4266 MiddleName4266",LastName4266 +4267,4267,"FirstName4267 MiddleName4267",LastName4267 +4268,4268,"FirstName4268 MiddleName4268",LastName4268 +4269,4269,"FirstName4269 MiddleName4269",LastName4269 +4270,4270,"FirstName4270 MiddleName4270",LastName4270 +4271,4271,"FirstName4271 MiddleName4271",LastName4271 +4272,4272,"FirstName4272 MiddleName4272",LastName4272 +4273,4273,"FirstName4273 MiddleName4273",LastName4273 +4274,4274,"FirstName4274 MiddleName4274",LastName4274 +4275,4275,"FirstName4275 MiddleName4275",LastName4275 +4276,4276,"FirstName4276 MiddleName4276",LastName4276 
+4277,4277,"FirstName4277 MiddleName4277",LastName4277 +4278,4278,"FirstName4278 MiddleName4278",LastName4278 +4279,4279,"FirstName4279 MiddleName4279",LastName4279 +4280,4280,"FirstName4280 MiddleName4280",LastName4280 +4281,4281,"FirstName4281 MiddleName4281",LastName4281 +4282,4282,"FirstName4282 MiddleName4282",LastName4282 +4283,4283,"FirstName4283 MiddleName4283",LastName4283 +4284,4284,"FirstName4284 MiddleName4284",LastName4284 +4285,4285,"FirstName4285 MiddleName4285",LastName4285 +4286,4286,"FirstName4286 MiddleName4286",LastName4286 +4287,4287,"FirstName4287 MiddleName4287",LastName4287 +4288,4288,"FirstName4288 MiddleName4288",LastName4288 +4289,4289,"FirstName4289 MiddleName4289",LastName4289 +4290,4290,"FirstName4290 MiddleName4290",LastName4290 +4291,4291,"FirstName4291 MiddleName4291",LastName4291 +4292,4292,"FirstName4292 MiddleName4292",LastName4292 +4293,4293,"FirstName4293 MiddleName4293",LastName4293 +4294,4294,"FirstName4294 MiddleName4294",LastName4294 +4295,4295,"FirstName4295 MiddleName4295",LastName4295 +4296,4296,"FirstName4296 MiddleName4296",LastName4296 +4297,4297,"FirstName4297 MiddleName4297",LastName4297 +4298,4298,"FirstName4298 MiddleName4298",LastName4298 +4299,4299,"FirstName4299 MiddleName4299",LastName4299 +4300,4300,"FirstName4300 MiddleName4300",LastName4300 +4301,4301,"FirstName4301 MiddleName4301",LastName4301 +4302,4302,"FirstName4302 MiddleName4302",LastName4302 +4303,4303,"FirstName4303 MiddleName4303",LastName4303 +4304,4304,"FirstName4304 MiddleName4304",LastName4304 +4305,4305,"FirstName4305 MiddleName4305",LastName4305 +4306,4306,"FirstName4306 MiddleName4306",LastName4306 +4307,4307,"FirstName4307 MiddleName4307",LastName4307 +4308,4308,"FirstName4308 MiddleName4308",LastName4308 +4309,4309,"FirstName4309 MiddleName4309",LastName4309 +4310,4310,"FirstName4310 MiddleName4310",LastName4310 +4311,4311,"FirstName4311 MiddleName4311",LastName4311 +4312,4312,"FirstName4312 MiddleName4312",LastName4312 
+4313,4313,"FirstName4313 MiddleName4313",LastName4313 +4314,4314,"FirstName4314 MiddleName4314",LastName4314 +4315,4315,"FirstName4315 MiddleName4315",LastName4315 +4316,4316,"FirstName4316 MiddleName4316",LastName4316 +4317,4317,"FirstName4317 MiddleName4317",LastName4317 +4318,4318,"FirstName4318 MiddleName4318",LastName4318 +4319,4319,"FirstName4319 MiddleName4319",LastName4319 +4320,4320,"FirstName4320 MiddleName4320",LastName4320 +4321,4321,"FirstName4321 MiddleName4321",LastName4321 +4322,4322,"FirstName4322 MiddleName4322",LastName4322 +4323,4323,"FirstName4323 MiddleName4323",LastName4323 +4324,4324,"FirstName4324 MiddleName4324",LastName4324 +4325,4325,"FirstName4325 MiddleName4325",LastName4325 +4326,4326,"FirstName4326 MiddleName4326",LastName4326 +4327,4327,"FirstName4327 MiddleName4327",LastName4327 +4328,4328,"FirstName4328 MiddleName4328",LastName4328 +4329,4329,"FirstName4329 MiddleName4329",LastName4329 +4330,4330,"FirstName4330 MiddleName4330",LastName4330 +4331,4331,"FirstName4331 MiddleName4331",LastName4331 +4332,4332,"FirstName4332 MiddleName4332",LastName4332 +4333,4333,"FirstName4333 MiddleName4333",LastName4333 +4334,4334,"FirstName4334 MiddleName4334",LastName4334 +4335,4335,"FirstName4335 MiddleName4335",LastName4335 +4336,4336,"FirstName4336 MiddleName4336",LastName4336 +4337,4337,"FirstName4337 MiddleName4337",LastName4337 +4338,4338,"FirstName4338 MiddleName4338",LastName4338 +4339,4339,"FirstName4339 MiddleName4339",LastName4339 +4340,4340,"FirstName4340 MiddleName4340",LastName4340 +4341,4341,"FirstName4341 MiddleName4341",LastName4341 +4342,4342,"FirstName4342 MiddleName4342",LastName4342 +4343,4343,"FirstName4343 MiddleName4343",LastName4343 +4344,4344,"FirstName4344 MiddleName4344",LastName4344 +4345,4345,"FirstName4345 MiddleName4345",LastName4345 +4346,4346,"FirstName4346 MiddleName4346",LastName4346 +4347,4347,"FirstName4347 MiddleName4347",LastName4347 +4348,4348,"FirstName4348 MiddleName4348",LastName4348 
+4349,4349,"FirstName4349 MiddleName4349",LastName4349 +4350,4350,"FirstName4350 MiddleName4350",LastName4350 +4351,4351,"FirstName4351 MiddleName4351",LastName4351 +4352,4352,"FirstName4352 MiddleName4352",LastName4352 +4353,4353,"FirstName4353 MiddleName4353",LastName4353 +4354,4354,"FirstName4354 MiddleName4354",LastName4354 +4355,4355,"FirstName4355 MiddleName4355",LastName4355 +4356,4356,"FirstName4356 MiddleName4356",LastName4356 +4357,4357,"FirstName4357 MiddleName4357",LastName4357 +4358,4358,"FirstName4358 MiddleName4358",LastName4358 +4359,4359,"FirstName4359 MiddleName4359",LastName4359 +4360,4360,"FirstName4360 MiddleName4360",LastName4360 +4361,4361,"FirstName4361 MiddleName4361",LastName4361 +4362,4362,"FirstName4362 MiddleName4362",LastName4362 +4363,4363,"FirstName4363 MiddleName4363",LastName4363 +4364,4364,"FirstName4364 MiddleName4364",LastName4364 +4365,4365,"FirstName4365 MiddleName4365",LastName4365 +4366,4366,"FirstName4366 MiddleName4366",LastName4366 +4367,4367,"FirstName4367 MiddleName4367",LastName4367 +4368,4368,"FirstName4368 MiddleName4368",LastName4368 +4369,4369,"FirstName4369 MiddleName4369",LastName4369 +4370,4370,"FirstName4370 MiddleName4370",LastName4370 +4371,4371,"FirstName4371 MiddleName4371",LastName4371 +4372,4372,"FirstName4372 MiddleName4372",LastName4372 +4373,4373,"FirstName4373 MiddleName4373",LastName4373 +4374,4374,"FirstName4374 MiddleName4374",LastName4374 +4375,4375,"FirstName4375 MiddleName4375",LastName4375 +4376,4376,"FirstName4376 MiddleName4376",LastName4376 +4377,4377,"FirstName4377 MiddleName4377",LastName4377 +4378,4378,"FirstName4378 MiddleName4378",LastName4378 +4379,4379,"FirstName4379 MiddleName4379",LastName4379 +4380,4380,"FirstName4380 MiddleName4380",LastName4380 +4381,4381,"FirstName4381 MiddleName4381",LastName4381 +4382,4382,"FirstName4382 MiddleName4382",LastName4382 +4383,4383,"FirstName4383 MiddleName4383",LastName4383 +4384,4384,"FirstName4384 MiddleName4384",LastName4384 
+4385,4385,"FirstName4385 MiddleName4385",LastName4385 +4386,4386,"FirstName4386 MiddleName4386",LastName4386 +4387,4387,"FirstName4387 MiddleName4387",LastName4387 +4388,4388,"FirstName4388 MiddleName4388",LastName4388 +4389,4389,"FirstName4389 MiddleName4389",LastName4389 +4390,4390,"FirstName4390 MiddleName4390",LastName4390 +4391,4391,"FirstName4391 MiddleName4391",LastName4391 +4392,4392,"FirstName4392 MiddleName4392",LastName4392 +4393,4393,"FirstName4393 MiddleName4393",LastName4393 +4394,4394,"FirstName4394 MiddleName4394",LastName4394 +4395,4395,"FirstName4395 MiddleName4395",LastName4395 +4396,4396,"FirstName4396 MiddleName4396",LastName4396 +4397,4397,"FirstName4397 MiddleName4397",LastName4397 +4398,4398,"FirstName4398 MiddleName4398",LastName4398 +4399,4399,"FirstName4399 MiddleName4399",LastName4399 +4400,4400,"FirstName4400 MiddleName4400",LastName4400 +4401,4401,"FirstName4401 MiddleName4401",LastName4401 +4402,4402,"FirstName4402 MiddleName4402",LastName4402 +4403,4403,"FirstName4403 MiddleName4403",LastName4403 +4404,4404,"FirstName4404 MiddleName4404",LastName4404 +4405,4405,"FirstName4405 MiddleName4405",LastName4405 +4406,4406,"FirstName4406 MiddleName4406",LastName4406 +4407,4407,"FirstName4407 MiddleName4407",LastName4407 +4408,4408,"FirstName4408 MiddleName4408",LastName4408 +4409,4409,"FirstName4409 MiddleName4409",LastName4409 +4410,4410,"FirstName4410 MiddleName4410",LastName4410 +4411,4411,"FirstName4411 MiddleName4411",LastName4411 +4412,4412,"FirstName4412 MiddleName4412",LastName4412 +4413,4413,"FirstName4413 MiddleName4413",LastName4413 +4414,4414,"FirstName4414 MiddleName4414",LastName4414 +4415,4415,"FirstName4415 MiddleName4415",LastName4415 +4416,4416,"FirstName4416 MiddleName4416",LastName4416 +4417,4417,"FirstName4417 MiddleName4417",LastName4417 +4418,4418,"FirstName4418 MiddleName4418",LastName4418 +4419,4419,"FirstName4419 MiddleName4419",LastName4419 +4420,4420,"FirstName4420 MiddleName4420",LastName4420 
+4421,4421,"FirstName4421 MiddleName4421",LastName4421 +4422,4422,"FirstName4422 MiddleName4422",LastName4422 +4423,4423,"FirstName4423 MiddleName4423",LastName4423 +4424,4424,"FirstName4424 MiddleName4424",LastName4424 +4425,4425,"FirstName4425 MiddleName4425",LastName4425 +4426,4426,"FirstName4426 MiddleName4426",LastName4426 +4427,4427,"FirstName4427 MiddleName4427",LastName4427 +4428,4428,"FirstName4428 MiddleName4428",LastName4428 +4429,4429,"FirstName4429 MiddleName4429",LastName4429 +4430,4430,"FirstName4430 MiddleName4430",LastName4430 +4431,4431,"FirstName4431 MiddleName4431",LastName4431 +4432,4432,"FirstName4432 MiddleName4432",LastName4432 +4433,4433,"FirstName4433 MiddleName4433",LastName4433 +4434,4434,"FirstName4434 MiddleName4434",LastName4434 +4435,4435,"FirstName4435 MiddleName4435",LastName4435 +4436,4436,"FirstName4436 MiddleName4436",LastName4436 +4437,4437,"FirstName4437 MiddleName4437",LastName4437 +4438,4438,"FirstName4438 MiddleName4438",LastName4438 +4439,4439,"FirstName4439 MiddleName4439",LastName4439 +4440,4440,"FirstName4440 MiddleName4440",LastName4440 +4441,4441,"FirstName4441 MiddleName4441",LastName4441 +4442,4442,"FirstName4442 MiddleName4442",LastName4442 +4443,4443,"FirstName4443 MiddleName4443",LastName4443 +4444,4444,"FirstName4444 MiddleName4444",LastName4444 +4445,4445,"FirstName4445 MiddleName4445",LastName4445 +4446,4446,"FirstName4446 MiddleName4446",LastName4446 +4447,4447,"FirstName4447 MiddleName4447",LastName4447 +4448,4448,"FirstName4448 MiddleName4448",LastName4448 +4449,4449,"FirstName4449 MiddleName4449",LastName4449 +4450,4450,"FirstName4450 MiddleName4450",LastName4450 +4451,4451,"FirstName4451 MiddleName4451",LastName4451 +4452,4452,"FirstName4452 MiddleName4452",LastName4452 +4453,4453,"FirstName4453 MiddleName4453",LastName4453 +4454,4454,"FirstName4454 MiddleName4454",LastName4454 +4455,4455,"FirstName4455 MiddleName4455",LastName4455 +4456,4456,"FirstName4456 MiddleName4456",LastName4456 
+4457,4457,"FirstName4457 MiddleName4457",LastName4457 +4458,4458,"FirstName4458 MiddleName4458",LastName4458 +4459,4459,"FirstName4459 MiddleName4459",LastName4459 +4460,4460,"FirstName4460 MiddleName4460",LastName4460 +4461,4461,"FirstName4461 MiddleName4461",LastName4461 +4462,4462,"FirstName4462 MiddleName4462",LastName4462 +4463,4463,"FirstName4463 MiddleName4463",LastName4463 +4464,4464,"FirstName4464 MiddleName4464",LastName4464 +4465,4465,"FirstName4465 MiddleName4465",LastName4465 +4466,4466,"FirstName4466 MiddleName4466",LastName4466 +4467,4467,"FirstName4467 MiddleName4467",LastName4467 +4468,4468,"FirstName4468 MiddleName4468",LastName4468 +4469,4469,"FirstName4469 MiddleName4469",LastName4469 +4470,4470,"FirstName4470 MiddleName4470",LastName4470 +4471,4471,"FirstName4471 MiddleName4471",LastName4471 +4472,4472,"FirstName4472 MiddleName4472",LastName4472 +4473,4473,"FirstName4473 MiddleName4473",LastName4473 +4474,4474,"FirstName4474 MiddleName4474",LastName4474 +4475,4475,"FirstName4475 MiddleName4475",LastName4475 +4476,4476,"FirstName4476 MiddleName4476",LastName4476 +4477,4477,"FirstName4477 MiddleName4477",LastName4477 +4478,4478,"FirstName4478 MiddleName4478",LastName4478 +4479,4479,"FirstName4479 MiddleName4479",LastName4479 +4480,4480,"FirstName4480 MiddleName4480",LastName4480 +4481,4481,"FirstName4481 MiddleName4481",LastName4481 +4482,4482,"FirstName4482 MiddleName4482",LastName4482 +4483,4483,"FirstName4483 MiddleName4483",LastName4483 +4484,4484,"FirstName4484 MiddleName4484",LastName4484 +4485,4485,"FirstName4485 MiddleName4485",LastName4485 +4486,4486,"FirstName4486 MiddleName4486",LastName4486 +4487,4487,"FirstName4487 MiddleName4487",LastName4487 +4488,4488,"FirstName4488 MiddleName4488",LastName4488 +4489,4489,"FirstName4489 MiddleName4489",LastName4489 +4490,4490,"FirstName4490 MiddleName4490",LastName4490 +4491,4491,"FirstName4491 MiddleName4491",LastName4491 +4492,4492,"FirstName4492 MiddleName4492",LastName4492 
+4493,4493,"FirstName4493 MiddleName4493",LastName4493 +4494,4494,"FirstName4494 MiddleName4494",LastName4494 +4495,4495,"FirstName4495 MiddleName4495",LastName4495 +4496,4496,"FirstName4496 MiddleName4496",LastName4496 +4497,4497,"FirstName4497 MiddleName4497",LastName4497 +4498,4498,"FirstName4498 MiddleName4498",LastName4498 +4499,4499,"FirstName4499 MiddleName4499",LastName4499 +4500,4500,"FirstName4500 MiddleName4500",LastName4500 +4501,4501,"FirstName4501 MiddleName4501",LastName4501 +4502,4502,"FirstName4502 MiddleName4502",LastName4502 +4503,4503,"FirstName4503 MiddleName4503",LastName4503 +4504,4504,"FirstName4504 MiddleName4504",LastName4504 +4505,4505,"FirstName4505 MiddleName4505",LastName4505 +4506,4506,"FirstName4506 MiddleName4506",LastName4506 +4507,4507,"FirstName4507 MiddleName4507",LastName4507 +4508,4508,"FirstName4508 MiddleName4508",LastName4508 +4509,4509,"FirstName4509 MiddleName4509",LastName4509 +4510,4510,"FirstName4510 MiddleName4510",LastName4510 +4511,4511,"FirstName4511 MiddleName4511",LastName4511 +4512,4512,"FirstName4512 MiddleName4512",LastName4512 +4513,4513,"FirstName4513 MiddleName4513",LastName4513 +4514,4514,"FirstName4514 MiddleName4514",LastName4514 +4515,4515,"FirstName4515 MiddleName4515",LastName4515 +4516,4516,"FirstName4516 MiddleName4516",LastName4516 +4517,4517,"FirstName4517 MiddleName4517",LastName4517 +4518,4518,"FirstName4518 MiddleName4518",LastName4518 +4519,4519,"FirstName4519 MiddleName4519",LastName4519 +4520,4520,"FirstName4520 MiddleName4520",LastName4520 +4521,4521,"FirstName4521 MiddleName4521",LastName4521 +4522,4522,"FirstName4522 MiddleName4522",LastName4522 +4523,4523,"FirstName4523 MiddleName4523",LastName4523 +4524,4524,"FirstName4524 MiddleName4524",LastName4524 +4525,4525,"FirstName4525 MiddleName4525",LastName4525 +4526,4526,"FirstName4526 MiddleName4526",LastName4526 +4527,4527,"FirstName4527 MiddleName4527",LastName4527 +4528,4528,"FirstName4528 MiddleName4528",LastName4528 
+4529,4529,"FirstName4529 MiddleName4529",LastName4529 +4530,4530,"FirstName4530 MiddleName4530",LastName4530 +4531,4531,"FirstName4531 MiddleName4531",LastName4531 +4532,4532,"FirstName4532 MiddleName4532",LastName4532 +4533,4533,"FirstName4533 MiddleName4533",LastName4533 +4534,4534,"FirstName4534 MiddleName4534",LastName4534 +4535,4535,"FirstName4535 MiddleName4535",LastName4535 +4536,4536,"FirstName4536 MiddleName4536",LastName4536 +4537,4537,"FirstName4537 MiddleName4537",LastName4537 +4538,4538,"FirstName4538 MiddleName4538",LastName4538 +4539,4539,"FirstName4539 MiddleName4539",LastName4539 +4540,4540,"FirstName4540 MiddleName4540",LastName4540 +4541,4541,"FirstName4541 MiddleName4541",LastName4541 +4542,4542,"FirstName4542 MiddleName4542",LastName4542 +4543,4543,"FirstName4543 MiddleName4543",LastName4543 +4544,4544,"FirstName4544 MiddleName4544",LastName4544 +4545,4545,"FirstName4545 MiddleName4545",LastName4545 +4546,4546,"FirstName4546 MiddleName4546",LastName4546 +4547,4547,"FirstName4547 MiddleName4547",LastName4547 +4548,4548,"FirstName4548 MiddleName4548",LastName4548 +4549,4549,"FirstName4549 MiddleName4549",LastName4549 +4550,4550,"FirstName4550 MiddleName4550",LastName4550 +4551,4551,"FirstName4551 MiddleName4551",LastName4551 +4552,4552,"FirstName4552 MiddleName4552",LastName4552 +4553,4553,"FirstName4553 MiddleName4553",LastName4553 +4554,4554,"FirstName4554 MiddleName4554",LastName4554 +4555,4555,"FirstName4555 MiddleName4555",LastName4555 +4556,4556,"FirstName4556 MiddleName4556",LastName4556 +4557,4557,"FirstName4557 MiddleName4557",LastName4557 +4558,4558,"FirstName4558 MiddleName4558",LastName4558 +4559,4559,"FirstName4559 MiddleName4559",LastName4559 +4560,4560,"FirstName4560 MiddleName4560",LastName4560 +4561,4561,"FirstName4561 MiddleName4561",LastName4561 +4562,4562,"FirstName4562 MiddleName4562",LastName4562 +4563,4563,"FirstName4563 MiddleName4563",LastName4563 +4564,4564,"FirstName4564 MiddleName4564",LastName4564 
+4565,4565,"FirstName4565 MiddleName4565",LastName4565 +4566,4566,"FirstName4566 MiddleName4566",LastName4566 +4567,4567,"FirstName4567 MiddleName4567",LastName4567 +4568,4568,"FirstName4568 MiddleName4568",LastName4568 +4569,4569,"FirstName4569 MiddleName4569",LastName4569 +4570,4570,"FirstName4570 MiddleName4570",LastName4570 +4571,4571,"FirstName4571 MiddleName4571",LastName4571 +4572,4572,"FirstName4572 MiddleName4572",LastName4572 +4573,4573,"FirstName4573 MiddleName4573",LastName4573 +4574,4574,"FirstName4574 MiddleName4574",LastName4574 +4575,4575,"FirstName4575 MiddleName4575",LastName4575 +4576,4576,"FirstName4576 MiddleName4576",LastName4576 +4577,4577,"FirstName4577 MiddleName4577",LastName4577 +4578,4578,"FirstName4578 MiddleName4578",LastName4578 +4579,4579,"FirstName4579 MiddleName4579",LastName4579 +4580,4580,"FirstName4580 MiddleName4580",LastName4580 +4581,4581,"FirstName4581 MiddleName4581",LastName4581 +4582,4582,"FirstName4582 MiddleName4582",LastName4582 +4583,4583,"FirstName4583 MiddleName4583",LastName4583 +4584,4584,"FirstName4584 MiddleName4584",LastName4584 +4585,4585,"FirstName4585 MiddleName4585",LastName4585 +4586,4586,"FirstName4586 MiddleName4586",LastName4586 +4587,4587,"FirstName4587 MiddleName4587",LastName4587 +4588,4588,"FirstName4588 MiddleName4588",LastName4588 +4589,4589,"FirstName4589 MiddleName4589",LastName4589 +4590,4590,"FirstName4590 MiddleName4590",LastName4590 +4591,4591,"FirstName4591 MiddleName4591",LastName4591 +4592,4592,"FirstName4592 MiddleName4592",LastName4592 +4593,4593,"FirstName4593 MiddleName4593",LastName4593 +4594,4594,"FirstName4594 MiddleName4594",LastName4594 +4595,4595,"FirstName4595 MiddleName4595",LastName4595 +4596,4596,"FirstName4596 MiddleName4596",LastName4596 +4597,4597,"FirstName4597 MiddleName4597",LastName4597 +4598,4598,"FirstName4598 MiddleName4598",LastName4598 +4599,4599,"FirstName4599 MiddleName4599",LastName4599 +4600,4600,"FirstName4600 MiddleName4600",LastName4600 
+4601,4601,"FirstName4601 MiddleName4601",LastName4601 +4602,4602,"FirstName4602 MiddleName4602",LastName4602 +4603,4603,"FirstName4603 MiddleName4603",LastName4603 +4604,4604,"FirstName4604 MiddleName4604",LastName4604 +4605,4605,"FirstName4605 MiddleName4605",LastName4605 +4606,4606,"FirstName4606 MiddleName4606",LastName4606 +4607,4607,"FirstName4607 MiddleName4607",LastName4607 +4608,4608,"FirstName4608 MiddleName4608",LastName4608 +4609,4609,"FirstName4609 MiddleName4609",LastName4609 +4610,4610,"FirstName4610 MiddleName4610",LastName4610 +4611,4611,"FirstName4611 MiddleName4611",LastName4611 +4612,4612,"FirstName4612 MiddleName4612",LastName4612 +4613,4613,"FirstName4613 MiddleName4613",LastName4613 +4614,4614,"FirstName4614 MiddleName4614",LastName4614 +4615,4615,"FirstName4615 MiddleName4615",LastName4615 +4616,4616,"FirstName4616 MiddleName4616",LastName4616 +4617,4617,"FirstName4617 MiddleName4617",LastName4617 +4618,4618,"FirstName4618 MiddleName4618",LastName4618 +4619,4619,"FirstName4619 MiddleName4619",LastName4619 +4620,4620,"FirstName4620 MiddleName4620",LastName4620 +4621,4621,"FirstName4621 MiddleName4621",LastName4621 +4622,4622,"FirstName4622 MiddleName4622",LastName4622 +4623,4623,"FirstName4623 MiddleName4623",LastName4623 +4624,4624,"FirstName4624 MiddleName4624",LastName4624 +4625,4625,"FirstName4625 MiddleName4625",LastName4625 +4626,4626,"FirstName4626 MiddleName4626",LastName4626 +4627,4627,"FirstName4627 MiddleName4627",LastName4627 +4628,4628,"FirstName4628 MiddleName4628",LastName4628 +4629,4629,"FirstName4629 MiddleName4629",LastName4629 +4630,4630,"FirstName4630 MiddleName4630",LastName4630 +4631,4631,"FirstName4631 MiddleName4631",LastName4631 +4632,4632,"FirstName4632 MiddleName4632",LastName4632 +4633,4633,"FirstName4633 MiddleName4633",LastName4633 +4634,4634,"FirstName4634 MiddleName4634",LastName4634 +4635,4635,"FirstName4635 MiddleName4635",LastName4635 +4636,4636,"FirstName4636 MiddleName4636",LastName4636 
+4637,4637,"FirstName4637 MiddleName4637",LastName4637 +4638,4638,"FirstName4638 MiddleName4638",LastName4638 +4639,4639,"FirstName4639 MiddleName4639",LastName4639 +4640,4640,"FirstName4640 MiddleName4640",LastName4640 +4641,4641,"FirstName4641 MiddleName4641",LastName4641 +4642,4642,"FirstName4642 MiddleName4642",LastName4642 +4643,4643,"FirstName4643 MiddleName4643",LastName4643 +4644,4644,"FirstName4644 MiddleName4644",LastName4644 +4645,4645,"FirstName4645 MiddleName4645",LastName4645 +4646,4646,"FirstName4646 MiddleName4646",LastName4646 +4647,4647,"FirstName4647 MiddleName4647",LastName4647 +4648,4648,"FirstName4648 MiddleName4648",LastName4648 +4649,4649,"FirstName4649 MiddleName4649",LastName4649 +4650,4650,"FirstName4650 MiddleName4650",LastName4650 +4651,4651,"FirstName4651 MiddleName4651",LastName4651 +4652,4652,"FirstName4652 MiddleName4652",LastName4652 +4653,4653,"FirstName4653 MiddleName4653",LastName4653 +4654,4654,"FirstName4654 MiddleName4654",LastName4654 +4655,4655,"FirstName4655 MiddleName4655",LastName4655 +4656,4656,"FirstName4656 MiddleName4656",LastName4656 +4657,4657,"FirstName4657 MiddleName4657",LastName4657 +4658,4658,"FirstName4658 MiddleName4658",LastName4658 +4659,4659,"FirstName4659 MiddleName4659",LastName4659 +4660,4660,"FirstName4660 MiddleName4660",LastName4660 +4661,4661,"FirstName4661 MiddleName4661",LastName4661 +4662,4662,"FirstName4662 MiddleName4662",LastName4662 +4663,4663,"FirstName4663 MiddleName4663",LastName4663 +4664,4664,"FirstName4664 MiddleName4664",LastName4664 +4665,4665,"FirstName4665 MiddleName4665",LastName4665 +4666,4666,"FirstName4666 MiddleName4666",LastName4666 +4667,4667,"FirstName4667 MiddleName4667",LastName4667 +4668,4668,"FirstName4668 MiddleName4668",LastName4668 +4669,4669,"FirstName4669 MiddleName4669",LastName4669 +4670,4670,"FirstName4670 MiddleName4670",LastName4670 +4671,4671,"FirstName4671 MiddleName4671",LastName4671 +4672,4672,"FirstName4672 MiddleName4672",LastName4672 
+4673,4673,"FirstName4673 MiddleName4673",LastName4673 +4674,4674,"FirstName4674 MiddleName4674",LastName4674 +4675,4675,"FirstName4675 MiddleName4675",LastName4675 +4676,4676,"FirstName4676 MiddleName4676",LastName4676 +4677,4677,"FirstName4677 MiddleName4677",LastName4677 +4678,4678,"FirstName4678 MiddleName4678",LastName4678 +4679,4679,"FirstName4679 MiddleName4679",LastName4679 +4680,4680,"FirstName4680 MiddleName4680",LastName4680 +4681,4681,"FirstName4681 MiddleName4681",LastName4681 +4682,4682,"FirstName4682 MiddleName4682",LastName4682 +4683,4683,"FirstName4683 MiddleName4683",LastName4683 +4684,4684,"FirstName4684 MiddleName4684",LastName4684 +4685,4685,"FirstName4685 MiddleName4685",LastName4685 +4686,4686,"FirstName4686 MiddleName4686",LastName4686 +4687,4687,"FirstName4687 MiddleName4687",LastName4687 +4688,4688,"FirstName4688 MiddleName4688",LastName4688 +4689,4689,"FirstName4689 MiddleName4689",LastName4689 +4690,4690,"FirstName4690 MiddleName4690",LastName4690 +4691,4691,"FirstName4691 MiddleName4691",LastName4691 +4692,4692,"FirstName4692 MiddleName4692",LastName4692 +4693,4693,"FirstName4693 MiddleName4693",LastName4693 +4694,4694,"FirstName4694 MiddleName4694",LastName4694 +4695,4695,"FirstName4695 MiddleName4695",LastName4695 +4696,4696,"FirstName4696 MiddleName4696",LastName4696 +4697,4697,"FirstName4697 MiddleName4697",LastName4697 +4698,4698,"FirstName4698 MiddleName4698",LastName4698 +4699,4699,"FirstName4699 MiddleName4699",LastName4699 +4700,4700,"FirstName4700 MiddleName4700",LastName4700 +4701,4701,"FirstName4701 MiddleName4701",LastName4701 +4702,4702,"FirstName4702 MiddleName4702",LastName4702 +4703,4703,"FirstName4703 MiddleName4703",LastName4703 +4704,4704,"FirstName4704 MiddleName4704",LastName4704 +4705,4705,"FirstName4705 MiddleName4705",LastName4705 +4706,4706,"FirstName4706 MiddleName4706",LastName4706 +4707,4707,"FirstName4707 MiddleName4707",LastName4707 +4708,4708,"FirstName4708 MiddleName4708",LastName4708 
+4709,4709,"FirstName4709 MiddleName4709",LastName4709 +4710,4710,"FirstName4710 MiddleName4710",LastName4710 +4711,4711,"FirstName4711 MiddleName4711",LastName4711 +4712,4712,"FirstName4712 MiddleName4712",LastName4712 +4713,4713,"FirstName4713 MiddleName4713",LastName4713 +4714,4714,"FirstName4714 MiddleName4714",LastName4714 +4715,4715,"FirstName4715 MiddleName4715",LastName4715 +4716,4716,"FirstName4716 MiddleName4716",LastName4716 +4717,4717,"FirstName4717 MiddleName4717",LastName4717 +4718,4718,"FirstName4718 MiddleName4718",LastName4718 +4719,4719,"FirstName4719 MiddleName4719",LastName4719 +4720,4720,"FirstName4720 MiddleName4720",LastName4720 +4721,4721,"FirstName4721 MiddleName4721",LastName4721 +4722,4722,"FirstName4722 MiddleName4722",LastName4722 +4723,4723,"FirstName4723 MiddleName4723",LastName4723 +4724,4724,"FirstName4724 MiddleName4724",LastName4724 +4725,4725,"FirstName4725 MiddleName4725",LastName4725 +4726,4726,"FirstName4726 MiddleName4726",LastName4726 +4727,4727,"FirstName4727 MiddleName4727",LastName4727 +4728,4728,"FirstName4728 MiddleName4728",LastName4728 +4729,4729,"FirstName4729 MiddleName4729",LastName4729 +4730,4730,"FirstName4730 MiddleName4730",LastName4730 +4731,4731,"FirstName4731 MiddleName4731",LastName4731 +4732,4732,"FirstName4732 MiddleName4732",LastName4732 +4733,4733,"FirstName4733 MiddleName4733",LastName4733 +4734,4734,"FirstName4734 MiddleName4734",LastName4734 +4735,4735,"FirstName4735 MiddleName4735",LastName4735 +4736,4736,"FirstName4736 MiddleName4736",LastName4736 +4737,4737,"FirstName4737 MiddleName4737",LastName4737 +4738,4738,"FirstName4738 MiddleName4738",LastName4738 +4739,4739,"FirstName4739 MiddleName4739",LastName4739 +4740,4740,"FirstName4740 MiddleName4740",LastName4740 +4741,4741,"FirstName4741 MiddleName4741",LastName4741 +4742,4742,"FirstName4742 MiddleName4742",LastName4742 +4743,4743,"FirstName4743 MiddleName4743",LastName4743 +4744,4744,"FirstName4744 MiddleName4744",LastName4744 
+4745,4745,"FirstName4745 MiddleName4745",LastName4745 +4746,4746,"FirstName4746 MiddleName4746",LastName4746 +4747,4747,"FirstName4747 MiddleName4747",LastName4747 +4748,4748,"FirstName4748 MiddleName4748",LastName4748 +4749,4749,"FirstName4749 MiddleName4749",LastName4749 +4750,4750,"FirstName4750 MiddleName4750",LastName4750 +4751,4751,"FirstName4751 MiddleName4751",LastName4751 +4752,4752,"FirstName4752 MiddleName4752",LastName4752 +4753,4753,"FirstName4753 MiddleName4753",LastName4753 +4754,4754,"FirstName4754 MiddleName4754",LastName4754 +4755,4755,"FirstName4755 MiddleName4755",LastName4755 +4756,4756,"FirstName4756 MiddleName4756",LastName4756 +4757,4757,"FirstName4757 MiddleName4757",LastName4757 +4758,4758,"FirstName4758 MiddleName4758",LastName4758 +4759,4759,"FirstName4759 MiddleName4759",LastName4759 +4760,4760,"FirstName4760 MiddleName4760",LastName4760 +4761,4761,"FirstName4761 MiddleName4761",LastName4761 +4762,4762,"FirstName4762 MiddleName4762",LastName4762 +4763,4763,"FirstName4763 MiddleName4763",LastName4763 +4764,4764,"FirstName4764 MiddleName4764",LastName4764 +4765,4765,"FirstName4765 MiddleName4765",LastName4765 +4766,4766,"FirstName4766 MiddleName4766",LastName4766 +4767,4767,"FirstName4767 MiddleName4767",LastName4767 +4768,4768,"FirstName4768 MiddleName4768",LastName4768 +4769,4769,"FirstName4769 MiddleName4769",LastName4769 +4770,4770,"FirstName4770 MiddleName4770",LastName4770 +4771,4771,"FirstName4771 MiddleName4771",LastName4771 +4772,4772,"FirstName4772 MiddleName4772",LastName4772 +4773,4773,"FirstName4773 MiddleName4773",LastName4773 +4774,4774,"FirstName4774 MiddleName4774",LastName4774 +4775,4775,"FirstName4775 MiddleName4775",LastName4775 +4776,4776,"FirstName4776 MiddleName4776",LastName4776 +4777,4777,"FirstName4777 MiddleName4777",LastName4777 +4778,4778,"FirstName4778 MiddleName4778",LastName4778 +4779,4779,"FirstName4779 MiddleName4779",LastName4779 +4780,4780,"FirstName4780 MiddleName4780",LastName4780 
+4781,4781,"FirstName4781 MiddleName4781",LastName4781 +4782,4782,"FirstName4782 MiddleName4782",LastName4782 +4783,4783,"FirstName4783 MiddleName4783",LastName4783 +4784,4784,"FirstName4784 MiddleName4784",LastName4784 +4785,4785,"FirstName4785 MiddleName4785",LastName4785 +4786,4786,"FirstName4786 MiddleName4786",LastName4786 +4787,4787,"FirstName4787 MiddleName4787",LastName4787 +4788,4788,"FirstName4788 MiddleName4788",LastName4788 +4789,4789,"FirstName4789 MiddleName4789",LastName4789 +4790,4790,"FirstName4790 MiddleName4790",LastName4790 +4791,4791,"FirstName4791 MiddleName4791",LastName4791 +4792,4792,"FirstName4792 MiddleName4792",LastName4792 +4793,4793,"FirstName4793 MiddleName4793",LastName4793 +4794,4794,"FirstName4794 MiddleName4794",LastName4794 +4795,4795,"FirstName4795 MiddleName4795",LastName4795 +4796,4796,"FirstName4796 MiddleName4796",LastName4796 +4797,4797,"FirstName4797 MiddleName4797",LastName4797 +4798,4798,"FirstName4798 MiddleName4798",LastName4798 +4799,4799,"FirstName4799 MiddleName4799",LastName4799 +4800,4800,"FirstName4800 MiddleName4800",LastName4800 +4801,4801,"FirstName4801 MiddleName4801",LastName4801 +4802,4802,"FirstName4802 MiddleName4802",LastName4802 +4803,4803,"FirstName4803 MiddleName4803",LastName4803 +4804,4804,"FirstName4804 MiddleName4804",LastName4804 +4805,4805,"FirstName4805 MiddleName4805",LastName4805 +4806,4806,"FirstName4806 MiddleName4806",LastName4806 +4807,4807,"FirstName4807 MiddleName4807",LastName4807 +4808,4808,"FirstName4808 MiddleName4808",LastName4808 +4809,4809,"FirstName4809 MiddleName4809",LastName4809 +4810,4810,"FirstName4810 MiddleName4810",LastName4810 +4811,4811,"FirstName4811 MiddleName4811",LastName4811 +4812,4812,"FirstName4812 MiddleName4812",LastName4812 +4813,4813,"FirstName4813 MiddleName4813",LastName4813 +4814,4814,"FirstName4814 MiddleName4814",LastName4814 +4815,4815,"FirstName4815 MiddleName4815",LastName4815 +4816,4816,"FirstName4816 MiddleName4816",LastName4816 
+4817,4817,"FirstName4817 MiddleName4817",LastName4817 +4818,4818,"FirstName4818 MiddleName4818",LastName4818 +4819,4819,"FirstName4819 MiddleName4819",LastName4819 +4820,4820,"FirstName4820 MiddleName4820",LastName4820 +4821,4821,"FirstName4821 MiddleName4821",LastName4821 +4822,4822,"FirstName4822 MiddleName4822",LastName4822 +4823,4823,"FirstName4823 MiddleName4823",LastName4823 +4824,4824,"FirstName4824 MiddleName4824",LastName4824 +4825,4825,"FirstName4825 MiddleName4825",LastName4825 +4826,4826,"FirstName4826 MiddleName4826",LastName4826 +4827,4827,"FirstName4827 MiddleName4827",LastName4827 +4828,4828,"FirstName4828 MiddleName4828",LastName4828 +4829,4829,"FirstName4829 MiddleName4829",LastName4829 +4830,4830,"FirstName4830 MiddleName4830",LastName4830 +4831,4831,"FirstName4831 MiddleName4831",LastName4831 +4832,4832,"FirstName4832 MiddleName4832",LastName4832 +4833,4833,"FirstName4833 MiddleName4833",LastName4833 +4834,4834,"FirstName4834 MiddleName4834",LastName4834 +4835,4835,"FirstName4835 MiddleName4835",LastName4835 +4836,4836,"FirstName4836 MiddleName4836",LastName4836 +4837,4837,"FirstName4837 MiddleName4837",LastName4837 +4838,4838,"FirstName4838 MiddleName4838",LastName4838 +4839,4839,"FirstName4839 MiddleName4839",LastName4839 +4840,4840,"FirstName4840 MiddleName4840",LastName4840 +4841,4841,"FirstName4841 MiddleName4841",LastName4841 +4842,4842,"FirstName4842 MiddleName4842",LastName4842 +4843,4843,"FirstName4843 MiddleName4843",LastName4843 +4844,4844,"FirstName4844 MiddleName4844",LastName4844 +4845,4845,"FirstName4845 MiddleName4845",LastName4845 +4846,4846,"FirstName4846 MiddleName4846",LastName4846 +4847,4847,"FirstName4847 MiddleName4847",LastName4847 +4848,4848,"FirstName4848 MiddleName4848",LastName4848 +4849,4849,"FirstName4849 MiddleName4849",LastName4849 +4850,4850,"FirstName4850 MiddleName4850",LastName4850 +4851,4851,"FirstName4851 MiddleName4851",LastName4851 +4852,4852,"FirstName4852 MiddleName4852",LastName4852 
+4853,4853,"FirstName4853 MiddleName4853",LastName4853 +4854,4854,"FirstName4854 MiddleName4854",LastName4854 +4855,4855,"FirstName4855 MiddleName4855",LastName4855 +4856,4856,"FirstName4856 MiddleName4856",LastName4856 +4857,4857,"FirstName4857 MiddleName4857",LastName4857 +4858,4858,"FirstName4858 MiddleName4858",LastName4858 +4859,4859,"FirstName4859 MiddleName4859",LastName4859 +4860,4860,"FirstName4860 MiddleName4860",LastName4860 +4861,4861,"FirstName4861 MiddleName4861",LastName4861 +4862,4862,"FirstName4862 MiddleName4862",LastName4862 +4863,4863,"FirstName4863 MiddleName4863",LastName4863 +4864,4864,"FirstName4864 MiddleName4864",LastName4864 +4865,4865,"FirstName4865 MiddleName4865",LastName4865 +4866,4866,"FirstName4866 MiddleName4866",LastName4866 +4867,4867,"FirstName4867 MiddleName4867",LastName4867 +4868,4868,"FirstName4868 MiddleName4868",LastName4868 +4869,4869,"FirstName4869 MiddleName4869",LastName4869 +4870,4870,"FirstName4870 MiddleName4870",LastName4870 +4871,4871,"FirstName4871 MiddleName4871",LastName4871 +4872,4872,"FirstName4872 MiddleName4872",LastName4872 +4873,4873,"FirstName4873 MiddleName4873",LastName4873 +4874,4874,"FirstName4874 MiddleName4874",LastName4874 +4875,4875,"FirstName4875 MiddleName4875",LastName4875 +4876,4876,"FirstName4876 MiddleName4876",LastName4876 +4877,4877,"FirstName4877 MiddleName4877",LastName4877 +4878,4878,"FirstName4878 MiddleName4878",LastName4878 +4879,4879,"FirstName4879 MiddleName4879",LastName4879 +4880,4880,"FirstName4880 MiddleName4880",LastName4880 +4881,4881,"FirstName4881 MiddleName4881",LastName4881 +4882,4882,"FirstName4882 MiddleName4882",LastName4882 +4883,4883,"FirstName4883 MiddleName4883",LastName4883 +4884,4884,"FirstName4884 MiddleName4884",LastName4884 +4885,4885,"FirstName4885 MiddleName4885",LastName4885 +4886,4886,"FirstName4886 MiddleName4886",LastName4886 +4887,4887,"FirstName4887 MiddleName4887",LastName4887 +4888,4888,"FirstName4888 MiddleName4888",LastName4888 
+4889,4889,"FirstName4889 MiddleName4889",LastName4889 +4890,4890,"FirstName4890 MiddleName4890",LastName4890 +4891,4891,"FirstName4891 MiddleName4891",LastName4891 +4892,4892,"FirstName4892 MiddleName4892",LastName4892 +4893,4893,"FirstName4893 MiddleName4893",LastName4893 +4894,4894,"FirstName4894 MiddleName4894",LastName4894 +4895,4895,"FirstName4895 MiddleName4895",LastName4895 +4896,4896,"FirstName4896 MiddleName4896",LastName4896 +4897,4897,"FirstName4897 MiddleName4897",LastName4897 +4898,4898,"FirstName4898 MiddleName4898",LastName4898 +4899,4899,"FirstName4899 MiddleName4899",LastName4899 +4900,4900,"FirstName4900 MiddleName4900",LastName4900 +4901,4901,"FirstName4901 MiddleName4901",LastName4901 +4902,4902,"FirstName4902 MiddleName4902",LastName4902 +4903,4903,"FirstName4903 MiddleName4903",LastName4903 +4904,4904,"FirstName4904 MiddleName4904",LastName4904 +4905,4905,"FirstName4905 MiddleName4905",LastName4905 +4906,4906,"FirstName4906 MiddleName4906",LastName4906 +4907,4907,"FirstName4907 MiddleName4907",LastName4907 +4908,4908,"FirstName4908 MiddleName4908",LastName4908 +4909,4909,"FirstName4909 MiddleName4909",LastName4909 +4910,4910,"FirstName4910 MiddleName4910",LastName4910 +4911,4911,"FirstName4911 MiddleName4911",LastName4911 +4912,4912,"FirstName4912 MiddleName4912",LastName4912 +4913,4913,"FirstName4913 MiddleName4913",LastName4913 +4914,4914,"FirstName4914 MiddleName4914",LastName4914 +4915,4915,"FirstName4915 MiddleName4915",LastName4915 +4916,4916,"FirstName4916 MiddleName4916",LastName4916 +4917,4917,"FirstName4917 MiddleName4917",LastName4917 +4918,4918,"FirstName4918 MiddleName4918",LastName4918 +4919,4919,"FirstName4919 MiddleName4919",LastName4919 +4920,4920,"FirstName4920 MiddleName4920",LastName4920 +4921,4921,"FirstName4921 MiddleName4921",LastName4921 +4922,4922,"FirstName4922 MiddleName4922",LastName4922 +4923,4923,"FirstName4923 MiddleName4923",LastName4923 +4924,4924,"FirstName4924 MiddleName4924",LastName4924 
+4925,4925,"FirstName4925 MiddleName4925",LastName4925 +4926,4926,"FirstName4926 MiddleName4926",LastName4926 +4927,4927,"FirstName4927 MiddleName4927",LastName4927 +4928,4928,"FirstName4928 MiddleName4928",LastName4928 +4929,4929,"FirstName4929 MiddleName4929",LastName4929 +4930,4930,"FirstName4930 MiddleName4930",LastName4930 +4931,4931,"FirstName4931 MiddleName4931",LastName4931 +4932,4932,"FirstName4932 MiddleName4932",LastName4932 +4933,4933,"FirstName4933 MiddleName4933",LastName4933 +4934,4934,"FirstName4934 MiddleName4934",LastName4934 +4935,4935,"FirstName4935 MiddleName4935",LastName4935 +4936,4936,"FirstName4936 MiddleName4936",LastName4936 +4937,4937,"FirstName4937 MiddleName4937",LastName4937 +4938,4938,"FirstName4938 MiddleName4938",LastName4938 +4939,4939,"FirstName4939 MiddleName4939",LastName4939 +4940,4940,"FirstName4940 MiddleName4940",LastName4940 +4941,4941,"FirstName4941 MiddleName4941",LastName4941 +4942,4942,"FirstName4942 MiddleName4942",LastName4942 +4943,4943,"FirstName4943 MiddleName4943",LastName4943 +4944,4944,"FirstName4944 MiddleName4944",LastName4944 +4945,4945,"FirstName4945 MiddleName4945",LastName4945 +4946,4946,"FirstName4946 MiddleName4946",LastName4946 +4947,4947,"FirstName4947 MiddleName4947",LastName4947 +4948,4948,"FirstName4948 MiddleName4948",LastName4948 +4949,4949,"FirstName4949 MiddleName4949",LastName4949 +4950,4950,"FirstName4950 MiddleName4950",LastName4950 +4951,4951,"FirstName4951 MiddleName4951",LastName4951 +4952,4952,"FirstName4952 MiddleName4952",LastName4952 +4953,4953,"FirstName4953 MiddleName4953",LastName4953 +4954,4954,"FirstName4954 MiddleName4954",LastName4954 +4955,4955,"FirstName4955 MiddleName4955",LastName4955 +4956,4956,"FirstName4956 MiddleName4956",LastName4956 +4957,4957,"FirstName4957 MiddleName4957",LastName4957 +4958,4958,"FirstName4958 MiddleName4958",LastName4958 +4959,4959,"FirstName4959 MiddleName4959",LastName4959 +4960,4960,"FirstName4960 MiddleName4960",LastName4960 
+4961,4961,"FirstName4961 MiddleName4961",LastName4961 +4962,4962,"FirstName4962 MiddleName4962",LastName4962 +4963,4963,"FirstName4963 MiddleName4963",LastName4963 +4964,4964,"FirstName4964 MiddleName4964",LastName4964 +4965,4965,"FirstName4965 MiddleName4965",LastName4965 +4966,4966,"FirstName4966 MiddleName4966",LastName4966 +4967,4967,"FirstName4967 MiddleName4967",LastName4967 +4968,4968,"FirstName4968 MiddleName4968",LastName4968 +4969,4969,"FirstName4969 MiddleName4969",LastName4969 +4970,4970,"FirstName4970 MiddleName4970",LastName4970 +4971,4971,"FirstName4971 MiddleName4971",LastName4971 +4972,4972,"FirstName4972 MiddleName4972",LastName4972 +4973,4973,"FirstName4973 MiddleName4973",LastName4973 +4974,4974,"FirstName4974 MiddleName4974",LastName4974 +4975,4975,"FirstName4975 MiddleName4975",LastName4975 +4976,4976,"FirstName4976 MiddleName4976",LastName4976 +4977,4977,"FirstName4977 MiddleName4977",LastName4977 +4978,4978,"FirstName4978 MiddleName4978",LastName4978 +4979,4979,"FirstName4979 MiddleName4979",LastName4979 +4980,4980,"FirstName4980 MiddleName4980",LastName4980 +4981,4981,"FirstName4981 MiddleName4981",LastName4981 +4982,4982,"FirstName4982 MiddleName4982",LastName4982 +4983,4983,"FirstName4983 MiddleName4983",LastName4983 +4984,4984,"FirstName4984 MiddleName4984",LastName4984 +4985,4985,"FirstName4985 MiddleName4985",LastName4985 +4986,4986,"FirstName4986 MiddleName4986",LastName4986 +4987,4987,"FirstName4987 MiddleName4987",LastName4987 +4988,4988,"FirstName4988 MiddleName4988",LastName4988 +4989,4989,"FirstName4989 MiddleName4989",LastName4989 +4990,4990,"FirstName4990 MiddleName4990",LastName4990 +4991,4991,"FirstName4991 MiddleName4991",LastName4991 +4992,4992,"FirstName4992 MiddleName4992",LastName4992 +4993,4993,"FirstName4993 MiddleName4993",LastName4993 +4994,4994,"FirstName4994 MiddleName4994",LastName4994 +4995,4995,"FirstName4995 MiddleName4995",LastName4995 +4996,4996,"FirstName4996 MiddleName4996",LastName4996 
+4997,4997,"FirstName4997 MiddleName4997",LastName4997 +4998,4998,"FirstName4998 MiddleName4998",LastName4998 +4999,4999,"FirstName4999 MiddleName4999",LastName4999 +5000,5000,"FirstName5000 MiddleName5000",LastName5000 +5001,5001,"FirstName5001 MiddleName5001",LastName5001 +5002,5002,"FirstName5002 MiddleName5002",LastName5002 +5003,5003,"FirstName5003 MiddleName5003",LastName5003 +5004,5004,"FirstName5004 MiddleName5004",LastName5004 +5005,5005,"FirstName5005 MiddleName5005",LastName5005 +5006,5006,"FirstName5006 MiddleName5006",LastName5006 +5007,5007,"FirstName5007 MiddleName5007",LastName5007 +5008,5008,"FirstName5008 MiddleName5008",LastName5008 +5009,5009,"FirstName5009 MiddleName5009",LastName5009 +5010,5010,"FirstName5010 MiddleName5010",LastName5010 +5011,5011,"FirstName5011 MiddleName5011",LastName5011 +5012,5012,"FirstName5012 MiddleName5012",LastName5012 +5013,5013,"FirstName5013 MiddleName5013",LastName5013 +5014,5014,"FirstName5014 MiddleName5014",LastName5014 +5015,5015,"FirstName5015 MiddleName5015",LastName5015 +5016,5016,"FirstName5016 MiddleName5016",LastName5016 +5017,5017,"FirstName5017 MiddleName5017",LastName5017 +5018,5018,"FirstName5018 MiddleName5018",LastName5018 +5019,5019,"FirstName5019 MiddleName5019",LastName5019 +5020,5020,"FirstName5020 MiddleName5020",LastName5020 +5021,5021,"FirstName5021 MiddleName5021",LastName5021 +5022,5022,"FirstName5022 MiddleName5022",LastName5022 +5023,5023,"FirstName5023 MiddleName5023",LastName5023 +5024,5024,"FirstName5024 MiddleName5024",LastName5024 +5025,5025,"FirstName5025 MiddleName5025",LastName5025 +5026,5026,"FirstName5026 MiddleName5026",LastName5026 +5027,5027,"FirstName5027 MiddleName5027",LastName5027 +5028,5028,"FirstName5028 MiddleName5028",LastName5028 +5029,5029,"FirstName5029 MiddleName5029",LastName5029 +5030,5030,"FirstName5030 MiddleName5030",LastName5030 +5031,5031,"FirstName5031 MiddleName5031",LastName5031 +5032,5032,"FirstName5032 MiddleName5032",LastName5032 
+5033,5033,"FirstName5033 MiddleName5033",LastName5033 +5034,5034,"FirstName5034 MiddleName5034",LastName5034 +5035,5035,"FirstName5035 MiddleName5035",LastName5035 +5036,5036,"FirstName5036 MiddleName5036",LastName5036 +5037,5037,"FirstName5037 MiddleName5037",LastName5037 +5038,5038,"FirstName5038 MiddleName5038",LastName5038 +5039,5039,"FirstName5039 MiddleName5039",LastName5039 +5040,5040,"FirstName5040 MiddleName5040",LastName5040 +5041,5041,"FirstName5041 MiddleName5041",LastName5041 +5042,5042,"FirstName5042 MiddleName5042",LastName5042 +5043,5043,"FirstName5043 MiddleName5043",LastName5043 +5044,5044,"FirstName5044 MiddleName5044",LastName5044 +5045,5045,"FirstName5045 MiddleName5045",LastName5045 +5046,5046,"FirstName5046 MiddleName5046",LastName5046 +5047,5047,"FirstName5047 MiddleName5047",LastName5047 +5048,5048,"FirstName5048 MiddleName5048",LastName5048 +5049,5049,"FirstName5049 MiddleName5049",LastName5049 +5050,5050,"FirstName5050 MiddleName5050",LastName5050 +5051,5051,"FirstName5051 MiddleName5051",LastName5051 +5052,5052,"FirstName5052 MiddleName5052",LastName5052 +5053,5053,"FirstName5053 MiddleName5053",LastName5053 +5054,5054,"FirstName5054 MiddleName5054",LastName5054 +5055,5055,"FirstName5055 MiddleName5055",LastName5055 +5056,5056,"FirstName5056 MiddleName5056",LastName5056 +5057,5057,"FirstName5057 MiddleName5057",LastName5057 +5058,5058,"FirstName5058 MiddleName5058",LastName5058 +5059,5059,"FirstName5059 MiddleName5059",LastName5059 +5060,5060,"FirstName5060 MiddleName5060",LastName5060 +5061,5061,"FirstName5061 MiddleName5061",LastName5061 +5062,5062,"FirstName5062 MiddleName5062",LastName5062 +5063,5063,"FirstName5063 MiddleName5063",LastName5063 +5064,5064,"FirstName5064 MiddleName5064",LastName5064 +5065,5065,"FirstName5065 MiddleName5065",LastName5065 +5066,5066,"FirstName5066 MiddleName5066",LastName5066 +5067,5067,"FirstName5067 MiddleName5067",LastName5067 +5068,5068,"FirstName5068 MiddleName5068",LastName5068 
+5069,5069,"FirstName5069 MiddleName5069",LastName5069 +5070,5070,"FirstName5070 MiddleName5070",LastName5070 +5071,5071,"FirstName5071 MiddleName5071",LastName5071 +5072,5072,"FirstName5072 MiddleName5072",LastName5072 +5073,5073,"FirstName5073 MiddleName5073",LastName5073 +5074,5074,"FirstName5074 MiddleName5074",LastName5074 +5075,5075,"FirstName5075 MiddleName5075",LastName5075 +5076,5076,"FirstName5076 MiddleName5076",LastName5076 +5077,5077,"FirstName5077 MiddleName5077",LastName5077 +5078,5078,"FirstName5078 MiddleName5078",LastName5078 +5079,5079,"FirstName5079 MiddleName5079",LastName5079 +5080,5080,"FirstName5080 MiddleName5080",LastName5080 +5081,5081,"FirstName5081 MiddleName5081",LastName5081 +5082,5082,"FirstName5082 MiddleName5082",LastName5082 +5083,5083,"FirstName5083 MiddleName5083",LastName5083 +5084,5084,"FirstName5084 MiddleName5084",LastName5084 +5085,5085,"FirstName5085 MiddleName5085",LastName5085 +5086,5086,"FirstName5086 MiddleName5086",LastName5086 +5087,5087,"FirstName5087 MiddleName5087",LastName5087 +5088,5088,"FirstName5088 MiddleName5088",LastName5088 +5089,5089,"FirstName5089 MiddleName5089",LastName5089 +5090,5090,"FirstName5090 MiddleName5090",LastName5090 +5091,5091,"FirstName5091 MiddleName5091",LastName5091 +5092,5092,"FirstName5092 MiddleName5092",LastName5092 +5093,5093,"FirstName5093 MiddleName5093",LastName5093 +5094,5094,"FirstName5094 MiddleName5094",LastName5094 +5095,5095,"FirstName5095 MiddleName5095",LastName5095 +5096,5096,"FirstName5096 MiddleName5096",LastName5096 +5097,5097,"FirstName5097 MiddleName5097",LastName5097 +5098,5098,"FirstName5098 MiddleName5098",LastName5098 +5099,5099,"FirstName5099 MiddleName5099",LastName5099 +5100,5100,"FirstName5100 MiddleName5100",LastName5100 +5101,5101,"FirstName5101 MiddleName5101",LastName5101 +5102,5102,"FirstName5102 MiddleName5102",LastName5102 +5103,5103,"FirstName5103 MiddleName5103",LastName5103 +5104,5104,"FirstName5104 MiddleName5104",LastName5104 
+5105,5105,"FirstName5105 MiddleName5105",LastName5105 +5106,5106,"FirstName5106 MiddleName5106",LastName5106 +5107,5107,"FirstName5107 MiddleName5107",LastName5107 +5108,5108,"FirstName5108 MiddleName5108",LastName5108 +5109,5109,"FirstName5109 MiddleName5109",LastName5109 +5110,5110,"FirstName5110 MiddleName5110",LastName5110 +5111,5111,"FirstName5111 MiddleName5111",LastName5111 +5112,5112,"FirstName5112 MiddleName5112",LastName5112 +5113,5113,"FirstName5113 MiddleName5113",LastName5113 +5114,5114,"FirstName5114 MiddleName5114",LastName5114 +5115,5115,"FirstName5115 MiddleName5115",LastName5115 +5116,5116,"FirstName5116 MiddleName5116",LastName5116 +5117,5117,"FirstName5117 MiddleName5117",LastName5117 +5118,5118,"FirstName5118 MiddleName5118",LastName5118 +5119,5119,"FirstName5119 MiddleName5119",LastName5119 +5120,5120,"FirstName5120 MiddleName5120",LastName5120 +5121,5121,"FirstName5121 MiddleName5121",LastName5121 +5122,5122,"FirstName5122 MiddleName5122",LastName5122 +5123,5123,"FirstName5123 MiddleName5123",LastName5123 +5124,5124,"FirstName5124 MiddleName5124",LastName5124 +5125,5125,"FirstName5125 MiddleName5125",LastName5125 +5126,5126,"FirstName5126 MiddleName5126",LastName5126 +5127,5127,"FirstName5127 MiddleName5127",LastName5127 +5128,5128,"FirstName5128 MiddleName5128",LastName5128 +5129,5129,"FirstName5129 MiddleName5129",LastName5129 +5130,5130,"FirstName5130 MiddleName5130",LastName5130 +5131,5131,"FirstName5131 MiddleName5131",LastName5131 +5132,5132,"FirstName5132 MiddleName5132",LastName5132 +5133,5133,"FirstName5133 MiddleName5133",LastName5133 +5134,5134,"FirstName5134 MiddleName5134",LastName5134 +5135,5135,"FirstName5135 MiddleName5135",LastName5135 +5136,5136,"FirstName5136 MiddleName5136",LastName5136 +5137,5137,"FirstName5137 MiddleName5137",LastName5137 +5138,5138,"FirstName5138 MiddleName5138",LastName5138 +5139,5139,"FirstName5139 MiddleName5139",LastName5139 +5140,5140,"FirstName5140 MiddleName5140",LastName5140 
+5141,5141,"FirstName5141 MiddleName5141",LastName5141 +5142,5142,"FirstName5142 MiddleName5142",LastName5142 +5143,5143,"FirstName5143 MiddleName5143",LastName5143 +5144,5144,"FirstName5144 MiddleName5144",LastName5144 +5145,5145,"FirstName5145 MiddleName5145",LastName5145 +5146,5146,"FirstName5146 MiddleName5146",LastName5146 +5147,5147,"FirstName5147 MiddleName5147",LastName5147 +5148,5148,"FirstName5148 MiddleName5148",LastName5148 +5149,5149,"FirstName5149 MiddleName5149",LastName5149 +5150,5150,"FirstName5150 MiddleName5150",LastName5150 +5151,5151,"FirstName5151 MiddleName5151",LastName5151 +5152,5152,"FirstName5152 MiddleName5152",LastName5152 +5153,5153,"FirstName5153 MiddleName5153",LastName5153 +5154,5154,"FirstName5154 MiddleName5154",LastName5154 +5155,5155,"FirstName5155 MiddleName5155",LastName5155 +5156,5156,"FirstName5156 MiddleName5156",LastName5156 +5157,5157,"FirstName5157 MiddleName5157",LastName5157 +5158,5158,"FirstName5158 MiddleName5158",LastName5158 +5159,5159,"FirstName5159 MiddleName5159",LastName5159 +5160,5160,"FirstName5160 MiddleName5160",LastName5160 +5161,5161,"FirstName5161 MiddleName5161",LastName5161 +5162,5162,"FirstName5162 MiddleName5162",LastName5162 +5163,5163,"FirstName5163 MiddleName5163",LastName5163 +5164,5164,"FirstName5164 MiddleName5164",LastName5164 +5165,5165,"FirstName5165 MiddleName5165",LastName5165 +5166,5166,"FirstName5166 MiddleName5166",LastName5166 +5167,5167,"FirstName5167 MiddleName5167",LastName5167 +5168,5168,"FirstName5168 MiddleName5168",LastName5168 +5169,5169,"FirstName5169 MiddleName5169",LastName5169 +5170,5170,"FirstName5170 MiddleName5170",LastName5170 +5171,5171,"FirstName5171 MiddleName5171",LastName5171 +5172,5172,"FirstName5172 MiddleName5172",LastName5172 +5173,5173,"FirstName5173 MiddleName5173",LastName5173 +5174,5174,"FirstName5174 MiddleName5174",LastName5174 +5175,5175,"FirstName5175 MiddleName5175",LastName5175 +5176,5176,"FirstName5176 MiddleName5176",LastName5176 
+5177,5177,"FirstName5177 MiddleName5177",LastName5177 +5178,5178,"FirstName5178 MiddleName5178",LastName5178 +5179,5179,"FirstName5179 MiddleName5179",LastName5179 +5180,5180,"FirstName5180 MiddleName5180",LastName5180 +5181,5181,"FirstName5181 MiddleName5181",LastName5181 +5182,5182,"FirstName5182 MiddleName5182",LastName5182 +5183,5183,"FirstName5183 MiddleName5183",LastName5183 +5184,5184,"FirstName5184 MiddleName5184",LastName5184 +5185,5185,"FirstName5185 MiddleName5185",LastName5185 +5186,5186,"FirstName5186 MiddleName5186",LastName5186 +5187,5187,"FirstName5187 MiddleName5187",LastName5187 +5188,5188,"FirstName5188 MiddleName5188",LastName5188 +5189,5189,"FirstName5189 MiddleName5189",LastName5189 +5190,5190,"FirstName5190 MiddleName5190",LastName5190 +5191,5191,"FirstName5191 MiddleName5191",LastName5191 +5192,5192,"FirstName5192 MiddleName5192",LastName5192 +5193,5193,"FirstName5193 MiddleName5193",LastName5193 +5194,5194,"FirstName5194 MiddleName5194",LastName5194 +5195,5195,"FirstName5195 MiddleName5195",LastName5195 +5196,5196,"FirstName5196 MiddleName5196",LastName5196 +5197,5197,"FirstName5197 MiddleName5197",LastName5197 +5198,5198,"FirstName5198 MiddleName5198",LastName5198 +5199,5199,"FirstName5199 MiddleName5199",LastName5199 +5200,5200,"FirstName5200 MiddleName5200",LastName5200 +5201,5201,"FirstName5201 MiddleName5201",LastName5201 +5202,5202,"FirstName5202 MiddleName5202",LastName5202 +5203,5203,"FirstName5203 MiddleName5203",LastName5203 +5204,5204,"FirstName5204 MiddleName5204",LastName5204 +5205,5205,"FirstName5205 MiddleName5205",LastName5205 +5206,5206,"FirstName5206 MiddleName5206",LastName5206 +5207,5207,"FirstName5207 MiddleName5207",LastName5207 +5208,5208,"FirstName5208 MiddleName5208",LastName5208 +5209,5209,"FirstName5209 MiddleName5209",LastName5209 +5210,5210,"FirstName5210 MiddleName5210",LastName5210 +5211,5211,"FirstName5211 MiddleName5211",LastName5211 +5212,5212,"FirstName5212 MiddleName5212",LastName5212 
+5213,5213,"FirstName5213 MiddleName5213",LastName5213 +5214,5214,"FirstName5214 MiddleName5214",LastName5214 +5215,5215,"FirstName5215 MiddleName5215",LastName5215 +5216,5216,"FirstName5216 MiddleName5216",LastName5216 +5217,5217,"FirstName5217 MiddleName5217",LastName5217 +5218,5218,"FirstName5218 MiddleName5218",LastName5218 +5219,5219,"FirstName5219 MiddleName5219",LastName5219 +5220,5220,"FirstName5220 MiddleName5220",LastName5220 +5221,5221,"FirstName5221 MiddleName5221",LastName5221 +5222,5222,"FirstName5222 MiddleName5222",LastName5222 +5223,5223,"FirstName5223 MiddleName5223",LastName5223 +5224,5224,"FirstName5224 MiddleName5224",LastName5224 +5225,5225,"FirstName5225 MiddleName5225",LastName5225 +5226,5226,"FirstName5226 MiddleName5226",LastName5226 +5227,5227,"FirstName5227 MiddleName5227",LastName5227 +5228,5228,"FirstName5228 MiddleName5228",LastName5228 +5229,5229,"FirstName5229 MiddleName5229",LastName5229 +5230,5230,"FirstName5230 MiddleName5230",LastName5230 +5231,5231,"FirstName5231 MiddleName5231",LastName5231 +5232,5232,"FirstName5232 MiddleName5232",LastName5232 +5233,5233,"FirstName5233 MiddleName5233",LastName5233 +5234,5234,"FirstName5234 MiddleName5234",LastName5234 +5235,5235,"FirstName5235 MiddleName5235",LastName5235 +5236,5236,"FirstName5236 MiddleName5236",LastName5236 +5237,5237,"FirstName5237 MiddleName5237",LastName5237 +5238,5238,"FirstName5238 MiddleName5238",LastName5238 +5239,5239,"FirstName5239 MiddleName5239",LastName5239 +5240,5240,"FirstName5240 MiddleName5240",LastName5240 +5241,5241,"FirstName5241 MiddleName5241",LastName5241 +5242,5242,"FirstName5242 MiddleName5242",LastName5242 +5243,5243,"FirstName5243 MiddleName5243",LastName5243 +5244,5244,"FirstName5244 MiddleName5244",LastName5244 +5245,5245,"FirstName5245 MiddleName5245",LastName5245 +5246,5246,"FirstName5246 MiddleName5246",LastName5246 +5247,5247,"FirstName5247 MiddleName5247",LastName5247 +5248,5248,"FirstName5248 MiddleName5248",LastName5248 
+5249,5249,"FirstName5249 MiddleName5249",LastName5249 +5250,5250,"FirstName5250 MiddleName5250",LastName5250 +5251,5251,"FirstName5251 MiddleName5251",LastName5251 +5252,5252,"FirstName5252 MiddleName5252",LastName5252 +5253,5253,"FirstName5253 MiddleName5253",LastName5253 +5254,5254,"FirstName5254 MiddleName5254",LastName5254 +5255,5255,"FirstName5255 MiddleName5255",LastName5255 +5256,5256,"FirstName5256 MiddleName5256",LastName5256 +5257,5257,"FirstName5257 MiddleName5257",LastName5257 +5258,5258,"FirstName5258 MiddleName5258",LastName5258 +5259,5259,"FirstName5259 MiddleName5259",LastName5259 +5260,5260,"FirstName5260 MiddleName5260",LastName5260 +5261,5261,"FirstName5261 MiddleName5261",LastName5261 +5262,5262,"FirstName5262 MiddleName5262",LastName5262 +5263,5263,"FirstName5263 MiddleName5263",LastName5263 +5264,5264,"FirstName5264 MiddleName5264",LastName5264 +5265,5265,"FirstName5265 MiddleName5265",LastName5265 +5266,5266,"FirstName5266 MiddleName5266",LastName5266 +5267,5267,"FirstName5267 MiddleName5267",LastName5267 +5268,5268,"FirstName5268 MiddleName5268",LastName5268 +5269,5269,"FirstName5269 MiddleName5269",LastName5269 +5270,5270,"FirstName5270 MiddleName5270",LastName5270 +5271,5271,"FirstName5271 MiddleName5271",LastName5271 +5272,5272,"FirstName5272 MiddleName5272",LastName5272 +5273,5273,"FirstName5273 MiddleName5273",LastName5273 +5274,5274,"FirstName5274 MiddleName5274",LastName5274 +5275,5275,"FirstName5275 MiddleName5275",LastName5275 +5276,5276,"FirstName5276 MiddleName5276",LastName5276 +5277,5277,"FirstName5277 MiddleName5277",LastName5277 +5278,5278,"FirstName5278 MiddleName5278",LastName5278 +5279,5279,"FirstName5279 MiddleName5279",LastName5279 +5280,5280,"FirstName5280 MiddleName5280",LastName5280 +5281,5281,"FirstName5281 MiddleName5281",LastName5281 +5282,5282,"FirstName5282 MiddleName5282",LastName5282 +5283,5283,"FirstName5283 MiddleName5283",LastName5283 +5284,5284,"FirstName5284 MiddleName5284",LastName5284 
+5285,5285,"FirstName5285 MiddleName5285",LastName5285 +5286,5286,"FirstName5286 MiddleName5286",LastName5286 +5287,5287,"FirstName5287 MiddleName5287",LastName5287 +5288,5288,"FirstName5288 MiddleName5288",LastName5288 +5289,5289,"FirstName5289 MiddleName5289",LastName5289 +5290,5290,"FirstName5290 MiddleName5290",LastName5290 +5291,5291,"FirstName5291 MiddleName5291",LastName5291 +5292,5292,"FirstName5292 MiddleName5292",LastName5292 +5293,5293,"FirstName5293 MiddleName5293",LastName5293 +5294,5294,"FirstName5294 MiddleName5294",LastName5294 +5295,5295,"FirstName5295 MiddleName5295",LastName5295 +5296,5296,"FirstName5296 MiddleName5296",LastName5296 +5297,5297,"FirstName5297 MiddleName5297",LastName5297 +5298,5298,"FirstName5298 MiddleName5298",LastName5298 +5299,5299,"FirstName5299 MiddleName5299",LastName5299 +5300,5300,"FirstName5300 MiddleName5300",LastName5300 +5301,5301,"FirstName5301 MiddleName5301",LastName5301 +5302,5302,"FirstName5302 MiddleName5302",LastName5302 +5303,5303,"FirstName5303 MiddleName5303",LastName5303 +5304,5304,"FirstName5304 MiddleName5304",LastName5304 +5305,5305,"FirstName5305 MiddleName5305",LastName5305 +5306,5306,"FirstName5306 MiddleName5306",LastName5306 +5307,5307,"FirstName5307 MiddleName5307",LastName5307 +5308,5308,"FirstName5308 MiddleName5308",LastName5308 +5309,5309,"FirstName5309 MiddleName5309",LastName5309 +5310,5310,"FirstName5310 MiddleName5310",LastName5310 +5311,5311,"FirstName5311 MiddleName5311",LastName5311 +5312,5312,"FirstName5312 MiddleName5312",LastName5312 +5313,5313,"FirstName5313 MiddleName5313",LastName5313 +5314,5314,"FirstName5314 MiddleName5314",LastName5314 +5315,5315,"FirstName5315 MiddleName5315",LastName5315 +5316,5316,"FirstName5316 MiddleName5316",LastName5316 +5317,5317,"FirstName5317 MiddleName5317",LastName5317 +5318,5318,"FirstName5318 MiddleName5318",LastName5318 +5319,5319,"FirstName5319 MiddleName5319",LastName5319 +5320,5320,"FirstName5320 MiddleName5320",LastName5320 
+5321,5321,"FirstName5321 MiddleName5321",LastName5321 +5322,5322,"FirstName5322 MiddleName5322",LastName5322 +5323,5323,"FirstName5323 MiddleName5323",LastName5323 +5324,5324,"FirstName5324 MiddleName5324",LastName5324 +5325,5325,"FirstName5325 MiddleName5325",LastName5325 +5326,5326,"FirstName5326 MiddleName5326",LastName5326 +5327,5327,"FirstName5327 MiddleName5327",LastName5327 +5328,5328,"FirstName5328 MiddleName5328",LastName5328 +5329,5329,"FirstName5329 MiddleName5329",LastName5329 +5330,5330,"FirstName5330 MiddleName5330",LastName5330 +5331,5331,"FirstName5331 MiddleName5331",LastName5331 +5332,5332,"FirstName5332 MiddleName5332",LastName5332 +5333,5333,"FirstName5333 MiddleName5333",LastName5333 +5334,5334,"FirstName5334 MiddleName5334",LastName5334 +5335,5335,"FirstName5335 MiddleName5335",LastName5335 +5336,5336,"FirstName5336 MiddleName5336",LastName5336 +5337,5337,"FirstName5337 MiddleName5337",LastName5337 +5338,5338,"FirstName5338 MiddleName5338",LastName5338 +5339,5339,"FirstName5339 MiddleName5339",LastName5339 +5340,5340,"FirstName5340 MiddleName5340",LastName5340 +5341,5341,"FirstName5341 MiddleName5341",LastName5341 +5342,5342,"FirstName5342 MiddleName5342",LastName5342 +5343,5343,"FirstName5343 MiddleName5343",LastName5343 +5344,5344,"FirstName5344 MiddleName5344",LastName5344 +5345,5345,"FirstName5345 MiddleName5345",LastName5345 +5346,5346,"FirstName5346 MiddleName5346",LastName5346 +5347,5347,"FirstName5347 MiddleName5347",LastName5347 +5348,5348,"FirstName5348 MiddleName5348",LastName5348 +5349,5349,"FirstName5349 MiddleName5349",LastName5349 +5350,5350,"FirstName5350 MiddleName5350",LastName5350 +5351,5351,"FirstName5351 MiddleName5351",LastName5351 +5352,5352,"FirstName5352 MiddleName5352",LastName5352 +5353,5353,"FirstName5353 MiddleName5353",LastName5353 +5354,5354,"FirstName5354 MiddleName5354",LastName5354 +5355,5355,"FirstName5355 MiddleName5355",LastName5355 +5356,5356,"FirstName5356 MiddleName5356",LastName5356 
+5357,5357,"FirstName5357 MiddleName5357",LastName5357 +5358,5358,"FirstName5358 MiddleName5358",LastName5358 +5359,5359,"FirstName5359 MiddleName5359",LastName5359 +5360,5360,"FirstName5360 MiddleName5360",LastName5360 +5361,5361,"FirstName5361 MiddleName5361",LastName5361 +5362,5362,"FirstName5362 MiddleName5362",LastName5362 +5363,5363,"FirstName5363 MiddleName5363",LastName5363 +5364,5364,"FirstName5364 MiddleName5364",LastName5364 +5365,5365,"FirstName5365 MiddleName5365",LastName5365 +5366,5366,"FirstName5366 MiddleName5366",LastName5366 +5367,5367,"FirstName5367 MiddleName5367",LastName5367 +5368,5368,"FirstName5368 MiddleName5368",LastName5368 +5369,5369,"FirstName5369 MiddleName5369",LastName5369 +5370,5370,"FirstName5370 MiddleName5370",LastName5370 +5371,5371,"FirstName5371 MiddleName5371",LastName5371 +5372,5372,"FirstName5372 MiddleName5372",LastName5372 +5373,5373,"FirstName5373 MiddleName5373",LastName5373 +5374,5374,"FirstName5374 MiddleName5374",LastName5374 +5375,5375,"FirstName5375 MiddleName5375",LastName5375 +5376,5376,"FirstName5376 MiddleName5376",LastName5376 +5377,5377,"FirstName5377 MiddleName5377",LastName5377 +5378,5378,"FirstName5378 MiddleName5378",LastName5378 +5379,5379,"FirstName5379 MiddleName5379",LastName5379 +5380,5380,"FirstName5380 MiddleName5380",LastName5380 +5381,5381,"FirstName5381 MiddleName5381",LastName5381 +5382,5382,"FirstName5382 MiddleName5382",LastName5382 +5383,5383,"FirstName5383 MiddleName5383",LastName5383 +5384,5384,"FirstName5384 MiddleName5384",LastName5384 +5385,5385,"FirstName5385 MiddleName5385",LastName5385 +5386,5386,"FirstName5386 MiddleName5386",LastName5386 +5387,5387,"FirstName5387 MiddleName5387",LastName5387 +5388,5388,"FirstName5388 MiddleName5388",LastName5388 +5389,5389,"FirstName5389 MiddleName5389",LastName5389 +5390,5390,"FirstName5390 MiddleName5390",LastName5390 +5391,5391,"FirstName5391 MiddleName5391",LastName5391 +5392,5392,"FirstName5392 MiddleName5392",LastName5392 
+5393,5393,"FirstName5393 MiddleName5393",LastName5393 +5394,5394,"FirstName5394 MiddleName5394",LastName5394 +5395,5395,"FirstName5395 MiddleName5395",LastName5395 +5396,5396,"FirstName5396 MiddleName5396",LastName5396 +5397,5397,"FirstName5397 MiddleName5397",LastName5397 +5398,5398,"FirstName5398 MiddleName5398",LastName5398 +5399,5399,"FirstName5399 MiddleName5399",LastName5399 +5400,5400,"FirstName5400 MiddleName5400",LastName5400 +5401,5401,"FirstName5401 MiddleName5401",LastName5401 +5402,5402,"FirstName5402 MiddleName5402",LastName5402 +5403,5403,"FirstName5403 MiddleName5403",LastName5403 +5404,5404,"FirstName5404 MiddleName5404",LastName5404 +5405,5405,"FirstName5405 MiddleName5405",LastName5405 +5406,5406,"FirstName5406 MiddleName5406",LastName5406 +5407,5407,"FirstName5407 MiddleName5407",LastName5407 +5408,5408,"FirstName5408 MiddleName5408",LastName5408 +5409,5409,"FirstName5409 MiddleName5409",LastName5409 +5410,5410,"FirstName5410 MiddleName5410",LastName5410 +5411,5411,"FirstName5411 MiddleName5411",LastName5411 +5412,5412,"FirstName5412 MiddleName5412",LastName5412 +5413,5413,"FirstName5413 MiddleName5413",LastName5413 +5414,5414,"FirstName5414 MiddleName5414",LastName5414 +5415,5415,"FirstName5415 MiddleName5415",LastName5415 +5416,5416,"FirstName5416 MiddleName5416",LastName5416 +5417,5417,"FirstName5417 MiddleName5417",LastName5417 +5418,5418,"FirstName5418 MiddleName5418",LastName5418 +5419,5419,"FirstName5419 MiddleName5419",LastName5419 +5420,5420,"FirstName5420 MiddleName5420",LastName5420 +5421,5421,"FirstName5421 MiddleName5421",LastName5421 +5422,5422,"FirstName5422 MiddleName5422",LastName5422 +5423,5423,"FirstName5423 MiddleName5423",LastName5423 +5424,5424,"FirstName5424 MiddleName5424",LastName5424 +5425,5425,"FirstName5425 MiddleName5425",LastName5425 +5426,5426,"FirstName5426 MiddleName5426",LastName5426 +5427,5427,"FirstName5427 MiddleName5427",LastName5427 +5428,5428,"FirstName5428 MiddleName5428",LastName5428 
+5429,5429,"FirstName5429 MiddleName5429",LastName5429 +5430,5430,"FirstName5430 MiddleName5430",LastName5430 +5431,5431,"FirstName5431 MiddleName5431",LastName5431 +5432,5432,"FirstName5432 MiddleName5432",LastName5432 +5433,5433,"FirstName5433 MiddleName5433",LastName5433 +5434,5434,"FirstName5434 MiddleName5434",LastName5434 +5435,5435,"FirstName5435 MiddleName5435",LastName5435 +5436,5436,"FirstName5436 MiddleName5436",LastName5436 +5437,5437,"FirstName5437 MiddleName5437",LastName5437 +5438,5438,"FirstName5438 MiddleName5438",LastName5438 +5439,5439,"FirstName5439 MiddleName5439",LastName5439 +5440,5440,"FirstName5440 MiddleName5440",LastName5440 +5441,5441,"FirstName5441 MiddleName5441",LastName5441 +5442,5442,"FirstName5442 MiddleName5442",LastName5442 +5443,5443,"FirstName5443 MiddleName5443",LastName5443 +5444,5444,"FirstName5444 MiddleName5444",LastName5444 +5445,5445,"FirstName5445 MiddleName5445",LastName5445 +5446,5446,"FirstName5446 MiddleName5446",LastName5446 +5447,5447,"FirstName5447 MiddleName5447",LastName5447 +5448,5448,"FirstName5448 MiddleName5448",LastName5448 +5449,5449,"FirstName5449 MiddleName5449",LastName5449 +5450,5450,"FirstName5450 MiddleName5450",LastName5450 +5451,5451,"FirstName5451 MiddleName5451",LastName5451 +5452,5452,"FirstName5452 MiddleName5452",LastName5452 +5453,5453,"FirstName5453 MiddleName5453",LastName5453 +5454,5454,"FirstName5454 MiddleName5454",LastName5454 +5455,5455,"FirstName5455 MiddleName5455",LastName5455 +5456,5456,"FirstName5456 MiddleName5456",LastName5456 +5457,5457,"FirstName5457 MiddleName5457",LastName5457 +5458,5458,"FirstName5458 MiddleName5458",LastName5458 +5459,5459,"FirstName5459 MiddleName5459",LastName5459 +5460,5460,"FirstName5460 MiddleName5460",LastName5460 +5461,5461,"FirstName5461 MiddleName5461",LastName5461 +5462,5462,"FirstName5462 MiddleName5462",LastName5462 +5463,5463,"FirstName5463 MiddleName5463",LastName5463 +5464,5464,"FirstName5464 MiddleName5464",LastName5464 
+5465,5465,"FirstName5465 MiddleName5465",LastName5465 +5466,5466,"FirstName5466 MiddleName5466",LastName5466 +5467,5467,"FirstName5467 MiddleName5467",LastName5467 +5468,5468,"FirstName5468 MiddleName5468",LastName5468 +5469,5469,"FirstName5469 MiddleName5469",LastName5469 +5470,5470,"FirstName5470 MiddleName5470",LastName5470 +5471,5471,"FirstName5471 MiddleName5471",LastName5471 +5472,5472,"FirstName5472 MiddleName5472",LastName5472 +5473,5473,"FirstName5473 MiddleName5473",LastName5473 +5474,5474,"FirstName5474 MiddleName5474",LastName5474 +5475,5475,"FirstName5475 MiddleName5475",LastName5475 +5476,5476,"FirstName5476 MiddleName5476",LastName5476 +5477,5477,"FirstName5477 MiddleName5477",LastName5477 +5478,5478,"FirstName5478 MiddleName5478",LastName5478 +5479,5479,"FirstName5479 MiddleName5479",LastName5479 +5480,5480,"FirstName5480 MiddleName5480",LastName5480 +5481,5481,"FirstName5481 MiddleName5481",LastName5481 +5482,5482,"FirstName5482 MiddleName5482",LastName5482 +5483,5483,"FirstName5483 MiddleName5483",LastName5483 +5484,5484,"FirstName5484 MiddleName5484",LastName5484 +5485,5485,"FirstName5485 MiddleName5485",LastName5485 +5486,5486,"FirstName5486 MiddleName5486",LastName5486 +5487,5487,"FirstName5487 MiddleName5487",LastName5487 +5488,5488,"FirstName5488 MiddleName5488",LastName5488 +5489,5489,"FirstName5489 MiddleName5489",LastName5489 +5490,5490,"FirstName5490 MiddleName5490",LastName5490 +5491,5491,"FirstName5491 MiddleName5491",LastName5491 +5492,5492,"FirstName5492 MiddleName5492",LastName5492 +5493,5493,"FirstName5493 MiddleName5493",LastName5493 +5494,5494,"FirstName5494 MiddleName5494",LastName5494 +5495,5495,"FirstName5495 MiddleName5495",LastName5495 +5496,5496,"FirstName5496 MiddleName5496",LastName5496 +5497,5497,"FirstName5497 MiddleName5497",LastName5497 +5498,5498,"FirstName5498 MiddleName5498",LastName5498 +5499,5499,"FirstName5499 MiddleName5499",LastName5499 +5500,5500,"FirstName5500 MiddleName5500",LastName5500 
+5501,5501,"FirstName5501 MiddleName5501",LastName5501 +5502,5502,"FirstName5502 MiddleName5502",LastName5502 +5503,5503,"FirstName5503 MiddleName5503",LastName5503 +5504,5504,"FirstName5504 MiddleName5504",LastName5504 +5505,5505,"FirstName5505 MiddleName5505",LastName5505 +5506,5506,"FirstName5506 MiddleName5506",LastName5506 +5507,5507,"FirstName5507 MiddleName5507",LastName5507 +5508,5508,"FirstName5508 MiddleName5508",LastName5508 +5509,5509,"FirstName5509 MiddleName5509",LastName5509 +5510,5510,"FirstName5510 MiddleName5510",LastName5510 +5511,5511,"FirstName5511 MiddleName5511",LastName5511 +5512,5512,"FirstName5512 MiddleName5512",LastName5512 +5513,5513,"FirstName5513 MiddleName5513",LastName5513 +5514,5514,"FirstName5514 MiddleName5514",LastName5514 +5515,5515,"FirstName5515 MiddleName5515",LastName5515 +5516,5516,"FirstName5516 MiddleName5516",LastName5516 +5517,5517,"FirstName5517 MiddleName5517",LastName5517 +5518,5518,"FirstName5518 MiddleName5518",LastName5518 +5519,5519,"FirstName5519 MiddleName5519",LastName5519 +5520,5520,"FirstName5520 MiddleName5520",LastName5520 +5521,5521,"FirstName5521 MiddleName5521",LastName5521 +5522,5522,"FirstName5522 MiddleName5522",LastName5522 +5523,5523,"FirstName5523 MiddleName5523",LastName5523 +5524,5524,"FirstName5524 MiddleName5524",LastName5524 +5525,5525,"FirstName5525 MiddleName5525",LastName5525 +5526,5526,"FirstName5526 MiddleName5526",LastName5526 +5527,5527,"FirstName5527 MiddleName5527",LastName5527 +5528,5528,"FirstName5528 MiddleName5528",LastName5528 +5529,5529,"FirstName5529 MiddleName5529",LastName5529 +5530,5530,"FirstName5530 MiddleName5530",LastName5530 +5531,5531,"FirstName5531 MiddleName5531",LastName5531 +5532,5532,"FirstName5532 MiddleName5532",LastName5532 +5533,5533,"FirstName5533 MiddleName5533",LastName5533 +5534,5534,"FirstName5534 MiddleName5534",LastName5534 +5535,5535,"FirstName5535 MiddleName5535",LastName5535 +5536,5536,"FirstName5536 MiddleName5536",LastName5536 
+5537,5537,"FirstName5537 MiddleName5537",LastName5537 +5538,5538,"FirstName5538 MiddleName5538",LastName5538 +5539,5539,"FirstName5539 MiddleName5539",LastName5539 +5540,5540,"FirstName5540 MiddleName5540",LastName5540 +5541,5541,"FirstName5541 MiddleName5541",LastName5541 +5542,5542,"FirstName5542 MiddleName5542",LastName5542 +5543,5543,"FirstName5543 MiddleName5543",LastName5543 +5544,5544,"FirstName5544 MiddleName5544",LastName5544 +5545,5545,"FirstName5545 MiddleName5545",LastName5545 +5546,5546,"FirstName5546 MiddleName5546",LastName5546 +5547,5547,"FirstName5547 MiddleName5547",LastName5547 +5548,5548,"FirstName5548 MiddleName5548",LastName5548 +5549,5549,"FirstName5549 MiddleName5549",LastName5549 +5550,5550,"FirstName5550 MiddleName5550",LastName5550 +5551,5551,"FirstName5551 MiddleName5551",LastName5551 +5552,5552,"FirstName5552 MiddleName5552",LastName5552 +5553,5553,"FirstName5553 MiddleName5553",LastName5553 +5554,5554,"FirstName5554 MiddleName5554",LastName5554 +5555,5555,"FirstName5555 MiddleName5555",LastName5555 +5556,5556,"FirstName5556 MiddleName5556",LastName5556 +5557,5557,"FirstName5557 MiddleName5557",LastName5557 +5558,5558,"FirstName5558 MiddleName5558",LastName5558 +5559,5559,"FirstName5559 MiddleName5559",LastName5559 +5560,5560,"FirstName5560 MiddleName5560",LastName5560 +5561,5561,"FirstName5561 MiddleName5561",LastName5561 +5562,5562,"FirstName5562 MiddleName5562",LastName5562 +5563,5563,"FirstName5563 MiddleName5563",LastName5563 +5564,5564,"FirstName5564 MiddleName5564",LastName5564 +5565,5565,"FirstName5565 MiddleName5565",LastName5565 +5566,5566,"FirstName5566 MiddleName5566",LastName5566 +5567,5567,"FirstName5567 MiddleName5567",LastName5567 +5568,5568,"FirstName5568 MiddleName5568",LastName5568 +5569,5569,"FirstName5569 MiddleName5569",LastName5569 +5570,5570,"FirstName5570 MiddleName5570",LastName5570 +5571,5571,"FirstName5571 MiddleName5571",LastName5571 +5572,5572,"FirstName5572 MiddleName5572",LastName5572 
+5573,5573,"FirstName5573 MiddleName5573",LastName5573 +5574,5574,"FirstName5574 MiddleName5574",LastName5574 +5575,5575,"FirstName5575 MiddleName5575",LastName5575 +5576,5576,"FirstName5576 MiddleName5576",LastName5576 +5577,5577,"FirstName5577 MiddleName5577",LastName5577 +5578,5578,"FirstName5578 MiddleName5578",LastName5578 +5579,5579,"FirstName5579 MiddleName5579",LastName5579 +5580,5580,"FirstName5580 MiddleName5580",LastName5580 +5581,5581,"FirstName5581 MiddleName5581",LastName5581 +5582,5582,"FirstName5582 MiddleName5582",LastName5582 +5583,5583,"FirstName5583 MiddleName5583",LastName5583 +5584,5584,"FirstName5584 MiddleName5584",LastName5584 +5585,5585,"FirstName5585 MiddleName5585",LastName5585 +5586,5586,"FirstName5586 MiddleName5586",LastName5586 +5587,5587,"FirstName5587 MiddleName5587",LastName5587 +5588,5588,"FirstName5588 MiddleName5588",LastName5588 +5589,5589,"FirstName5589 MiddleName5589",LastName5589 +5590,5590,"FirstName5590 MiddleName5590",LastName5590 +5591,5591,"FirstName5591 MiddleName5591",LastName5591 +5592,5592,"FirstName5592 MiddleName5592",LastName5592 +5593,5593,"FirstName5593 MiddleName5593",LastName5593 +5594,5594,"FirstName5594 MiddleName5594",LastName5594 +5595,5595,"FirstName5595 MiddleName5595",LastName5595 +5596,5596,"FirstName5596 MiddleName5596",LastName5596 +5597,5597,"FirstName5597 MiddleName5597",LastName5597 +5598,5598,"FirstName5598 MiddleName5598",LastName5598 +5599,5599,"FirstName5599 MiddleName5599",LastName5599 +5600,5600,"FirstName5600 MiddleName5600",LastName5600 +5601,5601,"FirstName5601 MiddleName5601",LastName5601 +5602,5602,"FirstName5602 MiddleName5602",LastName5602 +5603,5603,"FirstName5603 MiddleName5603",LastName5603 +5604,5604,"FirstName5604 MiddleName5604",LastName5604 +5605,5605,"FirstName5605 MiddleName5605",LastName5605 +5606,5606,"FirstName5606 MiddleName5606",LastName5606 +5607,5607,"FirstName5607 MiddleName5607",LastName5607 +5608,5608,"FirstName5608 MiddleName5608",LastName5608 
+5609,5609,"FirstName5609 MiddleName5609",LastName5609 +5610,5610,"FirstName5610 MiddleName5610",LastName5610 +5611,5611,"FirstName5611 MiddleName5611",LastName5611 +5612,5612,"FirstName5612 MiddleName5612",LastName5612 +5613,5613,"FirstName5613 MiddleName5613",LastName5613 +5614,5614,"FirstName5614 MiddleName5614",LastName5614 +5615,5615,"FirstName5615 MiddleName5615",LastName5615 +5616,5616,"FirstName5616 MiddleName5616",LastName5616 +5617,5617,"FirstName5617 MiddleName5617",LastName5617 +5618,5618,"FirstName5618 MiddleName5618",LastName5618 +5619,5619,"FirstName5619 MiddleName5619",LastName5619 +5620,5620,"FirstName5620 MiddleName5620",LastName5620 +5621,5621,"FirstName5621 MiddleName5621",LastName5621 +5622,5622,"FirstName5622 MiddleName5622",LastName5622 +5623,5623,"FirstName5623 MiddleName5623",LastName5623 +5624,5624,"FirstName5624 MiddleName5624",LastName5624 +5625,5625,"FirstName5625 MiddleName5625",LastName5625 +5626,5626,"FirstName5626 MiddleName5626",LastName5626 +5627,5627,"FirstName5627 MiddleName5627",LastName5627 +5628,5628,"FirstName5628 MiddleName5628",LastName5628 +5629,5629,"FirstName5629 MiddleName5629",LastName5629 +5630,5630,"FirstName5630 MiddleName5630",LastName5630 +5631,5631,"FirstName5631 MiddleName5631",LastName5631 +5632,5632,"FirstName5632 MiddleName5632",LastName5632 +5633,5633,"FirstName5633 MiddleName5633",LastName5633 +5634,5634,"FirstName5634 MiddleName5634",LastName5634 +5635,5635,"FirstName5635 MiddleName5635",LastName5635 +5636,5636,"FirstName5636 MiddleName5636",LastName5636 +5637,5637,"FirstName5637 MiddleName5637",LastName5637 +5638,5638,"FirstName5638 MiddleName5638",LastName5638 +5639,5639,"FirstName5639 MiddleName5639",LastName5639 +5640,5640,"FirstName5640 MiddleName5640",LastName5640 +5641,5641,"FirstName5641 MiddleName5641",LastName5641 +5642,5642,"FirstName5642 MiddleName5642",LastName5642 +5643,5643,"FirstName5643 MiddleName5643",LastName5643 +5644,5644,"FirstName5644 MiddleName5644",LastName5644 
+5645,5645,"FirstName5645 MiddleName5645",LastName5645 +5646,5646,"FirstName5646 MiddleName5646",LastName5646 +5647,5647,"FirstName5647 MiddleName5647",LastName5647 +5648,5648,"FirstName5648 MiddleName5648",LastName5648 +5649,5649,"FirstName5649 MiddleName5649",LastName5649 +5650,5650,"FirstName5650 MiddleName5650",LastName5650 +5651,5651,"FirstName5651 MiddleName5651",LastName5651 +5652,5652,"FirstName5652 MiddleName5652",LastName5652 +5653,5653,"FirstName5653 MiddleName5653",LastName5653 +5654,5654,"FirstName5654 MiddleName5654",LastName5654 +5655,5655,"FirstName5655 MiddleName5655",LastName5655 +5656,5656,"FirstName5656 MiddleName5656",LastName5656 +5657,5657,"FirstName5657 MiddleName5657",LastName5657 +5658,5658,"FirstName5658 MiddleName5658",LastName5658 +5659,5659,"FirstName5659 MiddleName5659",LastName5659 +5660,5660,"FirstName5660 MiddleName5660",LastName5660 +5661,5661,"FirstName5661 MiddleName5661",LastName5661 +5662,5662,"FirstName5662 MiddleName5662",LastName5662 +5663,5663,"FirstName5663 MiddleName5663",LastName5663 +5664,5664,"FirstName5664 MiddleName5664",LastName5664 +5665,5665,"FirstName5665 MiddleName5665",LastName5665 +5666,5666,"FirstName5666 MiddleName5666",LastName5666 +5667,5667,"FirstName5667 MiddleName5667",LastName5667 +5668,5668,"FirstName5668 MiddleName5668",LastName5668 +5669,5669,"FirstName5669 MiddleName5669",LastName5669 +5670,5670,"FirstName5670 MiddleName5670",LastName5670 +5671,5671,"FirstName5671 MiddleName5671",LastName5671 +5672,5672,"FirstName5672 MiddleName5672",LastName5672 +5673,5673,"FirstName5673 MiddleName5673",LastName5673 +5674,5674,"FirstName5674 MiddleName5674",LastName5674 +5675,5675,"FirstName5675 MiddleName5675",LastName5675 +5676,5676,"FirstName5676 MiddleName5676",LastName5676 +5677,5677,"FirstName5677 MiddleName5677",LastName5677 +5678,5678,"FirstName5678 MiddleName5678",LastName5678 +5679,5679,"FirstName5679 MiddleName5679",LastName5679 +5680,5680,"FirstName5680 MiddleName5680",LastName5680 
+5681,5681,"FirstName5681 MiddleName5681",LastName5681 +5682,5682,"FirstName5682 MiddleName5682",LastName5682 +5683,5683,"FirstName5683 MiddleName5683",LastName5683 +5684,5684,"FirstName5684 MiddleName5684",LastName5684 +5685,5685,"FirstName5685 MiddleName5685",LastName5685 +5686,5686,"FirstName5686 MiddleName5686",LastName5686 +5687,5687,"FirstName5687 MiddleName5687",LastName5687 +5688,5688,"FirstName5688 MiddleName5688",LastName5688 +5689,5689,"FirstName5689 MiddleName5689",LastName5689 +5690,5690,"FirstName5690 MiddleName5690",LastName5690 +5691,5691,"FirstName5691 MiddleName5691",LastName5691 +5692,5692,"FirstName5692 MiddleName5692",LastName5692 +5693,5693,"FirstName5693 MiddleName5693",LastName5693 +5694,5694,"FirstName5694 MiddleName5694",LastName5694 +5695,5695,"FirstName5695 MiddleName5695",LastName5695 +5696,5696,"FirstName5696 MiddleName5696",LastName5696 +5697,5697,"FirstName5697 MiddleName5697",LastName5697 +5698,5698,"FirstName5698 MiddleName5698",LastName5698 +5699,5699,"FirstName5699 MiddleName5699",LastName5699 +5700,5700,"FirstName5700 MiddleName5700",LastName5700 +5701,5701,"FirstName5701 MiddleName5701",LastName5701 +5702,5702,"FirstName5702 MiddleName5702",LastName5702 +5703,5703,"FirstName5703 MiddleName5703",LastName5703 +5704,5704,"FirstName5704 MiddleName5704",LastName5704 +5705,5705,"FirstName5705 MiddleName5705",LastName5705 +5706,5706,"FirstName5706 MiddleName5706",LastName5706 +5707,5707,"FirstName5707 MiddleName5707",LastName5707 +5708,5708,"FirstName5708 MiddleName5708",LastName5708 +5709,5709,"FirstName5709 MiddleName5709",LastName5709 +5710,5710,"FirstName5710 MiddleName5710",LastName5710 +5711,5711,"FirstName5711 MiddleName5711",LastName5711 +5712,5712,"FirstName5712 MiddleName5712",LastName5712 +5713,5713,"FirstName5713 MiddleName5713",LastName5713 +5714,5714,"FirstName5714 MiddleName5714",LastName5714 +5715,5715,"FirstName5715 MiddleName5715",LastName5715 +5716,5716,"FirstName5716 MiddleName5716",LastName5716 
+5717,5717,"FirstName5717 MiddleName5717",LastName5717 +5718,5718,"FirstName5718 MiddleName5718",LastName5718 +5719,5719,"FirstName5719 MiddleName5719",LastName5719 +5720,5720,"FirstName5720 MiddleName5720",LastName5720 +5721,5721,"FirstName5721 MiddleName5721",LastName5721 +5722,5722,"FirstName5722 MiddleName5722",LastName5722 +5723,5723,"FirstName5723 MiddleName5723",LastName5723 +5724,5724,"FirstName5724 MiddleName5724",LastName5724 +5725,5725,"FirstName5725 MiddleName5725",LastName5725 +5726,5726,"FirstName5726 MiddleName5726",LastName5726 +5727,5727,"FirstName5727 MiddleName5727",LastName5727 +5728,5728,"FirstName5728 MiddleName5728",LastName5728 +5729,5729,"FirstName5729 MiddleName5729",LastName5729 +5730,5730,"FirstName5730 MiddleName5730",LastName5730 +5731,5731,"FirstName5731 MiddleName5731",LastName5731 +5732,5732,"FirstName5732 MiddleName5732",LastName5732 +5733,5733,"FirstName5733 MiddleName5733",LastName5733 +5734,5734,"FirstName5734 MiddleName5734",LastName5734 +5735,5735,"FirstName5735 MiddleName5735",LastName5735 +5736,5736,"FirstName5736 MiddleName5736",LastName5736 +5737,5737,"FirstName5737 MiddleName5737",LastName5737 +5738,5738,"FirstName5738 MiddleName5738",LastName5738 +5739,5739,"FirstName5739 MiddleName5739",LastName5739 +5740,5740,"FirstName5740 MiddleName5740",LastName5740 +5741,5741,"FirstName5741 MiddleName5741",LastName5741 +5742,5742,"FirstName5742 MiddleName5742",LastName5742 +5743,5743,"FirstName5743 MiddleName5743",LastName5743 +5744,5744,"FirstName5744 MiddleName5744",LastName5744 +5745,5745,"FirstName5745 MiddleName5745",LastName5745 +5746,5746,"FirstName5746 MiddleName5746",LastName5746 +5747,5747,"FirstName5747 MiddleName5747",LastName5747 +5748,5748,"FirstName5748 MiddleName5748",LastName5748 +5749,5749,"FirstName5749 MiddleName5749",LastName5749 +5750,5750,"FirstName5750 MiddleName5750",LastName5750 +5751,5751,"FirstName5751 MiddleName5751",LastName5751 +5752,5752,"FirstName5752 MiddleName5752",LastName5752 
+5753,5753,"FirstName5753 MiddleName5753",LastName5753 +5754,5754,"FirstName5754 MiddleName5754",LastName5754 +5755,5755,"FirstName5755 MiddleName5755",LastName5755 +5756,5756,"FirstName5756 MiddleName5756",LastName5756 +5757,5757,"FirstName5757 MiddleName5757",LastName5757 +5758,5758,"FirstName5758 MiddleName5758",LastName5758 +5759,5759,"FirstName5759 MiddleName5759",LastName5759 +5760,5760,"FirstName5760 MiddleName5760",LastName5760 +5761,5761,"FirstName5761 MiddleName5761",LastName5761 +5762,5762,"FirstName5762 MiddleName5762",LastName5762 +5763,5763,"FirstName5763 MiddleName5763",LastName5763 +5764,5764,"FirstName5764 MiddleName5764",LastName5764 +5765,5765,"FirstName5765 MiddleName5765",LastName5765 +5766,5766,"FirstName5766 MiddleName5766",LastName5766 +5767,5767,"FirstName5767 MiddleName5767",LastName5767 +5768,5768,"FirstName5768 MiddleName5768",LastName5768 +5769,5769,"FirstName5769 MiddleName5769",LastName5769 +5770,5770,"FirstName5770 MiddleName5770",LastName5770 +5771,5771,"FirstName5771 MiddleName5771",LastName5771 +5772,5772,"FirstName5772 MiddleName5772",LastName5772 +5773,5773,"FirstName5773 MiddleName5773",LastName5773 +5774,5774,"FirstName5774 MiddleName5774",LastName5774 +5775,5775,"FirstName5775 MiddleName5775",LastName5775 +5776,5776,"FirstName5776 MiddleName5776",LastName5776 +5777,5777,"FirstName5777 MiddleName5777",LastName5777 +5778,5778,"FirstName5778 MiddleName5778",LastName5778 +5779,5779,"FirstName5779 MiddleName5779",LastName5779 +5780,5780,"FirstName5780 MiddleName5780",LastName5780 +5781,5781,"FirstName5781 MiddleName5781",LastName5781 +5782,5782,"FirstName5782 MiddleName5782",LastName5782 +5783,5783,"FirstName5783 MiddleName5783",LastName5783 +5784,5784,"FirstName5784 MiddleName5784",LastName5784 +5785,5785,"FirstName5785 MiddleName5785",LastName5785 +5786,5786,"FirstName5786 MiddleName5786",LastName5786 +5787,5787,"FirstName5787 MiddleName5787",LastName5787 +5788,5788,"FirstName5788 MiddleName5788",LastName5788 
+5789,5789,"FirstName5789 MiddleName5789",LastName5789 +5790,5790,"FirstName5790 MiddleName5790",LastName5790 +5791,5791,"FirstName5791 MiddleName5791",LastName5791 +5792,5792,"FirstName5792 MiddleName5792",LastName5792 +5793,5793,"FirstName5793 MiddleName5793",LastName5793 +5794,5794,"FirstName5794 MiddleName5794",LastName5794 +5795,5795,"FirstName5795 MiddleName5795",LastName5795 +5796,5796,"FirstName5796 MiddleName5796",LastName5796 +5797,5797,"FirstName5797 MiddleName5797",LastName5797 +5798,5798,"FirstName5798 MiddleName5798",LastName5798 +5799,5799,"FirstName5799 MiddleName5799",LastName5799 +5800,5800,"FirstName5800 MiddleName5800",LastName5800 +5801,5801,"FirstName5801 MiddleName5801",LastName5801 +5802,5802,"FirstName5802 MiddleName5802",LastName5802 +5803,5803,"FirstName5803 MiddleName5803",LastName5803 +5804,5804,"FirstName5804 MiddleName5804",LastName5804 +5805,5805,"FirstName5805 MiddleName5805",LastName5805 +5806,5806,"FirstName5806 MiddleName5806",LastName5806 +5807,5807,"FirstName5807 MiddleName5807",LastName5807 +5808,5808,"FirstName5808 MiddleName5808",LastName5808 +5809,5809,"FirstName5809 MiddleName5809",LastName5809 +5810,5810,"FirstName5810 MiddleName5810",LastName5810 +5811,5811,"FirstName5811 MiddleName5811",LastName5811 +5812,5812,"FirstName5812 MiddleName5812",LastName5812 +5813,5813,"FirstName5813 MiddleName5813",LastName5813 +5814,5814,"FirstName5814 MiddleName5814",LastName5814 +5815,5815,"FirstName5815 MiddleName5815",LastName5815 +5816,5816,"FirstName5816 MiddleName5816",LastName5816 +5817,5817,"FirstName5817 MiddleName5817",LastName5817 +5818,5818,"FirstName5818 MiddleName5818",LastName5818 +5819,5819,"FirstName5819 MiddleName5819",LastName5819 +5820,5820,"FirstName5820 MiddleName5820",LastName5820 +5821,5821,"FirstName5821 MiddleName5821",LastName5821 +5822,5822,"FirstName5822 MiddleName5822",LastName5822 +5823,5823,"FirstName5823 MiddleName5823",LastName5823 +5824,5824,"FirstName5824 MiddleName5824",LastName5824 
+5825,5825,"FirstName5825 MiddleName5825",LastName5825 +5826,5826,"FirstName5826 MiddleName5826",LastName5826 +5827,5827,"FirstName5827 MiddleName5827",LastName5827 +5828,5828,"FirstName5828 MiddleName5828",LastName5828 +5829,5829,"FirstName5829 MiddleName5829",LastName5829 +5830,5830,"FirstName5830 MiddleName5830",LastName5830 +5831,5831,"FirstName5831 MiddleName5831",LastName5831 +5832,5832,"FirstName5832 MiddleName5832",LastName5832 +5833,5833,"FirstName5833 MiddleName5833",LastName5833 +5834,5834,"FirstName5834 MiddleName5834",LastName5834 +5835,5835,"FirstName5835 MiddleName5835",LastName5835 +5836,5836,"FirstName5836 MiddleName5836",LastName5836 +5837,5837,"FirstName5837 MiddleName5837",LastName5837 +5838,5838,"FirstName5838 MiddleName5838",LastName5838 +5839,5839,"FirstName5839 MiddleName5839",LastName5839 +5840,5840,"FirstName5840 MiddleName5840",LastName5840 +5841,5841,"FirstName5841 MiddleName5841",LastName5841 +5842,5842,"FirstName5842 MiddleName5842",LastName5842 +5843,5843,"FirstName5843 MiddleName5843",LastName5843 +5844,5844,"FirstName5844 MiddleName5844",LastName5844 +5845,5845,"FirstName5845 MiddleName5845",LastName5845 +5846,5846,"FirstName5846 MiddleName5846",LastName5846 +5847,5847,"FirstName5847 MiddleName5847",LastName5847 +5848,5848,"FirstName5848 MiddleName5848",LastName5848 +5849,5849,"FirstName5849 MiddleName5849",LastName5849 +5850,5850,"FirstName5850 MiddleName5850",LastName5850 +5851,5851,"FirstName5851 MiddleName5851",LastName5851 +5852,5852,"FirstName5852 MiddleName5852",LastName5852 +5853,5853,"FirstName5853 MiddleName5853",LastName5853 +5854,5854,"FirstName5854 MiddleName5854",LastName5854 +5855,5855,"FirstName5855 MiddleName5855",LastName5855 +5856,5856,"FirstName5856 MiddleName5856",LastName5856 +5857,5857,"FirstName5857 MiddleName5857",LastName5857 +5858,5858,"FirstName5858 MiddleName5858",LastName5858 +5859,5859,"FirstName5859 MiddleName5859",LastName5859 +5860,5860,"FirstName5860 MiddleName5860",LastName5860 
+5861,5861,"FirstName5861 MiddleName5861",LastName5861 +5862,5862,"FirstName5862 MiddleName5862",LastName5862 +5863,5863,"FirstName5863 MiddleName5863",LastName5863 +5864,5864,"FirstName5864 MiddleName5864",LastName5864 +5865,5865,"FirstName5865 MiddleName5865",LastName5865 +5866,5866,"FirstName5866 MiddleName5866",LastName5866 +5867,5867,"FirstName5867 MiddleName5867",LastName5867 +5868,5868,"FirstName5868 MiddleName5868",LastName5868 +5869,5869,"FirstName5869 MiddleName5869",LastName5869 +5870,5870,"FirstName5870 MiddleName5870",LastName5870 +5871,5871,"FirstName5871 MiddleName5871",LastName5871 +5872,5872,"FirstName5872 MiddleName5872",LastName5872 +5873,5873,"FirstName5873 MiddleName5873",LastName5873 +5874,5874,"FirstName5874 MiddleName5874",LastName5874 +5875,5875,"FirstName5875 MiddleName5875",LastName5875 +5876,5876,"FirstName5876 MiddleName5876",LastName5876 +5877,5877,"FirstName5877 MiddleName5877",LastName5877 +5878,5878,"FirstName5878 MiddleName5878",LastName5878 +5879,5879,"FirstName5879 MiddleName5879",LastName5879 +5880,5880,"FirstName5880 MiddleName5880",LastName5880 +5881,5881,"FirstName5881 MiddleName5881",LastName5881 +5882,5882,"FirstName5882 MiddleName5882",LastName5882 +5883,5883,"FirstName5883 MiddleName5883",LastName5883 +5884,5884,"FirstName5884 MiddleName5884",LastName5884 +5885,5885,"FirstName5885 MiddleName5885",LastName5885 +5886,5886,"FirstName5886 MiddleName5886",LastName5886 +5887,5887,"FirstName5887 MiddleName5887",LastName5887 +5888,5888,"FirstName5888 MiddleName5888",LastName5888 +5889,5889,"FirstName5889 MiddleName5889",LastName5889 +5890,5890,"FirstName5890 MiddleName5890",LastName5890 +5891,5891,"FirstName5891 MiddleName5891",LastName5891 +5892,5892,"FirstName5892 MiddleName5892",LastName5892 +5893,5893,"FirstName5893 MiddleName5893",LastName5893 +5894,5894,"FirstName5894 MiddleName5894",LastName5894 +5895,5895,"FirstName5895 MiddleName5895",LastName5895 +5896,5896,"FirstName5896 MiddleName5896",LastName5896 
+5897,5897,"FirstName5897 MiddleName5897",LastName5897 +5898,5898,"FirstName5898 MiddleName5898",LastName5898 +5899,5899,"FirstName5899 MiddleName5899",LastName5899 +5900,5900,"FirstName5900 MiddleName5900",LastName5900 +5901,5901,"FirstName5901 MiddleName5901",LastName5901 +5902,5902,"FirstName5902 MiddleName5902",LastName5902 +5903,5903,"FirstName5903 MiddleName5903",LastName5903 +5904,5904,"FirstName5904 MiddleName5904",LastName5904 +5905,5905,"FirstName5905 MiddleName5905",LastName5905 +5906,5906,"FirstName5906 MiddleName5906",LastName5906 +5907,5907,"FirstName5907 MiddleName5907",LastName5907 +5908,5908,"FirstName5908 MiddleName5908",LastName5908 +5909,5909,"FirstName5909 MiddleName5909",LastName5909 +5910,5910,"FirstName5910 MiddleName5910",LastName5910 +5911,5911,"FirstName5911 MiddleName5911",LastName5911 +5912,5912,"FirstName5912 MiddleName5912",LastName5912 +5913,5913,"FirstName5913 MiddleName5913",LastName5913 +5914,5914,"FirstName5914 MiddleName5914",LastName5914 +5915,5915,"FirstName5915 MiddleName5915",LastName5915 +5916,5916,"FirstName5916 MiddleName5916",LastName5916 +5917,5917,"FirstName5917 MiddleName5917",LastName5917 +5918,5918,"FirstName5918 MiddleName5918",LastName5918 +5919,5919,"FirstName5919 MiddleName5919",LastName5919 +5920,5920,"FirstName5920 MiddleName5920",LastName5920 +5921,5921,"FirstName5921 MiddleName5921",LastName5921 +5922,5922,"FirstName5922 MiddleName5922",LastName5922 +5923,5923,"FirstName5923 MiddleName5923",LastName5923 +5924,5924,"FirstName5924 MiddleName5924",LastName5924 +5925,5925,"FirstName5925 MiddleName5925",LastName5925 +5926,5926,"FirstName5926 MiddleName5926",LastName5926 +5927,5927,"FirstName5927 MiddleName5927",LastName5927 +5928,5928,"FirstName5928 MiddleName5928",LastName5928 +5929,5929,"FirstName5929 MiddleName5929",LastName5929 +5930,5930,"FirstName5930 MiddleName5930",LastName5930 +5931,5931,"FirstName5931 MiddleName5931",LastName5931 +5932,5932,"FirstName5932 MiddleName5932",LastName5932 
+5933,5933,"FirstName5933 MiddleName5933",LastName5933 +5934,5934,"FirstName5934 MiddleName5934",LastName5934 +5935,5935,"FirstName5935 MiddleName5935",LastName5935 +5936,5936,"FirstName5936 MiddleName5936",LastName5936 +5937,5937,"FirstName5937 MiddleName5937",LastName5937 +5938,5938,"FirstName5938 MiddleName5938",LastName5938 +5939,5939,"FirstName5939 MiddleName5939",LastName5939 +5940,5940,"FirstName5940 MiddleName5940",LastName5940 +5941,5941,"FirstName5941 MiddleName5941",LastName5941 +5942,5942,"FirstName5942 MiddleName5942",LastName5942 +5943,5943,"FirstName5943 MiddleName5943",LastName5943 +5944,5944,"FirstName5944 MiddleName5944",LastName5944 +5945,5945,"FirstName5945 MiddleName5945",LastName5945 +5946,5946,"FirstName5946 MiddleName5946",LastName5946 +5947,5947,"FirstName5947 MiddleName5947",LastName5947 +5948,5948,"FirstName5948 MiddleName5948",LastName5948 +5949,5949,"FirstName5949 MiddleName5949",LastName5949 +5950,5950,"FirstName5950 MiddleName5950",LastName5950 +5951,5951,"FirstName5951 MiddleName5951",LastName5951 +5952,5952,"FirstName5952 MiddleName5952",LastName5952 +5953,5953,"FirstName5953 MiddleName5953",LastName5953 +5954,5954,"FirstName5954 MiddleName5954",LastName5954 +5955,5955,"FirstName5955 MiddleName5955",LastName5955 +5956,5956,"FirstName5956 MiddleName5956",LastName5956 +5957,5957,"FirstName5957 MiddleName5957",LastName5957 +5958,5958,"FirstName5958 MiddleName5958",LastName5958 +5959,5959,"FirstName5959 MiddleName5959",LastName5959 +5960,5960,"FirstName5960 MiddleName5960",LastName5960 +5961,5961,"FirstName5961 MiddleName5961",LastName5961 +5962,5962,"FirstName5962 MiddleName5962",LastName5962 +5963,5963,"FirstName5963 MiddleName5963",LastName5963 +5964,5964,"FirstName5964 MiddleName5964",LastName5964 +5965,5965,"FirstName5965 MiddleName5965",LastName5965 +5966,5966,"FirstName5966 MiddleName5966",LastName5966 +5967,5967,"FirstName5967 MiddleName5967",LastName5967 +5968,5968,"FirstName5968 MiddleName5968",LastName5968 
+5969,5969,"FirstName5969 MiddleName5969",LastName5969 +5970,5970,"FirstName5970 MiddleName5970",LastName5970 +5971,5971,"FirstName5971 MiddleName5971",LastName5971 +5972,5972,"FirstName5972 MiddleName5972",LastName5972 +5973,5973,"FirstName5973 MiddleName5973",LastName5973 +5974,5974,"FirstName5974 MiddleName5974",LastName5974 +5975,5975,"FirstName5975 MiddleName5975",LastName5975 +5976,5976,"FirstName5976 MiddleName5976",LastName5976 +5977,5977,"FirstName5977 MiddleName5977",LastName5977 +5978,5978,"FirstName5978 MiddleName5978",LastName5978 +5979,5979,"FirstName5979 MiddleName5979",LastName5979 +5980,5980,"FirstName5980 MiddleName5980",LastName5980 +5981,5981,"FirstName5981 MiddleName5981",LastName5981 +5982,5982,"FirstName5982 MiddleName5982",LastName5982 +5983,5983,"FirstName5983 MiddleName5983",LastName5983 +5984,5984,"FirstName5984 MiddleName5984",LastName5984 +5985,5985,"FirstName5985 MiddleName5985",LastName5985 +5986,5986,"FirstName5986 MiddleName5986",LastName5986 +5987,5987,"FirstName5987 MiddleName5987",LastName5987 +5988,5988,"FirstName5988 MiddleName5988",LastName5988 +5989,5989,"FirstName5989 MiddleName5989",LastName5989 +5990,5990,"FirstName5990 MiddleName5990",LastName5990 +5991,5991,"FirstName5991 MiddleName5991",LastName5991 +5992,5992,"FirstName5992 MiddleName5992",LastName5992 +5993,5993,"FirstName5993 MiddleName5993",LastName5993 +5994,5994,"FirstName5994 MiddleName5994",LastName5994 +5995,5995,"FirstName5995 MiddleName5995",LastName5995 +5996,5996,"FirstName5996 MiddleName5996",LastName5996 +5997,5997,"FirstName5997 MiddleName5997",LastName5997 +5998,5998,"FirstName5998 MiddleName5998",LastName5998 +5999,5999,"FirstName5999 MiddleName5999",LastName5999 +6000,6000,"FirstName6000 MiddleName6000",LastName6000 +6001,6001,"FirstName6001 MiddleName6001",LastName6001 +6002,6002,"FirstName6002 MiddleName6002",LastName6002 +6003,6003,"FirstName6003 MiddleName6003",LastName6003 +6004,6004,"FirstName6004 MiddleName6004",LastName6004 
+6005,6005,"FirstName6005 MiddleName6005",LastName6005 +6006,6006,"FirstName6006 MiddleName6006",LastName6006 +6007,6007,"FirstName6007 MiddleName6007",LastName6007 +6008,6008,"FirstName6008 MiddleName6008",LastName6008 +6009,6009,"FirstName6009 MiddleName6009",LastName6009 +6010,6010,"FirstName6010 MiddleName6010",LastName6010 +6011,6011,"FirstName6011 MiddleName6011",LastName6011 +6012,6012,"FirstName6012 MiddleName6012",LastName6012 +6013,6013,"FirstName6013 MiddleName6013",LastName6013 +6014,6014,"FirstName6014 MiddleName6014",LastName6014 +6015,6015,"FirstName6015 MiddleName6015",LastName6015 +6016,6016,"FirstName6016 MiddleName6016",LastName6016 +6017,6017,"FirstName6017 MiddleName6017",LastName6017 +6018,6018,"FirstName6018 MiddleName6018",LastName6018 +6019,6019,"FirstName6019 MiddleName6019",LastName6019 +6020,6020,"FirstName6020 MiddleName6020",LastName6020 +6021,6021,"FirstName6021 MiddleName6021",LastName6021 +6022,6022,"FirstName6022 MiddleName6022",LastName6022 +6023,6023,"FirstName6023 MiddleName6023",LastName6023 +6024,6024,"FirstName6024 MiddleName6024",LastName6024 +6025,6025,"FirstName6025 MiddleName6025",LastName6025 +6026,6026,"FirstName6026 MiddleName6026",LastName6026 +6027,6027,"FirstName6027 MiddleName6027",LastName6027 +6028,6028,"FirstName6028 MiddleName6028",LastName6028 +6029,6029,"FirstName6029 MiddleName6029",LastName6029 +6030,6030,"FirstName6030 MiddleName6030",LastName6030 +6031,6031,"FirstName6031 MiddleName6031",LastName6031 +6032,6032,"FirstName6032 MiddleName6032",LastName6032 +6033,6033,"FirstName6033 MiddleName6033",LastName6033 +6034,6034,"FirstName6034 MiddleName6034",LastName6034 +6035,6035,"FirstName6035 MiddleName6035",LastName6035 +6036,6036,"FirstName6036 MiddleName6036",LastName6036 +6037,6037,"FirstName6037 MiddleName6037",LastName6037 +6038,6038,"FirstName6038 MiddleName6038",LastName6038 +6039,6039,"FirstName6039 MiddleName6039",LastName6039 +6040,6040,"FirstName6040 MiddleName6040",LastName6040 
+6041,6041,"FirstName6041 MiddleName6041",LastName6041 +6042,6042,"FirstName6042 MiddleName6042",LastName6042 +6043,6043,"FirstName6043 MiddleName6043",LastName6043 +6044,6044,"FirstName6044 MiddleName6044",LastName6044 +6045,6045,"FirstName6045 MiddleName6045",LastName6045 +6046,6046,"FirstName6046 MiddleName6046",LastName6046 +6047,6047,"FirstName6047 MiddleName6047",LastName6047 +6048,6048,"FirstName6048 MiddleName6048",LastName6048 +6049,6049,"FirstName6049 MiddleName6049",LastName6049 +6050,6050,"FirstName6050 MiddleName6050",LastName6050 +6051,6051,"FirstName6051 MiddleName6051",LastName6051 +6052,6052,"FirstName6052 MiddleName6052",LastName6052 +6053,6053,"FirstName6053 MiddleName6053",LastName6053 +6054,6054,"FirstName6054 MiddleName6054",LastName6054 +6055,6055,"FirstName6055 MiddleName6055",LastName6055 +6056,6056,"FirstName6056 MiddleName6056",LastName6056 +6057,6057,"FirstName6057 MiddleName6057",LastName6057 +6058,6058,"FirstName6058 MiddleName6058",LastName6058 +6059,6059,"FirstName6059 MiddleName6059",LastName6059 +6060,6060,"FirstName6060 MiddleName6060",LastName6060 +6061,6061,"FirstName6061 MiddleName6061",LastName6061 +6062,6062,"FirstName6062 MiddleName6062",LastName6062 +6063,6063,"FirstName6063 MiddleName6063",LastName6063 +6064,6064,"FirstName6064 MiddleName6064",LastName6064 +6065,6065,"FirstName6065 MiddleName6065",LastName6065 +6066,6066,"FirstName6066 MiddleName6066",LastName6066 +6067,6067,"FirstName6067 MiddleName6067",LastName6067 +6068,6068,"FirstName6068 MiddleName6068",LastName6068 +6069,6069,"FirstName6069 MiddleName6069",LastName6069 +6070,6070,"FirstName6070 MiddleName6070",LastName6070 +6071,6071,"FirstName6071 MiddleName6071",LastName6071 +6072,6072,"FirstName6072 MiddleName6072",LastName6072 +6073,6073,"FirstName6073 MiddleName6073",LastName6073 +6074,6074,"FirstName6074 MiddleName6074",LastName6074 +6075,6075,"FirstName6075 MiddleName6075",LastName6075 +6076,6076,"FirstName6076 MiddleName6076",LastName6076 
+6077,6077,"FirstName6077 MiddleName6077",LastName6077 +6078,6078,"FirstName6078 MiddleName6078",LastName6078 +6079,6079,"FirstName6079 MiddleName6079",LastName6079 +6080,6080,"FirstName6080 MiddleName6080",LastName6080 +6081,6081,"FirstName6081 MiddleName6081",LastName6081 +6082,6082,"FirstName6082 MiddleName6082",LastName6082 +6083,6083,"FirstName6083 MiddleName6083",LastName6083 +6084,6084,"FirstName6084 MiddleName6084",LastName6084 +6085,6085,"FirstName6085 MiddleName6085",LastName6085 +6086,6086,"FirstName6086 MiddleName6086",LastName6086 +6087,6087,"FirstName6087 MiddleName6087",LastName6087 +6088,6088,"FirstName6088 MiddleName6088",LastName6088 +6089,6089,"FirstName6089 MiddleName6089",LastName6089 +6090,6090,"FirstName6090 MiddleName6090",LastName6090 +6091,6091,"FirstName6091 MiddleName6091",LastName6091 +6092,6092,"FirstName6092 MiddleName6092",LastName6092 +6093,6093,"FirstName6093 MiddleName6093",LastName6093 +6094,6094,"FirstName6094 MiddleName6094",LastName6094 +6095,6095,"FirstName6095 MiddleName6095",LastName6095 +6096,6096,"FirstName6096 MiddleName6096",LastName6096 +6097,6097,"FirstName6097 MiddleName6097",LastName6097 +6098,6098,"FirstName6098 MiddleName6098",LastName6098 +6099,6099,"FirstName6099 MiddleName6099",LastName6099 +6100,6100,"FirstName6100 MiddleName6100",LastName6100 +6101,6101,"FirstName6101 MiddleName6101",LastName6101 +6102,6102,"FirstName6102 MiddleName6102",LastName6102 +6103,6103,"FirstName6103 MiddleName6103",LastName6103 +6104,6104,"FirstName6104 MiddleName6104",LastName6104 +6105,6105,"FirstName6105 MiddleName6105",LastName6105 +6106,6106,"FirstName6106 MiddleName6106",LastName6106 +6107,6107,"FirstName6107 MiddleName6107",LastName6107 +6108,6108,"FirstName6108 MiddleName6108",LastName6108 +6109,6109,"FirstName6109 MiddleName6109",LastName6109 +6110,6110,"FirstName6110 MiddleName6110",LastName6110 +6111,6111,"FirstName6111 MiddleName6111",LastName6111 +6112,6112,"FirstName6112 MiddleName6112",LastName6112 
+6113,6113,"FirstName6113 MiddleName6113",LastName6113 +6114,6114,"FirstName6114 MiddleName6114",LastName6114 +6115,6115,"FirstName6115 MiddleName6115",LastName6115 +6116,6116,"FirstName6116 MiddleName6116",LastName6116 +6117,6117,"FirstName6117 MiddleName6117",LastName6117 +6118,6118,"FirstName6118 MiddleName6118",LastName6118 +6119,6119,"FirstName6119 MiddleName6119",LastName6119 +6120,6120,"FirstName6120 MiddleName6120",LastName6120 +6121,6121,"FirstName6121 MiddleName6121",LastName6121 +6122,6122,"FirstName6122 MiddleName6122",LastName6122 +6123,6123,"FirstName6123 MiddleName6123",LastName6123 +6124,6124,"FirstName6124 MiddleName6124",LastName6124 +6125,6125,"FirstName6125 MiddleName6125",LastName6125 +6126,6126,"FirstName6126 MiddleName6126",LastName6126 +6127,6127,"FirstName6127 MiddleName6127",LastName6127 +6128,6128,"FirstName6128 MiddleName6128",LastName6128 +6129,6129,"FirstName6129 MiddleName6129",LastName6129 +6130,6130,"FirstName6130 MiddleName6130",LastName6130 +6131,6131,"FirstName6131 MiddleName6131",LastName6131 +6132,6132,"FirstName6132 MiddleName6132",LastName6132 +6133,6133,"FirstName6133 MiddleName6133",LastName6133 +6134,6134,"FirstName6134 MiddleName6134",LastName6134 +6135,6135,"FirstName6135 MiddleName6135",LastName6135 +6136,6136,"FirstName6136 MiddleName6136",LastName6136 +6137,6137,"FirstName6137 MiddleName6137",LastName6137 +6138,6138,"FirstName6138 MiddleName6138",LastName6138 +6139,6139,"FirstName6139 MiddleName6139",LastName6139 +6140,6140,"FirstName6140 MiddleName6140",LastName6140 +6141,6141,"FirstName6141 MiddleName6141",LastName6141 +6142,6142,"FirstName6142 MiddleName6142",LastName6142 +6143,6143,"FirstName6143 MiddleName6143",LastName6143 +6144,6144,"FirstName6144 MiddleName6144",LastName6144 +6145,6145,"FirstName6145 MiddleName6145",LastName6145 +6146,6146,"FirstName6146 MiddleName6146",LastName6146 +6147,6147,"FirstName6147 MiddleName6147",LastName6147 +6148,6148,"FirstName6148 MiddleName6148",LastName6148 
+6149,6149,"FirstName6149 MiddleName6149",LastName6149 +6150,6150,"FirstName6150 MiddleName6150",LastName6150 +6151,6151,"FirstName6151 MiddleName6151",LastName6151 +6152,6152,"FirstName6152 MiddleName6152",LastName6152 +6153,6153,"FirstName6153 MiddleName6153",LastName6153 +6154,6154,"FirstName6154 MiddleName6154",LastName6154 +6155,6155,"FirstName6155 MiddleName6155",LastName6155 +6156,6156,"FirstName6156 MiddleName6156",LastName6156 +6157,6157,"FirstName6157 MiddleName6157",LastName6157 +6158,6158,"FirstName6158 MiddleName6158",LastName6158 +6159,6159,"FirstName6159 MiddleName6159",LastName6159 +6160,6160,"FirstName6160 MiddleName6160",LastName6160 +6161,6161,"FirstName6161 MiddleName6161",LastName6161 +6162,6162,"FirstName6162 MiddleName6162",LastName6162 +6163,6163,"FirstName6163 MiddleName6163",LastName6163 +6164,6164,"FirstName6164 MiddleName6164",LastName6164 +6165,6165,"FirstName6165 MiddleName6165",LastName6165 +6166,6166,"FirstName6166 MiddleName6166",LastName6166 +6167,6167,"FirstName6167 MiddleName6167",LastName6167 +6168,6168,"FirstName6168 MiddleName6168",LastName6168 +6169,6169,"FirstName6169 MiddleName6169",LastName6169 +6170,6170,"FirstName6170 MiddleName6170",LastName6170 +6171,6171,"FirstName6171 MiddleName6171",LastName6171 +6172,6172,"FirstName6172 MiddleName6172",LastName6172 +6173,6173,"FirstName6173 MiddleName6173",LastName6173 +6174,6174,"FirstName6174 MiddleName6174",LastName6174 +6175,6175,"FirstName6175 MiddleName6175",LastName6175 +6176,6176,"FirstName6176 MiddleName6176",LastName6176 +6177,6177,"FirstName6177 MiddleName6177",LastName6177 +6178,6178,"FirstName6178 MiddleName6178",LastName6178 +6179,6179,"FirstName6179 MiddleName6179",LastName6179 +6180,6180,"FirstName6180 MiddleName6180",LastName6180 +6181,6181,"FirstName6181 MiddleName6181",LastName6181 +6182,6182,"FirstName6182 MiddleName6182",LastName6182 +6183,6183,"FirstName6183 MiddleName6183",LastName6183 +6184,6184,"FirstName6184 MiddleName6184",LastName6184 
+6185,6185,"FirstName6185 MiddleName6185",LastName6185 +6186,6186,"FirstName6186 MiddleName6186",LastName6186 +6187,6187,"FirstName6187 MiddleName6187",LastName6187 +6188,6188,"FirstName6188 MiddleName6188",LastName6188 +6189,6189,"FirstName6189 MiddleName6189",LastName6189 +6190,6190,"FirstName6190 MiddleName6190",LastName6190 +6191,6191,"FirstName6191 MiddleName6191",LastName6191 +6192,6192,"FirstName6192 MiddleName6192",LastName6192 +6193,6193,"FirstName6193 MiddleName6193",LastName6193 +6194,6194,"FirstName6194 MiddleName6194",LastName6194 +6195,6195,"FirstName6195 MiddleName6195",LastName6195 +6196,6196,"FirstName6196 MiddleName6196",LastName6196 +6197,6197,"FirstName6197 MiddleName6197",LastName6197 +6198,6198,"FirstName6198 MiddleName6198",LastName6198 +6199,6199,"FirstName6199 MiddleName6199",LastName6199 +6200,6200,"FirstName6200 MiddleName6200",LastName6200 +6201,6201,"FirstName6201 MiddleName6201",LastName6201 +6202,6202,"FirstName6202 MiddleName6202",LastName6202 +6203,6203,"FirstName6203 MiddleName6203",LastName6203 +6204,6204,"FirstName6204 MiddleName6204",LastName6204 +6205,6205,"FirstName6205 MiddleName6205",LastName6205 +6206,6206,"FirstName6206 MiddleName6206",LastName6206 +6207,6207,"FirstName6207 MiddleName6207",LastName6207 +6208,6208,"FirstName6208 MiddleName6208",LastName6208 +6209,6209,"FirstName6209 MiddleName6209",LastName6209 +6210,6210,"FirstName6210 MiddleName6210",LastName6210 +6211,6211,"FirstName6211 MiddleName6211",LastName6211 +6212,6212,"FirstName6212 MiddleName6212",LastName6212 +6213,6213,"FirstName6213 MiddleName6213",LastName6213 +6214,6214,"FirstName6214 MiddleName6214",LastName6214 +6215,6215,"FirstName6215 MiddleName6215",LastName6215 +6216,6216,"FirstName6216 MiddleName6216",LastName6216 +6217,6217,"FirstName6217 MiddleName6217",LastName6217 +6218,6218,"FirstName6218 MiddleName6218",LastName6218 +6219,6219,"FirstName6219 MiddleName6219",LastName6219 +6220,6220,"FirstName6220 MiddleName6220",LastName6220 
+6221,6221,"FirstName6221 MiddleName6221",LastName6221 +6222,6222,"FirstName6222 MiddleName6222",LastName6222 +6223,6223,"FirstName6223 MiddleName6223",LastName6223 +6224,6224,"FirstName6224 MiddleName6224",LastName6224 +6225,6225,"FirstName6225 MiddleName6225",LastName6225 +6226,6226,"FirstName6226 MiddleName6226",LastName6226 +6227,6227,"FirstName6227 MiddleName6227",LastName6227 +6228,6228,"FirstName6228 MiddleName6228",LastName6228 +6229,6229,"FirstName6229 MiddleName6229",LastName6229 +6230,6230,"FirstName6230 MiddleName6230",LastName6230 +6231,6231,"FirstName6231 MiddleName6231",LastName6231 +6232,6232,"FirstName6232 MiddleName6232",LastName6232 +6233,6233,"FirstName6233 MiddleName6233",LastName6233 +6234,6234,"FirstName6234 MiddleName6234",LastName6234 +6235,6235,"FirstName6235 MiddleName6235",LastName6235 +6236,6236,"FirstName6236 MiddleName6236",LastName6236 +6237,6237,"FirstName6237 MiddleName6237",LastName6237 +6238,6238,"FirstName6238 MiddleName6238",LastName6238 +6239,6239,"FirstName6239 MiddleName6239",LastName6239 +6240,6240,"FirstName6240 MiddleName6240",LastName6240 +6241,6241,"FirstName6241 MiddleName6241",LastName6241 +6242,6242,"FirstName6242 MiddleName6242",LastName6242 +6243,6243,"FirstName6243 MiddleName6243",LastName6243 +6244,6244,"FirstName6244 MiddleName6244",LastName6244 +6245,6245,"FirstName6245 MiddleName6245",LastName6245 +6246,6246,"FirstName6246 MiddleName6246",LastName6246 +6247,6247,"FirstName6247 MiddleName6247",LastName6247 +6248,6248,"FirstName6248 MiddleName6248",LastName6248 +6249,6249,"FirstName6249 MiddleName6249",LastName6249 +6250,6250,"FirstName6250 MiddleName6250",LastName6250 +6251,6251,"FirstName6251 MiddleName6251",LastName6251 +6252,6252,"FirstName6252 MiddleName6252",LastName6252 +6253,6253,"FirstName6253 MiddleName6253",LastName6253 +6254,6254,"FirstName6254 MiddleName6254",LastName6254 +6255,6255,"FirstName6255 MiddleName6255",LastName6255 +6256,6256,"FirstName6256 MiddleName6256",LastName6256 
+6257,6257,"FirstName6257 MiddleName6257",LastName6257 +6258,6258,"FirstName6258 MiddleName6258",LastName6258 +6259,6259,"FirstName6259 MiddleName6259",LastName6259 +6260,6260,"FirstName6260 MiddleName6260",LastName6260 +6261,6261,"FirstName6261 MiddleName6261",LastName6261 +6262,6262,"FirstName6262 MiddleName6262",LastName6262 +6263,6263,"FirstName6263 MiddleName6263",LastName6263 +6264,6264,"FirstName6264 MiddleName6264",LastName6264 +6265,6265,"FirstName6265 MiddleName6265",LastName6265 +6266,6266,"FirstName6266 MiddleName6266",LastName6266 +6267,6267,"FirstName6267 MiddleName6267",LastName6267 +6268,6268,"FirstName6268 MiddleName6268",LastName6268 +6269,6269,"FirstName6269 MiddleName6269",LastName6269 +6270,6270,"FirstName6270 MiddleName6270",LastName6270 +6271,6271,"FirstName6271 MiddleName6271",LastName6271 +6272,6272,"FirstName6272 MiddleName6272",LastName6272 +6273,6273,"FirstName6273 MiddleName6273",LastName6273 +6274,6274,"FirstName6274 MiddleName6274",LastName6274 +6275,6275,"FirstName6275 MiddleName6275",LastName6275 +6276,6276,"FirstName6276 MiddleName6276",LastName6276 +6277,6277,"FirstName6277 MiddleName6277",LastName6277 +6278,6278,"FirstName6278 MiddleName6278",LastName6278 +6279,6279,"FirstName6279 MiddleName6279",LastName6279 +6280,6280,"FirstName6280 MiddleName6280",LastName6280 +6281,6281,"FirstName6281 MiddleName6281",LastName6281 +6282,6282,"FirstName6282 MiddleName6282",LastName6282 +6283,6283,"FirstName6283 MiddleName6283",LastName6283 +6284,6284,"FirstName6284 MiddleName6284",LastName6284 +6285,6285,"FirstName6285 MiddleName6285",LastName6285 +6286,6286,"FirstName6286 MiddleName6286",LastName6286 +6287,6287,"FirstName6287 MiddleName6287",LastName6287 +6288,6288,"FirstName6288 MiddleName6288",LastName6288 +6289,6289,"FirstName6289 MiddleName6289",LastName6289 +6290,6290,"FirstName6290 MiddleName6290",LastName6290 +6291,6291,"FirstName6291 MiddleName6291",LastName6291 +6292,6292,"FirstName6292 MiddleName6292",LastName6292 
+6293,6293,"FirstName6293 MiddleName6293",LastName6293 +6294,6294,"FirstName6294 MiddleName6294",LastName6294 +6295,6295,"FirstName6295 MiddleName6295",LastName6295 +6296,6296,"FirstName6296 MiddleName6296",LastName6296 +6297,6297,"FirstName6297 MiddleName6297",LastName6297 +6298,6298,"FirstName6298 MiddleName6298",LastName6298 +6299,6299,"FirstName6299 MiddleName6299",LastName6299 +6300,6300,"FirstName6300 MiddleName6300",LastName6300 +6301,6301,"FirstName6301 MiddleName6301",LastName6301 +6302,6302,"FirstName6302 MiddleName6302",LastName6302 +6303,6303,"FirstName6303 MiddleName6303",LastName6303 +6304,6304,"FirstName6304 MiddleName6304",LastName6304 +6305,6305,"FirstName6305 MiddleName6305",LastName6305 +6306,6306,"FirstName6306 MiddleName6306",LastName6306 +6307,6307,"FirstName6307 MiddleName6307",LastName6307 +6308,6308,"FirstName6308 MiddleName6308",LastName6308 +6309,6309,"FirstName6309 MiddleName6309",LastName6309 +6310,6310,"FirstName6310 MiddleName6310",LastName6310 +6311,6311,"FirstName6311 MiddleName6311",LastName6311 +6312,6312,"FirstName6312 MiddleName6312",LastName6312 +6313,6313,"FirstName6313 MiddleName6313",LastName6313 +6314,6314,"FirstName6314 MiddleName6314",LastName6314 +6315,6315,"FirstName6315 MiddleName6315",LastName6315 +6316,6316,"FirstName6316 MiddleName6316",LastName6316 +6317,6317,"FirstName6317 MiddleName6317",LastName6317 +6318,6318,"FirstName6318 MiddleName6318",LastName6318 +6319,6319,"FirstName6319 MiddleName6319",LastName6319 +6320,6320,"FirstName6320 MiddleName6320",LastName6320 +6321,6321,"FirstName6321 MiddleName6321",LastName6321 +6322,6322,"FirstName6322 MiddleName6322",LastName6322 +6323,6323,"FirstName6323 MiddleName6323",LastName6323 +6324,6324,"FirstName6324 MiddleName6324",LastName6324 +6325,6325,"FirstName6325 MiddleName6325",LastName6325 +6326,6326,"FirstName6326 MiddleName6326",LastName6326 +6327,6327,"FirstName6327 MiddleName6327",LastName6327 +6328,6328,"FirstName6328 MiddleName6328",LastName6328 
+6329,6329,"FirstName6329 MiddleName6329",LastName6329 +6330,6330,"FirstName6330 MiddleName6330",LastName6330 +6331,6331,"FirstName6331 MiddleName6331",LastName6331 +6332,6332,"FirstName6332 MiddleName6332",LastName6332 +6333,6333,"FirstName6333 MiddleName6333",LastName6333 +6334,6334,"FirstName6334 MiddleName6334",LastName6334 +6335,6335,"FirstName6335 MiddleName6335",LastName6335 +6336,6336,"FirstName6336 MiddleName6336",LastName6336 +6337,6337,"FirstName6337 MiddleName6337",LastName6337 +6338,6338,"FirstName6338 MiddleName6338",LastName6338 +6339,6339,"FirstName6339 MiddleName6339",LastName6339 +6340,6340,"FirstName6340 MiddleName6340",LastName6340 +6341,6341,"FirstName6341 MiddleName6341",LastName6341 +6342,6342,"FirstName6342 MiddleName6342",LastName6342 +6343,6343,"FirstName6343 MiddleName6343",LastName6343 +6344,6344,"FirstName6344 MiddleName6344",LastName6344 +6345,6345,"FirstName6345 MiddleName6345",LastName6345 +6346,6346,"FirstName6346 MiddleName6346",LastName6346 +6347,6347,"FirstName6347 MiddleName6347",LastName6347 +6348,6348,"FirstName6348 MiddleName6348",LastName6348 +6349,6349,"FirstName6349 MiddleName6349",LastName6349 +6350,6350,"FirstName6350 MiddleName6350",LastName6350 +6351,6351,"FirstName6351 MiddleName6351",LastName6351 +6352,6352,"FirstName6352 MiddleName6352",LastName6352 +6353,6353,"FirstName6353 MiddleName6353",LastName6353 +6354,6354,"FirstName6354 MiddleName6354",LastName6354 +6355,6355,"FirstName6355 MiddleName6355",LastName6355 +6356,6356,"FirstName6356 MiddleName6356",LastName6356 +6357,6357,"FirstName6357 MiddleName6357",LastName6357 +6358,6358,"FirstName6358 MiddleName6358",LastName6358 +6359,6359,"FirstName6359 MiddleName6359",LastName6359 +6360,6360,"FirstName6360 MiddleName6360",LastName6360 +6361,6361,"FirstName6361 MiddleName6361",LastName6361 +6362,6362,"FirstName6362 MiddleName6362",LastName6362 +6363,6363,"FirstName6363 MiddleName6363",LastName6363 +6364,6364,"FirstName6364 MiddleName6364",LastName6364 
+6365,6365,"FirstName6365 MiddleName6365",LastName6365 +6366,6366,"FirstName6366 MiddleName6366",LastName6366 +6367,6367,"FirstName6367 MiddleName6367",LastName6367 +6368,6368,"FirstName6368 MiddleName6368",LastName6368 +6369,6369,"FirstName6369 MiddleName6369",LastName6369 +6370,6370,"FirstName6370 MiddleName6370",LastName6370 +6371,6371,"FirstName6371 MiddleName6371",LastName6371 +6372,6372,"FirstName6372 MiddleName6372",LastName6372 +6373,6373,"FirstName6373 MiddleName6373",LastName6373 +6374,6374,"FirstName6374 MiddleName6374",LastName6374 +6375,6375,"FirstName6375 MiddleName6375",LastName6375 +6376,6376,"FirstName6376 MiddleName6376",LastName6376 +6377,6377,"FirstName6377 MiddleName6377",LastName6377 +6378,6378,"FirstName6378 MiddleName6378",LastName6378 +6379,6379,"FirstName6379 MiddleName6379",LastName6379 +6380,6380,"FirstName6380 MiddleName6380",LastName6380 +6381,6381,"FirstName6381 MiddleName6381",LastName6381 +6382,6382,"FirstName6382 MiddleName6382",LastName6382 +6383,6383,"FirstName6383 MiddleName6383",LastName6383 +6384,6384,"FirstName6384 MiddleName6384",LastName6384 +6385,6385,"FirstName6385 MiddleName6385",LastName6385 +6386,6386,"FirstName6386 MiddleName6386",LastName6386 +6387,6387,"FirstName6387 MiddleName6387",LastName6387 +6388,6388,"FirstName6388 MiddleName6388",LastName6388 +6389,6389,"FirstName6389 MiddleName6389",LastName6389 +6390,6390,"FirstName6390 MiddleName6390",LastName6390 +6391,6391,"FirstName6391 MiddleName6391",LastName6391 +6392,6392,"FirstName6392 MiddleName6392",LastName6392 +6393,6393,"FirstName6393 MiddleName6393",LastName6393 +6394,6394,"FirstName6394 MiddleName6394",LastName6394 +6395,6395,"FirstName6395 MiddleName6395",LastName6395 +6396,6396,"FirstName6396 MiddleName6396",LastName6396 +6397,6397,"FirstName6397 MiddleName6397",LastName6397 +6398,6398,"FirstName6398 MiddleName6398",LastName6398 +6399,6399,"FirstName6399 MiddleName6399",LastName6399 +6400,6400,"FirstName6400 MiddleName6400",LastName6400 
+6401,6401,"FirstName6401 MiddleName6401",LastName6401 +6402,6402,"FirstName6402 MiddleName6402",LastName6402 +6403,6403,"FirstName6403 MiddleName6403",LastName6403 +6404,6404,"FirstName6404 MiddleName6404",LastName6404 +6405,6405,"FirstName6405 MiddleName6405",LastName6405 +6406,6406,"FirstName6406 MiddleName6406",LastName6406 +6407,6407,"FirstName6407 MiddleName6407",LastName6407 +6408,6408,"FirstName6408 MiddleName6408",LastName6408 +6409,6409,"FirstName6409 MiddleName6409",LastName6409 +6410,6410,"FirstName6410 MiddleName6410",LastName6410 +6411,6411,"FirstName6411 MiddleName6411",LastName6411 +6412,6412,"FirstName6412 MiddleName6412",LastName6412 +6413,6413,"FirstName6413 MiddleName6413",LastName6413 +6414,6414,"FirstName6414 MiddleName6414",LastName6414 +6415,6415,"FirstName6415 MiddleName6415",LastName6415 +6416,6416,"FirstName6416 MiddleName6416",LastName6416 +6417,6417,"FirstName6417 MiddleName6417",LastName6417 +6418,6418,"FirstName6418 MiddleName6418",LastName6418 +6419,6419,"FirstName6419 MiddleName6419",LastName6419 +6420,6420,"FirstName6420 MiddleName6420",LastName6420 +6421,6421,"FirstName6421 MiddleName6421",LastName6421 +6422,6422,"FirstName6422 MiddleName6422",LastName6422 +6423,6423,"FirstName6423 MiddleName6423",LastName6423 +6424,6424,"FirstName6424 MiddleName6424",LastName6424 +6425,6425,"FirstName6425 MiddleName6425",LastName6425 +6426,6426,"FirstName6426 MiddleName6426",LastName6426 +6427,6427,"FirstName6427 MiddleName6427",LastName6427 +6428,6428,"FirstName6428 MiddleName6428",LastName6428 +6429,6429,"FirstName6429 MiddleName6429",LastName6429 +6430,6430,"FirstName6430 MiddleName6430",LastName6430 +6431,6431,"FirstName6431 MiddleName6431",LastName6431 +6432,6432,"FirstName6432 MiddleName6432",LastName6432 +6433,6433,"FirstName6433 MiddleName6433",LastName6433 +6434,6434,"FirstName6434 MiddleName6434",LastName6434 +6435,6435,"FirstName6435 MiddleName6435",LastName6435 +6436,6436,"FirstName6436 MiddleName6436",LastName6436 
+6437,6437,"FirstName6437 MiddleName6437",LastName6437 +6438,6438,"FirstName6438 MiddleName6438",LastName6438 +6439,6439,"FirstName6439 MiddleName6439",LastName6439 +6440,6440,"FirstName6440 MiddleName6440",LastName6440 +6441,6441,"FirstName6441 MiddleName6441",LastName6441 +6442,6442,"FirstName6442 MiddleName6442",LastName6442 +6443,6443,"FirstName6443 MiddleName6443",LastName6443 +6444,6444,"FirstName6444 MiddleName6444",LastName6444 +6445,6445,"FirstName6445 MiddleName6445",LastName6445 +6446,6446,"FirstName6446 MiddleName6446",LastName6446 +6447,6447,"FirstName6447 MiddleName6447",LastName6447 +6448,6448,"FirstName6448 MiddleName6448",LastName6448 +6449,6449,"FirstName6449 MiddleName6449",LastName6449 +6450,6450,"FirstName6450 MiddleName6450",LastName6450 +6451,6451,"FirstName6451 MiddleName6451",LastName6451 +6452,6452,"FirstName6452 MiddleName6452",LastName6452 +6453,6453,"FirstName6453 MiddleName6453",LastName6453 +6454,6454,"FirstName6454 MiddleName6454",LastName6454 +6455,6455,"FirstName6455 MiddleName6455",LastName6455 +6456,6456,"FirstName6456 MiddleName6456",LastName6456 +6457,6457,"FirstName6457 MiddleName6457",LastName6457 +6458,6458,"FirstName6458 MiddleName6458",LastName6458 +6459,6459,"FirstName6459 MiddleName6459",LastName6459 +6460,6460,"FirstName6460 MiddleName6460",LastName6460 +6461,6461,"FirstName6461 MiddleName6461",LastName6461 +6462,6462,"FirstName6462 MiddleName6462",LastName6462 +6463,6463,"FirstName6463 MiddleName6463",LastName6463 +6464,6464,"FirstName6464 MiddleName6464",LastName6464 +6465,6465,"FirstName6465 MiddleName6465",LastName6465 +6466,6466,"FirstName6466 MiddleName6466",LastName6466 +6467,6467,"FirstName6467 MiddleName6467",LastName6467 +6468,6468,"FirstName6468 MiddleName6468",LastName6468 +6469,6469,"FirstName6469 MiddleName6469",LastName6469 +6470,6470,"FirstName6470 MiddleName6470",LastName6470 +6471,6471,"FirstName6471 MiddleName6471",LastName6471 +6472,6472,"FirstName6472 MiddleName6472",LastName6472 
+6473,6473,"FirstName6473 MiddleName6473",LastName6473 +6474,6474,"FirstName6474 MiddleName6474",LastName6474 +6475,6475,"FirstName6475 MiddleName6475",LastName6475 +6476,6476,"FirstName6476 MiddleName6476",LastName6476 +6477,6477,"FirstName6477 MiddleName6477",LastName6477 +6478,6478,"FirstName6478 MiddleName6478",LastName6478 +6479,6479,"FirstName6479 MiddleName6479",LastName6479 +6480,6480,"FirstName6480 MiddleName6480",LastName6480 +6481,6481,"FirstName6481 MiddleName6481",LastName6481 +6482,6482,"FirstName6482 MiddleName6482",LastName6482 +6483,6483,"FirstName6483 MiddleName6483",LastName6483 +6484,6484,"FirstName6484 MiddleName6484",LastName6484 +6485,6485,"FirstName6485 MiddleName6485",LastName6485 +6486,6486,"FirstName6486 MiddleName6486",LastName6486 +6487,6487,"FirstName6487 MiddleName6487",LastName6487 +6488,6488,"FirstName6488 MiddleName6488",LastName6488 +6489,6489,"FirstName6489 MiddleName6489",LastName6489 +6490,6490,"FirstName6490 MiddleName6490",LastName6490 +6491,6491,"FirstName6491 MiddleName6491",LastName6491 +6492,6492,"FirstName6492 MiddleName6492",LastName6492 +6493,6493,"FirstName6493 MiddleName6493",LastName6493 +6494,6494,"FirstName6494 MiddleName6494",LastName6494 +6495,6495,"FirstName6495 MiddleName6495",LastName6495 +6496,6496,"FirstName6496 MiddleName6496",LastName6496 +6497,6497,"FirstName6497 MiddleName6497",LastName6497 +6498,6498,"FirstName6498 MiddleName6498",LastName6498 +6499,6499,"FirstName6499 MiddleName6499",LastName6499 +6500,6500,"FirstName6500 MiddleName6500",LastName6500 +6501,6501,"FirstName6501 MiddleName6501",LastName6501 +6502,6502,"FirstName6502 MiddleName6502",LastName6502 +6503,6503,"FirstName6503 MiddleName6503",LastName6503 +6504,6504,"FirstName6504 MiddleName6504",LastName6504 +6505,6505,"FirstName6505 MiddleName6505",LastName6505 +6506,6506,"FirstName6506 MiddleName6506",LastName6506 +6507,6507,"FirstName6507 MiddleName6507",LastName6507 +6508,6508,"FirstName6508 MiddleName6508",LastName6508 
+6509,6509,"FirstName6509 MiddleName6509",LastName6509 +6510,6510,"FirstName6510 MiddleName6510",LastName6510 +6511,6511,"FirstName6511 MiddleName6511",LastName6511 +6512,6512,"FirstName6512 MiddleName6512",LastName6512 +6513,6513,"FirstName6513 MiddleName6513",LastName6513 +6514,6514,"FirstName6514 MiddleName6514",LastName6514 +6515,6515,"FirstName6515 MiddleName6515",LastName6515 +6516,6516,"FirstName6516 MiddleName6516",LastName6516 +6517,6517,"FirstName6517 MiddleName6517",LastName6517 +6518,6518,"FirstName6518 MiddleName6518",LastName6518 +6519,6519,"FirstName6519 MiddleName6519",LastName6519 +6520,6520,"FirstName6520 MiddleName6520",LastName6520 +6521,6521,"FirstName6521 MiddleName6521",LastName6521 +6522,6522,"FirstName6522 MiddleName6522",LastName6522 +6523,6523,"FirstName6523 MiddleName6523",LastName6523 +6524,6524,"FirstName6524 MiddleName6524",LastName6524 +6525,6525,"FirstName6525 MiddleName6525",LastName6525 +6526,6526,"FirstName6526 MiddleName6526",LastName6526 +6527,6527,"FirstName6527 MiddleName6527",LastName6527 +6528,6528,"FirstName6528 MiddleName6528",LastName6528 +6529,6529,"FirstName6529 MiddleName6529",LastName6529 +6530,6530,"FirstName6530 MiddleName6530",LastName6530 +6531,6531,"FirstName6531 MiddleName6531",LastName6531 +6532,6532,"FirstName6532 MiddleName6532",LastName6532 +6533,6533,"FirstName6533 MiddleName6533",LastName6533 +6534,6534,"FirstName6534 MiddleName6534",LastName6534 +6535,6535,"FirstName6535 MiddleName6535",LastName6535 +6536,6536,"FirstName6536 MiddleName6536",LastName6536 +6537,6537,"FirstName6537 MiddleName6537",LastName6537 +6538,6538,"FirstName6538 MiddleName6538",LastName6538 +6539,6539,"FirstName6539 MiddleName6539",LastName6539 +6540,6540,"FirstName6540 MiddleName6540",LastName6540 +6541,6541,"FirstName6541 MiddleName6541",LastName6541 +6542,6542,"FirstName6542 MiddleName6542",LastName6542 +6543,6543,"FirstName6543 MiddleName6543",LastName6543 +6544,6544,"FirstName6544 MiddleName6544",LastName6544 
+6545,6545,"FirstName6545 MiddleName6545",LastName6545 +6546,6546,"FirstName6546 MiddleName6546",LastName6546 +6547,6547,"FirstName6547 MiddleName6547",LastName6547 +6548,6548,"FirstName6548 MiddleName6548",LastName6548 +6549,6549,"FirstName6549 MiddleName6549",LastName6549 +6550,6550,"FirstName6550 MiddleName6550",LastName6550 +6551,6551,"FirstName6551 MiddleName6551",LastName6551 +6552,6552,"FirstName6552 MiddleName6552",LastName6552 +6553,6553,"FirstName6553 MiddleName6553",LastName6553 +6554,6554,"FirstName6554 MiddleName6554",LastName6554 +6555,6555,"FirstName6555 MiddleName6555",LastName6555 +6556,6556,"FirstName6556 MiddleName6556",LastName6556 +6557,6557,"FirstName6557 MiddleName6557",LastName6557 +6558,6558,"FirstName6558 MiddleName6558",LastName6558 +6559,6559,"FirstName6559 MiddleName6559",LastName6559 +6560,6560,"FirstName6560 MiddleName6560",LastName6560 +6561,6561,"FirstName6561 MiddleName6561",LastName6561 +6562,6562,"FirstName6562 MiddleName6562",LastName6562 +6563,6563,"FirstName6563 MiddleName6563",LastName6563 +6564,6564,"FirstName6564 MiddleName6564",LastName6564 +6565,6565,"FirstName6565 MiddleName6565",LastName6565 +6566,6566,"FirstName6566 MiddleName6566",LastName6566 +6567,6567,"FirstName6567 MiddleName6567",LastName6567 +6568,6568,"FirstName6568 MiddleName6568",LastName6568 +6569,6569,"FirstName6569 MiddleName6569",LastName6569 +6570,6570,"FirstName6570 MiddleName6570",LastName6570 +6571,6571,"FirstName6571 MiddleName6571",LastName6571 +6572,6572,"FirstName6572 MiddleName6572",LastName6572 +6573,6573,"FirstName6573 MiddleName6573",LastName6573 +6574,6574,"FirstName6574 MiddleName6574",LastName6574 +6575,6575,"FirstName6575 MiddleName6575",LastName6575 +6576,6576,"FirstName6576 MiddleName6576",LastName6576 +6577,6577,"FirstName6577 MiddleName6577",LastName6577 +6578,6578,"FirstName6578 MiddleName6578",LastName6578 +6579,6579,"FirstName6579 MiddleName6579",LastName6579 +6580,6580,"FirstName6580 MiddleName6580",LastName6580 
+6581,6581,"FirstName6581 MiddleName6581",LastName6581 +6582,6582,"FirstName6582 MiddleName6582",LastName6582 +6583,6583,"FirstName6583 MiddleName6583",LastName6583 +6584,6584,"FirstName6584 MiddleName6584",LastName6584 +6585,6585,"FirstName6585 MiddleName6585",LastName6585 +6586,6586,"FirstName6586 MiddleName6586",LastName6586 +6587,6587,"FirstName6587 MiddleName6587",LastName6587 +6588,6588,"FirstName6588 MiddleName6588",LastName6588 +6589,6589,"FirstName6589 MiddleName6589",LastName6589 +6590,6590,"FirstName6590 MiddleName6590",LastName6590 +6591,6591,"FirstName6591 MiddleName6591",LastName6591 +6592,6592,"FirstName6592 MiddleName6592",LastName6592 +6593,6593,"FirstName6593 MiddleName6593",LastName6593 +6594,6594,"FirstName6594 MiddleName6594",LastName6594 +6595,6595,"FirstName6595 MiddleName6595",LastName6595 +6596,6596,"FirstName6596 MiddleName6596",LastName6596 +6597,6597,"FirstName6597 MiddleName6597",LastName6597 +6598,6598,"FirstName6598 MiddleName6598",LastName6598 +6599,6599,"FirstName6599 MiddleName6599",LastName6599 +6600,6600,"FirstName6600 MiddleName6600",LastName6600 +6601,6601,"FirstName6601 MiddleName6601",LastName6601 +6602,6602,"FirstName6602 MiddleName6602",LastName6602 +6603,6603,"FirstName6603 MiddleName6603",LastName6603 +6604,6604,"FirstName6604 MiddleName6604",LastName6604 +6605,6605,"FirstName6605 MiddleName6605",LastName6605 +6606,6606,"FirstName6606 MiddleName6606",LastName6606 +6607,6607,"FirstName6607 MiddleName6607",LastName6607 +6608,6608,"FirstName6608 MiddleName6608",LastName6608 +6609,6609,"FirstName6609 MiddleName6609",LastName6609 +6610,6610,"FirstName6610 MiddleName6610",LastName6610 +6611,6611,"FirstName6611 MiddleName6611",LastName6611 +6612,6612,"FirstName6612 MiddleName6612",LastName6612 +6613,6613,"FirstName6613 MiddleName6613",LastName6613 +6614,6614,"FirstName6614 MiddleName6614",LastName6614 +6615,6615,"FirstName6615 MiddleName6615",LastName6615 +6616,6616,"FirstName6616 MiddleName6616",LastName6616 
+6617,6617,"FirstName6617 MiddleName6617",LastName6617 +6618,6618,"FirstName6618 MiddleName6618",LastName6618 +6619,6619,"FirstName6619 MiddleName6619",LastName6619 +6620,6620,"FirstName6620 MiddleName6620",LastName6620 +6621,6621,"FirstName6621 MiddleName6621",LastName6621 +6622,6622,"FirstName6622 MiddleName6622",LastName6622 +6623,6623,"FirstName6623 MiddleName6623",LastName6623 +6624,6624,"FirstName6624 MiddleName6624",LastName6624 +6625,6625,"FirstName6625 MiddleName6625",LastName6625 +6626,6626,"FirstName6626 MiddleName6626",LastName6626 +6627,6627,"FirstName6627 MiddleName6627",LastName6627 +6628,6628,"FirstName6628 MiddleName6628",LastName6628 +6629,6629,"FirstName6629 MiddleName6629",LastName6629 +6630,6630,"FirstName6630 MiddleName6630",LastName6630 +6631,6631,"FirstName6631 MiddleName6631",LastName6631 +6632,6632,"FirstName6632 MiddleName6632",LastName6632 +6633,6633,"FirstName6633 MiddleName6633",LastName6633 +6634,6634,"FirstName6634 MiddleName6634",LastName6634 +6635,6635,"FirstName6635 MiddleName6635",LastName6635 +6636,6636,"FirstName6636 MiddleName6636",LastName6636 +6637,6637,"FirstName6637 MiddleName6637",LastName6637 +6638,6638,"FirstName6638 MiddleName6638",LastName6638 +6639,6639,"FirstName6639 MiddleName6639",LastName6639 +6640,6640,"FirstName6640 MiddleName6640",LastName6640 +6641,6641,"FirstName6641 MiddleName6641",LastName6641 +6642,6642,"FirstName6642 MiddleName6642",LastName6642 +6643,6643,"FirstName6643 MiddleName6643",LastName6643 +6644,6644,"FirstName6644 MiddleName6644",LastName6644 +6645,6645,"FirstName6645 MiddleName6645",LastName6645 +6646,6646,"FirstName6646 MiddleName6646",LastName6646 +6647,6647,"FirstName6647 MiddleName6647",LastName6647 +6648,6648,"FirstName6648 MiddleName6648",LastName6648 +6649,6649,"FirstName6649 MiddleName6649",LastName6649 +6650,6650,"FirstName6650 MiddleName6650",LastName6650 +6651,6651,"FirstName6651 MiddleName6651",LastName6651 +6652,6652,"FirstName6652 MiddleName6652",LastName6652 
+6653,6653,"FirstName6653 MiddleName6653",LastName6653 +6654,6654,"FirstName6654 MiddleName6654",LastName6654 +6655,6655,"FirstName6655 MiddleName6655",LastName6655 +6656,6656,"FirstName6656 MiddleName6656",LastName6656 +6657,6657,"FirstName6657 MiddleName6657",LastName6657 +6658,6658,"FirstName6658 MiddleName6658",LastName6658 +6659,6659,"FirstName6659 MiddleName6659",LastName6659 +6660,6660,"FirstName6660 MiddleName6660",LastName6660 +6661,6661,"FirstName6661 MiddleName6661",LastName6661 +6662,6662,"FirstName6662 MiddleName6662",LastName6662 +6663,6663,"FirstName6663 MiddleName6663",LastName6663 +6664,6664,"FirstName6664 MiddleName6664",LastName6664 +6665,6665,"FirstName6665 MiddleName6665",LastName6665 +6666,6666,"FirstName6666 MiddleName6666",LastName6666 +6667,6667,"FirstName6667 MiddleName6667",LastName6667 +6668,6668,"FirstName6668 MiddleName6668",LastName6668 +6669,6669,"FirstName6669 MiddleName6669",LastName6669 +6670,6670,"FirstName6670 MiddleName6670",LastName6670 +6671,6671,"FirstName6671 MiddleName6671",LastName6671 +6672,6672,"FirstName6672 MiddleName6672",LastName6672 +6673,6673,"FirstName6673 MiddleName6673",LastName6673 +6674,6674,"FirstName6674 MiddleName6674",LastName6674 +6675,6675,"FirstName6675 MiddleName6675",LastName6675 +6676,6676,"FirstName6676 MiddleName6676",LastName6676 +6677,6677,"FirstName6677 MiddleName6677",LastName6677 +6678,6678,"FirstName6678 MiddleName6678",LastName6678 +6679,6679,"FirstName6679 MiddleName6679",LastName6679 +6680,6680,"FirstName6680 MiddleName6680",LastName6680 +6681,6681,"FirstName6681 MiddleName6681",LastName6681 +6682,6682,"FirstName6682 MiddleName6682",LastName6682 +6683,6683,"FirstName6683 MiddleName6683",LastName6683 +6684,6684,"FirstName6684 MiddleName6684",LastName6684 +6685,6685,"FirstName6685 MiddleName6685",LastName6685 +6686,6686,"FirstName6686 MiddleName6686",LastName6686 +6687,6687,"FirstName6687 MiddleName6687",LastName6687 +6688,6688,"FirstName6688 MiddleName6688",LastName6688 
+6689,6689,"FirstName6689 MiddleName6689",LastName6689 +6690,6690,"FirstName6690 MiddleName6690",LastName6690 +6691,6691,"FirstName6691 MiddleName6691",LastName6691 +6692,6692,"FirstName6692 MiddleName6692",LastName6692 +6693,6693,"FirstName6693 MiddleName6693",LastName6693 +6694,6694,"FirstName6694 MiddleName6694",LastName6694 +6695,6695,"FirstName6695 MiddleName6695",LastName6695 +6696,6696,"FirstName6696 MiddleName6696",LastName6696 +6697,6697,"FirstName6697 MiddleName6697",LastName6697 +6698,6698,"FirstName6698 MiddleName6698",LastName6698 +6699,6699,"FirstName6699 MiddleName6699",LastName6699 +6700,6700,"FirstName6700 MiddleName6700",LastName6700 +6701,6701,"FirstName6701 MiddleName6701",LastName6701 +6702,6702,"FirstName6702 MiddleName6702",LastName6702 +6703,6703,"FirstName6703 MiddleName6703",LastName6703 +6704,6704,"FirstName6704 MiddleName6704",LastName6704 +6705,6705,"FirstName6705 MiddleName6705",LastName6705 +6706,6706,"FirstName6706 MiddleName6706",LastName6706 +6707,6707,"FirstName6707 MiddleName6707",LastName6707 +6708,6708,"FirstName6708 MiddleName6708",LastName6708 +6709,6709,"FirstName6709 MiddleName6709",LastName6709 +6710,6710,"FirstName6710 MiddleName6710",LastName6710 +6711,6711,"FirstName6711 MiddleName6711",LastName6711 +6712,6712,"FirstName6712 MiddleName6712",LastName6712 +6713,6713,"FirstName6713 MiddleName6713",LastName6713 +6714,6714,"FirstName6714 MiddleName6714",LastName6714 +6715,6715,"FirstName6715 MiddleName6715",LastName6715 +6716,6716,"FirstName6716 MiddleName6716",LastName6716 +6717,6717,"FirstName6717 MiddleName6717",LastName6717 +6718,6718,"FirstName6718 MiddleName6718",LastName6718 +6719,6719,"FirstName6719 MiddleName6719",LastName6719 +6720,6720,"FirstName6720 MiddleName6720",LastName6720 +6721,6721,"FirstName6721 MiddleName6721",LastName6721 +6722,6722,"FirstName6722 MiddleName6722",LastName6722 +6723,6723,"FirstName6723 MiddleName6723",LastName6723 +6724,6724,"FirstName6724 MiddleName6724",LastName6724 
+6725,6725,"FirstName6725 MiddleName6725",LastName6725 +6726,6726,"FirstName6726 MiddleName6726",LastName6726 +6727,6727,"FirstName6727 MiddleName6727",LastName6727 +6728,6728,"FirstName6728 MiddleName6728",LastName6728 +6729,6729,"FirstName6729 MiddleName6729",LastName6729 +6730,6730,"FirstName6730 MiddleName6730",LastName6730 +6731,6731,"FirstName6731 MiddleName6731",LastName6731 +6732,6732,"FirstName6732 MiddleName6732",LastName6732 +6733,6733,"FirstName6733 MiddleName6733",LastName6733 +6734,6734,"FirstName6734 MiddleName6734",LastName6734 +6735,6735,"FirstName6735 MiddleName6735",LastName6735 +6736,6736,"FirstName6736 MiddleName6736",LastName6736 +6737,6737,"FirstName6737 MiddleName6737",LastName6737 +6738,6738,"FirstName6738 MiddleName6738",LastName6738 +6739,6739,"FirstName6739 MiddleName6739",LastName6739 +6740,6740,"FirstName6740 MiddleName6740",LastName6740 +6741,6741,"FirstName6741 MiddleName6741",LastName6741 +6742,6742,"FirstName6742 MiddleName6742",LastName6742 +6743,6743,"FirstName6743 MiddleName6743",LastName6743 +6744,6744,"FirstName6744 MiddleName6744",LastName6744 +6745,6745,"FirstName6745 MiddleName6745",LastName6745 +6746,6746,"FirstName6746 MiddleName6746",LastName6746 +6747,6747,"FirstName6747 MiddleName6747",LastName6747 +6748,6748,"FirstName6748 MiddleName6748",LastName6748 +6749,6749,"FirstName6749 MiddleName6749",LastName6749 +6750,6750,"FirstName6750 MiddleName6750",LastName6750 +6751,6751,"FirstName6751 MiddleName6751",LastName6751 +6752,6752,"FirstName6752 MiddleName6752",LastName6752 +6753,6753,"FirstName6753 MiddleName6753",LastName6753 +6754,6754,"FirstName6754 MiddleName6754",LastName6754 +6755,6755,"FirstName6755 MiddleName6755",LastName6755 +6756,6756,"FirstName6756 MiddleName6756",LastName6756 +6757,6757,"FirstName6757 MiddleName6757",LastName6757 +6758,6758,"FirstName6758 MiddleName6758",LastName6758 +6759,6759,"FirstName6759 MiddleName6759",LastName6759 +6760,6760,"FirstName6760 MiddleName6760",LastName6760 
+6761,6761,"FirstName6761 MiddleName6761",LastName6761 +6762,6762,"FirstName6762 MiddleName6762",LastName6762 +6763,6763,"FirstName6763 MiddleName6763",LastName6763 +6764,6764,"FirstName6764 MiddleName6764",LastName6764 +6765,6765,"FirstName6765 MiddleName6765",LastName6765 +6766,6766,"FirstName6766 MiddleName6766",LastName6766 +6767,6767,"FirstName6767 MiddleName6767",LastName6767 +6768,6768,"FirstName6768 MiddleName6768",LastName6768 +6769,6769,"FirstName6769 MiddleName6769",LastName6769 +6770,6770,"FirstName6770 MiddleName6770",LastName6770 +6771,6771,"FirstName6771 MiddleName6771",LastName6771 +6772,6772,"FirstName6772 MiddleName6772",LastName6772 +6773,6773,"FirstName6773 MiddleName6773",LastName6773 +6774,6774,"FirstName6774 MiddleName6774",LastName6774 +6775,6775,"FirstName6775 MiddleName6775",LastName6775 +6776,6776,"FirstName6776 MiddleName6776",LastName6776 +6777,6777,"FirstName6777 MiddleName6777",LastName6777 +6778,6778,"FirstName6778 MiddleName6778",LastName6778 +6779,6779,"FirstName6779 MiddleName6779",LastName6779 +6780,6780,"FirstName6780 MiddleName6780",LastName6780 +6781,6781,"FirstName6781 MiddleName6781",LastName6781 +6782,6782,"FirstName6782 MiddleName6782",LastName6782 +6783,6783,"FirstName6783 MiddleName6783",LastName6783 +6784,6784,"FirstName6784 MiddleName6784",LastName6784 +6785,6785,"FirstName6785 MiddleName6785",LastName6785 +6786,6786,"FirstName6786 MiddleName6786",LastName6786 +6787,6787,"FirstName6787 MiddleName6787",LastName6787 +6788,6788,"FirstName6788 MiddleName6788",LastName6788 +6789,6789,"FirstName6789 MiddleName6789",LastName6789 +6790,6790,"FirstName6790 MiddleName6790",LastName6790 +6791,6791,"FirstName6791 MiddleName6791",LastName6791 +6792,6792,"FirstName6792 MiddleName6792",LastName6792 +6793,6793,"FirstName6793 MiddleName6793",LastName6793 +6794,6794,"FirstName6794 MiddleName6794",LastName6794 +6795,6795,"FirstName6795 MiddleName6795",LastName6795 +6796,6796,"FirstName6796 MiddleName6796",LastName6796 
+6797,6797,"FirstName6797 MiddleName6797",LastName6797 +6798,6798,"FirstName6798 MiddleName6798",LastName6798 +6799,6799,"FirstName6799 MiddleName6799",LastName6799 +6800,6800,"FirstName6800 MiddleName6800",LastName6800 +6801,6801,"FirstName6801 MiddleName6801",LastName6801 +6802,6802,"FirstName6802 MiddleName6802",LastName6802 +6803,6803,"FirstName6803 MiddleName6803",LastName6803 +6804,6804,"FirstName6804 MiddleName6804",LastName6804 +6805,6805,"FirstName6805 MiddleName6805",LastName6805 +6806,6806,"FirstName6806 MiddleName6806",LastName6806 +6807,6807,"FirstName6807 MiddleName6807",LastName6807 +6808,6808,"FirstName6808 MiddleName6808",LastName6808 +6809,6809,"FirstName6809 MiddleName6809",LastName6809 +6810,6810,"FirstName6810 MiddleName6810",LastName6810 +6811,6811,"FirstName6811 MiddleName6811",LastName6811 +6812,6812,"FirstName6812 MiddleName6812",LastName6812 +6813,6813,"FirstName6813 MiddleName6813",LastName6813 +6814,6814,"FirstName6814 MiddleName6814",LastName6814 +6815,6815,"FirstName6815 MiddleName6815",LastName6815 +6816,6816,"FirstName6816 MiddleName6816",LastName6816 +6817,6817,"FirstName6817 MiddleName6817",LastName6817 +6818,6818,"FirstName6818 MiddleName6818",LastName6818 +6819,6819,"FirstName6819 MiddleName6819",LastName6819 +6820,6820,"FirstName6820 MiddleName6820",LastName6820 +6821,6821,"FirstName6821 MiddleName6821",LastName6821 +6822,6822,"FirstName6822 MiddleName6822",LastName6822 +6823,6823,"FirstName6823 MiddleName6823",LastName6823 +6824,6824,"FirstName6824 MiddleName6824",LastName6824 +6825,6825,"FirstName6825 MiddleName6825",LastName6825 +6826,6826,"FirstName6826 MiddleName6826",LastName6826 +6827,6827,"FirstName6827 MiddleName6827",LastName6827 +6828,6828,"FirstName6828 MiddleName6828",LastName6828 +6829,6829,"FirstName6829 MiddleName6829",LastName6829 +6830,6830,"FirstName6830 MiddleName6830",LastName6830 +6831,6831,"FirstName6831 MiddleName6831",LastName6831 +6832,6832,"FirstName6832 MiddleName6832",LastName6832 
+6833,6833,"FirstName6833 MiddleName6833",LastName6833 +6834,6834,"FirstName6834 MiddleName6834",LastName6834 +6835,6835,"FirstName6835 MiddleName6835",LastName6835 +6836,6836,"FirstName6836 MiddleName6836",LastName6836 +6837,6837,"FirstName6837 MiddleName6837",LastName6837 +6838,6838,"FirstName6838 MiddleName6838",LastName6838 +6839,6839,"FirstName6839 MiddleName6839",LastName6839 +6840,6840,"FirstName6840 MiddleName6840",LastName6840 +6841,6841,"FirstName6841 MiddleName6841",LastName6841 +6842,6842,"FirstName6842 MiddleName6842",LastName6842 +6843,6843,"FirstName6843 MiddleName6843",LastName6843 +6844,6844,"FirstName6844 MiddleName6844",LastName6844 +6845,6845,"FirstName6845 MiddleName6845",LastName6845 +6846,6846,"FirstName6846 MiddleName6846",LastName6846 +6847,6847,"FirstName6847 MiddleName6847",LastName6847 +6848,6848,"FirstName6848 MiddleName6848",LastName6848 +6849,6849,"FirstName6849 MiddleName6849",LastName6849 +6850,6850,"FirstName6850 MiddleName6850",LastName6850 +6851,6851,"FirstName6851 MiddleName6851",LastName6851 +6852,6852,"FirstName6852 MiddleName6852",LastName6852 +6853,6853,"FirstName6853 MiddleName6853",LastName6853 +6854,6854,"FirstName6854 MiddleName6854",LastName6854 +6855,6855,"FirstName6855 MiddleName6855",LastName6855 +6856,6856,"FirstName6856 MiddleName6856",LastName6856 +6857,6857,"FirstName6857 MiddleName6857",LastName6857 +6858,6858,"FirstName6858 MiddleName6858",LastName6858 +6859,6859,"FirstName6859 MiddleName6859",LastName6859 +6860,6860,"FirstName6860 MiddleName6860",LastName6860 +6861,6861,"FirstName6861 MiddleName6861",LastName6861 +6862,6862,"FirstName6862 MiddleName6862",LastName6862 +6863,6863,"FirstName6863 MiddleName6863",LastName6863 +6864,6864,"FirstName6864 MiddleName6864",LastName6864 +6865,6865,"FirstName6865 MiddleName6865",LastName6865 +6866,6866,"FirstName6866 MiddleName6866",LastName6866 +6867,6867,"FirstName6867 MiddleName6867",LastName6867 +6868,6868,"FirstName6868 MiddleName6868",LastName6868 
+6869,6869,"FirstName6869 MiddleName6869",LastName6869 +6870,6870,"FirstName6870 MiddleName6870",LastName6870 +6871,6871,"FirstName6871 MiddleName6871",LastName6871 +6872,6872,"FirstName6872 MiddleName6872",LastName6872 +6873,6873,"FirstName6873 MiddleName6873",LastName6873 +6874,6874,"FirstName6874 MiddleName6874",LastName6874 +6875,6875,"FirstName6875 MiddleName6875",LastName6875 +6876,6876,"FirstName6876 MiddleName6876",LastName6876 +6877,6877,"FirstName6877 MiddleName6877",LastName6877 +6878,6878,"FirstName6878 MiddleName6878",LastName6878 +6879,6879,"FirstName6879 MiddleName6879",LastName6879 +6880,6880,"FirstName6880 MiddleName6880",LastName6880 +6881,6881,"FirstName6881 MiddleName6881",LastName6881 +6882,6882,"FirstName6882 MiddleName6882",LastName6882 +6883,6883,"FirstName6883 MiddleName6883",LastName6883 +6884,6884,"FirstName6884 MiddleName6884",LastName6884 +6885,6885,"FirstName6885 MiddleName6885",LastName6885 +6886,6886,"FirstName6886 MiddleName6886",LastName6886 +6887,6887,"FirstName6887 MiddleName6887",LastName6887 +6888,6888,"FirstName6888 MiddleName6888",LastName6888 +6889,6889,"FirstName6889 MiddleName6889",LastName6889 +6890,6890,"FirstName6890 MiddleName6890",LastName6890 +6891,6891,"FirstName6891 MiddleName6891",LastName6891 +6892,6892,"FirstName6892 MiddleName6892",LastName6892 +6893,6893,"FirstName6893 MiddleName6893",LastName6893 +6894,6894,"FirstName6894 MiddleName6894",LastName6894 +6895,6895,"FirstName6895 MiddleName6895",LastName6895 +6896,6896,"FirstName6896 MiddleName6896",LastName6896 +6897,6897,"FirstName6897 MiddleName6897",LastName6897 +6898,6898,"FirstName6898 MiddleName6898",LastName6898 +6899,6899,"FirstName6899 MiddleName6899",LastName6899 +6900,6900,"FirstName6900 MiddleName6900",LastName6900 +6901,6901,"FirstName6901 MiddleName6901",LastName6901 +6902,6902,"FirstName6902 MiddleName6902",LastName6902 +6903,6903,"FirstName6903 MiddleName6903",LastName6903 +6904,6904,"FirstName6904 MiddleName6904",LastName6904 
+6905,6905,"FirstName6905 MiddleName6905",LastName6905 +6906,6906,"FirstName6906 MiddleName6906",LastName6906 +6907,6907,"FirstName6907 MiddleName6907",LastName6907 +6908,6908,"FirstName6908 MiddleName6908",LastName6908 +6909,6909,"FirstName6909 MiddleName6909",LastName6909 +6910,6910,"FirstName6910 MiddleName6910",LastName6910 +6911,6911,"FirstName6911 MiddleName6911",LastName6911 +6912,6912,"FirstName6912 MiddleName6912",LastName6912 +6913,6913,"FirstName6913 MiddleName6913",LastName6913 +6914,6914,"FirstName6914 MiddleName6914",LastName6914 +6915,6915,"FirstName6915 MiddleName6915",LastName6915 +6916,6916,"FirstName6916 MiddleName6916",LastName6916 +6917,6917,"FirstName6917 MiddleName6917",LastName6917 +6918,6918,"FirstName6918 MiddleName6918",LastName6918 +6919,6919,"FirstName6919 MiddleName6919",LastName6919 +6920,6920,"FirstName6920 MiddleName6920",LastName6920 +6921,6921,"FirstName6921 MiddleName6921",LastName6921 +6922,6922,"FirstName6922 MiddleName6922",LastName6922 +6923,6923,"FirstName6923 MiddleName6923",LastName6923 +6924,6924,"FirstName6924 MiddleName6924",LastName6924 +6925,6925,"FirstName6925 MiddleName6925",LastName6925 +6926,6926,"FirstName6926 MiddleName6926",LastName6926 +6927,6927,"FirstName6927 MiddleName6927",LastName6927 +6928,6928,"FirstName6928 MiddleName6928",LastName6928 +6929,6929,"FirstName6929 MiddleName6929",LastName6929 +6930,6930,"FirstName6930 MiddleName6930",LastName6930 +6931,6931,"FirstName6931 MiddleName6931",LastName6931 +6932,6932,"FirstName6932 MiddleName6932",LastName6932 +6933,6933,"FirstName6933 MiddleName6933",LastName6933 +6934,6934,"FirstName6934 MiddleName6934",LastName6934 +6935,6935,"FirstName6935 MiddleName6935",LastName6935 +6936,6936,"FirstName6936 MiddleName6936",LastName6936 +6937,6937,"FirstName6937 MiddleName6937",LastName6937 +6938,6938,"FirstName6938 MiddleName6938",LastName6938 +6939,6939,"FirstName6939 MiddleName6939",LastName6939 +6940,6940,"FirstName6940 MiddleName6940",LastName6940 
+6941,6941,"FirstName6941 MiddleName6941",LastName6941 +6942,6942,"FirstName6942 MiddleName6942",LastName6942 +6943,6943,"FirstName6943 MiddleName6943",LastName6943 +6944,6944,"FirstName6944 MiddleName6944",LastName6944 +6945,6945,"FirstName6945 MiddleName6945",LastName6945 +6946,6946,"FirstName6946 MiddleName6946",LastName6946 +6947,6947,"FirstName6947 MiddleName6947",LastName6947 +6948,6948,"FirstName6948 MiddleName6948",LastName6948 +6949,6949,"FirstName6949 MiddleName6949",LastName6949 +6950,6950,"FirstName6950 MiddleName6950",LastName6950 +6951,6951,"FirstName6951 MiddleName6951",LastName6951 +6952,6952,"FirstName6952 MiddleName6952",LastName6952 +6953,6953,"FirstName6953 MiddleName6953",LastName6953 +6954,6954,"FirstName6954 MiddleName6954",LastName6954 +6955,6955,"FirstName6955 MiddleName6955",LastName6955 +6956,6956,"FirstName6956 MiddleName6956",LastName6956 +6957,6957,"FirstName6957 MiddleName6957",LastName6957 +6958,6958,"FirstName6958 MiddleName6958",LastName6958 +6959,6959,"FirstName6959 MiddleName6959",LastName6959 +6960,6960,"FirstName6960 MiddleName6960",LastName6960 +6961,6961,"FirstName6961 MiddleName6961",LastName6961 +6962,6962,"FirstName6962 MiddleName6962",LastName6962 +6963,6963,"FirstName6963 MiddleName6963",LastName6963 +6964,6964,"FirstName6964 MiddleName6964",LastName6964 +6965,6965,"FirstName6965 MiddleName6965",LastName6965 +6966,6966,"FirstName6966 MiddleName6966",LastName6966 +6967,6967,"FirstName6967 MiddleName6967",LastName6967 +6968,6968,"FirstName6968 MiddleName6968",LastName6968 +6969,6969,"FirstName6969 MiddleName6969",LastName6969 +6970,6970,"FirstName6970 MiddleName6970",LastName6970 +6971,6971,"FirstName6971 MiddleName6971",LastName6971 +6972,6972,"FirstName6972 MiddleName6972",LastName6972 +6973,6973,"FirstName6973 MiddleName6973",LastName6973 +6974,6974,"FirstName6974 MiddleName6974",LastName6974 +6975,6975,"FirstName6975 MiddleName6975",LastName6975 +6976,6976,"FirstName6976 MiddleName6976",LastName6976 
+6977,6977,"FirstName6977 MiddleName6977",LastName6977 +6978,6978,"FirstName6978 MiddleName6978",LastName6978 +6979,6979,"FirstName6979 MiddleName6979",LastName6979 +6980,6980,"FirstName6980 MiddleName6980",LastName6980 +6981,6981,"FirstName6981 MiddleName6981",LastName6981 +6982,6982,"FirstName6982 MiddleName6982",LastName6982 +6983,6983,"FirstName6983 MiddleName6983",LastName6983 +6984,6984,"FirstName6984 MiddleName6984",LastName6984 +6985,6985,"FirstName6985 MiddleName6985",LastName6985 +6986,6986,"FirstName6986 MiddleName6986",LastName6986 +6987,6987,"FirstName6987 MiddleName6987",LastName6987 +6988,6988,"FirstName6988 MiddleName6988",LastName6988 +6989,6989,"FirstName6989 MiddleName6989",LastName6989 +6990,6990,"FirstName6990 MiddleName6990",LastName6990 +6991,6991,"FirstName6991 MiddleName6991",LastName6991 +6992,6992,"FirstName6992 MiddleName6992",LastName6992 +6993,6993,"FirstName6993 MiddleName6993",LastName6993 +6994,6994,"FirstName6994 MiddleName6994",LastName6994 +6995,6995,"FirstName6995 MiddleName6995",LastName6995 +6996,6996,"FirstName6996 MiddleName6996",LastName6996 +6997,6997,"FirstName6997 MiddleName6997",LastName6997 +6998,6998,"FirstName6998 MiddleName6998",LastName6998 +6999,6999,"FirstName6999 MiddleName6999",LastName6999 +7000,7000,"FirstName7000 MiddleName7000",LastName7000 +7001,7001,"FirstName7001 MiddleName7001",LastName7001 +7002,7002,"FirstName7002 MiddleName7002",LastName7002 +7003,7003,"FirstName7003 MiddleName7003",LastName7003 +7004,7004,"FirstName7004 MiddleName7004",LastName7004 +7005,7005,"FirstName7005 MiddleName7005",LastName7005 +7006,7006,"FirstName7006 MiddleName7006",LastName7006 +7007,7007,"FirstName7007 MiddleName7007",LastName7007 +7008,7008,"FirstName7008 MiddleName7008",LastName7008 +7009,7009,"FirstName7009 MiddleName7009",LastName7009 +7010,7010,"FirstName7010 MiddleName7010",LastName7010 +7011,7011,"FirstName7011 MiddleName7011",LastName7011 +7012,7012,"FirstName7012 MiddleName7012",LastName7012 
+7013,7013,"FirstName7013 MiddleName7013",LastName7013 +7014,7014,"FirstName7014 MiddleName7014",LastName7014 +7015,7015,"FirstName7015 MiddleName7015",LastName7015 +7016,7016,"FirstName7016 MiddleName7016",LastName7016 +7017,7017,"FirstName7017 MiddleName7017",LastName7017 +7018,7018,"FirstName7018 MiddleName7018",LastName7018 +7019,7019,"FirstName7019 MiddleName7019",LastName7019 +7020,7020,"FirstName7020 MiddleName7020",LastName7020 +7021,7021,"FirstName7021 MiddleName7021",LastName7021 +7022,7022,"FirstName7022 MiddleName7022",LastName7022 +7023,7023,"FirstName7023 MiddleName7023",LastName7023 +7024,7024,"FirstName7024 MiddleName7024",LastName7024 +7025,7025,"FirstName7025 MiddleName7025",LastName7025 +7026,7026,"FirstName7026 MiddleName7026",LastName7026 +7027,7027,"FirstName7027 MiddleName7027",LastName7027 +7028,7028,"FirstName7028 MiddleName7028",LastName7028 +7029,7029,"FirstName7029 MiddleName7029",LastName7029 +7030,7030,"FirstName7030 MiddleName7030",LastName7030 +7031,7031,"FirstName7031 MiddleName7031",LastName7031 +7032,7032,"FirstName7032 MiddleName7032",LastName7032 +7033,7033,"FirstName7033 MiddleName7033",LastName7033 +7034,7034,"FirstName7034 MiddleName7034",LastName7034 +7035,7035,"FirstName7035 MiddleName7035",LastName7035 +7036,7036,"FirstName7036 MiddleName7036",LastName7036 +7037,7037,"FirstName7037 MiddleName7037",LastName7037 +7038,7038,"FirstName7038 MiddleName7038",LastName7038 +7039,7039,"FirstName7039 MiddleName7039",LastName7039 +7040,7040,"FirstName7040 MiddleName7040",LastName7040 +7041,7041,"FirstName7041 MiddleName7041",LastName7041 +7042,7042,"FirstName7042 MiddleName7042",LastName7042 +7043,7043,"FirstName7043 MiddleName7043",LastName7043 +7044,7044,"FirstName7044 MiddleName7044",LastName7044 +7045,7045,"FirstName7045 MiddleName7045",LastName7045 +7046,7046,"FirstName7046 MiddleName7046",LastName7046 +7047,7047,"FirstName7047 MiddleName7047",LastName7047 +7048,7048,"FirstName7048 MiddleName7048",LastName7048 
+7049,7049,"FirstName7049 MiddleName7049",LastName7049 +7050,7050,"FirstName7050 MiddleName7050",LastName7050 +7051,7051,"FirstName7051 MiddleName7051",LastName7051 +7052,7052,"FirstName7052 MiddleName7052",LastName7052 +7053,7053,"FirstName7053 MiddleName7053",LastName7053 +7054,7054,"FirstName7054 MiddleName7054",LastName7054 +7055,7055,"FirstName7055 MiddleName7055",LastName7055 +7056,7056,"FirstName7056 MiddleName7056",LastName7056 +7057,7057,"FirstName7057 MiddleName7057",LastName7057 +7058,7058,"FirstName7058 MiddleName7058",LastName7058 +7059,7059,"FirstName7059 MiddleName7059",LastName7059 +7060,7060,"FirstName7060 MiddleName7060",LastName7060 +7061,7061,"FirstName7061 MiddleName7061",LastName7061 +7062,7062,"FirstName7062 MiddleName7062",LastName7062 +7063,7063,"FirstName7063 MiddleName7063",LastName7063 +7064,7064,"FirstName7064 MiddleName7064",LastName7064 +7065,7065,"FirstName7065 MiddleName7065",LastName7065 +7066,7066,"FirstName7066 MiddleName7066",LastName7066 +7067,7067,"FirstName7067 MiddleName7067",LastName7067 +7068,7068,"FirstName7068 MiddleName7068",LastName7068 +7069,7069,"FirstName7069 MiddleName7069",LastName7069 +7070,7070,"FirstName7070 MiddleName7070",LastName7070 +7071,7071,"FirstName7071 MiddleName7071",LastName7071 +7072,7072,"FirstName7072 MiddleName7072",LastName7072 +7073,7073,"FirstName7073 MiddleName7073",LastName7073 +7074,7074,"FirstName7074 MiddleName7074",LastName7074 +7075,7075,"FirstName7075 MiddleName7075",LastName7075 +7076,7076,"FirstName7076 MiddleName7076",LastName7076 +7077,7077,"FirstName7077 MiddleName7077",LastName7077 +7078,7078,"FirstName7078 MiddleName7078",LastName7078 +7079,7079,"FirstName7079 MiddleName7079",LastName7079 +7080,7080,"FirstName7080 MiddleName7080",LastName7080 +7081,7081,"FirstName7081 MiddleName7081",LastName7081 +7082,7082,"FirstName7082 MiddleName7082",LastName7082 +7083,7083,"FirstName7083 MiddleName7083",LastName7083 +7084,7084,"FirstName7084 MiddleName7084",LastName7084 
+7085,7085,"FirstName7085 MiddleName7085",LastName7085 +7086,7086,"FirstName7086 MiddleName7086",LastName7086 +7087,7087,"FirstName7087 MiddleName7087",LastName7087 +7088,7088,"FirstName7088 MiddleName7088",LastName7088 +7089,7089,"FirstName7089 MiddleName7089",LastName7089 +7090,7090,"FirstName7090 MiddleName7090",LastName7090 +7091,7091,"FirstName7091 MiddleName7091",LastName7091 +7092,7092,"FirstName7092 MiddleName7092",LastName7092 +7093,7093,"FirstName7093 MiddleName7093",LastName7093 +7094,7094,"FirstName7094 MiddleName7094",LastName7094 +7095,7095,"FirstName7095 MiddleName7095",LastName7095 +7096,7096,"FirstName7096 MiddleName7096",LastName7096 +7097,7097,"FirstName7097 MiddleName7097",LastName7097 +7098,7098,"FirstName7098 MiddleName7098",LastName7098 +7099,7099,"FirstName7099 MiddleName7099",LastName7099 +7100,7100,"FirstName7100 MiddleName7100",LastName7100 +7101,7101,"FirstName7101 MiddleName7101",LastName7101 +7102,7102,"FirstName7102 MiddleName7102",LastName7102 +7103,7103,"FirstName7103 MiddleName7103",LastName7103 +7104,7104,"FirstName7104 MiddleName7104",LastName7104 +7105,7105,"FirstName7105 MiddleName7105",LastName7105 +7106,7106,"FirstName7106 MiddleName7106",LastName7106 +7107,7107,"FirstName7107 MiddleName7107",LastName7107 +7108,7108,"FirstName7108 MiddleName7108",LastName7108 +7109,7109,"FirstName7109 MiddleName7109",LastName7109 +7110,7110,"FirstName7110 MiddleName7110",LastName7110 +7111,7111,"FirstName7111 MiddleName7111",LastName7111 +7112,7112,"FirstName7112 MiddleName7112",LastName7112 +7113,7113,"FirstName7113 MiddleName7113",LastName7113 +7114,7114,"FirstName7114 MiddleName7114",LastName7114 +7115,7115,"FirstName7115 MiddleName7115",LastName7115 +7116,7116,"FirstName7116 MiddleName7116",LastName7116 +7117,7117,"FirstName7117 MiddleName7117",LastName7117 +7118,7118,"FirstName7118 MiddleName7118",LastName7118 +7119,7119,"FirstName7119 MiddleName7119",LastName7119 +7120,7120,"FirstName7120 MiddleName7120",LastName7120 
+7121,7121,"FirstName7121 MiddleName7121",LastName7121 +7122,7122,"FirstName7122 MiddleName7122",LastName7122 +7123,7123,"FirstName7123 MiddleName7123",LastName7123 +7124,7124,"FirstName7124 MiddleName7124",LastName7124 +7125,7125,"FirstName7125 MiddleName7125",LastName7125 +7126,7126,"FirstName7126 MiddleName7126",LastName7126 +7127,7127,"FirstName7127 MiddleName7127",LastName7127 +7128,7128,"FirstName7128 MiddleName7128",LastName7128 +7129,7129,"FirstName7129 MiddleName7129",LastName7129 +7130,7130,"FirstName7130 MiddleName7130",LastName7130 +7131,7131,"FirstName7131 MiddleName7131",LastName7131 +7132,7132,"FirstName7132 MiddleName7132",LastName7132 +7133,7133,"FirstName7133 MiddleName7133",LastName7133 +7134,7134,"FirstName7134 MiddleName7134",LastName7134 +7135,7135,"FirstName7135 MiddleName7135",LastName7135 +7136,7136,"FirstName7136 MiddleName7136",LastName7136 +7137,7137,"FirstName7137 MiddleName7137",LastName7137 +7138,7138,"FirstName7138 MiddleName7138",LastName7138 +7139,7139,"FirstName7139 MiddleName7139",LastName7139 +7140,7140,"FirstName7140 MiddleName7140",LastName7140 +7141,7141,"FirstName7141 MiddleName7141",LastName7141 +7142,7142,"FirstName7142 MiddleName7142",LastName7142 +7143,7143,"FirstName7143 MiddleName7143",LastName7143 +7144,7144,"FirstName7144 MiddleName7144",LastName7144 +7145,7145,"FirstName7145 MiddleName7145",LastName7145 +7146,7146,"FirstName7146 MiddleName7146",LastName7146 +7147,7147,"FirstName7147 MiddleName7147",LastName7147 +7148,7148,"FirstName7148 MiddleName7148",LastName7148 +7149,7149,"FirstName7149 MiddleName7149",LastName7149 +7150,7150,"FirstName7150 MiddleName7150",LastName7150 +7151,7151,"FirstName7151 MiddleName7151",LastName7151 +7152,7152,"FirstName7152 MiddleName7152",LastName7152 +7153,7153,"FirstName7153 MiddleName7153",LastName7153 +7154,7154,"FirstName7154 MiddleName7154",LastName7154 +7155,7155,"FirstName7155 MiddleName7155",LastName7155 +7156,7156,"FirstName7156 MiddleName7156",LastName7156 
+7157,7157,"FirstName7157 MiddleName7157",LastName7157 +7158,7158,"FirstName7158 MiddleName7158",LastName7158 +7159,7159,"FirstName7159 MiddleName7159",LastName7159 +7160,7160,"FirstName7160 MiddleName7160",LastName7160 +7161,7161,"FirstName7161 MiddleName7161",LastName7161 +7162,7162,"FirstName7162 MiddleName7162",LastName7162 +7163,7163,"FirstName7163 MiddleName7163",LastName7163 +7164,7164,"FirstName7164 MiddleName7164",LastName7164 +7165,7165,"FirstName7165 MiddleName7165",LastName7165 +7166,7166,"FirstName7166 MiddleName7166",LastName7166 +7167,7167,"FirstName7167 MiddleName7167",LastName7167 +7168,7168,"FirstName7168 MiddleName7168",LastName7168 +7169,7169,"FirstName7169 MiddleName7169",LastName7169 +7170,7170,"FirstName7170 MiddleName7170",LastName7170 +7171,7171,"FirstName7171 MiddleName7171",LastName7171 +7172,7172,"FirstName7172 MiddleName7172",LastName7172 +7173,7173,"FirstName7173 MiddleName7173",LastName7173 +7174,7174,"FirstName7174 MiddleName7174",LastName7174 +7175,7175,"FirstName7175 MiddleName7175",LastName7175 +7176,7176,"FirstName7176 MiddleName7176",LastName7176 +7177,7177,"FirstName7177 MiddleName7177",LastName7177 +7178,7178,"FirstName7178 MiddleName7178",LastName7178 +7179,7179,"FirstName7179 MiddleName7179",LastName7179 +7180,7180,"FirstName7180 MiddleName7180",LastName7180 +7181,7181,"FirstName7181 MiddleName7181",LastName7181 +7182,7182,"FirstName7182 MiddleName7182",LastName7182 +7183,7183,"FirstName7183 MiddleName7183",LastName7183 +7184,7184,"FirstName7184 MiddleName7184",LastName7184 +7185,7185,"FirstName7185 MiddleName7185",LastName7185 +7186,7186,"FirstName7186 MiddleName7186",LastName7186 +7187,7187,"FirstName7187 MiddleName7187",LastName7187 +7188,7188,"FirstName7188 MiddleName7188",LastName7188 +7189,7189,"FirstName7189 MiddleName7189",LastName7189 +7190,7190,"FirstName7190 MiddleName7190",LastName7190 +7191,7191,"FirstName7191 MiddleName7191",LastName7191 +7192,7192,"FirstName7192 MiddleName7192",LastName7192 
+7193,7193,"FirstName7193 MiddleName7193",LastName7193 +7194,7194,"FirstName7194 MiddleName7194",LastName7194 +7195,7195,"FirstName7195 MiddleName7195",LastName7195 +7196,7196,"FirstName7196 MiddleName7196",LastName7196 +7197,7197,"FirstName7197 MiddleName7197",LastName7197 +7198,7198,"FirstName7198 MiddleName7198",LastName7198 +7199,7199,"FirstName7199 MiddleName7199",LastName7199 +7200,7200,"FirstName7200 MiddleName7200",LastName7200 +7201,7201,"FirstName7201 MiddleName7201",LastName7201 +7202,7202,"FirstName7202 MiddleName7202",LastName7202 +7203,7203,"FirstName7203 MiddleName7203",LastName7203 +7204,7204,"FirstName7204 MiddleName7204",LastName7204 +7205,7205,"FirstName7205 MiddleName7205",LastName7205 +7206,7206,"FirstName7206 MiddleName7206",LastName7206 +7207,7207,"FirstName7207 MiddleName7207",LastName7207 +7208,7208,"FirstName7208 MiddleName7208",LastName7208 +7209,7209,"FirstName7209 MiddleName7209",LastName7209 +7210,7210,"FirstName7210 MiddleName7210",LastName7210 +7211,7211,"FirstName7211 MiddleName7211",LastName7211 +7212,7212,"FirstName7212 MiddleName7212",LastName7212 +7213,7213,"FirstName7213 MiddleName7213",LastName7213 +7214,7214,"FirstName7214 MiddleName7214",LastName7214 +7215,7215,"FirstName7215 MiddleName7215",LastName7215 +7216,7216,"FirstName7216 MiddleName7216",LastName7216 +7217,7217,"FirstName7217 MiddleName7217",LastName7217 +7218,7218,"FirstName7218 MiddleName7218",LastName7218 +7219,7219,"FirstName7219 MiddleName7219",LastName7219 +7220,7220,"FirstName7220 MiddleName7220",LastName7220 +7221,7221,"FirstName7221 MiddleName7221",LastName7221 +7222,7222,"FirstName7222 MiddleName7222",LastName7222 +7223,7223,"FirstName7223 MiddleName7223",LastName7223 +7224,7224,"FirstName7224 MiddleName7224",LastName7224 +7225,7225,"FirstName7225 MiddleName7225",LastName7225 +7226,7226,"FirstName7226 MiddleName7226",LastName7226 +7227,7227,"FirstName7227 MiddleName7227",LastName7227 +7228,7228,"FirstName7228 MiddleName7228",LastName7228 
+7229,7229,"FirstName7229 MiddleName7229",LastName7229 +7230,7230,"FirstName7230 MiddleName7230",LastName7230 +7231,7231,"FirstName7231 MiddleName7231",LastName7231 +7232,7232,"FirstName7232 MiddleName7232",LastName7232 +7233,7233,"FirstName7233 MiddleName7233",LastName7233 +7234,7234,"FirstName7234 MiddleName7234",LastName7234 +7235,7235,"FirstName7235 MiddleName7235",LastName7235 +7236,7236,"FirstName7236 MiddleName7236",LastName7236 +7237,7237,"FirstName7237 MiddleName7237",LastName7237 +7238,7238,"FirstName7238 MiddleName7238",LastName7238 +7239,7239,"FirstName7239 MiddleName7239",LastName7239 +7240,7240,"FirstName7240 MiddleName7240",LastName7240 +7241,7241,"FirstName7241 MiddleName7241",LastName7241 +7242,7242,"FirstName7242 MiddleName7242",LastName7242 +7243,7243,"FirstName7243 MiddleName7243",LastName7243 +7244,7244,"FirstName7244 MiddleName7244",LastName7244 +7245,7245,"FirstName7245 MiddleName7245",LastName7245 +7246,7246,"FirstName7246 MiddleName7246",LastName7246 +7247,7247,"FirstName7247 MiddleName7247",LastName7247 +7248,7248,"FirstName7248 MiddleName7248",LastName7248 +7249,7249,"FirstName7249 MiddleName7249",LastName7249 +7250,7250,"FirstName7250 MiddleName7250",LastName7250 +7251,7251,"FirstName7251 MiddleName7251",LastName7251 +7252,7252,"FirstName7252 MiddleName7252",LastName7252 +7253,7253,"FirstName7253 MiddleName7253",LastName7253 +7254,7254,"FirstName7254 MiddleName7254",LastName7254 +7255,7255,"FirstName7255 MiddleName7255",LastName7255 +7256,7256,"FirstName7256 MiddleName7256",LastName7256 +7257,7257,"FirstName7257 MiddleName7257",LastName7257 +7258,7258,"FirstName7258 MiddleName7258",LastName7258 +7259,7259,"FirstName7259 MiddleName7259",LastName7259 +7260,7260,"FirstName7260 MiddleName7260",LastName7260 +7261,7261,"FirstName7261 MiddleName7261",LastName7261 +7262,7262,"FirstName7262 MiddleName7262",LastName7262 +7263,7263,"FirstName7263 MiddleName7263",LastName7263 +7264,7264,"FirstName7264 MiddleName7264",LastName7264 
+7265,7265,"FirstName7265 MiddleName7265",LastName7265 +7266,7266,"FirstName7266 MiddleName7266",LastName7266 +7267,7267,"FirstName7267 MiddleName7267",LastName7267 +7268,7268,"FirstName7268 MiddleName7268",LastName7268 +7269,7269,"FirstName7269 MiddleName7269",LastName7269 +7270,7270,"FirstName7270 MiddleName7270",LastName7270 +7271,7271,"FirstName7271 MiddleName7271",LastName7271 +7272,7272,"FirstName7272 MiddleName7272",LastName7272 +7273,7273,"FirstName7273 MiddleName7273",LastName7273 +7274,7274,"FirstName7274 MiddleName7274",LastName7274 +7275,7275,"FirstName7275 MiddleName7275",LastName7275 +7276,7276,"FirstName7276 MiddleName7276",LastName7276 +7277,7277,"FirstName7277 MiddleName7277",LastName7277 +7278,7278,"FirstName7278 MiddleName7278",LastName7278 +7279,7279,"FirstName7279 MiddleName7279",LastName7279 +7280,7280,"FirstName7280 MiddleName7280",LastName7280 +7281,7281,"FirstName7281 MiddleName7281",LastName7281 +7282,7282,"FirstName7282 MiddleName7282",LastName7282 +7283,7283,"FirstName7283 MiddleName7283",LastName7283 +7284,7284,"FirstName7284 MiddleName7284",LastName7284 +7285,7285,"FirstName7285 MiddleName7285",LastName7285 +7286,7286,"FirstName7286 MiddleName7286",LastName7286 +7287,7287,"FirstName7287 MiddleName7287",LastName7287 +7288,7288,"FirstName7288 MiddleName7288",LastName7288 +7289,7289,"FirstName7289 MiddleName7289",LastName7289 +7290,7290,"FirstName7290 MiddleName7290",LastName7290 +7291,7291,"FirstName7291 MiddleName7291",LastName7291 +7292,7292,"FirstName7292 MiddleName7292",LastName7292 +7293,7293,"FirstName7293 MiddleName7293",LastName7293 +7294,7294,"FirstName7294 MiddleName7294",LastName7294 +7295,7295,"FirstName7295 MiddleName7295",LastName7295 +7296,7296,"FirstName7296 MiddleName7296",LastName7296 +7297,7297,"FirstName7297 MiddleName7297",LastName7297 +7298,7298,"FirstName7298 MiddleName7298",LastName7298 +7299,7299,"FirstName7299 MiddleName7299",LastName7299 +7300,7300,"FirstName7300 MiddleName7300",LastName7300 
+7301,7301,"FirstName7301 MiddleName7301",LastName7301 +7302,7302,"FirstName7302 MiddleName7302",LastName7302 +7303,7303,"FirstName7303 MiddleName7303",LastName7303 +7304,7304,"FirstName7304 MiddleName7304",LastName7304 +7305,7305,"FirstName7305 MiddleName7305",LastName7305 +7306,7306,"FirstName7306 MiddleName7306",LastName7306 +7307,7307,"FirstName7307 MiddleName7307",LastName7307 +7308,7308,"FirstName7308 MiddleName7308",LastName7308 +7309,7309,"FirstName7309 MiddleName7309",LastName7309 +7310,7310,"FirstName7310 MiddleName7310",LastName7310 +7311,7311,"FirstName7311 MiddleName7311",LastName7311 +7312,7312,"FirstName7312 MiddleName7312",LastName7312 +7313,7313,"FirstName7313 MiddleName7313",LastName7313 +7314,7314,"FirstName7314 MiddleName7314",LastName7314 +7315,7315,"FirstName7315 MiddleName7315",LastName7315 +7316,7316,"FirstName7316 MiddleName7316",LastName7316 +7317,7317,"FirstName7317 MiddleName7317",LastName7317 +7318,7318,"FirstName7318 MiddleName7318",LastName7318 +7319,7319,"FirstName7319 MiddleName7319",LastName7319 +7320,7320,"FirstName7320 MiddleName7320",LastName7320 +7321,7321,"FirstName7321 MiddleName7321",LastName7321 +7322,7322,"FirstName7322 MiddleName7322",LastName7322 +7323,7323,"FirstName7323 MiddleName7323",LastName7323 +7324,7324,"FirstName7324 MiddleName7324",LastName7324 +7325,7325,"FirstName7325 MiddleName7325",LastName7325 +7326,7326,"FirstName7326 MiddleName7326",LastName7326 +7327,7327,"FirstName7327 MiddleName7327",LastName7327 +7328,7328,"FirstName7328 MiddleName7328",LastName7328 +7329,7329,"FirstName7329 MiddleName7329",LastName7329 +7330,7330,"FirstName7330 MiddleName7330",LastName7330 +7331,7331,"FirstName7331 MiddleName7331",LastName7331 +7332,7332,"FirstName7332 MiddleName7332",LastName7332 +7333,7333,"FirstName7333 MiddleName7333",LastName7333 +7334,7334,"FirstName7334 MiddleName7334",LastName7334 +7335,7335,"FirstName7335 MiddleName7335",LastName7335 +7336,7336,"FirstName7336 MiddleName7336",LastName7336 
+7337,7337,"FirstName7337 MiddleName7337",LastName7337 +7338,7338,"FirstName7338 MiddleName7338",LastName7338 +7339,7339,"FirstName7339 MiddleName7339",LastName7339 +7340,7340,"FirstName7340 MiddleName7340",LastName7340 +7341,7341,"FirstName7341 MiddleName7341",LastName7341 +7342,7342,"FirstName7342 MiddleName7342",LastName7342 +7343,7343,"FirstName7343 MiddleName7343",LastName7343 +7344,7344,"FirstName7344 MiddleName7344",LastName7344 +7345,7345,"FirstName7345 MiddleName7345",LastName7345 +7346,7346,"FirstName7346 MiddleName7346",LastName7346 +7347,7347,"FirstName7347 MiddleName7347",LastName7347 +7348,7348,"FirstName7348 MiddleName7348",LastName7348 +7349,7349,"FirstName7349 MiddleName7349",LastName7349 +7350,7350,"FirstName7350 MiddleName7350",LastName7350 +7351,7351,"FirstName7351 MiddleName7351",LastName7351 +7352,7352,"FirstName7352 MiddleName7352",LastName7352 +7353,7353,"FirstName7353 MiddleName7353",LastName7353 +7354,7354,"FirstName7354 MiddleName7354",LastName7354 +7355,7355,"FirstName7355 MiddleName7355",LastName7355 +7356,7356,"FirstName7356 MiddleName7356",LastName7356 +7357,7357,"FirstName7357 MiddleName7357",LastName7357 +7358,7358,"FirstName7358 MiddleName7358",LastName7358 +7359,7359,"FirstName7359 MiddleName7359",LastName7359 +7360,7360,"FirstName7360 MiddleName7360",LastName7360 +7361,7361,"FirstName7361 MiddleName7361",LastName7361 +7362,7362,"FirstName7362 MiddleName7362",LastName7362 +7363,7363,"FirstName7363 MiddleName7363",LastName7363 +7364,7364,"FirstName7364 MiddleName7364",LastName7364 +7365,7365,"FirstName7365 MiddleName7365",LastName7365 +7366,7366,"FirstName7366 MiddleName7366",LastName7366 +7367,7367,"FirstName7367 MiddleName7367",LastName7367 +7368,7368,"FirstName7368 MiddleName7368",LastName7368 +7369,7369,"FirstName7369 MiddleName7369",LastName7369 +7370,7370,"FirstName7370 MiddleName7370",LastName7370 +7371,7371,"FirstName7371 MiddleName7371",LastName7371 +7372,7372,"FirstName7372 MiddleName7372",LastName7372 
+7373,7373,"FirstName7373 MiddleName7373",LastName7373 +7374,7374,"FirstName7374 MiddleName7374",LastName7374 +7375,7375,"FirstName7375 MiddleName7375",LastName7375 +7376,7376,"FirstName7376 MiddleName7376",LastName7376 +7377,7377,"FirstName7377 MiddleName7377",LastName7377 +7378,7378,"FirstName7378 MiddleName7378",LastName7378 +7379,7379,"FirstName7379 MiddleName7379",LastName7379 +7380,7380,"FirstName7380 MiddleName7380",LastName7380 +7381,7381,"FirstName7381 MiddleName7381",LastName7381 +7382,7382,"FirstName7382 MiddleName7382",LastName7382 +7383,7383,"FirstName7383 MiddleName7383",LastName7383 +7384,7384,"FirstName7384 MiddleName7384",LastName7384 +7385,7385,"FirstName7385 MiddleName7385",LastName7385 +7386,7386,"FirstName7386 MiddleName7386",LastName7386 +7387,7387,"FirstName7387 MiddleName7387",LastName7387 +7388,7388,"FirstName7388 MiddleName7388",LastName7388 +7389,7389,"FirstName7389 MiddleName7389",LastName7389 +7390,7390,"FirstName7390 MiddleName7390",LastName7390 +7391,7391,"FirstName7391 MiddleName7391",LastName7391 +7392,7392,"FirstName7392 MiddleName7392",LastName7392 +7393,7393,"FirstName7393 MiddleName7393",LastName7393 +7394,7394,"FirstName7394 MiddleName7394",LastName7394 +7395,7395,"FirstName7395 MiddleName7395",LastName7395 +7396,7396,"FirstName7396 MiddleName7396",LastName7396 +7397,7397,"FirstName7397 MiddleName7397",LastName7397 +7398,7398,"FirstName7398 MiddleName7398",LastName7398 +7399,7399,"FirstName7399 MiddleName7399",LastName7399 +7400,7400,"FirstName7400 MiddleName7400",LastName7400 +7401,7401,"FirstName7401 MiddleName7401",LastName7401 +7402,7402,"FirstName7402 MiddleName7402",LastName7402 +7403,7403,"FirstName7403 MiddleName7403",LastName7403 +7404,7404,"FirstName7404 MiddleName7404",LastName7404 +7405,7405,"FirstName7405 MiddleName7405",LastName7405 +7406,7406,"FirstName7406 MiddleName7406",LastName7406 +7407,7407,"FirstName7407 MiddleName7407",LastName7407 +7408,7408,"FirstName7408 MiddleName7408",LastName7408 
+7409,7409,"FirstName7409 MiddleName7409",LastName7409 +7410,7410,"FirstName7410 MiddleName7410",LastName7410 +7411,7411,"FirstName7411 MiddleName7411",LastName7411 +7412,7412,"FirstName7412 MiddleName7412",LastName7412 +7413,7413,"FirstName7413 MiddleName7413",LastName7413 +7414,7414,"FirstName7414 MiddleName7414",LastName7414 +7415,7415,"FirstName7415 MiddleName7415",LastName7415 +7416,7416,"FirstName7416 MiddleName7416",LastName7416 +7417,7417,"FirstName7417 MiddleName7417",LastName7417 +7418,7418,"FirstName7418 MiddleName7418",LastName7418 +7419,7419,"FirstName7419 MiddleName7419",LastName7419 +7420,7420,"FirstName7420 MiddleName7420",LastName7420 +7421,7421,"FirstName7421 MiddleName7421",LastName7421 +7422,7422,"FirstName7422 MiddleName7422",LastName7422 +7423,7423,"FirstName7423 MiddleName7423",LastName7423 +7424,7424,"FirstName7424 MiddleName7424",LastName7424 +7425,7425,"FirstName7425 MiddleName7425",LastName7425 +7426,7426,"FirstName7426 MiddleName7426",LastName7426 +7427,7427,"FirstName7427 MiddleName7427",LastName7427 +7428,7428,"FirstName7428 MiddleName7428",LastName7428 +7429,7429,"FirstName7429 MiddleName7429",LastName7429 +7430,7430,"FirstName7430 MiddleName7430",LastName7430 +7431,7431,"FirstName7431 MiddleName7431",LastName7431 +7432,7432,"FirstName7432 MiddleName7432",LastName7432 +7433,7433,"FirstName7433 MiddleName7433",LastName7433 +7434,7434,"FirstName7434 MiddleName7434",LastName7434 +7435,7435,"FirstName7435 MiddleName7435",LastName7435 +7436,7436,"FirstName7436 MiddleName7436",LastName7436 +7437,7437,"FirstName7437 MiddleName7437",LastName7437 +7438,7438,"FirstName7438 MiddleName7438",LastName7438 +7439,7439,"FirstName7439 MiddleName7439",LastName7439 +7440,7440,"FirstName7440 MiddleName7440",LastName7440 +7441,7441,"FirstName7441 MiddleName7441",LastName7441 +7442,7442,"FirstName7442 MiddleName7442",LastName7442 +7443,7443,"FirstName7443 MiddleName7443",LastName7443 +7444,7444,"FirstName7444 MiddleName7444",LastName7444 
+7445,7445,"FirstName7445 MiddleName7445",LastName7445 +7446,7446,"FirstName7446 MiddleName7446",LastName7446 +7447,7447,"FirstName7447 MiddleName7447",LastName7447 +7448,7448,"FirstName7448 MiddleName7448",LastName7448 +7449,7449,"FirstName7449 MiddleName7449",LastName7449 +7450,7450,"FirstName7450 MiddleName7450",LastName7450 +7451,7451,"FirstName7451 MiddleName7451",LastName7451 +7452,7452,"FirstName7452 MiddleName7452",LastName7452 +7453,7453,"FirstName7453 MiddleName7453",LastName7453 +7454,7454,"FirstName7454 MiddleName7454",LastName7454 +7455,7455,"FirstName7455 MiddleName7455",LastName7455 +7456,7456,"FirstName7456 MiddleName7456",LastName7456 +7457,7457,"FirstName7457 MiddleName7457",LastName7457 +7458,7458,"FirstName7458 MiddleName7458",LastName7458 +7459,7459,"FirstName7459 MiddleName7459",LastName7459 +7460,7460,"FirstName7460 MiddleName7460",LastName7460 +7461,7461,"FirstName7461 MiddleName7461",LastName7461 +7462,7462,"FirstName7462 MiddleName7462",LastName7462 +7463,7463,"FirstName7463 MiddleName7463",LastName7463 +7464,7464,"FirstName7464 MiddleName7464",LastName7464 +7465,7465,"FirstName7465 MiddleName7465",LastName7465 +7466,7466,"FirstName7466 MiddleName7466",LastName7466 +7467,7467,"FirstName7467 MiddleName7467",LastName7467 +7468,7468,"FirstName7468 MiddleName7468",LastName7468 +7469,7469,"FirstName7469 MiddleName7469",LastName7469 +7470,7470,"FirstName7470 MiddleName7470",LastName7470 +7471,7471,"FirstName7471 MiddleName7471",LastName7471 +7472,7472,"FirstName7472 MiddleName7472",LastName7472 +7473,7473,"FirstName7473 MiddleName7473",LastName7473 +7474,7474,"FirstName7474 MiddleName7474",LastName7474 +7475,7475,"FirstName7475 MiddleName7475",LastName7475 +7476,7476,"FirstName7476 MiddleName7476",LastName7476 +7477,7477,"FirstName7477 MiddleName7477",LastName7477 +7478,7478,"FirstName7478 MiddleName7478",LastName7478 +7479,7479,"FirstName7479 MiddleName7479",LastName7479 +7480,7480,"FirstName7480 MiddleName7480",LastName7480 
+7481,7481,"FirstName7481 MiddleName7481",LastName7481 +7482,7482,"FirstName7482 MiddleName7482",LastName7482 +7483,7483,"FirstName7483 MiddleName7483",LastName7483 +7484,7484,"FirstName7484 MiddleName7484",LastName7484 +7485,7485,"FirstName7485 MiddleName7485",LastName7485 +7486,7486,"FirstName7486 MiddleName7486",LastName7486 +7487,7487,"FirstName7487 MiddleName7487",LastName7487 +7488,7488,"FirstName7488 MiddleName7488",LastName7488 +7489,7489,"FirstName7489 MiddleName7489",LastName7489 +7490,7490,"FirstName7490 MiddleName7490",LastName7490 +7491,7491,"FirstName7491 MiddleName7491",LastName7491 +7492,7492,"FirstName7492 MiddleName7492",LastName7492 +7493,7493,"FirstName7493 MiddleName7493",LastName7493 +7494,7494,"FirstName7494 MiddleName7494",LastName7494 +7495,7495,"FirstName7495 MiddleName7495",LastName7495 +7496,7496,"FirstName7496 MiddleName7496",LastName7496 +7497,7497,"FirstName7497 MiddleName7497",LastName7497 +7498,7498,"FirstName7498 MiddleName7498",LastName7498 +7499,7499,"FirstName7499 MiddleName7499",LastName7499 +7500,7500,"FirstName7500 MiddleName7500",LastName7500 +7501,7501,"FirstName7501 MiddleName7501",LastName7501 +7502,7502,"FirstName7502 MiddleName7502",LastName7502 +7503,7503,"FirstName7503 MiddleName7503",LastName7503 +7504,7504,"FirstName7504 MiddleName7504",LastName7504 +7505,7505,"FirstName7505 MiddleName7505",LastName7505 +7506,7506,"FirstName7506 MiddleName7506",LastName7506 +7507,7507,"FirstName7507 MiddleName7507",LastName7507 +7508,7508,"FirstName7508 MiddleName7508",LastName7508 +7509,7509,"FirstName7509 MiddleName7509",LastName7509 +7510,7510,"FirstName7510 MiddleName7510",LastName7510 +7511,7511,"FirstName7511 MiddleName7511",LastName7511 +7512,7512,"FirstName7512 MiddleName7512",LastName7512 +7513,7513,"FirstName7513 MiddleName7513",LastName7513 +7514,7514,"FirstName7514 MiddleName7514",LastName7514 +7515,7515,"FirstName7515 MiddleName7515",LastName7515 +7516,7516,"FirstName7516 MiddleName7516",LastName7516 
+7517,7517,"FirstName7517 MiddleName7517",LastName7517 +7518,7518,"FirstName7518 MiddleName7518",LastName7518 +7519,7519,"FirstName7519 MiddleName7519",LastName7519 +7520,7520,"FirstName7520 MiddleName7520",LastName7520 +7521,7521,"FirstName7521 MiddleName7521",LastName7521 +7522,7522,"FirstName7522 MiddleName7522",LastName7522 +7523,7523,"FirstName7523 MiddleName7523",LastName7523 +7524,7524,"FirstName7524 MiddleName7524",LastName7524 +7525,7525,"FirstName7525 MiddleName7525",LastName7525 +7526,7526,"FirstName7526 MiddleName7526",LastName7526 +7527,7527,"FirstName7527 MiddleName7527",LastName7527 +7528,7528,"FirstName7528 MiddleName7528",LastName7528 +7529,7529,"FirstName7529 MiddleName7529",LastName7529 +7530,7530,"FirstName7530 MiddleName7530",LastName7530 +7531,7531,"FirstName7531 MiddleName7531",LastName7531 +7532,7532,"FirstName7532 MiddleName7532",LastName7532 +7533,7533,"FirstName7533 MiddleName7533",LastName7533 +7534,7534,"FirstName7534 MiddleName7534",LastName7534 +7535,7535,"FirstName7535 MiddleName7535",LastName7535 +7536,7536,"FirstName7536 MiddleName7536",LastName7536 +7537,7537,"FirstName7537 MiddleName7537",LastName7537 +7538,7538,"FirstName7538 MiddleName7538",LastName7538 +7539,7539,"FirstName7539 MiddleName7539",LastName7539 +7540,7540,"FirstName7540 MiddleName7540",LastName7540 +7541,7541,"FirstName7541 MiddleName7541",LastName7541 +7542,7542,"FirstName7542 MiddleName7542",LastName7542 +7543,7543,"FirstName7543 MiddleName7543",LastName7543 +7544,7544,"FirstName7544 MiddleName7544",LastName7544 +7545,7545,"FirstName7545 MiddleName7545",LastName7545 +7546,7546,"FirstName7546 MiddleName7546",LastName7546 +7547,7547,"FirstName7547 MiddleName7547",LastName7547 +7548,7548,"FirstName7548 MiddleName7548",LastName7548 +7549,7549,"FirstName7549 MiddleName7549",LastName7549 +7550,7550,"FirstName7550 MiddleName7550",LastName7550 +7551,7551,"FirstName7551 MiddleName7551",LastName7551 +7552,7552,"FirstName7552 MiddleName7552",LastName7552 
+7553,7553,"FirstName7553 MiddleName7553",LastName7553 +7554,7554,"FirstName7554 MiddleName7554",LastName7554 +7555,7555,"FirstName7555 MiddleName7555",LastName7555 +7556,7556,"FirstName7556 MiddleName7556",LastName7556 +7557,7557,"FirstName7557 MiddleName7557",LastName7557 +7558,7558,"FirstName7558 MiddleName7558",LastName7558 +7559,7559,"FirstName7559 MiddleName7559",LastName7559 +7560,7560,"FirstName7560 MiddleName7560",LastName7560 +7561,7561,"FirstName7561 MiddleName7561",LastName7561 +7562,7562,"FirstName7562 MiddleName7562",LastName7562 +7563,7563,"FirstName7563 MiddleName7563",LastName7563 +7564,7564,"FirstName7564 MiddleName7564",LastName7564 +7565,7565,"FirstName7565 MiddleName7565",LastName7565 +7566,7566,"FirstName7566 MiddleName7566",LastName7566 +7567,7567,"FirstName7567 MiddleName7567",LastName7567 +7568,7568,"FirstName7568 MiddleName7568",LastName7568 +7569,7569,"FirstName7569 MiddleName7569",LastName7569 +7570,7570,"FirstName7570 MiddleName7570",LastName7570 +7571,7571,"FirstName7571 MiddleName7571",LastName7571 +7572,7572,"FirstName7572 MiddleName7572",LastName7572 +7573,7573,"FirstName7573 MiddleName7573",LastName7573 +7574,7574,"FirstName7574 MiddleName7574",LastName7574 +7575,7575,"FirstName7575 MiddleName7575",LastName7575 +7576,7576,"FirstName7576 MiddleName7576",LastName7576 +7577,7577,"FirstName7577 MiddleName7577",LastName7577 +7578,7578,"FirstName7578 MiddleName7578",LastName7578 +7579,7579,"FirstName7579 MiddleName7579",LastName7579 +7580,7580,"FirstName7580 MiddleName7580",LastName7580 +7581,7581,"FirstName7581 MiddleName7581",LastName7581 +7582,7582,"FirstName7582 MiddleName7582",LastName7582 +7583,7583,"FirstName7583 MiddleName7583",LastName7583 +7584,7584,"FirstName7584 MiddleName7584",LastName7584 +7585,7585,"FirstName7585 MiddleName7585",LastName7585 +7586,7586,"FirstName7586 MiddleName7586",LastName7586 +7587,7587,"FirstName7587 MiddleName7587",LastName7587 +7588,7588,"FirstName7588 MiddleName7588",LastName7588 
+7589,7589,"FirstName7589 MiddleName7589",LastName7589 +7590,7590,"FirstName7590 MiddleName7590",LastName7590 +7591,7591,"FirstName7591 MiddleName7591",LastName7591 +7592,7592,"FirstName7592 MiddleName7592",LastName7592 +7593,7593,"FirstName7593 MiddleName7593",LastName7593 +7594,7594,"FirstName7594 MiddleName7594",LastName7594 +7595,7595,"FirstName7595 MiddleName7595",LastName7595 +7596,7596,"FirstName7596 MiddleName7596",LastName7596 +7597,7597,"FirstName7597 MiddleName7597",LastName7597 +7598,7598,"FirstName7598 MiddleName7598",LastName7598 +7599,7599,"FirstName7599 MiddleName7599",LastName7599 +7600,7600,"FirstName7600 MiddleName7600",LastName7600 +7601,7601,"FirstName7601 MiddleName7601",LastName7601 +7602,7602,"FirstName7602 MiddleName7602",LastName7602 +7603,7603,"FirstName7603 MiddleName7603",LastName7603 +7604,7604,"FirstName7604 MiddleName7604",LastName7604 +7605,7605,"FirstName7605 MiddleName7605",LastName7605 +7606,7606,"FirstName7606 MiddleName7606",LastName7606 +7607,7607,"FirstName7607 MiddleName7607",LastName7607 +7608,7608,"FirstName7608 MiddleName7608",LastName7608 +7609,7609,"FirstName7609 MiddleName7609",LastName7609 +7610,7610,"FirstName7610 MiddleName7610",LastName7610 +7611,7611,"FirstName7611 MiddleName7611",LastName7611 +7612,7612,"FirstName7612 MiddleName7612",LastName7612 +7613,7613,"FirstName7613 MiddleName7613",LastName7613 +7614,7614,"FirstName7614 MiddleName7614",LastName7614 +7615,7615,"FirstName7615 MiddleName7615",LastName7615 +7616,7616,"FirstName7616 MiddleName7616",LastName7616 +7617,7617,"FirstName7617 MiddleName7617",LastName7617 +7618,7618,"FirstName7618 MiddleName7618",LastName7618 +7619,7619,"FirstName7619 MiddleName7619",LastName7619 +7620,7620,"FirstName7620 MiddleName7620",LastName7620 +7621,7621,"FirstName7621 MiddleName7621",LastName7621 +7622,7622,"FirstName7622 MiddleName7622",LastName7622 +7623,7623,"FirstName7623 MiddleName7623",LastName7623 +7624,7624,"FirstName7624 MiddleName7624",LastName7624 
+7625,7625,"FirstName7625 MiddleName7625",LastName7625 +7626,7626,"FirstName7626 MiddleName7626",LastName7626 +7627,7627,"FirstName7627 MiddleName7627",LastName7627 +7628,7628,"FirstName7628 MiddleName7628",LastName7628 +7629,7629,"FirstName7629 MiddleName7629",LastName7629 +7630,7630,"FirstName7630 MiddleName7630",LastName7630 +7631,7631,"FirstName7631 MiddleName7631",LastName7631 +7632,7632,"FirstName7632 MiddleName7632",LastName7632 +7633,7633,"FirstName7633 MiddleName7633",LastName7633 +7634,7634,"FirstName7634 MiddleName7634",LastName7634 +7635,7635,"FirstName7635 MiddleName7635",LastName7635 +7636,7636,"FirstName7636 MiddleName7636",LastName7636 +7637,7637,"FirstName7637 MiddleName7637",LastName7637 +7638,7638,"FirstName7638 MiddleName7638",LastName7638 +7639,7639,"FirstName7639 MiddleName7639",LastName7639 +7640,7640,"FirstName7640 MiddleName7640",LastName7640 +7641,7641,"FirstName7641 MiddleName7641",LastName7641 +7642,7642,"FirstName7642 MiddleName7642",LastName7642 +7643,7643,"FirstName7643 MiddleName7643",LastName7643 +7644,7644,"FirstName7644 MiddleName7644",LastName7644 +7645,7645,"FirstName7645 MiddleName7645",LastName7645 +7646,7646,"FirstName7646 MiddleName7646",LastName7646 +7647,7647,"FirstName7647 MiddleName7647",LastName7647 +7648,7648,"FirstName7648 MiddleName7648",LastName7648 +7649,7649,"FirstName7649 MiddleName7649",LastName7649 +7650,7650,"FirstName7650 MiddleName7650",LastName7650 +7651,7651,"FirstName7651 MiddleName7651",LastName7651 +7652,7652,"FirstName7652 MiddleName7652",LastName7652 +7653,7653,"FirstName7653 MiddleName7653",LastName7653 +7654,7654,"FirstName7654 MiddleName7654",LastName7654 +7655,7655,"FirstName7655 MiddleName7655",LastName7655 +7656,7656,"FirstName7656 MiddleName7656",LastName7656 +7657,7657,"FirstName7657 MiddleName7657",LastName7657 +7658,7658,"FirstName7658 MiddleName7658",LastName7658 +7659,7659,"FirstName7659 MiddleName7659",LastName7659 +7660,7660,"FirstName7660 MiddleName7660",LastName7660 
+7661,7661,"FirstName7661 MiddleName7661",LastName7661 +7662,7662,"FirstName7662 MiddleName7662",LastName7662 +7663,7663,"FirstName7663 MiddleName7663",LastName7663 +7664,7664,"FirstName7664 MiddleName7664",LastName7664 +7665,7665,"FirstName7665 MiddleName7665",LastName7665 +7666,7666,"FirstName7666 MiddleName7666",LastName7666 +7667,7667,"FirstName7667 MiddleName7667",LastName7667 +7668,7668,"FirstName7668 MiddleName7668",LastName7668 +7669,7669,"FirstName7669 MiddleName7669",LastName7669 +7670,7670,"FirstName7670 MiddleName7670",LastName7670 +7671,7671,"FirstName7671 MiddleName7671",LastName7671 +7672,7672,"FirstName7672 MiddleName7672",LastName7672 +7673,7673,"FirstName7673 MiddleName7673",LastName7673 +7674,7674,"FirstName7674 MiddleName7674",LastName7674 +7675,7675,"FirstName7675 MiddleName7675",LastName7675 +7676,7676,"FirstName7676 MiddleName7676",LastName7676 +7677,7677,"FirstName7677 MiddleName7677",LastName7677 +7678,7678,"FirstName7678 MiddleName7678",LastName7678 +7679,7679,"FirstName7679 MiddleName7679",LastName7679 +7680,7680,"FirstName7680 MiddleName7680",LastName7680 +7681,7681,"FirstName7681 MiddleName7681",LastName7681 +7682,7682,"FirstName7682 MiddleName7682",LastName7682 +7683,7683,"FirstName7683 MiddleName7683",LastName7683 +7684,7684,"FirstName7684 MiddleName7684",LastName7684 +7685,7685,"FirstName7685 MiddleName7685",LastName7685 +7686,7686,"FirstName7686 MiddleName7686",LastName7686 +7687,7687,"FirstName7687 MiddleName7687",LastName7687 +7688,7688,"FirstName7688 MiddleName7688",LastName7688 +7689,7689,"FirstName7689 MiddleName7689",LastName7689 +7690,7690,"FirstName7690 MiddleName7690",LastName7690 +7691,7691,"FirstName7691 MiddleName7691",LastName7691 +7692,7692,"FirstName7692 MiddleName7692",LastName7692 +7693,7693,"FirstName7693 MiddleName7693",LastName7693 +7694,7694,"FirstName7694 MiddleName7694",LastName7694 +7695,7695,"FirstName7695 MiddleName7695",LastName7695 +7696,7696,"FirstName7696 MiddleName7696",LastName7696 
+7697,7697,"FirstName7697 MiddleName7697",LastName7697 +7698,7698,"FirstName7698 MiddleName7698",LastName7698 +7699,7699,"FirstName7699 MiddleName7699",LastName7699 +7700,7700,"FirstName7700 MiddleName7700",LastName7700 +7701,7701,"FirstName7701 MiddleName7701",LastName7701 +7702,7702,"FirstName7702 MiddleName7702",LastName7702 +7703,7703,"FirstName7703 MiddleName7703",LastName7703 +7704,7704,"FirstName7704 MiddleName7704",LastName7704 +7705,7705,"FirstName7705 MiddleName7705",LastName7705 +7706,7706,"FirstName7706 MiddleName7706",LastName7706 +7707,7707,"FirstName7707 MiddleName7707",LastName7707 +7708,7708,"FirstName7708 MiddleName7708",LastName7708 +7709,7709,"FirstName7709 MiddleName7709",LastName7709 +7710,7710,"FirstName7710 MiddleName7710",LastName7710 +7711,7711,"FirstName7711 MiddleName7711",LastName7711 +7712,7712,"FirstName7712 MiddleName7712",LastName7712 +7713,7713,"FirstName7713 MiddleName7713",LastName7713 +7714,7714,"FirstName7714 MiddleName7714",LastName7714 +7715,7715,"FirstName7715 MiddleName7715",LastName7715 +7716,7716,"FirstName7716 MiddleName7716",LastName7716 +7717,7717,"FirstName7717 MiddleName7717",LastName7717 +7718,7718,"FirstName7718 MiddleName7718",LastName7718 +7719,7719,"FirstName7719 MiddleName7719",LastName7719 +7720,7720,"FirstName7720 MiddleName7720",LastName7720 +7721,7721,"FirstName7721 MiddleName7721",LastName7721 +7722,7722,"FirstName7722 MiddleName7722",LastName7722 +7723,7723,"FirstName7723 MiddleName7723",LastName7723 +7724,7724,"FirstName7724 MiddleName7724",LastName7724 +7725,7725,"FirstName7725 MiddleName7725",LastName7725 +7726,7726,"FirstName7726 MiddleName7726",LastName7726 +7727,7727,"FirstName7727 MiddleName7727",LastName7727 +7728,7728,"FirstName7728 MiddleName7728",LastName7728 +7729,7729,"FirstName7729 MiddleName7729",LastName7729 +7730,7730,"FirstName7730 MiddleName7730",LastName7730 +7731,7731,"FirstName7731 MiddleName7731",LastName7731 +7732,7732,"FirstName7732 MiddleName7732",LastName7732 
+7733,7733,"FirstName7733 MiddleName7733",LastName7733 +7734,7734,"FirstName7734 MiddleName7734",LastName7734 +7735,7735,"FirstName7735 MiddleName7735",LastName7735 +7736,7736,"FirstName7736 MiddleName7736",LastName7736 +7737,7737,"FirstName7737 MiddleName7737",LastName7737 +7738,7738,"FirstName7738 MiddleName7738",LastName7738 +7739,7739,"FirstName7739 MiddleName7739",LastName7739 +7740,7740,"FirstName7740 MiddleName7740",LastName7740 +7741,7741,"FirstName7741 MiddleName7741",LastName7741 +7742,7742,"FirstName7742 MiddleName7742",LastName7742 +7743,7743,"FirstName7743 MiddleName7743",LastName7743 +7744,7744,"FirstName7744 MiddleName7744",LastName7744 +7745,7745,"FirstName7745 MiddleName7745",LastName7745 +7746,7746,"FirstName7746 MiddleName7746",LastName7746 +7747,7747,"FirstName7747 MiddleName7747",LastName7747 +7748,7748,"FirstName7748 MiddleName7748",LastName7748 +7749,7749,"FirstName7749 MiddleName7749",LastName7749 +7750,7750,"FirstName7750 MiddleName7750",LastName7750 +7751,7751,"FirstName7751 MiddleName7751",LastName7751 +7752,7752,"FirstName7752 MiddleName7752",LastName7752 +7753,7753,"FirstName7753 MiddleName7753",LastName7753 +7754,7754,"FirstName7754 MiddleName7754",LastName7754 +7755,7755,"FirstName7755 MiddleName7755",LastName7755 +7756,7756,"FirstName7756 MiddleName7756",LastName7756 +7757,7757,"FirstName7757 MiddleName7757",LastName7757 +7758,7758,"FirstName7758 MiddleName7758",LastName7758 +7759,7759,"FirstName7759 MiddleName7759",LastName7759 +7760,7760,"FirstName7760 MiddleName7760",LastName7760 +7761,7761,"FirstName7761 MiddleName7761",LastName7761 +7762,7762,"FirstName7762 MiddleName7762",LastName7762 +7763,7763,"FirstName7763 MiddleName7763",LastName7763 +7764,7764,"FirstName7764 MiddleName7764",LastName7764 +7765,7765,"FirstName7765 MiddleName7765",LastName7765 +7766,7766,"FirstName7766 MiddleName7766",LastName7766 +7767,7767,"FirstName7767 MiddleName7767",LastName7767 +7768,7768,"FirstName7768 MiddleName7768",LastName7768 
+7769,7769,"FirstName7769 MiddleName7769",LastName7769 +7770,7770,"FirstName7770 MiddleName7770",LastName7770 +7771,7771,"FirstName7771 MiddleName7771",LastName7771 +7772,7772,"FirstName7772 MiddleName7772",LastName7772 +7773,7773,"FirstName7773 MiddleName7773",LastName7773 +7774,7774,"FirstName7774 MiddleName7774",LastName7774 +7775,7775,"FirstName7775 MiddleName7775",LastName7775 +7776,7776,"FirstName7776 MiddleName7776",LastName7776 +7777,7777,"FirstName7777 MiddleName7777",LastName7777 +7778,7778,"FirstName7778 MiddleName7778",LastName7778 +7779,7779,"FirstName7779 MiddleName7779",LastName7779 +7780,7780,"FirstName7780 MiddleName7780",LastName7780 +7781,7781,"FirstName7781 MiddleName7781",LastName7781 +7782,7782,"FirstName7782 MiddleName7782",LastName7782 +7783,7783,"FirstName7783 MiddleName7783",LastName7783 +7784,7784,"FirstName7784 MiddleName7784",LastName7784 +7785,7785,"FirstName7785 MiddleName7785",LastName7785 +7786,7786,"FirstName7786 MiddleName7786",LastName7786 +7787,7787,"FirstName7787 MiddleName7787",LastName7787 +7788,7788,"FirstName7788 MiddleName7788",LastName7788 +7789,7789,"FirstName7789 MiddleName7789",LastName7789 +7790,7790,"FirstName7790 MiddleName7790",LastName7790 +7791,7791,"FirstName7791 MiddleName7791",LastName7791 +7792,7792,"FirstName7792 MiddleName7792",LastName7792 +7793,7793,"FirstName7793 MiddleName7793",LastName7793 +7794,7794,"FirstName7794 MiddleName7794",LastName7794 +7795,7795,"FirstName7795 MiddleName7795",LastName7795 +7796,7796,"FirstName7796 MiddleName7796",LastName7796 +7797,7797,"FirstName7797 MiddleName7797",LastName7797 +7798,7798,"FirstName7798 MiddleName7798",LastName7798 +7799,7799,"FirstName7799 MiddleName7799",LastName7799 +7800,7800,"FirstName7800 MiddleName7800",LastName7800 +7801,7801,"FirstName7801 MiddleName7801",LastName7801 +7802,7802,"FirstName7802 MiddleName7802",LastName7802 +7803,7803,"FirstName7803 MiddleName7803",LastName7803 +7804,7804,"FirstName7804 MiddleName7804",LastName7804 
+7805,7805,"FirstName7805 MiddleName7805",LastName7805 +7806,7806,"FirstName7806 MiddleName7806",LastName7806 +7807,7807,"FirstName7807 MiddleName7807",LastName7807 +7808,7808,"FirstName7808 MiddleName7808",LastName7808 +7809,7809,"FirstName7809 MiddleName7809",LastName7809 +7810,7810,"FirstName7810 MiddleName7810",LastName7810 +7811,7811,"FirstName7811 MiddleName7811",LastName7811 +7812,7812,"FirstName7812 MiddleName7812",LastName7812 +7813,7813,"FirstName7813 MiddleName7813",LastName7813 +7814,7814,"FirstName7814 MiddleName7814",LastName7814 +7815,7815,"FirstName7815 MiddleName7815",LastName7815 +7816,7816,"FirstName7816 MiddleName7816",LastName7816 +7817,7817,"FirstName7817 MiddleName7817",LastName7817 +7818,7818,"FirstName7818 MiddleName7818",LastName7818 +7819,7819,"FirstName7819 MiddleName7819",LastName7819 +7820,7820,"FirstName7820 MiddleName7820",LastName7820 +7821,7821,"FirstName7821 MiddleName7821",LastName7821 +7822,7822,"FirstName7822 MiddleName7822",LastName7822 +7823,7823,"FirstName7823 MiddleName7823",LastName7823 +7824,7824,"FirstName7824 MiddleName7824",LastName7824 +7825,7825,"FirstName7825 MiddleName7825",LastName7825 +7826,7826,"FirstName7826 MiddleName7826",LastName7826 +7827,7827,"FirstName7827 MiddleName7827",LastName7827 +7828,7828,"FirstName7828 MiddleName7828",LastName7828 +7829,7829,"FirstName7829 MiddleName7829",LastName7829 +7830,7830,"FirstName7830 MiddleName7830",LastName7830 +7831,7831,"FirstName7831 MiddleName7831",LastName7831 +7832,7832,"FirstName7832 MiddleName7832",LastName7832 +7833,7833,"FirstName7833 MiddleName7833",LastName7833 +7834,7834,"FirstName7834 MiddleName7834",LastName7834 +7835,7835,"FirstName7835 MiddleName7835",LastName7835 +7836,7836,"FirstName7836 MiddleName7836",LastName7836 +7837,7837,"FirstName7837 MiddleName7837",LastName7837 +7838,7838,"FirstName7838 MiddleName7838",LastName7838 +7839,7839,"FirstName7839 MiddleName7839",LastName7839 +7840,7840,"FirstName7840 MiddleName7840",LastName7840 
+7841,7841,"FirstName7841 MiddleName7841",LastName7841 +7842,7842,"FirstName7842 MiddleName7842",LastName7842 +7843,7843,"FirstName7843 MiddleName7843",LastName7843 +7844,7844,"FirstName7844 MiddleName7844",LastName7844 +7845,7845,"FirstName7845 MiddleName7845",LastName7845 +7846,7846,"FirstName7846 MiddleName7846",LastName7846 +7847,7847,"FirstName7847 MiddleName7847",LastName7847 +7848,7848,"FirstName7848 MiddleName7848",LastName7848 +7849,7849,"FirstName7849 MiddleName7849",LastName7849 +7850,7850,"FirstName7850 MiddleName7850",LastName7850 +7851,7851,"FirstName7851 MiddleName7851",LastName7851 +7852,7852,"FirstName7852 MiddleName7852",LastName7852 +7853,7853,"FirstName7853 MiddleName7853",LastName7853 +7854,7854,"FirstName7854 MiddleName7854",LastName7854 +7855,7855,"FirstName7855 MiddleName7855",LastName7855 +7856,7856,"FirstName7856 MiddleName7856",LastName7856 +7857,7857,"FirstName7857 MiddleName7857",LastName7857 +7858,7858,"FirstName7858 MiddleName7858",LastName7858 +7859,7859,"FirstName7859 MiddleName7859",LastName7859 +7860,7860,"FirstName7860 MiddleName7860",LastName7860 +7861,7861,"FirstName7861 MiddleName7861",LastName7861 +7862,7862,"FirstName7862 MiddleName7862",LastName7862 +7863,7863,"FirstName7863 MiddleName7863",LastName7863 +7864,7864,"FirstName7864 MiddleName7864",LastName7864 +7865,7865,"FirstName7865 MiddleName7865",LastName7865 +7866,7866,"FirstName7866 MiddleName7866",LastName7866 +7867,7867,"FirstName7867 MiddleName7867",LastName7867 +7868,7868,"FirstName7868 MiddleName7868",LastName7868 +7869,7869,"FirstName7869 MiddleName7869",LastName7869 +7870,7870,"FirstName7870 MiddleName7870",LastName7870 +7871,7871,"FirstName7871 MiddleName7871",LastName7871 +7872,7872,"FirstName7872 MiddleName7872",LastName7872 +7873,7873,"FirstName7873 MiddleName7873",LastName7873 +7874,7874,"FirstName7874 MiddleName7874",LastName7874 +7875,7875,"FirstName7875 MiddleName7875",LastName7875 +7876,7876,"FirstName7876 MiddleName7876",LastName7876 
+7877,7877,"FirstName7877 MiddleName7877",LastName7877 +7878,7878,"FirstName7878 MiddleName7878",LastName7878 +7879,7879,"FirstName7879 MiddleName7879",LastName7879 +7880,7880,"FirstName7880 MiddleName7880",LastName7880 +7881,7881,"FirstName7881 MiddleName7881",LastName7881 +7882,7882,"FirstName7882 MiddleName7882",LastName7882 +7883,7883,"FirstName7883 MiddleName7883",LastName7883 +7884,7884,"FirstName7884 MiddleName7884",LastName7884 +7885,7885,"FirstName7885 MiddleName7885",LastName7885 +7886,7886,"FirstName7886 MiddleName7886",LastName7886 +7887,7887,"FirstName7887 MiddleName7887",LastName7887 +7888,7888,"FirstName7888 MiddleName7888",LastName7888 +7889,7889,"FirstName7889 MiddleName7889",LastName7889 +7890,7890,"FirstName7890 MiddleName7890",LastName7890 +7891,7891,"FirstName7891 MiddleName7891",LastName7891 +7892,7892,"FirstName7892 MiddleName7892",LastName7892 +7893,7893,"FirstName7893 MiddleName7893",LastName7893 +7894,7894,"FirstName7894 MiddleName7894",LastName7894 +7895,7895,"FirstName7895 MiddleName7895",LastName7895 +7896,7896,"FirstName7896 MiddleName7896",LastName7896 +7897,7897,"FirstName7897 MiddleName7897",LastName7897 +7898,7898,"FirstName7898 MiddleName7898",LastName7898 +7899,7899,"FirstName7899 MiddleName7899",LastName7899 +7900,7900,"FirstName7900 MiddleName7900",LastName7900 +7901,7901,"FirstName7901 MiddleName7901",LastName7901 +7902,7902,"FirstName7902 MiddleName7902",LastName7902 +7903,7903,"FirstName7903 MiddleName7903",LastName7903 +7904,7904,"FirstName7904 MiddleName7904",LastName7904 +7905,7905,"FirstName7905 MiddleName7905",LastName7905 +7906,7906,"FirstName7906 MiddleName7906",LastName7906 +7907,7907,"FirstName7907 MiddleName7907",LastName7907 +7908,7908,"FirstName7908 MiddleName7908",LastName7908 +7909,7909,"FirstName7909 MiddleName7909",LastName7909 +7910,7910,"FirstName7910 MiddleName7910",LastName7910 +7911,7911,"FirstName7911 MiddleName7911",LastName7911 +7912,7912,"FirstName7912 MiddleName7912",LastName7912 
+7913,7913,"FirstName7913 MiddleName7913",LastName7913 +7914,7914,"FirstName7914 MiddleName7914",LastName7914 +7915,7915,"FirstName7915 MiddleName7915",LastName7915 +7916,7916,"FirstName7916 MiddleName7916",LastName7916 +7917,7917,"FirstName7917 MiddleName7917",LastName7917 +7918,7918,"FirstName7918 MiddleName7918",LastName7918 +7919,7919,"FirstName7919 MiddleName7919",LastName7919 +7920,7920,"FirstName7920 MiddleName7920",LastName7920 +7921,7921,"FirstName7921 MiddleName7921",LastName7921 +7922,7922,"FirstName7922 MiddleName7922",LastName7922 +7923,7923,"FirstName7923 MiddleName7923",LastName7923 +7924,7924,"FirstName7924 MiddleName7924",LastName7924 +7925,7925,"FirstName7925 MiddleName7925",LastName7925 +7926,7926,"FirstName7926 MiddleName7926",LastName7926 +7927,7927,"FirstName7927 MiddleName7927",LastName7927 +7928,7928,"FirstName7928 MiddleName7928",LastName7928 +7929,7929,"FirstName7929 MiddleName7929",LastName7929 +7930,7930,"FirstName7930 MiddleName7930",LastName7930 +7931,7931,"FirstName7931 MiddleName7931",LastName7931 +7932,7932,"FirstName7932 MiddleName7932",LastName7932 +7933,7933,"FirstName7933 MiddleName7933",LastName7933 +7934,7934,"FirstName7934 MiddleName7934",LastName7934 +7935,7935,"FirstName7935 MiddleName7935",LastName7935 +7936,7936,"FirstName7936 MiddleName7936",LastName7936 +7937,7937,"FirstName7937 MiddleName7937",LastName7937 +7938,7938,"FirstName7938 MiddleName7938",LastName7938 +7939,7939,"FirstName7939 MiddleName7939",LastName7939 +7940,7940,"FirstName7940 MiddleName7940",LastName7940 +7941,7941,"FirstName7941 MiddleName7941",LastName7941 +7942,7942,"FirstName7942 MiddleName7942",LastName7942 +7943,7943,"FirstName7943 MiddleName7943",LastName7943 +7944,7944,"FirstName7944 MiddleName7944",LastName7944 +7945,7945,"FirstName7945 MiddleName7945",LastName7945 +7946,7946,"FirstName7946 MiddleName7946",LastName7946 +7947,7947,"FirstName7947 MiddleName7947",LastName7947 +7948,7948,"FirstName7948 MiddleName7948",LastName7948 
+7949,7949,"FirstName7949 MiddleName7949",LastName7949 +7950,7950,"FirstName7950 MiddleName7950",LastName7950 +7951,7951,"FirstName7951 MiddleName7951",LastName7951 +7952,7952,"FirstName7952 MiddleName7952",LastName7952 +7953,7953,"FirstName7953 MiddleName7953",LastName7953 +7954,7954,"FirstName7954 MiddleName7954",LastName7954 +7955,7955,"FirstName7955 MiddleName7955",LastName7955 +7956,7956,"FirstName7956 MiddleName7956",LastName7956 +7957,7957,"FirstName7957 MiddleName7957",LastName7957 +7958,7958,"FirstName7958 MiddleName7958",LastName7958 +7959,7959,"FirstName7959 MiddleName7959",LastName7959 +7960,7960,"FirstName7960 MiddleName7960",LastName7960 +7961,7961,"FirstName7961 MiddleName7961",LastName7961 +7962,7962,"FirstName7962 MiddleName7962",LastName7962 +7963,7963,"FirstName7963 MiddleName7963",LastName7963 +7964,7964,"FirstName7964 MiddleName7964",LastName7964 +7965,7965,"FirstName7965 MiddleName7965",LastName7965 +7966,7966,"FirstName7966 MiddleName7966",LastName7966 +7967,7967,"FirstName7967 MiddleName7967",LastName7967 +7968,7968,"FirstName7968 MiddleName7968",LastName7968 +7969,7969,"FirstName7969 MiddleName7969",LastName7969 +7970,7970,"FirstName7970 MiddleName7970",LastName7970 +7971,7971,"FirstName7971 MiddleName7971",LastName7971 +7972,7972,"FirstName7972 MiddleName7972",LastName7972 +7973,7973,"FirstName7973 MiddleName7973",LastName7973 +7974,7974,"FirstName7974 MiddleName7974",LastName7974 +7975,7975,"FirstName7975 MiddleName7975",LastName7975 +7976,7976,"FirstName7976 MiddleName7976",LastName7976 +7977,7977,"FirstName7977 MiddleName7977",LastName7977 +7978,7978,"FirstName7978 MiddleName7978",LastName7978 +7979,7979,"FirstName7979 MiddleName7979",LastName7979 +7980,7980,"FirstName7980 MiddleName7980",LastName7980 +7981,7981,"FirstName7981 MiddleName7981",LastName7981 +7982,7982,"FirstName7982 MiddleName7982",LastName7982 +7983,7983,"FirstName7983 MiddleName7983",LastName7983 +7984,7984,"FirstName7984 MiddleName7984",LastName7984 
+7985,7985,"FirstName7985 MiddleName7985",LastName7985 +7986,7986,"FirstName7986 MiddleName7986",LastName7986 +7987,7987,"FirstName7987 MiddleName7987",LastName7987 +7988,7988,"FirstName7988 MiddleName7988",LastName7988 +7989,7989,"FirstName7989 MiddleName7989",LastName7989 +7990,7990,"FirstName7990 MiddleName7990",LastName7990 +7991,7991,"FirstName7991 MiddleName7991",LastName7991 +7992,7992,"FirstName7992 MiddleName7992",LastName7992 +7993,7993,"FirstName7993 MiddleName7993",LastName7993 +7994,7994,"FirstName7994 MiddleName7994",LastName7994 +7995,7995,"FirstName7995 MiddleName7995",LastName7995 +7996,7996,"FirstName7996 MiddleName7996",LastName7996 +7997,7997,"FirstName7997 MiddleName7997",LastName7997 +7998,7998,"FirstName7998 MiddleName7998",LastName7998 +7999,7999,"FirstName7999 MiddleName7999",LastName7999 +8000,8000,"FirstName8000 MiddleName8000",LastName8000 +8001,8001,"FirstName8001 MiddleName8001",LastName8001 +8002,8002,"FirstName8002 MiddleName8002",LastName8002 +8003,8003,"FirstName8003 MiddleName8003",LastName8003 +8004,8004,"FirstName8004 MiddleName8004",LastName8004 +8005,8005,"FirstName8005 MiddleName8005",LastName8005 +8006,8006,"FirstName8006 MiddleName8006",LastName8006 +8007,8007,"FirstName8007 MiddleName8007",LastName8007 +8008,8008,"FirstName8008 MiddleName8008",LastName8008 +8009,8009,"FirstName8009 MiddleName8009",LastName8009 +8010,8010,"FirstName8010 MiddleName8010",LastName8010 +8011,8011,"FirstName8011 MiddleName8011",LastName8011 +8012,8012,"FirstName8012 MiddleName8012",LastName8012 +8013,8013,"FirstName8013 MiddleName8013",LastName8013 +8014,8014,"FirstName8014 MiddleName8014",LastName8014 +8015,8015,"FirstName8015 MiddleName8015",LastName8015 +8016,8016,"FirstName8016 MiddleName8016",LastName8016 +8017,8017,"FirstName8017 MiddleName8017",LastName8017 +8018,8018,"FirstName8018 MiddleName8018",LastName8018 +8019,8019,"FirstName8019 MiddleName8019",LastName8019 +8020,8020,"FirstName8020 MiddleName8020",LastName8020 
+8021,8021,"FirstName8021 MiddleName8021",LastName8021 +8022,8022,"FirstName8022 MiddleName8022",LastName8022 +8023,8023,"FirstName8023 MiddleName8023",LastName8023 +8024,8024,"FirstName8024 MiddleName8024",LastName8024 +8025,8025,"FirstName8025 MiddleName8025",LastName8025 +8026,8026,"FirstName8026 MiddleName8026",LastName8026 +8027,8027,"FirstName8027 MiddleName8027",LastName8027 +8028,8028,"FirstName8028 MiddleName8028",LastName8028 +8029,8029,"FirstName8029 MiddleName8029",LastName8029 +8030,8030,"FirstName8030 MiddleName8030",LastName8030 +8031,8031,"FirstName8031 MiddleName8031",LastName8031 +8032,8032,"FirstName8032 MiddleName8032",LastName8032 +8033,8033,"FirstName8033 MiddleName8033",LastName8033 +8034,8034,"FirstName8034 MiddleName8034",LastName8034 +8035,8035,"FirstName8035 MiddleName8035",LastName8035 +8036,8036,"FirstName8036 MiddleName8036",LastName8036 +8037,8037,"FirstName8037 MiddleName8037",LastName8037 +8038,8038,"FirstName8038 MiddleName8038",LastName8038 +8039,8039,"FirstName8039 MiddleName8039",LastName8039 +8040,8040,"FirstName8040 MiddleName8040",LastName8040 +8041,8041,"FirstName8041 MiddleName8041",LastName8041 +8042,8042,"FirstName8042 MiddleName8042",LastName8042 +8043,8043,"FirstName8043 MiddleName8043",LastName8043 +8044,8044,"FirstName8044 MiddleName8044",LastName8044 +8045,8045,"FirstName8045 MiddleName8045",LastName8045 +8046,8046,"FirstName8046 MiddleName8046",LastName8046 +8047,8047,"FirstName8047 MiddleName8047",LastName8047 +8048,8048,"FirstName8048 MiddleName8048",LastName8048 +8049,8049,"FirstName8049 MiddleName8049",LastName8049 +8050,8050,"FirstName8050 MiddleName8050",LastName8050 +8051,8051,"FirstName8051 MiddleName8051",LastName8051 +8052,8052,"FirstName8052 MiddleName8052",LastName8052 +8053,8053,"FirstName8053 MiddleName8053",LastName8053 +8054,8054,"FirstName8054 MiddleName8054",LastName8054 +8055,8055,"FirstName8055 MiddleName8055",LastName8055 +8056,8056,"FirstName8056 MiddleName8056",LastName8056 
+8057,8057,"FirstName8057 MiddleName8057",LastName8057 +8058,8058,"FirstName8058 MiddleName8058",LastName8058 +8059,8059,"FirstName8059 MiddleName8059",LastName8059 +8060,8060,"FirstName8060 MiddleName8060",LastName8060 +8061,8061,"FirstName8061 MiddleName8061",LastName8061 +8062,8062,"FirstName8062 MiddleName8062",LastName8062 +8063,8063,"FirstName8063 MiddleName8063",LastName8063 +8064,8064,"FirstName8064 MiddleName8064",LastName8064 +8065,8065,"FirstName8065 MiddleName8065",LastName8065 +8066,8066,"FirstName8066 MiddleName8066",LastName8066 +8067,8067,"FirstName8067 MiddleName8067",LastName8067 +8068,8068,"FirstName8068 MiddleName8068",LastName8068 +8069,8069,"FirstName8069 MiddleName8069",LastName8069 +8070,8070,"FirstName8070 MiddleName8070",LastName8070 +8071,8071,"FirstName8071 MiddleName8071",LastName8071 +8072,8072,"FirstName8072 MiddleName8072",LastName8072 +8073,8073,"FirstName8073 MiddleName8073",LastName8073 +8074,8074,"FirstName8074 MiddleName8074",LastName8074 +8075,8075,"FirstName8075 MiddleName8075",LastName8075 +8076,8076,"FirstName8076 MiddleName8076",LastName8076 +8077,8077,"FirstName8077 MiddleName8077",LastName8077 +8078,8078,"FirstName8078 MiddleName8078",LastName8078 +8079,8079,"FirstName8079 MiddleName8079",LastName8079 +8080,8080,"FirstName8080 MiddleName8080",LastName8080 +8081,8081,"FirstName8081 MiddleName8081",LastName8081 +8082,8082,"FirstName8082 MiddleName8082",LastName8082 +8083,8083,"FirstName8083 MiddleName8083",LastName8083 +8084,8084,"FirstName8084 MiddleName8084",LastName8084 +8085,8085,"FirstName8085 MiddleName8085",LastName8085 +8086,8086,"FirstName8086 MiddleName8086",LastName8086 +8087,8087,"FirstName8087 MiddleName8087",LastName8087 +8088,8088,"FirstName8088 MiddleName8088",LastName8088 +8089,8089,"FirstName8089 MiddleName8089",LastName8089 +8090,8090,"FirstName8090 MiddleName8090",LastName8090 +8091,8091,"FirstName8091 MiddleName8091",LastName8091 +8092,8092,"FirstName8092 MiddleName8092",LastName8092 
+8093,8093,"FirstName8093 MiddleName8093",LastName8093 +8094,8094,"FirstName8094 MiddleName8094",LastName8094 +8095,8095,"FirstName8095 MiddleName8095",LastName8095 +8096,8096,"FirstName8096 MiddleName8096",LastName8096 +8097,8097,"FirstName8097 MiddleName8097",LastName8097 +8098,8098,"FirstName8098 MiddleName8098",LastName8098 +8099,8099,"FirstName8099 MiddleName8099",LastName8099 +8100,8100,"FirstName8100 MiddleName8100",LastName8100 +8101,8101,"FirstName8101 MiddleName8101",LastName8101 +8102,8102,"FirstName8102 MiddleName8102",LastName8102 +8103,8103,"FirstName8103 MiddleName8103",LastName8103 +8104,8104,"FirstName8104 MiddleName8104",LastName8104 +8105,8105,"FirstName8105 MiddleName8105",LastName8105 +8106,8106,"FirstName8106 MiddleName8106",LastName8106 +8107,8107,"FirstName8107 MiddleName8107",LastName8107 +8108,8108,"FirstName8108 MiddleName8108",LastName8108 +8109,8109,"FirstName8109 MiddleName8109",LastName8109 +8110,8110,"FirstName8110 MiddleName8110",LastName8110 +8111,8111,"FirstName8111 MiddleName8111",LastName8111 +8112,8112,"FirstName8112 MiddleName8112",LastName8112 +8113,8113,"FirstName8113 MiddleName8113",LastName8113 +8114,8114,"FirstName8114 MiddleName8114",LastName8114 +8115,8115,"FirstName8115 MiddleName8115",LastName8115 +8116,8116,"FirstName8116 MiddleName8116",LastName8116 +8117,8117,"FirstName8117 MiddleName8117",LastName8117 +8118,8118,"FirstName8118 MiddleName8118",LastName8118 +8119,8119,"FirstName8119 MiddleName8119",LastName8119 +8120,8120,"FirstName8120 MiddleName8120",LastName8120 +8121,8121,"FirstName8121 MiddleName8121",LastName8121 +8122,8122,"FirstName8122 MiddleName8122",LastName8122 +8123,8123,"FirstName8123 MiddleName8123",LastName8123 +8124,8124,"FirstName8124 MiddleName8124",LastName8124 +8125,8125,"FirstName8125 MiddleName8125",LastName8125 +8126,8126,"FirstName8126 MiddleName8126",LastName8126 +8127,8127,"FirstName8127 MiddleName8127",LastName8127 +8128,8128,"FirstName8128 MiddleName8128",LastName8128 
+8129,8129,"FirstName8129 MiddleName8129",LastName8129 +8130,8130,"FirstName8130 MiddleName8130",LastName8130 +8131,8131,"FirstName8131 MiddleName8131",LastName8131 +8132,8132,"FirstName8132 MiddleName8132",LastName8132 +8133,8133,"FirstName8133 MiddleName8133",LastName8133 +8134,8134,"FirstName8134 MiddleName8134",LastName8134 +8135,8135,"FirstName8135 MiddleName8135",LastName8135 +8136,8136,"FirstName8136 MiddleName8136",LastName8136 +8137,8137,"FirstName8137 MiddleName8137",LastName8137 +8138,8138,"FirstName8138 MiddleName8138",LastName8138 +8139,8139,"FirstName8139 MiddleName8139",LastName8139 +8140,8140,"FirstName8140 MiddleName8140",LastName8140 +8141,8141,"FirstName8141 MiddleName8141",LastName8141 +8142,8142,"FirstName8142 MiddleName8142",LastName8142 +8143,8143,"FirstName8143 MiddleName8143",LastName8143 +8144,8144,"FirstName8144 MiddleName8144",LastName8144 +8145,8145,"FirstName8145 MiddleName8145",LastName8145 +8146,8146,"FirstName8146 MiddleName8146",LastName8146 +8147,8147,"FirstName8147 MiddleName8147",LastName8147 +8148,8148,"FirstName8148 MiddleName8148",LastName8148 +8149,8149,"FirstName8149 MiddleName8149",LastName8149 +8150,8150,"FirstName8150 MiddleName8150",LastName8150 +8151,8151,"FirstName8151 MiddleName8151",LastName8151 +8152,8152,"FirstName8152 MiddleName8152",LastName8152 +8153,8153,"FirstName8153 MiddleName8153",LastName8153 +8154,8154,"FirstName8154 MiddleName8154",LastName8154 +8155,8155,"FirstName8155 MiddleName8155",LastName8155 +8156,8156,"FirstName8156 MiddleName8156",LastName8156 +8157,8157,"FirstName8157 MiddleName8157",LastName8157 +8158,8158,"FirstName8158 MiddleName8158",LastName8158 +8159,8159,"FirstName8159 MiddleName8159",LastName8159 +8160,8160,"FirstName8160 MiddleName8160",LastName8160 +8161,8161,"FirstName8161 MiddleName8161",LastName8161 +8162,8162,"FirstName8162 MiddleName8162",LastName8162 +8163,8163,"FirstName8163 MiddleName8163",LastName8163 +8164,8164,"FirstName8164 MiddleName8164",LastName8164 
+8165,8165,"FirstName8165 MiddleName8165",LastName8165
+8166,8166,"FirstName8166 MiddleName8166",LastName8166
+8167,8167,"FirstName8167 MiddleName8167",LastName8167
+8168,8168,"FirstName8168 MiddleName8168",LastName8168
+8169,8169,"FirstName8169 MiddleName8169",LastName8169
+8170,8170,"FirstName8170 MiddleName8170",LastName8170
+8171,8171,"FirstName8171 MiddleName8171",LastName8171
+8172,8172,"FirstName8172 MiddleName8172",LastName8172
+8173,8173,"FirstName8173 MiddleName8173",LastName8173
+8174,8174,"FirstName8174 MiddleName8174",LastName8174
+8175,8175,"FirstName8175 MiddleName8175",LastName8175
+8176,8176,"FirstName8176 MiddleName8176",LastName8176
+8177,8177,"FirstName8177 MiddleName8177",LastName8177
+8178,8178,"FirstName8178 MiddleName8178",LastName8178
+8179,8179,"FirstName8179 MiddleName8179",LastName8179
+8180,8180,"FirstName8180 MiddleName8180",LastName8180
+8181,8181,"FirstName8181 MiddleName8181",LastName8181
+8182,8182,"FirstName8182 MiddleName8182",LastName8182
+8183,8183,"FirstName8183 MiddleName8183",LastName8183
+8184,8184,"FirstName8184 MiddleName8184",LastName8184
+8185,8185,"FirstName8185 MiddleName8185",LastName8185
+8186,8186,"FirstName8186 MiddleName8186",LastName8186
+8187,8187,"FirstName8187 MiddleName8187",LastName8187
+8188,8188,"FirstName8188 MiddleName8188",LastName8188
+8189,8189,"FirstName8189 MiddleName8189",LastName8189
+8190,8190,"FirstName8190 MiddleName8190",LastName8190
+8191,8191,"FirstName8191 MiddleName8191",LastName8191
+8192,8192,"FirstName8192 MiddleName8192",LastName8192
+8193,8193,"FirstName8193 MiddleName8193",LastName8193
+8194,8194,"FirstName8194 MiddleName8194",LastName8194
+8195,8195,"FirstName8195 MiddleName8195",LastName8195
+8196,8196,"FirstName8196 MiddleName8196",LastName8196
+8197,8197,"FirstName8197 MiddleName8197",LastName8197
+8198,8198,"FirstName8198 MiddleName8198",LastName8198
+8199,8199,"FirstName8199 MiddleName8199",LastName8199
+8200,8200,"FirstName8200 MiddleName8200",LastName8200
+8201,8201,"FirstName8201 MiddleName8201",LastName8201
+8202,8202,"FirstName8202 MiddleName8202",LastName8202
+8203,8203,"FirstName8203 MiddleName8203",LastName8203
+8204,8204,"FirstName8204 MiddleName8204",LastName8204
+8205,8205,"FirstName8205 MiddleName8205",LastName8205
+8206,8206,"FirstName8206 MiddleName8206",LastName8206
+8207,8207,"FirstName8207 MiddleName8207",LastName8207
+8208,8208,"FirstName8208 MiddleName8208",LastName8208
+8209,8209,"FirstName8209 MiddleName8209",LastName8209
+8210,8210,"FirstName8210 MiddleName8210",LastName8210
+8211,8211,"FirstName8211 MiddleName8211",LastName8211
+8212,8212,"FirstName8212 MiddleName8212",LastName8212
+8213,8213,"FirstName8213 MiddleName8213",LastName8213
+8214,8214,"FirstName8214 MiddleName8214",LastName8214
+8215,8215,"FirstName8215 MiddleName8215",LastName8215
+8216,8216,"FirstName8216 MiddleName8216",LastName8216
+8217,8217,"FirstName8217 MiddleName8217",LastName8217
+8218,8218,"FirstName8218 MiddleName8218",LastName8218
+8219,8219,"FirstName8219 MiddleName8219",LastName8219
+8220,8220,"FirstName8220 MiddleName8220",LastName8220
+8221,8221,"FirstName8221 MiddleName8221",LastName8221
+8222,8222,"FirstName8222 MiddleName8222",LastName8222
+8223,8223,"FirstName8223 MiddleName8223",LastName8223
+8224,8224,"FirstName8224 MiddleName8224",LastName8224
+8225,8225,"FirstName8225 MiddleName8225",LastName8225
+8226,8226,"FirstName8226 MiddleName8226",LastName8226
+8227,8227,"FirstName8227 MiddleName8227",LastName8227
+8228,8228,"FirstName8228 MiddleName8228",LastName8228
+8229,8229,"FirstName8229 MiddleName8229",LastName8229
+8230,8230,"FirstName8230 MiddleName8230",LastName8230
+8231,8231,"FirstName8231 MiddleName8231",LastName8231
+8232,8232,"FirstName8232 MiddleName8232",LastName8232
+8233,8233,"FirstName8233 MiddleName8233",LastName8233
+8234,8234,"FirstName8234 MiddleName8234",LastName8234
+8235,8235,"FirstName8235 MiddleName8235",LastName8235
+8236,8236,"FirstName8236 MiddleName8236",LastName8236
+8237,8237,"FirstName8237 MiddleName8237",LastName8237
+8238,8238,"FirstName8238 MiddleName8238",LastName8238
+8239,8239,"FirstName8239 MiddleName8239",LastName8239
+8240,8240,"FirstName8240 MiddleName8240",LastName8240
+8241,8241,"FirstName8241 MiddleName8241",LastName8241
+8242,8242,"FirstName8242 MiddleName8242",LastName8242
+8243,8243,"FirstName8243 MiddleName8243",LastName8243
+8244,8244,"FirstName8244 MiddleName8244",LastName8244
+8245,8245,"FirstName8245 MiddleName8245",LastName8245
+8246,8246,"FirstName8246 MiddleName8246",LastName8246
+8247,8247,"FirstName8247 MiddleName8247",LastName8247
+8248,8248,"FirstName8248 MiddleName8248",LastName8248
+8249,8249,"FirstName8249 MiddleName8249",LastName8249
+8250,8250,"FirstName8250 MiddleName8250",LastName8250
+8251,8251,"FirstName8251 MiddleName8251",LastName8251
+8252,8252,"FirstName8252 MiddleName8252",LastName8252
+8253,8253,"FirstName8253 MiddleName8253",LastName8253
+8254,8254,"FirstName8254 MiddleName8254",LastName8254
+8255,8255,"FirstName8255 MiddleName8255",LastName8255
+8256,8256,"FirstName8256 MiddleName8256",LastName8256
+8257,8257,"FirstName8257 MiddleName8257",LastName8257
+8258,8258,"FirstName8258 MiddleName8258",LastName8258
+8259,8259,"FirstName8259 MiddleName8259",LastName8259
+8260,8260,"FirstName8260 MiddleName8260",LastName8260
+8261,8261,"FirstName8261 MiddleName8261",LastName8261
+8262,8262,"FirstName8262 MiddleName8262",LastName8262
+8263,8263,"FirstName8263 MiddleName8263",LastName8263
+8264,8264,"FirstName8264 MiddleName8264",LastName8264
+8265,8265,"FirstName8265 MiddleName8265",LastName8265
+8266,8266,"FirstName8266 MiddleName8266",LastName8266
+8267,8267,"FirstName8267 MiddleName8267",LastName8267
+8268,8268,"FirstName8268 MiddleName8268",LastName8268
+8269,8269,"FirstName8269 MiddleName8269",LastName8269
+8270,8270,"FirstName8270 MiddleName8270",LastName8270
+8271,8271,"FirstName8271 MiddleName8271",LastName8271
+8272,8272,"FirstName8272 MiddleName8272",LastName8272
+8273,8273,"FirstName8273 MiddleName8273",LastName8273
+8274,8274,"FirstName8274 MiddleName8274",LastName8274
+8275,8275,"FirstName8275 MiddleName8275",LastName8275
+8276,8276,"FirstName8276 MiddleName8276",LastName8276
+8277,8277,"FirstName8277 MiddleName8277",LastName8277
+8278,8278,"FirstName8278 MiddleName8278",LastName8278
+8279,8279,"FirstName8279 MiddleName8279",LastName8279
+8280,8280,"FirstName8280 MiddleName8280",LastName8280
+8281,8281,"FirstName8281 MiddleName8281",LastName8281
+8282,8282,"FirstName8282 MiddleName8282",LastName8282
+8283,8283,"FirstName8283 MiddleName8283",LastName8283
+8284,8284,"FirstName8284 MiddleName8284",LastName8284
+8285,8285,"FirstName8285 MiddleName8285",LastName8285
+8286,8286,"FirstName8286 MiddleName8286",LastName8286
+8287,8287,"FirstName8287 MiddleName8287",LastName8287
+8288,8288,"FirstName8288 MiddleName8288",LastName8288
+8289,8289,"FirstName8289 MiddleName8289",LastName8289
+8290,8290,"FirstName8290 MiddleName8290",LastName8290
+8291,8291,"FirstName8291 MiddleName8291",LastName8291
+8292,8292,"FirstName8292 MiddleName8292",LastName8292
+8293,8293,"FirstName8293 MiddleName8293",LastName8293
+8294,8294,"FirstName8294 MiddleName8294",LastName8294
+8295,8295,"FirstName8295 MiddleName8295",LastName8295
+8296,8296,"FirstName8296 MiddleName8296",LastName8296
+8297,8297,"FirstName8297 MiddleName8297",LastName8297
+8298,8298,"FirstName8298 MiddleName8298",LastName8298
+8299,8299,"FirstName8299 MiddleName8299",LastName8299
+8300,8300,"FirstName8300 MiddleName8300",LastName8300
+8301,8301,"FirstName8301 MiddleName8301",LastName8301
+8302,8302,"FirstName8302 MiddleName8302",LastName8302
+8303,8303,"FirstName8303 MiddleName8303",LastName8303
+8304,8304,"FirstName8304 MiddleName8304",LastName8304
+8305,8305,"FirstName8305 MiddleName8305",LastName8305
+8306,8306,"FirstName8306 MiddleName8306",LastName8306
+8307,8307,"FirstName8307 MiddleName8307",LastName8307
+8308,8308,"FirstName8308 MiddleName8308",LastName8308
+8309,8309,"FirstName8309 MiddleName8309",LastName8309
+8310,8310,"FirstName8310 MiddleName8310",LastName8310
+8311,8311,"FirstName8311 MiddleName8311",LastName8311
+8312,8312,"FirstName8312 MiddleName8312",LastName8312
+8313,8313,"FirstName8313 MiddleName8313",LastName8313
+8314,8314,"FirstName8314 MiddleName8314",LastName8314
+8315,8315,"FirstName8315 MiddleName8315",LastName8315
+8316,8316,"FirstName8316 MiddleName8316",LastName8316
+8317,8317,"FirstName8317 MiddleName8317",LastName8317
+8318,8318,"FirstName8318 MiddleName8318",LastName8318
+8319,8319,"FirstName8319 MiddleName8319",LastName8319
+8320,8320,"FirstName8320 MiddleName8320",LastName8320
+8321,8321,"FirstName8321 MiddleName8321",LastName8321
+8322,8322,"FirstName8322 MiddleName8322",LastName8322
+8323,8323,"FirstName8323 MiddleName8323",LastName8323
+8324,8324,"FirstName8324 MiddleName8324",LastName8324
+8325,8325,"FirstName8325 MiddleName8325",LastName8325
+8326,8326,"FirstName8326 MiddleName8326",LastName8326
+8327,8327,"FirstName8327 MiddleName8327",LastName8327
+8328,8328,"FirstName8328 MiddleName8328",LastName8328
+8329,8329,"FirstName8329 MiddleName8329",LastName8329
+8330,8330,"FirstName8330 MiddleName8330",LastName8330
+8331,8331,"FirstName8331 MiddleName8331",LastName8331
+8332,8332,"FirstName8332 MiddleName8332",LastName8332
+8333,8333,"FirstName8333 MiddleName8333",LastName8333
+8334,8334,"FirstName8334 MiddleName8334",LastName8334
+8335,8335,"FirstName8335 MiddleName8335",LastName8335
+8336,8336,"FirstName8336 MiddleName8336",LastName8336
+8337,8337,"FirstName8337 MiddleName8337",LastName8337
+8338,8338,"FirstName8338 MiddleName8338",LastName8338
+8339,8339,"FirstName8339 MiddleName8339",LastName8339
+8340,8340,"FirstName8340 MiddleName8340",LastName8340
+8341,8341,"FirstName8341 MiddleName8341",LastName8341
+8342,8342,"FirstName8342 MiddleName8342",LastName8342
+8343,8343,"FirstName8343 MiddleName8343",LastName8343
+8344,8344,"FirstName8344 MiddleName8344",LastName8344
+8345,8345,"FirstName8345 MiddleName8345",LastName8345
+8346,8346,"FirstName8346 MiddleName8346",LastName8346
+8347,8347,"FirstName8347 MiddleName8347",LastName8347
+8348,8348,"FirstName8348 MiddleName8348",LastName8348
+8349,8349,"FirstName8349 MiddleName8349",LastName8349
+8350,8350,"FirstName8350 MiddleName8350",LastName8350
+8351,8351,"FirstName8351 MiddleName8351",LastName8351
+8352,8352,"FirstName8352 MiddleName8352",LastName8352
+8353,8353,"FirstName8353 MiddleName8353",LastName8353
+8354,8354,"FirstName8354 MiddleName8354",LastName8354
+8355,8355,"FirstName8355 MiddleName8355",LastName8355
+8356,8356,"FirstName8356 MiddleName8356",LastName8356
+8357,8357,"FirstName8357 MiddleName8357",LastName8357
+8358,8358,"FirstName8358 MiddleName8358",LastName8358
+8359,8359,"FirstName8359 MiddleName8359",LastName8359
+8360,8360,"FirstName8360 MiddleName8360",LastName8360
+8361,8361,"FirstName8361 MiddleName8361",LastName8361
+8362,8362,"FirstName8362 MiddleName8362",LastName8362
+8363,8363,"FirstName8363 MiddleName8363",LastName8363
+8364,8364,"FirstName8364 MiddleName8364",LastName8364
+8365,8365,"FirstName8365 MiddleName8365",LastName8365
+8366,8366,"FirstName8366 MiddleName8366",LastName8366
+8367,8367,"FirstName8367 MiddleName8367",LastName8367
+8368,8368,"FirstName8368 MiddleName8368",LastName8368
+8369,8369,"FirstName8369 MiddleName8369",LastName8369
+8370,8370,"FirstName8370 MiddleName8370",LastName8370
+8371,8371,"FirstName8371 MiddleName8371",LastName8371
+8372,8372,"FirstName8372 MiddleName8372",LastName8372
+8373,8373,"FirstName8373 MiddleName8373",LastName8373
+8374,8374,"FirstName8374 MiddleName8374",LastName8374
+8375,8375,"FirstName8375 MiddleName8375",LastName8375
+8376,8376,"FirstName8376 MiddleName8376",LastName8376
+8377,8377,"FirstName8377 MiddleName8377",LastName8377
+8378,8378,"FirstName8378 MiddleName8378",LastName8378
+8379,8379,"FirstName8379 MiddleName8379",LastName8379
+8380,8380,"FirstName8380 MiddleName8380",LastName8380
+8381,8381,"FirstName8381 MiddleName8381",LastName8381
+8382,8382,"FirstName8382 MiddleName8382",LastName8382
+8383,8383,"FirstName8383 MiddleName8383",LastName8383
+8384,8384,"FirstName8384 MiddleName8384",LastName8384
+8385,8385,"FirstName8385 MiddleName8385",LastName8385
+8386,8386,"FirstName8386 MiddleName8386",LastName8386
+8387,8387,"FirstName8387 MiddleName8387",LastName8387
+8388,8388,"FirstName8388 MiddleName8388",LastName8388
+8389,8389,"FirstName8389 MiddleName8389",LastName8389
+8390,8390,"FirstName8390 MiddleName8390",LastName8390
+8391,8391,"FirstName8391 MiddleName8391",LastName8391
+8392,8392,"FirstName8392 MiddleName8392",LastName8392
+8393,8393,"FirstName8393 MiddleName8393",LastName8393
+8394,8394,"FirstName8394 MiddleName8394",LastName8394
+8395,8395,"FirstName8395 MiddleName8395",LastName8395
+8396,8396,"FirstName8396 MiddleName8396",LastName8396
+8397,8397,"FirstName8397 MiddleName8397",LastName8397
+8398,8398,"FirstName8398 MiddleName8398",LastName8398
+8399,8399,"FirstName8399 MiddleName8399",LastName8399
+8400,8400,"FirstName8400 MiddleName8400",LastName8400
+8401,8401,"FirstName8401 MiddleName8401",LastName8401
+8402,8402,"FirstName8402 MiddleName8402",LastName8402
+8403,8403,"FirstName8403 MiddleName8403",LastName8403
+8404,8404,"FirstName8404 MiddleName8404",LastName8404
+8405,8405,"FirstName8405 MiddleName8405",LastName8405
+8406,8406,"FirstName8406 MiddleName8406",LastName8406
+8407,8407,"FirstName8407 MiddleName8407",LastName8407
+8408,8408,"FirstName8408 MiddleName8408",LastName8408
+8409,8409,"FirstName8409 MiddleName8409",LastName8409
+8410,8410,"FirstName8410 MiddleName8410",LastName8410
+8411,8411,"FirstName8411 MiddleName8411",LastName8411
+8412,8412,"FirstName8412 MiddleName8412",LastName8412
+8413,8413,"FirstName8413 MiddleName8413",LastName8413
+8414,8414,"FirstName8414 MiddleName8414",LastName8414
+8415,8415,"FirstName8415 MiddleName8415",LastName8415
+8416,8416,"FirstName8416 MiddleName8416",LastName8416
+8417,8417,"FirstName8417 MiddleName8417",LastName8417
+8418,8418,"FirstName8418 MiddleName8418",LastName8418
+8419,8419,"FirstName8419 MiddleName8419",LastName8419
+8420,8420,"FirstName8420 MiddleName8420",LastName8420
+8421,8421,"FirstName8421 MiddleName8421",LastName8421
+8422,8422,"FirstName8422 MiddleName8422",LastName8422
+8423,8423,"FirstName8423 MiddleName8423",LastName8423
+8424,8424,"FirstName8424 MiddleName8424",LastName8424
+8425,8425,"FirstName8425 MiddleName8425",LastName8425
+8426,8426,"FirstName8426 MiddleName8426",LastName8426
+8427,8427,"FirstName8427 MiddleName8427",LastName8427
+8428,8428,"FirstName8428 MiddleName8428",LastName8428
+8429,8429,"FirstName8429 MiddleName8429",LastName8429
+8430,8430,"FirstName8430 MiddleName8430",LastName8430
+8431,8431,"FirstName8431 MiddleName8431",LastName8431
+8432,8432,"FirstName8432 MiddleName8432",LastName8432
+8433,8433,"FirstName8433 MiddleName8433",LastName8433
+8434,8434,"FirstName8434 MiddleName8434",LastName8434
+8435,8435,"FirstName8435 MiddleName8435",LastName8435
+8436,8436,"FirstName8436 MiddleName8436",LastName8436
+8437,8437,"FirstName8437 MiddleName8437",LastName8437
+8438,8438,"FirstName8438 MiddleName8438",LastName8438
+8439,8439,"FirstName8439 MiddleName8439",LastName8439
+8440,8440,"FirstName8440 MiddleName8440",LastName8440
+8441,8441,"FirstName8441 MiddleName8441",LastName8441
+8442,8442,"FirstName8442 MiddleName8442",LastName8442
+8443,8443,"FirstName8443 MiddleName8443",LastName8443
+8444,8444,"FirstName8444 MiddleName8444",LastName8444
+8445,8445,"FirstName8445 MiddleName8445",LastName8445
+8446,8446,"FirstName8446 MiddleName8446",LastName8446
+8447,8447,"FirstName8447 MiddleName8447",LastName8447
+8448,8448,"FirstName8448 MiddleName8448",LastName8448
+8449,8449,"FirstName8449 MiddleName8449",LastName8449
+8450,8450,"FirstName8450 MiddleName8450",LastName8450
+8451,8451,"FirstName8451 MiddleName8451",LastName8451
+8452,8452,"FirstName8452 MiddleName8452",LastName8452
+8453,8453,"FirstName8453 MiddleName8453",LastName8453
+8454,8454,"FirstName8454 MiddleName8454",LastName8454
+8455,8455,"FirstName8455 MiddleName8455",LastName8455
+8456,8456,"FirstName8456 MiddleName8456",LastName8456
+8457,8457,"FirstName8457 MiddleName8457",LastName8457
+8458,8458,"FirstName8458 MiddleName8458",LastName8458
+8459,8459,"FirstName8459 MiddleName8459",LastName8459
+8460,8460,"FirstName8460 MiddleName8460",LastName8460
+8461,8461,"FirstName8461 MiddleName8461",LastName8461
+8462,8462,"FirstName8462 MiddleName8462",LastName8462
+8463,8463,"FirstName8463 MiddleName8463",LastName8463
+8464,8464,"FirstName8464 MiddleName8464",LastName8464
+8465,8465,"FirstName8465 MiddleName8465",LastName8465
+8466,8466,"FirstName8466 MiddleName8466",LastName8466
+8467,8467,"FirstName8467 MiddleName8467",LastName8467
+8468,8468,"FirstName8468 MiddleName8468",LastName8468
+8469,8469,"FirstName8469 MiddleName8469",LastName8469
+8470,8470,"FirstName8470 MiddleName8470",LastName8470
+8471,8471,"FirstName8471 MiddleName8471",LastName8471
+8472,8472,"FirstName8472 MiddleName8472",LastName8472
+8473,8473,"FirstName8473 MiddleName8473",LastName8473
+8474,8474,"FirstName8474 MiddleName8474",LastName8474
+8475,8475,"FirstName8475 MiddleName8475",LastName8475
+8476,8476,"FirstName8476 MiddleName8476",LastName8476
+8477,8477,"FirstName8477 MiddleName8477",LastName8477
+8478,8478,"FirstName8478 MiddleName8478",LastName8478
+8479,8479,"FirstName8479 MiddleName8479",LastName8479
+8480,8480,"FirstName8480 MiddleName8480",LastName8480
+8481,8481,"FirstName8481 MiddleName8481",LastName8481
+8482,8482,"FirstName8482 MiddleName8482",LastName8482
+8483,8483,"FirstName8483 MiddleName8483",LastName8483
+8484,8484,"FirstName8484 MiddleName8484",LastName8484
+8485,8485,"FirstName8485 MiddleName8485",LastName8485
+8486,8486,"FirstName8486 MiddleName8486",LastName8486
+8487,8487,"FirstName8487 MiddleName8487",LastName8487
+8488,8488,"FirstName8488 MiddleName8488",LastName8488
+8489,8489,"FirstName8489 MiddleName8489",LastName8489
+8490,8490,"FirstName8490 MiddleName8490",LastName8490
+8491,8491,"FirstName8491 MiddleName8491",LastName8491
+8492,8492,"FirstName8492 MiddleName8492",LastName8492
+8493,8493,"FirstName8493 MiddleName8493",LastName8493
+8494,8494,"FirstName8494 MiddleName8494",LastName8494
+8495,8495,"FirstName8495 MiddleName8495",LastName8495
+8496,8496,"FirstName8496 MiddleName8496",LastName8496
+8497,8497,"FirstName8497 MiddleName8497",LastName8497
+8498,8498,"FirstName8498 MiddleName8498",LastName8498
+8499,8499,"FirstName8499 MiddleName8499",LastName8499
+8500,8500,"FirstName8500 MiddleName8500",LastName8500
+8501,8501,"FirstName8501 MiddleName8501",LastName8501
+8502,8502,"FirstName8502 MiddleName8502",LastName8502
+8503,8503,"FirstName8503 MiddleName8503",LastName8503
+8504,8504,"FirstName8504 MiddleName8504",LastName8504
+8505,8505,"FirstName8505 MiddleName8505",LastName8505
+8506,8506,"FirstName8506 MiddleName8506",LastName8506
+8507,8507,"FirstName8507 MiddleName8507",LastName8507
+8508,8508,"FirstName8508 MiddleName8508",LastName8508
+8509,8509,"FirstName8509 MiddleName8509",LastName8509
+8510,8510,"FirstName8510 MiddleName8510",LastName8510
+8511,8511,"FirstName8511 MiddleName8511",LastName8511
+8512,8512,"FirstName8512 MiddleName8512",LastName8512
+8513,8513,"FirstName8513 MiddleName8513",LastName8513
+8514,8514,"FirstName8514 MiddleName8514",LastName8514
+8515,8515,"FirstName8515 MiddleName8515",LastName8515
+8516,8516,"FirstName8516 MiddleName8516",LastName8516
+8517,8517,"FirstName8517 MiddleName8517",LastName8517
+8518,8518,"FirstName8518 MiddleName8518",LastName8518
+8519,8519,"FirstName8519 MiddleName8519",LastName8519
+8520,8520,"FirstName8520 MiddleName8520",LastName8520
+8521,8521,"FirstName8521 MiddleName8521",LastName8521
+8522,8522,"FirstName8522 MiddleName8522",LastName8522
+8523,8523,"FirstName8523 MiddleName8523",LastName8523
+8524,8524,"FirstName8524 MiddleName8524",LastName8524
+8525,8525,"FirstName8525 MiddleName8525",LastName8525
+8526,8526,"FirstName8526 MiddleName8526",LastName8526
+8527,8527,"FirstName8527 MiddleName8527",LastName8527
+8528,8528,"FirstName8528 MiddleName8528",LastName8528
+8529,8529,"FirstName8529 MiddleName8529",LastName8529
+8530,8530,"FirstName8530 MiddleName8530",LastName8530
+8531,8531,"FirstName8531 MiddleName8531",LastName8531
+8532,8532,"FirstName8532 MiddleName8532",LastName8532
+8533,8533,"FirstName8533 MiddleName8533",LastName8533
+8534,8534,"FirstName8534 MiddleName8534",LastName8534
+8535,8535,"FirstName8535 MiddleName8535",LastName8535
+8536,8536,"FirstName8536 MiddleName8536",LastName8536
+8537,8537,"FirstName8537 MiddleName8537",LastName8537
+8538,8538,"FirstName8538 MiddleName8538",LastName8538
+8539,8539,"FirstName8539 MiddleName8539",LastName8539
+8540,8540,"FirstName8540 MiddleName8540",LastName8540
+8541,8541,"FirstName8541 MiddleName8541",LastName8541
+8542,8542,"FirstName8542 MiddleName8542",LastName8542
+8543,8543,"FirstName8543 MiddleName8543",LastName8543
+8544,8544,"FirstName8544 MiddleName8544",LastName8544
+8545,8545,"FirstName8545 MiddleName8545",LastName8545
+8546,8546,"FirstName8546 MiddleName8546",LastName8546
+8547,8547,"FirstName8547 MiddleName8547",LastName8547
+8548,8548,"FirstName8548 MiddleName8548",LastName8548
+8549,8549,"FirstName8549 MiddleName8549",LastName8549
+8550,8550,"FirstName8550 MiddleName8550",LastName8550
+8551,8551,"FirstName8551 MiddleName8551",LastName8551
+8552,8552,"FirstName8552 MiddleName8552",LastName8552
+8553,8553,"FirstName8553 MiddleName8553",LastName8553
+8554,8554,"FirstName8554 MiddleName8554",LastName8554
+8555,8555,"FirstName8555 MiddleName8555",LastName8555
+8556,8556,"FirstName8556 MiddleName8556",LastName8556
+8557,8557,"FirstName8557 MiddleName8557",LastName8557
+8558,8558,"FirstName8558 MiddleName8558",LastName8558
+8559,8559,"FirstName8559 MiddleName8559",LastName8559
+8560,8560,"FirstName8560 MiddleName8560",LastName8560
+8561,8561,"FirstName8561 MiddleName8561",LastName8561
+8562,8562,"FirstName8562 MiddleName8562",LastName8562
+8563,8563,"FirstName8563 MiddleName8563",LastName8563
+8564,8564,"FirstName8564 MiddleName8564",LastName8564
+8565,8565,"FirstName8565 MiddleName8565",LastName8565
+8566,8566,"FirstName8566 MiddleName8566",LastName8566
+8567,8567,"FirstName8567 MiddleName8567",LastName8567
+8568,8568,"FirstName8568 MiddleName8568",LastName8568
+8569,8569,"FirstName8569 MiddleName8569",LastName8569
+8570,8570,"FirstName8570 MiddleName8570",LastName8570
+8571,8571,"FirstName8571 MiddleName8571",LastName8571
+8572,8572,"FirstName8572 MiddleName8572",LastName8572
+8573,8573,"FirstName8573 MiddleName8573",LastName8573
+8574,8574,"FirstName8574 MiddleName8574",LastName8574
+8575,8575,"FirstName8575 MiddleName8575",LastName8575
+8576,8576,"FirstName8576 MiddleName8576",LastName8576
+8577,8577,"FirstName8577 MiddleName8577",LastName8577
+8578,8578,"FirstName8578 MiddleName8578",LastName8578
+8579,8579,"FirstName8579 MiddleName8579",LastName8579
+8580,8580,"FirstName8580 MiddleName8580",LastName8580
+8581,8581,"FirstName8581 MiddleName8581",LastName8581
+8582,8582,"FirstName8582 MiddleName8582",LastName8582
+8583,8583,"FirstName8583 MiddleName8583",LastName8583
+8584,8584,"FirstName8584 MiddleName8584",LastName8584
+8585,8585,"FirstName8585 MiddleName8585",LastName8585
+8586,8586,"FirstName8586 MiddleName8586",LastName8586
+8587,8587,"FirstName8587 MiddleName8587",LastName8587
+8588,8588,"FirstName8588 MiddleName8588",LastName8588
+8589,8589,"FirstName8589 MiddleName8589",LastName8589
+8590,8590,"FirstName8590 MiddleName8590",LastName8590
+8591,8591,"FirstName8591 MiddleName8591",LastName8591
+8592,8592,"FirstName8592 MiddleName8592",LastName8592
+8593,8593,"FirstName8593 MiddleName8593",LastName8593
+8594,8594,"FirstName8594 MiddleName8594",LastName8594
+8595,8595,"FirstName8595 MiddleName8595",LastName8595
+8596,8596,"FirstName8596 MiddleName8596",LastName8596
+8597,8597,"FirstName8597 MiddleName8597",LastName8597
+8598,8598,"FirstName8598 MiddleName8598",LastName8598
+8599,8599,"FirstName8599 MiddleName8599",LastName8599
+8600,8600,"FirstName8600 MiddleName8600",LastName8600
+8601,8601,"FirstName8601 MiddleName8601",LastName8601
+8602,8602,"FirstName8602 MiddleName8602",LastName8602
+8603,8603,"FirstName8603 MiddleName8603",LastName8603
+8604,8604,"FirstName8604 MiddleName8604",LastName8604
+8605,8605,"FirstName8605 MiddleName8605",LastName8605
+8606,8606,"FirstName8606 MiddleName8606",LastName8606
+8607,8607,"FirstName8607 MiddleName8607",LastName8607
+8608,8608,"FirstName8608 MiddleName8608",LastName8608
+8609,8609,"FirstName8609 MiddleName8609",LastName8609
+8610,8610,"FirstName8610 MiddleName8610",LastName8610
+8611,8611,"FirstName8611 MiddleName8611",LastName8611
+8612,8612,"FirstName8612 MiddleName8612",LastName8612
+8613,8613,"FirstName8613 MiddleName8613",LastName8613
+8614,8614,"FirstName8614 MiddleName8614",LastName8614
+8615,8615,"FirstName8615 MiddleName8615",LastName8615
+8616,8616,"FirstName8616 MiddleName8616",LastName8616
+8617,8617,"FirstName8617 MiddleName8617",LastName8617
+8618,8618,"FirstName8618 MiddleName8618",LastName8618
+8619,8619,"FirstName8619 MiddleName8619",LastName8619
+8620,8620,"FirstName8620 MiddleName8620",LastName8620
+8621,8621,"FirstName8621 MiddleName8621",LastName8621
+8622,8622,"FirstName8622 MiddleName8622",LastName8622
+8623,8623,"FirstName8623 MiddleName8623",LastName8623
+8624,8624,"FirstName8624 MiddleName8624",LastName8624
+8625,8625,"FirstName8625 MiddleName8625",LastName8625
+8626,8626,"FirstName8626 MiddleName8626",LastName8626
+8627,8627,"FirstName8627 MiddleName8627",LastName8627
+8628,8628,"FirstName8628 MiddleName8628",LastName8628
+8629,8629,"FirstName8629 MiddleName8629",LastName8629
+8630,8630,"FirstName8630 MiddleName8630",LastName8630
+8631,8631,"FirstName8631 MiddleName8631",LastName8631
+8632,8632,"FirstName8632 MiddleName8632",LastName8632
+8633,8633,"FirstName8633 MiddleName8633",LastName8633
+8634,8634,"FirstName8634 MiddleName8634",LastName8634
+8635,8635,"FirstName8635 MiddleName8635",LastName8635
+8636,8636,"FirstName8636 MiddleName8636",LastName8636
+8637,8637,"FirstName8637 MiddleName8637",LastName8637
+8638,8638,"FirstName8638 MiddleName8638",LastName8638
+8639,8639,"FirstName8639 MiddleName8639",LastName8639
+8640,8640,"FirstName8640 MiddleName8640",LastName8640
+8641,8641,"FirstName8641 MiddleName8641",LastName8641
+8642,8642,"FirstName8642 MiddleName8642",LastName8642
+8643,8643,"FirstName8643 MiddleName8643",LastName8643
+8644,8644,"FirstName8644 MiddleName8644",LastName8644
+8645,8645,"FirstName8645 MiddleName8645",LastName8645
+8646,8646,"FirstName8646 MiddleName8646",LastName8646
+8647,8647,"FirstName8647 MiddleName8647",LastName8647
+8648,8648,"FirstName8648 MiddleName8648",LastName8648
+8649,8649,"FirstName8649 MiddleName8649",LastName8649
+8650,8650,"FirstName8650 MiddleName8650",LastName8650
+8651,8651,"FirstName8651 MiddleName8651",LastName8651
+8652,8652,"FirstName8652 MiddleName8652",LastName8652
+8653,8653,"FirstName8653 MiddleName8653",LastName8653
+8654,8654,"FirstName8654 MiddleName8654",LastName8654
+8655,8655,"FirstName8655 MiddleName8655",LastName8655
+8656,8656,"FirstName8656 MiddleName8656",LastName8656
+8657,8657,"FirstName8657 MiddleName8657",LastName8657
+8658,8658,"FirstName8658 MiddleName8658",LastName8658
+8659,8659,"FirstName8659 MiddleName8659",LastName8659
+8660,8660,"FirstName8660 MiddleName8660",LastName8660
+8661,8661,"FirstName8661 MiddleName8661",LastName8661
+8662,8662,"FirstName8662 MiddleName8662",LastName8662
+8663,8663,"FirstName8663 MiddleName8663",LastName8663
+8664,8664,"FirstName8664 MiddleName8664",LastName8664
+8665,8665,"FirstName8665 MiddleName8665",LastName8665
+8666,8666,"FirstName8666 MiddleName8666",LastName8666
+8667,8667,"FirstName8667 MiddleName8667",LastName8667
+8668,8668,"FirstName8668 MiddleName8668",LastName8668
+8669,8669,"FirstName8669 MiddleName8669",LastName8669
+8670,8670,"FirstName8670 MiddleName8670",LastName8670
+8671,8671,"FirstName8671 MiddleName8671",LastName8671
+8672,8672,"FirstName8672 MiddleName8672",LastName8672
+8673,8673,"FirstName8673 MiddleName8673",LastName8673
+8674,8674,"FirstName8674 MiddleName8674",LastName8674
+8675,8675,"FirstName8675 MiddleName8675",LastName8675
+8676,8676,"FirstName8676 MiddleName8676",LastName8676
+8677,8677,"FirstName8677 MiddleName8677",LastName8677
+8678,8678,"FirstName8678 MiddleName8678",LastName8678
+8679,8679,"FirstName8679 MiddleName8679",LastName8679
+8680,8680,"FirstName8680 MiddleName8680",LastName8680
+8681,8681,"FirstName8681 MiddleName8681",LastName8681
+8682,8682,"FirstName8682 MiddleName8682",LastName8682
+8683,8683,"FirstName8683 MiddleName8683",LastName8683
+8684,8684,"FirstName8684 MiddleName8684",LastName8684
+8685,8685,"FirstName8685 MiddleName8685",LastName8685
+8686,8686,"FirstName8686 MiddleName8686",LastName8686
+8687,8687,"FirstName8687 MiddleName8687",LastName8687
+8688,8688,"FirstName8688 MiddleName8688",LastName8688
+8689,8689,"FirstName8689 MiddleName8689",LastName8689
+8690,8690,"FirstName8690 MiddleName8690",LastName8690
+8691,8691,"FirstName8691 MiddleName8691",LastName8691
+8692,8692,"FirstName8692 MiddleName8692",LastName8692
+8693,8693,"FirstName8693 MiddleName8693",LastName8693
+8694,8694,"FirstName8694 MiddleName8694",LastName8694
+8695,8695,"FirstName8695 MiddleName8695",LastName8695
+8696,8696,"FirstName8696 MiddleName8696",LastName8696
+8697,8697,"FirstName8697 MiddleName8697",LastName8697
+8698,8698,"FirstName8698 MiddleName8698",LastName8698
+8699,8699,"FirstName8699 MiddleName8699",LastName8699
+8700,8700,"FirstName8700 MiddleName8700",LastName8700
+8701,8701,"FirstName8701 MiddleName8701",LastName8701
+8702,8702,"FirstName8702 MiddleName8702",LastName8702
+8703,8703,"FirstName8703 MiddleName8703",LastName8703
+8704,8704,"FirstName8704 MiddleName8704",LastName8704
+8705,8705,"FirstName8705 MiddleName8705",LastName8705
+8706,8706,"FirstName8706 MiddleName8706",LastName8706
+8707,8707,"FirstName8707 MiddleName8707",LastName8707
+8708,8708,"FirstName8708 MiddleName8708",LastName8708
+8709,8709,"FirstName8709 MiddleName8709",LastName8709
+8710,8710,"FirstName8710 MiddleName8710",LastName8710
+8711,8711,"FirstName8711 MiddleName8711",LastName8711
+8712,8712,"FirstName8712 MiddleName8712",LastName8712
+8713,8713,"FirstName8713 MiddleName8713",LastName8713
+8714,8714,"FirstName8714 MiddleName8714",LastName8714
+8715,8715,"FirstName8715 MiddleName8715",LastName8715
+8716,8716,"FirstName8716 MiddleName8716",LastName8716
+8717,8717,"FirstName8717 MiddleName8717",LastName8717
+8718,8718,"FirstName8718 MiddleName8718",LastName8718
+8719,8719,"FirstName8719 MiddleName8719",LastName8719
+8720,8720,"FirstName8720 MiddleName8720",LastName8720
+8721,8721,"FirstName8721 MiddleName8721",LastName8721
+8722,8722,"FirstName8722 MiddleName8722",LastName8722
+8723,8723,"FirstName8723 MiddleName8723",LastName8723
+8724,8724,"FirstName8724 MiddleName8724",LastName8724
+8725,8725,"FirstName8725 MiddleName8725",LastName8725
+8726,8726,"FirstName8726 MiddleName8726",LastName8726
+8727,8727,"FirstName8727 MiddleName8727",LastName8727
+8728,8728,"FirstName8728 MiddleName8728",LastName8728
+8729,8729,"FirstName8729 MiddleName8729",LastName8729
+8730,8730,"FirstName8730 MiddleName8730",LastName8730
+8731,8731,"FirstName8731 MiddleName8731",LastName8731
+8732,8732,"FirstName8732 MiddleName8732",LastName8732
+8733,8733,"FirstName8733 MiddleName8733",LastName8733
+8734,8734,"FirstName8734 MiddleName8734",LastName8734
+8735,8735,"FirstName8735 MiddleName8735",LastName8735
+8736,8736,"FirstName8736 MiddleName8736",LastName8736
+8737,8737,"FirstName8737 MiddleName8737",LastName8737
+8738,8738,"FirstName8738 MiddleName8738",LastName8738
+8739,8739,"FirstName8739 MiddleName8739",LastName8739
+8740,8740,"FirstName8740 MiddleName8740",LastName8740
+8741,8741,"FirstName8741 MiddleName8741",LastName8741
+8742,8742,"FirstName8742 MiddleName8742",LastName8742
+8743,8743,"FirstName8743 MiddleName8743",LastName8743
+8744,8744,"FirstName8744 MiddleName8744",LastName8744
+8745,8745,"FirstName8745 MiddleName8745",LastName8745
+8746,8746,"FirstName8746 MiddleName8746",LastName8746
+8747,8747,"FirstName8747 MiddleName8747",LastName8747
+8748,8748,"FirstName8748 MiddleName8748",LastName8748
+8749,8749,"FirstName8749 MiddleName8749",LastName8749
+8750,8750,"FirstName8750 MiddleName8750",LastName8750
+8751,8751,"FirstName8751 MiddleName8751",LastName8751
+8752,8752,"FirstName8752 MiddleName8752",LastName8752
+8753,8753,"FirstName8753 MiddleName8753",LastName8753
+8754,8754,"FirstName8754 MiddleName8754",LastName8754
+8755,8755,"FirstName8755 MiddleName8755",LastName8755
+8756,8756,"FirstName8756 MiddleName8756",LastName8756
+8757,8757,"FirstName8757 MiddleName8757",LastName8757
+8758,8758,"FirstName8758 MiddleName8758",LastName8758
+8759,8759,"FirstName8759 MiddleName8759",LastName8759
+8760,8760,"FirstName8760 MiddleName8760",LastName8760
+8761,8761,"FirstName8761 MiddleName8761",LastName8761
+8762,8762,"FirstName8762 MiddleName8762",LastName8762
+8763,8763,"FirstName8763 MiddleName8763",LastName8763
+8764,8764,"FirstName8764 MiddleName8764",LastName8764
+8765,8765,"FirstName8765 MiddleName8765",LastName8765
+8766,8766,"FirstName8766 MiddleName8766",LastName8766
+8767,8767,"FirstName8767 MiddleName8767",LastName8767
+8768,8768,"FirstName8768 MiddleName8768",LastName8768
+8769,8769,"FirstName8769 MiddleName8769",LastName8769
+8770,8770,"FirstName8770 MiddleName8770",LastName8770
+8771,8771,"FirstName8771 MiddleName8771",LastName8771
+8772,8772,"FirstName8772 MiddleName8772",LastName8772
+8773,8773,"FirstName8773 MiddleName8773",LastName8773
+8774,8774,"FirstName8774 MiddleName8774",LastName8774
+8775,8775,"FirstName8775 MiddleName8775",LastName8775
+8776,8776,"FirstName8776 MiddleName8776",LastName8776
+8777,8777,"FirstName8777 MiddleName8777",LastName8777
+8778,8778,"FirstName8778 MiddleName8778",LastName8778
+8779,8779,"FirstName8779 MiddleName8779",LastName8779
+8780,8780,"FirstName8780 MiddleName8780",LastName8780
+8781,8781,"FirstName8781 MiddleName8781",LastName8781
+8782,8782,"FirstName8782 MiddleName8782",LastName8782
+8783,8783,"FirstName8783 MiddleName8783",LastName8783
+8784,8784,"FirstName8784 MiddleName8784",LastName8784
+8785,8785,"FirstName8785 MiddleName8785",LastName8785
+8786,8786,"FirstName8786 MiddleName8786",LastName8786
+8787,8787,"FirstName8787 MiddleName8787",LastName8787
+8788,8788,"FirstName8788 MiddleName8788",LastName8788
+8789,8789,"FirstName8789 MiddleName8789",LastName8789
+8790,8790,"FirstName8790 MiddleName8790",LastName8790
+8791,8791,"FirstName8791 MiddleName8791",LastName8791
+8792,8792,"FirstName8792 MiddleName8792",LastName8792
+8793,8793,"FirstName8793 MiddleName8793",LastName8793
+8794,8794,"FirstName8794 MiddleName8794",LastName8794
+8795,8795,"FirstName8795 MiddleName8795",LastName8795
+8796,8796,"FirstName8796 MiddleName8796",LastName8796
+8797,8797,"FirstName8797 MiddleName8797",LastName8797
+8798,8798,"FirstName8798 MiddleName8798",LastName8798
+8799,8799,"FirstName8799 MiddleName8799",LastName8799
+8800,8800,"FirstName8800 MiddleName8800",LastName8800
+8801,8801,"FirstName8801 MiddleName8801",LastName8801
+8802,8802,"FirstName8802 MiddleName8802",LastName8802
+8803,8803,"FirstName8803 MiddleName8803",LastName8803
+8804,8804,"FirstName8804 MiddleName8804",LastName8804
+8805,8805,"FirstName8805 MiddleName8805",LastName8805
+8806,8806,"FirstName8806 MiddleName8806",LastName8806
+8807,8807,"FirstName8807 MiddleName8807",LastName8807
+8808,8808,"FirstName8808 MiddleName8808",LastName8808
+8809,8809,"FirstName8809 MiddleName8809",LastName8809
+8810,8810,"FirstName8810 MiddleName8810",LastName8810
+8811,8811,"FirstName8811 MiddleName8811",LastName8811
+8812,8812,"FirstName8812 MiddleName8812",LastName8812
+8813,8813,"FirstName8813 MiddleName8813",LastName8813
+8814,8814,"FirstName8814 MiddleName8814",LastName8814
+8815,8815,"FirstName8815 MiddleName8815",LastName8815
+8816,8816,"FirstName8816 MiddleName8816",LastName8816
+8817,8817,"FirstName8817 MiddleName8817",LastName8817
+8818,8818,"FirstName8818 MiddleName8818",LastName8818
+8819,8819,"FirstName8819 MiddleName8819",LastName8819
+8820,8820,"FirstName8820 MiddleName8820",LastName8820
+8821,8821,"FirstName8821 MiddleName8821",LastName8821
+8822,8822,"FirstName8822 MiddleName8822",LastName8822
+8823,8823,"FirstName8823 MiddleName8823",LastName8823
+8824,8824,"FirstName8824 MiddleName8824",LastName8824
+8825,8825,"FirstName8825 MiddleName8825",LastName8825
+8826,8826,"FirstName8826 MiddleName8826",LastName8826
+8827,8827,"FirstName8827 MiddleName8827",LastName8827
+8828,8828,"FirstName8828 MiddleName8828",LastName8828
+8829,8829,"FirstName8829 MiddleName8829",LastName8829
+8830,8830,"FirstName8830 MiddleName8830",LastName8830
+8831,8831,"FirstName8831 MiddleName8831",LastName8831
+8832,8832,"FirstName8832 MiddleName8832",LastName8832
+8833,8833,"FirstName8833 MiddleName8833",LastName8833
+8834,8834,"FirstName8834 MiddleName8834",LastName8834
+8835,8835,"FirstName8835 MiddleName8835",LastName8835
+8836,8836,"FirstName8836 MiddleName8836",LastName8836
+8837,8837,"FirstName8837 MiddleName8837",LastName8837
+8838,8838,"FirstName8838 MiddleName8838",LastName8838
+8839,8839,"FirstName8839 MiddleName8839",LastName8839
+8840,8840,"FirstName8840 MiddleName8840",LastName8840
+8841,8841,"FirstName8841 MiddleName8841",LastName8841
+8842,8842,"FirstName8842 MiddleName8842",LastName8842
+8843,8843,"FirstName8843 MiddleName8843",LastName8843
+8844,8844,"FirstName8844 MiddleName8844",LastName8844
+8845,8845,"FirstName8845 MiddleName8845",LastName8845
+8846,8846,"FirstName8846 MiddleName8846",LastName8846
+8847,8847,"FirstName8847 MiddleName8847",LastName8847
+8848,8848,"FirstName8848 MiddleName8848",LastName8848
+8849,8849,"FirstName8849 MiddleName8849",LastName8849
+8850,8850,"FirstName8850 MiddleName8850",LastName8850
+8851,8851,"FirstName8851 MiddleName8851",LastName8851
+8852,8852,"FirstName8852 MiddleName8852",LastName8852
+8853,8853,"FirstName8853 MiddleName8853",LastName8853
+8854,8854,"FirstName8854 MiddleName8854",LastName8854
+8855,8855,"FirstName8855 MiddleName8855",LastName8855
+8856,8856,"FirstName8856 MiddleName8856",LastName8856
+8857,8857,"FirstName8857 MiddleName8857",LastName8857
+8858,8858,"FirstName8858 MiddleName8858",LastName8858
+8859,8859,"FirstName8859 MiddleName8859",LastName8859
+8860,8860,"FirstName8860 MiddleName8860",LastName8860
+8861,8861,"FirstName8861 MiddleName8861",LastName8861
+8862,8862,"FirstName8862 MiddleName8862",LastName8862
+8863,8863,"FirstName8863 MiddleName8863",LastName8863
+8864,8864,"FirstName8864 MiddleName8864",LastName8864
+8865,8865,"FirstName8865 MiddleName8865",LastName8865
+8866,8866,"FirstName8866 MiddleName8866",LastName8866
+8867,8867,"FirstName8867 MiddleName8867",LastName8867
+8868,8868,"FirstName8868 MiddleName8868",LastName8868
+8869,8869,"FirstName8869 MiddleName8869",LastName8869
+8870,8870,"FirstName8870 MiddleName8870",LastName8870
+8871,8871,"FirstName8871 MiddleName8871",LastName8871
+8872,8872,"FirstName8872 MiddleName8872",LastName8872
+8873,8873,"FirstName8873 MiddleName8873",LastName8873
+8874,8874,"FirstName8874 MiddleName8874",LastName8874
+8875,8875,"FirstName8875 MiddleName8875",LastName8875
+8876,8876,"FirstName8876 MiddleName8876",LastName8876
+8877,8877,"FirstName8877 MiddleName8877",LastName8877
+8878,8878,"FirstName8878 MiddleName8878",LastName8878
+8879,8879,"FirstName8879 MiddleName8879",LastName8879
+8880,8880,"FirstName8880 MiddleName8880",LastName8880
+8881,8881,"FirstName8881 MiddleName8881",LastName8881
+8882,8882,"FirstName8882 MiddleName8882",LastName8882
+8883,8883,"FirstName8883 MiddleName8883",LastName8883
+8884,8884,"FirstName8884 MiddleName8884",LastName8884
+8885,8885,"FirstName8885 MiddleName8885",LastName8885
+8886,8886,"FirstName8886 MiddleName8886",LastName8886
+8887,8887,"FirstName8887 MiddleName8887",LastName8887
+8888,8888,"FirstName8888 MiddleName8888",LastName8888
+8889,8889,"FirstName8889 MiddleName8889",LastName8889
+8890,8890,"FirstName8890 MiddleName8890",LastName8890
+8891,8891,"FirstName8891 MiddleName8891",LastName8891
+8892,8892,"FirstName8892 MiddleName8892",LastName8892
+8893,8893,"FirstName8893 MiddleName8893",LastName8893
+8894,8894,"FirstName8894 MiddleName8894",LastName8894
+8895,8895,"FirstName8895 MiddleName8895",LastName8895
+8896,8896,"FirstName8896 MiddleName8896",LastName8896
+8897,8897,"FirstName8897 MiddleName8897",LastName8897
+8898,8898,"FirstName8898 MiddleName8898",LastName8898
+8899,8899,"FirstName8899 MiddleName8899",LastName8899
+8900,8900,"FirstName8900 MiddleName8900",LastName8900
+8901,8901,"FirstName8901 MiddleName8901",LastName8901
+8902,8902,"FirstName8902 MiddleName8902",LastName8902
+8903,8903,"FirstName8903 MiddleName8903",LastName8903
+8904,8904,"FirstName8904 MiddleName8904",LastName8904
+8905,8905,"FirstName8905 MiddleName8905",LastName8905
+8906,8906,"FirstName8906 MiddleName8906",LastName8906
+8907,8907,"FirstName8907 MiddleName8907",LastName8907
+8908,8908,"FirstName8908 MiddleName8908",LastName8908
+8909,8909,"FirstName8909 MiddleName8909",LastName8909
+8910,8910,"FirstName8910 MiddleName8910",LastName8910
+8911,8911,"FirstName8911 MiddleName8911",LastName8911
+8912,8912,"FirstName8912 MiddleName8912",LastName8912
+8913,8913,"FirstName8913 MiddleName8913",LastName8913
+8914,8914,"FirstName8914 MiddleName8914",LastName8914
+8915,8915,"FirstName8915 MiddleName8915",LastName8915
+8916,8916,"FirstName8916 MiddleName8916",LastName8916
+8917,8917,"FirstName8917 MiddleName8917",LastName8917
+8918,8918,"FirstName8918 MiddleName8918",LastName8918
+8919,8919,"FirstName8919 MiddleName8919",LastName8919
+8920,8920,"FirstName8920 MiddleName8920",LastName8920
+8921,8921,"FirstName8921 MiddleName8921",LastName8921 +8922,8922,"FirstName8922 MiddleName8922",LastName8922 +8923,8923,"FirstName8923 MiddleName8923",LastName8923 +8924,8924,"FirstName8924 MiddleName8924",LastName8924 +8925,8925,"FirstName8925 MiddleName8925",LastName8925 +8926,8926,"FirstName8926 MiddleName8926",LastName8926 +8927,8927,"FirstName8927 MiddleName8927",LastName8927 +8928,8928,"FirstName8928 MiddleName8928",LastName8928 +8929,8929,"FirstName8929 MiddleName8929",LastName8929 +8930,8930,"FirstName8930 MiddleName8930",LastName8930 +8931,8931,"FirstName8931 MiddleName8931",LastName8931 +8932,8932,"FirstName8932 MiddleName8932",LastName8932 +8933,8933,"FirstName8933 MiddleName8933",LastName8933 +8934,8934,"FirstName8934 MiddleName8934",LastName8934 +8935,8935,"FirstName8935 MiddleName8935",LastName8935 +8936,8936,"FirstName8936 MiddleName8936",LastName8936 +8937,8937,"FirstName8937 MiddleName8937",LastName8937 +8938,8938,"FirstName8938 MiddleName8938",LastName8938 +8939,8939,"FirstName8939 MiddleName8939",LastName8939 +8940,8940,"FirstName8940 MiddleName8940",LastName8940 +8941,8941,"FirstName8941 MiddleName8941",LastName8941 +8942,8942,"FirstName8942 MiddleName8942",LastName8942 +8943,8943,"FirstName8943 MiddleName8943",LastName8943 +8944,8944,"FirstName8944 MiddleName8944",LastName8944 +8945,8945,"FirstName8945 MiddleName8945",LastName8945 +8946,8946,"FirstName8946 MiddleName8946",LastName8946 +8947,8947,"FirstName8947 MiddleName8947",LastName8947 +8948,8948,"FirstName8948 MiddleName8948",LastName8948 +8949,8949,"FirstName8949 MiddleName8949",LastName8949 +8950,8950,"FirstName8950 MiddleName8950",LastName8950 +8951,8951,"FirstName8951 MiddleName8951",LastName8951 +8952,8952,"FirstName8952 MiddleName8952",LastName8952 +8953,8953,"FirstName8953 MiddleName8953",LastName8953 +8954,8954,"FirstName8954 MiddleName8954",LastName8954 +8955,8955,"FirstName8955 MiddleName8955",LastName8955 +8956,8956,"FirstName8956 MiddleName8956",LastName8956 
+8957,8957,"FirstName8957 MiddleName8957",LastName8957 +8958,8958,"FirstName8958 MiddleName8958",LastName8958 +8959,8959,"FirstName8959 MiddleName8959",LastName8959 +8960,8960,"FirstName8960 MiddleName8960",LastName8960 +8961,8961,"FirstName8961 MiddleName8961",LastName8961 +8962,8962,"FirstName8962 MiddleName8962",LastName8962 +8963,8963,"FirstName8963 MiddleName8963",LastName8963 +8964,8964,"FirstName8964 MiddleName8964",LastName8964 +8965,8965,"FirstName8965 MiddleName8965",LastName8965 +8966,8966,"FirstName8966 MiddleName8966",LastName8966 +8967,8967,"FirstName8967 MiddleName8967",LastName8967 +8968,8968,"FirstName8968 MiddleName8968",LastName8968 +8969,8969,"FirstName8969 MiddleName8969",LastName8969 +8970,8970,"FirstName8970 MiddleName8970",LastName8970 +8971,8971,"FirstName8971 MiddleName8971",LastName8971 +8972,8972,"FirstName8972 MiddleName8972",LastName8972 +8973,8973,"FirstName8973 MiddleName8973",LastName8973 +8974,8974,"FirstName8974 MiddleName8974",LastName8974 +8975,8975,"FirstName8975 MiddleName8975",LastName8975 +8976,8976,"FirstName8976 MiddleName8976",LastName8976 +8977,8977,"FirstName8977 MiddleName8977",LastName8977 +8978,8978,"FirstName8978 MiddleName8978",LastName8978 +8979,8979,"FirstName8979 MiddleName8979",LastName8979 +8980,8980,"FirstName8980 MiddleName8980",LastName8980 +8981,8981,"FirstName8981 MiddleName8981",LastName8981 +8982,8982,"FirstName8982 MiddleName8982",LastName8982 +8983,8983,"FirstName8983 MiddleName8983",LastName8983 +8984,8984,"FirstName8984 MiddleName8984",LastName8984 +8985,8985,"FirstName8985 MiddleName8985",LastName8985 +8986,8986,"FirstName8986 MiddleName8986",LastName8986 +8987,8987,"FirstName8987 MiddleName8987",LastName8987 +8988,8988,"FirstName8988 MiddleName8988",LastName8988 +8989,8989,"FirstName8989 MiddleName8989",LastName8989 +8990,8990,"FirstName8990 MiddleName8990",LastName8990 +8991,8991,"FirstName8991 MiddleName8991",LastName8991 +8992,8992,"FirstName8992 MiddleName8992",LastName8992 
+8993,8993,"FirstName8993 MiddleName8993",LastName8993 +8994,8994,"FirstName8994 MiddleName8994",LastName8994 +8995,8995,"FirstName8995 MiddleName8995",LastName8995 +8996,8996,"FirstName8996 MiddleName8996",LastName8996 +8997,8997,"FirstName8997 MiddleName8997",LastName8997 +8998,8998,"FirstName8998 MiddleName8998",LastName8998 +8999,8999,"FirstName8999 MiddleName8999",LastName8999 +9000,9000,"FirstName9000 MiddleName9000",LastName9000 +9001,9001,"FirstName9001 MiddleName9001",LastName9001 +9002,9002,"FirstName9002 MiddleName9002",LastName9002 +9003,9003,"FirstName9003 MiddleName9003",LastName9003 +9004,9004,"FirstName9004 MiddleName9004",LastName9004 +9005,9005,"FirstName9005 MiddleName9005",LastName9005 +9006,9006,"FirstName9006 MiddleName9006",LastName9006 +9007,9007,"FirstName9007 MiddleName9007",LastName9007 +9008,9008,"FirstName9008 MiddleName9008",LastName9008 +9009,9009,"FirstName9009 MiddleName9009",LastName9009 +9010,9010,"FirstName9010 MiddleName9010",LastName9010 +9011,9011,"FirstName9011 MiddleName9011",LastName9011 +9012,9012,"FirstName9012 MiddleName9012",LastName9012 +9013,9013,"FirstName9013 MiddleName9013",LastName9013 +9014,9014,"FirstName9014 MiddleName9014",LastName9014 +9015,9015,"FirstName9015 MiddleName9015",LastName9015 +9016,9016,"FirstName9016 MiddleName9016",LastName9016 +9017,9017,"FirstName9017 MiddleName9017",LastName9017 +9018,9018,"FirstName9018 MiddleName9018",LastName9018 +9019,9019,"FirstName9019 MiddleName9019",LastName9019 +9020,9020,"FirstName9020 MiddleName9020",LastName9020 +9021,9021,"FirstName9021 MiddleName9021",LastName9021 +9022,9022,"FirstName9022 MiddleName9022",LastName9022 +9023,9023,"FirstName9023 MiddleName9023",LastName9023 +9024,9024,"FirstName9024 MiddleName9024",LastName9024 +9025,9025,"FirstName9025 MiddleName9025",LastName9025 +9026,9026,"FirstName9026 MiddleName9026",LastName9026 +9027,9027,"FirstName9027 MiddleName9027",LastName9027 +9028,9028,"FirstName9028 MiddleName9028",LastName9028 
+9029,9029,"FirstName9029 MiddleName9029",LastName9029 +9030,9030,"FirstName9030 MiddleName9030",LastName9030 +9031,9031,"FirstName9031 MiddleName9031",LastName9031 +9032,9032,"FirstName9032 MiddleName9032",LastName9032 +9033,9033,"FirstName9033 MiddleName9033",LastName9033 +9034,9034,"FirstName9034 MiddleName9034",LastName9034 +9035,9035,"FirstName9035 MiddleName9035",LastName9035 +9036,9036,"FirstName9036 MiddleName9036",LastName9036 +9037,9037,"FirstName9037 MiddleName9037",LastName9037 +9038,9038,"FirstName9038 MiddleName9038",LastName9038 +9039,9039,"FirstName9039 MiddleName9039",LastName9039 +9040,9040,"FirstName9040 MiddleName9040",LastName9040 +9041,9041,"FirstName9041 MiddleName9041",LastName9041 +9042,9042,"FirstName9042 MiddleName9042",LastName9042 +9043,9043,"FirstName9043 MiddleName9043",LastName9043 +9044,9044,"FirstName9044 MiddleName9044",LastName9044 +9045,9045,"FirstName9045 MiddleName9045",LastName9045 +9046,9046,"FirstName9046 MiddleName9046",LastName9046 +9047,9047,"FirstName9047 MiddleName9047",LastName9047 +9048,9048,"FirstName9048 MiddleName9048",LastName9048 +9049,9049,"FirstName9049 MiddleName9049",LastName9049 +9050,9050,"FirstName9050 MiddleName9050",LastName9050 +9051,9051,"FirstName9051 MiddleName9051",LastName9051 +9052,9052,"FirstName9052 MiddleName9052",LastName9052 +9053,9053,"FirstName9053 MiddleName9053",LastName9053 +9054,9054,"FirstName9054 MiddleName9054",LastName9054 +9055,9055,"FirstName9055 MiddleName9055",LastName9055 +9056,9056,"FirstName9056 MiddleName9056",LastName9056 +9057,9057,"FirstName9057 MiddleName9057",LastName9057 +9058,9058,"FirstName9058 MiddleName9058",LastName9058 +9059,9059,"FirstName9059 MiddleName9059",LastName9059 +9060,9060,"FirstName9060 MiddleName9060",LastName9060 +9061,9061,"FirstName9061 MiddleName9061",LastName9061 +9062,9062,"FirstName9062 MiddleName9062",LastName9062 +9063,9063,"FirstName9063 MiddleName9063",LastName9063 +9064,9064,"FirstName9064 MiddleName9064",LastName9064 
+9065,9065,"FirstName9065 MiddleName9065",LastName9065 +9066,9066,"FirstName9066 MiddleName9066",LastName9066 +9067,9067,"FirstName9067 MiddleName9067",LastName9067 +9068,9068,"FirstName9068 MiddleName9068",LastName9068 +9069,9069,"FirstName9069 MiddleName9069",LastName9069 +9070,9070,"FirstName9070 MiddleName9070",LastName9070 +9071,9071,"FirstName9071 MiddleName9071",LastName9071 +9072,9072,"FirstName9072 MiddleName9072",LastName9072 +9073,9073,"FirstName9073 MiddleName9073",LastName9073 +9074,9074,"FirstName9074 MiddleName9074",LastName9074 +9075,9075,"FirstName9075 MiddleName9075",LastName9075 +9076,9076,"FirstName9076 MiddleName9076",LastName9076 +9077,9077,"FirstName9077 MiddleName9077",LastName9077 +9078,9078,"FirstName9078 MiddleName9078",LastName9078 +9079,9079,"FirstName9079 MiddleName9079",LastName9079 +9080,9080,"FirstName9080 MiddleName9080",LastName9080 +9081,9081,"FirstName9081 MiddleName9081",LastName9081 +9082,9082,"FirstName9082 MiddleName9082",LastName9082 +9083,9083,"FirstName9083 MiddleName9083",LastName9083 +9084,9084,"FirstName9084 MiddleName9084",LastName9084 +9085,9085,"FirstName9085 MiddleName9085",LastName9085 +9086,9086,"FirstName9086 MiddleName9086",LastName9086 +9087,9087,"FirstName9087 MiddleName9087",LastName9087 +9088,9088,"FirstName9088 MiddleName9088",LastName9088 +9089,9089,"FirstName9089 MiddleName9089",LastName9089 +9090,9090,"FirstName9090 MiddleName9090",LastName9090 +9091,9091,"FirstName9091 MiddleName9091",LastName9091 +9092,9092,"FirstName9092 MiddleName9092",LastName9092 +9093,9093,"FirstName9093 MiddleName9093",LastName9093 +9094,9094,"FirstName9094 MiddleName9094",LastName9094 +9095,9095,"FirstName9095 MiddleName9095",LastName9095 +9096,9096,"FirstName9096 MiddleName9096",LastName9096 +9097,9097,"FirstName9097 MiddleName9097",LastName9097 +9098,9098,"FirstName9098 MiddleName9098",LastName9098 +9099,9099,"FirstName9099 MiddleName9099",LastName9099 +9100,9100,"FirstName9100 MiddleName9100",LastName9100 
+9101,9101,"FirstName9101 MiddleName9101",LastName9101 +9102,9102,"FirstName9102 MiddleName9102",LastName9102 +9103,9103,"FirstName9103 MiddleName9103",LastName9103 +9104,9104,"FirstName9104 MiddleName9104",LastName9104 +9105,9105,"FirstName9105 MiddleName9105",LastName9105 +9106,9106,"FirstName9106 MiddleName9106",LastName9106 +9107,9107,"FirstName9107 MiddleName9107",LastName9107 +9108,9108,"FirstName9108 MiddleName9108",LastName9108 +9109,9109,"FirstName9109 MiddleName9109",LastName9109 +9110,9110,"FirstName9110 MiddleName9110",LastName9110 +9111,9111,"FirstName9111 MiddleName9111",LastName9111 +9112,9112,"FirstName9112 MiddleName9112",LastName9112 +9113,9113,"FirstName9113 MiddleName9113",LastName9113 +9114,9114,"FirstName9114 MiddleName9114",LastName9114 +9115,9115,"FirstName9115 MiddleName9115",LastName9115 +9116,9116,"FirstName9116 MiddleName9116",LastName9116 +9117,9117,"FirstName9117 MiddleName9117",LastName9117 +9118,9118,"FirstName9118 MiddleName9118",LastName9118 +9119,9119,"FirstName9119 MiddleName9119",LastName9119 +9120,9120,"FirstName9120 MiddleName9120",LastName9120 +9121,9121,"FirstName9121 MiddleName9121",LastName9121 +9122,9122,"FirstName9122 MiddleName9122",LastName9122 +9123,9123,"FirstName9123 MiddleName9123",LastName9123 +9124,9124,"FirstName9124 MiddleName9124",LastName9124 +9125,9125,"FirstName9125 MiddleName9125",LastName9125 +9126,9126,"FirstName9126 MiddleName9126",LastName9126 +9127,9127,"FirstName9127 MiddleName9127",LastName9127 +9128,9128,"FirstName9128 MiddleName9128",LastName9128 +9129,9129,"FirstName9129 MiddleName9129",LastName9129 +9130,9130,"FirstName9130 MiddleName9130",LastName9130 +9131,9131,"FirstName9131 MiddleName9131",LastName9131 +9132,9132,"FirstName9132 MiddleName9132",LastName9132 +9133,9133,"FirstName9133 MiddleName9133",LastName9133 +9134,9134,"FirstName9134 MiddleName9134",LastName9134 +9135,9135,"FirstName9135 MiddleName9135",LastName9135 +9136,9136,"FirstName9136 MiddleName9136",LastName9136 
+9137,9137,"FirstName9137 MiddleName9137",LastName9137 +9138,9138,"FirstName9138 MiddleName9138",LastName9138 +9139,9139,"FirstName9139 MiddleName9139",LastName9139 +9140,9140,"FirstName9140 MiddleName9140",LastName9140 +9141,9141,"FirstName9141 MiddleName9141",LastName9141 +9142,9142,"FirstName9142 MiddleName9142",LastName9142 +9143,9143,"FirstName9143 MiddleName9143",LastName9143 +9144,9144,"FirstName9144 MiddleName9144",LastName9144 +9145,9145,"FirstName9145 MiddleName9145",LastName9145 +9146,9146,"FirstName9146 MiddleName9146",LastName9146 +9147,9147,"FirstName9147 MiddleName9147",LastName9147 +9148,9148,"FirstName9148 MiddleName9148",LastName9148 +9149,9149,"FirstName9149 MiddleName9149",LastName9149 +9150,9150,"FirstName9150 MiddleName9150",LastName9150 +9151,9151,"FirstName9151 MiddleName9151",LastName9151 +9152,9152,"FirstName9152 MiddleName9152",LastName9152 +9153,9153,"FirstName9153 MiddleName9153",LastName9153 +9154,9154,"FirstName9154 MiddleName9154",LastName9154 +9155,9155,"FirstName9155 MiddleName9155",LastName9155 +9156,9156,"FirstName9156 MiddleName9156",LastName9156 +9157,9157,"FirstName9157 MiddleName9157",LastName9157 +9158,9158,"FirstName9158 MiddleName9158",LastName9158 +9159,9159,"FirstName9159 MiddleName9159",LastName9159 +9160,9160,"FirstName9160 MiddleName9160",LastName9160 +9161,9161,"FirstName9161 MiddleName9161",LastName9161 +9162,9162,"FirstName9162 MiddleName9162",LastName9162 +9163,9163,"FirstName9163 MiddleName9163",LastName9163 +9164,9164,"FirstName9164 MiddleName9164",LastName9164 +9165,9165,"FirstName9165 MiddleName9165",LastName9165 +9166,9166,"FirstName9166 MiddleName9166",LastName9166 +9167,9167,"FirstName9167 MiddleName9167",LastName9167 +9168,9168,"FirstName9168 MiddleName9168",LastName9168 +9169,9169,"FirstName9169 MiddleName9169",LastName9169 +9170,9170,"FirstName9170 MiddleName9170",LastName9170 +9171,9171,"FirstName9171 MiddleName9171",LastName9171 +9172,9172,"FirstName9172 MiddleName9172",LastName9172 
+9173,9173,"FirstName9173 MiddleName9173",LastName9173 +9174,9174,"FirstName9174 MiddleName9174",LastName9174 +9175,9175,"FirstName9175 MiddleName9175",LastName9175 +9176,9176,"FirstName9176 MiddleName9176",LastName9176 +9177,9177,"FirstName9177 MiddleName9177",LastName9177 +9178,9178,"FirstName9178 MiddleName9178",LastName9178 +9179,9179,"FirstName9179 MiddleName9179",LastName9179 +9180,9180,"FirstName9180 MiddleName9180",LastName9180 +9181,9181,"FirstName9181 MiddleName9181",LastName9181 +9182,9182,"FirstName9182 MiddleName9182",LastName9182 +9183,9183,"FirstName9183 MiddleName9183",LastName9183 +9184,9184,"FirstName9184 MiddleName9184",LastName9184 +9185,9185,"FirstName9185 MiddleName9185",LastName9185 +9186,9186,"FirstName9186 MiddleName9186",LastName9186 +9187,9187,"FirstName9187 MiddleName9187",LastName9187 +9188,9188,"FirstName9188 MiddleName9188",LastName9188 +9189,9189,"FirstName9189 MiddleName9189",LastName9189 +9190,9190,"FirstName9190 MiddleName9190",LastName9190 +9191,9191,"FirstName9191 MiddleName9191",LastName9191 +9192,9192,"FirstName9192 MiddleName9192",LastName9192 +9193,9193,"FirstName9193 MiddleName9193",LastName9193 +9194,9194,"FirstName9194 MiddleName9194",LastName9194 +9195,9195,"FirstName9195 MiddleName9195",LastName9195 +9196,9196,"FirstName9196 MiddleName9196",LastName9196 +9197,9197,"FirstName9197 MiddleName9197",LastName9197 +9198,9198,"FirstName9198 MiddleName9198",LastName9198 +9199,9199,"FirstName9199 MiddleName9199",LastName9199 +9200,9200,"FirstName9200 MiddleName9200",LastName9200 +9201,9201,"FirstName9201 MiddleName9201",LastName9201 +9202,9202,"FirstName9202 MiddleName9202",LastName9202 +9203,9203,"FirstName9203 MiddleName9203",LastName9203 +9204,9204,"FirstName9204 MiddleName9204",LastName9204 +9205,9205,"FirstName9205 MiddleName9205",LastName9205 +9206,9206,"FirstName9206 MiddleName9206",LastName9206 +9207,9207,"FirstName9207 MiddleName9207",LastName9207 +9208,9208,"FirstName9208 MiddleName9208",LastName9208 
+9209,9209,"FirstName9209 MiddleName9209",LastName9209 +9210,9210,"FirstName9210 MiddleName9210",LastName9210 +9211,9211,"FirstName9211 MiddleName9211",LastName9211 +9212,9212,"FirstName9212 MiddleName9212",LastName9212 +9213,9213,"FirstName9213 MiddleName9213",LastName9213 +9214,9214,"FirstName9214 MiddleName9214",LastName9214 +9215,9215,"FirstName9215 MiddleName9215",LastName9215 +9216,9216,"FirstName9216 MiddleName9216",LastName9216 +9217,9217,"FirstName9217 MiddleName9217",LastName9217 +9218,9218,"FirstName9218 MiddleName9218",LastName9218 +9219,9219,"FirstName9219 MiddleName9219",LastName9219 +9220,9220,"FirstName9220 MiddleName9220",LastName9220 +9221,9221,"FirstName9221 MiddleName9221",LastName9221 +9222,9222,"FirstName9222 MiddleName9222",LastName9222 +9223,9223,"FirstName9223 MiddleName9223",LastName9223 +9224,9224,"FirstName9224 MiddleName9224",LastName9224 +9225,9225,"FirstName9225 MiddleName9225",LastName9225 +9226,9226,"FirstName9226 MiddleName9226",LastName9226 +9227,9227,"FirstName9227 MiddleName9227",LastName9227 +9228,9228,"FirstName9228 MiddleName9228",LastName9228 +9229,9229,"FirstName9229 MiddleName9229",LastName9229 +9230,9230,"FirstName9230 MiddleName9230",LastName9230 +9231,9231,"FirstName9231 MiddleName9231",LastName9231 +9232,9232,"FirstName9232 MiddleName9232",LastName9232 +9233,9233,"FirstName9233 MiddleName9233",LastName9233 +9234,9234,"FirstName9234 MiddleName9234",LastName9234 +9235,9235,"FirstName9235 MiddleName9235",LastName9235 +9236,9236,"FirstName9236 MiddleName9236",LastName9236 +9237,9237,"FirstName9237 MiddleName9237",LastName9237 +9238,9238,"FirstName9238 MiddleName9238",LastName9238 +9239,9239,"FirstName9239 MiddleName9239",LastName9239 +9240,9240,"FirstName9240 MiddleName9240",LastName9240 +9241,9241,"FirstName9241 MiddleName9241",LastName9241 +9242,9242,"FirstName9242 MiddleName9242",LastName9242 +9243,9243,"FirstName9243 MiddleName9243",LastName9243 +9244,9244,"FirstName9244 MiddleName9244",LastName9244 
+9245,9245,"FirstName9245 MiddleName9245",LastName9245 +9246,9246,"FirstName9246 MiddleName9246",LastName9246 +9247,9247,"FirstName9247 MiddleName9247",LastName9247 +9248,9248,"FirstName9248 MiddleName9248",LastName9248 +9249,9249,"FirstName9249 MiddleName9249",LastName9249 +9250,9250,"FirstName9250 MiddleName9250",LastName9250 +9251,9251,"FirstName9251 MiddleName9251",LastName9251 +9252,9252,"FirstName9252 MiddleName9252",LastName9252 +9253,9253,"FirstName9253 MiddleName9253",LastName9253 +9254,9254,"FirstName9254 MiddleName9254",LastName9254 +9255,9255,"FirstName9255 MiddleName9255",LastName9255 +9256,9256,"FirstName9256 MiddleName9256",LastName9256 +9257,9257,"FirstName9257 MiddleName9257",LastName9257 +9258,9258,"FirstName9258 MiddleName9258",LastName9258 +9259,9259,"FirstName9259 MiddleName9259",LastName9259 +9260,9260,"FirstName9260 MiddleName9260",LastName9260 +9261,9261,"FirstName9261 MiddleName9261",LastName9261 +9262,9262,"FirstName9262 MiddleName9262",LastName9262 +9263,9263,"FirstName9263 MiddleName9263",LastName9263 +9264,9264,"FirstName9264 MiddleName9264",LastName9264 +9265,9265,"FirstName9265 MiddleName9265",LastName9265 +9266,9266,"FirstName9266 MiddleName9266",LastName9266 +9267,9267,"FirstName9267 MiddleName9267",LastName9267 +9268,9268,"FirstName9268 MiddleName9268",LastName9268 +9269,9269,"FirstName9269 MiddleName9269",LastName9269 +9270,9270,"FirstName9270 MiddleName9270",LastName9270 +9271,9271,"FirstName9271 MiddleName9271",LastName9271 +9272,9272,"FirstName9272 MiddleName9272",LastName9272 +9273,9273,"FirstName9273 MiddleName9273",LastName9273 +9274,9274,"FirstName9274 MiddleName9274",LastName9274 +9275,9275,"FirstName9275 MiddleName9275",LastName9275 +9276,9276,"FirstName9276 MiddleName9276",LastName9276 +9277,9277,"FirstName9277 MiddleName9277",LastName9277 +9278,9278,"FirstName9278 MiddleName9278",LastName9278 +9279,9279,"FirstName9279 MiddleName9279",LastName9279 +9280,9280,"FirstName9280 MiddleName9280",LastName9280 
+9281,9281,"FirstName9281 MiddleName9281",LastName9281 +9282,9282,"FirstName9282 MiddleName9282",LastName9282 +9283,9283,"FirstName9283 MiddleName9283",LastName9283 +9284,9284,"FirstName9284 MiddleName9284",LastName9284 +9285,9285,"FirstName9285 MiddleName9285",LastName9285 +9286,9286,"FirstName9286 MiddleName9286",LastName9286 +9287,9287,"FirstName9287 MiddleName9287",LastName9287 +9288,9288,"FirstName9288 MiddleName9288",LastName9288 +9289,9289,"FirstName9289 MiddleName9289",LastName9289 +9290,9290,"FirstName9290 MiddleName9290",LastName9290 +9291,9291,"FirstName9291 MiddleName9291",LastName9291 +9292,9292,"FirstName9292 MiddleName9292",LastName9292 +9293,9293,"FirstName9293 MiddleName9293",LastName9293 +9294,9294,"FirstName9294 MiddleName9294",LastName9294 +9295,9295,"FirstName9295 MiddleName9295",LastName9295 +9296,9296,"FirstName9296 MiddleName9296",LastName9296 +9297,9297,"FirstName9297 MiddleName9297",LastName9297 +9298,9298,"FirstName9298 MiddleName9298",LastName9298 +9299,9299,"FirstName9299 MiddleName9299",LastName9299 +9300,9300,"FirstName9300 MiddleName9300",LastName9300 +9301,9301,"FirstName9301 MiddleName9301",LastName9301 +9302,9302,"FirstName9302 MiddleName9302",LastName9302 +9303,9303,"FirstName9303 MiddleName9303",LastName9303 +9304,9304,"FirstName9304 MiddleName9304",LastName9304 +9305,9305,"FirstName9305 MiddleName9305",LastName9305 +9306,9306,"FirstName9306 MiddleName9306",LastName9306 +9307,9307,"FirstName9307 MiddleName9307",LastName9307 +9308,9308,"FirstName9308 MiddleName9308",LastName9308 +9309,9309,"FirstName9309 MiddleName9309",LastName9309 +9310,9310,"FirstName9310 MiddleName9310",LastName9310 +9311,9311,"FirstName9311 MiddleName9311",LastName9311 +9312,9312,"FirstName9312 MiddleName9312",LastName9312 +9313,9313,"FirstName9313 MiddleName9313",LastName9313 +9314,9314,"FirstName9314 MiddleName9314",LastName9314 +9315,9315,"FirstName9315 MiddleName9315",LastName9315 +9316,9316,"FirstName9316 MiddleName9316",LastName9316 
+9317,9317,"FirstName9317 MiddleName9317",LastName9317 +9318,9318,"FirstName9318 MiddleName9318",LastName9318 +9319,9319,"FirstName9319 MiddleName9319",LastName9319 +9320,9320,"FirstName9320 MiddleName9320",LastName9320 +9321,9321,"FirstName9321 MiddleName9321",LastName9321 +9322,9322,"FirstName9322 MiddleName9322",LastName9322 +9323,9323,"FirstName9323 MiddleName9323",LastName9323 +9324,9324,"FirstName9324 MiddleName9324",LastName9324 +9325,9325,"FirstName9325 MiddleName9325",LastName9325 +9326,9326,"FirstName9326 MiddleName9326",LastName9326 +9327,9327,"FirstName9327 MiddleName9327",LastName9327 +9328,9328,"FirstName9328 MiddleName9328",LastName9328 +9329,9329,"FirstName9329 MiddleName9329",LastName9329 +9330,9330,"FirstName9330 MiddleName9330",LastName9330 +9331,9331,"FirstName9331 MiddleName9331",LastName9331 +9332,9332,"FirstName9332 MiddleName9332",LastName9332 +9333,9333,"FirstName9333 MiddleName9333",LastName9333 +9334,9334,"FirstName9334 MiddleName9334",LastName9334 +9335,9335,"FirstName9335 MiddleName9335",LastName9335 +9336,9336,"FirstName9336 MiddleName9336",LastName9336 +9337,9337,"FirstName9337 MiddleName9337",LastName9337 +9338,9338,"FirstName9338 MiddleName9338",LastName9338 +9339,9339,"FirstName9339 MiddleName9339",LastName9339 +9340,9340,"FirstName9340 MiddleName9340",LastName9340 +9341,9341,"FirstName9341 MiddleName9341",LastName9341 +9342,9342,"FirstName9342 MiddleName9342",LastName9342 +9343,9343,"FirstName9343 MiddleName9343",LastName9343 +9344,9344,"FirstName9344 MiddleName9344",LastName9344 +9345,9345,"FirstName9345 MiddleName9345",LastName9345 +9346,9346,"FirstName9346 MiddleName9346",LastName9346 +9347,9347,"FirstName9347 MiddleName9347",LastName9347 +9348,9348,"FirstName9348 MiddleName9348",LastName9348 +9349,9349,"FirstName9349 MiddleName9349",LastName9349 +9350,9350,"FirstName9350 MiddleName9350",LastName9350 +9351,9351,"FirstName9351 MiddleName9351",LastName9351 +9352,9352,"FirstName9352 MiddleName9352",LastName9352 
+9353,9353,"FirstName9353 MiddleName9353",LastName9353 +9354,9354,"FirstName9354 MiddleName9354",LastName9354 +9355,9355,"FirstName9355 MiddleName9355",LastName9355 +9356,9356,"FirstName9356 MiddleName9356",LastName9356 +9357,9357,"FirstName9357 MiddleName9357",LastName9357 +9358,9358,"FirstName9358 MiddleName9358",LastName9358 +9359,9359,"FirstName9359 MiddleName9359",LastName9359 +9360,9360,"FirstName9360 MiddleName9360",LastName9360 +9361,9361,"FirstName9361 MiddleName9361",LastName9361 +9362,9362,"FirstName9362 MiddleName9362",LastName9362 +9363,9363,"FirstName9363 MiddleName9363",LastName9363 +9364,9364,"FirstName9364 MiddleName9364",LastName9364 +9365,9365,"FirstName9365 MiddleName9365",LastName9365 +9366,9366,"FirstName9366 MiddleName9366",LastName9366 +9367,9367,"FirstName9367 MiddleName9367",LastName9367 +9368,9368,"FirstName9368 MiddleName9368",LastName9368 +9369,9369,"FirstName9369 MiddleName9369",LastName9369 +9370,9370,"FirstName9370 MiddleName9370",LastName9370 +9371,9371,"FirstName9371 MiddleName9371",LastName9371 +9372,9372,"FirstName9372 MiddleName9372",LastName9372 +9373,9373,"FirstName9373 MiddleName9373",LastName9373 +9374,9374,"FirstName9374 MiddleName9374",LastName9374 +9375,9375,"FirstName9375 MiddleName9375",LastName9375 +9376,9376,"FirstName9376 MiddleName9376",LastName9376 +9377,9377,"FirstName9377 MiddleName9377",LastName9377 +9378,9378,"FirstName9378 MiddleName9378",LastName9378 +9379,9379,"FirstName9379 MiddleName9379",LastName9379 +9380,9380,"FirstName9380 MiddleName9380",LastName9380 +9381,9381,"FirstName9381 MiddleName9381",LastName9381 +9382,9382,"FirstName9382 MiddleName9382",LastName9382 +9383,9383,"FirstName9383 MiddleName9383",LastName9383 +9384,9384,"FirstName9384 MiddleName9384",LastName9384 +9385,9385,"FirstName9385 MiddleName9385",LastName9385 +9386,9386,"FirstName9386 MiddleName9386",LastName9386 +9387,9387,"FirstName9387 MiddleName9387",LastName9387 +9388,9388,"FirstName9388 MiddleName9388",LastName9388 
+9389,9389,"FirstName9389 MiddleName9389",LastName9389 +9390,9390,"FirstName9390 MiddleName9390",LastName9390 +9391,9391,"FirstName9391 MiddleName9391",LastName9391 +9392,9392,"FirstName9392 MiddleName9392",LastName9392 +9393,9393,"FirstName9393 MiddleName9393",LastName9393 +9394,9394,"FirstName9394 MiddleName9394",LastName9394 +9395,9395,"FirstName9395 MiddleName9395",LastName9395 +9396,9396,"FirstName9396 MiddleName9396",LastName9396 +9397,9397,"FirstName9397 MiddleName9397",LastName9397 +9398,9398,"FirstName9398 MiddleName9398",LastName9398 +9399,9399,"FirstName9399 MiddleName9399",LastName9399 +9400,9400,"FirstName9400 MiddleName9400",LastName9400 +9401,9401,"FirstName9401 MiddleName9401",LastName9401 +9402,9402,"FirstName9402 MiddleName9402",LastName9402 +9403,9403,"FirstName9403 MiddleName9403",LastName9403 +9404,9404,"FirstName9404 MiddleName9404",LastName9404 +9405,9405,"FirstName9405 MiddleName9405",LastName9405 +9406,9406,"FirstName9406 MiddleName9406",LastName9406 +9407,9407,"FirstName9407 MiddleName9407",LastName9407 +9408,9408,"FirstName9408 MiddleName9408",LastName9408 +9409,9409,"FirstName9409 MiddleName9409",LastName9409 +9410,9410,"FirstName9410 MiddleName9410",LastName9410 +9411,9411,"FirstName9411 MiddleName9411",LastName9411 +9412,9412,"FirstName9412 MiddleName9412",LastName9412 +9413,9413,"FirstName9413 MiddleName9413",LastName9413 +9414,9414,"FirstName9414 MiddleName9414",LastName9414 +9415,9415,"FirstName9415 MiddleName9415",LastName9415 +9416,9416,"FirstName9416 MiddleName9416",LastName9416 +9417,9417,"FirstName9417 MiddleName9417",LastName9417 +9418,9418,"FirstName9418 MiddleName9418",LastName9418 +9419,9419,"FirstName9419 MiddleName9419",LastName9419 +9420,9420,"FirstName9420 MiddleName9420",LastName9420 +9421,9421,"FirstName9421 MiddleName9421",LastName9421 +9422,9422,"FirstName9422 MiddleName9422",LastName9422 +9423,9423,"FirstName9423 MiddleName9423",LastName9423 +9424,9424,"FirstName9424 MiddleName9424",LastName9424 
+9425,9425,"FirstName9425 MiddleName9425",LastName9425 +9426,9426,"FirstName9426 MiddleName9426",LastName9426 +9427,9427,"FirstName9427 MiddleName9427",LastName9427 +9428,9428,"FirstName9428 MiddleName9428",LastName9428 +9429,9429,"FirstName9429 MiddleName9429",LastName9429 +9430,9430,"FirstName9430 MiddleName9430",LastName9430 +9431,9431,"FirstName9431 MiddleName9431",LastName9431 +9432,9432,"FirstName9432 MiddleName9432",LastName9432 +9433,9433,"FirstName9433 MiddleName9433",LastName9433 +9434,9434,"FirstName9434 MiddleName9434",LastName9434 +9435,9435,"FirstName9435 MiddleName9435",LastName9435 +9436,9436,"FirstName9436 MiddleName9436",LastName9436 +9437,9437,"FirstName9437 MiddleName9437",LastName9437 +9438,9438,"FirstName9438 MiddleName9438",LastName9438 +9439,9439,"FirstName9439 MiddleName9439",LastName9439 +9440,9440,"FirstName9440 MiddleName9440",LastName9440 +9441,9441,"FirstName9441 MiddleName9441",LastName9441 +9442,9442,"FirstName9442 MiddleName9442",LastName9442 +9443,9443,"FirstName9443 MiddleName9443",LastName9443 +9444,9444,"FirstName9444 MiddleName9444",LastName9444 +9445,9445,"FirstName9445 MiddleName9445",LastName9445 +9446,9446,"FirstName9446 MiddleName9446",LastName9446 +9447,9447,"FirstName9447 MiddleName9447",LastName9447 +9448,9448,"FirstName9448 MiddleName9448",LastName9448 +9449,9449,"FirstName9449 MiddleName9449",LastName9449 +9450,9450,"FirstName9450 MiddleName9450",LastName9450 +9451,9451,"FirstName9451 MiddleName9451",LastName9451 +9452,9452,"FirstName9452 MiddleName9452",LastName9452 +9453,9453,"FirstName9453 MiddleName9453",LastName9453 +9454,9454,"FirstName9454 MiddleName9454",LastName9454 +9455,9455,"FirstName9455 MiddleName9455",LastName9455 +9456,9456,"FirstName9456 MiddleName9456",LastName9456 +9457,9457,"FirstName9457 MiddleName9457",LastName9457 +9458,9458,"FirstName9458 MiddleName9458",LastName9458 +9459,9459,"FirstName9459 MiddleName9459",LastName9459 +9460,9460,"FirstName9460 MiddleName9460",LastName9460 
+9461,9461,"FirstName9461 MiddleName9461",LastName9461 +9462,9462,"FirstName9462 MiddleName9462",LastName9462 +9463,9463,"FirstName9463 MiddleName9463",LastName9463 +9464,9464,"FirstName9464 MiddleName9464",LastName9464 +9465,9465,"FirstName9465 MiddleName9465",LastName9465 +9466,9466,"FirstName9466 MiddleName9466",LastName9466 +9467,9467,"FirstName9467 MiddleName9467",LastName9467 +9468,9468,"FirstName9468 MiddleName9468",LastName9468 +9469,9469,"FirstName9469 MiddleName9469",LastName9469 +9470,9470,"FirstName9470 MiddleName9470",LastName9470 +9471,9471,"FirstName9471 MiddleName9471",LastName9471 +9472,9472,"FirstName9472 MiddleName9472",LastName9472 +9473,9473,"FirstName9473 MiddleName9473",LastName9473 +9474,9474,"FirstName9474 MiddleName9474",LastName9474 +9475,9475,"FirstName9475 MiddleName9475",LastName9475 +9476,9476,"FirstName9476 MiddleName9476",LastName9476 +9477,9477,"FirstName9477 MiddleName9477",LastName9477 +9478,9478,"FirstName9478 MiddleName9478",LastName9478 +9479,9479,"FirstName9479 MiddleName9479",LastName9479 +9480,9480,"FirstName9480 MiddleName9480",LastName9480 +9481,9481,"FirstName9481 MiddleName9481",LastName9481 +9482,9482,"FirstName9482 MiddleName9482",LastName9482 +9483,9483,"FirstName9483 MiddleName9483",LastName9483 +9484,9484,"FirstName9484 MiddleName9484",LastName9484 +9485,9485,"FirstName9485 MiddleName9485",LastName9485 +9486,9486,"FirstName9486 MiddleName9486",LastName9486 +9487,9487,"FirstName9487 MiddleName9487",LastName9487 +9488,9488,"FirstName9488 MiddleName9488",LastName9488 +9489,9489,"FirstName9489 MiddleName9489",LastName9489 +9490,9490,"FirstName9490 MiddleName9490",LastName9490 +9491,9491,"FirstName9491 MiddleName9491",LastName9491 +9492,9492,"FirstName9492 MiddleName9492",LastName9492 +9493,9493,"FirstName9493 MiddleName9493",LastName9493 +9494,9494,"FirstName9494 MiddleName9494",LastName9494 +9495,9495,"FirstName9495 MiddleName9495",LastName9495 +9496,9496,"FirstName9496 MiddleName9496",LastName9496 
+9497,9497,"FirstName9497 MiddleName9497",LastName9497 +9498,9498,"FirstName9498 MiddleName9498",LastName9498 +9499,9499,"FirstName9499 MiddleName9499",LastName9499 +9500,9500,"FirstName9500 MiddleName9500",LastName9500 +9501,9501,"FirstName9501 MiddleName9501",LastName9501 +9502,9502,"FirstName9502 MiddleName9502",LastName9502 +9503,9503,"FirstName9503 MiddleName9503",LastName9503 +9504,9504,"FirstName9504 MiddleName9504",LastName9504 +9505,9505,"FirstName9505 MiddleName9505",LastName9505 +9506,9506,"FirstName9506 MiddleName9506",LastName9506 +9507,9507,"FirstName9507 MiddleName9507",LastName9507 +9508,9508,"FirstName9508 MiddleName9508",LastName9508 +9509,9509,"FirstName9509 MiddleName9509",LastName9509 +9510,9510,"FirstName9510 MiddleName9510",LastName9510 +9511,9511,"FirstName9511 MiddleName9511",LastName9511 +9512,9512,"FirstName9512 MiddleName9512",LastName9512 +9513,9513,"FirstName9513 MiddleName9513",LastName9513 +9514,9514,"FirstName9514 MiddleName9514",LastName9514 +9515,9515,"FirstName9515 MiddleName9515",LastName9515 +9516,9516,"FirstName9516 MiddleName9516",LastName9516 +9517,9517,"FirstName9517 MiddleName9517",LastName9517 +9518,9518,"FirstName9518 MiddleName9518",LastName9518 +9519,9519,"FirstName9519 MiddleName9519",LastName9519 +9520,9520,"FirstName9520 MiddleName9520",LastName9520 +9521,9521,"FirstName9521 MiddleName9521",LastName9521 +9522,9522,"FirstName9522 MiddleName9522",LastName9522 +9523,9523,"FirstName9523 MiddleName9523",LastName9523 +9524,9524,"FirstName9524 MiddleName9524",LastName9524 +9525,9525,"FirstName9525 MiddleName9525",LastName9525 +9526,9526,"FirstName9526 MiddleName9526",LastName9526 +9527,9527,"FirstName9527 MiddleName9527",LastName9527 +9528,9528,"FirstName9528 MiddleName9528",LastName9528 +9529,9529,"FirstName9529 MiddleName9529",LastName9529 +9530,9530,"FirstName9530 MiddleName9530",LastName9530 +9531,9531,"FirstName9531 MiddleName9531",LastName9531 +9532,9532,"FirstName9532 MiddleName9532",LastName9532 
+9533,9533,"FirstName9533 MiddleName9533",LastName9533 +9534,9534,"FirstName9534 MiddleName9534",LastName9534 +9535,9535,"FirstName9535 MiddleName9535",LastName9535 +9536,9536,"FirstName9536 MiddleName9536",LastName9536 +9537,9537,"FirstName9537 MiddleName9537",LastName9537 +9538,9538,"FirstName9538 MiddleName9538",LastName9538 +9539,9539,"FirstName9539 MiddleName9539",LastName9539 +9540,9540,"FirstName9540 MiddleName9540",LastName9540 +9541,9541,"FirstName9541 MiddleName9541",LastName9541 +9542,9542,"FirstName9542 MiddleName9542",LastName9542 +9543,9543,"FirstName9543 MiddleName9543",LastName9543 +9544,9544,"FirstName9544 MiddleName9544",LastName9544 +9545,9545,"FirstName9545 MiddleName9545",LastName9545 +9546,9546,"FirstName9546 MiddleName9546",LastName9546 +9547,9547,"FirstName9547 MiddleName9547",LastName9547 +9548,9548,"FirstName9548 MiddleName9548",LastName9548 +9549,9549,"FirstName9549 MiddleName9549",LastName9549 +9550,9550,"FirstName9550 MiddleName9550",LastName9550 +9551,9551,"FirstName9551 MiddleName9551",LastName9551 +9552,9552,"FirstName9552 MiddleName9552",LastName9552 +9553,9553,"FirstName9553 MiddleName9553",LastName9553 +9554,9554,"FirstName9554 MiddleName9554",LastName9554 +9555,9555,"FirstName9555 MiddleName9555",LastName9555 +9556,9556,"FirstName9556 MiddleName9556",LastName9556 +9557,9557,"FirstName9557 MiddleName9557",LastName9557 +9558,9558,"FirstName9558 MiddleName9558",LastName9558 +9559,9559,"FirstName9559 MiddleName9559",LastName9559 +9560,9560,"FirstName9560 MiddleName9560",LastName9560 +9561,9561,"FirstName9561 MiddleName9561",LastName9561 +9562,9562,"FirstName9562 MiddleName9562",LastName9562 +9563,9563,"FirstName9563 MiddleName9563",LastName9563 +9564,9564,"FirstName9564 MiddleName9564",LastName9564 +9565,9565,"FirstName9565 MiddleName9565",LastName9565 +9566,9566,"FirstName9566 MiddleName9566",LastName9566 +9567,9567,"FirstName9567 MiddleName9567",LastName9567 +9568,9568,"FirstName9568 MiddleName9568",LastName9568 
+9569,9569,"FirstName9569 MiddleName9569",LastName9569 +9570,9570,"FirstName9570 MiddleName9570",LastName9570 +9571,9571,"FirstName9571 MiddleName9571",LastName9571 +9572,9572,"FirstName9572 MiddleName9572",LastName9572 +9573,9573,"FirstName9573 MiddleName9573",LastName9573 +9574,9574,"FirstName9574 MiddleName9574",LastName9574 +9575,9575,"FirstName9575 MiddleName9575",LastName9575 +9576,9576,"FirstName9576 MiddleName9576",LastName9576 +9577,9577,"FirstName9577 MiddleName9577",LastName9577 +9578,9578,"FirstName9578 MiddleName9578",LastName9578 +9579,9579,"FirstName9579 MiddleName9579",LastName9579 +9580,9580,"FirstName9580 MiddleName9580",LastName9580 +9581,9581,"FirstName9581 MiddleName9581",LastName9581 +9582,9582,"FirstName9582 MiddleName9582",LastName9582 +9583,9583,"FirstName9583 MiddleName9583",LastName9583 +9584,9584,"FirstName9584 MiddleName9584",LastName9584 +9585,9585,"FirstName9585 MiddleName9585",LastName9585 +9586,9586,"FirstName9586 MiddleName9586",LastName9586 +9587,9587,"FirstName9587 MiddleName9587",LastName9587 +9588,9588,"FirstName9588 MiddleName9588",LastName9588 +9589,9589,"FirstName9589 MiddleName9589",LastName9589 +9590,9590,"FirstName9590 MiddleName9590",LastName9590 +9591,9591,"FirstName9591 MiddleName9591",LastName9591 +9592,9592,"FirstName9592 MiddleName9592",LastName9592 +9593,9593,"FirstName9593 MiddleName9593",LastName9593 +9594,9594,"FirstName9594 MiddleName9594",LastName9594 +9595,9595,"FirstName9595 MiddleName9595",LastName9595 +9596,9596,"FirstName9596 MiddleName9596",LastName9596 +9597,9597,"FirstName9597 MiddleName9597",LastName9597 +9598,9598,"FirstName9598 MiddleName9598",LastName9598 +9599,9599,"FirstName9599 MiddleName9599",LastName9599 +9600,9600,"FirstName9600 MiddleName9600",LastName9600 +9601,9601,"FirstName9601 MiddleName9601",LastName9601 +9602,9602,"FirstName9602 MiddleName9602",LastName9602 +9603,9603,"FirstName9603 MiddleName9603",LastName9603 +9604,9604,"FirstName9604 MiddleName9604",LastName9604 
+9605,9605,"FirstName9605 MiddleName9605",LastName9605 +9606,9606,"FirstName9606 MiddleName9606",LastName9606 +9607,9607,"FirstName9607 MiddleName9607",LastName9607 +9608,9608,"FirstName9608 MiddleName9608",LastName9608 +9609,9609,"FirstName9609 MiddleName9609",LastName9609 +9610,9610,"FirstName9610 MiddleName9610",LastName9610 +9611,9611,"FirstName9611 MiddleName9611",LastName9611 +9612,9612,"FirstName9612 MiddleName9612",LastName9612 +9613,9613,"FirstName9613 MiddleName9613",LastName9613 +9614,9614,"FirstName9614 MiddleName9614",LastName9614 +9615,9615,"FirstName9615 MiddleName9615",LastName9615 +9616,9616,"FirstName9616 MiddleName9616",LastName9616 +9617,9617,"FirstName9617 MiddleName9617",LastName9617 +9618,9618,"FirstName9618 MiddleName9618",LastName9618 +9619,9619,"FirstName9619 MiddleName9619",LastName9619 +9620,9620,"FirstName9620 MiddleName9620",LastName9620 +9621,9621,"FirstName9621 MiddleName9621",LastName9621 +9622,9622,"FirstName9622 MiddleName9622",LastName9622 +9623,9623,"FirstName9623 MiddleName9623",LastName9623 +9624,9624,"FirstName9624 MiddleName9624",LastName9624 +9625,9625,"FirstName9625 MiddleName9625",LastName9625 +9626,9626,"FirstName9626 MiddleName9626",LastName9626 +9627,9627,"FirstName9627 MiddleName9627",LastName9627 +9628,9628,"FirstName9628 MiddleName9628",LastName9628 +9629,9629,"FirstName9629 MiddleName9629",LastName9629 +9630,9630,"FirstName9630 MiddleName9630",LastName9630 +9631,9631,"FirstName9631 MiddleName9631",LastName9631 +9632,9632,"FirstName9632 MiddleName9632",LastName9632 +9633,9633,"FirstName9633 MiddleName9633",LastName9633 +9634,9634,"FirstName9634 MiddleName9634",LastName9634 +9635,9635,"FirstName9635 MiddleName9635",LastName9635 +9636,9636,"FirstName9636 MiddleName9636",LastName9636 +9637,9637,"FirstName9637 MiddleName9637",LastName9637 +9638,9638,"FirstName9638 MiddleName9638",LastName9638 +9639,9639,"FirstName9639 MiddleName9639",LastName9639 +9640,9640,"FirstName9640 MiddleName9640",LastName9640 
+9641,9641,"FirstName9641 MiddleName9641",LastName9641 +9642,9642,"FirstName9642 MiddleName9642",LastName9642 +9643,9643,"FirstName9643 MiddleName9643",LastName9643 +9644,9644,"FirstName9644 MiddleName9644",LastName9644 +9645,9645,"FirstName9645 MiddleName9645",LastName9645 +9646,9646,"FirstName9646 MiddleName9646",LastName9646 +9647,9647,"FirstName9647 MiddleName9647",LastName9647 +9648,9648,"FirstName9648 MiddleName9648",LastName9648 +9649,9649,"FirstName9649 MiddleName9649",LastName9649 +9650,9650,"FirstName9650 MiddleName9650",LastName9650 +9651,9651,"FirstName9651 MiddleName9651",LastName9651 +9652,9652,"FirstName9652 MiddleName9652",LastName9652 +9653,9653,"FirstName9653 MiddleName9653",LastName9653 +9654,9654,"FirstName9654 MiddleName9654",LastName9654 +9655,9655,"FirstName9655 MiddleName9655",LastName9655 +9656,9656,"FirstName9656 MiddleName9656",LastName9656 +9657,9657,"FirstName9657 MiddleName9657",LastName9657 +9658,9658,"FirstName9658 MiddleName9658",LastName9658 +9659,9659,"FirstName9659 MiddleName9659",LastName9659 +9660,9660,"FirstName9660 MiddleName9660",LastName9660 +9661,9661,"FirstName9661 MiddleName9661",LastName9661 +9662,9662,"FirstName9662 MiddleName9662",LastName9662 +9663,9663,"FirstName9663 MiddleName9663",LastName9663 +9664,9664,"FirstName9664 MiddleName9664",LastName9664 +9665,9665,"FirstName9665 MiddleName9665",LastName9665 +9666,9666,"FirstName9666 MiddleName9666",LastName9666 +9667,9667,"FirstName9667 MiddleName9667",LastName9667 +9668,9668,"FirstName9668 MiddleName9668",LastName9668 +9669,9669,"FirstName9669 MiddleName9669",LastName9669 +9670,9670,"FirstName9670 MiddleName9670",LastName9670 +9671,9671,"FirstName9671 MiddleName9671",LastName9671 +9672,9672,"FirstName9672 MiddleName9672",LastName9672 +9673,9673,"FirstName9673 MiddleName9673",LastName9673 +9674,9674,"FirstName9674 MiddleName9674",LastName9674 +9675,9675,"FirstName9675 MiddleName9675",LastName9675 +9676,9676,"FirstName9676 MiddleName9676",LastName9676 
+9677,9677,"FirstName9677 MiddleName9677",LastName9677 +9678,9678,"FirstName9678 MiddleName9678",LastName9678 +9679,9679,"FirstName9679 MiddleName9679",LastName9679 +9680,9680,"FirstName9680 MiddleName9680",LastName9680 +9681,9681,"FirstName9681 MiddleName9681",LastName9681 +9682,9682,"FirstName9682 MiddleName9682",LastName9682 +9683,9683,"FirstName9683 MiddleName9683",LastName9683 +9684,9684,"FirstName9684 MiddleName9684",LastName9684 +9685,9685,"FirstName9685 MiddleName9685",LastName9685 +9686,9686,"FirstName9686 MiddleName9686",LastName9686 +9687,9687,"FirstName9687 MiddleName9687",LastName9687 +9688,9688,"FirstName9688 MiddleName9688",LastName9688 +9689,9689,"FirstName9689 MiddleName9689",LastName9689 +9690,9690,"FirstName9690 MiddleName9690",LastName9690 +9691,9691,"FirstName9691 MiddleName9691",LastName9691 +9692,9692,"FirstName9692 MiddleName9692",LastName9692 +9693,9693,"FirstName9693 MiddleName9693",LastName9693 +9694,9694,"FirstName9694 MiddleName9694",LastName9694 +9695,9695,"FirstName9695 MiddleName9695",LastName9695 +9696,9696,"FirstName9696 MiddleName9696",LastName9696 +9697,9697,"FirstName9697 MiddleName9697",LastName9697 +9698,9698,"FirstName9698 MiddleName9698",LastName9698 +9699,9699,"FirstName9699 MiddleName9699",LastName9699 +9700,9700,"FirstName9700 MiddleName9700",LastName9700 +9701,9701,"FirstName9701 MiddleName9701",LastName9701 +9702,9702,"FirstName9702 MiddleName9702",LastName9702 +9703,9703,"FirstName9703 MiddleName9703",LastName9703 +9704,9704,"FirstName9704 MiddleName9704",LastName9704 +9705,9705,"FirstName9705 MiddleName9705",LastName9705 +9706,9706,"FirstName9706 MiddleName9706",LastName9706 +9707,9707,"FirstName9707 MiddleName9707",LastName9707 +9708,9708,"FirstName9708 MiddleName9708",LastName9708 +9709,9709,"FirstName9709 MiddleName9709",LastName9709 +9710,9710,"FirstName9710 MiddleName9710",LastName9710 +9711,9711,"FirstName9711 MiddleName9711",LastName9711 +9712,9712,"FirstName9712 MiddleName9712",LastName9712 
+9713,9713,"FirstName9713 MiddleName9713",LastName9713 +9714,9714,"FirstName9714 MiddleName9714",LastName9714 +9715,9715,"FirstName9715 MiddleName9715",LastName9715 +9716,9716,"FirstName9716 MiddleName9716",LastName9716 +9717,9717,"FirstName9717 MiddleName9717",LastName9717 +9718,9718,"FirstName9718 MiddleName9718",LastName9718 +9719,9719,"FirstName9719 MiddleName9719",LastName9719 +9720,9720,"FirstName9720 MiddleName9720",LastName9720 +9721,9721,"FirstName9721 MiddleName9721",LastName9721 +9722,9722,"FirstName9722 MiddleName9722",LastName9722 +9723,9723,"FirstName9723 MiddleName9723",LastName9723 +9724,9724,"FirstName9724 MiddleName9724",LastName9724 +9725,9725,"FirstName9725 MiddleName9725",LastName9725 +9726,9726,"FirstName9726 MiddleName9726",LastName9726 +9727,9727,"FirstName9727 MiddleName9727",LastName9727 +9728,9728,"FirstName9728 MiddleName9728",LastName9728 +9729,9729,"FirstName9729 MiddleName9729",LastName9729 +9730,9730,"FirstName9730 MiddleName9730",LastName9730 +9731,9731,"FirstName9731 MiddleName9731",LastName9731 +9732,9732,"FirstName9732 MiddleName9732",LastName9732 +9733,9733,"FirstName9733 MiddleName9733",LastName9733 +9734,9734,"FirstName9734 MiddleName9734",LastName9734 +9735,9735,"FirstName9735 MiddleName9735",LastName9735 +9736,9736,"FirstName9736 MiddleName9736",LastName9736 +9737,9737,"FirstName9737 MiddleName9737",LastName9737 +9738,9738,"FirstName9738 MiddleName9738",LastName9738 +9739,9739,"FirstName9739 MiddleName9739",LastName9739 +9740,9740,"FirstName9740 MiddleName9740",LastName9740 +9741,9741,"FirstName9741 MiddleName9741",LastName9741 +9742,9742,"FirstName9742 MiddleName9742",LastName9742 +9743,9743,"FirstName9743 MiddleName9743",LastName9743 +9744,9744,"FirstName9744 MiddleName9744",LastName9744 +9745,9745,"FirstName9745 MiddleName9745",LastName9745 +9746,9746,"FirstName9746 MiddleName9746",LastName9746 +9747,9747,"FirstName9747 MiddleName9747",LastName9747 +9748,9748,"FirstName9748 MiddleName9748",LastName9748 
+9749,9749,"FirstName9749 MiddleName9749",LastName9749 +9750,9750,"FirstName9750 MiddleName9750",LastName9750 +9751,9751,"FirstName9751 MiddleName9751",LastName9751 +9752,9752,"FirstName9752 MiddleName9752",LastName9752 +9753,9753,"FirstName9753 MiddleName9753",LastName9753 +9754,9754,"FirstName9754 MiddleName9754",LastName9754 +9755,9755,"FirstName9755 MiddleName9755",LastName9755 +9756,9756,"FirstName9756 MiddleName9756",LastName9756 +9757,9757,"FirstName9757 MiddleName9757",LastName9757 +9758,9758,"FirstName9758 MiddleName9758",LastName9758 +9759,9759,"FirstName9759 MiddleName9759",LastName9759 +9760,9760,"FirstName9760 MiddleName9760",LastName9760 +9761,9761,"FirstName9761 MiddleName9761",LastName9761 +9762,9762,"FirstName9762 MiddleName9762",LastName9762 +9763,9763,"FirstName9763 MiddleName9763",LastName9763 +9764,9764,"FirstName9764 MiddleName9764",LastName9764 +9765,9765,"FirstName9765 MiddleName9765",LastName9765 +9766,9766,"FirstName9766 MiddleName9766",LastName9766 +9767,9767,"FirstName9767 MiddleName9767",LastName9767 +9768,9768,"FirstName9768 MiddleName9768",LastName9768 +9769,9769,"FirstName9769 MiddleName9769",LastName9769 +9770,9770,"FirstName9770 MiddleName9770",LastName9770 +9771,9771,"FirstName9771 MiddleName9771",LastName9771 +9772,9772,"FirstName9772 MiddleName9772",LastName9772 +9773,9773,"FirstName9773 MiddleName9773",LastName9773 +9774,9774,"FirstName9774 MiddleName9774",LastName9774 +9775,9775,"FirstName9775 MiddleName9775",LastName9775 +9776,9776,"FirstName9776 MiddleName9776",LastName9776 +9777,9777,"FirstName9777 MiddleName9777",LastName9777 +9778,9778,"FirstName9778 MiddleName9778",LastName9778 +9779,9779,"FirstName9779 MiddleName9779",LastName9779 +9780,9780,"FirstName9780 MiddleName9780",LastName9780 +9781,9781,"FirstName9781 MiddleName9781",LastName9781 +9782,9782,"FirstName9782 MiddleName9782",LastName9782 +9783,9783,"FirstName9783 MiddleName9783",LastName9783 +9784,9784,"FirstName9784 MiddleName9784",LastName9784 
+9785,9785,"FirstName9785 MiddleName9785",LastName9785 +9786,9786,"FirstName9786 MiddleName9786",LastName9786 +9787,9787,"FirstName9787 MiddleName9787",LastName9787 +9788,9788,"FirstName9788 MiddleName9788",LastName9788 +9789,9789,"FirstName9789 MiddleName9789",LastName9789 +9790,9790,"FirstName9790 MiddleName9790",LastName9790 +9791,9791,"FirstName9791 MiddleName9791",LastName9791 +9792,9792,"FirstName9792 MiddleName9792",LastName9792 +9793,9793,"FirstName9793 MiddleName9793",LastName9793 +9794,9794,"FirstName9794 MiddleName9794",LastName9794 +9795,9795,"FirstName9795 MiddleName9795",LastName9795 +9796,9796,"FirstName9796 MiddleName9796",LastName9796 +9797,9797,"FirstName9797 MiddleName9797",LastName9797 +9798,9798,"FirstName9798 MiddleName9798",LastName9798 +9799,9799,"FirstName9799 MiddleName9799",LastName9799 +9800,9800,"FirstName9800 MiddleName9800",LastName9800 +9801,9801,"FirstName9801 MiddleName9801",LastName9801 +9802,9802,"FirstName9802 MiddleName9802",LastName9802 +9803,9803,"FirstName9803 MiddleName9803",LastName9803 +9804,9804,"FirstName9804 MiddleName9804",LastName9804 +9805,9805,"FirstName9805 MiddleName9805",LastName9805 +9806,9806,"FirstName9806 MiddleName9806",LastName9806 +9807,9807,"FirstName9807 MiddleName9807",LastName9807 +9808,9808,"FirstName9808 MiddleName9808",LastName9808 +9809,9809,"FirstName9809 MiddleName9809",LastName9809 +9810,9810,"FirstName9810 MiddleName9810",LastName9810 +9811,9811,"FirstName9811 MiddleName9811",LastName9811 +9812,9812,"FirstName9812 MiddleName9812",LastName9812 +9813,9813,"FirstName9813 MiddleName9813",LastName9813 +9814,9814,"FirstName9814 MiddleName9814",LastName9814 +9815,9815,"FirstName9815 MiddleName9815",LastName9815 +9816,9816,"FirstName9816 MiddleName9816",LastName9816 +9817,9817,"FirstName9817 MiddleName9817",LastName9817 +9818,9818,"FirstName9818 MiddleName9818",LastName9818 +9819,9819,"FirstName9819 MiddleName9819",LastName9819 +9820,9820,"FirstName9820 MiddleName9820",LastName9820 
+9821,9821,"FirstName9821 MiddleName9821",LastName9821 +9822,9822,"FirstName9822 MiddleName9822",LastName9822 +9823,9823,"FirstName9823 MiddleName9823",LastName9823 +9824,9824,"FirstName9824 MiddleName9824",LastName9824 +9825,9825,"FirstName9825 MiddleName9825",LastName9825 +9826,9826,"FirstName9826 MiddleName9826",LastName9826 +9827,9827,"FirstName9827 MiddleName9827",LastName9827 +9828,9828,"FirstName9828 MiddleName9828",LastName9828 +9829,9829,"FirstName9829 MiddleName9829",LastName9829 +9830,9830,"FirstName9830 MiddleName9830",LastName9830 +9831,9831,"FirstName9831 MiddleName9831",LastName9831 +9832,9832,"FirstName9832 MiddleName9832",LastName9832 +9833,9833,"FirstName9833 MiddleName9833",LastName9833 +9834,9834,"FirstName9834 MiddleName9834",LastName9834 +9835,9835,"FirstName9835 MiddleName9835",LastName9835 +9836,9836,"FirstName9836 MiddleName9836",LastName9836 +9837,9837,"FirstName9837 MiddleName9837",LastName9837 +9838,9838,"FirstName9838 MiddleName9838",LastName9838 +9839,9839,"FirstName9839 MiddleName9839",LastName9839 +9840,9840,"FirstName9840 MiddleName9840",LastName9840 +9841,9841,"FirstName9841 MiddleName9841",LastName9841 +9842,9842,"FirstName9842 MiddleName9842",LastName9842 +9843,9843,"FirstName9843 MiddleName9843",LastName9843 +9844,9844,"FirstName9844 MiddleName9844",LastName9844 +9845,9845,"FirstName9845 MiddleName9845",LastName9845 +9846,9846,"FirstName9846 MiddleName9846",LastName9846 +9847,9847,"FirstName9847 MiddleName9847",LastName9847 +9848,9848,"FirstName9848 MiddleName9848",LastName9848 +9849,9849,"FirstName9849 MiddleName9849",LastName9849 +9850,9850,"FirstName9850 MiddleName9850",LastName9850 +9851,9851,"FirstName9851 MiddleName9851",LastName9851 +9852,9852,"FirstName9852 MiddleName9852",LastName9852 +9853,9853,"FirstName9853 MiddleName9853",LastName9853 +9854,9854,"FirstName9854 MiddleName9854",LastName9854 +9855,9855,"FirstName9855 MiddleName9855",LastName9855 +9856,9856,"FirstName9856 MiddleName9856",LastName9856 
+9857,9857,"FirstName9857 MiddleName9857",LastName9857 +9858,9858,"FirstName9858 MiddleName9858",LastName9858 +9859,9859,"FirstName9859 MiddleName9859",LastName9859 +9860,9860,"FirstName9860 MiddleName9860",LastName9860 +9861,9861,"FirstName9861 MiddleName9861",LastName9861 +9862,9862,"FirstName9862 MiddleName9862",LastName9862 +9863,9863,"FirstName9863 MiddleName9863",LastName9863 +9864,9864,"FirstName9864 MiddleName9864",LastName9864 +9865,9865,"FirstName9865 MiddleName9865",LastName9865 +9866,9866,"FirstName9866 MiddleName9866",LastName9866 +9867,9867,"FirstName9867 MiddleName9867",LastName9867 +9868,9868,"FirstName9868 MiddleName9868",LastName9868 +9869,9869,"FirstName9869 MiddleName9869",LastName9869 +9870,9870,"FirstName9870 MiddleName9870",LastName9870 +9871,9871,"FirstName9871 MiddleName9871",LastName9871 +9872,9872,"FirstName9872 MiddleName9872",LastName9872 +9873,9873,"FirstName9873 MiddleName9873",LastName9873 +9874,9874,"FirstName9874 MiddleName9874",LastName9874 +9875,9875,"FirstName9875 MiddleName9875",LastName9875 +9876,9876,"FirstName9876 MiddleName9876",LastName9876 +9877,9877,"FirstName9877 MiddleName9877",LastName9877 +9878,9878,"FirstName9878 MiddleName9878",LastName9878 +9879,9879,"FirstName9879 MiddleName9879",LastName9879 +9880,9880,"FirstName9880 MiddleName9880",LastName9880 +9881,9881,"FirstName9881 MiddleName9881",LastName9881 +9882,9882,"FirstName9882 MiddleName9882",LastName9882 +9883,9883,"FirstName9883 MiddleName9883",LastName9883 +9884,9884,"FirstName9884 MiddleName9884",LastName9884 +9885,9885,"FirstName9885 MiddleName9885",LastName9885 +9886,9886,"FirstName9886 MiddleName9886",LastName9886 +9887,9887,"FirstName9887 MiddleName9887",LastName9887 +9888,9888,"FirstName9888 MiddleName9888",LastName9888 +9889,9889,"FirstName9889 MiddleName9889",LastName9889 +9890,9890,"FirstName9890 MiddleName9890",LastName9890 +9891,9891,"FirstName9891 MiddleName9891",LastName9891 +9892,9892,"FirstName9892 MiddleName9892",LastName9892 
+9893,9893,"FirstName9893 MiddleName9893",LastName9893 +9894,9894,"FirstName9894 MiddleName9894",LastName9894 +9895,9895,"FirstName9895 MiddleName9895",LastName9895 +9896,9896,"FirstName9896 MiddleName9896",LastName9896 +9897,9897,"FirstName9897 MiddleName9897",LastName9897 +9898,9898,"FirstName9898 MiddleName9898",LastName9898 +9899,9899,"FirstName9899 MiddleName9899",LastName9899 +9900,9900,"FirstName9900 MiddleName9900",LastName9900 +9901,9901,"FirstName9901 MiddleName9901",LastName9901 +9902,9902,"FirstName9902 MiddleName9902",LastName9902 +9903,9903,"FirstName9903 MiddleName9903",LastName9903 +9904,9904,"FirstName9904 MiddleName9904",LastName9904 +9905,9905,"FirstName9905 MiddleName9905",LastName9905 +9906,9906,"FirstName9906 MiddleName9906",LastName9906 +9907,9907,"FirstName9907 MiddleName9907",LastName9907 +9908,9908,"FirstName9908 MiddleName9908",LastName9908 +9909,9909,"FirstName9909 MiddleName9909",LastName9909 +9910,9910,"FirstName9910 MiddleName9910",LastName9910 +9911,9911,"FirstName9911 MiddleName9911",LastName9911 +9912,9912,"FirstName9912 MiddleName9912",LastName9912 +9913,9913,"FirstName9913 MiddleName9913",LastName9913 +9914,9914,"FirstName9914 MiddleName9914",LastName9914 +9915,9915,"FirstName9915 MiddleName9915",LastName9915 +9916,9916,"FirstName9916 MiddleName9916",LastName9916 +9917,9917,"FirstName9917 MiddleName9917",LastName9917 +9918,9918,"FirstName9918 MiddleName9918",LastName9918 +9919,9919,"FirstName9919 MiddleName9919",LastName9919 +9920,9920,"FirstName9920 MiddleName9920",LastName9920 +9921,9921,"FirstName9921 MiddleName9921",LastName9921 +9922,9922,"FirstName9922 MiddleName9922",LastName9922 +9923,9923,"FirstName9923 MiddleName9923",LastName9923 +9924,9924,"FirstName9924 MiddleName9924",LastName9924 +9925,9925,"FirstName9925 MiddleName9925",LastName9925 +9926,9926,"FirstName9926 MiddleName9926",LastName9926 +9927,9927,"FirstName9927 MiddleName9927",LastName9927 +9928,9928,"FirstName9928 MiddleName9928",LastName9928 
+9929,9929,"FirstName9929 MiddleName9929",LastName9929 +9930,9930,"FirstName9930 MiddleName9930",LastName9930 +9931,9931,"FirstName9931 MiddleName9931",LastName9931 +9932,9932,"FirstName9932 MiddleName9932",LastName9932 +9933,9933,"FirstName9933 MiddleName9933",LastName9933 +9934,9934,"FirstName9934 MiddleName9934",LastName9934 +9935,9935,"FirstName9935 MiddleName9935",LastName9935 +9936,9936,"FirstName9936 MiddleName9936",LastName9936 +9937,9937,"FirstName9937 MiddleName9937",LastName9937 +9938,9938,"FirstName9938 MiddleName9938",LastName9938 +9939,9939,"FirstName9939 MiddleName9939",LastName9939 +9940,9940,"FirstName9940 MiddleName9940",LastName9940 +9941,9941,"FirstName9941 MiddleName9941",LastName9941 +9942,9942,"FirstName9942 MiddleName9942",LastName9942 +9943,9943,"FirstName9943 MiddleName9943",LastName9943 +9944,9944,"FirstName9944 MiddleName9944",LastName9944 +9945,9945,"FirstName9945 MiddleName9945",LastName9945 +9946,9946,"FirstName9946 MiddleName9946",LastName9946 +9947,9947,"FirstName9947 MiddleName9947",LastName9947 +9948,9948,"FirstName9948 MiddleName9948",LastName9948 +9949,9949,"FirstName9949 MiddleName9949",LastName9949 +9950,9950,"FirstName9950 MiddleName9950",LastName9950 +9951,9951,"FirstName9951 MiddleName9951",LastName9951 +9952,9952,"FirstName9952 MiddleName9952",LastName9952 +9953,9953,"FirstName9953 MiddleName9953",LastName9953 +9954,9954,"FirstName9954 MiddleName9954",LastName9954 +9955,9955,"FirstName9955 MiddleName9955",LastName9955 +9956,9956,"FirstName9956 MiddleName9956",LastName9956 +9957,9957,"FirstName9957 MiddleName9957",LastName9957 +9958,9958,"FirstName9958 MiddleName9958",LastName9958 +9959,9959,"FirstName9959 MiddleName9959",LastName9959 +9960,9960,"FirstName9960 MiddleName9960",LastName9960 +9961,9961,"FirstName9961 MiddleName9961",LastName9961 +9962,9962,"FirstName9962 MiddleName9962",LastName9962 +9963,9963,"FirstName9963 MiddleName9963",LastName9963 +9964,9964,"FirstName9964 MiddleName9964",LastName9964 
+9965,9965,"FirstName9965 MiddleName9965",LastName9965 +9966,9966,"FirstName9966 MiddleName9966",LastName9966 +9967,9967,"FirstName9967 MiddleName9967",LastName9967 +9968,9968,"FirstName9968 MiddleName9968",LastName9968 +9969,9969,"FirstName9969 MiddleName9969",LastName9969 +9970,9970,"FirstName9970 MiddleName9970",LastName9970 +9971,9971,"FirstName9971 MiddleName9971",LastName9971 +9972,9972,"FirstName9972 MiddleName9972",LastName9972 +9973,9973,"FirstName9973 MiddleName9973",LastName9973 +9974,9974,"FirstName9974 MiddleName9974",LastName9974 +9975,9975,"FirstName9975 MiddleName9975",LastName9975 +9976,9976,"FirstName9976 MiddleName9976",LastName9976 +9977,9977,"FirstName9977 MiddleName9977",LastName9977 +9978,9978,"FirstName9978 MiddleName9978",LastName9978 +9979,9979,"FirstName9979 MiddleName9979",LastName9979 +9980,9980,"FirstName9980 MiddleName9980",LastName9980 +9981,9981,"FirstName9981 MiddleName9981",LastName9981 +9982,9982,"FirstName9982 MiddleName9982",LastName9982 +9983,9983,"FirstName9983 MiddleName9983",LastName9983 +9984,9984,"FirstName9984 MiddleName9984",LastName9984 +9985,9985,"FirstName9985 MiddleName9985",LastName9985 +9986,9986,"FirstName9986 MiddleName9986",LastName9986 +9987,9987,"FirstName9987 MiddleName9987",LastName9987 +9988,9988,"FirstName9988 MiddleName9988",LastName9988 +9989,9989,"FirstName9989 MiddleName9989",LastName9989 +9990,9990,"FirstName9990 MiddleName9990",LastName9990 +9991,9991,"FirstName9991 MiddleName9991",LastName9991 +9992,9992,"FirstName9992 MiddleName9992",LastName9992 +9993,9993,"FirstName9993 MiddleName9993",LastName9993 +9994,9994,"FirstName9994 MiddleName9994",LastName9994 +9995,9995,"FirstName9995 MiddleName9995",LastName9995 +9996,9996,"FirstName9996 MiddleName9996",LastName9996 +9997,9997,"FirstName9997 MiddleName9997",LastName9997 +9998,9998,"FirstName9998 MiddleName9998",LastName9998 +9999,9999,"FirstName9999 MiddleName9999",LastName9999 +10000,10000,"FirstName10000 MiddleName10000",LastName10000 
+10001,10001,"FirstName10001 MiddleName10001",LastName10001 +10002,10002,"FirstName10002 MiddleName10002",LastName10002 +10003,10003,"FirstName10003 MiddleName10003",LastName10003 +10004,10004,"FirstName10004 MiddleName10004",LastName10004 +10005,10005,"FirstName10005 MiddleName10005",LastName10005 +10006,10006,"FirstName10006 MiddleName10006",LastName10006 +10007,10007,"FirstName10007 MiddleName10007",LastName10007 +10008,10008,"FirstName10008 MiddleName10008",LastName10008 +10009,10009,"FirstName10009 MiddleName10009",LastName10009 +10010,10010,"FirstName10010 MiddleName10010",LastName10010 +10011,10011,"FirstName10011 MiddleName10011",LastName10011 +10012,10012,"FirstName10012 MiddleName10012",LastName10012 +10013,10013,"FirstName10013 MiddleName10013",LastName10013 +10014,10014,"FirstName10014 MiddleName10014",LastName10014 +10015,10015,"FirstName10015 MiddleName10015",LastName10015 +10016,10016,"FirstName10016 MiddleName10016",LastName10016 +10017,10017,"FirstName10017 MiddleName10017",LastName10017 +10018,10018,"FirstName10018 MiddleName10018",LastName10018 +10019,10019,"FirstName10019 MiddleName10019",LastName10019 +10020,10020,"FirstName10020 MiddleName10020",LastName10020 +10021,10021,"FirstName10021 MiddleName10021",LastName10021 +10022,10022,"FirstName10022 MiddleName10022",LastName10022 +10023,10023,"FirstName10023 MiddleName10023",LastName10023 +10024,10024,"FirstName10024 MiddleName10024",LastName10024 +10025,10025,"FirstName10025 MiddleName10025",LastName10025 +10026,10026,"FirstName10026 MiddleName10026",LastName10026 +10027,10027,"FirstName10027 MiddleName10027",LastName10027 +10028,10028,"FirstName10028 MiddleName10028",LastName10028 +10029,10029,"FirstName10029 MiddleName10029",LastName10029 +10030,10030,"FirstName10030 MiddleName10030",LastName10030 +10031,10031,"FirstName10031 MiddleName10031",LastName10031 +10032,10032,"FirstName10032 MiddleName10032",LastName10032 +10033,10033,"FirstName10033 MiddleName10033",LastName10033 
+10034,10034,"FirstName10034 MiddleName10034",LastName10034 +10035,10035,"FirstName10035 MiddleName10035",LastName10035 +10036,10036,"FirstName10036 MiddleName10036",LastName10036 +10037,10037,"FirstName10037 MiddleName10037",LastName10037 +10038,10038,"FirstName10038 MiddleName10038",LastName10038 +10039,10039,"FirstName10039 MiddleName10039",LastName10039 +10040,10040,"FirstName10040 MiddleName10040",LastName10040 +10041,10041,"FirstName10041 MiddleName10041",LastName10041 +10042,10042,"FirstName10042 MiddleName10042",LastName10042 +10043,10043,"FirstName10043 MiddleName10043",LastName10043 +10044,10044,"FirstName10044 MiddleName10044",LastName10044 +10045,10045,"FirstName10045 MiddleName10045",LastName10045 +10046,10046,"FirstName10046 MiddleName10046",LastName10046 +10047,10047,"FirstName10047 MiddleName10047",LastName10047 +10048,10048,"FirstName10048 MiddleName10048",LastName10048 +10049,10049,"FirstName10049 MiddleName10049",LastName10049 +10050,10050,"FirstName10050 MiddleName10050",LastName10050 +10051,10051,"FirstName10051 MiddleName10051",LastName10051 +10052,10052,"FirstName10052 MiddleName10052",LastName10052 +10053,10053,"FirstName10053 MiddleName10053",LastName10053 +10054,10054,"FirstName10054 MiddleName10054",LastName10054 +10055,10055,"FirstName10055 MiddleName10055",LastName10055 +10056,10056,"FirstName10056 MiddleName10056",LastName10056 +10057,10057,"FirstName10057 MiddleName10057",LastName10057 +10058,10058,"FirstName10058 MiddleName10058",LastName10058 +10059,10059,"FirstName10059 MiddleName10059",LastName10059 +10060,10060,"FirstName10060 MiddleName10060",LastName10060 +10061,10061,"FirstName10061 MiddleName10061",LastName10061 +10062,10062,"FirstName10062 MiddleName10062",LastName10062 +10063,10063,"FirstName10063 MiddleName10063",LastName10063 +10064,10064,"FirstName10064 MiddleName10064",LastName10064 +10065,10065,"FirstName10065 MiddleName10065",LastName10065 +10066,10066,"FirstName10066 MiddleName10066",LastName10066 
+10067,10067,"FirstName10067 MiddleName10067",LastName10067 +10068,10068,"FirstName10068 MiddleName10068",LastName10068 +10069,10069,"FirstName10069 MiddleName10069",LastName10069 +10070,10070,"FirstName10070 MiddleName10070",LastName10070 +10071,10071,"FirstName10071 MiddleName10071",LastName10071 +10072,10072,"FirstName10072 MiddleName10072",LastName10072 +10073,10073,"FirstName10073 MiddleName10073",LastName10073 +10074,10074,"FirstName10074 MiddleName10074",LastName10074 +10075,10075,"FirstName10075 MiddleName10075",LastName10075 +10076,10076,"FirstName10076 MiddleName10076",LastName10076 +10077,10077,"FirstName10077 MiddleName10077",LastName10077 +10078,10078,"FirstName10078 MiddleName10078",LastName10078 +10079,10079,"FirstName10079 MiddleName10079",LastName10079 +10080,10080,"FirstName10080 MiddleName10080",LastName10080 +10081,10081,"FirstName10081 MiddleName10081",LastName10081 +10082,10082,"FirstName10082 MiddleName10082",LastName10082 +10083,10083,"FirstName10083 MiddleName10083",LastName10083 +10084,10084,"FirstName10084 MiddleName10084",LastName10084 +10085,10085,"FirstName10085 MiddleName10085",LastName10085 +10086,10086,"FirstName10086 MiddleName10086",LastName10086 +10087,10087,"FirstName10087 MiddleName10087",LastName10087 +10088,10088,"FirstName10088 MiddleName10088",LastName10088 +10089,10089,"FirstName10089 MiddleName10089",LastName10089 +10090,10090,"FirstName10090 MiddleName10090",LastName10090 +10091,10091,"FirstName10091 MiddleName10091",LastName10091 +10092,10092,"FirstName10092 MiddleName10092",LastName10092 +10093,10093,"FirstName10093 MiddleName10093",LastName10093 +10094,10094,"FirstName10094 MiddleName10094",LastName10094 +10095,10095,"FirstName10095 MiddleName10095",LastName10095 +10096,10096,"FirstName10096 MiddleName10096",LastName10096 +10097,10097,"FirstName10097 MiddleName10097",LastName10097 +10098,10098,"FirstName10098 MiddleName10098",LastName10098 +10099,10099,"FirstName10099 MiddleName10099",LastName10099 
+10100,10100,"FirstName10100 MiddleName10100",LastName10100 +10101,10101,"FirstName10101 MiddleName10101",LastName10101 +10102,10102,"FirstName10102 MiddleName10102",LastName10102 +10103,10103,"FirstName10103 MiddleName10103",LastName10103 +10104,10104,"FirstName10104 MiddleName10104",LastName10104 +10105,10105,"FirstName10105 MiddleName10105",LastName10105 +10106,10106,"FirstName10106 MiddleName10106",LastName10106 +10107,10107,"FirstName10107 MiddleName10107",LastName10107 +10108,10108,"FirstName10108 MiddleName10108",LastName10108 +10109,10109,"FirstName10109 MiddleName10109",LastName10109 +10110,10110,"FirstName10110 MiddleName10110",LastName10110 +10111,10111,"FirstName10111 MiddleName10111",LastName10111 +10112,10112,"FirstName10112 MiddleName10112",LastName10112 +10113,10113,"FirstName10113 MiddleName10113",LastName10113 +10114,10114,"FirstName10114 MiddleName10114",LastName10114 +10115,10115,"FirstName10115 MiddleName10115",LastName10115 +10116,10116,"FirstName10116 MiddleName10116",LastName10116 +10117,10117,"FirstName10117 MiddleName10117",LastName10117 +10118,10118,"FirstName10118 MiddleName10118",LastName10118 +10119,10119,"FirstName10119 MiddleName10119",LastName10119 +10120,10120,"FirstName10120 MiddleName10120",LastName10120 +10121,10121,"FirstName10121 MiddleName10121",LastName10121 +10122,10122,"FirstName10122 MiddleName10122",LastName10122 +10123,10123,"FirstName10123 MiddleName10123",LastName10123 +10124,10124,"FirstName10124 MiddleName10124",LastName10124 +10125,10125,"FirstName10125 MiddleName10125",LastName10125 +10126,10126,"FirstName10126 MiddleName10126",LastName10126 +10127,10127,"FirstName10127 MiddleName10127",LastName10127 +10128,10128,"FirstName10128 MiddleName10128",LastName10128 +10129,10129,"FirstName10129 MiddleName10129",LastName10129 +10130,10130,"FirstName10130 MiddleName10130",LastName10130 +10131,10131,"FirstName10131 MiddleName10131",LastName10131 +10132,10132,"FirstName10132 MiddleName10132",LastName10132 
+10133,10133,"FirstName10133 MiddleName10133",LastName10133 +10134,10134,"FirstName10134 MiddleName10134",LastName10134 +10135,10135,"FirstName10135 MiddleName10135",LastName10135 +10136,10136,"FirstName10136 MiddleName10136",LastName10136 +10137,10137,"FirstName10137 MiddleName10137",LastName10137 +10138,10138,"FirstName10138 MiddleName10138",LastName10138 +10139,10139,"FirstName10139 MiddleName10139",LastName10139 +10140,10140,"FirstName10140 MiddleName10140",LastName10140 +10141,10141,"FirstName10141 MiddleName10141",LastName10141 +10142,10142,"FirstName10142 MiddleName10142",LastName10142 +10143,10143,"FirstName10143 MiddleName10143",LastName10143 +10144,10144,"FirstName10144 MiddleName10144",LastName10144 +10145,10145,"FirstName10145 MiddleName10145",LastName10145 +10146,10146,"FirstName10146 MiddleName10146",LastName10146 +10147,10147,"FirstName10147 MiddleName10147",LastName10147 +10148,10148,"FirstName10148 MiddleName10148",LastName10148 +10149,10149,"FirstName10149 MiddleName10149",LastName10149 +10150,10150,"FirstName10150 MiddleName10150",LastName10150 +10151,10151,"FirstName10151 MiddleName10151",LastName10151 +10152,10152,"FirstName10152 MiddleName10152",LastName10152 +10153,10153,"FirstName10153 MiddleName10153",LastName10153 +10154,10154,"FirstName10154 MiddleName10154",LastName10154 +10155,10155,"FirstName10155 MiddleName10155",LastName10155 +10156,10156,"FirstName10156 MiddleName10156",LastName10156 +10157,10157,"FirstName10157 MiddleName10157",LastName10157 +10158,10158,"FirstName10158 MiddleName10158",LastName10158 +10159,10159,"FirstName10159 MiddleName10159",LastName10159 +10160,10160,"FirstName10160 MiddleName10160",LastName10160 +10161,10161,"FirstName10161 MiddleName10161",LastName10161 +10162,10162,"FirstName10162 MiddleName10162",LastName10162 +10163,10163,"FirstName10163 MiddleName10163",LastName10163 +10164,10164,"FirstName10164 MiddleName10164",LastName10164 +10165,10165,"FirstName10165 MiddleName10165",LastName10165 
+10166,10166,"FirstName10166 MiddleName10166",LastName10166 +10167,10167,"FirstName10167 MiddleName10167",LastName10167 +10168,10168,"FirstName10168 MiddleName10168",LastName10168 +10169,10169,"FirstName10169 MiddleName10169",LastName10169 +10170,10170,"FirstName10170 MiddleName10170",LastName10170 +10171,10171,"FirstName10171 MiddleName10171",LastName10171 +10172,10172,"FirstName10172 MiddleName10172",LastName10172 +10173,10173,"FirstName10173 MiddleName10173",LastName10173 +10174,10174,"FirstName10174 MiddleName10174",LastName10174 +10175,10175,"FirstName10175 MiddleName10175",LastName10175 +10176,10176,"FirstName10176 MiddleName10176",LastName10176 +10177,10177,"FirstName10177 MiddleName10177",LastName10177 +10178,10178,"FirstName10178 MiddleName10178",LastName10178 +10179,10179,"FirstName10179 MiddleName10179",LastName10179 +10180,10180,"FirstName10180 MiddleName10180",LastName10180 +10181,10181,"FirstName10181 MiddleName10181",LastName10181 +10182,10182,"FirstName10182 MiddleName10182",LastName10182 +10183,10183,"FirstName10183 MiddleName10183",LastName10183 +10184,10184,"FirstName10184 MiddleName10184",LastName10184 +10185,10185,"FirstName10185 MiddleName10185",LastName10185 +10186,10186,"FirstName10186 MiddleName10186",LastName10186 +10187,10187,"FirstName10187 MiddleName10187",LastName10187 +10188,10188,"FirstName10188 MiddleName10188",LastName10188 +10189,10189,"FirstName10189 MiddleName10189",LastName10189 +10190,10190,"FirstName10190 MiddleName10190",LastName10190 +10191,10191,"FirstName10191 MiddleName10191",LastName10191 +10192,10192,"FirstName10192 MiddleName10192",LastName10192 +10193,10193,"FirstName10193 MiddleName10193",LastName10193 +10194,10194,"FirstName10194 MiddleName10194",LastName10194 +10195,10195,"FirstName10195 MiddleName10195",LastName10195 +10196,10196,"FirstName10196 MiddleName10196",LastName10196 +10197,10197,"FirstName10197 MiddleName10197",LastName10197 +10198,10198,"FirstName10198 MiddleName10198",LastName10198 
+10199,10199,"FirstName10199 MiddleName10199",LastName10199 +10200,10200,"FirstName10200 MiddleName10200",LastName10200 +10201,10201,"FirstName10201 MiddleName10201",LastName10201 +10202,10202,"FirstName10202 MiddleName10202",LastName10202 +10203,10203,"FirstName10203 MiddleName10203",LastName10203 +10204,10204,"FirstName10204 MiddleName10204",LastName10204 +10205,10205,"FirstName10205 MiddleName10205",LastName10205 +10206,10206,"FirstName10206 MiddleName10206",LastName10206 +10207,10207,"FirstName10207 MiddleName10207",LastName10207 +10208,10208,"FirstName10208 MiddleName10208",LastName10208 +10209,10209,"FirstName10209 MiddleName10209",LastName10209 +10210,10210,"FirstName10210 MiddleName10210",LastName10210 +10211,10211,"FirstName10211 MiddleName10211",LastName10211 +10212,10212,"FirstName10212 MiddleName10212",LastName10212 +10213,10213,"FirstName10213 MiddleName10213",LastName10213 +10214,10214,"FirstName10214 MiddleName10214",LastName10214 +10215,10215,"FirstName10215 MiddleName10215",LastName10215 +10216,10216,"FirstName10216 MiddleName10216",LastName10216 +10217,10217,"FirstName10217 MiddleName10217",LastName10217 +10218,10218,"FirstName10218 MiddleName10218",LastName10218 +10219,10219,"FirstName10219 MiddleName10219",LastName10219 +10220,10220,"FirstName10220 MiddleName10220",LastName10220 +10221,10221,"FirstName10221 MiddleName10221",LastName10221 +10222,10222,"FirstName10222 MiddleName10222",LastName10222 +10223,10223,"FirstName10223 MiddleName10223",LastName10223 +10224,10224,"FirstName10224 MiddleName10224",LastName10224 +10225,10225,"FirstName10225 MiddleName10225",LastName10225 +10226,10226,"FirstName10226 MiddleName10226",LastName10226 +10227,10227,"FirstName10227 MiddleName10227",LastName10227 +10228,10228,"FirstName10228 MiddleName10228",LastName10228 +10229,10229,"FirstName10229 MiddleName10229",LastName10229 +10230,10230,"FirstName10230 MiddleName10230",LastName10230 +10231,10231,"FirstName10231 MiddleName10231",LastName10231 
+10232,10232,"FirstName10232 MiddleName10232",LastName10232 +10233,10233,"FirstName10233 MiddleName10233",LastName10233 +10234,10234,"FirstName10234 MiddleName10234",LastName10234 +10235,10235,"FirstName10235 MiddleName10235",LastName10235 +10236,10236,"FirstName10236 MiddleName10236",LastName10236 +10237,10237,"FirstName10237 MiddleName10237",LastName10237 +10238,10238,"FirstName10238 MiddleName10238",LastName10238 +10239,10239,"FirstName10239 MiddleName10239",LastName10239 +10240,10240,"FirstName10240 MiddleName10240",LastName10240 +10241,10241,"FirstName10241 MiddleName10241",LastName10241 +10242,10242,"FirstName10242 MiddleName10242",LastName10242 +10243,10243,"FirstName10243 MiddleName10243",LastName10243 +10244,10244,"FirstName10244 MiddleName10244",LastName10244 +10245,10245,"FirstName10245 MiddleName10245",LastName10245 +10246,10246,"FirstName10246 MiddleName10246",LastName10246 +10247,10247,"FirstName10247 MiddleName10247",LastName10247 +10248,10248,"FirstName10248 MiddleName10248",LastName10248 +10249,10249,"FirstName10249 MiddleName10249",LastName10249 +10250,10250,"FirstName10250 MiddleName10250",LastName10250 +10251,10251,"FirstName10251 MiddleName10251",LastName10251 +10252,10252,"FirstName10252 MiddleName10252",LastName10252 +10253,10253,"FirstName10253 MiddleName10253",LastName10253 +10254,10254,"FirstName10254 MiddleName10254",LastName10254 +10255,10255,"FirstName10255 MiddleName10255",LastName10255 +10256,10256,"FirstName10256 MiddleName10256",LastName10256 +10257,10257,"FirstName10257 MiddleName10257",LastName10257 +10258,10258,"FirstName10258 MiddleName10258",LastName10258 +10259,10259,"FirstName10259 MiddleName10259",LastName10259 +10260,10260,"FirstName10260 MiddleName10260",LastName10260 +10261,10261,"FirstName10261 MiddleName10261",LastName10261 +10262,10262,"FirstName10262 MiddleName10262",LastName10262 +10263,10263,"FirstName10263 MiddleName10263",LastName10263 +10264,10264,"FirstName10264 MiddleName10264",LastName10264 
+10265,10265,"FirstName10265 MiddleName10265",LastName10265 +10266,10266,"FirstName10266 MiddleName10266",LastName10266 +10267,10267,"FirstName10267 MiddleName10267",LastName10267 +10268,10268,"FirstName10268 MiddleName10268",LastName10268 +10269,10269,"FirstName10269 MiddleName10269",LastName10269 +10270,10270,"FirstName10270 MiddleName10270",LastName10270 +10271,10271,"FirstName10271 MiddleName10271",LastName10271 +10272,10272,"FirstName10272 MiddleName10272",LastName10272 +10273,10273,"FirstName10273 MiddleName10273",LastName10273 +10274,10274,"FirstName10274 MiddleName10274",LastName10274 +10275,10275,"FirstName10275 MiddleName10275",LastName10275 +10276,10276,"FirstName10276 MiddleName10276",LastName10276 +10277,10277,"FirstName10277 MiddleName10277",LastName10277 +10278,10278,"FirstName10278 MiddleName10278",LastName10278 +10279,10279,"FirstName10279 MiddleName10279",LastName10279 +10280,10280,"FirstName10280 MiddleName10280",LastName10280 +10281,10281,"FirstName10281 MiddleName10281",LastName10281 +10282,10282,"FirstName10282 MiddleName10282",LastName10282 +10283,10283,"FirstName10283 MiddleName10283",LastName10283 +10284,10284,"FirstName10284 MiddleName10284",LastName10284 +10285,10285,"FirstName10285 MiddleName10285",LastName10285 +10286,10286,"FirstName10286 MiddleName10286",LastName10286 +10287,10287,"FirstName10287 MiddleName10287",LastName10287 +10288,10288,"FirstName10288 MiddleName10288",LastName10288 +10289,10289,"FirstName10289 MiddleName10289",LastName10289 +10290,10290,"FirstName10290 MiddleName10290",LastName10290 +10291,10291,"FirstName10291 MiddleName10291",LastName10291 +10292,10292,"FirstName10292 MiddleName10292",LastName10292 +10293,10293,"FirstName10293 MiddleName10293",LastName10293 +10294,10294,"FirstName10294 MiddleName10294",LastName10294 +10295,10295,"FirstName10295 MiddleName10295",LastName10295 +10296,10296,"FirstName10296 MiddleName10296",LastName10296 +10297,10297,"FirstName10297 MiddleName10297",LastName10297 
+10298,10298,"FirstName10298 MiddleName10298",LastName10298 +10299,10299,"FirstName10299 MiddleName10299",LastName10299 +10300,10300,"FirstName10300 MiddleName10300",LastName10300 +10301,10301,"FirstName10301 MiddleName10301",LastName10301 +10302,10302,"FirstName10302 MiddleName10302",LastName10302 +10303,10303,"FirstName10303 MiddleName10303",LastName10303 +10304,10304,"FirstName10304 MiddleName10304",LastName10304 +10305,10305,"FirstName10305 MiddleName10305",LastName10305 +10306,10306,"FirstName10306 MiddleName10306",LastName10306 +10307,10307,"FirstName10307 MiddleName10307",LastName10307 +10308,10308,"FirstName10308 MiddleName10308",LastName10308 +10309,10309,"FirstName10309 MiddleName10309",LastName10309 +10310,10310,"FirstName10310 MiddleName10310",LastName10310 +10311,10311,"FirstName10311 MiddleName10311",LastName10311 +10312,10312,"FirstName10312 MiddleName10312",LastName10312 +10313,10313,"FirstName10313 MiddleName10313",LastName10313 +10314,10314,"FirstName10314 MiddleName10314",LastName10314 +10315,10315,"FirstName10315 MiddleName10315",LastName10315 +10316,10316,"FirstName10316 MiddleName10316",LastName10316 +10317,10317,"FirstName10317 MiddleName10317",LastName10317 +10318,10318,"FirstName10318 MiddleName10318",LastName10318 +10319,10319,"FirstName10319 MiddleName10319",LastName10319 +10320,10320,"FirstName10320 MiddleName10320",LastName10320 +10321,10321,"FirstName10321 MiddleName10321",LastName10321 +10322,10322,"FirstName10322 MiddleName10322",LastName10322 +10323,10323,"FirstName10323 MiddleName10323",LastName10323 +10324,10324,"FirstName10324 MiddleName10324",LastName10324 +10325,10325,"FirstName10325 MiddleName10325",LastName10325 +10326,10326,"FirstName10326 MiddleName10326",LastName10326 +10327,10327,"FirstName10327 MiddleName10327",LastName10327 +10328,10328,"FirstName10328 MiddleName10328",LastName10328 +10329,10329,"FirstName10329 MiddleName10329",LastName10329 +10330,10330,"FirstName10330 MiddleName10330",LastName10330 
+10331,10331,"FirstName10331 MiddleName10331",LastName10331 +10332,10332,"FirstName10332 MiddleName10332",LastName10332 +10333,10333,"FirstName10333 MiddleName10333",LastName10333 +10334,10334,"FirstName10334 MiddleName10334",LastName10334 +10335,10335,"FirstName10335 MiddleName10335",LastName10335 +10336,10336,"FirstName10336 MiddleName10336",LastName10336 +10337,10337,"FirstName10337 MiddleName10337",LastName10337 +10338,10338,"FirstName10338 MiddleName10338",LastName10338 +10339,10339,"FirstName10339 MiddleName10339",LastName10339 +10340,10340,"FirstName10340 MiddleName10340",LastName10340 +10341,10341,"FirstName10341 MiddleName10341",LastName10341 +10342,10342,"FirstName10342 MiddleName10342",LastName10342 +10343,10343,"FirstName10343 MiddleName10343",LastName10343 +10344,10344,"FirstName10344 MiddleName10344",LastName10344 +10345,10345,"FirstName10345 MiddleName10345",LastName10345 +10346,10346,"FirstName10346 MiddleName10346",LastName10346 +10347,10347,"FirstName10347 MiddleName10347",LastName10347 +10348,10348,"FirstName10348 MiddleName10348",LastName10348 +10349,10349,"FirstName10349 MiddleName10349",LastName10349 +10350,10350,"FirstName10350 MiddleName10350",LastName10350 +10351,10351,"FirstName10351 MiddleName10351",LastName10351 +10352,10352,"FirstName10352 MiddleName10352",LastName10352 +10353,10353,"FirstName10353 MiddleName10353",LastName10353 +10354,10354,"FirstName10354 MiddleName10354",LastName10354 +10355,10355,"FirstName10355 MiddleName10355",LastName10355 +10356,10356,"FirstName10356 MiddleName10356",LastName10356 +10357,10357,"FirstName10357 MiddleName10357",LastName10357 +10358,10358,"FirstName10358 MiddleName10358",LastName10358 +10359,10359,"FirstName10359 MiddleName10359",LastName10359 +10360,10360,"FirstName10360 MiddleName10360",LastName10360 +10361,10361,"FirstName10361 MiddleName10361",LastName10361 +10362,10362,"FirstName10362 MiddleName10362",LastName10362 +10363,10363,"FirstName10363 MiddleName10363",LastName10363 
+10364,10364,"FirstName10364 MiddleName10364",LastName10364 +10365,10365,"FirstName10365 MiddleName10365",LastName10365 +10366,10366,"FirstName10366 MiddleName10366",LastName10366 +10367,10367,"FirstName10367 MiddleName10367",LastName10367 +10368,10368,"FirstName10368 MiddleName10368",LastName10368 +10369,10369,"FirstName10369 MiddleName10369",LastName10369 +10370,10370,"FirstName10370 MiddleName10370",LastName10370 +10371,10371,"FirstName10371 MiddleName10371",LastName10371 +10372,10372,"FirstName10372 MiddleName10372",LastName10372 +10373,10373,"FirstName10373 MiddleName10373",LastName10373 +10374,10374,"FirstName10374 MiddleName10374",LastName10374 +10375,10375,"FirstName10375 MiddleName10375",LastName10375 +10376,10376,"FirstName10376 MiddleName10376",LastName10376 +10377,10377,"FirstName10377 MiddleName10377",LastName10377 +10378,10378,"FirstName10378 MiddleName10378",LastName10378 +10379,10379,"FirstName10379 MiddleName10379",LastName10379 +10380,10380,"FirstName10380 MiddleName10380",LastName10380 +10381,10381,"FirstName10381 MiddleName10381",LastName10381 +10382,10382,"FirstName10382 MiddleName10382",LastName10382 +10383,10383,"FirstName10383 MiddleName10383",LastName10383 +10384,10384,"FirstName10384 MiddleName10384",LastName10384 +10385,10385,"FirstName10385 MiddleName10385",LastName10385 +10386,10386,"FirstName10386 MiddleName10386",LastName10386 +10387,10387,"FirstName10387 MiddleName10387",LastName10387 +10388,10388,"FirstName10388 MiddleName10388",LastName10388 +10389,10389,"FirstName10389 MiddleName10389",LastName10389 +10390,10390,"FirstName10390 MiddleName10390",LastName10390 +10391,10391,"FirstName10391 MiddleName10391",LastName10391 +10392,10392,"FirstName10392 MiddleName10392",LastName10392 +10393,10393,"FirstName10393 MiddleName10393",LastName10393 +10394,10394,"FirstName10394 MiddleName10394",LastName10394 +10395,10395,"FirstName10395 MiddleName10395",LastName10395 +10396,10396,"FirstName10396 MiddleName10396",LastName10396 
+10397,10397,"FirstName10397 MiddleName10397",LastName10397 +10398,10398,"FirstName10398 MiddleName10398",LastName10398 +10399,10399,"FirstName10399 MiddleName10399",LastName10399 +10400,10400,"FirstName10400 MiddleName10400",LastName10400 +10401,10401,"FirstName10401 MiddleName10401",LastName10401 +10402,10402,"FirstName10402 MiddleName10402",LastName10402 +10403,10403,"FirstName10403 MiddleName10403",LastName10403 +10404,10404,"FirstName10404 MiddleName10404",LastName10404 +10405,10405,"FirstName10405 MiddleName10405",LastName10405 +10406,10406,"FirstName10406 MiddleName10406",LastName10406 +10407,10407,"FirstName10407 MiddleName10407",LastName10407 +10408,10408,"FirstName10408 MiddleName10408",LastName10408 +10409,10409,"FirstName10409 MiddleName10409",LastName10409 +10410,10410,"FirstName10410 MiddleName10410",LastName10410 +10411,10411,"FirstName10411 MiddleName10411",LastName10411 +10412,10412,"FirstName10412 MiddleName10412",LastName10412 +10413,10413,"FirstName10413 MiddleName10413",LastName10413 +10414,10414,"FirstName10414 MiddleName10414",LastName10414 +10415,10415,"FirstName10415 MiddleName10415",LastName10415 +10416,10416,"FirstName10416 MiddleName10416",LastName10416 +10417,10417,"FirstName10417 MiddleName10417",LastName10417 +10418,10418,"FirstName10418 MiddleName10418",LastName10418 +10419,10419,"FirstName10419 MiddleName10419",LastName10419 +10420,10420,"FirstName10420 MiddleName10420",LastName10420 +10421,10421,"FirstName10421 MiddleName10421",LastName10421 +10422,10422,"FirstName10422 MiddleName10422",LastName10422 +10423,10423,"FirstName10423 MiddleName10423",LastName10423 +10424,10424,"FirstName10424 MiddleName10424",LastName10424 +10425,10425,"FirstName10425 MiddleName10425",LastName10425 +10426,10426,"FirstName10426 MiddleName10426",LastName10426 +10427,10427,"FirstName10427 MiddleName10427",LastName10427 +10428,10428,"FirstName10428 MiddleName10428",LastName10428 +10429,10429,"FirstName10429 MiddleName10429",LastName10429 
+10430,10430,"FirstName10430 MiddleName10430",LastName10430 +10431,10431,"FirstName10431 MiddleName10431",LastName10431 +10432,10432,"FirstName10432 MiddleName10432",LastName10432 +10433,10433,"FirstName10433 MiddleName10433",LastName10433 +10434,10434,"FirstName10434 MiddleName10434",LastName10434 +10435,10435,"FirstName10435 MiddleName10435",LastName10435 +10436,10436,"FirstName10436 MiddleName10436",LastName10436 +10437,10437,"FirstName10437 MiddleName10437",LastName10437 +10438,10438,"FirstName10438 MiddleName10438",LastName10438 +10439,10439,"FirstName10439 MiddleName10439",LastName10439 +10440,10440,"FirstName10440 MiddleName10440",LastName10440 +10441,10441,"FirstName10441 MiddleName10441",LastName10441 +10442,10442,"FirstName10442 MiddleName10442",LastName10442 +10443,10443,"FirstName10443 MiddleName10443",LastName10443 +10444,10444,"FirstName10444 MiddleName10444",LastName10444 +10445,10445,"FirstName10445 MiddleName10445",LastName10445 +10446,10446,"FirstName10446 MiddleName10446",LastName10446 +10447,10447,"FirstName10447 MiddleName10447",LastName10447 +10448,10448,"FirstName10448 MiddleName10448",LastName10448 +10449,10449,"FirstName10449 MiddleName10449",LastName10449 +10450,10450,"FirstName10450 MiddleName10450",LastName10450 +10451,10451,"FirstName10451 MiddleName10451",LastName10451 +10452,10452,"FirstName10452 MiddleName10452",LastName10452 +10453,10453,"FirstName10453 MiddleName10453",LastName10453 +10454,10454,"FirstName10454 MiddleName10454",LastName10454 +10455,10455,"FirstName10455 MiddleName10455",LastName10455 +10456,10456,"FirstName10456 MiddleName10456",LastName10456 +10457,10457,"FirstName10457 MiddleName10457",LastName10457 +10458,10458,"FirstName10458 MiddleName10458",LastName10458 +10459,10459,"FirstName10459 MiddleName10459",LastName10459 +10460,10460,"FirstName10460 MiddleName10460",LastName10460 +10461,10461,"FirstName10461 MiddleName10461",LastName10461 +10462,10462,"FirstName10462 MiddleName10462",LastName10462 
+10463,10463,"FirstName10463 MiddleName10463",LastName10463 +10464,10464,"FirstName10464 MiddleName10464",LastName10464 +10465,10465,"FirstName10465 MiddleName10465",LastName10465 +10466,10466,"FirstName10466 MiddleName10466",LastName10466 +10467,10467,"FirstName10467 MiddleName10467",LastName10467 +10468,10468,"FirstName10468 MiddleName10468",LastName10468 +10469,10469,"FirstName10469 MiddleName10469",LastName10469 +10470,10470,"FirstName10470 MiddleName10470",LastName10470 +10471,10471,"FirstName10471 MiddleName10471",LastName10471 +10472,10472,"FirstName10472 MiddleName10472",LastName10472 +10473,10473,"FirstName10473 MiddleName10473",LastName10473 +10474,10474,"FirstName10474 MiddleName10474",LastName10474 +10475,10475,"FirstName10475 MiddleName10475",LastName10475 +10476,10476,"FirstName10476 MiddleName10476",LastName10476 +10477,10477,"FirstName10477 MiddleName10477",LastName10477 +10478,10478,"FirstName10478 MiddleName10478",LastName10478 +10479,10479,"FirstName10479 MiddleName10479",LastName10479 +10480,10480,"FirstName10480 MiddleName10480",LastName10480 +10481,10481,"FirstName10481 MiddleName10481",LastName10481 +10482,10482,"FirstName10482 MiddleName10482",LastName10482 +10483,10483,"FirstName10483 MiddleName10483",LastName10483 +10484,10484,"FirstName10484 MiddleName10484",LastName10484 +10485,10485,"FirstName10485 MiddleName10485",LastName10485 +10486,10486,"FirstName10486 MiddleName10486",LastName10486 +10487,10487,"FirstName10487 MiddleName10487",LastName10487 +10488,10488,"FirstName10488 MiddleName10488",LastName10488 +10489,10489,"FirstName10489 MiddleName10489",LastName10489 +10490,10490,"FirstName10490 MiddleName10490",LastName10490 +10491,10491,"FirstName10491 MiddleName10491",LastName10491 +10492,10492,"FirstName10492 MiddleName10492",LastName10492 +10493,10493,"FirstName10493 MiddleName10493",LastName10493 +10494,10494,"FirstName10494 MiddleName10494",LastName10494 +10495,10495,"FirstName10495 MiddleName10495",LastName10495 
+10496,10496,"FirstName10496 MiddleName10496",LastName10496 +10497,10497,"FirstName10497 MiddleName10497",LastName10497 +10498,10498,"FirstName10498 MiddleName10498",LastName10498 +10499,10499,"FirstName10499 MiddleName10499",LastName10499 +10500,10500,"FirstName10500 MiddleName10500",LastName10500 +10501,10501,"FirstName10501 MiddleName10501",LastName10501 +10502,10502,"FirstName10502 MiddleName10502",LastName10502 +10503,10503,"FirstName10503 MiddleName10503",LastName10503 +10504,10504,"FirstName10504 MiddleName10504",LastName10504 +10505,10505,"FirstName10505 MiddleName10505",LastName10505 +10506,10506,"FirstName10506 MiddleName10506",LastName10506 +10507,10507,"FirstName10507 MiddleName10507",LastName10507 +10508,10508,"FirstName10508 MiddleName10508",LastName10508 +10509,10509,"FirstName10509 MiddleName10509",LastName10509 +10510,10510,"FirstName10510 MiddleName10510",LastName10510 +10511,10511,"FirstName10511 MiddleName10511",LastName10511 +10512,10512,"FirstName10512 MiddleName10512",LastName10512 +10513,10513,"FirstName10513 MiddleName10513",LastName10513 +10514,10514,"FirstName10514 MiddleName10514",LastName10514 +10515,10515,"FirstName10515 MiddleName10515",LastName10515 +10516,10516,"FirstName10516 MiddleName10516",LastName10516 +10517,10517,"FirstName10517 MiddleName10517",LastName10517 +10518,10518,"FirstName10518 MiddleName10518",LastName10518 +10519,10519,"FirstName10519 MiddleName10519",LastName10519 +10520,10520,"FirstName10520 MiddleName10520",LastName10520 +10521,10521,"FirstName10521 MiddleName10521",LastName10521 +10522,10522,"FirstName10522 MiddleName10522",LastName10522 +10523,10523,"FirstName10523 MiddleName10523",LastName10523 +10524,10524,"FirstName10524 MiddleName10524",LastName10524 +10525,10525,"FirstName10525 MiddleName10525",LastName10525 +10526,10526,"FirstName10526 MiddleName10526",LastName10526 +10527,10527,"FirstName10527 MiddleName10527",LastName10527 +10528,10528,"FirstName10528 MiddleName10528",LastName10528 
+10529,10529,"FirstName10529 MiddleName10529",LastName10529 +10530,10530,"FirstName10530 MiddleName10530",LastName10530 +10531,10531,"FirstName10531 MiddleName10531",LastName10531 +10532,10532,"FirstName10532 MiddleName10532",LastName10532 +10533,10533,"FirstName10533 MiddleName10533",LastName10533 +10534,10534,"FirstName10534 MiddleName10534",LastName10534 +10535,10535,"FirstName10535 MiddleName10535",LastName10535 +10536,10536,"FirstName10536 MiddleName10536",LastName10536 +10537,10537,"FirstName10537 MiddleName10537",LastName10537 +10538,10538,"FirstName10538 MiddleName10538",LastName10538 +10539,10539,"FirstName10539 MiddleName10539",LastName10539 +10540,10540,"FirstName10540 MiddleName10540",LastName10540 +10541,10541,"FirstName10541 MiddleName10541",LastName10541 +10542,10542,"FirstName10542 MiddleName10542",LastName10542 +10543,10543,"FirstName10543 MiddleName10543",LastName10543 +10544,10544,"FirstName10544 MiddleName10544",LastName10544 +10545,10545,"FirstName10545 MiddleName10545",LastName10545 +10546,10546,"FirstName10546 MiddleName10546",LastName10546 +10547,10547,"FirstName10547 MiddleName10547",LastName10547 +10548,10548,"FirstName10548 MiddleName10548",LastName10548 +10549,10549,"FirstName10549 MiddleName10549",LastName10549 +10550,10550,"FirstName10550 MiddleName10550",LastName10550 +10551,10551,"FirstName10551 MiddleName10551",LastName10551 +10552,10552,"FirstName10552 MiddleName10552",LastName10552 +10553,10553,"FirstName10553 MiddleName10553",LastName10553 +10554,10554,"FirstName10554 MiddleName10554",LastName10554 +10555,10555,"FirstName10555 MiddleName10555",LastName10555 +10556,10556,"FirstName10556 MiddleName10556",LastName10556 +10557,10557,"FirstName10557 MiddleName10557",LastName10557 +10558,10558,"FirstName10558 MiddleName10558",LastName10558 +10559,10559,"FirstName10559 MiddleName10559",LastName10559 +10560,10560,"FirstName10560 MiddleName10560",LastName10560 +10561,10561,"FirstName10561 MiddleName10561",LastName10561 
+10562,10562,"FirstName10562 MiddleName10562",LastName10562 +10563,10563,"FirstName10563 MiddleName10563",LastName10563 +10564,10564,"FirstName10564 MiddleName10564",LastName10564 +10565,10565,"FirstName10565 MiddleName10565",LastName10565 +10566,10566,"FirstName10566 MiddleName10566",LastName10566 +10567,10567,"FirstName10567 MiddleName10567",LastName10567 +10568,10568,"FirstName10568 MiddleName10568",LastName10568 +10569,10569,"FirstName10569 MiddleName10569",LastName10569 +10570,10570,"FirstName10570 MiddleName10570",LastName10570 +10571,10571,"FirstName10571 MiddleName10571",LastName10571 +10572,10572,"FirstName10572 MiddleName10572",LastName10572 +10573,10573,"FirstName10573 MiddleName10573",LastName10573 +10574,10574,"FirstName10574 MiddleName10574",LastName10574 +10575,10575,"FirstName10575 MiddleName10575",LastName10575 +10576,10576,"FirstName10576 MiddleName10576",LastName10576 +10577,10577,"FirstName10577 MiddleName10577",LastName10577 +10578,10578,"FirstName10578 MiddleName10578",LastName10578 +10579,10579,"FirstName10579 MiddleName10579",LastName10579 +10580,10580,"FirstName10580 MiddleName10580",LastName10580 +10581,10581,"FirstName10581 MiddleName10581",LastName10581 +10582,10582,"FirstName10582 MiddleName10582",LastName10582 +10583,10583,"FirstName10583 MiddleName10583",LastName10583 +10584,10584,"FirstName10584 MiddleName10584",LastName10584 +10585,10585,"FirstName10585 MiddleName10585",LastName10585 +10586,10586,"FirstName10586 MiddleName10586",LastName10586 +10587,10587,"FirstName10587 MiddleName10587",LastName10587 +10588,10588,"FirstName10588 MiddleName10588",LastName10588 +10589,10589,"FirstName10589 MiddleName10589",LastName10589 +10590,10590,"FirstName10590 MiddleName10590",LastName10590 +10591,10591,"FirstName10591 MiddleName10591",LastName10591 +10592,10592,"FirstName10592 MiddleName10592",LastName10592 +10593,10593,"FirstName10593 MiddleName10593",LastName10593 +10594,10594,"FirstName10594 MiddleName10594",LastName10594 
+10595,10595,"FirstName10595 MiddleName10595",LastName10595 +10596,10596,"FirstName10596 MiddleName10596",LastName10596 +10597,10597,"FirstName10597 MiddleName10597",LastName10597 +10598,10598,"FirstName10598 MiddleName10598",LastName10598 +10599,10599,"FirstName10599 MiddleName10599",LastName10599 +10600,10600,"FirstName10600 MiddleName10600",LastName10600 +10601,10601,"FirstName10601 MiddleName10601",LastName10601 +10602,10602,"FirstName10602 MiddleName10602",LastName10602 +10603,10603,"FirstName10603 MiddleName10603",LastName10603 +10604,10604,"FirstName10604 MiddleName10604",LastName10604 +10605,10605,"FirstName10605 MiddleName10605",LastName10605 +10606,10606,"FirstName10606 MiddleName10606",LastName10606 +10607,10607,"FirstName10607 MiddleName10607",LastName10607 +10608,10608,"FirstName10608 MiddleName10608",LastName10608 +10609,10609,"FirstName10609 MiddleName10609",LastName10609 +10610,10610,"FirstName10610 MiddleName10610",LastName10610 +10611,10611,"FirstName10611 MiddleName10611",LastName10611 +10612,10612,"FirstName10612 MiddleName10612",LastName10612 +10613,10613,"FirstName10613 MiddleName10613",LastName10613 +10614,10614,"FirstName10614 MiddleName10614",LastName10614 +10615,10615,"FirstName10615 MiddleName10615",LastName10615 +10616,10616,"FirstName10616 MiddleName10616",LastName10616 +10617,10617,"FirstName10617 MiddleName10617",LastName10617 +10618,10618,"FirstName10618 MiddleName10618",LastName10618 +10619,10619,"FirstName10619 MiddleName10619",LastName10619 +10620,10620,"FirstName10620 MiddleName10620",LastName10620 +10621,10621,"FirstName10621 MiddleName10621",LastName10621 +10622,10622,"FirstName10622 MiddleName10622",LastName10622 +10623,10623,"FirstName10623 MiddleName10623",LastName10623 +10624,10624,"FirstName10624 MiddleName10624",LastName10624 +10625,10625,"FirstName10625 MiddleName10625",LastName10625 +10626,10626,"FirstName10626 MiddleName10626",LastName10626 +10627,10627,"FirstName10627 MiddleName10627",LastName10627 
+10628,10628,"FirstName10628 MiddleName10628",LastName10628 +10629,10629,"FirstName10629 MiddleName10629",LastName10629 +10630,10630,"FirstName10630 MiddleName10630",LastName10630 +10631,10631,"FirstName10631 MiddleName10631",LastName10631 +10632,10632,"FirstName10632 MiddleName10632",LastName10632 +10633,10633,"FirstName10633 MiddleName10633",LastName10633 +10634,10634,"FirstName10634 MiddleName10634",LastName10634 +10635,10635,"FirstName10635 MiddleName10635",LastName10635 +10636,10636,"FirstName10636 MiddleName10636",LastName10636 +10637,10637,"FirstName10637 MiddleName10637",LastName10637 +10638,10638,"FirstName10638 MiddleName10638",LastName10638 +10639,10639,"FirstName10639 MiddleName10639",LastName10639 +10640,10640,"FirstName10640 MiddleName10640",LastName10640 +10641,10641,"FirstName10641 MiddleName10641",LastName10641 +10642,10642,"FirstName10642 MiddleName10642",LastName10642 +10643,10643,"FirstName10643 MiddleName10643",LastName10643 +10644,10644,"FirstName10644 MiddleName10644",LastName10644 +10645,10645,"FirstName10645 MiddleName10645",LastName10645 +10646,10646,"FirstName10646 MiddleName10646",LastName10646 +10647,10647,"FirstName10647 MiddleName10647",LastName10647 +10648,10648,"FirstName10648 MiddleName10648",LastName10648 +10649,10649,"FirstName10649 MiddleName10649",LastName10649 +10650,10650,"FirstName10650 MiddleName10650",LastName10650 +10651,10651,"FirstName10651 MiddleName10651",LastName10651 +10652,10652,"FirstName10652 MiddleName10652",LastName10652 +10653,10653,"FirstName10653 MiddleName10653",LastName10653 +10654,10654,"FirstName10654 MiddleName10654",LastName10654 +10655,10655,"FirstName10655 MiddleName10655",LastName10655 +10656,10656,"FirstName10656 MiddleName10656",LastName10656 +10657,10657,"FirstName10657 MiddleName10657",LastName10657 +10658,10658,"FirstName10658 MiddleName10658",LastName10658 +10659,10659,"FirstName10659 MiddleName10659",LastName10659 +10660,10660,"FirstName10660 MiddleName10660",LastName10660 
+10661,10661,"FirstName10661 MiddleName10661",LastName10661 +10662,10662,"FirstName10662 MiddleName10662",LastName10662 +10663,10663,"FirstName10663 MiddleName10663",LastName10663 +10664,10664,"FirstName10664 MiddleName10664",LastName10664 +10665,10665,"FirstName10665 MiddleName10665",LastName10665 +10666,10666,"FirstName10666 MiddleName10666",LastName10666 +10667,10667,"FirstName10667 MiddleName10667",LastName10667 +10668,10668,"FirstName10668 MiddleName10668",LastName10668 +10669,10669,"FirstName10669 MiddleName10669",LastName10669 +10670,10670,"FirstName10670 MiddleName10670",LastName10670 +10671,10671,"FirstName10671 MiddleName10671",LastName10671 +10672,10672,"FirstName10672 MiddleName10672",LastName10672 +10673,10673,"FirstName10673 MiddleName10673",LastName10673 +10674,10674,"FirstName10674 MiddleName10674",LastName10674 +10675,10675,"FirstName10675 MiddleName10675",LastName10675 +10676,10676,"FirstName10676 MiddleName10676",LastName10676 +10677,10677,"FirstName10677 MiddleName10677",LastName10677 +10678,10678,"FirstName10678 MiddleName10678",LastName10678 +10679,10679,"FirstName10679 MiddleName10679",LastName10679 +10680,10680,"FirstName10680 MiddleName10680",LastName10680 +10681,10681,"FirstName10681 MiddleName10681",LastName10681 +10682,10682,"FirstName10682 MiddleName10682",LastName10682 +10683,10683,"FirstName10683 MiddleName10683",LastName10683 +10684,10684,"FirstName10684 MiddleName10684",LastName10684 +10685,10685,"FirstName10685 MiddleName10685",LastName10685 +10686,10686,"FirstName10686 MiddleName10686",LastName10686 +10687,10687,"FirstName10687 MiddleName10687",LastName10687 +10688,10688,"FirstName10688 MiddleName10688",LastName10688 +10689,10689,"FirstName10689 MiddleName10689",LastName10689 +10690,10690,"FirstName10690 MiddleName10690",LastName10690 +10691,10691,"FirstName10691 MiddleName10691",LastName10691 +10692,10692,"FirstName10692 MiddleName10692",LastName10692 +10693,10693,"FirstName10693 MiddleName10693",LastName10693 
+10694,10694,"FirstName10694 MiddleName10694",LastName10694 +10695,10695,"FirstName10695 MiddleName10695",LastName10695 +10696,10696,"FirstName10696 MiddleName10696",LastName10696 +10697,10697,"FirstName10697 MiddleName10697",LastName10697 +10698,10698,"FirstName10698 MiddleName10698",LastName10698 +10699,10699,"FirstName10699 MiddleName10699",LastName10699 +10700,10700,"FirstName10700 MiddleName10700",LastName10700 +10701,10701,"FirstName10701 MiddleName10701",LastName10701 +10702,10702,"FirstName10702 MiddleName10702",LastName10702 +10703,10703,"FirstName10703 MiddleName10703",LastName10703 +10704,10704,"FirstName10704 MiddleName10704",LastName10704 +10705,10705,"FirstName10705 MiddleName10705",LastName10705 +10706,10706,"FirstName10706 MiddleName10706",LastName10706 +10707,10707,"FirstName10707 MiddleName10707",LastName10707 +10708,10708,"FirstName10708 MiddleName10708",LastName10708 +10709,10709,"FirstName10709 MiddleName10709",LastName10709 +10710,10710,"FirstName10710 MiddleName10710",LastName10710 +10711,10711,"FirstName10711 MiddleName10711",LastName10711 +10712,10712,"FirstName10712 MiddleName10712",LastName10712 +10713,10713,"FirstName10713 MiddleName10713",LastName10713 +10714,10714,"FirstName10714 MiddleName10714",LastName10714 +10715,10715,"FirstName10715 MiddleName10715",LastName10715 +10716,10716,"FirstName10716 MiddleName10716",LastName10716 +10717,10717,"FirstName10717 MiddleName10717",LastName10717 +10718,10718,"FirstName10718 MiddleName10718",LastName10718 +10719,10719,"FirstName10719 MiddleName10719",LastName10719 +10720,10720,"FirstName10720 MiddleName10720",LastName10720 +10721,10721,"FirstName10721 MiddleName10721",LastName10721 +10722,10722,"FirstName10722 MiddleName10722",LastName10722 +10723,10723,"FirstName10723 MiddleName10723",LastName10723 +10724,10724,"FirstName10724 MiddleName10724",LastName10724 +10725,10725,"FirstName10725 MiddleName10725",LastName10725 +10726,10726,"FirstName10726 MiddleName10726",LastName10726 
+10727,10727,"FirstName10727 MiddleName10727",LastName10727 +10728,10728,"FirstName10728 MiddleName10728",LastName10728 +10729,10729,"FirstName10729 MiddleName10729",LastName10729 +10730,10730,"FirstName10730 MiddleName10730",LastName10730 +10731,10731,"FirstName10731 MiddleName10731",LastName10731 +10732,10732,"FirstName10732 MiddleName10732",LastName10732 +10733,10733,"FirstName10733 MiddleName10733",LastName10733 +10734,10734,"FirstName10734 MiddleName10734",LastName10734 +10735,10735,"FirstName10735 MiddleName10735",LastName10735 +10736,10736,"FirstName10736 MiddleName10736",LastName10736 +10737,10737,"FirstName10737 MiddleName10737",LastName10737 +10738,10738,"FirstName10738 MiddleName10738",LastName10738 +10739,10739,"FirstName10739 MiddleName10739",LastName10739 +10740,10740,"FirstName10740 MiddleName10740",LastName10740 +10741,10741,"FirstName10741 MiddleName10741",LastName10741 +10742,10742,"FirstName10742 MiddleName10742",LastName10742 +10743,10743,"FirstName10743 MiddleName10743",LastName10743 +10744,10744,"FirstName10744 MiddleName10744",LastName10744 +10745,10745,"FirstName10745 MiddleName10745",LastName10745 +10746,10746,"FirstName10746 MiddleName10746",LastName10746 +10747,10747,"FirstName10747 MiddleName10747",LastName10747 +10748,10748,"FirstName10748 MiddleName10748",LastName10748 +10749,10749,"FirstName10749 MiddleName10749",LastName10749 +10750,10750,"FirstName10750 MiddleName10750",LastName10750 +10751,10751,"FirstName10751 MiddleName10751",LastName10751 +10752,10752,"FirstName10752 MiddleName10752",LastName10752 +10753,10753,"FirstName10753 MiddleName10753",LastName10753 +10754,10754,"FirstName10754 MiddleName10754",LastName10754 +10755,10755,"FirstName10755 MiddleName10755",LastName10755 +10756,10756,"FirstName10756 MiddleName10756",LastName10756 +10757,10757,"FirstName10757 MiddleName10757",LastName10757 +10758,10758,"FirstName10758 MiddleName10758",LastName10758 +10759,10759,"FirstName10759 MiddleName10759",LastName10759 
+10760,10760,"FirstName10760 MiddleName10760",LastName10760 +10761,10761,"FirstName10761 MiddleName10761",LastName10761 +10762,10762,"FirstName10762 MiddleName10762",LastName10762 +10763,10763,"FirstName10763 MiddleName10763",LastName10763 +10764,10764,"FirstName10764 MiddleName10764",LastName10764 +10765,10765,"FirstName10765 MiddleName10765",LastName10765 +10766,10766,"FirstName10766 MiddleName10766",LastName10766 +10767,10767,"FirstName10767 MiddleName10767",LastName10767 +10768,10768,"FirstName10768 MiddleName10768",LastName10768 +10769,10769,"FirstName10769 MiddleName10769",LastName10769 +10770,10770,"FirstName10770 MiddleName10770",LastName10770 +10771,10771,"FirstName10771 MiddleName10771",LastName10771 +10772,10772,"FirstName10772 MiddleName10772",LastName10772 +10773,10773,"FirstName10773 MiddleName10773",LastName10773 +10774,10774,"FirstName10774 MiddleName10774",LastName10774 +10775,10775,"FirstName10775 MiddleName10775",LastName10775 +10776,10776,"FirstName10776 MiddleName10776",LastName10776 +10777,10777,"FirstName10777 MiddleName10777",LastName10777 +10778,10778,"FirstName10778 MiddleName10778",LastName10778 +10779,10779,"FirstName10779 MiddleName10779",LastName10779 +10780,10780,"FirstName10780 MiddleName10780",LastName10780 +10781,10781,"FirstName10781 MiddleName10781",LastName10781 +10782,10782,"FirstName10782 MiddleName10782",LastName10782 +10783,10783,"FirstName10783 MiddleName10783",LastName10783 +10784,10784,"FirstName10784 MiddleName10784",LastName10784 +10785,10785,"FirstName10785 MiddleName10785",LastName10785 +10786,10786,"FirstName10786 MiddleName10786",LastName10786 +10787,10787,"FirstName10787 MiddleName10787",LastName10787 +10788,10788,"FirstName10788 MiddleName10788",LastName10788 +10789,10789,"FirstName10789 MiddleName10789",LastName10789 +10790,10790,"FirstName10790 MiddleName10790",LastName10790 +10791,10791,"FirstName10791 MiddleName10791",LastName10791 +10792,10792,"FirstName10792 MiddleName10792",LastName10792 
+10793,10793,"FirstName10793 MiddleName10793",LastName10793 +10794,10794,"FirstName10794 MiddleName10794",LastName10794 +10795,10795,"FirstName10795 MiddleName10795",LastName10795 +10796,10796,"FirstName10796 MiddleName10796",LastName10796 +10797,10797,"FirstName10797 MiddleName10797",LastName10797 +10798,10798,"FirstName10798 MiddleName10798",LastName10798 +10799,10799,"FirstName10799 MiddleName10799",LastName10799 +10800,10800,"FirstName10800 MiddleName10800",LastName10800 +10801,10801,"FirstName10801 MiddleName10801",LastName10801 +10802,10802,"FirstName10802 MiddleName10802",LastName10802 +10803,10803,"FirstName10803 MiddleName10803",LastName10803 +10804,10804,"FirstName10804 MiddleName10804",LastName10804 +10805,10805,"FirstName10805 MiddleName10805",LastName10805 +10806,10806,"FirstName10806 MiddleName10806",LastName10806 +10807,10807,"FirstName10807 MiddleName10807",LastName10807 +10808,10808,"FirstName10808 MiddleName10808",LastName10808 +10809,10809,"FirstName10809 MiddleName10809",LastName10809 +10810,10810,"FirstName10810 MiddleName10810",LastName10810 +10811,10811,"FirstName10811 MiddleName10811",LastName10811 +10812,10812,"FirstName10812 MiddleName10812",LastName10812 +10813,10813,"FirstName10813 MiddleName10813",LastName10813 +10814,10814,"FirstName10814 MiddleName10814",LastName10814 +10815,10815,"FirstName10815 MiddleName10815",LastName10815 +10816,10816,"FirstName10816 MiddleName10816",LastName10816 +10817,10817,"FirstName10817 MiddleName10817",LastName10817 +10818,10818,"FirstName10818 MiddleName10818",LastName10818 +10819,10819,"FirstName10819 MiddleName10819",LastName10819 +10820,10820,"FirstName10820 MiddleName10820",LastName10820 +10821,10821,"FirstName10821 MiddleName10821",LastName10821 +10822,10822,"FirstName10822 MiddleName10822",LastName10822 +10823,10823,"FirstName10823 MiddleName10823",LastName10823 +10824,10824,"FirstName10824 MiddleName10824",LastName10824 +10825,10825,"FirstName10825 MiddleName10825",LastName10825 
+10826,10826,"FirstName10826 MiddleName10826",LastName10826 +10827,10827,"FirstName10827 MiddleName10827",LastName10827 +10828,10828,"FirstName10828 MiddleName10828",LastName10828 +10829,10829,"FirstName10829 MiddleName10829",LastName10829 +10830,10830,"FirstName10830 MiddleName10830",LastName10830 +10831,10831,"FirstName10831 MiddleName10831",LastName10831 +10832,10832,"FirstName10832 MiddleName10832",LastName10832 +10833,10833,"FirstName10833 MiddleName10833",LastName10833 +10834,10834,"FirstName10834 MiddleName10834",LastName10834 +10835,10835,"FirstName10835 MiddleName10835",LastName10835 +10836,10836,"FirstName10836 MiddleName10836",LastName10836 +10837,10837,"FirstName10837 MiddleName10837",LastName10837 +10838,10838,"FirstName10838 MiddleName10838",LastName10838 +10839,10839,"FirstName10839 MiddleName10839",LastName10839 +10840,10840,"FirstName10840 MiddleName10840",LastName10840 +10841,10841,"FirstName10841 MiddleName10841",LastName10841 +10842,10842,"FirstName10842 MiddleName10842",LastName10842 +10843,10843,"FirstName10843 MiddleName10843",LastName10843 +10844,10844,"FirstName10844 MiddleName10844",LastName10844 +10845,10845,"FirstName10845 MiddleName10845",LastName10845 +10846,10846,"FirstName10846 MiddleName10846",LastName10846 +10847,10847,"FirstName10847 MiddleName10847",LastName10847 +10848,10848,"FirstName10848 MiddleName10848",LastName10848 +10849,10849,"FirstName10849 MiddleName10849",LastName10849 +10850,10850,"FirstName10850 MiddleName10850",LastName10850 +10851,10851,"FirstName10851 MiddleName10851",LastName10851 +10852,10852,"FirstName10852 MiddleName10852",LastName10852 +10853,10853,"FirstName10853 MiddleName10853",LastName10853 +10854,10854,"FirstName10854 MiddleName10854",LastName10854 +10855,10855,"FirstName10855 MiddleName10855",LastName10855 +10856,10856,"FirstName10856 MiddleName10856",LastName10856 +10857,10857,"FirstName10857 MiddleName10857",LastName10857 +10858,10858,"FirstName10858 MiddleName10858",LastName10858 
+10859,10859,"FirstName10859 MiddleName10859",LastName10859 +10860,10860,"FirstName10860 MiddleName10860",LastName10860 +10861,10861,"FirstName10861 MiddleName10861",LastName10861 +10862,10862,"FirstName10862 MiddleName10862",LastName10862 +10863,10863,"FirstName10863 MiddleName10863",LastName10863 +10864,10864,"FirstName10864 MiddleName10864",LastName10864 +10865,10865,"FirstName10865 MiddleName10865",LastName10865 +10866,10866,"FirstName10866 MiddleName10866",LastName10866 +10867,10867,"FirstName10867 MiddleName10867",LastName10867 +10868,10868,"FirstName10868 MiddleName10868",LastName10868 +10869,10869,"FirstName10869 MiddleName10869",LastName10869 +10870,10870,"FirstName10870 MiddleName10870",LastName10870 +10871,10871,"FirstName10871 MiddleName10871",LastName10871 +10872,10872,"FirstName10872 MiddleName10872",LastName10872 +10873,10873,"FirstName10873 MiddleName10873",LastName10873 +10874,10874,"FirstName10874 MiddleName10874",LastName10874 +10875,10875,"FirstName10875 MiddleName10875",LastName10875 +10876,10876,"FirstName10876 MiddleName10876",LastName10876 +10877,10877,"FirstName10877 MiddleName10877",LastName10877 +10878,10878,"FirstName10878 MiddleName10878",LastName10878 +10879,10879,"FirstName10879 MiddleName10879",LastName10879 +10880,10880,"FirstName10880 MiddleName10880",LastName10880 +10881,10881,"FirstName10881 MiddleName10881",LastName10881 +10882,10882,"FirstName10882 MiddleName10882",LastName10882 +10883,10883,"FirstName10883 MiddleName10883",LastName10883 +10884,10884,"FirstName10884 MiddleName10884",LastName10884 +10885,10885,"FirstName10885 MiddleName10885",LastName10885 +10886,10886,"FirstName10886 MiddleName10886",LastName10886 +10887,10887,"FirstName10887 MiddleName10887",LastName10887 +10888,10888,"FirstName10888 MiddleName10888",LastName10888 +10889,10889,"FirstName10889 MiddleName10889",LastName10889 +10890,10890,"FirstName10890 MiddleName10890",LastName10890 +10891,10891,"FirstName10891 MiddleName10891",LastName10891 
+10892,10892,"FirstName10892 MiddleName10892",LastName10892 +10893,10893,"FirstName10893 MiddleName10893",LastName10893 +10894,10894,"FirstName10894 MiddleName10894",LastName10894 +10895,10895,"FirstName10895 MiddleName10895",LastName10895 +10896,10896,"FirstName10896 MiddleName10896",LastName10896 +10897,10897,"FirstName10897 MiddleName10897",LastName10897 +10898,10898,"FirstName10898 MiddleName10898",LastName10898 +10899,10899,"FirstName10899 MiddleName10899",LastName10899 +10900,10900,"FirstName10900 MiddleName10900",LastName10900 +10901,10901,"FirstName10901 MiddleName10901",LastName10901 +10902,10902,"FirstName10902 MiddleName10902",LastName10902 +10903,10903,"FirstName10903 MiddleName10903",LastName10903 +10904,10904,"FirstName10904 MiddleName10904",LastName10904 +10905,10905,"FirstName10905 MiddleName10905",LastName10905 +10906,10906,"FirstName10906 MiddleName10906",LastName10906 +10907,10907,"FirstName10907 MiddleName10907",LastName10907 +10908,10908,"FirstName10908 MiddleName10908",LastName10908 +10909,10909,"FirstName10909 MiddleName10909",LastName10909 +10910,10910,"FirstName10910 MiddleName10910",LastName10910 +10911,10911,"FirstName10911 MiddleName10911",LastName10911 +10912,10912,"FirstName10912 MiddleName10912",LastName10912 +10913,10913,"FirstName10913 MiddleName10913",LastName10913 +10914,10914,"FirstName10914 MiddleName10914",LastName10914 +10915,10915,"FirstName10915 MiddleName10915",LastName10915 +10916,10916,"FirstName10916 MiddleName10916",LastName10916 +10917,10917,"FirstName10917 MiddleName10917",LastName10917 +10918,10918,"FirstName10918 MiddleName10918",LastName10918 +10919,10919,"FirstName10919 MiddleName10919",LastName10919 +10920,10920,"FirstName10920 MiddleName10920",LastName10920 +10921,10921,"FirstName10921 MiddleName10921",LastName10921 +10922,10922,"FirstName10922 MiddleName10922",LastName10922 +10923,10923,"FirstName10923 MiddleName10923",LastName10923 +10924,10924,"FirstName10924 MiddleName10924",LastName10924 
+10925,10925,"FirstName10925 MiddleName10925",LastName10925 +10926,10926,"FirstName10926 MiddleName10926",LastName10926 +10927,10927,"FirstName10927 MiddleName10927",LastName10927 +10928,10928,"FirstName10928 MiddleName10928",LastName10928 +10929,10929,"FirstName10929 MiddleName10929",LastName10929 +10930,10930,"FirstName10930 MiddleName10930",LastName10930 +10931,10931,"FirstName10931 MiddleName10931",LastName10931 +10932,10932,"FirstName10932 MiddleName10932",LastName10932 +10933,10933,"FirstName10933 MiddleName10933",LastName10933 +10934,10934,"FirstName10934 MiddleName10934",LastName10934 +10935,10935,"FirstName10935 MiddleName10935",LastName10935 +10936,10936,"FirstName10936 MiddleName10936",LastName10936 +10937,10937,"FirstName10937 MiddleName10937",LastName10937 +10938,10938,"FirstName10938 MiddleName10938",LastName10938 +10939,10939,"FirstName10939 MiddleName10939",LastName10939 +10940,10940,"FirstName10940 MiddleName10940",LastName10940 +10941,10941,"FirstName10941 MiddleName10941",LastName10941 +10942,10942,"FirstName10942 MiddleName10942",LastName10942 +10943,10943,"FirstName10943 MiddleName10943",LastName10943 +10944,10944,"FirstName10944 MiddleName10944",LastName10944 +10945,10945,"FirstName10945 MiddleName10945",LastName10945 +10946,10946,"FirstName10946 MiddleName10946",LastName10946 +10947,10947,"FirstName10947 MiddleName10947",LastName10947 +10948,10948,"FirstName10948 MiddleName10948",LastName10948 +10949,10949,"FirstName10949 MiddleName10949",LastName10949 +10950,10950,"FirstName10950 MiddleName10950",LastName10950 +10951,10951,"FirstName10951 MiddleName10951",LastName10951 +10952,10952,"FirstName10952 MiddleName10952",LastName10952 +10953,10953,"FirstName10953 MiddleName10953",LastName10953 +10954,10954,"FirstName10954 MiddleName10954",LastName10954 +10955,10955,"FirstName10955 MiddleName10955",LastName10955 +10956,10956,"FirstName10956 MiddleName10956",LastName10956 +10957,10957,"FirstName10957 MiddleName10957",LastName10957 
+10958,10958,"FirstName10958 MiddleName10958",LastName10958 +10959,10959,"FirstName10959 MiddleName10959",LastName10959 +10960,10960,"FirstName10960 MiddleName10960",LastName10960 +10961,10961,"FirstName10961 MiddleName10961",LastName10961 +10962,10962,"FirstName10962 MiddleName10962",LastName10962 +10963,10963,"FirstName10963 MiddleName10963",LastName10963 +10964,10964,"FirstName10964 MiddleName10964",LastName10964 +10965,10965,"FirstName10965 MiddleName10965",LastName10965 +10966,10966,"FirstName10966 MiddleName10966",LastName10966 +10967,10967,"FirstName10967 MiddleName10967",LastName10967 +10968,10968,"FirstName10968 MiddleName10968",LastName10968 +10969,10969,"FirstName10969 MiddleName10969",LastName10969 +10970,10970,"FirstName10970 MiddleName10970",LastName10970 +10971,10971,"FirstName10971 MiddleName10971",LastName10971 +10972,10972,"FirstName10972 MiddleName10972",LastName10972 +10973,10973,"FirstName10973 MiddleName10973",LastName10973 +10974,10974,"FirstName10974 MiddleName10974",LastName10974 +10975,10975,"FirstName10975 MiddleName10975",LastName10975 +10976,10976,"FirstName10976 MiddleName10976",LastName10976 +10977,10977,"FirstName10977 MiddleName10977",LastName10977 +10978,10978,"FirstName10978 MiddleName10978",LastName10978 +10979,10979,"FirstName10979 MiddleName10979",LastName10979 +10980,10980,"FirstName10980 MiddleName10980",LastName10980 +10981,10981,"FirstName10981 MiddleName10981",LastName10981 +10982,10982,"FirstName10982 MiddleName10982",LastName10982 +10983,10983,"FirstName10983 MiddleName10983",LastName10983 +10984,10984,"FirstName10984 MiddleName10984",LastName10984 +10985,10985,"FirstName10985 MiddleName10985",LastName10985 +10986,10986,"FirstName10986 MiddleName10986",LastName10986 +10987,10987,"FirstName10987 MiddleName10987",LastName10987 +10988,10988,"FirstName10988 MiddleName10988",LastName10988 +10989,10989,"FirstName10989 MiddleName10989",LastName10989 +10990,10990,"FirstName10990 MiddleName10990",LastName10990 
+10991,10991,"FirstName10991 MiddleName10991",LastName10991 +10992,10992,"FirstName10992 MiddleName10992",LastName10992 +10993,10993,"FirstName10993 MiddleName10993",LastName10993 +10994,10994,"FirstName10994 MiddleName10994",LastName10994 +10995,10995,"FirstName10995 MiddleName10995",LastName10995 +10996,10996,"FirstName10996 MiddleName10996",LastName10996 +10997,10997,"FirstName10997 MiddleName10997",LastName10997 +10998,10998,"FirstName10998 MiddleName10998",LastName10998 +10999,10999,"FirstName10999 MiddleName10999",LastName10999 +11000,11000,"FirstName11000 MiddleName11000",LastName11000 +11001,11001,"FirstName11001 MiddleName11001",LastName11001 +11002,11002,"FirstName11002 MiddleName11002",LastName11002 +11003,11003,"FirstName11003 MiddleName11003",LastName11003 +11004,11004,"FirstName11004 MiddleName11004",LastName11004 +11005,11005,"FirstName11005 MiddleName11005",LastName11005 +11006,11006,"FirstName11006 MiddleName11006",LastName11006 +11007,11007,"FirstName11007 MiddleName11007",LastName11007 +11008,11008,"FirstName11008 MiddleName11008",LastName11008 +11009,11009,"FirstName11009 MiddleName11009",LastName11009 +11010,11010,"FirstName11010 MiddleName11010",LastName11010 +11011,11011,"FirstName11011 MiddleName11011",LastName11011 +11012,11012,"FirstName11012 MiddleName11012",LastName11012 +11013,11013,"FirstName11013 MiddleName11013",LastName11013 +11014,11014,"FirstName11014 MiddleName11014",LastName11014 +11015,11015,"FirstName11015 MiddleName11015",LastName11015 +11016,11016,"FirstName11016 MiddleName11016",LastName11016 +11017,11017,"FirstName11017 MiddleName11017",LastName11017 +11018,11018,"FirstName11018 MiddleName11018",LastName11018 +11019,11019,"FirstName11019 MiddleName11019",LastName11019 +11020,11020,"FirstName11020 MiddleName11020",LastName11020 +11021,11021,"FirstName11021 MiddleName11021",LastName11021 +11022,11022,"FirstName11022 MiddleName11022",LastName11022 +11023,11023,"FirstName11023 MiddleName11023",LastName11023 
+11024,11024,"FirstName11024 MiddleName11024",LastName11024 +11025,11025,"FirstName11025 MiddleName11025",LastName11025 +11026,11026,"FirstName11026 MiddleName11026",LastName11026 +11027,11027,"FirstName11027 MiddleName11027",LastName11027 +11028,11028,"FirstName11028 MiddleName11028",LastName11028 +11029,11029,"FirstName11029 MiddleName11029",LastName11029 +11030,11030,"FirstName11030 MiddleName11030",LastName11030 +11031,11031,"FirstName11031 MiddleName11031",LastName11031 +11032,11032,"FirstName11032 MiddleName11032",LastName11032 +11033,11033,"FirstName11033 MiddleName11033",LastName11033 +11034,11034,"FirstName11034 MiddleName11034",LastName11034 +11035,11035,"FirstName11035 MiddleName11035",LastName11035 +11036,11036,"FirstName11036 MiddleName11036",LastName11036 +11037,11037,"FirstName11037 MiddleName11037",LastName11037 +11038,11038,"FirstName11038 MiddleName11038",LastName11038 +11039,11039,"FirstName11039 MiddleName11039",LastName11039 +11040,11040,"FirstName11040 MiddleName11040",LastName11040 +11041,11041,"FirstName11041 MiddleName11041",LastName11041 +11042,11042,"FirstName11042 MiddleName11042",LastName11042 +11043,11043,"FirstName11043 MiddleName11043",LastName11043 +11044,11044,"FirstName11044 MiddleName11044",LastName11044 +11045,11045,"FirstName11045 MiddleName11045",LastName11045 +11046,11046,"FirstName11046 MiddleName11046",LastName11046 +11047,11047,"FirstName11047 MiddleName11047",LastName11047 +11048,11048,"FirstName11048 MiddleName11048",LastName11048 +11049,11049,"FirstName11049 MiddleName11049",LastName11049 +11050,11050,"FirstName11050 MiddleName11050",LastName11050 +11051,11051,"FirstName11051 MiddleName11051",LastName11051 +11052,11052,"FirstName11052 MiddleName11052",LastName11052 +11053,11053,"FirstName11053 MiddleName11053",LastName11053 +11054,11054,"FirstName11054 MiddleName11054",LastName11054 +11055,11055,"FirstName11055 MiddleName11055",LastName11055 +11056,11056,"FirstName11056 MiddleName11056",LastName11056 
+11057,11057,"FirstName11057 MiddleName11057",LastName11057 +11058,11058,"FirstName11058 MiddleName11058",LastName11058 +11059,11059,"FirstName11059 MiddleName11059",LastName11059 +11060,11060,"FirstName11060 MiddleName11060",LastName11060 +11061,11061,"FirstName11061 MiddleName11061",LastName11061 +11062,11062,"FirstName11062 MiddleName11062",LastName11062 +11063,11063,"FirstName11063 MiddleName11063",LastName11063 +11064,11064,"FirstName11064 MiddleName11064",LastName11064 +11065,11065,"FirstName11065 MiddleName11065",LastName11065 +11066,11066,"FirstName11066 MiddleName11066",LastName11066 +11067,11067,"FirstName11067 MiddleName11067",LastName11067 +11068,11068,"FirstName11068 MiddleName11068",LastName11068 +11069,11069,"FirstName11069 MiddleName11069",LastName11069 +11070,11070,"FirstName11070 MiddleName11070",LastName11070 +11071,11071,"FirstName11071 MiddleName11071",LastName11071 +11072,11072,"FirstName11072 MiddleName11072",LastName11072 +11073,11073,"FirstName11073 MiddleName11073",LastName11073 +11074,11074,"FirstName11074 MiddleName11074",LastName11074 +11075,11075,"FirstName11075 MiddleName11075",LastName11075 +11076,11076,"FirstName11076 MiddleName11076",LastName11076 +11077,11077,"FirstName11077 MiddleName11077",LastName11077 +11078,11078,"FirstName11078 MiddleName11078",LastName11078 +11079,11079,"FirstName11079 MiddleName11079",LastName11079 +11080,11080,"FirstName11080 MiddleName11080",LastName11080 +11081,11081,"FirstName11081 MiddleName11081",LastName11081 +11082,11082,"FirstName11082 MiddleName11082",LastName11082 +11083,11083,"FirstName11083 MiddleName11083",LastName11083 +11084,11084,"FirstName11084 MiddleName11084",LastName11084 +11085,11085,"FirstName11085 MiddleName11085",LastName11085 +11086,11086,"FirstName11086 MiddleName11086",LastName11086 +11087,11087,"FirstName11087 MiddleName11087",LastName11087 +11088,11088,"FirstName11088 MiddleName11088",LastName11088 +11089,11089,"FirstName11089 MiddleName11089",LastName11089 
+11090,11090,"FirstName11090 MiddleName11090",LastName11090 +11091,11091,"FirstName11091 MiddleName11091",LastName11091 +11092,11092,"FirstName11092 MiddleName11092",LastName11092 +11093,11093,"FirstName11093 MiddleName11093",LastName11093 +11094,11094,"FirstName11094 MiddleName11094",LastName11094 +11095,11095,"FirstName11095 MiddleName11095",LastName11095 +11096,11096,"FirstName11096 MiddleName11096",LastName11096 +11097,11097,"FirstName11097 MiddleName11097",LastName11097 +11098,11098,"FirstName11098 MiddleName11098",LastName11098 +11099,11099,"FirstName11099 MiddleName11099",LastName11099 +11100,11100,"FirstName11100 MiddleName11100",LastName11100 +11101,11101,"FirstName11101 MiddleName11101",LastName11101 +11102,11102,"FirstName11102 MiddleName11102",LastName11102 +11103,11103,"FirstName11103 MiddleName11103",LastName11103 +11104,11104,"FirstName11104 MiddleName11104",LastName11104 +11105,11105,"FirstName11105 MiddleName11105",LastName11105 +11106,11106,"FirstName11106 MiddleName11106",LastName11106 +11107,11107,"FirstName11107 MiddleName11107",LastName11107 +11108,11108,"FirstName11108 MiddleName11108",LastName11108 +11109,11109,"FirstName11109 MiddleName11109",LastName11109 +11110,11110,"FirstName11110 MiddleName11110",LastName11110 +11111,11111,"FirstName11111 MiddleName11111",LastName11111 +11112,11112,"FirstName11112 MiddleName11112",LastName11112 +11113,11113,"FirstName11113 MiddleName11113",LastName11113 +11114,11114,"FirstName11114 MiddleName11114",LastName11114 +11115,11115,"FirstName11115 MiddleName11115",LastName11115 +11116,11116,"FirstName11116 MiddleName11116",LastName11116 +11117,11117,"FirstName11117 MiddleName11117",LastName11117 +11118,11118,"FirstName11118 MiddleName11118",LastName11118 +11119,11119,"FirstName11119 MiddleName11119",LastName11119 +11120,11120,"FirstName11120 MiddleName11120",LastName11120 +11121,11121,"FirstName11121 MiddleName11121",LastName11121 +11122,11122,"FirstName11122 MiddleName11122",LastName11122 
+11123,11123,"FirstName11123 MiddleName11123",LastName11123 +11124,11124,"FirstName11124 MiddleName11124",LastName11124 +11125,11125,"FirstName11125 MiddleName11125",LastName11125 +11126,11126,"FirstName11126 MiddleName11126",LastName11126 +11127,11127,"FirstName11127 MiddleName11127",LastName11127 +11128,11128,"FirstName11128 MiddleName11128",LastName11128 +11129,11129,"FirstName11129 MiddleName11129",LastName11129 +11130,11130,"FirstName11130 MiddleName11130",LastName11130 +11131,11131,"FirstName11131 MiddleName11131",LastName11131 +11132,11132,"FirstName11132 MiddleName11132",LastName11132 +11133,11133,"FirstName11133 MiddleName11133",LastName11133 +11134,11134,"FirstName11134 MiddleName11134",LastName11134 +11135,11135,"FirstName11135 MiddleName11135",LastName11135 +11136,11136,"FirstName11136 MiddleName11136",LastName11136 +11137,11137,"FirstName11137 MiddleName11137",LastName11137 +11138,11138,"FirstName11138 MiddleName11138",LastName11138 +11139,11139,"FirstName11139 MiddleName11139",LastName11139 +11140,11140,"FirstName11140 MiddleName11140",LastName11140 +11141,11141,"FirstName11141 MiddleName11141",LastName11141 +11142,11142,"FirstName11142 MiddleName11142",LastName11142 +11143,11143,"FirstName11143 MiddleName11143",LastName11143 +11144,11144,"FirstName11144 MiddleName11144",LastName11144 +11145,11145,"FirstName11145 MiddleName11145",LastName11145 +11146,11146,"FirstName11146 MiddleName11146",LastName11146 +11147,11147,"FirstName11147 MiddleName11147",LastName11147 +11148,11148,"FirstName11148 MiddleName11148",LastName11148 +11149,11149,"FirstName11149 MiddleName11149",LastName11149 +11150,11150,"FirstName11150 MiddleName11150",LastName11150 +11151,11151,"FirstName11151 MiddleName11151",LastName11151 +11152,11152,"FirstName11152 MiddleName11152",LastName11152 +11153,11153,"FirstName11153 MiddleName11153",LastName11153 +11154,11154,"FirstName11154 MiddleName11154",LastName11154 +11155,11155,"FirstName11155 MiddleName11155",LastName11155 
+11156,11156,"FirstName11156 MiddleName11156",LastName11156 +11157,11157,"FirstName11157 MiddleName11157",LastName11157 +11158,11158,"FirstName11158 MiddleName11158",LastName11158 +11159,11159,"FirstName11159 MiddleName11159",LastName11159 +11160,11160,"FirstName11160 MiddleName11160",LastName11160 +11161,11161,"FirstName11161 MiddleName11161",LastName11161 +11162,11162,"FirstName11162 MiddleName11162",LastName11162 +11163,11163,"FirstName11163 MiddleName11163",LastName11163 +11164,11164,"FirstName11164 MiddleName11164",LastName11164 +11165,11165,"FirstName11165 MiddleName11165",LastName11165 +11166,11166,"FirstName11166 MiddleName11166",LastName11166 +11167,11167,"FirstName11167 MiddleName11167",LastName11167 +11168,11168,"FirstName11168 MiddleName11168",LastName11168 +11169,11169,"FirstName11169 MiddleName11169",LastName11169 +11170,11170,"FirstName11170 MiddleName11170",LastName11170 +11171,11171,"FirstName11171 MiddleName11171",LastName11171 +11172,11172,"FirstName11172 MiddleName11172",LastName11172 +11173,11173,"FirstName11173 MiddleName11173",LastName11173 +11174,11174,"FirstName11174 MiddleName11174",LastName11174 +11175,11175,"FirstName11175 MiddleName11175",LastName11175 +11176,11176,"FirstName11176 MiddleName11176",LastName11176 +11177,11177,"FirstName11177 MiddleName11177",LastName11177 +11178,11178,"FirstName11178 MiddleName11178",LastName11178 +11179,11179,"FirstName11179 MiddleName11179",LastName11179 +11180,11180,"FirstName11180 MiddleName11180",LastName11180 +11181,11181,"FirstName11181 MiddleName11181",LastName11181 +11182,11182,"FirstName11182 MiddleName11182",LastName11182 +11183,11183,"FirstName11183 MiddleName11183",LastName11183 +11184,11184,"FirstName11184 MiddleName11184",LastName11184 +11185,11185,"FirstName11185 MiddleName11185",LastName11185 +11186,11186,"FirstName11186 MiddleName11186",LastName11186 +11187,11187,"FirstName11187 MiddleName11187",LastName11187 +11188,11188,"FirstName11188 MiddleName11188",LastName11188 
+11189,11189,"FirstName11189 MiddleName11189",LastName11189 +11190,11190,"FirstName11190 MiddleName11190",LastName11190 +11191,11191,"FirstName11191 MiddleName11191",LastName11191 +11192,11192,"FirstName11192 MiddleName11192",LastName11192 +11193,11193,"FirstName11193 MiddleName11193",LastName11193 +11194,11194,"FirstName11194 MiddleName11194",LastName11194 +11195,11195,"FirstName11195 MiddleName11195",LastName11195 +11196,11196,"FirstName11196 MiddleName11196",LastName11196 +11197,11197,"FirstName11197 MiddleName11197",LastName11197 +11198,11198,"FirstName11198 MiddleName11198",LastName11198 +11199,11199,"FirstName11199 MiddleName11199",LastName11199 +11200,11200,"FirstName11200 MiddleName11200",LastName11200 +11201,11201,"FirstName11201 MiddleName11201",LastName11201 +11202,11202,"FirstName11202 MiddleName11202",LastName11202 +11203,11203,"FirstName11203 MiddleName11203",LastName11203 +11204,11204,"FirstName11204 MiddleName11204",LastName11204 +11205,11205,"FirstName11205 MiddleName11205",LastName11205 +11206,11206,"FirstName11206 MiddleName11206",LastName11206 +11207,11207,"FirstName11207 MiddleName11207",LastName11207 +11208,11208,"FirstName11208 MiddleName11208",LastName11208 +11209,11209,"FirstName11209 MiddleName11209",LastName11209 +11210,11210,"FirstName11210 MiddleName11210",LastName11210 +11211,11211,"FirstName11211 MiddleName11211",LastName11211 +11212,11212,"FirstName11212 MiddleName11212",LastName11212 +11213,11213,"FirstName11213 MiddleName11213",LastName11213 +11214,11214,"FirstName11214 MiddleName11214",LastName11214 +11215,11215,"FirstName11215 MiddleName11215",LastName11215 +11216,11216,"FirstName11216 MiddleName11216",LastName11216 +11217,11217,"FirstName11217 MiddleName11217",LastName11217 +11218,11218,"FirstName11218 MiddleName11218",LastName11218 +11219,11219,"FirstName11219 MiddleName11219",LastName11219 +11220,11220,"FirstName11220 MiddleName11220",LastName11220 +11221,11221,"FirstName11221 MiddleName11221",LastName11221 
+11222,11222,"FirstName11222 MiddleName11222",LastName11222 +11223,11223,"FirstName11223 MiddleName11223",LastName11223 +11224,11224,"FirstName11224 MiddleName11224",LastName11224 +11225,11225,"FirstName11225 MiddleName11225",LastName11225 +11226,11226,"FirstName11226 MiddleName11226",LastName11226 +11227,11227,"FirstName11227 MiddleName11227",LastName11227 +11228,11228,"FirstName11228 MiddleName11228",LastName11228 +11229,11229,"FirstName11229 MiddleName11229",LastName11229 +11230,11230,"FirstName11230 MiddleName11230",LastName11230 +11231,11231,"FirstName11231 MiddleName11231",LastName11231 +11232,11232,"FirstName11232 MiddleName11232",LastName11232 +11233,11233,"FirstName11233 MiddleName11233",LastName11233 +11234,11234,"FirstName11234 MiddleName11234",LastName11234 +11235,11235,"FirstName11235 MiddleName11235",LastName11235 +11236,11236,"FirstName11236 MiddleName11236",LastName11236 +11237,11237,"FirstName11237 MiddleName11237",LastName11237 +11238,11238,"FirstName11238 MiddleName11238",LastName11238 +11239,11239,"FirstName11239 MiddleName11239",LastName11239 +11240,11240,"FirstName11240 MiddleName11240",LastName11240 +11241,11241,"FirstName11241 MiddleName11241",LastName11241 +11242,11242,"FirstName11242 MiddleName11242",LastName11242 +11243,11243,"FirstName11243 MiddleName11243",LastName11243 +11244,11244,"FirstName11244 MiddleName11244",LastName11244 +11245,11245,"FirstName11245 MiddleName11245",LastName11245 +11246,11246,"FirstName11246 MiddleName11246",LastName11246 +11247,11247,"FirstName11247 MiddleName11247",LastName11247 +11248,11248,"FirstName11248 MiddleName11248",LastName11248 +11249,11249,"FirstName11249 MiddleName11249",LastName11249 +11250,11250,"FirstName11250 MiddleName11250",LastName11250 +11251,11251,"FirstName11251 MiddleName11251",LastName11251 +11252,11252,"FirstName11252 MiddleName11252",LastName11252 +11253,11253,"FirstName11253 MiddleName11253",LastName11253 +11254,11254,"FirstName11254 MiddleName11254",LastName11254 
+11255,11255,"FirstName11255 MiddleName11255",LastName11255 +11256,11256,"FirstName11256 MiddleName11256",LastName11256 +11257,11257,"FirstName11257 MiddleName11257",LastName11257 +11258,11258,"FirstName11258 MiddleName11258",LastName11258 +11259,11259,"FirstName11259 MiddleName11259",LastName11259 +11260,11260,"FirstName11260 MiddleName11260",LastName11260 +11261,11261,"FirstName11261 MiddleName11261",LastName11261 +11262,11262,"FirstName11262 MiddleName11262",LastName11262 +11263,11263,"FirstName11263 MiddleName11263",LastName11263 +11264,11264,"FirstName11264 MiddleName11264",LastName11264 +11265,11265,"FirstName11265 MiddleName11265",LastName11265 +11266,11266,"FirstName11266 MiddleName11266",LastName11266 +11267,11267,"FirstName11267 MiddleName11267",LastName11267 +11268,11268,"FirstName11268 MiddleName11268",LastName11268 +11269,11269,"FirstName11269 MiddleName11269",LastName11269 +11270,11270,"FirstName11270 MiddleName11270",LastName11270 +11271,11271,"FirstName11271 MiddleName11271",LastName11271 +11272,11272,"FirstName11272 MiddleName11272",LastName11272 +11273,11273,"FirstName11273 MiddleName11273",LastName11273 +11274,11274,"FirstName11274 MiddleName11274",LastName11274 +11275,11275,"FirstName11275 MiddleName11275",LastName11275 +11276,11276,"FirstName11276 MiddleName11276",LastName11276 +11277,11277,"FirstName11277 MiddleName11277",LastName11277 +11278,11278,"FirstName11278 MiddleName11278",LastName11278 +11279,11279,"FirstName11279 MiddleName11279",LastName11279 +11280,11280,"FirstName11280 MiddleName11280",LastName11280 +11281,11281,"FirstName11281 MiddleName11281",LastName11281 +11282,11282,"FirstName11282 MiddleName11282",LastName11282 +11283,11283,"FirstName11283 MiddleName11283",LastName11283 +11284,11284,"FirstName11284 MiddleName11284",LastName11284 +11285,11285,"FirstName11285 MiddleName11285",LastName11285 +11286,11286,"FirstName11286 MiddleName11286",LastName11286 +11287,11287,"FirstName11287 MiddleName11287",LastName11287 
+11288,11288,"FirstName11288 MiddleName11288",LastName11288 +11289,11289,"FirstName11289 MiddleName11289",LastName11289 +11290,11290,"FirstName11290 MiddleName11290",LastName11290 +11291,11291,"FirstName11291 MiddleName11291",LastName11291 +11292,11292,"FirstName11292 MiddleName11292",LastName11292 +11293,11293,"FirstName11293 MiddleName11293",LastName11293 +11294,11294,"FirstName11294 MiddleName11294",LastName11294 +11295,11295,"FirstName11295 MiddleName11295",LastName11295 +11296,11296,"FirstName11296 MiddleName11296",LastName11296 +11297,11297,"FirstName11297 MiddleName11297",LastName11297 +11298,11298,"FirstName11298 MiddleName11298",LastName11298 +11299,11299,"FirstName11299 MiddleName11299",LastName11299 +11300,11300,"FirstName11300 MiddleName11300",LastName11300 +11301,11301,"FirstName11301 MiddleName11301",LastName11301 +11302,11302,"FirstName11302 MiddleName11302",LastName11302 +11303,11303,"FirstName11303 MiddleName11303",LastName11303 +11304,11304,"FirstName11304 MiddleName11304",LastName11304 +11305,11305,"FirstName11305 MiddleName11305",LastName11305 +11306,11306,"FirstName11306 MiddleName11306",LastName11306 +11307,11307,"FirstName11307 MiddleName11307",LastName11307 +11308,11308,"FirstName11308 MiddleName11308",LastName11308 +11309,11309,"FirstName11309 MiddleName11309",LastName11309 +11310,11310,"FirstName11310 MiddleName11310",LastName11310 +11311,11311,"FirstName11311 MiddleName11311",LastName11311 +11312,11312,"FirstName11312 MiddleName11312",LastName11312 +11313,11313,"FirstName11313 MiddleName11313",LastName11313 +11314,11314,"FirstName11314 MiddleName11314",LastName11314 +11315,11315,"FirstName11315 MiddleName11315",LastName11315 +11316,11316,"FirstName11316 MiddleName11316",LastName11316 +11317,11317,"FirstName11317 MiddleName11317",LastName11317 +11318,11318,"FirstName11318 MiddleName11318",LastName11318 +11319,11319,"FirstName11319 MiddleName11319",LastName11319 +11320,11320,"FirstName11320 MiddleName11320",LastName11320 
+11321,11321,"FirstName11321 MiddleName11321",LastName11321 +11322,11322,"FirstName11322 MiddleName11322",LastName11322 +11323,11323,"FirstName11323 MiddleName11323",LastName11323 +11324,11324,"FirstName11324 MiddleName11324",LastName11324 +11325,11325,"FirstName11325 MiddleName11325",LastName11325 +11326,11326,"FirstName11326 MiddleName11326",LastName11326 +11327,11327,"FirstName11327 MiddleName11327",LastName11327 +11328,11328,"FirstName11328 MiddleName11328",LastName11328 +11329,11329,"FirstName11329 MiddleName11329",LastName11329 +11330,11330,"FirstName11330 MiddleName11330",LastName11330 +11331,11331,"FirstName11331 MiddleName11331",LastName11331 +11332,11332,"FirstName11332 MiddleName11332",LastName11332 +11333,11333,"FirstName11333 MiddleName11333",LastName11333 +11334,11334,"FirstName11334 MiddleName11334",LastName11334 +11335,11335,"FirstName11335 MiddleName11335",LastName11335 +11336,11336,"FirstName11336 MiddleName11336",LastName11336 +11337,11337,"FirstName11337 MiddleName11337",LastName11337 +11338,11338,"FirstName11338 MiddleName11338",LastName11338 +11339,11339,"FirstName11339 MiddleName11339",LastName11339 +11340,11340,"FirstName11340 MiddleName11340",LastName11340 +11341,11341,"FirstName11341 MiddleName11341",LastName11341 +11342,11342,"FirstName11342 MiddleName11342",LastName11342 +11343,11343,"FirstName11343 MiddleName11343",LastName11343 +11344,11344,"FirstName11344 MiddleName11344",LastName11344 +11345,11345,"FirstName11345 MiddleName11345",LastName11345 +11346,11346,"FirstName11346 MiddleName11346",LastName11346 +11347,11347,"FirstName11347 MiddleName11347",LastName11347 +11348,11348,"FirstName11348 MiddleName11348",LastName11348 +11349,11349,"FirstName11349 MiddleName11349",LastName11349 +11350,11350,"FirstName11350 MiddleName11350",LastName11350 +11351,11351,"FirstName11351 MiddleName11351",LastName11351 +11352,11352,"FirstName11352 MiddleName11352",LastName11352 +11353,11353,"FirstName11353 MiddleName11353",LastName11353 
+11354,11354,"FirstName11354 MiddleName11354",LastName11354 +11355,11355,"FirstName11355 MiddleName11355",LastName11355 +11356,11356,"FirstName11356 MiddleName11356",LastName11356 +11357,11357,"FirstName11357 MiddleName11357",LastName11357 +11358,11358,"FirstName11358 MiddleName11358",LastName11358 +11359,11359,"FirstName11359 MiddleName11359",LastName11359 +11360,11360,"FirstName11360 MiddleName11360",LastName11360 +11361,11361,"FirstName11361 MiddleName11361",LastName11361 +11362,11362,"FirstName11362 MiddleName11362",LastName11362 +11363,11363,"FirstName11363 MiddleName11363",LastName11363 +11364,11364,"FirstName11364 MiddleName11364",LastName11364 +11365,11365,"FirstName11365 MiddleName11365",LastName11365 +11366,11366,"FirstName11366 MiddleName11366",LastName11366 +11367,11367,"FirstName11367 MiddleName11367",LastName11367 +11368,11368,"FirstName11368 MiddleName11368",LastName11368 +11369,11369,"FirstName11369 MiddleName11369",LastName11369 +11370,11370,"FirstName11370 MiddleName11370",LastName11370 +11371,11371,"FirstName11371 MiddleName11371",LastName11371 +11372,11372,"FirstName11372 MiddleName11372",LastName11372 +11373,11373,"FirstName11373 MiddleName11373",LastName11373 +11374,11374,"FirstName11374 MiddleName11374",LastName11374 +11375,11375,"FirstName11375 MiddleName11375",LastName11375 +11376,11376,"FirstName11376 MiddleName11376",LastName11376 +11377,11377,"FirstName11377 MiddleName11377",LastName11377 +11378,11378,"FirstName11378 MiddleName11378",LastName11378 +11379,11379,"FirstName11379 MiddleName11379",LastName11379 +11380,11380,"FirstName11380 MiddleName11380",LastName11380 +11381,11381,"FirstName11381 MiddleName11381",LastName11381 +11382,11382,"FirstName11382 MiddleName11382",LastName11382 +11383,11383,"FirstName11383 MiddleName11383",LastName11383 +11384,11384,"FirstName11384 MiddleName11384",LastName11384 +11385,11385,"FirstName11385 MiddleName11385",LastName11385 +11386,11386,"FirstName11386 MiddleName11386",LastName11386 
+11387,11387,"FirstName11387 MiddleName11387",LastName11387 +11388,11388,"FirstName11388 MiddleName11388",LastName11388 +11389,11389,"FirstName11389 MiddleName11389",LastName11389 +11390,11390,"FirstName11390 MiddleName11390",LastName11390 +11391,11391,"FirstName11391 MiddleName11391",LastName11391 +11392,11392,"FirstName11392 MiddleName11392",LastName11392 +11393,11393,"FirstName11393 MiddleName11393",LastName11393 +11394,11394,"FirstName11394 MiddleName11394",LastName11394 +11395,11395,"FirstName11395 MiddleName11395",LastName11395 +11396,11396,"FirstName11396 MiddleName11396",LastName11396 +11397,11397,"FirstName11397 MiddleName11397",LastName11397 +11398,11398,"FirstName11398 MiddleName11398",LastName11398 +11399,11399,"FirstName11399 MiddleName11399",LastName11399 +11400,11400,"FirstName11400 MiddleName11400",LastName11400 +11401,11401,"FirstName11401 MiddleName11401",LastName11401 +11402,11402,"FirstName11402 MiddleName11402",LastName11402 +11403,11403,"FirstName11403 MiddleName11403",LastName11403 +11404,11404,"FirstName11404 MiddleName11404",LastName11404 +11405,11405,"FirstName11405 MiddleName11405",LastName11405 +11406,11406,"FirstName11406 MiddleName11406",LastName11406 +11407,11407,"FirstName11407 MiddleName11407",LastName11407 +11408,11408,"FirstName11408 MiddleName11408",LastName11408 +11409,11409,"FirstName11409 MiddleName11409",LastName11409 +11410,11410,"FirstName11410 MiddleName11410",LastName11410 +11411,11411,"FirstName11411 MiddleName11411",LastName11411 +11412,11412,"FirstName11412 MiddleName11412",LastName11412 +11413,11413,"FirstName11413 MiddleName11413",LastName11413 +11414,11414,"FirstName11414 MiddleName11414",LastName11414 +11415,11415,"FirstName11415 MiddleName11415",LastName11415 +11416,11416,"FirstName11416 MiddleName11416",LastName11416 +11417,11417,"FirstName11417 MiddleName11417",LastName11417 +11418,11418,"FirstName11418 MiddleName11418",LastName11418 +11419,11419,"FirstName11419 MiddleName11419",LastName11419 
+11420,11420,"FirstName11420 MiddleName11420",LastName11420 +11421,11421,"FirstName11421 MiddleName11421",LastName11421 +11422,11422,"FirstName11422 MiddleName11422",LastName11422 +11423,11423,"FirstName11423 MiddleName11423",LastName11423 +11424,11424,"FirstName11424 MiddleName11424",LastName11424 +11425,11425,"FirstName11425 MiddleName11425",LastName11425 +11426,11426,"FirstName11426 MiddleName11426",LastName11426 +11427,11427,"FirstName11427 MiddleName11427",LastName11427 +11428,11428,"FirstName11428 MiddleName11428",LastName11428 +11429,11429,"FirstName11429 MiddleName11429",LastName11429 +11430,11430,"FirstName11430 MiddleName11430",LastName11430 +11431,11431,"FirstName11431 MiddleName11431",LastName11431 +11432,11432,"FirstName11432 MiddleName11432",LastName11432 +11433,11433,"FirstName11433 MiddleName11433",LastName11433 +11434,11434,"FirstName11434 MiddleName11434",LastName11434 +11435,11435,"FirstName11435 MiddleName11435",LastName11435 +11436,11436,"FirstName11436 MiddleName11436",LastName11436 +11437,11437,"FirstName11437 MiddleName11437",LastName11437 +11438,11438,"FirstName11438 MiddleName11438",LastName11438 +11439,11439,"FirstName11439 MiddleName11439",LastName11439 +11440,11440,"FirstName11440 MiddleName11440",LastName11440 +11441,11441,"FirstName11441 MiddleName11441",LastName11441 +11442,11442,"FirstName11442 MiddleName11442",LastName11442 +11443,11443,"FirstName11443 MiddleName11443",LastName11443 +11444,11444,"FirstName11444 MiddleName11444",LastName11444 +11445,11445,"FirstName11445 MiddleName11445",LastName11445 +11446,11446,"FirstName11446 MiddleName11446",LastName11446 +11447,11447,"FirstName11447 MiddleName11447",LastName11447 +11448,11448,"FirstName11448 MiddleName11448",LastName11448 +11449,11449,"FirstName11449 MiddleName11449",LastName11449 +11450,11450,"FirstName11450 MiddleName11450",LastName11450 +11451,11451,"FirstName11451 MiddleName11451",LastName11451 +11452,11452,"FirstName11452 MiddleName11452",LastName11452 
+11453,11453,"FirstName11453 MiddleName11453",LastName11453 +11454,11454,"FirstName11454 MiddleName11454",LastName11454 +11455,11455,"FirstName11455 MiddleName11455",LastName11455 +11456,11456,"FirstName11456 MiddleName11456",LastName11456 +11457,11457,"FirstName11457 MiddleName11457",LastName11457 +11458,11458,"FirstName11458 MiddleName11458",LastName11458 +11459,11459,"FirstName11459 MiddleName11459",LastName11459 +11460,11460,"FirstName11460 MiddleName11460",LastName11460 +11461,11461,"FirstName11461 MiddleName11461",LastName11461 +11462,11462,"FirstName11462 MiddleName11462",LastName11462 +11463,11463,"FirstName11463 MiddleName11463",LastName11463 +11464,11464,"FirstName11464 MiddleName11464",LastName11464 +11465,11465,"FirstName11465 MiddleName11465",LastName11465 +11466,11466,"FirstName11466 MiddleName11466",LastName11466 +11467,11467,"FirstName11467 MiddleName11467",LastName11467 +11468,11468,"FirstName11468 MiddleName11468",LastName11468 +11469,11469,"FirstName11469 MiddleName11469",LastName11469 +11470,11470,"FirstName11470 MiddleName11470",LastName11470 +11471,11471,"FirstName11471 MiddleName11471",LastName11471 +11472,11472,"FirstName11472 MiddleName11472",LastName11472 +11473,11473,"FirstName11473 MiddleName11473",LastName11473 +11474,11474,"FirstName11474 MiddleName11474",LastName11474 +11475,11475,"FirstName11475 MiddleName11475",LastName11475 +11476,11476,"FirstName11476 MiddleName11476",LastName11476 +11477,11477,"FirstName11477 MiddleName11477",LastName11477 +11478,11478,"FirstName11478 MiddleName11478",LastName11478 +11479,11479,"FirstName11479 MiddleName11479",LastName11479 +11480,11480,"FirstName11480 MiddleName11480",LastName11480 +11481,11481,"FirstName11481 MiddleName11481",LastName11481 +11482,11482,"FirstName11482 MiddleName11482",LastName11482 +11483,11483,"FirstName11483 MiddleName11483",LastName11483 +11484,11484,"FirstName11484 MiddleName11484",LastName11484 +11485,11485,"FirstName11485 MiddleName11485",LastName11485 
+11486,11486,"FirstName11486 MiddleName11486",LastName11486 +11487,11487,"FirstName11487 MiddleName11487",LastName11487 +11488,11488,"FirstName11488 MiddleName11488",LastName11488 +11489,11489,"FirstName11489 MiddleName11489",LastName11489 +11490,11490,"FirstName11490 MiddleName11490",LastName11490 +11491,11491,"FirstName11491 MiddleName11491",LastName11491 +11492,11492,"FirstName11492 MiddleName11492",LastName11492 +11493,11493,"FirstName11493 MiddleName11493",LastName11493 +11494,11494,"FirstName11494 MiddleName11494",LastName11494 +11495,11495,"FirstName11495 MiddleName11495",LastName11495 +11496,11496,"FirstName11496 MiddleName11496",LastName11496 +11497,11497,"FirstName11497 MiddleName11497",LastName11497 +11498,11498,"FirstName11498 MiddleName11498",LastName11498 +11499,11499,"FirstName11499 MiddleName11499",LastName11499 +11500,11500,"FirstName11500 MiddleName11500",LastName11500 +11501,11501,"FirstName11501 MiddleName11501",LastName11501 +11502,11502,"FirstName11502 MiddleName11502",LastName11502 +11503,11503,"FirstName11503 MiddleName11503",LastName11503 +11504,11504,"FirstName11504 MiddleName11504",LastName11504 +11505,11505,"FirstName11505 MiddleName11505",LastName11505 +11506,11506,"FirstName11506 MiddleName11506",LastName11506 +11507,11507,"FirstName11507 MiddleName11507",LastName11507 +11508,11508,"FirstName11508 MiddleName11508",LastName11508 +11509,11509,"FirstName11509 MiddleName11509",LastName11509 +11510,11510,"FirstName11510 MiddleName11510",LastName11510 +11511,11511,"FirstName11511 MiddleName11511",LastName11511 +11512,11512,"FirstName11512 MiddleName11512",LastName11512 +11513,11513,"FirstName11513 MiddleName11513",LastName11513 +11514,11514,"FirstName11514 MiddleName11514",LastName11514 +11515,11515,"FirstName11515 MiddleName11515",LastName11515 +11516,11516,"FirstName11516 MiddleName11516",LastName11516 +11517,11517,"FirstName11517 MiddleName11517",LastName11517 +11518,11518,"FirstName11518 MiddleName11518",LastName11518 
+11519,11519,"FirstName11519 MiddleName11519",LastName11519 +11520,11520,"FirstName11520 MiddleName11520",LastName11520 +11521,11521,"FirstName11521 MiddleName11521",LastName11521 +11522,11522,"FirstName11522 MiddleName11522",LastName11522 +11523,11523,"FirstName11523 MiddleName11523",LastName11523 +11524,11524,"FirstName11524 MiddleName11524",LastName11524 +11525,11525,"FirstName11525 MiddleName11525",LastName11525 +11526,11526,"FirstName11526 MiddleName11526",LastName11526 +11527,11527,"FirstName11527 MiddleName11527",LastName11527 +11528,11528,"FirstName11528 MiddleName11528",LastName11528 +11529,11529,"FirstName11529 MiddleName11529",LastName11529 +11530,11530,"FirstName11530 MiddleName11530",LastName11530 +11531,11531,"FirstName11531 MiddleName11531",LastName11531 +11532,11532,"FirstName11532 MiddleName11532",LastName11532 +11533,11533,"FirstName11533 MiddleName11533",LastName11533 +11534,11534,"FirstName11534 MiddleName11534",LastName11534 +11535,11535,"FirstName11535 MiddleName11535",LastName11535 +11536,11536,"FirstName11536 MiddleName11536",LastName11536 +11537,11537,"FirstName11537 MiddleName11537",LastName11537 +11538,11538,"FirstName11538 MiddleName11538",LastName11538 +11539,11539,"FirstName11539 MiddleName11539",LastName11539 +11540,11540,"FirstName11540 MiddleName11540",LastName11540 +11541,11541,"FirstName11541 MiddleName11541",LastName11541 +11542,11542,"FirstName11542 MiddleName11542",LastName11542 +11543,11543,"FirstName11543 MiddleName11543",LastName11543 +11544,11544,"FirstName11544 MiddleName11544",LastName11544 +11545,11545,"FirstName11545 MiddleName11545",LastName11545 +11546,11546,"FirstName11546 MiddleName11546",LastName11546 +11547,11547,"FirstName11547 MiddleName11547",LastName11547 +11548,11548,"FirstName11548 MiddleName11548",LastName11548 +11549,11549,"FirstName11549 MiddleName11549",LastName11549 +11550,11550,"FirstName11550 MiddleName11550",LastName11550 +11551,11551,"FirstName11551 MiddleName11551",LastName11551 
+11552,11552,"FirstName11552 MiddleName11552",LastName11552 +11553,11553,"FirstName11553 MiddleName11553",LastName11553 +11554,11554,"FirstName11554 MiddleName11554",LastName11554 +11555,11555,"FirstName11555 MiddleName11555",LastName11555 +11556,11556,"FirstName11556 MiddleName11556",LastName11556 +11557,11557,"FirstName11557 MiddleName11557",LastName11557 +11558,11558,"FirstName11558 MiddleName11558",LastName11558 +11559,11559,"FirstName11559 MiddleName11559",LastName11559 +11560,11560,"FirstName11560 MiddleName11560",LastName11560 +11561,11561,"FirstName11561 MiddleName11561",LastName11561 +11562,11562,"FirstName11562 MiddleName11562",LastName11562 +11563,11563,"FirstName11563 MiddleName11563",LastName11563 +11564,11564,"FirstName11564 MiddleName11564",LastName11564 +11565,11565,"FirstName11565 MiddleName11565",LastName11565 +11566,11566,"FirstName11566 MiddleName11566",LastName11566 +11567,11567,"FirstName11567 MiddleName11567",LastName11567 +11568,11568,"FirstName11568 MiddleName11568",LastName11568 +11569,11569,"FirstName11569 MiddleName11569",LastName11569 +11570,11570,"FirstName11570 MiddleName11570",LastName11570 +11571,11571,"FirstName11571 MiddleName11571",LastName11571 +11572,11572,"FirstName11572 MiddleName11572",LastName11572 +11573,11573,"FirstName11573 MiddleName11573",LastName11573 +11574,11574,"FirstName11574 MiddleName11574",LastName11574 +11575,11575,"FirstName11575 MiddleName11575",LastName11575 +11576,11576,"FirstName11576 MiddleName11576",LastName11576 +11577,11577,"FirstName11577 MiddleName11577",LastName11577 +11578,11578,"FirstName11578 MiddleName11578",LastName11578 +11579,11579,"FirstName11579 MiddleName11579",LastName11579 +11580,11580,"FirstName11580 MiddleName11580",LastName11580 +11581,11581,"FirstName11581 MiddleName11581",LastName11581 +11582,11582,"FirstName11582 MiddleName11582",LastName11582 +11583,11583,"FirstName11583 MiddleName11583",LastName11583 +11584,11584,"FirstName11584 MiddleName11584",LastName11584 
+11585,11585,"FirstName11585 MiddleName11585",LastName11585 +11586,11586,"FirstName11586 MiddleName11586",LastName11586 +11587,11587,"FirstName11587 MiddleName11587",LastName11587 +11588,11588,"FirstName11588 MiddleName11588",LastName11588 +11589,11589,"FirstName11589 MiddleName11589",LastName11589 +11590,11590,"FirstName11590 MiddleName11590",LastName11590 +11591,11591,"FirstName11591 MiddleName11591",LastName11591 +11592,11592,"FirstName11592 MiddleName11592",LastName11592 +11593,11593,"FirstName11593 MiddleName11593",LastName11593 +11594,11594,"FirstName11594 MiddleName11594",LastName11594 +11595,11595,"FirstName11595 MiddleName11595",LastName11595 +11596,11596,"FirstName11596 MiddleName11596",LastName11596 +11597,11597,"FirstName11597 MiddleName11597",LastName11597 +11598,11598,"FirstName11598 MiddleName11598",LastName11598 +11599,11599,"FirstName11599 MiddleName11599",LastName11599 +11600,11600,"FirstName11600 MiddleName11600",LastName11600 +11601,11601,"FirstName11601 MiddleName11601",LastName11601 +11602,11602,"FirstName11602 MiddleName11602",LastName11602 +11603,11603,"FirstName11603 MiddleName11603",LastName11603 +11604,11604,"FirstName11604 MiddleName11604",LastName11604 +11605,11605,"FirstName11605 MiddleName11605",LastName11605 +11606,11606,"FirstName11606 MiddleName11606",LastName11606 +11607,11607,"FirstName11607 MiddleName11607",LastName11607 +11608,11608,"FirstName11608 MiddleName11608",LastName11608 +11609,11609,"FirstName11609 MiddleName11609",LastName11609 +11610,11610,"FirstName11610 MiddleName11610",LastName11610 +11611,11611,"FirstName11611 MiddleName11611",LastName11611 +11612,11612,"FirstName11612 MiddleName11612",LastName11612 +11613,11613,"FirstName11613 MiddleName11613",LastName11613 +11614,11614,"FirstName11614 MiddleName11614",LastName11614 +11615,11615,"FirstName11615 MiddleName11615",LastName11615 +11616,11616,"FirstName11616 MiddleName11616",LastName11616 +11617,11617,"FirstName11617 MiddleName11617",LastName11617 
+11618,11618,"FirstName11618 MiddleName11618",LastName11618 +11619,11619,"FirstName11619 MiddleName11619",LastName11619 +11620,11620,"FirstName11620 MiddleName11620",LastName11620 +11621,11621,"FirstName11621 MiddleName11621",LastName11621 +11622,11622,"FirstName11622 MiddleName11622",LastName11622 +11623,11623,"FirstName11623 MiddleName11623",LastName11623 +11624,11624,"FirstName11624 MiddleName11624",LastName11624 +11625,11625,"FirstName11625 MiddleName11625",LastName11625 +11626,11626,"FirstName11626 MiddleName11626",LastName11626 +11627,11627,"FirstName11627 MiddleName11627",LastName11627 +11628,11628,"FirstName11628 MiddleName11628",LastName11628 +11629,11629,"FirstName11629 MiddleName11629",LastName11629 +11630,11630,"FirstName11630 MiddleName11630",LastName11630 +11631,11631,"FirstName11631 MiddleName11631",LastName11631 +11632,11632,"FirstName11632 MiddleName11632",LastName11632 +11633,11633,"FirstName11633 MiddleName11633",LastName11633 +11634,11634,"FirstName11634 MiddleName11634",LastName11634 +11635,11635,"FirstName11635 MiddleName11635",LastName11635 +11636,11636,"FirstName11636 MiddleName11636",LastName11636 +11637,11637,"FirstName11637 MiddleName11637",LastName11637 +11638,11638,"FirstName11638 MiddleName11638",LastName11638 +11639,11639,"FirstName11639 MiddleName11639",LastName11639 +11640,11640,"FirstName11640 MiddleName11640",LastName11640 +11641,11641,"FirstName11641 MiddleName11641",LastName11641 +11642,11642,"FirstName11642 MiddleName11642",LastName11642 +11643,11643,"FirstName11643 MiddleName11643",LastName11643 +11644,11644,"FirstName11644 MiddleName11644",LastName11644 +11645,11645,"FirstName11645 MiddleName11645",LastName11645 +11646,11646,"FirstName11646 MiddleName11646",LastName11646 +11647,11647,"FirstName11647 MiddleName11647",LastName11647 +11648,11648,"FirstName11648 MiddleName11648",LastName11648 +11649,11649,"FirstName11649 MiddleName11649",LastName11649 +11650,11650,"FirstName11650 MiddleName11650",LastName11650 
+11651,11651,"FirstName11651 MiddleName11651",LastName11651 +11652,11652,"FirstName11652 MiddleName11652",LastName11652 +11653,11653,"FirstName11653 MiddleName11653",LastName11653 +11654,11654,"FirstName11654 MiddleName11654",LastName11654 +11655,11655,"FirstName11655 MiddleName11655",LastName11655 +11656,11656,"FirstName11656 MiddleName11656",LastName11656 +11657,11657,"FirstName11657 MiddleName11657",LastName11657 +11658,11658,"FirstName11658 MiddleName11658",LastName11658 +11659,11659,"FirstName11659 MiddleName11659",LastName11659 +11660,11660,"FirstName11660 MiddleName11660",LastName11660 +11661,11661,"FirstName11661 MiddleName11661",LastName11661 +11662,11662,"FirstName11662 MiddleName11662",LastName11662 +11663,11663,"FirstName11663 MiddleName11663",LastName11663 +11664,11664,"FirstName11664 MiddleName11664",LastName11664 +11665,11665,"FirstName11665 MiddleName11665",LastName11665 +11666,11666,"FirstName11666 MiddleName11666",LastName11666 +11667,11667,"FirstName11667 MiddleName11667",LastName11667 +11668,11668,"FirstName11668 MiddleName11668",LastName11668 +11669,11669,"FirstName11669 MiddleName11669",LastName11669 +11670,11670,"FirstName11670 MiddleName11670",LastName11670 +11671,11671,"FirstName11671 MiddleName11671",LastName11671 +11672,11672,"FirstName11672 MiddleName11672",LastName11672 +11673,11673,"FirstName11673 MiddleName11673",LastName11673 +11674,11674,"FirstName11674 MiddleName11674",LastName11674 +11675,11675,"FirstName11675 MiddleName11675",LastName11675 +11676,11676,"FirstName11676 MiddleName11676",LastName11676 +11677,11677,"FirstName11677 MiddleName11677",LastName11677 +11678,11678,"FirstName11678 MiddleName11678",LastName11678 +11679,11679,"FirstName11679 MiddleName11679",LastName11679 +11680,11680,"FirstName11680 MiddleName11680",LastName11680 +11681,11681,"FirstName11681 MiddleName11681",LastName11681 +11682,11682,"FirstName11682 MiddleName11682",LastName11682 +11683,11683,"FirstName11683 MiddleName11683",LastName11683 
+11684,11684,"FirstName11684 MiddleName11684",LastName11684 +11685,11685,"FirstName11685 MiddleName11685",LastName11685 +11686,11686,"FirstName11686 MiddleName11686",LastName11686 +11687,11687,"FirstName11687 MiddleName11687",LastName11687 +11688,11688,"FirstName11688 MiddleName11688",LastName11688 +11689,11689,"FirstName11689 MiddleName11689",LastName11689 +11690,11690,"FirstName11690 MiddleName11690",LastName11690 +11691,11691,"FirstName11691 MiddleName11691",LastName11691 +11692,11692,"FirstName11692 MiddleName11692",LastName11692 +11693,11693,"FirstName11693 MiddleName11693",LastName11693 +11694,11694,"FirstName11694 MiddleName11694",LastName11694 +11695,11695,"FirstName11695 MiddleName11695",LastName11695 +11696,11696,"FirstName11696 MiddleName11696",LastName11696 +11697,11697,"FirstName11697 MiddleName11697",LastName11697 +11698,11698,"FirstName11698 MiddleName11698",LastName11698 +11699,11699,"FirstName11699 MiddleName11699",LastName11699 +11700,11700,"FirstName11700 MiddleName11700",LastName11700 +11701,11701,"FirstName11701 MiddleName11701",LastName11701 +11702,11702,"FirstName11702 MiddleName11702",LastName11702 +11703,11703,"FirstName11703 MiddleName11703",LastName11703 +11704,11704,"FirstName11704 MiddleName11704",LastName11704 +11705,11705,"FirstName11705 MiddleName11705",LastName11705 +11706,11706,"FirstName11706 MiddleName11706",LastName11706 +11707,11707,"FirstName11707 MiddleName11707",LastName11707 +11708,11708,"FirstName11708 MiddleName11708",LastName11708 +11709,11709,"FirstName11709 MiddleName11709",LastName11709 +11710,11710,"FirstName11710 MiddleName11710",LastName11710 +11711,11711,"FirstName11711 MiddleName11711",LastName11711 +11712,11712,"FirstName11712 MiddleName11712",LastName11712 +11713,11713,"FirstName11713 MiddleName11713",LastName11713 +11714,11714,"FirstName11714 MiddleName11714",LastName11714 +11715,11715,"FirstName11715 MiddleName11715",LastName11715 +11716,11716,"FirstName11716 MiddleName11716",LastName11716 
+11717,11717,"FirstName11717 MiddleName11717",LastName11717 +11718,11718,"FirstName11718 MiddleName11718",LastName11718 +11719,11719,"FirstName11719 MiddleName11719",LastName11719 +11720,11720,"FirstName11720 MiddleName11720",LastName11720 +11721,11721,"FirstName11721 MiddleName11721",LastName11721 +11722,11722,"FirstName11722 MiddleName11722",LastName11722 +11723,11723,"FirstName11723 MiddleName11723",LastName11723 +11724,11724,"FirstName11724 MiddleName11724",LastName11724 +11725,11725,"FirstName11725 MiddleName11725",LastName11725 +11726,11726,"FirstName11726 MiddleName11726",LastName11726 +11727,11727,"FirstName11727 MiddleName11727",LastName11727 +11728,11728,"FirstName11728 MiddleName11728",LastName11728 +11729,11729,"FirstName11729 MiddleName11729",LastName11729 +11730,11730,"FirstName11730 MiddleName11730",LastName11730 +11731,11731,"FirstName11731 MiddleName11731",LastName11731 +11732,11732,"FirstName11732 MiddleName11732",LastName11732 +11733,11733,"FirstName11733 MiddleName11733",LastName11733 +11734,11734,"FirstName11734 MiddleName11734",LastName11734 +11735,11735,"FirstName11735 MiddleName11735",LastName11735 +11736,11736,"FirstName11736 MiddleName11736",LastName11736 +11737,11737,"FirstName11737 MiddleName11737",LastName11737 +11738,11738,"FirstName11738 MiddleName11738",LastName11738 +11739,11739,"FirstName11739 MiddleName11739",LastName11739 +11740,11740,"FirstName11740 MiddleName11740",LastName11740 +11741,11741,"FirstName11741 MiddleName11741",LastName11741 +11742,11742,"FirstName11742 MiddleName11742",LastName11742 +11743,11743,"FirstName11743 MiddleName11743",LastName11743 +11744,11744,"FirstName11744 MiddleName11744",LastName11744 +11745,11745,"FirstName11745 MiddleName11745",LastName11745 +11746,11746,"FirstName11746 MiddleName11746",LastName11746 +11747,11747,"FirstName11747 MiddleName11747",LastName11747 +11748,11748,"FirstName11748 MiddleName11748",LastName11748 +11749,11749,"FirstName11749 MiddleName11749",LastName11749 
+11750,11750,"FirstName11750 MiddleName11750",LastName11750 +11751,11751,"FirstName11751 MiddleName11751",LastName11751 +11752,11752,"FirstName11752 MiddleName11752",LastName11752 +11753,11753,"FirstName11753 MiddleName11753",LastName11753 +11754,11754,"FirstName11754 MiddleName11754",LastName11754 +11755,11755,"FirstName11755 MiddleName11755",LastName11755 +11756,11756,"FirstName11756 MiddleName11756",LastName11756 +11757,11757,"FirstName11757 MiddleName11757",LastName11757 +11758,11758,"FirstName11758 MiddleName11758",LastName11758 +11759,11759,"FirstName11759 MiddleName11759",LastName11759 +11760,11760,"FirstName11760 MiddleName11760",LastName11760 +11761,11761,"FirstName11761 MiddleName11761",LastName11761 +11762,11762,"FirstName11762 MiddleName11762",LastName11762 +11763,11763,"FirstName11763 MiddleName11763",LastName11763 +11764,11764,"FirstName11764 MiddleName11764",LastName11764 +11765,11765,"FirstName11765 MiddleName11765",LastName11765 +11766,11766,"FirstName11766 MiddleName11766",LastName11766 +11767,11767,"FirstName11767 MiddleName11767",LastName11767 +11768,11768,"FirstName11768 MiddleName11768",LastName11768 +11769,11769,"FirstName11769 MiddleName11769",LastName11769 +11770,11770,"FirstName11770 MiddleName11770",LastName11770 +11771,11771,"FirstName11771 MiddleName11771",LastName11771 +11772,11772,"FirstName11772 MiddleName11772",LastName11772 +11773,11773,"FirstName11773 MiddleName11773",LastName11773 +11774,11774,"FirstName11774 MiddleName11774",LastName11774 +11775,11775,"FirstName11775 MiddleName11775",LastName11775 +11776,11776,"FirstName11776 MiddleName11776",LastName11776 +11777,11777,"FirstName11777 MiddleName11777",LastName11777 +11778,11778,"FirstName11778 MiddleName11778",LastName11778 +11779,11779,"FirstName11779 MiddleName11779",LastName11779 +11780,11780,"FirstName11780 MiddleName11780",LastName11780 +11781,11781,"FirstName11781 MiddleName11781",LastName11781 +11782,11782,"FirstName11782 MiddleName11782",LastName11782 
+11783,11783,"FirstName11783 MiddleName11783",LastName11783 +11784,11784,"FirstName11784 MiddleName11784",LastName11784 +11785,11785,"FirstName11785 MiddleName11785",LastName11785 +11786,11786,"FirstName11786 MiddleName11786",LastName11786 +11787,11787,"FirstName11787 MiddleName11787",LastName11787 +11788,11788,"FirstName11788 MiddleName11788",LastName11788 +11789,11789,"FirstName11789 MiddleName11789",LastName11789 +11790,11790,"FirstName11790 MiddleName11790",LastName11790 +11791,11791,"FirstName11791 MiddleName11791",LastName11791 +11792,11792,"FirstName11792 MiddleName11792",LastName11792 +11793,11793,"FirstName11793 MiddleName11793",LastName11793 +11794,11794,"FirstName11794 MiddleName11794",LastName11794 +11795,11795,"FirstName11795 MiddleName11795",LastName11795 +11796,11796,"FirstName11796 MiddleName11796",LastName11796 +11797,11797,"FirstName11797 MiddleName11797",LastName11797 +11798,11798,"FirstName11798 MiddleName11798",LastName11798 +11799,11799,"FirstName11799 MiddleName11799",LastName11799 +11800,11800,"FirstName11800 MiddleName11800",LastName11800 +11801,11801,"FirstName11801 MiddleName11801",LastName11801 +11802,11802,"FirstName11802 MiddleName11802",LastName11802 +11803,11803,"FirstName11803 MiddleName11803",LastName11803 +11804,11804,"FirstName11804 MiddleName11804",LastName11804 +11805,11805,"FirstName11805 MiddleName11805",LastName11805 +11806,11806,"FirstName11806 MiddleName11806",LastName11806 +11807,11807,"FirstName11807 MiddleName11807",LastName11807 +11808,11808,"FirstName11808 MiddleName11808",LastName11808 +11809,11809,"FirstName11809 MiddleName11809",LastName11809 +11810,11810,"FirstName11810 MiddleName11810",LastName11810 +11811,11811,"FirstName11811 MiddleName11811",LastName11811 +11812,11812,"FirstName11812 MiddleName11812",LastName11812 +11813,11813,"FirstName11813 MiddleName11813",LastName11813 +11814,11814,"FirstName11814 MiddleName11814",LastName11814 +11815,11815,"FirstName11815 MiddleName11815",LastName11815 
+11816,11816,"FirstName11816 MiddleName11816",LastName11816 +11817,11817,"FirstName11817 MiddleName11817",LastName11817 +11818,11818,"FirstName11818 MiddleName11818",LastName11818 +11819,11819,"FirstName11819 MiddleName11819",LastName11819 +11820,11820,"FirstName11820 MiddleName11820",LastName11820 +11821,11821,"FirstName11821 MiddleName11821",LastName11821 +11822,11822,"FirstName11822 MiddleName11822",LastName11822 +11823,11823,"FirstName11823 MiddleName11823",LastName11823 +11824,11824,"FirstName11824 MiddleName11824",LastName11824 +11825,11825,"FirstName11825 MiddleName11825",LastName11825 +11826,11826,"FirstName11826 MiddleName11826",LastName11826 +11827,11827,"FirstName11827 MiddleName11827",LastName11827 +11828,11828,"FirstName11828 MiddleName11828",LastName11828 +11829,11829,"FirstName11829 MiddleName11829",LastName11829 +11830,11830,"FirstName11830 MiddleName11830",LastName11830 +11831,11831,"FirstName11831 MiddleName11831",LastName11831 +11832,11832,"FirstName11832 MiddleName11832",LastName11832 +11833,11833,"FirstName11833 MiddleName11833",LastName11833 +11834,11834,"FirstName11834 MiddleName11834",LastName11834 +11835,11835,"FirstName11835 MiddleName11835",LastName11835 +11836,11836,"FirstName11836 MiddleName11836",LastName11836 +11837,11837,"FirstName11837 MiddleName11837",LastName11837 +11838,11838,"FirstName11838 MiddleName11838",LastName11838 +11839,11839,"FirstName11839 MiddleName11839",LastName11839 +11840,11840,"FirstName11840 MiddleName11840",LastName11840 +11841,11841,"FirstName11841 MiddleName11841",LastName11841 +11842,11842,"FirstName11842 MiddleName11842",LastName11842 +11843,11843,"FirstName11843 MiddleName11843",LastName11843 +11844,11844,"FirstName11844 MiddleName11844",LastName11844 +11845,11845,"FirstName11845 MiddleName11845",LastName11845 +11846,11846,"FirstName11846 MiddleName11846",LastName11846 +11847,11847,"FirstName11847 MiddleName11847",LastName11847 +11848,11848,"FirstName11848 MiddleName11848",LastName11848 
+11849,11849,"FirstName11849 MiddleName11849",LastName11849 +11850,11850,"FirstName11850 MiddleName11850",LastName11850 +11851,11851,"FirstName11851 MiddleName11851",LastName11851 +11852,11852,"FirstName11852 MiddleName11852",LastName11852 +11853,11853,"FirstName11853 MiddleName11853",LastName11853 +11854,11854,"FirstName11854 MiddleName11854",LastName11854 +11855,11855,"FirstName11855 MiddleName11855",LastName11855 +11856,11856,"FirstName11856 MiddleName11856",LastName11856 +11857,11857,"FirstName11857 MiddleName11857",LastName11857 +11858,11858,"FirstName11858 MiddleName11858",LastName11858 +11859,11859,"FirstName11859 MiddleName11859",LastName11859 +11860,11860,"FirstName11860 MiddleName11860",LastName11860 +11861,11861,"FirstName11861 MiddleName11861",LastName11861 +11862,11862,"FirstName11862 MiddleName11862",LastName11862 +11863,11863,"FirstName11863 MiddleName11863",LastName11863 +11864,11864,"FirstName11864 MiddleName11864",LastName11864 +11865,11865,"FirstName11865 MiddleName11865",LastName11865 +11866,11866,"FirstName11866 MiddleName11866",LastName11866 +11867,11867,"FirstName11867 MiddleName11867",LastName11867 +11868,11868,"FirstName11868 MiddleName11868",LastName11868 +11869,11869,"FirstName11869 MiddleName11869",LastName11869 +11870,11870,"FirstName11870 MiddleName11870",LastName11870 +11871,11871,"FirstName11871 MiddleName11871",LastName11871 +11872,11872,"FirstName11872 MiddleName11872",LastName11872 +11873,11873,"FirstName11873 MiddleName11873",LastName11873 +11874,11874,"FirstName11874 MiddleName11874",LastName11874 +11875,11875,"FirstName11875 MiddleName11875",LastName11875 +11876,11876,"FirstName11876 MiddleName11876",LastName11876 +11877,11877,"FirstName11877 MiddleName11877",LastName11877 +11878,11878,"FirstName11878 MiddleName11878",LastName11878 +11879,11879,"FirstName11879 MiddleName11879",LastName11879 +11880,11880,"FirstName11880 MiddleName11880",LastName11880 +11881,11881,"FirstName11881 MiddleName11881",LastName11881 
+11882,11882,"FirstName11882 MiddleName11882",LastName11882 +11883,11883,"FirstName11883 MiddleName11883",LastName11883 +11884,11884,"FirstName11884 MiddleName11884",LastName11884 +11885,11885,"FirstName11885 MiddleName11885",LastName11885 +11886,11886,"FirstName11886 MiddleName11886",LastName11886 +11887,11887,"FirstName11887 MiddleName11887",LastName11887 +11888,11888,"FirstName11888 MiddleName11888",LastName11888 +11889,11889,"FirstName11889 MiddleName11889",LastName11889 +11890,11890,"FirstName11890 MiddleName11890",LastName11890 +11891,11891,"FirstName11891 MiddleName11891",LastName11891 +11892,11892,"FirstName11892 MiddleName11892",LastName11892 +11893,11893,"FirstName11893 MiddleName11893",LastName11893 +11894,11894,"FirstName11894 MiddleName11894",LastName11894 +11895,11895,"FirstName11895 MiddleName11895",LastName11895 +11896,11896,"FirstName11896 MiddleName11896",LastName11896 +11897,11897,"FirstName11897 MiddleName11897",LastName11897 +11898,11898,"FirstName11898 MiddleName11898",LastName11898 +11899,11899,"FirstName11899 MiddleName11899",LastName11899 +11900,11900,"FirstName11900 MiddleName11900",LastName11900 +11901,11901,"FirstName11901 MiddleName11901",LastName11901 +11902,11902,"FirstName11902 MiddleName11902",LastName11902 +11903,11903,"FirstName11903 MiddleName11903",LastName11903 +11904,11904,"FirstName11904 MiddleName11904",LastName11904 +11905,11905,"FirstName11905 MiddleName11905",LastName11905 +11906,11906,"FirstName11906 MiddleName11906",LastName11906 +11907,11907,"FirstName11907 MiddleName11907",LastName11907 +11908,11908,"FirstName11908 MiddleName11908",LastName11908 +11909,11909,"FirstName11909 MiddleName11909",LastName11909 +11910,11910,"FirstName11910 MiddleName11910",LastName11910 +11911,11911,"FirstName11911 MiddleName11911",LastName11911 +11912,11912,"FirstName11912 MiddleName11912",LastName11912 +11913,11913,"FirstName11913 MiddleName11913",LastName11913 +11914,11914,"FirstName11914 MiddleName11914",LastName11914 
+11915,11915,"FirstName11915 MiddleName11915",LastName11915 +11916,11916,"FirstName11916 MiddleName11916",LastName11916 +11917,11917,"FirstName11917 MiddleName11917",LastName11917 +11918,11918,"FirstName11918 MiddleName11918",LastName11918 +11919,11919,"FirstName11919 MiddleName11919",LastName11919 +11920,11920,"FirstName11920 MiddleName11920",LastName11920 +11921,11921,"FirstName11921 MiddleName11921",LastName11921 +11922,11922,"FirstName11922 MiddleName11922",LastName11922 +11923,11923,"FirstName11923 MiddleName11923",LastName11923 +11924,11924,"FirstName11924 MiddleName11924",LastName11924 +11925,11925,"FirstName11925 MiddleName11925",LastName11925 +11926,11926,"FirstName11926 MiddleName11926",LastName11926 +11927,11927,"FirstName11927 MiddleName11927",LastName11927 +11928,11928,"FirstName11928 MiddleName11928",LastName11928 +11929,11929,"FirstName11929 MiddleName11929",LastName11929 +11930,11930,"FirstName11930 MiddleName11930",LastName11930 +11931,11931,"FirstName11931 MiddleName11931",LastName11931 +11932,11932,"FirstName11932 MiddleName11932",LastName11932 +11933,11933,"FirstName11933 MiddleName11933",LastName11933 +11934,11934,"FirstName11934 MiddleName11934",LastName11934 +11935,11935,"FirstName11935 MiddleName11935",LastName11935 +11936,11936,"FirstName11936 MiddleName11936",LastName11936 +11937,11937,"FirstName11937 MiddleName11937",LastName11937 +11938,11938,"FirstName11938 MiddleName11938",LastName11938 +11939,11939,"FirstName11939 MiddleName11939",LastName11939 +11940,11940,"FirstName11940 MiddleName11940",LastName11940 +11941,11941,"FirstName11941 MiddleName11941",LastName11941 +11942,11942,"FirstName11942 MiddleName11942",LastName11942 +11943,11943,"FirstName11943 MiddleName11943",LastName11943 +11944,11944,"FirstName11944 MiddleName11944",LastName11944 +11945,11945,"FirstName11945 MiddleName11945",LastName11945 +11946,11946,"FirstName11946 MiddleName11946",LastName11946 +11947,11947,"FirstName11947 MiddleName11947",LastName11947 
+11948,11948,"FirstName11948 MiddleName11948",LastName11948 +11949,11949,"FirstName11949 MiddleName11949",LastName11949 +11950,11950,"FirstName11950 MiddleName11950",LastName11950 +11951,11951,"FirstName11951 MiddleName11951",LastName11951 +11952,11952,"FirstName11952 MiddleName11952",LastName11952 +11953,11953,"FirstName11953 MiddleName11953",LastName11953 +11954,11954,"FirstName11954 MiddleName11954",LastName11954 +11955,11955,"FirstName11955 MiddleName11955",LastName11955 +11956,11956,"FirstName11956 MiddleName11956",LastName11956 +11957,11957,"FirstName11957 MiddleName11957",LastName11957 +11958,11958,"FirstName11958 MiddleName11958",LastName11958 +11959,11959,"FirstName11959 MiddleName11959",LastName11959 +11960,11960,"FirstName11960 MiddleName11960",LastName11960 +11961,11961,"FirstName11961 MiddleName11961",LastName11961 +11962,11962,"FirstName11962 MiddleName11962",LastName11962 +11963,11963,"FirstName11963 MiddleName11963",LastName11963 +11964,11964,"FirstName11964 MiddleName11964",LastName11964 +11965,11965,"FirstName11965 MiddleName11965",LastName11965 +11966,11966,"FirstName11966 MiddleName11966",LastName11966 +11967,11967,"FirstName11967 MiddleName11967",LastName11967 +11968,11968,"FirstName11968 MiddleName11968",LastName11968 +11969,11969,"FirstName11969 MiddleName11969",LastName11969 +11970,11970,"FirstName11970 MiddleName11970",LastName11970 +11971,11971,"FirstName11971 MiddleName11971",LastName11971 +11972,11972,"FirstName11972 MiddleName11972",LastName11972 +11973,11973,"FirstName11973 MiddleName11973",LastName11973 +11974,11974,"FirstName11974 MiddleName11974",LastName11974 +11975,11975,"FirstName11975 MiddleName11975",LastName11975 +11976,11976,"FirstName11976 MiddleName11976",LastName11976 +11977,11977,"FirstName11977 MiddleName11977",LastName11977 +11978,11978,"FirstName11978 MiddleName11978",LastName11978 +11979,11979,"FirstName11979 MiddleName11979",LastName11979 +11980,11980,"FirstName11980 MiddleName11980",LastName11980 
+11981,11981,"FirstName11981 MiddleName11981",LastName11981 +11982,11982,"FirstName11982 MiddleName11982",LastName11982 +11983,11983,"FirstName11983 MiddleName11983",LastName11983 +11984,11984,"FirstName11984 MiddleName11984",LastName11984 +11985,11985,"FirstName11985 MiddleName11985",LastName11985 +11986,11986,"FirstName11986 MiddleName11986",LastName11986 +11987,11987,"FirstName11987 MiddleName11987",LastName11987 +11988,11988,"FirstName11988 MiddleName11988",LastName11988 +11989,11989,"FirstName11989 MiddleName11989",LastName11989 +11990,11990,"FirstName11990 MiddleName11990",LastName11990 +11991,11991,"FirstName11991 MiddleName11991",LastName11991 +11992,11992,"FirstName11992 MiddleName11992",LastName11992 +11993,11993,"FirstName11993 MiddleName11993",LastName11993 +11994,11994,"FirstName11994 MiddleName11994",LastName11994 +11995,11995,"FirstName11995 MiddleName11995",LastName11995 +11996,11996,"FirstName11996 MiddleName11996",LastName11996 +11997,11997,"FirstName11997 MiddleName11997",LastName11997 +11998,11998,"FirstName11998 MiddleName11998",LastName11998 +11999,11999,"FirstName11999 MiddleName11999",LastName11999 +12000,12000,"FirstName12000 MiddleName12000",LastName12000 +12001,12001,"FirstName12001 MiddleName12001",LastName12001 +12002,12002,"FirstName12002 MiddleName12002",LastName12002 +12003,12003,"FirstName12003 MiddleName12003",LastName12003 +12004,12004,"FirstName12004 MiddleName12004",LastName12004 +12005,12005,"FirstName12005 MiddleName12005",LastName12005 +12006,12006,"FirstName12006 MiddleName12006",LastName12006 +12007,12007,"FirstName12007 MiddleName12007",LastName12007 +12008,12008,"FirstName12008 MiddleName12008",LastName12008 +12009,12009,"FirstName12009 MiddleName12009",LastName12009 +12010,12010,"FirstName12010 MiddleName12010",LastName12010 +12011,12011,"FirstName12011 MiddleName12011",LastName12011 +12012,12012,"FirstName12012 MiddleName12012",LastName12012 +12013,12013,"FirstName12013 MiddleName12013",LastName12013 
+12014,12014,"FirstName12014 MiddleName12014",LastName12014 +12015,12015,"FirstName12015 MiddleName12015",LastName12015 +12016,12016,"FirstName12016 MiddleName12016",LastName12016 +12017,12017,"FirstName12017 MiddleName12017",LastName12017 +12018,12018,"FirstName12018 MiddleName12018",LastName12018 +12019,12019,"FirstName12019 MiddleName12019",LastName12019 +12020,12020,"FirstName12020 MiddleName12020",LastName12020 +12021,12021,"FirstName12021 MiddleName12021",LastName12021 +12022,12022,"FirstName12022 MiddleName12022",LastName12022 +12023,12023,"FirstName12023 MiddleName12023",LastName12023 +12024,12024,"FirstName12024 MiddleName12024",LastName12024 +12025,12025,"FirstName12025 MiddleName12025",LastName12025 +12026,12026,"FirstName12026 MiddleName12026",LastName12026 +12027,12027,"FirstName12027 MiddleName12027",LastName12027 +12028,12028,"FirstName12028 MiddleName12028",LastName12028 +12029,12029,"FirstName12029 MiddleName12029",LastName12029 +12030,12030,"FirstName12030 MiddleName12030",LastName12030 +12031,12031,"FirstName12031 MiddleName12031",LastName12031 +12032,12032,"FirstName12032 MiddleName12032",LastName12032 +12033,12033,"FirstName12033 MiddleName12033",LastName12033 +12034,12034,"FirstName12034 MiddleName12034",LastName12034 +12035,12035,"FirstName12035 MiddleName12035",LastName12035 +12036,12036,"FirstName12036 MiddleName12036",LastName12036 +12037,12037,"FirstName12037 MiddleName12037",LastName12037 +12038,12038,"FirstName12038 MiddleName12038",LastName12038 +12039,12039,"FirstName12039 MiddleName12039",LastName12039 +12040,12040,"FirstName12040 MiddleName12040",LastName12040 +12041,12041,"FirstName12041 MiddleName12041",LastName12041 +12042,12042,"FirstName12042 MiddleName12042",LastName12042 +12043,12043,"FirstName12043 MiddleName12043",LastName12043 +12044,12044,"FirstName12044 MiddleName12044",LastName12044 +12045,12045,"FirstName12045 MiddleName12045",LastName12045 +12046,12046,"FirstName12046 MiddleName12046",LastName12046 
+12047,12047,"FirstName12047 MiddleName12047",LastName12047 +12048,12048,"FirstName12048 MiddleName12048",LastName12048 +12049,12049,"FirstName12049 MiddleName12049",LastName12049 +12050,12050,"FirstName12050 MiddleName12050",LastName12050 +12051,12051,"FirstName12051 MiddleName12051",LastName12051 +12052,12052,"FirstName12052 MiddleName12052",LastName12052 +12053,12053,"FirstName12053 MiddleName12053",LastName12053 +12054,12054,"FirstName12054 MiddleName12054",LastName12054 +12055,12055,"FirstName12055 MiddleName12055",LastName12055 +12056,12056,"FirstName12056 MiddleName12056",LastName12056 +12057,12057,"FirstName12057 MiddleName12057",LastName12057 +12058,12058,"FirstName12058 MiddleName12058",LastName12058 +12059,12059,"FirstName12059 MiddleName12059",LastName12059 +12060,12060,"FirstName12060 MiddleName12060",LastName12060 +12061,12061,"FirstName12061 MiddleName12061",LastName12061 +12062,12062,"FirstName12062 MiddleName12062",LastName12062 +12063,12063,"FirstName12063 MiddleName12063",LastName12063 +12064,12064,"FirstName12064 MiddleName12064",LastName12064 +12065,12065,"FirstName12065 MiddleName12065",LastName12065 +12066,12066,"FirstName12066 MiddleName12066",LastName12066 +12067,12067,"FirstName12067 MiddleName12067",LastName12067 +12068,12068,"FirstName12068 MiddleName12068",LastName12068 +12069,12069,"FirstName12069 MiddleName12069",LastName12069 +12070,12070,"FirstName12070 MiddleName12070",LastName12070 +12071,12071,"FirstName12071 MiddleName12071",LastName12071 +12072,12072,"FirstName12072 MiddleName12072",LastName12072 +12073,12073,"FirstName12073 MiddleName12073",LastName12073 +12074,12074,"FirstName12074 MiddleName12074",LastName12074 +12075,12075,"FirstName12075 MiddleName12075",LastName12075 +12076,12076,"FirstName12076 MiddleName12076",LastName12076 +12077,12077,"FirstName12077 MiddleName12077",LastName12077 +12078,12078,"FirstName12078 MiddleName12078",LastName12078 +12079,12079,"FirstName12079 MiddleName12079",LastName12079 
+12080,12080,"FirstName12080 MiddleName12080",LastName12080 +12081,12081,"FirstName12081 MiddleName12081",LastName12081 +12082,12082,"FirstName12082 MiddleName12082",LastName12082 +12083,12083,"FirstName12083 MiddleName12083",LastName12083 +12084,12084,"FirstName12084 MiddleName12084",LastName12084 +12085,12085,"FirstName12085 MiddleName12085",LastName12085 +12086,12086,"FirstName12086 MiddleName12086",LastName12086 +12087,12087,"FirstName12087 MiddleName12087",LastName12087 +12088,12088,"FirstName12088 MiddleName12088",LastName12088 +12089,12089,"FirstName12089 MiddleName12089",LastName12089 +12090,12090,"FirstName12090 MiddleName12090",LastName12090 +12091,12091,"FirstName12091 MiddleName12091",LastName12091 +12092,12092,"FirstName12092 MiddleName12092",LastName12092 +12093,12093,"FirstName12093 MiddleName12093",LastName12093 +12094,12094,"FirstName12094 MiddleName12094",LastName12094 +12095,12095,"FirstName12095 MiddleName12095",LastName12095 +12096,12096,"FirstName12096 MiddleName12096",LastName12096 +12097,12097,"FirstName12097 MiddleName12097",LastName12097 +12098,12098,"FirstName12098 MiddleName12098",LastName12098 +12099,12099,"FirstName12099 MiddleName12099",LastName12099 +12100,12100,"FirstName12100 MiddleName12100",LastName12100 +12101,12101,"FirstName12101 MiddleName12101",LastName12101 +12102,12102,"FirstName12102 MiddleName12102",LastName12102 +12103,12103,"FirstName12103 MiddleName12103",LastName12103 +12104,12104,"FirstName12104 MiddleName12104",LastName12104 +12105,12105,"FirstName12105 MiddleName12105",LastName12105 +12106,12106,"FirstName12106 MiddleName12106",LastName12106 +12107,12107,"FirstName12107 MiddleName12107",LastName12107 +12108,12108,"FirstName12108 MiddleName12108",LastName12108 +12109,12109,"FirstName12109 MiddleName12109",LastName12109 +12110,12110,"FirstName12110 MiddleName12110",LastName12110 +12111,12111,"FirstName12111 MiddleName12111",LastName12111 +12112,12112,"FirstName12112 MiddleName12112",LastName12112 
+12113,12113,"FirstName12113 MiddleName12113",LastName12113 +12114,12114,"FirstName12114 MiddleName12114",LastName12114 +12115,12115,"FirstName12115 MiddleName12115",LastName12115 +12116,12116,"FirstName12116 MiddleName12116",LastName12116 +12117,12117,"FirstName12117 MiddleName12117",LastName12117 +12118,12118,"FirstName12118 MiddleName12118",LastName12118 +12119,12119,"FirstName12119 MiddleName12119",LastName12119 +12120,12120,"FirstName12120 MiddleName12120",LastName12120 +12121,12121,"FirstName12121 MiddleName12121",LastName12121 +12122,12122,"FirstName12122 MiddleName12122",LastName12122 +12123,12123,"FirstName12123 MiddleName12123",LastName12123 +12124,12124,"FirstName12124 MiddleName12124",LastName12124 +12125,12125,"FirstName12125 MiddleName12125",LastName12125 +12126,12126,"FirstName12126 MiddleName12126",LastName12126 +12127,12127,"FirstName12127 MiddleName12127",LastName12127 +12128,12128,"FirstName12128 MiddleName12128",LastName12128 +12129,12129,"FirstName12129 MiddleName12129",LastName12129 +12130,12130,"FirstName12130 MiddleName12130",LastName12130 +12131,12131,"FirstName12131 MiddleName12131",LastName12131 +12132,12132,"FirstName12132 MiddleName12132",LastName12132 +12133,12133,"FirstName12133 MiddleName12133",LastName12133 +12134,12134,"FirstName12134 MiddleName12134",LastName12134 +12135,12135,"FirstName12135 MiddleName12135",LastName12135 +12136,12136,"FirstName12136 MiddleName12136",LastName12136 +12137,12137,"FirstName12137 MiddleName12137",LastName12137 +12138,12138,"FirstName12138 MiddleName12138",LastName12138 +12139,12139,"FirstName12139 MiddleName12139",LastName12139 +12140,12140,"FirstName12140 MiddleName12140",LastName12140 +12141,12141,"FirstName12141 MiddleName12141",LastName12141 +12142,12142,"FirstName12142 MiddleName12142",LastName12142 +12143,12143,"FirstName12143 MiddleName12143",LastName12143 +12144,12144,"FirstName12144 MiddleName12144",LastName12144 +12145,12145,"FirstName12145 MiddleName12145",LastName12145 
+12146,12146,"FirstName12146 MiddleName12146",LastName12146 +12147,12147,"FirstName12147 MiddleName12147",LastName12147 +12148,12148,"FirstName12148 MiddleName12148",LastName12148 +12149,12149,"FirstName12149 MiddleName12149",LastName12149 +12150,12150,"FirstName12150 MiddleName12150",LastName12150 +12151,12151,"FirstName12151 MiddleName12151",LastName12151 +12152,12152,"FirstName12152 MiddleName12152",LastName12152 +12153,12153,"FirstName12153 MiddleName12153",LastName12153 +12154,12154,"FirstName12154 MiddleName12154",LastName12154 +12155,12155,"FirstName12155 MiddleName12155",LastName12155 +12156,12156,"FirstName12156 MiddleName12156",LastName12156 +12157,12157,"FirstName12157 MiddleName12157",LastName12157 +12158,12158,"FirstName12158 MiddleName12158",LastName12158 +12159,12159,"FirstName12159 MiddleName12159",LastName12159 +12160,12160,"FirstName12160 MiddleName12160",LastName12160 +12161,12161,"FirstName12161 MiddleName12161",LastName12161 +12162,12162,"FirstName12162 MiddleName12162",LastName12162 +12163,12163,"FirstName12163 MiddleName12163",LastName12163 +12164,12164,"FirstName12164 MiddleName12164",LastName12164 +12165,12165,"FirstName12165 MiddleName12165",LastName12165 +12166,12166,"FirstName12166 MiddleName12166",LastName12166 +12167,12167,"FirstName12167 MiddleName12167",LastName12167 +12168,12168,"FirstName12168 MiddleName12168",LastName12168 +12169,12169,"FirstName12169 MiddleName12169",LastName12169 +12170,12170,"FirstName12170 MiddleName12170",LastName12170 +12171,12171,"FirstName12171 MiddleName12171",LastName12171 +12172,12172,"FirstName12172 MiddleName12172",LastName12172 +12173,12173,"FirstName12173 MiddleName12173",LastName12173 +12174,12174,"FirstName12174 MiddleName12174",LastName12174 +12175,12175,"FirstName12175 MiddleName12175",LastName12175 +12176,12176,"FirstName12176 MiddleName12176",LastName12176 +12177,12177,"FirstName12177 MiddleName12177",LastName12177 +12178,12178,"FirstName12178 MiddleName12178",LastName12178 
+12179,12179,"FirstName12179 MiddleName12179",LastName12179 +12180,12180,"FirstName12180 MiddleName12180",LastName12180 +12181,12181,"FirstName12181 MiddleName12181",LastName12181 +12182,12182,"FirstName12182 MiddleName12182",LastName12182 +12183,12183,"FirstName12183 MiddleName12183",LastName12183 +12184,12184,"FirstName12184 MiddleName12184",LastName12184 +12185,12185,"FirstName12185 MiddleName12185",LastName12185 +12186,12186,"FirstName12186 MiddleName12186",LastName12186 +12187,12187,"FirstName12187 MiddleName12187",LastName12187 +12188,12188,"FirstName12188 MiddleName12188",LastName12188 +12189,12189,"FirstName12189 MiddleName12189",LastName12189 +12190,12190,"FirstName12190 MiddleName12190",LastName12190 +12191,12191,"FirstName12191 MiddleName12191",LastName12191 +12192,12192,"FirstName12192 MiddleName12192",LastName12192 +12193,12193,"FirstName12193 MiddleName12193",LastName12193 +12194,12194,"FirstName12194 MiddleName12194",LastName12194 +12195,12195,"FirstName12195 MiddleName12195",LastName12195 +12196,12196,"FirstName12196 MiddleName12196",LastName12196 +12197,12197,"FirstName12197 MiddleName12197",LastName12197 +12198,12198,"FirstName12198 MiddleName12198",LastName12198 +12199,12199,"FirstName12199 MiddleName12199",LastName12199 +12200,12200,"FirstName12200 MiddleName12200",LastName12200 +12201,12201,"FirstName12201 MiddleName12201",LastName12201 +12202,12202,"FirstName12202 MiddleName12202",LastName12202 +12203,12203,"FirstName12203 MiddleName12203",LastName12203 +12204,12204,"FirstName12204 MiddleName12204",LastName12204 +12205,12205,"FirstName12205 MiddleName12205",LastName12205 +12206,12206,"FirstName12206 MiddleName12206",LastName12206 +12207,12207,"FirstName12207 MiddleName12207",LastName12207 +12208,12208,"FirstName12208 MiddleName12208",LastName12208 +12209,12209,"FirstName12209 MiddleName12209",LastName12209 +12210,12210,"FirstName12210 MiddleName12210",LastName12210 +12211,12211,"FirstName12211 MiddleName12211",LastName12211 
+12212,12212,"FirstName12212 MiddleName12212",LastName12212 +12213,12213,"FirstName12213 MiddleName12213",LastName12213 +12214,12214,"FirstName12214 MiddleName12214",LastName12214 +12215,12215,"FirstName12215 MiddleName12215",LastName12215 +12216,12216,"FirstName12216 MiddleName12216",LastName12216 +12217,12217,"FirstName12217 MiddleName12217",LastName12217 +12218,12218,"FirstName12218 MiddleName12218",LastName12218 +12219,12219,"FirstName12219 MiddleName12219",LastName12219 +12220,12220,"FirstName12220 MiddleName12220",LastName12220 +12221,12221,"FirstName12221 MiddleName12221",LastName12221 +12222,12222,"FirstName12222 MiddleName12222",LastName12222 +12223,12223,"FirstName12223 MiddleName12223",LastName12223 +12224,12224,"FirstName12224 MiddleName12224",LastName12224 +12225,12225,"FirstName12225 MiddleName12225",LastName12225 +12226,12226,"FirstName12226 MiddleName12226",LastName12226 +12227,12227,"FirstName12227 MiddleName12227",LastName12227 +12228,12228,"FirstName12228 MiddleName12228",LastName12228 +12229,12229,"FirstName12229 MiddleName12229",LastName12229 +12230,12230,"FirstName12230 MiddleName12230",LastName12230 +12231,12231,"FirstName12231 MiddleName12231",LastName12231 +12232,12232,"FirstName12232 MiddleName12232",LastName12232 +12233,12233,"FirstName12233 MiddleName12233",LastName12233 +12234,12234,"FirstName12234 MiddleName12234",LastName12234 +12235,12235,"FirstName12235 MiddleName12235",LastName12235 +12236,12236,"FirstName12236 MiddleName12236",LastName12236 +12237,12237,"FirstName12237 MiddleName12237",LastName12237 +12238,12238,"FirstName12238 MiddleName12238",LastName12238 +12239,12239,"FirstName12239 MiddleName12239",LastName12239 +12240,12240,"FirstName12240 MiddleName12240",LastName12240 +12241,12241,"FirstName12241 MiddleName12241",LastName12241 +12242,12242,"FirstName12242 MiddleName12242",LastName12242 +12243,12243,"FirstName12243 MiddleName12243",LastName12243 +12244,12244,"FirstName12244 MiddleName12244",LastName12244 
+12245,12245,"FirstName12245 MiddleName12245",LastName12245 +12246,12246,"FirstName12246 MiddleName12246",LastName12246 +12247,12247,"FirstName12247 MiddleName12247",LastName12247 +12248,12248,"FirstName12248 MiddleName12248",LastName12248 +12249,12249,"FirstName12249 MiddleName12249",LastName12249 +12250,12250,"FirstName12250 MiddleName12250",LastName12250 +12251,12251,"FirstName12251 MiddleName12251",LastName12251 +12252,12252,"FirstName12252 MiddleName12252",LastName12252 +12253,12253,"FirstName12253 MiddleName12253",LastName12253 +12254,12254,"FirstName12254 MiddleName12254",LastName12254 +12255,12255,"FirstName12255 MiddleName12255",LastName12255 +12256,12256,"FirstName12256 MiddleName12256",LastName12256 +12257,12257,"FirstName12257 MiddleName12257",LastName12257 +12258,12258,"FirstName12258 MiddleName12258",LastName12258 +12259,12259,"FirstName12259 MiddleName12259",LastName12259 +12260,12260,"FirstName12260 MiddleName12260",LastName12260 +12261,12261,"FirstName12261 MiddleName12261",LastName12261 +12262,12262,"FirstName12262 MiddleName12262",LastName12262 +12263,12263,"FirstName12263 MiddleName12263",LastName12263 +12264,12264,"FirstName12264 MiddleName12264",LastName12264 +12265,12265,"FirstName12265 MiddleName12265",LastName12265 +12266,12266,"FirstName12266 MiddleName12266",LastName12266 +12267,12267,"FirstName12267 MiddleName12267",LastName12267 +12268,12268,"FirstName12268 MiddleName12268",LastName12268 +12269,12269,"FirstName12269 MiddleName12269",LastName12269 +12270,12270,"FirstName12270 MiddleName12270",LastName12270 +12271,12271,"FirstName12271 MiddleName12271",LastName12271 +12272,12272,"FirstName12272 MiddleName12272",LastName12272 +12273,12273,"FirstName12273 MiddleName12273",LastName12273 +12274,12274,"FirstName12274 MiddleName12274",LastName12274 +12275,12275,"FirstName12275 MiddleName12275",LastName12275 +12276,12276,"FirstName12276 MiddleName12276",LastName12276 +12277,12277,"FirstName12277 MiddleName12277",LastName12277 
+12278,12278,"FirstName12278 MiddleName12278",LastName12278 +12279,12279,"FirstName12279 MiddleName12279",LastName12279 +12280,12280,"FirstName12280 MiddleName12280",LastName12280 +12281,12281,"FirstName12281 MiddleName12281",LastName12281 +12282,12282,"FirstName12282 MiddleName12282",LastName12282 +12283,12283,"FirstName12283 MiddleName12283",LastName12283 +12284,12284,"FirstName12284 MiddleName12284",LastName12284 +12285,12285,"FirstName12285 MiddleName12285",LastName12285 +12286,12286,"FirstName12286 MiddleName12286",LastName12286 +12287,12287,"FirstName12287 MiddleName12287",LastName12287 +12288,12288,"FirstName12288 MiddleName12288",LastName12288 +12289,12289,"FirstName12289 MiddleName12289",LastName12289 +12290,12290,"FirstName12290 MiddleName12290",LastName12290 +12291,12291,"FirstName12291 MiddleName12291",LastName12291 +12292,12292,"FirstName12292 MiddleName12292",LastName12292 +12293,12293,"FirstName12293 MiddleName12293",LastName12293 +12294,12294,"FirstName12294 MiddleName12294",LastName12294 +12295,12295,"FirstName12295 MiddleName12295",LastName12295 +12296,12296,"FirstName12296 MiddleName12296",LastName12296 +12297,12297,"FirstName12297 MiddleName12297",LastName12297 +12298,12298,"FirstName12298 MiddleName12298",LastName12298 +12299,12299,"FirstName12299 MiddleName12299",LastName12299 +12300,12300,"FirstName12300 MiddleName12300",LastName12300 +12301,12301,"FirstName12301 MiddleName12301",LastName12301 +12302,12302,"FirstName12302 MiddleName12302",LastName12302 +12303,12303,"FirstName12303 MiddleName12303",LastName12303 +12304,12304,"FirstName12304 MiddleName12304",LastName12304 +12305,12305,"FirstName12305 MiddleName12305",LastName12305 +12306,12306,"FirstName12306 MiddleName12306",LastName12306 +12307,12307,"FirstName12307 MiddleName12307",LastName12307 +12308,12308,"FirstName12308 MiddleName12308",LastName12308 +12309,12309,"FirstName12309 MiddleName12309",LastName12309 +12310,12310,"FirstName12310 MiddleName12310",LastName12310 
+12311,12311,"FirstName12311 MiddleName12311",LastName12311 +12312,12312,"FirstName12312 MiddleName12312",LastName12312 +12313,12313,"FirstName12313 MiddleName12313",LastName12313 +12314,12314,"FirstName12314 MiddleName12314",LastName12314 +12315,12315,"FirstName12315 MiddleName12315",LastName12315 +12316,12316,"FirstName12316 MiddleName12316",LastName12316 +12317,12317,"FirstName12317 MiddleName12317",LastName12317 +12318,12318,"FirstName12318 MiddleName12318",LastName12318 +12319,12319,"FirstName12319 MiddleName12319",LastName12319 +12320,12320,"FirstName12320 MiddleName12320",LastName12320 +12321,12321,"FirstName12321 MiddleName12321",LastName12321 +12322,12322,"FirstName12322 MiddleName12322",LastName12322 +12323,12323,"FirstName12323 MiddleName12323",LastName12323 +12324,12324,"FirstName12324 MiddleName12324",LastName12324 +12325,12325,"FirstName12325 MiddleName12325",LastName12325 +12326,12326,"FirstName12326 MiddleName12326",LastName12326 +12327,12327,"FirstName12327 MiddleName12327",LastName12327 +12328,12328,"FirstName12328 MiddleName12328",LastName12328 +12329,12329,"FirstName12329 MiddleName12329",LastName12329 +12330,12330,"FirstName12330 MiddleName12330",LastName12330 +12331,12331,"FirstName12331 MiddleName12331",LastName12331 +12332,12332,"FirstName12332 MiddleName12332",LastName12332 +12333,12333,"FirstName12333 MiddleName12333",LastName12333 +12334,12334,"FirstName12334 MiddleName12334",LastName12334 +12335,12335,"FirstName12335 MiddleName12335",LastName12335 +12336,12336,"FirstName12336 MiddleName12336",LastName12336 +12337,12337,"FirstName12337 MiddleName12337",LastName12337 +12338,12338,"FirstName12338 MiddleName12338",LastName12338 +12339,12339,"FirstName12339 MiddleName12339",LastName12339 +12340,12340,"FirstName12340 MiddleName12340",LastName12340 +12341,12341,"FirstName12341 MiddleName12341",LastName12341 +12342,12342,"FirstName12342 MiddleName12342",LastName12342 +12343,12343,"FirstName12343 MiddleName12343",LastName12343 
+12344,12344,"FirstName12344 MiddleName12344",LastName12344 +12345,12345,"FirstName12345 MiddleName12345",LastName12345 +12346,12346,"FirstName12346 MiddleName12346",LastName12346 +12347,12347,"FirstName12347 MiddleName12347",LastName12347 +12348,12348,"FirstName12348 MiddleName12348",LastName12348 +12349,12349,"FirstName12349 MiddleName12349",LastName12349 +12350,12350,"FirstName12350 MiddleName12350",LastName12350 +12351,12351,"FirstName12351 MiddleName12351",LastName12351 +12352,12352,"FirstName12352 MiddleName12352",LastName12352 +12353,12353,"FirstName12353 MiddleName12353",LastName12353 +12354,12354,"FirstName12354 MiddleName12354",LastName12354 +12355,12355,"FirstName12355 MiddleName12355",LastName12355 +12356,12356,"FirstName12356 MiddleName12356",LastName12356 +12357,12357,"FirstName12357 MiddleName12357",LastName12357 +12358,12358,"FirstName12358 MiddleName12358",LastName12358 +12359,12359,"FirstName12359 MiddleName12359",LastName12359 +12360,12360,"FirstName12360 MiddleName12360",LastName12360 +12361,12361,"FirstName12361 MiddleName12361",LastName12361 +12362,12362,"FirstName12362 MiddleName12362",LastName12362 +12363,12363,"FirstName12363 MiddleName12363",LastName12363 +12364,12364,"FirstName12364 MiddleName12364",LastName12364 +12365,12365,"FirstName12365 MiddleName12365",LastName12365 +12366,12366,"FirstName12366 MiddleName12366",LastName12366 +12367,12367,"FirstName12367 MiddleName12367",LastName12367 +12368,12368,"FirstName12368 MiddleName12368",LastName12368 +12369,12369,"FirstName12369 MiddleName12369",LastName12369 +12370,12370,"FirstName12370 MiddleName12370",LastName12370 +12371,12371,"FirstName12371 MiddleName12371",LastName12371 +12372,12372,"FirstName12372 MiddleName12372",LastName12372 +12373,12373,"FirstName12373 MiddleName12373",LastName12373 +12374,12374,"FirstName12374 MiddleName12374",LastName12374 +12375,12375,"FirstName12375 MiddleName12375",LastName12375 +12376,12376,"FirstName12376 MiddleName12376",LastName12376 
+12377,12377,"FirstName12377 MiddleName12377",LastName12377 +12378,12378,"FirstName12378 MiddleName12378",LastName12378 +12379,12379,"FirstName12379 MiddleName12379",LastName12379 +12380,12380,"FirstName12380 MiddleName12380",LastName12380 +12381,12381,"FirstName12381 MiddleName12381",LastName12381 +12382,12382,"FirstName12382 MiddleName12382",LastName12382 +12383,12383,"FirstName12383 MiddleName12383",LastName12383 +12384,12384,"FirstName12384 MiddleName12384",LastName12384 +12385,12385,"FirstName12385 MiddleName12385",LastName12385 +12386,12386,"FirstName12386 MiddleName12386",LastName12386 +12387,12387,"FirstName12387 MiddleName12387",LastName12387 +12388,12388,"FirstName12388 MiddleName12388",LastName12388 +12389,12389,"FirstName12389 MiddleName12389",LastName12389 +12390,12390,"FirstName12390 MiddleName12390",LastName12390 +12391,12391,"FirstName12391 MiddleName12391",LastName12391 +12392,12392,"FirstName12392 MiddleName12392",LastName12392 +12393,12393,"FirstName12393 MiddleName12393",LastName12393 +12394,12394,"FirstName12394 MiddleName12394",LastName12394 +12395,12395,"FirstName12395 MiddleName12395",LastName12395 +12396,12396,"FirstName12396 MiddleName12396",LastName12396 +12397,12397,"FirstName12397 MiddleName12397",LastName12397 +12398,12398,"FirstName12398 MiddleName12398",LastName12398 +12399,12399,"FirstName12399 MiddleName12399",LastName12399 +12400,12400,"FirstName12400 MiddleName12400",LastName12400 +12401,12401,"FirstName12401 MiddleName12401",LastName12401 +12402,12402,"FirstName12402 MiddleName12402",LastName12402 +12403,12403,"FirstName12403 MiddleName12403",LastName12403 +12404,12404,"FirstName12404 MiddleName12404",LastName12404 +12405,12405,"FirstName12405 MiddleName12405",LastName12405 +12406,12406,"FirstName12406 MiddleName12406",LastName12406 +12407,12407,"FirstName12407 MiddleName12407",LastName12407 +12408,12408,"FirstName12408 MiddleName12408",LastName12408 +12409,12409,"FirstName12409 MiddleName12409",LastName12409 
+12410,12410,"FirstName12410 MiddleName12410",LastName12410 +12411,12411,"FirstName12411 MiddleName12411",LastName12411 +12412,12412,"FirstName12412 MiddleName12412",LastName12412 +12413,12413,"FirstName12413 MiddleName12413",LastName12413 +12414,12414,"FirstName12414 MiddleName12414",LastName12414 +12415,12415,"FirstName12415 MiddleName12415",LastName12415 +12416,12416,"FirstName12416 MiddleName12416",LastName12416 +12417,12417,"FirstName12417 MiddleName12417",LastName12417 +12418,12418,"FirstName12418 MiddleName12418",LastName12418 +12419,12419,"FirstName12419 MiddleName12419",LastName12419 +12420,12420,"FirstName12420 MiddleName12420",LastName12420 +12421,12421,"FirstName12421 MiddleName12421",LastName12421 +12422,12422,"FirstName12422 MiddleName12422",LastName12422 +12423,12423,"FirstName12423 MiddleName12423",LastName12423 +12424,12424,"FirstName12424 MiddleName12424",LastName12424 +12425,12425,"FirstName12425 MiddleName12425",LastName12425 +12426,12426,"FirstName12426 MiddleName12426",LastName12426 +12427,12427,"FirstName12427 MiddleName12427",LastName12427 +12428,12428,"FirstName12428 MiddleName12428",LastName12428 +12429,12429,"FirstName12429 MiddleName12429",LastName12429 +12430,12430,"FirstName12430 MiddleName12430",LastName12430 +12431,12431,"FirstName12431 MiddleName12431",LastName12431 +12432,12432,"FirstName12432 MiddleName12432",LastName12432 +12433,12433,"FirstName12433 MiddleName12433",LastName12433 +12434,12434,"FirstName12434 MiddleName12434",LastName12434 +12435,12435,"FirstName12435 MiddleName12435",LastName12435 +12436,12436,"FirstName12436 MiddleName12436",LastName12436 +12437,12437,"FirstName12437 MiddleName12437",LastName12437 +12438,12438,"FirstName12438 MiddleName12438",LastName12438 +12439,12439,"FirstName12439 MiddleName12439",LastName12439 +12440,12440,"FirstName12440 MiddleName12440",LastName12440 +12441,12441,"FirstName12441 MiddleName12441",LastName12441 +12442,12442,"FirstName12442 MiddleName12442",LastName12442 
+12443,12443,"FirstName12443 MiddleName12443",LastName12443 +12444,12444,"FirstName12444 MiddleName12444",LastName12444 +12445,12445,"FirstName12445 MiddleName12445",LastName12445 +12446,12446,"FirstName12446 MiddleName12446",LastName12446 +12447,12447,"FirstName12447 MiddleName12447",LastName12447 +12448,12448,"FirstName12448 MiddleName12448",LastName12448 +12449,12449,"FirstName12449 MiddleName12449",LastName12449 +12450,12450,"FirstName12450 MiddleName12450",LastName12450 +12451,12451,"FirstName12451 MiddleName12451",LastName12451 +12452,12452,"FirstName12452 MiddleName12452",LastName12452 +12453,12453,"FirstName12453 MiddleName12453",LastName12453 +12454,12454,"FirstName12454 MiddleName12454",LastName12454 +12455,12455,"FirstName12455 MiddleName12455",LastName12455 +12456,12456,"FirstName12456 MiddleName12456",LastName12456 +12457,12457,"FirstName12457 MiddleName12457",LastName12457 +12458,12458,"FirstName12458 MiddleName12458",LastName12458 +12459,12459,"FirstName12459 MiddleName12459",LastName12459 +12460,12460,"FirstName12460 MiddleName12460",LastName12460 +12461,12461,"FirstName12461 MiddleName12461",LastName12461 +12462,12462,"FirstName12462 MiddleName12462",LastName12462 +12463,12463,"FirstName12463 MiddleName12463",LastName12463 +12464,12464,"FirstName12464 MiddleName12464",LastName12464 +12465,12465,"FirstName12465 MiddleName12465",LastName12465 +12466,12466,"FirstName12466 MiddleName12466",LastName12466 +12467,12467,"FirstName12467 MiddleName12467",LastName12467 +12468,12468,"FirstName12468 MiddleName12468",LastName12468 +12469,12469,"FirstName12469 MiddleName12469",LastName12469 +12470,12470,"FirstName12470 MiddleName12470",LastName12470 +12471,12471,"FirstName12471 MiddleName12471",LastName12471 +12472,12472,"FirstName12472 MiddleName12472",LastName12472 +12473,12473,"FirstName12473 MiddleName12473",LastName12473 +12474,12474,"FirstName12474 MiddleName12474",LastName12474 +12475,12475,"FirstName12475 MiddleName12475",LastName12475 
+12476,12476,"FirstName12476 MiddleName12476",LastName12476 +12477,12477,"FirstName12477 MiddleName12477",LastName12477 +12478,12478,"FirstName12478 MiddleName12478",LastName12478 +12479,12479,"FirstName12479 MiddleName12479",LastName12479 +12480,12480,"FirstName12480 MiddleName12480",LastName12480 +12481,12481,"FirstName12481 MiddleName12481",LastName12481 +12482,12482,"FirstName12482 MiddleName12482",LastName12482 +12483,12483,"FirstName12483 MiddleName12483",LastName12483 +12484,12484,"FirstName12484 MiddleName12484",LastName12484 +12485,12485,"FirstName12485 MiddleName12485",LastName12485 +12486,12486,"FirstName12486 MiddleName12486",LastName12486 +12487,12487,"FirstName12487 MiddleName12487",LastName12487 +12488,12488,"FirstName12488 MiddleName12488",LastName12488 +12489,12489,"FirstName12489 MiddleName12489",LastName12489 +12490,12490,"FirstName12490 MiddleName12490",LastName12490 +12491,12491,"FirstName12491 MiddleName12491",LastName12491 +12492,12492,"FirstName12492 MiddleName12492",LastName12492 +12493,12493,"FirstName12493 MiddleName12493",LastName12493 +12494,12494,"FirstName12494 MiddleName12494",LastName12494 +12495,12495,"FirstName12495 MiddleName12495",LastName12495 +12496,12496,"FirstName12496 MiddleName12496",LastName12496 +12497,12497,"FirstName12497 MiddleName12497",LastName12497 +12498,12498,"FirstName12498 MiddleName12498",LastName12498 +12499,12499,"FirstName12499 MiddleName12499",LastName12499 +12500,12500,"FirstName12500 MiddleName12500",LastName12500 +12501,12501,"FirstName12501 MiddleName12501",LastName12501 +12502,12502,"FirstName12502 MiddleName12502",LastName12502 +12503,12503,"FirstName12503 MiddleName12503",LastName12503 +12504,12504,"FirstName12504 MiddleName12504",LastName12504 +12505,12505,"FirstName12505 MiddleName12505",LastName12505 +12506,12506,"FirstName12506 MiddleName12506",LastName12506 +12507,12507,"FirstName12507 MiddleName12507",LastName12507 +12508,12508,"FirstName12508 MiddleName12508",LastName12508 
+12509,12509,"FirstName12509 MiddleName12509",LastName12509 +12510,12510,"FirstName12510 MiddleName12510",LastName12510 +12511,12511,"FirstName12511 MiddleName12511",LastName12511 +12512,12512,"FirstName12512 MiddleName12512",LastName12512 +12513,12513,"FirstName12513 MiddleName12513",LastName12513 +12514,12514,"FirstName12514 MiddleName12514",LastName12514 +12515,12515,"FirstName12515 MiddleName12515",LastName12515 +12516,12516,"FirstName12516 MiddleName12516",LastName12516 +12517,12517,"FirstName12517 MiddleName12517",LastName12517 +12518,12518,"FirstName12518 MiddleName12518",LastName12518 +12519,12519,"FirstName12519 MiddleName12519",LastName12519 +12520,12520,"FirstName12520 MiddleName12520",LastName12520 +12521,12521,"FirstName12521 MiddleName12521",LastName12521 +12522,12522,"FirstName12522 MiddleName12522",LastName12522 +12523,12523,"FirstName12523 MiddleName12523",LastName12523 +12524,12524,"FirstName12524 MiddleName12524",LastName12524 +12525,12525,"FirstName12525 MiddleName12525",LastName12525 +12526,12526,"FirstName12526 MiddleName12526",LastName12526 +12527,12527,"FirstName12527 MiddleName12527",LastName12527 +12528,12528,"FirstName12528 MiddleName12528",LastName12528 +12529,12529,"FirstName12529 MiddleName12529",LastName12529 +12530,12530,"FirstName12530 MiddleName12530",LastName12530 +12531,12531,"FirstName12531 MiddleName12531",LastName12531 +12532,12532,"FirstName12532 MiddleName12532",LastName12532 +12533,12533,"FirstName12533 MiddleName12533",LastName12533 +12534,12534,"FirstName12534 MiddleName12534",LastName12534 +12535,12535,"FirstName12535 MiddleName12535",LastName12535 +12536,12536,"FirstName12536 MiddleName12536",LastName12536 +12537,12537,"FirstName12537 MiddleName12537",LastName12537 +12538,12538,"FirstName12538 MiddleName12538",LastName12538 +12539,12539,"FirstName12539 MiddleName12539",LastName12539 +12540,12540,"FirstName12540 MiddleName12540",LastName12540 +12541,12541,"FirstName12541 MiddleName12541",LastName12541 
+12542,12542,"FirstName12542 MiddleName12542",LastName12542 +12543,12543,"FirstName12543 MiddleName12543",LastName12543 +12544,12544,"FirstName12544 MiddleName12544",LastName12544 +12545,12545,"FirstName12545 MiddleName12545",LastName12545 +12546,12546,"FirstName12546 MiddleName12546",LastName12546 +12547,12547,"FirstName12547 MiddleName12547",LastName12547 +12548,12548,"FirstName12548 MiddleName12548",LastName12548 +12549,12549,"FirstName12549 MiddleName12549",LastName12549 +12550,12550,"FirstName12550 MiddleName12550",LastName12550 +12551,12551,"FirstName12551 MiddleName12551",LastName12551 +12552,12552,"FirstName12552 MiddleName12552",LastName12552 +12553,12553,"FirstName12553 MiddleName12553",LastName12553 +12554,12554,"FirstName12554 MiddleName12554",LastName12554 +12555,12555,"FirstName12555 MiddleName12555",LastName12555 +12556,12556,"FirstName12556 MiddleName12556",LastName12556 +12557,12557,"FirstName12557 MiddleName12557",LastName12557 +12558,12558,"FirstName12558 MiddleName12558",LastName12558 +12559,12559,"FirstName12559 MiddleName12559",LastName12559 +12560,12560,"FirstName12560 MiddleName12560",LastName12560 +12561,12561,"FirstName12561 MiddleName12561",LastName12561 +12562,12562,"FirstName12562 MiddleName12562",LastName12562 +12563,12563,"FirstName12563 MiddleName12563",LastName12563 +12564,12564,"FirstName12564 MiddleName12564",LastName12564 +12565,12565,"FirstName12565 MiddleName12565",LastName12565 +12566,12566,"FirstName12566 MiddleName12566",LastName12566 +12567,12567,"FirstName12567 MiddleName12567",LastName12567 +12568,12568,"FirstName12568 MiddleName12568",LastName12568 +12569,12569,"FirstName12569 MiddleName12569",LastName12569 +12570,12570,"FirstName12570 MiddleName12570",LastName12570 +12571,12571,"FirstName12571 MiddleName12571",LastName12571 +12572,12572,"FirstName12572 MiddleName12572",LastName12572 +12573,12573,"FirstName12573 MiddleName12573",LastName12573 +12574,12574,"FirstName12574 MiddleName12574",LastName12574 
+12575,12575,"FirstName12575 MiddleName12575",LastName12575 +12576,12576,"FirstName12576 MiddleName12576",LastName12576 +12577,12577,"FirstName12577 MiddleName12577",LastName12577 +12578,12578,"FirstName12578 MiddleName12578",LastName12578 +12579,12579,"FirstName12579 MiddleName12579",LastName12579 +12580,12580,"FirstName12580 MiddleName12580",LastName12580 +12581,12581,"FirstName12581 MiddleName12581",LastName12581 +12582,12582,"FirstName12582 MiddleName12582",LastName12582 +12583,12583,"FirstName12583 MiddleName12583",LastName12583 +12584,12584,"FirstName12584 MiddleName12584",LastName12584 +12585,12585,"FirstName12585 MiddleName12585",LastName12585 +12586,12586,"FirstName12586 MiddleName12586",LastName12586 +12587,12587,"FirstName12587 MiddleName12587",LastName12587 +12588,12588,"FirstName12588 MiddleName12588",LastName12588 +12589,12589,"FirstName12589 MiddleName12589",LastName12589 +12590,12590,"FirstName12590 MiddleName12590",LastName12590 +12591,12591,"FirstName12591 MiddleName12591",LastName12591 +12592,12592,"FirstName12592 MiddleName12592",LastName12592 +12593,12593,"FirstName12593 MiddleName12593",LastName12593 +12594,12594,"FirstName12594 MiddleName12594",LastName12594 +12595,12595,"FirstName12595 MiddleName12595",LastName12595 +12596,12596,"FirstName12596 MiddleName12596",LastName12596 +12597,12597,"FirstName12597 MiddleName12597",LastName12597 +12598,12598,"FirstName12598 MiddleName12598",LastName12598 +12599,12599,"FirstName12599 MiddleName12599",LastName12599 +12600,12600,"FirstName12600 MiddleName12600",LastName12600 +12601,12601,"FirstName12601 MiddleName12601",LastName12601 +12602,12602,"FirstName12602 MiddleName12602",LastName12602 +12603,12603,"FirstName12603 MiddleName12603",LastName12603 +12604,12604,"FirstName12604 MiddleName12604",LastName12604 +12605,12605,"FirstName12605 MiddleName12605",LastName12605 +12606,12606,"FirstName12606 MiddleName12606",LastName12606 +12607,12607,"FirstName12607 MiddleName12607",LastName12607 
+12608,12608,"FirstName12608 MiddleName12608",LastName12608 +12609,12609,"FirstName12609 MiddleName12609",LastName12609 +12610,12610,"FirstName12610 MiddleName12610",LastName12610 +12611,12611,"FirstName12611 MiddleName12611",LastName12611 +12612,12612,"FirstName12612 MiddleName12612",LastName12612 +12613,12613,"FirstName12613 MiddleName12613",LastName12613 +12614,12614,"FirstName12614 MiddleName12614",LastName12614 +12615,12615,"FirstName12615 MiddleName12615",LastName12615 +12616,12616,"FirstName12616 MiddleName12616",LastName12616 +12617,12617,"FirstName12617 MiddleName12617",LastName12617 +12618,12618,"FirstName12618 MiddleName12618",LastName12618 +12619,12619,"FirstName12619 MiddleName12619",LastName12619 +12620,12620,"FirstName12620 MiddleName12620",LastName12620 +12621,12621,"FirstName12621 MiddleName12621",LastName12621 +12622,12622,"FirstName12622 MiddleName12622",LastName12622 +12623,12623,"FirstName12623 MiddleName12623",LastName12623 +12624,12624,"FirstName12624 MiddleName12624",LastName12624 +12625,12625,"FirstName12625 MiddleName12625",LastName12625 +12626,12626,"FirstName12626 MiddleName12626",LastName12626 +12627,12627,"FirstName12627 MiddleName12627",LastName12627 +12628,12628,"FirstName12628 MiddleName12628",LastName12628 +12629,12629,"FirstName12629 MiddleName12629",LastName12629 +12630,12630,"FirstName12630 MiddleName12630",LastName12630 +12631,12631,"FirstName12631 MiddleName12631",LastName12631 +12632,12632,"FirstName12632 MiddleName12632",LastName12632 +12633,12633,"FirstName12633 MiddleName12633",LastName12633 +12634,12634,"FirstName12634 MiddleName12634",LastName12634 +12635,12635,"FirstName12635 MiddleName12635",LastName12635 +12636,12636,"FirstName12636 MiddleName12636",LastName12636 +12637,12637,"FirstName12637 MiddleName12637",LastName12637 +12638,12638,"FirstName12638 MiddleName12638",LastName12638 +12639,12639,"FirstName12639 MiddleName12639",LastName12639 +12640,12640,"FirstName12640 MiddleName12640",LastName12640 
+12641,12641,"FirstName12641 MiddleName12641",LastName12641 +12642,12642,"FirstName12642 MiddleName12642",LastName12642 +12643,12643,"FirstName12643 MiddleName12643",LastName12643 +12644,12644,"FirstName12644 MiddleName12644",LastName12644 +12645,12645,"FirstName12645 MiddleName12645",LastName12645 +12646,12646,"FirstName12646 MiddleName12646",LastName12646 +12647,12647,"FirstName12647 MiddleName12647",LastName12647 +12648,12648,"FirstName12648 MiddleName12648",LastName12648 +12649,12649,"FirstName12649 MiddleName12649",LastName12649 +12650,12650,"FirstName12650 MiddleName12650",LastName12650 +12651,12651,"FirstName12651 MiddleName12651",LastName12651 +12652,12652,"FirstName12652 MiddleName12652",LastName12652 +12653,12653,"FirstName12653 MiddleName12653",LastName12653 +12654,12654,"FirstName12654 MiddleName12654",LastName12654 +12655,12655,"FirstName12655 MiddleName12655",LastName12655 +12656,12656,"FirstName12656 MiddleName12656",LastName12656 +12657,12657,"FirstName12657 MiddleName12657",LastName12657 +12658,12658,"FirstName12658 MiddleName12658",LastName12658 +12659,12659,"FirstName12659 MiddleName12659",LastName12659 +12660,12660,"FirstName12660 MiddleName12660",LastName12660 +12661,12661,"FirstName12661 MiddleName12661",LastName12661 +12662,12662,"FirstName12662 MiddleName12662",LastName12662 +12663,12663,"FirstName12663 MiddleName12663",LastName12663 +12664,12664,"FirstName12664 MiddleName12664",LastName12664 +12665,12665,"FirstName12665 MiddleName12665",LastName12665 +12666,12666,"FirstName12666 MiddleName12666",LastName12666 +12667,12667,"FirstName12667 MiddleName12667",LastName12667 +12668,12668,"FirstName12668 MiddleName12668",LastName12668 +12669,12669,"FirstName12669 MiddleName12669",LastName12669 +12670,12670,"FirstName12670 MiddleName12670",LastName12670 +12671,12671,"FirstName12671 MiddleName12671",LastName12671 +12672,12672,"FirstName12672 MiddleName12672",LastName12672 +12673,12673,"FirstName12673 MiddleName12673",LastName12673 
+12674,12674,"FirstName12674 MiddleName12674",LastName12674 +12675,12675,"FirstName12675 MiddleName12675",LastName12675 +12676,12676,"FirstName12676 MiddleName12676",LastName12676 +12677,12677,"FirstName12677 MiddleName12677",LastName12677 +12678,12678,"FirstName12678 MiddleName12678",LastName12678 +12679,12679,"FirstName12679 MiddleName12679",LastName12679 +12680,12680,"FirstName12680 MiddleName12680",LastName12680 +12681,12681,"FirstName12681 MiddleName12681",LastName12681 +12682,12682,"FirstName12682 MiddleName12682",LastName12682 +12683,12683,"FirstName12683 MiddleName12683",LastName12683 +12684,12684,"FirstName12684 MiddleName12684",LastName12684 +12685,12685,"FirstName12685 MiddleName12685",LastName12685 +12686,12686,"FirstName12686 MiddleName12686",LastName12686 +12687,12687,"FirstName12687 MiddleName12687",LastName12687 +12688,12688,"FirstName12688 MiddleName12688",LastName12688 +12689,12689,"FirstName12689 MiddleName12689",LastName12689 +12690,12690,"FirstName12690 MiddleName12690",LastName12690 +12691,12691,"FirstName12691 MiddleName12691",LastName12691 +12692,12692,"FirstName12692 MiddleName12692",LastName12692 +12693,12693,"FirstName12693 MiddleName12693",LastName12693 +12694,12694,"FirstName12694 MiddleName12694",LastName12694 +12695,12695,"FirstName12695 MiddleName12695",LastName12695 +12696,12696,"FirstName12696 MiddleName12696",LastName12696 +12697,12697,"FirstName12697 MiddleName12697",LastName12697 +12698,12698,"FirstName12698 MiddleName12698",LastName12698 +12699,12699,"FirstName12699 MiddleName12699",LastName12699 +12700,12700,"FirstName12700 MiddleName12700",LastName12700 +12701,12701,"FirstName12701 MiddleName12701",LastName12701 +12702,12702,"FirstName12702 MiddleName12702",LastName12702 +12703,12703,"FirstName12703 MiddleName12703",LastName12703 +12704,12704,"FirstName12704 MiddleName12704",LastName12704 +12705,12705,"FirstName12705 MiddleName12705",LastName12705 +12706,12706,"FirstName12706 MiddleName12706",LastName12706 
+12707,12707,"FirstName12707 MiddleName12707",LastName12707 +12708,12708,"FirstName12708 MiddleName12708",LastName12708 +12709,12709,"FirstName12709 MiddleName12709",LastName12709 +12710,12710,"FirstName12710 MiddleName12710",LastName12710 +12711,12711,"FirstName12711 MiddleName12711",LastName12711 +12712,12712,"FirstName12712 MiddleName12712",LastName12712 +12713,12713,"FirstName12713 MiddleName12713",LastName12713 +12714,12714,"FirstName12714 MiddleName12714",LastName12714 +12715,12715,"FirstName12715 MiddleName12715",LastName12715 +12716,12716,"FirstName12716 MiddleName12716",LastName12716 +12717,12717,"FirstName12717 MiddleName12717",LastName12717 +12718,12718,"FirstName12718 MiddleName12718",LastName12718 +12719,12719,"FirstName12719 MiddleName12719",LastName12719 +12720,12720,"FirstName12720 MiddleName12720",LastName12720 +12721,12721,"FirstName12721 MiddleName12721",LastName12721 +12722,12722,"FirstName12722 MiddleName12722",LastName12722 +12723,12723,"FirstName12723 MiddleName12723",LastName12723 +12724,12724,"FirstName12724 MiddleName12724",LastName12724 +12725,12725,"FirstName12725 MiddleName12725",LastName12725 +12726,12726,"FirstName12726 MiddleName12726",LastName12726 +12727,12727,"FirstName12727 MiddleName12727",LastName12727 +12728,12728,"FirstName12728 MiddleName12728",LastName12728 +12729,12729,"FirstName12729 MiddleName12729",LastName12729 +12730,12730,"FirstName12730 MiddleName12730",LastName12730 +12731,12731,"FirstName12731 MiddleName12731",LastName12731 +12732,12732,"FirstName12732 MiddleName12732",LastName12732 +12733,12733,"FirstName12733 MiddleName12733",LastName12733 +12734,12734,"FirstName12734 MiddleName12734",LastName12734 +12735,12735,"FirstName12735 MiddleName12735",LastName12735 +12736,12736,"FirstName12736 MiddleName12736",LastName12736 +12737,12737,"FirstName12737 MiddleName12737",LastName12737 +12738,12738,"FirstName12738 MiddleName12738",LastName12738 +12739,12739,"FirstName12739 MiddleName12739",LastName12739 
+12740,12740,"FirstName12740 MiddleName12740",LastName12740 +12741,12741,"FirstName12741 MiddleName12741",LastName12741 +12742,12742,"FirstName12742 MiddleName12742",LastName12742 +12743,12743,"FirstName12743 MiddleName12743",LastName12743 +12744,12744,"FirstName12744 MiddleName12744",LastName12744 +12745,12745,"FirstName12745 MiddleName12745",LastName12745 +12746,12746,"FirstName12746 MiddleName12746",LastName12746 +12747,12747,"FirstName12747 MiddleName12747",LastName12747 +12748,12748,"FirstName12748 MiddleName12748",LastName12748 +12749,12749,"FirstName12749 MiddleName12749",LastName12749 +12750,12750,"FirstName12750 MiddleName12750",LastName12750 +12751,12751,"FirstName12751 MiddleName12751",LastName12751 +12752,12752,"FirstName12752 MiddleName12752",LastName12752 +12753,12753,"FirstName12753 MiddleName12753",LastName12753 +12754,12754,"FirstName12754 MiddleName12754",LastName12754 +12755,12755,"FirstName12755 MiddleName12755",LastName12755 +12756,12756,"FirstName12756 MiddleName12756",LastName12756 +12757,12757,"FirstName12757 MiddleName12757",LastName12757 +12758,12758,"FirstName12758 MiddleName12758",LastName12758 +12759,12759,"FirstName12759 MiddleName12759",LastName12759 +12760,12760,"FirstName12760 MiddleName12760",LastName12760 +12761,12761,"FirstName12761 MiddleName12761",LastName12761 +12762,12762,"FirstName12762 MiddleName12762",LastName12762 +12763,12763,"FirstName12763 MiddleName12763",LastName12763 +12764,12764,"FirstName12764 MiddleName12764",LastName12764 +12765,12765,"FirstName12765 MiddleName12765",LastName12765 +12766,12766,"FirstName12766 MiddleName12766",LastName12766 +12767,12767,"FirstName12767 MiddleName12767",LastName12767 +12768,12768,"FirstName12768 MiddleName12768",LastName12768 +12769,12769,"FirstName12769 MiddleName12769",LastName12769 +12770,12770,"FirstName12770 MiddleName12770",LastName12770 +12771,12771,"FirstName12771 MiddleName12771",LastName12771 +12772,12772,"FirstName12772 MiddleName12772",LastName12772 
+12773,12773,"FirstName12773 MiddleName12773",LastName12773 +12774,12774,"FirstName12774 MiddleName12774",LastName12774 +12775,12775,"FirstName12775 MiddleName12775",LastName12775 +12776,12776,"FirstName12776 MiddleName12776",LastName12776 +12777,12777,"FirstName12777 MiddleName12777",LastName12777 +12778,12778,"FirstName12778 MiddleName12778",LastName12778 +12779,12779,"FirstName12779 MiddleName12779",LastName12779 +12780,12780,"FirstName12780 MiddleName12780",LastName12780 +12781,12781,"FirstName12781 MiddleName12781",LastName12781 +12782,12782,"FirstName12782 MiddleName12782",LastName12782 +12783,12783,"FirstName12783 MiddleName12783",LastName12783 +12784,12784,"FirstName12784 MiddleName12784",LastName12784 +12785,12785,"FirstName12785 MiddleName12785",LastName12785 +12786,12786,"FirstName12786 MiddleName12786",LastName12786 +12787,12787,"FirstName12787 MiddleName12787",LastName12787 +12788,12788,"FirstName12788 MiddleName12788",LastName12788 +12789,12789,"FirstName12789 MiddleName12789",LastName12789 +12790,12790,"FirstName12790 MiddleName12790",LastName12790 +12791,12791,"FirstName12791 MiddleName12791",LastName12791 +12792,12792,"FirstName12792 MiddleName12792",LastName12792 +12793,12793,"FirstName12793 MiddleName12793",LastName12793 +12794,12794,"FirstName12794 MiddleName12794",LastName12794 +12795,12795,"FirstName12795 MiddleName12795",LastName12795 +12796,12796,"FirstName12796 MiddleName12796",LastName12796 +12797,12797,"FirstName12797 MiddleName12797",LastName12797 +12798,12798,"FirstName12798 MiddleName12798",LastName12798 +12799,12799,"FirstName12799 MiddleName12799",LastName12799 +12800,12800,"FirstName12800 MiddleName12800",LastName12800 +12801,12801,"FirstName12801 MiddleName12801",LastName12801 +12802,12802,"FirstName12802 MiddleName12802",LastName12802 +12803,12803,"FirstName12803 MiddleName12803",LastName12803 +12804,12804,"FirstName12804 MiddleName12804",LastName12804 +12805,12805,"FirstName12805 MiddleName12805",LastName12805 
+12806,12806,"FirstName12806 MiddleName12806",LastName12806 +12807,12807,"FirstName12807 MiddleName12807",LastName12807 +12808,12808,"FirstName12808 MiddleName12808",LastName12808 +12809,12809,"FirstName12809 MiddleName12809",LastName12809 +12810,12810,"FirstName12810 MiddleName12810",LastName12810 +12811,12811,"FirstName12811 MiddleName12811",LastName12811 +12812,12812,"FirstName12812 MiddleName12812",LastName12812 +12813,12813,"FirstName12813 MiddleName12813",LastName12813 +12814,12814,"FirstName12814 MiddleName12814",LastName12814 +12815,12815,"FirstName12815 MiddleName12815",LastName12815 +12816,12816,"FirstName12816 MiddleName12816",LastName12816 +12817,12817,"FirstName12817 MiddleName12817",LastName12817 +12818,12818,"FirstName12818 MiddleName12818",LastName12818 +12819,12819,"FirstName12819 MiddleName12819",LastName12819 +12820,12820,"FirstName12820 MiddleName12820",LastName12820 +12821,12821,"FirstName12821 MiddleName12821",LastName12821 +12822,12822,"FirstName12822 MiddleName12822",LastName12822 +12823,12823,"FirstName12823 MiddleName12823",LastName12823 +12824,12824,"FirstName12824 MiddleName12824",LastName12824 +12825,12825,"FirstName12825 MiddleName12825",LastName12825 +12826,12826,"FirstName12826 MiddleName12826",LastName12826 +12827,12827,"FirstName12827 MiddleName12827",LastName12827 +12828,12828,"FirstName12828 MiddleName12828",LastName12828 +12829,12829,"FirstName12829 MiddleName12829",LastName12829 +12830,12830,"FirstName12830 MiddleName12830",LastName12830 +12831,12831,"FirstName12831 MiddleName12831",LastName12831 +12832,12832,"FirstName12832 MiddleName12832",LastName12832 +12833,12833,"FirstName12833 MiddleName12833",LastName12833 +12834,12834,"FirstName12834 MiddleName12834",LastName12834 +12835,12835,"FirstName12835 MiddleName12835",LastName12835 +12836,12836,"FirstName12836 MiddleName12836",LastName12836 +12837,12837,"FirstName12837 MiddleName12837",LastName12837 +12838,12838,"FirstName12838 MiddleName12838",LastName12838 
+12839,12839,"FirstName12839 MiddleName12839",LastName12839 +12840,12840,"FirstName12840 MiddleName12840",LastName12840 +12841,12841,"FirstName12841 MiddleName12841",LastName12841 +12842,12842,"FirstName12842 MiddleName12842",LastName12842 +12843,12843,"FirstName12843 MiddleName12843",LastName12843 +12844,12844,"FirstName12844 MiddleName12844",LastName12844 +12845,12845,"FirstName12845 MiddleName12845",LastName12845 +12846,12846,"FirstName12846 MiddleName12846",LastName12846 +12847,12847,"FirstName12847 MiddleName12847",LastName12847 +12848,12848,"FirstName12848 MiddleName12848",LastName12848 +12849,12849,"FirstName12849 MiddleName12849",LastName12849 +12850,12850,"FirstName12850 MiddleName12850",LastName12850 +12851,12851,"FirstName12851 MiddleName12851",LastName12851 +12852,12852,"FirstName12852 MiddleName12852",LastName12852 +12853,12853,"FirstName12853 MiddleName12853",LastName12853 +12854,12854,"FirstName12854 MiddleName12854",LastName12854 +12855,12855,"FirstName12855 MiddleName12855",LastName12855 +12856,12856,"FirstName12856 MiddleName12856",LastName12856 +12857,12857,"FirstName12857 MiddleName12857",LastName12857 +12858,12858,"FirstName12858 MiddleName12858",LastName12858 +12859,12859,"FirstName12859 MiddleName12859",LastName12859 +12860,12860,"FirstName12860 MiddleName12860",LastName12860 +12861,12861,"FirstName12861 MiddleName12861",LastName12861 +12862,12862,"FirstName12862 MiddleName12862",LastName12862 +12863,12863,"FirstName12863 MiddleName12863",LastName12863 +12864,12864,"FirstName12864 MiddleName12864",LastName12864 +12865,12865,"FirstName12865 MiddleName12865",LastName12865 +12866,12866,"FirstName12866 MiddleName12866",LastName12866 +12867,12867,"FirstName12867 MiddleName12867",LastName12867 +12868,12868,"FirstName12868 MiddleName12868",LastName12868 +12869,12869,"FirstName12869 MiddleName12869",LastName12869 +12870,12870,"FirstName12870 MiddleName12870",LastName12870 +12871,12871,"FirstName12871 MiddleName12871",LastName12871 
+12872,12872,"FirstName12872 MiddleName12872",LastName12872 +12873,12873,"FirstName12873 MiddleName12873",LastName12873 +12874,12874,"FirstName12874 MiddleName12874",LastName12874 +12875,12875,"FirstName12875 MiddleName12875",LastName12875 +12876,12876,"FirstName12876 MiddleName12876",LastName12876 +12877,12877,"FirstName12877 MiddleName12877",LastName12877 +12878,12878,"FirstName12878 MiddleName12878",LastName12878 +12879,12879,"FirstName12879 MiddleName12879",LastName12879 +12880,12880,"FirstName12880 MiddleName12880",LastName12880 +12881,12881,"FirstName12881 MiddleName12881",LastName12881 +12882,12882,"FirstName12882 MiddleName12882",LastName12882 +12883,12883,"FirstName12883 MiddleName12883",LastName12883 +12884,12884,"FirstName12884 MiddleName12884",LastName12884 +12885,12885,"FirstName12885 MiddleName12885",LastName12885 +12886,12886,"FirstName12886 MiddleName12886",LastName12886 +12887,12887,"FirstName12887 MiddleName12887",LastName12887 +12888,12888,"FirstName12888 MiddleName12888",LastName12888 +12889,12889,"FirstName12889 MiddleName12889",LastName12889 +12890,12890,"FirstName12890 MiddleName12890",LastName12890 +12891,12891,"FirstName12891 MiddleName12891",LastName12891 +12892,12892,"FirstName12892 MiddleName12892",LastName12892 +12893,12893,"FirstName12893 MiddleName12893",LastName12893 +12894,12894,"FirstName12894 MiddleName12894",LastName12894 +12895,12895,"FirstName12895 MiddleName12895",LastName12895 +12896,12896,"FirstName12896 MiddleName12896",LastName12896 +12897,12897,"FirstName12897 MiddleName12897",LastName12897 +12898,12898,"FirstName12898 MiddleName12898",LastName12898 +12899,12899,"FirstName12899 MiddleName12899",LastName12899 +12900,12900,"FirstName12900 MiddleName12900",LastName12900 +12901,12901,"FirstName12901 MiddleName12901",LastName12901 +12902,12902,"FirstName12902 MiddleName12902",LastName12902 +12903,12903,"FirstName12903 MiddleName12903",LastName12903 +12904,12904,"FirstName12904 MiddleName12904",LastName12904 
+12905,12905,"FirstName12905 MiddleName12905",LastName12905 +12906,12906,"FirstName12906 MiddleName12906",LastName12906 +12907,12907,"FirstName12907 MiddleName12907",LastName12907 +12908,12908,"FirstName12908 MiddleName12908",LastName12908 +12909,12909,"FirstName12909 MiddleName12909",LastName12909 +12910,12910,"FirstName12910 MiddleName12910",LastName12910 +12911,12911,"FirstName12911 MiddleName12911",LastName12911 +12912,12912,"FirstName12912 MiddleName12912",LastName12912 +12913,12913,"FirstName12913 MiddleName12913",LastName12913 +12914,12914,"FirstName12914 MiddleName12914",LastName12914 +12915,12915,"FirstName12915 MiddleName12915",LastName12915 +12916,12916,"FirstName12916 MiddleName12916",LastName12916 +12917,12917,"FirstName12917 MiddleName12917",LastName12917 +12918,12918,"FirstName12918 MiddleName12918",LastName12918 +12919,12919,"FirstName12919 MiddleName12919",LastName12919 +12920,12920,"FirstName12920 MiddleName12920",LastName12920 +12921,12921,"FirstName12921 MiddleName12921",LastName12921 +12922,12922,"FirstName12922 MiddleName12922",LastName12922 +12923,12923,"FirstName12923 MiddleName12923",LastName12923 +12924,12924,"FirstName12924 MiddleName12924",LastName12924 +12925,12925,"FirstName12925 MiddleName12925",LastName12925 +12926,12926,"FirstName12926 MiddleName12926",LastName12926 +12927,12927,"FirstName12927 MiddleName12927",LastName12927 +12928,12928,"FirstName12928 MiddleName12928",LastName12928 +12929,12929,"FirstName12929 MiddleName12929",LastName12929 +12930,12930,"FirstName12930 MiddleName12930",LastName12930 +12931,12931,"FirstName12931 MiddleName12931",LastName12931 +12932,12932,"FirstName12932 MiddleName12932",LastName12932 +12933,12933,"FirstName12933 MiddleName12933",LastName12933 +12934,12934,"FirstName12934 MiddleName12934",LastName12934 +12935,12935,"FirstName12935 MiddleName12935",LastName12935 +12936,12936,"FirstName12936 MiddleName12936",LastName12936 +12937,12937,"FirstName12937 MiddleName12937",LastName12937 
+12938,12938,"FirstName12938 MiddleName12938",LastName12938 +12939,12939,"FirstName12939 MiddleName12939",LastName12939 +12940,12940,"FirstName12940 MiddleName12940",LastName12940 +12941,12941,"FirstName12941 MiddleName12941",LastName12941 +12942,12942,"FirstName12942 MiddleName12942",LastName12942 +12943,12943,"FirstName12943 MiddleName12943",LastName12943 +12944,12944,"FirstName12944 MiddleName12944",LastName12944 +12945,12945,"FirstName12945 MiddleName12945",LastName12945 +12946,12946,"FirstName12946 MiddleName12946",LastName12946 +12947,12947,"FirstName12947 MiddleName12947",LastName12947 +12948,12948,"FirstName12948 MiddleName12948",LastName12948 +12949,12949,"FirstName12949 MiddleName12949",LastName12949 +12950,12950,"FirstName12950 MiddleName12950",LastName12950 +12951,12951,"FirstName12951 MiddleName12951",LastName12951 +12952,12952,"FirstName12952 MiddleName12952",LastName12952 +12953,12953,"FirstName12953 MiddleName12953",LastName12953 +12954,12954,"FirstName12954 MiddleName12954",LastName12954 +12955,12955,"FirstName12955 MiddleName12955",LastName12955 +12956,12956,"FirstName12956 MiddleName12956",LastName12956 +12957,12957,"FirstName12957 MiddleName12957",LastName12957 +12958,12958,"FirstName12958 MiddleName12958",LastName12958 +12959,12959,"FirstName12959 MiddleName12959",LastName12959 +12960,12960,"FirstName12960 MiddleName12960",LastName12960 +12961,12961,"FirstName12961 MiddleName12961",LastName12961 +12962,12962,"FirstName12962 MiddleName12962",LastName12962 +12963,12963,"FirstName12963 MiddleName12963",LastName12963 +12964,12964,"FirstName12964 MiddleName12964",LastName12964 +12965,12965,"FirstName12965 MiddleName12965",LastName12965 +12966,12966,"FirstName12966 MiddleName12966",LastName12966 +12967,12967,"FirstName12967 MiddleName12967",LastName12967 +12968,12968,"FirstName12968 MiddleName12968",LastName12968 +12969,12969,"FirstName12969 MiddleName12969",LastName12969 +12970,12970,"FirstName12970 MiddleName12970",LastName12970 
+12971,12971,"FirstName12971 MiddleName12971",LastName12971 +12972,12972,"FirstName12972 MiddleName12972",LastName12972 +12973,12973,"FirstName12973 MiddleName12973",LastName12973 +12974,12974,"FirstName12974 MiddleName12974",LastName12974 +12975,12975,"FirstName12975 MiddleName12975",LastName12975 +12976,12976,"FirstName12976 MiddleName12976",LastName12976 +12977,12977,"FirstName12977 MiddleName12977",LastName12977 +12978,12978,"FirstName12978 MiddleName12978",LastName12978 +12979,12979,"FirstName12979 MiddleName12979",LastName12979 +12980,12980,"FirstName12980 MiddleName12980",LastName12980 +12981,12981,"FirstName12981 MiddleName12981",LastName12981 +12982,12982,"FirstName12982 MiddleName12982",LastName12982 +12983,12983,"FirstName12983 MiddleName12983",LastName12983 +12984,12984,"FirstName12984 MiddleName12984",LastName12984 +12985,12985,"FirstName12985 MiddleName12985",LastName12985 +12986,12986,"FirstName12986 MiddleName12986",LastName12986 +12987,12987,"FirstName12987 MiddleName12987",LastName12987 +12988,12988,"FirstName12988 MiddleName12988",LastName12988 +12989,12989,"FirstName12989 MiddleName12989",LastName12989 +12990,12990,"FirstName12990 MiddleName12990",LastName12990 +12991,12991,"FirstName12991 MiddleName12991",LastName12991 +12992,12992,"FirstName12992 MiddleName12992",LastName12992 +12993,12993,"FirstName12993 MiddleName12993",LastName12993 +12994,12994,"FirstName12994 MiddleName12994",LastName12994 +12995,12995,"FirstName12995 MiddleName12995",LastName12995 +12996,12996,"FirstName12996 MiddleName12996",LastName12996 +12997,12997,"FirstName12997 MiddleName12997",LastName12997 +12998,12998,"FirstName12998 MiddleName12998",LastName12998 +12999,12999,"FirstName12999 MiddleName12999",LastName12999 +13000,13000,"FirstName13000 MiddleName13000",LastName13000 +13001,13001,"FirstName13001 MiddleName13001",LastName13001 +13002,13002,"FirstName13002 MiddleName13002",LastName13002 +13003,13003,"FirstName13003 MiddleName13003",LastName13003 
+13004,13004,"FirstName13004 MiddleName13004",LastName13004 +13005,13005,"FirstName13005 MiddleName13005",LastName13005 +13006,13006,"FirstName13006 MiddleName13006",LastName13006 +13007,13007,"FirstName13007 MiddleName13007",LastName13007 +13008,13008,"FirstName13008 MiddleName13008",LastName13008 +13009,13009,"FirstName13009 MiddleName13009",LastName13009 +13010,13010,"FirstName13010 MiddleName13010",LastName13010 +13011,13011,"FirstName13011 MiddleName13011",LastName13011 +13012,13012,"FirstName13012 MiddleName13012",LastName13012 +13013,13013,"FirstName13013 MiddleName13013",LastName13013 +13014,13014,"FirstName13014 MiddleName13014",LastName13014 +13015,13015,"FirstName13015 MiddleName13015",LastName13015 +13016,13016,"FirstName13016 MiddleName13016",LastName13016 +13017,13017,"FirstName13017 MiddleName13017",LastName13017 +13018,13018,"FirstName13018 MiddleName13018",LastName13018 +13019,13019,"FirstName13019 MiddleName13019",LastName13019 +13020,13020,"FirstName13020 MiddleName13020",LastName13020 +13021,13021,"FirstName13021 MiddleName13021",LastName13021 +13022,13022,"FirstName13022 MiddleName13022",LastName13022 +13023,13023,"FirstName13023 MiddleName13023",LastName13023 +13024,13024,"FirstName13024 MiddleName13024",LastName13024 +13025,13025,"FirstName13025 MiddleName13025",LastName13025 +13026,13026,"FirstName13026 MiddleName13026",LastName13026 +13027,13027,"FirstName13027 MiddleName13027",LastName13027 +13028,13028,"FirstName13028 MiddleName13028",LastName13028 +13029,13029,"FirstName13029 MiddleName13029",LastName13029 +13030,13030,"FirstName13030 MiddleName13030",LastName13030 +13031,13031,"FirstName13031 MiddleName13031",LastName13031 +13032,13032,"FirstName13032 MiddleName13032",LastName13032 +13033,13033,"FirstName13033 MiddleName13033",LastName13033 +13034,13034,"FirstName13034 MiddleName13034",LastName13034 +13035,13035,"FirstName13035 MiddleName13035",LastName13035 +13036,13036,"FirstName13036 MiddleName13036",LastName13036 
+13037,13037,"FirstName13037 MiddleName13037",LastName13037 +13038,13038,"FirstName13038 MiddleName13038",LastName13038 +13039,13039,"FirstName13039 MiddleName13039",LastName13039 +13040,13040,"FirstName13040 MiddleName13040",LastName13040 +13041,13041,"FirstName13041 MiddleName13041",LastName13041 +13042,13042,"FirstName13042 MiddleName13042",LastName13042 +13043,13043,"FirstName13043 MiddleName13043",LastName13043 +13044,13044,"FirstName13044 MiddleName13044",LastName13044 +13045,13045,"FirstName13045 MiddleName13045",LastName13045 +13046,13046,"FirstName13046 MiddleName13046",LastName13046 +13047,13047,"FirstName13047 MiddleName13047",LastName13047 +13048,13048,"FirstName13048 MiddleName13048",LastName13048 +13049,13049,"FirstName13049 MiddleName13049",LastName13049 +13050,13050,"FirstName13050 MiddleName13050",LastName13050 +13051,13051,"FirstName13051 MiddleName13051",LastName13051 +13052,13052,"FirstName13052 MiddleName13052",LastName13052 +13053,13053,"FirstName13053 MiddleName13053",LastName13053 +13054,13054,"FirstName13054 MiddleName13054",LastName13054 +13055,13055,"FirstName13055 MiddleName13055",LastName13055 +13056,13056,"FirstName13056 MiddleName13056",LastName13056 +13057,13057,"FirstName13057 MiddleName13057",LastName13057 +13058,13058,"FirstName13058 MiddleName13058",LastName13058 +13059,13059,"FirstName13059 MiddleName13059",LastName13059 +13060,13060,"FirstName13060 MiddleName13060",LastName13060 +13061,13061,"FirstName13061 MiddleName13061",LastName13061 +13062,13062,"FirstName13062 MiddleName13062",LastName13062 +13063,13063,"FirstName13063 MiddleName13063",LastName13063 +13064,13064,"FirstName13064 MiddleName13064",LastName13064 +13065,13065,"FirstName13065 MiddleName13065",LastName13065 +13066,13066,"FirstName13066 MiddleName13066",LastName13066 +13067,13067,"FirstName13067 MiddleName13067",LastName13067 +13068,13068,"FirstName13068 MiddleName13068",LastName13068 +13069,13069,"FirstName13069 MiddleName13069",LastName13069 
+13070,13070,"FirstName13070 MiddleName13070",LastName13070 +13071,13071,"FirstName13071 MiddleName13071",LastName13071 +13072,13072,"FirstName13072 MiddleName13072",LastName13072 +13073,13073,"FirstName13073 MiddleName13073",LastName13073 +13074,13074,"FirstName13074 MiddleName13074",LastName13074 +13075,13075,"FirstName13075 MiddleName13075",LastName13075 +13076,13076,"FirstName13076 MiddleName13076",LastName13076 +13077,13077,"FirstName13077 MiddleName13077",LastName13077 +13078,13078,"FirstName13078 MiddleName13078",LastName13078 +13079,13079,"FirstName13079 MiddleName13079",LastName13079 +13080,13080,"FirstName13080 MiddleName13080",LastName13080 +13081,13081,"FirstName13081 MiddleName13081",LastName13081 +13082,13082,"FirstName13082 MiddleName13082",LastName13082 +13083,13083,"FirstName13083 MiddleName13083",LastName13083 +13084,13084,"FirstName13084 MiddleName13084",LastName13084 +13085,13085,"FirstName13085 MiddleName13085",LastName13085 +13086,13086,"FirstName13086 MiddleName13086",LastName13086 +13087,13087,"FirstName13087 MiddleName13087",LastName13087 +13088,13088,"FirstName13088 MiddleName13088",LastName13088 +13089,13089,"FirstName13089 MiddleName13089",LastName13089 +13090,13090,"FirstName13090 MiddleName13090",LastName13090 +13091,13091,"FirstName13091 MiddleName13091",LastName13091 +13092,13092,"FirstName13092 MiddleName13092",LastName13092 +13093,13093,"FirstName13093 MiddleName13093",LastName13093 +13094,13094,"FirstName13094 MiddleName13094",LastName13094 +13095,13095,"FirstName13095 MiddleName13095",LastName13095 +13096,13096,"FirstName13096 MiddleName13096",LastName13096 +13097,13097,"FirstName13097 MiddleName13097",LastName13097 +13098,13098,"FirstName13098 MiddleName13098",LastName13098 +13099,13099,"FirstName13099 MiddleName13099",LastName13099 +13100,13100,"FirstName13100 MiddleName13100",LastName13100 +13101,13101,"FirstName13101 MiddleName13101",LastName13101 +13102,13102,"FirstName13102 MiddleName13102",LastName13102 
+13103,13103,"FirstName13103 MiddleName13103",LastName13103 +13104,13104,"FirstName13104 MiddleName13104",LastName13104 +13105,13105,"FirstName13105 MiddleName13105",LastName13105 +13106,13106,"FirstName13106 MiddleName13106",LastName13106 +13107,13107,"FirstName13107 MiddleName13107",LastName13107 +13108,13108,"FirstName13108 MiddleName13108",LastName13108 +13109,13109,"FirstName13109 MiddleName13109",LastName13109 +13110,13110,"FirstName13110 MiddleName13110",LastName13110 +13111,13111,"FirstName13111 MiddleName13111",LastName13111 +13112,13112,"FirstName13112 MiddleName13112",LastName13112 +13113,13113,"FirstName13113 MiddleName13113",LastName13113 +13114,13114,"FirstName13114 MiddleName13114",LastName13114 +13115,13115,"FirstName13115 MiddleName13115",LastName13115 +13116,13116,"FirstName13116 MiddleName13116",LastName13116 +13117,13117,"FirstName13117 MiddleName13117",LastName13117 +13118,13118,"FirstName13118 MiddleName13118",LastName13118 +13119,13119,"FirstName13119 MiddleName13119",LastName13119 +13120,13120,"FirstName13120 MiddleName13120",LastName13120 +13121,13121,"FirstName13121 MiddleName13121",LastName13121 +13122,13122,"FirstName13122 MiddleName13122",LastName13122 +13123,13123,"FirstName13123 MiddleName13123",LastName13123 +13124,13124,"FirstName13124 MiddleName13124",LastName13124 +13125,13125,"FirstName13125 MiddleName13125",LastName13125 +13126,13126,"FirstName13126 MiddleName13126",LastName13126 +13127,13127,"FirstName13127 MiddleName13127",LastName13127 +13128,13128,"FirstName13128 MiddleName13128",LastName13128 +13129,13129,"FirstName13129 MiddleName13129",LastName13129 +13130,13130,"FirstName13130 MiddleName13130",LastName13130 +13131,13131,"FirstName13131 MiddleName13131",LastName13131 +13132,13132,"FirstName13132 MiddleName13132",LastName13132 +13133,13133,"FirstName13133 MiddleName13133",LastName13133 +13134,13134,"FirstName13134 MiddleName13134",LastName13134 +13135,13135,"FirstName13135 MiddleName13135",LastName13135 
+13136,13136,"FirstName13136 MiddleName13136",LastName13136 +13137,13137,"FirstName13137 MiddleName13137",LastName13137 +13138,13138,"FirstName13138 MiddleName13138",LastName13138 +13139,13139,"FirstName13139 MiddleName13139",LastName13139 +13140,13140,"FirstName13140 MiddleName13140",LastName13140 +13141,13141,"FirstName13141 MiddleName13141",LastName13141 +13142,13142,"FirstName13142 MiddleName13142",LastName13142 +13143,13143,"FirstName13143 MiddleName13143",LastName13143 +13144,13144,"FirstName13144 MiddleName13144",LastName13144 +13145,13145,"FirstName13145 MiddleName13145",LastName13145 +13146,13146,"FirstName13146 MiddleName13146",LastName13146 +13147,13147,"FirstName13147 MiddleName13147",LastName13147 +13148,13148,"FirstName13148 MiddleName13148",LastName13148 +13149,13149,"FirstName13149 MiddleName13149",LastName13149 +13150,13150,"FirstName13150 MiddleName13150",LastName13150 +13151,13151,"FirstName13151 MiddleName13151",LastName13151 +13152,13152,"FirstName13152 MiddleName13152",LastName13152 +13153,13153,"FirstName13153 MiddleName13153",LastName13153 +13154,13154,"FirstName13154 MiddleName13154",LastName13154 +13155,13155,"FirstName13155 MiddleName13155",LastName13155 +13156,13156,"FirstName13156 MiddleName13156",LastName13156 +13157,13157,"FirstName13157 MiddleName13157",LastName13157 +13158,13158,"FirstName13158 MiddleName13158",LastName13158 +13159,13159,"FirstName13159 MiddleName13159",LastName13159 +13160,13160,"FirstName13160 MiddleName13160",LastName13160 +13161,13161,"FirstName13161 MiddleName13161",LastName13161 +13162,13162,"FirstName13162 MiddleName13162",LastName13162 +13163,13163,"FirstName13163 MiddleName13163",LastName13163 +13164,13164,"FirstName13164 MiddleName13164",LastName13164 +13165,13165,"FirstName13165 MiddleName13165",LastName13165 +13166,13166,"FirstName13166 MiddleName13166",LastName13166 +13167,13167,"FirstName13167 MiddleName13167",LastName13167 +13168,13168,"FirstName13168 MiddleName13168",LastName13168 
+13169,13169,"FirstName13169 MiddleName13169",LastName13169 +13170,13170,"FirstName13170 MiddleName13170",LastName13170 +13171,13171,"FirstName13171 MiddleName13171",LastName13171 +13172,13172,"FirstName13172 MiddleName13172",LastName13172 +13173,13173,"FirstName13173 MiddleName13173",LastName13173 +13174,13174,"FirstName13174 MiddleName13174",LastName13174 +13175,13175,"FirstName13175 MiddleName13175",LastName13175 +13176,13176,"FirstName13176 MiddleName13176",LastName13176 +13177,13177,"FirstName13177 MiddleName13177",LastName13177 +13178,13178,"FirstName13178 MiddleName13178",LastName13178 +13179,13179,"FirstName13179 MiddleName13179",LastName13179 +13180,13180,"FirstName13180 MiddleName13180",LastName13180 +13181,13181,"FirstName13181 MiddleName13181",LastName13181 +13182,13182,"FirstName13182 MiddleName13182",LastName13182 +13183,13183,"FirstName13183 MiddleName13183",LastName13183 +13184,13184,"FirstName13184 MiddleName13184",LastName13184 +13185,13185,"FirstName13185 MiddleName13185",LastName13185 +13186,13186,"FirstName13186 MiddleName13186",LastName13186 +13187,13187,"FirstName13187 MiddleName13187",LastName13187 +13188,13188,"FirstName13188 MiddleName13188",LastName13188 +13189,13189,"FirstName13189 MiddleName13189",LastName13189 +13190,13190,"FirstName13190 MiddleName13190",LastName13190 +13191,13191,"FirstName13191 MiddleName13191",LastName13191 +13192,13192,"FirstName13192 MiddleName13192",LastName13192 +13193,13193,"FirstName13193 MiddleName13193",LastName13193 +13194,13194,"FirstName13194 MiddleName13194",LastName13194 +13195,13195,"FirstName13195 MiddleName13195",LastName13195 +13196,13196,"FirstName13196 MiddleName13196",LastName13196 +13197,13197,"FirstName13197 MiddleName13197",LastName13197 +13198,13198,"FirstName13198 MiddleName13198",LastName13198 +13199,13199,"FirstName13199 MiddleName13199",LastName13199 +13200,13200,"FirstName13200 MiddleName13200",LastName13200 +13201,13201,"FirstName13201 MiddleName13201",LastName13201 
+13202,13202,"FirstName13202 MiddleName13202",LastName13202 +13203,13203,"FirstName13203 MiddleName13203",LastName13203 +13204,13204,"FirstName13204 MiddleName13204",LastName13204 +13205,13205,"FirstName13205 MiddleName13205",LastName13205 +13206,13206,"FirstName13206 MiddleName13206",LastName13206 +13207,13207,"FirstName13207 MiddleName13207",LastName13207 +13208,13208,"FirstName13208 MiddleName13208",LastName13208 +13209,13209,"FirstName13209 MiddleName13209",LastName13209 +13210,13210,"FirstName13210 MiddleName13210",LastName13210 +13211,13211,"FirstName13211 MiddleName13211",LastName13211 +13212,13212,"FirstName13212 MiddleName13212",LastName13212 +13213,13213,"FirstName13213 MiddleName13213",LastName13213 +13214,13214,"FirstName13214 MiddleName13214",LastName13214 +13215,13215,"FirstName13215 MiddleName13215",LastName13215 +13216,13216,"FirstName13216 MiddleName13216",LastName13216 +13217,13217,"FirstName13217 MiddleName13217",LastName13217 +13218,13218,"FirstName13218 MiddleName13218",LastName13218 +13219,13219,"FirstName13219 MiddleName13219",LastName13219 +13220,13220,"FirstName13220 MiddleName13220",LastName13220 +13221,13221,"FirstName13221 MiddleName13221",LastName13221 +13222,13222,"FirstName13222 MiddleName13222",LastName13222 +13223,13223,"FirstName13223 MiddleName13223",LastName13223 +13224,13224,"FirstName13224 MiddleName13224",LastName13224 +13225,13225,"FirstName13225 MiddleName13225",LastName13225 +13226,13226,"FirstName13226 MiddleName13226",LastName13226 +13227,13227,"FirstName13227 MiddleName13227",LastName13227 +13228,13228,"FirstName13228 MiddleName13228",LastName13228 +13229,13229,"FirstName13229 MiddleName13229",LastName13229 +13230,13230,"FirstName13230 MiddleName13230",LastName13230 +13231,13231,"FirstName13231 MiddleName13231",LastName13231 +13232,13232,"FirstName13232 MiddleName13232",LastName13232 +13233,13233,"FirstName13233 MiddleName13233",LastName13233 +13234,13234,"FirstName13234 MiddleName13234",LastName13234 
+13235,13235,"FirstName13235 MiddleName13235",LastName13235 +13236,13236,"FirstName13236 MiddleName13236",LastName13236 +13237,13237,"FirstName13237 MiddleName13237",LastName13237 +13238,13238,"FirstName13238 MiddleName13238",LastName13238 +13239,13239,"FirstName13239 MiddleName13239",LastName13239 +13240,13240,"FirstName13240 MiddleName13240",LastName13240 +13241,13241,"FirstName13241 MiddleName13241",LastName13241 +13242,13242,"FirstName13242 MiddleName13242",LastName13242 +13243,13243,"FirstName13243 MiddleName13243",LastName13243 +13244,13244,"FirstName13244 MiddleName13244",LastName13244 +13245,13245,"FirstName13245 MiddleName13245",LastName13245 +13246,13246,"FirstName13246 MiddleName13246",LastName13246 +13247,13247,"FirstName13247 MiddleName13247",LastName13247 +13248,13248,"FirstName13248 MiddleName13248",LastName13248 +13249,13249,"FirstName13249 MiddleName13249",LastName13249 +13250,13250,"FirstName13250 MiddleName13250",LastName13250 +13251,13251,"FirstName13251 MiddleName13251",LastName13251 +13252,13252,"FirstName13252 MiddleName13252",LastName13252 +13253,13253,"FirstName13253 MiddleName13253",LastName13253 +13254,13254,"FirstName13254 MiddleName13254",LastName13254 +13255,13255,"FirstName13255 MiddleName13255",LastName13255 +13256,13256,"FirstName13256 MiddleName13256",LastName13256 +13257,13257,"FirstName13257 MiddleName13257",LastName13257 +13258,13258,"FirstName13258 MiddleName13258",LastName13258 +13259,13259,"FirstName13259 MiddleName13259",LastName13259 +13260,13260,"FirstName13260 MiddleName13260",LastName13260 +13261,13261,"FirstName13261 MiddleName13261",LastName13261 +13262,13262,"FirstName13262 MiddleName13262",LastName13262 +13263,13263,"FirstName13263 MiddleName13263",LastName13263 +13264,13264,"FirstName13264 MiddleName13264",LastName13264 +13265,13265,"FirstName13265 MiddleName13265",LastName13265 +13266,13266,"FirstName13266 MiddleName13266",LastName13266 +13267,13267,"FirstName13267 MiddleName13267",LastName13267 
+13268,13268,"FirstName13268 MiddleName13268",LastName13268 +13269,13269,"FirstName13269 MiddleName13269",LastName13269 +13270,13270,"FirstName13270 MiddleName13270",LastName13270 +13271,13271,"FirstName13271 MiddleName13271",LastName13271 +13272,13272,"FirstName13272 MiddleName13272",LastName13272 +13273,13273,"FirstName13273 MiddleName13273",LastName13273 +13274,13274,"FirstName13274 MiddleName13274",LastName13274 +13275,13275,"FirstName13275 MiddleName13275",LastName13275 +13276,13276,"FirstName13276 MiddleName13276",LastName13276 +13277,13277,"FirstName13277 MiddleName13277",LastName13277 +13278,13278,"FirstName13278 MiddleName13278",LastName13278 +13279,13279,"FirstName13279 MiddleName13279",LastName13279 +13280,13280,"FirstName13280 MiddleName13280",LastName13280 +13281,13281,"FirstName13281 MiddleName13281",LastName13281 +13282,13282,"FirstName13282 MiddleName13282",LastName13282 +13283,13283,"FirstName13283 MiddleName13283",LastName13283 +13284,13284,"FirstName13284 MiddleName13284",LastName13284 +13285,13285,"FirstName13285 MiddleName13285",LastName13285 +13286,13286,"FirstName13286 MiddleName13286",LastName13286 +13287,13287,"FirstName13287 MiddleName13287",LastName13287 +13288,13288,"FirstName13288 MiddleName13288",LastName13288 +13289,13289,"FirstName13289 MiddleName13289",LastName13289 +13290,13290,"FirstName13290 MiddleName13290",LastName13290 +13291,13291,"FirstName13291 MiddleName13291",LastName13291 +13292,13292,"FirstName13292 MiddleName13292",LastName13292 +13293,13293,"FirstName13293 MiddleName13293",LastName13293 +13294,13294,"FirstName13294 MiddleName13294",LastName13294 +13295,13295,"FirstName13295 MiddleName13295",LastName13295 +13296,13296,"FirstName13296 MiddleName13296",LastName13296 +13297,13297,"FirstName13297 MiddleName13297",LastName13297 +13298,13298,"FirstName13298 MiddleName13298",LastName13298 +13299,13299,"FirstName13299 MiddleName13299",LastName13299 +13300,13300,"FirstName13300 MiddleName13300",LastName13300 
+13301,13301,"FirstName13301 MiddleName13301",LastName13301 +13302,13302,"FirstName13302 MiddleName13302",LastName13302 +13303,13303,"FirstName13303 MiddleName13303",LastName13303 +13304,13304,"FirstName13304 MiddleName13304",LastName13304 +13305,13305,"FirstName13305 MiddleName13305",LastName13305 +13306,13306,"FirstName13306 MiddleName13306",LastName13306 +13307,13307,"FirstName13307 MiddleName13307",LastName13307 +13308,13308,"FirstName13308 MiddleName13308",LastName13308 +13309,13309,"FirstName13309 MiddleName13309",LastName13309 +13310,13310,"FirstName13310 MiddleName13310",LastName13310 +13311,13311,"FirstName13311 MiddleName13311",LastName13311 +13312,13312,"FirstName13312 MiddleName13312",LastName13312 +13313,13313,"FirstName13313 MiddleName13313",LastName13313 +13314,13314,"FirstName13314 MiddleName13314",LastName13314 +13315,13315,"FirstName13315 MiddleName13315",LastName13315 +13316,13316,"FirstName13316 MiddleName13316",LastName13316 +13317,13317,"FirstName13317 MiddleName13317",LastName13317 +13318,13318,"FirstName13318 MiddleName13318",LastName13318 +13319,13319,"FirstName13319 MiddleName13319",LastName13319 +13320,13320,"FirstName13320 MiddleName13320",LastName13320 +13321,13321,"FirstName13321 MiddleName13321",LastName13321 +13322,13322,"FirstName13322 MiddleName13322",LastName13322 +13323,13323,"FirstName13323 MiddleName13323",LastName13323 +13324,13324,"FirstName13324 MiddleName13324",LastName13324 +13325,13325,"FirstName13325 MiddleName13325",LastName13325 +13326,13326,"FirstName13326 MiddleName13326",LastName13326 +13327,13327,"FirstName13327 MiddleName13327",LastName13327 +13328,13328,"FirstName13328 MiddleName13328",LastName13328 +13329,13329,"FirstName13329 MiddleName13329",LastName13329 +13330,13330,"FirstName13330 MiddleName13330",LastName13330 +13331,13331,"FirstName13331 MiddleName13331",LastName13331 +13332,13332,"FirstName13332 MiddleName13332",LastName13332 +13333,13333,"FirstName13333 MiddleName13333",LastName13333 
+13334,13334,"FirstName13334 MiddleName13334",LastName13334 +13335,13335,"FirstName13335 MiddleName13335",LastName13335 +13336,13336,"FirstName13336 MiddleName13336",LastName13336 +13337,13337,"FirstName13337 MiddleName13337",LastName13337 +13338,13338,"FirstName13338 MiddleName13338",LastName13338 +13339,13339,"FirstName13339 MiddleName13339",LastName13339 +13340,13340,"FirstName13340 MiddleName13340",LastName13340 +13341,13341,"FirstName13341 MiddleName13341",LastName13341 +13342,13342,"FirstName13342 MiddleName13342",LastName13342 +13343,13343,"FirstName13343 MiddleName13343",LastName13343 +13344,13344,"FirstName13344 MiddleName13344",LastName13344 +13345,13345,"FirstName13345 MiddleName13345",LastName13345 +13346,13346,"FirstName13346 MiddleName13346",LastName13346 +13347,13347,"FirstName13347 MiddleName13347",LastName13347 +13348,13348,"FirstName13348 MiddleName13348",LastName13348 +13349,13349,"FirstName13349 MiddleName13349",LastName13349 +13350,13350,"FirstName13350 MiddleName13350",LastName13350 +13351,13351,"FirstName13351 MiddleName13351",LastName13351 +13352,13352,"FirstName13352 MiddleName13352",LastName13352 +13353,13353,"FirstName13353 MiddleName13353",LastName13353 +13354,13354,"FirstName13354 MiddleName13354",LastName13354 +13355,13355,"FirstName13355 MiddleName13355",LastName13355 +13356,13356,"FirstName13356 MiddleName13356",LastName13356 +13357,13357,"FirstName13357 MiddleName13357",LastName13357 +13358,13358,"FirstName13358 MiddleName13358",LastName13358 +13359,13359,"FirstName13359 MiddleName13359",LastName13359 +13360,13360,"FirstName13360 MiddleName13360",LastName13360 +13361,13361,"FirstName13361 MiddleName13361",LastName13361 +13362,13362,"FirstName13362 MiddleName13362",LastName13362 +13363,13363,"FirstName13363 MiddleName13363",LastName13363 +13364,13364,"FirstName13364 MiddleName13364",LastName13364 +13365,13365,"FirstName13365 MiddleName13365",LastName13365 +13366,13366,"FirstName13366 MiddleName13366",LastName13366 
+13367,13367,"FirstName13367 MiddleName13367",LastName13367 +13368,13368,"FirstName13368 MiddleName13368",LastName13368 +13369,13369,"FirstName13369 MiddleName13369",LastName13369 +13370,13370,"FirstName13370 MiddleName13370",LastName13370 +13371,13371,"FirstName13371 MiddleName13371",LastName13371 +13372,13372,"FirstName13372 MiddleName13372",LastName13372 +13373,13373,"FirstName13373 MiddleName13373",LastName13373 +13374,13374,"FirstName13374 MiddleName13374",LastName13374 +13375,13375,"FirstName13375 MiddleName13375",LastName13375 +13376,13376,"FirstName13376 MiddleName13376",LastName13376 +13377,13377,"FirstName13377 MiddleName13377",LastName13377 +13378,13378,"FirstName13378 MiddleName13378",LastName13378 +13379,13379,"FirstName13379 MiddleName13379",LastName13379 +13380,13380,"FirstName13380 MiddleName13380",LastName13380 +13381,13381,"FirstName13381 MiddleName13381",LastName13381 +13382,13382,"FirstName13382 MiddleName13382",LastName13382 +13383,13383,"FirstName13383 MiddleName13383",LastName13383 +13384,13384,"FirstName13384 MiddleName13384",LastName13384 +13385,13385,"FirstName13385 MiddleName13385",LastName13385 +13386,13386,"FirstName13386 MiddleName13386",LastName13386 +13387,13387,"FirstName13387 MiddleName13387",LastName13387 +13388,13388,"FirstName13388 MiddleName13388",LastName13388 +13389,13389,"FirstName13389 MiddleName13389",LastName13389 +13390,13390,"FirstName13390 MiddleName13390",LastName13390 +13391,13391,"FirstName13391 MiddleName13391",LastName13391 +13392,13392,"FirstName13392 MiddleName13392",LastName13392 +13393,13393,"FirstName13393 MiddleName13393",LastName13393 +13394,13394,"FirstName13394 MiddleName13394",LastName13394 +13395,13395,"FirstName13395 MiddleName13395",LastName13395 +13396,13396,"FirstName13396 MiddleName13396",LastName13396 +13397,13397,"FirstName13397 MiddleName13397",LastName13397 +13398,13398,"FirstName13398 MiddleName13398",LastName13398 +13399,13399,"FirstName13399 MiddleName13399",LastName13399 
+13400,13400,"FirstName13400 MiddleName13400",LastName13400 +13401,13401,"FirstName13401 MiddleName13401",LastName13401 +13402,13402,"FirstName13402 MiddleName13402",LastName13402 +13403,13403,"FirstName13403 MiddleName13403",LastName13403 +13404,13404,"FirstName13404 MiddleName13404",LastName13404 +13405,13405,"FirstName13405 MiddleName13405",LastName13405 +13406,13406,"FirstName13406 MiddleName13406",LastName13406 +13407,13407,"FirstName13407 MiddleName13407",LastName13407 +13408,13408,"FirstName13408 MiddleName13408",LastName13408 +13409,13409,"FirstName13409 MiddleName13409",LastName13409 +13410,13410,"FirstName13410 MiddleName13410",LastName13410 +13411,13411,"FirstName13411 MiddleName13411",LastName13411 +13412,13412,"FirstName13412 MiddleName13412",LastName13412 +13413,13413,"FirstName13413 MiddleName13413",LastName13413 +13414,13414,"FirstName13414 MiddleName13414",LastName13414 +13415,13415,"FirstName13415 MiddleName13415",LastName13415 +13416,13416,"FirstName13416 MiddleName13416",LastName13416 +13417,13417,"FirstName13417 MiddleName13417",LastName13417 +13418,13418,"FirstName13418 MiddleName13418",LastName13418 +13419,13419,"FirstName13419 MiddleName13419",LastName13419 +13420,13420,"FirstName13420 MiddleName13420",LastName13420 +13421,13421,"FirstName13421 MiddleName13421",LastName13421 +13422,13422,"FirstName13422 MiddleName13422",LastName13422 +13423,13423,"FirstName13423 MiddleName13423",LastName13423 +13424,13424,"FirstName13424 MiddleName13424",LastName13424 +13425,13425,"FirstName13425 MiddleName13425",LastName13425 +13426,13426,"FirstName13426 MiddleName13426",LastName13426 +13427,13427,"FirstName13427 MiddleName13427",LastName13427 +13428,13428,"FirstName13428 MiddleName13428",LastName13428 +13429,13429,"FirstName13429 MiddleName13429",LastName13429 +13430,13430,"FirstName13430 MiddleName13430",LastName13430 +13431,13431,"FirstName13431 MiddleName13431",LastName13431 +13432,13432,"FirstName13432 MiddleName13432",LastName13432 
+13433,13433,"FirstName13433 MiddleName13433",LastName13433 +13434,13434,"FirstName13434 MiddleName13434",LastName13434 +13435,13435,"FirstName13435 MiddleName13435",LastName13435 +13436,13436,"FirstName13436 MiddleName13436",LastName13436 +13437,13437,"FirstName13437 MiddleName13437",LastName13437 +13438,13438,"FirstName13438 MiddleName13438",LastName13438 +13439,13439,"FirstName13439 MiddleName13439",LastName13439 +13440,13440,"FirstName13440 MiddleName13440",LastName13440 +13441,13441,"FirstName13441 MiddleName13441",LastName13441 +13442,13442,"FirstName13442 MiddleName13442",LastName13442 +13443,13443,"FirstName13443 MiddleName13443",LastName13443 +13444,13444,"FirstName13444 MiddleName13444",LastName13444 +13445,13445,"FirstName13445 MiddleName13445",LastName13445 +13446,13446,"FirstName13446 MiddleName13446",LastName13446 +13447,13447,"FirstName13447 MiddleName13447",LastName13447 +13448,13448,"FirstName13448 MiddleName13448",LastName13448 +13449,13449,"FirstName13449 MiddleName13449",LastName13449 +13450,13450,"FirstName13450 MiddleName13450",LastName13450 +13451,13451,"FirstName13451 MiddleName13451",LastName13451 +13452,13452,"FirstName13452 MiddleName13452",LastName13452 +13453,13453,"FirstName13453 MiddleName13453",LastName13453 +13454,13454,"FirstName13454 MiddleName13454",LastName13454 +13455,13455,"FirstName13455 MiddleName13455",LastName13455 +13456,13456,"FirstName13456 MiddleName13456",LastName13456 +13457,13457,"FirstName13457 MiddleName13457",LastName13457 +13458,13458,"FirstName13458 MiddleName13458",LastName13458 +13459,13459,"FirstName13459 MiddleName13459",LastName13459 +13460,13460,"FirstName13460 MiddleName13460",LastName13460 +13461,13461,"FirstName13461 MiddleName13461",LastName13461 +13462,13462,"FirstName13462 MiddleName13462",LastName13462 +13463,13463,"FirstName13463 MiddleName13463",LastName13463 +13464,13464,"FirstName13464 MiddleName13464",LastName13464 +13465,13465,"FirstName13465 MiddleName13465",LastName13465 
+13466,13466,"FirstName13466 MiddleName13466",LastName13466 +13467,13467,"FirstName13467 MiddleName13467",LastName13467 +13468,13468,"FirstName13468 MiddleName13468",LastName13468 +13469,13469,"FirstName13469 MiddleName13469",LastName13469 +13470,13470,"FirstName13470 MiddleName13470",LastName13470 +13471,13471,"FirstName13471 MiddleName13471",LastName13471 +13472,13472,"FirstName13472 MiddleName13472",LastName13472 +13473,13473,"FirstName13473 MiddleName13473",LastName13473 +13474,13474,"FirstName13474 MiddleName13474",LastName13474 +13475,13475,"FirstName13475 MiddleName13475",LastName13475 +13476,13476,"FirstName13476 MiddleName13476",LastName13476 +13477,13477,"FirstName13477 MiddleName13477",LastName13477 +13478,13478,"FirstName13478 MiddleName13478",LastName13478 +13479,13479,"FirstName13479 MiddleName13479",LastName13479 +13480,13480,"FirstName13480 MiddleName13480",LastName13480 +13481,13481,"FirstName13481 MiddleName13481",LastName13481 +13482,13482,"FirstName13482 MiddleName13482",LastName13482 +13483,13483,"FirstName13483 MiddleName13483",LastName13483 +13484,13484,"FirstName13484 MiddleName13484",LastName13484 +13485,13485,"FirstName13485 MiddleName13485",LastName13485 +13486,13486,"FirstName13486 MiddleName13486",LastName13486 +13487,13487,"FirstName13487 MiddleName13487",LastName13487 +13488,13488,"FirstName13488 MiddleName13488",LastName13488 +13489,13489,"FirstName13489 MiddleName13489",LastName13489 +13490,13490,"FirstName13490 MiddleName13490",LastName13490 +13491,13491,"FirstName13491 MiddleName13491",LastName13491 +13492,13492,"FirstName13492 MiddleName13492",LastName13492 +13493,13493,"FirstName13493 MiddleName13493",LastName13493 +13494,13494,"FirstName13494 MiddleName13494",LastName13494 +13495,13495,"FirstName13495 MiddleName13495",LastName13495 +13496,13496,"FirstName13496 MiddleName13496",LastName13496 +13497,13497,"FirstName13497 MiddleName13497",LastName13497 +13498,13498,"FirstName13498 MiddleName13498",LastName13498 
+13499,13499,"FirstName13499 MiddleName13499",LastName13499 +13500,13500,"FirstName13500 MiddleName13500",LastName13500 +13501,13501,"FirstName13501 MiddleName13501",LastName13501 +13502,13502,"FirstName13502 MiddleName13502",LastName13502 +13503,13503,"FirstName13503 MiddleName13503",LastName13503 +13504,13504,"FirstName13504 MiddleName13504",LastName13504 +13505,13505,"FirstName13505 MiddleName13505",LastName13505 +13506,13506,"FirstName13506 MiddleName13506",LastName13506 +13507,13507,"FirstName13507 MiddleName13507",LastName13507 +13508,13508,"FirstName13508 MiddleName13508",LastName13508 +13509,13509,"FirstName13509 MiddleName13509",LastName13509 +13510,13510,"FirstName13510 MiddleName13510",LastName13510 +13511,13511,"FirstName13511 MiddleName13511",LastName13511 +13512,13512,"FirstName13512 MiddleName13512",LastName13512 +13513,13513,"FirstName13513 MiddleName13513",LastName13513 +13514,13514,"FirstName13514 MiddleName13514",LastName13514 +13515,13515,"FirstName13515 MiddleName13515",LastName13515 +13516,13516,"FirstName13516 MiddleName13516",LastName13516 +13517,13517,"FirstName13517 MiddleName13517",LastName13517 +13518,13518,"FirstName13518 MiddleName13518",LastName13518 +13519,13519,"FirstName13519 MiddleName13519",LastName13519 +13520,13520,"FirstName13520 MiddleName13520",LastName13520 +13521,13521,"FirstName13521 MiddleName13521",LastName13521 +13522,13522,"FirstName13522 MiddleName13522",LastName13522 +13523,13523,"FirstName13523 MiddleName13523",LastName13523 +13524,13524,"FirstName13524 MiddleName13524",LastName13524 +13525,13525,"FirstName13525 MiddleName13525",LastName13525 +13526,13526,"FirstName13526 MiddleName13526",LastName13526 +13527,13527,"FirstName13527 MiddleName13527",LastName13527 +13528,13528,"FirstName13528 MiddleName13528",LastName13528 +13529,13529,"FirstName13529 MiddleName13529",LastName13529 +13530,13530,"FirstName13530 MiddleName13530",LastName13530 +13531,13531,"FirstName13531 MiddleName13531",LastName13531 
+13532,13532,"FirstName13532 MiddleName13532",LastName13532 +13533,13533,"FirstName13533 MiddleName13533",LastName13533 +13534,13534,"FirstName13534 MiddleName13534",LastName13534 +13535,13535,"FirstName13535 MiddleName13535",LastName13535 +13536,13536,"FirstName13536 MiddleName13536",LastName13536 +13537,13537,"FirstName13537 MiddleName13537",LastName13537 +13538,13538,"FirstName13538 MiddleName13538",LastName13538 +13539,13539,"FirstName13539 MiddleName13539",LastName13539 +13540,13540,"FirstName13540 MiddleName13540",LastName13540 +13541,13541,"FirstName13541 MiddleName13541",LastName13541 +13542,13542,"FirstName13542 MiddleName13542",LastName13542 +13543,13543,"FirstName13543 MiddleName13543",LastName13543 +13544,13544,"FirstName13544 MiddleName13544",LastName13544 +13545,13545,"FirstName13545 MiddleName13545",LastName13545 +13546,13546,"FirstName13546 MiddleName13546",LastName13546 +13547,13547,"FirstName13547 MiddleName13547",LastName13547 +13548,13548,"FirstName13548 MiddleName13548",LastName13548 +13549,13549,"FirstName13549 MiddleName13549",LastName13549 +13550,13550,"FirstName13550 MiddleName13550",LastName13550 +13551,13551,"FirstName13551 MiddleName13551",LastName13551 +13552,13552,"FirstName13552 MiddleName13552",LastName13552 +13553,13553,"FirstName13553 MiddleName13553",LastName13553 +13554,13554,"FirstName13554 MiddleName13554",LastName13554 +13555,13555,"FirstName13555 MiddleName13555",LastName13555 +13556,13556,"FirstName13556 MiddleName13556",LastName13556 +13557,13557,"FirstName13557 MiddleName13557",LastName13557 +13558,13558,"FirstName13558 MiddleName13558",LastName13558 +13559,13559,"FirstName13559 MiddleName13559",LastName13559 +13560,13560,"FirstName13560 MiddleName13560",LastName13560 +13561,13561,"FirstName13561 MiddleName13561",LastName13561 +13562,13562,"FirstName13562 MiddleName13562",LastName13562 +13563,13563,"FirstName13563 MiddleName13563",LastName13563 +13564,13564,"FirstName13564 MiddleName13564",LastName13564 
+13565,13565,"FirstName13565 MiddleName13565",LastName13565 +13566,13566,"FirstName13566 MiddleName13566",LastName13566 +13567,13567,"FirstName13567 MiddleName13567",LastName13567 +13568,13568,"FirstName13568 MiddleName13568",LastName13568 +13569,13569,"FirstName13569 MiddleName13569",LastName13569 +13570,13570,"FirstName13570 MiddleName13570",LastName13570 +13571,13571,"FirstName13571 MiddleName13571",LastName13571 +13572,13572,"FirstName13572 MiddleName13572",LastName13572 +13573,13573,"FirstName13573 MiddleName13573",LastName13573 +13574,13574,"FirstName13574 MiddleName13574",LastName13574 +13575,13575,"FirstName13575 MiddleName13575",LastName13575 +13576,13576,"FirstName13576 MiddleName13576",LastName13576 +13577,13577,"FirstName13577 MiddleName13577",LastName13577 +13578,13578,"FirstName13578 MiddleName13578",LastName13578 +13579,13579,"FirstName13579 MiddleName13579",LastName13579 +13580,13580,"FirstName13580 MiddleName13580",LastName13580 +13581,13581,"FirstName13581 MiddleName13581",LastName13581 +13582,13582,"FirstName13582 MiddleName13582",LastName13582 +13583,13583,"FirstName13583 MiddleName13583",LastName13583 +13584,13584,"FirstName13584 MiddleName13584",LastName13584 +13585,13585,"FirstName13585 MiddleName13585",LastName13585 +13586,13586,"FirstName13586 MiddleName13586",LastName13586 +13587,13587,"FirstName13587 MiddleName13587",LastName13587 +13588,13588,"FirstName13588 MiddleName13588",LastName13588 +13589,13589,"FirstName13589 MiddleName13589",LastName13589 +13590,13590,"FirstName13590 MiddleName13590",LastName13590 +13591,13591,"FirstName13591 MiddleName13591",LastName13591 +13592,13592,"FirstName13592 MiddleName13592",LastName13592 +13593,13593,"FirstName13593 MiddleName13593",LastName13593 +13594,13594,"FirstName13594 MiddleName13594",LastName13594 +13595,13595,"FirstName13595 MiddleName13595",LastName13595 +13596,13596,"FirstName13596 MiddleName13596",LastName13596 +13597,13597,"FirstName13597 MiddleName13597",LastName13597 
+13598,13598,"FirstName13598 MiddleName13598",LastName13598 +13599,13599,"FirstName13599 MiddleName13599",LastName13599 +13600,13600,"FirstName13600 MiddleName13600",LastName13600 +13601,13601,"FirstName13601 MiddleName13601",LastName13601 +13602,13602,"FirstName13602 MiddleName13602",LastName13602 +13603,13603,"FirstName13603 MiddleName13603",LastName13603 +13604,13604,"FirstName13604 MiddleName13604",LastName13604 +13605,13605,"FirstName13605 MiddleName13605",LastName13605 +13606,13606,"FirstName13606 MiddleName13606",LastName13606 +13607,13607,"FirstName13607 MiddleName13607",LastName13607 +13608,13608,"FirstName13608 MiddleName13608",LastName13608 +13609,13609,"FirstName13609 MiddleName13609",LastName13609 +13610,13610,"FirstName13610 MiddleName13610",LastName13610 +13611,13611,"FirstName13611 MiddleName13611",LastName13611 +13612,13612,"FirstName13612 MiddleName13612",LastName13612 +13613,13613,"FirstName13613 MiddleName13613",LastName13613 +13614,13614,"FirstName13614 MiddleName13614",LastName13614 +13615,13615,"FirstName13615 MiddleName13615",LastName13615 +13616,13616,"FirstName13616 MiddleName13616",LastName13616 +13617,13617,"FirstName13617 MiddleName13617",LastName13617 +13618,13618,"FirstName13618 MiddleName13618",LastName13618 +13619,13619,"FirstName13619 MiddleName13619",LastName13619 +13620,13620,"FirstName13620 MiddleName13620",LastName13620 +13621,13621,"FirstName13621 MiddleName13621",LastName13621 +13622,13622,"FirstName13622 MiddleName13622",LastName13622 +13623,13623,"FirstName13623 MiddleName13623",LastName13623 +13624,13624,"FirstName13624 MiddleName13624",LastName13624 +13625,13625,"FirstName13625 MiddleName13625",LastName13625 +13626,13626,"FirstName13626 MiddleName13626",LastName13626 +13627,13627,"FirstName13627 MiddleName13627",LastName13627 +13628,13628,"FirstName13628 MiddleName13628",LastName13628 +13629,13629,"FirstName13629 MiddleName13629",LastName13629 +13630,13630,"FirstName13630 MiddleName13630",LastName13630 
+13631,13631,"FirstName13631 MiddleName13631",LastName13631 +13632,13632,"FirstName13632 MiddleName13632",LastName13632 +13633,13633,"FirstName13633 MiddleName13633",LastName13633 +13634,13634,"FirstName13634 MiddleName13634",LastName13634 +13635,13635,"FirstName13635 MiddleName13635",LastName13635 +13636,13636,"FirstName13636 MiddleName13636",LastName13636 +13637,13637,"FirstName13637 MiddleName13637",LastName13637 +13638,13638,"FirstName13638 MiddleName13638",LastName13638 +13639,13639,"FirstName13639 MiddleName13639",LastName13639 +13640,13640,"FirstName13640 MiddleName13640",LastName13640 +13641,13641,"FirstName13641 MiddleName13641",LastName13641 +13642,13642,"FirstName13642 MiddleName13642",LastName13642 +13643,13643,"FirstName13643 MiddleName13643",LastName13643 +13644,13644,"FirstName13644 MiddleName13644",LastName13644 +13645,13645,"FirstName13645 MiddleName13645",LastName13645 +13646,13646,"FirstName13646 MiddleName13646",LastName13646 +13647,13647,"FirstName13647 MiddleName13647",LastName13647 +13648,13648,"FirstName13648 MiddleName13648",LastName13648 +13649,13649,"FirstName13649 MiddleName13649",LastName13649 +13650,13650,"FirstName13650 MiddleName13650",LastName13650 +13651,13651,"FirstName13651 MiddleName13651",LastName13651 +13652,13652,"FirstName13652 MiddleName13652",LastName13652 +13653,13653,"FirstName13653 MiddleName13653",LastName13653 +13654,13654,"FirstName13654 MiddleName13654",LastName13654 +13655,13655,"FirstName13655 MiddleName13655",LastName13655 +13656,13656,"FirstName13656 MiddleName13656",LastName13656 +13657,13657,"FirstName13657 MiddleName13657",LastName13657 +13658,13658,"FirstName13658 MiddleName13658",LastName13658 +13659,13659,"FirstName13659 MiddleName13659",LastName13659 +13660,13660,"FirstName13660 MiddleName13660",LastName13660 +13661,13661,"FirstName13661 MiddleName13661",LastName13661 +13662,13662,"FirstName13662 MiddleName13662",LastName13662 +13663,13663,"FirstName13663 MiddleName13663",LastName13663 
+13664,13664,"FirstName13664 MiddleName13664",LastName13664 +13665,13665,"FirstName13665 MiddleName13665",LastName13665 +13666,13666,"FirstName13666 MiddleName13666",LastName13666 +13667,13667,"FirstName13667 MiddleName13667",LastName13667 +13668,13668,"FirstName13668 MiddleName13668",LastName13668 +13669,13669,"FirstName13669 MiddleName13669",LastName13669 +13670,13670,"FirstName13670 MiddleName13670",LastName13670 +13671,13671,"FirstName13671 MiddleName13671",LastName13671 +13672,13672,"FirstName13672 MiddleName13672",LastName13672 +13673,13673,"FirstName13673 MiddleName13673",LastName13673 +13674,13674,"FirstName13674 MiddleName13674",LastName13674 +13675,13675,"FirstName13675 MiddleName13675",LastName13675 +13676,13676,"FirstName13676 MiddleName13676",LastName13676 +13677,13677,"FirstName13677 MiddleName13677",LastName13677 +13678,13678,"FirstName13678 MiddleName13678",LastName13678 +13679,13679,"FirstName13679 MiddleName13679",LastName13679 +13680,13680,"FirstName13680 MiddleName13680",LastName13680 +13681,13681,"FirstName13681 MiddleName13681",LastName13681 +13682,13682,"FirstName13682 MiddleName13682",LastName13682 +13683,13683,"FirstName13683 MiddleName13683",LastName13683 +13684,13684,"FirstName13684 MiddleName13684",LastName13684 +13685,13685,"FirstName13685 MiddleName13685",LastName13685 +13686,13686,"FirstName13686 MiddleName13686",LastName13686 +13687,13687,"FirstName13687 MiddleName13687",LastName13687 +13688,13688,"FirstName13688 MiddleName13688",LastName13688 +13689,13689,"FirstName13689 MiddleName13689",LastName13689 +13690,13690,"FirstName13690 MiddleName13690",LastName13690 +13691,13691,"FirstName13691 MiddleName13691",LastName13691 +13692,13692,"FirstName13692 MiddleName13692",LastName13692 +13693,13693,"FirstName13693 MiddleName13693",LastName13693 +13694,13694,"FirstName13694 MiddleName13694",LastName13694 +13695,13695,"FirstName13695 MiddleName13695",LastName13695 +13696,13696,"FirstName13696 MiddleName13696",LastName13696 
+13697,13697,"FirstName13697 MiddleName13697",LastName13697 +13698,13698,"FirstName13698 MiddleName13698",LastName13698 +13699,13699,"FirstName13699 MiddleName13699",LastName13699 +13700,13700,"FirstName13700 MiddleName13700",LastName13700 +13701,13701,"FirstName13701 MiddleName13701",LastName13701 +13702,13702,"FirstName13702 MiddleName13702",LastName13702 +13703,13703,"FirstName13703 MiddleName13703",LastName13703 +13704,13704,"FirstName13704 MiddleName13704",LastName13704 +13705,13705,"FirstName13705 MiddleName13705",LastName13705 +13706,13706,"FirstName13706 MiddleName13706",LastName13706 +13707,13707,"FirstName13707 MiddleName13707",LastName13707 +13708,13708,"FirstName13708 MiddleName13708",LastName13708 +13709,13709,"FirstName13709 MiddleName13709",LastName13709 +13710,13710,"FirstName13710 MiddleName13710",LastName13710 +13711,13711,"FirstName13711 MiddleName13711",LastName13711 +13712,13712,"FirstName13712 MiddleName13712",LastName13712 +13713,13713,"FirstName13713 MiddleName13713",LastName13713 +13714,13714,"FirstName13714 MiddleName13714",LastName13714 +13715,13715,"FirstName13715 MiddleName13715",LastName13715 +13716,13716,"FirstName13716 MiddleName13716",LastName13716 +13717,13717,"FirstName13717 MiddleName13717",LastName13717 +13718,13718,"FirstName13718 MiddleName13718",LastName13718 +13719,13719,"FirstName13719 MiddleName13719",LastName13719 +13720,13720,"FirstName13720 MiddleName13720",LastName13720 +13721,13721,"FirstName13721 MiddleName13721",LastName13721 +13722,13722,"FirstName13722 MiddleName13722",LastName13722 +13723,13723,"FirstName13723 MiddleName13723",LastName13723 +13724,13724,"FirstName13724 MiddleName13724",LastName13724 +13725,13725,"FirstName13725 MiddleName13725",LastName13725 +13726,13726,"FirstName13726 MiddleName13726",LastName13726 +13727,13727,"FirstName13727 MiddleName13727",LastName13727 +13728,13728,"FirstName13728 MiddleName13728",LastName13728 +13729,13729,"FirstName13729 MiddleName13729",LastName13729 
+13730,13730,"FirstName13730 MiddleName13730",LastName13730 +13731,13731,"FirstName13731 MiddleName13731",LastName13731 +13732,13732,"FirstName13732 MiddleName13732",LastName13732 +13733,13733,"FirstName13733 MiddleName13733",LastName13733 +13734,13734,"FirstName13734 MiddleName13734",LastName13734 +13735,13735,"FirstName13735 MiddleName13735",LastName13735 +13736,13736,"FirstName13736 MiddleName13736",LastName13736 +13737,13737,"FirstName13737 MiddleName13737",LastName13737 +13738,13738,"FirstName13738 MiddleName13738",LastName13738 +13739,13739,"FirstName13739 MiddleName13739",LastName13739 +13740,13740,"FirstName13740 MiddleName13740",LastName13740 +13741,13741,"FirstName13741 MiddleName13741",LastName13741 +13742,13742,"FirstName13742 MiddleName13742",LastName13742 +13743,13743,"FirstName13743 MiddleName13743",LastName13743 +13744,13744,"FirstName13744 MiddleName13744",LastName13744 +13745,13745,"FirstName13745 MiddleName13745",LastName13745 +13746,13746,"FirstName13746 MiddleName13746",LastName13746 +13747,13747,"FirstName13747 MiddleName13747",LastName13747 +13748,13748,"FirstName13748 MiddleName13748",LastName13748 +13749,13749,"FirstName13749 MiddleName13749",LastName13749 +13750,13750,"FirstName13750 MiddleName13750",LastName13750 +13751,13751,"FirstName13751 MiddleName13751",LastName13751 +13752,13752,"FirstName13752 MiddleName13752",LastName13752 +13753,13753,"FirstName13753 MiddleName13753",LastName13753 +13754,13754,"FirstName13754 MiddleName13754",LastName13754 +13755,13755,"FirstName13755 MiddleName13755",LastName13755 +13756,13756,"FirstName13756 MiddleName13756",LastName13756 +13757,13757,"FirstName13757 MiddleName13757",LastName13757 +13758,13758,"FirstName13758 MiddleName13758",LastName13758 +13759,13759,"FirstName13759 MiddleName13759",LastName13759 +13760,13760,"FirstName13760 MiddleName13760",LastName13760 +13761,13761,"FirstName13761 MiddleName13761",LastName13761 +13762,13762,"FirstName13762 MiddleName13762",LastName13762 
+13763,13763,"FirstName13763 MiddleName13763",LastName13763 +13764,13764,"FirstName13764 MiddleName13764",LastName13764 +13765,13765,"FirstName13765 MiddleName13765",LastName13765 +13766,13766,"FirstName13766 MiddleName13766",LastName13766 +13767,13767,"FirstName13767 MiddleName13767",LastName13767 +13768,13768,"FirstName13768 MiddleName13768",LastName13768 +13769,13769,"FirstName13769 MiddleName13769",LastName13769 +13770,13770,"FirstName13770 MiddleName13770",LastName13770 +13771,13771,"FirstName13771 MiddleName13771",LastName13771 +13772,13772,"FirstName13772 MiddleName13772",LastName13772 +13773,13773,"FirstName13773 MiddleName13773",LastName13773 +13774,13774,"FirstName13774 MiddleName13774",LastName13774 +13775,13775,"FirstName13775 MiddleName13775",LastName13775 +13776,13776,"FirstName13776 MiddleName13776",LastName13776 +13777,13777,"FirstName13777 MiddleName13777",LastName13777 +13778,13778,"FirstName13778 MiddleName13778",LastName13778 +13779,13779,"FirstName13779 MiddleName13779",LastName13779 +13780,13780,"FirstName13780 MiddleName13780",LastName13780 +13781,13781,"FirstName13781 MiddleName13781",LastName13781 +13782,13782,"FirstName13782 MiddleName13782",LastName13782 +13783,13783,"FirstName13783 MiddleName13783",LastName13783 +13784,13784,"FirstName13784 MiddleName13784",LastName13784 +13785,13785,"FirstName13785 MiddleName13785",LastName13785 +13786,13786,"FirstName13786 MiddleName13786",LastName13786 +13787,13787,"FirstName13787 MiddleName13787",LastName13787 +13788,13788,"FirstName13788 MiddleName13788",LastName13788 +13789,13789,"FirstName13789 MiddleName13789",LastName13789 +13790,13790,"FirstName13790 MiddleName13790",LastName13790 +13791,13791,"FirstName13791 MiddleName13791",LastName13791 +13792,13792,"FirstName13792 MiddleName13792",LastName13792 +13793,13793,"FirstName13793 MiddleName13793",LastName13793 +13794,13794,"FirstName13794 MiddleName13794",LastName13794 +13795,13795,"FirstName13795 MiddleName13795",LastName13795 
+13796,13796,"FirstName13796 MiddleName13796",LastName13796 +13797,13797,"FirstName13797 MiddleName13797",LastName13797 +13798,13798,"FirstName13798 MiddleName13798",LastName13798 +13799,13799,"FirstName13799 MiddleName13799",LastName13799 +13800,13800,"FirstName13800 MiddleName13800",LastName13800 +13801,13801,"FirstName13801 MiddleName13801",LastName13801 +13802,13802,"FirstName13802 MiddleName13802",LastName13802 +13803,13803,"FirstName13803 MiddleName13803",LastName13803 +13804,13804,"FirstName13804 MiddleName13804",LastName13804 +13805,13805,"FirstName13805 MiddleName13805",LastName13805 +13806,13806,"FirstName13806 MiddleName13806",LastName13806 +13807,13807,"FirstName13807 MiddleName13807",LastName13807 +13808,13808,"FirstName13808 MiddleName13808",LastName13808 +13809,13809,"FirstName13809 MiddleName13809",LastName13809 +13810,13810,"FirstName13810 MiddleName13810",LastName13810 +13811,13811,"FirstName13811 MiddleName13811",LastName13811 +13812,13812,"FirstName13812 MiddleName13812",LastName13812 +13813,13813,"FirstName13813 MiddleName13813",LastName13813 +13814,13814,"FirstName13814 MiddleName13814",LastName13814 +13815,13815,"FirstName13815 MiddleName13815",LastName13815 +13816,13816,"FirstName13816 MiddleName13816",LastName13816 +13817,13817,"FirstName13817 MiddleName13817",LastName13817 +13818,13818,"FirstName13818 MiddleName13818",LastName13818 +13819,13819,"FirstName13819 MiddleName13819",LastName13819 +13820,13820,"FirstName13820 MiddleName13820",LastName13820 +13821,13821,"FirstName13821 MiddleName13821",LastName13821 +13822,13822,"FirstName13822 MiddleName13822",LastName13822 +13823,13823,"FirstName13823 MiddleName13823",LastName13823 +13824,13824,"FirstName13824 MiddleName13824",LastName13824 +13825,13825,"FirstName13825 MiddleName13825",LastName13825 +13826,13826,"FirstName13826 MiddleName13826",LastName13826 +13827,13827,"FirstName13827 MiddleName13827",LastName13827 +13828,13828,"FirstName13828 MiddleName13828",LastName13828 
+13829,13829,"FirstName13829 MiddleName13829",LastName13829 +13830,13830,"FirstName13830 MiddleName13830",LastName13830 +13831,13831,"FirstName13831 MiddleName13831",LastName13831 +13832,13832,"FirstName13832 MiddleName13832",LastName13832 +13833,13833,"FirstName13833 MiddleName13833",LastName13833 +13834,13834,"FirstName13834 MiddleName13834",LastName13834 +13835,13835,"FirstName13835 MiddleName13835",LastName13835 +13836,13836,"FirstName13836 MiddleName13836",LastName13836 +13837,13837,"FirstName13837 MiddleName13837",LastName13837 +13838,13838,"FirstName13838 MiddleName13838",LastName13838 +13839,13839,"FirstName13839 MiddleName13839",LastName13839 +13840,13840,"FirstName13840 MiddleName13840",LastName13840 +13841,13841,"FirstName13841 MiddleName13841",LastName13841 +13842,13842,"FirstName13842 MiddleName13842",LastName13842 +13843,13843,"FirstName13843 MiddleName13843",LastName13843 +13844,13844,"FirstName13844 MiddleName13844",LastName13844 +13845,13845,"FirstName13845 MiddleName13845",LastName13845 +13846,13846,"FirstName13846 MiddleName13846",LastName13846 +13847,13847,"FirstName13847 MiddleName13847",LastName13847 +13848,13848,"FirstName13848 MiddleName13848",LastName13848 +13849,13849,"FirstName13849 MiddleName13849",LastName13849 +13850,13850,"FirstName13850 MiddleName13850",LastName13850 +13851,13851,"FirstName13851 MiddleName13851",LastName13851 +13852,13852,"FirstName13852 MiddleName13852",LastName13852 +13853,13853,"FirstName13853 MiddleName13853",LastName13853 +13854,13854,"FirstName13854 MiddleName13854",LastName13854 +13855,13855,"FirstName13855 MiddleName13855",LastName13855 +13856,13856,"FirstName13856 MiddleName13856",LastName13856 +13857,13857,"FirstName13857 MiddleName13857",LastName13857 +13858,13858,"FirstName13858 MiddleName13858",LastName13858 +13859,13859,"FirstName13859 MiddleName13859",LastName13859 +13860,13860,"FirstName13860 MiddleName13860",LastName13860 +13861,13861,"FirstName13861 MiddleName13861",LastName13861 
+13862,13862,"FirstName13862 MiddleName13862",LastName13862 +13863,13863,"FirstName13863 MiddleName13863",LastName13863 +13864,13864,"FirstName13864 MiddleName13864",LastName13864 +13865,13865,"FirstName13865 MiddleName13865",LastName13865 +13866,13866,"FirstName13866 MiddleName13866",LastName13866 +13867,13867,"FirstName13867 MiddleName13867",LastName13867 +13868,13868,"FirstName13868 MiddleName13868",LastName13868 +13869,13869,"FirstName13869 MiddleName13869",LastName13869 +13870,13870,"FirstName13870 MiddleName13870",LastName13870 +13871,13871,"FirstName13871 MiddleName13871",LastName13871 +13872,13872,"FirstName13872 MiddleName13872",LastName13872 +13873,13873,"FirstName13873 MiddleName13873",LastName13873 +13874,13874,"FirstName13874 MiddleName13874",LastName13874 +13875,13875,"FirstName13875 MiddleName13875",LastName13875 +13876,13876,"FirstName13876 MiddleName13876",LastName13876 +13877,13877,"FirstName13877 MiddleName13877",LastName13877 +13878,13878,"FirstName13878 MiddleName13878",LastName13878 +13879,13879,"FirstName13879 MiddleName13879",LastName13879 +13880,13880,"FirstName13880 MiddleName13880",LastName13880 +13881,13881,"FirstName13881 MiddleName13881",LastName13881 +13882,13882,"FirstName13882 MiddleName13882",LastName13882 +13883,13883,"FirstName13883 MiddleName13883",LastName13883 +13884,13884,"FirstName13884 MiddleName13884",LastName13884 +13885,13885,"FirstName13885 MiddleName13885",LastName13885 +13886,13886,"FirstName13886 MiddleName13886",LastName13886 +13887,13887,"FirstName13887 MiddleName13887",LastName13887 +13888,13888,"FirstName13888 MiddleName13888",LastName13888 +13889,13889,"FirstName13889 MiddleName13889",LastName13889 +13890,13890,"FirstName13890 MiddleName13890",LastName13890 +13891,13891,"FirstName13891 MiddleName13891",LastName13891 +13892,13892,"FirstName13892 MiddleName13892",LastName13892 +13893,13893,"FirstName13893 MiddleName13893",LastName13893 +13894,13894,"FirstName13894 MiddleName13894",LastName13894 
+13895,13895,"FirstName13895 MiddleName13895",LastName13895 +13896,13896,"FirstName13896 MiddleName13896",LastName13896 +13897,13897,"FirstName13897 MiddleName13897",LastName13897 +13898,13898,"FirstName13898 MiddleName13898",LastName13898 +13899,13899,"FirstName13899 MiddleName13899",LastName13899 +13900,13900,"FirstName13900 MiddleName13900",LastName13900 +13901,13901,"FirstName13901 MiddleName13901",LastName13901 +13902,13902,"FirstName13902 MiddleName13902",LastName13902 +13903,13903,"FirstName13903 MiddleName13903",LastName13903 +13904,13904,"FirstName13904 MiddleName13904",LastName13904 +13905,13905,"FirstName13905 MiddleName13905",LastName13905 +13906,13906,"FirstName13906 MiddleName13906",LastName13906 +13907,13907,"FirstName13907 MiddleName13907",LastName13907 +13908,13908,"FirstName13908 MiddleName13908",LastName13908 +13909,13909,"FirstName13909 MiddleName13909",LastName13909 +13910,13910,"FirstName13910 MiddleName13910",LastName13910 +13911,13911,"FirstName13911 MiddleName13911",LastName13911 +13912,13912,"FirstName13912 MiddleName13912",LastName13912 +13913,13913,"FirstName13913 MiddleName13913",LastName13913 +13914,13914,"FirstName13914 MiddleName13914",LastName13914 +13915,13915,"FirstName13915 MiddleName13915",LastName13915 +13916,13916,"FirstName13916 MiddleName13916",LastName13916 +13917,13917,"FirstName13917 MiddleName13917",LastName13917 +13918,13918,"FirstName13918 MiddleName13918",LastName13918 +13919,13919,"FirstName13919 MiddleName13919",LastName13919 +13920,13920,"FirstName13920 MiddleName13920",LastName13920 +13921,13921,"FirstName13921 MiddleName13921",LastName13921 +13922,13922,"FirstName13922 MiddleName13922",LastName13922 +13923,13923,"FirstName13923 MiddleName13923",LastName13923 +13924,13924,"FirstName13924 MiddleName13924",LastName13924 +13925,13925,"FirstName13925 MiddleName13925",LastName13925 +13926,13926,"FirstName13926 MiddleName13926",LastName13926 +13927,13927,"FirstName13927 MiddleName13927",LastName13927 
+13928,13928,"FirstName13928 MiddleName13928",LastName13928 +13929,13929,"FirstName13929 MiddleName13929",LastName13929 +13930,13930,"FirstName13930 MiddleName13930",LastName13930 +13931,13931,"FirstName13931 MiddleName13931",LastName13931 +13932,13932,"FirstName13932 MiddleName13932",LastName13932 +13933,13933,"FirstName13933 MiddleName13933",LastName13933 +13934,13934,"FirstName13934 MiddleName13934",LastName13934 +13935,13935,"FirstName13935 MiddleName13935",LastName13935 +13936,13936,"FirstName13936 MiddleName13936",LastName13936 +13937,13937,"FirstName13937 MiddleName13937",LastName13937 +13938,13938,"FirstName13938 MiddleName13938",LastName13938 +13939,13939,"FirstName13939 MiddleName13939",LastName13939 +13940,13940,"FirstName13940 MiddleName13940",LastName13940 +13941,13941,"FirstName13941 MiddleName13941",LastName13941 +13942,13942,"FirstName13942 MiddleName13942",LastName13942 +13943,13943,"FirstName13943 MiddleName13943",LastName13943 +13944,13944,"FirstName13944 MiddleName13944",LastName13944 +13945,13945,"FirstName13945 MiddleName13945",LastName13945 +13946,13946,"FirstName13946 MiddleName13946",LastName13946 +13947,13947,"FirstName13947 MiddleName13947",LastName13947 +13948,13948,"FirstName13948 MiddleName13948",LastName13948 +13949,13949,"FirstName13949 MiddleName13949",LastName13949 +13950,13950,"FirstName13950 MiddleName13950",LastName13950 +13951,13951,"FirstName13951 MiddleName13951",LastName13951 +13952,13952,"FirstName13952 MiddleName13952",LastName13952 +13953,13953,"FirstName13953 MiddleName13953",LastName13953 +13954,13954,"FirstName13954 MiddleName13954",LastName13954 +13955,13955,"FirstName13955 MiddleName13955",LastName13955 +13956,13956,"FirstName13956 MiddleName13956",LastName13956 +13957,13957,"FirstName13957 MiddleName13957",LastName13957 +13958,13958,"FirstName13958 MiddleName13958",LastName13958 +13959,13959,"FirstName13959 MiddleName13959",LastName13959 +13960,13960,"FirstName13960 MiddleName13960",LastName13960 
+13961,13961,"FirstName13961 MiddleName13961",LastName13961 +13962,13962,"FirstName13962 MiddleName13962",LastName13962 +13963,13963,"FirstName13963 MiddleName13963",LastName13963 +13964,13964,"FirstName13964 MiddleName13964",LastName13964 +13965,13965,"FirstName13965 MiddleName13965",LastName13965 +13966,13966,"FirstName13966 MiddleName13966",LastName13966 +13967,13967,"FirstName13967 MiddleName13967",LastName13967 +13968,13968,"FirstName13968 MiddleName13968",LastName13968 +13969,13969,"FirstName13969 MiddleName13969",LastName13969 +13970,13970,"FirstName13970 MiddleName13970",LastName13970 +13971,13971,"FirstName13971 MiddleName13971",LastName13971 +13972,13972,"FirstName13972 MiddleName13972",LastName13972 +13973,13973,"FirstName13973 MiddleName13973",LastName13973 +13974,13974,"FirstName13974 MiddleName13974",LastName13974 +13975,13975,"FirstName13975 MiddleName13975",LastName13975 +13976,13976,"FirstName13976 MiddleName13976",LastName13976 +13977,13977,"FirstName13977 MiddleName13977",LastName13977 +13978,13978,"FirstName13978 MiddleName13978",LastName13978 +13979,13979,"FirstName13979 MiddleName13979",LastName13979 +13980,13980,"FirstName13980 MiddleName13980",LastName13980 +13981,13981,"FirstName13981 MiddleName13981",LastName13981 +13982,13982,"FirstName13982 MiddleName13982",LastName13982 +13983,13983,"FirstName13983 MiddleName13983",LastName13983 +13984,13984,"FirstName13984 MiddleName13984",LastName13984 +13985,13985,"FirstName13985 MiddleName13985",LastName13985 +13986,13986,"FirstName13986 MiddleName13986",LastName13986 +13987,13987,"FirstName13987 MiddleName13987",LastName13987 +13988,13988,"FirstName13988 MiddleName13988",LastName13988 +13989,13989,"FirstName13989 MiddleName13989",LastName13989 +13990,13990,"FirstName13990 MiddleName13990",LastName13990 +13991,13991,"FirstName13991 MiddleName13991",LastName13991 +13992,13992,"FirstName13992 MiddleName13992",LastName13992 +13993,13993,"FirstName13993 MiddleName13993",LastName13993 
+13994,13994,"FirstName13994 MiddleName13994",LastName13994 +13995,13995,"FirstName13995 MiddleName13995",LastName13995 +13996,13996,"FirstName13996 MiddleName13996",LastName13996 +13997,13997,"FirstName13997 MiddleName13997",LastName13997 +13998,13998,"FirstName13998 MiddleName13998",LastName13998 +13999,13999,"FirstName13999 MiddleName13999",LastName13999 +14000,14000,"FirstName14000 MiddleName14000",LastName14000 +14001,14001,"FirstName14001 MiddleName14001",LastName14001 +14002,14002,"FirstName14002 MiddleName14002",LastName14002 +14003,14003,"FirstName14003 MiddleName14003",LastName14003 +14004,14004,"FirstName14004 MiddleName14004",LastName14004 +14005,14005,"FirstName14005 MiddleName14005",LastName14005 +14006,14006,"FirstName14006 MiddleName14006",LastName14006 +14007,14007,"FirstName14007 MiddleName14007",LastName14007 +14008,14008,"FirstName14008 MiddleName14008",LastName14008 +14009,14009,"FirstName14009 MiddleName14009",LastName14009 +14010,14010,"FirstName14010 MiddleName14010",LastName14010 +14011,14011,"FirstName14011 MiddleName14011",LastName14011 +14012,14012,"FirstName14012 MiddleName14012",LastName14012 +14013,14013,"FirstName14013 MiddleName14013",LastName14013 +14014,14014,"FirstName14014 MiddleName14014",LastName14014 +14015,14015,"FirstName14015 MiddleName14015",LastName14015 +14016,14016,"FirstName14016 MiddleName14016",LastName14016 +14017,14017,"FirstName14017 MiddleName14017",LastName14017 +14018,14018,"FirstName14018 MiddleName14018",LastName14018 +14019,14019,"FirstName14019 MiddleName14019",LastName14019 +14020,14020,"FirstName14020 MiddleName14020",LastName14020 +14021,14021,"FirstName14021 MiddleName14021",LastName14021 +14022,14022,"FirstName14022 MiddleName14022",LastName14022 +14023,14023,"FirstName14023 MiddleName14023",LastName14023 +14024,14024,"FirstName14024 MiddleName14024",LastName14024 +14025,14025,"FirstName14025 MiddleName14025",LastName14025 +14026,14026,"FirstName14026 MiddleName14026",LastName14026 
+14027,14027,"FirstName14027 MiddleName14027",LastName14027 +14028,14028,"FirstName14028 MiddleName14028",LastName14028 +14029,14029,"FirstName14029 MiddleName14029",LastName14029 +14030,14030,"FirstName14030 MiddleName14030",LastName14030 +14031,14031,"FirstName14031 MiddleName14031",LastName14031 +14032,14032,"FirstName14032 MiddleName14032",LastName14032 +14033,14033,"FirstName14033 MiddleName14033",LastName14033 +14034,14034,"FirstName14034 MiddleName14034",LastName14034 +14035,14035,"FirstName14035 MiddleName14035",LastName14035 +14036,14036,"FirstName14036 MiddleName14036",LastName14036 +14037,14037,"FirstName14037 MiddleName14037",LastName14037 +14038,14038,"FirstName14038 MiddleName14038",LastName14038 +14039,14039,"FirstName14039 MiddleName14039",LastName14039 +14040,14040,"FirstName14040 MiddleName14040",LastName14040 +14041,14041,"FirstName14041 MiddleName14041",LastName14041 +14042,14042,"FirstName14042 MiddleName14042",LastName14042 +14043,14043,"FirstName14043 MiddleName14043",LastName14043 +14044,14044,"FirstName14044 MiddleName14044",LastName14044 +14045,14045,"FirstName14045 MiddleName14045",LastName14045 +14046,14046,"FirstName14046 MiddleName14046",LastName14046 +14047,14047,"FirstName14047 MiddleName14047",LastName14047 +14048,14048,"FirstName14048 MiddleName14048",LastName14048 +14049,14049,"FirstName14049 MiddleName14049",LastName14049 +14050,14050,"FirstName14050 MiddleName14050",LastName14050 +14051,14051,"FirstName14051 MiddleName14051",LastName14051 +14052,14052,"FirstName14052 MiddleName14052",LastName14052 +14053,14053,"FirstName14053 MiddleName14053",LastName14053 +14054,14054,"FirstName14054 MiddleName14054",LastName14054 +14055,14055,"FirstName14055 MiddleName14055",LastName14055 +14056,14056,"FirstName14056 MiddleName14056",LastName14056 +14057,14057,"FirstName14057 MiddleName14057",LastName14057 +14058,14058,"FirstName14058 MiddleName14058",LastName14058 +14059,14059,"FirstName14059 MiddleName14059",LastName14059 
+14060,14060,"FirstName14060 MiddleName14060",LastName14060 +14061,14061,"FirstName14061 MiddleName14061",LastName14061 +14062,14062,"FirstName14062 MiddleName14062",LastName14062 +14063,14063,"FirstName14063 MiddleName14063",LastName14063 +14064,14064,"FirstName14064 MiddleName14064",LastName14064 +14065,14065,"FirstName14065 MiddleName14065",LastName14065 +14066,14066,"FirstName14066 MiddleName14066",LastName14066 +14067,14067,"FirstName14067 MiddleName14067",LastName14067 +14068,14068,"FirstName14068 MiddleName14068",LastName14068 +14069,14069,"FirstName14069 MiddleName14069",LastName14069 +14070,14070,"FirstName14070 MiddleName14070",LastName14070 +14071,14071,"FirstName14071 MiddleName14071",LastName14071 +14072,14072,"FirstName14072 MiddleName14072",LastName14072 +14073,14073,"FirstName14073 MiddleName14073",LastName14073 +14074,14074,"FirstName14074 MiddleName14074",LastName14074 +14075,14075,"FirstName14075 MiddleName14075",LastName14075 +14076,14076,"FirstName14076 MiddleName14076",LastName14076 +14077,14077,"FirstName14077 MiddleName14077",LastName14077 +14078,14078,"FirstName14078 MiddleName14078",LastName14078 +14079,14079,"FirstName14079 MiddleName14079",LastName14079 +14080,14080,"FirstName14080 MiddleName14080",LastName14080 +14081,14081,"FirstName14081 MiddleName14081",LastName14081 +14082,14082,"FirstName14082 MiddleName14082",LastName14082 +14083,14083,"FirstName14083 MiddleName14083",LastName14083 +14084,14084,"FirstName14084 MiddleName14084",LastName14084 +14085,14085,"FirstName14085 MiddleName14085",LastName14085 +14086,14086,"FirstName14086 MiddleName14086",LastName14086 +14087,14087,"FirstName14087 MiddleName14087",LastName14087 +14088,14088,"FirstName14088 MiddleName14088",LastName14088 +14089,14089,"FirstName14089 MiddleName14089",LastName14089 +14090,14090,"FirstName14090 MiddleName14090",LastName14090 +14091,14091,"FirstName14091 MiddleName14091",LastName14091 +14092,14092,"FirstName14092 MiddleName14092",LastName14092 
+14093,14093,"FirstName14093 MiddleName14093",LastName14093 +14094,14094,"FirstName14094 MiddleName14094",LastName14094 +14095,14095,"FirstName14095 MiddleName14095",LastName14095 +14096,14096,"FirstName14096 MiddleName14096",LastName14096 +14097,14097,"FirstName14097 MiddleName14097",LastName14097 +14098,14098,"FirstName14098 MiddleName14098",LastName14098 +14099,14099,"FirstName14099 MiddleName14099",LastName14099 +14100,14100,"FirstName14100 MiddleName14100",LastName14100 +14101,14101,"FirstName14101 MiddleName14101",LastName14101 +14102,14102,"FirstName14102 MiddleName14102",LastName14102 +14103,14103,"FirstName14103 MiddleName14103",LastName14103 +14104,14104,"FirstName14104 MiddleName14104",LastName14104 +14105,14105,"FirstName14105 MiddleName14105",LastName14105 +14106,14106,"FirstName14106 MiddleName14106",LastName14106 +14107,14107,"FirstName14107 MiddleName14107",LastName14107 +14108,14108,"FirstName14108 MiddleName14108",LastName14108 +14109,14109,"FirstName14109 MiddleName14109",LastName14109 +14110,14110,"FirstName14110 MiddleName14110",LastName14110 +14111,14111,"FirstName14111 MiddleName14111",LastName14111 +14112,14112,"FirstName14112 MiddleName14112",LastName14112 +14113,14113,"FirstName14113 MiddleName14113",LastName14113 +14114,14114,"FirstName14114 MiddleName14114",LastName14114 +14115,14115,"FirstName14115 MiddleName14115",LastName14115 +14116,14116,"FirstName14116 MiddleName14116",LastName14116 +14117,14117,"FirstName14117 MiddleName14117",LastName14117 +14118,14118,"FirstName14118 MiddleName14118",LastName14118 +14119,14119,"FirstName14119 MiddleName14119",LastName14119 +14120,14120,"FirstName14120 MiddleName14120",LastName14120 +14121,14121,"FirstName14121 MiddleName14121",LastName14121 +14122,14122,"FirstName14122 MiddleName14122",LastName14122 +14123,14123,"FirstName14123 MiddleName14123",LastName14123 +14124,14124,"FirstName14124 MiddleName14124",LastName14124 +14125,14125,"FirstName14125 MiddleName14125",LastName14125 
+14126,14126,"FirstName14126 MiddleName14126",LastName14126 +14127,14127,"FirstName14127 MiddleName14127",LastName14127 +14128,14128,"FirstName14128 MiddleName14128",LastName14128 +14129,14129,"FirstName14129 MiddleName14129",LastName14129 +14130,14130,"FirstName14130 MiddleName14130",LastName14130 +14131,14131,"FirstName14131 MiddleName14131",LastName14131 +14132,14132,"FirstName14132 MiddleName14132",LastName14132 +14133,14133,"FirstName14133 MiddleName14133",LastName14133 +14134,14134,"FirstName14134 MiddleName14134",LastName14134 +14135,14135,"FirstName14135 MiddleName14135",LastName14135 +14136,14136,"FirstName14136 MiddleName14136",LastName14136 +14137,14137,"FirstName14137 MiddleName14137",LastName14137 +14138,14138,"FirstName14138 MiddleName14138",LastName14138 +14139,14139,"FirstName14139 MiddleName14139",LastName14139 +14140,14140,"FirstName14140 MiddleName14140",LastName14140 +14141,14141,"FirstName14141 MiddleName14141",LastName14141 +14142,14142,"FirstName14142 MiddleName14142",LastName14142 +14143,14143,"FirstName14143 MiddleName14143",LastName14143 +14144,14144,"FirstName14144 MiddleName14144",LastName14144 +14145,14145,"FirstName14145 MiddleName14145",LastName14145 +14146,14146,"FirstName14146 MiddleName14146",LastName14146 +14147,14147,"FirstName14147 MiddleName14147",LastName14147 +14148,14148,"FirstName14148 MiddleName14148",LastName14148 +14149,14149,"FirstName14149 MiddleName14149",LastName14149 +14150,14150,"FirstName14150 MiddleName14150",LastName14150 +14151,14151,"FirstName14151 MiddleName14151",LastName14151 +14152,14152,"FirstName14152 MiddleName14152",LastName14152 +14153,14153,"FirstName14153 MiddleName14153",LastName14153 +14154,14154,"FirstName14154 MiddleName14154",LastName14154 +14155,14155,"FirstName14155 MiddleName14155",LastName14155 +14156,14156,"FirstName14156 MiddleName14156",LastName14156 +14157,14157,"FirstName14157 MiddleName14157",LastName14157 +14158,14158,"FirstName14158 MiddleName14158",LastName14158 
+14159,14159,"FirstName14159 MiddleName14159",LastName14159 +14160,14160,"FirstName14160 MiddleName14160",LastName14160 +14161,14161,"FirstName14161 MiddleName14161",LastName14161 +14162,14162,"FirstName14162 MiddleName14162",LastName14162 +14163,14163,"FirstName14163 MiddleName14163",LastName14163 +14164,14164,"FirstName14164 MiddleName14164",LastName14164 +14165,14165,"FirstName14165 MiddleName14165",LastName14165 +14166,14166,"FirstName14166 MiddleName14166",LastName14166 +14167,14167,"FirstName14167 MiddleName14167",LastName14167 +14168,14168,"FirstName14168 MiddleName14168",LastName14168 +14169,14169,"FirstName14169 MiddleName14169",LastName14169 +14170,14170,"FirstName14170 MiddleName14170",LastName14170 +14171,14171,"FirstName14171 MiddleName14171",LastName14171 +14172,14172,"FirstName14172 MiddleName14172",LastName14172 +14173,14173,"FirstName14173 MiddleName14173",LastName14173 +14174,14174,"FirstName14174 MiddleName14174",LastName14174 +14175,14175,"FirstName14175 MiddleName14175",LastName14175 +14176,14176,"FirstName14176 MiddleName14176",LastName14176 +14177,14177,"FirstName14177 MiddleName14177",LastName14177 +14178,14178,"FirstName14178 MiddleName14178",LastName14178 +14179,14179,"FirstName14179 MiddleName14179",LastName14179 +14180,14180,"FirstName14180 MiddleName14180",LastName14180 +14181,14181,"FirstName14181 MiddleName14181",LastName14181 +14182,14182,"FirstName14182 MiddleName14182",LastName14182 +14183,14183,"FirstName14183 MiddleName14183",LastName14183 +14184,14184,"FirstName14184 MiddleName14184",LastName14184 +14185,14185,"FirstName14185 MiddleName14185",LastName14185 +14186,14186,"FirstName14186 MiddleName14186",LastName14186 +14187,14187,"FirstName14187 MiddleName14187",LastName14187 +14188,14188,"FirstName14188 MiddleName14188",LastName14188 +14189,14189,"FirstName14189 MiddleName14189",LastName14189 +14190,14190,"FirstName14190 MiddleName14190",LastName14190 +14191,14191,"FirstName14191 MiddleName14191",LastName14191 
+14192,14192,"FirstName14192 MiddleName14192",LastName14192 +14193,14193,"FirstName14193 MiddleName14193",LastName14193 +14194,14194,"FirstName14194 MiddleName14194",LastName14194 +14195,14195,"FirstName14195 MiddleName14195",LastName14195 +14196,14196,"FirstName14196 MiddleName14196",LastName14196 +14197,14197,"FirstName14197 MiddleName14197",LastName14197 +14198,14198,"FirstName14198 MiddleName14198",LastName14198 +14199,14199,"FirstName14199 MiddleName14199",LastName14199 +14200,14200,"FirstName14200 MiddleName14200",LastName14200 +14201,14201,"FirstName14201 MiddleName14201",LastName14201 +14202,14202,"FirstName14202 MiddleName14202",LastName14202 +14203,14203,"FirstName14203 MiddleName14203",LastName14203 +14204,14204,"FirstName14204 MiddleName14204",LastName14204 +14205,14205,"FirstName14205 MiddleName14205",LastName14205 +14206,14206,"FirstName14206 MiddleName14206",LastName14206 +14207,14207,"FirstName14207 MiddleName14207",LastName14207 +14208,14208,"FirstName14208 MiddleName14208",LastName14208 +14209,14209,"FirstName14209 MiddleName14209",LastName14209 +14210,14210,"FirstName14210 MiddleName14210",LastName14210 +14211,14211,"FirstName14211 MiddleName14211",LastName14211 +14212,14212,"FirstName14212 MiddleName14212",LastName14212 +14213,14213,"FirstName14213 MiddleName14213",LastName14213 +14214,14214,"FirstName14214 MiddleName14214",LastName14214 +14215,14215,"FirstName14215 MiddleName14215",LastName14215 +14216,14216,"FirstName14216 MiddleName14216",LastName14216 +14217,14217,"FirstName14217 MiddleName14217",LastName14217 +14218,14218,"FirstName14218 MiddleName14218",LastName14218 +14219,14219,"FirstName14219 MiddleName14219",LastName14219 +14220,14220,"FirstName14220 MiddleName14220",LastName14220 +14221,14221,"FirstName14221 MiddleName14221",LastName14221 +14222,14222,"FirstName14222 MiddleName14222",LastName14222 +14223,14223,"FirstName14223 MiddleName14223",LastName14223 +14224,14224,"FirstName14224 MiddleName14224",LastName14224 
+14225,14225,"FirstName14225 MiddleName14225",LastName14225 +14226,14226,"FirstName14226 MiddleName14226",LastName14226 +14227,14227,"FirstName14227 MiddleName14227",LastName14227 +14228,14228,"FirstName14228 MiddleName14228",LastName14228 +14229,14229,"FirstName14229 MiddleName14229",LastName14229 +14230,14230,"FirstName14230 MiddleName14230",LastName14230 +14231,14231,"FirstName14231 MiddleName14231",LastName14231 +14232,14232,"FirstName14232 MiddleName14232",LastName14232 +14233,14233,"FirstName14233 MiddleName14233",LastName14233 +14234,14234,"FirstName14234 MiddleName14234",LastName14234 +14235,14235,"FirstName14235 MiddleName14235",LastName14235 +14236,14236,"FirstName14236 MiddleName14236",LastName14236 +14237,14237,"FirstName14237 MiddleName14237",LastName14237 +14238,14238,"FirstName14238 MiddleName14238",LastName14238 +14239,14239,"FirstName14239 MiddleName14239",LastName14239 +14240,14240,"FirstName14240 MiddleName14240",LastName14240 +14241,14241,"FirstName14241 MiddleName14241",LastName14241 +14242,14242,"FirstName14242 MiddleName14242",LastName14242 +14243,14243,"FirstName14243 MiddleName14243",LastName14243 +14244,14244,"FirstName14244 MiddleName14244",LastName14244 +14245,14245,"FirstName14245 MiddleName14245",LastName14245 +14246,14246,"FirstName14246 MiddleName14246",LastName14246 +14247,14247,"FirstName14247 MiddleName14247",LastName14247 +14248,14248,"FirstName14248 MiddleName14248",LastName14248 +14249,14249,"FirstName14249 MiddleName14249",LastName14249 +14250,14250,"FirstName14250 MiddleName14250",LastName14250 +14251,14251,"FirstName14251 MiddleName14251",LastName14251 +14252,14252,"FirstName14252 MiddleName14252",LastName14252 +14253,14253,"FirstName14253 MiddleName14253",LastName14253 +14254,14254,"FirstName14254 MiddleName14254",LastName14254 +14255,14255,"FirstName14255 MiddleName14255",LastName14255 +14256,14256,"FirstName14256 MiddleName14256",LastName14256 +14257,14257,"FirstName14257 MiddleName14257",LastName14257 
+14258,14258,"FirstName14258 MiddleName14258",LastName14258 +14259,14259,"FirstName14259 MiddleName14259",LastName14259 +14260,14260,"FirstName14260 MiddleName14260",LastName14260 +14261,14261,"FirstName14261 MiddleName14261",LastName14261 +14262,14262,"FirstName14262 MiddleName14262",LastName14262 +14263,14263,"FirstName14263 MiddleName14263",LastName14263 +14264,14264,"FirstName14264 MiddleName14264",LastName14264 +14265,14265,"FirstName14265 MiddleName14265",LastName14265 +14266,14266,"FirstName14266 MiddleName14266",LastName14266 +14267,14267,"FirstName14267 MiddleName14267",LastName14267 +14268,14268,"FirstName14268 MiddleName14268",LastName14268 +14269,14269,"FirstName14269 MiddleName14269",LastName14269 +14270,14270,"FirstName14270 MiddleName14270",LastName14270 +14271,14271,"FirstName14271 MiddleName14271",LastName14271 +14272,14272,"FirstName14272 MiddleName14272",LastName14272 +14273,14273,"FirstName14273 MiddleName14273",LastName14273 +14274,14274,"FirstName14274 MiddleName14274",LastName14274 +14275,14275,"FirstName14275 MiddleName14275",LastName14275 +14276,14276,"FirstName14276 MiddleName14276",LastName14276 +14277,14277,"FirstName14277 MiddleName14277",LastName14277 +14278,14278,"FirstName14278 MiddleName14278",LastName14278 +14279,14279,"FirstName14279 MiddleName14279",LastName14279 +14280,14280,"FirstName14280 MiddleName14280",LastName14280 +14281,14281,"FirstName14281 MiddleName14281",LastName14281 +14282,14282,"FirstName14282 MiddleName14282",LastName14282 +14283,14283,"FirstName14283 MiddleName14283",LastName14283 +14284,14284,"FirstName14284 MiddleName14284",LastName14284 +14285,14285,"FirstName14285 MiddleName14285",LastName14285 +14286,14286,"FirstName14286 MiddleName14286",LastName14286 +14287,14287,"FirstName14287 MiddleName14287",LastName14287 +14288,14288,"FirstName14288 MiddleName14288",LastName14288 +14289,14289,"FirstName14289 MiddleName14289",LastName14289 +14290,14290,"FirstName14290 MiddleName14290",LastName14290 
+14291,14291,"FirstName14291 MiddleName14291",LastName14291 +14292,14292,"FirstName14292 MiddleName14292",LastName14292 +14293,14293,"FirstName14293 MiddleName14293",LastName14293 +14294,14294,"FirstName14294 MiddleName14294",LastName14294 +14295,14295,"FirstName14295 MiddleName14295",LastName14295 +14296,14296,"FirstName14296 MiddleName14296",LastName14296 +14297,14297,"FirstName14297 MiddleName14297",LastName14297 +14298,14298,"FirstName14298 MiddleName14298",LastName14298 +14299,14299,"FirstName14299 MiddleName14299",LastName14299 +14300,14300,"FirstName14300 MiddleName14300",LastName14300 +14301,14301,"FirstName14301 MiddleName14301",LastName14301 +14302,14302,"FirstName14302 MiddleName14302",LastName14302 +14303,14303,"FirstName14303 MiddleName14303",LastName14303 +14304,14304,"FirstName14304 MiddleName14304",LastName14304 +14305,14305,"FirstName14305 MiddleName14305",LastName14305 +14306,14306,"FirstName14306 MiddleName14306",LastName14306 +14307,14307,"FirstName14307 MiddleName14307",LastName14307 +14308,14308,"FirstName14308 MiddleName14308",LastName14308 +14309,14309,"FirstName14309 MiddleName14309",LastName14309 +14310,14310,"FirstName14310 MiddleName14310",LastName14310 +14311,14311,"FirstName14311 MiddleName14311",LastName14311 +14312,14312,"FirstName14312 MiddleName14312",LastName14312 +14313,14313,"FirstName14313 MiddleName14313",LastName14313 +14314,14314,"FirstName14314 MiddleName14314",LastName14314 +14315,14315,"FirstName14315 MiddleName14315",LastName14315 +14316,14316,"FirstName14316 MiddleName14316",LastName14316 +14317,14317,"FirstName14317 MiddleName14317",LastName14317 +14318,14318,"FirstName14318 MiddleName14318",LastName14318 +14319,14319,"FirstName14319 MiddleName14319",LastName14319 +14320,14320,"FirstName14320 MiddleName14320",LastName14320 +14321,14321,"FirstName14321 MiddleName14321",LastName14321 +14322,14322,"FirstName14322 MiddleName14322",LastName14322 +14323,14323,"FirstName14323 MiddleName14323",LastName14323 
+14324,14324,"FirstName14324 MiddleName14324",LastName14324 +14325,14325,"FirstName14325 MiddleName14325",LastName14325 +14326,14326,"FirstName14326 MiddleName14326",LastName14326 +14327,14327,"FirstName14327 MiddleName14327",LastName14327 +14328,14328,"FirstName14328 MiddleName14328",LastName14328 +14329,14329,"FirstName14329 MiddleName14329",LastName14329 +14330,14330,"FirstName14330 MiddleName14330",LastName14330 +14331,14331,"FirstName14331 MiddleName14331",LastName14331 +14332,14332,"FirstName14332 MiddleName14332",LastName14332 +14333,14333,"FirstName14333 MiddleName14333",LastName14333 +14334,14334,"FirstName14334 MiddleName14334",LastName14334 +14335,14335,"FirstName14335 MiddleName14335",LastName14335 +14336,14336,"FirstName14336 MiddleName14336",LastName14336 +14337,14337,"FirstName14337 MiddleName14337",LastName14337 +14338,14338,"FirstName14338 MiddleName14338",LastName14338 +14339,14339,"FirstName14339 MiddleName14339",LastName14339 +14340,14340,"FirstName14340 MiddleName14340",LastName14340 +14341,14341,"FirstName14341 MiddleName14341",LastName14341 +14342,14342,"FirstName14342 MiddleName14342",LastName14342 +14343,14343,"FirstName14343 MiddleName14343",LastName14343 +14344,14344,"FirstName14344 MiddleName14344",LastName14344 +14345,14345,"FirstName14345 MiddleName14345",LastName14345 +14346,14346,"FirstName14346 MiddleName14346",LastName14346 +14347,14347,"FirstName14347 MiddleName14347",LastName14347 +14348,14348,"FirstName14348 MiddleName14348",LastName14348 +14349,14349,"FirstName14349 MiddleName14349",LastName14349 +14350,14350,"FirstName14350 MiddleName14350",LastName14350 +14351,14351,"FirstName14351 MiddleName14351",LastName14351 +14352,14352,"FirstName14352 MiddleName14352",LastName14352 +14353,14353,"FirstName14353 MiddleName14353",LastName14353 +14354,14354,"FirstName14354 MiddleName14354",LastName14354 +14355,14355,"FirstName14355 MiddleName14355",LastName14355 +14356,14356,"FirstName14356 MiddleName14356",LastName14356 
+14357,14357,"FirstName14357 MiddleName14357",LastName14357 +14358,14358,"FirstName14358 MiddleName14358",LastName14358 +14359,14359,"FirstName14359 MiddleName14359",LastName14359 +14360,14360,"FirstName14360 MiddleName14360",LastName14360 +14361,14361,"FirstName14361 MiddleName14361",LastName14361 +14362,14362,"FirstName14362 MiddleName14362",LastName14362 +14363,14363,"FirstName14363 MiddleName14363",LastName14363 +14364,14364,"FirstName14364 MiddleName14364",LastName14364 +14365,14365,"FirstName14365 MiddleName14365",LastName14365 +14366,14366,"FirstName14366 MiddleName14366",LastName14366 +14367,14367,"FirstName14367 MiddleName14367",LastName14367 +14368,14368,"FirstName14368 MiddleName14368",LastName14368 +14369,14369,"FirstName14369 MiddleName14369",LastName14369 +14370,14370,"FirstName14370 MiddleName14370",LastName14370 +14371,14371,"FirstName14371 MiddleName14371",LastName14371 +14372,14372,"FirstName14372 MiddleName14372",LastName14372 +14373,14373,"FirstName14373 MiddleName14373",LastName14373 +14374,14374,"FirstName14374 MiddleName14374",LastName14374 +14375,14375,"FirstName14375 MiddleName14375",LastName14375 +14376,14376,"FirstName14376 MiddleName14376",LastName14376 +14377,14377,"FirstName14377 MiddleName14377",LastName14377 +14378,14378,"FirstName14378 MiddleName14378",LastName14378 +14379,14379,"FirstName14379 MiddleName14379",LastName14379 +14380,14380,"FirstName14380 MiddleName14380",LastName14380 +14381,14381,"FirstName14381 MiddleName14381",LastName14381 +14382,14382,"FirstName14382 MiddleName14382",LastName14382 +14383,14383,"FirstName14383 MiddleName14383",LastName14383 +14384,14384,"FirstName14384 MiddleName14384",LastName14384 +14385,14385,"FirstName14385 MiddleName14385",LastName14385 +14386,14386,"FirstName14386 MiddleName14386",LastName14386 +14387,14387,"FirstName14387 MiddleName14387",LastName14387 +14388,14388,"FirstName14388 MiddleName14388",LastName14388 +14389,14389,"FirstName14389 MiddleName14389",LastName14389 
+14390,14390,"FirstName14390 MiddleName14390",LastName14390 +14391,14391,"FirstName14391 MiddleName14391",LastName14391 +14392,14392,"FirstName14392 MiddleName14392",LastName14392 +14393,14393,"FirstName14393 MiddleName14393",LastName14393 +14394,14394,"FirstName14394 MiddleName14394",LastName14394 +14395,14395,"FirstName14395 MiddleName14395",LastName14395 +14396,14396,"FirstName14396 MiddleName14396",LastName14396 +14397,14397,"FirstName14397 MiddleName14397",LastName14397 +14398,14398,"FirstName14398 MiddleName14398",LastName14398 +14399,14399,"FirstName14399 MiddleName14399",LastName14399 +14400,14400,"FirstName14400 MiddleName14400",LastName14400 +14401,14401,"FirstName14401 MiddleName14401",LastName14401 +14402,14402,"FirstName14402 MiddleName14402",LastName14402 +14403,14403,"FirstName14403 MiddleName14403",LastName14403 +14404,14404,"FirstName14404 MiddleName14404",LastName14404 +14405,14405,"FirstName14405 MiddleName14405",LastName14405 +14406,14406,"FirstName14406 MiddleName14406",LastName14406 +14407,14407,"FirstName14407 MiddleName14407",LastName14407 +14408,14408,"FirstName14408 MiddleName14408",LastName14408 +14409,14409,"FirstName14409 MiddleName14409",LastName14409 +14410,14410,"FirstName14410 MiddleName14410",LastName14410 +14411,14411,"FirstName14411 MiddleName14411",LastName14411 +14412,14412,"FirstName14412 MiddleName14412",LastName14412 +14413,14413,"FirstName14413 MiddleName14413",LastName14413 +14414,14414,"FirstName14414 MiddleName14414",LastName14414 +14415,14415,"FirstName14415 MiddleName14415",LastName14415 +14416,14416,"FirstName14416 MiddleName14416",LastName14416 +14417,14417,"FirstName14417 MiddleName14417",LastName14417 +14418,14418,"FirstName14418 MiddleName14418",LastName14418 +14419,14419,"FirstName14419 MiddleName14419",LastName14419 +14420,14420,"FirstName14420 MiddleName14420",LastName14420 +14421,14421,"FirstName14421 MiddleName14421",LastName14421 +14422,14422,"FirstName14422 MiddleName14422",LastName14422 
+14423,14423,"FirstName14423 MiddleName14423",LastName14423 +14424,14424,"FirstName14424 MiddleName14424",LastName14424 +14425,14425,"FirstName14425 MiddleName14425",LastName14425 +14426,14426,"FirstName14426 MiddleName14426",LastName14426 +14427,14427,"FirstName14427 MiddleName14427",LastName14427 +14428,14428,"FirstName14428 MiddleName14428",LastName14428 +14429,14429,"FirstName14429 MiddleName14429",LastName14429 +14430,14430,"FirstName14430 MiddleName14430",LastName14430 +14431,14431,"FirstName14431 MiddleName14431",LastName14431 +14432,14432,"FirstName14432 MiddleName14432",LastName14432 +14433,14433,"FirstName14433 MiddleName14433",LastName14433 +14434,14434,"FirstName14434 MiddleName14434",LastName14434 +14435,14435,"FirstName14435 MiddleName14435",LastName14435 +14436,14436,"FirstName14436 MiddleName14436",LastName14436 +14437,14437,"FirstName14437 MiddleName14437",LastName14437 +14438,14438,"FirstName14438 MiddleName14438",LastName14438 +14439,14439,"FirstName14439 MiddleName14439",LastName14439 +14440,14440,"FirstName14440 MiddleName14440",LastName14440 +14441,14441,"FirstName14441 MiddleName14441",LastName14441 +14442,14442,"FirstName14442 MiddleName14442",LastName14442 +14443,14443,"FirstName14443 MiddleName14443",LastName14443 +14444,14444,"FirstName14444 MiddleName14444",LastName14444 +14445,14445,"FirstName14445 MiddleName14445",LastName14445 +14446,14446,"FirstName14446 MiddleName14446",LastName14446 +14447,14447,"FirstName14447 MiddleName14447",LastName14447 +14448,14448,"FirstName14448 MiddleName14448",LastName14448 +14449,14449,"FirstName14449 MiddleName14449",LastName14449 +14450,14450,"FirstName14450 MiddleName14450",LastName14450 +14451,14451,"FirstName14451 MiddleName14451",LastName14451 +14452,14452,"FirstName14452 MiddleName14452",LastName14452 +14453,14453,"FirstName14453 MiddleName14453",LastName14453 +14454,14454,"FirstName14454 MiddleName14454",LastName14454 +14455,14455,"FirstName14455 MiddleName14455",LastName14455 
+14456,14456,"FirstName14456 MiddleName14456",LastName14456 +14457,14457,"FirstName14457 MiddleName14457",LastName14457 +14458,14458,"FirstName14458 MiddleName14458",LastName14458 +14459,14459,"FirstName14459 MiddleName14459",LastName14459 +14460,14460,"FirstName14460 MiddleName14460",LastName14460 +14461,14461,"FirstName14461 MiddleName14461",LastName14461 +14462,14462,"FirstName14462 MiddleName14462",LastName14462 +14463,14463,"FirstName14463 MiddleName14463",LastName14463 +14464,14464,"FirstName14464 MiddleName14464",LastName14464 +14465,14465,"FirstName14465 MiddleName14465",LastName14465 +14466,14466,"FirstName14466 MiddleName14466",LastName14466 +14467,14467,"FirstName14467 MiddleName14467",LastName14467 +14468,14468,"FirstName14468 MiddleName14468",LastName14468 +14469,14469,"FirstName14469 MiddleName14469",LastName14469 +14470,14470,"FirstName14470 MiddleName14470",LastName14470 +14471,14471,"FirstName14471 MiddleName14471",LastName14471 +14472,14472,"FirstName14472 MiddleName14472",LastName14472 +14473,14473,"FirstName14473 MiddleName14473",LastName14473 +14474,14474,"FirstName14474 MiddleName14474",LastName14474 +14475,14475,"FirstName14475 MiddleName14475",LastName14475 +14476,14476,"FirstName14476 MiddleName14476",LastName14476 +14477,14477,"FirstName14477 MiddleName14477",LastName14477 +14478,14478,"FirstName14478 MiddleName14478",LastName14478 +14479,14479,"FirstName14479 MiddleName14479",LastName14479 +14480,14480,"FirstName14480 MiddleName14480",LastName14480 +14481,14481,"FirstName14481 MiddleName14481",LastName14481 +14482,14482,"FirstName14482 MiddleName14482",LastName14482 +14483,14483,"FirstName14483 MiddleName14483",LastName14483 +14484,14484,"FirstName14484 MiddleName14484",LastName14484 +14485,14485,"FirstName14485 MiddleName14485",LastName14485 +14486,14486,"FirstName14486 MiddleName14486",LastName14486 +14487,14487,"FirstName14487 MiddleName14487",LastName14487 +14488,14488,"FirstName14488 MiddleName14488",LastName14488 
+14489,14489,"FirstName14489 MiddleName14489",LastName14489 +14490,14490,"FirstName14490 MiddleName14490",LastName14490 +14491,14491,"FirstName14491 MiddleName14491",LastName14491 +14492,14492,"FirstName14492 MiddleName14492",LastName14492 +14493,14493,"FirstName14493 MiddleName14493",LastName14493 +14494,14494,"FirstName14494 MiddleName14494",LastName14494 +14495,14495,"FirstName14495 MiddleName14495",LastName14495 +14496,14496,"FirstName14496 MiddleName14496",LastName14496 +14497,14497,"FirstName14497 MiddleName14497",LastName14497 +14498,14498,"FirstName14498 MiddleName14498",LastName14498 +14499,14499,"FirstName14499 MiddleName14499",LastName14499 +14500,14500,"FirstName14500 MiddleName14500",LastName14500 +14501,14501,"FirstName14501 MiddleName14501",LastName14501 +14502,14502,"FirstName14502 MiddleName14502",LastName14502 +14503,14503,"FirstName14503 MiddleName14503",LastName14503 +14504,14504,"FirstName14504 MiddleName14504",LastName14504 +14505,14505,"FirstName14505 MiddleName14505",LastName14505 +14506,14506,"FirstName14506 MiddleName14506",LastName14506 +14507,14507,"FirstName14507 MiddleName14507",LastName14507 +14508,14508,"FirstName14508 MiddleName14508",LastName14508 +14509,14509,"FirstName14509 MiddleName14509",LastName14509 +14510,14510,"FirstName14510 MiddleName14510",LastName14510 +14511,14511,"FirstName14511 MiddleName14511",LastName14511 +14512,14512,"FirstName14512 MiddleName14512",LastName14512 +14513,14513,"FirstName14513 MiddleName14513",LastName14513 +14514,14514,"FirstName14514 MiddleName14514",LastName14514 +14515,14515,"FirstName14515 MiddleName14515",LastName14515 +14516,14516,"FirstName14516 MiddleName14516",LastName14516 +14517,14517,"FirstName14517 MiddleName14517",LastName14517 +14518,14518,"FirstName14518 MiddleName14518",LastName14518 +14519,14519,"FirstName14519 MiddleName14519",LastName14519 +14520,14520,"FirstName14520 MiddleName14520",LastName14520 +14521,14521,"FirstName14521 MiddleName14521",LastName14521 
+14522,14522,"FirstName14522 MiddleName14522",LastName14522 +14523,14523,"FirstName14523 MiddleName14523",LastName14523 +14524,14524,"FirstName14524 MiddleName14524",LastName14524 +14525,14525,"FirstName14525 MiddleName14525",LastName14525 +14526,14526,"FirstName14526 MiddleName14526",LastName14526 +14527,14527,"FirstName14527 MiddleName14527",LastName14527 +14528,14528,"FirstName14528 MiddleName14528",LastName14528 +14529,14529,"FirstName14529 MiddleName14529",LastName14529 +14530,14530,"FirstName14530 MiddleName14530",LastName14530 +14531,14531,"FirstName14531 MiddleName14531",LastName14531 +14532,14532,"FirstName14532 MiddleName14532",LastName14532 +14533,14533,"FirstName14533 MiddleName14533",LastName14533 +14534,14534,"FirstName14534 MiddleName14534",LastName14534 +14535,14535,"FirstName14535 MiddleName14535",LastName14535 +14536,14536,"FirstName14536 MiddleName14536",LastName14536 +14537,14537,"FirstName14537 MiddleName14537",LastName14537 +14538,14538,"FirstName14538 MiddleName14538",LastName14538 +14539,14539,"FirstName14539 MiddleName14539",LastName14539 +14540,14540,"FirstName14540 MiddleName14540",LastName14540 +14541,14541,"FirstName14541 MiddleName14541",LastName14541 +14542,14542,"FirstName14542 MiddleName14542",LastName14542 +14543,14543,"FirstName14543 MiddleName14543",LastName14543 +14544,14544,"FirstName14544 MiddleName14544",LastName14544 +14545,14545,"FirstName14545 MiddleName14545",LastName14545 +14546,14546,"FirstName14546 MiddleName14546",LastName14546 +14547,14547,"FirstName14547 MiddleName14547",LastName14547 +14548,14548,"FirstName14548 MiddleName14548",LastName14548 +14549,14549,"FirstName14549 MiddleName14549",LastName14549 +14550,14550,"FirstName14550 MiddleName14550",LastName14550 +14551,14551,"FirstName14551 MiddleName14551",LastName14551 +14552,14552,"FirstName14552 MiddleName14552",LastName14552 +14553,14553,"FirstName14553 MiddleName14553",LastName14553 +14554,14554,"FirstName14554 MiddleName14554",LastName14554 
+14555,14555,"FirstName14555 MiddleName14555",LastName14555 +14556,14556,"FirstName14556 MiddleName14556",LastName14556 +14557,14557,"FirstName14557 MiddleName14557",LastName14557 +14558,14558,"FirstName14558 MiddleName14558",LastName14558 +14559,14559,"FirstName14559 MiddleName14559",LastName14559 +14560,14560,"FirstName14560 MiddleName14560",LastName14560 +14561,14561,"FirstName14561 MiddleName14561",LastName14561 +14562,14562,"FirstName14562 MiddleName14562",LastName14562 +14563,14563,"FirstName14563 MiddleName14563",LastName14563 +14564,14564,"FirstName14564 MiddleName14564",LastName14564 +14565,14565,"FirstName14565 MiddleName14565",LastName14565 +14566,14566,"FirstName14566 MiddleName14566",LastName14566 +14567,14567,"FirstName14567 MiddleName14567",LastName14567 +14568,14568,"FirstName14568 MiddleName14568",LastName14568 +14569,14569,"FirstName14569 MiddleName14569",LastName14569 +14570,14570,"FirstName14570 MiddleName14570",LastName14570 +14571,14571,"FirstName14571 MiddleName14571",LastName14571 +14572,14572,"FirstName14572 MiddleName14572",LastName14572 +14573,14573,"FirstName14573 MiddleName14573",LastName14573 +14574,14574,"FirstName14574 MiddleName14574",LastName14574 +14575,14575,"FirstName14575 MiddleName14575",LastName14575 +14576,14576,"FirstName14576 MiddleName14576",LastName14576 +14577,14577,"FirstName14577 MiddleName14577",LastName14577 +14578,14578,"FirstName14578 MiddleName14578",LastName14578 +14579,14579,"FirstName14579 MiddleName14579",LastName14579 +14580,14580,"FirstName14580 MiddleName14580",LastName14580 +14581,14581,"FirstName14581 MiddleName14581",LastName14581 +14582,14582,"FirstName14582 MiddleName14582",LastName14582 +14583,14583,"FirstName14583 MiddleName14583",LastName14583 +14584,14584,"FirstName14584 MiddleName14584",LastName14584 +14585,14585,"FirstName14585 MiddleName14585",LastName14585 +14586,14586,"FirstName14586 MiddleName14586",LastName14586 +14587,14587,"FirstName14587 MiddleName14587",LastName14587 
+14588,14588,"FirstName14588 MiddleName14588",LastName14588 +14589,14589,"FirstName14589 MiddleName14589",LastName14589 +14590,14590,"FirstName14590 MiddleName14590",LastName14590 +14591,14591,"FirstName14591 MiddleName14591",LastName14591 +14592,14592,"FirstName14592 MiddleName14592",LastName14592 +14593,14593,"FirstName14593 MiddleName14593",LastName14593 +14594,14594,"FirstName14594 MiddleName14594",LastName14594 +14595,14595,"FirstName14595 MiddleName14595",LastName14595 +14596,14596,"FirstName14596 MiddleName14596",LastName14596 +14597,14597,"FirstName14597 MiddleName14597",LastName14597 +14598,14598,"FirstName14598 MiddleName14598",LastName14598 +14599,14599,"FirstName14599 MiddleName14599",LastName14599 +14600,14600,"FirstName14600 MiddleName14600",LastName14600 +14601,14601,"FirstName14601 MiddleName14601",LastName14601 +14602,14602,"FirstName14602 MiddleName14602",LastName14602 +14603,14603,"FirstName14603 MiddleName14603",LastName14603 +14604,14604,"FirstName14604 MiddleName14604",LastName14604 +14605,14605,"FirstName14605 MiddleName14605",LastName14605 +14606,14606,"FirstName14606 MiddleName14606",LastName14606 +14607,14607,"FirstName14607 MiddleName14607",LastName14607 +14608,14608,"FirstName14608 MiddleName14608",LastName14608 +14609,14609,"FirstName14609 MiddleName14609",LastName14609 +14610,14610,"FirstName14610 MiddleName14610",LastName14610 +14611,14611,"FirstName14611 MiddleName14611",LastName14611 +14612,14612,"FirstName14612 MiddleName14612",LastName14612 +14613,14613,"FirstName14613 MiddleName14613",LastName14613 +14614,14614,"FirstName14614 MiddleName14614",LastName14614 +14615,14615,"FirstName14615 MiddleName14615",LastName14615 +14616,14616,"FirstName14616 MiddleName14616",LastName14616 +14617,14617,"FirstName14617 MiddleName14617",LastName14617 +14618,14618,"FirstName14618 MiddleName14618",LastName14618 +14619,14619,"FirstName14619 MiddleName14619",LastName14619 +14620,14620,"FirstName14620 MiddleName14620",LastName14620 
+14621,14621,"FirstName14621 MiddleName14621",LastName14621 +14622,14622,"FirstName14622 MiddleName14622",LastName14622 +14623,14623,"FirstName14623 MiddleName14623",LastName14623 +14624,14624,"FirstName14624 MiddleName14624",LastName14624 +14625,14625,"FirstName14625 MiddleName14625",LastName14625 +14626,14626,"FirstName14626 MiddleName14626",LastName14626 +14627,14627,"FirstName14627 MiddleName14627",LastName14627 +14628,14628,"FirstName14628 MiddleName14628",LastName14628 +14629,14629,"FirstName14629 MiddleName14629",LastName14629 +14630,14630,"FirstName14630 MiddleName14630",LastName14630 +14631,14631,"FirstName14631 MiddleName14631",LastName14631 +14632,14632,"FirstName14632 MiddleName14632",LastName14632 +14633,14633,"FirstName14633 MiddleName14633",LastName14633 +14634,14634,"FirstName14634 MiddleName14634",LastName14634 +14635,14635,"FirstName14635 MiddleName14635",LastName14635 +14636,14636,"FirstName14636 MiddleName14636",LastName14636 +14637,14637,"FirstName14637 MiddleName14637",LastName14637 +14638,14638,"FirstName14638 MiddleName14638",LastName14638 +14639,14639,"FirstName14639 MiddleName14639",LastName14639 +14640,14640,"FirstName14640 MiddleName14640",LastName14640 +14641,14641,"FirstName14641 MiddleName14641",LastName14641 +14642,14642,"FirstName14642 MiddleName14642",LastName14642 +14643,14643,"FirstName14643 MiddleName14643",LastName14643 +14644,14644,"FirstName14644 MiddleName14644",LastName14644 +14645,14645,"FirstName14645 MiddleName14645",LastName14645 +14646,14646,"FirstName14646 MiddleName14646",LastName14646 +14647,14647,"FirstName14647 MiddleName14647",LastName14647 +14648,14648,"FirstName14648 MiddleName14648",LastName14648 +14649,14649,"FirstName14649 MiddleName14649",LastName14649 +14650,14650,"FirstName14650 MiddleName14650",LastName14650 +14651,14651,"FirstName14651 MiddleName14651",LastName14651 +14652,14652,"FirstName14652 MiddleName14652",LastName14652 +14653,14653,"FirstName14653 MiddleName14653",LastName14653 
+14654,14654,"FirstName14654 MiddleName14654",LastName14654 +14655,14655,"FirstName14655 MiddleName14655",LastName14655 +14656,14656,"FirstName14656 MiddleName14656",LastName14656 +14657,14657,"FirstName14657 MiddleName14657",LastName14657 +14658,14658,"FirstName14658 MiddleName14658",LastName14658 +14659,14659,"FirstName14659 MiddleName14659",LastName14659 +14660,14660,"FirstName14660 MiddleName14660",LastName14660 +14661,14661,"FirstName14661 MiddleName14661",LastName14661 +14662,14662,"FirstName14662 MiddleName14662",LastName14662 +14663,14663,"FirstName14663 MiddleName14663",LastName14663 +14664,14664,"FirstName14664 MiddleName14664",LastName14664 +14665,14665,"FirstName14665 MiddleName14665",LastName14665 +14666,14666,"FirstName14666 MiddleName14666",LastName14666 +14667,14667,"FirstName14667 MiddleName14667",LastName14667 +14668,14668,"FirstName14668 MiddleName14668",LastName14668 +14669,14669,"FirstName14669 MiddleName14669",LastName14669 +14670,14670,"FirstName14670 MiddleName14670",LastName14670 +14671,14671,"FirstName14671 MiddleName14671",LastName14671 +14672,14672,"FirstName14672 MiddleName14672",LastName14672 +14673,14673,"FirstName14673 MiddleName14673",LastName14673 +14674,14674,"FirstName14674 MiddleName14674",LastName14674 +14675,14675,"FirstName14675 MiddleName14675",LastName14675 +14676,14676,"FirstName14676 MiddleName14676",LastName14676 +14677,14677,"FirstName14677 MiddleName14677",LastName14677 +14678,14678,"FirstName14678 MiddleName14678",LastName14678 +14679,14679,"FirstName14679 MiddleName14679",LastName14679 +14680,14680,"FirstName14680 MiddleName14680",LastName14680 +14681,14681,"FirstName14681 MiddleName14681",LastName14681 +14682,14682,"FirstName14682 MiddleName14682",LastName14682 +14683,14683,"FirstName14683 MiddleName14683",LastName14683 +14684,14684,"FirstName14684 MiddleName14684",LastName14684 +14685,14685,"FirstName14685 MiddleName14685",LastName14685 +14686,14686,"FirstName14686 MiddleName14686",LastName14686 
+14687,14687,"FirstName14687 MiddleName14687",LastName14687 +14688,14688,"FirstName14688 MiddleName14688",LastName14688 +14689,14689,"FirstName14689 MiddleName14689",LastName14689 +14690,14690,"FirstName14690 MiddleName14690",LastName14690 +14691,14691,"FirstName14691 MiddleName14691",LastName14691 +14692,14692,"FirstName14692 MiddleName14692",LastName14692 +14693,14693,"FirstName14693 MiddleName14693",LastName14693 +14694,14694,"FirstName14694 MiddleName14694",LastName14694 +14695,14695,"FirstName14695 MiddleName14695",LastName14695 +14696,14696,"FirstName14696 MiddleName14696",LastName14696 +14697,14697,"FirstName14697 MiddleName14697",LastName14697 +14698,14698,"FirstName14698 MiddleName14698",LastName14698 +14699,14699,"FirstName14699 MiddleName14699",LastName14699 +14700,14700,"FirstName14700 MiddleName14700",LastName14700 +14701,14701,"FirstName14701 MiddleName14701",LastName14701 +14702,14702,"FirstName14702 MiddleName14702",LastName14702 +14703,14703,"FirstName14703 MiddleName14703",LastName14703 +14704,14704,"FirstName14704 MiddleName14704",LastName14704 +14705,14705,"FirstName14705 MiddleName14705",LastName14705 +14706,14706,"FirstName14706 MiddleName14706",LastName14706 +14707,14707,"FirstName14707 MiddleName14707",LastName14707 +14708,14708,"FirstName14708 MiddleName14708",LastName14708 +14709,14709,"FirstName14709 MiddleName14709",LastName14709 +14710,14710,"FirstName14710 MiddleName14710",LastName14710 +14711,14711,"FirstName14711 MiddleName14711",LastName14711 +14712,14712,"FirstName14712 MiddleName14712",LastName14712 +14713,14713,"FirstName14713 MiddleName14713",LastName14713 +14714,14714,"FirstName14714 MiddleName14714",LastName14714 +14715,14715,"FirstName14715 MiddleName14715",LastName14715 +14716,14716,"FirstName14716 MiddleName14716",LastName14716 +14717,14717,"FirstName14717 MiddleName14717",LastName14717 +14718,14718,"FirstName14718 MiddleName14718",LastName14718 +14719,14719,"FirstName14719 MiddleName14719",LastName14719 
+14720,14720,"FirstName14720 MiddleName14720",LastName14720 +14721,14721,"FirstName14721 MiddleName14721",LastName14721 +14722,14722,"FirstName14722 MiddleName14722",LastName14722 +14723,14723,"FirstName14723 MiddleName14723",LastName14723 +14724,14724,"FirstName14724 MiddleName14724",LastName14724 +14725,14725,"FirstName14725 MiddleName14725",LastName14725 +14726,14726,"FirstName14726 MiddleName14726",LastName14726 +14727,14727,"FirstName14727 MiddleName14727",LastName14727 +14728,14728,"FirstName14728 MiddleName14728",LastName14728 +14729,14729,"FirstName14729 MiddleName14729",LastName14729 +14730,14730,"FirstName14730 MiddleName14730",LastName14730 +14731,14731,"FirstName14731 MiddleName14731",LastName14731 +14732,14732,"FirstName14732 MiddleName14732",LastName14732 +14733,14733,"FirstName14733 MiddleName14733",LastName14733 +14734,14734,"FirstName14734 MiddleName14734",LastName14734 +14735,14735,"FirstName14735 MiddleName14735",LastName14735 +14736,14736,"FirstName14736 MiddleName14736",LastName14736 +14737,14737,"FirstName14737 MiddleName14737",LastName14737 +14738,14738,"FirstName14738 MiddleName14738",LastName14738 +14739,14739,"FirstName14739 MiddleName14739",LastName14739 +14740,14740,"FirstName14740 MiddleName14740",LastName14740 +14741,14741,"FirstName14741 MiddleName14741",LastName14741 +14742,14742,"FirstName14742 MiddleName14742",LastName14742 +14743,14743,"FirstName14743 MiddleName14743",LastName14743 +14744,14744,"FirstName14744 MiddleName14744",LastName14744 +14745,14745,"FirstName14745 MiddleName14745",LastName14745 +14746,14746,"FirstName14746 MiddleName14746",LastName14746 +14747,14747,"FirstName14747 MiddleName14747",LastName14747 +14748,14748,"FirstName14748 MiddleName14748",LastName14748 +14749,14749,"FirstName14749 MiddleName14749",LastName14749 +14750,14750,"FirstName14750 MiddleName14750",LastName14750 +14751,14751,"FirstName14751 MiddleName14751",LastName14751 +14752,14752,"FirstName14752 MiddleName14752",LastName14752 
+14753,14753,"FirstName14753 MiddleName14753",LastName14753 +14754,14754,"FirstName14754 MiddleName14754",LastName14754 +14755,14755,"FirstName14755 MiddleName14755",LastName14755 +14756,14756,"FirstName14756 MiddleName14756",LastName14756 +14757,14757,"FirstName14757 MiddleName14757",LastName14757 +14758,14758,"FirstName14758 MiddleName14758",LastName14758 +14759,14759,"FirstName14759 MiddleName14759",LastName14759 +14760,14760,"FirstName14760 MiddleName14760",LastName14760 +14761,14761,"FirstName14761 MiddleName14761",LastName14761 +14762,14762,"FirstName14762 MiddleName14762",LastName14762 +14763,14763,"FirstName14763 MiddleName14763",LastName14763 +14764,14764,"FirstName14764 MiddleName14764",LastName14764 +14765,14765,"FirstName14765 MiddleName14765",LastName14765 +14766,14766,"FirstName14766 MiddleName14766",LastName14766 +14767,14767,"FirstName14767 MiddleName14767",LastName14767 +14768,14768,"FirstName14768 MiddleName14768",LastName14768 +14769,14769,"FirstName14769 MiddleName14769",LastName14769 +14770,14770,"FirstName14770 MiddleName14770",LastName14770 +14771,14771,"FirstName14771 MiddleName14771",LastName14771 +14772,14772,"FirstName14772 MiddleName14772",LastName14772 +14773,14773,"FirstName14773 MiddleName14773",LastName14773 +14774,14774,"FirstName14774 MiddleName14774",LastName14774 +14775,14775,"FirstName14775 MiddleName14775",LastName14775 +14776,14776,"FirstName14776 MiddleName14776",LastName14776 +14777,14777,"FirstName14777 MiddleName14777",LastName14777 +14778,14778,"FirstName14778 MiddleName14778",LastName14778 +14779,14779,"FirstName14779 MiddleName14779",LastName14779 +14780,14780,"FirstName14780 MiddleName14780",LastName14780 +14781,14781,"FirstName14781 MiddleName14781",LastName14781 +14782,14782,"FirstName14782 MiddleName14782",LastName14782 +14783,14783,"FirstName14783 MiddleName14783",LastName14783 +14784,14784,"FirstName14784 MiddleName14784",LastName14784 +14785,14785,"FirstName14785 MiddleName14785",LastName14785 
+14786,14786,"FirstName14786 MiddleName14786",LastName14786 +14787,14787,"FirstName14787 MiddleName14787",LastName14787 +14788,14788,"FirstName14788 MiddleName14788",LastName14788 +14789,14789,"FirstName14789 MiddleName14789",LastName14789 +14790,14790,"FirstName14790 MiddleName14790",LastName14790 +14791,14791,"FirstName14791 MiddleName14791",LastName14791 +14792,14792,"FirstName14792 MiddleName14792",LastName14792 +14793,14793,"FirstName14793 MiddleName14793",LastName14793 +14794,14794,"FirstName14794 MiddleName14794",LastName14794 +14795,14795,"FirstName14795 MiddleName14795",LastName14795 +14796,14796,"FirstName14796 MiddleName14796",LastName14796 +14797,14797,"FirstName14797 MiddleName14797",LastName14797 +14798,14798,"FirstName14798 MiddleName14798",LastName14798 +14799,14799,"FirstName14799 MiddleName14799",LastName14799 +14800,14800,"FirstName14800 MiddleName14800",LastName14800 +14801,14801,"FirstName14801 MiddleName14801",LastName14801 +14802,14802,"FirstName14802 MiddleName14802",LastName14802 +14803,14803,"FirstName14803 MiddleName14803",LastName14803 +14804,14804,"FirstName14804 MiddleName14804",LastName14804 +14805,14805,"FirstName14805 MiddleName14805",LastName14805 +14806,14806,"FirstName14806 MiddleName14806",LastName14806 +14807,14807,"FirstName14807 MiddleName14807",LastName14807 +14808,14808,"FirstName14808 MiddleName14808",LastName14808 +14809,14809,"FirstName14809 MiddleName14809",LastName14809 +14810,14810,"FirstName14810 MiddleName14810",LastName14810 +14811,14811,"FirstName14811 MiddleName14811",LastName14811 +14812,14812,"FirstName14812 MiddleName14812",LastName14812 +14813,14813,"FirstName14813 MiddleName14813",LastName14813 +14814,14814,"FirstName14814 MiddleName14814",LastName14814 +14815,14815,"FirstName14815 MiddleName14815",LastName14815 +14816,14816,"FirstName14816 MiddleName14816",LastName14816 +14817,14817,"FirstName14817 MiddleName14817",LastName14817 +14818,14818,"FirstName14818 MiddleName14818",LastName14818 
+14819,14819,"FirstName14819 MiddleName14819",LastName14819 +14820,14820,"FirstName14820 MiddleName14820",LastName14820 +14821,14821,"FirstName14821 MiddleName14821",LastName14821 +14822,14822,"FirstName14822 MiddleName14822",LastName14822 +14823,14823,"FirstName14823 MiddleName14823",LastName14823 +14824,14824,"FirstName14824 MiddleName14824",LastName14824 +14825,14825,"FirstName14825 MiddleName14825",LastName14825 +14826,14826,"FirstName14826 MiddleName14826",LastName14826 +14827,14827,"FirstName14827 MiddleName14827",LastName14827 +14828,14828,"FirstName14828 MiddleName14828",LastName14828 +14829,14829,"FirstName14829 MiddleName14829",LastName14829 +14830,14830,"FirstName14830 MiddleName14830",LastName14830 +14831,14831,"FirstName14831 MiddleName14831",LastName14831 +14832,14832,"FirstName14832 MiddleName14832",LastName14832 +14833,14833,"FirstName14833 MiddleName14833",LastName14833 +14834,14834,"FirstName14834 MiddleName14834",LastName14834 +14835,14835,"FirstName14835 MiddleName14835",LastName14835 +14836,14836,"FirstName14836 MiddleName14836",LastName14836 +14837,14837,"FirstName14837 MiddleName14837",LastName14837 +14838,14838,"FirstName14838 MiddleName14838",LastName14838 +14839,14839,"FirstName14839 MiddleName14839",LastName14839 +14840,14840,"FirstName14840 MiddleName14840",LastName14840 +14841,14841,"FirstName14841 MiddleName14841",LastName14841 +14842,14842,"FirstName14842 MiddleName14842",LastName14842 +14843,14843,"FirstName14843 MiddleName14843",LastName14843 +14844,14844,"FirstName14844 MiddleName14844",LastName14844 +14845,14845,"FirstName14845 MiddleName14845",LastName14845 +14846,14846,"FirstName14846 MiddleName14846",LastName14846 +14847,14847,"FirstName14847 MiddleName14847",LastName14847 +14848,14848,"FirstName14848 MiddleName14848",LastName14848 +14849,14849,"FirstName14849 MiddleName14849",LastName14849 +14850,14850,"FirstName14850 MiddleName14850",LastName14850 +14851,14851,"FirstName14851 MiddleName14851",LastName14851 
+14852,14852,"FirstName14852 MiddleName14852",LastName14852 +14853,14853,"FirstName14853 MiddleName14853",LastName14853 +14854,14854,"FirstName14854 MiddleName14854",LastName14854 +14855,14855,"FirstName14855 MiddleName14855",LastName14855 +14856,14856,"FirstName14856 MiddleName14856",LastName14856 +14857,14857,"FirstName14857 MiddleName14857",LastName14857 +14858,14858,"FirstName14858 MiddleName14858",LastName14858 +14859,14859,"FirstName14859 MiddleName14859",LastName14859 +14860,14860,"FirstName14860 MiddleName14860",LastName14860 +14861,14861,"FirstName14861 MiddleName14861",LastName14861 +14862,14862,"FirstName14862 MiddleName14862",LastName14862 +14863,14863,"FirstName14863 MiddleName14863",LastName14863 +14864,14864,"FirstName14864 MiddleName14864",LastName14864 +14865,14865,"FirstName14865 MiddleName14865",LastName14865 +14866,14866,"FirstName14866 MiddleName14866",LastName14866 +14867,14867,"FirstName14867 MiddleName14867",LastName14867 +14868,14868,"FirstName14868 MiddleName14868",LastName14868 +14869,14869,"FirstName14869 MiddleName14869",LastName14869 +14870,14870,"FirstName14870 MiddleName14870",LastName14870 +14871,14871,"FirstName14871 MiddleName14871",LastName14871 +14872,14872,"FirstName14872 MiddleName14872",LastName14872 +14873,14873,"FirstName14873 MiddleName14873",LastName14873 +14874,14874,"FirstName14874 MiddleName14874",LastName14874 +14875,14875,"FirstName14875 MiddleName14875",LastName14875 +14876,14876,"FirstName14876 MiddleName14876",LastName14876 +14877,14877,"FirstName14877 MiddleName14877",LastName14877 +14878,14878,"FirstName14878 MiddleName14878",LastName14878 +14879,14879,"FirstName14879 MiddleName14879",LastName14879 +14880,14880,"FirstName14880 MiddleName14880",LastName14880 +14881,14881,"FirstName14881 MiddleName14881",LastName14881 +14882,14882,"FirstName14882 MiddleName14882",LastName14882 +14883,14883,"FirstName14883 MiddleName14883",LastName14883 +14884,14884,"FirstName14884 MiddleName14884",LastName14884 
+14885,14885,"FirstName14885 MiddleName14885",LastName14885 +14886,14886,"FirstName14886 MiddleName14886",LastName14886 +14887,14887,"FirstName14887 MiddleName14887",LastName14887 +14888,14888,"FirstName14888 MiddleName14888",LastName14888 +14889,14889,"FirstName14889 MiddleName14889",LastName14889 +14890,14890,"FirstName14890 MiddleName14890",LastName14890 +14891,14891,"FirstName14891 MiddleName14891",LastName14891 +14892,14892,"FirstName14892 MiddleName14892",LastName14892 +14893,14893,"FirstName14893 MiddleName14893",LastName14893 +14894,14894,"FirstName14894 MiddleName14894",LastName14894 +14895,14895,"FirstName14895 MiddleName14895",LastName14895 +14896,14896,"FirstName14896 MiddleName14896",LastName14896 +14897,14897,"FirstName14897 MiddleName14897",LastName14897 +14898,14898,"FirstName14898 MiddleName14898",LastName14898 +14899,14899,"FirstName14899 MiddleName14899",LastName14899 +14900,14900,"FirstName14900 MiddleName14900",LastName14900 +14901,14901,"FirstName14901 MiddleName14901",LastName14901 +14902,14902,"FirstName14902 MiddleName14902",LastName14902 +14903,14903,"FirstName14903 MiddleName14903",LastName14903 +14904,14904,"FirstName14904 MiddleName14904",LastName14904 +14905,14905,"FirstName14905 MiddleName14905",LastName14905 +14906,14906,"FirstName14906 MiddleName14906",LastName14906 +14907,14907,"FirstName14907 MiddleName14907",LastName14907 +14908,14908,"FirstName14908 MiddleName14908",LastName14908 +14909,14909,"FirstName14909 MiddleName14909",LastName14909 +14910,14910,"FirstName14910 MiddleName14910",LastName14910 +14911,14911,"FirstName14911 MiddleName14911",LastName14911 +14912,14912,"FirstName14912 MiddleName14912",LastName14912 +14913,14913,"FirstName14913 MiddleName14913",LastName14913 +14914,14914,"FirstName14914 MiddleName14914",LastName14914 +14915,14915,"FirstName14915 MiddleName14915",LastName14915 +14916,14916,"FirstName14916 MiddleName14916",LastName14916 +14917,14917,"FirstName14917 MiddleName14917",LastName14917 
+14918,14918,"FirstName14918 MiddleName14918",LastName14918 +14919,14919,"FirstName14919 MiddleName14919",LastName14919 +14920,14920,"FirstName14920 MiddleName14920",LastName14920 +14921,14921,"FirstName14921 MiddleName14921",LastName14921 +14922,14922,"FirstName14922 MiddleName14922",LastName14922 +14923,14923,"FirstName14923 MiddleName14923",LastName14923 +14924,14924,"FirstName14924 MiddleName14924",LastName14924 +14925,14925,"FirstName14925 MiddleName14925",LastName14925 +14926,14926,"FirstName14926 MiddleName14926",LastName14926 +14927,14927,"FirstName14927 MiddleName14927",LastName14927 +14928,14928,"FirstName14928 MiddleName14928",LastName14928 +14929,14929,"FirstName14929 MiddleName14929",LastName14929 +14930,14930,"FirstName14930 MiddleName14930",LastName14930 +14931,14931,"FirstName14931 MiddleName14931",LastName14931 +14932,14932,"FirstName14932 MiddleName14932",LastName14932 +14933,14933,"FirstName14933 MiddleName14933",LastName14933 +14934,14934,"FirstName14934 MiddleName14934",LastName14934 +14935,14935,"FirstName14935 MiddleName14935",LastName14935 +14936,14936,"FirstName14936 MiddleName14936",LastName14936 +14937,14937,"FirstName14937 MiddleName14937",LastName14937 +14938,14938,"FirstName14938 MiddleName14938",LastName14938 +14939,14939,"FirstName14939 MiddleName14939",LastName14939 +14940,14940,"FirstName14940 MiddleName14940",LastName14940 +14941,14941,"FirstName14941 MiddleName14941",LastName14941 +14942,14942,"FirstName14942 MiddleName14942",LastName14942 +14943,14943,"FirstName14943 MiddleName14943",LastName14943 +14944,14944,"FirstName14944 MiddleName14944",LastName14944 +14945,14945,"FirstName14945 MiddleName14945",LastName14945 +14946,14946,"FirstName14946 MiddleName14946",LastName14946 +14947,14947,"FirstName14947 MiddleName14947",LastName14947 +14948,14948,"FirstName14948 MiddleName14948",LastName14948 +14949,14949,"FirstName14949 MiddleName14949",LastName14949 +14950,14950,"FirstName14950 MiddleName14950",LastName14950 
+14951,14951,"FirstName14951 MiddleName14951",LastName14951 +14952,14952,"FirstName14952 MiddleName14952",LastName14952 +14953,14953,"FirstName14953 MiddleName14953",LastName14953 +14954,14954,"FirstName14954 MiddleName14954",LastName14954 +14955,14955,"FirstName14955 MiddleName14955",LastName14955 +14956,14956,"FirstName14956 MiddleName14956",LastName14956 +14957,14957,"FirstName14957 MiddleName14957",LastName14957 +14958,14958,"FirstName14958 MiddleName14958",LastName14958 +14959,14959,"FirstName14959 MiddleName14959",LastName14959 +14960,14960,"FirstName14960 MiddleName14960",LastName14960 +14961,14961,"FirstName14961 MiddleName14961",LastName14961 +14962,14962,"FirstName14962 MiddleName14962",LastName14962 +14963,14963,"FirstName14963 MiddleName14963",LastName14963 +14964,14964,"FirstName14964 MiddleName14964",LastName14964 +14965,14965,"FirstName14965 MiddleName14965",LastName14965 +14966,14966,"FirstName14966 MiddleName14966",LastName14966 +14967,14967,"FirstName14967 MiddleName14967",LastName14967 +14968,14968,"FirstName14968 MiddleName14968",LastName14968 +14969,14969,"FirstName14969 MiddleName14969",LastName14969 +14970,14970,"FirstName14970 MiddleName14970",LastName14970 +14971,14971,"FirstName14971 MiddleName14971",LastName14971 +14972,14972,"FirstName14972 MiddleName14972",LastName14972 +14973,14973,"FirstName14973 MiddleName14973",LastName14973 +14974,14974,"FirstName14974 MiddleName14974",LastName14974 +14975,14975,"FirstName14975 MiddleName14975",LastName14975 +14976,14976,"FirstName14976 MiddleName14976",LastName14976 +14977,14977,"FirstName14977 MiddleName14977",LastName14977 +14978,14978,"FirstName14978 MiddleName14978",LastName14978 +14979,14979,"FirstName14979 MiddleName14979",LastName14979 +14980,14980,"FirstName14980 MiddleName14980",LastName14980 +14981,14981,"FirstName14981 MiddleName14981",LastName14981 +14982,14982,"FirstName14982 MiddleName14982",LastName14982 +14983,14983,"FirstName14983 MiddleName14983",LastName14983 
+14984,14984,"FirstName14984 MiddleName14984",LastName14984 +14985,14985,"FirstName14985 MiddleName14985",LastName14985 +14986,14986,"FirstName14986 MiddleName14986",LastName14986 +14987,14987,"FirstName14987 MiddleName14987",LastName14987 +14988,14988,"FirstName14988 MiddleName14988",LastName14988 +14989,14989,"FirstName14989 MiddleName14989",LastName14989 +14990,14990,"FirstName14990 MiddleName14990",LastName14990 +14991,14991,"FirstName14991 MiddleName14991",LastName14991 +14992,14992,"FirstName14992 MiddleName14992",LastName14992 +14993,14993,"FirstName14993 MiddleName14993",LastName14993 +14994,14994,"FirstName14994 MiddleName14994",LastName14994 +14995,14995,"FirstName14995 MiddleName14995",LastName14995 +14996,14996,"FirstName14996 MiddleName14996",LastName14996 +14997,14997,"FirstName14997 MiddleName14997",LastName14997 +14998,14998,"FirstName14998 MiddleName14998",LastName14998 +14999,14999,"FirstName14999 MiddleName14999",LastName14999 +15000,15000,"FirstName15000 MiddleName15000",LastName15000 +15001,15001,"FirstName15001 MiddleName15001",LastName15001 +15002,15002,"FirstName15002 MiddleName15002",LastName15002 +15003,15003,"FirstName15003 MiddleName15003",LastName15003 +15004,15004,"FirstName15004 MiddleName15004",LastName15004 +15005,15005,"FirstName15005 MiddleName15005",LastName15005 +15006,15006,"FirstName15006 MiddleName15006",LastName15006 +15007,15007,"FirstName15007 MiddleName15007",LastName15007 +15008,15008,"FirstName15008 MiddleName15008",LastName15008 +15009,15009,"FirstName15009 MiddleName15009",LastName15009 +15010,15010,"FirstName15010 MiddleName15010",LastName15010 +15011,15011,"FirstName15011 MiddleName15011",LastName15011 +15012,15012,"FirstName15012 MiddleName15012",LastName15012 +15013,15013,"FirstName15013 MiddleName15013",LastName15013 +15014,15014,"FirstName15014 MiddleName15014",LastName15014 +15015,15015,"FirstName15015 MiddleName15015",LastName15015 +15016,15016,"FirstName15016 MiddleName15016",LastName15016 
+15017,15017,"FirstName15017 MiddleName15017",LastName15017 +15018,15018,"FirstName15018 MiddleName15018",LastName15018 +15019,15019,"FirstName15019 MiddleName15019",LastName15019 +15020,15020,"FirstName15020 MiddleName15020",LastName15020 +15021,15021,"FirstName15021 MiddleName15021",LastName15021 +15022,15022,"FirstName15022 MiddleName15022",LastName15022 +15023,15023,"FirstName15023 MiddleName15023",LastName15023 +15024,15024,"FirstName15024 MiddleName15024",LastName15024 +15025,15025,"FirstName15025 MiddleName15025",LastName15025 +15026,15026,"FirstName15026 MiddleName15026",LastName15026 +15027,15027,"FirstName15027 MiddleName15027",LastName15027 +15028,15028,"FirstName15028 MiddleName15028",LastName15028 +15029,15029,"FirstName15029 MiddleName15029",LastName15029 +15030,15030,"FirstName15030 MiddleName15030",LastName15030 +15031,15031,"FirstName15031 MiddleName15031",LastName15031 +15032,15032,"FirstName15032 MiddleName15032",LastName15032 +15033,15033,"FirstName15033 MiddleName15033",LastName15033 +15034,15034,"FirstName15034 MiddleName15034",LastName15034 +15035,15035,"FirstName15035 MiddleName15035",LastName15035 +15036,15036,"FirstName15036 MiddleName15036",LastName15036 +15037,15037,"FirstName15037 MiddleName15037",LastName15037 +15038,15038,"FirstName15038 MiddleName15038",LastName15038 +15039,15039,"FirstName15039 MiddleName15039",LastName15039 +15040,15040,"FirstName15040 MiddleName15040",LastName15040 +15041,15041,"FirstName15041 MiddleName15041",LastName15041 +15042,15042,"FirstName15042 MiddleName15042",LastName15042 +15043,15043,"FirstName15043 MiddleName15043",LastName15043 +15044,15044,"FirstName15044 MiddleName15044",LastName15044 +15045,15045,"FirstName15045 MiddleName15045",LastName15045 +15046,15046,"FirstName15046 MiddleName15046",LastName15046 +15047,15047,"FirstName15047 MiddleName15047",LastName15047 +15048,15048,"FirstName15048 MiddleName15048",LastName15048 +15049,15049,"FirstName15049 MiddleName15049",LastName15049 
+15050,15050,"FirstName15050 MiddleName15050",LastName15050
+15051,15051,"FirstName15051 MiddleName15051",LastName15051
+15052,15052,"FirstName15052 MiddleName15052",LastName15052
+15053,15053,"FirstName15053 MiddleName15053",LastName15053
+15054,15054,"FirstName15054 MiddleName15054",LastName15054
+15055,15055,"FirstName15055 MiddleName15055",LastName15055
+15056,15056,"FirstName15056 MiddleName15056",LastName15056
+15057,15057,"FirstName15057 MiddleName15057",LastName15057
+15058,15058,"FirstName15058 MiddleName15058",LastName15058
+15059,15059,"FirstName15059 MiddleName15059",LastName15059
+15060,15060,"FirstName15060 MiddleName15060",LastName15060
+15061,15061,"FirstName15061 MiddleName15061",LastName15061
+15062,15062,"FirstName15062 MiddleName15062",LastName15062
+15063,15063,"FirstName15063 MiddleName15063",LastName15063
+15064,15064,"FirstName15064 MiddleName15064",LastName15064
+15065,15065,"FirstName15065 MiddleName15065",LastName15065
+15066,15066,"FirstName15066 MiddleName15066",LastName15066
+15067,15067,"FirstName15067 MiddleName15067",LastName15067
+15068,15068,"FirstName15068 MiddleName15068",LastName15068
+15069,15069,"FirstName15069 MiddleName15069",LastName15069
+15070,15070,"FirstName15070 MiddleName15070",LastName15070
+15071,15071,"FirstName15071 MiddleName15071",LastName15071
+15072,15072,"FirstName15072 MiddleName15072",LastName15072
+15073,15073,"FirstName15073 MiddleName15073",LastName15073
+15074,15074,"FirstName15074 MiddleName15074",LastName15074
+15075,15075,"FirstName15075 MiddleName15075",LastName15075
+15076,15076,"FirstName15076 MiddleName15076",LastName15076
+15077,15077,"FirstName15077 MiddleName15077",LastName15077
+15078,15078,"FirstName15078 MiddleName15078",LastName15078
+15079,15079,"FirstName15079 MiddleName15079",LastName15079
+15080,15080,"FirstName15080 MiddleName15080",LastName15080
+15081,15081,"FirstName15081 MiddleName15081",LastName15081
+15082,15082,"FirstName15082 MiddleName15082",LastName15082
+15083,15083,"FirstName15083 MiddleName15083",LastName15083
+15084,15084,"FirstName15084 MiddleName15084",LastName15084
+15085,15085,"FirstName15085 MiddleName15085",LastName15085
+15086,15086,"FirstName15086 MiddleName15086",LastName15086
+15087,15087,"FirstName15087 MiddleName15087",LastName15087
+15088,15088,"FirstName15088 MiddleName15088",LastName15088
+15089,15089,"FirstName15089 MiddleName15089",LastName15089
+15090,15090,"FirstName15090 MiddleName15090",LastName15090
+15091,15091,"FirstName15091 MiddleName15091",LastName15091
+15092,15092,"FirstName15092 MiddleName15092",LastName15092
+15093,15093,"FirstName15093 MiddleName15093",LastName15093
+15094,15094,"FirstName15094 MiddleName15094",LastName15094
+15095,15095,"FirstName15095 MiddleName15095",LastName15095
+15096,15096,"FirstName15096 MiddleName15096",LastName15096
+15097,15097,"FirstName15097 MiddleName15097",LastName15097
+15098,15098,"FirstName15098 MiddleName15098",LastName15098
+15099,15099,"FirstName15099 MiddleName15099",LastName15099
+15100,15100,"FirstName15100 MiddleName15100",LastName15100
+15101,15101,"FirstName15101 MiddleName15101",LastName15101
+15102,15102,"FirstName15102 MiddleName15102",LastName15102
+15103,15103,"FirstName15103 MiddleName15103",LastName15103
+15104,15104,"FirstName15104 MiddleName15104",LastName15104
+15105,15105,"FirstName15105 MiddleName15105",LastName15105
+15106,15106,"FirstName15106 MiddleName15106",LastName15106
+15107,15107,"FirstName15107 MiddleName15107",LastName15107
+15108,15108,"FirstName15108 MiddleName15108",LastName15108
+15109,15109,"FirstName15109 MiddleName15109",LastName15109
+15110,15110,"FirstName15110 MiddleName15110",LastName15110
+15111,15111,"FirstName15111 MiddleName15111",LastName15111
+15112,15112,"FirstName15112 MiddleName15112",LastName15112
+15113,15113,"FirstName15113 MiddleName15113",LastName15113
+15114,15114,"FirstName15114 MiddleName15114",LastName15114
+15115,15115,"FirstName15115 MiddleName15115",LastName15115
+15116,15116,"FirstName15116 MiddleName15116",LastName15116
+15117,15117,"FirstName15117 MiddleName15117",LastName15117
+15118,15118,"FirstName15118 MiddleName15118",LastName15118
+15119,15119,"FirstName15119 MiddleName15119",LastName15119
+15120,15120,"FirstName15120 MiddleName15120",LastName15120
+15121,15121,"FirstName15121 MiddleName15121",LastName15121
+15122,15122,"FirstName15122 MiddleName15122",LastName15122
+15123,15123,"FirstName15123 MiddleName15123",LastName15123
+15124,15124,"FirstName15124 MiddleName15124",LastName15124
+15125,15125,"FirstName15125 MiddleName15125",LastName15125
+15126,15126,"FirstName15126 MiddleName15126",LastName15126
+15127,15127,"FirstName15127 MiddleName15127",LastName15127
+15128,15128,"FirstName15128 MiddleName15128",LastName15128
+15129,15129,"FirstName15129 MiddleName15129",LastName15129
+15130,15130,"FirstName15130 MiddleName15130",LastName15130
+15131,15131,"FirstName15131 MiddleName15131",LastName15131
+15132,15132,"FirstName15132 MiddleName15132",LastName15132
+15133,15133,"FirstName15133 MiddleName15133",LastName15133
+15134,15134,"FirstName15134 MiddleName15134",LastName15134
+15135,15135,"FirstName15135 MiddleName15135",LastName15135
+15136,15136,"FirstName15136 MiddleName15136",LastName15136
+15137,15137,"FirstName15137 MiddleName15137",LastName15137
+15138,15138,"FirstName15138 MiddleName15138",LastName15138
+15139,15139,"FirstName15139 MiddleName15139",LastName15139
+15140,15140,"FirstName15140 MiddleName15140",LastName15140
+15141,15141,"FirstName15141 MiddleName15141",LastName15141
+15142,15142,"FirstName15142 MiddleName15142",LastName15142
+15143,15143,"FirstName15143 MiddleName15143",LastName15143
+15144,15144,"FirstName15144 MiddleName15144",LastName15144
+15145,15145,"FirstName15145 MiddleName15145",LastName15145
+15146,15146,"FirstName15146 MiddleName15146",LastName15146
+15147,15147,"FirstName15147 MiddleName15147",LastName15147
+15148,15148,"FirstName15148 MiddleName15148",LastName15148
+15149,15149,"FirstName15149 MiddleName15149",LastName15149
+15150,15150,"FirstName15150 MiddleName15150",LastName15150
+15151,15151,"FirstName15151 MiddleName15151",LastName15151
+15152,15152,"FirstName15152 MiddleName15152",LastName15152
+15153,15153,"FirstName15153 MiddleName15153",LastName15153
+15154,15154,"FirstName15154 MiddleName15154",LastName15154
+15155,15155,"FirstName15155 MiddleName15155",LastName15155
+15156,15156,"FirstName15156 MiddleName15156",LastName15156
+15157,15157,"FirstName15157 MiddleName15157",LastName15157
+15158,15158,"FirstName15158 MiddleName15158",LastName15158
+15159,15159,"FirstName15159 MiddleName15159",LastName15159
+15160,15160,"FirstName15160 MiddleName15160",LastName15160
+15161,15161,"FirstName15161 MiddleName15161",LastName15161
+15162,15162,"FirstName15162 MiddleName15162",LastName15162
+15163,15163,"FirstName15163 MiddleName15163",LastName15163
+15164,15164,"FirstName15164 MiddleName15164",LastName15164
+15165,15165,"FirstName15165 MiddleName15165",LastName15165
+15166,15166,"FirstName15166 MiddleName15166",LastName15166
+15167,15167,"FirstName15167 MiddleName15167",LastName15167
+15168,15168,"FirstName15168 MiddleName15168",LastName15168
+15169,15169,"FirstName15169 MiddleName15169",LastName15169
+15170,15170,"FirstName15170 MiddleName15170",LastName15170
+15171,15171,"FirstName15171 MiddleName15171",LastName15171
+15172,15172,"FirstName15172 MiddleName15172",LastName15172
+15173,15173,"FirstName15173 MiddleName15173",LastName15173
+15174,15174,"FirstName15174 MiddleName15174",LastName15174
+15175,15175,"FirstName15175 MiddleName15175",LastName15175
+15176,15176,"FirstName15176 MiddleName15176",LastName15176
+15177,15177,"FirstName15177 MiddleName15177",LastName15177
+15178,15178,"FirstName15178 MiddleName15178",LastName15178
+15179,15179,"FirstName15179 MiddleName15179",LastName15179
+15180,15180,"FirstName15180 MiddleName15180",LastName15180
+15181,15181,"FirstName15181 MiddleName15181",LastName15181
+15182,15182,"FirstName15182 MiddleName15182",LastName15182
+15183,15183,"FirstName15183 MiddleName15183",LastName15183
+15184,15184,"FirstName15184 MiddleName15184",LastName15184
+15185,15185,"FirstName15185 MiddleName15185",LastName15185
+15186,15186,"FirstName15186 MiddleName15186",LastName15186
+15187,15187,"FirstName15187 MiddleName15187",LastName15187
+15188,15188,"FirstName15188 MiddleName15188",LastName15188
+15189,15189,"FirstName15189 MiddleName15189",LastName15189
+15190,15190,"FirstName15190 MiddleName15190",LastName15190
+15191,15191,"FirstName15191 MiddleName15191",LastName15191
+15192,15192,"FirstName15192 MiddleName15192",LastName15192
+15193,15193,"FirstName15193 MiddleName15193",LastName15193
+15194,15194,"FirstName15194 MiddleName15194",LastName15194
+15195,15195,"FirstName15195 MiddleName15195",LastName15195
+15196,15196,"FirstName15196 MiddleName15196",LastName15196
+15197,15197,"FirstName15197 MiddleName15197",LastName15197
+15198,15198,"FirstName15198 MiddleName15198",LastName15198
+15199,15199,"FirstName15199 MiddleName15199",LastName15199
+15200,15200,"FirstName15200 MiddleName15200",LastName15200
+15201,15201,"FirstName15201 MiddleName15201",LastName15201
+15202,15202,"FirstName15202 MiddleName15202",LastName15202
+15203,15203,"FirstName15203 MiddleName15203",LastName15203
+15204,15204,"FirstName15204 MiddleName15204",LastName15204
+15205,15205,"FirstName15205 MiddleName15205",LastName15205
+15206,15206,"FirstName15206 MiddleName15206",LastName15206
+15207,15207,"FirstName15207 MiddleName15207",LastName15207
+15208,15208,"FirstName15208 MiddleName15208",LastName15208
+15209,15209,"FirstName15209 MiddleName15209",LastName15209
+15210,15210,"FirstName15210 MiddleName15210",LastName15210
+15211,15211,"FirstName15211 MiddleName15211",LastName15211
+15212,15212,"FirstName15212 MiddleName15212",LastName15212
+15213,15213,"FirstName15213 MiddleName15213",LastName15213
+15214,15214,"FirstName15214 MiddleName15214",LastName15214
+15215,15215,"FirstName15215 MiddleName15215",LastName15215
+15216,15216,"FirstName15216 MiddleName15216",LastName15216
+15217,15217,"FirstName15217 MiddleName15217",LastName15217
+15218,15218,"FirstName15218 MiddleName15218",LastName15218
+15219,15219,"FirstName15219 MiddleName15219",LastName15219
+15220,15220,"FirstName15220 MiddleName15220",LastName15220
+15221,15221,"FirstName15221 MiddleName15221",LastName15221
+15222,15222,"FirstName15222 MiddleName15222",LastName15222
+15223,15223,"FirstName15223 MiddleName15223",LastName15223
+15224,15224,"FirstName15224 MiddleName15224",LastName15224
+15225,15225,"FirstName15225 MiddleName15225",LastName15225
+15226,15226,"FirstName15226 MiddleName15226",LastName15226
+15227,15227,"FirstName15227 MiddleName15227",LastName15227
+15228,15228,"FirstName15228 MiddleName15228",LastName15228
+15229,15229,"FirstName15229 MiddleName15229",LastName15229
+15230,15230,"FirstName15230 MiddleName15230",LastName15230
+15231,15231,"FirstName15231 MiddleName15231",LastName15231
+15232,15232,"FirstName15232 MiddleName15232",LastName15232
+15233,15233,"FirstName15233 MiddleName15233",LastName15233
+15234,15234,"FirstName15234 MiddleName15234",LastName15234
+15235,15235,"FirstName15235 MiddleName15235",LastName15235
+15236,15236,"FirstName15236 MiddleName15236",LastName15236
+15237,15237,"FirstName15237 MiddleName15237",LastName15237
+15238,15238,"FirstName15238 MiddleName15238",LastName15238
+15239,15239,"FirstName15239 MiddleName15239",LastName15239
+15240,15240,"FirstName15240 MiddleName15240",LastName15240
+15241,15241,"FirstName15241 MiddleName15241",LastName15241
+15242,15242,"FirstName15242 MiddleName15242",LastName15242
+15243,15243,"FirstName15243 MiddleName15243",LastName15243
+15244,15244,"FirstName15244 MiddleName15244",LastName15244
+15245,15245,"FirstName15245 MiddleName15245",LastName15245
+15246,15246,"FirstName15246 MiddleName15246",LastName15246
+15247,15247,"FirstName15247 MiddleName15247",LastName15247
+15248,15248,"FirstName15248 MiddleName15248",LastName15248
+15249,15249,"FirstName15249 MiddleName15249",LastName15249
+15250,15250,"FirstName15250 MiddleName15250",LastName15250
+15251,15251,"FirstName15251 MiddleName15251",LastName15251
+15252,15252,"FirstName15252 MiddleName15252",LastName15252
+15253,15253,"FirstName15253 MiddleName15253",LastName15253
+15254,15254,"FirstName15254 MiddleName15254",LastName15254
+15255,15255,"FirstName15255 MiddleName15255",LastName15255
+15256,15256,"FirstName15256 MiddleName15256",LastName15256
+15257,15257,"FirstName15257 MiddleName15257",LastName15257
+15258,15258,"FirstName15258 MiddleName15258",LastName15258
+15259,15259,"FirstName15259 MiddleName15259",LastName15259
+15260,15260,"FirstName15260 MiddleName15260",LastName15260
+15261,15261,"FirstName15261 MiddleName15261",LastName15261
+15262,15262,"FirstName15262 MiddleName15262",LastName15262
+15263,15263,"FirstName15263 MiddleName15263",LastName15263
+15264,15264,"FirstName15264 MiddleName15264",LastName15264
+15265,15265,"FirstName15265 MiddleName15265",LastName15265
+15266,15266,"FirstName15266 MiddleName15266",LastName15266
+15267,15267,"FirstName15267 MiddleName15267",LastName15267
+15268,15268,"FirstName15268 MiddleName15268",LastName15268
+15269,15269,"FirstName15269 MiddleName15269",LastName15269
+15270,15270,"FirstName15270 MiddleName15270",LastName15270
+15271,15271,"FirstName15271 MiddleName15271",LastName15271
+15272,15272,"FirstName15272 MiddleName15272",LastName15272
+15273,15273,"FirstName15273 MiddleName15273",LastName15273
+15274,15274,"FirstName15274 MiddleName15274",LastName15274
+15275,15275,"FirstName15275 MiddleName15275",LastName15275
+15276,15276,"FirstName15276 MiddleName15276",LastName15276
+15277,15277,"FirstName15277 MiddleName15277",LastName15277
+15278,15278,"FirstName15278 MiddleName15278",LastName15278
+15279,15279,"FirstName15279 MiddleName15279",LastName15279
+15280,15280,"FirstName15280 MiddleName15280",LastName15280
+15281,15281,"FirstName15281 MiddleName15281",LastName15281
+15282,15282,"FirstName15282 MiddleName15282",LastName15282
+15283,15283,"FirstName15283 MiddleName15283",LastName15283
+15284,15284,"FirstName15284 MiddleName15284",LastName15284
+15285,15285,"FirstName15285 MiddleName15285",LastName15285
+15286,15286,"FirstName15286 MiddleName15286",LastName15286
+15287,15287,"FirstName15287 MiddleName15287",LastName15287
+15288,15288,"FirstName15288 MiddleName15288",LastName15288
+15289,15289,"FirstName15289 MiddleName15289",LastName15289
+15290,15290,"FirstName15290 MiddleName15290",LastName15290
+15291,15291,"FirstName15291 MiddleName15291",LastName15291
+15292,15292,"FirstName15292 MiddleName15292",LastName15292
+15293,15293,"FirstName15293 MiddleName15293",LastName15293
+15294,15294,"FirstName15294 MiddleName15294",LastName15294
+15295,15295,"FirstName15295 MiddleName15295",LastName15295
+15296,15296,"FirstName15296 MiddleName15296",LastName15296
+15297,15297,"FirstName15297 MiddleName15297",LastName15297
+15298,15298,"FirstName15298 MiddleName15298",LastName15298
+15299,15299,"FirstName15299 MiddleName15299",LastName15299
+15300,15300,"FirstName15300 MiddleName15300",LastName15300
+15301,15301,"FirstName15301 MiddleName15301",LastName15301
+15302,15302,"FirstName15302 MiddleName15302",LastName15302
+15303,15303,"FirstName15303 MiddleName15303",LastName15303
+15304,15304,"FirstName15304 MiddleName15304",LastName15304
+15305,15305,"FirstName15305 MiddleName15305",LastName15305
+15306,15306,"FirstName15306 MiddleName15306",LastName15306
+15307,15307,"FirstName15307 MiddleName15307",LastName15307
+15308,15308,"FirstName15308 MiddleName15308",LastName15308
+15309,15309,"FirstName15309 MiddleName15309",LastName15309
+15310,15310,"FirstName15310 MiddleName15310",LastName15310
+15311,15311,"FirstName15311 MiddleName15311",LastName15311
+15312,15312,"FirstName15312 MiddleName15312",LastName15312
+15313,15313,"FirstName15313 MiddleName15313",LastName15313
+15314,15314,"FirstName15314 MiddleName15314",LastName15314
+15315,15315,"FirstName15315 MiddleName15315",LastName15315
+15316,15316,"FirstName15316 MiddleName15316",LastName15316
+15317,15317,"FirstName15317 MiddleName15317",LastName15317
+15318,15318,"FirstName15318 MiddleName15318",LastName15318
+15319,15319,"FirstName15319 MiddleName15319",LastName15319
+15320,15320,"FirstName15320 MiddleName15320",LastName15320
+15321,15321,"FirstName15321 MiddleName15321",LastName15321
+15322,15322,"FirstName15322 MiddleName15322",LastName15322
+15323,15323,"FirstName15323 MiddleName15323",LastName15323
+15324,15324,"FirstName15324 MiddleName15324",LastName15324
+15325,15325,"FirstName15325 MiddleName15325",LastName15325
+15326,15326,"FirstName15326 MiddleName15326",LastName15326
+15327,15327,"FirstName15327 MiddleName15327",LastName15327
+15328,15328,"FirstName15328 MiddleName15328",LastName15328
+15329,15329,"FirstName15329 MiddleName15329",LastName15329
+15330,15330,"FirstName15330 MiddleName15330",LastName15330
+15331,15331,"FirstName15331 MiddleName15331",LastName15331
+15332,15332,"FirstName15332 MiddleName15332",LastName15332
+15333,15333,"FirstName15333 MiddleName15333",LastName15333
+15334,15334,"FirstName15334 MiddleName15334",LastName15334
+15335,15335,"FirstName15335 MiddleName15335",LastName15335
+15336,15336,"FirstName15336 MiddleName15336",LastName15336
+15337,15337,"FirstName15337 MiddleName15337",LastName15337
+15338,15338,"FirstName15338 MiddleName15338",LastName15338
+15339,15339,"FirstName15339 MiddleName15339",LastName15339
+15340,15340,"FirstName15340 MiddleName15340",LastName15340
+15341,15341,"FirstName15341 MiddleName15341",LastName15341
+15342,15342,"FirstName15342 MiddleName15342",LastName15342
+15343,15343,"FirstName15343 MiddleName15343",LastName15343
+15344,15344,"FirstName15344 MiddleName15344",LastName15344
+15345,15345,"FirstName15345 MiddleName15345",LastName15345
+15346,15346,"FirstName15346 MiddleName15346",LastName15346
+15347,15347,"FirstName15347 MiddleName15347",LastName15347
+15348,15348,"FirstName15348 MiddleName15348",LastName15348
+15349,15349,"FirstName15349 MiddleName15349",LastName15349
+15350,15350,"FirstName15350 MiddleName15350",LastName15350
+15351,15351,"FirstName15351 MiddleName15351",LastName15351
+15352,15352,"FirstName15352 MiddleName15352",LastName15352
+15353,15353,"FirstName15353 MiddleName15353",LastName15353
+15354,15354,"FirstName15354 MiddleName15354",LastName15354
+15355,15355,"FirstName15355 MiddleName15355",LastName15355
+15356,15356,"FirstName15356 MiddleName15356",LastName15356
+15357,15357,"FirstName15357 MiddleName15357",LastName15357
+15358,15358,"FirstName15358 MiddleName15358",LastName15358
+15359,15359,"FirstName15359 MiddleName15359",LastName15359
+15360,15360,"FirstName15360 MiddleName15360",LastName15360
+15361,15361,"FirstName15361 MiddleName15361",LastName15361
+15362,15362,"FirstName15362 MiddleName15362",LastName15362
+15363,15363,"FirstName15363 MiddleName15363",LastName15363
+15364,15364,"FirstName15364 MiddleName15364",LastName15364
+15365,15365,"FirstName15365 MiddleName15365",LastName15365
+15366,15366,"FirstName15366 MiddleName15366",LastName15366
+15367,15367,"FirstName15367 MiddleName15367",LastName15367
+15368,15368,"FirstName15368 MiddleName15368",LastName15368
+15369,15369,"FirstName15369 MiddleName15369",LastName15369
+15370,15370,"FirstName15370 MiddleName15370",LastName15370
+15371,15371,"FirstName15371 MiddleName15371",LastName15371
+15372,15372,"FirstName15372 MiddleName15372",LastName15372
+15373,15373,"FirstName15373 MiddleName15373",LastName15373
+15374,15374,"FirstName15374 MiddleName15374",LastName15374
+15375,15375,"FirstName15375 MiddleName15375",LastName15375
+15376,15376,"FirstName15376 MiddleName15376",LastName15376
+15377,15377,"FirstName15377 MiddleName15377",LastName15377
+15378,15378,"FirstName15378 MiddleName15378",LastName15378
+15379,15379,"FirstName15379 MiddleName15379",LastName15379
+15380,15380,"FirstName15380 MiddleName15380",LastName15380
+15381,15381,"FirstName15381 MiddleName15381",LastName15381
+15382,15382,"FirstName15382 MiddleName15382",LastName15382
+15383,15383,"FirstName15383 MiddleName15383",LastName15383
+15384,15384,"FirstName15384 MiddleName15384",LastName15384
+15385,15385,"FirstName15385 MiddleName15385",LastName15385
+15386,15386,"FirstName15386 MiddleName15386",LastName15386
+15387,15387,"FirstName15387 MiddleName15387",LastName15387
+15388,15388,"FirstName15388 MiddleName15388",LastName15388
+15389,15389,"FirstName15389 MiddleName15389",LastName15389
+15390,15390,"FirstName15390 MiddleName15390",LastName15390
+15391,15391,"FirstName15391 MiddleName15391",LastName15391
+15392,15392,"FirstName15392 MiddleName15392",LastName15392
+15393,15393,"FirstName15393 MiddleName15393",LastName15393
+15394,15394,"FirstName15394 MiddleName15394",LastName15394
+15395,15395,"FirstName15395 MiddleName15395",LastName15395
+15396,15396,"FirstName15396 MiddleName15396",LastName15396
+15397,15397,"FirstName15397 MiddleName15397",LastName15397
+15398,15398,"FirstName15398 MiddleName15398",LastName15398
+15399,15399,"FirstName15399 MiddleName15399",LastName15399
+15400,15400,"FirstName15400 MiddleName15400",LastName15400
+15401,15401,"FirstName15401 MiddleName15401",LastName15401
+15402,15402,"FirstName15402 MiddleName15402",LastName15402
+15403,15403,"FirstName15403 MiddleName15403",LastName15403
+15404,15404,"FirstName15404 MiddleName15404",LastName15404
+15405,15405,"FirstName15405 MiddleName15405",LastName15405
+15406,15406,"FirstName15406 MiddleName15406",LastName15406
+15407,15407,"FirstName15407 MiddleName15407",LastName15407
+15408,15408,"FirstName15408 MiddleName15408",LastName15408
+15409,15409,"FirstName15409 MiddleName15409",LastName15409
+15410,15410,"FirstName15410 MiddleName15410",LastName15410
+15411,15411,"FirstName15411 MiddleName15411",LastName15411
+15412,15412,"FirstName15412 MiddleName15412",LastName15412
+15413,15413,"FirstName15413 MiddleName15413",LastName15413
+15414,15414,"FirstName15414 MiddleName15414",LastName15414
+15415,15415,"FirstName15415 MiddleName15415",LastName15415
+15416,15416,"FirstName15416 MiddleName15416",LastName15416
+15417,15417,"FirstName15417 MiddleName15417",LastName15417
+15418,15418,"FirstName15418 MiddleName15418",LastName15418
+15419,15419,"FirstName15419 MiddleName15419",LastName15419
+15420,15420,"FirstName15420 MiddleName15420",LastName15420
+15421,15421,"FirstName15421 MiddleName15421",LastName15421
+15422,15422,"FirstName15422 MiddleName15422",LastName15422
+15423,15423,"FirstName15423 MiddleName15423",LastName15423
+15424,15424,"FirstName15424 MiddleName15424",LastName15424
+15425,15425,"FirstName15425 MiddleName15425",LastName15425
+15426,15426,"FirstName15426 MiddleName15426",LastName15426
+15427,15427,"FirstName15427 MiddleName15427",LastName15427
+15428,15428,"FirstName15428 MiddleName15428",LastName15428
+15429,15429,"FirstName15429 MiddleName15429",LastName15429
+15430,15430,"FirstName15430 MiddleName15430",LastName15430
+15431,15431,"FirstName15431 MiddleName15431",LastName15431
+15432,15432,"FirstName15432 MiddleName15432",LastName15432
+15433,15433,"FirstName15433 MiddleName15433",LastName15433
+15434,15434,"FirstName15434 MiddleName15434",LastName15434
+15435,15435,"FirstName15435 MiddleName15435",LastName15435
+15436,15436,"FirstName15436 MiddleName15436",LastName15436
+15437,15437,"FirstName15437 MiddleName15437",LastName15437
+15438,15438,"FirstName15438 MiddleName15438",LastName15438
+15439,15439,"FirstName15439 MiddleName15439",LastName15439
+15440,15440,"FirstName15440 MiddleName15440",LastName15440
+15441,15441,"FirstName15441 MiddleName15441",LastName15441
+15442,15442,"FirstName15442 MiddleName15442",LastName15442
+15443,15443,"FirstName15443 MiddleName15443",LastName15443
+15444,15444,"FirstName15444 MiddleName15444",LastName15444
+15445,15445,"FirstName15445 MiddleName15445",LastName15445
+15446,15446,"FirstName15446 MiddleName15446",LastName15446
+15447,15447,"FirstName15447 MiddleName15447",LastName15447
+15448,15448,"FirstName15448 MiddleName15448",LastName15448
+15449,15449,"FirstName15449 MiddleName15449",LastName15449
+15450,15450,"FirstName15450 MiddleName15450",LastName15450
+15451,15451,"FirstName15451 MiddleName15451",LastName15451
+15452,15452,"FirstName15452 MiddleName15452",LastName15452
+15453,15453,"FirstName15453 MiddleName15453",LastName15453
+15454,15454,"FirstName15454 MiddleName15454",LastName15454
+15455,15455,"FirstName15455 MiddleName15455",LastName15455
+15456,15456,"FirstName15456 MiddleName15456",LastName15456
+15457,15457,"FirstName15457 MiddleName15457",LastName15457
+15458,15458,"FirstName15458 MiddleName15458",LastName15458
+15459,15459,"FirstName15459 MiddleName15459",LastName15459
+15460,15460,"FirstName15460 MiddleName15460",LastName15460
+15461,15461,"FirstName15461 MiddleName15461",LastName15461
+15462,15462,"FirstName15462 MiddleName15462",LastName15462
+15463,15463,"FirstName15463 MiddleName15463",LastName15463
+15464,15464,"FirstName15464 MiddleName15464",LastName15464
+15465,15465,"FirstName15465 MiddleName15465",LastName15465
+15466,15466,"FirstName15466 MiddleName15466",LastName15466
+15467,15467,"FirstName15467 MiddleName15467",LastName15467
+15468,15468,"FirstName15468 MiddleName15468",LastName15468
+15469,15469,"FirstName15469 MiddleName15469",LastName15469
+15470,15470,"FirstName15470 MiddleName15470",LastName15470
+15471,15471,"FirstName15471 MiddleName15471",LastName15471
+15472,15472,"FirstName15472 MiddleName15472",LastName15472
+15473,15473,"FirstName15473 MiddleName15473",LastName15473
+15474,15474,"FirstName15474 MiddleName15474",LastName15474
+15475,15475,"FirstName15475 MiddleName15475",LastName15475
+15476,15476,"FirstName15476 MiddleName15476",LastName15476
+15477,15477,"FirstName15477 MiddleName15477",LastName15477
+15478,15478,"FirstName15478 MiddleName15478",LastName15478
+15479,15479,"FirstName15479 MiddleName15479",LastName15479
+15480,15480,"FirstName15480 MiddleName15480",LastName15480
+15481,15481,"FirstName15481 MiddleName15481",LastName15481
+15482,15482,"FirstName15482 MiddleName15482",LastName15482
+15483,15483,"FirstName15483 MiddleName15483",LastName15483
+15484,15484,"FirstName15484 MiddleName15484",LastName15484
+15485,15485,"FirstName15485 MiddleName15485",LastName15485
+15486,15486,"FirstName15486 MiddleName15486",LastName15486
+15487,15487,"FirstName15487 MiddleName15487",LastName15487
+15488,15488,"FirstName15488 MiddleName15488",LastName15488
+15489,15489,"FirstName15489 MiddleName15489",LastName15489
+15490,15490,"FirstName15490 MiddleName15490",LastName15490
+15491,15491,"FirstName15491 MiddleName15491",LastName15491
+15492,15492,"FirstName15492 MiddleName15492",LastName15492
+15493,15493,"FirstName15493 MiddleName15493",LastName15493
+15494,15494,"FirstName15494 MiddleName15494",LastName15494
+15495,15495,"FirstName15495 MiddleName15495",LastName15495
+15496,15496,"FirstName15496 MiddleName15496",LastName15496
+15497,15497,"FirstName15497 MiddleName15497",LastName15497
+15498,15498,"FirstName15498 MiddleName15498",LastName15498
+15499,15499,"FirstName15499 MiddleName15499",LastName15499
+15500,15500,"FirstName15500 MiddleName15500",LastName15500
+15501,15501,"FirstName15501 MiddleName15501",LastName15501
+15502,15502,"FirstName15502 MiddleName15502",LastName15502
+15503,15503,"FirstName15503 MiddleName15503",LastName15503
+15504,15504,"FirstName15504 MiddleName15504",LastName15504
+15505,15505,"FirstName15505 MiddleName15505",LastName15505
+15506,15506,"FirstName15506 MiddleName15506",LastName15506
+15507,15507,"FirstName15507 MiddleName15507",LastName15507
+15508,15508,"FirstName15508 MiddleName15508",LastName15508
+15509,15509,"FirstName15509 MiddleName15509",LastName15509
+15510,15510,"FirstName15510 MiddleName15510",LastName15510
+15511,15511,"FirstName15511 MiddleName15511",LastName15511
+15512,15512,"FirstName15512 MiddleName15512",LastName15512
+15513,15513,"FirstName15513 MiddleName15513",LastName15513
+15514,15514,"FirstName15514 MiddleName15514",LastName15514
+15515,15515,"FirstName15515 MiddleName15515",LastName15515
+15516,15516,"FirstName15516 MiddleName15516",LastName15516
+15517,15517,"FirstName15517 MiddleName15517",LastName15517
+15518,15518,"FirstName15518 MiddleName15518",LastName15518
+15519,15519,"FirstName15519 MiddleName15519",LastName15519
+15520,15520,"FirstName15520 MiddleName15520",LastName15520
+15521,15521,"FirstName15521 MiddleName15521",LastName15521
+15522,15522,"FirstName15522 MiddleName15522",LastName15522
+15523,15523,"FirstName15523 MiddleName15523",LastName15523
+15524,15524,"FirstName15524 MiddleName15524",LastName15524
+15525,15525,"FirstName15525 MiddleName15525",LastName15525
+15526,15526,"FirstName15526 MiddleName15526",LastName15526
+15527,15527,"FirstName15527 MiddleName15527",LastName15527
+15528,15528,"FirstName15528 MiddleName15528",LastName15528
+15529,15529,"FirstName15529 MiddleName15529",LastName15529
+15530,15530,"FirstName15530 MiddleName15530",LastName15530
+15531,15531,"FirstName15531 MiddleName15531",LastName15531
+15532,15532,"FirstName15532 MiddleName15532",LastName15532
+15533,15533,"FirstName15533 MiddleName15533",LastName15533
+15534,15534,"FirstName15534 MiddleName15534",LastName15534
+15535,15535,"FirstName15535 MiddleName15535",LastName15535
+15536,15536,"FirstName15536 MiddleName15536",LastName15536
+15537,15537,"FirstName15537 MiddleName15537",LastName15537
+15538,15538,"FirstName15538 MiddleName15538",LastName15538
+15539,15539,"FirstName15539 MiddleName15539",LastName15539
+15540,15540,"FirstName15540 MiddleName15540",LastName15540
+15541,15541,"FirstName15541 MiddleName15541",LastName15541
+15542,15542,"FirstName15542 MiddleName15542",LastName15542
+15543,15543,"FirstName15543 MiddleName15543",LastName15543
+15544,15544,"FirstName15544 MiddleName15544",LastName15544
+15545,15545,"FirstName15545 MiddleName15545",LastName15545
+15546,15546,"FirstName15546 MiddleName15546",LastName15546
+15547,15547,"FirstName15547 MiddleName15547",LastName15547
+15548,15548,"FirstName15548 MiddleName15548",LastName15548
+15549,15549,"FirstName15549 MiddleName15549",LastName15549
+15550,15550,"FirstName15550 MiddleName15550",LastName15550
+15551,15551,"FirstName15551 MiddleName15551",LastName15551
+15552,15552,"FirstName15552 MiddleName15552",LastName15552
+15553,15553,"FirstName15553 MiddleName15553",LastName15553
+15554,15554,"FirstName15554 MiddleName15554",LastName15554
+15555,15555,"FirstName15555 MiddleName15555",LastName15555
+15556,15556,"FirstName15556 MiddleName15556",LastName15556
+15557,15557,"FirstName15557 MiddleName15557",LastName15557
+15558,15558,"FirstName15558 MiddleName15558",LastName15558
+15559,15559,"FirstName15559 MiddleName15559",LastName15559
+15560,15560,"FirstName15560 MiddleName15560",LastName15560
+15561,15561,"FirstName15561 MiddleName15561",LastName15561
+15562,15562,"FirstName15562 MiddleName15562",LastName15562
+15563,15563,"FirstName15563 MiddleName15563",LastName15563
+15564,15564,"FirstName15564 MiddleName15564",LastName15564
+15565,15565,"FirstName15565 MiddleName15565",LastName15565
+15566,15566,"FirstName15566 MiddleName15566",LastName15566
+15567,15567,"FirstName15567 MiddleName15567",LastName15567
+15568,15568,"FirstName15568 MiddleName15568",LastName15568
+15569,15569,"FirstName15569 MiddleName15569",LastName15569
+15570,15570,"FirstName15570 MiddleName15570",LastName15570
+15571,15571,"FirstName15571 MiddleName15571",LastName15571
+15572,15572,"FirstName15572 MiddleName15572",LastName15572
+15573,15573,"FirstName15573 MiddleName15573",LastName15573
+15574,15574,"FirstName15574 MiddleName15574",LastName15574
+15575,15575,"FirstName15575 MiddleName15575",LastName15575
+15576,15576,"FirstName15576 MiddleName15576",LastName15576
+15577,15577,"FirstName15577 MiddleName15577",LastName15577
+15578,15578,"FirstName15578 MiddleName15578",LastName15578
+15579,15579,"FirstName15579 MiddleName15579",LastName15579
+15580,15580,"FirstName15580 MiddleName15580",LastName15580
+15581,15581,"FirstName15581 MiddleName15581",LastName15581
+15582,15582,"FirstName15582 MiddleName15582",LastName15582
+15583,15583,"FirstName15583 MiddleName15583",LastName15583
+15584,15584,"FirstName15584 MiddleName15584",LastName15584
+15585,15585,"FirstName15585 MiddleName15585",LastName15585
+15586,15586,"FirstName15586 MiddleName15586",LastName15586
+15587,15587,"FirstName15587 MiddleName15587",LastName15587
+15588,15588,"FirstName15588 MiddleName15588",LastName15588
+15589,15589,"FirstName15589 MiddleName15589",LastName15589
+15590,15590,"FirstName15590 MiddleName15590",LastName15590
+15591,15591,"FirstName15591 MiddleName15591",LastName15591
+15592,15592,"FirstName15592 MiddleName15592",LastName15592
+15593,15593,"FirstName15593 MiddleName15593",LastName15593
+15594,15594,"FirstName15594 MiddleName15594",LastName15594
+15595,15595,"FirstName15595 MiddleName15595",LastName15595
+15596,15596,"FirstName15596 MiddleName15596",LastName15596
+15597,15597,"FirstName15597 MiddleName15597",LastName15597
+15598,15598,"FirstName15598 MiddleName15598",LastName15598
+15599,15599,"FirstName15599 MiddleName15599",LastName15599
+15600,15600,"FirstName15600 MiddleName15600",LastName15600
+15601,15601,"FirstName15601 MiddleName15601",LastName15601
+15602,15602,"FirstName15602 MiddleName15602",LastName15602
+15603,15603,"FirstName15603 MiddleName15603",LastName15603
+15604,15604,"FirstName15604 MiddleName15604",LastName15604
+15605,15605,"FirstName15605 MiddleName15605",LastName15605
+15606,15606,"FirstName15606 MiddleName15606",LastName15606
+15607,15607,"FirstName15607 MiddleName15607",LastName15607
+15608,15608,"FirstName15608 MiddleName15608",LastName15608
+15609,15609,"FirstName15609 MiddleName15609",LastName15609
+15610,15610,"FirstName15610 MiddleName15610",LastName15610
+15611,15611,"FirstName15611 MiddleName15611",LastName15611
+15612,15612,"FirstName15612 MiddleName15612",LastName15612
+15613,15613,"FirstName15613 MiddleName15613",LastName15613
+15614,15614,"FirstName15614 MiddleName15614",LastName15614
+15615,15615,"FirstName15615 MiddleName15615",LastName15615
+15616,15616,"FirstName15616 MiddleName15616",LastName15616
+15617,15617,"FirstName15617 MiddleName15617",LastName15617
+15618,15618,"FirstName15618 MiddleName15618",LastName15618
+15619,15619,"FirstName15619 MiddleName15619",LastName15619
+15620,15620,"FirstName15620 MiddleName15620",LastName15620
+15621,15621,"FirstName15621 MiddleName15621",LastName15621
+15622,15622,"FirstName15622 MiddleName15622",LastName15622
+15623,15623,"FirstName15623 MiddleName15623",LastName15623
+15624,15624,"FirstName15624 MiddleName15624",LastName15624
+15625,15625,"FirstName15625 MiddleName15625",LastName15625
+15626,15626,"FirstName15626 MiddleName15626",LastName15626
+15627,15627,"FirstName15627 MiddleName15627",LastName15627
+15628,15628,"FirstName15628 MiddleName15628",LastName15628
+15629,15629,"FirstName15629 MiddleName15629",LastName15629
+15630,15630,"FirstName15630 MiddleName15630",LastName15630
+15631,15631,"FirstName15631 MiddleName15631",LastName15631
+15632,15632,"FirstName15632 MiddleName15632",LastName15632
+15633,15633,"FirstName15633 MiddleName15633",LastName15633
+15634,15634,"FirstName15634 MiddleName15634",LastName15634
+15635,15635,"FirstName15635 MiddleName15635",LastName15635
+15636,15636,"FirstName15636 MiddleName15636",LastName15636
+15637,15637,"FirstName15637 MiddleName15637",LastName15637
+15638,15638,"FirstName15638 MiddleName15638",LastName15638
+15639,15639,"FirstName15639 MiddleName15639",LastName15639
+15640,15640,"FirstName15640 MiddleName15640",LastName15640
+15641,15641,"FirstName15641 MiddleName15641",LastName15641
+15642,15642,"FirstName15642 MiddleName15642",LastName15642
+15643,15643,"FirstName15643 MiddleName15643",LastName15643
+15644,15644,"FirstName15644 MiddleName15644",LastName15644
+15645,15645,"FirstName15645 MiddleName15645",LastName15645
+15646,15646,"FirstName15646 MiddleName15646",LastName15646
+15647,15647,"FirstName15647 MiddleName15647",LastName15647
+15648,15648,"FirstName15648 MiddleName15648",LastName15648
+15649,15649,"FirstName15649 MiddleName15649",LastName15649
+15650,15650,"FirstName15650 MiddleName15650",LastName15650
+15651,15651,"FirstName15651 MiddleName15651",LastName15651
+15652,15652,"FirstName15652 MiddleName15652",LastName15652
+15653,15653,"FirstName15653 MiddleName15653",LastName15653
+15654,15654,"FirstName15654 MiddleName15654",LastName15654
+15655,15655,"FirstName15655 MiddleName15655",LastName15655
+15656,15656,"FirstName15656 MiddleName15656",LastName15656
+15657,15657,"FirstName15657 MiddleName15657",LastName15657
+15658,15658,"FirstName15658 MiddleName15658",LastName15658
+15659,15659,"FirstName15659 MiddleName15659",LastName15659
+15660,15660,"FirstName15660 MiddleName15660",LastName15660
+15661,15661,"FirstName15661 MiddleName15661",LastName15661
+15662,15662,"FirstName15662 MiddleName15662",LastName15662
+15663,15663,"FirstName15663 MiddleName15663",LastName15663
+15664,15664,"FirstName15664 MiddleName15664",LastName15664
+15665,15665,"FirstName15665 MiddleName15665",LastName15665
+15666,15666,"FirstName15666 MiddleName15666",LastName15666
+15667,15667,"FirstName15667 MiddleName15667",LastName15667
+15668,15668,"FirstName15668 MiddleName15668",LastName15668
+15669,15669,"FirstName15669 MiddleName15669",LastName15669
+15670,15670,"FirstName15670 MiddleName15670",LastName15670
+15671,15671,"FirstName15671 MiddleName15671",LastName15671
+15672,15672,"FirstName15672 MiddleName15672",LastName15672
+15673,15673,"FirstName15673 MiddleName15673",LastName15673
+15674,15674,"FirstName15674 MiddleName15674",LastName15674
+15675,15675,"FirstName15675 MiddleName15675",LastName15675
+15676,15676,"FirstName15676 MiddleName15676",LastName15676
+15677,15677,"FirstName15677 MiddleName15677",LastName15677
+15678,15678,"FirstName15678 MiddleName15678",LastName15678
+15679,15679,"FirstName15679 MiddleName15679",LastName15679
+15680,15680,"FirstName15680 MiddleName15680",LastName15680
+15681,15681,"FirstName15681 MiddleName15681",LastName15681
+15682,15682,"FirstName15682 MiddleName15682",LastName15682
+15683,15683,"FirstName15683 MiddleName15683",LastName15683
+15684,15684,"FirstName15684 MiddleName15684",LastName15684
+15685,15685,"FirstName15685 MiddleName15685",LastName15685
+15686,15686,"FirstName15686 MiddleName15686",LastName15686
+15687,15687,"FirstName15687 MiddleName15687",LastName15687
+15688,15688,"FirstName15688 MiddleName15688",LastName15688
+15689,15689,"FirstName15689 MiddleName15689",LastName15689
+15690,15690,"FirstName15690 MiddleName15690",LastName15690
+15691,15691,"FirstName15691 MiddleName15691",LastName15691
+15692,15692,"FirstName15692 MiddleName15692",LastName15692
+15693,15693,"FirstName15693 MiddleName15693",LastName15693
+15694,15694,"FirstName15694 MiddleName15694",LastName15694
+15695,15695,"FirstName15695 MiddleName15695",LastName15695
+15696,15696,"FirstName15696 MiddleName15696",LastName15696
+15697,15697,"FirstName15697 MiddleName15697",LastName15697
+15698,15698,"FirstName15698 MiddleName15698",LastName15698
+15699,15699,"FirstName15699 MiddleName15699",LastName15699
+15700,15700,"FirstName15700 MiddleName15700",LastName15700
+15701,15701,"FirstName15701 MiddleName15701",LastName15701
+15702,15702,"FirstName15702 MiddleName15702",LastName15702
+15703,15703,"FirstName15703 MiddleName15703",LastName15703
+15704,15704,"FirstName15704 MiddleName15704",LastName15704
+15705,15705,"FirstName15705 MiddleName15705",LastName15705
+15706,15706,"FirstName15706 MiddleName15706",LastName15706
+15707,15707,"FirstName15707 MiddleName15707",LastName15707
+15708,15708,"FirstName15708 MiddleName15708",LastName15708
+15709,15709,"FirstName15709 MiddleName15709",LastName15709
+15710,15710,"FirstName15710 MiddleName15710",LastName15710
+15711,15711,"FirstName15711 MiddleName15711",LastName15711
+15712,15712,"FirstName15712 MiddleName15712",LastName15712
+15713,15713,"FirstName15713 MiddleName15713",LastName15713
+15714,15714,"FirstName15714 MiddleName15714",LastName15714
+15715,15715,"FirstName15715 MiddleName15715",LastName15715
+15716,15716,"FirstName15716 MiddleName15716",LastName15716
+15717,15717,"FirstName15717 MiddleName15717",LastName15717
+15718,15718,"FirstName15718 MiddleName15718",LastName15718
+15719,15719,"FirstName15719 MiddleName15719",LastName15719
+15720,15720,"FirstName15720 MiddleName15720",LastName15720
+15721,15721,"FirstName15721 MiddleName15721",LastName15721
+15722,15722,"FirstName15722 MiddleName15722",LastName15722
+15723,15723,"FirstName15723 MiddleName15723",LastName15723
+15724,15724,"FirstName15724 MiddleName15724",LastName15724
+15725,15725,"FirstName15725 MiddleName15725",LastName15725
+15726,15726,"FirstName15726 MiddleName15726",LastName15726
+15727,15727,"FirstName15727 MiddleName15727",LastName15727
+15728,15728,"FirstName15728 MiddleName15728",LastName15728
+15729,15729,"FirstName15729 MiddleName15729",LastName15729
+15730,15730,"FirstName15730 MiddleName15730",LastName15730
+15731,15731,"FirstName15731 MiddleName15731",LastName15731
+15732,15732,"FirstName15732 MiddleName15732",LastName15732
+15733,15733,"FirstName15733 MiddleName15733",LastName15733
+15734,15734,"FirstName15734 MiddleName15734",LastName15734
+15735,15735,"FirstName15735 MiddleName15735",LastName15735
+15736,15736,"FirstName15736 MiddleName15736",LastName15736
+15737,15737,"FirstName15737 MiddleName15737",LastName15737
+15738,15738,"FirstName15738 MiddleName15738",LastName15738
+15739,15739,"FirstName15739 MiddleName15739",LastName15739
+15740,15740,"FirstName15740 MiddleName15740",LastName15740
+15741,15741,"FirstName15741 MiddleName15741",LastName15741
+15742,15742,"FirstName15742 MiddleName15742",LastName15742
+15743,15743,"FirstName15743 MiddleName15743",LastName15743 +15744,15744,"FirstName15744 MiddleName15744",LastName15744 +15745,15745,"FirstName15745 MiddleName15745",LastName15745 +15746,15746,"FirstName15746 MiddleName15746",LastName15746 +15747,15747,"FirstName15747 MiddleName15747",LastName15747 +15748,15748,"FirstName15748 MiddleName15748",LastName15748 +15749,15749,"FirstName15749 MiddleName15749",LastName15749 +15750,15750,"FirstName15750 MiddleName15750",LastName15750 +15751,15751,"FirstName15751 MiddleName15751",LastName15751 +15752,15752,"FirstName15752 MiddleName15752",LastName15752 +15753,15753,"FirstName15753 MiddleName15753",LastName15753 +15754,15754,"FirstName15754 MiddleName15754",LastName15754 +15755,15755,"FirstName15755 MiddleName15755",LastName15755 +15756,15756,"FirstName15756 MiddleName15756",LastName15756 +15757,15757,"FirstName15757 MiddleName15757",LastName15757 +15758,15758,"FirstName15758 MiddleName15758",LastName15758 +15759,15759,"FirstName15759 MiddleName15759",LastName15759 +15760,15760,"FirstName15760 MiddleName15760",LastName15760 +15761,15761,"FirstName15761 MiddleName15761",LastName15761 +15762,15762,"FirstName15762 MiddleName15762",LastName15762 +15763,15763,"FirstName15763 MiddleName15763",LastName15763 +15764,15764,"FirstName15764 MiddleName15764",LastName15764 +15765,15765,"FirstName15765 MiddleName15765",LastName15765 +15766,15766,"FirstName15766 MiddleName15766",LastName15766 +15767,15767,"FirstName15767 MiddleName15767",LastName15767 +15768,15768,"FirstName15768 MiddleName15768",LastName15768 +15769,15769,"FirstName15769 MiddleName15769",LastName15769 +15770,15770,"FirstName15770 MiddleName15770",LastName15770 +15771,15771,"FirstName15771 MiddleName15771",LastName15771 +15772,15772,"FirstName15772 MiddleName15772",LastName15772 +15773,15773,"FirstName15773 MiddleName15773",LastName15773 +15774,15774,"FirstName15774 MiddleName15774",LastName15774 +15775,15775,"FirstName15775 MiddleName15775",LastName15775 
+15776,15776,"FirstName15776 MiddleName15776",LastName15776 +15777,15777,"FirstName15777 MiddleName15777",LastName15777 +15778,15778,"FirstName15778 MiddleName15778",LastName15778 +15779,15779,"FirstName15779 MiddleName15779",LastName15779 +15780,15780,"FirstName15780 MiddleName15780",LastName15780 +15781,15781,"FirstName15781 MiddleName15781",LastName15781 +15782,15782,"FirstName15782 MiddleName15782",LastName15782 +15783,15783,"FirstName15783 MiddleName15783",LastName15783 +15784,15784,"FirstName15784 MiddleName15784",LastName15784 +15785,15785,"FirstName15785 MiddleName15785",LastName15785 +15786,15786,"FirstName15786 MiddleName15786",LastName15786 +15787,15787,"FirstName15787 MiddleName15787",LastName15787 +15788,15788,"FirstName15788 MiddleName15788",LastName15788 +15789,15789,"FirstName15789 MiddleName15789",LastName15789 +15790,15790,"FirstName15790 MiddleName15790",LastName15790 +15791,15791,"FirstName15791 MiddleName15791",LastName15791 +15792,15792,"FirstName15792 MiddleName15792",LastName15792 +15793,15793,"FirstName15793 MiddleName15793",LastName15793 +15794,15794,"FirstName15794 MiddleName15794",LastName15794 +15795,15795,"FirstName15795 MiddleName15795",LastName15795 +15796,15796,"FirstName15796 MiddleName15796",LastName15796 +15797,15797,"FirstName15797 MiddleName15797",LastName15797 +15798,15798,"FirstName15798 MiddleName15798",LastName15798 +15799,15799,"FirstName15799 MiddleName15799",LastName15799 +15800,15800,"FirstName15800 MiddleName15800",LastName15800 +15801,15801,"FirstName15801 MiddleName15801",LastName15801 +15802,15802,"FirstName15802 MiddleName15802",LastName15802 +15803,15803,"FirstName15803 MiddleName15803",LastName15803 +15804,15804,"FirstName15804 MiddleName15804",LastName15804 +15805,15805,"FirstName15805 MiddleName15805",LastName15805 +15806,15806,"FirstName15806 MiddleName15806",LastName15806 +15807,15807,"FirstName15807 MiddleName15807",LastName15807 +15808,15808,"FirstName15808 MiddleName15808",LastName15808 
+15809,15809,"FirstName15809 MiddleName15809",LastName15809 +15810,15810,"FirstName15810 MiddleName15810",LastName15810 +15811,15811,"FirstName15811 MiddleName15811",LastName15811 +15812,15812,"FirstName15812 MiddleName15812",LastName15812 +15813,15813,"FirstName15813 MiddleName15813",LastName15813 +15814,15814,"FirstName15814 MiddleName15814",LastName15814 +15815,15815,"FirstName15815 MiddleName15815",LastName15815 +15816,15816,"FirstName15816 MiddleName15816",LastName15816 +15817,15817,"FirstName15817 MiddleName15817",LastName15817 +15818,15818,"FirstName15818 MiddleName15818",LastName15818 +15819,15819,"FirstName15819 MiddleName15819",LastName15819 +15820,15820,"FirstName15820 MiddleName15820",LastName15820 +15821,15821,"FirstName15821 MiddleName15821",LastName15821 +15822,15822,"FirstName15822 MiddleName15822",LastName15822 +15823,15823,"FirstName15823 MiddleName15823",LastName15823 +15824,15824,"FirstName15824 MiddleName15824",LastName15824 +15825,15825,"FirstName15825 MiddleName15825",LastName15825 +15826,15826,"FirstName15826 MiddleName15826",LastName15826 +15827,15827,"FirstName15827 MiddleName15827",LastName15827 +15828,15828,"FirstName15828 MiddleName15828",LastName15828 +15829,15829,"FirstName15829 MiddleName15829",LastName15829 +15830,15830,"FirstName15830 MiddleName15830",LastName15830 +15831,15831,"FirstName15831 MiddleName15831",LastName15831 +15832,15832,"FirstName15832 MiddleName15832",LastName15832 +15833,15833,"FirstName15833 MiddleName15833",LastName15833 +15834,15834,"FirstName15834 MiddleName15834",LastName15834 +15835,15835,"FirstName15835 MiddleName15835",LastName15835 +15836,15836,"FirstName15836 MiddleName15836",LastName15836 +15837,15837,"FirstName15837 MiddleName15837",LastName15837 +15838,15838,"FirstName15838 MiddleName15838",LastName15838 +15839,15839,"FirstName15839 MiddleName15839",LastName15839 +15840,15840,"FirstName15840 MiddleName15840",LastName15840 +15841,15841,"FirstName15841 MiddleName15841",LastName15841 
+15842,15842,"FirstName15842 MiddleName15842",LastName15842 +15843,15843,"FirstName15843 MiddleName15843",LastName15843 +15844,15844,"FirstName15844 MiddleName15844",LastName15844 +15845,15845,"FirstName15845 MiddleName15845",LastName15845 +15846,15846,"FirstName15846 MiddleName15846",LastName15846 +15847,15847,"FirstName15847 MiddleName15847",LastName15847 +15848,15848,"FirstName15848 MiddleName15848",LastName15848 +15849,15849,"FirstName15849 MiddleName15849",LastName15849 +15850,15850,"FirstName15850 MiddleName15850",LastName15850 +15851,15851,"FirstName15851 MiddleName15851",LastName15851 +15852,15852,"FirstName15852 MiddleName15852",LastName15852 +15853,15853,"FirstName15853 MiddleName15853",LastName15853 +15854,15854,"FirstName15854 MiddleName15854",LastName15854 +15855,15855,"FirstName15855 MiddleName15855",LastName15855 +15856,15856,"FirstName15856 MiddleName15856",LastName15856 +15857,15857,"FirstName15857 MiddleName15857",LastName15857 +15858,15858,"FirstName15858 MiddleName15858",LastName15858 +15859,15859,"FirstName15859 MiddleName15859",LastName15859 +15860,15860,"FirstName15860 MiddleName15860",LastName15860 +15861,15861,"FirstName15861 MiddleName15861",LastName15861 +15862,15862,"FirstName15862 MiddleName15862",LastName15862 +15863,15863,"FirstName15863 MiddleName15863",LastName15863 +15864,15864,"FirstName15864 MiddleName15864",LastName15864 +15865,15865,"FirstName15865 MiddleName15865",LastName15865 +15866,15866,"FirstName15866 MiddleName15866",LastName15866 +15867,15867,"FirstName15867 MiddleName15867",LastName15867 +15868,15868,"FirstName15868 MiddleName15868",LastName15868 +15869,15869,"FirstName15869 MiddleName15869",LastName15869 +15870,15870,"FirstName15870 MiddleName15870",LastName15870 +15871,15871,"FirstName15871 MiddleName15871",LastName15871 +15872,15872,"FirstName15872 MiddleName15872",LastName15872 +15873,15873,"FirstName15873 MiddleName15873",LastName15873 +15874,15874,"FirstName15874 MiddleName15874",LastName15874 
+15875,15875,"FirstName15875 MiddleName15875",LastName15875 +15876,15876,"FirstName15876 MiddleName15876",LastName15876 +15877,15877,"FirstName15877 MiddleName15877",LastName15877 +15878,15878,"FirstName15878 MiddleName15878",LastName15878 +15879,15879,"FirstName15879 MiddleName15879",LastName15879 +15880,15880,"FirstName15880 MiddleName15880",LastName15880 +15881,15881,"FirstName15881 MiddleName15881",LastName15881 +15882,15882,"FirstName15882 MiddleName15882",LastName15882 +15883,15883,"FirstName15883 MiddleName15883",LastName15883 +15884,15884,"FirstName15884 MiddleName15884",LastName15884 +15885,15885,"FirstName15885 MiddleName15885",LastName15885 +15886,15886,"FirstName15886 MiddleName15886",LastName15886 +15887,15887,"FirstName15887 MiddleName15887",LastName15887 +15888,15888,"FirstName15888 MiddleName15888",LastName15888 +15889,15889,"FirstName15889 MiddleName15889",LastName15889 +15890,15890,"FirstName15890 MiddleName15890",LastName15890 +15891,15891,"FirstName15891 MiddleName15891",LastName15891 +15892,15892,"FirstName15892 MiddleName15892",LastName15892 +15893,15893,"FirstName15893 MiddleName15893",LastName15893 +15894,15894,"FirstName15894 MiddleName15894",LastName15894 +15895,15895,"FirstName15895 MiddleName15895",LastName15895 +15896,15896,"FirstName15896 MiddleName15896",LastName15896 +15897,15897,"FirstName15897 MiddleName15897",LastName15897 +15898,15898,"FirstName15898 MiddleName15898",LastName15898 +15899,15899,"FirstName15899 MiddleName15899",LastName15899 +15900,15900,"FirstName15900 MiddleName15900",LastName15900 +15901,15901,"FirstName15901 MiddleName15901",LastName15901 +15902,15902,"FirstName15902 MiddleName15902",LastName15902 +15903,15903,"FirstName15903 MiddleName15903",LastName15903 +15904,15904,"FirstName15904 MiddleName15904",LastName15904 +15905,15905,"FirstName15905 MiddleName15905",LastName15905 +15906,15906,"FirstName15906 MiddleName15906",LastName15906 +15907,15907,"FirstName15907 MiddleName15907",LastName15907 
+15908,15908,"FirstName15908 MiddleName15908",LastName15908 +15909,15909,"FirstName15909 MiddleName15909",LastName15909 +15910,15910,"FirstName15910 MiddleName15910",LastName15910 +15911,15911,"FirstName15911 MiddleName15911",LastName15911 +15912,15912,"FirstName15912 MiddleName15912",LastName15912 +15913,15913,"FirstName15913 MiddleName15913",LastName15913 +15914,15914,"FirstName15914 MiddleName15914",LastName15914 +15915,15915,"FirstName15915 MiddleName15915",LastName15915 +15916,15916,"FirstName15916 MiddleName15916",LastName15916 +15917,15917,"FirstName15917 MiddleName15917",LastName15917 +15918,15918,"FirstName15918 MiddleName15918",LastName15918 +15919,15919,"FirstName15919 MiddleName15919",LastName15919 +15920,15920,"FirstName15920 MiddleName15920",LastName15920 +15921,15921,"FirstName15921 MiddleName15921",LastName15921 +15922,15922,"FirstName15922 MiddleName15922",LastName15922 +15923,15923,"FirstName15923 MiddleName15923",LastName15923 +15924,15924,"FirstName15924 MiddleName15924",LastName15924 +15925,15925,"FirstName15925 MiddleName15925",LastName15925 +15926,15926,"FirstName15926 MiddleName15926",LastName15926 +15927,15927,"FirstName15927 MiddleName15927",LastName15927 +15928,15928,"FirstName15928 MiddleName15928",LastName15928 +15929,15929,"FirstName15929 MiddleName15929",LastName15929 +15930,15930,"FirstName15930 MiddleName15930",LastName15930 +15931,15931,"FirstName15931 MiddleName15931",LastName15931 +15932,15932,"FirstName15932 MiddleName15932",LastName15932 +15933,15933,"FirstName15933 MiddleName15933",LastName15933 +15934,15934,"FirstName15934 MiddleName15934",LastName15934 +15935,15935,"FirstName15935 MiddleName15935",LastName15935 +15936,15936,"FirstName15936 MiddleName15936",LastName15936 +15937,15937,"FirstName15937 MiddleName15937",LastName15937 +15938,15938,"FirstName15938 MiddleName15938",LastName15938 +15939,15939,"FirstName15939 MiddleName15939",LastName15939 +15940,15940,"FirstName15940 MiddleName15940",LastName15940 
+15941,15941,"FirstName15941 MiddleName15941",LastName15941 +15942,15942,"FirstName15942 MiddleName15942",LastName15942 +15943,15943,"FirstName15943 MiddleName15943",LastName15943 +15944,15944,"FirstName15944 MiddleName15944",LastName15944 +15945,15945,"FirstName15945 MiddleName15945",LastName15945 +15946,15946,"FirstName15946 MiddleName15946",LastName15946 +15947,15947,"FirstName15947 MiddleName15947",LastName15947 +15948,15948,"FirstName15948 MiddleName15948",LastName15948 +15949,15949,"FirstName15949 MiddleName15949",LastName15949 +15950,15950,"FirstName15950 MiddleName15950",LastName15950 +15951,15951,"FirstName15951 MiddleName15951",LastName15951 +15952,15952,"FirstName15952 MiddleName15952",LastName15952 +15953,15953,"FirstName15953 MiddleName15953",LastName15953 +15954,15954,"FirstName15954 MiddleName15954",LastName15954 +15955,15955,"FirstName15955 MiddleName15955",LastName15955 +15956,15956,"FirstName15956 MiddleName15956",LastName15956 +15957,15957,"FirstName15957 MiddleName15957",LastName15957 +15958,15958,"FirstName15958 MiddleName15958",LastName15958 +15959,15959,"FirstName15959 MiddleName15959",LastName15959 +15960,15960,"FirstName15960 MiddleName15960",LastName15960 +15961,15961,"FirstName15961 MiddleName15961",LastName15961 +15962,15962,"FirstName15962 MiddleName15962",LastName15962 +15963,15963,"FirstName15963 MiddleName15963",LastName15963 +15964,15964,"FirstName15964 MiddleName15964",LastName15964 +15965,15965,"FirstName15965 MiddleName15965",LastName15965 +15966,15966,"FirstName15966 MiddleName15966",LastName15966 +15967,15967,"FirstName15967 MiddleName15967",LastName15967 +15968,15968,"FirstName15968 MiddleName15968",LastName15968 +15969,15969,"FirstName15969 MiddleName15969",LastName15969 +15970,15970,"FirstName15970 MiddleName15970",LastName15970 +15971,15971,"FirstName15971 MiddleName15971",LastName15971 +15972,15972,"FirstName15972 MiddleName15972",LastName15972 +15973,15973,"FirstName15973 MiddleName15973",LastName15973 
+15974,15974,"FirstName15974 MiddleName15974",LastName15974 +15975,15975,"FirstName15975 MiddleName15975",LastName15975 +15976,15976,"FirstName15976 MiddleName15976",LastName15976 +15977,15977,"FirstName15977 MiddleName15977",LastName15977 +15978,15978,"FirstName15978 MiddleName15978",LastName15978 +15979,15979,"FirstName15979 MiddleName15979",LastName15979 +15980,15980,"FirstName15980 MiddleName15980",LastName15980 +15981,15981,"FirstName15981 MiddleName15981",LastName15981 +15982,15982,"FirstName15982 MiddleName15982",LastName15982 +15983,15983,"FirstName15983 MiddleName15983",LastName15983 +15984,15984,"FirstName15984 MiddleName15984",LastName15984 +15985,15985,"FirstName15985 MiddleName15985",LastName15985 +15986,15986,"FirstName15986 MiddleName15986",LastName15986 +15987,15987,"FirstName15987 MiddleName15987",LastName15987 +15988,15988,"FirstName15988 MiddleName15988",LastName15988 +15989,15989,"FirstName15989 MiddleName15989",LastName15989 +15990,15990,"FirstName15990 MiddleName15990",LastName15990 +15991,15991,"FirstName15991 MiddleName15991",LastName15991 +15992,15992,"FirstName15992 MiddleName15992",LastName15992 +15993,15993,"FirstName15993 MiddleName15993",LastName15993 +15994,15994,"FirstName15994 MiddleName15994",LastName15994 +15995,15995,"FirstName15995 MiddleName15995",LastName15995 +15996,15996,"FirstName15996 MiddleName15996",LastName15996 +15997,15997,"FirstName15997 MiddleName15997",LastName15997 +15998,15998,"FirstName15998 MiddleName15998",LastName15998 +15999,15999,"FirstName15999 MiddleName15999",LastName15999 +16000,16000,"FirstName16000 MiddleName16000",LastName16000 +16001,16001,"FirstName16001 MiddleName16001",LastName16001 +16002,16002,"FirstName16002 MiddleName16002",LastName16002 +16003,16003,"FirstName16003 MiddleName16003",LastName16003 +16004,16004,"FirstName16004 MiddleName16004",LastName16004 +16005,16005,"FirstName16005 MiddleName16005",LastName16005 +16006,16006,"FirstName16006 MiddleName16006",LastName16006 
+16007,16007,"FirstName16007 MiddleName16007",LastName16007 +16008,16008,"FirstName16008 MiddleName16008",LastName16008 +16009,16009,"FirstName16009 MiddleName16009",LastName16009 +16010,16010,"FirstName16010 MiddleName16010",LastName16010 +16011,16011,"FirstName16011 MiddleName16011",LastName16011 +16012,16012,"FirstName16012 MiddleName16012",LastName16012 +16013,16013,"FirstName16013 MiddleName16013",LastName16013 +16014,16014,"FirstName16014 MiddleName16014",LastName16014 +16015,16015,"FirstName16015 MiddleName16015",LastName16015 +16016,16016,"FirstName16016 MiddleName16016",LastName16016 +16017,16017,"FirstName16017 MiddleName16017",LastName16017 +16018,16018,"FirstName16018 MiddleName16018",LastName16018 +16019,16019,"FirstName16019 MiddleName16019",LastName16019 +16020,16020,"FirstName16020 MiddleName16020",LastName16020 +16021,16021,"FirstName16021 MiddleName16021",LastName16021 +16022,16022,"FirstName16022 MiddleName16022",LastName16022 +16023,16023,"FirstName16023 MiddleName16023",LastName16023 +16024,16024,"FirstName16024 MiddleName16024",LastName16024 +16025,16025,"FirstName16025 MiddleName16025",LastName16025 +16026,16026,"FirstName16026 MiddleName16026",LastName16026 +16027,16027,"FirstName16027 MiddleName16027",LastName16027 +16028,16028,"FirstName16028 MiddleName16028",LastName16028 +16029,16029,"FirstName16029 MiddleName16029",LastName16029 +16030,16030,"FirstName16030 MiddleName16030",LastName16030 +16031,16031,"FirstName16031 MiddleName16031",LastName16031 +16032,16032,"FirstName16032 MiddleName16032",LastName16032 +16033,16033,"FirstName16033 MiddleName16033",LastName16033 +16034,16034,"FirstName16034 MiddleName16034",LastName16034 +16035,16035,"FirstName16035 MiddleName16035",LastName16035 +16036,16036,"FirstName16036 MiddleName16036",LastName16036 +16037,16037,"FirstName16037 MiddleName16037",LastName16037 +16038,16038,"FirstName16038 MiddleName16038",LastName16038 +16039,16039,"FirstName16039 MiddleName16039",LastName16039 
+16040,16040,"FirstName16040 MiddleName16040",LastName16040 +16041,16041,"FirstName16041 MiddleName16041",LastName16041 +16042,16042,"FirstName16042 MiddleName16042",LastName16042 +16043,16043,"FirstName16043 MiddleName16043",LastName16043 +16044,16044,"FirstName16044 MiddleName16044",LastName16044 +16045,16045,"FirstName16045 MiddleName16045",LastName16045 +16046,16046,"FirstName16046 MiddleName16046",LastName16046 +16047,16047,"FirstName16047 MiddleName16047",LastName16047 +16048,16048,"FirstName16048 MiddleName16048",LastName16048 +16049,16049,"FirstName16049 MiddleName16049",LastName16049 +16050,16050,"FirstName16050 MiddleName16050",LastName16050 +16051,16051,"FirstName16051 MiddleName16051",LastName16051 +16052,16052,"FirstName16052 MiddleName16052",LastName16052 +16053,16053,"FirstName16053 MiddleName16053",LastName16053 +16054,16054,"FirstName16054 MiddleName16054",LastName16054 +16055,16055,"FirstName16055 MiddleName16055",LastName16055 +16056,16056,"FirstName16056 MiddleName16056",LastName16056 +16057,16057,"FirstName16057 MiddleName16057",LastName16057 +16058,16058,"FirstName16058 MiddleName16058",LastName16058 +16059,16059,"FirstName16059 MiddleName16059",LastName16059 +16060,16060,"FirstName16060 MiddleName16060",LastName16060 +16061,16061,"FirstName16061 MiddleName16061",LastName16061 +16062,16062,"FirstName16062 MiddleName16062",LastName16062 +16063,16063,"FirstName16063 MiddleName16063",LastName16063 +16064,16064,"FirstName16064 MiddleName16064",LastName16064 +16065,16065,"FirstName16065 MiddleName16065",LastName16065 +16066,16066,"FirstName16066 MiddleName16066",LastName16066 +16067,16067,"FirstName16067 MiddleName16067",LastName16067 +16068,16068,"FirstName16068 MiddleName16068",LastName16068 +16069,16069,"FirstName16069 MiddleName16069",LastName16069 +16070,16070,"FirstName16070 MiddleName16070",LastName16070 +16071,16071,"FirstName16071 MiddleName16071",LastName16071 +16072,16072,"FirstName16072 MiddleName16072",LastName16072 
+16073,16073,"FirstName16073 MiddleName16073",LastName16073 +16074,16074,"FirstName16074 MiddleName16074",LastName16074 +16075,16075,"FirstName16075 MiddleName16075",LastName16075 +16076,16076,"FirstName16076 MiddleName16076",LastName16076 +16077,16077,"FirstName16077 MiddleName16077",LastName16077 +16078,16078,"FirstName16078 MiddleName16078",LastName16078 +16079,16079,"FirstName16079 MiddleName16079",LastName16079 +16080,16080,"FirstName16080 MiddleName16080",LastName16080 +16081,16081,"FirstName16081 MiddleName16081",LastName16081 +16082,16082,"FirstName16082 MiddleName16082",LastName16082 +16083,16083,"FirstName16083 MiddleName16083",LastName16083 +16084,16084,"FirstName16084 MiddleName16084",LastName16084 +16085,16085,"FirstName16085 MiddleName16085",LastName16085 +16086,16086,"FirstName16086 MiddleName16086",LastName16086 +16087,16087,"FirstName16087 MiddleName16087",LastName16087 +16088,16088,"FirstName16088 MiddleName16088",LastName16088 +16089,16089,"FirstName16089 MiddleName16089",LastName16089 +16090,16090,"FirstName16090 MiddleName16090",LastName16090 +16091,16091,"FirstName16091 MiddleName16091",LastName16091 +16092,16092,"FirstName16092 MiddleName16092",LastName16092 +16093,16093,"FirstName16093 MiddleName16093",LastName16093 +16094,16094,"FirstName16094 MiddleName16094",LastName16094 +16095,16095,"FirstName16095 MiddleName16095",LastName16095 +16096,16096,"FirstName16096 MiddleName16096",LastName16096 +16097,16097,"FirstName16097 MiddleName16097",LastName16097 +16098,16098,"FirstName16098 MiddleName16098",LastName16098 +16099,16099,"FirstName16099 MiddleName16099",LastName16099 +16100,16100,"FirstName16100 MiddleName16100",LastName16100 +16101,16101,"FirstName16101 MiddleName16101",LastName16101 +16102,16102,"FirstName16102 MiddleName16102",LastName16102 +16103,16103,"FirstName16103 MiddleName16103",LastName16103 +16104,16104,"FirstName16104 MiddleName16104",LastName16104 +16105,16105,"FirstName16105 MiddleName16105",LastName16105 
+16106,16106,"FirstName16106 MiddleName16106",LastName16106 +16107,16107,"FirstName16107 MiddleName16107",LastName16107 +16108,16108,"FirstName16108 MiddleName16108",LastName16108 +16109,16109,"FirstName16109 MiddleName16109",LastName16109 +16110,16110,"FirstName16110 MiddleName16110",LastName16110 +16111,16111,"FirstName16111 MiddleName16111",LastName16111 +16112,16112,"FirstName16112 MiddleName16112",LastName16112 +16113,16113,"FirstName16113 MiddleName16113",LastName16113 +16114,16114,"FirstName16114 MiddleName16114",LastName16114 +16115,16115,"FirstName16115 MiddleName16115",LastName16115 +16116,16116,"FirstName16116 MiddleName16116",LastName16116 +16117,16117,"FirstName16117 MiddleName16117",LastName16117 +16118,16118,"FirstName16118 MiddleName16118",LastName16118 +16119,16119,"FirstName16119 MiddleName16119",LastName16119 +16120,16120,"FirstName16120 MiddleName16120",LastName16120 +16121,16121,"FirstName16121 MiddleName16121",LastName16121 +16122,16122,"FirstName16122 MiddleName16122",LastName16122 +16123,16123,"FirstName16123 MiddleName16123",LastName16123 +16124,16124,"FirstName16124 MiddleName16124",LastName16124 +16125,16125,"FirstName16125 MiddleName16125",LastName16125 +16126,16126,"FirstName16126 MiddleName16126",LastName16126 +16127,16127,"FirstName16127 MiddleName16127",LastName16127 +16128,16128,"FirstName16128 MiddleName16128",LastName16128 +16129,16129,"FirstName16129 MiddleName16129",LastName16129 +16130,16130,"FirstName16130 MiddleName16130",LastName16130 +16131,16131,"FirstName16131 MiddleName16131",LastName16131 +16132,16132,"FirstName16132 MiddleName16132",LastName16132 +16133,16133,"FirstName16133 MiddleName16133",LastName16133 +16134,16134,"FirstName16134 MiddleName16134",LastName16134 +16135,16135,"FirstName16135 MiddleName16135",LastName16135 +16136,16136,"FirstName16136 MiddleName16136",LastName16136 +16137,16137,"FirstName16137 MiddleName16137",LastName16137 +16138,16138,"FirstName16138 MiddleName16138",LastName16138 
+16139,16139,"FirstName16139 MiddleName16139",LastName16139 +16140,16140,"FirstName16140 MiddleName16140",LastName16140 +16141,16141,"FirstName16141 MiddleName16141",LastName16141 +16142,16142,"FirstName16142 MiddleName16142",LastName16142 +16143,16143,"FirstName16143 MiddleName16143",LastName16143 +16144,16144,"FirstName16144 MiddleName16144",LastName16144 +16145,16145,"FirstName16145 MiddleName16145",LastName16145 +16146,16146,"FirstName16146 MiddleName16146",LastName16146 +16147,16147,"FirstName16147 MiddleName16147",LastName16147 +16148,16148,"FirstName16148 MiddleName16148",LastName16148 +16149,16149,"FirstName16149 MiddleName16149",LastName16149 +16150,16150,"FirstName16150 MiddleName16150",LastName16150 +16151,16151,"FirstName16151 MiddleName16151",LastName16151 +16152,16152,"FirstName16152 MiddleName16152",LastName16152 +16153,16153,"FirstName16153 MiddleName16153",LastName16153 +16154,16154,"FirstName16154 MiddleName16154",LastName16154 +16155,16155,"FirstName16155 MiddleName16155",LastName16155 +16156,16156,"FirstName16156 MiddleName16156",LastName16156 +16157,16157,"FirstName16157 MiddleName16157",LastName16157 +16158,16158,"FirstName16158 MiddleName16158",LastName16158 +16159,16159,"FirstName16159 MiddleName16159",LastName16159 +16160,16160,"FirstName16160 MiddleName16160",LastName16160 +16161,16161,"FirstName16161 MiddleName16161",LastName16161 +16162,16162,"FirstName16162 MiddleName16162",LastName16162 +16163,16163,"FirstName16163 MiddleName16163",LastName16163 +16164,16164,"FirstName16164 MiddleName16164",LastName16164 +16165,16165,"FirstName16165 MiddleName16165",LastName16165 +16166,16166,"FirstName16166 MiddleName16166",LastName16166 +16167,16167,"FirstName16167 MiddleName16167",LastName16167 +16168,16168,"FirstName16168 MiddleName16168",LastName16168 +16169,16169,"FirstName16169 MiddleName16169",LastName16169 +16170,16170,"FirstName16170 MiddleName16170",LastName16170 +16171,16171,"FirstName16171 MiddleName16171",LastName16171 
+16172,16172,"FirstName16172 MiddleName16172",LastName16172 +16173,16173,"FirstName16173 MiddleName16173",LastName16173 +16174,16174,"FirstName16174 MiddleName16174",LastName16174 +16175,16175,"FirstName16175 MiddleName16175",LastName16175 +16176,16176,"FirstName16176 MiddleName16176",LastName16176 +16177,16177,"FirstName16177 MiddleName16177",LastName16177 +16178,16178,"FirstName16178 MiddleName16178",LastName16178 +16179,16179,"FirstName16179 MiddleName16179",LastName16179 +16180,16180,"FirstName16180 MiddleName16180",LastName16180 +16181,16181,"FirstName16181 MiddleName16181",LastName16181 +16182,16182,"FirstName16182 MiddleName16182",LastName16182 +16183,16183,"FirstName16183 MiddleName16183",LastName16183 +16184,16184,"FirstName16184 MiddleName16184",LastName16184 +16185,16185,"FirstName16185 MiddleName16185",LastName16185 +16186,16186,"FirstName16186 MiddleName16186",LastName16186 +16187,16187,"FirstName16187 MiddleName16187",LastName16187 +16188,16188,"FirstName16188 MiddleName16188",LastName16188 +16189,16189,"FirstName16189 MiddleName16189",LastName16189 +16190,16190,"FirstName16190 MiddleName16190",LastName16190 +16191,16191,"FirstName16191 MiddleName16191",LastName16191 +16192,16192,"FirstName16192 MiddleName16192",LastName16192 +16193,16193,"FirstName16193 MiddleName16193",LastName16193 +16194,16194,"FirstName16194 MiddleName16194",LastName16194 +16195,16195,"FirstName16195 MiddleName16195",LastName16195 +16196,16196,"FirstName16196 MiddleName16196",LastName16196 +16197,16197,"FirstName16197 MiddleName16197",LastName16197 +16198,16198,"FirstName16198 MiddleName16198",LastName16198 +16199,16199,"FirstName16199 MiddleName16199",LastName16199 +16200,16200,"FirstName16200 MiddleName16200",LastName16200 +16201,16201,"FirstName16201 MiddleName16201",LastName16201 +16202,16202,"FirstName16202 MiddleName16202",LastName16202 +16203,16203,"FirstName16203 MiddleName16203",LastName16203 +16204,16204,"FirstName16204 MiddleName16204",LastName16204 
+16205,16205,"FirstName16205 MiddleName16205",LastName16205 +16206,16206,"FirstName16206 MiddleName16206",LastName16206 +16207,16207,"FirstName16207 MiddleName16207",LastName16207 +16208,16208,"FirstName16208 MiddleName16208",LastName16208 +16209,16209,"FirstName16209 MiddleName16209",LastName16209 +16210,16210,"FirstName16210 MiddleName16210",LastName16210 +16211,16211,"FirstName16211 MiddleName16211",LastName16211 +16212,16212,"FirstName16212 MiddleName16212",LastName16212 +16213,16213,"FirstName16213 MiddleName16213",LastName16213 +16214,16214,"FirstName16214 MiddleName16214",LastName16214 +16215,16215,"FirstName16215 MiddleName16215",LastName16215 +16216,16216,"FirstName16216 MiddleName16216",LastName16216 +16217,16217,"FirstName16217 MiddleName16217",LastName16217 +16218,16218,"FirstName16218 MiddleName16218",LastName16218 +16219,16219,"FirstName16219 MiddleName16219",LastName16219 +16220,16220,"FirstName16220 MiddleName16220",LastName16220 +16221,16221,"FirstName16221 MiddleName16221",LastName16221 +16222,16222,"FirstName16222 MiddleName16222",LastName16222 +16223,16223,"FirstName16223 MiddleName16223",LastName16223 +16224,16224,"FirstName16224 MiddleName16224",LastName16224 +16225,16225,"FirstName16225 MiddleName16225",LastName16225 +16226,16226,"FirstName16226 MiddleName16226",LastName16226 +16227,16227,"FirstName16227 MiddleName16227",LastName16227 +16228,16228,"FirstName16228 MiddleName16228",LastName16228 +16229,16229,"FirstName16229 MiddleName16229",LastName16229 +16230,16230,"FirstName16230 MiddleName16230",LastName16230 +16231,16231,"FirstName16231 MiddleName16231",LastName16231 +16232,16232,"FirstName16232 MiddleName16232",LastName16232 +16233,16233,"FirstName16233 MiddleName16233",LastName16233 +16234,16234,"FirstName16234 MiddleName16234",LastName16234 +16235,16235,"FirstName16235 MiddleName16235",LastName16235 +16236,16236,"FirstName16236 MiddleName16236",LastName16236 +16237,16237,"FirstName16237 MiddleName16237",LastName16237 
+16238,16238,"FirstName16238 MiddleName16238",LastName16238 +16239,16239,"FirstName16239 MiddleName16239",LastName16239 +16240,16240,"FirstName16240 MiddleName16240",LastName16240 +16241,16241,"FirstName16241 MiddleName16241",LastName16241 +16242,16242,"FirstName16242 MiddleName16242",LastName16242 +16243,16243,"FirstName16243 MiddleName16243",LastName16243 +16244,16244,"FirstName16244 MiddleName16244",LastName16244 +16245,16245,"FirstName16245 MiddleName16245",LastName16245 +16246,16246,"FirstName16246 MiddleName16246",LastName16246 +16247,16247,"FirstName16247 MiddleName16247",LastName16247 +16248,16248,"FirstName16248 MiddleName16248",LastName16248 +16249,16249,"FirstName16249 MiddleName16249",LastName16249 +16250,16250,"FirstName16250 MiddleName16250",LastName16250 +16251,16251,"FirstName16251 MiddleName16251",LastName16251 +16252,16252,"FirstName16252 MiddleName16252",LastName16252 +16253,16253,"FirstName16253 MiddleName16253",LastName16253 +16254,16254,"FirstName16254 MiddleName16254",LastName16254 +16255,16255,"FirstName16255 MiddleName16255",LastName16255 +16256,16256,"FirstName16256 MiddleName16256",LastName16256 +16257,16257,"FirstName16257 MiddleName16257",LastName16257 +16258,16258,"FirstName16258 MiddleName16258",LastName16258 +16259,16259,"FirstName16259 MiddleName16259",LastName16259 +16260,16260,"FirstName16260 MiddleName16260",LastName16260 +16261,16261,"FirstName16261 MiddleName16261",LastName16261 +16262,16262,"FirstName16262 MiddleName16262",LastName16262 +16263,16263,"FirstName16263 MiddleName16263",LastName16263 +16264,16264,"FirstName16264 MiddleName16264",LastName16264 +16265,16265,"FirstName16265 MiddleName16265",LastName16265 +16266,16266,"FirstName16266 MiddleName16266",LastName16266 +16267,16267,"FirstName16267 MiddleName16267",LastName16267 +16268,16268,"FirstName16268 MiddleName16268",LastName16268 +16269,16269,"FirstName16269 MiddleName16269",LastName16269 +16270,16270,"FirstName16270 MiddleName16270",LastName16270 
+16271,16271,"FirstName16271 MiddleName16271",LastName16271 +16272,16272,"FirstName16272 MiddleName16272",LastName16272 +16273,16273,"FirstName16273 MiddleName16273",LastName16273 +16274,16274,"FirstName16274 MiddleName16274",LastName16274 +16275,16275,"FirstName16275 MiddleName16275",LastName16275 +16276,16276,"FirstName16276 MiddleName16276",LastName16276 +16277,16277,"FirstName16277 MiddleName16277",LastName16277 +16278,16278,"FirstName16278 MiddleName16278",LastName16278 +16279,16279,"FirstName16279 MiddleName16279",LastName16279 +16280,16280,"FirstName16280 MiddleName16280",LastName16280 +16281,16281,"FirstName16281 MiddleName16281",LastName16281 +16282,16282,"FirstName16282 MiddleName16282",LastName16282 +16283,16283,"FirstName16283 MiddleName16283",LastName16283 +16284,16284,"FirstName16284 MiddleName16284",LastName16284 +16285,16285,"FirstName16285 MiddleName16285",LastName16285 +16286,16286,"FirstName16286 MiddleName16286",LastName16286 +16287,16287,"FirstName16287 MiddleName16287",LastName16287 +16288,16288,"FirstName16288 MiddleName16288",LastName16288 +16289,16289,"FirstName16289 MiddleName16289",LastName16289 +16290,16290,"FirstName16290 MiddleName16290",LastName16290 +16291,16291,"FirstName16291 MiddleName16291",LastName16291 +16292,16292,"FirstName16292 MiddleName16292",LastName16292 +16293,16293,"FirstName16293 MiddleName16293",LastName16293 +16294,16294,"FirstName16294 MiddleName16294",LastName16294 +16295,16295,"FirstName16295 MiddleName16295",LastName16295 +16296,16296,"FirstName16296 MiddleName16296",LastName16296 +16297,16297,"FirstName16297 MiddleName16297",LastName16297 +16298,16298,"FirstName16298 MiddleName16298",LastName16298 +16299,16299,"FirstName16299 MiddleName16299",LastName16299 +16300,16300,"FirstName16300 MiddleName16300",LastName16300 +16301,16301,"FirstName16301 MiddleName16301",LastName16301 +16302,16302,"FirstName16302 MiddleName16302",LastName16302 +16303,16303,"FirstName16303 MiddleName16303",LastName16303 
+16304,16304,"FirstName16304 MiddleName16304",LastName16304 +16305,16305,"FirstName16305 MiddleName16305",LastName16305 +16306,16306,"FirstName16306 MiddleName16306",LastName16306 +16307,16307,"FirstName16307 MiddleName16307",LastName16307 +16308,16308,"FirstName16308 MiddleName16308",LastName16308 +16309,16309,"FirstName16309 MiddleName16309",LastName16309 +16310,16310,"FirstName16310 MiddleName16310",LastName16310 +16311,16311,"FirstName16311 MiddleName16311",LastName16311 +16312,16312,"FirstName16312 MiddleName16312",LastName16312 +16313,16313,"FirstName16313 MiddleName16313",LastName16313 +16314,16314,"FirstName16314 MiddleName16314",LastName16314 +16315,16315,"FirstName16315 MiddleName16315",LastName16315 +16316,16316,"FirstName16316 MiddleName16316",LastName16316 +16317,16317,"FirstName16317 MiddleName16317",LastName16317 +16318,16318,"FirstName16318 MiddleName16318",LastName16318 +16319,16319,"FirstName16319 MiddleName16319",LastName16319 +16320,16320,"FirstName16320 MiddleName16320",LastName16320 +16321,16321,"FirstName16321 MiddleName16321",LastName16321 +16322,16322,"FirstName16322 MiddleName16322",LastName16322 +16323,16323,"FirstName16323 MiddleName16323",LastName16323 +16324,16324,"FirstName16324 MiddleName16324",LastName16324 +16325,16325,"FirstName16325 MiddleName16325",LastName16325 +16326,16326,"FirstName16326 MiddleName16326",LastName16326 +16327,16327,"FirstName16327 MiddleName16327",LastName16327 +16328,16328,"FirstName16328 MiddleName16328",LastName16328 +16329,16329,"FirstName16329 MiddleName16329",LastName16329 +16330,16330,"FirstName16330 MiddleName16330",LastName16330 +16331,16331,"FirstName16331 MiddleName16331",LastName16331 +16332,16332,"FirstName16332 MiddleName16332",LastName16332 +16333,16333,"FirstName16333 MiddleName16333",LastName16333 +16334,16334,"FirstName16334 MiddleName16334",LastName16334 +16335,16335,"FirstName16335 MiddleName16335",LastName16335 +16336,16336,"FirstName16336 MiddleName16336",LastName16336 
+16337,16337,"FirstName16337 MiddleName16337",LastName16337 +16338,16338,"FirstName16338 MiddleName16338",LastName16338 +16339,16339,"FirstName16339 MiddleName16339",LastName16339 +16340,16340,"FirstName16340 MiddleName16340",LastName16340 +16341,16341,"FirstName16341 MiddleName16341",LastName16341 +16342,16342,"FirstName16342 MiddleName16342",LastName16342 +16343,16343,"FirstName16343 MiddleName16343",LastName16343 +16344,16344,"FirstName16344 MiddleName16344",LastName16344 +16345,16345,"FirstName16345 MiddleName16345",LastName16345 +16346,16346,"FirstName16346 MiddleName16346",LastName16346 +16347,16347,"FirstName16347 MiddleName16347",LastName16347 +16348,16348,"FirstName16348 MiddleName16348",LastName16348 +16349,16349,"FirstName16349 MiddleName16349",LastName16349 +16350,16350,"FirstName16350 MiddleName16350",LastName16350 +16351,16351,"FirstName16351 MiddleName16351",LastName16351 +16352,16352,"FirstName16352 MiddleName16352",LastName16352 +16353,16353,"FirstName16353 MiddleName16353",LastName16353 +16354,16354,"FirstName16354 MiddleName16354",LastName16354 +16355,16355,"FirstName16355 MiddleName16355",LastName16355 +16356,16356,"FirstName16356 MiddleName16356",LastName16356 +16357,16357,"FirstName16357 MiddleName16357",LastName16357 +16358,16358,"FirstName16358 MiddleName16358",LastName16358 +16359,16359,"FirstName16359 MiddleName16359",LastName16359 +16360,16360,"FirstName16360 MiddleName16360",LastName16360 +16361,16361,"FirstName16361 MiddleName16361",LastName16361 +16362,16362,"FirstName16362 MiddleName16362",LastName16362 +16363,16363,"FirstName16363 MiddleName16363",LastName16363 +16364,16364,"FirstName16364 MiddleName16364",LastName16364 +16365,16365,"FirstName16365 MiddleName16365",LastName16365 +16366,16366,"FirstName16366 MiddleName16366",LastName16366 +16367,16367,"FirstName16367 MiddleName16367",LastName16367 +16368,16368,"FirstName16368 MiddleName16368",LastName16368 +16369,16369,"FirstName16369 MiddleName16369",LastName16369 
+16370,16370,"FirstName16370 MiddleName16370",LastName16370 +16371,16371,"FirstName16371 MiddleName16371",LastName16371 +16372,16372,"FirstName16372 MiddleName16372",LastName16372 +16373,16373,"FirstName16373 MiddleName16373",LastName16373 +16374,16374,"FirstName16374 MiddleName16374",LastName16374 +16375,16375,"FirstName16375 MiddleName16375",LastName16375 +16376,16376,"FirstName16376 MiddleName16376",LastName16376 +16377,16377,"FirstName16377 MiddleName16377",LastName16377 +16378,16378,"FirstName16378 MiddleName16378",LastName16378 +16379,16379,"FirstName16379 MiddleName16379",LastName16379 +16380,16380,"FirstName16380 MiddleName16380",LastName16380 +16381,16381,"FirstName16381 MiddleName16381",LastName16381 +16382,16382,"FirstName16382 MiddleName16382",LastName16382 +16383,16383,"FirstName16383 MiddleName16383",LastName16383 +16384,16384,"FirstName16384 MiddleName16384",LastName16384 +16385,16385,"FirstName16385 MiddleName16385",LastName16385 +16386,16386,"FirstName16386 MiddleName16386",LastName16386 +16387,16387,"FirstName16387 MiddleName16387",LastName16387 +16388,16388,"FirstName16388 MiddleName16388",LastName16388 +16389,16389,"FirstName16389 MiddleName16389",LastName16389 +16390,16390,"FirstName16390 MiddleName16390",LastName16390 +16391,16391,"FirstName16391 MiddleName16391",LastName16391 +16392,16392,"FirstName16392 MiddleName16392",LastName16392 +16393,16393,"FirstName16393 MiddleName16393",LastName16393 +16394,16394,"FirstName16394 MiddleName16394",LastName16394 +16395,16395,"FirstName16395 MiddleName16395",LastName16395 +16396,16396,"FirstName16396 MiddleName16396",LastName16396 +16397,16397,"FirstName16397 MiddleName16397",LastName16397 +16398,16398,"FirstName16398 MiddleName16398",LastName16398 +16399,16399,"FirstName16399 MiddleName16399",LastName16399 +16400,16400,"FirstName16400 MiddleName16400",LastName16400 +16401,16401,"FirstName16401 MiddleName16401",LastName16401 +16402,16402,"FirstName16402 MiddleName16402",LastName16402 
+16403,16403,"FirstName16403 MiddleName16403",LastName16403 +16404,16404,"FirstName16404 MiddleName16404",LastName16404 +16405,16405,"FirstName16405 MiddleName16405",LastName16405 +16406,16406,"FirstName16406 MiddleName16406",LastName16406 +16407,16407,"FirstName16407 MiddleName16407",LastName16407 +16408,16408,"FirstName16408 MiddleName16408",LastName16408 +16409,16409,"FirstName16409 MiddleName16409",LastName16409 +16410,16410,"FirstName16410 MiddleName16410",LastName16410 +16411,16411,"FirstName16411 MiddleName16411",LastName16411 +16412,16412,"FirstName16412 MiddleName16412",LastName16412 +16413,16413,"FirstName16413 MiddleName16413",LastName16413 +16414,16414,"FirstName16414 MiddleName16414",LastName16414 +16415,16415,"FirstName16415 MiddleName16415",LastName16415 +16416,16416,"FirstName16416 MiddleName16416",LastName16416 +16417,16417,"FirstName16417 MiddleName16417",LastName16417 +16418,16418,"FirstName16418 MiddleName16418",LastName16418 +16419,16419,"FirstName16419 MiddleName16419",LastName16419 +16420,16420,"FirstName16420 MiddleName16420",LastName16420 +16421,16421,"FirstName16421 MiddleName16421",LastName16421 +16422,16422,"FirstName16422 MiddleName16422",LastName16422 +16423,16423,"FirstName16423 MiddleName16423",LastName16423 +16424,16424,"FirstName16424 MiddleName16424",LastName16424 +16425,16425,"FirstName16425 MiddleName16425",LastName16425 +16426,16426,"FirstName16426 MiddleName16426",LastName16426 +16427,16427,"FirstName16427 MiddleName16427",LastName16427 +16428,16428,"FirstName16428 MiddleName16428",LastName16428 +16429,16429,"FirstName16429 MiddleName16429",LastName16429 +16430,16430,"FirstName16430 MiddleName16430",LastName16430 +16431,16431,"FirstName16431 MiddleName16431",LastName16431 +16432,16432,"FirstName16432 MiddleName16432",LastName16432 +16433,16433,"FirstName16433 MiddleName16433",LastName16433 +16434,16434,"FirstName16434 MiddleName16434",LastName16434 +16435,16435,"FirstName16435 MiddleName16435",LastName16435 
+16436,16436,"FirstName16436 MiddleName16436",LastName16436 +16437,16437,"FirstName16437 MiddleName16437",LastName16437 +16438,16438,"FirstName16438 MiddleName16438",LastName16438 +16439,16439,"FirstName16439 MiddleName16439",LastName16439 +16440,16440,"FirstName16440 MiddleName16440",LastName16440 +16441,16441,"FirstName16441 MiddleName16441",LastName16441 +16442,16442,"FirstName16442 MiddleName16442",LastName16442 +16443,16443,"FirstName16443 MiddleName16443",LastName16443 +16444,16444,"FirstName16444 MiddleName16444",LastName16444 +16445,16445,"FirstName16445 MiddleName16445",LastName16445 +16446,16446,"FirstName16446 MiddleName16446",LastName16446 +16447,16447,"FirstName16447 MiddleName16447",LastName16447 +16448,16448,"FirstName16448 MiddleName16448",LastName16448 +16449,16449,"FirstName16449 MiddleName16449",LastName16449 +16450,16450,"FirstName16450 MiddleName16450",LastName16450 +16451,16451,"FirstName16451 MiddleName16451",LastName16451 +16452,16452,"FirstName16452 MiddleName16452",LastName16452 +16453,16453,"FirstName16453 MiddleName16453",LastName16453 +16454,16454,"FirstName16454 MiddleName16454",LastName16454 +16455,16455,"FirstName16455 MiddleName16455",LastName16455 +16456,16456,"FirstName16456 MiddleName16456",LastName16456 +16457,16457,"FirstName16457 MiddleName16457",LastName16457 +16458,16458,"FirstName16458 MiddleName16458",LastName16458 +16459,16459,"FirstName16459 MiddleName16459",LastName16459 +16460,16460,"FirstName16460 MiddleName16460",LastName16460 +16461,16461,"FirstName16461 MiddleName16461",LastName16461 +16462,16462,"FirstName16462 MiddleName16462",LastName16462 +16463,16463,"FirstName16463 MiddleName16463",LastName16463 +16464,16464,"FirstName16464 MiddleName16464",LastName16464 +16465,16465,"FirstName16465 MiddleName16465",LastName16465 +16466,16466,"FirstName16466 MiddleName16466",LastName16466 +16467,16467,"FirstName16467 MiddleName16467",LastName16467 +16468,16468,"FirstName16468 MiddleName16468",LastName16468 
+16469,16469,"FirstName16469 MiddleName16469",LastName16469 +16470,16470,"FirstName16470 MiddleName16470",LastName16470 +16471,16471,"FirstName16471 MiddleName16471",LastName16471 +16472,16472,"FirstName16472 MiddleName16472",LastName16472 +16473,16473,"FirstName16473 MiddleName16473",LastName16473 +16474,16474,"FirstName16474 MiddleName16474",LastName16474 +16475,16475,"FirstName16475 MiddleName16475",LastName16475 +16476,16476,"FirstName16476 MiddleName16476",LastName16476 +16477,16477,"FirstName16477 MiddleName16477",LastName16477 +16478,16478,"FirstName16478 MiddleName16478",LastName16478 +16479,16479,"FirstName16479 MiddleName16479",LastName16479 +16480,16480,"FirstName16480 MiddleName16480",LastName16480 +16481,16481,"FirstName16481 MiddleName16481",LastName16481 +16482,16482,"FirstName16482 MiddleName16482",LastName16482 +16483,16483,"FirstName16483 MiddleName16483",LastName16483 +16484,16484,"FirstName16484 MiddleName16484",LastName16484 +16485,16485,"FirstName16485 MiddleName16485",LastName16485 +16486,16486,"FirstName16486 MiddleName16486",LastName16486 +16487,16487,"FirstName16487 MiddleName16487",LastName16487 +16488,16488,"FirstName16488 MiddleName16488",LastName16488 +16489,16489,"FirstName16489 MiddleName16489",LastName16489 +16490,16490,"FirstName16490 MiddleName16490",LastName16490 +16491,16491,"FirstName16491 MiddleName16491",LastName16491 +16492,16492,"FirstName16492 MiddleName16492",LastName16492 +16493,16493,"FirstName16493 MiddleName16493",LastName16493 +16494,16494,"FirstName16494 MiddleName16494",LastName16494 +16495,16495,"FirstName16495 MiddleName16495",LastName16495 +16496,16496,"FirstName16496 MiddleName16496",LastName16496 +16497,16497,"FirstName16497 MiddleName16497",LastName16497 +16498,16498,"FirstName16498 MiddleName16498",LastName16498 +16499,16499,"FirstName16499 MiddleName16499",LastName16499 +16500,16500,"FirstName16500 MiddleName16500",LastName16500 +16501,16501,"FirstName16501 MiddleName16501",LastName16501 
+16502,16502,"FirstName16502 MiddleName16502",LastName16502 +16503,16503,"FirstName16503 MiddleName16503",LastName16503 +16504,16504,"FirstName16504 MiddleName16504",LastName16504 +16505,16505,"FirstName16505 MiddleName16505",LastName16505 +16506,16506,"FirstName16506 MiddleName16506",LastName16506 +16507,16507,"FirstName16507 MiddleName16507",LastName16507 +16508,16508,"FirstName16508 MiddleName16508",LastName16508 +16509,16509,"FirstName16509 MiddleName16509",LastName16509 +16510,16510,"FirstName16510 MiddleName16510",LastName16510 +16511,16511,"FirstName16511 MiddleName16511",LastName16511 +16512,16512,"FirstName16512 MiddleName16512",LastName16512 +16513,16513,"FirstName16513 MiddleName16513",LastName16513 +16514,16514,"FirstName16514 MiddleName16514",LastName16514 +16515,16515,"FirstName16515 MiddleName16515",LastName16515 +16516,16516,"FirstName16516 MiddleName16516",LastName16516 +16517,16517,"FirstName16517 MiddleName16517",LastName16517 +16518,16518,"FirstName16518 MiddleName16518",LastName16518 +16519,16519,"FirstName16519 MiddleName16519",LastName16519 +16520,16520,"FirstName16520 MiddleName16520",LastName16520 +16521,16521,"FirstName16521 MiddleName16521",LastName16521 +16522,16522,"FirstName16522 MiddleName16522",LastName16522 +16523,16523,"FirstName16523 MiddleName16523",LastName16523 +16524,16524,"FirstName16524 MiddleName16524",LastName16524 +16525,16525,"FirstName16525 MiddleName16525",LastName16525 +16526,16526,"FirstName16526 MiddleName16526",LastName16526 +16527,16527,"FirstName16527 MiddleName16527",LastName16527 +16528,16528,"FirstName16528 MiddleName16528",LastName16528 +16529,16529,"FirstName16529 MiddleName16529",LastName16529 +16530,16530,"FirstName16530 MiddleName16530",LastName16530 +16531,16531,"FirstName16531 MiddleName16531",LastName16531 +16532,16532,"FirstName16532 MiddleName16532",LastName16532 +16533,16533,"FirstName16533 MiddleName16533",LastName16533 +16534,16534,"FirstName16534 MiddleName16534",LastName16534 
+16535,16535,"FirstName16535 MiddleName16535",LastName16535 +16536,16536,"FirstName16536 MiddleName16536",LastName16536 +16537,16537,"FirstName16537 MiddleName16537",LastName16537 +16538,16538,"FirstName16538 MiddleName16538",LastName16538 +16539,16539,"FirstName16539 MiddleName16539",LastName16539 +16540,16540,"FirstName16540 MiddleName16540",LastName16540 +16541,16541,"FirstName16541 MiddleName16541",LastName16541 +16542,16542,"FirstName16542 MiddleName16542",LastName16542 +16543,16543,"FirstName16543 MiddleName16543",LastName16543 +16544,16544,"FirstName16544 MiddleName16544",LastName16544 +16545,16545,"FirstName16545 MiddleName16545",LastName16545 +16546,16546,"FirstName16546 MiddleName16546",LastName16546 +16547,16547,"FirstName16547 MiddleName16547",LastName16547 +16548,16548,"FirstName16548 MiddleName16548",LastName16548 +16549,16549,"FirstName16549 MiddleName16549",LastName16549 +16550,16550,"FirstName16550 MiddleName16550",LastName16550 +16551,16551,"FirstName16551 MiddleName16551",LastName16551 +16552,16552,"FirstName16552 MiddleName16552",LastName16552 +16553,16553,"FirstName16553 MiddleName16553",LastName16553 +16554,16554,"FirstName16554 MiddleName16554",LastName16554 +16555,16555,"FirstName16555 MiddleName16555",LastName16555 +16556,16556,"FirstName16556 MiddleName16556",LastName16556 +16557,16557,"FirstName16557 MiddleName16557",LastName16557 +16558,16558,"FirstName16558 MiddleName16558",LastName16558 +16559,16559,"FirstName16559 MiddleName16559",LastName16559 +16560,16560,"FirstName16560 MiddleName16560",LastName16560 +16561,16561,"FirstName16561 MiddleName16561",LastName16561 +16562,16562,"FirstName16562 MiddleName16562",LastName16562 +16563,16563,"FirstName16563 MiddleName16563",LastName16563 +16564,16564,"FirstName16564 MiddleName16564",LastName16564 +16565,16565,"FirstName16565 MiddleName16565",LastName16565 +16566,16566,"FirstName16566 MiddleName16566",LastName16566 +16567,16567,"FirstName16567 MiddleName16567",LastName16567 
+16568,16568,"FirstName16568 MiddleName16568",LastName16568 +16569,16569,"FirstName16569 MiddleName16569",LastName16569 +16570,16570,"FirstName16570 MiddleName16570",LastName16570 +16571,16571,"FirstName16571 MiddleName16571",LastName16571 +16572,16572,"FirstName16572 MiddleName16572",LastName16572 +16573,16573,"FirstName16573 MiddleName16573",LastName16573 +16574,16574,"FirstName16574 MiddleName16574",LastName16574 +16575,16575,"FirstName16575 MiddleName16575",LastName16575 +16576,16576,"FirstName16576 MiddleName16576",LastName16576 +16577,16577,"FirstName16577 MiddleName16577",LastName16577 +16578,16578,"FirstName16578 MiddleName16578",LastName16578 +16579,16579,"FirstName16579 MiddleName16579",LastName16579 +16580,16580,"FirstName16580 MiddleName16580",LastName16580 +16581,16581,"FirstName16581 MiddleName16581",LastName16581 +16582,16582,"FirstName16582 MiddleName16582",LastName16582 +16583,16583,"FirstName16583 MiddleName16583",LastName16583 +16584,16584,"FirstName16584 MiddleName16584",LastName16584 +16585,16585,"FirstName16585 MiddleName16585",LastName16585 +16586,16586,"FirstName16586 MiddleName16586",LastName16586 +16587,16587,"FirstName16587 MiddleName16587",LastName16587 +16588,16588,"FirstName16588 MiddleName16588",LastName16588 +16589,16589,"FirstName16589 MiddleName16589",LastName16589 +16590,16590,"FirstName16590 MiddleName16590",LastName16590 +16591,16591,"FirstName16591 MiddleName16591",LastName16591 +16592,16592,"FirstName16592 MiddleName16592",LastName16592 +16593,16593,"FirstName16593 MiddleName16593",LastName16593 +16594,16594,"FirstName16594 MiddleName16594",LastName16594 +16595,16595,"FirstName16595 MiddleName16595",LastName16595 +16596,16596,"FirstName16596 MiddleName16596",LastName16596 +16597,16597,"FirstName16597 MiddleName16597",LastName16597 +16598,16598,"FirstName16598 MiddleName16598",LastName16598 +16599,16599,"FirstName16599 MiddleName16599",LastName16599 +16600,16600,"FirstName16600 MiddleName16600",LastName16600 
+16601,16601,"FirstName16601 MiddleName16601",LastName16601 +16602,16602,"FirstName16602 MiddleName16602",LastName16602 +16603,16603,"FirstName16603 MiddleName16603",LastName16603 +16604,16604,"FirstName16604 MiddleName16604",LastName16604 +16605,16605,"FirstName16605 MiddleName16605",LastName16605 +16606,16606,"FirstName16606 MiddleName16606",LastName16606 +16607,16607,"FirstName16607 MiddleName16607",LastName16607 +16608,16608,"FirstName16608 MiddleName16608",LastName16608 +16609,16609,"FirstName16609 MiddleName16609",LastName16609 +16610,16610,"FirstName16610 MiddleName16610",LastName16610 +16611,16611,"FirstName16611 MiddleName16611",LastName16611 +16612,16612,"FirstName16612 MiddleName16612",LastName16612 +16613,16613,"FirstName16613 MiddleName16613",LastName16613 +16614,16614,"FirstName16614 MiddleName16614",LastName16614 +16615,16615,"FirstName16615 MiddleName16615",LastName16615 +16616,16616,"FirstName16616 MiddleName16616",LastName16616 +16617,16617,"FirstName16617 MiddleName16617",LastName16617 +16618,16618,"FirstName16618 MiddleName16618",LastName16618 +16619,16619,"FirstName16619 MiddleName16619",LastName16619 +16620,16620,"FirstName16620 MiddleName16620",LastName16620 +16621,16621,"FirstName16621 MiddleName16621",LastName16621 +16622,16622,"FirstName16622 MiddleName16622",LastName16622 +16623,16623,"FirstName16623 MiddleName16623",LastName16623 +16624,16624,"FirstName16624 MiddleName16624",LastName16624 +16625,16625,"FirstName16625 MiddleName16625",LastName16625 +16626,16626,"FirstName16626 MiddleName16626",LastName16626 +16627,16627,"FirstName16627 MiddleName16627",LastName16627 +16628,16628,"FirstName16628 MiddleName16628",LastName16628 +16629,16629,"FirstName16629 MiddleName16629",LastName16629 +16630,16630,"FirstName16630 MiddleName16630",LastName16630 +16631,16631,"FirstName16631 MiddleName16631",LastName16631 +16632,16632,"FirstName16632 MiddleName16632",LastName16632 +16633,16633,"FirstName16633 MiddleName16633",LastName16633 
+16634,16634,"FirstName16634 MiddleName16634",LastName16634 +16635,16635,"FirstName16635 MiddleName16635",LastName16635 +16636,16636,"FirstName16636 MiddleName16636",LastName16636 +16637,16637,"FirstName16637 MiddleName16637",LastName16637 +16638,16638,"FirstName16638 MiddleName16638",LastName16638 +16639,16639,"FirstName16639 MiddleName16639",LastName16639 +16640,16640,"FirstName16640 MiddleName16640",LastName16640 +16641,16641,"FirstName16641 MiddleName16641",LastName16641 +16642,16642,"FirstName16642 MiddleName16642",LastName16642 +16643,16643,"FirstName16643 MiddleName16643",LastName16643 +16644,16644,"FirstName16644 MiddleName16644",LastName16644 +16645,16645,"FirstName16645 MiddleName16645",LastName16645 +16646,16646,"FirstName16646 MiddleName16646",LastName16646 +16647,16647,"FirstName16647 MiddleName16647",LastName16647 +16648,16648,"FirstName16648 MiddleName16648",LastName16648 +16649,16649,"FirstName16649 MiddleName16649",LastName16649 +16650,16650,"FirstName16650 MiddleName16650",LastName16650 +16651,16651,"FirstName16651 MiddleName16651",LastName16651 +16652,16652,"FirstName16652 MiddleName16652",LastName16652 +16653,16653,"FirstName16653 MiddleName16653",LastName16653 +16654,16654,"FirstName16654 MiddleName16654",LastName16654 +16655,16655,"FirstName16655 MiddleName16655",LastName16655 +16656,16656,"FirstName16656 MiddleName16656",LastName16656 +16657,16657,"FirstName16657 MiddleName16657",LastName16657 +16658,16658,"FirstName16658 MiddleName16658",LastName16658 +16659,16659,"FirstName16659 MiddleName16659",LastName16659 +16660,16660,"FirstName16660 MiddleName16660",LastName16660 +16661,16661,"FirstName16661 MiddleName16661",LastName16661 +16662,16662,"FirstName16662 MiddleName16662",LastName16662 +16663,16663,"FirstName16663 MiddleName16663",LastName16663 +16664,16664,"FirstName16664 MiddleName16664",LastName16664 +16665,16665,"FirstName16665 MiddleName16665",LastName16665 +16666,16666,"FirstName16666 MiddleName16666",LastName16666 
+16667,16667,"FirstName16667 MiddleName16667",LastName16667 +16668,16668,"FirstName16668 MiddleName16668",LastName16668 +16669,16669,"FirstName16669 MiddleName16669",LastName16669 +16670,16670,"FirstName16670 MiddleName16670",LastName16670 +16671,16671,"FirstName16671 MiddleName16671",LastName16671 +16672,16672,"FirstName16672 MiddleName16672",LastName16672 +16673,16673,"FirstName16673 MiddleName16673",LastName16673 +16674,16674,"FirstName16674 MiddleName16674",LastName16674 +16675,16675,"FirstName16675 MiddleName16675",LastName16675 +16676,16676,"FirstName16676 MiddleName16676",LastName16676 +16677,16677,"FirstName16677 MiddleName16677",LastName16677 +16678,16678,"FirstName16678 MiddleName16678",LastName16678 +16679,16679,"FirstName16679 MiddleName16679",LastName16679 +16680,16680,"FirstName16680 MiddleName16680",LastName16680 +16681,16681,"FirstName16681 MiddleName16681",LastName16681 +16682,16682,"FirstName16682 MiddleName16682",LastName16682 +16683,16683,"FirstName16683 MiddleName16683",LastName16683 +16684,16684,"FirstName16684 MiddleName16684",LastName16684 +16685,16685,"FirstName16685 MiddleName16685",LastName16685 +16686,16686,"FirstName16686 MiddleName16686",LastName16686 +16687,16687,"FirstName16687 MiddleName16687",LastName16687 +16688,16688,"FirstName16688 MiddleName16688",LastName16688 +16689,16689,"FirstName16689 MiddleName16689",LastName16689 +16690,16690,"FirstName16690 MiddleName16690",LastName16690 +16691,16691,"FirstName16691 MiddleName16691",LastName16691 +16692,16692,"FirstName16692 MiddleName16692",LastName16692 +16693,16693,"FirstName16693 MiddleName16693",LastName16693 +16694,16694,"FirstName16694 MiddleName16694",LastName16694 +16695,16695,"FirstName16695 MiddleName16695",LastName16695 +16696,16696,"FirstName16696 MiddleName16696",LastName16696 +16697,16697,"FirstName16697 MiddleName16697",LastName16697 +16698,16698,"FirstName16698 MiddleName16698",LastName16698 +16699,16699,"FirstName16699 MiddleName16699",LastName16699 
+16700,16700,"FirstName16700 MiddleName16700",LastName16700 +16701,16701,"FirstName16701 MiddleName16701",LastName16701 +16702,16702,"FirstName16702 MiddleName16702",LastName16702 +16703,16703,"FirstName16703 MiddleName16703",LastName16703 +16704,16704,"FirstName16704 MiddleName16704",LastName16704 +16705,16705,"FirstName16705 MiddleName16705",LastName16705 +16706,16706,"FirstName16706 MiddleName16706",LastName16706 +16707,16707,"FirstName16707 MiddleName16707",LastName16707 +16708,16708,"FirstName16708 MiddleName16708",LastName16708 +16709,16709,"FirstName16709 MiddleName16709",LastName16709 +16710,16710,"FirstName16710 MiddleName16710",LastName16710 +16711,16711,"FirstName16711 MiddleName16711",LastName16711 +16712,16712,"FirstName16712 MiddleName16712",LastName16712 +16713,16713,"FirstName16713 MiddleName16713",LastName16713 +16714,16714,"FirstName16714 MiddleName16714",LastName16714 +16715,16715,"FirstName16715 MiddleName16715",LastName16715 +16716,16716,"FirstName16716 MiddleName16716",LastName16716 +16717,16717,"FirstName16717 MiddleName16717",LastName16717 +16718,16718,"FirstName16718 MiddleName16718",LastName16718 +16719,16719,"FirstName16719 MiddleName16719",LastName16719 +16720,16720,"FirstName16720 MiddleName16720",LastName16720 +16721,16721,"FirstName16721 MiddleName16721",LastName16721 +16722,16722,"FirstName16722 MiddleName16722",LastName16722 +16723,16723,"FirstName16723 MiddleName16723",LastName16723 +16724,16724,"FirstName16724 MiddleName16724",LastName16724 +16725,16725,"FirstName16725 MiddleName16725",LastName16725 +16726,16726,"FirstName16726 MiddleName16726",LastName16726 +16727,16727,"FirstName16727 MiddleName16727",LastName16727 +16728,16728,"FirstName16728 MiddleName16728",LastName16728 +16729,16729,"FirstName16729 MiddleName16729",LastName16729 +16730,16730,"FirstName16730 MiddleName16730",LastName16730 +16731,16731,"FirstName16731 MiddleName16731",LastName16731 +16732,16732,"FirstName16732 MiddleName16732",LastName16732 
+16733,16733,"FirstName16733 MiddleName16733",LastName16733
+16734,16734,"FirstName16734 MiddleName16734",LastName16734
+16735,16735,"FirstName16735 MiddleName16735",LastName16735
+16736,16736,"FirstName16736 MiddleName16736",LastName16736
+16737,16737,"FirstName16737 MiddleName16737",LastName16737
+16738,16738,"FirstName16738 MiddleName16738",LastName16738
+16739,16739,"FirstName16739 MiddleName16739",LastName16739
+16740,16740,"FirstName16740 MiddleName16740",LastName16740
+16741,16741,"FirstName16741 MiddleName16741",LastName16741
+16742,16742,"FirstName16742 MiddleName16742",LastName16742
+16743,16743,"FirstName16743 MiddleName16743",LastName16743
+16744,16744,"FirstName16744 MiddleName16744",LastName16744
+16745,16745,"FirstName16745 MiddleName16745",LastName16745
+16746,16746,"FirstName16746 MiddleName16746",LastName16746
+16747,16747,"FirstName16747 MiddleName16747",LastName16747
+16748,16748,"FirstName16748 MiddleName16748",LastName16748
+16749,16749,"FirstName16749 MiddleName16749",LastName16749
+16750,16750,"FirstName16750 MiddleName16750",LastName16750
+16751,16751,"FirstName16751 MiddleName16751",LastName16751
+16752,16752,"FirstName16752 MiddleName16752",LastName16752
+16753,16753,"FirstName16753 MiddleName16753",LastName16753
+16754,16754,"FirstName16754 MiddleName16754",LastName16754
+16755,16755,"FirstName16755 MiddleName16755",LastName16755
+16756,16756,"FirstName16756 MiddleName16756",LastName16756
+16757,16757,"FirstName16757 MiddleName16757",LastName16757
+16758,16758,"FirstName16758 MiddleName16758",LastName16758
+16759,16759,"FirstName16759 MiddleName16759",LastName16759
+16760,16760,"FirstName16760 MiddleName16760",LastName16760
+16761,16761,"FirstName16761 MiddleName16761",LastName16761
+16762,16762,"FirstName16762 MiddleName16762",LastName16762
+16763,16763,"FirstName16763 MiddleName16763",LastName16763
+16764,16764,"FirstName16764 MiddleName16764",LastName16764
+16765,16765,"FirstName16765 MiddleName16765",LastName16765
+16766,16766,"FirstName16766 MiddleName16766",LastName16766
+16767,16767,"FirstName16767 MiddleName16767",LastName16767
+16768,16768,"FirstName16768 MiddleName16768",LastName16768
+16769,16769,"FirstName16769 MiddleName16769",LastName16769
+16770,16770,"FirstName16770 MiddleName16770",LastName16770
+16771,16771,"FirstName16771 MiddleName16771",LastName16771
+16772,16772,"FirstName16772 MiddleName16772",LastName16772
+16773,16773,"FirstName16773 MiddleName16773",LastName16773
+16774,16774,"FirstName16774 MiddleName16774",LastName16774
+16775,16775,"FirstName16775 MiddleName16775",LastName16775
+16776,16776,"FirstName16776 MiddleName16776",LastName16776
+16777,16777,"FirstName16777 MiddleName16777",LastName16777
+16778,16778,"FirstName16778 MiddleName16778",LastName16778
+16779,16779,"FirstName16779 MiddleName16779",LastName16779
+16780,16780,"FirstName16780 MiddleName16780",LastName16780
+16781,16781,"FirstName16781 MiddleName16781",LastName16781
+16782,16782,"FirstName16782 MiddleName16782",LastName16782
+16783,16783,"FirstName16783 MiddleName16783",LastName16783
+16784,16784,"FirstName16784 MiddleName16784",LastName16784
+16785,16785,"FirstName16785 MiddleName16785",LastName16785
+16786,16786,"FirstName16786 MiddleName16786",LastName16786
+16787,16787,"FirstName16787 MiddleName16787",LastName16787
+16788,16788,"FirstName16788 MiddleName16788",LastName16788
+16789,16789,"FirstName16789 MiddleName16789",LastName16789
+16790,16790,"FirstName16790 MiddleName16790",LastName16790
+16791,16791,"FirstName16791 MiddleName16791",LastName16791
+16792,16792,"FirstName16792 MiddleName16792",LastName16792
+16793,16793,"FirstName16793 MiddleName16793",LastName16793
+16794,16794,"FirstName16794 MiddleName16794",LastName16794
+16795,16795,"FirstName16795 MiddleName16795",LastName16795
+16796,16796,"FirstName16796 MiddleName16796",LastName16796
+16797,16797,"FirstName16797 MiddleName16797",LastName16797
+16798,16798,"FirstName16798 MiddleName16798",LastName16798
+16799,16799,"FirstName16799 MiddleName16799",LastName16799
+16800,16800,"FirstName16800 MiddleName16800",LastName16800
+16801,16801,"FirstName16801 MiddleName16801",LastName16801
+16802,16802,"FirstName16802 MiddleName16802",LastName16802
+16803,16803,"FirstName16803 MiddleName16803",LastName16803
+16804,16804,"FirstName16804 MiddleName16804",LastName16804
+16805,16805,"FirstName16805 MiddleName16805",LastName16805
+16806,16806,"FirstName16806 MiddleName16806",LastName16806
+16807,16807,"FirstName16807 MiddleName16807",LastName16807
+16808,16808,"FirstName16808 MiddleName16808",LastName16808
+16809,16809,"FirstName16809 MiddleName16809",LastName16809
+16810,16810,"FirstName16810 MiddleName16810",LastName16810
+16811,16811,"FirstName16811 MiddleName16811",LastName16811
+16812,16812,"FirstName16812 MiddleName16812",LastName16812
+16813,16813,"FirstName16813 MiddleName16813",LastName16813
+16814,16814,"FirstName16814 MiddleName16814",LastName16814
+16815,16815,"FirstName16815 MiddleName16815",LastName16815
+16816,16816,"FirstName16816 MiddleName16816",LastName16816
+16817,16817,"FirstName16817 MiddleName16817",LastName16817
+16818,16818,"FirstName16818 MiddleName16818",LastName16818
+16819,16819,"FirstName16819 MiddleName16819",LastName16819
+16820,16820,"FirstName16820 MiddleName16820",LastName16820
+16821,16821,"FirstName16821 MiddleName16821",LastName16821
+16822,16822,"FirstName16822 MiddleName16822",LastName16822
+16823,16823,"FirstName16823 MiddleName16823",LastName16823
+16824,16824,"FirstName16824 MiddleName16824",LastName16824
+16825,16825,"FirstName16825 MiddleName16825",LastName16825
+16826,16826,"FirstName16826 MiddleName16826",LastName16826
+16827,16827,"FirstName16827 MiddleName16827",LastName16827
+16828,16828,"FirstName16828 MiddleName16828",LastName16828
+16829,16829,"FirstName16829 MiddleName16829",LastName16829
+16830,16830,"FirstName16830 MiddleName16830",LastName16830
+16831,16831,"FirstName16831 MiddleName16831",LastName16831
+16832,16832,"FirstName16832 MiddleName16832",LastName16832
+16833,16833,"FirstName16833 MiddleName16833",LastName16833
+16834,16834,"FirstName16834 MiddleName16834",LastName16834
+16835,16835,"FirstName16835 MiddleName16835",LastName16835
+16836,16836,"FirstName16836 MiddleName16836",LastName16836
+16837,16837,"FirstName16837 MiddleName16837",LastName16837
+16838,16838,"FirstName16838 MiddleName16838",LastName16838
+16839,16839,"FirstName16839 MiddleName16839",LastName16839
+16840,16840,"FirstName16840 MiddleName16840",LastName16840
+16841,16841,"FirstName16841 MiddleName16841",LastName16841
+16842,16842,"FirstName16842 MiddleName16842",LastName16842
+16843,16843,"FirstName16843 MiddleName16843",LastName16843
+16844,16844,"FirstName16844 MiddleName16844",LastName16844
+16845,16845,"FirstName16845 MiddleName16845",LastName16845
+16846,16846,"FirstName16846 MiddleName16846",LastName16846
+16847,16847,"FirstName16847 MiddleName16847",LastName16847
+16848,16848,"FirstName16848 MiddleName16848",LastName16848
+16849,16849,"FirstName16849 MiddleName16849",LastName16849
+16850,16850,"FirstName16850 MiddleName16850",LastName16850
+16851,16851,"FirstName16851 MiddleName16851",LastName16851
+16852,16852,"FirstName16852 MiddleName16852",LastName16852
+16853,16853,"FirstName16853 MiddleName16853",LastName16853
+16854,16854,"FirstName16854 MiddleName16854",LastName16854
+16855,16855,"FirstName16855 MiddleName16855",LastName16855
+16856,16856,"FirstName16856 MiddleName16856",LastName16856
+16857,16857,"FirstName16857 MiddleName16857",LastName16857
+16858,16858,"FirstName16858 MiddleName16858",LastName16858
+16859,16859,"FirstName16859 MiddleName16859",LastName16859
+16860,16860,"FirstName16860 MiddleName16860",LastName16860
+16861,16861,"FirstName16861 MiddleName16861",LastName16861
+16862,16862,"FirstName16862 MiddleName16862",LastName16862
+16863,16863,"FirstName16863 MiddleName16863",LastName16863
+16864,16864,"FirstName16864 MiddleName16864",LastName16864
+16865,16865,"FirstName16865 MiddleName16865",LastName16865
+16866,16866,"FirstName16866 MiddleName16866",LastName16866
+16867,16867,"FirstName16867 MiddleName16867",LastName16867
+16868,16868,"FirstName16868 MiddleName16868",LastName16868
+16869,16869,"FirstName16869 MiddleName16869",LastName16869
+16870,16870,"FirstName16870 MiddleName16870",LastName16870
+16871,16871,"FirstName16871 MiddleName16871",LastName16871
+16872,16872,"FirstName16872 MiddleName16872",LastName16872
+16873,16873,"FirstName16873 MiddleName16873",LastName16873
+16874,16874,"FirstName16874 MiddleName16874",LastName16874
+16875,16875,"FirstName16875 MiddleName16875",LastName16875
+16876,16876,"FirstName16876 MiddleName16876",LastName16876
+16877,16877,"FirstName16877 MiddleName16877",LastName16877
+16878,16878,"FirstName16878 MiddleName16878",LastName16878
+16879,16879,"FirstName16879 MiddleName16879",LastName16879
+16880,16880,"FirstName16880 MiddleName16880",LastName16880
+16881,16881,"FirstName16881 MiddleName16881",LastName16881
+16882,16882,"FirstName16882 MiddleName16882",LastName16882
+16883,16883,"FirstName16883 MiddleName16883",LastName16883
+16884,16884,"FirstName16884 MiddleName16884",LastName16884
+16885,16885,"FirstName16885 MiddleName16885",LastName16885
+16886,16886,"FirstName16886 MiddleName16886",LastName16886
+16887,16887,"FirstName16887 MiddleName16887",LastName16887
+16888,16888,"FirstName16888 MiddleName16888",LastName16888
+16889,16889,"FirstName16889 MiddleName16889",LastName16889
+16890,16890,"FirstName16890 MiddleName16890",LastName16890
+16891,16891,"FirstName16891 MiddleName16891",LastName16891
+16892,16892,"FirstName16892 MiddleName16892",LastName16892
+16893,16893,"FirstName16893 MiddleName16893",LastName16893
+16894,16894,"FirstName16894 MiddleName16894",LastName16894
+16895,16895,"FirstName16895 MiddleName16895",LastName16895
+16896,16896,"FirstName16896 MiddleName16896",LastName16896
+16897,16897,"FirstName16897 MiddleName16897",LastName16897
+16898,16898,"FirstName16898 MiddleName16898",LastName16898
+16899,16899,"FirstName16899 MiddleName16899",LastName16899
+16900,16900,"FirstName16900 MiddleName16900",LastName16900
+16901,16901,"FirstName16901 MiddleName16901",LastName16901
+16902,16902,"FirstName16902 MiddleName16902",LastName16902
+16903,16903,"FirstName16903 MiddleName16903",LastName16903
+16904,16904,"FirstName16904 MiddleName16904",LastName16904
+16905,16905,"FirstName16905 MiddleName16905",LastName16905
+16906,16906,"FirstName16906 MiddleName16906",LastName16906
+16907,16907,"FirstName16907 MiddleName16907",LastName16907
+16908,16908,"FirstName16908 MiddleName16908",LastName16908
+16909,16909,"FirstName16909 MiddleName16909",LastName16909
+16910,16910,"FirstName16910 MiddleName16910",LastName16910
+16911,16911,"FirstName16911 MiddleName16911",LastName16911
+16912,16912,"FirstName16912 MiddleName16912",LastName16912
+16913,16913,"FirstName16913 MiddleName16913",LastName16913
+16914,16914,"FirstName16914 MiddleName16914",LastName16914
+16915,16915,"FirstName16915 MiddleName16915",LastName16915
+16916,16916,"FirstName16916 MiddleName16916",LastName16916
+16917,16917,"FirstName16917 MiddleName16917",LastName16917
+16918,16918,"FirstName16918 MiddleName16918",LastName16918
+16919,16919,"FirstName16919 MiddleName16919",LastName16919
+16920,16920,"FirstName16920 MiddleName16920",LastName16920
+16921,16921,"FirstName16921 MiddleName16921",LastName16921
+16922,16922,"FirstName16922 MiddleName16922",LastName16922
+16923,16923,"FirstName16923 MiddleName16923",LastName16923
+16924,16924,"FirstName16924 MiddleName16924",LastName16924
+16925,16925,"FirstName16925 MiddleName16925",LastName16925
+16926,16926,"FirstName16926 MiddleName16926",LastName16926
+16927,16927,"FirstName16927 MiddleName16927",LastName16927
+16928,16928,"FirstName16928 MiddleName16928",LastName16928
+16929,16929,"FirstName16929 MiddleName16929",LastName16929
+16930,16930,"FirstName16930 MiddleName16930",LastName16930
+16931,16931,"FirstName16931 MiddleName16931",LastName16931
+16932,16932,"FirstName16932 MiddleName16932",LastName16932
+16933,16933,"FirstName16933 MiddleName16933",LastName16933
+16934,16934,"FirstName16934 MiddleName16934",LastName16934
+16935,16935,"FirstName16935 MiddleName16935",LastName16935
+16936,16936,"FirstName16936 MiddleName16936",LastName16936
+16937,16937,"FirstName16937 MiddleName16937",LastName16937
+16938,16938,"FirstName16938 MiddleName16938",LastName16938
+16939,16939,"FirstName16939 MiddleName16939",LastName16939
+16940,16940,"FirstName16940 MiddleName16940",LastName16940
+16941,16941,"FirstName16941 MiddleName16941",LastName16941
+16942,16942,"FirstName16942 MiddleName16942",LastName16942
+16943,16943,"FirstName16943 MiddleName16943",LastName16943
+16944,16944,"FirstName16944 MiddleName16944",LastName16944
+16945,16945,"FirstName16945 MiddleName16945",LastName16945
+16946,16946,"FirstName16946 MiddleName16946",LastName16946
+16947,16947,"FirstName16947 MiddleName16947",LastName16947
+16948,16948,"FirstName16948 MiddleName16948",LastName16948
+16949,16949,"FirstName16949 MiddleName16949",LastName16949
+16950,16950,"FirstName16950 MiddleName16950",LastName16950
+16951,16951,"FirstName16951 MiddleName16951",LastName16951
+16952,16952,"FirstName16952 MiddleName16952",LastName16952
+16953,16953,"FirstName16953 MiddleName16953",LastName16953
+16954,16954,"FirstName16954 MiddleName16954",LastName16954
+16955,16955,"FirstName16955 MiddleName16955",LastName16955
+16956,16956,"FirstName16956 MiddleName16956",LastName16956
+16957,16957,"FirstName16957 MiddleName16957",LastName16957
+16958,16958,"FirstName16958 MiddleName16958",LastName16958
+16959,16959,"FirstName16959 MiddleName16959",LastName16959
+16960,16960,"FirstName16960 MiddleName16960",LastName16960
+16961,16961,"FirstName16961 MiddleName16961",LastName16961
+16962,16962,"FirstName16962 MiddleName16962",LastName16962
+16963,16963,"FirstName16963 MiddleName16963",LastName16963
+16964,16964,"FirstName16964 MiddleName16964",LastName16964
+16965,16965,"FirstName16965 MiddleName16965",LastName16965
+16966,16966,"FirstName16966 MiddleName16966",LastName16966
+16967,16967,"FirstName16967 MiddleName16967",LastName16967
+16968,16968,"FirstName16968 MiddleName16968",LastName16968
+16969,16969,"FirstName16969 MiddleName16969",LastName16969
+16970,16970,"FirstName16970 MiddleName16970",LastName16970
+16971,16971,"FirstName16971 MiddleName16971",LastName16971
+16972,16972,"FirstName16972 MiddleName16972",LastName16972
+16973,16973,"FirstName16973 MiddleName16973",LastName16973
+16974,16974,"FirstName16974 MiddleName16974",LastName16974
+16975,16975,"FirstName16975 MiddleName16975",LastName16975
+16976,16976,"FirstName16976 MiddleName16976",LastName16976
+16977,16977,"FirstName16977 MiddleName16977",LastName16977
+16978,16978,"FirstName16978 MiddleName16978",LastName16978
+16979,16979,"FirstName16979 MiddleName16979",LastName16979
+16980,16980,"FirstName16980 MiddleName16980",LastName16980
+16981,16981,"FirstName16981 MiddleName16981",LastName16981
+16982,16982,"FirstName16982 MiddleName16982",LastName16982
+16983,16983,"FirstName16983 MiddleName16983",LastName16983
+16984,16984,"FirstName16984 MiddleName16984",LastName16984
+16985,16985,"FirstName16985 MiddleName16985",LastName16985
+16986,16986,"FirstName16986 MiddleName16986",LastName16986
+16987,16987,"FirstName16987 MiddleName16987",LastName16987
+16988,16988,"FirstName16988 MiddleName16988",LastName16988
+16989,16989,"FirstName16989 MiddleName16989",LastName16989
+16990,16990,"FirstName16990 MiddleName16990",LastName16990
+16991,16991,"FirstName16991 MiddleName16991",LastName16991
+16992,16992,"FirstName16992 MiddleName16992",LastName16992
+16993,16993,"FirstName16993 MiddleName16993",LastName16993
+16994,16994,"FirstName16994 MiddleName16994",LastName16994
+16995,16995,"FirstName16995 MiddleName16995",LastName16995
+16996,16996,"FirstName16996 MiddleName16996",LastName16996
+16997,16997,"FirstName16997 MiddleName16997",LastName16997
+16998,16998,"FirstName16998 MiddleName16998",LastName16998
+16999,16999,"FirstName16999 MiddleName16999",LastName16999
+17000,17000,"FirstName17000 MiddleName17000",LastName17000
+17001,17001,"FirstName17001 MiddleName17001",LastName17001
+17002,17002,"FirstName17002 MiddleName17002",LastName17002
+17003,17003,"FirstName17003 MiddleName17003",LastName17003
+17004,17004,"FirstName17004 MiddleName17004",LastName17004
+17005,17005,"FirstName17005 MiddleName17005",LastName17005
+17006,17006,"FirstName17006 MiddleName17006",LastName17006
+17007,17007,"FirstName17007 MiddleName17007",LastName17007
+17008,17008,"FirstName17008 MiddleName17008",LastName17008
+17009,17009,"FirstName17009 MiddleName17009",LastName17009
+17010,17010,"FirstName17010 MiddleName17010",LastName17010
+17011,17011,"FirstName17011 MiddleName17011",LastName17011
+17012,17012,"FirstName17012 MiddleName17012",LastName17012
+17013,17013,"FirstName17013 MiddleName17013",LastName17013
+17014,17014,"FirstName17014 MiddleName17014",LastName17014
+17015,17015,"FirstName17015 MiddleName17015",LastName17015
+17016,17016,"FirstName17016 MiddleName17016",LastName17016
+17017,17017,"FirstName17017 MiddleName17017",LastName17017
+17018,17018,"FirstName17018 MiddleName17018",LastName17018
+17019,17019,"FirstName17019 MiddleName17019",LastName17019
+17020,17020,"FirstName17020 MiddleName17020",LastName17020
+17021,17021,"FirstName17021 MiddleName17021",LastName17021
+17022,17022,"FirstName17022 MiddleName17022",LastName17022
+17023,17023,"FirstName17023 MiddleName17023",LastName17023
+17024,17024,"FirstName17024 MiddleName17024",LastName17024
+17025,17025,"FirstName17025 MiddleName17025",LastName17025
+17026,17026,"FirstName17026 MiddleName17026",LastName17026
+17027,17027,"FirstName17027 MiddleName17027",LastName17027
+17028,17028,"FirstName17028 MiddleName17028",LastName17028
+17029,17029,"FirstName17029 MiddleName17029",LastName17029
+17030,17030,"FirstName17030 MiddleName17030",LastName17030
+17031,17031,"FirstName17031 MiddleName17031",LastName17031
+17032,17032,"FirstName17032 MiddleName17032",LastName17032
+17033,17033,"FirstName17033 MiddleName17033",LastName17033
+17034,17034,"FirstName17034 MiddleName17034",LastName17034
+17035,17035,"FirstName17035 MiddleName17035",LastName17035
+17036,17036,"FirstName17036 MiddleName17036",LastName17036
+17037,17037,"FirstName17037 MiddleName17037",LastName17037
+17038,17038,"FirstName17038 MiddleName17038",LastName17038
+17039,17039,"FirstName17039 MiddleName17039",LastName17039
+17040,17040,"FirstName17040 MiddleName17040",LastName17040
+17041,17041,"FirstName17041 MiddleName17041",LastName17041
+17042,17042,"FirstName17042 MiddleName17042",LastName17042
+17043,17043,"FirstName17043 MiddleName17043",LastName17043
+17044,17044,"FirstName17044 MiddleName17044",LastName17044
+17045,17045,"FirstName17045 MiddleName17045",LastName17045
+17046,17046,"FirstName17046 MiddleName17046",LastName17046
+17047,17047,"FirstName17047 MiddleName17047",LastName17047
+17048,17048,"FirstName17048 MiddleName17048",LastName17048
+17049,17049,"FirstName17049 MiddleName17049",LastName17049
+17050,17050,"FirstName17050 MiddleName17050",LastName17050
+17051,17051,"FirstName17051 MiddleName17051",LastName17051
+17052,17052,"FirstName17052 MiddleName17052",LastName17052
+17053,17053,"FirstName17053 MiddleName17053",LastName17053
+17054,17054,"FirstName17054 MiddleName17054",LastName17054
+17055,17055,"FirstName17055 MiddleName17055",LastName17055
+17056,17056,"FirstName17056 MiddleName17056",LastName17056
+17057,17057,"FirstName17057 MiddleName17057",LastName17057
+17058,17058,"FirstName17058 MiddleName17058",LastName17058
+17059,17059,"FirstName17059 MiddleName17059",LastName17059
+17060,17060,"FirstName17060 MiddleName17060",LastName17060
+17061,17061,"FirstName17061 MiddleName17061",LastName17061
+17062,17062,"FirstName17062 MiddleName17062",LastName17062
+17063,17063,"FirstName17063 MiddleName17063",LastName17063
+17064,17064,"FirstName17064 MiddleName17064",LastName17064
+17065,17065,"FirstName17065 MiddleName17065",LastName17065
+17066,17066,"FirstName17066 MiddleName17066",LastName17066
+17067,17067,"FirstName17067 MiddleName17067",LastName17067
+17068,17068,"FirstName17068 MiddleName17068",LastName17068
+17069,17069,"FirstName17069 MiddleName17069",LastName17069
+17070,17070,"FirstName17070 MiddleName17070",LastName17070
+17071,17071,"FirstName17071 MiddleName17071",LastName17071
+17072,17072,"FirstName17072 MiddleName17072",LastName17072
+17073,17073,"FirstName17073 MiddleName17073",LastName17073
+17074,17074,"FirstName17074 MiddleName17074",LastName17074
+17075,17075,"FirstName17075 MiddleName17075",LastName17075
+17076,17076,"FirstName17076 MiddleName17076",LastName17076
+17077,17077,"FirstName17077 MiddleName17077",LastName17077
+17078,17078,"FirstName17078 MiddleName17078",LastName17078
+17079,17079,"FirstName17079 MiddleName17079",LastName17079
+17080,17080,"FirstName17080 MiddleName17080",LastName17080
+17081,17081,"FirstName17081 MiddleName17081",LastName17081
+17082,17082,"FirstName17082 MiddleName17082",LastName17082
+17083,17083,"FirstName17083 MiddleName17083",LastName17083
+17084,17084,"FirstName17084 MiddleName17084",LastName17084
+17085,17085,"FirstName17085 MiddleName17085",LastName17085
+17086,17086,"FirstName17086 MiddleName17086",LastName17086
+17087,17087,"FirstName17087 MiddleName17087",LastName17087
+17088,17088,"FirstName17088 MiddleName17088",LastName17088
+17089,17089,"FirstName17089 MiddleName17089",LastName17089
+17090,17090,"FirstName17090 MiddleName17090",LastName17090
+17091,17091,"FirstName17091 MiddleName17091",LastName17091
+17092,17092,"FirstName17092 MiddleName17092",LastName17092
+17093,17093,"FirstName17093 MiddleName17093",LastName17093
+17094,17094,"FirstName17094 MiddleName17094",LastName17094
+17095,17095,"FirstName17095 MiddleName17095",LastName17095
+17096,17096,"FirstName17096 MiddleName17096",LastName17096
+17097,17097,"FirstName17097 MiddleName17097",LastName17097
+17098,17098,"FirstName17098 MiddleName17098",LastName17098
+17099,17099,"FirstName17099 MiddleName17099",LastName17099
+17100,17100,"FirstName17100 MiddleName17100",LastName17100
+17101,17101,"FirstName17101 MiddleName17101",LastName17101
+17102,17102,"FirstName17102 MiddleName17102",LastName17102
+17103,17103,"FirstName17103 MiddleName17103",LastName17103
+17104,17104,"FirstName17104 MiddleName17104",LastName17104
+17105,17105,"FirstName17105 MiddleName17105",LastName17105
+17106,17106,"FirstName17106 MiddleName17106",LastName17106
+17107,17107,"FirstName17107 MiddleName17107",LastName17107
+17108,17108,"FirstName17108 MiddleName17108",LastName17108
+17109,17109,"FirstName17109 MiddleName17109",LastName17109
+17110,17110,"FirstName17110 MiddleName17110",LastName17110
+17111,17111,"FirstName17111 MiddleName17111",LastName17111
+17112,17112,"FirstName17112 MiddleName17112",LastName17112
+17113,17113,"FirstName17113 MiddleName17113",LastName17113
+17114,17114,"FirstName17114 MiddleName17114",LastName17114
+17115,17115,"FirstName17115 MiddleName17115",LastName17115
+17116,17116,"FirstName17116 MiddleName17116",LastName17116
+17117,17117,"FirstName17117 MiddleName17117",LastName17117
+17118,17118,"FirstName17118 MiddleName17118",LastName17118
+17119,17119,"FirstName17119 MiddleName17119",LastName17119
+17120,17120,"FirstName17120 MiddleName17120",LastName17120
+17121,17121,"FirstName17121 MiddleName17121",LastName17121
+17122,17122,"FirstName17122 MiddleName17122",LastName17122
+17123,17123,"FirstName17123 MiddleName17123",LastName17123
+17124,17124,"FirstName17124 MiddleName17124",LastName17124
+17125,17125,"FirstName17125 MiddleName17125",LastName17125
+17126,17126,"FirstName17126 MiddleName17126",LastName17126
+17127,17127,"FirstName17127 MiddleName17127",LastName17127
+17128,17128,"FirstName17128 MiddleName17128",LastName17128
+17129,17129,"FirstName17129 MiddleName17129",LastName17129
+17130,17130,"FirstName17130 MiddleName17130",LastName17130
+17131,17131,"FirstName17131 MiddleName17131",LastName17131
+17132,17132,"FirstName17132 MiddleName17132",LastName17132
+17133,17133,"FirstName17133 MiddleName17133",LastName17133
+17134,17134,"FirstName17134 MiddleName17134",LastName17134
+17135,17135,"FirstName17135 MiddleName17135",LastName17135
+17136,17136,"FirstName17136 MiddleName17136",LastName17136
+17137,17137,"FirstName17137 MiddleName17137",LastName17137
+17138,17138,"FirstName17138 MiddleName17138",LastName17138
+17139,17139,"FirstName17139 MiddleName17139",LastName17139
+17140,17140,"FirstName17140 MiddleName17140",LastName17140
+17141,17141,"FirstName17141 MiddleName17141",LastName17141
+17142,17142,"FirstName17142 MiddleName17142",LastName17142
+17143,17143,"FirstName17143 MiddleName17143",LastName17143
+17144,17144,"FirstName17144 MiddleName17144",LastName17144
+17145,17145,"FirstName17145 MiddleName17145",LastName17145
+17146,17146,"FirstName17146 MiddleName17146",LastName17146
+17147,17147,"FirstName17147 MiddleName17147",LastName17147
+17148,17148,"FirstName17148 MiddleName17148",LastName17148
+17149,17149,"FirstName17149 MiddleName17149",LastName17149
+17150,17150,"FirstName17150 MiddleName17150",LastName17150
+17151,17151,"FirstName17151 MiddleName17151",LastName17151
+17152,17152,"FirstName17152 MiddleName17152",LastName17152
+17153,17153,"FirstName17153 MiddleName17153",LastName17153
+17154,17154,"FirstName17154 MiddleName17154",LastName17154
+17155,17155,"FirstName17155 MiddleName17155",LastName17155
+17156,17156,"FirstName17156 MiddleName17156",LastName17156
+17157,17157,"FirstName17157 MiddleName17157",LastName17157
+17158,17158,"FirstName17158 MiddleName17158",LastName17158
+17159,17159,"FirstName17159 MiddleName17159",LastName17159
+17160,17160,"FirstName17160 MiddleName17160",LastName17160
+17161,17161,"FirstName17161 MiddleName17161",LastName17161
+17162,17162,"FirstName17162 MiddleName17162",LastName17162
+17163,17163,"FirstName17163 MiddleName17163",LastName17163
+17164,17164,"FirstName17164 MiddleName17164",LastName17164
+17165,17165,"FirstName17165 MiddleName17165",LastName17165
+17166,17166,"FirstName17166 MiddleName17166",LastName17166
+17167,17167,"FirstName17167 MiddleName17167",LastName17167
+17168,17168,"FirstName17168 MiddleName17168",LastName17168
+17169,17169,"FirstName17169 MiddleName17169",LastName17169
+17170,17170,"FirstName17170 MiddleName17170",LastName17170
+17171,17171,"FirstName17171 MiddleName17171",LastName17171
+17172,17172,"FirstName17172 MiddleName17172",LastName17172
+17173,17173,"FirstName17173 MiddleName17173",LastName17173
+17174,17174,"FirstName17174 MiddleName17174",LastName17174
+17175,17175,"FirstName17175 MiddleName17175",LastName17175
+17176,17176,"FirstName17176 MiddleName17176",LastName17176
+17177,17177,"FirstName17177 MiddleName17177",LastName17177
+17178,17178,"FirstName17178 MiddleName17178",LastName17178
+17179,17179,"FirstName17179 MiddleName17179",LastName17179
+17180,17180,"FirstName17180 MiddleName17180",LastName17180
+17181,17181,"FirstName17181 MiddleName17181",LastName17181
+17182,17182,"FirstName17182 MiddleName17182",LastName17182
+17183,17183,"FirstName17183 MiddleName17183",LastName17183
+17184,17184,"FirstName17184 MiddleName17184",LastName17184
+17185,17185,"FirstName17185 MiddleName17185",LastName17185
+17186,17186,"FirstName17186 MiddleName17186",LastName17186
+17187,17187,"FirstName17187 MiddleName17187",LastName17187
+17188,17188,"FirstName17188 MiddleName17188",LastName17188
+17189,17189,"FirstName17189 MiddleName17189",LastName17189
+17190,17190,"FirstName17190 MiddleName17190",LastName17190
+17191,17191,"FirstName17191 MiddleName17191",LastName17191
+17192,17192,"FirstName17192 MiddleName17192",LastName17192
+17193,17193,"FirstName17193 MiddleName17193",LastName17193
+17194,17194,"FirstName17194 MiddleName17194",LastName17194
+17195,17195,"FirstName17195 MiddleName17195",LastName17195
+17196,17196,"FirstName17196 MiddleName17196",LastName17196
+17197,17197,"FirstName17197 MiddleName17197",LastName17197
+17198,17198,"FirstName17198 MiddleName17198",LastName17198
+17199,17199,"FirstName17199 MiddleName17199",LastName17199
+17200,17200,"FirstName17200 MiddleName17200",LastName17200
+17201,17201,"FirstName17201 MiddleName17201",LastName17201
+17202,17202,"FirstName17202 MiddleName17202",LastName17202
+17203,17203,"FirstName17203 MiddleName17203",LastName17203
+17204,17204,"FirstName17204 MiddleName17204",LastName17204
+17205,17205,"FirstName17205 MiddleName17205",LastName17205
+17206,17206,"FirstName17206 MiddleName17206",LastName17206
+17207,17207,"FirstName17207 MiddleName17207",LastName17207
+17208,17208,"FirstName17208 MiddleName17208",LastName17208
+17209,17209,"FirstName17209 MiddleName17209",LastName17209
+17210,17210,"FirstName17210 MiddleName17210",LastName17210
+17211,17211,"FirstName17211 MiddleName17211",LastName17211
+17212,17212,"FirstName17212 MiddleName17212",LastName17212
+17213,17213,"FirstName17213 MiddleName17213",LastName17213
+17214,17214,"FirstName17214 MiddleName17214",LastName17214
+17215,17215,"FirstName17215 MiddleName17215",LastName17215
+17216,17216,"FirstName17216 MiddleName17216",LastName17216
+17217,17217,"FirstName17217 MiddleName17217",LastName17217
+17218,17218,"FirstName17218 MiddleName17218",LastName17218
+17219,17219,"FirstName17219 MiddleName17219",LastName17219
+17220,17220,"FirstName17220 MiddleName17220",LastName17220
+17221,17221,"FirstName17221 MiddleName17221",LastName17221
+17222,17222,"FirstName17222 MiddleName17222",LastName17222
+17223,17223,"FirstName17223 MiddleName17223",LastName17223
+17224,17224,"FirstName17224 MiddleName17224",LastName17224
+17225,17225,"FirstName17225 MiddleName17225",LastName17225
+17226,17226,"FirstName17226 MiddleName17226",LastName17226
+17227,17227,"FirstName17227 MiddleName17227",LastName17227
+17228,17228,"FirstName17228 MiddleName17228",LastName17228
+17229,17229,"FirstName17229 MiddleName17229",LastName17229
+17230,17230,"FirstName17230 MiddleName17230",LastName17230
+17231,17231,"FirstName17231 MiddleName17231",LastName17231
+17232,17232,"FirstName17232 MiddleName17232",LastName17232
+17233,17233,"FirstName17233 MiddleName17233",LastName17233
+17234,17234,"FirstName17234 MiddleName17234",LastName17234
+17235,17235,"FirstName17235 MiddleName17235",LastName17235
+17236,17236,"FirstName17236 MiddleName17236",LastName17236
+17237,17237,"FirstName17237 MiddleName17237",LastName17237
+17238,17238,"FirstName17238 MiddleName17238",LastName17238
+17239,17239,"FirstName17239 MiddleName17239",LastName17239
+17240,17240,"FirstName17240 MiddleName17240",LastName17240
+17241,17241,"FirstName17241 MiddleName17241",LastName17241
+17242,17242,"FirstName17242 MiddleName17242",LastName17242
+17243,17243,"FirstName17243 MiddleName17243",LastName17243
+17244,17244,"FirstName17244 MiddleName17244",LastName17244
+17245,17245,"FirstName17245 MiddleName17245",LastName17245
+17246,17246,"FirstName17246 MiddleName17246",LastName17246
+17247,17247,"FirstName17247 MiddleName17247",LastName17247
+17248,17248,"FirstName17248 MiddleName17248",LastName17248
+17249,17249,"FirstName17249 MiddleName17249",LastName17249
+17250,17250,"FirstName17250 MiddleName17250",LastName17250
+17251,17251,"FirstName17251 MiddleName17251",LastName17251
+17252,17252,"FirstName17252 MiddleName17252",LastName17252
+17253,17253,"FirstName17253 MiddleName17253",LastName17253
+17254,17254,"FirstName17254 MiddleName17254",LastName17254
+17255,17255,"FirstName17255 MiddleName17255",LastName17255
+17256,17256,"FirstName17256 MiddleName17256",LastName17256
+17257,17257,"FirstName17257 MiddleName17257",LastName17257
+17258,17258,"FirstName17258 MiddleName17258",LastName17258
+17259,17259,"FirstName17259 MiddleName17259",LastName17259
+17260,17260,"FirstName17260 MiddleName17260",LastName17260
+17261,17261,"FirstName17261 MiddleName17261",LastName17261
+17262,17262,"FirstName17262 MiddleName17262",LastName17262
+17263,17263,"FirstName17263 MiddleName17263",LastName17263
+17264,17264,"FirstName17264 MiddleName17264",LastName17264
+17265,17265,"FirstName17265 MiddleName17265",LastName17265
+17266,17266,"FirstName17266 MiddleName17266",LastName17266
+17267,17267,"FirstName17267 MiddleName17267",LastName17267
+17268,17268,"FirstName17268 MiddleName17268",LastName17268
+17269,17269,"FirstName17269 MiddleName17269",LastName17269
+17270,17270,"FirstName17270 MiddleName17270",LastName17270
+17271,17271,"FirstName17271 MiddleName17271",LastName17271
+17272,17272,"FirstName17272 MiddleName17272",LastName17272
+17273,17273,"FirstName17273 MiddleName17273",LastName17273
+17274,17274,"FirstName17274 MiddleName17274",LastName17274
+17275,17275,"FirstName17275 MiddleName17275",LastName17275
+17276,17276,"FirstName17276 MiddleName17276",LastName17276
+17277,17277,"FirstName17277 MiddleName17277",LastName17277
+17278,17278,"FirstName17278 MiddleName17278",LastName17278
+17279,17279,"FirstName17279 MiddleName17279",LastName17279
+17280,17280,"FirstName17280 MiddleName17280",LastName17280
+17281,17281,"FirstName17281 MiddleName17281",LastName17281
+17282,17282,"FirstName17282 MiddleName17282",LastName17282
+17283,17283,"FirstName17283 MiddleName17283",LastName17283
+17284,17284,"FirstName17284 MiddleName17284",LastName17284
+17285,17285,"FirstName17285 MiddleName17285",LastName17285
+17286,17286,"FirstName17286 MiddleName17286",LastName17286
+17287,17287,"FirstName17287 MiddleName17287",LastName17287
+17288,17288,"FirstName17288 MiddleName17288",LastName17288
+17289,17289,"FirstName17289 MiddleName17289",LastName17289
+17290,17290,"FirstName17290 MiddleName17290",LastName17290
+17291,17291,"FirstName17291 MiddleName17291",LastName17291
+17292,17292,"FirstName17292 MiddleName17292",LastName17292
+17293,17293,"FirstName17293 MiddleName17293",LastName17293
+17294,17294,"FirstName17294 MiddleName17294",LastName17294
+17295,17295,"FirstName17295 MiddleName17295",LastName17295
+17296,17296,"FirstName17296 MiddleName17296",LastName17296
+17297,17297,"FirstName17297 MiddleName17297",LastName17297
+17298,17298,"FirstName17298 MiddleName17298",LastName17298
+17299,17299,"FirstName17299 MiddleName17299",LastName17299
+17300,17300,"FirstName17300 MiddleName17300",LastName17300
+17301,17301,"FirstName17301 MiddleName17301",LastName17301
+17302,17302,"FirstName17302 MiddleName17302",LastName17302
+17303,17303,"FirstName17303 MiddleName17303",LastName17303
+17304,17304,"FirstName17304 MiddleName17304",LastName17304
+17305,17305,"FirstName17305 MiddleName17305",LastName17305
+17306,17306,"FirstName17306 MiddleName17306",LastName17306
+17307,17307,"FirstName17307 MiddleName17307",LastName17307
+17308,17308,"FirstName17308 MiddleName17308",LastName17308
+17309,17309,"FirstName17309 MiddleName17309",LastName17309
+17310,17310,"FirstName17310 MiddleName17310",LastName17310
+17311,17311,"FirstName17311 MiddleName17311",LastName17311
+17312,17312,"FirstName17312 MiddleName17312",LastName17312
+17313,17313,"FirstName17313 MiddleName17313",LastName17313
+17314,17314,"FirstName17314 MiddleName17314",LastName17314
+17315,17315,"FirstName17315 MiddleName17315",LastName17315
+17316,17316,"FirstName17316 MiddleName17316",LastName17316
+17317,17317,"FirstName17317 MiddleName17317",LastName17317
+17318,17318,"FirstName17318 MiddleName17318",LastName17318
+17319,17319,"FirstName17319 MiddleName17319",LastName17319
+17320,17320,"FirstName17320 MiddleName17320",LastName17320
+17321,17321,"FirstName17321 MiddleName17321",LastName17321
+17322,17322,"FirstName17322 MiddleName17322",LastName17322
+17323,17323,"FirstName17323 MiddleName17323",LastName17323
+17324,17324,"FirstName17324 MiddleName17324",LastName17324
+17325,17325,"FirstName17325 MiddleName17325",LastName17325
+17326,17326,"FirstName17326 MiddleName17326",LastName17326
+17327,17327,"FirstName17327 MiddleName17327",LastName17327
+17328,17328,"FirstName17328 MiddleName17328",LastName17328
+17329,17329,"FirstName17329 MiddleName17329",LastName17329
+17330,17330,"FirstName17330 MiddleName17330",LastName17330
+17331,17331,"FirstName17331 MiddleName17331",LastName17331
+17332,17332,"FirstName17332 MiddleName17332",LastName17332
+17333,17333,"FirstName17333 MiddleName17333",LastName17333
+17334,17334,"FirstName17334 MiddleName17334",LastName17334
+17335,17335,"FirstName17335 MiddleName17335",LastName17335
+17336,17336,"FirstName17336 MiddleName17336",LastName17336
+17337,17337,"FirstName17337 MiddleName17337",LastName17337
+17338,17338,"FirstName17338 MiddleName17338",LastName17338
+17339,17339,"FirstName17339 MiddleName17339",LastName17339
+17340,17340,"FirstName17340 MiddleName17340",LastName17340
+17341,17341,"FirstName17341 MiddleName17341",LastName17341
+17342,17342,"FirstName17342 MiddleName17342",LastName17342
+17343,17343,"FirstName17343 MiddleName17343",LastName17343
+17344,17344,"FirstName17344 MiddleName17344",LastName17344
+17345,17345,"FirstName17345 MiddleName17345",LastName17345
+17346,17346,"FirstName17346 MiddleName17346",LastName17346
+17347,17347,"FirstName17347 MiddleName17347",LastName17347
+17348,17348,"FirstName17348 MiddleName17348",LastName17348
+17349,17349,"FirstName17349 MiddleName17349",LastName17349
+17350,17350,"FirstName17350 MiddleName17350",LastName17350
+17351,17351,"FirstName17351 MiddleName17351",LastName17351
+17352,17352,"FirstName17352 MiddleName17352",LastName17352
+17353,17353,"FirstName17353 MiddleName17353",LastName17353
+17354,17354,"FirstName17354 MiddleName17354",LastName17354
+17355,17355,"FirstName17355 MiddleName17355",LastName17355
+17356,17356,"FirstName17356 MiddleName17356",LastName17356
+17357,17357,"FirstName17357 MiddleName17357",LastName17357
+17358,17358,"FirstName17358 MiddleName17358",LastName17358
+17359,17359,"FirstName17359 MiddleName17359",LastName17359
+17360,17360,"FirstName17360 MiddleName17360",LastName17360
+17361,17361,"FirstName17361 MiddleName17361",LastName17361
+17362,17362,"FirstName17362 MiddleName17362",LastName17362
+17363,17363,"FirstName17363 MiddleName17363",LastName17363
+17364,17364,"FirstName17364 MiddleName17364",LastName17364
+17365,17365,"FirstName17365 MiddleName17365",LastName17365
+17366,17366,"FirstName17366 MiddleName17366",LastName17366
+17367,17367,"FirstName17367 MiddleName17367",LastName17367
+17368,17368,"FirstName17368 MiddleName17368",LastName17368
+17369,17369,"FirstName17369 MiddleName17369",LastName17369
+17370,17370,"FirstName17370 MiddleName17370",LastName17370
+17371,17371,"FirstName17371 MiddleName17371",LastName17371
+17372,17372,"FirstName17372 MiddleName17372",LastName17372
+17373,17373,"FirstName17373 MiddleName17373",LastName17373
+17374,17374,"FirstName17374 MiddleName17374",LastName17374
+17375,17375,"FirstName17375 MiddleName17375",LastName17375
+17376,17376,"FirstName17376 MiddleName17376",LastName17376
+17377,17377,"FirstName17377 MiddleName17377",LastName17377
+17378,17378,"FirstName17378 MiddleName17378",LastName17378
+17379,17379,"FirstName17379 MiddleName17379",LastName17379
+17380,17380,"FirstName17380 MiddleName17380",LastName17380
+17381,17381,"FirstName17381 MiddleName17381",LastName17381
+17382,17382,"FirstName17382 MiddleName17382",LastName17382
+17383,17383,"FirstName17383 MiddleName17383",LastName17383
+17384,17384,"FirstName17384 MiddleName17384",LastName17384
+17385,17385,"FirstName17385 MiddleName17385",LastName17385
+17386,17386,"FirstName17386 MiddleName17386",LastName17386
+17387,17387,"FirstName17387 MiddleName17387",LastName17387
+17388,17388,"FirstName17388 MiddleName17388",LastName17388
+17389,17389,"FirstName17389 MiddleName17389",LastName17389
+17390,17390,"FirstName17390 MiddleName17390",LastName17390
+17391,17391,"FirstName17391 MiddleName17391",LastName17391
+17392,17392,"FirstName17392 MiddleName17392",LastName17392
+17393,17393,"FirstName17393 MiddleName17393",LastName17393
+17394,17394,"FirstName17394 MiddleName17394",LastName17394
+17395,17395,"FirstName17395 MiddleName17395",LastName17395
+17396,17396,"FirstName17396 MiddleName17396",LastName17396
+17397,17397,"FirstName17397 MiddleName17397",LastName17397
+17398,17398,"FirstName17398 MiddleName17398",LastName17398
+17399,17399,"FirstName17399 MiddleName17399",LastName17399
+17400,17400,"FirstName17400 MiddleName17400",LastName17400
+17401,17401,"FirstName17401 MiddleName17401",LastName17401
+17402,17402,"FirstName17402 MiddleName17402",LastName17402
+17403,17403,"FirstName17403 MiddleName17403",LastName17403
+17404,17404,"FirstName17404 MiddleName17404",LastName17404
+17405,17405,"FirstName17405 MiddleName17405",LastName17405
+17406,17406,"FirstName17406 MiddleName17406",LastName17406
+17407,17407,"FirstName17407 MiddleName17407",LastName17407
+17408,17408,"FirstName17408 MiddleName17408",LastName17408
+17409,17409,"FirstName17409 MiddleName17409",LastName17409
+17410,17410,"FirstName17410 MiddleName17410",LastName17410
+17411,17411,"FirstName17411 MiddleName17411",LastName17411
+17412,17412,"FirstName17412 MiddleName17412",LastName17412
+17413,17413,"FirstName17413 MiddleName17413",LastName17413
+17414,17414,"FirstName17414 MiddleName17414",LastName17414
+17415,17415,"FirstName17415 MiddleName17415",LastName17415
+17416,17416,"FirstName17416 MiddleName17416",LastName17416
+17417,17417,"FirstName17417 MiddleName17417",LastName17417
+17418,17418,"FirstName17418 MiddleName17418",LastName17418
+17419,17419,"FirstName17419 MiddleName17419",LastName17419
+17420,17420,"FirstName17420 MiddleName17420",LastName17420
+17421,17421,"FirstName17421 MiddleName17421",LastName17421
+17422,17422,"FirstName17422 MiddleName17422",LastName17422
+17423,17423,"FirstName17423 MiddleName17423",LastName17423
+17424,17424,"FirstName17424 MiddleName17424",LastName17424
+17425,17425,"FirstName17425 MiddleName17425",LastName17425
+17426,17426,"FirstName17426 MiddleName17426",LastName17426 +17427,17427,"FirstName17427 MiddleName17427",LastName17427 +17428,17428,"FirstName17428 MiddleName17428",LastName17428 +17429,17429,"FirstName17429 MiddleName17429",LastName17429 +17430,17430,"FirstName17430 MiddleName17430",LastName17430 +17431,17431,"FirstName17431 MiddleName17431",LastName17431 +17432,17432,"FirstName17432 MiddleName17432",LastName17432 +17433,17433,"FirstName17433 MiddleName17433",LastName17433 +17434,17434,"FirstName17434 MiddleName17434",LastName17434 +17435,17435,"FirstName17435 MiddleName17435",LastName17435 +17436,17436,"FirstName17436 MiddleName17436",LastName17436 +17437,17437,"FirstName17437 MiddleName17437",LastName17437 +17438,17438,"FirstName17438 MiddleName17438",LastName17438 +17439,17439,"FirstName17439 MiddleName17439",LastName17439 +17440,17440,"FirstName17440 MiddleName17440",LastName17440 +17441,17441,"FirstName17441 MiddleName17441",LastName17441 +17442,17442,"FirstName17442 MiddleName17442",LastName17442 +17443,17443,"FirstName17443 MiddleName17443",LastName17443 +17444,17444,"FirstName17444 MiddleName17444",LastName17444 +17445,17445,"FirstName17445 MiddleName17445",LastName17445 +17446,17446,"FirstName17446 MiddleName17446",LastName17446 +17447,17447,"FirstName17447 MiddleName17447",LastName17447 +17448,17448,"FirstName17448 MiddleName17448",LastName17448 +17449,17449,"FirstName17449 MiddleName17449",LastName17449 +17450,17450,"FirstName17450 MiddleName17450",LastName17450 +17451,17451,"FirstName17451 MiddleName17451",LastName17451 +17452,17452,"FirstName17452 MiddleName17452",LastName17452 +17453,17453,"FirstName17453 MiddleName17453",LastName17453 +17454,17454,"FirstName17454 MiddleName17454",LastName17454 +17455,17455,"FirstName17455 MiddleName17455",LastName17455 +17456,17456,"FirstName17456 MiddleName17456",LastName17456 +17457,17457,"FirstName17457 MiddleName17457",LastName17457 +17458,17458,"FirstName17458 MiddleName17458",LastName17458 
+17459,17459,"FirstName17459 MiddleName17459",LastName17459 +17460,17460,"FirstName17460 MiddleName17460",LastName17460 +17461,17461,"FirstName17461 MiddleName17461",LastName17461 +17462,17462,"FirstName17462 MiddleName17462",LastName17462 +17463,17463,"FirstName17463 MiddleName17463",LastName17463 +17464,17464,"FirstName17464 MiddleName17464",LastName17464 +17465,17465,"FirstName17465 MiddleName17465",LastName17465 +17466,17466,"FirstName17466 MiddleName17466",LastName17466 +17467,17467,"FirstName17467 MiddleName17467",LastName17467 +17468,17468,"FirstName17468 MiddleName17468",LastName17468 +17469,17469,"FirstName17469 MiddleName17469",LastName17469 +17470,17470,"FirstName17470 MiddleName17470",LastName17470 +17471,17471,"FirstName17471 MiddleName17471",LastName17471 +17472,17472,"FirstName17472 MiddleName17472",LastName17472 +17473,17473,"FirstName17473 MiddleName17473",LastName17473 +17474,17474,"FirstName17474 MiddleName17474",LastName17474 +17475,17475,"FirstName17475 MiddleName17475",LastName17475 +17476,17476,"FirstName17476 MiddleName17476",LastName17476 +17477,17477,"FirstName17477 MiddleName17477",LastName17477 +17478,17478,"FirstName17478 MiddleName17478",LastName17478 +17479,17479,"FirstName17479 MiddleName17479",LastName17479 +17480,17480,"FirstName17480 MiddleName17480",LastName17480 +17481,17481,"FirstName17481 MiddleName17481",LastName17481 +17482,17482,"FirstName17482 MiddleName17482",LastName17482 +17483,17483,"FirstName17483 MiddleName17483",LastName17483 +17484,17484,"FirstName17484 MiddleName17484",LastName17484 +17485,17485,"FirstName17485 MiddleName17485",LastName17485 +17486,17486,"FirstName17486 MiddleName17486",LastName17486 +17487,17487,"FirstName17487 MiddleName17487",LastName17487 +17488,17488,"FirstName17488 MiddleName17488",LastName17488 +17489,17489,"FirstName17489 MiddleName17489",LastName17489 +17490,17490,"FirstName17490 MiddleName17490",LastName17490 +17491,17491,"FirstName17491 MiddleName17491",LastName17491 
+17492,17492,"FirstName17492 MiddleName17492",LastName17492 +17493,17493,"FirstName17493 MiddleName17493",LastName17493 +17494,17494,"FirstName17494 MiddleName17494",LastName17494 +17495,17495,"FirstName17495 MiddleName17495",LastName17495 +17496,17496,"FirstName17496 MiddleName17496",LastName17496 +17497,17497,"FirstName17497 MiddleName17497",LastName17497 +17498,17498,"FirstName17498 MiddleName17498",LastName17498 +17499,17499,"FirstName17499 MiddleName17499",LastName17499 +17500,17500,"FirstName17500 MiddleName17500",LastName17500 +17501,17501,"FirstName17501 MiddleName17501",LastName17501 +17502,17502,"FirstName17502 MiddleName17502",LastName17502 +17503,17503,"FirstName17503 MiddleName17503",LastName17503 +17504,17504,"FirstName17504 MiddleName17504",LastName17504 +17505,17505,"FirstName17505 MiddleName17505",LastName17505 +17506,17506,"FirstName17506 MiddleName17506",LastName17506 +17507,17507,"FirstName17507 MiddleName17507",LastName17507 +17508,17508,"FirstName17508 MiddleName17508",LastName17508 +17509,17509,"FirstName17509 MiddleName17509",LastName17509 +17510,17510,"FirstName17510 MiddleName17510",LastName17510 +17511,17511,"FirstName17511 MiddleName17511",LastName17511 +17512,17512,"FirstName17512 MiddleName17512",LastName17512 +17513,17513,"FirstName17513 MiddleName17513",LastName17513 +17514,17514,"FirstName17514 MiddleName17514",LastName17514 +17515,17515,"FirstName17515 MiddleName17515",LastName17515 +17516,17516,"FirstName17516 MiddleName17516",LastName17516 +17517,17517,"FirstName17517 MiddleName17517",LastName17517 +17518,17518,"FirstName17518 MiddleName17518",LastName17518 +17519,17519,"FirstName17519 MiddleName17519",LastName17519 +17520,17520,"FirstName17520 MiddleName17520",LastName17520 +17521,17521,"FirstName17521 MiddleName17521",LastName17521 +17522,17522,"FirstName17522 MiddleName17522",LastName17522 +17523,17523,"FirstName17523 MiddleName17523",LastName17523 +17524,17524,"FirstName17524 MiddleName17524",LastName17524 
+17525,17525,"FirstName17525 MiddleName17525",LastName17525 +17526,17526,"FirstName17526 MiddleName17526",LastName17526 +17527,17527,"FirstName17527 MiddleName17527",LastName17527 +17528,17528,"FirstName17528 MiddleName17528",LastName17528 +17529,17529,"FirstName17529 MiddleName17529",LastName17529 +17530,17530,"FirstName17530 MiddleName17530",LastName17530 +17531,17531,"FirstName17531 MiddleName17531",LastName17531 +17532,17532,"FirstName17532 MiddleName17532",LastName17532 +17533,17533,"FirstName17533 MiddleName17533",LastName17533 +17534,17534,"FirstName17534 MiddleName17534",LastName17534 +17535,17535,"FirstName17535 MiddleName17535",LastName17535 +17536,17536,"FirstName17536 MiddleName17536",LastName17536 +17537,17537,"FirstName17537 MiddleName17537",LastName17537 +17538,17538,"FirstName17538 MiddleName17538",LastName17538 +17539,17539,"FirstName17539 MiddleName17539",LastName17539 +17540,17540,"FirstName17540 MiddleName17540",LastName17540 +17541,17541,"FirstName17541 MiddleName17541",LastName17541 +17542,17542,"FirstName17542 MiddleName17542",LastName17542 +17543,17543,"FirstName17543 MiddleName17543",LastName17543 +17544,17544,"FirstName17544 MiddleName17544",LastName17544 +17545,17545,"FirstName17545 MiddleName17545",LastName17545 +17546,17546,"FirstName17546 MiddleName17546",LastName17546 +17547,17547,"FirstName17547 MiddleName17547",LastName17547 +17548,17548,"FirstName17548 MiddleName17548",LastName17548 +17549,17549,"FirstName17549 MiddleName17549",LastName17549 +17550,17550,"FirstName17550 MiddleName17550",LastName17550 +17551,17551,"FirstName17551 MiddleName17551",LastName17551 +17552,17552,"FirstName17552 MiddleName17552",LastName17552 +17553,17553,"FirstName17553 MiddleName17553",LastName17553 +17554,17554,"FirstName17554 MiddleName17554",LastName17554 +17555,17555,"FirstName17555 MiddleName17555",LastName17555 +17556,17556,"FirstName17556 MiddleName17556",LastName17556 +17557,17557,"FirstName17557 MiddleName17557",LastName17557 
+17558,17558,"FirstName17558 MiddleName17558",LastName17558 +17559,17559,"FirstName17559 MiddleName17559",LastName17559 +17560,17560,"FirstName17560 MiddleName17560",LastName17560 +17561,17561,"FirstName17561 MiddleName17561",LastName17561 +17562,17562,"FirstName17562 MiddleName17562",LastName17562 +17563,17563,"FirstName17563 MiddleName17563",LastName17563 +17564,17564,"FirstName17564 MiddleName17564",LastName17564 +17565,17565,"FirstName17565 MiddleName17565",LastName17565 +17566,17566,"FirstName17566 MiddleName17566",LastName17566 +17567,17567,"FirstName17567 MiddleName17567",LastName17567 +17568,17568,"FirstName17568 MiddleName17568",LastName17568 +17569,17569,"FirstName17569 MiddleName17569",LastName17569 +17570,17570,"FirstName17570 MiddleName17570",LastName17570 +17571,17571,"FirstName17571 MiddleName17571",LastName17571 +17572,17572,"FirstName17572 MiddleName17572",LastName17572 +17573,17573,"FirstName17573 MiddleName17573",LastName17573 +17574,17574,"FirstName17574 MiddleName17574",LastName17574 +17575,17575,"FirstName17575 MiddleName17575",LastName17575 +17576,17576,"FirstName17576 MiddleName17576",LastName17576 +17577,17577,"FirstName17577 MiddleName17577",LastName17577 +17578,17578,"FirstName17578 MiddleName17578",LastName17578 +17579,17579,"FirstName17579 MiddleName17579",LastName17579 +17580,17580,"FirstName17580 MiddleName17580",LastName17580 +17581,17581,"FirstName17581 MiddleName17581",LastName17581 +17582,17582,"FirstName17582 MiddleName17582",LastName17582 +17583,17583,"FirstName17583 MiddleName17583",LastName17583 +17584,17584,"FirstName17584 MiddleName17584",LastName17584 +17585,17585,"FirstName17585 MiddleName17585",LastName17585 +17586,17586,"FirstName17586 MiddleName17586",LastName17586 +17587,17587,"FirstName17587 MiddleName17587",LastName17587 +17588,17588,"FirstName17588 MiddleName17588",LastName17588 +17589,17589,"FirstName17589 MiddleName17589",LastName17589 +17590,17590,"FirstName17590 MiddleName17590",LastName17590 
+17591,17591,"FirstName17591 MiddleName17591",LastName17591 +17592,17592,"FirstName17592 MiddleName17592",LastName17592 +17593,17593,"FirstName17593 MiddleName17593",LastName17593 +17594,17594,"FirstName17594 MiddleName17594",LastName17594 +17595,17595,"FirstName17595 MiddleName17595",LastName17595 +17596,17596,"FirstName17596 MiddleName17596",LastName17596 +17597,17597,"FirstName17597 MiddleName17597",LastName17597 +17598,17598,"FirstName17598 MiddleName17598",LastName17598 +17599,17599,"FirstName17599 MiddleName17599",LastName17599 +17600,17600,"FirstName17600 MiddleName17600",LastName17600 +17601,17601,"FirstName17601 MiddleName17601",LastName17601 +17602,17602,"FirstName17602 MiddleName17602",LastName17602 +17603,17603,"FirstName17603 MiddleName17603",LastName17603 +17604,17604,"FirstName17604 MiddleName17604",LastName17604 +17605,17605,"FirstName17605 MiddleName17605",LastName17605 +17606,17606,"FirstName17606 MiddleName17606",LastName17606 +17607,17607,"FirstName17607 MiddleName17607",LastName17607 +17608,17608,"FirstName17608 MiddleName17608",LastName17608 +17609,17609,"FirstName17609 MiddleName17609",LastName17609 +17610,17610,"FirstName17610 MiddleName17610",LastName17610 +17611,17611,"FirstName17611 MiddleName17611",LastName17611 +17612,17612,"FirstName17612 MiddleName17612",LastName17612 +17613,17613,"FirstName17613 MiddleName17613",LastName17613 +17614,17614,"FirstName17614 MiddleName17614",LastName17614 +17615,17615,"FirstName17615 MiddleName17615",LastName17615 +17616,17616,"FirstName17616 MiddleName17616",LastName17616 +17617,17617,"FirstName17617 MiddleName17617",LastName17617 +17618,17618,"FirstName17618 MiddleName17618",LastName17618 +17619,17619,"FirstName17619 MiddleName17619",LastName17619 +17620,17620,"FirstName17620 MiddleName17620",LastName17620 +17621,17621,"FirstName17621 MiddleName17621",LastName17621 +17622,17622,"FirstName17622 MiddleName17622",LastName17622 +17623,17623,"FirstName17623 MiddleName17623",LastName17623 
+17624,17624,"FirstName17624 MiddleName17624",LastName17624 +17625,17625,"FirstName17625 MiddleName17625",LastName17625 +17626,17626,"FirstName17626 MiddleName17626",LastName17626 +17627,17627,"FirstName17627 MiddleName17627",LastName17627 +17628,17628,"FirstName17628 MiddleName17628",LastName17628 +17629,17629,"FirstName17629 MiddleName17629",LastName17629 +17630,17630,"FirstName17630 MiddleName17630",LastName17630 +17631,17631,"FirstName17631 MiddleName17631",LastName17631 +17632,17632,"FirstName17632 MiddleName17632",LastName17632 +17633,17633,"FirstName17633 MiddleName17633",LastName17633 +17634,17634,"FirstName17634 MiddleName17634",LastName17634 +17635,17635,"FirstName17635 MiddleName17635",LastName17635 +17636,17636,"FirstName17636 MiddleName17636",LastName17636 +17637,17637,"FirstName17637 MiddleName17637",LastName17637 +17638,17638,"FirstName17638 MiddleName17638",LastName17638 +17639,17639,"FirstName17639 MiddleName17639",LastName17639 +17640,17640,"FirstName17640 MiddleName17640",LastName17640 +17641,17641,"FirstName17641 MiddleName17641",LastName17641 +17642,17642,"FirstName17642 MiddleName17642",LastName17642 +17643,17643,"FirstName17643 MiddleName17643",LastName17643 +17644,17644,"FirstName17644 MiddleName17644",LastName17644 +17645,17645,"FirstName17645 MiddleName17645",LastName17645 +17646,17646,"FirstName17646 MiddleName17646",LastName17646 +17647,17647,"FirstName17647 MiddleName17647",LastName17647 +17648,17648,"FirstName17648 MiddleName17648",LastName17648 +17649,17649,"FirstName17649 MiddleName17649",LastName17649 +17650,17650,"FirstName17650 MiddleName17650",LastName17650 +17651,17651,"FirstName17651 MiddleName17651",LastName17651 +17652,17652,"FirstName17652 MiddleName17652",LastName17652 +17653,17653,"FirstName17653 MiddleName17653",LastName17653 +17654,17654,"FirstName17654 MiddleName17654",LastName17654 +17655,17655,"FirstName17655 MiddleName17655",LastName17655 +17656,17656,"FirstName17656 MiddleName17656",LastName17656 
+17657,17657,"FirstName17657 MiddleName17657",LastName17657 +17658,17658,"FirstName17658 MiddleName17658",LastName17658 +17659,17659,"FirstName17659 MiddleName17659",LastName17659 +17660,17660,"FirstName17660 MiddleName17660",LastName17660 +17661,17661,"FirstName17661 MiddleName17661",LastName17661 +17662,17662,"FirstName17662 MiddleName17662",LastName17662 +17663,17663,"FirstName17663 MiddleName17663",LastName17663 +17664,17664,"FirstName17664 MiddleName17664",LastName17664 +17665,17665,"FirstName17665 MiddleName17665",LastName17665 +17666,17666,"FirstName17666 MiddleName17666",LastName17666 +17667,17667,"FirstName17667 MiddleName17667",LastName17667 +17668,17668,"FirstName17668 MiddleName17668",LastName17668 +17669,17669,"FirstName17669 MiddleName17669",LastName17669 +17670,17670,"FirstName17670 MiddleName17670",LastName17670 +17671,17671,"FirstName17671 MiddleName17671",LastName17671 +17672,17672,"FirstName17672 MiddleName17672",LastName17672 +17673,17673,"FirstName17673 MiddleName17673",LastName17673 +17674,17674,"FirstName17674 MiddleName17674",LastName17674 +17675,17675,"FirstName17675 MiddleName17675",LastName17675 +17676,17676,"FirstName17676 MiddleName17676",LastName17676 +17677,17677,"FirstName17677 MiddleName17677",LastName17677 +17678,17678,"FirstName17678 MiddleName17678",LastName17678 +17679,17679,"FirstName17679 MiddleName17679",LastName17679 +17680,17680,"FirstName17680 MiddleName17680",LastName17680 +17681,17681,"FirstName17681 MiddleName17681",LastName17681 +17682,17682,"FirstName17682 MiddleName17682",LastName17682 +17683,17683,"FirstName17683 MiddleName17683",LastName17683 +17684,17684,"FirstName17684 MiddleName17684",LastName17684 +17685,17685,"FirstName17685 MiddleName17685",LastName17685 +17686,17686,"FirstName17686 MiddleName17686",LastName17686 +17687,17687,"FirstName17687 MiddleName17687",LastName17687 +17688,17688,"FirstName17688 MiddleName17688",LastName17688 +17689,17689,"FirstName17689 MiddleName17689",LastName17689 
+17690,17690,"FirstName17690 MiddleName17690",LastName17690 +17691,17691,"FirstName17691 MiddleName17691",LastName17691 +17692,17692,"FirstName17692 MiddleName17692",LastName17692 +17693,17693,"FirstName17693 MiddleName17693",LastName17693 +17694,17694,"FirstName17694 MiddleName17694",LastName17694 +17695,17695,"FirstName17695 MiddleName17695",LastName17695 +17696,17696,"FirstName17696 MiddleName17696",LastName17696 +17697,17697,"FirstName17697 MiddleName17697",LastName17697 +17698,17698,"FirstName17698 MiddleName17698",LastName17698 +17699,17699,"FirstName17699 MiddleName17699",LastName17699 +17700,17700,"FirstName17700 MiddleName17700",LastName17700 +17701,17701,"FirstName17701 MiddleName17701",LastName17701 +17702,17702,"FirstName17702 MiddleName17702",LastName17702 +17703,17703,"FirstName17703 MiddleName17703",LastName17703 +17704,17704,"FirstName17704 MiddleName17704",LastName17704 +17705,17705,"FirstName17705 MiddleName17705",LastName17705 +17706,17706,"FirstName17706 MiddleName17706",LastName17706 +17707,17707,"FirstName17707 MiddleName17707",LastName17707 +17708,17708,"FirstName17708 MiddleName17708",LastName17708 +17709,17709,"FirstName17709 MiddleName17709",LastName17709 +17710,17710,"FirstName17710 MiddleName17710",LastName17710 +17711,17711,"FirstName17711 MiddleName17711",LastName17711 +17712,17712,"FirstName17712 MiddleName17712",LastName17712 +17713,17713,"FirstName17713 MiddleName17713",LastName17713 +17714,17714,"FirstName17714 MiddleName17714",LastName17714 +17715,17715,"FirstName17715 MiddleName17715",LastName17715 +17716,17716,"FirstName17716 MiddleName17716",LastName17716 +17717,17717,"FirstName17717 MiddleName17717",LastName17717 +17718,17718,"FirstName17718 MiddleName17718",LastName17718 +17719,17719,"FirstName17719 MiddleName17719",LastName17719 +17720,17720,"FirstName17720 MiddleName17720",LastName17720 +17721,17721,"FirstName17721 MiddleName17721",LastName17721 +17722,17722,"FirstName17722 MiddleName17722",LastName17722 
+17723,17723,"FirstName17723 MiddleName17723",LastName17723 +17724,17724,"FirstName17724 MiddleName17724",LastName17724 +17725,17725,"FirstName17725 MiddleName17725",LastName17725 +17726,17726,"FirstName17726 MiddleName17726",LastName17726 +17727,17727,"FirstName17727 MiddleName17727",LastName17727 +17728,17728,"FirstName17728 MiddleName17728",LastName17728 +17729,17729,"FirstName17729 MiddleName17729",LastName17729 +17730,17730,"FirstName17730 MiddleName17730",LastName17730 +17731,17731,"FirstName17731 MiddleName17731",LastName17731 +17732,17732,"FirstName17732 MiddleName17732",LastName17732 +17733,17733,"FirstName17733 MiddleName17733",LastName17733 +17734,17734,"FirstName17734 MiddleName17734",LastName17734 +17735,17735,"FirstName17735 MiddleName17735",LastName17735 +17736,17736,"FirstName17736 MiddleName17736",LastName17736 +17737,17737,"FirstName17737 MiddleName17737",LastName17737 +17738,17738,"FirstName17738 MiddleName17738",LastName17738 +17739,17739,"FirstName17739 MiddleName17739",LastName17739 +17740,17740,"FirstName17740 MiddleName17740",LastName17740 +17741,17741,"FirstName17741 MiddleName17741",LastName17741 +17742,17742,"FirstName17742 MiddleName17742",LastName17742 +17743,17743,"FirstName17743 MiddleName17743",LastName17743 +17744,17744,"FirstName17744 MiddleName17744",LastName17744 +17745,17745,"FirstName17745 MiddleName17745",LastName17745 +17746,17746,"FirstName17746 MiddleName17746",LastName17746 +17747,17747,"FirstName17747 MiddleName17747",LastName17747 +17748,17748,"FirstName17748 MiddleName17748",LastName17748 +17749,17749,"FirstName17749 MiddleName17749",LastName17749 +17750,17750,"FirstName17750 MiddleName17750",LastName17750 +17751,17751,"FirstName17751 MiddleName17751",LastName17751 +17752,17752,"FirstName17752 MiddleName17752",LastName17752 +17753,17753,"FirstName17753 MiddleName17753",LastName17753 +17754,17754,"FirstName17754 MiddleName17754",LastName17754 +17755,17755,"FirstName17755 MiddleName17755",LastName17755 
+17756,17756,"FirstName17756 MiddleName17756",LastName17756 +17757,17757,"FirstName17757 MiddleName17757",LastName17757 +17758,17758,"FirstName17758 MiddleName17758",LastName17758 +17759,17759,"FirstName17759 MiddleName17759",LastName17759 +17760,17760,"FirstName17760 MiddleName17760",LastName17760 +17761,17761,"FirstName17761 MiddleName17761",LastName17761 +17762,17762,"FirstName17762 MiddleName17762",LastName17762 +17763,17763,"FirstName17763 MiddleName17763",LastName17763 +17764,17764,"FirstName17764 MiddleName17764",LastName17764 +17765,17765,"FirstName17765 MiddleName17765",LastName17765 +17766,17766,"FirstName17766 MiddleName17766",LastName17766 +17767,17767,"FirstName17767 MiddleName17767",LastName17767 +17768,17768,"FirstName17768 MiddleName17768",LastName17768 +17769,17769,"FirstName17769 MiddleName17769",LastName17769 +17770,17770,"FirstName17770 MiddleName17770",LastName17770 +17771,17771,"FirstName17771 MiddleName17771",LastName17771 +17772,17772,"FirstName17772 MiddleName17772",LastName17772 +17773,17773,"FirstName17773 MiddleName17773",LastName17773 +17774,17774,"FirstName17774 MiddleName17774",LastName17774 +17775,17775,"FirstName17775 MiddleName17775",LastName17775 +17776,17776,"FirstName17776 MiddleName17776",LastName17776 +17777,17777,"FirstName17777 MiddleName17777",LastName17777 +17778,17778,"FirstName17778 MiddleName17778",LastName17778 +17779,17779,"FirstName17779 MiddleName17779",LastName17779 +17780,17780,"FirstName17780 MiddleName17780",LastName17780 +17781,17781,"FirstName17781 MiddleName17781",LastName17781 +17782,17782,"FirstName17782 MiddleName17782",LastName17782 +17783,17783,"FirstName17783 MiddleName17783",LastName17783 +17784,17784,"FirstName17784 MiddleName17784",LastName17784 +17785,17785,"FirstName17785 MiddleName17785",LastName17785 +17786,17786,"FirstName17786 MiddleName17786",LastName17786 +17787,17787,"FirstName17787 MiddleName17787",LastName17787 +17788,17788,"FirstName17788 MiddleName17788",LastName17788 
+17789,17789,"FirstName17789 MiddleName17789",LastName17789 +17790,17790,"FirstName17790 MiddleName17790",LastName17790 +17791,17791,"FirstName17791 MiddleName17791",LastName17791 +17792,17792,"FirstName17792 MiddleName17792",LastName17792 +17793,17793,"FirstName17793 MiddleName17793",LastName17793 +17794,17794,"FirstName17794 MiddleName17794",LastName17794 +17795,17795,"FirstName17795 MiddleName17795",LastName17795 +17796,17796,"FirstName17796 MiddleName17796",LastName17796 +17797,17797,"FirstName17797 MiddleName17797",LastName17797 +17798,17798,"FirstName17798 MiddleName17798",LastName17798 +17799,17799,"FirstName17799 MiddleName17799",LastName17799 +17800,17800,"FirstName17800 MiddleName17800",LastName17800 +17801,17801,"FirstName17801 MiddleName17801",LastName17801 +17802,17802,"FirstName17802 MiddleName17802",LastName17802 +17803,17803,"FirstName17803 MiddleName17803",LastName17803 +17804,17804,"FirstName17804 MiddleName17804",LastName17804 +17805,17805,"FirstName17805 MiddleName17805",LastName17805 +17806,17806,"FirstName17806 MiddleName17806",LastName17806 +17807,17807,"FirstName17807 MiddleName17807",LastName17807 +17808,17808,"FirstName17808 MiddleName17808",LastName17808 +17809,17809,"FirstName17809 MiddleName17809",LastName17809 +17810,17810,"FirstName17810 MiddleName17810",LastName17810 +17811,17811,"FirstName17811 MiddleName17811",LastName17811 +17812,17812,"FirstName17812 MiddleName17812",LastName17812 +17813,17813,"FirstName17813 MiddleName17813",LastName17813 +17814,17814,"FirstName17814 MiddleName17814",LastName17814 +17815,17815,"FirstName17815 MiddleName17815",LastName17815 +17816,17816,"FirstName17816 MiddleName17816",LastName17816 +17817,17817,"FirstName17817 MiddleName17817",LastName17817 +17818,17818,"FirstName17818 MiddleName17818",LastName17818 +17819,17819,"FirstName17819 MiddleName17819",LastName17819 +17820,17820,"FirstName17820 MiddleName17820",LastName17820 +17821,17821,"FirstName17821 MiddleName17821",LastName17821 
+17822,17822,"FirstName17822 MiddleName17822",LastName17822 +17823,17823,"FirstName17823 MiddleName17823",LastName17823 +17824,17824,"FirstName17824 MiddleName17824",LastName17824 +17825,17825,"FirstName17825 MiddleName17825",LastName17825 +17826,17826,"FirstName17826 MiddleName17826",LastName17826 +17827,17827,"FirstName17827 MiddleName17827",LastName17827 +17828,17828,"FirstName17828 MiddleName17828",LastName17828 +17829,17829,"FirstName17829 MiddleName17829",LastName17829 +17830,17830,"FirstName17830 MiddleName17830",LastName17830 +17831,17831,"FirstName17831 MiddleName17831",LastName17831 +17832,17832,"FirstName17832 MiddleName17832",LastName17832 +17833,17833,"FirstName17833 MiddleName17833",LastName17833 +17834,17834,"FirstName17834 MiddleName17834",LastName17834 +17835,17835,"FirstName17835 MiddleName17835",LastName17835 +17836,17836,"FirstName17836 MiddleName17836",LastName17836 +17837,17837,"FirstName17837 MiddleName17837",LastName17837 +17838,17838,"FirstName17838 MiddleName17838",LastName17838 +17839,17839,"FirstName17839 MiddleName17839",LastName17839 +17840,17840,"FirstName17840 MiddleName17840",LastName17840 +17841,17841,"FirstName17841 MiddleName17841",LastName17841 +17842,17842,"FirstName17842 MiddleName17842",LastName17842 +17843,17843,"FirstName17843 MiddleName17843",LastName17843 +17844,17844,"FirstName17844 MiddleName17844",LastName17844 +17845,17845,"FirstName17845 MiddleName17845",LastName17845 +17846,17846,"FirstName17846 MiddleName17846",LastName17846 +17847,17847,"FirstName17847 MiddleName17847",LastName17847 +17848,17848,"FirstName17848 MiddleName17848",LastName17848 +17849,17849,"FirstName17849 MiddleName17849",LastName17849 +17850,17850,"FirstName17850 MiddleName17850",LastName17850 +17851,17851,"FirstName17851 MiddleName17851",LastName17851 +17852,17852,"FirstName17852 MiddleName17852",LastName17852 +17853,17853,"FirstName17853 MiddleName17853",LastName17853 +17854,17854,"FirstName17854 MiddleName17854",LastName17854 
+17855,17855,"FirstName17855 MiddleName17855",LastName17855 +17856,17856,"FirstName17856 MiddleName17856",LastName17856 +17857,17857,"FirstName17857 MiddleName17857",LastName17857 +17858,17858,"FirstName17858 MiddleName17858",LastName17858 +17859,17859,"FirstName17859 MiddleName17859",LastName17859 +17860,17860,"FirstName17860 MiddleName17860",LastName17860 +17861,17861,"FirstName17861 MiddleName17861",LastName17861 +17862,17862,"FirstName17862 MiddleName17862",LastName17862 +17863,17863,"FirstName17863 MiddleName17863",LastName17863 +17864,17864,"FirstName17864 MiddleName17864",LastName17864 +17865,17865,"FirstName17865 MiddleName17865",LastName17865 +17866,17866,"FirstName17866 MiddleName17866",LastName17866 +17867,17867,"FirstName17867 MiddleName17867",LastName17867 +17868,17868,"FirstName17868 MiddleName17868",LastName17868 +17869,17869,"FirstName17869 MiddleName17869",LastName17869 +17870,17870,"FirstName17870 MiddleName17870",LastName17870 +17871,17871,"FirstName17871 MiddleName17871",LastName17871 +17872,17872,"FirstName17872 MiddleName17872",LastName17872 +17873,17873,"FirstName17873 MiddleName17873",LastName17873 +17874,17874,"FirstName17874 MiddleName17874",LastName17874 +17875,17875,"FirstName17875 MiddleName17875",LastName17875 +17876,17876,"FirstName17876 MiddleName17876",LastName17876 +17877,17877,"FirstName17877 MiddleName17877",LastName17877 +17878,17878,"FirstName17878 MiddleName17878",LastName17878 +17879,17879,"FirstName17879 MiddleName17879",LastName17879 +17880,17880,"FirstName17880 MiddleName17880",LastName17880 +17881,17881,"FirstName17881 MiddleName17881",LastName17881 +17882,17882,"FirstName17882 MiddleName17882",LastName17882 +17883,17883,"FirstName17883 MiddleName17883",LastName17883 +17884,17884,"FirstName17884 MiddleName17884",LastName17884 +17885,17885,"FirstName17885 MiddleName17885",LastName17885 +17886,17886,"FirstName17886 MiddleName17886",LastName17886 +17887,17887,"FirstName17887 MiddleName17887",LastName17887 
+17888,17888,"FirstName17888 MiddleName17888",LastName17888 +17889,17889,"FirstName17889 MiddleName17889",LastName17889 +17890,17890,"FirstName17890 MiddleName17890",LastName17890 +17891,17891,"FirstName17891 MiddleName17891",LastName17891 +17892,17892,"FirstName17892 MiddleName17892",LastName17892 +17893,17893,"FirstName17893 MiddleName17893",LastName17893 +17894,17894,"FirstName17894 MiddleName17894",LastName17894 +17895,17895,"FirstName17895 MiddleName17895",LastName17895 +17896,17896,"FirstName17896 MiddleName17896",LastName17896 +17897,17897,"FirstName17897 MiddleName17897",LastName17897 +17898,17898,"FirstName17898 MiddleName17898",LastName17898 +17899,17899,"FirstName17899 MiddleName17899",LastName17899 +17900,17900,"FirstName17900 MiddleName17900",LastName17900 +17901,17901,"FirstName17901 MiddleName17901",LastName17901 +17902,17902,"FirstName17902 MiddleName17902",LastName17902 +17903,17903,"FirstName17903 MiddleName17903",LastName17903 +17904,17904,"FirstName17904 MiddleName17904",LastName17904 +17905,17905,"FirstName17905 MiddleName17905",LastName17905 +17906,17906,"FirstName17906 MiddleName17906",LastName17906 +17907,17907,"FirstName17907 MiddleName17907",LastName17907 +17908,17908,"FirstName17908 MiddleName17908",LastName17908 +17909,17909,"FirstName17909 MiddleName17909",LastName17909 +17910,17910,"FirstName17910 MiddleName17910",LastName17910 +17911,17911,"FirstName17911 MiddleName17911",LastName17911 +17912,17912,"FirstName17912 MiddleName17912",LastName17912 +17913,17913,"FirstName17913 MiddleName17913",LastName17913 +17914,17914,"FirstName17914 MiddleName17914",LastName17914 +17915,17915,"FirstName17915 MiddleName17915",LastName17915 +17916,17916,"FirstName17916 MiddleName17916",LastName17916 +17917,17917,"FirstName17917 MiddleName17917",LastName17917 +17918,17918,"FirstName17918 MiddleName17918",LastName17918 +17919,17919,"FirstName17919 MiddleName17919",LastName17919 +17920,17920,"FirstName17920 MiddleName17920",LastName17920 
+17921,17921,"FirstName17921 MiddleName17921",LastName17921 +17922,17922,"FirstName17922 MiddleName17922",LastName17922 +17923,17923,"FirstName17923 MiddleName17923",LastName17923 +17924,17924,"FirstName17924 MiddleName17924",LastName17924 +17925,17925,"FirstName17925 MiddleName17925",LastName17925 +17926,17926,"FirstName17926 MiddleName17926",LastName17926 +17927,17927,"FirstName17927 MiddleName17927",LastName17927 +17928,17928,"FirstName17928 MiddleName17928",LastName17928 +17929,17929,"FirstName17929 MiddleName17929",LastName17929 +17930,17930,"FirstName17930 MiddleName17930",LastName17930 +17931,17931,"FirstName17931 MiddleName17931",LastName17931 +17932,17932,"FirstName17932 MiddleName17932",LastName17932 +17933,17933,"FirstName17933 MiddleName17933",LastName17933 +17934,17934,"FirstName17934 MiddleName17934",LastName17934 +17935,17935,"FirstName17935 MiddleName17935",LastName17935 +17936,17936,"FirstName17936 MiddleName17936",LastName17936 +17937,17937,"FirstName17937 MiddleName17937",LastName17937 +17938,17938,"FirstName17938 MiddleName17938",LastName17938 +17939,17939,"FirstName17939 MiddleName17939",LastName17939 +17940,17940,"FirstName17940 MiddleName17940",LastName17940 +17941,17941,"FirstName17941 MiddleName17941",LastName17941 +17942,17942,"FirstName17942 MiddleName17942",LastName17942 +17943,17943,"FirstName17943 MiddleName17943",LastName17943 +17944,17944,"FirstName17944 MiddleName17944",LastName17944 +17945,17945,"FirstName17945 MiddleName17945",LastName17945 +17946,17946,"FirstName17946 MiddleName17946",LastName17946 +17947,17947,"FirstName17947 MiddleName17947",LastName17947 +17948,17948,"FirstName17948 MiddleName17948",LastName17948 +17949,17949,"FirstName17949 MiddleName17949",LastName17949 +17950,17950,"FirstName17950 MiddleName17950",LastName17950 +17951,17951,"FirstName17951 MiddleName17951",LastName17951 +17952,17952,"FirstName17952 MiddleName17952",LastName17952 +17953,17953,"FirstName17953 MiddleName17953",LastName17953 
+17954,17954,"FirstName17954 MiddleName17954",LastName17954 +17955,17955,"FirstName17955 MiddleName17955",LastName17955 +17956,17956,"FirstName17956 MiddleName17956",LastName17956 +17957,17957,"FirstName17957 MiddleName17957",LastName17957 +17958,17958,"FirstName17958 MiddleName17958",LastName17958 +17959,17959,"FirstName17959 MiddleName17959",LastName17959 +17960,17960,"FirstName17960 MiddleName17960",LastName17960 +17961,17961,"FirstName17961 MiddleName17961",LastName17961 +17962,17962,"FirstName17962 MiddleName17962",LastName17962 +17963,17963,"FirstName17963 MiddleName17963",LastName17963 +17964,17964,"FirstName17964 MiddleName17964",LastName17964 +17965,17965,"FirstName17965 MiddleName17965",LastName17965 +17966,17966,"FirstName17966 MiddleName17966",LastName17966 +17967,17967,"FirstName17967 MiddleName17967",LastName17967 +17968,17968,"FirstName17968 MiddleName17968",LastName17968 +17969,17969,"FirstName17969 MiddleName17969",LastName17969 +17970,17970,"FirstName17970 MiddleName17970",LastName17970 +17971,17971,"FirstName17971 MiddleName17971",LastName17971 +17972,17972,"FirstName17972 MiddleName17972",LastName17972 +17973,17973,"FirstName17973 MiddleName17973",LastName17973 +17974,17974,"FirstName17974 MiddleName17974",LastName17974 +17975,17975,"FirstName17975 MiddleName17975",LastName17975 +17976,17976,"FirstName17976 MiddleName17976",LastName17976 +17977,17977,"FirstName17977 MiddleName17977",LastName17977 +17978,17978,"FirstName17978 MiddleName17978",LastName17978 +17979,17979,"FirstName17979 MiddleName17979",LastName17979 +17980,17980,"FirstName17980 MiddleName17980",LastName17980 +17981,17981,"FirstName17981 MiddleName17981",LastName17981 +17982,17982,"FirstName17982 MiddleName17982",LastName17982 +17983,17983,"FirstName17983 MiddleName17983",LastName17983 +17984,17984,"FirstName17984 MiddleName17984",LastName17984 +17985,17985,"FirstName17985 MiddleName17985",LastName17985 +17986,17986,"FirstName17986 MiddleName17986",LastName17986 
+17987,17987,"FirstName17987 MiddleName17987",LastName17987 +17988,17988,"FirstName17988 MiddleName17988",LastName17988 +17989,17989,"FirstName17989 MiddleName17989",LastName17989 +17990,17990,"FirstName17990 MiddleName17990",LastName17990 +17991,17991,"FirstName17991 MiddleName17991",LastName17991 +17992,17992,"FirstName17992 MiddleName17992",LastName17992 +17993,17993,"FirstName17993 MiddleName17993",LastName17993 +17994,17994,"FirstName17994 MiddleName17994",LastName17994 +17995,17995,"FirstName17995 MiddleName17995",LastName17995 +17996,17996,"FirstName17996 MiddleName17996",LastName17996 +17997,17997,"FirstName17997 MiddleName17997",LastName17997 +17998,17998,"FirstName17998 MiddleName17998",LastName17998 +17999,17999,"FirstName17999 MiddleName17999",LastName17999 +18000,18000,"FirstName18000 MiddleName18000",LastName18000 +18001,18001,"FirstName18001 MiddleName18001",LastName18001 +18002,18002,"FirstName18002 MiddleName18002",LastName18002 +18003,18003,"FirstName18003 MiddleName18003",LastName18003 +18004,18004,"FirstName18004 MiddleName18004",LastName18004 +18005,18005,"FirstName18005 MiddleName18005",LastName18005 +18006,18006,"FirstName18006 MiddleName18006",LastName18006 +18007,18007,"FirstName18007 MiddleName18007",LastName18007 +18008,18008,"FirstName18008 MiddleName18008",LastName18008 +18009,18009,"FirstName18009 MiddleName18009",LastName18009 +18010,18010,"FirstName18010 MiddleName18010",LastName18010 +18011,18011,"FirstName18011 MiddleName18011",LastName18011 +18012,18012,"FirstName18012 MiddleName18012",LastName18012 +18013,18013,"FirstName18013 MiddleName18013",LastName18013 +18014,18014,"FirstName18014 MiddleName18014",LastName18014 +18015,18015,"FirstName18015 MiddleName18015",LastName18015 +18016,18016,"FirstName18016 MiddleName18016",LastName18016 +18017,18017,"FirstName18017 MiddleName18017",LastName18017 +18018,18018,"FirstName18018 MiddleName18018",LastName18018 +18019,18019,"FirstName18019 MiddleName18019",LastName18019 
+18020,18020,"FirstName18020 MiddleName18020",LastName18020 +18021,18021,"FirstName18021 MiddleName18021",LastName18021 +18022,18022,"FirstName18022 MiddleName18022",LastName18022 +18023,18023,"FirstName18023 MiddleName18023",LastName18023 +18024,18024,"FirstName18024 MiddleName18024",LastName18024 +18025,18025,"FirstName18025 MiddleName18025",LastName18025 +18026,18026,"FirstName18026 MiddleName18026",LastName18026 +18027,18027,"FirstName18027 MiddleName18027",LastName18027 +18028,18028,"FirstName18028 MiddleName18028",LastName18028 +18029,18029,"FirstName18029 MiddleName18029",LastName18029 +18030,18030,"FirstName18030 MiddleName18030",LastName18030 +18031,18031,"FirstName18031 MiddleName18031",LastName18031 +18032,18032,"FirstName18032 MiddleName18032",LastName18032 +18033,18033,"FirstName18033 MiddleName18033",LastName18033 +18034,18034,"FirstName18034 MiddleName18034",LastName18034 +18035,18035,"FirstName18035 MiddleName18035",LastName18035 +18036,18036,"FirstName18036 MiddleName18036",LastName18036 +18037,18037,"FirstName18037 MiddleName18037",LastName18037 +18038,18038,"FirstName18038 MiddleName18038",LastName18038 +18039,18039,"FirstName18039 MiddleName18039",LastName18039 +18040,18040,"FirstName18040 MiddleName18040",LastName18040 +18041,18041,"FirstName18041 MiddleName18041",LastName18041 +18042,18042,"FirstName18042 MiddleName18042",LastName18042 +18043,18043,"FirstName18043 MiddleName18043",LastName18043 +18044,18044,"FirstName18044 MiddleName18044",LastName18044 +18045,18045,"FirstName18045 MiddleName18045",LastName18045 +18046,18046,"FirstName18046 MiddleName18046",LastName18046 +18047,18047,"FirstName18047 MiddleName18047",LastName18047 +18048,18048,"FirstName18048 MiddleName18048",LastName18048 +18049,18049,"FirstName18049 MiddleName18049",LastName18049 +18050,18050,"FirstName18050 MiddleName18050",LastName18050 +18051,18051,"FirstName18051 MiddleName18051",LastName18051 +18052,18052,"FirstName18052 MiddleName18052",LastName18052 
+18053,18053,"FirstName18053 MiddleName18053",LastName18053 +18054,18054,"FirstName18054 MiddleName18054",LastName18054 +18055,18055,"FirstName18055 MiddleName18055",LastName18055 +18056,18056,"FirstName18056 MiddleName18056",LastName18056 +18057,18057,"FirstName18057 MiddleName18057",LastName18057 +18058,18058,"FirstName18058 MiddleName18058",LastName18058 +18059,18059,"FirstName18059 MiddleName18059",LastName18059 +18060,18060,"FirstName18060 MiddleName18060",LastName18060 +18061,18061,"FirstName18061 MiddleName18061",LastName18061 +18062,18062,"FirstName18062 MiddleName18062",LastName18062 +18063,18063,"FirstName18063 MiddleName18063",LastName18063 +18064,18064,"FirstName18064 MiddleName18064",LastName18064 +18065,18065,"FirstName18065 MiddleName18065",LastName18065 +18066,18066,"FirstName18066 MiddleName18066",LastName18066 +18067,18067,"FirstName18067 MiddleName18067",LastName18067 +18068,18068,"FirstName18068 MiddleName18068",LastName18068 +18069,18069,"FirstName18069 MiddleName18069",LastName18069 +18070,18070,"FirstName18070 MiddleName18070",LastName18070 +18071,18071,"FirstName18071 MiddleName18071",LastName18071 +18072,18072,"FirstName18072 MiddleName18072",LastName18072 +18073,18073,"FirstName18073 MiddleName18073",LastName18073 +18074,18074,"FirstName18074 MiddleName18074",LastName18074 +18075,18075,"FirstName18075 MiddleName18075",LastName18075 +18076,18076,"FirstName18076 MiddleName18076",LastName18076 +18077,18077,"FirstName18077 MiddleName18077",LastName18077 +18078,18078,"FirstName18078 MiddleName18078",LastName18078 +18079,18079,"FirstName18079 MiddleName18079",LastName18079 +18080,18080,"FirstName18080 MiddleName18080",LastName18080 +18081,18081,"FirstName18081 MiddleName18081",LastName18081 +18082,18082,"FirstName18082 MiddleName18082",LastName18082 +18083,18083,"FirstName18083 MiddleName18083",LastName18083 +18084,18084,"FirstName18084 MiddleName18084",LastName18084 +18085,18085,"FirstName18085 MiddleName18085",LastName18085 
+18086,18086,"FirstName18086 MiddleName18086",LastName18086 +18087,18087,"FirstName18087 MiddleName18087",LastName18087 +18088,18088,"FirstName18088 MiddleName18088",LastName18088 +18089,18089,"FirstName18089 MiddleName18089",LastName18089 +18090,18090,"FirstName18090 MiddleName18090",LastName18090 +18091,18091,"FirstName18091 MiddleName18091",LastName18091 +18092,18092,"FirstName18092 MiddleName18092",LastName18092 +18093,18093,"FirstName18093 MiddleName18093",LastName18093 +18094,18094,"FirstName18094 MiddleName18094",LastName18094 +18095,18095,"FirstName18095 MiddleName18095",LastName18095 +18096,18096,"FirstName18096 MiddleName18096",LastName18096 +18097,18097,"FirstName18097 MiddleName18097",LastName18097 +18098,18098,"FirstName18098 MiddleName18098",LastName18098 +18099,18099,"FirstName18099 MiddleName18099",LastName18099 +18100,18100,"FirstName18100 MiddleName18100",LastName18100 +18101,18101,"FirstName18101 MiddleName18101",LastName18101 +18102,18102,"FirstName18102 MiddleName18102",LastName18102 +18103,18103,"FirstName18103 MiddleName18103",LastName18103 +18104,18104,"FirstName18104 MiddleName18104",LastName18104 +18105,18105,"FirstName18105 MiddleName18105",LastName18105 +18106,18106,"FirstName18106 MiddleName18106",LastName18106 +18107,18107,"FirstName18107 MiddleName18107",LastName18107 +18108,18108,"FirstName18108 MiddleName18108",LastName18108 +18109,18109,"FirstName18109 MiddleName18109",LastName18109 +18110,18110,"FirstName18110 MiddleName18110",LastName18110 +18111,18111,"FirstName18111 MiddleName18111",LastName18111 +18112,18112,"FirstName18112 MiddleName18112",LastName18112 +18113,18113,"FirstName18113 MiddleName18113",LastName18113 +18114,18114,"FirstName18114 MiddleName18114",LastName18114 +18115,18115,"FirstName18115 MiddleName18115",LastName18115 +18116,18116,"FirstName18116 MiddleName18116",LastName18116 +18117,18117,"FirstName18117 MiddleName18117",LastName18117 +18118,18118,"FirstName18118 MiddleName18118",LastName18118 
+18119,18119,"FirstName18119 MiddleName18119",LastName18119 +18120,18120,"FirstName18120 MiddleName18120",LastName18120 +18121,18121,"FirstName18121 MiddleName18121",LastName18121 +18122,18122,"FirstName18122 MiddleName18122",LastName18122 +18123,18123,"FirstName18123 MiddleName18123",LastName18123 +18124,18124,"FirstName18124 MiddleName18124",LastName18124 +18125,18125,"FirstName18125 MiddleName18125",LastName18125 +18126,18126,"FirstName18126 MiddleName18126",LastName18126 +18127,18127,"FirstName18127 MiddleName18127",LastName18127 +18128,18128,"FirstName18128 MiddleName18128",LastName18128 +18129,18129,"FirstName18129 MiddleName18129",LastName18129 +18130,18130,"FirstName18130 MiddleName18130",LastName18130 +18131,18131,"FirstName18131 MiddleName18131",LastName18131 +18132,18132,"FirstName18132 MiddleName18132",LastName18132 +18133,18133,"FirstName18133 MiddleName18133",LastName18133 +18134,18134,"FirstName18134 MiddleName18134",LastName18134 +18135,18135,"FirstName18135 MiddleName18135",LastName18135 +18136,18136,"FirstName18136 MiddleName18136",LastName18136 +18137,18137,"FirstName18137 MiddleName18137",LastName18137 +18138,18138,"FirstName18138 MiddleName18138",LastName18138 +18139,18139,"FirstName18139 MiddleName18139",LastName18139 +18140,18140,"FirstName18140 MiddleName18140",LastName18140 +18141,18141,"FirstName18141 MiddleName18141",LastName18141 +18142,18142,"FirstName18142 MiddleName18142",LastName18142 +18143,18143,"FirstName18143 MiddleName18143",LastName18143 +18144,18144,"FirstName18144 MiddleName18144",LastName18144 +18145,18145,"FirstName18145 MiddleName18145",LastName18145 +18146,18146,"FirstName18146 MiddleName18146",LastName18146 +18147,18147,"FirstName18147 MiddleName18147",LastName18147 +18148,18148,"FirstName18148 MiddleName18148",LastName18148 +18149,18149,"FirstName18149 MiddleName18149",LastName18149 +18150,18150,"FirstName18150 MiddleName18150",LastName18150 +18151,18151,"FirstName18151 MiddleName18151",LastName18151 
+18152,18152,"FirstName18152 MiddleName18152",LastName18152 +18153,18153,"FirstName18153 MiddleName18153",LastName18153 +18154,18154,"FirstName18154 MiddleName18154",LastName18154 +18155,18155,"FirstName18155 MiddleName18155",LastName18155 +18156,18156,"FirstName18156 MiddleName18156",LastName18156 +18157,18157,"FirstName18157 MiddleName18157",LastName18157 +18158,18158,"FirstName18158 MiddleName18158",LastName18158 +18159,18159,"FirstName18159 MiddleName18159",LastName18159 +18160,18160,"FirstName18160 MiddleName18160",LastName18160 +18161,18161,"FirstName18161 MiddleName18161",LastName18161 +18162,18162,"FirstName18162 MiddleName18162",LastName18162 +18163,18163,"FirstName18163 MiddleName18163",LastName18163 +18164,18164,"FirstName18164 MiddleName18164",LastName18164 +18165,18165,"FirstName18165 MiddleName18165",LastName18165 +18166,18166,"FirstName18166 MiddleName18166",LastName18166 +18167,18167,"FirstName18167 MiddleName18167",LastName18167 +18168,18168,"FirstName18168 MiddleName18168",LastName18168 +18169,18169,"FirstName18169 MiddleName18169",LastName18169 +18170,18170,"FirstName18170 MiddleName18170",LastName18170 +18171,18171,"FirstName18171 MiddleName18171",LastName18171 +18172,18172,"FirstName18172 MiddleName18172",LastName18172 +18173,18173,"FirstName18173 MiddleName18173",LastName18173 +18174,18174,"FirstName18174 MiddleName18174",LastName18174 +18175,18175,"FirstName18175 MiddleName18175",LastName18175 +18176,18176,"FirstName18176 MiddleName18176",LastName18176 +18177,18177,"FirstName18177 MiddleName18177",LastName18177 +18178,18178,"FirstName18178 MiddleName18178",LastName18178 +18179,18179,"FirstName18179 MiddleName18179",LastName18179 +18180,18180,"FirstName18180 MiddleName18180",LastName18180 +18181,18181,"FirstName18181 MiddleName18181",LastName18181 +18182,18182,"FirstName18182 MiddleName18182",LastName18182 +18183,18183,"FirstName18183 MiddleName18183",LastName18183 +18184,18184,"FirstName18184 MiddleName18184",LastName18184 
+18185,18185,"FirstName18185 MiddleName18185",LastName18185 +18186,18186,"FirstName18186 MiddleName18186",LastName18186 +18187,18187,"FirstName18187 MiddleName18187",LastName18187 +18188,18188,"FirstName18188 MiddleName18188",LastName18188 +18189,18189,"FirstName18189 MiddleName18189",LastName18189 +18190,18190,"FirstName18190 MiddleName18190",LastName18190 +18191,18191,"FirstName18191 MiddleName18191",LastName18191 +18192,18192,"FirstName18192 MiddleName18192",LastName18192 +18193,18193,"FirstName18193 MiddleName18193",LastName18193 +18194,18194,"FirstName18194 MiddleName18194",LastName18194 +18195,18195,"FirstName18195 MiddleName18195",LastName18195 +18196,18196,"FirstName18196 MiddleName18196",LastName18196 +18197,18197,"FirstName18197 MiddleName18197",LastName18197 +18198,18198,"FirstName18198 MiddleName18198",LastName18198 +18199,18199,"FirstName18199 MiddleName18199",LastName18199 +18200,18200,"FirstName18200 MiddleName18200",LastName18200 +18201,18201,"FirstName18201 MiddleName18201",LastName18201 +18202,18202,"FirstName18202 MiddleName18202",LastName18202 +18203,18203,"FirstName18203 MiddleName18203",LastName18203 +18204,18204,"FirstName18204 MiddleName18204",LastName18204 +18205,18205,"FirstName18205 MiddleName18205",LastName18205 +18206,18206,"FirstName18206 MiddleName18206",LastName18206 +18207,18207,"FirstName18207 MiddleName18207",LastName18207 +18208,18208,"FirstName18208 MiddleName18208",LastName18208 +18209,18209,"FirstName18209 MiddleName18209",LastName18209 +18210,18210,"FirstName18210 MiddleName18210",LastName18210 +18211,18211,"FirstName18211 MiddleName18211",LastName18211 +18212,18212,"FirstName18212 MiddleName18212",LastName18212 +18213,18213,"FirstName18213 MiddleName18213",LastName18213 +18214,18214,"FirstName18214 MiddleName18214",LastName18214 +18215,18215,"FirstName18215 MiddleName18215",LastName18215 +18216,18216,"FirstName18216 MiddleName18216",LastName18216 +18217,18217,"FirstName18217 MiddleName18217",LastName18217 
+18218,18218,"FirstName18218 MiddleName18218",LastName18218 +18219,18219,"FirstName18219 MiddleName18219",LastName18219 +18220,18220,"FirstName18220 MiddleName18220",LastName18220 +18221,18221,"FirstName18221 MiddleName18221",LastName18221 +18222,18222,"FirstName18222 MiddleName18222",LastName18222 +18223,18223,"FirstName18223 MiddleName18223",LastName18223 +18224,18224,"FirstName18224 MiddleName18224",LastName18224 +18225,18225,"FirstName18225 MiddleName18225",LastName18225 +18226,18226,"FirstName18226 MiddleName18226",LastName18226 +18227,18227,"FirstName18227 MiddleName18227",LastName18227 +18228,18228,"FirstName18228 MiddleName18228",LastName18228 +18229,18229,"FirstName18229 MiddleName18229",LastName18229 +18230,18230,"FirstName18230 MiddleName18230",LastName18230 +18231,18231,"FirstName18231 MiddleName18231",LastName18231 +18232,18232,"FirstName18232 MiddleName18232",LastName18232 +18233,18233,"FirstName18233 MiddleName18233",LastName18233 +18234,18234,"FirstName18234 MiddleName18234",LastName18234 +18235,18235,"FirstName18235 MiddleName18235",LastName18235 +18236,18236,"FirstName18236 MiddleName18236",LastName18236 +18237,18237,"FirstName18237 MiddleName18237",LastName18237 +18238,18238,"FirstName18238 MiddleName18238",LastName18238 +18239,18239,"FirstName18239 MiddleName18239",LastName18239 +18240,18240,"FirstName18240 MiddleName18240",LastName18240 +18241,18241,"FirstName18241 MiddleName18241",LastName18241 +18242,18242,"FirstName18242 MiddleName18242",LastName18242 +18243,18243,"FirstName18243 MiddleName18243",LastName18243 +18244,18244,"FirstName18244 MiddleName18244",LastName18244 +18245,18245,"FirstName18245 MiddleName18245",LastName18245 +18246,18246,"FirstName18246 MiddleName18246",LastName18246 +18247,18247,"FirstName18247 MiddleName18247",LastName18247 +18248,18248,"FirstName18248 MiddleName18248",LastName18248 +18249,18249,"FirstName18249 MiddleName18249",LastName18249 +18250,18250,"FirstName18250 MiddleName18250",LastName18250 
+18251,18251,"FirstName18251 MiddleName18251",LastName18251 +18252,18252,"FirstName18252 MiddleName18252",LastName18252 +18253,18253,"FirstName18253 MiddleName18253",LastName18253 +18254,18254,"FirstName18254 MiddleName18254",LastName18254 +18255,18255,"FirstName18255 MiddleName18255",LastName18255 +18256,18256,"FirstName18256 MiddleName18256",LastName18256 +18257,18257,"FirstName18257 MiddleName18257",LastName18257 +18258,18258,"FirstName18258 MiddleName18258",LastName18258 +18259,18259,"FirstName18259 MiddleName18259",LastName18259 +18260,18260,"FirstName18260 MiddleName18260",LastName18260 +18261,18261,"FirstName18261 MiddleName18261",LastName18261 +18262,18262,"FirstName18262 MiddleName18262",LastName18262 +18263,18263,"FirstName18263 MiddleName18263",LastName18263 +18264,18264,"FirstName18264 MiddleName18264",LastName18264 +18265,18265,"FirstName18265 MiddleName18265",LastName18265 +18266,18266,"FirstName18266 MiddleName18266",LastName18266 +18267,18267,"FirstName18267 MiddleName18267",LastName18267 +18268,18268,"FirstName18268 MiddleName18268",LastName18268 +18269,18269,"FirstName18269 MiddleName18269",LastName18269 +18270,18270,"FirstName18270 MiddleName18270",LastName18270 +18271,18271,"FirstName18271 MiddleName18271",LastName18271 +18272,18272,"FirstName18272 MiddleName18272",LastName18272 +18273,18273,"FirstName18273 MiddleName18273",LastName18273 +18274,18274,"FirstName18274 MiddleName18274",LastName18274 +18275,18275,"FirstName18275 MiddleName18275",LastName18275 +18276,18276,"FirstName18276 MiddleName18276",LastName18276 +18277,18277,"FirstName18277 MiddleName18277",LastName18277 +18278,18278,"FirstName18278 MiddleName18278",LastName18278 +18279,18279,"FirstName18279 MiddleName18279",LastName18279 +18280,18280,"FirstName18280 MiddleName18280",LastName18280 +18281,18281,"FirstName18281 MiddleName18281",LastName18281 +18282,18282,"FirstName18282 MiddleName18282",LastName18282 +18283,18283,"FirstName18283 MiddleName18283",LastName18283 
+18284,18284,"FirstName18284 MiddleName18284",LastName18284 +18285,18285,"FirstName18285 MiddleName18285",LastName18285 +18286,18286,"FirstName18286 MiddleName18286",LastName18286 +18287,18287,"FirstName18287 MiddleName18287",LastName18287 +18288,18288,"FirstName18288 MiddleName18288",LastName18288 +18289,18289,"FirstName18289 MiddleName18289",LastName18289 +18290,18290,"FirstName18290 MiddleName18290",LastName18290 +18291,18291,"FirstName18291 MiddleName18291",LastName18291 +18292,18292,"FirstName18292 MiddleName18292",LastName18292 +18293,18293,"FirstName18293 MiddleName18293",LastName18293 +18294,18294,"FirstName18294 MiddleName18294",LastName18294 +18295,18295,"FirstName18295 MiddleName18295",LastName18295 +18296,18296,"FirstName18296 MiddleName18296",LastName18296 +18297,18297,"FirstName18297 MiddleName18297",LastName18297 +18298,18298,"FirstName18298 MiddleName18298",LastName18298 +18299,18299,"FirstName18299 MiddleName18299",LastName18299 +18300,18300,"FirstName18300 MiddleName18300",LastName18300 +18301,18301,"FirstName18301 MiddleName18301",LastName18301 +18302,18302,"FirstName18302 MiddleName18302",LastName18302 +18303,18303,"FirstName18303 MiddleName18303",LastName18303 +18304,18304,"FirstName18304 MiddleName18304",LastName18304 +18305,18305,"FirstName18305 MiddleName18305",LastName18305 +18306,18306,"FirstName18306 MiddleName18306",LastName18306 +18307,18307,"FirstName18307 MiddleName18307",LastName18307 +18308,18308,"FirstName18308 MiddleName18308",LastName18308 +18309,18309,"FirstName18309 MiddleName18309",LastName18309 +18310,18310,"FirstName18310 MiddleName18310",LastName18310 +18311,18311,"FirstName18311 MiddleName18311",LastName18311 +18312,18312,"FirstName18312 MiddleName18312",LastName18312 +18313,18313,"FirstName18313 MiddleName18313",LastName18313 +18314,18314,"FirstName18314 MiddleName18314",LastName18314 +18315,18315,"FirstName18315 MiddleName18315",LastName18315 +18316,18316,"FirstName18316 MiddleName18316",LastName18316 
+18317,18317,"FirstName18317 MiddleName18317",LastName18317 +18318,18318,"FirstName18318 MiddleName18318",LastName18318 +18319,18319,"FirstName18319 MiddleName18319",LastName18319 +18320,18320,"FirstName18320 MiddleName18320",LastName18320 +18321,18321,"FirstName18321 MiddleName18321",LastName18321 +18322,18322,"FirstName18322 MiddleName18322",LastName18322 +18323,18323,"FirstName18323 MiddleName18323",LastName18323 +18324,18324,"FirstName18324 MiddleName18324",LastName18324 +18325,18325,"FirstName18325 MiddleName18325",LastName18325 +18326,18326,"FirstName18326 MiddleName18326",LastName18326 +18327,18327,"FirstName18327 MiddleName18327",LastName18327 +18328,18328,"FirstName18328 MiddleName18328",LastName18328 +18329,18329,"FirstName18329 MiddleName18329",LastName18329 +18330,18330,"FirstName18330 MiddleName18330",LastName18330 +18331,18331,"FirstName18331 MiddleName18331",LastName18331 +18332,18332,"FirstName18332 MiddleName18332",LastName18332 +18333,18333,"FirstName18333 MiddleName18333",LastName18333 +18334,18334,"FirstName18334 MiddleName18334",LastName18334 +18335,18335,"FirstName18335 MiddleName18335",LastName18335 +18336,18336,"FirstName18336 MiddleName18336",LastName18336 +18337,18337,"FirstName18337 MiddleName18337",LastName18337 +18338,18338,"FirstName18338 MiddleName18338",LastName18338 +18339,18339,"FirstName18339 MiddleName18339",LastName18339 +18340,18340,"FirstName18340 MiddleName18340",LastName18340 +18341,18341,"FirstName18341 MiddleName18341",LastName18341 +18342,18342,"FirstName18342 MiddleName18342",LastName18342 +18343,18343,"FirstName18343 MiddleName18343",LastName18343 +18344,18344,"FirstName18344 MiddleName18344",LastName18344 +18345,18345,"FirstName18345 MiddleName18345",LastName18345 +18346,18346,"FirstName18346 MiddleName18346",LastName18346 +18347,18347,"FirstName18347 MiddleName18347",LastName18347 +18348,18348,"FirstName18348 MiddleName18348",LastName18348 +18349,18349,"FirstName18349 MiddleName18349",LastName18349 
+18350,18350,"FirstName18350 MiddleName18350",LastName18350 +18351,18351,"FirstName18351 MiddleName18351",LastName18351 +18352,18352,"FirstName18352 MiddleName18352",LastName18352 +18353,18353,"FirstName18353 MiddleName18353",LastName18353 +18354,18354,"FirstName18354 MiddleName18354",LastName18354 +18355,18355,"FirstName18355 MiddleName18355",LastName18355 +18356,18356,"FirstName18356 MiddleName18356",LastName18356 +18357,18357,"FirstName18357 MiddleName18357",LastName18357 +18358,18358,"FirstName18358 MiddleName18358",LastName18358 +18359,18359,"FirstName18359 MiddleName18359",LastName18359 +18360,18360,"FirstName18360 MiddleName18360",LastName18360 +18361,18361,"FirstName18361 MiddleName18361",LastName18361 +18362,18362,"FirstName18362 MiddleName18362",LastName18362 +18363,18363,"FirstName18363 MiddleName18363",LastName18363 +18364,18364,"FirstName18364 MiddleName18364",LastName18364 +18365,18365,"FirstName18365 MiddleName18365",LastName18365 +18366,18366,"FirstName18366 MiddleName18366",LastName18366 +18367,18367,"FirstName18367 MiddleName18367",LastName18367 +18368,18368,"FirstName18368 MiddleName18368",LastName18368 +18369,18369,"FirstName18369 MiddleName18369",LastName18369 +18370,18370,"FirstName18370 MiddleName18370",LastName18370 +18371,18371,"FirstName18371 MiddleName18371",LastName18371 +18372,18372,"FirstName18372 MiddleName18372",LastName18372 +18373,18373,"FirstName18373 MiddleName18373",LastName18373 +18374,18374,"FirstName18374 MiddleName18374",LastName18374 +18375,18375,"FirstName18375 MiddleName18375",LastName18375 +18376,18376,"FirstName18376 MiddleName18376",LastName18376 +18377,18377,"FirstName18377 MiddleName18377",LastName18377 +18378,18378,"FirstName18378 MiddleName18378",LastName18378 +18379,18379,"FirstName18379 MiddleName18379",LastName18379 +18380,18380,"FirstName18380 MiddleName18380",LastName18380 +18381,18381,"FirstName18381 MiddleName18381",LastName18381 +18382,18382,"FirstName18382 MiddleName18382",LastName18382 
+18383,18383,"FirstName18383 MiddleName18383",LastName18383 +18384,18384,"FirstName18384 MiddleName18384",LastName18384 +18385,18385,"FirstName18385 MiddleName18385",LastName18385 +18386,18386,"FirstName18386 MiddleName18386",LastName18386 +18387,18387,"FirstName18387 MiddleName18387",LastName18387 +18388,18388,"FirstName18388 MiddleName18388",LastName18388 +18389,18389,"FirstName18389 MiddleName18389",LastName18389 +18390,18390,"FirstName18390 MiddleName18390",LastName18390 +18391,18391,"FirstName18391 MiddleName18391",LastName18391 +18392,18392,"FirstName18392 MiddleName18392",LastName18392 +18393,18393,"FirstName18393 MiddleName18393",LastName18393 +18394,18394,"FirstName18394 MiddleName18394",LastName18394 +18395,18395,"FirstName18395 MiddleName18395",LastName18395 +18396,18396,"FirstName18396 MiddleName18396",LastName18396 +18397,18397,"FirstName18397 MiddleName18397",LastName18397 +18398,18398,"FirstName18398 MiddleName18398",LastName18398 +18399,18399,"FirstName18399 MiddleName18399",LastName18399 +18400,18400,"FirstName18400 MiddleName18400",LastName18400 +18401,18401,"FirstName18401 MiddleName18401",LastName18401 +18402,18402,"FirstName18402 MiddleName18402",LastName18402 +18403,18403,"FirstName18403 MiddleName18403",LastName18403 +18404,18404,"FirstName18404 MiddleName18404",LastName18404 +18405,18405,"FirstName18405 MiddleName18405",LastName18405 +18406,18406,"FirstName18406 MiddleName18406",LastName18406 +18407,18407,"FirstName18407 MiddleName18407",LastName18407 +18408,18408,"FirstName18408 MiddleName18408",LastName18408 +18409,18409,"FirstName18409 MiddleName18409",LastName18409 +18410,18410,"FirstName18410 MiddleName18410",LastName18410 +18411,18411,"FirstName18411 MiddleName18411",LastName18411 +18412,18412,"FirstName18412 MiddleName18412",LastName18412 +18413,18413,"FirstName18413 MiddleName18413",LastName18413 +18414,18414,"FirstName18414 MiddleName18414",LastName18414 +18415,18415,"FirstName18415 MiddleName18415",LastName18415 
+18416,18416,"FirstName18416 MiddleName18416",LastName18416 +18417,18417,"FirstName18417 MiddleName18417",LastName18417 +18418,18418,"FirstName18418 MiddleName18418",LastName18418 +18419,18419,"FirstName18419 MiddleName18419",LastName18419 +18420,18420,"FirstName18420 MiddleName18420",LastName18420 +18421,18421,"FirstName18421 MiddleName18421",LastName18421 +18422,18422,"FirstName18422 MiddleName18422",LastName18422 +18423,18423,"FirstName18423 MiddleName18423",LastName18423 +18424,18424,"FirstName18424 MiddleName18424",LastName18424 +18425,18425,"FirstName18425 MiddleName18425",LastName18425 +18426,18426,"FirstName18426 MiddleName18426",LastName18426 +18427,18427,"FirstName18427 MiddleName18427",LastName18427 +18428,18428,"FirstName18428 MiddleName18428",LastName18428 +18429,18429,"FirstName18429 MiddleName18429",LastName18429 +18430,18430,"FirstName18430 MiddleName18430",LastName18430 +18431,18431,"FirstName18431 MiddleName18431",LastName18431 +18432,18432,"FirstName18432 MiddleName18432",LastName18432 +18433,18433,"FirstName18433 MiddleName18433",LastName18433 +18434,18434,"FirstName18434 MiddleName18434",LastName18434 +18435,18435,"FirstName18435 MiddleName18435",LastName18435 +18436,18436,"FirstName18436 MiddleName18436",LastName18436 +18437,18437,"FirstName18437 MiddleName18437",LastName18437 +18438,18438,"FirstName18438 MiddleName18438",LastName18438 +18439,18439,"FirstName18439 MiddleName18439",LastName18439 +18440,18440,"FirstName18440 MiddleName18440",LastName18440 +18441,18441,"FirstName18441 MiddleName18441",LastName18441 +18442,18442,"FirstName18442 MiddleName18442",LastName18442 +18443,18443,"FirstName18443 MiddleName18443",LastName18443 +18444,18444,"FirstName18444 MiddleName18444",LastName18444 +18445,18445,"FirstName18445 MiddleName18445",LastName18445 +18446,18446,"FirstName18446 MiddleName18446",LastName18446 +18447,18447,"FirstName18447 MiddleName18447",LastName18447 +18448,18448,"FirstName18448 MiddleName18448",LastName18448 
+18449,18449,"FirstName18449 MiddleName18449",LastName18449 +18450,18450,"FirstName18450 MiddleName18450",LastName18450 +18451,18451,"FirstName18451 MiddleName18451",LastName18451 +18452,18452,"FirstName18452 MiddleName18452",LastName18452 +18453,18453,"FirstName18453 MiddleName18453",LastName18453 +18454,18454,"FirstName18454 MiddleName18454",LastName18454 +18455,18455,"FirstName18455 MiddleName18455",LastName18455 +18456,18456,"FirstName18456 MiddleName18456",LastName18456 +18457,18457,"FirstName18457 MiddleName18457",LastName18457 +18458,18458,"FirstName18458 MiddleName18458",LastName18458 +18459,18459,"FirstName18459 MiddleName18459",LastName18459 +18460,18460,"FirstName18460 MiddleName18460",LastName18460 +18461,18461,"FirstName18461 MiddleName18461",LastName18461 +18462,18462,"FirstName18462 MiddleName18462",LastName18462 +18463,18463,"FirstName18463 MiddleName18463",LastName18463 +18464,18464,"FirstName18464 MiddleName18464",LastName18464 +18465,18465,"FirstName18465 MiddleName18465",LastName18465 +18466,18466,"FirstName18466 MiddleName18466",LastName18466 +18467,18467,"FirstName18467 MiddleName18467",LastName18467 +18468,18468,"FirstName18468 MiddleName18468",LastName18468 +18469,18469,"FirstName18469 MiddleName18469",LastName18469 +18470,18470,"FirstName18470 MiddleName18470",LastName18470 +18471,18471,"FirstName18471 MiddleName18471",LastName18471 +18472,18472,"FirstName18472 MiddleName18472",LastName18472 +18473,18473,"FirstName18473 MiddleName18473",LastName18473 +18474,18474,"FirstName18474 MiddleName18474",LastName18474 +18475,18475,"FirstName18475 MiddleName18475",LastName18475 +18476,18476,"FirstName18476 MiddleName18476",LastName18476 +18477,18477,"FirstName18477 MiddleName18477",LastName18477 +18478,18478,"FirstName18478 MiddleName18478",LastName18478 +18479,18479,"FirstName18479 MiddleName18479",LastName18479 +18480,18480,"FirstName18480 MiddleName18480",LastName18480 +18481,18481,"FirstName18481 MiddleName18481",LastName18481 
+18482,18482,"FirstName18482 MiddleName18482",LastName18482 +18483,18483,"FirstName18483 MiddleName18483",LastName18483 +18484,18484,"FirstName18484 MiddleName18484",LastName18484 +18485,18485,"FirstName18485 MiddleName18485",LastName18485 +18486,18486,"FirstName18486 MiddleName18486",LastName18486 +18487,18487,"FirstName18487 MiddleName18487",LastName18487 +18488,18488,"FirstName18488 MiddleName18488",LastName18488 +18489,18489,"FirstName18489 MiddleName18489",LastName18489 +18490,18490,"FirstName18490 MiddleName18490",LastName18490 +18491,18491,"FirstName18491 MiddleName18491",LastName18491 +18492,18492,"FirstName18492 MiddleName18492",LastName18492 +18493,18493,"FirstName18493 MiddleName18493",LastName18493 +18494,18494,"FirstName18494 MiddleName18494",LastName18494 +18495,18495,"FirstName18495 MiddleName18495",LastName18495 +18496,18496,"FirstName18496 MiddleName18496",LastName18496 +18497,18497,"FirstName18497 MiddleName18497",LastName18497 +18498,18498,"FirstName18498 MiddleName18498",LastName18498 +18499,18499,"FirstName18499 MiddleName18499",LastName18499 +18500,18500,"FirstName18500 MiddleName18500",LastName18500 +18501,18501,"FirstName18501 MiddleName18501",LastName18501 +18502,18502,"FirstName18502 MiddleName18502",LastName18502 +18503,18503,"FirstName18503 MiddleName18503",LastName18503 +18504,18504,"FirstName18504 MiddleName18504",LastName18504 +18505,18505,"FirstName18505 MiddleName18505",LastName18505 +18506,18506,"FirstName18506 MiddleName18506",LastName18506 +18507,18507,"FirstName18507 MiddleName18507",LastName18507 +18508,18508,"FirstName18508 MiddleName18508",LastName18508 +18509,18509,"FirstName18509 MiddleName18509",LastName18509 +18510,18510,"FirstName18510 MiddleName18510",LastName18510 +18511,18511,"FirstName18511 MiddleName18511",LastName18511 +18512,18512,"FirstName18512 MiddleName18512",LastName18512 +18513,18513,"FirstName18513 MiddleName18513",LastName18513 +18514,18514,"FirstName18514 MiddleName18514",LastName18514 
+18515,18515,"FirstName18515 MiddleName18515",LastName18515 +18516,18516,"FirstName18516 MiddleName18516",LastName18516 +18517,18517,"FirstName18517 MiddleName18517",LastName18517 +18518,18518,"FirstName18518 MiddleName18518",LastName18518 +18519,18519,"FirstName18519 MiddleName18519",LastName18519 +18520,18520,"FirstName18520 MiddleName18520",LastName18520 +18521,18521,"FirstName18521 MiddleName18521",LastName18521 +18522,18522,"FirstName18522 MiddleName18522",LastName18522 +18523,18523,"FirstName18523 MiddleName18523",LastName18523 +18524,18524,"FirstName18524 MiddleName18524",LastName18524 +18525,18525,"FirstName18525 MiddleName18525",LastName18525 +18526,18526,"FirstName18526 MiddleName18526",LastName18526 +18527,18527,"FirstName18527 MiddleName18527",LastName18527 +18528,18528,"FirstName18528 MiddleName18528",LastName18528 +18529,18529,"FirstName18529 MiddleName18529",LastName18529 +18530,18530,"FirstName18530 MiddleName18530",LastName18530 +18531,18531,"FirstName18531 MiddleName18531",LastName18531 +18532,18532,"FirstName18532 MiddleName18532",LastName18532 +18533,18533,"FirstName18533 MiddleName18533",LastName18533 +18534,18534,"FirstName18534 MiddleName18534",LastName18534 +18535,18535,"FirstName18535 MiddleName18535",LastName18535 +18536,18536,"FirstName18536 MiddleName18536",LastName18536 +18537,18537,"FirstName18537 MiddleName18537",LastName18537 +18538,18538,"FirstName18538 MiddleName18538",LastName18538 +18539,18539,"FirstName18539 MiddleName18539",LastName18539 +18540,18540,"FirstName18540 MiddleName18540",LastName18540 +18541,18541,"FirstName18541 MiddleName18541",LastName18541 +18542,18542,"FirstName18542 MiddleName18542",LastName18542 +18543,18543,"FirstName18543 MiddleName18543",LastName18543 +18544,18544,"FirstName18544 MiddleName18544",LastName18544 +18545,18545,"FirstName18545 MiddleName18545",LastName18545 +18546,18546,"FirstName18546 MiddleName18546",LastName18546 +18547,18547,"FirstName18547 MiddleName18547",LastName18547 
+18548,18548,"FirstName18548 MiddleName18548",LastName18548 +18549,18549,"FirstName18549 MiddleName18549",LastName18549 +18550,18550,"FirstName18550 MiddleName18550",LastName18550 +18551,18551,"FirstName18551 MiddleName18551",LastName18551 +18552,18552,"FirstName18552 MiddleName18552",LastName18552 +18553,18553,"FirstName18553 MiddleName18553",LastName18553 +18554,18554,"FirstName18554 MiddleName18554",LastName18554 +18555,18555,"FirstName18555 MiddleName18555",LastName18555 +18556,18556,"FirstName18556 MiddleName18556",LastName18556 +18557,18557,"FirstName18557 MiddleName18557",LastName18557 +18558,18558,"FirstName18558 MiddleName18558",LastName18558 +18559,18559,"FirstName18559 MiddleName18559",LastName18559 +18560,18560,"FirstName18560 MiddleName18560",LastName18560 +18561,18561,"FirstName18561 MiddleName18561",LastName18561 +18562,18562,"FirstName18562 MiddleName18562",LastName18562 +18563,18563,"FirstName18563 MiddleName18563",LastName18563 +18564,18564,"FirstName18564 MiddleName18564",LastName18564 +18565,18565,"FirstName18565 MiddleName18565",LastName18565 +18566,18566,"FirstName18566 MiddleName18566",LastName18566 +18567,18567,"FirstName18567 MiddleName18567",LastName18567 +18568,18568,"FirstName18568 MiddleName18568",LastName18568 +18569,18569,"FirstName18569 MiddleName18569",LastName18569 +18570,18570,"FirstName18570 MiddleName18570",LastName18570 +18571,18571,"FirstName18571 MiddleName18571",LastName18571 +18572,18572,"FirstName18572 MiddleName18572",LastName18572 +18573,18573,"FirstName18573 MiddleName18573",LastName18573 +18574,18574,"FirstName18574 MiddleName18574",LastName18574 +18575,18575,"FirstName18575 MiddleName18575",LastName18575 +18576,18576,"FirstName18576 MiddleName18576",LastName18576 +18577,18577,"FirstName18577 MiddleName18577",LastName18577 +18578,18578,"FirstName18578 MiddleName18578",LastName18578 +18579,18579,"FirstName18579 MiddleName18579",LastName18579 +18580,18580,"FirstName18580 MiddleName18580",LastName18580 
+18581,18581,"FirstName18581 MiddleName18581",LastName18581 +18582,18582,"FirstName18582 MiddleName18582",LastName18582 +18583,18583,"FirstName18583 MiddleName18583",LastName18583 +18584,18584,"FirstName18584 MiddleName18584",LastName18584 +18585,18585,"FirstName18585 MiddleName18585",LastName18585 +18586,18586,"FirstName18586 MiddleName18586",LastName18586 +18587,18587,"FirstName18587 MiddleName18587",LastName18587 +18588,18588,"FirstName18588 MiddleName18588",LastName18588 +18589,18589,"FirstName18589 MiddleName18589",LastName18589 +18590,18590,"FirstName18590 MiddleName18590",LastName18590 +18591,18591,"FirstName18591 MiddleName18591",LastName18591 +18592,18592,"FirstName18592 MiddleName18592",LastName18592 +18593,18593,"FirstName18593 MiddleName18593",LastName18593 +18594,18594,"FirstName18594 MiddleName18594",LastName18594 +18595,18595,"FirstName18595 MiddleName18595",LastName18595 +18596,18596,"FirstName18596 MiddleName18596",LastName18596 +18597,18597,"FirstName18597 MiddleName18597",LastName18597 +18598,18598,"FirstName18598 MiddleName18598",LastName18598 +18599,18599,"FirstName18599 MiddleName18599",LastName18599 +18600,18600,"FirstName18600 MiddleName18600",LastName18600 +18601,18601,"FirstName18601 MiddleName18601",LastName18601 +18602,18602,"FirstName18602 MiddleName18602",LastName18602 +18603,18603,"FirstName18603 MiddleName18603",LastName18603 +18604,18604,"FirstName18604 MiddleName18604",LastName18604 +18605,18605,"FirstName18605 MiddleName18605",LastName18605 +18606,18606,"FirstName18606 MiddleName18606",LastName18606 +18607,18607,"FirstName18607 MiddleName18607",LastName18607 +18608,18608,"FirstName18608 MiddleName18608",LastName18608 +18609,18609,"FirstName18609 MiddleName18609",LastName18609 +18610,18610,"FirstName18610 MiddleName18610",LastName18610 +18611,18611,"FirstName18611 MiddleName18611",LastName18611 +18612,18612,"FirstName18612 MiddleName18612",LastName18612 +18613,18613,"FirstName18613 MiddleName18613",LastName18613 
+18614,18614,"FirstName18614 MiddleName18614",LastName18614 +18615,18615,"FirstName18615 MiddleName18615",LastName18615 +18616,18616,"FirstName18616 MiddleName18616",LastName18616 +18617,18617,"FirstName18617 MiddleName18617",LastName18617 +18618,18618,"FirstName18618 MiddleName18618",LastName18618 +18619,18619,"FirstName18619 MiddleName18619",LastName18619 +18620,18620,"FirstName18620 MiddleName18620",LastName18620 +18621,18621,"FirstName18621 MiddleName18621",LastName18621 +18622,18622,"FirstName18622 MiddleName18622",LastName18622 +18623,18623,"FirstName18623 MiddleName18623",LastName18623 +18624,18624,"FirstName18624 MiddleName18624",LastName18624 +18625,18625,"FirstName18625 MiddleName18625",LastName18625 +18626,18626,"FirstName18626 MiddleName18626",LastName18626 +18627,18627,"FirstName18627 MiddleName18627",LastName18627 +18628,18628,"FirstName18628 MiddleName18628",LastName18628 +18629,18629,"FirstName18629 MiddleName18629",LastName18629 +18630,18630,"FirstName18630 MiddleName18630",LastName18630 +18631,18631,"FirstName18631 MiddleName18631",LastName18631 +18632,18632,"FirstName18632 MiddleName18632",LastName18632 +18633,18633,"FirstName18633 MiddleName18633",LastName18633 +18634,18634,"FirstName18634 MiddleName18634",LastName18634 +18635,18635,"FirstName18635 MiddleName18635",LastName18635 +18636,18636,"FirstName18636 MiddleName18636",LastName18636 +18637,18637,"FirstName18637 MiddleName18637",LastName18637 +18638,18638,"FirstName18638 MiddleName18638",LastName18638 +18639,18639,"FirstName18639 MiddleName18639",LastName18639 +18640,18640,"FirstName18640 MiddleName18640",LastName18640 +18641,18641,"FirstName18641 MiddleName18641",LastName18641 +18642,18642,"FirstName18642 MiddleName18642",LastName18642 +18643,18643,"FirstName18643 MiddleName18643",LastName18643 +18644,18644,"FirstName18644 MiddleName18644",LastName18644 +18645,18645,"FirstName18645 MiddleName18645",LastName18645 +18646,18646,"FirstName18646 MiddleName18646",LastName18646 
+18647,18647,"FirstName18647 MiddleName18647",LastName18647 +18648,18648,"FirstName18648 MiddleName18648",LastName18648 +18649,18649,"FirstName18649 MiddleName18649",LastName18649 +18650,18650,"FirstName18650 MiddleName18650",LastName18650 +18651,18651,"FirstName18651 MiddleName18651",LastName18651 +18652,18652,"FirstName18652 MiddleName18652",LastName18652 +18653,18653,"FirstName18653 MiddleName18653",LastName18653 +18654,18654,"FirstName18654 MiddleName18654",LastName18654 +18655,18655,"FirstName18655 MiddleName18655",LastName18655 +18656,18656,"FirstName18656 MiddleName18656",LastName18656 +18657,18657,"FirstName18657 MiddleName18657",LastName18657 +18658,18658,"FirstName18658 MiddleName18658",LastName18658 +18659,18659,"FirstName18659 MiddleName18659",LastName18659 +18660,18660,"FirstName18660 MiddleName18660",LastName18660 +18661,18661,"FirstName18661 MiddleName18661",LastName18661 +18662,18662,"FirstName18662 MiddleName18662",LastName18662 +18663,18663,"FirstName18663 MiddleName18663",LastName18663 +18664,18664,"FirstName18664 MiddleName18664",LastName18664 +18665,18665,"FirstName18665 MiddleName18665",LastName18665 +18666,18666,"FirstName18666 MiddleName18666",LastName18666 +18667,18667,"FirstName18667 MiddleName18667",LastName18667 +18668,18668,"FirstName18668 MiddleName18668",LastName18668 +18669,18669,"FirstName18669 MiddleName18669",LastName18669 +18670,18670,"FirstName18670 MiddleName18670",LastName18670 +18671,18671,"FirstName18671 MiddleName18671",LastName18671 +18672,18672,"FirstName18672 MiddleName18672",LastName18672 +18673,18673,"FirstName18673 MiddleName18673",LastName18673 +18674,18674,"FirstName18674 MiddleName18674",LastName18674 +18675,18675,"FirstName18675 MiddleName18675",LastName18675 +18676,18676,"FirstName18676 MiddleName18676",LastName18676 +18677,18677,"FirstName18677 MiddleName18677",LastName18677 +18678,18678,"FirstName18678 MiddleName18678",LastName18678 +18679,18679,"FirstName18679 MiddleName18679",LastName18679 
+18680,18680,"FirstName18680 MiddleName18680",LastName18680 +18681,18681,"FirstName18681 MiddleName18681",LastName18681 +18682,18682,"FirstName18682 MiddleName18682",LastName18682 +18683,18683,"FirstName18683 MiddleName18683",LastName18683 +18684,18684,"FirstName18684 MiddleName18684",LastName18684 +18685,18685,"FirstName18685 MiddleName18685",LastName18685 +18686,18686,"FirstName18686 MiddleName18686",LastName18686 +18687,18687,"FirstName18687 MiddleName18687",LastName18687 +18688,18688,"FirstName18688 MiddleName18688",LastName18688 +18689,18689,"FirstName18689 MiddleName18689",LastName18689 +18690,18690,"FirstName18690 MiddleName18690",LastName18690 +18691,18691,"FirstName18691 MiddleName18691",LastName18691 +18692,18692,"FirstName18692 MiddleName18692",LastName18692 +18693,18693,"FirstName18693 MiddleName18693",LastName18693 +18694,18694,"FirstName18694 MiddleName18694",LastName18694 +18695,18695,"FirstName18695 MiddleName18695",LastName18695 +18696,18696,"FirstName18696 MiddleName18696",LastName18696 +18697,18697,"FirstName18697 MiddleName18697",LastName18697 +18698,18698,"FirstName18698 MiddleName18698",LastName18698 +18699,18699,"FirstName18699 MiddleName18699",LastName18699 +18700,18700,"FirstName18700 MiddleName18700",LastName18700 +18701,18701,"FirstName18701 MiddleName18701",LastName18701 +18702,18702,"FirstName18702 MiddleName18702",LastName18702 +18703,18703,"FirstName18703 MiddleName18703",LastName18703 +18704,18704,"FirstName18704 MiddleName18704",LastName18704 +18705,18705,"FirstName18705 MiddleName18705",LastName18705 +18706,18706,"FirstName18706 MiddleName18706",LastName18706 +18707,18707,"FirstName18707 MiddleName18707",LastName18707 +18708,18708,"FirstName18708 MiddleName18708",LastName18708 +18709,18709,"FirstName18709 MiddleName18709",LastName18709 +18710,18710,"FirstName18710 MiddleName18710",LastName18710 +18711,18711,"FirstName18711 MiddleName18711",LastName18711 +18712,18712,"FirstName18712 MiddleName18712",LastName18712 
+18713,18713,"FirstName18713 MiddleName18713",LastName18713 +18714,18714,"FirstName18714 MiddleName18714",LastName18714 +18715,18715,"FirstName18715 MiddleName18715",LastName18715 +18716,18716,"FirstName18716 MiddleName18716",LastName18716 +18717,18717,"FirstName18717 MiddleName18717",LastName18717 +18718,18718,"FirstName18718 MiddleName18718",LastName18718 +18719,18719,"FirstName18719 MiddleName18719",LastName18719 +18720,18720,"FirstName18720 MiddleName18720",LastName18720 +18721,18721,"FirstName18721 MiddleName18721",LastName18721 +18722,18722,"FirstName18722 MiddleName18722",LastName18722 +18723,18723,"FirstName18723 MiddleName18723",LastName18723 +18724,18724,"FirstName18724 MiddleName18724",LastName18724 +18725,18725,"FirstName18725 MiddleName18725",LastName18725 +18726,18726,"FirstName18726 MiddleName18726",LastName18726 +18727,18727,"FirstName18727 MiddleName18727",LastName18727 +18728,18728,"FirstName18728 MiddleName18728",LastName18728 +18729,18729,"FirstName18729 MiddleName18729",LastName18729 +18730,18730,"FirstName18730 MiddleName18730",LastName18730 +18731,18731,"FirstName18731 MiddleName18731",LastName18731 +18732,18732,"FirstName18732 MiddleName18732",LastName18732 +18733,18733,"FirstName18733 MiddleName18733",LastName18733 +18734,18734,"FirstName18734 MiddleName18734",LastName18734 +18735,18735,"FirstName18735 MiddleName18735",LastName18735 +18736,18736,"FirstName18736 MiddleName18736",LastName18736 +18737,18737,"FirstName18737 MiddleName18737",LastName18737 +18738,18738,"FirstName18738 MiddleName18738",LastName18738 +18739,18739,"FirstName18739 MiddleName18739",LastName18739 +18740,18740,"FirstName18740 MiddleName18740",LastName18740 +18741,18741,"FirstName18741 MiddleName18741",LastName18741 +18742,18742,"FirstName18742 MiddleName18742",LastName18742 +18743,18743,"FirstName18743 MiddleName18743",LastName18743 +18744,18744,"FirstName18744 MiddleName18744",LastName18744 +18745,18745,"FirstName18745 MiddleName18745",LastName18745 
+18746,18746,"FirstName18746 MiddleName18746",LastName18746 +18747,18747,"FirstName18747 MiddleName18747",LastName18747 +18748,18748,"FirstName18748 MiddleName18748",LastName18748 +18749,18749,"FirstName18749 MiddleName18749",LastName18749 +18750,18750,"FirstName18750 MiddleName18750",LastName18750 +18751,18751,"FirstName18751 MiddleName18751",LastName18751 +18752,18752,"FirstName18752 MiddleName18752",LastName18752 +18753,18753,"FirstName18753 MiddleName18753",LastName18753 +18754,18754,"FirstName18754 MiddleName18754",LastName18754 +18755,18755,"FirstName18755 MiddleName18755",LastName18755 +18756,18756,"FirstName18756 MiddleName18756",LastName18756 +18757,18757,"FirstName18757 MiddleName18757",LastName18757 +18758,18758,"FirstName18758 MiddleName18758",LastName18758 +18759,18759,"FirstName18759 MiddleName18759",LastName18759 +18760,18760,"FirstName18760 MiddleName18760",LastName18760 +18761,18761,"FirstName18761 MiddleName18761",LastName18761 +18762,18762,"FirstName18762 MiddleName18762",LastName18762 +18763,18763,"FirstName18763 MiddleName18763",LastName18763 +18764,18764,"FirstName18764 MiddleName18764",LastName18764 +18765,18765,"FirstName18765 MiddleName18765",LastName18765 +18766,18766,"FirstName18766 MiddleName18766",LastName18766 +18767,18767,"FirstName18767 MiddleName18767",LastName18767 +18768,18768,"FirstName18768 MiddleName18768",LastName18768 +18769,18769,"FirstName18769 MiddleName18769",LastName18769 +18770,18770,"FirstName18770 MiddleName18770",LastName18770 +18771,18771,"FirstName18771 MiddleName18771",LastName18771 +18772,18772,"FirstName18772 MiddleName18772",LastName18772 +18773,18773,"FirstName18773 MiddleName18773",LastName18773 +18774,18774,"FirstName18774 MiddleName18774",LastName18774 +18775,18775,"FirstName18775 MiddleName18775",LastName18775 +18776,18776,"FirstName18776 MiddleName18776",LastName18776 +18777,18777,"FirstName18777 MiddleName18777",LastName18777 +18778,18778,"FirstName18778 MiddleName18778",LastName18778 
+18779,18779,"FirstName18779 MiddleName18779",LastName18779 +18780,18780,"FirstName18780 MiddleName18780",LastName18780 +18781,18781,"FirstName18781 MiddleName18781",LastName18781 +18782,18782,"FirstName18782 MiddleName18782",LastName18782 +18783,18783,"FirstName18783 MiddleName18783",LastName18783 +18784,18784,"FirstName18784 MiddleName18784",LastName18784 +18785,18785,"FirstName18785 MiddleName18785",LastName18785 +18786,18786,"FirstName18786 MiddleName18786",LastName18786 +18787,18787,"FirstName18787 MiddleName18787",LastName18787 +18788,18788,"FirstName18788 MiddleName18788",LastName18788 +18789,18789,"FirstName18789 MiddleName18789",LastName18789 +18790,18790,"FirstName18790 MiddleName18790",LastName18790 +18791,18791,"FirstName18791 MiddleName18791",LastName18791 +18792,18792,"FirstName18792 MiddleName18792",LastName18792 +18793,18793,"FirstName18793 MiddleName18793",LastName18793 +18794,18794,"FirstName18794 MiddleName18794",LastName18794 +18795,18795,"FirstName18795 MiddleName18795",LastName18795 +18796,18796,"FirstName18796 MiddleName18796",LastName18796 +18797,18797,"FirstName18797 MiddleName18797",LastName18797 +18798,18798,"FirstName18798 MiddleName18798",LastName18798 +18799,18799,"FirstName18799 MiddleName18799",LastName18799 +18800,18800,"FirstName18800 MiddleName18800",LastName18800 +18801,18801,"FirstName18801 MiddleName18801",LastName18801 +18802,18802,"FirstName18802 MiddleName18802",LastName18802 +18803,18803,"FirstName18803 MiddleName18803",LastName18803 +18804,18804,"FirstName18804 MiddleName18804",LastName18804 +18805,18805,"FirstName18805 MiddleName18805",LastName18805 +18806,18806,"FirstName18806 MiddleName18806",LastName18806 +18807,18807,"FirstName18807 MiddleName18807",LastName18807 +18808,18808,"FirstName18808 MiddleName18808",LastName18808 +18809,18809,"FirstName18809 MiddleName18809",LastName18809 +18810,18810,"FirstName18810 MiddleName18810",LastName18810 +18811,18811,"FirstName18811 MiddleName18811",LastName18811 
+18812,18812,"FirstName18812 MiddleName18812",LastName18812 +18813,18813,"FirstName18813 MiddleName18813",LastName18813 +18814,18814,"FirstName18814 MiddleName18814",LastName18814 +18815,18815,"FirstName18815 MiddleName18815",LastName18815 +18816,18816,"FirstName18816 MiddleName18816",LastName18816 +18817,18817,"FirstName18817 MiddleName18817",LastName18817 +18818,18818,"FirstName18818 MiddleName18818",LastName18818 +18819,18819,"FirstName18819 MiddleName18819",LastName18819 +18820,18820,"FirstName18820 MiddleName18820",LastName18820 +18821,18821,"FirstName18821 MiddleName18821",LastName18821 +18822,18822,"FirstName18822 MiddleName18822",LastName18822 +18823,18823,"FirstName18823 MiddleName18823",LastName18823 +18824,18824,"FirstName18824 MiddleName18824",LastName18824 +18825,18825,"FirstName18825 MiddleName18825",LastName18825 +18826,18826,"FirstName18826 MiddleName18826",LastName18826 +18827,18827,"FirstName18827 MiddleName18827",LastName18827 +18828,18828,"FirstName18828 MiddleName18828",LastName18828 +18829,18829,"FirstName18829 MiddleName18829",LastName18829 +18830,18830,"FirstName18830 MiddleName18830",LastName18830 +18831,18831,"FirstName18831 MiddleName18831",LastName18831 +18832,18832,"FirstName18832 MiddleName18832",LastName18832 +18833,18833,"FirstName18833 MiddleName18833",LastName18833 +18834,18834,"FirstName18834 MiddleName18834",LastName18834 +18835,18835,"FirstName18835 MiddleName18835",LastName18835 +18836,18836,"FirstName18836 MiddleName18836",LastName18836 +18837,18837,"FirstName18837 MiddleName18837",LastName18837 +18838,18838,"FirstName18838 MiddleName18838",LastName18838 +18839,18839,"FirstName18839 MiddleName18839",LastName18839 +18840,18840,"FirstName18840 MiddleName18840",LastName18840 +18841,18841,"FirstName18841 MiddleName18841",LastName18841 +18842,18842,"FirstName18842 MiddleName18842",LastName18842 +18843,18843,"FirstName18843 MiddleName18843",LastName18843 +18844,18844,"FirstName18844 MiddleName18844",LastName18844 
+18845,18845,"FirstName18845 MiddleName18845",LastName18845 +18846,18846,"FirstName18846 MiddleName18846",LastName18846 +18847,18847,"FirstName18847 MiddleName18847",LastName18847 +18848,18848,"FirstName18848 MiddleName18848",LastName18848 +18849,18849,"FirstName18849 MiddleName18849",LastName18849 +18850,18850,"FirstName18850 MiddleName18850",LastName18850 +18851,18851,"FirstName18851 MiddleName18851",LastName18851 +18852,18852,"FirstName18852 MiddleName18852",LastName18852 +18853,18853,"FirstName18853 MiddleName18853",LastName18853 +18854,18854,"FirstName18854 MiddleName18854",LastName18854 +18855,18855,"FirstName18855 MiddleName18855",LastName18855 +18856,18856,"FirstName18856 MiddleName18856",LastName18856 +18857,18857,"FirstName18857 MiddleName18857",LastName18857 +18858,18858,"FirstName18858 MiddleName18858",LastName18858 +18859,18859,"FirstName18859 MiddleName18859",LastName18859 +18860,18860,"FirstName18860 MiddleName18860",LastName18860 +18861,18861,"FirstName18861 MiddleName18861",LastName18861 +18862,18862,"FirstName18862 MiddleName18862",LastName18862 +18863,18863,"FirstName18863 MiddleName18863",LastName18863 +18864,18864,"FirstName18864 MiddleName18864",LastName18864 +18865,18865,"FirstName18865 MiddleName18865",LastName18865 +18866,18866,"FirstName18866 MiddleName18866",LastName18866 +18867,18867,"FirstName18867 MiddleName18867",LastName18867 +18868,18868,"FirstName18868 MiddleName18868",LastName18868 +18869,18869,"FirstName18869 MiddleName18869",LastName18869 +18870,18870,"FirstName18870 MiddleName18870",LastName18870 +18871,18871,"FirstName18871 MiddleName18871",LastName18871 +18872,18872,"FirstName18872 MiddleName18872",LastName18872 +18873,18873,"FirstName18873 MiddleName18873",LastName18873 +18874,18874,"FirstName18874 MiddleName18874",LastName18874 +18875,18875,"FirstName18875 MiddleName18875",LastName18875 +18876,18876,"FirstName18876 MiddleName18876",LastName18876 +18877,18877,"FirstName18877 MiddleName18877",LastName18877 
+18878,18878,"FirstName18878 MiddleName18878",LastName18878 +18879,18879,"FirstName18879 MiddleName18879",LastName18879 +18880,18880,"FirstName18880 MiddleName18880",LastName18880 +18881,18881,"FirstName18881 MiddleName18881",LastName18881 +18882,18882,"FirstName18882 MiddleName18882",LastName18882 +18883,18883,"FirstName18883 MiddleName18883",LastName18883 +18884,18884,"FirstName18884 MiddleName18884",LastName18884 +18885,18885,"FirstName18885 MiddleName18885",LastName18885 +18886,18886,"FirstName18886 MiddleName18886",LastName18886 +18887,18887,"FirstName18887 MiddleName18887",LastName18887 +18888,18888,"FirstName18888 MiddleName18888",LastName18888 +18889,18889,"FirstName18889 MiddleName18889",LastName18889 +18890,18890,"FirstName18890 MiddleName18890",LastName18890 +18891,18891,"FirstName18891 MiddleName18891",LastName18891 +18892,18892,"FirstName18892 MiddleName18892",LastName18892 +18893,18893,"FirstName18893 MiddleName18893",LastName18893 +18894,18894,"FirstName18894 MiddleName18894",LastName18894 +18895,18895,"FirstName18895 MiddleName18895",LastName18895 +18896,18896,"FirstName18896 MiddleName18896",LastName18896 +18897,18897,"FirstName18897 MiddleName18897",LastName18897 +18898,18898,"FirstName18898 MiddleName18898",LastName18898 +18899,18899,"FirstName18899 MiddleName18899",LastName18899 +18900,18900,"FirstName18900 MiddleName18900",LastName18900 +18901,18901,"FirstName18901 MiddleName18901",LastName18901 +18902,18902,"FirstName18902 MiddleName18902",LastName18902 +18903,18903,"FirstName18903 MiddleName18903",LastName18903 +18904,18904,"FirstName18904 MiddleName18904",LastName18904 +18905,18905,"FirstName18905 MiddleName18905",LastName18905 +18906,18906,"FirstName18906 MiddleName18906",LastName18906 +18907,18907,"FirstName18907 MiddleName18907",LastName18907 +18908,18908,"FirstName18908 MiddleName18908",LastName18908 +18909,18909,"FirstName18909 MiddleName18909",LastName18909 +18910,18910,"FirstName18910 MiddleName18910",LastName18910 
+18911,18911,"FirstName18911 MiddleName18911",LastName18911 +18912,18912,"FirstName18912 MiddleName18912",LastName18912 +18913,18913,"FirstName18913 MiddleName18913",LastName18913 +18914,18914,"FirstName18914 MiddleName18914",LastName18914 +18915,18915,"FirstName18915 MiddleName18915",LastName18915 +18916,18916,"FirstName18916 MiddleName18916",LastName18916 +18917,18917,"FirstName18917 MiddleName18917",LastName18917 +18918,18918,"FirstName18918 MiddleName18918",LastName18918 +18919,18919,"FirstName18919 MiddleName18919",LastName18919 +18920,18920,"FirstName18920 MiddleName18920",LastName18920 +18921,18921,"FirstName18921 MiddleName18921",LastName18921 +18922,18922,"FirstName18922 MiddleName18922",LastName18922 +18923,18923,"FirstName18923 MiddleName18923",LastName18923 +18924,18924,"FirstName18924 MiddleName18924",LastName18924 +18925,18925,"FirstName18925 MiddleName18925",LastName18925 +18926,18926,"FirstName18926 MiddleName18926",LastName18926 +18927,18927,"FirstName18927 MiddleName18927",LastName18927 +18928,18928,"FirstName18928 MiddleName18928",LastName18928 +18929,18929,"FirstName18929 MiddleName18929",LastName18929 +18930,18930,"FirstName18930 MiddleName18930",LastName18930 +18931,18931,"FirstName18931 MiddleName18931",LastName18931 +18932,18932,"FirstName18932 MiddleName18932",LastName18932 +18933,18933,"FirstName18933 MiddleName18933",LastName18933 +18934,18934,"FirstName18934 MiddleName18934",LastName18934 +18935,18935,"FirstName18935 MiddleName18935",LastName18935 +18936,18936,"FirstName18936 MiddleName18936",LastName18936 +18937,18937,"FirstName18937 MiddleName18937",LastName18937 +18938,18938,"FirstName18938 MiddleName18938",LastName18938 +18939,18939,"FirstName18939 MiddleName18939",LastName18939 +18940,18940,"FirstName18940 MiddleName18940",LastName18940 +18941,18941,"FirstName18941 MiddleName18941",LastName18941 +18942,18942,"FirstName18942 MiddleName18942",LastName18942 +18943,18943,"FirstName18943 MiddleName18943",LastName18943 
+18944,18944,"FirstName18944 MiddleName18944",LastName18944 +18945,18945,"FirstName18945 MiddleName18945",LastName18945 +18946,18946,"FirstName18946 MiddleName18946",LastName18946 +18947,18947,"FirstName18947 MiddleName18947",LastName18947 +18948,18948,"FirstName18948 MiddleName18948",LastName18948 +18949,18949,"FirstName18949 MiddleName18949",LastName18949 +18950,18950,"FirstName18950 MiddleName18950",LastName18950 +18951,18951,"FirstName18951 MiddleName18951",LastName18951 +18952,18952,"FirstName18952 MiddleName18952",LastName18952 +18953,18953,"FirstName18953 MiddleName18953",LastName18953 +18954,18954,"FirstName18954 MiddleName18954",LastName18954 +18955,18955,"FirstName18955 MiddleName18955",LastName18955 +18956,18956,"FirstName18956 MiddleName18956",LastName18956 +18957,18957,"FirstName18957 MiddleName18957",LastName18957 +18958,18958,"FirstName18958 MiddleName18958",LastName18958 +18959,18959,"FirstName18959 MiddleName18959",LastName18959 +18960,18960,"FirstName18960 MiddleName18960",LastName18960 +18961,18961,"FirstName18961 MiddleName18961",LastName18961 +18962,18962,"FirstName18962 MiddleName18962",LastName18962 +18963,18963,"FirstName18963 MiddleName18963",LastName18963 +18964,18964,"FirstName18964 MiddleName18964",LastName18964 +18965,18965,"FirstName18965 MiddleName18965",LastName18965 +18966,18966,"FirstName18966 MiddleName18966",LastName18966 +18967,18967,"FirstName18967 MiddleName18967",LastName18967 +18968,18968,"FirstName18968 MiddleName18968",LastName18968 +18969,18969,"FirstName18969 MiddleName18969",LastName18969 +18970,18970,"FirstName18970 MiddleName18970",LastName18970 +18971,18971,"FirstName18971 MiddleName18971",LastName18971 +18972,18972,"FirstName18972 MiddleName18972",LastName18972 +18973,18973,"FirstName18973 MiddleName18973",LastName18973 +18974,18974,"FirstName18974 MiddleName18974",LastName18974 +18975,18975,"FirstName18975 MiddleName18975",LastName18975 +18976,18976,"FirstName18976 MiddleName18976",LastName18976 
+18977,18977,"FirstName18977 MiddleName18977",LastName18977 +18978,18978,"FirstName18978 MiddleName18978",LastName18978 +18979,18979,"FirstName18979 MiddleName18979",LastName18979 +18980,18980,"FirstName18980 MiddleName18980",LastName18980 +18981,18981,"FirstName18981 MiddleName18981",LastName18981 +18982,18982,"FirstName18982 MiddleName18982",LastName18982 +18983,18983,"FirstName18983 MiddleName18983",LastName18983 +18984,18984,"FirstName18984 MiddleName18984",LastName18984 +18985,18985,"FirstName18985 MiddleName18985",LastName18985 +18986,18986,"FirstName18986 MiddleName18986",LastName18986 +18987,18987,"FirstName18987 MiddleName18987",LastName18987 +18988,18988,"FirstName18988 MiddleName18988",LastName18988 +18989,18989,"FirstName18989 MiddleName18989",LastName18989 +18990,18990,"FirstName18990 MiddleName18990",LastName18990 +18991,18991,"FirstName18991 MiddleName18991",LastName18991 +18992,18992,"FirstName18992 MiddleName18992",LastName18992 +18993,18993,"FirstName18993 MiddleName18993",LastName18993 +18994,18994,"FirstName18994 MiddleName18994",LastName18994 +18995,18995,"FirstName18995 MiddleName18995",LastName18995 +18996,18996,"FirstName18996 MiddleName18996",LastName18996 +18997,18997,"FirstName18997 MiddleName18997",LastName18997 +18998,18998,"FirstName18998 MiddleName18998",LastName18998 +18999,18999,"FirstName18999 MiddleName18999",LastName18999 +19000,19000,"FirstName19000 MiddleName19000",LastName19000 +19001,19001,"FirstName19001 MiddleName19001",LastName19001 +19002,19002,"FirstName19002 MiddleName19002",LastName19002 +19003,19003,"FirstName19003 MiddleName19003",LastName19003 +19004,19004,"FirstName19004 MiddleName19004",LastName19004 +19005,19005,"FirstName19005 MiddleName19005",LastName19005 +19006,19006,"FirstName19006 MiddleName19006",LastName19006 +19007,19007,"FirstName19007 MiddleName19007",LastName19007 +19008,19008,"FirstName19008 MiddleName19008",LastName19008 +19009,19009,"FirstName19009 MiddleName19009",LastName19009 
+19010,19010,"FirstName19010 MiddleName19010",LastName19010 +19011,19011,"FirstName19011 MiddleName19011",LastName19011 +19012,19012,"FirstName19012 MiddleName19012",LastName19012 +19013,19013,"FirstName19013 MiddleName19013",LastName19013 +19014,19014,"FirstName19014 MiddleName19014",LastName19014 +19015,19015,"FirstName19015 MiddleName19015",LastName19015 +19016,19016,"FirstName19016 MiddleName19016",LastName19016 +19017,19017,"FirstName19017 MiddleName19017",LastName19017 +19018,19018,"FirstName19018 MiddleName19018",LastName19018 +19019,19019,"FirstName19019 MiddleName19019",LastName19019 +19020,19020,"FirstName19020 MiddleName19020",LastName19020 +19021,19021,"FirstName19021 MiddleName19021",LastName19021 +19022,19022,"FirstName19022 MiddleName19022",LastName19022 +19023,19023,"FirstName19023 MiddleName19023",LastName19023 +19024,19024,"FirstName19024 MiddleName19024",LastName19024 +19025,19025,"FirstName19025 MiddleName19025",LastName19025 +19026,19026,"FirstName19026 MiddleName19026",LastName19026 +19027,19027,"FirstName19027 MiddleName19027",LastName19027 +19028,19028,"FirstName19028 MiddleName19028",LastName19028 +19029,19029,"FirstName19029 MiddleName19029",LastName19029 +19030,19030,"FirstName19030 MiddleName19030",LastName19030 +19031,19031,"FirstName19031 MiddleName19031",LastName19031 +19032,19032,"FirstName19032 MiddleName19032",LastName19032 +19033,19033,"FirstName19033 MiddleName19033",LastName19033 +19034,19034,"FirstName19034 MiddleName19034",LastName19034 +19035,19035,"FirstName19035 MiddleName19035",LastName19035 +19036,19036,"FirstName19036 MiddleName19036",LastName19036 +19037,19037,"FirstName19037 MiddleName19037",LastName19037 +19038,19038,"FirstName19038 MiddleName19038",LastName19038 +19039,19039,"FirstName19039 MiddleName19039",LastName19039 +19040,19040,"FirstName19040 MiddleName19040",LastName19040 +19041,19041,"FirstName19041 MiddleName19041",LastName19041 +19042,19042,"FirstName19042 MiddleName19042",LastName19042 
+19043,19043,"FirstName19043 MiddleName19043",LastName19043 +19044,19044,"FirstName19044 MiddleName19044",LastName19044 +19045,19045,"FirstName19045 MiddleName19045",LastName19045 +19046,19046,"FirstName19046 MiddleName19046",LastName19046 +19047,19047,"FirstName19047 MiddleName19047",LastName19047 +19048,19048,"FirstName19048 MiddleName19048",LastName19048 +19049,19049,"FirstName19049 MiddleName19049",LastName19049 +19050,19050,"FirstName19050 MiddleName19050",LastName19050 +19051,19051,"FirstName19051 MiddleName19051",LastName19051 +19052,19052,"FirstName19052 MiddleName19052",LastName19052 +19053,19053,"FirstName19053 MiddleName19053",LastName19053 +19054,19054,"FirstName19054 MiddleName19054",LastName19054 +19055,19055,"FirstName19055 MiddleName19055",LastName19055 +19056,19056,"FirstName19056 MiddleName19056",LastName19056 +19057,19057,"FirstName19057 MiddleName19057",LastName19057 +19058,19058,"FirstName19058 MiddleName19058",LastName19058 +19059,19059,"FirstName19059 MiddleName19059",LastName19059 +19060,19060,"FirstName19060 MiddleName19060",LastName19060 +19061,19061,"FirstName19061 MiddleName19061",LastName19061 +19062,19062,"FirstName19062 MiddleName19062",LastName19062 +19063,19063,"FirstName19063 MiddleName19063",LastName19063 +19064,19064,"FirstName19064 MiddleName19064",LastName19064 +19065,19065,"FirstName19065 MiddleName19065",LastName19065 +19066,19066,"FirstName19066 MiddleName19066",LastName19066 +19067,19067,"FirstName19067 MiddleName19067",LastName19067 +19068,19068,"FirstName19068 MiddleName19068",LastName19068 +19069,19069,"FirstName19069 MiddleName19069",LastName19069 +19070,19070,"FirstName19070 MiddleName19070",LastName19070 +19071,19071,"FirstName19071 MiddleName19071",LastName19071 +19072,19072,"FirstName19072 MiddleName19072",LastName19072 +19073,19073,"FirstName19073 MiddleName19073",LastName19073 +19074,19074,"FirstName19074 MiddleName19074",LastName19074 +19075,19075,"FirstName19075 MiddleName19075",LastName19075 
+19076,19076,"FirstName19076 MiddleName19076",LastName19076 +19077,19077,"FirstName19077 MiddleName19077",LastName19077 +19078,19078,"FirstName19078 MiddleName19078",LastName19078 +19079,19079,"FirstName19079 MiddleName19079",LastName19079 +19080,19080,"FirstName19080 MiddleName19080",LastName19080 +19081,19081,"FirstName19081 MiddleName19081",LastName19081 +19082,19082,"FirstName19082 MiddleName19082",LastName19082 +19083,19083,"FirstName19083 MiddleName19083",LastName19083 +19084,19084,"FirstName19084 MiddleName19084",LastName19084 +19085,19085,"FirstName19085 MiddleName19085",LastName19085 +19086,19086,"FirstName19086 MiddleName19086",LastName19086 +19087,19087,"FirstName19087 MiddleName19087",LastName19087 +19088,19088,"FirstName19088 MiddleName19088",LastName19088 +19089,19089,"FirstName19089 MiddleName19089",LastName19089 +19090,19090,"FirstName19090 MiddleName19090",LastName19090 +19091,19091,"FirstName19091 MiddleName19091",LastName19091 +19092,19092,"FirstName19092 MiddleName19092",LastName19092 +19093,19093,"FirstName19093 MiddleName19093",LastName19093 +19094,19094,"FirstName19094 MiddleName19094",LastName19094 +19095,19095,"FirstName19095 MiddleName19095",LastName19095 +19096,19096,"FirstName19096 MiddleName19096",LastName19096 +19097,19097,"FirstName19097 MiddleName19097",LastName19097 +19098,19098,"FirstName19098 MiddleName19098",LastName19098 +19099,19099,"FirstName19099 MiddleName19099",LastName19099 +19100,19100,"FirstName19100 MiddleName19100",LastName19100 +19101,19101,"FirstName19101 MiddleName19101",LastName19101 +19102,19102,"FirstName19102 MiddleName19102",LastName19102 +19103,19103,"FirstName19103 MiddleName19103",LastName19103 +19104,19104,"FirstName19104 MiddleName19104",LastName19104 +19105,19105,"FirstName19105 MiddleName19105",LastName19105 +19106,19106,"FirstName19106 MiddleName19106",LastName19106 +19107,19107,"FirstName19107 MiddleName19107",LastName19107 +19108,19108,"FirstName19108 MiddleName19108",LastName19108 
+19109,19109,"FirstName19109 MiddleName19109",LastName19109 +19110,19110,"FirstName19110 MiddleName19110",LastName19110 +19111,19111,"FirstName19111 MiddleName19111",LastName19111 +19112,19112,"FirstName19112 MiddleName19112",LastName19112 +19113,19113,"FirstName19113 MiddleName19113",LastName19113 +19114,19114,"FirstName19114 MiddleName19114",LastName19114 +19115,19115,"FirstName19115 MiddleName19115",LastName19115 +19116,19116,"FirstName19116 MiddleName19116",LastName19116 +19117,19117,"FirstName19117 MiddleName19117",LastName19117 +19118,19118,"FirstName19118 MiddleName19118",LastName19118 +19119,19119,"FirstName19119 MiddleName19119",LastName19119 +19120,19120,"FirstName19120 MiddleName19120",LastName19120 +19121,19121,"FirstName19121 MiddleName19121",LastName19121 +19122,19122,"FirstName19122 MiddleName19122",LastName19122 +19123,19123,"FirstName19123 MiddleName19123",LastName19123 +19124,19124,"FirstName19124 MiddleName19124",LastName19124 +19125,19125,"FirstName19125 MiddleName19125",LastName19125 +19126,19126,"FirstName19126 MiddleName19126",LastName19126 +19127,19127,"FirstName19127 MiddleName19127",LastName19127 +19128,19128,"FirstName19128 MiddleName19128",LastName19128 +19129,19129,"FirstName19129 MiddleName19129",LastName19129 +19130,19130,"FirstName19130 MiddleName19130",LastName19130 +19131,19131,"FirstName19131 MiddleName19131",LastName19131 +19132,19132,"FirstName19132 MiddleName19132",LastName19132 +19133,19133,"FirstName19133 MiddleName19133",LastName19133 +19134,19134,"FirstName19134 MiddleName19134",LastName19134 +19135,19135,"FirstName19135 MiddleName19135",LastName19135 +19136,19136,"FirstName19136 MiddleName19136",LastName19136 +19137,19137,"FirstName19137 MiddleName19137",LastName19137 +19138,19138,"FirstName19138 MiddleName19138",LastName19138 +19139,19139,"FirstName19139 MiddleName19139",LastName19139 +19140,19140,"FirstName19140 MiddleName19140",LastName19140 +19141,19141,"FirstName19141 MiddleName19141",LastName19141 
+19142,19142,"FirstName19142 MiddleName19142",LastName19142 +19143,19143,"FirstName19143 MiddleName19143",LastName19143 +19144,19144,"FirstName19144 MiddleName19144",LastName19144 +19145,19145,"FirstName19145 MiddleName19145",LastName19145 +19146,19146,"FirstName19146 MiddleName19146",LastName19146 +19147,19147,"FirstName19147 MiddleName19147",LastName19147 +19148,19148,"FirstName19148 MiddleName19148",LastName19148 +19149,19149,"FirstName19149 MiddleName19149",LastName19149 +19150,19150,"FirstName19150 MiddleName19150",LastName19150 +19151,19151,"FirstName19151 MiddleName19151",LastName19151 +19152,19152,"FirstName19152 MiddleName19152",LastName19152 +19153,19153,"FirstName19153 MiddleName19153",LastName19153 +19154,19154,"FirstName19154 MiddleName19154",LastName19154 +19155,19155,"FirstName19155 MiddleName19155",LastName19155 +19156,19156,"FirstName19156 MiddleName19156",LastName19156 +19157,19157,"FirstName19157 MiddleName19157",LastName19157 +19158,19158,"FirstName19158 MiddleName19158",LastName19158 +19159,19159,"FirstName19159 MiddleName19159",LastName19159 +19160,19160,"FirstName19160 MiddleName19160",LastName19160 +19161,19161,"FirstName19161 MiddleName19161",LastName19161 +19162,19162,"FirstName19162 MiddleName19162",LastName19162 +19163,19163,"FirstName19163 MiddleName19163",LastName19163 +19164,19164,"FirstName19164 MiddleName19164",LastName19164 +19165,19165,"FirstName19165 MiddleName19165",LastName19165 +19166,19166,"FirstName19166 MiddleName19166",LastName19166 +19167,19167,"FirstName19167 MiddleName19167",LastName19167 +19168,19168,"FirstName19168 MiddleName19168",LastName19168 +19169,19169,"FirstName19169 MiddleName19169",LastName19169 +19170,19170,"FirstName19170 MiddleName19170",LastName19170 +19171,19171,"FirstName19171 MiddleName19171",LastName19171 +19172,19172,"FirstName19172 MiddleName19172",LastName19172 +19173,19173,"FirstName19173 MiddleName19173",LastName19173 +19174,19174,"FirstName19174 MiddleName19174",LastName19174 
+19175,19175,"FirstName19175 MiddleName19175",LastName19175 +19176,19176,"FirstName19176 MiddleName19176",LastName19176 +19177,19177,"FirstName19177 MiddleName19177",LastName19177 +19178,19178,"FirstName19178 MiddleName19178",LastName19178 +19179,19179,"FirstName19179 MiddleName19179",LastName19179 +19180,19180,"FirstName19180 MiddleName19180",LastName19180 +19181,19181,"FirstName19181 MiddleName19181",LastName19181 +19182,19182,"FirstName19182 MiddleName19182",LastName19182 +19183,19183,"FirstName19183 MiddleName19183",LastName19183 +19184,19184,"FirstName19184 MiddleName19184",LastName19184 +19185,19185,"FirstName19185 MiddleName19185",LastName19185 +19186,19186,"FirstName19186 MiddleName19186",LastName19186 +19187,19187,"FirstName19187 MiddleName19187",LastName19187 +19188,19188,"FirstName19188 MiddleName19188",LastName19188 +19189,19189,"FirstName19189 MiddleName19189",LastName19189 +19190,19190,"FirstName19190 MiddleName19190",LastName19190 +19191,19191,"FirstName19191 MiddleName19191",LastName19191 +19192,19192,"FirstName19192 MiddleName19192",LastName19192 +19193,19193,"FirstName19193 MiddleName19193",LastName19193 +19194,19194,"FirstName19194 MiddleName19194",LastName19194 +19195,19195,"FirstName19195 MiddleName19195",LastName19195 +19196,19196,"FirstName19196 MiddleName19196",LastName19196 +19197,19197,"FirstName19197 MiddleName19197",LastName19197 +19198,19198,"FirstName19198 MiddleName19198",LastName19198 +19199,19199,"FirstName19199 MiddleName19199",LastName19199 +19200,19200,"FirstName19200 MiddleName19200",LastName19200 +19201,19201,"FirstName19201 MiddleName19201",LastName19201 +19202,19202,"FirstName19202 MiddleName19202",LastName19202 +19203,19203,"FirstName19203 MiddleName19203",LastName19203 +19204,19204,"FirstName19204 MiddleName19204",LastName19204 +19205,19205,"FirstName19205 MiddleName19205",LastName19205 +19206,19206,"FirstName19206 MiddleName19206",LastName19206 +19207,19207,"FirstName19207 MiddleName19207",LastName19207 
+19208,19208,"FirstName19208 MiddleName19208",LastName19208 +19209,19209,"FirstName19209 MiddleName19209",LastName19209 +19210,19210,"FirstName19210 MiddleName19210",LastName19210 +19211,19211,"FirstName19211 MiddleName19211",LastName19211 +19212,19212,"FirstName19212 MiddleName19212",LastName19212 +19213,19213,"FirstName19213 MiddleName19213",LastName19213 +19214,19214,"FirstName19214 MiddleName19214",LastName19214 +19215,19215,"FirstName19215 MiddleName19215",LastName19215 +19216,19216,"FirstName19216 MiddleName19216",LastName19216 +19217,19217,"FirstName19217 MiddleName19217",LastName19217 +19218,19218,"FirstName19218 MiddleName19218",LastName19218 +19219,19219,"FirstName19219 MiddleName19219",LastName19219 +19220,19220,"FirstName19220 MiddleName19220",LastName19220 +19221,19221,"FirstName19221 MiddleName19221",LastName19221 +19222,19222,"FirstName19222 MiddleName19222",LastName19222 +19223,19223,"FirstName19223 MiddleName19223",LastName19223 +19224,19224,"FirstName19224 MiddleName19224",LastName19224 +19225,19225,"FirstName19225 MiddleName19225",LastName19225 +19226,19226,"FirstName19226 MiddleName19226",LastName19226 +19227,19227,"FirstName19227 MiddleName19227",LastName19227 +19228,19228,"FirstName19228 MiddleName19228",LastName19228 +19229,19229,"FirstName19229 MiddleName19229",LastName19229 +19230,19230,"FirstName19230 MiddleName19230",LastName19230 +19231,19231,"FirstName19231 MiddleName19231",LastName19231 +19232,19232,"FirstName19232 MiddleName19232",LastName19232 +19233,19233,"FirstName19233 MiddleName19233",LastName19233 +19234,19234,"FirstName19234 MiddleName19234",LastName19234 +19235,19235,"FirstName19235 MiddleName19235",LastName19235 +19236,19236,"FirstName19236 MiddleName19236",LastName19236 +19237,19237,"FirstName19237 MiddleName19237",LastName19237 +19238,19238,"FirstName19238 MiddleName19238",LastName19238 +19239,19239,"FirstName19239 MiddleName19239",LastName19239 +19240,19240,"FirstName19240 MiddleName19240",LastName19240 
+19241,19241,"FirstName19241 MiddleName19241",LastName19241 +19242,19242,"FirstName19242 MiddleName19242",LastName19242 +19243,19243,"FirstName19243 MiddleName19243",LastName19243 +19244,19244,"FirstName19244 MiddleName19244",LastName19244 +19245,19245,"FirstName19245 MiddleName19245",LastName19245 +19246,19246,"FirstName19246 MiddleName19246",LastName19246 +19247,19247,"FirstName19247 MiddleName19247",LastName19247 +19248,19248,"FirstName19248 MiddleName19248",LastName19248 +19249,19249,"FirstName19249 MiddleName19249",LastName19249 +19250,19250,"FirstName19250 MiddleName19250",LastName19250 +19251,19251,"FirstName19251 MiddleName19251",LastName19251 +19252,19252,"FirstName19252 MiddleName19252",LastName19252 +19253,19253,"FirstName19253 MiddleName19253",LastName19253 +19254,19254,"FirstName19254 MiddleName19254",LastName19254 +19255,19255,"FirstName19255 MiddleName19255",LastName19255 +19256,19256,"FirstName19256 MiddleName19256",LastName19256 +19257,19257,"FirstName19257 MiddleName19257",LastName19257 +19258,19258,"FirstName19258 MiddleName19258",LastName19258 +19259,19259,"FirstName19259 MiddleName19259",LastName19259 +19260,19260,"FirstName19260 MiddleName19260",LastName19260 +19261,19261,"FirstName19261 MiddleName19261",LastName19261 +19262,19262,"FirstName19262 MiddleName19262",LastName19262 +19263,19263,"FirstName19263 MiddleName19263",LastName19263 +19264,19264,"FirstName19264 MiddleName19264",LastName19264 +19265,19265,"FirstName19265 MiddleName19265",LastName19265 +19266,19266,"FirstName19266 MiddleName19266",LastName19266 +19267,19267,"FirstName19267 MiddleName19267",LastName19267 +19268,19268,"FirstName19268 MiddleName19268",LastName19268 +19269,19269,"FirstName19269 MiddleName19269",LastName19269 +19270,19270,"FirstName19270 MiddleName19270",LastName19270 +19271,19271,"FirstName19271 MiddleName19271",LastName19271 +19272,19272,"FirstName19272 MiddleName19272",LastName19272 +19273,19273,"FirstName19273 MiddleName19273",LastName19273 
+19274,19274,"FirstName19274 MiddleName19274",LastName19274 +19275,19275,"FirstName19275 MiddleName19275",LastName19275 +19276,19276,"FirstName19276 MiddleName19276",LastName19276 +19277,19277,"FirstName19277 MiddleName19277",LastName19277 +19278,19278,"FirstName19278 MiddleName19278",LastName19278 +19279,19279,"FirstName19279 MiddleName19279",LastName19279 +19280,19280,"FirstName19280 MiddleName19280",LastName19280 +19281,19281,"FirstName19281 MiddleName19281",LastName19281 +19282,19282,"FirstName19282 MiddleName19282",LastName19282 +19283,19283,"FirstName19283 MiddleName19283",LastName19283 +19284,19284,"FirstName19284 MiddleName19284",LastName19284 +19285,19285,"FirstName19285 MiddleName19285",LastName19285 +19286,19286,"FirstName19286 MiddleName19286",LastName19286 +19287,19287,"FirstName19287 MiddleName19287",LastName19287 +19288,19288,"FirstName19288 MiddleName19288",LastName19288 +19289,19289,"FirstName19289 MiddleName19289",LastName19289 +19290,19290,"FirstName19290 MiddleName19290",LastName19290 +19291,19291,"FirstName19291 MiddleName19291",LastName19291 +19292,19292,"FirstName19292 MiddleName19292",LastName19292 +19293,19293,"FirstName19293 MiddleName19293",LastName19293 +19294,19294,"FirstName19294 MiddleName19294",LastName19294 +19295,19295,"FirstName19295 MiddleName19295",LastName19295 +19296,19296,"FirstName19296 MiddleName19296",LastName19296 +19297,19297,"FirstName19297 MiddleName19297",LastName19297 +19298,19298,"FirstName19298 MiddleName19298",LastName19298 +19299,19299,"FirstName19299 MiddleName19299",LastName19299 +19300,19300,"FirstName19300 MiddleName19300",LastName19300 +19301,19301,"FirstName19301 MiddleName19301",LastName19301 +19302,19302,"FirstName19302 MiddleName19302",LastName19302 +19303,19303,"FirstName19303 MiddleName19303",LastName19303 +19304,19304,"FirstName19304 MiddleName19304",LastName19304 +19305,19305,"FirstName19305 MiddleName19305",LastName19305 +19306,19306,"FirstName19306 MiddleName19306",LastName19306 
+19307,19307,"FirstName19307 MiddleName19307",LastName19307 +19308,19308,"FirstName19308 MiddleName19308",LastName19308 +19309,19309,"FirstName19309 MiddleName19309",LastName19309 +19310,19310,"FirstName19310 MiddleName19310",LastName19310 +19311,19311,"FirstName19311 MiddleName19311",LastName19311 +19312,19312,"FirstName19312 MiddleName19312",LastName19312 +19313,19313,"FirstName19313 MiddleName19313",LastName19313 +19314,19314,"FirstName19314 MiddleName19314",LastName19314 +19315,19315,"FirstName19315 MiddleName19315",LastName19315 +19316,19316,"FirstName19316 MiddleName19316",LastName19316 +19317,19317,"FirstName19317 MiddleName19317",LastName19317 +19318,19318,"FirstName19318 MiddleName19318",LastName19318 +19319,19319,"FirstName19319 MiddleName19319",LastName19319 +19320,19320,"FirstName19320 MiddleName19320",LastName19320 +19321,19321,"FirstName19321 MiddleName19321",LastName19321 +19322,19322,"FirstName19322 MiddleName19322",LastName19322 +19323,19323,"FirstName19323 MiddleName19323",LastName19323 +19324,19324,"FirstName19324 MiddleName19324",LastName19324 +19325,19325,"FirstName19325 MiddleName19325",LastName19325 +19326,19326,"FirstName19326 MiddleName19326",LastName19326 +19327,19327,"FirstName19327 MiddleName19327",LastName19327 +19328,19328,"FirstName19328 MiddleName19328",LastName19328 +19329,19329,"FirstName19329 MiddleName19329",LastName19329 +19330,19330,"FirstName19330 MiddleName19330",LastName19330 +19331,19331,"FirstName19331 MiddleName19331",LastName19331 +19332,19332,"FirstName19332 MiddleName19332",LastName19332 +19333,19333,"FirstName19333 MiddleName19333",LastName19333 +19334,19334,"FirstName19334 MiddleName19334",LastName19334 +19335,19335,"FirstName19335 MiddleName19335",LastName19335 +19336,19336,"FirstName19336 MiddleName19336",LastName19336 +19337,19337,"FirstName19337 MiddleName19337",LastName19337 +19338,19338,"FirstName19338 MiddleName19338",LastName19338 +19339,19339,"FirstName19339 MiddleName19339",LastName19339 
+19340,19340,"FirstName19340 MiddleName19340",LastName19340 +19341,19341,"FirstName19341 MiddleName19341",LastName19341 +19342,19342,"FirstName19342 MiddleName19342",LastName19342 +19343,19343,"FirstName19343 MiddleName19343",LastName19343 +19344,19344,"FirstName19344 MiddleName19344",LastName19344 +19345,19345,"FirstName19345 MiddleName19345",LastName19345 +19346,19346,"FirstName19346 MiddleName19346",LastName19346 +19347,19347,"FirstName19347 MiddleName19347",LastName19347 +19348,19348,"FirstName19348 MiddleName19348",LastName19348 +19349,19349,"FirstName19349 MiddleName19349",LastName19349 +19350,19350,"FirstName19350 MiddleName19350",LastName19350 +19351,19351,"FirstName19351 MiddleName19351",LastName19351 +19352,19352,"FirstName19352 MiddleName19352",LastName19352 +19353,19353,"FirstName19353 MiddleName19353",LastName19353 +19354,19354,"FirstName19354 MiddleName19354",LastName19354 +19355,19355,"FirstName19355 MiddleName19355",LastName19355 +19356,19356,"FirstName19356 MiddleName19356",LastName19356 +19357,19357,"FirstName19357 MiddleName19357",LastName19357 +19358,19358,"FirstName19358 MiddleName19358",LastName19358 +19359,19359,"FirstName19359 MiddleName19359",LastName19359 +19360,19360,"FirstName19360 MiddleName19360",LastName19360 +19361,19361,"FirstName19361 MiddleName19361",LastName19361 +19362,19362,"FirstName19362 MiddleName19362",LastName19362 +19363,19363,"FirstName19363 MiddleName19363",LastName19363 +19364,19364,"FirstName19364 MiddleName19364",LastName19364 +19365,19365,"FirstName19365 MiddleName19365",LastName19365 +19366,19366,"FirstName19366 MiddleName19366",LastName19366 +19367,19367,"FirstName19367 MiddleName19367",LastName19367 +19368,19368,"FirstName19368 MiddleName19368",LastName19368 +19369,19369,"FirstName19369 MiddleName19369",LastName19369 +19370,19370,"FirstName19370 MiddleName19370",LastName19370 +19371,19371,"FirstName19371 MiddleName19371",LastName19371 +19372,19372,"FirstName19372 MiddleName19372",LastName19372 
+19373,19373,"FirstName19373 MiddleName19373",LastName19373 +19374,19374,"FirstName19374 MiddleName19374",LastName19374 +19375,19375,"FirstName19375 MiddleName19375",LastName19375 +19376,19376,"FirstName19376 MiddleName19376",LastName19376 +19377,19377,"FirstName19377 MiddleName19377",LastName19377 +19378,19378,"FirstName19378 MiddleName19378",LastName19378 +19379,19379,"FirstName19379 MiddleName19379",LastName19379 +19380,19380,"FirstName19380 MiddleName19380",LastName19380 +19381,19381,"FirstName19381 MiddleName19381",LastName19381 +19382,19382,"FirstName19382 MiddleName19382",LastName19382 +19383,19383,"FirstName19383 MiddleName19383",LastName19383 +19384,19384,"FirstName19384 MiddleName19384",LastName19384 +19385,19385,"FirstName19385 MiddleName19385",LastName19385 +19386,19386,"FirstName19386 MiddleName19386",LastName19386 +19387,19387,"FirstName19387 MiddleName19387",LastName19387 +19388,19388,"FirstName19388 MiddleName19388",LastName19388 +19389,19389,"FirstName19389 MiddleName19389",LastName19389 +19390,19390,"FirstName19390 MiddleName19390",LastName19390 +19391,19391,"FirstName19391 MiddleName19391",LastName19391 +19392,19392,"FirstName19392 MiddleName19392",LastName19392 +19393,19393,"FirstName19393 MiddleName19393",LastName19393 +19394,19394,"FirstName19394 MiddleName19394",LastName19394 +19395,19395,"FirstName19395 MiddleName19395",LastName19395 +19396,19396,"FirstName19396 MiddleName19396",LastName19396 +19397,19397,"FirstName19397 MiddleName19397",LastName19397 +19398,19398,"FirstName19398 MiddleName19398",LastName19398 +19399,19399,"FirstName19399 MiddleName19399",LastName19399 +19400,19400,"FirstName19400 MiddleName19400",LastName19400 +19401,19401,"FirstName19401 MiddleName19401",LastName19401 +19402,19402,"FirstName19402 MiddleName19402",LastName19402 +19403,19403,"FirstName19403 MiddleName19403",LastName19403 +19404,19404,"FirstName19404 MiddleName19404",LastName19404 +19405,19405,"FirstName19405 MiddleName19405",LastName19405 
+19406,19406,"FirstName19406 MiddleName19406",LastName19406 +19407,19407,"FirstName19407 MiddleName19407",LastName19407 +19408,19408,"FirstName19408 MiddleName19408",LastName19408 +19409,19409,"FirstName19409 MiddleName19409",LastName19409 +19410,19410,"FirstName19410 MiddleName19410",LastName19410 +19411,19411,"FirstName19411 MiddleName19411",LastName19411 +19412,19412,"FirstName19412 MiddleName19412",LastName19412 +19413,19413,"FirstName19413 MiddleName19413",LastName19413 +19414,19414,"FirstName19414 MiddleName19414",LastName19414 +19415,19415,"FirstName19415 MiddleName19415",LastName19415 +19416,19416,"FirstName19416 MiddleName19416",LastName19416 +19417,19417,"FirstName19417 MiddleName19417",LastName19417 +19418,19418,"FirstName19418 MiddleName19418",LastName19418 +19419,19419,"FirstName19419 MiddleName19419",LastName19419 +19420,19420,"FirstName19420 MiddleName19420",LastName19420 +19421,19421,"FirstName19421 MiddleName19421",LastName19421 +19422,19422,"FirstName19422 MiddleName19422",LastName19422 +19423,19423,"FirstName19423 MiddleName19423",LastName19423 +19424,19424,"FirstName19424 MiddleName19424",LastName19424 +19425,19425,"FirstName19425 MiddleName19425",LastName19425 +19426,19426,"FirstName19426 MiddleName19426",LastName19426 +19427,19427,"FirstName19427 MiddleName19427",LastName19427 +19428,19428,"FirstName19428 MiddleName19428",LastName19428 +19429,19429,"FirstName19429 MiddleName19429",LastName19429 +19430,19430,"FirstName19430 MiddleName19430",LastName19430 +19431,19431,"FirstName19431 MiddleName19431",LastName19431 +19432,19432,"FirstName19432 MiddleName19432",LastName19432 +19433,19433,"FirstName19433 MiddleName19433",LastName19433 +19434,19434,"FirstName19434 MiddleName19434",LastName19434 +19435,19435,"FirstName19435 MiddleName19435",LastName19435 +19436,19436,"FirstName19436 MiddleName19436",LastName19436 +19437,19437,"FirstName19437 MiddleName19437",LastName19437 +19438,19438,"FirstName19438 MiddleName19438",LastName19438 
+19439,19439,"FirstName19439 MiddleName19439",LastName19439 +19440,19440,"FirstName19440 MiddleName19440",LastName19440 +19441,19441,"FirstName19441 MiddleName19441",LastName19441 +19442,19442,"FirstName19442 MiddleName19442",LastName19442 +19443,19443,"FirstName19443 MiddleName19443",LastName19443 +19444,19444,"FirstName19444 MiddleName19444",LastName19444 +19445,19445,"FirstName19445 MiddleName19445",LastName19445 +19446,19446,"FirstName19446 MiddleName19446",LastName19446 +19447,19447,"FirstName19447 MiddleName19447",LastName19447 +19448,19448,"FirstName19448 MiddleName19448",LastName19448 +19449,19449,"FirstName19449 MiddleName19449",LastName19449 +19450,19450,"FirstName19450 MiddleName19450",LastName19450 +19451,19451,"FirstName19451 MiddleName19451",LastName19451 +19452,19452,"FirstName19452 MiddleName19452",LastName19452 +19453,19453,"FirstName19453 MiddleName19453",LastName19453 +19454,19454,"FirstName19454 MiddleName19454",LastName19454 +19455,19455,"FirstName19455 MiddleName19455",LastName19455 +19456,19456,"FirstName19456 MiddleName19456",LastName19456 +19457,19457,"FirstName19457 MiddleName19457",LastName19457 +19458,19458,"FirstName19458 MiddleName19458",LastName19458 +19459,19459,"FirstName19459 MiddleName19459",LastName19459 +19460,19460,"FirstName19460 MiddleName19460",LastName19460 +19461,19461,"FirstName19461 MiddleName19461",LastName19461 +19462,19462,"FirstName19462 MiddleName19462",LastName19462 +19463,19463,"FirstName19463 MiddleName19463",LastName19463 +19464,19464,"FirstName19464 MiddleName19464",LastName19464 +19465,19465,"FirstName19465 MiddleName19465",LastName19465 +19466,19466,"FirstName19466 MiddleName19466",LastName19466 +19467,19467,"FirstName19467 MiddleName19467",LastName19467 +19468,19468,"FirstName19468 MiddleName19468",LastName19468 +19469,19469,"FirstName19469 MiddleName19469",LastName19469 +19470,19470,"FirstName19470 MiddleName19470",LastName19470 +19471,19471,"FirstName19471 MiddleName19471",LastName19471 
+19472,19472,"FirstName19472 MiddleName19472",LastName19472 +19473,19473,"FirstName19473 MiddleName19473",LastName19473 +19474,19474,"FirstName19474 MiddleName19474",LastName19474 +19475,19475,"FirstName19475 MiddleName19475",LastName19475 +19476,19476,"FirstName19476 MiddleName19476",LastName19476 +19477,19477,"FirstName19477 MiddleName19477",LastName19477 +19478,19478,"FirstName19478 MiddleName19478",LastName19478 +19479,19479,"FirstName19479 MiddleName19479",LastName19479 +19480,19480,"FirstName19480 MiddleName19480",LastName19480 +19481,19481,"FirstName19481 MiddleName19481",LastName19481 +19482,19482,"FirstName19482 MiddleName19482",LastName19482 +19483,19483,"FirstName19483 MiddleName19483",LastName19483 +19484,19484,"FirstName19484 MiddleName19484",LastName19484 +19485,19485,"FirstName19485 MiddleName19485",LastName19485 +19486,19486,"FirstName19486 MiddleName19486",LastName19486 +19487,19487,"FirstName19487 MiddleName19487",LastName19487 +19488,19488,"FirstName19488 MiddleName19488",LastName19488 +19489,19489,"FirstName19489 MiddleName19489",LastName19489 +19490,19490,"FirstName19490 MiddleName19490",LastName19490 +19491,19491,"FirstName19491 MiddleName19491",LastName19491 +19492,19492,"FirstName19492 MiddleName19492",LastName19492 +19493,19493,"FirstName19493 MiddleName19493",LastName19493 +19494,19494,"FirstName19494 MiddleName19494",LastName19494 +19495,19495,"FirstName19495 MiddleName19495",LastName19495 +19496,19496,"FirstName19496 MiddleName19496",LastName19496 +19497,19497,"FirstName19497 MiddleName19497",LastName19497 +19498,19498,"FirstName19498 MiddleName19498",LastName19498 +19499,19499,"FirstName19499 MiddleName19499",LastName19499 +19500,19500,"FirstName19500 MiddleName19500",LastName19500 +19501,19501,"FirstName19501 MiddleName19501",LastName19501 +19502,19502,"FirstName19502 MiddleName19502",LastName19502 +19503,19503,"FirstName19503 MiddleName19503",LastName19503 +19504,19504,"FirstName19504 MiddleName19504",LastName19504 
+19505,19505,"FirstName19505 MiddleName19505",LastName19505 +19506,19506,"FirstName19506 MiddleName19506",LastName19506 +19507,19507,"FirstName19507 MiddleName19507",LastName19507 +19508,19508,"FirstName19508 MiddleName19508",LastName19508 +19509,19509,"FirstName19509 MiddleName19509",LastName19509 +19510,19510,"FirstName19510 MiddleName19510",LastName19510 +19511,19511,"FirstName19511 MiddleName19511",LastName19511 +19512,19512,"FirstName19512 MiddleName19512",LastName19512 +19513,19513,"FirstName19513 MiddleName19513",LastName19513 +19514,19514,"FirstName19514 MiddleName19514",LastName19514 +19515,19515,"FirstName19515 MiddleName19515",LastName19515 +19516,19516,"FirstName19516 MiddleName19516",LastName19516 +19517,19517,"FirstName19517 MiddleName19517",LastName19517 +19518,19518,"FirstName19518 MiddleName19518",LastName19518 +19519,19519,"FirstName19519 MiddleName19519",LastName19519 +19520,19520,"FirstName19520 MiddleName19520",LastName19520 +19521,19521,"FirstName19521 MiddleName19521",LastName19521 +19522,19522,"FirstName19522 MiddleName19522",LastName19522 +19523,19523,"FirstName19523 MiddleName19523",LastName19523 +19524,19524,"FirstName19524 MiddleName19524",LastName19524 +19525,19525,"FirstName19525 MiddleName19525",LastName19525 +19526,19526,"FirstName19526 MiddleName19526",LastName19526 +19527,19527,"FirstName19527 MiddleName19527",LastName19527 +19528,19528,"FirstName19528 MiddleName19528",LastName19528 +19529,19529,"FirstName19529 MiddleName19529",LastName19529 +19530,19530,"FirstName19530 MiddleName19530",LastName19530 +19531,19531,"FirstName19531 MiddleName19531",LastName19531 +19532,19532,"FirstName19532 MiddleName19532",LastName19532 +19533,19533,"FirstName19533 MiddleName19533",LastName19533 +19534,19534,"FirstName19534 MiddleName19534",LastName19534 +19535,19535,"FirstName19535 MiddleName19535",LastName19535 +19536,19536,"FirstName19536 MiddleName19536",LastName19536 +19537,19537,"FirstName19537 MiddleName19537",LastName19537 
+19538,19538,"FirstName19538 MiddleName19538",LastName19538 +19539,19539,"FirstName19539 MiddleName19539",LastName19539 +19540,19540,"FirstName19540 MiddleName19540",LastName19540 +19541,19541,"FirstName19541 MiddleName19541",LastName19541 +19542,19542,"FirstName19542 MiddleName19542",LastName19542 +19543,19543,"FirstName19543 MiddleName19543",LastName19543 +19544,19544,"FirstName19544 MiddleName19544",LastName19544 +19545,19545,"FirstName19545 MiddleName19545",LastName19545 +19546,19546,"FirstName19546 MiddleName19546",LastName19546 +19547,19547,"FirstName19547 MiddleName19547",LastName19547 +19548,19548,"FirstName19548 MiddleName19548",LastName19548 +19549,19549,"FirstName19549 MiddleName19549",LastName19549 +19550,19550,"FirstName19550 MiddleName19550",LastName19550 +19551,19551,"FirstName19551 MiddleName19551",LastName19551 +19552,19552,"FirstName19552 MiddleName19552",LastName19552 +19553,19553,"FirstName19553 MiddleName19553",LastName19553 +19554,19554,"FirstName19554 MiddleName19554",LastName19554 +19555,19555,"FirstName19555 MiddleName19555",LastName19555 +19556,19556,"FirstName19556 MiddleName19556",LastName19556 +19557,19557,"FirstName19557 MiddleName19557",LastName19557 +19558,19558,"FirstName19558 MiddleName19558",LastName19558 +19559,19559,"FirstName19559 MiddleName19559",LastName19559 +19560,19560,"FirstName19560 MiddleName19560",LastName19560 +19561,19561,"FirstName19561 MiddleName19561",LastName19561 +19562,19562,"FirstName19562 MiddleName19562",LastName19562 +19563,19563,"FirstName19563 MiddleName19563",LastName19563 +19564,19564,"FirstName19564 MiddleName19564",LastName19564 +19565,19565,"FirstName19565 MiddleName19565",LastName19565 +19566,19566,"FirstName19566 MiddleName19566",LastName19566 +19567,19567,"FirstName19567 MiddleName19567",LastName19567 +19568,19568,"FirstName19568 MiddleName19568",LastName19568 +19569,19569,"FirstName19569 MiddleName19569",LastName19569 +19570,19570,"FirstName19570 MiddleName19570",LastName19570 
+19571,19571,"FirstName19571 MiddleName19571",LastName19571 +19572,19572,"FirstName19572 MiddleName19572",LastName19572 +19573,19573,"FirstName19573 MiddleName19573",LastName19573 +19574,19574,"FirstName19574 MiddleName19574",LastName19574 +19575,19575,"FirstName19575 MiddleName19575",LastName19575 +19576,19576,"FirstName19576 MiddleName19576",LastName19576 +19577,19577,"FirstName19577 MiddleName19577",LastName19577 +19578,19578,"FirstName19578 MiddleName19578",LastName19578 +19579,19579,"FirstName19579 MiddleName19579",LastName19579 +19580,19580,"FirstName19580 MiddleName19580",LastName19580 +19581,19581,"FirstName19581 MiddleName19581",LastName19581 +19582,19582,"FirstName19582 MiddleName19582",LastName19582 +19583,19583,"FirstName19583 MiddleName19583",LastName19583 +19584,19584,"FirstName19584 MiddleName19584",LastName19584 +19585,19585,"FirstName19585 MiddleName19585",LastName19585 +19586,19586,"FirstName19586 MiddleName19586",LastName19586 +19587,19587,"FirstName19587 MiddleName19587",LastName19587 +19588,19588,"FirstName19588 MiddleName19588",LastName19588 +19589,19589,"FirstName19589 MiddleName19589",LastName19589 +19590,19590,"FirstName19590 MiddleName19590",LastName19590 +19591,19591,"FirstName19591 MiddleName19591",LastName19591 +19592,19592,"FirstName19592 MiddleName19592",LastName19592 +19593,19593,"FirstName19593 MiddleName19593",LastName19593 +19594,19594,"FirstName19594 MiddleName19594",LastName19594 +19595,19595,"FirstName19595 MiddleName19595",LastName19595 +19596,19596,"FirstName19596 MiddleName19596",LastName19596 +19597,19597,"FirstName19597 MiddleName19597",LastName19597 +19598,19598,"FirstName19598 MiddleName19598",LastName19598 +19599,19599,"FirstName19599 MiddleName19599",LastName19599 +19600,19600,"FirstName19600 MiddleName19600",LastName19600 +19601,19601,"FirstName19601 MiddleName19601",LastName19601 +19602,19602,"FirstName19602 MiddleName19602",LastName19602 +19603,19603,"FirstName19603 MiddleName19603",LastName19603 
+19604,19604,"FirstName19604 MiddleName19604",LastName19604 +19605,19605,"FirstName19605 MiddleName19605",LastName19605 +19606,19606,"FirstName19606 MiddleName19606",LastName19606 +19607,19607,"FirstName19607 MiddleName19607",LastName19607 +19608,19608,"FirstName19608 MiddleName19608",LastName19608 +19609,19609,"FirstName19609 MiddleName19609",LastName19609 +19610,19610,"FirstName19610 MiddleName19610",LastName19610 +19611,19611,"FirstName19611 MiddleName19611",LastName19611 +19612,19612,"FirstName19612 MiddleName19612",LastName19612 +19613,19613,"FirstName19613 MiddleName19613",LastName19613 +19614,19614,"FirstName19614 MiddleName19614",LastName19614 +19615,19615,"FirstName19615 MiddleName19615",LastName19615 +19616,19616,"FirstName19616 MiddleName19616",LastName19616 +19617,19617,"FirstName19617 MiddleName19617",LastName19617 +19618,19618,"FirstName19618 MiddleName19618",LastName19618 +19619,19619,"FirstName19619 MiddleName19619",LastName19619 +19620,19620,"FirstName19620 MiddleName19620",LastName19620 +19621,19621,"FirstName19621 MiddleName19621",LastName19621 +19622,19622,"FirstName19622 MiddleName19622",LastName19622 +19623,19623,"FirstName19623 MiddleName19623",LastName19623 +19624,19624,"FirstName19624 MiddleName19624",LastName19624 +19625,19625,"FirstName19625 MiddleName19625",LastName19625 +19626,19626,"FirstName19626 MiddleName19626",LastName19626 +19627,19627,"FirstName19627 MiddleName19627",LastName19627 +19628,19628,"FirstName19628 MiddleName19628",LastName19628 +19629,19629,"FirstName19629 MiddleName19629",LastName19629 +19630,19630,"FirstName19630 MiddleName19630",LastName19630 +19631,19631,"FirstName19631 MiddleName19631",LastName19631 +19632,19632,"FirstName19632 MiddleName19632",LastName19632 +19633,19633,"FirstName19633 MiddleName19633",LastName19633 +19634,19634,"FirstName19634 MiddleName19634",LastName19634 +19635,19635,"FirstName19635 MiddleName19635",LastName19635 +19636,19636,"FirstName19636 MiddleName19636",LastName19636 
+19637,19637,"FirstName19637 MiddleName19637",LastName19637 +19638,19638,"FirstName19638 MiddleName19638",LastName19638 +19639,19639,"FirstName19639 MiddleName19639",LastName19639 +19640,19640,"FirstName19640 MiddleName19640",LastName19640 +19641,19641,"FirstName19641 MiddleName19641",LastName19641 +19642,19642,"FirstName19642 MiddleName19642",LastName19642 +19643,19643,"FirstName19643 MiddleName19643",LastName19643 +19644,19644,"FirstName19644 MiddleName19644",LastName19644 +19645,19645,"FirstName19645 MiddleName19645",LastName19645 +19646,19646,"FirstName19646 MiddleName19646",LastName19646 +19647,19647,"FirstName19647 MiddleName19647",LastName19647 +19648,19648,"FirstName19648 MiddleName19648",LastName19648 +19649,19649,"FirstName19649 MiddleName19649",LastName19649 +19650,19650,"FirstName19650 MiddleName19650",LastName19650 +19651,19651,"FirstName19651 MiddleName19651",LastName19651 +19652,19652,"FirstName19652 MiddleName19652",LastName19652 +19653,19653,"FirstName19653 MiddleName19653",LastName19653 +19654,19654,"FirstName19654 MiddleName19654",LastName19654 +19655,19655,"FirstName19655 MiddleName19655",LastName19655 +19656,19656,"FirstName19656 MiddleName19656",LastName19656 +19657,19657,"FirstName19657 MiddleName19657",LastName19657 +19658,19658,"FirstName19658 MiddleName19658",LastName19658 +19659,19659,"FirstName19659 MiddleName19659",LastName19659 +19660,19660,"FirstName19660 MiddleName19660",LastName19660 +19661,19661,"FirstName19661 MiddleName19661",LastName19661 +19662,19662,"FirstName19662 MiddleName19662",LastName19662 +19663,19663,"FirstName19663 MiddleName19663",LastName19663 +19664,19664,"FirstName19664 MiddleName19664",LastName19664 +19665,19665,"FirstName19665 MiddleName19665",LastName19665 +19666,19666,"FirstName19666 MiddleName19666",LastName19666 +19667,19667,"FirstName19667 MiddleName19667",LastName19667 +19668,19668,"FirstName19668 MiddleName19668",LastName19668 +19669,19669,"FirstName19669 MiddleName19669",LastName19669 
+19670,19670,"FirstName19670 MiddleName19670",LastName19670 +19671,19671,"FirstName19671 MiddleName19671",LastName19671 +19672,19672,"FirstName19672 MiddleName19672",LastName19672 +19673,19673,"FirstName19673 MiddleName19673",LastName19673 +19674,19674,"FirstName19674 MiddleName19674",LastName19674 +19675,19675,"FirstName19675 MiddleName19675",LastName19675 +19676,19676,"FirstName19676 MiddleName19676",LastName19676 +19677,19677,"FirstName19677 MiddleName19677",LastName19677 +19678,19678,"FirstName19678 MiddleName19678",LastName19678 +19679,19679,"FirstName19679 MiddleName19679",LastName19679 +19680,19680,"FirstName19680 MiddleName19680",LastName19680 +19681,19681,"FirstName19681 MiddleName19681",LastName19681 +19682,19682,"FirstName19682 MiddleName19682",LastName19682 +19683,19683,"FirstName19683 MiddleName19683",LastName19683 +19684,19684,"FirstName19684 MiddleName19684",LastName19684 +19685,19685,"FirstName19685 MiddleName19685",LastName19685 +19686,19686,"FirstName19686 MiddleName19686",LastName19686 +19687,19687,"FirstName19687 MiddleName19687",LastName19687 +19688,19688,"FirstName19688 MiddleName19688",LastName19688 +19689,19689,"FirstName19689 MiddleName19689",LastName19689 +19690,19690,"FirstName19690 MiddleName19690",LastName19690 +19691,19691,"FirstName19691 MiddleName19691",LastName19691 +19692,19692,"FirstName19692 MiddleName19692",LastName19692 +19693,19693,"FirstName19693 MiddleName19693",LastName19693 +19694,19694,"FirstName19694 MiddleName19694",LastName19694 +19695,19695,"FirstName19695 MiddleName19695",LastName19695 +19696,19696,"FirstName19696 MiddleName19696",LastName19696 +19697,19697,"FirstName19697 MiddleName19697",LastName19697 +19698,19698,"FirstName19698 MiddleName19698",LastName19698 +19699,19699,"FirstName19699 MiddleName19699",LastName19699 +19700,19700,"FirstName19700 MiddleName19700",LastName19700 +19701,19701,"FirstName19701 MiddleName19701",LastName19701 +19702,19702,"FirstName19702 MiddleName19702",LastName19702 
+19703,19703,"FirstName19703 MiddleName19703",LastName19703 +19704,19704,"FirstName19704 MiddleName19704",LastName19704 +19705,19705,"FirstName19705 MiddleName19705",LastName19705 +19706,19706,"FirstName19706 MiddleName19706",LastName19706 +19707,19707,"FirstName19707 MiddleName19707",LastName19707 +19708,19708,"FirstName19708 MiddleName19708",LastName19708 +19709,19709,"FirstName19709 MiddleName19709",LastName19709 +19710,19710,"FirstName19710 MiddleName19710",LastName19710 +19711,19711,"FirstName19711 MiddleName19711",LastName19711 +19712,19712,"FirstName19712 MiddleName19712",LastName19712 +19713,19713,"FirstName19713 MiddleName19713",LastName19713 +19714,19714,"FirstName19714 MiddleName19714",LastName19714 +19715,19715,"FirstName19715 MiddleName19715",LastName19715 +19716,19716,"FirstName19716 MiddleName19716",LastName19716 +19717,19717,"FirstName19717 MiddleName19717",LastName19717 +19718,19718,"FirstName19718 MiddleName19718",LastName19718 +19719,19719,"FirstName19719 MiddleName19719",LastName19719 +19720,19720,"FirstName19720 MiddleName19720",LastName19720 +19721,19721,"FirstName19721 MiddleName19721",LastName19721 +19722,19722,"FirstName19722 MiddleName19722",LastName19722 +19723,19723,"FirstName19723 MiddleName19723",LastName19723 +19724,19724,"FirstName19724 MiddleName19724",LastName19724 +19725,19725,"FirstName19725 MiddleName19725",LastName19725 +19726,19726,"FirstName19726 MiddleName19726",LastName19726 +19727,19727,"FirstName19727 MiddleName19727",LastName19727 +19728,19728,"FirstName19728 MiddleName19728",LastName19728 +19729,19729,"FirstName19729 MiddleName19729",LastName19729 +19730,19730,"FirstName19730 MiddleName19730",LastName19730 +19731,19731,"FirstName19731 MiddleName19731",LastName19731 +19732,19732,"FirstName19732 MiddleName19732",LastName19732 +19733,19733,"FirstName19733 MiddleName19733",LastName19733 +19734,19734,"FirstName19734 MiddleName19734",LastName19734 +19735,19735,"FirstName19735 MiddleName19735",LastName19735 
+19736,19736,"FirstName19736 MiddleName19736",LastName19736 +19737,19737,"FirstName19737 MiddleName19737",LastName19737 +19738,19738,"FirstName19738 MiddleName19738",LastName19738 +19739,19739,"FirstName19739 MiddleName19739",LastName19739 +19740,19740,"FirstName19740 MiddleName19740",LastName19740 +19741,19741,"FirstName19741 MiddleName19741",LastName19741 +19742,19742,"FirstName19742 MiddleName19742",LastName19742 +19743,19743,"FirstName19743 MiddleName19743",LastName19743 +19744,19744,"FirstName19744 MiddleName19744",LastName19744 +19745,19745,"FirstName19745 MiddleName19745",LastName19745 +19746,19746,"FirstName19746 MiddleName19746",LastName19746 +19747,19747,"FirstName19747 MiddleName19747",LastName19747 +19748,19748,"FirstName19748 MiddleName19748",LastName19748 +19749,19749,"FirstName19749 MiddleName19749",LastName19749 +19750,19750,"FirstName19750 MiddleName19750",LastName19750 +19751,19751,"FirstName19751 MiddleName19751",LastName19751 +19752,19752,"FirstName19752 MiddleName19752",LastName19752 +19753,19753,"FirstName19753 MiddleName19753",LastName19753 +19754,19754,"FirstName19754 MiddleName19754",LastName19754 +19755,19755,"FirstName19755 MiddleName19755",LastName19755 +19756,19756,"FirstName19756 MiddleName19756",LastName19756 +19757,19757,"FirstName19757 MiddleName19757",LastName19757 +19758,19758,"FirstName19758 MiddleName19758",LastName19758 +19759,19759,"FirstName19759 MiddleName19759",LastName19759 +19760,19760,"FirstName19760 MiddleName19760",LastName19760 +19761,19761,"FirstName19761 MiddleName19761",LastName19761 +19762,19762,"FirstName19762 MiddleName19762",LastName19762 +19763,19763,"FirstName19763 MiddleName19763",LastName19763 +19764,19764,"FirstName19764 MiddleName19764",LastName19764 +19765,19765,"FirstName19765 MiddleName19765",LastName19765 +19766,19766,"FirstName19766 MiddleName19766",LastName19766 +19767,19767,"FirstName19767 MiddleName19767",LastName19767 +19768,19768,"FirstName19768 MiddleName19768",LastName19768 
+19769,19769,"FirstName19769 MiddleName19769",LastName19769 +19770,19770,"FirstName19770 MiddleName19770",LastName19770 +19771,19771,"FirstName19771 MiddleName19771",LastName19771 +19772,19772,"FirstName19772 MiddleName19772",LastName19772 +19773,19773,"FirstName19773 MiddleName19773",LastName19773 +19774,19774,"FirstName19774 MiddleName19774",LastName19774 +19775,19775,"FirstName19775 MiddleName19775",LastName19775 +19776,19776,"FirstName19776 MiddleName19776",LastName19776 +19777,19777,"FirstName19777 MiddleName19777",LastName19777 +19778,19778,"FirstName19778 MiddleName19778",LastName19778 +19779,19779,"FirstName19779 MiddleName19779",LastName19779 +19780,19780,"FirstName19780 MiddleName19780",LastName19780 +19781,19781,"FirstName19781 MiddleName19781",LastName19781 +19782,19782,"FirstName19782 MiddleName19782",LastName19782 +19783,19783,"FirstName19783 MiddleName19783",LastName19783 +19784,19784,"FirstName19784 MiddleName19784",LastName19784 +19785,19785,"FirstName19785 MiddleName19785",LastName19785 +19786,19786,"FirstName19786 MiddleName19786",LastName19786 +19787,19787,"FirstName19787 MiddleName19787",LastName19787 +19788,19788,"FirstName19788 MiddleName19788",LastName19788 +19789,19789,"FirstName19789 MiddleName19789",LastName19789 +19790,19790,"FirstName19790 MiddleName19790",LastName19790 +19791,19791,"FirstName19791 MiddleName19791",LastName19791 +19792,19792,"FirstName19792 MiddleName19792",LastName19792 +19793,19793,"FirstName19793 MiddleName19793",LastName19793 +19794,19794,"FirstName19794 MiddleName19794",LastName19794 +19795,19795,"FirstName19795 MiddleName19795",LastName19795 +19796,19796,"FirstName19796 MiddleName19796",LastName19796 +19797,19797,"FirstName19797 MiddleName19797",LastName19797 +19798,19798,"FirstName19798 MiddleName19798",LastName19798 +19799,19799,"FirstName19799 MiddleName19799",LastName19799 +19800,19800,"FirstName19800 MiddleName19800",LastName19800 +19801,19801,"FirstName19801 MiddleName19801",LastName19801 
+19802,19802,"FirstName19802 MiddleName19802",LastName19802 +19803,19803,"FirstName19803 MiddleName19803",LastName19803 +19804,19804,"FirstName19804 MiddleName19804",LastName19804 +19805,19805,"FirstName19805 MiddleName19805",LastName19805 +19806,19806,"FirstName19806 MiddleName19806",LastName19806 +19807,19807,"FirstName19807 MiddleName19807",LastName19807 +19808,19808,"FirstName19808 MiddleName19808",LastName19808 +19809,19809,"FirstName19809 MiddleName19809",LastName19809 +19810,19810,"FirstName19810 MiddleName19810",LastName19810 +19811,19811,"FirstName19811 MiddleName19811",LastName19811 +19812,19812,"FirstName19812 MiddleName19812",LastName19812 +19813,19813,"FirstName19813 MiddleName19813",LastName19813 +19814,19814,"FirstName19814 MiddleName19814",LastName19814 +19815,19815,"FirstName19815 MiddleName19815",LastName19815 +19816,19816,"FirstName19816 MiddleName19816",LastName19816 +19817,19817,"FirstName19817 MiddleName19817",LastName19817 +19818,19818,"FirstName19818 MiddleName19818",LastName19818 +19819,19819,"FirstName19819 MiddleName19819",LastName19819 +19820,19820,"FirstName19820 MiddleName19820",LastName19820 +19821,19821,"FirstName19821 MiddleName19821",LastName19821 +19822,19822,"FirstName19822 MiddleName19822",LastName19822 +19823,19823,"FirstName19823 MiddleName19823",LastName19823 +19824,19824,"FirstName19824 MiddleName19824",LastName19824 +19825,19825,"FirstName19825 MiddleName19825",LastName19825 +19826,19826,"FirstName19826 MiddleName19826",LastName19826 +19827,19827,"FirstName19827 MiddleName19827",LastName19827 +19828,19828,"FirstName19828 MiddleName19828",LastName19828 +19829,19829,"FirstName19829 MiddleName19829",LastName19829 +19830,19830,"FirstName19830 MiddleName19830",LastName19830 +19831,19831,"FirstName19831 MiddleName19831",LastName19831 +19832,19832,"FirstName19832 MiddleName19832",LastName19832 +19833,19833,"FirstName19833 MiddleName19833",LastName19833 +19834,19834,"FirstName19834 MiddleName19834",LastName19834 
+19835,19835,"FirstName19835 MiddleName19835",LastName19835 +19836,19836,"FirstName19836 MiddleName19836",LastName19836 +19837,19837,"FirstName19837 MiddleName19837",LastName19837 +19838,19838,"FirstName19838 MiddleName19838",LastName19838 +19839,19839,"FirstName19839 MiddleName19839",LastName19839 +19840,19840,"FirstName19840 MiddleName19840",LastName19840 +19841,19841,"FirstName19841 MiddleName19841",LastName19841 +19842,19842,"FirstName19842 MiddleName19842",LastName19842 +19843,19843,"FirstName19843 MiddleName19843",LastName19843 +19844,19844,"FirstName19844 MiddleName19844",LastName19844 +19845,19845,"FirstName19845 MiddleName19845",LastName19845 +19846,19846,"FirstName19846 MiddleName19846",LastName19846 +19847,19847,"FirstName19847 MiddleName19847",LastName19847 +19848,19848,"FirstName19848 MiddleName19848",LastName19848 +19849,19849,"FirstName19849 MiddleName19849",LastName19849 +19850,19850,"FirstName19850 MiddleName19850",LastName19850 +19851,19851,"FirstName19851 MiddleName19851",LastName19851 +19852,19852,"FirstName19852 MiddleName19852",LastName19852 +19853,19853,"FirstName19853 MiddleName19853",LastName19853 +19854,19854,"FirstName19854 MiddleName19854",LastName19854 +19855,19855,"FirstName19855 MiddleName19855",LastName19855 +19856,19856,"FirstName19856 MiddleName19856",LastName19856 +19857,19857,"FirstName19857 MiddleName19857",LastName19857 +19858,19858,"FirstName19858 MiddleName19858",LastName19858 +19859,19859,"FirstName19859 MiddleName19859",LastName19859 +19860,19860,"FirstName19860 MiddleName19860",LastName19860 +19861,19861,"FirstName19861 MiddleName19861",LastName19861 +19862,19862,"FirstName19862 MiddleName19862",LastName19862 +19863,19863,"FirstName19863 MiddleName19863",LastName19863 +19864,19864,"FirstName19864 MiddleName19864",LastName19864 +19865,19865,"FirstName19865 MiddleName19865",LastName19865 +19866,19866,"FirstName19866 MiddleName19866",LastName19866 +19867,19867,"FirstName19867 MiddleName19867",LastName19867 
+19868,19868,"FirstName19868 MiddleName19868",LastName19868 +19869,19869,"FirstName19869 MiddleName19869",LastName19869 +19870,19870,"FirstName19870 MiddleName19870",LastName19870 +19871,19871,"FirstName19871 MiddleName19871",LastName19871 +19872,19872,"FirstName19872 MiddleName19872",LastName19872 +19873,19873,"FirstName19873 MiddleName19873",LastName19873 +19874,19874,"FirstName19874 MiddleName19874",LastName19874 +19875,19875,"FirstName19875 MiddleName19875",LastName19875 +19876,19876,"FirstName19876 MiddleName19876",LastName19876 +19877,19877,"FirstName19877 MiddleName19877",LastName19877 +19878,19878,"FirstName19878 MiddleName19878",LastName19878 +19879,19879,"FirstName19879 MiddleName19879",LastName19879 +19880,19880,"FirstName19880 MiddleName19880",LastName19880 +19881,19881,"FirstName19881 MiddleName19881",LastName19881 +19882,19882,"FirstName19882 MiddleName19882",LastName19882 +19883,19883,"FirstName19883 MiddleName19883",LastName19883 +19884,19884,"FirstName19884 MiddleName19884",LastName19884 +19885,19885,"FirstName19885 MiddleName19885",LastName19885 +19886,19886,"FirstName19886 MiddleName19886",LastName19886 +19887,19887,"FirstName19887 MiddleName19887",LastName19887 +19888,19888,"FirstName19888 MiddleName19888",LastName19888 +19889,19889,"FirstName19889 MiddleName19889",LastName19889 +19890,19890,"FirstName19890 MiddleName19890",LastName19890 +19891,19891,"FirstName19891 MiddleName19891",LastName19891 +19892,19892,"FirstName19892 MiddleName19892",LastName19892 +19893,19893,"FirstName19893 MiddleName19893",LastName19893 +19894,19894,"FirstName19894 MiddleName19894",LastName19894 +19895,19895,"FirstName19895 MiddleName19895",LastName19895 +19896,19896,"FirstName19896 MiddleName19896",LastName19896 +19897,19897,"FirstName19897 MiddleName19897",LastName19897 +19898,19898,"FirstName19898 MiddleName19898",LastName19898 +19899,19899,"FirstName19899 MiddleName19899",LastName19899 +19900,19900,"FirstName19900 MiddleName19900",LastName19900 
+19901,19901,"FirstName19901 MiddleName19901",LastName19901 +19902,19902,"FirstName19902 MiddleName19902",LastName19902 +19903,19903,"FirstName19903 MiddleName19903",LastName19903 +19904,19904,"FirstName19904 MiddleName19904",LastName19904 +19905,19905,"FirstName19905 MiddleName19905",LastName19905 +19906,19906,"FirstName19906 MiddleName19906",LastName19906 +19907,19907,"FirstName19907 MiddleName19907",LastName19907 +19908,19908,"FirstName19908 MiddleName19908",LastName19908 +19909,19909,"FirstName19909 MiddleName19909",LastName19909 +19910,19910,"FirstName19910 MiddleName19910",LastName19910 +19911,19911,"FirstName19911 MiddleName19911",LastName19911 +19912,19912,"FirstName19912 MiddleName19912",LastName19912 +19913,19913,"FirstName19913 MiddleName19913",LastName19913 +19914,19914,"FirstName19914 MiddleName19914",LastName19914 +19915,19915,"FirstName19915 MiddleName19915",LastName19915 +19916,19916,"FirstName19916 MiddleName19916",LastName19916 +19917,19917,"FirstName19917 MiddleName19917",LastName19917 +19918,19918,"FirstName19918 MiddleName19918",LastName19918 +19919,19919,"FirstName19919 MiddleName19919",LastName19919 +19920,19920,"FirstName19920 MiddleName19920",LastName19920 +19921,19921,"FirstName19921 MiddleName19921",LastName19921 +19922,19922,"FirstName19922 MiddleName19922",LastName19922 +19923,19923,"FirstName19923 MiddleName19923",LastName19923 +19924,19924,"FirstName19924 MiddleName19924",LastName19924 +19925,19925,"FirstName19925 MiddleName19925",LastName19925 +19926,19926,"FirstName19926 MiddleName19926",LastName19926 +19927,19927,"FirstName19927 MiddleName19927",LastName19927 +19928,19928,"FirstName19928 MiddleName19928",LastName19928 +19929,19929,"FirstName19929 MiddleName19929",LastName19929 +19930,19930,"FirstName19930 MiddleName19930",LastName19930 +19931,19931,"FirstName19931 MiddleName19931",LastName19931 +19932,19932,"FirstName19932 MiddleName19932",LastName19932 +19933,19933,"FirstName19933 MiddleName19933",LastName19933 
+19934,19934,"FirstName19934 MiddleName19934",LastName19934 +19935,19935,"FirstName19935 MiddleName19935",LastName19935 +19936,19936,"FirstName19936 MiddleName19936",LastName19936 +19937,19937,"FirstName19937 MiddleName19937",LastName19937 +19938,19938,"FirstName19938 MiddleName19938",LastName19938 +19939,19939,"FirstName19939 MiddleName19939",LastName19939 +19940,19940,"FirstName19940 MiddleName19940",LastName19940 +19941,19941,"FirstName19941 MiddleName19941",LastName19941 +19942,19942,"FirstName19942 MiddleName19942",LastName19942 +19943,19943,"FirstName19943 MiddleName19943",LastName19943 +19944,19944,"FirstName19944 MiddleName19944",LastName19944 +19945,19945,"FirstName19945 MiddleName19945",LastName19945 +19946,19946,"FirstName19946 MiddleName19946",LastName19946 +19947,19947,"FirstName19947 MiddleName19947",LastName19947 +19948,19948,"FirstName19948 MiddleName19948",LastName19948 +19949,19949,"FirstName19949 MiddleName19949",LastName19949 +19950,19950,"FirstName19950 MiddleName19950",LastName19950 +19951,19951,"FirstName19951 MiddleName19951",LastName19951 +19952,19952,"FirstName19952 MiddleName19952",LastName19952 +19953,19953,"FirstName19953 MiddleName19953",LastName19953 +19954,19954,"FirstName19954 MiddleName19954",LastName19954 +19955,19955,"FirstName19955 MiddleName19955",LastName19955 +19956,19956,"FirstName19956 MiddleName19956",LastName19956 +19957,19957,"FirstName19957 MiddleName19957",LastName19957 +19958,19958,"FirstName19958 MiddleName19958",LastName19958 +19959,19959,"FirstName19959 MiddleName19959",LastName19959 +19960,19960,"FirstName19960 MiddleName19960",LastName19960 +19961,19961,"FirstName19961 MiddleName19961",LastName19961 +19962,19962,"FirstName19962 MiddleName19962",LastName19962 +19963,19963,"FirstName19963 MiddleName19963",LastName19963 +19964,19964,"FirstName19964 MiddleName19964",LastName19964 +19965,19965,"FirstName19965 MiddleName19965",LastName19965 +19966,19966,"FirstName19966 MiddleName19966",LastName19966 
+19967,19967,"FirstName19967 MiddleName19967",LastName19967 +19968,19968,"FirstName19968 MiddleName19968",LastName19968 +19969,19969,"FirstName19969 MiddleName19969",LastName19969 +19970,19970,"FirstName19970 MiddleName19970",LastName19970 +19971,19971,"FirstName19971 MiddleName19971",LastName19971 +19972,19972,"FirstName19972 MiddleName19972",LastName19972 +19973,19973,"FirstName19973 MiddleName19973",LastName19973 +19974,19974,"FirstName19974 MiddleName19974",LastName19974 +19975,19975,"FirstName19975 MiddleName19975",LastName19975 +19976,19976,"FirstName19976 MiddleName19976",LastName19976 +19977,19977,"FirstName19977 MiddleName19977",LastName19977 +19978,19978,"FirstName19978 MiddleName19978",LastName19978 +19979,19979,"FirstName19979 MiddleName19979",LastName19979 +19980,19980,"FirstName19980 MiddleName19980",LastName19980 +19981,19981,"FirstName19981 MiddleName19981",LastName19981 +19982,19982,"FirstName19982 MiddleName19982",LastName19982 +19983,19983,"FirstName19983 MiddleName19983",LastName19983 +19984,19984,"FirstName19984 MiddleName19984",LastName19984 +19985,19985,"FirstName19985 MiddleName19985",LastName19985 +19986,19986,"FirstName19986 MiddleName19986",LastName19986 +19987,19987,"FirstName19987 MiddleName19987",LastName19987 +19988,19988,"FirstName19988 MiddleName19988",LastName19988 +19989,19989,"FirstName19989 MiddleName19989",LastName19989 +19990,19990,"FirstName19990 MiddleName19990",LastName19990 +19991,19991,"FirstName19991 MiddleName19991",LastName19991 +19992,19992,"FirstName19992 MiddleName19992",LastName19992 +19993,19993,"FirstName19993 MiddleName19993",LastName19993 +19994,19994,"FirstName19994 MiddleName19994",LastName19994 +19995,19995,"FirstName19995 MiddleName19995",LastName19995 +19996,19996,"FirstName19996 MiddleName19996",LastName19996 +19997,19997,"FirstName19997 MiddleName19997",LastName19997 +19998,19998,"FirstName19998 MiddleName19998",LastName19998 +19999,19999,"FirstName19999 MiddleName19999",LastName19999 
+20000,20000,"FirstName20000 MiddleName20000",LastName20000
\ No newline at end of file
diff --git a/modules/cloud/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/cloud/TcpDiscoveryCloudIpFinderSelfTest.java b/modules/cloud/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/cloud/TcpDiscoveryCloudIpFinderSelfTest.java
index c754553ef66e4..f76c9037929e7 100644
--- a/modules/cloud/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/cloud/TcpDiscoveryCloudIpFinderSelfTest.java
+++ b/modules/cloud/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/cloud/TcpDiscoveryCloudIpFinderSelfTest.java
@@ -25,10 +25,15 @@
 import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAbstractSelfTest;
 import org.apache.ignite.testsuites.IgniteCloudTestSuite;
 import org.apache.ignite.testsuites.IgniteIgnore;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * TcpDiscoveryCloudIpFinder test.
  */
+@RunWith(JUnit4.class)
 public class TcpDiscoveryCloudIpFinderSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest {
     /**
@@ -40,18 +45,20 @@ public TcpDiscoveryCloudIpFinderSelfTest() throws Exception {
         // No-op.
     }
 
-    @Override protected void beforeTest() throws Exception {
+    /** {@inheritDoc} */
+    @Override protected void beforeTest() {
         // No-op.
     }
 
-    /* {@inheritDoc} */
-    @Override protected TcpDiscoveryCloudIpFinder ipFinder() throws Exception {
+    /** {@inheritDoc} */
+    @Override protected TcpDiscoveryCloudIpFinder ipFinder() {
         // No-op.
         return null;
     }
 
-    /* {@inheritDoc} */
-    @Override public void testIpFinder() throws Exception {
+    /** {@inheritDoc} */
+    @Test
+    @Override public void testIpFinder() {
         // No-op
     }
 
@@ -61,6 +68,7 @@ public TcpDiscoveryCloudIpFinderSelfTest() throws Exception {
      * @throws Exception If any error occurs.
      */
     @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-845")
+    @Test
     public void testAmazonWebServices() throws Exception {
         testCloudProvider("aws-ec2");
     }
@@ -71,6 +79,7 @@ public void testAmazonWebServices() throws Exception {
      * @throws Exception If any error occurs.
      */
     @IgniteIgnore("https://issues.apache.org/jira/browse/IGNITE-1585")
+    @Test
     public void testGoogleComputeEngine() throws Exception {
         testCloudProvider("google-compute-engine");
     }
@@ -80,9 +89,9 @@ public void testGoogleComputeEngine() throws Exception {
      *
      * @throws Exception If any error occurs.
      */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-9444")
+    @Test
     public void testRackspace() throws Exception {
-        fail("https://issues.apache.org/jira/browse/IGNITE-9444");
-
         testCloudProvider("rackspace-cloudservers-us");
     }
@@ -123,4 +132,4 @@ private void testCloudProvider(String provider) throws Exception {
 
         assert addresses.size() == ipFinder.getRegisteredAddresses().size();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/cloud/src/test/java/org/apache/ignite/testsuites/IgniteCloudTestSuite.java b/modules/cloud/src/test/java/org/apache/ignite/testsuites/IgniteCloudTestSuite.java
index 632cddc6c7bbc..5be6d3b45449b 100644
--- a/modules/cloud/src/test/java/org/apache/ignite/testsuites/IgniteCloudTestSuite.java
+++ b/modules/cloud/src/test/java/org/apache/ignite/testsuites/IgniteCloudTestSuite.java
@@ -19,23 +19,26 @@
 
 import java.util.Collection;
 import java.util.LinkedList;
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.cloud.TcpDiscoveryCloudIpFinderSelfTest;
 import org.apache.ignite.testframework.IgniteTestSuite;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Ignite Cloud integration test.
  */
-public class IgniteCloudTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCloudTestSuite {
     /**
      * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new IgniteTestSuite("Cloud Integration Test Suite");
 
         // Cloud Nodes IP finder.
-        suite.addTestSuite(TcpDiscoveryCloudIpFinderSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(TcpDiscoveryCloudIpFinderSelfTest.class));
 
         return suite;
     }
@@ -51,7 +54,7 @@ public static String getAccessKey(String provider) {
         String key = System.getenv("test." + provider + ".access.key");
 
         assert key != null : "Environment variable 'test." + provider + ".access.key' is not set";
-        
+
         return key;
     }
@@ -111,4 +114,4 @@ public static Collection getRegions(String provider) {
 
         return list;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/codegen/src/main/java/org/apache/ignite/codegen/MessageCodeGenerator.java b/modules/codegen/src/main/java/org/apache/ignite/codegen/MessageCodeGenerator.java
index bcb9ef432e9f8..e004cf0c74919 100644
--- a/modules/codegen/src/main/java/org/apache/ignite/codegen/MessageCodeGenerator.java
+++ b/modules/codegen/src/main/java/org/apache/ignite/codegen/MessageCodeGenerator.java
@@ -43,8 +43,7 @@
 import org.apache.ignite.internal.GridDirectMap;
 import org.apache.ignite.internal.GridDirectTransient;
 import org.apache.ignite.internal.IgniteCodeGeneratingFail;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistRequest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistResponse;
+import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
 import org.apache.ignite.internal.util.IgniteUtils;
 import org.apache.ignite.internal.util.typedef.internal.SB;
 import org.apache.ignite.internal.util.typedef.internal.U;
@@ -117,6 +116,7 @@ public class MessageCodeGenerator {
         TYPES.put(BitSet.class, MessageCollectionItemType.BIT_SET);
         TYPES.put(UUID.class, MessageCollectionItemType.UUID);
         TYPES.put(IgniteUuid.class, MessageCollectionItemType.IGNITE_UUID);
+        TYPES.put(AffinityTopologyVersion.class, MessageCollectionItemType.AFFINITY_TOPOLOGY_VERSION);
     }
 
     /**
@@ -168,11 +168,12 @@ public static void main(String[] args) throws Exception {
 
         MessageCodeGenerator gen = new MessageCodeGenerator(srcDir);
 
-//        gen.generateAll(true);
+//        gen.generateAndWrite(ProbedTx.class);
+//        gen.generateAndWrite(DeadlockProbe.class);
 
-        gen.generateAndWrite(GridNearTxEnlistResponse.class);
+//        gen.generateAll(true);
 
-//        gen.generateAndWrite(GridNearAtomicUpdateRequest.class);
+//        gen.generateAndWrite(GridCacheMessage.class);
 
 //        gen.generateAndWrite(GridMessageCollection.class);
 //        gen.generateAndWrite(DataStreamerEntry.class);
@@ -181,7 +182,12 @@
 //        gen.generateAndWrite(GridDistributedLockResponse.class);
 //        gen.generateAndWrite(GridNearLockRequest.class);
 //        gen.generateAndWrite(GridNearLockResponse.class);
+//        gen.generateAndWrite(GridNearLockRequest.class);
 //        gen.generateAndWrite(GridDhtLockRequest.class);
+//        gen.generateAndWrite(GridNearSingleGetRequest.class);
+//        gen.generateAndWrite(GridNearGetRequest.class);
+//        gen.generateAndWrite(GridDhtTxPrepareRequest.class);
+//        gen.generateAndWrite(GridNearTxPrepareRequest.class);
 //        gen.generateAndWrite(GridDhtLockResponse.class);
 //
 //        gen.generateAndWrite(GridDistributedTxPrepareRequest.class);
@@ -239,6 +245,8 @@ public static void main(String[] args) throws Exception {
 //        gen.generateAndWrite(GridH2DmlResponse.class);
 //        gen.generateAndWrite(GridNearTxEnlistRequest.class);
 //        gen.generateAndWrite(GridNearTxEnlistResponse.class);
+//        gen.generateAndWrite(GenerateEncryptionKeyRequest.class);
+//        gen.generateAndWrite(GenerateEncryptionKeyResponse.class);
     }
 
     /**
@@ -662,6 +670,8 @@ else if (type == UUID.class)
             returnFalseIfFailed(write, "writer.writeUuid", field, getExpr);
         else if (type == IgniteUuid.class)
             returnFalseIfFailed(write, "writer.writeIgniteUuid", field, getExpr);
+        else if (type == AffinityTopologyVersion.class)
+            returnFalseIfFailed(write, "writer.writeAffinityTopologyVersion", field, getExpr);
         else if (type.isEnum()) {
             String arg = getExpr + " != null ? (byte)" + getExpr + ".ordinal() : -1";
 
@@ -744,6 +754,8 @@ else if (type == UUID.class)
             returnFalseIfReadFailed(name, "reader.readUuid", setExpr, field);
         else if (type == IgniteUuid.class)
             returnFalseIfReadFailed(name, "reader.readIgniteUuid", setExpr, field);
+        else if (type == AffinityTopologyVersion.class)
+            returnFalseIfReadFailed(name, "reader.readAffinityTopologyVersion", setExpr, field);
         else if (type.isEnum()) {
             String loc = name + "Ord";
 
diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/PdsWithTtlCompatibilityTest.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/PdsWithTtlCompatibilityTest.java
index 946caddb5f203..ffb05bb2e2102 100644
--- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/PdsWithTtlCompatibilityTest.java
+++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/PdsWithTtlCompatibilityTest.java
@@ -43,6 +43,9 @@
 import org.apache.ignite.lang.IgniteInClosure;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test PendingTree upgrading to per-partition basis. Test fill cache with persistence enabled and with ExpirePolicy
@@ -51,6 +54,7 @@
  * Note: Test for ignite-2.3 version will always fails due to entry ttl update fails with assertion on checkpoint lock
  * check.
  */
+@RunWith(JUnit4.class)
 public class PdsWithTtlCompatibilityTest extends IgnitePersistenceCompatibilityAbstractTest {
     /** */
     static final String TEST_CACHE_NAME = PdsWithTtlCompatibilityTest.class.getSimpleName();
@@ -84,6 +88,7 @@ public class PdsWithTtlCompatibilityTest extends IgnitePersistenceCompatibilityA
      *
      * @throws Exception If failed.
      */
+    @Test
    public void testNodeStartByOldVersionPersistenceData_2_1() throws Exception {
        doTestStartupWithOldVersion("2.1.0");
    }
diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/FoldersReuseCompatibilityTest.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/FoldersReuseCompatibilityTest.java
index e04f39f99321d..a667c15267375 100644
--- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/FoldersReuseCompatibilityTest.java
+++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/FoldersReuseCompatibilityTest.java
@@ -36,12 +36,16 @@
 import org.apache.ignite.lang.IgniteInClosure;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.jetbrains.annotations.NotNull;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.internal.processors.cache.persistence.filename.PdsConsistentIdProcessor.parseSubFolderName;
 
 /**
  * Test for new and old style persistent storage folders generation and compatible startup of current ignite version
  */
+@RunWith(JUnit4.class)
 public class FoldersReuseCompatibilityTest extends IgnitePersistenceCompatibilityAbstractTest {
     /** Cache name for test. */
     private static final String CACHE_NAME = "dummy";
@@ -80,6 +84,7 @@ public void ignored_testFoldersReuseCompatibility_2_3() throws Exception {
      *
      * @throws Exception if failed.
      */
+    @Test
     public void testFoldersReuseCompatibility_2_2() throws Exception {
         runFoldersReuse("2.2.0");
     }
@@ -90,6 +95,7 @@ public void testFoldersReuseCompatibility_2_2() throws Exception {
      *
      * @throws Exception if failed.
      */
+    @Test
     public void testFoldersReuseCompatibility_2_1() throws Exception {
         runFoldersReuse("2.1.0");
     }
diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/MigratingToWalV2SerializerWithCompactionTest.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/MigratingToWalV2SerializerWithCompactionTest.java
index d4c58f8f4b3c5..84d0cfa610e63 100644
--- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/MigratingToWalV2SerializerWithCompactionTest.java
+++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/MigratingToWalV2SerializerWithCompactionTest.java
@@ -34,10 +34,14 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteInClosure;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Saves data using previous version of ignite and then load this data using actual version
 */
+@RunWith(JUnit4.class)
 public class MigratingToWalV2SerializerWithCompactionTest extends IgnitePersistenceCompatibilityAbstractTest {
     /** */
     private static final String TEST_CACHE_NAME = MigratingToWalV2SerializerWithCompactionTest.class.getSimpleName();
@@ -78,6 +82,7 @@ public class MigratingToWalV2SerializerWithCompactionTest extends IgnitePersiste
      *
      * @throws Exception If failed.
*/ + @Test public void testCompactingOldWalFiles() throws Exception { doTestStartupWithOldVersion("2.3.0"); } diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/PersistenceBasicCompatibilityTest.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/PersistenceBasicCompatibilityTest.java index f27caa34a2843..8088dda909bb0 100644 --- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/PersistenceBasicCompatibilityTest.java +++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/PersistenceBasicCompatibilityTest.java @@ -38,10 +38,14 @@ import org.apache.ignite.internal.processors.cache.GridCacheAbstractFullApiSelfTest; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Saves data using previous version of ignite and then load this data using actual version. */ +@RunWith(JUnit4.class) public class PersistenceBasicCompatibilityTest extends IgnitePersistenceCompatibilityAbstractTest { /** */ protected static final String TEST_CACHE_NAME = PersistenceBasicCompatibilityTest.class.getSimpleName(); @@ -76,6 +80,7 @@ public class PersistenceBasicCompatibilityTest extends IgnitePersistenceCompatib * * @throws Exception If failed. */ + @Test public void testNodeStartByOldVersionPersistenceData_2_2() throws Exception { doTestStartupWithOldVersion("2.2.0"); } @@ -85,6 +90,7 @@ public void testNodeStartByOldVersionPersistenceData_2_2() throws Exception { * * @throws Exception If failed. */ + @Test public void testNodeStartByOldVersionPersistenceData_2_1() throws Exception { doTestStartupWithOldVersion("2.1.0"); } @@ -94,10 +100,41 @@ public void testNodeStartByOldVersionPersistenceData_2_1() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testNodeStartByOldVersionPersistenceData_2_3() throws Exception { doTestStartupWithOldVersion("2.3.0"); } + /** + * Tests opportunity to read data from previous Ignite DB version. + * + * @throws Exception If failed. + */ + @Test + public void testNodeStartByOldVersionPersistenceData_2_4() throws Exception { + doTestStartupWithOldVersion("2.4.0"); + } + + /** + * Tests opportunity to read data from previous Ignite DB version. + * + * @throws Exception If failed. + */ + @Test + public void testNodeStartByOldVersionPersistenceData_2_5() throws Exception { + doTestStartupWithOldVersion("2.5.0"); + } + + /** + * Tests opportunity to read data from previous Ignite DB version. + * + * @throws Exception If failed. + */ + @Test + public void testNodeStartByOldVersionPersistenceData_2_6() throws Exception { + doTestStartupWithOldVersion("2.6.0"); + } + /** * Tests opportunity to read data from previous Ignite DB version. * diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/Dependency.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/Dependency.java index 56da0e982bec8..1316203d9f504 100644 --- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/Dependency.java +++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/Dependency.java @@ -17,70 +17,76 @@ package org.apache.ignite.compatibility.testframework.junits; +import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; /** * Module dependency: Should be filtered out from current test classpath for separate JVM classpath. */ public class Dependency { + /** Default value of group id. */ + private static final String DEFAULT_GROUP_ID = "org.apache.ignite"; + /** Local module name. Folder name where module is located. */ - private String locModuleName; + private final String locModuleName; - /** Group name. 
Null means ignite default group name. */ - @Nullable - private String groupName; + /** Group id. */ + private final String groupId; - /** Artifact name (artifact ID) without group name. */ - private String artifactName; + /** Artifact id. */ + private final String artifactId; - /** Version. Null means default Ignite version is to be used. May be used for 3rd party dependencies. */ - @Nullable - private String version; + /** Version. {@code null} means default Ignite version is to be used. May be used for 3rd party dependencies. */ + @Nullable private final String ver; - /** Test flag. Test jar should have {@code true} value. Default is {@code false}. */ - private boolean test; + /** Test flag. Test jar should have {@code true} value. */ + private final boolean test; /** - * Creates dependency. + * Creates dependency with {@link #DEFAULT_GROUP_ID} as group id. * * @param locModuleName Local module name. Folder name where module is located. - * @param artifactName Artifact name (artifact ID) without group name. - * @param test Test flag. Test jar should have {@code true} value. Default is {@code false}. + * @param artifactId Artifact id. + * @param test Test flag. Test jar should have {@code true} value. */ - public Dependency(String locModuleName, String artifactName, boolean test) { - this.locModuleName = locModuleName; - this.artifactName = artifactName; - this.test = test; + public Dependency(String locModuleName, String artifactId, boolean test) { + this(locModuleName, artifactId, null, test); } /** - * Creates dependency. + * Creates dependency with {@link #DEFAULT_GROUP_ID} as group id. * * @param locModuleName Local module name. Folder name where module is located. - * @param artifactName Artifact name (artifact ID) without group name. + * @param artifactId Artifact id. + * @param ver Version, {@code null} means default Ignite version is to be used. + * @param test Test flag. Test jar should have {@code true} value. 
*/ - public Dependency(String locModuleName, String artifactName) { - this.locModuleName = locModuleName; - this.artifactName = artifactName; + public Dependency(String locModuleName, String artifactId, String ver, boolean test) { + this(locModuleName, DEFAULT_GROUP_ID, artifactId, ver, test); } /** + * Creates dependency with given parameters. + * * @param locModuleName Local module name. Folder name where module is located. - * @param grpName Group name. Null means ignite default group name. - * @param artifactName Artifact name (artifact ID) without group na - * @param version Version. Null means default Ignite version is to be used. M + * @param groupId Group id. + * @param artifactId Artifact id. + * @param ver Dependency version, {@code null} means default Ignite version is to be used. + * @param test Test flag. Test jar should have {@code true} value. */ - public Dependency(String locModuleName, @Nullable String grpName, String artifactName, @Nullable String version) { + public Dependency(@NotNull String locModuleName, @NotNull String groupId, @NotNull String artifactId, + @Nullable String ver, boolean test) { this.locModuleName = locModuleName; - this.groupName = grpName; - this.artifactName = artifactName; - this.version = version; + this.groupId = groupId; + this.artifactId = artifactId; + this.ver = ver; + this.test = test; } /** - * @return path based on local module name to exclude from classpath + * @return Template of sources path based on local module name. */ - public String localPathTemplate() { + public String sourcePathTemplate() { return "modules/" + locModuleName + "/target/" + @@ -88,30 +94,37 @@ public String localPathTemplate() { } /** - * @return {@link #artifactName} + * @return Template of artifact's path in Maven repository. + */ + public String artifactPathTemplate() { + return "repository/" + groupId.replaceAll("\\.", "/") + "/" + artifactId; + } + + /** + * @return Dependency artifact id. 
*/ - public String artifactName() { - return artifactName; + public String artifactId() { + return artifactId; } /** - * @return classifier or {@code} null depending on {@link #test} flag + * @return Classifier or {@code null} depending on {@link #test} flag. */ @Nullable public String classifier() { return test ? "tests" : null; } /** - * @return {@link #version} + * @return Dependency version. */ @Nullable public String version() { - return version; + return ver; } /** - * @return {@link #groupName} + * @return Dependency group id. */ - @Nullable public String groupName() { - return groupName; + public String groupId() { + return groupId; } } diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/IgniteCompatibilityAbstractTest.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/IgniteCompatibilityAbstractTest.java index 2301717b2068b..b0e4001d3bc80 100644 --- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/IgniteCompatibilityAbstractTest.java +++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/junits/IgniteCompatibilityAbstractTest.java @@ -21,11 +21,12 @@ import java.net.URL; import java.util.ArrayList; import java.util.Collection; +import java.util.HashSet; +import java.util.Optional; import java.util.Set; import java.util.UUID; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; -import java.util.stream.Collectors; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteLogger; import org.apache.ignite.compatibility.testframework.junits.logger.ListenedGridTestLog4jLogger; @@ -168,7 +169,7 @@ protected IgniteEx startGrid(final String igniteInstanceName, final String ver, final Collection dependencies = getDependencies(ver); - Set excluded = dependencies.stream().map(Dependency::localPathTemplate).collect(Collectors.toSet()); + Set excluded = 
getExcluded(ver, dependencies); StringBuilder pathBuilder = new StringBuilder(); @@ -180,11 +181,10 @@ protected IgniteEx startGrid(final String igniteInstanceName, final String ver, } for (Dependency dependency : dependencies) { - final String artifactVer = dependency.version() != null ? dependency.version() : ver; - final String grpName = dependency.groupName() != null ? dependency.groupName() : "org.apache.ignite"; + final String artifactVer = Optional.ofNullable(dependency.version()).orElse(ver); - String pathToArtifact = MavenUtils.getPathToIgniteArtifact(grpName, dependency.artifactName(), - artifactVer, dependency.classifier()); + String pathToArtifact = MavenUtils.getPathToIgniteArtifact(dependency.groupId(), + dependency.artifactId(), artifactVer, dependency.classifier()); pathBuilder.append(pathToArtifact).append(File.pathSeparator); } @@ -240,12 +240,35 @@ protected long getNodeJoinTimeout() { @NotNull protected Collection getDependencies(String igniteVer) { final Collection dependencies = new ArrayList<>(); - dependencies.add(new Dependency("core", "ignite-core")); + dependencies.add(new Dependency("core", "ignite-core", false)); dependencies.add(new Dependency("core", "ignite-core", true)); return dependencies; } + /** + * These dependencies will not be translated from current code dependencies into separate node's classpath. + * + * Include here all dependencies which will be set up manually, leave all version independent dependencies. + * + * @param ver Ignite version. + * @param dependencies Dependencies to filter. + * @return Set of paths to exclude. 
+ */ + protected Set getExcluded(String ver, Collection dependencies) { + Set excluded = new HashSet<>(); + + for (Dependency dependency : dependencies) { + excluded.add(dependency.sourcePathTemplate()); + excluded.add(dependency.artifactPathTemplate()); + } + + // Just to exclude indexing module + excluded.add("indexing"); + + return excluded; + } + /** * Allows to setup JVM arguments for standalone JVM * diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/util/MavenUtils.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/util/MavenUtils.java index 7eb3131a3fa01..a05cfd95c2840 100644 --- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/util/MavenUtils.java +++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testframework/util/MavenUtils.java @@ -46,23 +46,24 @@ public class MavenUtils { private static boolean useGgRepo; /** - * Gets a path to an artifact with given version and groupId=org.apache.ignite and artifactId={@code artifactName}. + * Gets a path to an artifact with given version and groupId=org.apache.ignite and artifactId={@code artifactId}. *
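Reviewer note: the two path templates introduced above drive the classpath filtering in `getExcluded` — `sourcePathTemplate()` matches a module's build output, `artifactPathTemplate()` matches its location in the local Maven repository. A self-contained sketch of the same string logic (the class name here is illustrative; only the template expressions mirror the diff):

```java
public class DependencyPathSketch {
    private final String locModuleName;
    private final String groupId;
    private final String artifactId;

    DependencyPathSketch(String locModuleName, String groupId, String artifactId) {
        this.locModuleName = locModuleName;
        this.groupId = groupId;
        this.artifactId = artifactId;
    }

    /** Template of the sources path based on the local module name. */
    String sourcePathTemplate() {
        return "modules/" + locModuleName + "/target/";
    }

    /** Template of the artifact's path in a Maven repository (dots become slashes). */
    String artifactPathTemplate() {
        return "repository/" + groupId.replaceAll("\\.", "/") + "/" + artifactId;
    }

    public static void main(String[] args) {
        DependencyPathSketch core = new DependencyPathSketch("core", "org.apache.ignite", "ignite-core");
        assert core.sourcePathTemplate().equals("modules/core/target/");
        assert core.artifactPathTemplate().equals("repository/org/apache/ignite/ignite-core");
        System.out.println("ok");
    }
}
```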
* At first, the artifact is looked for in the local Maven repository; if it doesn't exist there, it will be downloaded * and stored via Maven. *
- * @param groupName group name, e.g. 'org.apache.ignite'. + * + * @param groupId group name, e.g. 'org.apache.ignite'. * @param ver Version of ignite or 3rd party library artifact. * @param classifier Artifact classifier. * @return Path to the artifact. * @throws Exception In case of an error. * @see #getPathToArtifact(String) */ - public static String getPathToIgniteArtifact(@NotNull String groupName, - @NotNull String artifactName, @NotNull String ver, + public static String getPathToIgniteArtifact(@NotNull String groupId, + @NotNull String artifactId, @NotNull String ver, @Nullable String classifier) throws Exception { - String artifact = groupName + - ":" + artifactName + ":" + ver; + String artifact = groupId + + ":" + artifactId + ":" + ver; if (classifier != null) artifact += ":jar:" + classifier; @@ -213,6 +214,6 @@ private static String buildMvnCommand() { if (m2Home == null) return "mvn"; - return m2Home + "/bin/mvn" ; + return m2Home + "/bin/mvn"; } } diff --git a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testsuites/IgniteCompatibilityBasicTestSuite.java b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testsuites/IgniteCompatibilityBasicTestSuite.java index f6dd73606d16d..12ef3d091c6bd 100644 --- a/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testsuites/IgniteCompatibilityBasicTestSuite.java +++ b/modules/compatibility/src/test/java/org/apache/ignite/compatibility/testsuites/IgniteCompatibilityBasicTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.compatibility.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.compatibility.PdsWithTtlCompatibilityTest; import org.apache.ignite.compatibility.persistence.FoldersReuseCompatibilityTest; @@ -29,18 +30,17 @@ public class IgniteCompatibilityBasicTestSuite { /** * @return Test suite. - * @throws Exception In case of an error. 
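Reviewer note: `getPathToIgniteArtifact` assembles a standard Maven coordinate of the form `groupId:artifactId:version[:packaging:classifier]` before resolving it, exactly as the string concatenation above shows. A standalone sketch of that assembly (method name is illustrative):

```java
public class MavenCoordinateSketch {
    /** Builds a Maven coordinate as in the diff: a non-null classifier implies "jar" packaging. */
    static String coordinate(String groupId, String artifactId, String ver, String classifier) {
        String artifact = groupId + ":" + artifactId + ":" + ver;

        if (classifier != null)
            artifact += ":jar:" + classifier;

        return artifact;
    }

    public static void main(String[] args) {
        assert coordinate("org.apache.ignite", "ignite-core", "2.6.0", null)
            .equals("org.apache.ignite:ignite-core:2.6.0");
        assert coordinate("org.apache.ignite", "ignite-core", "2.6.0", "tests")
            .equals("org.apache.ignite:ignite-core:2.6.0:jar:tests");
        System.out.println("ok");
    }
}
```

The `tests` classifier is what `Dependency.classifier()` returns for test jars, so compatibility runs can resolve both the main and the test artifact of `ignite-core`.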
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Compatibility Basic Test Suite"); - suite.addTestSuite(PersistenceBasicCompatibilityTest.class); + suite.addTest(new JUnit4TestAdapter(PersistenceBasicCompatibilityTest.class)); - suite.addTestSuite(PdsWithTtlCompatibilityTest.class); + suite.addTest(new JUnit4TestAdapter(PdsWithTtlCompatibilityTest.class)); - suite.addTestSuite(FoldersReuseCompatibilityTest.class); + suite.addTest(new JUnit4TestAdapter(FoldersReuseCompatibilityTest.class)); - suite.addTestSuite(MigratingToWalV2SerializerWithCompactionTest.class); + suite.addTest(new JUnit4TestAdapter(MigratingToWalV2SerializerWithCompactionTest.class)); return suite; } diff --git a/modules/core/pom.xml b/modules/core/pom.xml index 9be521799f3a0..dca9d5483a0dd 100644 --- a/modules/core/pom.xml +++ b/modules/core/pom.xml @@ -117,9 +117,9 @@
- com.h2database - h2 - ${h2.version} + org.apache.ignite + ignite-h2 + ${project.version} test diff --git a/modules/core/src/main/java/META-INF/NOTICE b/modules/core/src/main/java/META-INF/NOTICE index 4c99a05109185..f98670a851d1b 100644 --- a/modules/core/src/main/java/META-INF/NOTICE +++ b/modules/core/src/main/java/META-INF/NOTICE @@ -1,5 +1,5 @@ Apache Ignite -Copyright 2018 The Apache Software Foundation +Copyright 2020 The Apache Software Foundation This product includes software developed at The Apache Software Foundation (http://www.apache.org/). diff --git a/modules/core/src/main/java/org/apache/ignite/IgniteCache.java b/modules/core/src/main/java/org/apache/ignite/IgniteCache.java index 8479420910f16..395c8f89a8692 100644 --- a/modules/core/src/main/java/org/apache/ignite/IgniteCache.java +++ b/modules/core/src/main/java/org/apache/ignite/IgniteCache.java @@ -441,9 +441,9 @@ public IgniteFuture localLoadCacheAsync(@Nullable IgniteBiPredicate public void localEvict(Collection keys); /** - * Peeks at in-memory cached value using default optional peek mode. + * Peeks at a value in the local storage using an optional peek mode. *

- * This method will not load value from any persistent store or from a remote node. + * This method will not load a value from the configured {@link CacheStore} or from a remote node. *

Transactions

* This method does not participate in any transactions. * @@ -1516,7 +1516,7 @@ public IgniteFuture>> invokeAllAsync(Set lostPartitions(); @@ -1531,4 +1531,51 @@ public IgniteFuture>> invokeAllAsync(Set + * This is useful for fast iteration over cache partition data if persistence is enabled and the data is "cold". + *

+ * Preload will reduce the available amount of page memory for subsequent operations and may lead to earlier page + * replacement. + *

+ * This method is irrelevant for in-memory caches. Calling this method on an in-memory cache will result in an + * exception. + * + * @param partition Partition. + */ + public void preloadPartition(int partition); + + /** + * Efficiently preloads cache partition into page memory. + *

+ * This is useful for fast iteration over cache partition data if persistence is enabled and the data is "cold". + *

+ * Preload will reduce the available amount of page memory for subsequent operations and may lead to earlier page + * replacement. + *

+ * This method is irrelevant for in-memory caches. Calling this method on an in-memory cache will result in an + * exception. + * + * @param partition Partition. + * @return A future representing pending completion of the partition preloading. + */ + public IgniteFuture preloadPartitionAsync(int partition); + + /** + * Efficiently preloads cache partition into page memory if it exists on the local node. + *

+ * This is useful for fast iteration over cache partition data if persistence is enabled and the data is "cold". + *

+ * Preload will reduce the available amount of page memory for subsequent operations and may lead to earlier page + * replacement. + *

+ * This method is irrelevant for in-memory caches. Calling this method on an in-memory cache will result in an + * exception. + * + * @param partition Partition. + * @return {@code True} if partition was preloaded, {@code false} if it doesn't belong to local node. + */ + public boolean localPreloadPartition(int partition); } diff --git a/modules/core/src/main/java/org/apache/ignite/IgniteCacheRestartingException.java b/modules/core/src/main/java/org/apache/ignite/IgniteCacheRestartingException.java index a3a749070cec1..1dbfc67dc69dd 100644 --- a/modules/core/src/main/java/org/apache/ignite/IgniteCacheRestartingException.java +++ b/modules/core/src/main/java/org/apache/ignite/IgniteCacheRestartingException.java @@ -18,6 +18,7 @@ package org.apache.ignite; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.lang.IgniteFuture; import org.jetbrains.annotations.Nullable; @@ -29,26 +30,34 @@ public class IgniteCacheRestartingException extends IgniteException { private static final long serialVersionUID = 0L; /** */ - private final IgniteFuture restartFut; + private final transient IgniteFuture restartFut; + + /** + * @param cacheName Cache name. + */ + public IgniteCacheRestartingException(String cacheName) { + this(null, cacheName, null); + } /** * @param restartFut Restart future. - * @param msg Error message. + * @param cacheName Cache name. */ - public IgniteCacheRestartingException(IgniteFuture restartFut, String msg) { - this(restartFut, msg, null); + public IgniteCacheRestartingException(IgniteFuture restartFut, String cacheName) { + this(restartFut, cacheName, null); } /** * @param restartFut Restart future. - * @param msg Error message. + * @param cacheName Cache name that is restarting. * @param cause Optional nested exception (can be {@code null}). 
*/ public IgniteCacheRestartingException( IgniteFuture restartFut, - String msg, - @Nullable Throwable cause) { - super(msg, cause); + String cacheName, + @Nullable Throwable cause + ) { + super("Cache is restarting: " + cacheName + ". You can wait for restart completion using restartFuture.", cause); this.restartFut = restartFut; } diff --git a/modules/core/src/main/java/org/apache/ignite/IgniteSystemProperties.java b/modules/core/src/main/java/org/apache/ignite/IgniteSystemProperties.java index 5932de05a309f..3a65f5d42f5b7 100644 --- a/modules/core/src/main/java/org/apache/ignite/IgniteSystemProperties.java +++ b/modules/core/src/main/java/org/apache/ignite/IgniteSystemProperties.java @@ -31,7 +31,6 @@ import org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller; import org.apache.ignite.internal.processors.rest.GridRestCommand; import org.apache.ignite.internal.util.GridLogThrottle; -import org.apache.ignite.internal.util.worker.GridWorker; import org.apache.ignite.stream.StreamTransformer; import org.jetbrains.annotations.Nullable; @@ -260,6 +259,13 @@ public final class IgniteSystemProperties { */ public static final String IGNITE_TX_DEADLOCK_DETECTION_TIMEOUT = "IGNITE_TX_DEADLOCK_DETECTION_TIMEOUT"; + /** + * System property to enable the pending transaction tracker. + * Affects impact of {@link IgniteSystemProperties#IGNITE_DISABLE_WAL_DURING_REBALANCING} property: + * if this property is set, the WAL won't be disabled during rebalancing triggered by a baseline topology change. + */ + public static final String IGNITE_PENDING_TX_TRACKER_ENABLED = "IGNITE_PENDING_TX_TRACKER_ENABLED"; + /** * System property to override multicast group taken from configuration. * Used for testing purposes. @@ -900,6 +906,11 @@ public final class IgniteSystemProperties { */ public static final String IGNITE_THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE = "IGNITE_THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE"; + /** + * Count of WAL compressor worker threads. Default value is 4. 
+ */ + public static final String IGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT = "IGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT"; + /** * Whenever read load balancing is enabled, that means 'get' requests will be distributed between primary and backup * nodes if it is possible and {@link CacheConfiguration#readFromBackup} is {@code true}. @@ -988,6 +999,72 @@ */ public static final String IGNITE_ZOOKEEPER_DISCOVERY_MAX_RETRY_COUNT = "IGNITE_ZOOKEEPER_DISCOVERY_MAX_RETRY_COUNT"; + /** + * Maximum number of cached MVCC transaction updates. This caching is used for continuous queries with MVCC caches. + */ + public static final String IGNITE_MVCC_TX_SIZE_CACHING_THRESHOLD = "IGNITE_MVCC_TX_SIZE_CACHING_THRESHOLD"; + + /** + * Try to reuse memory on deactivation. Useful in case of a huge page memory region size. + */ + public static final String IGNITE_REUSE_MEMORY_ON_DEACTIVATE = "IGNITE_REUSE_MEMORY_ON_DEACTIVATE"; + + /** + * Timeout for waiting for a schema update if the schema was not found for the last accepted version. + */ + public static final String IGNITE_WAIT_SCHEMA_UPDATE = "IGNITE_WAIT_SCHEMA_UPDATE"; + + /** + * System property to override {@link CacheConfiguration#rebalanceThrottle} configuration property for all caches. + * {@code 0} by default, which means that override is disabled. + */ + public static final String IGNITE_REBALANCE_THROTTLE_OVERRIDE = "IGNITE_REBALANCE_THROTTLE_OVERRIDE"; + + /** + * Maximum inactivity period for a system worker in milliseconds. When this value is exceeded, the worker is considered + * blocked with consequent critical failure handler invocation. + */ + public static final String IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT = "IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT"; + + /** + * Timeout for checkpoint read lock acquisition in milliseconds. + */ + public static final String IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT = "IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT"; + + /** + * Enables starting caches in parallel. 
+ * + * Default is {@code true}. + */ + public static final String IGNITE_ALLOW_START_CACHES_IN_PARALLEL = "IGNITE_ALLOW_START_CACHES_IN_PARALLEL"; + + /** For test purposes only. Force Mvcc mode. */ + public static final String IGNITE_FORCE_MVCC_MODE_IN_TESTS = "IGNITE_FORCE_MVCC_MODE_IN_TESTS"; + + /** + * Allows logging additional information about all restored partitions after the binary and logical recovery phases. + * + * Default is {@code true}. + */ + public static final String IGNITE_RECOVERY_VERBOSE_LOGGING = "IGNITE_RECOVERY_VERBOSE_LOGGING"; + + /** + * Disables cache interceptor triggering in case of conflicts. + * + * Default is {@code false}. + */ + public static final String IGNITE_DISABLE_TRIGGERING_CACHE_INTERCEPTOR_ON_CONFLICT = "IGNITE_DISABLE_TRIGGERING_CACHE_INTERCEPTOR_ON_CONFLICT"; + + /** + * When set to {@code true}, cache metrics are not included into the discovery metrics update message (in this + * case the message contains only cluster metrics). By default cache metrics are included into the message and + * calculated each time the message is sent. + *

+ * Cache metrics sending can also be turned off by disabling statistics per each cache, but in this case some cache + * metrics will be unavailable via JMX too. + */ + public static final String IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE = "IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE"; + /** * Enforces singleton. */ diff --git a/modules/core/src/main/java/org/apache/ignite/cache/CacheAtomicityMode.java b/modules/core/src/main/java/org/apache/ignite/cache/CacheAtomicityMode.java index 43c561ca7f281..5b101bf36577b 100644 --- a/modules/core/src/main/java/org/apache/ignite/cache/CacheAtomicityMode.java +++ b/modules/core/src/main/java/org/apache/ignite/cache/CacheAtomicityMode.java @@ -38,6 +38,10 @@ public enum CacheAtomicityMode { * Note! In this mode, transactional consistency is guaranteed for key-value API operations only. * To enable ACID capabilities for SQL transactions, use the {@code TRANSACTIONAL_SNAPSHOT} mode. *
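Reviewer note: the `IGNITE_*` constants added above name plain JVM system properties; Ignite reads them through helpers equivalent to the JDK pattern below (the helper method here is an illustrative standalone copy, not Ignite's actual API):

```java
public class SystemPropSketch {
    /** Reads an integer-valued system property with a fallback default. */
    static int intProperty(String name, int dflt) {
        return Integer.getInteger(name, dflt);
    }

    public static void main(String[] args) {
        // Unset: falls back to the documented default of 4 WAL compressor worker threads.
        assert intProperty("IGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT", 4) == 4;

        // Set via -D or setProperty: the configured value wins.
        System.setProperty("IGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT", "8");
        assert intProperty("IGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT", 4) == 8;

        System.out.println("ok");
    }
}
```

In practice these would be passed as `-DIGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT=8` on the JVM command line.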

+ * Note! This atomicity mode is not compatible with the other modes within the same transaction. + * If a transaction is executed over multiple caches, all caches must have the same atomicity mode, + * either {@code TRANSACTIONAL_SNAPSHOT} or {@code TRANSACTIONAL}. + *

* See {@link Transaction} for more information about transactions. */ TRANSACTIONAL, @@ -109,9 +113,9 @@ public enum CacheAtomicityMode { * by the coordinator. This snapshot ensures that the transaction works with a consistent database state * during its execution period. *

- * Note! This atomicity mode is not compatible with the other modes within the same transaction. - * If a transaction is executed over multiple caches, all caches must have the same mode, - * either {@code TRANSACTIONAL_SNAPSHOT} or {@code TRANSACTIONAL} + * Note! This atomicity mode is not compatible with the other modes within the same transaction. + * If a transaction is executed over multiple caches, all caches must have the same atomicity mode, + * either {@code TRANSACTIONAL_SNAPSHOT} or {@code TRANSACTIONAL}. *

* See {@link Transaction} for more information about transactions. */ diff --git a/modules/core/src/main/java/org/apache/ignite/cache/affinity/AffinityKeyMapped.java b/modules/core/src/main/java/org/apache/ignite/cache/affinity/AffinityKeyMapped.java index 8b19338571805..e7e9eba1ccfd7 100644 --- a/modules/core/src/main/java/org/apache/ignite/cache/affinity/AffinityKeyMapped.java +++ b/modules/core/src/main/java/org/apache/ignite/cache/affinity/AffinityKeyMapped.java @@ -91,8 +91,8 @@ * is otherwise known as {@code Collocation Of Computations And Data}. In this case, * {@code @AffinityKeyMapped} annotation allows to specify a routing affinity key for a * {@link org.apache.ignite.compute.ComputeJob} or any other grid computation, such as {@link Runnable}, - * {@link Callable}, or {@link org.apache.ignite.lang.IgniteClosure}. It should be attached to a method or - * field that provides affinity key for the computation. Only one annotation per class is allowed. + * {@link Callable}, or {@link org.apache.ignite.lang.IgniteClosure}. It should be attached to a field + * that provides affinity key for the computation. Only one annotation per class is allowed. * Whenever such annotation is detected, then {@link org.apache.ignite.spi.loadbalancing.LoadBalancingSpi} * will be bypassed, and computation will be routed to the grid node where the specified affinity key is cached. *

diff --git a/modules/core/src/main/java/org/apache/ignite/cache/query/ContinuousQuery.java b/modules/core/src/main/java/org/apache/ignite/cache/query/ContinuousQuery.java index e4d6d0ad36837..0d1444b7eac5c 100644 --- a/modules/core/src/main/java/org/apache/ignite/cache/query/ContinuousQuery.java +++ b/modules/core/src/main/java/org/apache/ignite/cache/query/ContinuousQuery.java @@ -213,7 +213,16 @@ public CacheEntryEventSerializableFilter getRemoteFilter() { return (ContinuousQuery)super.setPageSize(pageSize); } - /** {@inheritDoc} */ + /** + * Sets whether this query should be executed on the local node only. + * + * Note: backup event queues are not kept for local continuous queries. This may lead to loss of notifications in case + * of node failures. Use {@link ContinuousQuery#setRemoteFilterFactory(Factory)} to register cache event listeners + * on all cache nodes if a delivery guarantee is required. + * + * @param loc Local flag. + * @return {@code this} for chaining. + */ @Override public ContinuousQuery setLocal(boolean loc) { return (ContinuousQuery)super.setLocal(loc); } diff --git a/modules/core/src/main/java/org/apache/ignite/cache/query/QueryCancelledException.java b/modules/core/src/main/java/org/apache/ignite/cache/query/QueryCancelledException.java index 5f5ffdce163fe..eef5f52ac2c88 100644 --- a/modules/core/src/main/java/org/apache/ignite/cache/query/QueryCancelledException.java +++ b/modules/core/src/main/java/org/apache/ignite/cache/query/QueryCancelledException.java @@ -26,10 +26,13 @@ public class QueryCancelledException extends IgniteCheckedException { /** */ private static final long serialVersionUID = 0L; + /** Error message. */ + public static final String ERR_MSG = "The query was cancelled while executing."; + /** * Default constructor.
*/ public QueryCancelledException() { - super("The query was cancelled while executing."); + super(ERR_MSG); } } \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/cache/query/SqlFieldsQuery.java b/modules/core/src/main/java/org/apache/ignite/cache/query/SqlFieldsQuery.java index 4e12b8ca03ecc..6945f25fe7d4a 100644 --- a/modules/core/src/main/java/org/apache/ignite/cache/query/SqlFieldsQuery.java +++ b/modules/core/src/main/java/org/apache/ignite/cache/query/SqlFieldsQuery.java @@ -273,7 +273,9 @@ public boolean isDistributedJoins() { * * @param replicatedOnly The query contains only replicated tables. * @return {@code this} For chaining. + * @deprecated No longer used as of Apache Ignite 2.8. */ + @Deprecated public SqlFieldsQuery setReplicatedOnly(boolean replicatedOnly) { this.replicatedOnly = replicatedOnly; @@ -284,7 +286,9 @@ public SqlFieldsQuery setReplicatedOnly(boolean replicatedOnly) { * Check is the query contains only replicated tables. * * @return {@code true} If the query contains only replicated tables. + * @deprecated No longer used as of Apache Ignite 2.8. */ + @Deprecated public boolean isReplicatedOnly() { return replicatedOnly; } diff --git a/modules/core/src/main/java/org/apache/ignite/cache/query/SqlQuery.java b/modules/core/src/main/java/org/apache/ignite/cache/query/SqlQuery.java index a5994b92c0924..5b0667c1f06bf 100644 --- a/modules/core/src/main/java/org/apache/ignite/cache/query/SqlQuery.java +++ b/modules/core/src/main/java/org/apache/ignite/cache/query/SqlQuery.java @@ -238,7 +238,9 @@ public boolean isDistributedJoins() { * * @param replicatedOnly The query contains only replicated tables. * @return {@code this} For chaining. + * @deprecated No longer used as of Apache Ignite 2.8. 
*/ + @Deprecated public SqlQuery setReplicatedOnly(boolean replicatedOnly) { this.replicatedOnly = replicatedOnly; @@ -249,7 +251,9 @@ public SqlQuery setReplicatedOnly(boolean replicatedOnly) { * Check is the query contains only replicated tables. * * @return {@code true} If the query contains only replicated tables. + * @deprecated No longer used as of Apache Ignite 2.8. */ + @Deprecated public boolean isReplicatedOnly() { return replicatedOnly; } diff --git a/modules/core/src/main/java/org/apache/ignite/client/ClientConnectionException.java b/modules/core/src/main/java/org/apache/ignite/client/ClientConnectionException.java index 1ec096c714818..58ca1538cd2eb 100644 --- a/modules/core/src/main/java/org/apache/ignite/client/ClientConnectionException.java +++ b/modules/core/src/main/java/org/apache/ignite/client/ClientConnectionException.java @@ -24,22 +24,22 @@ public class ClientConnectionException extends ClientException { /** Serial version uid. */ private static final long serialVersionUID = 0L; - /** Message. */ - private static final String MSG = "Ignite cluster is unavailable"; - /** - * Default constructor. + * Constructs a new exception with the specified detail message. + * + * @param msg the detail message. */ - public ClientConnectionException() { - super(MSG); + public ClientConnectionException(String msg) { + super(msg); } /** - * Constructs a new exception with the specified cause. + * Constructs a new exception with the specified cause and detail message. * + * @param msg the detail message. * @param cause the cause. 
*/ - public ClientConnectionException(Throwable cause) { - super(MSG, cause); + public ClientConnectionException(String msg, Throwable cause) { + super(msg, cause); } } diff --git a/modules/core/src/main/java/org/apache/ignite/client/ClientReconnectedException.java b/modules/core/src/main/java/org/apache/ignite/client/ClientReconnectedException.java new file mode 100644 index 0000000000000..39034b6eb9a1a --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/client/ClientReconnectedException.java @@ -0,0 +1,39 @@ +/* + * Copyright 2019 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.client; + +/** + * Indicates that the previous connection was lost and a new connection was established, + * which can lead to inconsistency of non-atomic operations. + */ +public class ClientReconnectedException extends ClientException { + /** Serial version uid. */ + private static final long serialVersionUID = 0L; + + /** + * Default constructor. + */ + public ClientReconnectedException() { + } + + /** + * Constructs a new exception with the specified message. + * + * @param msg the detail message.
+ */ + public ClientReconnectedException(String msg) { + super(msg); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/configuration/CacheConfiguration.java b/modules/core/src/main/java/org/apache/ignite/configuration/CacheConfiguration.java index fb3789d4df227..05be893392b02 100644 --- a/modules/core/src/main/java/org/apache/ignite/configuration/CacheConfiguration.java +++ b/modules/core/src/main/java/org/apache/ignite/configuration/CacheConfiguration.java @@ -58,6 +58,8 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.plugin.CachePluginConfiguration; +import org.apache.ignite.spi.encryption.EncryptionSpi; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi; import org.jetbrains.annotations.Nullable; /** @@ -316,7 +318,7 @@ public class CacheConfiguration extends MutableConfiguration { private long rebalanceThrottle = DFLT_REBALANCE_THROTTLE; /** */ - private CacheInterceptor interceptor; + private CacheInterceptor interceptor; /** */ private Class[] sqlFuncCls; @@ -373,6 +375,15 @@ public class CacheConfiguration extends MutableConfiguration { /** Events disabled. */ private boolean evtsDisabled = DFLT_EVENTS_DISABLED; + /** + * Flag indicating whether data must be encrypted. + * If {@code true} data on the disk will be encrypted. + * + * @see EncryptionSpi + * @see KeystoreEncryptionSpi + */ + private boolean encryptionEnabled; + /** Empty constructor (all values are initialized to their defaults). */ public CacheConfiguration() { /* No-op. 
*/ @@ -412,6 +423,7 @@ public CacheConfiguration(CompleteConfiguration cfg) { cpOnRead = cc.isCopyOnRead(); dfltLockTimeout = cc.getDefaultLockTimeout(); eagerTtl = cc.isEagerTtl(); + encryptionEnabled = cc.isEncryptionEnabled(); evictFilter = cc.getEvictionFilter(); evictPlc = cc.getEvictionPolicy(); evictPlcFactory = cc.getEvictionPolicyFactory(); @@ -1618,9 +1630,8 @@ public CacheConfiguration setMaxQueryIteratorsCount(int maxQryIterCnt) { * * @return Cache interceptor. */ - @SuppressWarnings({"unchecked"}) @Nullable public CacheInterceptor getInterceptor() { - return (CacheInterceptor)interceptor; + return interceptor; } /** @@ -2268,6 +2279,27 @@ public CacheConfiguration setKeyConfiguration(CacheKeyConfiguration... cac return this; } + /** + * Gets flag indicating whether data must be encrypted. + * + * @return {@code True} if this cache's persistent data is encrypted. + */ + public boolean isEncryptionEnabled() { + return encryptionEnabled; + } + + /** + * Sets the encryption flag. + * + * @param encryptionEnabled {@code True} if this cache's persistent data should be encrypted. + * @return {@code this} for chaining.
+ */ + public CacheConfiguration setEncryptionEnabled(boolean encryptionEnabled) { + this.encryptionEnabled = encryptionEnabled; + + return this; + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(CacheConfiguration.class, this); diff --git a/modules/core/src/main/java/org/apache/ignite/configuration/DataStorageConfiguration.java b/modules/core/src/main/java/org/apache/ignite/configuration/DataStorageConfiguration.java index 556e3cd44b3a4..7bca0f96479e6 100644 --- a/modules/core/src/main/java/org/apache/ignite/configuration/DataStorageConfiguration.java +++ b/modules/core/src/main/java/org/apache/ignite/configuration/DataStorageConfiguration.java @@ -279,6 +279,9 @@ public class DataStorageConfiguration implements Serializable { */ private int walCompactionLevel = DFLT_WAL_COMPACTION_LEVEL; + /** Timeout for checkpoint read lock acquisition. */ + private Long checkpointReadLockTimeout; + /** * Initial size of a data region reserved for system cache. * @@ -983,6 +986,30 @@ public void setWalCompactionLevel(int walCompactionLevel) { this.walCompactionLevel = walCompactionLevel; } + /** + * Returns timeout for checkpoint read lock acquisition. + * + * @see #setCheckpointReadLockTimeout(long) + * @return Returns timeout for checkpoint read lock acquisition in milliseconds. + */ + public Long getCheckpointReadLockTimeout() { + return checkpointReadLockTimeout; + } + + /** + * Sets timeout for checkpoint read lock acquisition. + *

+ * If a thread cannot acquire the checkpoint read lock within this timeout, the critical failure handler is invoked. + * + * @param checkpointReadLockTimeout Timeout for checkpoint read lock acquisition in milliseconds. + * @return {@code this} for chaining. + */ + public DataStorageConfiguration setCheckpointReadLockTimeout(long checkpointReadLockTimeout) { + this.checkpointReadLockTimeout = checkpointReadLockTimeout; + + return this; + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(DataStorageConfiguration.class, this); } diff --git a/modules/core/src/main/java/org/apache/ignite/configuration/IgniteConfiguration.java b/modules/core/src/main/java/org/apache/ignite/configuration/IgniteConfiguration.java index 6a0c7cb3a4e31..a04fc53f681aa 100644 --- a/modules/core/src/main/java/org/apache/ignite/configuration/IgniteConfiguration.java +++ b/modules/core/src/main/java/org/apache/ignite/configuration/IgniteConfiguration.java @@ -21,6 +21,7 @@ import java.lang.management.ManagementFactory; import java.util.Map; import java.util.UUID; +import java.util.zip.Deflater; import javax.cache.configuration.Factory; import javax.cache.event.CacheEntryListener; import javax.cache.expiry.ExpiryPolicy; @@ -68,6 +69,7 @@ import org.apache.ignite.spi.deployment.local.LocalDeploymentSpi; import org.apache.ignite.spi.discovery.DiscoverySpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.encryption.EncryptionSpi; import org.apache.ignite.spi.eventstorage.EventStorageSpi; import org.apache.ignite.spi.eventstorage.NoopEventStorageSpi; import org.apache.ignite.spi.failover.FailoverSpi; @@ -118,6 +120,9 @@ public class IgniteConfiguration { /** Default maximum timeout to wait for network responses in milliseconds (value is {@code 5,000ms}). */ public static final long DFLT_NETWORK_TIMEOUT = 5000; + /** Default compression level for network messages (value is Deflater.BEST_SPEED).
*/ + public static final int DFLT_NETWORK_COMPRESSION = Deflater.BEST_SPEED; + /** Default interval between message send retries. */ public static final long DFLT_SEND_RETRY_DELAY = 1000; @@ -214,11 +219,11 @@ public class IgniteConfiguration { /** Default timeout after which long query warning will be printed. */ public static final long DFLT_LONG_QRY_WARN_TIMEOUT = 3000; - /** Default size of MVCC vacuum thread pool. */ + /** Default number of MVCC vacuum threads. */ public static final int DFLT_MVCC_VACUUM_THREAD_CNT = 2; - /** Default time interval between vacuum process runs (ms). */ - public static final int DFLT_MVCC_VACUUM_FREQUENCY = 5000; + /** Default time interval between MVCC vacuum runs in milliseconds. */ + public static final long DFLT_MVCC_VACUUM_FREQUENCY = 5000; /** Optional local Ignite instance name. */ private String igniteInstanceName; @@ -301,6 +306,9 @@ public class IgniteConfiguration { /** Maximum network requests timeout. */ private long netTimeout = DFLT_NETWORK_TIMEOUT; + /** Compression level for network binary messages. */ + private int netCompressionLevel = DFLT_NETWORK_COMPRESSION; + /** Interval between message send retries. */ private long sndRetryDelay = DFLT_SEND_RETRY_DELAY; @@ -367,6 +375,9 @@ /** Address resolver. */ private AddressResolver addrRslvr; + /** Encryption SPI. */ + private EncryptionSpi encryptionSpi; + /** Cache configurations. */ private CacheConfiguration[] cacheCfg; @@ -407,6 +418,9 @@ /** Failure detection timeout. */ private Long failureDetectionTimeout = DFLT_FAILURE_DETECTION_TIMEOUT; + /** Timeout for blocked system workers detection. */ + private Long sysWorkerBlockedTimeout; + /** Failure detection timeout for client nodes. */ private Long clientFailureDetectionTimeout = DFLT_CLIENT_FAILURE_DETECTION_TIMEOUT; @@ -496,8 +510,8 @@ public class IgniteConfiguration { /** Size of MVCC vacuum thread pool.
*/ private int mvccVacuumThreadCnt = DFLT_MVCC_VACUUM_THREAD_CNT; - /** Time interval between vacuum process runs (ms). */ - private int mvccVacuumFreq = DFLT_MVCC_VACUUM_FREQUENCY; + /** Time interval between vacuum runs (ms). */ + private long mvccVacuumFreq = DFLT_MVCC_VACUUM_FREQUENCY; /** User authentication enabled. */ private boolean authEnabled; @@ -537,6 +551,7 @@ public IgniteConfiguration(IgniteConfiguration cfg) { failSpi = cfg.getFailoverSpi(); loadBalancingSpi = cfg.getLoadBalancingSpi(); indexingSpi = cfg.getIndexingSpi(); + encryptionSpi = cfg.getEncryptionSpi(); commFailureRslvr = cfg.getCommunicationFailureResolver(); @@ -619,6 +634,7 @@ public IgniteConfiguration(IgniteConfiguration cfg) { svcCfgs = cfg.getServiceConfiguration(); svcPoolSize = cfg.getServiceThreadPoolSize(); sysPoolSize = cfg.getSystemThreadPoolSize(); + sysWorkerBlockedTimeout = cfg.getSystemWorkerBlockedTimeout(); timeSrvPortBase = cfg.getTimeServerPortBase(); timeSrvPortRange = cfg.getTimeServerPortRange(); txCfg = cfg.getTransactionConfiguration(); @@ -1469,6 +1485,29 @@ public IgniteConfiguration setNetworkTimeout(long netTimeout) { return this; } + /** + * Compression level of internal network messages. + *

+ * If not provided, then default value + * Deflater.BEST_SPEED is used. + * + * @return Network messages default compression level. + */ + public int getNetworkCompressionLevel() { + return netCompressionLevel; + } + + /** + * Compression level for internal network messages. + *

+ * If not provided, then default value + * Deflater.BEST_SPEED is used. + * + * @param netCompressionLevel Compression level for internal network messages. + */ + public void setNetworkCompressionLevel(int netCompressionLevel) { + this.netCompressionLevel = netCompressionLevel; + } + /** * Interval in milliseconds between message send retries. *

@@ -1976,6 +2015,31 @@ public IgniteConfiguration setFailureDetectionTimeout(long failureDetectionTimeo return this; } + /** + * Returns maximum inactivity period for a system worker. When this timeout is exceeded, the worker is considered + * blocked and the critical failure handler is invoked. + * + * @see #setSystemWorkerBlockedTimeout(long) + * @return Maximum inactivity period for system worker in milliseconds. + */ + public Long getSystemWorkerBlockedTimeout() { + return sysWorkerBlockedTimeout; + } + + /** + * Sets maximum inactivity period for a system worker. When this timeout is exceeded, the worker is considered + * blocked and the critical failure handler is invoked. + * + * @see #setFailureHandler(FailureHandler) + * @param sysWorkerBlockedTimeout Maximum inactivity period for system worker in milliseconds. + * @return {@code this} for chaining. + */ + public IgniteConfiguration setSystemWorkerBlockedTimeout(long sysWorkerBlockedTimeout) { + this.sysWorkerBlockedTimeout = sysWorkerBlockedTimeout; + + return this; + } + /** * Should return fully configured load balancing SPI implementation. If not provided, * {@link RoundRobinLoadBalancingSpi} will be used. @@ -2061,6 +2125,28 @@ public IndexingSpi getIndexingSpi() { return indexingSpi; } + /** + * Sets a fully configured instance of {@link EncryptionSpi}. + * + * @param encryptionSpi Fully configured instance of {@link EncryptionSpi}. + * @see IgniteConfiguration#getEncryptionSpi() + * @return {@code this} for chaining. + */ + public IgniteConfiguration setEncryptionSpi(EncryptionSpi encryptionSpi) { + this.encryptionSpi = encryptionSpi; + + return this; + } + + /** + * Gets the fully configured encryption SPI implementation. + * + * @return Encryption SPI implementation. + */ + public EncryptionSpi getEncryptionSpi() { + return encryptionSpi; + } + /** * Gets address resolver for addresses mapping determination.
* @@ -2998,18 +3084,18 @@ public IgniteConfiguration setFailureHandler(FailureHandler failureHnd) { } /** - * Returns number of MVCC vacuum cleanup threads. + * Returns number of MVCC vacuum threads. * - * @return Number of MVCC vacuum cleanup threads. + * @return Number of MVCC vacuum threads. */ public int getMvccVacuumThreadCount() { return mvccVacuumThreadCnt; } /** - * Sets number of MVCC vacuum cleanup threads. + * Sets number of MVCC vacuum threads. * - * @param mvccVacuumThreadCnt Number of MVCC vacuum cleanup threads. + * @param mvccVacuumThreadCnt Number of MVCC vacuum threads. * @return {@code this} for chaining. */ public IgniteConfiguration setMvccVacuumThreadCount(int mvccVacuumThreadCnt) { @@ -3019,21 +3105,21 @@ public IgniteConfiguration setMvccVacuumThreadCount(int mvccVacuumThreadCnt) { } /** - * Returns time interval between vacuum runs. + * Returns time interval between MVCC vacuum runs in milliseconds. * - * @return Time interval between vacuum runs. + * @return Time interval between MVCC vacuum runs in milliseconds. */ - public int getMvccVacuumFrequency() { + public long getMvccVacuumFrequency() { return mvccVacuumFreq; } /** - * Sets time interval between vacuum runs. + * Sets time interval between MVCC vacuum runs in milliseconds. * - * @param mvccVacuumFreq Time interval between vacuum runs. + * @param mvccVacuumFreq Time interval between MVCC vacuum runs in milliseconds. * @return {@code this} for chaining. 
*/ - public IgniteConfiguration setMvccVacuumFrequency(int mvccVacuumFreq) { + public IgniteConfiguration setMvccVacuumFrequency(long mvccVacuumFreq) { this.mvccVacuumFreq = mvccVacuumFreq; return this; diff --git a/modules/core/src/main/java/org/apache/ignite/events/CacheEvent.java b/modules/core/src/main/java/org/apache/ignite/events/CacheEvent.java index 5aa9d0663b85d..9a437f7a369dd 100644 --- a/modules/core/src/main/java/org/apache/ignite/events/CacheEvent.java +++ b/modules/core/src/main/java/org/apache/ignite/events/CacheEvent.java @@ -140,6 +140,10 @@ public class CacheEvent extends EventAdapter { @GridToStringInclude private String taskName; + /** Transaction label. */ + @GridToStringInclude + private String txLbl; + /** * Constructs cache event. * @@ -152,6 +156,7 @@ public class CacheEvent extends EventAdapter { * @param near Flag indicating whether event happened on {@code near} or {@code partitioned} cache. * @param key Cache key. * @param xid Transaction ID. + * @param txLbl Transaction label. * @param lockId Lock ID. * @param newVal New value. * @param hasNewVal Flag indicating whether new value is present in case if we @@ -163,7 +168,7 @@ public class CacheEvent extends EventAdapter { * @param cloClsName Closure class name. 
*/ public CacheEvent(String cacheName, ClusterNode node, @Nullable ClusterNode evtNode, String msg, int type, int part, - boolean near, Object key, IgniteUuid xid, Object lockId, Object newVal, boolean hasNewVal, + boolean near, Object key, IgniteUuid xid, String txLbl, Object lockId, Object newVal, boolean hasNewVal, Object oldVal, boolean hasOldVal, UUID subjId, String cloClsName, String taskName) { super(node, msg, type); this.cacheName = cacheName; @@ -172,6 +177,7 @@ public CacheEvent(String cacheName, ClusterNode node, @Nullable ClusterNode evtN this.near = near; this.key = key; this.xid = xid; + this.txLbl = txLbl; this.lockId = lockId; this.newVal = newVal; this.hasNewVal = hasNewVal; @@ -229,7 +235,7 @@ public K key() { } /** - * ID of surrounding cache cache transaction or null if there is + * ID of surrounding cache transaction or null if there is * no surrounding transaction. * * @return ID of surrounding cache transaction. @@ -320,6 +326,16 @@ public String taskName() { return taskName; } + /** + * Label of surrounding cache transaction or null if there either is + * no surrounding transaction or label was not set. + * + * @return Label of surrounding cache transaction. + */ + public String txLabel() { + return txLbl; + } + /** {@inheritDoc} */ @Override public String shortDisplay() { return name() + ": near=" + near + ", key=" + key + ", hasNewVal=" + hasNewVal + ", hasOldVal=" + hasOldVal + diff --git a/modules/core/src/main/java/org/apache/ignite/events/EventType.java b/modules/core/src/main/java/org/apache/ignite/events/EventType.java index 485e5671e6423..97017d638dd0a 100644 --- a/modules/core/src/main/java/org/apache/ignite/events/EventType.java +++ b/modules/core/src/main/java/org/apache/ignite/events/EventType.java @@ -244,6 +244,16 @@ public interface EventType { */ public static final int EVT_TASK_REDUCED = 25; + /** + * Built-in event type: Visor or Web Console management task started. + *

+ * NOTE: all types in range from 1 to 1000 are reserved for + * internal Ignite events and should not be used by user-defined events. + * + * @see TaskEvent + */ + public static final int EVT_MANAGEMENT_TASK_STARTED = 26; + /** * Built-in event type: non-task class deployed. *

diff --git a/modules/core/src/main/java/org/apache/ignite/failure/AbstractFailureHandler.java b/modules/core/src/main/java/org/apache/ignite/failure/AbstractFailureHandler.java index 6ca6520f2128d..79b1f8f6ea591 100644 --- a/modules/core/src/main/java/org/apache/ignite/failure/AbstractFailureHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/failure/AbstractFailureHandler.java @@ -18,22 +18,33 @@ package org.apache.ignite.failure; import java.util.Collections; +import java.util.EnumSet; import java.util.Set; import org.apache.ignite.Ignite; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; +import static org.apache.ignite.failure.FailureType.SYSTEM_CRITICAL_OPERATION_TIMEOUT; +import static org.apache.ignite.failure.FailureType.SYSTEM_WORKER_BLOCKED; + /** * Abstract superclass for {@link FailureHandler} implementations. - * Maintains a set of ignored failure types. + * Maintains a set of ignored failure types. Failure handler will not invalidate kernal context for these failures + * and will not handle them. */ public abstract class AbstractFailureHandler implements FailureHandler { /** */ @GridToStringInclude - private Set ignoredFailureTypes = Collections.emptySet(); + private Set ignoredFailureTypes = + Collections.unmodifiableSet(EnumSet.of(SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT)); - /** {@inheritDoc} */ - @Override public void setIgnoredFailureTypes(Set failureTypes) { + /** + * Sets failure types that must be ignored by failure handler. + * + * @param failureTypes Set of failure types that must be ignored.
+ * @see FailureType + */ + public void setIgnoredFailureTypes(Set failureTypes) { ignoredFailureTypes = Collections.unmodifiableSet(failureTypes); } diff --git a/modules/core/src/main/java/org/apache/ignite/failure/FailureHandler.java b/modules/core/src/main/java/org/apache/ignite/failure/FailureHandler.java index f325e65608a8b..8717b16707174 100644 --- a/modules/core/src/main/java/org/apache/ignite/failure/FailureHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/failure/FailureHandler.java @@ -17,7 +17,6 @@ package org.apache.ignite.failure; -import java.util.Set; import org.apache.ignite.Ignite; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.failure.FailureProcessor; @@ -37,9 +36,4 @@ public interface FailureHandler { * @return Whether kernal context must be invalidated or not. */ public boolean onFailure(Ignite ignite, FailureContext failureCtx); - - /** - * Sets failure types to ignore. - */ - public void setIgnoredFailureTypes(Set failureTypes); } diff --git a/modules/core/src/main/java/org/apache/ignite/failure/FailureType.java b/modules/core/src/main/java/org/apache/ignite/failure/FailureType.java index fbd5529fc8792..114e432e1ef73 100644 --- a/modules/core/src/main/java/org/apache/ignite/failure/FailureType.java +++ b/modules/core/src/main/java/org/apache/ignite/failure/FailureType.java @@ -31,5 +31,8 @@ public enum FailureType { SYSTEM_WORKER_BLOCKED, /** Critical error - error which leads to the system's inoperability. */ - CRITICAL_ERROR + CRITICAL_ERROR, + + /** System-critical operation has been timed out. 
*/ + SYSTEM_CRITICAL_OPERATION_TIMEOUT } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridComponent.java b/modules/core/src/main/java/org/apache/ignite/internal/GridComponent.java index 0cf3a6eb34771..607217ebde2d3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridComponent.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridComponent.java @@ -67,7 +67,10 @@ enum DiscoveryDataExchangeType { AUTH_PROC, /** */ - CACHE_CRD_PROC + CACHE_CRD_PROC, + + /** Encryption manager. */ + ENCRYPTION_MGR } /** @@ -153,7 +156,7 @@ enum DiscoveryDataExchangeType { @Nullable public IgniteNodeValidationResult validateNode(ClusterNode node); /** */ - @Nullable public IgniteNodeValidationResult validateNode(ClusterNode node, DiscoveryDataBag.JoiningNodeDiscoveryData discoData); + @Nullable public IgniteNodeValidationResult validateNode(ClusterNode node, JoiningNodeDiscoveryData discoData); /** * Gets unique component type to distinguish components providing discovery data. Must return non-null value @@ -180,4 +183,4 @@ enum DiscoveryDataExchangeType { * @return Future to wait before completing reconnect future. 
*/ @Nullable public IgniteInternalFuture onReconnected(boolean clusterRestarted) throws IgniteCheckedException; -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridJobCancelRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/GridJobCancelRequest.java index aaa69eaff7190..ac3a87336fc9f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridJobCancelRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridJobCancelRequest.java @@ -201,4 +201,4 @@ public boolean system() { @Override public String toString() { return S.toString(GridJobCancelRequest.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteRequest.java index 4357d1da07dce..ebfeb0153ffdc 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteRequest.java @@ -664,7 +664,7 @@ public AffinityTopologyVersion getTopVer() { writer.incrementState(); case 24: - if (!writer.writeMessage("topVer", topVer)) + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -885,7 +885,7 @@ public AffinityTopologyVersion getTopVer() { reader.incrementState(); case 24: - topVer = reader.readMessage("topVer"); + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteResponse.java index 312435e922750..f052edf07d446 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridJobExecuteResponse.java @@ -282,7 +282,7 @@ 
public AffinityTopologyVersion getRetryTopologyVersion() { writer.incrementState(); case 6: - if (!writer.writeMessage("retry", retry)) + if (!writer.writeAffinityTopologyVersion("retry", retry)) return false; writer.incrementState(); @@ -355,7 +355,7 @@ public AffinityTopologyVersion getRetryTopologyVersion() { reader.incrementState(); case 6: - retry = reader.readMessage("retry"); + retry = reader.readAffinityTopologyVersion("retry"); if (!reader.isLastRead()) return false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsRequest.java index 8a11cef33aa21..d743a355f8d8e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsRequest.java @@ -161,4 +161,4 @@ public byte[] topicBytes() { @Override public String toString() { return S.toString(GridJobSiblingsRequest.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsResponse.java index 3911446d2b86b..dc59ab5f3057c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridJobSiblingsResponse.java @@ -141,4 +141,4 @@ public void unmarshalSiblings(Marshaller marsh) throws IgniteCheckedException { @Override public String toString() { return S.toString(GridJobSiblingsResponse.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContext.java b/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContext.java index 4cb68da5f60c2..a43312cc8ac6a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContext.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContext.java @@ -28,6 +28,7 @@ import org.apache.ignite.internal.managers.communication.GridIoManager; import org.apache.ignite.internal.managers.deployment.GridDeploymentManager; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager; import org.apache.ignite.internal.managers.failover.GridFailoverManager; import org.apache.ignite.internal.managers.indexing.GridIndexingManager; @@ -424,6 +425,13 @@ public interface GridKernalContext extends Iterable { */ public GridIndexingManager indexing(); + /** + * Gets encryption manager. + * + * @return Encryption manager. + */ + public GridEncryptionManager encryption(); + /** * Gets workers registry. * @@ -690,4 +698,9 @@ public interface GridKernalContext extends Iterable { * @return Default uncaught exception handler used by thread pools. */ public Thread.UncaughtExceptionHandler uncaughtExceptionHandler(); + + /** + * @return {@code True} if node is in recovery mode (before join to topology). 
+ */ + public boolean recoveryMode(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContextImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContextImpl.java index a0e3f93a65c39..08090f2bff1f3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContextImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridKernalContextImpl.java @@ -38,6 +38,7 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.failure.FailureType; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.managers.checkpoint.GridCheckpointManager; import org.apache.ignite.internal.managers.collision.GridCollisionManager; import org.apache.ignite.internal.managers.communication.GridIoManager; @@ -162,6 +163,10 @@ public class GridKernalContextImpl implements GridKernalContext, Externalizable @GridToStringExclude private GridIndexingManager indexingMgr; + /** */ + @GridToStringExclude + private GridEncryptionManager encryptionMgr; + /* * Processors. * ========== @@ -410,6 +415,9 @@ public class GridKernalContextImpl implements GridKernalContext, Externalizable /** Failure processor. */ private FailureProcessor failureProc; + /** Recovery mode flag. Flag is set to {@code false} when discovery manager started. */ + private boolean recoveryMode = true; + /** * No-arg constructor is required by externalization. */ @@ -557,6 +565,8 @@ else if (comp instanceof GridLoadBalancerManager) loadMgr = (GridLoadBalancerManager)comp; else if (comp instanceof GridIndexingManager) indexingMgr = (GridIndexingManager)comp; + else if (comp instanceof GridEncryptionManager) + encryptionMgr = (GridEncryptionManager)comp; /* * Processors. 
@@ -801,6 +811,11 @@ else if (helper instanceof HadoopHelper) return indexingMgr; } + /** {@inheritDoc} */ + @Override public GridEncryptionManager encryption() { + return encryptionMgr; + } + /** {@inheritDoc} */ @Override public WorkersRegistry workersRegistry() { return workersRegistry; @@ -1168,6 +1183,18 @@ public Thread.UncaughtExceptionHandler uncaughtExceptionHandler() { return hnd; } + /** {@inheritDoc} */ + @Override public boolean recoveryMode() { + return recoveryMode; + } + + /** + * @param recoveryMode Recovery mode. + */ + public void recoveryMode(boolean recoveryMode) { + this.recoveryMode = recoveryMode; + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(GridKernalContextImpl.class, this); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridTaskCancelRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/GridTaskCancelRequest.java index 273d0a777a468..71c318b537dd5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridTaskCancelRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridTaskCancelRequest.java @@ -124,4 +124,4 @@ public IgniteUuid sessionId() { @Override public String toString() { return S.toString(GridTaskCancelRequest.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridTaskSessionRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/GridTaskSessionRequest.java index dbac893189e38..576392e097fc3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridTaskSessionRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridTaskSessionRequest.java @@ -189,4 +189,4 @@ public IgniteUuid getJobId() { @Override public String toString() { return S.toString(GridTaskSessionRequest.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/GridTopic.java 
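The recovery-mode flag added to GridKernalContextImpl above (starts `true`, cleared once startup recovery finishes) can be sketched in isolation. This is a hypothetical stand-in class, not Ignite's own code:

```java
// Minimal sketch (not the actual Ignite class) of the recovery-mode flag:
// the kernal context starts in recovery mode and the flag is cleared once
// memory restore completes, just before the node joins topology.
class RecoveryModeSketch {
    /** Set to false when startup recovery has finished. */
    private boolean recoveryMode = true;

    boolean recoveryMode() {
        return recoveryMode;
    }

    void recoveryMode(boolean recoveryMode) {
        this.recoveryMode = recoveryMode;
    }

    /** Components can consult the flag to skip work that requires a joined topology. */
    String describe() {
        return recoveryMode ? "recovering (before join)" : "joined";
    }
}
```

Usage mirrors IgniteKernal.start(): the flag is flipped off right after `startMemoryRestore` returns.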
b/modules/core/src/main/java/org/apache/ignite/internal/GridTopic.java index 98a4d8d7be1ca..95d7717ee2f37 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/GridTopic.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/GridTopic.java @@ -133,7 +133,10 @@ public enum GridTopic { TOPIC_EXCHANGE, /** */ - TOPIC_CACHE_COORDINATOR; + TOPIC_CACHE_COORDINATOR, + + /** */ + TOPIC_GEN_ENC_KEY; /** Enum values. */ private static final GridTopic[] VALS = values(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java b/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java index 6b1c9956717af..a60233552a88d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java @@ -91,6 +91,7 @@ import org.apache.ignite.configuration.BinaryConfiguration; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.CollectionConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.MemoryConfiguration; @@ -108,6 +109,7 @@ import org.apache.ignite.internal.managers.deployment.GridDeploymentManager; import org.apache.ignite.internal.managers.discovery.DiscoveryLocalJoinData; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager; import org.apache.ignite.internal.managers.failover.GridFailoverManager; import org.apache.ignite.internal.managers.indexing.GridIndexingManager; @@ -127,7 +129,6 @@ import org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl; import 
org.apache.ignite.internal.processors.cache.mvcc.MvccProcessorImpl; import org.apache.ignite.internal.processors.cache.persistence.DataRegion; -import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.DataStorageMXBeanImpl; import org.apache.ignite.internal.processors.cache.persistence.filename.PdsConsistentIdProcessor; import org.apache.ignite.internal.processors.cacheobject.IgniteCacheObjectProcessor; @@ -186,9 +187,11 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.worker.FailureHandlingMxBeanImpl; import org.apache.ignite.internal.worker.WorkersControlMXBeanImpl; import org.apache.ignite.internal.worker.WorkersRegistry; import org.apache.ignite.lang.IgniteBiTuple; +import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.lang.IgniteProductVersion; @@ -200,6 +203,7 @@ import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.mxbean.ClusterMetricsMXBean; import org.apache.ignite.mxbean.DataStorageMXBean; +import org.apache.ignite.mxbean.FailureHandlingMxBean; import org.apache.ignite.mxbean.IgniteMXBean; import org.apache.ignite.mxbean.StripedExecutorMXBean; import org.apache.ignite.mxbean.ThreadPoolMXBean; @@ -269,6 +273,7 @@ import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_SPI_CLASS; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_TX_CONFIG; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_USER_NAME; +import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_VALIDATE_CACHE_REQUESTS; import static org.apache.ignite.internal.IgniteVersionUtils.ACK_VER_STR; import static 
org.apache.ignite.internal.IgniteVersionUtils.BUILD_TSTAMP_STR; import static org.apache.ignite.internal.IgniteVersionUtils.COPYRIGHT; @@ -987,6 +992,7 @@ public void start( startManager(new GridFailoverManager(ctx)); startManager(new GridCollisionManager(ctx)); startManager(new GridIndexingManager(ctx)); + startManager(new GridEncryptionManager(ctx)); ackSecurity(); @@ -1040,6 +1046,10 @@ public void start( fillNodeAttributes(clusterProc.updateNotifierEnabled()); ctx.cache().context().database().notifyMetaStorageSubscribersOnReadyForRead(); + + ctx.cache().context().database().startMemoryRestore(ctx); + + ctx.recoveryMode(false); } catch (Throwable e) { U.error( @@ -1079,6 +1089,14 @@ public void start( IgniteInternalFuture transitionWaitFut = joinData.transitionWaitFuture(); + // Notify the discovery manager first to make sure the topology is discovered. + // The active flag is not used in managers, so it is safe to pass true. + ctx.discovery().onKernalStart(true); + + // Notify the IO manager second so that further components can send and receive messages. + // Must notify the IO manager before awaiting the transition state to make sure IO connections can be established. + ctx.io().onKernalStart(true); + boolean active; if (transitionWaitFut != null) { @@ -1092,12 +1110,6 @@ public void start( else active = joinData.active(); - // Notify discovery manager the first to make sure that topology is discovered. - ctx.discovery().onKernalStart(active); - - // Notify IO manager the second so further components can send and receive messages. - ctx.io().onKernalStart(active); - boolean recon = false; // Callbacks.
@@ -1254,7 +1266,8 @@ private long checkPoolStarvation( GridKernalContext ctx = IgniteKernal.this.ctx; if (ctx != null) - ctx.cache().context().exchange().dumpLongRunningOperations(longOpDumpTimeout); + ctx.closure().runLocalSafe(() -> ctx.cache().context().exchange().dumpLongRunningOperations(longOpDumpTimeout)); + } }, longOpDumpTimeout, longOpDumpTimeout); } @@ -1352,7 +1365,9 @@ private HadoopProcessorAdapter createHadoopComponent() throws IgniteCheckedExcep private void validateCommon(IgniteConfiguration cfg) { A.notNull(cfg.getNodeId(), "cfg.getNodeId()"); - A.notNull(cfg.getMBeanServer(), "cfg.getMBeanServer()"); + if (!U.IGNITE_MBEANS_DISABLED) + A.notNull(cfg.getMBeanServer(), "cfg.getMBeanServer()"); + A.notNull(cfg.getGridLogger(), "cfg.getGridLogger()"); A.notNull(cfg.getMarshaller(), "cfg.getMarshaller()"); A.notNull(cfg.getUserAttributes(), "cfg.getUserAttributes()"); @@ -1553,6 +1568,8 @@ private void fillNodeAttributes(boolean notifyEnabled) throws IgniteCheckedExcep add(ATTR_CONSISTENCY_CHECK_SKIPPED, getBoolean(IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK)); + add(ATTR_VALIDATE_CACHE_REQUESTS, Boolean.TRUE); + if (cfg.getConsistentId() != null) add(ATTR_NODE_CONSISTENT_ID, cfg.getConsistentId()); @@ -1864,13 +1881,24 @@ private void ackStart(RuntimeMXBean rtBean) { ClusterNode locNode = localNode(); if (log.isQuiet()) { + ackDataRegions(s -> { + U.quiet(false, s); + + return null; + }); + U.quiet(false, ""); + U.quiet(false, "Ignite node started OK (id=" + U.id8(locNode.id()) + (F.isEmpty(igniteInstanceName) ? "" : ", instance name=" + igniteInstanceName) + ')'); } if (log.isInfoEnabled()) { - log.info(""); + ackDataRegions(s -> { + log.info(s); + + return null; + }); String ack = "Ignite ver. " + VER_STR + '#' + BUILD_TSTAMP_STR + "-sha1:" + REV_HASH_STR; @@ -1907,6 +1935,48 @@ private void ackStart(RuntimeMXBean rtBean) { } } + /** + * @param clo Message output closure. 
+ */ + public void ackDataRegions(IgniteClosure clo) { + DataStorageConfiguration memCfg = ctx.config().getDataStorageConfiguration(); + + if (memCfg == null) + return; + + clo.apply("Data Regions Configured:"); + clo.apply(dataRegionConfigurationMessage(memCfg.getDefaultDataRegionConfiguration())); + + DataRegionConfiguration[] dataRegions = memCfg.getDataRegionConfigurations(); + + if (dataRegions != null) { + for (DataRegionConfiguration dataRegion : dataRegions) { + String msg = dataRegionConfigurationMessage(dataRegion); + + if (msg != null) + clo.apply(msg); + } + } + } + + /** + * @param regCfg Data region configuration. + * @return Data region message. + */ + private String dataRegionConfigurationMessage(DataRegionConfiguration regCfg) { + if (regCfg == null) + return null; + + SB m = new SB(); + + m.a(" ^-- ").a(regCfg.getName()).a(" ["); + m.a("initSize=").a(U.readableSize(regCfg.getInitialSize(), false)); + m.a(", maxSize=").a(U.readableSize(regCfg.getMaxSize(), false)); + m.a(", persistence=" + regCfg.isPersistenceEnabled()).a(']'); + + return m.toString(); + } + /** * Logs out OS information. */ @@ -2040,12 +2110,7 @@ private void ackNodeMetrics(DecimalFormat dblFmt, pdsUsedSummary += pdsUsed; - // TODO https://issues.apache.org/jira/browse/IGNITE-9455 - // TODO Print actual value for meta store region when issue will be fixed. - boolean metastore = - GridCacheDatabaseSharedManager.METASTORE_DATA_REGION_NAME.equals(region.config().getName()); - - String pdsUsedSize = metastore ? 
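The `ackDataRegions`/`dataRegionConfigurationMessage` pair above builds one banner line per configured data region. A plain-Java sketch of that formatting (with a simplified stand-in for Ignite's `U.readableSize`):

```java
// Hedged sketch of the data-region startup banner line built above.
// readableSize is a simplified stand-in for Ignite's U.readableSize and only
// handles exact power-of-two multiples; names and layout follow the diff.
class DataRegionBanner {
    static String readableSize(long bytes) {
        if (bytes >= 1L << 30) return (bytes >> 30) + "GB";
        if (bytes >= 1L << 20) return (bytes >> 20) + "MB";
        if (bytes >= 1L << 10) return (bytes >> 10) + "KB";
        return bytes + "B";
    }

    static String regionLine(String name, long initSize, long maxSize, boolean persistence) {
        // Mirrors the " ^-- name [initSize=..., maxSize=..., persistence=...]" layout.
        StringBuilder sb = new StringBuilder();
        sb.append(" ^-- ").append(name).append(" [")
          .append("initSize=").append(readableSize(initSize))
          .append(", maxSize=").append(readableSize(maxSize))
          .append(", persistence=").append(persistence).append(']');
        return sb.toString();
    }
}
```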
"unknown" : dblFmt.format(pdsUsedMBytes) + "MB"; + String pdsUsedSize = dblFmt.format(pdsUsedMBytes) + "MB"; pdsRegionsInfo.append(" ^-- ") .append(region.config().getName()).append(" region") @@ -2228,6 +2293,9 @@ else if (state == STARTING) } } + if (ctx.hadoopHelper() != null) + ctx.hadoopHelper().close(); + if (starveTask != null) starveTask.close(); @@ -2316,6 +2384,7 @@ else if (state == STARTING) MarshallerExclusions.clearCache(); BinaryEnumCache.clear(); + gw.writeLock(); try { @@ -2516,19 +2585,25 @@ private void ackSpis() { * */ private void ackRebalanceConfiguration() throws IgniteCheckedException { - if (cfg.getSystemThreadPoolSize() <= cfg.getRebalanceThreadPoolSize()) - throw new IgniteCheckedException("Rebalance thread pool size exceed or equals System thread pool size. " + - "Change IgniteConfiguration.rebalanceThreadPoolSize property before next start."); - - if (cfg.getRebalanceThreadPoolSize() < 1) - throw new IgniteCheckedException("Rebalance thread pool size minimal allowed value is 1. " + - "Change IgniteConfiguration.rebalanceThreadPoolSize property before next start."); - - for (CacheConfiguration ccfg : cfg.getCacheConfiguration()) { - if (ccfg.getRebalanceBatchesPrefetchCount() < 1) - throw new IgniteCheckedException("Rebalance batches prefetch count minimal allowed value is 1. " + - "Change CacheConfiguration.rebalanceBatchesPrefetchCount property before next start. " + - "[cache=" + ccfg.getName() + "]"); + if (cfg.isClientMode()) { + if (cfg.getRebalanceThreadPoolSize() != IgniteConfiguration.DFLT_REBALANCE_THREAD_POOL_SIZE) + U.warn(log, "Setting the rebalance pool size has no effect on the client mode"); + } + else { + if (cfg.getSystemThreadPoolSize() <= cfg.getRebalanceThreadPoolSize()) + throw new IgniteCheckedException("Rebalance thread pool size exceed or equals System thread pool size. 
" + + "Change IgniteConfiguration.rebalanceThreadPoolSize property before next start."); + + if (cfg.getRebalanceThreadPoolSize() < 1) + throw new IgniteCheckedException("Rebalance thread pool size minimal allowed value is 1. " + + "Change IgniteConfiguration.rebalanceThreadPoolSize property before next start."); + + for (CacheConfiguration ccfg : cfg.getCacheConfiguration()) { + if (ccfg.getRebalanceBatchesPrefetchCount() < 1) + throw new IgniteCheckedException("Rebalance batches prefetch count minimal allowed value is 1. " + + "Change CacheConfiguration.rebalanceBatchesPrefetchCount property before next start. " + + "[cache=" + ccfg.getName() + "]"); + } } } @@ -2932,7 +3007,10 @@ public IgniteInternalCache getCache(String name) { @Override public IgniteBiTuple, Boolean> getOrCreateCache0( CacheConfiguration cacheCfg, boolean sql) { A.notNull(cacheCfg, "cacheCfg"); - CU.validateNewCacheName(cacheCfg.getName()); + + String cacheName = cacheCfg.getName(); + + CU.validateNewCacheName(cacheName); guard(); @@ -2941,18 +3019,22 @@ public IgniteInternalCache getCache(String name) { Boolean res = false; - if (ctx.cache().cache(cacheCfg.getName()) == null) { + IgniteCacheProxy cache = ctx.cache().publicJCache(cacheName, false, true); + + if (cache == null) { res = sql ? 
ctx.cache().dynamicStartSqlCache(cacheCfg).get() : ctx.cache().dynamicStartCache(cacheCfg, - cacheCfg.getName(), + cacheName, null, false, true, true).get(); - } - return new IgniteBiTuple<>((IgniteCache)ctx.cache().publicJCache(cacheCfg.getName()), res); + return new IgniteBiTuple<>(ctx.cache().publicJCache(cacheName), res); + } + else + return new IgniteBiTuple<>(cache, res); } catch (IgniteCheckedException e) { throw CU.convertToCacheException(e); @@ -3200,7 +3282,7 @@ public IgniteInternalFuture destroyCacheAsync(String cacheName, boolean try { checkClusterState(); - return ctx.cache().dynamicDestroyCache(cacheName, sql, checkThreadTx, false); + return ctx.cache().dynamicDestroyCache(cacheName, sql, checkThreadTx, false, null); } finally { unguard(); @@ -3220,7 +3302,7 @@ public IgniteInternalFuture destroyCachesAsync(Collection cacheNames, try { checkClusterState(); - return ctx.cache().dynamicDestroyCaches(cacheNames, checkThreadTx, false); + return ctx.cache().dynamicDestroyCaches(cacheNames, checkThreadTx); } finally { unguard(); @@ -3236,10 +3318,15 @@ public IgniteInternalFuture destroyCachesAsync(Collection cacheNames, try { checkClusterState(); - if (ctx.cache().cache(cacheName) == null) + IgniteCacheProxy cache = ctx.cache().publicJCache(cacheName, false, true); + + if (cache == null) { ctx.cache().getOrCreateFromTemplate(cacheName, true).get(); - return ctx.cache().publicJCache(cacheName); + return ctx.cache().publicJCache(cacheName); + } + + return cache; } catch (IgniteCheckedException e) { throw CU.convertToCacheException(e); @@ -4306,6 +4393,12 @@ private void registerAllMBeans( registerMBean("Kernal", workerCtrlMXBean.getClass().getSimpleName(), workerCtrlMXBean, WorkersControlMXBean.class); } + + FailureHandlingMxBean blockOpCtrlMXBean = new FailureHandlingMxBeanImpl(workersRegistry, + ctx.cache().context().database()); + + registerMBean("Kernal", blockOpCtrlMXBean.getClass().getSimpleName(), blockOpCtrlMXBean, + FailureHandlingMxBean.class); } 
/** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/IgniteNodeAttributes.java b/modules/core/src/main/java/org/apache/ignite/internal/IgniteNodeAttributes.java index 5b764e40bd550..3945adf33d38d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/IgniteNodeAttributes.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/IgniteNodeAttributes.java @@ -196,12 +196,18 @@ public final class IgniteNodeAttributes { /** User authentication enabled flag. */ public static final String ATTR_AUTHENTICATION_ENABLED = ATTR_PREFIX + ".authentication.enabled"; + /** Encryption master key digest. */ + public static final String ATTR_ENCRYPTION_MASTER_KEY_DIGEST = ATTR_PREFIX + ".master.key.digest"; + /** Rebalance thread pool size. */ public static final String ATTR_REBALANCE_POOL_SIZE = ATTR_PREFIX + ".rebalance.pool.size"; /** Internal attribute name constant. */ public static final String ATTR_DYNAMIC_CACHE_START_ROLLBACK_SUPPORTED = ATTR_PREFIX + ".dynamic.cache.start.rollback.supported"; + /** Internal attribute indicates that incoming cache requests should be validated on primary node as well. */ + public static final String ATTR_VALIDATE_CACHE_REQUESTS = ATTR_CACHE + ".validate.cache.requests"; + /** * Enforces singleton. */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/IgniteVersionUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/IgniteVersionUtils.java index 8a459522e70b2..d12560e813a72 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/IgniteVersionUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/IgniteVersionUtils.java @@ -37,6 +37,9 @@ public class IgniteVersionUtils { /** Build timestamp in seconds. */ public static final long BUILD_TSTAMP; + /** Build timestamp string property value. */ + private static final String BUILD_TSTAMP_FROM_PROPERTY; + /** Revision hash. 
*/ public static final String REV_HASH_STR; @@ -47,7 +50,7 @@ public class IgniteVersionUtils { public static final String ACK_VER_STR; /** Copyright blurb. */ - public static final String COPYRIGHT = "2018 Copyright(C) Apache Software Foundation"; + public static final String COPYRIGHT; /** * Static initializer. @@ -58,10 +61,18 @@ public class IgniteVersionUtils { .replace(".b", "-b") .replace(".final", "-final"); - BUILD_TSTAMP = Long.valueOf(IgniteProperties.get("ignite.build")); + BUILD_TSTAMP_FROM_PROPERTY = IgniteProperties.get("ignite.build"); + + // A development ignite.properties file contains ignite.build = 0, so check for that as well. + BUILD_TSTAMP = !BUILD_TSTAMP_FROM_PROPERTY.isEmpty() && Long.parseLong(BUILD_TSTAMP_FROM_PROPERTY) != 0 + ? Long.parseLong(BUILD_TSTAMP_FROM_PROPERTY) : System.currentTimeMillis() / 1000; + BUILD_TSTAMP_STR = new SimpleDateFormat("yyyyMMdd").format(new Date(BUILD_TSTAMP * 1000)); + COPYRIGHT = BUILD_TSTAMP_STR.substring(0, 4) + " Copyright(C) Apache Software Foundation"; + REV_HASH_STR = IgniteProperties.get("ignite.revision"); + RELEASE_DATE_STR = IgniteProperties.get("ignite.rel.date"); String rev = REV_HASH_STR.length() > 8 ?
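The IgniteVersionUtils change above replaces the hard-coded copyright year with one derived from the build timestamp, falling back to "now" when a development build ships `ignite.build = 0`. A hedged, self-contained sketch of that logic (stand-in method names, UTC fixed for determinism):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Sketch of the build-timestamp fallback: an empty or zero ignite.build
// property means "no real build stamp", so use the current time instead of
// formatting the 1970 epoch; the copyright year is then derived from it.
class BuildStampSketch {
    static long resolveBuildTstamp(String prop) {
        // Development builds ship ignite.build = 0 (or an empty value).
        if (prop == null || prop.isEmpty() || Long.parseLong(prop) == 0)
            return System.currentTimeMillis() / 1000;

        return Long.parseLong(prop);
    }

    static String copyrightFor(long tstampSec) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // Deterministic for the example.

        String stamp = fmt.format(new Date(tstampSec * 1000));

        // First four characters of yyyyMMdd are the year.
        return stamp.substring(0, 4) + " Copyright(C) Apache Software Foundation";
    }
}
```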
REV_HASH_STR.substring(0, 8) : REV_HASH_STR; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/IgnitionEx.java b/modules/core/src/main/java/org/apache/ignite/internal/IgnitionEx.java index ed0fbe9b5671a..d3dde71d22d5e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/IgnitionEx.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/IgnitionEx.java @@ -115,6 +115,7 @@ import org.apache.ignite.spi.deployment.local.LocalDeploymentSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder; +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi; import org.apache.ignite.spi.eventstorage.NoopEventStorageSpi; import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.spi.indexing.noop.NoopIndexingSpi; @@ -137,6 +138,7 @@ import static org.apache.ignite.IgniteSystemProperties.IGNITE_NO_SHUTDOWN_HOOK; import static org.apache.ignite.IgniteSystemProperties.IGNITE_RESTART_CODE; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SUCCESS_FILE; +import static org.apache.ignite.IgniteSystemProperties.IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -1155,9 +1157,11 @@ private static T2 start0(GridStartContext startCtx try { grid.start(startCtx); } - catch (IgniteInterruptedCheckedException e) { - if (grid.starterThreadInterrupted) - Thread.interrupted(); + catch (Exception e) { + if (X.hasCause(e, IgniteInterruptedCheckedException.class, InterruptedException.class)) { + if (grid.starterThreadInterrupted) + Thread.interrupted(); + } throw e; } @@ -1829,7 +1833,10 @@ private void start0(GridStartContext startCtx) throws IgniteCheckedException { new IgniteException(S.toString(GridWorker.class, deadWorker)))); } 
}, - cfg.getFailureDetectionTimeout(), + IgniteSystemProperties.getLong(IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT, + cfg.getSystemWorkerBlockedTimeout() != null + ? cfg.getSystemWorkerBlockedTimeout() + : cfg.getFailureDetectionTimeout()), log); stripedExecSvc = new StripedExecutor( @@ -2444,6 +2451,9 @@ private void initializeDefaultSpi(IgniteConfiguration cfg) { if (cfg.getIndexingSpi() == null) cfg.setIndexingSpi(new NoopIndexingSpi()); + + if (cfg.getEncryptionSpi() == null) + cfg.setEncryptionSpi(new NoopEncryptionSpi()); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/MarshallerContextImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/MarshallerContextImpl.java index 9bad1eacb772e..7d5bbda4100ad 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/MarshallerContextImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/MarshallerContextImpl.java @@ -576,10 +576,10 @@ public Iterator>> currentMappings() { } /** - * @return custom marshaller mapping files directory. Used for standalone WAL iteration + * @return {@code True} if marshaller context is initialized. 
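The IgnitionEx change above resolves the blocked-worker watchdog timeout through a precedence chain. A minimal sketch of that order (stand-in method, nullable wrappers modeling "not set"):

```java
// Hedged sketch of the precedence used when arming the system-worker
// watchdog: the IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT system property wins,
// then IgniteConfiguration.systemWorkerBlockedTimeout if set, and finally
// the failure detection timeout as the default.
class WorkerTimeoutSketch {
    static long resolveBlockedTimeout(Long sysProp, Long cfgTimeout, long failureDetectionTimeout) {
        if (sysProp != null)
            return sysProp; // Explicit override via system property.

        // Fall back from the dedicated config value to the generic one.
        return cfgTimeout != null ? cfgTimeout : failureDetectionTimeout;
    }
}
```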
*/ - @Nullable public File getMarshallerMappingFileStoreDir() { - return marshallerMappingFileStoreDir; + public boolean initialized() { + return fileStore != null; } /** @@ -656,4 +656,4 @@ static final class CombinedMap extends AbstractMap return userMap.containsKey(key) || sysMap.containsKey(key); } } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/TransactionsMXBeanImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/TransactionsMXBeanImpl.java index 16738de12343d..a8a3c886617b8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/TransactionsMXBeanImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/TransactionsMXBeanImpl.java @@ -72,12 +72,8 @@ else if ("servers".equals(prj)) VisorTxSortOrder sortOrder = null; - if (order != null) { - if ("DURATION".equals(order)) - sortOrder = VisorTxSortOrder.DURATION; - else if ("SIZE".equals(order)) - sortOrder = VisorTxSortOrder.SIZE; - } + if (order != null) + sortOrder = VisorTxSortOrder.valueOf(order.toUpperCase()); VisorTxTaskArg arg = new VisorTxTaskArg(kill ? VisorTxOperation.KILL : VisorTxOperation.LIST, limit, minDuration == null ? 
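The TransactionsMXBeanImpl change above collapses a hand-written if/else chain over sort-order names into `Enum.valueOf`. A stand-alone sketch (`SortOrder` is a hypothetical stand-in for `VisorTxSortOrder`):

```java
// Sketch of the if/else-to-valueOf simplification. Note one behavioral
// difference: valueOf throws IllegalArgumentException for an unknown name,
// where the old chain silently left the sort order null.
class SortOrderSketch {
    enum SortOrder { DURATION, SIZE }

    static SortOrder parse(String order) {
        // Case-insensitive lookup covering every enum constant.
        return order == null ? null : SortOrder.valueOf(order.toUpperCase());
    }
}
```

The payoff is that new enum constants are picked up automatically instead of requiring another `else if` branch.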
null : minDuration * 1000, minSize, null, proj, consIds, xid, lbRegex, sortOrder); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryFieldAccessor.java b/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryFieldAccessor.java index 87c4f3e18d8fc..7d138a30e6d82 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryFieldAccessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryFieldAccessor.java @@ -26,6 +26,7 @@ import java.util.Map; import java.util.UUID; import org.apache.ignite.binary.BinaryObjectException; +import org.apache.ignite.internal.UnregisteredBinaryTypeException; import org.apache.ignite.internal.UnregisteredClassException; import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.internal.util.typedef.F; @@ -156,7 +157,7 @@ public void write(Object obj, BinaryWriterExImpl writer) throws BinaryObjectExce write0(obj, writer); } catch (Exception ex) { - if (ex instanceof UnregisteredClassException) + if (ex instanceof UnregisteredClassException || ex instanceof UnregisteredBinaryTypeException) throw ex; if (S.INCLUDE_SENSITIVE && !F.isEmpty(name)) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryMetadataHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryMetadataHandler.java index 85ab1372f49f2..3652d98aa50a1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryMetadataHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryMetadataHandler.java @@ -20,13 +20,15 @@ import java.util.Collection; import org.apache.ignite.binary.BinaryObjectException; import org.apache.ignite.binary.BinaryType; +import org.apache.ignite.internal.processors.cache.binary.MetadataUpdateProposedMessage; /** - * Binary meta data handler. + * Binary metadata handler. */ public interface BinaryMetadataHandler { /** - * Adds meta data. 
+ * Adds a new or updates an existing metadata to the latest version. + * See {@link MetadataUpdateProposedMessage} javadoc for detailed protocol description. * * @param typeId Type ID. * @param meta Metadata. @@ -36,7 +38,7 @@ public interface BinaryMetadataHandler { public void addMeta(int typeId, BinaryType meta, boolean failIfUnregistered) throws BinaryObjectException; /** - * Gets meta data for provided type ID. + * Gets metadata for provided type ID. * * @param typeId Type ID. * @return Metadata. @@ -45,7 +47,7 @@ public interface BinaryMetadataHandler { public BinaryType metadata(int typeId) throws BinaryObjectException; /** - * Gets unwrapped meta data for provided type ID. + * Gets unwrapped metadata for provided type ID. * * @param typeId Type ID. * @return Metadata. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryObjectExImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryObjectExImpl.java index 920a296856213..f213ad916cc51 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryObjectExImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/binary/BinaryObjectExImpl.java @@ -32,6 +32,7 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.thread.IgniteThread; import org.jetbrains.annotations.Nullable; /** @@ -201,12 +202,17 @@ private String toString(BinaryReaderHandles ctx, IdentityHashMap extends GenericQueryPager> { ReliableChannel ch, ClientOperation qryOp, ClientOperation pageQryOp, - Consumer qryWriter, + Consumer qryWriter, boolean keepBinary, ClientBinaryMarshaller marsh ) { @@ -50,7 +49,9 @@ class ClientQueryPager extends GenericQueryPager> { } /** {@inheritDoc} */ - @Override Collection> readEntries(BinaryInputStream in) { + @Override Collection> readEntries(PayloadInputChannel paloadCh) { + BinaryInputStream in = 
paloadCh.in(); + return ClientUtils.collection( in, ignored -> new ClientCacheEntry<>(serDes.readObject(in, keepBinary), serDes.readObject(in, keepBinary)) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ClientUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ClientUtils.java index d218e451d4c5c..5c35cc24ae342 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ClientUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ClientUtils.java @@ -55,9 +55,8 @@ import org.apache.ignite.internal.binary.BinaryWriterExImpl; import org.apache.ignite.internal.binary.streams.BinaryInputStream; import org.apache.ignite.internal.binary.streams.BinaryOutputStream; -import org.apache.ignite.internal.processors.odbc.ClientListenerProtocolVersion; -import static org.apache.ignite.internal.processors.platform.client.ClientConnectionContext.VER_1_2_0; +import static org.apache.ignite.internal.client.thin.ProtocolVersion.V1_2_0; /** * Shared serialization/deserialization utils. @@ -234,7 +233,7 @@ void binaryMetadata(BinaryMetadata meta, BinaryOutputStream out) { } /** Serialize configuration to stream. 
*/ - void cacheConfiguration(ClientCacheConfiguration cfg, BinaryOutputStream out, ClientListenerProtocolVersion ver) { + void cacheConfiguration(ClientCacheConfiguration cfg, BinaryOutputStream out, ProtocolVersion ver) { try (BinaryRawWriterEx writer = new BinaryWriterExImpl(marsh.context(), out, null, null)) { int origPos = out.position(); @@ -313,7 +312,7 @@ void cacheConfiguration(ClientCacheConfiguration cfg, BinaryOutputStream out, Cl w.writeBoolean(qf.isNotNull()); w.writeObject(qf.getDefaultValue()); - if (ver.compareTo(VER_1_2_0) >= 0) { + if (ver.compareTo(V1_2_0) >= 0) { w.writeInt(qf.getPrecision()); w.writeInt(qf.getScale()); } @@ -349,7 +348,7 @@ void cacheConfiguration(ClientCacheConfiguration cfg, BinaryOutputStream out, Cl } /** Deserialize configuration from stream. */ - ClientCacheConfiguration cacheConfiguration(BinaryInputStream in, ClientListenerProtocolVersion ver) + ClientCacheConfiguration cacheConfiguration(BinaryInputStream in, ProtocolVersion ver) throws IOException { try (BinaryReaderExImpl reader = new BinaryReaderExImpl(marsh.context(), in, null, true)) { reader.readInt(); // Do not need length to read data. The protocol defines fixed configuration layout. 
@@ -394,7 +393,7 @@ ClientCacheConfiguration cacheConfiguration(BinaryInputStream in, ClientListener .setKeyFieldName(reader.readString()) .setValueFieldName(reader.readString()); - boolean isCliVer1_2 = ver.compareTo(VER_1_2_0) >= 0; + boolean isCliVer1_2 = ver.compareTo(V1_2_0) >= 0; Collection qryFields = ClientUtils.collection( in, diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/GenericQueryPager.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/GenericQueryPager.java index ce15caee73968..90ab5685c18db 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/GenericQueryPager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/GenericQueryPager.java @@ -19,14 +19,11 @@ import java.util.Collection; import java.util.function.Consumer; -import org.apache.ignite.internal.binary.streams.BinaryInputStream; -import org.apache.ignite.internal.binary.streams.BinaryOutputStream; -import org.apache.ignite.internal.processors.platform.client.ClientStatus; import org.apache.ignite.client.ClientException; -import org.apache.ignite.client.ClientConnectionException; +import org.apache.ignite.client.ClientReconnectedException; /** - * Generic query pager. Override {@link this#readResult(BinaryInputStream)} to make it specific. + * Generic query pager. Override {@link this#readResult(PayloadInputChannel)} to make it specific. */ abstract class GenericQueryPager implements QueryPager { /** Query op. */ @@ -36,7 +33,7 @@ abstract class GenericQueryPager implements QueryPager { private final ClientOperation pageQryOp; /** Query writer. */ - private final Consumer qryWriter; + private final Consumer qryWriter; /** Channel. */ private final ReliableChannel ch; @@ -50,12 +47,15 @@ abstract class GenericQueryPager implements QueryPager { /** Cursor id. */ private Long cursorId = null; + /** Client channel on first query page. */ + private ClientChannel clientCh; + /** Constructor. 
*/ GenericQueryPager( ReliableChannel ch, ClientOperation qryOp, ClientOperation pageQryOp, - Consumer qryWriter + Consumer qryWriter ) { this.ch = ch; this.qryOp = qryOp; @@ -75,7 +75,7 @@ abstract class GenericQueryPager implements QueryPager { @Override public void close() throws Exception { // Close cursor only if the server has more pages: the server closes cursor automatically on last page if (cursorId != null && hasNext) - ch.request(ClientOperation.RESOURCE_CLOSE, req -> req.writeLong(cursorId)); + ch.request(ClientOperation.RESOURCE_CLOSE, req -> req.out().writeLong(cursorId)); } /** {@inheritDoc} */ @@ -88,17 +88,28 @@ abstract class GenericQueryPager implements QueryPager { return hasFirstPage; } + /** {@inheritDoc} */ + @Override public void reset() { + hasFirstPage = false; + + hasNext = true; + + cursorId = null; + + clientCh = null; + } + /** * Override this method to read entries from the input stream. "Entries" means response data excluding heading * cursor ID and trailing "has next page" flag. * Use {@link this#hasFirstPage} flag to differentiate between the initial query and page query responses. 
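Aside (not part of the patch): the GenericQueryPager changes above add a reset() method and remember the channel that served the first query page, so that a page fetch after a reconnect can be detected instead of silently returning inconsistent results. A minimal sketch of that bookkeeping, with illustrative names (not the actual Ignite classes):

```java
// Hedged sketch of the pager state tracked by GenericQueryPager above.
// Class and method names are illustrative only.
public class PagerStateSketch {
    private Long cursorId;          // server-side cursor, assigned on first page
    private Object clientCh;        // channel that served the first page
    private boolean hasFirstPage;
    private boolean hasNext = true;

    /** Record state from the initial-query response. */
    public void onFirstPage(long resCursorId, Object ch) {
        cursorId = resCursorId;
        clientCh = ch;
        hasFirstPage = true;
    }

    /** A page request is only valid on the channel that opened the cursor. */
    public boolean canFetchNextPage(Object currentCh) {
        return hasFirstPage && hasNext && clientCh == currentCh;
    }

    /** Forget the cursor so the whole query can be retried from scratch. */
    public void reset() {
        hasFirstPage = false;
        hasNext = true;
        cursorId = null;
        clientCh = null;
    }

    public Long cursorId() { return cursorId; }
}
```

In the actual patch, queryPage() compares the stored channel with the current one and throws ClientReconnectedException on mismatch, leaving the retry decision to the caller.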
*/ - abstract Collection readEntries(BinaryInputStream in); + abstract Collection readEntries(PayloadInputChannel in); /** */ - private Collection readResult(BinaryInputStream in) { + private Collection readResult(PayloadInputChannel payloadCh) { if (!hasFirstPage) { - long resCursorId = in.readLong(); + long resCursorId = payloadCh.in().readLong(); if (cursorId != null) { if (cursorId != resCursorId) @@ -106,34 +117,31 @@ private Collection readResult(BinaryInputStream in) { String.format("Expected cursor [%s] but received cursor [%s]", cursorId, resCursorId) ); } - else + else { cursorId = resCursorId; + + clientCh = payloadCh.clientChannel(); + } } - Collection res = readEntries(in); + Collection res = readEntries(payloadCh); - hasNext = in.readBoolean(); + hasNext = payloadCh.in().readBoolean(); hasFirstPage = true; return res; } - /** Get page with failover. */ + /** Get page. */ private Collection queryPage() throws ClientException { - try { - return ch.service(pageQryOp, req -> req.writeLong(cursorId), this::readResult); - } - catch (ClientServerError ex) { - if (ex.getCode() != ClientStatus.RESOURCE_DOES_NOT_EXIST) - throw ex; - } - catch (ClientConnectionException ignored) { - } - - // Retry entire query to failover - hasFirstPage = false; + return ch.service(pageQryOp, req -> { + if (clientCh != req.clientChannel()) { + throw new ClientReconnectedException("Client was reconnected in the middle of results fetch, " + + "query results can be inconsistent, please retry the query."); + } - return ch.service(qryOp, qryWriter, this::readResult); + req.out().writeLong(cursorId); + }, this::readResult); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/PayloadInputChannel.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/PayloadInputChannel.java new file mode 100644 index 0000000000000..3c070096ccc29 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/PayloadInputChannel.java 
@@ -0,0 +1,53 @@ +/* + * Copyright 2019 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.client.thin; + +import org.apache.ignite.internal.binary.streams.BinaryHeapInputStream; +import org.apache.ignite.internal.binary.streams.BinaryInputStream; + +/** + * Thin client payload input channel. + */ +class PayloadInputChannel { + /** Client channel. */ + private final ClientChannel ch; + + /** Input stream. */ + private final BinaryInputStream in; + + /** + * Constructor. + */ + PayloadInputChannel(ClientChannel ch, byte[] payload) { + in = new BinaryHeapInputStream(payload); + this.ch = ch; + } + + /** + * Gets client channel. + */ + public ClientChannel clientChannel() { + return ch; + } + + /** + * Gets input stream. + */ + public BinaryInputStream in() { + return in; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/PayloadOutputChannel.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/PayloadOutputChannel.java new file mode 100644 index 0000000000000..c568f4a1e050a --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/PayloadOutputChannel.java @@ -0,0 +1,61 @@ +/* + * Copyright 2019 GridGain Systems, Inc. and Contributors. 
+ * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.client.thin; + +import org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream; +import org.apache.ignite.internal.binary.streams.BinaryOutputStream; + +/** + * Thin client payload output channel. + */ +class PayloadOutputChannel implements AutoCloseable { + /** Initial output stream buffer capacity. */ + private static final int INITIAL_BUFFER_CAPACITY = 1024; + + /** Client channel. */ + private final ClientChannel ch; + + /** Output stream. */ + private final BinaryOutputStream out; + + /** + * Constructor. + */ + PayloadOutputChannel(ClientChannel ch) { + out = new BinaryHeapOutputStream(INITIAL_BUFFER_CAPACITY); + this.ch = ch; + } + + /** + * Gets client channel. + */ + public ClientChannel clientChannel() { + return ch; + } + + /** + * Gets output stream. 
+ */ + public BinaryOutputStream out() { + return out; + } + + /** {@inheritDoc} */ + @Override public void close() { + out.close(); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ProtocolVersion.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ProtocolVersion.java index 2e84e36a30d74..aaf7eed556479 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ProtocolVersion.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ProtocolVersion.java @@ -19,6 +19,15 @@ /** Thin client protocol version. */ public final class ProtocolVersion implements Comparable { + /** Protocol version: 1.2.0. */ + public static final ProtocolVersion V1_2_0 = new ProtocolVersion((short)1, (short)2, (short)0); + + /** Protocol version: 1.1.0. */ + public static final ProtocolVersion V1_1_0 = new ProtocolVersion((short)1, (short)1, (short)0); + + /** Protocol version 1.0.0. */ + public static final ProtocolVersion V1_0_0 = new ProtocolVersion((short)1, (short)0, (short)0); + /** Major. */ private final short major; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/QueryPager.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/QueryPager.java index e9856892695b6..5b90b5c78aaee 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/QueryPager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/QueryPager.java @@ -36,4 +36,9 @@ interface QueryPager extends AutoCloseable { /** Indicates if initial query response was received. */ public boolean hasFirstPage(); + + /** + * Reset query pager. 
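Aside (not part of the patch): the V1_0_0/V1_1_0/V1_2_0 constants moved into ProtocolVersion above are used as feature gates via compareTo, e.g. the `ver.compareTo(V1_2_0) >= 0` check in ClientUtils before writing precision/scale. A standalone sketch of that pattern, with hypothetical names:

```java
// Hedged sketch of a version triple used as a feature gate,
// mirroring the ProtocolVersion compareTo pattern above.
// Names are illustrative, not the actual Ignite classes.
public class VersionGateSketch implements Comparable<VersionGateSketch> {
    private final short major, minor, patch;

    public VersionGateSketch(int major, int minor, int patch) {
        this.major = (short)major;
        this.minor = (short)minor;
        this.patch = (short)patch;
    }

    /** Lexicographic comparison: major, then minor, then patch. */
    @Override public int compareTo(VersionGateSketch o) {
        int diff = major - o.major;
        if (diff != 0) return diff;
        diff = minor - o.minor;
        if (diff != 0) return diff;
        return patch - o.patch;
    }

    /** True when the negotiated version is at least 1.2.0 (precision/scale fields). */
    public static boolean supportsPrecisionAndScale(VersionGateSketch negotiated) {
        return negotiated.compareTo(new VersionGateSketch(1, 2, 0)) >= 0;
    }
}
```

Centralizing the constants in ProtocolVersion (rather than in TcpClientChannel) lets serialization code and the channel share one source of truth for such gates.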
+ */ + public void reset(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ReliableChannel.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ReliableChannel.java index c23bf8491790e..0a27451f2d7e0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ReliableChannel.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/ReliableChannel.java @@ -1,12 +1,11 @@ /* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at + * Copyright 2019 GridGain Systems, Inc. and Contributors. * - * http://www.apache.org/licenses/LICENSE-2.0 + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -24,8 +23,6 @@ import java.util.LinkedList; import java.util.List; import java.util.Random; -import java.util.concurrent.locks.Lock; -import java.util.concurrent.locks.ReentrantLock; import java.util.function.Consumer; import java.util.function.Function; import java.util.stream.Collectors; @@ -35,9 +32,9 @@ import org.apache.ignite.client.ClientException; import org.apache.ignite.configuration.ClientConfiguration; import org.apache.ignite.configuration.ClientConnectorConfiguration; -import org.apache.ignite.internal.binary.streams.BinaryInputStream; -import org.apache.ignite.internal.binary.streams.BinaryOutputStream; import org.apache.ignite.internal.util.HostAndPortRange; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.U; /** * Adds failover abd thread-safety to {@link ClientChannel}. @@ -46,8 +43,8 @@ final class ReliableChannel implements AutoCloseable { /** Raw channel. */ private final Function> chFactory; - /** Service lock. */ - private final Lock svcLock = new ReentrantLock(); + /** Servers count. */ + private final int srvCnt; /** Primary server. */ private InetSocketAddress primary; @@ -56,11 +53,14 @@ final class ReliableChannel implements AutoCloseable { private final Deque backups = new LinkedList<>(); /** Channel. */ - private ClientChannel ch = null; + private ClientChannel ch; /** Ignite config. */ private final ClientConfiguration clientCfg; + /** Channel is closed. */ + private boolean closed; + /** * Constructor. 
*/ @@ -79,17 +79,36 @@ final class ReliableChannel implements AutoCloseable { List addrs = parseAddresses(clientCfg.getAddresses()); - primary = addrs.get(new Random().nextInt(addrs.size())); // we already verified there is at least one address + srvCnt = addrs.size(); - ch = chFactory.apply(new ClientChannelConfiguration(clientCfg).setAddress(primary)).get(); + primary = addrs.get(new Random().nextInt(addrs.size())); // we already verified there is at least one address - for (InetSocketAddress a : addrs) + for (InetSocketAddress a : addrs) { if (a != primary) - this.backups.add(a); + backups.add(a); + } + + ClientConnectionException lastEx = null; + + for (int i = 0; i < addrs.size(); i++) { + try { + ch = chFactory.apply(new ClientChannelConfiguration(clientCfg).setAddress(primary)).get(); + + return; + } catch (ClientConnectionException e) { + lastEx = e; + + rollAddress(); + } + } + + throw lastEx; } /** {@inheritDoc} */ - @Override public void close() throws Exception { + @Override public synchronized void close() throws Exception { + closed = true; + if (ch != null) { ch.close(); @@ -98,59 +117,40 @@ final class ReliableChannel implements AutoCloseable { } /** - * Send request and handle response. The method is synchronous and single-threaded. + * Send request and handle response. 
*/ public T service( ClientOperation op, - Consumer payloadWriter, - Function payloadReader + Consumer payloadWriter, + Function payloadReader ) throws ClientException { ClientConnectionException failure = null; - T res = null; - - int totalSrvs = 1 + backups.size(); - - svcLock.lock(); - try { - for (int i = 0; i < totalSrvs; i++) { - try { - if (failure != null) - changeServer(); - - if (ch == null) - ch = chFactory.apply(new ClientChannelConfiguration(clientCfg).setAddress(primary)).get(); + for (int i = 0; i < srvCnt; i++) { + ClientChannel ch = null; - long id = ch.send(op, payloadWriter); - - res = ch.receive(op, id, payloadReader); + try { + ch = channel(); - failure = null; + return ch.service(op, payloadWriter, payloadReader); + } + catch (ClientConnectionException e) { + if (failure == null) + failure = e; + else + failure.addSuppressed(e); - break; - } - catch (ClientConnectionException e) { - if (failure == null) - failure = e; - else - failure.addSuppressed(e); - } + changeServer(ch); } } - finally { - svcLock.unlock(); - } - - if (failure != null) - throw failure; - return res; + throw failure; } /** * Send request without payload and handle response. */ - public T service(ClientOperation op, Function payloadReader) + public T service(ClientOperation op, Function payloadReader) throws ClientException { return service(op, null, payloadReader); } @@ -158,21 +158,17 @@ public T service(ClientOperation op, Function payloadR /** * Send request and handle response without payload. */ - public void request(ClientOperation op, Consumer payloadWriter) throws ClientException { + public void request(ClientOperation op, Consumer payloadWriter) throws ClientException { service(op, payloadWriter, null); } - /** - * @return Server version. - */ - public ProtocolVersion serverVersion() { - return ch.serverVersion(); - } - /** * @return host:port_range address lines parsed as {@link InetSocketAddress}. 
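Aside (not part of the patch): the rewritten ReliableChannel.service loop above tries each configured server at most once, rotating the primary/backup addresses via rollAddress() and aggregating later failures as suppressed exceptions on the first one. The rotation and retry logic, sketched standalone with illustrative names:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

// Hedged sketch of the primary/backup rotation used by ReliableChannel above:
// on failure the primary moves to the back of the backup queue and the next
// backup is promoted. Names are illustrative, not the actual Ignite classes.
public class FailoverSketch {
    private String primary;
    private final Deque<String> backups = new ArrayDeque<>();

    public FailoverSketch(List<String> addrs) {
        primary = addrs.get(0);
        for (int i = 1; i < addrs.size(); i++)
            backups.addLast(addrs.get(i));
    }

    /** Rotate to the next address, as rollAddress() does. */
    public void rollAddress() {
        if (!backups.isEmpty()) {
            backups.addLast(primary);
            primary = backups.removeFirst();
        }
    }

    /** Try each server once; rethrow the first failure with the rest suppressed. */
    public <T> T service(Function<String, T> op) {
        RuntimeException failure = null;
        int srvCnt = 1 + backups.size();

        for (int i = 0; i < srvCnt; i++) {
            try {
                return op.apply(primary);
            }
            catch (RuntimeException e) {
                if (failure == null) failure = e;
                else failure.addSuppressed(e);
                rollAddress();
            }
        }
        throw failure;
    }

    public String primary() { return primary; }
}
```

Note the design change visible in the diff: the coarse svcLock around the whole request/response cycle is gone; synchronization moves into channel() and changeServer(), so independent requests can proceed concurrently on an established channel.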
*/ private static List parseAddresses(String[] addrs) throws ClientException { + if (F.isEmpty(addrs)) + throw new ClientException("Empty addresses"); + Collection ranges = new ArrayList<>(addrs.length); for (String a : addrs) { @@ -198,17 +194,39 @@ private static List parseAddresses(String[] addrs) throws Cli } /** */ - private void changeServer() { + private synchronized ClientChannel channel() { + if (closed) + throw new ClientException("Channel is closed"); + + if (ch == null) { + try { + ch = chFactory.apply(new ClientChannelConfiguration(clientCfg).setAddress(primary)).get(); + } + catch (ClientConnectionException e) { + rollAddress(); + + throw e; + } + } + + return ch; + } + + /** */ + private void rollAddress() { if (!backups.isEmpty()) { backups.addLast(primary); primary = backups.removeFirst(); + } + } - try { - ch.close(); - } - catch (Exception ignored) { - } + /** */ + private synchronized void changeServer(ClientChannel oldCh) { + if (oldCh == ch && ch != null) { + rollAddress(); + + U.closeQuiet(ch); ch = null; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientCache.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientCache.java index 2da3b8f402b11..f1bad208a8100 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientCache.java @@ -1,12 +1,11 @@ /* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at + * Copyright 2019 GridGain Systems, Inc. and Contributors. 
* - * http://www.apache.org/licenses/LICENSE-2.0 + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -40,7 +39,6 @@ import org.apache.ignite.internal.binary.streams.BinaryOutputStream; import static java.util.AbstractMap.SimpleEntry; -import static org.apache.ignite.internal.processors.platform.client.ClientConnectionContext.CURRENT_VER; /** * Implementation of {@link ClientCache} over TCP protocol. @@ -83,7 +81,7 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_GET, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); + writeObject(req, key); }, this::readObject ); @@ -101,8 +99,8 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_PUT, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); - serDes.writeObject(req, val); + writeObject(req, key); + writeObject(req, val); } ); } @@ -116,9 +114,9 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_CONTAINS_KEY, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); + writeObject(req, key); }, - BinaryInputStream::readBoolean + res -> res.in().readBoolean() ); } @@ -134,7 +132,7 @@ class TcpClientCache implements ClientCache { this::writeCacheInfo, res -> { try { - return serDes.cacheConfiguration(res, CURRENT_VER); + return serDes.cacheConfiguration(res.in(), res.clientChannel().serverVersion()); } catch (IOException e) { return null; @@ -149,9 +147,9 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_GET_SIZE, req -> { writeCacheInfo(req); - ClientUtils.collection(peekModes, req, (out, m) -> out.writeByte((byte)m.ordinal())); + 
ClientUtils.collection(peekModes, req.out(), (out, m) -> out.writeByte((byte)m.ordinal())); }, - res -> (int)res.readLong() + res -> (int)res.in().readLong() ); } @@ -167,10 +165,10 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_GET_ALL, req -> { writeCacheInfo(req); - ClientUtils.collection(keys, req, serDes::writeObject); + ClientUtils.collection(keys, req.out(), serDes::writeObject); }, res -> ClientUtils.collection( - res, + res.in(), in -> new SimpleEntry(readObject(in), readObject(in)) ) ).stream().collect(Collectors.toMap(SimpleEntry::getKey, SimpleEntry::getValue)); @@ -190,7 +188,7 @@ class TcpClientCache implements ClientCache { writeCacheInfo(req); ClientUtils.collection( map.entrySet(), - req, + req.out(), (out, e) -> { serDes.writeObject(out, e.getKey()); serDes.writeObject(out, e.getValue()); @@ -214,11 +212,11 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_REPLACE_IF_EQUALS, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); - serDes.writeObject(req, oldVal); - serDes.writeObject(req, newVal); + writeObject(req, key); + writeObject(req, oldVal); + writeObject(req, newVal); }, - BinaryInputStream::readBoolean + res -> res.in().readBoolean() ); } @@ -234,10 +232,10 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_REPLACE, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); - serDes.writeObject(req, val); + writeObject(req, key); + writeObject(req, val); }, - BinaryInputStream::readBoolean + res -> res.in().readBoolean() ); } @@ -250,9 +248,9 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_REMOVE_KEY, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); + writeObject(req, key); }, - BinaryInputStream::readBoolean + res -> res.in().readBoolean() ); } @@ -268,10 +266,10 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_REMOVE_IF_EQUALS, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); - 
serDes.writeObject(req, oldVal); + writeObject(req, key); + writeObject(req, oldVal); }, - BinaryInputStream::readBoolean + res -> res.in().readBoolean() ); } @@ -287,7 +285,7 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_REMOVE_KEYS, req -> { writeCacheInfo(req); - ClientUtils.collection(keys, req, serDes::writeObject); + ClientUtils.collection(keys, req.out(), serDes::writeObject); } ); } @@ -309,8 +307,8 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_GET_AND_PUT, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); - serDes.writeObject(req, val); + writeObject(req, key); + writeObject(req, val); }, this::readObject ); @@ -325,7 +323,7 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_GET_AND_REMOVE, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); + writeObject(req, key); }, this::readObject ); @@ -343,8 +341,8 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_GET_AND_REPLACE, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); - serDes.writeObject(req, val); + writeObject(req, key); + writeObject(req, val); }, this::readObject ); @@ -362,10 +360,10 @@ class TcpClientCache implements ClientCache { ClientOperation.CACHE_PUT_IF_ABSENT, req -> { writeCacheInfo(req); - serDes.writeObject(req, key); - serDes.writeObject(req, val); + writeObject(req, key); + writeObject(req, val); }, - BinaryInputStream::readBoolean + res -> res.in().readBoolean() ); } @@ -375,7 +373,6 @@ class TcpClientCache implements ClientCache { } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override public ClientCache withKeepBinary() { TcpClientCache binCache; @@ -426,9 +423,9 @@ else if (qry instanceof SqlFieldsQuery) if (qry == null) throw new NullPointerException("qry"); - Consumer qryWriter = out -> { - writeCacheInfo(out); - serDes.write(qry, out); + Consumer qryWriter = payloadCh -> { + writeCacheInfo(payloadCh); + serDes.write(qry, payloadCh.out()); }; 
return new ClientFieldsQueryCursor<>(new ClientFieldsQueryPager( @@ -443,8 +440,10 @@ else if (qry instanceof SqlFieldsQuery) /** Handle scan query. */ private QueryCursor> scanQuery(ScanQuery qry) { - Consumer qryWriter = out -> { - writeCacheInfo(out); + Consumer qryWriter = payloadCh -> { + writeCacheInfo(payloadCh); + + BinaryOutputStream out = payloadCh.out(); if (qry.getFilter() == null) out.writeByte(GridBinaryMarshaller.NULL); @@ -470,8 +469,11 @@ private QueryCursor> scanQuery(ScanQuery qry) { /** Handle SQL query. */ private QueryCursor> sqlQuery(SqlQuery qry) { - Consumer qryWriter = out -> { - writeCacheInfo(out); + Consumer qryWriter = payloadCh -> { + writeCacheInfo(payloadCh); + + BinaryOutputStream out = payloadCh.out(); + serDes.writeObject(out, qry.getType()); serDes.writeObject(out, qry.getSql()); ClientUtils.collection(qry.getArgs(), out, serDes::writeObject); @@ -493,7 +495,9 @@ private QueryCursor> sqlQuery(SqlQuery qry) { } /** Write cache ID and flags. */ - private void writeCacheInfo(BinaryOutputStream out) { + private void writeCacheInfo(PayloadOutputChannel payloadCh) { + BinaryOutputStream out = payloadCh.out(); + out.writeInt(cacheId); out.writeByte((byte)(keepBinary ? 
1 : 0)); } @@ -502,4 +506,14 @@ private void writeCacheInfo(BinaryOutputStream out) { private T readObject(BinaryInputStream in) { return serDes.readObject(in, keepBinary); } + + /** */ + private T readObject(PayloadInputChannel payloadCh) { + return readObject(payloadCh.in()); + } + + /** */ + private void writeObject(PayloadOutputChannel payloadCh, Object obj) { + serDes.writeObject(payloadCh.out(), obj); + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientChannel.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientChannel.java index d6097f2f097e6..66930e02bca6d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientChannel.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpClientChannel.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.client.thin; +import java.io.DataInput; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; @@ -34,7 +35,11 @@ import java.security.cert.X509Certificate; import java.util.Arrays; import java.util.Collection; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.locks.Lock; +import java.util.concurrent.locks.ReentrantLock; import java.util.function.BiFunction; import java.util.function.Consumer; import java.util.function.Function; @@ -49,42 +54,45 @@ import javax.net.ssl.TrustManager; import javax.net.ssl.TrustManagerFactory; import javax.net.ssl.X509TrustManager; +import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.client.ClientAuthenticationException; import org.apache.ignite.client.ClientAuthorizationException; import org.apache.ignite.client.ClientConnectionException; +import org.apache.ignite.client.ClientException; import org.apache.ignite.client.SslMode; import org.apache.ignite.client.SslProtocol; import 
org.apache.ignite.configuration.ClientConfiguration; +import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; import org.apache.ignite.internal.binary.BinaryRawWriterEx; import org.apache.ignite.internal.binary.BinaryReaderExImpl; import org.apache.ignite.internal.binary.BinaryWriterExImpl; import org.apache.ignite.internal.binary.streams.BinaryHeapInputStream; import org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream; import org.apache.ignite.internal.binary.streams.BinaryInputStream; -import org.apache.ignite.internal.binary.streams.BinaryOffheapOutputStream; import org.apache.ignite.internal.binary.streams.BinaryOutputStream; import org.apache.ignite.internal.processors.platform.client.ClientStatus; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.io.GridUnsafeDataInput; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.internal.client.thin.ProtocolVersion.V1_0_0; +import static org.apache.ignite.internal.client.thin.ProtocolVersion.V1_1_0; +import static org.apache.ignite.internal.client.thin.ProtocolVersion.V1_2_0; /** * Implements {@link ClientChannel} over TCP. */ class TcpClientChannel implements ClientChannel { - /** Protocol version: 1.2.0. */ - private static final ProtocolVersion V1_2_0 = new ProtocolVersion((short)1, (short)2, (short)0); - - /** Protocol version: 1.1.0. */ - private static final ProtocolVersion V1_1_0 = new ProtocolVersion((short)1, (short)1, (short)0); - - /** Protocol version 1 0 0. */ - private static final ProtocolVersion V1_0_0 = new ProtocolVersion((short)1, (short)0, (short)0); - /** Supported protocol versions. */ private static final Collection supportedVers = Arrays.asList( - V1_2_0, + V1_2_0, V1_1_0, V1_0_0 ); + /** Timeout before next attempt to lock channel and process next response by current thread. */ + private static final long PAYLOAD_WAIT_TIMEOUT = 10L; + /** Protocol version agreed with the server. 
*/ private ProtocolVersion ver = V1_2_0; @@ -97,9 +105,24 @@ class TcpClientChannel implements ClientChannel { /** Input stream. */ private final InputStream in; + /** Data input. */ + private final DataInput dataInput; + + /** Total bytes read by channel. */ + private long totalBytesRead; + /** Request id. */ private final AtomicLong reqId = new AtomicLong(1); + /** Send lock. */ + private final Lock sndLock = new ReentrantLock(); + + /** Receive lock. */ + private final Lock rcvLock = new ReentrantLock(); + + /** Pending requests. */ + private final Map pendingReqs = new ConcurrentHashMap<>(); + /** Constructor. */ TcpClientChannel(ClientChannelConfiguration cfg) throws ClientConnectionException, ClientAuthenticationException { validateConfiguration(cfg); @@ -109,9 +132,13 @@ class TcpClientChannel implements ClientChannel { out = sock.getOutputStream(); in = sock.getInputStream(); + + GridUnsafeDataInput dis = new GridUnsafeDataInput(); + dis.inputStream(in); + dataInput = dis; } catch (IOException e) { - throw new ClientConnectionException(e); + throw handleIOError("addr=" + cfg.getAddress(), e); } handshake(cfg.getUserName(), cfg.getUserPassword()); @@ -122,71 +149,157 @@ class TcpClientChannel implements ClientChannel { in.close(); out.close(); sock.close(); + + for (ClientRequestFuture pendingReq : pendingReqs.values()) + pendingReq.onDone(new ClientConnectionException("Channel is closed")); } /** {@inheritDoc} */ - @Override public long send(ClientOperation op, Consumer payloadWriter) + @Override public T service(ClientOperation op, Consumer payloadWriter, + Function payloadReader) throws ClientConnectionException, ClientAuthorizationException { + long id = send(op, payloadWriter); + + return receive(id, payloadReader); + } + + /** + * @param op Operation. + * @param payloadWriter Payload writer to stream or {@code null} if request has no payload. + * @return Request ID. 
+ */ + private long send(ClientOperation op, Consumer payloadWriter) throws ClientConnectionException { long id = reqId.getAndIncrement(); - try (BinaryOutputStream req = new BinaryHeapOutputStream(1024)) { - req.writeInt(0); // reserve an integer for the request size + // Only one thread at a time can have access to write to the channel. + sndLock.lock(); + + try (PayloadOutputChannel payloadCh = new PayloadOutputChannel(this)) { + pendingReqs.put(id, new ClientRequestFuture()); + + BinaryOutputStream req = payloadCh.out(); + + req.writeInt(0); // Reserve an integer for the request size. req.writeShort(op.code()); req.writeLong(id); if (payloadWriter != null) - payloadWriter.accept(req); + payloadWriter.accept(payloadCh); - req.writeInt(0, req.position() - 4); // actual size + req.writeInt(0, req.position() - 4); // Actual size. write(req.array(), req.position()); } + catch (Throwable t) { + pendingReqs.remove(id); + + throw t; + } + finally { + sndLock.unlock(); + } return id; } - /** {@inheritDoc} */ - @Override public T receive(ClientOperation op, long reqId, Function payloadReader) + /** + * @param reqId ID of the request to receive the response for. + * @param payloadReader Payload reader from stream. + * @return Received operation payload or {@code null} if response has no payload. + */ + private T receive(long reqId, Function payloadReader) throws ClientConnectionException, ClientAuthorizationException { + ClientRequestFuture pendingReq = pendingReqs.get(reqId); + + assert pendingReq != null : "Pending request future not found for request " + reqId; + + // Each thread creates a future on request sent and returns a response when this future is completed. + // Only one thread at a time can have access to read from the channel. This thread reads the next available + // response and complete corresponding future. All other concurrent threads wait for their own futures with + // a timeout and periodically try to lock the channel to process the next response. 
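Aside (not part of the patch): the comment above describes the new multiplexed receive scheme in TcpClientChannel, where each request registers a future keyed by request ID, one thread at a time becomes the reader via tryLock, and waiters poll their future with a short timeout. A simplified, self-contained sketch of that coordination (the "wire" is simulated in memory; real responses carry byte[] payloads and error statuses):

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch of the pendingReqs/rcvLock protocol described above.
// Names are illustrative, not the actual Ignite classes.
public class MultiplexSketch {
    private final Map<Long, CompletableFuture<Long>> pendingReqs = new ConcurrentHashMap<>();
    private final ReentrantLock rcvLock = new ReentrantLock();
    // Simulated wire: (request ID, payload) pairs in server response order.
    private final Queue<long[]> wire = new ConcurrentLinkedQueue<>();

    /** Register a future for the request and put its response on the fake wire. */
    public void send(long id, long payload) {
        pendingReqs.put(id, new CompletableFuture<>());
        wire.add(new long[] {id, payload});
    }

    /** Wait for this request's response, acting as the reader whenever the lock is free. */
    public long receive(long id) throws Exception {
        CompletableFuture<Long> fut = pendingReqs.get(id);
        try {
            while (true) {
                if (rcvLock.tryLock()) {
                    try {
                        if (!fut.isDone())
                            processNextResponse();
                    }
                    finally {
                        rcvLock.unlock();
                    }
                }
                try {
                    return fut.get(10, TimeUnit.MILLISECONDS); // PAYLOAD_WAIT_TIMEOUT analogue
                }
                catch (TimeoutException ignore) {
                    // Not our response yet; loop and retry the lock.
                }
            }
        }
        finally {
            pendingReqs.remove(id);
        }
    }

    /** Read one response and complete whichever pending future it belongs to. */
    private void processNextResponse() {
        long[] res = wire.poll();
        if (res != null)
            pendingReqs.get(res[0]).complete(res[1]);
    }
}
```

The point of the pattern is that responses may arrive in any order relative to callers: a thread reading someone else's response completes that caller's future and loops until its own is done.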
+ try { + while (true) { + if (rcvLock.tryLock()) { + try { + if (!pendingReq.isDone()) + processNextResponse(); + } + finally { + rcvLock.unlock(); + } + } + + try { + byte[] payload = pendingReq.get(PAYLOAD_WAIT_TIMEOUT); + + if (payload == null || payloadReader == null) + return null; - final int MIN_RES_SIZE = 8 + 4; // minimal response size: long (8 bytes) ID + int (4 bytes) status + return payloadReader.apply(new PayloadInputChannel(this, payload)); + } + catch (IgniteFutureTimeoutCheckedException ignore) { + // Timed out: retry on the next cycle. + } + } + } + catch (IgniteCheckedException e) { + if (e.getCause() instanceof ClientError) + throw (ClientError)e.getCause(); + + if (e.getCause() instanceof ClientException) + throw (ClientException)e.getCause(); + + throw new ClientException(e.getMessage(), e); + } + finally { + pendingReqs.remove(reqId); + } + } - int resSize = new BinaryHeapInputStream(read(4)).readInt(); + /** + * Process the next response from the input stream and complete the corresponding future.
+ */ + private void processNextResponse() throws ClientProtocolError, ClientConnectionException { + int resSize = readInt(); - if (resSize < 0) + if (resSize <= 0) throw new ClientProtocolError(String.format("Invalid response size: %s", resSize)); - if (resSize == 0) - return null; + long bytesReadOnStartReq = totalBytesRead; + + long resId = readLong(); + + ClientRequestFuture pendingReq = pendingReqs.get(resId); - BinaryInputStream resIn = new BinaryHeapInputStream(read(MIN_RES_SIZE)); + if (pendingReq == null) + throw new ClientProtocolError(String.format("Unexpected response ID [%s]", resId)); - long resId = resIn.readLong(); + int status; - if (resId != reqId) - throw new ClientProtocolError(String.format("Unexpected response ID [%s], [%s] was expected", resId, reqId)); + BinaryInputStream resIn; - int status = resIn.readInt(); + status = readInt(); - if (status != 0) { - resIn = new BinaryHeapInputStream(read(resSize - MIN_RES_SIZE)); + int hdrSize = (int)(totalBytesRead - bytesReadOnStartReq); + + if (status == 0) { + if (resSize <= hdrSize) + pendingReq.onDone(); + else + pendingReq.onDone(read(resSize - hdrSize)); + } + else { + resIn = new BinaryHeapInputStream(read(resSize - hdrSize)); String err = new BinaryReaderExImpl(null, resIn, null, true).readString(); switch (status) { case ClientStatus.SECURITY_VIOLATION: - throw new ClientAuthorizationException(); + pendingReq.onDone(new ClientAuthorizationException()); + + break; // Prevent fall-through into the generic server error below. default: - throw new ClientServerError(err, status, reqId); + pendingReq.onDone(new ClientServerError(err, status, resId)); } } - - if (resSize <= MIN_RES_SIZE || payloadReader == null) - return null; - - BinaryInputStream payload = new BinaryHeapInputStream(read(resSize - MIN_RES_SIZE)); - - return payloadReader.apply(payload); } /** {@inheritDoc} */ @@ -249,7 +362,7 @@ private void handshake(String user, String pwd) /** Send handshake request.
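The receive scheme above (per-request futures plus a reader elected with `tryLock`) can be sketched outside of Ignite with plain JDK types. This is an illustrative toy model only: class and method names are invented here, and a `BlockingQueue` of request ids stands in for the socket.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/** Toy model of the pattern: futures keyed by request id, one channel reader at a time. */
public class SharedChannelDemo {
    /** Only one thread at a time reads from the "channel". */
    private final Lock rcvLock = new ReentrantLock();

    /** Pending requests, analogous to pendingReqs above. */
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    /** Stand-in for the socket: responses are just request ids. */
    private final BlockingQueue<Long> wire = new LinkedBlockingQueue<>();

    /** Registers a future for a request id (the "send" half). */
    public void register(long reqId) {
        pending.put(reqId, new CompletableFuture<>());
    }

    /** Simulates the server answering a request. */
    public void serverRespond(long reqId) {
        wire.add(reqId);
    }

    /** The "receive" half: whoever wins rcvLock drains responses on behalf of everyone. */
    public String receive(long reqId) throws InterruptedException {
        CompletableFuture<String> fut = pending.get(reqId);

        try {
            while (true) {
                if (rcvLock.tryLock()) {
                    try {
                        if (!fut.isDone())
                            processNextResponse();
                    }
                    finally {
                        rcvLock.unlock();
                    }
                }

                try {
                    // Wait briefly, then loop: another thread may hold the read lock.
                    return fut.get(10, TimeUnit.MILLISECONDS);
                }
                catch (TimeoutException ignore) {
                    // Timed out: retry on the next cycle.
                }
                catch (ExecutionException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        finally {
            pending.remove(reqId);
        }
    }

    /** Reads one response off the wire and completes the matching future, whoever owns it. */
    private void processNextResponse() throws InterruptedException {
        long resId = wire.take();

        CompletableFuture<String> fut = pending.get(resId);

        if (fut != null)
            fut.complete("response-" + resId);
    }
}
```

Note that, as in the diff above, responses may arrive out of request order: the thread that holds the lock completes whichever future matches the next response, not necessarily its own.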
*/ private void handshakeReq(String user, String pwd) throws ClientConnectionException { - try (BinaryOutputStream req = new BinaryOffheapOutputStream(32)) { + try (BinaryOutputStream req = new BinaryHeapOutputStream(32)) { req.writeInt(0); // reserve an integer for the request size req.writeByte((byte)1); // handshake code, always 1 req.writeShort(ver.major()); @@ -271,7 +384,7 @@ private void handshakeReq(String user, String pwd) throws ClientConnectionExcept /** Receive and handle handshake response. */ private void handshakeRes(String user, String pwd) throws ClientConnectionException, ClientAuthenticationException { - int resSize = new BinaryHeapInputStream(read(4)).readInt(); + int resSize = readInt(); if (resSize <= 0) throw new ClientProtocolError(String.format("Invalid handshake response size: %s", resSize)); @@ -310,7 +423,7 @@ else if (!supportedVers.contains(srvVer) || } } catch (IOException e) { - throw new ClientConnectionException(e); + throw handleIOError(e); } } } @@ -326,18 +439,68 @@ private byte[] read(int len) throws ClientConnectionException { bytesNum = in.read(bytes, readBytesNum, len - readBytesNum); } catch (IOException e) { - throw new ClientConnectionException(e); + throw handleIOError(e); } if (bytesNum < 0) - throw new ClientConnectionException(); + throw handleIOError(null); readBytesNum += bytesNum; } + totalBytesRead += readBytesNum; + return bytes; } + /** + * Read long value from input stream. + */ + private long readLong() { + try { + long val = dataInput.readLong(); + + totalBytesRead += Long.BYTES; + + return val; + } + catch (IOException e) { + throw handleIOError(e); + } + } + + /** + * Read int value from input stream. + */ + private int readInt() { + try { + int val = dataInput.readInt(); + + totalBytesRead += Integer.BYTES; + + return val; + } + catch (IOException e) { + throw handleIOError(e); + } + } + + /** + * Read short value from input stream. 
+ */ + private short readShort() { + try { + short val = dataInput.readShort(); + + totalBytesRead += Short.BYTES; + + return val; + } + catch (IOException e) { + throw handleIOError(e); + } + } + /** Write bytes to the output stream. */ private void write(byte[] bytes, int len) throws ClientConnectionException { try { @@ -345,10 +508,31 @@ private void write(byte[] bytes, int len) throws ClientConnectionException { out.flush(); } catch (IOException e) { - throw new ClientConnectionException(e); + throw handleIOError(e); } } + /** + * @param ex IO exception (cause). + */ + private ClientException handleIOError(@Nullable IOException ex) { + return handleIOError("sock=" + sock, ex); + } + + /** + * @param chInfo Additional channel info + * @param ex IO exception (cause). + */ + private ClientException handleIOError(String chInfo, @Nullable IOException ex) { + return new ClientConnectionException("Ignite cluster is unavailable [" + chInfo + ']', ex); + } + + /** + * + */ + private static class ClientRequestFuture extends GridFutureAdapter { + } + /** SSL Socket Factory. */ private static class ClientSslSocketFactory { /** Trust manager ignoring all certificate checks. */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpIgniteClient.java b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpIgniteClient.java index 50408168d3118..fafb15ffde26c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpIgniteClient.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/client/thin/TcpIgniteClient.java @@ -1,12 +1,11 @@ /* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. 
- * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at + * Copyright 2019 GridGain Systems, Inc. and Contributors. * - * http://www.apache.org/licenses/LICENSE-2.0 + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -47,7 +46,6 @@ import org.apache.ignite.internal.binary.BinaryWriterExImpl; import org.apache.ignite.internal.binary.streams.BinaryInputStream; import org.apache.ignite.internal.binary.streams.BinaryOutputStream; -import org.apache.ignite.internal.processors.odbc.ClientListenerProtocolVersion; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.marshaller.MarshallerContext; @@ -103,7 +101,7 @@ private TcpIgniteClient(ClientConfiguration cfg) throws ClientException { @Override public ClientCache getOrCreateCache(String name) throws ClientException { ensureCacheName(name); - ch.request(ClientOperation.CACHE_GET_OR_CREATE_WITH_NAME, req -> writeString(name, req)); + ch.request(ClientOperation.CACHE_GET_OR_CREATE_WITH_NAME, req -> writeString(name, req.out())); return new TcpClientCache<>(name, ch, marsh); } @@ -114,7 +112,7 @@ private TcpIgniteClient(ClientConfiguration cfg) throws ClientException { ensureCacheConfiguration(cfg); ch.request(ClientOperation.CACHE_GET_OR_CREATE_WITH_CONFIGURATION, - req -> serDes.cacheConfiguration(cfg, req, toClientVersion(ch.serverVersion()))); + req -> serDes.cacheConfiguration(cfg, req.out(), 
req.clientChannel().serverVersion())); return new TcpClientCache<>(cfg.getName(), ch, marsh); } @@ -128,21 +126,21 @@ private TcpIgniteClient(ClientConfiguration cfg) throws ClientException { /** {@inheritDoc} */ @Override public Collection cacheNames() throws ClientException { - return ch.service(ClientOperation.CACHE_GET_NAMES, res -> Arrays.asList(BinaryUtils.doReadStringArray(res))); + return ch.service(ClientOperation.CACHE_GET_NAMES, res -> Arrays.asList(BinaryUtils.doReadStringArray(res.in()))); } /** {@inheritDoc} */ @Override public void destroyCache(String name) throws ClientException { ensureCacheName(name); - ch.request(ClientOperation.CACHE_DESTROY, req -> req.writeInt(ClientUtils.cacheId(name))); + ch.request(ClientOperation.CACHE_DESTROY, req -> req.out().writeInt(ClientUtils.cacheId(name))); } /** {@inheritDoc} */ @Override public ClientCache createCache(String name) throws ClientException { ensureCacheName(name); - ch.request(ClientOperation.CACHE_CREATE_WITH_NAME, req -> writeString(name, req)); + ch.request(ClientOperation.CACHE_CREATE_WITH_NAME, req -> writeString(name, req.out())); return new TcpClientCache<>(name, ch, marsh); } @@ -152,21 +150,11 @@ private TcpIgniteClient(ClientConfiguration cfg) throws ClientException { ensureCacheConfiguration(cfg); ch.request(ClientOperation.CACHE_CREATE_WITH_CONFIGURATION, - req -> serDes.cacheConfiguration(cfg, req, toClientVersion(ch.serverVersion()))); + req -> serDes.cacheConfiguration(cfg, req.out(), req.clientChannel().serverVersion())); return new TcpClientCache<>(cfg.getName(), ch, marsh); } - /** - * Converts {@link ProtocolVersion} to {@link ClientListenerProtocolVersion}. - * - * @param srvVer Server protocol version. - * @return Client protocol version. 
- */ - private ClientListenerProtocolVersion toClientVersion(ProtocolVersion srvVer) { - return ClientListenerProtocolVersion.create(srvVer.major(), srvVer.minor(), srvVer.patch()); - } - /** {@inheritDoc} */ @Override public IgniteBinary binary() { return binary; @@ -177,7 +165,9 @@ private ClientListenerProtocolVersion toClientVersion(ProtocolVersion srvVer) { if (qry == null) throw new NullPointerException("qry"); - Consumer qryWriter = out -> { + Consumer qryWriter = payloadCh -> { + BinaryOutputStream out = payloadCh.out(); + out.writeInt(0); // no cache ID out.writeByte((byte)1); // keep binary serDes.write(qry, out); @@ -195,11 +185,9 @@ private ClientListenerProtocolVersion toClientVersion(ProtocolVersion srvVer) { /** * Initializes new instance of {@link IgniteClient}. - *

- * Server connection will be lazily initialized when first required. * * @param cfg Thin client configuration. - * @return Successfully opened thin client connection. + * @return Client with successfully opened thin client connection. */ public static IgniteClient start(ClientConfiguration cfg) throws ClientException { return new TcpIgniteClient(cfg); @@ -251,7 +239,7 @@ private class ClientBinaryMetadataHandler implements BinaryMetadataHandler { try { ch.request( ClientOperation.PUT_BINARY_TYPE, - req -> serDes.binaryMetadata(((BinaryTypeImpl)meta).metadata(), req) + req -> serDes.binaryMetadata(((BinaryTypeImpl)meta).metadata(), req.out()) ); } catch (ClientException e) { @@ -287,10 +275,10 @@ private class ClientBinaryMetadataHandler implements BinaryMetadataHandler { try { meta = ch.service( ClientOperation.GET_BINARY_TYPE, - req -> req.writeInt(typeId), + req -> req.out().writeInt(typeId), res -> { try { - return res.readBoolean() ? serDes.binaryMetadata(res) : null; + return res.in().readBoolean() ? 
serDes.binaryMetadata(res.in()) : null; } catch (IOException e) { throw new BinaryObjectException(e); @@ -327,8 +315,12 @@ private class ClientMarshallerContext implements MarshallerContext { private Map cache = new ConcurrentHashMap<>(); /** {@inheritDoc} */ - @Override public boolean registerClassName(byte platformId, int typeId, String clsName) - throws IgniteCheckedException { + public boolean registerClassName( + byte platformId, + int typeId, + String clsName, + boolean failIfUnregistered + ) throws IgniteCheckedException { if (platformId != MarshallerPlatformIds.JAVA_ID) throw new IllegalArgumentException("platformId"); @@ -339,12 +331,14 @@ private class ClientMarshallerContext implements MarshallerContext { try { res = ch.service( ClientOperation.REGISTER_BINARY_TYPE_NAME, - req -> { - req.writeByte(platformId); - req.writeInt(typeId); - writeString(clsName, req); + payloadCh -> { + BinaryOutputStream out = payloadCh.out(); + + out.writeByte(platformId); + out.writeInt(typeId); + writeString(clsName, out); }, - BinaryInputStream::readBoolean + payloadCh -> payloadCh.in().readBoolean() ); } catch (ClientException e) { @@ -358,6 +352,12 @@ private class ClientMarshallerContext implements MarshallerContext { return res; } + /** {@inheritDoc} */ + @Deprecated + @Override public boolean registerClassName(byte platformId, int typeId, String clsName) throws IgniteCheckedException { + return registerClassName(platformId, typeId, clsName, false); + } + /** {@inheritDoc} */ @Override public boolean registerClassNameLocally(byte platformId, int typeId, String clsName) { if (platformId != MarshallerPlatformIds.JAVA_ID) @@ -389,10 +389,12 @@ private class ClientMarshallerContext implements MarshallerContext { clsName = ch.service( ClientOperation.GET_BINARY_TYPE_NAME, req -> { - req.writeByte(platformId); - req.writeInt(typeId); + BinaryOutputStream out = req.out(); + + out.writeByte(platformId); + out.writeInt(typeId); }, - TcpIgniteClient.this::readString + res -> 
readString(res.in()) ); } catch (ClientException e) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/commandline/Arguments.java b/modules/core/src/main/java/org/apache/ignite/internal/commandline/Arguments.java index 5b8a0dcd6c005..abd27bdb1b315 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/commandline/Arguments.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/commandline/Arguments.java @@ -71,12 +71,39 @@ public class Arguments { */ private String walArgs; - /** Ping timeout for grid client. See {@link GridClientConfiguration#pingTimeout}.*/ + /** Ping timeout for grid client. See {@link GridClientConfiguration#getPingTimeout()}. */ private long pingTimeout; - /** Ping interval for grid client. See {@link GridClientConfiguration#pingInterval}.*/ + /** Ping interval for grid client. See {@link GridClientConfiguration#getPingInterval()}. */ private long pingInterval; + /** SSL Protocol. */ + private String sslProtocol; + + /** SSL Cipher suites. */ + private String sslCipherSuites; + + /** SSL Key Algorithm. */ + private String sslKeyAlgorithm; + + /** Keystore. */ + private String sslKeyStorePath; + + /** Keystore Type. */ + private String sslKeyStoreType; + + /** Keystore Password. */ + private char[] sslKeyStorePassword; + + /** Truststore. */ + private String sslTrustStorePath; + + /** Truststore Type. */ + private String sslTrustStoreType; + + /** Truststore Password. */ + private char[] sslTrustStorePassword; + /** * @param cmd Command. * @param host Host. @@ -89,27 +116,57 @@ public class Arguments { * @param cacheArgs --cache subcommand arguments. * @param walAct WAL action. * @param walArgs WAL args. - * @param pingTimeout Ping timeout. See {@link GridClientConfiguration#pingTimeout}. - * @param pingInterval Ping interval. See {@link GridClientConfiguration#pingInterval}. + * @param pingTimeout Ping timeout. See {@link GridClientConfiguration#getPingTimeout()}. + * @param pingInterval Ping interval. 
See {@link GridClientConfiguration#getPingInterval()}. * @param autoConfirmation Auto confirmation flag. + * @param sslProtocol SSL Protocol. + * @param sslCipherSuites SSL cipher suites. + * @param sslKeyAlgorithm SSL Key Algorithm. + * @param sslKeyStorePath Keystore. + * @param sslKeyStorePassword Keystore Password. + * @param sslKeyStoreType Keystore Type. + * @param sslTrustStorePath Truststore. + * @param sslTrustStorePassword Truststore Password. + * @param sslTrustStoreType Truststore Type. */ public Arguments(Command cmd, String host, String port, String user, String pwd, String baselineAct, - String baselineArgs, VisorTxTaskArg txArg, CacheArguments cacheArgs, String walAct, String walArgs, - Long pingTimeout, Long pingInterval, boolean autoConfirmation) { + String baselineArgs, VisorTxTaskArg txArg, CacheArguments cacheArgs, String walAct, String walArgs, + Long pingTimeout, Long pingInterval, boolean autoConfirmation, + String sslProtocol, String sslCipherSuites, String sslKeyAlgorithm, + String sslKeyStorePath, char[] sslKeyStorePassword, String sslKeyStoreType, + String sslTrustStorePath, char[] sslTrustStorePassword, String sslTrustStoreType + ) { this.cmd = cmd; this.host = host; this.port = port; this.user = user; this.pwd = pwd; + this.baselineAct = baselineAct; this.baselineArgs = baselineArgs; + this.txArg = txArg; this.cacheArgs = cacheArgs; + this.walAct = walAct; this.walArgs = walArgs; + this.pingTimeout = pingTimeout; this.pingInterval = pingInterval; + this.autoConfirmation = autoConfirmation; + + this.sslProtocol = sslProtocol; + this.sslCipherSuites = sslCipherSuites; + + this.sslKeyAlgorithm = sslKeyAlgorithm; + this.sslKeyStorePath = sslKeyStorePath; + this.sslKeyStoreType = sslKeyStoreType; + this.sslKeyStorePassword = sslKeyStorePassword; + + this.sslTrustStorePath = sslTrustStorePath; + this.sslTrustStoreType = sslTrustStoreType; + this.sslTrustStorePassword = sslTrustStorePassword; } /** @@ -136,17 +193,31 @@ public String port() { 
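The keystore/truststore settings collected above ultimately have to become an SSL context on the client side. A minimal sketch with the standard javax.net.ssl API follows; the class and method names here are illustrative only, not what control.sh actually calls (in the diff that wiring goes through GridSslBasicContextFactory / SslContextFactory):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslContextSketch {
    /** Builds an SSLContext from keystore/truststore settings like the ones held by Arguments. */
    public static SSLContext build(String ksPath, char[] ksPwd, String ksType,
        String tsPath, char[] tsPwd, String tsType, String proto) throws Exception {
        // Key material presented by the client.
        KeyStore ks = KeyStore.getInstance(ksType != null ? ksType : KeyStore.getDefaultType());

        try (InputStream in = new FileInputStream(ksPath)) {
            ks.load(in, ksPwd);
        }

        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, ksPwd);

        // Certificates the client trusts.
        KeyStore ts = KeyStore.getInstance(tsType != null ? tsType : KeyStore.getDefaultType());

        try (InputStream in = new FileInputStream(tsPath)) {
            ts.load(in, tsPwd);
        }

        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);

        SSLContext ctx = SSLContext.getInstance(proto != null ? proto : "TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        return ctx;
    }
}
```

Passing null for the protocol or store types falls back to JSSE defaults, which mirrors how optional command-line arguments are usually handled.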
/** * @return user name */ - public String user() { + public String getUserName() { return user; } + /** + * @param user New user name. + */ + public void setUserName(String user) { + this.user = user; + } + /** * @return password */ - public String password() { + public String getPassword() { return pwd; } + /** + * @param pwd New password. + */ + public void setPassword(String pwd) { + this.pwd = pwd; + } + /** * @return baseline action */ @@ -190,7 +261,7 @@ public String walArguments() { } /** - * See {@link GridClientConfiguration#pingTimeout}. + * See {@link GridClientConfiguration#getPingTimeout()}. * * @return Ping timeout. */ @@ -199,7 +270,7 @@ public long pingTimeout() { } /** - * See {@link GridClientConfiguration#pingInterval}. + * See {@link GridClientConfiguration#getPingInterval()}. * * @return Ping interval. */ @@ -213,4 +284,67 @@ public long pingInterval() { } /** */ public boolean autoConfirmation() { return autoConfirmation; } + + /** + * @return SSL protocol + */ + public String sslProtocol() { + return sslProtocol; + } + + /** + * @return SSL cipher suites.
+ */ + public String getSslCipherSuites() { + return sslCipherSuites; + } + + /** + * @return SSL Key Algorithm + */ + public String sslKeyAlgorithm() { + return sslKeyAlgorithm; + } + + /** + * @return Keystore + */ + public String sslKeyStorePath() { + return sslKeyStorePath; + } + + /** + * @return Keystore type + */ + public String sslKeyStoreType() { + return sslKeyStoreType; + } + + /** + * @return Keystore password + */ + public char[] sslKeyStorePassword() { + return sslKeyStorePassword; + } + + /** + * @return Truststore + */ + public String sslTrustStorePath() { + return sslTrustStorePath; + } + + /** + * @return Truststore type + */ + public String sslTrustStoreType() { + return sslTrustStoreType; + } + + /** + * @return Truststore password + */ + public char[] sslTrustStorePassword() { + return sslTrustStorePassword; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/commandline/Command.java b/modules/core/src/main/java/org/apache/ignite/internal/commandline/Command.java index c64e488db4ffd..1f7c0a34c57b0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/commandline/Command.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/commandline/Command.java @@ -71,4 +71,9 @@ public static Command of(String text) { public String text() { return text; } + + /** {@inheritDoc} */ + @Override public String toString() { + return text; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/commandline/CommandHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/commandline/CommandHandler.java index c484018e0e568..8db818fb9daea 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/commandline/CommandHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/commandline/CommandHandler.java @@ -17,21 +17,26 @@ package org.apache.ignite.internal.commandline; +import java.io.Console; +import java.io.IOException; +import java.net.InetAddress; import 
java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; +import java.util.Comparator; import java.util.HashSet; import java.util.Iterator; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Scanner; import java.util.Set; import java.util.UUID; -import java.util.logging.Logger; import java.util.regex.Pattern; import java.util.regex.PatternSyntaxException; import java.util.stream.Collectors; +import java.util.stream.Stream; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.compute.ComputeTask; @@ -49,6 +54,7 @@ import org.apache.ignite.internal.client.GridClientNode; import org.apache.ignite.internal.client.GridServerUnreachableException; import org.apache.ignite.internal.client.impl.connection.GridClientConnectionResetException; +import org.apache.ignite.internal.client.ssl.GridSslBasicContextFactory; import org.apache.ignite.internal.commandline.cache.CacheArguments; import org.apache.ignite.internal.commandline.cache.CacheCommand; import org.apache.ignite.internal.commandline.cache.distribution.CacheDistributionTask; @@ -63,8 +69,10 @@ import org.apache.ignite.internal.processors.cache.verify.PartitionHashRecord; import org.apache.ignite.internal.processors.cache.verify.PartitionKey; import org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2; +import org.apache.ignite.internal.util.IgniteUtils; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorTaskArgument; import org.apache.ignite.internal.visor.baseline.VisorBaselineNode; @@ -72,11 +80,20 @@ import org.apache.ignite.internal.visor.baseline.VisorBaselineTask; import 
org.apache.ignite.internal.visor.baseline.VisorBaselineTaskArg; import org.apache.ignite.internal.visor.baseline.VisorBaselineTaskResult; +import org.apache.ignite.internal.visor.cache.VisorCacheAffinityConfiguration; +import org.apache.ignite.internal.visor.cache.VisorCacheConfiguration; +import org.apache.ignite.internal.visor.cache.VisorCacheConfigurationCollectorTask; +import org.apache.ignite.internal.visor.cache.VisorCacheConfigurationCollectorTaskArg; +import org.apache.ignite.internal.visor.cache.VisorCacheEvictionConfiguration; +import org.apache.ignite.internal.visor.cache.VisorCacheNearConfiguration; +import org.apache.ignite.internal.visor.cache.VisorCacheRebalanceConfiguration; +import org.apache.ignite.internal.visor.cache.VisorCacheStoreConfiguration; import org.apache.ignite.internal.visor.misc.VisorClusterNode; import org.apache.ignite.internal.visor.misc.VisorWalTask; import org.apache.ignite.internal.visor.misc.VisorWalTaskArg; import org.apache.ignite.internal.visor.misc.VisorWalTaskOperation; import org.apache.ignite.internal.visor.misc.VisorWalTaskResult; +import org.apache.ignite.internal.visor.query.VisorQueryConfiguration; import org.apache.ignite.internal.visor.tx.VisorTxInfo; import org.apache.ignite.internal.visor.tx.VisorTxOperation; import org.apache.ignite.internal.visor.tx.VisorTxProjection; @@ -84,6 +101,8 @@ import org.apache.ignite.internal.visor.tx.VisorTxTask; import org.apache.ignite.internal.visor.tx.VisorTxTaskArg; import org.apache.ignite.internal.visor.tx.VisorTxTaskResult; +import org.apache.ignite.internal.visor.verify.CacheFilterEnum; +import org.apache.ignite.internal.visor.verify.IndexIntegrityCheckIssue; import org.apache.ignite.internal.visor.verify.IndexValidationIssue; import org.apache.ignite.internal.visor.verify.ValidateIndexesPartitionResult; import org.apache.ignite.internal.visor.verify.VisorContentionTask; @@ -98,12 +117,16 @@ import org.apache.ignite.internal.visor.verify.VisorValidateIndexesJobResult; 
import org.apache.ignite.internal.visor.verify.VisorValidateIndexesTaskArg; import org.apache.ignite.internal.visor.verify.VisorValidateIndexesTaskResult; +import org.apache.ignite.internal.visor.verify.VisorViewCacheCmd; import org.apache.ignite.internal.visor.verify.VisorViewCacheTask; import org.apache.ignite.internal.visor.verify.VisorViewCacheTaskArg; import org.apache.ignite.internal.visor.verify.VisorViewCacheTaskResult; +import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteProductVersion; import org.apache.ignite.plugin.security.SecurityCredentials; import org.apache.ignite.plugin.security.SecurityCredentialsBasicProvider; +import org.apache.ignite.plugin.security.SecurityCredentialsProvider; +import org.apache.ignite.ssl.SslContextFactory; import static org.apache.ignite.IgniteSystemProperties.IGNITE_ENABLE_EXPERIMENTAL_COMMAND; import static org.apache.ignite.internal.IgniteVersionUtils.ACK_VER_STR; @@ -115,21 +138,29 @@ import static org.apache.ignite.internal.commandline.Command.STATE; import static org.apache.ignite.internal.commandline.Command.TX; import static org.apache.ignite.internal.commandline.Command.WAL; +import static org.apache.ignite.internal.commandline.OutputFormat.MULTI_LINE; +import static org.apache.ignite.internal.commandline.OutputFormat.SINGLE_LINE; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.CONTENTION; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.DISTRIBUTION; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.HELP; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.IDLE_VERIFY; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.LIST; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.RESET_LOST_PARTITIONS; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.VALIDATE_INDEXES; import static 
org.apache.ignite.internal.visor.baseline.VisorBaselineOperation.ADD; import static org.apache.ignite.internal.visor.baseline.VisorBaselineOperation.COLLECT; import static org.apache.ignite.internal.visor.baseline.VisorBaselineOperation.REMOVE; import static org.apache.ignite.internal.visor.baseline.VisorBaselineOperation.SET; import static org.apache.ignite.internal.visor.baseline.VisorBaselineOperation.VERSION; +import static org.apache.ignite.internal.visor.verify.VisorViewCacheCmd.CACHES; import static org.apache.ignite.internal.visor.verify.VisorViewCacheCmd.GROUPS; import static org.apache.ignite.internal.visor.verify.VisorViewCacheCmd.SEQ; +import static org.apache.ignite.ssl.SslContextFactory.DFLT_SSL_PROTOCOL; /** * Class that executes several commands passed via command line. */ public class CommandHandler { - /** Logger. */ - private static final Logger log = Logger.getLogger(CommandHandler.class.getName()); - /** */ static final String DFLT_HOST = "127.0.0.1"; @@ -155,32 +186,87 @@ public class CommandHandler { private static final String CMD_AUTO_CONFIRMATION = "--yes"; /** */ - protected static final String CMD_PING_INTERVAL = "--ping-interval"; + private static final String CMD_PING_INTERVAL = "--ping-interval"; /** */ - protected static final String CMD_PING_TIMEOUT = "--ping-timeout"; + private static final String CMD_PING_TIMEOUT = "--ping-timeout"; /** */ private static final String CMD_DUMP = "--dump"; /** */ - private static final String CMD_SKIP_ZEROS = "--skipZeros"; + private static final String CMD_SKIP_ZEROS = "--skip-zeros"; + + /** Command exclude caches. */ + private static final String CMD_EXCLUDE_CACHES = "--excludeCaches"; + + /** Cache filter. */ + private static final String CACHE_FILTER = "--cache-filter"; + + /** Message stating that only one cache filter option should be used.
*/ + public static final String ONE_CACHE_FILTER_OPT_SHOULD_USED_MSG = "Should use only one of option: " + + CMD_EXCLUDE_CACHES + ", " + CACHE_FILTER + " or pass caches explicitly"; /** */ private static final String CMD_USER_ATTRIBUTES = "--user-attributes"; + // SSL configuration section + + /** */ + private static final String CMD_SSL_PROTOCOL = "--ssl-protocol"; + + /** */ + private static final String CMD_SSL_KEY_ALGORITHM = "--ssl-key-algorithm"; + + /** */ + private static final String CMD_SSL_CIPHER_SUITES = "--ssl-cipher-suites"; + + /** */ + private static final String CMD_KEYSTORE = "--keystore"; + + /** */ + private static final String CMD_KEYSTORE_PASSWORD = "--keystore-password"; + + /** */ + private static final String CMD_KEYSTORE_TYPE = "--keystore-type"; + + /** */ + private static final String CMD_TRUSTSTORE = "--truststore"; + + /** */ + private static final String CMD_TRUSTSTORE_PASSWORD = "--truststore-password"; + + /** */ + private static final String CMD_TRUSTSTORE_TYPE = "--truststore-type"; + /** List of optional auxiliary commands. */ private static final Set AUX_COMMANDS = new HashSet<>(); static { AUX_COMMANDS.add(CMD_HELP); + AUX_COMMANDS.add(CMD_HOST); AUX_COMMANDS.add(CMD_PORT); + AUX_COMMANDS.add(CMD_PASSWORD); AUX_COMMANDS.add(CMD_USER); + AUX_COMMANDS.add(CMD_AUTO_CONFIRMATION); + AUX_COMMANDS.add(CMD_PING_INTERVAL); AUX_COMMANDS.add(CMD_PING_TIMEOUT); + + AUX_COMMANDS.add(CMD_SSL_PROTOCOL); + AUX_COMMANDS.add(CMD_SSL_KEY_ALGORITHM); + AUX_COMMANDS.add(CMD_SSL_CIPHER_SUITES); + + AUX_COMMANDS.add(CMD_KEYSTORE); + AUX_COMMANDS.add(CMD_KEYSTORE_PASSWORD); + AUX_COMMANDS.add(CMD_KEYSTORE_TYPE); + + AUX_COMMANDS.add(CMD_TRUSTSTORE); + AUX_COMMANDS.add(CMD_TRUSTSTORE_PASSWORD); + AUX_COMMANDS.add(CMD_TRUSTSTORE_TYPE); } /** Broadcast uuid. */ @@ -205,10 +291,10 @@ public class CommandHandler { private static final String BASELINE_SET_VERSION = "version"; /** Parameter name for validate_indexes command. 
*/ - static final String VI_CHECK_FIRST = "checkFirst"; + static final String VI_CHECK_FIRST = "--check-first"; /** Parameter name for validate_indexes command. */ - static final String VI_CHECK_THROUGH = "checkThrough"; + static final String VI_CHECK_THROUGH = "--check-through"; /** */ static final String WAL_PRINT = "print"; @@ -247,37 +333,61 @@ public class CommandHandler { private static final String VALIDATE_INDEXES_TASK = "org.apache.ignite.internal.visor.verify.VisorValidateIndexesTask"; /** */ - private static final String TX_LIMIT = "limit"; + private static final String TX_LIMIT = "--limit"; + + /** */ + private static final String TX_ORDER = "--order"; + + /** */ + private static final String TX_SERVERS = "--servers"; + + /** */ + private static final String TX_CLIENTS = "--clients"; /** */ - private static final String TX_ORDER = "order"; + private static final String TX_DURATION = "--min-duration"; /** */ - public static final String CMD_TX_ORDER_START_TIME = "START_TIME"; + private static final String TX_SIZE = "--min-size"; /** */ - private static final String TX_SERVERS = "servers"; + private static final String TX_LABEL = "--label"; /** */ - private static final String TX_CLIENTS = "clients"; + private static final String TX_NODES = "--nodes"; /** */ - private static final String TX_DURATION = "minDuration"; + private static final String TX_XID = "--xid"; /** */ - private static final String TX_SIZE = "minSize"; + private static final String TX_KILL = "--kill"; /** */ - private static final String TX_LABEL = "label"; + private static final String OUTPUT_FORMAT = "--output-format"; /** */ - private static final String TX_NODES = "nodes"; + private static final String CONFIG = "--config"; + + /** Utility name. */ + private static final String UTILITY_NAME = "control.sh"; + + /** Common options. */ + private static final String COMMON_OPTIONS = j(" ", getCommonOptions()); + + /** Utility name with common options. 
*/ + private static final String UTILITY_NAME_WITH_COMMON_OPTIONS = j(" ", UTILITY_NAME, COMMON_OPTIONS); + + /** Indent for help output. */ + private static final String INDENT = " "; /** */ - private static final String TX_XID = "xid"; + private static final String NULL = "null"; /** */ - private static final String TX_KILL = "kill"; + private static final String NODE_ID = "nodeId"; + + /** */ + private static final String OP_NODE_ID = op(NODE_ID); /** */ private Iterator<String> argsIt; @@ -294,6 +404,34 @@ public class CommandHandler { /** Check if experimental commands are enabled. Default {@code false}. */ private final boolean enableExperimental = IgniteSystemProperties.getBoolean(IGNITE_ENABLE_EXPERIMENTAL_COMMAND, false); + /** + * Creates list of common utility options. + * + * @return List of common utility options. + */ + private static List<String> getCommonOptions() { + List<String> list = new ArrayList<>(32); + + list.add(op(CMD_HOST, "HOST_OR_IP")); + list.add(op(CMD_PORT, "PORT")); + list.add(op(CMD_USER, "USER")); + list.add(op(CMD_PASSWORD, "PASSWORD")); + list.add(op(CMD_PING_INTERVAL, "PING_INTERVAL")); + list.add(op(CMD_PING_TIMEOUT, "PING_TIMEOUT")); + + list.add(op(CMD_SSL_PROTOCOL, "SSL_PROTOCOL[, SSL_PROTOCOL_2, ..., SSL_PROTOCOL_N]")); + list.add(op(CMD_SSL_CIPHER_SUITES, "SSL_CIPHER_1[, SSL_CIPHER_2, ..., SSL_CIPHER_N]")); + list.add(op(CMD_SSL_KEY_ALGORITHM, "SSL_KEY_ALGORITHM")); + list.add(op(CMD_KEYSTORE_TYPE, "KEYSTORE_TYPE")); + list.add(op(CMD_KEYSTORE, "KEYSTORE_PATH")); + list.add(op(CMD_KEYSTORE_PASSWORD, "KEYSTORE_PASSWORD")); + list.add(op(CMD_TRUSTSTORE_TYPE, "TRUSTSTORE_TYPE")); + list.add(op(CMD_TRUSTSTORE, "TRUSTSTORE_PATH")); + list.add(op(CMD_TRUSTSTORE_PASSWORD, "TRUSTSTORE_PASSWORD")); + + return list; + } + /** * Output specified string to console. * @@ -303,6 +441,57 @@ private void log(String s) { System.out.println(s); } + /** + * Adds an indent to the beginning of the object's string representation. + * + * @param o Input object. 
+ * @return Indented string. + */ + private static String i(Object o) { + return i(o, 1); + } + + /** + * Adds the specified number of indents to the beginning of the object's string representation. + * + * @param o Input object. + * @param indentCnt Number of indents. + * @return Indented string. + */ + private static String i(Object o, int indentCnt) { + assert indentCnt >= 0; + + String s = o == null ? null : o.toString(); + + switch (indentCnt) { + case 0: + return s; + + case 1: + return INDENT + s; + + default: + int sLen = s == null ? 4 : s.length(); + + SB sb = new SB(sLen + indentCnt * INDENT.length()); + + for (int i = 0; i < indentCnt; i++) + sb.a(INDENT); + + return sb.a(s).toString(); + } + } + + /** + * Formats and outputs the specified string to the console. + * + * @param format A format string as described in Format string syntax. + * @param args Arguments referenced by the format specifiers in the format string. + */ + private void log(String format, Object... args) { + System.out.printf(format, args); + } + /** * Provides a prompt, then reads a single line of text from the console. * @@ -319,7 +508,7 @@ private String readLine(String prompt) { * Output empty line. */ private void nl() { - System.out.println(""); + System.out.println(); } /** @@ -490,6 +679,41 @@ private R executeTask(GridClient client, Class> return executeTaskByNameOnNode(client, taskCls.getName(), taskArgs, null); } + /** + * @param client Client. + * @return Stream of (node, address) pairs. + */ + private Stream<IgniteBiTuple<GridClientNode, String>> listHosts(GridClient client) throws GridClientException { + return client.compute() + .nodes(GridClientNode::connectable) + .stream() + .flatMap(node -> Stream.concat( + node.tcpAddresses() == null ? Stream.empty() : node.tcpAddresses().stream(), + node.tcpHostNames() == null ? Stream.empty() : node.tcpHostNames().stream() + ) + .map(addr -> new IgniteBiTuple<>(node, addr + ":" + node.tcpPort()))); + } + + /** + * @param client Client. + * @return Stream of (node, addresses) tuples. 
+ */ + private Stream>> listHostsByClientNode( + GridClient client + ) throws GridClientException { + return client.compute().nodes(GridClientNode::connectable).stream() + .map( + node -> new IgniteBiTuple<>( + node, + Stream.concat( + node.tcpAddresses() == null ? Stream.empty() : node.tcpAddresses().stream(), + node.tcpHostNames() == null ? Stream.empty() : node.tcpHostNames().stream() + ) + .map(addr -> addr + ":" + node.tcpPort()).collect(Collectors.toList()) + ) + ); + } + /** * @param client Client * @param taskClsName Task class name. @@ -498,7 +722,12 @@ private R executeTask(GridClient client, Class> * @return Task result. * @throws GridClientException If failed to execute task. */ - private R executeTaskByNameOnNode(GridClient client, String taskClsName, Object taskArgs, UUID nodeId + @SuppressWarnings("unchecked") + private R executeTaskByNameOnNode( + GridClient client, + String taskClsName, + Object taskArgs, + UUID nodeId ) throws GridClientException { GridClientCompute compute = client.compute(); @@ -518,22 +747,34 @@ private R executeTaskByNameOnNode(GridClient client, String taskClsName, Obj GridClientNode node = null; if (nodeId == null) { - Collection nodes = compute.nodes(GridClientNode::connectable); - // Prefer node from connect string. 
- String origAddr = clientCfg.getServers().iterator().next(); + final String cfgAddr = clientCfg.getServers().iterator().next(); - for (GridClientNode clientNode : nodes) { - Iterator it = F.concat(clientNode.tcpAddresses().iterator(), clientNode.tcpHostNames().iterator()); + String[] parts = cfgAddr.split(":"); - while (it.hasNext()) { - if (origAddr.equals(it.next() + ":" + clientNode.tcpPort())) { - node = clientNode; + if (DFLT_HOST.equals(parts[0])) { + InetAddress addr; - break; - } + try { + addr = IgniteUtils.getLocalHost(); } + catch (IOException e) { + throw new GridClientException("Can't get localhost name.", e); + } + + if (addr.isLoopbackAddress()) + throw new GridClientException("Can't find localhost name."); + + String origAddr = addr.getHostName() + ":" + parts[1]; + + node = listHosts(client).filter(tuple -> origAddr.equals(tuple.get2())).findFirst().map(IgniteBiTuple::get1).orElse(null); + + if (node == null) + node = listHostsByClientNode(client).filter(tuple -> tuple.get2().size() == 1 && cfgAddr.equals(tuple.get2().get(0))). + findFirst().map(IgniteBiTuple::get1).orElse(null); } + else + node = listHosts(client).filter(tuple -> cfgAddr.equals(tuple.get2())).findFirst().map(IgniteBiTuple::get1).orElse(null); // Otherwise choose random node. 
if (node == null) @@ -613,23 +854,26 @@ private void cache(GridClient client, CacheArguments cacheArgs) throws Throwable } } - /** - * - */ + /** */ private void printCacheHelp() { - log("--cache subcommand allows to do the following operations:"); - - usage(" Show information about caches, groups or sequences that match a regex:", CACHE, " list regexPattern [groups|seq] [nodeId]"); - usage(" Show hot keys that are point of contention for multiple transactions:", CACHE, " contention minQueueSize [nodeId] [maxPrint]"); - usage(" Verify partition counters and hashes between primary and backups on idle cluster:", CACHE, " idle_verify [--dump] [--skipZeros] [cache1,...,cacheN]"); - usage(" Validate custom indexes on idle cluster:", CACHE, " validate_indexes [cache1,...,cacheN] [nodeId] [checkFirst|checkThrough]"); - usage(" Collect partition distribution information:", CACHE, " distribution nodeId|null [cacheName1,...,cacheNameN] [--user-attributes attributeName1[,...,attributeNameN]]"); - usage(" Reset lost partitions:", CACHE, " reset_lost_partitions cacheName1[,...,cacheNameN]"); - - log(" If [nodeId] is not specified, contention and validate_indexes commands will be broadcasted to all server nodes."); - log(" Another commands where [nodeId] is optional will run on a random server node."); - log(" checkFirst numeric parameter for validate_indexes specifies number of first K keys to be validated."); - log(" checkThrough numeric parameter for validate_indexes allows to check each Kth key."); + log(i("The '" + CACHE + "' subcommand is used to get information about and perform actions with caches. The command has the following syntax:")); + nl(); + log(i(UTILITY_NAME_WITH_COMMON_OPTIONS + " " + CACHE + " [subcommand] ")); + nl(); + log(i("The subcommands that take " + OP_NODE_ID + " as an argument ('" + LIST + "', '" + CONTENTION + "' and '" + VALIDATE_INDEXES + "') will be executed on the given node or on all server nodes if the option is not specified. 
Other commands will run on a random server node.")); + nl(); + nl(); + log(i("Subcommands:")); + + usageCache(LIST, "regexPattern", op(or("groups", "seq")), OP_NODE_ID, op(CONFIG), op(OUTPUT_FORMAT, MULTI_LINE)); + usageCache(CONTENTION, "minQueueSize", OP_NODE_ID, op("maxPrint")); + usageCache(IDLE_VERIFY, op(CMD_DUMP), op(CMD_SKIP_ZEROS), op(or(CMD_EXCLUDE_CACHES + " cache1,...,cacheN", + op(CACHE_FILTER, or(CacheFilterEnum.ALL.toString(), CacheFilterEnum.SYSTEM.toString(), + CacheFilterEnum.PERSISTENT.toString(), CacheFilterEnum.NOT_PERSISTENT.toString())), "cache1,...," + + "cacheN"))); + usageCache(VALIDATE_INDEXES, "[cache1,...,cacheN]", OP_NODE_ID, op(or(VI_CHECK_FIRST + " N", VI_CHECK_THROUGH + " K"))); + usageCache(DISTRIBUTION, or(NODE_ID, NULL), "[cacheName1,...,cacheNameN]", op(CMD_USER_ATTRIBUTES, "attrName1,...,attrNameN")); + usageCache(RESET_LOST_PARTITIONS, "cacheName1,...,cacheNameN"); nl(); } @@ -650,7 +894,7 @@ private void cacheContention(GridClient client, CacheArguments cacheArgs) throws log("Contention check failed on nodes:"); for (Map.Entry e : res.exceptions().entrySet()) { - log("Node ID = " + e.getKey()); + log("Node ID: " + e.getKey()); log("Exception message:"); log(e.getValue().getMessage()); @@ -669,42 +913,55 @@ private void cacheContention(GridClient client, CacheArguments cacheArgs) throws private void cacheValidateIndexes(GridClient client, CacheArguments cacheArgs) throws GridClientException { VisorValidateIndexesTaskArg taskArg = new VisorValidateIndexesTaskArg( cacheArgs.caches(), + cacheArgs.nodeId() != null ? Collections.singleton(cacheArgs.nodeId()) : null, cacheArgs.checkFirst(), cacheArgs.checkThrough() ); - UUID nodeId = cacheArgs.nodeId() == null ? 
BROADCAST_UUID : cacheArgs.nodeId(); - VisorValidateIndexesTaskResult taskRes = executeTaskByNameOnNode( - client, VALIDATE_INDEXES_TASK, taskArg, nodeId); + client, VALIDATE_INDEXES_TASK, taskArg, null); + + boolean errors = false; if (!F.isEmpty(taskRes.exceptions())) { + errors = true; + log("Index validation failed on nodes:"); for (Map.Entry e : taskRes.exceptions().entrySet()) { - log("Node ID = " + e.getKey()); + log(i("Node ID: " + e.getKey())); - log("Exception message:"); - log(e.getValue().getMessage()); + log(i("Exception message:")); + log(i(e.getValue().getMessage(), 2)); nl(); } } - boolean errors = false; - for (Map.Entry nodeEntry : taskRes.results().entrySet()) { + if (!nodeEntry.getValue().hasIssues()) + continue; + + errors = true; + + log("Index issues found on node " + nodeEntry.getKey() + ":"); + + Collection integrityCheckFailures = nodeEntry.getValue().integrityCheckFailures(); + + if (!integrityCheckFailures.isEmpty()) { + for (IndexIntegrityCheckIssue is : integrityCheckFailures) + log(i(is)); + } + Map partRes = nodeEntry.getValue().partitionResult(); for (Map.Entry e : partRes.entrySet()) { ValidateIndexesPartitionResult res = e.getValue(); if (!res.issues().isEmpty()) { - errors = true; - - log(e.getKey().toString() + " " + e.getValue().toString()); + log(i(j(" ", e.getKey(), e.getValue()))); for (IndexValidationIssue is : res.issues()) - log(is.toString()); + log(i(is, 2)); } } @@ -714,20 +971,20 @@ private void cacheValidateIndexes(GridClient client, CacheArguments cacheArgs) t ValidateIndexesPartitionResult res = e.getValue(); if (!res.issues().isEmpty()) { - errors = true; - - log("SQL Index " + e.getKey() + " " + e.getValue().toString()); + log(i(j(" ", "SQL Index", e.getKey(), e.getValue()))); for (IndexValidationIssue is : res.issues()) - log(is.toString()); + log(i(is, 2)); } } } if (!errors) - log("validate_indexes has finished, no issues found."); + log("no issues found."); else - log("validate_indexes has finished with 
errors (listed above)."); + log("issues found (listed above)."); + + nl(); } /** @@ -740,8 +997,11 @@ private void cacheView(GridClient client, CacheArguments cacheArgs) throws GridC VisorViewCacheTaskResult res = executeTaskByNameOnNode( client, VisorViewCacheTask.class.getName(), taskArg, cacheArgs.nodeId()); - for (CacheInfo info : res.cacheInfos()) - info.print(cacheArgs.cacheCommand()); + if (cacheArgs.fullConfig() && cacheArgs.cacheCommand() == CACHES) + cachesConfig(client, cacheArgs, res); + else + printCacheInfos(res.cacheInfos(), cacheArgs.cacheCommand()); + } /** @@ -782,7 +1042,7 @@ else if (idleVerifyV2) */ private void legacyCacheIdleVerify(GridClient client, CacheArguments cacheArgs) throws GridClientException { VisorIdleVerifyTaskResult res = executeTask( - client, VisorIdleVerifyTask.class, new VisorIdleVerifyTaskArg(cacheArgs.caches())); + client, VisorIdleVerifyTask.class, new VisorIdleVerifyTaskArg(cacheArgs.caches(), cacheArgs.excludeCaches())); Map> conflicts = res.getConflicts(); @@ -816,6 +1076,100 @@ private void cacheDistribution(GridClient client, CacheArguments cacheArgs) thro res.print(System.out); } + /** + * @param client Client. + * @param cacheArgs Cache args. + * @param viewRes Cache view task result. + */ + private void cachesConfig(GridClient client, CacheArguments cacheArgs, + VisorViewCacheTaskResult viewRes) throws GridClientException { + VisorCacheConfigurationCollectorTaskArg taskArg = new VisorCacheConfigurationCollectorTaskArg(cacheArgs.regex()); + + UUID nodeId = cacheArgs.nodeId() == null ? BROADCAST_UUID : cacheArgs.nodeId(); + + Map res = + executeTaskByNameOnNode(client, VisorCacheConfigurationCollectorTask.class.getName(), taskArg, nodeId); + + Map cacheToMapped = + viewRes.cacheInfos().stream().collect(Collectors.toMap(CacheInfo::getCacheName, CacheInfo::getMapped)); + + printCachesConfig(res, cacheArgs.outputFormat(), cacheToMapped); + } + + /** + * Prints caches info. + * + * @param infos Caches info. 
+ * @param cmd Command. + */ + private void printCacheInfos(Collection infos, VisorViewCacheCmd cmd) { + for (CacheInfo info : infos) { + Map map = info.toMap(cmd); + + SB sb = new SB("["); + + for (Map.Entry e : map.entrySet()) + sb.a(e.getKey()).a("=").a(e.getValue()).a(", "); + + sb.setLength(sb.length() - 2); + + sb.a("]"); + + log(sb.toString()); + } + } + + /** + * Prints caches config. + * + * @param caches Caches config. + * @param outputFormat Output format. + * @param cacheToMapped Map cache name to mapped. + */ + private void printCachesConfig( + Map caches, + OutputFormat outputFormat, + Map cacheToMapped + ) { + + for (Map.Entry entry : caches.entrySet()) { + String cacheName = entry.getKey(); + + switch (outputFormat) { + case MULTI_LINE: + Map params = mapToPairs(entry.getValue()); + + params.put("Mapped", cacheToMapped.get(cacheName)); + + log("[cache = '%s']%n", cacheName); + + for (Map.Entry innerEntry : params.entrySet()) + log("%s: %s%n", innerEntry.getKey(), innerEntry.getValue()); + + nl(); + + break; + + default: + int mapped = cacheToMapped.get(cacheName); + + log("%s: %s %s=%s%n", entry.getKey(), toString(entry.getValue()), "mapped", mapped); + + break; + } + } + } + + /** + * Invokes toString() method and cuts class name from result string. + * + * @param cfg Visor cache configuration for invocation. + * @return String representation without class name in begin of string. + */ + private String toString(VisorCacheConfiguration cfg) { + return cfg.toString().substring(cfg.getClass().getSimpleName().length() + 1); + } + /** * @param client Client. * @param cacheArgs Cache args. 
@@ -837,7 +1191,8 @@ private void cacheIdleVerifyDump(GridClient client, CacheArguments cacheArgs) th String path = executeTask( client, VisorIdleVerifyDumpTask.class, - new VisorIdleVerifyDumpTaskArg(cacheArgs.caches(), cacheArgs.isSkipZeros()) + new VisorIdleVerifyDumpTaskArg(cacheArgs.caches(), cacheArgs.excludeCaches(), cacheArgs.isSkipZeros(), cacheArgs + .getCacheFilterEnum()) ); log("VisorIdleVerifyDumpTask successfully written output to '" + path + "'"); @@ -849,7 +1204,7 @@ private void cacheIdleVerifyDump(GridClient client, CacheArguments cacheArgs) th */ private void cacheIdleVerifyV2(GridClient client, CacheArguments cacheArgs) throws GridClientException { IdleVerifyResultV2 res = executeTask( - client, VisorIdleVerifyTaskV2.class, new VisorIdleVerifyTaskArg(cacheArgs.caches())); + client, VisorIdleVerifyTaskV2.class, new VisorIdleVerifyTaskArg(cacheArgs.caches(), cacheArgs.excludeCaches())); res.print(System.out::print); } @@ -954,8 +1309,9 @@ private void baselinePrint0(VisorBaselineTaskResult res) { log("Baseline nodes:"); for (VisorBaselineNode node : baseline.values()) { - log(" ConsistentID=" + node.getConsistentId() + ", STATE=" + - (srvs.containsKey(node.getConsistentId()) ? "ONLINE" : "OFFLINE")); + boolean online = srvs.containsKey(node.getConsistentId()); + + log(i("ConsistentID=" + node.getConsistentId() + ", STATE=" + (online ? 
"ONLINE" : "OFFLINE"), 2)); } log(DELIM); @@ -976,7 +1332,7 @@ private void baselinePrint0(VisorBaselineTaskResult res) { log("Other nodes:"); for (VisorBaselineNode node : others) - log(" ConsistentID=" + node.getConsistentId()); + log(i("ConsistentID=" + node.getConsistentId(), 2)); log("Number of other nodes: " + others.size()); } @@ -1209,10 +1565,10 @@ private void printUnusedWalSegments0(VisorWalTaskResult taskRes) { VisorClusterNode node = nodesInfo.get(entry.getKey()); log("Node=" + node.getConsistentId()); - log(" addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames())); + log(i("addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames()), 2)); for (String fileName : entry.getValue()) - log(" " + fileName); + log(i(fileName)); nl(); } @@ -1221,8 +1577,8 @@ private void printUnusedWalSegments0(VisorWalTaskResult taskRes) { VisorClusterNode node = nodesInfo.get(entry.getKey()); log("Node=" + node.getConsistentId()); - log(" addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames())); - log(" failed with error: " + entry.getValue().getMessage()); + log(i("addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames())), 2); + log(i("failed with error: " + entry.getValue().getMessage())); nl(); } } @@ -1244,7 +1600,7 @@ private void printDeleteWalSegments0(VisorWalTaskResult taskRes) { VisorClusterNode node = nodesInfo.get(entry.getKey()); log("Node=" + node.getConsistentId()); - log(" addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames())); + log(i("addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames())), 2); nl(); } @@ -1252,8 +1608,8 @@ private void printDeleteWalSegments0(VisorWalTaskResult taskRes) { VisorClusterNode node = nodesInfo.get(entry.getKey()); log("Node=" + node.getConsistentId()); - log(" addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames())); - log(" failed with error: " + entry.getValue().getMessage()); + 
log(i("addresses " + U.addressesAsString(node.getAddresses(), node.getHostNames())), 2); + log(i("failed with error: " + entry.getValue().getMessage())); nl(); } } @@ -1286,11 +1642,190 @@ private boolean isConnectionError(Throwable e) { */ private void usage(String desc, Command cmd, String... args) { log(desc); - log(" control.sh [--host HOST_OR_IP] [--port PORT] [--user USER] [--password PASSWORD] " + - " [--ping-interval PING_INTERVAL] [--ping-timeout PING_TIMEOUT] " + cmd.text() + String.join("", args)); + log(i(j(" ", UTILITY_NAME, cmd, j(" ", args)), 2)); nl(); } + /** + * Print cache command usage with default indention. + * + * @param cmd Cache command. + * @param args Cache command arguments. + */ + private void usageCache(CacheCommand cmd, String... args) { + usageCache(1, cmd, args); + } + + /** + * Print cache command usage. + * + * @param indentsNum Number of indents. + * @param cmd Cache command. + * @param args Cache command arguments. + */ + private void usageCache(int indentsNum, CacheCommand cmd, String... args) { + log(i(DELIM, indentsNum)); + nl(); + log(i(j(" ", CACHE, cmd, j(" ", args)), indentsNum++)); + nl(); + log(i(getCacheSubcommandDesc(cmd), indentsNum)); + nl(); + + Map paramsDesc = createCacheArgsDesc(cmd); + + if (!paramsDesc.isEmpty()) { + log(i("Parameters:", indentsNum)); + + usageCacheParams(paramsDesc, indentsNum + 1); + + nl(); + } + } + + /** + * Print cache command arguments usage. + * + * @param paramsDesc Cache command arguments description. + * @param indentsNum Number of indents. + */ + private void usageCacheParams(Map paramsDesc, int indentsNum) { + int maxParamLen = paramsDesc.keySet().stream().max(Comparator.comparingInt(String::length)).get().length(); + + for (Map.Entry param : paramsDesc.entrySet()) + log(i(extendToLen(param.getKey(), maxParamLen) + INDENT + "- " + param.getValue(), indentsNum)); + } + + /** + * Appends spaces to end of input string for extending to needed length. + * + * @param s Input string. 
+ * @param targetLen Needed length. + * @return String with appended spaces at the end. + */ + private String extendToLen(String s, int targetLen) { + assert targetLen >= 0; + assert s.length() <= targetLen; + + if (s.length() == targetLen) + return s; + + SB sb = new SB(targetLen); + + sb.a(s); + + for (int i = 0; i < targetLen - s.length(); i++) + sb.a(" "); + + return sb.toString(); + } + + /** + * Gets cache command description by cache command. + * + * @param cmd Cache command. + * @return Cache command description. + */ + private String getCacheSubcommandDesc(CacheCommand cmd) { + switch (cmd) { + case LIST: + return "Show information about caches, groups or sequences that match a regular expression. When executed without parameters, this subcommand prints the list of caches."; + + case CONTENTION: + return "Show the keys that are a point of contention for multiple transactions."; + + case IDLE_VERIFY: + return "Verify counters and hash sums of primary and backup partitions for the specified caches on an idle cluster and print out the differences, if any."; + + case VALIDATE_INDEXES: + return "Validate indexes on an idle cluster and print out the keys that are missing in the indexes."; + + case DISTRIBUTION: + return "Print the information about partition distribution."; + + case RESET_LOST_PARTITIONS: + return "Reset the state of lost partitions for the specified caches."; + + default: + throw new IllegalArgumentException("Unknown command: " + cmd); + } + } + + /** + * Gets cache command arguments description by cache command. + * + * @param cmd Cache command. + * @return Cache command arguments description. + */ + private Map<String, String> createCacheArgsDesc(CacheCommand cmd) { + Map<String, String> map = U.newLinkedHashMap(16); + switch (cmd) { + case LIST: + map.put(CONFIG, "print all configuration parameters for each cache."); + map.put(OUTPUT_FORMAT + " " + MULTI_LINE.text(), "print configuration parameters per line. 
This option has an effect only when used with " + CONFIG + " and without [groups|seq]."); + + break; + case VALIDATE_INDEXES: + map.put(VI_CHECK_FIRST + " N", "validate only the first N keys"); + map.put(VI_CHECK_THROUGH + " K", "validate every Kth key"); + + break; + } + return map; + } + + /** + * Joins input parameters with spaces and wraps the result in optional brackets {@code []}. + * + * @param params Input parameters. + * @return Joined parameters wrapped in optional brackets. + */ + private static String op(Object... params) { + return j(new SB(), "[", " ", params).a("]").toString(); + } + + /** + * Joins input parameters with the specified {@code delimiter} between them. + * + * @param delimiter Specified delimiter. + * @param params Input parameters. + * @return Joined parameters with the specified {@code delimiter}. + */ + private static String j(String delimiter, Object... params) { + return j(new SB(), "", delimiter, params).toString(); + } + + /** + * Joins input parameters with the specified {@code delimiter} between them and appends them to the given string builder. + * + * @param sb Specified string builder. + * @param sbDelimiter Delimiter between {@code sb} and the appended {@code params}. + * @param delimiter Specified delimiter. + * @param params Input parameters. + * @return SB with the joined parameters appended to the end. + */ + private static SB j(SB sb, String sbDelimiter, String delimiter, Object... params) { + if (!F.isEmpty(params)) { + sb.a(sbDelimiter); + + for (Object par : params) + sb.a(par).a(delimiter); + + sb.setLength(sb.length() - delimiter.length()); + } + + return sb; + } + + /** + * Concatenates input parameters to a single string with OR delimiter {@code |}. + * + * @param params Remaining parameters. + * @return Concatenated string. + */ + private static String or(Object... params) { + return j("|", params); + } + /** * Extract next argument. 
* @@ -1362,6 +1897,24 @@ Arguments parseAndValidate(List rawArgs) { VisorTxTaskArg txArgs = null; + String sslProtocol = DFLT_SSL_PROTOCOL; + + String sslCipherSuites = ""; + + String sslKeyAlgorithm = SslContextFactory.DFLT_KEY_ALGORITHM; + + String sslKeyStoreType = SslContextFactory.DFLT_STORE_TYPE; + + String sslKeyStorePath = null; + + char sslKeyStorePassword[] = null; + + String sslTrustStoreType = SslContextFactory.DFLT_STORE_TYPE; + + String sslTrustStorePath = null; + + char sslTrustStorePassword[] = null; + while (hasNextArg()) { String str = nextArg("").toLowerCase(); @@ -1474,6 +2027,51 @@ Arguments parseAndValidate(List rawArgs) { break; + case CMD_SSL_PROTOCOL: + sslProtocol = nextArg("Expected SSL protocol"); + + break; + + case CMD_SSL_CIPHER_SUITES: + sslCipherSuites = nextArg("Expected SSL cipher suites"); + + break; + + case CMD_SSL_KEY_ALGORITHM: + sslKeyAlgorithm = nextArg("Expected SSL key algorithm"); + + break; + + case CMD_KEYSTORE: + sslKeyStorePath = nextArg("Expected SSL key store path"); + + break; + + case CMD_KEYSTORE_PASSWORD: + sslKeyStorePassword = nextArg("Expected SSL key store password").toCharArray(); + + break; + + case CMD_KEYSTORE_TYPE: + sslKeyStoreType = nextArg("Expected SSL key store type"); + + break; + + case CMD_TRUSTSTORE: + sslTrustStorePath = nextArg("Expected SSL trust store path"); + + break; + + case CMD_TRUSTSTORE_PASSWORD: + sslTrustStorePassword = nextArg("Expected SSL trust store password").toCharArray(); + + break; + + case CMD_TRUSTSTORE_TYPE: + sslTrustStoreType = nextArg("Expected SSL trust store type"); + + break; + case CMD_AUTO_CONFIRMATION: autoConfirmation = true; @@ -1495,14 +2093,14 @@ Arguments parseAndValidate(List rawArgs) { Command cmd = commands.get(0); - boolean hasUsr = F.isEmpty(user); - boolean hasPwd = F.isEmpty(pwd); - - if (hasUsr != hasPwd) - throw new IllegalArgumentException("Both user and password should be specified"); - - return new Arguments(cmd, host, port, user, pwd, 
baselineAct, baselineArgs, txArgs, cacheArgs, walAct, walArgs, - pingTimeout, pingInterval, autoConfirmation); + return new Arguments(cmd, host, port, user, pwd, + baselineAct, baselineArgs, + txArgs, cacheArgs, + walAct, walArgs, + pingTimeout, pingInterval, autoConfirmation, + sslProtocol, sslCipherSuites, + sslKeyAlgorithm, sslKeyStorePath, sslKeyStorePassword, sslKeyStoreType, + sslTrustStorePath, sslTrustStorePassword, sslTrustStoreType); } /** @@ -1539,10 +2137,30 @@ private CacheArguments parseAndValidateCacheArgs() { if (CMD_DUMP.equals(nextArg)) cacheArgs.dump(true); + else if (CMD_EXCLUDE_CACHES.equals(nextArg)) { + if (cacheArgs.caches() != null || cacheArgs.getCacheFilterEnum() != CacheFilterEnum.ALL) + throw new IllegalArgumentException(ONE_CACHE_FILTER_OPT_SHOULD_USED_MSG); + + parseExcludeCacheNames(nextArg("Specify caches, which will be excluded."), + cacheArgs); + } else if (CMD_SKIP_ZEROS.equals(nextArg)) cacheArgs.skipZeros(true); - else + else if (CACHE_FILTER.equals(nextArg)) { + if (cacheArgs.caches() != null || cacheArgs.excludeCaches() != null) + throw new IllegalArgumentException(ONE_CACHE_FILTER_OPT_SHOULD_USED_MSG); + + String filter = nextArg("The cache filter should be specified. 
The following values can be " + + "used: " + Arrays.toString(CacheFilterEnum.values()) + '.'); + + cacheArgs.setCacheFilterEnum(CacheFilterEnum.valueOf(filter.toUpperCase())); + } + else { + if (cacheArgs.excludeCaches() != null || cacheArgs.getCacheFilterEnum() != CacheFilterEnum.ALL) + throw new IllegalArgumentException(ONE_CACHE_FILTER_OPT_SHOULD_USED_MSG); + parseCacheNames(nextArg, cacheArgs); + } } break; @@ -1612,27 +2230,27 @@ else if (CMD_SKIP_ZEROS.equals(nextArg)) case DISTRIBUTION: String nodeIdStr = nextArg("Node id expected or null"); - if (!"null".equals(nodeIdStr)) + if (!NULL.equals(nodeIdStr)) cacheArgs.nodeId(UUID.fromString(nodeIdStr)); while (hasNextCacheArg()) { String nextArg = nextArg(""); - if (CMD_USER_ATTRIBUTES.equals(nextArg)){ + if (CMD_USER_ATTRIBUTES.equals(nextArg)) { nextArg = nextArg("User attributes are expected to be separated by commas"); - Set userAttributes = new HashSet(); + Set userAttrs = new HashSet<>(); - for (String userAttribute:nextArg.split(",")) - userAttributes.add(userAttribute.trim()); + for (String userAttribute : nextArg.split(",")) + userAttrs.add(userAttribute.trim()); - cacheArgs.setUserAttributes(userAttributes); + cacheArgs.setUserAttributes(userAttrs); nextArg = (hasNextCacheArg()) ? 
nextArg("") : null; } - if (nextArg!=null) + if (nextArg != null) parseCacheNames(nextArg, cacheArgs); } @@ -1643,20 +2261,36 @@ else if (CMD_SKIP_ZEROS.equals(nextArg)) break; - default: + case LIST: cacheArgs.regex(nextArg("Regex is expected")); - if (hasNextCacheArg()) { - String tmp = nextArg(""); + VisorViewCacheCmd cacheCmd = CACHES; + + OutputFormat outputFormat = SINGLE_LINE; + + while (hasNextCacheArg()) { + String tmp = nextArg("").toLowerCase(); switch (tmp) { case "groups": - cacheArgs.cacheCommand(GROUPS); + cacheCmd = GROUPS; break; case "seq": - cacheArgs.cacheCommand(SEQ); + cacheCmd = SEQ; + + break; + + case OUTPUT_FORMAT: + String tmp2 = nextArg("output format must be defined!").toLowerCase(); + + outputFormat = OutputFormat.fromConsoleName(tmp2); + + break; + + case CONFIG: + cacheArgs.fullConfig(true); break; @@ -1665,7 +2299,13 @@ else if (CMD_SKIP_ZEROS.equals(nextArg)) } } + cacheArgs.cacheCommand(cacheCmd); + cacheArgs.outputFormat(outputFormat); + break; + + default: + throw new IllegalArgumentException("Unknown --cache subcommand " + cmd); } if (hasNextCacheArg()) @@ -1698,6 +2338,24 @@ private void parseCacheNames(String cacheNames, CacheArguments cacheArgs) { cacheArgs.caches(cacheNamesSet); } + /** + * @param cacheNames Cache names arg. + * @param cacheArgs Cache args. + */ + private void parseExcludeCacheNames(String cacheNames, CacheArguments cacheArgs) { + String[] cacheNamesArr = cacheNames.split(","); + Set cacheNamesSet = new HashSet<>(); + + for (String cacheName : cacheNamesArr) { + if (F.isEmpty(cacheName)) + throw new IllegalArgumentException("Non-empty cache names expected."); + + cacheNamesSet.add(cacheName.trim()); + } + + cacheArgs.excludeCaches(cacheNamesSet); + } + /** * Get ping param for grid client. 
* @@ -1760,7 +2418,7 @@ private VisorTxTaskArg parseTransactionArguments() { case TX_ORDER: nextArg(""); - sortOrder = VisorTxSortOrder.fromString(nextArg(TX_ORDER)); + sortOrder = VisorTxSortOrder.valueOf(nextArg(TX_ORDER).toUpperCase()); break; @@ -1851,6 +2509,41 @@ private long nextLongArg(String lb) { } } + /** + * Requests password from console with message. + * + * @param msg Message. + * @return Password. + */ + private char[] requestPasswordFromConsole(String msg) { + Console console = System.console(); + + if (console == null) + throw new UnsupportedOperationException("Failed to securely read password (console is unavailable): " + msg); + else + return console.readPassword(msg); + } + + /** + * Requests user data from console with message. + * + * @param msg Message. + * @return Input user data. + */ + private String requestDataFromConsole(String msg) { + Console console = System.console(); + + if (console != null) + return console.readLine(msg); + else { + Scanner scanner = new Scanner(System.in); + + log(msg); + + return scanner.nextLine(); + } + } + /** * Check if raw arg is command or option. * @@ -1860,6 +2553,200 @@ private boolean isCommandOrOption(String raw) { return raw != null && raw.contains("--"); } + /** + * Maps VisorCacheConfiguration to key-value pairs. + * + * @param cfg Visor cache configuration. + * @return map of key-value pairs. 
+ */ + private Map mapToPairs(VisorCacheConfiguration cfg) { + Map params = new LinkedHashMap<>(); + + VisorCacheAffinityConfiguration affinityCfg = cfg.getAffinityConfiguration(); + VisorCacheNearConfiguration nearCfg = cfg.getNearConfiguration(); + VisorCacheRebalanceConfiguration rebalanceCfg = cfg.getRebalanceConfiguration(); + VisorCacheEvictionConfiguration evictCfg = cfg.getEvictionConfiguration(); + VisorCacheStoreConfiguration storeCfg = cfg.getStoreConfiguration(); + VisorQueryConfiguration qryCfg = cfg.getQueryConfiguration(); + + params.put("Name", cfg.getName()); + params.put("Group", cfg.getGroupName()); + params.put("Dynamic Deployment ID", cfg.getDynamicDeploymentId()); + params.put("System", cfg.isSystem()); + + params.put("Mode", cfg.getMode()); + params.put("Atomicity Mode", cfg.getAtomicityMode()); + params.put("Statistic Enabled", cfg.isStatisticsEnabled()); + params.put("Management Enabled", cfg.isManagementEnabled()); + + params.put("On-heap cache enabled", cfg.isOnheapCacheEnabled()); + params.put("Partition Loss Policy", cfg.getPartitionLossPolicy()); + params.put("Query Parallelism", cfg.getQueryParallelism()); + params.put("Copy On Read", cfg.isCopyOnRead()); + params.put("Listener Configurations", cfg.getListenerConfigurations()); + params.put("Load Previous Value", cfg.isLoadPreviousValue()); + params.put("Memory Policy Name", cfg.getMemoryPolicyName()); + params.put("Node Filter", cfg.getNodeFilter()); + params.put("Read From Backup", cfg.isReadFromBackup()); + params.put("Topology Validator", cfg.getTopologyValidator()); + + params.put("Time To Live Eager Flag", cfg.isEagerTtl()); + + params.put("Write Synchronization Mode", cfg.getWriteSynchronizationMode()); + params.put("Invalidate", cfg.isInvalidate()); + + params.put("Affinity Function", affinityCfg.getFunction()); + params.put("Affinity Backups", affinityCfg.getPartitionedBackups()); + params.put("Affinity Partitions", affinityCfg.getPartitions()); + params.put("Affinity Exclude 
Neighbors", affinityCfg.isExcludeNeighbors()); + params.put("Affinity Mapper", affinityCfg.getMapper()); + + params.put("Rebalance Mode", rebalanceCfg.getMode()); + params.put("Rebalance Batch Size", rebalanceCfg.getBatchSize()); + params.put("Rebalance Timeout", rebalanceCfg.getTimeout()); + params.put("Rebalance Delay", rebalanceCfg.getPartitionedDelay()); + params.put("Time Between Rebalance Messages", rebalanceCfg.getThrottle()); + params.put("Rebalance Batches Count", rebalanceCfg.getBatchesPrefetchCnt()); + params.put("Rebalance Cache Order", rebalanceCfg.getRebalanceOrder()); + + params.put("Eviction Policy Enabled", (evictCfg.getPolicy() != null)); + params.put("Eviction Policy Factory", evictCfg.getPolicy()); + params.put("Eviction Policy Max Size", evictCfg.getPolicyMaxSize()); + params.put("Eviction Filter", evictCfg.getFilter()); + + params.put("Near Cache Enabled", nearCfg.isNearEnabled()); + params.put("Near Start Size", nearCfg.getNearStartSize()); + params.put("Near Eviction Policy Factory", nearCfg.getNearEvictPolicy()); + params.put("Near Eviction Policy Max Size", nearCfg.getNearEvictMaxSize()); + + params.put("Default Lock Timeout", cfg.getDefaultLockTimeout()); + params.put("Query Entities", cfg.getQueryEntities()); + params.put("Cache Interceptor", cfg.getInterceptor()); + + params.put("Store Enabled", storeCfg.isEnabled()); + params.put("Store Class", storeCfg.getStore()); + params.put("Store Factory Class", storeCfg.getStoreFactory()); + params.put("Store Keep Binary", storeCfg.isStoreKeepBinary()); + params.put("Store Read Through", storeCfg.isReadThrough()); + params.put("Store Write Through", storeCfg.isWriteThrough()); + params.put("Store Write Coalescing", storeCfg.getWriteBehindCoalescing()); + + params.put("Write-Behind Enabled", storeCfg.isWriteBehindEnabled()); + params.put("Write-Behind Flush Size", storeCfg.getFlushSize()); + params.put("Write-Behind Frequency", storeCfg.getFlushFrequency()); + params.put("Write-Behind Flush 
Threads Count", storeCfg.getFlushThreadCount()); + params.put("Write-Behind Batch Size", storeCfg.getBatchSize()); + + params.put("Concurrent Asynchronous Operations Number", cfg.getMaxConcurrentAsyncOperations()); + + params.put("Loader Factory Class Name", cfg.getLoaderFactory()); + params.put("Writer Factory Class Name", cfg.getWriterFactory()); + params.put("Expiry Policy Factory Class Name", cfg.getExpiryPolicyFactory()); + + params.put("Query Execution Time Threshold", qryCfg.getLongQueryWarningTimeout()); + params.put("Query Escaped Names", qryCfg.isSqlEscapeAll()); + params.put("Query SQL Schema", qryCfg.getSqlSchema()); + params.put("Query SQL functions", qryCfg.getSqlFunctionClasses()); + params.put("Query Indexed Types", qryCfg.getIndexedTypes()); + params.put("Maximum payload size for offheap indexes", cfg.getSqlIndexMaxInlineSize()); + params.put("Query Metrics History Size", cfg.getQueryDetailMetricsSize()); + + return params; + } + + /** + * Split string into items. + * + * @param s String to process. + * @param delim Delimiter. + * @return List with items. + */ + private List split(String s, String delim) { + if (F.isEmpty(s)) + return Collections.emptyList(); + + return Arrays.stream(s.split(delim)) + .map(String::trim) + .filter(item -> !item.isEmpty()) + .collect(Collectors.toList()); + } + + /** + * @return Transaction command options. 
+ */ + private String[] getTxOptions() { + List list = new ArrayList<>(); + + list.add(op(TX_XID, "XID")); + list.add(op(TX_DURATION, "SECONDS")); + list.add(op(TX_SIZE, "SIZE")); + list.add(op(TX_LABEL, "PATTERN_REGEX")); + list.add(op(or(TX_SERVERS, TX_CLIENTS))); + list.add(op(TX_NODES, "consistentId1[,consistentId2,....,consistentIdN]")); + list.add(op(TX_LIMIT, "NUMBER")); + list.add(op(TX_ORDER, or(VisorTxSortOrder.values()))); + list.add(op(TX_KILL)); + list.add(op(CMD_AUTO_CONFIRMATION)); + + return list.toArray(new String[list.size()]); + } + + /** */ + private void printHelp() { + final String consistIds = "consistentId1[,consistentId2,....,consistentIdN]"; + + log("Control.sh is used to execute admin commands on a cluster or get common cluster info. The command has the following syntax:"); + nl(); + + log(i(j(" ", UTILITY_NAME_WITH_COMMON_OPTIONS, op("command"), ""))); + nl(); + nl(); + + log("This utility can do the following commands:"); + + usage(i("Activate cluster:"), ACTIVATE); + usage(i("Deactivate cluster:"), DEACTIVATE, op(CMD_AUTO_CONFIRMATION)); + usage(i("Print current cluster state:"), STATE); + usage(i("Print cluster baseline topology:"), BASELINE); + usage(i("Add nodes into baseline topology:"), BASELINE, BASELINE_ADD, consistIds, op(CMD_AUTO_CONFIRMATION)); + usage(i("Remove nodes from baseline topology:"), BASELINE, BASELINE_REMOVE, consistIds, op(CMD_AUTO_CONFIRMATION)); + usage(i("Set baseline topology:"), BASELINE, BASELINE_SET, consistIds, op(CMD_AUTO_CONFIRMATION)); + usage(i("Set baseline topology based on version:"), BASELINE, BASELINE_SET_VERSION + " topologyVersion", op(CMD_AUTO_CONFIRMATION)); + usage(i("List or kill transactions:"), TX, getTxOptions()); + + if (enableExperimental) { + usage(i("Print absolute paths of unused archived wal segments on each node:"), WAL, WAL_PRINT, "[consistentId1,consistentId2,....,consistentIdN]"); + usage(i("Delete unused archived wal segments on each node:"), WAL, WAL_DELETE, 
"[consistentId1,consistentId2,....,consistentIdN]", op(CMD_AUTO_CONFIRMATION)); + } + + log(i("View caches information in a cluster. For more details type:")); + log(i(j(" ", UTILITY_NAME, CACHE, HELP), 2)); + nl(); + + log("By default commands affecting the cluster require interactive confirmation."); + log("Use " + CMD_AUTO_CONFIRMATION + " option to disable it."); + nl(); + + log("Default values:"); + log(i("HOST_OR_IP=" + DFLT_HOST, 2)); + log(i("PORT=" + DFLT_PORT, 2)); + log(i("PING_INTERVAL=" + DFLT_PING_INTERVAL, 2)); + log(i("PING_TIMEOUT=" + DFLT_PING_TIMEOUT, 2)); + log(i("SSL_PROTOCOL=" + SslContextFactory.DFLT_SSL_PROTOCOL, 2)); + log(i("SSL_KEY_ALGORITHM=" + SslContextFactory.DFLT_KEY_ALGORITHM, 2)); + log(i("KEYSTORE_TYPE=" + SslContextFactory.DFLT_STORE_TYPE, 2)); + log(i("TRUSTSTORE_TYPE=" + SslContextFactory.DFLT_STORE_TYPE, 2)); + + nl(); + + log("Exit codes:"); + log(i(EXIT_CODE_OK + " - successful execution.", 2)); + log(i(EXIT_CODE_INVALID_ARGUMENTS + " - invalid arguments.", 2)); + log(i(EXIT_CODE_CONNECTION_FAILED + " - connection failed.", 2)); + log(i(ERR_AUTHENTICATION_FAILED + " - authentication failed.", 2)); + log(i(EXIT_CODE_UNEXPECTED_ERROR + " - unexpected error.", 2)); + } + /** * Parse and execute command. 
* @@ -1874,54 +2761,19 @@ public int execute(List rawArgs) { try { if (F.isEmpty(rawArgs) || (rawArgs.size() == 1 && CMD_HELP.equalsIgnoreCase(rawArgs.get(0)))) { - log("This utility can do the following commands:"); - - usage(" Activate cluster:", ACTIVATE); - usage(" Deactivate cluster:", DEACTIVATE, " [" + CMD_AUTO_CONFIRMATION + "]"); - usage(" Print current cluster state:", STATE); - usage(" Print cluster baseline topology:", BASELINE); - usage(" Add nodes into baseline topology:", BASELINE, " add consistentId1[,consistentId2,....,consistentIdN] [" + CMD_AUTO_CONFIRMATION + "]"); - usage(" Remove nodes from baseline topology:", BASELINE, " remove consistentId1[,consistentId2,....,consistentIdN] [" + CMD_AUTO_CONFIRMATION + "]"); - usage(" Set baseline topology:", BASELINE, " set consistentId1[,consistentId2,....,consistentIdN] [" + CMD_AUTO_CONFIRMATION + "]"); - usage(" Set baseline topology based on version:", BASELINE, " version topologyVersion [" + CMD_AUTO_CONFIRMATION + "]"); - usage(" List or kill transactions:", TX, " [xid XID] [minDuration SECONDS] " + - "[minSize SIZE] [label PATTERN_REGEX] [servers|clients] " + - "[nodes consistentId1[,consistentId2,....,consistentIdN] [limit NUMBER] [order DURATION|SIZE|", CMD_TX_ORDER_START_TIME, "] [kill] [" + CMD_AUTO_CONFIRMATION + "]"); - - if (enableExperimental) { - usage(" Print absolute paths of unused archived wal segments on each node:", WAL, - " print [consistentId1,consistentId2,....,consistentIdN]"); - usage(" Delete unused archived wal segments on each node:", WAL, - " delete [consistentId1,consistentId2,....,consistentIdN] [" + CMD_AUTO_CONFIRMATION + "]"); - } + printHelp(); - log(" View caches information in a cluster. 
For more details type:"); - log(" control.sh --cache help"); - nl(); - - log("By default commands affecting the cluster require interactive confirmation."); - log("Use " + CMD_AUTO_CONFIRMATION + " option to disable it."); - nl(); + return EXIT_CODE_OK; + } - log("Default values:"); - log(" HOST_OR_IP=" + DFLT_HOST); - log(" PORT=" + DFLT_PORT); - log(" PING_INTERVAL=" + DFLT_PING_INTERVAL); - log(" PING_TIMEOUT=" + DFLT_PING_TIMEOUT); - nl(); + Arguments args = parseAndValidate(rawArgs); - log("Exit codes:"); - log(" " + EXIT_CODE_OK + " - successful execution."); - log(" " + EXIT_CODE_INVALID_ARGUMENTS + " - invalid arguments."); - log(" " + EXIT_CODE_CONNECTION_FAILED + " - connection failed."); - log(" " + ERR_AUTHENTICATION_FAILED + " - authentication failed."); - log(" " + EXIT_CODE_UNEXPECTED_ERROR + " - unexpected error."); + if (args.command() == CACHE && args.cacheArgs().command() == HELP) { + printCacheHelp(); return EXIT_CODE_OK; } - Arguments args = parseAndValidate(rawArgs); - if (!args.autoConfirmation() && !confirm(args)) { log("Operation cancelled."); @@ -1936,51 +2788,128 @@ public int execute(List rawArgs) { clientCfg.setServers(Collections.singletonList(args.host() + ":" + args.port())); - if (!F.isEmpty(args.user())) { - clientCfg.setSecurityCredentialsProvider( - new SecurityCredentialsBasicProvider(new SecurityCredentials(args.user(), args.password()))); - } + boolean tryConnectAgain = true; - try (GridClient client = GridClientFactory.start(clientCfg)) { - switch (args.command()) { - case ACTIVATE: - activate(client); + int tryConnectMaxCount = 3; - break; + while (tryConnectAgain) { + tryConnectAgain = false; - case DEACTIVATE: - deactivate(client); + if (!F.isEmpty(args.getUserName())) { + SecurityCredentialsProvider securityCredential = clientCfg.getSecurityCredentialsProvider(); - break; + if (securityCredential == null) { + securityCredential = new SecurityCredentialsBasicProvider( + new SecurityCredentials(args.getUserName(), 
args.getPassword())); - case STATE: - state(client); + clientCfg.setSecurityCredentialsProvider(securityCredential); + } + final SecurityCredentials credential = securityCredential.credentials(); + credential.setLogin(args.getUserName()); + credential.setPassword(args.getPassword()); + } - break; + if (!F.isEmpty(args.sslKeyStorePath())) { + GridSslBasicContextFactory factory = new GridSslBasicContextFactory(); - case BASELINE: - baseline(client, args.baselineAction(), args.baselineArguments()); + List sslProtocols = split(args.sslProtocol(), ","); - break; + String sslProtocol = F.isEmpty(sslProtocols) ? DFLT_SSL_PROTOCOL : sslProtocols.get(0); - case TX: - transactions(client, args.transactionArguments()); + factory.setProtocol(sslProtocol); + factory.setKeyAlgorithm(args.sslKeyAlgorithm()); - break; + if (sslProtocols.size() > 1) + factory.setProtocols(sslProtocols); - case CACHE: - cache(client, args.cacheArgs()); + factory.setCipherSuites(split(args.getSslCipherSuites(), ",")); - break; + factory.setKeyStoreFilePath(args.sslKeyStorePath()); - case WAL: - wal(client, args.walAction(), args.walArguments()); + if (args.sslKeyStorePassword() != null) + factory.setKeyStorePassword(args.sslKeyStorePassword()); + else + factory.setKeyStorePassword(requestPasswordFromConsole("SSL keystore password: ")); - break; + factory.setKeyStoreType(args.sslKeyStoreType()); + + if (F.isEmpty(args.sslTrustStorePath())) + factory.setTrustManagers(GridSslBasicContextFactory.getDisabledTrustManager()); + else { + factory.setTrustStoreFilePath(args.sslTrustStorePath()); + + if (args.sslTrustStorePassword() != null) + factory.setTrustStorePassword(args.sslTrustStorePassword()); + else + factory.setTrustStorePassword(requestPasswordFromConsole("SSL truststore password: ")); + + factory.setTrustStoreType(args.sslTrustStoreType()); + } + + clientCfg.setSslContextFactory(factory); } - } - return 0; + try (GridClient client = GridClientFactory.start(clientCfg)) { + switch (args.command()) { 
+ case ACTIVATE: + activate(client); + + break; + + case DEACTIVATE: + deactivate(client); + + break; + + case STATE: + state(client); + + break; + + case BASELINE: + baseline(client, args.baselineAction(), args.baselineArguments()); + + break; + + case TX: + transactions(client, args.transactionArguments()); + + break; + + case CACHE: + cache(client, args.cacheArgs()); + + break; + + case WAL: + wal(client, args.walAction(), args.walArguments()); + + break; + } + } + catch (Throwable e) { + if (tryConnectMaxCount > 0 && isAuthError(e)) { + log("Authentication error, try connection again."); + + if (F.isEmpty(args.getUserName())) + args.setUserName(requestDataFromConsole("user: ")); + + args.setPassword(new String(requestPasswordFromConsole("password: "))); + + tryConnectAgain = true; + + tryConnectMaxCount--; + } + else { + if (tryConnectMaxCount == 0) + throw new GridClientAuthenticationException("Authentication error, maximum number of " + + "retries exceeded"); + + throw e; + } + } + } + return EXIT_CODE_OK; } catch (IllegalArgumentException e) { return error(EXIT_CODE_INVALID_ARGUMENTS, "Check arguments.", e); @@ -2010,7 +2939,6 @@ public static void main(String[] args) { * * @return Last operation result; */ - @SuppressWarnings("unchecked") public T getLastOperationResult() { return (T)lastOperationRes; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/commandline/OutputFormat.java b/modules/core/src/main/java/org/apache/ignite/internal/commandline/OutputFormat.java new file mode 100644 index 0000000000000..356cb4b2e88e5 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/commandline/OutputFormat.java @@ -0,0 +1,66 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.commandline; + +import org.jetbrains.annotations.NotNull; + +/** + * Output format for command results (e.g. {@code --cache list}). + */ +public enum OutputFormat { + /** Single line. */ + SINGLE_LINE("single-line"), + + /** Multi line. */ + MULTI_LINE("multi-line"); + + /** */ + private final String text; + + /** */ + OutputFormat(String text) { + this.text = text; + } + + /** + * @return Text. + */ + public String text() { + return text; + } + + /** + * Converts format name in console to enumerated value. + * + * @param text Format name in console. + * @return Enumerated value. + * @throws IllegalArgumentException If enumerated value not found. 
+ */ + public static OutputFormat fromConsoleName(@NotNull String text) { + for (OutputFormat format : values()) { + if (format.text.equals(text)) + return format; + } + + throw new IllegalArgumentException("Unknown output format " + text); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return text; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheArguments.java b/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheArguments.java index 97d234aeb879d..46110b7272a7e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheArguments.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheArguments.java @@ -18,6 +18,8 @@ import java.util.Set; import java.util.UUID; +import org.apache.ignite.internal.commandline.OutputFormat; +import org.apache.ignite.internal.visor.verify.CacheFilterEnum; import org.apache.ignite.internal.visor.verify.VisorViewCacheCmd; import org.jetbrains.annotations.Nullable; @@ -31,6 +33,9 @@ public class CacheArguments { /** Caches. */ private Set caches; + /** Exclude caches or groups. */ + private Set excludeCaches; + /** Partition id. */ private int partId; @@ -64,6 +69,39 @@ public class CacheArguments { /** Additional user attributes in result. Set of attribute names whose values will be searched in ClusterNode.attributes(). */ private Set userAttributes; + /** Output format. */ + private OutputFormat outputFormat; + + /** Full config flag. */ + private boolean fullConfig; + + /** Cache filter. */ + private CacheFilterEnum cacheFilterEnum = CacheFilterEnum.ALL; + + /** + * @return Filter of caches, which will be checked. + */ + public CacheFilterEnum getCacheFilterEnum() { + return cacheFilterEnum; + } + + /** + * @param cacheFilterEnum Cache filter. 
+ */ + public void setCacheFilterEnum(CacheFilterEnum cacheFilterEnum) { + this.cacheFilterEnum = cacheFilterEnum; + } + + /** + * @return Full config flag. + */ + public boolean fullConfig() { return fullConfig; } + + /** + * @param fullConfig New full config flag. + */ + public void fullConfig(boolean fullConfig) { this.fullConfig = fullConfig; } + /** * @return Command. */ @@ -106,6 +144,20 @@ public void caches(Set caches) { this.caches = caches; } + /** + * @return Exclude caches or groups. + */ + public Set excludeCaches() { + return excludeCaches; + } + + /** + * @param excludeCaches Exclude caches or groups. + */ + public void excludeCaches(Set excludeCaches) { + this.excludeCaches = excludeCaches; + } + /** * @return Partition id. */ @@ -245,4 +297,14 @@ public Set getUserAttributes() { public void setUserAttributes(Set userAttrs) { userAttributes = userAttrs; } + + /** + * @return Output format. + */ + public OutputFormat outputFormat() { return outputFormat; } + + /** + * @param outputFormat New output format. + */ + public void outputFormat(OutputFormat outputFormat) { this.outputFormat = outputFormat; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheCommand.java b/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheCommand.java index af222a8bc0973..63a55d8c9f41d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheCommand.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/CacheCommand.java @@ -100,4 +100,9 @@ public String text() { @Nullable public static CacheCommand fromOrdinal(int ord) { return ord >= 0 && ord < VALS.length ? 
VALS[ord] : null; } + + /** {@inheritDoc} */ + @Override public String toString() { + return name; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/distribution/CacheDistributionTaskResult.java b/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/distribution/CacheDistributionTaskResult.java index 71de3bbc0dd05..52c6eec308afb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/distribution/CacheDistributionTaskResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/commandline/cache/distribution/CacheDistributionTaskResult.java @@ -294,10 +294,10 @@ public void setUserAttributes(Map userAttrs) { Row other = (Row)o; - int res = grpId - other.grpId; + int res = Integer.compare(grpId, other.grpId); if (res == 0) { - res = partId - other.partId; + res = Integer.compare(partId, other.partId); if (res == 0) res = nodeId.compareTo(other.nodeId); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageReader.java b/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageReader.java index 47d7877816824..b8208016aba53 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageReader.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageReader.java @@ -27,6 +27,9 @@ import org.apache.ignite.internal.direct.stream.DirectByteBufferStream; import org.apache.ignite.internal.direct.stream.v1.DirectByteBufferStreamImplV1; import org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2; +import org.apache.ignite.internal.direct.stream.v3.DirectByteBufferStreamImplV3; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteOutClosure; import 
org.apache.ignite.lang.IgniteUuid; @@ -41,8 +44,13 @@ */ public class DirectMessageReader implements MessageReader { /** State. */ + @GridToStringInclude private final DirectMessageState state; + /** Protocol version. */ + @GridToStringInclude + private final byte protoVer; + /** Whether last field was fully read. */ private boolean lastRead; @@ -56,6 +64,8 @@ public DirectMessageReader(final MessageFactory msgFactory, final byte protoVer) return new StateItem(msgFactory, protoVer); } }); + + this.protoVer = protoVer; } /** {@inheritDoc} */ @@ -304,6 +314,21 @@ public DirectMessageReader(final MessageFactory msgFactory, final byte protoVer) return val; } + /** {@inheritDoc} */ + @Override public AffinityTopologyVersion readAffinityTopologyVersion(String name) { + if (protoVer >= 3) { + DirectByteBufferStream stream = state.item().stream; + + AffinityTopologyVersion val = stream.readAffinityTopologyVersion(); + + lastRead = stream.lastFinished(); + + return val; + } + + return readMessage(name); + } + /** {@inheritDoc} */ @Nullable @Override public T readMessage(String name) { DirectByteBufferStream stream = state.item().stream; @@ -409,6 +434,11 @@ public StateItem(MessageFactory msgFactory, byte protoVer) { break; + case 3: + stream = new DirectByteBufferStreamImplV3(msgFactory); + + break; + default: throw new IllegalStateException("Invalid protocol version: " + protoVer); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageWriter.java b/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageWriter.java index 51cea174e9a1d..bb88ffc851669 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageWriter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/direct/DirectMessageWriter.java @@ -27,6 +27,8 @@ import org.apache.ignite.internal.direct.stream.DirectByteBufferStream; import org.apache.ignite.internal.direct.stream.v1.DirectByteBufferStreamImplV1; import 
org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2; +import org.apache.ignite.internal.direct.stream.v3.DirectByteBufferStreamImplV3; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteOutClosure; @@ -44,6 +46,10 @@ public class DirectMessageWriter implements MessageWriter { @GridToStringInclude private final DirectMessageState state; + /** Protocol version. */ + @GridToStringInclude + private final byte protoVer; + /** * @param protoVer Protocol version. */ @@ -53,6 +59,8 @@ public DirectMessageWriter(final byte protoVer) { return new StateItem(protoVer); } }); + + this.protoVer = protoVer; } /** {@inheritDoc} */ @@ -272,6 +280,19 @@ public DirectMessageWriter(final byte protoVer) { return stream.lastFinished(); } + /** {@inheritDoc} */ + @Override public boolean writeAffinityTopologyVersion(String name, AffinityTopologyVersion val) { + if (protoVer >= 3) { + DirectByteBufferStream stream = state.item().stream; + + stream.writeAffinityTopologyVersion(val); + + return stream.lastFinished(); + } + + return writeMessage(name, val); + } + /** {@inheritDoc} */ @Override public boolean writeMessage(String name, @Nullable Message msg) { DirectByteBufferStream stream = state.item().stream; @@ -376,6 +397,11 @@ public StateItem(byte protoVer) { break; + case 3: + stream = new DirectByteBufferStreamImplV3(null); + + break; + default: throw new IllegalStateException("Invalid protocol version: " + protoVer); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/DirectByteBufferStream.java b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/DirectByteBufferStream.java index 204e6b034530b..ae5502eb75f8e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/DirectByteBufferStream.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/DirectByteBufferStream.java @@ -22,6 +22,7 @@ import java.util.Collection; import java.util.Map; import java.util.UUID; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType; @@ -160,6 +161,11 @@ public interface DirectByteBufferStream { */ public void writeIgniteUuid(IgniteUuid val); + /** + * @param val Value. + */ + public void writeAffinityTopologyVersion(AffinityTopologyVersion val); + /** * @param msg Message. * @param writer Writer. @@ -289,6 +295,11 @@ public void writeMap(Map map, MessageCollectionItemType keyType, Me */ public IgniteUuid readIgniteUuid(); + /** + * @return Value. + */ + public AffinityTopologyVersion readAffinityTopologyVersion(); + /** * @param reader Reader. * @return Message. 
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v1/DirectByteBufferStreamImplV1.java b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v1/DirectByteBufferStreamImplV1.java index c78c47914fdfe..118e1f1d0cbe7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v1/DirectByteBufferStreamImplV1.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v1/DirectByteBufferStreamImplV1.java @@ -27,6 +27,7 @@ import java.util.NoSuchElementException; import java.util.UUID; import org.apache.ignite.internal.direct.stream.DirectByteBufferStream; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.typedef.internal.S; @@ -494,6 +495,11 @@ public DirectByteBufferStreamImplV1(MessageFactory msgFactory) { writeByteArray(val != null ? U.igniteUuidToBytes(val) : null); } + /** {@inheritDoc} */ + @Override public void writeAffinityTopologyVersion(AffinityTopologyVersion val) { + throw new UnsupportedOperationException("Not implemented"); + } + /** {@inheritDoc} */ @Override public void writeMessage(Message msg, MessageWriter writer) { if (msg != null) { @@ -811,6 +817,11 @@ public DirectByteBufferStreamImplV1(MessageFactory msgFactory) { return arr != null ? 
U.bytesToIgniteUuid(arr, 0) : null; } + /** {@inheritDoc} */ + @Override public AffinityTopologyVersion readAffinityTopologyVersion() { + throw new UnsupportedOperationException("Not implemented"); + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public T readMessage(MessageReader reader) { @@ -1212,6 +1223,7 @@ private void write(MessageCollectionItemType type, Object val, MessageWriter wri break; + case AFFINITY_TOPOLOGY_VERSION: case MSG: try { if (val != null) @@ -1298,6 +1310,7 @@ private Object read(MessageCollectionItemType type, MessageReader reader) { case IGNITE_UUID: return readIgniteUuid(); + case AFFINITY_TOPOLOGY_VERSION: case MSG: return readMessage(reader); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2.java b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2.java index e338bc0187d46..fd93cfb81f832 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2.java @@ -29,6 +29,7 @@ import java.util.UUID; import org.apache.ignite.IgniteException; import org.apache.ignite.internal.direct.stream.DirectByteBufferStream; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.typedef.internal.S; @@ -298,7 +299,7 @@ public class DirectByteBufferStreamImplV2 implements DirectByteBufferStream { private long uuidLocId; /** */ - private boolean lastFinished; + protected boolean lastFinished; /** * @param msgFactory Message factory. 
@@ -657,6 +658,11 @@ public DirectByteBufferStreamImplV2(MessageFactory msgFactory) { } } + /** {@inheritDoc} */ + @Override public void writeAffinityTopologyVersion(AffinityTopologyVersion val) { + throw new UnsupportedOperationException("Not implemented"); + } + /** {@inheritDoc} */ @Override public void writeMessage(Message msg, MessageWriter writer) { if (msg != null) { @@ -1152,6 +1158,11 @@ private void writeRandomAccessList(List list, MessageCollectionItemType i return val; } + /** {@inheritDoc} */ + @Override public AffinityTopologyVersion readAffinityTopologyVersion() { + throw new UnsupportedOperationException("Not implemented"); + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public T readMessage(MessageReader reader) { @@ -1587,7 +1598,7 @@ T readArrayLE(ArrayCreator creator, int typeSize, int lenShift, long off) * @param val Value. * @param writer Writer. */ - private void write(MessageCollectionItemType type, Object val, MessageWriter writer) { + protected void write(MessageCollectionItemType type, Object val, MessageWriter writer) { switch (type) { case BYTE: writeByte((Byte)val); @@ -1689,6 +1700,7 @@ private void write(MessageCollectionItemType type, Object val, MessageWriter wri break; + case AFFINITY_TOPOLOGY_VERSION: case MSG: try { if (val != null) @@ -1713,7 +1725,7 @@ private void write(MessageCollectionItemType type, Object val, MessageWriter wri * @param reader Reader. * @return Value. 
*/ - private Object read(MessageCollectionItemType type, MessageReader reader) { + protected Object read(MessageCollectionItemType type, MessageReader reader) { switch (type) { case BYTE: return readByte(); @@ -1775,6 +1787,7 @@ private Object read(MessageCollectionItemType type, MessageReader reader) { case IGNITE_UUID: return readIgniteUuid(); + case AFFINITY_TOPOLOGY_VERSION: case MSG: return readMessage(reader); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v3/DirectByteBufferStreamImplV3.java b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v3/DirectByteBufferStreamImplV3.java new file mode 100644 index 0000000000000..89043ebcbc0c5 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/direct/stream/v3/DirectByteBufferStreamImplV3.java @@ -0,0 +1,298 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
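The DirectByteBufferStreamImplV3 class introduced below writes an AffinityTopologyVersion resumably: a small state counter records which field was last written, and `lastFinished` tells the caller whether the buffer filled up mid-value so the write can be retried with the same state. A minimal sketch of that pattern (the class name and direct ByteBuffer usage here are illustrative, not Ignite API):

```java
import java.nio.ByteBuffer;

/**
 * Sketch of the resumable-write pattern used by DirectByteBufferStreamImplV3:
 * each field write may fail when the buffer is full, and a state counter
 * remembers where to resume on the next call with a fresh buffer.
 */
public class ResumableVersionWriter {
    /** Which field to write next: 0 = minor version (int), 1 = major version (long). */
    private int state;

    /** True if the last {@link #write} call completed the whole value. */
    public boolean lastFinished;

    /** Tries to write the (minor, major) pair, resuming from the saved state. */
    public void write(ByteBuffer buf, int minor, long major) {
        lastFinished = false;

        switch (state) {
            case 0:
                if (buf.remaining() < 4)
                    return; // Buffer full: stay in state 0, caller retries.

                buf.putInt(minor);

                state++;

                // Fall through, as in the original switch.
            case 1:
                if (buf.remaining() < 8)
                    return; // Resume at the long on the next call.

                buf.putLong(major);

                state = 0;
        }

        lastFinished = true;
    }
}
```

As in the patch, the fall-through between cases is intentional: when space allows, both fields go out in one call.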
+ */ + +package org.apache.ignite.internal.direct.stream.v3; + +import java.util.BitSet; +import java.util.UUID; +import org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType; +import org.apache.ignite.plugin.extensions.communication.MessageFactory; +import org.apache.ignite.plugin.extensions.communication.MessageReader; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; + +/** + * + */ +public class DirectByteBufferStreamImplV3 extends DirectByteBufferStreamImplV2 { + /** */ + private byte topVerState; + + /** */ + private long topVerMajor; + + /** */ + private int topVerMinor; + + /** + * @param msgFactory Message factory. + */ + public DirectByteBufferStreamImplV3(MessageFactory msgFactory) { + super(msgFactory); + } + + /** {@inheritDoc} */ + @Override public void writeAffinityTopologyVersion(AffinityTopologyVersion val) { + if (val != null) { + switch (topVerState) { + case 0: + writeInt(val.minorTopologyVersion()); + + if (!lastFinished) + return; + + topVerState++; + + case 1: + writeLong(val.topologyVersion()); + + if (!lastFinished) + return; + + topVerState = 0; + } + } + else + writeInt(-1); + } + + /** {@inheritDoc} */ + @Override public AffinityTopologyVersion readAffinityTopologyVersion() { + switch (topVerState) { + case 0: + topVerMinor = readInt(); + + if (!lastFinished || topVerMinor == -1) + return null; + + topVerState++; + + case 1: + topVerMajor = readLong(); + + if (!lastFinished) + return null; + + topVerState = 0; + } + + return new AffinityTopologyVersion(topVerMajor, topVerMinor); + } + + /** {@inheritDoc} */ + @Override protected void write(MessageCollectionItemType type, Object val, MessageWriter writer) { + switch (type) { + 
case BYTE: + writeByte((Byte)val); + + break; + + case SHORT: + writeShort((Short)val); + + break; + + case INT: + writeInt((Integer)val); + + break; + + case LONG: + writeLong((Long)val); + + break; + + case FLOAT: + writeFloat((Float)val); + + break; + + case DOUBLE: + writeDouble((Double)val); + + break; + + case CHAR: + writeChar((Character)val); + + break; + + case BOOLEAN: + writeBoolean((Boolean)val); + + break; + + case BYTE_ARR: + writeByteArray((byte[])val); + + break; + + case SHORT_ARR: + writeShortArray((short[])val); + + break; + + case INT_ARR: + writeIntArray((int[])val); + + break; + + case LONG_ARR: + writeLongArray((long[])val); + + break; + + case FLOAT_ARR: + writeFloatArray((float[])val); + + break; + + case DOUBLE_ARR: + writeDoubleArray((double[])val); + + break; + + case CHAR_ARR: + writeCharArray((char[])val); + + break; + + case BOOLEAN_ARR: + writeBooleanArray((boolean[])val); + + break; + + case STRING: + writeString((String)val); + + break; + + case BIT_SET: + writeBitSet((BitSet)val); + + break; + + case UUID: + writeUuid((UUID)val); + + break; + + case IGNITE_UUID: + writeIgniteUuid((IgniteUuid)val); + + break; + + case AFFINITY_TOPOLOGY_VERSION: + writeAffinityTopologyVersion((AffinityTopologyVersion)val); + + break; + case MSG: + try { + if (val != null) + writer.beforeInnerMessageWrite(); + + writeMessage((Message)val, writer); + } + finally { + if (val != null) + writer.afterInnerMessageWrite(lastFinished); + } + + break; + + default: + throw new IllegalArgumentException("Unknown type: " + type); + } + } + + /** {@inheritDoc} */ + @Override protected Object read(MessageCollectionItemType type, MessageReader reader) { + switch (type) { + case BYTE: + return readByte(); + + case SHORT: + return readShort(); + + case INT: + return readInt(); + + case LONG: + return readLong(); + + case FLOAT: + return readFloat(); + + case DOUBLE: + return readDouble(); + + case CHAR: + return readChar(); + + case BOOLEAN: + return readBoolean(); + 
+ case BYTE_ARR: + return readByteArray(); + + case SHORT_ARR: + return readShortArray(); + + case INT_ARR: + return readIntArray(); + + case LONG_ARR: + return readLongArray(); + + case FLOAT_ARR: + return readFloatArray(); + + case DOUBLE_ARR: + return readDoubleArray(); + + case CHAR_ARR: + return readCharArray(); + + case BOOLEAN_ARR: + return readBooleanArray(); + + case STRING: + return readString(); + + case BIT_SET: + return readBitSet(); + + case UUID: + return readUuid(); + + case IGNITE_UUID: + return readIgniteUuid(); + + case AFFINITY_TOPOLOGY_VERSION: + return readAffinityTopologyVersion(); + + case MSG: + return readMessage(reader); + + default: + throw new IllegalArgumentException("Unknown type: " + type); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObject.java b/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObject.java new file mode 100644 index 0000000000000..3441742dfabd6 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObject.java @@ -0,0 +1,130 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.dto; + +import java.io.Externalizable; +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import java.util.ArrayList; +import java.util.Collection; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Set; +import org.jetbrains.annotations.Nullable; + +/** + * Base class for data transfer objects. + */ +public abstract class IgniteDataTransferObject implements Externalizable { + /** */ + private static final long serialVersionUID = 0L; + + /** Magic number to detect correct transfer objects. */ + private static final int MAGIC = 0x42BEEF00; + + /** Version 1. */ + protected static final byte V1 = 1; + + /** Version 2. */ + protected static final byte V2 = 2; + + /** Version 3. */ + protected static final byte V3 = 3; + + /** Version 4. */ + protected static final byte V4 = 4; + + /** Version 5. */ + protected static final byte V5 = 5; + + /** + * @param col Source collection. + * @param Collection type. + * @return List based on passed collection. + */ + @Nullable protected static List toList(Collection col) { + if (col != null) + return new ArrayList<>(col); + + return null; + } + + /** + * @param col Source collection. + * @param Collection type. + * @return List based on passed collection. + */ + @Nullable protected static Set toSet(Collection col) { + if (col != null) + return new LinkedHashSet<>(col); + + return null; + } + + /** + * @return Transfer object version. + */ + public byte getProtocolVersion() { + return V1; + } + + /** + * Save object's specific data content. + * + * @param out Output object to write data content. + * @throws IOException If I/O errors occur. 
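IgniteDataTransferObject guards its serialized form with the MAGIC constant plus a one-byte protocol version packed into a single int header. The packing and validation can be sketched as follows (the constant value is copied from the patch; the helper class is illustrative):

```java
import java.io.IOException;

/**
 * Sketch of the header scheme in IgniteDataTransferObject: the upper bits
 * carry a magic constant, the low byte the protocol version.
 */
public class DtoHeader {
    /** Magic number to detect correct transfer objects (value from the patch). */
    public static final int MAGIC = 0x42BEEF00;

    /** Packs the protocol version into the magic header. */
    public static int encode(byte protoVer) {
        return MAGIC + protoVer;
    }

    /** Validates the magic bits and extracts the version byte. */
    public static byte decode(int hdr) throws IOException {
        if ((hdr & MAGIC) != MAGIC)
            throw new IOException("Unexpected header: " + Integer.toHexString(hdr));

        return (byte)(hdr & 0xFF);
    }
}
```

Because the version rides in the low byte, a reader can dispatch on `decode(hdr)` exactly as `readExternalData(protoVer, in)` does, while corrupted or foreign streams fail fast on the magic check.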
+ */ + protected abstract void writeExternalData(ObjectOutput out) throws IOException; + + /** {@inheritDoc} */ + @Override public void writeExternal(ObjectOutput out) throws IOException { + int hdr = MAGIC + getProtocolVersion(); + + out.writeInt(hdr); + + try (IgniteDataTransferObjectOutput dtout = new IgniteDataTransferObjectOutput(out)) { + writeExternalData(dtout); + } + } + + /** + * Load object's specific data content. + * + * @param protoVer Input object version. + * @param in Input object to load data content. + * @throws IOException If I/O errors occur. + * @throws ClassNotFoundException If the class for an object being restored cannot be found. + */ + protected abstract void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException; + + /** {@inheritDoc} */ + @Override public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { + int hdr = in.readInt(); + + if ((hdr & MAGIC) != MAGIC) + throw new IOException("Unexpected IgniteDataTransferObject header " + + "[actual=" + Integer.toHexString(hdr) + ", expected=" + Integer.toHexString(MAGIC) + "]"); + + byte ver = (byte)(hdr & 0xFF); + + try (IgniteDataTransferObjectInput dtin = new IgniteDataTransferObjectInput(in)) { + readExternalData(ver, dtin); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObjectInput.java b/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObjectInput.java new file mode 100644 index 0000000000000..c12287520656a --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObjectInput.java @@ -0,0 +1,156 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.dto; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectInputStream; +import org.apache.ignite.internal.util.io.GridByteArrayInputStream; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.jetbrains.annotations.NotNull; + +/** + * Wrapper for object input. + */ +public class IgniteDataTransferObjectInput implements ObjectInput { + /** */ + private final ObjectInputStream ois; + + /** + * @param in Target input. + * @throws IOException If an I/O error occurs. 
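IgniteDataTransferObjectInput buffers the entire length-prefixed payload before deserializing it, so the nested object stream can never read past its frame. A rough equivalent using only JDK streams (ByteArrayOutputStream/DataOutputStream stand in here for Ignite's Grid byte-array streams and the U.readByteArray/U.writeByteArray helpers):

```java
import java.io.*;

/** Sketch of the length-prefixed framing used by the transfer-object streams. */
public class FramedObjectIo {
    /** Serializes obj into a frame: a 4-byte length followed by the object bytes. */
    public static byte[] writeFrame(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }

        byte[] body = bos.toByteArray();

        ByteArrayOutputStream frame = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(frame);

        dos.writeInt(body.length);
        dos.write(body);

        return frame.toByteArray();
    }

    /** Reads one frame back: the length prefix bounds the nested object stream. */
    public static Object readFrame(byte[] frame) throws IOException, ClassNotFoundException {
        DataInputStream dis = new DataInputStream(new ByteArrayInputStream(frame));

        byte[] body = new byte[dis.readInt()];
        dis.readFully(body);

        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(body))) {
            return ois.readObject();
        }
    }
}
```

The design choice mirrors the patch: the outer stream only ever sees an opaque byte array, so versioned payloads can grow or shrink without desynchronizing readers.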
+ */ + public IgniteDataTransferObjectInput(ObjectInput in) throws IOException { + byte[] buf = U.readByteArray(in); + + /* */ + GridByteArrayInputStream bis = new GridByteArrayInputStream(buf); + ois = new ObjectInputStream(bis); + } + + + /** {@inheritDoc} */ + @Override public Object readObject() throws ClassNotFoundException, IOException { + return ois.readObject(); + } + + /** {@inheritDoc} */ + @Override public int read() throws IOException { + return ois.read(); + } + + /** {@inheritDoc} */ + @Override public int read(byte[] b) throws IOException { + return ois.read(b); + } + + /** {@inheritDoc} */ + @Override public int read(byte[] b, int off, int len) throws IOException { + return ois.read(b, off, len); + } + + /** {@inheritDoc} */ + @Override public long skip(long n) throws IOException { + return ois.skip(n); + } + + /** {@inheritDoc} */ + @Override public int available() throws IOException { + return ois.available(); + } + + /** {@inheritDoc} */ + @Override public void close() throws IOException { + ois.close(); + } + + /** {@inheritDoc} */ + @Override public void readFully(@NotNull byte[] b) throws IOException { + ois.readFully(b); + } + + /** {@inheritDoc} */ + @Override public void readFully(@NotNull byte[] b, int off, int len) throws IOException { + ois.readFully(b, off, len); + } + + /** {@inheritDoc} */ + @Override public int skipBytes(int n) throws IOException { + return ois.skipBytes(n); + } + + /** {@inheritDoc} */ + @Override public boolean readBoolean() throws IOException { + return ois.readBoolean(); + } + + /** {@inheritDoc} */ + @Override public byte readByte() throws IOException { + return ois.readByte(); + } + + /** {@inheritDoc} */ + @Override public int readUnsignedByte() throws IOException { + return ois.readUnsignedByte(); + } + + /** {@inheritDoc} */ + @Override public short readShort() throws IOException { + return ois.readShort(); + } + + /** {@inheritDoc} */ + @Override public int readUnsignedShort() throws IOException { + return 
ois.readUnsignedShort(); + } + + /** {@inheritDoc} */ + @Override public char readChar() throws IOException { + return ois.readChar(); + } + + /** {@inheritDoc} */ + @Override public int readInt() throws IOException { + return ois.readInt(); + } + + /** {@inheritDoc} */ + @Override public long readLong() throws IOException { + return ois.readLong(); + } + + /** {@inheritDoc} */ + @Override public float readFloat() throws IOException { + return ois.readFloat(); + } + + /** {@inheritDoc} */ + @Override public double readDouble() throws IOException { + return ois.readDouble(); + } + + /** {@inheritDoc} */ + @Override public String readLine() throws IOException { + return ois.readLine(); + } + + /** {@inheritDoc} */ + @NotNull @Override public String readUTF() throws IOException { + return ois.readUTF(); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObjectOutput.java b/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObjectOutput.java new file mode 100644 index 0000000000000..db4933cea29c0 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/dto/IgniteDataTransferObjectOutput.java @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.dto; + +import java.io.IOException; +import java.io.ObjectOutput; +import java.io.ObjectOutputStream; +import org.apache.ignite.internal.util.io.GridByteArrayOutputStream; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.jetbrains.annotations.NotNull; + +/** + * Wrapper for object output. + */ +public class IgniteDataTransferObjectOutput implements ObjectOutput { + /** */ + private final ObjectOutput out; + + /** */ + private final GridByteArrayOutputStream bos; + + /** */ + private final ObjectOutputStream oos; + + /** + * Constructor. + * + * @param out Target stream. + * @throws IOException If an I/O error occurs. + */ + public IgniteDataTransferObjectOutput(ObjectOutput out) throws IOException { + this.out = out; + + bos = new GridByteArrayOutputStream(); + oos = new ObjectOutputStream(bos); + } + + /** {@inheritDoc} */ + @Override public void writeObject(Object obj) throws IOException { + oos.writeObject(obj); + } + + /** {@inheritDoc} */ + @Override public void write(int b) throws IOException { + oos.write(b); + } + + /** {@inheritDoc} */ + @Override public void write(byte[] b) throws IOException { + oos.write(b); + } + + /** {@inheritDoc} */ + @Override public void write(byte[] b, int off, int len) throws IOException { + oos.write(b, off, len); + } + + /** {@inheritDoc} */ + @Override public void writeBoolean(boolean v) throws IOException { + oos.writeBoolean(v); + } + + /** {@inheritDoc} */ + @Override public void writeByte(int v) throws IOException { + oos.writeByte(v); + } + + /** {@inheritDoc} */ + @Override public void writeShort(int v) throws IOException { + oos.writeShort(v); + } + + /** {@inheritDoc} */ + @Override public void writeChar(int v) throws IOException { + oos.writeChar(v); + } + + /** {@inheritDoc} */ + @Override public void writeInt(int v) throws IOException { + 
oos.writeInt(v); + } + + /** {@inheritDoc} */ + @Override public void writeLong(long v) throws IOException { + oos.writeLong(v); + } + + /** {@inheritDoc} */ + @Override public void writeFloat(float v) throws IOException { + oos.writeFloat(v); + } + + /** {@inheritDoc} */ + @Override public void writeDouble(double v) throws IOException { + oos.writeDouble(v); + } + + /** {@inheritDoc} */ + @Override public void writeBytes(@NotNull String s) throws IOException { + oos.writeBytes(s); + } + + /** {@inheritDoc} */ + @Override public void writeChars(@NotNull String s) throws IOException { + oos.writeChars(s); + } + + /** {@inheritDoc} */ + @Override public void writeUTF(@NotNull String s) throws IOException { + oos.writeUTF(s); + } + + /** {@inheritDoc} */ + @Override public void flush() throws IOException { + oos.flush(); + } + + /** {@inheritDoc} */ + @Override public void close() throws IOException { + oos.flush(); + + U.writeByteArray(out, bos.internalArray(), bos.size()); + + oos.close(); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinConnection.java b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinConnection.java index 323a410851257..ee17716b48f62 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinConnection.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinConnection.java @@ -3,11 +3,12 @@ * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at
+ * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. @@ -17,6 +18,7 @@ package org.apache.ignite.internal.jdbc.thin; +import java.net.SocketTimeoutException; import java.sql.Array; import java.sql.BatchUpdateException; import java.sql.Blob; @@ -29,26 +31,37 @@ import java.sql.SQLClientInfoException; import java.sql.SQLException; import java.sql.SQLFeatureNotSupportedException; +import java.sql.SQLPermission; +import java.sql.SQLTimeoutException; import java.sql.SQLWarning; import java.sql.SQLXML; import java.sql.Savepoint; import java.sql.Statement; import java.sql.Struct; import java.util.ArrayList; +import java.util.Collections; +import java.util.IdentityHashMap; import java.util.List; import java.util.Map; import java.util.Properties; +import java.util.Timer; +import java.util.TimerTask; +import java.util.Set; import java.util.concurrent.Executor; import java.util.concurrent.Semaphore; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.logging.Level; import java.util.logging.Logger; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.query.QueryCancelledException; import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.odbc.ClientListenerResponse; import org.apache.ignite.internal.processors.odbc.SqlStateCode; +import org.apache.ignite.internal.processors.odbc.jdbc.JdbcBulkLoadBatchRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcOrderedBatchExecuteRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcOrderedBatchExecuteResult; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQuery; +import
org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryCancelRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcResponse; @@ -75,6 +89,14 @@ public class JdbcThinConnection implements Connection { /** Logger. */ private static final Logger LOG = Logger.getLogger(JdbcThinConnection.class.getName()); + /** Request timeout period. */ + private static final int REQUEST_TIMEOUT_PERIOD = 1_000; + + /** Zero timeout as query timeout means no timeout. */ + static final int NO_TIMEOUT = 0; + + private static final String SET_NETWORK_TIMEOUT_PERM = "setNetworkTimeout"; + /** Statements modification mutex. */ final private Object stmtsMux = new Object(); @@ -82,7 +104,7 @@ public class JdbcThinConnection implements Connection { private String schema; /** Closed flag. */ - private boolean closed; + private volatile boolean closed; /** Current transaction isolation. */ private int txIsolation; @@ -99,9 +121,6 @@ public class JdbcThinConnection implements Connection { /** Current transaction holdability. */ private int holdability; - /** Timeout. */ - private int timeout; - /** Ignite endpoint. */ private JdbcThinTcpIo cliIo; @@ -115,7 +134,10 @@ public class JdbcThinConnection implements Connection { private boolean connected; /** Tracked statements to close on disconnect. */ - private final ArrayList stmts = new ArrayList<>(); + private final Set stmts = Collections.newSetFromMap(new IdentityHashMap<>()); + + /** Query timeout timer */ + private final Timer timer; /** * Creates new connection. @@ -134,6 +156,8 @@ public JdbcThinConnection(ConnectionProperties connProps) throws SQLException { cliIo = new JdbcThinTcpIo(connProps); + timer = new Timer("query-timeout-timer"); + ensureConnected(); } @@ -174,9 +198,10 @@ boolean isStream() { /** * @param sql Statement. * @param cmd Parsed form of {@code sql}. 
+ * @param stmt Jdbc thin statement. * @throws SQLException if failed. */ - void executeNative(String sql, SqlCommand cmd) throws SQLException { + void executeNative(String sql, SqlCommand cmd, JdbcThinStatement stmt) throws SQLException { if (cmd instanceof SqlSetStreamingCommand) { SqlSetStreamingCommand cmd0 = (SqlSetStreamingCommand)cmd; @@ -196,10 +221,12 @@ void executeNative(String sql, SqlCommand cmd) throws SQLException { + cliIo.igniteVersion() + ']', SqlStateCode.INTERNAL_ERROR); } + streamState = new StreamState((SqlSetStreamingCommand)cmd); + sendRequest(new JdbcQueryExecuteRequest(JdbcStatementType.ANY_STATEMENT_TYPE, - schema, 1, 1, autoCommit, sql, null)); + schema, 1, 1, autoCommit, sql, null), stmt); - streamState = new StreamState((SqlSetStreamingCommand)cmd); + streamState.start(); } } else @@ -238,9 +265,6 @@ void addBatch(String sql, List args) throws SQLException { JdbcThinStatement stmt = new JdbcThinStatement(this, resSetHoldability, schema); - if (timeout > 0) - stmt.timeout(timeout); - synchronized (stmtsMux) { stmts.add(stmt); } @@ -271,9 +295,6 @@ void addBatch(String sql, List args) throws SQLException { JdbcThinPreparedStatement stmt = new JdbcThinPreparedStatement(this, sql, resSetHoldability, schema); - if (timeout > 0) - stmt.timeout(timeout); - synchronized (stmtsMux) { stmts.add(stmt); } @@ -381,9 +402,17 @@ private void doCommit() throws SQLException { streamState = null; } + synchronized (stmtsMux) { + stmts.clear(); + } + + SQLException err = null; + closed = true; cliIo.close(); + + timer.cancel(); } /** {@inheritDoc} */ @@ -693,20 +722,22 @@ private void doCommit() throws SQLException { @Override public void setNetworkTimeout(Executor executor, int ms) throws SQLException { ensureNotClosed(); - if (executor == null) - throw new SQLException("Executor cannot be null."); - if (ms < 0) throw new SQLException("Network timeout cannot be negative."); - timeout = ms; + SecurityManager secMgr = System.getSecurityManager(); + + if 
(secMgr != null) + secMgr.checkPermission(new SQLPermission(SET_NETWORK_TIMEOUT_PERM)); + + cliIo.timeout(ms); } /** {@inheritDoc} */ @Override public int getNetworkTimeout() throws SQLException { ensureNotClosed(); - return timeout; + return cliIo.timeout(); } /** @@ -741,13 +772,40 @@ boolean autoCloseServerCursor() { */ @SuppressWarnings("unchecked") R sendRequest(JdbcRequest req) throws SQLException { + return sendRequest(req, null); + } + + /** + * Send request for execution via {@link #cliIo}. + * @param req Request. + * @param stmt Jdbc thin statement. + * @return Server response. + * @throws SQLException On any error. + */ + @SuppressWarnings("unchecked") + R sendRequest(JdbcRequest req, JdbcThinStatement stmt) throws SQLException { ensureConnected(); + RequestTimeoutTimerTask reqTimeoutTimerTask = null; + try { - JdbcResponse res = cliIo.sendRequest(req); + if (stmt != null && stmt.requestTimeout() != NO_TIMEOUT) { + reqTimeoutTimerTask = new RequestTimeoutTimerTask( + req instanceof JdbcBulkLoadBatchRequest ? 
stmt.currentRequestId() : req.requestId(), + stmt.requestTimeout()); + + timer.schedule(reqTimeoutTimerTask, 0, REQUEST_TIMEOUT_PERIOD); + } - if (res.status() != ClientListenerResponse.STATUS_SUCCESS) - throw new SQLException(res.error(), IgniteQueryErrorCode.codeToSqlState(res.status())); + JdbcResponse res = cliIo.sendRequest(req, stmt); + + if (res.status() == IgniteQueryErrorCode.QUERY_CANCELED && stmt != null && + stmt.requestTimeout() != NO_TIMEOUT && reqTimeoutTimerTask != null && reqTimeoutTimerTask.expired.get()) { + throw new SQLTimeoutException(QueryCancelledException.ERR_MSG, SqlStateCode.QUERY_CANCELLED, + IgniteQueryErrorCode.QUERY_CANCELED); + } + else if (res.status() != ClientListenerResponse.STATUS_SUCCESS) + throw new SQLException(res.error(), IgniteQueryErrorCode.codeToSqlState(res.status()), res.status()); return (R)res.response(); } @@ -757,6 +815,30 @@ R sendRequest(JdbcRequest req) throws SQLException { catch (Exception e) { onDisconnect(); + if (e instanceof SocketTimeoutException) + throw new SQLException("Connection timed out.", SqlStateCode.CONNECTION_FAILURE, e); + else + throw new SQLException("Failed to communicate with Ignite cluster.", SqlStateCode.CONNECTION_FAILURE, e); + } + finally { + if (stmt != null && stmt.requestTimeout() != NO_TIMEOUT && reqTimeoutTimerTask != null) + reqTimeoutTimerTask.cancel(); + } + } + + /** + * Send request for execution via {@link #cliIo}. Response is waited at the separate thread + * (see {@link StreamState#asyncRespReaderThread}). + * @param req Request. + * @throws SQLException On any error. 
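The request-timeout handling above arms a TimerTask at REQUEST_TIMEOUT_PERIOD intervals; each tick burns down the remaining budget, and once it is spent the task flips an expired flag and fires a cancel request, which sendRequest later translates into an SQLTimeoutException. The countdown can be modeled in isolation as below (a simplified sketch of RequestTimeoutTimerTask; the Runnable stands in for sendQueryCancelRequest and the Timer itself):

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Simplified model of the periodic countdown in RequestTimeoutTimerTask. */
public class TimeoutCountdown {
    /** Remaining query timeout in milliseconds. */
    private int remaining;

    /** Set once the timeout budget is exhausted. */
    public final AtomicBoolean expired = new AtomicBoolean(false);

    /** Fired once when the timeout expires (models the cancel request). */
    private final Runnable onExpire;

    public TimeoutCountdown(int timeoutMs, Runnable onExpire) {
        remaining = timeoutMs;
        this.onExpire = onExpire;
    }

    /**
     * One periodic tick; periodMs mirrors REQUEST_TIMEOUT_PERIOD.
     * Returns true while the task should stay scheduled.
     */
    public boolean tick(int periodMs) {
        if (expired.get())
            return false;

        if (remaining <= 0) {
            expired.set(true);

            onExpire.run();

            return false; // Models TimerTask.cancel().
        }

        remaining -= periodMs;

        return true;
    }
}
```

Checking the expired flag separately from the cancel matters: the connection uses it to distinguish a server-side QUERY_CANCELED caused by this timeout from one triggered by an explicit Statement.cancel().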
+ */ + void sendQueryCancelRequest(JdbcQueryCancelRequest req) throws SQLException { + ensureConnected(); + + try { + cliIo.sendCancelRequest(req); + } + catch (Exception e) { throw new SQLException("Failed to communicate with Ignite cluster.", SqlStateCode.CONNECTION_FAILURE, e); } } @@ -779,7 +861,10 @@ private void sendRequestNotWaitResponse(JdbcOrderedBatchExecuteRequest req) thro catch (Exception e) { onDisconnect(); - throw new SQLException("Failed to communicate with Ignite cluster.", SqlStateCode.CONNECTION_FAILURE, e); + if (e instanceof SocketTimeoutException) + throw new SQLException("Connection timed out.", SqlStateCode.CONNECTION_FAILURE, e); + else + throw new SQLException("Failed to communicate with Ignite cluster.", SqlStateCode.CONNECTION_FAILURE, e); } } @@ -813,6 +898,8 @@ private void onDisconnect() { stmts.clear(); } + + timer.cancel(); } /** @@ -835,6 +922,15 @@ private static String normalizeSchema(String schemaName) { return res; } + /** + * @param stmt Statement to close. + */ + void closeStatement(JdbcThinStatement stmt) { + synchronized (stmtsMux) { + stmts.remove(stmt); + } + } + /** * Streamer state and */ @@ -842,9 +938,6 @@ private class StreamState { /** Maximum requests count that may be sent before any responses. */ private static final int MAX_REQUESTS_BEFORE_RESPONSE = 10; - /** Wait timeout. */ - private static final long WAIT_TIMEOUT = 1; - /** Batch size for streaming. */ private int streamBatchSize; @@ -879,7 +972,12 @@ private class StreamState { streamBatchSize = cmd.batchSize(); asyncRespReaderThread = new Thread(this::readResponses); + } + /** + * Start reader. 
+ */ + void start() { asyncRespReaderThread.start(); } @@ -964,6 +1062,8 @@ void checkError() throws SQLException { else { onDisconnect(); + if (err0 instanceof SocketTimeoutException) + throw new SQLException("Connection timed out.", SqlStateCode.CONNECTION_FAILURE, err0); throw new SQLException("Failed to communicate with Ignite cluster on JDBC streaming.", SqlStateCode.CONNECTION_FAILURE, err0); } @@ -1033,4 +1133,59 @@ void readResponses () { } } } + + /** + * @return True if query cancellation supported, false otherwise. + */ + boolean isQueryCancellationSupported() { + return cliIo.isQueryCancellationSupported(); + } + + /** + * Request Timeout Timer Task + */ + private class RequestTimeoutTimerTask extends TimerTask { + + /** Request id. */ + private long reqId; + + /** Remaining query timeout. */ + private int remainingQryTimeout; + + /** Flag that shows whether TimerTask was expired or not. */ + private AtomicBoolean expired; + + /** + * @param reqId Request Id to cancel in case of timeout + * @param initReqTimeout Initial request timeout + */ + RequestTimeoutTimerTask(long reqId, int initReqTimeout) { + this.reqId = reqId; + + remainingQryTimeout = initReqTimeout; + + expired = new AtomicBoolean(false); + } + + /** {@inheritDoc} */ + @Override public void run() { + try { + if (remainingQryTimeout <= 0) { + expired.set(true); + + sendQueryCancelRequest(new JdbcQueryCancelRequest(reqId)); + + cancel(); + } + + remainingQryTimeout -= REQUEST_TIMEOUT_PERIOD; + } + catch (SQLException e) { + LOG.log(Level.WARNING, + "Request timeout processing failure: unable to cancel request [reqId=" + reqId + ']', e); + + cancel(); + } + } + } } \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinResultSet.java b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinResultSet.java index 29693e458c1cb..794717e3489e5 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinResultSet.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinResultSet.java @@ -3,11 +3,12 @@ * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. @@ -80,8 +81,8 @@ public class JdbcThinResultSet implements ResultSet { /** Statement. */ private final JdbcThinStatement stmt; - /** Query ID. */ - private final Long qryId; + /** Cursor ID. */ + private final Long cursorId; /** Metadata. */ private List<JdbcColumnMeta> meta; @@ -140,7 +141,7 @@ public class JdbcThinResultSet implements ResultSet { JdbcThinResultSet(List<List<Object>> fields, List<JdbcColumnMeta> meta) { stmt = null; fetchSize = 0; - qryId = -1L; + cursorId = -1L; finished = true; isQuery = true; updCnt = -1; @@ -160,7 +161,7 @@ public class JdbcThinResultSet implements ResultSet { * Creates new result set. * * @param stmt Statement. - * @param qryId Query ID. + * @param cursorId Cursor ID. * @param fetchSize Fetch size. * @param finished Finished flag. * @param rows Rows. @@ -170,13 +171,13 @@ public class JdbcThinResultSet implements ResultSet { * @param closeStmt Close statement on the result set close.
*/ @SuppressWarnings("OverlyStrongTypeCast") - JdbcThinResultSet(JdbcThinStatement stmt, long qryId, int fetchSize, boolean finished, + JdbcThinResultSet(JdbcThinStatement stmt, long cursorId, int fetchSize, boolean finished, List> rows, boolean isQuery, boolean autoClose, long updCnt, boolean closeStmt) { assert stmt != null; assert fetchSize > 0; this.stmt = stmt; - this.qryId = qryId; + this.cursorId = cursorId; this.fetchSize = fetchSize; this.finished = finished; this.isQuery = isQuery; @@ -196,10 +197,10 @@ public class JdbcThinResultSet implements ResultSet { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public boolean next() throws SQLException { - ensureNotClosed(); + ensureAlive(); if ((rowsIter == null || !rowsIter.hasNext()) && !finished) { - JdbcQueryFetchResult res = stmt.conn.sendRequest(new JdbcQueryFetchRequest(qryId, fetchSize)); + JdbcQueryFetchResult res = stmt.conn.sendRequest(new JdbcQueryFetchRequest(cursorId, fetchSize), stmt); rows = res.items(); finished = res.last(); @@ -242,8 +243,8 @@ void close0() throws SQLException { return; try { - if (!finished || (isQuery && !autoClose)) - stmt.conn.sendRequest(new JdbcQueryCloseRequest(qryId)); + if (!(stmt != null && stmt.isCancelled()) && (!finished || (isQuery && !autoClose))) + stmt.conn.sendRequest(new JdbcQueryCloseRequest(cursorId), stmt); } finally { closed = true; @@ -720,8 +721,11 @@ else if (cls == String.class) { @Override public ResultSetMetaData getMetaData() throws SQLException { ensureNotClosed(); - if (jdbcMeta == null) + if (jdbcMeta == null) { + ensureNotCancelled(); + jdbcMeta = new JdbcThinResultSetMetadata(meta()); + } return jdbcMeta; } @@ -1840,7 +1844,7 @@ else if (targetCls == URL.class) */ @SuppressWarnings("unchecked") private Object getValue(int colIdx) throws SQLException { - ensureNotClosed(); + ensureAlive(); ensureHasCurrentRow(); try { @@ -1865,6 +1869,27 @@ private void ensureNotClosed() throws SQLException { throw new SQLException("Result set is 
closed.", SqlStateCode.INVALID_CURSOR_STATE); } + /** + * Ensures that result set is not cancelled. + * + * @throws SQLException If result set is cancelled. + */ + private void ensureNotCancelled() throws SQLException { + if (stmt != null && stmt.isCancelled()) + throw new SQLException("The query was cancelled while executing.", SqlStateCode.QUERY_CANCELLED); + } + + /** + * Ensures that result set is not closed or cancelled. + * + * @throws SQLException If result set is closed or cancelled. + */ + private void ensureAlive() throws SQLException { + ensureNotClosed(); + + ensureNotCancelled(); + } + /** * Ensures that result set is positioned on a row. * @@ -1884,11 +1909,11 @@ private List meta() throws SQLException { throw new SQLException("Server cursor is already closed.", SqlStateCode.INVALID_CURSOR_STATE); if (!metaInit) { - JdbcQueryMetadataResult res = stmt.conn.sendRequest(new JdbcQueryMetadataRequest(qryId)); + JdbcQueryMetadataResult res = stmt.conn.sendRequest(new JdbcQueryMetadataRequest(cursorId), stmt); - meta = res.meta(); + meta = res.meta(); - metaInit = true; + metaInit = true; } return meta; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinStatement.java b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinStatement.java index d1605b0ddd0f3..f621670d90098 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinStatement.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinStatement.java @@ -25,6 +25,7 @@ import java.sql.ResultSet; import java.sql.SQLException; import java.sql.SQLFeatureNotSupportedException; +import java.sql.SQLTimeoutException; import java.sql.SQLWarning; import java.sql.Statement; import java.util.ArrayList; @@ -40,14 +41,13 @@ import org.apache.ignite.internal.processors.odbc.jdbc.JdbcBulkLoadAckResult; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcBulkLoadBatchRequest; import 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcQuery; +import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryCancelRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteMultipleStatementsResult; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteResult; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcResult; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcResultInfo; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcStatementType; -import org.apache.ignite.internal.processors.odbc.jdbc.JdbcBulkLoadBatchRequest; -import org.apache.ignite.internal.processors.odbc.jdbc.JdbcStatementType; import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.internal.sql.SqlKeyword; import org.apache.ignite.internal.sql.SqlParseException; @@ -68,13 +68,13 @@ public class JdbcThinStatement implements Statement { private static final int DFLT_PAGE_SIZE = SqlQuery.DFLT_PAGE_SIZE; /** JDBC Connection implementation. */ - protected JdbcThinConnection conn; + protected final JdbcThinConnection conn; /** Schema name. */ private final String schema; /** Closed flag. */ - private boolean closed; + private volatile boolean closed; /** Rows limit. */ private int maxRows; @@ -82,10 +82,13 @@ public class JdbcThinStatement implements Statement { /** Query timeout. */ private int timeout; + /** Request timeout. */ + private int reqTimeout; + /** Fetch size. */ private int pageSize = DFLT_PAGE_SIZE; - /** Result set holdability*/ + /** Result set holdability. */ private final int resHoldability; /** Batch size to keep track of number of items to return as fake update counters for executeBatch. */ @@ -98,11 +101,20 @@ public class JdbcThinStatement implements Statement { private boolean closeOnCompletion; /** Result sets. 
*/ - protected List resultSets; + protected volatile List resultSets; /** Current result index. */ protected int curRes; + /** Current request Id. */ + private long currReqId; + + /** Cancelled flag. */ + private volatile boolean cancelled; + + /** Cancellation mutex. */ + final Object cancellationMux = new Object(); + /** * Creates new statement. * @@ -187,7 +199,7 @@ protected void execute0(JdbcStatementType stmtType, String sql, List arg nativeCmd = tryParseNative(sql); if (nativeCmd != null) { - conn.executeNative(sql, nativeCmd); + conn.executeNative(sql, nativeCmd, this); resultSets = Collections.singletonList(resultSetForUpdate(0)); @@ -209,8 +221,10 @@ protected void execute0(JdbcStatementType stmtType, String sql, List arg return; } - JdbcResult res0 = conn.sendRequest(new JdbcQueryExecuteRequest(stmtType, schema, pageSize, - maxRows, conn.getAutoCommit(), sql, args == null ? null : args.toArray(new Object[args.size()]))); + JdbcQueryExecuteRequest req = new JdbcQueryExecuteRequest(stmtType, schema, pageSize, + maxRows, conn.getAutoCommit(), sql, args == null ? 
null : args.toArray(new Object[args.size()])); + + JdbcResult res0 = conn.sendRequest(req, this); assert res0 != null; @@ -220,7 +234,7 @@ protected void execute0(JdbcStatementType stmtType, String sql, List arg if (res0 instanceof JdbcQueryExecuteResult) { JdbcQueryExecuteResult res = (JdbcQueryExecuteResult)res0; - resultSets = Collections.singletonList(new JdbcThinResultSet(this, res.getQueryId(), pageSize, + resultSets = Collections.singletonList(new JdbcThinResultSet(this, res.cursorId(), pageSize, res.last(), res.items(), res.isQuery(), conn.autoCloseServerCursor(), res.updateCount(), closeOnCompletion)); } @@ -233,21 +247,19 @@ else if (res0 instanceof JdbcQueryExecuteMultipleStatementsResult) { boolean firstRes = true; - for(JdbcResultInfo rsInfo : resInfos) { + for (JdbcResultInfo rsInfo : resInfos) { if (!rsInfo.isQuery()) resultSets.add(resultSetForUpdate(rsInfo.updateCount())); else { if (firstRes) { firstRes = false; - resultSets.add(new JdbcThinResultSet(this, rsInfo.queryId(), pageSize, - res.isLast(), res.items(), true, - conn.autoCloseServerCursor(), -1, closeOnCompletion)); + resultSets.add(new JdbcThinResultSet(this, rsInfo.cursorId(), pageSize, res.isLast(), + res.items(), true, conn.autoCloseServerCursor(), -1, closeOnCompletion)); } else { - resultSets.add(new JdbcThinResultSet(this, rsInfo.queryId(), pageSize, - false, null, true, - conn.autoCloseServerCursor(), -1, closeOnCompletion)); + resultSets.add(new JdbcThinResultSet(this, rsInfo.cursorId(), pageSize, false, + null, true, conn.autoCloseServerCursor(), -1, closeOnCompletion)); } } } @@ -297,32 +309,50 @@ private JdbcResult sendFile(JdbcBulkLoadAckResult cmdRes) throws SQLException { byte[] buf = new byte[batchSize]; int readBytes; + int timeSpendMillis = 0; + while ((readBytes = input.read(buf)) != -1) { + long startTime = System.currentTimeMillis(); + if (readBytes == 0) continue; + if (reqTimeout != JdbcThinConnection.NO_TIMEOUT) + reqTimeout -= timeSpendMillis; + JdbcResult res = 
conn.sendRequest(new JdbcBulkLoadBatchRequest( - cmdRes.queryId(), - batchNum++, - JdbcBulkLoadBatchRequest.CMD_CONTINUE, - readBytes == buf.length ? buf : Arrays.copyOf(buf, readBytes))); + cmdRes.cursorId(), + batchNum++, + JdbcBulkLoadBatchRequest.CMD_CONTINUE, + readBytes == buf.length ? buf : Arrays.copyOf(buf, readBytes)), + this); if (!(res instanceof JdbcQueryExecuteResult)) throw new SQLException("Unknown response sent by the server: " + res); + + timeSpendMillis = (int)(System.currentTimeMillis() - startTime); } + if (reqTimeout != JdbcThinConnection.NO_TIMEOUT) + reqTimeout -= timeSpendMillis; + return conn.sendRequest(new JdbcBulkLoadBatchRequest( - cmdRes.queryId(), - batchNum++, - JdbcBulkLoadBatchRequest.CMD_FINISHED_EOF)); + cmdRes.cursorId(), + batchNum++, + JdbcBulkLoadBatchRequest.CMD_FINISHED_EOF), + this); } } catch (Exception e) { + if (e instanceof SQLTimeoutException) + throw (SQLTimeoutException)e; + try { conn.sendRequest(new JdbcBulkLoadBatchRequest( - cmdRes.queryId(), - batchNum, - JdbcBulkLoadBatchRequest.CMD_FINISHED_ERROR)); + cmdRes.cursorId(), + batchNum, + JdbcBulkLoadBatchRequest.CMD_FINISHED_ERROR), + this); } catch (SQLException e1) { throw new SQLException("Cannot send finalization request: " + e1.getMessage(), e); @@ -354,6 +384,8 @@ private JdbcResult sendFile(JdbcBulkLoadAckResult cmdRes) throws SQLException { try { closeResults(); + + conn.closeStatement(this); } finally { closed = true; @@ -372,6 +404,19 @@ private void closeResults() throws SQLException { resultSets = null; curRes = 0; } + + synchronized (cancellationMux) { + currReqId = 0; + + cancelled = false; + } + } + + /** + * @return Returns true if statement was cancelled, false otherwise. 
+ */ + boolean isCancelled() { + return cancelled; } /** @@ -442,11 +487,34 @@ void closeOnDisconnect() { throw new SQLException("Invalid timeout value."); this.timeout = timeout * 1000; + + reqTimeout = this.timeout; } /** {@inheritDoc} */ @Override public void cancel() throws SQLException { ensureNotClosed(); + + if (!isQueryCancellationSupported()) + throw new SQLFeatureNotSupportedException("Cancel method is not supported."); + + long reqId; + + synchronized (cancellationMux) { + if (isCancelled()) + return; + + if (conn.isStream()) + throw new SQLFeatureNotSupportedException("Cancel method is not allowed in streaming mode."); + + reqId = currReqId; + + if (reqId != 0) + cancelled = true; + } + + if (reqId != 0) + conn.sendQueryCancelRequest(new JdbcQueryCancelRequest(reqId)); } /** {@inheritDoc} */ @@ -516,7 +584,7 @@ void closeOnDisconnect() { * @throws SQLException If failed. */ private JdbcThinResultSet nextResultSet() throws SQLException { - ensureNotClosed(); + ensureAlive(); if (resultSets == null || curRes >= resultSets.size()) return null; @@ -526,7 +594,7 @@ private JdbcThinResultSet nextResultSet() throws SQLException { /** {@inheritDoc} */ @Override public boolean getMoreResults() throws SQLException { - ensureNotClosed(); + ensureAlive(); return getMoreResults(CLOSE_CURRENT_RESULT); } @@ -647,9 +715,11 @@ void checkStatementEligibleForBatching(String sql) throws SQLException { if (F.isEmpty(batch)) throw new SQLException("Batch is empty."); + JdbcBatchExecuteRequest req = new JdbcBatchExecuteRequest(conn.getSchema(), batch, + conn.getAutoCommit(), false); + try { - JdbcBatchExecuteResult res = conn.sendRequest(new JdbcBatchExecuteRequest(conn.getSchema(), batch, - conn.getAutoCommit(), false)); + JdbcBatchExecuteResult res = conn.sendRequest(req, this); if (res.errorCode() != ClientListenerResponse.STATUS_SUCCESS) { throw new BatchUpdateException(res.errorMessage(), IgniteQueryErrorCode.codeToSqlState(res.errorCode()), @@ -674,7 +744,7 @@ void checkStatementEligibleForBatching(String sql) throws SQLException { /** {@inheritDoc} */ @Override public boolean getMoreResults(int curr) throws SQLException { - ensureNotClosed(); + ensureAlive(); if (resultSets != null) { assert curRes <= resultSets.size() : "Invalid results state: [resultsCount=" + resultSets.size() + @@ -706,7 +776,7 @@ void checkStatementEligibleForBatching(String sql) throws SQLException { /** {@inheritDoc} */ @Override public ResultSet getGeneratedKeys() throws SQLException { - ensureNotClosed(); + ensureAlive(); throw new SQLFeatureNotSupportedException("Auto-generated columns are not supported."); } @@ -730,6 +800,7 @@ void checkStatementEligibleForBatching(String sql) throws SQLException { /** {@inheritDoc} */ @Override public int executeUpdate(String sql, int[] colIndexes) throws SQLException { ensureNotClosed(); + throw new SQLFeatureNotSupportedException("Auto-generated columns are not supported."); } @@ -853,15 +924,27 @@ JdbcThinConnection connection() { } /** - * Ensures that statement is not closed. + * Ensures that statement is not closed. * * @throws SQLException If statement is closed. */ - protected void ensureNotClosed() throws SQLException { + void ensureNotClosed() throws SQLException { if (isClosed()) throw new SQLException("Statement is closed."); } + /** + * Ensures that statement is neither closed nor canceled. + * + * @throws SQLException If statement is closed or canceled. + */ + void ensureAlive() throws SQLException { + ensureNotClosed(); + + if (cancelled) + throw new SQLException("The query was cancelled while executing.", SqlStateCode.QUERY_CANCELLED); + } + /** * Used by statement on closeOnCompletion mode. * @throws SQLException On error. @@ -882,4 +965,43 @@ void closeIfAllResultsClosed() throws SQLException { if (allRsClosed) close(); } + + /** + * @param currReqId Current request Id.
+ */ + void currentRequestId(long currReqId) { + synchronized (cancellationMux) { + this.currReqId = currReqId; + } + } + + /** + * @return Current request Id. + */ + long currentRequestId() { + synchronized (cancellationMux) { + return currReqId; + } + } + + /** + * @return Cancellation mutex. + */ + Object cancellationMutex() { + return cancellationMux; + } + + /** + * @return True if query cancellation supported, false otherwise. + */ + private boolean isQueryCancellationSupported() { + return conn.isQueryCancellationSupported(); + } + + /** + * @return Request timeout. + */ + int requestTimeout() { + return reqTimeout; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinTcpIo.java b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinTcpIo.java index b065b7aec9cf9..e2864e89eb23d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinTcpIo.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinTcpIo.java @@ -3,11 +3,12 @@ * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -23,17 +24,19 @@ import java.net.InetAddress; import java.net.InetSocketAddress; import java.net.Socket; +import java.net.SocketException; import java.net.UnknownHostException; import java.sql.SQLException; import java.util.ArrayList; import java.util.List; import java.util.concurrent.atomic.AtomicLong; - import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.query.QueryCancelledException; import org.apache.ignite.internal.binary.BinaryReaderExImpl; import org.apache.ignite.internal.binary.BinaryWriterExImpl; import org.apache.ignite.internal.binary.streams.BinaryHeapInputStream; import org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream; +import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.odbc.ClientListenerNioListener; import org.apache.ignite.internal.processors.odbc.ClientListenerProtocolVersion; import org.apache.ignite.internal.processors.odbc.ClientListenerRequest; @@ -41,7 +44,9 @@ import org.apache.ignite.internal.processors.odbc.jdbc.JdbcBatchExecuteRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcOrderedBatchExecuteRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQuery; +import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryCancelRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryCloseRequest; +import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryFetchRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryMetadataRequest; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest; @@ -74,8 +79,11 @@ public class JdbcThinTcpIo { /** Version 2.7.0. */ private static final ClientListenerProtocolVersion VER_2_7_0 = ClientListenerProtocolVersion.create(2, 7, 0); + /** Version 2.8.0. 
*/ + private static final ClientListenerProtocolVersion VER_2_8_0 = ClientListenerProtocolVersion.create(2, 8, 0); + /** Current version. */ - public static final ClientListenerProtocolVersion CURRENT_VER = VER_2_7_0; + public static final ClientListenerProtocolVersion CURRENT_VER = VER_2_8_0; /** Initial output stream capacity for handshake. */ private static final int HANDSHAKE_MSG_SIZE = 13; @@ -122,12 +130,18 @@ public class JdbcThinTcpIo { /** Mutex. */ private final Object mux = new Object(); + /** Connection mutex. */ + private final Object connMux = new Object(); + /** Current protocol version used to connection to Ignite. */ private ClientListenerProtocolVersion srvProtocolVer; /** Server index. */ private volatile int srvIdx; + /** Socket. */ + private Socket sock; + /** * Constructor. * @@ -249,6 +263,8 @@ else if (ConnectionProperties.SSL_MODE_DISABLE.equalsIgnoreCase(connProps.getSsl try { sock.connect(addr, timeout); + + this.sock = sock; } catch (IOException e) { throw new SQLException("Failed to connect to server [host=" + addr.getHostName() + @@ -377,7 +393,8 @@ public void handshake(ClientListenerProtocolVersion ver) throws IOException, SQL + ", url=" + connProps.getUrl() + ']', SqlStateCode.CONNECTION_REJECTED); } - if (VER_2_5_0.equals(srvProtoVer0) + if (VER_2_7_0.equals(srvProtoVer0) + || VER_2_5_0.equals(srvProtoVer0) || VER_2_4_0.equals(srvProtoVer0) || VER_2_3_0.equals(srvProtoVer0) || VER_2_1_5.equals(srvProtoVer0)) @@ -464,14 +481,7 @@ void sendBatchRequestNoWaitResponse(JdbcOrderedBatchExecuteRequest req) throws I + CURRENT_VER + ", remoteNodeVer=" + igniteVer + ']', SqlStateCode.INTERNAL_ERROR); } - int cap = guessCapacity(req); - - BinaryWriterExImpl writer = new BinaryWriterExImpl(null, new BinaryHeapOutputStream(cap), - null, null); - - req.writeBinary(writer, srvProtocolVer); - - send(writer.array()); + sendRequestRaw(req); } finally { synchronized (mux) { @@ -482,12 +492,13 @@ void 
sendBatchRequestNoWaitResponse(JdbcOrderedBatchExecuteRequest req) throws I /** * @param req Request. + * @param stmt Statement. * @return Server response. * @throws IOException In case of IO error. * @throws SQLException On concurrent access to JDBC connection. */ @SuppressWarnings("unchecked") - JdbcResponse sendRequest(JdbcRequest req) throws SQLException, IOException { + JdbcResponse sendRequest(JdbcRequest req, JdbcThinStatement stmt) throws SQLException, IOException { synchronized (mux) { if (ownThread != null) { throw new SQLException("Concurrent access to JDBC connection is not allowed" @@ -499,15 +510,30 @@ JdbcResponse sendRequest(JdbcRequest req) throws SQLException, IOException { } try { - int cap = guessCapacity(req); + if (stmt != null) { + synchronized (stmt.cancellationMutex()) { + if (stmt.isCancelled()) { + if (req instanceof JdbcQueryCloseRequest) + return new JdbcResponse(null); - BinaryWriterExImpl writer = new BinaryWriterExImpl(null, new BinaryHeapOutputStream(cap), null, null); + return new JdbcResponse(IgniteQueryErrorCode.QUERY_CANCELED, QueryCancelledException.ERR_MSG); + } - req.writeBinary(writer, srvProtocolVer); + sendRequestRaw(req); - send(writer.array()); + if (req instanceof JdbcQueryExecuteRequest || req instanceof JdbcBatchExecuteRequest) + stmt.currentRequestId(req.requestId()); + } + } + else + sendRequestRaw(req); - return readResponse(); + JdbcResponse resp = readResponse(); + + if (stmt != null && stmt.isCancelled()) + return new JdbcResponse(IgniteQueryErrorCode.QUERY_CANCELED, QueryCancelledException.ERR_MSG); + else + return resp; } finally { synchronized (mux) { @@ -516,13 +542,24 @@ JdbcResponse sendRequest(JdbcRequest req) throws SQLException, IOException { } } + /** + * Sends cancel request. + * + * @param cancellationReq contains request id to be cancelled + * @throws IOException In case of IO error. 
+ */ + void sendCancelRequest(JdbcQueryCancelRequest cancellationReq) throws IOException { + sendRequestRaw(cancellationReq); + } + /** * @return Server response. * @throws IOException In case of IO error. */ @SuppressWarnings("unchecked") JdbcResponse readResponse() throws IOException { - BinaryReaderExImpl reader = new BinaryReaderExImpl(null, new BinaryHeapInputStream(read()), null, null, false); + BinaryReaderExImpl reader = new BinaryReaderExImpl(null, new BinaryHeapInputStream(read()), null, + null, false); JdbcResponse res = new JdbcResponse(); @@ -531,7 +568,6 @@ JdbcResponse readResponse() throws IOException { return res; } - /** * Try to guess request capacity. * @@ -561,6 +597,23 @@ else if (req instanceof JdbcQueryFetchRequest) return cap; } + /** + * @param req Request. + * @throws IOException In case of IO error. + */ + private void sendRequestRaw(JdbcRequest req) throws IOException { + int cap = guessCapacity(req); + + BinaryWriterExImpl writer = new BinaryWriterExImpl(null, new BinaryHeapOutputStream(cap), + null, null); + + req.writeBinary(writer, srvProtocolVer); + + synchronized (connMux) { + send(writer.array()); + } + } + /** * @param req JDBC request bytes. * @throws IOException On error. @@ -653,6 +706,15 @@ boolean isUnorderedStreamSupported() { return srvProtocolVer.compareTo(VER_2_5_0) >= 0; } + /** + * @return True if query cancellation supported, false otherwise. + */ + boolean isQueryCancellationSupported() { + assert srvProtocolVer != null; + + return srvProtocolVer.compareTo(VER_2_8_0) >= 0; + } + /** * @return Current server index. */ @@ -675,4 +737,33 @@ private static int nextServerIndex(int len) { return (int)(nextIdx % len); } } + + /** + * Enable/disable socket timeout with specified timeout. + * + * @param ms the specified timeout, in milliseconds. + * @throws SQLException if there is an error in the underlying protocol. 
+ */ + public void timeout(int ms) throws SQLException { + try { + sock.setSoTimeout(ms); + } + catch (SocketException e) { + throw new SQLException("Failed to set connection timeout.", SqlStateCode.INTERNAL_ERROR, e); + } + } + + /** + * Returns socket timeout. + * + * @throws SQLException if there is an error in the underlying protocol. + */ + public int timeout() throws SQLException { + try { + return sock.getSoTimeout(); + } + catch (SocketException e) { + throw new SQLException("Failed to get connection timeout.", SqlStateCode.INTERNAL_ERROR, e); + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointRequest.java index 8b21ff2fe5382..4b25e0b687662 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointRequest.java @@ -177,4 +177,4 @@ public String getCheckpointSpi() { @Override public String toString() { return S.toString(GridCheckpointRequest.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoManager.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoManager.java index b3c80b05f635b..ae117bb073558 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoManager.java @@ -135,7 +135,7 @@ public class GridIoManager extends GridManagerAdapter CUR_PLC = new ThreadLocal<>(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoMessageFactory.java
b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoMessageFactory.java index 389d8c038d843..3f4eb1828206c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoMessageFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoMessageFactory.java @@ -46,6 +46,8 @@ import org.apache.ignite.internal.processors.cache.CacheInvokeDirectResult; import org.apache.ignite.internal.processors.cache.CacheObjectByteArrayImpl; import org.apache.ignite.internal.processors.cache.CacheObjectImpl; +import org.apache.ignite.internal.managers.encryption.GenerateEncryptionKeyRequest; +import org.apache.ignite.internal.managers.encryption.GenerateEncryptionKeyResponse; import org.apache.ignite.internal.processors.cache.GridCacheEntryInfo; import org.apache.ignite.internal.processors.cache.GridCacheMvccEntryInfo; import org.apache.ignite.internal.processors.cache.GridCacheReturn; @@ -54,8 +56,6 @@ import org.apache.ignite.internal.processors.cache.WalStateAckMessage; import org.apache.ignite.internal.processors.cache.binary.MetadataRequestMessage; import org.apache.ignite.internal.processors.cache.binary.MetadataResponseMessage; -import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessageV2; -import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.LatchAckMessage; import org.apache.ignite.internal.processors.cache.distributed.GridCacheTtlUpdateRequest; import org.apache.ignite.internal.processors.cache.distributed.GridCacheTxRecoveryRequest; import org.apache.ignite.internal.processors.cache.distributed.GridCacheTxRecoveryResponse; @@ -79,6 +79,7 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxQueryEnlistResponse; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxQueryFirstEnlistRequest; import 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtUnlockRequest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridInvokeValue; import org.apache.ignite.internal.processors.cache.distributed.dht.PartitionUpdateCountersMessage; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicDeferredUpdateResponse; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicNearResponse; @@ -100,9 +101,11 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionExchangeId; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessageV2; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleRequest; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.LatchAckMessage; import org.apache.ignite.internal.processors.cache.distributed.near.CacheVersionedValue; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetRequest; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetResponse; @@ -131,9 +134,12 @@ import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccActiveQueriesMessage; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccFutureResponse; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccQuerySnapshotRequest; +import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccRecoveryFinishedMessage; import 
org.apache.ignite.internal.processors.cache.mvcc.msg.MvccSnapshotResponse; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccTxSnapshotRequest; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccWaitTxsRequest; +import org.apache.ignite.internal.processors.cache.mvcc.msg.PartitionCountersNeighborcastRequest; +import org.apache.ignite.internal.processors.cache.mvcc.msg.PartitionCountersNeighborcastResponse; import org.apache.ignite.internal.processors.cache.query.GridCacheQueryRequest; import org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponse; import org.apache.ignite.internal.processors.cache.query.GridCacheSqlQuery; @@ -1078,6 +1084,36 @@ public GridIoMessageFactory(MessageFactory[] ext) { break; + case 161: + msg = new GridInvokeValue(); + + break; + + case 162: + msg = new GenerateEncryptionKeyRequest(); + + break; + + case 163: + msg = new GenerateEncryptionKeyResponse(); + + break; + + case 164: + msg = new MvccRecoveryFinishedMessage(); + + break; + + case 165: + msg = new PartitionCountersNeighborcastRequest(); + + break; + + case 166: + msg = new PartitionCountersNeighborcastResponse(); + + break; + // [-3..119] [124..129] [-23..-27] [-36..-55]- this // [120..123] - DR // [-4..-22, -30..-35] - SQL diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoUserMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoUserMessage.java index 332a9de511c79..408fad773f93e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoUserMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoUserMessage.java @@ -358,4 +358,4 @@ public void deployment(GridDeployment dep) { @Override public String toString() { return S.toString(GridIoUserMessage.class, this); } -} \ No newline at end of file +} diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessage.java index 0a8b2b7a38e60..a6a2469736ef2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessage.java @@ -20,6 +20,7 @@ import java.nio.ByteBuffer; import java.util.UUID; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageReader; @@ -28,6 +29,7 @@ /** * */ +@IgniteCodeGeneratingFail public class IgniteIoTestMessage implements Message { /** */ private static byte FLAG_PROC_FROM_NIO = 1; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeployment.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeployment.java index c3efc592b1347..3450aa5195c4a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeployment.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeployment.java @@ -35,6 +35,7 @@ import org.apache.ignite.compute.ComputeTask; import org.apache.ignite.configuration.DeploymentMode; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.GridLeanSet; import org.apache.ignite.internal.util.lang.GridMetadataAwareAdapter; import org.apache.ignite.internal.util.lang.GridPeerDeployAware; @@ -45,6 +46,7 @@ import org.apache.ignite.internal.util.typedef.internal.U; import 
org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteUuid; +import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; import java.util.concurrent.ConcurrentHashMap; @@ -383,6 +385,20 @@ public boolean internalTask(@Nullable ComputeTask task, Class taskCls) { return res; } + /** + * Checks whether task class is annotated with {@link GridVisorManagementTask}. + * + * @param task Task. + * @param taskCls Task class. + * @return {@code True} if task is a Visor management task. + */ + @SuppressWarnings("unchecked") + public boolean visorManagementTask(@Nullable ComputeTask task, @NotNull Class taskCls) { + return annotation(task instanceof GridPeerDeployAware ? + ((GridPeerDeployAware)task).deployClass() : taskCls, + GridVisorManagementTask.class) != null; + } + /** + * @param cls Class to create new instance of (using default constructor). + * @return New instance. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentCommunication.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentCommunication.java index 2a5f7cae1224b..e14c8dfafcb10 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentCommunication.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentCommunication.java @@ -204,9 +204,9 @@ private void processResourceRequest(UUID nodeId, GridDeploymentRequest req) { // since it was already performed before (and was successful). if (!(ldr instanceof GridDeploymentClassLoader)) { // First check for @GridNotPeerDeployable annotation.
- try { - String clsName = req.resourceName().replace('/', '.'); + String clsName = req.resourceName().replace('/', '.'); + try { int idx = clsName.indexOf(".class"); if (idx >= 0) @@ -228,8 +228,10 @@ private void processResourceRequest(UUID nodeId, GridDeploymentRequest req) { return; } } - catch (ClassNotFoundException ignore) { - // Safely ignore it here - resource wasn't a class name. + catch (LinkageError | ClassNotFoundException e) { + U.warn(log, "Failed to resolve class: " + clsName, e); + // These errors can be safely ignored here, since the requested resource may not be a class name at all. + // An unsuccessful response will be sent below if the resource fails to load. + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentInfoBean.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentInfoBean.java index 7f58ce36001ab..72f5ec6b30579 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentInfoBean.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentInfoBean.java @@ -277,4 +277,4 @@ public GridDeploymentInfoBean(GridDeploymentInfo dep) { @Override public String toString() { return S.toString(GridDeploymentInfoBean.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentRequest.java index 729cf4c54278e..708c64860579e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentRequest.java @@ -278,4 +278,4 @@ public void nodeIds(Collection nodeIds) { @Override public String toString() { return S.toString(GridDeploymentRequest.class,
this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentResponse.java index d1b0384f0fb6a..591957d37bd2f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/deployment/GridDeploymentResponse.java @@ -197,4 +197,4 @@ void errorMessage(String errMsg) { @Override public String toString() { return S.toString(GridDeploymentResponse.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/DiscoCache.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/DiscoCache.java index 70e2013d1065a..e99e4782ba3c8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/DiscoCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/DiscoCache.java @@ -26,7 +26,6 @@ import org.apache.ignite.cluster.BaselineNode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.processors.cluster.DiscoveryDataClusterState; import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.tostring.GridToStringInclude; @@ -109,14 +108,10 @@ public class DiscoCache { /** */ private final P1 aliveNodePred; - /** */ - private final MvccCoordinator mvccCrd; - /** * @param topVer Topology version. * @param state Current cluster state. * @param loc Local node. - * @param mvccCrd MVCC coordinator node. * @param rmtNodes Remote nodes. * @param allNodes All nodes. * @param srvNodes Server nodes. 
@@ -135,7 +130,6 @@ public class DiscoCache { AffinityTopologyVersion topVer, DiscoveryDataClusterState state, ClusterNode loc, - MvccCoordinator mvccCrd, List rmtNodes, List allNodes, List srvNodes, @@ -154,7 +148,6 @@ public class DiscoCache { this.topVer = topVer; this.state = state; this.loc = loc; - this.mvccCrd = mvccCrd; this.rmtNodes = rmtNodes; this.allNodes = allNodes; this.srvNodes = srvNodes; @@ -183,13 +176,6 @@ public class DiscoCache { }; } - /** - * @return Mvcc coordinator node. - */ - @Nullable public MvccCoordinator mvccCoordinator() { - return mvccCrd; - } - /** * @return Topology version. */ @@ -475,7 +461,6 @@ public DiscoCache copy(AffinityTopologyVersion ver, @Nullable DiscoveryDataClust ver, state == null ? this.state : state, loc, - mvccCrd, rmtNodes, allNodes, srvNodes, diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManager.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManager.java index 5ce4cb644e337..2f8f61f1421bd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManager.java @@ -1,12 +1,11 @@ /* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at + * Copyright 2020 GridGain Systems, Inc. and Contributors. * - * http://www.apache.org/licenses/LICENSE-2.0 + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -48,6 +47,7 @@ import org.apache.ignite.IgniteClientDisconnectedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteInterruptedException; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.CacheMetrics; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cluster.BaselineNode; @@ -85,8 +85,6 @@ import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.processors.cache.GridCacheContext; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; -import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cluster.BaselineTopology; import org.apache.ignite.internal.processors.cluster.ChangeGlobalStateFinishMessage; import org.apache.ignite.internal.processors.cluster.ChangeGlobalStateMessage; @@ -99,6 +97,7 @@ import org.apache.ignite.internal.util.GridSpinBusyLock; import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.future.IgniteFutureImpl; import org.apache.ignite.internal.util.lang.GridTuple6; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.typedef.CI1; @@ -108,7 +107,6 @@ import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.internal.util.typedef.internal.SB; import 
org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.util.worker.GridWorker; import org.apache.ignite.lang.IgniteClosure; @@ -133,6 +131,7 @@ import org.apache.ignite.spi.discovery.DiscoverySpiMutableCustomMessageSupport; import org.apache.ignite.spi.discovery.DiscoverySpiNodeAuthenticator; import org.apache.ignite.spi.discovery.DiscoverySpiOrderSupport; +import org.apache.ignite.spi.discovery.IgniteDiscoveryThread; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; import org.apache.ignite.thread.IgniteThread; @@ -587,7 +586,7 @@ private void updateClientNodes(UUID leftNodeId) { } } - @Override public IgniteInternalFuture onDiscovery( + @Override public IgniteFuture onDiscovery( final int type, final long topVer, final ClusterNode node, @@ -595,7 +594,7 @@ private void updateClientNodes(UUID leftNodeId) { final Map> snapshots, @Nullable DiscoverySpiCustomMessage spiCustomMsg ) { - GridFutureAdapter notificationFut = new GridFutureAdapter(); + GridFutureAdapter notificationFut = new GridFutureAdapter<>(); discoNtfWrk.submit(notificationFut, () -> { synchronized (discoEvtMux) { @@ -603,7 +602,7 @@ private void updateClientNodes(UUID leftNodeId) { } }); - return notificationFut; + return new IgniteFutureImpl<>(notificationFut); } /** @@ -657,8 +656,6 @@ private void onDiscovery0( updateClientNodes(node.id()); } - ctx.coordinators().onDiscoveryEvent(type, topSnapshot, topVer, customMsg); - boolean locJoinEvt = type == EVT_NODE_JOINED && node.id().equals(locNode.id()); ChangeGlobalStateFinishMessage stateFinishMsg = null; @@ -769,8 +766,6 @@ else if (customMsg instanceof ChangeGlobalStateMessage) { // Current version. discoCache = discoCache(); - final DiscoCache discoCache0 = discoCache; - // If this is a local join event, just save it and do not notify listeners. 
if (locJoinEvt) { if (gridStartTime == 0) @@ -791,9 +786,13 @@ else if (customMsg instanceof ChangeGlobalStateMessage) { discoWrk.discoCache = discoCache; if (!isLocDaemon && !ctx.clientDisconnected()) { + ctx.cache().context().coordinators().onLocalJoin(discoEvt); + ctx.cache().context().exchange().onLocalJoin(discoEvt, discoCache); ctx.authentication().onLocalJoin(); + + ctx.encryption().onLocalJoin(); } IgniteInternalFuture transitionWaitFut = ctx.state().onLocalJoin(discoCache); @@ -854,7 +853,7 @@ else if (type == EVT_CLIENT_NODE_RECONNECTED) { try { fut.get(); - discoWrk.addEvent(type, nextTopVer, node, discoCache0, topSnapshot, null); + discoWrk.addEvent(EVT_CLIENT_NODE_RECONNECTED, nextTopVer, node, discoCache, topSnapshot, null); } catch (IgniteException ignore) { // No-op. @@ -870,6 +869,9 @@ else if (type == EVT_CLIENT_NODE_RECONNECTED) { if (stateFinishMsg != null) discoWrk.addEvent(EVT_DISCOVERY_CUSTOM_EVT, nextTopVer, node, discoCache, topSnapshot, stateFinishMsg); + + if (type == EVT_CLIENT_NODE_DISCONNECTED) + discoWrk.awaitDisconnectEvent(); } }); @@ -1094,6 +1096,10 @@ private GridLocalMetrics createMetrics() { */ public DiscoveryMetricsProvider createMetricsProvider() { return new DiscoveryMetricsProvider() { + /** Disable cache metrics update. */ + private final boolean disableCacheMetricsUpdate = IgniteSystemProperties.getBoolean( + IgniteSystemProperties.IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE, false); + /** */ private final long startTime = U.currentTimeMillis(); @@ -1105,6 +1111,9 @@ public DiscoveryMetricsProvider createMetricsProvider() { /** {@inheritDoc} */ @Override public Map cacheMetrics() { try { + if (disableCacheMetricsUpdate) + return Collections.emptyMap(); + /** Caches should not be accessed while state transition is in progress. 
*/ if (ctx.state().clusterState().transition()) return Collections.emptyMap(); @@ -1533,13 +1542,13 @@ private long requiredOffheap() { for (DataRegionConfiguration dataReg : dataRegions) { res += dataReg.getMaxSize(); - res += GridCacheDatabaseSharedManager.checkpointBufferSize(dataReg); + res += U.checkpointBufferSize(dataReg); } } res += memCfg.getDefaultDataRegionConfiguration().getMaxSize(); - res += GridCacheDatabaseSharedManager.checkpointBufferSize(memCfg.getDefaultDataRegionConfiguration()); + res += U.checkpointBufferSize(memCfg.getDefaultDataRegionConfiguration()); return res; } @@ -1565,24 +1574,6 @@ private long configuredOffheap() { return res; } - /** - * @param regCfg Data region configuration. - * @return Data region message. - */ - private String dataRegionConfigurationMessage(DataRegionConfiguration regCfg) { - if (regCfg == null) - return null; - - SB m = new SB(); - - m.a(" ^-- ").a(regCfg.getName()).a(" ["); - m.a("initSize=").a(U.readableSize(regCfg.getInitialSize(), false)); - m.a(", maxSize=").a(U.readableSize(regCfg.getMaxSize(), false)); - m.a(", persistenceEnabled=" + regCfg.isPersistenceEnabled()).a(']'); - - return m.toString(); - } - /** * @param clo Wrapper of logger. * @param topVer Topology version. @@ -1598,10 +1589,14 @@ private String dataRegionConfigurationMessage(DataRegionConfiguration regCfg) { private void topologySnapshotMessage(IgniteClosure clo, long topVer, DiscoCache discoCache, int evtType, ClusterNode evtNode, int srvNodesNum, int clientNodesNum, int totalCpus, double heap, double offheap) { + DiscoveryDataClusterState state = discoCache.state(); + String summary = PREFIX + " [" + (discoOrdered ? "ver=" + topVer + ", " : "") + - "servers=" + srvNodesNum + + "locNode=" + U.id8(discoCache.localNode().id()) + + ", servers=" + srvNodesNum + ", clients=" + clientNodesNum + + ", state=" + (state.active() ? 
"ACTIVE" : "INACTIVE") + ", CPUs=" + totalCpus + ", offheap=" + offheap + "GB" + ", heap=" + heap + "GB]"; @@ -1614,11 +1609,6 @@ private void topologySnapshotMessage(IgniteClosure clo, long topVe currCrd != null && currCrd.order() > evtNode.order()) clo.apply("Coordinator changed [prev=" + evtNode + ", cur=" + currCrd + "]"); - DiscoveryDataClusterState state = discoCache.state(); - - clo.apply(" ^-- Node [id=" + discoCache.localNode().id().toString().toUpperCase() + ", clusterState=" - + (state.active() ? "ACTIVE" : "INACTIVE") + ']'); - BaselineTopology blt = state.baselineTopology(); if (blt != null && discoCache.baselineNodes() != null) { @@ -1648,25 +1638,6 @@ private void topologySnapshotMessage(IgniteClosure clo, long topVe clo.apply(" ^-- " + bltOffline + " nodes left for auto-activation" + offlineConsistentIds); } } - - DataStorageConfiguration memCfg = ctx.config().getDataStorageConfiguration(); - - if (memCfg == null) - return; - - clo.apply("Data Regions Configured:"); - clo.apply(dataRegionConfigurationMessage(memCfg.getDefaultDataRegionConfiguration())); - - DataRegionConfiguration[] dataRegions = memCfg.getDataRegionConfigurations(); - - if (dataRegions != null) { - for (int i = 0; i < dataRegions.length; ++i) { - String msg = dataRegionConfigurationMessage(dataRegions[i]); - - if (msg != null) - clo.apply(msg); - } - } } /** {@inheritDoc} */ @@ -2095,6 +2066,21 @@ private DiscoCache resolveDiscoCache(int grpId, AffinityTopologyVersion topVer) snap.discoCache : discoCacheHist.get(topVer); if (cache == null) { + AffinityTopologyVersion lastAffChangedTopVer = + ctx.cache().context().exchange().lastAffinityChangedTopologyVersion(topVer); + + if (!lastAffChangedTopVer.equals(topVer)) { + assert lastAffChangedTopVer.compareTo(topVer) < 0; + + for (Map.Entry e : discoCacheHist.descendingEntrySet()) { + if (e.getKey().isBetween(lastAffChangedTopVer, topVer)) + return e.getValue(); + + if (e.getKey().compareTo(lastAffChangedTopVer) < 0) + break; + } + } + 
CacheGroupDescriptor desc = ctx.cache().cacheGroupDescriptors().get(grpId); throw new IgniteException("Failed to resolve nodes topology [" + @@ -2374,8 +2360,6 @@ public void reconnect() { Collection topSnapshot) { assert topSnapshot.contains(loc); - MvccCoordinator mvccCrd = ctx.coordinators().assignedCoordinator(); - HashSet alives = U.newHashSet(topSnapshot.size()); HashMap nodeMap = U.newHashMap(topSnapshot.size()); @@ -2477,7 +2461,6 @@ else if (node.version().compareTo(minVer) < 0) topVer, state, loc, - mvccCrd, Collections.unmodifiableList(rmtNodes), Collections.unmodifiableList(allNodes), Collections.unmodifiableList(srvNodes), @@ -2499,7 +2482,7 @@ else if (node.version().compareTo(minVer) < 0) * * @param cacheMap Map to add to. * @param cacheName Cache name. - * @param rich Node to add + * @param node Node to add */ private void addToMap(Map> cacheMap, String cacheName, ClusterNode rich) { List cacheNodes = cacheMap.get(CU.cacheId(cacheName)); @@ -2649,8 +2632,8 @@ public void scheduleSegmentCheck() { AffinityTopologyVersion.NONE, ctx.state().clusterState(), node, - locNodeOnlyTop - ), locNodeOnlyTop, + locNodeOnlyTop), + locNodeOnlyTop, null); lastSegChkRes.set(false); @@ -2671,7 +2654,7 @@ public void scheduleSegmentCheck() { /** * */ - private class DiscoveryMessageNotifierWorker extends GridWorker { + private class DiscoveryMessageNotifierWorker extends GridWorker implements IgniteDiscoveryThread { /** Queue. */ private final BlockingQueue> queue = new LinkedBlockingQueue<>(); @@ -2775,6 +2758,14 @@ private class DiscoveryWorker extends GridWorker { /** Node segmented event fired flag. */ private boolean nodeSegFired; + /** + * Future to wait for client disconnect event before an attempt to reconnect. + * + * Otherwise, we can continue process events from the previous cluster topology when the client already + * connected to a new topology. 
+ */ + private volatile GridFutureAdapter disconnectEvtFut; + /** * */ @@ -2849,6 +2840,9 @@ void addEvent( ) { assert node != null : data; + if (type == EVT_CLIENT_NODE_DISCONNECTED) + discoWrk.disconnectEvtFut = new GridFutureAdapter(); + evts.add(new GridTuple6<>(type, topVer, node, discoCache, topSnapshot, data)); } @@ -2959,7 +2953,7 @@ else if (log.isDebugEnabled()) } case EVT_CLIENT_NODE_DISCONNECTED: { - // No-op. + disconnectEvtFut.onDone(); break; } @@ -3089,6 +3083,16 @@ private void onSegmentation() { } } + /** Awaits client disconnect event. */ + private void awaitDisconnectEvent() { + try { + disconnectEvtFut.get(); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to wait for handling disconnect event.", e); + } + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(DiscoveryWorker.class, this); @@ -3484,7 +3488,6 @@ public DiscoCache createDiscoCacheOnCacheChange( topVer, discoCache.state(), discoCache.localNode(), - discoCache.mvccCoordinator(), discoCache.remoteNodes(), allNodes, discoCache.serverNodes(), diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/IgniteDiscoverySpiInternalListener.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/IgniteDiscoverySpiInternalListener.java index 24405f8101f0a..80164238dd639 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/IgniteDiscoverySpiInternalListener.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/IgniteDiscoverySpiInternalListener.java @@ -32,6 +32,14 @@ public interface IgniteDiscoverySpiInternalListener { */ public void beforeJoin(ClusterNode locNode, IgniteLogger log); + /** + * @param locNode Local node. + * @param log Logger. + */ + default public void beforeReconnect(ClusterNode locNode, IgniteLogger log) { + // No-op. + } + /** * @param spi SPI instance. * @param log Logger. 
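The disconnect-await handshake added in DiscoveryWorker above reduces to a simple ordering pattern: a future is created when the client-disconnect event is queued, completed once the worker has processed it, and awaited before any reconnect attempt, so events from the old topology cannot race with the new one. A minimal standalone sketch of the same idea, using `CompletableFuture` in place of Ignite's internal `GridFutureAdapter` (the class and method names below are illustrative, not Ignite APIs):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

/** Sketch: make the reconnect path wait until the worker has drained the disconnect event. */
public class DisconnectAwait {
    /** Event queue drained by a single worker thread. */
    private final BlockingQueue<String> evts = new ArrayBlockingQueue<>(16);

    /** Armed when the disconnect event is enqueued, completed when it is processed. */
    private volatile CompletableFuture<Void> disconnectEvtFut;

    /** Producer side: enqueue an event, arming the future for disconnects. */
    public void addEvent(String type) {
        if ("DISCONNECTED".equals(type))
            disconnectEvtFut = new CompletableFuture<>();

        evts.offer(type);
    }

    /** Worker side: process one queued event; completing the future releases any waiter. */
    public String processOne() {
        String type = evts.poll();

        if ("DISCONNECTED".equals(type))
            disconnectEvtFut.complete(null);

        return type;
    }

    /** Reconnect path: block until the disconnect event has actually been handled. */
    public void awaitDisconnectEvent() {
        disconnectEvtFut.join();
    }
}
```

Because the future is created on the producer side (in `addEvent`, mirroring the patch), `awaitDisconnectEvent()` can never observe a null future once a disconnect has been queued, which is why the field is written before the event enters the queue.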
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GenerateEncryptionKeyRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GenerateEncryptionKeyRequest.java new file mode 100644 index 0000000000000..3d480148c2e2a --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GenerateEncryptionKeyRequest.java @@ -0,0 +1,142 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.managers.encryption; + +import java.nio.ByteBuffer; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.plugin.extensions.communication.MessageReader; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; + +/** + * Generate encryption key request. + */ +public class GenerateEncryptionKeyRequest implements Message { + /** */ + private static final long serialVersionUID = 0L; + + /** Request ID. 
*/ + private IgniteUuid id = IgniteUuid.randomUuid(); + + /** */ + private int keyCnt; + + /** */ + public GenerateEncryptionKeyRequest() { + } + + /** + * @param keyCnt Number of encryption keys to generate. + */ + public GenerateEncryptionKeyRequest(int keyCnt) { + this.keyCnt = keyCnt; + } + + /** + * @return Request id. + */ + public IgniteUuid id() { + return id; + } + + /** + * @return Number of encryption keys to generate. + */ + public int keyCount() { + return keyCnt; + } + + /** {@inheritDoc} */ + @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) { + writer.setBuffer(buf); + + if (!writer.isHeaderWritten()) { + if (!writer.writeHeader(directType(), fieldsCount())) + return false; + + writer.onHeaderWritten(); + } + + switch (writer.state()) { + case 0: + if (!writer.writeIgniteUuid("id", id)) + return false; + + writer.incrementState(); + + case 1: + if (!writer.writeInt("keyCnt", keyCnt)) + return false; + + writer.incrementState(); + + } + + return true; + } + + /** {@inheritDoc} */ + @Override public boolean readFrom(ByteBuffer buf, MessageReader reader) { + reader.setBuffer(buf); + + if (!reader.beforeMessageRead()) + return false; + + switch (reader.state()) { + case 0: + id = reader.readIgniteUuid("id"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 1: + keyCnt = reader.readInt("keyCnt"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + } + + return reader.afterMessageRead(GenerateEncryptionKeyRequest.class); + } + + /** {@inheritDoc} */ + @Override public short directType() { + return 162; + } + + /** {@inheritDoc} */ + @Override public byte fieldsCount() { + return 2; + } + + /** {@inheritDoc} */ + @Override public void onAckReceived() { + // No-op.
+ } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(GenerateEncryptionKeyRequest.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GenerateEncryptionKeyResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GenerateEncryptionKeyResponse.java new file mode 100644 index 0000000000000..89712489fd02c --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GenerateEncryptionKeyResponse.java @@ -0,0 +1,148 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.managers.encryption; + +import java.nio.ByteBuffer; +import java.util.Collection; +import org.apache.ignite.internal.GridDirectCollection; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType; +import org.apache.ignite.plugin.extensions.communication.MessageReader; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; + +/** + * Generate encryption key response. + */ +public class GenerateEncryptionKeyResponse implements Message { + /** */ + private static final long serialVersionUID = 0L; + + /** Request message ID. */ + private IgniteUuid id; + + /** */ + @GridDirectCollection(byte[].class) + private Collection encKeys; + + /** */ + public GenerateEncryptionKeyResponse() { + } + + /** + * @param id Request id. + * @param encKeys Encryption keys. + */ + public GenerateEncryptionKeyResponse(IgniteUuid id, Collection encKeys) { + this.id = id; + this.encKeys = encKeys; + } + + /** + * @return Request id. + */ + public IgniteUuid requestId() { + return id; + } + + /** + * @return Encryption keys. 
+ */ + public Collection encryptionKeys() { + return encKeys; + } + + /** {@inheritDoc} */ + @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) { + writer.setBuffer(buf); + + if (!writer.isHeaderWritten()) { + if (!writer.writeHeader(directType(), fieldsCount())) + return false; + + writer.onHeaderWritten(); + } + + switch (writer.state()) { + case 0: + if (!writer.writeCollection("encKeys", encKeys, MessageCollectionItemType.BYTE_ARR)) + return false; + + writer.incrementState(); + + case 1: + if (!writer.writeIgniteUuid("id", id)) + return false; + + writer.incrementState(); + + } + + return true; + } + + /** {@inheritDoc} */ + @Override public boolean readFrom(ByteBuffer buf, MessageReader reader) { + reader.setBuffer(buf); + + if (!reader.beforeMessageRead()) + return false; + + switch (reader.state()) { + case 0: + encKeys = reader.readCollection("encKeys", MessageCollectionItemType.BYTE_ARR); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 1: + id = reader.readIgniteUuid("id"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + } + + return reader.afterMessageRead(GenerateEncryptionKeyResponse.class); + } + + /** {@inheritDoc} */ + @Override public short directType() { + return 163; + } + + /** {@inheritDoc} */ + @Override public byte fieldsCount() { + return 2; + } + + /** {@inheritDoc} */ + @Override public void onAckReceived() { + //No-op. 
+ } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(GenerateEncryptionKeyResponse.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GridEncryptionManager.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GridEncryptionManager.java new file mode 100644 index 0000000000000..7d74023b9b971 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GridEncryptionManager.java @@ -0,0 +1,867 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.managers.encryption; + +import java.io.Serializable; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.spi.encryption.EncryptionSpi; +import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.managers.GridManagerAdapter; +import org.apache.ignite.internal.managers.communication.GridMessageListener; +import org.apache.ignite.internal.managers.eventstorage.DiscoveryEventListener; +import org.apache.ignite.internal.processors.cache.CacheGroupDescriptor; +import org.apache.ignite.internal.processors.cache.GridCacheProcessor; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetastorageLifecycleListener; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.ReadOnlyMetastorage; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.ReadWriteMetastorage; +import org.apache.ignite.internal.processors.cluster.IgniteChangeGlobalStateSupport; +import org.apache.ignite.internal.util.future.GridFinishedFuture; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.lang.GridPlainClosure; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteFuture; +import org.apache.ignite.lang.IgniteFutureCancelledException; +import 
org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.lang.IgniteProductVersion; +import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.spi.IgniteNodeValidationResult; +import org.apache.ignite.spi.discovery.DiscoveryDataBag; +import org.apache.ignite.spi.discovery.DiscoveryDataBag.GridDiscoveryData; +import org.apache.ignite.spi.discovery.DiscoveryDataBag.JoiningNodeDiscoveryData; +import org.apache.ignite.spi.discovery.DiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; +import static org.apache.ignite.events.EventType.EVT_NODE_LEFT; +import static org.apache.ignite.internal.GridComponent.DiscoveryDataExchangeType.ENCRYPTION_MGR; +import static org.apache.ignite.internal.GridTopic.TOPIC_GEN_ENC_KEY; +import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_ENCRYPTION_MASTER_KEY_DIGEST; +import static org.apache.ignite.internal.managers.communication.GridIoPolicy.SYSTEM_POOL; + +/** + * Manages cache keys and {@code EncryptionSpi} instances. + * + * NOTE: Following protocol applied to statically configured caches. + * For dynamically created caches key generated in request creation. + * + * Group keys generation protocol: + * + *
    + *
+ * <ul>
+ *     <li>Joining node:
+ *         <ul>
+ *             <li>1. Collects and sends all stored group keys to the coordinator.</li>
+ *             <li>2. Generates (but doesn't store locally!) and sends keys for all statically configured groups
+ *                 that are not present in the metastore.</li>
+ *             <li>3. Stores all keys received from the coordinator in the local store.</li>
+ *         </ul>
+ *     </li>
+ *     <li>Coordinator:
+ *         <ul>
+ *             <li>1. Checks that the joining node's master key digest equals the local one. If not, the join is rejected.</li>
+ *             <li>2. Checks that all stored keys from the joining node equal the locally stored keys. If not, the join is rejected.</li>
+ *             <li>3. Collects all stored keys and sends them to the joining node.</li>
+ *         </ul>
+ *     </li>
+ *     <li>All nodes:
+ *         <ul>
+ *             <li>1. If a new key for a group doesn't exist locally, it is added to the local store.</li>
+ *             <li>2. If a new key for a group already exists locally, the received key is skipped.</li>
+ *         </ul>
+ *     </li>
+ * </ul>
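The "all nodes" rule of the protocol (a received key is stored only when no key for that group exists locally; an existing local key always wins) is essentially a `putIfAbsent` merge. A minimal sketch of that rule, with hypothetical names (`GroupKeyMerge` is not part of Ignite; the real manager works against its `grpEncKeys` map and the metastore):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of the "all nodes" key-merge rule described above. */
class GroupKeyMerge {
    /**
     * @param local    Keys already stored on this node, by group id.
     * @param received Keys received from the coordinator (or a joining node).
     * @return Merged view: every local key is kept, received keys fill the gaps.
     */
    static Map<Integer, byte[]> merge(Map<Integer, byte[]> local, Map<Integer, byte[]> received) {
        Map<Integer, byte[]> res = new ConcurrentHashMap<>(local);

        // A received key is stored only when no key for the group exists locally;
        // a key that already exists locally is skipped.
        for (Map.Entry<Integer, byte[]> e : received.entrySet())
            res.putIfAbsent(e.getKey(), e.getValue());

        return res;
    }
}
```

This mirrors what `onGridDataReceived` and `onJoiningNodeDataReceived` do below: both check `groupKey(grpId) == null` before storing a received key.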
+ * + * @see GridCacheProcessor#generateEncryptionKeysAndStartCacheAfter(int, GridPlainClosure) + */ +public class GridEncryptionManager extends GridManagerAdapter implements MetastorageLifecycleListener, + IgniteChangeGlobalStateSupport { + /** + * Cache encryption introduced in this Ignite version. + */ + private static final IgniteProductVersion CACHE_ENCRYPTION_SINCE = IgniteProductVersion.fromString("2.7.0"); + + /** Synchronization mutex. */ + private final Object metaStorageMux = new Object(); + + /** Synchronization mutex for an generate encryption keys operations. */ + private final Object genEcnKeyMux = new Object(); + + /** Disconnected flag. */ + private volatile boolean disconnected; + + /** Stopped flag. */ + private volatile boolean stopped; + + /** Flag to enable/disable write to metastore on cluster state change. */ + private volatile boolean writeToMetaStoreEnabled; + + /** Prefix for a encryption group key in meta store. */ + public static final String ENCRYPTION_KEY_PREFIX = "grp-encryption-key-"; + + /** Encryption key predicate for meta store. */ + private static final IgnitePredicate ENCRYPTION_KEY_PREFIX_PRED = + (IgnitePredicate)key -> key.startsWith(ENCRYPTION_KEY_PREFIX); + + /** Group encryption keys. */ + private final ConcurrentHashMap grpEncKeys = new ConcurrentHashMap<>(); + + /** Pending generate encryption key futures. */ + private ConcurrentMap genEncKeyFuts = new ConcurrentHashMap<>(); + + /** Metastorage. */ + private volatile ReadWriteMetastorage metaStorage; + + /** I/O message listener. */ + private GridMessageListener ioLsnr; + + /** System discovery message listener. */ + private DiscoveryEventListener discoLsnr; + + /** + * @param ctx Kernel context. 
+ */ + public GridEncryptionManager(GridKernalContext ctx) { + super(ctx, ctx.config().getEncryptionSpi()); + + ctx.internalSubscriptionProcessor().registerMetastorageListener(this); + } + + /** {@inheritDoc} */ + @Override public void start() throws IgniteCheckedException { + startSpi(); + + if (getSpi().masterKeyDigest() != null) + ctx.addNodeAttribute(ATTR_ENCRYPTION_MASTER_KEY_DIGEST, getSpi().masterKeyDigest()); + + ctx.event().addDiscoveryEventListener(discoLsnr = (evt, discoCache) -> { + UUID leftNodeId = evt.eventNode().id(); + + synchronized (genEcnKeyMux) { + Iterator> futsIter = + genEncKeyFuts.entrySet().iterator(); + + while (futsIter.hasNext()) { + GenerateEncryptionKeyFuture fut = futsIter.next().getValue(); + + if (!F.eq(leftNodeId, fut.nodeId())) + return; + + try { + futsIter.remove(); + + sendGenerateEncryptionKeyRequest(fut); + + genEncKeyFuts.put(fut.id(), fut); + } + catch (IgniteCheckedException e) { + fut.onDone(null, e); + } + } + } + }, EVT_NODE_LEFT, EVT_NODE_FAILED); + + ctx.io().addMessageListener(TOPIC_GEN_ENC_KEY, ioLsnr = (nodeId, msg, plc) -> { + synchronized (genEcnKeyMux) { + if (msg instanceof GenerateEncryptionKeyRequest) { + GenerateEncryptionKeyRequest req = (GenerateEncryptionKeyRequest)msg; + + assert req.keyCount() != 0; + + List encKeys = new ArrayList<>(req.keyCount()); + + for (int i = 0; i < req.keyCount(); i++) + encKeys.add(getSpi().encryptKey(getSpi().create())); + + try { + ctx.io().sendToGridTopic(nodeId, TOPIC_GEN_ENC_KEY, + new GenerateEncryptionKeyResponse(req.id(), encKeys), SYSTEM_POOL); + } + catch (IgniteCheckedException e) { + U.error(log, "Unable to send generate key response[nodeId=" + nodeId + "]"); + } + } + else { + GenerateEncryptionKeyResponse resp = (GenerateEncryptionKeyResponse)msg; + + GenerateEncryptionKeyFuture fut = genEncKeyFuts.get(resp.requestId()); + + if (fut != null) + fut.onDone(resp.encryptionKeys(), null); + else + U.warn(log, "Response received for a unknown request.[reqId=" + 
resp.requestId() + "]"); + } + } + }); + } + + /** {@inheritDoc} */ + @Override public void stop(boolean cancel) throws IgniteCheckedException { + stopSpi(); + } + + /** {@inheritDoc} */ + @Override protected void onKernalStart0() throws IgniteCheckedException { + // No-op. + } + + /** {@inheritDoc} */ + @Override protected void onKernalStop0(boolean cancel) { + synchronized (genEcnKeyMux) { + stopped = true; + + if (ioLsnr != null) + ctx.io().removeMessageListener(TOPIC_GEN_ENC_KEY, ioLsnr); + + if (discoLsnr != null) + ctx.event().removeDiscoveryEventListener(discoLsnr, EVT_NODE_LEFT, EVT_NODE_FAILED); + + cancelFutures("Kernal stopped."); + } + } + + /** {@inheritDoc} */ + @Override public void onDisconnected(IgniteFuture reconnectFut) { + synchronized (genEcnKeyMux) { + assert !disconnected; + + disconnected = true; + + cancelFutures("Client node was disconnected from topology (operation result is unknown)."); + } + } + + /** {@inheritDoc} */ + @Override public IgniteInternalFuture onReconnected(boolean clusterRestarted) { + synchronized (genEcnKeyMux) { + assert disconnected; + + disconnected = false; + + return null; + } + } + + /** + * Callback for local join. + */ + public void onLocalJoin() { + if (notCoordinator()) + return; + + //We can't store keys before node join to cluster(on statically configured cache registration). + //Because, keys should be received from cluster. + //Otherwise, we would generate different keys on each started node. + //So, after starting, coordinator saves locally newly generated encryption keys. + //And sends that keys to every joining node. + synchronized (metaStorageMux) { + //Keys read from meta storage. + HashMap knownEncKeys = knownEncryptionKeys(); + + //Generated(not saved!) keys for a new caches. + //Configured statically in config, but doesn't stored on the disk. + HashMap newEncKeys = + newEncryptionKeys(knownEncKeys == null ? 
Collections.EMPTY_SET : knownEncKeys.keySet()); + + if (newEncKeys == null) + return; + + //We can store keys to the disk, because we are on a coordinator. + for (Map.Entry entry : newEncKeys.entrySet()) { + groupKey(entry.getKey(), entry.getValue()); + + U.quietAndInfo(log, "Added encryption key on local join [grpId=" + entry.getKey() + "]"); + } + } + } + + /** {@inheritDoc} */ + @Nullable @Override public IgniteNodeValidationResult validateNode(ClusterNode node, + JoiningNodeDiscoveryData discoData) { + IgniteNodeValidationResult res = super.validateNode(node, discoData); + + if (res != null) + return res; + + if (node.isClient()) + return null; + + res = validateNode(node); + + if (res != null) + return res; + + if (!discoData.hasJoiningNodeData()) { + U.quietAndInfo(log, "Joining node doesn't have encryption data [node=" + node.id() + "]"); + + return null; + } + + NodeEncryptionKeys nodeEncKeys = (NodeEncryptionKeys)discoData.joiningNodeData(); + + if (nodeEncKeys == null || F.isEmpty(nodeEncKeys.knownKeys)) { + U.quietAndInfo(log, "Joining node doesn't have stored group keys [node=" + node.id() + "]"); + + return null; + } + + for (Map.Entry entry : nodeEncKeys.knownKeys.entrySet()) { + Serializable locEncKey = grpEncKeys.get(entry.getKey()); + + if (locEncKey == null) + continue; + + Serializable rmtKey = getSpi().decryptKey(entry.getValue()); + + if (F.eq(locEncKey, rmtKey)) + continue; + + return new IgniteNodeValidationResult(ctx.localNodeId(), + "Cache key differs! Node join is rejected. [node=" + node.id() + ", grp=" + entry.getKey() + "]", + "Cache key differs! 
Node join is rejected."); + } + + return null; + } + + /** {@inheritDoc} */ + @Nullable @Override public IgniteNodeValidationResult validateNode(ClusterNode node) { + IgniteNodeValidationResult res = super.validateNode(node); + + if (res != null) + return res; + + if (node.isClient()) + return null; + + byte[] lclMkDig = getSpi().masterKeyDigest(); + + byte[] rmtMkDig = node.attribute(ATTR_ENCRYPTION_MASTER_KEY_DIGEST); + + if (Arrays.equals(lclMkDig, rmtMkDig)) + return null; + + return new IgniteNodeValidationResult(ctx.localNodeId(), + "Master key digest differs! Node join is rejected. [node=" + node.id() + "]", + "Master key digest differs! Node join is rejected."); + } + + /** {@inheritDoc} */ + @Override public void collectJoiningNodeData(DiscoveryDataBag dataBag) { + HashMap knownEncKeys = knownEncryptionKeys(); + + HashMap newKeys = + newEncryptionKeys(knownEncKeys == null ? Collections.EMPTY_SET : knownEncKeys.keySet()); + + if ((knownEncKeys == null && newKeys == null) || dataBag.isJoiningNodeClient()) + return; + + if (log.isInfoEnabled()) { + String knownGrps = F.isEmpty(knownEncKeys) ? null : F.concat(knownEncKeys.keySet(), ","); + + if (knownGrps != null) + U.quietAndInfo(log, "Sending stored group keys to coordinator [grps=" + knownGrps + "]"); + + String newGrps = F.isEmpty(newKeys) ? 
null : F.concat(newKeys.keySet(), ","); + + if (newGrps != null) + U.quietAndInfo(log, "Sending new group keys to coordinator [grps=" + newGrps + "]"); + } + + dataBag.addJoiningNodeData(ENCRYPTION_MGR.ordinal(), new NodeEncryptionKeys(knownEncKeys, newKeys)); + } + + /** {@inheritDoc} */ + @Override public void onJoiningNodeDataReceived(JoiningNodeDiscoveryData data) { + NodeEncryptionKeys nodeEncryptionKeys = (NodeEncryptionKeys)data.joiningNodeData(); + + if (nodeEncryptionKeys == null || nodeEncryptionKeys.newKeys == null || ctx.clientNode()) + return; + + for (Map.Entry entry : nodeEncryptionKeys.newKeys.entrySet()) { + if (groupKey(entry.getKey()) == null) { + U.quietAndInfo(log, "Store group key received from joining node [node=" + + data.joiningNodeId() + ", grp=" + entry.getKey() + "]"); + + groupKey(entry.getKey(), entry.getValue()); + } + else { + U.quietAndInfo(log, "Skip group key received from joining node. Already exists. [node=" + + data.joiningNodeId() + ", grp=" + entry.getKey() + "]"); + } + } + } + + /** {@inheritDoc} */ + @Override public void collectGridNodeData(DiscoveryDataBag dataBag) { + if (dataBag.isJoiningNodeClient() || dataBag.commonDataCollectedFor(ENCRYPTION_MGR.ordinal())) + return; + + HashMap knownEncKeys = knownEncryptionKeys(); + + HashMap newKeys = + newEncryptionKeys(knownEncKeys == null ? 
Collections.EMPTY_SET : knownEncKeys.keySet()); + + if (knownEncKeys == null) + knownEncKeys = newKeys; + else if (newKeys != null) { + for (Map.Entry entry : newKeys.entrySet()) { + byte[] old = knownEncKeys.putIfAbsent(entry.getKey(), entry.getValue()); + + assert old == null; + } + } + + dataBag.addGridCommonData(ENCRYPTION_MGR.ordinal(), knownEncKeys); + } + + /** {@inheritDoc} */ + @Override public void onGridDataReceived(GridDiscoveryData data) { + Map encKeysFromCluster = (Map)data.commonData(); + + if (F.isEmpty(encKeysFromCluster)) + return; + + for (Map.Entry entry : encKeysFromCluster.entrySet()) { + if (groupKey(entry.getKey()) == null) { + U.quietAndInfo(log, "Store group key received from coordinator [grp=" + entry.getKey() + "]"); + + groupKey(entry.getKey(), entry.getValue()); + } + else { + U.quietAndInfo(log, "Skip group key received from coordinator. Already exists. [grp=" + + entry.getKey() + "]"); + } + } + } + + /** + * Returns group encryption key. + * + * @param grpId Group id. + * @return Group encryption key. + */ + @Nullable public Serializable groupKey(int grpId) { + if (grpEncKeys.isEmpty()) + return null; + + return grpEncKeys.get(grpId); + } + + /** + * Store group encryption key. + * + * @param grpId Group id. + * @param encGrpKey Encrypted group key. + */ + public void groupKey(int grpId, byte[] encGrpKey) { + assert !grpEncKeys.containsKey(grpId); + + Serializable encKey = getSpi().decryptKey(encGrpKey); + + synchronized (metaStorageMux) { + if (log.isDebugEnabled()) + log.debug("Key added. [grp=" + grpId + "]"); + + grpEncKeys.put(grpId, encKey); + + writeToMetaStore(grpId, encGrpKey); + } + } + + /** + * Removes encryption key. + * + * @param grpId Group id. 
+ */ + private void removeGroupKey(int grpId) { + synchronized (metaStorageMux) { + ctx.cache().context().database().checkpointReadLock(); + + try { + grpEncKeys.remove(grpId); + + metaStorage.remove(ENCRYPTION_KEY_PREFIX + grpId); + + if (log.isDebugEnabled()) + log.debug("Key removed. [grp=" + grpId + "]"); + } + catch (IgniteCheckedException e) { + U.error(log, "Failed to clear meta storage", e); + } + finally { + ctx.cache().context().database().checkpointReadUnlock(); + } + } + } + + /** + * Callback for cache group start event. + * @param grpId Group id. + * @param encKey Encryption key + */ + public void beforeCacheGroupStart(int grpId, @Nullable byte[] encKey) { + if (encKey == null || ctx.clientNode()) + return; + + groupKey(grpId, encKey); + } + + /** + * Callback for cache group destroy event. + * @param grpId Group id. + */ + public void onCacheGroupDestroyed(int grpId) { + if (groupKey(grpId) == null) + return; + + removeGroupKey(grpId); + } + + /** {@inheritDoc} */ + @Override public void onReadyForRead(ReadOnlyMetastorage metastorage) { + try { + Map encKeys = metastorage.readForPredicate(ENCRYPTION_KEY_PREFIX_PRED); + + if (encKeys.isEmpty()) + return; + + for (String key : encKeys.keySet()) { + Integer grpId = Integer.valueOf(key.replace(ENCRYPTION_KEY_PREFIX, "")); + + byte[] encGrpKey = (byte[])encKeys.get(key); + + grpEncKeys.putIfAbsent(grpId, getSpi().decryptKey(encGrpKey)); + } + + if (!grpEncKeys.isEmpty()) { + U.quietAndInfo(log, "Encryption keys loaded from metastore. 
[grps=" + + F.concat(grpEncKeys.keySet(), ",") + "]"); + } + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to read encryption keys state.", e); + } + } + + /** {@inheritDoc} */ + @Override public void onReadyForReadWrite(ReadWriteMetastorage metaStorage) throws IgniteCheckedException { + synchronized (metaStorageMux) { + this.metaStorage = metaStorage; + + writeToMetaStoreEnabled = true; + + writeAllToMetaStore(); + } + } + + /** {@inheritDoc} */ + @Override public void onActivate(GridKernalContext kctx) throws IgniteCheckedException { + synchronized (metaStorageMux) { + writeToMetaStoreEnabled = metaStorage != null; + + if (writeToMetaStoreEnabled) + writeAllToMetaStore(); + } + } + + /** {@inheritDoc} */ + @Override public void onDeActivate(GridKernalContext kctx) { + synchronized (metaStorageMux) { + writeToMetaStoreEnabled = false; + } + } + + /** + * @param keyCnt Count of keys to generate. + * @return Future that will contain results of generation. + */ + public IgniteInternalFuture> generateKeys(int keyCnt) { + if (keyCnt == 0 || !ctx.clientNode()) + return new GridFinishedFuture<>(createKeys(keyCnt)); + + synchronized (genEcnKeyMux) { + if (disconnected || stopped) { + return new GridFinishedFuture<>( + new IgniteFutureCancelledException("Node " + (stopped ? 
"stopped" : "disconnected"))); + } + + try { + GenerateEncryptionKeyFuture genEncKeyFut = new GenerateEncryptionKeyFuture(keyCnt); + + sendGenerateEncryptionKeyRequest(genEncKeyFut); + + genEncKeyFuts.put(genEncKeyFut.id(), genEncKeyFut); + + return genEncKeyFut; + } + catch (IgniteCheckedException e) { + return new GridFinishedFuture<>(e); + } + } + } + + /** */ + private void sendGenerateEncryptionKeyRequest(GenerateEncryptionKeyFuture fut) throws IgniteCheckedException { + ClusterNode rndNode = U.randomServerNode(ctx); + + if (rndNode == null) + throw new IgniteCheckedException("There is no node to send GenerateEncryptionKeyRequest to"); + + GenerateEncryptionKeyRequest req = new GenerateEncryptionKeyRequest(fut.keyCount()); + + fut.id(req.id()); + fut.nodeId(rndNode.id()); + + ctx.io().sendToGridTopic(rndNode.id(), TOPIC_GEN_ENC_KEY, req, SYSTEM_POOL); + } + + /** + * Writes all unsaved grpEncKeys to metaStorage. + * @throws IgniteCheckedException If failed. + */ + private void writeAllToMetaStore() throws IgniteCheckedException { + for (Map.Entry entry : grpEncKeys.entrySet()) { + if (metaStorage.read(ENCRYPTION_KEY_PREFIX + entry.getKey()) != null) + continue; + + writeToMetaStore(entry.getKey(), getSpi().encryptKey(entry.getValue())); + } + } + + /** + * Checks cache encryption supported by all nodes in cluster. + * + * @throws IgniteCheckedException If check fails. + */ + public void checkEncryptedCacheSupported() throws IgniteCheckedException { + Collection nodes = ctx.grid().cluster().nodes(); + + for (ClusterNode node : nodes) { + if (CACHE_ENCRYPTION_SINCE.compareTo(node.version()) > 0) { + throw new IgniteCheckedException("All nodes in cluster should be 2.7.0 or greater " + + "to create encrypted cache! [nodeId=" + node.id() + "]"); + } + } + } + + /** {@inheritDoc} */ + @Override public DiscoveryDataExchangeType discoveryDataType() { + return ENCRYPTION_MGR; + } + + /** + * Writes encryption key to metastore. + * + * @param grpId Group id. 
+ * @param encGrpKey Group encryption key. + */ + private void writeToMetaStore(int grpId, byte[] encGrpKey) { + if (metaStorage == null || !writeToMetaStoreEnabled) + return; + + ctx.cache().context().database().checkpointReadLock(); + + try { + metaStorage.write(ENCRYPTION_KEY_PREFIX + grpId, encGrpKey); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to write cache group encryption key [grpId=" + grpId + ']', e); + } + finally { + ctx.cache().context().database().checkpointReadUnlock(); + } + } + + /** + * @param knownKeys Saved keys set. + * @return New keys for local cache groups. + */ + @Nullable private HashMap newEncryptionKeys(Set knownKeys) { + Map grpDescs = ctx.cache().cacheGroupDescriptors(); + + HashMap newKeys = null; + + for (CacheGroupDescriptor grpDesc : grpDescs.values()) { + if (knownKeys.contains(grpDesc.groupId()) || !grpDesc.config().isEncryptionEnabled()) + continue; + + if (newKeys == null) + newKeys = new HashMap<>(); + + newKeys.put(grpDesc.groupId(), getSpi().encryptKey(getSpi().create())); + } + + return newKeys; + } + + /** + * @return Local encryption keys. + */ + @Nullable private HashMap knownEncryptionKeys() { + if (F.isEmpty(grpEncKeys)) + return null; + + HashMap knownKeys = new HashMap<>(); + + for (Map.Entry entry : grpEncKeys.entrySet()) + knownKeys.put(entry.getKey(), getSpi().encryptKey(entry.getValue())); + + return knownKeys; + } + + /** + * Generates required count of encryption keys. + * + * @param keyCnt Keys count. + * @return Collection with newly generated encryption keys. 
+ */ + private Collection createKeys(int keyCnt) { + if (keyCnt == 0) + return Collections.emptyList(); + + List encKeys = new ArrayList<>(keyCnt); + + for(int i=0; i node.order()) + crd = node; + } + + return crd == null || !F.eq(ctx.localNodeId(), crd.id()); + } + } + + /** */ + public static class NodeEncryptionKeys implements Serializable { + /** */ + private static final long serialVersionUID = 0L; + + /** */ + NodeEncryptionKeys(Map knownKeys, Map newKeys) { + this.knownKeys = knownKeys; + this.newKeys = newKeys; + } + + /** Known i.e. stored in {@code ReadWriteMetastorage} keys from node. */ + Map knownKeys; + + /** New keys i.e. keys for a local statically configured caches. */ + Map newKeys; + } + + /** */ + @SuppressWarnings("ExternalizableWithoutPublicNoArgConstructor") + private class GenerateEncryptionKeyFuture extends GridFutureAdapter> { + /** */ + private IgniteUuid id; + + /** */ + private int keyCnt; + + /** */ + private UUID nodeId; + + /** + * @param keyCnt Count of keys to generate. + */ + private GenerateEncryptionKeyFuture(int keyCnt) { + this.keyCnt = keyCnt; + } + + /** {@inheritDoc} */ + @Override public boolean onDone(@Nullable Collection res, @Nullable Throwable err) { + // Make sure to remove future before completion. 
+ genEncKeyFuts.remove(id, this); + + return super.onDone(res, err); + } + + /** */ + public IgniteUuid id() { + return id; + } + + /** */ + public void id(IgniteUuid id) { + this.id = id; + } + + /** */ + public UUID nodeId() { + return nodeId; + } + + /** */ + public void nodeId(UUID nodeId) { + this.nodeId = nodeId; + } + + /** */ + public int keyCount() { + return keyCnt; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(GenerateEncryptionKeyFuture.class, this); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageManager.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageManager.java index d4daab85c0ed7..92a2eefe0bcf3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageManager.java @@ -316,6 +316,9 @@ public void record(DiscoveryEvent evt, DiscoCache discoCache) { private void record0(Event evt, Object... 
params) { assert evt != null; + if (ctx.recoveryMode()) + return; + if (!enterBusy()) return; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageMessage.java index 515500b91d0f1..fd5326cf0577c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/managers/eventstorage/GridEventStorageMessage.java @@ -445,4 +445,4 @@ void exceptionBytes(byte[] exBytes) { @Override public String toString() { return S.toString(GridEventStorageMessage.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/mem/DirectMemoryProvider.java b/modules/core/src/main/java/org/apache/ignite/internal/mem/DirectMemoryProvider.java index a90c6b80b9e33..03d386bdfc326 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/mem/DirectMemoryProvider.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/mem/DirectMemoryProvider.java @@ -27,9 +27,11 @@ public interface DirectMemoryProvider { public void initialize(long[] chunkSizes); /** - * Shuts down the provider. Will deallocate all previously allocated regions. + * Shuts down the provider. + * + * @param deallocate {@code True} to deallocate memory, {@code false} to allow memory reuse. */ - public void shutdown(); + public void shutdown(boolean deallocate); /** * Attempts to allocate next memory region. Will return {@code null} if no more regions are available. 
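The new `shutdown(boolean deallocate)` contract separates a hard stop (memory is freed) from a soft stop (regions are kept so a later `initialize()`/`nextRegion()` cycle can hand the same chunks out again, as `UnsafeMemoryProvider` does below with its `used` cursor). A toy sketch of that contract, with hypothetical names and plain arrays standing in for off-heap chunks:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the reuse semantics added to DirectMemoryProvider. */
class ReusableProvider {
    /** Stand-ins for allocated memory chunks. */
    private final List<long[]> regions = new ArrayList<>();

    /** Cursor over chunks handed out since the last (re)start. */
    private int used;

    long[] nextRegion(int size) {
        if (used < regions.size())
            return regions.get(used++); // Reuse a chunk kept by shutdown(false).

        long[] region = new long[size]; // "Allocate" a fresh chunk.

        regions.add(region);

        used++;

        return region;
    }

    void shutdown(boolean deallocate) {
        if (deallocate)
            regions.clear(); // Hard stop: drop all chunks.

        used = 0; // Either way, rewind the cursor for a possible restart.
    }
}
```

Under this contract `shutdown(false)` followed by another allocation cycle returns the very same regions, which is what lets page memory be restarted without re-touching all of its memory.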
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/mem/file/MappedFileMemoryProvider.java b/modules/core/src/main/java/org/apache/ignite/internal/mem/file/MappedFileMemoryProvider.java index 54b4af464f6ca..67e86f5ff779c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/mem/file/MappedFileMemoryProvider.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/mem/file/MappedFileMemoryProvider.java @@ -30,7 +30,9 @@ import org.apache.ignite.internal.util.typedef.internal.U; /** - * + * Memory provider implementation based on memory mapped file. + *

+ * Doesn't support memory reuse semantics. */ public class MappedFileMemoryProvider implements DirectMemoryProvider { /** */ @@ -101,7 +103,7 @@ public MappedFileMemoryProvider(IgniteLogger log, File allocationPath) { } /** {@inheritDoc} */ - @Override public void shutdown() { + @Override public void shutdown(boolean deallocate) { if (mappedFiles != null) { for (MappedFile file : mappedFiles) { try { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java index d9646487e8dfa..8cb8119001a23 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java @@ -29,7 +29,9 @@ import org.apache.ignite.internal.util.typedef.internal.U; /** - * + * Memory provider implementation based on unsafe memory access. + *

+ * Supports memory reuse semantics. */ public class UnsafeMemoryProvider implements DirectMemoryProvider { /** */ @@ -44,6 +46,9 @@ public class UnsafeMemoryProvider implements DirectMemoryProvider { /** Flag shows if current memory provider have been already initialized. */ private boolean isInit; + /** */ + private int used = 0; + /** * @param log Ignite logger to use. */ @@ -54,7 +59,7 @@ public UnsafeMemoryProvider(IgniteLogger log) { /** {@inheritDoc} */ @Override public void initialize(long[] sizes) { if (isInit) - throw new IgniteException("Second initialization does not allowed for current provider"); + return; this.sizes = sizes; @@ -64,24 +69,32 @@ public UnsafeMemoryProvider(IgniteLogger log) { } /** {@inheritDoc} */ - @Override public void shutdown() { + @Override public void shutdown(boolean deallocate) { if (regions != null) { for (Iterator it = regions.iterator(); it.hasNext(); ) { DirectMemoryRegion chunk = it.next(); - GridUnsafe.freeMemory(chunk.address()); + if (deallocate) { + GridUnsafe.freeMemory(chunk.address()); - // Safety. - it.remove(); + // Safety. 
+ it.remove(); + } } + + if (!deallocate) + used = 0; } } /** {@inheritDoc} */ @Override public DirectMemoryRegion nextRegion() { - if (regions.size() == sizes.length) + if (used == sizes.length) return null; + if (used < regions.size()) + return regions.get(used++); + long chunkSize = sizes[regions.size()]; long ptr; @@ -111,6 +124,8 @@ public UnsafeMemoryProvider(IgniteLogger log) { regions.add(region); + used++; + return region; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageIdAllocator.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageIdAllocator.java index c6aeabe087975..d91d31da32957 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageIdAllocator.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageIdAllocator.java @@ -35,6 +35,12 @@ public interface PageIdAllocator { /** Special partition reserved for index space. */ public static final int INDEX_PARTITION = 0xFFFF; + /** Old special partition reserved for metastore space. */ + public static final int OLD_METASTORE_PARTITION = 0x0; + + /** Special partition reserved for metastore space. */ + public static final int METASTORE_PARTITION = 0x1; + /** * Allocates a page from the space for the given partition ID and the given flags. 
* diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageMemory.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageMemory.java index 6f2e2c9ec36ed..3ef0ec7250745 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageMemory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/PageMemory.java @@ -18,16 +18,36 @@ package org.apache.ignite.internal.pagemem; import java.nio.ByteBuffer; +import org.apache.ignite.IgniteException; import org.apache.ignite.lifecycle.LifecycleAware; /** */ -public interface PageMemory extends LifecycleAware, PageIdAllocator, PageSupport { +public interface PageMemory extends PageIdAllocator, PageSupport { + /** + * Start page memory. + */ + public void start() throws IgniteException; + + /** + * Stop page memory. + * + * @param deallocate {@code True} to deallocate memory, {@code false} to allow memory reuse on subsequent {@link #start()} + * @throws IgniteException + */ + public void stop(boolean deallocate) throws IgniteException; + /** * @return Page size in bytes. */ public int pageSize(); + /** + * @param grpId Group id. + * @return Page size without encryption overhead. + */ + public int realPageSize(int grpId); + /** * @return Page size with system overhead, in bytes. */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoStoreImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoStoreImpl.java index c5eba60484d34..4ba19c2d71594 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoStoreImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoStoreImpl.java @@ -163,6 +163,11 @@ public class PageMemoryNoStoreImpl implements PageMemory { /** Shared context. */ private final GridCacheSharedContext ctx; + /** + * Marker that stop was invoked and memory is not supposed for any usage. 
+ */ + private volatile boolean stopped; + /** * @param log Logger. * @param directMemoryProvider Memory allocator to use. @@ -202,6 +207,8 @@ public PageMemoryNoStoreImpl( /** {@inheritDoc} */ @Override public void start() throws IgniteException { + stopped = false; + long startSize = dataRegionCfg.getInitialSize(); long maxSize = dataRegionCfg.getMaxSize(); @@ -239,11 +246,13 @@ public PageMemoryNoStoreImpl( /** {@inheritDoc} */ @SuppressWarnings("OverlyStrongTypeCast") - @Override public void stop() throws IgniteException { + @Override public void stop(boolean deallocate) throws IgniteException { if (log.isDebugEnabled()) log.debug("Stopping page memory."); - directMemoryProvider.shutdown(); + stopped = true; + + directMemoryProvider.shutdown(deallocate); if (directMemoryProvider instanceof Closeable) { try { @@ -262,6 +271,8 @@ public PageMemoryNoStoreImpl( /** {@inheritDoc} */ @Override public long allocatePage(int grpId, int partId, byte flags) { + assert !stopped; + long relPtr = borrowFreePage(); long absPtr = 0; @@ -326,6 +337,8 @@ public PageMemoryNoStoreImpl( /** {@inheritDoc} */ @Override public boolean freePage(int cacheId, long pageId) { + assert !stopped; + releaseFreePage(pageId); return true; @@ -341,6 +354,11 @@ public PageMemoryNoStoreImpl( return sysPageSize; } + /** {@inheritDoc} */ + @Override public int realPageSize(int grpId) { + return pageSize(); + } + /** * @return Next index. 
*/ @@ -440,6 +458,8 @@ private long fromSegmentIndex(int segIdx, long pageIdx) { /** {@inheritDoc} */ @Override public long acquirePage(int cacheId, long pageId) { + assert !stopped; + int pageIdx = PageIdUtils.pageIndex(pageId); Segment seg = segment(pageIdx); @@ -449,6 +469,8 @@ private long fromSegmentIndex(int segIdx, long pageIdx) { /** {@inheritDoc} */ @Override public void releasePage(int cacheId, long pageId, long page) { + assert !stopped; + if (trackAcquiredPages) { Segment seg = segment(PageIdUtils.pageIndex(pageId)); @@ -458,6 +480,8 @@ private long fromSegmentIndex(int segIdx, long pageIdx) { /** {@inheritDoc} */ @Override public long readLock(int cacheId, long pageId, long page) { + assert !stopped; + if (rwLock.readLock(page + LOCK_OFFSET, PageIdUtils.tag(pageId))) return page + PAGE_OVERHEAD; @@ -466,6 +490,8 @@ private long fromSegmentIndex(int segIdx, long pageIdx) { /** {@inheritDoc} */ @Override public long readLockForce(int cacheId, long pageId, long page) { + assert !stopped; + if (rwLock.readLock(page + LOCK_OFFSET, -1)) return page + PAGE_OVERHEAD; @@ -474,11 +500,15 @@ private long fromSegmentIndex(int segIdx, long pageIdx) { /** {@inheritDoc} */ @Override public void readUnlock(int cacheId, long pageId, long page) { + assert !stopped; + rwLock.readUnlock(page + LOCK_OFFSET); } /** {@inheritDoc} */ @Override public long writeLock(int cacheId, long pageId, long page) { + assert !stopped; + if (rwLock.writeLock(page + LOCK_OFFSET, PageIdUtils.tag(pageId))) return page + PAGE_OVERHEAD; @@ -487,6 +517,8 @@ private long fromSegmentIndex(int segIdx, long pageIdx) { /** {@inheritDoc} */ @Override public long tryWriteLock(int cacheId, long pageId, long page) { + assert !stopped; + if (rwLock.tryWriteLock(page + LOCK_OFFSET, PageIdUtils.tag(pageId))) return page + PAGE_OVERHEAD; @@ -501,6 +533,8 @@ private long fromSegmentIndex(int segIdx, long pageIdx) { Boolean walPlc, boolean dirtyFlag ) { + assert !stopped; + long actualId = 
PageIO.getPageId(page + PAGE_OVERHEAD); rwLock.writeUnlock(page + LOCK_OFFSET, PageIdUtils.tag(actualId)); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/IgnitePageStoreManager.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/IgnitePageStoreManager.java index d7c61e9b114ff..1408383be168e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/IgnitePageStoreManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/IgnitePageStoreManager.java @@ -19,6 +19,7 @@ import java.nio.ByteBuffer; import java.util.Map; +import java.util.function.Predicate; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.pagemem.PageMemory; @@ -243,10 +244,19 @@ public void initializeForCache(CacheGroupDescriptor grpDesc, StoredCacheData cac public void cleanupPersistentSpace(CacheConfiguration cacheConfiguration) throws IgniteCheckedException; /** - * Cleanup persistent space for all caches. + * Cleanup persistent space for all caches except metastore. */ public void cleanupPersistentSpace() throws IgniteCheckedException; + /** + * Cleanup cache store if it matches the provided predicate and the matched + * store was previously initialized. + * + * @param cacheGrpPred Predicate to match cache group stores to clean by group id. + * @param cleanFiles {@code True} to delete all persisted files related to a particular store. + */ + public void cleanupPageStoreIfMatch(Predicate cacheGrpPred, boolean cleanFiles); + /** * Creates and initializes cache work directory retrieved from {@code cacheCfg}. 
* diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/PageStore.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/PageStore.java index 42d584d5eba1c..7a7f964fc26ed 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/PageStore.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/store/PageStore.java @@ -20,6 +20,7 @@ import org.apache.ignite.IgniteCheckedException; import java.nio.ByteBuffer; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; /** * Persistent store of pages. @@ -101,4 +102,30 @@ public interface PageStore { * @return Page store version. */ public int version(); + + /** + * @param cleanFile {@code True} to delete file. + * @throws StorageException If failed. + */ + public void stop(boolean cleanFile) throws StorageException; + + /** + * Starts recovery process. + */ + public void beginRecover(); + + /** + * Ends recovery process. + * + * @throws StorageException If failed. + */ + public void finishRecover() throws StorageException; + + /** + * Truncates and deletes partition file. + * + * @param tag New partition tag. + * @throws StorageException If failed. 
+ */ + public void truncate(int tag) throws StorageException; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/IgniteWriteAheadLogManager.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/IgniteWriteAheadLogManager.java index 12fd3e94bd0fe..7b8333f17aa04 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/IgniteWriteAheadLogManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/IgniteWriteAheadLogManager.java @@ -19,10 +19,13 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; +import org.apache.ignite.internal.pagemem.wal.record.RolloverType; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; import org.apache.ignite.internal.processors.cache.GridCacheSharedManager; import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.processors.cluster.IgniteChangeGlobalStateSupport; +import org.apache.ignite.lang.IgniteBiPredicate; +import org.jetbrains.annotations.Nullable; /** * @@ -46,13 +49,15 @@ public interface IgniteWriteAheadLogManager extends GridCacheSharedManager, Igni /** * Resumes logging after start. When WAL manager is started, it will skip logging any updates until this * method is called to avoid logging changes induced by the state restore procedure. + * + * @throws IgniteCheckedException If fails. */ public void resumeLogging(WALPointer lastWrittenPtr) throws IgniteCheckedException; /** * Appends the given log entry to the write-ahead log. * - * @param entry entry to log. + * @param entry Entry to log. * @return WALPointer that may be passed to {@link #flush(WALPointer, boolean)} method to make sure the record is * written to the log. * @throws IgniteCheckedException If failed to construct log entry. 
@@ -60,6 +65,22 @@ public interface IgniteWriteAheadLogManager extends GridCacheSharedManager, Igni */ public WALPointer log(WALRecord entry) throws IgniteCheckedException, StorageException; + /** + * Appends the given log entry to the write-ahead log. If entry logging leads to a rollover, the caller can specify + * whether to write the entry to the current segment or to the next one. + + * @param entry Entry to log. + * @param rolloverType Rollover type. + * @return WALPointer that may be passed to {@link #flush(WALPointer, boolean)} method to make sure the record is + * written to the log. + * @throws IgniteCheckedException If failed to construct log entry. + * @throws StorageException If IO error occurred while writing log entry. + * + * @see RolloverType + */ + public WALPointer log(WALRecord entry, RolloverType rolloverType) + throws IgniteCheckedException, StorageException; + /** * Makes sure that all log entries written to the log up until the specified pointer are actually written * to the underlying storage. @@ -72,6 +93,16 @@ public interface IgniteWriteAheadLogManager extends GridCacheSharedManager, Igni */ public void flush(WALPointer ptr, boolean explicitFsync) throws IgniteCheckedException, StorageException; + /** + * Reads WAL record by the specified pointer. + * + * @param ptr WAL pointer. + * @return WAL record. + * @throws IgniteCheckedException If failed to read. + * @throws StorageException If IO error occurred while reading WAL entries. + */ + public WALRecord read(WALPointer ptr) throws IgniteCheckedException, StorageException; + /** * Invoke this method to iterate over the written log entries. * @@ -82,13 +113,26 @@ public interface IgniteWriteAheadLogManager extends GridCacheSharedManager, Igni */ public WALIterator replay(WALPointer start) throws IgniteCheckedException, StorageException; + /** + * Invoke this method to iterate over the written log entries. + * + * @param start Optional WAL pointer from which to start iteration. 
+ * @param recordDeserializeFilter Specify a filter to skip WAL records. Those records will not be explicitly deserialized. + * @return Records iterator. + * @throws IgniteException If failed to start iteration. + * @throws StorageException If IO error occurred while reading WAL entries. + */ + public WALIterator replay( + WALPointer start, + @Nullable IgniteBiPredicate recordDeserializeFilter + ) throws IgniteCheckedException, StorageException; + /** * Invoke this method to reserve WAL history since provided pointer and prevent it's deletion. * * @param start WAL pointer. - * @throws IgniteException If failed to reserve. */ - public boolean reserve(WALPointer start) throws IgniteCheckedException; + public boolean reserve(WALPointer start); /** * Invoke this method to release WAL history since provided pointer that was previously reserved. @@ -162,9 +206,4 @@ public interface IgniteWriteAheadLogManager extends GridCacheSharedManager, Igni * @param grpId Group id. */ public boolean disabled(int grpId); - - /** - * Cleanup all directories relating to WAL (e.g. work WAL dir, archive WAL dir). - */ - public void cleanupWalDirectories() throws IgniteCheckedException; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/WALIterator.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/WALIterator.java index 14fdfda8fe9cd..b3c9726f7f88c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/WALIterator.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/WALIterator.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.pagemem.wal; +import java.util.Optional; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; import org.apache.ignite.internal.util.lang.GridCloseableIterator; import org.apache.ignite.lang.IgniteBiTuple; @@ -25,5 +26,8 @@ * */ public interface WALIterator extends GridCloseableIterator> { - // Iterator alias. 
+ /** + * @return Pointer of last read valid record. Empty if no records were read. + */ + public Optional lastRead(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/DataRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/DataRecord.java index 7a4d6b8793a21..ef6c3bafbc307 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/DataRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/DataRecord.java @@ -76,6 +76,16 @@ public DataRecord(List writeEntries, long timestamp) { this.writeEntries = writeEntries; } + /** + * @param writeEntries Write entries. + * @return {@code this} for chaining. + */ + public DataRecord setWriteEntries(List writeEntries) { + this.writeEntries = writeEntries; + + return this; + } + /** * @return Collection of write entries. */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/EncryptedRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/EncryptedRecord.java new file mode 100644 index 0000000000000..234292b18e1e6 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/EncryptedRecord.java @@ -0,0 +1,60 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +/** + * Encrypted record from WAL. + * This type of record is returned from a {@code RecordDataSerializer} during offline WAL iteration. + */ +public class EncryptedRecord extends WALRecord implements WalRecordCacheGroupAware { + /** + * Group id. + */ + private int grpId; + + /** + * Type of plain record. + */ + private RecordType plainRecType; + + /** + * @param grpId Group id. + * @param plainRecType Plain record type. + */ + public EncryptedRecord(int grpId, RecordType plainRecType) { + this.grpId = grpId; + this.plainRecType = plainRecType; + } + + /** {@inheritDoc} */ + @Override public RecordType type() { + return RecordType.ENCRYPTED_RECORD; + } + + /** {@inheritDoc} */ + @Override public int groupId() { + return grpId; + } + + /** + * @return Type of plain record. + */ + public RecordType plainRecordType() { + return plainRecType; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/LazyDataEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/LazyDataEntry.java index 6b56da5b7b3e5..ba2fabc9f0eeb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/LazyDataEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/LazyDataEntry.java @@ -31,7 +31,7 @@ * Represents Data Entry ({@link #key}, {@link #val value}) pair update {@link #op operation}.
* This Data entry was not converted to key, value pair during record deserialization. */ -public class LazyDataEntry extends DataEntry { +public class LazyDataEntry extends DataEntry implements MarshalledDataEntry { /** */ private GridCacheSharedContext cctx; @@ -124,24 +124,23 @@ public LazyDataEntry( return val; } - /** @return Data Entry Key type code. See {@link CacheObject} for built-in value type codes */ - public byte getKeyType() { + /** {@inheritDoc} */ + @Override public byte getKeyType() { return keyType; } - /** @return Key value bytes. */ - public byte[] getKeyBytes() { + /** {@inheritDoc} */ + @Override public byte[] getKeyBytes() { return keyBytes; } - /** @return Data Entry Value type code. See {@link CacheObject} for built-in value type codes */ - public byte getValType() { + /** {@inheritDoc} */ + @Override public byte getValType() { return valType; } - /** @return Value value bytes. */ - public byte[] getValBytes() { + /** {@inheritDoc} */ + @Override public byte[] getValBytes() { return valBytes; } - } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/LazyMvccDataEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/LazyMvccDataEntry.java new file mode 100644 index 0000000000000..a7ad86f4eaf82 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/LazyMvccDataEntry.java @@ -0,0 +1,149 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; +import org.apache.ignite.internal.processors.cache.CacheObject; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheOperation; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.processors.cacheobject.IgniteCacheObjectProcessor; + +/** + * Represents Data Entry ({@link #key}, {@link #val value}) pair update {@link #op operation}.
+ * This Data entry was not converted to key, value pair during record deserialization. + */ +public class LazyMvccDataEntry extends MvccDataEntry implements MarshalledDataEntry { + /** */ + private GridCacheSharedContext cctx; + + /** Data Entry key type code. See {@link CacheObject} for built-in value type codes */ + private byte keyType; + + /** Key value bytes. */ + private byte[] keyBytes; + + /** Data Entry Value type code. See {@link CacheObject} for built-in value type codes */ + private byte valType; + + /** Value value bytes. */ + private byte[] valBytes; + + /** + * @param cctx Shared context. + * @param cacheId Cache ID. + * @param keyType Object type code for Key. + * @param keyBytes Data Entry Key value bytes. + * @param valType Object type code for Value. + * @param valBytes Data Entry Value value bytes. + * @param op Operation. + * @param nearXidVer Near transaction version. + * @param writeVer Write version. + * @param expireTime Expire time. + * @param partId Partition ID. + * @param partCnt Partition counter. + * @param mvccVer Mvcc version. 
+ */ + public LazyMvccDataEntry( + GridCacheSharedContext cctx, + int cacheId, + byte keyType, + byte[] keyBytes, + byte valType, + byte[] valBytes, + GridCacheOperation op, + GridCacheVersion nearXidVer, + GridCacheVersion writeVer, + long expireTime, + int partId, + long partCnt, + MvccVersion mvccVer + ) { + super(cacheId, null, null, op, nearXidVer, writeVer, expireTime, partId, partCnt, mvccVer); + + this.cctx = cctx; + this.keyType = keyType; + this.keyBytes = keyBytes; + this.valType = valType; + this.valBytes = valBytes; + } + + /** {@inheritDoc} */ + @Override public KeyCacheObject key() { + try { + if (key == null) { + GridCacheContext cacheCtx = cctx.cacheContext(cacheId); + + if (cacheCtx == null) + throw new IgniteException("Failed to find cache context for the given cache ID: " + cacheId); + + IgniteCacheObjectProcessor co = cctx.kernalContext().cacheObjects(); + + key = co.toKeyCacheObject(cacheCtx.cacheObjectContext(), keyType, keyBytes); + + if (key.partition() == -1) + key.partition(partId); + } + + return key; + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + } + + /** {@inheritDoc} */ + @Override public CacheObject value() { + if (val == null && valBytes != null) { + GridCacheContext cacheCtx = cctx.cacheContext(cacheId); + + if (cacheCtx == null) + throw new IgniteException("Failed to find cache context for the given cache ID: " + cacheId); + + IgniteCacheObjectProcessor co = cctx.kernalContext().cacheObjects(); + + val = co.toCacheObject(cacheCtx.cacheObjectContext(), valType, valBytes); + } + + return val; + } + + /** {@inheritDoc} */ + @Override public byte getKeyType() { + return keyType; + } + + /** {@inheritDoc} */ + @Override public byte[] getKeyBytes() { + return keyBytes; + } + + /** {@inheritDoc} */ + @Override public byte getValType() { + return valType; + } + + /** {@inheritDoc} */ + @Override public byte[] getValBytes() { + return valBytes; + } +} diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MarshalledDataEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MarshalledDataEntry.java new file mode 100644 index 0000000000000..c977d527941e9 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MarshalledDataEntry.java @@ -0,0 +1,45 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +import org.apache.ignite.internal.processors.cache.CacheObject; + +/** + * Interface for Data Entry record that was not converted to key, value pair during record deserialization. + */ +public interface MarshalledDataEntry { + /** + * @return Data Entry Key type code. See {@link CacheObject} for built-in value type codes. + */ + byte getKeyType(); + + /** + * @return Key value bytes. + */ + byte[] getKeyBytes(); + + /** + * @return Data Entry Value type code. See {@link CacheObject} for built-in value type codes. + */ + byte getValType(); + + /** + * @return Value value bytes. 
+ */ + byte[] getValBytes(); +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MemoryRecoveryRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MemoryRecoveryRecord.java index 8843eeedef90e..5a48b340d0ca4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MemoryRecoveryRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MemoryRecoveryRecord.java @@ -20,11 +20,11 @@ import org.apache.ignite.internal.util.typedef.internal.S; /** - * Marker that we start memory recovering + * Marker indicating that binary memory recovery has finished. */ public class MemoryRecoveryRecord extends WALRecord { /** Create timestamp, millis */ - private long time; + private final long time; /** * Default constructor. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MetastoreDataRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MetastoreDataRecord.java index e269de2adc012..9e734244c7b84 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MetastoreDataRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MetastoreDataRecord.java @@ -18,13 +18,14 @@ package org.apache.ignite.internal.pagemem.wal.record; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage; import org.apache.ignite.internal.util.typedef.internal.S; import org.jetbrains.annotations.Nullable; /** * */ -public class MetastoreDataRecord extends WALRecord { +public class MetastoreDataRecord extends WALRecord implements WalRecordCacheGroupAware { /** */ private final String key; @@ -59,4 +60,9 @@ public String key() { @Override public String toString() { return S.toString(MetastoreDataRecord.class, this, "super", super.toString()); } + + /** {@inheritDoc} */ + @Override public int groupId() { + return 
MetaStorage.METASTORAGE_CACHE_ID; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccDataEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccDataEntry.java new file mode 100644 index 0000000000000..6480f6f8dc4e5 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccDataEntry.java @@ -0,0 +1,75 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +import org.apache.ignite.internal.processors.cache.CacheObject; +import org.apache.ignite.internal.processors.cache.GridCacheOperation; +import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.jetbrains.annotations.Nullable; + +/** + * Represents Data Entry ({@link #key}, {@link #val value}) pair for mvcc update {@link #op operation} in WAL log. + */ +public class MvccDataEntry extends DataEntry { + /** Entry version. 
*/ + private MvccVersion mvccVer; + + /** + * @param cacheId Cache ID. + * @param key Key. + * @param val Value or null for delete operation. + * @param op Operation. + * @param nearXidVer Near transaction version. + * @param writeVer Write version. + * @param expireTime Expire time. + * @param partId Partition ID. + * @param partCnt Partition counter. + * @param mvccVer Mvcc version. + */ + public MvccDataEntry( + int cacheId, + KeyCacheObject key, + @Nullable CacheObject val, + GridCacheOperation op, + GridCacheVersion nearXidVer, + GridCacheVersion writeVer, + long expireTime, + int partId, + long partCnt, + MvccVersion mvccVer + ) { + super(cacheId, key, val, op, nearXidVer, writeVer, expireTime, partId, partCnt); + + this.mvccVer = mvccVer; + } + + /** + * @return Mvcc version. + */ + public MvccVersion mvccVer() { + return mvccVer; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(MvccDataEntry.class, this); + } +} \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccDataRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccDataRecord.java new file mode 100644 index 0000000000000..276ba1bc05498 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccDataRecord.java @@ -0,0 +1,69 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +import java.util.Collections; +import java.util.List; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; + +/** + * Logical data record with cache operation description. + * This record contains information about the operation to perform. + * Contains operation type (put, remove) and (Key, Value, Version) for each {@link MvccDataEntry}. + */ +public class MvccDataRecord extends DataRecord { + /** {@inheritDoc} */ + @Override public RecordType type() { + return RecordType.MVCC_DATA_RECORD; + } + + /** + * @param writeEntry Write entry. + */ + public MvccDataRecord(MvccDataEntry writeEntry) { + this(writeEntry, U.currentTimeMillis()); + } + + /** + * @param writeEntries Write entries. + */ + public MvccDataRecord(List writeEntries) { + this(writeEntries, U.currentTimeMillis()); + } + + /** + * @param writeEntry Write entry. + */ + public MvccDataRecord(MvccDataEntry writeEntry, long timestamp) { + this(Collections.singletonList(writeEntry), timestamp); + } + + /** + * @param writeEntries Write entries. + * @param timestamp TimeStamp. 
+ */ + public MvccDataRecord(List writeEntries, long timestamp) { + super(writeEntries, timestamp); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(MvccDataRecord.class, this, "super", super.toString()); + } +} \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccTxRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccTxRecord.java new file mode 100644 index 0000000000000..86ad983db209e --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/MvccTxRecord.java @@ -0,0 +1,98 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.pagemem.wal.record; + +import java.util.Collection; +import java.util.Map; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; +import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxLog; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.transactions.TransactionState; +import org.jetbrains.annotations.Nullable; + +/** + * Logical data record intended for MVCC transaction-related actions.
+ * This record is a marker of prepare, commit, and rollback transactions. + */ +public class MvccTxRecord extends TxRecord implements WalRecordCacheGroupAware { + /** Transaction mvcc snapshot version. */ + private final MvccVersion mvccVer; + + /** + * @param state Transaction state. + * @param nearXidVer Transaction id. + * @param writeVer Transaction entries write topology version. + * @param participatingNodes Primary -> Backup nodes compact IDs participating in transaction. + * @param mvccVer Transaction snapshot version. + */ + public MvccTxRecord( + TransactionState state, + GridCacheVersion nearXidVer, + GridCacheVersion writeVer, + @Nullable Map<Short, Collection<Short>> participatingNodes, + MvccVersion mvccVer + ) { + super(state, nearXidVer, writeVer, participatingNodes); + + this.mvccVer = mvccVer; + } + + /** + * @param state Transaction state. + * @param nearXidVer Transaction id. + * @param writeVer Transaction entries write topology version. + * @param participatingNodes Primary -> Backup nodes participating in transaction. + * @param mvccVer Transaction snapshot version. + * @param ts Timestamp. + */ + public MvccTxRecord( + TransactionState state, + GridCacheVersion nearXidVer, + GridCacheVersion writeVer, + @Nullable Map<Short, Collection<Short>> participatingNodes, + MvccVersion mvccVer, + long ts + ) { + super(state, nearXidVer, writeVer, participatingNodes, ts); + + this.mvccVer = mvccVer; + } + + /** {@inheritDoc} */ + @Override public RecordType type() { + return RecordType.MVCC_TX_RECORD; + } + + /** + * @return Mvcc version.
+ */ + public MvccVersion mvccVersion() { + return mvccVer; + } + + /** {@inheritDoc} */ + @Override public int groupId() { + return TxLog.TX_LOG_CACHE_ID; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(MvccTxRecord.class, this, "super", super.toString()); + } +} \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/PageSnapshot.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/PageSnapshot.java index 1aa065e10df40..d3a465d79d500 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/PageSnapshot.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/PageSnapshot.java @@ -37,22 +37,31 @@ public class PageSnapshot extends WALRecord implements WalRecordCacheGroupAware{ /** */ private FullPageId fullPageId; + /** + * Page size without encryption overhead. + */ + private int realPageSize; + /** * @param fullId Full page ID. * @param arr Read array. + * @param realPageSize Page size without encryption overhead. */ - public PageSnapshot(FullPageId fullId, byte[] arr) { - fullPageId = fullId; - pageData = arr; + public PageSnapshot(FullPageId fullId, byte[] arr, int realPageSize) { + this.fullPageId = fullId; + this.pageData = arr; + this.realPageSize = realPageSize; } /** * @param fullPageId Full page ID. * @param ptr Pointer to copy from. * @param pageSize Page size. + * @param realPageSize Page size without encryption overhead.
*/ - public PageSnapshot(FullPageId fullPageId, long ptr, int pageSize) { + public PageSnapshot(FullPageId fullPageId, long ptr, int pageSize, int realPageSize) { this.fullPageId = fullPageId; + this.realPageSize = realPageSize; pageData = new byte[pageSize]; @@ -88,7 +97,7 @@ public FullPageId fullPageId() { try { return "PageSnapshot [fullPageId = " + fullPageId() + ", page = [\n" - + PageIO.printPage(addr, pageData.length) + + PageIO.printPage(addr, realPageSize) + "],\nsuper = [" + super.toString() + "]]"; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/RolloverType.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/RolloverType.java new file mode 100644 index 0000000000000..1d99de1e79741 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/RolloverType.java @@ -0,0 +1,38 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +/** + * Defines WAL logging type with regard to segment rollover. + */ +public enum RolloverType { + /** Record being logged is not a rollover record. 
 */ + NONE, + + /** + * Record being logged is a rollover record and should be written to the current segment whenever possible. + * If the current segment is full, the record goes to the next segment. In either case, the logging + * implementation should guarantee a segment rollover afterwards. + */ + CURRENT_SEGMENT, + + /** + * Record being logged is a rollover record and it should become the first record in the next segment. + */ + NEXT_SEGMENT; +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/SnapshotRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/SnapshotRecord.java index c6b63295c781b..caa1494b00e3e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/SnapshotRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/SnapshotRecord.java @@ -51,11 +51,6 @@ public boolean isFull() { return full; } - /** {@inheritDoc} */ - @Override public boolean rollOver() { - return true; - } - /** * */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrapDataEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrapDataEntry.java index dbcc65176b6fa..5dd268b3b8ca8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrapDataEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrapDataEntry.java @@ -27,7 +27,7 @@ /** * Data Entry for automatic unwrapping key and value from Data Entry */ -public class UnwrapDataEntry extends DataEntry { +public class UnwrapDataEntry extends DataEntry implements UnwrappedDataEntry { /** Cache object value context. Context is used for unwrapping objects. */ private final CacheObjectValueContext cacheObjValCtx; @@ -64,13 +64,8 @@ public UnwrapDataEntry( this.keepBinary = keepBinary; } - /** - * Unwraps key value from cache key object into primitive boxed type or source class.
If client classes were used - * in key, call of this method requires classes to be available in classpath. - * - * @return Key which was placed into cache. Or null if failed to convert. - */ - public Object unwrappedKey() { + /** {@inheritDoc} */ + @Override public Object unwrappedKey() { try { if (keepBinary && key instanceof BinaryObject) return key; @@ -93,13 +88,8 @@ public Object unwrappedKey() { } } - /** - * Unwraps value value from cache value object into primitive boxed type or source class. If client classes were - * used in key, call of this method requires classes to be available in classpath. - * - * @return Value which was placed into cache. Or null for delete operation or for failure. - */ - public Object unwrappedValue() { + /** {@inheritDoc} */ + @Override public Object unwrappedValue() { try { if (val == null) return null; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrapMvccDataEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrapMvccDataEntry.java new file mode 100644 index 0000000000000..c3c12a372e757 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrapMvccDataEntry.java @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +import org.apache.ignite.binary.BinaryObject; +import org.apache.ignite.internal.processors.cache.CacheObject; +import org.apache.ignite.internal.processors.cache.CacheObjectValueContext; +import org.apache.ignite.internal.processors.cache.GridCacheOperation; +import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; + +/** + * Data Entry for automatic unwrapping key and value from Mvcc Data Entry + */ +public class UnwrapMvccDataEntry extends MvccDataEntry implements UnwrappedDataEntry { + /** Cache object value context. Context is used for unwrapping objects. */ + private final CacheObjectValueContext cacheObjValCtx; + + /** Keep binary. This flag disables converting of non primitive types (BinaryObjects). */ + private boolean keepBinary; + + /** + * @param cacheId Cache ID. + * @param key Key. + * @param val Value or null for delete operation. + * @param op Operation. + * @param nearXidVer Near transaction version. + * @param writeVer Write version. + * @param expireTime Expire time. + * @param partId Partition ID. + * @param partCnt Partition counter. + * @param mvccVer Mvcc version. + * @param cacheObjValCtx cache object value context for unwrapping objects. + * @param keepBinary disable unwrapping for non primitive objects, Binary Objects would be returned instead. 
+ */ + public UnwrapMvccDataEntry( + final int cacheId, + final KeyCacheObject key, + final CacheObject val, + final GridCacheOperation op, + final GridCacheVersion nearXidVer, + final GridCacheVersion writeVer, + final long expireTime, + final int partId, + final long partCnt, + MvccVersion mvccVer, + final CacheObjectValueContext cacheObjValCtx, + final boolean keepBinary) { + super(cacheId, key, val, op, nearXidVer, writeVer, expireTime, partId, partCnt, mvccVer); + + this.cacheObjValCtx = cacheObjValCtx; + this.keepBinary = keepBinary; + } + + /** {@inheritDoc} */ + @Override public Object unwrappedKey() { + try { + if (keepBinary && key instanceof BinaryObject) + return key; + + Object unwrapped = key.value(cacheObjValCtx, false); + + if (unwrapped instanceof BinaryObject) { + if (keepBinary) + return unwrapped; + unwrapped = ((BinaryObject)unwrapped).deserialize(); + } + + return unwrapped; + } + catch (Exception e) { + cacheObjValCtx.kernalContext().log(UnwrapMvccDataEntry.class) + .error("Unable to convert key [" + key + "]", e); + + return null; + } + } + + /** {@inheritDoc} */ + @Override public Object unwrappedValue() { + try { + if (val == null) + return null; + + if (keepBinary && val instanceof BinaryObject) + return val; + + return val.value(cacheObjValCtx, false); + } + catch (Exception e) { + cacheObjValCtx.kernalContext().log(UnwrapMvccDataEntry.class) + .error("Unable to convert value [" + value() + "]", e); + return null; + } + } + + /** {@inheritDoc} */ + @Override public String toString() { + return getClass().getSimpleName() + "[k = " + unwrappedKey() + ", v = [ " + + unwrappedValue() + + "], super = [" + + super.toString() + "]]"; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrappedDataEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrappedDataEntry.java new file mode 100644 index 0000000000000..b3a20b9ca0525 --- /dev/null +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/UnwrappedDataEntry.java @@ -0,0 +1,39 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.pagemem.wal.record; + +/** + * Interface for Data Entry for automatic unwrapping key and value from Data Entry + */ +public interface UnwrappedDataEntry { + /** + * Unwraps the key from the cache key object into a primitive boxed type or source class. If client classes were + * used in the key, calling this method requires those classes to be available in the classpath. + * + * @return Key which was placed into cache. Or null if failed to convert. + */ + Object unwrappedKey(); + + /** + * Unwraps the value from the cache value object into a primitive boxed type or source class. If client classes + * were used in the value, calling this method requires those classes to be available in the classpath. + * + * @return Value which was placed into cache. Or null for delete operation or for failure.
+ */ + Object unwrappedValue(); +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/WALRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/WALRecord.java index a555aaef4f245..5d7276838ed3d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/WALRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/WALRecord.java @@ -19,9 +19,12 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.typedef.internal.S; +import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordPurpose.*; + /** * Log entry abstract class. */ @@ -32,126 +35,126 @@ public abstract class WALRecord { */ public enum RecordType { /** */ - TX_RECORD, + TX_RECORD (LOGICAL), /** */ - PAGE_RECORD, + PAGE_RECORD (PHYSICAL), /** */ - DATA_RECORD, + DATA_RECORD (LOGICAL), /** Checkpoint (begin) record */ - CHECKPOINT_RECORD, + CHECKPOINT_RECORD (PHYSICAL), /** WAL segment header record. */ - HEADER_RECORD, + HEADER_RECORD (INTERNAL), // Delta records. 
/** */ - INIT_NEW_PAGE_RECORD, + INIT_NEW_PAGE_RECORD (PHYSICAL), /** */ - DATA_PAGE_INSERT_RECORD, + DATA_PAGE_INSERT_RECORD (PHYSICAL), /** */ - DATA_PAGE_INSERT_FRAGMENT_RECORD, + DATA_PAGE_INSERT_FRAGMENT_RECORD (PHYSICAL), /** */ - DATA_PAGE_REMOVE_RECORD, + DATA_PAGE_REMOVE_RECORD (PHYSICAL), /** */ - DATA_PAGE_SET_FREE_LIST_PAGE, + DATA_PAGE_SET_FREE_LIST_PAGE (PHYSICAL), /** */ - BTREE_META_PAGE_INIT_ROOT, + BTREE_META_PAGE_INIT_ROOT (PHYSICAL), /** */ - BTREE_META_PAGE_ADD_ROOT, + BTREE_META_PAGE_ADD_ROOT (PHYSICAL), /** */ - BTREE_META_PAGE_CUT_ROOT, + BTREE_META_PAGE_CUT_ROOT (PHYSICAL), /** */ - BTREE_INIT_NEW_ROOT, + BTREE_INIT_NEW_ROOT (PHYSICAL), /** */ - BTREE_PAGE_RECYCLE, + BTREE_PAGE_RECYCLE (PHYSICAL), /** */ - BTREE_PAGE_INSERT, + BTREE_PAGE_INSERT (PHYSICAL), /** */ - BTREE_FIX_LEFTMOST_CHILD, + BTREE_FIX_LEFTMOST_CHILD (PHYSICAL), /** */ - BTREE_FIX_COUNT, + BTREE_FIX_COUNT (PHYSICAL), /** */ - BTREE_PAGE_REPLACE, + BTREE_PAGE_REPLACE (PHYSICAL), /** */ - BTREE_PAGE_REMOVE, + BTREE_PAGE_REMOVE (PHYSICAL), /** */ - BTREE_PAGE_INNER_REPLACE, + BTREE_PAGE_INNER_REPLACE (PHYSICAL), /** */ - BTREE_FIX_REMOVE_ID, + BTREE_FIX_REMOVE_ID (PHYSICAL), /** */ - BTREE_FORWARD_PAGE_SPLIT, + BTREE_FORWARD_PAGE_SPLIT (PHYSICAL), /** */ - BTREE_EXISTING_PAGE_SPLIT, + BTREE_EXISTING_PAGE_SPLIT (PHYSICAL), /** */ - BTREE_PAGE_MERGE, + BTREE_PAGE_MERGE (PHYSICAL), /** */ - PAGES_LIST_SET_NEXT, + PAGES_LIST_SET_NEXT (PHYSICAL), /** */ - PAGES_LIST_SET_PREVIOUS, + PAGES_LIST_SET_PREVIOUS (PHYSICAL), /** */ - PAGES_LIST_INIT_NEW_PAGE, + PAGES_LIST_INIT_NEW_PAGE (PHYSICAL), /** */ - PAGES_LIST_ADD_PAGE, + PAGES_LIST_ADD_PAGE (PHYSICAL), /** */ - PAGES_LIST_REMOVE_PAGE, + PAGES_LIST_REMOVE_PAGE (PHYSICAL), /** */ - META_PAGE_INIT, + META_PAGE_INIT (PHYSICAL), /** */ - PARTITION_META_PAGE_UPDATE_COUNTERS, + PARTITION_META_PAGE_UPDATE_COUNTERS (PHYSICAL), /** Memory recovering start marker */ MEMORY_RECOVERY, /** */ - TRACKING_PAGE_DELTA, + TRACKING_PAGE_DELTA 
(PHYSICAL), /** Meta page update last successful snapshot id. */ - META_PAGE_UPDATE_LAST_SUCCESSFUL_SNAPSHOT_ID, + META_PAGE_UPDATE_LAST_SUCCESSFUL_SNAPSHOT_ID (MIXED), /** Meta page update last successful full snapshot id. */ - META_PAGE_UPDATE_LAST_SUCCESSFUL_FULL_SNAPSHOT_ID, + META_PAGE_UPDATE_LAST_SUCCESSFUL_FULL_SNAPSHOT_ID (MIXED), /** Meta page update next snapshot id. */ - META_PAGE_UPDATE_NEXT_SNAPSHOT_ID, + META_PAGE_UPDATE_NEXT_SNAPSHOT_ID (MIXED), /** Meta page update last allocated index. */ - META_PAGE_UPDATE_LAST_ALLOCATED_INDEX, + META_PAGE_UPDATE_LAST_ALLOCATED_INDEX (MIXED), /** Partition meta update state. */ - PART_META_UPDATE_STATE, + PART_META_UPDATE_STATE (MIXED), /** Page list meta reset count record. */ - PAGE_LIST_META_RESET_COUNT_RECORD, + PAGE_LIST_META_RESET_COUNT_RECORD (PHYSICAL), /** Switch segment record. * Marker record for indicate end of segment. @@ -160,22 +163,22 @@ public enum RecordType { * that one byte in the end,then we write SWITCH_SEGMENT_RECORD as marker end of segment. * No need write CRC or WAL pointer for this record. It is byte marker record. * */ - SWITCH_SEGMENT_RECORD, + SWITCH_SEGMENT_RECORD (INTERNAL), /** */ - DATA_PAGE_UPDATE_RECORD, + DATA_PAGE_UPDATE_RECORD (PHYSICAL), /** init */ - BTREE_META_PAGE_INIT_ROOT2, + BTREE_META_PAGE_INIT_ROOT2 (PHYSICAL), /** Partition destroy. */ - PARTITION_DESTROY, + PARTITION_DESTROY (PHYSICAL), /** Snapshot record. */ SNAPSHOT, /** Metastore data record. */ - METASTORE_DATA_RECORD, + METASTORE_DATA_RECORD (LOGICAL), /** Exchange record. */ EXCHANGE, @@ -184,16 +187,57 @@ public enum RecordType { RESERVED, /** Rotated id part record. 
 */ - ROTATED_ID_PART_RECORD, + ROTATED_ID_PART_RECORD (PHYSICAL), /** */ - MVCC_DATA_PAGE_MARK_UPDATED_RECORD, + MVCC_DATA_PAGE_MARK_UPDATED_RECORD (PHYSICAL), /** */ - MVCC_DATA_PAGE_TX_STATE_HINT_UPDATED_RECORD, + MVCC_DATA_PAGE_TX_STATE_HINT_UPDATED_RECORD (PHYSICAL), /** */ - MVCC_DATA_PAGE_NEW_TX_STATE_HINT_UPDATED_RECORD; + MVCC_DATA_PAGE_NEW_TX_STATE_HINT_UPDATED_RECORD (PHYSICAL), + + /** Encrypted WAL-record. */ + ENCRYPTED_RECORD (PHYSICAL), + + /** Encrypted data record. */ + ENCRYPTED_DATA_RECORD (LOGICAL), + + /** Mvcc data record. */ + MVCC_DATA_RECORD (LOGICAL), + + /** Mvcc Tx state change record. */ + MVCC_TX_RECORD (LOGICAL); + + /** + * When you're adding a new record, don't forget to choose its purpose explicitly + * if the record is needed for physical or logical recovery. + * By default the purpose of a record is {@link RecordPurpose#CUSTOM}, and such a record will not be used in the recovery process. + * For more information, read the description of {@link RecordPurpose}. + */ + private final RecordPurpose purpose; + + /** + * @param purpose Purpose. + */ + RecordType(RecordPurpose purpose) { + this.purpose = purpose; + } + + /** + * Default constructor. + */ + RecordType() { + this(CUSTOM); + } + + /** + * @return Purpose of record. + */ + public RecordPurpose purpose() { + return purpose; + } /** */ private static final RecordType[] VALS = RecordType.values(); @@ -211,6 +255,37 @@ public static RecordType fromOrdinal(int ord) { public static final int STOP_ITERATION_RECORD_TYPE = 0; } + /** + * Record purposes set. + */ + public enum RecordPurpose { + /** + * Internal records are needed for correct iteration over the WAL structure. + * These records will never be returned to the user during WAL iteration. + */ + INTERNAL, + /** + * Physical records are needed for correctly recovering the physical state of {@link org.apache.ignite.internal.pagemem.PageMemory}.
+ * {@link org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager#restoreBinaryMemory(org.apache.ignite.lang.IgnitePredicate, org.apache.ignite.lang.IgniteBiPredicate)}. + */ + PHYSICAL, + /** + * Logical records are needed to replay logical updates since the last checkpoint. + * {@link GridCacheDatabaseSharedManager#applyLogicalUpdates(org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.CheckpointStatus, org.apache.ignite.lang.IgnitePredicate, org.apache.ignite.lang.IgniteBiPredicate, boolean)} + */ + LOGICAL, + /** + * Physical-logical records are used both for physical and logical recovery. + * Usually these records contain meta-information about partitions. + * NOTE: It is not recommended to use this type without a strong reason. + */ + MIXED, + /** + * Custom records are needed for any custom iterations over WAL in various components. + */ + CUSTOM + } + /** */ private int size; @@ -284,13 +359,6 @@ public void size(int size) { this.size = size; } - /** - * @return Need wal rollOver. - */ - public boolean rollOver(){ - return false; - } - /** * @return Entry type.
*/ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertFragmentRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertFragmentRecord.java index 2b02bb5748fdb..650ae1e8014c3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertFragmentRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertFragmentRecord.java @@ -57,7 +57,7 @@ public DataPageInsertFragmentRecord( @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { AbstractDataPageIO io = PageIO.getPageIO(pageAddr); - io.addRowFragment(PageIO.getPageId(pageAddr), pageAddr, payload, lastLink, pageMem.pageSize()); + io.addRowFragment(PageIO.getPageId(pageAddr), pageAddr, payload, lastLink, pageMem.realPageSize(groupId())); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertRecord.java index 2c9a8e7abdd7f..9b0637dae89da 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageInsertRecord.java @@ -58,7 +58,7 @@ public byte[] payload() { AbstractDataPageIO io = PageIO.getPageIO(pageAddr); - io.addRow(pageAddr, payload, pageMem.pageSize()); + io.addRow(pageAddr, payload, pageMem.realPageSize(groupId())); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccMarkUpdatedRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccMarkUpdatedRecord.java index 5e89f8e960333..907f4c09aa96b 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccMarkUpdatedRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccMarkUpdatedRecord.java @@ -60,7 +60,7 @@ public DataPageMvccMarkUpdatedRecord(int grpId, long pageId, int itemId, long ne @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { DataPageIO io = PageIO.getPageIO(pageAddr); - io.updateNewVersion(pageAddr, itemId, pageMem.pageSize(), newMvccCrd, newMvccCntr, newMvccOpCntr); + io.updateNewVersion(pageAddr, itemId, pageMem.realPageSize(groupId()), newMvccCrd, newMvccCntr, newMvccOpCntr); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateNewTxStateHintRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateNewTxStateHintRecord.java index 4a244a1f2fe3d..f3d235d35e13a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateNewTxStateHintRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateNewTxStateHintRecord.java @@ -50,7 +50,7 @@ public DataPageMvccUpdateNewTxStateHintRecord(int grpId, long pageId, int itemId @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { DataPageIO io = PageIO.getPageIO(pageAddr); - io.updateNewTxState(pageAddr, itemId, pageMem.pageSize(), txState); + io.updateNewTxState(pageAddr, itemId, pageMem.realPageSize(groupId()), txState); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateTxStateHintRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateTxStateHintRecord.java index 7e53609064c06..fd77728d7acdc 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateTxStateHintRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageMvccUpdateTxStateHintRecord.java @@ -50,7 +50,7 @@ public DataPageMvccUpdateTxStateHintRecord(int grpId, long pageId, int itemId, b @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { DataPageIO io = PageIO.getPageIO(pageAddr); - io.updateTxState(pageAddr, itemId, pageMem.pageSize(), txState); + io.updateTxState(pageAddr, itemId, pageMem.realPageSize(groupId()), txState); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageRemoveRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageRemoveRecord.java index f7776be99e12c..abc84ea18efd6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageRemoveRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageRemoveRecord.java @@ -53,7 +53,7 @@ public int itemId() { throws IgniteCheckedException { AbstractDataPageIO io = PageIO.getPageIO(pageAddr); - io.removeRow(pageAddr, itemId, pageMem.pageSize()); + io.removeRow(pageAddr, itemId, pageMem.realPageSize(groupId())); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageUpdateRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageUpdateRecord.java index ed469a4044e28..6f5d8fd86409b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageUpdateRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/DataPageUpdateRecord.java @@ -71,7 +71,7 @@ public byte[] payload() { AbstractDataPageIO io = 
PageIO.getPageIO(pageAddr); - io.updateRow(pageAddr, itemId, pageMem.pageSize(), payload, null, 0); + io.updateRow(pageAddr, itemId, pageMem.realPageSize(groupId()), payload, null, 0); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/InitNewPageRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/InitNewPageRecord.java index c177a04b6efca..d0ba2aaf66126 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/InitNewPageRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/InitNewPageRecord.java @@ -57,7 +57,7 @@ public InitNewPageRecord(int grpId, long pageId, int ioType, int ioVer, long new @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { PageIO io = PageIO.getPageIO(ioType, ioVer); - io.initNewPage(pageAddr, newPageId, pageMem.pageSize()); + io.initNewPage(pageAddr, newPageId, pageMem.realPageSize(groupId())); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageAddRootRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageAddRootRecord.java index 4972155ffa26c..9bf3aeff19124 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageAddRootRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageAddRootRecord.java @@ -44,7 +44,7 @@ public MetaPageAddRootRecord(int grpId, long pageId, long rootId) { @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { BPlusMetaIO io = BPlusMetaIO.VERSIONS.forPage(pageAddr); - io.addRoot(pageAddr, rootId, pageMem.pageSize()); + io.addRoot(pageAddr, rootId, pageMem.realPageSize(groupId())); } /** {@inheritDoc} */ diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageCutRootRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageCutRootRecord.java index 5b896f64b2845..1383a380757f5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageCutRootRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageCutRootRecord.java @@ -38,7 +38,7 @@ public MetaPageCutRootRecord(int grpId, long pageId) { @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { BPlusMetaIO io = BPlusMetaIO.VERSIONS.forPage(pageAddr); - io.cutRoot(pageAddr, pageMem.pageSize()); + io.cutRoot(pageAddr, pageMem.realPageSize(groupId())); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRecord.java index ca995bf4e1f8d..7b3f3a9b32c4f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRecord.java @@ -76,7 +76,7 @@ public long reuseListRoot() { @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { PageMetaIO io = PageMetaIO.getPageIO(ioType, ioVer); - io.initNewPage(pageAddr, newPageId, pageMem.pageSize()); + io.initNewPage(pageAddr, newPageId, pageMem.realPageSize(groupId())); io.setTreeRoot(pageAddr, treeRoot); io.setReuseListRoot(pageAddr, reuseListRoot); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootInlineRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootInlineRecord.java index 
0d3c15538aebc..71ae85db02877 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootInlineRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootInlineRecord.java @@ -51,7 +51,7 @@ public int inlineSize() { @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { BPlusMetaIO io = BPlusMetaIO.VERSIONS.forPage(pageAddr); - io.initRoot(pageAddr, rootId, pageMem.pageSize()); + io.initRoot(pageAddr, rootId, pageMem.realPageSize(groupId())); io.setInlineSize(pageAddr, inlineSize); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootRecord.java index 78a7e4ffbed34..7eca27840c5f6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageInitRootRecord.java @@ -44,7 +44,7 @@ public MetaPageInitRootRecord(int grpId, long pageId, long rootId) { @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { BPlusMetaIO io = BPlusMetaIO.VERSIONS.forPage(pageAddr); - io.initRoot(pageAddr, rootId, pageMem.pageSize()); + io.initRoot(pageAddr, rootId, pageMem.realPageSize(groupId())); io.setInlineSize(pageAddr, 0); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdateNextSnapshotId.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdateNextSnapshotId.java index 2046ecd274d95..5068fe5b45ec5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdateNextSnapshotId.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdateNextSnapshotId.java @@ -27,22 +27,22 @@ */ public class MetaPageUpdateNextSnapshotId extends PageDeltaRecord { /** */ - private final long nextSnapshotId; + private final long nextSnapshotTag; /** * @param pageId Meta page ID. */ - public MetaPageUpdateNextSnapshotId(int grpId, long pageId, long nextSnapshotId) { + public MetaPageUpdateNextSnapshotId(int grpId, long pageId, long nextSnapshotTag) { super(grpId, pageId); - this.nextSnapshotId = nextSnapshotId; + this.nextSnapshotTag = nextSnapshotTag; } /** {@inheritDoc} */ @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { PageMetaIO io = PageMetaIO.VERSIONS.forPage(pageAddr); - io.setNextSnapshotTag(pageAddr, nextSnapshotId); + io.setNextSnapshotTag(pageAddr, nextSnapshotTag); } /** {@inheritDoc} */ @@ -54,7 +54,7 @@ public MetaPageUpdateNextSnapshotId(int grpId, long pageId, long nextSnapshotId) * @return Root ID. 
*/ public long nextSnapshotId() { - return nextSnapshotId; + return nextSnapshotTag; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdatePartitionDataRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdatePartitionDataRecord.java index e5bd343bcb5ee..28294a9f2cd5b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdatePartitionDataRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/MetaPageUpdatePartitionDataRecord.java @@ -55,7 +55,8 @@ public MetaPageUpdatePartitionDataRecord( long updateCntr, long globalRmvId, int partSize, - long cntrsPageId, byte state, + long cntrsPageId, + byte state, int allocatedIdxCandidate ) { super(grpId, pageId); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/NewRootInitRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/NewRootInitRecord.java index 4b8f74750e96f..1d780333d900c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/NewRootInitRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/NewRootInitRecord.java @@ -71,7 +71,8 @@ public NewRootInitRecord( /** {@inheritDoc} */ @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { - io.initNewRoot(pageAddr, newRootId, leftChildId, null, rowBytes, rightChildId, pageMem.pageSize(), false); + io.initNewRoot(pageAddr, newRootId, leftChildId, null, rowBytes, rightChildId, pageMem.realPageSize(groupId()), + false); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListAddPageRecord.java 
b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListAddPageRecord.java index 6c7fc71080de9..6f877b9f7fb99 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListAddPageRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListAddPageRecord.java @@ -54,7 +54,7 @@ public long dataPageId() { @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { PagesListNodeIO io = PagesListNodeIO.VERSIONS.forPage(pageAddr); - int cnt = io.addPage(pageAddr, dataPageId, pageMem.pageSize()); + int cnt = io.addPage(pageAddr, dataPageId, pageMem.realPageSize(groupId())); assert cnt >= 0 : cnt; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListInitNewPageRecord.java b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListInitNewPageRecord.java index b2512aad2852b..53c23b1b1528b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListInitNewPageRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/PagesListInitNewPageRecord.java @@ -76,11 +76,11 @@ public long dataPageId() { @Override public void applyDelta(PageMemory pageMem, long pageAddr) throws IgniteCheckedException { PagesListNodeIO io = PageIO.getPageIO(PageIO.T_PAGE_LIST_NODE, ioVer); - io.initNewPage(pageAddr, pageId(), pageMem.pageSize()); + io.initNewPage(pageAddr, pageId(), pageMem.realPageSize(groupId())); io.setPreviousId(pageAddr, prevPageId); if (addDataPageId != 0L) { - int cnt = io.addPage(pageAddr, addDataPageId, pageMem.pageSize()); + int cnt = io.addPage(pageAddr, addDataPageId, pageMem.realPageSize(groupId())); assert cnt == 0 : cnt; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/TrackingPageDeltaRecord.java 
b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/TrackingPageDeltaRecord.java index 089eb9a0c902e..3f11c58d85be9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/TrackingPageDeltaRecord.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/pagemem/wal/record/delta/TrackingPageDeltaRecord.java @@ -76,7 +76,7 @@ public long lastSuccessfulSnapshotId() { pageIdToMark, nextSnapshotId, lastSuccessfulSnapshotId, - pageMem.pageSize()); + pageMem.realPageSize(groupId())); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityAssignment.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityAssignment.java index eb06d3e60b873..b603c32a8f6ec 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityAssignment.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityAssignment.java @@ -22,7 +22,6 @@ import java.util.Set; import java.util.UUID; import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; /** * Cached affinity calculations. @@ -84,9 +83,4 @@ public interface AffinityAssignment { * @return Backup partitions for specified node ID. */ public Set backupPartitions(UUID nodeId); - - /** - * @return Mvcc coordinator. 
- */ - public MvccCoordinator mvccCoordinator(); } \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityTopologyVersion.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityTopologyVersion.java index 44b27534dee62..2c02f26be6641 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityTopologyVersion.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/AffinityTopologyVersion.java @@ -112,6 +112,15 @@ public int minorTopologyVersion() { return cmp; } + /** + * @param lower Lower bound. + * @param upper Upper bound. + * @return {@code True} if this topology version is within provided bounds (inclusive). + */ + public boolean isBetween(AffinityTopologyVersion lower, AffinityTopologyVersion upper) { + return compareTo(lower) >= 0 && compareTo(upper) <= 0; + } + /** {@inheritDoc} */ @Override public void onAckReceived() { // No-op. 
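
The inclusive-bounds semantics of the `isBetween` method added above can be sketched in isolation. The `TopVer` class below is a hypothetical stand-in for `AffinityTopologyVersion`, reduced to the two fields the comparison actually uses (major and minor topology version); it is not the real class, only a model of its ordering and of the new check.

```java
// Standalone sketch of the AffinityTopologyVersion#isBetween semantics.
// TopVer is a hypothetical minimal model, not the real Ignite class.
public class TopVerBetweenDemo {
    /** Minimal stand-in: (major, minor) topology version pair. */
    static final class TopVer implements Comparable<TopVer> {
        final long topVer;
        final int minorTopVer;

        TopVer(long topVer, int minorTopVer) {
            this.topVer = topVer;
            this.minorTopVer = minorTopVer;
        }

        /** Same ordering as the real class: major version first, then minor. */
        @Override public int compareTo(TopVer o) {
            int cmp = Long.compare(topVer, o.topVer);

            return cmp != 0 ? cmp : Integer.compare(minorTopVer, o.minorTopVer);
        }

        /** Inclusive bounds check, mirroring the method added by the patch. */
        boolean isBetween(TopVer lower, TopVer upper) {
            return compareTo(lower) >= 0 && compareTo(upper) <= 0;
        }
    }

    public static void main(String[] args) {
        TopVer lower = new TopVer(5, 0);
        TopVer upper = new TopVer(7, 2);

        System.out.println(new TopVer(6, 1).isBetween(lower, upper)); // true
        System.out.println(new TopVer(5, 0).isBetween(lower, upper)); // true: bounds are inclusive
        System.out.println(new TopVer(7, 3).isBetween(lower, upper)); // false: minor version past upper
    }
}
```

Note the bounds are inclusive on both ends, which is what lets callers test whether a cached assignment's version lies within [lastAffChangeTopVer, topVer].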
@@ -219,4 +228,4 @@ public int minorTopologyVersion() { @Override public String toString() { return S.toString(AffinityTopologyVersion.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignment.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignment.java index fe5103681a999..95cf76fc58144 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignment.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignment.java @@ -27,7 +27,6 @@ import java.util.Set; import java.util.UUID; import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.util.typedef.internal.S; /** @@ -41,9 +40,6 @@ public class GridAffinityAssignment implements AffinityAssignment, Serializable /** Topology version. */ private final AffinityTopologyVersion topVer; - /** */ - private final MvccCoordinator mvccCrd; - /** Collection of calculated affinity nodes. */ private List> assignment; @@ -74,7 +70,6 @@ public class GridAffinityAssignment implements AffinityAssignment, Serializable this.topVer = topVer; primary = new HashMap<>(); backup = new HashMap<>(); - mvccCrd = null; } /** @@ -84,8 +79,7 @@ public class GridAffinityAssignment implements AffinityAssignment, Serializable */ GridAffinityAssignment(AffinityTopologyVersion topVer, List> assignment, - List> idealAssignment, - MvccCoordinator mvccCrd) { + List> idealAssignment) { assert topVer != null; assert assignment != null; assert idealAssignment != null; @@ -93,7 +87,6 @@ public class GridAffinityAssignment implements AffinityAssignment, Serializable this.topVer = topVer; this.assignment = assignment; this.idealAssignment = idealAssignment.equals(assignment) ? 
assignment : idealAssignment; - this.mvccCrd = mvccCrd; primary = new HashMap<>(); backup = new HashMap<>(); @@ -112,7 +105,6 @@ public class GridAffinityAssignment implements AffinityAssignment, Serializable idealAssignment = aff.idealAssignment; primary = aff.primary; backup = aff.backup; - mvccCrd = aff.mvccCrd; } /** @@ -275,11 +267,6 @@ private void initPrimaryBackupMaps() { } } - /** {@inheritDoc} */ - @Override public MvccCoordinator mvccCoordinator() { - return mvccCrd; - } - /** {@inheritDoc} */ @Override public int hashCode() { return topVer.hashCode(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignmentCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignmentCache.java index 2290ce6bee725..e242a73153880 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignmentCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityAssignmentCache.java @@ -43,8 +43,8 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.NodeOrderComparator; import org.apache.ignite.internal.managers.discovery.DiscoCache; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.processors.cache.ExchangeDiscoveryEvents; +import org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager; import org.apache.ignite.internal.processors.cluster.BaselineTopology; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.typedef.F; @@ -66,8 +66,27 @@ * Affinity cached function. */ public class GridAffinityAssignmentCache { - /** Cleanup history size. 
*/ - private final int MAX_HIST_SIZE = getInteger(IGNITE_AFFINITY_HISTORY_SIZE, 500); + /** + * Affinity cache will shrink when the total number of non-shallow (see {@link HistoryAffinityAssignmentImpl}) + * historical instances is greater than the value of this constant. + */ + private final int MAX_NON_SHALLOW_HIST_SIZE = getInteger(IGNITE_AFFINITY_HISTORY_SIZE, 25); + + /** + * Affinity cache will also shrink when the total number of both shallow ({@link HistoryAffinityAssignmentShallowCopy}) + * and non-shallow (see {@link HistoryAffinityAssignmentImpl}) historical instances is greater than + * the value of this constant. + */ + private final int MAX_TOTAL_HIST_SIZE = MAX_NON_SHALLOW_HIST_SIZE * 10; + + /** + * Independent of {@link #MAX_NON_SHALLOW_HIST_SIZE} and {@link #MAX_TOTAL_HIST_SIZE}, affinity cache will always + * keep this number of non-shallow (see {@link HistoryAffinityAssignmentImpl}) instances. + * We need at least one real instance, otherwise we won't be able to get the affinity cache for + * {@link GridCachePartitionExchangeManager#lastAffinityChangedTopologyVersion} in case the cluster has experienced + * too many client joins / client leaves / local cache starts. + */ + private final int MIN_NON_SHALLOW_HIST_SIZE = 2; /** Partition distribution. */ private final float partDistribution = getFloat(IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD, 50f); @@ -123,8 +142,8 @@ public class GridAffinityAssignmentCache { /** Node stop flag. */ private volatile IgniteCheckedException stopErr; - /** Full history size. */ - private final AtomicInteger fullHistSize = new AtomicInteger(); + /** Number of non-shallow (see {@link HistoryAffinityAssignmentImpl}) affinity cache instances. */ + private final AtomicInteger nonShallowHistSize = new AtomicInteger(); /** */ private final Object similarAffKey; @@ -203,30 +222,15 @@ public int groupId() { * @param affAssignment Affinity assignment for topology version.
*/ public void initialize(AffinityTopologyVersion topVer, List> affAssignment) { - MvccCoordinator mvccCrd = null; - - if (!locCache) - mvccCrd = ctx.cache().context().coordinators().currentCoordinator(topVer); - - initialize(topVer, affAssignment, mvccCrd); - } - - /** - * Initializes affinity with given topology version and assignment. - * - * @param topVer Topology version. - * @param affAssignment Affinity assignment for topology version. - * @param mvccCrd Mvcc coordinator. - */ - public void initialize(AffinityTopologyVersion topVer, List> affAssignment, MvccCoordinator mvccCrd) { assert topVer.compareTo(lastVersion()) >= 0 : "[topVer = " + topVer + ", last=" + lastVersion() + ']'; assert idealAssignment != null; - assert mvccCrd == null || topVer.compareTo(mvccCrd.topologyVersion()) >= 0 : "[mvccCrd=" + mvccCrd + ", topVer=" + topVer + ']'; - GridAffinityAssignment assignment = new GridAffinityAssignment(topVer, affAssignment, idealAssignment, mvccCrd); + GridAffinityAssignment assignment = new GridAffinityAssignment(topVer, affAssignment, idealAssignment); + + HistoryAffinityAssignmentImpl newHistEntry = new HistoryAffinityAssignmentImpl(assignment); - HistoryAffinityAssignment hAff = affCache.put(topVer, new HistoryAffinityAssignment(assignment)); + HistoryAffinityAssignment existing = affCache.put(topVer, newHistEntry); head.set(assignment); @@ -240,9 +244,7 @@ public void initialize(AffinityTopologyVersion topVer, List> a } } - // In case if value was replaced there is no sense to clean the history. 
- if (hAff == null) - onHistoryAdded(); + onHistoryAdded(existing, newHistEntry); if (log.isTraceEnabled()) { log.trace("New affinity assignment [grp=" + cacheOrGrpName @@ -292,7 +294,7 @@ public void onReconnected() { affCache.clear(); - fullHistSize.set(0); + nonShallowHistSize.set(0); head.set(new GridAffinityAssignment(AffinityTopologyVersion.NONE)); @@ -511,7 +513,17 @@ public void clientEventTopologyChange(DiscoveryEvent evt, AffinityTopologyVersio GridAffinityAssignment assignmentCpy = new GridAffinityAssignment(topVer, aff); - HistoryAffinityAssignment hAff = affCache.put(topVer, new HistoryAffinityAssignment(assignmentCpy)); + AffinityTopologyVersion prevVer = topVer.minorTopologyVersion() == 0 ? + new AffinityTopologyVersion(topVer.topologyVersion() - 1, Integer.MAX_VALUE) : + new AffinityTopologyVersion(topVer.topologyVersion(), topVer.minorTopologyVersion() - 1); + + Map.Entry prevHistEntry = affCache.floorEntry(prevVer); + + HistoryAffinityAssignment newHistEntry = (prevHistEntry == null) ? + new HistoryAffinityAssignmentImpl(assignmentCpy) : + new HistoryAffinityAssignmentShallowCopy(prevHistEntry.getValue().origin(), topVer); + + HistoryAffinityAssignment existing = affCache.put(topVer, newHistEntry); head.set(assignmentCpy); @@ -525,9 +537,7 @@ public void clientEventTopologyChange(DiscoveryEvent evt, AffinityTopologyVersio } } - // In case if value was replaced there is no sense to clean the history. - if (hAff == null) - onHistoryAdded(); + onHistoryAdded(existing, newHistEntry); } /** @@ -691,30 +701,69 @@ public AffinityAssignment readyAffinity(AffinityTopologyVersion topVer) { * @return Cached affinity. */ public AffinityAssignment cachedAffinity(AffinityTopologyVersion topVer) { + AffinityTopologyVersion lastAffChangeTopVer = + ctx.cache().context().exchange().lastAffinityChangedTopologyVersion(topVer); + + return cachedAffinity(topVer, lastAffChangeTopVer); + } + + /** + * Get cached affinity for specified topology version. 
+ * + * @param topVer Topology version for which affinity assignment is requested. + * @param lastAffChangeTopVer Topology version of last affinity assignment change. + * @return Cached affinity. + */ + public AffinityAssignment cachedAffinity( + AffinityTopologyVersion topVer, + AffinityTopologyVersion lastAffChangeTopVer + ) { if (topVer.equals(AffinityTopologyVersion.NONE)) - topVer = lastVersion(); - else - awaitTopologyVersion(topVer); + topVer = lastAffChangeTopVer = lastVersion(); + else { + if (lastAffChangeTopVer.equals(AffinityTopologyVersion.NONE)) + lastAffChangeTopVer = topVer; + + awaitTopologyVersion(lastAffChangeTopVer); + } assert topVer.topologyVersion() >= 0 : topVer; AffinityAssignment cache = head.get(); - if (!cache.topologyVersion().equals(topVer)) { - cache = affCache.get(topVer); + if (!(cache.topologyVersion().compareTo(lastAffChangeTopVer) >= 0 && + cache.topologyVersion().compareTo(topVer) <= 0)) { + + Map.Entry e = affCache.ceilingEntry(lastAffChangeTopVer); + + if (e != null) + cache = e.getValue(); if (cache == null) { throw new IllegalStateException("Getting affinity for topology version earlier than affinity is " + "calculated [locNode=" + ctx.discovery().localNode() + ", grp=" + cacheOrGrpName + ", topVer=" + topVer + + ", lastAffChangeTopVer=" + lastAffChangeTopVer + + ", head=" + head.get().topologyVersion() + + ", history=" + affCache.keySet() + + ']'); + } + + if (cache.topologyVersion().compareTo(topVer) > 0) { + throw new IllegalStateException("Getting affinity for too old topology version that is already " + + "out of history [locNode=" + ctx.discovery().localNode() + + ", grp=" + cacheOrGrpName + + ", topVer=" + topVer + + ", lastAffChangeTopVer=" + lastAffChangeTopVer + ", head=" + head.get().topologyVersion() + ", history=" + affCache.keySet() + ']'); } } - assert cache.topologyVersion().equals(topVer) : "Invalid cached affinity: " + cache; + assert cache.topologyVersion().compareTo(lastAffChangeTopVer) >= 0 && + 
cache.topologyVersion().compareTo(topVer) <= 0 : "Invalid cached affinity: [cache=" + cache + ", topVer=" + topVer + ", lastAffChangedTopVer=" + lastAffChangeTopVer + "]"; return cache; } @@ -765,7 +814,7 @@ public void init(GridAffinityAssignmentCache aff) { AffinityAssignment assign = aff.cachedAffinity(aff.lastVersion()); - initialize(aff.lastVersion(), assign.assignment(), assign.mvccCoordinator()); + initialize(aff.lastVersion(), assign.assignment()); } /** @@ -807,33 +856,87 @@ private void awaitTopologyVersion(AffinityTopologyVersion topVer) { /** * Cleaning the affinity history. + * + * @param replaced Replaced entry in case history item was already present, null otherwise. + * @param added New history item. */ - private void onHistoryAdded() { - if (fullHistSize.incrementAndGet() > MAX_HIST_SIZE) { + private void onHistoryAdded( + HistoryAffinityAssignment replaced, + HistoryAffinityAssignment added + ) { + boolean cleanupNeeded = false; + + if (replaced == null) { + cleanupNeeded = true; + + if (added.requiresHistoryCleanup()) + nonShallowHistSize.incrementAndGet(); + } + else { + if (replaced.requiresHistoryCleanup() != added.requiresHistoryCleanup()) { + if (added.requiresHistoryCleanup()) { + cleanupNeeded = true; + + nonShallowHistSize.incrementAndGet(); + } + else + nonShallowHistSize.decrementAndGet(); + } + } + + if (!cleanupNeeded) + return; + + int nonShallowSize = nonShallowHistSize.get(); + + int totalSize = affCache.size(); + + if (shouldContinueCleanup(nonShallowSize, totalSize)) { + int initNonShallowSize = nonShallowSize; + Iterator it = affCache.values().iterator(); - int rmvCnt = MAX_HIST_SIZE / 2; + while (it.hasNext()) { + HistoryAffinityAssignment aff0 = it.next(); - AffinityTopologyVersion topVerRmv = null; + if (aff0.requiresHistoryCleanup()) { + // We can stop cleanup only on non-shallow item. + // Keeping part of shallow items chain if corresponding real item is missing makes no sense. 
+ if (!shouldContinueCleanup(nonShallowSize, totalSize)) { + nonShallowHistSize.getAndAdd(nonShallowSize - initNonShallowSize); - while (it.hasNext() && rmvCnt > 0) { - AffinityAssignment aff0 = it.next(); + // GridAffinityProcessor#affMap has the same size and instance set as #affCache. + ctx.affinity().removeCachedAffinity(aff0.topologyVersion()); - it.remove(); + return; + } - rmvCnt--; + nonShallowSize--; + } - fullHistSize.decrementAndGet(); + totalSize--; - topVerRmv = aff0.topologyVersion(); + it.remove(); } - topVerRmv = it.hasNext() ? it.next().topologyVersion() : topVerRmv; - - ctx.affinity().removeCachedAffinity(topVerRmv); + assert false : "All elements have been removed from affinity cache during cleanup"; } } + /** + * Checks whether affinity cache size conditions are still unsatisfied. + * + * @param nonShallowSize Non shallow size. + * @param totalSize Total size. + * @return true if affinity cache cleanup is not finished yet. + */ + private boolean shouldContinueCleanup(int nonShallowSize, int totalSize) { + if (nonShallowSize <= MIN_NON_SHALLOW_HIST_SIZE) + return false; + + return nonShallowSize > MAX_NON_SHALLOW_HIST_SIZE || totalSize > MAX_TOTAL_HIST_SIZE; + } + /** * @return All initialized versions. */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessor.java index 4a0908c6071f6..08333c33e131a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessor.java @@ -423,7 +423,7 @@ private Map> keysToNodes(@Nullable final String c try { GridAffinityAssignment assign = assign0 instanceof GridAffinityAssignment ? 
(GridAffinityAssignment)assign0 : - new GridAffinityAssignment(topVer, assign0.assignment(), assign0.idealAssignment(), assign0.mvccCoordinator()); + new GridAffinityAssignment(topVer, assign0.assignment(), assign0.idealAssignment()); AffinityInfo info = new AffinityInfo( cctx.config().getAffinity(), diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityUtils.java index 15d7e4e437ea7..abd5292799958 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/GridAffinityUtils.java @@ -184,7 +184,7 @@ public AffinityJob() { GridAffinityAssignment assign = assign0 instanceof GridAffinityAssignment ? (GridAffinityAssignment)assign0 : - new GridAffinityAssignment(topVer, assign0.assignment(), assign0.idealAssignment(), assign0.mvccCoordinator()); + new GridAffinityAssignment(topVer, assign0.assignment(), assign0.idealAssignment()); return F.t( affinityMessage(ctx, cctx.config().getAffinity()), diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignment.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignment.java index a9496486fb965..1d2e95c36aa01 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignment.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignment.java @@ -17,169 +17,22 @@ package org.apache.ignite.internal.processors.affinity; -import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; -import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.internal.util.typedef.internal.S; -import 
org.apache.ignite.internal.util.typedef.internal.U; - -import java.util.HashSet; -import java.util.List; -import java.util.Set; -import java.util.UUID; - /** - * + * Interface for historical calculated affinity assignment. */ -@SuppressWarnings("ForLoopReplaceableByForEach") -public class HistoryAffinityAssignment implements AffinityAssignment { - /** */ - private final AffinityTopologyVersion topVer; - - /** */ - private final List> assignment; - - /** */ - private final List> idealAssignment; - - /** */ - private final MvccCoordinator mvccCrd; - +public interface HistoryAffinityAssignment extends AffinityAssignment { /** - * @param assign Assignment. + * Should return true if instance is "heavy" and should be taken into account during history size management. + * + * @return true if adding this instance to history should trigger size check and possible cleanup. */ - HistoryAffinityAssignment(GridAffinityAssignment assign) { - this.topVer = assign.topologyVersion(); - this.assignment = assign.assignment(); - this.idealAssignment = assign.idealAssignment(); - this.mvccCrd = assign.mvccCoordinator(); - } - - /** {@inheritDoc} */ - @Override public MvccCoordinator mvccCoordinator() { - return mvccCrd; - } - - /** {@inheritDoc} */ - @Override public List> idealAssignment() { - return idealAssignment; - } - - /** {@inheritDoc} */ - @Override public List> assignment() { - return assignment; - } - - /** {@inheritDoc} */ - @Override public AffinityTopologyVersion topologyVersion() { - return topVer; - } - - /** {@inheritDoc} */ - @Override public List get(int part) { - assert part >= 0 && part < assignment.size() : "Affinity partition is out of range" + - " [part=" + part + ", partitions=" + assignment.size() + ']'; - - return assignment.get(part); - } - - /** {@inheritDoc} */ - @Override public HashSet getIds(int part) { - assert part >= 0 && part < assignment.size() : "Affinity partition is out of range" + - " [part=" + part + ", partitions=" + assignment.size() + ']'; 
- - List nodes = assignment.get(part); - - HashSet ids = U.newHashSet(nodes.size()); - - for (int i = 0; i < nodes.size(); i++) - ids.add(nodes.get(i).id()); - - return ids; - } - - /** {@inheritDoc} */ - @Override public Set nodes() { - Set res = new HashSet<>(); - - for (int p = 0; p < assignment.size(); p++) { - List nodes = assignment.get(p); - - if (!F.isEmpty(nodes)) - res.addAll(nodes); - } + public boolean requiresHistoryCleanup(); - return res; - } - - /** {@inheritDoc} */ - @Override public Set primaryPartitionNodes() { - Set res = new HashSet<>(); - - for (int p = 0; p < assignment.size(); p++) { - List nodes = assignment.get(p); - - if (!F.isEmpty(nodes)) - res.add(nodes.get(0)); - } - - return res; - } - - /** {@inheritDoc} */ - @Override public Set primaryPartitions(UUID nodeId) { - Set res = new HashSet<>(); - - for (int p = 0; p < assignment.size(); p++) { - List nodes = assignment.get(p); - - if (!F.isEmpty(nodes) && nodes.get(0).id().equals(nodeId)) - res.add(p); - } - - return res; - } - - /** {@inheritDoc} */ - @Override public Set backupPartitions(UUID nodeId) { - Set res = new HashSet<>(); - - for (int p = 0; p < assignment.size(); p++) { - List nodes = assignment.get(p); - - for (int i = 1; i < nodes.size(); i++) { - ClusterNode node = nodes.get(i); - - if (node.id().equals(nodeId)) { - res.add(p); - - break; - } - } - } - - return res; - } - - /** {@inheritDoc} */ - @Override public int hashCode() { - return topVer.hashCode(); - } - - /** {@inheritDoc} */ - @SuppressWarnings("SimplifiableIfStatement") - @Override public boolean equals(Object o) { - if (o == this) - return true; - - if (o == null || !(o instanceof AffinityAssignment)) - return false; - - return topVer.equals(((AffinityAssignment)o).topologyVersion()); - } - - /** {@inheritDoc} */ - @Override public String toString() { - return S.toString(HistoryAffinityAssignment.class, this); - } + /** + * In case this instance is lightweight wrapper of another instance, this method should 
return reference + * to an original one. Otherwise, it should return this reference. + * + * @return Original instance of this if not applicable. + */ + public HistoryAffinityAssignment origin(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignmentImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignmentImpl.java new file mode 100644 index 0000000000000..d9ecee840f225 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignmentImpl.java @@ -0,0 +1,184 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.affinity; + +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.UUID; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; + +/** + * Heap-space optimized version of calculated affinity assignment. 
+ */
+@SuppressWarnings("ForLoopReplaceableByForEach")
+public class HistoryAffinityAssignmentImpl implements HistoryAffinityAssignment {
+    /** */
+    private final AffinityTopologyVersion topVer;
+
+    /** */
+    private final List<List<ClusterNode>> assignment;
+
+    /** */
+    private final List<List<ClusterNode>> idealAssignment;
+
+    /**
+     * @param assign Assignment.
+     */
+    HistoryAffinityAssignmentImpl(GridAffinityAssignment assign) {
+        topVer = assign.topologyVersion();
+        assignment = assign.assignment();
+        idealAssignment = assign.idealAssignment();
+    }
+
+    /** {@inheritDoc} */
+    @Override public List<List<ClusterNode>> idealAssignment() {
+        return idealAssignment;
+    }
+
+    /** {@inheritDoc} */
+    @Override public List<List<ClusterNode>> assignment() {
+        return assignment;
+    }
+
+    /** {@inheritDoc} */
+    @Override public AffinityTopologyVersion topologyVersion() {
+        return topVer;
+    }
+
+    /** {@inheritDoc} */
+    @Override public List<ClusterNode> get(int part) {
+        assert part >= 0 && part < assignment.size() : "Affinity partition is out of range"
+            + " [part=" + part + ", partitions=" + assignment.size() + ']';
+
+        return assignment.get(part);
+    }
+
+    /** {@inheritDoc} */
+    @Override public HashSet<UUID> getIds(int part) {
+        assert part >= 0 && part < assignment.size() : "Affinity partition is out of range"
+            + " [part=" + part + ", partitions=" + assignment.size() + ']';
+
+        List<ClusterNode> nodes = assignment.get(part);
+
+        HashSet<UUID> ids = U.newHashSet(nodes.size());
+
+        for (int i = 0; i < nodes.size(); i++)
+            ids.add(nodes.get(i).id());
+
+        return ids;
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<ClusterNode> nodes() {
+        Set<ClusterNode> res = new HashSet<>();
+
+        for (int p = 0; p < assignment.size(); p++) {
+            List<ClusterNode> nodes = assignment.get(p);
+
+            if (!F.isEmpty(nodes))
+                res.addAll(nodes);
+        }
+
+        return res;
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<ClusterNode> primaryPartitionNodes() {
+        Set<ClusterNode> res = new HashSet<>();
+
+        for (int p = 0; p < assignment.size(); p++) {
+            List<ClusterNode> nodes = assignment.get(p);
+
+            if (!F.isEmpty(nodes))
+                res.add(nodes.get(0));
+        }
+
+        return res;
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<Integer> primaryPartitions(UUID nodeId) {
+        Set<Integer> res = new HashSet<>();
+
+        for (int p = 0; p < assignment.size(); p++) {
+            List<ClusterNode> nodes = assignment.get(p);
+
+            if (!F.isEmpty(nodes) && nodes.get(0).id().equals(nodeId))
+                res.add(p);
+        }
+
+        return res;
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<Integer> backupPartitions(UUID nodeId) {
+        Set<Integer> res = new HashSet<>();
+
+        for (int p = 0; p < assignment.size(); p++) {
+            List<ClusterNode> nodes = assignment.get(p);
+
+            for (int i = 1; i < nodes.size(); i++) {
+                ClusterNode node = nodes.get(i);
+
+                if (node.id().equals(nodeId)) {
+                    res.add(p);
+
+                    break;
+                }
+            }
+        }
+
+        return res;
+    }
+
+    /** {@inheritDoc} */
+    @Override public int hashCode() {
+        return topVer.hashCode();
+    }
+
+    /** {@inheritDoc} */
+    @SuppressWarnings("SimplifiableIfStatement")
+    @Override public boolean equals(Object o) {
+        if (o == this)
+            return true;
+
+        if (o == null || !(o instanceof AffinityAssignment))
+            return false;
+
+        return topVer.equals(((AffinityAssignment)o).topologyVersion());
+    }
+
+    /** {@inheritDoc} */
+    @Override public HistoryAffinityAssignment origin() {
+        return this;
+    }
+
+    /** {@inheritDoc} */
+    @Override public boolean requiresHistoryCleanup() {
+        return true;
+    }
+
+    /** {@inheritDoc} */
+    @Override public String toString() {
+        return S.toString(HistoryAffinityAssignment.class, this);
+    }
+}
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignmentShallowCopy.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignmentShallowCopy.java
new file mode 100644
index 0000000000000..d8afbb5b5c99a
--- /dev/null
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/affinity/HistoryAffinityAssignmentShallowCopy.java
@@ -0,0 +1,107 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+*      http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ignite.internal.processors.affinity;
+
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.internal.util.typedef.internal.S;
+
+/**
+ * Shallow copy that contains a reference to the delegate {@link HistoryAffinityAssignment}.
+ */
+public class HistoryAffinityAssignmentShallowCopy implements HistoryAffinityAssignment {
+    /** History assignment. */
+    private final HistoryAffinityAssignment histAssignment;
+
+    /** Topology version. */
+    private final AffinityTopologyVersion topVer;
+
+    /**
+     * @param histAssignment History assignment.
+     * @param topVer Topology version.
+     */
+    public HistoryAffinityAssignmentShallowCopy(
+        HistoryAffinityAssignment histAssignment,
+        AffinityTopologyVersion topVer
+    ) {
+        this.histAssignment = histAssignment;
+        this.topVer = topVer;
+    }
+
+    /** {@inheritDoc} */
+    @Override public boolean requiresHistoryCleanup() {
+        return false;
+    }
+
+    /** {@inheritDoc} */
+    @Override public List<List<ClusterNode>> idealAssignment() {
+        return histAssignment.idealAssignment();
+    }
+
+    /** {@inheritDoc} */
+    @Override public List<List<ClusterNode>> assignment() {
+        return histAssignment.assignment();
+    }
+
+    /** {@inheritDoc} */
+    @Override public AffinityTopologyVersion topologyVersion() {
+        return topVer;
+    }
+
+    /** {@inheritDoc} */
+    @Override public List<ClusterNode> get(int part) {
+        return histAssignment.get(part);
+    }
+
+    /** {@inheritDoc} */
+    @Override public HashSet<UUID> getIds(int part) {
+        return histAssignment.getIds(part);
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<ClusterNode> nodes() {
+        return histAssignment.nodes();
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<ClusterNode> primaryPartitionNodes() {
+        return histAssignment.primaryPartitionNodes();
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<Integer> primaryPartitions(UUID nodeId) {
+        return histAssignment.primaryPartitions(nodeId);
+    }
+
+    /** {@inheritDoc} */
+    @Override public Set<Integer> backupPartitions(UUID nodeId) {
+        return histAssignment.backupPartitions(nodeId);
+    }
+
+    /** {@inheritDoc} */
+    @Override public HistoryAffinityAssignment origin() {
+        return histAssignment;
+    }
+
+    /** {@inheritDoc} */
+    @Override public String toString() {
+        return S.toString(HistoryAffinityAssignmentShallowCopy.class, this);
+    }
+}
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/IgniteAuthenticationProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/IgniteAuthenticationProcessor.java
index ded37e79e7ab1..d11df1e0356c4 100644
---
a/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/IgniteAuthenticationProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/IgniteAuthenticationProcessor.java @@ -34,7 +34,6 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.events.DiscoveryEvent; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.GridTopic; import org.apache.ignite.internal.IgniteInternalFuture; @@ -44,7 +43,6 @@ import org.apache.ignite.internal.managers.communication.GridIoPolicy; import org.apache.ignite.internal.managers.communication.GridMessageListener; import org.apache.ignite.internal.managers.discovery.CustomEventListener; -import org.apache.ignite.internal.managers.discovery.DiscoCache; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; import org.apache.ignite.internal.managers.eventstorage.DiscoveryEventListener; import org.apache.ignite.internal.processors.GridProcessorAdapter; @@ -97,14 +95,11 @@ public class IgniteAuthenticationProcessor extends GridProcessorAdapter implemen /** Whan the future is done the node is ready for authentication. */ private final GridFutureAdapter readyForAuthFut = new GridFutureAdapter<>(); - /** Random is used to get random server node to authentication from client node. */ - private static final Random RND = new Random(System.currentTimeMillis()); - /** Operation mutex. */ private final Object mux = new Object(); /** Active operations. Collects to send on joining node. */ - private Map activeOps = Collections.synchronizedMap(new LinkedHashMap<>()); + private final Map activeOps = Collections.synchronizedMap(new LinkedHashMap<>()); /** User map. 
*/ private ConcurrentMap users; @@ -141,7 +136,7 @@ public class IgniteAuthenticationProcessor extends GridProcessorAdapter implemen private DiscoveryEventListener discoLsnr; /** Node activate future. */ - private GridFutureAdapter activateFut = new GridFutureAdapter<>(); + private final GridFutureAdapter activateFut = new GridFutureAdapter<>(); /** Validate error. */ private String validateErr; @@ -151,16 +146,14 @@ public class IgniteAuthenticationProcessor extends GridProcessorAdapter implemen */ public IgniteAuthenticationProcessor(GridKernalContext ctx) { super(ctx); - - isEnabled = ctx.config().isAuthenticationEnabled(); - - ctx.internalSubscriptionProcessor().registerMetastorageListener(this); } /** {@inheritDoc} */ @Override public void start() throws IgniteCheckedException { super.start(); + isEnabled = ctx.config().isAuthenticationEnabled(); + if (isEnabled && !GridCacheUtils.isPersistenceEnabled(ctx.config())) { isEnabled = false; @@ -168,6 +161,8 @@ public IgniteAuthenticationProcessor(GridKernalContext ctx) { + " Check the DataRegionConfiguration"); } + ctx.internalSubscriptionProcessor().registerMetastorageListener(this); + ctx.addNodeAttribute(IgniteNodeAttributes.ATTR_AUTHENTICATION_ENABLED, isEnabled); GridDiscoveryManager discoMgr = ctx.discovery(); @@ -178,38 +173,34 @@ public IgniteAuthenticationProcessor(GridKernalContext ctx) { discoMgr.setCustomEventListener(UserAcceptedMessage.class, new UserAcceptedListener()); - discoLsnr = new DiscoveryEventListener() { - @Override public void onEvent(DiscoveryEvent evt, DiscoCache discoCache) { - if (!isEnabled || ctx.isStopping()) - return; + discoLsnr = (evt, discoCache) -> { + if (!isEnabled || ctx.isStopping()) + return; - switch (evt.type()) { - case EVT_NODE_LEFT: - case EVT_NODE_FAILED: - onNodeLeft(evt.eventNode().id()); - break; + switch (evt.type()) { + case EVT_NODE_LEFT: + case EVT_NODE_FAILED: + onNodeLeft(evt.eventNode().id()); + break; - case EVT_NODE_JOINED: - onNodeJoin(evt.eventNode()); 
- break; - } + case EVT_NODE_JOINED: + onNodeJoin(evt.eventNode()); + break; } }; ctx.event().addDiscoveryEventListener(discoLsnr, DISCO_EVT_TYPES); - ioLsnr = new GridMessageListener() { - @Override public void onMessage(UUID nodeId, Object msg, byte plc) { - if (!isEnabled || ctx.isStopping()) - return; + ioLsnr = (nodeId, msg, plc) -> { + if (!isEnabled || ctx.isStopping()) + return; - if (msg instanceof UserManagementOperationFinishedMessage) - onFinishMessage(nodeId, (UserManagementOperationFinishedMessage)msg); - else if (msg instanceof UserAuthenticateRequestMessage) - onAuthenticateRequestMessage(nodeId, (UserAuthenticateRequestMessage)msg); - else if (msg instanceof UserAuthenticateResponseMessage) - onAuthenticateResponseMessage((UserAuthenticateResponseMessage)msg); - } + if (msg instanceof UserManagementOperationFinishedMessage) + onFinishMessage(nodeId, (UserManagementOperationFinishedMessage)msg); + else if (msg instanceof UserAuthenticateRequestMessage) + onAuthenticateRequestMessage(nodeId, (UserAuthenticateRequestMessage)msg); + else if (msg instanceof UserAuthenticateResponseMessage) + onAuthenticateResponseMessage((UserAuthenticateResponseMessage)msg); }; ioMgr.addMessageListener(GridTopic.TOPIC_AUTH, ioLsnr); @@ -313,18 +304,7 @@ public AuthorizationContext authenticate(String login, String passwd) throws Ign AuthenticateFuture fut; synchronized (mux) { - Collection aliveNodes = ctx.discovery().aliveServerNodes(); - - int rndIdx = RND.nextInt(aliveNodes.size()) + 1; - - int i = 0; - ClusterNode rndNode = null; - - for (Iterator it = aliveNodes.iterator(); i < rndIdx && it.hasNext(); i++) - rndNode = it.next(); - - if (rndNode == null) - assert rndNode != null; + ClusterNode rndNode = U.randomServerNode(ctx); fut = new AuthenticateFuture(rndNode.id()); @@ -409,11 +389,8 @@ public void updateUser(String login, String passwd) throws IgniteCheckedExceptio if (!ctx.clientNode()) { users = new ConcurrentHashMap<>(); - Map readUsers = 
(Map)metastorage.readForPredicate(new IgnitePredicate() { - @Override public boolean apply(String key) { - return key != null && key.startsWith(STORE_USER_PREFIX); - } - }); + Map readUsers = (Map)metastorage.readForPredicate( + (IgnitePredicate)key -> key != null && key.startsWith(STORE_USER_PREFIX)); for (User u : readUsers.values()) users.put(u.name(), u); @@ -1318,6 +1295,7 @@ private UserOperationWorker(UserManagementOperation op, UserOperationFinishFutur * Initial users set worker. */ private class RefreshUsersStorageWorker extends GridWorker { + /** */ private final ArrayList newUsrs; /** @@ -1343,11 +1321,8 @@ private RefreshUsersStorageWorker(ArrayList usrs) { sharedCtx.database().checkpointReadLock(); try { - Map existUsrs = (Map)metastorage.readForPredicate(new IgnitePredicate() { - @Override public boolean apply(String key) { - return key != null && key.startsWith(STORE_USER_PREFIX); - } - }); + Map existUsrs = (Map)metastorage.readForPredicate( + (IgnitePredicate)key -> key != null && key.startsWith(STORE_USER_PREFIX)); for (String key : existUsrs.keySet()) metastorage.remove(key); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/UserAuthenticateResponseMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/UserAuthenticateResponseMessage.java index d86b1ad91a551..e3dee3ce92a4f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/UserAuthenticateResponseMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/authentication/UserAuthenticateResponseMessage.java @@ -98,7 +98,6 @@ public IgniteUuid id() { writer.incrementState(); - } return true; @@ -127,6 +126,7 @@ public IgniteUuid id() { return false; reader.incrementState(); + } return reader.afterMessageRead(UserAuthenticateResponseMessage.class); diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheAffinitySharedManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheAffinitySharedManager.java index d009c5d42d690..4cc0788bfbfeb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheAffinitySharedManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheAffinitySharedManager.java @@ -23,14 +23,17 @@ import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Set; import java.util.UUID; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; +import java.util.stream.Collectors; import javax.cache.CacheException; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.affinity.AffinityFunction; import org.apache.ignite.cluster.ClusterNode; @@ -58,7 +61,6 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridClientPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.processors.cluster.DiscoveryDataClusterState; import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.GridPartitionStateMap; @@ -359,11 +361,7 @@ public Set waitGroups() { void onCacheGroupCreated(CacheGroupContext grp) { if (!grpHolders.containsKey(grp.groupId())) { cctx.io().addCacheGroupHandler(grp.groupId(), GridDhtAffinityAssignmentResponse.class, - new IgniteBiInClosure() { - @Override public void apply(UUID nodeId, 
GridDhtAffinityAssignmentResponse res) { - processAffinityAssignmentResponse(nodeId, res); - } - }); + (IgniteBiInClosure) this::processAffinityAssignmentResponse); } } @@ -408,7 +406,8 @@ void onCacheGroupCreated(CacheGroupContext grp) { ClientCacheChangeDummyDiscoveryMessage msg, boolean crd, AffinityTopologyVersion topVer, - DiscoCache discoCache) { + DiscoCache discoCache + ) { Map startReqs = msg.startRequests(); if (startReqs == null) @@ -424,23 +423,43 @@ void onCacheGroupCreated(CacheGroupContext grp) { Map fetchFuts = U.newHashMap(startDescs.size()); - Set startedCaches = U.newHashSet(startDescs.size()); - Map startedInfos = U.newHashMap(startDescs.size()); - for (DynamicCacheDescriptor desc : startDescs) { - try { - startedCaches.add(desc.cacheName()); - - DynamicCacheChangeRequest startReq = startReqs.get(desc.cacheName()); + List startCacheInfos = startDescs.stream() + .map(desc -> { + DynamicCacheChangeRequest changeReq = startReqs.get(desc.cacheName()); - cctx.cache().prepareCacheStart(desc.cacheConfiguration(), + return new StartCacheInfo( desc, - startReq.nearCacheConfiguration(), + changeReq.nearCacheConfiguration(), topVer, - startReq.disabledAfterStart()); + changeReq.disabledAfterStart() + ); + }) + .collect(Collectors.toList()); + + Set startedCaches = startCacheInfos.stream() + .map(info -> info.getCacheDescriptor().cacheName()) + .collect(Collectors.toSet()); + + try { + cctx.cache().prepareStartCaches(startCacheInfos); + } + catch (IgniteCheckedException e) { + cctx.cache().closeCaches(startedCaches, false); + + cctx.cache().completeClientCacheChangeFuture(msg.requestId(), e); + + return null; + } - startedInfos.put(desc.cacheId(), startReq.nearCacheConfiguration() != null); + for (StartCacheInfo startCacheInfo : startCacheInfos) { + try { + DynamicCacheDescriptor desc = startCacheInfo.getCacheDescriptor(); + + startedCaches.add(desc.cacheName()); + + startedInfos.put(desc.cacheId(), startCacheInfo.getReqNearCfg() != null); CacheGroupContext 
grp = cctx.cache().cacheGroup(desc.groupId()); @@ -460,7 +479,6 @@ void onCacheGroupCreated(CacheGroupContext grp) { grp.topology().updateTopologyVersion(topFut, discoCache, - cctx.coordinators().currentCoordinator(), -1, false); @@ -510,7 +528,6 @@ else if (!fetchFuts.containsKey(grp.groupId())) { assert grp != null; GridDhtAffinityAssignmentResponse res = fetchAffinity(topVer, - cctx.coordinators().currentCoordinator(), null, discoCache, grp.affinity(), @@ -535,7 +552,6 @@ else if (!fetchFuts.containsKey(grp.groupId())) { grp.topology().updateTopologyVersion(topFut, discoCache, - cctx.coordinators().currentCoordinator(), -1, false); @@ -564,6 +580,8 @@ else if (!fetchFuts.containsKey(grp.groupId())) { cctx.cache().initCacheProxies(topVer, null); + startReqs.keySet().forEach(req -> cctx.cache().completeProxyInitialize(req)); + cctx.cache().completeClientCacheChangeFuture(msg.requestId(), null); return startedInfos; @@ -578,7 +596,8 @@ else if (!fetchFuts.containsKey(grp.groupId())) { private Set processCacheCloseRequests( ClientCacheChangeDummyDiscoveryMessage msg, boolean crd, - AffinityTopologyVersion topVer) { + AffinityTopologyVersion topVer + ) { Set cachesToClose = msg.cachesToClose(); if (cachesToClose == null) @@ -720,6 +739,8 @@ public void onCustomMessageNoAffinityChange( return; aff.clientEventTopologyChange(evts.lastEvent(), evts.topologyVersion()); + + cctx.exchange().exchangerUpdateHeartbeat(); } }); } @@ -777,15 +798,12 @@ public IgniteInternalFuture onCacheChangeRequest( ) throws IgniteCheckedException { assert exchActions != null && !exchActions.empty() : exchActions; - long time = System.currentTimeMillis(); - IgniteInternalFuture res = cachesRegistry.update(exchActions); // Affinity did not change for existing caches. 
onCustomMessageNoAffinityChange(fut, crd, exchActions); - if (log.isInfoEnabled()) - log.info("Updating caches registry performed in " + (System.currentTimeMillis() - time) + " ms."); + fut.timeBag().finishGlobalStage("Update caches registry"); processCacheStartRequests(fut, crd, exchActions); @@ -848,7 +866,7 @@ private void processCacheStartRequests( final ExchangeDiscoveryEvents evts = fut.context().events(); - long time = System.currentTimeMillis(); + Map startCacheInfos = new LinkedHashMap<>(); for (ExchangeActions.CacheActionData action : exchActions.cacheStartRequests()) { DynamicCacheDescriptor cacheDesc = action.descriptor(); @@ -869,7 +887,7 @@ private void processCacheStartRequests( assert cctx.cacheContext(cacheDesc.cacheId()) == null : "Starting cache has not null context: " + cacheDesc.cacheName(); - IgniteCacheProxyImpl cacheProxy = (IgniteCacheProxyImpl) cctx.cache().jcacheProxy(req.cacheName()); + IgniteCacheProxyImpl cacheProxy = cctx.cache().jcacheProxy(req.cacheName(), false); // If it has proxy then try to start it if (cacheProxy != null) { @@ -885,45 +903,80 @@ private void processCacheStartRequests( } } - try { - if (startCache) { - cctx.cache().prepareCacheStart(req.startCacheConfiguration(), + if (startCache) { + startCacheInfos.put( + new StartCacheInfo( + req.startCacheConfiguration(), cacheDesc, nearCfg, evts.topologyVersion(), - req.disabledAfterStart()); - - if (fut.cacheAddedOnExchange(cacheDesc.cacheId(), cacheDesc.receivedFrom())) { - if (fut.events().discoveryCache().cacheGroupAffinityNodes(cacheDesc.groupId()).isEmpty()) - U.quietAndWarn(log, "No server nodes found for cache client: " + req.cacheName()); - } - } + req.disabledAfterStart() + ), + req + ); } - catch (IgniteCheckedException e) { + } + + Map failedCaches = cctx.cache().prepareStartCachesIfPossible(startCacheInfos.keySet()); + + for (Map.Entry entry : failedCaches.entrySet()) { + if (cctx.localNode().isClient()) { U.error(log, "Failed to initialize cache. 
Will try to rollback cache start routine. " + - "[cacheName=" + req.cacheName() + ']', e); + "[cacheName=" + entry.getKey().getStartedConfiguration().getName() + ']', entry.getValue()); - cctx.cache().closeCaches(Collections.singleton(req.cacheName()), false); + cctx.cache().closeCaches(Collections.singleton(entry.getKey().getStartedConfiguration().getName()), false); - cctx.cache().completeCacheStartFuture(req, false, e); + cctx.cache().completeCacheStartFuture(startCacheInfos.get(entry.getKey()), false, entry.getValue()); } + else + throw entry.getValue(); } - if (log.isInfoEnabled()) - log.info("Caches starting performed in " + (System.currentTimeMillis() - time) + " ms."); + Set failedCacheInfos = failedCaches.keySet(); - time = System.currentTimeMillis(); + List cacheInfos = startCacheInfos.keySet().stream() + .filter(failedCacheInfos::contains) + .collect(Collectors.toList()); - Set gprs = new HashSet<>(); + for (StartCacheInfo info : cacheInfos) { + if (fut.cacheAddedOnExchange(info.getCacheDescriptor().cacheId(), info.getCacheDescriptor().receivedFrom())) { + if (fut.events().discoveryCache().cacheGroupAffinityNodes(info.getCacheDescriptor().groupId()).isEmpty()) + U.quietAndWarn(log, "No server nodes found for cache client: " + info.getCacheDescriptor().cacheName()); + } + } - for (ExchangeActions.CacheActionData action : exchActions.cacheStartRequests()) { - int grpId = action.descriptor().groupId(); + fut.timeBag().finishGlobalStage("Start caches"); - if (gprs.add(grpId)) { + initAffinityOnCacheGroupsStart(fut, exchActions, crd); + + fut.timeBag().finishGlobalStage("Affinity initialization on cache group start"); + } + + /** + * Initializes affinity for started cache groups received during {@code fut}. + * + * @param fut Exchange future. + * @param exchangeActions Exchange actions. + * @param crd {@code True} if local node is coordinator. 
+ */ + private void initAffinityOnCacheGroupsStart( + GridDhtPartitionsExchangeFuture fut, + ExchangeActions exchangeActions, + boolean crd + ) throws IgniteCheckedException { + List startedGroups = exchangeActions.cacheStartRequests().stream() + .map(action -> action.descriptor().groupDescriptor()) + .distinct() + .collect(Collectors.toList()); + + U.doInParallel( + cctx.kernalContext().getSystemExecutorService(), + startedGroups, + grpDesc -> { if (crd) - initStartedGroupOnCoordinator(fut, action.descriptor().groupDescriptor()); + initStartedGroupOnCoordinator(fut, grpDesc); else { - CacheGroupContext grp = cctx.cache().cacheGroup(grpId); + CacheGroupContext grp = cctx.cache().cacheGroup(grpDesc.groupId()); if (grp != null && !grp.isLocal() && grp.localStartVersion().equals(fut.initialVersion())) { assert grp.affinity().lastVersion().equals(AffinityTopologyVersion.NONE) : grp.affinity().lastVersion(); @@ -931,11 +984,13 @@ private void processCacheStartRequests( initAffinity(cachesRegistry.group(grp.groupId()), grp.affinity(), fut); } } - } - } - if (log.isInfoEnabled()) - log.info("Affinity initialization for started caches performed in " + (System.currentTimeMillis() - time) + " ms."); + fut.timeBag().finishLocalStage("Affinity initialization on cache group start " + + "[grp=" + grpDesc.cacheOrGroupName() + "]"); + + return null; + } + ); } /** @@ -944,7 +999,7 @@ private void processCacheStartRequests( * @param fut Exchange future. * @param crd Coordinator flag. * @param exchActions Cache change requests. - * @param forceClose + * @param forceClose Force close flag. * @return Set of cache groups to be stopped. */ private Set processCacheStopRequests( @@ -1001,9 +1056,11 @@ public void clearGroupHoldersAndRegistry() { * @param crd Coordinator flag. * @param msg Affinity change message. 
*/ - public void onExchangeChangeAffinityMessage(GridDhtPartitionsExchangeFuture exchFut, + public void onExchangeChangeAffinityMessage( + GridDhtPartitionsExchangeFuture exchFut, boolean crd, - CacheAffinityChangeMessage msg) { + CacheAffinityChangeMessage msg + ) { if (log.isDebugEnabled()) { log.debug("Process exchange affinity change message [exchVer=" + exchFut.initialVersion() + ", msg=" + msg + ']'); @@ -1017,7 +1074,7 @@ public void onExchangeChangeAffinityMessage(GridDhtPartitionsExchangeFuture exch assert assignment != null; - final Map>> affCache = new HashMap<>(); + final Map>> affCache = new ConcurrentHashMap<>(); forAllCacheGroups(crd, new IgniteInClosureX() { @Override public void applyx(GridAffinityAssignmentCache aff) throws IgniteCheckedException { @@ -1039,6 +1096,9 @@ public void onExchangeChangeAffinityMessage(GridDhtPartitionsExchangeFuture exch newAssignment = idealAssignment; aff.initialize(topVer, cachedAssignment(aff, newAssignment, affCache)); + + exchFut.timeBag().finishLocalStage("Affinity recalculate by change affinity message " + + "[grp=" + aff.cacheOrGroupName() + "]"); } }); } @@ -1051,10 +1111,11 @@ public void onExchangeChangeAffinityMessage(GridDhtPartitionsExchangeFuture exch * @param msg Message. * @throws IgniteCheckedException If failed. 
*/ - public void onChangeAffinityMessage(final GridDhtPartitionsExchangeFuture exchFut, + public void onChangeAffinityMessage( + final GridDhtPartitionsExchangeFuture exchFut, boolean crd, - final CacheAffinityChangeMessage msg) - throws IgniteCheckedException { + final CacheAffinityChangeMessage msg + ) { assert msg.topologyVersion() != null && msg.exchangeId() == null : msg; final AffinityTopologyVersion topVer = exchFut.initialVersion(); @@ -1070,7 +1131,7 @@ public void onChangeAffinityMessage(final GridDhtPartitionsExchangeFuture exchFu final Map deploymentIds = msg.cacheDeploymentIds(); - final Map>> affCache = new HashMap<>(); + final Map>> affCache = new ConcurrentHashMap<>(); forAllCacheGroups(crd, new IgniteInClosureX() { @Override public void applyx(GridAffinityAssignmentCache aff) throws IgniteCheckedException { @@ -1120,6 +1181,11 @@ public void onChangeAffinityMessage(final GridDhtPartitionsExchangeFuture exchFu } else aff.clientEventTopologyChange(exchFut.firstEvent(), topVer); + + cctx.exchange().exchangerUpdateHeartbeat(); + + exchFut.timeBag().finishLocalStage("Affinity change by custom message " + + "[grp=" + aff.cacheOrGroupName() + "]"); } }); } @@ -1140,6 +1206,8 @@ public void onClientEvent(final GridDhtPartitionsExchangeFuture fut, boolean crd AffinityTopologyVersion topVer = fut.initialVersion(); aff.clientEventTopologyChange(fut.firstEvent(), topVer); + + cctx.exchange().exchangerUpdateHeartbeat(); } }); } @@ -1182,14 +1250,21 @@ private void processAffinityAssignmentResponse(UUID nodeId, GridDhtAffinityAssig /** * @param c Cache closure. 
- * @throws IgniteCheckedException If failed */ - private void forAllRegisteredCacheGroups(IgniteInClosureX c) throws IgniteCheckedException { - for (CacheGroupDescriptor cacheDesc : cachesRegistry.allGroups().values()) { - if (cacheDesc.config().getCacheMode() == LOCAL) - continue; + private void forAllRegisteredCacheGroups(IgniteInClosureX c) { + Collection affinityCaches = cachesRegistry.allGroups().values().stream() + .filter(desc -> desc.config().getCacheMode() != LOCAL) + .collect(Collectors.toList()); - c.applyx(cacheDesc); + try { + U.doInParallel(cctx.kernalContext().getSystemExecutorService(), affinityCaches, t -> { + c.applyx(t); + + return null; + }); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to execute affinity operation on cache groups", e); } } @@ -1198,17 +1273,30 @@ private void forAllRegisteredCacheGroups(IgniteInClosureX * @param c Closure. */ private void forAllCacheGroups(boolean crd, IgniteInClosureX c) { + Collection affinityCaches; + if (crd) { - for (CacheGroupHolder grp : grpHolders.values()) - c.apply(grp.affinity()); + affinityCaches = grpHolders.values().stream() + .map(CacheGroupHolder::affinity) + .collect(Collectors.toList()); } else { - for (CacheGroupContext grp : cctx.kernalContext().cache().cacheGroups()) { - if (grp.isLocal()) - continue; + affinityCaches = cctx.kernalContext().cache().cacheGroups().stream() + .filter(grp -> !grp.isLocal()) + .filter(grp -> !grp.isRecoveryMode()) + .map(CacheGroupContext::affinity) + .collect(Collectors.toList()); + } - c.apply(grp.affinity()); - } + try { + U.doInParallel(cctx.kernalContext().getSystemExecutorService(), affinityCaches, t -> { + c.applyx(t); + + return null; + }); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to execute affinity operation on cache groups", e); } } @@ -1273,16 +1361,28 @@ public IgniteInternalFuture initStartedCaches( @Override public void applyx(CacheGroupDescriptor desc) throws 
IgniteCheckedException { CacheGroupHolder cache = groupHolder(fut.initialVersion(), desc); - if (cache.affinity().lastVersion().equals(AffinityTopologyVersion.NONE)) + if (cache.affinity().lastVersion().equals(AffinityTopologyVersion.NONE)) { calculateAndInit(fut.events(), cache.affinity(), fut.initialVersion()); + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("Affinity initialization (crd, new cache) " + + "[grp=" + desc.cacheOrGroupName() + "]"); + } } }); } else { forAllCacheGroups(false, new IgniteInClosureX() { @Override public void applyx(GridAffinityAssignmentCache aff) throws IgniteCheckedException { - if (aff.lastVersion().equals(AffinityTopologyVersion.NONE)) + if (aff.lastVersion().equals(AffinityTopologyVersion.NONE)) { initAffinity(cachesRegistry.group(aff.groupId()), aff, fut); + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("Affinity initialization (new cache) " + + "[grp=" + aff.cacheOrGroupName() + "]"); + } } }); } @@ -1315,7 +1415,6 @@ private void initAffinity(CacheGroupDescriptor desc, fetchFut.init(false); fetchAffinity(evts.topologyVersion(), - cctx.coordinators().currentCoordinator(), evts, evts.discoveryCache(), aff, @@ -1364,13 +1463,14 @@ public GridAffinityAssignmentCache affinity(Integer grpId) { * @param fut Current exchange future. * @param msg Finish exchange message. */ - public void applyAffinityFromFullMessage(final GridDhtPartitionsExchangeFuture fut, - final GridDhtPartitionsFullMessage msg) { - final Map nodesByOrder = new HashMap<>(); - - final Map>> affCache = new HashMap<>(); + public void applyAffinityFromFullMessage( + final GridDhtPartitionsExchangeFuture fut, + final GridDhtPartitionsFullMessage msg + ) { + // Please do not use following pattern of code (nodesByOrder, affCache). NEVER. 
+ final Map nodesByOrder = new ConcurrentHashMap<>(); - long time = System.currentTimeMillis(); + final Map>> affCache = new ConcurrentHashMap<>(); forAllCacheGroups(false, new IgniteInClosureX() { @Override public void applyx(GridAffinityAssignmentCache aff) throws IgniteCheckedException { @@ -1403,11 +1503,11 @@ public void applyAffinityFromFullMessage(final GridDhtPartitionsExchangeFuture f newAssignment = idealAssignment; aff.initialize(evts.topologyVersion(), cachedAssignment(aff, newAssignment, affCache)); + + fut.timeBag().finishLocalStage("Affinity applying from full message " + + "[grp=" + aff.cacheOrGroupName() + "]"); } }); - - if (log.isInfoEnabled()) - log.info("Affinity applying from full message performed in " + (System.currentTimeMillis() - time) + " ms."); } /** @@ -1415,13 +1515,13 @@ public void applyAffinityFromFullMessage(final GridDhtPartitionsExchangeFuture f * @param msg Message finish message. * @param resTopVer Result topology version. */ - public void onLocalJoin(final GridDhtPartitionsExchangeFuture fut, + public void onLocalJoin( + final GridDhtPartitionsExchangeFuture fut, GridDhtPartitionsFullMessage msg, - final AffinityTopologyVersion resTopVer) { + final AffinityTopologyVersion resTopVer + ) { final Set affReq = fut.context().groupsAffinityRequestOnJoin(); - final Map nodesByOrder = new HashMap<>(); - final Map receivedAff = msg.joinedNodeAffinity(); assert F.isEmpty(affReq) || (!F.isEmpty(receivedAff) && receivedAff.size() >= affReq.size()) @@ -1430,7 +1530,7 @@ public void onLocalJoin(final GridDhtPartitionsExchangeFuture fut, ", receivedCnt=" + (receivedAff != null ? 
receivedAff.size() : "none") + ", msg=" + msg + "]"); - long time = System.currentTimeMillis(); + final Map nodesByOrder = new ConcurrentHashMap<>(); forAllCacheGroups(false, new IgniteInClosureX() { @Override public void applyx(GridAffinityAssignmentCache aff) throws IgniteCheckedException { @@ -1441,7 +1541,7 @@ public void onLocalJoin(final GridDhtPartitionsExchangeFuture fut, assert grp != null; if (affReq != null && affReq.contains(aff.groupId())) { - assert AffinityTopologyVersion.NONE.equals(aff.lastVersion()); + assert AffinityTopologyVersion.NONE.equals(aff.lastVersion()) : aff.lastVersion(); CacheGroupAffinityMessage affMsg = receivedAff.get(aff.groupId()); @@ -1449,7 +1549,8 @@ public void onLocalJoin(final GridDhtPartitionsExchangeFuture fut, List> assignments = affMsg.createAssignments(nodesByOrder, evts.discoveryCache()); - assert resTopVer.equals(evts.topologyVersion()); + assert resTopVer.equals(evts.topologyVersion()) : "resTopVer=" + resTopVer + + ", evts.topVer=" + evts.topologyVersion(); List> idealAssign = affMsg.createIdealAssignments(nodesByOrder, evts.discoveryCache()); @@ -1457,7 +1558,7 @@ public void onLocalJoin(final GridDhtPartitionsExchangeFuture fut, if (idealAssign != null) aff.idealAssignment(idealAssign); else { - assert !aff.centralizedAffinityFunction(); + assert !aff.centralizedAffinityFunction() : aff; // Calculate ideal assignments. 
aff.calculate(evts.topologyVersion(), evts, evts.discoveryCache()); @@ -1469,11 +1570,11 @@ else if (fut.cacheGroupAddedOnExchange(aff.groupId(), grp.receivedFrom())) calculateAndInit(evts, aff, evts.topologyVersion()); grp.topology().initPartitionsWhenAffinityReady(resTopVer, fut); + + fut.timeBag().finishLocalStage("Affinity initialization (local join) " + + "[grp=" + grp.cacheOrGroupName() + "]"); } }); - - if (log.isInfoEnabled()) - log.info("Affinity initialization on local join performed in " + (System.currentTimeMillis() - time) + " ms."); } /** @@ -1488,8 +1589,6 @@ public void onServerJoinWithExchangeMergeProtocol(GridDhtPartitionsExchangeFutur assert fut.context().mergeExchanges(); assert evts.hasServerJoin() && !evts.hasServerLeft(); - long time = System.currentTimeMillis(); - WaitRebalanceInfo waitRebalanceInfo = initAffinityOnNodeJoin(fut, crd); this.waitInfo = waitRebalanceInfo != null && !waitRebalanceInfo.empty() ? waitRebalanceInfo : null; @@ -1502,10 +1601,6 @@ public void onServerJoinWithExchangeMergeProtocol(GridDhtPartitionsExchangeFutur ", waitGrps=" + (info != null ? 
groupNames(info.waitGrps.keySet()) : null) + ']'); } } - - if (log.isInfoEnabled()) - log.info("Affinity recalculation (on server join) performed in " - + (System.currentTimeMillis() - time) + " ms."); } /** @@ -1516,8 +1611,6 @@ public void onServerJoinWithExchangeMergeProtocol(GridDhtPartitionsExchangeFutur public Map onServerLeftWithExchangeMergeProtocol( final GridDhtPartitionsExchangeFuture fut) throws IgniteCheckedException { - long time = System.currentTimeMillis(); - final ExchangeDiscoveryEvents evts = fut.context().events(); assert fut.context().mergeExchanges(); @@ -1525,10 +1618,6 @@ public Map onServerLeftWithExchangeMergeProt Map result = onReassignmentEnforced(fut); - if (log.isInfoEnabled()) - log.info("Affinity recalculation (on server left) performed in " - + (System.currentTimeMillis() - time) + " ms."); - return result; } @@ -1544,14 +1633,8 @@ public Map onCustomEventWithEnforcedAffinity { assert DiscoveryCustomEvent.requiresCentralizedAffinityAssignment(fut.firstEvent()); - long time = System.currentTimeMillis(); - Map result = onReassignmentEnforced(fut); - if (log.isInfoEnabled()) - log.info("Affinity recalculation (custom message) performed in " - + (System.currentTimeMillis() - time) + " ms."); - return result; } @@ -1577,6 +1660,9 @@ private Map onReassignmentEnforced( if (!cache.rebalanceEnabled || fut.cacheGroupAddedOnExchange(desc.groupId(), desc.receivedFrom())) cache.affinity().initialize(topVer, assign); + + fut.timeBag().finishLocalStage("Affinity initialization (enforced) " + + "[grp=" + desc.cacheOrGroupName() + "]"); } }); @@ -1612,11 +1698,19 @@ public void onServerJoin(final GridDhtPartitionsExchangeFuture fut, boolean crd) CacheGroupHolder grpHolder = groupHolder(topVer, desc); calculateAndInit(fut.events(), grpHolder.affinity(), topVer); + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("First node affinity initialization (node join) " + + "[grp=" + desc.cacheOrGroupName() + "]"); } }); } - 
else + else { fetchAffinityOnJoin(fut); + + fut.timeBag().finishLocalStage("Affinity fetch"); + } } else waitRebalanceInfo = initAffinityOnNodeJoin(fut, crd); @@ -1741,6 +1835,8 @@ private void fetchAffinityOnJoin(GridDhtPartitionsExchangeFuture fut) throws Ign calculateAndInit(fut.events(), grp.affinity(), topVer); } } + + cctx.exchange().exchangerUpdateHeartbeat(); } for (int i = 0; i < fetchFuts.size(); i++) { @@ -1749,17 +1845,17 @@ private void fetchAffinityOnJoin(GridDhtPartitionsExchangeFuture fut) throws Ign int grpId = fetchFut.groupId(); fetchAffinity(topVer, - cctx.coordinators().currentCoordinator(), fut.events(), fut.events().discoveryCache(), cctx.cache().cacheGroup(grpId).affinity(), fetchFut); + + cctx.exchange().exchangerUpdateHeartbeat(); } } /** * @param topVer Topology version. - * @param mvccCrd Mvcc coordinator to set in affinity. * @param events Discovery events. * @param discoCache Discovery data cache. * @param affCache Affinity. @@ -1769,12 +1865,11 @@ private void fetchAffinityOnJoin(GridDhtPartitionsExchangeFuture fut) throws Ign */ private GridDhtAffinityAssignmentResponse fetchAffinity( AffinityTopologyVersion topVer, - MvccCoordinator mvccCrd, @Nullable ExchangeDiscoveryEvents events, DiscoCache discoCache, GridAffinityAssignmentCache affCache, - GridDhtAssignmentFetchFuture fetchFut) - throws IgniteCheckedException { + GridDhtAssignmentFetchFuture fetchFut + ) throws IgniteCheckedException { assert affCache != null; GridDhtAffinityAssignmentResponse res = fetchFut.get(); @@ -1782,7 +1877,7 @@ private GridDhtAffinityAssignmentResponse fetchAffinity( if (res == null) { List> aff = affCache.calculate(topVer, events, discoCache); - affCache.initialize(topVer, aff, mvccCrd); + affCache.initialize(topVer, aff); } else { List> idealAff = res.idealAffinityAssignment(discoCache); @@ -1799,7 +1894,7 @@ private GridDhtAffinityAssignmentResponse fetchAffinity( assert aff != null : res; - affCache.initialize(topVer, aff, mvccCrd); + 
affCache.initialize(topVer, aff); } return res; @@ -1824,6 +1919,11 @@ public boolean onCentralizedAffinityChange(final GridDhtPartitionsExchangeFuture CacheGroupHolder cache = groupHolder(fut.initialVersion(), desc); cache.aff.calculate(fut.initialVersion(), fut.events(), fut.events().discoveryCache()); + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("Affinity centralized initialization (crd) " + + "[grp=" + desc.cacheOrGroupName() + "]"); } }); } @@ -1831,6 +1931,11 @@ public boolean onCentralizedAffinityChange(final GridDhtPartitionsExchangeFuture forAllCacheGroups(false, new IgniteInClosureX() { @Override public void applyx(GridAffinityAssignmentCache aff) throws IgniteCheckedException { aff.calculate(fut.initialVersion(), fut.events(), fut.events().discoveryCache()); + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("Affinity centralized initialization " + + "[grp=" + aff.cacheOrGroupName() + "]"); } }); } @@ -1852,7 +1957,7 @@ public IgniteInternalFuture initCoordinatorCaches( final GridDhtPartitionsExchangeFuture fut, final boolean newAff ) throws IgniteCheckedException { - final List> futs = new ArrayList<>(); + final List> futs = Collections.synchronizedList(new ArrayList<>()); final AffinityTopologyVersion topVer = fut.initialVersion(); @@ -1895,38 +2000,59 @@ public IgniteInternalFuture initCoordinatorCaches( assert idx >= 0 && idx < exchFuts.size() - 1 : "Invalid exchange futures state [cur=" + idx + ", total=" + exchFuts.size() + ']'; - final GridDhtPartitionsExchangeFuture prev = exchFuts.get(idx + 1); + GridDhtPartitionsExchangeFuture futureToFetchAffinity = null; - assert prev.isDone() && prev.topologyVersion().compareTo(topVer) < 0 : prev; + for (int i = idx + 1; i < exchFuts.size(); i++) { + GridDhtPartitionsExchangeFuture prev = exchFuts.get(i); + + assert prev.isDone() && prev.topologyVersion().compareTo(topVer) < 0; + + if (prev.isMerged()) + continue; + + futureToFetchAffinity 
= prev; + + break; + } + + if (futureToFetchAffinity == null) + throw new IgniteCheckedException("Failed to find completed exchange future to fetch affinity."); if (log.isDebugEnabled()) { log.debug("Need initialize affinity on coordinator [" + "cacheGrp=" + desc.cacheOrGroupName() + - "prevAff=" + prev.topologyVersion() + ']'); + "prevAff=" + futureToFetchAffinity.topologyVersion() + ']'); } - GridDhtAssignmentFetchFuture fetchFut = new GridDhtAssignmentFetchFuture(cctx, - desc.groupId(), - prev.topologyVersion(), - prev.events().discoveryCache()); + GridDhtAssignmentFetchFuture fetchFut = new GridDhtAssignmentFetchFuture( + cctx, + desc.groupId(), + futureToFetchAffinity.topologyVersion(), + futureToFetchAffinity.events().discoveryCache() + ); fetchFut.init(false); final GridFutureAdapter affFut = new GridFutureAdapter<>(); + final GridDhtPartitionsExchangeFuture futureToFetchAffinity0 = futureToFetchAffinity; + fetchFut.listen(new IgniteInClosureX>() { @Override public void applyx(IgniteInternalFuture fetchFut) - throws IgniteCheckedException { - fetchAffinity(prev.topologyVersion(), - null, // Pass null mvcc coordinator, this affinity version should be used for queries. 
- prev.events(), - prev.events().discoveryCache(), - aff, - (GridDhtAssignmentFetchFuture)fetchFut); + throws IgniteCheckedException { + fetchAffinity( + futureToFetchAffinity0.topologyVersion(), + futureToFetchAffinity0.events(), + futureToFetchAffinity0.events().discoveryCache(), + aff, + (GridDhtAssignmentFetchFuture)fetchFut + ); aff.calculate(topVer, fut.events(), fut.events().discoveryCache()); affFut.onDone(topVer); + + cctx.exchange().exchangerUpdateHeartbeat(); } }); @@ -1949,6 +2075,11 @@ public IgniteInternalFuture initCoordinatorCaches( CacheGroupHolder old = grpHolders.put(grpHolder.groupId(), grpHolder); assert old == null : old; + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("Coordinator affinity cache init " + + "[grp=" + desc.cacheOrGroupName() + "]"); } }); @@ -2005,29 +2136,33 @@ private CacheGroupHolder groupHolder(AffinityTopologyVersion topVer, final Cache /** * @param fut Current exchange future. * @param crd Coordinator flag. - * @throws IgniteCheckedException If failed. * @return Rebalance info.
*/ - @Nullable private WaitRebalanceInfo initAffinityOnNodeJoin(final GridDhtPartitionsExchangeFuture fut, boolean crd) - throws IgniteCheckedException { + @Nullable private WaitRebalanceInfo initAffinityOnNodeJoin(final GridDhtPartitionsExchangeFuture fut, boolean crd) { final ExchangeDiscoveryEvents evts = fut.context().events(); - final Map>> affCache = new HashMap<>(); + final Map>> affCache = new ConcurrentHashMap<>(); if (!crd) { - for (CacheGroupContext grp : cctx.cache().cacheGroups()) { - if (grp.isLocal()) - continue; + forAllCacheGroups(false, new IgniteInClosureX() { + @Override public void applyx(GridAffinityAssignmentCache grpAffCache) throws IgniteCheckedException { + CacheGroupContext grp = cctx.cache().cacheGroup(grpAffCache.groupId()); - boolean latePrimary = grp.rebalanceEnabled(); + assert grp != null; - initAffinityOnNodeJoin(evts, - evts.nodeJoined(grp.receivedFrom()), - grp.affinity(), - null, - latePrimary, - affCache); - } + initAffinityOnNodeJoin(evts, + evts.nodeJoined(grp.receivedFrom()), + grp.affinity(), + null, + grp.rebalanceEnabled(), + affCache); + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("Affinity initialization (node join) " + + "[grp=" + grp.cacheOrGroupName() + "]"); + } + }); return null; } @@ -2062,6 +2197,11 @@ private CacheGroupHolder groupHolder(AffinityTopologyVersion topVer, final Cache for (GridDhtPartitionMap map0 : map.values()) cache.topology(fut.context().events().discoveryCache()).update(fut.exchangeId(), map0, true); } + + cctx.exchange().exchangerUpdateHeartbeat(); + + fut.timeBag().finishLocalStage("Affinity initialization (crd, node join) " + + "[grp=" + desc.cacheOrGroupName() + "]"); } }); @@ -2069,6 +2209,9 @@ private CacheGroupHolder groupHolder(AffinityTopologyVersion topVer, final Cache } } + /** + * @param aff Affinity assignment. 
+ */ private Map affinityFullMap(AffinityAssignment aff) { Map map = new HashMap<>(); @@ -2234,7 +2377,7 @@ public IgniteInternalFuture>>> initAffinity try { resFut.onDone(initAffinityBasedOnPartitionsAvailability(fut.initialVersion(), fut, NODE_TO_ID, false)); } - catch (IgniteCheckedException e) { + catch (Exception e) { resFut.onDone(e); } } @@ -2254,14 +2397,15 @@ public IgniteInternalFuture>>> initAffinity * @param fut Exchange future. * @param c Closure converting affinity diff. * @param initAff {@code True} if need initialize affinity. - * @return Affinity assignment. - * @throws IgniteCheckedException If failed. + * + * @return Affinity assignment for each registered cache group. */ - private Map>> initAffinityBasedOnPartitionsAvailability(final AffinityTopologyVersion topVer, + private Map>> initAffinityBasedOnPartitionsAvailability( + final AffinityTopologyVersion topVer, final GridDhtPartitionsExchangeFuture fut, final IgniteClosure c, - final boolean initAff) - throws IgniteCheckedException { + final boolean initAff + ) { final boolean enforcedCentralizedAssignment = DiscoveryCustomEvent.requiresCentralizedAffinityAssignment(fut.firstEvent()); @@ -2271,7 +2415,7 @@ private Map>> initAffinityBasedOnPartitionsAva final Collection aliveNodes = fut.context().events().discoveryCache().serverNodes(); - final Map>> assignment = new HashMap<>(); + final Map>> assignment = new ConcurrentHashMap<>(); forAllRegisteredCacheGroups(new IgniteInClosureX() { @Override public void applyx(CacheGroupDescriptor desc) throws IgniteCheckedException { @@ -2403,6 +2547,9 @@ else if (curPrimary != null && !curPrimary.equals(newPrimary)) { if (initAff) grpHolder.affinity().initialize(topVer, newAssignment0); + + fut.timeBag().finishLocalStage("Affinity recalculation (partitions availability) " + + "[grp=" + desc.cacheOrGroupName() + "]"); } }); @@ -2667,13 +2814,13 @@ class WaitRebalanceInfo { private final AffinityTopologyVersion topVer; /** */ - private Map> waitGrps; +
private final Map> waitGrps = new ConcurrentHashMap<>(); /** */ - private Map>> assignments; + private final Map>> assignments = new ConcurrentHashMap<>(); /** */ - private Map deploymentIds; + private final Map deploymentIds = new ConcurrentHashMap<>(); /** * @param topVer Topology version. @@ -2686,14 +2833,15 @@ class WaitRebalanceInfo { * @return {@code True} if there are partitions waiting for rebalancing. */ boolean empty() { - if (waitGrps != null) { - assert !waitGrps.isEmpty(); + boolean isEmpty = waitGrps.isEmpty(); + + if (!isEmpty) { assert waitGrps.size() == assignments.size(); return false; } - return true; + return isEmpty; } /** @@ -2705,28 +2853,11 @@ boolean empty() { void add(Integer grpId, Integer part, UUID waitNode, List assignment) { assert !F.isEmpty(assignment) : assignment; - if (waitGrps == null) { - waitGrps = new HashMap<>(); - assignments = new HashMap<>(); - deploymentIds = new HashMap<>(); - } - - Map cacheWaitParts = waitGrps.get(grpId); - - if (cacheWaitParts == null) { - waitGrps.put(grpId, cacheWaitParts = new HashMap<>()); - - deploymentIds.put(grpId, cachesRegistry.group(grpId).deploymentId()); - } - - cacheWaitParts.put(part, waitNode); - - Map> cacheAssignment = assignments.get(grpId); + deploymentIds.putIfAbsent(grpId, cachesRegistry.group(grpId).deploymentId()); - if (cacheAssignment == null) - assignments.put(grpId, cacheAssignment = new HashMap<>()); + waitGrps.computeIfAbsent(grpId, k -> new HashMap<>()).put(part, waitNode); - cacheAssignment.put(part, assignment); + assignments.computeIfAbsent(grpId, k -> new HashMap<>()).put(part, assignment); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheConflictResolutionManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheConflictResolutionManager.java index 6d65d828fc298..9790f754a8d5a 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheConflictResolutionManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheConflictResolutionManager.java @@ -17,7 +17,7 @@ package org.apache.ignite.internal.processors.cache; -import org.apache.ignite.internal.processors.cache.version.*; +import org.apache.ignite.internal.processors.cache.version.CacheVersionConflictResolver; /** * Conflict resolver manager. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryInfoCollection.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryInfoCollection.java index 614d7c06fbf56..968afd5212640 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryInfoCollection.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryInfoCollection.java @@ -128,4 +128,4 @@ public void add(GridCacheEntryInfo info) { @Override public byte fieldsCount() { return 1; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicate.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicate.java index 61cbb9e04fb54..36312a1591135 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicate.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicate.java @@ -42,4 +42,4 @@ public interface CacheEntryPredicate extends IgnitePredicate, * @param locked Entry locked */ public void entryLocked(boolean locked); -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateAdapter.java index e41938997daa2..62325323e1d3e 
100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateAdapter.java @@ -97,4 +97,4 @@ public abstract class CacheEntryPredicateAdapter implements CacheEntryPredicate @Override public void onAckReceived() { // No-op. } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateContainsValue.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateContainsValue.java index 76806a44f5cfd..ad9861cf8b24e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateContainsValue.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateContainsValue.java @@ -19,6 +19,7 @@ import java.nio.ByteBuffer; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.binary.BinaryObject; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.CU; @@ -60,6 +61,9 @@ public CacheEntryPredicateContainsValue(CacheObject val) { GridCacheContext cctx = e.context(); + if (this.val instanceof BinaryObject && val instanceof BinaryObject) + return F.eq(val, this.val); + Object thisVal = CU.value(this.val, cctx, false); Object cacheVal = CU.value(val, cctx, false); @@ -140,4 +144,4 @@ public CacheEntryPredicateContainsValue(CacheObject val) { @Override public String toString() { return S.toString(CacheEntryPredicateContainsValue.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateHasValue.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateHasValue.java index 
cac04357a5816..210cc7059b21d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateHasValue.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateHasValue.java @@ -28,4 +28,4 @@ public class CacheEntryPredicateHasValue extends CacheEntryPredicateAdapter { @Override public boolean apply(GridCacheEntryEx e) { return peekVisibleValue(e) != null; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateNoValue.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateNoValue.java index 2790170e959e8..4c8917fcc68be 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateNoValue.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntryPredicateNoValue.java @@ -28,4 +28,4 @@ public class CacheEntryPredicateNoValue extends CacheEntryPredicateAdapter { @Override public boolean apply(GridCacheEntryEx e) { return peekVisibleValue(e) == null; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntrySerializablePredicate.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntrySerializablePredicate.java index 9057e41fbfbfc..257433636e6b4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntrySerializablePredicate.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEntrySerializablePredicate.java @@ -156,4 +156,4 @@ public CacheEntryPredicate predicate() { @Override public byte fieldsCount() { return 1; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictableEntryImpl.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictableEntryImpl.java index bcaf890ab2f89..b7c882372ed03 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictableEntryImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictableEntryImpl.java @@ -87,7 +87,7 @@ protected CacheEvictableEntryImpl(GridCacheEntryEx cached) { */ @Nullable public V peek() { try { - CacheObject val = cached.peek(null); + CacheObject val = cached.peek(); return val != null ? val.value(cached.context().cacheObjectContext(), false) : null; } @@ -151,7 +151,7 @@ protected CacheEvictableEntryImpl(GridCacheEntryEx cached) { return null; try { - CacheObject val = e.peek(null); + CacheObject val = e.peek(); return val != null ? val.value(cached.context().cacheObjectContext(), false) : null; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionEntry.java index 2717b1e03e22e..96b85df2f0721 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionEntry.java @@ -185,4 +185,4 @@ public void finishUnmarshal(GridCacheContext ctx, ClassLoader ldr) throws Ignite @Override public byte fieldsCount() { return 3; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionManager.java index b614728f25946..2a9a0e8b10d76 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheEvictionManager.java @@ -19,7 +19,6 @@ import 
java.util.Collection; import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.jetbrains.annotations.Nullable; @@ -35,10 +34,9 @@ public interface CacheEvictionManager extends GridCacheManager { public void touch(IgniteTxEntry txEntry, boolean loc); /** - * @param e Entry for eviction policy notification. - * @param topVer Topology version. + * @param e Entry for eviction policy notification. */ - public void touch(GridCacheEntryEx e, AffinityTopologyVersion topVer); + public void touch(GridCacheEntryEx e); /** * @param entry Entry to attempt to evict. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupContext.java index e1b307a8c241e..88842921e48cb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupContext.java @@ -20,10 +20,11 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; -import java.util.Iterator; +import java.util.Collections; import java.util.List; import java.util.Set; import java.util.UUID; +import java.util.concurrent.atomic.AtomicBoolean; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.cache.affinity.AffinityFunction; @@ -39,9 +40,9 @@ import org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtAffinityAssignmentRequest; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtAffinityAssignmentResponse; +import 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl; -import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader; import org.apache.ignite.internal.processors.cache.persistence.DataRegion; import org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager; import org.apache.ignite.internal.processors.cache.persistence.freelist.FreeList; @@ -49,13 +50,13 @@ import org.apache.ignite.internal.processors.cache.query.continuous.CounterSkipContext; import org.apache.ignite.internal.processors.query.QueryUtils; import org.apache.ignite.internal.util.typedef.CI1; +import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.mxbean.CacheGroupMetricsMXBean; import org.jetbrains.annotations.Nullable; @@ -77,13 +78,13 @@ public class CacheGroupContext { private final int grpId; /** Node ID cache group was received from. */ - private final UUID rcvdFrom; + private volatile UUID rcvdFrom; /** Flag indicating that this cache group is in a recovery mode due to partitions loss. 
*/ private boolean needsRecovery; /** */ - private final AffinityTopologyVersion locStartVer; + private volatile AffinityTopologyVersion locStartVer; /** */ private final CacheConfiguration ccfg; @@ -92,7 +93,7 @@ public class CacheGroupContext { private final GridCacheSharedContext ctx; /** */ - private final boolean affNode; + private volatile boolean affNode; /** */ private final CacheType cacheType; @@ -106,26 +107,26 @@ public class CacheGroupContext { /** */ private final boolean storeCacheId; - /** */ - private volatile List caches; + /** Content is modified under lock by making a defensive copy; the field always holds an unmodifiable list. */ + private volatile List caches = Collections.unmodifiableList(new ArrayList<>()); - /** */ private volatile List contQryCaches; /** */ private final IgniteLogger log; /** */ - private GridAffinityAssignmentCache aff; + private volatile GridAffinityAssignmentCache aff; /** */ - private GridDhtPartitionTopologyImpl top; + private volatile GridDhtPartitionTopologyImpl top; /** */ - private IgniteCacheOffheapManager offheapMgr; + private volatile IgniteCacheOffheapManager offheapMgr; /** */ - private GridCachePreloader preldr; + private volatile GridCachePreloader preldr; + /** */ private final DataRegion dataRegion; @@ -142,16 +143,16 @@ public class CacheGroupContext { private final ReuseList reuseList; /** */ - private boolean drEnabled; + private volatile boolean drEnabled; /** */ - private boolean qryEnabled; + private volatile boolean qryEnabled; /** */ - private boolean mvccEnabled; + private final boolean mvccEnabled; /** MXBean. */ - private CacheGroupMetricsMXBean mxBean; + private final CacheGroupMetricsMXBean mxBean; /** */ private volatile boolean localWalEnabled; @@ -159,6 +160,9 @@ public class CacheGroupContext { /** */ private volatile boolean globalWalEnabled; + /** Flag indicating that the cache group is being recovered and is not attached to the topology.
*/ + private final AtomicBoolean recoveryMode; + /** * @param ctx Context. * @param grpId Group ID. @@ -187,7 +191,8 @@ public class CacheGroupContext { ReuseList reuseList, AffinityTopologyVersion locStartVer, boolean persistenceEnabled, - boolean walEnabled + boolean walEnabled, + boolean recoveryMode ) { assert ccfg != null; assert dataRegion != null || !affNode; @@ -207,8 +212,7 @@ public class CacheGroupContext { this.globalWalEnabled = walEnabled; this.persistenceEnabled = persistenceEnabled; this.localWalEnabled = true; - - persistGlobalWalState(walEnabled); + this.recoveryMode = new AtomicBoolean(recoveryMode); ioPlc = cacheType.ioPolicy(); @@ -220,8 +224,6 @@ public class CacheGroupContext { log = ctx.kernalContext().log(getClass()); - caches = new ArrayList<>(); - mxBean = new CacheGroupMetricsMXBeanImpl(this); } @@ -291,10 +293,9 @@ void onCacheStarted(GridCacheContext cctx) throws IgniteCheckedException { public boolean hasCache(String cacheName) { List caches = this.caches; - for (int i = 0; i < caches.size(); i++) { - if (caches.get(i).name().equals(cacheName)) + for (GridCacheContext cacheContext : caches) + if (cacheContext.name().equals(cacheName)) return true; - } return false; } @@ -306,11 +307,17 @@ private void addCacheContext(GridCacheContext cctx) { assert cacheType.userCache() == cctx.userCache() : cctx.name(); assert grpId == cctx.groupId() : cctx.name(); - ArrayList caches = new ArrayList<>(this.caches); + final boolean add; + + synchronized (this) { + List copy = new ArrayList<>(caches); + + assert sharedGroup() || copy.isEmpty(); - assert sharedGroup() || caches.isEmpty(); + add = copy.add(cctx); - boolean add = caches.add(cctx); + caches = Collections.unmodifiableList(copy); + } assert add : cctx.name(); @@ -319,39 +326,39 @@ private void addCacheContext(GridCacheContext cctx) { if (!drEnabled && cctx.isDrEnabled()) drEnabled = true; - - this.caches = caches; } /** * @param cctx Cache context. 
*/ private void removeCacheContext(GridCacheContext cctx) { - ArrayList caches = new ArrayList<>(this.caches); + final List copy; - // It is possible cache was not added in case of errors on cache start. - for (Iterator it = caches.iterator(); it.hasNext();) { - GridCacheContext next = it.next(); + synchronized (this) { + copy = new ArrayList<>(caches); - if (next == cctx) { - assert sharedGroup() || caches.size() == 1 : caches.size(); + for (GridCacheContext next : copy) { + if (next == cctx) { + assert sharedGroup() || copy.size() == 1 : copy.size(); - it.remove(); + copy.remove(next); - break; + break; + } } + + caches = Collections.unmodifiableList(copy); } if (QueryUtils.isEnabled(cctx.config())) { boolean qryEnabled = false; - for (int i = 0; i < caches.size(); i++) { - if (QueryUtils.isEnabled(caches.get(i).config())) { + for (GridCacheContext cacheContext : copy) + if (QueryUtils.isEnabled(cacheContext.config())) { qryEnabled = true; break; } - } this.qryEnabled = qryEnabled; } @@ -359,18 +366,15 @@ private void removeCacheContext(GridCacheContext cctx) { if (cctx.isDrEnabled()) { boolean drEnabled = false; - for (int i = 0; i < caches.size(); i++) { - if (caches.get(i).isDrEnabled()) { + for (GridCacheContext cacheContext : copy) + if (cacheContext.isDrEnabled()) { drEnabled = true; break; } - } this.drEnabled = drEnabled; } - - this.caches = caches; } /** @@ -392,11 +396,8 @@ public GridCacheContext singleCacheContext() { public void unwindUndeploys() { List caches = this.caches; - for (int i = 0; i < caches.size(); i++) { - GridCacheContext cctx = caches.get(i); - + for (GridCacheContext cctx : caches) cctx.deploy().unwind(cctx); - } } /** @@ -434,9 +435,7 @@ public void addRebalanceEvent(int part, int type, ClusterNode discoNode, int dis List caches = this.caches; - for (int i = 0; i < caches.size(); i++) { - GridCacheContext cctx = caches.get(i); - + for (GridCacheContext cctx : caches) if (!cctx.config().isEventsDisabled() && 
cctx.recordEvent(type)) { cctx.gridEvents().record(new CacheRebalancingEvent(cctx.name(), cctx.localNode(), @@ -447,7 +446,6 @@ public void addRebalanceEvent(int part, int type, ClusterNode discoNode, int dis discoType, discoTs)); } - } } /** @@ -462,9 +460,7 @@ public void addUnloadEvent(int part) { List caches = this.caches; - for (int i = 0; i < caches.size(); i++) { - GridCacheContext cctx = caches.get(i); - + for (GridCacheContext cctx : caches) if (!cctx.config().isEventsDisabled()) cctx.gridEvents().record(new CacheRebalancingEvent(cctx.name(), cctx.localNode(), @@ -474,7 +470,6 @@ public void addUnloadEvent(int part) { null, 0, 0)); - } } /** @@ -501,14 +496,13 @@ public void addCacheEvent( ) { List caches = this.caches; - for (int i = 0; i < caches.size(); i++) { - GridCacheContext cctx = caches.get(i); - + for (GridCacheContext cctx : caches) if (!cctx.config().isEventsDisabled()) cctx.events().addEvent(part, key, evtNodeId, - (IgniteUuid)null, + null, + null, null, type, newVal, @@ -519,7 +513,6 @@ public void addCacheEvent( null, null, keepBinary); - } } /** @@ -714,9 +707,11 @@ public boolean sharedGroup() { * */ public void onKernalStop() { - aff.cancelFutures(new IgniteCheckedException("Failed to wait for topology update, node is stopping.")); + if (!isRecoveryMode()) { + aff.cancelFutures(new IgniteCheckedException("Failed to wait for topology update, node is stopping.")); - preldr.onKernalStop(); + preldr.onKernalStop(); + } offheapMgr.onKernalStop(); } @@ -738,6 +733,11 @@ void stopCache(GridCacheContext cctx, boolean destroy) { * */ void stopGroup() { + offheapMgr.stop(); + + if (isRecoveryMode()) + return; + IgniteCheckedException err = new IgniteCheckedException("Failed to wait for topology update, cache (or node) is stopping."); @@ -747,11 +747,67 @@ void stopGroup() { preldr.onKernalStop(); - offheapMgr.stop(); - ctx.io().removeCacheGroupHandlers(grpId); } + /** + * Finishes recovery for current cache group. 
+ * Attaches topology version and initializes I/O. + * + * @param startVer Cache group start version. + * @param originalReceivedFrom UUID of the node that first initiated creation of the cache group. + * This is needed to decide whether the node should calculate affinity locally or fetch it from other nodes. + * @param affinityNode Flag indicating whether the local node is an affinity node. This can be calculated only after the node has joined the topology. + * @throws IgniteCheckedException If failed. + */ + public void finishRecovery( + AffinityTopologyVersion startVer, + UUID originalReceivedFrom, + boolean affinityNode + ) throws IgniteCheckedException { + if (!recoveryMode.compareAndSet(true, false)) + return; + + affNode = affinityNode; + + rcvdFrom = originalReceivedFrom; + + locStartVer = startVer; + + persistGlobalWalState(globalWalEnabled); + + initializeIO(); + + ctx.affinity().onCacheGroupCreated(this); + } + + /** + * @return {@code True} if the current cache group is in recovery mode. + */ + public boolean isRecoveryMode() { + return recoveryMode.get(); + } + + /** + * Initializes affinity and rebalance I/O handlers. + */ + private void initializeIO() throws IgniteCheckedException { + assert !recoveryMode.get() : "Couldn't initialize I/O handlers, recovery mode is on for group " + this; + + if (ccfg.getCacheMode() != LOCAL) { + if (!ctx.kernalContext().clientNode()) { + ctx.io().addCacheGroupHandler(groupId(), GridDhtAffinityAssignmentRequest.class, + (IgniteBiInClosure) this::processAffinityAssignmentRequest); + } + + preldr = new GridDhtPreloader(this); + + preldr.start(); + } + else + preldr = new GridCachePreloaderAdapter(this); + } + /** + * @return IDs of caches in this group. */ @@ -760,23 +816,25 @@ public Set cacheIds() { Set ids = U.newHashSet(caches.size()); - for (int i = 0; i < caches.size(); i++) - ids.add(caches.get(i).cacheId()); + for (GridCacheContext cctx : caches) + ids.add(cctx.cacheId()); return ids; } /** * @return Caches in this group. 
+ * + * {@code caches} is already an unmodifiable list, so there is no need to wrap it explicitly here. */ public List caches() { - return this.caches; + return caches; } /** * @return {@code True} if group contains caches. */ - boolean hasCaches() { + public boolean hasCaches() { List caches = this.caches; return !caches.isEmpty(); @@ -788,9 +846,7 @@ boolean hasCaches() { public void onPartitionEvicted(int part) { List caches = this.caches; - for (int i = 0; i < caches.size(); i++) { - GridCacheContext cctx = caches.get(i); - + for (GridCacheContext cctx : caches) { if (cctx.isDrEnabled()) cctx.dr().partitionEvicted(part); @@ -883,6 +939,13 @@ public void onPartitionCounterUpdate(int cacheId, } } + /** + * @return {@code True} if at least one cache with a registered continuous query exists in this group. + */ + public boolean hasContinuousQueryCaches() { + return !F.isEmpty(contQryCaches); + } + /** * @throws IgniteCheckedException If failed. */ @@ -896,39 +959,25 @@ public void start() throws IgniteCheckedException { ccfg.getCacheMode() == LOCAL, persistenceEnabled()); - if (ccfg.getCacheMode() != LOCAL) { + if (ccfg.getCacheMode() != LOCAL) top = new GridDhtPartitionTopologyImpl(ctx, this); - if (!ctx.kernalContext().clientNode()) { - ctx.io().addCacheGroupHandler(groupId(), GridDhtAffinityAssignmentRequest.class, - new IgniteBiInClosure() { - @Override public void apply(UUID nodeId, GridDhtAffinityAssignmentRequest msg) { - processAffinityAssignmentRequest(nodeId, msg); - } - }); - } - - preldr = new GridDhtPreloader(this); - - preldr.start(); + try { + offheapMgr = persistenceEnabled + ? 
new GridCacheOffheapManager() + : new IgniteCacheOffheapManagerImpl(); } - else - preldr = new GridCachePreloaderAdapter(this); - - if (persistenceEnabled()) { - try { - offheapMgr = new GridCacheOffheapManager(); - } - catch (Exception e) { - throw new IgniteCheckedException("Failed to initialize offheap manager", e); - } + catch (Exception e) { + throw new IgniteCheckedException("Failed to initialize offheap manager", e); } - else - offheapMgr = new IgniteCacheOffheapManagerImpl(); offheapMgr.start(ctx, this); - ctx.affinity().onCacheGroupCreated(this); + if (!isRecoveryMode()) { + initializeIO(); + + ctx.affinity().onCacheGroupCreated(this); + } } /** @@ -942,8 +991,7 @@ public boolean persistenceEnabled() { * @param nodeId Node ID. * @param req Request. */ - private void processAffinityAssignmentRequest(final UUID nodeId, - final GridDhtAffinityAssignmentRequest req) { + private void processAffinityAssignmentRequest(UUID nodeId, GridDhtAffinityAssignmentRequest req) { if (log.isDebugEnabled()) log.debug("Processing affinity assignment request [node=" + nodeId + ", req=" + req + ']'); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupDescriptor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupDescriptor.java index 70cdcc735af62..e72de28a05674 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupDescriptor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupDescriptor.java @@ -23,6 +23,7 @@ import java.util.LinkedList; import java.util.List; import java.util.Map; +import java.util.Objects; import java.util.UUID; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -311,4 +312,22 @@ public boolean persistenceEnabled() { @Override public String toString() { return S.toString(CacheGroupDescriptor.class, this, 
"cacheName", cacheCfg.getName()); } + + /** {@inheritDoc} */ + @Override public boolean equals(Object o) { + if (this == o) + return true; + + if (o == null || getClass() != o.getClass()) + return false; + + CacheGroupDescriptor that = (CacheGroupDescriptor) o; + + return grpId == that.grpId; + } + + /** {@inheritDoc} */ + @Override public int hashCode() { + return Objects.hash(grpId); + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMXBeanImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMXBeanImpl.java index 5ece77f57ba4d..753a86fdc4866 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMXBeanImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMXBeanImpl.java @@ -83,6 +83,13 @@ public GroupAllocationTracker(AllocatedPageTracker delegate) { delegate.updateTotalAllocatedPages(delta); } + + /** + * Resets count of allocated pages to zero. 
+ */ + public void reset() { + totalAllocatedPages.reset(); + } } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeDirectResult.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeDirectResult.java index 3b463afe8a25e..3f880339eb889 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeDirectResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeDirectResult.java @@ -267,4 +267,4 @@ public void finishUnmarshal(GridCacheContext ctx, ClassLoader ldr) throws Ignite @Override public String toString() { return S.toString(CacheInvokeDirectResult.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeEntry.java index 25261463f3354..dddc735435c56 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheInvokeEntry.java @@ -96,13 +96,30 @@ public CacheInvokeEntry(KeyCacheObject keyObj, /** {@inheritDoc} */ @Override public void remove() { + if (!entry.isMvcc()) { + if (op == Operation.CREATE) + op = Operation.NONE; + else + op = Operation.REMOVE; + } + else { + if (op == Operation.CREATE) { + assert !hadVal; + + op = Operation.NONE; + } + else if (exists()) { + assert hadVal; + + op = Operation.REMOVE; + } + + if (hadVal && oldVal == null) + oldVal = val; + } + val = null; valObj = null; - - if (op == Operation.CREATE) - op = Operation.NONE; - else - op = Operation.REMOVE; } /** {@inheritDoc} */ @@ -110,13 +127,27 @@ public CacheInvokeEntry(KeyCacheObject keyObj, if (val == null) throw new NullPointerException(); - this.oldVal = this.val; + if (!entry.isMvcc()) + this.oldVal = this.val; + else 
{ + if (hadVal && oldVal == null) + this.oldVal = this.val; + } this.val = val; op = hadVal ? Operation.UPDATE : Operation.CREATE; } + /** + * Entry processor operation. + * + * @return Operation. + */ + public Operation op() { + return op; + } + /** * @return Return origin value, before modification. */ @@ -160,7 +191,7 @@ public GridCacheEntryEx entry() { /** * */ - private static enum Operation { + public static enum Operation { /** */ NONE, diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheMetricsSnapshot.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheMetricsSnapshot.java index 5f3001cdc7311..6bdaf9669be82 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheMetricsSnapshot.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheMetricsSnapshot.java @@ -1148,11 +1148,13 @@ public CacheMetricsSnapshot(CacheMetrics loc, Collection metrics) rebalancingBytesRate = in.readLong(); rebalancingKeysRate = in.readLong(); - rebalancedKeys = in.readLong(); - estimatedRebalancingKeys = in.readLong(); - rebalanceStartTime = in.readLong(); - rebalanceFinishTime = in.readLong(); - rebalanceClearingPartitionsLeft = in.readLong(); + if (in.available() >= 40) { + rebalancedKeys = in.readLong(); + estimatedRebalancingKeys = in.readLong(); + rebalanceStartTime = in.readLong(); + rebalanceFinishTime = in.readLong(); + rebalanceClearingPartitionsLeft = in.readLong(); + } // 11 long and 5 float values give 108 bytes in total. 
if (in.available() >= 108) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObject.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObject.java index 3bc2a6dcb0624..f9f384a7f9702 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObject.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObject.java @@ -118,4 +118,4 @@ public interface CacheObject extends Message { * @throws IgniteCheckedException If failed. */ public void prepareMarshal(CacheObjectValueContext ctx) throws IgniteCheckedException; -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectAdapter.java index 67ee410d06e2e..c6d900250f6b0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectAdapter.java @@ -247,4 +247,4 @@ else if (off >= headSize) return true; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectByteArrayImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectByteArrayImpl.java index 57a70f83e5044..de5a9191950c0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectByteArrayImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectByteArrayImpl.java @@ -191,4 +191,4 @@ public CacheObjectByteArrayImpl(byte[] val) { @Override public String toString() { return "CacheObjectByteArrayImpl [arrLen=" + (val != null ? 
val.length : 0) + ']'; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectImpl.java index 2124a97940b9b..b29c19e1e254c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheObjectImpl.java @@ -152,4 +152,4 @@ else if (kernalCtx.config().isPeerClassLoadingEnabled()) @Override public CacheObject prepareForCache(CacheObjectContext ctx) { return this; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheOffheapEvictionManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheOffheapEvictionManager.java index d737c8bfca328..6813fec446d9e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheOffheapEvictionManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheOffheapEvictionManager.java @@ -19,7 +19,6 @@ import java.util.Collection; import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionManager; @@ -32,11 +31,11 @@ public class CacheOffheapEvictionManager extends GridCacheManagerAdapter implements CacheEvictionManager { /** {@inheritDoc} */ @Override public void touch(IgniteTxEntry txEntry, boolean loc) { - touch(txEntry.cached(), null); + touch(txEntry.cached()); } /** {@inheritDoc} */ - @Override public void touch(GridCacheEntryEx e, AffinityTopologyVersion topVer) { + @Override public void 
touch(GridCacheEntryEx e) { if (e.detached()) return; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CachesRegistry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CachesRegistry.java index fe55f979a56ec..d37f69ca5f6ad 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CachesRegistry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CachesRegistry.java @@ -280,7 +280,7 @@ private boolean shouldPersist(CacheConfiguration cacheCfg) { */ private IgniteInternalFuture persistCacheConfigurations(List cacheDescriptors) { List cacheConfigsToPersist = cacheDescriptors.stream() - .map(cacheDesc -> new StoredCacheData(cacheDesc.cacheConfiguration()).sql(cacheDesc.sql())) + .map(DynamicCacheDescriptor::toStoredData) .collect(Collectors.toList()); // Pre-create cache work directories if they don't exist. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ClusterCachesInfo.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ClusterCachesInfo.java index 8bed063ef6d94..3e8e4bf34a32c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ClusterCachesInfo.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ClusterCachesInfo.java @@ -53,7 +53,6 @@ import org.apache.ignite.internal.processors.query.QuerySchemaPatch; import org.apache.ignite.internal.processors.query.QueryUtils; import org.apache.ignite.internal.processors.query.schema.SchemaOperationException; -import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.lang.GridFunc; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.T2; @@ -79,6 +78,9 @@ * Logic related to cache discovery data processing. 
*/ class ClusterCachesInfo { + /** Representation of null for restarting caches map */ + private static final IgniteUuid NULL_OBJECT = new IgniteUuid(); + /** Version since which merge of config is supports. */ private static final IgniteProductVersion V_MERGE_CONFIG_SINCE = IgniteProductVersion.fromString("2.5.0"); @@ -94,8 +96,8 @@ class ClusterCachesInfo { /** Cache templates. */ private final ConcurrentMap registeredTemplates = new ConcurrentHashMap<>(); - /** Caches currently being restarted. */ - private final Collection restartingCaches = new GridConcurrentHashSet<>(); + /** Caches currently being restarted (with restarter id). */ + private final ConcurrentHashMap restartingCaches = new ConcurrentHashMap<>(); /** */ private final IgniteLogger log; @@ -157,17 +159,6 @@ public void onStart(CacheJoinNodeDiscoveryData joinDiscoData) throws IgniteCheck throw new IgniteCheckedException("Failed to start configured cache. " + conflictErr); } - /** - * @param cacheName Cache name. - * @param grpName Group name. - * @return Group ID. - */ - private int cacheGroupId(String cacheName, @Nullable String grpName) { - assert cacheName != null; - - return grpName != null ? CU.cacheId(grpName) : CU.cacheId(cacheName); - } - /** * @param checkConsistency {@code True} if need check cache configurations consistency. * @throws IgniteCheckedException If failed. 
@@ -363,6 +354,9 @@ private void checkCache(CacheJoinNodeDiscoveryData.CacheInfo locInfo, CacheData "Query parallelism", locAttr.qryParallelism(), rmtAttr.qryParallelism(), true); } } + + CU.checkAttributeMismatch(log, rmtAttr.cacheName(), rmt, "isEncryptionEnabled", + "Cache encrypted", locAttr.isEncryptionEnabled(), rmtAttr.isEncryptionEnabled(), true); } /** @@ -419,7 +413,7 @@ public void onCacheChangeRequested(DynamicCacheChangeFailureMessage failMsg, Aff requests.add(DynamicCacheChangeRequest.stopRequest(ctx, cacheName, cacheDescr.sql(), true)); } - processCacheChangeRequests(exchangeActions, requests, topVer,false); + processCacheChangeRequests(exchangeActions, requests, topVer, false); failMsg.exchangeActions(exchangeActions); } @@ -476,303 +470,399 @@ private CacheChangeProcessResult processCacheChangeRequests( ExchangeActions exchangeActions, Collection reqs, AffinityTopologyVersion topVer, - boolean persistedCfgs) { + boolean persistedCfgs + ) { CacheChangeProcessResult res = new CacheChangeProcessResult(); final List> reqsToComplete = new ArrayList<>(); - for (DynamicCacheChangeRequest req : reqs) { - if (req.template()) { - CacheConfiguration ccfg = req.startCacheConfiguration(); + for (DynamicCacheChangeRequest req : reqs) + processCacheChangeRequest0(req, exchangeActions, topVer, persistedCfgs, res, reqsToComplete); - assert ccfg != null : req; + if (!F.isEmpty(res.addedDescs)) { + AffinityTopologyVersion startTopVer = res.needExchange ? 
topVer.nextMinorVersion() : topVer; - DynamicCacheDescriptor desc = registeredTemplates.get(req.cacheName()); + for (DynamicCacheDescriptor desc : res.addedDescs) { + assert desc.template() || res.needExchange; - if (desc == null) { - DynamicCacheDescriptor templateDesc = new DynamicCacheDescriptor(ctx, - ccfg, - req.cacheType(), - null, - true, - req.initiatingNodeId(), - false, - false, - req.deploymentId(), - req.schema()); + desc.startTopologyVersion(startTopVer); + } + } - DynamicCacheDescriptor old = registeredTemplates().put(ccfg.getName(), templateDesc); + if (!F.isEmpty(reqsToComplete)) { + ctx.closure().callLocalSafe(new Callable() { + @Override public Void call() throws Exception { + for (T2 t : reqsToComplete) { + final DynamicCacheChangeRequest req = t.get1(); + AffinityTopologyVersion waitTopVer = t.get2(); - assert old == null; + IgniteInternalFuture fut = waitTopVer != null ? + ctx.cache().context().exchange().affinityReadyFuture(waitTopVer) : null; - res.addedDescs.add(templateDesc); + if (fut == null || fut.isDone()) + ctx.cache().completeCacheStartFuture(req, false, null); + else { + fut.listen(new IgniteInClosure>() { + @Override public void apply(IgniteInternalFuture fut) { + ctx.cache().completeCacheStartFuture(req, false, null); + } + }); + } + } + + return null; } + }); + } - if (!persistedCfgs) - ctx.cache().completeTemplateAddFuture(ccfg.getName(), req.deploymentId()); + return res; + } - continue; - } + /** + * @param req Cache change request. + * @param exchangeActions Exchange actions to update. + * @param topVer Topology version. + * @param persistedCfgs {@code True} if process start of persisted caches during cluster activation. + * @param res Accumulator for cache change process results. 
+ * @param reqsToComplete Accumulator for cache change requests which should be completed after the affinity topology + * is ready (see {@link org.apache.ignite.internal.processors.cache.GridCacheProcessor#pendingFuts}). + */ + private void processCacheChangeRequest0( + DynamicCacheChangeRequest req, + ExchangeActions exchangeActions, + AffinityTopologyVersion topVer, + boolean persistedCfgs, + CacheChangeProcessResult res, + List> reqsToComplete + ) { + String cacheName = req.cacheName(); - assert !req.clientStartOnly() : req; + if (req.template()) { + processTemplateAddRequest(persistedCfgs, res, req); - DynamicCacheDescriptor desc = registeredCaches.get(req.cacheName()); + return; + } - boolean needExchange = false; + assert !req.clientStartOnly() : req; - boolean clientCacheStart = false; + DynamicCacheDescriptor desc = registeredCaches.get(cacheName); - AffinityTopologyVersion waitTopVer = null; + boolean needExchange = false; - if (req.start()) { - // Starting a new cache. - if (desc == null) { - String conflictErr = checkCacheConflict(req.startCacheConfiguration()); + boolean clientCacheStart = false; - if (conflictErr != null) { - U.warn(log, "Ignore cache start request. " + conflictErr); + AffinityTopologyVersion waitTopVer = null; - IgniteCheckedException err = new IgniteCheckedException("Failed to start " + - "cache. 
" + conflictErr); + if (req.start()) { + boolean proceedFuther = true; - if (persistedCfgs) - res.errs.add(err); - else - ctx.cache().completeCacheStartFuture(req, false, err); + if (restartingCaches.containsKey(cacheName) && + ((req.restartId() == null && restartingCaches.get(cacheName) != NULL_OBJECT) + || (req.restartId() != null &&!req.restartId().equals(restartingCaches.get(cacheName))))) { - continue; - } + if (req.failIfExists()) { + ctx.cache().completeCacheStartFuture(req, false, + new CacheExistsException("Failed to start cache (a cache is restarting): " + cacheName)); + } - if (req.clientStartOnly()) { - assert !persistedCfgs; + proceedFuther = false; + } - ctx.cache().completeCacheStartFuture(req, false, new IgniteCheckedException("Failed to start " + - "client cache (a cache with the given name is not started): " + req.cacheName())); - } - else { - SchemaOperationException err = QueryUtils.checkQueryEntityConflicts( - req.startCacheConfiguration(), registeredCaches.values()); + if (proceedFuther) { + if (desc == null) { /* Starting a new cache.*/ + if (!processStartNewCacheRequest(exchangeActions, topVer, persistedCfgs, res, req, cacheName)) + return; - if (err != null) { - if (persistedCfgs) - res.errs.add(err); + needExchange = true; + } + else { + clientCacheStart = processStartAlreadyStartedCacheRequest(topVer, persistedCfgs, req, cacheName, desc); + + if (!clientCacheStart) { + if (desc.clientCacheStartVersion() != null) + waitTopVer = desc.clientCacheStartVersion(); + else { + AffinityTopologyVersion nodeStartVer = + new AffinityTopologyVersion(ctx.discovery().localNode().order(), 0); + + if (desc.startTopologyVersion() != null) + waitTopVer = desc.startTopologyVersion(); else - ctx.cache().completeCacheStartFuture(req, false, err); + waitTopVer = desc.receivedFromStartVersion(); - continue; + if (waitTopVer == null || nodeStartVer.compareTo(waitTopVer) > 0) + waitTopVer = nodeStartVer; } + } + } + } + } + else if (req.resetLostPartitions()) { + 
if (desc != null) { + needExchange = true; - CacheConfiguration ccfg = req.startCacheConfiguration(); + exchangeActions.addCacheToResetLostPartitions(req, desc); + } + } + else if (req.stop()) { + if (desc != null) { + if (req.sql() && !desc.sql()) { + ctx.cache().completeCacheStartFuture(req, false, + new IgniteCheckedException("Only cache created with CREATE TABLE may be removed with " + + "DROP TABLE [cacheName=" + cacheName + ']')); - assert req.cacheType() != null : req; - assert F.eq(ccfg.getName(), req.cacheName()) : req; + return; + } - int cacheId = CU.cacheId(req.cacheName()); + processStopCacheRequest(exchangeActions, req, cacheName, desc); - CacheGroupDescriptor grpDesc = registerCacheGroup(exchangeActions, - topVer, - ccfg, - cacheId, - req.initiatingNodeId(), - req.deploymentId()); + needExchange = true; + } + } + else + assert false : req; - DynamicCacheDescriptor startDesc = new DynamicCacheDescriptor(ctx, - ccfg, - req.cacheType(), - grpDesc, - false, - req.initiatingNodeId(), - false, - req.sql(), - req.deploymentId(), - req.schema()); + if (!needExchange) { + if (!clientCacheStart && ctx.localNodeId().equals(req.initiatingNodeId())) + reqsToComplete.add(new T2<>(req, waitTopVer)); + } + else + res.needExchange = true; + } - DynamicCacheDescriptor old = registeredCaches.put(ccfg.getName(), startDesc); + /** + * @param req Cache change request. + * @param exchangeActions Exchange actions to update. + * @param cacheName Cache name. + * @param desc Dynamic cache descriptor. + */ + private void processStopCacheRequest( + ExchangeActions exchangeActions, + DynamicCacheChangeRequest req, + String cacheName, + DynamicCacheDescriptor desc + ) { + DynamicCacheDescriptor old = registeredCaches.remove(cacheName); - restartingCaches.remove(ccfg.getName()); + if (req.restart()) { + IgniteUuid restartId = req.restartId(); - assert old == null; + restartingCaches.put(cacheName, restartId == null ? 
NULL_OBJECT : restartId); + } - ctx.discovery().setCacheFilter( - startDesc.cacheId(), - grpDesc.groupId(), - ccfg.getName(), - ccfg.getNearConfiguration() != null); + assert old != null && old == desc : "Dynamic cache map was concurrently modified [req=" + req + ']'; - if (!persistedCfgs) { - ctx.discovery().addClientNode(req.cacheName(), - req.initiatingNodeId(), - req.nearCacheConfiguration() != null); - } + ctx.discovery().removeCacheFilter(cacheName); - res.addedDescs.add(startDesc); + exchangeActions.addCacheToStop(req, desc); - exchangeActions.addCacheToStart(req, startDesc); + CacheGroupDescriptor grpDesc = registeredCacheGrps.get(desc.groupId()); - needExchange = true; - } - } - else { - assert !persistedCfgs; - assert req.initiatingNodeId() != null : req; + assert grpDesc != null && grpDesc.groupId() == desc.groupId() : desc; - if (req.failIfExists()) { - ctx.cache().completeCacheStartFuture(req, false, - new CacheExistsException("Failed to start cache " + - "(a cache with the same name is already started): " + req.cacheName())); - } - else { - // Cache already exists, it is possible client cache is needed. 
- ClusterNode node = ctx.discovery().node(req.initiatingNodeId()); + grpDesc.onCacheStopped(desc.cacheName(), desc.cacheId()); - boolean clientReq = node != null && - !ctx.discovery().cacheAffinityNode(node, req.cacheName()); + if (!grpDesc.hasCaches()) { + registeredCacheGrps.remove(grpDesc.groupId()); - if (clientReq) { - ctx.discovery().addClientNode(req.cacheName(), - req.initiatingNodeId(), - req.nearCacheConfiguration() != null); + ctx.discovery().removeCacheGroup(grpDesc); - if (node.id().equals(req.initiatingNodeId())) { - desc.clientCacheStartVersion(topVer); + exchangeActions.addCacheGroupToStop(grpDesc, req.destroy()); - clientCacheStart = true; + assert exchangeActions.checkStopRequestConsistency(grpDesc.groupId()); - ctx.discovery().clientCacheStartEvent(req.requestId(), F.asMap(req.cacheName(), req), null); - } - } - } + // If all caches in group will be destroyed it is not necessary to destroy single cache + // because group will be stopped anyway. + if (req.destroy()) { + for (ExchangeActions.CacheActionData action : exchangeActions.cacheStopRequests()) { + if (action.descriptor().groupId() == grpDesc.groupId()) + action.request().destroy(false); } + } + } + } - if (!needExchange && !clientCacheStart && desc != null) { - if (desc.clientCacheStartVersion() != null) - waitTopVer = desc.clientCacheStartVersion(); - else { - AffinityTopologyVersion nodeStartVer = - new AffinityTopologyVersion(ctx.discovery().localNode().order(), 0); + /** + * @param persistedCfgs {@code True} if process start of persisted caches during cluster activation. + * @param res Accumulator for cache change process results. + * @param req Dynamic cache change request. 
+ */ + private void processTemplateAddRequest( + boolean persistedCfgs, + CacheChangeProcessResult res, + DynamicCacheChangeRequest req + ) { + CacheConfiguration ccfg = req.startCacheConfiguration(); - if (desc.startTopologyVersion() != null) - waitTopVer = desc.startTopologyVersion(); - else - waitTopVer = desc.receivedFromStartVersion(); + assert ccfg != null : req; - if (waitTopVer == null || nodeStartVer.compareTo(waitTopVer) > 0) - waitTopVer = nodeStartVer; - } - } - } - else if (req.resetLostPartitions()) { - if (desc != null) { - needExchange = true; + DynamicCacheDescriptor desc = registeredTemplates.get(req.cacheName()); - exchangeActions.addCacheToResetLostPartitions(req, desc); - } - } - else if (req.stop()) { - if (desc != null) { - if (req.sql() && !desc.sql()) { - ctx.cache().completeCacheStartFuture(req, false, - new IgniteCheckedException("Only cache created with CREATE TABLE may be removed with " + - "DROP TABLE [cacheName=" + req.cacheName() + ']')); - - continue; - } + if (desc == null) { + DynamicCacheDescriptor templateDesc = new DynamicCacheDescriptor(ctx, + ccfg, + req.cacheType(), + null, + true, + req.initiatingNodeId(), + false, + false, + req.deploymentId(), + req.schema()); - if (!req.sql() && desc.sql()) { - ctx.cache().completeCacheStartFuture(req, false, - new IgniteCheckedException("Only cache created with cache API may be removed with " + - "direct call to destroyCache [cacheName=" + req.cacheName() + ']')); + DynamicCacheDescriptor old = registeredTemplates().put(ccfg.getName(), templateDesc); - continue; - } + assert old == null; - DynamicCacheDescriptor old = registeredCaches.remove(req.cacheName()); - - if (req.restart()) - restartingCaches.add(req.cacheName()); + res.addedDescs.add(templateDesc); + } - assert old != null && old == desc : "Dynamic cache map was concurrently modified [req=" + req + ']'; + if (!persistedCfgs) + ctx.cache().completeTemplateAddFuture(ccfg.getName(), req.deploymentId()); + } - 
ctx.discovery().removeCacheFilter(req.cacheName()); + /** + * @param topVer Topology version. + * @param persistedCfgs {@code True} if processing start of persisted caches during cluster activation. + * @param req Cache change request. + * @param cacheName Cache name. + * @param desc Dynamic cache descriptor. + * @return {@code True} if a client cache needs to be started. + */ + private boolean processStartAlreadyStartedCacheRequest( + AffinityTopologyVersion topVer, + boolean persistedCfgs, + DynamicCacheChangeRequest req, + String cacheName, + DynamicCacheDescriptor desc + ) { + assert !persistedCfgs; + assert req.initiatingNodeId() != null : req; + + if (req.failIfExists()) { + ctx.cache().completeCacheStartFuture(req, false, + new CacheExistsException("Failed to start cache " + + "(a cache with the same name is already started): " + cacheName)); + } + else { + // Cache already exists; a client cache may still be needed. + ClusterNode node = ctx.discovery().node(req.initiatingNodeId()); - needExchange = true; + boolean clientReq = node != null && + !ctx.discovery().cacheAffinityNode(node, cacheName); - exchangeActions.addCacheToStop(req, desc); + if (clientReq) { + ctx.discovery().addClientNode(cacheName, + req.initiatingNodeId(), + req.nearCacheConfiguration() != null); - CacheGroupDescriptor grpDesc = registeredCacheGrps.get(desc.groupId()); + if (node.id().equals(req.initiatingNodeId())) { + desc.clientCacheStartVersion(topVer); - assert grpDesc != null && grpDesc.groupId() == desc.groupId() : desc; + ctx.discovery().clientCacheStartEvent(req.requestId(), F.asMap(cacheName, req), null); - grpDesc.onCacheStopped(desc.cacheName(), desc.cacheId()); + return true; + } + } + } - if (!grpDesc.hasCaches()) { - registeredCacheGrps.remove(grpDesc.groupId()); + return false; + } - ctx.discovery().removeCacheGroup(grpDesc); + /** + * @param exchangeActions Exchange actions to update. + * @param topVer Topology version.
+ * @param persistedCfgs {@code True} if processing start of persisted caches during cluster activation. + * @param res Accumulator for cache change process results. + * @param req Cache change request. + * @param cacheName Cache name. + * @return {@code True} if there were no errors. + */ + private boolean processStartNewCacheRequest( + ExchangeActions exchangeActions, + AffinityTopologyVersion topVer, + boolean persistedCfgs, + CacheChangeProcessResult res, + DynamicCacheChangeRequest req, + String cacheName + ) { + String conflictErr = checkCacheConflict(req.startCacheConfiguration()); - exchangeActions.addCacheGroupToStop(grpDesc, req.destroy()); + if (conflictErr != null) { + U.warn(log, "Ignore cache start request. " + conflictErr); - assert exchangeActions.checkStopRequestConsistency(grpDesc.groupId()); + IgniteCheckedException err = new IgniteCheckedException("Failed to start " + + "cache. " + conflictErr); - // If all caches in group will be destroyed it is not necessary to destroy single cache - because group will be stopped anyway. - if (req.destroy()) { - for (ExchangeActions.CacheActionData action : exchangeActions.cacheStopRequests()) { - if (action.descriptor().groupId() == grpDesc.groupId()) - action.request().destroy(false); - } - } - } - } - } + if (persistedCfgs) + res.errs.add(err); else - assert false : req; + ctx.cache().completeCacheStartFuture(req, false, err); - if (!needExchange) { - if (!clientCacheStart && ctx.localNodeId().equals(req.initiatingNodeId())) - reqsToComplete.add(new T2<>(req, waitTopVer)); - } - else - res.needExchange = true; + return false; } - if (!F.isEmpty(res.addedDescs)) { - AffinityTopologyVersion startTopVer = res.needExchange ?
topVer.nextMinorVersion() : topVer; + SchemaOperationException err = QueryUtils.checkQueryEntityConflicts( + req.startCacheConfiguration(), registeredCaches.values()); - for (DynamicCacheDescriptor desc : res.addedDescs) { - assert desc.template() || res.needExchange; + if (err != null) { + if (persistedCfgs) + res.errs.add(err); + else + ctx.cache().completeCacheStartFuture(req, false, err); - desc.startTopologyVersion(startTopVer); - } + return false; } - if (!F.isEmpty(reqsToComplete)) { - ctx.closure().callLocalSafe(new Callable() { - @Override public Void call() throws Exception { - for (T2 t : reqsToComplete) { - final DynamicCacheChangeRequest req = t.get1(); - AffinityTopologyVersion waitTopVer = t.get2(); + CacheConfiguration ccfg = req.startCacheConfiguration(); - IgniteInternalFuture fut = waitTopVer != null ? - ctx.cache().context().exchange().affinityReadyFuture(waitTopVer) : null; + assert req.cacheType() != null : req; + assert F.eq(ccfg.getName(), cacheName) : req; - if (fut == null || fut.isDone()) - ctx.cache().completeCacheStartFuture(req, false, null); - else { - fut.listen(new IgniteInClosure>() { - @Override public void apply(IgniteInternalFuture fut) { - ctx.cache().completeCacheStartFuture(req, false, null); - } - }); - } - } + int cacheId = CU.cacheId(cacheName); - return null; - } - }); + CacheGroupDescriptor grpDesc = registerCacheGroup(exchangeActions, + topVer, + ccfg, + cacheId, + req.initiatingNodeId(), + req.deploymentId(), + req.encryptionKey()); + + DynamicCacheDescriptor startDesc = new DynamicCacheDescriptor(ctx, + ccfg, + req.cacheType(), + grpDesc, + false, + req.initiatingNodeId(), + false, + req.sql(), + req.deploymentId(), + req.schema()); + + DynamicCacheDescriptor old = registeredCaches.put(ccfg.getName(), startDesc); + + restartingCaches.remove(ccfg.getName()); + + assert old == null; + + ctx.discovery().setCacheFilter( + startDesc.cacheId(), + grpDesc.groupId(), + ccfg.getName(), + ccfg.getNearConfiguration() != null); + 
+ if (!persistedCfgs) { + ctx.discovery().addClientNode(cacheName, + req.initiatingNodeId(), + req.nearCacheConfiguration() != null); } - return res; + res.addedDescs.add(startDesc); + + exchangeActions.addCacheToStart(req, startDesc); + + return true; } /** @@ -794,7 +884,7 @@ boolean hasRestartingCaches() { * @return Collection of currently restarting caches. */ Collection restartingCaches() { - return restartingCaches; + return restartingCaches.keySet(); } /** @@ -1002,7 +1092,7 @@ private CacheNodeCommonDiscoveryData collectCommonDiscoveryData() { templates.put(desc.cacheName(), cacheData); } - Collection restarting = new HashSet<>(restartingCaches); + Collection restarting = new HashSet<>(restartingCaches.keySet()); return new CacheNodeCommonDiscoveryData(caches, templates, @@ -1371,7 +1461,8 @@ public void onStateChangeFinish(ChangeGlobalStateFinishMessage msg) { * @return Exchange action. * @throws IgniteCheckedException If configuration validation failed. */ - public ExchangeActions onStateChangeRequest(ChangeGlobalStateMessage msg, AffinityTopologyVersion topVer, DiscoveryDataClusterState curState) + public ExchangeActions onStateChangeRequest(ChangeGlobalStateMessage msg, AffinityTopologyVersion topVer, + DiscoveryDataClusterState curState) throws IgniteCheckedException { ExchangeActions exchangeActions = new ExchangeActions(); @@ -1536,7 +1627,7 @@ private String checkCacheConflict(CacheConfiguration cfg) { ", conflictingCacheName=" + desc.cacheName() + ']'; } - int grpId = cacheGroupId(cfg.getName(), cfg.getGroupName()); + int grpId = CU.cacheGroupId(cfg.getName(), cfg.getGroupName()); if (cfg.getGroupName() != null) { if (cacheGroupByName(cfg.getGroupName()) == null) { @@ -1611,7 +1702,7 @@ else if (!schemaPatch.isEmpty() && !hasSchemaPatchConflict) //If conflict was detected we don't merge config and we leave existed config. 
if (!hasSchemaPatchConflict && !patchesToApply.isEmpty()) - for(Map.Entry entry: patchesToApply.entrySet()){ + for (Map.Entry entry : patchesToApply.entrySet()) { if (entry.getKey().applySchemaPatch(entry.getValue())) saveCacheConfiguration(entry.getKey()); } @@ -1647,7 +1738,8 @@ private void registerNewCache( cfg, cacheId, nodeId, - joinData.cacheDeploymentId()); + joinData.cacheDeploymentId(), + null); ctx.discovery().setCacheFilter( cacheId, @@ -1761,6 +1853,7 @@ public boolean isMergeConfigSupports(ClusterNode joiningNode) { * @param cacheId Cache ID. * @param rcvdFrom Node ID cache was recived from. * @param deploymentId Deployment ID. + * @param encKey Encryption key. * @return Group descriptor. */ private CacheGroupDescriptor registerCacheGroup( @@ -1769,7 +1862,9 @@ private CacheGroupDescriptor registerCacheGroup( CacheConfiguration startedCacheCfg, Integer cacheId, UUID rcvdFrom, - IgniteUuid deploymentId) { + IgniteUuid deploymentId, + @Nullable byte[] encKey + ) { if (startedCacheCfg.getGroupName() != null) { CacheGroupDescriptor desc = cacheGroupByName(startedCacheCfg.getGroupName()); @@ -1780,7 +1875,7 @@ private CacheGroupDescriptor registerCacheGroup( } } - int grpId = cacheGroupId(startedCacheCfg.getName(), startedCacheCfg.getGroupName()); + int grpId = CU.cacheGroupId(startedCacheCfg.getName(), startedCacheCfg.getGroupName()); Map caches = Collections.singletonMap(startedCacheCfg.getName(), cacheId); @@ -1798,6 +1893,9 @@ private CacheGroupDescriptor registerCacheGroup( persistent, null); + if (startedCacheCfg.isEncryptionEnabled()) + ctx.encryption().beforeCacheGroupStart(grpId, encKey); + if (ctx.cache().context().pageStore() != null) ctx.cache().context().pageStore().beforeCacheGroupStart(grpDesc); @@ -1819,7 +1917,8 @@ private CacheGroupDescriptor registerCacheGroup( * @param exchActions Optional exchange actions to update if new group was added. * @param startedCacheCfg Started cache configuration. 
*/ - private boolean resolvePersistentFlag(@Nullable ExchangeActions exchActions, CacheConfiguration startedCacheCfg) { + private boolean resolvePersistentFlag(@Nullable ExchangeActions exchActions, + CacheConfiguration startedCacheCfg) { if (!ctx.clientNode()) { // On server, we always can determine whether cache is persistent by local storage configuration. return CU.isPersistentCache(startedCacheCfg, ctx.config().getDataStorageConfiguration()); @@ -1930,6 +2029,9 @@ private void validateCacheGroupConfiguration(CacheConfiguration cfg, CacheConfig CU.validateCacheGroupsAttributesMismatch(log, cfg, startCfg, "backups", "Backups", cfg.getBackups(), startCfg.getBackups(), true); } + + CU.validateCacheGroupsAttributesMismatch(log, cfg, startCfg, "encryptionEnabled", "Encrypted", + cfg.isEncryptionEnabled(), startCfg.isEncryptionEnabled(), true); } /** @@ -1955,6 +2057,7 @@ ConcurrentMap registeredCacheGroups() { /** * Returns registered cache descriptors ordered by {@code comparator} + * * @param comparator Comparator (DIRECT, REVERSE or custom) to order cache descriptors. * @return Ordered by comparator cache descriptors. */ @@ -2099,6 +2202,28 @@ private boolean surviveReconnect(String cacheName) { return CU.isUtilityCache(cacheName); } + /** + * @param cacheName Cache name. + * @return {@code True} if cache is restarting. + */ + public boolean isRestarting(String cacheName) { + return restartingCaches.containsKey(cacheName); + } + + /** + * @param cacheName Cache name whose restart was cancelled. + */ + public void removeRestartingCache(String cacheName) { + restartingCaches.remove(cacheName); + } + + /** + * Clears information about restarting caches. + */ + public void removeRestartingCaches() { + restartingCaches.clear(); + } + /** * Holds direct comparator (first system caches) and reverse comparator (first user caches). * Use DIRECT comparator for ordering cache start operations.
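Note on the ClusterCachesInfo changes above: the patch migrates `restartingCaches` from a set of names to a map keyed by cache name (the `keySet()` calls, plus the new `containsKey`/`remove`/`clear`-based helpers), which pairs each restarting cache with the restart operation id introduced on `DynamicCacheChangeRequest`. The sketch below is a hypothetical standalone illustration of that bookkeeping, not Ignite's actual class; the class name, `markRestarting`, and the `String` standing in for the `IgniteUuid` restart id are all assumptions made for the example.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal sketch (hypothetical names, not Ignite's actual code) of restart
 * bookkeeping keyed by cache name. Keying by name lets the processor answer
 * isRestarting() and drop a single cancelled restart with one lookup, while
 * restartingCaches() still exposes the plain name set that callers of the
 * old Set-based field expect.
 */
class RestartTracker {
    /** Cache name -> id of the restart operation that owns the restart. */
    private final ConcurrentHashMap<String, String> restarting = new ConcurrentHashMap<>();

    /** Records that {@code cacheName} is being restarted by operation {@code restartId}. */
    void markRestarting(String cacheName, String restartId) {
        restarting.put(cacheName, restartId);
    }

    /** @return {@code true} if the cache is currently restarting. */
    boolean isRestarting(String cacheName) {
        return restarting.containsKey(cacheName);
    }

    /** Drops bookkeeping for a single cancelled restart. */
    void removeRestartingCache(String cacheName) {
        restarting.remove(cacheName);
    }

    /** Clears all restart bookkeeping. */
    void removeRestartingCaches() {
        restarting.clear();
    }

    /** @return Names of currently restarting caches (a live view, as keySet() returns). */
    Set<String> restartingCaches() {
        return restarting.keySet();
    }
}
```

Storing the operation id as the map value is what allows only the initiator of a restart to start the restarting cache, matching the intent of the `restartId()` accessor added to `DynamicCacheChangeRequest` below.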
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/DynamicCacheChangeRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/DynamicCacheChangeRequest.java index 2b942b09aa884..812823050f4a3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/DynamicCacheChangeRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/DynamicCacheChangeRequest.java @@ -17,6 +17,8 @@ package org.apache.ignite.internal.processors.cache; +import java.io.Serializable; +import java.util.UUID; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.GridKernalContext; @@ -25,9 +27,6 @@ import org.apache.ignite.lang.IgniteUuid; import org.jetbrains.annotations.Nullable; -import java.io.Serializable; -import java.util.UUID; - /** * Cache start/stop request. */ @@ -68,6 +67,9 @@ public class DynamicCacheChangeRequest implements Serializable { /** Restart flag. */ private boolean restart; + /** Restart operation id. */ + private IgniteUuid restartId; + /** Cache active on start or not*/ private boolean disabledAfterStart; @@ -95,6 +97,9 @@ public class DynamicCacheChangeRequest implements Serializable { /** */ private transient boolean locallyConfigured; + /** Encryption key. */ + @Nullable private byte[] encKey; + /** * @param reqId Unique request ID. * @param cacheName Cache stop name. @@ -261,6 +266,20 @@ public void restart(boolean restart) { this.restart = restart; } + /** + * @return Id of restart to allow only initiator start the restarting cache. + */ + public IgniteUuid restartId() { + return restartId; + } + + /** + * @param restartId Id of cache restart requester. + */ + public void restartId(IgniteUuid restartId) { + this.restartId = restartId; + } + /** * @return Cache name. 
*/ @@ -424,6 +443,20 @@ public void disabledAfterStart(boolean disabledAfterStart) { this.disabledAfterStart = disabledAfterStart; } + /** + * @param encKey Encryption key. + */ + public void encryptionKey(@Nullable byte[] encKey) { + this.encKey = encKey; + } + + /** + * @return Encryption key. + */ + @Nullable public byte[] encryptionKey() { + return encKey; + } + /** {@inheritDoc} */ @Override public String toString() { return "DynamicCacheChangeRequest [cacheName=" + cacheName() + diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeContext.java index 34ed048542c16..4046c98b29414 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeContext.java @@ -17,15 +17,11 @@ package org.apache.ignite.internal.processors.cache; -import java.util.HashMap; import java.util.HashSet; -import java.util.Map; import java.util.Set; -import java.util.UUID; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; -import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.typedef.internal.S; import org.jetbrains.annotations.Nullable; @@ -55,20 +51,11 @@ public class ExchangeContext { /** */ private final boolean compatibilityNode = getBoolean(IGNITE_EXCHANGE_COMPATIBILITY_VER_1, false); - /** */ - private final boolean newMvccCrd; - - /** Currently running mvcc queries, initialized when mvcc coordinator is changed. */ - private Map activeQueries; - /** * @param crd Coordinator flag. - * @param newMvccCrd {@code True} if new coordinator assigned during this exchange. 
* @param fut Exchange future. */ - public ExchangeContext(boolean crd, boolean newMvccCrd, GridDhtPartitionsExchangeFuture fut) { - this.newMvccCrd = newMvccCrd; - + public ExchangeContext(boolean crd, GridDhtPartitionsExchangeFuture fut) { int protocolVer = exchangeProtocolVersion(fut.firstEventCache().minimumNodeVersion()); if (compatibilityNode || (crd && fut.localJoinExchange())) { @@ -137,34 +124,6 @@ public boolean mergeExchanges() { return merge; } - /** - * @return {@code True} if new node assigned as mvcc coordinator node during this exchange. - */ - public boolean newMvccCoordinator() { - return newMvccCrd; - } - - /** - * @return Active queries. - */ - public Map activeQueries() { - return activeQueries; - } - - /** - * @param nodeId Node ID. - * @param nodeQueries Node queries. - */ - public void addActiveQueries(UUID nodeId, @Nullable GridLongList nodeQueries) { - if (nodeQueries == null) - return; - - if (activeQueries == null) - activeQueries = new HashMap<>(); - - activeQueries.put(nodeId, nodeQueries); - } - /** {@inheritDoc} */ @Override public String toString() { return S.toString(ExchangeContext.class, this); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeDiscoveryEvents.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeDiscoveryEvents.java index 2f7753beabc02..23df8d4f29e08 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeDiscoveryEvents.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/ExchangeDiscoveryEvents.java @@ -35,7 +35,6 @@ import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; import static org.apache.ignite.events.EventType.EVT_NODE_JOINED; import static org.apache.ignite.events.EventType.EVT_NODE_LEFT; -import static org.apache.ignite.internal.events.DiscoveryCustomEvent.EVT_DISCOVERY_CUSTOM_EVT; /** * Discovery events processed in single exchange (contain multiple 
events if exchanges for multiple @@ -77,11 +76,6 @@ public class ExchangeDiscoveryEvents { * @param fut Current exchange future. */ public void processEvents(GridDhtPartitionsExchangeFuture fut) { - for (DiscoveryEvent evt : evts) { - if (evt.type() == EVT_NODE_LEFT || evt.type() == EVT_NODE_FAILED) - fut.sharedContext().mvcc().removeExplicitNodeLocks(evt.eventNode().id(), fut.initialVersion()); - } - if (hasServerLeft()) warnNoAffinityNodes(fut.sharedContext()); } @@ -223,6 +217,7 @@ public void warnNoAffinityNodes(GridCacheSharedContext cctx) { null, null, null, + null, false, null, false, diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GatewayProtectedCacheProxy.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GatewayProtectedCacheProxy.java index c99eb006002e9..ef861b945147b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GatewayProtectedCacheProxy.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GatewayProtectedCacheProxy.java @@ -49,10 +49,8 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cluster.ClusterGroup; import org.apache.ignite.internal.AsyncSupportAdapter; -import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils; import org.apache.ignite.internal.GridKernalState; -import org.apache.ignite.internal.util.future.GridFutureAdapter; -import org.apache.ignite.internal.util.future.IgniteFutureImpl; +import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteClosure; @@ -1494,6 +1492,42 @@ public void setCacheManager(org.apache.ignite.cache.CacheManager cacheMgr) { } } + /** {@inheritDoc} */ + @Override public void preloadPartition(int part) { + CacheOperationGate opGate = onEnter(); + + try { + 
delegate.preloadPartition(part); + } + finally { + onLeave(opGate); + } + } + + /** {@inheritDoc} */ + @Override public IgniteFuture preloadPartitionAsync(int part) { + CacheOperationGate opGate = onEnter(); + + try { + return delegate.preloadPartitionAsync(part); + } + finally { + onLeave(opGate); + } + } + + /** {@inheritDoc} */ + @Override public boolean localPreloadPartition(int part) { + CacheOperationGate opGate = onEnter(); + + try { + return delegate.localPreloadPartition(part); + } + finally { + onLeave(opGate); + } + } + /** * Safely get CacheGateway. * @@ -1529,12 +1563,7 @@ private GridCacheGateway checkProxyIsValid(@Nullable GridCacheGateway cache = context().kernalContext().cache().publicJCache(context().name()).internalProxy(); - GridFutureAdapter fut = proxyImpl.opportunisticRestart(); - - if (fut == null) - proxyImpl.onRestarted(cache.context(), cache.context().cache()); - else - new IgniteFutureImpl<>(fut).get(); + proxyImpl.opportunisticRestart(cache); return gate(); } catch (IgniteCheckedException ice) { @@ -1551,8 +1580,18 @@ private GridCacheGateway checkProxyIsValid(@Nullable GridCacheGateway gate = checkProxyIsValid(gate(), true); - return new CacheOperationGate(gate, - lock ? gate.enter(opCtx) : gate.enterNoLock(opCtx)); + try { + return new CacheOperationGate(gate, + lock ? gate.enter(opCtx) : gate.enterNoLock(opCtx)); + } + catch (IllegalStateException e) { + boolean isCacheProxy = delegate instanceof IgniteCacheProxyImpl; + + if (isCacheProxy) + ((IgniteCacheProxyImpl) delegate).checkRestart(true); + + throw e; // If we reached this line. 
+ } } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAdapter.java index cf9337b92634d..2b21ec12a6c22 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAdapter.java @@ -49,6 +49,7 @@ import javax.cache.processor.EntryProcessorResult; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCacheRestartingException; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; @@ -87,9 +88,11 @@ import org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl; import org.apache.ignite.internal.processors.cache.distributed.IgniteExternalizableExpiryPolicy; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; import org.apache.ignite.internal.processors.cache.dr.GridCacheDrInfo; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; @@ -136,12 +139,16 @@ import org.apache.ignite.lang.IgniteInClosure; import 
org.apache.ignite.lang.IgniteOutClosure; import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.lang.IgniteProductVersion; +import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.mxbean.CacheMetricsMXBean; import org.apache.ignite.plugin.security.SecurityPermission; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.JobContextResource; +import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionException; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; @@ -149,6 +156,7 @@ import static org.apache.ignite.IgniteSystemProperties.IGNITE_CACHE_RETRIES_COUNT; import static org.apache.ignite.internal.GridClosureCallMode.BROADCAST; import static org.apache.ignite.internal.processors.cache.CacheOperationContext.DFLT_ALLOW_ATOMIC_OPS_IN_TX; +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; import static org.apache.ignite.internal.processors.dr.GridDrType.DR_LOAD; import static org.apache.ignite.internal.processors.dr.GridDrType.DR_NONE; import static org.apache.ignite.internal.processors.task.GridTaskThreadContextKey.TC_NO_FAILOVER; @@ -179,6 +187,9 @@ public abstract class GridCacheAdapter implements IgniteInternalCache> stash = new ThreadLocal>() { @@ -284,6 +295,9 @@ public abstract class GridCacheAdapter implements IgniteInternalCachecacheEntriesIterator(ctx, modes.primary, modes.backup, topVer, ctx.keepBinary())); + MvccSnapshot mvccSnapshot = ctx.mvccEnabled() ? 
MvccUtils.MVCC_MAX_SNAPSHOT : null; + + its.add(offheapMgr + .cacheEntriesIterator(ctx, modes.primary, modes.backup, topVer, ctx.keepBinary(), mvccSnapshot)); } } else if (modes.heap) { + if (ctx.mvccEnabled()) + return F.emptyIterator(); + if (modes.near && ctx.isNear()) its.add(ctx.near().nearEntries().iterator()); @@ -799,10 +819,8 @@ else if (modes.heap) { } /** {@inheritDoc} */ - @SuppressWarnings("ForLoopReplaceableByForEach") - @Nullable @Override public final V localPeek(K key, - CachePeekMode[] peekModes, - @Nullable IgniteCacheExpiryPolicy plc) + @Override public final V localPeek(K key, + CachePeekMode[] peekModes) throws IgniteCheckedException { A.notNull(key, "key"); @@ -811,9 +829,6 @@ else if (modes.heap) { ctx.checkSecurity(SecurityPermission.CACHE_READ); - //TODO IGNITE-7955 - MvccUtils.verifyMvccOperationSupport(ctx, "Peek"); - PeekModes modes = parsePeekModes(peekModes, false); KeyCacheObject cacheKey = ctx.toCacheKeyObject(key); @@ -872,10 +887,8 @@ else if (modes.heap) { GridCacheContext ctx0; while (true) { - if (nearKey) { - ctx0 = context(); + if (nearKey) e = peekEx(key); - } else { ctx0 = ctx.isNear() ? ctx.near().dht().context() : ctx; e = modes.offheap ? ctx0.cache().entryEx(key) : ctx0.cache().peekEx(key); @@ -885,7 +898,9 @@ else if (modes.heap) { ctx.shared().database().checkpointReadLock(); try { - cacheVal = e.peek(modes.heap, modes.offheap, topVer, plc); + cacheVal = ctx.mvccEnabled() + ? 
e.mvccPeek(modes.heap && !modes.offheap) + : e.peek(modes.heap, modes.offheap, topVer, null); } catch (GridCacheEntryRemovedException ignore) { if (log.isDebugEnabled()) @@ -894,7 +909,7 @@ else if (modes.heap) { continue; } finally { - e.touch(null); + e.touch(); ctx.shared().database().checkpointReadUnlock(); } @@ -906,7 +921,7 @@ else if (modes.heap) { else { while (true) { try { - cacheVal = localCachePeek0(cacheKey, modes.heap, modes.offheap, plc); + cacheVal = localCachePeek0(cacheKey, modes.heap, modes.offheap); break; } @@ -928,7 +943,6 @@ else if (modes.heap) { * @param key Key. * @param heap Read heap flag. * @param offheap Read offheap flag. - * @param plc Optional expiry policy. * @return Value. * @throws GridCacheEntryRemovedException If entry removed. * @throws IgniteCheckedException If failed. @@ -936,8 +950,7 @@ else if (modes.heap) { @SuppressWarnings("ConstantConditions") @Nullable private CacheObject localCachePeek0(KeyCacheObject key, boolean heap, - boolean offheap, - IgniteCacheExpiryPolicy plc) + boolean offheap) throws GridCacheEntryRemovedException, IgniteCheckedException { assert ctx.isLocal(); assert heap || offheap; @@ -946,10 +959,10 @@ else if (modes.heap) { if (e != null) { try { - return e.peek(heap, offheap, AffinityTopologyVersion.NONE, plc); + return e.peek(heap, offheap, AffinityTopologyVersion.NONE, null); } finally { - e.touch(null); + e.touch(); } } @@ -1261,6 +1274,31 @@ private IgniteInternalFuture executeClearTask(@Nullable Set keys return new GridFinishedFuture<>(); } + /** + * @param part Partition id. + * @return Future. 
+ */ + private IgniteInternalFuture executePreloadTask(int part) throws IgniteCheckedException { + ClusterGroup grp = ctx.grid().cluster().forDataNodes(ctx.name()); + + @Nullable ClusterNode targetNode = ctx.affinity().primaryByPartition(part, ctx.topology().readyTopologyVersion()); + + if (targetNode == null || targetNode.version().compareTo(PRELOAD_PARTITION_SINCE) < 0) { + if (!partPreloadBadVerWarned) { + U.warn(log(), "Attempting to execute partition preloading task on outdated or not mapped node " + + "[targetNodeVer=" + (targetNode == null ? "NA" : targetNode.version()) + + ", minSupportedNodeVer=" + PRELOAD_PARTITION_SINCE + ']'); + + partPreloadBadVerWarned = true; + } + + return new GridFinishedFuture<>(); + } + + return ctx.closures().affinityRun(Collections.singleton(name()), part, + new PartitionPreloadJob(ctx.name(), part), grp.nodes(), null); + } + /** * @param keys Keys. * @param readers Readers flag. @@ -1892,6 +1930,7 @@ public final IgniteInternalFuture> getAllAsync(@Nullable final Collect /*keep cache objects*/false, recovery, needVer, + null, null); // TODO IGNITE-7371 } @@ -1907,6 +1946,7 @@ public final IgniteInternalFuture> getAllAsync(@Nullable final Collect * @param skipVals Skip values flag. * @param keepCacheObjects Keep cache objects. * @param needVer If {@code true} returns values as tuples containing value and version. + * @param txLbl Transaction label. * @param mvccSnapshot MVCC snapshot. * @return Future. 
*/ @@ -1923,6 +1963,7 @@ protected final IgniteInternalFuture> getAllAsync0( final boolean keepCacheObjects, final boolean recovery, final boolean needVer, + @Nullable String txLbl, MvccSnapshot mvccSnapshot ) { if (F.isEmpty(keys)) @@ -1938,7 +1979,7 @@ protected final IgniteInternalFuture> getAllAsync0( return new GridFinishedFuture<>(e); } - tx = ctx.tm().threadLocalTx(ctx); + tx = checkCurrentTx(); } if (tx == null || tx.implicit()) { @@ -2008,6 +2049,7 @@ protected final IgniteInternalFuture> getAllAsync0( if (evt) { ctx.events().readEvent(key, null, + txLbl, row.value(), subjId, taskName, @@ -2076,7 +2118,7 @@ else if (storeEnabled) readerArgs); if (res == null) - entry.touch(topVer); + entry.touch(); } } @@ -2091,7 +2133,7 @@ else if (storeEnabled) needVer); if (entry != null && (tx == null || (!tx.implicit() && tx.isolation() == READ_COMMITTED))) - entry.touch(topVer); + entry.touch(); if (keysSize == 1) // Safe to return because no locks are required in READ_COMMITTED mode. @@ -2173,7 +2215,7 @@ else if (storeEnabled) if (tx0 == null || (!tx0.implicit() && tx0.isolation() == READ_COMMITTED)) - entry.touch(topVer); + entry.touch(); break; } @@ -2216,7 +2258,7 @@ else if (storeEnabled) GridCacheEntryEx entry = peekEx(key); if (entry != null) - entry.touch(topVer); + entry.touch(); } } @@ -2246,7 +2288,7 @@ else if (storeEnabled) for (KeyCacheObject key0 : misses.keySet()) { GridCacheEntryEx entry = peekEx(key0); if (entry != null) - entry.touch(topVer); + entry.touch(); } } @@ -2279,6 +2321,19 @@ else if (storeEnabled) } } + /** */ + protected GridNearTxLocal checkCurrentTx() { + if (!ctx.mvccEnabled()) + return ctx.tm().threadLocalTx(ctx); + + try { + return MvccUtils.currentTx(ctx.kernalContext(), null); + } + catch (MvccUtils.UnsupportedTxModeException | MvccUtils.NonMvccTransactionException e) { + throw new TransactionException(e.getMessage()); + } + } + /** * @param topVer Affinity topology version for which load was performed. 
* @param loadKeys Keys to load. @@ -2307,7 +2362,7 @@ private void clearReservationsIfNeeded( entry.clearReserveForLoad(e.getValue().version()); if (needTouch) - entry.touch(topVer); + entry.touch(); } } } @@ -3663,7 +3718,7 @@ private void loadEntry(KeyCacheObject key, log.debug("Got removed entry during loadCache (will ignore): " + entry); } finally { - entry.touch(topVer); + entry.touch(); } CU.unwindEvicts(ctx); @@ -4009,43 +4064,6 @@ IgniteInternalFuture globalLoadCacheAsync(@Nullable IgniteBiPredicate p return entrySet().iterator(); } - /** - * @param opCtx Cache operation context. - * @return JCache Iterator. - */ - private Iterator> localIteratorHonorExpirePolicy(final CacheOperationContext opCtx) { - return F.iterator(iterator(), - new IgniteClosure, Cache.Entry>() { - private IgniteCacheExpiryPolicy expiryPlc = - ctx.cache().expiryPolicy(opCtx != null ? opCtx.expiry() : null); - - @Override public Cache.Entry apply(Cache.Entry lazyEntry) { - CacheOperationContext prev = ctx.gate().enter(opCtx); - try { - V val = localPeek(lazyEntry.getKey(), CachePeekModes.ONHEAP_ONLY, expiryPlc); - - GridCacheVersion ver = null; - - try { - ver = lazyEntry.unwrap(GridCacheVersion.class); - } - catch (IllegalArgumentException e) { - log.error("Failed to unwrap entry version information", e); - } - - return new CacheEntryImpl<>(lazyEntry.getKey(), val, ver); - } - catch (IgniteCheckedException e) { - throw CU.convertToCacheException(e); - } - finally { - ctx.gate().leave(prev); - } - } - }, false - ); - } - /** {@inheritDoc} */ @Override public Iterator> scanIterator(boolean keepBinary, @Nullable IgniteBiPredicate p) @@ -4204,7 +4222,7 @@ public void awaitLastFut() { awaitLastFut(); - GridNearTxLocal tx = ctx.tm().threadLocalTx(ctx); + GridNearTxLocal tx = checkCurrentTx(); if (tx == null || tx.implicit()) { TransactionConfiguration tCfg = CU.transactionConfiguration(ctx, ctx.kernalContext().config()); @@ -4264,7 +4282,10 @@ public void awaitLastFut() { assert topVer != 
null && topVer.topologyVersion() > 0 : tx; - ctx.affinity().affinityReadyFuture(topVer.topologyVersion() + 1).get(); + AffinityTopologyVersion awaitVer = new AffinityTopologyVersion( + topVer.topologyVersion() + 1, 0); + + ctx.shared().exchange().affinityReadyFuture(awaitVer).get(); continue; } @@ -4304,7 +4325,7 @@ private IgniteInternalFuture asyncOp(final AsyncOp op) { if (log.isDebugEnabled()) log.debug("Performing async op: " + op); - GridNearTxLocal tx = ctx.tm().threadLocalTx(ctx); + GridNearTxLocal tx = checkCurrentTx(); CacheOperationContext opCtx = ctx.operationContextPerCall(); @@ -4958,6 +4979,55 @@ private void advance() { return new CacheEntryImpl<>((K)key0, (V)val0, entry.version()); } + /** {@inheritDoc} */ + @Override public void preloadPartition(int part) throws IgniteCheckedException { + if (isLocal()) + ctx.offheap().preloadPartition(part); + else + executePreloadTask(part).get(); + } + + /** {@inheritDoc} */ + @Override public IgniteInternalFuture preloadPartitionAsync(int part) throws IgniteCheckedException { + if (isLocal()) { + return ctx.kernalContext().closure().runLocalSafe(() -> { + try { + ctx.offheap().preloadPartition(part); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + }); + } + else + return executePreloadTask(part); + } + + /** {@inheritDoc} */ + @Override public boolean localPreloadPartition(int part) throws IgniteCheckedException { + if (!ctx.affinityNode()) + return false; + + GridDhtPartitionTopology top = ctx.group().topology(); + + @Nullable GridDhtLocalPartition p = top.localPartition(part, top.readyTopologyVersion(), false); + + if (p == null) + return false; + + try { + if (!p.reserve() || p.state() != OWNING) + return false; + + p.dataStore().preload(); + } + finally { + p.release(); + } + + return true; + } + /** * */ @@ -5031,8 +5101,10 @@ public void execute(boolean retry) { assert topVer != null && topVer.topologyVersion() > 0 : tx; + AffinityTopologyVersion awaitVer = new 
AffinityTopologyVersion(topVer.topologyVersion() + 1, 0); + IgniteInternalFuture topFut = - ctx.affinity().affinityReadyFuture(topVer.topologyVersion() + 1); + ctx.shared().exchange().affinityReadyFuture(awaitVer); topFut.listen(new IgniteInClosure>() { @Override public void apply(IgniteInternalFuture topFut) { @@ -6317,7 +6389,7 @@ public InvokeAllTimeStatClosure(CacheMetricsImpl metrics, final long start) { /** * Delayed callable class. */ - public static abstract class TopologyVersionAwareJob extends ComputeJobAdapter { + public abstract static class TopologyVersionAwareJob extends ComputeJobAdapter { /** */ private static final long serialVersionUID = 0L; @@ -6686,6 +6758,52 @@ public ClearTask(String cacheName, AffinityTopologyVersion topVer, Set nodesByPartition(int part, AffinityTopologyVersion topV * @return Affinity assignment. */ public AffinityAssignment assignment(AffinityTopologyVersion topVer) { + return assignment(topVer, cctx.shared().exchange().lastAffinityChangedTopologyVersion(topVer)); + } + + /** + * Get affinity assignment for the given topology version. + * + * @param topVer Topology version. + * @return Affinity assignment. 
+ */ + public AffinityAssignment assignment(AffinityTopologyVersion topVer, AffinityTopologyVersion lastAffChangedTopVer) { if (cctx.isLocal()) - topVer = LOC_CACHE_TOP_VER; + topVer = lastAffChangedTopVer = LOC_CACHE_TOP_VER; GridAffinityAssignmentCache aff0 = aff; if (aff0 == null) throw new IgniteException(FAILED_TO_FIND_CACHE_ERR_MSG + cctx.name()); - return aff0.cachedAffinity(topVer); - } - - public MvccCoordinator mvccCoordinator(AffinityTopologyVersion topVer) { - return assignment(topVer).mvccCoordinator(); + return aff0.cachedAffinity(topVer, lastAffChangedTopVer); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAttributes.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAttributes.java index 01daee2a21519..230320a51805f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAttributes.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAttributes.java @@ -352,6 +352,13 @@ String topologyValidatorClassName() { return className(ccfg.getTopologyValidator()); } + /** + * @return Is cache encryption enabled. + */ + public boolean isEncryptionEnabled() { + return ccfg.isEncryptionEnabled(); + } + /** * @param obj Object to get class of. * @return Class name or {@code null}. 
@@ -364,4 +371,4 @@ String topologyValidatorClassName() { @Override public String toString() { return S.toString(GridCacheAttributes.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapImpl.java index 75c0d0cde8c23..91dfe59312dbc 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapImpl.java @@ -26,7 +26,6 @@ import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.lang.IgniteUuid; import org.jetbrains.annotations.Nullable; import static org.apache.ignite.events.EventType.EVT_CACHE_ENTRY_CREATED; @@ -168,7 +167,8 @@ protected final GridCacheMapEntry putEntryIfObsoleteOrAbsent( ctx.events().addEvent(doomed.partition(), doomed.key(), ctx.localNodeId(), - (IgniteUuid)null, + null, + null, null, EVT_CACHE_ENTRY_DESTROYED, null, @@ -188,7 +188,8 @@ protected final GridCacheMapEntry putEntryIfObsoleteOrAbsent( ctx.events().addEvent(created.partition(), created.key(), ctx.localNodeId(), - (IgniteUuid)null, + null, + null, null, EVT_CACHE_ENTRY_CREATED, null, @@ -201,7 +202,7 @@ protected final GridCacheMapEntry putEntryIfObsoleteOrAbsent( true); if (touch) - cur.touch(topVer); + cur.touch(); } assert Math.abs(sizeChange) <= 1; @@ -274,7 +275,8 @@ else if (sizeChange == -1) ctx.events().addEvent(entry.partition(), entry.key(), ctx.localNodeId(), - (IgniteUuid)null, + null, + null, null, EVT_CACHE_ENTRY_DESTROYED, null, diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheContext.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheContext.java index 7eea905966b63..ef33e8b4928a5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheContext.java @@ -30,6 +30,7 @@ import java.util.LinkedList; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.UUID; import java.util.concurrent.Callable; import java.util.concurrent.CountDownLatch; @@ -109,6 +110,7 @@ import org.apache.ignite.plugin.security.SecurityPermission; import org.jetbrains.annotations.Nullable; +import static org.apache.ignite.IgniteSystemProperties.IGNITE_DISABLE_TRIGGERING_CACHE_INTERCEPTOR_ON_CONFLICT; import static org.apache.ignite.IgniteSystemProperties.IGNITE_READ_LOAD_BALANCING; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -226,7 +228,7 @@ public class GridCacheContext implements Externalizable { private CacheWeakQueryIteratorsHolder> itHolder; /** Affinity node. */ - private boolean affNode; + private volatile boolean affNode; /** Conflict resolver. */ private CacheVersionConflictResolver conflictRslvr; @@ -238,10 +240,10 @@ public class GridCacheContext implements Externalizable { private CountDownLatch startLatch = new CountDownLatch(1); /** Topology version when cache was started on local node. */ - private AffinityTopologyVersion locStartTopVer; + private volatile AffinityTopologyVersion locStartTopVer; /** Dynamic cache deployment ID. */ - private IgniteUuid dynamicDeploymentId; + private volatile IgniteUuid dynamicDeploymentId; /** Updates allowed flag. */ private boolean updatesAllowed; @@ -271,7 +273,14 @@ public class GridCacheContext implements Externalizable { private boolean readFromBackup = CacheConfiguration.DFLT_READ_FROM_BACKUP; /** Local node's MAC address. 
*/ - private String locMacs; + private volatile String locMacs; + + /** Recovery mode flag. */ + private volatile boolean recoveryMode; + + /** */ + private final boolean disableTriggeringCacheInterceptorOnConflict = + Boolean.parseBoolean(System.getProperty(IGNITE_DISABLE_TRIGGERING_CACHE_INTERCEPTOR_ON_CONFLICT, "false")); /** * Empty constructor required for {@link Externalizable}. @@ -309,8 +318,11 @@ public GridCacheContext( CacheGroupContext grp, CacheType cacheType, AffinityTopologyVersion locStartTopVer, + IgniteUuid deploymentId, boolean affNode, boolean updatesAllowed, + boolean statisticsEnabled, + boolean recoveryMode, /* * Managers in starting order! @@ -395,9 +407,46 @@ public GridCacheContext( readFromBackup = cacheCfg.isReadFromBackup(); + this.dynamicDeploymentId = deploymentId; + this.recoveryMode = recoveryMode; + + statisticsEnabled(statisticsEnabled); + + assert kernalContext().recoveryMode() == recoveryMode; + + if (!recoveryMode) { + locMacs = localNode().attribute(ATTR_MACS); + + assert locMacs != null; + } + } + + /** + * Called when cache was restored during recovery and node has joined to topology. + * + * @param topVer Cache topology join version. + * @param clusterWideDesc Cluster-wide cache descriptor received during exchange. + */ + public void finishRecovery(AffinityTopologyVersion topVer, DynamicCacheDescriptor clusterWideDesc) { + assert recoveryMode : this; + + recoveryMode = false; + + locStartTopVer = topVer; + locMacs = localNode().attribute(ATTR_MACS); assert locMacs != null; + + this.statisticsEnabled = clusterWideDesc.cacheConfiguration().isStatisticsEnabled(); + this.dynamicDeploymentId = clusterWideDesc.deploymentId(); + } + + /** + * @return {@code True} if cache is in recovery mode. + */ + public boolean isRecoveryMode() { + return recoveryMode; } /** @@ -421,13 +470,6 @@ public boolean customAffinityMapper() { return customAffMapper; } - /** - * @param dynamicDeploymentId Dynamic deployment ID. 
- */ - void dynamicDeploymentId(IgniteUuid dynamicDeploymentId) { - this.dynamicDeploymentId = dynamicDeploymentId; - } - /** * @return Dynamic deployment ID. */ @@ -828,6 +870,13 @@ public boolean transactionalSnapshot() { return cacheCfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT; } + /** + * @return {@code True} if cache interceptor should be skipped in case of conflicts. + */ + public boolean disableTriggeringCacheInterceptorOnConflict() { + return disableTriggeringCacheInterceptorOnConflict; + } + /** * @return Local node. */ @@ -2204,24 +2253,37 @@ else if (type == EVT_CACHE_REBALANCE_STOPPED) { * * @param affNodes All affinity nodes. * @param canRemap Flag indicating that 'get' should be done on a locked topology version. + * @param invalidNodes Nodes that should not be picked for the read. + * @param partitionId Partition ID. * @return Affinity node to get key from or {@code null} if there is no suitable alive node. */ - @Nullable public ClusterNode selectAffinityNodeBalanced(List affNodes, boolean canRemap) { + @Nullable public ClusterNode selectAffinityNodeBalanced( + List affNodes, + Set invalidNodes, + int partitionId, + boolean canRemap + ) { if (!readLoadBalancingEnabled) { if (!canRemap) { + // Find the next available node if we cannot wait for the next topology version. for (ClusterNode node : affNodes) { - if (ctx.discovery().alive(node)) + if (ctx.discovery().alive(node) && !invalidNodes.contains(node)) return node; } return null; } - else - return affNodes.get(0); + else { + ClusterNode first = affNodes.get(0); + + return !invalidNodes.contains(first) ? first : null; + } } - if (!readFromBackup) - return affNodes.get(0); + if (!readFromBackup) { + ClusterNode first = affNodes.get(0); + + return !invalidNodes.contains(first) ? 
first : null; + } assert locMacs != null; @@ -2230,7 +2292,7 @@ else if (type == EVT_CACHE_REBALANCE_STOPPED) { ClusterNode n0 = null; for (ClusterNode node : affNodes) { - if (canRemap || discovery().alive(node)) { + if ((canRemap || discovery().alive(node)) && !invalidNodes.contains(node)) { if (locMacs.equals(node.attribute(ATTR_MACS))) return node; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentManager.java index b34da627bd7d3..15af1f10d091d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentManager.java @@ -312,7 +312,7 @@ private boolean undeploy(ClassLoader ldr, GridCacheEntryEx e, GridCacheAdapter c Object val0; try { - CacheObject v = entry.peek(null); + CacheObject v = entry.peek(); key0 = key.value(cache.context().cacheObjectContext(), false); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEntryEx.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEntryEx.java index 2e96a9ca48899..26da38ba019fa 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEntryEx.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEntryEx.java @@ -22,14 +22,15 @@ import java.util.UUID; import javax.cache.Cache; import javax.cache.expiry.ExpiryPolicy; +import javax.cache.processor.EntryProcessor; import javax.cache.processor.EntryProcessorResult; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.eviction.EvictableEntry; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import 
org.apache.ignite.internal.processors.cache.distributed.GridDistributedLockCancelledException; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxState; @@ -42,6 +43,7 @@ import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheFilter; import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorClosure; import org.apache.ignite.internal.util.lang.GridTuple3; +import org.apache.ignite.lang.IgniteUuid; import org.jetbrains.annotations.Nullable; /** @@ -79,6 +81,11 @@ public interface GridCacheEntryEx { */ public boolean isLocal(); + /** + * @return {@code True} if this is an entry from an MVCC cache. + */ + public boolean isMvcc(); + /** + * @return {@code False} if entry belongs to cache map, {@code true} if this entry was created in colocated + * cache and node is not primary for this key. @@ -346,12 +353,15 @@ public EntryGetResult innerGetAndReserveForLoad(boolean updateMetrics, * @param tx Cache transaction. * @param affNodeId Partitioned node iD. * @param val Value to set. + * @param entryProc Entry processor. + * @param invokeArgs Entry processor invoke arguments. * @param ttl0 TTL. * @param topVer Topology version. * @param mvccVer Mvcc version. * @param op Cache operation. - * @param needHistory Whether to collect rows created or affected by the current tx. 
* @param noCreate Entry should not be created when enabled, e.g. SQL INSERT. + * @param needOldVal Flag indicating whether to return the old value (value before the current tx started). * @param filter Filter. * @param retVal Previous value return flag. * @return Tuple containing success flag and old value. If success is {@code false}, @@ -363,12 +373,15 @@ public GridCacheUpdateTxResult mvccSet( @Nullable IgniteInternalTx tx, UUID affNodeId, CacheObject val, + EntryProcessor entryProc, + Object[] invokeArgs, long ttl0, AffinityTopologyVersion topVer, MvccSnapshot mvccVer, GridCacheOperation op, - boolean needHistory, + boolean needHist, boolean noCreate, + boolean needOldVal, @Nullable CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException, GridCacheEntryRemovedException; @@ -377,7 +390,8 @@ public GridCacheUpdateTxResult mvccSet( * @param affNodeId Partitioned node iD. * @param topVer Topology version. * @param mvccVer Mvcc version. - * @param needHistory Whether to collect rows created or affected by the current tx. + * @param needHist Whether to collect rows created or affected by the current tx. + * @param needOldValue Flag indicating whether to return the old value (value before the current tx started). * @param filter Filter. * @param retVal Previous value return flag. * @return Tuple containing success flag and old value. If success is {@code false}, @@ -390,7 +404,8 @@ public GridCacheUpdateTxResult mvccRemove( UUID affNodeId, AffinityTopologyVersion topVer, MvccSnapshot mvccVer, - boolean needHistory, + boolean needHist, + boolean needOldValue, @Nullable CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException, GridCacheEntryRemovedException; @@ -697,6 +712,16 @@ public boolean tmLock(IgniteInternalTx tx, */ public boolean checkSerializableReadVersion(GridCacheVersion serReadVer) throws GridCacheEntryRemovedException; + /** + * Retrieves the last committed MVCC entry version. 
+ * @param onheapOnly {@code True} if a specified peek mode instructs to look only in the on-heap storage. + * @return Last committed entry if any, or {@code null} otherwise. + * @throws GridCacheEntryRemovedException If entry has been removed. + * @throws IgniteCheckedException If failed. + */ + @Nullable public CacheObject mvccPeek(boolean onheapOnly) + throws GridCacheEntryRemovedException, IgniteCheckedException; + /** * Peeks into entry without loading value or updating statistics. * @@ -717,12 +742,11 @@ public boolean tmLock(IgniteInternalTx tx, /** * Peeks into entry without loading value or updating statistics. * - * @param plc Expiry policy if TTL should be updated. * @return Value. * @throws GridCacheEntryRemovedException If entry has been removed. * @throws IgniteCheckedException If failed. */ - @Nullable public CacheObject peek(@Nullable IgniteCacheExpiryPolicy plc) + @Nullable public CacheObject peek() throws GridCacheEntryRemovedException, IgniteCheckedException; /** @@ -1161,8 +1185,12 @@ public void updateIndex(SchemaIndexCacheFilter filter, SchemaIndexCacheVisitorCl * @param tx Transaction. * @param affNodeId Affinity node id. * @param topVer Topology version. + * @param entries Entries. * @param op Cache operation. - * @param mvccVer Mvcc version. @return Update result. + * @param mvccVer Mvcc version. + * @param futId Future id. + * @param batchNum Batch number. + * @return Update result. * @throws IgniteCheckedException, If failed. * @throws GridCacheEntryRemovedException, If entry has been removed. */ @@ -1172,13 +1200,14 @@ public GridCacheUpdateTxResult mvccUpdateRowsWithPreloadInfo( AffinityTopologyVersion topVer, List entries, GridCacheOperation op, - MvccSnapshot mvccVer) + MvccSnapshot mvccVer, + IgniteUuid futId, + int batchNum) throws IgniteCheckedException, GridCacheEntryRemovedException; /** * Touch this entry in its context's eviction manager. * - * @param topVer Topology version. 
*/ - public void touch(AffinityTopologyVersion topVer); + public void touch(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEventManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEventManager.java index 3c5cf1e944ad1..726a6c88c1804 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEventManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEventManager.java @@ -63,6 +63,7 @@ public void removeListener(GridLocalEventListener lsnr) { /** * @param key Key for event. * @param tx Possible surrounding transaction. + * @param txLbl Possible label of possible surrounding transaction. * @param val Read value. * @param subjId Subject ID. * @param taskName Task name. @@ -70,6 +71,7 @@ public void removeListener(GridLocalEventListener lsnr) { public void readEvent(KeyCacheObject key, @Nullable IgniteInternalTx tx, + @Nullable String txLbl, @Nullable CacheObject val, @Nullable UUID subjId, @Nullable String taskName, @@ -77,7 +79,9 @@ public void readEvent(KeyCacheObject key, if (isRecordable(EVT_CACHE_OBJECT_READ)) { addEvent(cctx.affinity().partition(key), key, + cctx.localNodeId(), tx, + txLbl, null, EVT_CACHE_OBJECT_READ, val, @@ -107,7 +111,7 @@ public void readEvent(KeyCacheObject key, */ public void addEvent(int part, KeyCacheObject key, - IgniteInternalTx tx, + @Nullable IgniteInternalTx tx, @Nullable GridCacheMvccCandidate owner, int type, @Nullable CacheObject newVal, @@ -143,7 +147,8 @@ public void addEvent(int type) { 0, null, cctx.localNodeId(), - (IgniteUuid)null, + null, + null, null, type, null, @@ -174,7 +179,7 @@ public void addEvent(int type) { public void addEvent(int part, KeyCacheObject key, UUID nodeId, - IgniteInternalTx tx, + @Nullable IgniteInternalTx tx, GridCacheMvccCandidate owner, int type, CacheObject newVal, @@ -188,7 +193,9 @@ public void addEvent(int part, { 
addEvent(part, key, - nodeId, tx == null ? null : tx.xid(), + nodeId, + tx, + null, owner == null ? null : owner.version(), type, newVal, @@ -234,7 +241,8 @@ public void addEvent(int part, addEvent(part, key, evtNodeId, - tx == null ? null : tx.xid(), + tx, + null, owner == null ? null : owner.version(), type, newVal, @@ -251,7 +259,8 @@ public void addEvent(int part, * @param part Partition. * @param key Key for the event. * @param evtNodeId Event node ID. - * @param xid Transaction ID. + * @param tx Possible surrounding transaction. + * @param txLbl Possible label of possible surrounding transaction. * @param lockId Lock ID. * @param type Event type. * @param newVal New value. @@ -266,7 +275,8 @@ public void addEvent( int part, KeyCacheObject key, UUID evtNodeId, - @Nullable IgniteUuid xid, + @Nullable IgniteInternalTx tx, + @Nullable String txLbl, @Nullable Object lockId, int type, @Nullable CacheObject newVal, @@ -324,6 +334,10 @@ public void addEvent( oldVal0 = cctx.cacheObjectContext().unwrapBinaryIfNeeded(oldVal, true, false); } + IgniteUuid xid = tx == null ? null : tx.xid(); + + String finalTxLbl = (tx == null || tx.label() == null) ? txLbl : tx.label(); + cctx.gridEvents().record(new CacheEvent(cctx.name(), cctx.localNode(), evtNode, @@ -333,6 +347,7 @@ public void addEvent( cctx.isNear(), key0, xid, + finalTxLbl, lockId, val0, hasNewVal, @@ -372,6 +387,10 @@ public void addEvent( public boolean isRecordable(int type) { GridCacheContext cctx0 = cctx; + // Event recording is impossible in recovery mode. 
+ if (cctx0 != null && cctx0.kernalContext().recoveryMode()) + return false; + return cctx0 != null && cctx0.userCache() && cctx0.gridEvents().isRecordable(type) && !cctx0.config().isEventsDisabled(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionManager.java index e5ab189329aff..12cb0fa5722df 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionManager.java @@ -23,7 +23,6 @@ import org.apache.ignite.cache.eviction.EvictionFilter; import org.apache.ignite.cache.eviction.EvictionPolicy; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionManager; @@ -155,7 +154,7 @@ private boolean evict0( cache.metrics0().onEvict(); if (recordable) - cctx.events().addEvent(entry.partition(), entry.key(), cctx.nodeId(), (IgniteUuid)null, null, + cctx.events().addEvent(entry.partition(), entry.key(), cctx.nodeId(), null, null, null, EVT_CACHE_ENTRY_EVICTED, null, false, oldVal, hasVal, null, null, null, false); if (log.isDebugEnabled()) @@ -201,7 +200,7 @@ private boolean evict0( } /** {@inheritDoc} */ - @Override public void touch(GridCacheEntryEx e, AffinityTopologyVersion topVer) { + @Override public void touch(GridCacheEntryEx e) { assert e.context() == cctx : "Entry from another cache context passed to eviction manager: [" + "entry=" + e + ", cctx=" + cctx + @@ -296,7 +295,7 @@ private void warnFirstEvict() { notifyPolicy(entry); if (recordable) - 
cctx.events().addEvent(entry.partition(), entry.key(), cctx.nodeId(), (IgniteUuid)null, null, + cctx.events().addEvent(entry.partition(), entry.key(), cctx.nodeId(), null, null, null, EVT_CACHE_ENTRY_EVICTED, null, false, entry.rawGet(), entry.hasValue(), null, null, null, false); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheGroupIdMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheGroupIdMessage.java index 09c143b0c0dd3..bfdce35e86e62 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheGroupIdMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheGroupIdMessage.java @@ -50,7 +50,7 @@ public int groupId() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 3; + return 4; } /** {@inheritDoc} */ @@ -68,7 +68,7 @@ public int groupId() { } switch (writer.state()) { - case 2: + case 3: if (!writer.writeInt("grpId", grpId)) return false; @@ -90,7 +90,7 @@ public int groupId() { return false; switch (reader.state()) { - case 2: + case 3: grpId = reader.readInt("grpId"); if (!reader.isLastRead()) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIdMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIdMessage.java index 6c20bdd15bdd6..e0944397ecf3d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIdMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIdMessage.java @@ -52,7 +52,7 @@ public void cacheId(int cacheId) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 3; + return 4; } /** {@inheritDoc} */ @@ -70,7 +70,7 @@ public void cacheId(int cacheId) { } switch (writer.state()) { - case 2: + case 3: if (!writer.writeInt("cacheId", cacheId)) return false; @@ -92,7 +92,7 @@ public void 
cacheId(int cacheId) { return false; switch (reader.state()) { - case 2: + case 3: cacheId = reader.readInt("cacheId"); if (!reader.isLastRead()) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIoManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIoManager.java index 2e66e5bfc3fe4..2588a51c6e8f7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIoManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheIoManager.java @@ -51,6 +51,8 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxFinishResponse; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareRequest; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareResponse; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxQueryEnlistRequest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxQueryEnlistResponse; import org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateRequest; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicNearResponse; @@ -73,10 +75,16 @@ import org.apache.ignite.internal.processors.cache.distributed.near.GridNearLockResponse; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearSingleGetRequest; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearSingleGetResponse; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistRequest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistResponse; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishRequest; import 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishResponse; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareResponse; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxQueryEnlistRequest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxQueryEnlistResponse; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxQueryResultsEnlistRequest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxQueryResultsEnlistResponse; import org.apache.ignite.internal.processors.cache.query.GridCacheQueryRequest; import org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponse; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxState; @@ -165,6 +173,11 @@ public void dumpPendingMessages(StringBuilder sb) { final GridCacheMessage cacheMsg = (GridCacheMessage)msg; + AffinityTopologyVersion rmtAffVer = cacheMsg.topologyVersion(); + AffinityTopologyVersion lastAffChangedVer = cacheMsg.lastAffinityChangedTopologyVersion(); + + cctx.exchange().lastAffinityChangedTopologyVersion(rmtAffVer, lastAffChangedVer); + IgniteInternalFuture fut = null; if (cacheMsg.partitionExchangeMessage()) { @@ -222,9 +235,8 @@ else if (desc.receivedFromStartVersion() != null) } else { AffinityTopologyVersion locAffVer = cctx.exchange().readyAffinityVersion(); - AffinityTopologyVersion rmtAffVer = cacheMsg.topologyVersion(); - if (locAffVer.compareTo(rmtAffVer) < 0) { + if (locAffVer.compareTo(lastAffChangedVer) < 0) { IgniteLogger log = cacheMsg.messageLogger(cctx); if (log.isDebugEnabled()) { @@ -234,12 +246,13 @@ else if (desc.receivedFromStartVersion() != null) msg0.append(", locTopVer=").append(locAffVer). append(", rmtTopVer=").append(rmtAffVer). + append(", lastAffChangedVer=").append(lastAffChangedVer). 
append(']'); log.debug(msg0.toString()); } - fut = cctx.exchange().affinityReadyFuture(rmtAffVer); + fut = cctx.exchange().affinityReadyFuture(lastAffChangedVer); } } @@ -581,7 +594,22 @@ private void onMessage0(final UUID nodeId, final GridCacheMessage cacheMsg, processMessage(nodeId, cacheMsg, c); } catch (Throwable e) { - U.error(log, "Failed to process message [senderId=" + nodeId + ", messageType=" + cacheMsg.getClass() + ']', e); + String msgStr; + + try { + msgStr = String.valueOf(cacheMsg); + } + catch (Throwable e0) { + String clsName = cacheMsg.getClass().getName(); + + U.error(log, "Failed to log message due to an error: " + clsName, e0); + + msgStr = clsName + "(failed to log message)"; + } + + U.error(log, "Failed to process message [senderId=" + nodeId + ", msg=" + msgStr + ']', e); + + cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); if (e instanceof Error) throw (Error)e; @@ -1012,6 +1040,66 @@ private void processFailedMessage(UUID nodeId, break; + case 151: { + GridNearTxQueryEnlistRequest req = (GridNearTxQueryEnlistRequest)msg; + + GridNearTxQueryEnlistResponse res = new GridNearTxQueryEnlistResponse( + req.cacheId(), + req.futureId(), + req.miniId(), + req.version(), + req.classError()); + + sendResponseOnFailedMessage(nodeId, res, cctx, plc); + + break; + } + + case 153: { + GridNearTxQueryResultsEnlistRequest req = (GridNearTxQueryResultsEnlistRequest)msg; + + GridNearTxQueryEnlistResponse res = new GridNearTxQueryResultsEnlistResponse( + req.cacheId(), + req.futureId(), + req.miniId(), + req.version(), + req.classError()); + + sendResponseOnFailedMessage(nodeId, res, cctx, plc); + + break; + } + + case 155: /* GridDhtTxQueryEnlistRequest */ + case 156: /* GridDhtTxQueryFirstEnlistRequest */ { + GridDhtTxQueryEnlistRequest req = (GridDhtTxQueryEnlistRequest)msg; + + GridDhtTxQueryEnlistResponse res = new GridDhtTxQueryEnlistResponse( + req.cacheId(), + req.dhtFutureId(), + req.batchId(), + 
req.classError()); + + sendResponseOnFailedMessage(nodeId, res, cctx, plc); + + break; + } + + case 159: { + GridNearTxEnlistRequest req = (GridNearTxEnlistRequest)msg; + + GridNearTxEnlistResponse res = new GridNearTxEnlistResponse( + req.cacheId(), + req.futureId(), + req.miniId(), + req.version(), + req.classError()); + + sendResponseOnFailedMessage(nodeId, res, cctx, plc); + + break; + } + case -36: { GridDhtAtomicSingleUpdateRequest req = (GridDhtAtomicSingleUpdateRequest)msg; @@ -1155,6 +1243,8 @@ public boolean checkNodeLeft(UUID nodeId, IgniteCheckedException sndErr, boolean public void send(ClusterNode node, GridCacheMessage msg, byte plc) throws IgniteCheckedException { assert !node.isLocal() : node; + msg.lastAffinityChangedTopologyVersion(cctx.exchange().lastAffinityChangedTopologyVersion(msg.topologyVersion())); + if (!onSend(msg, node.id())) return; @@ -1222,6 +1312,8 @@ public void sendOrderedMessage(ClusterNode node, Object topic, GridCacheMessage if (!onSend(msg, node.id())) return; + msg.lastAffinityChangedTopologyVersion(cctx.exchange().lastAffinityChangedTopologyVersion(msg.topologyVersion())); + int cnt = 0; while (cnt <= retryCnt) { @@ -1278,6 +1370,8 @@ void sendNoRetry(ClusterNode node, if (!onSend(msg, null)) return; + msg.lastAffinityChangedTopologyVersion(cctx.exchange().lastAffinityChangedTopologyVersion(msg.topologyVersion())); + try { cctx.gridIO().sendToGridTopic(node, TOPIC_CACHE, msg, plc); @@ -1340,22 +1434,22 @@ private void addHandler( if (msgIdx != -1) { Map idxClsHandlers0 = msgHandlers.idxClsHandlers; - IgniteBiInClosure[] cacheClsHandlers = idxClsHandlers0.get(hndId); + IgniteBiInClosure[] cacheClsHandlers = idxClsHandlers0.compute(hndId, (key, clsHandlers) -> { + if (clsHandlers == null) + clsHandlers = new IgniteBiInClosure[GridCacheMessage.MAX_CACHE_MSG_LOOKUP_INDEX]; - if (cacheClsHandlers == null) { - cacheClsHandlers = new IgniteBiInClosure[GridCacheMessage.MAX_CACHE_MSG_LOOKUP_INDEX]; + if(clsHandlers[msgIdx] != null) 
+ return null; - idxClsHandlers0.put(hndId, cacheClsHandlers); - } + clsHandlers[msgIdx] = c; - if (cacheClsHandlers[msgIdx] != null) + return clsHandlers; + }); + + if (cacheClsHandlers == null) throw new IgniteException("Duplicate cache message ID found [hndId=" + hndId + ", type=" + type + ']'); - cacheClsHandlers[msgIdx] = c; - - msgHandlers.idxClsHandlers = idxClsHandlers0; - return; } else { @@ -1572,7 +1666,7 @@ else if (msg instanceof GridCacheGroupIdMessage) */ static class MessageHandlers { /** Indexed class handlers. */ - volatile Map idxClsHandlers = new HashMap<>(); + volatile Map idxClsHandlers = new ConcurrentHashMap<>(); /** Handler registry. */ ConcurrentMap> diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheMapEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheMapEntry.java index f58a3dcb92610..801aefaa50c89 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheMapEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheMapEntry.java @@ -34,6 +34,7 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.cache.CacheInterceptor; import org.apache.ignite.cache.eviction.EvictableEntry; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.NodeStoppingException; @@ -42,12 +43,14 @@ import org.apache.ignite.internal.pagemem.wal.WALPointer; import org.apache.ignite.internal.pagemem.wal.record.DataEntry; import org.apache.ignite.internal.pagemem.wal.record.DataRecord; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataRecord; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import 
org.apache.ignite.internal.processors.cache.GridCacheUpdateAtomicResult.UpdateOutcome; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheEntry; import org.apache.ignite.internal.processors.cache.extras.GridCacheEntryExtras; import org.apache.ignite.internal.processors.cache.extras.GridCacheMvccEntryExtras; @@ -67,14 +70,12 @@ import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter; import org.apache.ignite.internal.processors.cache.transactions.TxCounters; -import org.apache.ignite.internal.processors.cache.tree.mvcc.data.MvccUpdateDataRow; import org.apache.ignite.internal.processors.cache.tree.mvcc.data.MvccUpdateResult; import org.apache.ignite.internal.processors.cache.tree.mvcc.data.ResultType; import org.apache.ignite.internal.processors.cache.version.GridCacheLazyPlainVersionedEntry; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionConflictContext; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionEx; -import org.apache.ignite.internal.processors.cache.version.GridCacheVersionManager; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionedEntryEx; import org.apache.ignite.internal.processors.dr.GridDrType; import org.apache.ignite.internal.processors.query.IgniteSQLException; @@ -99,6 +100,7 @@ import 
org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.thread.IgniteThread; import org.jetbrains.annotations.Nullable; @@ -115,12 +117,12 @@ import static org.apache.ignite.internal.processors.cache.GridCacheOperation.UPDATE; import static org.apache.ignite.internal.processors.cache.GridCacheUpdateAtomicResult.UpdateOutcome.INVOKE_NO_OP; import static org.apache.ignite.internal.processors.cache.GridCacheUpdateAtomicResult.UpdateOutcome.REMOVE_NO_VAL; +import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_MAX_SNAPSHOT; +import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.compareIgnoreOpCounter; import static org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.RowData.NO_KEY; -import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.CONCURRENT_UPDATE; import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.DUPLICATE_KEY; -import static org.apache.ignite.internal.processors.dr.GridDrType.DR_BACKUP; +import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.TRANSACTION_SERIALIZATION_ERROR; import static org.apache.ignite.internal.processors.dr.GridDrType.DR_NONE; -import static org.apache.ignite.internal.processors.dr.GridDrType.DR_PRIMARY; /** * Adapter for cache entry. @@ -141,7 +143,9 @@ public abstract class GridCacheMapEntry extends GridMetadataAwareAdapter impleme /** * NOTE + *
      * ====
+     *
      * Make sure to recalculate this value any time when adding or removing fields from entry.
      * The size should be count as follows:
      *
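The accounting rules in the comment above can be sanity-checked with a quick sum. The sketch below is not part of the patch (the class and variable names are mine); the per-field sizes are copied from the field-by-field breakdown in this diff, and the total matches the new `SIZE_OVERHEAD` expression `8 * 8 + 5 + 16 + 16 + 20 + 16`:

```java
// Sanity check for the SIZE_OVERHEAD constant: sums the per-field costs
// listed in the entry-size javadoc (a sketch, not part of the patch).
public class SizeOverheadCheck {
    public static void main(String[] args) {
        int references = 8 * 8;   // cctx, key, val, ver, extras, lock, listenerLock, data
        int primitives = 4 + 1;   // int hash + byte flags
        int extras = 8 + 8;       // ttl + expireTime
        int version = 4 + 4 + 8;  // topVer + nodeOrderDrId + order
        int key = 8 + 8 + 4;      // val + valBytes + part
        int value = 8 + 8;        // val + valBytes

        int sizeOverhead = references + primitives + extras + version + key + value;

        System.out.println(sizeOverhead); // prints 137
    }
}
```

Any change to the entry's fields should be mirrored in both the javadoc breakdown and the constant, since the two are kept in sync only by hand.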

@@ -149,9 +153,45 @@ public abstract class GridCacheMapEntry extends GridMetadataAwareAdapter impleme
      * <li>References: 8 each</li>
      * <li>Each nested object should be analyzed in the same way as above.</li>
      * </ul>
+     * ====
+     *
+     * <ul>
+     *     <li>Reference fields:<ul>
+     *         <li>8 : {@link #cctx}</li>
+     *         <li>8 : {@link #key}</li>
+     *         <li>8 : {@link #val}</li>
+     *         <li>8 : {@link #ver}</li>
+     *         <li>8 : {@link #extras}</li>
+     *         <li>8 : {@link #lock}</li>
+     *         <li>8 : {@link #listenerLock}</li>
+     *         <li>8 : {@link GridMetadataAwareAdapter#data}</li>
+     *     </ul></li>
+     *     <li>Primitive fields:<ul>
+     *         <li>4 : {@link #hash}</li>
+     *         <li>1 : {@link #flags}</li>
+     *     </ul></li>
+     *     <li>Extras:<ul>
+     *         <li>8 : {@link GridCacheEntryExtras#ttl()}</li>
+     *         <li>8 : {@link GridCacheEntryExtras#expireTime()}</li>
+     *     </ul></li>
+     *     <li>Version:<ul>
+     *         <li>4 : {@link GridCacheVersion#topVer}</li>
+     *         <li>4 : {@link GridCacheVersion#nodeOrderDrId}</li>
+     *         <li>8 : {@link GridCacheVersion#order}</li>
+     *     </ul></li>
+     *     <li>Key:<ul>
+     *         <li>8 : {@link CacheObjectAdapter#val}</li>
+     *         <li>8 : {@link CacheObjectAdapter#valBytes}</li>
+     *         <li>4 : {@link KeyCacheObjectImpl#part}</li>
+     *     </ul></li>
+     *     <li>Value:<ul>
+     *         <li>8 : {@link CacheObjectAdapter#val}</li>
+     *         <li>8 : {@link CacheObjectAdapter#valBytes}</li>
+     *     </ul></li>
+     * </ul>
*/ - // 7 * 8 /*references*/ + 2 * 8 /*long*/ + 1 * 4 /*int*/ + 1 * 1 /*byte*/ + array at parent = 85 - private static final int SIZE_OVERHEAD = 85 /*entry*/ + 32 /* version */ + 4 * 7 /* key + val */; + private static final int SIZE_OVERHEAD = 8 * 8 /* references */ + 5 /* primitives */ + 16 /* extras */ + + 16 /* version */ + 20 /* key */ + 16 /* value */; /** Static logger to avoid re-creation. Made static for test purpose. */ protected static final AtomicReference logRef = new AtomicReference<>(); @@ -221,7 +261,7 @@ protected GridCacheMapEntry( this.cctx = cctx; this.listenerLock = cctx.continuousQueries().getListenerReadLock(); - ver = GridCacheVersionManager.START_VER; + ver = cctx.shared().versions().startVersion(); } /** @@ -279,6 +319,11 @@ protected void value(@Nullable CacheObject val) { return false; } + /** {@inheritDoc} */ + @Override public boolean isMvcc() { + return cctx.mvccEnabled(); + } + /** {@inheritDoc} */ @Override public boolean isNear() { return false; @@ -326,7 +371,7 @@ protected void value(@Nullable CacheObject val) { * @return {@code True} if start version. */ public boolean isStartVersion() { - return ver == GridCacheVersionManager.START_VER; + return cctx.shared().versions().isStartVersion(ver); } /** {@inheritDoc} */ @@ -508,9 +553,7 @@ protected GridDhtLocalPartition localPartition() { update(val, read.expireTime(), 0, read.version(), false); - long delta = checkExpire && read.expireTime() > 0 ? 
read.expireTime() - U.currentTimeMillis() : 0; - - if (delta >= 0) + if (!(checkExpire && read.expireTime() > 0) || (read.expireTime() > U.currentTimeMillis())) return read; else { if (onExpired(this.val, null)) { @@ -1026,7 +1069,7 @@ private EntryGetResult entryGetResult(CacheObject val, GridCacheVersion ver, boo } finally { if (touch) - touch(cctx.affinity().affinityTopologyVersion()); + touch(); } } @@ -1042,18 +1085,23 @@ protected void recordNodeId(UUID nodeId, AffinityTopologyVersion topVer) { IgniteInternalTx tx, UUID affNodeId, CacheObject val, + EntryProcessor entryProc, + Object[] invokeArgs, long ttl0, AffinityTopologyVersion topVer, MvccSnapshot mvccVer, GridCacheOperation op, boolean needHistory, boolean noCreate, + boolean needOldVal, CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException, GridCacheEntryRemovedException { assert tx != null; final boolean valid = valid(tx.topologyVersion()); + final boolean invoke = entryProc != null; + final GridCacheVersion newVer; WALPointer logPtr = null; @@ -1084,13 +1132,14 @@ protected void recordNodeId(UUID nodeId, AffinityTopologyVersion topVer) { assert ttl >= 0 : ttl; assert expireTime >= 0 : expireTime; + // Detach value before index update. 
val = cctx.kernalContext().cacheObjects().prepareForCache(val, cctx); - assert val != null; + assert val != null || invoke; - res = cctx.offheap().mvccUpdate( - this, val, newVer, expireTime, mvccVer, tx.local(), needHistory, noCreate, filter, retVal); + res = cctx.offheap().mvccUpdate(this, val, newVer, expireTime, mvccVer, tx.local(), needHistory, + noCreate, needOldVal, filter, retVal, entryProc, invokeArgs); assert res != null; @@ -1102,8 +1151,23 @@ protected void recordNodeId(UUID nodeId, AffinityTopologyVersion topVer) { assert res.resultType() != ResultType.PREV_NOT_NULL || op != CREATE || tx.local(); if (res.resultType() == ResultType.VERSION_MISMATCH) - throw new IgniteSQLException("Mvcc version mismatch.", CONCURRENT_UPDATE); - else if (res.resultType() == ResultType.FILTERED || (noCreate && res.resultType() == ResultType.PREV_NULL)) + throw serializationError(); + else if (res.resultType() == ResultType.FILTERED) { + GridCacheUpdateTxResult updRes = new GridCacheUpdateTxResult(invoke); + + assert !invoke || res.invokeResult() != null; + + if (invoke) // No-op invoke happened. 
+ updRes.invokeResult(res.invokeResult()); + + updRes.filtered(true); + + if (retVal) + updRes.prevValue(res.oldValue()); + + return updRes; + } + else if (noCreate && !invoke && res.resultType() == ResultType.PREV_NULL) return new GridCacheUpdateTxResult(false); else if (res.resultType() == ResultType.LOCKED) { unlockEntry(); @@ -1115,7 +1179,7 @@ else if (res.resultType() == ResultType.LOCKED) { IgniteInternalFuture lockFut = cctx.kernalContext().coordinators().waitFor(cctx, lockVer); lockFut.listen(new MvccUpdateLockListener(tx, this, affNodeId, topVer, val, ttl0, mvccVer, - op, needHistory, noCreate, filter, retVal, resFut)); + op, needHistory, noCreate, resFut, needOldVal, filter, retVal, entryProc, invokeArgs)); return new GridCacheUpdateTxResult(false, resFut); } @@ -1129,7 +1193,7 @@ else if (op == CREATE && tx.local() && (res.resultType() == ResultType.PREV_NOT_ if (res.resultType() == ResultType.PREV_NULL) { TxCounters counters = tx.txCounters(true); - if (res.isOwnValueOverridden()) { + if (compareIgnoreOpCounter(res.resultVersion(), mvccVer) == 0) { if (res.isKeyAbsentBefore()) counters.incrementUpdateCounter(cctx.cacheId(), partition()); } @@ -1138,36 +1202,49 @@ else if (op == CREATE && tx.local() && (res.resultType() == ResultType.PREV_NOT_ counters.accumulateSizeDelta(cctx.cacheId(), partition(), 1); } - else if (res.resultType() == ResultType.PREV_NOT_NULL && !res.isOwnValueOverridden()) { + else if (res.resultType() == ResultType.PREV_NOT_NULL && compareIgnoreOpCounter(res.resultVersion(), mvccVer) != 0) { TxCounters counters = tx.txCounters(true); counters.incrementUpdateCounter(cctx.cacheId(), partition()); } + else if (res.resultType() == ResultType.REMOVED_NOT_NULL) { + TxCounters counters = tx.txCounters(true); + + if (compareIgnoreOpCounter(res.resultVersion(), mvccVer) == 0) { + if (res.isKeyAbsentBefore()) // Do not count own update removal. 
+ counters.decrementUpdateCounter(cctx.cacheId(), partition()); + } + else + counters.incrementUpdateCounter(cctx.cacheId(), partition()); + + counters.accumulateSizeDelta(cctx.cacheId(), partition(), -1); + } if (cctx.group().persistenceEnabled() && cctx.group().walEnabled()) { - logPtr = cctx.shared().wal().log(new DataRecord(new DataEntry( + logPtr = cctx.shared().wal().log(new MvccDataRecord(new MvccDataEntry( cctx.cacheId(), key, val, - res.resultType() == ResultType.PREV_NULL ? CREATE : UPDATE, + res.resultType() == ResultType.PREV_NULL ? CREATE : + (res.resultType() == ResultType.REMOVED_NOT_NULL) ? DELETE : UPDATE, tx.nearXidVersion(), newVer, expireTime, key.partition(), - 0L))); + 0L, + mvccVer) + )); } update(val, expireTime, ttl, newVer, true); - mvccDrReplicate(tx.local() ? DR_PRIMARY : DR_BACKUP, val, newVer, topVer, mvccVer); - recordNodeId(affNodeId, topVer); } finally { if (lockedByCurrentThread()) { unlockEntry(); - cctx.evicts().touch(this, AffinityTopologyVersion.NONE); + cctx.evicts().touch(this); } } @@ -1176,12 +1253,19 @@ else if (res.resultType() == ResultType.PREV_NOT_NULL && !res.isOwnValueOverridd GridCacheUpdateTxResult updRes = valid ? 
new GridCacheUpdateTxResult(true, 0L, logPtr) : new GridCacheUpdateTxResult(false, logPtr); - CacheDataRow oldRow = ((MvccUpdateDataRow)res).oldRow(); + if (retVal && (res.resultType() == ResultType.PREV_NOT_NULL || res.resultType() == ResultType.VERSION_FOUND)) + updRes.prevValue(res.oldValue()); + + if (needOldVal && compareIgnoreOpCounter(res.resultVersion(), mvccVer) != 0 && ( + res.resultType() == ResultType.PREV_NOT_NULL || res.resultType() == ResultType.REMOVED_NOT_NULL)) + updRes.oldValue(res.oldValue()); + + updRes.newValue(res.newValue()); - if(retVal && (res.resultType() == ResultType.PREV_NOT_NULL || res.resultType() == ResultType.VERSION_FOUND)) { - assert oldRow != null; + if (invoke && res.resultType() != ResultType.VERSION_FOUND) { + assert res.invokeResult() != null; - updRes.prevValue(oldRow.value()); + updRes.invokeResult(res.invokeResult()); } updRes.mvccHistory(res.history()); @@ -1196,6 +1280,7 @@ else if (res.resultType() == ResultType.PREV_NOT_NULL && !res.isOwnValueOverridd AffinityTopologyVersion topVer, MvccSnapshot mvccVer, boolean needHistory, + boolean needOldVal, @Nullable CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException, GridCacheEntryRemovedException { assert tx != null; @@ -1218,14 +1303,21 @@ else if (res.resultType() == ResultType.PREV_NOT_NULL && !res.isOwnValueOverridd assert newVer != null : "Failed to get write version for tx: " + tx; - res = cctx.offheap().mvccRemove(this, mvccVer, tx.local(), needHistory, filter, retVal); + res = cctx.offheap().mvccRemove(this, mvccVer, tx.local(), needHistory, needOldVal, filter, retVal); assert res != null; if (res.resultType() == ResultType.VERSION_MISMATCH) - throw new IgniteSQLException("Mvcc version mismatch.", CONCURRENT_UPDATE); - else if (res.resultType() == ResultType.PREV_NULL || res.resultType() == ResultType.FILTERED) + throw serializationError(); + else if (res.resultType() == ResultType.PREV_NULL) return new GridCacheUpdateTxResult(false); + else if 
(res.resultType() == ResultType.FILTERED) { + GridCacheUpdateTxResult updRes = new GridCacheUpdateTxResult(false); + + updRes.filtered(true); + + return updRes; + } else if (res.resultType() == ResultType.LOCKED) { unlockEntry(); @@ -1236,7 +1328,7 @@ else if (res.resultType() == ResultType.LOCKED) { IgniteInternalFuture lockFut = cctx.kernalContext().coordinators().waitFor(cctx, lockVer); lockFut.listen(new MvccRemoveLockListener(tx, this, affNodeId, topVer, mvccVer, needHistory, - resFut, retVal, filter)); + resFut, needOldVal, retVal, filter)); return new GridCacheUpdateTxResult(false, resFut); } @@ -1247,7 +1339,7 @@ else if (res.resultType() == ResultType.LOCKED) { if (res.resultType() == ResultType.PREV_NOT_NULL) { TxCounters counters = tx.txCounters(true); - if (res.isOwnValueOverridden()) { + if (compareIgnoreOpCounter(res.resultVersion(), mvccVer) == 0) { if (res.isKeyAbsentBefore()) // Do not count own update removal. counters.decrementUpdateCounter(cctx.cacheId(), partition()); } @@ -1258,19 +1350,17 @@ else if (res.resultType() == ResultType.LOCKED) { } if (cctx.group().persistenceEnabled() && cctx.group().walEnabled()) - logPtr = logTxUpdate(tx, null, 0, 0L); + logPtr = logMvccUpdate(tx, null, 0, 0L, mvccVer); update(null, 0, 0, newVer, true); - mvccDrReplicate(tx.local() ? DR_PRIMARY : DR_BACKUP, null, newVer, topVer, mvccVer); - recordNodeId(affNodeId, topVer); } finally { if (lockedByCurrentThread()) { unlockEntry(); - cctx.evicts().touch(this, AffinityTopologyVersion.NONE); + cctx.evicts().touch(this); } } @@ -1279,13 +1369,12 @@ else if (res.resultType() == ResultType.LOCKED) { GridCacheUpdateTxResult updRes = valid ? 
new GridCacheUpdateTxResult(true, 0L, logPtr) : new GridCacheUpdateTxResult(false, logPtr); - CacheDataRow oldRow = ((MvccUpdateDataRow)res).oldRow(); + if (retVal && (res.resultType() == ResultType.PREV_NOT_NULL || res.resultType() == ResultType.VERSION_FOUND)) + updRes.prevValue(res.oldValue()); - if(retVal && (res.resultType() == ResultType.PREV_NOT_NULL || res.resultType() == ResultType.VERSION_FOUND)) { - assert oldRow != null; - - updRes.prevValue(oldRow.value()); - } + if (needOldVal && compareIgnoreOpCounter(res.resultVersion(), mvccVer) != 0 && + (res.resultType() == ResultType.PREV_NOT_NULL || res.resultType() == ResultType.REMOVED_NOT_NULL)) + updRes.oldValue(res.oldValue()); updRes.mvccHistory(res.history()); @@ -1320,7 +1409,7 @@ else if (res.resultType() == ResultType.LOCKED) { assert res != null; if (res.resultType() == ResultType.VERSION_MISMATCH) - throw new IgniteSQLException("Mvcc version mismatch.", CONCURRENT_UPDATE); + throw serializationError(); else if (res.resultType() == ResultType.LOCKED) { unlockEntry(); @@ -1339,7 +1428,7 @@ else if (res.resultType() == ResultType.LOCKED) { if (lockedByCurrentThread()) { unlockEntry(); - cctx.evicts().touch(this, AffinityTopologyVersion.NONE); + cctx.evicts().touch(this); } } @@ -1348,24 +1437,6 @@ else if (res.resultType() == ResultType.LOCKED) { return new GridCacheUpdateTxResult(valid, logPtr); } - /** - * Enlist for DR if needed. - * - * @param drType DR type. - * @param val Value. - * @param ver Version. - * @param topVer Topology version. - * @param mvccVer MVCC snapshot. - * @throws IgniteCheckedException In case of exception. 
- */ - private void mvccDrReplicate(GridDrType drType, CacheObject val, GridCacheVersion ver, - AffinityTopologyVersion topVer, - MvccSnapshot mvccVer) throws IgniteCheckedException { - - if (cctx.isDrEnabled() && drType != DR_NONE && !isInternal()) - cctx.dr().mvccReplicate(key, val, rawTtl(), rawExpireTime(), ver.conflictVersion(), drType, topVer, mvccVer); - } - /** {@inheritDoc} */ @Override public final GridCacheUpdateTxResult innerSet( @Nullable IgniteInternalTx tx, @@ -1436,7 +1507,8 @@ private void mvccDrReplicate(GridDrType drType, CacheObject val, GridCacheVersio boolean internal = isInternal() || !context().userCache(); Map lsnrCol = - notifyContinuousQueries(tx) ? cctx.continuousQueries().updateListeners(internal, false) : null; + notifyContinuousQueries() ? + cctx.continuousQueries().updateListeners(internal, false) : null; if (startVer && (retval || intercept || lsnrCol != null)) unswap(retval); @@ -1448,17 +1520,18 @@ private void mvccDrReplicate(GridDrType drType, CacheObject val, GridCacheVersio old = oldValPresent ? oldVal : this.val; + if (intercept) + intercept = !skipInterceptor(explicitVer); + if (intercept) { val0 = cctx.unwrapBinaryIfNeeded(val, keepBinary, false); CacheLazyEntry e = new CacheLazyEntry(cctx, key, old, keepBinary); - Object interceptorVal = cctx.config().getInterceptor().onBeforePut( - new CacheLazyEntry(cctx, key, old, keepBinary), - val0); - key0 = e.key(); + Object interceptorVal = cctx.config().getInterceptor().onBeforePut(e, val0); + if (interceptorVal == null) return new GridCacheUpdateTxResult(false, logPtr); else if (interceptorVal != val0) @@ -1537,7 +1610,8 @@ else if (interceptorVal != val0) cctx.events().addEvent(partition(), key, evtNodeId, - tx == null ? null : tx.xid(), + tx, + null, newVer, EVT_CACHE_OBJECT_PUT, val, @@ -1668,13 +1742,17 @@ protected Object keyValue(boolean cpy) { boolean internal = isInternal() || !context().userCache(); Map lsnrCol = - notifyContinuousQueries(tx) ? 
cctx.continuousQueries().updateListeners(internal, false) : null; + notifyContinuousQueries() ? + cctx.continuousQueries().updateListeners(internal, false) : null; if (startVer && (retval || intercept || lsnrCol != null)) unswap(); old = oldValPresent ? oldVal : val; + if (intercept) + intercept = !skipInterceptor(explicitVer); + if (intercept) { entry0 = new CacheLazyEntry(cctx, key, old, keepBinary); @@ -1747,7 +1825,9 @@ else if (log.isDebugEnabled()) cctx.events().addEvent(partition(), key, evtNodeId, - tx == null ? null : tx.xid(), newVer, + tx, + null, + newVer, EVT_CACHE_OBJECT_REMOVED, null, false, @@ -1800,9 +1880,6 @@ else if (log.isDebugEnabled()) unlockListenerReadLock(); } - if (deferred) - cctx.onDeferredDelete(this, newVer); - if (marked) { assert !deferred; @@ -1820,16 +1897,6 @@ else if (log.isDebugEnabled()) return new GridCacheUpdateTxResult(false, logPtr); } - /** - * @param tx Transaction. - * @return {@code True} if should notify continuous query manager. - */ - private boolean notifyContinuousQueries(@Nullable IgniteInternalTx tx) { - return cctx.isLocal() || - cctx.isReplicated() || - (!isNear() && !(tx != null && tx.onePhaseCommit() && !tx.local())); - } - /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public GridTuple3> innerUpdateLocal( @@ -2009,12 +2076,12 @@ else if (ttl == CU.TTL_NOT_CHANGED) else updated = (CacheObject)writeObj; - op = updated == null ? GridCacheOperation.DELETE : GridCacheOperation.UPDATE; + op = updated == null ? DELETE : UPDATE; if (intercept) { CacheLazyEntry e; - if (op == GridCacheOperation.UPDATE) { + if (op == UPDATE) { updated0 = value(updated0, updated, keepBinary, false); e = new CacheLazyEntry(cctx, key, key0, old, old0, keepBinary); @@ -2047,7 +2114,7 @@ else if (ttl == CU.TTL_NOT_CHANGED) long ttl = CU.TTL_ETERNAL; long expireTime = CU.EXPIRE_TIME_ETERNAL; - if (op == GridCacheOperation.UPDATE) { + if (op == UPDATE) { if (expiryPlc != null) { ttl = CU.toTtl(hadVal ? 
expiryPlc.getExpiryForUpdate() : expiryPlc.getExpiryForCreation()); @@ -2065,14 +2132,14 @@ else if (ttl != CU.TTL_ZERO) } if (ttl == CU.TTL_ZERO) { - op = GridCacheOperation.DELETE; + op = DELETE; //If time expired no transformation needed. transformOp = false; } // Try write-through. - if (op == GridCacheOperation.UPDATE) { + if (op == UPDATE) { // Detach value before index update. updated = cctx.kernalContext().cacheObjects().prepareForCache(updated, cctx); @@ -2094,7 +2161,7 @@ else if (ttl != CU.TTL_ZERO) if (transformCloClsName != null && cctx.events().isRecordable(EVT_CACHE_OBJECT_READ)) { evtOld = cctx.unwrapTemporary(old); - cctx.events().addEvent(partition(), key, cctx.localNodeId(), null, + cctx.events().addEvent(partition(), key, cctx.localNodeId(),null, null, (GridCacheVersion)null, EVT_CACHE_OBJECT_READ, evtOld, evtOld != null || hadVal, evtOld, evtOld != null || hadVal, subjId, transformCloClsName, taskName, keepBinary); } @@ -2103,7 +2170,7 @@ else if (ttl != CU.TTL_ZERO) if (evtOld == null) evtOld = cctx.unwrapTemporary(old); - cctx.events().addEvent(partition(), key, cctx.localNodeId(), null, + cctx.events().addEvent(partition(), key, cctx.localNodeId(), null, null, (GridCacheVersion)null, EVT_CACHE_OBJECT_PUT, updated, updated != null, evtOld, evtOld != null || hadVal, subjId, null, taskName, keepBinary); } @@ -2124,7 +2191,7 @@ else if (ttl != CU.TTL_ZERO) CacheObject evtOld = null; if (transformCloClsName != null && cctx.events().isRecordable(EVT_CACHE_OBJECT_READ)) - cctx.events().addEvent(partition(), key, cctx.localNodeId(), null, + cctx.events().addEvent(partition(), key, cctx.localNodeId(), null, null, (GridCacheVersion)null, EVT_CACHE_OBJECT_READ, evtOld, evtOld != null || hadVal, evtOld, evtOld != null || hadVal, subjId, transformCloClsName, taskName, keepBinary); @@ -2132,7 +2199,7 @@ else if (ttl != CU.TTL_ZERO) if (evtOld == null) evtOld = cctx.unwrapTemporary(old); - cctx.events().addEvent(partition(), key, cctx.localNodeId(), null, 
(GridCacheVersion)null, + cctx.events().addEvent(partition(), key, cctx.localNodeId(), null, null, (GridCacheVersion)null, EVT_CACHE_OBJECT_REMOVED, null, false, evtOld, evtOld != null || hadVal, subjId, null, taskName, keepBinary); } @@ -2166,7 +2233,7 @@ else if (op == DELETE && transformOp) } if (intercept) { - if (op == GridCacheOperation.UPDATE) + if (op == UPDATE) cctx.config().getInterceptor().onAfterPut(new CacheLazyEntry(cctx, key, key0, updated, updated0, keepBinary, 0L)); else cctx.config().getInterceptor().onAfterRemove(new CacheLazyEntry(cctx, key, key0, old, old0, keepBinary, 0L)); @@ -2259,7 +2326,9 @@ else if (op == DELETE && transformOp) conflictVer, conflictResolve, intercept, - updateCntr); + updateCntr, + cctx.disableTriggeringCacheInterceptorOnConflict() + ); key.valueBytes(cctx.cacheObjectContext()); @@ -2369,6 +2438,7 @@ else if (updateMetrics && REMOVE_NO_VAL.equals(updateRes.outcome()) key, evtNodeId, null, + null, updateVer, EVT_CACHE_OBJECT_READ, evtOld, evtOld != null, @@ -2379,7 +2449,7 @@ else if (updateMetrics && REMOVE_NO_VAL.equals(updateRes.outcome()) keepBinary); } - if (c.op == GridCacheOperation.UPDATE) { + if (c.op == UPDATE) { updateVal = val; assert updateVal != null : c; @@ -2396,6 +2466,7 @@ else if (updateMetrics && REMOVE_NO_VAL.equals(updateRes.outcome()) key, evtNodeId, null, + null, updateVer, EVT_CACHE_OBJECT_PUT, updateVal, @@ -2409,7 +2480,7 @@ else if (updateMetrics && REMOVE_NO_VAL.equals(updateRes.outcome()) } } else { - assert c.op == GridCacheOperation.DELETE : c.op; + assert c.op == DELETE : c.op; clearReaders(); @@ -2425,6 +2496,7 @@ else if (updateMetrics && REMOVE_NO_VAL.equals(updateRes.outcome()) key, evtNodeId, null, + null, updateVer, EVT_CACHE_OBJECT_REMOVED, null, false, @@ -2457,29 +2529,23 @@ else if (updateMetrics && REMOVE_NO_VAL.equals(updateRes.outcome()) topVer); } - if (intercept) { - if (c.op == GridCacheOperation.UPDATE) { - cctx.config().getInterceptor().onAfterPut(new CacheLazyEntry( - 
cctx, - key, - null, - updateVal, - null, - keepBinary, - c.updateRes.updateCounter())); - } - else { - assert c.op == GridCacheOperation.DELETE : c.op; + if (intercept && c.wasIntercepted) { + assert c.op == UPDATE || c.op == DELETE : c.op; - cctx.config().getInterceptor().onAfterRemove(new CacheLazyEntry( - cctx, - key, - null, - oldVal, - null, - keepBinary, - c.updateRes.updateCounter())); - } + Cache.Entry entry = new CacheLazyEntry<>( + cctx, + key, + null, + c.op == UPDATE ? updateVal : oldVal, + null, + keepBinary, + c.updateRes.updateCounter() + ); + + if (c.op == UPDATE) + cctx.config().getInterceptor().onAfterPut(entry); + else + cctx.config().getInterceptor().onAfterRemove(entry); } } finally { @@ -2885,11 +2951,6 @@ protected final boolean markObsolete0(GridCacheVersion ver, boolean clear, GridC ver = newVer; flags &= ~IS_EVICT_DISABLED; - if (cctx.mvccEnabled()) - cctx.offheap().mvccRemoveAll(this); - else - removeValue(); - onInvalidate(); return obsoleteVersionExtras() != null; @@ -2935,6 +2996,13 @@ protected final void update(@Nullable CacheObject val, long expireTime, long ttl cctx.ttl().addTrackedEntry((GridNearCacheEntry)this); } + /** + * @return {@code True} if should notify continuous query manager on updates of this entry. + */ + private boolean notifyContinuousQueries() { + return !isNear(); + } + /** * Update TTL if it is changed. * @@ -3061,6 +3129,26 @@ int hash() { return hash; } + /** {@inheritDoc} */ + @Nullable @Override public CacheObject mvccPeek(boolean onheapOnly) + throws GridCacheEntryRemovedException, IgniteCheckedException { + if (onheapOnly) + return null; + + lockEntry(); + + try { + checkObsolete(); + + CacheDataRow row = cctx.offheap().mvccRead(cctx, key, MVCC_MAX_SNAPSHOT); + + return row != null ? 
row.value() : null; + } + finally { + unlockEntry(); + } + } + /** {@inheritDoc} */ @Nullable @Override public CacheObject peek( boolean heap, @@ -3129,13 +3217,13 @@ int hash() { } /** {@inheritDoc} */ - @Nullable @Override public CacheObject peek(@Nullable IgniteCacheExpiryPolicy plc) + @Nullable @Override public CacheObject peek() throws GridCacheEntryRemovedException, IgniteCheckedException { IgniteInternalTx tx = cctx.tm().localTx(); AffinityTopologyVersion topVer = tx != null ? tx.topologyVersion() : cctx.affinity().affinityTopologyVersion(); - return peek(true, false, topVer, plc); + return peek(true, false, topVer, null); } /** @@ -3200,6 +3288,34 @@ protected final boolean hasValueUnlocked() { return val != null; } + /** + * Checks whether the changes were received via DR (data center replication). + * + * @param explicitVer Explicit version (if any). + * @return {@code true} if the changes were received via DR, {@code false} otherwise. + */ + private boolean isRemoteDrUpdate(@Nullable GridCacheVersion explicitVer) { + return explicitVer != null && explicitVer.dataCenterId() != cctx.dr().dataCenterId(); + } + + /** + * Checks whether the cache interceptor should be skipped. + *

+ * By default it is expected that the interceptor methods ({@link CacheInterceptor#onBeforePut(Cache.Entry, + Object)}, {@link CacheInterceptor#onAfterPut(Cache.Entry)}, {@link CacheInterceptor#onBeforeRemove(Cache.Entry)} + and {@link CacheInterceptor#onAfterRemove(Cache.Entry)}) will be called, but not {@link + CacheInterceptor#onGet(Object, Object)}. This can break the DR-update flow in case of a non-idempotent + interceptor and force users to call onGet manually as the only workaround. Also, a user may want to skip + the interceptor to avoid redundant entry transformation for DR updates and the exchange of internal data between + data centers, which is a normal case. + * + * @param explicitVer Explicit version (if any). + * @return {@code true} if the cache interceptor should be skipped and {@code false} otherwise. + */ + private boolean skipInterceptor(@Nullable GridCacheVersion explicitVer) { + return isRemoteDrUpdate(explicitVer) && cctx.disableTriggeringCacheInterceptorOnConflict(); + } + /** {@inheritDoc} */ @Override public CacheObject rawPut(CacheObject val, long ttl) { lockEntry(); @@ -3261,7 +3377,7 @@ protected final boolean hasValueUnlocked() { GridCacheVersion currentVer = row != null ? row.version() : GridCacheMapEntry.this.ver; - boolean isStartVer = currentVer == GridCacheVersionManager.START_VER; + boolean isStartVer = cctx.shared().versions().isStartVersion(currentVer); if (cctx.group().persistenceEnabled()) { if (!isStartVer) { @@ -3378,17 +3494,32 @@ else if (deletedUnlocked()) updateCntr = nextPartitionCounter(topVer, true, null); if (walEnabled) { - cctx.shared().wal().log(new DataRecord(new DataEntry( - cctx.cacheId(), - key, - val, - val == null ? GridCacheOperation.DELETE : GridCacheOperation.CREATE, - null, - ver, - expireTime, - partition(), - updateCntr - ))); + if (cctx.mvccEnabled()) { + cctx.shared().wal().log(new MvccDataRecord(new MvccDataEntry( + cctx.cacheId(), + key, + val, + val == null ?
GridCacheOperation.DELETE : GridCacheOperation.CREATE, + null, + ver, + expireTime, + partition(), + updateCntr, + mvccVer == null ? MvccUtils.INITIAL_VERSION : mvccVer + ))); + } else { + cctx.shared().wal().log(new DataRecord(new DataEntry( + cctx.cacheId(), + key, + val, + val == null ? GridCacheOperation.DELETE : GridCacheOperation.CREATE, + null, + ver, + expireTime, + partition(), + updateCntr + ))); + } } drReplicate(drType, val, ver, topVer); @@ -3918,7 +4049,7 @@ private GridCacheVersion nextVersion() { long expireTime = expireTimeExtras(); - if (!(expireTime > 0 && expireTime < U.currentTimeMillis())) + if (!(expireTime > 0 && expireTime <= U.currentTimeMillis())) return false; CacheObject expiredVal = this.val; @@ -4242,14 +4373,14 @@ protected void logUpdate(GridCacheOperation op, CacheObject val, GridCacheVersio */ protected WALPointer logTxUpdate(IgniteInternalTx tx, CacheObject val, long expireTime, long updCntr) throws IgniteCheckedException { - assert cctx.transactional(); + assert cctx.transactional() && !cctx.transactionalSnapshot(); if (tx.local()) { // For remote tx we log all updates in batch: GridDistributedTxRemoteAdapter.commitIfLocked() GridCacheOperation op; if (val == null) - op = GridCacheOperation.DELETE; + op = DELETE; else - op = this.val == null ? GridCacheOperation.CREATE : GridCacheOperation.UPDATE; + op = this.val == null ? GridCacheOperation.CREATE : UPDATE; return cctx.shared().wal().log(new DataRecord(new DataEntry( cctx.cacheId(), @@ -4266,6 +4397,43 @@ protected WALPointer logTxUpdate(IgniteInternalTx tx, CacheObject val, long expi return null; } + /** + * @param tx Transaction. + * @param val Value. + * @param expireTime Expire time (or 0 if not applicable). + * @param updCntr Update counter. + * @param mvccVer Mvcc version. + * @throws IgniteCheckedException In case of log failure.
+ */ + protected WALPointer logMvccUpdate(IgniteInternalTx tx, CacheObject val, long expireTime, long updCntr, + MvccSnapshot mvccVer) + throws IgniteCheckedException { + assert mvccVer != null; + assert cctx.transactionalSnapshot(); + + if (tx.local()) { // For remote tx we log all updates in batch: GridDistributedTxRemoteAdapter.commitIfLocked() + GridCacheOperation op; + if (val == null) + op = DELETE; + else + op = this.val == null ? GridCacheOperation.CREATE : UPDATE; + + return cctx.shared().wal().log(new MvccDataRecord(new MvccDataEntry( + cctx.cacheId(), + key, + val, + op, + tx.nearXidVersion(), + tx.writeVersion(), + expireTime, + key.partition(), + updCntr, + mvccVer))); + } + else + return null; + } + /** * Removes value from offheap. * @@ -4328,7 +4496,7 @@ protected void removeValue() throws IgniteCheckedException { return null; try { - return e.peek(null); + return e.peek(); } catch (GridCacheEntryRemovedException ignored) { // No-op. @@ -4823,7 +4991,7 @@ protected final void checkOwnerChanged(@Nullable CacheLockCandidates prevOwners, */ private void updateMetrics(GridCacheOperation op, boolean metrics, boolean transformed, boolean hasOldVal) { if (metrics && cctx.statisticsEnabled()) { - if (op == GridCacheOperation.DELETE) { + if (op == DELETE) { cctx.cache().metrics0().onRemove(); if (transformed) @@ -4872,6 +5040,11 @@ private int extrasSize() { return extras != null ? extras.size() : 0; } + /** {@inheritDoc} */ + @Override public void txUnlock(IgniteInternalTx tx) throws GridCacheEntryRemovedException { + removeLock(tx.xidVersion()); + } + /** {@inheritDoc} */ @Override public void onUnlock() { // No-op. 
@@ -4913,8 +5086,8 @@ private void unlockListenerReadLock() { } /** {@inheritDoc} */ - @Override public void touch(AffinityTopologyVersion topVer) { - context().evicts().touch(this, topVer); + @Override public void touch() { + context().evicts().touch(this); } /** {@inheritDoc} */ @@ -4973,6 +5146,26 @@ private static class MvccRemoveLockListener implements IgniteInClosure resFut, + boolean needOldVal, boolean retVal, @Nullable CacheEntryPredicate filter) { this.tx = tx; @@ -4989,6 +5183,7 @@ private static class MvccRemoveLockListener implements IgniteInClosure resFut, + boolean needOldVal, CacheEntryPredicate filter, boolean needVal, - GridFutureAdapter resFut) { + EntryProcessor entryProc, + Object[] invokeArgs) { this.tx = tx; this.entry = entry; this.affNodeId = affNodeId; @@ -5276,6 +5486,9 @@ private static class MvccUpdateLockListener implements IgniteInClosure fut : activeFutures()) fut.onNodeLeft(discoEvt.eventNode().id()); @@ -353,21 +355,25 @@ private IgniteInternalFuture ignoreErrors(IgniteInternalFuture f) { /** * @param leftNodeId Left node ID. - * @param topVer Topology version. 
*/ - public void removeExplicitNodeLocks(UUID leftNodeId, AffinityTopologyVersion topVer) { - for (GridDistributedCacheEntry entry : locked()) { - try { - entry.removeExplicitNodeLocks(leftNodeId); - - entry.touch(topVer); - } - catch (GridCacheEntryRemovedException ignore) { - if (log.isDebugEnabled()) - log.debug("Attempted to remove node locks from removed entry in mvcc manager " + - "disco callback (will ignore): " + entry); - } - } + public void removeExplicitNodeLocks(UUID leftNodeId) { + cctx.kernalContext().closure().runLocalSafe( + new Runnable() { + @Override public void run() { + for (GridDistributedCacheEntry entry : locked()) { + try { + entry.removeExplicitNodeLocks(leftNodeId); + + entry.touch(); + } + catch (GridCacheEntryRemovedException ignore) { + if (log.isDebugEnabled()) + log.debug("Attempted to remove node locks from removed entry in cache lock manager " + + "disco callback (will ignore): " + entry); + } + } + } + }, true); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java index 731822b6731c8..1c2043a97eb53 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java @@ -22,7 +22,6 @@ import java.util.ArrayList; import java.util.Collection; import java.util.Collections; -import java.util.Comparator; import java.util.Date; import java.util.HashMap; import java.util.HashSet; @@ -31,17 +30,21 @@ import java.util.Map; import java.util.NavigableMap; import java.util.Objects; +import java.util.Set; import java.util.TreeMap; import java.util.UUID; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.ConcurrentNavigableMap; 
import java.util.concurrent.ConcurrentSkipListMap; import java.util.concurrent.LinkedBlockingDeque; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import java.util.concurrent.locks.ReadWriteLock; +import java.util.concurrent.locks.ReentrantLock; import java.util.concurrent.locks.ReentrantReadWriteLock; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteSystemProperties; @@ -58,6 +61,7 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.IgniteNeedReconnectException; +import org.apache.ignite.internal.NodeStoppingException; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.events.DiscoveryCustomEvent; import org.apache.ignite.internal.managers.discovery.DiscoCache; @@ -83,6 +87,7 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloaderAssignments; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtPartitionHistorySuppliersMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtPartitionsToReloadMap; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.PartitionsExchangeAware; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.RebalanceReassignExchangeTask; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.StopCachesOnClientReconnectExchangeTask; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager; @@ -101,6 +106,7 @@ import org.apache.ignite.internal.util.GridPartitionStateMap; import org.apache.ignite.internal.util.IgniteUtils; import 
org.apache.ignite.internal.util.future.GridCompoundFuture; +import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.tostring.GridToStringInclude; @@ -108,6 +114,7 @@ import org.apache.ignite.internal.util.typedef.CI2; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.T2; +import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; @@ -132,6 +139,8 @@ import static org.apache.ignite.internal.GridTopic.TOPIC_CACHE; import static org.apache.ignite.internal.events.DiscoveryCustomEvent.EVT_DISCOVERY_CUSTOM_EVT; import static org.apache.ignite.internal.managers.communication.GridIoPolicy.SYSTEM_POOL; +import static org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion.NONE; +import static org.apache.ignite.internal.processors.cache.distributed.dht.preloader.CachePartitionPartialCountersMap.PARTIAL_COUNTERS_MAP_SINCE; import static org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.nextDumpTimeout; import static org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.DFLT_PRELOAD_RESEND_TIMEOUT; @@ -174,19 +183,23 @@ public class GridCachePartitionExchangeManager extends GridCacheSharedMana @Nullable private volatile GridDhtPartitionsExchangeFuture lastInitializedFut; /** */ - private final AtomicReference lastFinishedFut = new AtomicReference<>(); + private final AtomicReference lastFinishedFut = new AtomicReference<>(); /** */ private final ConcurrentMap readyFuts = new ConcurrentSkipListMap<>(); + /** */ + private final ConcurrentNavigableMap lastAffTopVers = + new 
ConcurrentSkipListMap<>(); + /** * Latest started rebalance topology version but possibly not finished yet. Value {@code NONE} * means that previous rebalance is undefined and the new one should be initiated. * * Should not be used to determine latest rebalanced topology. */ - private volatile AffinityTopologyVersion rebTopVer = AffinityTopologyVersion.NONE; + private volatile AffinityTopologyVersion rebTopVer = NONE; /** */ private GridFutureAdapter reconnectExchangeFut; @@ -230,6 +243,12 @@ public class GridCachePartitionExchangeManager extends GridCacheSharedMana /** Distributed latch manager. */ private ExchangeLatchManager latchMgr; + /** List of exchange aware components. */ + private final List exchangeAwareComps = new ArrayList<>(); + + /** */ + private final ReentrantLock dumpLongRunningOpsLock = new ReentrantLock(); + /** Discovery listener. */ private final DiscoveryEventListener discoLsnr = new DiscoveryEventListener() { @Override public void onEvent(DiscoveryEvent evt, DiscoCache cache) { @@ -429,9 +448,11 @@ else if (m instanceof GridDhtPartitionDemandLegacyMessage) { return; } + else + U.error(log, "Unsupported message type: " + m.getClass().getName()); } - U.error(log, "Unsupported message type: " + m.getClass().getName()); + U.warn(log, "Cache group with id=" + m.groupId() + " is stopped or absent"); } finally { leaveBusy(); @@ -753,7 +774,7 @@ public static Object rebalanceTopic(int idx) { stopErr = cctx.kernalContext().clientDisconnected() ? new IgniteClientDisconnectedCheckedException(cctx.kernalContext().cluster().clientReconnectFuture(), "Client node disconnected: " + cctx.igniteInstanceName()) : - new IgniteInterruptedCheckedException("Node is stopping: " + cctx.igniteInstanceName()); + new NodeStoppingException("Node is stopping: " + cctx.igniteInstanceName()); // Stop exchange worker U.cancel(exchWorker); @@ -892,13 +913,13 @@ public GridDhtPartitionsExchangeFuture lastTopologyFuture() { /** * @param fut Finished future. 
*/ - public void lastFinishedFuture(GridDhtTopologyFuture fut) { + public void lastFinishedFuture(GridDhtPartitionsExchangeFuture fut) { assert fut != null && fut.isDone() : fut; while (true) { - GridDhtTopologyFuture cur = lastFinishedFut.get(); + GridDhtPartitionsExchangeFuture cur = lastFinishedFut.get(); - if (cur == null || fut.topologyVersion().compareTo(cur.topologyVersion()) > 0) { + if (fut.topologyVersion() != null && (cur == null || fut.topologyVersion().compareTo(cur.topologyVersion()) > 0)) { if (lastFinishedFut.compareAndSet(cur, fut)) break; } @@ -914,7 +935,8 @@ public void lastFinishedFuture(GridDhtTopologyFuture fut) { @Nullable public IgniteInternalFuture affinityReadyFuture(AffinityTopologyVersion ver) { GridDhtPartitionsExchangeFuture lastInitializedFut0 = lastInitializedFut; - if (lastInitializedFut0 != null && lastInitializedFut0.initialVersion().compareTo(ver) == 0) { + if (lastInitializedFut0 != null && lastInitializedFut0.initialVersion().compareTo(ver) == 0 + && lastInitializedFut0.changedAffinity()) { if (log.isTraceEnabled()) log.trace("Return lastInitializedFut for topology ready future " + "[ver=" + ver + ", fut=" + lastInitializedFut0 + ']'); @@ -928,7 +950,7 @@ public void lastFinishedFuture(GridDhtTopologyFuture fut) { if (log.isTraceEnabled()) log.trace("Return finished future for topology ready future [ver=" + ver + ", topVer=" + topVer + ']'); - return null; + return new GridFinishedFuture<>(topVer); } GridFutureAdapter fut = F.addIfAbsent(readyFuts, ver, @@ -986,6 +1008,48 @@ public boolean hasPendingExchange() { return exchWorker.hasPendingExchange(); } + /** + * + * @param topVer Topology version. + * @return Last topology version before the provided one when affinity was modified. 
+ */ + public AffinityTopologyVersion lastAffinityChangedTopologyVersion(AffinityTopologyVersion topVer) { + if (topVer.topologyVersion() <= 0) + return topVer; + + AffinityTopologyVersion lastAffTopVer = lastAffTopVers.get(topVer); + + return lastAffTopVer != null ? lastAffTopVer : topVer; + } + + /** + * + * @param topVer Topology version. + * @param lastAffTopVer Last topology version before the provided one when affinity was modified. + * @return {@code True} if data was modified. + */ + public boolean lastAffinityChangedTopologyVersion(AffinityTopologyVersion topVer, AffinityTopologyVersion lastAffTopVer) { + assert lastAffTopVer.compareTo(topVer) <= 0; + + if (lastAffTopVer.topologyVersion() <= 0 || lastAffTopVer.equals(topVer)) + return false; + + while (true) { + AffinityTopologyVersion old = lastAffTopVers.putIfAbsent(topVer, lastAffTopVer); + + if (old == null) + return true; + + if (lastAffTopVer.compareTo(old) < 0) { + if (lastAffTopVers.replace(topVer, old, lastAffTopVer)) + return true; + } + else + return false; + } + + } + /** * @param evt Discovery event. * @return Affinity topology version. @@ -1037,11 +1101,38 @@ public void scheduleResendPartitions() { } /** - * Partition refresh callback. + * Registers component that will be notified on every partition map exchange. + * + * @param comp Component to be registered. + */ + public void registerExchangeAwareComponent(PartitionsExchangeAware comp) { + exchangeAwareComps.add(new PartitionsExchangeAwareWrapper(comp)); + } + + /** + * Removes exchange aware component from list of listeners. + * + * @param comp Component to be registered. + */ + public void unregisterExchangeAwareComponent(PartitionsExchangeAware comp) { + exchangeAwareComps.remove(new PartitionsExchangeAwareWrapper(comp)); + } + + /** + * @return List of registered exchange listeners. 
+ */ + public List exchangeAwareComponents() { + return U.sealList(exchangeAwareComps); + } + + /** + * Partition refresh callback for selected cache groups. * For coordinator causes {@link GridDhtPartitionsFullMessage FullMessages} send, * for non coordinator - {@link GridDhtPartitionsSingleMessage SingleMessages} send + * + * @param grps Cache groups for partitions refresh. */ - public void refreshPartitions() { + public void refreshPartitions(@NotNull Collection grps) { // TODO https://issues.apache.org/jira/browse/IGNITE-6857 if (cctx.snapshot().snapshotOperationInProgress()) { scheduleResendPartitions(); @@ -1049,7 +1140,14 @@ public void refreshPartitions() { return; } - ClusterNode oldest = cctx.discovery().oldestAliveServerNode(AffinityTopologyVersion.NONE); + if (grps.isEmpty()) { + if (log.isDebugEnabled()) + log.debug("Skip partitions refresh, there are no cache groups for partition refresh."); + + return; + } + + ClusterNode oldest = cctx.discovery().oldestAliveServerNode(NONE); if (oldest == null) { if (log.isDebugEnabled()) @@ -1058,8 +1156,10 @@ public void refreshPartitions() { return; } - if (log.isDebugEnabled()) - log.debug("Refreshing partitions [oldest=" + oldest.id() + ", loc=" + cctx.localNodeId() + ']'); + if (log.isDebugEnabled()) { + log.debug("Refreshing partitions [oldest=" + oldest.id() + ", loc=" + cctx.localNodeId() + + ", cacheGroups= " + grps + ']'); + } // If this is the oldest node. if (oldest.id().equals(cctx.localNodeId())) { @@ -1077,46 +1177,66 @@ public void refreshPartitions() { // No need to send to nodes which did not finish their first exchange. AffinityTopologyVersion rmtTopVer = - lastFut != null ? (lastFut.isDone() ? lastFut.topologyVersion() : lastFut.initialVersion()) : AffinityTopologyVersion.NONE; + lastFut != null ? + (lastFut.isDone() ? 
lastFut.topologyVersion() : lastFut.initialVersion()) + : AffinityTopologyVersion.NONE; Collection rmts = cctx.discovery().remoteAliveNodesWithCaches(rmtTopVer); if (log.isDebugEnabled()) log.debug("Refreshing partitions from oldest node: " + cctx.localNodeId()); - sendAllPartitions(rmts, rmtTopVer); + sendAllPartitions(rmts, rmtTopVer, grps); } else { if (log.isDebugEnabled()) log.debug("Refreshing local partitions from non-oldest node: " + cctx.localNodeId()); - sendLocalPartitions(oldest, null); + sendLocalPartitions(oldest, null, grps); } } + /** + * Partition refresh callback. + * For coordinator causes {@link GridDhtPartitionsFullMessage FullMessages} send, + * for non coordinator - {@link GridDhtPartitionsSingleMessage SingleMessages} send + */ + public void refreshPartitions() { refreshPartitions(cctx.cache().cacheGroups()); } + /** * @param nodes Nodes. * @param msgTopVer Topology version. Will be added to full message. + * @param grps Selected cache groups. */ private void sendAllPartitions( Collection nodes, - AffinityTopologyVersion msgTopVer + AffinityTopologyVersion msgTopVer, + Collection grps ) { long time = System.currentTimeMillis(); - GridDhtPartitionsFullMessage m = createPartitionsFullMessage(true, false, null, null, null, null); + GridDhtPartitionsFullMessage m = createPartitionsFullMessage(true, false, null, null, null, null, grps); m.topologyVersion(msgTopVer); - if (log.isInfoEnabled()) - log.info("Full Message creating for " + msgTopVer + " performed in " + (System.currentTimeMillis() - time) + " ms."); + if (log.isInfoEnabled()) { + long latency = System.currentTimeMillis() - time; + + if (latency > 50 || log.isDebugEnabled()) { + log.info("Finished full message creation [msgTopVer=" + msgTopVer + ", groups=" + grps + + ", latency=" + latency + "ms]"); + } + } if (log.isTraceEnabled()) - log.trace("Sending all partitions [nodeIds=" + U.nodeIds(nodes) + ", msg=" + m + ']'); + log.trace("Sending all partitions [nodeIds=" + U.nodeIds(nodes) 
+ ", cacheGroups=" + grps + + ", msg=" + m + ']'); time = System.currentTimeMillis(); + Collection failedNodes = U.newHashSet(nodes.size()); + for (ClusterNode node : nodes) { try { assert !node.equals(cctx.localNode()); @@ -1124,22 +1244,34 @@ private void sendAllPartitions( cctx.io().sendNoRetry(node, m, SYSTEM_POOL); } catch (ClusterTopologyCheckedException ignore) { - if (log.isDebugEnabled()) - log.debug("Failed to send partition update to node because it left grid (will ignore) [node=" + - node.id() + ", msg=" + m + ']'); + if (log.isDebugEnabled()) { + log.debug("Failed to send partition update to node because it left grid (will ignore) " + + "[node=" + node.id() + ", msg=" + m + ']'); + } } catch (IgniteCheckedException e) { - U.warn(log, "Failed to send partitions full message [node=" + node + ", err=" + e + ']'); + failedNodes.add(node); + + U.warn(log, "Failed to send partitions full message [node=" + node + ", err=" + e + ']', e); } } - if (log.isInfoEnabled()) - log.info("Sending Full Message for " + msgTopVer + " performed in " + (System.currentTimeMillis() - time) + " ms."); + if (log.isInfoEnabled()) { + long latency = System.currentTimeMillis() - time; + + if (latency > 50 || log.isDebugEnabled()) { + log.info("Finished sending full message [msgTopVer=" + msgTopVer + ", groups=" + grps + + (failedNodes.isEmpty() ? "" : (", skipped=" + U.nodeIds(failedNodes))) + + ", latency=" + latency + "ms]"); + } + } } /** + * Creates partitions full message for all cache groups. + * * @param compress {@code True} if possible to compress message (properly work only if prepareMarshall/ - * finishUnmarshall methods are called). + * finishUnmarshall methods are called). * @param newCntrMap {@code True} if possible to use {@link CachePartitionFullCountersMap}. * @param exchId Non-null exchange ID if message is created for exchange. * @param lastVer Last version. 
@@ -1155,18 +1287,45 @@ public GridDhtPartitionsFullMessage createPartitionsFullMessage( @Nullable IgniteDhtPartitionHistorySuppliersMap partHistSuppliers, @Nullable IgniteDhtPartitionsToReloadMap partsToReload ) { - final GridDhtPartitionsFullMessage m = new GridDhtPartitionsFullMessage(exchId, - lastVer, - exchId != null ? exchId.topologyVersion() : AffinityTopologyVersion.NONE, - partHistSuppliers, - partsToReload - ); + Collection grps = cctx.cache().cacheGroups(); + + return createPartitionsFullMessage(compress, newCntrMap, exchId, lastVer, partHistSuppliers, partsToReload, grps); + } + + /** + * Creates partitions full message for selected cache groups. + * + * @param compress {@code True} if possible to compress message (properly work only if prepareMarshall/ + * finishUnmarshall methods are called). + * @param newCntrMap {@code True} if possible to use {@link CachePartitionFullCountersMap}. + * @param exchId Non-null exchange ID if message is created for exchange. + * @param lastVer Last version. + * @param partHistSuppliers Partition history suppliers map. + * @param partsToReload Partitions to reload map. + * @param grps Selected cache groups. + * @return Message. + */ + public GridDhtPartitionsFullMessage createPartitionsFullMessage( + boolean compress, + boolean newCntrMap, + @Nullable final GridDhtPartitionExchangeId exchId, + @Nullable GridCacheVersion lastVer, + @Nullable IgniteDhtPartitionHistorySuppliersMap partHistSuppliers, + @Nullable IgniteDhtPartitionsToReloadMap partsToReload, + Collection grps + ) { + AffinityTopologyVersion ver = exchId != null ? 
exchId.topologyVersion() : AffinityTopologyVersion.NONE; - m.compress(compress); + final GridDhtPartitionsFullMessage m = + new GridDhtPartitionsFullMessage(exchId, lastVer, ver, partHistSuppliers, partsToReload); + + m.compressed(compress); final Map> dupData = new HashMap<>(); - for (CacheGroupContext grp : cctx.cache().cacheGroups()) { + Map> partsSizes = new HashMap<>(); + + for (CacheGroupContext grp : grps) { if (!grp.isLocal()) { if (exchId != null) { AffinityTopologyVersion startTopVer = grp.localStartVersion(); @@ -1179,16 +1338,13 @@ public GridDhtPartitionsFullMessage createPartitionsFullMessage( GridDhtPartitionFullMap locMap = grp.topology().partitionMap(true); - if (locMap != null) { - addFullPartitionsMap(m, - dupData, - compress, - grp.groupId(), - locMap, - affCache.similarAffinityKey()); - } + if (locMap != null) + addFullPartitionsMap(m, dupData, compress, grp.groupId(), locMap, affCache.similarAffinityKey()); - m.addPartitionSizes(grp.groupId(), grp.topology().globalPartSizes()); + Map partSizesMap = grp.topology().globalPartSizes(); + + if (!partSizesMap.isEmpty()) + partsSizes.put(grp.groupId(), partSizesMap); if (exchId != null) { CachePartitionFullCountersMap cntrsMap = grp.topology().fullUpdateCounters(); @@ -1207,14 +1363,8 @@ public GridDhtPartitionsFullMessage createPartitionsFullMessage( for (GridClientPartitionTopology top : cctx.exchange().clientTopologies()) { GridDhtPartitionFullMap map = top.partitionMap(true); - if (map != null) { - addFullPartitionsMap(m, - dupData, - compress, - top.groupId(), - map, - top.similarAffinityKey()); - } + if (map != null) + addFullPartitionsMap(m, dupData, compress, top.groupId(), map, top.similarAffinityKey()); if (exchId != null) { CachePartitionFullCountersMap cntrsMap = top.fullUpdateCounters(); @@ -1224,10 +1374,16 @@ public GridDhtPartitionsFullMessage createPartitionsFullMessage( else m.addPartitionUpdateCounters(top.groupId(), CachePartitionFullCountersMap.toCountersMap(cntrsMap)); - 
m.addPartitionSizes(top.groupId(), top.globalPartSizes()); + Map partSizesMap = top.globalPartSizes(); + + if (!partSizesMap.isEmpty()) + partsSizes.put(top.groupId(), partSizesMap); } } + if (!partsSizes.isEmpty()) + m.partitionSizes(cctx, partsSizes); + return m; } @@ -1274,13 +1430,20 @@ private void addFullPartitionsMap(GridDhtPartitionsFullMessage m, /** * @param node Destination cluster node. * @param id Exchange ID. + * @param grps Cache groups for send partitions. */ - private void sendLocalPartitions(ClusterNode node, @Nullable GridDhtPartitionExchangeId id) { - GridDhtPartitionsSingleMessage m = createPartitionsSingleMessage(id, - cctx.kernalContext().clientNode(), - false, - false, - null); + private void sendLocalPartitions( + ClusterNode node, + @Nullable GridDhtPartitionExchangeId id, + @NotNull Collection grps + ) { + GridDhtPartitionsSingleMessage m = + createPartitionsSingleMessage(id, + cctx.kernalContext().clientNode(), + false, + node.version().compareToIgnoreTimestamp(PARTIAL_COUNTERS_MAP_SINCE) >= 0, + null, + grps); if (log.isTraceEnabled()) log.trace("Sending local partitions [nodeId=" + node.id() + ", msg=" + m + ']'); @@ -1299,6 +1462,8 @@ private void sendLocalPartitions(ClusterNode node, @Nullable GridDhtPartitionExc } /** + * Creates partitions single message for all cache groups. + * * @param exchangeId Exchange ID. * @param clientOnlyExchange Client exchange flag. * @param sndCounters {@code True} if need send partition update counters. @@ -1311,6 +1476,29 @@ public GridDhtPartitionsSingleMessage createPartitionsSingleMessage( boolean sndCounters, boolean newCntrMap, ExchangeActions exchActions + ) { + Collection grps = cctx.cache().cacheGroups(); + + return createPartitionsSingleMessage(exchangeId, clientOnlyExchange, sndCounters, newCntrMap, exchActions, grps); + } + + /** + * Creates partitions single message for selected cache groups. + * + * @param exchangeId Exchange ID. + * @param clientOnlyExchange Client exchange flag. 
+ * @param sndCounters {@code True} if need send partition update counters. + * @param newCntrMap {@code True} if possible to use {@link CachePartitionPartialCountersMap}. + * @param grps Selected cache groups. + * @return Message. + */ + public GridDhtPartitionsSingleMessage createPartitionsSingleMessage( + @Nullable GridDhtPartitionExchangeId exchangeId, + boolean clientOnlyExchange, + boolean sndCounters, + boolean newCntrMap, + ExchangeActions exchActions, + Collection grps ) { GridDhtPartitionsSingleMessage m = new GridDhtPartitionsSingleMessage(exchangeId, clientOnlyExchange, @@ -1319,7 +1507,7 @@ public GridDhtPartitionsSingleMessage createPartitionsSingleMessage( Map> dupData = new HashMap<>(); - for (CacheGroupContext grp : cctx.cache().cacheGroups()) { + for (CacheGroupContext grp : grps) { if (!grp.isLocal() && (exchActions == null || !exchActions.cacheGroupStopping(grp.groupId()))) { GridDhtPartitionMap locMap = grp.topology().localPartitionMap(); @@ -1468,30 +1656,10 @@ public void onExchangeDone(AffinityTopologyVersion topVer, AffinityTopologyVersi if (log.isDebugEnabled()) log.debug("Exchange done [topVer=" + topVer + ", err=" + err + ']'); - if (err == null) { + if (err == null) exchFuts.readyTopVer(topVer); - for (Map.Entry entry : readyFuts.entrySet()) { - if (entry.getKey().compareTo(topVer) <= 0) { - if (log.isDebugEnabled()) - log.debug("Completing created topology ready future " + - "[ver=" + topVer + ", fut=" + entry.getValue() + ']'); - - entry.getValue().onDone(topVer); - } - } - } - else { - for (Map.Entry entry : readyFuts.entrySet()) { - if (entry.getKey().compareTo(initTopVer) <= 0) { - if (log.isDebugEnabled()) - log.debug("Completing created topology ready future with error " + - "[ver=" + entry.getKey() + ", fut=" + entry.getValue() + ']'); - - entry.getValue().onDone(err); - } - } - } + completeAffReadyFuts(err == null ? 
topVer : initTopVer, err); ExchangeFutureSet exchFuts0 = exchFuts; @@ -1510,6 +1678,28 @@ public void onExchangeDone(AffinityTopologyVersion topVer, AffinityTopologyVersi } } + /** */ + private void completeAffReadyFuts(AffinityTopologyVersion topVer, @Nullable Throwable err) { + for (Map.Entry entry : readyFuts.entrySet()) { + if (entry.getKey().compareTo(topVer) <= 0) { + if (err == null) { + if (log.isDebugEnabled()) + log.debug("Completing created topology ready future " + + "[ver=" + topVer + ", fut=" + entry.getValue() + ']'); + + entry.getValue().onDone(topVer); + } + else { + if (log.isDebugEnabled()) + log.debug("Completing created topology ready future with error " + + "[ver=" + entry.getKey() + ", fut=" + entry.getValue() + ']'); + + entry.getValue().onDone(err); + } + } + } + } + /** * @param fut Future. * @return {@code True} if added. @@ -1539,6 +1729,8 @@ public void processFullPartitionUpdate(ClusterNode node, GridDhtPartitionsFullMe boolean updated = false; + Map> partsSizes = msg.partitionSizes(cctx); + for (Map.Entry entry : msg.partitions().entrySet()) { Integer grpId = entry.getKey(); @@ -1556,7 +1748,7 @@ else if (!grp.isLocal()) entry.getValue(), null, msg.partsToReload(cctx.localNodeId(), grpId), - msg.partitionSizes(grpId), + partsSizes.getOrDefault(grpId, Collections.emptyMap()), msg.topologyVersion()); } } @@ -1746,8 +1938,12 @@ public void dumpDebugInfo(@Nullable GridDhtPartitionsExchangeFuture exchFut) thr dumpPendingObjects(exchTopVer, diagCtx); - for (CacheGroupContext grp : cctx.cache().cacheGroups()) - grp.preloader().dumpDebugInfo(); + for (CacheGroupContext grp : cctx.cache().cacheGroups()) { + GridCachePreloader preloader = grp.preloader(); + + if (preloader != null) + preloader.dumpDebugInfo(); + } cctx.affinity().dumpDebugInfo(); @@ -1848,28 +2044,37 @@ public void dumpLongRunningOperations(long timeout) { if (lastFut != null && !lastFut.isDone()) return; - if (U.currentTimeMillis() < nextLongRunningOpsDumpTime) + if 
(!dumpLongRunningOpsLock.tryLock()) return; - if (dumpLongRunningOperations0(timeout)) { - nextLongRunningOpsDumpTime = U.currentTimeMillis() + nextDumpTimeout(longRunningOpsDumpStep++, timeout); + try { + if (U.currentTimeMillis() < nextLongRunningOpsDumpTime) + return; - if (IgniteSystemProperties.getBoolean(IGNITE_THREAD_DUMP_ON_EXCHANGE_TIMEOUT, false)) { - U.warn(diagnosticLog, "Found long running cache operations, dump threads."); + if (dumpLongRunningOperations0(timeout)) { + nextLongRunningOpsDumpTime = U.currentTimeMillis() + nextDumpTimeout(longRunningOpsDumpStep++, timeout); - U.dumpThreads(diagnosticLog); - } + if (IgniteSystemProperties.getBoolean(IGNITE_THREAD_DUMP_ON_EXCHANGE_TIMEOUT, false)) { + U.warn(diagnosticLog, "Found long running cache operations, dump threads."); + + U.dumpThreads(diagnosticLog); + } - if (IgniteSystemProperties.getBoolean(IGNITE_IO_DUMP_ON_TIMEOUT, false)) { - U.warn(diagnosticLog, "Found long running cache operations, dump IO statistics."); + if (IgniteSystemProperties.getBoolean(IGNITE_IO_DUMP_ON_TIMEOUT, false)) { + U.warn(diagnosticLog, "Found long running cache operations, dump IO statistics."); - // Dump IO manager statistics. - if (IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_IO_DUMP_ON_TIMEOUT, false)) - cctx.gridIO().dumpStats();} + // Dump IO manager statistics. 
+ if (IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_IO_DUMP_ON_TIMEOUT, false)) + cctx.gridIO().dumpStats(); + } + } + else { + nextLongRunningOpsDumpTime = 0; + longRunningOpsDumpStep = 0; + } } - else { - nextLongRunningOpsDumpTime = 0; - longRunningOpsDumpStep = 0; + finally { + dumpLongRunningOpsLock.unlock(); } } catch (Exception e) { @@ -2144,16 +2349,15 @@ public boolean mergeExchangesOnCoordinator(GridDhtPartitionsExchangeFuture curFu break; } - ClusterNode node = evt.eventNode(); - - if ((evt.type() == EVT_NODE_FAILED || evt.type() == EVT_NODE_LEFT) && - node.equals(cctx.coordinators().currentCoordinator())) { + if (!fut.changedAffinity()) { if (log.isInfoEnabled()) - log.info("Stop merge, need exchange for mvcc coordinator failure: " + node); + log.info("Stop merge, no-affinity exchange found: " + evt); break; } + ClusterNode node = evt.eventNode(); + if (!curFut.context().supportsMergeExchanges(node)) { if (log.isInfoEnabled()) log.info("Stop merge, node does not support merge: " + node); @@ -2262,14 +2466,20 @@ private void waitForTestVersion(AffinityTopologyVersion exchMergeTestWaitVer, Gr this.exchMergeTestWaitVer = null; } + /** + * Invokes {@link GridWorker#updateHeartbeat()} for exchange worker. + */ + public void exchangerUpdateHeartbeat() { + exchWorker.updateHeartbeat(); + } + /** * Invokes {@link GridWorker#blockingSectionBegin()} for exchange worker. * Should be called from exchange worker thread. */ public void exchangerBlockingSectionBegin() { - assert exchWorker != null && Thread.currentThread() == exchWorker.runner(); - - exchWorker.blockingSectionBegin(); + if (currentThreadIsExchanger()) + exchWorker.blockingSectionBegin(); } /** @@ -2277,9 +2487,13 @@ public void exchangerBlockingSectionBegin() { * Should be called from exchange worker thread. 
*/ public void exchangerBlockingSectionEnd() { - assert exchWorker != null && Thread.currentThread() == exchWorker.runner(); + if (currentThreadIsExchanger()) + exchWorker.blockingSectionEnd(); + } - exchWorker.blockingSectionEnd(); + /** */ + public boolean currentThreadIsExchanger() { + return exchWorker != null && Thread.currentThread() == exchWorker.runner(); } /** @@ -2306,6 +2520,38 @@ private boolean exchangeInProgress() { return false; } + /** */ + public boolean affinityChanged(AffinityTopologyVersion from, AffinityTopologyVersion to) { + if (lastAffinityChangedTopologyVersion(to).compareTo(from) >= 0) + return false; + + Collection history = exchFuts.values(); + + boolean fromFound = false; + + for (GridDhtPartitionsExchangeFuture fut : history) { + if (!fromFound) { + int cmp = fut.initialVersion().compareTo(from); + + if (cmp > 0) // We don't have history, so return true for safety + return true; + else if (cmp == 0) + fromFound = true; + else if (fut.isDone() && fut.topologyVersion().compareTo(from) >= 0) + return true; // Temporary solution for merge exchange case + } + else { + if (fut.changedAffinity()) + return true; + + if (fut.initialVersion().compareTo(to) >= 0) + return false; + } + } + + return true; + } + /** * Exchange future thread. All exchanges happen only by one thread and next * exchange will not start until previous one completes. 
@@ -2427,7 +2673,7 @@ private void removeMergedFutures(AffinityTopologyVersion resVer, GridDhtPartitio GridDhtPartitionsExchangeFuture fut0 = (GridDhtPartitionsExchangeFuture)task; if (resVer.compareTo(fut0.initialVersion()) >= 0) { - fut0.finishMerged(); + fut0.finishMerged(resVer); futQ.remove(fut0); } @@ -2538,7 +2784,8 @@ void dumpExchangeDebugInfo() { err = e; } catch (Throwable e) { - err = e; + if (!(stop && X.hasCause(e, IgniteInterruptedCheckedException.class))) + err = e; } finally { if (err == null && !stop && !reconnectNeeded) @@ -2548,6 +2795,9 @@ void dumpExchangeDebugInfo() { cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err)); else if (err != null) cctx.kernalContext().failure().process(new FailureContext(SYSTEM_WORKER_TERMINATION, err)); + else + // In case of reconnectNeeded == true, prevent general-case termination handling. + cancel(); } } @@ -2656,6 +2906,28 @@ else if (task instanceof ForceRebalanceExchangeTask) { crd = newCrd = !srvNodes.isEmpty() && srvNodes.get(0).isLocal(); } + if (!exchFut.changedAffinity()) { + GridDhtPartitionsExchangeFuture lastFut = lastFinishedFut.get(); + + if (lastFut != null) { + if (!lastFut.changedAffinity()) { + // If lastFut corresponds to merged exchange, it is essential to use + // topologyVersion() instead of initialVersion() - nodes joined in this PME + // will have DiscoCache only for the last version. + AffinityTopologyVersion lastAffVer = cctx.exchange() + .lastAffinityChangedTopologyVersion(lastFut.topologyVersion()); + + cctx.exchange().lastAffinityChangedTopologyVersion(exchFut.initialVersion(), + lastAffVer); + } + else + cctx.exchange().lastAffinityChangedTopologyVersion(exchFut.initialVersion(), + lastFut.topologyVersion()); + } + } + + exchFut.timeBag().finishGlobalStage("Waiting in exchange queue"); + exchFut.init(newCrd); int dumpCnt = 0; @@ -2681,7 +2953,7 @@ else if (task instanceof ForceRebalanceExchangeTask) { ? 
Math.min(curTimeout, dumpTimeout) : dumpTimeout; - blockingSectionEnd(); + blockingSectionBegin(); try { resVer = exchFut.get(exchTimeout, TimeUnit.MILLISECONDS); @@ -2701,8 +2973,8 @@ else if (task instanceof ForceRebalanceExchangeTask) { "topVer=" + exchFut.initialVersion() + ", node=" + cctx.localNodeId() + "]. " + (curTimeout <= 0 && !txRolledBack ? "Consider changing " + - "TransactionConfiguration.txTimeoutOnPartitionMapSynchronization" + - " to non default value to avoid this message. " : "") + + "TransactionConfiguration.txTimeoutOnPartitionMapSynchronization" + + " to non default value to avoid this message. " : "") + "Dumping pending objects that might be the cause: "); try { @@ -2749,7 +3021,7 @@ else if (task instanceof ForceRebalanceExchangeTask) { continue; if (grp.preloader().rebalanceRequired(rebTopVer, exchFut)) - rebTopVer = AffinityTopologyVersion.NONE; + rebTopVer = NONE; changed |= grp.topology().afterExchange(exchFut); } @@ -2760,9 +3032,9 @@ else if (task instanceof ForceRebalanceExchangeTask) { // Schedule rebalance if force rebalance or force reassign occurs. if (exchFut == null) - rebTopVer = AffinityTopologyVersion.NONE; + rebTopVer = NONE; - if (!cctx.kernalContext().clientNode() && rebTopVer.equals(AffinityTopologyVersion.NONE)) { + if (!cctx.kernalContext().clientNode() && rebTopVer.equals(NONE)) { assignsMap = new HashMap<>(); IgniteCacheSnapshotManager snp = cctx.snapshot(); @@ -2793,7 +3065,7 @@ else if (task instanceof ForceRebalanceExchangeTask) { busy = false; } - if (assignsMap != null && rebTopVer.equals(AffinityTopologyVersion.NONE)) { + if (assignsMap != null && rebTopVer.equals(NONE)) { int size = assignsMap.size(); NavigableMap> orderMap = new TreeMap<>(); @@ -2976,7 +3248,7 @@ private static class ExchangeFutureSet extends GridListSet readyTopVer = - new AtomicReference<>(AffinityTopologyVersion.NONE); + new AtomicReference<>(NONE); /** * Creates ordered, not strict list set. 
@@ -2984,20 +3256,15 @@ private static class ExchangeFutureSet extends GridListSet() { - @Override public int compare( - GridDhtPartitionsExchangeFuture f1, - GridDhtPartitionsExchangeFuture f2 - ) { - AffinityTopologyVersion t1 = f1.exchangeId().topologyVersion(); - AffinityTopologyVersion t2 = f2.exchangeId().topologyVersion(); - - assert t1.topologyVersion() > 0; - assert t2.topologyVersion() > 0; - - // Reverse order. - return t2.compareTo(t1); - } + super((f1, f2) -> { + AffinityTopologyVersion t1 = f1.exchangeId().topologyVersion(); + AffinityTopologyVersion t2 = f2.exchangeId().topologyVersion(); + + assert t1.topologyVersion() > 0; + assert t2.topologyVersion() > 0; + + // Reverse order. + return t2.compareTo(t1); }, /*not strict*/false); this.histSize = histSize; @@ -3133,4 +3400,124 @@ private AffinityReadyFuture(AffinityTopologyVersion topVer) { return S.toString(AffinityReadyFuture.class, this, super.toString()); } } + + /** + * This wrapper class prevents user exceptions from propagating into the exchange thread. + */ + private class PartitionsExchangeAwareWrapper implements PartitionsExchangeAware { + /** */ + private final PartitionsExchangeAware delegate; + + /** + * Creates a new wrapper. + * @param delegate Delegate. 
+ */ + public PartitionsExchangeAwareWrapper(PartitionsExchangeAware delegate) { + this.delegate = delegate; + } + + /** {@inheritDoc} */ + @Override public void onInitBeforeTopologyLock(GridDhtPartitionsExchangeFuture fut) { + try { + delegate.onInitBeforeTopologyLock(fut); + } + catch (Exception e) { + U.warn(log, "Failed to execute exchange callback.", e); + } + } + + /** {@inheritDoc} */ + @Override public void onInitAfterTopologyLock(GridDhtPartitionsExchangeFuture fut) { + try { + delegate.onInitAfterTopologyLock(fut); + } + catch (Exception e) { + U.warn(log, "Failed to execute exchange callback.", e); + } + } + + /** {@inheritDoc} */ + @Override public void onDoneBeforeTopologyUnlock(GridDhtPartitionsExchangeFuture fut) { + try { + delegate.onDoneBeforeTopologyUnlock(fut); + } + catch (Exception e) { + U.warn(log, "Failed to execute exchange callback.", e); + } + } + + /** {@inheritDoc} */ + @Override public void onDoneAfterTopologyUnlock(GridDhtPartitionsExchangeFuture fut) { + try { + delegate.onDoneAfterTopologyUnlock(fut); + } + catch (Exception e) { + U.warn(log, "Failed to execute exchange callback.", e); + } + } + + /** {@inheritDoc} */ + @Override public int hashCode() { + return delegate.hashCode(); + } + + /** {@inheritDoc} */ + @SuppressWarnings("EqualsWhichDoesntCheckParameterClass") + @Override public boolean equals(Object obj) { + return delegate.equals(obj); + } + } + + /** + * Class to limit action count for unique objects. + *

+ NO guarantees of thread safety are provided. + */ + private static class ActionLimiter { + /** */ + private final int limit; + + /** + * Internal storage of objects and an action counter for each object. + */ + private final Map actionsCnt = new HashMap<>(); + + /** + * Set of active objects. + */ + private final Set activeObjects = new HashSet<>(); + + /** + * @param limit Limit. + */ + private ActionLimiter(int limit) { + this.limit = limit; + } + + /** + * Shows whether the action is allowed for the given object. Adds this object to the internal set of active + * objects that are still in use. + * + * @param obj Object. + * @return {@code True} if the action count for the object is within the limit. + */ + boolean allowAction(T obj) { + activeObjects.add(obj); + + int cnt = actionsCnt.computeIfAbsent(obj, o -> new AtomicInteger(0)) + .incrementAndGet(); + + return (cnt <= limit); + } + + /** + * Removes old objects from the limiter's internal storage. All objects that are contained in the internal + * storage but not in the set of active objects are considered 'old'. This method should be called + * after processing a collection of objects to purge the limiter's internal storage. + */ + void trim() { + actionsCnt.keySet().removeIf(key -> !activeObjects.contains(key)); + + activeObjects.clear(); + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloader.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloader.java index d629e94db7e84..6ac26c958fc29 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloader.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloader.java @@ -192,4 +192,14 @@ public GridDhtFuture request(GridCacheContext cctx, * Dumps debug information. */ public void dumpDebugInfo(); + + /** + * Pauses preloader. + */ + public void pause(); + + /** + * Resumes preloader. 
+ */ + public void resume(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloaderAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloaderAdapter.java index c5e4a817d0418..f16305cd85456 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloaderAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloaderAdapter.java @@ -181,4 +181,14 @@ public GridCachePreloaderAdapter(CacheGroupContext grp) { @Override public void dumpDebugInfo() { // No-op. } + + /** {@inheritDoc} */ + @Override public void pause() { + // No-op + } + + /** {@inheritDoc} */ + @Override public void resume() { + // No-op + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProcessor.java index 2b05d96c8c9cf..564311491ca28 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProcessor.java @@ -25,6 +25,7 @@ import java.util.HashMap; import java.util.HashSet; import java.util.IdentityHashMap; +import java.util.Iterator; import java.util.LinkedList; import java.util.List; import java.util.ListIterator; @@ -34,7 +35,11 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; import java.util.stream.Collectors; +import javax.cache.configuration.FactoryBuilder; +import javax.cache.expiry.EternalExpiryPolicy; +import javax.cache.expiry.ExpiryPolicy; import javax.management.MBeanServer; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; @@ -50,6 +55,7 @@ import 
org.apache.ignite.cache.store.CacheStoreSessionListener; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataPageEvictionMode; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.DeploymentMode; import org.apache.ignite.configuration.FileSystemConfiguration; @@ -57,17 +63,18 @@ import org.apache.ignite.configuration.MemoryConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; -import org.apache.ignite.configuration.WALMode; import org.apache.ignite.events.EventType; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException; import org.apache.ignite.internal.IgniteComponentType; import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.IgniteNodeAttributes; import org.apache.ignite.internal.IgniteTransactionsEx; import org.apache.ignite.internal.binary.BinaryContext; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.binary.GridBinaryMarshaller; +import org.apache.ignite.internal.managers.communication.GridIoPolicy; import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; import org.apache.ignite.internal.pagemem.store.IgnitePageStoreManager; import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; @@ -79,32 +86,35 @@ import org.apache.ignite.internal.processors.cache.datastructures.CacheDataStructuresManager; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCache; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; -import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.PartitionsEvictManager; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache; import org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.StopCachesOnClientReconnectExchangeTask; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.PartitionsEvictManager; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearAtomicCache; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTransactionalCache; import org.apache.ignite.internal.processors.cache.dr.GridCacheDrManager; import org.apache.ignite.internal.processors.cache.jta.CacheJtaManagerAdapter; import org.apache.ignite.internal.processors.cache.local.GridLocalCache; import org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache; +import org.apache.ignite.internal.processors.cache.mvcc.MvccCachingManager; import org.apache.ignite.internal.processors.cache.persistence.DataRegion; +import org.apache.ignite.internal.processors.cache.persistence.DatabaseLifecycleListener; import org.apache.ignite.internal.processors.cache.persistence.DbCheckpointListener; import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import 
org.apache.ignite.internal.processors.cache.persistence.freelist.FreeList; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage; import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetastorageLifecycleListener; import org.apache.ignite.internal.processors.cache.persistence.metastorage.ReadOnlyMetastorage; -import org.apache.ignite.internal.processors.cache.persistence.metastorage.ReadWriteMetastorage; +import org.apache.ignite.internal.processors.cache.persistence.partstate.GroupPartitionId; +import org.apache.ignite.internal.processors.cache.persistence.partstate.PartitionRecoverState; import org.apache.ignite.internal.processors.cache.persistence.snapshot.IgniteCacheSnapshotManager; import org.apache.ignite.internal.processors.cache.persistence.snapshot.SnapshotDiscoveryMessage; import org.apache.ignite.internal.processors.cache.persistence.tree.reuse.ReuseList; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; -import org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager; import org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager; import org.apache.ignite.internal.processors.cache.query.GridCacheLocalQueryManager; import org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager; @@ -130,14 +140,16 @@ import org.apache.ignite.internal.processors.timeout.GridTimeoutObject; import org.apache.ignite.internal.suggestions.GridPerformanceSuggestions; import org.apache.ignite.internal.util.F0; +import org.apache.ignite.internal.util.InitializationProtector; import org.apache.ignite.internal.util.future.GridCompoundFuture; import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.lang.GridPlainClosure; import org.apache.ignite.internal.util.lang.IgniteOutClosureX; +import 
org.apache.ignite.internal.util.lang.IgniteThrowableConsumer; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.CIX1; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.A; import org.apache.ignite.internal.util.typedef.internal.CU; @@ -146,6 +158,7 @@ import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.lang.IgniteFuture; +import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.lifecycle.LifecycleAware; @@ -160,6 +173,7 @@ import org.apache.ignite.spi.discovery.DiscoveryDataBag; import org.apache.ignite.spi.discovery.DiscoveryDataBag.GridDiscoveryData; import org.apache.ignite.spi.discovery.DiscoveryDataBag.JoiningNodeDiscoveryData; +import org.apache.ignite.spi.encryption.EncryptionSpi; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; @@ -184,12 +198,13 @@ import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_TX_CONFIG; import static org.apache.ignite.internal.processors.cache.GridCacheUtils.isNearEnabled; import static org.apache.ignite.internal.processors.cache.GridCacheUtils.isPersistentCache; +import static org.apache.ignite.internal.util.IgniteUtils.doInParallel; /** * Cache processor. 
*/ @SuppressWarnings({"unchecked", "TypeMayBeWeakened", "deprecation"}) -public class GridCacheProcessor extends GridProcessorAdapter implements MetastorageLifecycleListener { +public class GridCacheProcessor extends GridProcessorAdapter { /** Template of message of conflicts during configuration merge*/ private static final String MERGE_OF_CONFIG_CONFLICTS_MESSAGE = "Conflicts during configuration merge for cache '%s' : \n%s"; @@ -202,8 +217,9 @@ public class GridCacheProcessor extends GridProcessorAdapter implements Metastor private final boolean startClientCaches = IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_START_CACHES_ON_JOIN, false); - private final boolean walFsyncWithDedicatedWorker = - IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER, false); + /** Enables starting caches in parallel. */ + private final boolean IGNITE_ALLOW_START_CACHES_IN_PARALLEL = + IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_ALLOW_START_CACHES_IN_PARALLEL, true); /** Shared cache context. */ private GridCacheSharedContext sharedCtx; @@ -257,6 +273,15 @@ public class GridCacheProcessor extends GridProcessorAdapter implements Metastor /** MBean group for cache group metrics */ private final String CACHE_GRP_METRICS_MBEAN_GRP = "Cache groups"; + /** Protector of initialization of a specific value. */ + private final InitializationProtector initializationProtector = new InitializationProtector(); + + /** Cache recovery lifecycle state and actions. */ + private final CacheRecoveryLifecycle recovery = new CacheRecoveryLifecycle(); + + /** Temporary storage for metastorage migration. */ + private MetaStorage.TmpStorage tmpStorage; + /** * @param ctx Kernal context. 
*/ @@ -487,6 +512,49 @@ private void validate(IgniteConfiguration c, CacheConfiguration.MAX_PARTITIONS_COUNT + " partitions [actual=" + cc.getAffinity().partitions() + ", affFunction=" + cc.getAffinity() + ", cacheName=" + cc.getName() + ']'); + if (cc.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT) { + assertParameter(cc.getCacheMode() != LOCAL, + "LOCAL cache mode cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode"); + + assertParameter(cc.getNearConfiguration() == null, + "near cache cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode"); + + assertParameter(!cc.isReadThrough(), + "readThrough cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode"); + + assertParameter(!cc.isWriteThrough(), + "writeThrough cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode"); + + assertParameter(!cc.isWriteBehindEnabled(), + "writeBehindEnabled cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode"); + + ExpiryPolicy expPlc = null; + + if (cc.getExpiryPolicyFactory() instanceof FactoryBuilder.SingletonFactory) + expPlc = (ExpiryPolicy)cc.getExpiryPolicyFactory().create(); + + if (!(expPlc instanceof EternalExpiryPolicy)) { + assertParameter(cc.getExpiryPolicyFactory() == null, + "expiry policy cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode"); + } + + assertParameter(cc.getInterceptor() == null, + "interceptor cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode"); + + // Disable in-memory evictions for mvcc cache. TODO IGNITE-10738 + String memPlcName = cc.getDataRegionName(); + DataRegion dataRegion = sharedCtx.database().dataRegion(memPlcName); + + if (dataRegion != null && !dataRegion.config().isPersistenceEnabled() && + dataRegion.config().getPageEvictionMode() != DataPageEvictionMode.DISABLED) { + throw new IgniteCheckedException("Data pages evictions cannot be used with TRANSACTIONAL_SNAPSHOT " + + "cache atomicity mode for in-memory regions. 
Please either disable evictions or enable " + + "persistence for data regions with TRANSACTIONAL_SNAPSHOT caches. [cacheName=" + cc.getName() + + ", dataRegionName=" + memPlcName + ", pageEvictionMode=" + + dataRegion.config().getPageEvictionMode() + ']'); + } + } + if (cc.isWriteBehindEnabled()) { if (cfgStore == null) throw new IgniteCheckedException("Cannot enable write-behind (writer or store is not provided) " + @@ -565,6 +633,22 @@ else if (cc.getRebalanceMode() == SYNC) { } } } + + if (cc.isEncryptionEnabled() && !ctx.clientNode()) { + if (!CU.isPersistentCache(cc, c.getDataStorageConfiguration())) { + throw new IgniteCheckedException("Using encryption is not allowed" + + " for non-persistent cache [cacheName=" + cc.getName() + ", groupName=" + cc.getGroupName() + + ", cacheType=" + cacheType + "]"); + } + + EncryptionSpi encSpi = c.getEncryptionSpi(); + + if (encSpi == null) { + throw new IgniteCheckedException("EncryptionSpi should be configured to use encrypted cache " + + "[cacheName=" + cc.getName() + ", groupName=" + cc.getGroupName() + + ", cacheType=" + cacheType + "]"); + } + } } /** @@ -677,42 +761,34 @@ private void cleanup(CacheConfiguration cfg, @Nullable Object rsrc, boolean near } } - /** {@inheritDoc} */ - @Override public void onReadyForRead(ReadOnlyMetastorage metastorage) throws IgniteCheckedException { - startCachesOnStart(); - } - - /** {@inheritDoc} */ - @Override public void onReadyForReadWrite(ReadWriteMetastorage metastorage) throws IgniteCheckedException { - } - /** - * - * @throws IgniteCheckedException If failed. 
*/ - private void startCachesOnStart() throws IgniteCheckedException { - if (!ctx.isDaemon()) { - Map caches = new HashMap<>(); + private void restoreCacheConfigurations() throws IgniteCheckedException { + if (ctx.isDaemon()) + return; - Map templates = new HashMap<>(); + Map caches = new HashMap<>(); - addCacheOnJoinFromConfig(caches, templates); + Map templates = new HashMap<>(); - CacheJoinNodeDiscoveryData discoData = new CacheJoinNodeDiscoveryData( + addCacheOnJoinFromConfig(caches, templates); + + CacheJoinNodeDiscoveryData discoData = new CacheJoinNodeDiscoveryData( IgniteUuid.randomUuid(), caches, templates, startAllCachesOnClientStart() - ); + ); - cachesInfo.onStart(discoData); - } + cachesInfo.onStart(discoData); } /** {@inheritDoc} */ @SuppressWarnings({"unchecked"}) @Override public void start() throws IgniteCheckedException { - ctx.internalSubscriptionProcessor().registerMetastorageListener(this); + ctx.internalSubscriptionProcessor().registerMetastorageListener(recovery); + ctx.internalSubscriptionProcessor().registerDatabaseListener(recovery); cachesInfo = new ClusterCachesInfo(ctx); @@ -738,7 +814,7 @@ private void startCachesOnStart() throws IgniteCheckedException { mgr.start(sharedCtx); if (!ctx.isDaemon() && (!CU.isPersistenceEnabled(ctx.config())) || ctx.config().isClientMode()) - startCachesOnStart(); + restoreCacheConfigurations(); if (log.isDebugEnabled()) log.debug("Started cache processor."); @@ -870,6 +946,15 @@ private void validateCacheConfigurationOnRestore(CacheConfiguration cfg, CacheCo ", configuredAtomicityMode=" + cfg.getAtomicityMode() + ", storedAtomicityMode=" + cfgFromStore.getAtomicityMode() + "]"); } + + boolean staticCfgVal = cfg.isEncryptionEnabled(); + + boolean storedVal = cfgFromStore.isEncryptionEnabled(); + + if (storedVal != staticCfgVal) { + throw new IgniteCheckedException("Encrypted flag value differs. 
Static config value is '" + staticCfgVal + + "' and value stored on the disk is '" + storedVal + "'"); + } } /** @@ -1035,8 +1120,8 @@ public void stopCaches(boolean cancel) { * Blocks all available gateways */ public void blockGateways() { - for (IgniteCacheProxy proxy : jCacheProxies.values()) - proxy.context().gate().onStopped(); + for (IgniteCacheProxyImpl proxy : jCacheProxies.values()) + proxy.context0().gate().onStopped(); } /** {@inheritDoc} */ @@ -1154,6 +1239,9 @@ private void stopCacheOnReconnect(GridCacheContext cctx, List sharedCtx.removeCacheContext(cctx); caches.remove(cctx.name()); + + completeProxyInitialize(cctx.name()); + jCacheProxies.remove(cctx.name()); stoppedCaches.add(cctx.cache()); @@ -1218,70 +1306,6 @@ private void stopCacheOnReconnect(GridCacheContext cctx, List return null; } - /** - * @param cache Cache to start. - * @param schema Cache schema. - * @throws IgniteCheckedException If failed to start cache. - */ - @SuppressWarnings({"TypeMayBeWeakened", "unchecked"}) - private void startCache(GridCacheAdapter cache, QuerySchema schema) throws IgniteCheckedException { - GridCacheContext cacheCtx = cache.context(); - - CacheConfiguration cfg = cacheCtx.config(); - - // Intentionally compare Boolean references using '!=' below to check if the flag has been explicitly set. - if (cfg.isStoreKeepBinary() && cfg.isStoreKeepBinary() != CacheConfiguration.DFLT_STORE_KEEP_BINARY - && !(ctx.config().getMarshaller() instanceof BinaryMarshaller)) - U.warn(log, "CacheConfiguration.isStoreKeepBinary() configuration property will be ignored because " + - "BinaryMarshaller is not used"); - - // Start managers. - for (GridCacheManager mgr : F.view(cacheCtx.managers(), F.notContains(dhtExcludes(cacheCtx)))) - mgr.start(cacheCtx); - - cacheCtx.initConflictResolver(); - - if (cfg.getCacheMode() != LOCAL && GridCacheUtils.isNearEnabled(cfg)) { - GridCacheContext dhtCtx = cacheCtx.near().dht().context(); - - // Start DHT managers. 
-            for (GridCacheManager mgr : dhtManagers(dhtCtx))
-                mgr.start(dhtCtx);
-
-            dhtCtx.initConflictResolver();
-
-            // Start DHT cache.
-            dhtCtx.cache().start();
-
-            if (log.isDebugEnabled())
-                log.debug("Started DHT cache: " + dhtCtx.cache().name());
-        }
-
-        ctx.continuous().onCacheStart(cacheCtx);
-
-        cacheCtx.cache().start();
-
-        ctx.query().onCacheStart(cacheCtx, schema);
-
-        cacheCtx.onStarted();
-
-        String memPlcName = cfg.getDataRegionName();
-
-        if (memPlcName == null && ctx.config().getDataStorageConfiguration() != null)
-            memPlcName = ctx.config().getDataStorageConfiguration().getDefaultDataRegionConfiguration().getName();
-
-        if (log.isInfoEnabled()) {
-            log.info("Started cache [name=" + cfg.getName() +
-                ", id=" + cacheCtx.cacheId() +
-                (cfg.getGroupName() != null ? ", group=" + cfg.getGroupName() : "") +
-                ", memoryPolicyName=" + memPlcName +
-                ", mode=" + cfg.getCacheMode() +
-                ", atomicity=" + cfg.getAtomicityMode() +
-                ", backups=" + cfg.getBackups() +
-                ", mvcc=" + cacheCtx.mvccEnabled() +']');
-        }
-    }
-
     /**
      * @param cache Cache to stop.
      * @param cancel Cancel flag.
@@ -1352,9 +1376,10 @@ private void stopCache(GridCacheAdapter cache, boolean cancel, boolean des
         if (destroy && (pageStore = sharedCtx.pageStore()) != null) {
             try {
                 pageStore.removeCacheData(new StoredCacheData(ctx.config()));
-            } catch (IgniteCheckedException e) {
+            }
+            catch (IgniteCheckedException e) {
                 U.error(log, "Failed to delete cache configuration data while destroying cache" +
-                    "[cache=" + ctx.name() + "]", e);
+                    "[cache=" + ctx.name() + "]", e);
             }
         }
@@ -1448,7 +1473,7 @@ private void onKernalStop(GridCacheAdapter cache, boolean cancel) {
         cache.onKernalStop();
 
-        if (ctx.events().isRecordable(EventType.EVT_CACHE_STOPPED))
+        if (!ctx.isRecoveryMode() && ctx.events().isRecordable(EventType.EVT_CACHE_STOPPED))
             ctx.events().addEvent(EventType.EVT_CACHE_STOPPED);
     }
 
@@ -1466,7 +1491,8 @@ private void onKernalStop(GridCacheAdapter cache, boolean cancel) {
      * @return Cache context.
      * @throws IgniteCheckedException If failed to create cache.
      */
-    private GridCacheContext createCache(CacheConfiguration cfg,
+    private GridCacheContext createCacheContext(
+        CacheConfiguration cfg,
         CacheGroupContext grp,
         @Nullable CachePluginManager pluginMgr,
         DynamicCacheDescriptor desc,
@@ -1474,8 +1500,9 @@ private GridCacheContext createCache(CacheConfiguration cfg,
         CacheObjectContext cacheObjCtx,
         boolean affNode,
         boolean updatesAllowed,
-        boolean disabledAfterStart)
-        throws IgniteCheckedException {
+        boolean disabledAfterStart,
+        boolean recoveryMode
+    ) throws IgniteCheckedException {
         assert cfg != null;
 
         if (cfg.getCacheStoreFactory() instanceof GridCacheLoaderWriterStoreFactory) {
@@ -1496,7 +1523,7 @@ private GridCacheContext createCache(CacheConfiguration cfg,
         pluginMgr.validate();
 
-        if (cfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT)
+        if (!recoveryMode && cfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT)
             sharedCtx.coordinators().ensureStarted();
 
         sharedCtx.jta().registerCache(cfg);
@@ -1532,7 +1559,13 @@ private GridCacheContext createCache(CacheConfiguration cfg,
         GridCacheDrManager drMgr = pluginMgr.createComponent(GridCacheDrManager.class);
         CacheStoreManager storeMgr = pluginMgr.createComponent(CacheStoreManager.class);
 
-        storeMgr.initialize(cfgStore, sesHolders);
+        if (cfgStore == null)
+            storeMgr.initialize(cfgStore, sesHolders);
+        else
+            initializationProtector.protect(
+                cfgStore,
+                () -> storeMgr.initialize(cfgStore, sesHolders)
+            );
 
         GridCacheContext cacheCtx = new GridCacheContext(
             ctx,
@@ -1541,8 +1574,11 @@ private GridCacheContext createCache(CacheConfiguration cfg,
             grp,
             desc.cacheType(),
             locStartTopVer,
+            desc.deploymentId(),
             affNode,
             updatesAllowed,
+            desc.cacheConfiguration().isStatisticsEnabled(),
+            recoveryMode,
 
             /*
              * Managers in starting order!
              * ===========================
@@ -1560,8 +1596,6 @@ private GridCacheContext createCache(CacheConfiguration cfg,
             affMgr
         );
 
-        cacheCtx.statisticsEnabled(desc.cacheConfiguration().isStatisticsEnabled());
-
         cacheCtx.cacheObjectContext(cacheObjCtx);
 
         GridCacheAdapter cache = null;
@@ -1676,8 +1710,11 @@ private GridCacheContext createCache(CacheConfiguration cfg,
             grp,
             desc.cacheType(),
             locStartTopVer,
+            desc.deploymentId(),
             affNode,
             true,
+            desc.cacheConfiguration().isStatisticsEnabled(),
+            recoveryMode,
 
             /*
              * Managers in starting order!
              * ===========================
@@ -1695,8 +1732,6 @@ private GridCacheContext createCache(CacheConfiguration cfg,
             affMgr
         );
 
-        cacheCtx.statisticsEnabled(desc.cacheConfiguration().isStatisticsEnabled());
-
         cacheCtx.cacheObjectContext(cacheObjCtx);
 
         GridDhtCacheAdapter dht = null;
@@ -1754,6 +1789,67 @@ private GridCacheContext createCache(CacheConfiguration cfg,
         return ret;
     }
 
+    /**
+     *
+     * @param reqs Cache requests to start.
+     * @param fut Completable future.
+     */
+    public void registrateProxyRestart(Map reqs, GridFutureAdapter fut) {
+        for (IgniteCacheProxyImpl proxy : jCacheProxies.values()) {
+            if (reqs.containsKey(proxy.getName()) &&
+                proxy.isRestarting() &&
+                !reqs.get(proxy.getName()).disabledAfterStart()
+            )
+                proxy.registrateFutureRestart(fut);
+        }
+    }
+
+    /**
+     *
+     * @param reqs Cache requests to start.
+     * @param initVer Init exchange version.
+     * @param doneVer Finish exchange version.
+     */
+    public void completeProxyRestart(
+        Map reqs,
+        AffinityTopologyVersion initVer,
+        AffinityTopologyVersion doneVer
+    ) {
+        if (initVer == null || doneVer == null)
+            return;
+
+        for (GridCacheAdapter cache : caches.values()) {
+            GridCacheContext cacheCtx = cache.context();
+
+            if (reqs.containsKey(cache.name()) ||
+                (cacheCtx.startTopologyVersion().compareTo(initVer) <= 0 ||
+                    cacheCtx.startTopologyVersion().compareTo(doneVer) <= 0))
+                completeProxyInitialize(cache.name());
+
+            if (
+                cacheCtx.startTopologyVersion().compareTo(initVer) >= 0 &&
+                    cacheCtx.startTopologyVersion().compareTo(doneVer) <= 0
+            ) {
+                IgniteCacheProxyImpl proxy = jCacheProxies.get(cache.name());
+
+                boolean canRestart = true;
+
+                DynamicCacheChangeRequest req = reqs.get(cache.name());
+
+                if (req != null) {
+                    canRestart = !req.disabledAfterStart();
+                }
+
+                if (proxy != null && proxy.isRestarting() && canRestart) {
+                    proxy.onRestarted(cacheCtx, cache);
+
+                    if (cacheCtx.dataStructuresCache())
+                        ctx.dataStructures().restart(cache.name(), proxy.internalProxy());
+                }
+            }
+        }
+    }
+
     /**
      * Gets a collection of currently started caches.
      *
@@ -1769,8 +1865,8 @@ public Collection cacheNames() {
     }
 
     /**
-     * Gets public cache that can be used for query execution.
-     * If cache isn't created on current node it will be started.
+     * Gets public cache that can be used for query execution. If cache isn't created on current node it will be
+     * started.
      *
      * @param start Start cache.
      * @param inclLoc Include local caches.
@@ -1888,16 +1984,13 @@ public IgniteInternalFuture startCachesOnLocalJoin(
         IgniteInternalFuture res = sharedCtx.affinity().initCachesOnLocalJoin(
             locJoinCtx.cacheGroupDescriptors(), locJoinCtx.cacheDescriptors());
 
-        for (T2 t : locJoinCtx.caches()) {
-            DynamicCacheDescriptor desc = t.get1();
+        List startCacheInfos = locJoinCtx.caches().stream()
+            .map(cacheInfo -> new StartCacheInfo(cacheInfo.get1(), cacheInfo.get2(), exchTopVer, false))
+            .collect(Collectors.toList());
 
-            prepareCacheStart(
-                desc.cacheConfiguration(),
-                desc,
-                t.get2(),
-                exchTopVer,
-                false);
-        }
+        prepareStartCaches(startCacheInfos);
+
+        context().exchange().exchangerUpdateHeartbeat();
 
         if (log.isInfoEnabled())
             log.info("Starting caches on local join performed in " + (System.currentTimeMillis() - time) + " ms.");
@@ -1909,7 +2002,7 @@ public IgniteInternalFuture startCachesOnLocalJoin(
      * @param node Joined node.
      * @return {@code True} if there are new caches received from joined node.
      */
-    boolean hasCachesReceivedFromJoin(ClusterNode node) {
+    public boolean hasCachesReceivedFromJoin(ClusterNode node) {
         return cachesInfo.hasCachesReceivedFromJoin(node.id());
     }
 
@@ -1923,131 +2016,597 @@ boolean hasCachesReceivedFromJoin(ClusterNode node) {
      */
     public Collection startReceivedCaches(UUID nodeId, AffinityTopologyVersion exchTopVer)
        throws IgniteCheckedException {
-        List started = cachesInfo.cachesReceivedFromJoin(nodeId);
+        List receivedCaches = cachesInfo.cachesReceivedFromJoin(nodeId);
 
-        for (DynamicCacheDescriptor desc : started) {
-            IgnitePredicate filter = desc.groupDescriptor().config().getNodeFilter();
+        List startCacheInfos = receivedCaches.stream()
+            .filter(desc -> isLocalAffinity(desc.groupDescriptor().config()))
+            .map(desc -> new StartCacheInfo(desc, null, exchTopVer, false))
+            .collect(Collectors.toList());
 
-            if (CU.affinityNode(ctx.discovery().localNode(), filter)) {
-                prepareCacheStart(
-                    desc.cacheConfiguration(),
-                    desc,
-                    null,
-                    exchTopVer,
-                    false);
+        prepareStartCaches(startCacheInfos);
+
+        return receivedCaches;
+    }
+
+    /**
+     * @param cacheConfiguration Checked configuration.
+     * @return {@code true} if local node is affinity node for cache.
+     */
+    private boolean isLocalAffinity(CacheConfiguration cacheConfiguration) {
+        return CU.affinityNode(ctx.discovery().localNode(), cacheConfiguration.getNodeFilter());
+    }
+
+    /**
+     * Start all input caches in parallel.
+     *
+     * @param startCacheInfos All caches information for start.
+     */
+    void prepareStartCaches(Collection startCacheInfos) throws IgniteCheckedException {
+        prepareStartCaches(startCacheInfos, (data, operation) -> {
+            operation.accept(data); // PROXY
+        });
+    }
+
+    /**
+     * Tries to start all input caches in parallel, skipping caches that failed.
+     *
+     * @param startCacheInfos Caches info for start.
+     * @return Caches which failed to start.
+     * @throws IgniteCheckedException if failed.
+     */
+    Map prepareStartCachesIfPossible(Collection startCacheInfos) throws IgniteCheckedException {
+        HashMap failedCaches = new HashMap<>();
+
+        prepareStartCaches(startCacheInfos, (data, operation) -> {
+            try {
+                operation.accept(data);
+            }
+            catch (IgniteInterruptedCheckedException e) {
+                throw e;
+            }
+            catch (IgniteCheckedException e) {
+                log.warning("Cache cannot be started: cache=" + data.getStartedConfiguration().getName());
+
+                failedCaches.put(data, e);
+            }
+        });
+
+        return failedCaches;
+    }
+
+    /**
+     * Start all input caches in parallel.
+     *
+     * @param startCacheInfos All caches information for start.
+     * @param cacheStartFailHandler Fail handler for one cache start.
+     */
+    private void prepareStartCaches(
+        Collection startCacheInfos,
+        StartCacheFailHandler cacheStartFailHandler
+    ) throws IgniteCheckedException {
+        if (!IGNITE_ALLOW_START_CACHES_IN_PARALLEL || startCacheInfos.size() <= 1) {
+            for (StartCacheInfo startCacheInfo : startCacheInfos) {
+                cacheStartFailHandler.handle(
+                    startCacheInfo,
+                    cacheInfo -> {
+                        prepareCacheStart(
+                            cacheInfo.getCacheDescriptor().cacheConfiguration(),
+                            cacheInfo.getCacheDescriptor(),
+                            cacheInfo.getReqNearCfg(),
+                            cacheInfo.getExchangeTopVer(),
+                            cacheInfo.isDisabledAfterStart()
+                        );
+
+                        return null;
+                    }
+                );
+
+                context().exchange().exchangerUpdateHeartbeat();
+            }
+        }
+        else {
+            Map cacheContexts = new ConcurrentHashMap<>();
+
+            // Reserve at least 2 threads for system operations.
+            int parallelismLvl = U.availableThreadCount(ctx, GridIoPolicy.SYSTEM_POOL, 2);
+
+            doInParallel(
+                parallelismLvl,
+                sharedCtx.kernalContext().getSystemExecutorService(),
+                startCacheInfos,
+                startCacheInfo -> {
+                    cacheStartFailHandler.handle(
+                        startCacheInfo,
+                        cacheInfo -> {
+                            GridCacheContext cacheCtx = prepareCacheContext(
+                                cacheInfo.getCacheDescriptor().cacheConfiguration(),
+                                cacheInfo.getCacheDescriptor(),
+                                cacheInfo.getReqNearCfg(),
+                                cacheInfo.getExchangeTopVer(),
+                                cacheInfo.isDisabledAfterStart()
+                            );
+                            cacheContexts.put(cacheInfo, cacheCtx);
+
+                            context().exchange().exchangerUpdateHeartbeat();
+
+                            return null;
+                        }
+                    );
+
+                    return null;
+                }
+            );
+
+            /*
+             * This hack is required because we can't start the SQL schema in parallel for the following reasons:
+             * * checking an index for duplicates (and other checks) requires the same order on every node;
+             * * onCacheStart and createSchema take a lot of mutexes.
+             *
+             * TODO IGNITE-9729
+             */
+            Set successfullyPreparedCaches = cacheContexts.keySet();
+
+            List cacheInfosInOriginalOrder = startCacheInfos.stream()
+                .filter(successfullyPreparedCaches::contains)
+                .collect(Collectors.toList());
+
+            for (StartCacheInfo startCacheInfo : cacheInfosInOriginalOrder) {
+                cacheStartFailHandler.handle(
+                    startCacheInfo,
+                    cacheInfo -> {
+                        GridCacheContext cctx = cacheContexts.get(cacheInfo);
+
+                        if (!cctx.isRecoveryMode()) {
+                            ctx.query().onCacheStart(
+                                cctx,
+                                cacheInfo.getCacheDescriptor().schema() != null
+                                    ? cacheInfo.getCacheDescriptor().schema()
+                                    : new QuerySchema()
+                            );
+                        }
+
+                        context().exchange().exchangerUpdateHeartbeat();
+
+                        return null;
+                    }
+                );
+            }
+
+            doInParallel(
+                parallelismLvl,
+                sharedCtx.kernalContext().getSystemExecutorService(),
+                cacheContexts.entrySet(),
+                cacheCtxEntry -> {
+                    cacheStartFailHandler.handle(
+                        cacheCtxEntry.getKey(),
+                        cacheInfo -> {
+                            GridCacheContext cacheContext = cacheCtxEntry.getValue();
+
+                            if (cacheContext.isRecoveryMode())
+                                finishRecovery(cacheInfo.getExchangeTopVer(), cacheContext);
+                            else
+                                onCacheStarted(cacheCtxEntry.getValue());
+
+                            context().exchange().exchangerUpdateHeartbeat();
+
+                            return null;
+                        }
+                    );
+
+                    return null;
+                }
+            );
+        }
+    }
+
+    /**
+     * @param startCfg Cache configuration to use.
+     * @param desc Cache descriptor.
+     * @param reqNearCfg Near configuration if specified for client cache start request.
+     * @param exchTopVer Current exchange version.
+     * @param disabledAfterStart If true, then we will discard restarting state from proxies. If false then we will
+     *      change state of proxies to restarting
+     * @throws IgniteCheckedException If failed.
+     */
+    public void prepareCacheStart(
+        CacheConfiguration startCfg,
+        DynamicCacheDescriptor desc,
+        @Nullable NearCacheConfiguration reqNearCfg,
+        AffinityTopologyVersion exchTopVer,
+        boolean disabledAfterStart
+    ) throws IgniteCheckedException {
+        GridCacheContext cacheCtx = prepareCacheContext(startCfg, desc, reqNearCfg, exchTopVer, disabledAfterStart);
 
-        return started;
+        if (cacheCtx.isRecoveryMode())
+            finishRecovery(exchTopVer, cacheCtx);
+        else {
+            ctx.query().onCacheStart(cacheCtx, desc.schema() != null ? desc.schema() : new QuerySchema());
+
+            onCacheStarted(cacheCtx);
+        }
     }
 
     /**
+     * Prepares cache context to start.
+     *
      * @param startCfg Cache configuration to use.
      * @param desc Cache descriptor.
      * @param reqNearCfg Near configuration if specified for client cache start request.
      * @param exchTopVer Current exchange version.
      * @param disabledAfterStart If true, then we will discard restarting state from proxies. If false then we will change
      *      state of proxies to restarting
-     * @throws IgniteCheckedException If failed.
+     * @return Created {@link GridCacheContext}.
+     * @throws IgniteCheckedException if failed.
      */
-    void prepareCacheStart(
+    private GridCacheContext prepareCacheContext(
         CacheConfiguration startCfg,
         DynamicCacheDescriptor desc,
         @Nullable NearCacheConfiguration reqNearCfg,
         AffinityTopologyVersion exchTopVer,
         boolean disabledAfterStart
     ) throws IgniteCheckedException {
+        if (caches.containsKey(startCfg.getName())) {
+            GridCacheAdapter existingCache = caches.get(startCfg.getName());
+
+            GridCacheContext cctx = existingCache.context();
+
+            assert cctx.isRecoveryMode();
+
+            QuerySchema localSchema = recovery.querySchemas.get(desc.cacheId());
+
+            QuerySchemaPatch localSchemaPatch = localSchema.makePatch(desc.schema().entities());
+
+            // Cache schema is changed after restart, the workaround is to stop the existing cache and start a new one.
+            if (!localSchemaPatch.isEmpty() || localSchemaPatch.hasConflicts())
+                stopCacheSafely(cctx);
+            else
+                return existingCache.context();
+        }
+
         assert !caches.containsKey(startCfg.getName()) : startCfg.getName();
 
         CacheConfiguration ccfg = new CacheConfiguration(startCfg);
 
         CacheObjectContext cacheObjCtx = ctx.cacheObjects().contextForCache(ccfg);
 
-        boolean affNode;
+        boolean affNode = checkForAffinityNode(desc, reqNearCfg, ccfg);
 
-        if (ccfg.getCacheMode() == LOCAL) {
-            affNode = true;
+        CacheGroupContext grp = getOrCreateCacheGroupContext(desc, exchTopVer, cacheObjCtx, affNode, startCfg.getGroupName(), false);
 
-            ccfg.setNearConfiguration(null);
-        }
-        else if (CU.affinityNode(ctx.discovery().localNode(), desc.groupDescriptor().config().getNodeFilter()))
-            affNode = true;
-        else {
-            affNode = false;
+        GridCacheContext cacheCtx = createCacheContext(ccfg,
+            grp,
+            null,
+            desc,
+            exchTopVer,
+            cacheObjCtx,
+            affNode,
+            true,
+            disabledAfterStart,
+            false
+        );
+
+        initCacheContext(cacheCtx, ccfg);
+
+        return cacheCtx;
+    }
+
+    /**
+     * Stops cache under checkpoint lock.
+     * @param cctx Cache context.
+     */
+    private void stopCacheSafely(GridCacheContext cctx) {
+        sharedCtx.database().checkpointReadLock();
 
-            ccfg.setNearConfiguration(reqNearCfg);
+        try {
+            prepareCacheStop(cctx.name(), false);
+
+            if (!cctx.group().hasCaches())
+                stopCacheGroup(cctx.group().groupId());
+        }
+        finally {
+            sharedCtx.database().checkpointReadUnlock();
         }
 
-        if (sharedCtx.pageStore() != null && affNode)
-            sharedCtx.pageStore().initializeForCache(desc.groupDescriptor(), desc.toStoredData());
+    }
+
+    /**
+     * Finishes recovery for given cache context.
+     *
+     * @param cacheStartVer Cache join to topology version.
+     * @param cacheContext Cache context.
+     * @throws IgniteCheckedException If failed.
+     */
+    private void finishRecovery(
+        AffinityTopologyVersion cacheStartVer, GridCacheContext cacheContext
+    ) throws IgniteCheckedException {
+        CacheGroupContext groupContext = cacheContext.group();
 
-        String grpName = startCfg.getGroupName();
+        // Take cluster-wide cache descriptor and try to update local cache and cache group parameters.
+        DynamicCacheDescriptor updatedDescriptor = cacheDescriptor(cacheContext.cacheId());
 
-        CacheGroupContext grp = null;
+        groupContext.finishRecovery(
+            cacheStartVer,
+            updatedDescriptor.receivedFrom(),
+            isLocalAffinity(updatedDescriptor.cacheConfiguration())
+        );
 
-        if (grpName != null) {
-            for (CacheGroupContext grp0 : cacheGrps.values()) {
-                if (grp0.sharedGroup() && grpName.equals(grp0.name())) {
-                    grp = grp0;
+        cacheContext.finishRecovery(cacheStartVer, updatedDescriptor);
 
-                    break;
-                }
+        if (cacheContext.config().getAtomicityMode() == TRANSACTIONAL_SNAPSHOT)
+            sharedCtx.coordinators().ensureStarted();
+
+        onKernalStart(cacheContext.cache());
+
+        if (log.isInfoEnabled())
+            log.info("Finished recovery for cache [cache=" + cacheContext.name() +
+                ", grp=" + groupContext.cacheOrGroupName() + ", startVer=" + cacheStartVer + "]");
+    }
+
+    /**
+     * Stops all caches and groups that were recovered but not activated on node join.
+     * Such caches can remain only if they were filtered out by the node filter on the current node.
+     * It's impossible to check whether the current node is an affinity node for a given cache before joining the topology.
+     */
+    public void shutdownNotFinishedRecoveryCaches() {
+        for (GridCacheAdapter cacheAdapter : caches.values()) {
+            GridCacheContext cacheContext = cacheAdapter.context();
+
+            if (cacheContext.isLocal())
+                continue;
+
+            if (cacheContext.isRecoveryMode()) {
+                assert !isLocalAffinity(cacheContext.config())
+                    : "Cache " + cacheAdapter.context() + " is still in recovery mode after start, but not activated.";
+
+                stopCacheSafely(cacheContext);
+            }
+        }
+    }
 
-        if (grp == null) {
-            grp = startCacheGroup(desc.groupDescriptor(),
+    /**
+     * Check for affinity node and customize near configuration if needed.
+     *
+     * @param desc Cache descriptor.
+     * @param reqNearCfg Near configuration if specified for client cache start request.
+     * @param ccfg Cache configuration to use.
+     * @return {@code true} if it is affinity node for cache.
+     */
+    private boolean checkForAffinityNode(
+        DynamicCacheDescriptor desc,
+        @Nullable NearCacheConfiguration reqNearCfg,
+        CacheConfiguration ccfg
+    ) {
+        if (ccfg.getCacheMode() == LOCAL) {
+            ccfg.setNearConfiguration(null);
+
+            return true;
+        }
+
+        if (isLocalAffinity(desc.groupDescriptor().config()))
+            return true;
+
+        ccfg.setNearConfiguration(reqNearCfg);
+
+        return false;
+    }
+
+    /**
+     * Prepare page store for start cache.
+     *
+     * @param desc Cache descriptor.
+     * @param affNode {@code true} if it is affinity node for cache.
+     * @throws IgniteCheckedException if failed.
+     */
+    public void preparePageStore(DynamicCacheDescriptor desc, boolean affNode) throws IgniteCheckedException {
+        if (sharedCtx.pageStore() != null && affNode)
+            initializationProtector.protect(
+                desc.groupDescriptor().groupId(),
+                () -> sharedCtx.pageStore().initializeForCache(desc.groupDescriptor(), desc.toStoredData())
+            );
+    }
+
+    /**
+     * Prepare cache group to start cache.
+     *
+     * @param desc Cache descriptor.
+     * @param exchTopVer Current exchange version.
+     * @param cacheObjCtx Cache object context.
+     * @param affNode {@code true} if it is affinity node for cache.
+     * @param grpName Group name.
+     * @return Prepared cache group context.
+     * @throws IgniteCheckedException if failed.
+     */
+    private CacheGroupContext getOrCreateCacheGroupContext(
+        DynamicCacheDescriptor desc,
+        AffinityTopologyVersion exchTopVer,
+        CacheObjectContext cacheObjCtx,
+        boolean affNode,
+        String grpName,
+        boolean recoveryMode
+    ) throws IgniteCheckedException {
+        if (grpName != null) {
+            return initializationProtector.protect(
+                desc.groupId(),
+                () -> findCacheGroup(grpName),
+                () -> startCacheGroup(
+                    desc.groupDescriptor(),
                     desc.cacheType(),
                     affNode,
                     cacheObjCtx,
-                    exchTopVer);
-            }
-        }
-        else {
-            grp = startCacheGroup(desc.groupDescriptor(),
-                desc.cacheType(),
-                affNode,
-                cacheObjCtx,
-                exchTopVer,
+                    exchTopVer,
+                    recoveryMode
+                )
+            );
         }
 
-        GridCacheContext cacheCtx = createCache(ccfg,
-            grp,
-            null,
-            desc,
-            exchTopVer,
-            cacheObjCtx,
+        return startCacheGroup(desc.groupDescriptor(),
+            desc.cacheType(),
             affNode,
-            true,
-            disabledAfterStart
+            cacheObjCtx,
+            exchTopVer,
+            recoveryMode
         );
+    }
 
-        cacheCtx.dynamicDeploymentId(desc.deploymentId());
-
+    /**
+     * Initialize created cache context.
+     *
+     * @param cacheCtx Cache context to initialize.
+     * @param cfg Cache configuration.
+     * @throws IgniteCheckedException if failed.
+     */
+    private void initCacheContext(
+        GridCacheContext cacheCtx,
+        CacheConfiguration cfg
+    ) throws IgniteCheckedException {
         GridCacheAdapter cache = cacheCtx.cache();
 
         sharedCtx.addCacheContext(cacheCtx);
 
         caches.put(cacheCtx.name(), cache);
 
-        startCache(cache, desc.schema() != null ? desc.schema() : new QuerySchema());
+        // Intentionally compare Boolean references using '!=' below to check if the flag has been explicitly set.
+        if (cfg.isStoreKeepBinary() && cfg.isStoreKeepBinary() != CacheConfiguration.DFLT_STORE_KEEP_BINARY
+            && !(ctx.config().getMarshaller() instanceof BinaryMarshaller))
+            U.warn(log, "CacheConfiguration.isStoreKeepBinary() configuration property will be ignored because " +
+                "BinaryMarshaller is not used");
+
+        // Start managers.
+        for (GridCacheManager mgr : F.view(cacheCtx.managers(), F.notContains(dhtExcludes(cacheCtx))))
+            mgr.start(cacheCtx);
+
+        cacheCtx.initConflictResolver();
+
+        if (cfg.getCacheMode() != LOCAL && GridCacheUtils.isNearEnabled(cfg)) {
+            GridCacheContext dhtCtx = cacheCtx.near().dht().context();
+
+            // Start DHT managers.
+            for (GridCacheManager mgr : dhtManagers(dhtCtx))
+                mgr.start(dhtCtx);
+
+            dhtCtx.initConflictResolver();
+
+            // Start DHT cache.
+            dhtCtx.cache().start();
+
+            if (log.isDebugEnabled())
+                log.debug("Started DHT cache: " + dhtCtx.cache().name());
+        }
+
+        ctx.continuous().onCacheStart(cacheCtx);
+
+        cacheCtx.cache().start();
+    }
+
+    /**
+     * Handle of cache context which was fully prepared.
+     *
+     * @param cacheCtx Fully prepared context.
+     * @throws IgniteCheckedException if failed.
+     */
+    private void onCacheStarted(GridCacheContext cacheCtx) throws IgniteCheckedException {
+        GridCacheAdapter cache = cacheCtx.cache();
+        CacheConfiguration cfg = cacheCtx.config();
+        CacheGroupContext grp = cacheGrps.get(cacheCtx.groupId());
+
+        cacheCtx.onStarted();
+
+        String dataRegion = cfg.getDataRegionName();
+
+        if (dataRegion == null && ctx.config().getDataStorageConfiguration() != null)
+            dataRegion = ctx.config().getDataStorageConfiguration().getDefaultDataRegionConfiguration().getName();
+
+        if (log.isInfoEnabled()) {
+            log.info("Started cache [name=" + cfg.getName() +
+                ", id=" + cacheCtx.cacheId() +
+                (cfg.getGroupName() != null ? ", group=" + cfg.getGroupName() : "") +
+                ", dataRegionName=" + dataRegion +
+                ", mode=" + cfg.getCacheMode() +
+                ", atomicity=" + cfg.getAtomicityMode() +
+                ", backups=" + cfg.getBackups() +
+                ", mvcc=" + cacheCtx.mvccEnabled() + ']');
+        }
 
         grp.onCacheStarted(cacheCtx);
 
         onKernalStart(cache);
+    }
+
+    /**
+     * @param desc Cache descriptor.
+     * @throws IgniteCheckedException If failed.
+     */
+    private GridCacheContext startCacheInRecoveryMode(
+        DynamicCacheDescriptor desc
+    ) throws IgniteCheckedException {
+        CacheConfiguration cfg = desc.cacheConfiguration();
+
+        CacheObjectContext cacheObjCtx = ctx.cacheObjects().contextForCache(cfg);
+
+        preparePageStore(desc, true);
+
+        CacheGroupContext grp = getOrCreateCacheGroupContext(
+            desc,
+            AffinityTopologyVersion.NONE,
+            cacheObjCtx,
+            true,
+            cfg.getGroupName(),
+            true
+        );
+
+        GridCacheContext cacheCtx = createCacheContext(cfg,
+            grp,
+            null,
+            desc,
+            AffinityTopologyVersion.NONE,
+            cacheObjCtx,
+            true,
+            true,
+            false,
+            true
+        );
+
+        initCacheContext(cacheCtx, cfg);
 
-        IgniteCacheProxyImpl proxy = jCacheProxies.get(ccfg.getName());
+        cacheCtx.onStarted();
+
+        String dataRegion = cfg.getDataRegionName();
 
-        if (!disabledAfterStart && proxy != null && proxy.isRestarting()) {
-            proxy.onRestarted(cacheCtx, cache);
+        if (dataRegion == null && ctx.config().getDataStorageConfiguration() != null)
+            dataRegion = ctx.config().getDataStorageConfiguration().getDefaultDataRegionConfiguration().getName();
+
+        grp.onCacheStarted(cacheCtx);
 
-            if (cacheCtx.dataStructuresCache())
-                ctx.dataStructures().restart(proxy.internalProxy());
+        ctx.query().onCacheStart(cacheCtx, desc.schema() != null ? desc.schema() : new QuerySchema());
+
+        if (log.isInfoEnabled()) {
+            log.info("Started cache in recovery mode [name=" + cfg.getName() +
+                ", id=" + cacheCtx.cacheId() +
+                (cfg.getGroupName() != null ? ", group=" + cfg.getGroupName() : "") +
+                ", dataRegionName=" + dataRegion +
+                ", mode=" + cfg.getCacheMode() +
+                ", atomicity=" + cfg.getAtomicityMode() +
+                ", backups=" + cfg.getBackups() +
+                ", mvcc=" + cacheCtx.mvccEnabled() + ']');
         }
+
+        return cacheCtx;
     }
 
     /**
-     * Restarts proxies of caches if they was marked as restarting.
-     * Requires external synchronization - shouldn't be called concurrently with another caches restart.
+     * @param grpName Group name.
+     * @return Found group or null.
+     */
+    private CacheGroupContext findCacheGroup(String grpName) {
+        return cacheGrps.values().stream()
+            .filter(grp -> grp.sharedGroup() && grpName.equals(grp.name()))
+            .findAny()
+            .orElse(null);
+    }
+
+    /**
+     * Restarts proxies of caches if they were marked as restarting. Requires external synchronization - shouldn't be
+     * called concurrently with another caches restart.
      */
     public void restartProxies() {
         for (IgniteCacheProxyImpl proxy : jCacheProxies.values()) {
@@ -2060,14 +2619,46 @@ public void restartProxies() {
             continue;
 
             if (proxy.isRestarting()) {
-                caches.get(proxy.getName()).active(true);
+                caches.get(proxy.getName()).active(true);
+
+                proxy.onRestarted(cacheCtx, cacheCtx.cache());
+
+                if (cacheCtx.dataStructuresCache())
+                    ctx.dataStructures().restart(proxy.getName(), proxy.internalProxy());
+            }
+        }
+    }
+
+    /**
+     * Complete stopping of caches if they were marked as restarting but it failed.
+     * @return Cache names of proxies which were restarted.
+     */
+    public List resetRestartingProxies() {
+        List res = new ArrayList<>();
+
+        for (Map.Entry> e : jCacheProxies.entrySet()) {
+            IgniteCacheProxyImpl proxy = e.getValue();
+
+            if (proxy == null)
+                continue;
+
+            if (proxy.isRestarting()) {
+                String cacheName = e.getKey();
 
-                proxy.onRestarted(cacheCtx, cacheCtx.cache());
+                res.add(cacheName);
 
-                if (cacheCtx.dataStructuresCache())
-                    ctx.dataStructures().restart(proxy.internalProxy());
+                jCacheProxies.remove(cacheName);
+
+                proxy.onRestarted(null, null);
+
+                if (DataStructuresProcessor.isDataStructureCache(cacheName))
+                    ctx.dataStructures().restart(cacheName, null);
             }
         }
+
+        cachesInfo.removeRestartingCaches();
+
+        return res;
     }
 
     /**
@@ -2084,8 +2675,9 @@ private CacheGroupContext startCacheGroup(
         CacheType cacheType,
         boolean affNode,
         CacheObjectContext cacheObjCtx,
-        AffinityTopologyVersion exchTopVer)
-        throws IgniteCheckedException {
+        AffinityTopologyVersion exchTopVer,
+        boolean recoveryMode
+    ) throws IgniteCheckedException {
         CacheConfiguration cfg = new CacheConfiguration(desc.config());
 
         String memPlcName = cfg.getDataRegionName();
@@ -2094,7 +2686,7 @@ private CacheGroupContext startCacheGroup(
         FreeList freeList = sharedCtx.database().freeList(memPlcName);
         ReuseList reuseList = sharedCtx.database().reuseList(memPlcName);
 
-        boolean persistenceEnabled = sharedCtx.localNode().isClient() ? desc.persistenceEnabled() :
+        boolean persistenceEnabled = recoveryMode || sharedCtx.localNode().isClient() ? desc.persistenceEnabled() :
             dataRegion != null && dataRegion.config().isPersistenceEnabled();
 
         CacheGroupContext grp = new CacheGroupContext(sharedCtx,
@@ -2109,7 +2701,8 @@ private CacheGroupContext startCacheGroup(
             reuseList,
             exchTopVer,
             persistenceEnabled,
-            desc.walEnabled()
+            desc.walEnabled(),
+            recoveryMode
         );
 
         for (Object obj : grp.configuredUserObjects())
@@ -2121,7 +2714,7 @@ private CacheGroupContext startCacheGroup(
         CacheGroupContext old = cacheGrps.put(desc.groupId(), grp);
 
-        if (!grp.systemCache() && !U.IGNITE_MBEANS_DISABLED) {
+        if (!grp.systemCache() && !U.IGNITE_MBEANS_DISABLED) {
            try {
                U.registerMBean(ctx.config().getMBeanServer(), ctx.igniteInstanceName(), CACHE_GRP_METRICS_MBEAN_GRP,
                    grp.cacheOrGroupName(), grp.mxBean(), CacheGroupMetricsMXBean.class);
@@ -2143,7 +2736,7 @@ private CacheGroupContext startCacheGroup(
      */
     void blockGateway(String cacheName, boolean stop, boolean restart) {
         // Break the proxy before exchange future is done.
-        IgniteCacheProxyImpl proxy = jCacheProxies.get(cacheName);
+        IgniteCacheProxyImpl proxy = jcacheProxy(cacheName, false);
 
         if (restart) {
             GridCacheAdapter cache = caches.get(cacheName);
@@ -2152,16 +2745,28 @@ void blockGateway(String cacheName, boolean stop, boolean restart) {
             cache.active(false);
         }
 
-        if (proxy != null) {
-            if (stop) {
-                if (restart)
-                    proxy.restart();
+        if (stop) {
+            if (restart) {
+                GridCacheAdapter cache;
+
+                if (proxy == null && (cache = caches.get(cacheName)) != null) {
+                    proxy = new IgniteCacheProxyImpl(cache.context(), cache, false);
+
+                    IgniteCacheProxyImpl oldProxy = jCacheProxies.putIfAbsent(cacheName, proxy);
+
+                    if (oldProxy != null)
+                        proxy = oldProxy;
+                }
 
-                proxy.context().gate().stopped();
+                if (proxy != null)
+                    proxy.suspend();
            }
-            else
-                proxy.closeProxy();
+
+            if (proxy != null)
+                proxy.context0().gate().stopped();
        }
+        else if (proxy != null)
+            proxy.closeProxy();
     }
 
     /**
@@ -2185,13 +2790,16 @@ private void stopGateway(DynamicCacheChangeRequest req) {
             proxy = jCacheProxies.get(req.cacheName());
 
             if (proxy != null)
-                proxy.restart();
+                proxy.suspend();
         }
-        else
+        else {
+            completeProxyInitialize(req.cacheName());
+
             proxy = jCacheProxies.remove(req.cacheName());
+        }
 
         if (proxy != null)
-            proxy.context().gate().onStopped();
+            proxy.context0().gate().onStopped();
     }
 
     /**
@@ -2199,7 +2807,7 @@ private void stopGateway(DynamicCacheChangeRequest req) {
      * @param destroy Cache data destroy flag. Setting to true will remove all cache data.
     * @return Stopped cache context.
      */
-    private GridCacheContext prepareCacheStop(String cacheName, boolean destroy) {
+    public GridCacheContext prepareCacheStop(String cacheName, boolean destroy) {
         assert sharedCtx.database().checkpointLockIsHeldByThread();
 
         GridCacheAdapter cache = caches.remove(cacheName);
@@ -2229,12 +2837,12 @@ void initCacheProxies(AffinityTopologyVersion startTopVer, @Nullable Throwable e
             if (cacheCtx.startTopologyVersion().equals(startTopVer)) {
                 if (!jCacheProxies.containsKey(cacheCtx.name())) {
-                    IgniteCacheProxyImpl newProxy = new IgniteCacheProxyImpl(cache.context(), cache, false);
+                    IgniteCacheProxyImpl newProxy = new IgniteCacheProxyImpl(cache.context(), cache, false);
 
                     if (!cache.active())
-                        newProxy.restart();
+                        newProxy.suspend();
 
-                    jCacheProxies.putIfAbsent(cacheCtx.name(), newProxy);
+                    addjCacheProxy(cacheCtx.name(), newProxy);
                 }
 
                 if (cacheCtx.preloader() != null)
@@ -2252,6 +2860,8 @@ Set closeCaches(Set cachesToClose, boolean retClientCaches) {
         Set ids = null;
 
         for (String cacheName : cachesToClose) {
+            completeProxyInitialize(cacheName);
+
             blockGateway(cacheName, false, false);
 
             GridCacheContext ctx = sharedCtx.cacheContext(CU.cacheId(cacheName));
@@ -2282,6 +2892,8 @@ private void closeCache(GridCacheContext cctx) {
             assert cache != null : cctx.name();
 
             jCacheProxies.put(cctx.name(), new IgniteCacheProxyImpl(cache.context(), cache, false));
+
+            completeProxyInitialize(cctx.name());
         }
         else {
             cctx.gate().onStopped();
@@ -2293,19 +2905,11 @@ private void closeCache(GridCacheContext cctx) {
             if (!cctx.affinityNode() && cctx.transactional())
                 sharedCtx.tm().rollbackTransactionsForCache(cctx.cacheId());
 
-            jCacheProxies.remove(cctx.name());
-
-            sharedCtx.database().checkpointReadLock();
+            completeProxyInitialize(cctx.name());
 
-            try {
-                prepareCacheStop(cctx.name(), false);
-            }
-            finally {
-                sharedCtx.database().checkpointReadUnlock();
-            }
+            jCacheProxies.remove(cctx.name());
 
-            if (!cctx.group().hasCaches())
-                stopCacheGroup(cctx.group().groupId());
+            stopCacheSafely(cctx);
         }
         finally {
             sharedCtx.io().writeUnlock();
@@ -2314,8 +2918,8 @@ private void closeCache(GridCacheContext cctx) {
     }
 
     /**
-     * Called during the rollback of the exchange partitions procedure
-     * in order to stop the given cache even if it's not fully initialized (e.g. failed on cache init stage).
+     * Called during the rollback of the exchange partitions procedure in order to stop the given cache even if it's not
+     * fully initialized (e.g. failed on cache init stage).
      *
     * @param exchActions Stop requests.
      */
@@ -2329,7 +2933,9 @@ void forceCloseCaches(ExchangeActions exchActions) {
      * @param exchActions Change requests.
     */
     private void processCacheStopRequestOnExchangeDone(ExchangeActions exchActions) {
-        // Force checkpoint if there is any cache stop request
+        // Reserve at least 2 threads for system operations.
+ int parallelismLvl = U.availableThreadCount(ctx, GridIoPolicy.SYSTEM_POOL, 2); + if (!exchActions.cacheStopRequests().isEmpty()) { try { sharedCtx.database().waitForCheckpoint("caches stop"); @@ -2339,63 +2945,88 @@ private void processCacheStopRequestOnExchangeDone(ExchangeActions exchActions) } } - for (ExchangeActions.CacheActionData action : exchActions.cacheStopRequests()) { - CacheGroupContext gctx = cacheGrps.get(action.descriptor().groupId()); + List> grpToStop = exchActions.cacheGroupsToStop().stream() + .filter(a -> cacheGrps.containsKey(a.descriptor().groupId())) + .map(a -> F.t(cacheGrps.get(a.descriptor().groupId()), a.destroy())) + .collect(Collectors.toList()); - // Cancel all operations blocking gateway - if (gctx != null) { - final String msg = "Failed to wait for topology update, cache group is stopping."; + Map> cachesToStop = exchActions.cacheStopRequests().stream() + .collect(Collectors.groupingBy(action -> action.descriptor().groupId())); - // If snapshot operation in progress we must throw CacheStoppedException - // for correct cache proxy restart. For more details see - // IgniteCacheProxy.cacheException() - gctx.affinity().cancelFutures(new CacheStoppedException(msg)); - } + try { + doInParallel( + parallelismLvl, + sharedCtx.kernalContext().getSystemExecutorService(), + cachesToStop.entrySet(), + cachesToStopByGrp -> { + CacheGroupContext gctx = cacheGrps.get(cachesToStopByGrp.getKey()); - stopGateway(action.request()); + if (gctx != null) + gctx.preloader().pause(); - sharedCtx.database().checkpointReadLock(); + try { - try { - prepareCacheStop(action.request().cacheName(), action.request().destroy()); - } - finally { - sharedCtx.database().checkpointReadUnlock(); - } + if (gctx != null) { + final String msg = "Failed to wait for topology update, cache group is stopping."; + + // If snapshot operation in progress we must throw CacheStoppedException + // for correct cache proxy restart. 
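The `parallelismLvl` computation above reserves a couple of system-pool threads for other operations before fanning out cache-stop work. The arithmetic behind such a helper is just a floor-clamped subtraction; this is an illustrative standalone sketch, not `U.availableThreadCount` itself:

```java
public class ThreadBudget {
    /** Threads available for a bulk operation after reserving some for system work. */
    static int availableThreadCount(int poolSize, int reserved) {
        // Never return less than 1, or the bulk operation could not run at all.
        return Math.max(poolSize - reserved, 1);
    }
}
```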
For more details see + // IgniteCacheProxy.cacheException() + gctx.affinity().cancelFutures(new CacheStoppedException(msg)); + } + + for (ExchangeActions.CacheActionData action: cachesToStopByGrp.getValue()) { + stopGateway(action.request()); + + sharedCtx.database().checkpointReadLock(); + + try { + prepareCacheStop(action.request().cacheName(), action.request().destroy()); + } + finally { + sharedCtx.database().checkpointReadUnlock(); + } + } + } + finally { + if (gctx != null) + gctx.preloader().resume(); + } + + return null; + } + ); + } + catch (IgniteCheckedException e) { + String msg = "Failed to stop caches"; + + log.error(msg, e); + + throw new IgniteException(msg, e); } sharedCtx.database().checkpointReadLock(); try { // Do not invoke checkpoint listeners for groups are going to be destroyed to prevent metadata corruption. - for (ExchangeActions.CacheGroupActionData action : exchActions.cacheGroupsToStop()) { - Integer groupId = action.descriptor().groupId(); - CacheGroupContext grp = cacheGrps.get(groupId); + grpToStop.forEach(grp -> { + CacheGroupContext gctx = grp.getKey(); - if (grp != null && grp.persistenceEnabled() && sharedCtx.database() instanceof GridCacheDatabaseSharedManager) { - GridCacheDatabaseSharedManager mngr = (GridCacheDatabaseSharedManager) sharedCtx.database(); - mngr.removeCheckpointListener((DbCheckpointListener) grp.offheap()); + if (gctx != null && gctx.persistenceEnabled() && sharedCtx.database() instanceof GridCacheDatabaseSharedManager) { + GridCacheDatabaseSharedManager mngr = (GridCacheDatabaseSharedManager)sharedCtx.database(); + mngr.removeCheckpointListener((DbCheckpointListener)gctx.offheap()); } - } + }); } finally { sharedCtx.database().checkpointReadUnlock(); } - List> stoppedGroups = new ArrayList<>(); - - for (ExchangeActions.CacheGroupActionData action : exchActions.cacheGroupsToStop()) { - Integer groupId = action.descriptor().groupId(); - - if (cacheGrps.containsKey(groupId)) { - 
stoppedGroups.add(F.t(cacheGrps.get(groupId), action.destroy())); - - stopCacheGroup(groupId); - } - } + for (IgniteBiTuple grp : grpToStop) + stopCacheGroup(grp.get1().groupId()); if (!sharedCtx.kernalContext().clientNode()) - sharedCtx.database().onCacheGroupsStopped(stoppedGroups); + sharedCtx.database().onCacheGroupsStopped(grpToStop); if (exchActions.deactivate()) sharedCtx.deactivate(); @@ -2425,10 +3056,39 @@ public void onExchangeDone( ctx.service().updateUtilityCache(); } + rollbackCoveredTx(exchActions); + if (err == null) processCacheStopRequestOnExchangeDone(exchActions); } + /** + * Rollback tx covered by stopped caches. + * + * @param exchActions Change requests. + */ + private void rollbackCoveredTx(ExchangeActions exchActions) { + if (!exchActions.cacheGroupsToStop().isEmpty() || !exchActions.cacheStopRequests().isEmpty()) { + Set cachesToStop = new HashSet<>(); + + for (ExchangeActions.CacheGroupActionData act : exchActions.cacheGroupsToStop()) { + @Nullable CacheGroupContext grpCtx = context().cache().cacheGroup(act.descriptor().groupId()); + + if (grpCtx != null && grpCtx.sharedGroup()) + cachesToStop.addAll(grpCtx.cacheIds()); + } + + for (ExchangeActions.CacheActionData act : exchActions.cacheStopRequests()) + cachesToStop.add(act.descriptor().cacheId()); + + if (!cachesToStop.isEmpty()) { + IgniteTxManager tm = context().tm(); + + tm.rollbackTransactionsForCaches(cachesToStop); + } + } + } + /** * @param grpId Group ID. 
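The stop path above groups stop requests by cache group id (`Collectors.groupingBy`) and then executes one task per group on the system pool through a `doInParallel` helper, so caches of the same group stop sequentially while distinct groups stop concurrently. A minimal standalone sketch of that group-then-run-in-parallel shape using only `java.util.concurrent` (the names are illustrative, not Ignite's API):

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.Collectors;

public class ParallelGroupStop {
    /** Groups cache names by group id and processes each group in its own pool task. */
    static Map<Integer, List<String>> stopInParallel(Map<String, Integer> cacheToGrp, int parallelism)
        throws InterruptedException {
        // Group stop requests by cache group, mirroring the groupingBy in the patch.
        Map<Integer, List<String>> byGrp = cacheToGrp.entrySet().stream()
            .collect(Collectors.groupingBy(Map.Entry::getValue,
                Collectors.mapping(Map.Entry::getKey, Collectors.toList())));

        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        Map<Integer, List<String>> stopped = new ConcurrentHashMap<>();

        try {
            List<Callable<Void>> tasks = new ArrayList<>();

            for (Map.Entry<Integer, List<String>> e : byGrp.entrySet()) {
                tasks.add(() -> {
                    // One task per group: caches of the same group are handled sequentially.
                    stopped.put(e.getKey(), e.getValue());
                    return null;
                });
            }

            pool.invokeAll(tasks); // Blocks until every task finishes, like doInParallel.
        }
        finally {
            pool.shutdown();
        }

        return stopped;
    }
}
```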
*/ @@ -2518,13 +3178,8 @@ private GridCacheSharedContext createSharedContext(GridKernalContext kernalCtx, walMgr = ctx.plugins().createComponent(IgniteWriteAheadLogManager.class); - if (walMgr == null) { - if (ctx.config().getDataStorageConfiguration().getWalMode() == WALMode.FSYNC && - !walFsyncWithDedicatedWorker) - walMgr = new FsyncModeFileWriteAheadLogManager(ctx); - else - walMgr = new FileWriteAheadLogManager(ctx); - } + if (walMgr == null) + walMgr = new FileWriteAheadLogManager(ctx); } else { if (CU.isPersistenceEnabled(ctx.config()) && ctx.clientNode()) { @@ -2549,6 +3204,8 @@ private GridCacheSharedContext createSharedContext(GridKernalContext kernalCtx, CacheJtaManagerAdapter jta = JTA.createOptional(); + MvccCachingManager mvccCachingMgr = new MvccCachingManager(); + return new GridCacheSharedContext( kernalCtx, tm, @@ -2566,7 +3223,8 @@ private GridCacheSharedContext createSharedContext(GridKernalContext kernalCtx, ttl, evict, jta, - storeSesLsnrs + storeSesLsnrs, + mvccCachingMgr ); } @@ -3005,7 +3663,6 @@ private CacheConfiguration getOrCreateConfigFromTemplate(String cacheName) throw * @param checkThreadTx If {@code true} checks that current thread does not have active transactions. * @return Future that will be completed when cache is deployed. */ - @SuppressWarnings("IfMayBeConditional") public IgniteInternalFuture dynamicStartCache( @Nullable CacheConfiguration ccfg, String cacheName, @@ -3029,7 +3686,6 @@ public IgniteInternalFuture dynamicStartCache( * * @param ccfg Cache configuration. */ - @SuppressWarnings("IfMayBeConditional") public IgniteInternalFuture dynamicStartSqlCache( CacheConfiguration ccfg ) { @@ -3058,7 +3714,6 @@ public IgniteInternalFuture dynamicStartSqlCache( * @param checkThreadTx If {@code true} checks that current thread does not have active transactions. * @return Future that will be completed when cache is deployed. 
*/ - @SuppressWarnings("IfMayBeConditional") public IgniteInternalFuture dynamicStartCache( @Nullable CacheConfiguration ccfg, String cacheName, @@ -3074,7 +3729,9 @@ public IgniteInternalFuture dynamicStartCache( if (checkThreadTx) checkEmptyTransactions(); - try { + GridPlainClosure, IgniteInternalFuture> startCacheClsr = (grpKeys) -> { + assert ccfg == null || !ccfg.isEncryptionEnabled() || !grpKeys.isEmpty(); + DynamicCacheChangeRequest req = prepareCacheChangeRequest( ccfg, cacheName, @@ -3083,8 +3740,10 @@ public IgniteInternalFuture dynamicStartCache( sql, failIfExists, failIfNotStarted, + null, false, - null); + null, + ccfg != null && ccfg.isEncryptionEnabled() ? grpKeys.iterator().next() : null); if (req != null) { if (req.clientStartOnly()) @@ -3094,12 +3753,64 @@ public IgniteInternalFuture dynamicStartCache( } else return new GridFinishedFuture<>(); + }; + + try { + if (ccfg != null && ccfg.isEncryptionEnabled()) { + ctx.encryption().checkEncryptedCacheSupported(); + + return generateEncryptionKeysAndStartCacheAfter(1, startCacheClsr); + } + + return startCacheClsr.apply(Collections.EMPTY_SET); } catch (Exception e) { return new GridFinishedFuture<>(e); } } + /** + * Send {@code GenerateEncryptionKeyRequest} and execute {@code after} closure if succeed. + * + * @param keyCnt Count of keys to generate. + * @param after Closure to execute after encryption keys would be generated. 
+ */ + private IgniteInternalFuture generateEncryptionKeysAndStartCacheAfter(int keyCnt, + GridPlainClosure, IgniteInternalFuture> after) { + IgniteInternalFuture> genEncKeyFut = ctx.encryption().generateKeys(keyCnt); + + GridFutureAdapter res = new GridFutureAdapter<>(); + + genEncKeyFut.listen(new IgniteInClosure>>() { + @Override public void apply(IgniteInternalFuture> fut) { + try { + Collection grpKeys = fut.result(); + + if (F.size(grpKeys, F.alwaysTrue()) != keyCnt) + res.onDone(false, fut.error()); + + IgniteInternalFuture dynStartCacheFut = after.apply(grpKeys); + + dynStartCacheFut.listen(new IgniteInClosure>() { + @Override public void apply(IgniteInternalFuture fut) { + try { + res.onDone(fut.get(), fut.error()); + } + catch (IgniteCheckedException e) { + res.onDone(false, e); + } + } + }); + } + catch (Exception e) { + res.onDone(false, e); + } + } + }); + + return res; + } + /** * @param startReqs Start requests. * @param cachesToClose Cache tp close. @@ -3134,14 +3845,18 @@ private IgniteInternalFuture startClientCacheChange( * @param disabledAfterStart If true, cache proxies will be only activated after {@link #restartProxies()}. * @return Future that will be completed when all caches are deployed. */ - public IgniteInternalFuture dynamicStartCaches(Collection ccfgList, boolean failIfExists, - boolean checkThreadTx, boolean disabledAfterStart) { + public IgniteInternalFuture dynamicStartCaches( + Collection ccfgList, + boolean failIfExists, + boolean checkThreadTx, + boolean disabledAfterStart + ) { return dynamicStartCachesByStoredConf( ccfgList.stream().map(StoredCacheData::new).collect(Collectors.toList()), failIfExists, checkThreadTx, - disabledAfterStart - ); + disabledAfterStart, + null); } /** @@ -3151,21 +3866,28 @@ public IgniteInternalFuture dynamicStartCaches(Collection * @param failIfExists Fail if exists flag. * @param checkThreadTx If {@code true} checks that current thread does not have active transactions. 
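`generateEncryptionKeysAndStartCacheAfter` above chains two asynchronous stages by hand: it listens on the key-generation future, checks that the expected number of keys arrived, then runs the start closure and forwards its result to an adapter future. The same shape expressed with `CompletableFuture.thenCompose` (an illustrative sketch with stand-in key generation, not the Ignite future API):

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

public class KeyThenStart {
    /** Generates keyCnt keys asynchronously, then runs the dependent start stage. */
    static CompletableFuture<Boolean> generateKeysAndStart(
        int keyCnt,
        ExecutorService exec,
        Function<Collection<byte[]>, CompletableFuture<Boolean>> after
    ) {
        CompletableFuture<Collection<byte[]>> genKeys = CompletableFuture.supplyAsync(() -> {
            List<byte[]> keys = new ArrayList<>();

            for (int i = 0; i < keyCnt; i++)
                keys.add(new byte[] {(byte)i}); // Stand-in for a real encryption key.

            return keys;
        }, exec);

        // thenCompose plays the role of the nested listen(...) in the patch:
        // the second stage starts only after the key future completes, and the
        // count check fails fast when fewer keys than requested were produced.
        return genKeys.thenCompose(keys ->
            keys.size() == keyCnt ? after.apply(keys)
                : CompletableFuture.completedFuture(false));
    }
}
```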
* @param disabledAfterStart If true, cache proxies will be only activated after {@link #restartProxies()}. + * @param restartId Restart requester id (it'll allow to start this cache only him). * @return Future that will be completed when all caches are deployed. */ - public IgniteInternalFuture dynamicStartCachesByStoredConf( + public IgniteInternalFuture dynamicStartCachesByStoredConf( Collection storedCacheDataList, boolean failIfExists, boolean checkThreadTx, - boolean disabledAfterStart) { + boolean disabledAfterStart, + IgniteUuid restartId + ) { if (checkThreadTx) checkEmptyTransactions(); - List srvReqs = null; - Map clientReqs = null; + GridPlainClosure, IgniteInternalFuture> startCacheClsr = (grpKeys) -> { + List srvReqs = null; + Map clientReqs = null; + + Iterator grpKeysIter = grpKeys.iterator(); - try { for (StoredCacheData ccfg : storedCacheDataList) { + assert !ccfg.config().isEncryptionEnabled() || grpKeysIter.hasNext(); + DynamicCacheChangeRequest req = prepareCacheChangeRequest( ccfg.config(), ccfg.config().getName(), @@ -3174,8 +3896,10 @@ public IgniteInternalFuture dynamicStartCachesByStoredConf( ccfg.sql(), failIfExists, true, + restartId, disabledAfterStart, - ccfg.queryEntities()); + ccfg.queryEntities(), + ccfg.config().isEncryptionEnabled() ? 
grpKeysIter.next() : null); if (req != null) { if (req.clientStartOnly()) { @@ -3192,16 +3916,14 @@ public IgniteInternalFuture dynamicStartCachesByStoredConf( } } } - } - catch (Exception e) { - return new GridFinishedFuture<>(e); - } - if (srvReqs != null || clientReqs != null) { + if (srvReqs == null && clientReqs == null) + return new GridFinishedFuture<>(); + if (clientReqs != null && srvReqs == null) return startClientCacheChange(clientReqs, null); - GridCompoundFuture compoundFut = new GridCompoundFuture<>(); + GridCompoundFuture compoundFut = new GridCompoundFuture<>(); for (DynamicCacheStartFuture fut : initiateCacheChanges(srvReqs)) compoundFut.add((IgniteInternalFuture)fut); @@ -3215,9 +3937,16 @@ public IgniteInternalFuture dynamicStartCachesByStoredConf( compoundFut.markInitialized(); return compoundFut; + }; + + int encGrpCnt = 0; + + for (StoredCacheData ccfg : storedCacheDataList) { + if (ccfg.config().isEncryptionEnabled()) + encGrpCnt++; } - else - return new GridFinishedFuture<>(); + + return generateEncryptionKeysAndStartCacheAfter(encGrpCnt, startCacheClsr); } /** Resolve cache type for input cacheType */ @@ -3238,10 +3967,16 @@ else if (DataStructuresProcessor.isDataStructureCache(ccfg.getName())) * command. * @param checkThreadTx If {@code true} checks that current thread does not have active transactions. * @param restart Restart flag. + * @param restartId Restart requester id (it'll allow to start this cache only him). * @return Future that will be completed when cache is destroyed. 
*/ - public IgniteInternalFuture dynamicDestroyCache(String cacheName, boolean sql, boolean checkThreadTx, - boolean restart) { + public IgniteInternalFuture dynamicDestroyCache( + String cacheName, + boolean sql, + boolean checkThreadTx, + boolean restart, + IgniteUuid restartId + ) { assert cacheName != null; if (checkThreadTx) @@ -3252,6 +3987,7 @@ public IgniteInternalFuture dynamicDestroyCache(String cacheName, boole req.stop(true); req.destroy(true); req.restart(restart); + req.restartId(restartId); return F.first(initiateCacheChanges(F.asList(req))); } @@ -3259,30 +3995,30 @@ public IgniteInternalFuture dynamicDestroyCache(String cacheName, boole /** * @param cacheNames Collection of cache names to destroy. * @param checkThreadTx If {@code true} checks that current thread does not have active transactions. - * @param restart Restart flag. * @return Future that will be completed when cache is destroyed. */ - public IgniteInternalFuture dynamicDestroyCaches(Collection cacheNames, boolean checkThreadTx, - boolean restart) { - return dynamicDestroyCaches(cacheNames, checkThreadTx, restart, true); + public IgniteInternalFuture dynamicDestroyCaches(Collection cacheNames, boolean checkThreadTx) { + return dynamicDestroyCaches(cacheNames, checkThreadTx, true); } /** * @param cacheNames Collection of cache names to destroy. * @param checkThreadTx If {@code true} checks that current thread does not have active transactions. - * @param restart Restart flag. * @param destroy Cache data destroy flag. Setting to true will cause removing all cache data * @return Future that will be completed when cache is destroyed. 
*/ - public IgniteInternalFuture dynamicDestroyCaches(Collection cacheNames, boolean checkThreadTx, - boolean restart, boolean destroy) { + public IgniteInternalFuture dynamicDestroyCaches( + Collection cacheNames, + boolean checkThreadTx, + boolean destroy + ) { if (checkThreadTx) checkEmptyTransactions(); List reqs = new ArrayList<>(cacheNames.size()); for (String cacheName : cacheNames) { - reqs.add(createStopRequest(cacheName, restart, destroy)); + reqs.add(createStopRequest(cacheName, false, null, destroy)); } return dynamicChangeCaches(reqs); @@ -3293,15 +4029,17 @@ public IgniteInternalFuture dynamicDestroyCaches(Collection cacheName * * @param cacheName Cache names to destroy. * @param restart Restart flag. + * @param restartId Restart requester id (it'll allow to start this cache only him). * @param destroy Cache data destroy flag. Setting to {@code true} will cause removing all cache data from store. * @return Future that will be completed when cache is destroyed. */ - @NotNull public DynamicCacheChangeRequest createStopRequest(String cacheName, boolean restart, boolean destroy) { + @NotNull public DynamicCacheChangeRequest createStopRequest(String cacheName, boolean restart, IgniteUuid restartId, boolean destroy) { DynamicCacheChangeRequest req = DynamicCacheChangeRequest.stopRequest(ctx, cacheName, false, true); req.stop(true); req.destroy(destroy); req.restart(restart); + req.restartId(restartId); return req; } @@ -3356,7 +4094,7 @@ public boolean walEnabled(String cacheName) { IgniteInternalFuture dynamicCloseCache(String cacheName) { assert cacheName != null; - IgniteCacheProxy proxy = jCacheProxies.get(cacheName); + IgniteCacheProxy proxy = jcacheProxy(cacheName, false); if (proxy == null || proxy.isProxyClosed()) return new GridFinishedFuture<>(); // No-op. 
@@ -3364,7 +4102,7 @@ IgniteInternalFuture dynamicCloseCache(String cacheName) { checkEmptyTransactions(); if (proxy.context().isLocal()) - return dynamicDestroyCache(cacheName, false, true, false); + return dynamicDestroyCache(cacheName, false, true, false, null); return startClientCacheChange(null, Collections.singleton(cacheName)); } @@ -3529,6 +4267,7 @@ private Collection initiateCacheChanges( /** * Authorize creating cache. + * * @param cfg Cache configuration. * @param secCtx Optional security context. */ @@ -3542,6 +4281,7 @@ private void authorizeCacheCreate(CacheConfiguration cfg, SecurityContext secCtx /** * Authorize dynamic cache management for this node. + * * @param req start/stop cache request. */ private void authorizeCacheChange(DynamicCacheChangeRequest req) { @@ -3624,7 +4364,7 @@ else if (msg0 instanceof WalStateFinishMessage) return cachesInfo.onCacheChangeRequested((DynamicCacheChangeBatch)msg, topVer); if (msg instanceof DynamicCacheChangeFailureMessage) - cachesInfo.onCacheChangeRequested((DynamicCacheChangeFailureMessage) msg, topVer); + cachesInfo.onCacheChangeRequested((DynamicCacheChangeFailureMessage)msg, topVer); if (msg instanceof ClientCacheChangeDiscoveryMessage) cachesInfo.onClientCacheChange((ClientCacheChangeDiscoveryMessage)msg, node); @@ -3685,10 +4425,11 @@ else if (rebalanceOrder < 0) } /** - * Reset restarting caches. + * @param cacheName Cache to check. + * @return Cache is under restarting. */ - public void resetRestartingCaches() { - cachesInfo.restartingCaches().clear(); + public boolean isCacheRestarting(String cacheName) { + return cachesInfo.isRestarting(cacheName); } /** @@ -3867,11 +4608,60 @@ public IgniteInternalCache cache(String name) { if (log.isDebugEnabled()) log.debug("Getting cache for name: " + name); - IgniteCacheProxy jcache = (IgniteCacheProxy)jCacheProxies.get(name); + IgniteCacheProxy jcache = (IgniteCacheProxy)jcacheProxy(name, true); return jcache == null ? 
null : jcache.internalProxy();
    }

+    /**
+     * Await proxy initialization.
+     *
+     * @param jcache Cache proxy.
+     */
+    private void awaitInitializeProxy(IgniteCacheProxyImpl jcache) {
+        if (jcache != null) {
+            CountDownLatch initLatch = jcache.getInitLatch();
+
+            try {
+                while (initLatch.getCount() > 0) {
+                    initLatch.await(2000, TimeUnit.MILLISECONDS);
+
+                    if (log.isInfoEnabled())
+                        log.info("Failed to wait proxy initialization, cache=" + jcache.getName() +
+                            ", localNodeId=" + ctx.localNodeId());
+                }
+            }
+            catch (InterruptedException e) {
+                Thread.currentThread().interrupt();
+                // Ignore interruption.
+            }
+        }
+    }
+
+    /**
+     * @param name Cache name.
+     */
+    public void completeProxyInitialize(String name) {
+        IgniteCacheProxyImpl jcache = jCacheProxies.get(name);
+
+        if (jcache != null) {
+            CountDownLatch proxyInitLatch = jcache.getInitLatch();
+
+            if (proxyInitLatch.getCount() > 0) {
+                if (log.isInfoEnabled())
+                    log.info("Finish proxy initialization, cacheName=" + name +
+                        ", localNodeId=" + ctx.localNodeId());
+
+                proxyInitLatch.countDown();
+            }
+        }
+        else {
+            if (log.isInfoEnabled())
+                log.info("Can not finish proxy initialization because proxy does not exist, cacheName=" + name +
+                    ", localNodeId=" + ctx.localNodeId());
+        }
+    }
+    /**
+     * @param name Cache name.
+     * @return Cache instance for given name.
@@ -3888,18 +4678,21 @@ public IgniteInternalCache getOrStartCache(String name) throws Igni
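`awaitInitializeProxy` and `completeProxyInitialize` above gate proxy access on a `CountDownLatch`: callers spin on the latch with a timeout until the initializer releases all waiters exactly once. A self-contained sketch of that gate, outside Ignite (the `String` payload is illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class InitGate {
    private final CountDownLatch initLatch = new CountDownLatch(1);

    private volatile String value;

    /** Publishes the value and releases all waiters, like completeProxyInitialize. */
    public void complete(String v) {
        value = v;

        initLatch.countDown();
    }

    /** Blocks (with a periodic wake-up) until complete() is called, like awaitInitializeProxy. */
    public String await() throws InterruptedException {
        while (initLatch.getCount() > 0)
            initLatch.await(2000, TimeUnit.MILLISECONDS);

        return value;
    }
}
```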
*/ @SuppressWarnings("unchecked") - public IgniteInternalCache getOrStartCache(String name, CacheConfiguration ccfg) throws IgniteCheckedException { + public IgniteInternalCache getOrStartCache( + String name, + CacheConfiguration ccfg + ) throws IgniteCheckedException { assert name != null; if (log.isDebugEnabled()) log.debug("Getting cache for name: " + name); - IgniteCacheProxy cache = jCacheProxies.get(name); + IgniteCacheProxy cache = jcacheProxy(name, true); if (cache == null) { dynamicStartCache(ccfg, name, null, false, ccfg == null, true).get(); - cache = jCacheProxies.get(name); + cache = jcacheProxy(name, true); } return cache == null ? null : (IgniteInternalCache)cache.internalProxy(); @@ -3943,13 +4736,21 @@ public IgniteInternalCache utilityCache() { */ private IgniteInternalCache internalCacheEx(String name) { if (ctx.discovery().localNode().isClient()) { - IgniteCacheProxy proxy = (IgniteCacheProxy)jCacheProxies.get(name); + IgniteCacheProxy proxy = (IgniteCacheProxy)jcacheProxy(name, true); if (proxy == null) { GridCacheAdapter cacheAdapter = caches.get(name); - if (cacheAdapter != null) + if (cacheAdapter != null) { proxy = new IgniteCacheProxyImpl(cacheAdapter.context(), cacheAdapter, false); + + IgniteCacheProxyImpl prev = addjCacheProxy(name, (IgniteCacheProxyImpl)proxy); + + if (prev != null) + proxy = (IgniteCacheProxy)prev; + + completeProxyInitialize(proxy.getName()); + } } assert proxy != null : name; @@ -3981,7 +4782,7 @@ public IgniteInternalCache publicCache(String name) { if (!desc.cacheType().userCache()) throw new IllegalStateException("Failed to get cache because it is a system cache: " + name); - IgniteCacheProxy jcache = (IgniteCacheProxy)jCacheProxies.get(name); + IgniteCacheProxy jcache = (IgniteCacheProxy)jcacheProxy(name, true); if (jcache == null) throw new IllegalArgumentException("Cache is not started: " + name); @@ -4022,16 +4823,16 @@ public IgniteCacheProxy publicJCache(String cacheName) throws Ignit if (desc != null && 
!desc.cacheType().userCache())
            throw new IllegalStateException("Failed to get cache because it is a system cache: " + cacheName);

-        IgniteCacheProxyImpl cache = jCacheProxies.get(cacheName);
+        IgniteCacheProxyImpl proxy = jcacheProxy(cacheName, true);

        // Try to start cache, there is no guarantee that cache will be instantiated.
-        if (cache == null) {
+        if (proxy == null) {
            dynamicStartCache(null, cacheName, null, false, failIfNotStarted, checkThreadTx).get();

-            cache = jCacheProxies.get(cacheName);
+            proxy = jcacheProxy(cacheName, true);
        }

-        return cache != null ? (IgniteCacheProxy)cache.gatewayWrapper() : null;
+        return proxy != null ? (IgniteCacheProxy)proxy.gatewayWrapper() : null;
    }

    /**
@@ -4045,8 +4846,20 @@ public CacheConfiguration cacheConfiguration(String name) {
        DynamicCacheDescriptor desc = cacheDescriptor(name);

-        if (desc == null)
+        if (desc == null) {
+            if (cachesInfo.isRestarting(name)) {
+                IgniteCacheProxyImpl proxy = jCacheProxies.get(name);
+
+                assert proxy != null : name;
+
+                proxy.internalProxy(); // Should throw an exception.
+
+                // We have proceeded, try again.
+                return cacheConfiguration(name);
+            }
+
            throw new IllegalStateException("Cache doesn't exist: " + name);
+        }
        else
            return desc.cacheConfiguration();
    }
@@ -4068,6 +4881,26 @@ public Map cacheDescriptors() {
        return cachesInfo.registeredCaches();
    }

+    /**
+     * @return Collection of persistent cache descriptors.
+     */
+    public Collection persistentCaches() {
+        return cachesInfo.registeredCaches().values()
+            .stream()
+            .filter(desc -> isPersistentCache(desc.cacheConfiguration(), ctx.config().getDataStorageConfiguration()))
+            .collect(Collectors.toList());
+    }
+
+    /**
+     * @return Collection of persistent cache group descriptors.
+     */
+    public Collection persistentGroups() {
+        return cachesInfo.registeredCacheGroups().values()
+            .stream()
+            .filter(CacheGroupDescriptor::persistenceEnabled)
+            .collect(Collectors.toList());
+    }
+
+    /**
+     * @return Cache group descriptors.
*/ @@ -4148,13 +4981,21 @@ else if (ctx.clientDisconnected()) { public IgniteCacheProxy jcache(String name) { assert name != null; - IgniteCacheProxy cache = (IgniteCacheProxy) jCacheProxies.get(name); + IgniteCacheProxy cache = (IgniteCacheProxy)jcacheProxy(name, true); if (cache == null) { GridCacheAdapter cacheAdapter = caches.get(name); - if (cacheAdapter != null) + if (cacheAdapter != null) { cache = new IgniteCacheProxyImpl(cacheAdapter.context(), cacheAdapter, false); + + IgniteCacheProxyImpl prev = addjCacheProxy(name, (IgniteCacheProxyImpl)cache); + + if (prev != null) + cache = (IgniteCacheProxy)prev; + + completeProxyInitialize(cache.getName()); + } } if (cache == null) @@ -4165,10 +5006,25 @@ public IgniteCacheProxy jcache(String name) { /** * @param name Cache name. + * @param awaitInit Await proxy initialization. * @return Cache proxy. */ - @Nullable public IgniteCacheProxy jcacheProxy(String name) { - return jCacheProxies.get(name); + @Nullable public IgniteCacheProxyImpl jcacheProxy(String name, boolean awaitInit) { + IgniteCacheProxyImpl cache = jCacheProxies.get(name); + + if (awaitInit) + awaitInitializeProxy(cache); + + return cache; + } + + /** + * @param name Cache name. + * @param proxy Cache proxy. + * @return Previous cache proxy. + */ + @Nullable public IgniteCacheProxyImpl addjCacheProxy(String name, IgniteCacheProxyImpl proxy) { + return jCacheProxies.putIfAbsent(name, proxy); } /** @@ -4504,8 +5360,10 @@ private T withBinaryContext(IgniteOutClosureX c) throws IgniteCheckedExce * @param sql Whether the cache needs to be created as the result of SQL {@code CREATE TABLE} command. * @param failIfExists Fail if exists flag. * @param failIfNotStarted If {@code true} fails if cache is not started. + * @param restartId Restart requester id (it'll allow to start this cache only him). * @param disabledAfterStart If true, cache proxies will be only activated after {@link #restartProxies()}. * @param qryEntities Query entities. 
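`addjCacheProxy` above resolves the create-proxy race with `putIfAbsent`: when two threads build a proxy for the same cache concurrently, the loser discards its fresh instance and adopts the winner's, exactly as the `prev != null` branches in `jcache` and `internalCacheEx` do. The pattern in isolation (illustrative registry, not Ignite's map):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ProxyRegistry {
    private final ConcurrentMap<String, Object> proxies = new ConcurrentHashMap<>();

    /** Registers a freshly built proxy; returns the single instance all callers must use. */
    public Object register(String name, Object freshProxy) {
        Object prev = proxies.putIfAbsent(name, freshProxy);

        // If another thread registered first, discard ours and adopt the winner.
        return prev != null ? prev : freshProxy;
    }
}
```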
+ * @param encKey Encryption key. * @return Request or {@code null} if cache already exists. * @throws IgniteCheckedException if some of pre-checks failed * @throws CacheExistsException if cache exists and failIfExists flag is {@code true} @@ -4518,8 +5376,10 @@ private DynamicCacheChangeRequest prepareCacheChangeRequest( boolean sql, boolean failIfExists, boolean failIfNotStarted, + IgniteUuid restartId, boolean disabledAfterStart, - @Nullable Collection qryEntities + @Nullable Collection qryEntities, + @Nullable byte[] encKey ) throws IgniteCheckedException { DynamicCacheDescriptor desc = cacheDescriptor(cacheName); @@ -4531,6 +5391,10 @@ private DynamicCacheChangeRequest prepareCacheChangeRequest( req.disabledAfterStart(disabledAfterStart); + req.encryptionKey(encKey); + + req.restartId(restartId); + if (ccfg != null) { cloneCheckSerializable(ccfg); @@ -4544,7 +5408,7 @@ private DynamicCacheChangeRequest prepareCacheChangeRequest( // Check if we were asked to start a near cache. if (nearCfg != null) { - if (CU.affinityNode(ctx.discovery().localNode(), descCfg.getNodeFilter())) { + if (isLocalAffinity(descCfg)) { // If we are on a data node and near cache was enabled, return success, else - fail. if (descCfg.getNearConfiguration() != null) return null; @@ -4556,7 +5420,7 @@ private DynamicCacheChangeRequest prepareCacheChangeRequest( // If local node has near cache, return success. 
req.clientStartOnly(true); } - else if (!CU.affinityNode(ctx.discovery().localNode(), descCfg.getNodeFilter())) + else if (!isLocalAffinity(descCfg)) req.clientStartOnly(true); req.deploymentId(desc.deploymentId()); @@ -4565,17 +5429,21 @@ else if (!CU.affinityNode(ctx.discovery().localNode(), descCfg.getNodeFilter())) } } else { + CacheConfiguration cfg = new CacheConfiguration(ccfg); + req.deploymentId(IgniteUuid.randomUuid()); - CacheConfiguration cfg = new CacheConfiguration(ccfg); + req.startCacheConfiguration(cfg); CacheObjectContext cacheObjCtx = ctx.cacheObjects().contextForCache(cfg); initialize(cfg, cacheObjCtx); - req.startCacheConfiguration(cfg); - req.schema(new QuerySchema(qryEntities != null ? QueryUtils.normalizeQueryEntities(qryEntities, cfg) - : cfg.getQueryEntities())); + if (restartId != null) + req.schema(new QuerySchema(qryEntities == null ? cfg.getQueryEntities() : qryEntities)); + else + req.schema(new QuerySchema(qryEntities != null ? QueryUtils.normalizeQueryEntities(qryEntities, cfg) + : cfg.getQueryEntities())); } } else { @@ -4724,6 +5592,108 @@ public T clone(final T obj) throws IgniteCheckedException { }); } + /** + * Get Temporary storage + */ + public MetaStorage.TmpStorage getTmpStorage() { + return tmpStorage; + } + + /** + * Set Temporary storage + */ + public void setTmpStorage(MetaStorage.TmpStorage tmpStorage) { + this.tmpStorage = tmpStorage; + } + + /** + * Recovery lifecycle for caches. + */ + private class CacheRecoveryLifecycle implements MetastorageLifecycleListener, DatabaseLifecycleListener { + /** Set of QuerySchema's saved on recovery. 
It's needed if cache query schema has changed after node joined to topology.*/ + private final Map querySchemas = new ConcurrentHashMap<>(); + + /** {@inheritDoc} */ + @Override public void onBaselineChange() { + onKernalStopCaches(true); + + stopCaches(true); + + sharedCtx.database().cleanupRestoredCaches(); + } + + /** {@inheritDoc} */ + @Override public void onReadyForRead(ReadOnlyMetastorage metastorage) throws IgniteCheckedException { + restoreCacheConfigurations(); + } + + /** {@inheritDoc} */ + @Override public void beforeBinaryMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { + for (DynamicCacheDescriptor cacheDescriptor : persistentCaches()) + preparePageStore(cacheDescriptor, true); + } + + /** {@inheritDoc} */ + @Override public void afterBinaryMemoryRestore( + IgniteCacheDatabaseSharedManager mgr, + GridCacheDatabaseSharedManager.RestoreBinaryState restoreState) throws IgniteCheckedException { + for (DynamicCacheDescriptor cacheDescriptor : persistentCaches()) { + startCacheInRecoveryMode(cacheDescriptor); + + querySchemas.put(cacheDescriptor.cacheId(), cacheDescriptor.schema().copy()); + } + } + + /** {@inheritDoc} */ + @Override public void afterLogicalUpdatesApplied( + IgniteCacheDatabaseSharedManager mgr, + GridCacheDatabaseSharedManager.RestoreLogicalState restoreState) throws IgniteCheckedException { + restorePartitionStates(cacheGroups(), restoreState.partitionRecoveryStates()); + } + + /** + * @param forGroups Cache groups. + * @param partitionStates Partition states. + * @throws IgniteCheckedException If failed. 
+ */ + private void restorePartitionStates( + Collection forGroups, + Map partitionStates + ) throws IgniteCheckedException { + long startRestorePart = U.currentTimeMillis(); + + if (log.isInfoEnabled()) + log.info("Restoring partition state for local groups."); + + long totalProcessed = 0; + + for (CacheGroupContext grp : forGroups) + totalProcessed += grp.offheap().restorePartitionStates(partitionStates); + + if (log.isInfoEnabled()) + log.info("Finished restoring partition state for local groups [" + + "groupsProcessed=" + forGroups.size() + + ", partitionsProcessed=" + totalProcessed + + ", time=" + (U.currentTimeMillis() - startRestorePart) + "ms]"); + } + } + + /** + * Handle of fail during cache start. + * + * @param Type of started data. + */ + private static interface StartCacheFailHandler { + /** + * Handle of fail. + * + * @param data Start data. + * @param startCacheOperation Operation for start cache. + * @throws IgniteCheckedException if failed. + */ + void handle(T data, IgniteThrowableConsumer startCacheOperation) throws IgniteCheckedException; + } + /** * */ @@ -4744,6 +5714,8 @@ private DynamicCacheStartFuture(UUID id) { // Make sure to remove future before completion. pendingFuts.remove(id, this); + context().exchange().exchangerUpdateHeartbeat(); + return super.onDone(res, err); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProxyImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProxyImpl.java index a9ce448004b6f..fa9f3cb56f995 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProxyImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheProxyImpl.java @@ -90,8 +90,11 @@ public GridCacheProxyImpl() { * @param delegate Delegate object. * @param opCtx Optional operation context which will be passed to gateway. 
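The `restorePartitionStates` helper above follows a simple aggregate-and-report pattern: iterate the groups, sum per-group results, and log totals with elapsed time. A minimal sketch of that pattern, with `restoreGroup` as a hypothetical stand-in for `CacheGroupContext.offheap().restorePartitionStates(...)`:

```java
import java.util.Arrays;
import java.util.List;

class RestoreTimingSketch {
    /** Stand-in for a per-group restore call; returns partitions processed. */
    static long restoreGroup(int partitions) {
        return partitions;
    }

    /** Restores all groups and reports totals, mirroring the patch's logging shape. */
    static long restoreAll(List<Integer> groups) {
        long startRestorePart = System.currentTimeMillis();

        long totalProcessed = 0;

        for (int parts : groups)
            totalProcessed += restoreGroup(parts);

        System.out.println("Finished restoring partition state for local groups [" +
            "groupsProcessed=" + groups.size() +
            ", partitionsProcessed=" + totalProcessed +
            ", time=" + (System.currentTimeMillis() - startRestorePart) + "ms]");

        return totalProcessed;
    }
}
```

The single summary line (rather than per-partition logging) keeps recovery logs readable on nodes with many partitions.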
*/ - public GridCacheProxyImpl(GridCacheContext ctx, IgniteInternalCache delegate, - @Nullable CacheOperationContext opCtx) { + public GridCacheProxyImpl( + GridCacheContext ctx, + IgniteInternalCache delegate, + @Nullable CacheOperationContext opCtx + ) { assert ctx != null; assert delegate != null; @@ -235,6 +238,42 @@ public IgniteInternalCache delegate() { } } + /** {@inheritDoc} */ + @Override public void preloadPartition(int part) throws IgniteCheckedException { + CacheOperationContext prev = gate.enter(opCtx); + + try { + delegate.preloadPartition(part); + } + finally { + gate.leave(prev); + } + } + + /** {@inheritDoc} */ + @Override public IgniteInternalFuture preloadPartitionAsync(int part) throws IgniteCheckedException { + CacheOperationContext prev = gate.enter(opCtx); + + try { + return delegate.preloadPartitionAsync(part); + } + finally { + gate.leave(prev); + } + } + + /** {@inheritDoc} */ + @Override public boolean localPreloadPartition(int part) throws IgniteCheckedException { + CacheOperationContext prev = gate.enter(opCtx); + + try { + return delegate.localPreloadPartition(part); + } + finally { + gate.leave(prev); + } + } + /** {@inheritDoc} */ @Override public GridCacheProxyImpl forSubjectId(UUID subjId) { return new GridCacheProxyImpl<>(ctx, delegate, @@ -887,13 +926,12 @@ public IgniteInternalCache delegate() { /** {@inheritDoc} */ @Nullable @Override public V localPeek(K key, - CachePeekMode[] peekModes, - @Nullable IgniteCacheExpiryPolicy plc) + CachePeekMode[] peekModes) throws IgniteCheckedException { CacheOperationContext prev = gate.enter(opCtx); try { - return delegate.localPeek(key, peekModes, plc); + return delegate.localPeek(key, peekModes); } finally { gate.leave(prev); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedContext.java index e4d398a3990be..846952ae937fa 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedContext.java @@ -46,6 +46,7 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.topology.PartitionsEvictManager; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; import org.apache.ignite.internal.processors.cache.jta.CacheJtaManagerAdapter; +import org.apache.ignite.internal.processors.cache.mvcc.MvccCachingManager; import org.apache.ignite.internal.processors.cache.mvcc.MvccProcessor; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.snapshot.IgniteCacheSnapshotManager; @@ -129,6 +130,9 @@ public class GridCacheSharedContext { /** */ private PartitionsEvictManager evictMgr; + /** Mvcc caching manager. */ + private MvccCachingManager mvccCachingMgr; + /** Cache contexts map. */ private ConcurrentHashMap> ctxMap; @@ -171,6 +175,9 @@ public class GridCacheSharedContext { /** */ private final List stateAwareMgrs; + /** Cluster is in read-only mode. */ + private volatile boolean readOnlyMode; + /** * @param kernalCtx Context. * @param txMgr Transaction manager. 
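Each method added to `GridCacheProxyImpl` in this patch (`preloadPartition`, `preloadPartitionAsync`, `localPreloadPartition`) follows the same discipline: enter the cache gateway with the proxy's operation context, delegate, and restore the previous context in a `finally` block. A minimal sketch, with `Gate` and `Ctx` as simplified stand-ins for `GridCacheGateway` and `CacheOperationContext`:

```java
class GateSketch {
    static final class Ctx {
        final String name;

        Ctx(String name) {
            this.name = name;
        }
    }

    static final class Gate {
        private Ctx cur = new Ctx("root");

        /** Installs {@code opCtx} and returns the context it replaced. */
        Ctx enter(Ctx opCtx) {
            Ctx prev = cur;

            cur = opCtx;

            return prev;
        }

        /** Restores the previously installed context. */
        void leave(Ctx prev) {
            cur = prev;
        }

        Ctx current() {
            return cur;
        }
    }

    /** Mirrors the shape of the new proxy methods: enter, delegate, restore. */
    static String preloadPartition(Gate gate, Ctx opCtx, int part) {
        Ctx prev = gate.enter(opCtx);

        try {
            return "preloaded-" + part; // delegate.preloadPartition(part)
        }
        finally {
            gate.leave(prev); // restored even if the delegate throws
        }
    }
}
```

Restoring `prev` rather than clearing the context is what makes nested gated calls safe.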
@@ -204,7 +211,8 @@ public GridCacheSharedContext( GridCacheSharedTtlCleanupManager ttlMgr, PartitionsEvictManager evictMgr, CacheJtaManagerAdapter jtaMgr, - Collection storeSesLsnrs + Collection storeSesLsnrs, + MvccCachingManager mvccCachingMgr ) { this.kernalCtx = kernalCtx; @@ -224,7 +232,8 @@ public GridCacheSharedContext( affMgr, ioMgr, ttlMgr, - evictMgr + evictMgr, + mvccCachingMgr ); this.storeSesLsnrs = storeSesLsnrs; @@ -390,7 +399,8 @@ void onReconnected(boolean active) throws IgniteCheckedException { affMgr, ioMgr, ttlMgr, - evictMgr + evictMgr, + mvccCachingMgr ); this.mgrs = mgrs; @@ -449,7 +459,8 @@ private void setManagers( CacheAffinitySharedManager affMgr, GridCacheIoManager ioMgr, GridCacheSharedTtlCleanupManager ttlMgr, - PartitionsEvictManager evictMgr + PartitionsEvictManager evictMgr, + MvccCachingManager mvccCachingMgr ) { this.mvccMgr = add(mgrs, mvccMgr); this.verMgr = add(mgrs, verMgr); @@ -466,6 +477,7 @@ private void setManagers( this.ioMgr = add(mgrs, ioMgr); this.ttlMgr = add(mgrs, ttlMgr); this.evictMgr = add(mgrs, evictMgr); + this.mvccCachingMgr = add(mgrs, mvccCachingMgr); } /** @@ -811,6 +823,13 @@ public PartitionsEvictManager evict() { return evictMgr; } + /** + * @return Mvcc transaction enlist caching manager. + */ + public MvccCachingManager mvccCaching() { + return mvccCachingMgr; + } + /** * @return Node ID. */ @@ -1017,10 +1036,9 @@ public IgniteInternalFuture commitTxAsync(GridNearTxLocal tx) /** * @param tx Transaction to rollback. - * @throws IgniteCheckedException If failed. * @return Rollback future. 
*/ - public IgniteInternalFuture rollbackTxAsync(GridNearTxLocal tx) throws IgniteCheckedException { + public IgniteInternalFuture rollbackTxAsync(GridNearTxLocal tx) { boolean clearThreadMap = txMgr.threadLocalTx(null) == tx; if (clearThreadMap) @@ -1108,4 +1126,26 @@ public void finishDhtAtomicUpdate(GridCacheVersion ver) { private int dhtAtomicUpdateIndex(GridCacheVersion ver) { return U.safeAbs(ver.hashCode()) % dhtAtomicUpdCnt.length(); } + + /** + * @return {@code true} if cluster is in read-only mode. + */ + public boolean readOnlyMode() { + return readOnlyMode; + } + + /** + * @param readOnlyMode Read-only flag. + */ + public void readOnlyMode(boolean readOnlyMode) { + this.readOnlyMode = readOnlyMode; + } + + /** + * For test purposes. + * @param txMgr Tx manager. + */ + public void setTxManager(IgniteTxManager txMgr) { + this.txMgr = txMgr; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedTtlCleanupManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedTtlCleanupManager.java index 7a543546eb700..3f64f56923ab4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedTtlCleanupManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheSharedTtlCleanupManager.java @@ -130,6 +130,17 @@ private class CleanupWorker extends GridWorker { Throwable err = null; try { + blockingSectionBegin(); + + try { + cctx.discovery().localJoin(); + } + finally { + blockingSectionEnd(); + } + + assert !cctx.kernalContext().recoveryMode(); + while (!isCancelled()) { boolean expiredRemains = false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManager.java index 2166ce5eba935..f82cc759938ad 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManager.java @@ -20,6 +20,7 @@ import java.util.concurrent.atomic.LongAdder; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.NodeStoppingException; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter; @@ -38,9 +39,22 @@ * {@link CacheConfiguration#isEagerTtl()} flag is set. */ public class GridCacheTtlManager extends GridCacheManagerAdapter { - /** Entries pending removal. */ + /** + * Throttling timeout in millis which avoid excessive PendingTree access on unwind + * if there is nothing to clean yet. + */ + public static final long UNWIND_THROTTLING_TIMEOUT = Long.getLong( + IgniteSystemProperties.IGNITE_UNWIND_THROTTLING_TIMEOUT, 500L); + + /** Entries pending removal. This collection tracks entries for near cache only. */ private GridConcurrentSkipListSetEx pendingEntries; + /** Indicates that */ + protected volatile boolean hasPendingEntries; + + /** Timestamp when next clean try will be allowed. Used for throttling on per-cache basis. */ + protected volatile long nextCleanTime; + /** See {@link CacheConfiguration#isEagerTtl()}. */ private volatile boolean eagerTtlEnabled; @@ -71,7 +85,7 @@ public class GridCacheTtlManager extends GridCacheManagerAdapter { } if (touch) - entry.touch(null); + entry.touch(); } }; @@ -140,6 +154,22 @@ public long pendingSize() throws IgniteCheckedException { return (pendingEntries != null ? pendingEntries.sizex() : 0) + cctx.offheap().expiredSize(); } + /** + * Updates the flag {@code hasPendingEntries} with the given value. + * + * @param update {@code true} if the underlying pending tree has entries with expire policy enabled. 
+ */ + public void hasPendingEntries(boolean update) { + hasPendingEntries = update; + } + + /** + * @return {@code true} if the underlying pending tree has entries with expire policy enabled. + */ + public boolean hasPendingEntries() { + return hasPendingEntries; + } + /** {@inheritDoc} */ @Override public void printMemoryStats() { try { @@ -153,13 +183,6 @@ public long pendingSize() throws IgniteCheckedException { } } - /** - * Expires entries by TTL. - */ - public void expire() { - expire(-1); - } - /** * Processes specified amount of expired entries. * @@ -201,14 +224,20 @@ public boolean expire(int amount) { } } - if(!(cctx.affinityNode() && cctx.ttl().eagerTtlEnabled())) + if(!cctx.affinityNode()) return false; /* Pending tree never contains entries for that cache */ + if (!hasPendingEntries || nextCleanTime > U.currentTimeMillis()) + return false; + boolean more = cctx.offheap().expire(dhtCtx, expireC, amount); if (more) return true; + // There is nothing to clean, so the next clean up can be postponed. + nextCleanTime = U.currentTimeMillis() + UNWIND_THROTTLING_TIMEOUT; + if (amount != -1 && pendingEntries != null) { EntryWrapper e = pendingEntries.firstx(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUpdateTxResult.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUpdateTxResult.java index 4543dfd5a4a7d..8a681009225a7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUpdateTxResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUpdateTxResult.java @@ -30,7 +30,7 @@ * Cache entry transactional update result. */ public class GridCacheUpdateTxResult { - /** Success flag.*/ + /** Success flag. */ private final boolean success; /** Partition update counter. */ @@ -51,6 +51,18 @@ public class GridCacheUpdateTxResult { /** Previous value. */ private CacheObject prevVal; + /** Invoke result. 
*/ + private CacheInvokeResult invokeRes; + + /** New value. */ + private CacheObject newVal; + + /** Value before the current tx. */ + private CacheObject oldVal; + + /** Filtered flag. */ + private boolean filtered; + /** * Constructor. * @@ -146,7 +158,6 @@ public WALPointer loggedPointer() { } /** - * * @return Mvcc history rows. */ @Nullable public List mvccHistory() { @@ -154,7 +165,6 @@ public WALPointer loggedPointer() { } /** - * * @param mvccHistory Mvcc history rows. */ public void mvccHistory(List mvccHistory) { @@ -162,21 +172,75 @@ public void mvccHistory(List mvccHistory) { } /** - * * @return Previous value. */ - @Nullable public CacheObject prevValue() { + @Nullable public CacheObject prevValue() { return prevVal; } /** - * * @param prevVal Previous value. */ - public void prevValue( @Nullable CacheObject prevVal) { + public void prevValue(@Nullable CacheObject prevVal) { this.prevVal = prevVal; } + /** + * @param result Entry processor invoke result. + */ + public void invokeResult(CacheInvokeResult result) { + invokeRes = result; + } + + /** + * @return Invoke result. + */ + public CacheInvokeResult invokeResult() { + return invokeRes; + } + + /** + * @return New value. + */ + public CacheObject newValue() { + return newVal; + } + + /** + * @return Old value. + */ + public CacheObject oldValue() { + return oldVal; + } + + /** + * @param newVal New value. + */ + public void newValue(CacheObject newVal) { + this.newVal = newVal; + } + + /** + * @param oldVal Old value. + */ + public void oldValue(CacheObject oldVal) { + this.oldVal = oldVal; + } + + /** + * @return Filtered flag. + */ + public boolean filtered() { + return filtered; + } + + /** + * @param filtered Filtered flag. 
+ */ + public void filtered(boolean filtered) { + this.filtered = filtered; + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(GridCacheUpdateTxResult.class, this); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUtils.java index 91a449f04013f..ca60e29ab4187 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheUtils.java @@ -58,6 +58,7 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; +import org.apache.ignite.spi.encryption.EncryptionSpi; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException; import org.apache.ignite.internal.IgniteInternalFuture; @@ -1148,6 +1149,17 @@ public static int cacheId(String cacheName) { return 1; } + /** + * @param cacheName Cache name. + * @param grpName Group name. + * @return Group ID. + */ + public static int cacheGroupId(String cacheName, @Nullable String grpName) { + assert cacheName != null; + + return grpName != null ? CU.cacheId(grpName) : CU.cacheId(cacheName); + } + /** * @param cfg Grid configuration. * @param cacheName Cache name. 
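The `GridCacheTtlManager` changes above move unwind throttling to a per-cache basis: when a sweep finds nothing to expire, further `PendingTree` access is skipped until the throttling window (`IGNITE_UNWIND_THROTTLING_TIMEOUT`, 500 ms by default) elapses. A sketch of that control flow; the clock is passed in explicitly here for determinism, where the real code uses `U.currentTimeMillis()`:

```java
class TtlThrottleSketch {
    static final long UNWIND_THROTTLING_TIMEOUT = 500L;

    /** Set when the underlying pending tree has entries with expire policy enabled. */
    volatile boolean hasPendingEntries = true;

    /** Timestamp when the next clean try will be allowed. */
    volatile long nextCleanTime;

    /** Stand-in for the number of expired rows the offheap sweep would find. */
    int expiredBacklog;

    /** @return {@code true} if an entry was expired on this call. */
    boolean expire(long now) {
        if (!hasPendingEntries || nextCleanTime > now)
            return false; // Nothing to do, or still throttled.

        if (expiredBacklog > 0) {
            expiredBacklog--;

            return true;
        }

        // There is nothing to clean, so the next clean up can be postponed.
        nextCleanTime = now + UNWIND_THROTTLING_TIMEOUT;

        return false;
    }
}
```

The point of the volatile `nextCleanTime` is that unwind runs on hot operation paths, so an empty tree must cost a single comparison, not a tree traversal.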
@@ -1305,7 +1317,7 @@ public static long expireTimeInPast() { if (e instanceof CachePartialUpdateCheckedException) return new CachePartialUpdateException((CachePartialUpdateCheckedException)e); - else if (e instanceof ClusterTopologyServerNotFoundException) + else if (e.hasCause(ClusterTopologyServerNotFoundException.class)) return new CacheServerNotFoundException(e.getMessage(), e); else if (e instanceof SchemaOperationException) return new CacheException(e.getMessage(), e); @@ -1743,7 +1755,7 @@ else if (cfg.getCacheMode() == REPLICATED) { boolean readThrough, boolean skipVals ) { - if (!readThrough || skipVals || + if (cctx.mvccEnabled() || !readThrough || skipVals || (key != null && !cctx.affinity().backupsByKey(key, topVer).contains(cctx.localNode()))) return null; @@ -1784,7 +1796,7 @@ private void process(KeyCacheObject key, CacheObject val, GridCacheVersion ver, } finally { if (entry != null) - entry.touch(topVer); + entry.touch(); cctx.shared().database().checkpointReadUnlock(); } @@ -1899,6 +1911,17 @@ public static boolean isPersistenceEnabled(DataStorageConfiguration cfg) { return false; } + /** + * @param pageSize Page size. + * @param encSpi Encryption spi. + * @return Page size without encryption overhead. + */ + public static int encryptedPageSize(int pageSize, EncryptionSpi encSpi) { + return pageSize + - (encSpi.encryptedSizeNoPadding(pageSize) - pageSize) + - encSpi.blockSize(); /* For CRC. */ + } + /** * @param sctx Shared context. * @param cacheIds Cache ids. 
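The new `CU.encryptedPageSize` helper above computes the usable payload of an encrypted page: the raw page size minus the cipher's expansion (`encryptedSizeNoPadding(pageSize) - pageSize`) minus one cipher block reserved for the CRC. A worked sketch; the `EncryptionSpi` stub below is a hypothetical AES-like cipher (16-byte blocks, one block of header overhead), not the real SPI implementation:

```java
class EncryptedPageSizeSketch {
    /** Hypothetical stand-in for org.apache.ignite.spi.encryption.EncryptionSpi. */
    interface EncryptionSpi {
        int encryptedSizeNoPadding(int dataSize);

        int blockSize();
    }

    /** AES-like stub: 16-byte blocks, one block of fixed overhead. */
    static final EncryptionSpi AES_LIKE = new EncryptionSpi() {
        @Override public int encryptedSizeNoPadding(int dataSize) {
            return dataSize + 16;
        }

        @Override public int blockSize() {
            return 16;
        }
    };

    /** Mirrors the patch's formula for page size without encryption overhead. */
    static int encryptedPageSize(int pageSize, EncryptionSpi encSpi) {
        return pageSize
            - (encSpi.encryptedSizeNoPadding(pageSize) - pageSize) // Cipher expansion.
            - encSpi.blockSize();                                  // For CRC.
    }
}
```

With a 4096-byte page and the stub above, 4096 - 16 - 16 = 4064 bytes remain for page data.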
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridChangeGlobalStateMessageResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridChangeGlobalStateMessageResponse.java index e49be4934eeb9..4cf9e2388229c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridChangeGlobalStateMessageResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridChangeGlobalStateMessageResponse.java @@ -116,13 +116,13 @@ public Throwable getError() { } switch (writer.state()) { - case 2: + case 3: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 3: + case 4: if (!writer.writeUuid("requestId", requestId)) return false; @@ -144,7 +144,7 @@ public Throwable getError() { return false; switch (reader.state()) { - case 2: + case 3: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -152,7 +152,7 @@ public Throwable getError() { reader.incrementState(); - case 3: + case 4: requestId = reader.readUuid("requestId"); if (!reader.isLastRead()) @@ -172,7 +172,7 @@ public Throwable getError() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 4; + return 5; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManager.java index f576cc58c02a7..7f0fc3096d9a0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManager.java @@ -20,16 +20,19 @@ import java.util.List; import java.util.Map; import javax.cache.Cache; +import javax.cache.processor.EntryProcessor; import org.apache.ignite.IgniteCheckedException; import 
org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtDemandedPartitionsMap; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.cache.persistence.CacheSearchRow; import org.apache.ignite.internal.processors.cache.persistence.RootPage; import org.apache.ignite.internal.processors.cache.persistence.RowStore; +import org.apache.ignite.internal.processors.cache.persistence.partstate.GroupPartitionId; +import org.apache.ignite.internal.processors.cache.persistence.partstate.PartitionRecoverState; import org.apache.ignite.internal.processors.cache.persistence.tree.reuse.ReuseList; import org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree; import org.apache.ignite.internal.processors.cache.tree.mvcc.data.MvccUpdateResult; @@ -80,6 +83,15 @@ public interface IgniteCacheOffheapManager { */ public void stop(); + /** + * Pre-create partitions that resides in page memory or WAL and restores their state. + * + * @param partitionRecoveryStates Partition recovery states. + * @return Number of processed partitions. + * @throws IgniteCheckedException If failed. + */ + long restorePartitionStates(Map partitionRecoveryStates) throws IgniteCheckedException; + /** * Partition counter update callback. May be overridden by plugin-provided subclasses. * @@ -179,10 +191,11 @@ public void invoke(GridCacheContext cctx, KeyCacheObject key, GridDhtLocalPartit /** * @param cctx Cache context. * @param key Key. + * @param mvccSnapshot MVCC snapshot. 
* @return Cached row, if available, null otherwise. * @throws IgniteCheckedException If failed. */ - @Nullable public CacheDataRow mvccRead(GridCacheContext cctx, KeyCacheObject key, MvccSnapshot ver) + @Nullable public CacheDataRow mvccRead(GridCacheContext cctx, KeyCacheObject key, MvccSnapshot mvccSnapshot) throws IgniteCheckedException; /** @@ -274,10 +287,13 @@ public boolean mvccInitialValueIfAbsent( * @param expireTime Expire time. * @param mvccSnapshot MVCC snapshot. * @param primary {@code True} if on primary node. - * @param needHistory Flag to collect history. + * @param needHist Flag to collect history. * @param noCreate Flag indicating that row should not be created if absent. + * @param needOldVal {@code True} if need old value. * @param filter Filter. * @param retVal Flag to return previous value. + * @param entryProc Entry processor. + * @param invokeArgs Entry processor invoke arguments. * @return Update result. * @throws IgniteCheckedException If failed. */ @@ -288,16 +304,20 @@ public boolean mvccInitialValueIfAbsent( long expireTime, MvccSnapshot mvccSnapshot, boolean primary, - boolean needHistory, + boolean needHist, boolean noCreate, + boolean needOldVal, @Nullable CacheEntryPredicate filter, - boolean retVal) throws IgniteCheckedException; + boolean retVal, + EntryProcessor entryProc, + Object[] invokeArgs) throws IgniteCheckedException; /** * @param entry Entry. * @param mvccSnapshot MVCC snapshot. * @param primary {@code True} if on primary node. - * @param needHistory Flag to collect history. + * @param needHist Flag to collect history. + * @param needOldVal {@code True} if need old value. * @param filter Filter. * @param retVal Flag to return previous value. * @return Update result. 
@@ -307,7 +327,8 @@ public boolean mvccInitialValueIfAbsent( GridCacheMapEntry entry, MvccSnapshot mvccSnapshot, boolean primary, - boolean needHistory, + boolean needHist, + boolean needOldVal, @Nullable CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException; @@ -400,6 +421,25 @@ public void update( @Nullable CacheDataRow oldRow ) throws IgniteCheckedException; + /** + * @param cctx Cache context. + * @param key Key. + * @param val Value. + * @param ver Version. + * @param expireTime Expire time. + * @param part Partition. + * @param mvccVer Mvcc version. + * @throws IgniteCheckedException If failed. + */ + void mvccApplyUpdate( + GridCacheContext cctx, + KeyCacheObject key, + CacheObject val, + GridCacheVersion ver, + long expireTime, + GridDhtLocalPartition part, + MvccVersion mvccVer) throws IgniteCheckedException; + /** * @param cctx Cache context. * @param key Key. @@ -421,11 +461,11 @@ public void remove( public int onUndeploy(ClassLoader ldr); /** - * * @param cacheId Cache ID. * @param primary Primary entries flag. * @param backup Backup entries flag. * @param topVer Topology version. + * @param mvccSnapshot MVCC snapshot. * @return Rows iterator. * @throws IgniteCheckedException If failed. */ @@ -484,20 +524,21 @@ public IgniteRebalanceIterator rebalanceIterator(IgniteDhtDemandedPartitionsMap /** * @param cctx Cache context. - * @param primary Primary entries flag. - * @param backup Backup entries flag. + * @param primary {@code True} if need to return primary entries. + * @param backup {@code True} if need to return backup entries. * @param topVer Topology version. * @param keepBinary Keep binary flag. + * @param mvccSnapshot MVCC snapshot. * @return Entries iterator. * @throws IgniteCheckedException If failed. 
*/ - // TODO: MVCC> public GridCloseableIterator> cacheEntriesIterator( GridCacheContext cctx, final boolean primary, final boolean backup, final AffinityTopologyVersion topVer, - final boolean keepBinary) throws IgniteCheckedException; + final boolean keepBinary, + @Nullable final MvccSnapshot mvccSnapshot) throws IgniteCheckedException; /** * @param cacheId Cache ID. @@ -549,17 +590,18 @@ public long cacheEntriesCount(int cacheId, boolean primary, boolean backup, Affi /** * @param cacheId Cache ID. * @param idxName Index name. + * @param segment Segment. * @return Root page for index tree. * @throws IgniteCheckedException If failed. */ - public RootPage rootPageForIndex(int cacheId, String idxName) throws IgniteCheckedException; + public RootPage rootPageForIndex(int cacheId, String idxName, int segment) throws IgniteCheckedException; /** * @param cacheId Cache ID. * @param idxName Index name. * @throws IgniteCheckedException If failed. */ - public void dropRootPageForIndex(int cacheId, String idxName) throws IgniteCheckedException; + public void dropRootPageForIndex(int cacheId, String idxName, int segment) throws IgniteCheckedException; /** * @param idxName Index name. @@ -580,6 +622,14 @@ public long cacheEntriesCount(int cacheId, boolean primary, boolean backup, Affi */ public long totalPartitionEntriesCount(int part); + /** + * Preload a partition. Must be called under partition reservation for DHT caches. + * + * @param part Partition. + * @throws IgniteCheckedException If failed. + */ + public void preloadPartition(int part) throws IgniteCheckedException; + /** * */ @@ -627,6 +677,11 @@ interface CacheDataStore { */ long fullSize(); + /** + * @return {@code True} if there are no items in the store. + */ + boolean isEmpty(); + /** * Updates size metric for particular cache. * @@ -797,9 +852,12 @@ boolean mvccUpdateRowWithPreloadInfo( * @param expireTime Expire time. * @param mvccSnapshot MVCC snapshot. * @param filter Filter. 
+ * @param entryProc Entry processor. + * @param invokeArgs Entry processor invoke arguments. * @param primary {@code True} if update is executed on primary node. - * @param needHistory Flag to collect history. + * @param needHist Flag to collect history. * @param noCreate Flag indicating that row should not be created if absent. + * @param needOldVal {@code True} if need old value. * @param retVal Flag to return previous value. * @return Update result. * @throws IgniteCheckedException If failed. @@ -812,9 +870,12 @@ MvccUpdateResult mvccUpdate( long expireTime, MvccSnapshot mvccSnapshot, @Nullable CacheEntryPredicate filter, + EntryProcessor entryProc, + Object[] invokeArgs, boolean primary, - boolean needHistory, + boolean needHist, boolean noCreate, + boolean needOldVal, boolean retVal) throws IgniteCheckedException; /** @@ -824,6 +885,7 @@ MvccUpdateResult mvccUpdate( * @param filter Filter. * @param primary {@code True} if update is executed on primary node. * @param needHistory Flag to collect history. + * @param needOldVal {@code True} if need old value. * @param retVal Flag to return previous value. * @return List of transactions to wait for. * @throws IgniteCheckedException If failed. @@ -835,6 +897,7 @@ MvccUpdateResult mvccRemove( @Nullable CacheEntryPredicate filter, boolean primary, boolean needHistory, + boolean needOldVal, boolean retVal) throws IgniteCheckedException; /** @@ -897,6 +960,24 @@ MvccUpdateResult mvccLock( */ public void invoke(GridCacheContext cctx, KeyCacheObject key, OffheapInvokeClosure c) throws IgniteCheckedException; + /** + * + * @param cctx Cache context. + * @param key Key. + * @param val Value. + * @param ver Version. + * @param expireTime Expire time. + * @param mvccVer Mvcc version. 
+ * @throws IgniteCheckedException + */ + void mvccApplyUpdate(GridCacheContext cctx, + KeyCacheObject key, + CacheObject val, + GridCacheVersion ver, + long expireTime, + MvccVersion mvccVer + ) throws IgniteCheckedException; + /** * @param cctx Cache context. * @param key Key. @@ -1037,7 +1118,7 @@ public GridCursor cursor(int cacheId, KeyCacheObject low /** * @param cntr Counter. */ - void updateInitialCounter(long cntr); + public void updateInitialCounter(long cntr); /** * Inject rows cache cleaner. @@ -1050,8 +1131,20 @@ public GridCursor cursor(int cacheId, KeyCacheObject low * Return PendingTree for data store. * * @return PendingTree instance. - * @throws IgniteCheckedException */ - PendingEntriesTree pendingTree(); + public PendingEntriesTree pendingTree(); + + /** + * Flushes pending update counters closing all possible gaps. + * + * @return Even-length array of pairs [start, end] for each gap. + */ + GridLongList finalizeUpdateCounters(); + + /** + * Preload a store into page memory. + * @throws IgniteCheckedException If failed. 
+ */ + public void preload() throws IgniteCheckedException; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManagerImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManagerImpl.java index e0b9c06f16627..69ca5a4555516 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManagerImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManagerImpl.java @@ -32,6 +32,7 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import javax.cache.Cache; +import javax.cache.processor.EntryProcessor; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; @@ -42,13 +43,16 @@ import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageMvccUpdateNewTxStateHintRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageMvccUpdateTxStateHintRecord; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtDetachedCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.CachePartitionPartialCountersMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtDemandedPartitionsMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteHistoricalIterator; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteRebalanceIteratorImpl; +import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; +import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshotWithoutTxs; +import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils; import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxState; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; @@ -56,6 +60,8 @@ import org.apache.ignite.internal.processors.cache.persistence.CacheSearchRow; import org.apache.ignite.internal.processors.cache.persistence.RootPage; import org.apache.ignite.internal.processors.cache.persistence.RowStore; +import org.apache.ignite.internal.processors.cache.persistence.partstate.GroupPartitionId; +import org.apache.ignite.internal.processors.cache.persistence.partstate.PartitionRecoverState; import org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree; import org.apache.ignite.internal.processors.cache.persistence.tree.io.BPlusIO; import org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO; @@ -129,13 +135,6 @@ */ @SuppressWarnings("PublicInnerClass") public class IgniteCacheOffheapManagerImpl implements IgniteCacheOffheapManager { - /** - * Throttling timeout in millis which avoid excessive PendingTree access on unwind - * if there is nothing to clean yet. 
- */ - public static final long UNWIND_THROTTLING_TIMEOUT = Long.getLong( - IgniteSystemProperties.IGNITE_UNWIND_THROTTLING_TIMEOUT, 500L); - /** */ protected GridCacheSharedContext ctx; @@ -154,12 +153,6 @@ public class IgniteCacheOffheapManagerImpl implements IgniteCacheOffheapManager /** */ private PendingEntriesTree pendingEntries; - /** */ - protected volatile boolean hasPendingEntries; - - /** Timestamp when next clean try will be allowed. Used for throttling on per-group basis. */ - protected volatile long nextCleanTime; - /** */ private final GridAtomicLong globalRmvId = new GridAtomicLong(U.currentTimeMillis() * 1000_000); @@ -254,6 +247,11 @@ protected void initDataStructures() throws IgniteCheckedException { } } + /** {@inheritDoc} */ + @Override public long restorePartitionStates(Map partitionRecoveryStates) throws IgniteCheckedException { + return 0; // No-op. + } + /** {@inheritDoc} */ @Override public void onKernalStop() { busyLock.block(); @@ -334,6 +332,11 @@ public CacheDataStore dataStore(int part) { } } + /** {@inheritDoc} */ + @Override public void preloadPartition(int p) throws IgniteCheckedException { + throw new IgniteCheckedException("Operation only applicable to caches with enabled persistence"); + } + /** * @param p Partition. * @return Partition data. 
@@ -515,8 +518,11 @@ private Iterator cacheData(boolean primary, boolean backup, Affi boolean primary, boolean needHistory, boolean noCreate, + boolean needOldVal, @Nullable CacheEntryPredicate filter, - boolean retVal) throws IgniteCheckedException { + boolean retVal, + EntryProcessor entryProc, + Object[] invokeArgs) throws IgniteCheckedException { if (entry.detached() || entry.isNear()) return null; @@ -529,9 +535,12 @@ private Iterator cacheData(boolean primary, boolean backup, Affi expireTime, mvccSnapshot, filter, + entryProc, + invokeArgs, primary, needHistory, noCreate, + needOldVal, retVal); } @@ -541,6 +550,7 @@ private Iterator cacheData(boolean primary, boolean backup, Affi MvccSnapshot mvccSnapshot, boolean primary, boolean needHistory, + boolean needOldVal, @Nullable CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException { if (entry.detached() || entry.isNear()) @@ -554,6 +564,7 @@ private Iterator cacheData(boolean primary, boolean backup, Affi filter, primary, needHistory, + needOldVal, retVal); } @@ -611,6 +622,24 @@ private Iterator cacheData(boolean primary, boolean backup, Affi return dataStore(entry.localPartition()).mvccLock(entry.context(), entry.key(), mvccSnapshot); } + /** {@inheritDoc} */ + @Override public void mvccApplyUpdate( + GridCacheContext cctx, + KeyCacheObject key, + CacheObject val, + GridCacheVersion ver, + long expireTime, + GridDhtLocalPartition part, + MvccVersion mvccVer) throws IgniteCheckedException { + + dataStore(part).mvccApplyUpdate(cctx, + key, + val, + ver, + expireTime, + mvccVer); + } + /** {@inheritDoc} */ @Override public void remove( GridCacheContext cctx, @@ -645,13 +674,13 @@ private Iterator cacheData(boolean primary, boolean backup, Affi } /** {@inheritDoc} */ - @Nullable @Override public CacheDataRow mvccRead(GridCacheContext cctx, KeyCacheObject key, MvccSnapshot ver) + @Nullable @Override public CacheDataRow mvccRead(GridCacheContext cctx, KeyCacheObject key, MvccSnapshot 
mvccSnapshot) throws IgniteCheckedException { - assert ver != null; + assert mvccSnapshot != null; CacheDataStore dataStore = dataStore(cctx, key); - CacheDataRow row = dataStore != null ? dataStore.mvccFind(cctx, key, ver) : null; + CacheDataRow row = dataStore != null ? dataStore.mvccFind(cctx, key, mvccSnapshot) : null; assert row == null || row.value() != null : row; @@ -770,21 +799,16 @@ private Iterator cacheData(boolean primary, boolean backup, Affi return 0; } - /** - * @param primary {@code True} if need return primary entries. - * @param backup {@code True} if need return backup entries. - * @param topVer Topology version to use. - * @return Entries iterator. - * @throws IgniteCheckedException If failed. - */ + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public GridCloseableIterator> cacheEntriesIterator( final GridCacheContext cctx, final boolean primary, final boolean backup, final AffinityTopologyVersion topVer, - final boolean keepBinary) throws IgniteCheckedException { - final Iterator it = cacheIterator(cctx.cacheId(), primary, backup, topVer, null); + final boolean keepBinary, + @Nullable final MvccSnapshot mvccSnapshot) throws IgniteCheckedException { + final Iterator it = cacheIterator(cctx.cacheId(), primary, backup, topVer, mvccSnapshot); return new GridCloseableIteratorAdapter>() { /** */ @@ -1101,14 +1125,14 @@ private long allocateForTree() throws IgniteCheckedException { } /** {@inheritDoc} */ - @Override public RootPage rootPageForIndex(int cacheId, String idxName) throws IgniteCheckedException { + @Override public RootPage rootPageForIndex(int cacheId, String idxName, int segment) throws IgniteCheckedException { long pageId = allocateForTree(); return new RootPage(new FullPageId(pageId, grp.groupId()), true); } /** {@inheritDoc} */ - @Override public void dropRootPageForIndex(int cacheId, String idxName) throws IgniteCheckedException { + @Override public void dropRootPageForIndex(int cacheId, String idxName, int segment) 
throws IgniteCheckedException { // No-op. } @@ -1309,17 +1333,10 @@ protected final String treeName(int p) { ) throws IgniteCheckedException { assert !cctx.isNear() : cctx.name(); - if (!hasPendingEntries || nextCleanTime > U.currentTimeMillis()) - return false; - assert pendingEntries != null; int cleared = expireInternal(cctx, c, amount); - // Throttle if there is nothing to clean anymore. - if (cleared < amount) - nextCleanTime = U.currentTimeMillis() + UNWIND_THROTTLING_TIMEOUT; - return amount != -1 && cleared >= amount; } @@ -1492,6 +1509,24 @@ void decrementSize(int cacheId) { return storageSize.get(); } + /** + * @return {@code True} if there are no items in the store. + */ + @Override public boolean isEmpty() { + try { + /* + * TODO https://issues.apache.org/jira/browse/IGNITE-10082 + * Using counters is cheaper than tree operations. Return to the size-based check after the ticket is resolved. + */ + return grp.mvccEnabled() ? dataTree.isEmpty() : storageSize.get() == 0; + } + catch (IgniteCheckedException e) { + U.error(log, "Failed to perform operation.", e); + + return false; + } + } + /** {@inheritDoc} */ @Override public void updateSize(int cacheId, long delta) { storageSize.addAndGet(delta); @@ -1545,6 +1580,11 @@ void decrementSize(int cacheId) { pCntr.update(start, delta); } + /** {@inheritDoc} */ + @Override public GridLongList finalizeUpdateCounters() { + return pCntr.finalizeUpdateCounters(); + } + /** {@inheritDoc} */ @Override public String name() { return name; @@ -1857,9 +1897,12 @@ private void invoke0(GridCacheContext cctx, CacheSearchRow row, OffheapInvokeClo long expireTime, MvccSnapshot mvccSnapshot, @Nullable CacheEntryPredicate filter, + EntryProcessor entryProc, + Object[] invokeArgs, boolean primary, boolean needHistory, boolean noCreate, + boolean needOldVal, boolean retVal) throws IgniteCheckedException { assert mvccSnapshot != null; assert primary || !needHistory; @@ -1874,7 +1917,9 @@ private void invoke0(GridCacheContext cctx, 
CacheSearchRow row, OffheapInvokeClo // Make sure value bytes initialized. key.valueBytes(coCtx); - val.valueBytes(coCtx); + + if(val != null) + val.valueBytes(coCtx); MvccUpdateDataRow updateRow = new MvccUpdateDataRow( cctx, @@ -1891,7 +1936,8 @@ private void invoke0(GridCacheContext cctx, CacheSearchRow row, OffheapInvokeClo needHistory, // we follow fast update visit flow here if row cannot be created by current operation noCreate, - retVal); + needOldVal, + retVal || entryProc != null); assert cctx.shared().database().checkpointLockIsHeldByThread(); @@ -1920,12 +1966,46 @@ else if (res == ResultType.VERSION_FOUND || // exceptional case assert oldRow != null && oldRow.link() != 0 : oldRow; oldRow.key(key); - - rowStore.updateDataRow(oldRow.link(), mvccUpdateMarker, mvccSnapshot); } else assert res == ResultType.PREV_NULL; + if (entryProc != null) { + entryProc = EntryProcessorResourceInjectorProxy.wrap(cctx.kernalContext(), entryProc); + + CacheInvokeEntry.Operation op = applyEntryProcessor(cctx, key, ver, entryProc, invokeArgs, updateRow, oldRow); + + if (op == CacheInvokeEntry.Operation.NONE) { + if (res == ResultType.PREV_NOT_NULL) + updateRow.value(oldRow.value()); // Restore prev. value. + + updateRow.resultType(ResultType.FILTERED); + + cleanup(cctx, updateRow.cleanupRows()); + + return updateRow; + } + + // Mark old version as removed. + if (res == ResultType.PREV_NOT_NULL) { + rowStore.updateDataRow(oldRow.link(), mvccUpdateMarker, mvccSnapshot); + + if (op == CacheInvokeEntry.Operation.REMOVE) { + updateRow.resultType(ResultType.REMOVED_NOT_NULL); + + cleanup(cctx, updateRow.cleanupRows()); + + clearPendingEntries(cctx, oldRow); + + return updateRow; // Won't create new version on remove. 
+ } + } + else + assert op != CacheInvokeEntry.Operation.REMOVE; + } + else if (oldRow != null) + rowStore.updateDataRow(oldRow.link(), mvccUpdateMarker, mvccSnapshot); + if (!grp.storeCacheIdInDataPage() && updateRow.cacheId() != CU.UNDEFINED_CACHE_ID) { updateRow.cacheId(CU.UNDEFINED_CACHE_ID); @@ -1967,6 +2047,56 @@ else if (res == ResultType.VERSION_FOUND || // exceptional case } } + /** + * + * @param cctx Cache context. + * @param key entry key. + * @param ver Entry version. + * @param entryProc Entry processor. + * @param invokeArgs Entry processor invoke arguments. + * @param updateRow Row for update. + * @param oldRow Old row. + * @return Entry processor operation. + */ + @SuppressWarnings("unchecked") + private CacheInvokeEntry.Operation applyEntryProcessor(GridCacheContext cctx, KeyCacheObject key, GridCacheVersion ver, + EntryProcessor entryProc, Object[] invokeArgs, MvccUpdateDataRow updateRow, + CacheDataRow oldRow) { + Object procRes = null; + Exception err = null; + + CacheObject oldVal = oldRow == null ? null : oldRow.value(); + + CacheInvokeEntry invokeEntry = new CacheInvokeEntry<>(key, oldVal, ver, cctx.keepBinary(), + new GridDhtDetachedCacheEntry(cctx, key)); + + try { + procRes = entryProc.process(invokeEntry, invokeArgs); + + if(invokeEntry.modified() && invokeEntry.op() != CacheInvokeEntry.Operation.REMOVE) { + Object val = invokeEntry.getValue(true); + + CacheObject val0 = cctx.toCacheObject(val); + + val0.prepareForCache(cctx.cacheObjectContext()); + + updateRow.value(val0); + } + } + catch (Exception e) { + log.error("Exception was thrown during entry processing.", e); + + err = e; + } + + CacheInvokeResult invokeRes = err == null ? 
CacheInvokeResult.fromResult(procRes) : + CacheInvokeResult.fromError(err); + + updateRow.invokeResult(invokeRes); + + return invokeEntry.op(); + } + /** {@inheritDoc} */ @Override public MvccUpdateResult mvccRemove(GridCacheContext cctx, KeyCacheObject key, @@ -1974,6 +2104,7 @@ else if (res == ResultType.VERSION_FOUND || // exceptional case @Nullable CacheEntryPredicate filter, boolean primary, boolean needHistory, + boolean needOldVal, boolean retVal) throws IgniteCheckedException { assert mvccSnapshot != null; assert primary || mvccSnapshot.activeTransactions().size() == 0 : mvccSnapshot; @@ -2004,6 +2135,7 @@ else if (res == ResultType.VERSION_FOUND || // exceptional case false, needHistory, true, + needOldVal, retVal); assert cctx.shared().database().checkpointLockIsHeldByThread(); @@ -2070,6 +2202,7 @@ else if (res == ResultType.PREV_NOT_NULL) { true, false, false, + false, false); assert cctx.shared().database().checkpointLockIsHeldByThread(); @@ -2253,7 +2386,7 @@ else if (res == ResultType.PREV_NOT_NULL) { int cacheId = grp.sharedGroup() ? cctx.cacheId() : CU.UNDEFINED_CACHE_ID; - boolean cleanup = cctx.queries().enabled() || hasPendingEntries; + boolean cleanup = cctx.queries().enabled() || cctx.ttl().hasPendingEntries(); assert cctx.shared().database().checkpointLockIsHeldByThread(); @@ -2402,6 +2535,92 @@ else if (res == ResultType.PREV_NOT_NULL) { } } + /** {@inheritDoc} */ + @Override public void mvccApplyUpdate(GridCacheContext cctx, + KeyCacheObject key, + CacheObject val, + GridCacheVersion ver, + long expireTime, + MvccVersion mvccVer + ) throws IgniteCheckedException { + if (!busyLock.enterBusy()) + throw new NodeStoppingException("Operation has been cancelled (node is stopping)."); + + try { + int cacheId = grp.sharedGroup() ? cctx.cacheId() : CU.UNDEFINED_CACHE_ID; + + CacheObjectContext coCtx = cctx.cacheObjectContext(); + + // Make sure value bytes initialized. 
+ key.valueBytes(coCtx); + + if (val != null) + val.valueBytes(coCtx); + + MvccSnapshotWithoutTxs mvccSnapshot = new MvccSnapshotWithoutTxs(mvccVer.coordinatorVersion(), + mvccVer.counter(), mvccVer.operationCounter(), MvccUtils.MVCC_COUNTER_NA); + + MvccUpdateDataRow updateRow = new MvccUpdateDataRow( + cctx, + key, + val, + ver, + partId, + 0L, + mvccSnapshot, + null, + null, + false, + false, + false, + false, + false, + false); + + assert cctx.shared().database().checkpointLockIsHeldByThread(); + + dataTree.visit(new MvccMaxSearchRow(cacheId, key), new MvccMinSearchRow(cacheId, key), updateRow); + + ResultType res = updateRow.resultType(); + + assert res == ResultType.PREV_NULL || res == ResultType.PREV_NOT_NULL : res; + + if (res == ResultType.PREV_NOT_NULL) { + CacheDataRow oldRow = updateRow.oldRow(); + + assert oldRow != null && oldRow.link() != 0 : oldRow; + + rowStore.updateDataRow(oldRow.link(), mvccUpdateMarker, mvccSnapshot); + } + + if (val != null) { + if (!grp.storeCacheIdInDataPage() && updateRow.cacheId() != CU.UNDEFINED_CACHE_ID) { + updateRow.cacheId(CU.UNDEFINED_CACHE_ID); + + rowStore.addRow(updateRow); + + updateRow.cacheId(cctx.cacheId()); + } + else + rowStore.addRow(updateRow); + + boolean old = dataTree.putx(updateRow); + + assert !old; + + GridCacheQueryManager qryMgr = cctx.queries(); + + if (qryMgr.enabled()) + qryMgr.store(updateRow, null, true); + + cleanup(cctx, updateRow.cleanupRows()); + } + } + finally { + busyLock.leaveBusy(); + } + } + /** * @param cctx Cache context. * @param newRow New row. 
@@ -2455,7 +2674,8 @@ private void updatePendingEntries(GridCacheContext cctx, CacheDataRow newRow, @N if (pendingTree() != null && expireTime != 0) { pendingTree().putx(new PendingRow(cacheId, expireTime, newRow.link())); - hasPendingEntries = true; + if (!cctx.ttl().hasPendingEntries()) + cctx.ttl().hasPendingEntries(true); } } @@ -2800,6 +3020,11 @@ private void afterRowFound(@Nullable CacheDataRow row, KeyCacheObject key) throw return pendingEntries; } + /** {@inheritDoc} */ + @Override public void preload() throws IgniteCheckedException { + // No-op. + } + /** * @param cctx Cache context. * @param key Key. @@ -3049,7 +3274,8 @@ private final class MvccMarkUpdatedHandler extends PageHandler extends AsyncSupportAdapter ctx; + /** Old context. */ + private transient volatile GridCacheContext oldContext; + /** Delegate. */ @GridToStringInclude private volatile IgniteInternalCache delegate; @@ -136,16 +143,23 @@ public class IgniteCacheProxyImpl extends AsyncSupportAdapter> restartFut; + private final AtomicReference restartFut; /** Flag indicates that proxy is closed. */ private volatile boolean closed; + /** Proxy initialization latch used to await full initialization after the proxy is created. For example, + * a proxy may be created before the exchange has completed; if a cache operation is performed at that + * point, the last finished exchange future (needed for validation) would belong to the previous + * topology version rather than the current one. + */ + private final CountDownLatch initLatch = new CountDownLatch(1); + /** * Empty constructor required for {@link Externalizable}. 
*/ public IgniteCacheProxyImpl() { - restartFut = new AtomicReference>(null); + restartFut = new AtomicReference<>(null); } /** @@ -158,7 +172,7 @@ public IgniteCacheProxyImpl( @NotNull IgniteInternalCache delegate, boolean async ) { - this(ctx, delegate, new AtomicReference>(null), async); + this(ctx, delegate, new AtomicReference<>(null), async); } /** @@ -169,7 +183,7 @@ public IgniteCacheProxyImpl( private IgniteCacheProxyImpl( @NotNull GridCacheContext ctx, @NotNull IgniteInternalCache delegate, - @NotNull AtomicReference> restartFut, + @NotNull AtomicReference restartFut, boolean async ) { super(async); @@ -177,16 +191,87 @@ private IgniteCacheProxyImpl( assert ctx != null; assert delegate != null; + cacheName = ctx.name(); + + assert cacheName.equals(delegate.name()) : "ctx.name=" + cacheName + ", delegate.name=" + delegate.name(); + this.ctx = ctx; this.delegate = delegate; this.restartFut = restartFut; } + /** + * + * @return Init latch. + */ + public CountDownLatch getInitLatch(){ + return initLatch; + } + /** * @return Context. */ @Override public GridCacheContext context() { + return getContextSafe(); + } + + /** + * @return Context or throw restart exception. + */ + private GridCacheContext getContextSafe() { + while (true) { + GridCacheContext ctx = this.ctx; + + if (ctx == null) { + checkRestart(); + + if (Thread.currentThread().isInterrupted()) + throw new IgniteException(new InterruptedException()); + } + else + return ctx; + } + } + + /** + * @return Delegate or throw restart exception. + */ + private IgniteInternalCache getDelegateSafe() { + while (true) { + IgniteInternalCache delegate = this.delegate; + + if (delegate == null) { + checkRestart(); + + if (Thread.currentThread().isInterrupted()) + throw new IgniteException(new InterruptedException()); + } + else + return delegate; + } + } + + /** + * @return Context. 
+ */ + public GridCacheContext context0() { + GridCacheContext ctx = this.ctx; + + if (ctx == null) { + synchronized (this) { + ctx = this.ctx; + + if (ctx == null) { + GridCacheContext context = oldContext; + + assert context != null; + + return context; + } + } + } + return ctx; } @@ -203,36 +288,49 @@ public IgniteCacheProxy gatewayWrapper() { return cachedProxy; cachedProxy = new GatewayProtectedCacheProxy<>(this, new CacheOperationContext(), true); + return cachedProxy; } /** {@inheritDoc} */ @Override public CacheMetrics metrics() { + GridCacheContext ctx = getContextSafe(); + return ctx.cache().clusterMetrics(); } /** {@inheritDoc} */ @Override public CacheMetrics metrics(ClusterGroup grp) { + GridCacheContext ctx = getContextSafe(); + return ctx.cache().clusterMetrics(grp); } /** {@inheritDoc} */ @Override public CacheMetrics localMetrics() { + GridCacheContext ctx = getContextSafe(); + return ctx.cache().localMetrics(); } /** {@inheritDoc} */ @Override public CacheMetricsMXBean mxBean() { + GridCacheContext ctx = getContextSafe(); + return ctx.cache().clusterMxBean(); } /** {@inheritDoc} */ @Override public CacheMetricsMXBean localMxBean() { + GridCacheContext ctx = getContextSafe(); + return ctx.cache().localMxBean(); } /** {@inheritDoc} */ @Override public > C getConfiguration(Class clazz) { + GridCacheContext ctx = getContextSafe(); + CacheConfiguration cfg = ctx.config(); if (!clazz.isAssignableFrom(cfg.getClass())) @@ -268,6 +366,8 @@ public IgniteCacheProxy gatewayWrapper() { /** {@inheritDoc} */ @Override public void loadCache(@Nullable IgniteBiPredicate p, @Nullable Object... args) { + GridCacheContext ctx = getContextSafe(); + try { if (isAsync()) { if (ctx.cache().isLocal()) @@ -290,6 +390,8 @@ public IgniteCacheProxy gatewayWrapper() { /** {@inheritDoc} */ @Override public IgniteFuture loadCacheAsync(@Nullable IgniteBiPredicate p, @Nullable Object... 
args) throws CacheException { + GridCacheContext ctx = getContextSafe(); + try { if (ctx.cache().isLocal()) return (IgniteFuture)createFuture(ctx.cache().localLoadCacheAsync(p, args)); @@ -303,6 +405,8 @@ public IgniteCacheProxy gatewayWrapper() { /** {@inheritDoc} */ @Override public void localLoadCache(@Nullable IgniteBiPredicate p, @Nullable Object... args) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) setFuture(delegate.localLoadCacheAsync(p, args)); @@ -317,11 +421,15 @@ public IgniteCacheProxy gatewayWrapper() { /** {@inheritDoc} */ @Override public IgniteFuture localLoadCacheAsync(@Nullable IgniteBiPredicate p, @Nullable Object... args) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + return (IgniteFuture)createFuture(delegate.localLoadCacheAsync(p, args)); } /** {@inheritDoc} */ @Nullable @Override public V getAndPutIfAbsent(K key, V val) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getAndPutIfAbsentAsync(key, val)); @@ -338,6 +446,8 @@ public IgniteCacheProxy gatewayWrapper() { /** {@inheritDoc} */ @Override public IgniteFuture getAndPutIfAbsentAsync(K key, V val) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getAndPutIfAbsentAsync(key, val)); } @@ -348,6 +458,9 @@ public IgniteCacheProxy gatewayWrapper() { /** {@inheritDoc} */ @Override public Lock lockAll(final Collection keys) { + IgniteInternalCache delegate = getDelegateSafe(); + GridCacheContext ctx = getContextSafe(); + //TODO: IGNITE-9324: add explicit locks support. MvccUtils.verifyMvccOperationSupport(ctx, "Lock"); @@ -356,6 +469,8 @@ public IgniteCacheProxy gatewayWrapper() { /** {@inheritDoc} */ @Override public boolean isLocalLocked(K key, boolean byCurrThread) { + IgniteInternalCache delegate = getDelegateSafe(); + return byCurrThread ? 
delegate.isLockedByThread(key) : delegate.isLocked(key); } @@ -370,8 +485,9 @@ public IgniteCacheProxy gatewayWrapper() { private QueryCursor query( final ScanQuery scanQry, @Nullable final IgniteClosure transformer, - @Nullable ClusterGroup grp) - throws IgniteCheckedException { + @Nullable ClusterGroup grp + ) throws IgniteCheckedException { + GridCacheContext ctx = getContextSafe(); CacheOperationContext opCtxCall = ctx.operationContextPerCall(); @@ -389,7 +505,7 @@ private QueryCursor query( qry.projection(grp); final GridCloseableIterator iter = ctx.kernalContext().query().executeQuery(GridCacheQueryType.SCAN, - ctx.name(), ctx, new IgniteOutClosureX>() { + cacheName, ctx, new IgniteOutClosureX>() { @Override public GridCloseableIterator applyx() throws IgniteCheckedException { return qry.executeScanQuery(); } @@ -407,6 +523,8 @@ private QueryCursor query( @SuppressWarnings("unchecked") private QueryCursor> query(final Query filter, @Nullable ClusterGroup grp) throws IgniteCheckedException { + GridCacheContext ctx = getContextSafe(); + final CacheQuery qry; CacheOperationContext opCtxCall = ctx.operationContextPerCall(); @@ -499,11 +617,13 @@ else if (filter instanceof SpiQuery) { * @return Local node cluster group. 
*/ private ClusterGroup projection(boolean loc) { + GridCacheContext ctx = getContextSafe(); + if (loc || ctx.isLocal() || ctx.isReplicatedAffinityNode()) return ctx.kernalContext().grid().cluster().forLocal(); if (ctx.isReplicated()) - return ctx.kernalContext().grid().cluster().forDataNodes(ctx.name()).forRandom(); + return ctx.kernalContext().grid().cluster().forDataNodes(cacheName).forRandom(); return null; } @@ -518,6 +638,8 @@ private ClusterGroup projection(boolean loc) { */ @SuppressWarnings("unchecked") private QueryCursor> queryContinuous(AbstractContinuousQuery qry, boolean loc, boolean keepBinary) { + GridCacheContext ctx = getContextSafe(); + assert qry instanceof ContinuousQuery || qry instanceof ContinuousQueryWithTransformer; if (qry.getInitialQuery() instanceof ContinuousQuery || @@ -622,6 +744,8 @@ private QueryCursor> queryContinuous(AbstractContinuousQuery q /** {@inheritDoc} */ @Override public List>> queryMultipleStatements(SqlFieldsQuery qry) { + GridCacheContext ctx = getContextSafe(); + A.notNull(qry, "qry"); try { ctx.checkSecurity(SecurityPermission.CACHE_READ); @@ -647,6 +771,8 @@ private QueryCursor> queryContinuous(AbstractContinuousQuery q /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public QueryCursor query(Query qry) { + GridCacheContext ctx = getContextSafe(); + A.notNull(qry, "qry"); try { ctx.checkSecurity(SecurityPermission.CACHE_READ); @@ -684,6 +810,8 @@ private QueryCursor> queryContinuous(AbstractContinuousQuery q /** {@inheritDoc} */ @Override public QueryCursor query(Query qry, IgniteClosure transformer) { + GridCacheContext ctx = getContextSafe(); + A.notNull(qry, "qry"); A.notNull(transformer, "transformer"); @@ -711,6 +839,8 @@ private QueryCursor> queryContinuous(AbstractContinuousQuery q * @param qry Query. 
*/ private void convertToBinary(final Query qry) { + GridCacheContext ctx = getContextSafe(); + if (ctx.binaryMarshaller()) { if (qry instanceof SqlQuery) { final SqlQuery sqlQry = (SqlQuery) qry; @@ -739,6 +869,8 @@ private void convertToBinary(final Object[] args) { if (args == null) return; + GridCacheContext ctx = getContextSafe(); + for (int i = 0; i < args.length; i++) args[i] = ctx.cacheObjects().binary().toBinary(args[i]); } @@ -750,10 +882,12 @@ private void convertToBinary(final Object[] args) { * @throws CacheException If query indexing disabled for sql query. */ private void validate(Query qry) { + GridCacheContext ctx = getContextSafe(); + if (!QueryUtils.isEnabled(ctx.config()) && !(qry instanceof ScanQuery) && !(qry instanceof ContinuousQuery) && !(qry instanceof ContinuousQueryWithTransformer) && !(qry instanceof SpiQuery) && !(qry instanceof SqlQuery) && !(qry instanceof SqlFieldsQuery)) - throw new CacheException("Indexing is disabled for cache: " + ctx.cache().name() + + throw new CacheException("Indexing is disabled for cache: " + cacheName + ". Use setIndexedTypes or setTypeMetadata methods on CacheConfiguration to enable."); if (!ctx.kernalContext().query().moduleEnabled() && @@ -764,6 +898,8 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public Iterable> localEntries(CachePeekMode... 
peekModes) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + try { return delegate.localEntries(peekModes); } @@ -774,33 +910,45 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public QueryMetrics queryMetrics() { + IgniteInternalCache delegate = getDelegateSafe(); + return delegate.context().queries().metrics(); } /** {@inheritDoc} */ @Override public void resetQueryMetrics() { + IgniteInternalCache delegate = getDelegateSafe(); + delegate.context().queries().resetMetrics(); } /** {@inheritDoc} */ @Override public Collection queryDetailMetrics() { + IgniteInternalCache delegate = getDelegateSafe(); + return delegate.context().queries().detailMetrics(); } /** {@inheritDoc} */ @Override public void resetQueryDetailMetrics() { + IgniteInternalCache delegate = getDelegateSafe(); + delegate.context().queries().resetDetailMetrics(); } /** {@inheritDoc} */ @Override public void localEvict(Collection keys) { + IgniteInternalCache delegate = getDelegateSafe(); + delegate.evictAll(keys); } /** {@inheritDoc} */ @Nullable @Override public V localPeek(K key, CachePeekMode... peekModes) { + IgniteInternalCache delegate = getDelegateSafe(); + try { - return delegate.localPeek(key, peekModes, null); + return delegate.localPeek(key, peekModes); } catch (IgniteException | IgniteCheckedException e) { throw cacheException(e); @@ -809,6 +957,8 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public int size(CachePeekMode... peekModes) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.sizeAsync(peekModes)); @@ -825,11 +975,15 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture sizeAsync(CachePeekMode... peekModes) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.sizeAsync(peekModes)); } /** {@inheritDoc} */ @Override public long sizeLong(CachePeekMode... 
peekModes) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.sizeLongAsync(peekModes)); @@ -846,11 +1000,15 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture sizeLongAsync(CachePeekMode... peekModes) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.sizeLongAsync(peekModes)); } /** {@inheritDoc} */ @Override public long sizeLong(int part, CachePeekMode... peekModes) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.sizeLongAsync(part, peekModes)); @@ -867,11 +1025,15 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture sizeLongAsync(int part, CachePeekMode... peekModes) throws CacheException { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.sizeLongAsync(part, peekModes)); } /** {@inheritDoc} */ @Override public int localSize(CachePeekMode... peekModes) { + IgniteInternalCache delegate = getDelegateSafe(); + try { return delegate.localSize(peekModes); } @@ -882,6 +1044,8 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public long localSizeLong(CachePeekMode... peekModes) { + IgniteInternalCache delegate = getDelegateSafe(); + try { return delegate.localSizeLong(peekModes); } @@ -892,6 +1056,8 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public long localSizeLong(int part, CachePeekMode... 
peekModes) { + IgniteInternalCache delegate = getDelegateSafe(); + try { return delegate.localSizeLong(part, peekModes); } @@ -902,6 +1068,8 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public V get(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getAsync(key)); @@ -918,11 +1086,15 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture getAsync(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getAsync(key)); } /** {@inheritDoc} */ @Override public CacheEntry getEntry(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getEntryAsync(key)); @@ -939,11 +1111,15 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture> getEntryAsync(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getEntryAsync(key)); } /** {@inheritDoc} */ @Override public Map getAll(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getAllAsync(keys)); @@ -960,11 +1136,15 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture> getAllAsync(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getAllAsync(keys)); } /** {@inheritDoc} */ @Override public Collection> getEntries(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getEntriesAsync(keys)); @@ -981,11 +1161,15 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture>> getEntriesAsync(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getEntriesAsync(keys)); } /** {@inheritDoc} */ @Override public Map getAllOutTx(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { 
setFuture(delegate.getAllOutTxAsync(keys)); @@ -1002,6 +1186,8 @@ private void validate(Query qry) { /** {@inheritDoc} */ @Override public IgniteFuture> getAllOutTxAsync(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getAllOutTxAsync(keys)); } @@ -1010,6 +1196,8 @@ private void validate(Query qry) { * @return Values map. */ public Map getAll(Collection keys) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getAllAsync(keys)); @@ -1026,6 +1214,8 @@ public Map getAll(Collection keys) { /** {@inheritDoc} */ @Override public boolean containsKey(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + if (isAsync()) { setFuture(delegate.containsKeyAsync(key)); @@ -1037,11 +1227,15 @@ public Map getAll(Collection keys) { /** {@inheritDoc} */ @Override public IgniteFuture containsKeyAsync(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.containsKeyAsync(key)); } /** {@inheritDoc} */ @Override public boolean containsKeys(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + if (isAsync()) { setFuture(delegate.containsKeysAsync(keys)); @@ -1053,6 +1247,8 @@ public Map getAll(Collection keys) { /** {@inheritDoc} */ @Override public IgniteFuture containsKeysAsync(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.containsKeysAsync(keys)); } @@ -1062,6 +1258,8 @@ public Map getAll(Collection keys) { boolean replaceExisting, @Nullable final CompletionListener completionLsnr ) { + GridCacheContext ctx = getContextSafe(); + IgniteInternalFuture fut = ctx.cache().loadAll(keys, replaceExisting); if (completionLsnr != null) { @@ -1082,6 +1280,8 @@ public Map getAll(Collection keys) { /** {@inheritDoc} */ @Override public void put(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) setFuture(putAsync0(key, val)); @@ -1106,6 +1306,8 @@ public Map 
getAll(Collection keys) { * @return Internal future. */ private IgniteInternalFuture putAsync0(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + IgniteInternalFuture fut = delegate.putAsync(key, val); return fut.chain(new CX1, Void>() { @@ -1124,6 +1326,8 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public V getAndPut(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getAndPutAsync(key, val)); @@ -1140,11 +1344,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture getAndPutAsync(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getAndPutAsync(key, val)); } /** {@inheritDoc} */ @Override public void putAll(Map map) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) setFuture(delegate.putAllAsync(map)); @@ -1158,11 +1366,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture putAllAsync(Map map) { + IgniteInternalCache delegate = getDelegateSafe(); + return (IgniteFuture)createFuture(delegate.putAllAsync(map)); } /** {@inheritDoc} */ @Override public boolean putIfAbsent(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.putIfAbsentAsync(key, val)); @@ -1179,11 +1391,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture putIfAbsentAsync(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.putIfAbsentAsync(key, val)); } /** {@inheritDoc} */ @Override public boolean remove(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.removeAsync(key)); @@ -1200,11 +1416,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ 
@Override public IgniteFuture removeAsync(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.removeAsync(key)); } /** {@inheritDoc} */ @Override public boolean remove(K key, V oldVal) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.removeAsync(key, oldVal)); @@ -1221,11 +1441,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture removeAsync(K key, V oldVal) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.removeAsync(key, oldVal)); } /** {@inheritDoc} */ @Override public V getAndRemove(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getAndRemoveAsync(key)); @@ -1242,11 +1466,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture getAndRemoveAsync(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getAndRemoveAsync(key)); } /** {@inheritDoc} */ @Override public boolean replace(K key, V oldVal, V newVal) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.replaceAsync(key, oldVal, newVal)); @@ -1263,11 +1491,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture replaceAsync(K key, V oldVal, V newVal) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.replaceAsync(key, oldVal, newVal)); } /** {@inheritDoc} */ @Override public boolean replace(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.replaceAsync(key, val)); @@ -1284,11 +1516,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture replaceAsync(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + return 
createFuture(delegate.replaceAsync(key, val)); } /** {@inheritDoc} */ @Override public V getAndReplace(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.getAndReplaceAsync(key, val)); @@ -1305,11 +1541,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture getAndReplaceAsync(K key, V val) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.getAndReplaceAsync(key, val)); } /** {@inheritDoc} */ @Override public void removeAll(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) setFuture(delegate.removeAllAsync(keys)); @@ -1323,11 +1563,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture removeAllAsync(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + return (IgniteFuture)createFuture(delegate.removeAllAsync(keys)); } /** {@inheritDoc} */ @Override public void removeAll() { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) setFuture(delegate.removeAllAsync()); @@ -1341,11 +1585,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture removeAllAsync() { + IgniteInternalCache delegate = getDelegateSafe(); + return (IgniteFuture)createFuture(delegate.removeAllAsync()); } /** {@inheritDoc} */ @Override public void clear(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) setFuture(delegate.clearAsync(key)); @@ -1359,11 +1607,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture clearAsync(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + return (IgniteFuture)createFuture(delegate.clearAsync(key)); } /** {@inheritDoc} */ @Override public void clearAll(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if 
(isAsync()) setFuture(delegate.clearAllAsync(keys)); @@ -1377,11 +1629,15 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture clearAllAsync(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + return (IgniteFuture)createFuture(delegate.clearAllAsync(keys)); } /** {@inheritDoc} */ @Override public void clear() { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) setFuture(delegate.clearAsync()); @@ -1395,16 +1651,22 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public IgniteFuture clearAsync() { + IgniteInternalCache delegate = getDelegateSafe(); + return (IgniteFuture)createFuture(delegate.clearAsync()); } /** {@inheritDoc} */ @Override public void localClear(K key) { + IgniteInternalCache delegate = getDelegateSafe(); + delegate.clearLocally(key); } /** {@inheritDoc} */ @Override public void localClearAll(Set keys) { + IgniteInternalCache delegate = getDelegateSafe(); + for (K key : keys) delegate.clearLocally(key); } @@ -1412,6 +1674,8 @@ private IgniteInternalFuture putAsync0(K key, V val) { /** {@inheritDoc} */ @Override public T invoke(K key, EntryProcessor entryProcessor, Object... args) throws EntryProcessorException { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(invokeAsync0(key, entryProcessor, args)); @@ -1444,6 +1708,8 @@ private IgniteInternalFuture putAsync0(K key, V val) { * @return Internal future. */ private IgniteInternalFuture invokeAsync0(K key, EntryProcessor entryProcessor, Object[] args) { + IgniteInternalCache delegate = getDelegateSafe(); + IgniteInternalFuture> fut = delegate.invokeAsync(key, entryProcessor, args); return fut.chain(new CX1>, T>() { @@ -1482,7 +1748,10 @@ private IgniteInternalFuture invokeAsync0(K key, EntryProcessor public T invoke(@Nullable AffinityTopologyVersion topVer, K key, EntryProcessor entryProcessor, - Object... args) { + Object... 
args + ) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) throw new UnsupportedOperationException(); @@ -1500,7 +1769,10 @@ public T invoke(@Nullable AffinityTopologyVersion topVer, /** {@inheritDoc} */ @Override public Map> invokeAll(Set keys, EntryProcessor entryProcessor, - Object... args) { + Object... args + ) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.invokeAllAsync(keys, entryProcessor, args)); @@ -1518,13 +1790,19 @@ public T invoke(@Nullable AffinityTopologyVersion topVer, /** {@inheritDoc} */ @Override public IgniteFuture>> invokeAllAsync(Set keys, EntryProcessor entryProcessor, Object... args) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.invokeAllAsync(keys, entryProcessor, args)); } /** {@inheritDoc} */ - @Override public Map> invokeAll(Set keys, + @Override public Map> invokeAll( + Set keys, CacheEntryProcessor entryProcessor, - Object... args) { + Object... args + ) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.invokeAllAsync(keys, entryProcessor, args)); @@ -1542,6 +1820,8 @@ public T invoke(@Nullable AffinityTopologyVersion topVer, /** {@inheritDoc} */ @Override public IgniteFuture>> invokeAllAsync(Set keys, CacheEntryProcessor entryProcessor, Object... args) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.invokeAllAsync(keys, entryProcessor, args)); } @@ -1549,6 +1829,8 @@ public T invoke(@Nullable AffinityTopologyVersion topVer, @Override public Map> invokeAll( Map> map, Object... args) { + IgniteInternalCache delegate = getDelegateSafe(); + try { if (isAsync()) { setFuture(delegate.invokeAllAsync(map, args)); @@ -1566,12 +1848,14 @@ public T invoke(@Nullable AffinityTopologyVersion topVer, /** {@inheritDoc} */ @Override public IgniteFuture>> invokeAllAsync( Map> map, Object... 
args) { + IgniteInternalCache delegate = getDelegateSafe(); + return createFuture(delegate.invokeAllAsync(map, args)); } /** {@inheritDoc} */ @Override public String getName() { - return delegate.name(); + return cacheName; } /** {@inheritDoc} */ @@ -1593,7 +1877,9 @@ public void setCacheManager(CacheManager cacheMgr) { /** {@inheritDoc} */ @Override public IgniteFuture destroyAsync() { - return new IgniteFutureImpl<>(ctx.kernalContext().cache().dynamicDestroyCache(ctx.name(), false, true, false)); + GridCacheContext ctx = getContextSafe(); + + return new IgniteFutureImpl<>(ctx.kernalContext().cache().dynamicDestroyCache(cacheName, false, true, false, null)); } /** {@inheritDoc} */ @@ -1603,11 +1889,15 @@ public void setCacheManager(CacheManager cacheMgr) { /** {@inheritDoc} */ @Override public IgniteFuture closeAsync() { - return new IgniteFutureImpl<>(ctx.kernalContext().cache().dynamicCloseCache(ctx.name())); + GridCacheContext ctx = getContextSafe(); + + return new IgniteFutureImpl<>(ctx.kernalContext().cache().dynamicCloseCache(cacheName)); } /** {@inheritDoc} */ @Override public boolean isClosed() { + GridCacheContext ctx = getContextSafe(); + return ctx.kernalContext().cache().context().closed(ctx); } @@ -1616,14 +1906,19 @@ public void setCacheManager(CacheManager cacheMgr) { @Override public T unwrap(Class clazz) { if (clazz.isAssignableFrom(getClass())) return (T)this; - else if (clazz.isAssignableFrom(IgniteEx.class)) + else if (clazz.isAssignableFrom(IgniteEx.class)) { + GridCacheContext ctx = getContextSafe(); + return (T)ctx.grid(); + } throw new IllegalArgumentException("Unwrapping to class is not supported: " + clazz); } /** {@inheritDoc} */ @Override public void registerCacheEntryListener(CacheEntryListenerConfiguration lsnrCfg) { + GridCacheContext ctx = getContextSafe(); + try { CacheOperationContext opCtx = ctx.operationContextPerCall(); @@ -1636,6 +1931,8 @@ else if (clazz.isAssignableFrom(IgniteEx.class)) /** {@inheritDoc} */ @Override public 
void deregisterCacheEntryListener(CacheEntryListenerConfiguration lsnrCfg) { + GridCacheContext ctx = getContextSafe(); + try { ctx.continuousQueries().cancelJCacheQuery(lsnrCfg); } @@ -1646,6 +1943,8 @@ else if (clazz.isAssignableFrom(IgniteEx.class)) /** {@inheritDoc} */ @Override public Iterator> iterator() { + GridCacheContext ctx = getContextSafe(); + try { return ctx.cache().igniteIterator(); } @@ -1656,6 +1955,9 @@ else if (clazz.isAssignableFrom(IgniteEx.class)) /** {@inheritDoc} */ @Override protected IgniteCache createAsyncInstance() { + GridCacheContext ctx = getContextSafe(); + IgniteInternalCache delegate = getDelegateSafe(); + return new IgniteCacheProxyImpl( ctx, delegate, @@ -1729,10 +2031,25 @@ else if (clazz.isAssignableFrom(IgniteEx.class)) private RuntimeException cacheException(Exception e) { GridFutureAdapter restartFut = this.restartFut.get(); + if (X.hasCause(e, IgniteCacheRestartingException.class)) { + IgniteCacheRestartingException restartingException = X.cause(e, IgniteCacheRestartingException.class); + + if (restartingException.restartFuture() == null) { + if (restartFut == null) + restartFut = suspend(); + + assert restartFut != null; + + throw new IgniteCacheRestartingException(new IgniteFutureImpl<>(restartFut), cacheName); + } + else + throw restartingException; + } + if (restartFut != null) { if (X.hasCause(e, CacheStoppedException.class) || X.hasSuppressed(e, CacheStoppedException.class)) throw new IgniteCacheRestartingException(new IgniteFutureImpl<>(restartFut), "Cache is restarting: " + - ctx.name(), e); + cacheName, e); } if (e instanceof IgniteException && X.hasCause(e, CacheException.class)) @@ -1766,6 +2083,9 @@ private void setFuture(IgniteInternalFuture fut) { * @return Internal proxy. 
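The getDelegateSafe()/getContextSafe() calls threaded through every operation above all follow one guard pattern: snapshot the volatile delegate into a local once, and fail fast when the proxy has been suspended for restart. A minimal sketch of that pattern (class and exception names here are illustrative stand-ins, not Ignite's actual API):

```java
/** Illustrative stand-in for the proxy's delegate guard; names are hypothetical. */
class RestartableProxy<V> {
    /** Delegate is null while the proxy is suspended for restart. */
    private volatile V delegate;

    RestartableProxy(V delegate) { this.delegate = delegate; }

    /** Mirrors getDelegateSafe(): one volatile read into a local, then validate. */
    V getDelegateSafe() {
        V d = delegate; // later code must use d, never re-read the field

        if (d == null)
            throw new IllegalStateException("Cache is restarting"); // real proxy throws IgniteCacheRestartingException

        return d;
    }

    void suspend() { delegate = null; }

    void onRestarted(V newDelegate) { delegate = newDelegate; }
}
```

Reading the field once matters: with two reads, a concurrent suspend() between the null check and the use could hand the caller a null delegate.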
*/ @Override public GridCacheProxyImpl internalProxy() { + GridCacheContext ctx = getContextSafe(); + IgniteInternalCache delegate = getDelegateSafe(); + return new GridCacheProxyImpl<>(ctx, delegate, ctx.operationContextPerCall()); } @@ -1785,11 +2105,15 @@ private void setFuture(IgniteInternalFuture fut) { /** {@inheritDoc} */ @Override public Collection lostPartitions() { + IgniteInternalCache delegate = getDelegateSafe(); + return delegate.lostPartitions(); } /** {@inheritDoc} */ @Override public void enableStatistics(boolean enabled) { + GridCacheContext ctx = getContextSafe(); + try { ctx.kernalContext().cache().enableStatistics(Collections.singleton(getName()), enabled); } @@ -1800,6 +2124,8 @@ private void setFuture(IgniteInternalFuture fut) { /** {@inheritDoc} */ @Override public void clearStatistics() { + GridCacheContext ctx = getContextSafe(); + try { ctx.kernalContext().cache().clearStatistics(Collections.singleton(getName())); } @@ -1808,6 +2134,42 @@ private void setFuture(IgniteInternalFuture fut) { } } + /** {@inheritDoc} */ + @Override public void preloadPartition(int part) { + IgniteInternalCache delegate = getDelegateSafe(); + + try { + delegate.preloadPartition(part); + } + catch (IgniteCheckedException e) { + throw cacheException(e); + } + } + + /** {@inheritDoc} */ + @Override public IgniteFuture preloadPartitionAsync(int part) { + IgniteInternalCache delegate = getDelegateSafe(); + + try { + return (IgniteFuture)createFuture(delegate.preloadPartitionAsync(part)); + } + catch (IgniteCheckedException e) { + throw cacheException(e); + } + } + + /** {@inheritDoc} */ + @Override public boolean localPreloadPartition(int part) { + IgniteInternalCache delegate = getDelegateSafe(); + + try { + return delegate.localPreloadPartition(part); + } + catch (IgniteCheckedException e) { + throw cacheException(e); + } + } + /** {@inheritDoc} */ @Override public void writeExternal(ObjectOutput out) throws IOException { out.writeObject(ctx); @@ -1821,15 +2183,23 
@@ private void setFuture(IgniteInternalFuture fut) { ctx = (GridCacheContext)in.readObject(); delegate = (IgniteInternalCache)in.readObject(); + + cacheName = ctx.name(); + + assert cacheName.equals(delegate.name()) : "ctx.name=" + cacheName + ", delegate.name=" + delegate.name(); } /** {@inheritDoc} */ @Override public IgniteFuture rebalance() { + GridCacheContext ctx = getContextSafe(); + return new IgniteFutureImpl<>(ctx.preloader().forceRebalance()); } /** {@inheritDoc} */ @Override public IgniteFuture indexReadyFuture() { + GridCacheContext ctx = getContextSafe(); + IgniteInternalFuture fut = ctx.shared().database().indexRebuildFuture(ctx.cacheId()); if (fut == null) @@ -1842,11 +2212,29 @@ private void setFuture(IgniteInternalFuture fut) { * Throws {@code IgniteCacheRestartingException} if proxy is restarting. */ public void checkRestart() { - GridFutureAdapter currentFut = this.restartFut.get(); + checkRestart(false); + } - if (currentFut != null) - throw new IgniteCacheRestartingException(new IgniteFutureImpl<>(currentFut), "Cache is restarting: " + - context().name()); + /** + * Throws {@code IgniteCacheRestartingException} if proxy is restarting. + */ + public void checkRestart(boolean noWait) { + RestartFuture currentFut = restartFut.get(); + + if (currentFut != null) { + try { + if (!noWait) { + currentFut.get(1, TimeUnit.SECONDS); + + return; + } + } + catch (IgniteCheckedException ignore) { + // Do nothing. + } + + throw new IgniteCacheRestartingException(new IgniteFutureImpl<>(currentFut), cacheName); + } } /** @@ -1857,44 +2245,72 @@ public boolean isRestarting() { } /** - * Restarts this cache proxy. + * Suspends this cache proxy. + * To make the cache proxy active again, it must be restarted.
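The suspend() documented here, together with onRestarted() below, forms a CAS handshake on a single AtomicReference: the first caller installs the restart future, concurrent callers observe the same instance, and restart completion clears the gate and wakes waiters. A simplified model of that handshake (CompletableFuture stands in for GridFutureAdapter; all names here are assumptions):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

/** Hypothetical sketch of the suspend()/onRestarted() handshake; not Ignite's actual classes. */
class RestartGate {
    private final AtomicReference<CompletableFuture<Void>> restartFut = new AtomicReference<>();

    /** First caller installs a fresh future; everyone else gets the same instance. */
    CompletableFuture<Void> suspend() {
        while (true) {
            CompletableFuture<Void> cur = restartFut.get();

            if (cur != null)
                return cur; // already restarting, join the existing future

            CompletableFuture<Void> fut = new CompletableFuture<>();

            if (restartFut.compareAndSet(null, fut))
                return fut; // we won the CAS and own the restart
        }
    }

    /** Clears the gate and wakes all waiters, mirroring onRestarted(). */
    void onRestarted() {
        CompletableFuture<Void> cur = restartFut.getAndSet(null);

        if (cur != null)
            cur.complete(null);
    }

    boolean isRestarting() { return restartFut.get() != null; }
}
```

The CAS loop is what makes opportunisticRestart() safe to call from multiple threads: exactly one caller performs the restart while the rest block on the shared future.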
*/ - public boolean restart() { - GridFutureAdapter restartFut = new GridFutureAdapter<>(); - - final GridFutureAdapter curFut = this.restartFut.get(); - - boolean changed = this.restartFut.compareAndSet(curFut, restartFut); + public RestartFuture suspend() { + while (true) { + RestartFuture curFut = this.restartFut.get(); + + if (curFut == null) { + RestartFuture restartFut = new RestartFuture(cacheName); + + if (this.restartFut.compareAndSet(null, restartFut)) { + synchronized (this) { + if (!restartFut.isDone()) { + if (oldContext == null) { + oldContext = ctx; + delegate = null; + ctx = null; + } + } + } - if (changed && curFut != null) - restartFut.listen(new IgniteInClosure>() { - @Override public void apply(IgniteInternalFuture fut) { - if (fut.error() != null) - curFut.onDone(fut.error()); - else - curFut.onDone(); + return restartFut; } - }); + } + else + return curFut; + } + } + + /** + * @param fut Finish restart future. + */ + public void registrateFutureRestart(GridFutureAdapter fut) { + RestartFuture currentFut = restartFut.get(); - return changed; + if (currentFut != null) + currentFut.addRestartFinishedFuture(fut); } /** * If proxy is already being restarted, returns future to wait on, else restarts this cache proxy. * - * @return Future to wait on, or null. + * @param cache Cache to use for restarting the proxy. */ - public GridFutureAdapter opportunisticRestart() { - GridFutureAdapter restartFut = new GridFutureAdapter<>(); + public void opportunisticRestart(IgniteInternalCache cache) { + RestartFuture restartFut = new RestartFuture(cacheName); while (true) { - if (this.restartFut.compareAndSet(null, restartFut)) - return null; + if (this.restartFut.compareAndSet(null, restartFut)) { + onRestarted(cache.context(), cache.context().cache()); + + return; + } GridFutureAdapter curFut = this.restartFut.get(); - if (curFut != null) - return curFut; + if (curFut != null) { + try { + curFut.get(); + } + catch (IgniteCheckedException ignore) { + // Do nothing.
+ } + + return; + } } } @@ -1905,16 +2321,68 @@ public GridFutureAdapter opportunisticRestart() { * @param delegate New delegate. */ public void onRestarted(GridCacheContext ctx, IgniteInternalCache delegate) { - GridFutureAdapter restartFut = this.restartFut.get(); + RestartFuture restartFut = this.restartFut.get(); assert restartFut != null; - this.ctx = ctx; - this.delegate = delegate; + synchronized (this) { + this.restartFut.compareAndSet(restartFut, null); + + this.ctx = ctx; + oldContext = null; + this.delegate = delegate; + + restartFut.onDone(); + } + + assert delegate == null || cacheName.equals(delegate.name()) && cacheName.equals(ctx.name()) : + "ctx.name=" + ctx.name() + ", delegate.name=" + delegate.name() + ", cacheName=" + cacheName; + } - this.restartFut.compareAndSet(restartFut, null); + /** + * + */ + private class RestartFuture extends GridFutureAdapter { + /** */ + private final String name; + + /** */ + private volatile GridFutureAdapter restartFinishFut; + + /** */ + private RestartFuture(String name) { + this.name = name; + } + + /** + * + */ + void checkRestartOrAwait() { + GridFutureAdapter fut = restartFinishFut; + + if (fut != null) { + try { + fut.get(); + } + catch (IgniteCheckedException e) { + throw U.convertException(e); + } + + return; + } + + throw new IgniteCacheRestartingException( + new IgniteFutureImpl<>(this), + "Cache is restarting: " + name + ); + } - restartFut.onDone(); + /** + * + */ + void addRestartFinishedFuture(GridFutureAdapter fut) { + restartFinishFut = fut; + } } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteInternalCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteInternalCache.java index cba2228f0323e..64e21e3fe5f32 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteInternalCache.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteInternalCache.java @@ -302,11 +302,10 @@ public interface IgniteInternalCache extends Iterable> { /** * @param key Key. * @param peekModes Peek modes. - * @param plc Expiry policy if TTL should be updated. * @return Value. * @throws IgniteCheckedException If failed. */ - @Nullable public V localPeek(K key, CachePeekMode[] peekModes, @Nullable IgniteCacheExpiryPolicy plc) + @Nullable public V localPeek(K key, CachePeekMode[] peekModes) throws IgniteCheckedException; /** @@ -1818,4 +1817,27 @@ public void localLoadCache(@Nullable IgniteBiPredicate p, @Nullable Object * @return A collection of lost partitions if a cache is in recovery state. */ public Collection lostPartitions(); + + /** + * Preload cache partition. + * @param part Partition. + * @throws IgniteCheckedException If failed. + */ + public void preloadPartition(int part) throws IgniteCheckedException; + + /** + * Preload cache partition. + * @param part Partition. + * @return Future to be completed whenever preloading completes. + * @throws IgniteCheckedException If failed. + */ + public IgniteInternalFuture preloadPartitionAsync(int part) throws IgniteCheckedException; + + /** + * Preloads cache partition if it exists on local node. + * @param part Partition. + * @return {@code True} if partition was preloaded, {@code false} if it doesn't belong to local node. + * @throws IgniteCheckedException If failed. 
+ */ + public boolean localPreloadPartition(int part) throws IgniteCheckedException; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/KeyCacheObject.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/KeyCacheObject.java index 8f8ceb6d256f1..9c4eeee6cceda 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/KeyCacheObject.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/KeyCacheObject.java @@ -48,4 +48,4 @@ public interface KeyCacheObject extends CacheObject { * @return Copy of this object with given partition set. */ public KeyCacheObject copy(int part); -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/PartitionUpdateCounter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/PartitionUpdateCounter.java index b5960ab719353..39d8d5fcc1c21 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/PartitionUpdateCounter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/PartitionUpdateCounter.java @@ -17,10 +17,10 @@ package org.apache.ignite.internal.processors.cache; -import java.util.PriorityQueue; -import java.util.Queue; +import java.util.TreeSet; import java.util.concurrent.atomic.AtomicLong; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.internal.util.GridLongList; import org.jetbrains.annotations.NotNull; /** @@ -31,7 +31,7 @@ public class PartitionUpdateCounter { private IgniteLogger log; /** Queue of counter update tasks*/ - private final Queue queue = new PriorityQueue<>(); + private final TreeSet queue = new TreeSet<>(); /** Counter. */ private final AtomicLong cntr = new AtomicLong(); @@ -161,21 +161,51 @@ public void updateInitial(long cntr) { * @return Retrieves the minimum update counter task from queue. 
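PartitionUpdateCounter's switch from PriorityQueue to TreeSet keeps the pending update ranges sorted and duplicate-free, which is what the patch's new finalizeUpdateCounters() relies on when it drains the queue and reports each gap below the applied counter. A small model of that gap-closing walk, with a TreeMap of start-to-delta playing the role of the sorted queue (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Illustrative model of finalizeUpdateCounters(); field and method names are assumptions. */
class UpdateCounterModel {
    /** Applied-counter low-water mark. */
    private long cntr;

    /** Pending out-of-order ranges: start -> delta, iterated in start order (the patch's TreeSet). */
    private final TreeMap<Long, Long> pending = new TreeMap<>();

    void updateInitial(long c) { cntr = c; }

    void offer(long start, long delta) { pending.put(start, delta); }

    /** Drains pending ranges in order, returning a [start, end] pair for every gap. */
    List<long[]> finalizeUpdateCounters() {
        List<long[]> gaps = new ArrayList<>();

        while (!pending.isEmpty()) {
            Map.Entry<Long, Long> item = pending.pollFirstEntry();

            gaps.add(new long[] {cntr + 1, item.getKey()});

            cntr = Math.max(cntr, item.getKey() + item.getValue()); // close the pending range
        }

        return gaps;
    }
}
```

Because the structure is sorted by range start, each drained item's start is guaranteed to sit at or above the current counter, so every reported pair is a genuine hole in the update sequence.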
*/ private Item poll() { - return queue.poll(); + return queue.pollFirst(); } /** * @return Checks the minimum update counter task from queue. */ private Item peek() { - return queue.peek(); + return queue.isEmpty() ? null : queue.first(); + } /** * @param item Adds update task to priority queue. */ private void offer(Item item) { - queue.offer(item); + queue.add(item); + } + + /** + * Flushes pending update counters closing all possible gaps. + * + * @return Even-length array of pairs [start, end] for each gap. + */ + public synchronized GridLongList finalizeUpdateCounters() { + Item item = poll(); + + GridLongList gaps = null; + + while (item != null) { + if (gaps == null) + gaps = new GridLongList((queue.size() + 1) * 2); + + long start = cntr.get() + 1; + long end = item.start; + + gaps.add(start); + gaps.add(end); + + // Close pending ranges. + update(item.start + item.delta); + + item = poll(); + } + + return gaps; } /** @@ -199,11 +229,7 @@ private Item(long start, long delta) { /** {@inheritDoc} */ @Override public int compareTo(@NotNull Item o) { - int cmp = Long.compare(this.start, o.start); - - assert cmp != 0; - - return cmp; + return Long.compare(this.start, o.start); } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/StartCacheInfo.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/StartCacheInfo.java new file mode 100644 index 0000000000000..a5aea26453ff6 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/StartCacheInfo.java @@ -0,0 +1,113 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.jetbrains.annotations.Nullable; + +/** + * Specific cache information for start. + */ +public class StartCacheInfo { + /** Cache configuration for start. */ + private final CacheConfiguration startedConf; + + /** Cache descriptor for start. */ + private final DynamicCacheDescriptor desc; + + /** Near cache configuration for start. */ + private final @Nullable NearCacheConfiguration reqNearCfg; + + /** Exchange topology version in which starting happened. */ + private final AffinityTopologyVersion exchTopVer; + + /** Disable started cache after start or not. */ + private final boolean disabledAfterStart; + + /** + * @param desc Cache configuration for start. + * @param reqNearCfg Near cache configuration for start. + * @param exchTopVer Exchange topology version in which starting happened. + * @param disabledAfterStart Disable started cache after start or not. + */ + public StartCacheInfo(DynamicCacheDescriptor desc, + NearCacheConfiguration reqNearCfg, + AffinityTopologyVersion exchTopVer, boolean disabledAfterStart) { + this(desc.cacheConfiguration(), desc, reqNearCfg, exchTopVer, disabledAfterStart); + } + + /** + * @param conf Cache configuration for start. + * @param desc Cache descriptor for start. 
+ * @param reqNearCfg Near cache configuration for start. + * @param exchTopVer Exchange topology version in which starting happened. + * @param disabledAfterStart Disable started cache after start or not. + */ + public StartCacheInfo(CacheConfiguration conf, DynamicCacheDescriptor desc, + NearCacheConfiguration reqNearCfg, + AffinityTopologyVersion exchTopVer, boolean disabledAfterStart) { + startedConf = conf; + this.desc = desc; + this.reqNearCfg = reqNearCfg; + this.exchTopVer = exchTopVer; + this.disabledAfterStart = disabledAfterStart; + } + + /** + * @return Cache configuration for start. + */ + public CacheConfiguration getStartedConfiguration() { + return startedConf; + } + + /** + * @return Cache descriptor for start. + */ + public DynamicCacheDescriptor getCacheDescriptor() { + return desc; + } + + /** + * @return Near cache configuration for start. + */ + @Nullable public NearCacheConfiguration getReqNearCfg() { + return reqNearCfg; + } + + /** + * @return Exchange topology version in which starting happened. + */ + public AffinityTopologyVersion getExchangeTopVer() { + return exchTopVer; + } + + /** + * @return Disable started cache after start or not. 
+ */ + public boolean isDisabledAfterStart() { + return disabledAfterStart; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(StartCacheInfo.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateAckMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateAckMessage.java index 7c241068231bb..f8012a7de733e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateAckMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateAckMessage.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.processors.query.schema.message.SchemaOperationStatusMessage; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.plugin.extensions.communication.Message; @@ -31,6 +32,7 @@ /** * WAL state ack message (sent from participant node to coordinator). */ +@IgniteCodeGeneratingFail public class WalStateAckMessage implements Message { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateManager.java index a01e813cbcc4c..b380d97950051 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/WalStateManager.java @@ -122,6 +122,9 @@ public class WalStateManager extends GridCacheSharedManagerAdapter { /** */ private volatile WALDisableContext walDisableContext; + /** Denies or allows WAL disabling. */ + private volatile boolean prohibitDisabling; + /** * Constructor. 
* @@ -265,6 +268,24 @@ public void onKernalStart() { } } + /** + * Denies or allows WAL disabling with subsequent {@link #init(Collection, boolean)} call. + * + * @param val denial status. + */ + public void prohibitWALDisabling(boolean val) { + prohibitDisabling = val; + } + + /** + * Reports whether WAL disabling with subsequent {@link #init(Collection, boolean)} is denied. + * + * @return denial status. + */ + public boolean prohibitWALDisabling() { + return prohibitDisabling; + } + /** * Initiate WAL mode change operation. * @@ -273,6 +294,9 @@ public void onKernalStart() { * @return Future completed when operation finished. */ public IgniteInternalFuture init(Collection cacheNames, boolean enabled) { + if (!enabled && prohibitDisabling) + return errorFuture("WAL disabling is prohibited."); + if (F.isEmpty(cacheNames)) return errorFuture("Cache names cannot be empty."); @@ -363,9 +387,12 @@ else if (!F.eq(grpDesc.deploymentId(), curGrpDesc.deploymentId())) { * in OWNING state if such feature is enabled. * * @param topVer Topology version. + * @param changedBaseline The exchange is caused by Baseline Topology change. 
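The prohibitDisabling flag added to WalStateManager is a plain volatile guard checked at the top of init(Collection, boolean) before any WAL mode change starts. A hypothetical reduction of just that guard (CompletableFuture stands in for IgniteInternalFuture; class and message strings are assumptions):

```java
import java.util.Collection;
import java.util.concurrent.CompletableFuture;

/** Hypothetical reduction of the prohibitDisabling guard added to WalStateManager. */
class WalStateGuard {
    /** Denies WAL disabling when set; volatile so concurrent init() calls see the latest value. */
    private volatile boolean prohibitDisabling;

    void prohibitWALDisabling(boolean val) { prohibitDisabling = val; }

    /** Mirrors the early-out the patch adds to init(Collection, boolean). */
    CompletableFuture<Void> init(Collection<String> cacheNames, boolean enabled) {
        if (!enabled && prohibitDisabling) {
            CompletableFuture<Void> f = new CompletableFuture<>();

            f.completeExceptionally(new IllegalStateException("WAL disabling is prohibited."));

            return f; // errorFuture(...) in the patch

        }

        return CompletableFuture.completedFuture(null); // real init() proceeds with the mode change
    }
}
```

Enabling WAL is never blocked by the flag; only the disable path is rejected, which lets an operator temporarily forbid the rebalancing optimization without touching running caches.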
*/ - public void changeLocalStatesOnExchangeDone(AffinityTopologyVersion topVer) { - if (!IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING, false)) + public void changeLocalStatesOnExchangeDone(AffinityTopologyVersion topVer, boolean changedBaseline) { + if (changedBaseline + && IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_PENDING_TX_TRACKER_ENABLED) + || !IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING, false)) return; Set grpsToEnableWal = new HashSet<>(); @@ -449,7 +476,7 @@ else if (!grp.localWalEnabled()) public void onGroupRebalanceFinished(int grpId, AffinityTopologyVersion topVer) { TemporaryDisabledWal session0 = tmpDisabledWal; - if (session0 == null || !session0.topVer.equals(topVer)) + if (session0 == null || session0.topVer.compareTo(topVer) > 0) return; session0.remainingGrps.remove(grpId); @@ -480,9 +507,11 @@ public void onGroupRebalanceFinished(int grpId, AffinityTopologyVersion topVer) for (Integer grpId0 : session0.disabledGrps) { CacheGroupContext grp = cctx.cache().cacheGroup(grpId0); - assert grp != null; + if (grp != null) + grp.topology().ownMoving(topVer); + else if (log.isDebugEnabled()) + log.debug("Cache group was destroyed before checkpoint finished, [grpId=" + grpId0 + ']'); - grp.topology().ownMoving(session0.topVer); } cctx.exchange().refreshPartitions(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataFileStore.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataFileStore.java index 662839c6f99f1..bee40994bc63b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataFileStore.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataFileStore.java @@ -18,13 +18,17 @@ import java.io.File; import java.io.FileInputStream; -import 
java.io.FileOutputStream; import java.util.concurrent.ConcurrentMap; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.binary.BinaryMetadata; import org.apache.ignite.internal.binary.BinaryUtils; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; import org.jetbrains.annotations.Nullable; @@ -45,6 +49,9 @@ class BinaryMetadataFileStore { /** */ private final GridKernalContext ctx; + /** */ + private FileIOFactory fileIOFactory; + /** */ private final IgniteLogger log; @@ -67,6 +74,8 @@ class BinaryMetadataFileStore { if (!CU.isPersistenceEnabled(ctx.config())) return; + fileIOFactory = ctx.config().getDataStorageConfiguration().getFileIOFactory(); + if (binaryMetadataFileStoreDir != null) workDir = binaryMetadataFileStoreDir; else { @@ -91,17 +100,27 @@ void writeMetadata(BinaryMetadata binMeta) { return; try { - File file = new File(workDir, Integer.toString(binMeta.typeId()) + ".bin"); + File file = new File(workDir, binMeta.typeId() + ".bin"); + + byte[] marshalled = U.marshal(ctx, binMeta); - try(FileOutputStream out = new FileOutputStream(file, false)) { - byte[] marshalled = U.marshal(ctx, binMeta); + try (final FileIO out = fileIOFactory.create(file)) { + int left = marshalled.length; + while ((left -= out.writeFully(marshalled, 0, Math.min(marshalled.length, left))) > 0) + ; - out.write(marshalled); + out.force(); } } catch (Exception e) { - U.warn(log, "Failed to save metadata for typeId: " + binMeta.typeId() + - "; exception was thrown: " + e.getMessage()); + final 
String msg = "Failed to save metadata for typeId: " + binMeta.typeId() + + "; exception was thrown: " + e.getMessage(); + + U.error(log, msg); + + ctx.failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); + + throw new IgniteException(msg, e); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataKey.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataKey.java index 32ab2a09ecbb4..352754069927b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataKey.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataKey.java @@ -25,7 +25,7 @@ import org.apache.ignite.internal.util.typedef.internal.S; /** - * Key for binary meta data. + * Key for binary metadata. */ public class BinaryMetadataKey extends GridCacheUtilityKey implements Externalizable { /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataTransport.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataTransport.java index 38450dfec5c78..1c2f6f0e77f84 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataTransport.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/BinaryMetadataTransport.java @@ -16,8 +16,12 @@ */ package org.apache.ignite.internal.processors.cache.binary; +import java.util.Iterator; +import java.util.LinkedHashSet; import java.util.List; +import java.util.Map; import java.util.Queue; +import java.util.Set; import java.util.UUID; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentLinkedQueue; @@ -42,6 +46,7 @@ import org.apache.ignite.internal.managers.eventstorage.GridLocalEventListener; import 
org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteInClosure; import org.jetbrains.annotations.Nullable; @@ -86,6 +91,9 @@ final class BinaryMetadataTransport { /** */ private final ConcurrentMap clientReqSyncMap = new ConcurrentHashMap<>(); + /** */ + private final ConcurrentMap> schemaWaitFuts = new ConcurrentHashMap<>(); + /** */ private volatile boolean stopping; @@ -206,6 +214,21 @@ GridFutureAdapter awaitMetadataUpdate(int typeId, int ver) return resFut; } + /** + * Await specific schema update. + * @param typeId Type id. + * @param schemaId Schema id. + * @return Future which will be completed when schema is received. + */ + GridFutureAdapter awaitSchemaUpdate(int typeId, int schemaId) { + GridFutureAdapter fut = new GridFutureAdapter<>(); + + // Use version for schemaId. + GridFutureAdapter oldFut = schemaWaitFuts.putIfAbsent(new SyncKey(typeId, schemaId), fut); + + return oldFut == null ? fut : oldFut; + } + /** * Allows client node to request latest version of binary metadata for a given typeId from the cluster * in case client is able to detect that it has obsolete metadata in its local cache. 
@@ -259,6 +282,13 @@ private final class MetadataUpdateProposedListener implements CustomEventListene /** {@inheritDoc} */ @Override public void onCustomEvent(AffinityTopologyVersion topVer, ClusterNode snd, MetadataUpdateProposedMessage msg) { + if (log.isDebugEnabled()) + log.debug("Received MetadataUpdateProposedMessage [typeId=" + msg.typeId() + + ", typeName=" + msg.metadata().typeName() + + ", pendingVer=" + msg.pendingVersion() + + ", acceptedVer=" + msg.acceptedVersion() + + ", schemasCnt=" + msg.metadata().schemas().size() + ']'); + int typeId = msg.typeId(); BinaryMetadataHolder holder = metaLocCache.get(typeId); @@ -277,20 +307,23 @@ private final class MetadataUpdateProposedListener implements CustomEventListene acceptedVer = 0; } - if (log.isDebugEnabled()) - log.debug("Versions are stamped on coordinator" + - " [typeId=" + typeId + - ", pendingVer=" + pendingVer + - ", acceptedVer=" + acceptedVer + "]" - ); - msg.pendingVersion(pendingVer); msg.acceptedVersion(acceptedVer); BinaryMetadata locMeta = holder != null ? holder.metadata() : null; try { - BinaryMetadata mergedMeta = BinaryUtils.mergeMetadata(locMeta, msg.metadata()); + Set changedSchemas = new LinkedHashSet<>(); + + BinaryMetadata mergedMeta = BinaryUtils.mergeMetadata(locMeta, msg.metadata(), changedSchemas); + + if (log.isDebugEnabled()) + log.debug("Versions are stamped on coordinator" + + " [typeId=" + typeId + + ", changedSchemas=" + changedSchemas + + ", pendingVer=" + pendingVer + + ", acceptedVer=" + acceptedVer + "]" + ); msg.metadata(mergedMeta); } @@ -358,8 +391,10 @@ private final class MetadataUpdateProposedListener implements CustomEventListene if (!msg.rejected()) { BinaryMetadata locMeta = holder != null ?
holder.metadata() : null; + Set changedSchemas = new LinkedHashSet<>(); + try { - BinaryMetadata mergedMeta = BinaryUtils.mergeMetadata(locMeta, msg.metadata()); + BinaryMetadata mergedMeta = BinaryUtils.mergeMetadata(locMeta, msg.metadata(), changedSchemas); BinaryMetadataHolder newHolder = new BinaryMetadataHolder(mergedMeta, pendingVer, acceptedVer); @@ -382,7 +417,8 @@ private final class MetadataUpdateProposedListener implements CustomEventListene } else { if (log.isDebugEnabled()) - log.debug("Updated metadata on server node: " + newHolder); + log.debug("Updated metadata on server node [holder=" + newHolder + + ", changedSchemas=" + changedSchemas + ']'); metaLocCache.put(typeId, newHolder); } @@ -463,7 +499,7 @@ private final class MetadataUpdateAcceptedListener implements CustomEventListene if (oldAcceptedVer >= newAcceptedVer) { if (log.isDebugEnabled()) log.debug("Marking ack as duplicate [holder=" + holder + - ", newAcceptedVer: " + newAcceptedVer + ']'); + ", newAcceptedVer=" + newAcceptedVer + ']'); //this is duplicate ack msg.duplicated(true); @@ -481,8 +517,26 @@ private final class MetadataUpdateAcceptedListener implements CustomEventListene GridFutureAdapter fut = syncMap.get(new SyncKey(typeId, newAcceptedVer)); + holder = metaLocCache.get(typeId); + if (log.isDebugEnabled()) - log.debug("Completing future " + fut + " for " + metaLocCache.get(typeId)); + log.debug("Completing future " + fut + " for " + holder); + + if (!schemaWaitFuts.isEmpty()) { + Iterator>> iter = schemaWaitFuts.entrySet().iterator(); + + while (iter.hasNext()) { + Map.Entry> entry = iter.next(); + + SyncKey key = entry.getKey(); + + if (key.typeId() == typeId && holder.metadata().hasSchema(key.version())) { + entry.getValue().onDone(); + + iter.remove(); + } + } + } if (fut != null) fut.onDone(MetadataUpdateResult.createSuccessfulResult()); @@ -527,6 +581,11 @@ private final class MetadataUpdateResultFuture extends GridFutureAdapter reconnectFut) throws IgniteCheckedException 
{ + this.reconnectFut = reconnectFut; + if (transport != null) transport.onDisconnected(); + + binaryContext().unregisterUserTypeDescriptors(); + binaryContext().unregisterBinarySchemas(); + + metadataLocCache.clear(); + } + + /** {@inheritDoc} */ + @Override public IgniteInternalFuture onReconnected(boolean clusterRestarted) throws IgniteCheckedException { + this.reconnectFut = null; + + return super.onReconnected(clusterRestarted); } /** {@inheritDoc} */ @@ -452,11 +475,20 @@ public GridBinaryMarshaller marshaller() { BinaryMetadata oldMeta = metaHolder != null ? metaHolder.metadata() : null; - BinaryMetadata mergedMeta = BinaryUtils.mergeMetadata(oldMeta, newMeta0); + Set changedSchemas = new LinkedHashSet<>(); + + BinaryMetadata mergedMeta = BinaryUtils.mergeMetadata(oldMeta, newMeta0, changedSchemas); - //metadata requested to be added is exactly the same as already presented in the cache - if (mergedMeta == oldMeta) + if (mergedMeta == oldMeta) { + // Metadata locally is up-to-date. Waiting for updating metadata in an entire cluster, if necessary. 
+ if (metaHolder.pendingVersion() != metaHolder.acceptedVersion()) { + GridFutureAdapter fut = + transport.awaitMetadataUpdate(typeId, metaHolder.pendingVersion()); + + fut.get(); + } return; + } if (failIfUnregistered) throw new UnregisteredBinaryTypeException( @@ -466,7 +498,24 @@ public GridBinaryMarshaller marshaller() { "dev-list.", typeId, mergedMeta); - MetadataUpdateResult res = transport.requestMetadataUpdate(mergedMeta).get(); + long t0 = System.nanoTime(); + + GridFutureAdapter fut = transport.requestMetadataUpdate(mergedMeta); + + MetadataUpdateResult res = fut.get(); + + if (log.isDebugEnabled()) { + IgniteInternalTx tx = ctx.cache().context().tm().tx(); + + log.debug("Completed metadata update [typeId=" + typeId + + ", typeName=" + newMeta.typeName() + + ", changedSchemas=" + changedSchemas + + ", waitTime=" + MILLISECONDS.convert(System.nanoTime() - t0, NANOSECONDS) + "ms" + + ", holder=" + metaHolder + + ", fut=" + fut + + ", tx=" + CU.txString(tx) + + ']'); + } assert res != null; @@ -474,7 +523,7 @@ public GridBinaryMarshaller marshaller() { throw res.error(); } catch (IgniteCheckedException e) { - throw new BinaryObjectException("Failed to update meta data for type: " + newMeta.typeName(), e); + throw new BinaryObjectException("Failed to update metadata for type: " + newMeta.typeName(), e); } } @@ -513,13 +562,15 @@ public GridBinaryMarshaller marshaller() { /** * @param typeId Type ID. - * @return Meta data. + * @return Metadata. * @throws IgniteException In case of error. 
*/ @Nullable public BinaryMetadata metadata0(final int typeId) { BinaryMetadataHolder holder = metadataLocCache.get(typeId); - if (holder == null) { + IgniteThread curThread = IgniteThread.current(); + + if (holder == null && (curThread == null || !curThread.isForbiddenToRequestBinaryMetadata())) { if (ctx.clientNode()) { try { transport.requestUpToDateMetadata(typeId).get(); @@ -533,7 +584,7 @@ public GridBinaryMarshaller marshaller() { } if (holder != null) { - if (IgniteThread.current() instanceof IgniteDiscoveryThread) + if (curThread instanceof IgniteDiscoveryThread || (curThread != null && curThread.isForbiddenToRequestBinaryMetadata())) return holder.metadata(); if (holder.pendingVersion() - holder.acceptedVersion() > 0) { @@ -541,9 +592,9 @@ public GridBinaryMarshaller marshaller() { if (log.isDebugEnabled() && !fut.isDone()) log.debug("Waiting for update for" + - " [typeId=" + typeId + - ", pendingVer=" + holder.pendingVersion() + - ", acceptedVer=" + holder.acceptedVersion() + "]"); + " [typeId=" + typeId + + ", pendingVer=" + holder.pendingVersion() + + ", acceptedVer=" + holder.acceptedVersion() + "]"); try { fut.get(); @@ -565,40 +616,104 @@ public GridBinaryMarshaller marshaller() { if (ctx.clientNode()) { if (holder == null || !holder.metadata().hasSchema(schemaId)) { + if (log.isDebugEnabled()) + log.debug("Waiting for client metadata update" + + " [typeId=" + typeId + + ", schemaId=" + schemaId + + ", pendingVer=" + (holder == null ? "NA" : holder.pendingVersion()) + + ", acceptedVer=" + (holder == null ? "NA" :holder.acceptedVersion()) + ']'); + try { transport.requestUpToDateMetadata(typeId).get(); - - holder = metadataLocCache.get(typeId); } catch (IgniteCheckedException ignored) { // No-op. 
+ } + + holder = metadataLocCache.get(typeId); + + IgniteFuture reconnectFut0 = reconnectFut; + + if (holder == null && reconnectFut0 != null) + throw new IgniteClientDisconnectedException(reconnectFut0, "Client node disconnected."); + + if (log.isDebugEnabled()) + log.debug("Finished waiting for client metadata update" + + " [typeId=" + typeId + + ", schemaId=" + schemaId + + ", pendingVer=" + (holder == null ? "NA" : holder.pendingVersion()) + + ", acceptedVer=" + (holder == null ? "NA" : holder.acceptedVersion()) + ']'); } } - else if (holder != null) { - if (IgniteThread.current() instanceof IgniteDiscoveryThread) + else { + if (holder != null && IgniteThread.current() instanceof IgniteDiscoveryThread) return holder.metadata().wrap(binaryCtx); + else if (holder != null && (holder.pendingVersion() - holder.acceptedVersion() > 0)) { + if (log.isDebugEnabled()) + log.debug("Waiting for metadata update" + + " [typeId=" + typeId + + ", schemaId=" + schemaId + + ", pendingVer=" + holder.pendingVersion() + + ", acceptedVer=" + holder.acceptedVersion() + ']'); - if (holder.pendingVersion() - holder.acceptedVersion() > 0) { - GridFutureAdapter fut = transport.awaitMetadataUpdate( - typeId, - holder.pendingVersion()); + long t0 = System.nanoTime(); - if (log.isDebugEnabled() && !fut.isDone()) - log.debug("Waiting for update for" + - " [typeId=" + typeId - + ", schemaId=" + schemaId - + ", pendingVer=" + holder.pendingVersion() - + ", acceptedVer=" + holder.acceptedVersion() + "]"); + GridFutureAdapter fut = transport.awaitMetadataUpdate( + typeId, + holder.pendingVersion()); try { fut.get(); } + catch (IgniteCheckedException e) { + log.error("Failed to wait for metadata update [typeId=" + typeId + ", schemaId=" + schemaId + ']', e); + } + + if (log.isDebugEnabled()) + log.debug("Finished waiting for metadata update" + + " [typeId=" + typeId + + ", waitTime=" + MILLISECONDS.convert(System.nanoTime() - t0, NANOSECONDS) + "ms" + + ", schemaId=" + schemaId + + ", pendingVer=" +
holder.pendingVersion() + + ", acceptedVer=" + holder.acceptedVersion() + ']'); + + holder = metadataLocCache.get(typeId); + } + else if (holder == null || !holder.metadata().hasSchema(schemaId)) { + // Last resort waiting. + U.warn(log, + "Schema is missing while no metadata updates are in progress " + + "(will wait for schema update within timeout defined by IGNITE_BINARY_META_UPDATE_TIMEOUT system property)" + + " [typeId=" + typeId + + ", missingSchemaId=" + schemaId + + ", pendingVer=" + (holder == null ? "NA" : holder.pendingVersion()) + + ", acceptedVer=" + (holder == null ? "NA" : holder.acceptedVersion()) + + ", binMetaUpdateTimeout=" + waitSchemaTimeout + ']'); + + long t0 = System.nanoTime(); + + GridFutureAdapter fut = transport.awaitSchemaUpdate(typeId, schemaId); + + try { + fut.get(waitSchemaTimeout); + } + catch (IgniteFutureTimeoutCheckedException e) { + log.error("Timed out while waiting for schema update [typeId=" + typeId + ", schemaId=" + + schemaId + ']'); + } catch (IgniteCheckedException ignored) { // No-op. } holder = metadataLocCache.get(typeId); + + if (log.isDebugEnabled() && holder != null && holder.metadata().hasSchema(schemaId)) + log.debug("Found the schema after wait" + + " [typeId=" + typeId + + ", waitTime=" + MILLISECONDS.convert(System.nanoTime() - t0, NANOSECONDS) + "ms" + + ", schemaId=" + schemaId + + ", pendingVer=" + holder.pendingVersion() + + ", acceptedVer=" + holder.acceptedVersion() + ']'); } } @@ -903,7 +1018,7 @@ else if (type == BinaryObjectImpl.TYPE_BINARY_ENUM) if ((res = validateBinaryConfiguration(rmtNode)) != null) return res; - return validateBinaryMetadata(rmtNode.id(), (Map) discoData.joiningNodeData()); + return validateBinaryMetadata(rmtNode.id(), (Map)discoData.joiningNodeData()); } /** */ @@ -1070,4 +1185,75 @@ private IgniteNodeValidationResult validateBinaryMetadata(UUID rmtNodeId, Map listeners; + + /** + * @param metaHnd Meta handler. + * @param igniteCfg Ignite config. + * @param log Logger.
+ */ + public TestBinaryContext(BinaryMetadataHandler metaHnd, IgniteConfiguration igniteCfg, + IgniteLogger log) { + super(metaHnd, igniteCfg, log); + } + + /** {@inheritDoc} */ + @Nullable @Override public BinaryType metadata(int typeId) throws BinaryObjectException { + BinaryType metadata = super.metadata(typeId); + + if (listeners != null) { + for (TestBinaryContextListener listener : listeners) + listener.onAfterMetadataRequest(typeId, metadata); + } + + return metadata; + } + + /** {@inheritDoc} */ + @Override public void updateMetadata(int typeId, BinaryMetadata meta, + boolean failIfUnregistered) throws BinaryObjectException { + if (listeners != null) { + for (TestBinaryContextListener listener : listeners) + listener.onBeforeMetadataUpdate(typeId, meta); + } + + super.updateMetadata(typeId, meta, failIfUnregistered); + } + + /** */ + public interface TestBinaryContextListener { + /** + * @param typeId Type id. + * @param type Type. + */ + void onAfterMetadataRequest(int typeId, BinaryType type); + + /** + * @param typeId Type id. + * @param metadata Metadata. + */ + void onBeforeMetadataUpdate(int typeId, BinaryMetadata metadata); + } + + /** + * @param lsnr Listener. 
+ */ + public void addListener(TestBinaryContextListener lsnr) { + if (listeners == null) + listeners = new ArrayList<>(); + + if (!listeners.contains(lsnr)) + listeners.add(lsnr); + } + + /** */ + public void clearAllListener() { + if (listeners != null) + listeners.clear(); + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/MetadataUpdateProposedMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/MetadataUpdateProposedMessage.java index 84e32e1b3d7cc..c465314c9170b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/MetadataUpdateProposedMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/binary/MetadataUpdateProposedMessage.java @@ -77,22 +77,22 @@ public final class MetadataUpdateProposedMessage implements DiscoveryCustomMessa /** */ private final IgniteUuid id = IgniteUuid.randomUuid(); - /** */ + /** Node UUID which initiated metadata update. */ private final UUID origNodeId; /** */ private BinaryMetadata metadata; - /** */ + /** Metadata type id. */ private final int typeId; - /** */ + /** Metadata version which is pending for update. */ private int pendingVer; - /** */ + /** Metadata version which is already accepted by entire cluster. */ private int acceptedVer; - /** */ + /** Message acceptance status. */ private ProposalStatus status = ProposalStatus.SUCCESSFUL; /** */ @@ -222,7 +222,7 @@ public int typeId() { return typeId; } - /** */ + /** Message acceptance status. 
*/ private enum ProposalStatus { /** */ SUCCESSFUL, diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTtlUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTtlUpdateRequest.java index c092132192c74..c420aeb4f0491 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTtlUpdateRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTtlUpdateRequest.java @@ -213,37 +213,37 @@ public List nearVersions() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeCollection("nearKeys", nearKeys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeCollection("nearVers", nearVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 6: - if (!writer.writeMessage("topVer", topVer)) + case 7: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeLong("ttl", ttl)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeCollection("vers", vers, MessageCollectionItemType.MSG)) return false; @@ -265,7 +265,7 @@ public List nearVersions() { return false; switch (reader.state()) { - case 3: + case 4: keys = reader.readCollection("keys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -273,7 +273,7 @@ public List nearVersions() { reader.incrementState(); - case 4: + case 5: nearKeys = reader.readCollection("nearKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -281,7 +281,7 @@ public List nearVersions() { reader.incrementState(); - case 5: + case 6: nearVers = reader.readCollection("nearVers", 
MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -289,15 +289,15 @@ public List nearVersions() { reader.incrementState(); - case 6: - topVer = reader.readMessage("topVer"); + case 7: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 7: + case 8: ttl = reader.readLong("ttl"); if (!reader.isLastRead()) @@ -305,7 +305,7 @@ public List nearVersions() { reader.incrementState(); - case 8: + case 9: vers = reader.readCollection("vers", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -325,7 +325,7 @@ public List nearVersions() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 9; + return 10; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxFinishSync.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxFinishSync.java index 1f688f64cc9e2..514e78e898086 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxFinishSync.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxFinishSync.java @@ -29,7 +29,6 @@ import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; -import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgniteFuture; import org.jetbrains.annotations.Nullable; @@ -66,9 +65,16 @@ public void onFinishSend(UUID nodeId, long threadId) { ThreadFinishSync threadSync = threadMap.get(threadId); if (threadSync == null) - threadSync = F.addIfAbsent(threadMap, threadId, new ThreadFinishSync(threadId)); + threadMap.put(threadId, threadSync = new ThreadFinishSync(threadId)); - threadSync.onSend(nodeId); + synchronized (threadSync) {
+ // Thread has to create a new ThreadFinishSync if another thread executing onAckReceived removed the previous threadSync object. + if (threadMap.get(threadId) == null) + threadMap.put(threadId, threadSync = new ThreadFinishSync(threadId)); + + threadSync.onSend(nodeId); + } } /** @@ -104,8 +110,14 @@ public void onDisconnected(IgniteFuture reconnectFut) { public void onAckReceived(UUID nodeId, long threadId) { ThreadFinishSync threadSync = threadMap.get(threadId); - if (threadSync != null) + if (threadSync != null) { threadSync.onReceive(nodeId); + + synchronized (threadSync) { + if (threadSync.isEmpty()) + threadMap.remove(threadId); + } + } } /** @@ -114,8 +126,14 @@ public void onAckReceived(UUID nodeId, long threadId) { * @param nodeId Left node ID. */ public void onNodeLeft(UUID nodeId) { - for (ThreadFinishSync threadSync : threadMap.values()) + for (ThreadFinishSync threadSync : threadMap.values()) { threadSync.onNodeLeft(nodeId); + + synchronized (threadSync) { + if (threadSync.isEmpty()) + threadMap.values().remove(threadSync); + } + } } /** @@ -193,7 +211,7 @@ public void onDisconnected(IgniteFuture reconnectFut) { * @param nodeId Node ID response received from.
*/ public void onReceive(UUID nodeId) { - TxFinishSync sync = nodeMap.get(nodeId); + TxFinishSync sync = nodeMap.remove(nodeId); if (sync != null) sync.onReceive(); @@ -208,6 +226,13 @@ public void onNodeLeft(UUID nodeId) { if (sync != null) sync.onNodeLeft(); } + + /** + * + */ + private boolean isEmpty() { + return nodeMap.isEmpty(); + } } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryFuture.java index 41e8abae60b61..5e0deb0cb0050 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryFuture.java @@ -49,9 +49,9 @@ * Future verifying that all remote transactions related to transaction were prepared or committed. */ public class GridCacheTxRecoveryFuture extends GridCacheCompoundIdentityFuture { - /** */ + /** */ private static final long serialVersionUID = 0L; - + /** Logger reference. */ private static final AtomicReference logRef = new AtomicReference<>(); @@ -146,17 +146,6 @@ else if (log.isInfoEnabled()) */ @SuppressWarnings("ConstantConditions") public void prepare() { - if (tx.txState().mvccEnabled(cctx)) { // TODO IGNITE-7313 - U.error(log, "Cannot commit MVCC enabled transaction by recovery procedure. 
" + - "Operation is usupported at the moment [tx=" + CU.txString(tx) + ']'); - - onDone(false); - - markInitialized(); - - return; - } - if (nearTxCheck) { UUID nearNodeId = tx.eventNodeId(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryRequest.java index 45d1f1a31bf5a..90ce2344d77ab 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryRequest.java @@ -148,37 +148,37 @@ public boolean system() { } switch (writer.state()) { - case 7: + case 8: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeIgniteUuid("miniId", miniId)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeBoolean("nearTxCheck", nearTxCheck)) return false; writer.incrementState(); - case 10: + case 11: if (!writer.writeMessage("nearXidVer", nearXidVer)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeBoolean("sys", sys)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeInt("txNum", txNum)) return false; @@ -200,7 +200,7 @@ public boolean system() { return false; switch (reader.state()) { - case 7: + case 8: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -208,7 +208,7 @@ public boolean system() { reader.incrementState(); - case 8: + case 9: miniId = reader.readIgniteUuid("miniId"); if (!reader.isLastRead()) @@ -216,7 +216,7 @@ public boolean system() { reader.incrementState(); - case 9: + case 10: nearTxCheck = reader.readBoolean("nearTxCheck"); if (!reader.isLastRead()) @@ -224,7 +224,7 @@ public boolean system() { reader.incrementState(); - case 10: + case 11: 
nearXidVer = reader.readMessage("nearXidVer"); if (!reader.isLastRead()) @@ -232,7 +232,7 @@ public boolean system() { reader.incrementState(); - case 11: + case 12: sys = reader.readBoolean("sys"); if (!reader.isLastRead()) @@ -240,7 +240,7 @@ public boolean system() { reader.incrementState(); - case 12: + case 13: txNum = reader.readInt("txNum"); if (!reader.isLastRead()) @@ -260,7 +260,7 @@ public boolean system() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 13; + return 14; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryResponse.java index a9ac26ba4b49c..1ef44a8f21f6b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTxRecoveryResponse.java @@ -129,19 +129,19 @@ public boolean success() { } switch (writer.state()) { - case 7: + case 8: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeIgniteUuid("miniId", miniId)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeBoolean("success", success)) return false; @@ -163,7 +163,7 @@ public boolean success() { return false; switch (reader.state()) { - case 7: + case 8: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -171,7 +171,7 @@ public boolean success() { reader.incrementState(); - case 8: + case 9: miniId = reader.readIgniteUuid("miniId"); if (!reader.isLastRead()) @@ -179,7 +179,7 @@ public boolean success() { reader.incrementState(); - case 9: + case 10: success = reader.readBoolean("success"); if (!reader.isLastRead()) @@ -199,7 +199,7 @@ public boolean success() { /** {@inheritDoc} */ 
@Override public byte fieldsCount() { - return 10; + return 11; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedBaseMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedBaseMessage.java index fc209aaa956f0..8536e480489b5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedBaseMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedBaseMessage.java @@ -161,25 +161,25 @@ int keysCount() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeByteArray("candsByIdxBytes", candsByIdxBytes)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeCollection("committedVers", committedVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeCollection("rolledbackVers", rolledbackVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeMessage("ver", ver)) return false; @@ -201,7 +201,7 @@ int keysCount() { return false; switch (reader.state()) { - case 3: + case 4: candsByIdxBytes = reader.readByteArray("candsByIdxBytes"); if (!reader.isLastRead()) @@ -209,7 +209,7 @@ int keysCount() { reader.incrementState(); - case 4: + case 5: committedVers = reader.readCollection("committedVers", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -217,7 +217,7 @@ int keysCount() { reader.incrementState(); - case 5: + case 6: rolledbackVers = reader.readCollection("rolledbackVers", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -225,7 +225,7 @@ int keysCount() { reader.incrementState(); - case 6: + case 7: ver = reader.readMessage("ver"); if (!reader.isLastRead()) diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedCacheEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedCacheEntry.java index ff636c7c9a9a2..b07451a6a2091 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedCacheEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedCacheEntry.java @@ -368,6 +368,8 @@ public void removeExplicitNodeLocks(UUID nodeId) throws GridCacheEntryRemovedExc GridCacheMvccCandidate doomed; + GridCacheVersion deferredDelVer; + CacheObject val; lockEntry(); @@ -406,11 +408,22 @@ public void removeExplicitNodeLocks(UUID nodeId) throws GridCacheEntryRemovedExc } val = this.val; + + deferredDelVer = this.ver; } finally { unlockEntry(); } + if (val == null) { + boolean deferred = cctx.deferredDelete() && !detached() && !isInternal(); + + if (deferred) { + if (deferredDelVer != null) + cctx.onDeferredDelete(this, deferredDelVer); + } + } + if (log.isDebugEnabled()) log.debug("Removed lock candidate from entry [doomed=" + doomed + ", owner=" + owner + ", prev=" + prev + ", entry=" + this + ']'); @@ -705,11 +718,6 @@ public void recheck() { } } - /** {@inheritDoc} */ - @Override public final void txUnlock(IgniteInternalTx tx) throws GridCacheEntryRemovedException { - removeLock(tx.xidVersion()); - } - /** * @param emptyBefore Empty flag before operation. * @param emptyAfter Empty flag after operation. 
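The wholesale renumbering in the surrounding message classes (every `case N` label and each `fieldsCount()` return bumped by one) follows from Ignite's direct-marshalling convention: each field owns a fixed slot index, and a subclass continues the fall-through switch where its parent's slots end, so adding a single field to a shared base message shifts every index beneath it. A minimal sketch of that pattern, with illustrative class and method names rather than the actual Ignite message types:

```java
// Illustrative fall-through state-machine serializer: each field gets a
// fixed slot index, the subclass resumes where the parent's slots end,
// so a new parent field would shift every child case in lock-step.
import java.util.ArrayList;
import java.util.List;

class MsgWriter {
    int state;                         // next field slot to write
    final List<Object> out = new ArrayList<>();

    boolean write(Object v) {          // stand-in for a write that may need a retry
        out.add(v);
        return true;
    }

    void incrementState() { state++; }
}

class BaseMsg {
    long ver = 1L;

    // Writes parent fields into slots 0..fieldsCount()-1.
    boolean writeTo(MsgWriter w) {
        switch (w.state) {
            case 0:
                if (!w.write(ver))
                    return false;

                w.incrementState();
        }

        return true;
    }

    byte fieldsCount() { return 1; }   // parent owns slot 0
}

class ChildMsg extends BaseMsg {
    boolean flag = true;
    int txSize = 42;

    @Override boolean writeTo(MsgWriter w) {
        if (!super.writeTo(w))         // parent slots always go first
            return false;

        switch (w.state) {             // no break: falls through on a fresh write,
            case 1:                    // resumes at the saved slot after a retry
                if (!w.write(flag))
                    return false;

                w.incrementState();

            case 2:                    // would become case 3 if BaseMsg gained a field
                if (!w.write(txSize))
                    return false;

                w.incrementState();
        }

        return true;
    }

    @Override byte fieldsCount() { return 3; }  // parent's 1 + own 2
}
```

Because a send can stop partway and resume from the saved state, the cases deliberately fall through without `break` and the indices must stay contiguous; that is why the patch bumps all of them together instead of appending the new base-class field at the end.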
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockRequest.java index 25a557c324817..ca78763fc2148 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockRequest.java @@ -366,79 +366,79 @@ public long timeout() { } switch (writer.state()) { - case 7: + case 8: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeBoolean("isInTx", isInTx)) return false; writer.incrementState(); - case 10: + case 11: if (!writer.writeBoolean("isInvalidate", isInvalidate)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeBoolean("isRead", isRead)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeByte("isolation", isolation != null ? 
(byte)isolation.ordinal() : -1)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeMessage("nearXidVer", nearXidVer)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeUuid("nodeId", nodeId)) return false; writer.incrementState(); - case 16: + case 17: if (!writer.writeBooleanArray("retVals", retVals)) return false; writer.incrementState(); - case 17: + case 18: if (!writer.writeLong("threadId", threadId)) return false; writer.incrementState(); - case 18: + case 19: if (!writer.writeLong("timeout", timeout)) return false; writer.incrementState(); - case 19: + case 20: if (!writer.writeInt("txSize", txSize)) return false; @@ -460,7 +460,7 @@ public long timeout() { return false; switch (reader.state()) { - case 7: + case 8: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -468,7 +468,7 @@ public long timeout() { reader.incrementState(); - case 8: + case 9: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -476,7 +476,7 @@ public long timeout() { reader.incrementState(); - case 9: + case 10: isInTx = reader.readBoolean("isInTx"); if (!reader.isLastRead()) @@ -484,7 +484,7 @@ public long timeout() { reader.incrementState(); - case 10: + case 11: isInvalidate = reader.readBoolean("isInvalidate"); if (!reader.isLastRead()) @@ -492,7 +492,7 @@ public long timeout() { reader.incrementState(); - case 11: + case 12: isRead = reader.readBoolean("isRead"); if (!reader.isLastRead()) @@ -500,7 +500,7 @@ public long timeout() { reader.incrementState(); - case 12: + case 13: byte isolationOrd; isolationOrd = reader.readByte("isolation"); @@ -512,7 +512,7 @@ public long timeout() { reader.incrementState(); - case 13: + case 14: keys = reader.readCollection("keys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -520,7 +520,7 @@ public long 
timeout() { reader.incrementState(); - case 14: + case 15: nearXidVer = reader.readMessage("nearXidVer"); if (!reader.isLastRead()) @@ -528,7 +528,7 @@ public long timeout() { reader.incrementState(); - case 15: + case 16: nodeId = reader.readUuid("nodeId"); if (!reader.isLastRead()) @@ -536,7 +536,7 @@ public long timeout() { reader.incrementState(); - case 16: + case 17: retVals = reader.readBooleanArray("retVals"); if (!reader.isLastRead()) @@ -544,7 +544,7 @@ public long timeout() { reader.incrementState(); - case 17: + case 18: threadId = reader.readLong("threadId"); if (!reader.isLastRead()) @@ -552,7 +552,7 @@ public long timeout() { reader.incrementState(); - case 18: + case 19: timeout = reader.readLong("timeout"); if (!reader.isLastRead()) @@ -560,7 +560,7 @@ public long timeout() { reader.incrementState(); - case 19: + case 20: txSize = reader.readInt("txSize"); if (!reader.isLastRead()) @@ -580,7 +580,7 @@ public long timeout() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 20; + return 21; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockResponse.java index 4b21896b1c051..2d4de9c8156eb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedLockResponse.java @@ -221,19 +221,19 @@ protected int valuesSize() { } switch (writer.state()) { - case 7: + case 8: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeCollection("vals", vals, MessageCollectionItemType.MSG)) return false; 
@@ -255,7 +255,7 @@ protected int valuesSize() { return false; switch (reader.state()) { - case 7: + case 8: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -263,7 +263,7 @@ protected int valuesSize() { reader.incrementState(); - case 8: + case 9: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -271,7 +271,7 @@ protected int valuesSize() { reader.incrementState(); - case 9: + case 10: vals = reader.readCollection("vals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -291,7 +291,7 @@ protected int valuesSize() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 10; + return 11; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishRequest.java index ea9336b2a55e3..a1af470c56deb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishRequest.java @@ -325,85 +325,85 @@ public boolean replyRequired() { } switch (writer.state()) { - case 7: + case 8: if (!writer.writeMessage("baseVer", baseVer)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeBoolean("commit", commit)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeMessage("commitVer", commitVer)) return false; writer.incrementState(); - case 10: + case 11: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeBoolean("invalidate", invalidate)) return false; writer.incrementState(); - case 13: + case 14: if 
(!writer.writeByte("plc", plc)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeByte("syncMode", syncMode != null ? (byte)syncMode.ordinal() : -1)) return false; writer.incrementState(); - case 16: + case 17: if (!writer.writeBoolean("sys", sys)) return false; writer.incrementState(); - case 17: + case 18: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 18: + case 19: if (!writer.writeLong("threadId", threadId)) return false; writer.incrementState(); - case 19: - if (!writer.writeMessage("topVer", topVer)) + case 20: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 20: + case 21: if (!writer.writeInt("txSize", txSize)) return false; @@ -425,7 +425,7 @@ public boolean replyRequired() { return false; switch (reader.state()) { - case 7: + case 8: baseVer = reader.readMessage("baseVer"); if (!reader.isLastRead()) @@ -433,7 +433,7 @@ public boolean replyRequired() { reader.incrementState(); - case 8: + case 9: commit = reader.readBoolean("commit"); if (!reader.isLastRead()) @@ -441,7 +441,7 @@ public boolean replyRequired() { reader.incrementState(); - case 9: + case 10: commitVer = reader.readMessage("commitVer"); if (!reader.isLastRead()) @@ -449,7 +449,7 @@ public boolean replyRequired() { reader.incrementState(); - case 10: + case 11: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -457,7 +457,7 @@ public boolean replyRequired() { reader.incrementState(); - case 11: + case 12: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -465,7 +465,7 @@ public boolean replyRequired() { reader.incrementState(); - case 12: + case 13: invalidate = reader.readBoolean("invalidate"); if (!reader.isLastRead()) @@ -473,7 +473,7 @@ public boolean replyRequired() { reader.incrementState(); - case 13: + case 14: plc = 
reader.readByte("plc"); if (!reader.isLastRead()) @@ -481,7 +481,7 @@ public boolean replyRequired() { reader.incrementState(); - case 14: + case 15: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -489,7 +489,7 @@ public boolean replyRequired() { reader.incrementState(); - case 15: + case 16: byte syncModeOrd; syncModeOrd = reader.readByte("syncMode"); @@ -501,7 +501,7 @@ public boolean replyRequired() { reader.incrementState(); - case 16: + case 17: sys = reader.readBoolean("sys"); if (!reader.isLastRead()) @@ -509,7 +509,7 @@ public boolean replyRequired() { reader.incrementState(); - case 17: + case 18: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -517,7 +517,7 @@ public boolean replyRequired() { reader.incrementState(); - case 18: + case 19: threadId = reader.readLong("threadId"); if (!reader.isLastRead()) @@ -525,15 +525,15 @@ public boolean replyRequired() { reader.incrementState(); - case 19: - topVer = reader.readMessage("topVer"); + case 20: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 20: + case 21: txSize = reader.readInt("txSize"); if (!reader.isLastRead()) @@ -553,7 +553,7 @@ public boolean replyRequired() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 21; + return 22; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishResponse.java index c36e6336d4f7c..5fdf970bc0f8d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxFinishResponse.java @@ -145,25 +145,25 @@ public IgniteUuid futureId() { } switch 
(writer.state()) { - case 2: + case 3: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 3: + case 4: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeInt("part", part)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeMessage("txId", txId)) return false; @@ -185,7 +185,7 @@ public IgniteUuid futureId() { return false; switch (reader.state()) { - case 2: + case 3: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -193,7 +193,7 @@ public IgniteUuid futureId() { reader.incrementState(); - case 3: + case 4: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -201,7 +201,7 @@ public IgniteUuid futureId() { reader.incrementState(); - case 4: + case 5: part = reader.readInt("part"); if (!reader.isLastRead()) @@ -209,7 +209,7 @@ public IgniteUuid futureId() { reader.incrementState(); - case 5: + case 6: txId = reader.readMessage("txId"); if (!reader.isLastRead()) @@ -229,7 +229,7 @@ public IgniteUuid futureId() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 6; + return 7; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxMapping.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxMapping.java index b5437869c6594..0eba9424775fb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxMapping.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxMapping.java @@ -18,8 +18,13 @@ package org.apache.ignite.internal.processors.cache.distributed; import java.util.Collection; +import java.util.Collections; import java.util.Iterator; import java.util.LinkedHashSet; +import java.util.Set; +import java.util.UUID; +import 
java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; @@ -35,10 +40,17 @@ * Transaction node mapping. */ public class GridDistributedTxMapping { + /** */ + private static final AtomicReferenceFieldUpdater BACKUPS_FIELD_UPDATER + = AtomicReferenceFieldUpdater.newUpdater(GridDistributedTxMapping.class, Set.class, "backups"); + /** Mapped node. */ @GridToStringExclude private ClusterNode primary; + /** Mapped backup nodes. */ + private volatile Set backups; + /** Entries. */ @GridToStringInclude private final Collection entries; @@ -282,6 +294,26 @@ public boolean empty() { return entries.isEmpty(); } + /** + * @param newBackups Backups to be added to this mapping. + */ + public void addBackups(Collection newBackups) { + if (newBackups == null) + return; + + if (backups == null) + BACKUPS_FIELD_UPDATER.compareAndSet(this, null, Collections.newSetFromMap(new ConcurrentHashMap<>())); + + backups.addAll(newBackups); + } + + /** + * @return Mapped backup nodes. + */ + public Set backups() { + return backups != null ? 
backups : Collections.emptySet(); + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(GridDistributedTxMapping.class, this, "node", primary.id()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareRequest.java index a5aa0d838935a..96eeee20b4b71 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareRequest.java @@ -505,79 +505,79 @@ private boolean isFlag(int mask) { } switch (writer.state()) { - case 7: + case 8: if (!writer.writeByte("concurrency", concurrency != null ? (byte)concurrency.ordinal() : -1)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeCollection("dhtVerKeys", dhtVerKeys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeCollection("dhtVerVals", dhtVerVals, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 10: + case 11: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeByte("isolation", isolation != null ? 
(byte)isolation.ordinal() : -1)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeByte("plc", plc)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeCollection("reads", reads, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeLong("threadId", threadId)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeLong("timeout", timeout)) return false; writer.incrementState(); - case 16: + case 17: if (!writer.writeMap("txNodesMsg", txNodesMsg, MessageCollectionItemType.UUID, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 17: + case 18: if (!writer.writeInt("txSize", txSize)) return false; writer.incrementState(); - case 18: + case 19: if (!writer.writeMessage("writeVer", writeVer)) return false; writer.incrementState(); - case 19: + case 20: if (!writer.writeCollection("writes", writes, MessageCollectionItemType.MSG)) return false; @@ -599,7 +599,7 @@ private boolean isFlag(int mask) { return false; switch (reader.state()) { - case 7: + case 8: byte concurrencyOrd; concurrencyOrd = reader.readByte("concurrency"); @@ -611,7 +611,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 8: + case 9: dhtVerKeys = reader.readCollection("dhtVerKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -619,7 +619,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 9: + case 10: dhtVerVals = reader.readCollection("dhtVerVals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -627,7 +627,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 10: + case 11: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -635,7 +635,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 11: + case 12: byte isolationOrd; isolationOrd = reader.readByte("isolation"); @@ -647,7 +647,7 @@ private boolean isFlag(int mask) { 
reader.incrementState(); - case 12: + case 13: plc = reader.readByte("plc"); if (!reader.isLastRead()) @@ -655,7 +655,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 13: + case 14: reads = reader.readCollection("reads", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -663,7 +663,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 14: + case 15: threadId = reader.readLong("threadId"); if (!reader.isLastRead()) @@ -671,7 +671,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 15: + case 16: timeout = reader.readLong("timeout"); if (!reader.isLastRead()) @@ -679,7 +679,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 16: + case 17: txNodesMsg = reader.readMap("txNodesMsg", MessageCollectionItemType.UUID, MessageCollectionItemType.MSG, false); if (!reader.isLastRead()) @@ -687,7 +687,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 17: + case 18: txSize = reader.readInt("txSize"); if (!reader.isLastRead()) @@ -695,7 +695,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 18: + case 19: writeVer = reader.readMessage("writeVer"); if (!reader.isLastRead()) @@ -703,7 +703,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 19: + case 20: writes = reader.readCollection("writes", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -723,7 +723,7 @@ private boolean isFlag(int mask) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 20; + return 21; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareResponse.java index 58e94926ca9e8..c26880e10fed4 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxPrepareResponse.java @@ -178,19 +178,19 @@ public boolean isRollback() { } switch (writer.state()) { - case 7: + case 8: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeInt("part", part)) return false; @@ -212,7 +212,7 @@ public boolean isRollback() { return false; switch (reader.state()) { - case 7: + case 8: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -220,7 +220,7 @@ public boolean isRollback() { reader.incrementState(); - case 8: + case 9: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -228,7 +228,7 @@ public boolean isRollback() { reader.incrementState(); - case 9: + case 10: part = reader.readInt("part"); if (!reader.isLastRead()) @@ -248,7 +248,7 @@ public boolean isRollback() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 10; + return 11; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxRemoteAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxRemoteAdapter.java index fdbdc4648c0f3..ad83cbe967fb4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxRemoteAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedTxRemoteAdapter.java @@ -29,6 +29,7 @@ import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import java.util.stream.Collectors; import org.apache.ignite.IgniteCheckedException; +import 
org.apache.ignite.IgniteException; import org.apache.ignite.failure.FailureContext; import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.IgniteInternalFuture; @@ -50,11 +51,7 @@ import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.GridCacheUpdateTxResult; import org.apache.ignite.internal.processors.cache.KeyCacheObject; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; -import org.apache.ignite.internal.processors.cache.distributed.dht.PartitionUpdateCountersMessage; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheEntry; -import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; @@ -129,6 +126,10 @@ public abstract class GridDistributedTxRemoteAdapter extends IgniteTxAdapter /** {@code True} if tx should skip adding itself to completed version map on finish. */ private boolean skipCompletedVers; + /** Transaction label. */ + @GridToStringInclude + @Nullable private String txLbl; + /** * Empty constructor required for {@link Externalizable}. */ @@ -150,6 +151,7 @@ public GridDistributedTxRemoteAdapter() { * @param txSize Expected transaction size. * @param subjId Subject ID. * @param taskNameHash Task name hash code. + * @param txLbl Transaction label. 
*/ public GridDistributedTxRemoteAdapter( GridCacheSharedContext ctx, @@ -164,7 +166,8 @@ public GridDistributedTxRemoteAdapter( long timeout, int txSize, @Nullable UUID subjId, - int taskNameHash + int taskNameHash, + String txLbl ) { super( ctx, @@ -182,6 +185,7 @@ public GridDistributedTxRemoteAdapter( taskNameHash); this.invalidate = invalidate; + this.txLbl = txLbl; commitVersion(commitVer); @@ -502,7 +506,7 @@ private void commitIfLocked() throws IgniteCheckedException { cctx.database().checkpointReadLock(); try { - assert !txState.mvccEnabled(cctx) || mvccSnapshot != null : "Mvcc is not initialized: " + this; + assert !txState.mvccEnabled() || mvccSnapshot != null : "Mvcc is not initialized: " + this; Collection entries = near() || cctx.snapshot().needTxReadLogging() ? allEntries() : writeEntries(); @@ -511,337 +515,306 @@ private void commitIfLocked() throws IgniteCheckedException { batchStoreCommit(writeMap().values()); - try { - // Node that for near transactions we grab all entries. - for (IgniteTxEntry txEntry : entries) { - GridCacheContext cacheCtx = txEntry.context(); + // Note that for near transactions we grab all entries.
+ for (IgniteTxEntry txEntry : entries) { + GridCacheContext cacheCtx = txEntry.context(); - boolean replicate = cacheCtx.isDrEnabled(); + boolean replicate = cacheCtx.isDrEnabled(); + while (true) { try { - while (true) { - try { - GridCacheEntryEx cached = txEntry.cached(); + GridCacheEntryEx cached = txEntry.cached(); - if (cached == null) - txEntry.cached(cached = cacheCtx.cache().entryEx(txEntry.key(), topologyVersion())); + if (cached == null) + txEntry.cached(cached = cacheCtx.cache().entryEx(txEntry.key(), topologyVersion())); - if (near() && cacheCtx.dr().receiveEnabled()) { - cached.markObsolete(xidVer); + if (near() && cacheCtx.dr().receiveEnabled()) { + cached.markObsolete(xidVer); - break; - } + break; + } - GridNearCacheEntry nearCached = null; + GridNearCacheEntry nearCached = null; - if (updateNearCache(cacheCtx, txEntry.key(), topVer)) - nearCached = cacheCtx.dht().near().peekExx(txEntry.key()); + if (updateNearCache(cacheCtx, txEntry.key(), topVer)) + nearCached = cacheCtx.dht().near().peekExx(txEntry.key()); - if (!F.isEmpty(txEntry.entryProcessors())) - txEntry.cached().unswap(false); + if (!F.isEmpty(txEntry.entryProcessors())) + txEntry.cached().unswap(false); - IgniteBiTuple res = - applyTransformClosures(txEntry, false, ret); + IgniteBiTuple res = + applyTransformClosures(txEntry, false, ret); - GridCacheOperation op = res.get1(); - CacheObject val = res.get2(); + GridCacheOperation op = res.get1(); + CacheObject val = res.get2(); - GridCacheVersion explicitVer = txEntry.conflictVersion(); + GridCacheVersion explicitVer = txEntry.conflictVersion(); - if (explicitVer == null) - explicitVer = writeVersion(); + if (explicitVer == null) + explicitVer = writeVersion(); - if (txEntry.ttl() == CU.TTL_ZERO) - op = DELETE; + if (txEntry.ttl() == CU.TTL_ZERO) + op = DELETE; - boolean conflictNeedResolve = cacheCtx.conflictNeedResolve(); + boolean conflictNeedResolve = cacheCtx.conflictNeedResolve(); - GridCacheVersionConflictContext conflictCtx = 
null; + GridCacheVersionConflictContext conflictCtx = null; - if (conflictNeedResolve) { - IgniteBiTuple - drRes = conflictResolve(op, txEntry, val, explicitVer, cached); + if (conflictNeedResolve) { + IgniteBiTuple + drRes = conflictResolve(op, txEntry, val, explicitVer, cached); - assert drRes != null; + assert drRes != null; - conflictCtx = drRes.get2(); + conflictCtx = drRes.get2(); - if (conflictCtx.isUseOld()) - op = NOOP; - else if (conflictCtx.isUseNew()) { - txEntry.ttl(conflictCtx.ttl()); - txEntry.conflictExpireTime(conflictCtx.expireTime()); - } - else if (conflictCtx.isMerge()) { - op = drRes.get1(); - val = txEntry.context().toCacheObject(conflictCtx.mergeValue()); - explicitVer = writeVersion(); + if (conflictCtx.isUseOld()) + op = NOOP; + else if (conflictCtx.isUseNew()) { + txEntry.ttl(conflictCtx.ttl()); + txEntry.conflictExpireTime(conflictCtx.expireTime()); + } + else if (conflictCtx.isMerge()) { + op = drRes.get1(); + val = txEntry.context().toCacheObject(conflictCtx.mergeValue()); + explicitVer = writeVersion(); - txEntry.ttl(conflictCtx.ttl()); - txEntry.conflictExpireTime(conflictCtx.expireTime()); - } - } - else - // Nullify explicit version so that innerSet/innerRemove will work as usual. - explicitVer = null; - - GridCacheVersion dhtVer = cached.isNear() ? writeVersion() : null; - - if (!near() && cacheCtx.group().persistenceEnabled() && cacheCtx.group().walEnabled() && - op != NOOP && op != RELOAD && (op != READ || cctx.snapshot().needTxReadLogging())) { - if (dataEntries == null) - dataEntries = new ArrayList<>(entries.size()); - - dataEntries.add( - new T2<>( - new DataEntry( - cacheCtx.cacheId(), - txEntry.key(), - val, - op, - nearXidVersion(), - writeVersion(), - 0, - txEntry.key().partition(), - txEntry.updateCounter() - ), - txEntry - ) - ); - } + txEntry.ttl(conflictCtx.ttl()); + txEntry.conflictExpireTime(conflictCtx.expireTime()); + } + } + else + // Nullify explicit version so that innerSet/innerRemove will work as usual. 
+ explicitVer = null; + + GridCacheVersion dhtVer = cached.isNear() ? writeVersion() : null; + + if (!near() && cacheCtx.group().persistenceEnabled() && cacheCtx.group().walEnabled() && + op != NOOP && op != RELOAD && (op != READ || cctx.snapshot().needTxReadLogging())) { + if (dataEntries == null) + dataEntries = new ArrayList<>(entries.size()); + + dataEntries.add( + new T2<>( + new DataEntry( + cacheCtx.cacheId(), + txEntry.key(), + val, + op, + nearXidVersion(), + writeVersion(), + 0, + txEntry.key().partition(), + txEntry.updateCounter() + ), + txEntry + ) + ); + } - if (op == CREATE || op == UPDATE) { - // Invalidate only for near nodes (backups cannot be invalidated). - if (isSystemInvalidate() || (isInvalidate() && cacheCtx.isNear())) - cached.innerRemove(this, - eventNodeId(), - nodeId, - false, - true, - true, - txEntry.keepBinary(), - txEntry.hasOldValue(), - txEntry.oldValue(), - topVer, - null, - replicate ? DR_BACKUP : DR_NONE, - near() ? null : explicitVer, - CU.subjectId(this, cctx), - resolveTaskName(), - dhtVer, - txEntry.updateCounter(), - mvccSnapshot()); - else { - assert val != null : txEntry; - - GridCacheUpdateTxResult updRes = cached.innerSet(this, - eventNodeId(), - nodeId, - val, - false, - false, - txEntry.ttl(), - true, - true, - txEntry.keepBinary(), - txEntry.hasOldValue(), - txEntry.oldValue(), - topVer, - null, - replicate ? DR_BACKUP : DR_NONE, - txEntry.conflictExpireTime(), - near() ? null : explicitVer, - CU.subjectId(this, cctx), - resolveTaskName(), - dhtVer, - txEntry.updateCounter(), - mvccSnapshot()); - - txEntry.updateCounter(updRes.updateCounter()); - - if (updRes.loggedPointer() != null) - ptr = updRes.loggedPointer(); - - // Keep near entry up to date. 
- if (nearCached != null) { - CacheObject val0 = cached.valueBytes(); - - nearCached.updateOrEvict(xidVer, - val0, - cached.expireTime(), - cached.ttl(), - nodeId, - topVer); - } - } - } - else if (op == DELETE) { - GridCacheUpdateTxResult updRes = cached.innerRemove(this, - eventNodeId(), + if (op == CREATE || op == UPDATE) { + // Invalidate only for near nodes (backups cannot be invalidated). + if (isSystemInvalidate() || (isInvalidate() && cacheCtx.isNear())) + cached.innerRemove(this, + eventNodeId(), + nodeId, + false, + true, + true, + txEntry.keepBinary(), + txEntry.hasOldValue(), + txEntry.oldValue(), + topVer, + null, + replicate ? DR_BACKUP : DR_NONE, + near() ? null : explicitVer, + CU.subjectId(this, cctx), + resolveTaskName(), + dhtVer, + txEntry.updateCounter(), + mvccSnapshot()); + else { + assert val != null : txEntry; + + GridCacheUpdateTxResult updRes = cached.innerSet(this, + eventNodeId(), + nodeId, + val, + false, + false, + txEntry.ttl(), + true, + true, + txEntry.keepBinary(), + txEntry.hasOldValue(), + txEntry.oldValue(), + topVer, + null, + replicate ? DR_BACKUP : DR_NONE, + txEntry.conflictExpireTime(), + near() ? null : explicitVer, + CU.subjectId(this, cctx), + resolveTaskName(), + dhtVer, + txEntry.updateCounter(), + mvccSnapshot()); + + txEntry.updateCounter(updRes.updateCounter()); + + if (updRes.loggedPointer() != null) + ptr = updRes.loggedPointer(); + + // Keep near entry up to date. + if (nearCached != null) { + CacheObject val0 = cached.valueBytes(); + + nearCached.updateOrEvict(xidVer, + val0, + cached.expireTime(), + cached.ttl(), nodeId, - false, - true, - true, - txEntry.keepBinary(), - txEntry.hasOldValue(), - txEntry.oldValue(), - topVer, - null, - replicate ? DR_BACKUP : DR_NONE, - near() ? 
null : explicitVer, - CU.subjectId(this, cctx), - resolveTaskName(), - dhtVer, - txEntry.updateCounter(), - mvccSnapshot()); - - txEntry.updateCounter(updRes.updateCounter()); - - if (updRes.loggedPointer() != null) - ptr = updRes.loggedPointer(); - - // Keep near entry up to date. - if (nearCached != null) - nearCached.updateOrEvict(xidVer, null, 0, 0, nodeId, topVer); + topVer); } - else if (op == RELOAD) { - CacheObject reloaded = cached.innerReload(); - - if (nearCached != null) { - nearCached.innerReload(); - - nearCached.updateOrEvict(cached.version(), - reloaded, - cached.expireTime(), - cached.ttl(), - nodeId, - topVer); - } - } - else if (op == READ) { - assert near(); - - if (log.isDebugEnabled()) - log.debug("Ignoring READ entry when committing: " + txEntry); - } - // No-op. - else { - if (conflictCtx == null || !conflictCtx.isUseOld()) { - if (txEntry.ttl() != CU.TTL_NOT_CHANGED) - cached.updateTtl(null, txEntry.ttl()); - - if (nearCached != null) { - CacheObject val0 = cached.valueBytes(); - - nearCached.updateOrEvict(xidVer, - val0, - cached.expireTime(), - cached.ttl(), - nodeId, - topVer); - } - } - } - - // Assert after setting values as we want to make sure - // that if we replaced removed entries. - assert - txEntry.op() == READ || onePhaseCommit() || - // If candidate is not there, then lock was explicit - // and we simply allow the commit to proceed. - !cached.hasLockCandidateUnsafe(xidVer) || cached.lockedByUnsafe(xidVer) : - "Transaction does not own lock for commit [entry=" + cached + - ", tx=" + this + ']'; - - // Break out of while loop. - break; } - catch (GridCacheEntryRemovedException ignored) { - if (log.isDebugEnabled()) - log.debug("Attempting to commit a removed entry (will retry): " + txEntry); - - // Renew cached entry. 
- txEntry.cached(cacheCtx.cache().entryEx(txEntry.key(), topologyVersion())); - } - } - } - catch (Throwable ex) { - boolean isNodeStopping = X.hasCause(ex, NodeStoppingException.class); - boolean hasInvalidEnvironmentIssue = X.hasCause(ex, InvalidEnvironmentException.class); - - // In case of error, we still make the best effort to commit, - // as there is no way to rollback at this point. - err = new IgniteTxHeuristicCheckedException("Commit produced a runtime exception " + - "(all transaction entries will be invalidated): " + CU.txString(this), ex); - - if (isNodeStopping) { - U.warn(log, "Failed to commit transaction, node is stopping [tx=" + this + - ", err=" + ex + ']'); } - else if (hasInvalidEnvironmentIssue) { - U.warn(log, "Failed to commit transaction, node is in invalid state and will be stopped [tx=" + this + - ", err=" + ex + ']'); + else if (op == DELETE) { + GridCacheUpdateTxResult updRes = cached.innerRemove(this, + eventNodeId(), + nodeId, + false, + true, + true, + txEntry.keepBinary(), + txEntry.hasOldValue(), + txEntry.oldValue(), + topVer, + null, + replicate ? DR_BACKUP : DR_NONE, + near() ? null : explicitVer, + CU.subjectId(this, cctx), + resolveTaskName(), + dhtVer, + txEntry.updateCounter(), + mvccSnapshot()); + + txEntry.updateCounter(updRes.updateCounter()); + + if (updRes.loggedPointer() != null) + ptr = updRes.loggedPointer(); + + // Keep near entry up to date. + if (nearCached != null) + nearCached.updateOrEvict(xidVer, null, 0, 0, nodeId, topVer); } - else - U.error(log, "Commit failed.", err); - - state(UNKNOWN); - - if (hasInvalidEnvironmentIssue) - cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, ex)); - else if (!isNodeStopping) { // Skip fair uncommit in case of node stopping or invalidation. - try { - // Courtesy to minimize damage. 
- uncommit(); + else if (op == RELOAD) { + CacheObject reloaded = cached.innerReload(); + + if (nearCached != null) { + nearCached.innerReload(); + + nearCached.updateOrEvict(cached.version(), + reloaded, + cached.expireTime(), + cached.ttl(), + nodeId, + topVer); } - catch (Throwable ex1) { - U.error(log, "Failed to uncommit transaction: " + this, ex1); + } + else if (op == READ) { + assert near(); - if (ex1 instanceof Error) - throw ex1; + if (log.isDebugEnabled()) + log.debug("Ignoring READ entry when committing: " + txEntry); + } + // No-op. + else { + if (conflictCtx == null || !conflictCtx.isUseOld()) { + if (txEntry.ttl() != CU.TTL_NOT_CHANGED) + cached.updateTtl(null, txEntry.ttl()); + + if (nearCached != null) { + CacheObject val0 = cached.valueBytes(); + + nearCached.updateOrEvict(xidVer, + val0, + cached.expireTime(), + cached.ttl(), + nodeId, + topVer); + } } } - if (ex instanceof Error) - throw (Error) ex; + // Assert after setting values as we want to make sure + // that if we replaced removed entries. + assert + txEntry.op() == READ || onePhaseCommit() || + // If candidate is not there, then lock was explicit + // and we simply allow the commit to proceed. + !cached.hasLockCandidateUnsafe(xidVer) || cached.lockedByUnsafe(xidVer) : + "Transaction does not own lock for commit [entry=" + cached + + ", tx=" + this + ']'; + + // Break out of while loop. + break; + } + catch (GridCacheEntryRemovedException ignored) { + if (log.isDebugEnabled()) + log.debug("Attempting to commit a removed entry (will retry): " + txEntry); - throw err; + // Renew cached entry. + txEntry.cached(cacheCtx.cache().entryEx(txEntry.key(), topologyVersion())); } } + } - // Apply cache size deltas. - applyTxSizes(); + // Apply cache size deltas. + applyTxSizes(); - TxCounters txCntrs = txCounters(false); + TxCounters txCntrs = txCounters(false); - // Apply update counters. - if (txCntrs != null) - applyPartitionsUpdatesCounters(txCntrs.updateCounters()); + // Apply update counters. 
+ if (txCntrs != null) + cctx.tm().txHandler().applyPartitionsUpdatesCounters(txCntrs.updateCounters()); - if (!near() && !F.isEmpty(dataEntries) && cctx.wal() != null) { - // Set new update counters for data entries received from persisted tx entries. - List entriesWithCounters = dataEntries.stream() - .map(tuple -> tuple.get1().partitionCounter(tuple.get2().updateCounter())) - .collect(Collectors.toList()); + cctx.mvccCaching().onTxFinished(this, true); + + if (!near() && !F.isEmpty(dataEntries) && cctx.wal() != null) { + // Set new update counters for data entries received from persisted tx entries. + List entriesWithCounters = dataEntries.stream() + .map(tuple -> tuple.get1().partitionCounter(tuple.get2().updateCounter())) + .collect(Collectors.toList()); cctx.wal().log(new DataRecord(entriesWithCounters)); - } + } + + if (ptr != null && !cctx.tm().logTxRecords()) + cctx.wal().flush(ptr, false); + } + catch (Throwable ex) { + state(UNKNOWN); + + if (X.hasCause(ex, NodeStoppingException.class)) { + U.warn(log, "Failed to commit transaction, node is stopping [tx=" + CU.txString(this) + + ", err=" + ex + ']'); - if (ptr != null && !cctx.tm().logTxRecords()) - cctx.wal().flush(ptr, false); + return; } - catch (StorageException e) { - err = e; - throw new IgniteCheckedException("Failed to log transaction record " + - "(transaction will be rolled back): " + this, e); + err = heuristicException(ex); + + try { + uncommit(); } + catch (Throwable e) { + err.addSuppressed(e); + } + + throw err; } finally { cctx.database().checkpointReadUnlock(); - notifyDrManager(state() == COMMITTING && err == null); - if (wrapper != null) wrapper.initialize(ret); } @@ -849,6 +822,8 @@ else if (!isNodeStopping) { // Skip fair uncommit in case of node stopping or in cctx.tm().commitTx(this); + cctx.tm().mvccFinish(this, true); + state(COMMITTED); } } @@ -875,9 +850,19 @@ else if (!isNodeStopping) { // Skip fair uncommit in case of node stopping or in throw new 
IgniteCheckedException("Invalid transaction state for commit [state=" + state + ", tx=" + this + ']'); rollbackRemoteTx(); + + return; } - commitIfLocked(); + try { + commitIfLocked(); + } + catch (IgniteTxHeuristicCheckedException e) { + // Treat heuristic exception as critical. + cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); + + throw e; + } } /** @@ -933,8 +918,6 @@ public void forceCommit() throws IgniteCheckedException { /** {@inheritDoc} */ @Override public final void rollbackRemoteTx() { try { - notifyDrManager(false); - // Note that we don't evict near entries here - // they will be deleted by their corresponding transactions. if (state(ROLLING_BACK) || state() == UNKNOWN) { @@ -943,15 +926,26 @@ public void forceCommit() throws IgniteCheckedException { TxCounters counters = txCounters(false); if (counters != null) - applyPartitionsUpdatesCounters(counters.updateCounters()); + cctx.tm().txHandler().applyPartitionsUpdatesCounters(counters.updateCounters()); state(ROLLED_BACK); + + cctx.mvccCaching().onTxFinished(this, false); + + cctx.tm().mvccFinish(this, false); } } - catch (RuntimeException | Error e) { + catch (IgniteCheckedException | RuntimeException | Error e) { state(UNKNOWN); - throw e; + U.error(log, "Error during tx rollback.", e); + + if (e instanceof IgniteCheckedException) + throw new IgniteException(e); + else if (e instanceof RuntimeException) + throw (RuntimeException) e; + else + throw (Error) e; } } @@ -1009,37 +1003,9 @@ protected void addExplicit(IgniteTxEntry e) { } } - /** - * Applies partition counters updates for mvcc transactions. - * - * @param counters Counters values to be updated. 
- */ - private void applyPartitionsUpdatesCounters(Iterable counters) { - if (counters == null) - return; - - int cacheId = CU.UNDEFINED_CACHE_ID; - GridDhtPartitionTopology top = null; - - for (PartitionUpdateCountersMessage counter : counters) { - if (counter.cacheId() != cacheId) { - GridCacheContext ctx0 = cctx.cacheContext(cacheId = counter.cacheId()); - - assert ctx0.mvccEnabled(); - - top = ctx0.topology(); - } - - assert top != null; - - for (int i = 0; i < counter.size(); i++) { - GridDhtLocalPartition part = top.localPartition(counter.partition(i)); - - assert part != null; - - part.updateCounter(counter.initialCounter(i), counter.updatesCount(i)); - } - } + /** {@inheritDoc} */ + @Override public String label() { + return txLbl; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedUnlockRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedUnlockRequest.java index ca2bdab156bf7..001eb61589569 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedUnlockRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/GridDistributedUnlockRequest.java @@ -122,7 +122,7 @@ public void addKey(KeyCacheObject key, GridCacheContext ctx) throws IgniteChecke } switch (writer.state()) { - case 7: + case 8: if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG)) return false; @@ -144,7 +144,7 @@ public void addKey(KeyCacheObject key, GridCacheContext ctx) throws IgniteChecke return false; switch (reader.state()) { - case 7: + case 8: keys = reader.readCollection("keys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -164,7 +164,7 @@ public void addKey(KeyCacheObject key, GridCacheContext ctx) throws IgniteChecke /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 8; + return 9; } /** 
{@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/CacheDistributedGetFutureAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/CacheDistributedGetFutureAdapter.java index 62b5fe9879c58..d0228159a8721 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/CacheDistributedGetFutureAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/CacheDistributedGetFutureAdapter.java @@ -18,18 +18,40 @@ package org.apache.ignite.internal.processors.cache.distributed.dht; import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedHashMap; import java.util.Map; +import java.util.Set; import java.util.UUID; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; +import java.util.concurrent.atomic.AtomicReference; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.GridCacheCompoundIdentityFuture; import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheEntryInfo; import org.apache.ignite.internal.processors.cache.GridCacheFuture; import org.apache.ignite.internal.processors.cache.IgniteCacheExpiryPolicy; import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetRequest; +import 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetResponse; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.tostring.GridToStringInclude; +import org.apache.ignite.internal.util.typedef.C1; +import org.apache.ignite.internal.util.typedef.CIX1; import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgniteUuid; import org.jetbrains.annotations.Nullable; @@ -40,8 +62,14 @@ /** * */ -public abstract class CacheDistributedGetFutureAdapter extends GridCacheCompoundIdentityFuture> - implements GridCacheFuture>, CacheGetFuture { +public abstract class CacheDistributedGetFutureAdapter + extends GridCacheCompoundIdentityFuture> implements CacheGetFuture { + /** Logger reference. */ + protected static final AtomicReference logRef = new AtomicReference<>(); + + /** Logger. */ + protected static IgniteLogger log; + /** Default max remap count value. */ public static final int DFLT_MAX_REMAP_CNT = 3; @@ -101,6 +129,10 @@ public abstract class CacheDistributedGetFutureAdapter extends GridCacheCo /** */ protected final boolean recovery; + /** */ + protected Map>> invalidNodes = Collections.emptyMap(); + + /** * @param cctx Context. * @param keys Keys. @@ -149,6 +181,29 @@ protected CacheDistributedGetFutureAdapter( futId = IgniteUuid.randomUuid(); } + /** + * @param aclass Class. 
+ */ + protected void initLogger(Class aclass) { + if (log == null) + log = U.logger(cctx.kernalContext(), logRef, aclass); + } + + /** {@inheritDoc} */ + @Override public boolean trackable() { + return trackable; + } + + /** {@inheritDoc} */ + @Override public void markNotTrackable() { + // Should not flip trackable flag from true to false since get future can be remapped. + } + + /** {@inheritDoc} */ + @Override public IgniteUuid futureId() { + return futId; + } + /** * @param part Partition. * @return {@code True} if partition is in owned state. */ @@ -158,11 +213,384 @@ protected final boolean partitionOwned(int part) { } /** + * @param fut Future. + */ + protected void registrateFutureInMvccManager(GridCacheFuture fut) { + if (!trackable) { + trackable = true; + + cctx.mvcc().addFuture(fut, futId); + } + } + + /** + * @param node Cluster node. + * @param part Invalid partition. + * @param topVer Topology version. + */ + protected synchronized void addNodeAsInvalid(ClusterNode node, int part, AffinityTopologyVersion topVer) { + if (invalidNodes == Collections.emptyMap()) { + invalidNodes = new HashMap<>(); + } + + Map> invalidNodeMap = invalidNodes.get(topVer); + + if (invalidNodeMap == null) + invalidNodes.put(topVer, invalidNodeMap = new HashMap<>()); + + Set invalidNodeSet = invalidNodeMap.get(part); + + if (invalidNodeSet == null) + invalidNodeMap.put(part, invalidNodeSet = new HashSet<>()); + + invalidNodeSet.add(node); + } + + /** + * @param part Partition. + * @param topVer Topology version. + * @return Set of invalid cluster nodes. + */ + protected synchronized Set getInvalidNodes(int part, AffinityTopologyVersion topVer) { + Set invalidNodeSet = Collections.emptySet(); + + Map> invalidNodesMap = invalidNodes.get(topVer); + + if (invalidNodesMap != null) { + Set nodes = invalidNodesMap.get(part); + + if (nodes != null) + invalidNodeSet = nodes; + } + + return invalidNodeSet; + } + + /** + * + * @param key Key. + * @param node Mapped node.
+ * @param missedNodesToKeysMapping Full node mapping. + */ + protected boolean checkRetryPermits( + KeyCacheObject key, + ClusterNode node, + Map> missedNodesToKeysMapping + ) { + LinkedHashMap keys = missedNodesToKeysMapping.get(node); + + if (keys != null && keys.containsKey(key)) { + if (REMAP_CNT_UPD.incrementAndGet(this) > MAX_REMAP_CNT) { + onDone(new ClusterTopologyCheckedException("Failed to remap key to a new node after " + + MAX_REMAP_CNT + " attempts (key got remapped to the same node) [key=" + key + ", node=" + + U.toShortString(node) + ", mappings=" + missedNodesToKeysMapping + ']')); + + return false; + } + } + + return true; + } + + /** {@inheritDoc} */ + @Override public boolean onNodeLeft(UUID nodeId) { + boolean found = false; + + for (IgniteInternalFuture> fut : futures()) + if (isMini(fut)) { + AbstractMiniFuture f = (AbstractMiniFuture)fut; + + if (f.node().id().equals(nodeId)) { + found = true; + + f.onNodeLeft(new ClusterTopologyCheckedException("Remote node left grid (will retry): " + nodeId)); + } + } + + return found; + } + + /** {@inheritDoc} */ + @Override public void onResult(UUID nodeId, GridNearGetResponse res) { + for (IgniteInternalFuture> fut : futures()) + if (isMini(fut)) { + AbstractMiniFuture f = (AbstractMiniFuture)fut; + + if (f.futureId().equals(res.miniId())) { + assert f.node().id().equals(nodeId); + + f.onResult(res); + } + } + } + + /** + * @param part Partition. * @param topVer Topology version. * @return Exception. 
*/ - protected final ClusterTopologyServerNotFoundException serverNotFoundError(AffinityTopologyVersion topVer) { + protected final ClusterTopologyServerNotFoundException serverNotFoundError(int part, AffinityTopologyVersion topVer) { return new ClusterTopologyServerNotFoundException("Failed to map keys for cache " + - "(all partition nodes left the grid) [topVer=" + topVer + ", cache=" + cctx.name() + ']'); + "(all partition nodes left the grid) [topVer=" + topVer + + ", part=" + part + ", cache=" + cctx.name() + ", localNodeId=" + cctx.localNodeId() + ']'); + } + + /** + * @param f Future. + * @return {@code True} if mini-future. + */ + protected abstract boolean isMini(IgniteInternalFuture f); + + /** + * @param keys Collection of mapping keys. + * @param mapped Previous mapping. + * @param topVer Topology version. + */ + protected abstract void map( + Collection keys, + Map> mapped, + AffinityTopologyVersion topVer + ); + + /** {@inheritDoc} */ + @Override public String toString() { + Collection futuresStrings = F.viewReadOnly(futures(), new C1, String>() { + @SuppressWarnings("unchecked") + @Override public String apply(IgniteInternalFuture f) { + if (isMini(f)) { + AbstractMiniFuture mini = (AbstractMiniFuture)f; + + return "miniFuture([futId=" + mini.futureId() + ", node=" + mini.node().id() + + ", loc=" + mini.node().isLocal() + + ", done=" + f.isDone() + "])"; + } + else + return f.getClass().getSimpleName() + " [loc=true, done=" + f.isDone() + "]"; + } + }); + + return S.toString(CacheDistributedGetFutureAdapter.class, this, + "innerFuts", futuresStrings, + "super", super.toString()); + } + + /** + * Mini-future for get operations. Mini-futures are only waiting on a single + * node as opposed to multiple nodes. + */ + protected abstract class AbstractMiniFuture extends GridFutureAdapter> { + /** Mini-future id. */ + private final IgniteUuid futId = IgniteUuid.randomUuid(); + + /** Mapped node. */ + protected final ClusterNode node; + + /** Mapped keys.
*/ + @GridToStringInclude + protected final LinkedHashMap keys; + + /** Topology version on which this future was mapped. */ + protected final AffinityTopologyVersion topVer; + + /** Post processing closure. */ + private final IgniteInClosure> postProcessingClos; + + /** {@code True} if remapped after node left. */ + private boolean remapped; + + /** + * @param node Node. + * @param keys Keys. + * @param topVer Topology version. + */ + protected AbstractMiniFuture( + ClusterNode node, + LinkedHashMap keys, + AffinityTopologyVersion topVer + ) { + this.node = node; + this.keys = keys; + this.topVer = topVer; + this.postProcessingClos = CU.createBackupPostProcessingClosure( + topVer, log, cctx, null, expiryPlc, readThrough, skipVals); + } + + /** + * @return Future ID. + */ + public IgniteUuid futureId() { + return futId; + } + + /** + * @return Mapped node. + */ + public ClusterNode node() { + return node; + } + + /** + * @return Keys. + */ + public Collection keys() { + return keys.keySet(); + } + + /** + * Factory method for generating the request associated with this mini-future. + * + * @param rootFutId Root future id. + * @return Near get request. + */ + public GridNearGetRequest createGetRequest(IgniteUuid rootFutId) { + return createGetRequest0(rootFutId, futureId()); + } + + /** + * @param rootFutId Root future id. + * @param futId Mini future id. + * @return Near get request. + */ + protected abstract GridNearGetRequest createGetRequest0(IgniteUuid rootFutId, IgniteUuid futId); + + /** + * @param entries Collection of entries. + * @return Map with key value results. + */ + protected abstract Map createResultMap(Collection entries); + + /** + * @param e Error. + */ + public void onResult(Throwable e) { + if (log.isDebugEnabled()) + log.debug("Failed to get future result [fut=" + this + ", err=" + e + ']'); + + // Fail. + onDone(e); + } + + /** + * @param e Failure exception.
+ */ + public synchronized void onNodeLeft(ClusterTopologyCheckedException e) { + if (remapped) + return; + + remapped = true; + + if (log.isDebugEnabled()) + log.debug("Remote node left grid while sending or waiting for reply (will retry): " + this); + + // Try getting from existing nodes. + if (!canRemap) { + map(keys.keySet(), F.t(node, keys), topVer); + + onDone(Collections.emptyMap()); + } + else { + long maxTopVer = Math.max(topVer.topologyVersion() + 1, cctx.discovery().topologyVersion()); + + AffinityTopologyVersion awaitTopVer = new AffinityTopologyVersion(maxTopVer); + + cctx.shared().exchange() + .affinityReadyFuture(awaitTopVer) + .listen((f) -> { + try { + // Remap. + map(keys.keySet(), F.t(node, keys), f.get()); + + onDone(Collections.emptyMap()); + } + catch (IgniteCheckedException ex) { + CacheDistributedGetFutureAdapter.this.onDone(ex); + } + } + ); + } + } + + /** + * @param res Near get response. + */ + public void onResult(GridNearGetResponse res) { + // If error happened on remote node, fail the whole future. + if (res.error() != null) { + onDone(res.error()); + + return; + } + + Collection invalidParts = res.invalidPartitions(); + + // Remap invalid partitions. + if (!F.isEmpty(invalidParts)) { + AffinityTopologyVersion rmtTopVer = res.topologyVersion(); + + for (Integer part : invalidParts) + addNodeAsInvalid(node, part, topVer); + + if (log.isDebugEnabled()) + log.debug("Remapping mini get future [invalidParts=" + invalidParts + ", fut=" + this + ']'); + + if (!canRemap) { + map(F.view(keys.keySet(), new P1() { + @Override public boolean apply(KeyCacheObject key) { + return invalidParts.contains(cctx.affinity().partition(key)); + } + }), F.t(node, keys), topVer); + + postProcessResult(res); + + onDone(createResultMap(res.entries())); + + return; + } + + // Remap after the remote topology version is ready locally.
+ cctx.shared().exchange().affinityReadyFuture(rmtTopVer) + .listen(new CIX1>() { + @Override public void applyx( + IgniteInternalFuture fut + ) throws IgniteCheckedException { + AffinityTopologyVersion topVer = fut.get(); + + // This will append new futures to compound list. + map(F.view(keys.keySet(), new P1() { + @Override public boolean apply(KeyCacheObject key) { + return invalidParts.contains(cctx.affinity().partition(key)); + } + }), F.t(node, keys), topVer); + + postProcessResult(res); + + onDone(createResultMap(res.entries())); + } + }); + } + else { + try { + postProcessResult(res); + + onDone(createResultMap(res.entries())); + } + catch (Exception e) { + onDone(e); + } + } + } + + /** + * @param res Response. + */ + protected void postProcessResult(final GridNearGetResponse res) { + if (postProcessingClos != null) + postProcessingClos.apply(res.entries()); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(AbstractMiniFuture.class, this); + } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/ClientCacheDhtTopologyFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/ClientCacheDhtTopologyFuture.java index 4b48f5a59e9ff..f468590a2c198 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/ClientCacheDhtTopologyFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/ClientCacheDhtTopologyFuture.java @@ -80,6 +80,11 @@ public void validate(CacheGroupContext grp, Collection topNodes) { return topVer; } + /** {@inheritDoc} */ + @Override public boolean changedAffinity() { + return true; + } + /** {@inheritDoc} */ @Override public String toString() { return "ClientCacheDhtTopologyFuture [topVer=" + topVer + ']'; diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentRequest.java index 44c7b88c029f3..cf7018a15c895 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentRequest.java @@ -109,7 +109,7 @@ public long futureId() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 6; + return 7; } /** {@inheritDoc} */ @@ -127,20 +127,20 @@ public long futureId() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 5: - if (!writer.writeMessage("topVer", topVer)) + case 6: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -161,7 +161,7 @@ public long futureId() { return false; switch (reader.state()) { - case 3: + case 4: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -169,7 +169,7 @@ public long futureId() { reader.incrementState(); - case 4: + case 5: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -177,8 +177,8 @@ public long futureId() { reader.incrementState(); - case 5: - topVer = reader.readMessage("topVer"); + case 6: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentResponse.java index 5b0de08a2ce97..e8b40e9cd8c15 100644 
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtAffinityAssignmentResponse.java @@ -215,7 +215,7 @@ private List> ids(List> assignments) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 8; + return 9; } /** @@ -272,32 +272,32 @@ private List> ids(List> assignments) { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeByteArray("affAssignmentIdsBytes", affAssignmentIdsBytes)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeByteArray("idealAffAssignmentBytes", idealAffAssignmentBytes)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeByteArray("partBytes", partBytes)) return false; writer.incrementState(); - case 7: - if (!writer.writeMessage("topVer", topVer)) + case 8: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -318,7 +318,7 @@ private List> ids(List> assignments) { return false; switch (reader.state()) { - case 3: + case 4: affAssignmentIdsBytes = reader.readByteArray("affAssignmentIdsBytes"); if (!reader.isLastRead()) @@ -326,7 +326,7 @@ private List> ids(List> assignments) { reader.incrementState(); - case 4: + case 5: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -334,7 +334,7 @@ private List> ids(List> assignments) { reader.incrementState(); - case 5: + case 6: idealAffAssignmentBytes = reader.readByteArray("idealAffAssignmentBytes"); if (!reader.isLastRead()) @@ -342,7 +342,7 @@ private List> ids(List> assignments) { reader.incrementState(); - case 6: + case 7: partBytes = reader.readByteArray("partBytes"); if (!reader.isLastRead()) @@ -350,8 +350,8 @@ private List> ids(List> assignments) { reader.incrementState(); - 
case 7: - topVer = reader.readMessage("topVer"); + case 8: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheAdapter.java index 71ddf3ce189bc..a5303fc5dad37 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheAdapter.java @@ -260,7 +260,7 @@ private void processForceKeysRequest0(ClusterNode node, GridDhtForceKeysRequest res.addInfo(info); } - entry.touch(msg.topologyVersion()); + entry.touch(); break; } @@ -699,7 +699,7 @@ private void loadEntry(KeyCacheObject key, } finally { if (entry != null) - entry.touch(topVer); + entry.touch(); part.release(); @@ -788,6 +788,7 @@ else if (log.isDebugEnabled()) * @param taskName Task name. * @param expiry Expiry policy. * @param skipVals Skip values flag. + * @param txLbl Transaction label. * @param mvccSnapshot MVCC snapshot. * @return Get future. */ @@ -800,6 +801,7 @@ IgniteInternalFuture> getDhtAllAsync( @Nullable IgniteCacheExpiryPolicy expiry, boolean skipVals, boolean recovery, + @Nullable String txLbl, MvccSnapshot mvccSnapshot ) { return getAllAsync0(keys, @@ -814,6 +816,7 @@ IgniteInternalFuture> getDhtAllAsync( /*keep cache objects*/true, recovery, /*need version*/true, + txLbl, mvccSnapshot); } @@ -828,6 +831,7 @@ IgniteInternalFuture> getDhtAllAsync( * @param taskNameHash Task name hash code. * @param expiry Expiry policy. * @param skipVals Skip values flag. + * @param txLbl Transaction label. * @param mvccSnapshot MVCC snapshot. * @return DHT future. 
*/ @@ -842,6 +846,7 @@ public GridDhtFuture> getDhtAsync(UUID reader, @Nullable IgniteCacheExpiryPolicy expiry, boolean skipVals, boolean recovery, + @Nullable String txLbl, MvccSnapshot mvccSnapshot ) { GridDhtGetFuture fut = new GridDhtGetFuture<>(ctx, @@ -856,6 +861,7 @@ public GridDhtFuture> getDhtAsync(UUID reader, skipVals, recovery, addReaders, + txLbl, mvccSnapshot); fut.init(); @@ -874,6 +880,7 @@ public GridDhtFuture> getDhtAsync(UUID reader, * @param taskNameHash Task name hash. * @param expiry Expiry. * @param skipVals Skip vals flag. + * @param txLbl Transaction label. * @param mvccSnapshot Mvcc snapshot. * @return Future for the operation. */ @@ -889,6 +896,7 @@ GridDhtGetSingleFuture getDhtSingleAsync( @Nullable IgniteCacheExpiryPolicy expiry, boolean skipVals, boolean recovery, + String txLbl, MvccSnapshot mvccSnapshot ) { GridDhtGetSingleFuture fut = new GridDhtGetSingleFuture<>( @@ -904,6 +912,7 @@ GridDhtGetSingleFuture getDhtSingleAsync( expiry, skipVals, recovery, + txLbl, mvccSnapshot); fut.init(); @@ -933,6 +942,7 @@ protected void processNearSingleGetRequest(final UUID nodeId, final GridNearSing expiryPlc, req.skipValues(), req.recovery(), + req.txLabel(), req.mvccSnapshot()); fut.listen(new CI1>() { @@ -959,12 +969,14 @@ else if (req.needVersion()) res0 = info.value(); } - res = new GridNearSingleGetResponse(ctx.cacheId(), + res = new GridNearSingleGetResponse( + ctx.cacheId(), req.futureId(), null, res0, false, - req.addDeploymentInfo()); + req.addDeploymentInfo() + ); if (info != null && req.skipValues()) res.setContainsValue(); @@ -972,15 +984,14 @@ else if (req.needVersion()) else { AffinityTopologyVersion topVer = ctx.shared().exchange().lastTopologyFuture().initialVersion(); - assert topVer.compareTo(req.topologyVersion()) > 0 : "Wrong ready topology version for " + - "invalid partitions response [topVer=" + topVer + ", req=" + req + ']'; - - res = new GridNearSingleGetResponse(ctx.cacheId(), + res = new GridNearSingleGetResponse( + 
ctx.cacheId(), req.futureId(), topVer, null, true, - req.addDeploymentInfo()); + req.addDeploymentInfo() + ); } } catch (NodeStoppingException ignored) { @@ -1037,6 +1048,7 @@ protected void processNearGetRequest(final UUID nodeId, final GridNearGetRequest expiryPlc, req.skipValues(), req.recovery(), + req.txLabel(), req.mvccSnapshot()); fut.listen(new CI1>>() { @@ -1064,8 +1076,11 @@ protected void processNearGetRequest(final UUID nodeId, final GridNearGetRequest res.error(e); } - if (!F.isEmpty(fut.invalidPartitions())) - res.invalidPartitions(fut.invalidPartitions(), ctx.shared().exchange().lastTopologyFuture().initialVersion()); + if (!F.isEmpty(fut.invalidPartitions())){ + AffinityTopologyVersion topVer = ctx.shared().exchange().lastTopologyFuture().initialVersion(); + + res.invalidPartitions(fut.invalidPartitions(), topVer); + } try { ctx.io().send(nodeId, res, ctx.ioPolicy()); @@ -1217,7 +1232,7 @@ private void updateTtl(GridCacheAdapter cache, } finally { if (entry != null) - entry.touch(AffinityTopologyVersion.NONE); + entry.touch(); } } catch (IgniteCheckedException e) { @@ -1263,6 +1278,11 @@ protected final boolean needRemap(AffinityTopologyVersion expVer, AffinityTopolo if (expVer.equals(curVer)) return false; + AffinityTopologyVersion lastAffChangedTopVer = ctx.shared().exchange().lastAffinityChangedTopologyVersion(expVer); + + if (curVer.compareTo(lastAffChangedTopVer) >= 0 && curVer.compareTo(expVer) <= 0) + return false; + // TODO IGNITE-7164 check mvcc crd for mvcc enabled txs. 
Collection cacheNodes0 = ctx.discovery().cacheGroupAffinityNodes(ctx.groupId(), expVer); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheEntry.java index 8d029b20beadb..90841523cabb6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtCacheEntry.java @@ -649,10 +649,10 @@ public boolean clearInternal( ']'); } - if (cctx.mvccEnabled()) - cctx.offheap().mvccRemoveAll(this); - else - removeValue(); + if (cctx.mvccEnabled()) + cctx.offheap().mvccRemoveAll(this); + else + removeValue(); // Give to GC. update(null, 0L, 0L, ver, true); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetFuture.java index 24fd621c8878a..a45bae616b0b9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetFuture.java @@ -37,6 +37,7 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheExpiryPolicy; import org.apache.ignite.internal.processors.cache.KeyCacheObject; import org.apache.ignite.internal.processors.cache.ReaderArguments; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.LostPolicyValidator; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import 
org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; @@ -55,6 +56,11 @@ import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import static java.util.Collections.singleton; +import static org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.OperationType.READ; +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.LOST; +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; + /** * */ @@ -117,6 +123,9 @@ public final class GridDhtGetFuture extends GridCompoundIdentityFuture extends GridCompoundIdentityFuturecollectionsReducer(keys.size())); @@ -165,6 +176,7 @@ public GridDhtGetFuture( this.skipVals = skipVals; this.recovery = recovery; this.addReaders = addReaders; + this.txLbl = txLbl; this.mvccSnapshot = mvccSnapshot; futId = IgniteUuid.randomUuid(); @@ -179,6 +191,7 @@ public GridDhtGetFuture( * Initializes future. */ void init() { + // TODO get rid of force keys request https://issues.apache.org/jira/browse/IGNITE-10251 GridDhtFuture fut = cctx.group().preloader().request(cctx, keys.keySet(), topVer); if (fut != null) { @@ -203,14 +216,14 @@ void init() { return; } - map0(keys); + map0(keys, true); markInitialized(); } }); } else { - map0(keys); + map0(keys, false); markInitialized(); } @@ -251,7 +264,7 @@ public GridCacheVersion version() { /** * @param keys Keys to map. */ - private void map0(Map keys) { + private void map0(Map keys, boolean forceKeys) { Map mappedKeys = null; // Assign keys to primary nodes. @@ -259,7 +272,7 @@ private void map0(Map keys) { int part = cctx.affinity().partition(key.getKey()); if (retries == null || !retries.contains(part)) { - if (!map(key.getKey())) { + if (!map(key.getKey(), forceKeys)) { if (retries == null) retries = new HashSet<>(); @@ -303,7 +316,7 @@ else if (mappedKeys != null) * @param key Key. * @return {@code True} if mapped. 
*/ - private boolean map(KeyCacheObject key) { + private boolean map(KeyCacheObject key, boolean forceKeys) { try { int keyPart = cctx.affinity().partition(key); @@ -314,14 +327,31 @@ private boolean map(KeyCacheObject key) { if (part == null) return false; + if (!forceKeys && part.state() == LOST && !recovery) { + Throwable error = LostPolicyValidator.validate(cctx, key, READ, singleton(part.id())); + + if (error != null) { + onDone(null, error); + + return false; + } + } + if (parts == null || !F.contains(parts, part.id())) { // By reserving, we make sure that partition won't be unloaded while processed. if (part.reserve()) { - parts = parts == null ? new int[1] : Arrays.copyOf(parts, parts.length + 1); + if (forceKeys || (part.state() == OWNING || part.state() == LOST)) { + parts = parts == null ? new int[1] : Arrays.copyOf(parts, parts.length + 1); - parts[parts.length - 1] = part.id(); + parts[parts.length - 1] = part.id(); - return true; + return true; + } + else { + part.release(); + + return false; + } } else return false; @@ -411,7 +441,7 @@ private IgniteInternalFuture> getAsync( log.debug("Got removed entry when getting a DHT value: " + e); } finally { - e.touch(topVer); + e.touch(); } } } @@ -432,6 +462,7 @@ private IgniteInternalFuture> getAsync( expiryPlc, skipVals, recovery, + txLbl, mvccSnapshot); } else { @@ -456,6 +487,7 @@ private IgniteInternalFuture> getAsync( expiryPlc, skipVals, recovery, + txLbl, mvccSnapshot); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetSingleFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetSingleFuture.java index d722397442999..402df697f4db3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetSingleFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtGetSingleFuture.java @@ 
-35,6 +35,7 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheExpiryPolicy; import org.apache.ignite.internal.processors.cache.KeyCacheObject; import org.apache.ignite.internal.processors.cache.ReaderArguments; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.LostPolicyValidator; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; @@ -47,6 +48,11 @@ import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import static java.util.Collections.singleton; +import static org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.OperationType.READ; +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.LOST; +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; + /** * */ @@ -106,6 +112,9 @@ public final class GridDhtGetSingleFuture extends GridFutureAdapter extends GridFutureAdapter fut = cctx.group().preloader().request( cctx, @@ -234,7 +247,7 @@ private void map() { onDone(e); } else - map0(); + map0(true); } } ); @@ -243,19 +256,20 @@ private void map() { } } - map0(); + map0(false); } /** * */ - private void map0() { + private void map0(boolean forceKeys) { assert retry == null : retry; - if (!map(key)) { + if (!map(key, forceKeys)) { retry = cctx.affinity().partition(key); - onDone((GridCacheEntryInfo)null); + if (!isDone()) + onDone((GridCacheEntryInfo)null); return; } @@ -272,7 +286,7 @@ private void map0() { * @param key Key. * @return {@code True} if mapped. 
*/ - private boolean map(KeyCacheObject key) { + private boolean map(KeyCacheObject key, boolean forceKeys) { try { int keyPart = cctx.affinity().partition(key); @@ -285,11 +299,28 @@ private boolean map(KeyCacheObject key) { assert this.part == -1; + if (!forceKeys && part.state() == LOST && !recovery) { + Throwable error = LostPolicyValidator.validate(cctx, key, READ, singleton(part.id())); + + if (error != null) { + onDone(null, error); + + return false; + } + } + // By reserving, we make sure that partition won't be unloaded while processed. if (part.reserve()) { - this.part = part.id(); + if (forceKeys || (part.state() == OWNING || part.state() == LOST)) { + this.part = part.id(); - return true; + return true; + } + else { + part.release(); + + return false; + } } else return false; @@ -358,7 +389,7 @@ private void getAsync() { log.debug("Got removed entry when getting a DHT value: " + e); } finally { - e.touch(topVer); + e.touch(); } } } @@ -375,6 +406,7 @@ private void getAsync() { expiryPlc, skipVals, recovery, + txLbl, mvccSnapshot); } else { @@ -401,6 +433,7 @@ private void getAsync() { expiryPlc, skipVals, recovery, + null, mvccSnapshot); fut0.listen(createGetFutureListener()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockFuture.java index ef369cfbe9f94..6e482d5d12aa2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockFuture.java @@ -946,7 +946,8 @@ private void map(Iterable entries) { skipStore, cctx.store().configured(), keepBinary, - cctx.deploymentEnabled()); + cctx.deploymentEnabled(), + inTx() ? 
tx.label() : null); try { for (ListIterator it = dhtMapping.listIterator(); it.hasNext(); ) { @@ -1322,8 +1323,8 @@ void onResult(GridDhtLockResponse res) { replicate ? DR_PRELOAD : DR_NONE, false)) { if (rec && !entry.isInternal()) - cctx.events().addEvent(entry.partition(), entry.key(), cctx.localNodeId(), - (IgniteUuid)null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, info.value(), true, null, + cctx.events().addEvent(entry.partition(), entry.key(), cctx.localNodeId(), null, + null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, info.value(), true, null, false, null, null, null, false); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockRequest.java index 1ac58182ddc57..95786be854a85 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockRequest.java @@ -92,6 +92,9 @@ public class GridDhtLockRequest extends GridDistributedLockRequest { /** TTL for read operation. */ private long accessTtl; + /** Transaction label. */ + private String txLbl; + /** * Empty constructor required for {@link Externalizable}. */ @@ -123,6 +126,7 @@ public GridDhtLockRequest() { * @param storeUsed Cache store used flag. * @param keepBinary Keep binary flag. * @param addDepInfo Deployment info flag. + * @param txLbl Transaction label. 
*/ public GridDhtLockRequest( int cacheId, @@ -147,7 +151,8 @@ public GridDhtLockRequest( boolean skipStore, boolean storeUsed, boolean keepBinary, - boolean addDepInfo + boolean addDepInfo, + String txLbl ) { super(cacheId, nodeId, @@ -179,6 +184,8 @@ public GridDhtLockRequest( this.subjId = subjId; this.taskNameHash = taskNameHash; this.accessTtl = accessTtl; + + this.txLbl = txLbl; } /** @@ -309,6 +316,13 @@ public long accessTtl() { return accessTtl; } + /** + * @return Transaction label. + */ + @Nullable public String txLabel() { + return txLbl; + } + /** {@inheritDoc} */ @Override public void prepareMarshal(GridCacheSharedContext ctx) throws IgniteCheckedException { super.prepareMarshal(ctx); @@ -363,62 +377,68 @@ public long accessTtl() { } switch (writer.state()) { - case 20: + case 21: if (!writer.writeLong("accessTtl", accessTtl)) return false; writer.incrementState(); - case 21: + case 22: if (!writer.writeBitSet("invalidateEntries", invalidateEntries)) return false; writer.incrementState(); - case 22: + case 23: if (!writer.writeIgniteUuid("miniId", miniId)) return false; writer.incrementState(); - case 23: + case 24: if (!writer.writeCollection("nearKeys", nearKeys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 24: + case 25: if (!writer.writeObjectArray("ownedKeys", ownedKeys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 25: + case 26: if (!writer.writeObjectArray("ownedValues", ownedValues, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 26: + case 27: if (!writer.writeBitSet("preloadKeys", preloadKeys)) return false; writer.incrementState(); - case 27: + case 28: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 28: + case 29: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 29: - if (!writer.writeMessage("topVer", topVer)) + case 30: + if 
(!writer.writeAffinityTopologyVersion("topVer", topVer)) + return false; + + writer.incrementState(); + + case 31: + if (!writer.writeString("txLbl", txLbl)) return false; writer.incrementState(); @@ -439,7 +459,7 @@ public long accessTtl() { return false; switch (reader.state()) { - case 20: + case 21: accessTtl = reader.readLong("accessTtl"); if (!reader.isLastRead()) @@ -447,7 +467,7 @@ public long accessTtl() { reader.incrementState(); - case 21: + case 22: invalidateEntries = reader.readBitSet("invalidateEntries"); if (!reader.isLastRead()) @@ -455,7 +475,7 @@ public long accessTtl() { reader.incrementState(); - case 22: + case 23: miniId = reader.readIgniteUuid("miniId"); if (!reader.isLastRead()) @@ -463,7 +483,7 @@ public long accessTtl() { reader.incrementState(); - case 23: + case 24: nearKeys = reader.readCollection("nearKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -471,7 +491,7 @@ public long accessTtl() { reader.incrementState(); - case 24: + case 25: ownedKeys = reader.readObjectArray("ownedKeys", MessageCollectionItemType.MSG, KeyCacheObject.class); if (!reader.isLastRead()) @@ -479,7 +499,7 @@ public long accessTtl() { reader.incrementState(); - case 25: + case 26: ownedValues = reader.readObjectArray("ownedValues", MessageCollectionItemType.MSG, GridCacheVersion.class); if (!reader.isLastRead()) @@ -487,7 +507,7 @@ public long accessTtl() { reader.incrementState(); - case 26: + case 27: preloadKeys = reader.readBitSet("preloadKeys"); if (!reader.isLastRead()) @@ -495,7 +515,7 @@ public long accessTtl() { reader.incrementState(); - case 27: + case 28: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -503,7 +523,7 @@ public long accessTtl() { reader.incrementState(); - case 28: + case 29: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -511,8 +531,16 @@ public long accessTtl() { reader.incrementState(); - case 29: - topVer = reader.readMessage("topVer"); + case 30: + topVer = 
reader.readAffinityTopologyVersion("topVer"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 31: + txLbl = reader.readString("txLbl"); if (!reader.isLastRead()) return false; @@ -531,7 +559,7 @@ public long accessTtl() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 30; + return 32; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockResponse.java index 87abd6c2ce994..63c07e82906f4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtLockResponse.java @@ -207,25 +207,25 @@ public Collection preloadEntries() { } switch (writer.state()) { - case 10: + case 11: if (!writer.writeCollection("invalidParts", invalidParts, MessageCollectionItemType.INT)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeIgniteUuid("miniId", miniId)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeCollection("nearEvicted", nearEvicted, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeCollection("preloadEntries", preloadEntries, MessageCollectionItemType.MSG)) return false; @@ -247,7 +247,7 @@ public Collection preloadEntries() { return false; switch (reader.state()) { - case 10: + case 11: invalidParts = reader.readCollection("invalidParts", MessageCollectionItemType.INT); if (!reader.isLastRead()) @@ -255,7 +255,7 @@ public Collection preloadEntries() { reader.incrementState(); - case 11: + case 12: miniId = reader.readIgniteUuid("miniId"); if (!reader.isLastRead()) @@ -263,7 +263,7 @@ public Collection preloadEntries() { reader.incrementState(); - case 
12: + case 13: nearEvicted = reader.readCollection("nearEvicted", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -271,7 +271,7 @@ public Collection preloadEntries() { reader.incrementState(); - case 13: + case 14: preloadEntries = reader.readCollection("preloadEntries", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -291,7 +291,7 @@ public Collection preloadEntries() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 14; + return 15; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFuture.java index 489fb63309ea5..3cae875f5d9c7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFuture.java @@ -86,4 +86,10 @@ public interface GridDhtTopologyFuture extends IgniteInternalFuture keys); + + /** + * + * @return {@code True} if this exchange changed affinity. 
+ */ + public boolean changedAffinity(); } \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFutureAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFutureAdapter.java index 539fef48bda46..60ab62fef5743 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFutureAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTopologyFutureAdapter.java @@ -29,12 +29,13 @@ import org.apache.ignite.internal.processors.cache.CacheInvalidStateException; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.typedef.F; import org.jetbrains.annotations.Nullable; import static org.apache.ignite.cache.PartitionLossPolicy.READ_ONLY_ALL; import static org.apache.ignite.cache.PartitionLossPolicy.READ_ONLY_SAFE; -import static org.apache.ignite.cache.PartitionLossPolicy.READ_WRITE_ALL; import static org.apache.ignite.cache.PartitionLossPolicy.READ_WRITE_SAFE; +import static org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.OperationType.WRITE; /** * @@ -42,7 +43,7 @@ public abstract class GridDhtTopologyFutureAdapter extends GridFutureAdapter implements GridDhtTopologyFuture { /** Cache groups validation results. */ - protected volatile Map grpValidRes; + protected volatile Map grpValidRes = Collections.emptyMap(); /** Whether or not cluster is active. */ protected volatile boolean clusterIsActive = true; @@ -52,7 +53,7 @@ public abstract class GridDhtTopologyFutureAdapter extends GridFutureAdapter topNodes) { + protected final CacheGroupValidation validateCacheGroup(CacheGroupContext grp, Collection topNodes) { Collection lostParts = grp.isLocal() ? 
Collections.emptyList() : grp.topology().lostPartitions(); @@ -65,11 +66,11 @@ protected final CacheValidation validateCacheGroup(CacheGroupContext grp, Collec valid = validator.validate(topNodes); } - return new CacheValidation(valid, lostParts); + return new CacheGroupValidation(valid, lostParts); } /** {@inheritDoc} */ - @Nullable @Override public final Throwable validateCache( + @Override public final @Nullable Throwable validateCache( GridCacheContext cctx, boolean recovery, boolean read, @@ -87,112 +88,185 @@ protected final CacheValidation validateCacheGroup(CacheGroupContext grp, Collec return new CacheInvalidStateException( "Failed to perform cache operation (cluster is not activated): " + cctx.name()); - CacheGroupContext grp = cctx.group(); - - PartitionLossPolicy partLossPlc = grp.config().getPartitionLossPolicy(); - - if (grp.needsRecovery() && !recovery) { - if (!read && (partLossPlc == READ_ONLY_SAFE || partLossPlc == READ_ONLY_ALL)) - return new IgniteCheckedException("Failed to write to cache (cache is moved to a read-only state): " + - cctx.name()); - } - - if (grp.needsRecovery() || grp.topologyValidator() != null) { - CacheValidation validation = grpValidRes.get(grp.groupId()); - - if (validation == null) - return null; + if (cctx.cache() == null) + return new CacheInvalidStateException( + "Failed to perform cache operation (cache is stopped): " + cctx.name()); - if (!validation.valid && !read) - return new IgniteCheckedException("Failed to perform cache operation " + - "(cache topology is not valid): " + cctx.name()); + OperationType opType = read ? 
OperationType.READ : WRITE; - if (recovery || !grp.needsRecovery()) - return null; + CacheGroupContext grp = cctx.group(); - if (key != null) { - int p = cctx.affinity().partition(key); + PartitionLossPolicy lossPlc = grp.config().getPartitionLossPolicy(); - CacheInvalidStateException ex = validatePartitionOperation(cctx.name(), read, key, p, - validation.lostParts, partLossPlc); + if (cctx.shared().readOnlyMode() && opType == WRITE) + return new IgniteCheckedException("Failed to perform cache operation (cluster is in read only mode)"); - if (ex != null) - return ex; - } + if (grp.needsRecovery() && !recovery) { + if (opType == WRITE && (lossPlc == READ_ONLY_SAFE || lossPlc == READ_ONLY_ALL)) + return new IgniteCheckedException( + "Failed to write to cache (cache is moved to a read-only state): " + cctx.name()); + } - if (keys != null) { - for (Object k : keys) { - int p = cctx.affinity().partition(k); + CacheGroupValidation validation = grpValidRes.get(grp.groupId()); - CacheInvalidStateException ex = validatePartitionOperation(cctx.name(), read, k, p, - validation.lostParts, partLossPlc); + if (validation == null) + return null; - if (ex != null) - return ex; - } - } + if (opType == WRITE && !validation.isValid()) { + return new IgniteCheckedException("Failed to perform cache operation " + + "(cache topology is not valid): " + cctx.name()); } - return null; - } + if (recovery) + return null; - /** - * @param cacheName Cache name. - * @param read Read flag. - * @param key Key to check. - * @param part Partition this key belongs to. - * @param lostParts Collection of lost partitions. - * @param plc Partition loss policy. - * @return Invalid state exception if this operation is disallowed. 
- */ - private CacheInvalidStateException validatePartitionOperation( - String cacheName, - boolean read, - Object key, - int part, - Collection lostParts, - PartitionLossPolicy plc - ) { - if (lostParts.contains(part)) { - if (!read) { - assert plc == READ_WRITE_ALL || plc == READ_WRITE_SAFE; + if (validation.hasLostPartitions()) { + if (key != null) + return LostPolicyValidator.validate(cctx, key, opType, validation.lostPartitions()); - if (plc == READ_WRITE_SAFE) { - return new CacheInvalidStateException("Failed to execute cache operation " + - "(all partition owners have left the grid, partition data has been lost) [" + - "cacheName=" + cacheName + ", part=" + part + ", key=" + key + ']'); - } - } - else { - // Read. - if (plc == READ_ONLY_SAFE || plc == READ_WRITE_SAFE) - return new CacheInvalidStateException("Failed to execute cache operation " + - "(all partition owners have left the grid, partition data has been lost) [" + - "cacheName=" + cacheName + ", part=" + part + ", key=" + key + ']'); - } + if (keys != null) + return LostPolicyValidator.validate(cctx, keys, opType, validation.lostPartitions()); } return null; } /** - * Cache validation result. + * Cache group validation result. */ - protected static class CacheValidation { + protected static class CacheGroupValidation { /** Topology validation result. */ - private boolean valid; + private final boolean valid; /** Lost partitions on this topology version. */ - private Collection lostParts; + private final Collection lostParts; /** * @param valid Valid flag. * @param lostParts Lost partitions. */ - private CacheValidation(boolean valid, Collection lostParts) { + private CacheGroupValidation(boolean valid, Collection lostParts) { this.valid = valid; this.lostParts = lostParts; } + + /** + * @return {@code True} if valid, {@code false} if invalid. + */ + public boolean isValid() { + return valid; + } + + /** + * @return {@code True} if lost partitions are present, {@code false} if not. 
+ */ + public boolean hasLostPartitions() { + return !F.isEmpty(lostParts); + } + + /** + * @return Lost partition ID collection. + */ + public Collection lostPartitions() { + return lostParts; + } } + /** + * + */ + public enum OperationType { + /** + * Read operation. + */ + READ, + /** + * Write operation. + */ + WRITE + } + + /** + * Lost policy validator. + */ + public static class LostPolicyValidator { + /** + * + */ + public static Throwable validate( + GridCacheContext cctx, + Object key, + OperationType opType, + Collection lostParts + ) { + CacheGroupContext grp = cctx.group(); + + PartitionLossPolicy lostPlc = grp.config().getPartitionLossPolicy(); + + int partition = cctx.affinity().partition(key); + + return validate(cctx, key, partition, opType, lostPlc, lostParts); + } + + /** + * + */ + public static Throwable validate( + GridCacheContext cctx, + Collection keys, + OperationType opType, + Collection lostParts + ) { + CacheGroupContext grp = cctx.group(); + + PartitionLossPolicy lostPlc = grp.config().getPartitionLossPolicy(); + + for (Object key : keys) { + int partition = cctx.affinity().partition(key); + + Throwable res = validate(cctx, key, partition, opType, lostPlc, lostParts); + + if (res != null) + return res; + } + + return null; + } + + /** + * + */ + private static Throwable validate( + GridCacheContext cctx, + Object key, + int partition, + OperationType opType, + PartitionLossPolicy lostPlc, + Collection lostParts + ) { + if (opType == WRITE) { + if (lostPlc == READ_ONLY_SAFE || lostPlc == READ_ONLY_ALL) { + return new IgniteCheckedException( + "Failed to write to cache (cache is moved to a read-only state): " + cctx.name() + ); + } + + if (lostParts.contains(partition) && lostPlc == READ_WRITE_SAFE) { + return new CacheInvalidStateException("Failed to execute cache operation " + + "(all partition owners have left the grid, partition data has been lost) [" + + "cacheName=" + cctx.name() + ", part=" + partition + ", key=" + key + ']'); + } 
+ } + + if (opType == OperationType.READ) { + if (lostParts.contains(partition) && (lostPlc == READ_ONLY_SAFE || lostPlc == READ_WRITE_SAFE)) + return new CacheInvalidStateException("Failed to execute cache operation " + + "(all partition owners have left the grid, partition data has been lost) [" + + "cacheName=" + cctx.name() + ", part=" + partition + ", key=" + key + ']' + ); + } + + return null; + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTransactionalCacheAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTransactionalCacheAdapter.java index 52638c048baab..f5b03d6d85a03 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTransactionalCacheAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTransactionalCacheAdapter.java @@ -339,7 +339,8 @@ protected GridDhtTransactionalCacheAdapter(GridCacheContext ctx, GridCache req.txSize(), req.subjectId(), req.taskNameHash(), - !req.skipStore() && req.storeUsed()); + !req.skipStore() && req.storeUsed(), + req.txLabel()); tx = ctx.tm().onCreated(null, tx); @@ -1121,7 +1122,9 @@ public IgniteInternalFuture lockAllAsync( req.txSize(), null, req.subjectId(), - req.taskNameHash()); + req.taskNameHash(), + req.txLabel(), + null); if (req.syncCommit()) tx.syncMode(FULL_SYNC); @@ -1646,7 +1649,7 @@ private void clearLocks(UUID nodeId, GridDistributedUnlockRequest req) { "(added to cancelled locks set): " + req); } - entry.touch(ctx.affinity().affinityTopologyVersion()); + entry.touch(); break; } @@ -1834,7 +1837,7 @@ else if (log.isDebugEnabled()) if (created && entry.markObsolete(dhtVer)) removeEntry(entry); - entry.touch(topVer); + entry.touch(); break; } @@ -1962,23 +1965,24 @@ private void processNearTxQueryResultsEnlistRequest(UUID nodeId, final GridNearT req.subjectId(), 
req.taskNameHash()); } - catch (IgniteCheckedException | IgniteException ex) { + catch (Throwable e) { GridNearTxQueryResultsEnlistResponse res = new GridNearTxQueryResultsEnlistResponse(req.cacheId(), req.futureId(), req.miniId(), req.version(), - ex); + e); try { ctx.io().send(nearNode, res, ctx.ioPolicy()); } - catch (IgniteCheckedException e) { - U.error(log, "Failed to send near enlist response [" + - "txId=" + req.version() + - ", node=" + nodeId + - ", res=" + res + ']', e); + catch (IgniteCheckedException ioEx) { + U.error(log, "Failed to send near enlist response " + + "[txId=" + req.version() + ", node=" + nodeId + ", res=" + res + ']', ioEx); } + if (e instanceof Error) + throw (Error) e; + return; } @@ -2124,7 +2128,8 @@ public GridDhtTxLocal initTxTopologyVersion(UUID nodeId, GridDhtTopologyFuture topFut = top.topologyVersionFuture(); - if (!topFut.isDone() || !topFut.topologyVersion().equals(topVer)) { + if (!topFut.isDone() || !(topFut.topologyVersion().compareTo(topVer) >= 0 + && ctx.shared().exchange().lastAffinityChangedTopologyVersion(topFut.initialVersion()).compareTo(topVer) <= 0)) { // TODO IGNITE-7164 Wait for topology change, remap client TX in case affinity was changed. top.readUnlock(); @@ -2155,7 +2160,9 @@ public GridDhtTxLocal initTxTopologyVersion(UUID nodeId, -1, null, txSubjectId, - txTaskNameHash); + txTaskNameHash, + null, + null); // if (req.syncCommit()) tx.syncMode(FULL_SYNC); @@ -2227,26 +2234,6 @@ private void processNearTxQueryResultsEnlistResponse(UUID nodeId, final GridNear fut.onResult(nodeId, res); } - /** - * @param primary Primary node. - * @param req Request. - * @param e Error. 
- */ - private void onError(UUID primary, GridDhtTxQueryEnlistRequest req, Throwable e) { - GridDhtTxQueryEnlistResponse res = new GridDhtTxQueryEnlistResponse(ctx.cacheId(), - req.dhtFutureId(), - req.batchId(), - e); - - try { - ctx.io().send(primary, res, ctx.ioPolicy()); - } - catch (IgniteCheckedException ioEx) { - U.error(log, "Failed to send DHT enlist reply to primary node [node: " + primary + ", req=" + req + - ']', ioEx); - } - } - /** * @param primary Primary node. * @param req Message. @@ -2282,7 +2269,8 @@ private void processDhtTxQueryEnlistRequest(UUID primary, GridDhtTxQueryEnlistRe -1, req0.subjectId(), req0.taskNameHash(), - false); + false, + null); tx.mvccSnapshot(new MvccSnapshotWithoutTxs(req0.coordinatorVersion(), req0.counter(), MVCC_OP_COUNTER_NA, req0.cleanupVersion())); @@ -2302,7 +2290,8 @@ private void processDhtTxQueryEnlistRequest(UUID primary, GridDhtTxQueryEnlistRe MvccSnapshot snapshot = new MvccSnapshotWithoutTxs(s0.coordinatorVersion(), s0.counter(), req.operationCounter(), s0.cleanupVersion()); - tx.mvccEnlistBatch(ctx, req.op(), req.keys(), req.values(), snapshot); + ctx.tm().txHandler().mvccEnlistBatch(tx, ctx, req.op(), req.keys(), req.values(), snapshot, + req.dhtFutureId(), req.batchId()); GridDhtTxQueryEnlistResponse res = new GridDhtTxQueryEnlistResponse(req.cacheId(), req.dhtFutureId(), @@ -2317,8 +2306,22 @@ private void processDhtTxQueryEnlistRequest(UUID primary, GridDhtTxQueryEnlistRe req + ']', ioEx); } } - catch (IgniteCheckedException e) { - onError(primary, req, e); + catch (Throwable e) { + GridDhtTxQueryEnlistResponse res = new GridDhtTxQueryEnlistResponse(ctx.cacheId(), + req.dhtFutureId(), + req.batchId(), + e); + + try { + ctx.io().send(primary, res, ctx.ioPolicy()); + } + catch (IgniteCheckedException ioEx) { + U.error(log, "Failed to send DHT enlist reply to primary node " + + "[node: " + primary + ", req=" + req + ']', ioEx); + } + + if (e instanceof Error) + throw (Error) e; } } diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxAbstractEnlistFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxAbstractEnlistFuture.java index 64f966d6f3f44..0a355a49c5417 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxAbstractEnlistFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxAbstractEnlistFuture.java @@ -18,8 +18,8 @@ package org.apache.ignite.internal.processors.cache.distributed.dht; import java.util.ArrayList; -import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -29,12 +29,12 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; +import javax.cache.processor.EntryProcessor; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; -import org.apache.ignite.internal.pagemem.wal.WALPointer; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheEntryInfoCollection; import org.apache.ignite.internal.processors.cache.CacheEntryPredicate; @@ -135,7 +135,7 @@ public abstract class GridDhtTxAbstractEnlistFuture extends GridCacheFutureAd protected final MvccSnapshot mvccSnapshot; /** New DHT nodes. */ - protected Set newDhtNodes = Collections.newSetFromMap(new ConcurrentHashMap<>()); + protected Set newDhtNodes = new HashSet<>(); /** Near node ID. 
*/ protected final UUID nearNodeId; @@ -179,9 +179,6 @@ public abstract class GridDhtTxAbstractEnlistFuture extends GridCacheFutureAd /** Batches already sent to remotes, but their acks are not received yet. */ private ConcurrentMap> pending; - /** */ - private WALPointer walPtr; - /** Do not send DHT requests to near node. */ protected boolean skipNearNodeUpdates; @@ -191,6 +188,9 @@ public abstract class GridDhtTxAbstractEnlistFuture extends GridCacheFutureAd /** Moving partitions. */ private Map movingParts; + /** Map for tracking nodes to which first request was already sent in order to send smaller subsequent requests. */ + private final Set firstReqSent = new HashSet<>(); + /** * @param nearNodeId Near node ID. * @param nearLockVer Near lock version. @@ -385,12 +385,20 @@ private void continueLoop(boolean ignoreCntr) { try { while (true) { + int curPart = -1; + List backups = null; + while (hasNext0()) { Object cur = next0(); - KeyCacheObject key = cctx.toCacheKeyObject(op.isDeleteOrLock() ? cur : ((IgniteBiTuple)cur).getKey()); + KeyCacheObject key = toKey(op, cur); + + if (curPart != key.partition()) + backups = backupNodes(curPart = key.partition()); - if (!ensureFreeSlot(key)) { + assert backups != null; + + if (!ensureFreeSlot(key, backups)) { // Can't advance further at the moment. peek = cur; @@ -406,10 +414,28 @@ private void continueLoop(boolean ignoreCntr) { assert !entry.detached(); - CacheObject val = op.isDeleteOrLock() ? null : cctx.toCacheObject(((IgniteBiTuple)cur).getValue()); + CacheObject val = op.isDeleteOrLock() || op.isInvoke() + ? 
null : cctx.toCacheObject(((IgniteBiTuple)cur).getValue()); + + GridInvokeValue invokeVal = null; + EntryProcessor entryProc = null; + Object[] invokeArgs = null; + + if (op.isInvoke()) { + assert needResult(); + + invokeVal = (GridInvokeValue)((IgniteBiTuple)cur).getValue(); + + entryProc = invokeVal.entryProcessor(); + invokeArgs = invokeVal.invokeArgs(); + } + + assert entryProc != null || !op.isInvoke(); tx.markQueryEnlisted(mvccSnapshot); + boolean needOldVal = cctx.shared().mvccCaching().continuousQueryListeners(cctx, tx, key) != null; + GridCacheUpdateTxResult res; while (true) { @@ -423,25 +449,30 @@ private void continueLoop(boolean ignoreCntr) { cctx.localNodeId(), topVer, mvccSnapshot, - isMoving(key.partition()), + isMoving(key.partition(), backups), + needOldVal, filter, needResult()); break; case INSERT: + case TRANSFORM: case UPSERT: case UPDATE: res = entry.mvccSet( tx, cctx.localNodeId(), val, + entryProc, + invokeArgs, 0, topVer, mvccSnapshot, op.cacheOperation(), - isMoving(key.partition()), + isMoving(key.partition(), backups), op.noCreate(), + needOldVal, filter, needResult()); @@ -471,19 +502,21 @@ private void continueLoop(boolean ignoreCntr) { IgniteInternalFuture updateFut = res.updateFuture(); + + final Message val0 = invokeVal != null ? 
invokeVal : val; + if (updateFut != null) { if (updateFut.isDone()) res = updateFut.get(); else { - CacheObject val0 = val; GridDhtCacheEntry entry0 = entry; + List backups0 = backups; it.beforeDetach(); updateFut.listen(new CI1>() { @Override public void apply(IgniteInternalFuture fut) { try { - processEntry(entry0, op, fut.get(), val0); + processEntry(entry0, op, fut.get(), val0, backups0); continueLoop(true); } @@ -498,16 +531,10 @@ private void continueLoop(boolean ignoreCntr) { } } - processEntry(entry, op, res, val); + processEntry(entry, op, res, val0, backups); } if (!hasNext0()) { - if (walPtr != null && !cctx.tm().logTxRecords()) { - cctx.shared().wal().flush(walPtr, true); - - walPtr = null; // Avoid additional flushing. - } - if (!F.isEmpty(batches)) { // Flush incomplete batches. // Need to skip batches for nodes where first request (contains tx info) is still in-flight. @@ -572,6 +599,16 @@ private boolean hasNext0() { return peek != FINISHED; } + /** */ + private KeyCacheObject toKey(EnlistOperation op, Object cur) { + KeyCacheObject key = cctx.toCacheKeyObject(op.isDeleteOrLock() ? cur : ((IgniteBiTuple)cur).getKey()); + + if (key.partition() == -1) + key.partition(cctx.affinity().partition(key)); + + return key; + } + /** * @return {@code True} if in-flight batches map is empty. */ @@ -592,41 +629,40 @@ private boolean noPendingRequests() { * @param op Operation. * @param updRes Update result. * @param val New value. + * @param backups Backup nodes * @throws IgniteCheckedException If failed. 
*/ private void processEntry(GridDhtCacheEntry entry, EnlistOperation op, - GridCacheUpdateTxResult updRes, CacheObject val) throws IgniteCheckedException { + GridCacheUpdateTxResult updRes, Message val, List backups) throws IgniteCheckedException { checkCompleted(); assert updRes != null && updRes.updateFuture() == null; - WALPointer ptr0 = updRes.loggedPointer(); - - if (ptr0 != null) - walPtr = ptr0; - onEntryProcessed(entry.key(), updRes); - if (!updRes.success()) + if (!updRes.success() + || updRes.filtered() + || op == EnlistOperation.LOCK) return; - if (op != EnlistOperation.LOCK) - addToBatch(entry.key(), val, updRes.mvccHistory(), entry.context().cacheId()); + cctx.shared().mvccCaching().addEnlisted(entry.key(), updRes.newValue(), 0, 0, lockVer, + updRes.oldValue(), tx.local(), tx.topologyVersion(), mvccSnapshot, cctx.cacheId(), tx, null, -1); + + addToBatch(entry.key(), val, updRes.mvccHistory(), entry.context().cacheId(), backups); } /** * Adds row to batch. * IMPORTANT: This method should be called from the critical section in {@link this.sendNextBatches()} - * * @param key Key. * @param val Value. * @param hist History rows. + * @param cacheId Cache Id. + * @param backups Backup nodes */ - private void addToBatch(KeyCacheObject key, CacheObject val, List hist, - int cacheId) throws IgniteCheckedException { - List backups = backupNodes(key); - - int part = cctx.affinity().partition(key); + private void addToBatch(KeyCacheObject key, Message val, List hist, + int cacheId, List backups) throws IgniteCheckedException { + int part = key.partition(); tx.touchPartition(cacheId, part); @@ -640,7 +676,7 @@ private void addToBatch(KeyCacheObject key, CacheObject val, List backups) { if (F.isEmpty(batches) || F.isEmpty(pending)) return true; + int part = key.partition(); + // Check possibility of adding to batch and sending. 
- for (ClusterNode node : backupNodes(key)) { - if (skipNearNodeUpdates && node.id().equals(nearNodeId) && !isMoving(node, key.partition())) + for (ClusterNode node : backups) { + if (skipNearLocalUpdate(node, isMoving(node, part))) continue; Batch batch = batches.get(node.id()); @@ -792,16 +831,17 @@ private void sendBatch(Batch batch) throws IgniteCheckedException { GridDhtTxQueryEnlistRequest req; - if (newRemoteTx(node)) { + if (newRemoteTx(node)) addNewRemoteTxNode(node); + if (firstReqSent.add(node)) { // If this is the first request to this node, send full info. req = new GridDhtTxQueryFirstEnlistRequest(cctx.cacheId(), futId, cctx.localNodeId(), tx.topologyVersionSnapshot(), lockVer, - mvccSnapshot, + mvccSnapshot.withoutActiveTransactions(), tx.remainingTime(), tx.taskNameHash(), nearNodeId, @@ -839,7 +879,13 @@ private void sendBatch(Batch batch) throws IgniteCheckedException { assert prev == null; - cctx.io().send(node, req, cctx.ioPolicy()); + try { + cctx.io().send(node, req, cctx.ioPolicy()); + } + catch (ClusterTopologyCheckedException e) { + // Backup node left the grid; will continue. + onNodeLeft(node.id()); + } } /** */ @@ -856,20 +902,21 @@ private synchronized void updateMappings(ClusterNode node) throws IgniteCheckedE mapping.markQueryUpdate(); } + /** */ + private boolean skipNearLocalUpdate(ClusterNode node, boolean moving) { + return skipNearNodeUpdates && node.id().equals(nearNodeId) && !moving; + } + /** - * @param key Key. - * @return Backup nodes for the given key. + * @param part Partition. + * @return Backup nodes for the given partition. 
*/ - @NotNull private List backupNodes(KeyCacheObject key) { - List dhtNodes = cctx.affinity().nodesByKey(key, tx.topologyVersion()); - - assert !dhtNodes.isEmpty() && dhtNodes.get(0).id().equals(cctx.localNodeId()) : - "localNode = " + cctx.localNodeId() + ", dhtNodes = " + dhtNodes; + @NotNull private List backupNodes(int part) { + List nodes = cctx.topology().nodes(part, tx.topologyVersion()); - if (dhtNodes.size() == 1) - return Collections.emptyList(); + assert nodes.size() > 0 && nodes.get(0).isLocal(); - return dhtNodes.subList(1, dhtNodes.size()); + return nodes.subList(1, nodes.size()); } /** @@ -896,9 +943,10 @@ private void checkPartitions(@Nullable int[] parts) throws ClusterTopologyChecke for (int i = 0; i < parts.length; i++) { GridDhtLocalPartition p = top.localPartition(parts[i]); - if (p == null || p.state() != GridDhtPartitionState.OWNING) + if (p == null || p.state() != GridDhtPartitionState.OWNING) { throw new ClusterTopologyCheckedException("Cannot run update query. " + - "Node must own all the necessary partitions."); // TODO IGNITE-7185 Send retry instead. + "Node must own all the necessary partitions."); + } } } finally { @@ -908,31 +956,33 @@ private void checkPartitions(@Nullable int[] parts) throws ClusterTopologyChecke /** * @param part Partition. + * @param backups Backup nodes. * @return {@code true} if the given partition is rebalancing to any backup node. 
*/ - private boolean isMoving(int part) { + private boolean isMoving(int part, List backups) { + Boolean res; + if (movingParts == null) movingParts = new HashMap<>(); - Boolean res = movingParts.get(part); + if ((res = movingParts.get(part)) == null) + movingParts.put(part, res = isMoving0(part, backups)); - if (res != null) - return res; - - List dhtNodes = cctx.affinity().nodesByPartition(part, tx.topologyVersion()); - - for (int i = 1; i < dhtNodes.size(); i++) { - ClusterNode node = dhtNodes.get(i); - if (isMoving(node, part)) { - movingParts.put(part, Boolean.TRUE); + return res == Boolean.TRUE; + } - return true; - } + /** + * @param part Partition. + * @param backups Backup nodes. + * @return {@code true} if the given partition is rebalancing to any backup node. + */ + private Boolean isMoving0(int part, List backups) { + for (ClusterNode node : backups) { + if (isMoving(node, part)) + return Boolean.TRUE; } - movingParts.put(part, Boolean.FALSE); - - return false; + return Boolean.FALSE; } /** @@ -941,9 +991,7 @@ private boolean isMoving(int part) { * @return {@code true} if the given partition is rebalancing to the given node. 
*/ private boolean isMoving(ClusterNode node, int part) { - GridDhtPartitionState partState = cctx.topology().partitionState(node.id(), part); - - return partState != GridDhtPartitionState.OWNING && partState != GridDhtPartitionState.EVICTED; + return cctx.topology().partitionState(node.id(), part) == GridDhtPartitionState.MOVING; } /** */ @@ -996,23 +1044,17 @@ public void onResult(UUID nodeId, GridDhtTxQueryEnlistResponse res) { /** {@inheritDoc} */ @Override public boolean onNodeLeft(UUID nodeId) { - boolean backupLeft = false; - - Set nodes = tx.lockTransactionNodes(); - - if (!F.isEmpty(nodes)) { - for (ClusterNode node : nodes) { - if (node.id().equals(nodeId)) { - backupLeft = true; - - break; - } - } + try { + if (nearNodeId.equals(nodeId)) + onDone(new ClusterTopologyCheckedException("Requesting node left the grid [nodeId=" + nodeId + ']')); + else if (pending != null && pending.remove(nodeId) != null) + cctx.kernalContext().closure().runLocalSafe(() -> continueLoop(false)); + } + catch (Exception e) { + onDone(e); } - return (backupLeft || nearNodeId.equals(nodeId)) && onDone( - new ClusterTopologyCheckedException((backupLeft ? "Backup" : "Requesting") + - " node left the grid [nodeId=" + nodeId + ']')); + return false; } /** {@inheritDoc} */ @@ -1098,21 +1140,23 @@ public ClusterNode node() { * @param val Value or preload entries collection. */ public void add(KeyCacheObject key, Message val) { - assert val == null || val instanceof CacheObject || val instanceof CacheEntryInfoCollection; + assert val == null || val instanceof GridInvokeValue || val instanceof CacheObject + || val instanceof CacheEntryInfoCollection; if (keys == null) keys = new ArrayList<>(); - keys.add(key); - - if (val != null) { - if (vals == null) - vals = new ArrayList<>(); + if (vals == null && val != null) { + vals = new ArrayList<>(U.ceilPow2(keys.size() + 1)); - vals.add(val); + while (vals.size() != keys.size()) + vals.add(null); // Pad vals with nulls so indices stay aligned with keys. 
} - assert (vals == null) || keys.size() == vals.size(); + keys.add(key); + + if (vals != null) + vals.add(val); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxEnlistFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxEnlistFuture.java index 58d6b15e9f285..60644cd8e351a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxEnlistFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxEnlistFuture.java @@ -22,6 +22,7 @@ import java.util.UUID; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.processors.cache.CacheEntryPredicate; +import org.apache.ignite.internal.processors.cache.CacheInvokeResult; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheReturn; import org.apache.ignite.internal.processors.cache.GridCacheUpdateTxResult; @@ -114,10 +115,20 @@ public GridDhtTxEnlistFuture(UUID nearNodeId, /** {@inheritDoc} */ @Override protected void onEntryProcessed(KeyCacheObject key, GridCacheUpdateTxResult txRes) { - if (needRes && txRes.success()) + assert txRes.invokeResult() == null || needRes; + + res.success(txRes.success()); + + if(txRes.invokeResult() != null) { + res.invokeResult(true); + + CacheInvokeResult invokeRes = txRes.invokeResult(); + + if (invokeRes.result() != null || invokeRes.error() != null) + res.addEntryProcessResult(cctx, key, null, invokeRes.result(), invokeRes.error(), cctx.keepBinary()); + } + else if (needRes) res.set(cctx, txRes.prevValue(), txRes.success(), true); - else - res.success(txRes.success()); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishFuture.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishFuture.java index 9283939c558b8..d0fbd90bf03eb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishFuture.java @@ -173,10 +173,13 @@ public void rollbackOnError(Throwable e) { if (ERR_UPD.compareAndSet(this, null, e)) { tx.setRollbackOnly(); - if (X.hasCause(e, InvalidEnvironmentException.class, NodeStoppingException.class)) + if (X.hasCause(e, NodeStoppingException.class) || cctx.kernalContext().failure().nodeStopping()) onComplete(); - else + else { + // Rolling back a remote transaction may result in partial commit. + // This is only acceptable in tests with no-op failure handler. finish(false); + } } } @@ -230,9 +233,9 @@ public void onResult(UUID nodeId, GridDhtTxFinishResponse res) { if (this.tx.onePhaseCommit() && (this.tx.state() == COMMITTING)) { try { - boolean hasInvalidEnvironmentIssue = X.hasCause(err, InvalidEnvironmentException.class, NodeStoppingException.class); + boolean nodeStopping = X.hasCause(err, NodeStoppingException.class); - this.tx.tmFinish(err == null, hasInvalidEnvironmentIssue, false); + this.tx.tmFinish(err == null, nodeStopping || cctx.kernalContext().failure().nodeStopping(), false); } catch (IgniteCheckedException finishErr) { U.error(log, "Failed to finish tx: " + tx, e); @@ -245,7 +248,7 @@ public void onResult(UUID nodeId, GridDhtTxFinishResponse res) { if (commit && e == null) e = this.tx.commitError(); - Throwable finishErr = e != null ? e : err; + Throwable finishErr = mvccFinish(e != null ? 
e : err); if (super.onDone(tx, finishErr)) { if (finishErr == null) @@ -372,7 +375,7 @@ private boolean rollbackLockTransactions(Collection nodes) { false, false, tx.mvccSnapshot(), - tx.filterUpdateCountersForBackupNode(n)); + cctx.tm().txHandler().filterUpdateCountersForBackupNode(tx, n)); try { cctx.io().send(n, req, tx.ioPolicy()); @@ -420,7 +423,7 @@ private boolean finish(boolean commit, if (tx.onePhaseCommit()) return false; - assert !commit || !tx.txState().mvccEnabled(cctx) || tx.mvccSnapshot() != null || F.isEmpty(tx.writeEntries()); + assert !commit || !tx.txState().mvccEnabled() || tx.mvccSnapshot() != null || F.isEmpty(tx.writeEntries()); boolean sync = tx.syncMode() == FULL_SYNC; @@ -485,7 +488,7 @@ private boolean finish(boolean commit, false, false, mvccSnapshot, - commit ? null : tx.filterUpdateCountersForBackupNode(n)); + commit ? null : cctx.tm().txHandler().filterUpdateCountersForBackupNode(tx, n)); req.writeVersion(tx.writeVersion() != null ? tx.writeVersion() : tx.xidVersion()); @@ -595,6 +598,23 @@ private boolean finish(boolean commit, return res; } + /** + * Finishes MVCC transaction on the local node. 
+ */ + private Throwable mvccFinish(Throwable commitError) { + try { + cctx.tm().mvccFinish(tx, commit && commitError == null); + } + catch (IgniteCheckedException ex) { + if (commitError == null) + tx.commitError(commitError = ex); + else + commitError.addSuppressed(ex); + } + + return commitError; + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public void addDiagnosticRequest(IgniteDiagnosticPrepareContext ctx) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishRequest.java index 61896b59a3d15..04f411d8e3b23 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishRequest.java @@ -393,44 +393,38 @@ public Collection updateCounters() { } switch (writer.state()) { - case 21: - if (!writer.writeByte("isolation", isolation != null ? (byte)isolation.ordinal() : -1)) - return false; - - writer.incrementState(); - case 22: - if (!writer.writeInt("miniId", miniId)) + if (!writer.writeByte("isolation", isolation != null ? 
(byte)isolation.ordinal() : -1)) return false; writer.incrementState(); case 23: - if (!writer.writeUuid("nearNodeId", nearNodeId)) + if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); case 24: - if (!writer.writeMessage("partUpdateCnt", partUpdateCnt)) + if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) return false; writer.incrementState(); case 25: - if (!writer.writeCollection("pendingVers", pendingVers, MessageCollectionItemType.MSG)) + if (!writer.writeUuid("nearNodeId", nearNodeId)) return false; writer.incrementState(); case 26: - if (!writer.writeMessage("writeVer", writeVer)) + if (!writer.writeMessage("partUpdateCnt", partUpdateCnt)) return false; writer.incrementState(); case 27: - if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) + if (!writer.writeCollection("pendingVers", pendingVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); @@ -441,6 +435,12 @@ public Collection updateCounters() { writer.incrementState(); + case 29: + if (!writer.writeMessage("writeVer", writeVer)) + return false; + + writer.incrementState(); + } return true; @@ -457,7 +457,7 @@ public Collection updateCounters() { return false; switch (reader.state()) { - case 21: + case 22: byte isolationOrd; isolationOrd = reader.readByte("isolation"); @@ -469,16 +469,8 @@ public Collection updateCounters() { reader.incrementState(); - case 22: - miniId = reader.readInt("miniId"); - - if (!reader.isLastRead()) - return false; - - reader.incrementState(); - case 23: - nearNodeId = reader.readUuid("nearNodeId"); + miniId = reader.readInt("miniId"); if (!reader.isLastRead()) return false; @@ -486,7 +478,7 @@ public Collection updateCounters() { reader.incrementState(); case 24: - partUpdateCnt = reader.readMessage("partUpdateCnt"); + mvccSnapshot = reader.readMessage("mvccSnapshot"); if (!reader.isLastRead()) return false; @@ -494,7 +486,7 @@ public Collection updateCounters() { reader.incrementState(); case 25: - pendingVers = 
reader.readCollection("pendingVers", MessageCollectionItemType.MSG); + nearNodeId = reader.readUuid("nearNodeId"); if (!reader.isLastRead()) return false; @@ -502,7 +494,7 @@ public Collection updateCounters() { reader.incrementState(); case 26: - writeVer = reader.readMessage("writeVer"); + partUpdateCnt = reader.readMessage("partUpdateCnt"); if (!reader.isLastRead()) return false; @@ -510,7 +502,7 @@ public Collection updateCounters() { reader.incrementState(); case 27: - mvccSnapshot = reader.readMessage("mvccSnapshot"); + pendingVers = reader.readCollection("pendingVers", MessageCollectionItemType.MSG); if (!reader.isLastRead()) return false; @@ -525,6 +517,14 @@ public Collection updateCounters() { reader.incrementState(); + case 29: + writeVer = reader.readMessage("writeVer"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + } return reader.afterMessageRead(GridDhtTxFinishRequest.class); @@ -537,7 +537,7 @@ public Collection updateCounters() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 29; + return 30; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishResponse.java index 6d717ebf904e6..d777a2201a149 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxFinishResponse.java @@ -173,19 +173,19 @@ public GridCacheReturn returnValue() { } switch (writer.state()) { - case 6: + case 7: if (!writer.writeByteArray("checkCommittedErrBytes", checkCommittedErrBytes)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 8: + case 9: if 
(!writer.writeMessage("retVal", retVal)) return false; @@ -207,7 +207,7 @@ public GridCacheReturn returnValue() { return false; switch (reader.state()) { - case 6: + case 7: checkCommittedErrBytes = reader.readByteArray("checkCommittedErrBytes"); if (!reader.isLastRead()) @@ -215,7 +215,7 @@ public GridCacheReturn returnValue() { reader.incrementState(); - case 7: + case 8: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -223,7 +223,7 @@ public GridCacheReturn returnValue() { reader.incrementState(); - case 8: + case 9: retVal = reader.readMessage("retVal"); if (!reader.isLastRead()) @@ -243,7 +243,7 @@ public GridCacheReturn returnValue() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 9; + return 10; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocal.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocal.java index a091d44ac423f..42c9dbe36ce7c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocal.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocal.java @@ -24,6 +24,8 @@ import java.util.UUID; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -34,11 +36,13 @@ import org.apache.ignite.internal.processors.cache.KeyCacheObject; import org.apache.ignite.internal.processors.cache.distributed.GridCacheMappedVersion; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishResponse; 
+import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareResponse;
 import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx;
 import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry;
 import org.apache.ignite.internal.processors.cache.version.GridCacheVersion;
+import org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException;
 import org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException;
 import org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException;
 import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException;
@@ -46,6 +50,7 @@ import org.apache.ignite.internal.util.tostring.GridToStringExclude;
 import org.apache.ignite.internal.util.typedef.CI1;
 import org.apache.ignite.internal.util.typedef.F;
+import org.apache.ignite.internal.util.typedef.X;
 import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteInClosure;
@@ -93,6 +98,9 @@ public class GridDhtTxLocal extends GridDhtTxLocalAdapter implements GridCacheMa
     @GridToStringExclude
     private volatile GridDhtTxPrepareFuture prepFut;
 
+    /** Transaction label. */
+    private @Nullable String lb;
+
     /**
      * Empty constructor required for {@link Externalizable}.
      */
@@ -115,6 +123,8 @@ public GridDhtTxLocal() {
      * @param storeEnabled Store enabled flag.
      * @param txSize Expected transaction size.
      * @param txNodes Transaction nodes mapping.
+     * @param lb Transaction label.
+     * @param parentTx Transaction from which this transaction was copied (if any).
*/ public GridDhtTxLocal( GridCacheSharedContext cctx, @@ -138,7 +148,9 @@ public GridDhtTxLocal( int txSize, Map> txNodes, UUID subjId, - int taskNameHash + int taskNameHash, + @Nullable String lb, + GridNearTxLocal parentTx ) { super( cctx, @@ -158,6 +170,8 @@ public GridDhtTxLocal( subjId, taskNameHash); + this.lb = lb; + assert nearNodeId != null; assert nearFutId != null; assert nearXidVer != null; @@ -170,6 +184,8 @@ public GridDhtTxLocal( threadId = nearThreadId; + setParentTx(parentTx); + assert !F.eq(xidVer, nearXidVer); initResult(); @@ -467,7 +483,11 @@ else if (!lockFut.isDone()) { ", tx=" + CU.txString(this) + ']'); } catch (IgniteCheckedException e) { - U.error(log, "Failed to finish transaction [commit=" + commit + ", tx=" + this + ']', e); + logTxFinishErrorSafe(log, commit, e); + + // Treat heuristic exception as critical. + if (X.hasCause(e, IgniteTxHeuristicCheckedException.class)) + cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); err = e; } @@ -525,6 +545,11 @@ public IgniteInternalFuture commitDhtLocalAsync() { return commitDhtLocalAsync(); } + /** {@inheritDoc} */ + @Nullable @Override public String label() { + return lb; + } + /** {@inheritDoc} */ @Override protected void clearPrepareFuture(GridDhtTxPrepareFuture fut) { assert optimistic(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocalAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocalAdapter.java index ffa383bd2994e..86f9c3c0d0cb8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocalAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxLocalAdapter.java @@ -18,7 +18,6 @@ package org.apache.ignite.internal.processors.cache.distributed.dht; import java.io.Externalizable; -import java.util.ArrayList; 
import java.util.Collection; import java.util.Collections; import java.util.HashSet; @@ -30,9 +29,10 @@ import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.GridCacheAffinityManager; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; import org.apache.ignite.internal.processors.cache.GridCacheEntryRemovedException; @@ -43,9 +43,9 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareResponse; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; +import org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter; -import org.apache.ignite.internal.processors.cache.transactions.TxCounters; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.F0; import org.apache.ignite.internal.util.GridLeanMap; @@ -941,41 +941,6 @@ protected final IgniteInternalFuture chainOnePhasePre return prepFut; } - /** - * @param node Backup node. - * @return Partition counters map for the given backup node. 
- */ - public List filterUpdateCountersForBackupNode(ClusterNode node) { - TxCounters txCntrs = txCounters(false); - - if (txCntrs == null || F.isEmpty(txCntrs.updateCounters())) - return null; - - Collection updCntrs = txCntrs.updateCounters(); - - List res = new ArrayList<>(updCntrs.size()); - - AffinityTopologyVersion top = topologyVersionSnapshot(); - - for (PartitionUpdateCountersMessage partCntrs : updCntrs) { - GridCacheAffinityManager affinity = cctx.cacheContext(partCntrs.cacheId()).affinity(); - - PartitionUpdateCountersMessage resCntrs = new PartitionUpdateCountersMessage(partCntrs.cacheId(), partCntrs.size()); - - for (int i = 0; i < partCntrs.size(); i++) { - int part = partCntrs.partition(i); - - if (affinity.backupByPartition(node, part, top)) - resCntrs.add(part, partCntrs.initialCounter(i), partCntrs.updatesCount(i)); - } - - if (resCntrs.size() > 0) - res.add(resCntrs); - } - - return res; - } - /** {@inheritDoc} */ @Override public String toString() { return GridToStringBuilder.toString(GridDhtTxLocalAdapter.class, this, "nearNodes", nearMap.keySet(), diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxOnePhaseCommitAckRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxOnePhaseCommitAckRequest.java index 67eacd3f8c5f8..50f2e7947b1c4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxOnePhaseCommitAckRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxOnePhaseCommitAckRequest.java @@ -97,7 +97,7 @@ public Collection versions() { } switch (writer.state()) { - case 2: + case 3: if (!writer.writeCollection("vers", vers, MessageCollectionItemType.MSG)) return false; @@ -119,7 +119,7 @@ public Collection versions() { return false; switch (reader.state()) { - case 2: + case 3: vers = reader.readCollection("vers", 
MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -139,6 +139,6 @@ public Collection versions() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 3; + return 4; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareFuture.java index 0edf63f8e1c44..d1a096d67eaaf 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareFuture.java @@ -26,6 +26,7 @@ import java.util.Map; import java.util.Set; import java.util.UUID; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import java.util.concurrent.atomic.AtomicReference; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; @@ -36,6 +37,8 @@ import org.apache.ignite.IgniteInterruptedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.IgniteDiagnosticAware; import org.apache.ignite.internal.IgniteDiagnosticPrepareContext; import org.apache.ignite.internal.IgniteInternalFuture; @@ -68,7 +71,6 @@ import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; -import org.apache.ignite.internal.processors.cache.transactions.TxCounters; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.dr.GridDrType; import 
org.apache.ignite.internal.processors.timeout.GridTimeoutObjectAdapter; @@ -97,12 +99,14 @@ import org.jetbrains.annotations.Nullable; import static org.apache.ignite.events.EventType.EVT_CACHE_REBALANCE_OBJECT_LOADED; +import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_VALIDATE_CACHE_REQUESTS; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.CREATE; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.DELETE; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.NOOP; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.READ; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.TRANSFORM; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.UPDATE; +import static org.apache.ignite.internal.util.lang.GridFunc.isEmpty; import static org.apache.ignite.transactions.TransactionState.PREPARED; /** @@ -216,6 +220,9 @@ public final class GridDhtTxPrepareFuture extends GridCacheCompoundFuture 0 ? new PrepareTimeoutObject(timeout) : null; + + if (tx.onePhaseCommit()) + timeoutAddedLatch = new CountDownLatch(1); } /** {@inheritDoc} */ @@ -691,6 +701,14 @@ private boolean mapIfLocked() { if (!MAPPED_UPD.compareAndSet(this, 0, 1)) return false; + if (timeoutObj != null && tx.onePhaseCommit()) { + U.awaitQuiet(timeoutAddedLatch); + + // Disable timeouts after all locks are acquired for one-phase commit or partition desync will occur. + if (!cctx.time().removeTimeoutObject(timeoutObj)) + return true; // Should not proceed with prepare if tx is already timed out. + } + if (forceKeysFut == null || (forceKeysFut.isDone() && forceKeysFut.error() == null)) prepare0(); else { @@ -729,7 +747,7 @@ private boolean mapIfLocked() { tx.clearPrepareFuture(this); // Do not commit one-phase commit transaction if originating node has near cache enabled. 
- if (tx.onePhaseCommit() && tx.commitOnPrepare()) { + if (tx.commitOnPrepare()) { assert last; Throwable prepErr = this.err; @@ -739,37 +757,30 @@ private boolean mapIfLocked() { onComplete(res); - if (tx.commitOnPrepare()) { - if (tx.markFinalizing(IgniteInternalTx.FinalizationStatus.USER_FINISH)) { - IgniteInternalFuture fut = null; - - CIX1> resClo = - new CIX1>() { - @Override public void applyx(IgniteInternalFuture fut) { - if(res.error() == null && fut.error() != null) - res.error(fut.error()); + if (tx.markFinalizing(IgniteInternalTx.FinalizationStatus.USER_FINISH)) { + CIX1> resClo = + new CIX1>() { + @Override public void applyx(IgniteInternalFuture fut) { + if (res.error() == null && fut.error() != null) + res.error(fut.error()); - if (REPLIED_UPD.compareAndSet(GridDhtTxPrepareFuture.this, 0, 1)) - sendPrepareResponse(res); - } - }; + if (REPLIED_UPD.compareAndSet(GridDhtTxPrepareFuture.this, 0, 1)) + sendPrepareResponse(res); + } + }; + try { if (prepErr == null) { try { - fut = tx.commitAsync(); + tx.commitAsync().listen(resClo); } - catch (RuntimeException | Error e) { - Exception hEx = new IgniteTxHeuristicCheckedException("Commit produced a runtime " + - "exception: " + CU.txString(tx), e); - - res.error(hEx); + catch (Throwable e) { + res.error(e); tx.systemInvalidate(true); try { - fut = tx.rollbackAsync(); - - fut.listen(resClo); + tx.rollbackAsync().listen(resClo); } catch (Throwable e1) { e.addSuppressed(e1); @@ -777,25 +788,25 @@ private boolean mapIfLocked() { throw e; } - } - else if (!cctx.kernalContext().isStopping()) + else if (!cctx.kernalContext().isStopping()) { try { - fut = tx.rollbackAsync(); + tx.rollbackAsync().listen(resClo); } catch (Throwable e) { - err.addSuppressed(e); - fut = null; + if (err != null) + err.addSuppressed(e); + + throw err; } + } + } + catch (Throwable e) { + tx.logTxFinishErrorSafe(log, true, e); - if (fut != null) - fut.listen(resClo); + cctx.kernalContext().failure().process(new 
FailureContext(FailureType.CRITICAL_ERROR, e)); } } - else { - if (REPLIED_UPD.compareAndSet(this, 0, 1)) - sendPrepareResponse(res); - } return true; } @@ -896,8 +907,6 @@ private GridNearTxPrepareResponse createPrepareResponse(@Nullable Throwable prep tx.onePhaseCommit(), tx.activeCachesDeploymentEnabled()); - res.mvccSnapshot(tx.mvccSnapshot()); - if (prepErr == null) { if (tx.needReturnValue() || tx.nearOnOriginatingNode() || tx.hasInterceptor()) addDhtValues(res); @@ -1001,7 +1010,7 @@ private boolean isMini(IgniteInternalFuture f) { * @return {@code True} if {@code done} flag was changed as a result of this call. */ private boolean onComplete(@Nullable GridNearTxPrepareResponse res) { - if ((last || tx.isSystemInvalidate()) && !(tx.near() && tx.local())) + if (!tx.onePhaseCommit() && ((last || tx.isSystemInvalidate()) && !(tx.near() && tx.local()))) tx.state(PREPARED); if (super.onDone(res, res == null ? err : null)) { @@ -1045,6 +1054,24 @@ public void prepare(GridNearTxPrepareRequest req) { this.req = req; + ClusterNode node = cctx.discovery().node(tx.topologyVersion(), tx.nearNodeId()); + + boolean validateCache = needCacheValidation(node); + + if (validateCache) { + GridDhtTopologyFuture topFut = cctx.exchange().lastFinishedFuture(); + + if (topFut != null && !isEmpty(req.writes())) { + // All caches either read only or not. So validation of one cache context is enough. + GridCacheContext ctx = F.first(req.writes()).context(); + + Throwable err = topFut.validateCache(ctx, req.recovery(), isEmpty(req.writes()), null, null); + + if (err != null) + onDone(null, new IgniteCheckedException(err)); + } + } + boolean ser = tx.serializable() && tx.optimistic(); if (!F.isEmpty(req.writes()) || (ser && !F.isEmpty(req.reads()))) { @@ -1063,14 +1090,35 @@ public void prepare(GridNearTxPrepareRequest req) { readyLocks(); - if (timeoutObj != null && !isDone()) { - // Start timeout tracking after 'readyLocks' to avoid race with timeout processing. 
+        // Start timeout tracking after 'readyLocks' to avoid race with timeout processing.
+        if (timeoutObj != null) {
             cctx.time().addTimeoutObject(timeoutObj);
+
+            // Fix race with add/remove timeout object if locks are mapped from another
+            // thread before timeout object is enqueued.
+            if (tx.onePhaseCommit())
+                timeoutAddedLatch.countDown();
         }
 
         mapIfLocked();
     }
 
+    /**
+     * Returns {@code true} if cache validation is needed.
+     *
+     * @param node Originating node.
+     * @return {@code True} if the cache should be validated, {@code false} otherwise.
+     */
+    private boolean needCacheValidation(ClusterNode node) {
+        if (node == null) {
+            // The originating (aka near) node has left the topology
+            // and therefore the cache validation doesn't make sense.
+            return false;
+        }
+
+        return Boolean.TRUE.equals(node.attribute(ATTR_VALIDATE_CACHE_REQUESTS));
+    }
+
     /**
      * Checks if this transaction needs previous value for the given tx entry. Will use passed in map to store
      * required key or will create new map if passed in map is {@code null}.
@@ -1217,8 +1265,6 @@ private IgniteTxOptimisticCheckedException versionCheckError(IgniteTxEntry entry
      *
      */
     private void prepare0() {
-        boolean skipInit = false;
-
         try {
             if (tx.serializable() && tx.optimistic()) {
                 IgniteCheckedException err0;
@@ -1253,29 +1299,6 @@ private void prepare0() {
                 }
             }
 
-            IgniteInternalFuture waitCrdCntrFut = null;
-
-            if (req.requestMvccCounter()) {
-                assert last;
-
-                assert tx.txState().mvccEnabled(cctx);
-
-                try {
-                    // Request snapshot locally only because
-                    // Mvcc Coordinator is expected to be local.
-                    MvccSnapshot snapshot = cctx.coordinators().tryRequestSnapshotLocal(tx);
-
-                    assert snapshot != null : tx.topologyVersion();
-
-                    tx.mvccSnapshot(snapshot);
-                }
-                catch (ClusterTopologyCheckedException e) {
-                    onDone(e);
-
-                    return;
-                }
-            }
-
             onEntriesLocked();
 
             // We are holding transaction-level locks for entries here, so we can get next write version.
@@ -1296,43 +1319,23 @@ private void prepare0() {
                 return;
 
             if (last) {
-                if (waitCrdCntrFut != null) {
-                    skipInit = true;
-
-                    waitCrdCntrFut.listen(new IgniteInClosure>() {
-                        @Override public void apply(IgniteInternalFuture fut) {
-                            try {
-                                fut.get();
-
-                                sendPrepareRequests();
+                recheckOnePhaseCommit();
 
-                                markInitialized();
-                            }
-                            catch (Throwable e) {
-                                U.error(log, "Failed to get mvcc version for tx [txId=" + tx.nearXidVersion() +
-                                    ", err=" + e + ']', e);
-
-                                GridNearTxPrepareResponse res = createPrepareResponse(e);
+                if (tx.onePhaseCommit())
+                    tx.chainState(PREPARED);
 
-                                onDone(res, res.error());
-                            }
-                        }
-                    });
-                }
-                else
-                    sendPrepareRequests();
+                sendPrepareRequests();
             }
         }
         finally {
-            if (!skipInit)
-                markInitialized();
+            markInitialized();
         }
     }
 
     /**
-     *
+     * Checks whether one-phase commit is still applicable for the transaction.
      */
-    private void sendPrepareRequests() {
+    private void recheckOnePhaseCommit() {
         if (tx.onePhaseCommit() && !tx.nearMap().isEmpty()) {
             for (GridDistributedTxMapping nearMapping : tx.nearMap().values()) {
                 if (!tx.dhtMap().containsKey(nearMapping.primary().id())) {
@@ -1342,8 +1345,13 @@ private void sendPrepareRequests() {
                 }
             }
         }
+    }
 
-        assert !tx.txState().mvccEnabled(cctx) || !tx.onePhaseCommit() || tx.mvccSnapshot() != null;
+    /**
+     *
+     */
+    private void sendPrepareRequests() {
+        assert !tx.txState().mvccEnabled() || !tx.onePhaseCommit() || tx.mvccSnapshot() != null;
 
         int miniId = 0;
 
@@ -1398,7 +1406,7 @@ private void sendPrepareRequests() {
                 tx.storeWriteThrough(),
                 retVal,
                 mvccSnapshot,
-                tx.filterUpdateCountersForBackupNode(n));
+                cctx.tm().txHandler().filterUpdateCountersForBackupNode(tx, n));
 
             req.queryUpdate(dhtMapping.queryUpdate());
 
@@ -1918,8 +1926,8 @@ void onResult(GridDhtTxPrepareResponse res) {
                 drType,
                 false)) {
                 if (rec && !entry.isInternal())
-                    cacheCtx.events().addEvent(entry.partition(), entry.key(), cctx.localNodeId(),
-                        (IgniteUuid)null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, info.value(), true, null,
+
cacheCtx.events().addEvent(entry.partition(), entry.key(), cctx.localNodeId(), null, + null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, info.value(), true, null, false, null, null, null, false); if (retVal && !invoke) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareRequest.java index 30e8cebbd3638..5b69f2bd7806a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareRequest.java @@ -90,7 +90,7 @@ public class GridDhtTxPrepareRequest extends GridDistributedTxPrepareRequest { /** */ @GridDirectCollection(PartitionUpdateCountersMessage.class) - private Collection counters; + private Collection updCntrs; /** Near transaction ID. */ private GridCacheVersion nearXidVer; @@ -114,6 +114,10 @@ public class GridDhtTxPrepareRequest extends GridDistributedTxPrepareRequest { /** {@code True} if remote tx should skip adding itself to completed versions map on finish. */ private boolean skipCompletedVers; + /** Transaction label. */ + @GridToStringInclude + @Nullable private String txLbl; + /** * Empty constructor required for {@link Externalizable}. */ @@ -136,7 +140,7 @@ public GridDhtTxPrepareRequest() { * @param storeWriteThrough Cache store write through flag. * @param retVal Need return value flag * @param mvccSnapshot Mvcc snapshot. - * @param counters Update counters for mvcc Tx. + * @param updCntrs Update counters for mvcc Tx. 
*/ public GridDhtTxPrepareRequest( IgniteUuid futId, @@ -156,7 +160,7 @@ public GridDhtTxPrepareRequest( boolean storeWriteThrough, boolean retVal, MvccSnapshot mvccSnapshot, - Collection counters) { + Collection updCntrs) { super(tx, timeout, null, @@ -177,8 +181,9 @@ public GridDhtTxPrepareRequest( this.nearXidVer = nearXidVer; this.subjId = subjId; this.taskNameHash = taskNameHash; + this.txLbl = tx.label(); this.mvccSnapshot = mvccSnapshot; - this.counters = counters; + this.updCntrs = updCntrs; storeWriteThrough(storeWriteThrough); needReturnValue(retVal); @@ -201,7 +206,7 @@ public MvccSnapshot mvccSnapshot() { * @return Update counters list. */ public Collection updateCounters() { - return counters; + return updCntrs; } /** @@ -332,6 +337,13 @@ public boolean skipCompletedVersion() { return skipCompletedVers; } + /** + * @return Transaction label. + */ + @Nullable public String txLabel() { + return txLbl; + } + /** * {@inheritDoc} * @@ -423,92 +435,98 @@ public boolean skipCompletedVersion() { } switch (writer.state()) { - case 20: + case 21: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 21: + case 22: if (!writer.writeBitSet("invalidateNearEntries", invalidateNearEntries)) return false; writer.incrementState(); - case 22: + case 23: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 23: + case 24: + if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) + return false; + + writer.incrementState(); + + case 25: if (!writer.writeUuid("nearNodeId", nearNodeId)) return false; writer.incrementState(); - case 24: + case 26: if (!writer.writeCollection("nearWrites", nearWrites, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 25: + case 27: if (!writer.writeMessage("nearXidVer", nearXidVer)) return false; writer.incrementState(); - case 26: + case 28: if (!writer.writeCollection("ownedKeys", ownedKeys, MessageCollectionItemType.MSG)) return false; 
writer.incrementState(); - case 27: + case 29: if (!writer.writeCollection("ownedVals", ownedVals, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 28: + case 30: if (!writer.writeBitSet("preloadKeys", preloadKeys)) return false; writer.incrementState(); - case 29: + case 31: if (!writer.writeBoolean("skipCompletedVers", skipCompletedVers)) return false; writer.incrementState(); - case 30: + case 32: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 31: + case 33: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 32: - if (!writer.writeMessage("topVer", topVer)) + case 34: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 33: - if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) + case 35: + if (!writer.writeString("txLbl", txLbl)) return false; writer.incrementState(); - case 34: - if (!writer.writeCollection("counters", counters, MessageCollectionItemType.MSG)) + case 36: + if (!writer.writeCollection("updCntrs", updCntrs, MessageCollectionItemType.MSG)) return false; writer.incrementState(); @@ -529,7 +547,7 @@ public boolean skipCompletedVersion() { return false; switch (reader.state()) { - case 20: + case 21: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -537,7 +555,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 21: + case 22: invalidateNearEntries = reader.readBitSet("invalidateNearEntries"); if (!reader.isLastRead()) @@ -545,7 +563,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 22: + case 23: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -553,7 +571,15 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 23: + case 24: + mvccSnapshot = reader.readMessage("mvccSnapshot"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 25: 
nearNodeId = reader.readUuid("nearNodeId"); if (!reader.isLastRead()) @@ -561,7 +587,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 24: + case 26: nearWrites = reader.readCollection("nearWrites", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -569,7 +595,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 25: + case 27: nearXidVer = reader.readMessage("nearXidVer"); if (!reader.isLastRead()) @@ -577,7 +603,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 26: + case 28: ownedKeys = reader.readCollection("ownedKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -585,7 +611,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 27: + case 29: ownedVals = reader.readCollection("ownedVals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -593,7 +619,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 28: + case 30: preloadKeys = reader.readBitSet("preloadKeys"); if (!reader.isLastRead()) @@ -601,7 +627,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 29: + case 31: skipCompletedVers = reader.readBoolean("skipCompletedVers"); if (!reader.isLastRead()) @@ -609,7 +635,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 30: + case 32: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -617,7 +643,7 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 31: + case 33: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -625,24 +651,24 @@ public boolean skipCompletedVersion() { reader.incrementState(); - case 32: - topVer = reader.readMessage("topVer"); + case 34: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 33: - mvccSnapshot = reader.readMessage("mvccSnapshot"); + case 35: + txLbl = reader.readString("txLbl"); if 
(!reader.isLastRead()) return false; reader.incrementState(); - case 34: - counters = reader.readCollection("counters", MessageCollectionItemType.MSG); + case 36: + updCntrs = reader.readCollection("updCntrs", MessageCollectionItemType.MSG); if (!reader.isLastRead()) return false; @@ -661,7 +687,7 @@ public boolean skipCompletedVersion() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 35; + return 37; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareResponse.java index 0c2bf8198bac6..fcb14a34c58e0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareResponse.java @@ -245,31 +245,31 @@ public void addPreloadEntry(GridCacheEntryInfo info) { } switch (writer.state()) { - case 10: + case 11: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeMap("invalidParts", invalidParts, MessageCollectionItemType.INT, MessageCollectionItemType.INT_ARR)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeCollection("nearEvicted", nearEvicted, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeCollection("preloadEntries", preloadEntries, MessageCollectionItemType.MSG)) return false; @@ -291,7 +291,7 @@ public void addPreloadEntry(GridCacheEntryInfo info) { return false; switch (reader.state()) { - case 10: + case 11: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -299,7 +299,7 @@ public void 
addPreloadEntry(GridCacheEntryInfo info) { reader.incrementState(); - case 11: + case 12: invalidParts = reader.readMap("invalidParts", MessageCollectionItemType.INT, MessageCollectionItemType.INT_ARR, false); if (!reader.isLastRead()) @@ -307,7 +307,7 @@ public void addPreloadEntry(GridCacheEntryInfo info) { reader.incrementState(); - case 12: + case 13: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -315,7 +315,7 @@ public void addPreloadEntry(GridCacheEntryInfo info) { reader.incrementState(); - case 13: + case 14: nearEvicted = reader.readCollection("nearEvicted", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -323,7 +323,7 @@ public void addPreloadEntry(GridCacheEntryInfo info) { reader.incrementState(); - case 14: + case 15: preloadEntries = reader.readCollection("preloadEntries", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -343,7 +343,7 @@ public void addPreloadEntry(GridCacheEntryInfo info) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 15; + return 16; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistRequest.java index a1bc26b5ed678..27b7c81aa1479 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistRequest.java @@ -24,6 +24,7 @@ import org.apache.ignite.internal.processors.cache.CacheEntryInfoCollection; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.CacheObjectContext; +import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheEntryInfo; import 
org.apache.ignite.internal.processors.cache.GridCacheIdMessage; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; @@ -173,7 +174,8 @@ public int batchId() { @Override public void prepareMarshal(GridCacheSharedContext ctx) throws IgniteCheckedException { super.prepareMarshal(ctx); - CacheObjectContext objCtx = ctx.cacheContext(cacheId).cacheObjectContext(); + GridCacheContext cctx = ctx.cacheContext(cacheId); + CacheObjectContext objCtx = cctx.cacheObjectContext(); if (keys != null) { for (int i = 0; i < keys.size(); i++) { @@ -193,6 +195,8 @@ else if (val instanceof CacheEntryInfoCollection) { entryVal.prepareMarshal(objCtx); } } + else if (val instanceof GridInvokeValue) + ((GridInvokeValue)val).prepareMarshal(cctx); } } } @@ -221,6 +225,8 @@ else if (val instanceof CacheEntryInfoCollection) { entryVal.finishUnmarshal(objCtx, ldr); } } + else if (val instanceof GridInvokeValue) + ((GridInvokeValue)val).finishUnmarshal(ctx, ldr); } } } @@ -241,43 +247,43 @@ else if (val instanceof CacheEntryInfoCollection) { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeInt("batchId", batchId)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeIgniteUuid("dhtFutId", dhtFutId)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeMessage("lockVer", lockVer)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeInt("mvccOpCnt", mvccOpCnt)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeByte("op", op != null ? 
(byte)op.ordinal() : -1)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeCollection("vals", vals, MessageCollectionItemType.MSG)) return false; @@ -299,7 +305,7 @@ else if (val instanceof CacheEntryInfoCollection) { return false; switch (reader.state()) { - case 3: + case 4: batchId = reader.readInt("batchId"); if (!reader.isLastRead()) @@ -307,7 +313,7 @@ else if (val instanceof CacheEntryInfoCollection) { reader.incrementState(); - case 4: + case 5: dhtFutId = reader.readIgniteUuid("dhtFutId"); if (!reader.isLastRead()) @@ -315,7 +321,7 @@ else if (val instanceof CacheEntryInfoCollection) { reader.incrementState(); - case 5: + case 6: keys = reader.readCollection("keys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -323,7 +329,7 @@ else if (val instanceof CacheEntryInfoCollection) { reader.incrementState(); - case 6: + case 7: lockVer = reader.readMessage("lockVer"); if (!reader.isLastRead()) @@ -331,7 +337,7 @@ else if (val instanceof CacheEntryInfoCollection) { reader.incrementState(); - case 7: + case 8: mvccOpCnt = reader.readInt("mvccOpCnt"); if (!reader.isLastRead()) @@ -339,7 +345,7 @@ else if (val instanceof CacheEntryInfoCollection) { reader.incrementState(); - case 8: + case 9: byte opOrd; opOrd = reader.readByte("op"); @@ -351,7 +357,7 @@ else if (val instanceof CacheEntryInfoCollection) { reader.incrementState(); - case 9: + case 10: vals = reader.readCollection("vals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -366,7 +372,7 @@ else if (val instanceof CacheEntryInfoCollection) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 10; + return 11; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistResponse.java index f3b4aa7c95824..42554362199aa 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryEnlistResponse.java @@ -60,7 +60,7 @@ public GridDhtTxQueryEnlistResponse() { * @param batchId Batch id. * @param err Error. */ - GridDhtTxQueryEnlistResponse(int cacheId, IgniteUuid futId, int batchId, + public GridDhtTxQueryEnlistResponse(int cacheId, IgniteUuid futId, int batchId, Throwable err) { this.cacheId = cacheId; this.futId = futId; @@ -117,7 +117,7 @@ public int batchId() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 6; + return 7; } /** {@inheritDoc} */ @@ -135,19 +135,19 @@ public int batchId() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeInt("batchId", batchId)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeIgniteUuid("futId", futId)) return false; @@ -169,7 +169,7 @@ public int batchId() { return false; switch (reader.state()) { - case 3: + case 4: batchId = reader.readInt("batchId"); if (!reader.isLastRead()) @@ -177,7 +177,7 @@ public int batchId() { reader.incrementState(); - case 4: + case 5: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -185,7 +185,7 @@ public int batchId() { reader.incrementState(); - case 5: + case 6: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryFirstEnlistRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryFirstEnlistRequest.java index 5c1bf6cb3e61f..2220b29720f8a 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryFirstEnlistRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxQueryFirstEnlistRequest.java @@ -208,56 +208,56 @@ public long cleanupVersion() { } switch (writer.state()) { - case 10: + case 11: if (!writer.writeLong("cleanupVer", cleanupVer)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeLong("cntr", cntr)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeLong("crdVer", crdVer)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeUuid("nearNodeId", nearNodeId)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeMessage("nearXidVer", nearXidVer)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 16: + case 17: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 17: + case 18: if (!writer.writeLong("timeout", timeout)) return false; writer.incrementState(); - case 18: - if (!writer.writeMessage("topVer", topVer)) + case 19: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -278,7 +278,7 @@ public long cleanupVersion() { return false; switch (reader.state()) { - case 10: + case 11: cleanupVer = reader.readLong("cleanupVer"); if (!reader.isLastRead()) @@ -286,7 +286,7 @@ public long cleanupVersion() { reader.incrementState(); - case 11: + case 12: cntr = reader.readLong("cntr"); if (!reader.isLastRead()) @@ -294,7 +294,7 @@ public long cleanupVersion() { reader.incrementState(); - case 12: + case 13: crdVer = reader.readLong("crdVer"); if (!reader.isLastRead()) @@ -302,7 +302,7 @@ public long cleanupVersion() { reader.incrementState(); - case 13: + case 14: nearNodeId = 
reader.readUuid("nearNodeId"); if (!reader.isLastRead()) @@ -310,7 +310,7 @@ public long cleanupVersion() { reader.incrementState(); - case 14: + case 15: nearXidVer = reader.readMessage("nearXidVer"); if (!reader.isLastRead()) @@ -318,7 +318,7 @@ public long cleanupVersion() { reader.incrementState(); - case 15: + case 16: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -326,7 +326,7 @@ public long cleanupVersion() { reader.incrementState(); - case 16: + case 17: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -334,7 +334,7 @@ public long cleanupVersion() { reader.incrementState(); - case 17: + case 18: timeout = reader.readLong("timeout"); if (!reader.isLastRead()) @@ -342,8 +342,8 @@ public long cleanupVersion() { reader.incrementState(); - case 18: - topVer = reader.readMessage("topVer"); + case 19: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; @@ -357,7 +357,7 @@ public long cleanupVersion() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 19; + return 20; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxRemote.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxRemote.java index 9883f6dd9b6a6..9518a014b5029 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxRemote.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxRemote.java @@ -21,40 +21,27 @@ import java.util.ArrayList; import java.util.Collection; import java.util.Collections; -import java.util.List; import java.util.Map; import java.util.UUID; import javax.cache.processor.EntryProcessor; import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.cluster.ClusterTopologyException; -import 
org.apache.ignite.internal.pagemem.wal.WALPointer; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.CacheEntryInfoCollection; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.GridCacheContext; -import org.apache.ignite.internal.processors.cache.GridCacheEntryRemovedException; import org.apache.ignite.internal.processors.cache.GridCacheOperation; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; -import org.apache.ignite.internal.processors.cache.GridCacheUpdateTxResult; import org.apache.ignite.internal.processors.cache.KeyCacheObject; import org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; -import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; -import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxRemoteSingleStateImpl; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxRemoteStateImpl; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; -import org.apache.ignite.internal.processors.query.EnlistOperation; -import org.apache.ignite.internal.processors.query.IgniteSQLException; -import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.tostring.GridToStringBuilder; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.internal.U; import 
org.apache.ignite.lang.IgniteUuid; -import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; @@ -107,6 +94,7 @@ public GridDhtTxRemote() { * @param nearXidVer Near transaction ID. * @param txNodes Transaction nodes mapping. * @param storeWriteThrough Cache store write through flag. + * @param txLbl Transaction label. */ public GridDhtTxRemote( GridCacheSharedContext ctx, @@ -128,7 +116,8 @@ public GridDhtTxRemote( @Nullable UUID subjId, int taskNameHash, boolean single, - boolean storeWriteThrough) { + boolean storeWriteThrough, + @Nullable String txLbl) { super( ctx, nodeId, @@ -142,7 +131,8 @@ public GridDhtTxRemote( timeout, txSize, subjId, - taskNameHash + taskNameHash, + txLbl ); assert nearNodeId != null; @@ -182,6 +172,7 @@ public GridDhtTxRemote( * @param timeout Timeout. * @param txSize Expected transaction size. * @param storeWriteThrough Cache store write through flag. + * @param txLbl Transaction label. */ public GridDhtTxRemote( GridCacheSharedContext ctx, @@ -201,7 +192,8 @@ public GridDhtTxRemote( int txSize, @Nullable UUID subjId, int taskNameHash, - boolean storeWriteThrough) { + boolean storeWriteThrough, + @Nullable String txLbl) { super( ctx, nodeId, @@ -215,7 +207,8 @@ public GridDhtTxRemote( timeout, txSize, subjId, - taskNameHash + taskNameHash, + txLbl ); assert nearNodeId != null; @@ -382,126 +375,6 @@ public void addWrite(GridCacheContext cacheCtx, txState.addWriteEntry(key, txEntry); } - /** - * - * @param ctx Cache context. - * @param op Operation. - * @param keys Keys. - * @param vals Values. - * @param snapshot Mvcc snapshot. - * @throws IgniteCheckedException If failed. 
- */ - public void mvccEnlistBatch(GridCacheContext ctx, EnlistOperation op, List keys, - List vals, MvccSnapshot snapshot) throws IgniteCheckedException { - assert keys != null && (vals == null || vals.size() == keys.size()); - - WALPointer ptr = null; - - GridDhtCacheAdapter dht = ctx.dht(); - - addActiveCache(ctx, false); - - for (int i = 0; i < keys.size(); i++) { - KeyCacheObject key = keys.get(i); - - assert key != null; - - int part = ctx.affinity().partition(key); - - GridDhtLocalPartition locPart = ctx.topology().localPartition(part, topologyVersion(), false); - - if (locPart == null || !locPart.reserve()) - throw new ClusterTopologyException("Can not reserve partition. Please retry on stable topology."); - - try { - CacheObject val = null; - - Message val0 = vals != null ? vals.get(i) : null; - - CacheEntryInfoCollection entries = - val0 instanceof CacheEntryInfoCollection ? (CacheEntryInfoCollection)val0 : null; - - if (entries == null && !op.isDeleteOrLock()) - val = (val0 instanceof CacheObject) ? 
(CacheObject)val0 : null; - - GridDhtCacheEntry entry = dht.entryExx(key, topologyVersion()); - - GridCacheUpdateTxResult updRes; - - while (true) { - ctx.shared().database().checkpointReadLock(); - - try { - if (entries == null) { - switch (op) { - case DELETE: - updRes = entry.mvccRemove( - this, - ctx.localNodeId(), - topologyVersion(), - snapshot, - false, - null, - false); - - break; - - case INSERT: - case UPSERT: - case UPDATE: - updRes = entry.mvccSet( - this, - ctx.localNodeId(), - val, - 0, - topologyVersion(), - snapshot, - op.cacheOperation(), - false, - false, - null, - false); - - break; - - default: - throw new IgniteSQLException("Cannot acquire lock for operation [op= " - + op + "]" + "Operation is unsupported at the moment ", - IgniteQueryErrorCode.UNSUPPORTED_OPERATION); - } - } - else { - updRes = entry.mvccUpdateRowsWithPreloadInfo(this, - ctx.localNodeId(), - topologyVersion(), - entries.infos(), - op.cacheOperation(), - snapshot); - } - - break; - } - catch (GridCacheEntryRemovedException ignore) { - entry = dht.entryExx(key); - } - finally { - ctx.shared().database().checkpointReadUnlock(); - } - } - - assert updRes.updateFuture() == null : "Entry should not be locked on the backup"; - - ptr = updRes.loggedPointer(); - } - finally { - locPart.release(); - } - } - - if (ptr != null && !ctx.tm().logTxRecords()) - ctx.shared().wal().flush(ptr, true); - } - /** {@inheritDoc} */ @Override public String toString() { return GridToStringBuilder.toString(GridDhtTxRemote.class, this, "super", super.toString()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtUnlockRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtUnlockRequest.java index 5671d7fd0f044..3bc4de01f6b17 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtUnlockRequest.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtUnlockRequest.java @@ -113,7 +113,7 @@ public void addNearKey(KeyCacheObject key) } switch (writer.state()) { - case 8: + case 9: if (!writer.writeCollection("nearKeys", nearKeys, MessageCollectionItemType.MSG)) return false; @@ -135,7 +135,7 @@ public void addNearKey(KeyCacheObject key) return false; switch (reader.state()) { - case 8: + case 9: nearKeys = reader.readCollection("nearKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -155,6 +155,6 @@ public void addNearKey(KeyCacheObject key) /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 9; + return 10; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridInvokeValue.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridInvokeValue.java new file mode 100644 index 0000000000000..b88df4ec25c45 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridInvokeValue.java @@ -0,0 +1,186 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed.dht; + +import java.nio.ByteBuffer; +import javax.cache.processor.EntryProcessor; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.plugin.extensions.communication.MessageReader; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; + +/** + * + */ +public class GridInvokeValue implements Message { + /** */ + private static final long serialVersionUID = 1L; + + /** Optional arguments for entry processor. */ + @GridDirectTransient + private Object[] invokeArgs; + + /** Entry processor arguments bytes. */ + private byte[] invokeArgsBytes; + + /** Entry processors. */ + @GridDirectTransient + private EntryProcessor entryProcessor; + + /** Entry processors bytes. */ + private byte[] entryProcessorBytes; + + /** + * Constructor. + */ + public GridInvokeValue() { + } + + /** + * Constructor. + * + * @param entryProcessor Entry processor. + * @param invokeArgs Entry processor invoke arguments. + */ + public GridInvokeValue(EntryProcessor entryProcessor, Object[] invokeArgs) { + this.invokeArgs = invokeArgs; + this.entryProcessor = entryProcessor; + } + + /** + * @return Invoke arguments. + */ + public Object[] invokeArgs() { + return invokeArgs; + } + + /** + * @return Entry processor. + */ + public EntryProcessor entryProcessor() { + return entryProcessor; + } + + /** + * Marshalls invoke value. + * + * @param ctx Context. + * @throws IgniteCheckedException If failed. 
+ */ + public void prepareMarshal(GridCacheContext ctx) throws IgniteCheckedException { + if (entryProcessor != null && entryProcessorBytes == null) { + entryProcessorBytes = CU.marshal(ctx, entryProcessor); + } + + if (invokeArgsBytes == null) + invokeArgsBytes = CU.marshal(ctx, invokeArgs); + } + + /** + * Unmarshalls invoke value. + * + * @param ctx Cache context. + * @param ldr Class loader. + * @throws IgniteCheckedException If un-marshalling failed. + */ + public void finishUnmarshal(GridCacheSharedContext ctx, ClassLoader ldr) throws IgniteCheckedException { + if (entryProcessorBytes != null && entryProcessor == null) + entryProcessor = U.unmarshal(ctx, entryProcessorBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); + + if (invokeArgs == null) + invokeArgs = U.unmarshal(ctx, invokeArgsBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); + } + + /** {@inheritDoc} */ + @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) { + writer.setBuffer(buf); + + if (!writer.isHeaderWritten()) { + if (!writer.writeHeader(directType(), fieldsCount())) + return false; + + writer.onHeaderWritten(); + } + + switch (writer.state()) { + case 0: + if (!writer.writeByteArray("entryProcessorBytes", entryProcessorBytes)) + return false; + + writer.incrementState(); + + case 1: + if (!writer.writeByteArray("invokeArgsBytes", invokeArgsBytes)) + return false; + + writer.incrementState(); + + } + + return true; + } + + /** {@inheritDoc} */ + @Override public boolean readFrom(ByteBuffer buf, MessageReader reader) { + reader.setBuffer(buf); + + if (!reader.beforeMessageRead()) + return false; + + switch (reader.state()) { + case 0: + entryProcessorBytes = reader.readByteArray("entryProcessorBytes"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 1: + invokeArgsBytes = reader.readByteArray("invokeArgsBytes"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + } + + return 
reader.afterMessageRead(GridInvokeValue.class); + } + + /** {@inheritDoc} */ + @Override public short directType() { + return 161; + } + + /** {@inheritDoc} */ + @Override public byte fieldsCount() { + return 2; + } + + /** {@inheritDoc} */ + @Override public void onAckReceived() { + // No-op. + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedGetFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedGetFuture.java index 39e0774ea3e8f..22053f38ef77d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedGetFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedGetFuture.java @@ -23,10 +23,9 @@ import java.util.LinkedHashMap; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.UUID; -import java.util.concurrent.atomic.AtomicReference; import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.IgniteLogger; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; @@ -43,7 +42,6 @@ import org.apache.ignite.internal.processors.cache.KeyCacheObject; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetRequest; -import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetResponse; import org.apache.ignite.internal.processors.cache.mvcc.MvccQueryTracker; import org.apache.ignite.internal.processors.cache.mvcc.MvccQueryTrackerImpl; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; @@ -52,17 +50,10 @@ import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import 
org.apache.ignite.internal.util.GridLeanMap; import org.apache.ignite.internal.util.future.GridFinishedFuture; -import org.apache.ignite.internal.util.future.GridFutureAdapter; -import org.apache.ignite.internal.util.tostring.GridToStringInclude; -import org.apache.ignite.internal.util.typedef.C1; -import org.apache.ignite.internal.util.typedef.CI1; -import org.apache.ignite.internal.util.typedef.CIX1; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgniteUuid; import org.jetbrains.annotations.Nullable; @@ -71,14 +62,8 @@ */ public class GridPartitionedGetFuture extends CacheDistributedGetFutureAdapter implements MvccSnapshotResponseListener { - /** */ - private static final long serialVersionUID = 0L; - - /** Logger reference. */ - private static final AtomicReference logRef = new AtomicReference<>(); - - /** Logger. */ - private static IgniteLogger log; + /** Transaction label. */ + private String txLbl; /** */ protected final MvccSnapshot mvccSnapshot; @@ -100,6 +85,7 @@ public class GridPartitionedGetFuture extends CacheDistributedGetFutureAda * @param skipVals Skip values flag. * @param needVer If {@code true} returns values as tuples containing value and version. * @param keepCacheObjects Keep cache objects flag. + * @param txLbl Transaction label. * @param mvccSnapshot Mvcc snapshot. 
*/ public GridPartitionedGetFuture( @@ -115,9 +101,11 @@ public GridPartitionedGetFuture( boolean skipVals, boolean needVer, boolean keepCacheObjects, + @Nullable String txLbl, @Nullable MvccSnapshot mvccSnapshot ) { - super(cctx, + super( + cctx, keys, readThrough, forcePrimary, @@ -128,13 +116,16 @@ public GridPartitionedGetFuture( skipVals, needVer, keepCacheObjects, - recovery); + recovery + ); + assert mvccSnapshot == null || cctx.mvccEnabled(); this.mvccSnapshot = mvccSnapshot; - if (log == null) - log = U.logger(cctx.kernalContext(), logRef, GridPartitionedGetFuture.class); + this.txLbl = txLbl; + + initLogger(GridPartitionedGetFuture.class); } /** @@ -162,14 +153,15 @@ public GridPartitionedGetFuture( public void init(AffinityTopologyVersion topVer) { AffinityTopologyVersion lockedTopVer = cctx.shared().lockedTopologyVersion(null); + // Cannot remap if we are in a transaction and locked on some topology. if (lockedTopVer != null) { topVer = lockedTopVer; canRemap = false; } - else { - topVer = topVer.topologyVersion() > 0 ? topVer : - canRemap ? cctx.affinity().affinityTopologyVersion() : cctx.shared().exchange().readyAffinityVersion(); + else { + // Use affinity topology version if the constructor version is not specified. + topVer = topVer.topologyVersion() > 0 ? topVer : cctx.affinity().affinityTopologyVersion(); } if (!cctx.mvccEnabled() || mvccSnapshot != null) @@ -177,84 +169,33 @@ public void init(AffinityTopologyVersion topVer) { else { mvccTracker = new MvccQueryTrackerImpl(cctx, canRemap); - trackable = true; - - cctx.mvcc().addFuture(this, futId); + registrateFutureInMvccManager(this); mvccTracker.requestSnapshot(topVer, this); } } - @Override public void onResponse(MvccSnapshot res) { - AffinityTopologyVersion topVer = mvccTracker.topologyVersion(); - - assert topVer != null; - - initialMap(topVer); - } - - @Override public void onError(IgniteCheckedException e) { - onDone(e); - } - /** * @param topVer Topology version. 
*/ private void initialMap(AffinityTopologyVersion topVer) { - map(keys, Collections.>emptyMap(), topVer); + map(keys, Collections.emptyMap(), topVer); markInitialized(); } /** {@inheritDoc} */ - @Override public boolean trackable() { - return trackable; - } + @Override public void onResponse(MvccSnapshot res) { + AffinityTopologyVersion topVer = mvccTracker.topologyVersion(); - /** {@inheritDoc} */ - @Override public void markNotTrackable() { - // Should not flip trackable flag from true to false since get future can be remapped. - } + assert topVer != null; - /** {@inheritDoc} */ - @Override public IgniteUuid futureId() { - return futId; + initialMap(topVer); } /** {@inheritDoc} */ - @Override public boolean onNodeLeft(UUID nodeId) { - boolean found = false; - - for (IgniteInternalFuture> fut : futures()) - if (isMini(fut)) { - MiniFuture f = (MiniFuture)fut; - - if (f.node().id().equals(nodeId)) { - found = true; - - f.onNodeLeft(new ClusterTopologyCheckedException("Remote node left grid (will retry): " + nodeId)); - } - } - - return found; - } - - /** - * @param nodeId Sender. - * @param res Result. - */ - @Override public void onResult(UUID nodeId, GridNearGetResponse res) { - for (IgniteInternalFuture> fut : futures()) { - if (isMini(fut)) { - MiniFuture f = (MiniFuture)fut; - - if (f.futureId().equals(res.miniId())) { - assert f.node().id().equals(nodeId); - - f.onResult(res); - } - } - } + @Override public void onError(IgniteCheckedException e) { + onDone(e); } /** {@inheritDoc} */ @@ -263,6 +204,8 @@ private void initialMap(AffinityTopologyVersion topVer) { if (trackable) cctx.mvcc().removeFuture(futId); + MvccQueryTracker mvccTracker = this.mvccTracker; + if (mvccTracker != null) mvccTracker.onDone(); @@ -274,11 +217,8 @@ private void initialMap(AffinityTopologyVersion topVer) { return false; } - /** - * @param f Future. - * @return {@code True} if mini-future. 
- */ - private boolean isMini(IgniteInternalFuture f) { + /** {@inheritDoc} */ + @Override protected boolean isMini(IgniteInternalFuture f) { return f.getClass().equals(MiniFuture.class); } @@ -287,68 +227,60 @@ private boolean isMini(IgniteInternalFuture f) { * @param mapped Mappings to check for duplicates. * @param topVer Topology version on which keys should be mapped. */ - private void map( + @Override protected void map( Collection keys, Map> mapped, - final AffinityTopologyVersion topVer + AffinityTopologyVersion topVer ) { Collection cacheNodes = CU.affinityNodes(cctx, topVer); - if (cacheNodes.isEmpty()) { - onDone(new ClusterTopologyServerNotFoundException("Failed to map keys for cache " + - "(all partition nodes left the grid) [topVer=" + topVer + ", cache=" + cctx.name() + ']')); - - return; - } - - GridDhtTopologyFuture topFut = cctx.shared().exchange().lastFinishedFuture(); - - Throwable err = topFut != null ? topFut.validateCache(cctx, recovery, true, null, keys) : null; - - if (err != null) { - onDone(err); + validate(cacheNodes, topVer); + // Future can be already done with some exception. + if (isDone()) return; - } Map> mappings = U.newHashMap(cacheNodes.size()); - final int keysSize = keys.size(); + int keysSize = keys.size(); + // Map for local (key, value) pairs. Map locVals = U.newHashMap(keysSize); + // True if we have remote nodes after key mapping is complete. boolean hasRmtNodes = false; - // Assign keys to primary nodes. + // Assign keys to nodes. for (KeyCacheObject key : keys) - hasRmtNodes |= map(key, mappings, locVals, topVer, mapped); + hasRmtNodes |= map(key, topVer, mappings, mapped, locVals); + // Future can be already done with some exception. if (isDone()) return; + // Add local reads (key, value) to the result. if (!locVals.isEmpty()) add(new GridFinishedFuture<>(locVals)); - if (hasRmtNodes) { - if (!trackable) { - trackable = true; + // If we have remote nodes in the mapping, we should register the future in the MVCC manager. 
+ if (hasRmtNodes) + registrateFutureInMvccManager(this); - cctx.mvcc().addFuture(this, futId); - } - } - - // Create mini futures. + // Create mini futures after mapping to remote nodes. for (Map.Entry> entry : mappings.entrySet()) { - final ClusterNode n = entry.getKey(); + // Node for request. + ClusterNode n = entry.getKey(); - final LinkedHashMap mappedKeys = entry.getValue(); + // Keys for request. + LinkedHashMap mappedKeys = entry.getValue(); assert !mappedKeys.isEmpty(); // If this is the primary or backup node for the keys. if (n.isLocal()) { - final GridDhtFuture> fut = - cache().getDhtAsync(n.id(), + GridDhtFuture> fut = cache() + .getDhtAsync( + n.id(), -1, mappedKeys, false, @@ -359,67 +291,51 @@ private void map( expiryPlc, skipVals, recovery, - mvccSnapshot()); + txLbl, + mvccSnapshot() + ); - final Collection invalidParts = fut.invalidPartitions(); + Collection invalidParts = fut.invalidPartitions(); if (!F.isEmpty(invalidParts)) { Collection remapKeys = new ArrayList<>(keysSize); for (KeyCacheObject key : keys) { - if (key != null && invalidParts.contains(cctx.affinity().partition(key))) + int part = cctx.affinity().partition(key); + + if (key != null && invalidParts.contains(part)) { + addNodeAsInvalid(n, part, topVer); + remapKeys.add(key); + } } AffinityTopologyVersion updTopVer = cctx.shared().exchange().readyAffinityVersion(); - assert updTopVer.compareTo(topVer) > 0 : "Got invalid partitions for local node but topology version did " + - "not change [topVer=" + topVer + ", updTopVer=" + updTopVer + - ", invalidParts=" + invalidParts + ']'; - // Remap recursively. map(remapKeys, mappings, updTopVer); } // Add new future. 
- add(fut.chain(new C1>, Map>() { - @Override public Map apply(IgniteInternalFuture> fut) { - try { - return createResultMap(fut.get()); - } - catch (Exception e) { - U.error(log, "Failed to get values from dht cache [fut=" + fut + "]", e); + add(fut.chain(f -> { + try { + return createResultMap(f.get()); + } + catch (Exception e) { + U.error(log, "Failed to get values from dht cache [fut=" + fut + "]", e); - onDone(e); + onDone(e); - return Collections.emptyMap(); - } + return Collections.emptyMap(); } })); } else { - MiniFuture fut = new MiniFuture(n, mappedKeys, topVer, - CU.createBackupPostProcessingClosure(topVer, log, cctx, null, expiryPlc, readThrough, skipVals)); - - GridCacheMessage req = new GridNearGetRequest( - cctx.cacheId(), - futId, - fut.futureId(), - null, - mappedKeys, - readThrough, - topVer, - subjId, - taskName == null ? 0 : taskName.hashCode(), - expiryPlc != null ? expiryPlc.forCreate() : -1L, - expiryPlc != null ? expiryPlc.forAccess() : -1L, - false, - skipVals, - cctx.deploymentEnabled(), - recovery, - mvccSnapshot()); + MiniFuture miniFut = new MiniFuture(n, mappedKeys, topVer); + + GridCacheMessage req = miniFut.createGetRequest(futId); - add(fut); // Append new future. + add(miniFut); // Append new future. try { cctx.io().send(n, req, cctx.ioPolicy()); @@ -427,83 +343,119 @@ private void map( catch (IgniteCheckedException e) { // Fail the whole thing. if (e instanceof ClusterTopologyCheckedException) - fut.onNodeLeft((ClusterTopologyCheckedException)e); + miniFut.onNodeLeft((ClusterTopologyCheckedException)e); else - fut.onResult(e); + miniFut.onResult(e); } } } } /** - * @param mappings Mappings. + * @param nodesToKeysMapping Mappings. * @param key Key to map. * @param locVals Local values. * @param topVer Topology version. - * @param mapped Previously mapped. + * @param missedNodesToKeysMapping Previously mapped. * @return {@code True} if has remote nodes. 
 */ @SuppressWarnings("ConstantConditions") private boolean map( KeyCacheObject key, - Map> mappings, - Map locVals, AffinityTopologyVersion topVer, - Map> mapped + Map> nodesToKeysMapping, + Map> missedNodesToKeysMapping, + Map locVals ) { int part = cctx.affinity().partition(key); List affNodes = cctx.affinity().nodesByPartition(part, topVer); + // Fail if no affinity node was found. if (affNodes.isEmpty()) { - onDone(serverNotFoundError(topVer)); + onDone(serverNotFoundError(part, topVer)); return false; } - boolean fastLocGet = (!forcePrimary || affNodes.get(0).isLocal()) && - cctx.reserveForFastLocalGet(part, topVer); + // Try to read the key locally if we can. + if (tryLocalGet(key, part, topVer, affNodes, locVals)) + return false; - if (fastLocGet) { - try { - if (localGet(topVer, key, part, locVals)) - return false; - } - finally { - cctx.releaseForFastLocalGet(part, topVer); - } - } + Set invalidNodeSet = getInvalidNodes(part, topVer); - ClusterNode node = cctx.selectAffinityNodeBalanced(affNodes, canRemap); + // Get the remote node to request this key from. + ClusterNode node = cctx.selectAffinityNodeBalanced(affNodes, invalidNodeSet, part, canRemap); + // Fail if no remote node was found. if (node == null) { - onDone(serverNotFoundError(topVer)); + onDone(serverNotFoundError(part, topVer)); return false; } + // The node can still be local; see the implementation of #tryLocalGet() for details. boolean remote = !node.isLocal(); - LinkedHashMap keys = mapped.get(node); + // Check the retry counter, bounded to avoid infinite remaps. 
+ if (!checkRetryPermits(key, node, missedNodesToKeysMapping)) + return false; - if (keys != null && keys.containsKey(key)) { - if (REMAP_CNT_UPD.incrementAndGet(this) > MAX_REMAP_CNT) { - onDone(new ClusterTopologyCheckedException("Failed to remap key to a new node after " + - MAX_REMAP_CNT + " attempts (key got remapped to the same node) [key=" + key + ", node=" + - U.toShortString(node) + ", mappings=" + mapped + ']')); + addNodeMapping(key, node, nodesToKeysMapping); - return false; - } - } + return remote; + } /** + * + * @param key Key. + * @param node Mapped node. + * @param mappings Full node mapping. + */ + private void addNodeMapping( + KeyCacheObject key, + ClusterNode node, + Map> mappings + ) { LinkedHashMap old = mappings.get(node); if (old == null) mappings.put(node, old = new LinkedHashMap<>(3, 1f)); old.put(key, false); + } - return remote; + /** + * + * @param key Key. + * @param part Partition. + * @param topVer Topology version. + * @param affNodes Affinity nodes. + * @param locVals Map for local (key, value) pairs. + */ + private boolean tryLocalGet( + KeyCacheObject key, + int part, + AffinityTopologyVersion topVer, + List affNodes, + Map locVals + ) { + // Local get cannot be used with MVCC as the local node can contain a visible version which is not the latest one. 
+ boolean fastLocGet = !cctx.mvccEnabled() && + (!forcePrimary || affNodes.get(0).isLocal()) && + cctx.reserveForFastLocalGet(part, topVer); + + if (fastLocGet) { + try { + if (localGet(topVer, key, part, locVals)) + return true; + } + finally { + cctx.releaseForFastLocalGet(part, topVer); + } + } + + return false; } /** @@ -546,6 +498,7 @@ private boolean localGet(AffinityTopologyVersion topVer, KeyCacheObject key, int if (evt) { cctx.events().readEvent(key, null, + txLbl, row.value(), subjId, taskName, @@ -598,7 +551,7 @@ private boolean localGet(AffinityTopologyVersion topVer, KeyCacheObject key, int mvccSnapshot()); } - entry.touch(topVer); + entry.touch(); // Entry was not in memory or in swap, so we remove it from cache. if (v == null) { @@ -651,6 +604,27 @@ private boolean localGet(AffinityTopologyVersion topVer, KeyCacheObject key, int } } + /** + * + * @param cacheNodes Cache affinity nodes. + * @param topVer Topology version. + */ + private void validate(Collection cacheNodes, AffinityTopologyVersion topVer) { + if (cacheNodes.isEmpty()) { + onDone(new ClusterTopologyServerNotFoundException("Failed to map keys for cache " + + "(all partition nodes left the grid) [topVer=" + topVer + ", cache=" + cctx.name() + ']')); + + return; + } + + GridDhtTopologyFuture topFut = cctx.shared().exchange().lastFinishedFuture(); + + Throwable err = topFut != null ? topFut.validateCache(cctx, recovery, true, null, keys) : null; + + if (err != null) + onDone(err); + } + /** * @return Near cache. 
*/ @@ -691,21 +665,7 @@ private Map createResultMap(Collection infos) { /** {@inheritDoc} */ @Override public String toString() { - Collection futs = F.viewReadOnly(futures(), new C1, String>() { - @SuppressWarnings("unchecked") - @Override public String apply(IgniteInternalFuture f) { - if (isMini(f)) { - return "[node=" + ((MiniFuture)f).node().id() + - ", loc=" + ((MiniFuture)f).node().isLocal() + - ", done=" + f.isDone() + "]"; - } - else - return "[loc=true, done=" + f.isDone() + "]"; - } - }); - return S.toString(GridPartitionedGetFuture.class, this, - "innerFuts", futs, "super", super.toString()); } @@ -713,200 +673,46 @@ private Map createResultMap(Collection infos) { * Mini-future for get operations. Mini-futures are only waiting on a single * node as opposed to multiple nodes. */ - private class MiniFuture extends GridFutureAdapter> { - /** */ - private final IgniteUuid futId = IgniteUuid.randomUuid(); - - /** Node ID. */ - private final ClusterNode node; - - /** Keys. */ - @GridToStringInclude - private final LinkedHashMap keys; - - /** Topology version on which this future was mapped. */ - private final AffinityTopologyVersion topVer; - - /** Post processing closure. */ - private final IgniteInClosure> postProcessingClos; - - /** {@code True} if remapped after node left. */ - private boolean remapped; - + private class MiniFuture extends AbstractMiniFuture { /** * @param node Node. * @param keys Keys. * @param topVer Topology version. - * @param postProcessingClos Post processing closure. - */ - MiniFuture(ClusterNode node, LinkedHashMap keys, AffinityTopologyVersion topVer, - @Nullable IgniteInClosure> postProcessingClos) { - this.node = node; - this.keys = keys; - this.topVer = topVer; - this.postProcessingClos = postProcessingClos; - } - - /** - * @return Future ID. - */ - IgniteUuid futureId() { - return futId; - } - - /** - * @return Node ID. - */ - public ClusterNode node() { - return node; - } - - /** - * @return Keys. 
- */ - public Collection keys() { - return keys.keySet(); - } - - /** - * @param e Error. */ - void onResult(Throwable e) { - if (log.isDebugEnabled()) - log.debug("Failed to get future result [fut=" + this + ", err=" + e + ']'); - - // Fail. - onDone(e); + public MiniFuture( + ClusterNode node, + LinkedHashMap keys, + AffinityTopologyVersion topVer + ) { + super(node, keys, topVer); } - /** - * @param e Failure exception. - */ - @SuppressWarnings("UnusedParameters") - synchronized void onNodeLeft(ClusterTopologyCheckedException e) { - if (remapped) - return; - - remapped = true; - - if (log.isDebugEnabled()) - log.debug("Remote node left grid while sending or waiting for reply (will retry): " + this); - - // Try getting from existing nodes. - if (!canRemap) { - map(keys.keySet(), F.t(node, keys), topVer); - - onDone(Collections.emptyMap()); - } - else { - AffinityTopologyVersion updTopVer = - new AffinityTopologyVersion(Math.max(topVer.topologyVersion() + 1, cctx.discovery().topologyVersion())); - - cctx.affinity().affinityReadyFuture(updTopVer).listen( - new CI1>() { - @Override public void apply(IgniteInternalFuture fut) { - try { - // Remap. - map(keys.keySet(), F.t(node, keys), fut.get()); - - onDone(Collections.emptyMap()); - } - catch (IgniteCheckedException e) { - GridPartitionedGetFuture.this.onDone(e); - } - } - } - ); - } - } - - /** - * @param res Result callback. - */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - void onResult(final GridNearGetResponse res) { - final Collection invalidParts = res.invalidPartitions(); - - // If error happened on remote node, fail the whole future. - if (res.error() != null) { - onDone(res.error()); - - return; - } - - // Remap invalid partitions. - if (!F.isEmpty(invalidParts)) { - AffinityTopologyVersion rmtTopVer = res.topologyVersion(); - - assert !rmtTopVer.equals(AffinityTopologyVersion.ZERO); - - if (rmtTopVer.compareTo(topVer) <= 0) { - // Fail the whole get future. 
- onDone(new IgniteCheckedException("Failed to process invalid partitions response (remote node reported " + - "invalid partitions but remote topology version does not differ from local) " + - "[topVer=" + topVer + ", rmtTopVer=" + rmtTopVer + ", invalidParts=" + invalidParts + - ", nodeId=" + node.id() + ']')); - - return; - } - - if (log.isDebugEnabled()) - log.debug("Remapping mini get future [invalidParts=" + invalidParts + ", fut=" + this + ']'); - - if (!canRemap) { - map(F.view(keys.keySet(), new P1() { - @Override public boolean apply(KeyCacheObject key) { - return invalidParts.contains(cctx.affinity().partition(key)); - } - }), F.t(node, keys), topVer); - - postProcessResult(res); - - onDone(createResultMap(res.entries())); - - return; - } - - // Need to wait for next topology version to remap. - IgniteInternalFuture topFut = cctx.affinity().affinityReadyFuture(rmtTopVer); - - topFut.listen(new CIX1>() { - @SuppressWarnings("unchecked") - @Override public void applyx( - IgniteInternalFuture fut) throws IgniteCheckedException { - AffinityTopologyVersion topVer = fut.get(); - - // This will append new futures to compound list. - map(F.view(keys.keySet(), new P1() { - @Override public boolean apply(KeyCacheObject key) { - return invalidParts.contains(cctx.affinity().partition(key)); - } - }), F.t(node, keys), topVer); - - postProcessResult(res); - - onDone(createResultMap(res.entries())); - } - }); - } - else { - try { - postProcessResult(res); - - onDone(createResultMap(res.entries())); - } - catch (Exception e) { - onDone(e); - } - } + /** {@inheritDoc} */ + @Override protected GridNearGetRequest createGetRequest0(IgniteUuid rootFutId, IgniteUuid futId) { + return new GridNearGetRequest( + cctx.cacheId(), + rootFutId, + futId, + null, + keys, + readThrough, + topVer, + subjId, + taskName == null ? 0 : taskName.hashCode(), + expiryPlc != null ? expiryPlc.forCreate() : -1L, + expiryPlc != null ? 
expiryPlc.forAccess() : -1L, + false, + skipVals, + cctx.deploymentEnabled(), + recovery, + txLbl, + mvccSnapshot() + ); } - /** - * @param res Response. - */ - private void postProcessResult(final GridNearGetResponse res) { - if (postProcessingClos != null) - postProcessingClos.apply(res.entries()); + /** {@inheritDoc} */ + @Override protected Map createResultMap(Collection entries) { + return GridPartitionedGetFuture.this.createResultMap(entries); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedSingleGetFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedSingleGetFuture.java index 5d3bef298db68..9924635fb6efe 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedSingleGetFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridPartitionedSingleGetFuture.java @@ -18,8 +18,12 @@ package org.apache.ignite.internal.processors.cache.distributed.dht; import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; import java.util.List; +import java.util.Set; import java.util.UUID; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import java.util.concurrent.atomic.AtomicReference; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; @@ -27,6 +31,7 @@ import org.apache.ignite.internal.IgniteDiagnosticAware; import org.apache.ignite.internal.IgniteDiagnosticPrepareContext; import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.NodeStoppingException; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -51,8 +56,6 @@ 
import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.tostring.GridToStringInclude; -import org.apache.ignite.internal.util.typedef.CI1; -import org.apache.ignite.internal.util.typedef.CIX1; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.S; @@ -61,13 +64,25 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.jetbrains.annotations.Nullable; +import static org.apache.ignite.IgniteSystemProperties.IGNITE_NEAR_GET_MAX_REMAPS; +import static org.apache.ignite.IgniteSystemProperties.getInteger; import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; /** * */ -public class GridPartitionedSingleGetFuture extends GridCacheFutureAdapter implements GridCacheFuture, - CacheGetFuture, IgniteDiagnosticAware { +public class GridPartitionedSingleGetFuture extends GridCacheFutureAdapter + implements CacheGetFuture, IgniteDiagnosticAware { + /** Default max remap count value. */ + public static final int DFLT_MAX_REMAP_CNT = 3; + + /** Maximum number of attempts to remap key to the same primary node. */ + protected static final int MAX_REMAP_CNT = getInteger(IGNITE_NEAR_GET_MAX_REMAPS, DFLT_MAX_REMAP_CNT); + + /** Remap count updater. */ + protected static final AtomicIntegerFieldUpdater REMAP_CNT_UPD = + AtomicIntegerFieldUpdater.newUpdater(GridPartitionedSingleGetFuture.class, "remapCnt"); + /** Logger reference. */ private static final AtomicReference logRef = new AtomicReference<>(); @@ -132,6 +147,15 @@ public class GridPartitionedSingleGetFuture extends GridCacheFutureAdapter invalidNodes = Collections.emptySet(); + + /** Remap count. */ + protected volatile int remapCnt; + + /** Transaction label. 
 */ + private String txLbl; + /** * @param cctx Context. * @param key Key. @@ -145,6 +169,7 @@ public class GridPartitionedSingleGetFuture extends GridCacheFutureAdapter 0 ? this.topVer : - canRemap ? cctx.affinity().affinityTopologyVersion() : cctx.shared().exchange().readyAffinityVersion(); - - GridDhtTopologyFuture topFut = cctx.shared().exchange().lastFinishedFuture(); + AffinityTopologyVersion mappingTopVer; - Throwable err = topFut != null ? topFut.validateCache(cctx, recovery, true, key, null) : null; - - if (err != null) { - onDone(err); - - return; + if (topVer.topologyVersion() > 0) + mappingTopVer = topVer; + else { + mappingTopVer = canRemap ? + cctx.affinity().affinityTopologyVersion() : + cctx.shared().exchange().readyAffinityVersion(); } - map(topVer); + map(mappingTopVer); } /** * @param topVer Topology version. */ @SuppressWarnings("unchecked") - private void map(final AffinityTopologyVersion topVer) { + private void map(AffinityTopologyVersion topVer) { + if (!validate(topVer)) + return; + ClusterNode node = mapKeyToNode(topVer); if (node == null) { @@ -232,45 +259,46 @@ private void map(final AffinityTopologyVersion topVer) { if (isDone()) return; + // Read the value directly if the mapped node is the local node. if (node.isLocal()) { - final GridDhtFuture fut = cctx.dht().getDhtSingleAsync(node.id(), - -1, - key, - false, - readThrough, - topVer, - subjId, - taskName == null ? 0 : taskName.hashCode(), - expiryPlc, - skipVals, - recovery, - mvccSnapshot); - - final Collection invalidParts = fut.invalidPartitions(); + GridDhtFuture fut = cctx.dht() + .getDhtSingleAsync( + node.id(), + -1, + key, + false, + readThrough, + topVer, + subjId, + taskName == null ? 
0 : taskName.hashCode(), + expiryPlc, + skipVals, + recovery, + txLbl, + mvccSnapshot + ); + + Collection invalidParts = fut.invalidPartitions(); if (!F.isEmpty(invalidParts)) { - AffinityTopologyVersion updTopVer = cctx.shared().exchange().readyAffinityVersion(); + addNodeAsInvalid(node); - assert updTopVer.compareTo(topVer) > 0 : "Got invalid partitions for local node but topology " + - "version did not change [topVer=" + topVer + ", updTopVer=" + updTopVer + - ", invalidParts=" + invalidParts + ']'; + AffinityTopologyVersion updTopVer = cctx.shared().exchange().readyAffinityVersion(); // Remap recursively. map(updTopVer); } else { - fut.listen(new CI1>() { - @Override public void apply(IgniteInternalFuture fut) { - try { - GridCacheEntryInfo info = fut.get(); + fut.listen(f -> { + try { + GridCacheEntryInfo info = f.get(); - setResult(info); - } - catch (Exception e) { - U.error(log, "Failed to get values from dht cache [fut=" + fut + "]", e); + setResult(info); + } + catch (Exception e) { + U.error(log, "Failed to get values from dht cache [fut=" + fut + "]", e); - onDone(e); - } + onDone(e); } }); } @@ -283,15 +311,11 @@ private void map(final AffinityTopologyVersion topVer) { this.node = node; } - if (!trackable) { - trackable = true; - - cctx.mvcc().addFuture(this, futId); - } + registrateFutureInMvccManager(this); boolean needVer = this.needVer; - final BackupPostProcessingClosure postClos = CU.createBackupPostProcessingClosure(topVer, log, + BackupPostProcessingClosure postClos = CU.createBackupPostProcessingClosure(topVer, log, cctx, key, expiryPlc, readThrough, skipVals); if (postClos != null) { @@ -301,7 +325,8 @@ private void map(final AffinityTopologyVersion topVer) { postProcessingClos = postClos; } - GridCacheMessage req = new GridNearSingleGetRequest(cctx.cacheId(), + GridCacheMessage req = new GridNearSingleGetRequest( + cctx.cacheId(), futId.localId(), key, readThrough, @@ -315,7 +340,9 @@ private void map(final AffinityTopologyVersion topVer) { 
needVer, cctx.deploymentEnabled(), recovery, - mvccSnapshot); + txLbl, + mvccSnapshot + ); try { cctx.io().send(node, req, cctx.ioPolicy()); @@ -338,34 +365,58 @@ private void map(final AffinityTopologyVersion topVer) { List affNodes = cctx.affinity().nodesByPartition(part, topVer); + // Fail if no affinity node was found by the assignment. if (affNodes.isEmpty()) { - onDone(serverNotFoundError(topVer)); + onDone(serverNotFoundError(part, topVer)); return null; } - boolean fastLocGet = (!forcePrimary || affNodes.get(0).isLocal()) && + // Try to read the key locally if we can. + if (tryLocalGet(key, part, topVer, affNodes)) + return null; + + ClusterNode affNode = cctx.selectAffinityNodeBalanced(affNodes, getInvalidNodes(), part, canRemap); + + // Fail if no balanced node was found. + if (affNode == null) { + onDone(serverNotFoundError(part, topVer)); + + return null; + } + + return affNode; + } + + /** + * + * @param key Key. + * @param part Partition. + * @param topVer Topology version. + * @param affNodes Affinity nodes. + */ + private boolean tryLocalGet( + KeyCacheObject key, + int part, + AffinityTopologyVersion topVer, + List affNodes + ) { + // Local get cannot be used with MVCC as the local node can contain a visible version which is not the latest one. + boolean fastLocGet = !cctx.mvccEnabled() && + (!forcePrimary || affNodes.get(0).isLocal()) && cctx.reserveForFastLocalGet(part, topVer); if (fastLocGet) { try { - if (localGet(topVer, part)) - return null; + if (localGet(topVer, key, part)) + return true; } finally { cctx.releaseForFastLocalGet(part, topVer); } } - ClusterNode affNode = cctx.selectAffinityNodeBalanced(affNodes, canRemap); - - if (affNode == null) { - onDone(serverNotFoundError(topVer)); - - return null; - } - - return affNode; + return false; } /** @@ -373,7 +424,7 @@ private void map(final AffinityTopologyVersion topVer) { * @param part Partition. * @return {@code True} if future completed. 
 */ - private boolean localGet(AffinityTopologyVersion topVer, int part) { + private boolean localGet(AffinityTopologyVersion topVer, KeyCacheObject key, int part) { assert cctx.affinityNode() : this; GridDhtCacheAdapter colocated = cctx.dht(); @@ -405,6 +456,7 @@ private boolean localGet(AffinityTopologyVersion topVer, int part) { if (evt) { cctx.events().readEvent(key, null, + txLbl, row.value(), subjId, taskName, @@ -457,7 +509,7 @@ private boolean localGet(AffinityTopologyVersion topVer, int part) { mvccSnapshot); } - entry.touch(topVer); + entry.touch(); // Entry was not in memory or in swap, so we remove it from cache. if (v == null) { @@ -510,13 +562,29 @@ private boolean localGet(AffinityTopologyVersion topVer, int part) { } } + /** + * @param fut Future. + */ + private void registrateFutureInMvccManager(GridCacheFuture fut) { + if (!trackable) { + trackable = true; + + cctx.mvcc().addFuture(fut, futId); + } + } + /** * @param nodeId Node ID. * @param res Result. */ public void onResult(UUID nodeId, GridNearSingleGetResponse res) { - if (!processResponse(nodeId) || - !checkError(res.error(), res.invalidPartitions(), res.topologyVersion(), nodeId) + // Break here if the response came from an unexpected node. + if (!processResponse(nodeId)) + return; + + // Break here if an exception was thrown on the remote node or + // a partition on the remote node is invalid. + if (!checkError(nodeId, res.invalidPartitions(), res.topologyVersion(), res.error())) return; Message res0 = res.result(); @@ -551,13 +619,15 @@ else if (readThrough && res0 instanceof CacheVersionedValue) { } } - /** - * @param nodeId Node ID. - * @param res Result. - */ + /** {@inheritDoc} */ @Override public void onResult(UUID nodeId, GridNearGetResponse res) { - if (!processResponse(nodeId) || - !checkError(res.error(), !F.isEmpty(res.invalidPartitions()), res.topologyVersion(), nodeId)) + // Break here if the response came from an unexpected node. 
+ if (!processResponse(nodeId)) + return; + + // Break here if an exception was thrown on the remote node or + // a partition on the remote node is invalid. + if (!checkError(nodeId, !F.isEmpty(res.invalidPartitions()), res.topologyVersion(), res.error())) return; Collection infos = res.entries(); @@ -590,10 +660,12 @@ private boolean processResponse(UUID nodeId) { * @param nodeId Node ID. * @return {@code True} if should process received response. */ - private boolean checkError(@Nullable IgniteCheckedException err, + private boolean checkError( + UUID nodeId, boolean invalidParts, AffinityTopologyVersion rmtTopVer, - UUID nodeId) { + @Nullable IgniteCheckedException err + ) { if (err != null) { onDone(err); @@ -601,34 +673,10 @@ private boolean checkError(@Nullable IgniteCheckedException err, } if (invalidParts) { - assert !rmtTopVer.equals(AffinityTopologyVersion.ZERO); - - if (rmtTopVer.compareTo(topVer) <= 0) { - // Fail the whole get future. - onDone(new IgniteCheckedException("Failed to process invalid partitions response (remote node reported " + - "invalid partitions but remote topology version does not differ from local) " + - "[topVer=" + topVer + ", rmtTopVer=" + rmtTopVer + ", part=" + cctx.affinity().partition(key) + - ", nodeId=" + nodeId + ']')); - - return false; - } + addNodeAsInvalid(cctx.node(nodeId)); if (canRemap) { - IgniteInternalFuture topFut = cctx.affinity().affinityReadyFuture(rmtTopVer); - - topFut.listen(new CIX1>() { - @Override public void applyx(IgniteInternalFuture fut) { - try { - AffinityTopologyVersion topVer = fut.get(); - - remap(topVer); - } - catch (IgniteCheckedException e) { - onDone(e); - } - } - }); - + awaitVersionAndRemap(rmtTopVer); } else map(topVer); @@ -717,12 +765,71 @@ private boolean partitionOwned(int part) { } /** + * @param node Invalid node. 
+ */ + private synchronized void addNodeAsInvalid(ClusterNode node) { + if (invalidNodes == Collections.emptySet()) + invalidNodes = new HashSet<>(); + + invalidNodes.add(node); + } + + /** + * @return Set of invalid cluster nodes. + */ + protected synchronized Set getInvalidNodes() { + return invalidNodes; + } + + /** + * @param topVer Topology version. + */ + private boolean checkRetryPermits(AffinityTopologyVersion topVer) { + if (topVer.equals(this.topVer)) + return true; + + if (REMAP_CNT_UPD.incrementAndGet(this) > MAX_REMAP_CNT) { + ClusterNode node0 = node; + + onDone(new ClusterTopologyCheckedException("Failed to remap key to a new node after " + + MAX_REMAP_CNT + " attempts (key got remapped to the same node) [key=" + key + ", node=" + + (node0 != null ? U.toShortString(node0) : node0) + ", invalidNodes=" + invalidNodes + ']')); + + return false; + } + + return true; + } + + /** + * @param part Partition. + * @param topVer Topology version. + * @return Exception. + */ - private ClusterTopologyServerNotFoundException serverNotFoundError(AffinityTopologyVersion topVer) { + private ClusterTopologyServerNotFoundException serverNotFoundError(int part, AffinityTopologyVersion topVer) { return new ClusterTopologyServerNotFoundException("Failed to map keys for cache " + - "(all partition nodes left the grid) [topVer=" + topVer + ", cache=" + cctx.name() + ']'); + "(all partition nodes left the grid) [topVer=" + topVer + ", part=" + part + ", cache=" + cctx.name() + ']'); + } + + /** + * @param topVer Topology version. + * @return {@code True} if validation succeeded, {@code false} otherwise. 
+ */ + private boolean validate(AffinityTopologyVersion topVer) { + if (!checkRetryPermits(topVer)) + return false; + + GridDhtTopologyFuture lastFut = cctx.shared().exchange().lastFinishedFuture(); + + Throwable error = lastFut.validateCache(cctx, recovery, true, key, null); + + if (error != null) { + onDone(error); + + return false; + } + else + return true; } /** {@inheritDoc} */ @@ -736,20 +843,9 @@ private ClusterTopologyServerNotFoundException serverNotFoundError(AffinityTopol return false; if (canRemap) { - AffinityTopologyVersion updTopVer = new AffinityTopologyVersion( - Math.max(topVer.topologyVersion() + 1, cctx.discovery().topologyVersion())); - - cctx.affinity().affinityReadyFuture(updTopVer).listen( - new CI1>() { - @Override public void apply(IgniteInternalFuture fut) { - try { - remap(fut.get()); - } - catch (IgniteCheckedException e) { - onDone(e); - } - } - }); + long maxTopVer = Math.max(topVer.topologyVersion() + 1, cctx.discovery().topologyVersion()); + + awaitVersionAndRemap(new AffinityTopologyVersion(maxTopVer)); } else remap(topVer); @@ -757,12 +853,34 @@ private ClusterTopologyServerNotFoundException serverNotFoundError(AffinityTopol return true; } + /** + * @param topVer Topology version. + */ + private void awaitVersionAndRemap(AffinityTopologyVersion topVer){ + IgniteInternalFuture awaitTopologyVersionFuture = + cctx.shared().exchange().affinityReadyFuture(topVer); + + awaitTopologyVersionFuture.listen(f -> { + try { + remap(f.get()); + } + catch (IgniteCheckedException e) { + onDone(e); + } + }); + } + /** * @param topVer Topology version. */ private void remap(final AffinityTopologyVersion topVer) { cctx.closures().runLocalSafe(new Runnable() { @Override public void run() { + // If topology changed reset collection of invalid nodes. 
+ synchronized (this) { + invalidNodes = Collections.emptySet(); + } + map(topVer); } }); @@ -775,7 +893,8 @@ private void remap(final AffinityTopologyVersion topVer) { if (trackable) cctx.mvcc().removeFuture(futId); - cctx.dht().sendTtlUpdateRequest(expiryPlc); + if (!(err instanceof NodeStoppingException)) + cctx.dht().sendTtlUpdateRequest(expiryPlc); return true; } @@ -815,6 +934,6 @@ private void remap(final AffinityTopologyVersion topVer) { /** {@inheritDoc} */ @Override public String toString() { - return S.toString(GridPartitionedSingleGetFuture.class, this); + return S.toString(GridPartitionedSingleGetFuture.class, this, "super", super.toString()); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/NearTxQueryEnlistResultHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/NearTxQueryEnlistResultHandler.java index 8043defc88b34..20583770882dd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/NearTxQueryEnlistResultHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/NearTxQueryEnlistResultHandler.java @@ -25,7 +25,6 @@ import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxQueryResultsEnlistResponse; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; -import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.lang.GridClosureException; import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.internal.util.typedef.internal.CU; @@ -126,7 +125,8 @@ else if (clazz == GridDhtTxQueryEnlistFuture.class) GridNearTxQueryEnlistResponse res = createResponse(fut); if (res.removeMapping()) { - // TODO IGNITE-9133 + tx.forceSkipCompletedVersions(); + tx.rollbackDhtLocalAsync().listen(new 
CI1>() { @Override public void apply(IgniteInternalFuture fut0) { try { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/PartitionUpdateCountersMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/PartitionUpdateCountersMessage.java index 6ee0294d196dd..f30d51d9016d6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/PartitionUpdateCountersMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/PartitionUpdateCountersMessage.java @@ -20,6 +20,7 @@ import java.nio.ByteBuffer; import java.util.Arrays; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageReader; @@ -28,6 +29,7 @@ /** * Partition update counters message. 
*/ +@IgniteCodeGeneratingFail public class PartitionUpdateCountersMessage implements Message { /** */ private static final int ITEM_SIZE = 4 /* partition */ + 8 /* initial counter */ + 8 /* updates count */; @@ -228,7 +230,7 @@ private void ensureSpace(int newSize) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 1; + return 2; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicAbstractUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicAbstractUpdateRequest.java index a5e9feb1255a4..0096f012d0fe5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicAbstractUpdateRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicAbstractUpdateRequest.java @@ -484,7 +484,7 @@ final boolean isFlag(int mask) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 12; + return 13; } /** {@inheritDoc} */ @@ -502,55 +502,55 @@ final boolean isFlag(int mask) { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeLong("nearFutId", nearFutId)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeUuid("nearNodeId", nearNodeId)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeByte("syncMode", syncMode != null ? 
(byte)syncMode.ordinal() : -1)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 10: - if (!writer.writeMessage("topVer", topVer)) + case 11: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeMessage("writeVer", writeVer)) return false; @@ -572,7 +572,7 @@ final boolean isFlag(int mask) { return false; switch (reader.state()) { - case 3: + case 4: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -580,7 +580,7 @@ final boolean isFlag(int mask) { reader.incrementState(); - case 4: + case 5: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -588,7 +588,7 @@ final boolean isFlag(int mask) { reader.incrementState(); - case 5: + case 6: nearFutId = reader.readLong("nearFutId"); if (!reader.isLastRead()) @@ -596,7 +596,7 @@ final boolean isFlag(int mask) { reader.incrementState(); - case 6: + case 7: nearNodeId = reader.readUuid("nearNodeId"); if (!reader.isLastRead()) @@ -604,7 +604,7 @@ final boolean isFlag(int mask) { reader.incrementState(); - case 7: + case 8: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -612,7 +612,7 @@ final boolean isFlag(int mask) { reader.incrementState(); - case 8: + case 9: byte syncModeOrd; syncModeOrd = reader.readByte("syncMode"); @@ -624,7 +624,7 @@ final boolean isFlag(int mask) { reader.incrementState(); - case 9: + case 10: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -632,15 +632,15 @@ final boolean isFlag(int mask) { reader.incrementState(); - case 10: - topVer = reader.readMessage("topVer"); + case 11: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 11: + case 12: writeVer = reader.readMessage("writeVer"); if (!reader.isLastRead()) diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java index 8edefa23188e4..d55da8ad9b215 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java @@ -31,9 +31,11 @@ import javax.cache.expiry.ExpiryPolicy; import javax.cache.processor.EntryProcessor; import javax.cache.processor.EntryProcessorResult; +import org.apache.ignite.IgniteCacheRestartingException; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.binary.BinaryInvalidTypeException; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.NodeStoppingException; @@ -41,7 +43,6 @@ import org.apache.ignite.internal.UnregisteredClassException; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.mem.IgniteOutOfMemoryException; -import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.processors.affinity.AffinityAssignment; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheEntryPredicate; @@ -68,12 +69,14 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtFuture; -import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedGetFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysRequest; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysResponse; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearAtomicCache; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetRequest; @@ -83,6 +86,7 @@ import org.apache.ignite.internal.processors.cache.dr.GridCacheDrExpirationInfo; import org.apache.ignite.internal.processors.cache.dr.GridCacheDrInfo; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalEx; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionConflictContext; @@ -119,6 +123,7 @@ import static org.apache.ignite.IgniteSystemProperties.IGNITE_ATOMIC_DEFERRED_ACK_TIMEOUT; import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_ASYNC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC; +import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_VALIDATE_CACHE_REQUESTS; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.DELETE; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.TRANSFORM; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.UPDATE; @@ -1423,6 +1428,7 @@ private IgniteInternalFuture getAsync0(KeyCacheObject key, needVer, false, recovery, + null, null); fut.init(); @@ -1493,6 +1499,7 @@ private IgniteInternalFuture> getAllAsync0(@Nullable Collection> getAllAsync0(@Nullable Collection affFut = + ctx.shared().exchange().affinityReadyFuture(req.topologyVersion()); + + if (affFut.isDone()) { + List futs = + ctx.shared().exchange().exchangeFutures(); + + boolean found = false; + + for (int i = 0; i < futs.size(); ++i) { + GridDhtPartitionsExchangeFuture fut = futs.get(i); + + // We have to check fut.exchangeDone() here - + // otherwise an attempt to get topVer will throw an error. + // We won't skip the needed future, because the affinity ready future is done. 
+ if (fut.exchangeDone() && + fut.topologyVersion().equals(req.topologyVersion())) { + topFut = fut; + + found = true; + + break; + } + } + + assert found: "The requested topology future cannot be found [topVer=" + + req.topologyVersion() + ']'; + } + else { + affFut.listen(f -> updateAllAsyncInternal0(node, req, completionCb)); + + return; + } + + assert req.topologyVersion().equals(topFut.topologyVersion()) : + "The requested topology version cannot be found [" + + "reqTopFut=" + req.topologyVersion() + + ", topFut=" + topFut + ']'; + } + + assert topFut.isDone() : topFut; + + Throwable err = topFut.validateCache(ctx, req.recovery(), false, null, null); + + if (err != null) { + IgniteCheckedException e = new IgniteCheckedException(err); + + res.error(e); + + completionCb.apply(req, res); + + return; + } + } + update(node, locked, req, res, updDhtRes); dhtFut = updDhtRes.dhtFuture(); @@ -1793,7 +1869,8 @@ private void updateAllAsyncInternal0( // This call will convert entry processor invocation results to cache object instances. // Must be done outside topology read lock to avoid deadlocks. - res.returnValue().marshalResult(ctx); + if (res.returnValue() != null) + res.returnValue().marshalResult(ctx); break; } @@ -2724,6 +2801,8 @@ else if (GridDhtCacheEntry.ReaderId.contains(readers, nearNode.id())) { final GridDhtAtomicAbstractUpdateFuture dhtFut = dhtUpdRes.dhtFuture(); + Collection failedToUnwrapKeys = null; + // Avoid iterator creation. 
for (int i = 0; i < entries.size(); i++) { GridDhtCacheEntry entry = entries.get(i); @@ -2736,9 +2815,26 @@ else if (GridDhtCacheEntry.ReaderId.contains(readers, nearNode.id())) { continue; } - if (storeErr != null && - storeErr.failedKeys().contains(entry.key().value(ctx.cacheObjectContext(), false))) - continue; + if (storeErr != null) { + Object key = entry.key(); + + try { + key = entry.key().value(ctx.cacheObjectContext(), false); + } + catch (BinaryInvalidTypeException e) { + if (log.isDebugEnabled()) { + if (failedToUnwrapKeys == null) + failedToUnwrapKeys = new ArrayList<>(); + + // To limit the number of keys in the log message. + if (failedToUnwrapKeys.size() < 5) + failedToUnwrapKeys.add(key); + } + } + + if (storeErr.failedKeys().contains(key)) + continue; + } try { // We are holding java-level locks on entries at this point. @@ -2867,6 +2963,10 @@ else if (GridDhtCacheEntry.ReaderId.contains(readers, nearNode.id())) { dhtUpdRes.processedEntriesCount(firstEntryIdx + i + 1); } + if (failedToUnwrapKeys != null) { + log.warning("Failed to get values of keys: " + failedToUnwrapKeys + + " (the binary objects will be used instead)."); + } } catch (IgniteCheckedException e) { res.addFailedKeys(putMap != null ? putMap.keySet() : rmvKeys, e); @@ -3005,7 +3105,7 @@ private void unlockEntries(List locked, AffinityTopologyVersi for (int i = 0; i < size; i++) { GridCacheMapEntry entry = locked.get(i); if (entry != null && (skip == null || !skip.contains(entry.key()))) - entry.touch(topVer); + entry.touch(); } } @@ -3285,7 +3385,7 @@ && writeThrough() && !req.skipStore(), } finally { if (entry != null) - entry.touch(req.topologyVersion()); + entry.touch(); } } } @@ -3613,6 +3713,17 @@ private void sendNearUpdateReply(UUID nodeId, GridNearAtomicUpdateResponse res) } } + /** + * Returns {@code true} if cache validation is needed. + * + * @return {@code True} if the cache should be validated, {@code false} otherwise. 
+ */ + private boolean needCacheValidation(ClusterNode node) { + assert node != null: "The near node must not be null. This is guaranteed by processNearAtomicUpdateRequest()"; + + return Boolean.TRUE.equals(node.attribute(ATTR_VALIDATE_CACHE_REQUESTS)); + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(GridDhtAtomicCache.class, this, super.toString()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicDeferredUpdateResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicDeferredUpdateResponse.java index 0c069da80082e..ee5eac15a6c45 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicDeferredUpdateResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicDeferredUpdateResponse.java @@ -119,7 +119,7 @@ GridLongList futureIds() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeMessage("futIds", futIds)) return false; @@ -141,7 +141,7 @@ GridLongList futureIds() { return false; switch (reader.state()) { - case 3: + case 4: futIds = reader.readMessage("futIds"); if (!reader.isLastRead()) @@ -161,7 +161,7 @@ GridLongList futureIds() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 4; + return 5; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicNearResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicNearResponse.java index 71d23216f0e2b..8f11ead53adff 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicNearResponse.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicNearResponse.java @@ -171,7 +171,7 @@ public long futureId() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 8; + return 9; } /** {@inheritDoc} */ @@ -210,31 +210,31 @@ public long futureId() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeMessage("errs", errs)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeInt("partId", partId)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeUuid("primaryId", primaryId)) return false; @@ -256,7 +256,7 @@ public long futureId() { return false; switch (reader.state()) { - case 3: + case 4: errs = reader.readMessage("errs"); if (!reader.isLastRead()) @@ -264,7 +264,7 @@ public long futureId() { reader.incrementState(); - case 4: + case 5: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -272,7 +272,7 @@ public long futureId() { reader.incrementState(); - case 5: + case 6: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -280,7 +280,7 @@ public long futureId() { reader.incrementState(); - case 6: + case 7: partId = reader.readInt("partId"); if (!reader.isLastRead()) @@ -288,7 +288,7 @@ public long futureId() { reader.incrementState(); - case 7: + case 8: primaryId = reader.readUuid("primaryId"); if (!reader.isLastRead()) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicSingleUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicSingleUpdateRequest.java index 19b24b052d795..16be80e2e5a71 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicSingleUpdateRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicSingleUpdateRequest.java @@ -373,25 +373,25 @@ private void near(boolean near) { } switch (writer.state()) { - case 12: + case 13: if (!writer.writeMessage("key", key)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeMessage("prevVal", prevVal)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeLong("updateCntr", updateCntr)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeMessage("val", val)) return false; @@ -413,7 +413,7 @@ private void near(boolean near) { return false; switch (reader.state()) { - case 12: + case 13: key = reader.readMessage("key"); if (!reader.isLastRead()) @@ -421,7 +421,7 @@ private void near(boolean near) { reader.incrementState(); - case 13: + case 14: prevVal = reader.readMessage("prevVal"); if (!reader.isLastRead()) @@ -429,7 +429,7 @@ private void near(boolean near) { reader.incrementState(); - case 14: + case 15: updateCntr = reader.readLong("updateCntr"); if (!reader.isLastRead()) @@ -437,7 +437,7 @@ private void near(boolean near) { reader.incrementState(); - case 15: + case 16: val = reader.readMessage("val"); if (!reader.isLastRead()) @@ -487,7 +487,7 @@ private void finishUnmarshalObject(@Nullable CacheObject obj, GridCacheContext c /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 16; + return 17; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateRequest.java index 30be9dcc30c22..67281f756267a 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateRequest.java @@ -558,97 +558,97 @@ else if (conflictVers != null) } switch (writer.state()) { - case 12: + case 13: if (!writer.writeMessage("conflictExpireTimes", conflictExpireTimes)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeCollection("conflictVers", conflictVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeCollection("entryProcessorsBytes", entryProcessorsBytes, MessageCollectionItemType.BYTE_ARR)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeBoolean("forceTransformBackups", forceTransformBackups)) return false; writer.incrementState(); - case 16: + case 17: if (!writer.writeObjectArray("invokeArgsBytes", invokeArgsBytes, MessageCollectionItemType.BYTE_ARR)) return false; writer.incrementState(); - case 17: + case 18: if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 18: + case 19: if (!writer.writeCollection("nearEntryProcessorsBytes", nearEntryProcessorsBytes, MessageCollectionItemType.BYTE_ARR)) return false; writer.incrementState(); - case 19: + case 20: if (!writer.writeMessage("nearExpireTimes", nearExpireTimes)) return false; writer.incrementState(); - case 20: + case 21: if (!writer.writeCollection("nearKeys", nearKeys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 21: + case 22: if (!writer.writeMessage("nearTtls", nearTtls)) return false; writer.incrementState(); - case 22: + case 23: if (!writer.writeCollection("nearVals", nearVals, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 23: + case 24: if (!writer.writeMessage("obsoleteIndexes", 
obsoleteIndexes)) return false; writer.incrementState(); - case 24: + case 25: if (!writer.writeCollection("prevVals", prevVals, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 25: + case 26: if (!writer.writeMessage("ttls", ttls)) return false; writer.incrementState(); - case 26: + case 27: if (!writer.writeMessage("updateCntrs", updateCntrs)) return false; writer.incrementState(); - case 27: + case 28: if (!writer.writeCollection("vals", vals, MessageCollectionItemType.MSG)) return false; @@ -670,7 +670,7 @@ else if (conflictVers != null) return false; switch (reader.state()) { - case 12: + case 13: conflictExpireTimes = reader.readMessage("conflictExpireTimes"); if (!reader.isLastRead()) @@ -678,7 +678,7 @@ else if (conflictVers != null) reader.incrementState(); - case 13: + case 14: conflictVers = reader.readCollection("conflictVers", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -686,7 +686,7 @@ else if (conflictVers != null) reader.incrementState(); - case 14: + case 15: entryProcessorsBytes = reader.readCollection("entryProcessorsBytes", MessageCollectionItemType.BYTE_ARR); if (!reader.isLastRead()) @@ -694,7 +694,7 @@ else if (conflictVers != null) reader.incrementState(); - case 15: + case 16: forceTransformBackups = reader.readBoolean("forceTransformBackups"); if (!reader.isLastRead()) @@ -702,7 +702,7 @@ else if (conflictVers != null) reader.incrementState(); - case 16: + case 17: invokeArgsBytes = reader.readObjectArray("invokeArgsBytes", MessageCollectionItemType.BYTE_ARR, byte[].class); if (!reader.isLastRead()) @@ -710,7 +710,7 @@ else if (conflictVers != null) reader.incrementState(); - case 17: + case 18: keys = reader.readCollection("keys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -718,7 +718,7 @@ else if (conflictVers != null) reader.incrementState(); - case 18: + case 19: nearEntryProcessorsBytes = reader.readCollection("nearEntryProcessorsBytes", 
MessageCollectionItemType.BYTE_ARR); if (!reader.isLastRead()) @@ -726,7 +726,7 @@ else if (conflictVers != null) reader.incrementState(); - case 19: + case 20: nearExpireTimes = reader.readMessage("nearExpireTimes"); if (!reader.isLastRead()) @@ -734,7 +734,7 @@ else if (conflictVers != null) reader.incrementState(); - case 20: + case 21: nearKeys = reader.readCollection("nearKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -742,7 +742,7 @@ else if (conflictVers != null) reader.incrementState(); - case 21: + case 22: nearTtls = reader.readMessage("nearTtls"); if (!reader.isLastRead()) @@ -750,7 +750,7 @@ else if (conflictVers != null) reader.incrementState(); - case 22: + case 23: nearVals = reader.readCollection("nearVals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -758,7 +758,7 @@ else if (conflictVers != null) reader.incrementState(); - case 23: + case 24: obsoleteIndexes = reader.readMessage("obsoleteIndexes"); if (!reader.isLastRead()) @@ -766,7 +766,7 @@ else if (conflictVers != null) reader.incrementState(); - case 24: + case 25: prevVals = reader.readCollection("prevVals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -774,7 +774,7 @@ else if (conflictVers != null) reader.incrementState(); - case 25: + case 26: ttls = reader.readMessage("ttls"); if (!reader.isLastRead()) @@ -782,7 +782,7 @@ else if (conflictVers != null) reader.incrementState(); - case 26: + case 27: updateCntrs = reader.readMessage("updateCntrs"); if (!reader.isLastRead()) @@ -790,7 +790,7 @@ else if (conflictVers != null) reader.incrementState(); - case 27: + case 28: vals = reader.readCollection("vals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -816,7 +816,7 @@ else if (conflictVers != null) /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 28; + return 29; } /** {@inheritDoc} */ diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateResponse.java index 70bf6f5648b41..21efbb1350b3d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicUpdateResponse.java @@ -179,25 +179,25 @@ public void nearEvicted(List nearEvicted) { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeMessage("errs", errs)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeCollection("nearEvicted", nearEvicted, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeInt("partId", partId)) return false; @@ -219,7 +219,7 @@ public void nearEvicted(List nearEvicted) { return false; switch (reader.state()) { - case 3: + case 4: errs = reader.readMessage("errs"); if (!reader.isLastRead()) @@ -227,7 +227,7 @@ public void nearEvicted(List nearEvicted) { reader.incrementState(); - case 4: + case 5: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -235,7 +235,7 @@ public void nearEvicted(List nearEvicted) { reader.incrementState(); - case 5: + case 6: nearEvicted = reader.readCollection("nearEvicted", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -243,7 +243,7 @@ public void nearEvicted(List nearEvicted) { reader.incrementState(); - case 6: + case 7: partId = reader.readInt("partId"); if (!reader.isLastRead()) @@ -263,7 +263,7 @@ public void nearEvicted(List nearEvicted) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 7; + return 8; } /** {@inheritDoc} */ diff 
--git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateFuture.java index 983b18ac38ba0..f4c9b5cf22645 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateFuture.java @@ -30,6 +30,7 @@ import javax.cache.expiry.ExpiryPolicy; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.binary.BinaryInvalidTypeException; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; @@ -400,10 +401,46 @@ final void onPrimaryError(GridNearAtomicAbstractUpdateRequest req, GridNearAtomi Collection keys = new ArrayList<>(keys0.size()); - for (KeyCacheObject key : keys0) - keys.add(cctx.cacheObjectContext().unwrapBinaryIfNeeded(key, keepBinary, false)); + Collection failedToUnwrapKeys = null; - err.add(keys, res.error(), req.topologyVersion()); + Exception suppressedErr = null; + + for (KeyCacheObject key : keys0) { + try { + keys.add(cctx.cacheObjectContext().unwrapBinaryIfNeeded(key, keepBinary, false)); + } + catch (BinaryInvalidTypeException e) { + keys.add(cctx.toCacheKeyObject(key)); + + if (log.isDebugEnabled()) { + if (failedToUnwrapKeys == null) + failedToUnwrapKeys = new ArrayList<>(); + + // To limit keys count in log message. 
+ if (failedToUnwrapKeys.size() < 5) + failedToUnwrapKeys.add(key); + } + + suppressedErr = e; + } + catch (Exception e) { + keys.add(cctx.toCacheKeyObject(key)); + + suppressedErr = e; + } + } + + if (failedToUnwrapKeys != null) { + log.warning("Failed to unwrap keys: " + failedToUnwrapKeys + + " (the binary objects will be used instead)."); + } + + IgniteCheckedException error = res.error(); + + if (suppressedErr != null) + error.addSuppressed(suppressedErr); + + err.add(keys, error, req.topologyVersion()); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateRequest.java index 62618f81e6af5..f0d89bf952a09 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicAbstractUpdateRequest.java @@ -396,14 +396,14 @@ public void keepBinary(boolean val) { } /** - * @return Keep binary flag. + * @return Recovery flag. */ public final boolean recovery() { return isFlag(RECOVERY_FLAG_MASK); } /** - * @param val Keep binary flag. + * @param val Recovery flag. */ public void recovery(boolean val) { setFlag(val, RECOVERY_FLAG_MASK); @@ -528,7 +528,7 @@ abstract void addUpdateEntry(KeyCacheObject key, /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 10; + return 11; } /** {@inheritDoc} */ @@ -546,44 +546,44 @@ abstract void addUpdateEntry(KeyCacheObject key, } switch (writer.state()) { - case 3: + case 4: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeByte("op", op != null ? 
(byte)op.ordinal() : -1)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeByte("syncMode", syncMode != null ? (byte)syncMode.ordinal() : -1)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 9: - if (!writer.writeMessage("topVer", topVer)) + case 10: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -604,7 +604,7 @@ abstract void addUpdateEntry(KeyCacheObject key, return false; switch (reader.state()) { - case 3: + case 4: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -612,7 +612,7 @@ abstract void addUpdateEntry(KeyCacheObject key, reader.incrementState(); - case 4: + case 5: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -620,7 +620,7 @@ abstract void addUpdateEntry(KeyCacheObject key, reader.incrementState(); - case 5: + case 6: byte opOrd; opOrd = reader.readByte("op"); @@ -632,7 +632,7 @@ abstract void addUpdateEntry(KeyCacheObject key, reader.incrementState(); - case 6: + case 7: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -640,7 +640,7 @@ abstract void addUpdateEntry(KeyCacheObject key, reader.incrementState(); - case 7: + case 8: byte syncModeOrd; syncModeOrd = reader.readByte("syncMode"); @@ -652,7 +652,7 @@ abstract void addUpdateEntry(KeyCacheObject key, reader.incrementState(); - case 8: + case 9: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -660,8 +660,8 @@ abstract void addUpdateEntry(KeyCacheObject key, reader.incrementState(); - case 9: - topVer = reader.readMessage("topVer"); + case 10: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicCheckUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicCheckUpdateRequest.java index 96be0233c308a..a19e28029b89f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicCheckUpdateRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicCheckUpdateRequest.java @@ -101,7 +101,7 @@ GridNearAtomicAbstractUpdateRequest updateRequest() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 5; + return 6; } /** {@inheritDoc} */ @@ -119,13 +119,13 @@ GridNearAtomicAbstractUpdateRequest updateRequest() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeInt("partId", partId)) return false; @@ -147,7 +147,7 @@ GridNearAtomicAbstractUpdateRequest updateRequest() { return false; switch (reader.state()) { - case 3: + case 4: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -155,7 +155,7 @@ GridNearAtomicAbstractUpdateRequest updateRequest() { reader.incrementState(); - case 4: + case 5: partId = reader.readInt("partId"); if (!reader.isLastRead()) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicFullUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicFullUpdateRequest.java index d6956a64dae69..170586b22d589 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicFullUpdateRequest.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicFullUpdateRequest.java
@@ -435,55 +435,55 @@ else if (conflictVers != null)
         }
 
         switch (writer.state()) {
-            case 10:
+            case 11:
                 if (!writer.writeMessage("conflictExpireTimes", conflictExpireTimes))
                     return false;
 
                 writer.incrementState();
 
-            case 11:
+            case 12:
                 if (!writer.writeMessage("conflictTtls", conflictTtls))
                     return false;
 
                 writer.incrementState();
 
-            case 12:
+            case 13:
                 if (!writer.writeCollection("conflictVers", conflictVers, MessageCollectionItemType.MSG))
                     return false;
 
                 writer.incrementState();
 
-            case 13:
+            case 14:
                 if (!writer.writeCollection("entryProcessorsBytes", entryProcessorsBytes, MessageCollectionItemType.BYTE_ARR))
                     return false;
 
                 writer.incrementState();
 
-            case 14:
+            case 15:
                 if (!writer.writeByteArray("expiryPlcBytes", expiryPlcBytes))
                     return false;
 
                 writer.incrementState();
 
-            case 15:
+            case 16:
                 if (!writer.writeObjectArray("filter", filter, MessageCollectionItemType.MSG))
                     return false;
 
                 writer.incrementState();
 
-            case 16:
+            case 17:
                 if (!writer.writeObjectArray("invokeArgsBytes", invokeArgsBytes, MessageCollectionItemType.BYTE_ARR))
                     return false;
 
                 writer.incrementState();
 
-            case 17:
+            case 18:
                 if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG))
                     return false;
 
                 writer.incrementState();
 
-            case 18:
+            case 19:
                 if (!writer.writeCollection("vals", vals, MessageCollectionItemType.MSG))
                     return false;
 
@@ -505,7 +505,7 @@ else if (conflictVers != null)
             return false;
 
         switch (reader.state()) {
-            case 10:
+            case 11:
                 conflictExpireTimes = reader.readMessage("conflictExpireTimes");
 
                 if (!reader.isLastRead())
@@ -513,7 +513,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 11:
+            case 12:
                 conflictTtls = reader.readMessage("conflictTtls");
 
                 if (!reader.isLastRead())
@@ -521,7 +521,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 12:
+            case 13:
                 conflictVers = reader.readCollection("conflictVers", MessageCollectionItemType.MSG);
 
                 if (!reader.isLastRead())
@@ -529,7 +529,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 13:
+            case 14:
                 entryProcessorsBytes = reader.readCollection("entryProcessorsBytes", MessageCollectionItemType.BYTE_ARR);
 
                 if (!reader.isLastRead())
@@ -537,7 +537,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 14:
+            case 15:
                 expiryPlcBytes = reader.readByteArray("expiryPlcBytes");
 
                 if (!reader.isLastRead())
@@ -545,7 +545,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 15:
+            case 16:
                 filter = reader.readObjectArray("filter", MessageCollectionItemType.MSG, CacheEntryPredicate.class);
 
                 if (!reader.isLastRead())
@@ -553,7 +553,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 16:
+            case 17:
                 invokeArgsBytes = reader.readObjectArray("invokeArgsBytes", MessageCollectionItemType.BYTE_ARR, byte[].class);
 
                 if (!reader.isLastRead())
@@ -561,7 +561,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 17:
+            case 18:
                 keys = reader.readCollection("keys", MessageCollectionItemType.MSG);
 
                 if (!reader.isLastRead())
@@ -569,7 +569,7 @@ else if (conflictVers != null)
 
                 reader.incrementState();
 
-            case 18:
+            case 19:
                 vals = reader.readCollection("vals", MessageCollectionItemType.MSG);
 
                 if (!reader.isLastRead())
@@ -601,7 +601,7 @@ else if (conflictVers != null)
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 19;
+        return 20;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFilterRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFilterRequest.java
index 5c66bc46894b2..c7076988a659f 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFilterRequest.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFilterRequest.java
@@ -155,7 +155,7 @@ public GridNearAtomicSingleUpdateFilterRequest() {
         }
 
         switch (writer.state()) {
-            case 12:
+            case 13:
                 if (!writer.writeObjectArray("filter", filter, MessageCollectionItemType.MSG))
                     return false;
 
@@ -177,7 +177,7 @@ public GridNearAtomicSingleUpdateFilterRequest() {
             return false;
 
         switch (reader.state()) {
-            case 12:
+            case 13:
                 filter = reader.readObjectArray("filter", MessageCollectionItemType.MSG, CacheEntryPredicate.class);
 
                 if (!reader.isLastRead())
@@ -197,7 +197,7 @@ public GridNearAtomicSingleUpdateFilterRequest() {
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 13;
+        return 14;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFuture.java
index 82a796492d3b4..4e87df50921f4 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFuture.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateFuture.java
@@ -23,6 +23,7 @@
 import java.util.UUID;
 import javax.cache.expiry.ExpiryPolicy;
 import javax.cache.processor.EntryProcessor;
+import org.apache.ignite.IgniteCacheRestartingException;
 import org.apache.ignite.cache.CacheWriteSynchronizationMode;
 import org.apache.ignite.cluster.ClusterNode;
 import org.apache.ignite.internal.IgniteInternalFuture;
@@ -357,7 +358,7 @@ private void waitAndRemap(AffinityTopologyVersion remapTopVer) {
             ClusterTopologyCheckedException cause = new ClusterTopologyCheckedException(
                 "Failed to update keys, topology changed while execute atomic update inside transaction.");
 
-            cause.retryReadyFuture(cctx.affinity().affinityReadyFuture(remapTopVer));
+            cause.retryReadyFuture(cctx.shared().exchange().affinityReadyFuture(remapTopVer));
 
             e.add(Collections.singleton(cctx.toCacheKeyObject(key)), cause);
 
@@ -404,8 +405,11 @@ private void updateNear(GridNearAtomicAbstractUpdateRequest req, GridNearAtomicU
         AffinityTopologyVersion topVer;
 
         if (cache.topology().stopping()) {
-            completeFuture(null,new CacheStoppedException(
-                cache.name()),
+            completeFuture(
+                null,
+                cctx.shared().cache().isCacheRestarting(cache.name())?
+                    new IgniteCacheRestartingException(cache.name()):
+                    new CacheStoppedException(cache.name()),
                 null);
 
             return;
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateInvokeRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateInvokeRequest.java
index 865d6f8664e9e..ee3d2a4fe036a 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateInvokeRequest.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateInvokeRequest.java
@@ -225,13 +225,13 @@ public GridNearAtomicSingleUpdateInvokeRequest() {
         }
 
         switch (writer.state()) {
-            case 12:
+            case 13:
                 if (!writer.writeByteArray("entryProcessorBytes", entryProcessorBytes))
                     return false;
 
                 writer.incrementState();
 
-            case 13:
+            case 14:
                 if (!writer.writeObjectArray("invokeArgsBytes", invokeArgsBytes, MessageCollectionItemType.BYTE_ARR))
                     return false;
 
@@ -253,7 +253,7 @@ public GridNearAtomicSingleUpdateInvokeRequest() {
             return false;
 
         switch (reader.state()) {
-            case 12:
+            case 13:
                 entryProcessorBytes = reader.readByteArray("entryProcessorBytes");
 
                 if (!reader.isLastRead())
@@ -261,7 +261,7 @@ public GridNearAtomicSingleUpdateInvokeRequest() {
 
                 reader.incrementState();
 
-            case 13:
+            case 14:
                invokeArgsBytes = reader.readObjectArray("invokeArgsBytes", MessageCollectionItemType.BYTE_ARR, byte[].class);
 
                 if (!reader.isLastRead())
@@ -276,7 +276,7 @@ public GridNearAtomicSingleUpdateInvokeRequest() {
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 14;
+        return 15;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateRequest.java
index dd3a7bedb7de9..83ec4565f49ce 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateRequest.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicSingleUpdateRequest.java
@@ -247,13 +247,13 @@ public GridNearAtomicSingleUpdateRequest() {
         }
 
         switch (writer.state()) {
-            case 10:
+            case 11:
                 if (!writer.writeMessage("key", key))
                     return false;
 
                 writer.incrementState();
 
-            case 11:
+            case 12:
                 if (!writer.writeMessage("val", val))
                     return false;
 
@@ -275,7 +275,7 @@ public GridNearAtomicSingleUpdateRequest() {
             return false;
 
         switch (reader.state()) {
-            case 10:
+            case 11:
                 key = reader.readMessage("key");
 
                 if (!reader.isLastRead())
@@ -283,7 +283,7 @@ public GridNearAtomicSingleUpdateRequest() {
 
                 reader.incrementState();
 
-            case 11:
+            case 12:
                 val = reader.readMessage("val");
 
                 if (!reader.isLastRead())
@@ -311,7 +311,7 @@ public GridNearAtomicSingleUpdateRequest() {
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 12;
+        return 13;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateFuture.java
index fd6b63e62b84e..4ccea7bfb66e6 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateFuture.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateFuture.java
@@ -26,6 +26,7 @@
 import java.util.UUID;
 import javax.cache.expiry.ExpiryPolicy;
 import javax.cache.processor.EntryProcessor;
+import org.apache.ignite.IgniteCacheRestartingException;
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.cache.CacheWriteSynchronizationMode;
 import org.apache.ignite.cluster.ClusterNode;
@@ -485,7 +486,7 @@ private void waitAndRemap(AffinityTopologyVersion remapTopVer) {
             ClusterTopologyCheckedException cause = new ClusterTopologyCheckedException(
                 "Failed to update keys, topology changed while execute atomic update inside transaction.");
 
-            cause.retryReadyFuture(cctx.affinity().affinityReadyFuture(remapTopVer));
+            cause.retryReadyFuture(cctx.shared().exchange().affinityReadyFuture(remapTopVer));
 
             e.add(remapKeys, cause);
 
@@ -627,7 +628,12 @@ private void updateNear(GridNearAtomicAbstractUpdateRequest req, GridNearAtomicU
         AffinityTopologyVersion topVer;
 
         if (cache.topology().stopping()) {
-            completeFuture(null,new CacheStoppedException(cache.name()), null);
+            completeFuture(
+                null,
+                cctx.shared().cache().isCacheRestarting(cache.name())?
+                    new IgniteCacheRestartingException(cache.name()):
+                    new CacheStoppedException(cache.name()),
+                null);
 
             return;
         }
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateResponse.java
index 37fe824a39f99..6dccd8b9cdc9b 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateResponse.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridNearAtomicUpdateResponse.java
@@ -405,43 +405,43 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
         }
 
         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeMessage("errs", errs))
                     return false;
 
                 writer.incrementState();
 
-            case 4:
+            case 5:
                 if (!writer.writeLong("futId", futId))
                     return false;
 
                 writer.incrementState();
 
-            case 5:
+            case 6:
                 if (!writer.writeCollection("mapping", mapping, MessageCollectionItemType.UUID))
                     return false;
 
                 writer.incrementState();
 
-            case 6:
+            case 7:
                 if (!writer.writeMessage("nearUpdates", nearUpdates))
                     return false;
 
                 writer.incrementState();
 
-            case 7:
+            case 8:
                 if (!writer.writeInt("partId", partId))
                     return false;
 
                 writer.incrementState();
 
-            case 8:
-                if (!writer.writeMessage("remapTopVer", remapTopVer))
+            case 9:
+                if (!writer.writeAffinityTopologyVersion("remapTopVer", remapTopVer))
                     return false;
 
                 writer.incrementState();
 
-            case 9:
+            case 10:
                 if (!writer.writeMessage("ret", ret))
                     return false;
 
@@ -463,7 +463,7 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
             return false;
 
         switch (reader.state()) {
-            case 3:
+            case 4:
                 errs = reader.readMessage("errs");
 
                 if (!reader.isLastRead())
@@ -471,7 +471,7 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
 
                 reader.incrementState();
 
-            case 4:
+            case 5:
                 futId = reader.readLong("futId");
 
                 if (!reader.isLastRead())
@@ -479,7 +479,7 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
 
                 reader.incrementState();
 
-            case 5:
+            case 6:
                 mapping = reader.readCollection("mapping", MessageCollectionItemType.UUID);
 
                 if (!reader.isLastRead())
@@ -487,7 +487,7 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
 
                 reader.incrementState();
 
-            case 6:
+            case 7:
                 nearUpdates = reader.readMessage("nearUpdates");
 
                 if (!reader.isLastRead())
@@ -495,7 +495,7 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
 
                 reader.incrementState();
 
-            case 7:
+            case 8:
                 partId = reader.readInt("partId");
 
                 if (!reader.isLastRead())
@@ -503,15 +503,15 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
 
                 reader.incrementState();
 
-            case 8:
-                remapTopVer = reader.readMessage("remapTopVer");
+            case 9:
+                remapTopVer = reader.readAffinityTopologyVersion("remapTopVer");
 
                 if (!reader.isLastRead())
                     return false;
 
                 reader.incrementState();
 
-            case 9:
+            case 10:
                 ret = reader.readMessage("ret");
 
                 if (!reader.isLastRead())
@@ -531,7 +531,7 @@ synchronized void addFailedKeys(Collection keys, Throwable e) {
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 10;
+        return 11;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedCache.java
index a83a93fa11dcc..9eb5417e01bf9 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedCache.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedCache.java
@@ -189,7 +189,7 @@ public GridDistributedCacheEntry entryExx(
         if (keyCheck)
             validateCacheKey(key);
 
-        GridNearTxLocal tx = ctx.mvccEnabled() ? MvccUtils.tx(ctx.kernalContext()) : ctx.tm().threadLocalTx(ctx);
+        GridNearTxLocal tx = checkCurrentTx();
 
         final CacheOperationContext opCtx = ctx.operationContextPerCall();
 
@@ -267,6 +267,7 @@ public GridDistributedCacheEntry entryExx(
             needVer,
             /*keepCacheObjects*/false,
             opCtx != null && opCtx.recovery(),
+            null,
             mvccSnapshot);
 
         fut.init();
@@ -305,7 +306,7 @@ public GridDistributedCacheEntry entryExx(
         if (keyCheck)
             validateCacheKeys(keys);
 
-        GridNearTxLocal tx = (ctx.mvccEnabled()) ? MvccUtils.tx(ctx.kernalContext()) : ctx.tm().threadLocalTx(ctx);
+        GridNearTxLocal tx = checkCurrentTx();
 
         final CacheOperationContext opCtx = ctx.operationContextPerCall();
 
@@ -345,7 +346,7 @@ public GridDistributedCacheEntry entryExx(
                     assert mvccSnapshot != null;
                 }
                 catch (IgniteCheckedException ex) {
-                    return new GridFinishedFuture(ex);
+                    return new GridFinishedFuture<>(ex);
                 }
             }
 
@@ -364,6 +365,7 @@ public GridDistributedCacheEntry entryExx(
             skipVals,
             needVer,
             false,
+            null,
             mvccSnapshot);
 
         if(mvccTracker != null){
@@ -393,6 +395,7 @@ public GridDistributedCacheEntry entryExx(
      * @param skipVals Skip values flag.
      * @param needVer If {@code true} returns values as tuples containing value and version.
      * @param keepCacheObj Keep cache objects flag.
+     * @param txLbl Transaction label.
      * @return Load future.
      */
     public final IgniteInternalFuture loadAsync(
@@ -408,6 +411,7 @@ public final IgniteInternalFuture loadAsync(
         boolean needVer,
         boolean keepCacheObj,
         boolean recovery,
+        @Nullable String txLbl,
         @Nullable MvccSnapshot mvccSnapshot
     ) {
         GridPartitionedSingleGetFuture fut = new GridPartitionedSingleGetFuture(ctx,
@@ -423,6 +427,7 @@ public final IgniteInternalFuture loadAsync(
             needVer,
             keepCacheObj,
             recovery,
+            txLbl,
             mvccSnapshot);
 
         fut.init();
@@ -442,6 +447,7 @@ public final IgniteInternalFuture loadAsync(
      * @param skipVals Skip values flag.
      * @param needVer If {@code true} returns values as tuples containing value and version.
      * @param keepCacheObj Keep cache objects flag.
+     * @param txLbl Transaction label.
      * @param mvccSnapshot Mvcc snapshot.
      * @return Load future.
      */
@@ -458,6 +464,7 @@ public final IgniteInternalFuture> loadAsync(
         boolean skipVals,
         boolean needVer,
         boolean keepCacheObj,
+        @Nullable String txLbl,
         @Nullable MvccSnapshot mvccSnapshot
     ) {
         assert mvccSnapshot == null || ctx.mvccEnabled();
@@ -468,8 +475,9 @@ public final IgniteInternalFuture> loadAsync(
         if (expiryPlc == null)
             expiryPlc = expiryPolicy(null);
 
-        // Optimization: try to resolve value locally and escape 'get future' creation.
-        if (!forcePrimary && ctx.affinityNode() && (!ctx.mvccEnabled() || mvccSnapshot != null)) {
+        // Optimization: try to resolve value locally and escape 'get future' creation. Not applicable for MVCC,
+        // because the local node may contain a visible version which is not the most recent one.
+        if (!ctx.mvccEnabled() && !forcePrimary && ctx.affinityNode()) {
             try {
                 Map locVals = null;
 
@@ -506,6 +514,7 @@ public final IgniteInternalFuture> loadAsync(
                             if (evt) {
                                 ctx.events().readEvent(key,
                                     null,
+                                    txLbl,
                                     row.value(),
                                     subjId,
                                     taskName,
@@ -609,7 +618,7 @@ public final IgniteInternalFuture> loadAsync(
             }
             finally {
                 if (entry != null)
-                    entry.touch(topVer);
+                    entry.touch();
             }
         }
     }
@@ -648,6 +657,7 @@ else if (!skipVals && ctx.statisticsEnabled())
             skipVals,
             needVer,
             keepCacheObj,
+            txLbl,
             mvccSnapshot);
 
         fut.init(topVer);
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedLockFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedLockFuture.java
index 9dbb8be974433..ecb9d6e01764e 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedLockFuture.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/colocated/GridDhtColocatedLockFuture.java
@@ -29,6 +29,7 @@
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
 import java.util.concurrent.atomic.AtomicReference;
+import org.apache.ignite.IgniteCacheRestartingException;
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.IgniteLogger;
 import org.apache.ignite.cluster.ClusterNode;
@@ -778,7 +779,18 @@ void map() {
         if (topVer != null) {
             for (GridDhtTopologyFuture fut : cctx.shared().exchange().exchangeFutures()) {
                 if (fut.exchangeDone() && fut.topologyVersion().equals(topVer)) {
-                    Throwable err = fut.validateCache(cctx, recovery, read, null, keys);
+                    Throwable err = null;
+
+                    // Before cache validation, make sure that this topology future is already completed.
+                    try {
+                        fut.get();
+                    }
+                    catch (IgniteCheckedException e) {
+                        err = fut.error();
+                    }
+
+                    if (err == null)
+                        err = fut.validateCache(cctx, recovery, read, null, keys);
 
                     if (err != null) {
                         onDone(err);
@@ -820,7 +832,10 @@ private void mapOnTopology(final boolean remap, @Nullable final Runnable c) {
         try {
             if (cctx.topology().stopping()) {
-                onDone(new CacheStoppedException(cctx.name()));
+                onDone(
+                    cctx.shared().cache().isCacheRestarting(cctx.name())?
+                        new IgniteCacheRestartingException(cctx.name()):
+                        new CacheStoppedException(cctx.name()));
 
                 return;
             }
@@ -1073,7 +1088,8 @@ private synchronized void map0(
                 keepBinary,
                 clientFirst,
                 false,
-                cctx.deploymentEnabled());
+                cctx.deploymentEnabled(),
+                inTx() ? tx.label() : null);
 
             mapping.request(req);
         }
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CacheGroupAffinityMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CacheGroupAffinityMessage.java
index 7da4051e27929..695eadca581a3 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CacheGroupAffinityMessage.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CacheGroupAffinityMessage.java
@@ -192,16 +192,10 @@ public static List toNodes(GridLongList assign, Map
+                ClusterNode affNode = nodesByOrder.computeIfAbsent(order, o -> discoCache.serverNodeByOrder(order));
 
-                if (affNode == null) {
-                    affNode = discoCache.serverNodeByOrder(order);
-
-                    assert affNode != null : "Failed to find node by order [order=" + order +
-                        ", topVer=" + discoCache.version() + ']';
-
-                    nodesByOrder.put(order, affNode);
-                }
+                assert affNode != null : "Failed to find node by order [order=" + order +
+                    ", topVer=" + discoCache.version() + ']';
 
                 assign0.add(affNode);
             }
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionFullCountersMap.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionFullCountersMap.java
index 2d5eec3878cf5..008c2766e4a47 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionFullCountersMap.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionFullCountersMap.java
@@ -93,27 +93,6 @@ public void updateCounter(int p, long updCntr) {
         updCntrs[p] = updCntr;
     }
 
-    /**
-     * Creates submap for provided partition IDs.
-     *
-     * @param parts Partition IDs.
-     * @return Partial counters map.
-     */
-    public CachePartitionPartialCountersMap subMap(Set parts) {
-        CachePartitionPartialCountersMap res = new CachePartitionPartialCountersMap(parts.size());
-
-        for (int p = 0; p < updCntrs.length; p++) {
-            if (!parts.contains(p))
-                continue;
-
-            res.add(p, initialUpdCntrs[p], updCntrs[p]);
-        }
-
-        assert res.size() == parts.size();
-
-        return res;
-    }
-
     /**
      * Clears full counters map.
      */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionPartialCountersMap.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionPartialCountersMap.java
index 9fc7f941a2188..986a100222ac6 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionPartialCountersMap.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/CachePartitionPartialCountersMap.java
@@ -34,7 +34,7 @@ public class CachePartitionPartialCountersMap implements Serializable {
     private static final long serialVersionUID = 0L;
 
     /** */
-    static final IgniteProductVersion PARTIAL_COUNTERS_MAP_SINCE = IgniteProductVersion.fromString("2.1.4");
+    public static final IgniteProductVersion PARTIAL_COUNTERS_MAP_SINCE = IgniteProductVersion.fromString("2.1.4");
 
     /** */
     public static final CachePartitionPartialCountersMap EMPTY = new CachePartitionPartialCountersMap();
@@ -53,7 +53,7 @@ public class CachePartitionPartialCountersMap implements Serializable {
 
     /** */
     private CachePartitionPartialCountersMap() {
-        // Empty map.
+        this(0);
     }
 
     /**
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysFuture.java
index b37acf3af8c4d..323fe7505e984 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysFuture.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysFuture.java
@@ -552,8 +552,8 @@ void onResult(GridDhtForceKeysResponse res) {
                         false
                     )) {
                         if (rec && !entry.isInternal())
-                            cctx.events().addEvent(entry.partition(), entry.key(), cctx.localNodeId(),
-                                (IgniteUuid)null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, info.value(), true, null,
+                            cctx.events().addEvent(entry.partition(), entry.key(), cctx.localNodeId(), null,
+                                null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, info.value(), true, null,
                                 false, null, null, null, false);
                     }
                 }
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysRequest.java
index 124ae44ca5b30..80c45efc76805 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysRequest.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysRequest.java
@@ -167,26 +167,26 @@ private int keyCount() {
         }
 
         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeIgniteUuid("futId", futId))
                     return false;
 
                 writer.incrementState();
 
-            case 4:
+            case 5:
                 if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG))
                     return false;
 
                 writer.incrementState();
 
-            case 5:
+            case 6:
                 if (!writer.writeIgniteUuid("miniId", miniId))
                     return false;
 
                 writer.incrementState();
 
-            case 6:
-                if (!writer.writeMessage("topVer", topVer))
+            case 7:
+                if (!writer.writeAffinityTopologyVersion("topVer", topVer))
                     return false;
 
                 writer.incrementState();
 
@@ -207,7 +207,7 @@ private int keyCount() {
             return false;
 
         switch (reader.state()) {
-            case 3:
+            case 4:
                 futId = reader.readIgniteUuid("futId");
 
                 if (!reader.isLastRead())
@@ -215,7 +215,7 @@ private int keyCount() {
 
                 reader.incrementState();
 
-            case 4:
+            case 5:
                 keys = reader.readCollection("keys", MessageCollectionItemType.MSG);
 
                 if (!reader.isLastRead())
@@ -223,7 +223,7 @@ private int keyCount() {
 
                 reader.incrementState();
 
-            case 5:
+            case 6:
                 miniId = reader.readIgniteUuid("miniId");
 
                 if (!reader.isLastRead())
@@ -231,8 +231,8 @@ private int keyCount() {
 
                 reader.incrementState();
 
-            case 6:
-                topVer = reader.readMessage("topVer");
+            case 7:
+                topVer = reader.readAffinityTopologyVersion("topVer");
 
                 if (!reader.isLastRead())
                     return false;
 
@@ -251,7 +251,7 @@ private int keyCount() {
 
     /** {@inheritDoc} */
    @Override public byte fieldsCount() {
-        return 7;
+        return 8;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysResponse.java
index 977e9ba41eef2..ab85df3e94622 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysResponse.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtForceKeysResponse.java
@@ -213,31 +213,31 @@ public void addInfo(GridCacheEntryInfo info) {
         }
 
         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeByteArray("errBytes", errBytes))
                     return false;
 
                 writer.incrementState();
 
-            case 4:
+            case 5:
                 if (!writer.writeIgniteUuid("futId", futId))
                     return false;
 
                 writer.incrementState();
 
-            case 5:
+            case 6:
                 if (!writer.writeCollection("infos", infos, MessageCollectionItemType.MSG))
                     return false;
 
                 writer.incrementState();
 
-            case 6:
+            case 7:
                 if (!writer.writeIgniteUuid("miniId", miniId))
                     return false;
 
                 writer.incrementState();
 
-            case 7:
+            case 8:
                 if (!writer.writeCollection("missedKeys", missedKeys, MessageCollectionItemType.MSG))
                     return false;
 
@@ -259,7 +259,7 @@ public void addInfo(GridCacheEntryInfo info) {
             return false;
 
         switch (reader.state()) {
-            case 3:
+            case 4:
                 errBytes = reader.readByteArray("errBytes");
 
                 if (!reader.isLastRead())
@@ -267,7 +267,7 @@ public void addInfo(GridCacheEntryInfo info) {
 
                 reader.incrementState();
 
-            case 4:
+            case 5:
                 futId = reader.readIgniteUuid("futId");
 
                 if (!reader.isLastRead())
@@ -275,7 +275,7 @@ public void addInfo(GridCacheEntryInfo info) {
 
                 reader.incrementState();
 
-            case 5:
+            case 6:
                 infos = reader.readCollection("infos", MessageCollectionItemType.MSG);
 
                 if (!reader.isLastRead())
@@ -283,7 +283,7 @@ public void addInfo(GridCacheEntryInfo info) {
 
                 reader.incrementState();
 
-            case 6:
+            case 7:
                 miniId = reader.readIgniteUuid("miniId");
 
                 if (!reader.isLastRead())
@@ -291,7 +291,7 @@ public void addInfo(GridCacheEntryInfo info) {
 
                 reader.incrementState();
 
-            case 7:
+            case 8:
                 missedKeys = reader.readCollection("missedKeys", MessageCollectionItemType.MSG);
 
                 if (!reader.isLastRead())
@@ -311,7 +311,7 @@ public void addInfo(GridCacheEntryInfo info) {
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 8;
+        return 9;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandLegacyMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandLegacyMessage.java
index 46e9ceb004830..cd7741b554fa7 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandLegacyMessage.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandLegacyMessage.java
@@ -285,49 +285,49 @@ Long partitionCounter(int part) {
         }
 
         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeCollection("historicalParts", historicalParts, MessageCollectionItemType.INT))
                     return false;
 
                 writer.incrementState();
 
-            case 4:
+            case 5:
                 if (!writer.writeCollection("parts", parts, MessageCollectionItemType.INT))
                     return false;
 
                 writer.incrementState();
 
-            case 5:
+            case 6:
                 if (!writer.writeMap("partsCntrs", partsCntrs, MessageCollectionItemType.INT, MessageCollectionItemType.LONG))
                     return false;
 
                 writer.incrementState();
 
-            case 6:
+            case 7:
                 if (!writer.writeLong("timeout", timeout))
                     return false;
 
                 writer.incrementState();
 
-            case 7:
-                if (!writer.writeMessage("topVer", topVer))
+            case 8:
+                if (!writer.writeAffinityTopologyVersion("topVer", topVer))
                     return false;
 
                 writer.incrementState();
 
-            case 8:
+            case 9:
                 if (!writer.writeByteArray("topicBytes", topicBytes))
                     return false;
 
                 writer.incrementState();
 
-            case 9:
+            case 10:
                 if (!writer.writeLong("updateSeq", updateSeq))
                     return false;
 
                 writer.incrementState();
 
-            case 10:
+            case 11:
                 if (!writer.writeInt("workerId", workerId))
                     return false;
 
@@ -349,7 +349,7 @@ Long partitionCounter(int part) {
             return false;
 
         switch (reader.state()) {
-            case 3:
+            case 4:
                 historicalParts = reader.readCollection("historicalParts", MessageCollectionItemType.INT);
 
                 if (!reader.isLastRead())
@@ -357,7 +357,7 @@ Long partitionCounter(int part) {
 
                 reader.incrementState();
 
-            case 4:
+            case 5:
                 parts = reader.readCollection("parts", MessageCollectionItemType.INT);
 
                 if (!reader.isLastRead())
@@ -365,7 +365,7 @@ Long partitionCounter(int part) {
 
                 reader.incrementState();
 
-            case 5:
+            case 6:
                 partsCntrs = reader.readMap("partsCntrs", MessageCollectionItemType.INT, MessageCollectionItemType.LONG, false);
 
                 if (!reader.isLastRead())
@@ -373,7 +373,7 @@ Long partitionCounter(int part) {
 
                 reader.incrementState();
 
-            case 6:
+            case 7:
                 timeout = reader.readLong("timeout");
 
                 if (!reader.isLastRead())
@@ -381,15 +381,15 @@ Long partitionCounter(int part) {
 
                 reader.incrementState();
 
-            case 7:
-                topVer = reader.readMessage("topVer");
+            case 8:
+                topVer = reader.readAffinityTopologyVersion("topVer");
 
                 if (!reader.isLastRead())
                     return false;
 
                 reader.incrementState();
 
-            case 8:
+            case 9:
                 topicBytes = reader.readByteArray("topicBytes");
 
                 if (!reader.isLastRead())
@@ -397,7 +397,7 @@ Long partitionCounter(int part) {
 
                 reader.incrementState();
 
-            case 9:
+            case 10:
                 updateSeq = reader.readLong("updateSeq");
 
                 if (!reader.isLastRead())
@@ -405,7 +405,7 @@ Long partitionCounter(int part) {
 
                 reader.incrementState();
 
-            case 10:
+            case 11:
                 workerId = reader.readInt("workerId");
 
                 if (!reader.isLastRead())
@@ -415,7 +415,7 @@ Long partitionCounter(int part) {
 
         }
 
-        return reader.afterMessageRead(GridDhtPartitionDemandMessage.class);
+        return reader.afterMessageRead(GridDhtPartitionDemandLegacyMessage.class);
     }
 
     /** {@inheritDoc} */
@@ -425,7 +425,7 @@ Long partitionCounter(int part) {
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 11;
+        return 12;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandMessage.java
index dc6162bc321f2..bae326424d0fb 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandMessage.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemandMessage.java
@@ -21,6 +21,7 @@
 import java.nio.ByteBuffer;
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.internal.GridDirectTransient;
+import org.apache.ignite.internal.IgniteCodeGeneratingFail;
 import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
 import org.apache.ignite.internal.processors.cache.GridCacheGroupIdMessage;
 import org.apache.ignite.internal.processors.cache.GridCacheMessage;
@@ -35,6 +36,7 @@
 /**
  * Partition demand request.
  */
+@IgniteCodeGeneratingFail
 public class GridDhtPartitionDemandMessage extends GridCacheGroupIdMessage {
     /** */
     private static final long serialVersionUID = 0L;
@@ -259,37 +261,37 @@ public GridCacheMessage convertIfNeeded(IgniteProductVersion target) {
         }
 
         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeByteArray("partsBytes", partsBytes))
                     return false;
 
                 writer.incrementState();
 
-            case 4:
+            case 5:
                 if (!writer.writeLong("timeout", timeout))
                     return false;
 
                 writer.incrementState();
 
-            case 5:
-                if (!writer.writeMessage("topVer", topVer))
+            case 6:
+                if (!writer.writeAffinityTopologyVersion("topVer", topVer))
                     return false;
 
                 writer.incrementState();
 
-            case 6:
+            case 7:
                 if (!writer.writeByteArray("topicBytes", topicBytes))
                     return false;
 
                 writer.incrementState();
 
-            case 7:
+            case 8:
                 if (!writer.writeLong("rebalanceId", rebalanceId))
                     return false;
 
                 writer.incrementState();
 
-            case 8:
+            case 9:
                 if (!writer.writeInt("workerId", workerId))
                     return false;
 
@@ -311,7 +313,7 @@ public GridCacheMessage convertIfNeeded(IgniteProductVersion target) {
             return false;
 
         switch (reader.state()) {
-            case 3:
+            case 4:
                 partsBytes = reader.readByteArray("partsBytes");
 
                 if (!reader.isLastRead())
@@ -319,7 +321,7 @@ public GridCacheMessage convertIfNeeded(IgniteProductVersion target) {
 
                 reader.incrementState();
 
-            case 4:
+            case 5:
                 timeout = reader.readLong("timeout");
 
                 if (!reader.isLastRead())
@@ -327,15 +329,15 @@ public GridCacheMessage convertIfNeeded(IgniteProductVersion target) {
 
                 reader.incrementState();
 
-            case 5:
-                topVer = reader.readMessage("topVer");
+            case 6:
+                topVer = reader.readAffinityTopologyVersion("topVer");
 
                 if (!reader.isLastRead())
                     return false;
 
                 reader.incrementState();
 
-            case 6:
+            case 7:
                 topicBytes = reader.readByteArray("topicBytes");
 
                 if (!reader.isLastRead())
@@ -343,7 +345,7 @@ public GridCacheMessage convertIfNeeded(IgniteProductVersion target) {
 
                 reader.incrementState();
 
-            case 7:
+            case 8:
                 rebalanceId = reader.readLong("rebalanceId");
 
                 if (!reader.isLastRead())
@@ -351,7 +353,7 @@ public GridCacheMessage convertIfNeeded(IgniteProductVersion target) {
 
                 reader.incrementState();
 
-            case 8:
+            case 9:
                 workerId = reader.readInt("workerId");
 
                 if (!reader.isLastRead())
@@ -371,7 +373,7 @@ public GridCacheMessage convertIfNeeded(IgniteProductVersion target) {
 
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 9;
+        return 10;
     }
 
     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemander.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemander.java
index 40defa14b41e4..9ab6a619e439b 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemander.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemander.java
@@ -741,7 +741,25 @@ public void handleSupplyMessage(
                 int p = e.getKey();
 
                 if (aff.get(p).contains(ctx.localNode())) {
-                    GridDhtLocalPartition part = top.localPartition(p, topVer, true);
+                    GridDhtLocalPartition part;
+
+                    try {
+                        part = top.localPartition(p, topVer, true);
+                    }
+                    catch (GridDhtInvalidPartitionException err) {
+                        assert !topVer.equals(top.lastTopologyChangeVersion());
+
+                        if (log.isDebugEnabled()) {
+                            log.debug("Failed to get partition for rebalancing [" +
+                                "grp=" + grp.cacheOrGroupName() +
+                                ", err=" + err +
+                                ", p=" + p +
+                                ", topVer=" + topVer +
+                                ", lastTopVer=" + top.lastTopologyChangeVersion() + ']');
+                        }
+
+                        continue;
+                    }
 
                     assert part != null;
 
@@ -890,6 +908,9 @@ private boolean preloadEntry(
         try {
             GridCacheContext cctx = grp.sharedGroup() ? ctx.cacheContext(entry.cacheId()) : grp.singleCacheContext();
 
+            if (cctx == null)
+                return true;
+
             if (cctx.isNear())
                 cctx = cctx.dhtCache().context();
 
@@ -913,15 +934,15 @@ private boolean preloadEntry(
                     cctx.isDrEnabled() ? DR_PRELOAD : DR_NONE,
                     false
                 )) {
-                    cached.touch(topVer); // Start tracking.
+                    cached.touch(); // Start tracking.
 
                     if (cctx.events().isRecordable(EVT_CACHE_REBALANCE_OBJECT_LOADED) && !cached.isInternal())
-                        cctx.events().addEvent(cached.partition(), cached.key(), cctx.localNodeId(),
-                            (IgniteUuid)null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, entry.value(), true, null,
+                        cctx.events().addEvent(cached.partition(), cached.key(), cctx.localNodeId(), null,
+                            null, null, EVT_CACHE_REBALANCE_OBJECT_LOADED, entry.value(), true, null,
                             false, null, null, null, true);
                 }
                 else {
-                    cached.touch(topVer); // Start tracking.
+                    cached.touch(); // Start tracking.
 
                     if (log.isTraceEnabled())
                         log.trace("Rebalancing entry is already in cache (will ignore) [key=" + cached.key() +
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionExchangeId.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionExchangeId.java
index 741386b40707b..2206f2e372fad 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionExchangeId.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionExchangeId.java
@@ -248,7 +248,7 @@ public boolean isLeft() {
                 writer.incrementState();
 
             case 2:
-                if (!writer.writeMessage("topVer", topVer))
+                if (!writer.writeAffinityTopologyVersion("topVer", topVer))
                     return false;
 
                 writer.incrementState();
 
@@ -283,7 +283,7 @@ public boolean isLeft() {
                 reader.incrementState();
 
             case 2:
-                topVer = reader.readMessage("topVer");
+                topVer = reader.readAffinityTopologyVersion("topVer");
if (!reader.isLastRead()) return false; @@ -311,4 +311,4 @@ public boolean isLeft() { "nodeId", U.id8(nodeId), "evt", U.gridEventName(evt)); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplier.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplier.java index 835910e36387c..92547667ca39e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplier.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplier.java @@ -27,6 +27,7 @@ import java.util.stream.Collectors; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; @@ -45,6 +46,7 @@ import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.typedef.T3; +import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; @@ -71,6 +73,10 @@ class GridDhtPartitionSupplier { /** Supply context map. T3: nodeId, topicId, topVer. */ private final Map, SupplyContext> scMap = new HashMap<>(); + /** Override for rebalance throttle. */ + private long rebalanceThrottleOverride = + IgniteSystemProperties.getLong(IgniteSystemProperties.IGNITE_REBALANCE_THROTTLE_OVERRIDE, 0); + /** * @param grp Cache group. 
*/ @@ -82,6 +88,9 @@ class GridDhtPartitionSupplier { log = grp.shared().logger(getClass()); top = grp.topology(); + + if (rebalanceThrottleOverride > 0) + LT.info(log, "Using rebalance throttle override: " + rebalanceThrottleOverride); } /** @@ -511,7 +520,9 @@ private boolean reply( grp.shared().io().sendOrderedMessage(demander, demandMsg.topic(), supplyMsg, grp.ioPolicy(), demandMsg.timeout()); // Throttle preloading. - if (grp.config().getRebalanceThrottle() > 0) + if (rebalanceThrottleOverride > 0) + U.sleep(rebalanceThrottleOverride); + else if (grp.config().getRebalanceThrottle() > 0) U.sleep(grp.config().getRebalanceThrottle()); return true; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessage.java index 284700ad3a44f..7e281e59a7e1e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessage.java @@ -28,6 +28,7 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.GridDirectCollection; import org.apache.ignite.internal.GridDirectMap; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheEntryInfoCollection; import org.apache.ignite.internal.processors.cache.CacheGroupContext; @@ -45,6 +46,7 @@ /** * Partition supply message. 
*/ +@IgniteCodeGeneratingFail public class GridDhtPartitionSupplyMessage extends GridCacheGroupIdMessage implements GridCacheDeployable { /** */ private static final long serialVersionUID = 0L; @@ -247,6 +249,9 @@ void addEntry0(int p, boolean historical, GridCacheEntryInfo info, GridCacheShar CacheGroupContext grp = ctx.cache().cacheGroup(grpId); + if (grp == null) + return; + for (CacheEntryInfoCollection col : infos().values()) { List entries = col.infos(); @@ -282,55 +287,55 @@ public int size() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeCollection("clean", clean, MessageCollectionItemType.INT)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeLong("estimatedKeysCnt", estimatedKeysCnt)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeMap("infos", infos, MessageCollectionItemType.INT, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeMap("keysPerCache", keysPerCache, MessageCollectionItemType.INT, MessageCollectionItemType.LONG)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeMap("last", last, MessageCollectionItemType.INT, MessageCollectionItemType.LONG)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeCollection("missed", missed, MessageCollectionItemType.INT)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeInt("msgSize", msgSize)) return false; writer.incrementState(); - case 10: - if (!writer.writeMessage("topVer", topVer)) + case 11: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 11: + case 12: // Keep 'updateSeq' name for compatibility. 
if (!writer.writeLong("updateSeq", rebalanceId)) return false; @@ -353,7 +358,7 @@ public int size() { return false; switch (reader.state()) { - case 3: + case 4: clean = reader.readCollection("clean", MessageCollectionItemType.INT); if (!reader.isLastRead()) @@ -361,7 +366,7 @@ public int size() { reader.incrementState(); - case 4: + case 5: estimatedKeysCnt = reader.readLong("estimatedKeysCnt"); if (!reader.isLastRead()) @@ -369,7 +374,7 @@ public int size() { reader.incrementState(); - case 5: + case 6: infos = reader.readMap("infos", MessageCollectionItemType.INT, MessageCollectionItemType.MSG, false); if (!reader.isLastRead()) @@ -377,7 +382,7 @@ public int size() { reader.incrementState(); - case 6: + case 7: keysPerCache = reader.readMap("keysPerCache", MessageCollectionItemType.INT, MessageCollectionItemType.LONG, false); if (!reader.isLastRead()) @@ -385,7 +390,7 @@ public int size() { reader.incrementState(); - case 7: + case 8: last = reader.readMap("last", MessageCollectionItemType.INT, MessageCollectionItemType.LONG, false); if (!reader.isLastRead()) @@ -393,7 +398,7 @@ public int size() { reader.incrementState(); - case 8: + case 9: missed = reader.readCollection("missed", MessageCollectionItemType.INT); if (!reader.isLastRead()) @@ -401,7 +406,7 @@ public int size() { reader.incrementState(); - case 9: + case 10: msgSize = reader.readInt("msgSize"); if (!reader.isLastRead()) @@ -409,15 +414,15 @@ public int size() { reader.incrementState(); - case 10: - topVer = reader.readMessage("topVer"); + case 11: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 11: + case 12: // Keep 'updateSeq' name for compatibility. 
rebalanceId = reader.readLong("updateSeq"); @@ -438,7 +443,7 @@ public int size() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 12; + return 13; } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessageV2.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessageV2.java index b6bff0e823506..154d9fb4a6136 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessageV2.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionSupplyMessageV2.java @@ -36,7 +36,7 @@ public class GridDhtPartitionSupplyMessageV2 extends GridDhtPartitionSupplyMessa private static final long serialVersionUID = 0L; /** Available since. */ - public static final IgniteProductVersion AVAILABLE_SINCE = IgniteProductVersion.fromString("2.7.0"); + public static final IgniteProductVersion AVAILABLE_SINCE = IgniteProductVersion.fromString("2.5.3"); /** Supplying process error. 
*/ @GridDirectTransient @@ -101,7 +101,7 @@ public GridDhtPartitionSupplyMessageV2( } switch (writer.state()) { - case 12: + case 13: if (!writer.writeByteArray("errBytes", errBytes)) return false; @@ -123,7 +123,7 @@ public GridDhtPartitionSupplyMessageV2( return false; switch (reader.state()) { - case 12: + case 13: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -148,6 +148,6 @@ public GridDhtPartitionSupplyMessageV2( /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 13; + return 14; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsAbstractMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsAbstractMessage.java index 84cc792fe22ea..26fcb8a7269d9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsAbstractMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsAbstractMessage.java @@ -118,14 +118,14 @@ public void exchangeId(GridDhtPartitionExchangeId exchId) { /** * @return {@code True} if message data is compressed. */ - protected final boolean compressed() { + public final boolean compressed() { return (flags & COMPRESSED_FLAG_MASK) != 0; } /** * @param compressed {@code True} if message data is compressed. */ - protected final void compressed(boolean compressed) { + public final void compressed(boolean compressed) { flags = compressed ? 
(byte)(flags | COMPRESSED_FLAG_MASK) : (byte)(flags & ~COMPRESSED_FLAG_MASK); } @@ -145,7 +145,7 @@ public boolean restoreState() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 5; + return 6; } /** {@inheritDoc} */ @@ -163,19 +163,19 @@ public boolean restoreState() { } switch (writer.state()) { - case 2: + case 3: if (!writer.writeMessage("exchId", exchId)) return false; writer.incrementState(); - case 3: + case 4: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeMessage("lastVer", lastVer)) return false; @@ -197,7 +197,7 @@ public boolean restoreState() { return false; switch (reader.state()) { - case 2: + case 3: exchId = reader.readMessage("exchId"); if (!reader.isLastRead()) @@ -205,7 +205,7 @@ public boolean restoreState() { reader.incrementState(); - case 3: + case 4: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -213,7 +213,7 @@ public boolean restoreState() { reader.incrementState(); - case 4: + case 5: lastVer = reader.readMessage("lastVer"); if (!reader.isLastRead()) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsExchangeFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsExchangeFuture.java index f43afa07fbeef..4b3ba176c419d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsExchangeFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsExchangeFuture.java @@ -36,12 +36,14 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; import java.util.concurrent.locks.Lock; 
import java.util.concurrent.locks.ReadWriteLock; import java.util.stream.Collectors; import java.util.stream.Stream; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.CacheMode; @@ -49,8 +51,9 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.events.DiscoveryEvent; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException; import org.apache.ignite.internal.IgniteDiagnosticAware; import org.apache.ignite.internal.IgniteDiagnosticPrepareContext; @@ -72,6 +75,7 @@ import org.apache.ignite.internal.processors.cache.CachePartitionExchangeWorkerTask; import org.apache.ignite.internal.processors.cache.DynamicCacheChangeBatch; import org.apache.ignite.internal.processors.cache.DynamicCacheChangeFailureMessage; +import org.apache.ignite.internal.processors.cache.DynamicCacheChangeRequest; import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.ExchangeActions; import org.apache.ignite.internal.processors.cache.ExchangeContext; @@ -81,17 +85,16 @@ import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.GridCacheUtils; -import org.apache.ignite.internal.processors.cache.LocalJoinCachesContext; import org.apache.ignite.internal.processors.cache.StateChangeRequest; import org.apache.ignite.internal.processors.cache.WalStateAbstractMessage; +import 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.Latch; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridClientPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionsStateValidator; -import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter; -import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.Latch; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; +import org.apache.ignite.internal.processors.cache.persistence.DatabaseLifecycleListener; import org.apache.ignite.internal.processors.cache.persistence.snapshot.SnapshotDiscoveryMessage; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; @@ -99,8 +102,8 @@ import org.apache.ignite.internal.processors.cluster.ChangeGlobalStateFinishMessage; import org.apache.ignite.internal.processors.cluster.ChangeGlobalStateMessage; import org.apache.ignite.internal.processors.cluster.DiscoveryDataClusterState; -import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.IgniteUtils; +import org.apache.ignite.internal.util.TimeBag; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.tostring.GridToStringInclude; @@ -130,6 +133,7 @@ import static 
org.apache.ignite.internal.processors.cache.ExchangeDiscoveryEvents.serverJoinEvent; import static org.apache.ignite.internal.processors.cache.ExchangeDiscoveryEvents.serverLeftEvent; import static org.apache.ignite.internal.processors.cache.distributed.dht.preloader.CachePartitionPartialCountersMap.PARTIAL_COUNTERS_MAP_SINCE; +import static org.apache.ignite.internal.util.IgniteUtils.doInParallel; /** * Future for exchanging partition maps. @@ -181,7 +185,7 @@ public class GridDhtPartitionsExchangeFuture extends GridDhtTopologyFutureAdapte private List srvNodes; /** */ - private ClusterNode crd; + private volatile ClusterNode crd; /** ExchangeFuture id. */ private final GridDhtPartitionExchangeId exchId; @@ -189,7 +193,10 @@ public class GridDhtPartitionsExchangeFuture extends GridDhtTopologyFutureAdapte /** Cache context. */ private final GridCacheSharedContext cctx; - /** Busy lock to prevent activities from accessing exchanger while it's stopping. */ + /** + * Busy lock to prevent activities from accessing the exchanger while it is stopping. Stopping acquires the write + * lock, so every subsequent {@link #enterBusy()} fails and returns {@code false}. Regular operations use the read + * lock, which may be acquired multiple times. + */ private ReadWriteLock busyLock; /** */ @@ -333,6 +340,18 @@ public class GridDhtPartitionsExchangeFuture extends GridDhtTopologyFutureAdapte /** Latest (by update sequences) full message with exchangeId == null, need to be processed right after future is done. */ private GridDhtPartitionsFullMessage delayedLatestMsg; + /** Future to wait until all exchange listeners have completed. */ + private final GridFutureAdapter afterLsnrCompleteFut = new GridFutureAdapter<>(); + + /** Time bag to measure and store the durations of exchange stages. */ + private final TimeBag timeBag; + + /** Start time of exchange. */ + private long startTime = System.nanoTime(); + + /** Discovery lag / clock discrepancy, calculated on the coordinator when all single messages are received. 
*/ + private T2 discoveryLag; + /** * @param cctx Cache context. * @param busyLock Busy lock. @@ -364,6 +383,8 @@ public GridDhtPartitionsExchangeFuture( log = cctx.logger(getClass()); exchLog = cctx.logger(EXCHANGE_LOG); + timeBag = new TimeBag(); + initFut = new GridFutureAdapter() { @Override public IgniteLogger logger() { return log; @@ -544,6 +565,14 @@ private boolean dynamicCacheStartExchange() { && exchActions.cacheStopRequests().isEmpty(); } + /** + * @param cacheOrGroupName Cache or group name for which lost partitions are reset. + * @return {@code True} if this exchange resets lost partitions for the given cache or group. + */ + public boolean resetLostPartitionFor(String cacheOrGroupName) { + return exchActions != null && exchActions.cachesToResetLostPartitions().contains(cacheOrGroupName); + } + /** * @return {@code True} if activate cluster exchange. */ @@ -563,6 +592,19 @@ public boolean changedBaseline() { return exchActions != null && exchActions.changedBaseline(); } + /** {@inheritDoc} */ + @Override public boolean changedAffinity() { + DiscoveryEvent firstDiscoEvt0 = firstDiscoEvt; + + assert firstDiscoEvt0 != null; + + return firstDiscoEvt0.type() == DiscoveryCustomEvent.EVT_DISCOVERY_CUSTOM_EVT + || !firstDiscoEvt0.eventNode().isClient() + || firstDiscoEvt0.eventNode().isLocal() + || ((firstDiscoEvt.type() == EVT_NODE_JOINED) && + cctx.cache().hasCachesReceivedFromJoin(firstDiscoEvt.eventNode())); + } + /** * @return {@code True} if there are caches to start. */ @@ -600,7 +642,7 @@ public GridDhtPartitionExchangeId exchangeId() { } /** - * @return {@code true} if entered to busy state. + * @return {@code true} if entered busy state; {@code false} if the node is stopping. 
*/ private boolean enterBusy() { if (busyLock.readLock().tryLock()) @@ -627,13 +669,25 @@ private void initCoordinatorCaches(boolean newCrd) throws IgniteCheckedException if (newCrd) { IgniteInternalFuture fut = cctx.affinity().initCoordinatorCaches(this, false); - if (fut != null) + if (fut != null) { fut.get(); + cctx.exchange().exchangerUpdateHeartbeat(); + } + cctx.exchange().onCoordinatorInitialized(); + + cctx.exchange().exchangerUpdateHeartbeat(); } } + /** + * @return Object to collect exchange timings. + */ + public TimeBag timeBag() { + return timeBag; + } + /** * Starts activity. * @@ -648,7 +702,14 @@ public void init(boolean newCrd) throws IgniteInterruptedCheckedException { initTs = U.currentTimeMillis(); - U.await(evtLatch); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + U.await(evtLatch); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } assert firstDiscoEvt != null : this; assert exchId.nodeId().equals(firstDiscoEvt.eventNode().id()) : this; @@ -664,17 +725,9 @@ public void init(boolean newCrd) throws IgniteInterruptedCheckedException { boolean crdNode = crd != null && crd.isLocal(); - MvccCoordinator mvccCrd = firstEvtDiscoCache.mvccCoordinator(); - - boolean mvccCrdChange = mvccCrd != null && - (initialVersion().equals(mvccCrd.topologyVersion()) || activateCluster()); - - // Mvcc coordinator should has been initialized before exchange context is created. 
- cctx.kernalContext().coordinators().updateCoordinator(mvccCrd); - - exchCtx = new ExchangeContext(crdNode, mvccCrdChange, this); + exchCtx = new ExchangeContext(crdNode, this); - cctx.kernalContext().coordinators().onExchangeStart(mvccCrd, exchCtx, crd); + cctx.exchange().exchangerBlockingSectionBegin(); assert state == null : state; @@ -685,8 +738,6 @@ public void init(boolean newCrd) throws IgniteInterruptedCheckedException { if (exchLog.isInfoEnabled()) { exchLog.info("Started exchange init [topVer=" + topVer + - ", mvccCrd=" + mvccCrd + - ", mvccCrdChange=" + mvccCrdChange + ", crd=" + crdNode + ", evt=" + IgniteUtils.gridEventName(firstDiscoEvt.type()) + ", evtNode=" + firstDiscoEvt.eventNode().id() + @@ -694,6 +745,8 @@ public void init(boolean newCrd) throws IgniteInterruptedCheckedException { ", allowMerge=" + exchCtx.mergeExchanges() + ']'); } + timeBag.finishGlobalStage("Exchange parameters initialization"); + ExchangeType exchange; if (firstDiscoEvt.type() == EVT_DISCOVERY_CUSTOM_EVT) { @@ -774,7 +827,14 @@ else if (msg instanceof WalStateAbstractMessage) } } - updateTopologies(crdNode, cctx.coordinators().currentCoordinator()); + cctx.cache().registrateProxyRestart(resolveCacheRequests(exchActions), afterLsnrCompleteFut); + + for (PartitionsExchangeAware comp : cctx.exchange().exchangeAwareComponents()) + comp.onInitBeforeTopologyLock(this); + + updateTopologies(crdNode); + + timeBag.finishGlobalStage("Determine exchange type"); switch (exchange) { case ALL: { @@ -804,14 +864,30 @@ else if (msg instanceof WalStateAbstractMessage) assert false; } - if (cctx.localNode().isClient()) - tryToPerformLocalSnapshotOperation(); + if (cctx.localNode().isClient()) { + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + tryToPerformLocalSnapshotOperation(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } + } + + for (PartitionsExchangeAware comp : cctx.exchange().exchangeAwareComponents()) + comp.onInitAfterTopologyLock(this); if 
(exchLog.isInfoEnabled()) exchLog.info("Finished exchange init [topVer=" + topVer + ", crd=" + crdNode + ']'); } catch (IgniteInterruptedCheckedException e) { - onDone(e); + assert cctx.kernalContext().isStopping() || cctx.kernalContext().clientDisconnected(); + + if (cctx.kernalContext().clientDisconnected()) + onDone(new IgniteCheckedException("Client disconnected")); + else + onDone(new IgniteCheckedException("Node stopped")); throw e; } @@ -836,38 +912,39 @@ else if (msg instanceof WalStateAbstractMessage) * @throws IgniteCheckedException If failed. */ private IgniteInternalFuture initCachesOnLocalJoin() throws IgniteCheckedException { - if (isLocalNodeNotInBaseline()) { - cctx.cache().cleanupCachesDirectories(); + if (!cctx.kernalContext().clientNode() && !isLocalNodeInBaseline()) { + cctx.exchange().exchangerBlockingSectionBegin(); - cctx.database().cleanupCheckpointDirectory(); - - if (cctx.wal() != null) - cctx.wal().cleanupWalDirectories(); - } - - cctx.activate(); + try { + List listeners = cctx.kernalContext().internalSubscriptionProcessor() + .getDatabaseListeners(); - LocalJoinCachesContext locJoinCtx = exchActions == null ? null : exchActions.localJoinContext(); + for (DatabaseLifecycleListener lsnr : listeners) + lsnr.onBaselineChange(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } - List> caches = locJoinCtx == null ? 
null : - locJoinCtx.caches(); + timeBag.finishGlobalStage("Baseline change callback"); + } - if (!cctx.kernalContext().clientNode()) { - List startDescs = new ArrayList<>(); + cctx.exchange().exchangerBlockingSectionBegin(); - if (caches != null) { - for (T2 c : caches) { - DynamicCacheDescriptor startDesc = c.get1(); + try { + cctx.activate(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } - if (CU.isPersistentCache(startDesc.cacheConfiguration(), cctx.gridConfig().getDataStorageConfiguration())) - startDescs.add(startDesc); - } - } + timeBag.finishGlobalStage("Components activation"); - cctx.database().readCheckpointAndRestoreMemory(startDescs); - } + IgniteInternalFuture cachesRegistrationFut = cctx.cache().startCachesOnLocalJoin(initialVersion(), + exchActions == null ? null : exchActions.localJoinContext()); - IgniteInternalFuture cachesRegistrationFut = cctx.cache().startCachesOnLocalJoin(initialVersion(), locJoinCtx); + if (!cctx.kernalContext().clientNode()) + cctx.cache().shutdownNotFinishedRecoveryCaches(); ensureClientCachesStarted(); @@ -894,12 +971,12 @@ private void ensureClientCachesStarted() { } /** - * @return {@code true} if local node is not in baseline and {@code false} otherwise. + * @return {@code true} if local node is in baseline and {@code false} otherwise. 
*/ - private boolean isLocalNodeNotInBaseline() { + private boolean isLocalNodeInBaseline() { BaselineTopology topology = cctx.discovery().discoCache().state().baselineTopology(); - return topology!= null && !topology.consistentIds().contains(cctx.localNode().consistentId()); + return topology != null && topology.consistentIds().contains(cctx.localNode().consistentId()); } /** @@ -915,6 +992,8 @@ private void initTopologies() throws IgniteCheckedException { continue; grp.topology().beforeExchange(this, !centralizedAff && !forceAffReassignment, false); + + cctx.exchange().exchangerUpdateHeartbeat(); } } } @@ -927,10 +1006,9 @@ private void initTopologies() throws IgniteCheckedException { * Updates topology versions and discovery caches on all topologies. * * @param crd Coordinator flag. - * @param mvccCrd Mvcc coordinator. * @throws IgniteCheckedException If failed. */ - private void updateTopologies(boolean crd, MvccCoordinator mvccCrd) throws IgniteCheckedException { + private void updateTopologies(boolean crd) throws IgniteCheckedException { for (CacheGroupContext grp : cctx.cache().cacheGroups()) { if (grp.isLocal()) continue; @@ -945,29 +1023,48 @@ private void updateTopologies(boolean crd, MvccCoordinator mvccCrd) throws Ignit boolean updateTop = exchId.topologyVersion().equals(grp.localStartVersion()); if (updateTop && clientTop != null) { - top.update(null, - clientTop.partitionMap(true), - clientTop.fullUpdateCounters(), - Collections.emptySet(), - null, - null); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + top.update(null, + clientTop.partitionMap(true), + clientTop.fullUpdateCounters(), + Collections.emptySet(), + null, + null); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } } - top.updateTopologyVersion( - this, - events().discoveryCache(), - mvccCrd, - updSeq, - cacheGroupStopping(grp.groupId())); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + top.updateTopologyVersion( + this, + 
events().discoveryCache(), + updSeq, + cacheGroupStopping(grp.groupId())); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } - for (GridClientPartitionTopology top : cctx.exchange().clientTopologies()) { - top.updateTopologyVersion(this, - events().discoveryCache(), - mvccCrd, - -1, - cacheGroupStopping(top.groupId())); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + for (GridClientPartitionTopology top : cctx.exchange().clientTopologies()) { + top.updateTopologyVersion(this, + events().discoveryCache(), + -1, + cacheGroupStopping(top.groupId())); + } + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); } } @@ -996,25 +1093,28 @@ private ExchangeType onClusterStateChangeRequest(boolean crd) { } try { - cctx.activate(); - - if (!cctx.kernalContext().clientNode()) { - List startDescs = new ArrayList<>(); - - for (ExchangeActions.CacheActionData startReq : exchActions.cacheStartRequests()) { - DynamicCacheDescriptor desc = startReq.descriptor(); - - if (CU.isPersistentCache(desc.cacheConfiguration(), - cctx.gridConfig().getDataStorageConfiguration())) - startDescs.add(desc); - } + cctx.exchange().exchangerBlockingSectionBegin(); - cctx.database().readCheckpointAndRestoreMemory(startDescs); + try { + cctx.activate(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); } assert registerCachesFuture == null : "No caches registration should be scheduled before new caches have started."; - registerCachesFuture = cctx.affinity().onCacheChangeRequest(this, crd, exchActions); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + registerCachesFuture = cctx.affinity().onCacheChangeRequest(this, crd, exchActions); + + if (!cctx.kernalContext().clientNode()) + cctx.cache().shutdownNotFinishedRecoveryCaches(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } if (log.isInfoEnabled()) { log.info("Successfully activated caches [nodeId=" + cctx.localNodeId() + @@ -1030,8 +1130,15 @@ private 
ExchangeType onClusterStateChangeRequest(boolean crd) { exchangeLocE = e; if (crd) { - synchronized (mux) { - exchangeGlobalExceptions.put(cctx.localNodeId(), e); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + synchronized (mux) { + exchangeGlobalExceptions.put(cctx.localNodeId(), e); + } + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); } } } @@ -1043,6 +1150,8 @@ private ExchangeType onClusterStateChangeRequest(boolean crd) { ", topVer=" + initialVersion() + "]"); } + cctx.exchange().exchangerBlockingSectionBegin(); + try { cctx.kernalContext().dataStructures().onDeActivate(cctx.kernalContext()); @@ -1052,6 +1161,8 @@ private ExchangeType onClusterStateChangeRequest(boolean crd) { registerCachesFuture = cctx.affinity().onCacheChangeRequest(this, crd, exchActions); + cctx.kernalContext().encryption().onDeActivate(cctx.kernalContext()); + if (log.isInfoEnabled()) { log.info("Successfully deactivated data structures, services and caches [" + "nodeId=" + cctx.localNodeId() + @@ -1066,9 +1177,14 @@ private ExchangeType onClusterStateChangeRequest(boolean crd) { exchangeLocE = e; } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } } else if (req.activate()) { + cctx.exchange().exchangerBlockingSectionBegin(); + // TODO: BLT changes on inactive cluster can't be handled easily because persistent storage hasn't been initialized yet. try { if (!forceAffReassignment) { @@ -1091,6 +1207,9 @@ assert firstEventCache().minimumNodeVersion() exchangeLocE = e; } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } return cctx.kernalContext().clientNode() ? 
ExchangeType.CLIENT : ExchangeType.ALL; @@ -1106,6 +1225,8 @@ private ExchangeType onCacheChangeRequest(boolean crd) throws IgniteCheckedExcep assert !exchActions.clientOnlyExchange() : exchActions; + cctx.exchange().exchangerBlockingSectionBegin(); + try { assert registerCachesFuture == null : "No caches registration should be scheduled before new caches have started."; @@ -1116,13 +1237,17 @@ private ExchangeType onCacheChangeRequest(boolean crd) throws IgniteCheckedExcep // This exception will be handled by init() method. throw e; - U.error(log, "Failed to initialize cache(s) (will try to rollback). " + exchId, e); + U.error(log, "Failed to initialize cache(s) (will try to rollback) [exchId=" + exchId + + ", caches=" + exchActions.cacheGroupsToStart() + ']', e); exchangeLocE = new IgniteCheckedException( "Failed to initialize exchange locally [locNodeId=" + cctx.localNodeId() + "]", e); exchangeGlobalExceptions.put(cctx.localNodeId(), exchangeLocE); } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } return cctx.kernalContext().clientNode() ? 
ExchangeType.CLIENT : ExchangeType.ALL; } @@ -1203,10 +1328,17 @@ private void clientOnlyExchange() throws IgniteCheckedException { if (crd != null) { assert !crd.isLocal() : crd; - if (!centralizedAff) - sendLocalPartitions(crd); + cctx.exchange().exchangerBlockingSectionBegin(); - initDone(); + try { + if (!centralizedAff) + sendLocalPartitions(crd); + + initDone(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } return; } @@ -1216,13 +1348,22 @@ private void clientOnlyExchange() throws IgniteCheckedException { GridAffinityAssignmentCache aff = grp.affinity(); aff.initialize(initialVersion(), aff.idealAssignment()); + + cctx.exchange().exchangerUpdateHeartbeat(); } } else onAllServersLeft(); } - onDone(initialVersion()); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + onDone(initialVersion()); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } /** @@ -1237,13 +1378,31 @@ private void distributedExchange() throws IgniteCheckedException { if (grp.isLocal()) continue; - grp.preloader().onTopologyChanged(this); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + grp.preloader().onTopologyChanged(this); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } - cctx.database().releaseHistoryForPreloading(); + timeBag.finishGlobalStage("Preloading notification"); + + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + cctx.database().releaseHistoryForPreloading(); - // To correctly rebalance when persistence is enabled, it is necessary to reserve history within exchange. - partHistReserved = cctx.database().reserveHistoryForExchange(); + // To correctly rebalance when persistence is enabled, it is necessary to reserve history within exchange. 
+ partHistReserved = cctx.database().reserveHistoryForExchange(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } + + timeBag.finishGlobalStage("WAL history reservation"); // Skipping wait on local join is available when all cluster nodes have the same protocol. boolean skipWaitOnLocalJoin = cctx.exchange().latch().canSkipJoiningNodes(initialVersion()) @@ -1278,15 +1437,30 @@ private void distributedExchange() throws IgniteCheckedException { if (topChanged) { // Partition release future is done so we can flush the write-behind store. - cacheCtx.store().forceFlush(); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + cacheCtx.store().forceFlush(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } } - /* It is necessary to run database callback before all topology callbacks. - In case of persistent store is enabled we first restore partitions presented on disk. - We need to guarantee that there are no partition state changes logged to WAL before this callback - to make sure that we correctly restored last actual states. */ - boolean restored = cctx.database().beforeExchange(this); + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + /* It is necessary to run database callback before all topology callbacks. + In case of persistent store is enabled we first restore partitions presented on disk. + We need to guarantee that there are no partition state changes logged to WAL before this callback + to make sure that we correctly restored last actual states. */ + + cctx.database().beforeExchange(this); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } // Pre-create missing partitions using current affinity. if (!exchCtx.mergeExchanges()) { @@ -1295,25 +1469,56 @@ private void distributedExchange() throws IgniteCheckedException { continue; // It is possible affinity is not initialized yet if node joins to cluster. 
- if (grp.affinity().lastVersion().topologyVersion() > 0) - grp.topology().beforeExchange(this, !centralizedAff && !forceAffReassignment, false); + if (grp.affinity().lastVersion().topologyVersion() > 0) { + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + grp.topology().beforeExchange(this, !centralizedAff && !forceAffReassignment, false); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } + } } } // After all partitions have been restored and pre-created it's safe to make first checkpoint. - if (restored) - cctx.database().onStateRestored(); + if (localJoinExchange() || activateCluster()) { + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + cctx.database().onStateRestored(initialVersion()); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } + } + + timeBag.finishGlobalStage("After states restored callback"); changeWalModeIfNeeded(); - if (crd.isLocal()) { - if (remaining.isEmpty()) - onAllReceived(null); - } - else - sendPartitions(crd); + if (events().hasServerLeft()) + finalizePartitionCounters(); + + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + if (crd.isLocal()) { + if (remaining.isEmpty()) { + initFut.onDone(true); + + onAllReceived(null); + } + } + else + sendPartitions(crd); - initDone(); + initDone(); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } /** @@ -1346,8 +1551,16 @@ private void tryToPerformLocalSnapshotOperation() { private void changeWalModeIfNeeded() { WalStateAbstractMessage msg = firstWalMessage(); - if (msg != null) - cctx.walState().onProposeExchange(msg.exchangeMessage()); + if (msg != null) { + cctx.exchange().exchangerBlockingSectionBegin(); + + try { + cctx.walState().onProposeExchange(msg.exchangeMessage()); + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } + } } /** @@ -1387,17 +1600,23 @@ private void changeWalModeIfNeeded() { private void waitPartitionRelease(boolean distributed, boolean doRollback) throws 
IgniteCheckedException { Latch releaseLatch = null; - // Wait for other nodes only on first phase. - if (distributed) - releaseLatch = cctx.exchange().latch().getOrCreate(DISTRIBUTED_LATCH_ID, initialVersion()); + IgniteInternalFuture partReleaseFut; + + cctx.exchange().exchangerBlockingSectionBegin(); - IgniteInternalFuture partReleaseFut = cctx.partitionReleaseFuture(initialVersion()); + try { + // Wait for other nodes only on first phase. + if (distributed) + releaseLatch = cctx.exchange().latch().getOrCreate(DISTRIBUTED_LATCH_ID, initialVersion()); - // Assign to class variable so it will be included into toString() method. - this.partReleaseFut = partReleaseFut; + partReleaseFut = cctx.partitionReleaseFuture(initialVersion()); - if (exchId.isLeft()) - cctx.mvcc().removeExplicitNodeLocks(exchId.nodeId(), exchId.topologyVersion()); + // Assign to class variable so it will be included into toString() method. + this.partReleaseFut = partReleaseFut; + } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } if (log.isTraceEnabled()) log.trace("Before waiting for partition release future: " + this); @@ -1418,6 +1637,8 @@ private void waitPartitionRelease(boolean distributed, boolean doRollback) throw // Read txTimeoutOnPME from configuration after every iteration. long curTimeout = cfg.getTransactionConfiguration().getTxTimeoutOnPartitionMapExchange(); + cctx.exchange().exchangerBlockingSectionBegin(); + try { // This avoids unnecessary waiting for rollback. partReleaseFut.get(curTimeout > 0 && !txRolledBack ?
@@ -1444,6 +1665,9 @@ private void waitPartitionRelease(boolean distributed, boolean doRollback) throw throw e; } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } long waitEnd = U.currentTimeMillis(); @@ -1467,6 +1691,8 @@ private void waitPartitionRelease(boolean distributed, boolean doRollback) throw dumpCnt = 0; while (true) { + cctx.exchange().exchangerBlockingSectionBegin(); + try { locksFut.get(waitTimeout, TimeUnit.MILLISECONDS); @@ -1497,8 +1723,13 @@ private void waitPartitionRelease(boolean distributed, boolean doRollback) throw U.dumpThreads(log); } } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } + timeBag.finishGlobalStage("Wait partitions release"); + if (releaseLatch == null) { assert !distributed : "Partitions release latch must be initialized in distributed mode."; @@ -1539,6 +1770,8 @@ private void waitPartitionRelease(boolean distributed, boolean doRollback) throw catch (IgniteCheckedException e) { U.warn(log, "Stop waiting for partitions release latch: " + e.getMessage()); } + + timeBag.finishGlobalStage("Wait partitions release latch"); } /** @@ -1550,9 +1783,9 @@ private void onLeft() { continue; grp.preloader().unwindUndeploys(); - } - cctx.mvcc().removeExplicitNodeLocks(exchId.nodeId(), exchId.topologyVersion()); + cctx.exchange().exchangerUpdateHeartbeat(); + } } /** @@ -1611,8 +1844,6 @@ public boolean localJoinExchange() { private void sendLocalPartitions(ClusterNode node) throws IgniteCheckedException { assert node != null; - long time = System.currentTimeMillis(); - GridDhtPartitionsSingleMessage msg; // Reset lost partitions before sending local partitions to coordinator. 
@@ -1642,17 +1873,13 @@ private void sendLocalPartitions(ClusterNode node) throws IgniteCheckedException msg.partitionHistoryCounters(partHistReserved0); } - if (exchCtx.newMvccCoordinator() && cctx.coordinators().currentCoordinatorId().equals(node.id())) { - Map activeQueries = exchCtx.activeQueries(); - - msg.activeQueries(activeQueries != null ? activeQueries.get(cctx.localNodeId()) : null); - } - if ((stateChangeExchange() || dynamicCacheStartExchange()) && exchangeLocE != null) msg.setError(exchangeLocE); else if (localJoinExchange()) msg.cacheGroupsAffinityRequest(exchCtx.groupsAffinityRequestOnJoin()); + msg.exchangeStartTime(startTime); + if (log.isTraceEnabled()) log.trace("Sending local partitions [nodeId=" + node.id() + ", exchId=" + exchId + ", msg=" + msg + ']'); @@ -1663,9 +1890,6 @@ else if (localJoinExchange()) if (log.isDebugEnabled()) log.debug("Node left during partition exchange [nodeId=" + node.id() + ", exchId=" + exchId + ']'); } - - if (log.isInfoEnabled()) - log.info("Sending Single Message performed in " + (System.currentTimeMillis() - time) + " ms."); } /** @@ -1724,8 +1948,6 @@ private void sendAllPartitions( .map(singleMessage -> fullMsg.copy().joinedNodeAffinity(affinityForJoinedNodes)) .orElse(null); - long time = System.currentTimeMillis(); - // Prepare and send full messages for given nodes. nodes.stream() .map(node -> { @@ -1778,9 +2000,6 @@ private void sendAllPartitions( U.error(log, "Failed to send partitions [node=" + node + ']', e); } }); - - if (log.isInfoEnabled()) - log.info("Sending Full Message performed in " + (System.currentTimeMillis() - time) + " ms."); } /** @@ -1822,12 +2041,51 @@ public boolean serverNodeDiscoveryEvent() { /** * Finish merged future to allow GridCachePartitionExchangeManager.ExchangeFutureSet cleanup. 
*/ - public void finishMerged() { - super.onDone(null, null); + public void finishMerged(AffinityTopologyVersion resVer) { + synchronized (mux) { + if (state == null) state = ExchangeLocalState.MERGED; + } + + done.set(true); + + super.onDone(resVer, null); + } + + /** + * @return {@code True} if future was merged. + */ + public boolean isMerged() { + synchronized (mux) { + return state == ExchangeLocalState.MERGED; + } + } + + /** + * Make a log message that contains given exchange timings. + * + * @param header Header of log message. + * @param timings Exchange stages timings. + * @return Log message with exchange timings and exchange version. + */ + private String exchangeTimingsLogMessage(String header, List timings) { + StringBuilder timingsToLog = new StringBuilder(); + + timingsToLog.append(header).append(" ["); + timingsToLog.append("startVer=").append(initialVersion()); + timingsToLog.append(", resVer=").append(topologyVersion()); + + for (String stageTiming : timings) + timingsToLog.append(", ").append(stageTiming); + + timingsToLog.append(']'); + + return timingsToLog.toString(); } /** {@inheritDoc} */ @Override public boolean onDone(@Nullable AffinityTopologyVersion res, @Nullable Throwable err) { + assert res != null || err != null : "TopVer=" + res + ", err=" + err; + if (isDone() || !done.compareAndSet(false, true)) return false; @@ -1856,22 +2114,24 @@ public void finishMerged() { if (centralizedAff || forceAffReassignment) { assert !exchCtx.mergeExchanges(); + Collection grpToRefresh = U.newHashSet(cctx.cache().cacheGroups().size()); + for (CacheGroupContext grp : cctx.cache().cacheGroups()) { if (grp.isLocal()) continue; - boolean needRefresh = false; - try { - needRefresh = grp.topology().initPartitionsWhenAffinityReady(res, this); + if (grp.topology().initPartitionsWhenAffinityReady(res, this)) + grpToRefresh.add(grp); } catch (IgniteInterruptedCheckedException e) { U.error(log, "Failed to initialize partitions.", e); } - if (needRefresh) - 
cctx.exchange().refreshPartitions(); } + + if (!grpToRefresh.isEmpty()) + cctx.exchange().refreshPartitions(grpToRefresh); } for (GridCacheContext cacheCtx : cctx.cacheContexts()) { @@ -1887,10 +2147,10 @@ public void finishMerged() { } } - if (serverNodeDiscoveryEvent()) + if (serverNodeDiscoveryEvent() || localJoinExchange()) detectLostPartitions(res); - Map m = U.newHashMap(cctx.cache().cacheGroups().size()); + Map m = U.newHashMap(cctx.cache().cacheGroups().size()); for (CacheGroupContext grp : cctx.cache().cacheGroups()) m.put(grp.groupId(), validateCacheGroup(grp, events().lastEvent().topologyNodes())); @@ -1902,21 +2162,16 @@ public void finishMerged() { tryToPerformLocalSnapshotOperation(); if (err == null) - cctx.coordinators().onExchangeDone(exchCtx.newMvccCoordinator(), exchCtx.events().discoveryCache(), - exchCtx.activeQueries()); + cctx.coordinators().onExchangeDone(events().discoveryCache()); - cctx.cache().onExchangeDone(initialVersion(), exchActions, err); + for (PartitionsExchangeAware comp : cctx.exchange().exchangeAwareComponents()) + comp.onDoneBeforeTopologyUnlock(this); - cctx.exchange().onExchangeDone(res, initialVersion(), err); + // Create and destroy caches and cache proxies.
+ cctx.cache().onExchangeDone(initialVersion(), exchActions, err); cctx.kernalContext().authentication().onActivate(); - if (exchActions != null && err == null) - exchActions.completeRequestFutures(cctx, null); - - if (stateChangeExchange() && err == null) - cctx.kernalContext().state().onStateChangeExchangeDone(exchActions.stateChangeRequest()); - Map, Long> localReserved = partHistSuppliers.getReservations(cctx.localNodeId()); if (localReserved != null) { @@ -1933,24 +2188,62 @@ public void finishMerged() { cctx.database().releaseHistoryForExchange(); - cctx.database().rebuildIndexesIfNeeded(this); - if (err == null) { + cctx.database().rebuildIndexesIfNeeded(this); + for (CacheGroupContext grp : cctx.cache().cacheGroups()) { if (!grp.isLocal()) grp.topology().onExchangeDone(this, grp.affinity().readyAffinity(res), false); } - cctx.walState().changeLocalStatesOnExchangeDone(res); + if (changedAffinity()) + cctx.walState().changeLocalStatesOnExchangeDone(res, changedBaseline()); } + final Throwable err0 = err; + + // Should execute this listener first, before any external listeners. + // Listeners use a stack as the underlying data structure. + listen(f -> { + // Update the last finished future first. + cctx.exchange().lastFinishedFuture(this); + + // Complete any affReady futures and update last exchange done version.
+ cctx.exchange().onExchangeDone(res, initialVersion(), err0); + + cctx.cache().completeProxyRestart(resolveCacheRequests(exchActions), initialVersion(), res); + + if (exchActions != null && err0 == null) + exchActions.completeRequestFutures(cctx, null); + + if (stateChangeExchange() && err0 == null) + cctx.kernalContext().state().onStateChangeExchangeDone(exchActions.stateChangeRequest()); + }); + if (super.onDone(res, err)) { - if (log.isDebugEnabled()) - log.debug("Completed partition exchange [localNode=" + cctx.localNodeId() + ", exchange= " + this + - ", durationFromInit=" + (U.currentTimeMillis() - initTs) + ']'); - else if(log.isInfoEnabled()) - log.info("Completed partition exchange [localNode=" + cctx.localNodeId() + ", exchange=" + shortInfo() + - ", topVer=" + topologyVersion() + ", durationFromInit=" + (U.currentTimeMillis() - initTs) + ']'); + afterLsnrCompleteFut.onDone(); + + if (log.isInfoEnabled()) { + log.info("Completed partition exchange [localNode=" + cctx.localNodeId() + + ", exchange=" + (log.isDebugEnabled() ? this : shortInfo()) + ", topVer=" + topologyVersion() + "]"); + + if (err == null) { + timeBag.finishGlobalStage("Exchange done"); + + // Collect all stages timings. 
+ List timings = timeBag.stagesTimings(); + + if (discoveryLag != null && discoveryLag.get1() != 0) + timings.add("Discovery lag=" + discoveryLag.get1() + + " ms, Latest started node id=" + discoveryLag.get2()); + + log.info(exchangeTimingsLogMessage("Exchange timings", timings)); + + List localTimings = timeBag.longestLocalStagesTimings(3); + + log.info(exchangeTimingsLogMessage("Exchange longest local stages", localTimings)); + } + } initFut.onDone(err == null); @@ -1965,14 +2258,13 @@ else if(log.isInfoEnabled()) } } - exchActions = null; + for (PartitionsExchangeAware comp : cctx.exchange().exchangeAwareComponents()) + comp.onDoneAfterTopologyUnlock(this); if (firstDiscoEvt instanceof DiscoveryCustomEvent) ((DiscoveryCustomEvent)firstDiscoEvt).customMessage(null); if (err == null) { - cctx.exchange().lastFinishedFuture(this); - if (exchCtx != null && (exchCtx.events().hasServerLeft() || exchCtx.events().hasServerJoin())) { ExchangeDiscoveryEvents evts = exchCtx.events(); @@ -1990,6 +2282,56 @@ else if(log.isInfoEnabled()) return false; } + /** + * Calculates discovery lag (Maximal difference between exchange start times across all nodes). + * + * @param declared Single messages that were expected to be received during exchange. + * @param merged Single messages from nodes that were merged during exchange. + * + * @return Pair with discovery lag and node id which started exchange later than others. 
+ */ + private T2 calculateDiscoveryLag( + Map declared, + Map merged + ) { + Map msgs = new HashMap<>(declared); + + msgs.putAll(merged); + + long minStartTime = startTime; + long maxStartTime = startTime; + UUID latestStartedNode = cctx.localNodeId(); + + for (Map.Entry msg : msgs.entrySet()) { + UUID nodeId = msg.getKey(); + long exchangeTime = msg.getValue().exchangeStartTime(); + + if (exchangeTime != 0) { + minStartTime = Math.min(minStartTime, exchangeTime); + maxStartTime = Math.max(maxStartTime, exchangeTime); + } + + if (maxStartTime == exchangeTime) + latestStartedNode = nodeId; + } + + return new T2<>(TimeUnit.NANOSECONDS.toMillis(maxStartTime - minStartTime), latestStartedNode); + } + + /** + * @param exchangeActions Exchange actions. + * @return Map of cache names and start descriptors. + */ + private Map resolveCacheRequests(ExchangeActions exchangeActions) { + if (exchangeActions == null) + return Collections.emptyMap(); + + return exchangeActions.cacheStartRequests() + .stream() + .map(ExchangeActions.CacheActionData::request) + .collect(Collectors.toMap(DynamicCacheChangeRequest::cacheName, r -> r)); + } + /** * Method waits for new caches registration and cache configuration persisting to disk. */ @@ -2002,6 +2344,8 @@ private void waitUntilNewCachesAreRegistered() { (int)(cctx.kernalContext().config().getFailureDetectionTimeout() / 2)); for (;;) { + cctx.exchange().exchangerBlockingSectionBegin(); + try { registerCachesFut.get(timeout, TimeUnit.SECONDS); @@ -2016,6 +2360,9 @@ private void waitUntilNewCachesAreRegistered() { "Probably disk is too busy or slow." 
+ "[caches=" + cacheNames + "]"); } + finally { + cctx.exchange().exchangerBlockingSectionEnd(); + } } } } @@ -2071,6 +2418,8 @@ public void cleanUp() { newCrdFut = null; exchangeLocE = null; exchangeGlobalExceptions.clear(); + if (finishState != null) + finishState.cleanUp(); } /** @@ -2445,9 +2794,6 @@ public void waitAndReplyToNode(final UUID nodeId, final GridDhtPartitionsSingleM */ private void processSingleMessage(UUID nodeId, GridDhtPartitionsSingleMessage msg) { if (msg.client()) { - if (msg.activeQueries() != null) - cctx.coordinators().processClientActiveQueries(nodeId, msg.activeQueries()); - waitAndReplyToNode(nodeId, msg); return; @@ -2543,6 +2889,7 @@ else if (log.isDebugEnabled()) } } } + if (allReceived) { if (!awaitSingleMapUpdates()) return; @@ -2775,24 +3122,31 @@ else if (cntr == maxCntr.cnt) * @param resTopVer Result topology version. */ private void detectLostPartitions(AffinityTopologyVersion resTopVer) { - boolean detected = false; - - long time = System.currentTimeMillis(); - - synchronized (cctx.exchange().interruptLock()) { - if (Thread.currentThread().isInterrupted()) - return; + AtomicInteger detected = new AtomicInteger(); - for (CacheGroupContext grp : cctx.cache().cacheGroups()) { - if (!grp.isLocal()) { - boolean detectedOnGrp = grp.topology().detectLostPartitions(resTopVer, events().lastEvent()); + try { + // Reserve at least 2 threads for system operations. + doInParallel( + U.availableThreadCount(cctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2), + cctx.kernalContext().getSystemExecutorService(), + cctx.cache().cacheGroups(), + grp -> { + if (!grp.isLocal()) { + // Do not trigger lost partition events on start. + boolean evt = !localJoinExchange() && !activateCluster(); + + if (grp.topology().detectLostPartitions(resTopVer, evt ? 
events().lastEvent() : null)) + detected.incrementAndGet(); + } - detected |= detectedOnGrp; - } - } + return null; + }); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); } - if (detected) { + if (detected.get() > 0) { if (log.isDebugEnabled()) log.debug("Partitions have been scheduled to resend [reason=" + "Lost partitions detect on " + resTopVer + "]"); @@ -2800,8 +3154,7 @@ private void detectLostPartitions(AffinityTopologyVersion resTopVer) { cctx.exchange().scheduleResendPartitions(); } - if (log.isInfoEnabled()) - log.info("Detecting lost partitions performed in " + (System.currentTimeMillis() - time) + " ms."); + timeBag.finishGlobalStage("Detect lost partitions"); } /** @@ -2810,22 +3163,29 @@ private void detectLostPartitions(AffinityTopologyVersion resTopVer) { private void resetLostPartitions(Collection cacheNames) { assert !exchCtx.mergeExchanges(); - synchronized (cctx.exchange().interruptLock()) { - if (Thread.currentThread().isInterrupted()) - return; - - for (CacheGroupContext grp : cctx.cache().cacheGroups()) { - if (grp.isLocal()) - continue; + try { + // Reserve at least 2 threads for system operations. 
+ U.doInParallel( + U.availableThreadCount(cctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2), + cctx.kernalContext().getSystemExecutorService(), + cctx.cache().cacheGroups(), + grp -> { + if (grp.isLocal()) + return null; - for (String cacheName : cacheNames) { - if (grp.hasCache(cacheName)) { - grp.topology().resetLostPartitions(initialVersion()); + for (String cacheName : cacheNames) { + if (grp.hasCache(cacheName)) { + grp.topology().resetLostPartitions(initialVersion()); - break; + break; + } } - } - } + + return null; + }); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); } } @@ -2897,6 +3257,10 @@ private void sendExchangeFailureMessage() { */ private void onAllReceived(@Nullable Collection sndResNodes) { try { + initFut.get(); + + timeBag.finishGlobalStage("Waiting for all single messages"); + assert crd.isLocal(); assert partHistSuppliers.isEmpty() : partHistSuppliers; @@ -2920,12 +3284,9 @@ private void onAllReceived(@Nullable Collection sndResNodes) { if (log.isInfoEnabled()) log.info("Coordinator received all messages, try merge [ver=" + initialVersion() + ']'); - long time = System.currentTimeMillis(); - boolean finish = cctx.exchange().mergeExchangesOnCoordinator(this); - if (log.isInfoEnabled()) - log.info("Exchanges merging performed in " + (System.currentTimeMillis() - time) + " ms."); + timeBag.finishGlobalStage("Exchanges merge"); if (!finish) return; @@ -2945,6 +3306,9 @@ private void onAllReceived(@Nullable Collection sndResNodes) { * @param sndResNodes Additional nodes to send finish message to. 
*/ private void finishExchangeOnCoordinator(@Nullable Collection sndResNodes) { + if (isDone() || !enterBusy()) + return; + try { if (!F.isEmpty(exchangeGlobalExceptions) && dynamicCacheStartExchange() && isRollbackSupported()) { sendExchangeFailureMessage(); @@ -2961,7 +3325,8 @@ private void finishExchangeOnCoordinator(@Nullable Collection sndRe Map idealAffDiff = null; - long time = System.currentTimeMillis(); + // Reserve at least 2 threads for system operations. + int parallelismLvl = U.availableThreadCount(cctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2); if (exchCtx.mergeExchanges()) { synchronized (mux) { @@ -2983,55 +3348,42 @@ private void finishExchangeOnCoordinator(@Nullable Collection sndRe else cctx.affinity().onServerJoinWithExchangeMergeProtocol(this, true); - for (CacheGroupDescriptor desc : cctx.affinity().cacheGroups().values()) { - if (desc.config().getCacheMode() == CacheMode.LOCAL) - continue; - - CacheGroupContext grp = cctx.cache().cacheGroup(desc.groupId()); - - GridDhtPartitionTopology top = grp != null ? grp.topology() : - cctx.exchange().clientTopology(desc.groupId(), events().discoveryCache()); - - top.beforeExchange(this, true, true); - } - } - if (log.isInfoEnabled()) - log.info("Affinity changes (coordinator) applied in " + (System.currentTimeMillis() - time) + " ms."); + doInParallel( + parallelismLvl, + cctx.kernalContext().getSystemExecutorService(), + cctx.affinity().cacheGroups().values(), + desc -> { + if (desc.config().getCacheMode() == CacheMode.LOCAL) + return null; - Map joinedNodeAff = null; + CacheGroupContext grp = cctx.cache().cacheGroup(desc.groupId()); - for (Map.Entry e : msgs.entrySet()) { - GridDhtPartitionsSingleMessage msg = e.getValue(); + GridDhtPartitionTopology top = grp != null ? 
grp.topology() : + cctx.exchange().clientTopology(desc.groupId(), events().discoveryCache()); - if (exchCtx.newMvccCoordinator()) - exchCtx.addActiveQueries(e.getKey(), msg.activeQueries()); + top.beforeExchange(this, true, true); - // Apply update counters after all single messages are received. - for (Map.Entry entry : msg.partitions().entrySet()) { - Integer grpId = entry.getKey(); + return null; + }); + } - CacheGroupContext grp = cctx.cache().cacheGroup(grpId); + timeBag.finishGlobalStage("Affinity recalculation (crd)"); - GridDhtPartitionTopology top = grp != null ? grp.topology() : - cctx.exchange().clientTopology(grpId, events().discoveryCache()); + Map joinedNodeAff = new ConcurrentHashMap<>(cctx.cache().cacheGroups().size()); - CachePartitionPartialCountersMap cntrs = msg.partitionUpdateCounters(grpId, - top.partitions()); + doInParallel( + parallelismLvl, + cctx.kernalContext().getSystemExecutorService(), + msgs.values(), + msg -> { + processSingleMessageOnCrdFinish(msg, joinedNodeAff); - if (cntrs != null) - top.collectUpdateCounters(cntrs); + return null; } + ); - Collection affReq = msg.cacheGroupsAffinityRequest(); - - if (affReq != null) { - joinedNodeAff = CacheGroupAffinityMessage.createAffinityMessages(cctx, - resTopVer, - affReq, - joinedNodeAff); - } - } + timeBag.finishGlobalStage("Collect update counters and create affinity messages"); validatePartitionsState(); @@ -3066,22 +3418,25 @@ else if (discoveryCustomMessage instanceof SnapshotDiscoveryMessage } // Recalculate new affinity based on partitions availability. 
- if (!exchCtx.mergeExchanges() && forceAffReassignment) + if (!exchCtx.mergeExchanges() && forceAffReassignment) { idealAffDiff = cctx.affinity().onCustomEventWithEnforcedAffinityReassignment(this); + timeBag.finishGlobalStage("Ideal affinity diff calculation (enforced)"); + } + for (CacheGroupContext grpCtx : cctx.cache().cacheGroups()) { if (!grpCtx.isLocal()) grpCtx.topology().applyUpdateCounters(); } + timeBag.finishGlobalStage("Apply update counters"); + updateLastVersion(cctx.versions().last()); cctx.versions().onExchange(lastVer.get().order()); IgniteProductVersion minVer = exchCtx.events().discoveryCache().minimumNodeVersion(); - time = System.currentTimeMillis(); - GridDhtPartitionsFullMessage msg = createPartitionsMessage(true, minVer.compareToIgnoreTimestamp(PARTIAL_COUNTERS_MAP_SINCE) >= 0); @@ -3098,8 +3453,7 @@ else if (forceAffReassignment) msg.prepareMarshal(cctx); - if (log.isInfoEnabled()) - log.info("Preparing Full Message performed in " + (System.currentTimeMillis() - time) + " ms."); + timeBag.finishGlobalStage("Full message preparing"); synchronized (mux) { finishState = new FinishState(crd.id(), resTopVer, msg); @@ -3110,8 +3464,6 @@ else if (forceAffReassignment) if (centralizedAff) { assert !exchCtx.mergeExchanges(); - time = System.currentTimeMillis(); - IgniteInternalFuture>>> fut = cctx.affinity().initAffinityOnNodeLeft(this); if (!fut.isDone()) { @@ -3123,9 +3475,6 @@ else if (forceAffReassignment) } else onAffinityInitialized(fut); - - if (log.isInfoEnabled()) - log.info("Centralized affinity changes are performed in " + (System.currentTimeMillis() - time) + " ms."); } else { Set nodes; @@ -3149,20 +3498,24 @@ else if (forceAffReassignment) } } } + else + mergedJoinExchMsgs0 = Collections.emptyMap(); if (!F.isEmpty(sndResNodes)) nodes.addAll(sndResNodes); } - time = System.currentTimeMillis(); - if (!nodes.isEmpty()) sendAllPartitions(msg, nodes, mergedJoinExchMsgs0, joinedNodeAff); - partitionsSent = true; + 
timeBag.finishGlobalStage("Full message sending"); - if (log.isInfoEnabled()) - log.info("Sending Full Message to all nodes performed in " + (System.currentTimeMillis() - time) + " ms."); + discoveryLag = calculateDiscoveryLag( + msgs, + mergedJoinExchMsgs0 + ); + + partitionsSent = true; if (!stateChangeExchange()) onDone(exchCtx.events().topologyVersion(), null); @@ -3179,8 +3532,6 @@ else if (forceAffReassignment) } if (stateChangeExchange()) { - IgniteCheckedException err = null; - StateChangeRequest req = exchActions.stateChangeRequest(); assert req != null : exchActions; @@ -3190,8 +3541,6 @@ else if (forceAffReassignment) if (!F.isEmpty(exchangeGlobalExceptions)) { stateChangeErr = true; - err = new IgniteCheckedException("Cluster state change failed."); - cctx.kernalContext().state().onStateChangeError(exchangeGlobalExceptions, req); } else { @@ -3222,8 +3571,10 @@ else if (forceAffReassignment) cctx.discovery().sendCustomEvent(stateFinishMsg); + timeBag.finishGlobalStage("State finish message sending"); + if (!centralizedAff) - onDone(exchCtx.events().topologyVersion(), err); + onDone(exchCtx.events().topologyVersion(), null); } } catch (IgniteCheckedException e) { @@ -3232,77 +3583,169 @@ else if (forceAffReassignment) else onDone(e); } + finally { + leaveBusy(); + } } /** - * Validates that partition update counters and cache sizes for all caches are consistent. + * @param msg Single message to process. + * @param messageAccumulator Accumulator for affinity messages that need to be sent afterwards.
*/ - private void validatePartitionsState() { - long time = System.currentTimeMillis(); + private void processSingleMessageOnCrdFinish( + GridDhtPartitionsSingleMessage msg, + Map messageAccumulator + ) { + for (Map.Entry e : msg.partitions().entrySet()) { + Integer grpId = e.getKey(); - for (Map.Entry e : cctx.affinity().cacheGroups().entrySet()) { - CacheGroupDescriptor grpDesc = e.getValue(); - if (grpDesc.config().getCacheMode() == CacheMode.LOCAL) - continue; + CacheGroupContext grp = cctx.cache().cacheGroup(grpId); - int grpId = e.getKey(); + GridDhtPartitionTopology top = grp != null + ? grp.topology() + : cctx.exchange().clientTopology(grpId, events().discoveryCache()); - CacheGroupContext grpCtx = cctx.cache().cacheGroup(grpId); + CachePartitionPartialCountersMap cntrs = msg.partitionUpdateCounters(grpId, top.partitions()); - GridDhtPartitionTopology top = grpCtx != null ? - grpCtx.topology() : - cctx.exchange().clientTopology(grpId, events().discoveryCache()); + if (cntrs != null) + top.collectUpdateCounters(cntrs); + } - // Do not validate read or write through caches or caches with disabled rebalance - // or ExpiryPolicy is set or validation is disabled. - if (grpCtx == null - || grpCtx.config().isReadThrough() - || grpCtx.config().isWriteThrough() - || grpCtx.config().getCacheStoreFactory() != null - || grpCtx.config().getRebalanceDelay() == -1 - || grpCtx.config().getRebalanceMode() == CacheRebalanceMode.NONE - || grpCtx.config().getExpiryPolicyFactory() == null - || SKIP_PARTITION_SIZE_VALIDATION) - continue; + Collection affReq = msg.cacheGroupsAffinityRequest(); - try { - validator.validatePartitionCountersAndSizes(this, top, msgs); - } - catch (IgniteCheckedException ex) { - log.warning("Partition states validation has failed for group: " + grpDesc.cacheOrGroupName() + ". 
" + ex.getMessage()); - // TODO: Handle such errors https://issues.apache.org/jira/browse/IGNITE-7833 - } + if (affReq != null) + CacheGroupAffinityMessage.createAffinityMessages( + cctx, + exchCtx.events().topologyVersion(), + affReq, + messageAccumulator + ); + } + + /** + * Collects non local cache group descriptors. + * + * @return Collection of non local cache group descriptors. + */ + private List nonLocalCacheGroupDescriptors() { + return cctx.affinity().cacheGroups().values().stream() + .filter(grpDesc -> grpDesc.config().getCacheMode() != CacheMode.LOCAL) + .collect(Collectors.toList()); + } + + /** + * Collects non local cache groups. + * + * @return Collection of non local cache groups. + */ + private List nonLocalCacheGroups() { + return cctx.cache().cacheGroups().stream() + .filter(grp -> !grp.isLocal() && !cacheGroupStopping(grp.groupId())) + .collect(Collectors.toList()); + } + + /** + * Validates that partition update counters and cache sizes for all caches are consistent. + */ + private void validatePartitionsState() { + try { + U.doInParallel( + cctx.kernalContext().getSystemExecutorService(), + nonLocalCacheGroupDescriptors(), + grpDesc -> { + CacheGroupContext grpCtx = cctx.cache().cacheGroup(grpDesc.groupId()); + + GridDhtPartitionTopology top = grpCtx != null + ? grpCtx.topology() + : cctx.exchange().clientTopology(grpDesc.groupId(), events().discoveryCache()); + + // Do not validate read or write through caches or caches with disabled rebalance + // or ExpiryPolicy is set or validation is disabled. 
+ if (grpCtx == null + || grpCtx.config().isReadThrough() + || grpCtx.config().isWriteThrough() + || grpCtx.config().getCacheStoreFactory() != null + || grpCtx.config().getRebalanceDelay() == -1 + || grpCtx.config().getRebalanceMode() == CacheRebalanceMode.NONE + || grpCtx.config().getExpiryPolicyFactory() == null + || SKIP_PARTITION_SIZE_VALIDATION) + return null; + + try { + validator.validatePartitionCountersAndSizes(GridDhtPartitionsExchangeFuture.this, top, msgs); + } + catch (IgniteCheckedException ex) { + log.warning("Partition states validation has failed for group: " + grpCtx.cacheOrGroupName() + ". " + ex.getMessage()); + // TODO: Handle such errors https://issues.apache.org/jira/browse/IGNITE-7833 + } + + return null; + } + ); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to validate partitions state", e); } - if (log.isInfoEnabled()) - log.info("Partitions validation performed in " + (System.currentTimeMillis() - time) + " ms."); + timeBag.finishGlobalStage("Validate partitions states"); } /** * */ private void assignPartitionsStates() { - long time = System.currentTimeMillis(); + try { + U.doInParallel( + cctx.kernalContext().getSystemExecutorService(), + nonLocalCacheGroupDescriptors(), + grpDesc -> { + CacheGroupContext grpCtx = cctx.cache().cacheGroup(grpDesc.groupId()); + + GridDhtPartitionTopology top = grpCtx != null + ? 
grpCtx.topology() + : cctx.exchange().clientTopology(grpDesc.groupId(), events().discoveryCache()); + + if (!CU.isPersistentCache(grpDesc.config(), cctx.gridConfig().getDataStorageConfiguration())) + assignPartitionSizes(top); + else + assignPartitionStates(top); - for (Map.Entry e : cctx.affinity().cacheGroups().entrySet()) { - CacheGroupDescriptor grpDesc = e.getValue(); - if (grpDesc.config().getCacheMode() == CacheMode.LOCAL) - continue; + return null; + } + ); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to assign partition states", e); + } - CacheGroupContext grpCtx = cctx.cache().cacheGroup(e.getKey()); + timeBag.finishGlobalStage("Assign partitions states"); + } - GridDhtPartitionTopology top = grpCtx != null ? - grpCtx.topology() : - cctx.exchange().clientTopology(e.getKey(), events().discoveryCache()); + /** + * Removes gaps in the local update counters. Gaps in update counters are possible on backup node when primary + * failed to send update counter deltas to backup. + */ + private void finalizePartitionCounters() { + // Reserve at least 2 threads for system operations. 
+ int parallelismLvl = U.availableThreadCount(cctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2); - if (!CU.isPersistentCache(grpDesc.config(), cctx.gridConfig().getDataStorageConfiguration())) - assignPartitionSizes(top); - else - assignPartitionStates(top); + try { + U.doInParallel( + parallelismLvl, + cctx.kernalContext().getSystemExecutorService(), + nonLocalCacheGroups(), + grp -> { + grp.topology().finalizeUpdateCounters(); + + return null; + } + ); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to finalize partition counters", e); } - if (log.isInfoEnabled()) - log.info("Partitions assignment performed in " + (System.currentTimeMillis() - time) + " ms."); + timeBag.finishGlobalStage("Finalize update counters"); } /** @@ -3545,6 +3988,8 @@ private void processFullMessage(boolean checkCrd, ClusterNode node, GridDhtParti assert exchId.equals(msg.exchangeId()) : msg; assert msg.lastVersion() != null : msg; + timeBag.finishGlobalStage("Waiting for Full message"); + if (checkCrd) { assert node != null; @@ -3620,8 +4065,6 @@ private void processFullMessage(boolean checkCrd, ClusterNode node, GridDhtParti AffinityTopologyVersion resTopVer = initialVersion(); - long time = System.currentTimeMillis(); - if (exchCtx.mergeExchanges()) { if (msg.resultTopologyVersion() != null && !initialVersion().equals(msg.resultTopologyVersion())) { if (log.isInfoEnabled()) { @@ -3665,8 +4108,7 @@ else if (localJoinExchange() && !exchCtx.fetchAffinityOnJoin()) else if (forceAffReassignment) cctx.affinity().applyAffinityFromFullMessage(this, msg); - if (log.isInfoEnabled()) - log.info("Affinity changes applied in " + (System.currentTimeMillis() - time) + " ms."); + timeBag.finishGlobalStage("Affinity recalculation"); if (dynamicCacheStartExchange() && !F.isEmpty(exchangeGlobalExceptions)) { assert cctx.localNode().isClient(); @@ -3685,15 +4127,10 @@ else if (forceAffReassignment) updatePartitionFullMap(resTopVer, msg); - IgniteCheckedException err = null; 
- - if (stateChangeExchange() && !F.isEmpty(msg.getErrorsMap())) { - err = new IgniteCheckedException("Cluster state change failed"); - + if (stateChangeExchange() && !F.isEmpty(msg.getErrorsMap())) cctx.kernalContext().state().onStateChangeError(msg.getErrorsMap(), exchActions.stateChangeRequest()); - } - onDone(resTopVer, err); + onDone(resTopVer, null); } catch (IgniteCheckedException e) { onDone(e); @@ -3713,48 +4150,57 @@ private void updatePartitionFullMap(AffinityTopologyVersion resTopVer, GridDhtPa partHistSuppliers.putAll(msg.partitionHistorySuppliers()); - long time = System.currentTimeMillis(); + // Reserve at least 2 threads for system operations. + int parallelismLvl = U.availableThreadCount(cctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2); - for (Map.Entry entry : msg.partitions().entrySet()) { - Integer grpId = entry.getKey(); + try { + Map> partsSizes = msg.partitionSizes(cctx); - CacheGroupContext grp = cctx.cache().cacheGroup(grpId); + doInParallel( + parallelismLvl, + cctx.kernalContext().getSystemExecutorService(), + msg.partitions().keySet(), grpId -> { + CacheGroupContext grp = cctx.cache().cacheGroup(grpId); - if (grp != null) { - CachePartitionFullCountersMap cntrMap = msg.partitionUpdateCounters(grpId, - grp.topology().partitions()); + if (grp != null) { + CachePartitionFullCountersMap cntrMap = msg.partitionUpdateCounters(grpId, + grp.topology().partitions()); - grp.topology().update(resTopVer, - entry.getValue(), - cntrMap, - msg.partsToReload(cctx.localNodeId(), grpId), - msg.partitionSizes(grpId), - null); - } - else { - ClusterNode oldest = cctx.discovery().oldestAliveServerNode(AffinityTopologyVersion.NONE); + grp.topology().update(resTopVer, + msg.partitions().get(grpId), + cntrMap, + msg.partsToReload(cctx.localNodeId(), grpId), + partsSizes.getOrDefault(grpId, Collections.emptyMap()), + null); + } + else { + ClusterNode oldest = cctx.discovery().oldestAliveServerNode(AffinityTopologyVersion.NONE); - if (oldest != null && 
oldest.isLocal()) { - GridDhtPartitionTopology top = cctx.exchange().clientTopology(grpId, events().discoveryCache()); + if (oldest != null && oldest.isLocal()) { + GridDhtPartitionTopology top = cctx.exchange().clientTopology(grpId, events().discoveryCache()); - CachePartitionFullCountersMap cntrMap = msg.partitionUpdateCounters(grpId, - top.partitions()); + CachePartitionFullCountersMap cntrMap = msg.partitionUpdateCounters(grpId, + top.partitions()); - top.update(resTopVer, - entry.getValue(), - cntrMap, - Collections.emptySet(), - null, - null); - } - } + top.update(resTopVer, + msg.partitions().get(grpId), + cntrMap, + Collections.emptySet(), + null, + null); + } + } + + return null; + }); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); } partitionsReceived = true; - if (log.isInfoEnabled()) - log.info("Full map updating for " + msg.partitions().size() - + " groups performed in " + (System.currentTimeMillis() - time) + " ms."); + timeBag.finishGlobalStage("Full map updating"); } /** @@ -3948,6 +4394,8 @@ private void onAllServersLeft() { grp.affinity().idealAssignment(affAssignment); grp.affinity().initialize(initialVersion(), affAssignment); + + cctx.exchange().exchangerUpdateHeartbeat(); } } @@ -3960,8 +4408,6 @@ public void onNodeLeft(final ClusterNode node) { if (isDone() || !enterBusy()) return; - cctx.mvcc().removeExplicitNodeLocks(node.id(), initialVersion()); - try { onDiscoveryEvent(new IgniteRunnable() { @Override public void run() { @@ -4056,7 +4502,16 @@ public void onNodeLeft(final ClusterNode node) { cctx.kernalContext().closure().callLocal(new Callable() { @Override public Void call() throws Exception { - newCrdFut.init(GridDhtPartitionsExchangeFuture.this); + try { + newCrdFut.init(GridDhtPartitionsExchangeFuture.this); + } + catch (Throwable t) { + U.error(log, "Failed to initialize new coordinator future [topVer=" + initialVersion() + "]", t); + + cctx.kernalContext().failure().process(new 
FailureContext(FailureType.CRITICAL_ERROR, t)); + + throw t; + } newCrdFut.listen(new CI1() { @Override public void apply(IgniteInternalFuture fut) { @@ -4178,21 +4633,38 @@ private void onBecomeCoordinator(InitNewCoordinatorFuture newCrdFut) { Map msgs = newCrdFut.messages(); if (!F.isEmpty(msgs)) { - Map joinedNodeAff = null; - - for (Map.Entry e : msgs.entrySet()) { - this.msgs.put(e.getKey().id(), e.getValue()); + Map joinedNodeAff = new ConcurrentHashMap<>(); - GridDhtPartitionsSingleMessage msg = e.getValue(); + // Reserve at least 2 threads for system operations. + int parallelismLvl = U.availableThreadCount(cctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2); - Collection affReq = msg.cacheGroupsAffinityRequest(); + try { + U.doInParallel( + parallelismLvl, + cctx.kernalContext().getSystemExecutorService(), + msgs.entrySet(), + entry -> { + this.msgs.put(entry.getKey().id(), entry.getValue()); + + GridDhtPartitionsSingleMessage msg = entry.getValue(); + + Collection affReq = msg.cacheGroupsAffinityRequest(); + + if (!F.isEmpty(affReq)) { + CacheGroupAffinityMessage.createAffinityMessages( + cctx, + fullMsg.resultTopologyVersion(), + affReq, + joinedNodeAff + ); + } - if (!F.isEmpty(affReq)) { - joinedNodeAff = CacheGroupAffinityMessage.createAffinityMessages(cctx, - fullMsg.resultTopologyVersion(), - affReq, - joinedNodeAff); - } + return null; + } + ); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); } Map mergedJoins = newCrdFut.mergedJoinExchangeMessages(); @@ -4315,12 +4787,12 @@ public boolean reconnectOnError(Throwable e) { * @return {@code True} If partition changes triggered by receiving Single/Full messages are not finished yet. */ public boolean partitionChangesInProgress() { - boolean isCoordinator = crd.equals(cctx.localNode()); + ClusterNode crd0 = crd; - if (isCoordinator) - return !partitionsSent; - else - return !partitionsReceived; + if (crd0 == null) + return false; + + return crd0.equals(cctx.localNode()) ? 
!partitionsSent : !partitionsReceived; } /** @@ -4354,7 +4826,7 @@ public synchronized boolean addOrMergeDelayedFullMessage(ClusterNode node, GridD }); } else - delayedLatestMsg.merge(fullMsg); + delayedLatestMsg.merge(fullMsg, cctx.discovery()); return true; } @@ -4508,6 +4980,14 @@ private static class FinishState { this.resTopVer = resTopVer; this.msg = msg; } + + /** + * Cleans up resources to avoid excessive memory usage. + */ + public void cleanUp() { + if (msg != null) + msg.cleanUp(); + } } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsFullMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsFullMessage.java index b84dc79e821f3..c76df986c2e52 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsFullMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsFullMessage.java @@ -19,18 +19,26 @@ import java.io.Externalizable; import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; +import java.util.Iterator; import java.util.Map; import java.util.Set; import java.util.UUID; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; +import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.GridDirectMap; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.managers.communication.GridIoPolicy; +import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.util.lang.IgniteThrowableConsumer; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.T2; @@ -93,11 +101,6 @@ public class GridDhtPartitionsFullMessage extends GridDhtPartitionsAbstractMessa /** Serialized partitions that must be cleared and re-loaded. */ private byte[] partsToReloadBytes; - /** Partitions sizes. */ - @GridToStringInclude - @GridDirectTransient - private Map> partsSizes; - /** Serialized partitions sizes. */ private byte[] partsSizesBytes; @@ -112,10 +115,6 @@ public class GridDhtPartitionsFullMessage extends GridDhtPartitionsAbstractMessa /** */ private byte[] errsBytes; - /** */ - @GridDirectTransient - private transient boolean compress; - /** */ private AffinityTopologyVersion resTopVer; @@ -172,12 +171,10 @@ public GridDhtPartitionsFullMessage(@Nullable GridDhtPartitionExchangeId id, cp.partHistSuppliersBytes = partHistSuppliersBytes; cp.partsToReload = partsToReload; cp.partsToReloadBytes = partsToReloadBytes; - cp.partsSizes = partsSizes; cp.partsSizesBytes = partsSizesBytes; cp.topVer = topVer; cp.errs = errs; cp.errsBytes = errsBytes; - cp.compress = compress; cp.resTopVer = resTopVer; cp.joinedNodeAff = joinedNodeAff; cp.idealAffDiff = idealAffDiff; @@ -243,13 +240,6 @@ void idealAffinityDiff(Map idealAffDiff) { return 0; } - /** - * @param compress {@code True} if it is possible to use compression for message. - */ - public void compress(boolean compress) { - this.compress = compress; - } - /** * @return Local partitions. 
*/ @@ -283,7 +273,7 @@ public void addFullPartitionsMap(int grpId, GridDhtPartitionFullMap fullMap, @Nu parts.put(grpId, fullMap); if (dupDataCache != null) { - assert compress; + assert compressed(); assert parts.containsKey(dupDataCache); if (dupPartsData == null) @@ -354,32 +344,43 @@ public Set partsToReload(UUID nodeId, int grpId) { } /** - * Adds partition sizes map for specified {@code grpId} to the current message. + * Supplies partition sizes map for all cache groups. * - * @param grpId Group id. - * @param partSizesMap Partition sizes map. + * @param ctx Cache context. + * @param partsSizes Partitions sizes map. */ - public void addPartitionSizes(int grpId, Map partSizesMap) { - if (partSizesMap.isEmpty()) - return; + public void partitionSizes(GridCacheSharedContext ctx, Map> partsSizes) { + try { + byte[] marshalled = U.marshal(ctx, partsSizes); - if (partsSizes == null) - partsSizes = new HashMap<>(); + if (compressed()) + marshalled = U.zip(marshalled, ctx.gridConfig().getNetworkCompressionLevel()); - partsSizes.put(grpId, partSizesMap); + partsSizesBytes = marshalled; + } + catch (IgniteCheckedException ex) { + throw new IgniteException(ex); + } } /** - * Returns partition sizes map for specified {@code grpId}. + * Returns partition sizes map for all cache groups. * - * @param grpId Group id. - * @return Partition sizes map (partId, partSize). + * @param ctx Cache context. + * @return Partition sizes map (grpId, (partId, partSize)). */ - public Map partitionSizes(int grpId) { - if (partsSizes == null) + public Map> partitionSizes(GridCacheSharedContext ctx) { + if (partsSizesBytes == null) return Collections.emptyMap(); - return partsSizes.getOrDefault(grpId, Collections.emptyMap()); + try { + return compressed() + ? 
U.unmarshalZip(ctx.marshaller(), partsSizesBytes, ctx.deploy().globalLoader()) + : U.unmarshal(ctx, partsSizesBytes, ctx.deploy().globalLoader()); + } + catch (IgniteCheckedException ex) { + throw new IgniteException(ex); + } } /** @@ -408,69 +409,63 @@ void setErrorsMap(Map errs) { (!F.isEmpty(errs) && errsBytes == null); if (marshal) { - byte[] partsBytes0 = null; - byte[] partCntrsBytes0 = null; - byte[] partCntrsBytes20 = null; - byte[] partHistSuppliersBytes0 = null; - byte[] partsToReloadBytes0 = null; - byte[] partsSizesBytes0 = null; - byte[] errsBytes0 = null; + // Reserve at least 2 threads for system operations. + int parallelismLvl = U.availableThreadCount(ctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2); + + Collection objectsToMarshall = new ArrayList<>(); if (!F.isEmpty(parts) && partsBytes == null) - partsBytes0 = U.marshal(ctx, parts); + objectsToMarshall.add(parts); if (partCntrs != null && !partCntrs.empty() && partCntrsBytes == null) - partCntrsBytes0 = U.marshal(ctx, partCntrs); + objectsToMarshall.add(partCntrs); if (partCntrs2 != null && !partCntrs2.empty() && partCntrsBytes2 == null) - partCntrsBytes20 = U.marshal(ctx, partCntrs2); + objectsToMarshall.add(partCntrs2); if (partHistSuppliers != null && partHistSuppliersBytes == null) - partHistSuppliersBytes0 = U.marshal(ctx, partHistSuppliers); + objectsToMarshall.add(partHistSuppliers); if (partsToReload != null && partsToReloadBytes == null) - partsToReloadBytes0 = U.marshal(ctx, partsToReload); - - if (partsSizes != null && partsSizesBytes == null) - partsSizesBytes0 = U.marshal(ctx, partsSizes); + objectsToMarshall.add(partsToReload); if (!F.isEmpty(errs) && errsBytes == null) - errsBytes0 = U.marshal(ctx, errs); - - if (compress) { - assert !compressed(); - - try { - byte[] partsBytesZip = U.zip(partsBytes0); - byte[] partCntrsBytesZip = U.zip(partCntrsBytes0); - byte[] partCntrsBytes2Zip = U.zip(partCntrsBytes20); - byte[] partHistSuppliersBytesZip = U.zip(partHistSuppliersBytes0); - 
byte[] partsToReloadBytesZip = U.zip(partsToReloadBytes0); - byte[] partsSizesBytesZip = U.zip(partsSizesBytes0); - byte[] exsBytesZip = U.zip(errsBytes0); - - partsBytes0 = partsBytesZip; - partCntrsBytes0 = partCntrsBytesZip; - partCntrsBytes20 = partCntrsBytes2Zip; - partHistSuppliersBytes0 = partHistSuppliersBytesZip; - partsToReloadBytes0 = partsToReloadBytesZip; - partsSizesBytes0 = partsSizesBytesZip; - errsBytes0 = exsBytesZip; - - compressed(true); - } - catch (IgniteCheckedException e) { - U.error(ctx.logger(getClass()), "Failed to compress partitions data: " + e, e); - } - } + objectsToMarshall.add(errs); + + Collection marshalled = U.doInParallel( + parallelismLvl, + ctx.kernalContext().getSystemExecutorService(), + objectsToMarshall, + new IgniteThrowableConsumer() { + @Override public byte[] accept(Object payload) throws IgniteCheckedException { + byte[] marshalled = U.marshal(ctx, payload); + + if(compressed()) + marshalled = U.zip(marshalled, ctx.gridConfig().getNetworkCompressionLevel()); + + return marshalled; + } + }); + + Iterator iterator = marshalled.iterator(); + + if (!F.isEmpty(parts) && partsBytes == null) + partsBytes = iterator.next(); + + if (partCntrs != null && !partCntrs.empty() && partCntrsBytes == null) + partCntrsBytes = iterator.next(); + + if (partCntrs2 != null && !partCntrs2.empty() && partCntrsBytes2 == null) + partCntrsBytes2 = iterator.next(); - partsBytes = partsBytes0; - partCntrsBytes = partCntrsBytes0; - partCntrsBytes2 = partCntrsBytes20; - partHistSuppliersBytes = partHistSuppliersBytes0; - partsToReloadBytes = partsToReloadBytes0; - partsSizesBytes = partsSizesBytes0; - errsBytes = errsBytes0; + if (partHistSuppliers != null && partHistSuppliersBytes == null) + partHistSuppliersBytes = iterator.next(); + + if (partsToReload != null && partsToReloadBytes == null) + partsToReloadBytes = iterator.next(); + + if (!F.isEmpty(errs) && errsBytes == null) + errsBytes = iterator.next(); } } @@ -492,11 +487,48 @@ public void 
topologyVersion(AffinityTopologyVersion topVer) { @Override public void finishUnmarshal(GridCacheSharedContext ctx, ClassLoader ldr) throws IgniteCheckedException { super.finishUnmarshal(ctx, ldr); + ClassLoader classLoader = U.resolveClassLoader(ldr, ctx.gridConfig()); + + Collection objectsToUnmarshall = new ArrayList<>(); + + // Reserve at least 2 threads for system operations. + int parallelismLvl = U.availableThreadCount(ctx.kernalContext(), GridIoPolicy.SYSTEM_POOL, 2); + + if (partsBytes != null && parts == null) + objectsToUnmarshall.add(partsBytes); + + if (partCntrsBytes != null && partCntrs == null) + objectsToUnmarshall.add(partCntrsBytes); + + if (partCntrsBytes2 != null && partCntrs2 == null) + objectsToUnmarshall.add(partCntrsBytes2); + + if (partHistSuppliersBytes != null && partHistSuppliers == null) + objectsToUnmarshall.add(partHistSuppliersBytes); + + if (partsToReloadBytes != null && partsToReload == null) + objectsToUnmarshall.add(partsToReloadBytes); + + if (errsBytes != null && errs == null) + objectsToUnmarshall.add(errsBytes); + + Collection unmarshalled = U.doInParallel( + parallelismLvl, + ctx.kernalContext().getSystemExecutorService(), + objectsToUnmarshall, + new IgniteThrowableConsumer() { + @Override public Object accept(byte[] binary) throws IgniteCheckedException { + return compressed() + ? 
U.unmarshalZip(ctx.marshaller(), binary, classLoader) + : U.unmarshal(ctx, binary, classLoader); + } + } + ); + + Iterator iterator = unmarshalled.iterator(); + if (partsBytes != null && parts == null) { - if (compressed()) - parts = U.unmarshalZip(ctx.marshaller(), partsBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - else - parts = U.unmarshal(ctx, partsBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); + parts = (Map)iterator.next(); if (dupPartsData != null) { assert parts != null; @@ -526,53 +558,35 @@ public void topologyVersion(AffinityTopologyVersion topVer) { } } - if (parts == null) - parts = new HashMap<>(); + if (partCntrsBytes != null && partCntrs == null) + partCntrs = (IgniteDhtPartitionCountersMap)iterator.next(); - if (partCntrsBytes != null && partCntrs == null) { - if (compressed()) - partCntrs = U.unmarshalZip(ctx.marshaller(), partCntrsBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - else - partCntrs = U.unmarshal(ctx, partCntrsBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - } + if (partCntrsBytes2 != null && partCntrs2 == null) + partCntrs2 = (IgniteDhtPartitionCountersMap2)iterator.next(); - if (partCntrsBytes2 != null && partCntrs2 == null) { - if (compressed()) - partCntrs2 = U.unmarshalZip(ctx.marshaller(), partCntrsBytes2, U.resolveClassLoader(ldr, ctx.gridConfig())); - else - partCntrs2 = U.unmarshal(ctx, partCntrsBytes2, U.resolveClassLoader(ldr, ctx.gridConfig())); - } + if (partHistSuppliersBytes != null && partHistSuppliers == null) + partHistSuppliers = (IgniteDhtPartitionHistorySuppliersMap)iterator.next(); - if (partHistSuppliersBytes != null && partHistSuppliers == null) { - if (compressed()) - partHistSuppliers = U.unmarshalZip(ctx.marshaller(), partHistSuppliersBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - else - partHistSuppliers = U.unmarshal(ctx, partHistSuppliersBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - } + if (partsToReloadBytes != null && partsToReload == null) + partsToReload = 
(IgniteDhtPartitionsToReloadMap)iterator.next(); - if (partsToReloadBytes != null && partsToReload == null) { - if (compressed()) - partsToReload = U.unmarshalZip(ctx.marshaller(), partsToReloadBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - else - partsToReload = U.unmarshal(ctx, partsToReloadBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - } + if (errsBytes != null && errs == null) + errs = (Map)iterator.next(); - if (partsSizesBytes != null && partsSizes == null) { - if (compressed()) - partsSizes = U.unmarshalZip(ctx.marshaller(), partsSizesBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - else - partsSizes = U.unmarshal(ctx, partsSizesBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - } + if (parts == null) + parts = new HashMap<>(); if (partCntrs == null) partCntrs = new IgniteDhtPartitionCountersMap(); - if (errsBytes != null && errs == null) { - if (compressed()) - errs = U.unmarshalZip(ctx.marshaller(), errsBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - else - errs = U.unmarshal(ctx, errsBytes, U.resolveClassLoader(ldr, ctx.gridConfig())); - } + if (partCntrs2 == null) + partCntrs2 = new IgniteDhtPartitionCountersMap2(); + + if(partHistSuppliers == null) + partHistSuppliers = new IgniteDhtPartitionHistorySuppliersMap(); + + if(partsToReload == null) + partsToReload = new IgniteDhtPartitionsToReloadMap(); if (errs == null) errs = new HashMap<>(); @@ -593,74 +607,74 @@ public void topologyVersion(AffinityTopologyVersion topVer) { } switch (writer.state()) { - case 5: + case 6: if (!writer.writeMap("dupPartsData", dupPartsData, MessageCollectionItemType.INT, MessageCollectionItemType.INT)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeByteArray("errsBytes", errsBytes)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeMap("idealAffDiff", idealAffDiff, MessageCollectionItemType.INT, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 8: + case 9: if 
(!writer.writeMap("joinedNodeAff", joinedNodeAff, MessageCollectionItemType.INT, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeByteArray("partCntrsBytes", partCntrsBytes)) return false; writer.incrementState(); - case 10: + case 11: if (!writer.writeByteArray("partCntrsBytes2", partCntrsBytes2)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeByteArray("partHistSuppliersBytes", partHistSuppliersBytes)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeByteArray("partsBytes", partsBytes)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeByteArray("partsSizesBytes", partsSizesBytes)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeByteArray("partsToReloadBytes", partsToReloadBytes)) return false; writer.incrementState(); - case 15: - if (!writer.writeMessage("resTopVer", resTopVer)) + case 16: + if (!writer.writeAffinityTopologyVersion("resTopVer", resTopVer)) return false; writer.incrementState(); - case 16: - if (!writer.writeMessage("topVer", topVer)) + case 17: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -681,7 +695,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { return false; switch (reader.state()) { - case 5: + case 6: dupPartsData = reader.readMap("dupPartsData", MessageCollectionItemType.INT, MessageCollectionItemType.INT, false); if (!reader.isLastRead()) @@ -689,7 +703,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 6: + case 7: errsBytes = reader.readByteArray("errsBytes"); if (!reader.isLastRead()) @@ -697,7 +711,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 7: + case 8: idealAffDiff = reader.readMap("idealAffDiff", MessageCollectionItemType.INT, MessageCollectionItemType.MSG, false); if 
(!reader.isLastRead()) @@ -705,7 +719,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 8: + case 9: joinedNodeAff = reader.readMap("joinedNodeAff", MessageCollectionItemType.INT, MessageCollectionItemType.MSG, false); if (!reader.isLastRead()) @@ -713,7 +727,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 9: + case 10: partCntrsBytes = reader.readByteArray("partCntrsBytes"); if (!reader.isLastRead()) @@ -721,7 +735,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 10: + case 11: partCntrsBytes2 = reader.readByteArray("partCntrsBytes2"); if (!reader.isLastRead()) @@ -729,7 +743,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 11: + case 12: partHistSuppliersBytes = reader.readByteArray("partHistSuppliersBytes"); if (!reader.isLastRead()) @@ -737,7 +751,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 12: + case 13: partsBytes = reader.readByteArray("partsBytes"); if (!reader.isLastRead()) @@ -745,7 +759,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 13: + case 14: partsSizesBytes = reader.readByteArray("partsSizesBytes"); if (!reader.isLastRead()) @@ -753,7 +767,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 14: + case 15: partsToReloadBytes = reader.readByteArray("partsToReloadBytes"); if (!reader.isLastRead()) @@ -761,16 +775,16 @@ public void topologyVersion(AffinityTopologyVersion topVer) { reader.incrementState(); - case 15: - resTopVer = reader.readMessage("resTopVer"); + case 16: + resTopVer = reader.readAffinityTopologyVersion("resTopVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 16: - topVer = reader.readMessage("topVer"); + case 17: + topVer = 
reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; @@ -789,7 +803,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 17; + return 18; } /** {@inheritDoc} */ @@ -803,7 +817,7 @@ public void topologyVersion(AffinityTopologyVersion topVer) { * * @param other Other full message. */ - public void merge(GridDhtPartitionsFullMessage other) { + public void merge(GridDhtPartitionsFullMessage other, GridDiscoveryManager discovery) { assert other.exchangeId() == null && exchangeId() == null : "Both current and merge full message must have exchangeId == null" + other.exchangeId() + "," + exchangeId(); @@ -814,8 +828,32 @@ public void merge(GridDhtPartitionsFullMessage other) { GridDhtPartitionFullMap currMap = partitions().get(grpId); - if (currMap == null || updMap.compareTo(currMap) >= 0) + if (currMap == null) partitions().put(grpId, updMap); + else { + ClusterNode currentMapSentBy = discovery.node(currMap.nodeId()); + ClusterNode newMapSentBy = discovery.node(updMap.nodeId()); + + if (newMapSentBy == null) + return; + + if (currentMapSentBy == null || newMapSentBy.order() > currentMapSentBy.order() || updMap.compareTo(currMap) >= 0) + partitions().put(grpId, updMap); + } } } + + /** + * Cleans up resources to avoid excessive memory usage. 
+ */ + public void cleanUp() { + partsBytes = null; + partCntrs2 = null; + partCntrsBytes = null; + partCntrsBytes2 = null; + partHistSuppliersBytes = null; + partsToReloadBytes = null; + partsSizesBytes = null; + errsBytes = null; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleMessage.java index 088fb31d7f715..6e075ac9335b6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleMessage.java @@ -30,7 +30,6 @@ import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; -import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.T2; @@ -95,23 +94,19 @@ public class GridDhtPartitionsSingleMessage extends GridDhtPartitionsAbstractMes /** */ private boolean client; - /** */ - @GridDirectTransient - private transient boolean compress; - /** */ @GridDirectCollection(Integer.class) private Collection grpsAffRequest; + /** Start time of exchange on node which sent this message in nanoseconds. */ + private long exchangeStartTime; + /** * Exchange finish message, sent to new coordinator when it tries to * restore state after previous coordinator failed during exchange. 
*/ private GridDhtPartitionsFullMessage finishMsg; - /** */ - private GridLongList activeQryTrackers; - /** * Required by {@link Externalizable}. */ @@ -131,22 +126,9 @@ public GridDhtPartitionsSingleMessage(GridDhtPartitionExchangeId exchId, boolean compress) { super(exchId, lastVer); - this.client = client; - this.compress = compress; - } - - /** - * @return Active queries started with previous coordinator. - */ - GridLongList activeQueries() { - return activeQryTrackers; - } + compressed(compress); - /** - * @param activeQrys Active queries started with previous coordinator. - */ - void activeQueries(GridLongList activeQrys) { - this.activeQryTrackers = activeQrys; + this.client = client; } /** @@ -201,7 +183,7 @@ public void addLocalPartitionMap(int cacheId, GridDhtPartitionMap locMap, @Nulla parts.put(cacheId, locMap); if (dupDataCache != null) { - assert compress; + assert compressed(); assert F.isEmpty(locMap.map()); assert parts.containsKey(dupDataCache); @@ -334,6 +316,20 @@ public void setError(Exception ex) { return err; } + /** + * Start time of exchange on node which sent this message. + */ + public long exchangeStartTime() { + return exchangeStartTime; + } + + /** + * @param exchangeStartTime Start time of exchange. 
+ */ + public void exchangeStartTime(long exchangeStartTime) { + this.exchangeStartTime = exchangeStartTime; + } + /** {@inheritDoc} * @param ctx*/ @Override public void prepareMarshal(GridCacheSharedContext ctx) throws IgniteCheckedException { @@ -367,9 +363,7 @@ public void setError(Exception ex) { if (err != null && errBytes == null) errBytes0 = U.marshal(ctx, err); - if (compress) { - assert !compressed(); - + if (compressed()) { try { byte[] partsBytesZip = U.zip(partsBytes0); byte[] partCntrsBytesZip = U.zip(partCntrsBytes0); @@ -382,8 +376,6 @@ public void setError(Exception ex) { partHistCntrsBytes0 = partHistCntrsBytesZip; partsSizesBytes0 = partsSizesBytesZip; errBytes0 = exBytesZip; - - compressed(true); } catch (IgniteCheckedException e) { U.error(ctx.logger(getClass()), "Failed to compress partitions data: " + e, e); @@ -473,65 +465,66 @@ public void setError(Exception ex) { } switch (writer.state()) { - case 5: + case 6: if (!writer.writeBoolean("client", client)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeMap("dupPartsData", dupPartsData, MessageCollectionItemType.INT, MessageCollectionItemType.INT)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 8: + case 9: + if (!writer.writeLong("exchangeStartTime", exchangeStartTime)) + return false; + + writer.incrementState(); + + case 10: if (!writer.writeMessage("finishMsg", finishMsg)) return false; writer.incrementState(); - case 9: + case 11: if (!writer.writeCollection("grpsAffRequest", grpsAffRequest, MessageCollectionItemType.INT)) return false; writer.incrementState(); - case 10: + case 12: if (!writer.writeByteArray("partCntrsBytes", partCntrsBytes)) return false; writer.incrementState(); - case 11: + case 13: if (!writer.writeByteArray("partHistCntrsBytes", partHistCntrsBytes)) return false; writer.incrementState(); - case 12: + case 14: if 
(!writer.writeByteArray("partsBytes", partsBytes)) return false; writer.incrementState(); - case 13: + case 15: if (!writer.writeByteArray("partsSizesBytes", partsSizesBytes)) return false; writer.incrementState(); - case 14: - if (!writer.writeMessage("activeQryTrackers", activeQryTrackers)) - return false; - - writer.incrementState(); } return true; @@ -548,7 +541,7 @@ public void setError(Exception ex) { return false; switch (reader.state()) { - case 5: + case 6: client = reader.readBoolean("client"); if (!reader.isLastRead()) @@ -556,7 +549,7 @@ public void setError(Exception ex) { reader.incrementState(); - case 6: + case 7: dupPartsData = reader.readMap("dupPartsData", MessageCollectionItemType.INT, MessageCollectionItemType.INT, false); if (!reader.isLastRead()) @@ -564,7 +557,7 @@ public void setError(Exception ex) { reader.incrementState(); - case 7: + case 8: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -572,7 +565,15 @@ public void setError(Exception ex) { reader.incrementState(); - case 8: + case 9: + exchangeStartTime = reader.readLong("exchangeStartTime"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 10: finishMsg = reader.readMessage("finishMsg"); if (!reader.isLastRead()) @@ -580,7 +581,7 @@ public void setError(Exception ex) { reader.incrementState(); - case 9: + case 11: grpsAffRequest = reader.readCollection("grpsAffRequest", MessageCollectionItemType.INT); if (!reader.isLastRead()) @@ -588,7 +589,7 @@ public void setError(Exception ex) { reader.incrementState(); - case 10: + case 12: partCntrsBytes = reader.readByteArray("partCntrsBytes"); if (!reader.isLastRead()) @@ -596,7 +597,7 @@ public void setError(Exception ex) { reader.incrementState(); - case 11: + case 13: partHistCntrsBytes = reader.readByteArray("partHistCntrsBytes"); if (!reader.isLastRead()) @@ -604,7 +605,7 @@ public void setError(Exception ex) { reader.incrementState(); - case 12: + case 14: partsBytes = 
reader.readByteArray("partsBytes"); if (!reader.isLastRead()) @@ -612,7 +613,7 @@ public void setError(Exception ex) { reader.incrementState(); - case 13: + case 15: partsSizesBytes = reader.readByteArray("partsSizesBytes"); if (!reader.isLastRead()) @@ -620,13 +621,6 @@ public void setError(Exception ex) { reader.incrementState(); - case 14: - activeQryTrackers = reader.readMessage("activeQryTrackers"); - - if (!reader.isLastRead()) - return false; - - reader.incrementState(); } return reader.afterMessageRead(GridDhtPartitionsSingleMessage.class); @@ -639,7 +633,7 @@ public void setError(Exception ex) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 15; + return 16; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleRequest.java index 0be0f37aa22ac..26d3cdeffd684 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionsSingleRequest.java @@ -89,7 +89,7 @@ GridDhtPartitionExchangeId restoreExchangeId() { } switch (writer.state()) { - case 5: + case 6: if (!writer.writeMessage("restoreExchId", restoreExchId)) return false; @@ -111,7 +111,7 @@ GridDhtPartitionExchangeId restoreExchangeId() { return false; switch (reader.state()) { - case 5: + case 6: restoreExchId = reader.readMessage("restoreExchId"); if (!reader.isLastRead()) @@ -131,7 +131,7 @@ GridDhtPartitionExchangeId restoreExchangeId() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 6; + return 7; } /** {@inheritDoc} */ diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPreloader.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPreloader.java index 034bf72cecf4e..b3f597cd5d5ca 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPreloader.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPreloader.java @@ -20,14 +20,16 @@ import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.Queue; import java.util.UUID; +import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; -import java.util.stream.Collectors; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.NodeStoppingException; +import org.apache.ignite.internal.managers.communication.GridIoPolicy; import org.apache.ignite.internal.processors.affinity.AffinityAssignment; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheGroupContext; @@ -43,7 +45,9 @@ import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.lang.GridPlainRunnable; +import org.apache.ignite.internal.util.lang.GridTuple3; import org.apache.ignite.internal.util.typedef.CI1; +import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; import org.jetbrains.annotations.Nullable; @@ -80,6 +84,12 @@ public class GridDhtPreloader extends GridCachePreloaderAdapter { /** Demand lock. 
*/ private final ReadWriteLock demandLock = new ReentrantReadWriteLock(); + /** */ + private boolean paused; + + /** */ + private Queue<GridTuple3<Integer, UUID, GridDhtPartitionSupplyMessage>> pausedDemanderQueue = new ConcurrentLinkedQueue<>(); + /** */ private boolean stopped; @@ -168,6 +178,9 @@ private IgniteCheckedException stopError() { if (ctx.kernalContext().clientNode() || rebTopVer.equals(AffinityTopologyVersion.NONE)) return false; // No-op. + if (exchFut.resetLostPartitionFor(grp.cacheOrGroupName())) + return true; + if (exchFut.localJoinExchange()) return true; // Required, can have outdated updSeq partition counter if node reconnects. @@ -184,38 +197,10 @@ private IgniteCheckedException stopError() { if (rebFut.isDone() && !rebFut.result()) return true; // Required, previous rebalance cancelled. - final AffinityTopologyVersion exchTopVer = exchFut.context().events().topologyVersion(); - - Collection<UUID> aliveNodes = ctx.discovery().aliveServerNodes().stream() - .map(ClusterNode::id) - .collect(Collectors.toList()); - - return assignmentsChanged(rebTopVer, exchTopVer) || - !aliveNodes.containsAll(demander.remainingNodes()); // Some of nodes left before rabalance compelete. - } - - /** - * @param oldTopVer Previous topology version. - * @param newTopVer New topology version to check result. - * @return {@code True} if affinity assignments changed between two versions for local node. - */ - private boolean assignmentsChanged(AffinityTopologyVersion oldTopVer, AffinityTopologyVersion newTopVer) { - final AffinityAssignment aff = grp.affinity().readyAffinity(newTopVer); - - // We should get affinity assignments based on previous rebalance to calculate difference. - // Whole history size described by IGNITE_AFFINITY_HISTORY_SIZE constant. - final AffinityAssignment prevAff = grp.affinity().cachedVersions().contains(oldTopVer) ?
- grp.affinity().cachedAffinity(oldTopVer) : null; - - if (prevAff == null) - return false; - - boolean assignsChanged = false; + AffinityTopologyVersion lastAffChangeTopVer = + ctx.exchange().lastAffinityChangedTopologyVersion(exchFut.topologyVersion()); - for (int p = 0; !assignsChanged && p < grp.affinity().partitions(); p++) - assignsChanged |= aff.get(p).contains(ctx.localNode()) != prevAff.get(p).contains(ctx.localNode()); - - return assignsChanged; + return lastAffChangeTopVer.compareTo(rebTopVer) > 0; } /** {@inheritDoc} */ @@ -384,7 +369,10 @@ private List remoteOwners(int p, AffinityTopologyVersion topVer) { demandLock.readLock().lock(); try { - demander.handleSupplyMessage(idx, id, s); + if (paused) + pausedDemanderQueue.add(F.t(idx, id, s)); + else + demander.handleSupplyMessage(idx, id, s); } finally { demandLock.readLock().unlock(); @@ -589,6 +577,41 @@ private GridDhtFuture request0(GridCacheContext cctx, Collection> msgToProc = + new ArrayList<>(pausedDemanderQueue); + + pausedDemanderQueue.clear(); + + final GridDhtPreloader preloader = this; + + ctx.kernalContext().closure().runLocalSafe(() -> msgToProc.forEach( + m -> preloader.handleSupplyMessage(m.get1(), m.get2(), m.get3()) + ), GridIoPolicy.SYSTEM_POOL); + + paused = false; + } + finally { + demandLock.writeLock().unlock(); + } + } + /** {@inheritDoc} */ @Override public void dumpDebugInfo() { // No-op diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/PartitionsExchangeAware.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/PartitionsExchangeAware.java new file mode 100644 index 0000000000000..7318726ff15ee --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/PartitionsExchangeAware.java @@ -0,0 +1,69 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed.dht.preloader; + +import org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager; + +/** + * Interface that allows a component to subscribe to partition map exchange events + * (via {@link GridCachePartitionExchangeManager#registerExchangeAwareComponent(PartitionsExchangeAware)}). + * Heavy computations shouldn't be performed in listener methods: aware components will be notified + * synchronously from the exchange thread. + * Runtime exceptions thrown by listener methods will trigger the failure handler (since the exchange thread is critical). + * Please ensure that your implementation never throws an exception if you subscribe to exchange events for + * non-system-critical activities. + */ +public interface PartitionsExchangeAware { + /** + * Callback from exchange process initialization; called before topology is locked. + * + * @param fut Partition map exchange future. + */ + public default void onInitBeforeTopologyLock(GridDhtPartitionsExchangeFuture fut) { + // No-op. + } + + /** + * Callback from exchange process initialization; called after topology is locked. + * Guarantees that no more data updates will be performed on the local node until the exchange process is completed.
+ * + * @param fut Partition map exchange future. + */ + public default void onInitAfterTopologyLock(GridDhtPartitionsExchangeFuture fut) { + // No-op. + } + + /** + * Callback from exchange process completion; called before topology is unlocked. + * Guarantees that no updates were performed on local node since exchange process started. + * + * @param fut Partition map exchange future. + */ + public default void onDoneBeforeTopologyUnlock(GridDhtPartitionsExchangeFuture fut) { + // No-op. + } + + /** + * Callback from exchange process completion; called after topology is unlocked. + * + * @param fut Partition map exchange future. + */ + public default void onDoneAfterTopologyUnlock(GridDhtPartitionsExchangeFuture fut) { + // No-op. + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/ExchangeLatchManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/ExchangeLatchManager.java index 35c04fb7d7902..0308ff4198d95 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/ExchangeLatchManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/ExchangeLatchManager.java @@ -64,7 +64,7 @@ public class ExchangeLatchManager { * Exchange latch V2 protocol introduces following optimization: * Joining nodes are explicitly excluded from possible latch participants. */ - public static final IgniteProductVersion PROTOCOL_V2_VERSION_SINCE = IgniteProductVersion.fromString("2.7.0"); + public static final IgniteProductVersion PROTOCOL_V2_VERSION_SINCE = IgniteProductVersion.fromString("2.5.3"); /** Logger. 
*/ private final IgniteLogger log; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/LatchAckMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/LatchAckMessage.java index bad1b6137bac5..9c69fdf581856 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/LatchAckMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/latch/LatchAckMessage.java @@ -103,10 +103,11 @@ public boolean isFinal() { writer.incrementState(); case 2: - if (!writer.writeMessage("topVer", topVer)) + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); + } return true; @@ -137,12 +138,13 @@ public boolean isFinal() { reader.incrementState(); case 2: - topVer = reader.readMessage("topVer"); + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); + } return reader.afterMessageRead(LatchAckMessage.class); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridClientPartitionTopology.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridClientPartitionTopology.java index 9140322051b31..0a50b94e20b17 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridClientPartitionTopology.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridClientPartitionTopology.java @@ -48,7 +48,6 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionMap; import 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.util.F0; import org.apache.ignite.internal.util.GridAtomicLong; import org.apache.ignite.internal.util.GridPartitionStateMap; @@ -201,11 +200,6 @@ private String mapString(GridDhtPartitionMap map) { lock.readLock().unlock(); } - /** {@inheritDoc} */ - @Override public MvccCoordinator mvccCoordinator() { - throw new UnsupportedOperationException(); - } - /** {@inheritDoc} */ @Override public boolean holdsLock() { return lock.isWriteLockedByCurrentThread() || lock.getReadHoldCount() > 0; @@ -215,7 +209,6 @@ private String mapString(GridDhtPartitionMap map) { @Override public void updateTopologyVersion( GridDhtTopologyFuture exchFut, DiscoCache discoCache, - MvccCoordinator mvccCrd, long updSeq, boolean stopping ) throws IgniteInterruptedCheckedException { @@ -1119,7 +1112,7 @@ private void removeNode(UUID nodeId) { } /** {@inheritDoc} */ - @Override public void ownMoving(AffinityTopologyVersion topVer) { + @Override public void ownMoving(AffinityTopologyVersion rebFinishedTopVer) { // No-op } @@ -1238,6 +1231,11 @@ private void removeNode(UUID nodeId) { return CachePartitionPartialCountersMap.EMPTY; } + /** {@inheritDoc} */ + @Override public void finalizeUpdateCounters() { + // No-op. 
+ } + /** {@inheritDoc} */ @Override public Map partitionSizes() { return Collections.emptyMap(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtLocalPartition.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtLocalPartition.java index 253a56a0bc015..dc6d3206aef59 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtLocalPartition.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtLocalPartition.java @@ -32,6 +32,8 @@ import org.apache.ignite.IgniteLogger; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.NodeStoppingException; @@ -53,6 +55,7 @@ import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.query.GridQueryRowCacheCleaner; +import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.lang.GridIterator; import org.apache.ignite.internal.util.tostring.GridToStringExclude; @@ -79,15 +82,7 @@ */ public class GridDhtLocalPartition extends GridCacheConcurrentMapImpl implements Comparable, GridReservable { /** */ - private static final GridCacheMapEntryFactory ENTRY_FACTORY = new GridCacheMapEntryFactory() { - @Override public GridCacheMapEntry create( - GridCacheContext ctx, - AffinityTopologyVersion topVer, - KeyCacheObject key - ) { - return new GridDhtCacheEntry(ctx, topVer, 
key); - } - }; + private static final GridCacheMapEntryFactory ENTRY_FACTORY = GridDhtCacheEntry::new; /** Maximum size for delete queue. */ public static final int MAX_DELETE_QUEUE_SIZE = Integer.getInteger(IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE, 200_000); @@ -179,11 +174,15 @@ public class GridDhtLocalPartition extends GridCacheConcurrentMapImpl implements * @param ctx Context. * @param grp Cache group. * @param id Partition ID. + * @param recovery Flag indicates that partition is created during recovery phase. */ @SuppressWarnings("ExternalizableWithoutPublicNoArgConstructor") - public GridDhtLocalPartition(GridCacheSharedContext ctx, - CacheGroupContext grp, - int id) { + public GridDhtLocalPartition( + GridCacheSharedContext ctx, + CacheGroupContext grp, + int id, + boolean recovery + ) { super(ENTRY_FACTORY); this.id = id; @@ -220,7 +219,7 @@ public GridDhtLocalPartition(GridCacheSharedContext ctx, store = grp.offheap().createCacheDataStore(id); // Log partition creation for further crash recovery purposes. - if (grp.walEnabled()) + if (grp.walEnabled() && !recovery) ctx.wal().log(new PartitionMetaStateRecord(grp.groupId(), id, state(), updateCounter())); // Inject row cache cleaner on store creation @@ -359,7 +358,7 @@ public int reservations() { * @return {@code True} if partition is empty. */ public boolean isEmpty() { - return store.fullSize() == 0 && internalSize() == 0; + return store.isEmpty() && internalSize() == 0; } /** @@ -567,6 +566,8 @@ private boolean casState(long state, GridDhtPartitionState toState) { } catch (IgniteCheckedException e) { U.error(log, "Failed to log partition state change to WAL.", e); + + ctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); } if (log.isDebugEnabled()) @@ -981,6 +982,19 @@ public long nextUpdateCounter(int cacheId, AffinityTopologyVersion topVer, boole if (grp.sharedGroup()) grp.onPartitionCounterUpdate(cacheId, id, primaryCntr != null ? 
primaryCntr : nextCntr, topVer, primary); + // This is first update in partition, we should log partition state information for further crash recovery. + if (nextCntr == 1) { + if (grp.persistenceEnabled() && grp.walEnabled()) + try { + ctx.wal().log(new PartitionMetaStateRecord(grp.groupId(), id, state(), 0)); + } + catch (IgniteCheckedException e) { + U.error(log, "Failed to log partition state snapshot to WAL.", e); + + ctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); + } + } + return nextCntr; } @@ -1098,7 +1112,8 @@ private long clearAll(EvictionContext evictionCtx) throws NodeStoppingException hld.cctx.events().addEvent(cached.partition(), cached.key(), ctx.localNodeId(), - (IgniteUuid)null, + null, + null, null, EVT_CACHE_REBALANCE_OBJECT_UNLOADED, null, @@ -1370,6 +1385,15 @@ private static long setSize(long state, int size) { return (state & (~0xFFFFFFFF00000000L)) | ((long)size << 32); } + /** + * Flushes pending update counters closing all possible gaps. + * + * @return Even-length array of pairs [start, end] for each gap. + */ + public GridLongList finalizeUpdateCounters() { + return store.finalizeUpdateCounters(); + } + /** * Removed entry holder. 
*/ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopology.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopology.java index b6cb5bb1f8b2f..4b1e5a6ada99d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopology.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopology.java @@ -25,6 +25,7 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.events.DiscoveryEvent; +import org.apache.ignite.events.EventType; import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.managers.discovery.DiscoCache; import org.apache.ignite.internal.processors.affinity.AffinityAssignment; @@ -37,7 +38,6 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.jetbrains.annotations.Nullable; @@ -77,7 +77,6 @@ public interface GridDhtPartitionTopology { public void updateTopologyVersion( GridDhtTopologyFuture exchFut, DiscoCache discoCache, - MvccCoordinator mvccCrd, long updateSeq, boolean stopping ) throws IgniteInterruptedCheckedException; @@ -327,10 +326,10 @@ public boolean update(@Nullable GridDhtPartitionExchangeId exchId, * This method should be called on topology coordinator after all partition messages are received. * * @param resTopVer Exchange result version. 
- * @param discoEvt Discovery event for which we detect lost partitions. + * @param discoEvt Discovery event for which lost partitions are detected, or {@code null} if no {@link EventType#EVT_CACHE_REBALANCE_PART_DATA_LOST} event should be fired. * @return {@code True} if partitions state got updated. */ - public boolean detectLostPartitions(AffinityTopologyVersion resTopVer, DiscoveryEvent discoEvt); + public boolean detectLostPartitions(AffinityTopologyVersion resTopVer, @Nullable DiscoveryEvent discoEvt); /** * Resets the state of all LOST partitions to OWNING. @@ -344,12 +343,18 @@ public boolean update(@Nullable GridDhtPartitionExchangeId exchId, */ public Collection lostPartitions(); + /** + * Pre-processes partition update counters before exchange. + */ + void finalizeUpdateCounters(); + /** * @return Partition update counters. */ public CachePartitionFullCountersMap fullUpdateCounters(); /** + * @param skipZeros {@code True} to exclude partitions with zero update counters from the map. * @return Partition update counters. */ public CachePartitionPartialCountersMap localUpdateCounters(boolean skipZeros); @@ -368,9 +373,9 @@ public boolean update(@Nullable GridDhtPartitionExchangeId exchId, /** * Owns all moving partitions for the given topology version. * - * @param topVer Topology version. + * @param rebFinishedTopVer Topology version when rebalancing finished. */ - public void ownMoving(AffinityTopologyVersion topVer); + public void ownMoving(AffinityTopologyVersion rebFinishedTopVer); /** * @param part Evicted partition. @@ -425,6 +430,4 @@ public boolean update(@Nullable GridDhtPartitionExchangeId exchId, * @param updateRebalanceVer {@code True} if need check rebalance state.
*/ public void onExchangeDone(GridDhtPartitionsExchangeFuture fut, AffinityAssignment assignment, boolean updateRebalanceVer); - - public MvccCoordinator mvccCoordinator(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopologyImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopologyImpl.java index 7035e371ee04d..39484d4b0b38e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopologyImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopologyImpl.java @@ -27,6 +27,7 @@ import java.util.Map; import java.util.NoSuchElementException; import java.util.Set; +import java.util.TreeSet; import java.util.UUID; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReferenceArray; @@ -45,6 +46,7 @@ import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.ExchangeDiscoveryEvents; +import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; @@ -54,9 +56,9 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import 
org.apache.ignite.internal.util.F0; import org.apache.ignite.internal.util.GridAtomicLong; +import org.apache.ignite.internal.util.GridLongList; import org.apache.ignite.internal.util.GridPartitionStateMap; import org.apache.ignite.internal.util.StripedCompositeReadWriteLock; import org.apache.ignite.internal.util.tostring.GridToStringExclude; @@ -110,8 +112,8 @@ public class GridDhtPartitionTopologyImpl implements GridDhtPartitionTopology { /** Node to partition map. */ private GridDhtPartitionFullMap node2part; - /** Partitions map for left nodes. */ - private GridDhtPartitionFullMap leftNode2Part = new GridDhtPartitionFullMap(); + /** */ + private Set lostParts; /** */ private final Map> diffFromAffinity = new HashMap<>(); @@ -149,9 +151,6 @@ public class GridDhtPartitionTopologyImpl implements GridDhtPartitionTopology { /** */ private volatile AffinityTopologyVersion rebalancedTopVer = AffinityTopologyVersion.NONE; - /** */ - private volatile MvccCoordinator mvccCrd; - /** * @param ctx Cache shared context. * @param grp Cache group. 
@@ -243,11 +242,6 @@ private String mapString(GridDhtPartitionMap map) { lock.readLock().unlock(); } - /** {@inheritDoc} */ - @Override public MvccCoordinator mvccCoordinator() { - return mvccCrd; - } - /** {@inheritDoc} */ @Override public boolean holdsLock() { return lock.isWriteLockedByCurrentThread() || lock.getReadHoldCount() > 0; @@ -257,7 +251,6 @@ private String mapString(GridDhtPartitionMap map) { @Override public void updateTopologyVersion( GridDhtTopologyFuture exchFut, @NotNull DiscoCache discoCache, - MvccCoordinator mvccCrd, long updSeq, boolean stopping ) throws IgniteInterruptedCheckedException { @@ -284,7 +277,6 @@ private String mapString(GridDhtPartitionMap map) { lastTopChangeVer = exchTopVer; this.discoCache = discoCache; - this.mvccCrd = mvccCrd; } finally { lock.writeLock().unlock(); @@ -313,9 +305,18 @@ private String mapString(GridDhtPartitionMap map) { /** {@inheritDoc} */ @Override public GridDhtTopologyFuture topologyVersionFuture() { - assert topReadyFut != null; + GridDhtTopologyFuture topReadyFut0 = topReadyFut; - return topReadyFut; + assert topReadyFut0 != null; + + if (!topReadyFut0.changedAffinity()) { + GridDhtTopologyFuture lastFut = ctx.exchange().lastFinishedFuture(); + + if (lastFut != null) + return lastFut; + } + + return topReadyFut0; } /** {@inheritDoc} */ @@ -748,45 +749,47 @@ private boolean partitionLocalNode(int p, AffinityTopologyVersion topVer) { long updateSeq = this.updateSeq.incrementAndGet(); - for (int p = 0; p < partitions; p++) { - GridDhtLocalPartition locPart = localPartition0(p, topVer, false, true); - - if (partitionLocalNode(p, topVer)) { - // Prepare partition to rebalance if it's not happened on full map update phase. 
- if (locPart == null || locPart.state() == RENTING || locPart.state() == EVICTED) - locPart = rebalancePartition(p, false); + if (!ctx.localNode().isClient()) { + for (int p = 0; p < partitions; p++) { + GridDhtLocalPartition locPart = localPartition0(p, topVer, false, true); - GridDhtPartitionState state = locPart.state(); + if (partitionLocalNode(p, topVer)) { + // Prepare the partition for rebalancing if that did not happen during the full map update phase. + if (locPart == null || locPart.state() == RENTING || locPart.state() == EVICTED) + locPart = rebalancePartition(p, false); - if (state == MOVING) { - if (grp.rebalanceEnabled()) { - Collection owners = owners(p); + GridDhtPartitionState state = locPart.state(); - // If an owner node left during exchange, then new exchange should be started with detecting lost partitions. - if (!F.isEmpty(owners)) { - if (log.isDebugEnabled()) - log.debug("Will not own partition (there are owners to rebalance from) " + - "[grp=" + grp.cacheOrGroupName() + ", p=" + p + ", owners = " + owners + ']'); + if (state == MOVING) { + if (grp.rebalanceEnabled()) { + Collection owners = owners(p); + + // If an owner node left during exchange, then a new exchange should be started to detect lost partitions.
+ if (!F.isEmpty(owners)) { + if (log.isDebugEnabled()) + log.debug("Will not own partition (there are owners to rebalance from) " + + "[grp=" + grp.cacheOrGroupName() + ", p=" + p + ", owners = " + owners + ']'); + } } + else + updateSeq = updateLocal(p, locPart.state(), updateSeq, topVer); } - else - updateSeq = updateLocal(p, locPart.state(), updateSeq, topVer); } - } - else { - if (locPart != null) { - GridDhtPartitionState state = locPart.state(); + else { + if (locPart != null) { + GridDhtPartitionState state = locPart.state(); - if (state == MOVING) { - locPart.rent(false); + if (state == MOVING) { + locPart.rent(false); - updateSeq = updateLocal(p, locPart.state(), updateSeq, topVer); + updateSeq = updateLocal(p, locPart.state(), updateSeq, topVer); - changed = true; + changed = true; - if (log.isDebugEnabled()) { - log.debug("Evicting " + state + " partition (it does not belong to affinity) [" + - "grp=" + grp.cacheOrGroupName() + ", p=" + locPart.id() + ']'); + if (log.isDebugEnabled()) { + log.debug("Evicting " + state + " partition (it does not belong to affinity) [" + + "grp=" + grp.cacheOrGroupName() + ", p=" + locPart.id() + ']'); + } } } } @@ -850,7 +853,7 @@ private GridDhtLocalPartition getOrCreatePartition(int p) { if (loc != null) loc.awaitDestroy(); - locParts.set(p, loc = new GridDhtLocalPartition(ctx, grp, p)); + locParts.set(p, loc = new GridDhtLocalPartition(ctx, grp, p, false)); long updCntr = cntrMap.updateCounter(p); @@ -881,12 +884,10 @@ private GridDhtLocalPartition getOrCreatePartition(int p) { if (part != null && part.state() != EVICTED) return part; - part = new GridDhtLocalPartition(ctx, grp, p); + part = new GridDhtLocalPartition(ctx, grp, p, true); locParts.set(p, part); - ctx.pageStore().onPartitionCreated(grp.groupId(), p); - return part; } finally { @@ -946,8 +947,10 @@ private GridDhtLocalPartition localPartition0(int p, } } else if (loc != null && state == RENTING && !showRenting) { + boolean belongsNow = 
topVer.equals(this.readyTopVer) ? belongs : partitionLocalNode(p, this.readyTopVer); + throw new GridDhtInvalidPartitionException(p, "Adding entry to partition that is concurrently " + - "evicted [grp=" + grp.cacheOrGroupName() + ", part=" + p + ", shouldBeMoving=" + "evicted [grp=" + grp.cacheOrGroupName() + ", part=" + p + ", belongsNow=" + belongsNow + ", belongs=" + belongs + ", topVer=" + topVer + ", curTopVer=" + this.readyTopVer + "]"); } @@ -958,7 +961,7 @@ else if (loc != null && state == RENTING && !showRenting) { "[grp=" + grp.cacheOrGroupName() + ", part=" + p + ", topVer=" + topVer + ", this.topVer=" + this.readyTopVer + ']'); - locParts.set(p, loc = new GridDhtLocalPartition(ctx, grp, p)); + locParts.set(p, loc = new GridDhtLocalPartition(ctx, grp, p, false)); this.updateSeq.incrementAndGet(); @@ -1131,25 +1134,38 @@ else if (loc != null && state == RENTING && !showRenting) { List nodes = null; - if (!topVer.equals(diffFromAffinityVer)) { - LT.warn(log, "Requested topology version does not match calculated diff, will require full iteration to" + - "calculate mapping [grp=" + grp.cacheOrGroupName() + ", topVer=" + topVer + - ", diffVer=" + diffFromAffinityVer + "]"); + AffinityTopologyVersion diffVer = diffFromAffinityVer; - nodes = new ArrayList<>(); + if (!diffVer.equals(topVer)) { + LT.warn(log, "Requested topology version does not match calculated diff, need to check if " + + "affinity has changed [grp=" + grp.cacheOrGroupName() + ", topVer=" + topVer + + ", diffVer=" + diffVer + "]"); - nodes.addAll(affNodes); + boolean affChanged; - for (Map.Entry entry : node2part.entrySet()) { - GridDhtPartitionState state = entry.getValue().get(p); + if (diffVer.compareTo(topVer) < 0) + affChanged = ctx.exchange().affinityChanged(diffVer, topVer); + else + affChanged = ctx.exchange().affinityChanged(topVer, diffVer); - ClusterNode n = ctx.discovery().node(entry.getKey()); + if (affChanged) { + LT.warn(log, "Requested topology version does not match calculated 
diff, will require full iteration to" + + "calculate mapping [grp=" + grp.cacheOrGroupName() + ", topVer=" + topVer + + ", diffVer=" + diffVer + "]"); - if (n != null && state != null && (state == MOVING || state == OWNING || state == RENTING) - && !nodes.contains(n) && (topVer.topologyVersion() < 0 || n.order() <= topVer.topologyVersion())) { - nodes.add(n); - } + nodes = new ArrayList<>(); + + nodes.addAll(affNodes); + + for (Map.Entry entry : node2part.entrySet()) { + GridDhtPartitionState state = entry.getValue().get(p); + ClusterNode n = ctx.discovery().node(entry.getKey()); + + if (n != null && state != null && (state == MOVING || state == OWNING || state == RENTING) + && !nodes.contains(n) && (topVer.topologyVersion() < 0 || n.order() <= topVer.topologyVersion())) + nodes.add(n); + } } return nodes; @@ -1468,13 +1484,6 @@ private boolean shouldOverridePartitionMap(GridDhtPartitionMap currentMap, GridD log.trace("Removing left node from full map update [grp=" + grp.cacheOrGroupName() + ", nodeId=" + nodeId + ", partMap=" + partMap + ']'); - if (node2part.containsKey(nodeId)) { - GridDhtPartitionMap map = partMap.get(nodeId); - - if (map != null) - leftNode2Part.put(nodeId, map); - } - it.remove(); } } @@ -2005,7 +2014,7 @@ private void rebuildDiff(AffinityAssignment affAssignment) { } /** {@inheritDoc} */ - @Override public boolean detectLostPartitions(AffinityTopologyVersion resTopVer, DiscoveryEvent discoEvt) { + @Override public boolean detectLostPartitions(AffinityTopologyVersion resTopVer, @Nullable DiscoveryEvent discoEvt) { ctx.database().checkpointReadLock(); try { @@ -2015,42 +2024,55 @@ private void rebuildDiff(AffinityAssignment affAssignment) { if (node2part == null) return false; + PartitionLossPolicy plc = grp.config().getPartitionLossPolicy(); + + assert plc != null; + int parts = grp.affinity().partitions(); - Set lost = new HashSet<>(parts); + Set recentlyLost = null; - for (int p = 0; p < parts; p++) - lost.add(p); + boolean changed = false; 
- for (GridDhtPartitionMap partMap : node2part.values()) { - for (Map.Entry e : partMap.entrySet()) { - if (e.getValue() == OWNING) { - lost.remove(e.getKey()); + for (int part = 0; part < parts; part++) { + boolean lost = F.contains(lostParts, part); + + if (!lost) { + boolean hasOwner = false; - if (lost.isEmpty()) + for (GridDhtPartitionMap partMap : node2part.values()) { + if (partMap.get(part) == OWNING) { + hasOwner = true; break; + } } - } - } - boolean changed = false; + if (!hasOwner) { + lost = true; + + if (lostParts == null) + lostParts = new TreeSet<>(); - if (!F.isEmpty(lost)) { - PartitionLossPolicy plc = grp.config().getPartitionLossPolicy(); + lostParts.add(part); - assert plc != null; + if (discoEvt != null) { + if (recentlyLost == null) + recentlyLost = new HashSet<>(); - Set recentlyLost = new HashSet<>(); + recentlyLost.add(part); - for (Map.Entry leftEntry : leftNode2Part.entrySet()) { - for (Map.Entry entry : leftEntry.getValue().entrySet()) { - if (entry.getValue() == OWNING) - recentlyLost.add(entry.getKey()); + if (grp.eventRecordable(EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST)) { + grp.addRebalanceEvent(part, + EVT_CACHE_REBALANCE_PART_DATA_LOST, + discoEvt.eventNode(), + discoEvt.type(), + discoEvt.timestamp()); + } + } } } - // Update partition state on all nodes. 
- for (Integer part : lost) { + if (lost) { long updSeq = updateSeq.incrementAndGet(); GridDhtLocalPartition locPart = localPartition(part, resTopVer, false, true); @@ -2076,21 +2098,17 @@ else if (plc != PartitionLossPolicy.IGNORE) { e.getValue().put(part, LOST); } } - - if (recentlyLost.contains(part) && grp.eventRecordable(EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST)) { - grp.addRebalanceEvent(part, - EVT_CACHE_REBALANCE_PART_DATA_LOST, - discoEvt.eventNode(), - discoEvt.type(), - discoEvt.timestamp()); - } } + } - if (plc != PartitionLossPolicy.IGNORE) - grp.needsRecovery(true); + if (recentlyLost != null) { + U.warn(log, "Detected lost partitions [grp=" + grp.cacheOrGroupName() + + ", parts=" + S.compact(recentlyLost) + + ", plc=" + plc + "]"); } - leftNode2Part.clear(); + if (lostParts != null && plc != PartitionLossPolicy.IGNORE) + grp.needsRecovery(true); return changed; } @@ -2125,14 +2143,23 @@ else if (plc != PartitionLossPolicy.IGNORE) { if (locPart != null && locPart.state() == LOST) { boolean marked = locPart.own(); - if (marked) + if (marked) { updateLocal(locPart.id(), locPart.state(), updSeq, resTopVer); + + long updateCntr = locPart.updateCounter(); + + // Set update counters to 0 to force full rebalance. + locPart.updateCounter(updateCntr, -updateCntr); + locPart.initialUpdateCounter(0); + } } } } checkEvictions(updSeq, grp.affinity().readyAffinity(resTopVer)); + lostParts = null; + grp.needsRecovery(false); } finally { @@ -2152,22 +2179,7 @@ else if (plc != PartitionLossPolicy.IGNORE) { lock.readLock().lock(); try { - Set res = null; - - int parts = grp.affinity().partitions(); - - for (GridDhtPartitionMap partMap : node2part.values()) { - for (Map.Entry e : partMap.entrySet()) { - if (e.getValue() == LOST) { - if (res == null) - res = new HashSet<>(parts); - - res.add(e.getKey()); - } - } - } - - return res == null ? Collections.emptySet() : res; + return lostParts == null ?
Collections.emptySet() : new HashSet<>(lostParts); } finally { lock.readLock().unlock(); } @@ -2523,9 +2535,6 @@ private void removeNode(UUID nodeId) { GridDhtPartitionMap parts = node2part.remove(nodeId); - if (parts != null) - leftNode2Part.put(nodeId, parts); - if (!grp.isReplicated()) { if (parts != null) { for (Integer p : parts.keySet()) { @@ -2568,19 +2577,26 @@ private void removeNode(UUID nodeId) { } /** {@inheritDoc} */ - @Override public void ownMoving(AffinityTopologyVersion topVer) { + @Override public void ownMoving(AffinityTopologyVersion rebFinishedTopVer) { lock.writeLock().lock(); + AffinityTopologyVersion lastAffChangeVer = ctx.exchange().lastAffinityChangedTopologyVersion(lastTopChangeVer); + + if (lastAffChangeVer.compareTo(rebFinishedTopVer) > 0) + log.info("Affinity topology changed, no MOVING partitions will be owned " + + "[rebFinishedTopVer=" + rebFinishedTopVer + + ", lastAffChangeVer=" + lastAffChangeVer + "]"); + try { for (GridDhtLocalPartition locPart : grp.topology().currentLocalPartitions()) { if (locPart.state() == MOVING) { boolean reserved = locPart.reserve(); try { - if (reserved && locPart.state() == MOVING && lastTopChangeVer.equals(topVer)) - grp.topology().own(locPart); - else // topology changed, rebalancing must be restarted - return; + if (reserved && locPart.state() == MOVING && + lastAffChangeVer.compareTo(rebFinishedTopVer) <= 0 && + rebFinishedTopVer.compareTo(lastTopChangeVer) <= 0) + grp.topology().own(locPart); } finally { if (reserved) @@ -2636,6 +2652,49 @@ private void removeNode(UUID nodeId) { } } + /** + * Pre-processes partition update counters before exchange. + */ + @Override public void finalizeUpdateCounters() { + if (!grp.mvccEnabled()) + return; + + // The checkpoint lock must be acquired before the topology lock.
+ ctx.database().checkpointReadLock(); + + try { + lock.readLock().lock(); + + try { + for (int i = 0; i < locParts.length(); i++) { + GridDhtLocalPartition part = locParts.get(i); + + if (part != null && part.state().active()) { + // We need to close all gaps in partition update counters sequence. We assume this finalizing is + performed on exchange and hence all txs are completed. Therefore each gap in update counters + sequence is a result of undelivered DhtTxFinishMessage on backup (sequences on primary nodes + do not have gaps). Here we close these gaps and asynchronously notify continuous query engine + about the skipped events. + AffinityTopologyVersion topVer = ctx.exchange().readyAffinityVersion(); + + GridLongList gaps = part.finalizeUpdateCounters(); + + if (gaps != null) { + for (GridCacheContext ctx0 : grp.caches()) + ctx0.continuousQueries().closeBackupUpdateCountersGaps(ctx0, part.id(), topVer, gaps); + } + } + } + } + finally { + lock.readLock().unlock(); + } + } + finally { + ctx.database().checkpointReadUnlock(); + } + } + /** {@inheritDoc} */ @Override public CachePartitionFullCountersMap fullUpdateCounters() { lock.readLock().lock(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsReservation.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsReservation.java index 2682a896e7476..501748628661b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsReservation.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsReservation.java @@ -179,7 +179,7 @@ public void onPublish(CI1 unpublish) { */ private static void tryEvict(GridDhtLocalPartition[] parts) { if (parts == null) // Can be not initialized yet.
- return ; + return; for (GridDhtLocalPartition part : parts) tryEvict(part); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsStateValidator.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsStateValidator.java index d131d56375e4a..4a0e21832764f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsStateValidator.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionsStateValidator.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.processors.cache.distributed.dht.topology; +import java.util.AbstractMap; import java.util.HashMap; import java.util.HashSet; import java.util.Map; @@ -33,7 +34,6 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage; import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils; -import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.lang.IgniteProductVersion; import org.jetbrains.annotations.Nullable; @@ -78,14 +78,16 @@ public void validatePartitionCountersAndSizes( final Set ignoringNodes = new HashSet<>(); // Ignore just joined nodes. - for (DiscoveryEvent evt : fut.events().events()) + for (DiscoveryEvent evt : fut.events().events()) { if (evt.type() == EVT_NODE_JOINED) ignoringNodes.add(evt.eventNode().id()); + } AffinityTopologyVersion topVer = fut.context().events().topologyVersion(); // Validate update counters. 
Map> result = validatePartitionsUpdateCounters(top, messages, ignoringNodes); + if (!result.isEmpty()) throw new IgniteCheckedException("Partitions update counters are inconsistent for " + fold(topVer, result)); @@ -110,16 +112,21 @@ public void validatePartitionCountersAndSizes( * * @param top Topology to validate. * @param nodeId Node which sent single message. - * @param singleMsg Single message. + * @param countersMap Counters map. + * @param sizesMap Sizes map. * @return Set of partition ids should be excluded from validation. */ - @Nullable private Set shouldIgnore(GridDhtPartitionTopology top, UUID nodeId, GridDhtPartitionsSingleMessage singleMsg) { - CachePartitionPartialCountersMap countersMap = singleMsg.partitionUpdateCounters(top.groupId(), top.partitions()); - Map sizesMap = singleMsg.partitionSizes(top.groupId()); - + @Nullable private Set shouldIgnore( + GridDhtPartitionTopology top, + UUID nodeId, + CachePartitionPartialCountersMap countersMap, + Map sizesMap + ) { Set ignore = null; - for (int p = 0; p < top.partitions(); p++) { + for (int i = 0; i < countersMap.size(); i++) { + int p = countersMap.partitionAt(i); + if (top.partitionState(nodeId, p) != GridDhtPartitionState.OWNING) { if (ignore == null) ignore = new HashSet<>(); @@ -129,9 +136,8 @@ public void validatePartitionCountersAndSizes( continue; } - int partIdx = countersMap.partitionIndex(p); - long updateCounter = partIdx >= 0 ? countersMap.updateCounterAt(partIdx) : 0; - long size = sizesMap.containsKey(p) ? sizesMap.get(p) : 0; + long updateCounter = countersMap.updateCounterAt(i); + long size = sizesMap.getOrDefault(p, 0L); // Do not validate partitions with zero update counter and size. if (updateCounter == 0 && size == 0) { @@ -154,14 +160,14 @@ public void validatePartitionCountersAndSizes( * @return Invalid partitions map with following structure: (partId, (nodeId, updateCounter)). * If map is empty validation is successful. 
*/ - public Map> validatePartitionsUpdateCounters( - GridDhtPartitionTopology top, - Map messages, - Set ignoringNodes - ) { + public Map> validatePartitionsUpdateCounters( + GridDhtPartitionTopology top, + Map messages, + Set ignoringNodes + ) { Map> invalidPartitions = new HashMap<>(); - Map> updateCountersAndNodesByPartitions = new HashMap<>(); + Map> updateCountersAndNodesByPartitions = new HashMap<>(); // Populate counters statistics from local node partitions. for (GridDhtLocalPartition part : top.currentLocalPartitions()) { @@ -171,7 +177,7 @@ public Map> validatePartitionsUpdateCounters( if (part.updateCounter() == 0 && part.fullSize() == 0) continue; - updateCountersAndNodesByPartitions.put(part.id(), new T2<>(cctx.localNodeId(), part.updateCounter())); + updateCountersAndNodesByPartitions.put(part.id(), new AbstractMap.SimpleEntry<>(cctx.localNodeId(), part.updateCounter())); } int partitions = top.partitions(); @@ -182,18 +188,23 @@ public Map> validatePartitionsUpdateCounters( if (ignoringNodes.contains(nodeId)) continue; - CachePartitionPartialCountersMap countersMap = e.getValue().partitionUpdateCounters(top.groupId(), partitions); + final GridDhtPartitionsSingleMessage message = e.getValue(); + + CachePartitionPartialCountersMap countersMap = message.partitionUpdateCounters(top.groupId(), partitions); - Set ignorePartitions = shouldIgnore(top, nodeId, e.getValue()); + Map sizesMap = message.partitionSizes(top.groupId()); - for (int part = 0; part < partitions; part++) { - if (ignorePartitions != null && ignorePartitions.contains(part)) + Set ignorePartitions = shouldIgnore(top, nodeId, countersMap, sizesMap); + + for (int i = 0; i < countersMap.size(); i++) { + int p = countersMap.partitionAt(i); + + if (ignorePartitions != null && ignorePartitions.contains(p)) continue; - int partIdx = countersMap.partitionIndex(part); - long currentCounter = partIdx >= 0 ? 
countersMap.updateCounterAt(partIdx) : 0; + long currentCounter = countersMap.updateCounterAt(i); - process(invalidPartitions, updateCountersAndNodesByPartitions, part, nodeId, currentCounter); + process(invalidPartitions, updateCountersAndNodesByPartitions, p, nodeId, currentCounter); } } @@ -209,14 +220,14 @@ public Map> validatePartitionsUpdateCounters( * @return Invalid partitions map with following structure: (partId, (nodeId, cacheSize)). * If map is empty validation is successful. */ - public Map> validatePartitionsSizes( - GridDhtPartitionTopology top, - Map messages, - Set ignoringNodes - ) { + public Map> validatePartitionsSizes( + GridDhtPartitionTopology top, + Map messages, + Set ignoringNodes + ) { Map> invalidPartitions = new HashMap<>(); - Map> sizesAndNodesByPartitions = new HashMap<>(); + Map> sizesAndNodesByPartitions = new HashMap<>(); // Populate sizes statistics from local node partitions. for (GridDhtLocalPartition part : top.currentLocalPartitions()) { @@ -226,7 +237,7 @@ public Map> validatePartitionsSizes( if (part.updateCounter() == 0 && part.fullSize() == 0) continue; - sizesAndNodesByPartitions.put(part.id(), new T2<>(cctx.localNodeId(), part.fullSize())); + sizesAndNodesByPartitions.put(part.id(), new AbstractMap.SimpleEntry<>(cctx.localNodeId(), part.fullSize())); } int partitions = top.partitions(); @@ -237,17 +248,23 @@ public Map> validatePartitionsSizes( if (ignoringNodes.contains(nodeId)) continue; - Map sizesMap = e.getValue().partitionSizes(top.groupId()); + final GridDhtPartitionsSingleMessage message = e.getValue(); + + CachePartitionPartialCountersMap countersMap = message.partitionUpdateCounters(top.groupId(), partitions); + + Map sizesMap = message.partitionSizes(top.groupId()); + + Set ignorePartitions = shouldIgnore(top, nodeId, countersMap, sizesMap); - Set ignorePartitions = shouldIgnore(top, nodeId, e.getValue()); + for (int i = 0; i < countersMap.size(); i++) { + int p = countersMap.partitionAt(i); - for (int part = 
0; part < partitions; part++) { - if (ignorePartitions != null && ignorePartitions.contains(part)) + if (ignorePartitions != null && ignorePartitions.contains(p)) continue; - long currentSize = sizesMap.containsKey(part) ? sizesMap.get(part) : 0L; + long currentSize = sizesMap.getOrDefault(p, 0L); - process(invalidPartitions, sizesAndNodesByPartitions, part, nodeId, currentSize); + process(invalidPartitions, sizesAndNodesByPartitions, p, nodeId, currentSize); } } @@ -264,20 +281,22 @@ public Map> validatePartitionsSizes( * @param node Node id. * @param counter Counter value reported by {@code node}. */ - private void process(Map> invalidPartitions, - Map> countersAndNodes, - int part, - UUID node, - long counter) { - T2 existingData = countersAndNodes.get(part); + private void process( + Map> invalidPartitions, + Map> countersAndNodes, + int part, + UUID node, + long counter + ) { + AbstractMap.Entry existingData = countersAndNodes.get(part); if (existingData == null) - countersAndNodes.put(part, new T2<>(node, counter)); + countersAndNodes.put(part, new AbstractMap.SimpleEntry<>(node, counter)); - if (existingData != null && counter != existingData.get2()) { + if (existingData != null && counter != existingData.getValue()) { if (!invalidPartitions.containsKey(part)) { Map map = new HashMap<>(); - map.put(existingData.get1(), existingData.get2()); + map.put(existingData.getKey(), existingData.getValue()); invalidPartitions.put(part, map); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/PartitionsEvictManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/PartitionsEvictManager.java index cd010fa89d32d..404e194a9615a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/PartitionsEvictManager.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/topology/PartitionsEvictManager.java @@ -19,6 +19,7 @@ import java.util.Collection; import java.util.Comparator; +import java.util.HashSet; import java.util.Map; import java.util.Queue; import java.util.Set; @@ -31,7 +32,6 @@ import org.apache.ignite.internal.managers.communication.GridIoPolicy; import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter; -import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.U; @@ -63,6 +63,7 @@ public class PartitionsEvictManager extends GridCacheSharedManagerAdapter { /** Next time of show eviction progress. */ private long nextShowProgressTime; + /** */ private final Map evictionGroupsMap = new ConcurrentHashMap<>(); /** Flag indicates that eviction process has stopped. */ @@ -110,28 +111,28 @@ public void onCacheGroupStopped(CacheGroupContext grp){ * @param part Partition to evict. */ public void evictPartitionAsync(CacheGroupContext grp, GridDhtLocalPartition part) { - // Check node stop. - if (sharedEvictionContext.shouldStop()) - return; - GroupEvictionContext groupEvictionContext = evictionGroupsMap.computeIfAbsent( grp.groupId(), (k) -> new GroupEvictionContext(grp)); - PartitionEvictionTask evictionTask = groupEvictionContext.createEvictPartitionTask(part); - - if (evictionTask == null) + // Check node stop. 
+ if (groupEvictionContext.shouldStop()) return; - if (log.isDebugEnabled()) - log.debug("Partition has been scheduled for eviction [grp=" + grp.cacheOrGroupName() - + ", p=" + part.id() + ", state=" + part.state() + "]"); - int bucket; synchronized (mux) { - bucket = evictionQueue.offer(evictionTask); + if (!groupEvictionContext.partIds.add(part.id())) + return; + + bucket = evictionQueue.offer(new PartitionEvictionTask(part, groupEvictionContext)); } + groupEvictionContext.totalTasks.incrementAndGet(); + + if (log.isDebugEnabled()) + log.debug("Partition has been scheduled for eviction [grp=" + grp.cacheOrGroupName() + + ", p=" + part.id() + ", state=" + part.state() + "]"); + scheduleNextPartitionEviction(bucket); } @@ -177,7 +178,7 @@ private void scheduleNextPartitionEviction(int bucket) { // Print current eviction progress. showProgress(); - GroupEvictionContext groupEvictionContext = evictionTask.groupEvictionContext; + GroupEvictionContext groupEvictionContext = evictionTask.groupEvictionCtx; // Check that group or node stopping. if (groupEvictionContext.shouldStop()) @@ -270,7 +271,7 @@ private class GroupEvictionContext implements EvictionContext { private final CacheGroupContext grp; /** Deduplicate set partition ids. */ - private final Set partIds = new GridConcurrentHashSet<>(); + private final Set partIds = new HashSet<>(); /** Future for currently running partition eviction task. */ private final Map> partsEvictFutures = new ConcurrentHashMap<>(); @@ -296,19 +297,6 @@ private GroupEvictionContext(CacheGroupContext grp) { return stop || sharedEvictionContext.shouldStop(); } - /** - * - * @param part Grid local partition. - */ - private PartitionEvictionTask createEvictPartitionTask(GridDhtLocalPartition part){ - if (shouldStop() || !partIds.add(part.id())) - return null; - - totalTasks.incrementAndGet(); - - return new PartitionEvictionTask(part, this); - } - /** * * @param task Partition eviction task. 
@@ -323,6 +311,8 @@ private synchronized void taskScheduled(PartitionEvictionTask task) { int partId = task.part.id(); + partIds.remove(partId); + partsEvictFutures.put(partId, fut); fut.listen(f -> { @@ -390,48 +380,52 @@ private class PartitionEvictionTask implements Runnable { /** Partition to evict. */ private final GridDhtLocalPartition part; + /** */ private final long size; /** Eviction context. */ - private final GroupEvictionContext groupEvictionContext; + private final GroupEvictionContext groupEvictionCtx; /** */ private final GridFutureAdapter finishFut = new GridFutureAdapter<>(); /** * @param part Partition. + * @param groupEvictionCtx Eviction context. */ private PartitionEvictionTask( GridDhtLocalPartition part, - GroupEvictionContext groupEvictionContext + GroupEvictionContext groupEvictionCtx ) { this.part = part; - this.groupEvictionContext = groupEvictionContext; + this.groupEvictionCtx = groupEvictionCtx; size = part.fullSize(); } /** {@inheritDoc} */ @Override public void run() { - if (groupEvictionContext.shouldStop()) { + if (groupEvictionCtx.shouldStop()) { finishFut.onDone(); return; } try { - boolean success = part.tryClear(groupEvictionContext); + boolean success = part.tryClear(groupEvictionCtx); if (success) { if (part.state() == GridDhtPartitionState.EVICTED && part.markForDestroy()) part.destroy(); } - else // Re-offer partition if clear was unsuccessful due to partition reservation. - evictionQueue.offer(this); // Complete eviction future before schedule new to prevent deadlock with // simultaneous eviction stopping and scheduling new eviction. finishFut.onDone(); + + // Re-offer partition if clear was unsuccessful due to partition reservation. + if (!success) + evictPartitionAsync(groupEvictionCtx.grp, part); } catch (Throwable ex) { finishFut.onDone(ex); @@ -502,6 +496,7 @@ PartitionEvictionTask pollAny() { /** * Offer task to queue. * + * @param task Eviction task. * @return Bucket index. 
*/ int offer(PartitionEvictionTask task) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/CacheVersionedValue.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/CacheVersionedValue.java index 9670f8a4ea5cf..c19d486a489b4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/CacheVersionedValue.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/CacheVersionedValue.java @@ -174,4 +174,4 @@ public void finishUnmarshal(GridCacheContext ctx, ClassLoader ldr) throws Ignite @Override public String toString() { return S.toString(CacheVersionedValue.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearAtomicCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearAtomicCache.java index 503c324de3390..21f409a401779 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearAtomicCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearAtomicCache.java @@ -284,7 +284,7 @@ private void processNearAtomicUpdateResponse( } finally { if (entry != null) - entry.touch(topVer); + entry.touch(); } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheAdapter.java index 39047813e85e5..086247c519d15 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheAdapter.java @@ -444,7 +444,7 @@ 
private EntrySet(Set> nearSet, Set> dhtSet) F.iterator0(dhtSet, false, new P1>() { @Override public boolean apply(Cache.Entry e) { try { - return GridNearCacheAdapter.super.localPeek(e.getKey(), NEAR_PEEK_MODE, null) == null; + return GridNearCacheAdapter.super.localPeek(e.getKey(), NEAR_PEEK_MODE) == null; } catch (IgniteCheckedException ex) { throw new IgniteException(ex); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetFuture.java index 87801a9f6f5fe..8bf99af1b0dc1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetFuture.java @@ -23,10 +23,9 @@ import java.util.LinkedHashMap; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.UUID; -import java.util.concurrent.atomic.AtomicReference; import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.IgniteLogger; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; @@ -38,8 +37,6 @@ import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; import org.apache.ignite.internal.processors.cache.GridCacheEntryInfo; import org.apache.ignite.internal.processors.cache.GridCacheEntryRemovedException; -import org.apache.ignite.internal.processors.cache.GridCacheMessage; -import org.apache.ignite.internal.processors.cache.GridCacheUtils.BackupPostProcessingClosure; import org.apache.ignite.internal.processors.cache.IgniteCacheExpiryPolicy; import org.apache.ignite.internal.processors.cache.KeyCacheObject; import 
org.apache.ignite.internal.processors.cache.distributed.dht.CacheDistributedGetFutureAdapter; @@ -50,13 +47,7 @@ import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.GridLeanMap; import org.apache.ignite.internal.util.future.GridFinishedFuture; -import org.apache.ignite.internal.util.future.GridFutureAdapter; -import org.apache.ignite.internal.util.tostring.GridToStringInclude; -import org.apache.ignite.internal.util.typedef.C1; -import org.apache.ignite.internal.util.typedef.CI1; -import org.apache.ignite.internal.util.typedef.CIX1; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; @@ -67,15 +58,6 @@ * */ public final class GridNearGetFuture extends CacheDistributedGetFutureAdapter { - /** */ - private static final long serialVersionUID = 0L; - - /** Logger reference. */ - private static final AtomicReference logRef = new AtomicReference<>(); - - /** Logger. */ - private static IgniteLogger log; - /** Transaction. */ private IgniteTxLocalEx tx; @@ -129,12 +111,9 @@ public GridNearGetFuture( this.tx = tx; - futId = IgniteUuid.randomUuid(); - ver = tx == null ? cctx.versions().next() : tx.xidVersion(); - if (log == null) - log = U.logger(cctx.kernalContext(), logRef, GridNearGetFuture.class); + initLogger(GridNearGetFuture.class); } /** @@ -148,73 +127,21 @@ public void init(@Nullable AffinityTopologyVersion topVer) { if (lockedTopVer != null) { canRemap = false; - map(keys, Collections.>emptyMap(), lockedTopVer); + map(keys, Collections.emptyMap(), lockedTopVer); } else { AffinityTopologyVersion mapTopVer = topVer; if (mapTopVer == null) { - mapTopVer = tx == null ? - (canRemap ? 
cctx.affinity().affinityTopologyVersion() : cctx.shared().exchange().readyAffinityVersion()) : - tx.topologyVersion(); + mapTopVer = tx == null ? cctx.affinity().affinityTopologyVersion() : tx.topologyVersion(); } - map(keys, Collections.>emptyMap(), mapTopVer); + map(keys, Collections.emptyMap(), mapTopVer); } markInitialized(); } - /** {@inheritDoc} */ - @Override public boolean trackable() { - return trackable; - } - - /** {@inheritDoc} */ - @Override public void markNotTrackable() { - // Should not flip trackable flag from true to false since get future can be remapped. - } - - /** {@inheritDoc} */ - @Override public IgniteUuid futureId() { - return futId; - } - - /** {@inheritDoc} */ - @Override public boolean onNodeLeft(UUID nodeId) { - boolean found = false; - - for (IgniteInternalFuture> fut : futures()) - if (isMini(fut)) { - MiniFuture f = (MiniFuture)fut; - - if (f.node().id().equals(nodeId)) { - found = true; - - f.onNodeLeft(); - } - } - - return found; - } - - /** - * @param nodeId Sender. - * @param res Result. - */ - @Override public void onResult(UUID nodeId, GridNearGetResponse res) { - for (IgniteInternalFuture> fut : futures()) - if (isMini(fut)) { - MiniFuture f = (MiniFuture)fut; - - if (f.futureId().equals(res.miniId())) { - assert f.node().id().equals(nodeId); - - f.onResult(res); - } - } - } - /** {@inheritDoc} */ @Override public boolean onDone(Map res, Throwable err) { if (super.onDone(res, err)) { @@ -230,11 +157,8 @@ public void init(@Nullable AffinityTopologyVersion topVer) { return false; } - /** - * @param f Future. - * @return {@code True} if mini-future. - */ - private boolean isMini(IgniteInternalFuture f) { + /** {@inheritDoc} */ + @Override protected boolean isMini(IgniteInternalFuture f) { return f.getClass().equals(MiniFuture.class); } @@ -243,10 +167,10 @@ private boolean isMini(IgniteInternalFuture f) { * @param mapped Mappings to check for duplicates. * @param topVer Topology version to map on. 
*/ - private void map( + @Override protected void map( Collection keys, Map> mapped, - final AffinityTopologyVersion topVer + AffinityTopologyVersion topVer ) { Collection affNodes = CU.affinityNodes(cctx, topVer); @@ -269,7 +193,7 @@ private void map( try { // Assign keys to primary nodes. for (KeyCacheObject key : keys) - savedEntries = map(key, mappings, topVer, mapped, savedEntries); + savedEntries = map(key, topVer, mappings, mapped, savedEntries); success = true; } @@ -293,23 +217,24 @@ private void map( if (isDone()) return; - final Map saved = savedEntries != null ? savedEntries : - Collections.emptyMap(); + Map saved = + savedEntries != null ? savedEntries : Collections.emptyMap(); - final int keysSize = keys.size(); + int keysSize = keys.size(); // Create mini futures. for (Map.Entry> entry : mappings.entrySet()) { - final ClusterNode n = entry.getKey(); + ClusterNode n = entry.getKey(); - final LinkedHashMap mappedKeys = entry.getValue(); + LinkedHashMap mappedKeys = entry.getValue(); assert !mappedKeys.isEmpty(); // If this is the primary or backup node for the keys. 
if (n.isLocal()) { - final GridDhtFuture> fut = - dht().getDhtAsync(n.id(), + GridDhtFuture> fut = dht() + .getDhtAsync( + n.id(), -1, mappedKeys, false, @@ -320,73 +245,53 @@ private void map( expiryPlc, skipVals, recovery, - null); // TODO IGNITE-7371 + null, + null + ); // TODO IGNITE-7371 - final Collection invalidParts = fut.invalidPartitions(); + Collection invalidParts = fut.invalidPartitions(); if (!F.isEmpty(invalidParts)) { Collection remapKeys = new ArrayList<>(keysSize); for (KeyCacheObject key : keys) { - if (key != null && invalidParts.contains(cctx.affinity().partition(key))) + int part = cctx.affinity().partition(key); + + if (key != null && invalidParts.contains(part)) { + addNodeAsInvalid(n, part, topVer); + remapKeys.add(key); + } } AffinityTopologyVersion updTopVer = cctx.shared().exchange().readyAffinityVersion(); - assert updTopVer.compareTo(topVer) > 0 : "Got invalid partitions for local node but topology version did " + - "not change [topVer=" + topVer + ", updTopVer=" + updTopVer + - ", invalidParts=" + invalidParts + ']'; - // Remap recursively. map(remapKeys, mappings, updTopVer); } // Add new future. 
- add(fut.chain(new C1>, Map>() { - @Override public Map apply(IgniteInternalFuture> fut) { - try { - return loadEntries(n.id(), mappedKeys.keySet(), fut.get(), saved, topVer); - } - catch (Exception e) { - U.error(log, "Failed to get values from dht cache [fut=" + fut + "]", e); + add(fut.chain(f -> { + try { + return loadEntries(n.id(), mappedKeys.keySet(), f.get(), saved, topVer); + } + catch (Exception e) { + U.error(log, "Failed to get values from dht cache [fut=" + fut + "]", e); - onDone(e); + onDone(e); - return Collections.emptyMap(); - } + return Collections.emptyMap(); } })); } else { - if (!trackable) { - trackable = true; + registrateFutureInMvccManager(this); - cctx.mvcc().addFuture(this, futId); - } + MiniFuture miniFuture = new MiniFuture(n, mappedKeys, saved, topVer); + + GridNearGetRequest req = miniFuture.createGetRequest(futId); - MiniFuture fut = new MiniFuture(n, mappedKeys, saved, topVer, - CU.createBackupPostProcessingClosure(topVer, log, cctx, null, expiryPlc, readThrough, skipVals)); - - GridCacheMessage req = new GridNearGetRequest( - cctx.cacheId(), - futId, - fut.futureId(), - ver, - mappedKeys, - readThrough, - topVer, - subjId, - taskName == null ? 0 : taskName.hashCode(), - expiryPlc != null ? expiryPlc.forCreate() : -1L, - expiryPlc != null ? expiryPlc.forAccess() : -1L, - true, - skipVals, - cctx.deploymentEnabled(), - recovery, - null); // TODO IGNITE-7371 - - add(fut); // Append new future. + add(miniFuture); // Append new future. try { cctx.io().send(n, req, cctx.ioPolicy()); @@ -394,16 +299,14 @@ private void map( catch (IgniteCheckedException e) { // Fail the whole thing. if (e instanceof ClusterTopologyCheckedException) - fut.onNodeLeft(); + miniFuture.onNodeLeft((ClusterTopologyCheckedException)e); else - fut.onResult(e); + miniFuture.onResult(e); } } } } - - /** * @param mappings Mappings. * @param key Key to map. 
@@ -415,8 +318,8 @@ private void map( @SuppressWarnings("unchecked") private Map map( KeyCacheObject key, - Map> mappings, AffinityTopologyVersion topVer, + Map> mappings, Map> mapped, Map saved ) { @@ -425,7 +328,7 @@ private Map map( List affNodes = cctx.affinity().nodesByPartition(part, topVer); if (affNodes.isEmpty()) { - onDone(serverNotFoundError(topVer)); + onDone(serverNotFoundError(part, topVer)); return null; } @@ -494,10 +397,12 @@ private Map map( } } - ClusterNode affNode = cctx.selectAffinityNodeBalanced(affNodes, canRemap); + Set invalidNodesSet = getInvalidNodes(part, topVer); + + ClusterNode affNode = cctx.selectAffinityNodeBalanced(affNodes, invalidNodesSet, part, canRemap); if (affNode == null) { - onDone(serverNotFoundError(topVer)); + onDone(serverNotFoundError(part, topVer)); return saved; } @@ -505,17 +410,8 @@ private Map map( if (cctx.statisticsEnabled() && !skipVals && !affNode.isLocal() && !isNear) cache().metrics0().onRead(false); - LinkedHashMap keys = mapped.get(affNode); - - if (keys != null && keys.containsKey(key)) { - if (REMAP_CNT_UPD.incrementAndGet(this) > MAX_REMAP_CNT) { - onDone(new ClusterTopologyCheckedException("Failed to remap key to a new node after " + - MAX_REMAP_CNT + " attempts (key got remapped to the same node) " + - "[key=" + key + ", node=" + U.toShortString(affNode) + ", mappings=" + mapped + ']')); - - return saved; - } - } + if (!checkRetryPermits(key,affNode,mapped)) + return saved; if (!affNodes.contains(cctx.localNode())) { GridNearCacheEntry nearEntry = entry != null ? entry : near.entryExx(key, topVer); @@ -558,7 +454,7 @@ private Map map( } finally { if (entry != null && tx == null) - entry.touch(topVer); + entry.touch(); } } @@ -572,10 +468,12 @@ private Map map( * @param nearRead {@code True} if already tried to read from near cache. * @return {@code True} if there is no need to further search value. 
*/ - private boolean localDhtGet(KeyCacheObject key, + private boolean localDhtGet( + KeyCacheObject key, int part, AffinityTopologyVersion topVer, - boolean nearRead) { + boolean nearRead + ) { GridDhtCacheAdapter dht = cache().dht(); assert dht.context().affinityNode() : this; @@ -661,7 +559,7 @@ private boolean localDhtGet(KeyCacheObject key, if (dhtEntry != null) // Near cache is enabled, so near entry will be enlisted in the transaction. // Always touch DHT entry in this case. - dhtEntry.touch(topVer); + dhtEntry.touch(); } } } @@ -805,108 +703,64 @@ private void releaseEvictions(Collection keys, entry.releaseEviction(); if (tx == null) - entry.touch(topVer); + entry.touch(); } } } /** {@inheritDoc} */ @Override public String toString() { - Collection futs = F.viewReadOnly(futures(), new C1, String>() { - @SuppressWarnings("unchecked") - @Override public String apply(IgniteInternalFuture f) { - if (isMini(f)) { - return "[node=" + ((MiniFuture)f).node().id() + - ", loc=" + ((MiniFuture)f).node().isLocal() + - ", done=" + f.isDone() + "]"; - } - else - return "[loc=true, done=" + f.isDone() + "]"; - } - }); - return S.toString(GridNearGetFuture.class, this, - "innerFuts", futs, "super", super.toString()); } /** - * Mini-future for get operations. Mini-futures are only waiting on a single - * node as opposed to multiple nodes. + * Mini-future for get operations. Mini-futures are only waiting on a single node as opposed to multiple nodes. */ - private class MiniFuture extends GridFutureAdapter> { - /** */ - private final IgniteUuid futId = IgniteUuid.randomUuid(); - - /** Node ID. */ - private ClusterNode node; - - /** Keys. */ - @GridToStringInclude - private LinkedHashMap keys; - + private class MiniFuture extends AbstractMiniFuture { /** Saved entry versions. */ - private Map savedEntries; - - /** Topology version on which this future was mapped. */ - private AffinityTopologyVersion topVer; - - /** Post processing closure. 
*/ - private final BackupPostProcessingClosure postProcessingClos; - - /** {@code True} if remapped after node left. */ - private boolean remapped; + private final Map savedEntries; /** * @param node Node. * @param keys Keys. * @param savedEntries Saved entries. * @param topVer Topology version. - * @param postProcessingClos Post processing closure. */ MiniFuture( ClusterNode node, LinkedHashMap keys, Map savedEntries, - AffinityTopologyVersion topVer, - BackupPostProcessingClosure postProcessingClos) { - this.node = node; - this.keys = keys; + AffinityTopologyVersion topVer + ) { + super(node, keys, topVer); this.savedEntries = savedEntries; - this.topVer = topVer; - this.postProcessingClos = postProcessingClos; } - /** - * @return Future ID. - */ - IgniteUuid futureId() { - return futId; - } - - /** - * @return Node ID. - */ - public ClusterNode node() { - return node; - } - - /** - * @return Keys. - */ - public Collection keys() { - return keys.keySet(); + /** {@inheritDoc} */ + @Override protected GridNearGetRequest createGetRequest0(IgniteUuid rootFutId, IgniteUuid futId) { + return new GridNearGetRequest(cctx.cacheId(), + rootFutId, + futId, + ver, + keys, + readThrough, + topVer, + subjId, + taskName == null ? 0 : taskName.hashCode(), + expiryPlc != null ? expiryPlc.forCreate() : -1L, + expiryPlc != null ? expiryPlc.forAccess() : -1L, + true, + skipVals, + cctx.deploymentEnabled(), + recovery, + null, + null); // TODO IGNITE-7371 } - /** - * @param e Error. - */ - void onResult(Throwable e) { - if (log.isDebugEnabled()) - log.debug("Failed to get future result [fut=" + this + ", err=" + e + ']'); - - // Fail. 
- onDone(e); + /** {@inheritDoc} */ + @Override protected Map createResultMap(Collection entries) { + return loadEntries(node.id(), keys.keySet(), entries, savedEntries, topVer); } /** {@inheritDoc} */ @@ -920,139 +774,6 @@ void onResult(Throwable e) { return false; } - /** - */ - synchronized void onNodeLeft() { - if (remapped) - return; - - remapped = true; - - if (log.isDebugEnabled()) - log.debug("Remote node left grid while sending or waiting for reply (will retry): " + this); - - // Try getting value from alive nodes. - if (!canRemap) { - // Remap - map(keys.keySet(), F.t(node, keys), topVer); - - onDone(Collections.emptyMap()); - } - else { - AffinityTopologyVersion updTopVer = - new AffinityTopologyVersion(Math.max(topVer.topologyVersion() + 1, cctx.discovery().topologyVersion())); - - cctx.affinity().affinityReadyFuture(updTopVer).listen( - new CI1>() { - @Override public void apply(IgniteInternalFuture fut) { - try { - // Remap. - map(keys.keySet(), F.t(node, keys), fut.get()); - - onDone(Collections.emptyMap()); - } - catch (IgniteCheckedException e) { - GridNearGetFuture.this.onDone(e); - } - } - } - ); - } - } - - /** - * @param res Result callback. - */ - void onResult(final GridNearGetResponse res) { - final Collection invalidParts = res.invalidPartitions(); - - // If error happened on remote node, fail the whole future. - if (res.error() != null) { - onDone(res.error()); - - return; - } - - // Remap invalid partitions. - if (!F.isEmpty(invalidParts)) { - AffinityTopologyVersion rmtTopVer = res.topologyVersion(); - - assert rmtTopVer.topologyVersion() != 0; - - if (rmtTopVer.compareTo(topVer) <= 0) { - // Fail the whole get future. 
- onDone(new IgniteCheckedException("Failed to process invalid partitions response (remote node reported " + - "invalid partitions but remote topology version does not differ from local) " + - "[topVer=" + topVer + ", rmtTopVer=" + rmtTopVer + ", invalidParts=" + invalidParts + - ", nodeId=" + node.id() + ']')); - - return; - } - - if (log.isDebugEnabled()) - log.debug("Remapping mini get future [invalidParts=" + invalidParts + ", fut=" + this + ']'); - - if (!canRemap) { - map(F.view(keys.keySet(), new P1() { - @Override public boolean apply(KeyCacheObject key) { - return invalidParts.contains(cctx.affinity().partition(key)); - } - }), F.t(node, keys), topVer); - - postProcessResultAndDone(res); - - return; - } - - // Need to wait for next topology version to remap. - IgniteInternalFuture topFut = cctx.affinity().affinityReadyFuture(rmtTopVer); - - topFut.listen(new CIX1>() { - @Override public void applyx( - IgniteInternalFuture fut) throws IgniteCheckedException { - AffinityTopologyVersion readyTopVer = fut.get(); - - // This will append new futures to compound list. - map(F.view(keys.keySet(), new P1() { - @Override public boolean apply(KeyCacheObject key) { - return invalidParts.contains(cctx.affinity().partition(key)); - } - }), F.t(node, keys), readyTopVer); - - postProcessResultAndDone(res); - } - }); - } - else - postProcessResultAndDone(res); - - } - - /** - * Post processes result and done future. - * - * @param res Response. - */ - private void postProcessResultAndDone(final GridNearGetResponse res){ - try { - postProcessResult(res); - - // It is critical to call onDone after adding futures to compound list. - onDone(loadEntries(node.id(), keys.keySet(), res.entries(), savedEntries, topVer)); - } - catch (Exception ex) { - onDone(ex); - } - } - - /** - * @param res Response. 
- */ - private void postProcessResult(final GridNearGetResponse res) { - if (postProcessingClos != null) - postProcessingClos.apply(res.entries()); - } - /** {@inheritDoc} */ @Override public String toString() { return S.toString(MiniFuture.class, this); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetRequest.java index f594e2bd353f6..247a1f3f0d7e7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetRequest.java @@ -108,6 +108,9 @@ public class GridNearGetRequest extends GridCacheIdMessage implements GridCacheD /** TTL for read operation. */ private long accessTtl; + /** Transaction label. */ + private @Nullable String txLbl; + /** */ private MvccSnapshot mvccSnapshot; @@ -133,6 +136,7 @@ public GridNearGetRequest() { * @param createTtl New TTL to set after entry is created, -1 to leave unchanged. * @param accessTtl New TTL to set after entry is accessed, -1 to leave unchanged. * @param addDepInfo Deployment info. + * @param txLbl Transaction label. * @param mvccSnapshot Mvcc snapshot. */ public GridNearGetRequest( @@ -151,6 +155,7 @@ public GridNearGetRequest( boolean skipVals, boolean addDepInfo, boolean recovery, + @Nullable String txLbl, @Nullable MvccSnapshot mvccSnapshot ) { assert futId != null; @@ -180,6 +185,7 @@ public GridNearGetRequest( this.createTtl = createTtl; this.accessTtl = accessTtl; this.addDepInfo = addDepInfo; + this.txLbl = txLbl; this.mvccSnapshot = mvccSnapshot; if (readThrough) @@ -296,6 +302,15 @@ public long accessTtl() { return keys != null && !keys.isEmpty() ? keys.get(0).partition() : -1; } + /** + * Get transaction label (may be null). 
+ * + * @return Possible transaction label; + */ + @Nullable public String txLabel() { + return txLbl; + } + /** * @param ctx Cache context. * @throws IgniteCheckedException If failed. @@ -360,74 +375,80 @@ public long accessTtl() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeLong("accessTtl", accessTtl)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeLong("createTtl", createTtl)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeCollection("keys", keys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeIgniteUuid("miniId", miniId)) return false; writer.incrementState(); - case 9: + case 10: + if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) + return false; + + writer.incrementState(); + + case 11: if (!writer.writeCollection("readersFlags", readersFlags, MessageCollectionItemType.BOOLEAN)) return false; writer.incrementState(); - case 10: + case 12: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 11: + case 13: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 12: - if (!writer.writeMessage("topVer", topVer)) + case 14: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 13: - if (!writer.writeMessage("ver", ver)) + case 15: + if (!writer.writeString("txLbl", txLbl)) return false; writer.incrementState(); - case 14: - if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) + case 16: + if (!writer.writeMessage("ver", ver)) return false; writer.incrementState(); @@ -448,7 +469,7 @@ public long accessTtl() { return false; switch (reader.state()) { - case 3: + case 4: accessTtl = 
reader.readLong("accessTtl"); if (!reader.isLastRead()) @@ -456,7 +477,7 @@ public long accessTtl() { reader.incrementState(); - case 4: + case 5: createTtl = reader.readLong("createTtl"); if (!reader.isLastRead()) @@ -464,7 +485,7 @@ public long accessTtl() { reader.incrementState(); - case 5: + case 6: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -472,7 +493,7 @@ public long accessTtl() { reader.incrementState(); - case 6: + case 7: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -480,7 +501,7 @@ public long accessTtl() { reader.incrementState(); - case 7: + case 8: keys = reader.readCollection("keys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -488,7 +509,7 @@ public long accessTtl() { reader.incrementState(); - case 8: + case 9: miniId = reader.readIgniteUuid("miniId"); if (!reader.isLastRead()) @@ -496,7 +517,15 @@ public long accessTtl() { reader.incrementState(); - case 9: + case 10: + mvccSnapshot = reader.readMessage("mvccSnapshot"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 11: readersFlags = reader.readCollection("readersFlags", MessageCollectionItemType.BOOLEAN); if (!reader.isLastRead()) @@ -504,7 +533,7 @@ public long accessTtl() { reader.incrementState(); - case 10: + case 12: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -512,7 +541,7 @@ public long accessTtl() { reader.incrementState(); - case 11: + case 13: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -520,24 +549,24 @@ public long accessTtl() { reader.incrementState(); - case 12: - topVer = reader.readMessage("topVer"); + case 14: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 13: - ver = reader.readMessage("ver"); + case 15: + txLbl = reader.readString("txLbl"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 14: - mvccSnapshot = 
reader.readMessage("mvccSnapshot"); + case 16: + ver = reader.readMessage("ver"); if (!reader.isLastRead()) return false; @@ -556,7 +585,7 @@ public long accessTtl() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 15; + return 17; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetResponse.java index b4e4424862c54..578c46b6ac34a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearGetResponse.java @@ -228,43 +228,43 @@ public void error(IgniteCheckedException err) { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeCollection("entries", entries, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeCollection("invalidParts", invalidParts, MessageCollectionItemType.INT)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeIgniteUuid("miniId", miniId)) return false; writer.incrementState(); - case 8: - if (!writer.writeMessage("topVer", topVer)) + case 9: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeMessage("ver", ver)) return false; @@ -286,7 +286,7 @@ public void error(IgniteCheckedException err) { return false; switch (reader.state()) { - case 3: + case 4: entries = reader.readCollection("entries", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -294,7 +294,7 @@ public 
void error(IgniteCheckedException err) { reader.incrementState(); - case 4: + case 5: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -302,7 +302,7 @@ public void error(IgniteCheckedException err) { reader.incrementState(); - case 5: + case 6: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -310,7 +310,7 @@ public void error(IgniteCheckedException err) { reader.incrementState(); - case 6: + case 7: invalidParts = reader.readCollection("invalidParts", MessageCollectionItemType.INT); if (!reader.isLastRead()) @@ -318,7 +318,7 @@ public void error(IgniteCheckedException err) { reader.incrementState(); - case 7: + case 8: miniId = reader.readIgniteUuid("miniId"); if (!reader.isLastRead()) @@ -326,15 +326,15 @@ public void error(IgniteCheckedException err) { reader.incrementState(); - case 8: - topVer = reader.readMessage("topVer"); + case 9: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 9: + case 10: ver = reader.readMessage("ver"); if (!reader.isLastRead()) @@ -354,7 +354,7 @@ public void error(IgniteCheckedException err) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 10; + return 11; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockFuture.java index 6cd45148f95b1..58accf6b46230 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockFuture.java @@ -30,6 +30,7 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import java.util.concurrent.atomic.AtomicReference; +import 
org.apache.ignite.IgniteCacheRestartingException; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.cluster.ClusterNode; @@ -847,7 +848,17 @@ void map() { if (topVer != null) { for (GridDhtTopologyFuture fut : cctx.shared().exchange().exchangeFutures()) { if (fut.exchangeDone() && fut.topologyVersion().equals(topVer)){ - Throwable err = fut.validateCache(cctx, recovery, read, null, keys); + Throwable err = null; + + // Before cache validation, make sure that this topology future is already completed. + try { + fut.get(); + } + catch (IgniteCheckedException e) { + err = fut.error(); + } + + err = (err == null)? fut.validateCache(cctx, recovery, read, null, keys): err; if (err != null) { onDone(err); @@ -886,7 +897,10 @@ synchronized void mapOnTopology(final boolean remap) { try { if (cctx.topology().stopping()) { - onDone(new CacheStoppedException(cctx.name())); + onDone( + cctx.shared().cache().isCacheRestarting(cctx.name())? + new IgniteCacheRestartingException(cctx.name()): + new CacheStoppedException(cctx.name())); return; } @@ -1123,7 +1137,8 @@ private void map(Iterable keys, boolean remap, boolean topLocked keepBinary, clientFirst, true, - cctx.deploymentEnabled()); + cctx.deploymentEnabled(), + inTx() ? tx.label() : null); mapping.request(req); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockRequest.java index f736cae61848c..8c2d0e706d09c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockRequest.java @@ -86,6 +86,9 @@ public class GridNearLockRequest extends GridDistributedLockRequest { /** */ private byte flags; + /** Transaction label. 
*/ + private String txLbl; + /** * Empty constructor required for {@link Externalizable}. */ @@ -116,6 +119,7 @@ public GridNearLockRequest() { * @param skipStore Skip store flag. * @param firstClientReq {@code True} if first lock request for lock operation sent from client node. * @param addDepInfo Deployment info flag. + * @param txLbl Transaction label. */ public GridNearLockRequest( int cacheId, @@ -141,7 +145,8 @@ public GridNearLockRequest( boolean keepBinary, boolean firstClientReq, boolean nearCache, - boolean addDepInfo + boolean addDepInfo, + @Nullable String txLbl ) { super( cacheId, @@ -168,6 +173,7 @@ public GridNearLockRequest( this.taskNameHash = taskNameHash; this.createTtl = createTtl; this.accessTtl = accessTtl; + this.txLbl = txLbl; dhtVers = new GridCacheVersion[keyCnt]; @@ -320,6 +326,13 @@ public long accessTtl() { return accessTtl; } + /** + * @return Transaction label. + */ + @Nullable public String txLabel() { + return txLbl; + } + /** {@inheritDoc} */ @Override public void prepareMarshal(GridCacheSharedContext ctx) throws IgniteCheckedException { super.prepareMarshal(ctx); @@ -363,56 +376,62 @@ public long accessTtl() { } switch (writer.state()) { - case 20: + case 21: if (!writer.writeLong("accessTtl", accessTtl)) return false; writer.incrementState(); - case 21: + case 22: if (!writer.writeLong("createTtl", createTtl)) return false; writer.incrementState(); - case 22: + case 23: if (!writer.writeObjectArray("dhtVers", dhtVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 23: + case 24: if (!writer.writeObjectArray("filter", filter, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 24: + case 25: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 25: + case 26: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 26: + case 27: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 
27: + case 28: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 28: - if (!writer.writeMessage("topVer", topVer)) + case 29: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) + return false; + + writer.incrementState(); + + case 30: + if (!writer.writeString("txLbl", txLbl)) return false; writer.incrementState(); @@ -433,7 +452,7 @@ public long accessTtl() { return false; switch (reader.state()) { - case 20: + case 21: accessTtl = reader.readLong("accessTtl"); if (!reader.isLastRead()) @@ -441,7 +460,7 @@ public long accessTtl() { reader.incrementState(); - case 21: + case 22: createTtl = reader.readLong("createTtl"); if (!reader.isLastRead()) @@ -449,7 +468,7 @@ public long accessTtl() { reader.incrementState(); - case 22: + case 23: dhtVers = reader.readObjectArray("dhtVers", MessageCollectionItemType.MSG, GridCacheVersion.class); if (!reader.isLastRead()) @@ -457,7 +476,7 @@ public long accessTtl() { reader.incrementState(); - case 23: + case 24: filter = reader.readObjectArray("filter", MessageCollectionItemType.MSG, CacheEntryPredicate.class); if (!reader.isLastRead()) @@ -465,7 +484,7 @@ public long accessTtl() { reader.incrementState(); - case 24: + case 25: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -473,7 +492,7 @@ public long accessTtl() { reader.incrementState(); - case 25: + case 26: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -481,7 +500,7 @@ public long accessTtl() { reader.incrementState(); - case 26: + case 27: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -489,7 +508,7 @@ public long accessTtl() { reader.incrementState(); - case 27: + case 28: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -497,8 +516,16 @@ public long accessTtl() { reader.incrementState(); - case 28: - topVer = reader.readMessage("topVer"); + case 29: + topVer = reader.readAffinityTopologyVersion("topVer"); + + if 
(!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 30: + txLbl = reader.readString("txLbl"); if (!reader.isLastRead()) return false; @@ -517,7 +544,7 @@ public long accessTtl() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 29; + return 31; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockResponse.java index e88f0a07e0755..b6c6d8c903c49 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearLockResponse.java @@ -208,37 +208,37 @@ public void addValueBytes( } switch (writer.state()) { - case 10: - if (!writer.writeMessage("clientRemapVer", clientRemapVer)) + case 11: + if (!writer.writeAffinityTopologyVersion("clientRemapVer", clientRemapVer)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeObjectArray("dhtVers", dhtVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeBooleanArray("filterRes", filterRes)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeObjectArray("mappedVers", mappedVers, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeCollection("pending", pending, MessageCollectionItemType.MSG)) return false; @@ -260,15 +260,15 @@ public void addValueBytes( return false; switch (reader.state()) { - case 10: - clientRemapVer = reader.readMessage("clientRemapVer"); + case 11: + clientRemapVer = reader.readAffinityTopologyVersion("clientRemapVer"); if 
(!reader.isLastRead()) return false; reader.incrementState(); - case 11: + case 12: dhtVers = reader.readObjectArray("dhtVers", MessageCollectionItemType.MSG, GridCacheVersion.class); if (!reader.isLastRead()) @@ -276,7 +276,7 @@ public void addValueBytes( reader.incrementState(); - case 12: + case 13: filterRes = reader.readBooleanArray("filterRes"); if (!reader.isLastRead()) @@ -284,7 +284,7 @@ public void addValueBytes( reader.incrementState(); - case 13: + case 14: mappedVers = reader.readObjectArray("mappedVers", MessageCollectionItemType.MSG, GridCacheVersion.class); if (!reader.isLastRead()) @@ -292,7 +292,7 @@ public void addValueBytes( reader.incrementState(); - case 14: + case 15: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -300,7 +300,7 @@ public void addValueBytes( reader.incrementState(); - case 15: + case 16: pending = reader.readCollection("pending", MessageCollectionItemType.MSG); if (!reader.isLastRead()) @@ -320,7 +320,7 @@ public void addValueBytes( /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 16; + return 17; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticSerializableTxPrepareFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticSerializableTxPrepareFuture.java index 140c1d5fdbb7d..762e244d323b1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticSerializableTxPrepareFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticSerializableTxPrepareFuture.java @@ -38,7 +38,6 @@ import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxMapping; import 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxMapping; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; @@ -59,7 +58,6 @@ import org.jetbrains.annotations.Nullable; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.TRANSFORM; -import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.noCoordinatorError; import static org.apache.ignite.transactions.TransactionState.PREPARED; import static org.apache.ignite.transactions.TransactionState.PREPARING; @@ -196,9 +194,6 @@ private void prepareError(Throwable e) { if (keyLockFut != null) keyLockFut.onDone(e); - - if (mvccVerFut != null) - mvccVerFut.onDone(); } /** {@inheritDoc} */ @@ -275,7 +270,8 @@ private boolean isMini(IgniteInternalFuture f) { private boolean onComplete() { Throwable err0 = err; - if (err0 == null || tx.needCheckBackup()) + if ((!tx.onePhaseCommit() || tx.mappings().get(cctx.localNodeId()) == null) && + (err0 == null || tx.needCheckBackup())) tx.state(PREPARED); if (super.onDone(tx, err0)) { @@ -349,30 +345,20 @@ private void prepare( boolean hasNearCache = false; - MvccCoordinator mvccCrd = null; - for (IgniteTxEntry write : writes) { map(write, topVer, mappings, txMapping, remap, topLocked); - GridCacheContext cctx = write.context(); - - if (cctx.isNear()) + if (write.context().isNear()) hasNearCache = true; - - if (cctx.mvccEnabled() && mvccCrd == null) { - mvccCrd = cctx.affinity().mvccCoordinator(topVer); - - if (mvccCrd == null) { - onDone(noCoordinatorError(topVer)); - - return; - } - } } - for (IgniteTxEntry read : reads) + for (IgniteTxEntry read : reads) { map(read, topVer, mappings, txMapping, remap, topLocked); + if (read.context().isNear()) + hasNearCache = true; + } 
+ if (keyLockFut != null) keyLockFut.onAllKeysAdded(); @@ -383,8 +369,6 @@ private void prepare( return; } - assert !tx.txState().mvccEnabled(cctx) || mvccCrd != null || F.isEmpty(writes); - tx.addEntryMapping(mappings.values()); cctx.mvcc().recheckPendingLocks(); @@ -396,16 +380,12 @@ private void prepare( MiniFuture locNearEntriesFut = null; - int lockCnt = keyLockFut != null ? 1 : 0; - // Create futures in advance to have all futures when process {@link GridNearTxPrepareResponse#clientRemapVersion}. for (GridDistributedTxMapping m : mappings.values()) { assert !m.empty(); MiniFuture fut = new MiniFuture(this, m, ++miniId); - lockCnt++; - add((IgniteInternalFuture)fut); if (m.primary().isLocal() && m.hasNearCacheEntries() && m.hasColocatedCacheEntries()) { @@ -414,14 +394,9 @@ private void prepare( locNearEntriesFut = fut; add((IgniteInternalFuture)new MiniFuture(this, m, ++miniId)); - - lockCnt++; } } - if (mvccCrd != null) - initMvccVersionFuture(lockCnt, remap); - Collection> futs = (Collection)futures(); Iterator> it = futs.iterator(); @@ -584,7 +559,8 @@ private GridNearTxPrepareRequest createRequest( tx.taskNameHash(), m.clientFirst(), txNodes.size() == 1, - tx.activeCachesDeploymentEnabled()); + tx.activeCachesDeploymentEnabled(), + tx.txState().recovery()); for (IgniteTxEntry txEntry : writes) { if (txEntry.op() == TRANSFORM) @@ -605,7 +581,7 @@ private void prepareLocal(GridNearTxPrepareRequest req, final MiniFuture fut, final boolean nearEntries) { IgniteInternalFuture prepFut = nearEntries ? - cctx.tm().txHandler().prepareNearTxLocal(req) : + cctx.tm().txHandler().prepareNearTxLocal(tx, req) : cctx.tm().txHandler().prepareColocatedTx(tx, req); prepFut.listen(new CI1>() { @@ -986,9 +962,6 @@ void onResult(final GridNearTxPrepareResponse res, boolean updateMapping) { // Finish this mini future (need result only on client node). onDone(parent.cctx.kernalContext().clientNode() ? 
res : null); - - if (parent.mvccVerFut != null) - parent.mvccVerFut.onLockReceived(); } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFuture.java index 06d7a8c2cdf00..7e8c6888c899f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFuture.java @@ -43,7 +43,6 @@ import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxMapping; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxMapping; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; @@ -69,7 +68,6 @@ import org.jetbrains.annotations.Nullable; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.TRANSFORM; -import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.noCoordinatorError; import static org.apache.ignite.transactions.TransactionState.PREPARED; import static org.apache.ignite.transactions.TransactionState.PREPARING; @@ -301,7 +299,8 @@ private boolean isMini(IgniteInternalFuture f) { private boolean onComplete() { Throwable err0 = err; - if (err0 == null || tx.needCheckBackup()) + if ((!tx.onePhaseCommit() || tx.mappings().get(cctx.localNodeId()) == null) && + (err0 == null || tx.needCheckBackup())) tx.state(PREPARED); if (super.onDone(tx, err0)) { @@ -382,18 +381,6 @@ 
else if (write.context().isColocated()) tx.colocatedLocallyMapped(true); } - if (write.context().mvccEnabled()) { - MvccCoordinator mvccCrd = write.context().affinity().mvccCoordinator(topVer); - - if (mvccCrd == null) { - onDone(noCoordinatorError(topVer)); - - return; - } - - initMvccVersionFuture(keyLockFut != null ? 2 : 1, remap); - } - if (keyLockFut != null) keyLockFut.onAllKeysAdded(); @@ -438,8 +425,6 @@ private void prepare( boolean hasNearCache = false; - MvccCoordinator mvccCrd = null; - for (IgniteTxEntry write : writes) { write.clearEntryReadVersion(); @@ -449,16 +434,6 @@ private void prepare( // an exception occurred while transaction mapping, stop further processing break; - if (write.context().mvccEnabled() && mvccCrd == null) { - mvccCrd = write.context().affinity().mvccCoordinator(topVer); - - if (mvccCrd == null) { - onDone(noCoordinatorError(topVer)); - - break; - } - } - if (write.context().isNear()) hasNearCache = true; @@ -498,11 +473,6 @@ else if (write.context().isColocated()) return; } - assert !tx.txState().mvccEnabled(cctx) || mvccCrd != null; - - if (mvccCrd != null) - initMvccVersionFuture(keyLockFut != null ? 2 : 1, remap); - if (keyLockFut != null) keyLockFut.onAllKeysAdded(); @@ -526,12 +496,8 @@ else if (write.context().isColocated()) private void proceedPrepare(final Queue mappings) { final GridDistributedTxMapping m = mappings.poll(); - if (m == null) { - if (mvccVerFut != null) - mvccVerFut.onLockReceived(); - + if (m == null) return; - } proceedPrepare(m, mappings); } @@ -574,7 +540,8 @@ private void proceedPrepare(GridDistributedTxMapping m, @Nullable final Queue prepFut = - m.hasNearCacheEntries() ? cctx.tm().txHandler().prepareNearTxLocal(req) + m.hasNearCacheEntries() ? cctx.tm().txHandler().prepareNearTxLocal(tx, req) : cctx.tm().txHandler().prepareColocatedTx(tx, req); prepFut.listen(new CI1>() { @@ -1039,8 +1006,6 @@ void onResult(final GridNearTxPrepareResponse res) { // Proceed prepare before finishing mini future. 
if (mappings != null) parent.proceedPrepare(mappings); - else if (parent.mvccVerFut != null) - parent.mvccVerFut.onLockReceived(); // Finish this mini future. onDone((GridNearTxPrepareResponse)null); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFutureAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFutureAdapter.java index 6f541d330b5a1..51ee9c2f022d3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFutureAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearOptimisticTxPrepareFutureAdapter.java @@ -18,15 +18,12 @@ package org.apache.ignite.internal.processors.cache.distributed.near; import java.util.Collection; -import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; +import org.apache.ignite.internal.managers.communication.GridIoPolicy; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; -import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; -import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshotResponseListener; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException; @@ -35,7 +32,6 @@ import org.apache.ignite.internal.util.tostring.GridToStringExclude; import 
org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.lang.IgniteInClosure; import org.jetbrains.annotations.Nullable; /** @@ -45,18 +41,10 @@ public abstract class GridNearOptimisticTxPrepareFutureAdapter extends GridNearT /** */ private static final long serialVersionUID = 7460376140787916619L; - /** */ - private static final AtomicIntegerFieldUpdater LOCK_CNT_UPD = - AtomicIntegerFieldUpdater.newUpdater(MvccSnapshotFutureExt.class, "lockCnt"); - /** */ @GridToStringExclude protected KeyLockFuture keyLockFut; - /** */ - @GridToStringExclude - protected MvccSnapshotFutureExt mvccVerFut; - /** * @param cctx Context. * @param tx Transaction. @@ -194,14 +182,30 @@ protected final void prepareOnTopology(final boolean remap, @Nullable final Runn } else { cctx.time().waitAsync(topFut, tx.remainingTime(), (e, timedOut) -> { - if (errorOrTimeoutOnTopologyVersion(e, timedOut)) - return; - - try { - prepareOnTopology(remap, c); + // TODO GG-17429 this should actually go to the cctx.time().waitAsync(...) logic. + if (cctx.exchange().currentThreadIsExchanger()) { + cctx.kernalContext().closure().runLocalSafe(() -> { + if (errorOrTimeoutOnTopologyVersion(e, timedOut)) + return; + + try { + prepareOnTopology(remap, c); + } + finally { + cctx.txContextReset(); + } + }, GridIoPolicy.SYSTEM_POOL); } - finally { - cctx.txContextReset(); + else { + if (errorOrTimeoutOnTopologyVersion(e, timedOut)) + return; + + try { + prepareOnTopology(remap, c); + } + finally { + cctx.txContextReset(); + } } }); } @@ -213,29 +217,6 @@ protected final void prepareOnTopology(final boolean remap, @Nullable final Runn */ protected abstract void prepare0(boolean remap, boolean topLocked); - /** - * @param lockCnt Expected number of lock responses. - * @param remap Remap flag. 
- */ - @SuppressWarnings("unchecked") - final void initMvccVersionFuture(int lockCnt, boolean remap) { - if (!remap) { - mvccVerFut = new MvccSnapshotFutureExt(); - - mvccVerFut.init(lockCnt); - - if (keyLockFut != null) - keyLockFut.listen(mvccVerFut); - - add((IgniteInternalFuture)mvccVerFut); - } - else { - assert mvccVerFut != null; - - mvccVerFut.init(lockCnt); - } - } - /** * @param e Exception. * @param timedOut {@code True} if timed out. @@ -314,82 +295,4 @@ private void checkLocks() { return S.toString(KeyLockFuture.class, this, super.toString()); } } - - /** - * - */ - class MvccSnapshotFutureExt extends GridFutureAdapter implements MvccSnapshotResponseListener, IgniteInClosure> { - /** */ - private static final long serialVersionUID = 5883078648683911226L; - - /** */ - volatile int lockCnt; - - /** {@inheritDoc} */ - @Override public void apply(IgniteInternalFuture keyLockFut) { - try { - keyLockFut.get(); - - onLockReceived(); - } - catch (IgniteCheckedException e) { - if (log.isDebugEnabled()) - log.debug("MvccSnapshotFutureExt ignores key lock future failure: " + e); - } - } - - /** - * @param lockCnt Expected number of lock responses. 
- */ - void init(int lockCnt) { - assert lockCnt > 0; - - this.lockCnt = lockCnt; - - assert !isDone(); - } - - /** */ - void onLockReceived() { - int remaining = LOCK_CNT_UPD.decrementAndGet(this); - - assert remaining >= 0 : remaining; - - if (remaining == 0) { - try { - MvccSnapshot snapshot = cctx.coordinators().tryRequestSnapshotLocal(tx); - - if (snapshot != null) - onResponse(snapshot); - else - cctx.coordinators().requestSnapshotAsync(tx, this); - } - catch (ClusterTopologyCheckedException e) { - onError(e); - } - } - } - - /** {@inheritDoc} */ - @Override public void onResponse(MvccSnapshot res) { - tx.mvccSnapshot(res); - - onDone(); - } - - /** {@inheritDoc} */ - @Override public void onError(IgniteCheckedException e) { - if (e instanceof ClusterTopologyCheckedException) - ((ClusterTopologyCheckedException)e).retryReadyFuture(cctx.nextAffinityReadyFuture(tx.topologyVersion())); - - ERR_UPD.compareAndSet(GridNearOptimisticTxPrepareFutureAdapter.this, null, e); - - onDone(); - } - - /** {@inheritDoc} */ - @Override public String toString() { - return S.toString(MvccSnapshotFutureExt.class, this, super.toString()); - } - } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearPessimisticTxPrepareFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearPessimisticTxPrepareFuture.java index 85a48a3051de4..e5095fd92b54c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearPessimisticTxPrepareFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearPessimisticTxPrepareFuture.java @@ -34,11 +34,8 @@ import org.apache.ignite.internal.processors.cache.GridCacheMvccCandidate; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxMapping; -import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxMapping; -import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; -import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; -import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshotResponseListener; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException; @@ -52,7 +49,6 @@ import org.jetbrains.annotations.Nullable; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.TRANSFORM; -import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.noCoordinatorError; import static org.apache.ignite.transactions.TransactionState.PREPARED; import static org.apache.ignite.transactions.TransactionState.PREPARING; @@ -229,7 +225,8 @@ private GridNearTxPrepareRequest createRequest(Map> txNod tx.taskNameHash(), false, true, - tx.activeCachesDeploymentEnabled()); + tx.activeCachesDeploymentEnabled(), + tx.txState().recovery()); req.queryUpdate(m.queryUpdate()); @@ -259,7 +256,7 @@ private void prepareLocal(GridNearTxPrepareRequest req, add((IgniteInternalFuture)fut); IgniteInternalFuture prepFut = nearEntries ? - cctx.tm().txHandler().prepareNearTxLocal(req) : + cctx.tm().txHandler().prepareNearTxLocal(tx, req) : cctx.tm().txHandler().prepareColocatedTx(tx, req); prepFut.listen(new CI1>() { @@ -279,35 +276,31 @@ private void prepareLocal(GridNearTxPrepareRequest req, */ @SuppressWarnings("unchecked") private void preparePessimistic() { + assert !tx.implicitSingle() || tx.queryEnlisted(); // Non-mvcc implicit-single tx goes fast commit way. 
+ Map mappings = new HashMap<>(); AffinityTopologyVersion topVer = tx.topologyVersion(); - GridDhtTxMapping txMapping = new GridDhtTxMapping(); - - boolean queryMapped = false; + boolean hasNearCache = false; - assert !tx.implicitSingle() || tx.queryEnlisted(); // Non-mvcc implicit-single tx goes fast commit way. + Map> txNodes; - Collection txMappings = !tx.implicitSingle() ? tx.mappings().mappings() - : Collections.singleton(tx.mappings().singleMapping()); + if (tx.txState().mvccEnabled()) { + Collection mvccMappings = tx.implicitSingle() + ? Collections.singleton(tx.mappings().singleMapping()) : tx.mappings().mappings(); - for (GridDistributedTxMapping m : F.view(txMappings, CU.FILTER_QUERY_MAPPING)) { - GridDistributedTxMapping nodeMapping = mappings.get(m.primary().id()); + txNodes = new HashMap<>(mvccMappings.size()); - if (nodeMapping == null) + for (GridDistributedTxMapping m : mvccMappings) { mappings.put(m.primary().id(), m); - txMapping.addMapping(F.asList(m.primary())); - - queryMapped = true; + txNodes.put(m.primary().id(), m.backups()); + } } + else { + GridDhtTxMapping txMapping = new GridDhtTxMapping(); - MvccCoordinator mvccCrd = null; - - boolean hasNearCache = false; - - if (!queryMapped) { for (IgniteTxEntry txEntry : tx.allEntries()) { txEntry.clearEntryReadVersion(); @@ -326,16 +319,6 @@ private void preparePessimistic() { else nodes = cacheCtx.affinity().nodesByKey(txEntry.key(), topVer); - if (tx.mvccSnapshot() == null && mvccCrd == null && cacheCtx.mvccEnabled()) { - mvccCrd = cacheCtx.affinity().mvccCoordinator(topVer); - - if (mvccCrd == null) { - onDone(noCoordinatorError(topVer)); - - return; - } - } - if (F.isEmpty(nodes)) { onDone(new ClusterTopologyServerNotFoundException("Failed to map keys to nodes (partition " + "is not mapped to any node) [key=" + txEntry.key() + @@ -357,14 +340,14 @@ private void preparePessimistic() { txMapping.addMapping(nodes); } - } - assert !tx.txState().mvccEnabled(cctx) || tx.mvccSnapshot() != null || 
mvccCrd != null;
+                txNodes = txMapping.transactionNodes();
+            }

-        tx.transactionNodes(txMapping.transactionNodes());
+        tx.transactionNodes(txNodes);

         if (!hasNearCache)
-            checkOnePhase(txMapping);
+            checkOnePhase(txNodes);

         long timeout = tx.remainingTime();
@@ -376,31 +359,17 @@ private void preparePessimistic() {
         int miniId = 0;

-        Map> txNodes = txMapping.transactionNodes();
-
         for (final GridDistributedTxMapping m : mappings.values()) {
             final ClusterNode primary = m.primary();

-            boolean needCntr = false;
-
-            if (mvccCrd != null) {
-                if (tx.onePhaseCommit() || mvccCrd.nodeId().equals(primary.id())) {
-                    needCntr = true;
-
-                    mvccCrd = null;
-                }
-            }
-
             if (primary.isLocal()) {
                 if (m.hasNearCacheEntries() && m.hasColocatedCacheEntries()) {
-                    GridNearTxPrepareRequest nearReq = createRequest(txMapping.transactionNodes(),
+                    GridNearTxPrepareRequest nearReq = createRequest(txNodes,
                         m,
                         timeout,
                         m.nearEntriesReads(),
                         m.nearEntriesWrites());

-                    nearReq.requestMvccCounter(needCntr);
-
                     prepareLocal(nearReq, m, ++miniId, true);

                     GridNearTxPrepareRequest colocatedReq = createRequest(txNodes,
@@ -414,8 +383,6 @@ private void preparePessimistic() {
                 else {
                     GridNearTxPrepareRequest req = createRequest(txNodes, m, timeout, m.reads(), m.writes());

-                    req.requestMvccCounter(needCntr);
-
                     prepareLocal(req, m, ++miniId, m.hasNearCacheEntries());
                 }
             }
@@ -426,8 +393,6 @@ private void preparePessimistic() {
                     m.reads(),
                     m.writes());

-                req.requestMvccCounter(needCntr);
-
                 final MiniFuture fut = new MiniFuture(m, ++miniId);

                 req.miniId(fut.futureId());
@@ -460,16 +425,6 @@ private void preparePessimistic() {
             }
         }

-        if (mvccCrd != null) {
-            assert !tx.onePhaseCommit();
-
-            MvccSnapshotFutureExt fut = new MvccSnapshotFutureExt();
-
-            cctx.coordinators().requestSnapshotAsync(tx, fut);
-
-            add((IgniteInternalFuture)fut);
-        }
-
         markInitialized();
     }
@@ -485,7 +440,8 @@ private void preparePessimistic() {
         err = this.err;

-        if (err == null || tx.needCheckBackup())
+        if ((!tx.onePhaseCommit() || tx.mappings().get(cctx.localNodeId()) == null) &&
+            (err == null || tx.needCheckBackup()))
             tx.state(PREPARED);

         if (super.onDone(tx, err)) {
@@ -517,35 +473,6 @@ private void preparePessimistic() {
             "super", super.toString());
     }

-    /**
-     *
-     */
-    private class MvccSnapshotFutureExt extends GridFutureAdapter implements MvccSnapshotResponseListener {
-        /** {@inheritDoc} */
-        @Override public void onResponse(MvccSnapshot res) {
-            tx.mvccSnapshot(res);
-
-            onDone();
-        }
-
-        /** {@inheritDoc} */
-        @Override public void onError(IgniteCheckedException e) {
-            if (log.isDebugEnabled())
-                log.debug("Error on tx prepare [fut=" + this + ", err=" + e + ", tx=" + tx + ']');
-
-            if (ERR_UPD.compareAndSet(GridNearPessimisticTxPrepareFuture.this, null, e))
-                tx.setRollbackOnly();
-
-            onDone(e);
-        }
-
-        /** {@inheritDoc} */
-        @Override public String toString() {
-            return S.toString(MvccSnapshotFutureExt.class, this, super.toString());
-        }
-    }
-
     /** */
     private class MiniFuture extends GridFutureAdapter {
         /** */
@@ -585,9 +512,6 @@ void onResult(GridNearTxPrepareResponse res, boolean updateMapping) {
             if (res.error() != null)
                 onError(res.error());
             else {
-                if (res.mvccSnapshot() != null)
-                    tx.mvccSnapshot(res.mvccSnapshot());
-
                 onPrepareResponse(m, res, updateMapping);

                 onDone(res);
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetRequest.java
index cf885e27111e3..3040e5ceceea1 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetRequest.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetRequest.java
@@ -83,6 +83,9 @@ public class GridNearSingleGetRequest extends GridCacheIdMessage implements Grid
     /** TTL for read operation. */
     private long accessTtl;

+    /** Transaction label. */
+    private @Nullable String txLbl;
+
     /** */
     private MvccSnapshot mvccSnapshot;
@@ -108,6 +111,7 @@ public GridNearSingleGetRequest() {
      * @param addReader Add reader flag.
      * @param needVer {@code True} if entry version is needed.
      * @param addDepInfo Deployment info.
+     * @param txLbl Transaction label.
      * @param mvccSnapshot MVCC snapshot.
      */
     public GridNearSingleGetRequest(
@@ -125,6 +129,7 @@ public GridNearSingleGetRequest(
         boolean needVer,
         boolean addDepInfo,
         boolean recovery,
+        @Nullable String txLbl,
         MvccSnapshot mvccSnapshot
     ) {
         assert key != null;
@@ -138,6 +143,7 @@ public GridNearSingleGetRequest(
         this.createTtl = createTtl;
         this.accessTtl = accessTtl;
         this.addDepInfo = addDepInfo;
+        this.txLbl = txLbl;
         this.mvccSnapshot = mvccSnapshot;

         if (readThrough)
@@ -221,6 +227,15 @@ public long accessTtl() {
         return key.partition();
     }

+    /**
+     * Get transaction label (may be null).
+     *
+     * @return Transaction label;
+     */
+    @Nullable public String txLabel() {
+        return txLbl;
+    }
+
     /**
      * @return Read through flag.
      */
@@ -296,7 +311,7 @@ public boolean recovery() {
             return false;

         switch (reader.state()) {
-            case 3:
+            case 4:
                 accessTtl = reader.readLong("accessTtl");

                 if (!reader.isLastRead())
@@ -304,7 +319,7 @@ public boolean recovery() {
                 reader.incrementState();

-            case 4:
+            case 5:
                 createTtl = reader.readLong("createTtl");

                 if (!reader.isLastRead())
@@ -312,7 +327,7 @@ public boolean recovery() {
                 reader.incrementState();

-            case 5:
+            case 6:
                 flags = reader.readByte("flags");

                 if (!reader.isLastRead())
@@ -320,7 +335,7 @@ public boolean recovery() {
                 reader.incrementState();

-            case 6:
+            case 7:
                 futId = reader.readLong("futId");

                 if (!reader.isLastRead())
@@ -328,7 +343,7 @@ public boolean recovery() {
                 reader.incrementState();

-            case 7:
+            case 8:
                 key = reader.readMessage("key");

                 if (!reader.isLastRead())
@@ -336,7 +351,15 @@ public boolean recovery() {
                 reader.incrementState();

-            case 8:
+            case 9:
+                mvccSnapshot = reader.readMessage("mvccSnapshot");
+
+                if (!reader.isLastRead())
+                    return false;
+
+                reader.incrementState();
+
+            case 10:
                 subjId = reader.readUuid("subjId");

                 if (!reader.isLastRead())
@@ -344,7 +367,7 @@ public boolean recovery() {
                 reader.incrementState();

-            case 9:
+            case 11:
                 taskNameHash = reader.readInt("taskNameHash");

                 if (!reader.isLastRead())
@@ -352,16 +375,16 @@ public boolean recovery() {
                 reader.incrementState();

-            case 10:
-                topVer = reader.readMessage("topVer");
+            case 12:
+                topVer = reader.readAffinityTopologyVersion("topVer");

                 if (!reader.isLastRead())
                     return false;

                 reader.incrementState();

-            case 11:
-                mvccSnapshot = reader.readMessage("mvccSnapshot");
+            case 13:
+                txLbl = reader.readString("txLbl");

                 if (!reader.isLastRead())
                     return false;
@@ -388,56 +411,62 @@ public boolean recovery() {
         }

         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeLong("accessTtl", accessTtl))
                     return false;

                 writer.incrementState();

-            case 4:
+            case 5:
                 if (!writer.writeLong("createTtl", createTtl))
                     return false;

                 writer.incrementState();

-            case 5:
+            case 6:
                 if (!writer.writeByte("flags", flags))
                     return false;

                 writer.incrementState();

-            case 6:
+            case 7:
                 if (!writer.writeLong("futId", futId))
                     return false;

                 writer.incrementState();

-            case 7:
+            case 8:
                 if (!writer.writeMessage("key", key))
                     return false;

                 writer.incrementState();

-            case 8:
+            case 9:
+                if (!writer.writeMessage("mvccSnapshot", mvccSnapshot))
+                    return false;
+
+                writer.incrementState();
+
+            case 10:
                 if (!writer.writeUuid("subjId", subjId))
                     return false;

                 writer.incrementState();

-            case 9:
+            case 11:
                 if (!writer.writeInt("taskNameHash", taskNameHash))
                     return false;

                 writer.incrementState();

-            case 10:
-                if (!writer.writeMessage("topVer", topVer))
+            case 12:
+                if (!writer.writeAffinityTopologyVersion("topVer", topVer))
                     return false;

                 writer.incrementState();

-            case 11:
-                if (!writer.writeMessage("mvccSnapshot", mvccSnapshot))
+            case 13:
+                if (!writer.writeString("txLbl", txLbl))
                     return false;

                 writer.incrementState();
@@ -459,7 +488,7 @@ public boolean recovery() {
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 12;
+        return 14;
     }

     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetResponse.java
index 2cb75c253e290..584cec2eb1cb1 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetResponse.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearSingleGetResponse.java
@@ -206,32 +206,32 @@ else if (res instanceof GridCacheEntryInfo)
         }

         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeByteArray("errBytes", errBytes))
                     return false;

                 writer.incrementState();

-            case 4:
+            case 5:
                 if (!writer.writeByte("flags", flags))
                     return false;

                 writer.incrementState();

-            case 5:
+            case 6:
                 if (!writer.writeLong("futId", futId))
                     return false;

                 writer.incrementState();

-            case 6:
+            case 7:
                 if (!writer.writeMessage("res", res))
                     return false;

                 writer.incrementState();

-            case 7:
-                if (!writer.writeMessage("topVer", topVer))
+            case 8:
+                if (!writer.writeAffinityTopologyVersion("topVer", topVer))
                     return false;

                 writer.incrementState();
@@ -252,7 +252,7 @@ else if (res instanceof GridCacheEntryInfo)
             return false;

         switch (reader.state()) {
-            case 3:
+            case 4:
                 errBytes = reader.readByteArray("errBytes");

                 if (!reader.isLastRead())
@@ -260,7 +260,7 @@ else if (res instanceof GridCacheEntryInfo)
                 reader.incrementState();

-            case 4:
+            case 5:
                 flags = reader.readByte("flags");

                 if (!reader.isLastRead())
@@ -268,7 +268,7 @@ else if (res instanceof GridCacheEntryInfo)
                 reader.incrementState();

-            case 5:
+            case 6:
                 futId = reader.readLong("futId");

                 if (!reader.isLastRead())
@@ -276,7 +276,7 @@ else if (res instanceof GridCacheEntryInfo)
                 reader.incrementState();

-            case 6:
+            case 7:
                 res = reader.readMessage("res");

                 if (!reader.isLastRead())
@@ -284,8 +284,8 @@ else if (res instanceof GridCacheEntryInfo)
                 reader.incrementState();

-            case 7:
-                topVer = reader.readMessage("topVer");
+            case 8:
+                topVer = reader.readAffinityTopologyVersion("topVer");

                 if (!reader.isLastRead())
                     return false;
@@ -309,7 +309,7 @@ else if (res instanceof GridCacheEntryInfo)
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 8;
+        return 9;
     }

     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTransactionalCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTransactionalCache.java
index 494f388fb84e1..8d89d93f8a4f0 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTransactionalCache.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTransactionalCache.java
@@ -255,7 +255,7 @@ public void clearLocks(UUID nodeId, GridDhtUnlockRequest req) {
                                 "(added to cancelled locks set): " + req);
                         }

-                        entry.touch(topVer);
+                        entry.touch();
                     }
                     else if (log.isDebugEnabled())
                         log.debug("Received unlock request for entry that could not be found: " + req);
@@ -331,7 +331,8 @@ else if (log.isDebugEnabled())
                             req.timeout(),
                             req.txSize(),
                             req.subjectId(),
-                            req.taskNameHash()
+                            req.taskNameHash(),
+                            req.txLabel()
                         );

                         tx = ctx.tm().onCreated(null, tx);
@@ -363,7 +364,7 @@ else if (log.isDebugEnabled())
                         );

                         if (!req.inTx())
-                            entry.touch(req.topologyVersion());
+                            entry.touch();
                     }
                     else {
                         if (evicted == null)
@@ -596,7 +597,7 @@ else if (log.isDebugEnabled())
                     if (topVer.equals(AffinityTopologyVersion.NONE))
                         topVer = ctx.affinity().affinityTopologyVersion();

-                    entry.touch(topVer);
+                    entry.touch();

                     break;
                 }
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxAbstractEnlistFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxAbstractEnlistFuture.java
index 11f98cacd5154..3f82144fac052 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxAbstractEnlistFuture.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxAbstractEnlistFuture.java
@@ -19,6 +19,7 @@
 import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
 import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
+import org.apache.ignite.IgniteCacheRestartingException;
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.IgniteLogger;
 import org.apache.ignite.cluster.ClusterNode;
@@ -41,7 +42,6 @@
 import org.apache.ignite.internal.processors.timeout.GridTimeoutObjectAdapter;
 import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException;
 import org.apache.ignite.internal.util.tostring.GridToStringExclude;
-import org.apache.ignite.internal.util.typedef.CI1;
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteInClosure;
@@ -151,7 +151,7 @@ public void init() {
         else if (timeout > 0)
             timeoutObj = new LockTimeoutObject();

-        while(true) {
+        while (true) {
             IgniteInternalFuture fut = tx.lockFuture();

             if (fut == GridDhtTxLocalAdapter.ROLLBACK_FUT) {
@@ -224,7 +224,18 @@ else if (tx.updateLockFuture(null, this))
             if (topVer != null) {
                 for (GridDhtTopologyFuture fut : cctx.shared().exchange().exchangeFutures()) {
                     if (fut.exchangeDone() && fut.topologyVersion().equals(topVer)) {
-                        Throwable err = fut.validateCache(cctx, false, false, null, null);
+                        Throwable err = null;
+
+                        // Before cache validation, make sure that this topology future is already completed.
+                        try {
+                            fut.get();
+                        }
+                        catch (IgniteCheckedException e) {
+                            err = fut.error();
+                        }
+
+                        if (err == null)
+                            err = fut.validateCache(cctx, false, false, null, null);

                         if (err != null) {
                             onDone(err);
@@ -305,7 +316,10 @@ private void mapOnTopology() {
         try {
             if (cctx.topology().stopping()) {
-                onDone(new CacheStoppedException(cctx.name()));
+                onDone(
+                    cctx.shared().cache().isCacheRestarting(cctx.name())?
+                        new IgniteCacheRestartingException(cctx.name()):
+                        new CacheStoppedException(cctx.name()));

                 return;
             }
@@ -332,25 +346,22 @@ private void mapOnTopology() {
                 map(false);
             }
             else {
-                fut.listen(new CI1>() {
-                    @Override public void apply(IgniteInternalFuture fut) {
-                        try {
-                            fut.get();
-
+                cctx.time().waitAsync(fut, tx.remainingTime(), (e, timedOut) -> {
+                    try {
+                        if (e != null || timedOut)
+                            onDone(timedOut ? tx.timeoutException() : e);
+                        else
                             mapOnTopology();
-                        }
-                        catch (IgniteCheckedException e) {
-                            onDone(e);
-                        }
-                        finally {
-                            cctx.shared().txContextReset();
-                        }
+                    }
+                    finally {
+                        cctx.shared().txContextReset();
                     }
                 });
             }
         }
         finally {
-            cctx.topology().readUnlock();
+            if (cctx.topology().holdsLock())
+                cctx.topology().readUnlock();
         }
     }
@@ -367,6 +378,10 @@ private void mapOnTopology() {
         if (!DONE_UPD.compareAndSet(this, 0, 1))
             return false;

+        // Need to unlock topology to avoid deadlock with binary descriptors registration.
+        if(cctx.topology().holdsLock())
+            cctx.topology().readUnlock();
+
         cctx.tm().txContext(tx);

         Throwable ex0 = ex;
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistFuture.java
index 8d85bd9c55e78..8b750f495a2d2 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistFuture.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistFuture.java
@@ -26,10 +26,12 @@
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
+import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.cluster.ClusterNode;
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException;
+import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException;
 import org.apache.ignite.internal.processors.cache.CacheEntryPredicate;
 import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.GridCacheMessage;
@@ -70,6 +72,10 @@ public class GridNearTxEnlistFuture extends GridNearTxAbstractEnlistFuture
         SKIP_UPD = AtomicIntegerFieldUpdater.newUpdater(GridNearTxEnlistFuture.class, "skipCntr");

+    /** Res field updater. */
+    private static final AtomicReferenceFieldUpdater RES_UPD =
+        AtomicReferenceFieldUpdater.newUpdater(GridNearTxEnlistFuture.class, GridCacheReturn.class, "res");
+
     /** Marker object. */
     private static final Object FINISHED = new Object();
@@ -158,6 +164,10 @@ private void sendNextBatches(@Nullable UUID nodeId) {
         boolean first = (nodeId != null);

+        // Need to unlock topology to avoid deadlock with binary descriptors registration.
+        if(!topLocked && cctx.topology().holdsLock())
+            cctx.topology().readUnlock();
+
         for (Batch batch : next) {
             ClusterNode node = batch.node();
@@ -206,12 +216,10 @@ private Collection continueLoop(@Nullable UUID nodeId) throws IgniteCheck
                 KeyCacheObject key = cctx.toCacheKeyObject(op.isDeleteOrLock() ? cur : ((IgniteBiTuple)cur).getKey());

-                List nodes = cctx.affinity().nodesByKey(key, topVer);
+                ClusterNode node = cctx.affinity().primaryByKey(key, topVer);

-                ClusterNode node;
-
-                if (F.isEmpty(nodes) || ((node = nodes.get(0)) == null))
-                    throw new ClusterTopologyCheckedException("Failed to get primary node " +
+                if (node == null)
+                    throw new ClusterTopologyServerNotFoundException("Failed to get primary node " +
                         "[topVer=" + topVer + ", key=" + key + ']');

                 tx.markQueryEnlisted(null);
@@ -237,8 +245,7 @@ else if (batch != null && !batch.node().equals(node))
                         break;
                 }

-                batch.add(op.isDeleteOrLock() ? key : cur,
-                    op != EnlistOperation.LOCK && cctx.affinityNode() && (cctx.isReplicated() || nodes.indexOf(cctx.localNode()) > 0));
+                batch.add(op.isDeleteOrLock() ? key : cur, !node.isLocal() && isLocalBackup(op, key));

                 if (batch.size() == batchSize)
                     res = markReady(res, batch);
@@ -294,6 +301,16 @@ private boolean hasNext0() {
         return peek != FINISHED;
     }

+    /** */
+    private boolean isLocalBackup(EnlistOperation op, KeyCacheObject key) {
+        if (!cctx.affinityNode() || op == EnlistOperation.LOCK)
+            return false;
+        else if (cctx.isReplicated())
+            return true;
+
+        return cctx.topology().nodes(key.partition(), tx.topologyVersion()).indexOf(cctx.localNode()) > 0;
+    }
+
     /**
      * Add batch to batch collection if it is ready.
      *
@@ -338,7 +355,11 @@ private void processBatchLocalBackupKeys(UUID primaryId, List rows, Grid
                 keys.add(cctx.toCacheKeyObject(row));
             else {
                 keys.add(cctx.toCacheKeyObject(((IgniteBiTuple)row).getKey()));
-                vals.add(cctx.toCacheObject(((IgniteBiTuple)row).getValue()));
+
+                if (op.isInvoke())
+                    vals.add((Message)((IgniteBiTuple)row).getValue());
+                else
+                    vals.add(cctx.toCacheObject(((IgniteBiTuple)row).getValue()));
             }
         }
@@ -363,7 +384,8 @@ private void processBatchLocalBackupKeys(UUID primaryId, List rows, Grid
                 -1,
                 this.tx.subjectId(),
                 this.tx.taskNameHash(),
-                false);
+                false,
+                null);

             dhtTx.mvccSnapshot(new MvccSnapshotWithoutTxs(mvccSnapshot.coordinatorVersion(),
                 mvccSnapshot.counter(), MVCC_OP_COUNTER_NA, mvccSnapshot.cleanupVersion()));
@@ -376,7 +398,8 @@ private void processBatchLocalBackupKeys(UUID primaryId, List rows, Grid
                 }
             }

-            dhtTx.mvccEnlistBatch(cctx, it.operation(), keys, vals, mvccSnapshot.withoutActiveTransactions());
+            cctx.tm().txHandler().mvccEnlistBatch(dhtTx, cctx, it.operation(), keys, vals,
+                mvccSnapshot.withoutActiveTransactions(), null, -1);
         }
         catch (IgniteCheckedException e) {
             onDone(e);
@@ -566,8 +589,8 @@ public boolean checkResponse(UUID nodeId, GridNearTxEnlistResponse res, Throwabl
             if (err == null && res.error() != null)
                 err = res.error();

-            if (X.hasCause(err, ClusterTopologyCheckedException.class))
-                tx.removeMapping(nodeId);
+            if (res != null)
+                tx.mappings().get(nodeId).addBackups(res.newDhtNodes());

             if (err != null)
                 processFailure(err, null);
@@ -583,9 +606,18 @@ public boolean checkResponse(UUID nodeId, GridNearTxEnlistResponse res, Throwabl
             assert res != null;

-            this.res = res.result();
+            if (this.res != null || !RES_UPD.compareAndSet(this, null, res.result())) {
+                GridCacheReturn res0 = this.res;
+
+                if (res.result().invokeResult())
+                    res0.mergeEntryProcessResults(res.result());
+                else if (res0.success() && !res.result().success())
+                    res0.success(false);
+            }
+
+            assert this.res != null && (this.res.emptyResult() || needRes || this.res.invokeResult() || !this.res.success());

-            assert this.res != null && (this.res.emptyResult() || needRes || !this.res.success());
+            tx.hasRemoteLocks(true);

             return true;
         }
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistRequest.java
index 1d870238b2727..dd868618c005e 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistRequest.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistRequest.java
@@ -31,6 +31,7 @@
 import org.apache.ignite.internal.processors.cache.GridCacheIdMessage;
 import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
 import org.apache.ignite.internal.processors.cache.KeyCacheObject;
+import org.apache.ignite.internal.processors.cache.distributed.dht.GridInvokeValue;
 import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot;
 import org.apache.ignite.internal.processors.cache.version.GridCacheVersion;
 import org.apache.ignite.internal.processors.query.EnlistOperation;
@@ -38,6 +39,7 @@
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.lang.IgniteUuid;
+import org.apache.ignite.plugin.extensions.communication.Message;
 import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType;
 import org.apache.ignite.plugin.extensions.communication.MessageReader;
 import org.apache.ignite.plugin.extensions.communication.MessageWriter;
@@ -95,7 +97,7 @@ public class GridNearTxEnlistRequest extends GridCacheIdMessage {
     /** Serialized rows values. */
     @GridToStringExclude
-    private CacheObject[] values;
+    private Message[] values;

     /** Enlist operation. */
     private EnlistOperation op;
@@ -286,7 +288,7 @@ public CacheEntryPredicate filter() {
         boolean keysOnly = op.isDeleteOrLock();

-        values = keysOnly ? null : new CacheObject[keys.length];
+        values = keysOnly ? null : new Message[keys.length];

         for (Object row : rows) {
             Object key, val = null;
@@ -309,13 +311,24 @@ public CacheEntryPredicate filter() {
             keys[i] = key0;

             if (!keysOnly) {
-                CacheObject val0 = cctx.toCacheObject(val);
+                if (op.isInvoke()) {
+                    GridInvokeValue val0 = (GridInvokeValue)val;

-                assert val0 != null;
+                    assert val0 != null;

-                val0.prepareMarshal(objCtx);
+                    val0.prepareMarshal(cctx);

-                values[i] = val0;
+                    values[i] = val0;
+                }
+                else {
+                    CacheObject val0 = cctx.toCacheObject(val);
+
+                    assert val0 != null;
+
+                    val0.prepareMarshal(objCtx);
+
+                    values[i] = val0;
+                }
             }

             i++;
@@ -341,8 +354,12 @@ public CacheEntryPredicate filter() {
             if (op.isDeleteOrLock())
                 rows.add(keys[i]);
             else {
-                if (values[i] != null)
-                    values[i].finishUnmarshal(objCtx, ldr);
+                if (values[i] != null) {
+                    if(op.isInvoke())
+                        ((GridInvokeValue)values[i]).finishUnmarshal(ctx, ldr);
+                    else
+                        ((CacheObject)values[i]).finishUnmarshal(objCtx, ldr);
+                }

                 rows.add(new IgniteBiTuple<>(keys[i], values[i]));
             }
@@ -371,97 +388,97 @@ public CacheEntryPredicate filter() {
         }

         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeBoolean("clientFirst", clientFirst))
                     return false;

                 writer.incrementState();

-            case 4:
+            case 5:
                 if (!writer.writeMessage("filter", filter))
                     return false;

                 writer.incrementState();

-            case 5:
+            case 6:
                 if (!writer.writeIgniteUuid("futId", futId))
                     return false;

                 writer.incrementState();

-            case 6:
+            case 7:
                 if (!writer.writeObjectArray("keys", keys, MessageCollectionItemType.MSG))
                     return false;

                 writer.incrementState();

-            case 7:
+            case 8:
                 if (!writer.writeMessage("lockVer", lockVer))
                     return false;

                 writer.incrementState();

-            case 8:
+            case 9:
                 if (!writer.writeInt("miniId", miniId))
                     return false;

                 writer.incrementState();

-            case 9:
+            case 10:
                 if (!writer.writeMessage("mvccSnapshot", mvccSnapshot))
                     return false;

                 writer.incrementState();

-            case 10:
+            case 11:
                 if (!writer.writeBoolean("needRes", needRes))
                     return false;

                 writer.incrementState();

-            case 11:
+            case 12:
                 if (!writer.writeByte("op", op != null ? (byte)op.ordinal() : -1))
                     return false;

                 writer.incrementState();

-            case 12:
+            case 13:
                 if (!writer.writeUuid("subjId", subjId))
                     return false;

                 writer.incrementState();

-            case 13:
+            case 14:
                 if (!writer.writeInt("taskNameHash", taskNameHash))
                     return false;

                 writer.incrementState();

-            case 14:
+            case 15:
                 if (!writer.writeLong("threadId", threadId))
                     return false;

                 writer.incrementState();

-            case 15:
+            case 16:
                 if (!writer.writeLong("timeout", timeout))
                     return false;

                 writer.incrementState();

-            case 16:
-                if (!writer.writeMessage("topVer", topVer))
+            case 17:
+                if (!writer.writeAffinityTopologyVersion("topVer", topVer))
                     return false;

                 writer.incrementState();

-            case 17:
+            case 18:
                 if (!writer.writeLong("txTimeout", txTimeout))
                     return false;

                 writer.incrementState();

-            case 18:
+            case 19:
                 if (!writer.writeObjectArray("values", values, MessageCollectionItemType.MSG))
                     return false;
@@ -483,7 +500,7 @@ public CacheEntryPredicate filter() {
             return false;

         switch (reader.state()) {
-            case 3:
+            case 4:
                 clientFirst = reader.readBoolean("clientFirst");

                 if (!reader.isLastRead())
@@ -491,7 +508,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 4:
+            case 5:
                 filter = reader.readMessage("filter");

                 if (!reader.isLastRead())
@@ -499,7 +516,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 5:
+            case 6:
                 futId = reader.readIgniteUuid("futId");

                 if (!reader.isLastRead())
@@ -507,7 +524,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 6:
+            case 7:
                 keys = reader.readObjectArray("keys", MessageCollectionItemType.MSG, KeyCacheObject.class);

                 if (!reader.isLastRead())
@@ -515,7 +532,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 7:
+            case 8:
                 lockVer = reader.readMessage("lockVer");

                 if (!reader.isLastRead())
@@ -523,7 +540,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 8:
+            case 9:
                 miniId = reader.readInt("miniId");

                 if (!reader.isLastRead())
@@ -531,7 +548,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 9:
+            case 10:
                 mvccSnapshot = reader.readMessage("mvccSnapshot");

                 if (!reader.isLastRead())
@@ -539,7 +556,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 10:
+            case 11:
                 needRes = reader.readBoolean("needRes");

                 if (!reader.isLastRead())
@@ -547,7 +564,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 11:
+            case 12:
                 byte opOrd;

                 opOrd = reader.readByte("op");
@@ -559,7 +576,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 12:
+            case 13:
                 subjId = reader.readUuid("subjId");

                 if (!reader.isLastRead())
@@ -567,7 +584,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 13:
+            case 14:
                 taskNameHash = reader.readInt("taskNameHash");

                 if (!reader.isLastRead())
@@ -575,7 +592,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 14:
+            case 15:
                 threadId = reader.readLong("threadId");

                 if (!reader.isLastRead())
@@ -583,7 +600,7 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 15:
+            case 16:
                 timeout = reader.readLong("timeout");

                 if (!reader.isLastRead())
@@ -591,15 +608,15 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 16:
-                topVer = reader.readMessage("topVer");
+            case 17:
+                topVer = reader.readAffinityTopologyVersion("topVer");

                 if (!reader.isLastRead())
                     return false;

                 reader.incrementState();

-            case 17:
+            case 18:
                 txTimeout = reader.readLong("txTimeout");

                 if (!reader.isLastRead())
@@ -607,8 +624,8 @@ public CacheEntryPredicate filter() {
                 reader.incrementState();

-            case 18:
-                values = reader.readObjectArray("values", MessageCollectionItemType.MSG, CacheObject.class);
+            case 19:
+                values = reader.readObjectArray("values", MessageCollectionItemType.MSG, Message.class);

                 if (!reader.isLastRead())
                     return false;
@@ -622,7 +639,7 @@ public CacheEntryPredicate filter() {
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 19;
+        return 20;
     }

     /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistResponse.java
index 4f4bbb62cc707..93df0c2d78fea 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistResponse.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxEnlistResponse.java
@@ -158,7 +158,7 @@ public GridCacheReturn result() {
     }

     /** {@inheritDoc} */
-    @Nullable @Override public Throwable error() {
+    @Override public @Nullable Throwable error() {
         return err;
     }
@@ -181,9 +181,16 @@ public IgniteUuid dhtFutureId() {
         return dhtFutId;
     }

+    /**
+     * @return New DHT nodes involved into transaction.
+     */
+    public Collection newDhtNodes() {
+        return newDhtNodes;
+    }
+
     /** {@inheritDoc} */
     @Override public byte fieldsCount() {
-        return 11;
+        return 12;
     }

     /** {@inheritDoc} */
@@ -201,49 +208,49 @@ public IgniteUuid dhtFutureId() {
         }

         switch (writer.state()) {
-            case 3:
+            case 4:
                 if (!writer.writeIgniteUuid("dhtFutId", dhtFutId))
                     return false;

                 writer.incrementState();

-            case 4:
+            case 5:
                 if (!writer.writeMessage("dhtVer", dhtVer))
                     return false;

                 writer.incrementState();

-            case 5:
+            case 6:
                 if (!writer.writeByteArray("errBytes", errBytes))
                     return false;

                 writer.incrementState();

-            case 6:
+            case 7:
                 if (!writer.writeIgniteUuid("futId", futId))
                     return false;

                 writer.incrementState();

-            case 7:
+            case 8:
                 if (!writer.writeMessage("lockVer", lockVer))
                     return false;

                 writer.incrementState();

-            case 8:
+            case 9:
                 if (!writer.writeInt("miniId", miniId))
                     return false;

                 writer.incrementState();

-            case 9:
+            case 10:
                 if (!writer.writeCollection("newDhtNodes", newDhtNodes, MessageCollectionItemType.UUID))
                     return false;

                 writer.incrementState();

-            case 10:
+            case 11:
                 if (!writer.writeMessage("res", res))
                     return false;
@@ -265,7 +272,7 @@ public IgniteUuid dhtFutureId() {
             return false;

         switch (reader.state()) {
-            case 3:
+            case 4:
                 dhtFutId = reader.readIgniteUuid("dhtFutId");

                 if (!reader.isLastRead())
@@ -273,7 +280,7 @@ public IgniteUuid dhtFutureId() {
                 reader.incrementState();

-            case 4:
+            case 5:
                 dhtVer = reader.readMessage("dhtVer");

                 if (!reader.isLastRead())
@@ -281,7 +288,7 @@ public IgniteUuid dhtFutureId() {
                 reader.incrementState();

-            case 5:
+            case 6:
                 errBytes = reader.readByteArray("errBytes");

                 if (!reader.isLastRead())
@@ -289,7 +296,7 @@ public IgniteUuid dhtFutureId() {
                 reader.incrementState();

-            case 6:
+            case 7:
                 futId = reader.readIgniteUuid("futId");

                 if (!reader.isLastRead())
@@ -297,7 +304,7 @@ public IgniteUuid dhtFutureId() {
                 reader.incrementState();

-            case 7:
+            case 8:
                 lockVer = reader.readMessage("lockVer");

                 if (!reader.isLastRead())
@@ -305,7 +312,7 @@ public IgniteUuid dhtFutureId() {
                 reader.incrementState();

-            case 8:
+            case 9:
                 miniId = reader.readInt("miniId");

                 if (!reader.isLastRead())
@@ -313,7 +320,7 @@ public IgniteUuid dhtFutureId() {
                 reader.incrementState();

-            case 9:
+            case 10:
                 newDhtNodes = reader.readCollection("newDhtNodes", MessageCollectionItemType.UUID);

                 if (!reader.isLastRead())
@@ -321,7 +328,7 @@ public IgniteUuid dhtFutureId() {
                 reader.incrementState();

-            case 10:
+            case 11:
                 res = reader.readMessage("res");

                 if (!reader.isLastRead())
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishFuture.java
index 4a4d8e3524932..befa3053fe605 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishFuture.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishFuture.java
@@ -311,7 +311,7 @@ void forceFinish() {
         if (err != null) {
             tx.setRollbackOnly();

-            nodeStop = err instanceof NodeStoppingException;
+            nodeStop = err instanceof NodeStoppingException || cctx.kernalContext().failure().nodeStopping();
         }

         if (commit) {
@@ -357,29 +357,6 @@ else if (err != null)
         }

         if (super.onDone(tx0, err)) {
-            if (error() instanceof IgniteTxHeuristicCheckedException && !nodeStop) {
-                AffinityTopologyVersion topVer = tx.topologyVersion();
-
-                for (IgniteTxEntry e : tx.writeMap().values()) {
-                    GridCacheContext cacheCtx = e.context();
-
-                    try {
-                        if (e.op() != NOOP && !cacheCtx.affinity().keyLocalNode(e.key(), topVer)) {
-                            GridCacheEntryEx entry = cacheCtx.cache().peekEx(e.key());
-
-                            if (entry != null)
-                                entry.invalidate(tx.xidVersion());
-                        }
-                    }
-                    catch (Throwable t) {
-                        U.error(log, "Failed to invalidate entry.", t);
-
-                        if (t instanceof Error)
-                            throw (Error)t;
-                    }
-                }
-            }
-
             // Don't forget to clean up.
             cctx.mvcc().removeFuture(futId);
@@ -402,8 +379,7 @@ private boolean isMini(IgniteInternalFuture fut) {
     }

     /** {@inheritDoc} */
-    @Override @SuppressWarnings("ForLoopReplaceableByForEach")
-    public void finish(final boolean commit, final boolean clearThreadMap, final boolean onTimeout) {
+    @Override public void finish(final boolean commit, final boolean clearThreadMap, final boolean onTimeout) {
         if (!cctx.mvcc().addFuture(this, futureId()))
             return;
@@ -490,18 +466,22 @@ private void doFinish(boolean commit, boolean clearThreadMap) {
             }
         }

+        // Cleanup transaction if heuristic failure.
+        if (tx.state() == UNKNOWN)
+            cctx.tm().rollbackTx(tx, clearThreadMap, false);
+
         if ((tx.onePhaseCommit() && needFinishOnePhase(commit)) || (!tx.onePhaseCommit() && mappings != null)) {
             if (mappings.single()) {
                 GridDistributedTxMapping mapping = mappings.singleMapping();

                 if (mapping != null) {
-                    assert !hasFutures() || waitTxs != null : futures();
+                    assert !hasFutures() || isDone() || waitTxs != null : futures();

                     finish(1, mapping, commit, !clearThreadMap);
                 }
             }
             else {
-                assert !hasFutures() || waitTxs != null : futures();
+                assert !hasFutures() || isDone() || waitTxs != null : futures();

                 finish(mappings.mappings(), commit, !clearThreadMap);
             }
@@ -762,7 +742,7 @@ private void readyNearMappingFromBackup(GridDistributedTxMapping mapping) {
     /**
      * @param mappings Mappings.
      * @param commit Commit flag.
-     * @param {@code true} If need to add completed version on finish.
+     * @param useCompletedVer {@code True} if need to add completed version on finish.
     */
    private void finish(Iterable mappings, boolean commit, boolean useCompletedVer) {
        int miniId = 0;
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishRequest.java
index 6b5aa90e3f5f3..91079dff6c13f 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishRequest.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishRequest.java
@@ -186,13 +186,13 @@ public void miniId(int miniId) {
        }

        switch (writer.state()) {
-            case 21:
+            case 22:
                if (!writer.writeInt("miniId", miniId))
                    return false;

                writer.incrementState();

-            case 22:
+            case 23:
                if (!writer.writeMessage("mvccSnapshot", mvccSnapshot))
                    return false;
@@ -214,7 +214,7 @@ public void miniId(int miniId) {
            return false;

        switch (reader.state()) {
-            case 21:
+            case 22:
                miniId = reader.readInt("miniId");

                if (!reader.isLastRead())
@@ -222,7 +222,7 @@ public void miniId(int miniId) {
                reader.incrementState();

-            case 22:
+            case 23:
                mvccSnapshot = reader.readMessage("mvccSnapshot");

                if (!reader.isLastRead())
@@ -242,7 +242,7 @@ public void miniId(int miniId) {
    /** {@inheritDoc} */
    @Override public byte fieldsCount() {
-        return 23;
+        return 24;
    }

    /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishResponse.java
index a1a2b5712fcdd..e3dcbf832bd14 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishResponse.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxFinishResponse.java
@@ -133,19 +133,19 @@ public long threadId() {
        }

        switch (writer.state()) {
-            case 6:
+            case 7:
                if (!writer.writeByteArray("errBytes", errBytes))
                    return false;

                writer.incrementState();

-            case 7:
+            case 8:
                if (!writer.writeInt("miniId", miniId))
                    return false;

                writer.incrementState();

-            case 8:
+            case 9:
                if (!writer.writeLong("nearThreadId", nearThreadId))
                    return false;
@@ -167,7 +167,7 @@ public long threadId() {
            return false;

        switch (reader.state()) {
-            case 6:
+            case 7:
                errBytes = reader.readByteArray("errBytes");

                if (!reader.isLastRead())
@@ -175,7 +175,7 @@ public long threadId() {
                reader.incrementState();

-            case 7:
+            case 8:
                miniId = reader.readInt("miniId");

                if (!reader.isLastRead())
@@ -183,7 +183,7 @@ public long threadId() {
                reader.incrementState();

-            case 8:
+            case 9:
                nearThreadId = reader.readLong("nearThreadId");

                if (!reader.isLastRead())
@@ -203,7 +203,7 @@ public long threadId() {
    /** {@inheritDoc} */
    @Override public byte fieldsCount() {
-        return 9;
+        return 10;
    }

    /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxLocal.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxLocal.java
index 9493510b5d7c8..4ae15884b7b44 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxLocal.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxLocal.java
@@ -59,6 +59,7 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxFinishFuture;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture;
+import org.apache.ignite.internal.processors.cache.distributed.dht.GridInvokeValue;
 import org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtDetachedCacheEntry;
 import org.apache.ignite.internal.processors.cache.dr.GridCacheDrInfo;
 import org.apache.ignite.internal.processors.cache.mvcc.MvccQueryTracker;
@@ -90,6 +91,7 @@
 import org.apache.ignite.internal.util.typedef.CI1;
 import org.apache.ignite.internal.util.typedef.CI2;
 import org.apache.ignite.internal.util.typedef.CX1;
+import org.apache.ignite.internal.util.typedef.CX2;
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.internal.util.typedef.internal.S;
@@ -99,6 +101,7 @@
 import org.apache.ignite.lang.IgniteClosure;
 import org.apache.ignite.lang.IgniteInClosure;
 import org.apache.ignite.lang.IgniteUuid;
+import org.apache.ignite.plugin.extensions.communication.Message;
 import org.apache.ignite.plugin.security.SecurityPermission;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
@@ -187,7 +190,7 @@ public class GridNearTxLocal extends GridDhtTxLocalAdapter implements GridTimeou
     private TransactionProxyImpl rollbackOnlyProxy;

     /** Tx label. */
-    private @Nullable String lb;
+    @Nullable private String lb;

     /** */
     private MvccQueryTracker mvccTracker;
@@ -700,19 +703,6 @@ private IgniteInternalFuture putAsync0(
         }
     }

-    /**
-     * Validate Tx mode.
-     *
-     * @param ctx Cache context.
-     * @throws IgniteCheckedException If tx mode is not supported.
-     */
-    protected void validateTxMode(GridCacheContext ctx) throws IgniteCheckedException {
-        if(!ctx.mvccEnabled() || pessimistic() && repeatableRead())
-            return;
-
-        throw new IgniteCheckedException("Only pessimistic repeatable read transactions are supported at the moment.");
-    }
-
     /**
      * Internal method for put and transform operations in Mvcc mode.
     * Note: Only one of {@code map}, {@code transformMap} maps must be non-null.
@@ -734,17 +724,7 @@ private IgniteInternalFuture mvccPutAllAsync0( @Nullable final CacheEntryPredicate filter ) { try { - validateTxMode(cacheCtx); - - // TODO: IGNITE-9540: Fix invoke/invokeAll. - if(invokeMap != null) - MvccUtils.verifyMvccOperationSupport(cacheCtx, "invoke/invokeAll"); - - if (mvccSnapshot == null) { - MvccUtils.mvccTracker(cacheCtx, this); - - assert mvccSnapshot != null; - } + MvccUtils.requestSnapshot(cacheCtx, this); beforePut(cacheCtx, retval, true); } @@ -752,16 +732,12 @@ private IgniteInternalFuture mvccPutAllAsync0( return new GridFinishedFuture(e); } - // Cached entry may be passed only from entry wrapper. - final Map map0 = map; - final Map> invokeMap0 = (Map>)invokeMap; - if (log.isDebugEnabled()) - log.debug("Called putAllAsync(...) [tx=" + this + ", map=" + map0 + ", retval=" + retval + "]"); + log.debug("Called putAllAsync(...) [tx=" + this + ", map=" + map + ", retval=" + retval + "]"); - assert map0 != null || invokeMap0 != null; + assert map != null || invokeMap != null; - if (F.isEmpty(map0) && F.isEmpty(invokeMap0)) { + if (F.isEmpty(map) && F.isEmpty(invokeMap)) { if (implicit()) try { commit(); @@ -773,14 +749,13 @@ private IgniteInternalFuture mvccPutAllAsync0( return new GridFinishedFuture<>(new GridCacheReturn(true, false)); } - try { - // Set transform flag for transaction. - if (invokeMap != null) - transform = true; + // Set transform flag for operation. + boolean transform = invokeMap != null; - Set keys = map0 != null ? map0.keySet() : invokeMap0.keySet(); + try { + Set keys = map != null ? map.keySet() : invokeMap.keySet(); - final Map enlisted = new HashMap<>(keys.size()); + final Map enlisted = new LinkedHashMap<>(keys.size()); for (Object key : keys) { if (isRollbackOnly()) @@ -792,7 +767,7 @@ private IgniteInternalFuture mvccPutAllAsync0( throw new NullPointerException("Null key."); } - Object val = map0 == null ? null : map0.get(key); + Object val = map == null ? 
null : map.get(key); EntryProcessor entryProcessor = transform ? invokeMap.get(key) : null; if (val == null && entryProcessor == null) { @@ -802,25 +777,27 @@ private IgniteInternalFuture mvccPutAllAsync0( } KeyCacheObject cacheKey = cacheCtx.toCacheKeyObject(key); - CacheObject cacheVal = cacheCtx.toCacheObject(val); - enlisted.put(cacheKey, cacheVal); + if (transform) + enlisted.put(cacheKey, new GridInvokeValue(entryProcessor, invokeArgs)); + else + enlisted.put(cacheKey, cacheCtx.toCacheObject(val)); } - return updateAsync(cacheCtx, new UpdateSourceIterator>() { + return updateAsync(cacheCtx, new UpdateSourceIterator>() { - private Iterator> it = enlisted.entrySet().iterator(); + private Iterator> it = enlisted.entrySet().iterator(); @Override public EnlistOperation operation() { - return EnlistOperation.UPSERT; + return transform ? EnlistOperation.TRANSFORM : EnlistOperation.UPSERT; } @Override public boolean hasNextX() throws IgniteCheckedException { return it.hasNext(); } - @Override public IgniteBiTuple nextX() throws IgniteCheckedException { - Map.Entry next = it.next(); + @Override public IgniteBiTuple nextX() throws IgniteCheckedException { + Map.Entry next = it.next(); return new IgniteBiTuple<>(next.getKey(), next.getValue()); } @@ -1445,7 +1422,7 @@ private boolean enlistWriteEntry(GridCacheContext cacheCtx, } } catch (ClusterTopologyCheckedException e) { - entry.touch(topologyVersion()); + entry.touch(); throw e; } @@ -1503,7 +1480,7 @@ private boolean enlistWriteEntry(GridCacheContext cacheCtx, } if (readCommitted()) - entry.touch(topologyVersion()); + entry.touch(); break; // While. 
} @@ -1929,13 +1906,7 @@ private IgniteInternalFuture mvccRemoveAllAsync0( @Nullable final CacheEntryPredicate filter ) { try { - validateTxMode(cacheCtx); - - if (mvccSnapshot == null) { - MvccUtils.mvccTracker(cacheCtx, this); - - assert mvccSnapshot != null; - } + MvccUtils.requestSnapshot(cacheCtx, this); beforeRemove(cacheCtx, retval, true); } @@ -2120,7 +2091,15 @@ private IgniteInternalFuture updateAsync(GridCacheContext cache mvccSnapshot.incrementOperationCounter(); - return new GridCacheReturn(cacheCtx, true, keepBinary, futRes.value(), futRes.success()); + Object val = futRes.value(); + + if (futRes.invokeResult() && val != null) { + assert val instanceof Map; + + val = cacheCtx.unwrapInvokeResult((Map)val, keepBinary); + } + + return new GridCacheReturn(cacheCtx, true, keepBinary, val, futRes.success()); } })); } @@ -2188,13 +2167,6 @@ public IgniteInternalFuture> getAllAsync( if (F.isEmpty(keys)) return new GridFinishedFuture<>(Collections.emptyMap()); - try { - validateTxMode(cacheCtx); - } - catch (IgniteCheckedException e) { - return new GridFinishedFuture(e); - } - if (cacheCtx.mvccEnabled() && !isOperationAllowed(true)) return txTypeMismatchFinishFuture(); @@ -2422,16 +2394,16 @@ public IgniteInternalFuture> getAllAsync( try { IgniteInternalFuture> fut1 = plc2.apply(fut.get(), null); - return fut1.isDone() ? + return nonInterruptable(fut1.isDone() ? 
new GridFinishedFuture<>(finClos.apply(fut1.get(), null)) : - new GridEmbeddedFuture<>(finClos, fut1); + new GridEmbeddedFuture<>(finClos, fut1)); } catch (GridClosureException e) { return new GridFinishedFuture<>(e.unwrap()); } catch (IgniteCheckedException e) { try { - return plc2.apply(false, e); + return nonInterruptable(plc2.apply(false, e)); } catch (Exception e1) { return new GridFinishedFuture<>(e1); @@ -2439,10 +2411,10 @@ public IgniteInternalFuture> getAllAsync( } } else { - return new GridEmbeddedFuture<>( + return nonInterruptable(new GridEmbeddedFuture<>( fut, plc2, - finClos); + finClos)); } } else { @@ -2798,7 +2770,7 @@ private Collection enlistRead( } } else - entry.touch(topVer); + entry.touch(); } } } @@ -2987,7 +2959,7 @@ private void onException() { GridCacheEntryEx cached0 = txEntry.cached(); if (cached0 != null) - cached0.touch(topologyVersion()); + cached0.touch(); } } @@ -3069,6 +3041,7 @@ else if (cacheCtx.isColocated()) { needVer, /*keepCacheObject*/true, recovery, + label(), mvccReadSnapshot(cacheCtx) ).chain(new C1, Void>() { @Override public Void apply(IgniteInternalFuture f) { @@ -3101,6 +3074,7 @@ else if (cacheCtx.isColocated()) { skipVals, needVer, /*keepCacheObject*/true, + label(), mvccReadSnapshot(cacheCtx) ).chain(new C1>, Void>() { @Override public Void apply(IgniteInternalFuture> f) { @@ -3913,7 +3887,7 @@ private NearTxFinishFuture finishFuture(boolean fast, boolean commit) { NearTxFinishFuture fut = fast ? 
new GridNearTxFastFinishFuture(this, commit) : new GridNearTxFinishFuture<>(cctx, this, commit); - if (mvccQueryTracker() != null || mvccSnapshot != null || txState.mvccEnabled(cctx)) { + if (mvccQueryTracker() != null || mvccSnapshot != null || txState.mvccEnabled()) { if (commit) fut = new GridNearTxFinishAndAckFuture(fut); else @@ -3947,16 +3921,23 @@ private IgniteInternalFuture chainFinishFuture(final NearTxFin fut.listen(new IgniteInClosure>() { @Override public void apply(IgniteInternalFuture fut0) { if (FINISH_FUT_UPD.compareAndSet(tx, fut, rollbackFut)) { - if (tx.state() == COMMITTED) { - if (log.isDebugEnabled()) - log.debug("Failed to rollback, transaction is already committed: " + tx); + switch (tx.state()) { + case COMMITTED: + if (log.isDebugEnabled()) + log.debug("Failed to rollback, transaction is already committed: " + tx); + + // Fall-through. - rollbackFut.forceFinish(); + case ROLLED_BACK: + rollbackFut.forceFinish(); + + assert rollbackFut.isDone() : rollbackFut; + + break; - assert rollbackFut.isDone() : rollbackFut; + default: // First finish attempt was unsuccessful. Try again. + rollbackFut.finish(false, clearThreadMap, onTimeout); } - else - rollbackFut.finish(false, clearThreadMap, onTimeout); } else { finishFut.listen(new IgniteInClosure>() { @@ -4076,10 +4057,26 @@ public IgniteInternalFuture commitAsyncLocal() { // Do not create finish future if there are no remote nodes. 
if (F.isEmpty(dhtMap) && F.isEmpty(nearMap)) { - if (prep != null) - return (IgniteInternalFuture)prep; + if (prep != null) { + return new GridEmbeddedFuture<>(new CX2() { + @Override public IgniteInternalTx applyx(IgniteInternalTx o, Exception e) throws IgniteCheckedException { + cctx.tm().mvccFinish(GridNearTxLocal.this, e == null); + + return o; + } + }, (IgniteInternalFuture)prep); + } + + try { + cctx.tm().mvccFinish(this, true); + + return new GridFinishedFuture<>(this); + } + catch (IgniteCheckedException e) { + commitError(e); - return new GridFinishedFuture(this); + return new GridFinishedFuture<>(e); + } } final GridDhtTxFinishFuture fut = new GridDhtTxFinishFuture<>(cctx, this, true); @@ -4567,7 +4564,7 @@ private IgniteInternalFuture> checkMissed( GridCacheEntryEx e = txEntry == null ? entryEx(cacheCtx, txKey, topVer) : txEntry.cached(); if (readCommitted() || skipVals) { - e.touch(topologyVersion()); + e.touch(); if (visibleVal != null) { cacheCtx.addResult(map, @@ -4723,7 +4720,7 @@ private void checkUpdatesAllowed(GridCacheContext cacheCtx) throws IgniteChecked * @param fut Future. * @return Future ignoring interrupts on {@code get()}. */ - private IgniteInternalFuture nonInterruptable(IgniteInternalFuture fut) { + private static IgniteInternalFuture nonInterruptable(IgniteInternalFuture fut) { // Safety. if (fut instanceof GridFutureAdapter) ((GridFutureAdapter)fut).ignoreInterrupts(); @@ -4759,10 +4756,8 @@ private boolean removeTimeoutHandler() { return startTime() + timeout(); } - /** - * @return Tx label. 
- */ - public String label() { + /** {@inheritDoc} */ + @Override public String label() { return lb; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareFutureAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareFutureAdapter.java index a9b1848bb6d64..285481d021e5d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareFutureAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareFutureAdapter.java @@ -165,13 +165,21 @@ public IgniteInternalTx tx() { * @param txMapping Transaction mapping. */ final void checkOnePhase(GridDhtTxMapping txMapping) { - if (tx.storeWriteThrough() || tx.txState().mvccEnabled(cctx)) // TODO IGNITE-3479 (onePhase + mvcc) - return; + checkOnePhase(tx.transactionNodes()); + } - Map> map = txMapping.transactionNodes(); + /** + * Checks if mapped transaction can be committed on one phase. + * One-phase commit can be done if transaction maps to one primary node and not more than one backup. + * + * @param txNodes Primary to backups node map. 
+ */ + final void checkOnePhase(Map> txNodes) { + if (tx.storeWriteThrough() || tx.txState().mvccEnabled()) // TODO IGNITE-3479 (onePhase + mvcc) + return; - if (map.size() == 1) { - Map.Entry> entry = map.entrySet().iterator().next(); + if (txNodes.size() == 1) { + Map.Entry> entry = txNodes.entrySet().iterator().next(); assert entry != null; @@ -277,4 +285,4 @@ else if (txEntry.cached().detached()) { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareRequest.java index 55c809d6f189f..941c7b34c0068 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareRequest.java @@ -28,6 +28,7 @@ import org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxPrepareRequest; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.util.tostring.GridToStringExclude; +import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; @@ -58,8 +59,8 @@ public class GridNearTxPrepareRequest extends GridDistributedTxPrepareRequest { /** */ private static final int ALLOW_WAIT_TOP_FUT_FLAG_MASK = 0x10; - /** */ - private static final int REQUEST_MVCC_CNTR_FLAG_MASK = 0x20; + /** Recovery value flag. */ + private static final int RECOVERY_FLAG_MASK = 0x40; /** Future ID. 
*/ private IgniteUuid futId; @@ -80,6 +81,10 @@ public class GridNearTxPrepareRequest extends GridDistributedTxPrepareRequest { @GridToStringExclude private byte flags; + /** Transaction label. */ + @GridToStringInclude + @Nullable private String txLbl; + /** * Empty constructor required for {@link Externalizable}. */ @@ -125,7 +130,8 @@ public GridNearTxPrepareRequest( int taskNameHash, boolean firstClientReq, boolean allowWaitTopFut, - boolean addDepInfo + boolean addDepInfo, + boolean recovery ) { super(tx, timeout, @@ -145,33 +151,36 @@ public GridNearTxPrepareRequest( this.subjId = subjId; this.taskNameHash = taskNameHash; + txLbl = tx.label(); + setFlag(near, NEAR_FLAG_MASK); setFlag(implicitSingle, IMPLICIT_SINGLE_FLAG_MASK); setFlag(explicitLock, EXPLICIT_LOCK_FLAG_MASK); setFlag(firstClientReq, FIRST_CLIENT_REQ_FLAG_MASK); setFlag(allowWaitTopFut, ALLOW_WAIT_TOP_FUT_FLAG_MASK); + setFlag(recovery, RECOVERY_FLAG_MASK); } /** - * @return {@code True} if need request MVCC counter on primary node on prepare step. + * @return {@code True} if it is safe for first client request to wait for topology future + * completion. */ - public boolean requestMvccCounter() { - return isFlag(REQUEST_MVCC_CNTR_FLAG_MASK); + public boolean allowWaitTopologyFuture() { + return isFlag(ALLOW_WAIT_TOP_FUT_FLAG_MASK); } /** - * @param val {@code True} if need request MVCC counter on primary node on prepare step. + * @return Recovery flag. */ - public void requestMvccCounter(boolean val) { - setFlag(val, REQUEST_MVCC_CNTR_FLAG_MASK); + public final boolean recovery() { + return isFlag(RECOVERY_FLAG_MASK); } /** - * @return {@code True} if it is safe for first client request to wait for topology future - * completion. + * @param val Recovery flag. 
*/ - public boolean allowWaitTopologyFuture() { - return isFlag(ALLOW_WAIT_TOP_FUT_FLAG_MASK); + public void recovery(boolean val) { + setFlag(val, RECOVERY_FLAG_MASK); } /** @@ -244,6 +253,13 @@ public final boolean explicitLock() { return topVer; } + /** + * @return Transaction label. + */ + @Nullable public String txLabel() { + return txLbl; + } + /** * */ @@ -318,38 +334,44 @@ private boolean isFlag(int mask) { } switch (writer.state()) { - case 20: + case 21: if (!writer.writeByte("flags", flags)) return false; writer.incrementState(); - case 21: + case 22: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 22: + case 23: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 23: + case 24: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 24: + case 25: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 25: - if (!writer.writeMessage("topVer", topVer)) + case 26: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) + return false; + + writer.incrementState(); + + case 27: + if (!writer.writeString("txLbl", txLbl)) return false; writer.incrementState(); @@ -370,7 +392,7 @@ private boolean isFlag(int mask) { return false; switch (reader.state()) { - case 20: + case 21: flags = reader.readByte("flags"); if (!reader.isLastRead()) @@ -378,7 +400,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 21: + case 22: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -386,7 +408,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 22: + case 23: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -394,7 +416,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 23: + case 24: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -402,7 +424,7 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 
24: + case 25: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -410,8 +432,16 @@ private boolean isFlag(int mask) { reader.incrementState(); - case 25: - topVer = reader.readMessage("topVer"); + case 26: + topVer = reader.readAffinityTopologyVersion("topVer"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 27: + txLbl = reader.readString("txLbl"); if (!reader.isLastRead()) return false; @@ -430,7 +460,7 @@ private boolean isFlag(int mask) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 26; + return 28; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareResponse.java index e9865df5e3e3b..ef547ccaed0a6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxPrepareResponse.java @@ -33,7 +33,6 @@ import org.apache.ignite.internal.processors.cache.GridCacheReturn; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxPrepareResponse; -import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.tostring.GridToStringExclude; @@ -98,9 +97,6 @@ public class GridNearTxPrepareResponse extends GridDistributedTxPrepareResponse /** Not {@code null} if client node should remap transaction. 
*/ private AffinityTopologyVersion clientRemapVer; - /** */ - private MvccSnapshot mvccSnapshot; - /** * Empty constructor required by {@link Externalizable}. */ @@ -149,20 +145,6 @@ public GridNearTxPrepareResponse( flags |= NEAR_PREPARE_ONE_PHASE_COMMIT_FLAG_MASK; } - /** - * @param mvccSnapshot Mvcc info. - */ - public void mvccSnapshot(MvccSnapshot mvccSnapshot) { - this.mvccSnapshot = mvccSnapshot; - } - - /** - * @return Mvcc info. - */ - @Nullable public MvccSnapshot mvccSnapshot() { - return mvccSnapshot; - } - /** * @return One-phase commit state on primary node. */ @@ -376,72 +358,65 @@ public boolean hasOwnedValue(IgniteTxKey key) { } switch (writer.state()) { - case 10: - if (!writer.writeMessage("clientRemapVer", clientRemapVer)) - return false; - - writer.incrementState(); - case 11: - if (!writer.writeMessage("dhtVer", dhtVer)) + if (!writer.writeAffinityTopologyVersion("clientRemapVer", clientRemapVer)) return false; writer.incrementState(); case 12: - if (!writer.writeCollection("filterFailedKeys", filterFailedKeys, MessageCollectionItemType.MSG)) + if (!writer.writeMessage("dhtVer", dhtVer)) return false; writer.incrementState(); case 13: - if (!writer.writeIgniteUuid("futId", futId)) + if (!writer.writeCollection("filterFailedKeys", filterFailedKeys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); case 14: - if (!writer.writeInt("miniId", miniId)) + if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); case 15: - if (!writer.writeCollection("ownedValKeys", ownedValKeys, MessageCollectionItemType.MSG)) + if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); case 16: - if (!writer.writeCollection("ownedValVals", ownedValVals, MessageCollectionItemType.MSG)) + if (!writer.writeCollection("ownedValKeys", ownedValKeys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); case 17: - if (!writer.writeCollection("pending", pending, 
MessageCollectionItemType.MSG)) + if (!writer.writeCollection("ownedValVals", ownedValVals, MessageCollectionItemType.MSG)) return false; writer.incrementState(); case 18: - if (!writer.writeMessage("retVal", retVal)) + if (!writer.writeCollection("pending", pending, MessageCollectionItemType.MSG)) return false; writer.incrementState(); case 19: - if (!writer.writeMessage("writeVer", writeVer)) + if (!writer.writeMessage("retVal", retVal)) return false; writer.incrementState(); case 20: - if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) + if (!writer.writeMessage("writeVer", writeVer)) return false; writer.incrementState(); - } return true; @@ -458,16 +433,8 @@ public boolean hasOwnedValue(IgniteTxKey key) { return false; switch (reader.state()) { - case 10: - clientRemapVer = reader.readMessage("clientRemapVer"); - - if (!reader.isLastRead()) - return false; - - reader.incrementState(); - case 11: - dhtVer = reader.readMessage("dhtVer"); + clientRemapVer = reader.readAffinityTopologyVersion("clientRemapVer"); if (!reader.isLastRead()) return false; @@ -475,7 +442,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 12: - filterFailedKeys = reader.readCollection("filterFailedKeys", MessageCollectionItemType.MSG); + dhtVer = reader.readMessage("dhtVer"); if (!reader.isLastRead()) return false; @@ -483,7 +450,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 13: - futId = reader.readIgniteUuid("futId"); + filterFailedKeys = reader.readCollection("filterFailedKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) return false; @@ -491,7 +458,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 14: - miniId = reader.readInt("miniId"); + futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) return false; @@ -499,7 +466,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 15: - ownedValKeys = 
reader.readCollection("ownedValKeys", MessageCollectionItemType.MSG); + miniId = reader.readInt("miniId"); if (!reader.isLastRead()) return false; @@ -507,7 +474,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 16: - ownedValVals = reader.readCollection("ownedValVals", MessageCollectionItemType.MSG); + ownedValKeys = reader.readCollection("ownedValKeys", MessageCollectionItemType.MSG); if (!reader.isLastRead()) return false; @@ -515,7 +482,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 17: - pending = reader.readCollection("pending", MessageCollectionItemType.MSG); + ownedValVals = reader.readCollection("ownedValVals", MessageCollectionItemType.MSG); if (!reader.isLastRead()) return false; @@ -523,7 +490,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 18: - retVal = reader.readMessage("retVal"); + pending = reader.readCollection("pending", MessageCollectionItemType.MSG); if (!reader.isLastRead()) return false; @@ -531,7 +498,7 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 19: - writeVer = reader.readMessage("writeVer"); + retVal = reader.readMessage("retVal"); if (!reader.isLastRead()) return false; @@ -539,13 +506,12 @@ public boolean hasOwnedValue(IgniteTxKey key) { reader.incrementState(); case 20: - mvccSnapshot = reader.readMessage("mvccSnapshot"); + writeVer = reader.readMessage("writeVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - } return reader.afterMessageRead(GridNearTxPrepareResponse.class); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistFuture.java index 6d48b97b02e27..faf84ef96d44f 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistFuture.java @@ -23,6 +23,7 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; +import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException; import org.apache.ignite.internal.processors.affinity.AffinityAssignment; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheMessage; @@ -116,11 +117,14 @@ protected GridNearTxQueryEnlistFuture( else { primary = assignment.primaryPartitionNodes(); - for (ClusterNode pNode : primary) { + for (ClusterNode pNode : primary) updateMappings(pNode); - } } + if (primary.isEmpty()) + throw new ClusterTopologyServerNotFoundException("Failed to find data nodes for cache (all partition " + + "nodes left the grid)."); + boolean locallyMapped = primary.contains(cctx.localNode()); if (locallyMapped) @@ -130,6 +134,10 @@ protected GridNearTxQueryEnlistFuture( boolean first = true; boolean clientFirst = false; + // Need to unlock topology to avoid deadlock with binary descriptors registration. 
+ if(!topLocked && cctx.topology().holdsLock()) + cctx.topology().readUnlock(); + for (ClusterNode node : F.view(primary, F.remoteNodes(cctx.localNodeId()))) { add(mini = new MiniFuture(node)); @@ -360,8 +368,7 @@ public boolean onResult(GridNearTxQueryEnlistResponse res, Throwable err) { completed = true; } - if (X.hasCause(err, ClusterTopologyCheckedException.class) - || (res != null && res.removeMapping())) { + if (res != null && res.removeMapping()) { GridDistributedTxMapping m = tx.mappings().get(node.id()); assert m != null && m.empty(); @@ -371,8 +378,12 @@ public boolean onResult(GridNearTxQueryEnlistResponse res, Throwable err) { if (node.isLocal()) tx.colocatedLocallyMapped(false); } - else if (res != null && res.result() > 0 && !node.isLocal()) - tx.hasRemoteLocks(true); + else if (res != null) { + tx.mappings().get(node.id()).addBackups(res.newDhtNodes()); + + if (res.result() > 0 && !node.isLocal()) + tx.hasRemoteLocks(true); + } return err != null ? onDone(err) : onDone(res.result(), res.error()); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistRequest.java index 472937be41569..3b22afb7734b7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistRequest.java @@ -297,7 +297,7 @@ public boolean firstClientRequest() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 21; + return 22; } /** {@inheritDoc} */ @@ -331,109 +331,109 @@ public boolean firstClientRequest() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeIntArray("cacheIds", cacheIds)) return false; writer.incrementState(); - case 4: + case 5: if 
(!writer.writeBoolean("clientFirst", clientFirst)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeInt("flags", flags)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeMessage("lockVer", lockVer)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 9: + case 10: if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) return false; writer.incrementState(); - case 10: + case 11: if (!writer.writeInt("pageSize", pageSize)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeByteArray("paramsBytes", paramsBytes)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeIntArray("parts", parts)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeString("qry", qry)) return false; writer.incrementState(); - case 14: + case 15: if (!writer.writeString("schema", schema)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 16: + case 17: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 17: + case 18: if (!writer.writeLong("threadId", threadId)) return false; writer.incrementState(); - case 18: + case 19: if (!writer.writeLong("timeout", timeout)) return false; writer.incrementState(); - case 19: - if (!writer.writeMessage("topVer", topVer)) + case 20: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 20: + case 21: if (!writer.writeLong("txTimeout", txTimeout)) return false; @@ -455,7 +455,7 @@ public boolean firstClientRequest() { return false; switch (reader.state()) { - case 3: + case 4: cacheIds = reader.readIntArray("cacheIds"); if 
(!reader.isLastRead()) @@ -463,7 +463,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 4: + case 5: clientFirst = reader.readBoolean("clientFirst"); if (!reader.isLastRead()) @@ -471,7 +471,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 5: + case 6: flags = reader.readInt("flags"); if (!reader.isLastRead()) @@ -479,7 +479,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 6: + case 7: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -487,7 +487,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 7: + case 8: lockVer = reader.readMessage("lockVer"); if (!reader.isLastRead()) @@ -495,7 +495,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 8: + case 9: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -503,7 +503,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 9: + case 10: mvccSnapshot = reader.readMessage("mvccSnapshot"); if (!reader.isLastRead()) @@ -511,7 +511,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 10: + case 11: pageSize = reader.readInt("pageSize"); if (!reader.isLastRead()) @@ -519,7 +519,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 11: + case 12: paramsBytes = reader.readByteArray("paramsBytes"); if (!reader.isLastRead()) @@ -527,7 +527,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 12: + case 13: parts = reader.readIntArray("parts"); if (!reader.isLastRead()) @@ -535,7 +535,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 13: + case 14: qry = reader.readString("qry"); if (!reader.isLastRead()) @@ -543,7 +543,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 14: + case 15: schema = reader.readString("schema"); if (!reader.isLastRead()) @@ -551,7 +551,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 15: + 
case 16: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -559,7 +559,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 16: + case 17: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -567,7 +567,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 17: + case 18: threadId = reader.readLong("threadId"); if (!reader.isLastRead()) @@ -575,7 +575,7 @@ public boolean firstClientRequest() { reader.incrementState(); - case 18: + case 19: timeout = reader.readLong("timeout"); if (!reader.isLastRead()) @@ -583,15 +583,15 @@ public boolean firstClientRequest() { reader.incrementState(); - case 19: - topVer = reader.readMessage("topVer"); + case 20: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 20: + case 21: txTimeout = reader.readLong("txTimeout"); if (!reader.isLastRead()) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistResponse.java index d628de1d56311..2715f89b408f3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryEnlistResponse.java @@ -166,7 +166,7 @@ public boolean removeMapping() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 10; + return 11; } /** {@inheritDoc} */ @@ -184,47 +184,48 @@ public boolean removeMapping() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 5: + case 6: if 
(!writer.writeMessage("lockVer", lockVer)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 7: - if (!writer.writeBoolean("removeMapping", removeMapping)) + case 8: + if (!writer.writeCollection("newDhtNodes", newDhtNodes, MessageCollectionItemType.UUID)) return false; writer.incrementState(); - case 8: - if (!writer.writeLong("res", res)) + case 9: + if (!writer.writeBoolean("removeMapping", removeMapping)) return false; writer.incrementState(); - case 9: - if (!writer.writeCollection("newDhtNodes", newDhtNodes, MessageCollectionItemType.UUID)) + case 10: + if (!writer.writeLong("res", res)) return false; writer.incrementState(); + } return true; @@ -241,7 +242,7 @@ public boolean removeMapping() { return false; switch (reader.state()) { - case 3: + case 4: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -249,7 +250,7 @@ public boolean removeMapping() { reader.incrementState(); - case 4: + case 5: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -257,7 +258,7 @@ public boolean removeMapping() { reader.incrementState(); - case 5: + case 6: lockVer = reader.readMessage("lockVer"); if (!reader.isLastRead()) @@ -265,7 +266,7 @@ public boolean removeMapping() { reader.incrementState(); - case 6: + case 7: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -273,29 +274,30 @@ public boolean removeMapping() { reader.incrementState(); - case 7: - removeMapping = reader.readBoolean("removeMapping"); + case 8: + newDhtNodes = reader.readCollection("newDhtNodes", MessageCollectionItemType.UUID); if (!reader.isLastRead()) return false; reader.incrementState(); - case 8: - res = reader.readLong("res"); + case 9: + removeMapping = reader.readBoolean("removeMapping"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 9: - newDhtNodes = reader.readCollection("newDhtNodes", MessageCollectionItemType.UUID); + 
case 10: + res = reader.readLong("res"); if (!reader.isLastRead()) return false; reader.incrementState(); + } return reader.afterMessageRead(GridNearTxQueryEnlistResponse.class); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistFuture.java index b83339b1ed9fb..a40f632bc8a9b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistFuture.java @@ -31,6 +31,7 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; +import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheMessage; import org.apache.ignite.internal.processors.cache.KeyCacheObject; @@ -150,6 +151,10 @@ private void sendNextBatches(@Nullable UUID nodeId) { boolean first = (nodeId != null); + // Need to unlock topology to avoid deadlock with binary descriptors registration. + if (!topLocked && cctx.topology().holdsLock()) + cctx.topology().readUnlock(); + for (Batch batch : next) { ClusterNode node = batch.node(); @@ -197,12 +202,10 @@ private Collection continueLoop(@Nullable UUID nodeId) throws IgniteCheck KeyCacheObject key = cctx.toCacheKeyObject(op.isDeleteOrLock() ? 
cur : ((IgniteBiTuple)cur).getKey()); - List nodes = cctx.affinity().nodesByKey(key, topVer); - - ClusterNode node; + ClusterNode node = cctx.affinity().primaryByPartition(key.partition(), topVer); - if (F.isEmpty(nodes) || ((node = nodes.get(0)) == null)) - throw new ClusterTopologyCheckedException("Failed to get primary node " + + if (node == null) + throw new ClusterTopologyServerNotFoundException("Failed to get primary node " + "[topVer=" + topVer + ", key=" + key + ']'); if (!sequential) @@ -226,8 +229,7 @@ else if (batch != null && !batch.node().equals(node)) break; } - batch.add(op.isDeleteOrLock() ? key : cur, - op != EnlistOperation.LOCK && cctx.affinityNode() && (cctx.isReplicated() || nodes.indexOf(cctx.localNode()) > 0)); + batch.add(op.isDeleteOrLock() ? key : cur, !node.isLocal() && isLocalBackup(op, key)); if (batch.size() == batchSize) res = markReady(res, batch); @@ -283,6 +285,16 @@ private boolean hasNext0() { return peek != FINISHED; } + /** */ + private boolean isLocalBackup(EnlistOperation op, KeyCacheObject key) { + if (!cctx.affinityNode() || op == EnlistOperation.LOCK) + return false; + else if (cctx.isReplicated()) + return true; + + return cctx.topology().nodes(key.partition(), tx.topologyVersion()).contains(cctx.localNode()); + } + /** */ private ArrayList markReady(ArrayList batches, Batch batch) { if (!batch.ready()) { @@ -347,7 +359,8 @@ private void processBatchLocalBackupKeys(UUID primaryId, List rows, Grid -1, this.tx.subjectId(), this.tx.taskNameHash(), - false); + false, + null); dhtTx.mvccSnapshot(new MvccSnapshotWithoutTxs(mvccSnapshot.coordinatorVersion(), mvccSnapshot.counter(), MVCC_OP_COUNTER_NA, mvccSnapshot.cleanupVersion())); @@ -360,7 +373,8 @@ private void processBatchLocalBackupKeys(UUID primaryId, List rows, Grid } } - dhtTx.mvccEnlistBatch(cctx, it.operation(), keys, vals, mvccSnapshot.withoutActiveTransactions()); + cctx.tm().txHandler().mvccEnlistBatch(dhtTx, cctx, it.operation(), keys, vals, + 
mvccSnapshot.withoutActiveTransactions(), null, -1); } catch (IgniteCheckedException e) { onDone(e); @@ -547,8 +561,8 @@ public boolean checkResponse(UUID nodeId, GridNearTxQueryResultsEnlistResponse r if (err == null && res.error() != null) err = res.error(); - if (X.hasCause(err, ClusterTopologyCheckedException.class)) - tx.removeMapping(nodeId); + if (res != null) + tx.mappings().get(nodeId).addBackups(res.newDhtNodes()); if (err != null) processFailure(err, null); @@ -566,6 +580,8 @@ public boolean checkResponse(UUID nodeId, GridNearTxQueryResultsEnlistResponse r RES_UPD.getAndAdd(this, res.result()); + tx.hasRemoteLocks(true); + return true; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistRequest.java index f350d502b9b7d..8e10a7edb3494 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistRequest.java @@ -335,85 +335,85 @@ public EnlistOperation operation() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeBoolean("clientFirst", clientFirst)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeIgniteUuid("futId", futId)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeObjectArray("keys", keys, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeMessage("lockVer", lockVer)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeInt("miniId", miniId)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) return false; writer.incrementState(); 
- case 9: + case 10: if (!writer.writeByte("op", op != null ? (byte)op.ordinal() : -1)) return false; writer.incrementState(); - case 10: + case 11: if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeInt("taskNameHash", taskNameHash)) return false; writer.incrementState(); - case 12: + case 13: if (!writer.writeLong("threadId", threadId)) return false; writer.incrementState(); - case 13: + case 14: if (!writer.writeLong("timeout", timeout)) return false; writer.incrementState(); - case 14: - if (!writer.writeMessage("topVer", topVer)) + case 15: + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); - case 15: + case 16: if (!writer.writeLong("txTimeout", txTimeout)) return false; writer.incrementState(); - case 16: + case 17: if (!writer.writeObjectArray("values", values, MessageCollectionItemType.MSG)) return false; @@ -435,7 +435,7 @@ public EnlistOperation operation() { return false; switch (reader.state()) { - case 3: + case 4: clientFirst = reader.readBoolean("clientFirst"); if (!reader.isLastRead()) @@ -443,7 +443,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 4: + case 5: futId = reader.readIgniteUuid("futId"); if (!reader.isLastRead()) @@ -451,7 +451,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 5: + case 6: keys = reader.readObjectArray("keys", MessageCollectionItemType.MSG, KeyCacheObject.class); if (!reader.isLastRead()) @@ -459,7 +459,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 6: + case 7: lockVer = reader.readMessage("lockVer"); if (!reader.isLastRead()) @@ -467,7 +467,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 7: + case 8: miniId = reader.readInt("miniId"); if (!reader.isLastRead()) @@ -475,7 +475,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 8: + case 9: mvccSnapshot = 
reader.readMessage("mvccSnapshot"); if (!reader.isLastRead()) @@ -483,7 +483,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 9: + case 10: byte opOrd; opOrd = reader.readByte("op"); @@ -495,7 +495,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 10: + case 11: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -503,7 +503,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 11: + case 12: taskNameHash = reader.readInt("taskNameHash"); if (!reader.isLastRead()) @@ -511,7 +511,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 12: + case 13: threadId = reader.readLong("threadId"); if (!reader.isLastRead()) @@ -519,7 +519,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 13: + case 14: timeout = reader.readLong("timeout"); if (!reader.isLastRead()) @@ -527,15 +527,15 @@ public EnlistOperation operation() { reader.incrementState(); - case 14: - topVer = reader.readMessage("topVer"); + case 15: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 15: + case 16: txTimeout = reader.readLong("txTimeout"); if (!reader.isLastRead()) @@ -543,7 +543,7 @@ public EnlistOperation operation() { reader.incrementState(); - case 16: + case 17: values = reader.readObjectArray("values", MessageCollectionItemType.MSG, CacheObject.class); if (!reader.isLastRead()) @@ -558,7 +558,7 @@ public EnlistOperation operation() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 17; + return 18; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistResponse.java index 48c63bc3c942c..2a0c63242f867 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxQueryResultsEnlistResponse.java @@ -102,7 +102,7 @@ public IgniteUuid dhtFutureId() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 12; + return 13; } /** {@inheritDoc} */ @@ -120,17 +120,18 @@ public IgniteUuid dhtFutureId() { } switch (writer.state()) { - case 10: + case 11: if (!writer.writeIgniteUuid("dhtFutId", dhtFutId)) return false; writer.incrementState(); - case 11: + case 12: if (!writer.writeMessage("dhtVer", dhtVer)) return false; writer.incrementState(); + } return true; @@ -147,7 +148,7 @@ public IgniteUuid dhtFutureId() { return false; switch (reader.state()) { - case 10: + case 11: dhtFutId = reader.readIgniteUuid("dhtFutId"); if (!reader.isLastRead()) @@ -155,13 +156,14 @@ public IgniteUuid dhtFutureId() { reader.incrementState(); - case 11: + case 12: dhtVer = reader.readMessage("dhtVer"); if (!reader.isLastRead()) return false; reader.incrementState(); + } return reader.afterMessageRead(GridNearTxQueryResultsEnlistResponse.class); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxRemote.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxRemote.java index 879bf26fb7273..e71125d067df9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxRemote.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearTxRemote.java @@ -109,22 +109,24 @@ public GridNearTxRemote( Collection writeEntries, int txSize, @Nullable UUID subjId, - int taskNameHash + int taskNameHash, + @Nullable String txLbl ) throws IgniteCheckedException { super( - ctx, - nodeId, + ctx, + nodeId, xidVer, commitVer, - 
sys, - plc, - concurrency, - isolation, - invalidate, + sys, + plc, + concurrency, + isolation, + invalidate, timeout, txSize, - subjId, - taskNameHash + subjId, + taskNameHash, + txLbl ); assert nearNodeId != null; @@ -185,22 +187,24 @@ public GridNearTxRemote( long timeout, int txSize, @Nullable UUID subjId, - int taskNameHash + int taskNameHash, + @Nullable String txLbl ) { super( - ctx, - nodeId, + ctx, + nodeId, xidVer, commitVer, sys, plc, - concurrency, - isolation, - invalidate, + concurrency, + isolation, + invalidate, timeout, txSize, subjId, - taskNameHash + taskNameHash, + txLbl ); assert nearNodeId != null; @@ -340,7 +344,7 @@ private boolean addEntry(IgniteTxEntry entry) throws IgniteCheckedException { try { cached.unswap(); - CacheObject val = cached.peek(null); + CacheObject val = cached.peek(); if (val == null && cached.evictInternal(xidVer, null, false)) { evicted.add(entry.txKey()); @@ -401,7 +405,7 @@ public boolean addEntry( else { cached.unswap(); - CacheObject peek = cached.peek(null); + CacheObject peek = cached.peek(); if (peek == null && cached.evictInternal(xidVer, null, false)) { cached.context().cache().removeIfObsolete(key.key()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearUnlockRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearUnlockRequest.java index 2b49889cadc27..b2ed241e91fd2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearUnlockRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearUnlockRequest.java @@ -85,7 +85,7 @@ public GridNearUnlockRequest(int cacheId, int keyCnt, boolean addDepInfo) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 8; + return 9; } /** {@inheritDoc} */ diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/TxTopologyVersionFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/TxTopologyVersionFuture.java index b5e38839b75b4..7f8a121b8b806 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/TxTopologyVersionFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/near/TxTopologyVersionFuture.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.processors.cache.distributed.near; +import org.apache.ignite.IgniteCacheRestartingException; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -70,7 +71,18 @@ private void init() { if (topVer != null) { for (GridDhtTopologyFuture fut : cctx.shared().exchange().exchangeFutures()) { if (fut.exchangeDone() && fut.topologyVersion().equals(topVer)) { - Throwable err = fut.validateCache(cctx, false, false, null, null); + Throwable err = null; + + // Before cache validation, make sure that this topology future is already completed. + try { + fut.get(); + } + catch (IgniteCheckedException e) { + err = fut.error(); + } + + if (err == null) + err = fut.validateCache(cctx, false, false, null, null); if (err != null) { onDone(err); @@ -100,7 +112,10 @@ private void acquireTopologyVersion() { try { if (cctx.topology().stopping()) { - onDone(new CacheStoppedException(cctx.name())); + onDone( + cctx.shared().cache().isCacheRestarting(cctx.name())? 
+ new IgniteCacheRestartingException(cctx.name()): + new CacheStoppedException(cctx.name())); return; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridCacheDrManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridCacheDrManager.java index f36237491f18f..f2a4b30c4af82 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridCacheDrManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridCacheDrManager.java @@ -22,7 +22,6 @@ import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.GridCacheManager; import org.apache.ignite.internal.processors.cache.KeyCacheObject; -import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.dr.GridDrType; import org.jetbrains.annotations.Nullable; @@ -56,36 +55,6 @@ public void replicate(KeyCacheObject key, GridDrType drType, AffinityTopologyVersion topVer)throws IgniteCheckedException; - /** - * Enlist for DR. - * - * @param key Key. - * @param val Value. - * @param ttl TTL. - * @param expireTime Expire time. - * @param ver Version. - * @param drType Replication type. - * @param topVer Topology version. - * @param mvccVer Tx mvcc version. - * @throws IgniteCheckedException If failed. - */ - void mvccReplicate(KeyCacheObject key, - @Nullable CacheObject val, - long ttl, - long expireTime, - GridCacheVersion ver, - GridDrType drType, - AffinityTopologyVersion topVer, - MvccVersion mvccVer) throws IgniteCheckedException; - - /** - * @param mvccVer Tx mvcc version. - * @param commit {@code True} if tx committed, {@code False} otherwise. - * @param topVer Tx snapshot affinity version. - * @throws IgniteCheckedException If failed. 
- */ - void onTxFinished(MvccVersion mvccVer, boolean commit, AffinityTopologyVersion topVer); - /** * Process partitions exchange event. * diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridOsCacheDrManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridOsCacheDrManager.java index 8d7e4d8bedd0a..f3c1b23f7c7d6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridOsCacheDrManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/dr/GridOsCacheDrManager.java @@ -22,7 +22,6 @@ import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.KeyCacheObject; -import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.dr.GridDrType; import org.apache.ignite.lang.IgniteFuture; @@ -78,18 +77,6 @@ public class GridOsCacheDrManager implements GridCacheDrManager { // No-op. } - /** {@inheritDoc} */ - @Override public void mvccReplicate(KeyCacheObject key, @Nullable CacheObject val, long ttl, long expireTime, - GridCacheVersion ver, GridDrType drType, AffinityTopologyVersion topVer, - MvccVersion mvccVer) throws IgniteCheckedException { - // No-op. - } - - /** {@inheritDoc} */ - @Override public void onTxFinished(MvccVersion mvccVer, boolean commit, AffinityTopologyVersion topVer) { - // No-op. - } - /** {@inheritDoc} */ @Override public void onExchange(AffinityTopologyVersion topVer, boolean left) throws IgniteCheckedException { // No-op. 
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCache.java index 7b7ac66c0cc31..3bdb44e72aa55 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCache.java @@ -182,7 +182,7 @@ public IgniteInternalFuture lockAllAsync(Collection key if (entry != null && ctx.isAll(entry, CU.empty0())) { entry.releaseLocal(); - entry.touch(topVer); + entry.touch(); } } } @@ -232,4 +232,27 @@ else if (modes.heap) @Override public long localSizeLong(int part, CachePeekMode[] peekModes) throws IgniteCheckedException { return localSizeLong(peekModes); } + + /** {@inheritDoc} */ + @Override public void preloadPartition(int part) throws IgniteCheckedException { + ctx.offheap().preloadPartition(part); + } + + /** {@inheritDoc} */ + @Override public IgniteInternalFuture preloadPartitionAsync(int part) throws IgniteCheckedException { + return ctx.closures().callLocalSafe(new Callable() { + @Override public Void call() throws Exception { + preloadPartition(part); + + return null; + } + }); + } + + /** {@inheritDoc} */ + @Override public boolean localPreloadPartition(int part) throws IgniteCheckedException { + ctx.offheap().preloadPartition(part); + + return true; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCacheEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCacheEntry.java index 412d4d30e4e76..e26174a2adef5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCacheEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/GridLocalCacheEntry.java @@ -257,15 +257,6 @@ public void recheck() { } } - /** - * 
Unlocks lock if it is currently owned. - * - * @param tx Transaction to unlock. - */ - @Override public void txUnlock(IgniteInternalTx tx) throws GridCacheEntryRemovedException { - removeLock(tx.xidVersion()); - } - /** * Releases local lock. */ @@ -327,6 +318,8 @@ private void releaseLocal(long threadId) { GridCacheMvccCandidate doomed; + GridCacheVersion deferredDelVer; + lockEntry(); try { @@ -351,11 +344,22 @@ private void releaseLocal(long threadId) { } val = this.val; + + deferredDelVer = this.ver; } finally { unlockEntry(); } + if (val == null) { + boolean deferred = cctx.deferredDelete() && !detached() && !isInternal(); + + if (deferred) { + if (deferredDelVer != null) + cctx.onDeferredDelete(this, deferredDelVer); + } + } + if (doomed != null) checkThreadChain(doomed); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/atomic/GridLocalAtomicCache.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/atomic/GridLocalAtomicCache.java index b615952f785cd..9a1868bd10947 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/atomic/GridLocalAtomicCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/local/atomic/GridLocalAtomicCache.java @@ -426,6 +426,7 @@ private Map getAllInternal(@Nullable Collection keys, if (evt) { ctx.events().readEvent(cacheKey, + null, null, row.value(), subjId, @@ -516,7 +517,7 @@ private Map getAllInternal(@Nullable Collection keys, } finally { if (entry != null) - entry.touch(ctx.affinity().affinityTopologyVersion()); + entry.touch(); } if (!success && storeEnabled) @@ -982,7 +983,7 @@ else if (res == null) } finally { if (entry != null) - entry.touch(ctx.affinity().affinityTopologyVersion()); + entry.touch(); } } } @@ -1513,7 +1514,7 @@ private List lockEntries(Collection keys) { AffinityTopologyVersion topVer = ctx.affinity().affinityTopologyVersion(); for (GridCacheEntryEx entry : locked) - 
entry.touch(topVer); + entry.touch(); throw new NullPointerException("Null key."); } @@ -1530,7 +1531,7 @@ private void unlockEntries(Iterable locked) { AffinityTopologyVersion topVer = ctx.affinity().affinityTopologyVersion(); for (GridCacheEntryEx entry : locked) - entry.touch(topVer); + entry.touch(); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccCachingManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccCachingManager.java new file mode 100644 index 0000000000000..8f83b6e5b3e1d --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccCachingManager.java @@ -0,0 +1,357 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import java.util.Collection; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.Map; +import java.util.SortedMap; +import java.util.TreeMap; +import java.util.UUID; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.processors.cache.CacheObject; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter; +import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.distributed.dht.PartitionUpdateCountersMessage; +import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxKey; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryListener; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager; +import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; +import org.apache.ignite.internal.processors.cache.transactions.TxCounters; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.util.tostring.GridToStringInclude; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.T2; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.lang.IgniteUuid; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.IgniteSystemProperties.IGNITE_MVCC_TX_SIZE_CACHING_THRESHOLD; +import static org.apache.ignite.internal.processors.dr.GridDrType.DR_BACKUP; +import static 
org.apache.ignite.internal.processors.dr.GridDrType.DR_PRIMARY; + +/** + * Manager for caching MVCC transaction updates. + * These updates can later be used by CQ, DR and other components. + */ +public class MvccCachingManager extends GridCacheSharedManagerAdapter { + /** Maximum possible transaction size when caching is enabled. */ + public static final int TX_SIZE_THRESHOLD = IgniteSystemProperties.getInteger(IGNITE_MVCC_TX_SIZE_CACHING_THRESHOLD, + 20_000); + + /** Cached enlist values. */ + private final Map<GridCacheVersion, EnlistBuffer> enlistCache = new ConcurrentHashMap<>(); + + /** Counters map. Used to prevent OOM caused by big transactions. */ + private final Map<TxKey, AtomicInteger> cntrs = new ConcurrentHashMap<>(); + + /** + * Adds enlisted tx entry to cache. + * @param key Key. + * @param val Value. + * @param ttl Time to live. + * @param expireTime Expire time. + * @param ver Version. + * @param oldVal Old value. + * @param primary Flag whether this is a primary node. + * @param topVer Topology version. + * @param mvccVer Mvcc version. + * @param cacheId Cache id. + * @param tx Transaction. + * @param futId Dht future id. + * @param batchNum Batch number (to prevent batch reordering). + * @throws IgniteCheckedException If failed. + */ + public void addEnlisted(KeyCacheObject key, + @Nullable CacheObject val, + long ttl, + long expireTime, + GridCacheVersion ver, + CacheObject oldVal, + boolean primary, + AffinityTopologyVersion topVer, + MvccVersion mvccVer, + int cacheId, + IgniteInternalTx tx, + IgniteUuid futId, + int batchNum) throws IgniteCheckedException { + assert key != null; + assert mvccVer != null; + assert tx != null; + + if (log.isDebugEnabled()) { + log.debug("Added entry to mvcc cache: [key=" + key + ", val=" + val + ", oldVal=" + oldVal + + ", primary=" + primary + ", mvccVer=" + mvccVer + ", cacheId=" + cacheId + ", ver=" + ver + ']'); + } + + GridCacheContext ctx0 = cctx.cacheContext(cacheId); + + // Do not cache updates if neither DR nor CQ is enabled.
+ if (!needDrReplicate(ctx0, key) && + F.isEmpty(continuousQueryListeners(ctx0, tx, key)) && + !ctx0.group().hasContinuousQueryCaches()) + return; + + AtomicInteger cntr = cntrs.computeIfAbsent(new TxKey(mvccVer.coordinatorVersion(), mvccVer.counter()), + v -> new AtomicInteger()); + + if (cntr.incrementAndGet() > TX_SIZE_THRESHOLD) + throw new IgniteCheckedException("Transaction is too large. Consider reducing transaction size or " + + "turning off continuous queries and datacenter replication [size=" + cntr.get() + ", txXid=" + ver + ']'); + + MvccTxEntry e = new MvccTxEntry(key, val, ttl, expireTime, ver, oldVal, primary, topVer, mvccVer, cacheId); + + EnlistBuffer cached = enlistCache.computeIfAbsent(ver, v -> new EnlistBuffer()); + + cached.add(primary ? null : futId, primary ? -1 : batchNum, key, e); + } + + /** + * Transaction finish callback. Feeds the cached updates to CQ and DR handlers. + * + * @param tx Transaction. + * @param commit {@code True} if commit. + * @throws IgniteCheckedException If failed. + */ + public void onTxFinished(IgniteInternalTx tx, boolean commit) throws IgniteCheckedException { + if (log.isDebugEnabled()) + log.debug("Transaction finished: [commit=" + commit + ", tx=" + tx + ']'); + + if (tx.system() || tx.internal() || tx.mvccSnapshot() == null) + return; + + cntrs.remove(new TxKey(tx.mvccSnapshot().coordinatorVersion(), tx.mvccSnapshot().counter())); + + EnlistBuffer buf = enlistCache.remove(tx.xidVersion()); + + if (buf == null) + return; + + Map<KeyCacheObject, MvccTxEntry> cached = buf.getCached(); + + if (F.isEmpty(cached)) + return; + + TxCounters txCntrs = tx.txCounters(false); + + assert txCntrs != null || !commit; + + if (txCntrs == null) + return; + + Collection<PartitionUpdateCountersMessage> cntrsColl = txCntrs.updateCounters(); + + if (F.isEmpty(cntrsColl)) { + assert !commit; + + return; + } + + // cacheId -> partId -> initCntr -> cntr + delta.
+ Map<Integer, Map<Integer, T2<AtomicLong, Long>>> cntrsMap = new HashMap<>(); + + for (PartitionUpdateCountersMessage msg : cntrsColl) { + for (int i = 0; i < msg.size(); i++) { + Map<Integer, T2<AtomicLong, Long>> cntrPerPart = + cntrsMap.computeIfAbsent(msg.cacheId(), k -> new HashMap<>()); + + T2<AtomicLong, Long> prev = cntrPerPart.put(msg.partition(i), + new T2<>(new AtomicLong(msg.initialCounter(i)), msg.initialCounter(i) + msg.updatesCount(i))); + + assert prev == null; + } + } + + // Feed CQ & DR with entries. + for (Map.Entry<KeyCacheObject, MvccTxEntry> entry : cached.entrySet()) { + MvccTxEntry e = entry.getValue(); + + assert e.key().partition() != -1; + + Map<Integer, T2<AtomicLong, Long>> cntrPerCache = cntrsMap.get(e.cacheId()); + + GridCacheContext ctx0 = cctx.cacheContext(e.cacheId()); + + assert ctx0 != null && cntrPerCache != null; + + T2<AtomicLong, Long> cntr = cntrPerCache.get(e.key().partition()); + + long resCntr = cntr.getKey().incrementAndGet(); + + assert resCntr <= cntr.getValue(); + + e.updateCounter(resCntr); + + if (ctx0.group().sharedGroup()) { + ctx0.group().onPartitionCounterUpdate(ctx0.cacheId(), e.key().partition(), resCntr, + tx.topologyVersion(), tx.local()); + } + + if (log.isDebugEnabled()) + log.debug("Process cached entry: " + e); + + // DR + if (ctx0.isDrEnabled()) { + ctx0.dr().replicate(e.key(), e.value(), e.ttl(), e.expireTime(), e.version(), + tx.local() ? DR_PRIMARY : DR_BACKUP, e.topologyVersion()); + } + + // CQ + CacheContinuousQueryManager contQryMgr = ctx0.continuousQueries(); + + if (ctx0.continuousQueries().notifyContinuousQueries(tx)) { + contQryMgr.getListenerReadLock().lock(); + + try { + Map<UUID, CacheContinuousQueryListener> lsnrCol = continuousQueryListeners(ctx0, tx, e.key()); + + if (!F.isEmpty(lsnrCol)) { + contQryMgr.onEntryUpdated( + lsnrCol, + e.key(), + commit ? e.value() : null, // Force skip update counter if rolled back. + commit ? e.oldValue() : null, // Force skip update counter if rolled back.
+ false, + e.key().partition(), + tx.local(), + false, + e.updateCounter(), + null, + e.topologyVersion()); + } + } + finally { + contQryMgr.getListenerReadLock().unlock(); + } + } + } + } + + /** + * @param ctx0 Cache context. + * @param key Key. + * @return {@code True} if this value needs to be replicated. + */ + public boolean needDrReplicate(GridCacheContext ctx0, KeyCacheObject key) { + return ctx0.isDrEnabled() && !key.internal(); + } + + /** + * @param ctx0 Cache context. + * @param tx Transaction. + * @param key Key. + * @return Map of listeners to be notified by this update. + */ + public Map<UUID, CacheContinuousQueryListener> continuousQueryListeners(GridCacheContext ctx0, @Nullable IgniteInternalTx tx, KeyCacheObject key) { + boolean internal = key != null && key.internal() || !ctx0.userCache(); + + return ctx0.continuousQueries().notifyContinuousQueries(tx) ? + ctx0.continuousQueries().updateListeners(internal, false) : null; + } + + /** + * Buffer for collecting enlisted entries. The main goal of this buffer is to fix reordering of DHT enlist requests + * on backups. + */ + private static class EnlistBuffer { + /** Last DHT future id. */ + private IgniteUuid lastFutId; + + /** Main buffer for entries. */ + @GridToStringInclude + private Map<KeyCacheObject, MvccTxEntry> cached = new LinkedHashMap<>(); + + /** Pending entries. */ + @GridToStringInclude + private SortedMap<Integer, Map<KeyCacheObject, MvccTxEntry>> pending; + + /** + * Adds entry to caching buffer. + * + * @param futId Future id. + * @param batchNum Batch number. + * @param key Key. + * @param e Entry. + */ + synchronized void add(IgniteUuid futId, int batchNum, KeyCacheObject key, MvccTxEntry e) { + if (batchNum >= 0) { + /* + * Assume that batches within one future may be reordered, but batches between futures cannot be + * reordered. This means that once batches from a new DHT future have arrived, all batches from the + * previous one have already been collected. + */ + if (lastFutId != null && !lastFutId.equals(futId)) { // Request from new DHT future arrived.
+ lastFutId = futId; + + // Flush pending for previous future. + flushPending(); + } + + if (pending == null) + pending = new TreeMap<>(); + + MvccTxEntry prev = pending.computeIfAbsent(batchNum, k -> new LinkedHashMap<>()).put(key, e); + + if (prev != null && prev.oldValue() != null) + e.oldValue(prev.oldValue()); + } + else { // batchNum == -1 means no reordering (e.g. this is a primary node). + assert batchNum == -1; + + MvccTxEntry prev = cached.put(key, e); + + if (prev != null && prev.oldValue() != null) + e.oldValue(prev.oldValue()); + } + } + + /** + * @return Cached entries map. + */ + synchronized Map<KeyCacheObject, MvccTxEntry> getCached() { + flushPending(); + + return cached; + } + + /** + * Flush pending updates to cached map. + */ + private void flushPending() { + if (F.isEmpty(pending)) + return; + + for (Map.Entry<Integer, Map<KeyCacheObject, MvccTxEntry>> entry : pending.entrySet()) { + Map<KeyCacheObject, MvccTxEntry> vals = entry.getValue(); + + cached.putAll(vals); + } + + pending.clear(); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(EnlistBuffer.class, this); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccPreviousCoordinatorQueries.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccPreviousCoordinatorQueries.java index cd7560f40a205..6218bc026fa50 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccPreviousCoordinatorQueries.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccPreviousCoordinatorQueries.java @@ -19,7 +19,6 @@ import java.util.Collection; import java.util.HashSet; -import java.util.Map; import java.util.Set; import java.util.UUID; import java.util.concurrent.ConcurrentHashMap; @@ -58,7 +57,7 @@ class MvccPreviousCoordinatorQueries { * @param nodes Cluster nodes. * @param mgr Discovery manager.
*/ - void init(Map nodeQueries, Collection nodes, GridDiscoveryManager mgr) { + void init(GridLongList nodeQueries, Collection nodes, GridDiscoveryManager mgr) { synchronized (this) { assert !initDone; assert waitNodes == null; @@ -66,18 +65,14 @@ void init(Map nodeQueries, Collection nodes, Gr waitNodes = new HashSet<>(); for (ClusterNode node : nodes) { - if ((nodeQueries == null || !nodeQueries.containsKey(node.id())) && - mgr.alive(node) && - !F.contains(rcvd, node.id())) + if (!node.isLocal() && mgr.alive(node) && !F.contains(rcvd, node.id())) waitNodes.add(node.id()); } initDone = waitNodes.isEmpty(); - if (nodeQueries != null) { - for (Map.Entry e : nodeQueries.entrySet()) - mergeToActiveQueries(e.getKey(), e.getValue()); - } + if (nodeQueries != null) + mergeToActiveQueries(mgr.localNode().id(), nodeQueries); if (initDone && !prevQueriesDone) prevQueriesDone = activeQueries.isEmpty() && rcvdAcks.isEmpty(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessor.java index a09468f867249..fd45c7a053585 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessor.java @@ -17,21 +17,16 @@ package org.apache.ignite.internal.processors.cache.mvcc; -import java.util.Collection; -import java.util.Map; import java.util.UUID; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; -import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.events.DiscoveryEvent; import org.apache.ignite.internal.IgniteDiagnosticPrepareContext; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import 
org.apache.ignite.internal.managers.discovery.DiscoCache; -import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; import org.apache.ignite.internal.processors.GridProcessor; -import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.ExchangeContext; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.util.GridLongList; @@ -42,31 +37,18 @@ */ public interface MvccProcessor extends GridProcessor { /** - * @param evtType Event type. - * @param nodes Current nodes. - * @param topVer Topology version. - * @param customMsg Message - */ - void onDiscoveryEvent(int evtType, Collection nodes, long topVer, - @Nullable DiscoveryCustomMessage customMsg); - - /** - * Exchange start callback. + * Local join callback. * - * @param mvccCrd Mvcc coordinator. - * @param exchCtx Exchange context. - * @param exchCrd Exchange coordinator. + * @param evt Discovery event. */ - void onExchangeStart(MvccCoordinator mvccCrd, ExchangeContext exchCtx, ClusterNode exchCrd); + void onLocalJoin(DiscoveryEvent evt); /** * Exchange done callback. * - * @param newCoord New coordinator flag. * @param discoCache Disco cache. - * @param activeQueries Active queries. */ - void onExchangeDone(boolean newCoord, DiscoCache discoCache, Map activeQueries); + void onExchangeDone(DiscoCache discoCache); /** * @param nodeId Node ID @@ -74,33 +56,16 @@ void onDiscoveryEvent(int evtType, Collection nodes, long topVer, */ void processClientActiveQueries(UUID nodeId, @Nullable GridLongList activeQueries); - /** - * @return Mvcc coordinator received from discovery event. - */ - @Nullable MvccCoordinator assignedCoordinator(); - /** * @return Coordinator. 
*/ @Nullable MvccCoordinator currentCoordinator(); - /** - * Check that the given topology is greater or equals to coordinator's one and returns current coordinator. - * @param topVer Topology version. - * @return Mvcc coordinator. - */ - @Nullable MvccCoordinator currentCoordinator(AffinityTopologyVersion topVer); - /** * @return Current coordinator node ID. */ UUID currentCoordinatorId(); - /** - * @param curCrd Coordinator. - */ - void updateCoordinator(MvccCoordinator curCrd); - /** * @param crdVer Mvcc coordinator version. * @param cntr Mvcc counter. @@ -177,13 +142,6 @@ void onDiscoveryEvent(int evtType, Collection nodes, long topVer, */ MvccSnapshot tryRequestSnapshotLocal(@Nullable IgniteInternalTx tx) throws ClusterTopologyCheckedException; - /** - * Requests snapshot on Mvcc coordinator. - * - * @return Snapshot future. - */ - IgniteInternalFuture requestSnapshotAsync(); - /** * Requests snapshot on Mvcc coordinator. * diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessorImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessorImpl.java index ca77bf9341da1..c367e2b2a7044 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessorImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessorImpl.java @@ -20,19 +20,21 @@ import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; +import java.util.HashSet; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.TreeMap; import java.util.UUID; import java.util.concurrent.BlockingQueue; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.atomic.AtomicInteger; -import javax.cache.expiry.EternalExpiryPolicy; +import java.util.stream.Collectors; import org.apache.ignite.IgniteCheckedException; import 
org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; -import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; @@ -47,15 +49,14 @@ import org.apache.ignite.internal.NodeStoppingException; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.managers.communication.GridMessageListener; +import org.apache.ignite.internal.managers.discovery.CustomEventListener; import org.apache.ignite.internal.managers.discovery.DiscoCache; -import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; import org.apache.ignite.internal.managers.eventstorage.GridLocalEventListener; import org.apache.ignite.internal.processors.GridProcessorAdapter; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.DynamicCacheChangeBatch; import org.apache.ignite.internal.processors.cache.DynamicCacheChangeRequest; -import org.apache.ignite.internal.processors.cache.ExchangeContext; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; import org.apache.ignite.internal.processors.cache.KeyCacheObject; @@ -70,6 +71,7 @@ import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccFutureResponse; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccMessage; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccQuerySnapshotRequest; +import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccRecoveryFinishedMessage; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccSnapshotResponse; import 
org.apache.ignite.internal.processors.cache.mvcc.msg.MvccTxSnapshotRequest; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccWaitTxsRequest; @@ -78,8 +80,8 @@ import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxState; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.cache.persistence.DatabaseLifecycleListener; +import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; -import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.tree.mvcc.data.MvccDataRow; import org.apache.ignite.internal.processors.cache.tree.mvcc.search.MvccLinkAwareSearchRow; @@ -89,9 +91,10 @@ import org.apache.ignite.internal.util.future.GridCompoundIdentityFuture; import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.lang.GridClosureException; import org.apache.ignite.internal.util.lang.GridCursor; -import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; @@ -105,16 +108,13 @@ import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; -import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_DISCONNECTED; import static 
org.apache.ignite.events.EventType.EVT_NODE_FAILED; +import static org.apache.ignite.events.EventType.EVT_NODE_JOINED; import static org.apache.ignite.events.EventType.EVT_NODE_LEFT; -import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED; -import static org.apache.ignite.events.EventType.EVT_NODE_SEGMENTED; import static org.apache.ignite.internal.GridTopic.TOPIC_CACHE_COORDINATOR; -import static org.apache.ignite.internal.events.DiscoveryCustomEvent.EVT_DISCOVERY_CUSTOM_EVT; import static org.apache.ignite.internal.managers.communication.GridIoPolicy.SYSTEM_POOL; -import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; import static org.apache.ignite.internal.processors.cache.mvcc.MvccQueryTracker.MVCC_TRACKER_ID_NA; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_COUNTER_NA; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_CRD_COUNTER_NA; @@ -136,6 +136,10 @@ */ @SuppressWarnings("serial") public class MvccProcessorImpl extends GridProcessorAdapter implements MvccProcessor, DatabaseLifecycleListener { + /** */ + private static final boolean FORCE_MVCC = + IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, false); + /** */ private static final IgniteProductVersion MVCC_SUPPORTED_SINCE = IgniteProductVersion.fromString("2.7.0"); @@ -158,14 +162,11 @@ static void coordinatorAssignClosure(IgniteClosure, Clus } /** Topology version when local node was assigned as coordinator. */ - private long crdVer; + private volatile long crdVer; /** */ private volatile MvccCoordinator curCrd; - /** */ - private volatile MvccCoordinator assignedCrd; - /** */ private TxLog txLog; @@ -180,9 +181,6 @@ static void coordinatorAssignClosure(IgniteClosure, Clus */ private final Object mux = new Object(); - /** For tests only. 
*/ - private volatile Throwable vacuumError; - /** */ private final GridAtomicLong futIdCntr = new GridAtomicLong(0); @@ -192,8 +190,11 @@ static void coordinatorAssignClosure(IgniteClosure, Clus /** */ private final GridAtomicLong committedCntr = new GridAtomicLong(MVCC_INITIAL_CNTR); - /** */ - private final Map activeTxs = new HashMap<>(); + /** + * Contains active transactions on mvcc coordinator. Key is mvcc counter. + * Access is protected by "this" monitor. + */ + private final Map activeTxs = new HashMap<>(); /** Active query trackers. */ private final Map activeTrackers = new ConcurrentHashMap<>(); @@ -225,6 +226,14 @@ static void coordinatorAssignClosure(IgniteClosure, Clus /** Flag whether all nodes in cluster support MVCC. */ private volatile boolean mvccSupported = true; + /** Flag whether coordinator was changed by the last discovery event. */ + private volatile boolean crdChanged; + + /** + * Maps failed node id to votes accumulator for that node. + */ + private final ConcurrentHashMap recoveryBallotBoxes = new ConcurrentHashMap<>(); + /** * @param ctx Context. 
*/ @@ -236,10 +245,21 @@ public MvccProcessorImpl(GridKernalContext ctx) { /** {@inheritDoc} */ @Override public void start() throws IgniteCheckedException { - ctx.event().addLocalEventListener(new CacheCoordinatorNodeFailListener(), - EVT_NODE_FAILED, EVT_NODE_LEFT); + ctx.event().addLocalEventListener(new GridLocalEventListener() { + @Override public void onEvent(Event evt) { + onDiscovery((DiscoveryEvent)evt); + } + }, + EVT_NODE_FAILED, EVT_NODE_LEFT, EVT_NODE_JOINED); ctx.io().addMessageListener(TOPIC_CACHE_COORDINATOR, new CoordinatorMessageListener()); + + ctx.discovery().setCustomEventListener(DynamicCacheChangeBatch.class, + new CustomEventListener() { + @Override public void onCustomEvent(AffinityTopologyVersion topVer, ClusterNode snd, DynamicCacheChangeBatch msg) { + checkMvccCacheStarted(msg); + } + }); } /** {@inheritDoc} */ @@ -249,6 +269,11 @@ public MvccProcessorImpl(GridKernalContext ctx) { /** {@inheritDoc} */ @Override public void preProcessCacheConfiguration(CacheConfiguration ccfg) { + if (FORCE_MVCC && ccfg.getAtomicityMode() == TRANSACTIONAL && !CU.isSystemCache(ccfg.getName())) { + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + ccfg.setNearConfiguration(null); + } + if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT) { if (!mvccSupported) throw new IgniteException("Cannot start MVCC transactional cache. " + @@ -259,32 +284,12 @@ public MvccProcessorImpl(GridKernalContext ctx) { } /** {@inheritDoc} */ - @Override public void validateCacheConfiguration(CacheConfiguration ccfg) throws IgniteCheckedException { + @Override public void validateCacheConfiguration(CacheConfiguration ccfg) { if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT) { if (!mvccSupported) throw new IgniteException("Cannot start MVCC transactional cache. 
" + "MVCC is unsupported by the cluster."); - if (ccfg.getCacheStoreFactory() != null) { - throw new IgniteCheckedException("Transactional cache may not have a third party cache store when " + - "MVCC is enabled."); - } - - if (ccfg.getExpiryPolicyFactory() != null && !(ccfg.getExpiryPolicyFactory().create() instanceof - EternalExpiryPolicy)) { - throw new IgniteCheckedException("Transactional cache may not have expiry policy when " + - "MVCC is enabled."); - } - - if (ccfg.getInterceptor() != null) { - throw new IgniteCheckedException("Transactional cache may not have an interceptor when " + - "MVCC is enabled."); - } - - if (ccfg.getCacheMode() == CacheMode.LOCAL) - throw new IgniteCheckedException("Local caches are not supported by MVCC engine. Use " + - "CacheAtomicityMode.TRANSACTIONAL for local caches."); - mvccEnabled = true; } } @@ -303,14 +308,16 @@ public MvccProcessorImpl(GridKernalContext ctx) { /** {@inheritDoc} */ @Override public void ensureStarted() throws IgniteCheckedException { - if (!ctx.clientNode() && txLog == null) { + if (!ctx.clientNode()) { assert mvccEnabled && mvccSupported; - txLog = new TxLog(ctx, ctx.cache().context().database()); + if (txLog == null) + txLog = new TxLog(ctx, ctx.cache().context().database()); startVacuumWorkers(); - log.info("Mvcc processor started."); + if (log.isInfoEnabled()) + log.info("Mvcc processor started."); } } @@ -333,83 +340,230 @@ public MvccProcessorImpl(GridKernalContext ctx) { } /** {@inheritDoc} */ - @Override public void afterInitialise(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { - // No-op. + @Override public void beforeResumeWalLogging(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { + // In case of blt changed we should re-init TX_LOG cache. 
+ txLogPageStoreInit(mgr); } /** {@inheritDoc} */ - @SuppressWarnings("ConstantConditions") - @Override public void beforeMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { + @Override public void beforeBinaryMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { + txLogPageStoreInit(mgr); + } + + /** {@inheritDoc} */ + @Override public void afterBinaryMemoryRestore(IgniteCacheDatabaseSharedManager mgr, + GridCacheDatabaseSharedManager.RestoreBinaryState restoreState) throws IgniteCheckedException { + + boolean hasMvccCaches = ctx.cache().persistentCaches().stream() + .anyMatch(c -> c.cacheConfiguration().getAtomicityMode() == TRANSACTIONAL_SNAPSHOT); + + if (hasMvccCaches) { + txLog = new TxLog(ctx, mgr); + + mvccEnabled = true; + } + } + + /** + * @param mgr Database shared manager. + * @throws IgniteCheckedException If failed. + */ + private void txLogPageStoreInit(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { assert CU.isPersistenceEnabled(ctx.config()); - assert txLog == null; ctx.cache().context().pageStore().initialize(TX_LOG_CACHE_ID, 1, TX_LOG_CACHE_NAME, mgr.dataRegion(TX_LOG_CACHE_NAME).memoryMetrics()); } /** {@inheritDoc} */ - @Override public void afterMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { - // No-op. + @Override public void onExchangeDone(DiscoCache discoCache) { + MvccCoordinator curCrd0 = curCrd; + + if (crdChanged) { + // Rollback all transactions with old snapshots. + ctx.cache().context().tm().rollbackMvccTxOnCoordinatorChange(); + + // Complete init future if local node is a new coordinator. All previous txs are already completed here. 
+ if (crdVer != 0 && !initFut.isDone()) { + assert curCrd0 != null && curCrd0.nodeId().equals(ctx.localNodeId()); + + initFut.onDone(); + } + + crdChanged = false; + } + else { + if (curCrd0 != null && ctx.localNodeId().equals(curCrd0.nodeId()) && discoCache != null) + cleanupOrphanedServerTransactions(discoCache.serverNodes()); + } } /** {@inheritDoc} */ - @Override public void onDiscoveryEvent(int evtType, Collection nodes, long topVer, - @Nullable DiscoveryCustomMessage customMsg) { - if (evtType == EVT_NODE_METRICS_UPDATED) + @Override public void onLocalJoin(DiscoveryEvent evt) { + assert evt.type() == EVT_NODE_JOINED && ctx.localNodeId().equals(evt.eventNode().id()); + + onCoordinatorChanged(evt.topologyNodes(), evt.topologyVersion(), false); + } + + /** + * Discovery listener. Note: initial join event is handled by {@link MvccProcessorImpl#onLocalJoin(DiscoveryEvent)} + * method. + * + * @param evt Discovery event. + */ + private void onDiscovery(DiscoveryEvent evt) { + assert evt.type() == EVT_NODE_FAILED || evt.type() == EVT_NODE_LEFT || evt.type() == EVT_NODE_JOINED; + + UUID nodeId = evt.eventNode().id(); + + MvccCoordinator curCrd0 = curCrd; + + if (evt.type() == EVT_NODE_JOINED) { + if (curCrd0 == null) // Handle join event only if coordinator has not been elected yet. + onCoordinatorChanged(evt.topologyNodes(), evt.topologyVersion(), false); + return; + } - if (evtType == EVT_DISCOVERY_CUSTOM_EVT) - checkMvccCacheStarted(customMsg); - else - assignMvccCoordinator(evtType, nodes, topVer); + // Process mvcc coordinator left event on the rest nodes. + if (nodeId.equals(curCrd0.nodeId())) { + // 1. Notify all listeners waiting for a snapshot. 
+ Map<Long, MvccSnapshotResponseListener> map = snapLsnrs.remove(nodeId); + + if (map != null) { + ClusterTopologyCheckedException ex = new ClusterTopologyCheckedException("Failed to request mvcc " + + "version, coordinator failed: " + nodeId); + + MvccSnapshotResponseListener lsnr; + + for (Long id : map.keySet()) { + if ((lsnr = map.remove(id)) != null) + lsnr.onError(ex); + } + } + + // 2. Notify acknowledge futures. + for (WaitAckFuture fut : ackFuts.values()) + fut.onNodeLeft(nodeId); + + // 3. Process coordinator change. + onCoordinatorChanged(evt.topologyNodes(), evt.topologyVersion(), true); + } + // Process node left event on the current mvcc coordinator. + else if (curCrd0.nodeId().equals(ctx.localNodeId())) { + // 1. Notify active queries. + activeQueries.onNodeFailed(nodeId); + + // 2. Notify previous queries. + prevCrdQueries.onNodeFailed(nodeId); + + // 3. Recover transactions started by the failed node. + recoveryBallotBoxes.forEach((nearNodeId, ballotBox) -> { + // Put a synthetic vote from another failed node. + ballotBox.vote(nodeId); + + tryFinishRecoveryVoting(nearNodeId, ballotBox); + }); + + if (evt.eventNode().isClient()) { + RecoveryBallotBox ballotBox = recoveryBallotBoxes + .computeIfAbsent(nodeId, uuid -> new RecoveryBallotBox()); + + ballotBox + .voters(evt.topologyNodes().stream().map(ClusterNode::id).collect(Collectors.toList())); + + tryFinishRecoveryVoting(nodeId, ballotBox); + } + } } - /** {@inheritDoc} */ - @Override public void onExchangeStart(MvccCoordinator mvccCrd, ExchangeContext exchCtx, ClusterNode exchCrd) { - if (!exchCtx.newMvccCoordinator()) + /** + * Coordinator change callback. Performs all needed actions for handling new coordinator assignment. + * + * @param nodes Cluster topology snapshot. + * @param topVer Topology version. + * @param sndQrys {@code True} if an active queries list needs to be sent to the new coordinator.
+ */ + private void onCoordinatorChanged(Collection nodes, long topVer, boolean sndQrys) { + MvccCoordinator newCrd = pickMvccCoordinator(nodes, topVer); + + if (newCrd == null) return; - GridLongList activeQryTrackers = collectActiveQueryTrackers(); + // Update current coordinator, collect active queries and send it to the new coordinator if needed. + GridLongList activeQryTrackers = null; + + synchronized (activeTrackers) { + assert curCrd == null || newCrd.topologyVersion().compareTo(curCrd.topologyVersion()) > 0; + + if (sndQrys) { + activeQryTrackers = new GridLongList(); + + for (MvccQueryTracker tracker : activeTrackers.values()) { + long trackerId = tracker.onMvccCoordinatorChange(newCrd); - exchCtx.addActiveQueries(ctx.localNodeId(), activeQryTrackers); + if (trackerId != MVCC_TRACKER_ID_NA) + activeQryTrackers.add(trackerId); + } + } - if (exchCrd == null || !mvccCrd.nodeId().equals(exchCrd.id())) { + curCrd = newCrd; + } + + // Send local active queries to remote coordinator, if needed. + if (!newCrd.nodeId().equals(ctx.localNodeId())) { try { - sendMessage(mvccCrd.nodeId(), new MvccActiveQueriesMessage(activeQryTrackers)); + if (sndQrys) + sendMessage(newCrd.nodeId(), new MvccActiveQueriesMessage(activeQryTrackers)); } catch (IgniteCheckedException e) { U.error(log, "Failed to send active queries to mvcc coordinator: " + e); } } - } - - /** {@inheritDoc} */ - @Override public void onExchangeDone(boolean newCrd, DiscoCache discoCache, Map activeQueries) { - if (!newCrd) - return; + // If a current node was elected as a new mvcc coordinator, we need to pre-initialize it. 
+ else {
+ assert crdVer == 0 : crdVer;

- ctx.cache().context().tm().rollbackMvccTxOnCoordinatorChange();
+ crdVer = newCrd.coordinatorVersion();

- if (ctx.localNodeId().equals(curCrd.nodeId())) {
- assert ctx.localNodeId().equals(curCrd.nodeId());
+ if (log.isInfoEnabled())
+ log.info("Initialize local node as mvcc coordinator [node=" + ctx.localNodeId() +
+ ", crdVer=" + crdVer + ']');

- MvccCoordinator crd = discoCache.mvccCoordinator();
+ prevCrdQueries.init(activeQryTrackers, F.view(nodes, this::supportsMvcc), ctx.discovery());

- assert crd != null;
+ // Do not complete init future here, because we should wait until all old transactions become terminated.
+ }

- // No need to re-initialize if coordinator version hasn't changed (e.g. it was cluster activation).
- if (crdVer == crd.coordinatorVersion())
- return;
+ crdChanged = true;
+ }

- crdVer = crd.coordinatorVersion();
+ /**
+ * Cleans up active transactions whose near node is a server that is no longer alive. Executed on the coordinator.
+ *
+ * @param liveSrvs Live server nodes at the moment of cleanup.
+ */
+ private void cleanupOrphanedServerTransactions(Collection liveSrvs) {
+ Set ids = liveSrvs.stream()
+ .map(ClusterNode::id)
+ .collect(Collectors.toSet());

- log.info("Initialize local node as mvcc coordinator [node=" + ctx.localNodeId() +
- ", crdVer=" + crdVer + ']');
+ List forRmv = new ArrayList<>();

- prevCrdQueries.init(activeQueries, F.view(discoCache.allNodes(), this::supportsMvcc), ctx.discovery());
+ synchronized (this) {
+ for (Map.Entry entry : activeTxs.entrySet()) {
+ // If the node that started the tx is not known to be alive, remove the tx from the active list.
+ ActiveTx activeTx = entry.getValue();

- initFut.onDone();
+ if (activeTx.getClass() == ActiveServerTx.class && !ids.contains(activeTx.nearNodeId))
+ forRmv.add(entry.getKey());
+ }
 }
+
+ for (Long txCntr : forRmv)
+ // The committed counter is bumped because we do not know whether the transaction was committed or not, and
+ // a committed transaction must advance the counter since it is used in (read-only) query snapshots.
+ onTxDone(txCntr, true);
 }

 /** {@inheritDoc} */
@@ -419,24 +573,7 @@ public MvccProcessorImpl(GridKernalContext ctx) {
 /** {@inheritDoc} */
 @Override @Nullable public MvccCoordinator currentCoordinator() {
- return currentCoordinator(AffinityTopologyVersion.NONE);
- }
-
- /** {@inheritDoc} */
- @Override @Nullable public MvccCoordinator currentCoordinator(AffinityTopologyVersion topVer) {
- MvccCoordinator crd = curCrd;
-
- // Assert coordinator did not already change.
- assert crd == null - || topVer == AffinityTopologyVersion.NONE - || crd.topologyVersion().compareTo(topVer) <= 0 : "Invalid coordinator [crd=" + crd + ", topVer=" + topVer + ']'; - - return crd; - } - - /** {@inheritDoc} */ - @Override @Nullable public MvccCoordinator assignedCoordinator() { - return assignedCrd; + return curCrd; } /** {@inheritDoc} */ @@ -447,20 +584,15 @@ public MvccProcessorImpl(GridKernalContext ctx) { } /** {@inheritDoc} */ - @Override public void updateCoordinator(MvccCoordinator curCrd) { - this.curCrd = curCrd; + @Override public byte state(MvccVersion ver) throws IgniteCheckedException { + return state(ver.coordinatorVersion(), ver.counter()); } /** {@inheritDoc} */ @Override public byte state(long crdVer, long cntr) throws IgniteCheckedException { - return txLog.get(crdVer, cntr); - } - - /** {@inheritDoc} */ - @Override public byte state(MvccVersion ver) throws IgniteCheckedException { assert txLog != null && mvccEnabled; - return txLog.get(ver.coordinatorVersion(), ver.counter()); + return txLog.get(crdVer, cntr); } /** {@inheritDoc} */ @@ -551,16 +683,11 @@ public MvccProcessorImpl(GridKernalContext ctx) { if (!ctx.localNodeId().equals(crd.nodeId()) || !initFut.isDone()) return null; else if (tx != null) - return assignTxSnapshot(0L); + return assignTxSnapshot(0L, ctx.localNodeId(), false); else return activeQueries.assignQueryCounter(ctx.localNodeId(), 0L); } - /** {@inheritDoc} */ - @Override public IgniteInternalFuture requestSnapshotAsync() { - return requestSnapshotAsync((IgniteInternalTx)null); - } - /** {@inheritDoc} */ @Override public IgniteInternalFuture requestSnapshotAsync(IgniteInternalTx tx) { MvccSnapshotFuture fut = new MvccSnapshotFuture(); @@ -606,7 +733,7 @@ else if (tx != null) }); } else if (tx != null) - lsnr.onResponse(assignTxSnapshot(0L)); + lsnr.onResponse(assignTxSnapshot(0L, ctx.localNodeId(), false)); else lsnr.onResponse(activeQueries.assignQueryCounter(ctx.localNodeId(), 0L)); @@ -762,9 +889,8 @@ && 
sendQueryDone(crd, new MvccAckRequestQueryCntr(queryTrackCounter(snapshot)))) first = false; } - for (MvccSnapshotResponseListener lsnr : map.values()) { + for (MvccSnapshotResponseListener lsnr : map.values()) U.warn(log, ">>> " + lsnr.toString()); - } } first = true; @@ -812,51 +938,47 @@ private DataStorageConfiguration dataStorageConfiguration() { return ctx.config().getDataStorageConfiguration(); } - /** */ - private void assignMvccCoordinator(int evtType, Collection nodes, long topVer) { + /** + * Picks mvcc coordinator from the given list of nodes. + * + * @param nodes List of nodes. + * @param topVer Topology version. + * @return Chosen mvcc coordinator. + */ + private MvccCoordinator pickMvccCoordinator(Collection nodes, long topVer) { checkMvccSupported(nodes); - MvccCoordinator crd; - - if (evtType == EVT_NODE_SEGMENTED || evtType == EVT_CLIENT_NODE_DISCONNECTED) - crd = null; - else { - crd = assignedCrd; - - if (crd == null || - ((evtType == EVT_NODE_FAILED || evtType == EVT_NODE_LEFT) && !F.nodeIds(nodes).contains(crd.nodeId()))) { - ClusterNode crdNode = null; + ClusterNode crdNode = null; - if (crdC != null) { - crdNode = crdC.apply(nodes); + if (crdC != null) { + crdNode = crdC.apply(nodes); - if (log.isInfoEnabled()) - log.info("Assigned coordinator using test closure: " + crd); - } - else { - // Expect nodes are sorted by order. - for (ClusterNode node : nodes) { - if (!node.isClient() && supportsMvcc(node)) { - crdNode = node; + if (crdNode != null && log.isInfoEnabled()) + log.info("Assigned coordinator using test closure: " + crdNode.id()); + } + else { + // Expect nodes are sorted by order. + for (ClusterNode node : nodes) { + if (!node.isClient() && supportsMvcc(node)) { + crdNode = node; - break; - } - } + break; } - - crd = crdNode != null ? 
new MvccCoordinator(crdNode.id(), coordinatorVersion(crdNode), - new AffinityTopologyVersion(topVer, 0)) : null; - - if (log.isInfoEnabled() && crd != null) - log.info("Assigned mvcc coordinator [crd=" + crd + ", crdNode=" + crdNode + ']'); - else if (crd == null) - U.warn(log, "New mvcc coordinator was not assigned [topVer=" + topVer + ']'); } } - assignedCrd = crd; + MvccCoordinator crd = crdNode != null ? new MvccCoordinator(crdNode.id(), coordinatorVersion(crdNode), + new AffinityTopologyVersion(topVer, 0)) : null; + + if (log.isInfoEnabled() && crd != null) + log.info("Assigned mvcc coordinator [crd=" + crd + ", crdNode=" + crdNode + ']'); + else if (crd == null) + U.warn(log, "New mvcc coordinator was not assigned [topVer=" + topVer + ']'); + + return crd; } + /** * @param crdNode Assigned coordinator node. * @return Coordinator version. @@ -893,11 +1015,9 @@ private boolean supportsMvcc(ClusterNode node) { } /** */ - private void checkMvccCacheStarted(@Nullable DiscoveryCustomMessage customMsg) { - assert customMsg != null; - - if (!mvccEnabled && customMsg instanceof DynamicCacheChangeBatch) { - for (DynamicCacheChangeRequest req : ((DynamicCacheChangeBatch)customMsg).requests()) { + private void checkMvccCacheStarted(DynamicCacheChangeBatch cacheMsg) { + if (!mvccEnabled) { + for (DynamicCacheChangeRequest req : cacheMsg.requests()) { CacheConfiguration ccfg = req.startCacheConfiguration(); if (ccfg == null) @@ -912,28 +1032,8 @@ private void checkMvccCacheStarted(@Nullable DiscoveryCustomMessage customMsg) { } } - /** - * @return Active queries list. - */ - private GridLongList collectActiveQueryTrackers() { - assert curCrd != null; - - GridLongList activeQryTrackers = new GridLongList(); - - for (MvccQueryTracker tracker : activeTrackers.values()) { - long trackerId = tracker.onMvccCoordinatorChange(curCrd); - - if (trackerId != MVCC_TRACKER_ID_NA) - activeQryTrackers.add(trackerId); - } - - return activeQryTrackers; - } - - /** - * @return Counter. 
- */ - private MvccSnapshotResponse assignTxSnapshot(long futId) { + /** */ + private MvccSnapshotResponse assignTxSnapshot(long futId, UUID nearId, boolean client) { assert initFut.isDone(); assert crdVer != 0; assert ctx.localNodeId().equals(currentCoordinatorId()); @@ -947,14 +1047,16 @@ private MvccSnapshotResponse assignTxSnapshot(long futId) { tracking = ver; cleanup = committedCntr.get() + 1; - for (Map.Entry txVer : activeTxs.entrySet()) { - cleanup = Math.min(txVer.getValue(), cleanup); - tracking = Math.min(txVer.getKey(), tracking); + for (Map.Entry entry : activeTxs.entrySet()) { + cleanup = Math.min(entry.getValue().tracking, cleanup); + tracking = Math.min(entry.getKey(), tracking); - res.addTx(txVer.getKey()); + res.addTx(entry.getKey()); } - boolean add = activeTxs.put(ver, tracking) == null; + ActiveTx activeTx = client ? new ActiveTx(tracking, nearId) : new ActiveServerTx(tracking, nearId); + + boolean add = activeTxs.put(ver, activeTx) == null; assert add : ver; } @@ -971,10 +1073,8 @@ private MvccSnapshotResponse assignTxSnapshot(long futId) { return res; } - /** - * @param txCntr Counter assigned to transaction. 
- */ - private void onTxDone(Long txCntr, boolean committed) { + /** */ + private void onTxDone(Long txCntr, boolean increaseCommittedCntr) { assert initFut.isDone(); GridFutureAdapter fut; @@ -982,7 +1082,7 @@ private void onTxDone(Long txCntr, boolean committed) { synchronized (this) { activeTxs.remove(txCntr); - if (committed) + if (increaseCommittedCntr) committedCntr.setIfGreater(txCntr); } @@ -1085,8 +1185,8 @@ void stopVacuumWorkers() { } if (workers == null) { - if (log.isInfoEnabled()) - log.info("Attempting to stop inactive vacuum."); + if (log.isDebugEnabled() && mvccEnabled()) + log.debug("Attempting to stop inactive vacuum."); return; } @@ -1122,8 +1222,7 @@ IgniteInternalFuture runVacuum() { crdVer == 0 && ctx.localNodeId().equals(crd0.nodeId())) return new GridFinishedFuture<>(new VacuumMetrics()); - final GridCompoundIdentityFuture res = - new GridCompoundIdentityFuture<>(new VacuumMetricsReducer()); + final GridFutureAdapter res = new GridFutureAdapter<>(); MvccSnapshot snapshot; @@ -1159,29 +1258,11 @@ IgniteInternalFuture runVacuum() { return res; } - /** - * For tests only. - * - * @return Vacuum error. - */ - Throwable vacuumError() { - return vacuumError; - } - - /** - * For tests only. - * - * @param e Vacuum error. - */ - void vacuumError(Throwable e) { - this.vacuumError = e; - } - /** * @param res Result. * @param snapshot Snapshot. 
*/ - private void continueRunVacuum(GridCompoundIdentityFuture res, MvccSnapshot snapshot) { + private void continueRunVacuum(GridFutureAdapter res, MvccSnapshot snapshot) { ackTxCommit(snapshot) .listen(new IgniteInClosure() { @Override public void apply(IgniteInternalFuture fut) { @@ -1206,23 +1287,45 @@ else if (snapshot.cleanupVersion() <= MVCC_COUNTER_NA) return; } + GridCompoundIdentityFuture res0 = + new GridCompoundIdentityFuture(new VacuumMetricsReducer()) { + /** {@inheritDoc} */ + @Override protected void logError(IgniteLogger log, String msg, Throwable e) { + // no-op + } + + /** {@inheritDoc} */ + @Override protected void logDebug(IgniteLogger log, String msg) { + // no-op + } + }; + for (CacheGroupContext grp : ctx.cache().cacheGroups()) { if (grp.mvccEnabled()) { - for (GridDhtLocalPartition part : grp.topology().localPartitions()) { - VacuumTask task = new VacuumTask(snapshot, part); + grp.topology().readLock(); + + try { + for (GridDhtLocalPartition part : grp.topology().localPartitions()) { + VacuumTask task = new VacuumTask(snapshot, part); - cleanupQueue.offer(task); + cleanupQueue.offer(task); - res.add(task); + res0.add(task); + } + } + finally { + grp.topology().readUnlock(); } } } - } - res.listen(new CI1>() { - @Override public void apply(IgniteInternalFuture fut) { + res0.markInitialized(); + + res0.listen(future -> { + VacuumMetrics metrics = null; Throwable ex = null; + try { - VacuumMetrics metrics = fut.get(); + metrics = future.get(); if (U.assertionsEnabled()) { MvccCoordinator crd = currentCoordinator(); @@ -1231,10 +1334,13 @@ else if (snapshot.cleanupVersion() <= MVCC_COUNTER_NA) && crd.coordinatorVersion() >= snapshot.coordinatorVersion(); for (TxKey key : waitMap.keySet()) { - assert key.major() == snapshot.coordinatorVersion() + if (!( key.major() == snapshot.coordinatorVersion() && key.minor() > snapshot.cleanupVersion() - || key.major() > snapshot.coordinatorVersion() : - "key=" + key + ", snapshot=" + snapshot; + || 
key.major() > snapshot.coordinatorVersion())) { + byte state = state(key.major(), key.minor()); + + assert state == TxState.ABORTED : "tx state=" + state; + } } } @@ -1242,18 +1348,20 @@ else if (snapshot.cleanupVersion() <= MVCC_COUNTER_NA) if (log.isDebugEnabled()) log.debug("Vacuum completed. " + metrics); + } catch (Throwable e) { + if (X.hasCause(e, NodeStoppingException.class)) { + if (log.isDebugEnabled()) + log.debug("Cannot complete vacuum (node is stopping)."); + + metrics = new VacuumMetrics(); + } else + ex = new GridClosureException(e); } - catch (NodeStoppingException ignored) { - if (log.isDebugEnabled()) - log.debug("Cannot complete vacuum (node is stopping)."); - } - catch (Throwable e) { - U.error(log, "Vacuum error.", e); - } - } - }); - res.markInitialized(); + res.onDone(metrics, ex); + }); + } + } catch (Throwable e) { completeWithException(res, e); @@ -1311,7 +1419,6 @@ private void sendFutureResponse(UUID nodeId, MvccWaitTxsRequest msg) { fut.onDone(); // No need to ack, finish without error. 
} else - fut.onDone(e); } } @@ -1373,10 +1480,14 @@ private void processCoordinatorTxSnapshotRequest(UUID nodeId, MvccTxSnapshotRequ return; } - MvccSnapshotResponse res = assignTxSnapshot(msg.futureId()); + MvccSnapshotResponse res = assignTxSnapshot(msg.futureId(), nodeId, node.isClient()); + + boolean finishFailed = true; try { sendMessage(node.id(), res); + + finishFailed = false; } catch (ClusterTopologyCheckedException e) { if (log.isDebugEnabled()) @@ -1385,6 +1496,9 @@ private void processCoordinatorTxSnapshotRequest(UUID nodeId, MvccTxSnapshotRequ catch (IgniteCheckedException e) { U.error(log, "Failed to send tx snapshot response [msg=" + msg + ", node=" + nodeId + ']', e); } + + if (finishFailed) + onTxDone(res.counter(), false); } /** @@ -1411,9 +1525,9 @@ private void processCoordinatorQuerySnapshotRequest(UUID nodeId, MvccQuerySnapsh log.debug("Failed to send query counter response, node left [msg=" + msg + ", node=" + nodeId + ']'); } catch (IgniteCheckedException e) { - U.error(log, "Failed to send query counter response [msg=" + msg + ", node=" + nodeId + ']', e); - onQueryDone(nodeId, res.tracking()); + + U.error(log, "Failed to send query counter response [msg=" + msg + ", node=" + nodeId + ']', e); } } @@ -1702,46 +1816,6 @@ void onNodeLeft(UUID nodeId) { } } - /** - * - */ - private class CacheCoordinatorNodeFailListener implements GridLocalEventListener { - /** {@inheritDoc} */ - @Override public void onEvent(Event evt) { - assert evt instanceof DiscoveryEvent : evt; - - DiscoveryEvent discoEvt = (DiscoveryEvent)evt; - - UUID nodeId = discoEvt.eventNode().id(); - - Map map = snapLsnrs.remove(nodeId); - - if (map != null) { - ClusterTopologyCheckedException ex = new ClusterTopologyCheckedException("Failed to request mvcc " + - "version, coordinator failed: " + nodeId); - - MvccSnapshotResponseListener lsnr; - - for (Long id : map.keySet()) { - if ((lsnr = map.remove(id)) != null) - lsnr.onError(ex); - } - } - - for (WaitAckFuture fut : 
ackFuts.values())
- fut.onNodeLeft(nodeId);
-
- activeQueries.onNodeFailed(nodeId);
-
- prevCrdQueries.onNodeFailed(nodeId);
- }
-
- /** {@inheritDoc} */
- @Override public String toString() {
- return "CacheCoordinatorDiscoveryListener[]";
- }
- }
-
 /**
 *
 */
@@ -1788,6 +1862,8 @@ else if (msg instanceof MvccAckRequestQueryId)
 processNewCoordinatorQueryAckRequest(nodeId, (MvccAckRequestQueryId)msg);
 else if (msg instanceof MvccActiveQueriesMessage)
 processCoordinatorActiveQueriesMessage(nodeId, (MvccActiveQueriesMessage)msg);
+ else if (msg instanceof MvccRecoveryFinishedMessage)
+ processRecoveryFinishedMessage(nodeId, ((MvccRecoveryFinishedMessage)msg));
 else
 U.warn(log, "Unexpected message received [node=" + nodeId + ", msg=" + msg + ']');
 }
@@ -1798,6 +1874,82 @@ else if (msg instanceof MvccActiveQueriesMessage)
 }
 }

+ /**
+ * Accumulates transaction recovery votes for a node that left the cluster.
+ * Transactions started by the left node are considered not active
+ * when each server node in the cluster acknowledges that it has finished the transactions of the left node.
+ */
+ private static class RecoveryBallotBox {
+ /** */
+ private List voters;
+ /** */
+ private final Set ballots = new HashSet<>();
+
+ /**
+ * @param voters Nodes which may have transactions started by the left node.
+ */
+ private synchronized void voters(List voters) {
+ this.voters = voters;
+ }
+
+ /**
+ * @param nodeId Voting node id.
+ *
+ */
+ private synchronized void vote(UUID nodeId) {
+ ballots.add(nodeId);
+ }
+
+ /**
+ * @return {@code True} if all nodes expected to vote have done so.
+ */
+ private synchronized boolean isVotingDone() {
+ if (voters == null)
+ return false;
+
+ return ballots.containsAll(voters);
+ }
+ }
+
+ /**
+ * Processes a message that one node has finished with the transactions of the left node.
+ * @param nodeId Node that sent the message.
+ * @param msg Message.
+ */ + private void processRecoveryFinishedMessage(UUID nodeId, MvccRecoveryFinishedMessage msg) { + UUID nearNodeId = msg.nearNodeId(); + + RecoveryBallotBox ballotBox = recoveryBallotBoxes.computeIfAbsent(nearNodeId, uuid -> new RecoveryBallotBox()); + + ballotBox.vote(nodeId); + + tryFinishRecoveryVoting(nearNodeId, ballotBox); + } + + /** + * Finishes recovery on coordinator by removing transactions started by the left node + * @param nearNodeId Left node. + * @param ballotBox Votes accumulator for the left node. + */ + private void tryFinishRecoveryVoting(UUID nearNodeId, RecoveryBallotBox ballotBox) { + if (ballotBox.isVotingDone()) { + List recoveredTxs; + + synchronized (this) { + recoveredTxs = activeTxs.entrySet().stream() + .filter(e -> e.getValue().nearNodeId.equals(nearNodeId)) + .map(Map.Entry::getKey) + .collect(Collectors.toList()); + } + + // Committed counter is increased because it is not known if transaction was committed or not and we must + // bump committed counter for committed transaction as it is used in (read-only) query snapshot. + recoveredTxs.forEach(txCntr -> onTxDone(txCntr, true)); + + recoveryBallotBoxes.remove(nearNodeId); + } + } + /** */ private interface Waiter { /** @@ -2022,13 +2174,14 @@ private static class VacuumScheduler extends GridWorker { } } catch (IgniteInterruptedCheckedException e) { - throw e; + throw e; // Cancelled. 
} catch (Throwable e) { - prc.vacuumError(e); - if (e instanceof Error) throw (Error) e; + + if (log.isDebugEnabled()) + U.warn(log, "Failed to perform vacuum.", e); } long delay = nextScheduledTime - U.currentTimeMillis(); @@ -2063,20 +2216,29 @@ private static class VacuumWorker extends GridWorker { VacuumTask task = cleanupQueue.take(); try { - if (task.part().state() != OWNING) { - task.part().group().preloader().rebalanceFuture() - .listen(new IgniteInClosure>() { - @Override public void apply(IgniteInternalFuture future) { - cleanupQueue.add(task); - } - }); + switch (task.part().state()) { + case EVICTED: + case RENTING: + task.onDone(new VacuumMetrics()); - continue; - } + break; + case MOVING: + task.part().group().preloader().rebalanceFuture().listen(f -> cleanupQueue.add(task)); + + break; + case OWNING: + task.onDone(processPartition(task)); + + break; + case LOST: + task.onDone(new IgniteCheckedException("Partition is lost.")); - task.onDone(processPartition(task)); + break; + } } catch (IgniteInterruptedCheckedException e) { + task.onDone(e); + throw e; // Cancelled. 
} catch (Throwable e) { @@ -2101,9 +2263,11 @@ private VacuumMetrics processPartition(VacuumTask task) throws IgniteCheckedExce VacuumMetrics metrics = new VacuumMetrics(); - if (part == null || part.state() != OWNING || !part.reserve()) + if (!part.reserve()) return metrics; + int curCacheId = CU.UNDEFINED_CACHE_ID; + try { GridCursor cursor = part.dataStore().cursor(KEY_ONLY); @@ -2117,8 +2281,6 @@ private VacuumMetrics processPartition(VacuumTask task) throws IgniteCheckedExce GridCacheContext cctx = null; - int curCacheId = CU.UNDEFINED_CACHE_ID; - boolean shared = part.group().sharedGroup(); if (!shared && (cctx = F.first(part.group().caches())) == null) @@ -2134,8 +2296,6 @@ private VacuumMetrics processPartition(VacuumTask task) throws IgniteCheckedExce prevKey = row.key(); if (cctx == null) { - assert shared; - cctx = part.group().shared().cacheContext(curCacheId = row.cacheId()); if (cctx == null) @@ -2251,48 +2411,80 @@ private void cleanup(GridDhtLocalPartition part, KeyCacheObject key, List)rest)) { - part.dataStore().updateTxState(cctx, row); + if (rest != null) { + if (rest.getClass() == ArrayList.class) { + for (MvccDataRow row : ((List) rest)) + part.dataStore().updateTxState(cctx, row); + } + else + part.dataStore().updateTxState(cctx, (MvccDataRow) rest); } } - else - part.dataStore().updateTxState(cctx, (MvccDataRow)rest); + finally { + cctx.shared().database().checkpointReadUnlock(); + } + } + finally { + entry.unlockEntry(); + + cctx.evicts().touch(entry); + + metrics.addCleanupNanoTime(System.nanoTime() - cleanupStartNanoTime); + metrics.addCleanupRowsCnt(cleaned); } } finally { - cctx.shared().database().checkpointReadUnlock(); + cctx.gate().leave(); + } + } + } - entry.unlockEntry(); - cctx.evicts().touch(entry, AffinityTopologyVersion.NONE); + /** */ + private static class ActiveTx { + /** */ + private final long tracking; + /** */ + private final UUID nearNodeId; - metrics.addCleanupNanoTime(System.nanoTime() - cleanupStartNanoTime); - 
metrics.addCleanupRowsCnt(cleaned); - } + /** */ + private ActiveTx(long tracking, UUID nearNodeId) { + this.tracking = tracking; + this.nearNodeId = nearNodeId; + } + } + + /** */ + private static class ActiveServerTx extends ActiveTx { + /** */ + private ActiveServerTx(long tracking, UUID nearNodeId) { + super(tracking, nearNodeId); } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccQueryTrackerImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccQueryTrackerImpl.java index 9a767ec4fddca..d86f5ecd64f80 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccQueryTrackerImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccQueryTrackerImpl.java @@ -61,6 +61,9 @@ public class MvccQueryTrackerImpl implements MvccQueryTracker { /** */ private final boolean canRemap; + /** */ + private boolean done; + /** * @param cctx Cache context. */ @@ -136,6 +139,9 @@ public MvccQueryTrackerImpl(GridCacheContext cctx, boolean canRemap) { /** {@inheritDoc} */ @Override public void onDone() { + if (!checkDone()) + return; + MvccProcessor prc = cctx.shared().coordinators(); MvccSnapshot snapshot = snapshot(); @@ -151,7 +157,7 @@ public MvccQueryTrackerImpl(GridCacheContext cctx, boolean canRemap) { @Override public IgniteInternalFuture onDone(@NotNull GridNearTxLocal tx, boolean commit) { MvccSnapshot snapshot = snapshot(), txSnapshot = tx.mvccSnapshot(); - if (snapshot == null && txSnapshot == null) + if (!checkDone() || snapshot == null && txSnapshot == null) return commit ? new GridFinishedFuture<>() : null; MvccProcessor prc = cctx.shared().coordinators(); @@ -221,7 +227,7 @@ private MvccSnapshotResponseListener decorate(MvccSnapshotResponseListener lsnr) * @return {@code True} if topology is valid. 
*/ private boolean checkTopology(AffinityTopologyVersion topVer, MvccSnapshotResponseListener lsnr) { - MvccCoordinator crd = cctx.affinity().mvccCoordinator(topVer); + MvccCoordinator crd = cctx.shared().coordinators().currentCoordinator(); if (crd == null) { lsnr.onError(noCoordinatorError(topVer)); @@ -235,16 +241,6 @@ private boolean checkTopology(AffinityTopologyVersion topVer, MvccSnapshotRespon crdVer = crd.coordinatorVersion(); } - MvccCoordinator curCrd = cctx.topology().mvccCoordinator(); - - if (!crd.equals(curCrd)) { - assert cctx.topology().topologyVersionFuture().initialVersion().compareTo(topVer) > 0; - - tryRemap(lsnr); - - return false; - } - return true; } @@ -328,6 +324,14 @@ private boolean onError0(IgniteCheckedException e, MvccSnapshotResponseListener return true; } + /** */ + private synchronized boolean checkDone() { + if (!done) + return done = true; + + return false; + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(MvccQueryTrackerImpl.class, this); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccTxEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccTxEntry.java new file mode 100644 index 0000000000000..28864f8dbcaed --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccTxEntry.java @@ -0,0 +1,203 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.internal.processors.cache.mvcc;
+
+import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
+import org.apache.ignite.internal.processors.cache.CacheObject;
+import org.apache.ignite.internal.processors.cache.KeyCacheObject;
+import org.apache.ignite.internal.processors.cache.version.GridCacheRawVersionedEntry;
+import org.apache.ignite.internal.processors.cache.version.GridCacheVersion;
+import org.apache.ignite.internal.util.typedef.internal.S;
+import org.jetbrains.annotations.Nullable;
+
+/**
+ * Holder for the enlisted entries data.
+ */
+public class MvccTxEntry {
+ /** */
+ private KeyCacheObject key;
+
+ /** */
+ private CacheObject val;
+
+ /** */
+ private int cacheId;
+
+ /** */
+ private GridCacheVersion ver;
+
+ /** */
+ private CacheObject oldVal;
+
+ /** */
+ private boolean primary;
+
+ /** */
+ private AffinityTopologyVersion topVer;
+
+ /** */
+ private MvccVersion mvccVer;
+
+ /** */
+ private long ttl;
+
+ /** */
+ private long expireTime;
+
+ /** */
+ private long updCntr;
+
+ /**
+ * @param key Key.
+ * @param val New value.
+ * @param ttl Time to live.
+ * @param expireTime Expire time.
+ * @param ver Tx grid cache version.
+ * @param oldVal Old value.
+ * @param primary {@code True} if this is a primary node.
+ * @param topVer Topology version.
+ * @param mvccVer Mvcc version.
+ * @param cacheId Cache id.
+ */ + public MvccTxEntry(KeyCacheObject key, + @Nullable CacheObject val, + long ttl, + long expireTime, + GridCacheVersion ver, + CacheObject oldVal, + boolean primary, + AffinityTopologyVersion topVer, + MvccVersion mvccVer, + int cacheId) { + assert key != null; + assert mvccVer != null; + + this.key = key; + this.val = val; + this.ttl = ttl; + this.expireTime = expireTime; + this.ver = ver; + this.oldVal = oldVal; + this.primary = primary; + this.topVer = topVer; + this.mvccVer = mvccVer; + this.cacheId = cacheId; + } + + /** + * @return Versioned entry (for DR). + */ + public GridCacheRawVersionedEntry versionedEntry() { + return new GridCacheRawVersionedEntry(key, val, ttl, expireTime, ver); + } + + /** + * @return Key. + */ + public KeyCacheObject key() { + return key; + } + + /** + * @return Value. + */ + public CacheObject value() { + return val; + } + + /** + * @return Time to live. + */ + public long ttl() { + return ttl; + } + + /** + * @return Expire time. + */ + public long expireTime() { + return expireTime; + } + + /** + * @return Version. + */ + public GridCacheVersion version() { + return ver; + } + + /** + * @return Old value. + */ + public CacheObject oldValue() { + return oldVal; + } + + /** + * @param oldVal Old value. + */ + public void oldValue(CacheObject oldVal) { + this.oldVal = oldVal; + } + + /** + * @return {@code True} if this entry is created on a primary node. + */ + public boolean isPrimary() { + return primary; + } + + /** + * @return Topology version. + */ + public AffinityTopologyVersion topologyVersion() { + return topVer; + } + + /** + * @return Mvcc version. + */ + public MvccVersion mvccVersion() { + return mvccVer; + } + + /** + * @return Cache id. + */ + public int cacheId() { + return cacheId; + } + + /** + * @return Update counter. + */ + public long updateCounter() { + return updCntr; + } + + /** + * @param updCntr Update counter. 
+ */ + public void updateCounter(long updCntr) { + this.updCntr = updCntr; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(MvccTxEntry.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUpdateVersionAware.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUpdateVersionAware.java index fee928e038eeb..17804c49887a7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUpdateVersionAware.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUpdateVersionAware.java @@ -73,9 +73,4 @@ public default void newMvccVersion(long crd, long cntr, int opCntr) { public default MvccVersion newMvccVersion() { return new MvccVersionImpl(newMvccCoordinatorVersion(), newMvccCounter(), newMvccOperationCounter()); } - - /** - * @return {@code True} if this key was inserted in the cache with this row in the same transaction. 
- */ - public boolean isKeyAbsentBefore(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUtils.java index 896c9aacdf46f..c6848d35ef1aa 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUtils.java @@ -100,13 +100,13 @@ public class MvccUtils { /** */ public static final int MVCC_VISIBLE = 2; - /** */ + /** A special version visible by everyone */ public static final MvccVersion INITIAL_VERSION = mvccVersion(MVCC_CRD_START_CNTR, MVCC_INITIAL_CNTR, MVCC_START_OP_CNTR); - /** */ - public static final MvccVersion MVCC_VERSION_NA = - mvccVersion(MVCC_CRD_COUNTER_NA, MVCC_COUNTER_NA, MVCC_OP_COUNTER_NA); + /** A special snapshot for which all committed versions are visible */ + public static final MvccSnapshot MVCC_MAX_SNAPSHOT = + new MvccSnapshotWithoutTxs(Long.MAX_VALUE, Long.MAX_VALUE, MVCC_READ_OP_CNTR, MVCC_COUNTER_NA); /** */ private static final MvccClosure getVisibleState = new GetVisibleState(); @@ -184,7 +184,14 @@ private static byte state(MvccProcessor proc, long mvccCrd, long mvccCntr, int m if ((mvccOpCntr & MVCC_HINTS_MASK) != 0) return (byte)(mvccOpCntr >>> MVCC_HINTS_BIT_OFF); - return proc.state(mvccCrd, mvccCntr); + byte state = proc.state(mvccCrd, mvccCntr); + + if ((state == TxState.NA || state == TxState.PREPARED) + && (proc.currentCoordinator() == null // Recovery from WAL. 
+ || mvccCrd < proc.currentCoordinator().coordinatorVersion())) + state = TxState.ABORTED; + + return state; } /** @@ -242,7 +249,11 @@ public static boolean isVisible(GridCacheContext cctx, MvccSnapshot snapshot, lo if (mvccCntr > snapshotCntr) // we don't see future updates return false; - if (mvccCntr == snapshotCntr) { + // Basically we can make a fast decision about visibility when we find rows from the same transaction. + // But we can't make such a decision for read-only queries, + // because read-only queries use the last committed version in their snapshot, which could actually be aborted + // (during transaction recovery we do not know whether the recovered transaction was committed or aborted). + if (mvccCntr == snapshotCntr && snapshotOpCntr != MVCC_READ_OP_CNTR) { assert opCntr <= snapshotOpCntr : "rowVer=" + mvccVersion(mvccCrd, mvccCntr, opCntr) + ", snapshot=" + snapshot; return opCntr < snapshotOpCntr; // we don't see own pending updates @@ -464,6 +475,17 @@ public static int compare(long mvccCrdLeft, long mvccCntrLeft, int mvccOpCntrLef return 0; } + /** + * Compares left version (xid_min) with the given version ignoring operation counter. + * + * @param left Version. + * @param right Version. + * @return Comparison result, see {@link Comparable}. + */ + public static int compareIgnoreOpCounter(MvccVersion left, MvccVersion right) { + return compare(left.coordinatorVersion(), left.counter(), 0, right.coordinatorVersion(), right.counter(), 0); + } + + /** + * Compares new row version (xid_max) with the given counter and coordinator versions. 
* @@ -577,7 +599,8 @@ private static R invoke(GridCacheContext cctx, long link, MvccClosure clo try{ DataPageIO dataIo = DataPageIO.VERSIONS.forPage(pageAddr); - int offset = dataIo.getPayloadOffset(pageAddr, itemId(link), pageMem.pageSize(), MVCC_INFO_SIZE); + int offset = dataIo.getPayloadOffset(pageAddr, itemId(link), pageMem.realPageSize(grpId), + MVCC_INFO_SIZE); long mvccCrd = dataIo.mvccCoordinator(pageAddr, offset); long mvccCntr = dataIo.mvccCounter(pageAddr, offset); @@ -654,6 +677,26 @@ public static GridNearTxLocal checkActive(GridNearTxLocal tx) { * @return Currently started user transaction, or {@code null} if none started. */ @Nullable public static GridNearTxLocal tx(GridKernalContext ctx, @Nullable GridCacheVersion txId) { + try { + return currentTx(ctx, txId); + } + catch (UnsupportedTxModeException e) { + throw new IgniteSQLException(e.getMessage(), IgniteQueryErrorCode.UNSUPPORTED_OPERATION); + } + catch (NonMvccTransactionException e) { + throw new IgniteSQLException(e.getMessage(), IgniteQueryErrorCode.TRANSACTION_TYPE_MISMATCH); + } + } + + /** + * @param ctx Grid kernal context. + * @param txId Transaction ID. + * @return Currently started user transaction, or {@code null} if none started. + * @throws UnsupportedTxModeException If transaction mode is not supported when MVCC is enabled. + * @throws NonMvccTransactionException If started transaction spans non MVCC caches. + */ + @Nullable public static GridNearTxLocal currentTx(GridKernalContext ctx, + @Nullable GridCacheVersion txId) throws UnsupportedTxModeException, NonMvccTransactionException { IgniteTxManager tm = ctx.cache().context().tm(); IgniteInternalTx tx0 = txId == null ? tm.tx() : tm.tx(txId); @@ -661,26 +704,22 @@ public static GridNearTxLocal checkActive(GridNearTxLocal tx) { GridNearTxLocal tx = tx0 != null && tx0.user() ? 
(GridNearTxLocal)tx0 : null; if (tx != null) { - if (!tx.pessimistic() || !tx.repeatableRead()) { + if (!tx.pessimistic()) { tx.setRollbackOnly(); - throw new IgniteSQLException("Only pessimistic repeatable read transactions are supported at the moment.", - IgniteQueryErrorCode.UNSUPPORTED_OPERATION); - + throw new UnsupportedTxModeException(); } if (!tx.isOperationAllowed(true)) { tx.setRollbackOnly(); - throw new IgniteSQLException("SQL queries and cache operations " + - "may not be used in the same transaction.", IgniteQueryErrorCode.TRANSACTION_TYPE_MISMATCH); + throw new NonMvccTransactionException(); } } return tx; } - /** * @param ctx Grid kernal context. * @param timeout Transaction timeout. @@ -902,4 +941,24 @@ private static class GetNewVersion implements MvccClosure { return newMvccCrd == MVCC_CRD_COUNTER_NA ? null : mvccVersion(newMvccCrd, newMvccCntr, newMvccOpCntr); } } + + /** */ + public static class UnsupportedTxModeException extends IgniteCheckedException { + /** */ + private static final long serialVersionUID = 0L; + /** */ + private UnsupportedTxModeException() { + super("Only pessimistic transactions are supported when MVCC is enabled."); + } + } + + /** */ + public static class NonMvccTransactionException extends IgniteCheckedException { + /** */ + private static final long serialVersionUID = 0L; + /** */ + private NonMvccTransactionException() { + super("Operations on MVCC caches are not permitted in transactions spanning non MVCC caches."); + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccAckRequestTxAndQueryId.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccAckRequestTxAndQueryId.java index 89f09db5d0b2f..f3b3150480af1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccAckRequestTxAndQueryId.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccAckRequestTxAndQueryId.java @@ -76,6 +76,7 @@ public MvccAckRequestTxAndQueryId(long futId, long txCntr, long qryTrackerId) { return false; writer.incrementState(); + } return true; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccRecoveryFinishedMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccRecoveryFinishedMessage.java new file mode 100644 index 0000000000000..a4ea103c3e49c --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccRecoveryFinishedMessage.java @@ -0,0 +1,116 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.mvcc.msg; + +import java.nio.ByteBuffer; +import java.util.UUID; +import org.apache.ignite.plugin.extensions.communication.MessageReader; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; + +/** */ +public class MvccRecoveryFinishedMessage implements MvccMessage { + /** */ + private static final long serialVersionUID = -505062368078979867L; + + /** */ + private UUID nearNodeId; + + /** */ + public MvccRecoveryFinishedMessage() { + } + + /** */ + public MvccRecoveryFinishedMessage(UUID nearNodeId) { + this.nearNodeId = nearNodeId; + } + + /** + * @return Left node id for which transactions were recovered. + */ + public UUID nearNodeId() { + return nearNodeId; + } + + /** {@inheritDoc} */ + @Override public boolean waitForCoordinatorInit() { + return false; + } + + /** {@inheritDoc} */ + @Override public boolean processedFromNioThread() { + return false; + } + + /** {@inheritDoc} */ + @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) { + writer.setBuffer(buf); + + if (!writer.isHeaderWritten()) { + if (!writer.writeHeader(directType(), fieldsCount())) + return false; + + writer.onHeaderWritten(); + } + + switch (writer.state()) { + case 0: + if (!writer.writeUuid("nearNodeId", nearNodeId)) + return false; + + writer.incrementState(); + + } + + return true; + } + + /** {@inheritDoc} */ + @Override public boolean readFrom(ByteBuffer buf, MessageReader reader) { + reader.setBuffer(buf); + + if (!reader.beforeMessageRead()) + return false; + + switch (reader.state()) { + case 0: + nearNodeId = reader.readUuid("nearNodeId"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + } + + return reader.afterMessageRead(MvccRecoveryFinishedMessage.class); + } + + /** {@inheritDoc} */ + @Override public short directType() { + return 164; + } + + /** {@inheritDoc} */ + @Override public byte fieldsCount() { + return 1; + } + + /** {@inheritDoc} */ + 
@Override public void onAckReceived() { + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccSnapshotResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccSnapshotResponse.java index 2c22616094f4f..73d3f9483981a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccSnapshotResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccSnapshotResponse.java @@ -20,10 +20,13 @@ import java.nio.ByteBuffer; import java.util.Arrays; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.managers.communication.GridIoMessageFactory; import org.apache.ignite.internal.processors.cache.mvcc.MvccLongList; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshotWithoutTxs; +import org.apache.ignite.internal.util.tostring.GridToStringExclude; +import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.plugin.extensions.communication.MessageReader; import org.apache.ignite.plugin.extensions.communication.MessageWriter; @@ -31,6 +34,7 @@ /** * */ +@IgniteCodeGeneratingFail public class MvccSnapshotResponse implements MvccMessage, MvccSnapshot, MvccLongList { /** */ private static final long serialVersionUID = 0L; @@ -49,9 +53,11 @@ public class MvccSnapshotResponse implements MvccMessage, MvccSnapshot, MvccLong /** */ @GridDirectTransient + @GridToStringExclude private int txsCnt; /** */ + @GridToStringInclude private long[] txs; /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccTxSnapshotRequest.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccTxSnapshotRequest.java index cd30eb85b88af..4cf6f65a44de4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccTxSnapshotRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/MvccTxSnapshotRequest.java @@ -81,6 +81,7 @@ public long futureId() { return false; writer.incrementState(); + } return true; @@ -101,6 +102,7 @@ public long futureId() { return false; reader.incrementState(); + } return reader.afterMessageRead(MvccTxSnapshotRequest.class); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/PartitionCountersNeighborcastRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/PartitionCountersNeighborcastRequest.java new file mode 100644 index 0000000000000..5c46ca66a280a --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/PartitionCountersNeighborcastRequest.java @@ -0,0 +1,145 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.mvcc.msg; + +import java.nio.ByteBuffer; +import java.util.Collection; +import org.apache.ignite.internal.GridDirectCollection; +import org.apache.ignite.internal.processors.cache.GridCacheIdMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.PartitionUpdateCountersMessage; +import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType; +import org.apache.ignite.plugin.extensions.communication.MessageReader; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; + +/** */ +public class PartitionCountersNeighborcastRequest extends GridCacheIdMessage { + /** */ + private static final long serialVersionUID = -1893577108462486998L; + + /** */ + @GridDirectCollection(PartitionUpdateCountersMessage.class) + private Collection updCntrs; + + /** */ + private IgniteUuid futId; + + /** */ + public PartitionCountersNeighborcastRequest() { + } + + /** */ + public PartitionCountersNeighborcastRequest( + Collection updCntrs, IgniteUuid futId) { + this.updCntrs = updCntrs; + this.futId = futId; + } + + /** + * @return Partition update counters for remote node. + */ + public Collection updateCounters() { + return updCntrs; + } + + /** + * @return Sending future id. 
+ */ + public IgniteUuid futId() { + return futId; + } + + /** {@inheritDoc} */ + @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) { + writer.setBuffer(buf); + + if (!super.writeTo(buf, writer)) + return false; + + if (!writer.isHeaderWritten()) { + if (!writer.writeHeader(directType(), fieldsCount())) + return false; + + writer.onHeaderWritten(); + } + + switch (writer.state()) { + case 4: + if (!writer.writeIgniteUuid("futId", futId)) + return false; + + writer.incrementState(); + + case 5: + if (!writer.writeCollection("updCntrs", updCntrs, MessageCollectionItemType.MSG)) + return false; + + writer.incrementState(); + + } + + return true; + } + + /** {@inheritDoc} */ + @Override public boolean readFrom(ByteBuffer buf, MessageReader reader) { + reader.setBuffer(buf); + + if (!reader.beforeMessageRead()) + return false; + + if (!super.readFrom(buf, reader)) + return false; + + switch (reader.state()) { + case 4: + futId = reader.readIgniteUuid("futId"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 5: + updCntrs = reader.readCollection("updCntrs", MessageCollectionItemType.MSG); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + } + + return reader.afterMessageRead(PartitionCountersNeighborcastRequest.class); + } + + /** {@inheritDoc} */ + @Override public short directType() { + return 165; + } + + /** {@inheritDoc} */ + @Override public byte fieldsCount() { + return 6; + } + + /** {@inheritDoc} */ + @Override public boolean addDeploymentInfo() { + return false; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/PartitionCountersNeighborcastResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/PartitionCountersNeighborcastResponse.java new file mode 100644 index 0000000000000..2f88cde5a6fbb --- /dev/null +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/msg/PartitionCountersNeighborcastResponse.java @@ -0,0 +1,114 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.mvcc.msg; + +import java.nio.ByteBuffer; +import org.apache.ignite.internal.processors.cache.GridCacheIdMessage; +import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.plugin.extensions.communication.MessageReader; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; + +/** */ +public class PartitionCountersNeighborcastResponse extends GridCacheIdMessage { + /** */ + private static final long serialVersionUID = -8731050539139260521L; + + /** */ + private IgniteUuid futId; + + /** */ + public PartitionCountersNeighborcastResponse() { + } + + /** */ + public PartitionCountersNeighborcastResponse(IgniteUuid futId) { + this.futId = futId; + } + + /** + * @return Sending future id. 
+ */ + public IgniteUuid futId() { + return futId; + } + + /** {@inheritDoc} */ + @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) { + writer.setBuffer(buf); + + if (!super.writeTo(buf, writer)) + return false; + + if (!writer.isHeaderWritten()) { + if (!writer.writeHeader(directType(), fieldsCount())) + return false; + + writer.onHeaderWritten(); + } + + switch (writer.state()) { + case 4: + if (!writer.writeIgniteUuid("futId", futId)) + return false; + + writer.incrementState(); + + } + + return true; + } + + /** {@inheritDoc} */ + @Override public boolean readFrom(ByteBuffer buf, MessageReader reader) { + reader.setBuffer(buf); + + if (!reader.beforeMessageRead()) + return false; + + if (!super.readFrom(buf, reader)) + return false; + + switch (reader.state()) { + case 4: + futId = reader.readIgniteUuid("futId"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + } + + return reader.afterMessageRead(PartitionCountersNeighborcastResponse.class); + } + + /** {@inheritDoc} */ + @Override public short directType() { + return 166; + } + + /** {@inheritDoc} */ + @Override public byte fieldsCount() { + return 5; + } + + /** {@inheritDoc} */ + @Override public boolean addDeploymentInfo() { + return false; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/txlog/TxLog.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/txlog/TxLog.java index 61d9cc67417b8..ca3053c7fb073 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/txlog/TxLog.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/txlog/TxLog.java @@ -37,7 +37,7 @@ import org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree; import org.apache.ignite.internal.processors.cache.persistence.tree.io.BPlusIO; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; -import 
org.apache.ignite.internal.processors.cache.persistence.tree.io.PagePartitionMetaIO; +import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageMetaIO; import org.apache.ignite.internal.processors.cache.persistence.tree.reuse.ReuseList; import org.apache.ignite.internal.processors.cache.persistence.tree.reuse.ReuseListImpl; import org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler; @@ -100,36 +100,36 @@ private void init(GridKernalContext ctx) throws IgniteCheckedException { IgniteWriteAheadLogManager wal = ctx.cache().context().wal(); PageMemoryEx pageMemory = (PageMemoryEx)mgr.dataRegion(TX_LOG_CACHE_NAME).pageMemory(); - long partMetaId = pageMemory.partitionMetaPageId(TX_LOG_CACHE_ID, 0); - long partMetaPage = pageMemory.acquirePage(TX_LOG_CACHE_ID, partMetaId); + long metaId = pageMemory.metaPageId(TX_LOG_CACHE_ID); + long metaPage = pageMemory.acquirePage(TX_LOG_CACHE_ID, metaId); long treeRoot, reuseListRoot; boolean isNew = false; try { - long pageAddr = pageMemory.writeLock(TX_LOG_CACHE_ID, partMetaId, partMetaPage); + long pageAddr = pageMemory.writeLock(TX_LOG_CACHE_ID, metaId, metaPage); try { - if (PageIO.getType(pageAddr) != PageIO.T_PART_META) { + if (PageIO.getType(pageAddr) != PageIO.T_META) { // Initialize new page. 
- PagePartitionMetaIO io = PagePartitionMetaIO.VERSIONS.latest(); + PageMetaIO io = PageMetaIO.VERSIONS.latest(); - io.initNewPage(pageAddr, partMetaId, pageMemory.pageSize()); + io.initNewPage(pageAddr, metaId, pageMemory.pageSize()); - treeRoot = pageMemory.allocatePage(TX_LOG_CACHE_ID, 0, PageMemory.FLAG_DATA); - reuseListRoot = pageMemory.allocatePage(TX_LOG_CACHE_ID, 0, PageMemory.FLAG_DATA); + treeRoot = pageMemory.allocatePage(TX_LOG_CACHE_ID, INDEX_PARTITION, PageMemory.FLAG_IDX); + reuseListRoot = pageMemory.allocatePage(TX_LOG_CACHE_ID, INDEX_PARTITION, PageMemory.FLAG_IDX); - assert PageIdUtils.flag(treeRoot) == PageMemory.FLAG_DATA; - assert PageIdUtils.flag(reuseListRoot) == PageMemory.FLAG_DATA; + assert PageIdUtils.flag(treeRoot) == PageMemory.FLAG_IDX; + assert PageIdUtils.flag(reuseListRoot) == PageMemory.FLAG_IDX; io.setTreeRoot(pageAddr, treeRoot); io.setReuseListRoot(pageAddr, reuseListRoot); - if (PageHandler.isWalDeltaRecordNeeded(pageMemory, TX_LOG_CACHE_ID, partMetaId, partMetaPage, wal, null)) + if (PageHandler.isWalDeltaRecordNeeded(pageMemory, TX_LOG_CACHE_ID, metaId, metaPage, wal, null)) wal.log(new MetaPageInitRecord( TX_LOG_CACHE_ID, - partMetaId, + metaId, io.getType(), io.getVersion(), treeRoot, @@ -139,23 +139,23 @@ private void init(GridKernalContext ctx) throws IgniteCheckedException { isNew = true; } else { - PagePartitionMetaIO io = PageIO.getPageIO(pageAddr); + PageMetaIO io = PageIO.getPageIO(pageAddr); treeRoot = io.getTreeRoot(pageAddr); reuseListRoot = io.getReuseListRoot(pageAddr); - assert PageIdUtils.flag(treeRoot) == PageMemory.FLAG_DATA : - U.hexLong(treeRoot) + ", part=" + 0 + ", TX_LOG_CACHE_ID=" + TX_LOG_CACHE_ID; - assert PageIdUtils.flag(reuseListRoot) == PageMemory.FLAG_DATA : - U.hexLong(reuseListRoot) + ", part=" + 0 + ", TX_LOG_CACHE_ID=" + TX_LOG_CACHE_ID; + assert PageIdUtils.flag(treeRoot) == PageMemory.FLAG_IDX : + U.hexLong(treeRoot) + ", TX_LOG_CACHE_ID=" + TX_LOG_CACHE_ID; + assert 
PageIdUtils.flag(reuseListRoot) == PageMemory.FLAG_IDX : + U.hexLong(reuseListRoot) + ", TX_LOG_CACHE_ID=" + TX_LOG_CACHE_ID; } } finally { - pageMemory.writeUnlock(TX_LOG_CACHE_ID, partMetaId, partMetaPage, null, isNew); + pageMemory.writeUnlock(TX_LOG_CACHE_ID, metaId, metaPage, null, isNew); } } finally { - pageMemory.releasePage(TX_LOG_CACHE_ID, partMetaId, partMetaPage); + pageMemory.releasePage(TX_LOG_CACHE_ID, metaId, metaPage); } reuseList = new ReuseListImpl( @@ -236,18 +236,13 @@ public byte get(TxKey key) throws IgniteCheckedException { * @throws IgniteCheckedException If failed. */ public void put(TxKey key, byte state, boolean primary) throws IgniteCheckedException { + assert mgr.checkpointLockIsHeldByThread(); + Sync sync = syncObject(key); try { - mgr.checkpointReadLock(); - - try { - synchronized (sync) { - tree.invoke(key, null, new TxLogUpdateClosure(key.major(), key.minor(), state, primary)); - } - } - finally { - mgr.checkpointReadUnlock(); + synchronized (sync) { + tree.invoke(key, null, new TxLogUpdateClosure(key.major(), key.minor(), state, primary)); } } finally { evict(key, sync); @@ -267,8 +262,14 @@ public void removeUntil(long major, long minor) throws IgniteCheckedException { tree.iterate(LOWEST, clo, clo); if (clo.rows != null) { - for (TxKey row : clo.rows) { - remove(row); + mgr.checkpointReadLock(); + + try { + for (TxKey row : clo.rows) + remove(row); + } + finally { + mgr.checkpointReadUnlock(); } } } @@ -278,17 +279,11 @@ private void remove(TxKey key) throws IgniteCheckedException { Sync sync = syncObject(key); try { - mgr.checkpointReadLock(); - - try { - synchronized (sync) { - tree.removex(key); - } + synchronized (sync) { + tree.removex(key); } - finally { - mgr.checkpointReadUnlock(); - } - } finally { + } + finally { evict(key, sync); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CacheDataRowAdapter.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CacheDataRowAdapter.java index 574e6d5232c7c..e6360dfb7b218 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CacheDataRowAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CacheDataRowAdapter.java @@ -155,7 +155,7 @@ public final void initFromLink( DataPagePayload data = io.readPayload(pageAddr, itemId(nextLink), - pageMem.pageSize()); + pageMem.realPageSize(grpId)); nextLink = data.nextLink(); @@ -661,11 +661,6 @@ public boolean isReady() { return TxState.NA; } - /** {@inheritDoc} */ - @Override public boolean isKeyAbsentBefore() { - return false; - } - /** * */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CheckpointFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CheckpointFuture.java index 1c77013d63333..767173812ba8f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CheckpointFuture.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/CheckpointFuture.java @@ -31,5 +31,5 @@ public interface CheckpointFuture { /** * @return Finish future. 
*/ - public GridFutureAdapter finishFuture(); + public GridFutureAdapter finishFuture(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataRegionMetricsImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataRegionMetricsImpl.java index 2408660d17fe9..420af3d389722 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataRegionMetricsImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataRegionMetricsImpl.java @@ -481,4 +481,23 @@ public void subIntervals(int subInts) { pageReplaceRate = new HitRateMetrics((int)rateTimeInterval, subInts); pageReplaceAge = new HitRateMetrics((int)rateTimeInterval, subInts); } + + /** + * Clear metrics. + */ + public void clear() { + totalAllocatedPages.reset(); + grpAllocationTrackers.values().forEach(GroupAllocationTracker::reset); + largeEntriesPages.reset(); + dirtyPages.reset(); + readPages.reset(); + writtenPages.reset(); + replacedPages.reset(); + offHeapSize.set(0); + checkpointBufferSize.set(0); + allocRate.clear(); + evictRate.clear(); + pageReplaceRate.clear(); + pageReplaceAge.clear(); + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataStructure.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataStructure.java index 017740721dfaa..610f6e638d0db 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataStructure.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DataStructure.java @@ -360,7 +360,7 @@ protected final long recyclePage( boolean needWalDeltaRecord = needWalDeltaRecord(pageId, page, walPlc); - if (PageIdUtils.tag(pageId) == FLAG_DATA) { + if (PageIdUtils.flag(pageId) == FLAG_DATA) { int rotatedIdPart = PageIO.getRotatedIdPart(pageAddr); if 
(rotatedIdPart != 0) { @@ -387,10 +387,10 @@ protected final long recyclePage( } /** - * @return Page size. + * @return Page size without encryption overhead. */ - protected final int pageSize() { - return pageMem.pageSize(); + protected int pageSize() { + return pageMem.realPageSize(grpId); } @Override public void onBeforeWriteLock(int cacheId, long pageId, long page) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DatabaseLifecycleListener.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DatabaseLifecycleListener.java index f96cdd91a7e8f..269dbd318f520 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DatabaseLifecycleListener.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/DatabaseLifecycleListener.java @@ -22,33 +22,74 @@ /** * */ +@SuppressWarnings("RedundantThrows") public interface DatabaseLifecycleListener { - /** + * Callback executed when data regions are about to be started. + * * @param mgr Database shared manager. + * @throws IgniteCheckedException If failed. + */ + public default void onInitDataRegions(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException {} + + /** + * Callback executed when the node detects that the baseline topology has changed and the node is not part of that baseline. + * Useful to clean up and invalidate all data restored by that moment. * + * @throws IgniteCheckedException If failed. + */ + public default void onBaselineChange() throws IgniteCheckedException {} + + /** + * Callback executed right before the node performs binary recovery. + * + * @param mgr Database shared manager. + * @throws IgniteCheckedException If failed. 
*/ - void onInitDataRegions(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException; + public default void beforeBinaryMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException {} /** - * @param mgr Page store manager. + * Callback executed when binary memory has been fully restored and WAL logging is resumed. * + * + * @param mgr Database shared manager. + * @param restoreState Result of binary recovery. + * @throws IgniteCheckedException If failed. */ - void beforeMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException; + public default void afterBinaryMemoryRestore(IgniteCacheDatabaseSharedManager mgr, + GridCacheDatabaseSharedManager.RestoreBinaryState restoreState) throws IgniteCheckedException {} /** + * Callback executed when all logical updates have been applied and page memory has reached a fully consistent state. + * + * * @param mgr Database shared manager. + * @param restoreState Result of logical recovery. + * @throws IgniteCheckedException If failed. + */ + public default void afterLogicalUpdatesApplied(IgniteCacheDatabaseSharedManager mgr, + GridCacheDatabaseSharedManager.RestoreLogicalState restoreState) throws IgniteCheckedException {} + + /** + * Callback executed when all physical updates have been applied and we are ready to write new physical records + * during logical recovery. * + * @param mgr Database shared manager. + * @throws IgniteCheckedException If failed. */ - void afterMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException; + public default void beforeResumeWalLogging(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException {} /** + * Callback executed after all data regions are initialized. + * * @param mgr Database shared manager. 
*/ - void afterInitialise(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException; + public default void afterInitialise(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException {} /** + * Callback executed before the shared manager is stopped. + * * @param mgr Database shared manager. */ - void beforeStop(IgniteCacheDatabaseSharedManager mgr); + public default void beforeStop(IgniteCacheDatabaseSharedManager mgr) {} } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheDatabaseSharedManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheDatabaseSharedManager.java index 158c3b14c1432..6199896389c9a 100755 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheDatabaseSharedManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheDatabaseSharedManager.java @@ -40,6 +40,7 @@ import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Optional; import java.util.Set; import java.util.UUID; import java.util.concurrent.ConcurrentHashMap; @@ -53,10 +54,10 @@ import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.LongAdder; import java.util.concurrent.locks.ReentrantReadWriteLock; +import java.util.function.Predicate; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; -import javax.management.ObjectName; import org.apache.ignite.DataStorageMetrics; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; @@ -70,8 +71,6 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.events.DiscoveryEvent; -import org.apache.ignite.events.EventType; import 
org.apache.ignite.failure.FailureContext; import org.apache.ignite.failure.FailureType; import org.apache.ignite.internal.GridKernalContext; @@ -79,13 +78,11 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.NodeStoppingException; -import org.apache.ignite.internal.managers.communication.GridIoPolicy; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; import org.apache.ignite.internal.mem.DirectMemoryProvider; import org.apache.ignite.internal.mem.DirectMemoryRegion; -import org.apache.ignite.internal.mem.file.MappedFileMemoryProvider; -import org.apache.ignite.internal.mem.unsafe.UnsafeMemoryProvider; import org.apache.ignite.internal.pagemem.FullPageId; +import org.apache.ignite.internal.pagemem.PageIdAllocator; import org.apache.ignite.internal.pagemem.PageIdUtils; import org.apache.ignite.internal.pagemem.PageMemory; import org.apache.ignite.internal.pagemem.PageUtils; @@ -99,22 +96,26 @@ import org.apache.ignite.internal.pagemem.wal.record.DataRecord; import org.apache.ignite.internal.pagemem.wal.record.MemoryRecoveryRecord; import org.apache.ignite.internal.pagemem.wal.record.MetastoreDataRecord; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MvccTxRecord; import org.apache.ignite.internal.pagemem.wal.record.PageSnapshot; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; +import org.apache.ignite.internal.pagemem.wal.record.WalRecordCacheGroupAware; import org.apache.ignite.internal.pagemem.wal.record.delta.PageDeltaRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.PartitionDestroyRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.PartitionMetaStateRecord; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheGroupContext; 
import org.apache.ignite.internal.processors.cache.CacheGroupDescriptor; import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.ExchangeActions; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; -import org.apache.ignite.internal.processors.cache.StoredCacheData; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; -import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxLog; +import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxState; import org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry; import org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntryType; import org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory; @@ -127,19 +128,21 @@ import org.apache.ignite.internal.processors.cache.persistence.pagemem.CheckpointMetricsTracker; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl; +import org.apache.ignite.internal.processors.cache.persistence.partstate.GroupPartitionId; import org.apache.ignite.internal.processors.cache.persistence.partstate.PartitionAllocationMap; +import org.apache.ignite.internal.processors.cache.persistence.partstate.PartitionRecoverState; import org.apache.ignite.internal.processors.cache.persistence.snapshot.IgniteCacheSnapshotManager; import 
org.apache.ignite.internal.processors.cache.persistence.snapshot.SnapshotOperation; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PagePartitionMetaIO; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; import org.apache.ignite.internal.processors.port.GridPortRecord; +import org.apache.ignite.internal.processors.query.GridQueryProcessor; import org.apache.ignite.internal.util.GridMultiCollectionWrapper; -import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.internal.util.IgniteUtils; import org.apache.ignite.internal.util.future.CountDownFuture; +import org.apache.ignite.internal.util.future.GridCompoundFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.lang.GridInClosure3X; import org.apache.ignite.internal.util.tostring.GridToStringInclude; @@ -153,24 +156,28 @@ import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.util.worker.GridWorker; +import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteOutClosure; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.mxbean.DataStorageMetricsMXBean; import org.apache.ignite.thread.IgniteThread; -import org.apache.ignite.thread.IgniteTaskTrackingThreadPoolExecutor; +import org.apache.ignite.thread.IgniteThreadPoolExecutor; +import org.apache.ignite.transactions.TransactionState; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; import 
org.jsr166.ConcurrentLinkedHashMap; import static java.nio.file.StandardOpenOption.READ; -import static org.apache.ignite.IgniteSystemProperties.IGNITE_PDS_SKIP_CRC; +import static org.apache.ignite.IgniteSystemProperties.IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT; import static org.apache.ignite.IgniteSystemProperties.IGNITE_PDS_WAL_REBALANCE_THRESHOLD; import static org.apache.ignite.failure.FailureType.CRITICAL_ERROR; +import static org.apache.ignite.failure.FailureType.SYSTEM_CRITICAL_OPERATION_TIMEOUT; import static org.apache.ignite.failure.FailureType.SYSTEM_WORKER_TERMINATION; import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.CHECKPOINT_RECORD; -import static org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage.METASTORAGE_CACHE_ID; +import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.METASTORE_DATA_RECORD; +import static org.apache.ignite.internal.util.IgniteUtils.checkpointBufferSize; /** * @@ -180,24 +187,15 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan /** */ public static final String IGNITE_PDS_CHECKPOINT_TEST_SKIP_SYNC = "IGNITE_PDS_CHECKPOINT_TEST_SKIP_SYNC"; - /** MemoryPolicyConfiguration name reserved for meta store. */ - public static final String METASTORE_DATA_REGION_NAME = "metastoreMemPlc"; - /** */ - private static final long GB = 1024L * 1024 * 1024; - - /** Minimum checkpointing page buffer size (may be adjusted by Ignite). */ - public static final Long DFLT_MIN_CHECKPOINTING_PAGE_BUFFER_SIZE = GB / 4; + public static final String IGNITE_PDS_SKIP_CHECKPOINT_ON_NODE_STOP = "IGNITE_PDS_SKIP_CHECKPOINT_ON_NODE_STOP"; - /** Default minimum checkpointing page buffer size (may be adjusted by Ignite). */ - public static final Long DFLT_MAX_CHECKPOINTING_PAGE_BUFFER_SIZE = 2 * GB; + /** MemoryPolicyConfiguration name reserved for meta store. 
*/ + public static final String METASTORE_DATA_REGION_NAME = "metastoreMemPlc"; /** Skip sync. */ private final boolean skipSync = IgniteSystemProperties.getBoolean(IGNITE_PDS_CHECKPOINT_TEST_SKIP_SYNC); - /** */ - private boolean skipCrc = IgniteSystemProperties.getBoolean(IGNITE_PDS_SKIP_CRC, false); - /** */ private final int walRebalanceThreshold = IgniteSystemProperties.getInteger( IGNITE_PDS_WAL_REBALANCE_THRESHOLD, 500_000); @@ -206,6 +204,9 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan private final String throttlingPolicyOverride = IgniteSystemProperties.getString( IgniteSystemProperties.IGNITE_OVERRIDE_WRITE_THROTTLING_ENABLED); + /** */ + private final boolean skipCheckpointOnNodeStop = IgniteSystemProperties.getBoolean(IGNITE_PDS_SKIP_CHECKPOINT_ON_NODE_STOP, false); + /** Checkpoint lock hold count. */ private static final ThreadLocal CHECKPOINT_LOCK_HOLD_COUNT = new ThreadLocal() { @Override protected Integer initialValue() { @@ -219,9 +220,6 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan /** Checkpoint file name pattern. */ public static final Pattern CP_FILE_NAME_PATTERN = Pattern.compile("(\\d+)-(.*)-(START|END)\\.bin"); - /** Node started file suffix. */ - public static final String NODE_STARTED_FILE_NAME_SUFFIX = "-node-started.bin"; - /** */ private static final String MBEAN_NAME = "DataStorageMetrics"; @@ -256,6 +254,9 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan /** Checkpoint thread. Needs to be volatile because it is created in exchange worker. */ private volatile Checkpointer checkpointer; + /** Checkpointer thread instance. */ + private volatile IgniteThread checkpointerThread; + /** For testing only. 
*/ private volatile boolean checkpointsEnabled = true; @@ -263,7 +264,7 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan private volatile GridFutureAdapter enableChangeApplied; /** */ - private ReentrantReadWriteLock checkpointLock = new ReentrantReadWriteLock(); + ReentrantReadWriteLock checkpointLock = new ReentrantReadWriteLock(); /** */ private long checkpointFreq; @@ -289,8 +290,16 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan /** */ private boolean stopping; + /** + * The position of the last seen WAL pointer. Used for resuming logging from this pointer. + * + * If binary memory recovery is performed on node start, the checkpoint END pointer will not store + * the last WAL pointer and can't be used for resuming logging. + */ + private volatile WALPointer walTail; + /** Checkpoint runner thread pool. If null tasks are to be run in single thread */ - @Nullable private IgniteTaskTrackingThreadPoolExecutor asyncRunner; + @Nullable private IgniteThreadPoolExecutor asyncRunner; /** Thread local with buffers for the checkpoint threads. Each buffer represent one page for durable memory. */ private ThreadLocal threadBuf; @@ -322,9 +331,6 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan /** */ private DataStorageMetricsImpl persStoreMetrics; - /** */ - private ObjectName persistenceMetricsMbeanName; - /** Counter for written checkpoint pages. Not null only if checkpoint is running. */ private volatile AtomicInteger writtenPagesCntr = null; @@ -337,7 +343,10 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan /** Number of pages in current checkpoint at the beginning of checkpoint. */ private volatile int currCheckpointPagesCnt; - /** */ + /** + * MetaStorage instance. Value {@code null} means storage not initialized yet.
+ * Guarded by {@link GridCacheDatabaseSharedManager#checkpointReadLock()} + */ private MetaStorage metaStorage; /** */ @@ -351,6 +360,17 @@ public class GridCacheDatabaseSharedManager extends IgniteCacheDatabaseSharedMan /** File I/O factory for writing checkpoint markers. */ private final FileIOFactory ioFactory; + + /** Timeout for checkpoint read lock acquisition in milliseconds. */ + private volatile long checkpointReadLockTimeout; + + /** Flag that allows logging additional information about partitions during recovery phases. */ + private final boolean recoveryVerboseLogging = IgniteSystemProperties.getBoolean( + IgniteSystemProperties.IGNITE_RECOVERY_VERBOSE_LOGGING, true); + + /** Pointer to a memory recovery record that should be included in the next checkpoint record. */ + private volatile WALPointer memoryRecoveryRecordPtr; + /** * @param ctx Kernal context. */ @@ -376,6 +396,17 @@ public GridCacheDatabaseSharedManager(GridKernalContext ctx) { ); ioFactory = persistenceCfg.getFileIOFactory(); + + Long cfgCheckpointReadLockTimeout = ctx.config().getDataStorageConfiguration() != null + ? ctx.config().getDataStorageConfiguration().getCheckpointReadLockTimeout() + : null; + + checkpointReadLockTimeout = IgniteSystemProperties.getLong(IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT, + cfgCheckpointReadLockTimeout != null + ? cfgCheckpointReadLockTimeout + : (ctx.workersRegistry() != null + ? ctx.workersRegistry().getSystemWorkerBlockedTimeout() + : ctx.config().getFailureDetectionTimeout())); } /** */ @@ -397,6 +428,15 @@ public Checkpointer getCheckpointer() { return checkpointer; } + /** + * For test use only. + * + * @return Checkpointer thread instance. + */ + public IgniteThread checkpointerThread() { + return checkpointerThread; + } + /** * For test use only.
*/ @@ -418,7 +458,7 @@ public IgniteInternalFuture enableCheckpoints(boolean enable) { addDataRegion( memCfg, - createDataRegionConfiguration(memCfg), + createMetastoreDataRegionConfig(memCfg), false ); @@ -426,16 +466,19 @@ public IgniteInternalFuture enableCheckpoints(boolean enable) { } /** + * Create metastorage data region configuration with enabled persistence by default. + * * @param storageCfg Data storage configuration. * @return Data region configuration. */ - private DataRegionConfiguration createDataRegionConfiguration(DataStorageConfiguration storageCfg) { + private DataRegionConfiguration createMetastoreDataRegionConfig(DataStorageConfiguration storageCfg) { DataRegionConfiguration cfg = new DataRegionConfiguration(); cfg.setName(METASTORE_DATA_REGION_NAME); cfg.setInitialSize(storageCfg.getSystemRegionInitialSize()); cfg.setMaxSize(storageCfg.getSystemRegionMaxSize()); cfg.setPersistenceEnabled(true); + return cfg; } @@ -459,6 +502,8 @@ private DataRegionConfiguration createDataRegionConfiguration(DataStorageConfigu final GridKernalContext kernalCtx = cctx.kernalContext(); if (!kernalCtx.clientNode()) { + kernalCtx.internalSubscriptionProcessor().registerDatabaseListener(new MetastorageRecoveryLifecycle()); + checkpointer = new Checkpointer(cctx.igniteInstanceName(), "db-checkpoint-thread", log); cpHistory = new CheckpointHistory(kernalCtx); @@ -478,15 +523,7 @@ private DataRegionConfiguration createDataRegionConfiguration(DataStorageConfigu .resolveFolders() .getLockedFileLockHolder(); - fileLockHolder = preLocked == null ? 
- new FileLockHolder(storeMgr.workDir().getPath(), kernalCtx, log) : preLocked; - - if (log.isDebugEnabled()) - log.debug("Try to capture file lock [nodeId=" + - cctx.localNodeId() + " path=" + fileLockHolder.lockPath() + "]"); - - if (!fileLockHolder.isLocked()) - fileLockHolder.tryLock(lockWaitTime); + acquireFileLock(preLocked); cleanupTempCheckpointDirectory(); @@ -512,10 +549,48 @@ private DataRegionConfiguration createDataRegionConfiguration(DataStorageConfigu } } - /** - * Cleanup checkpoint directory. - */ + /** {@inheritDoc} */ + @Override public void cleanupRestoredCaches() { + if (dataRegionMap.isEmpty()) + return; + + for (CacheGroupDescriptor grpDesc : cctx.cache().cacheGroupDescriptors().values()) { + String regionName = grpDesc.config().getDataRegionName(); + + DataRegion region = regionName != null ? dataRegionMap.get(regionName) : dfltDataRegion; + + if (region == null) + continue; + + if (log.isInfoEnabled()) + log.info("Page memory " + region.config().getName() + " for " + grpDesc + " has been invalidated."); + + int partitions = grpDesc.config().getAffinity().partitions(); + + if (region.pageMemory() instanceof PageMemoryEx) { + PageMemoryEx memEx = (PageMemoryEx)region.pageMemory(); + + for (int partId = 0; partId < partitions; partId++) + memEx.invalidate(grpDesc.groupId(), partId); + + memEx.invalidate(grpDesc.groupId(), PageIdAllocator.INDEX_PARTITION); + } + } + + storeMgr.cleanupPageStoreIfMatch( + new Predicate() { + @Override public boolean test(Integer grpId) { + return MetaStorage.METASTORAGE_CACHE_ID != grpId; + } + }, + true); + } + + /** {@inheritDoc} */ @Override public void cleanupCheckpointDirectory() throws IgniteCheckedException { + if (cpHistory != null) + cpHistory = new CheckpointHistory(cctx.kernalContext()); + try { try (DirectoryStream files = Files.newDirectoryStream(cpDir.toPath())) { for (Path path : files)
@param preLocked Pre-locked file lock holder. + */ + private void acquireFileLock(FileLockHolder preLocked) throws IgniteCheckedException { + if (cctx.kernalContext().clientNode()) + return; + + fileLockHolder = preLocked == null ? + new FileLockHolder(storeMgr.workDir().getPath(), cctx.kernalContext(), log) : preLocked; + + if (!fileLockHolder.isLocked()) { + if (log.isDebugEnabled()) + log.debug("Try to capture file lock [nodeId=" + + cctx.localNodeId() + " path=" + fileLockHolder.lockPath() + "]"); + + fileLockHolder.tryLock(lockWaitTime); + } + } + + /** + * + */ + private void releaseFileLock() { + if (cctx.kernalContext().clientNode() || fileLockHolder == null) + return; + + if (log.isDebugEnabled()) + log.debug("Release file lock [nodeId=" + + cctx.localNodeId() + " path=" + fileLockHolder.lockPath() + "]"); + + fileLockHolder.close(); + } + /** * Retreives checkpoint history form specified {@code dir}. * @@ -610,52 +718,36 @@ private void removeCheckpointFiles(CheckpointEntry cpEntry) throws IgniteChecked /** */ private void readMetastore() throws IgniteCheckedException { try { - DataStorageConfiguration memCfg = cctx.kernalContext().config().getDataStorageConfiguration(); - - DataRegionConfiguration plcCfg = createDataRegionConfiguration(memCfg); - - File allocPath = buildAllocPath(plcCfg); - - DirectMemoryProvider memProvider = allocPath == null ? 
- new UnsafeMemoryProvider(log) : - new MappedFileMemoryProvider( - log, - allocPath); - - DataRegionMetricsImpl memMetrics = new DataRegionMetricsImpl(plcCfg); - - PageMemoryEx storePageMem = (PageMemoryEx)createPageMemory(memProvider, memCfg, plcCfg, memMetrics, false); - - DataRegion regCfg = new DataRegion(storePageMem, plcCfg, memMetrics, createPageEvictionTracker(plcCfg, storePageMem)); - CheckpointStatus status = readCheckpointStatus(); - cctx.pageStore().initializeForMetastorage(); - - storePageMem.start(); - checkpointReadLock(); try { - restoreMemory(status, true, storePageMem); + dataRegion(METASTORE_DATA_REGION_NAME).pageMemory().start(); - metaStorage = new MetaStorage(cctx, regCfg, memMetrics, true); + performBinaryMemoryRestore(status, onlyMetastorageGroup(), physicalRecords(), false); - metaStorage.init(this); + metaStorage = createMetastorage(true); - applyLastUpdates(status, true); + applyLogicalUpdates(status, onlyMetastorageGroup(), onlyMetastorageRecords(), false); fillWalDisabledGroups(); notifyMetastorageReadyForRead(); } finally { - checkpointReadUnlock(); - } + metaStorage = null; - metaStorage = null; + dataRegion(METASTORE_DATA_REGION_NAME).pageMemory().stop(false); - storePageMem.stop(); + cctx.pageStore().cleanupPageStoreIfMatch(new Predicate() { + @Override public boolean test(Integer grpId) { + return MetaStorage.METASTORAGE_CACHE_ID == grpId; + } + }, false); + + checkpointReadUnlock(); + } } catch (StorageException e) { cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); @@ -664,30 +756,6 @@ private void readMetastore() throws IgniteCheckedException { } } - /** - * Get checkpoint buffer size for the given configuration. - * - * @param regCfg Configuration. - * @return Checkpoint buffer size. 
- */ - public static long checkpointBufferSize(DataRegionConfiguration regCfg) { - if (!regCfg.isPersistenceEnabled()) - return 0L; - - long res = regCfg.getCheckpointPageBufferSize(); - - if (res == 0L) { - if (regCfg.getMaxSize() < GB) - res = Math.min(DFLT_MIN_CHECKPOINTING_PAGE_BUFFER_SIZE, regCfg.getMaxSize()); - else if (regCfg.getMaxSize() < 8 * GB) - res = regCfg.getMaxSize() / 4; - else - res = DFLT_MAX_CHECKPOINTING_PAGE_BUFFER_SIZE; - } - - return res; - } - /** {@inheritDoc} */ @Override public void onActivate(GridKernalContext ctx) throws IgniteCheckedException { if (log.isDebugEnabled()) @@ -696,16 +764,16 @@ else if (regCfg.getMaxSize() < 8 * GB) snapshotMgr = cctx.snapshot(); - if (!cctx.localNode().isClient()) { - initDataBase(); - - registrateMetricsMBean(); - } - - if (checkpointer == null) + if (!cctx.kernalContext().clientNode() && checkpointer == null) checkpointer = new Checkpointer(cctx.igniteInstanceName(), "db-checkpoint-thread", log); super.onActivate(ctx); + + if (!cctx.kernalContext().clientNode()) { + initializeCheckpointPool(); + + finishRecovery(); + } } /** {@inheritDoc} */ @@ -725,60 +793,29 @@ else if (regCfg.getMaxSize() < 8 * GB) /** * */ - private void initDataBase() { + private void initializeCheckpointPool() { if (persistenceCfg.getCheckpointThreads() > 1) - asyncRunner = new IgniteTaskTrackingThreadPoolExecutor( + asyncRunner = new IgniteThreadPoolExecutor( CHECKPOINT_RUNNER_THREAD_PREFIX, cctx.igniteInstanceName(), persistenceCfg.getCheckpointThreads(), persistenceCfg.getCheckpointThreads(), - 30_000, // A value is ignored if corePoolSize equals to maxPoolSize - new LinkedBlockingQueue(), - GridIoPolicy.UNDEFINED, - cctx.kernalContext().uncaughtExceptionHandler() + 30_000, + new LinkedBlockingQueue() ); } - /** - * Try to register Metrics MBean. - * - * @throws IgniteCheckedException If failed. 
- */ - private void registrateMetricsMBean() throws IgniteCheckedException { - if (U.IGNITE_MBEANS_DISABLED) - return; - - try { - persistenceMetricsMbeanName = U.registerMBean( - cctx.kernalContext().config().getMBeanServer(), - cctx.kernalContext().igniteInstanceName(), - MBEAN_GROUP, - MBEAN_NAME, - persStoreMetrics, - DataStorageMetricsMXBean.class); - } - catch (Throwable e) { - throw new IgniteCheckedException("Failed to register " + MBEAN_NAME + " MBean.", e); - } - } - - /** - * Unregister metrics MBean. - */ - private void unRegistrateMetricsMBean() { - if (persistenceMetricsMbeanName == null) - return; - - assert !U.IGNITE_MBEANS_DISABLED; - - try { - cctx.kernalContext().config().getMBeanServer().unregisterMBean(persistenceMetricsMbeanName); - - persistenceMetricsMbeanName = null; - } - catch (Throwable e) { - U.error(log, "Failed to unregister " + MBEAN_NAME + " MBean.", e); - } + /** {@inheritDoc} */ + @Override protected void registerMetricsMBeans(IgniteConfiguration cfg) { + super.registerMetricsMBeans(cfg); + + registerMetricsMBean( + cctx.kernalContext().config(), + MBEAN_GROUP, + MBEAN_NAME, + persStoreMetrics, + DataStorageMetricsMXBean.class + ); } /** {@inheritDoc} */ @@ -806,11 +843,14 @@ private void unRegistrateMetricsMBean() { }; } - /** {@inheritDoc} */ - @Override public void readCheckpointAndRestoreMemory( - List cachesToStart - ) throws IgniteCheckedException { - assert !cctx.localNode().isClient(); + /** + * Restores last valid WAL pointer and resumes logging from that pointer. + * Re-creates metastorage if needed. + * + * @throws IgniteCheckedException If failed. 
+ */ + private void finishRecovery() throws IgniteCheckedException { + assert !cctx.kernalContext().clientNode(); long time = System.currentTimeMillis(); @@ -818,50 +858,26 @@ private void unRegistrateMetricsMBean() { try { for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) - lsnr.beforeMemoryRestore(this); + lsnr.beforeResumeWalLogging(this); - if (!F.isEmpty(cachesToStart)) { - for (DynamicCacheDescriptor desc : cachesToStart) { - if (CU.affinityNode(cctx.localNode(), desc.cacheConfiguration().getNodeFilter())) - storeMgr.initializeForCache(desc.groupDescriptor(), new StoredCacheData(desc.cacheConfiguration())); - } - } - - CheckpointStatus status = readCheckpointStatus(); - - cctx.pageStore().initializeForMetastorage(); - - metaStorage = new MetaStorage( - cctx, - dataRegionMap.get(METASTORE_DATA_REGION_NAME), - (DataRegionMetricsImpl)memMetricsMap.get(METASTORE_DATA_REGION_NAME) - ); + // Try to resume logging since last finished checkpoint if possible. + if (walTail == null) { + CheckpointStatus status = readCheckpointStatus(); - WALPointer restore = restoreMemory(status); - - if (restore == null && !status.endPtr.equals(CheckpointStatus.NULL_PTR)) { - throw new StorageException("Restore wal pointer = " + restore + ", while status.endPtr = " + - status.endPtr + ". Can't restore memory - critical part of WAL archive is missing."); + walTail = CheckpointStatus.NULL_PTR.equals(status.endPtr) ? null : status.endPtr; } - // First, bring memory to the last consistent checkpoint state if needed. - // This method should return a pointer to the last valid record in the WAL. - cctx.wal().resumeLogging(restore); + cctx.wal().resumeLogging(walTail); - WALPointer ptr = cctx.wal().log(new MemoryRecoveryRecord(U.currentTimeMillis())); + walTail = null; - if (ptr != null) { - cctx.wal().flush(ptr, true); - - nodeStart(ptr); - } - - metaStorage.init(this); + // Recreate metastorage to refresh page memory state after deactivation. 
+ if (metaStorage == null) + metaStorage = createMetastorage(false); notifyMetastorageReadyForReadWrite(); - for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) - lsnr.afterMemoryRestore(this); + U.log(log, "Finish recovery performed in " + (System.currentTimeMillis() - time) + " ms."); } catch (IgniteCheckedException e) { if (X.hasCause(e, StorageException.class, IOException.class)) @@ -871,103 +887,88 @@ private void unRegistrateMetricsMBean() { } finally { checkpointReadUnlock(); - - if (log.isInfoEnabled()) - log.info("Binary recovery performed in " + (System.currentTimeMillis() - time) + " ms."); } } /** - * Creates file with current timestamp and specific "node-started.bin" suffix - * and writes into memory recovery pointer. - * - * @param ptr Memory recovery wal pointer. + * @param readOnly Metastorage read-only mode. + * @return Instance of Metastorage. + * @throws IgniteCheckedException If failed to create metastorage. */ - private void nodeStart(WALPointer ptr) throws IgniteCheckedException { - FileWALPointer p = (FileWALPointer)ptr; - - String fileName = U.currentTimeMillis() + NODE_STARTED_FILE_NAME_SUFFIX; - String tmpFileName = fileName + FilePageStoreManager.TMP_SUFFIX; - - ByteBuffer buf = ByteBuffer.allocate(FileWALPointer.POINTER_SIZE); - buf.order(ByteOrder.nativeOrder()); - - try { - try (FileIO io = ioFactory.create(Paths.get(cpDir.getAbsolutePath(), tmpFileName).toFile(), - StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE)) { - buf.putLong(p.index()); - - buf.putInt(p.fileOffset()); - - buf.putInt(p.length()); - - buf.flip(); + private MetaStorage createMetastorage(boolean readOnly) throws IgniteCheckedException { + cctx.pageStore().initializeForMetastorage(); - io.writeFully(buf); - - buf.clear(); + MetaStorage storage = new MetaStorage( + cctx, + dataRegion(METASTORE_DATA_REGION_NAME), + (DataRegionMetricsImpl) memMetricsMap.get(METASTORE_DATA_REGION_NAME), + readOnly + ); - io.force(true); - } + 
storage.init(this); - Files.move(Paths.get(cpDir.getAbsolutePath(), tmpFileName), Paths.get(cpDir.getAbsolutePath(), fileName)); - } - catch (IOException e) { - throw new StorageException("Failed to write node start marker: " + ptr, e); - } + return storage; } /** - * Collects memory recovery pointers from node started files. See {@link #nodeStart(WALPointer)}. - * Each pointer associated with timestamp extracted from file. - * Tuples are sorted by timestamp. - * - * @return Sorted list of tuples (node started timestamp, memory recovery pointer). - * + * @param cacheGroupsPredicate Cache groups to restore. + * @param recordTypePredicate Filter records by type. + * @return Last seen WAL pointer during binary memory recovery. * @throws IgniteCheckedException If failed. */ - public List> nodeStartedPointers() throws IgniteCheckedException { - List> res = new ArrayList<>(); + private RestoreBinaryState restoreBinaryMemory( + IgnitePredicate cacheGroupsPredicate, + IgniteBiPredicate recordTypePredicate + ) throws IgniteCheckedException { + long time = System.currentTimeMillis(); - try (DirectoryStream nodeStartedFiles = Files.newDirectoryStream( - cpDir.toPath(), - path -> path.toFile().getName().endsWith(NODE_STARTED_FILE_NAME_SUFFIX)) - ) { - ByteBuffer buf = ByteBuffer.allocate(FileWALPointer.POINTER_SIZE); - buf.order(ByteOrder.nativeOrder()); + try { + log.info("Starting binary memory restore for: " + cctx.cache().cacheGroupDescriptors().keySet()); - for (Path path : nodeStartedFiles) { - File f = path.toFile(); + for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) + lsnr.beforeBinaryMemoryRestore(this); - String name = f.getName(); + CheckpointStatus status = readCheckpointStatus(); - Long ts = Long.valueOf(name.substring(0, name.length() - NODE_STARTED_FILE_NAME_SUFFIX.length())); + // First, bring memory to the last consistent checkpoint state if needed. + // This method should return a pointer to the last valid record in the WAL. 
+ RestoreBinaryState binaryState = performBinaryMemoryRestore( + status, + cacheGroupsPredicate, + recordTypePredicate, + true + ); - try (FileIO io = ioFactory.create(f, READ)) { - io.readFully(buf); + WALPointer restored = binaryState.lastReadRecordPointer().map(FileWALPointer::next).orElse(null); - buf.flip(); + if (restored == null && !status.endPtr.equals(CheckpointStatus.NULL_PTR)) { + throw new StorageException("The memory cannot be restored. The critical part of WAL archive is missing " + + "[tailWalPtr=" + restored + ", endPtr=" + status.endPtr + ']'); + } + else if (restored != null) + U.log(log, "Binary memory state restored at node startup [restoredPtr=" + restored + ']'); - FileWALPointer ptr = new FileWALPointer( - buf.getLong(), buf.getInt(), buf.getInt()); + // Wal logging is now available. + cctx.wal().resumeLogging(restored); - res.add(new T2<>(ts, ptr)); + // Log MemoryRecoveryRecord to make sure that old physical records are not replayed during + // next physical recovery. + memoryRecoveryRecordPtr = cctx.wal().log(new MemoryRecoveryRecord(U.currentTimeMillis())); - buf.clear(); - } - catch (IOException e) { - throw new StorageException("Failed to read node started marker file: " + f.getAbsolutePath(), e); - } - } - } - catch (IOException e) { - throw new StorageException("Failed to retreive node started files.", e); - } + for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) + lsnr.afterBinaryMemoryRestore(this, binaryState); - // Sort start markers by file timestamp. 
- res.sort(Comparator.comparingLong(IgniteBiTuple::get1)); + if (log.isInfoEnabled()) + log.info("Binary recovery performed in " + (System.currentTimeMillis() - time) + " ms."); - return res; + return binaryState; + } + catch (IgniteCheckedException e) { + if (X.hasCause(e, StorageException.class, IOException.class)) + cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); + + throw e; + } } /** {@inheritDoc} */ @@ -987,22 +988,20 @@ public List> nodeStartedPointers() throws IgniteCheckedExce super.onKernalStop0(cancel); - unRegistrateMetricsMBean(); + unregisterMetricsMBean( + cctx.gridConfig(), + MBEAN_GROUP, + MBEAN_NAME + ); + + metaStorage = null; } /** {@inheritDoc} */ @Override protected void stop0(boolean cancel) { super.stop0(cancel); - if (!cctx.kernalContext().clientNode()) { - if (fileLockHolder != null) { - if (log.isDebugEnabled()) - log.debug("Release file lock [nodeId=" + - cctx.localNodeId() + " path=" + fileLockHolder.lockPath() + "]"); - - fileLockHolder.close(); - } - } + releaseFileLock(); } /** */ @@ -1079,11 +1078,11 @@ private long[] calculateFragmentSizes(int concLvl, long cacheSize, long chpBufSi (fullId, pageBuf, tag) -> { memMetrics.onPageWritten(); - // First of all, write page to disk. - storeMgr.write(fullId.groupId(), fullId.pageId(), pageBuf, tag); + // We can write only page from disk into snapshot. + snapshotMgr.beforePageWrite(fullId); - // Only after write we can write page into snapshot. - snapshotMgr.flushDirtyPageHandler(fullId, pageBuf, tag); + // Write page to disk. 
+ storeMgr.write(fullId.groupId(), fullId.pageId(), pageBuf, tag); AtomicInteger cntr = evictedPagesCntr; @@ -1122,8 +1121,8 @@ private long[] calculateFragmentSizes(int concLvl, long cacheSize, long chpBufSi checkPointBufferIdxCnt.set(chunkSizes.length); } - @Override public void shutdown() { - memProvider.shutdown(); + @Override public void shutdown(boolean deallocate) { + memProvider.shutdown(deallocate); } @Override public DirectMemoryRegion nextRegion() { @@ -1160,8 +1159,9 @@ private long[] calculateFragmentSizes(int concLvl, long cacheSize, long chpBufSi plc = PageMemoryImpl.ThrottlingPolicy.valueOf(throttlingPolicyOverride.toUpperCase()); } catch (IllegalArgumentException e) { - log.error("Incorrect value of IGNITE_OVERRIDE_WRITE_THROTTLING_ENABLED property: " + - throttlingPolicyOverride + ". Default throttling policy " + plc + " will be used."); + log.error("Incorrect value of IGNITE_OVERRIDE_WRITE_THROTTLING_ENABLED property. " + + "The default throttling policy will be used [plc=" + throttlingPolicyOverride + + ", defaultPlc=" + plc + ']'); } } return plc; @@ -1173,8 +1173,8 @@ private long[] calculateFragmentSizes(int concLvl, long cacheSize, long chpBufSi if (!regCfg.isPersistenceEnabled()) super.checkRegionEvictionProperties(regCfg, dbCfg); else if (regCfg.getPageEvictionMode() != DataPageEvictionMode.DISABLED) { - U.warn(log, "Page eviction mode set for [" + regCfg.getName() + "] data will have no effect" + - " because the oldest pages are evicted automatically if Ignite persistence is enabled."); + U.warn(log, "Page eviction mode will have no effect because the oldest pages are evicted automatically " + + "if Ignite persistence is enabled: " + regCfg.getName()); } } @@ -1294,34 +1294,37 @@ private void shutdownCheckpointer(boolean cancel) { } /** {@inheritDoc} */ - @Override public boolean beforeExchange(GridDhtPartitionsExchangeFuture fut) throws IgniteCheckedException { - DiscoveryEvent discoEvt = fut.firstEvent(); - - boolean joinEvt = 
discoEvt.type() == EventType.EVT_NODE_JOINED; - - boolean locNode = discoEvt.eventNode().isLocal(); - - boolean isSrvNode = !cctx.kernalContext().clientNode(); + @Override public void beforeExchange(GridDhtPartitionsExchangeFuture fut) throws IgniteCheckedException { + // Try to restore partition states. + if (fut.localJoinExchange() || fut.activateCluster() + || (fut.exchangeActions() != null && !F.isEmpty(fut.exchangeActions().cacheGroupsToStart()))) { + U.doInParallel( + cctx.kernalContext().getSystemExecutorService(), + cctx.cache().cacheGroups(), + cacheGroup -> { + if (cacheGroup.isLocal()) + return null; - boolean clusterInTransitionStateToActive = fut.activateCluster(); + cctx.database().checkpointReadLock(); - boolean restored = false; + try { + cacheGroup.offheap().restorePartitionStates(Collections.emptyMap()); - long time = System.currentTimeMillis(); + if (cacheGroup.localStartVersion().equals(fut.initialVersion())) + cacheGroup.topology().afterStateRestored(fut.initialVersion()); - // In case of cluster activation or local join restore, restore whole manager state. - if (clusterInTransitionStateToActive || (joinEvt && locNode && isSrvNode)) { - restoreState(); + fut.timeBag().finishLocalStage("Restore partition states " + + "[grp=" + cacheGroup.cacheOrGroupName() + "]"); + } + finally { + cctx.database().checkpointReadUnlock(); + } - restored = true; - } - // In case of starting groups, restore partition states only for these groups. 
- else if (fut.exchangeActions() != null && !F.isEmpty(fut.exchangeActions().cacheGroupsToStart())) { - Set restoreGroups = fut.exchangeActions().cacheGroupsToStart().stream() - .map(actionData -> actionData.descriptor().groupId()) - .collect(Collectors.toSet()); + return null; + } + ); - restorePartitionStates(Collections.emptyMap(), restoreGroups); + fut.timeBag().finishGlobalStage("Restore partition states"); } if (cctx.kernalContext().query().moduleEnabled()) { @@ -1338,11 +1341,6 @@ else if (acts.localJoinContext() != null && !F.isEmpty(acts.localJoinContext().c } } } - - if (log.isInfoEnabled()) - log.info("Logical recovery performed in " + (System.currentTimeMillis() - time) + " ms."); - - return restored; } /** @@ -1362,16 +1360,19 @@ private void prepareIndexRebuildFuture(int cacheId) { /** {@inheritDoc} */ @Override public void rebuildIndexesIfNeeded(GridDhtPartitionsExchangeFuture fut) { - if (cctx.kernalContext().query().moduleEnabled()) { + GridQueryProcessor qryProc = cctx.kernalContext().query(); + + if (qryProc.moduleEnabled()) { for (final GridCacheContext cacheCtx : (Collection)cctx.cacheContexts()) { if (cacheCtx.startTopologyVersion().equals(fut.initialVersion())) { final int cacheId = cacheCtx.cacheId(); final GridFutureAdapter usrFut = idxRebuildFuts.get(cacheId); - if (!cctx.pageStore().hasIndexStore(cacheCtx.groupId()) && cacheCtx.affinityNode() - && cacheCtx.group().persistenceEnabled()) { - IgniteInternalFuture rebuildFut = cctx.kernalContext().query() - .rebuildIndexesFromHash(Collections.singleton(cacheCtx.cacheId())); + IgniteInternalFuture rebuildFut = qryProc.rebuildIndexesFromHash(cacheCtx); + + if (rebuildFut != null) { + log().info("Started indexes rebuilding for cache [name=" + cacheCtx.name() + + ", grpName=" + cacheCtx.group().name() + ']'); assert usrFut != null : "Missing user future for cache: " + cacheCtx.name(); @@ -1428,6 +1429,9 @@ private void prepareIndexRebuildFuture(int cacheId) { grpIds.add(tup.get1().groupId()); 
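The reworked beforeExchange above restores partition states for all cache groups in parallel through Ignite's `U.doInParallel` helper on the system pool. A self-contained sketch of that fan-out/join shape, using a plain `ExecutorService` (names are illustrative; the real helper also aggregates exceptions and splits work into batches):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.function.Consumer;

/** One task per item on the given pool; wait for all of them and
 *  surface the first failure - the shape of "Restore partition states". */
class ParallelRestore {
    static <T> void doInParallel(ExecutorService pool, Collection<T> items, Consumer<T> body) {
        List<Callable<Void>> tasks = new ArrayList<>();

        for (T item : items)
            tasks.add(() -> { body.accept(item); return null; });

        try {
            // invokeAll blocks until every task has completed; get() rethrows failures.
            for (Future<Void> fut : pool.invokeAll(tasks))
                fut.get();
        }
        catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException("Failed to restore partition states", e);
        }
    }
}
```

Note that in the hunk each task brackets its own work with checkpointReadLock()/checkpointReadUnlock(); that pair must stay inside the task body, since the lock is acquired per executing thread.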
pageMem.onCacheGroupDestroyed(tup.get1().groupId()); + + if (tup.get2()) + cctx.kernalContext().encryption().onCacheGroupDestroyed(gctx.groupId()); } Collection> clearFuts = new ArrayList<>(destroyed.size()); @@ -1451,14 +1455,12 @@ private void prepareIndexRebuildFuture(int cacheId) { for (IgniteBiTuple tup : stoppedGrps) { CacheGroupContext grp = tup.get1(); - if (grp.affinityNode()) { - try { - cctx.pageStore().shutdownForCacheGroup(grp, tup.get2()); - } - catch (IgniteCheckedException e) { - U.error(log, "Failed to gracefully clean page store resources for destroyed cache " + - "[cache=" + grp.cacheOrGroupName() + "]", e); - } + try { + cctx.pageStore().shutdownForCacheGroup(grp, tup.get2()); + } + catch (IgniteCheckedException e) { + U.error(log, "Failed to gracefully clean page store resources for destroyed cache " + + "[cache=" + grp.cacheOrGroupName() + "]", e); } } } @@ -1474,53 +1476,64 @@ private void prepareIndexRebuildFuture(int cacheId) { if (checkpointLock.writeLock().isHeldByCurrentThread()) return; - long timeout = cctx.gridConfig().getFailureDetectionTimeout(); + long timeout = checkpointReadLockTimeout; long start = U.currentTimeMillis(); - long passed; boolean interruped = false; try { for (; ; ) { - if ((passed = U.currentTimeMillis() - start) >= timeout) - failCheckpointReadLock(); - try { - if (!checkpointLock.readLock().tryLock(timeout - passed, TimeUnit.MILLISECONDS)) + if (timeout > 0 && (U.currentTimeMillis() - start) >= timeout) failCheckpointReadLock(); - } - catch (InterruptedException e) { - interruped = true; - continue; - } + try { + if (timeout > 0) { + if (!checkpointLock.readLock().tryLock(timeout - (U.currentTimeMillis() - start), + TimeUnit.MILLISECONDS)) + failCheckpointReadLock(); + } + else + checkpointLock.readLock().lock(); + } + catch (InterruptedException e) { + interruped = true; - if (stopping) { - checkpointLock.readLock().unlock(); + continue; + } - throw new IgniteException(new NodeStoppingException("Failed to 
perform cache update: node is stopping.")); - } + if (stopping) { + checkpointLock.readLock().unlock(); - if (checkpointLock.getReadHoldCount() > 1 || safeToUpdatePageMemories()) - break; - else { - checkpointLock.readLock().unlock(); + throw new IgniteException(new NodeStoppingException("Failed to perform cache update: node is stopping.")); + } - if (U.currentTimeMillis() - start >= timeout) - failCheckpointReadLock(); + if (checkpointLock.getReadHoldCount() > 1 || safeToUpdatePageMemories()) + break; + else { + checkpointLock.readLock().unlock(); - try { - checkpointer.wakeupForCheckpoint(0, "too many dirty pages").cpBeginFut - .getUninterruptibly(); - } - catch (IgniteFutureTimeoutCheckedException e) { - failCheckpointReadLock(); - } - catch (IgniteCheckedException e) { - throw new IgniteException("Failed to wait for checkpoint begin.", e); + if (timeout > 0 && U.currentTimeMillis() - start >= timeout) + failCheckpointReadLock(); + + try { + checkpointer.wakeupForCheckpoint(0, "too many dirty pages").cpBeginFut + .getUninterruptibly(); + } + catch (IgniteFutureTimeoutCheckedException e) { + failCheckpointReadLock(); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to wait for checkpoint begin.", e); + } } } + catch (CheckpointReadLockTimeoutException e) { + log.error(e.getMessage(), e); + + timeout = 0; + } } } finally { @@ -1532,13 +1545,21 @@ private void prepareIndexRebuildFuture(int cacheId) { CHECKPOINT_LOCK_HOLD_COUNT.set(CHECKPOINT_LOCK_HOLD_COUNT.get() + 1); } - /** */ - private void failCheckpointReadLock() throws IgniteException { - IgniteException e = new IgniteException("Checkpoint read lock acquisition has been timed out."); + /** + * Invokes critical failure processing. Always throws. + * + * @throws CheckpointReadLockTimeoutException If node was not invalidated as result of handling. + * @throws IgniteException If node was invalidated as result of handling. 
+ */ + private void failCheckpointReadLock() throws CheckpointReadLockTimeoutException, IgniteException { + String msg = "Checkpoint read lock acquisition has been timed out."; - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, e)); + IgniteException e = new IgniteException(msg); + + if (cctx.kernalContext().failure().process(new FailureContext(SYSTEM_CRITICAL_OPERATION_TIMEOUT, e))) + throw e; - throw e; + throw new CheckpointReadLockTimeoutException(msg); } /** {@inheritDoc} */ @@ -1603,52 +1624,6 @@ private boolean safeToUpdatePageMemories() { CHECKPOINT_LOCK_HOLD_COUNT.set(CHECKPOINT_LOCK_HOLD_COUNT.get() - 1); } - /** - * Restores from last checkpoint and applies WAL changes since this checkpoint. - * - * @throws IgniteCheckedException If failed to restore database status from WAL. - */ - private void restoreState() throws IgniteCheckedException { - try { - CheckpointStatus status = readCheckpointStatus(); - - checkpointReadLock(); - - try { - applyLastUpdates(status, false); - } - finally { - checkpointReadUnlock(); - } - - snapshotMgr.restoreState(); - } - catch (StorageException e) { - throw new IgniteCheckedException(e); - } - } - - /** - * Called when all partitions have been fully restored and pre-created on node start. - * - * Starts checkpointing process and initiates first checkpoint. - * - * @throws IgniteCheckedException If first checkpoint has failed. 
- */ - @Override public void onStateRestored() throws IgniteCheckedException { - long time = System.currentTimeMillis(); - - new IgniteThread(cctx.igniteInstanceName(), "db-checkpoint-thread", checkpointer).start(); - - CheckpointProgressSnapshot chp = checkpointer.wakeupForCheckpoint(0, "node started"); - - if (chp != null) - chp.cpBeginFut.get(); - - if (log.isInfoEnabled()) - log.info("Checkpointer initilialzation performed in " + (System.currentTimeMillis() - time) + " ms."); - } - /** {@inheritDoc} */ @Override public synchronized Map> reserveHistoryForExchange() { assert reservedForExchange == null : reservedForExchange; @@ -1758,16 +1733,7 @@ private Map> partitionsApplicableForWalRebalance() { if (ptr == null) return false; - boolean reserved; - - try { - reserved = cctx.wal().reserve(ptr); - } - catch (IgniteCheckedException e) { - U.error(log, "Error while trying to reserve history", e); - - reserved = false; - } + boolean reserved = cctx.wal().reserve(ptr); if (reserved) reservedForPreloading.put(new T2<>(grpId, partId), new T2<>(cntr, ptr)); @@ -1951,40 +1917,144 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC } } + /** {@inheritDoc} */ + @Override public void startMemoryRestore(GridKernalContext kctx) throws IgniteCheckedException { + if (kctx.clientNode()) + return; + + checkpointReadLock(); + + try { + // Perform early regions startup before restoring state. + initAndStartRegions(kctx.config().getDataStorageConfiguration()); + + // Restore binary memory for all cache groups that do not have WAL disabled.
+ restoreBinaryMemory( + groupsWithEnabledWal(), + physicalRecords() + ); + + if (recoveryVerboseLogging && log.isInfoEnabled()) { + log.info("Partition states information after BINARY RECOVERY phase:"); + + dumpPartitionsInfo(cctx, log); + } + + CheckpointStatus status = readCheckpointStatus(); + + RestoreLogicalState logicalState = applyLogicalUpdates( + status, + groupsWithEnabledWal(), + logicalRecords(), + true + ); + + if (recoveryVerboseLogging && log.isInfoEnabled()) { + log.info("Partition states information after LOGICAL RECOVERY phase:"); + + dumpPartitionsInfo(cctx, log); + } + + walTail = tailPointer(logicalState.lastReadRecordPointer().orElse(null)); + + cctx.wal().onDeActivate(kctx); + } + catch (IgniteCheckedException e) { + releaseFileLock(); + + throw e; + } + finally { + checkpointReadUnlock(); + } + } + /** - * @param status Checkpoint status. + * Calculates tail pointer for WAL at the end of logical recovery. + * + * @param from Start replay WAL from. + * @return Tail pointer. * @throws IgniteCheckedException If failed. - * @throws StorageException In case I/O error occurred during operations with storage. */ - @Nullable private WALPointer restoreMemory(CheckpointStatus status) throws IgniteCheckedException { - return restoreMemory(status, false, (PageMemoryEx)metaStorage.pageMemory()); + private WALPointer tailPointer(WALPointer from) throws IgniteCheckedException { + WALIterator it = cctx.wal().replay(from); + + try { + while (it.hasNextX()) { + IgniteBiTuple rec = it.nextX(); + + if (rec == null) + break; + } + } + finally { + it.close(); + } + + return it.lastRead().map(WALPointer::next).orElse(null); + } + + /** + * Called when all partitions have been fully restored and pre-created on node start. + * + * Starts checkpointing process and initiates first checkpoint. + * + * @throws IgniteCheckedException If first checkpoint has failed. 
+ */ + @Override public void onStateRestored(AffinityTopologyVersion topVer) throws IgniteCheckedException { + IgniteThread cpThread = new IgniteThread(cctx.igniteInstanceName(), "db-checkpoint-thread", checkpointer); + + cpThread.start(); + + checkpointerThread = cpThread; + + CheckpointProgressSnapshot chp = checkpointer.wakeupForCheckpoint(0, "node started"); + + if (chp != null) + chp.cpBeginFut.get(); } /** * @param status Checkpoint status. - * @param metastoreOnly If {@code True} restores Metastorage only. - * @param storePageMem Metastore page memory. + * @param cacheGroupsPredicate Cache groups to restore. * @throws IgniteCheckedException If failed. * @throws StorageException In case I/O error occurred during operations with storage. */ - @Nullable private WALPointer restoreMemory( + private RestoreBinaryState performBinaryMemoryRestore( CheckpointStatus status, - boolean metastoreOnly, - PageMemoryEx storePageMem + IgnitePredicate cacheGroupsPredicate, + IgniteBiPredicate recordTypePredicate, + boolean finalizeState ) throws IgniteCheckedException { - assert !metastoreOnly || storePageMem != null; - if (log.isInfoEnabled()) log.info("Checking memory state [lastValidPos=" + status.endPtr + ", lastMarked=" + status.startPtr + ", lastCheckpointId=" + status.cpStartId + ']'); + WALPointer recPtr = status.endPtr; + boolean apply = status.needRestoreMemory(); if (apply) { - U.quietAndWarn(log, "Ignite node stopped in the middle of checkpoint. Will restore memory state and " + - "finish checkpoint on node start."); + if (finalizeState) + U.quietAndWarn(log, "Ignite node stopped in the middle of checkpoint. 
Will restore memory state and " + + "finish checkpoint on node start."); cctx.pageStore().beginRecover(); + + WALRecord rec = cctx.wal().read(status.startPtr); + + if (!(rec instanceof CheckpointRecord)) + throw new StorageException("Checkpoint marker doesn't point to checkpoint record " + + "[ptr=" + status.startPtr + ", rec=" + rec + "]"); + + WALPointer cpMark = ((CheckpointRecord)rec).checkpointMark(); + + if (cpMark != null) { + log.info("Restoring checkpoint after logical recovery, will start physical recovery from " + + "back pointer: " + cpMark); + + recPtr = cpMark; + } } else cctx.wal().notchLastCheckpointPtr(status.startPtr); @@ -1993,16 +2063,15 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC long lastArchivedSegment = cctx.wal().lastArchivedSegment(); - RestoreBinaryState restoreBinaryState = new RestoreBinaryState(status, lastArchivedSegment, log); + WALIterator it = cctx.wal().replay(recPtr, recordTypePredicate); - Collection ignoreGrps = metastoreOnly ? Collections.emptySet() : - F.concat(false, initiallyGlobalWalDisabledGrps, initiallyLocalWalDisabledGrps); + RestoreBinaryState restoreBinaryState = new RestoreBinaryState(status, it, lastArchivedSegment, cacheGroupsPredicate); int applied = 0; - try (WALIterator it = cctx.wal().replay(status.endPtr)) { + try { while (it.hasNextX()) { - WALRecord rec = restoreBinaryState.next(it); + WALRecord rec = restoreBinaryState.next(); if (rec == null) break; @@ -2016,32 +2085,30 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC // several repetitive restarts and the same pages may have changed several times. 
int grpId = pageRec.fullPageId().groupId(); - if (metastoreOnly && grpId != METASTORAGE_CACHE_ID) - continue; + long pageId = pageRec.fullPageId().pageId(); - if (!ignoreGrps.contains(grpId)) { - long pageId = pageRec.fullPageId().pageId(); + PageMemoryEx pageMem = getPageMemoryForCacheGroup(grpId); - PageMemoryEx pageMem = grpId == METASTORAGE_CACHE_ID ? storePageMem : getPageMemoryForCacheGroup(grpId); + if (pageMem == null) + break; - long page = pageMem.acquirePage(grpId, pageId, true); + long page = pageMem.acquirePage(grpId, pageId, true); - try { - long pageAddr = pageMem.writeLock(grpId, pageId, page); + try { + long pageAddr = pageMem.writeLock(grpId, pageId, page, true); - try { - PageUtils.putBytes(pageAddr, 0, pageRec.pageData()); - } - finally { - pageMem.writeUnlock(grpId, pageId, page, null, true, true); - } + try { + PageUtils.putBytes(pageAddr, 0, pageRec.pageData()); } finally { - pageMem.releasePage(grpId, pageId, page); + pageMem.writeUnlock(grpId, pageId, page, null, true, true); } - - applied++; } + finally { + pageMem.releasePage(grpId, pageId, page); + } + + applied++; } break; @@ -2052,12 +2119,6 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC { int grpId = metaStateRecord.groupId(); - if (metastoreOnly && grpId != METASTORAGE_CACHE_ID) - continue; - - if (ignoreGrps.contains(grpId)) - continue; - int partId = metaStateRecord.partitionId(); GridDhtPartitionState state = GridDhtPartitionState.fromOrdinal(metaStateRecord.state()); @@ -2076,13 +2137,10 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC { int grpId = destroyRecord.groupId(); - if (metastoreOnly && grpId != METASTORAGE_CACHE_ID) - continue; - - if (ignoreGrps.contains(grpId)) - continue; + PageMemoryEx pageMem = getPageMemoryForCacheGroup(grpId); - PageMemoryEx pageMem = grpId == METASTORAGE_CACHE_ID ? 
storePageMem : getPageMemoryForCacheGroup(grpId); + if (pageMem == null) + break; pageMem.invalidate(grpId, destroyRecord.partitionId()); @@ -2097,43 +2155,44 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC int grpId = r.groupId(); - if (metastoreOnly && grpId != METASTORAGE_CACHE_ID) - continue; + long pageId = r.pageId(); - if (!ignoreGrps.contains(grpId)) { - long pageId = r.pageId(); + PageMemoryEx pageMem = getPageMemoryForCacheGroup(grpId); - PageMemoryEx pageMem = grpId == METASTORAGE_CACHE_ID ? storePageMem : getPageMemoryForCacheGroup(grpId); + if (pageMem == null) + break; - // Here we do not require tag check because we may be applying memory changes after - // several repetitive restarts and the same pages may have changed several times. - long page = pageMem.acquirePage(grpId, pageId, true); + // Here we do not require tag check because we may be applying memory changes after + // several repetitive restarts and the same pages may have changed several times. 
+ long page = pageMem.acquirePage(grpId, pageId, true); - try { - long pageAddr = pageMem.writeLock(grpId, pageId, page); + try { + long pageAddr = pageMem.writeLock(grpId, pageId, page, true); - try { - r.applyDelta(pageMem, pageAddr); - } - finally { - pageMem.writeUnlock(grpId, pageId, page, null, true, true); - } + try { + r.applyDelta(pageMem, pageAddr); } finally { - pageMem.releasePage(grpId, pageId, page); + pageMem.writeUnlock(grpId, pageId, page, null, true, true); } - - applied++; } + finally { + pageMem.releasePage(grpId, pageId, page); + } + + applied++; } } } } + finally { + it.close(); + } - if (metastoreOnly) + if (!finalizeState) return null; - WALPointer lastReadPtr = restoreBinaryState.lastReadRecordPointer(); + FileWALPointer lastReadPtr = restoreBinaryState.lastReadRecordPointer().orElse(null); if (status.needRestoreMemory()) { if (restoreBinaryState.needApplyBinaryUpdate()) @@ -2142,7 +2201,7 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC "[cpStatus=" + status + ", lastRead=" + lastReadPtr + "]"); log.info("Finished applying memory changes [changesApplied=" + applied + - ", time=" + (U.currentTimeMillis() - start) + "ms]"); + ", time=" + (U.currentTimeMillis() - start) + " ms]"); if (applied > 0) finalizeCheckpointOnRecovery(status.cpStartTs, status.cpStartId, status.startPtr); @@ -2150,7 +2209,7 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC cpHistory.initialize(retreiveHistory()); - return lastReadPtr == null ? null : lastReadPtr.next(); + return restoreBinaryState; } /** @@ -2161,6 +2220,9 @@ private WALPointer readPointer(File cpMarkerFile, ByteBuffer buf) throws IgniteC * @throws IgniteCheckedException if no DataRegion is configured for a name obtained from cache descriptor. 
*/ private PageMemoryEx getPageMemoryForCacheGroup(int grpId) throws IgniteCheckedException { + if (grpId == MetaStorage.METASTORAGE_CACHE_ID) + return (PageMemoryEx)dataRegion(METASTORE_DATA_REGION_NAME).pageMemory(); + // TODO IGNITE-7792 add generic mapping. if (grpId == TxLog.TX_LOG_CACHE_ID) return (PageMemoryEx)dataRegion(TxLog.TX_LOG_CACHE_NAME).pageMemory(); @@ -2171,7 +2233,7 @@ private PageMemoryEx getPageMemoryForCacheGroup(int grpId) throws IgniteCheckedE CacheGroupDescriptor desc = sharedCtx.cache().cacheGroupDescriptors().get(grpId); if (desc == null) - throw new IgniteCheckedException("Failed to find cache group descriptor [grpId=" + grpId + ']'); + return null; String memPlcName = desc.config().getDataRegionName(); @@ -2184,76 +2246,84 @@ private PageMemoryEx getPageMemoryForCacheGroup(int grpId) throws IgniteCheckedE * @param it WalIterator. * @param recPredicate Wal record filter. * @param entryPredicate Entry filter. - * @param partStates Partition to restore state. */ public void applyUpdatesOnRecovery( @Nullable WALIterator it, - IgnitePredicate> recPredicate, - IgnitePredicate entryPredicate, - Map, T2> partStates + IgniteBiPredicate recPredicate, + IgnitePredicate entryPredicate ) throws IgniteCheckedException { + if (it == null) + return; + cctx.walState().runWithOutWAL(() -> { - if (it != null) { - while (it.hasNext()) { - IgniteBiTuple next = it.next(); + while (it.hasNext()) { + IgniteBiTuple next = it.next(); - WALRecord rec = next.get2(); + WALRecord rec = next.get2(); - if (!recPredicate.apply(next)) - break; + if (!recPredicate.apply(next.get1(), rec)) + break; - switch (rec.type()) { - case DATA_RECORD: - checkpointReadLock(); + switch (rec.type()) { + case MVCC_DATA_RECORD: + case DATA_RECORD: + checkpointReadLock(); - try { - DataRecord dataRec = (DataRecord)rec; + try { + DataRecord dataRec = (DataRecord)rec; - for (DataEntry dataEntry : dataRec.writeEntries()) { - if (entryPredicate.apply(dataEntry)) { - checkpointReadLock(); + 
for (DataEntry dataEntry : dataRec.writeEntries()) { + if (entryPredicate.apply(dataEntry)) { + checkpointReadLock(); - try { - int cacheId = dataEntry.cacheId(); + try { + int cacheId = dataEntry.cacheId(); - GridCacheContext cacheCtx = cctx.cacheContext(cacheId); + GridCacheContext cacheCtx = cctx.cacheContext(cacheId); - if (cacheCtx != null) - applyUpdate(cacheCtx, dataEntry); - else if (log != null) - log.warning("Cache (cacheId=" + cacheId + ") is not started, can't apply updates."); - } - finally { - checkpointReadUnlock(); - } + if (cacheCtx != null) + applyUpdate(cacheCtx, dataEntry); + else if (log != null) + log.warning("Cache is not started. Updates cannot be applied " + + "[cacheId=" + cacheId + ']'); + } + finally { + checkpointReadUnlock(); } } } - catch (IgniteCheckedException e) { - throw new IgniteException(e); - } - finally { - checkpointReadUnlock(); - } + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + finally { + checkpointReadUnlock(); + } - break; + break; - default: - // Skip other records. - } - } - } + case MVCC_TX_RECORD: + checkpointReadLock(); - checkpointReadLock(); + try { + MvccTxRecord txRecord = (MvccTxRecord)rec; + + byte txState = convertToTxState(txRecord.state()); + + cctx.coordinators().updateState(txRecord.mvccVersion(), txState, false); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + finally { + checkpointReadUnlock(); + } + + break; - try { - restorePartitionStates(partStates, null); - } - catch (IgniteCheckedException e) { - throw new IgniteException(e); - } - finally { - checkpointReadUnlock(); + default: + // Skip other records. + } } }); } @@ -2263,66 +2333,81 @@ else if (log != null) * @throws IgniteCheckedException If failed to apply updates. * @throws StorageException If IO exception occurred while reading write-ahead log. 
*/ - private void applyLastUpdates(CheckpointStatus status, boolean metastoreOnly) throws IgniteCheckedException { + private RestoreLogicalState applyLogicalUpdates( + CheckpointStatus status, + IgnitePredicate cacheGroupsPredicate, + IgniteBiPredicate recordTypePredicate, + boolean skipFieldLookup + ) throws IgniteCheckedException { if (log.isInfoEnabled()) log.info("Applying lost cache updates since last checkpoint record [lastMarked=" + status.startPtr + ", lastCheckpointId=" + status.cpStartId + ']'); - if (!metastoreOnly) + if (skipFieldLookup) cctx.kernalContext().query().skipFieldLookup(true); - long lastArchivedSegment = cctx.wal().lastArchivedSegment(); - - RestoreLogicalState restoreLogicalState = new RestoreLogicalState(lastArchivedSegment, log); - long start = U.currentTimeMillis(); + int applied = 0; - Collection ignoreGrps = metastoreOnly ? Collections.emptySet() : - F.concat(false, initiallyGlobalWalDisabledGrps, initiallyLocalWalDisabledGrps); + long lastArchivedSegment = cctx.wal().lastArchivedSegment(); + + WALIterator it = cctx.wal().replay(status.startPtr, recordTypePredicate); - try (WALIterator it = cctx.wal().replay(status.startPtr)) { - Map, T2> partStates = new HashMap<>(); + RestoreLogicalState restoreLogicalState = new RestoreLogicalState(it, lastArchivedSegment, cacheGroupsPredicate); + try { while (it.hasNextX()) { - WALRecord rec = restoreLogicalState.next(it); + WALRecord rec = restoreLogicalState.next(); if (rec == null) break; switch (rec.type()) { + case MVCC_DATA_RECORD: case DATA_RECORD: - if (metastoreOnly) - continue; - + case ENCRYPTED_DATA_RECORD: DataRecord dataRec = (DataRecord)rec; for (DataEntry dataEntry : dataRec.writeEntries()) { int cacheId = dataEntry.cacheId(); - int grpId = cctx.cache().cacheDescriptor(cacheId).groupId(); + DynamicCacheDescriptor cacheDesc = cctx.cache().cacheDescriptor(cacheId); - if (!ignoreGrps.contains(grpId)) { - GridCacheContext cacheCtx = cctx.cacheContext(cacheId); + // Can be empty in case of node recovery after baseline topology (BLT) change.
+ if (cacheDesc == null) + continue; - applyUpdate(cacheCtx, dataEntry); + GridCacheContext cacheCtx = cctx.cacheContext(cacheId); - applied++; - } + applyUpdate(cacheCtx, dataEntry); + + applied++; } break; - case PART_META_UPDATE_STATE: - if (metastoreOnly) - continue; + case MVCC_TX_RECORD: + MvccTxRecord txRecord = (MvccTxRecord)rec; + + byte txState = convertToTxState(txRecord.state()); + cctx.coordinators().updateState(txRecord.mvccVersion(), txState, false); + + break; + + case PART_META_UPDATE_STATE: PartitionMetaStateRecord metaStateRecord = (PartitionMetaStateRecord)rec; - if (!ignoreGrps.contains(metaStateRecord.groupId())) { - partStates.put(new T2<>(metaStateRecord.groupId(), metaStateRecord.partitionId()), - new T2<>((int)metaStateRecord.state(), metaStateRecord.updateCounter())); - } + GroupPartitionId groupPartitionId = new GroupPartitionId( + metaStateRecord.groupId(), metaStateRecord.partitionId() + ); + + PartitionRecoverState state = new PartitionRecoverState( + (int)metaStateRecord.state(), metaStateRecord.updateCounter() + ); + + restoreLogicalState.partitionRecoveryStates.put(groupPartitionId, state); break; @@ -2336,13 +2421,14 @@ private void applyLastUpdates(CheckpointStatus status, boolean metastoreOnly) th case META_PAGE_UPDATE_NEXT_SNAPSHOT_ID: case META_PAGE_UPDATE_LAST_SUCCESSFUL_SNAPSHOT_ID: case META_PAGE_UPDATE_LAST_SUCCESSFUL_FULL_SNAPSHOT_ID: - if (metastoreOnly) - continue; - + case META_PAGE_UPDATE_LAST_ALLOCATED_INDEX: PageDeltaRecord rec0 = (PageDeltaRecord) rec; PageMemoryEx pageMem = getPageMemoryForCacheGroup(rec0.groupId()); + if (pageMem == null) + break; + long page = pageMem.acquirePage(rec0.groupId(), rec0.pageId(), true); try { @@ -2365,166 +2451,44 @@ private void applyLastUpdates(CheckpointStatus status, boolean metastoreOnly) th // Skip other records.
} } - - if (!metastoreOnly) { - long startRestorePart = U.currentTimeMillis(); - - if (log.isInfoEnabled()) - log.info("Restoring partition state for local groups [cntPartStateWal=" - + partStates.size() + ", lastCheckpointId=" + status.cpStartId + ']'); - - long proc = restorePartitionStates(partStates, null); - - if (log.isInfoEnabled()) - log.info("Finished restoring partition state for local groups [cntProcessed=" + proc + - ", cntPartStateWal=" + partStates.size() + - ", time=" + (U.currentTimeMillis() - startRestorePart) + "ms]"); - } } finally { - if (!metastoreOnly) + it.close(); + + if (skipFieldLookup) cctx.kernalContext().query().skipFieldLookup(false); } if (log.isInfoEnabled()) log.info("Finished applying WAL changes [updatesApplied=" + applied + - ", time=" + (U.currentTimeMillis() - start) + "ms]"); + ", time=" + (U.currentTimeMillis() - start) + " ms]"); + + for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) + lsnr.afterLogicalUpdatesApplied(this, restoreLogicalState); + + return restoreLogicalState; } /** - * Initializes not empty partitions and restores their state from page memory or WAL. - * Partition states presented in page memory may be overriden by states restored from WAL {@code partStates}. + * Convert {@link TransactionState} to Mvcc {@link TxState}. * - * @param partStates Partition states restored from WAL. - * @param onlyForGroups If not {@code null} restore states only for specified cache groups. - * @return cntParts Count of partitions processed. - * @throws IgniteCheckedException If failed to restore partition states. + * @param state TransactionState. + * @return TxState. */ - private long restorePartitionStates( - Map, T2> partStates, - @Nullable Set onlyForGroups - ) throws IgniteCheckedException { - long cntParts = 0; - - for (CacheGroupContext grp : cctx.cache().cacheGroups()) { - if (grp.isLocal() || !grp.affinityNode()) { - // Local cache has no partitions and its states. 
- continue; - } - - if (!grp.dataRegion().config().isPersistenceEnabled()) - continue; - - if (onlyForGroups != null && !onlyForGroups.contains(grp.groupId())) - continue; - - int grpId = grp.groupId(); - - PageMemoryEx pageMem = (PageMemoryEx)grp.dataRegion().pageMemory(); - - for (int i = 0; i < grp.affinity().partitions(); i++) { - T2 restore = partStates.get(new T2<>(grpId, i)); - - if (storeMgr.exists(grpId, i)) { - storeMgr.ensure(grpId, i); - - if (storeMgr.pages(grpId, i) <= 1) - continue; + private byte convertToTxState(TransactionState state) { + switch (state) { + case PREPARED: + return TxState.PREPARED; - if (log.isDebugEnabled()) - log.debug("Creating partition on recovery (exists in page store) " + - "[grp=" + grp.cacheOrGroupName() + ", p=" + i + "]"); + case COMMITTED: + return TxState.COMMITTED; - GridDhtLocalPartition part = grp.topology().forceCreatePartition(i); + case ROLLED_BACK: + return TxState.ABORTED; - assert part != null; - - // TODO: https://issues.apache.org/jira/browse/IGNITE-6097 - grp.offheap().onPartitionInitialCounterUpdated(i, 0); - - checkpointReadLock(); - - try { - long partMetaId = pageMem.partitionMetaPageId(grpId, i); - long partMetaPage = pageMem.acquirePage(grpId, partMetaId); - - try { - long pageAddr = pageMem.writeLock(grpId, partMetaId, partMetaPage); - - boolean changed = false; - - try { - PagePartitionMetaIO io = PagePartitionMetaIO.VERSIONS.forPage(pageAddr); - - if (restore != null) { - int stateId = restore.get1(); - - io.setPartitionState(pageAddr, (byte)stateId); - - changed = updateState(part, stateId); - - if (stateId == GridDhtPartitionState.OWNING.ordinal() - || (stateId == GridDhtPartitionState.MOVING.ordinal() - && part.initialUpdateCounter() < restore.get2())) { - part.initialUpdateCounter(restore.get2()); - - changed = true; - } - - if (log.isDebugEnabled()) - log.debug("Restored partition state (from WAL) " + - "[grp=" + grp.cacheOrGroupName() + ", p=" + i + ", state=" + part.state() + - "updCntr=" + 
part.initialUpdateCounter() + "]"); - } - else { - changed = updateState(part, (int) io.getPartitionState(pageAddr)); - - if (log.isDebugEnabled()) - log.debug("Restored partition state (from page memory) " + - "[grp=" + grp.cacheOrGroupName() + ", p=" + i + ", state=" + part.state() + - "updCntr=" + part.initialUpdateCounter() + "]"); - } - } - finally { - pageMem.writeUnlock(grpId, partMetaId, partMetaPage, null, changed); - } - } - finally { - pageMem.releasePage(grpId, partMetaId, partMetaPage); - } - } - finally { - checkpointReadUnlock(); - } - } - else if (restore != null) { - if (log.isDebugEnabled()) - log.debug("Creating partition on recovery (exists in WAL) " + - "[grp=" + grp.cacheOrGroupName() + ", p=" + i + "]"); - - GridDhtLocalPartition part = grp.topology().forceCreatePartition(i); - - assert part != null; - - // TODO: https://issues.apache.org/jira/browse/IGNITE-6097 - grp.offheap().onPartitionInitialCounterUpdated(i, 0); - - updateState(part, restore.get1()); - - if (log.isDebugEnabled()) - log.debug("Restored partition state (from WAL) " + - "[grp=" + grp.cacheOrGroupName() + ", p=" + i + ", state=" + part.state() + - "updCntr=" + part.initialUpdateCounter() + "]"); - } - - cntParts++; - } - - // After partition states are restored, it is necessary to update internal data structures in topology. - grp.topology().afterStateRestored(grp.topology().lastTopologyChangeVersion()); + default: + throw new IllegalStateException("Unsupported TxState."); } - - return cntParts; } /** @@ -2539,25 +2503,6 @@ public void onWalTruncated(WALPointer highBound) throws IgniteCheckedException { removeCheckpointFiles(cp); } - /** - * @param part Partition to restore state for. - * @param stateId State enum ordinal. - * @return Updated flag. 
- */ - private boolean updateState(GridDhtLocalPartition part, int stateId) { - if (stateId != -1) { - GridDhtPartitionState state = GridDhtPartitionState.fromOrdinal(stateId); - - assert state != null; - - part.restoreState(state == GridDhtPartitionState.EVICTED ? GridDhtPartitionState.RENTING : state); - - return true; - } - - return false; - } - /** * @param cacheCtx Cache context to apply an update. * @param dataEntry Data entry to apply. @@ -2574,14 +2519,26 @@ private void applyUpdate(GridCacheContext cacheCtx, DataEntry dataEntry) throws switch (dataEntry.op()) { case CREATE: case UPDATE: - cacheCtx.offheap().update( - cacheCtx, - dataEntry.key(), - dataEntry.value(), - dataEntry.writeVersion(), - 0L, - locPart, - null); + if (dataEntry instanceof MvccDataEntry) { + cacheCtx.offheap().mvccApplyUpdate( + cacheCtx, + dataEntry.key(), + dataEntry.value(), + dataEntry.writeVersion(), + 0L, + locPart, + ((MvccDataEntry)dataEntry).mvccVer()); + } + else { + cacheCtx.offheap().update( + cacheCtx, + dataEntry.key(), + dataEntry.value(), + dataEntry.writeVersion(), + 0L, + locPart, + null); + } if (dataEntry.partitionCounter() != 0) cacheCtx.offheap().onPartitionInitialCounterUpdated(partId, dataEntry.partitionCounter()); @@ -2589,7 +2546,18 @@ private void applyUpdate(GridCacheContext cacheCtx, DataEntry dataEntry) throws break; case DELETE: - cacheCtx.offheap().remove(cacheCtx, dataEntry.key(), partId, locPart); + if (dataEntry instanceof MvccDataEntry) { + cacheCtx.offheap().mvccApplyUpdate( + cacheCtx, + dataEntry.key(), + null, + dataEntry.writeVersion(), + 0L, + locPart, + ((MvccDataEntry)dataEntry).mvccVer()); + } + else + cacheCtx.offheap().remove(cacheCtx, dataEntry.key(), partId, locPart); if (dataEntry.partitionCounter() != 0) cacheCtx.offheap().onPartitionInitialCounterUpdated(partId, dataEntry.partitionCounter()); @@ -2800,8 +2768,9 @@ private static String checkpointFileName(long cpTs, UUID cpId, CheckpointEntryTy /** * @param cp Checkpoint entry. 
* @param type Checkpoint type. + * @return Checkpoint file name. */ - private static String checkpointFileName(CheckpointEntry cp, CheckpointEntryType type) { + public static String checkpointFileName(CheckpointEntry cp, CheckpointEntryType type) { return checkpointFileName(cp.timestamp(), cp.checkpointId(), type); } @@ -2879,6 +2848,24 @@ public void cancelOrWaitPartitionDestroy(int grpId, int partId) throws IgniteChe cp.cancelOrWaitPartitionDestroy(grpId, partId); } + /** + * Timeout for checkpoint read lock acquisition. + * + * @return Timeout for checkpoint read lock acquisition in milliseconds. + */ + @Override public long checkpointReadLockTimeout() { + return checkpointReadLockTimeout; + } + + /** + * Sets timeout for checkpoint read lock acquisition. + * + * @param val New timeout in milliseconds, non-positive value denotes infinite timeout. + */ + @Override public void checkpointReadLockTimeout(long val) { + checkpointReadLockTimeout = val; + } + /** * Partition destroy queue. */ @@ -3060,6 +3047,13 @@ protected Checkpointer(@Nullable String gridName, String name, IgniteLogger log) while (!isCancelled()) { waitCheckpointEvent(); + if (skipCheckpointOnNodeStop && (isCancelled() || shutdownNow)) { + if (log.isInfoEnabled()) + log.warning("Skipping last checkpoint because node is stopping."); + + return; + } + GridFutureAdapter enableChangeApplied = GridCacheDatabaseSharedManager.this.enableChangeApplied; if (enableChangeApplied != null) { @@ -3086,7 +3080,7 @@ protected Checkpointer(@Nullable String gridName, String name, IgniteLogger log) } finally { if (err == null && !(stopping && isCancelled)) - err = new IllegalStateException("Thread " + name() + " is terminated unexpectedly"); + err = new IllegalStateException("Thread is terminated unexpectedly: " + name()); if (err instanceof OutOfMemoryError) cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err)); @@ -3491,7 +3485,7 @@ private void cancelOrWaitPartitionDestroy(int grpId, int 
partId) throws IgniteCh if (req != null) req.waitCompleted(); - if (log.isDebugEnabled()) + if (req != null && log.isDebugEnabled()) log.debug("Partition file destroy has cancelled [grpId=" + grpId + ", partId=" + partId + "]"); } @@ -3533,7 +3527,9 @@ private void waitCheckpointEvent() { */ @SuppressWarnings("TooBroadScope") private Checkpoint markCheckpointBegin(CheckpointMetricsTracker tracker) throws IgniteCheckedException { - CheckpointRecord cpRec = new CheckpointRecord(null); + CheckpointRecord cpRec = new CheckpointRecord(memoryRecoveryRecordPtr); + + memoryRecoveryRecordPtr = null; WALPointer cpPtr = null; @@ -3581,8 +3577,7 @@ private Checkpoint markCheckpointBegin(CheckpointMetricsTracker tracker) throws final PartitionAllocationMap map = new PartitionAllocationMap(); - if (asyncRunner != null) - asyncRunner.reset(); + GridCompoundFuture asyncLsnrFut = asyncRunner == null ? null : new GridCompoundFuture(); DbCheckpointListener.Context ctx0 = new DbCheckpointListener.Context() { @Override public boolean nextSnapshot() { @@ -3603,10 +3598,14 @@ private Checkpoint markCheckpointBegin(CheckpointMetricsTracker tracker) throws @Override public Executor executor() { return asyncRunner == null ? 
null : cmd -> { try { - asyncRunner.execute(cmd); + GridFutureAdapter res = new GridFutureAdapter<>(); + + asyncRunner.execute(U.wrapIgniteFuture(cmd, res)); + + asyncLsnrFut.add(res); } catch (RejectedExecutionException e) { - assert false: "A task should never be rejected by async runner"; + assert false : "A task should never be rejected by async runner"; } }; } @@ -3616,17 +3615,16 @@ private Checkpoint markCheckpointBegin(CheckpointMetricsTracker tracker) throws for (DbCheckpointListener lsnr : lsnrs) lsnr.onCheckpointBegin(ctx0); - if (asyncRunner != null) { - asyncRunner.markInitialized(); + if (asyncLsnrFut != null) { + asyncLsnrFut.markInitialized(); - asyncRunner.awaitDone(); + asyncLsnrFut.get(); } if (curr.nextSnapshot) snapFut = snapshotMgr.onMarkCheckPointBegin(curr.snapshotOperation, map); - if (asyncRunner != null) - asyncRunner.reset(); + GridCompoundFuture grpHandleFut = asyncRunner == null ? null : new GridCompoundFuture(); for (CacheGroupContext grp : cctx.cache().cacheGroups()) { if (grp.isLocal() || !grp.walEnabled()) @@ -3658,17 +3656,21 @@ private Checkpoint markCheckpointBegin(CheckpointMetricsTracker tracker) throws r.run(); else try { - asyncRunner.execute(r); + GridFutureAdapter res = new GridFutureAdapter<>(); + + asyncRunner.execute(U.wrapIgniteFuture(r, res)); + + grpHandleFut.add(res); } catch (RejectedExecutionException e) { - assert false: "Task should never be rejected by async runner"; + assert false : "Task should never be rejected by async runner"; } } - if (asyncRunner != null) { - asyncRunner.markInitialized(); + if (grpHandleFut != null) { + grpHandleFut.markInitialized(); - asyncRunner.awaitDone(); + grpHandleFut.get(); } cpPagesTuple = beginAllCheckpoints(); @@ -3711,7 +3713,7 @@ private Checkpoint markCheckpointBegin(CheckpointMetricsTracker tracker) throws } catch (IgniteException e) { U.error(log, "Failed to wait for snapshot operation initialization: " + - curr.snapshotOperation + "]", e); + curr.snapshotOperation, e); 
} } @@ -4024,8 +4026,6 @@ private WriteCheckpointPages( private List writePages(Collection writePageIds) throws IgniteCheckedException { ByteBuffer tmpWriteBuf = threadBuf.get(); - long writeAddr = GridUnsafe.bufferAddress(tmpWriteBuf); - List pagesToRetry = new ArrayList<>(); for (FullPageId fullId : writePageIds) { @@ -4050,7 +4050,7 @@ else if (grpId == TxLog.TX_LOG_CACHE_ID) else { CacheGroupContext grp = context().cache().cacheGroup(grpId); - DataRegion region = grp != null ?grp .dataRegion() : null; + DataRegion region = grp != null ? grp.dataRegion() : null; if (region == null || !region.config().isPersistenceEnabled()) continue; @@ -4080,19 +4080,9 @@ else if (grpId == TxLog.TX_LOG_CACHE_ID) tracker.onDataPageWritten(); } - if (!skipCrc) { - PageIO.setCrc(writeAddr, PureJavaCrc32.calcCrc32(tmpWriteBuf, pageSize())); - - tmpWriteBuf.rewind(); - } - - int curWrittenPages = writtenPagesCntr.incrementAndGet(); - - snapshotMgr.onPageWrite(fullId, tmpWriteBuf, curWrittenPages, totalPagesToWrite); - - tmpWriteBuf.rewind(); + writtenPagesCntr.incrementAndGet(); - PageStore store = storeMgr.writeInternal(grpId, fullId.pageId(), tmpWriteBuf, tag, false); + PageStore store = storeMgr.writeInternal(grpId, fullId.pageId(), tmpWriteBuf, tag, true); updStores.computeIfAbsent(store, k -> new LongAdder()).increment(); } @@ -4166,12 +4156,12 @@ public void walSegsCoveredRange(final IgniteBiTuple walSegsCoveredRa /** * */ - private static class CheckpointStatus { + public static class CheckpointStatus { /** Null checkpoint UUID. */ private static final UUID NULL_UUID = new UUID(0L, 0L); /** Null WAL pointer. */ - private static final WALPointer NULL_PTR = new FileWALPointer(0, 0, 0); + public static final WALPointer NULL_PTR = new FileWALPointer(0, 0, 0); /** */ private long cpStartTs; @@ -4205,7 +4195,8 @@ private CheckpointStatus(long cpStartTs, UUID cpStartId, WALPointer startPtr, UU } /** - * @return {@code True} if need to apply page log to restore tree structure. 
+ * @return {@code True} if binary memory recovery needs to be performed. Only records {@link PageDeltaRecord} + * and {@link PageSnapshot} need to be applied from {@link #cpStartId}. */ public boolean needRestoreMemory() { return !F.eq(cpStartId, cpEndId) && !F.eq(NULL_UUID, cpStartId); } @@ -4286,7 +4277,7 @@ private static class CheckpointProgressSnapshot implements CheckpointFuture { } /** {@inheritDoc} */ - @Override public GridFutureAdapter finishFuture() { + @Override public GridFutureAdapter finishFuture() { return cpFinishFut; } } @@ -4389,9 +4380,9 @@ public void tryLock(long lockWaitTimeMillis) throws IgniteCheckedException { if (content == null) content = readContent(); - log.warning("Failed to acquire file lock (local nodeId:" + ctx.localNodeId() - + ", already locked by " + content + "), will try again in 1s: " - + file.getAbsolutePath()); + log.warning("Failed to acquire file lock. Will try again in 1s " + + "[nodeId=" + ctx.localNodeId() + ", holder=" + content + + ", path=" + file.getAbsolutePath() + ']'); } U.sleep(1000); @@ -4400,8 +4391,8 @@ public void tryLock(long lockWaitTimeMillis) throws IgniteCheckedException { if (content == null) content = readContent(); - failMsg = "Failed to acquire file lock during " + (lockWaitTimeMillis / 1000) + - " sec, (locked by " + content + "): " + file.getAbsolutePath(); + failMsg = "Failed to acquire file lock [holder=" + content + ", time=" + (lockWaitTimeMillis / 1000) + + " sec, path=" + file.getAbsolutePath() + ']'; } catch (Exception e) { throw new IgniteCheckedException(e); } @@ -4492,7 +4483,7 @@ public DataStorageMetricsImpl persistentStoreMetricsImpl() { } /** {@inheritDoc} */ - public void notifyMetaStorageSubscribersOnReadyForRead() throws IgniteCheckedException { + @Override public void notifyMetaStorageSubscribersOnReadyForRead() throws IgniteCheckedException { metastorageLifecycleLsnrs = cctx.kernalContext().internalSubscriptionProcessor().getMetastorageSubscribers(); readMetastore(); @@ -4569,10 +4560,10 
@@ public boolean isCheckpointInapplicableForWalRebalance(Long cpTs, int grpId) thr * */ private void fillWalDisabledGroups() { - MetaStorage meta = cctx.database().metaStorage(); + assert metaStorage != null; try { - Set keys = meta.readForPredicate(WAL_KEY_PREFIX_PRED).keySet(); + Set keys = metaStorage.readForPredicate(WAL_KEY_PREFIX_PRED).keySet(); if (keys.isEmpty()) return; @@ -4635,47 +4626,216 @@ else if (key.startsWith(WAL_GLOBAL_KEY_PREFIX)) } /** - * Abstract class for create restore context. + * Method dumps partitions info see {@link #dumpPartitionsInfo(CacheGroupContext, IgniteLogger)} + * for all persistent cache groups. + * + * @param cctx Shared context. + * @param log Logger. + * @throws IgniteCheckedException If failed. */ - public abstract static class RestoreStateContext { - /** */ - protected final IgniteLogger log; + private static void dumpPartitionsInfo(GridCacheSharedContext cctx, IgniteLogger log) throws IgniteCheckedException { + for (CacheGroupContext grp : cctx.cache().cacheGroups()) { + if (grp.isLocal() || !grp.persistenceEnabled()) + continue; + + dumpPartitionsInfo(grp, log); + } + } + + /** + * Retrieves from page memory meta information about given {@code grp} group partitions + * and dumps this information to log INFO level. + * + * @param grp Cache group. + * @param log Logger. + * @throws IgniteCheckedException If failed. 
+ */ + private static void dumpPartitionsInfo(CacheGroupContext grp, IgniteLogger log) throws IgniteCheckedException { + PageMemoryEx pageMem = (PageMemoryEx)grp.dataRegion().pageMemory(); + + IgnitePageStoreManager pageStore = grp.shared().pageStore(); + + assert pageStore != null : "Persistent cache should have an initialized page store manager."; + + for (int p = 0; p < grp.affinity().partitions(); p++) { + if (grp.topology().localPartition(p) != null) { + GridDhtLocalPartition part = grp.topology().localPartition(p); + + log.info("Partition [grp=" + grp.cacheOrGroupName() + + ", id=" + p + + ", state=" + part.state() + + ", counter=" + part.updateCounter() + + ", size=" + part.fullSize() + "]"); + + continue; + } + + if (!pageStore.exists(grp.groupId(), p)) + continue; + + pageStore.ensure(grp.groupId(), p); + + if (pageStore.pages(grp.groupId(), p) <= 1) { + log.info("Partition [grp=" + grp.cacheOrGroupName() + ", id=" + p + ", state=N/A (only file header) ]"); + + continue; + } + + long partMetaId = pageMem.partitionMetaPageId(grp.groupId(), p); + long partMetaPage = pageMem.acquirePage(grp.groupId(), partMetaId); + + try { + long pageAddr = pageMem.readLock(grp.groupId(), partMetaId, partMetaPage); + + try { + PagePartitionMetaIO io = PagePartitionMetaIO.VERSIONS.forPage(pageAddr); + + GridDhtPartitionState partitionState = GridDhtPartitionState.fromOrdinal(io.getPartitionState(pageAddr)); + + String state = partitionState != null ? partitionState.toString() : "N/A"; + + long updateCounter = io.getUpdateCounter(pageAddr); + long size = io.getSize(pageAddr); + + log.info("Partition [grp=" + grp.cacheOrGroupName() + + ", id=" + p + + ", state=" + state + + ", counter=" + updateCounter + + ", size=" + size + "]"); + } + finally { + pageMem.readUnlock(grp.groupId(), partMetaId, partMetaPage); + } + } + finally { + pageMem.releasePage(grp.groupId(), partMetaId, partMetaPage); + } + } + } + /** + * Recovery lifecycle for read-write metastorage. 
+ */ + private class MetastorageRecoveryLifecycle implements DatabaseLifecycleListener { + /** {@inheritDoc} */ + @Override public void beforeBinaryMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException { + cctx.pageStore().initializeForMetastorage(); + } + + /** {@inheritDoc} */ + @Override public void afterBinaryMemoryRestore( + IgniteCacheDatabaseSharedManager mgr, + RestoreBinaryState restoreState + ) throws IgniteCheckedException { + assert metaStorage == null; + + metaStorage = createMetastorage(false); + } + } + + /** + * @return Cache group predicate that passes only Metastorage cache group id. + */ + private IgnitePredicate onlyMetastorageGroup() { + return groupId -> MetaStorage.METASTORAGE_CACHE_ID == groupId; + } + + /** + * @return Cache group predicate that passes only cache groups with enabled WAL. + */ + private IgnitePredicate groupsWithEnabledWal() { + return groupId -> !initiallyGlobalWalDisabledGrps.contains(groupId) + && !initiallyLocalWalDisabledGrps.contains(groupId); + } + + /** + * @return WAL records predicate that passes only Metastorage data records. + */ + private IgniteBiPredicate onlyMetastorageRecords() { + return (type, ptr) -> type == METASTORE_DATA_RECORD; + } + + /** + * @return WAL records predicate that passes only physical and mixed WAL records. + */ + private IgniteBiPredicate physicalRecords() { + return (type, ptr) -> type.purpose() == WALRecord.RecordPurpose.PHYSICAL + || type.purpose() == WALRecord.RecordPurpose.MIXED; + } + + /** + * @return WAL records predicate that passes only logical and mixed WAL records. + */ + private IgniteBiPredicate logicalRecords() { + return (type, ptr) -> type.purpose() == WALRecord.RecordPurpose.LOGICAL + || type.purpose() == WALRecord.RecordPurpose.MIXED; + } + + /** + * Abstract class to create restore context. + */ + private abstract class RestoreStateContext { /** Last archived segment. 
*/ protected final long lastArchivedSegment; - /** Last read record WAL pointer. */ - protected FileWALPointer lastRead; + /** WAL iterator. */ + private final WALIterator iterator; + + /** Only {@link WalRecordCacheGroupAware} records satisfied this predicate will be applied. */ + private final IgnitePredicate cacheGroupPredicate; /** + * @param iterator WAL iterator. * @param lastArchivedSegment Last archived segment index. - * @param log Ignite logger. + * @param cacheGroupPredicate Cache groups predicate. */ - public RestoreStateContext(long lastArchivedSegment, IgniteLogger log) { + protected RestoreStateContext( + WALIterator iterator, + long lastArchivedSegment, + IgnitePredicate cacheGroupPredicate + ) { + this.iterator = iterator; this.lastArchivedSegment = lastArchivedSegment; - this.log = log; + this.cacheGroupPredicate = cacheGroupPredicate; } /** * Advance iterator to the next record. * - * @param it WAL iterator. * @return WALRecord entry. * @throws IgniteCheckedException If CRC check fail during binary recovery state or another exception occurring. */ - public WALRecord next(WALIterator it) throws IgniteCheckedException { + public WALRecord next() throws IgniteCheckedException { try { - IgniteBiTuple tup = it.nextX(); + for (;;) { + if (!iterator.hasNextX()) + return null; + + IgniteBiTuple tup = iterator.nextX(); + + if (tup == null) + return null; + + WALRecord rec = tup.get2(); - WALRecord rec = tup.get2(); + WALPointer ptr = tup.get1(); - WALPointer ptr = tup.get1(); + rec.position(ptr); - lastRead = (FileWALPointer)ptr; + // Filter out records by group id. + if (rec instanceof WalRecordCacheGroupAware) { + WalRecordCacheGroupAware grpAwareRecord = (WalRecordCacheGroupAware) rec; - rec.position(ptr); + if (!cacheGroupPredicate.apply(grpAwareRecord.groupId())) + continue; + } - return rec; + // Filter out data entries by group id. 
+ if (rec instanceof DataRecord) + rec = filterEntriesByGroupId((DataRecord) rec); + + return rec; + } } catch (IgniteCheckedException e) { boolean throwsCRCError = throwsCRCError(); @@ -4687,35 +4847,51 @@ public WALRecord next(WALIterator it) throws IgniteCheckedException { return null; } - log.error("Catch error during restore state, throwsCRCError=" + throwsCRCError, e); + log.error("There is an error during restore state [throwsCRCError=" + throwsCRCError + ']', e); throw e; } } + /** + * Filters out data entries from the given data record that do not satisfy {@link #cacheGroupPredicate}. + * + * @param record Original data record. + * @return Data record with filtered data entries. + */ + private DataRecord filterEntriesByGroupId(DataRecord record) { + List filteredEntries = record.writeEntries().stream() + .filter(entry -> { + int cacheId = entry.cacheId(); + + return cctx.cacheContext(cacheId) != null && cacheGroupPredicate.apply(cctx.cacheContext(cacheId).groupId()); + }) + .collect(Collectors.toList()); + + return record.setWriteEntries(filteredEntries); + } + /** * * @return Last read WAL record pointer. */ - public WALPointer lastReadRecordPointer() { - return lastRead; + public Optional lastReadRecordPointer() { + return iterator.lastRead().map(ptr -> (FileWALPointer)ptr); } /** * * @return Flag indicates need throws CRC exception or not. */ - public boolean throwsCRCError(){ - FileWALPointer lastReadPtr = lastRead; - - return lastReadPtr != null && lastReadPtr.index() <= lastArchivedSegment; + public boolean throwsCRCError() { + return lastReadRecordPointer().filter(ptr -> ptr.index() <= lastArchivedSegment).isPresent(); } } /** * Restore memory context. Tracks the safety of binary recovery. */ - public static class RestoreBinaryState extends RestoreStateContext { + public class RestoreBinaryState extends RestoreStateContext { /** Checkpoint status. 
*/ private final CheckpointStatus status; @@ -4724,25 +4900,30 @@ public static class RestoreBinaryState extends RestoreStateContext { /** * @param status Checkpoint status. + * @param iterator WAL iterator. * @param lastArchivedSegment Last archived segment index. - * @param log Ignite logger. + * @param cacheGroupsPredicate Cache groups predicate. */ - public RestoreBinaryState(CheckpointStatus status, long lastArchivedSegment, IgniteLogger log) { - super(lastArchivedSegment, log); + public RestoreBinaryState( + CheckpointStatus status, + WALIterator iterator, + long lastArchivedSegment, + IgnitePredicate cacheGroupsPredicate + ) { + super(iterator, lastArchivedSegment, cacheGroupsPredicate); this.status = status; - needApplyBinaryUpdates = status.needRestoreMemory(); + this.needApplyBinaryUpdates = status.needRestoreMemory(); } /** * Advance iterator to the next record. * - * @param it WAL iterator. * @return WALRecord entry. * @throws IgniteCheckedException If CRC check fail during binary recovery state or another exception occurring. */ - @Override public WALRecord next(WALIterator it) throws IgniteCheckedException { - WALRecord rec = super.next(it); + @Override public WALRecord next() throws IgniteCheckedException { + WALRecord rec = super.next(); if (rec == null) return null; @@ -4778,8 +4959,8 @@ public boolean needApplyBinaryUpdate() { * @return Flag indicates need throws CRC exception or not. */ @Override public boolean throwsCRCError() { - log.info("Throws CRC error check, needApplyBinaryUpdates=" + needApplyBinaryUpdates + - ", lastArchivedSegment=" + lastArchivedSegment + ", lastRead=" + lastRead); + log.info("Throws CRC error check [needApplyBinaryUpdates=" + needApplyBinaryUpdates + + ", lastArchivedSegment=" + lastArchivedSegment + ", lastRead=" + lastReadRecordPointer() + ']'); if (needApplyBinaryUpdates) return true; @@ -4791,13 +4972,33 @@ public boolean needApplyBinaryUpdate() { /** * Restore logical state context. 
Tracks the safety of logical recovery. */ - public static class RestoreLogicalState extends RestoreStateContext { + public class RestoreLogicalState extends RestoreStateContext { + /** States of partitions recovered during applying logical updates. */ + private final Map partitionRecoveryStates = new HashMap<>(); + /** * @param lastArchivedSegment Last archived segment index. - * @param log Ignite logger. */ - public RestoreLogicalState(long lastArchivedSegment, IgniteLogger log) { - super(lastArchivedSegment, log); + public RestoreLogicalState(WALIterator iterator, long lastArchivedSegment, IgnitePredicate cacheGroupsPredicate) { + super(iterator, lastArchivedSegment, cacheGroupsPredicate); + } + + /** + * @return Map of restored partition states for cache groups. + */ + public Map partitionRecoveryStates() { + return Collections.unmodifiableMap(partitionRecoveryStates); + } + } + + /** Indicates checkpoint read lock acquisition failure which did not lead to node invalidation. */ + private static class CheckpointReadLockTimeoutException extends IgniteCheckedException { + /** */ + private static final long serialVersionUID = 0L; + + /** */ + private CheckpointReadLockTimeoutException(String msg) { + super(msg); } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheOffheapManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheOffheapManager.java index 16c30c929a51c..e9af7bb67cb9a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheOffheapManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheOffheapManager.java @@ -28,6 +28,7 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.Executor; import java.util.concurrent.atomic.AtomicBoolean; +import javax.cache.processor.EntryProcessor; import org.apache.ignite.IgniteCheckedException; 
import org.apache.ignite.IgniteException; import org.apache.ignite.failure.FailureContext; @@ -37,6 +38,7 @@ import org.apache.ignite.internal.pagemem.PageIdUtils; import org.apache.ignite.internal.pagemem.PageMemory; import org.apache.ignite.internal.pagemem.PageSupport; +import org.apache.ignite.internal.pagemem.store.IgnitePageStoreManager; import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; import org.apache.ignite.internal.pagemem.wal.WALIterator; import org.apache.ignite.internal.pagemem.wal.WALPointer; @@ -57,10 +59,10 @@ import org.apache.ignite.internal.processors.cache.GridCacheTtlManager; import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl; import org.apache.ignite.internal.processors.cache.KeyCacheObject; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.CachePartitionPartialCountersMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteHistoricalIterator; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl; @@ -69,6 +71,7 @@ import org.apache.ignite.internal.processors.cache.persistence.partstate.GroupPartitionId; import org.apache.ignite.internal.processors.cache.persistence.partstate.PagesAllocationRange; import org.apache.ignite.internal.processors.cache.persistence.partstate.PartitionAllocationMap; +import 
org.apache.ignite.internal.processors.cache.persistence.partstate.PartitionRecoverState; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageMetaIO; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PagePartitionCountersIO; @@ -110,6 +113,9 @@ public class GridCacheOffheapManager extends IgniteCacheOffheapManagerImpl imple /** */ private ReuseListImpl reuseList; + /** Flag indicates that all group partitions have restored their state from page memory / disk. */ + private volatile boolean partitionStatesRestored; + /** {@inheritDoc} */ @Override protected void initPendingTree(GridCacheContext cctx) throws IgniteCheckedException { // No-op. Per-partition PendingTree should be used. @@ -136,6 +142,7 @@ public class GridCacheOffheapManager extends IgniteCacheOffheapManagerImpl imple ctx.wal(), globalRemoveId(), grp.groupId(), + grp.sharedGroup(), PageIdAllocator.INDEX_PARTITION, PageIdAllocator.FLAG_IDX, reuseList, @@ -170,14 +177,39 @@ public IndexStorage getIndexStorage() { Executor execSvc = ctx.executor(); - if (execSvc == null) { - reuseList.saveMetadata(); + boolean needSnapshot = ctx.nextSnapshot() && ctx.needToSnapshot(grp.cacheOrGroupName()); - if (ctx.nextSnapshot()) + if (needSnapshot) { + if (execSvc == null) updateSnapshotTag(ctx); + else { + execSvc.execute(() -> { + try { + updateSnapshotTag(ctx); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + }); + } + } + + syncMetadata(execSvc, ctx, needSnapshot); + } + + /** + * Syncs and saves meta-information of all data structures to page memory. + * + * @param execSvc Executor service to run save process + * @param ctx Checkpoint listener context. + * @throws IgniteCheckedException If failed. 
+ */ + private void syncMetadata(Executor execSvc, Context ctx, boolean needSnapshot) throws IgniteCheckedException { + if (execSvc == null) { + reuseList.saveMetadata(); for (CacheDataStore store : partDataStores.values()) - saveStoreMetadata(store, ctx, false); + saveStoreMetadata(store, ctx, false, needSnapshot); } else { execSvc.execute(() -> { @@ -189,21 +221,10 @@ public IndexStorage getIndexStorage() { } }); - if (ctx.nextSnapshot()) { - execSvc.execute(() -> { - try { - updateSnapshotTag(ctx); - } - catch (IgniteCheckedException e) { - throw new IgniteException(e); - } - }); - } - for (CacheDataStore store : partDataStores.values()) execSvc.execute(() -> { try { - saveStoreMetadata(store, ctx, false); + saveStoreMetadata(store, ctx, false, needSnapshot); } catch (IgniteCheckedException e) { throw new IgniteException(e); @@ -212,6 +233,13 @@ public IndexStorage getIndexStorage() { } } + /** + * @return {@code True} is group is not empty. + */ + private boolean notEmpty(CacheDataStore store) { + return store.rowStore() != null && (store.fullSize() > 0 || store.updateCounter() > 0); + } + /** * @param store Store to save metadata. * @throws IgniteCheckedException If failed. 
@@ -219,12 +247,11 @@ public IndexStorage getIndexStorage() { private void saveStoreMetadata( CacheDataStore store, Context ctx, - boolean beforeDestroy + boolean beforeDestroy, + boolean needSnapshot ) throws IgniteCheckedException { RowStore rowStore0 = store.rowStore(); - boolean needSnapshot = ctx != null && ctx.nextSnapshot() && ctx.needToSnapshot(grp.cacheOrGroupName()); - if (rowStore0 != null) { CacheFreeListImpl freeList = (CacheFreeListImpl)rowStore0.freeList(); @@ -375,6 +402,142 @@ else if (needSnapshot) tryAddEmptyPartitionToSnapshot(store, ctx); } + /** {@inheritDoc} */ + @Override public long restorePartitionStates(Map partitionRecoveryStates) throws IgniteCheckedException { + if (grp.isLocal() || !grp.affinityNode() || !grp.dataRegion().config().isPersistenceEnabled()) + return 0; + + if (partitionStatesRestored) + return 0; + + long processed = 0; + + PageMemoryEx pageMem = (PageMemoryEx)grp.dataRegion().pageMemory(); + + for (int p = 0; p < grp.affinity().partitions(); p++) { + PartitionRecoverState recoverState = partitionRecoveryStates.get(new GroupPartitionId(grp.groupId(), p)); + + if (ctx.pageStore().exists(grp.groupId(), p)) { + ctx.pageStore().ensure(grp.groupId(), p); + + if (ctx.pageStore().pages(grp.groupId(), p) <= 1) { + if (log.isDebugEnabled()) + log.debug("Skipping partition on recovery (pages less than 1) " + + "[grp=" + grp.cacheOrGroupName() + ", p=" + p + "]"); + + continue; + } + + if (log.isDebugEnabled()) + log.debug("Creating partition on recovery (exists in page store) " + + "[grp=" + grp.cacheOrGroupName() + ", p=" + p + "]"); + + processed++; + + GridDhtLocalPartition part = grp.topology().forceCreatePartition(p); + + onPartitionInitialCounterUpdated(p, 0); + + ctx.database().checkpointReadLock(); + + try { + long partMetaId = pageMem.partitionMetaPageId(grp.groupId(), p); + long partMetaPage = pageMem.acquirePage(grp.groupId(), partMetaId); + + try { + long pageAddr = pageMem.writeLock(grp.groupId(), partMetaId, 
partMetaPage); + + boolean changed = false; + + try { + PagePartitionMetaIO io = PagePartitionMetaIO.VERSIONS.forPage(pageAddr); + + if (recoverState != null) { + io.setPartitionState(pageAddr, (byte) recoverState.stateId()); + + changed = updateState(part, recoverState.stateId()); + + if (recoverState.stateId() == GridDhtPartitionState.OWNING.ordinal() + || (recoverState.stateId() == GridDhtPartitionState.MOVING.ordinal() + && part.initialUpdateCounter() < recoverState.updateCounter())) { + part.initialUpdateCounter(recoverState.updateCounter()); + + changed = true; + } + + if (log.isDebugEnabled()) + log.debug("Restored partition state (from WAL) " + + "[grp=" + grp.cacheOrGroupName() + ", p=" + p + ", state=" + part.state() + + ", updCntr=" + part.initialUpdateCounter() + "]"); + } + else { + int stateId = (int) io.getPartitionState(pageAddr); + + changed = updateState(part, stateId); + + if (log.isDebugEnabled()) + log.debug("Restored partition state (from page memory) " + + "[grp=" + grp.cacheOrGroupName() + ", p=" + p + ", state=" + part.state() + + ", updCntr=" + part.initialUpdateCounter() + ", stateId=" + stateId + "]"); + } + } + finally { + pageMem.writeUnlock(grp.groupId(), partMetaId, partMetaPage, null, changed); + } + } + finally { + pageMem.releasePage(grp.groupId(), partMetaId, partMetaPage); + } + } + finally { + ctx.database().checkpointReadUnlock(); + } + } + else if (recoverState != null) { + GridDhtLocalPartition part = grp.topology().forceCreatePartition(p); + + onPartitionInitialCounterUpdated(p, recoverState.updateCounter()); + + updateState(part, recoverState.stateId()); + + processed++; + + if (log.isDebugEnabled()) + log.debug("Restored partition state (from WAL) " + + "[grp=" + grp.cacheOrGroupName() + ", p=" + p + ", state=" + part.state() + + ", updCntr=" + part.initialUpdateCounter() + "]"); + } + else { + if (log.isDebugEnabled()) + log.debug("Skipping partition on recovery (no page store OR wal state) " + + "[grp=" + 
grp.cacheOrGroupName() + ", p=" + p + "]"); + } + } + + partitionStatesRestored = true; + + return processed; + } + + /** + * @param part Partition to restore state for. + * @param stateId State enum ordinal. + * @return Updated flag. + */ + private boolean updateState(GridDhtLocalPartition part, int stateId) { + if (stateId != -1) { + GridDhtPartitionState state = GridDhtPartitionState.fromOrdinal(stateId); + + assert state != null; + + part.restoreState(state == GridDhtPartitionState.EVICTED ? GridDhtPartitionState.RENTING : state); + + return true; + } + + return false; + } + /** * Check that we need to snapshot this partition and add it to map. * @@ -480,8 +643,6 @@ private static long writeSharedGroupCacheSizes(PageMemory pageMem, int grpId, try { final long curAddr = pageMem.writeLock(grpId, curId, curPage); - int pageSize = pageMem.pageSize(); - assert curAddr != 0; try { @@ -490,12 +651,12 @@ private static long writeSharedGroupCacheSizes(PageMemory pageMem, int grpId, if (init) { partCntrIo = PagePartitionCountersIO.VERSIONS.latest(); - partCntrIo.initNewPage(curAddr, curId, pageSize); + partCntrIo.initNewPage(curAddr, curId, pageMem.realPageSize(grpId)); } else partCntrIo = PageIO.getPageIO(curAddr); - written += partCntrIo.writeCacheSizes(pageSize, curAddr, data, written); + written += partCntrIo.writeCacheSizes(pageMem.realPageSize(grpId), curAddr, data, written); nextId = partCntrIo.getNextCountersPageId(curAddr); @@ -543,10 +704,8 @@ private void updateSnapshotTag(Context ctx) throws IgniteCheckedException { log.debug("Save next snapshot before checkpoint start for grId = " + grpId + ", nextSnapshotTag = " + nextSnapshotTag); - if (PageHandler.isWalDeltaRecordNeeded(pageMem, grpId, metaPageId, - metaPage, wal, null)) - wal.log(new MetaPageUpdateNextSnapshotId(grpId, metaPageId, - nextSnapshotTag + 1)); + if (!wal.disabled(grpId)) + wal.log(new MetaPageUpdateNextSnapshotId(grpId, metaPageId, nextSnapshotTag + 1)); addPartition( null, @@ -617,7 +776,7 
@@ private static boolean addPartition( ctx.database().checkpointReadLock(); try { - saveStoreMetadata(store, null, true); + saveStoreMetadata(store, null, true, false); } finally { ctx.database().checkpointReadUnlock(); @@ -676,19 +835,13 @@ public void destroyPartitionStore(int grpId, int partId) throws IgniteCheckedExc } /** {@inheritDoc} */ - @Override public RootPage rootPageForIndex(int cacheId, String idxName) throws IgniteCheckedException { - if (grp.sharedGroup()) - idxName = Integer.toString(cacheId) + "_" + idxName; - - return indexStorage.getOrAllocateForTree(idxName); + @Override public RootPage rootPageForIndex(int cacheId, String idxName, int segment) throws IgniteCheckedException { + return indexStorage.allocateCacheIndex(cacheId, idxName, segment); } /** {@inheritDoc} */ - @Override public void dropRootPageForIndex(int cacheId, String idxName) throws IgniteCheckedException { - if (grp.sharedGroup()) - idxName = Integer.toString(cacheId) + "_" + idxName; - - indexStorage.dropRootPage(idxName); + @Override public void dropRootPageForIndex(int cacheId, String idxName, int segment) throws IgniteCheckedException { + indexStorage.dropCacheIndex(cacheId, idxName, segment); } /** {@inheritDoc} */ @@ -725,7 +878,7 @@ private Metas getOrAllocateCacheMetas() throws IgniteCheckedException { if (PageIO.getType(pageAddr) != PageIO.T_META) { PageMetaIO pageIO = PageMetaIO.VERSIONS.latest(); - pageIO.initNewPage(pageAddr, metaId, pageMem.pageSize()); + pageIO.initNewPage(pageAddr, metaId, pageMem.realPageSize(grpId)); metastoreRoot = pageMem.allocatePage(grpId, PageIdAllocator.INDEX_PARTITION, PageMemory.FLAG_IDX); reuseListRoot = pageMem.allocatePage(grpId, PageIdAllocator.INDEX_PARTITION, PageMemory.FLAG_IDX); @@ -813,9 +966,6 @@ private Metas getOrAllocateCacheMetas() throws IgniteCheckedException { ) throws IgniteCheckedException { assert !cctx.isNear() : cctx.name(); - if (!hasPendingEntries || nextCleanTime > U.currentTimeMillis()) - return false; - // 
Prevent manager being stopped in the middle of pds operation. if (!busyLock.enterBusy()) return false; @@ -829,10 +979,6 @@ private Metas getOrAllocateCacheMetas() throws IgniteCheckedException { if (amount != -1 && cleared >= amount) return true; } - - // Throttle if there is nothing to clean anymore. - if (cleared < amount) - nextCleanTime = U.currentTimeMillis() + UNWIND_THROTTLING_TIMEOUT; } finally { busyLock.leaveBusy(); @@ -851,6 +997,21 @@ private Metas getOrAllocateCacheMetas() throws IgniteCheckedException { return size; } + /** {@inheritDoc} */ + @Override public void preloadPartition(int part) throws IgniteCheckedException { + if (grp.isLocal()) { + dataStore(part).preload(); + + return; + } + + GridDhtLocalPartition locPart = grp.topology().localPartition(part, AffinityTopologyVersion.NONE, false, false); + + assert locPart != null && locPart.reservations() > 0; + + locPart.dataStore().preload(); + } + /** * Calculates free space of all partition data stores - number of bytes available for use in allocated pages. * @@ -1208,11 +1369,6 @@ private DataEntryRow(DataEntry entry) { @Override public byte newMvccTxState() { return 0; // TODO IGNITE-7384 } - - /** {@inheritDoc} */ - @Override public boolean isKeyAbsentBefore() { - return false; - } } /** @@ -1266,8 +1422,16 @@ public class GridCacheDataStore implements CacheDataStore { /** */ private volatile CacheDataStore delegate; - /** Timestamp when next clean try will be allowed for current partition. - * Used for fine-grained throttling on per-partition basis. */ + /** + * Cache id which should be throttled. + */ + private volatile int lastThrottledCacheId; + + /** + * Timestamp when next clean try will be allowed for the current partition + * in accordance with the value of {@code lastThrottledCacheId}. + * Used for fine-grained throttling on per-partition basis. 
+ */ private volatile long nextStoreCleanTime; /** */ @@ -1375,12 +1539,37 @@ private CacheDataStore init0(boolean checkExists) throws IgniteCheckedException @Override public PendingEntriesTree pendingTree() { return pendingTree0; } + + /** {@inheritDoc} */ + @Override public void preload() throws IgniteCheckedException { + IgnitePageStoreManager pageStoreMgr = ctx.pageStore(); + + if (pageStoreMgr == null) + return; + + final int pages = pageStoreMgr.pages(grp.groupId(), partId); + + long pageId = pageMem.partitionMetaPageId(grp.groupId(), partId); + + // For each page sequentially pin/unpin. + for (int pageNo = 0; pageNo < pages; pageId++, pageNo++) { + long pagePointer = -1; + + try { + pagePointer = pageMem.acquirePage(grp.groupId(), pageId); + } + finally { + if (pagePointer != -1) + pageMem.releasePage(grp.groupId(), pageId, pagePointer); + } + } + } }; pendingTree = pendingTree0; - if (!hasPendingEntries && pendingTree0.size() > 0) - hasPendingEntries = true; + if (!pendingTree0.isEmpty()) + grp.caches().forEach(cctx -> cctx.ttl().hasPendingEntries(true)); int grpId = grp.groupId(); long partMetaId = pageMem.partitionMetaPageId(grpId, partId); @@ -1462,7 +1651,7 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException { if (PageIO.getType(pageAddr) != PageIO.T_PART_META) { PagePartitionMetaIO io = PagePartitionMetaIO.VERSIONS.latest(); - io.initNewPage(pageAddr, partMetaId, pageMem.pageSize()); + io.initNewPage(pageAddr, partMetaId, pageMem.realPageSize(grpId)); treeRoot = pageMem.allocatePage(grpId, partId, PageMemory.FLAG_DATA); reuseListRoot = pageMem.allocatePage(grpId, partId, PageMemory.FLAG_DATA); @@ -1476,8 +1665,10 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException { io.setReuseListRoot(pageAddr, reuseListRoot); io.setPendingTreeRoot(pageAddr, pendingTreeRoot); - if (PageHandler.isWalDeltaRecordNeeded(pageMem, grpId, partMetaId, partMetaPage, wal, null)) - wal.log(new PageSnapshot(new 
FullPageId(partMetaId, grpId), pageAddr, pageMem.pageSize())); + if (PageHandler.isWalDeltaRecordNeeded(pageMem, grpId, partMetaId, partMetaPage, wal, null)) { + wal.log(new PageSnapshot(new FullPageId(partMetaId, grpId), pageAddr, + pageMem.pageSize(), pageMem.realPageSize(grpId))); + } allocated = true; } @@ -1506,8 +1697,11 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException { io.setPendingTreeRoot(pageAddr, pendingTreeRoot); - if (PageHandler.isWalDeltaRecordNeeded(pageMem, grpId, partMetaId, partMetaPage, wal, null)) - wal.log(new PageSnapshot(new FullPageId(partMetaId, grpId), pageAddr, pageMem.pageSize())); + if (PageHandler.isWalDeltaRecordNeeded(pageMem, grpId, partMetaId, partMetaPage, wal, + null)) { + wal.log(new PageSnapshot(new FullPageId(partMetaId, grpId), pageAddr, + pageMem.pageSize(), pageMem.realPageSize(grpId))); + } pendingTreeAllocated = true; } @@ -1570,6 +1764,18 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException { } } + /** {@inheritDoc} */ + @Override public boolean isEmpty() { + try { + CacheDataStore delegate0 = init0(true); + + return delegate0 == null || delegate0.isEmpty(); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + } + /** {@inheritDoc} */ @Override public long cacheSize(int cacheId) { try { @@ -1662,6 +1868,18 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException { } } + /** {@inheritDoc} */ + @Override public GridLongList finalizeUpdateCounters() { + try { + CacheDataStore delegate0 = init0(true); + + return delegate0 != null ? 
                delegate0.finalizeUpdateCounters() : null;
+            }
+            catch (IgniteCheckedException e) {
+                throw new IgniteException(e);
+            }
+        }
+
         /** {@inheritDoc} */
         @Override public long nextUpdateCounter() {
             try {
@@ -1797,14 +2015,17 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException {
             long expireTime,
             MvccSnapshot mvccVer,
             CacheEntryPredicate filter,
+            EntryProcessor entryProc,
+            Object[] invokeArgs,
             boolean primary,
             boolean needHistory,
             boolean noCreate,
+            boolean needOldVal,
             boolean retVal) throws IgniteCheckedException {
             CacheDataStore delegate = init0(false);

-            return delegate.mvccUpdate(cctx, key, val, ver, expireTime, mvccVer, filter, primary,
-                needHistory, noCreate, retVal);
+            return delegate.mvccUpdate(cctx, key, val, ver, expireTime, mvccVer, filter, entryProc, invokeArgs, primary,
+                needHistory, noCreate, needOldVal, retVal);
         }

         /** {@inheritDoc} */
@@ -1815,10 +2036,11 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException {
             CacheEntryPredicate filter,
             boolean primary,
             boolean needHistory,
+            boolean needOldVal,
             boolean retVal) throws IgniteCheckedException {
             CacheDataStore delegate = init0(false);

-            return delegate.mvccRemove(cctx, key, mvccVer, filter, primary, needHistory, retVal);
+            return delegate.mvccRemove(cctx, key, mvccVer, filter, primary, needHistory, needOldVal, retVal);
         }

         /** {@inheritDoc} */
@@ -1852,6 +2074,14 @@ private Metas getOrAllocatePartitionMetas() throws IgniteCheckedException {
             delegate.mvccRemoveAll(cctx, key);
         }

+        /** {@inheritDoc} */
+        @Override public void mvccApplyUpdate(GridCacheContext cctx, KeyCacheObject key, CacheObject val, GridCacheVersion ver,
+            long expireTime, MvccVersion mvccVer) throws IgniteCheckedException {
+            CacheDataStore delegate = init0(false);
+
+            delegate.mvccApplyUpdate(cctx, key, val, ver, expireTime, mvccVer);
+        }
+
         /** {@inheritDoc} */
         @Override public CacheDataRow createRow(
             GridCacheContext cctx,
@@ -2099,14 +2329,16 @@ public long expiredSize() throws
IgniteCheckedException { * @return cleared entries count. * @throws IgniteCheckedException If failed. */ - public int purgeExpired(GridCacheContext cctx, + public int purgeExpired( + GridCacheContext cctx, IgniteInClosure2X c, - int amount) throws IgniteCheckedException { + int amount + ) throws IgniteCheckedException { CacheDataStore delegate0 = init0(true); long now = U.currentTimeMillis(); - if (delegate0 == null || nextStoreCleanTime > now) + if (delegate0 == null || (cctx.cacheId() == lastThrottledCacheId && nextStoreCleanTime > now)) return 0; assert pendingTree != null : "Partition data store was not initialized."; @@ -2114,8 +2346,11 @@ public int purgeExpired(GridCacheContext cctx, int cleared = purgeExpiredInternal(cctx, c, amount); // Throttle if there is nothing to clean anymore. - if (cleared < amount) - nextStoreCleanTime = now + UNWIND_THROTTLING_TIMEOUT; + if (cleared < amount) { + lastThrottledCacheId = cctx.cacheId(); + + nextStoreCleanTime = now + GridCacheTtlManager.UNWIND_THROTTLING_TIMEOUT; + } return cleared; } @@ -2129,10 +2364,11 @@ public int purgeExpired(GridCacheContext cctx, * @return cleared entries count. * @throws IgniteCheckedException If failed. */ - private int purgeExpiredInternal(GridCacheContext cctx, + private int purgeExpiredInternal( + GridCacheContext cctx, IgniteInClosure2X c, - int amount) throws IgniteCheckedException { - + int amount + ) throws IgniteCheckedException { GridDhtLocalPartition part = cctx.topology().localPartition(partId, AffinityTopologyVersion.NONE, false, false); // Skip non-owned partitions. 
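The change to `purgeExpired` above adds per-cache throttling: when a sweep clears fewer entries than requested, further sweeps for that same cache are skipped until a timeout elapses, so an empty pending tree is not re-scanned on every tick. A minimal standalone sketch of that pattern follows; the class name, method names, and the `THROTTLE_MS` constant are illustrative (the patch itself uses `GridCacheTtlManager.UNWIND_THROTTLING_TIMEOUT`), not Ignite API:

```java
/**
 * Sketch of the per-cache expiry-throttling pattern: only the cache whose
 * last sweep came up short is paused; other caches sharing the partition
 * store are swept immediately.
 */
class ExpiryThrottle {
    /** Pause after an incomplete sweep (illustrative value). */
    static final long THROTTLE_MS = 500;

    /** Cache id whose sweeps are currently throttled. */
    private volatile int lastThrottledCacheId;

    /** Timestamp when the throttled cache may be swept again. */
    private volatile long nextCleanTime;

    /** @return {@code true} if a sweep for {@code cacheId} should run at {@code now}. */
    boolean shouldSweep(int cacheId, long now) {
        // Only the cache that last came up empty is throttled; others proceed.
        return cacheId != lastThrottledCacheId || now >= nextCleanTime;
    }

    /** Records a sweep result; throttles when fewer entries were cleared than requested. */
    void onSwept(int cacheId, int cleared, int amount, long now) {
        if (cleared < amount) {
            lastThrottledCacheId = cacheId;
            nextCleanTime = now + THROTTLE_MS;
        }
    }
}
```

Tracking a single throttled cache id (rather than the old single per-store timestamp) is what lets one empty cache stop paying the scan cost without delaying expiry for the other caches in a shared group.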
@@ -2210,6 +2446,14 @@ private int purgeExpiredInternal(GridCacheContext cctx, throw new IgniteException(e); } } + + /** {@inheritDoc} */ + @Override public void preload() throws IgniteCheckedException { + CacheDataStore delegate0 = init0(true); + + if (delegate0 != null) + delegate0.preload(); + } } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IgniteCacheDatabaseSharedManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IgniteCacheDatabaseSharedManager.java index 52430c02db19b..7fc70d0b8923d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IgniteCacheDatabaseSharedManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IgniteCacheDatabaseSharedManager.java @@ -25,17 +25,20 @@ import java.util.List; import java.util.Map; import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; import javax.management.InstanceNotFoundException; import org.apache.ignite.DataRegionMetrics; import org.apache.ignite.DataStorageMetrics; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.configuration.DataPageEvictionMode; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; import org.apache.ignite.internal.mem.DirectMemoryProvider; import org.apache.ignite.internal.mem.DirectMemoryRegion; import org.apache.ignite.internal.mem.file.MappedFileMemoryProvider; @@ -43,8 +46,8 @@ import org.apache.ignite.internal.pagemem.PageMemory; import 
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl; import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheGroupContext; -import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.GridCacheMapEntry; import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; @@ -69,6 +72,7 @@ import org.apache.ignite.mxbean.DataRegionMetricsMXBean; import org.jetbrains.annotations.Nullable; +import static org.apache.ignite.IgniteSystemProperties.IGNITE_REUSE_MEMORY_ON_DEACTIVATE; import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_DATA_REG_DEFAULT_NAME; import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_PAGE_SIZE; import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_WAL_ARCHIVE_MAX_SIZE; @@ -88,14 +92,26 @@ public class IgniteCacheDatabaseSharedManager extends GridCacheSharedManagerAdap /** Maximum initial size on 32-bit JVM */ private static final long MAX_PAGE_MEMORY_INIT_SIZE_32_BIT = 2L * 1024 * 1024 * 1024; + /** {@code True} to reuse memory on deactive. */ + protected final boolean reuseMemory = IgniteSystemProperties.getBoolean(IGNITE_REUSE_MEMORY_ON_DEACTIVATE); + + /** */ + protected final Map dataRegionMap = new ConcurrentHashMap<>(); + + /** Stores memory providers eligible for reuse. 
*/ + private final Map memProviderMap = new ConcurrentHashMap<>(); + + /** */ + private static final String MBEAN_GROUP_NAME = "DataRegionMetrics"; + /** */ - protected volatile Map dataRegionMap; + protected final Map memMetricsMap = new ConcurrentHashMap<>(); /** */ private volatile boolean dataRegionsInitialized; /** */ - protected Map memMetricsMap; + private volatile boolean dataRegionsStarted; /** */ protected DataRegion dfltDataRegion; @@ -112,6 +128,7 @@ public class IgniteCacheDatabaseSharedManager extends GridCacheSharedManagerAdap /** First eviction was warned flag. */ private volatile boolean firstEvictWarn; + /** {@inheritDoc} */ @Override protected void start0() throws IgniteCheckedException { if (cctx.kernalContext().clientNode() && cctx.kernalContext().config().getDataStorageConfiguration() == null) @@ -129,44 +146,88 @@ public class IgniteCacheDatabaseSharedManager extends GridCacheSharedManagerAdap } /** - * Registers MBeans for all DataRegionMetrics configured in this instance. - */ - private void registerMetricsMBeans() { - if(U.IGNITE_MBEANS_DISABLED) + * @param cfg Ignite configuration. + * @param groupName Name of group. + * @param dataRegionName Metrics MBean name. + * @param impl Metrics implementation. + * @param clazz Metrics class type. 
+ */ + protected void registerMetricsMBean( + IgniteConfiguration cfg, + String groupName, + String dataRegionName, + T impl, + Class clazz + ) { + if (U.IGNITE_MBEANS_DISABLED) return; - IgniteConfiguration cfg = cctx.gridConfig(); - - for (DataRegionMetrics memMetrics : memMetricsMap.values()) { - DataRegionConfiguration memPlcCfg = dataRegionMap.get(memMetrics.getName()).config(); - - registerMetricsMBean((DataRegionMetricsImpl)memMetrics, memPlcCfg, cfg); + try { + U.registerMBean( + cfg.getMBeanServer(), + cfg.getIgniteInstanceName(), + groupName, + dataRegionName, + impl, + clazz); + } + catch (Throwable e) { + U.error(log, "Failed to register MBean with name: " + dataRegionName, e); } } /** - * @param memMetrics Memory metrics. - * @param dataRegionCfg Data region configuration. * @param cfg Ignite configuration. + * @param groupName Name of group. + * @param name Name of MBean. */ - private void registerMetricsMBean( - DataRegionMetricsImpl memMetrics, - DataRegionConfiguration dataRegionCfg, - IgniteConfiguration cfg + protected void unregisterMetricsMBean( + IgniteConfiguration cfg, + String groupName, + String name ) { - assert !U.IGNITE_MBEANS_DISABLED; + if (U.IGNITE_MBEANS_DISABLED) + return; + + assert cfg != null; try { - U.registerMBean( - cfg.getMBeanServer(), - cfg.getIgniteInstanceName(), - "DataRegionMetrics", - dataRegionCfg.getName(), - new DataRegionMetricsMXBeanImpl(memMetrics, dataRegionCfg), - DataRegionMetricsMXBean.class); + cfg.getMBeanServer().unregisterMBean( + U.makeMBeanName( + cfg.getIgniteInstanceName(), + groupName, + name + )); + } + catch (InstanceNotFoundException ignored) { + // We tried to unregister a non-existing MBean, not a big deal. 
} catch (Throwable e) { - U.error(log, "Failed to register MBean for DataRegionMetrics with name: '" + memMetrics.getName() + "'", e); + U.error(log, "Failed to unregister MBean for memory metrics: " + name, e); + } + } + + /** + * Registers MBeans for all DataRegionMetrics configured in this instance. + * + * @param cfg Ignite configuration. + */ + protected void registerMetricsMBeans(IgniteConfiguration cfg) { + if (U.IGNITE_MBEANS_DISABLED) + return; + + assert cfg != null; + + for (DataRegionMetrics memMetrics : memMetricsMap.values()) { + DataRegionConfiguration memPlcCfg = dataRegionMap.get(memMetrics.getName()).config(); + + registerMetricsMBean( + cfg, + MBEAN_GROUP_NAME, + memPlcCfg.getName(), + new DataRegionMetricsMXBeanImpl((DataRegionMetricsImpl)memMetrics, memPlcCfg), + DataRegionMetricsMXBean.class + ); } } @@ -211,11 +272,11 @@ public int pageSize() { /** * */ - private void startMemoryPolicies() { - for (DataRegion memPlc : dataRegionMap.values()) { - memPlc.pageMemory().start(); + private void startDataRegions() { + for (DataRegion region : dataRegionMap.values()) { + region.pageMemory().start(); - memPlc.evictionTracker().start(); + region.evictionTracker().start(); } } @@ -230,6 +291,8 @@ protected void initDataRegions(DataStorageConfiguration memCfg) throws IgniteChe initDataRegions0(memCfg); dataRegionsInitialized = true; + + U.log(log, "Configured data regions initialized successfully [total=" + dataRegionMap.size() + ']'); } /** @@ -239,11 +302,6 @@ protected void initDataRegions(DataStorageConfiguration memCfg) throws IgniteChe protected void initDataRegions0(DataStorageConfiguration memCfg) throws IgniteCheckedException { DataRegionConfiguration[] dataRegionCfgs = memCfg.getDataRegionConfigurations(); - int dataRegions = dataRegionCfgs == null ? 
0 : dataRegionCfgs.length; - - dataRegionMap = U.newHashMap(3 + dataRegions); - memMetricsMap = U.newHashMap(3 + dataRegions); - if (dataRegionCfgs != null) { for (DataRegionConfiguration dataRegionCfg : dataRegionCfgs) addDataRegion(memCfg, dataRegionCfg, dataRegionCfg.isPersistenceEnabled()); @@ -265,9 +323,8 @@ protected void initDataRegions0(DataStorageConfiguration memCfg) throws IgniteCh CU.isPersistenceEnabled(memCfg) ); - for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) { + for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) lsnr.onInitDataRegions(this); - } } /** @@ -297,17 +354,17 @@ public void addDataRegion( DataRegionMetricsImpl memMetrics = new DataRegionMetricsImpl(dataRegionCfg, freeSpaceProvider(dataRegionCfg)); - DataRegion memPlc = initMemory(dataStorageCfg, dataRegionCfg, memMetrics, trackable); + DataRegion region = initMemory(dataStorageCfg, dataRegionCfg, memMetrics, trackable); - dataRegionMap.put(dataRegionName, memPlc); + dataRegionMap.put(dataRegionName, region); memMetricsMap.put(dataRegionName, memMetrics); if (dataRegionName.equals(dfltMemPlcName)) - dfltDataRegion = memPlc; + dfltDataRegion = region; else if (dataRegionName.equals(DFLT_DATA_REG_DEFAULT_NAME)) U.warn(log, "Data Region with name 'default' isn't used as a default. " + - "Please check Memory Policies configuration."); + "Please, check Data Region configuration."); } /** @@ -639,20 +696,12 @@ public DataStorageMetrics persistentStoreMetrics() { return null; } - /** - * @param cachesToStart Started caches. - * @throws IgniteCheckedException If failed. - */ - public void readCheckpointAndRestoreMemory(List cachesToStart) throws IgniteCheckedException { - // No-op. - } - /** * @param memPlcName Name of {@link DataRegion} to obtain {@link DataRegionMetrics} for. * @return {@link DataRegionMetrics} snapshot for specified {@link DataRegion} or {@code null} if * no {@link DataRegion} is configured for specified name. 
*/ - @Nullable public DataRegionMetrics memoryMetrics(String memPlcName) { + public @Nullable DataRegionMetrics memoryMetrics(String memPlcName) { if (!F.isEmpty(memMetricsMap)) { DataRegionMetrics memMetrics = memMetricsMap.get(memPlcName); @@ -671,7 +720,7 @@ public DataRegion dataRegion(String memPlcName) throws IgniteCheckedException { if (memPlcName == null) return dfltDataRegion; - if (dataRegionMap == null) + if (dataRegionMap.isEmpty()) return null; DataRegion plc; @@ -706,33 +755,7 @@ public ReuseList reuseList(String memPlcName) { /** {@inheritDoc} */ @Override protected void stop0(boolean cancel) { - onDeActivate(cctx.kernalContext()); - } - - /** - * Unregister MBean. - * @param name Name of mbean. - */ - private void unregisterMBean(String name) { - if(U.IGNITE_MBEANS_DISABLED) - return; - - IgniteConfiguration cfg = cctx.gridConfig(); - - try { - cfg.getMBeanServer().unregisterMBean( - U.makeMBeanName( - cfg.getIgniteInstanceName(), - "DataRegionMetrics", name - )); - } - catch (InstanceNotFoundException ignored) { - // We tried to unregister a non-existing MBean, not a big deal. - } - catch (Throwable e) { - U.error(log, "Failed to unregister MBean for memory metrics: " + - name, e); - } + onDeActivate(true); } /** {@inheritDoc} */ @@ -754,9 +777,31 @@ public void checkpointReadUnlock() { // No-op. } + /** + * @return {@code 0} for non-persistent storage. + */ + public long checkpointReadLockTimeout() { + return 0; + } + /** * No-op for non-persistent storage. */ + public void checkpointReadLockTimeout(long val) { + // No-op. + } + + /** + * Method will perform cleanup cache page memory and each cache partition store. + */ + public void cleanupRestoredCaches() { + // No-op. + } + + /** + * Clean checkpoint directory {@link GridCacheDatabaseSharedManager#cpDir}. The operation + * is necessary when local node joined to baseline topology with different consistentId. + */ public void cleanupCheckpointDirectory() throws IgniteCheckedException { // No-op. 
} @@ -802,11 +847,19 @@ public void waitForCheckpoint(String reason) throws IgniteCheckedException { /** * @param discoEvt Before exchange for the given discovery event. + */ + public void beforeExchange(GridDhtPartitionsExchangeFuture discoEvt) throws IgniteCheckedException { + + } + + /** + * Perform memory restore before {@link GridDiscoveryManager} start. * - * @return {@code True} if partitions have been restored from persistent storage. + * @param kctx Current kernal context. + * @throws IgniteCheckedException If fails. */ - public boolean beforeExchange(GridDhtPartitionsExchangeFuture discoEvt) throws IgniteCheckedException { - return false; + public void startMemoryRestore(GridKernalContext kctx) throws IgniteCheckedException { + // No-op. } /** @@ -814,7 +867,7 @@ public boolean beforeExchange(GridDhtPartitionsExchangeFuture discoEvt) throws I * * @throws IgniteCheckedException If failed. */ - public void onStateRestored() throws IgniteCheckedException { + public void onStateRestored(AffinityTopologyVersion topVer) throws IgniteCheckedException { // No-op. } @@ -935,17 +988,49 @@ private DataRegion initMemory( DataRegionMetricsImpl memMetrics, boolean trackable ) throws IgniteCheckedException { + PageMemory pageMem = createPageMemory(createOrReuseMemoryProvider(plcCfg), memCfg, plcCfg, memMetrics, trackable); + + return new DataRegion(pageMem, plcCfg, memMetrics, createPageEvictionTracker(plcCfg, pageMem)); + } + + /** + * @param plcCfg Policy config. + * @return DirectMemoryProvider provider. + */ + private DirectMemoryProvider createOrReuseMemoryProvider(DataRegionConfiguration plcCfg) + throws IgniteCheckedException { + if (!supportsMemoryReuse(plcCfg)) + return createMemoryProvider(plcCfg); + + DirectMemoryProvider memProvider = memProviderMap.get(plcCfg.getName()); + + if (memProvider == null) + memProviderMap.put(plcCfg.getName(), (memProvider = createMemoryProvider(plcCfg))); + + return memProvider; + } + + /** + * @param plcCfg Policy config. 
+ * + * @return {@code True} if policy supports memory reuse. + */ + public boolean supportsMemoryReuse(DataRegionConfiguration plcCfg) { + return reuseMemory && plcCfg.getSwapPath() == null; + } + + /** + * @param plcCfg Policy config. + * @return DirectMemoryProvider provider. + */ + private DirectMemoryProvider createMemoryProvider(DataRegionConfiguration plcCfg) throws IgniteCheckedException { File allocPath = buildAllocPath(plcCfg); - DirectMemoryProvider memProvider = allocPath == null ? + return allocPath == null ? new UnsafeMemoryProvider(log) : new MappedFileMemoryProvider( log, allocPath); - - PageMemory pageMem = createPageMemory(memProvider, memCfg, plcCfg, memMetrics, trackable); - - return new DataRegion(pageMem, plcCfg, memMetrics, createPageEvictionTracker(plcCfg, pageMem)); } /** @@ -1045,8 +1130,8 @@ protected DirectMemoryProvider wrapMetricsMemoryProvider( memProvider.initialize(chunkSizes); } - @Override public void shutdown() { - memProvider.shutdown(); + @Override public void shutdown(boolean deallocate) { + memProvider.shutdown(deallocate); } @Override public DirectMemoryRegion nextRegion() { @@ -1080,47 +1165,80 @@ protected File buildPath(String path, String consId) throws IgniteCheckedExcepti /** {@inheritDoc} */ @Override public void onActivate(GridKernalContext kctx) throws IgniteCheckedException { - if (cctx.kernalContext().clientNode() && cctx.kernalContext().config().getDataStorageConfiguration() == null) + if (kctx.clientNode() && kctx.config().getDataStorageConfiguration() == null) return; - DataStorageConfiguration memCfg = cctx.kernalContext().config().getDataStorageConfiguration(); + initAndStartRegions(kctx.config().getDataStorageConfiguration()); - assert memCfg != null; + for (DatabaseLifecycleListener lsnr : getDatabaseListeners(kctx)) + lsnr.afterInitialise(this); + } - initDataRegions(memCfg); + /** + * @param cfg Current data storage configuration. + * @throws IgniteCheckedException If fails. 
+ */ + protected void initAndStartRegions(DataStorageConfiguration cfg) throws IgniteCheckedException { + assert cfg != null; - registerMetricsMBeans(); + initDataRegions(cfg); - startMemoryPolicies(); + startDataRegions(cfg); + } - initPageMemoryDataStructures(memCfg); + /** + * @param cfg Regions configuration. + * @throws IgniteCheckedException If fails. + */ + private void startDataRegions(DataStorageConfiguration cfg) throws IgniteCheckedException { + if (dataRegionsStarted) + return; - for (DatabaseLifecycleListener lsnr : getDatabaseListeners(kctx)) { - lsnr.afterInitialise(this); - } + assert cfg != null; + + registerMetricsMBeans(cctx.gridConfig()); + + startDataRegions(); + + initPageMemoryDataStructures(cfg); + + dataRegionsStarted = true; + + U.log(log, "Configured data regions started successfully [total=" + dataRegionMap.size() + ']'); } /** {@inheritDoc} */ @Override public void onDeActivate(GridKernalContext kctx) { - for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) { + onDeActivate(!reuseMemory); + } + + /** + * @param shutdown {@code True} to force memory regions shutdown. 
+ */ + private void onDeActivate(boolean shutdown) { + for (DatabaseLifecycleListener lsnr : getDatabaseListeners(cctx.kernalContext())) lsnr.beforeStop(this); - } - if (dataRegionMap != null) { - for (DataRegion memPlc : dataRegionMap.values()) { - memPlc.pageMemory().stop(); + for (DataRegion region : dataRegionMap.values()) { + region.pageMemory().stop(shutdown); - memPlc.evictionTracker().stop(); + region.evictionTracker().stop(); - unregisterMBean(memPlc.memoryMetrics().getName()); + unregisterMetricsMBean( + cctx.gridConfig(), + MBEAN_GROUP_NAME, + region.memoryMetrics().getName() + ); } - dataRegionMap.clear(); + dataRegionMap.clear(); - dataRegionMap = null; + if (shutdown && memProviderMap != null) + memProviderMap.clear(); - dataRegionsInitialized = false; - } + dataRegionsInitialized = false; + + dataRegionsStarted = false; } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorage.java index 5141b04868f7c..295ff00f26311 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorage.java @@ -23,6 +23,17 @@ * Meta store. */ public interface IndexStorage { + /** + * Allocate page for cache index. Index name will be masked if needed. + * + * @param cacheId Cache ID. + * @param idxName Index name. + * @param segment Segment. + * @return Root page. + * @throws IgniteCheckedException If failed. + */ + public RootPage allocateCacheIndex(Integer cacheId, String idxName, int segment) throws IgniteCheckedException; + /** * Get or allocate initial page for an index. * @@ -31,7 +42,18 @@ public interface IndexStorage { * was newly allocated, and rootId that is counter which increments each time new page allocated. * @throws IgniteCheckedException If failed. 
*/ - public RootPage getOrAllocateForTree(String idxName) throws IgniteCheckedException; + public RootPage allocateIndex(String idxName) throws IgniteCheckedException; + + /** + * Deallocate index page and remove from tree. + * + * @param cacheId Cache ID. + * @param idxName Index name. + * @param segment Segment. + * @return Root ID or -1 if no page was removed. + * @throws IgniteCheckedException If failed. + */ + public RootPage dropCacheIndex(Integer cacheId, String idxName, int segment) throws IgniteCheckedException; /** * Deallocate index page and remove from tree. @@ -40,7 +62,7 @@ public interface IndexStorage { * @return Root ID or -1 if no page was removed. * @throws IgniteCheckedException If failed. */ - public RootPage dropRootPage(String idxName) throws IgniteCheckedException; + public RootPage dropIndex(String idxName) throws IgniteCheckedException; /** * Destroy this meta store. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorageImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorageImpl.java index 6248765525866..b29553c6b8e2a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorageImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/IndexStorageImpl.java @@ -61,6 +61,9 @@ public class IndexStorageImpl implements IndexStorage { /** Cache group ID. */ private final int grpId; + /** Whether group is shared. 
*/ + private final boolean grpShared; + + /** */ private final int allocPartId; @@ -76,6 +79,7 @@ public IndexStorageImpl( final IgniteWriteAheadLogManager wal, final AtomicLong globalRmvId, final int grpId, + boolean grpShared, final int allocPartId, final byte allocSpace, final ReuseList reuseList, @@ -86,6 +90,7 @@ public IndexStorageImpl( try { this.pageMem = pageMem; this.grpId = grpId; + this.grpShared = grpShared; this.allocPartId = allocPartId; this.allocSpace = allocSpace; this.reuseList = reuseList; @@ -99,7 +104,15 @@ public IndexStorageImpl( } /** {@inheritDoc} */ - @Override public RootPage getOrAllocateForTree(final String idxName) throws IgniteCheckedException { + @Override public RootPage allocateCacheIndex(Integer cacheId, String idxName, int segment) + throws IgniteCheckedException { + String maskedIdxName = maskCacheIndexName(cacheId, idxName, segment); + + return allocateIndex(maskedIdxName); + } + + /** {@inheritDoc} */ + @Override public RootPage allocateIndex(String idxName) throws IgniteCheckedException { final MetaTree tree = metaTree; synchronized (this) { @@ -132,8 +145,15 @@ public IndexStorageImpl( } /** {@inheritDoc} */ - @Override public RootPage dropRootPage(final String idxName) + @Override public RootPage dropCacheIndex(Integer cacheId, String idxName, int segment) throws IgniteCheckedException { + String maskedIdxName = maskCacheIndexName(cacheId, idxName, segment); + + return dropIndex(maskedIdxName); + } + + /** {@inheritDoc} */ + @Override public RootPage dropIndex(final String idxName) throws IgniteCheckedException { byte[] idxNameBytes = idxName.getBytes(StandardCharsets.UTF_8); final IndexItem row = metaTree.remove(new IndexItem(idxNameBytes, 0)); @@ -151,6 +171,19 @@ public IndexStorageImpl( metaTree.destroy(); } + /** + * Mask cache index name. + * + * @param cacheId Cache ID. + * @param idxName Index name. + * @param segment Segment. + * @return Masked name. 
+ */ + private String maskCacheIndexName(Integer cacheId, String idxName, int segment) { + if (grpShared) + idxName = Integer.toString(cacheId) + "_" + idxName; + + return idxName + "%" + segment; + } + /** * */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AsyncFileIOFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AsyncFileIOFactory.java index 104697e810e58..e0c545b8a87c0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AsyncFileIOFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AsyncFileIOFactory.java @@ -22,10 +22,6 @@ import java.nio.channels.AsynchronousFileChannel; import java.nio.file.OpenOption; -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; - /** * File I/O factory which uses {@link AsynchronousFileChannel} based implementation of FileIO. */ @@ -36,11 +32,6 @@ public class AsyncFileIOFactory implements FileIOFactory { /** Thread local channel future holder. */ private transient volatile ThreadLocal holder = initHolder(); - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { if (holder == null) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/EncryptedFileIO.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/EncryptedFileIO.java new file mode 100644 index 0000000000000..86d9bbc5b56e6 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/EncryptedFileIO.java @@ -0,0 +1,372 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.file; + +import java.io.IOException; +import java.io.Serializable; +import java.nio.ByteBuffer; +import java.nio.MappedByteBuffer; + +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; +import org.apache.ignite.spi.encryption.EncryptionSpi; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; +import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; +import org.apache.ignite.internal.util.typedef.internal.CU; + +/** + * Implementation of {@code FileIO} that encrypts pages written to the file and decrypts pages read from it. 
+ * + * @see EncryptedFileIOFactory + */ +public class EncryptedFileIO implements FileIO { + /** + * Underlying file. + */ + private final FileIO plainFileIO; + + /** + * Group id. + */ + private final int groupId; + + /** + * Size of plain data page in bytes. + */ + private final int pageSize; + + /** + * Size of file header in bytes. + */ + private final int headerSize; + + /** + * Encryption manager. + */ + private final GridEncryptionManager encMgr; + + /** + * Encryption SPI. + */ + private final EncryptionSpi encSpi; + + /** + * Encryption key. + */ + private Serializable encKey; + + /** + * Extra bytes added by encryption. + */ + private final int encryptionOverhead; + + /** + * Array of zeroes used to fill the tail of a decrypted page. + */ + private final byte[] zeroes; + + /** + * @param plainFileIO Underlying file. + * @param groupId Group id. + * @param pageSize Size of plain data page in bytes. + * @param headerSize Size of file header in bytes. + * @param encMgr Encryption manager. 
+ * @param encSpi Encryption SPI. + */ + EncryptedFileIO(FileIO plainFileIO, int groupId, int pageSize, int headerSize, + GridEncryptionManager encMgr, EncryptionSpi encSpi) { + this.plainFileIO = plainFileIO; + this.groupId = groupId; + this.pageSize = pageSize; + this.headerSize = headerSize; + this.encMgr = encMgr; + this.encSpi = encSpi; + + this.encryptionOverhead = pageSize - CU.encryptedPageSize(pageSize, encSpi); + this.zeroes = new byte[encryptionOverhead]; + } + + /** {@inheritDoc} */ + @Override public long position() throws IOException { + return plainFileIO.position(); + } + + /** {@inheritDoc} */ + @Override public void position(long newPosition) throws IOException { + plainFileIO.position(newPosition); + } + + /** {@inheritDoc} */ + @Override public int read(ByteBuffer destBuf) throws IOException { + assert position() == 0; + + return plainFileIO.read(destBuf); + } + + /** {@inheritDoc} */ + @Override public int readFully(ByteBuffer destBuf) throws IOException { + return read(destBuf); + } + + /** {@inheritDoc} */ + @Override public int read(ByteBuffer destBuf, long position) throws IOException { + assert destBuf.remaining() >= pageSize; + assert position() != 0; + + ByteBuffer encrypted = ByteBuffer.allocate(pageSize); + + int res = plainFileIO.read(encrypted, position); + + if (res < 0) + return res; + + if (res != pageSize) { + throw new IllegalStateException("Expecting to read whole page [" + pageSize + " bytes], " + + "but read only " + res + " bytes"); + } + + encrypted.rewind(); + + decrypt(encrypted, destBuf); + + return res; + } + + /** {@inheritDoc} */ + @Override public int readFully(ByteBuffer destBuf, long position) throws IOException { + assert destBuf.capacity() == pageSize; + assert position() != 0; + + ByteBuffer encrypted = ByteBuffer.allocate(pageSize); + + int res = plainFileIO.readFully(encrypted, position); + + if (res < 0) + return res; + + if (res != pageSize) { + throw new IllegalStateException("Expecting to read whole page [" + pageSize + " bytes], " + + "but 
read only " + res + " bytes"); + } + + encrypted.rewind(); + + decrypt(encrypted, destBuf); + + return res; + } + + /** {@inheritDoc} */ + @Override public int read(byte[] buf, int off, int len) throws IOException { + throw new UnsupportedOperationException("Encrypted File doesn't support this operation"); + } + + /** {@inheritDoc} */ + @Override public int readFully(byte[] buf, int off, int len) throws IOException { + return read(buf, off, len); + } + + /** {@inheritDoc} */ + @Override public int write(ByteBuffer srcBuf) throws IOException { + assert position() == 0; + assert headerSize == srcBuf.capacity(); + + return plainFileIO.write(srcBuf); + } + + /** {@inheritDoc} */ + @Override public int writeFully(ByteBuffer srcBuf) throws IOException { + return write(srcBuf); + } + + /** {@inheritDoc} */ + @Override public int write(ByteBuffer srcBuf, long position) throws IOException { + ByteBuffer encrypted = ByteBuffer.allocate(pageSize); + + encrypt(srcBuf, encrypted); + + encrypted.rewind(); + + return plainFileIO.write(encrypted, position); + } + + /** {@inheritDoc} */ + @Override public int writeFully(ByteBuffer srcBuf, long position) throws IOException { + ByteBuffer encrypted = ByteBuffer.allocate(pageSize); + + encrypt(srcBuf, encrypted); + + encrypted.rewind(); + + return plainFileIO.writeFully(encrypted, position); + } + + /** + * @param srcBuf Source buffer. + * @param res Destination buffer. + * @throws IOException If failed. + */ + private void encrypt(ByteBuffer srcBuf, ByteBuffer res) throws IOException { + assert position() != 0; + assert srcBuf.remaining() >= pageSize; + assert tailIsEmpty(srcBuf, PageIO.getType(srcBuf)); + + int srcLimit = srcBuf.limit(); + + srcBuf.limit(srcBuf.position() + plainDataSize()); + + encSpi.encryptNoPadding(srcBuf, key(), res); + + res.rewind(); + + storeCRC(res); + + srcBuf.limit(srcLimit); + srcBuf.position(srcBuf.position() + encryptionOverhead); + } + + /** + * @param encrypted Encrypted buffer. 
+ * @param destBuf Destination buffer. + */ + private void decrypt(ByteBuffer encrypted, ByteBuffer destBuf) throws IOException { + assert encrypted.remaining() >= pageSize; + assert encrypted.limit() >= pageSize; + + checkCRC(encrypted); + + encrypted.limit(encryptedDataSize()); + + encSpi.decryptNoPadding(encrypted, key(), destBuf); + + destBuf.put(zeroes); // Forcibly zero the page buffer tail. + } + + /** + * Stores CRC in res. + * + * @param res Destination buffer. + */ + private void storeCRC(ByteBuffer res) { + int crc = FastCrc.calcCrc(res, encryptedDataSize()); + + res.put((byte) (crc >> 24)); + res.put((byte) (crc >> 16)); + res.put((byte) (crc >> 8)); + res.put((byte) crc); + } + + /** + * Checks encrypted data integrity. + * + * @param encrypted Encrypted data buffer. + */ + private void checkCRC(ByteBuffer encrypted) throws IOException { + int crc = FastCrc.calcCrc(encrypted, encryptedDataSize()); + + int storedCrc = 0; + + storedCrc |= (int)encrypted.get() << 24; + storedCrc |= ((int)encrypted.get() & 0xff) << 16; + storedCrc |= ((int)encrypted.get() & 0xff) << 8; + storedCrc |= encrypted.get() & 0xff; + + if (crc != storedCrc) { + throw new IOException("Content of encrypted page is broken. [storedCrc=" + storedCrc + + ", calculatedCrc=" + crc + "]"); + } + + encrypted.position(encrypted.position() - (encryptedDataSize() + 4 /* CRC size. */)); + } + + /** + * @return Encrypted data size. + */ + private int encryptedDataSize() { + return pageSize - encSpi.blockSize(); + } + + /** + * @return Plain data size. 
+ */ + private int plainDataSize() { + return pageSize - encryptionOverhead; + } + + /** */ + private boolean tailIsEmpty(ByteBuffer src, int pageType) { + int srcPos = src.position(); + + src.position(srcPos + plainDataSize()); + + for (int i = 0; i < encryptionOverhead; i++) + assert src.get() == 0 : "Tail of src should be empty [i=" + i + ", pageType=" + pageType + "]"; + + src.position(srcPos); + + return true; + } + + /** + * @return Encryption key. + */ + private Serializable key() { + if (encKey == null) + return encKey = encMgr.groupKey(groupId); + + return encKey; + } + + /** {@inheritDoc} */ + @Override public int write(byte[] buf, int off, int len) throws IOException { + throw new UnsupportedOperationException("Encrypted File doesn't support this operation"); + } + + /** {@inheritDoc} */ + @Override public int writeFully(byte[] buf, int off, int len) throws IOException { + return write(buf, off, len); + } + + /** {@inheritDoc} */ + @Override public MappedByteBuffer map(int sizeBytes) throws IOException { + throw new UnsupportedOperationException("Encrypted File doesn't support this operation"); + } + + /** {@inheritDoc} */ + @Override public void force() throws IOException { + plainFileIO.force(); + } + + /** {@inheritDoc} */ + @Override public void force(boolean withMetadata) throws IOException { + plainFileIO.force(withMetadata); + } + + /** {@inheritDoc} */ + @Override public long size() throws IOException { + return plainFileIO.size(); + } + + /** {@inheritDoc} */ + @Override public void clear() throws IOException { + plainFileIO.clear(); + } + + /** {@inheritDoc} */ + @Override public void close() throws IOException { + plainFileIO.close(); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/EncryptedFileIOFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/EncryptedFileIOFactory.java new file mode 100644 index 0000000000000..b4b0389ef2c39 --- 
/dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/EncryptedFileIOFactory.java @@ -0,0 +1,93 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.file; + +import java.io.File; +import java.io.IOException; +import java.nio.file.OpenOption; +import org.apache.ignite.spi.encryption.EncryptionSpi; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; + +/** + * Factory to produce {@code EncryptedFileIO}. + */ +public class EncryptedFileIOFactory implements FileIOFactory { + /** */ + private static final long serialVersionUID = 0L; + + /** + * Factory to produce underlying {@code FileIO} instances. + */ + private FileIOFactory plainIOFactory; + + /** + * Size of plain data page in bytes. + */ + private int pageSize; + + /** + * Size of file header in bytes. + */ + private int headerSize; + + /** + * Group id. + */ + private int groupId; + + /** + * Encryption manager. + */ + private GridEncryptionManager encMgr; + + /** + * Encryption spi. + */ + private EncryptionSpi encSpi; + + /** + * @param plainIOFactory Underlying file factory. + * @param groupId Group id. 
+ * @param pageSize Size of plain data page in bytes. + * @param encMgr Encryption manager. + * @param encSpi Encryption SPI. + */ + EncryptedFileIOFactory(FileIOFactory plainIOFactory, int groupId, int pageSize, GridEncryptionManager encMgr, + EncryptionSpi encSpi) { + this.plainIOFactory = plainIOFactory; + this.groupId = groupId; + this.pageSize = pageSize; + this.encMgr = encMgr; + this.encSpi = encSpi; + } + + /** {@inheritDoc} */ + @Override public FileIO create(File file, OpenOption... modes) throws IOException { + FileIO io = plainIOFactory.create(file, modes); + + return new EncryptedFileIO(io, groupId, pageSize, headerSize, encMgr, encSpi); + } + + /** + * Sets size of file header in bytes. + * + * @param headerSize Size of file header in bytes. + */ + void headerSize(int headerSize) { + this.headerSize = headerSize; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileIOFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileIOFactory.java index 27351853269db..b236000d07c72 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileIOFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileIOFactory.java @@ -22,6 +22,10 @@ import java.io.Serializable; import java.nio.file.OpenOption; +import static java.nio.file.StandardOpenOption.CREATE; +import static java.nio.file.StandardOpenOption.READ; +import static java.nio.file.StandardOpenOption.WRITE; + /** * {@link FileIO} factory definition. */ @@ -33,7 +37,9 @@ public interface FileIOFactory extends Serializable { * @return File I/O interface. * @throws IOException If I/O interface creation was failed. */ - public FileIO create(File file) throws IOException; + default FileIO create(File file) throws IOException { + return create(file, CREATE, READ, WRITE); + } /** * Creates I/O interface for file with specified mode. 
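The FileIOFactory hunk above turns the one-argument create(File) into a default method that delegates to the varargs overload, so each implementation (async, encrypted, and so on) now only has to provide a single create method. A minimal standalone sketch of that pattern, with illustrative names (SimpleIOFactory, DefaultMethodDemo) rather than Ignite's actual API:

```java
import java.io.File;
import java.io.IOException;

interface SimpleIOFactory {
    // The only method an implementation must provide.
    String create(File file, String... modes) throws IOException;

    // Default convenience overload, mirroring create(file, CREATE, READ, WRITE).
    default String create(File file) throws IOException {
        return create(file, "CREATE", "READ", "WRITE");
    }
}

public class DefaultMethodDemo implements SimpleIOFactory {
    @Override public String create(File file, String... modes) {
        // A real factory would open the file; here we just report what was requested.
        return file.getName() + ":" + String.join(",", modes);
    }

    public static void main(String[] args) throws IOException {
        SimpleIOFactory f = new DefaultMethodDemo();

        System.out.println(f.create(new File("wal.dat"))); // prints "wal.dat:CREATE,READ,WRITE"
    }
}
```

Because the default method lives in the interface, AsyncFileIOFactory's own create(File) override could be deleted in this patch without changing callers.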
@@ -43,5 +49,5 @@ public interface FileIOFactory extends Serializable { * @return File I/O interface. * @throws IOException If I/O interface creation was failed. */ - public FileIO create(File file, OpenOption... modes) throws IOException; + FileIO create(File file, OpenOption... modes) throws IOException; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStore.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStore.java index 110807c3922d5..16d74c33c66e6 100755 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStore.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStore.java @@ -27,6 +27,7 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; + import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.configuration.DataStorageConfiguration; @@ -35,8 +36,8 @@ import org.apache.ignite.internal.processors.cache.persistence.AllocatedPageTracker; import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; import org.apache.ignite.internal.util.typedef.internal.U; import static java.nio.file.StandardOpenOption.CREATE; @@ -235,11 +236,8 @@ private long checkFile(FileIO fileIO) throws IOException { return fileSize; } - /** - * @param delete {@code True} to delete file. 
- * @throws StorageException If failed in case of underlying I/O exception. - */ - public void stop(boolean delete) throws StorageException { + /** {@inheritDoc} */ + @Override public void stop(boolean delete) throws StorageException { lock.writeLock().lock(); try { @@ -260,17 +258,16 @@ public void stop(boolean delete) throws StorageException { + ", delete=" + delete + "]", e); } finally { + allocatedTracker.updateTotalAllocatedPages(-1L * allocated.getAndSet(0) / pageSize); + + inited = false; + lock.writeLock().unlock(); } } - /** - * Truncates and deletes partition file. - * - * @param tag New partition tag. - * @throws StorageException If failed in case of underlying I/O exception. - */ - public void truncate(int tag) throws StorageException { + /** {@inheritDoc} */ + @Override public void truncate(int tag) throws StorageException { init(); lock.writeLock().lock(); @@ -298,10 +295,8 @@ public void truncate(int tag) throws StorageException { } } - /** - * - */ - public void beginRecover() { + /** {@inheritDoc} */ + @Override public void beginRecover() { lock.writeLock().lock(); try { @@ -312,10 +307,8 @@ public void beginRecover() { } } - /** - * @throws StorageException If failed in case of underlying I/O exception. 
- */ - public void finishRecover() throws StorageException { + /** {@inheritDoc} */ + @Override public void finishRecover() throws StorageException { lock.writeLock().lock(); try { @@ -370,7 +363,7 @@ public void finishRecover() throws StorageException { pageBuf.position(0); if (!skipCrc) { - int curCrc32 = PureJavaCrc32.calcCrc32(pageBuf, pageSize); + int curCrc32 = FastCrc.calcCrc(pageBuf, pageSize); if ((savedCrc32 ^ curCrc32) != 0) throw new IgniteDataIntegrityViolationException("Failed to read page (CRC validation failed) " + @@ -553,7 +546,8 @@ private void reinit(FileIO fileIO) throws IOException { long off = pageOffset(pageId); assert (off >= 0 && off <= allocated.get()) || recover : - "off=" + U.hexLong(off) + ", allocated=" + U.hexLong(allocated.get()) + ", pageId=" + U.hexLong(pageId); + "off=" + U.hexLong(off) + ", allocated=" + U.hexLong(allocated.get()) + + ", pageId=" + U.hexLong(pageId) + ", file=" + cfgFile.getPath(); assert pageBuf.capacity() == pageSize; assert pageBuf.position() == 0; @@ -625,7 +619,7 @@ private static int calcCrc32(ByteBuffer pageBuf, int pageSize) { try { pageBuf.position(0); - return PureJavaCrc32.calcCrc32(pageBuf, pageSize); + return FastCrc.calcCrc(pageBuf, pageSize); } finally { pageBuf.position(0); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreFactory.java index fe93d0743be07..2fb1d5064022b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreFactory.java @@ -20,6 +20,7 @@ import java.io.File; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.pagemem.PageIdAllocator; +import org.apache.ignite.internal.pagemem.store.PageStore; 
import org.apache.ignite.internal.processors.cache.persistence.AllocatedPageTracker; /** @@ -32,5 +33,5 @@ public interface FilePageStoreFactory { * @param type Data type, can be {@link PageIdAllocator#FLAG_IDX} or {@link PageIdAllocator#FLAG_DATA}. * @param file File Page store file. */ - public FilePageStore createPageStore(byte type, File file, AllocatedPageTracker allocatedTracker) throws IgniteCheckedException; + PageStore createPageStore(byte type, File file, AllocatedPageTracker allocatedTracker) throws IgniteCheckedException; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreManager.java index 101a33d66f038..ed0b356622b8b 100755 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FilePageStoreManager.java @@ -22,7 +22,6 @@ import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; -import java.io.FilenameFilter; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; @@ -32,12 +31,17 @@ import java.nio.file.Path; import java.nio.file.StandardCopyOption; import java.util.Arrays; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Map; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.locks.ReadWriteLock; +import java.util.function.Predicate; +import java.util.stream.Collectors; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -54,12 +58,16 @@ 
import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter; import org.apache.ignite.internal.processors.cache.StoredCacheData; import org.apache.ignite.internal.processors.cache.persistence.AllocatedPageTracker; +import org.apache.ignite.internal.processors.cache.persistence.DataRegion; import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl; +import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.processors.cache.persistence.filename.PdsFolderSettings; import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage; import org.apache.ignite.internal.processors.cache.persistence.snapshot.IgniteCacheSnapshotManager; +import org.apache.ignite.internal.util.GridStripedReadWriteLock; import org.apache.ignite.internal.util.IgniteUtils; +import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.marshaller.Marshaller; @@ -143,6 +151,10 @@ public class FilePageStoreManager extends GridCacheSharedManagerAdapter implemen /** */ private final Set grpsWithoutIdx = Collections.newSetFromMap(new ConcurrentHashMap()); + /** */ + private final GridStripedReadWriteLock initDirLock = + new GridStripedReadWriteLock(Math.max(Runtime.getRuntime().availableProcessors(), 8)); + /** * @param ctx Kernal context. */ @@ -181,6 +193,22 @@ public FilePageStoreManager(GridKernalContext ctx) { "DataStorageConfiguration#storagePath properties). 
" + "Current persistence store directory is: [" + tmpDir + "]"); } + + File[] files = storeWorkDir.listFiles(); + + for (File file : files) { + if (file.isDirectory()) { + File[] tmpFiles = file.listFiles((k, v) -> v.endsWith(CACHE_DATA_TMP_FILENAME)); + + if (tmpFiles != null) { + for (File tmpFile : tmpFiles) { + if (!tmpFile.delete()) + log.warning("Failed to delete temporary cache config file " + + "(make sure Ignite process has enough rights): " + file.getName()); + } + } + } + } } /** {@inheritDoc} */ @@ -226,15 +254,29 @@ public FilePageStoreManager(GridKernalContext ctx) { } } + /** {@inheritDoc} */ + @Override public void cleanupPageStoreIfMatch(Predicate cacheGrpPred, boolean cleanFiles) { + Map filteredStores = idxCacheStores.entrySet().stream() + .filter(e -> cacheGrpPred.test(e.getKey())) + .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); + + IgniteCheckedException ex = shutdown(filteredStores.values(), cleanFiles); + + if (ex != null) + U.error(log, "Failed to gracefully stop page store managers", ex); + + idxCacheStores.entrySet().removeIf(e -> cacheGrpPred.test(e.getKey())); + + U.log(log, "Cleanup cache stores [total=" + filteredStores.keySet().size() + + ", left=" + idxCacheStores.size() + ", cleanFiles=" + cleanFiles + ']'); + } + /** {@inheritDoc} */ @Override public void stop0(boolean cancel) { if (log.isDebugEnabled()) log.debug("Stopping page store manager."); - IgniteCheckedException ex = shutdown(false); - - if (ex != null) - U.error(log, "Failed to gracefully stop page store manager", ex); + cleanupPageStoreIfMatch(p -> true, false); } /** {@inheritDoc} */ @@ -253,8 +295,6 @@ public FilePageStoreManager(GridKernalContext ctx) { " topVer=" + cctx.discovery().topologyVersionEx() + " ]"); stop0(true); - - idxCacheStores.clear(); } /** {@inheritDoc} */ @@ -262,7 +302,7 @@ public FilePageStoreManager(GridKernalContext ctx) { for (CacheStoreHolder holder : idxCacheStores.values()) { holder.idxStore.beginRecover(); - for 
(FilePageStore partStore : holder.partStores) + for (PageStore partStore : holder.partStores) partStore.beginRecover(); } } @@ -273,7 +313,7 @@ public FilePageStoreManager(GridKernalContext ctx) { for (CacheStoreHolder holder : idxCacheStores.values()) { holder.idxStore.finishRecover(); - for (FilePageStore partStore : holder.partStores) + for (PageStore partStore : holder.partStores) partStore.finishRecover(); } } @@ -287,12 +327,15 @@ public FilePageStoreManager(GridKernalContext ctx) { /** {@inheritDoc} */ @Override public void initialize(int cacheId, int partitions, String workingDir, AllocatedPageTracker tracker) throws IgniteCheckedException { + assert storeWorkDir != null; + if (!idxCacheStores.containsKey(cacheId)) { CacheStoreHolder holder = initDir( new File(storeWorkDir, workingDir), cacheId, partitions, - tracker + tracker, + cctx.cacheContext(cacheId) != null && cctx.cacheContext(cacheId).config().isEncryptionEnabled() ); CacheStoreHolder old = idxCacheStores.put(cacheId, holder); @@ -303,6 +346,8 @@ public FilePageStoreManager(GridKernalContext ctx) { /** {@inheritDoc} */ @Override public void initializeForCache(CacheGroupDescriptor grpDesc, StoredCacheData cacheData) throws IgniteCheckedException { + assert storeWorkDir != null; + int grpId = grpDesc.groupId(); if (!idxCacheStores.containsKey(grpId)) { @@ -316,14 +361,19 @@ public FilePageStoreManager(GridKernalContext ctx) { /** {@inheritDoc} */ @Override public void initializeForMetastorage() throws IgniteCheckedException { + assert storeWorkDir != null; + int grpId = MetaStorage.METASTORAGE_CACHE_ID; if (!idxCacheStores.containsKey(grpId)) { + DataRegion dataRegion = cctx.database().dataRegion(GridCacheDatabaseSharedManager.METASTORE_DATA_REGION_NAME); + CacheStoreHolder holder = initDir( new File(storeWorkDir, META_STORAGE_NAME), - grpId, - 1, - AllocatedPageTracker.NO_OP ); + grpId, + PageIdAllocator.METASTORE_PARTITION + 1, + dataRegion.memoryMetrics(), + false); CacheStoreHolder old = 
idxCacheStores.put(grpId, holder); @@ -351,17 +401,17 @@ public FilePageStoreManager(GridKernalContext ctx) { if (overwrite || !Files.exists(filePath) || Files.size(filePath) == 0) { File tmp = new File(file.getParent(), file.getName() + TMP_SUFFIX); - tmp.createNewFile(); + if (tmp.exists() && !tmp.delete()) { + log.warning("Failed to delete temporary cache config file " + + "(make sure Ignite process has enough rights): " + file.getName()); + } // Pre-existing file will be truncated upon stream open. try (OutputStream stream = new BufferedOutputStream(new FileOutputStream(tmp))) { marshaller.marshal(cacheData, stream); } - if (file.exists()) - file.delete(); - - Files.move(tmp.toPath(), file.toPath()); + Files.move(tmp.toPath(), file.toPath(), StandardCopyOption.REPLACE_EXISTING); } } catch (IOException ex) { @@ -377,16 +427,15 @@ public FilePageStoreManager(GridKernalContext ctx) { CacheStoreHolder old = idxCacheStores.remove(grp.groupId()); - assert old != null : "Missing cache store holder [cache=" + grp.cacheOrGroupName() + - ", locNodeId=" + cctx.localNodeId() + ", gridName=" + cctx.igniteInstanceName() + ']'; - - IgniteCheckedException ex = shutdown(old, /*clean files if destroy*/destroy, null); + if (old != null) { + IgniteCheckedException ex = shutdown(old, /*clean files if destroy*/destroy, null); - if (destroy) - removeCacheGroupConfigurationData(grp); + if (destroy) + removeCacheGroupConfigurationData(grp); - if (ex != null) - throw ex; + if (ex != null) + throw ex; + } } /** {@inheritDoc} */ @@ -400,9 +449,7 @@ public FilePageStoreManager(GridKernalContext ctx) { PageStore store = getStore(grpId, partId); - assert store instanceof FilePageStore : store; - - ((FilePageStore)store).truncate(tag); + store.truncate(tag); } /** {@inheritDoc} */ @@ -521,7 +568,8 @@ private CacheStoreHolder initForCache(CacheGroupDescriptor grpDesc, CacheConfigu cacheWorkDir, grpDesc.groupId(), grpDesc.config().getAffinity().partitions(), - allocatedTracker + allocatedTracker, 
+ ccfg.isEncryptionEnabled() ); } @@ -530,13 +578,15 @@ private CacheStoreHolder initForCache(CacheGroupDescriptor grpDesc, CacheConfigu * @param grpId Group ID. * @param partitions Number of partitions. * @param allocatedTracker Metrics updater. + * @param encrypted {@code True} if this cache is encrypted. * @return Cache store holder. * @throws IgniteCheckedException If failed. */ private CacheStoreHolder initDir(File cacheWorkDir, int grpId, int partitions, - AllocatedPageTracker allocatedTracker) throws IgniteCheckedException { + AllocatedPageTracker allocatedTracker, + boolean encrypted) throws IgniteCheckedException { try { boolean dirExisted = checkAndInitCacheWorkDir(cacheWorkDir); @@ -545,19 +595,48 @@ private CacheStoreHolder initDir(File cacheWorkDir, if (dirExisted && !idxFile.exists()) grpsWithoutIdx.add(grpId); - FilePageStoreFactory pageStoreFactory = new FileVersionCheckingFactory( - pageStoreFileIoFactory, pageStoreV1FileIoFactory, igniteCfg.getDataStorageConfiguration()); - FilePageStore idxStore = + FileIOFactory pageStoreFileIoFactory = this.pageStoreFileIoFactory; + FileIOFactory pageStoreV1FileIoFactory = this.pageStoreV1FileIoFactory; + + if (encrypted) { + pageStoreFileIoFactory = new EncryptedFileIOFactory( + this.pageStoreFileIoFactory, + grpId, + pageSize(), + cctx.kernalContext().encryption(), + cctx.gridConfig().getEncryptionSpi()); + + pageStoreV1FileIoFactory = new EncryptedFileIOFactory( + this.pageStoreV1FileIoFactory, + grpId, + pageSize(), + cctx.kernalContext().encryption(), + cctx.gridConfig().getEncryptionSpi()); + } + + FileVersionCheckingFactory pageStoreFactory = new FileVersionCheckingFactory( + pageStoreFileIoFactory, + pageStoreV1FileIoFactory, + igniteCfg.getDataStorageConfiguration()); + + if (encrypted) { + int headerSize = pageStoreFactory.headerSize(pageStoreFactory.latestVersion()); + + ((EncryptedFileIOFactory)pageStoreFileIoFactory).headerSize(headerSize); + 
((EncryptedFileIOFactory)pageStoreV1FileIoFactory).headerSize(headerSize); + } + + PageStore idxStore = pageStoreFactory.createPageStore( PageMemory.FLAG_IDX, idxFile, allocatedTracker); - FilePageStore[] partStores = new FilePageStore[partitions]; + PageStore[] partStores = new PageStore[partitions]; for (int partId = 0; partId < partStores.length; partId++) { - FilePageStore partStore = + PageStore partStore = pageStoreFactory.createPageStore( PageMemory.FLAG_DATA, getPartitionFile(cacheWorkDir, partId), @@ -568,8 +647,9 @@ private CacheStoreHolder initDir(File cacheWorkDir, return new CacheStoreHolder(idxStore, partStores); } - catch (StorageException e) { - cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); + catch (IgniteCheckedException e) { + if (X.hasCause(e, StorageException.class, IOException.class)) + cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e)); throw e; } @@ -594,69 +674,78 @@ private CacheStoreHolder initDir(File cacheWorkDir, private boolean checkAndInitCacheWorkDir(File cacheWorkDir) throws IgniteCheckedException { boolean dirExisted = false; - if (!Files.exists(cacheWorkDir.toPath())) { - try { - Files.createDirectory(cacheWorkDir.toPath()); - } - catch (IOException e) { - throw new IgniteCheckedException("Failed to initialize cache working directory " + - "(failed to create, make sure the work folder has correct permissions): " + - cacheWorkDir.getAbsolutePath(), e); + ReadWriteLock lock = initDirLock.getLock(cacheWorkDir.getName().hashCode()); + + lock.writeLock().lock(); + + try { + if (!Files.exists(cacheWorkDir.toPath())) { + try { + Files.createDirectory(cacheWorkDir.toPath()); + } + catch (IOException e) { + throw new IgniteCheckedException("Failed to initialize cache working directory " + + "(failed to create, make sure the work folder has correct permissions): " + + cacheWorkDir.getAbsolutePath(), e); + } } - } - else { - if (cacheWorkDir.isFile()) - throw 
new IgniteCheckedException("Failed to initialize cache working directory " + - "(a file with the same name already exists): " + cacheWorkDir.getAbsolutePath()); + else { + if (cacheWorkDir.isFile()) + throw new IgniteCheckedException("Failed to initialize cache working directory " + + "(a file with the same name already exists): " + cacheWorkDir.getAbsolutePath()); - File lockF = new File(cacheWorkDir, IgniteCacheSnapshotManager.SNAPSHOT_RESTORE_STARTED_LOCK_FILENAME); + File lockF = new File(cacheWorkDir, IgniteCacheSnapshotManager.SNAPSHOT_RESTORE_STARTED_LOCK_FILENAME); - Path cacheWorkDirPath = cacheWorkDir.toPath(); + Path cacheWorkDirPath = cacheWorkDir.toPath(); - Path tmp = cacheWorkDirPath.getParent().resolve(cacheWorkDir.getName() + TMP_SUFFIX); + Path tmp = cacheWorkDirPath.getParent().resolve(cacheWorkDir.getName() + TMP_SUFFIX); - if (Files.exists(tmp) && Files.isDirectory(tmp) && + if (Files.exists(tmp) && Files.isDirectory(tmp) && Files.exists(tmp.resolve(IgniteCacheSnapshotManager.TEMP_FILES_COMPLETENESS_MARKER))) { - U.warn(log, "Ignite node crashed during the snapshot restore process " + - "(there is a snapshot restore lock file left for cache). But old version of cache was saved. " + - "Trying to restore it. Cache - [" + cacheWorkDir.getAbsolutePath() + ']'); + U.warn(log, "Ignite node crashed during the snapshot restore process " + + "(there is a snapshot restore lock file left for cache). But old version of cache was saved. " + + "Trying to restore it. 
Cache - [" + cacheWorkDir.getAbsolutePath() + ']'); - U.delete(cacheWorkDir); + U.delete(cacheWorkDir); - try { - Files.move(tmp, cacheWorkDirPath, StandardCopyOption.ATOMIC_MOVE); + try { + Files.move(tmp, cacheWorkDirPath, StandardCopyOption.ATOMIC_MOVE); - cacheWorkDirPath.resolve(IgniteCacheSnapshotManager.TEMP_FILES_COMPLETENESS_MARKER).toFile().delete(); - } - catch (IOException e) { - throw new IgniteCheckedException(e); + cacheWorkDirPath.resolve(IgniteCacheSnapshotManager.TEMP_FILES_COMPLETENESS_MARKER).toFile().delete(); + } + catch (IOException e) { + throw new IgniteCheckedException(e); + } } - } - else if (lockF.exists()) { - U.warn(log, "Ignite node crashed during the snapshot restore process " + - "(there is a snapshot restore lock file left for cache). Will remove both the lock file and " + - "incomplete cache directory [cacheDir=" + cacheWorkDir.getAbsolutePath() + ']'); + else if (lockF.exists()) { + U.warn(log, "Ignite node crashed during the snapshot restore process " + + "(there is a snapshot restore lock file left for cache). 
Will remove both the lock file and " + + "incomplete cache directory [cacheDir=" + cacheWorkDir.getAbsolutePath() + ']'); - boolean deleted = U.delete(cacheWorkDir); + boolean deleted = U.delete(cacheWorkDir); - if (!deleted) - throw new IgniteCheckedException("Failed to remove obsolete cache working directory " + - "(remove the directory manually and make sure the work folder has correct permissions): " + - cacheWorkDir.getAbsolutePath()); + if (!deleted) + throw new IgniteCheckedException("Failed to remove obsolete cache working directory " + + "(remove the directory manually and make sure the work folder has correct permissions): " + + cacheWorkDir.getAbsolutePath()); - cacheWorkDir.mkdirs(); - } - else - dirExisted = true; + cacheWorkDir.mkdirs(); + } + else + dirExisted = true; - if (!cacheWorkDir.exists()) - throw new IgniteCheckedException("Failed to initialize cache working directory " + - "(failed to create, make sure the work folder has correct permissions): " + - cacheWorkDir.getAbsolutePath()); + if (!cacheWorkDir.exists()) + throw new IgniteCheckedException("Failed to initialize cache working directory " + + "(failed to create, make sure the work folder has correct permissions): " + + cacheWorkDir.getAbsolutePath()); - if (Files.exists(tmp)) - U.delete(tmp); + if (Files.exists(tmp)) + U.delete(tmp); + } + } + finally { + lock.writeLock().unlock(); } return dirExisted; @@ -732,20 +821,6 @@ else if (lockF.exists()) { for (File file : files) { if (file.isDirectory()) { - File[] tmpFiles = file.listFiles(new FilenameFilter() { - @Override public boolean accept(File dir, String name) { - return name.endsWith(CACHE_DATA_TMP_FILENAME); - } - }); - - if (tmpFiles != null) { - for (File tmpFile: tmpFiles) { - if (!tmpFile.delete()) - log.warning("Failed to delete temporary cache config file" + - "(make sure Ignite process has enough rights):" + file.getName()); - } - } - if (file.getName().startsWith(CACHE_DIR_PREFIX)) { File conf = new File(file, 
CACHE_DATA_FILENAME); @@ -866,10 +941,10 @@ public File cacheWorkDir(boolean isSharedGroup, String cacheOrGroupName) { /** * @param cleanFiles {@code True} if the stores should delete it's files upon close. */ - private IgniteCheckedException shutdown(boolean cleanFiles) { + private IgniteCheckedException shutdown(Collection holders, boolean cleanFiles) { IgniteCheckedException ex = null; - for (CacheStoreHolder holder : idxCacheStores.values()) + for (CacheStoreHolder holder : holders) ex = shutdown(holder, cleanFiles, ex); return ex; @@ -885,7 +960,7 @@ private IgniteCheckedException shutdown(CacheStoreHolder holder, boolean cleanFi @Nullable IgniteCheckedException aggr) { aggr = shutdown(holder.idxStore, cleanFile, aggr); - for (FilePageStore store : holder.partStores) { + for (PageStore store : holder.partStores) { if (store != null) aggr = shutdown(store, cleanFile, aggr); } @@ -932,7 +1007,7 @@ private void removeCacheGroupConfigurationData(CacheGroupContext ctx) throws Ign if (file.exists()) { if (!file.delete()) - throw new IgniteCheckedException("Failed to delete cache configuration:" + cacheCfg.getName()); + throw new IgniteCheckedException("Failed to delete cache configuration: " + cacheCfg.getName()); } } @@ -942,7 +1017,7 @@ private void removeCacheGroupConfigurationData(CacheGroupContext ctx) throws Ign * @param aggr Aggregating exception. * @return Aggregating exception, if error occurred. */ - private IgniteCheckedException shutdown(FilePageStore store, boolean cleanFile, IgniteCheckedException aggr) { + private IgniteCheckedException shutdown(PageStore store, boolean cleanFile, IgniteCheckedException aggr) { try { if (store != null) store.stop(cleanFile); @@ -957,6 +1032,39 @@ private IgniteCheckedException shutdown(FilePageStore store, boolean cleanFile, return aggr; } + /** + * Return cache store holder. + * + * @param grpId Cache group ID. + * @return Cache store holder. 
+ */ + private CacheStoreHolder getHolder(int grpId) throws IgniteCheckedException { + try { + return idxCacheStores.computeIfAbsent(grpId, (key) -> { + CacheGroupDescriptor gDesc = cctx.cache().cacheGroupDescriptors().get(grpId); + + CacheStoreHolder holder0 = null; + + if (gDesc != null) { + if (CU.isPersistentCache(gDesc.config(), cctx.gridConfig().getDataStorageConfiguration())) { + try { + holder0 = initForCache(gDesc, gDesc.config()); + } catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + } + } + + return holder0; + }); + } catch (IgniteException ex) { + if (X.hasCause(ex, IgniteCheckedException.class)) + throw ex.getCause(IgniteCheckedException.class); + else + throw ex; + } + } + /** * @param grpId Cache group ID. * @param partId Partition ID. @@ -966,7 +1074,7 @@ private IgniteCheckedException shutdown(FilePageStore store, boolean cleanFile, * Note: visible for testing. */ public PageStore getStore(int grpId, int partId) throws IgniteCheckedException { - CacheStoreHolder holder = idxCacheStores.get(grpId); + CacheStoreHolder holder = getHolder(grpId); if (holder == null) throw new IgniteCheckedException("Failed to get page store for the given cache ID " + @@ -978,7 +1086,7 @@ public PageStore getStore(int grpId, int partId) throws IgniteCheckedException { if (partId > PageIdAllocator.MAX_PARTITION_ID) throw new IgniteCheckedException("Partition ID is reserved: " + partId); - FilePageStore store = holder.partStores[partId]; + PageStore store = holder.partStores[partId]; if (store == null) throw new IgniteCheckedException("Failed to get page store for the given partition ID " + @@ -1038,15 +1146,15 @@ public int pageSize() { */ private static class CacheStoreHolder { /** Index store. */ - private final FilePageStore idxStore; + private final PageStore idxStore; /** Partition stores. 
*/ - private final FilePageStore[] partStores; + private final PageStore[] partStores; /** * */ - CacheStoreHolder(FilePageStore idxStore, FilePageStore[] partStores) { + public CacheStoreHolder(PageStore idxStore, PageStore[] partStores) { this.idxStore = idxStore; this.partStores = partStores; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileVersionCheckingFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileVersionCheckingFactory.java index bc938a57912fc..af478dec6250c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileVersionCheckingFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/FileVersionCheckingFactory.java @@ -62,14 +62,6 @@ public FileVersionCheckingFactory( this.memCfg = memCfg; } - /** - * @param fileIOFactory File IO factory for V1 & V2 page store and for version checking. - * @param memCfg Memory configuration. - */ - public FileVersionCheckingFactory(FileIOFactory fileIOFactory, DataStorageConfiguration memCfg) { - this(fileIOFactory, fileIOFactory, memCfg); - } - /** {@inheritDoc} */ @Override public FilePageStore createPageStore( byte type, @@ -140,4 +132,21 @@ public FilePageStore createPageStore( throw new IllegalArgumentException("Unknown version of file page store: " + ver + " for file [" + file.getAbsolutePath() + "]"); } } + + /** + * @param ver Version. + * @return Header size. 
+ */ + public int headerSize(int ver) { + switch (ver) { + case FilePageStore.VERSION: + return FilePageStore.HEADER_SIZE; + + case FilePageStoreV2.VERSION: + return memCfg.getPageSize(); + + default: + throw new IllegalArgumentException("Unknown version of file page store."); + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/RandomAccessFileIOFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/RandomAccessFileIOFactory.java index 856ba1c749b45..3fa3e2dded52d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/RandomAccessFileIOFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/RandomAccessFileIOFactory.java @@ -21,10 +21,6 @@ import java.io.IOException; import java.nio.file.OpenOption; -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; - /** * File I/O factory which provides RandomAccessFileIO implementation of FileIO. */ @@ -32,11 +28,6 @@ public class RandomAccessFileIOFactory implements FileIOFactory { /** */ private static final long serialVersionUID = 0L; - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { return new RandomAccessFileIO(file, modes); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/freelist/PagesList.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/freelist/PagesList.java index 831465d9938d4..f1cc32a376c61 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/freelist/PagesList.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/freelist/PagesList.java @@ -276,7 +276,9 @@ public void saveMetadata() throws IgniteCheckedException { int tailIdx = 0; while (tailIdx < tails.length) { - int written = curPage != 0L ? curIo.addTails(pageMem.pageSize(), curAddr, bucket, tails, tailIdx) : 0; + int written = curPage != 0L ? + curIo.addTails(pageMem.realPageSize(grpId), curAddr, bucket, tails, tailIdx) : + 0; if (written == 0) { if (nextPageId == 0L) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetaStorage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetaStorage.java index 556d9974ecc2e..6559b07157271 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetaStorage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetaStorage.java @@ -17,24 +17,37 @@ package org.apache.ignite.internal.processors.cache.persistence.metastorage; +import java.io.Closeable; +import java.io.File; +import java.io.IOException; +import java.io.RandomAccessFile; import java.io.Serializable; import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; +import java.util.Iterator; +import java.util.List; import 
java.util.Map; import java.util.concurrent.Executor; import java.util.concurrent.atomic.AtomicLong; +import java.util.stream.Stream; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.internal.pagemem.FullPageId; +import org.apache.ignite.internal.pagemem.PageIdAllocator; import org.apache.ignite.internal.pagemem.PageIdUtils; import org.apache.ignite.internal.pagemem.PageMemory; import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; -import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.pagemem.wal.WALPointer; import org.apache.ignite.internal.pagemem.wal.record.MetastoreDataRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.MetaPageInitRecord; +import org.apache.ignite.internal.processors.cache.CacheGroupContext; +import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.IncompleteObject; import org.apache.ignite.internal.processors.cache.persistence.DataRegion; @@ -43,6 +56,7 @@ import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.RootPage; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; import org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx; import org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO; @@ -57,11 +71,14 @@ import org.apache.ignite.internal.util.lang.GridCursor; import org.apache.ignite.internal.util.typedef.internal.CU; import 
org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.jetbrains.annotations.NotNull; +import static org.apache.ignite.internal.pagemem.PageIdAllocator.FLAG_DATA; +import static org.apache.ignite.internal.pagemem.PageIdAllocator.OLD_METASTORE_PARTITION; import static org.apache.ignite.internal.pagemem.PageIdUtils.itemId; import static org.apache.ignite.internal.pagemem.PageIdUtils.pageId; @@ -75,9 +92,19 @@ public class MetaStorage implements DbCheckpointListener, ReadOnlyMetastorage, R /** */ public static final int METASTORAGE_CACHE_ID = CU.cacheId(METASTORAGE_CACHE_NAME); + /** This flag is used ONLY FOR TESTING the migration of a metastorage from Part 0 to Part 1. */ + public static boolean PRESERVE_LEGACY_METASTORAGE_PARTITION_ID = false; + /** Marker for removed entry. */ private static final byte[] TOMBSTONE = new byte[0]; + /** Temporary metastorage memory size. */ + private static final int TEMPORARY_METASTORAGE_IN_MEMORY_SIZE = 128 * 1024 * 1024; + + /** Temporary metastorage buffer size (file). */ + private static final int TEMPORARY_METASTORAGE_BUFFER_SIZE = 1024 * 1024; + + /** */ private final IgniteWriteAheadLogManager wal; @@ -97,7 +124,7 @@ public class MetaStorage implements DbCheckpointListener, ReadOnlyMetastorage, R private DataRegionMetricsImpl regionMetrics; /** */ - private boolean readOnly; + private final boolean readOnly; /** */ private boolean empty; @@ -120,6 +147,12 @@ public class MetaStorage implements DbCheckpointListener, ReadOnlyMetastorage, R /** */ private final FailureProcessor failureProcessor; + /** Partition id. */ + private int partId; + + /** Cctx. 
*/ + private final GridCacheSharedContext cctx; + /** */ public MetaStorage( GridCacheSharedContext cctx, @@ -127,6 +160,7 @@ public MetaStorage( DataRegionMetricsImpl regionMetrics, boolean readOnly ) { + this.cctx = cctx; wal = cctx.wal(); this.dataRegion = dataRegion; this.regionMetrics = regionMetrics; @@ -136,13 +170,80 @@ public MetaStorage( } /** */ - public MetaStorage(GridCacheSharedContext cctx, DataRegion memPlc, DataRegionMetricsImpl memMetrics) { - this(cctx, memPlc, memMetrics, false); + public void init(GridCacheDatabaseSharedManager db) throws IgniteCheckedException { + regionMetrics.clear(); + + initInternal(db); + + if (!PRESERVE_LEGACY_METASTORAGE_PARTITION_ID) { + GridCacheProcessor gcProcessor = cctx.kernalContext().cache(); + + if (partId == OLD_METASTORE_PARTITION) + gcProcessor.setTmpStorage(copyDataToTmpStorage()); + else if (gcProcessor.getTmpStorage() != null) { + restoreDataFromTmpStorage(gcProcessor.getTmpStorage()); + + gcProcessor.setTmpStorage(null); + + // remove old partitions + CacheGroupContext cgc = cctx.cache().cacheGroup(METASTORAGE_CACHE_ID); + + if (cgc != null) { + db.schedulePartitionDestroy(METASTORAGE_CACHE_ID, OLD_METASTORE_PARTITION); + + db.schedulePartitionDestroy(METASTORAGE_CACHE_ID, PageIdAllocator.INDEX_PARTITION); + } + } + } } - /** */ - public void init(IgniteCacheDatabaseSharedManager db) throws IgniteCheckedException { - getOrAllocateMetas(); + /** + * Copying all data from the 'meta' to temporary storage. + * + * @return Target temporary storage + */ + private TmpStorage copyDataToTmpStorage() throws IgniteCheckedException { + TmpStorage tmpStorage = new TmpStorage(TEMPORARY_METASTORAGE_IN_MEMORY_SIZE, log); + + GridCursor cur = tree.find(null, null); + + while (cur.next()) { + MetastorageDataRow row = cur.get(); + + tmpStorage.add(row.key(), row.value()); + } + + return tmpStorage; + } + + /** + * Data recovery from temporary storage + * + * @param tmpStorage temporary storage. 
+ */ + private void restoreDataFromTmpStorage(TmpStorage tmpStorage) throws IgniteCheckedException { + for (Iterator> it = tmpStorage.stream().iterator(); it.hasNext(); ) { + IgniteBiTuple t = it.next(); + + putData(t.get1(), t.get2()); + } + + try { + tmpStorage.close(); + } + catch (IOException e) { + log.error(e.getMessage(), e); + } + } + + /** + * @param db Database. + */ + private void initInternal(IgniteCacheDatabaseSharedManager db) throws IgniteCheckedException { + if (PRESERVE_LEGACY_METASTORAGE_PARTITION_ID) + getOrAllocateMetas(partId = PageIdAllocator.OLD_METASTORE_PARTITION); + else if (!readOnly || getOrAllocateMetas(partId = PageIdAllocator.OLD_METASTORE_PARTITION)) + getOrAllocateMetas(partId = PageIdAllocator.METASTORE_PARTITION); if (!empty) { freeList = new FreeListImpl(METASTORAGE_CACHE_ID, "metastorage", @@ -152,7 +253,7 @@ public void init(IgniteCacheDatabaseSharedManager db) throws IgniteCheckedExcept MetastorageRowStore rowStore = new MetastorageRowStore(freeList, db); tree = new MetastorageTree(METASTORAGE_CACHE_ID, dataRegion.pageMemory(), wal, rmvId, - freeList, rowStore, treeRoot.pageId().pageId(), treeRoot.isAllocated(), failureProcessor); + freeList, rowStore, treeRoot.pageId().pageId(), treeRoot.isAllocated(), failureProcessor, partId); if (!readOnly) ((GridCacheDatabaseSharedManager)db).addCheckpointListener(this); @@ -227,6 +328,23 @@ public void init(IgniteCacheDatabaseSharedManager db) throws IgniteCheckedExcept return res; } + /** + * Read all items from metastore. 
+ */ + public Collection> readAll() throws IgniteCheckedException { + ArrayList> res = new ArrayList<>(); + + GridCursor cur = tree.find(null, null); + + while (cur.next()) { + MetastorageDataRow row = cur.get(); + + res.add(new IgniteBiTuple<>(row.key(), row.value())); + } + + return res; + } + /** {@inheritDoc} */ @Override public void write(@NotNull String key, @NotNull Serializable val) throws IgniteCheckedException { assert val != null; @@ -315,11 +433,16 @@ private void checkRootsPageIdFlag(long treeRoot, long reuseListRoot) throws Stor + U.hexLong(reuseListRoot) + ", METASTORAGE_CACHE_ID=" + METASTORAGE_CACHE_ID); } - /** */ - private void getOrAllocateMetas() throws IgniteCheckedException { - PageMemoryEx pageMem = (PageMemoryEx)dataRegion.pageMemory(); + /** + * Initializes the selected partition for use as MetaStorage. + * + * @param partId Partition id. + * @return {@code True} if the partition is empty. + */ + private boolean getOrAllocateMetas(int partId) throws IgniteCheckedException { + empty = false; - int partId = 0; + PageMemoryEx pageMem = (PageMemoryEx)dataRegion.pageMemory(); long partMetaId = pageMem.partitionMetaPageId(METASTORAGE_CACHE_ID, partId); long partMetaPage = pageMem.acquirePage(METASTORAGE_CACHE_ID, partMetaId); @@ -331,7 +454,7 @@ private void getOrAllocateMetas() throws IgniteCheckedException { if (PageIO.getType(pageAddr) != PageIO.T_PART_META) { empty = true; - return; + return true; } PagePartitionMetaIO io = PageIO.getPageIO(pageAddr); @@ -361,6 +484,7 @@ private void getOrAllocateMetas() throws IgniteCheckedException { // Initialize new page. PagePartitionMetaIO io = PagePartitionMetaIO.VERSIONS.latest(); + // MetaStorage is never encrypted, so realPageSize == pageSize. 
io.initNewPage(pageAddr, partMetaId, pageMem.pageSize()); treeRoot = pageMem.allocatePage(METASTORAGE_CACHE_ID, partId, PageMemory.FLAG_DATA); @@ -406,6 +530,8 @@ private void getOrAllocateMetas() throws IgniteCheckedException { finally { pageMem.releasePage(METASTORAGE_CACHE_ID, partMetaId, partMetaPage); } + + return false; } /** @@ -451,8 +577,6 @@ public PageMemory pageMemory() { private void saveStoreMetadata() throws IgniteCheckedException { PageMemoryEx pageMem = (PageMemoryEx) pageMemory(); - int partId = 0; - long partMetaId = pageMem.partitionMetaPageId(METASTORAGE_CACHE_ID, partId); long partMetaPage = pageMem.acquirePage(METASTORAGE_CACHE_ID, partMetaId); @@ -498,7 +622,7 @@ public void applyUpdate(String key, byte[] value) throws IgniteCheckedException } /** */ - public static class FreeListImpl extends AbstractFreeList { + public class FreeListImpl extends AbstractFreeList { /** {@inheritDoc} */ FreeListImpl(int cacheId, String name, DataRegionMetricsImpl regionMetrics, DataRegion dataRegion, ReuseList reuseList, @@ -511,6 +635,11 @@ public static class FreeListImpl extends AbstractFreeList { return SimpleDataPageIO.VERSIONS; } + /** {@inheritDoc} */ + @Override protected long allocatePageNoReuse() throws IgniteCheckedException { + return pageMem.allocatePage(grpId, partId, FLAG_DATA); + } + /** * Read row from data pages. */ @@ -537,6 +666,7 @@ final MetastorageDataRow readRow(String key, long link) try { SimpleDataPageIO io = (SimpleDataPageIO)ioVersions().forPage(pageAddr); + // MetaStorage is never encrypted, so realPageSize == pageSize. DataPagePayload data = io.readPayload(pageAddr, itemId(nextLink), pageMem.pageSize()); nextLink = data.nextLink(); @@ -592,4 +722,259 @@ final MetastorageDataRow readRow(String key, long link) return new MetastorageDataRow(link, key, incomplete.data()); } } + + /** + * Internal temporary storage. + */ + private interface TmpStorageInternal extends Closeable { + /** + * Put data. + * + * @param key Key. 
+ * @param val Value. + */ + boolean add(String key, byte[] val) throws IOException; + + /** + * Read data from storage. + */ + Stream<IgniteBiTuple<String, byte[]>> stream() throws IOException; + } + + /** + * Temporary storage (memory). + */ + private static class MemoryTmpStorage implements TmpStorageInternal { + /** Buffer. */ + final ByteBuffer buf; + + /** Size. */ + int size; + + /** + * @param size Size. + */ + MemoryTmpStorage(int size) { + buf = ByteBuffer.allocateDirect(size); + } + + /** {@inheritDoc} */ + @Override public boolean add(String key, byte[] val) { + byte[] keyData = key.getBytes(StandardCharsets.UTF_8); + + if (val.length + keyData.length + 8 > buf.remaining()) + return false; + + buf.putInt(keyData.length).putInt(val.length).put(keyData).put(val); + + size++; + + return true; + } + + /** {@inheritDoc} */ + @Override public Stream<IgniteBiTuple<String, byte[]>> stream() { + buf.flip(); + + return Stream.generate(() -> { + int keyLen = buf.getInt(); + int dataLen = buf.getInt(); + + byte[] tmpBuf = new byte[Math.max(keyLen, dataLen)]; + + buf.get(tmpBuf, 0, keyLen); + + String key = new String(tmpBuf, 0, keyLen, StandardCharsets.UTF_8); + + buf.get(tmpBuf, 0, dataLen); + + return new IgniteBiTuple<>(key, tmpBuf.length > dataLen ? Arrays.copyOf(tmpBuf, dataLen) : tmpBuf); + }).limit(size); + } + + /** {@inheritDoc} */ + @Override public void close() throws IOException { + } + } + + /** + * Temporary storage (file). + */ + private static class FileTmpStorage implements TmpStorageInternal { + /** Cache. */ + final ByteBuffer cache = ByteBuffer.allocateDirect(TEMPORARY_METASTORAGE_BUFFER_SIZE); + + /** File. */ + RandomAccessFile file; + + /** Size.
*/ + long size; + + /** {@inheritDoc} */ + @Override public boolean add(String key, byte[] val) throws IOException { + if (file == null) + file = new RandomAccessFile(File.createTempFile("m_storage", "bin"), "rw"); + + byte[] keyData = key.getBytes(StandardCharsets.UTF_8); + + if (val.length + keyData.length + 8 > cache.remaining()) + flushCache(false); + + cache.putInt(keyData.length).putInt(val.length).put(keyData).put(val); + + size++; + + return true; + } + + /** {@inheritDoc} */ + @Override public Stream<IgniteBiTuple<String, byte[]>> stream() throws IOException { + if (file == null) + return Stream.empty(); + + flushCache(true); + + file.getChannel().position(0); + + readToCache(); + + return Stream.generate(() -> { + if (cache.remaining() <= 8) { + cache.compact(); + + try { + readToCache(); + } + catch (IOException e) { + throw new IgniteException(e); + } + } + + int keyLen = cache.getInt(); + int dataLen = cache.getInt(); + + if (cache.remaining() < keyLen + dataLen) { + cache.compact(); + + try { + readToCache(); + } + catch (IOException e) { + throw new IgniteException(e); + } + } + + byte[] tmpBuf = new byte[Math.max(keyLen, dataLen)]; + + cache.get(tmpBuf, 0, keyLen); + + String key = new String(tmpBuf, 0, keyLen, StandardCharsets.UTF_8); + + cache.get(tmpBuf, 0, dataLen); + + return new IgniteBiTuple<>(key, tmpBuf.length > dataLen ? Arrays.copyOf(tmpBuf, dataLen) : tmpBuf); + }).limit(size); + } + + /** {@inheritDoc} */ + @Override public void close() throws IOException { + file.close(); + } + + /** + * Read data to cache. + */ + private void readToCache() throws IOException { + int len = (int)Math.min(file.length() - file.getChannel().position(), cache.remaining()); + + while (len > 0) + len -= file.getChannel().read(cache); + + cache.flip(); + } + + /** + * Write cache to file. + * + * @param force Whether to also force file metadata updates.
+ */ + private void flushCache(boolean force) throws IOException { + if (cache.position() > 0) { + cache.flip(); + + while (cache.remaining() > 0) + file.getChannel().write(cache); + + cache.clear(); + } + + file.getChannel().force(force); + } + } + + /** + * Temporary storage. + */ + public static class TmpStorage implements Closeable { + /** Chain of internal storages. */ + final List<TmpStorageInternal> chain = new ArrayList<>(2); + + /** Current internal storage. */ + TmpStorageInternal current; + + /** Logger. */ + final IgniteLogger log; + + /** + * @param memBufSize Memory buffer size. + * @param log Logger. + */ + TmpStorage(int memBufSize, IgniteLogger log) { + this.log = log; + + chain.add(current = new MemoryTmpStorage(memBufSize)); + } + + /** + * Put data. + * + * @param key Key. + * @param val Value. + */ + public void add(String key, byte[] val) throws IgniteCheckedException { + try { + while (!current.add(key, val)) + chain.add(current = new FileTmpStorage()); + } + catch (IOException e) { + throw new IgniteCheckedException(e); + } + } + + /** + * Read data from storage. + */ + public Stream<IgniteBiTuple<String, byte[]>> stream() { + return chain.stream().flatMap(storage -> { + try { + return storage.stream(); + } + catch (IOException e) { + throw new IgniteException(e); + } + }); + } + + /** {@inheritDoc} */ + @Override public void close() throws IOException { + for (TmpStorageInternal storage : chain) { + try { + storage.close(); + } + catch (IOException ex) { + log.error(ex.getMessage(), ex); + } + } + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageDataRow.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageDataRow.java index 5e2660bc3aa97..2d7b0a66e47a4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageDataRow.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageDataRow.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache.persistence.metastorage; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.pagemem.PageIdAllocator; import org.apache.ignite.internal.processors.cache.persistence.Storable; /** @@ -62,7 +63,7 @@ public int hash() { /** {@inheritDoc} */ @Override public int partition() { - return 0; + return MetaStorage.PRESERVE_LEGACY_METASTORAGE_PARTITION_ID ? PageIdAllocator.OLD_METASTORE_PARTITION : PageIdAllocator.METASTORE_PARTITION; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageLifecycleListener.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageLifecycleListener.java index 8ab418c4f4d5e..12cc4468fc010 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageLifecycleListener.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageLifecycleListener.java @@ -33,7 +33,7 @@ public interface MetastorageLifecycleListener { * * @param metastorage Read-only meta storage. */ - public void onReadyForRead(ReadOnlyMetastorage metastorage) throws IgniteCheckedException; + default void onReadyForRead(ReadOnlyMetastorage metastorage) throws IgniteCheckedException { } /** * Fully functional metastore capable of performing reading and writing operations. @@ -43,5 +43,5 @@ public interface MetastorageLifecycleListener { * * @param metastorage Fully functional meta storage.
*/ - public void onReadyForReadWrite(ReadWriteMetastorage metastorage) throws IgniteCheckedException; + default void onReadyForReadWrite(ReadWriteMetastorage metastorage) throws IgniteCheckedException { } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageTree.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageTree.java index 00db5cd791163..72e7f510c9f9c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageTree.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/MetastorageTree.java @@ -31,6 +31,9 @@ import org.apache.ignite.internal.processors.failure.FailureProcessor; import org.jetbrains.annotations.Nullable; +import static org.apache.ignite.internal.pagemem.PageIdAllocator.FLAG_DATA; +import static org.apache.ignite.internal.pagemem.PageIdAllocator.FLAG_IDX; + /** * */ @@ -41,6 +44,9 @@ public class MetastorageTree extends BPlusTree io = io(rootAddr); + + return io.getCount(rootAddr) == 0; + } + finally { + readUnlock(rootId, rootPage, rootAddr); + } + } + finally { + releasePage(rootId, rootPage); + } + } + } + /** * Returns number of elements in the tree by scanning pages of the bottom (leaf) level.
* Since a concurrent access is permitted, there is no guarantee about diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO.java index 5e1cb8104c9ea..349e87748dccc 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO.java @@ -159,7 +159,7 @@ public final boolean isLeaf() { /** * @param pageAddr Page address. - * @param pageSize Page size. + * @param pageSize Page size without encryption overhead. * @return Max items count. */ public abstract int getMaxCount(long pageAddr, int pageSize); @@ -331,7 +331,7 @@ public void remove(long pageAddr, int idx, int cnt) throws IgniteCheckedExceptio * @param leftPageAddr Left page address. * @param rightPageAddr Right page address. * @param emptyBranch We are merging an empty branch. - * @param pageSize Page size. + * @param pageSize Page size without encryption overhead. * @return {@code false} If we were not able to merge. * @throws IgniteCheckedException If failed. 
*/ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/DataPageIO.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/DataPageIO.java index 95a46eb2eafdb..f9e6bcc0152b1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/DataPageIO.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/DataPageIO.java @@ -24,6 +24,7 @@ import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils; import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; +import org.apache.ignite.internal.processors.cache.tree.mvcc.data.MvccUpdateResult; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.GridStringBuilder; @@ -71,11 +72,14 @@ protected DataPageIO(int ver) { if (mvccInfoSize > 0) { assert MvccUtils.mvccVersionIsValid(row.mvccCoordinatorVersion(), row.mvccCounter(), row.mvccOperationCounter()); + byte keyAbsentBeforeFlag = (byte)((row instanceof MvccUpdateResult) && + ((MvccUpdateResult)row).isKeyAbsentBefore() ? 1 : 0); + // xid_min. PageUtils.putLong(addr, 0, row.mvccCoordinatorVersion()); PageUtils.putLong(addr, 8, row.mvccCounter()); PageUtils.putInt(addr, 16, row.mvccOperationCounter() | (row.mvccTxState() << MVCC_HINTS_BIT_OFF) | - ((row.isKeyAbsentBefore() ? 
1 : 0) << MVCC_KEY_ABSENT_BEFORE_OFF)); + (keyAbsentBeforeFlag << MVCC_KEY_ABSENT_BEFORE_OFF)); assert row.newMvccCoordinatorVersion() == 0 || MvccUtils.mvccVersionIsValid(row.newMvccCoordinatorVersion(), row.newMvccCounter(), row.newMvccOperationCounter()); @@ -84,7 +88,7 @@ protected DataPageIO(int ver) { PageUtils.putLong(addr, 20, row.newMvccCoordinatorVersion()); PageUtils.putLong(addr, 28, row.newMvccCounter()); PageUtils.putInt(addr, 36, row.newMvccOperationCounter() | (row.newMvccTxState() << MVCC_HINTS_BIT_OFF) | - ((row.isKeyAbsentBefore() ? 1 : 0) << MVCC_KEY_ABSENT_BEFORE_OFF)); + (keyAbsentBeforeFlag << MVCC_KEY_ABSENT_BEFORE_OFF)); addr += mvccInfoSize; } @@ -208,6 +212,9 @@ private int writeFragment( final int len = Math.min(curLen - rowOff, payloadSize); + byte keyAbsentBeforeFlag = (byte)((row instanceof MvccUpdateResult) && + ((MvccUpdateResult)row).isKeyAbsentBefore() ? 1 : 0); + if (type == EXPIRE_TIME) writeExpireTimeFragment(buf, row.expireTime(), rowOff, len, prevLen); else if (type == CACHE_ID) @@ -217,11 +224,11 @@ else if (type == MVCC_INFO) row.mvccCoordinatorVersion(), row.mvccCounter(), row.mvccOperationCounter() | (row.mvccTxState() << MVCC_HINTS_BIT_OFF) | - ((row.isKeyAbsentBefore() ? 1 : 0) << MVCC_KEY_ABSENT_BEFORE_OFF), + (keyAbsentBeforeFlag << MVCC_KEY_ABSENT_BEFORE_OFF), row.newMvccCoordinatorVersion(), row.newMvccCounter(), row.newMvccOperationCounter() | (row.newMvccTxState() << MVCC_HINTS_BIT_OFF) | - ((row.isKeyAbsentBefore() ? 1 : 0) << MVCC_KEY_ABSENT_BEFORE_OFF), + (keyAbsentBeforeFlag << MVCC_KEY_ABSENT_BEFORE_OFF), len); else if (type != VERSION) { // Write key or value. 
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PageIO.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PageIO.java index 22d242012e16a..ee61e252dd0c9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PageIO.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PageIO.java @@ -21,6 +21,7 @@ import java.util.ArrayList; import java.util.List; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.spi.encryption.EncryptionSpi; import org.apache.ignite.internal.pagemem.PageIdUtils; import org.apache.ignite.internal.pagemem.PageMemory; import org.apache.ignite.internal.pagemem.PageUtils; @@ -293,7 +294,7 @@ public static int getType(ByteBuffer buf) { } /** - * @param pageAddr Page addres. + * @param pageAddr Page address. * @return Page type. */ public static int getType(long pageAddr) { @@ -503,6 +504,8 @@ public final int getVersion() { * @param pageAddr Page address. * @param pageId Page ID. * @param pageSize Page size. + * + * @see EncryptionSpi#encryptedSize(int) */ public void initNewPage(long pageAddr, long pageId, int pageSize) { setType(pageAddr, getType()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PagePartitionCountersIO.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PagePartitionCountersIO.java index 68e6e2f2d24eb..a3e92cfe45155 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PagePartitionCountersIO.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/PagePartitionCountersIO.java @@ -107,7 +107,7 @@ public void setNextCountersPageId(long pageAddr, long partMetaPageId) { } /** - * @param pageSize Page size. 
+ * @param pageSize Page size without encryption overhead. * @param pageAddr Page address. * @param cacheSizes Serialized cache size items (pairs of cache ID and its size). * @return Number of written pairs. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/util/PageHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/util/PageHandler.java index 98c6f1f766cc1..113586806dc25 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/util/PageHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/util/PageHandler.java @@ -23,8 +23,6 @@ import org.apache.ignite.internal.pagemem.PageSupport; import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; import org.apache.ignite.internal.pagemem.wal.record.delta.InitNewPageRecord; -import org.apache.ignite.internal.processors.cache.GridCacheSharedManager; -import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; import org.apache.ignite.internal.util.GridUnsafe; @@ -211,7 +209,7 @@ public static void readUnlock( /** * @param pageMem Page memory. - * @param cacheId Cache ID. + * @param grpId Group ID. * @param pageId Page ID. * @param init IO for new page initialization. * @param wal Write ahead log. @@ -220,20 +218,20 @@ public static void readUnlock( */ public static void initPage( PageMemory pageMem, - int cacheId, + int grpId, long pageId, PageIO init, IgniteWriteAheadLogManager wal, PageLockListener lsnr ) throws IgniteCheckedException { - Boolean res = writePage(pageMem, cacheId, pageId, lsnr, PageHandler.NO_OP, init, wal, null, null, 0, FALSE); + Boolean res = writePage(pageMem, grpId, pageId, lsnr, PageHandler.NO_OP, init, wal, null, null, 0, FALSE); assert res != FALSE; } /** * @param pageMem Page memory. 
- * @param cacheId Cache ID. + * @param grpId Group ID. * @param pageId Page ID. * @param lsnr Lock listener. * @param h Handler. @@ -248,7 +246,7 @@ public static void initPage( */ public static R writePage( PageMemory pageMem, - int cacheId, + int grpId, final long pageId, PageLockListener lsnr, PageHandler h, @@ -260,9 +258,9 @@ public static R writePage( R lockFailed ) throws IgniteCheckedException { boolean releaseAfterWrite = true; - long page = pageMem.acquirePage(cacheId, pageId); + long page = pageMem.acquirePage(grpId, pageId); try { - long pageAddr = writeLock(pageMem, cacheId, pageId, page, lsnr, false); + long pageAddr = writeLock(pageMem, grpId, pageId, page, lsnr, false); if (pageAddr == 0L) return lockFailed; @@ -272,13 +270,13 @@ public static R writePage( try { if (init != null) { // It is a new page and we have to initialize it. - doInitPage(pageMem, cacheId, pageId, page, pageAddr, init, wal); + doInitPage(pageMem, grpId, pageId, page, pageAddr, init, wal); walPlc = FALSE; } else init = PageIO.getPageIO(pageAddr); - R res = h.run(cacheId, pageId, page, pageAddr, init, walPlc, arg, intArg); + R res = h.run(grpId, pageId, page, pageAddr, init, walPlc, arg, intArg); ok = true; @@ -287,19 +285,19 @@ public static R writePage( finally { assert PageIO.getCrc(pageAddr) == 0; //TODO GG-11480 - if (releaseAfterWrite = h.releaseAfterWrite(cacheId, pageId, page, pageAddr, arg, intArg)) - writeUnlock(pageMem, cacheId, pageId, page, pageAddr, lsnr, walPlc, ok); + if (releaseAfterWrite = h.releaseAfterWrite(grpId, pageId, page, pageAddr, arg, intArg)) + writeUnlock(pageMem, grpId, pageId, page, pageAddr, lsnr, walPlc, ok); } } finally { if (releaseAfterWrite) - pageMem.releasePage(cacheId, pageId, page); + pageMem.releasePage(grpId, pageId, page); } } /** * @param pageMem Page memory. - * @param cacheId Cache ID. + * @param grpId Group ID. * @param pageId Page ID. * @param page Page pointer. * @param lsnr Lock listener. 
@@ -315,7 +313,7 @@ public static R writePage( */ public static R writePage( PageMemory pageMem, - int cacheId, + int grpId, long pageId, long page, PageLockListener lsnr, @@ -327,7 +325,7 @@ public static R writePage( int intArg, R lockFailed ) throws IgniteCheckedException { - long pageAddr = writeLock(pageMem, cacheId, pageId, page, lsnr, false); + long pageAddr = writeLock(pageMem, grpId, pageId, page, lsnr, false); if (pageAddr == 0L) return lockFailed; @@ -337,13 +335,13 @@ public static R writePage( try { if (init != null) { // It is a new page and we have to initialize it. - doInitPage(pageMem, cacheId, pageId, page, pageAddr, init, wal); + doInitPage(pageMem, grpId, pageId, page, pageAddr, init, wal); walPlc = FALSE; } else init = PageIO.getPageIO(pageAddr); - R res = h.run(cacheId, pageId, page, pageAddr, init, walPlc, arg, intArg); + R res = h.run(grpId, pageId, page, pageAddr, init, walPlc, arg, intArg); ok = true; @@ -352,8 +350,8 @@ public static R writePage( finally { assert PageIO.getCrc(pageAddr) == 0; //TODO GG-11480 - if (h.releaseAfterWrite(cacheId, pageId, page, pageAddr, arg, intArg)) - writeUnlock(pageMem, cacheId, pageId, page, pageAddr, lsnr, walPlc, ok); + if (h.releaseAfterWrite(grpId, pageId, page, pageAddr, arg, intArg)) + writeUnlock(pageMem, grpId, pageId, page, pageAddr, lsnr, walPlc, ok); } } @@ -408,7 +406,7 @@ public static long writeLock( /** * @param pageMem Page memory. - * @param cacheId Cache ID. + * @param grpId Group ID. * @param pageId Page ID. * @param page Page pointer. * @param pageAddr Page address. 
@@ -418,7 +416,7 @@ public static long writeLock( */ private static void doInitPage( PageMemory pageMem, - int cacheId, + int grpId, long pageId, long page, long pageAddr, @@ -427,11 +425,11 @@ private static void doInitPage( assert PageIO.getCrc(pageAddr) == 0; //TODO GG-11480 - init.initNewPage(pageAddr, pageId, pageMem.pageSize()); + init.initNewPage(pageAddr, pageId, pageMem.realPageSize(grpId)); // Here we should never write full page, because it is known to be new. - if (isWalDeltaRecordNeeded(pageMem, cacheId, pageId, page, wal, FALSE)) - wal.log(new InitNewPageRecord(cacheId, pageId, + if (isWalDeltaRecordNeeded(pageMem, grpId, pageId, page, wal, FALSE)) + wal.log(new InitNewPageRecord(grpId, pageId, init.getType(), init.getVersion(), pageId)); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalRecordsIterator.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalRecordsIterator.java index 3cbe577c24c59..f37b154c60507 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalRecordsIterator.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalRecordsIterator.java @@ -22,6 +22,7 @@ import java.io.FileNotFoundException; import java.io.IOException; import java.nio.ByteOrder; +import java.util.Optional; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.internal.pagemem.wal.WALIterator; @@ -92,6 +93,9 @@ public abstract class AbstractWalRecordsIterator /** Factory to provide I/O interfaces for read primitives with files. */ private final SegmentFileInputFactory segmentFileInputFactory; + /** Position of last read valid record. */ + private WALPointer lastRead; + /** * @param log Logger. * @param sharedCtx Shared context. 
@@ -154,6 +158,8 @@ protected void advance() throws IgniteCheckedException { curRec = advanceRecord(currWalSegment); if (curRec != null) { + lastRead = curRec.get1(); + if (curRec.get2().type() == null) continue; // Record was skipped by filter of current serializer, should read next record. @@ -183,6 +189,11 @@ protected void advance() throws IgniteCheckedException { } } + /** {@inheritDoc} */ + @Override public Optional<WALPointer> lastRead() { + return Optional.ofNullable(lastRead); + } + /** * @param tailReachedException Tail reached exception. * @param currWalSegment Current WAL segment read handler. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWALPointer.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWALPointer.java index 6ea7e002b6e6d..5e5917871983f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWALPointer.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWALPointer.java @@ -80,7 +80,7 @@ public void length(int len) { } /** {@inheritDoc} */ - @Override public WALPointer next() { + @Override public FileWALPointer next() { if (len == 0) throw new IllegalStateException("Failed to calculate next WAL pointer " + "(this pointer is a terminal): " + this); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWriteAheadLogManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWriteAheadLogManager.java index 5d165fd4f81c0..dcc761d579ee0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWriteAheadLogManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FileWriteAheadLogManager.java @@ -26,17 +26,11 @@ import java.io.FileNotFoundException; import 
java.io.FileOutputStream; import java.io.IOException; -import java.lang.reflect.Field; -import java.lang.reflect.InvocationTargetException; -import java.lang.reflect.Method; import java.nio.ByteBuffer; import java.nio.ByteOrder; -import java.nio.MappedByteBuffer; import java.nio.channels.ClosedByInterruptException; -import java.nio.file.DirectoryStream; import java.nio.file.FileAlreadyExistsException; import java.nio.file.Files; -import java.nio.file.Path; import java.sql.Time; import java.util.ArrayList; import java.util.Arrays; @@ -45,19 +39,12 @@ import java.util.HashSet; import java.util.List; import java.util.Map; +import java.util.Objects; import java.util.Set; import java.util.TreeSet; -import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.PriorityBlockingQueue; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; -import java.util.concurrent.atomic.AtomicLongFieldUpdater; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; -import java.util.concurrent.locks.Condition; -import java.util.concurrent.locks.Lock; -import java.util.concurrent.locks.LockSupport; -import java.util.concurrent.locks.ReentrantLock; import java.util.regex.Pattern; import java.util.stream.Stream; import java.util.zip.ZipEntry; @@ -80,10 +67,13 @@ import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; import org.apache.ignite.internal.pagemem.wal.WALIterator; import org.apache.ignite.internal.pagemem.wal.WALPointer; -import org.apache.ignite.internal.pagemem.wal.record.CheckpointRecord; import org.apache.ignite.internal.pagemem.wal.record.MarshalledRecord; +import org.apache.ignite.internal.pagemem.wal.record.MemoryRecoveryRecord; +import org.apache.ignite.internal.pagemem.wal.record.PageSnapshot; +import org.apache.ignite.internal.pagemem.wal.record.RolloverType; import org.apache.ignite.internal.pagemem.wal.record.SwitchSegmentRecord; import 
org.apache.ignite.internal.pagemem.wal.record.WALRecord; +import org.apache.ignite.internal.pagemem.wal.record.delta.PageDeltaRecord; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter; import org.apache.ignite.internal.processors.cache.WalStateManager.WALDisableContext; @@ -95,15 +85,19 @@ import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; import org.apache.ignite.internal.processors.cache.persistence.filename.PdsFolderSettings; +import org.apache.ignite.internal.processors.cache.persistence.wal.aware.SegmentAware; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; +import org.apache.ignite.internal.processors.cache.persistence.wal.filehandle.AbstractFileHandle; +import org.apache.ignite.internal.processors.cache.persistence.wal.filehandle.FileHandleManagerFactory; +import org.apache.ignite.internal.processors.cache.persistence.wal.filehandle.FileHandleManager; +import org.apache.ignite.internal.processors.cache.persistence.wal.filehandle.FileWriteHandle; import org.apache.ignite.internal.processors.cache.persistence.wal.io.FileInput; -import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentFileInputFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.io.LockedSegmentFileInputFactory; +import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentFileInputFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SimpleSegmentFileInputFactory; import 
org.apache.ignite.internal.processors.cache.persistence.wal.record.HeaderRecord; -import org.apache.ignite.internal.processors.cache.persistence.wal.aware.SegmentAware; import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactoryImpl; @@ -111,7 +105,6 @@ import org.apache.ignite.internal.processors.failure.FailureProcessor; import org.apache.ignite.internal.processors.timeout.GridTimeoutObject; import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; -import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.typedef.CI1; @@ -121,6 +114,7 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.util.worker.GridWorker; +import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgnitePredicate; @@ -134,59 +128,32 @@ import static java.nio.file.StandardOpenOption.WRITE; import static org.apache.ignite.IgniteSystemProperties.IGNITE_CHECKPOINT_TRIGGER_ARCHIVE_SIZE_PERCENTAGE; import static org.apache.ignite.IgniteSystemProperties.IGNITE_THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE; +import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT; import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_MMAP; -import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_SEGMENT_SYNC_TIMEOUT; import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_SERIALIZER_VERSION; -import static 
org.apache.ignite.configuration.WALMode.LOG_ONLY; import static org.apache.ignite.events.EventType.EVT_WAL_SEGMENT_ARCHIVED; import static org.apache.ignite.events.EventType.EVT_WAL_SEGMENT_COMPACTED; import static org.apache.ignite.failure.FailureType.CRITICAL_ERROR; import static org.apache.ignite.failure.FailureType.SYSTEM_WORKER_TERMINATION; -import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.SWITCH_SEGMENT_RECORD; -import static org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer.BufferMode.DIRECT; +import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactory.LATEST_SERIALIZER_VERSION; import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.HEADER_RECORD_SIZE; import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.readSegmentHeader; -import static org.apache.ignite.internal.util.IgniteUtils.findField; -import static org.apache.ignite.internal.util.IgniteUtils.findNonPublicMethod; -import static org.apache.ignite.internal.util.IgniteUtils.sleep; /** * File WAL manager. */ @SuppressWarnings("IfMayBeConditional") public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter implements IgniteWriteAheadLogManager { - /** Dfault wal segment sync timeout. */ - public static final long DFLT_WAL_SEGMENT_SYNC_TIMEOUT = 500L; - /** {@link MappedByteBuffer#force0(java.io.FileDescriptor, long, long)}. */ - private static final Method force0 = findNonPublicMethod( - MappedByteBuffer.class, "force0", - java.io.FileDescriptor.class, long.class, long.class - ); - - /** {@link MappedByteBuffer#mappingOffset()}. */ - private static final Method mappingOffset = findNonPublicMethod(MappedByteBuffer.class, "mappingOffset"); - - /** {@link MappedByteBuffer#mappingAddress(long)}. 
*/ - private static final Method mappingAddress = findNonPublicMethod( - MappedByteBuffer.class, "mappingAddress", long.class - ); - - /** {@link MappedByteBuffer#fd} */ - private static final Field fd = findField(MappedByteBuffer.class, "fd"); - - /** Page size. */ - private static final int PAGE_SIZE = GridUnsafe.pageSize(); - /** */ private static final FileDescriptor[] EMPTY_DESCRIPTORS = new FileDescriptor[0]; - /** */ + /** Zero-filled buffer for file formatting. */ private static final byte[] FILL_BUF = new byte[1024 * 1024]; - /** Pattern for segment file names */ + /** Pattern for segment file names. */ public static final Pattern WAL_NAME_PATTERN = Pattern.compile("\\d{16}\\.wal"); - /** */ + /** Pattern for WAL temp files - these files will be cleared at startup. */ public static final Pattern WAL_TEMP_NAME_PATTERN = Pattern.compile("\\d{16}\\.wal\\.tmp"); /** WAL segment file filter, see {@link #WAL_NAME_PATTERN} */ @@ -196,7 +163,7 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl } }; - /** */ + /** WAL segment temporary file filter, see {@link #WAL_TEMP_NAME_PATTERN} */ private static final FileFilter WAL_SEGMENT_TEMP_FILE_FILTER = new FileFilter() { @Override public boolean accept(File file) { return !file.isDirectory() && WAL_TEMP_NAME_PATTERN.matcher(file.getName()).matches(); @@ -231,19 +198,12 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl } }; - /** Latest serializer version to use. */ - private static final int LATEST_SERIALIZER_VERSION = 2; - /** Buffer size. */ private static final int BUF_SIZE = 1024 * 1024; /** Use mapped byte buffer. */ private final boolean mmap = IgniteSystemProperties.getBoolean(IGNITE_WAL_MMAP, true); - /** {@link FileWriteHandle#written} atomic field updater. 
*/ - private static final AtomicLongFieldUpdater WRITTEN_UPD = - AtomicLongFieldUpdater.newUpdater(FileWriteHandle.class, "written"); - /** * Percentage of archive size for checkpoint trigger. Need for calculate max size of WAL after last checkpoint. * Checkpoint should be triggered when max size of WAL after last checkpoint more than maxWallArchiveSize * thisValue @@ -257,6 +217,12 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl private static final double THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE = IgniteSystemProperties.getDouble(IGNITE_THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE, 0.5); + /** + * Number of WAL compressor worker threads. + */ + private final int WAL_COMPRESSOR_WORKER_THREAD_CNT = + IgniteSystemProperties.getInteger(IGNITE_WAL_COMPRESSOR_WORKER_THREAD_CNT, 4); + /** */ private final boolean alwaysWriteFullPages; @@ -278,9 +244,6 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl /** WAL flush frequency. Makes sense only for {@link WALMode#BACKGROUND} log WALMode. */ private final long flushFreq; - /** Fsync delay. */ - private final long fsyncDelay; - /** */ private final DataStorageConfiguration dsCfg; @@ -316,7 +279,7 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl private final SegmentFileInputFactory segmentFileInputFactory; /** Holder of actual information of latest manipulation on WAL segments. */ - private final SegmentAware segmentAware; + private volatile SegmentAware segmentAware; /** Updater for {@link #currHnd}, used for verify there are no concurrent update for current log segment handle */ private static final AtomicReferenceFieldUpdater CURR_HND_UPD = @@ -337,9 +300,12 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl /** */ private final ThreadLocal lastWALPtr = new ThreadLocal<>(); - /** Current log segment handle */ + /** Current log segment handle. 
*/ private volatile FileWriteHandle currHnd; + /** File handle manager. */ + private volatile FileHandleManager fileHandleManager; + /** */ private volatile WALDisableContext walDisableContext; @@ -354,7 +320,7 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl * segment, skip possible archiving for this case
Value is filled only for case {@link * #walAutoArchiveAfterInactivity} > 0
*/ - private AtomicLong lastRecordLoggedMs = new AtomicLong(); + private final AtomicLong lastRecordLoggedMs = new AtomicLong(); /** * Cancellable task for {@link WALMode#BACKGROUND}, should be cancelled at shutdown. @@ -368,24 +334,21 @@ public class FileWriteAheadLogManager extends GridCacheSharedManagerAdapter impl */ @Nullable private volatile GridTimeoutObject nextAutoArchiveTimeoutObj; - /** WAL writer worker. */ - private WALWriter walWriter; - /** * Listener invoked for each segment file IO initializer. */ @Nullable private volatile IgniteInClosure createWalFileListener; - /** Wal segment sync worker. */ - private WalSegmentSyncer walSegmentSyncWorker; - /** * Manage of segment location. */ private SegmentRouter segmentRouter; /** Segment factory with ability locked segment during reading. */ - private SegmentFileInputFactory lockedSegmentFileInputFactory; + private volatile SegmentFileInputFactory lockedSegmentFileInputFactory; + + /** FileHandleManagerFactory. */ + private final FileHandleManagerFactory fileHandleManagerFactory; /** * @param ctx Kernal context. @@ -402,20 +365,21 @@ public FileWriteAheadLogManager(@NotNull final GridKernalContext ctx) { maxWalSegmentSize = dsCfg.getWalSegmentSize(); mode = dsCfg.getWalMode(); flushFreq = dsCfg.getWalFlushFrequency(); - fsyncDelay = dsCfg.getWalFsyncDelayNanos(); alwaysWriteFullPages = dsCfg.isAlwaysWriteFullPages(); - ioFactory = new RandomAccessFileIOFactory(); + ioFactory = mode == WALMode.FSYNC ? 
dsCfg.getFileIOFactory() : new RandomAccessFileIOFactory(); segmentFileInputFactory = new SimpleSegmentFileInputFactory(); walAutoArchiveAfterInactivity = dsCfg.getWalAutoArchiveAfterInactivity(); - maxSegCountWithoutCheckpoint = - (long)((dsCfg.getMaxWalArchiveSize() * CHECKPOINT_TRIGGER_ARCHIVE_SIZE_PERCENTAGE) / dsCfg.getWalSegmentSize()); - allowedThresholdWalArchiveSize = (long)(dsCfg.getMaxWalArchiveSize() * THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE); evt = ctx.event(); failureProcessor = ctx.failure(); - segmentAware = new SegmentAware(dsCfg.getWalSegments()); + + fileHandleManagerFactory = new FileHandleManagerFactory(dsCfg); + + maxSegCountWithoutCheckpoint = + (long)((U.adjustedWalHistorySize(dsCfg, log) * CHECKPOINT_TRIGGER_ARCHIVE_SIZE_PERCENTAGE) + / dsCfg.getWalSegmentSize()); } /** @@ -429,92 +393,111 @@ public void setFileIOFactory(FileIOFactory ioFactory) { /** {@inheritDoc} */ @Override public void start0() throws IgniteCheckedException { - if (!cctx.kernalContext().clientNode()) { - final PdsFolderSettings resolveFolders = cctx.kernalContext().pdsFolderResolver().resolveFolders(); + if(cctx.kernalContext().clientNode()) + return; - checkWalConfiguration(); + final PdsFolderSettings resolveFolders = cctx.kernalContext().pdsFolderResolver().resolveFolders(); - final File walWorkDir0 = walWorkDir = initDirectory( + checkWalConfiguration(); + + final File walWorkDir0 = walWorkDir = initDirectory( dsCfg.getWalPath(), DataStorageConfiguration.DFLT_WAL_PATH, resolveFolders.folderName(), "write ahead log work directory" - ); + ); - final File walArchiveDir0 = walArchiveDir = initDirectory( + final File walArchiveDir0 = walArchiveDir = initDirectory( dsCfg.getWalArchivePath(), DataStorageConfiguration.DFLT_WAL_ARCHIVE_PATH, resolveFolders.folderName(), "write ahead log archive directory" - ); + ); - serializer = new RecordSerializerFactoryImpl(cctx).createSerializer(serializerVer); + serializer = new 
RecordSerializerFactoryImpl(cctx).createSerializer(serializerVer); - GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)cctx.database(); + GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)cctx.database(); - metrics = dbMgr.persistentStoreMetricsImpl(); + metrics = dbMgr.persistentStoreMetricsImpl(); - checkOrPrepareFiles(); + checkOrPrepareFiles(); - if (metrics != null) - metrics.setWalSizeProvider(new CO() { - @Override public Long apply() { - long size = 0; + if (metrics != null) + metrics.setWalSizeProvider(new CO() { + @Override public Long apply() { + long size = 0; - for (File f : walWorkDir0.listFiles()) - size += f.length(); + for (File f : walWorkDir0.listFiles()) + size += f.length(); - for (File f : walArchiveDir0.listFiles()) - size += f.length(); + for (File f : walArchiveDir0.listFiles()) + size += f.length(); - return size; - } - }); + return size; + } + }); - IgniteBiTuple tup = scanMinMaxArchiveIndices(); + IgniteBiTuple tup = scanMinMaxArchiveIndices(); - segmentAware.lastTruncatedArchiveIdx(tup == null ? -1 : tup.get1() - 1); + segmentAware = new SegmentAware(dsCfg.getWalSegments(), dsCfg.isWalCompactionEnabled()); - long lastAbsArchivedIdx = tup == null ? -1 : tup.get2(); + segmentAware.lastTruncatedArchiveIdx(tup == null ? -1 : tup.get1() - 1); - if (isArchiverEnabled()) - archiver = new FileArchiver(lastAbsArchivedIdx, log); - else - archiver = null; + long lastAbsArchivedIdx = tup == null ? -1 : tup.get2(); - if (lastAbsArchivedIdx > 0) - segmentAware.setLastArchivedAbsoluteIndex(lastAbsArchivedIdx); + if (isArchiverEnabled()) + archiver = new FileArchiver(lastAbsArchivedIdx, log); + else + archiver = null; - if (dsCfg.isWalCompactionEnabled()) { - compressor = new FileCompressor(); + if (lastAbsArchivedIdx > 0) + segmentAware.setLastArchivedAbsoluteIndex(lastAbsArchivedIdx); - if (decompressor == null) { // Preventing of two file-decompressor thread instantiations. 
- decompressor = new FileDecompressor(log); + if (dsCfg.isWalCompactionEnabled()) { + compressor = new FileCompressor(log); - new IgniteThread(decompressor).start(); - } - } + decompressor = new FileDecompressor(log); + } - segmentRouter = new SegmentRouter(walWorkDir, walArchiveDir, segmentAware, dsCfg); + segmentRouter = new SegmentRouter(walWorkDir, walArchiveDir, segmentAware, dsCfg); - walDisableContext = cctx.walState().walDisableContext(); + walDisableContext = cctx.walState().walDisableContext(); - if (mode != WALMode.NONE && mode != WALMode.FSYNC) { - walSegmentSyncWorker = new WalSegmentSyncer(igCfg.getIgniteInstanceName(), - cctx.kernalContext().log(WalSegmentSyncer.class)); + fileHandleManager = fileHandleManagerFactory.build( + cctx, metrics, mmap, lastWALPtr::get, serializer, this::currentHandle + ); - if (log.isInfoEnabled()) - log.info("Started write-ahead log manager [mode=" + mode + ']'); - } - else - U.quietAndWarn(log, "Started write-ahead log manager in NONE mode, persisted data may be lost in " + - "a case of unexpected node failure. 
Make sure to deactivate the cluster before shutdown."); + fileHandleManager.start(); - lockedSegmentFileInputFactory = new LockedSegmentFileInputFactory( - segmentAware, - segmentRouter, - ioFactory - ); + lockedSegmentFileInputFactory = new LockedSegmentFileInputFactory( + segmentAware, + segmentRouter, + ioFactory + ); + } + + /** + * Restarts the archiver, compressor and decompressor workers when the manager is (re)activated. + */ + private void startArchiverAndCompressor() { + segmentAware.reset(); + + if (isArchiverEnabled()) { + assert archiver != null : "FileArchiver should be initialized."; + + archiver.restart(); + } + + fileHandleManager.onActivate(); + + if (dsCfg.isWalCompactionEnabled()) { + assert compressor != null : "Compressor should be initialized."; + + compressor.restart(); + + assert decompressor != null : "Decompressor should be initialized."; + + decompressor.restart(); } } @@ -583,7 +566,10 @@ private void checkWalConfiguration() throws IgniteCheckedException { } } - /** {@inheritDoc} */ + /** + * This method is called twice, on deactivate and on stop. + * It shuts down the workers but does not deallocate them, to avoid duplicate instantiation.
+ * */ @Override protected void stop0(boolean cancel) { final GridTimeoutProcessor.CancelableTask schedule = backgroundFlushSchedule; @@ -595,22 +581,8 @@ private void checkWalConfiguration() throws IgniteCheckedException { if (timeoutObj != null) cctx.time().removeTimeoutObject(timeoutObj); - final FileWriteHandle currHnd = currentHandle(); - try { - if (mode == WALMode.BACKGROUND) { - if (currHnd != null) - currHnd.flush(null); - } - - if (currHnd != null) - currHnd.close(false); - - if (walSegmentSyncWorker != null) - walSegmentSyncWorker.shutdown(); - - if (walWriter != null) - walWriter.shutdown(); + fileHandleManager.onDeactivate(); segmentAware.interrupt(); @@ -624,7 +596,7 @@ private void checkWalConfiguration() throws IgniteCheckedException { decompressor.shutdown(); } catch (Exception e) { - U.error(log, "Failed to gracefully close WAL segment: " + this.currHnd.fileIO, e); + U.error(log, "Failed to gracefully close WAL segment: " + this.currHnd, e); } } @@ -633,22 +605,7 @@ private void checkWalConfiguration() throws IgniteCheckedException { if (log.isDebugEnabled()) log.debug("Activated file write ahead log manager [nodeId=" + cctx.localNodeId() + " topVer=" + cctx.discovery().topologyVersionEx() + " ]"); - - start0(); - - if (!cctx.kernalContext().clientNode()) { - if (isArchiverEnabled()) { - assert archiver != null; - - new IgniteThread(archiver).start(); - } - - if (walSegmentSyncWorker != null) - new IgniteThread(walSegmentSyncWorker).start(); - - if (compressor != null) - compressor.start(); - } + //NOOP implementation, we need to override it. 
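The deactivate/stop path above cancels the background flush schedule, deactivates the file handle manager and shuts down the archiver, compressor and decompressor, while activation itself became a no-op; the workers are stopped but kept around for a later restart. A minimal sketch of that stop-but-keep lifecycle, using hypothetical `Worker`/`WalLifecycle` names rather than Ignite's actual classes:

```java
// Illustrative sketch only: Worker and WalLifecycle are hypothetical names,
// not Ignite's GridWorker/FileWriteAheadLogManager API.
class Worker {
    private volatile boolean cancelled;

    void shutdown() { cancelled = true; }   // idempotent: safe to call twice

    void restart() { cancelled = false; }   // re-arm for the next activation

    boolean isRunning() { return !cancelled; }
}

class WalLifecycle {
    final Worker archiver = new Worker();   // allocated once, reused across activations

    /** Called on both deactivate and stop; workers are stopped, not deallocated. */
    void stop0() { archiver.shutdown(); }

    /** Re-activation restarts the existing worker instead of creating a new one. */
    void onActivate() { archiver.restart(); }
}
```

Calling `stop0()` a second time is harmless here, which is the property the javadoc in the diff points out for the real method.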
} /** {@inheritDoc} */ @@ -674,15 +631,26 @@ private void checkWalConfiguration() throws IgniteCheckedException { /** {@inheritDoc} */ @Override public void resumeLogging(WALPointer lastPtr) throws IgniteCheckedException { + if (log.isDebugEnabled()) + log.debug("File write ahead log manager resuming logging [nodeId=" + cctx.localNodeId() + + " topVer=" + cctx.discovery().topologyVersionEx() + " ]"); + + /* + walDisableContext is started after FileWriteAheadLogManager, so we obtain actual walDisableContext ref here. + */ + walDisableContext = cctx.walState().walDisableContext(); + assert currHnd == null; assert lastPtr == null || lastPtr instanceof FileWALPointer; - FileWALPointer filePtr = (FileWALPointer)lastPtr; + startArchiverAndCompressor(); - walWriter = new WALWriter(log); + assert (isArchiverEnabled() && archiver != null) || (!isArchiverEnabled() && archiver == null) : + "Trying to restore FileWriteHandle on deactivated write ahead log manager"; - if (!mmap) - new IgniteThread(walWriter).start(); + FileWALPointer filePtr = (FileWALPointer)lastPtr; + + fileHandleManager.resumeLogging(); currHnd = restoreWriteHandle(filePtr); @@ -690,24 +658,19 @@ private void checkWalConfiguration() throws IgniteCheckedException { if (filePtr == null) currHnd.writeHeader(); - if (currHnd.serializer.version() != serializer.version()) { + if (currHnd.serializerVersion() != serializer.version()) { if (log.isInfoEnabled()) log.info("Record serializer version change detected, will start logging with a new WAL record " + "serializer to a new WAL segment [curFile=" + currHnd + ", newVer=" + serializer.version() + - ", oldVer=" + currHnd.serializer.version() + ']'); + ", oldVer=" + currHnd.serializerVersion() + ']'); - rollOver(currHnd); + rollOver(currHnd, null); } - currHnd.resume = false; + currHnd.finishResumeLogging(); - if (mode == WALMode.BACKGROUND) { - backgroundFlushSchedule = cctx.time().schedule(new Runnable() { - @Override public void run() { - doFlush(); - } - }, 
flushFreq, flushFreq); - } + if (mode == WALMode.BACKGROUND) + backgroundFlushSchedule = cctx.time().schedule(this::doFlush, flushFreq, flushFreq); if (walAutoArchiveAfterInactivity > 0) scheduleNextInactivityPeriodElapsedCheck(); @@ -776,9 +739,7 @@ private void checkWalRolloverRequiredDuringInactivityPeriod() { final FileWriteHandle handle = currentHandle(); try { - handle.buf.close(); - - rollOver(handle); + closeBufAndRollover(handle, null, RolloverType.NONE); } catch (IgniteCheckedException e) { U.error(log, "Unable to perform segment rollover: " + e.getMessage(), e); @@ -788,11 +749,20 @@ } /** {@inheritDoc} */ - @SuppressWarnings("TooBroadScope") - @Override public WALPointer log(WALRecord rec) throws IgniteCheckedException, StorageException { + @Override public WALPointer log(WALRecord rec) throws IgniteCheckedException { + return log(rec, RolloverType.NONE); + } + + /** {@inheritDoc} */ + @Override public WALPointer log(WALRecord rec, RolloverType rolloverType) throws IgniteCheckedException { if (serializer == null || mode == WALMode.NONE) return null; + // Only delta records, page snapshots and memory recovery records may be written in recovery mode.
+ if (cctx.kernalContext().recoveryMode() && + !(rec instanceof PageDeltaRecord || rec instanceof PageSnapshot || rec instanceof MemoryRecoveryRecord)) + return null; + FileWriteHandle currWrHandle = currentHandle(); WALDisableContext isDisable = walDisableContext; @@ -805,21 +775,32 @@ private void checkWalRolloverRequiredDuringInactivityPeriod() { rec.size(serializer.size(rec)); while (true) { - if (rec.rollOver()) { - assert cctx.database().checkpointLockIsHeldByThread(); + WALPointer ptr; - long idx = currWrHandle.getSegmentId(); + if (rolloverType == RolloverType.NONE) + ptr = currWrHandle.addRecord(rec); + else { + assert cctx.database().checkpointLockIsHeldByThread(); - currWrHandle.buf.close(); + if (rolloverType == RolloverType.NEXT_SEGMENT) { + WALPointer pos = rec.position(); - currWrHandle = rollOver(currWrHandle); + do { + // This will change rec.position() unless concurrent rollover happened. + currWrHandle = closeBufAndRollover(currWrHandle, rec, rolloverType); + } + while (Objects.equals(pos, rec.position())); - if (log != null && log.isInfoEnabled()) - log.info("Rollover segment [" + idx + " to " + currWrHandle.getSegmentId() + "], recordType=" + rec.type()); + ptr = rec.position(); + } + else if (rolloverType == RolloverType.CURRENT_SEGMENT) { + if ((ptr = currWrHandle.addRecord(rec)) != null) + currWrHandle = closeBufAndRollover(currWrHandle, rec, rolloverType); + } + else + throw new IgniteCheckedException("Unknown rollover type: " + rolloverType); } - WALPointer ptr = currWrHandle.addRecord(rec); - if (ptr != null) { metrics.onWalRecordLogged(); @@ -831,7 +812,7 @@ private void checkWalRolloverRequiredDuringInactivityPeriod() { return ptr; } else - currWrHandle = rollOver(currWrHandle); + currWrHandle = rollOver(currWrHandle, null); checkNode(); @@ -840,34 +821,51 @@ private void checkWalRolloverRequiredDuringInactivityPeriod() { } } - /** {@inheritDoc} */ - @Override public void flush(WALPointer ptr, boolean explicitFsync) throws 
IgniteCheckedException, StorageException { - if (serializer == null || mode == WALMode.NONE) - return; + /** */ + private FileWriteHandle closeBufAndRollover( + FileWriteHandle currWriteHandle, + WALRecord rec, + RolloverType rolloverType + ) throws IgniteCheckedException { + long idx = currWriteHandle.getSegmentId(); - FileWriteHandle cur = currentHandle(); + currWriteHandle.closeBuffer(); - // WAL manager was not started (client node). - if (cur == null) - return; + FileWriteHandle res = rollOver(currWriteHandle, rolloverType == RolloverType.NEXT_SEGMENT ? rec : null); - FileWALPointer filePtr = (FileWALPointer)(ptr == null ? lastWALPtr.get() : ptr); + if (log != null && log.isInfoEnabled()) + log.info("Rollover segment [" + idx + " to " + res.getSegmentId() + "], recordType=" + rec.type()); - if (mode == LOG_ONLY) - cur.flushOrWait(filePtr); + return res; + } - if (!explicitFsync && mode != WALMode.FSYNC) - return; // No need to sync in LOG_ONLY or BACKGROUND unless explicit fsync is required. + /** {@inheritDoc} */ + @Override public void flush(WALPointer ptr, boolean explicitFsync) throws IgniteCheckedException, StorageException { + fileHandleManager.flush(ptr, explicitFsync); + } - // No need to sync if was rolled over. 
- if (filePtr != null && !cur.needFsync(filePtr)) - return; + /** {@inheritDoc} */ + @Override public WALRecord read(WALPointer ptr) throws IgniteCheckedException, StorageException { + try (WALIterator it = replay(ptr)) { + IgniteBiTuple rec = it.next(); - cur.fsync(filePtr); + if (rec.get1().equals(ptr)) + return rec.get2(); + else + throw new StorageException("Failed to read record by pointer [ptr=" + ptr + ", rec=" + rec + "]"); + } } /** {@inheritDoc} */ @Override public WALIterator replay(WALPointer start) throws IgniteCheckedException, StorageException { + return replay(start, null); + } + + /** {@inheritDoc} */ + @Override public WALIterator replay( + WALPointer start, + @Nullable IgniteBiPredicate recordDeserializeFilter + ) throws IgniteCheckedException, StorageException { assert start == null || start instanceof FileWALPointer : "Invalid start pointer: " + start; FileWriteHandle hnd = currentHandle(); @@ -884,7 +882,7 @@ private void checkWalRolloverRequiredDuringInactivityPeriod() { (FileWALPointer)start, end, dsCfg, - new RecordSerializerFactoryImpl(cctx), + new RecordSerializerFactoryImpl(cctx).recordDeserializeFilter(recordDeserializeFilter), ioFactory, archiver, decompressor, @@ -895,7 +893,7 @@ private void checkWalRolloverRequiredDuringInactivityPeriod() { } /** {@inheritDoc} */ - @Override public boolean reserve(WALPointer start) throws IgniteCheckedException { + @Override public boolean reserve(WALPointer start) { assert start != null && start instanceof FileWALPointer : "Invalid start pointer: " + start; if (mode == WALMode.NONE) @@ -1005,7 +1003,7 @@ private boolean segmentReservedOrLocked(long absIdx) { /** {@inheritDoc} */ @Override public void notchLastCheckpointPtr(WALPointer ptr) { if (compressor != null) - compressor.keepUncompressedIdxFrom(((FileWALPointer)ptr).index()); + segmentAware.keepUncompressedIdxFrom(((FileWALPointer)ptr).index()); } /** {@inheritDoc} */ @@ -1170,9 +1168,10 @@ private FileWriteHandle currentHandle() { /** * 
@param cur Handle that failed to fit the given entry. + * @param rec Optional record to be added right after header. * @return Handle that will fit the entry. */ - private FileWriteHandle rollOver(FileWriteHandle cur) throws StorageException, IgniteCheckedException { + private FileWriteHandle rollOver(FileWriteHandle cur, @Nullable WALRecord rec) throws IgniteCheckedException { FileWriteHandle hnd = currentHandle(); if (hnd != cur) @@ -1186,6 +1185,12 @@ private FileWriteHandle rollOver(FileWriteHandle cur) throws StorageException, I next.writeHeader(); + if (rec != null) { + WALPointer ptr = next.addRecord(rec); + + assert ptr != null; + } + if (next.getSegmentId() - lashCheckpointFileIdx() >= maxSegCountWithoutCheckpoint) cctx.database().forceCheckpoint("too big size of WAL without checkpoint"); @@ -1258,25 +1263,7 @@ private FileWriteHandle restoreWriteHandle(FileWALPointer lastReadPtr) throws St log.info("Resuming logging to WAL segment [file=" + curFile.getAbsolutePath() + ", offset=" + off + ", ver=" + serVer + ']'); - SegmentedRingByteBuffer rbuf; - - if (mmap) { - MappedByteBuffer buf = fileIO.map((int)maxWalSegmentSize); - - rbuf = new SegmentedRingByteBuffer(buf, metrics); - } - else - rbuf = new SegmentedRingByteBuffer(dsCfg.getWalBufferSize(), maxWalSegmentSize, DIRECT, metrics); - - if (lastReadPtr != null) - rbuf.init(lastReadPtr.fileOffset() + lastReadPtr.length()); - - FileWriteHandle hnd = new FileWriteHandle( - fileIO, - off + len, - true, - ser, - rbuf); + FileWriteHandle hnd = fileHandleManager.initHandle(fileIO, off + len, ser); if (archiver0 != null) segmentAware.curAbsWalIdx(absIdx); @@ -1321,8 +1308,6 @@ private FileWriteHandle initNextWriteHandle(FileWriteHandle cur) throws IgniteCh if (log.isDebugEnabled()) log.debug("Switching to a new WAL segment: " + nextFile.getAbsolutePath()); - SegmentedRingByteBuffer rbuf = null; - SegmentIO fileIO = null; FileWriteHandle hnd; @@ -1337,20 +1322,8 @@ private FileWriteHandle 
initNextWriteHandle(FileWriteHandle cur) throws IgniteCh if (lsnr != null) lsnr.apply(fileIO); - if (mmap) { - MappedByteBuffer buf = fileIO.map((int)maxWalSegmentSize); - - rbuf = new SegmentedRingByteBuffer(buf, metrics); - } - else - rbuf = cur.buf.reset(); - hnd = new FileWriteHandle( - fileIO, - 0, - false, - serializer, - rbuf); + hnd = fileHandleManager.nextHandle(fileIO, serializer); if (interrupted) Thread.currentThread().interrupt(); @@ -1372,12 +1345,6 @@ private FileWriteHandle initNextWriteHandle(FileWriteHandle cur) throws IgniteCh fileIO = null; } - - if (rbuf != null) { - rbuf.free(); - - rbuf = null; - } } } @@ -1433,29 +1400,6 @@ private void checkOrPrepareFiles() throws StorageException { checkFiles(0, false, null, null); } - /** {@inheritDoc} */ - @Override public void cleanupWalDirectories() throws IgniteCheckedException { - try { - try (DirectoryStream files = Files.newDirectoryStream(walWorkDir.toPath())) { - for (Path path : files) - Files.delete(path); - } - } - catch (IOException e) { - throw new IgniteCheckedException("Failed to cleanup wal work directory: " + walWorkDir, e); - } - - try { - try (DirectoryStream files = Files.newDirectoryStream(walArchiveDir.toPath())) { - for (Path path : files) - Files.delete(path); - } - } - catch (IOException e) { - throw new IgniteCheckedException("Failed to cleanup wal archive directory: " + walArchiveDir, e); - } - } - /** * Clears whole the file, fills with zeros for Default mode. * @@ -1633,19 +1577,18 @@ public long maxWalSegmentSize() { * the absolute index of last archived segment is denoted by A and the absolute index of next segment we want to * write is denoted by W, then we can allow write to S(W) if W - A <= walSegments.
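The archiving invariant quoted above says a writer may proceed with absolute segment index W only while W - A <= walSegments, where A is the absolute index of the last archived segment. A standalone model of that window check (`ArchiverWindow` is an illustrative name, not an Ignite class):

```java
// Sketch of the archiver window invariant described above; the class and its
// members are illustrative, not the actual Ignite API.
final class ArchiverWindow {
    private final int walSegments;      // number of reusable work-directory segments
    private long lastArchivedAbsIdx;    // A: absolute index of the last archived segment

    ArchiverWindow(int walSegments, long lastArchivedAbsIdx) {
        this.walSegments = walSegments;
        this.lastArchivedAbsIdx = lastArchivedAbsIdx;
    }

    /** @return {@code true} if a writer may proceed with absolute index {@code w}. */
    boolean canWrite(long w) {
        // Work files are reused cyclically, so at most walSegments
        // unarchived segments may exist at any moment.
        return w - lastArchivedAbsIdx <= walSegments;
    }

    /** Archiver finished segment {@code absIdx}; the window slides forward. */
    void onArchived(long absIdx) {
        lastArchivedAbsIdx = absIdx;
    }
}
```

Because the work files are reused cyclically, this bound is what forces writers to wait for the archiver before a not-yet-archived segment would be overwritten.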
*
- * Monitor of current object is used for notify on: <ul> <li>exception occurred ({@link
- * FileArchiver#cleanErr}!=null)</li> <li>stopping thread ({@link FileArchiver#stopped}==true)</li> <li>current file
- * index changed</li> <li>last archived file index was changed ({@link
- * </li> <li>some WAL index was removed from map</li> </ul>
+ * Monitor of current object is used for notify on:
+ * <ul>
+ *     <li>exception occurred ({@link FileArchiver#cleanErr}!=null)</li>
+ *     <li>stopping thread ({@link FileArchiver#isCancelled}==true)</li>
+ *     <li>current file index changed</li>
+ *     <li>last archived file index was changed</li>
+ *     <li>some WAL index was removed from map</li>
+ * </ul>
    */ private class FileArchiver extends GridWorker { /** Exception which occurred during initial creation of files or during archiving WAL segment */ private StorageException cleanErr; - /** current thread stopping advice */ - private volatile boolean stopped; - /** Formatted index. */ private int formatted; @@ -1664,7 +1607,7 @@ private FileArchiver(long lastAbsArchivedIdx, IgniteLogger log) { */ private void shutdown() throws IgniteInterruptedCheckedException { synchronized (this) { - stopped = true; + isCancelled = true; notifyAll(); } @@ -1701,13 +1644,15 @@ private void shutdown() throws IgniteInterruptedCheckedException { try { blockingSectionBegin(); + try { segmentAware.awaitSegment(0);//wait for init at least one work segments. } finally { blockingSectionEnd(); } - while (!Thread.currentThread().isInterrupted() && !stopped) { + + while (!Thread.currentThread().isInterrupted() && !isCancelled()) { long toArchive; blockingSectionBegin(); @@ -1718,7 +1663,8 @@ private void shutdown() throws IgniteInterruptedCheckedException { finally { blockingSectionEnd(); } - if (stopped) + + if (isCancelled()) break; SegmentArchiveResult res; @@ -1741,7 +1687,7 @@ private void shutdown() throws IgniteInterruptedCheckedException { blockingSectionEnd(); } - if (evt.isRecordable(EVT_WAL_SEGMENT_ARCHIVED)) { + if (evt.isRecordable(EVT_WAL_SEGMENT_ARCHIVED) && !cctx.kernalContext().recoveryMode()) { evt.record(new WalSegmentArchivedEvent( cctx.discovery().localNode(), res.getAbsIdx(), @@ -1756,14 +1702,14 @@ private void shutdown() throws IgniteInterruptedCheckedException { Thread.currentThread().interrupt(); synchronized (this) { - stopped = true; + isCancelled = true; } } catch (Throwable t) { err = t; } finally { - if (err == null && !stopped) + if (err == null && !isCancelled()) err = new IllegalStateException("Worker " + name() + " is terminated unexpectedly"); if (err instanceof OutOfMemoryError) @@ -1877,7 +1823,7 @@ private SegmentArchiveResult archiveSegment(long 
absIdx) throws StorageException * */ private boolean checkStop() { - return stopped; + return isCancelled(); } /** @@ -1904,22 +1850,30 @@ private void allocateRemainingFiles() throws StorageException { } ); } + + /** + * Restart worker in IgniteThread. + */ + public void restart() { + assert runner() == null : "FileArchiver is still running"; + + isCancelled = false; + + new IgniteThread(archiver).start(); + } } /** * Responsible for compressing WAL archive segments. * Also responsible for deleting raw copies of already compressed WAL archive segments if they are not reserved. */ - private class FileCompressor extends Thread { - /** Current thread stopping advice. */ - private volatile boolean stopped; - - /** All segments prior to this (inclusive) can be compressed. */ - private volatile long minUncompressedIdxToKeep = -1L; + private class FileCompressor extends FileCompressorWorker { + /** Workers queue. */ + private final List workers = new ArrayList<>(); /** */ - FileCompressor() { - super("wal-file-compressor%" + cctx.igniteInstanceName()); + FileCompressor(IgniteLogger log) { + super(0, log); } /** */ @@ -1927,7 +1881,7 @@ private void init() { File[] toDel = walArchiveDir.listFiles(WAL_SEGMENT_TEMP_FILE_COMPACTED_FILTER); for (File f : toDel) { - if (stopped) + if (isCancelled()) return; f.delete(); @@ -1936,113 +1890,154 @@ private void init() { FileDescriptor[] alreadyCompressed = scan(walArchiveDir.listFiles(WAL_SEGMENT_FILE_COMPACTED_FILTER)); if (alreadyCompressed.length > 0) - segmentAware.lastCompressedIdx(alreadyCompressed[alreadyCompressed.length - 1].idx()); + segmentAware.onSegmentCompressed(alreadyCompressed[alreadyCompressed.length - 1].idx()); + + for (int i = 1; i < calculateThreadCount(); i++) { + FileCompressorWorker worker = new FileCompressorWorker(i, log); + + worker.restart(); + + synchronized (this) { + workers.add(worker); + } + } } /** - * @param idx Minimum raw segment index that should be preserved from deletion. 
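The `restart()` method above re-arms the worker's cancellation flag and spawns a fresh `IgniteThread` around the same worker instance, while `shutdown()` raises the flag and joins. The same pattern with plain `java.lang.Thread` (`RestartableWorker` is a hypothetical stand-in for Ignite's `GridWorker`):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative restartable-worker sketch; not Ignite's GridWorker API.
class RestartableWorker implements Runnable {
    volatile boolean cancelled;

    /** Counts how many times the worker body was (re)started. */
    final AtomicInteger bodyRuns = new AtomicInteger();

    private Thread runner;

    @Override public void run() {
        bodyRuns.incrementAndGet();     // stand-in for the real worker body

        while (!cancelled)
            Thread.yield();             // spin until cooperatively cancelled
    }

    /** Cooperative shutdown: raise the flag, then wait for the thread to exit. */
    void shutdown() throws InterruptedException {
        cancelled = true;

        if (runner != null)
            runner.join();
    }

    /** Re-arm the flag and start a new thread for the same worker instance. */
    void restart() throws InterruptedException {
        if (runner != null)
            runner.join();              // the previous run must have finished

        cancelled = false;

        (runner = new Thread(this)).start();
    }
}
```

The key detail mirrored from the diff is that `cancelled` must be reset *before* the new thread starts, otherwise the restarted body would exit immediately.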
+ * Calculates the optimal number of additional compressor worker threads. If a quarter of the available + * processor threads is at least WAL_COMPRESSOR_WORKER_THREAD_CNT, that value is used; otherwise the + * number of threads is reduced to that quarter. + * + * @return Optimal number of compressor threads. */ - void keepUncompressedIdxFrom(long idx) { - minUncompressedIdxToKeep = idx; + private int calculateThreadCount() { + int procNum = Runtime.getRuntime().availableProcessors(); + + // If a quarter of the processor threads covers WAL_COMPRESSOR_WORKER_THREAD_CNT, + // use that value. Otherwise, reduce the number of threads. + if (procNum >> 2 >= WAL_COMPRESSOR_WORKER_THREAD_CNT) + return WAL_COMPRESSOR_WORKER_THREAD_CNT; + else + return procNum >> 2; } - /** - * Pessimistically tries to reserve segment for compression in order to avoid concurrent truncation. - * Waits if there's no segment to archive right now. - */ - private long tryReserveNextSegmentOrWait() throws IgniteCheckedException { - long segmentToCompress = segmentAware.waitNextSegmentToCompress(); - boolean reserved = reserve(new FileWALPointer(segmentToCompress, 0, 0)); + /** {@inheritDoc} */ + @Override public void body() throws InterruptedException, IgniteInterruptedCheckedException { + init(); - return reserved ? segmentToCompress : -1; + super.body0(); } /** - * Deletes raw WAL segments if they aren't locked and already have compressed copies of themselves. + * @throws IgniteInterruptedCheckedException If failed to wait for thread shutdown.
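The `calculateThreadCount()` heuristic above boils down to `min(configured count, processors / 4)`, with the quarter computed via a right shift. Extracted as a pure function for illustration (`CompressorThreadMath` is not an Ignite class):

```java
// Standalone restatement of the calculateThreadCount() heuristic from the diff.
final class CompressorThreadMath {
    /**
     * Use the configured worker count only when a quarter of the processors
     * (procNum >> 2) covers it; otherwise fall back to that quarter.
     */
    static int compressorThreads(int procNum, int configuredCnt) {
        return (procNum >> 2) >= configuredCnt ? configuredCnt : procNum >> 2;
    }
}
```

On a host with fewer than four processors this yields zero additional workers; since the loop in `init()` starts extra workers from index 1, the `FileCompressor` itself (index 0) still compresses segments on its own in that case.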
*/ - private void deleteObsoleteRawSegments() { - FileDescriptor[] descs = scan(walArchiveDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER)); + private void shutdown() throws IgniteInterruptedCheckedException { + synchronized (this) { + for (FileCompressorWorker worker: workers) + U.cancel(worker); - Set indices = new HashSet<>(); - Set duplicateIndices = new HashSet<>(); + for (FileCompressorWorker worker: workers) + U.join(worker); - for (FileDescriptor desc : descs) { - if (!indices.add(desc.idx)) - duplicateIndices.add(desc.idx); + workers.clear(); + + U.cancel(this); } - for (FileDescriptor desc : descs) { - if (desc.isCompressed()) - continue; + U.join(this); + } + } - // Do not delete reserved or locked segment and any segment after it. - if (segmentReservedOrLocked(desc.idx)) - return; + /** */ + private class FileCompressorWorker extends GridWorker { + /** */ + FileCompressorWorker(int idx, IgniteLogger log) { + super(cctx.igniteInstanceName(), "wal-file-compressor-%" + cctx.igniteInstanceName() + "%-" + idx, log); + } - if (desc.idx < minUncompressedIdxToKeep && duplicateIndices.contains(desc.idx)) { - if (!desc.file.delete()) - U.warn(log, "Failed to remove obsolete WAL segment (make sure the process has enough rights): " + - desc.file.getAbsolutePath() + ", exists: " + desc.file.exists()); - } + /** */ + void restart() { + assert runner() == null : "FileCompressorWorker is still running."; + + isCancelled = false; + + new IgniteThread(this).start(); + } + + /** + * Pessimistically tries to reserve segment for compression in order to avoid concurrent truncation. + * Waits if there's no segment to archive right now. 
+     */
+    private long tryReserveNextSegmentOrWait() throws IgniteInterruptedCheckedException {
+        long segmentToCompress = segmentAware.waitNextSegmentToCompress();
+
+        boolean reserved = reserve(new FileWALPointer(segmentToCompress, 0, 0));
+
+        if (reserved)
+            return segmentToCompress;
+        else {
+            segmentAware.onSegmentCompressed(segmentToCompress);
+
+            return -1;
        }
    }

    /** {@inheritDoc} */
-    @Override public void run() {
-        init();
+    @Override protected void body() throws InterruptedException, IgniteInterruptedCheckedException {
+        body0();
+    }

-        while (!Thread.currentThread().isInterrupted() && !stopped) {
-            long currReservedSegment = -1;
+    /** */
+    private void body0() {
+        while (!isCancelled()) {
+            long segIdx = -1L;

            try {
+                if ((segIdx = tryReserveNextSegmentOrWait()) == -1)
+                    continue;
+
                deleteObsoleteRawSegments();

-                currReservedSegment = tryReserveNextSegmentOrWait();
-                if (currReservedSegment == -1)
-                    continue;
+                File tmpZip = new File(walArchiveDir, FileDescriptor.fileName(segIdx)
+                    + FilePageStoreManager.ZIP_SUFFIX + FilePageStoreManager.TMP_SUFFIX);

-                File tmpZip = new File(walArchiveDir, FileDescriptor.fileName(currReservedSegment)
-                    + FilePageStoreManager.ZIP_SUFFIX + FilePageStoreManager.TMP_SUFFIX);
+                File zip = new File(walArchiveDir, FileDescriptor.fileName(segIdx) + FilePageStoreManager.ZIP_SUFFIX);

-                File zip = new File(walArchiveDir, FileDescriptor.fileName(currReservedSegment) + FilePageStoreManager.ZIP_SUFFIX);
+                File raw = new File(walArchiveDir, FileDescriptor.fileName(segIdx));

-                File raw = new File(walArchiveDir, FileDescriptor.fileName(currReservedSegment));
                if (!Files.exists(raw.toPath()))
                    throw new IgniteCheckedException("WAL archive segment is missing: " + raw);

-                compressSegmentToFile(currReservedSegment, raw, tmpZip);
+                compressSegmentToFile(segIdx, raw, tmpZip);

                Files.move(tmpZip.toPath(), zip.toPath());

-                if (mode != WALMode.NONE) {
-                    try (FileIO f0 = ioFactory.create(zip, CREATE, READ, WRITE)) {
-                        f0.force();
-                    }
+                try (FileIO f0 = ioFactory.create(zip, CREATE, READ, WRITE)) {
+                    f0.force();
+                }

-                if (evt.isRecordable(EVT_WAL_SEGMENT_COMPACTED)) {
-                    evt.record(new WalSegmentCompactedEvent(
-                        cctx.discovery().localNode(),
-                        currReservedSegment,
+                segmentAware.onSegmentCompressed(segIdx);
+
+                if (evt.isRecordable(EVT_WAL_SEGMENT_COMPACTED) && !cctx.kernalContext().recoveryMode()) {
+                    evt.record(new WalSegmentCompactedEvent(
+                        cctx.localNode(),
+                        segIdx,
                        zip.getAbsoluteFile())
-                        );
-                    }
+                    );
                }
-
-                segmentAware.lastCompressedIdx(currReservedSegment);
            }
            catch (IgniteInterruptedCheckedException ignore) {
                Thread.currentThread().interrupt();
            }
            catch (IgniteCheckedException | IOException e) {
-                U.error(log, "Compression of WAL segment [idx=" + currReservedSegment +
-                    "] was skipped due to unexpected error", e);
+                U.error(log, "Compression of WAL segment [idx=" + segIdx +
+                    "] was skipped due to unexpected error", e);

-                segmentAware.lastCompressedIdx(currReservedSegment);
+                segmentAware.onSegmentCompressed(segIdx);
            }
            finally {
-                if (currReservedSegment != -1)
-                    release(new FileWALPointer(currReservedSegment, 0, 0));
+                if (segIdx != -1L)
+                    release(new FileWALPointer(segIdx, 0, 0));
            }
        }
    }
@@ -2053,7 +2048,7 @@ private void deleteObsoleteRawSegments() {
     * @param zip Zip file.
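The worker loop above never exposes a partially written archive: it compresses into `<name>.zip.tmp` and only then moves the result to `<name>.zip` via `Files.move`. The publish pattern in isolation (directory and file names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AtomicPublish {
    /**
     * Write data under a ".tmp" sibling first, then rename it into place, so a
     * crash mid-write never leaves a half-written file under the final name.
     */
    static Path publish(Path dir, String name, byte[] data) throws IOException {
        Path tmp = dir.resolve(name + ".tmp");
        Path dst = dir.resolve(name);

        Files.write(tmp, data);

        // Plain move, as in the worker above; readers only ever see a
        // complete file under the final name.
        Files.move(tmp, dst);

        return dst;
    }
}
```

On a single filesystem the rename is effectively atomic on most platforms; `StandardCopyOption.ATOMIC_MOVE` can be passed to make that requirement explicit where supported.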
*/ private void compressSegmentToFile(long nextSegment, File raw, File zip) - throws IOException, IgniteCheckedException { + throws IOException, IgniteCheckedException { int segmentSerializerVer; try (FileIO fileIO = ioFactory.create(raw)) { @@ -2062,7 +2057,7 @@ private void compressSegmentToFile(long nextSegment, File raw, File zip) try (ZipOutputStream zos = new ZipOutputStream(new BufferedOutputStream(new FileOutputStream(zip)))) { zos.setLevel(dsCfg.getWalCompactionLevel()); - zos.putNextEntry(new ZipEntry("")); + zos.putNextEntry(new ZipEntry(nextSegment + ".wal")); ByteBuffer buf = ByteBuffer.allocate(HEADER_RECORD_SIZE); buf.order(ByteOrder.nativeOrder()); @@ -2083,7 +2078,7 @@ private void compressSegmentToFile(long nextSegment, File raw, File zip) }; try (SingleSegmentLogicalRecordsIterator iter = new SingleSegmentLogicalRecordsIterator( - log, cctx, ioFactory, BUF_SIZE, nextSegment, walArchiveDir, appendToZipC)) { + log, cctx, ioFactory, BUF_SIZE, nextSegment, walArchiveDir, appendToZipC)) { while (iter.hasNextX()) iter.nextX(); @@ -2102,7 +2097,7 @@ private void compressSegmentToFile(long nextSegment, File raw, File zip) * @param ser Record Serializer. */ @NotNull private ByteBuffer prepareSwitchSegmentRecordBuffer(long nextSegment, RecordSerializer ser) - throws IgniteCheckedException { + throws IgniteCheckedException { SwitchSegmentRecord switchRecord = new SwitchSegmentRecord(); int switchRecordSize = ser.size(switchRecord); @@ -2117,16 +2112,33 @@ private void compressSegmentToFile(long nextSegment, File raw, File zip) } /** - * @throws IgniteInterruptedCheckedException If failed to wait for thread shutdown. + * Deletes raw WAL segments if they aren't locked and already have compressed copies of themselves. 
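`compressSegmentToFile` writes a single zip entry per segment; the hunk above renames that entry from the old empty string to `<segmentIndex>.wal`, so standard zip tools show a meaningful name. A minimal sketch of that single-entry layout (payload and compression level are illustrative):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class SegmentZip {
    /** Compress a payload into a single-entry zip whose entry is named "<segIdx>.wal". */
    static byte[] zipSegment(long segIdx, byte[] payload, int level) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        try (ZipOutputStream zos = new ZipOutputStream(new BufferedOutputStream(bos))) {
            zos.setLevel(level);

            // Named entry, as in the patched code; the old code used new ZipEntry("").
            zos.putNextEntry(new ZipEntry(segIdx + ".wal"));
            zos.write(payload);
            zos.closeEntry();
        }

        return bos.toByteArray();
    }
}
```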
*/ - private void shutdown() throws IgniteInterruptedCheckedException { - synchronized (this) { - stopped = true; + private void deleteObsoleteRawSegments() { + FileDescriptor[] descs = scan(walArchiveDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER)); - notifyAll(); + Set indices = new HashSet<>(); + Set duplicateIndices = new HashSet<>(); + + for (FileDescriptor desc : descs) { + if (!indices.add(desc.idx)) + duplicateIndices.add(desc.idx); } - U.join(this); + for (FileDescriptor desc : descs) { + if (desc.isCompressed()) + continue; + + // Do not delete reserved or locked segment and any segment after it. + if (segmentReservedOrLocked(desc.idx)) + return; + + if (desc.idx < segmentAware.keepUncompressedIdxFrom() && duplicateIndices.contains(desc.idx)) { + if (desc.file.exists() && !desc.file.delete()) + U.warn(log, "Failed to remove obsolete WAL segment (make sure the process has enough rights): " + + desc.file.getAbsolutePath() + ", exists: " + desc.file.exists()); + } + } } } @@ -2269,7 +2281,16 @@ private void shutdown() { U.join(this, log); } - } + + /** Restart worker. */ + void restart() { + assert runner() == null : "FileDecompressor is still running."; + + isCancelled = false; + + new IgniteThread(this).start(); + } + } /** * Validate files depending on {@link DataStorageConfiguration#getWalSegments()} and create if need. Check end @@ -2315,7 +2336,7 @@ else if (create) * @param ver Version. * @param compacted Compacted flag. */ - @NotNull private static ByteBuffer prepareSerializerVersionBuffer(long idx, int ver, boolean compacted, ByteBuffer buf) { + @NotNull public static ByteBuffer prepareSerializerVersionBuffer(long idx, int ver, boolean compacted, ByteBuffer buf) { // Write record type. buf.put((byte) (WALRecord.RecordType.HEADER_RECORD.ordinal() + 1)); @@ -2335,7 +2356,7 @@ else if (create) buf.position(0); // This call will move buffer position to the end of the record again. 
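`deleteObsoleteRawSegments` may only remove a raw segment when a compressed copy exists, i.e. when the same index appears twice in the archive directory listing. It detects this with the two-set idiom above: a failed `Set.add` means the index was already seen. The idiom in isolation:

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateIdx {
    /** Indices occurring more than once: a failed add() means "seen before". */
    static Set<Long> duplicates(long[] idxs) {
        Set<Long> indices = new HashSet<>();
        Set<Long> dups = new HashSet<>();

        for (long idx : idxs) {
            if (!indices.add(idx))
                dups.add(idx);
        }

        return dups;
    }
}
```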
- int crcVal = PureJavaCrc32.calcCrc32(buf, curPos); + int crcVal = FastCrc.calcCrc(buf, curPos); buf.putInt(crcVal); } @@ -2351,29 +2372,7 @@ else if (create) /** * */ - private abstract static class FileHandle { - /** I/O interface for read/write operations with file */ - SegmentIO fileIO; - - /** - * @param fileIO I/O interface for read/write operations of FileHandle. * - */ - private FileHandle(SegmentIO fileIO) { - this.fileIO = fileIO; - } - - /** - * @return Absolute WAL segment file index (incremental counter). - */ - public long getSegmentId(){ - return fileIO.getSegmentId(); - } - } - - /** - * - */ - public static class ReadFileHandle extends FileHandle implements AbstractWalRecordsIterator.AbstractReadFileHandle { + public static class ReadFileHandle extends AbstractFileHandle implements AbstractWalRecordsIterator.AbstractReadFileHandle { /** Entry serializer. */ RecordSerializer ser; @@ -2384,7 +2383,7 @@ public static class ReadFileHandle extends FileHandle implements AbstractWalReco private final SegmentAware segmentAware; /** - * @param fileIO I/O interface for read/write operations of FileHandle. + * @param fileIO I/O interface for read/write operations of AbstractFileHandle. * @param ser Entry serializer. * @param in File input. * @param aware Segment aware. @@ -2436,459 +2435,6 @@ public ReadFileHandle( } } - /** - * File handle for one log segment. - */ - @SuppressWarnings("SignalWithoutCorrespondingAwait") - private class FileWriteHandle extends FileHandle { - /** */ - private final RecordSerializer serializer; - - /** Created on resume logging. 
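`prepareSerializerVersionBuffer` finishes by computing a CRC over the header bytes and appending it; the hunk above swaps `PureJavaCrc32` for `FastCrc`. Both are Ignite-internal classes; the sketch below shows the same append-a-trailing-CRC shape using the JDK's `CRC32` as a stand-in (it is not bit-compatible with either Ignite implementation):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class HeaderCrc {
    /**
     * CRC the first len bytes of the buffer and append the 4-byte value,
     * mirroring the header-record layout above (JDK CRC32 as a stand-in).
     */
    static int appendCrc(ByteBuffer buf, int len) {
        CRC32 crc = new CRC32();

        for (int i = 0; i < len; i++)
            crc.update(buf.get(i)); // absolute get: position is untouched

        int crcVal = (int)crc.getValue();

        buf.position(len);
        buf.putInt(crcVal); // position moves past the appended CRC

        return crcVal;
    }
}
```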
*/ - private volatile boolean resume; - - /** - * Position in current file after the end of last written record (incremented after file channel write - * operation) - */ - volatile long written; - - /** */ - private volatile long lastFsyncPos; - - /** Stop guard to provide warranty that only one thread will be successful in calling {@link #close(boolean)} */ - private final AtomicBoolean stop = new AtomicBoolean(false); - - /** */ - private final Lock lock = new ReentrantLock(); - - /** Condition for timed wait of several threads, see {@link DataStorageConfiguration#getWalFsyncDelayNanos()} */ - private final Condition fsync = lock.newCondition(); - - /** - * Next segment available condition. Protection from "spurious wakeup" is provided by predicate {@link - * #fileIO}=null - */ - private final Condition nextSegment = lock.newCondition(); - - /** Buffer. */ - private final SegmentedRingByteBuffer buf; - - /** - * @param fileIO I/O file interface to use - * @param pos Position. - * @param resume Created on resume logging flag. - * @param serializer Serializer. - * @param buf Buffer. - * @throws IOException If failed. - */ - private FileWriteHandle( - SegmentIO fileIO, - long pos, - boolean resume, - RecordSerializer serializer, - SegmentedRingByteBuffer buf - ) throws IOException { - super(fileIO); - - assert serializer != null; - - if (!mmap) - fileIO.position(pos); - - this.serializer = serializer; - - written = pos; - lastFsyncPos = pos; - this.resume = resume; - this.buf = buf; - } - - /** - * Write serializer version to current handle. - */ - public void writeHeader() { - SegmentedRingByteBuffer.WriteSegment seg = buf.offer(HEADER_RECORD_SIZE); - - assert seg != null && seg.position() > 0; - - prepareSerializerVersionBuffer(getSegmentId(), serializerVersion(), false, seg.buffer()); - - seg.release(); - } - - /** - * @param rec Record to be added to write queue. 
- * @return Pointer or null if roll over to next segment is required or already started by other thread. - * @throws StorageException If failed. - * @throws IgniteCheckedException If failed. - */ - @Nullable private WALPointer addRecord(WALRecord rec) throws StorageException, IgniteCheckedException { - assert rec.size() > 0 : rec; - - for (;;) { - checkNode(); - - SegmentedRingByteBuffer.WriteSegment seg; - - // Buffer can be in open state in case of resuming with different serializer version. - if (rec.type() == SWITCH_SEGMENT_RECORD && !currHnd.resume) - seg = buf.offerSafe(rec.size()); - else - seg = buf.offer(rec.size()); - - FileWALPointer ptr = null; - - if (seg != null) { - try { - int pos = (int)(seg.position() - rec.size()); - - ByteBuffer buf = seg.buffer(); - - if (buf == null) - return null; // Can not write to this segment, need to switch to the next one. - - ptr = new FileWALPointer(getSegmentId(), pos, rec.size()); - - rec.position(ptr); - - fillBuffer(buf, rec); - - if (mmap) { - // written field must grow only, but segment with greater position can be serialized - // earlier than segment with smaller position. - while (true) { - long written0 = written; - - if (seg.position() > written0) { - if (WRITTEN_UPD.compareAndSet(this, written0, seg.position())) - break; - } - else - break; - } - } - - return ptr; - } - finally { - seg.release(); - - if (mode == WALMode.BACKGROUND && rec instanceof CheckpointRecord) - flushOrWait(ptr); - } - } - else - walWriter.flushAll(); - } - } - - /** - * Flush or wait for concurrent flush completion. - * - * @param ptr Pointer. - */ - private void flushOrWait(FileWALPointer ptr) throws IgniteCheckedException { - if (ptr != null) { - // If requested obsolete file index, it must be already flushed by close. - if (ptr.index() != getSegmentId()) - return; - } - - flush(ptr); - } - - /** - * @param ptr Pointer. 
- */ - private void flush(FileWALPointer ptr) throws IgniteCheckedException { - if (ptr == null) { // Unconditional flush. - walWriter.flushAll(); - - return; - } - - assert ptr.index() == getSegmentId(); - - walWriter.flushBuffer(ptr.fileOffset()); - } - - /** - * @param buf Buffer. - * @param rec WAL record. - * @throws IgniteCheckedException If failed. - */ - private void fillBuffer(ByteBuffer buf, WALRecord rec) throws IgniteCheckedException { - try { - serializer.writeRecord(rec, buf); - } - catch (RuntimeException e) { - throw new IllegalStateException("Failed to write record: " + rec, e); - } - } - - /** - * Non-blocking check if this pointer needs to be sync'ed. - * - * @param ptr WAL pointer to check. - * @return {@code False} if this pointer has been already sync'ed. - */ - private boolean needFsync(FileWALPointer ptr) { - // If index has changed, it means that the log was rolled over and already sync'ed. - // If requested position is smaller than last sync'ed, it also means all is good. - // If position is equal, then our record is the last not synced. - return getSegmentId() == ptr.index() && lastFsyncPos <= ptr.fileOffset(); - } - - /** - * @return Pointer to the end of the last written record (probably not fsync-ed). - */ - private FileWALPointer position() { - lock.lock(); - - try { - return new FileWALPointer(getSegmentId(), (int)written, 0); - } - finally { - lock.unlock(); - } - } - - /** - * @param ptr Pointer to sync. - * @throws StorageException If failed. - */ - private void fsync(FileWALPointer ptr) throws StorageException, IgniteCheckedException { - lock.lock(); - - try { - if (ptr != null) { - if (!needFsync(ptr)) - return; - - if (fsyncDelay > 0 && !stop.get()) { - // Delay fsync to collect as many updates as possible: trade latency for throughput. 
- U.await(fsync, fsyncDelay, TimeUnit.NANOSECONDS); - - if (!needFsync(ptr)) - return; - } - } - - flushOrWait(ptr); - - if (stop.get()) - return; - - long lastFsyncPos0 = lastFsyncPos; - long written0 = written; - - if (lastFsyncPos0 != written0) { - // Fsync position must be behind. - assert lastFsyncPos0 < written0 : "lastFsyncPos=" + lastFsyncPos0 + ", written=" + written0; - - boolean metricsEnabled = metrics.metricsEnabled(); - - long start = metricsEnabled ? System.nanoTime() : 0; - - if (mmap) { - long pos = ptr == null ? -1 : ptr.fileOffset(); - - List segs = buf.poll(pos); - - if (segs != null) { - assert segs.size() == 1; - - SegmentedRingByteBuffer.ReadSegment seg = segs.get(0); - - int off = seg.buffer().position(); - int len = seg.buffer().limit() - off; - - fsync((MappedByteBuffer)buf.buf, off, len); - - seg.release(); - } - } - else - walWriter.force(); - - lastFsyncPos = written; - - if (fsyncDelay > 0) - fsync.signalAll(); - - long end = metricsEnabled ? System.nanoTime() : 0; - - if (metricsEnabled) - metrics.onFsync(end - start); - } - } - finally { - lock.unlock(); - } - } - - /** - * @param buf Mapped byte buffer.. - * @param off Offset. - * @param len Length. - */ - private void fsync(MappedByteBuffer buf, int off, int len) throws IgniteCheckedException { - try { - long mappedOff = (Long)mappingOffset.invoke(buf); - - assert mappedOff == 0 : mappedOff; - - long addr = (Long)mappingAddress.invoke(buf, mappedOff); - - long delta = (addr + off) % PAGE_SIZE; - - long alignedAddr = (addr + off) - delta; - - force0.invoke(buf, fd.get(buf), alignedAddr, len + delta); - } - catch (IllegalAccessException | InvocationTargetException e) { - throw new IgniteCheckedException(e); - } - } - - /** - * @return {@code true} If this thread actually closed the segment. - * @throws IgniteCheckedException If failed. - * @throws StorageException If failed. 
- */ - private boolean close(boolean rollOver) throws IgniteCheckedException, StorageException { - if (stop.compareAndSet(false, true)) { - lock.lock(); - - try { - flushOrWait(null); - - try { - RecordSerializer backwardSerializer = new RecordSerializerFactoryImpl(cctx) - .createSerializer(serializerVer); - - SwitchSegmentRecord segmentRecord = new SwitchSegmentRecord(); - - int switchSegmentRecSize = backwardSerializer.size(segmentRecord); - - if (rollOver && written < (maxWalSegmentSize - switchSegmentRecSize)) { - segmentRecord.size(switchSegmentRecSize); - - WALPointer segRecPtr = addRecord(segmentRecord); - - if (segRecPtr != null) - fsync((FileWALPointer)segRecPtr); - } - - if (mmap) { - List segs = buf.poll(maxWalSegmentSize); - - if (segs != null) { - assert segs.size() == 1; - - segs.get(0).release(); - } - } - - // Do the final fsync. - if (mode != WALMode.NONE) { - if (mmap) - ((MappedByteBuffer)buf.buf).force(); - else - fileIO.force(); - - lastFsyncPos = written; - } - - if (mmap) { - try { - fileIO.close(); - } - catch (IOException ignore) { - // No-op. 
- } - } - else { - walWriter.close(); - - if (!rollOver) - buf.free(); - } - } - catch (IOException e) { - throw new StorageException("Failed to close WAL write handle [idx=" + getSegmentId() + "]", e); - } - - if (log.isDebugEnabled()) - log.debug("Closed WAL write handle [idx=" + getSegmentId() + "]"); - - return true; - } - finally { - if (mmap) - buf.free(); - - lock.unlock(); - } - } - else - return false; - } - - /** - * Signals next segment available to wake up other worker threads waiting for WAL to write - */ - private void signalNextAvailable() { - lock.lock(); - - try { - assert cctx.kernalContext().invalid() || - written == lastFsyncPos || mode != WALMode.FSYNC : - "fsync [written=" + written + ", lastFsync=" + lastFsyncPos + ", idx=" + getSegmentId() + ']'; - - fileIO = null; - - nextSegment.signalAll(); - } - finally { - lock.unlock(); - } - } - - /** - * - */ - private void awaitNext() { - lock.lock(); - - try { - while (fileIO != null) - U.awaitQuiet(nextSegment); - } - finally { - lock.unlock(); - } - } - - /** - * @return Safely reads current position of the file channel as String. Will return "null" if channel is null. - */ - private String safePosition() { - FileIO io = fileIO; - - if (io == null) - return "null"; - - try { - return String.valueOf(io.position()); - } - catch (IOException e) { - return "{Failed to read channel position: " + e.getMessage() + '}'; - } - } - } - /** * Iterator over WAL-log. */ @@ -3177,353 +2723,13 @@ private void doFlush() { FileWriteHandle hnd = currentHandle(); try { - hnd.flush(null); + hnd.flushAll(); } catch (Exception e) { U.warn(log, "Failed to flush WAL record queue", e); } } - /** - * WAL writer worker. - */ - @SuppressWarnings("ForLoopReplaceableByForEach") - private class WALWriter extends GridWorker { - /** Unconditional flush. */ - private static final long UNCONDITIONAL_FLUSH = -1L; - - /** File close. */ - private static final long FILE_CLOSE = -2L; - - /** File force. 
*/ - private static final long FILE_FORCE = -3L; - - /** Err. */ - private volatile Throwable err; - - //TODO: replace with GC free data structure. - /** Parked threads. */ - final Map waiters = new ConcurrentHashMap<>(); - - /** - * Default constructor. - * - * @param log Logger. - */ - WALWriter(IgniteLogger log) { - super(cctx.igniteInstanceName(), "wal-write-worker%" + cctx.igniteInstanceName(), log, - cctx.kernalContext().workersRegistry()); - } - - /** {@inheritDoc} */ - @Override protected void body() { - Throwable err = null; - - try { - while (!isCancelled()) { - onIdle(); - - while (waiters.isEmpty()) { - if (!isCancelled()) { - blockingSectionBegin(); - - try { - LockSupport.park(); - } - finally { - blockingSectionEnd(); - } - } - else { - unparkWaiters(Long.MAX_VALUE); - - return; - } - } - - Long pos = null; - - for (Long val : waiters.values()) { - if (val > Long.MIN_VALUE) - pos = val; - } - - updateHeartbeat(); - - if (pos == null) - continue; - else if (pos < UNCONDITIONAL_FLUSH) { - try { - assert pos == FILE_CLOSE || pos == FILE_FORCE : pos; - - if (pos == FILE_CLOSE) - currHnd.fileIO.close(); - else if (pos == FILE_FORCE) - currHnd.fileIO.force(); - } - catch (IOException e) { - log.error("Exception in WAL writer thread: ", e); - - err = e; - - unparkWaiters(Long.MAX_VALUE); - - return; - } - - unparkWaiters(pos); - } - - updateHeartbeat(); - - List segs = currentHandle().buf.poll(pos); - - if (segs == null) { - unparkWaiters(pos); - - continue; - } - - for (int i = 0; i < segs.size(); i++) { - SegmentedRingByteBuffer.ReadSegment seg = segs.get(i); - - updateHeartbeat(); - - try { - writeBuffer(seg.position(), seg.buffer()); - } - catch (Throwable e) { - log.error("Exception in WAL writer thread:", e); - - err = e; - } - finally { - seg.release(); - - long p = pos <= UNCONDITIONAL_FLUSH || err != null ? 
Long.MAX_VALUE : currentHandle().written; - - unparkWaiters(p); - } - } - } - } - catch (Throwable t) { - err = t; - } - finally { - unparkWaiters(Long.MAX_VALUE); - - if (err == null && !isCancelled) - err = new IllegalStateException("Worker " + name() + " is terminated unexpectedly"); - - if (err instanceof OutOfMemoryError) - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err)); - else if (err != null) - cctx.kernalContext().failure().process(new FailureContext(SYSTEM_WORKER_TERMINATION, err)); - } - } - - /** - * Shutdowns thread. - */ - public void shutdown() throws IgniteInterruptedCheckedException { - U.cancel(this); - - LockSupport.unpark(runner()); - - U.join(runner()); - } - - /** - * Unparks waiting threads. - * - * @param pos Pos. - */ - private void unparkWaiters(long pos) { - assert pos > Long.MIN_VALUE : pos; - - for (Map.Entry e : waiters.entrySet()) { - Long val = e.getValue(); - - if (val <= pos) { - if (val != Long.MIN_VALUE) - waiters.put(e.getKey(), Long.MIN_VALUE); - - LockSupport.unpark(e.getKey()); - } - } - } - - /** - * Forces all made changes to the file. - */ - void force() throws IgniteCheckedException { - flushBuffer(FILE_FORCE); - } - - /** - * Closes file. - */ - void close() throws IgniteCheckedException { - flushBuffer(FILE_CLOSE); - } - - /** - * Flushes all data from the buffer. - */ - void flushAll() throws IgniteCheckedException { - flushBuffer(UNCONDITIONAL_FLUSH); - } - - /** - * @param expPos Expected position. 
- */ - @SuppressWarnings("ForLoopReplaceableByForEach") - void flushBuffer(long expPos) throws IgniteCheckedException { - if (mmap) - return; - - Throwable err = walWriter.err; - - if (err != null) - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err)); - - if (expPos == UNCONDITIONAL_FLUSH) - expPos = (currentHandle().buf.tail()); - - Thread t = Thread.currentThread(); - - waiters.put(t, expPos); - - LockSupport.unpark(walWriter.runner()); - - while (true) { - Long val = waiters.get(t); - - assert val != null : "Only this thread can remove thread from waiters"; - - if (val == Long.MIN_VALUE) { - waiters.remove(t); - - Throwable walWriterError = walWriter.err; - - if (walWriterError != null) - throw new IgniteCheckedException("Flush buffer failed.", walWriterError); - - return; - } - else - LockSupport.park(); - } - } - - /** - * @param pos Position in file to start write from. May be checked against actual position to wait previous - * writes to complete - * @param buf Buffer to write to file - * @throws StorageException If failed. - * @throws IgniteCheckedException If failed. - */ - @SuppressWarnings("TooBroadScope") - private void writeBuffer(long pos, ByteBuffer buf) throws StorageException, IgniteCheckedException { - FileWriteHandle hdl = currentHandle(); - - assert hdl.fileIO != null : "Writing to a closed segment."; - - checkNode(); - - long lastLogged = U.currentTimeMillis(); - - long logBackoff = 2_000; - - // If we were too fast, need to wait previous writes to complete. - while (hdl.written != pos) { - assert hdl.written < pos : "written = " + hdl.written + ", pos = " + pos; // No one can write further than we are now. - - // Permutation occurred between blocks write operations. - // Order of acquiring lock is not the same as order of write. 
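The removed `WALWriter` coordinates flushing threads through a `waiters` map plus `LockSupport`: a flusher publishes its target position and parks; the writer marks satisfied entries with the `Long.MIN_VALUE` sentinel (which also guards against spurious wakeups) and unparks them. A minimal standalone version of that handshake (names are illustrative; error handling omitted):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.LockSupport;

public class ParkHandshake {
    /** Thread -> position it waits for; Long.MIN_VALUE means "satisfied". */
    static final Map<Thread, Long> waiters = new ConcurrentHashMap<>();

    /** Flusher side: register the target position and park until satisfied. */
    static void awaitFlushed(long expPos) {
        Thread t = Thread.currentThread();

        waiters.put(t, expPos);

        // The sentinel check protects against spurious wakeups.
        while (waiters.get(t) != Long.MIN_VALUE)
            LockSupport.park();

        waiters.remove(t);
    }

    /** Writer side: wake every thread whose target position has been reached. */
    static void unparkWaiters(long pos) {
        for (Map.Entry<Thread, Long> e : waiters.entrySet()) {
            long val = e.getValue();

            if (val != Long.MIN_VALUE && val <= pos) {
                waiters.put(e.getKey(), Long.MIN_VALUE);

                LockSupport.unpark(e.getKey());
            }
        }
    }
}
```

Because an `unpark` permit persists, the writer may run between the flusher's `put` and its `park` without losing the wakeup.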
- long now = U.currentTimeMillis(); - - if (now - lastLogged >= logBackoff) { - if (logBackoff < 60 * 60_000) - logBackoff *= 2; - - U.warn(log, "Still waiting for a concurrent write to complete [written=" + hdl.written + - ", pos=" + pos + ", lastFsyncPos=" + hdl.lastFsyncPos + ", stop=" + hdl.stop.get() + - ", actualPos=" + hdl.safePosition() + ']'); - - lastLogged = now; - } - - checkNode(); - } - - // Do the write. - int size = buf.remaining(); - - assert size > 0 : size; - - try { - assert hdl.written == hdl.fileIO.position(); - - hdl.written += hdl.fileIO.writeFully(buf); - - metrics.onWalBytesWritten(size); - - assert hdl.written == hdl.fileIO.position(); - } - catch (IOException e) { - StorageException se = new StorageException("Failed to write buffer.", e); - - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, se)); - - throw se; - } - } - } - - /** - * Syncs WAL segment file. - */ - private class WalSegmentSyncer extends GridWorker { - /** Sync timeout. */ - long syncTimeout; - - /** - * @param igniteInstanceName Ignite instance name. - * @param log Logger. - */ - public WalSegmentSyncer(String igniteInstanceName, IgniteLogger log) { - super(igniteInstanceName, "wal-segment-syncer", log); - - syncTimeout = Math.max(IgniteSystemProperties.getLong(IGNITE_WAL_SEGMENT_SYNC_TIMEOUT, - DFLT_WAL_SEGMENT_SYNC_TIMEOUT), 100L); - } - - /** {@inheritDoc} */ - @Override protected void body() throws InterruptedException, IgniteInterruptedCheckedException { - while (!isCancelled()) { - sleep(syncTimeout); - - try { - flush(null, true); - } - catch (IgniteCheckedException e) { - U.error(log, "Exception when flushing WAL.", e); - } - } - } - - /** Shutted down the worker. 
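While waiting for a concurrent write to reach `pos`, the removed `writeBuffer` throttles its "still waiting" warning with an interval that doubles after each message, capped at one hour, so a long stall produces a handful of log lines instead of a flood. The schedule in isolation:

```java
public class LogBackoff {
    /** One hour in milliseconds, the cap used in the removed code. */
    static final long CAP_MS = 60 * 60_000L;

    /** Next warning interval: keep doubling until the cap is reached. */
    static long next(long cur) {
        return cur < CAP_MS ? cur * 2 : cur;
    }
}
```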
*/ - private void shutdown() { - synchronized (this) { - U.cancel(this); - } - - U.join(this, log); - } - } - /** * Scans provided folder for a WAL segment files * @param walFilesDir directory to scan diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FsyncModeFileWriteAheadLogManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FsyncModeFileWriteAheadLogManager.java deleted file mode 100644 index df8f4de0c888d..0000000000000 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/FsyncModeFileWriteAheadLogManager.java +++ /dev/null @@ -1,3445 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.internal.processors.cache.persistence.wal; - -import java.io.BufferedInputStream; -import java.io.BufferedOutputStream; -import java.io.EOFException; -import java.io.File; -import java.io.FileFilter; -import java.io.FileInputStream; -import java.io.FileNotFoundException; -import java.io.FileOutputStream; -import java.io.IOException; -import java.nio.ByteBuffer; -import java.nio.ByteOrder; -import java.nio.file.DirectoryStream; -import java.nio.file.FileAlreadyExistsException; -import java.nio.file.Files; -import java.nio.file.Path; -import java.sql.Time; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collection; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.NavigableMap; -import java.util.Set; -import java.util.TreeMap; -import java.util.TreeSet; -import java.util.concurrent.PriorityBlockingQueue; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicLong; -import java.util.concurrent.atomic.AtomicReference; -import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; -import java.util.concurrent.locks.Condition; -import java.util.concurrent.locks.Lock; -import java.util.concurrent.locks.LockSupport; -import java.util.concurrent.locks.ReentrantLock; -import java.util.regex.Pattern; -import java.util.stream.Stream; -import java.util.zip.ZipEntry; -import java.util.zip.ZipInputStream; -import java.util.zip.ZipOutputStream; -import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.IgniteLogger; -import org.apache.ignite.IgniteSystemProperties; -import org.apache.ignite.configuration.DataStorageConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.configuration.WALMode; -import org.apache.ignite.events.EventType; -import org.apache.ignite.events.WalSegmentArchivedEvent; -import 
org.apache.ignite.events.WalSegmentCompactedEvent; -import org.apache.ignite.failure.FailureContext; -import org.apache.ignite.internal.GridKernalContext; -import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.internal.IgniteInterruptedCheckedException; -import org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager; -import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; -import org.apache.ignite.internal.pagemem.wal.WALIterator; -import org.apache.ignite.internal.pagemem.wal.WALPointer; -import org.apache.ignite.internal.pagemem.wal.record.MarshalledRecord; -import org.apache.ignite.internal.pagemem.wal.record.SwitchSegmentRecord; -import org.apache.ignite.internal.pagemem.wal.record.WALRecord; -import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; -import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter; -import org.apache.ignite.internal.processors.cache.WalStateManager.WALDisableContext; -import org.apache.ignite.internal.processors.cache.persistence.DataStorageMetricsImpl; -import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; -import org.apache.ignite.internal.processors.cache.persistence.StorageException; -import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; -import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; -import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; -import org.apache.ignite.internal.processors.cache.persistence.filename.PdsFolderSettings; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; -import org.apache.ignite.internal.processors.cache.persistence.wal.io.FileInput; -import 
org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentFileInputFactory; -import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; -import org.apache.ignite.internal.processors.cache.persistence.wal.io.SimpleSegmentFileInputFactory; -import org.apache.ignite.internal.processors.cache.persistence.wal.record.HeaderRecord; -import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; -import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactory; -import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactoryImpl; -import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer; -import org.apache.ignite.internal.processors.timeout.GridTimeoutObject; -import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; -import org.apache.ignite.internal.util.GridUnsafe; -import org.apache.ignite.internal.util.future.GridFinishedFuture; -import org.apache.ignite.internal.util.future.GridFutureAdapter; -import org.apache.ignite.internal.util.typedef.CI1; -import org.apache.ignite.internal.util.typedef.CIX1; -import org.apache.ignite.internal.util.typedef.CO; -import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.internal.util.worker.GridWorker; -import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.lang.IgniteInClosure; -import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.lang.IgniteUuid; -import org.apache.ignite.thread.IgniteThread; -import org.jetbrains.annotations.NotNull; -import org.jetbrains.annotations.Nullable; - -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static 
java.nio.file.StandardOpenOption.WRITE; -import static org.apache.ignite.IgniteSystemProperties.IGNITE_CHECKPOINT_TRIGGER_ARCHIVE_SIZE_PERCENTAGE; -import static org.apache.ignite.IgniteSystemProperties.IGNITE_THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE; -import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_SERIALIZER_VERSION; -import static org.apache.ignite.events.EventType.EVT_WAL_SEGMENT_COMPACTED; -import static org.apache.ignite.failure.FailureType.CRITICAL_ERROR; -import static org.apache.ignite.failure.FailureType.SYSTEM_WORKER_TERMINATION; -import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.readSegmentHeader; - -/** - * File WAL manager. - */ -public class FsyncModeFileWriteAheadLogManager extends GridCacheSharedManagerAdapter implements IgniteWriteAheadLogManager { - /** */ - public static final FileDescriptor[] EMPTY_DESCRIPTORS = new FileDescriptor[0]; - - /** */ - private static final byte[] FILL_BUF = new byte[1024 * 1024]; - - /** Pattern for segment file names */ - private static final Pattern WAL_NAME_PATTERN = Pattern.compile("\\d{16}\\.wal"); - - /** */ - private static final Pattern WAL_TEMP_NAME_PATTERN = Pattern.compile("\\d{16}\\.wal\\.tmp"); - - /** WAL segment file filter, see {@link #WAL_NAME_PATTERN} */ - public static final FileFilter WAL_SEGMENT_FILE_FILTER = new FileFilter() { - @Override public boolean accept(File file) { - return !file.isDirectory() && WAL_NAME_PATTERN.matcher(file.getName()).matches(); - } - }; - - /** */ - private static final FileFilter WAL_SEGMENT_TEMP_FILE_FILTER = new FileFilter() { - @Override public boolean accept(File file) { - return !file.isDirectory() && WAL_TEMP_NAME_PATTERN.matcher(file.getName()).matches(); - } - }; - - /** */ - private static final Pattern WAL_SEGMENT_FILE_COMPACTED_PATTERN = Pattern.compile("\\d{16}\\.wal\\.zip"); - - /** WAL segment file filter, see {@link #WAL_NAME_PATTERN} */ - public static final FileFilter 
WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER = new FileFilter() { - @Override public boolean accept(File file) { - return !file.isDirectory() && (WAL_NAME_PATTERN.matcher(file.getName()).matches() || - WAL_SEGMENT_FILE_COMPACTED_PATTERN.matcher(file.getName()).matches()); - } - }; - - /** */ - private static final Pattern WAL_SEGMENT_TEMP_FILE_COMPACTED_PATTERN = Pattern.compile("\\d{16}\\.wal\\.zip\\.tmp"); - - /** */ - private static final FileFilter WAL_SEGMENT_FILE_COMPACTED_FILTER = new FileFilter() { - @Override public boolean accept(File file) { - return !file.isDirectory() && WAL_SEGMENT_FILE_COMPACTED_PATTERN.matcher(file.getName()).matches(); - } - }; - - /** */ - private static final FileFilter WAL_SEGMENT_TEMP_FILE_COMPACTED_FILTER = new FileFilter() { - @Override public boolean accept(File file) { - return !file.isDirectory() && WAL_SEGMENT_TEMP_FILE_COMPACTED_PATTERN.matcher(file.getName()).matches(); - } - }; - - /** Latest serializer version to use. */ - private static final int LATEST_SERIALIZER_VERSION = 2; - - /** - * Percentage of archive size for checkpoint trigger. Need for calculate max size of WAL after last checkpoint. - * Checkpoint should be triggered when max size of WAL after last checkpoint more than maxWallArchiveSize * thisValue - */ - private static final double CHECKPOINT_TRIGGER_ARCHIVE_SIZE_PERCENTAGE = - IgniteSystemProperties.getDouble(IGNITE_CHECKPOINT_TRIGGER_ARCHIVE_SIZE_PERCENTAGE, 0.25); - - /** - * Percentage of WAL archive size to calculate threshold since which removing of old archive should be started. - */ - private static final double THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE = - IgniteSystemProperties.getDouble(IGNITE_THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE, 0.5); - - /** */ - private final boolean alwaysWriteFullPages; - - /** WAL segment size in bytes */ - private final long maxWalSegmentSize; - - /** - * Maximum number of allowed segments without checkpoint. If we have their more checkpoint should be triggered. 
- * It is simple way to calculate wal size without checkpoint instead fair wal size calculating. - */ - private final long maxSegCountWithoutCheckpoint; - - /** Size of wal archive since which removing of old archive should be started */ - private final long allowedThresholdWalArchiveSize; - - /** */ - private final WALMode mode; - - /** Thread local byte buffer size, see {@link #tlb} */ - private final int tlbSize; - - /** WAL flush frequency. Makes sense only for {@link WALMode#BACKGROUND} log WALMode. */ - private final long flushFreq; - - /** Fsync delay. */ - private final long fsyncDelay; - - /** */ - private final DataStorageConfiguration dsCfg; - - /** Events service */ - private final GridEventStorageManager evt; - - /** */ - private IgniteConfiguration igCfg; - - /** Persistence metrics tracker. */ - private DataStorageMetricsImpl metrics; - - /** */ - private File walWorkDir; - - /** WAL archive directory (including consistent ID as subfolder) */ - private File walArchiveDir; - - /** Serializer of latest version, used to read header record and for write records */ - private RecordSerializer serializer; - - /** Serializer latest version to use. */ - private final int serializerVersion = - IgniteSystemProperties.getInteger(IGNITE_WAL_SERIALIZER_VERSION, LATEST_SERIALIZER_VERSION); - - /** Latest segment cleared by {@link #truncate(WALPointer, WALPointer)}. 
*/ - private volatile long lastTruncatedArchiveIdx = -1L; - - /** Factory to provide I/O interfaces for read/write operations with files */ - private volatile FileIOFactory ioFactory; - - /** Factory to provide I/O interfaces for read primitives with files */ - private final SegmentFileInputFactory segmentFileInputFactory; - - /** Updater for {@link #currentHnd}, used for verify there are no concurrent update for current log segment handle */ - private static final AtomicReferenceFieldUpdater currentHndUpd = - AtomicReferenceFieldUpdater.newUpdater(FsyncModeFileWriteAheadLogManager.class, FileWriteHandle.class, "currentHnd"); - - /** - * Thread local byte buffer for saving serialized WAL records chain, see {@link FileWriteHandle#head}. - * Introduced to decrease number of buffers allocation. - * Used only for record itself is shorter than {@link #tlbSize}. - */ - private final ThreadLocal tlb = new ThreadLocal() { - @Override protected ByteBuffer initialValue() { - ByteBuffer buf = ByteBuffer.allocateDirect(tlbSize); - - buf.order(GridUnsafe.NATIVE_BYTE_ORDER); - - return buf; - } - }; - - /** */ - private volatile FileArchiver archiver; - - /** Compressor. */ - private volatile FileCompressor compressor; - - /** Decompressor. */ - private volatile FileDecompressor decompressor; - - /** */ - private final ThreadLocal lastWALPtr = new ThreadLocal<>(); - - /** Current log segment handle */ - private volatile FileWriteHandle currentHnd; - - /** */ - private volatile WALDisableContext walDisableContext; - - /** - * Positive (non-0) value indicates WAL can be archived even if not complete
    - * See {@link DataStorageConfiguration#setWalAutoArchiveAfterInactivity(long)}
    - */ - private final long walAutoArchiveAfterInactivity; - - /** - * Container with last WAL record logged timestamp.
    - * Zero value means there was no records logged to current segment, skip possible archiving for this case
    - * Value is filled only for case {@link #walAutoArchiveAfterInactivity} > 0
    - */ - private AtomicLong lastRecordLoggedMs = new AtomicLong(); - - /** - * Cancellable task for {@link WALMode#BACKGROUND}, should be cancelled at shutdown - * Null for non background modes - */ - @Nullable private volatile GridTimeoutProcessor.CancelableTask backgroundFlushSchedule; - - /** - * Reference to the last added next archive timeout check object. - * Null if mode is not enabled. - * Should be cancelled at shutdown - */ - @Nullable private volatile GridTimeoutObject nextAutoArchiveTimeoutObj; - - /** - * @param ctx Kernal context. - */ - public FsyncModeFileWriteAheadLogManager(@NotNull final GridKernalContext ctx) { - igCfg = ctx.config(); - - DataStorageConfiguration dsCfg = igCfg.getDataStorageConfiguration(); - - assert dsCfg != null; - - this.dsCfg = dsCfg; - - maxWalSegmentSize = dsCfg.getWalSegmentSize(); - mode = dsCfg.getWalMode(); - tlbSize = dsCfg.getWalThreadLocalBufferSize(); - flushFreq = dsCfg.getWalFlushFrequency(); - fsyncDelay = dsCfg.getWalFsyncDelayNanos(); - alwaysWriteFullPages = dsCfg.isAlwaysWriteFullPages(); - ioFactory = dsCfg.getFileIOFactory(); - segmentFileInputFactory = new SimpleSegmentFileInputFactory(); - walAutoArchiveAfterInactivity = dsCfg.getWalAutoArchiveAfterInactivity(); - evt = ctx.event(); - - maxSegCountWithoutCheckpoint = - (long)((dsCfg.getMaxWalArchiveSize() * CHECKPOINT_TRIGGER_ARCHIVE_SIZE_PERCENTAGE) / dsCfg.getWalSegmentSize()); - - allowedThresholdWalArchiveSize = (long)(dsCfg.getMaxWalArchiveSize() * THRESHOLD_WAL_ARCHIVE_SIZE_PERCENTAGE); - - assert mode == WALMode.FSYNC : dsCfg; - } - - /** - * For test purposes only. - * - * @param ioFactory IO factory. 
- */ - public void setFileIOFactory(FileIOFactory ioFactory) { - this.ioFactory = ioFactory; - } - - /** {@inheritDoc} */ - @Override public void start0() throws IgniteCheckedException { - if (!cctx.kernalContext().clientNode()) { - final PdsFolderSettings resolveFolders = cctx.kernalContext().pdsFolderResolver().resolveFolders(); - - checkWalConfiguration(); - - final File walWorkDir0 = walWorkDir = initDirectory( - dsCfg.getWalPath(), - DataStorageConfiguration.DFLT_WAL_PATH, - resolveFolders.folderName(), - "write ahead log work directory" - ); - - final File walArchiveDir0 = walArchiveDir = initDirectory( - dsCfg.getWalArchivePath(), - DataStorageConfiguration.DFLT_WAL_ARCHIVE_PATH, - resolveFolders.folderName(), - "write ahead log archive directory" - ); - - serializer = new RecordSerializerFactoryImpl(cctx).createSerializer(serializerVersion); - - GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)cctx.database(); - - metrics = dbMgr.persistentStoreMetricsImpl(); - - checkOrPrepareFiles(); - - metrics.setWalSizeProvider(new CO() { - @Override public Long apply() { - long size = 0; - - for (File f : walWorkDir0.listFiles()) - size += f.length(); - - for (File f : walArchiveDir0.listFiles()) - size += f.length(); - - return size; - } - }); - - IgniteBiTuple tup = scanMinMaxArchiveIndices(); - - lastTruncatedArchiveIdx = tup == null ? -1 : tup.get1() - 1; - - archiver = new FileArchiver(tup == null ? -1 : tup.get2(), log); - - if (dsCfg.isWalCompactionEnabled()) { - compressor = new FileCompressor(); - - if (decompressor == null) { // Preventing of two file-decompressor thread instantiations. 
- decompressor = new FileDecompressor(log); - - new IgniteThread(decompressor).start(); - } - } - - walDisableContext = cctx.walState().walDisableContext(); - - if (mode != WALMode.NONE) { - if (log.isInfoEnabled()) - log.info("Started write-ahead log manager [mode=" + mode + ']'); - } - else - U.quietAndWarn(log, "Started write-ahead log manager in NONE mode, persisted data may be lost in " + - "a case of unexpected node failure. Make sure to deactivate the cluster before shutdown."); - } - } - - /** - * @throws IgniteCheckedException if WAL store path is configured and archive path isn't (or vice versa) - */ - private void checkWalConfiguration() throws IgniteCheckedException { - if (dsCfg.getWalPath() == null ^ dsCfg.getWalArchivePath() == null) { - throw new IgniteCheckedException( - "Properties should be either both specified or both null " + - "[walStorePath = " + dsCfg.getWalPath() + - ", walArchivePath = " + dsCfg.getWalArchivePath() + "]" - ); - } - } - - /** {@inheritDoc} */ - @Override protected void stop0(boolean cancel) { - final GridTimeoutProcessor.CancelableTask schedule = backgroundFlushSchedule; - - if (schedule != null) - schedule.close(); - - final GridTimeoutObject timeoutObj = nextAutoArchiveTimeoutObj; - - if (timeoutObj != null) - cctx.time().removeTimeoutObject(timeoutObj); - - final FileWriteHandle currHnd = currentHandle(); - - try { - if (mode == WALMode.BACKGROUND) { - if (currHnd != null) - currHnd.flush((FileWALPointer)null, true); - } - - if (currHnd != null) - currHnd.close(false); - - if (archiver != null) - archiver.shutdown(); - - if (compressor != null) - compressor.shutdown(); - - if (decompressor != null) - decompressor.shutdown(); - } - catch (Exception e) { - U.error(log, "Failed to gracefully close WAL segment: " + currentHnd.fileIO, e); - } - } - - /** {@inheritDoc} */ - @Override public void onActivate(GridKernalContext kctx) throws IgniteCheckedException { - if (log.isDebugEnabled()) - log.debug("Activated file write 
ahead log manager [nodeId=" + cctx.localNodeId() + - " topVer=" + cctx.discovery().topologyVersionEx() + " ]"); - - start0(); - - if (!cctx.kernalContext().clientNode()) { - if (isArchiverEnabled()) { - assert archiver != null; - - new IgniteThread(archiver).start(); - } - - if (compressor != null) - compressor.start(); - } - } - - /** {@inheritDoc} */ - @Override public void onDeActivate(GridKernalContext kctx) { - if (log.isDebugEnabled()) - log.debug("DeActivate file write ahead log [nodeId=" + cctx.localNodeId() + - " topVer=" + cctx.discovery().topologyVersionEx() + " ]"); - - stop0(true); - - currentHnd = null; - } - - /** {@inheritDoc} */ - @Override public boolean isAlwaysWriteFullPages() { - return alwaysWriteFullPages; - } - - /** {@inheritDoc} */ - @Override public boolean isFullSync() { - return mode == WALMode.FSYNC; - } - - /** {@inheritDoc} */ - @Override public void resumeLogging(WALPointer lastPtr) throws IgniteCheckedException { - assert currentHnd == null; - assert lastPtr == null || lastPtr instanceof FileWALPointer; - - FileWALPointer filePtr = (FileWALPointer)lastPtr; - - currentHnd = restoreWriteHandle(filePtr); - - if (currentHnd.serializer.version() != serializer.version()) { - if (log.isInfoEnabled()) - log.info("Record serializer version change detected, will start logging with a new WAL record " + - "serializer to a new WAL segment [curFile=" + currentHnd + ", newVer=" + serializer.version() + - ", oldVer=" + currentHnd.serializer.version() + ']'); - - rollOver(currentHnd); - } - - if (mode == WALMode.BACKGROUND) { - backgroundFlushSchedule = cctx.time().schedule(new Runnable() { - @Override public void run() { - doFlush(); - } - }, flushFreq, flushFreq); - } - - if (walAutoArchiveAfterInactivity > 0) - scheduleNextInactivityPeriodElapsedCheck(); - } - - /** - * Schedules next check of inactivity period expired. Based on current record update timestamp. - * At timeout method does check of inactivity period and schedules new launch. 
- */ - private void scheduleNextInactivityPeriodElapsedCheck() { - final long lastRecMs = lastRecordLoggedMs.get(); - final long nextPossibleAutoArchive = (lastRecMs <= 0 ? U.currentTimeMillis() : lastRecMs) + walAutoArchiveAfterInactivity; - - if (log.isDebugEnabled()) - log.debug("Schedule WAL rollover check at " + new Time(nextPossibleAutoArchive).toString()); - - nextAutoArchiveTimeoutObj = new GridTimeoutObject() { - private final IgniteUuid id = IgniteUuid.randomUuid(); - - @Override public IgniteUuid timeoutId() { - return id; - } - - @Override public long endTime() { - return nextPossibleAutoArchive; - } - - @Override public void onTimeout() { - if (log.isDebugEnabled()) - log.debug("Checking if WAL rollover required (" + new Time(U.currentTimeMillis()).toString() + ")"); - - checkWalRolloverRequiredDuringInactivityPeriod(); - - scheduleNextInactivityPeriodElapsedCheck(); - } - }; - cctx.time().addTimeoutObject(nextAutoArchiveTimeoutObj); - } - - /** - * Archiver can be not created, all files will be written to WAL folder, using absolute segment index. - * - * @return flag indicating if archiver is disabled. - */ - private boolean isArchiverEnabled() { - if (walArchiveDir != null && walWorkDir != null) - return !walArchiveDir.equals(walWorkDir); - - return !new File(dsCfg.getWalArchivePath()).equals(new File(dsCfg.getWalPath())); - } - - /** - * Collect wal segment files from low pointer (include) to high pointer (not include) and reserve low pointer. - * - * @param low Low bound. - * @param high High bound. 
- */ - public Collection getAndReserveWalFiles(FileWALPointer low, FileWALPointer high) throws IgniteCheckedException { - final long awaitIdx = high.index() - 1; - - while (archiver.lastArchivedAbsoluteIndex() < awaitIdx) - LockSupport.parkNanos(Thread.currentThread(), 1_000_000); - - if (!reserve(low)) - throw new IgniteCheckedException("WAL archive segment has been deleted [idx=" + low.index() + "]"); - - List res = new ArrayList<>(); - - for (long i = low.index(); i < high.index(); i++) { - String segmentName = FileDescriptor.fileName(i); - - File file = new File(walArchiveDir, segmentName); - File fileZip = new File(walArchiveDir, segmentName + FilePageStoreManager.ZIP_SUFFIX); - - if (file.exists()) - res.add(file); - else if (fileZip.exists()) - res.add(fileZip); - else { - if (log.isInfoEnabled()) { - log.info("Segment not found: " + file.getName() + "/" + fileZip.getName()); - - log.info("Stopped iteration on idx: " + i); - } - - break; - } - } - - return res; - } - - /** {@inheritDoc}*/ - @Override public int serializerVersion() { - return serializerVersion; - } - - /** - * Checks if there was elapsed significant period of inactivity. - * If WAL auto-archive is enabled using {@link #walAutoArchiveAfterInactivity} > 0 this method will activate - * roll over by timeout
    - */ - private void checkWalRolloverRequiredDuringInactivityPeriod() { - if (walAutoArchiveAfterInactivity <= 0) - return; // feature not configured, nothing to do - - final long lastRecMs = lastRecordLoggedMs.get(); - - if (lastRecMs == 0) - return; //no records were logged to current segment, does not consider inactivity - - final long elapsedMs = U.currentTimeMillis() - lastRecMs; - - if (elapsedMs <= walAutoArchiveAfterInactivity) - return; // not enough time elapsed since last write - - if (!lastRecordLoggedMs.compareAndSet(lastRecMs, 0)) - return; // record write occurred concurrently - - final FileWriteHandle handle = currentHandle(); - - try { - rollOver(handle); - } - catch (IgniteCheckedException e) { - U.error(log, "Unable to perform segment rollover: " + e.getMessage(), e); - - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, e)); - } - } - - /** {@inheritDoc} */ - @SuppressWarnings("TooBroadScope") - @Override public WALPointer log(WALRecord record) throws IgniteCheckedException, StorageException { - if (serializer == null || mode == WALMode.NONE) - return null; - - FileWriteHandle currWrHandle = currentHandle(); - - WALDisableContext isDisable = walDisableContext; - - // Logging was not resumed yet. - if (currWrHandle == null || (isDisable != null && isDisable.check())) - return null; - - // Need to calculate record size first. 
- record.size(serializer.size(record)); - - while (true) { - if (record.rollOver()){ - assert cctx.database().checkpointLockIsHeldByThread(); - - currWrHandle = rollOver(currWrHandle); - } - - WALPointer ptr = currWrHandle.addRecord(record); - - if (ptr != null) { - metrics.onWalRecordLogged(); - - lastWALPtr.set(ptr); - - if (walAutoArchiveAfterInactivity > 0) - lastRecordLoggedMs.set(U.currentTimeMillis()); - - return ptr; - } - else - currWrHandle = rollOver(currWrHandle); - - checkNode(); - - if (isStopping()) - throw new IgniteCheckedException("Stopping."); - } - } - - /** {@inheritDoc} */ - @Override public void flush(WALPointer ptr, boolean explicitFsync) throws IgniteCheckedException, StorageException { - if (serializer == null || mode == WALMode.NONE) - return; - - FileWriteHandle cur = currentHandle(); - - // WAL manager was not started (client node). - if (cur == null) - return; - - FileWALPointer filePtr = (FileWALPointer)(ptr == null ? lastWALPtr.get() : ptr); - - // No need to sync if was rolled over. 
- if (filePtr != null && !cur.needFsync(filePtr)) - return; - - cur.fsync(filePtr, false); - } - - /** {@inheritDoc} */ - @Override public WALIterator replay(WALPointer start) - throws IgniteCheckedException, StorageException { - assert start == null || start instanceof FileWALPointer : "Invalid start pointer: " + start; - - FileWriteHandle hnd = currentHandle(); - - FileWALPointer end = null; - - if (hnd != null) - end = hnd.position(); - - return new RecordsIterator( - cctx, - walWorkDir, - walArchiveDir, - (FileWALPointer)start, - end, - dsCfg, - new RecordSerializerFactoryImpl(cctx), - ioFactory, - archiver, - decompressor, - log, - segmentFileInputFactory - ); - } - - /** {@inheritDoc} */ - @Override public boolean reserve(WALPointer start) throws IgniteCheckedException { - assert start != null && start instanceof FileWALPointer : "Invalid start pointer: " + start; - - if (mode == WALMode.NONE) - return false; - - FileArchiver archiver0 = archiver; - - if (archiver0 == null) - throw new IgniteCheckedException("Could not reserve WAL segment: archiver == null"); - - archiver0.reserve(((FileWALPointer)start).index()); - - if (!hasIndex(((FileWALPointer)start).index())) { - archiver0.release(((FileWALPointer)start).index()); - - return false; - } - - return true; - } - - /** {@inheritDoc} */ - @Override public void release(WALPointer start) throws IgniteCheckedException { - assert start != null && start instanceof FileWALPointer : "Invalid start pointer: " + start; - - if (mode == WALMode.NONE) - return; - - FileArchiver archiver0 = archiver; - - if (archiver0 == null) - throw new IgniteCheckedException("Could not release WAL segment: archiver == null"); - - archiver0.release(((FileWALPointer)start).index()); - } - - /** - * @param absIdx Absolulte index to check. - * @return {@code true} if has this index. 
- */ - private boolean hasIndex(long absIdx) { - String segmentName = FileDescriptor.fileName(absIdx); - - String zipSegmentName = FileDescriptor.fileName(absIdx) + FilePageStoreManager.ZIP_SUFFIX; - - boolean inArchive = new File(walArchiveDir, segmentName).exists() || - new File(walArchiveDir, zipSegmentName).exists(); - - if (inArchive) - return true; - - if (absIdx <= lastArchivedIndex()) - return false; - - FileWriteHandle cur = currentHnd; - - return cur != null && cur.getSegmentId() >= absIdx; - } - - /** {@inheritDoc} */ - @Override public int truncate(WALPointer low, WALPointer high) { - if (high == null) - return 0; - - assert high instanceof FileWALPointer : high; - - // File pointer bound: older entries will be deleted from archive - FileWALPointer lowPtr = (FileWALPointer)low; - FileWALPointer highPtr = (FileWALPointer)high; - - FileDescriptor[] descs = scan(walArchiveDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER)); - - int deleted = 0; - - FileArchiver archiver0 = archiver; - - for (FileDescriptor desc : descs) { - if (lowPtr != null && desc.idx < lowPtr.index()) - continue; - - // Do not delete reserved or locked segment and any segment after it. - if (archiver0 != null && archiver0.reserved(desc.idx)) - return deleted; - - long lastArchived = archiver0 != null ? archiver0.lastArchivedAbsoluteIndex() : lastArchivedIndex(); - - // We need to leave at least one archived segment to correctly determine the archive index. - if (desc.idx < highPtr.index() && desc.idx < lastArchived) { - if (!desc.file.delete()) - U.warn(log, "Failed to remove obsolete WAL segment (make sure the process has enough rights): " + - desc.file.getAbsolutePath()); - else - deleted++; - - // Bump up the oldest archive segment index. 
- if (lastTruncatedArchiveIdx < desc.idx) - lastTruncatedArchiveIdx = desc.idx; - } - } - - return deleted; - } - - /** {@inheritDoc} */ - @Override public void notchLastCheckpointPtr(WALPointer ptr) { - if (compressor != null) - compressor.keepUncompressedIdxFrom(((FileWALPointer)ptr).index()); - } - - /** {@inheritDoc} */ - @Override public int walArchiveSegments() { - long lastTruncated = lastTruncatedArchiveIdx; - - long lastArchived = archiver.lastArchivedAbsoluteIndex(); - - if (lastArchived == -1) - return 0; - - int res = (int)(lastArchived - lastTruncated); - - return res >= 0 ? res : 0; - } - - /** - * Files from archive WAL directory. - */ - private FileDescriptor[] walArchiveFiles() { - return scan(walArchiveDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER)); - } - - /** {@inheritDoc} */ - @Override public long maxArchivedSegmentToDelete() { - //When maxWalArchiveSize==MAX_VALUE deleting files is not permit. - if (dsCfg.getMaxWalArchiveSize() == Long.MAX_VALUE) - return -1; - - FileDescriptor[] archivedFiles = walArchiveFiles(); - - Long totalArchiveSize = Stream.of(archivedFiles) - .map(desc -> desc.file().length()) - .reduce(0L, Long::sum); - - if (archivedFiles.length == 0 || totalArchiveSize < allowedThresholdWalArchiveSize) - return -1; - - long sizeOfOldestArchivedFiles = 0; - - for (FileDescriptor desc : archivedFiles) { - sizeOfOldestArchivedFiles += desc.file().length(); - - if (totalArchiveSize - sizeOfOldestArchivedFiles < allowedThresholdWalArchiveSize) - return desc.getIdx(); - } - - return archivedFiles[archivedFiles.length - 1].getIdx(); - } - - /** {@inheritDoc} */ - @Override public long lastArchivedSegment() { - return archiver.lastArchivedAbsoluteIndex(); - } - - /** {@inheritDoc} */ - @Override public long lastCompactedSegment() { - return compressor != null ? 
compressor.lastCompressedIdx : -1L; - } - - /** {@inheritDoc} */ - @Override public boolean reserved(WALPointer ptr) { - FileWALPointer fPtr = (FileWALPointer)ptr; - - FileArchiver archiver0 = archiver; - - return archiver0 != null && archiver0.reserved(fPtr.index()); - } - - /** {@inheritDoc} */ - @Override public int reserved(WALPointer low, WALPointer high) { - // It is not clear now how to get the highest WAL pointer. So when high is null method returns 0. - if (high == null) - return 0; - - assert high instanceof FileWALPointer : high; - - assert low == null || low instanceof FileWALPointer : low; - - FileWALPointer lowPtr = (FileWALPointer)low; - - FileWALPointer highPtr = (FileWALPointer)high; - - FileArchiver archiver0 = archiver; - - long lowIdx = lowPtr != null ? lowPtr.index() : 0; - - long highIdx = highPtr.index(); - - while (lowIdx < highIdx) { - if(archiver0 != null && archiver0.reserved(lowIdx)) - break; - - lowIdx++; - } - - return (int)(highIdx - lowIdx + 1); - } - - /** {@inheritDoc} */ - @Override public boolean disabled(int grpId) { - return cctx.walState().isDisabled(grpId); - } - - /** {@inheritDoc} */ - @Override public void cleanupWalDirectories() throws IgniteCheckedException { - try { - try (DirectoryStream files = Files.newDirectoryStream(walWorkDir.toPath())) { - for (Path path : files) - Files.delete(path); - } - } - catch (IOException e) { - throw new IgniteCheckedException("Failed to cleanup wal work directory: " + walWorkDir, e); - } - - try { - try (DirectoryStream files = Files.newDirectoryStream(walArchiveDir.toPath())) { - for (Path path : files) - Files.delete(path); - } - } - catch (IOException e) { - throw new IgniteCheckedException("Failed to cleanup wal archive directory: " + walArchiveDir, e); - } - } - - /** - * Lists files in archive directory and returns the index of last archived file. - * - * @return The absolute index of last archived file. 
- */ - private long lastArchivedIndex() { - long lastIdx = -1; - - for (File file : walArchiveDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER)) { - try { - long idx = Long.parseLong(file.getName().substring(0, 16)); - - lastIdx = Math.max(lastIdx, idx); - } - catch (NumberFormatException | IndexOutOfBoundsException ignore) { - - } - } - - return lastIdx; - } - - /** - * Lists files in archive directory and returns the indices of least and last archived files. - * In case of holes, first segment after last "hole" is considered as minimum. - * Example: minimum(0, 1, 10, 11, 20, 21, 22) should be 20 - * - * @return The absolute indices of min and max archived files. - */ - private IgniteBiTuple scanMinMaxArchiveIndices() { - TreeSet archiveIndices = new TreeSet<>(); - - for (File file : walArchiveDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER)) { - try { - long idx = Long.parseLong(file.getName().substring(0, 16)); - - archiveIndices.add(idx); - } - catch (NumberFormatException | IndexOutOfBoundsException ignore) { - // No-op. - } - } - - if (archiveIndices.isEmpty()) - return null; - else { - Long min = archiveIndices.first(); - Long max = archiveIndices.last(); - - if (max - min == archiveIndices.size() - 1) - return F.t(min, max); // Short path. - - for (Long idx : archiveIndices.descendingSet()) { - if (!archiveIndices.contains(idx - 1)) - return F.t(idx, max); - } - - throw new IllegalStateException("Should never happen if TreeSet is valid."); - } - } - - /** - * Creates a directory specified by the given arguments. - * - * @param cfg Configured directory path, may be {@code null}. - * @param defDir Default directory path, will be used if cfg is {@code null}. - * @param consId Local node consistent ID. - * @param msg File description to print out on successful initialization. - * @return Initialized directory. - * @throws IgniteCheckedException If failed to initialize directory. 
- */ - private File initDirectory(String cfg, String defDir, String consId, String msg) throws IgniteCheckedException { - File dir; - - if (cfg != null) { - File workDir0 = new File(cfg); - - dir = workDir0.isAbsolute() ? - new File(workDir0, consId) : - new File(U.resolveWorkDirectory(igCfg.getWorkDirectory(), cfg, false), consId); - } - else - dir = new File(U.resolveWorkDirectory(igCfg.getWorkDirectory(), defDir, false), consId); - - U.ensureDirectory(dir, msg, log); - - return dir; - } - - /** - * @return Current log segment handle. - */ - private FileWriteHandle currentHandle() { - return currentHnd; - } - - /** - * @param cur Handle that failed to fit the given entry. - * @return Handle that will fit the entry. - */ - private FileWriteHandle rollOver(FileWriteHandle cur) throws IgniteCheckedException { - FileWriteHandle hnd = currentHandle(); - - if (hnd != cur) - return hnd; - - if (hnd.close(true)) { - if (metrics.metricsEnabled()) - metrics.onWallRollOver(); - - FileWriteHandle next = initNextWriteHandle(cur.getSegmentId()); - - if (next.getSegmentId() - lashCheckpointFileIdx() >= maxSegCountWithoutCheckpoint) - cctx.database().forceCheckpoint("too big size of WAL without checkpoint"); - - boolean swapped = currentHndUpd.compareAndSet(this, hnd, next); - - assert swapped : "Concurrent updates on rollover are not allowed"; - - if (walAutoArchiveAfterInactivity > 0) - lastRecordLoggedMs.set(0); - - // Let other threads to proceed with new segment. - hnd.signalNextAvailable(); - } - else - hnd.awaitNext(); - - return currentHandle(); - } - - /** - * Give last checkpoint file idx - */ - private long lashCheckpointFileIdx() { - WALPointer lastCheckpointMark = cctx.database().lastCheckpointMarkWalPointer(); - - return lastCheckpointMark == null ? 0 : ((FileWALPointer)lastCheckpointMark).index(); - } - - /** - * @param lastReadPtr Last read WAL file pointer. - * @return Initialized file write handle. 
- * @throws StorageException If failed to initialize WAL write handle. - */ - private FileWriteHandle restoreWriteHandle(FileWALPointer lastReadPtr) throws StorageException { - long absIdx = lastReadPtr == null ? 0 : lastReadPtr.index(); - - long segNo = absIdx % dsCfg.getWalSegments(); - - File curFile = new File(walWorkDir, FileDescriptor.fileName(segNo)); - - int offset = lastReadPtr == null ? 0 : lastReadPtr.fileOffset(); - int len = lastReadPtr == null ? 0 : lastReadPtr.length(); - - try { - SegmentIO fileIO = new SegmentIO(absIdx, ioFactory.create(curFile)); - - try { - int serVer = serializerVersion; - - // If we have existing segment, try to read version from it. - if (lastReadPtr != null) { - try { - serVer = readSegmentHeader(fileIO, segmentFileInputFactory).getSerializerVersion(); - } - catch (SegmentEofException | EOFException ignore) { - serVer = serializerVersion; - } - } - - RecordSerializer ser = new RecordSerializerFactoryImpl(cctx).createSerializer(serVer); - - if (log.isInfoEnabled()) - log.info("Resuming logging to WAL segment [file=" + curFile.getAbsolutePath() + - ", offset=" + offset + ", ver=" + serVer + ']'); - - FileWriteHandle hnd = new FileWriteHandle( - fileIO, - offset + len, - maxWalSegmentSize, - ser); - - // For new handle write serializer version to it. - if (lastReadPtr == null) - hnd.writeSerializerVersion(); - - archiver.currentWalIndex(absIdx); - - return hnd; - } - catch (IgniteCheckedException | IOException e) { - try { - fileIO.close(); - } - catch (IOException suppressed) { - e.addSuppressed(suppressed); - } - - if (e instanceof StorageException) - throw (StorageException) e; - - throw e instanceof IOException ? (IOException) e : new IOException(e); - } - } - catch (IOException e) { - throw new StorageException("Failed to restore WAL write handle: " + curFile.getAbsolutePath(), e); - } - } - - /** - * Fills the file header for a new segment. 
- * Calling this method signals we are done with the segment and it can be archived. - * If we don't have a prepared file yet and the archiver is busy, this method blocks. - * - * @param curIdx Current absolute segment released by the WAL writer. - * @return Initialized file handle. - * @throws IgniteCheckedException If exception occurred. - */ - private FileWriteHandle initNextWriteHandle(long curIdx) throws IgniteCheckedException { - IgniteCheckedException error = null; - - try { - File nextFile = pollNextFile(curIdx); - - if (log.isDebugEnabled()) - log.debug("Switching to a new WAL segment: " + nextFile.getAbsolutePath()); - - SegmentIO fileIO = new SegmentIO(curIdx + 1, ioFactory.create(nextFile)); - - FileWriteHandle hnd = new FileWriteHandle( - fileIO, - 0, - maxWalSegmentSize, - serializer); - - hnd.writeSerializerVersion(); - - return hnd; - } - catch (IgniteCheckedException e) { - throw error = e; - } - catch (IOException e) { - throw error = new StorageException("Unable to initialize WAL segment", e); - } - finally { - if (error != null) - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, error)); - } - } - - /** - * Deletes temp files, then creates and prepares new ones; creates the first segment if necessary. - * - * @throws StorageException If failed. - */ - private void checkOrPrepareFiles() throws StorageException { - // Clean temp files.
- { - File[] tmpFiles = walWorkDir.listFiles(WAL_SEGMENT_TEMP_FILE_FILTER); - - if (!F.isEmpty(tmpFiles)) { - for (File tmp : tmpFiles) { - boolean deleted = tmp.delete(); - - if (!deleted) - throw new StorageException("Failed to delete previously created temp file " + - "(make sure Ignite process has enough rights): " + tmp.getAbsolutePath()); - } - } - } - - File[] allFiles = walWorkDir.listFiles(WAL_SEGMENT_FILE_FILTER); - - if (allFiles.length != 0 && allFiles.length > dsCfg.getWalSegments()) - throw new StorageException("Failed to initialize WAL (work directory contains " + - "incorrect number of segments) [cur=" + allFiles.length + ", expected=" + dsCfg.getWalSegments() + ']'); - - // Allocate the first segment synchronously. All other segments will be allocated by archiver in background. - if (allFiles.length == 0) { - File first = new File(walWorkDir, FileDescriptor.fileName(0)); - - createFile(first); - } - else - checkFiles(0, false, null, null); - } - - /** - * Clears the whole file, filling it with zeros in Default mode. - * - * @param file File to format. - * @throws StorageException If formatting failed. - */ - private void formatFile(File file) throws StorageException { - formatFile(file, dsCfg.getWalSegmentSize()); - } - - /** - * Clears the file, filling it with zeros in Default mode. - * - * @param file File to format. - * @param bytesCntToFormat Count of first bytes to format. - * @throws StorageException If formatting failed.
- */ - private void formatFile(File file, int bytesCntToFormat) throws StorageException { - if (log.isDebugEnabled()) - log.debug("Formatting file [exists=" + file.exists() + ", file=" + file.getAbsolutePath() + ']'); - - try (FileIO fileIO = ioFactory.create(file, CREATE, READ, WRITE)) { - int left = bytesCntToFormat; - - if (mode == WALMode.FSYNC) { - while ((left -= fileIO.writeFully(FILL_BUF, 0, Math.min(FILL_BUF.length, left))) > 0) - ; - - fileIO.force(); - } - else - fileIO.clear(); - } - catch (IOException e) { - throw new StorageException("Failed to format WAL segment file: " + file.getAbsolutePath(), e); - } - } - - /** - * Creates a file atomically with temp file. - * - * @param file File to create. - * @throws StorageException If failed. - */ - private void createFile(File file) throws StorageException { - if (log.isDebugEnabled()) - log.debug("Creating new file [exists=" + file.exists() + ", file=" + file.getAbsolutePath() + ']'); - - File tmp = new File(file.getParent(), file.getName() + FilePageStoreManager.TMP_SUFFIX); - - formatFile(tmp); - - try { - Files.move(tmp.toPath(), file.toPath()); - } - catch (IOException e) { - throw new StorageException("Failed to move temp file to a regular WAL segment file: " + - file.getAbsolutePath(), e); - } - - if (log.isDebugEnabled()) - log.debug("Created WAL segment [file=" + file.getAbsolutePath() + ", size=" + file.length() + ']'); - } - - /** - * Retrieves next available file to write WAL data, waiting - * if necessary for a segment to become available. - * - * @param curIdx Current absolute WAL segment index. - * @return File ready for use as new WAL segment. - * @throws StorageException If exception occurred in the archiver thread. - * @throws IgniteInterruptedCheckedException If interrupted. - */ - private File pollNextFile(long curIdx) throws StorageException, IgniteInterruptedCheckedException { - // Signal to archiver that we are done with the segment and it can be archived. 
- long absNextIdx = archiver.nextAbsoluteSegmentIndex(curIdx); - - long segmentIdx = absNextIdx % dsCfg.getWalSegments(); - - return new File(walWorkDir, FileDescriptor.fileName(segmentIdx)); - } - - - /** - * @return Sorted WAL files descriptors. - */ - public static FileDescriptor[] scan(File[] allFiles) { - if (allFiles == null) - return EMPTY_DESCRIPTORS; - - FileDescriptor[] descs = new FileDescriptor[allFiles.length]; - - for (int i = 0; i < allFiles.length; i++) { - File f = allFiles[i]; - - descs[i] = new FileDescriptor(f); - } - - Arrays.sort(descs); - - return descs; - } - - /** - * @throws StorageException If node is no longer valid and we missed a WAL operation. - */ - private void checkNode() throws StorageException { - if (cctx.kernalContext().invalid()) - throw new StorageException("Failed to perform WAL operation (environment was invalidated by a " + - "previous error)"); - } - - /** - * File archiver operates on absolute segment indexes. For any given absolute segment index N we can calculate - * the work WAL segment: S(N) = N % dsCfg.walSegments. - * When a work segment is finished, it is given to the archiver. If the absolute index of last archived segment - * is denoted by A and the absolute index of next segment we want to write is denoted by W, then we can allow - * write to S(W) if W - A <= walSegments.
- *
- * Monitor of current object is used for notify on:
- * <ul>
- * <li>exception occurred ({@link FileArchiver#cleanException}!=null)</li>
- * <li>stopping thread ({@link FileArchiver#stopped}==true)</li>
- * <li>current file index changed ({@link FileArchiver#curAbsWalIdx})</li>
- * <li>last archived file index was changed ({@link FileArchiver#lastAbsArchivedIdx})</li>
- * <li>some WAL index was removed from {@link FileArchiver#locked} map</li>
- * </ul>
    - */ - private class FileArchiver extends GridWorker { - /** Exception which occurred during initial creation of files or during archiving WAL segment */ - private StorageException cleanException; - - /** - * Absolute current segment index WAL Manager writes to. Guarded by this. - * Incremented during rollover. Also may be directly set if WAL is resuming logging after start. - */ - private long curAbsWalIdx = -1; - - /** Last archived file index (absolute, 0-based). Guarded by this. */ - private volatile long lastAbsArchivedIdx = -1; - - /** current thread stopping advice */ - private volatile boolean stopped; - - /** */ - private NavigableMap reserved = new TreeMap<>(); - - /** - * Maps absolute segment index to locks counter. Lock on segment protects from archiving segment and may - * come from {@link RecordsIterator} during WAL replay. Map itself is guarded by this. - */ - private Map locked = new HashMap<>(); - - /** Formatted index. */ - private int formatted; - - /** - * - */ - private FileArchiver(long lastAbsArchivedIdx, IgniteLogger log) { - super(cctx.igniteInstanceName(), "wal-file-archiver%" + cctx.igniteInstanceName(), log, - cctx.kernalContext().workersRegistry()); - - this.lastAbsArchivedIdx = lastAbsArchivedIdx; - } - - /** - * @return Last archived segment absolute index. - */ - private long lastArchivedAbsoluteIndex() { - return lastAbsArchivedIdx; - } - - /** - * @throws IgniteInterruptedCheckedException If failed to wait for thread shutdown. - */ - private void shutdown() throws IgniteInterruptedCheckedException { - synchronized (this) { - stopped = true; - - notifyAll(); - } - - U.join(runner()); - } - - /** - * @param curAbsWalIdx Current absolute WAL segment index. - */ - private void currentWalIndex(long curAbsWalIdx) { - synchronized (this) { - this.curAbsWalIdx = curAbsWalIdx; - - notifyAll(); - } - } - - /** - * @param absIdx Index for reservation. 
- */ - private synchronized void reserve(long absIdx) { - Integer cur = reserved.get(absIdx); - - if (cur == null) - reserved.put(absIdx, 1); - else - reserved.put(absIdx, cur + 1); - } - - /** - * Check if WAL segment locked or reserved - * - * @param absIdx Index for check reservation. - * @return {@code True} if index is reserved. - */ - private synchronized boolean reserved(long absIdx) { - return locked.containsKey(absIdx) || reserved.floorKey(absIdx) != null; - } - - /** - * @param absIdx Reserved index. - */ - private synchronized void release(long absIdx) { - Integer cur = reserved.get(absIdx); - - assert cur != null && cur >= 1 : cur; - - if (cur == 1) - reserved.remove(absIdx); - else - reserved.put(absIdx, cur - 1); - } - - /** {@inheritDoc} */ - @Override protected void body() { - blockingSectionBegin(); - - try { - allocateRemainingFiles(); - } - catch (StorageException e) { - synchronized (this) { - // Stop the thread and report to starter. - cleanException = e; - - notifyAll(); - } - - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, e)); - - return; - } - finally { - blockingSectionEnd(); - } - - Throwable err = null; - - try { - synchronized (this) { - while (curAbsWalIdx == -1 && !stopped) { - blockingSectionBegin(); - - try { - wait(); - } - finally { - blockingSectionEnd(); - } - } - - // If the archive directory is empty, we can be sure that there were no WAL segments archived. - // This is ensured by the check in truncate() which will leave at least one file there - // once it was archived. 
- } - - while (!Thread.currentThread().isInterrupted() && !stopped) { - long toArchive; - - synchronized (this) { - assert lastAbsArchivedIdx <= curAbsWalIdx : "lastArchived=" + lastAbsArchivedIdx + - ", current=" + curAbsWalIdx; - - while (lastAbsArchivedIdx >= curAbsWalIdx - 1 && !stopped) { - blockingSectionBegin(); - - try { - wait(); - } - finally { - blockingSectionEnd(); - } - } - - toArchive = lastAbsArchivedIdx + 1; - } - - if (stopped) - break; - - SegmentArchiveResult res; - - blockingSectionBegin(); - - try { - res = archiveSegment(toArchive); - } - finally { - blockingSectionEnd(); - } - - synchronized (this) { - while (locked.containsKey(toArchive) && !stopped) { - blockingSectionBegin(); - - try { - wait(); - } - finally { - blockingSectionEnd(); - } - } - - changeLastArchivedIndexAndWakeupCompressor(toArchive); - - notifyAll(); - } - - if (evt.isRecordable(EventType.EVT_WAL_SEGMENT_ARCHIVED)) { - evt.record(new WalSegmentArchivedEvent(cctx.discovery().localNode(), - res.getAbsIdx(), res.getDstArchiveFile())); - } - - onIdle(); - } - } - catch (InterruptedException t) { - Thread.currentThread().interrupt(); - - if (!stopped) - err = t; - } - catch (Throwable t) { - err = t; - } - finally { - if (err == null && !stopped) - err = new IllegalStateException("Worker " + name() + " is terminated unexpectedly"); - - if (err instanceof OutOfMemoryError) - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err)); - else if (err != null) - cctx.kernalContext().failure().process(new FailureContext(SYSTEM_WORKER_TERMINATION, err)); - } - } - - /** - * @param idx Index. - */ - private void changeLastArchivedIndexAndWakeupCompressor(long idx) { - lastAbsArchivedIdx = idx; - - if (compressor != null) - compressor.onNextSegmentArchived(); - } - - /** - * Gets the absolute index of the next WAL segment available to write. - * Blocks till there are available file to write - * - * @param curIdx Current absolute index that we want to increment. 
- * @return Next index (curWalSegmIdx+1) when it is ready to be written. - * @throws StorageException If exception occurred in the archiver thread. - * @throws IgniteInterruptedCheckedException If interrupted. - */ - private long nextAbsoluteSegmentIndex(long curIdx) throws StorageException, IgniteInterruptedCheckedException { - try { - synchronized (this) { - if (cleanException != null) - throw cleanException; - - assert curIdx == curAbsWalIdx; - - curAbsWalIdx++; - - // Notify archiver thread. - notifyAll(); - - int segments = dsCfg.getWalSegments(); - - while ((curAbsWalIdx - lastAbsArchivedIdx > segments && cleanException == null)) - wait(); - - if (cleanException != null) - throw cleanException; - - // Wait for formatter so that we do not open an empty file in DEFAULT mode. - while (curAbsWalIdx % dsCfg.getWalSegments() > formatted && cleanException == null) - wait(); - - if (cleanException != null) - throw cleanException; - - return curAbsWalIdx; - } - } - catch (InterruptedException e) { - Thread.currentThread().interrupt(); - - throw new IgniteInterruptedCheckedException(e); - } - } - - /** - * @param absIdx Segment absolute index. - * @return
- * <ul><li>{@code True} if can read, no lock is held,</li>
- * <li>{@code false} if work segment, need
- * release segment later, use {@link #releaseWorkSegment} for unlock</li></ul>
    - */ - @SuppressWarnings("NonPrivateFieldAccessedInSynchronizedContext") - private boolean checkCanReadArchiveOrReserveWorkSegment(long absIdx) { - synchronized (this) { - if (lastAbsArchivedIdx >= absIdx) { - if (log.isDebugEnabled()) - log.debug("Not needed to reserve WAL segment: absIdx=" + absIdx + ";" + - " lastAbsArchivedIdx=" + lastAbsArchivedIdx); - - return true; - } - - Integer cur = locked.get(absIdx); - - cur = cur == null ? 1 : cur + 1; - - locked.put(absIdx, cur); - - if (log.isDebugEnabled()) - log.debug("Reserved work segment [absIdx=" + absIdx + ", pins=" + cur + ']'); - - return false; - } - } - - /** - * @param absIdx Segment absolute index. - */ - @SuppressWarnings("NonPrivateFieldAccessedInSynchronizedContext") - private void releaseWorkSegment(long absIdx) { - synchronized (this) { - Integer cur = locked.get(absIdx); - - assert cur != null && cur > 0 : "WAL Segment with Index " + absIdx + " is not locked;" + - " lastAbsArchivedIdx = " + lastAbsArchivedIdx; - - if (cur == 1) { - locked.remove(absIdx); - - if (log.isDebugEnabled()) - log.debug("Fully released work segment (ready to archive) [absIdx=" + absIdx + ']'); - } - else { - locked.put(absIdx, cur - 1); - - if (log.isDebugEnabled()) - log.debug("Partially released work segment [absIdx=" + absIdx + ", pins=" + (cur - 1) + ']'); - } - - notifyAll(); - } - } - - /** - * Moves WAL segment from work folder to archive folder. - * Temp file is used to do movement - * - * @param absIdx Absolute index to archive. 
- */ - private SegmentArchiveResult archiveSegment(long absIdx) throws IgniteCheckedException { - long segIdx = absIdx % dsCfg.getWalSegments(); - - File origFile = new File(walWorkDir, FileDescriptor.fileName(segIdx)); - - String name = FileDescriptor.fileName(absIdx); - - File dstTmpFile = new File(walArchiveDir, name + FilePageStoreManager.TMP_SUFFIX); - - File dstFile = new File(walArchiveDir, name); - - if (log.isInfoEnabled()) - log.info("Starting to copy WAL segment [absIdx=" + absIdx + ", segIdx=" + segIdx + - ", origFile=" + origFile.getAbsolutePath() + ", dstFile=" + dstFile.getAbsolutePath() + ']'); - - try { - Files.deleteIfExists(dstTmpFile.toPath()); - - Files.copy(origFile.toPath(), dstTmpFile.toPath()); - - Files.move(dstTmpFile.toPath(), dstFile.toPath()); - - if (mode == WALMode.FSYNC) { - try (FileIO f0 = ioFactory.create(dstFile, CREATE, READ, WRITE)) { - f0.force(); - } - } - } - catch (IOException e) { - throw new IgniteCheckedException("Failed to archive WAL segment [" + - "srcFile=" + origFile.getAbsolutePath() + - ", dstFile=" + dstTmpFile.getAbsolutePath() + ']', e); - } - - if (log.isInfoEnabled()) - log.info("Copied file [src=" + origFile.getAbsolutePath() + - ", dst=" + dstFile.getAbsolutePath() + ']'); - - return new SegmentArchiveResult(absIdx, origFile, dstFile); - } - - /** - * - */ - private boolean checkStop() { - return stopped; - } - - /** - * Background creation of all segments except first. 
First segment was created in main thread by - * {@link FsyncModeFileWriteAheadLogManager#checkOrPrepareFiles()} - */ - private void allocateRemainingFiles() throws StorageException { - final FileArchiver archiver = this; - - checkFiles(1, - true, - new IgnitePredicate() { - @Override public boolean apply(Integer integer) { - return !checkStop(); - } - }, new CI1() { - @Override public void apply(Integer idx) { - synchronized (archiver) { - formatted = idx; - - archiver.notifyAll(); - } - } - }); - } - } - - /** - * Responsible for compressing WAL archive segments. - * Also responsible for deleting raw copies of already compressed WAL archive segments if they are not reserved. - */ - private class FileCompressor extends Thread { - /** Current thread stopping advice. */ - private volatile boolean stopped; - - /** Last successfully compressed segment. */ - private volatile long lastCompressedIdx = -1L; - - /** All segments prior to this (inclusive) can be compressed. */ - private volatile long minUncompressedIdxToKeep = -1L; - - /** */ - FileCompressor() { - super("wal-file-compressor%" + cctx.igniteInstanceName()); - } - - /** */ - private void init() { - File[] toDel = walArchiveDir.listFiles(WAL_SEGMENT_TEMP_FILE_COMPACTED_FILTER); - - for (File f : toDel) { - if (stopped) - return; - - f.delete(); - } - - FileDescriptor[] alreadyCompressed = scan(walArchiveDir.listFiles(WAL_SEGMENT_FILE_COMPACTED_FILTER)); - - if (alreadyCompressed.length > 0) - lastCompressedIdx = alreadyCompressed[alreadyCompressed.length - 1].idx(); - } - - /** - * @param idx Minimum raw segment index that should be preserved from deletion. - */ - synchronized void keepUncompressedIdxFrom(long idx) { - minUncompressedIdxToKeep = idx; - - notify(); - } - - /** - * Callback for waking up compressor when new segment is archived. 
- */ - synchronized void onNextSegmentArchived() { - notify(); - } - - /** - * Pessimistically tries to reserve segment for compression in order to avoid concurrent truncation. - * Waits if there's no segment to archive right now. - */ - private long tryReserveNextSegmentOrWait() throws InterruptedException, IgniteCheckedException { - long segmentToCompress = lastCompressedIdx + 1; - - synchronized (this) { - if (stopped) - return -1; - - while (segmentToCompress > archiver.lastArchivedAbsoluteIndex()) { - wait(); - - if (stopped) - return -1; - } - } - - segmentToCompress = Math.max(segmentToCompress, lastTruncatedArchiveIdx + 1); - - boolean reserved = reserve(new FileWALPointer(segmentToCompress, 0, 0)); - - return reserved ? segmentToCompress : -1; - } - - /** - * Deletes raw WAL segments if they aren't locked and already have compressed copies of themselves. - */ - private void deleteObsoleteRawSegments() { - FileDescriptor[] descs = scan(walArchiveDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER)); - - Set indices = new HashSet<>(); - Set duplicateIndices = new HashSet<>(); - - for (FileDescriptor desc : descs) { - if (!indices.add(desc.idx)) - duplicateIndices.add(desc.idx); - } - - FileArchiver archiver0 = archiver; - - for (FileDescriptor desc : descs) { - if (desc.isCompressed()) - continue; - - // Do not delete reserved or locked segment and any segment after it. 
- if (archiver0 != null && archiver0.reserved(desc.idx)) - return; - - if (desc.idx < minUncompressedIdxToKeep && duplicateIndices.contains(desc.idx)) { - if (!desc.file.delete()) - U.warn(log, "Failed to remove obsolete WAL segment (make sure the process has enough rights): " + - desc.file.getAbsolutePath() + ", exists: " + desc.file.exists()); - } - } - } - - /** {@inheritDoc} */ - @Override public void run() { - init(); - - while (!Thread.currentThread().isInterrupted() && !stopped) { - long currReservedSegment = -1; - - try { - deleteObsoleteRawSegments(); - - currReservedSegment = tryReserveNextSegmentOrWait(); - if (currReservedSegment == -1) - continue; - - File tmpZip = new File(walArchiveDir, FileDescriptor.fileName(currReservedSegment) - + FilePageStoreManager.ZIP_SUFFIX + FilePageStoreManager.TMP_SUFFIX); - - File zip = new File(walArchiveDir, FileDescriptor.fileName(currReservedSegment) - + FilePageStoreManager.ZIP_SUFFIX); - - File raw = new File(walArchiveDir, FileDescriptor.fileName(currReservedSegment)); - if (!Files.exists(raw.toPath())) - throw new IgniteCheckedException("WAL archive segment is missing: " + raw); - - compressSegmentToFile(currReservedSegment, raw, tmpZip); - - Files.move(tmpZip.toPath(), zip.toPath()); - - if (mode != WALMode.NONE) { - try (FileIO f0 = ioFactory.create(zip, CREATE, READ, WRITE)) { - f0.force(); - } - - if (evt.isRecordable(EVT_WAL_SEGMENT_COMPACTED)) { - evt.record(new WalSegmentCompactedEvent( - cctx.discovery().localNode(), - currReservedSegment, - zip.getAbsoluteFile()) - ); - } - } - - lastCompressedIdx = currReservedSegment; - } - catch (IgniteCheckedException | IOException e) { - U.error(log, "Compression of WAL segment [idx=" + currReservedSegment + - "] was skipped due to unexpected error", e); - - lastCompressedIdx++; - } - catch (InterruptedException ignore) { - Thread.currentThread().interrupt(); - } - finally { - try { - if (currReservedSegment != -1) - release(new FileWALPointer(currReservedSegment, 
0, 0)); - } - catch (IgniteCheckedException e) { - U.error(log, "Can't release raw WAL segment [idx=" + currReservedSegment + - "] after compression", e); - } - } - } - } - - /** - * @param nextSegment Next segment absolute idx. - * @param raw Raw file. - * @param zip Zip file. - */ - private void compressSegmentToFile(long nextSegment, File raw, File zip) - throws IOException, IgniteCheckedException { - int segmentSerializerVer; - - try (FileIO fileIO = ioFactory.create(raw)) { - segmentSerializerVer = readSegmentHeader(new SegmentIO(nextSegment, fileIO), segmentFileInputFactory).getSerializerVersion(); - } - - try (ZipOutputStream zos = new ZipOutputStream(new BufferedOutputStream(new FileOutputStream(zip)))) { - zos.setLevel(dsCfg.getWalCompactionLevel()); - zos.putNextEntry(new ZipEntry("")); - - zos.write(prepareSerializerVersionBuffer(nextSegment, segmentSerializerVer, true).array()); - - final CIX1 appendToZipC = new CIX1() { - @Override public void applyx(WALRecord record) throws IgniteCheckedException { - final MarshalledRecord marshRec = (MarshalledRecord)record; - - try { - zos.write(marshRec.buffer().array(), 0, marshRec.buffer().remaining()); - } - catch (IOException e) { - throw new IgniteCheckedException(e); - } - } - }; - - try (SingleSegmentLogicalRecordsIterator iter = new SingleSegmentLogicalRecordsIterator( - log, cctx, ioFactory, tlbSize, nextSegment, walArchiveDir, appendToZipC)) { - - while (iter.hasNextX()) - iter.nextX(); - } - } - } - - /** - * @throws IgniteInterruptedCheckedException If failed to wait for thread shutdown. - */ - private void shutdown() throws IgniteInterruptedCheckedException { - synchronized (this) { - stopped = true; - - notifyAll(); - } - - U.join(this); - } - } - - /** - * Responsible for decompressing previously compressed segments of WAL archive if they are needed for replay. - */ - private class FileDecompressor extends GridWorker { - /** Decompression futures. 
*/ - private Map> decompressionFutures = new HashMap<>(); - - /** Segments queue. */ - private PriorityBlockingQueue segmentsQueue = new PriorityBlockingQueue<>(); - - /** Byte array for draining data. */ - private byte[] arr = new byte[tlbSize]; - - /** - * @param log Logger. - */ - FileDecompressor(IgniteLogger log) { - super(cctx.igniteInstanceName(), "wal-file-decompressor%" + cctx.igniteInstanceName(), log, - cctx.kernalContext().workersRegistry()); - } - - /** {@inheritDoc} */ - @Override protected void body() { - Throwable err = null; - - try { - while (!isCancelled()) { - long segmentToDecompress = -1L; - - try { - blockingSectionBegin(); - - try { - segmentToDecompress = segmentsQueue.take(); - } - finally { - blockingSectionEnd(); - } - - if (isCancelled()) - break; - - File zip = new File(walArchiveDir, FileDescriptor.fileName(segmentToDecompress) - + FilePageStoreManager.ZIP_SUFFIX); - File unzipTmp = new File(walArchiveDir, FileDescriptor.fileName(segmentToDecompress) - + FilePageStoreManager.TMP_SUFFIX); - File unzip = new File(walArchiveDir, FileDescriptor.fileName(segmentToDecompress)); - - try (ZipInputStream zis = new ZipInputStream(new BufferedInputStream(new FileInputStream(zip))); - FileIO io = ioFactory.create(unzipTmp)) { - zis.getNextEntry(); - - while (io.writeFully(arr, 0, zis.read(arr)) > 0) - updateHeartbeat(); - } - - try { - Files.move(unzipTmp.toPath(), unzip.toPath()); - } - catch (FileAlreadyExistsException e) { - U.error(log, "Can't rename temporary unzipped segment: raw segment is already present " + - "[tmp=" + unzipTmp + ", raw=" + unzip + ']', e); - - if (!unzipTmp.delete()) - U.error(log, "Can't delete temporary unzipped segment [tmp=" + unzipTmp + ']'); - } - - updateHeartbeat(); - - synchronized (this) { - decompressionFutures.remove(segmentToDecompress).onDone(); - } - } - catch (IOException ex) { - if (!isCancelled && segmentToDecompress != -1L) { - IgniteCheckedException e = new IgniteCheckedException("Error during WAL 
segment " + - "decompression [segmentIdx=" + segmentToDecompress + ']', ex); - - synchronized (this) { - decompressionFutures.remove(segmentToDecompress).onDone(e); - } - } - } - } - } - catch (InterruptedException e) { - Thread.currentThread().interrupt(); - - if (!isCancelled) - err = e; - } - catch (Throwable t) { - err = t; - } - finally { - if (err == null && !isCancelled) - err = new IllegalStateException("Worker " + name() + " is terminated unexpectedly"); - - if (err instanceof OutOfMemoryError) - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err)); - else if (err != null) - cctx.kernalContext().failure().process(new FailureContext(SYSTEM_WORKER_TERMINATION, err)); - } - } - - /** - * Asynchronously decompresses WAL segment which is present only in .zip file. - * - * @return Future which is completed once file is decompressed. - */ - synchronized IgniteInternalFuture decompressFile(long idx) { - if (decompressionFutures.containsKey(idx)) - return decompressionFutures.get(idx); - - File f = new File(walArchiveDir, FileDescriptor.fileName(idx)); - - if (f.exists()) - return new GridFinishedFuture<>(); - - segmentsQueue.put(idx); - - GridFutureAdapter res = new GridFutureAdapter<>(); - - decompressionFutures.put(idx, res); - - return res; - } - - /** */ - private void shutdown() { - synchronized (this) { - U.cancel(this); - - // Put fake -1 to wake thread from queue.take() - segmentsQueue.put(-1L); - } - - U.join(this, log); - } - } - - /** - * Validate files depending on {@link DataStorageConfiguration#getWalSegments()} and create if need. - * Check end when exit condition return false or all files are passed. - * - * @param startWith Start with. - * @param create Flag create file. - * @param p Predicate Exit condition. - * @throws StorageException if validation or create file fail. 
- */ - private void checkFiles( - int startWith, - boolean create, - @Nullable IgnitePredicate p, - @Nullable IgniteInClosure completionCallback - ) throws StorageException { - for (int i = startWith; i < dsCfg.getWalSegments() && (p == null || (p != null && p.apply(i))); i++) { - File checkFile = new File(walWorkDir, FileDescriptor.fileName(i)); - - if (checkFile.exists()) { - if (checkFile.isDirectory()) - throw new StorageException("Failed to initialize WAL log segment (a directory with " + - "the same name already exists): " + checkFile.getAbsolutePath()); - else if (checkFile.length() != dsCfg.getWalSegmentSize() && mode == WALMode.FSYNC) - throw new StorageException("Failed to initialize WAL log segment " + - "(WAL segment size change is not supported):" + checkFile.getAbsolutePath()); - } - else if (create) - createFile(checkFile); - - if (completionCallback != null) - completionCallback.apply(i); - } - } - - /** - * Writes record serializer version to provided {@code io}. - * NOTE: Method mutates position of {@code io}. - * - * @param io I/O interface for file. - * @param idx Segment index. - * @param version Serializer version. - * @return I/O position after write version. - * @throws IOException If failed to write serializer version. - */ - public static long writeSerializerVersion(FileIO io, long idx, int version, WALMode mode) throws IOException { - ByteBuffer buffer = prepareSerializerVersionBuffer(idx, version, false); - - io.writeFully(buffer); - - // Flush - if (mode == WALMode.FSYNC) - io.force(); - - return io.position(); - } - - /** - * @param idx Index. - * @param ver Version. - * @param compacted Compacted flag. - */ - @NotNull private static ByteBuffer prepareSerializerVersionBuffer(long idx, int ver, boolean compacted) { - ByteBuffer buf = ByteBuffer.allocate(RecordV1Serializer.HEADER_RECORD_SIZE); - buf.order(ByteOrder.nativeOrder()); - - // Write record type. 
- buf.put((byte) (WALRecord.RecordType.HEADER_RECORD.ordinal() + 1)); - - // Write position. - RecordV1Serializer.putPosition(buf, new FileWALPointer(idx, 0, 0)); - - // Place magic number. - buf.putLong(compacted ? HeaderRecord.COMPACTED_MAGIC : HeaderRecord.REGULAR_MAGIC); - - // Place serializer version. - buf.putInt(ver); - - // Place CRC if needed. - if (!RecordV1Serializer.skipCrc) { - int curPos = buf.position(); - - buf.position(0); - - // This call will move buffer position to the end of the record again. - int crcVal = PureJavaCrc32.calcCrc32(buf, curPos); - - buf.putInt(crcVal); - } - else - buf.putInt(0); - - // Write header record through io. - buf.position(0); - - return buf; - } - - /** - * - */ - private abstract static class FileHandle { - /** I/O interface for read/write operations with file */ - protected SegmentIO fileIO; - - /** - * @param fileIO I/O interface for read/write operations of FileHandle. - */ - private FileHandle(SegmentIO fileIO) { - this.fileIO = fileIO; - } - - /** - * @return Current segment id. - */ - public long getSegmentId(){ - return fileIO.getSegmentId(); - } - } - - /** - * - */ - public static class ReadFileHandle extends FileHandle implements AbstractWalRecordsIterator.AbstractReadFileHandle { - /** Entry serializer. */ - RecordSerializer ser; - - /** */ - FileInput in; - - /** - * true if this file handle came from work directory. - * false if this file handle came from archive directory. - */ - private boolean workDir; - - /** - * @param fileIO I/O interface for read/write operations of FileHandle. - * @param ser Entry serializer. - * @param in File input. - */ - ReadFileHandle( - SegmentIO fileIO, - RecordSerializer ser, - FileInput in - ) { - super(fileIO); - - this.ser = ser; - this.in = in; - } - - /** - * @throws IgniteCheckedException If failed to close the WAL segment file. 
- */ - @Override public void close() throws IgniteCheckedException { - try { - fileIO.close(); - } - catch (IOException e) { - throw new IgniteCheckedException(e); - } - } - - /** {@inheritDoc} */ - @Override public long idx() { - return getSegmentId(); - } - - /** {@inheritDoc} */ - @Override public FileInput in() { - return in; - } - - /** {@inheritDoc} */ - @Override public RecordSerializer ser() { - return ser; - } - - /** {@inheritDoc} */ - @Override public boolean workDir() { - return workDir; - } - } - - /** - * File handle for one log segment. - */ - @SuppressWarnings("SignalWithoutCorrespondingAwait") - private class FileWriteHandle extends FileHandle { - /** */ - private final RecordSerializer serializer; - - /** See {@link FsyncModeFileWriteAheadLogManager#maxWalSegmentSize} */ - private final long maxSegmentSize; - - /** - * Accumulated WAL records chain. - * This reference points to latest WAL record. - * When writing records chain is iterated from latest to oldest (see {@link WALRecord#previous()}) - * Records from chain are saved into buffer in reverse order - */ - private final AtomicReference head = new AtomicReference<>(); - - /** - * Position in current file after the end of last written record (incremented after file channel write - * operation) - */ - private volatile long written; - - /** */ - private volatile long lastFsyncPos; - - /** Stop guard to provide warranty that only one thread will be successful in calling {@link #close(boolean)}*/ - private final AtomicBoolean stop = new AtomicBoolean(false); - - /** */ - private final Lock lock = new ReentrantLock(); - - /** Condition activated each time writeBuffer() completes. 
Used to wait previously flushed write to complete */ - private final Condition writeComplete = lock.newCondition(); - - /** Condition for timed wait of several threads, see {@link DataStorageConfiguration#getWalFsyncDelayNanos()} */ - private final Condition fsync = lock.newCondition(); - - /** - * Next segment available condition. - * Protection from "spurious wakeup" is provided by predicate {@link #fileIO}=null - */ - private final Condition nextSegment = lock.newCondition(); - - /** - * @param fileIO I/O file interface to use - * @param pos Position. - * @param maxSegmentSize Max segment size. - * @param serializer Serializer. - * @throws IOException If failed. - */ - private FileWriteHandle( - SegmentIO fileIO, - long pos, - long maxSegmentSize, - RecordSerializer serializer - ) throws IOException { - super(fileIO); - - assert serializer != null; - - fileIO.position(pos); - - this.maxSegmentSize = maxSegmentSize; - this.serializer = serializer; - - head.set(new FakeRecord(new FileWALPointer(fileIO.getSegmentId(), (int)pos, 0), false)); - written = pos; - lastFsyncPos = pos; - } - - /** - * Write serializer version to current handle. - * NOTE: Method mutates {@code fileIO} position, written and lastFsyncPos fields. - * - * @throws IOException If fail to write serializer version. 
- */ - private void writeSerializerVersion() throws IOException { - try { - assert fileIO.position() == 0 : "Serializer version can be written only at the begin of file " + - fileIO.position(); - - long updatedPosition = FsyncModeFileWriteAheadLogManager.writeSerializerVersion(fileIO, getSegmentId(), - serializer.version(), mode); - - written = updatedPosition; - lastFsyncPos = updatedPosition; - head.set(new FakeRecord(new FileWALPointer(getSegmentId(), (int)updatedPosition, 0), false)); - } - catch (IOException e) { - throw new IOException("Unable to write serializer version for segment " + getSegmentId(), e); - } - } - - /** - * Checks if current head is a close fake record and returns {@code true} if so. - * - * @return {@code true} if current head is close record. - */ - private boolean stopped() { - return stopped(head.get()); - } - - /** - * @param record Record to check. - * @return {@code true} if the record is fake close record. - */ - private boolean stopped(WALRecord record) { - return record instanceof FakeRecord && ((FakeRecord)record).stop; - } - - /** - * @param rec Record to be added to record chain as new {@link #head} - * @return Pointer or null if roll over to next segment is required or already started by other thread. - * @throws StorageException If failed. - */ - @Nullable private WALPointer addRecord(WALRecord rec) throws StorageException { - assert rec.size() > 0 || rec.getClass() == FakeRecord.class; - - boolean flushed = false; - - for (; ; ) { - WALRecord h = head.get(); - - long nextPos = nextPosition(h); - - if (nextPos + rec.size() >= maxSegmentSize || stopped(h)) { - // Can not write to this segment, need to switch to the next one. 
- return null; - } - - int newChainSize = h.chainSize() + rec.size(); - - if (newChainSize > tlbSize && !flushed) { - boolean res = h.previous() == null || flush(h, false); - - if (rec.size() > tlbSize) - flushed = res; - - continue; - } - - rec.chainSize(newChainSize); - rec.previous(h); - - FileWALPointer ptr = new FileWALPointer( - getSegmentId(), - (int)nextPos, - rec.size()); - - rec.position(ptr); - - if (head.compareAndSet(h, rec)) - return ptr; - } - } - - /** - * @param rec Record. - * @return Position for the next record. - */ - private long nextPosition(WALRecord rec) { - return recordOffset(rec) + rec.size(); - } - - /** - * Flush or wait for concurrent flush completion. - * - * @param ptr Pointer. - * @throws StorageException If failed. - */ - private void flushOrWait(FileWALPointer ptr, boolean stop) throws StorageException { - long expWritten; - - if (ptr != null) { - // If requested obsolete file index, it must be already flushed by close. - if (ptr.index() != getSegmentId()) - return; - - expWritten = ptr.fileOffset(); - } - else // We read head position before the flush because otherwise we can get wrong position. - expWritten = recordOffset(head.get()); - - if (flush(ptr, stop)) - return; - else if (stop) { - FakeRecord fr = (FakeRecord)head.get(); - - assert fr.stop : "Invalid fake record on top of the queue: " + fr; - - expWritten = recordOffset(fr); - } - - // Spin-wait for a while before acquiring the lock. - for (int i = 0; i < 64; i++) { - if (written >= expWritten) - return; - } - - // If we did not flush ourselves then await for concurrent flush to complete. - lock.lock(); - - try { - while (written < expWritten && !cctx.kernalContext().invalid()) - U.awaitQuiet(writeComplete); - } - finally { - lock.unlock(); - } - } - - /** - * @param ptr Pointer. - * @return {@code true} If the flush really happened. - * @throws StorageException If failed. 
- */ - private boolean flush(FileWALPointer ptr, boolean stop) throws StorageException { - if (ptr == null) { // Unconditional flush. - for (; ; ) { - WALRecord expHead = head.get(); - - if (expHead.previous() == null) { - FakeRecord frHead = (FakeRecord)expHead; - - if (frHead.stop == stop || frHead.stop || - head.compareAndSet(expHead, new FakeRecord(frHead.position(), stop))) - return false; - } - - if (flush(expHead, stop)) - return true; - } - } - - assert ptr.index() == getSegmentId(); - - for (; ; ) { - WALRecord h = head.get(); - - // If current chain begin position is greater than requested, then someone else flushed our changes. - if (chainBeginPosition(h) > ptr.fileOffset()) - return false; - - if (flush(h, stop)) - return true; // We are lucky. - } - } - - /** - * @param h Head of the chain. - * @return Chain begin position. - */ - private long chainBeginPosition(WALRecord h) { - return recordOffset(h) + h.size() - h.chainSize(); - } - - /** - * @param expHead Expected head of chain. If head was changed, flush is not performed in this thread - * @throws StorageException If failed. - */ - private boolean flush(WALRecord expHead, boolean stop) throws StorageException { - if (expHead.previous() == null) { - FakeRecord frHead = (FakeRecord)expHead; - - if (!stop || frHead.stop) // Protects from CASing terminal FakeRecord(true) to FakeRecord(false) - return false; - } - - // Fail-fast before CAS. - checkNode(); - - if (!head.compareAndSet(expHead, new FakeRecord(new FileWALPointer(getSegmentId(), (int)nextPosition(expHead), 0), stop))) - return false; - - if (expHead.chainSize() == 0) - return false; - - // At this point we grabbed the piece of WAL chain. - // Any failure in this code must invalidate the environment. - try { - // We can safely allow other threads to start building next chains while we are doing flush here. 
- ByteBuffer buf; - - boolean tmpBuf = false; - - if (expHead.chainSize() > tlbSize) { - buf = GridUnsafe.allocateBuffer(expHead.chainSize()); - - tmpBuf = true; // We need to manually release this temporary direct buffer. - } - else - buf = tlb.get(); - - try { - long pos = fillBuffer(buf, expHead); - - writeBuffer(pos, buf); - } - finally { - if (tmpBuf) - GridUnsafe.freeBuffer(buf); - } - - return true; - } - catch (Throwable e) { - StorageException se = e instanceof StorageException ? (StorageException) e : - new StorageException("Unable to write", new IOException(e)); - - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, se)); - - // All workers waiting for a next segment must be woken up and stopped - signalNextAvailable(); - - throw se; - } - } - - /** - * Serializes WAL records chain to provided byte buffer - * - * @param buf Buffer, will be filled with records chain from end to beginning - * @param head Head of the chain to write to the buffer. - * @return Position in file for this buffer. - * @throws IgniteCheckedException If failed. - */ - private long fillBuffer(ByteBuffer buf, WALRecord head) throws IgniteCheckedException { - final int limit = head.chainSize(); - - assert limit <= buf.capacity(); - - buf.rewind(); - buf.limit(limit); - - do { - buf.position(head.chainSize() - head.size()); - buf.limit(head.chainSize()); // Just to make sure that serializer works in bounds. - - try { - serializer.writeRecord(head, buf); - } - catch (RuntimeException e) { - throw new IllegalStateException("Failed to write record: " + head, e); - } - - assert !buf.hasRemaining() : "Reported record size is greater than actual: " + head; - - head = head.previous(); - } - while (head.previous() != null); - - assert head instanceof FakeRecord : head.getClass(); - - buf.rewind(); - buf.limit(limit); - - return recordOffset(head); - } - - /** - * Non-blocking check if this pointer needs to be sync'ed. - * - * @param ptr WAL pointer to check. 
- * @return {@code False} if this pointer has been already sync'ed. - */ - private boolean needFsync(FileWALPointer ptr) { - // If index has changed, it means that the log was rolled over and already sync'ed. - // If requested position is smaller than last sync'ed, it also means all is good. - // If position is equal, then our record is the last not synced. - return getSegmentId() == ptr.index() && lastFsyncPos <= ptr.fileOffset(); - } - - /** - * @return Pointer to the end of the last written record (probably not fsync-ed). - */ - private FileWALPointer position() { - lock.lock(); - - try { - return new FileWALPointer(getSegmentId(), (int)written, 0); - } - finally { - lock.unlock(); - } - } - - /** - * @param ptr Pointer to sync. - * @throws StorageException If failed. - * @throws IgniteInterruptedCheckedException If interrupted. - */ - private void fsync(FileWALPointer ptr, boolean stop) throws StorageException, IgniteInterruptedCheckedException { - lock.lock(); - - try { - if (ptr != null) { - if (!needFsync(ptr)) - return; - - if (fsyncDelay > 0 && !stopped()) { - // Delay fsync to collect as many updates as possible: trade latency for throughput. - U.await(fsync, fsyncDelay, TimeUnit.NANOSECONDS); - - if (!needFsync(ptr)) - return; - } - } - - flushOrWait(ptr, stop); - - if (stopped()) - return; - - if (lastFsyncPos != written) { - assert lastFsyncPos < written; // Fsync position must be behind. - - boolean metricsEnabled = metrics.metricsEnabled(); - - long start = metricsEnabled ? System.nanoTime() : 0; - - try { - fileIO.force(); - } - catch (IOException e) { - throw new StorageException(e); - } - - lastFsyncPos = written; - - if (fsyncDelay > 0) - fsync.signalAll(); - - long end = metricsEnabled ? System.nanoTime() : 0; - - if (metricsEnabled) - metrics.onFsync(end - start); - } - } - finally { - lock.unlock(); - } - } - - /** - * @return {@code true} If this thread actually closed the segment. - * @throws StorageException If failed. 
- */ - private boolean close(boolean rollOver) throws StorageException { - if (stop.compareAndSet(false, true)) { - lock.lock(); - - try { - flushOrWait(null, true); - - assert stopped() : "Segment is not closed after close flush: " + head.get(); - - try { - try { - RecordSerializer backwardSerializer = new RecordSerializerFactoryImpl(cctx) - .createSerializer(serializerVersion); - - SwitchSegmentRecord segmentRecord = new SwitchSegmentRecord(); - - int switchSegmentRecSize = backwardSerializer.size(segmentRecord); - - if (rollOver && written < (maxSegmentSize - switchSegmentRecSize)) { - final ByteBuffer buf = ByteBuffer.allocate(switchSegmentRecSize); - - segmentRecord.position(new FileWALPointer(getSegmentId(), (int)written, switchSegmentRecSize)); - backwardSerializer.writeRecord(segmentRecord, buf); - - buf.rewind(); - - written += fileIO.writeFully(buf, written); - } - } - catch (IgniteCheckedException e) { - throw new IOException(e); - } - finally { - assert mode == WALMode.FSYNC; - - // Do the final fsync. - fileIO.force(); - - lastFsyncPos = written; - - fileIO.close(); - } - } - catch (IOException e) { - throw new StorageException("Failed to close WAL write handle [idx=" + getSegmentId() + "]", e); - } - - if (log.isDebugEnabled()) - log.debug("Closed WAL write handle [idx=" + getSegmentId() + "]"); - - return true; - } - finally { - lock.unlock(); - } - } - else - return false; - } - - /** - * Signals next segment available to wake up other worker threads waiting for WAL to write - */ - private void signalNextAvailable() { - lock.lock(); - - try { - WALRecord rec = head.get(); - - if (!cctx.kernalContext().invalid()) { - assert rec instanceof FakeRecord : "Expected head FakeRecord, actual head " - + (rec != null ? 
rec.getClass().getSimpleName() : "null"); - - assert written == lastFsyncPos || mode != WALMode.FSYNC : - "fsync [written=" + written + ", lastFsync=" + lastFsyncPos + ']'; - - fileIO = null; - } - else { - try { - fileIO.close(); - } - catch (IOException e) { - U.error(log, "Failed to close WAL file [idx=" + getSegmentId() + ", fileIO=" + fileIO + "]", e); - } - } - - nextSegment.signalAll(); - } - finally { - lock.unlock(); - } - } - - /** - * - */ - private void awaitNext() { - lock.lock(); - - try { - while (fileIO != null && !cctx.kernalContext().invalid()) - U.awaitQuiet(nextSegment); - } - finally { - lock.unlock(); - } - } - - /** - * @param pos Position in file to start write from. May be checked against actual position to wait previous - * writes to complete - * @param buf Buffer to write to file - * @throws StorageException If failed. - * @throws IgniteCheckedException If failed. - */ - @SuppressWarnings("TooBroadScope") - private void writeBuffer(long pos, ByteBuffer buf) throws StorageException { - boolean interrupted = false; - - lock.lock(); - - try { - assert fileIO != null : "Writing to a closed segment."; - - checkNode(); - - long lastLogged = U.currentTimeMillis(); - - long logBackoff = 2_000; - - // If we were too fast, need to wait previous writes to complete. - while (written != pos) { - assert written < pos : "written = " + written + ", pos = " + pos; // No one can write further than we are now. - - // Permutation occurred between blocks write operations. - // Order of acquiring lock is not the same as order of write. 
- long now = U.currentTimeMillis(); - - if (now - lastLogged >= logBackoff) { - if (logBackoff < 60 * 60_000) - logBackoff *= 2; - - U.warn(log, "Still waiting for a concurrent write to complete [written=" + written + - ", pos=" + pos + ", lastFsyncPos=" + lastFsyncPos + ", stop=" + stop.get() + - ", actualPos=" + safePosition() + ']'); - - lastLogged = now; - } - - try { - writeComplete.await(2, TimeUnit.SECONDS); - } - catch (InterruptedException ignore) { - interrupted = true; - } - - checkNode(); - } - - // Do the write. - int size = buf.remaining(); - - assert size > 0 : size; - - try { - assert written == fileIO.position(); - - fileIO.writeFully(buf); - - written += size; - - metrics.onWalBytesWritten(size); - - assert written == fileIO.position(); - } - catch (IOException e) { - StorageException se = new StorageException("Unable to write", e); - - cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, se)); - - throw se; - } - } - finally { - writeComplete.signalAll(); - - lock.unlock(); - - if (interrupted) - Thread.currentThread().interrupt(); - } - } - - /** - * @return Safely reads current position of the file channel as String. Will return "null" if channel is null. - */ - private String safePosition() { - FileIO io = this.fileIO; - - if (io == null) - return "null"; - - try { - return String.valueOf(io.position()); - } - catch (IOException e) { - return "{Failed to read channel position: " + e.getMessage() + "}"; - } - } - } - - /** - * Gets WAL record offset relative to the WAL segment file beginning. - * - * @param rec WAL record. - * @return File offset. - */ - private static int recordOffset(WALRecord rec) { - FileWALPointer ptr = (FileWALPointer)rec.position(); - - assert ptr != null; - - return ptr.fileOffset(); - } - - /** - * Fake record is zero-sized record, which is not stored into file. - * Fake record is used for storing position in file {@link WALRecord#position()}. - * Fake record is allowed to have no previous record. 
- */ - private static final class FakeRecord extends WALRecord { - /** */ - private final boolean stop; - - /** - * @param pos Position. - */ - FakeRecord(FileWALPointer pos, boolean stop) { - position(pos); - - this.stop = stop; - } - - /** {@inheritDoc} */ - @Override public RecordType type() { - return null; - } - - /** {@inheritDoc} */ - @Override public FileWALPointer position() { - return (FileWALPointer) super.position(); - } - - /** {@inheritDoc} */ - @Override public String toString() { - return S.toString(FakeRecord.class, this, "super", super.toString()); - } - } - - /** - * Iterator over WAL-log. - */ - private class RecordsIterator extends AbstractWalRecordsIterator { - /** */ - private static final long serialVersionUID = 0L; - - /** */ - private final File walWorkDir; - - /** */ - private final File walArchiveDir; - - /** */ - private final FileArchiver archiver; - - /** */ - private final FileDecompressor decompressor; - - /** */ - private final DataStorageConfiguration psCfg; - - /** Optional start pointer. */ - @Nullable - private FileWALPointer start; - - /** Optional end pointer. */ - @Nullable - private FileWALPointer end; - - /** - * @param cctx Shared context. - * @param walWorkDir WAL work dir. - * @param walArchiveDir WAL archive dir. - * @param start Optional start pointer. - * @param end Optional end pointer. - * @param psCfg Database configuration. - * @param serializerFactory Serializer factory. - * @param archiver Archiver. - * @param decompressor Decompressor. - * @param log Logger @throws IgniteCheckedException If failed to initialize WAL segment. - * @param segmentFileInputFactory Segment file input factory. 
- */ - private RecordsIterator( - GridCacheSharedContext cctx, - File walWorkDir, - File walArchiveDir, - @Nullable FileWALPointer start, - @Nullable FileWALPointer end, - DataStorageConfiguration psCfg, - @NotNull RecordSerializerFactory serializerFactory, - FileIOFactory ioFactory, - FileArchiver archiver, - FileDecompressor decompressor, - IgniteLogger log, - SegmentFileInputFactory segmentFileInputFactory - ) throws IgniteCheckedException { - super( - log, - cctx, - serializerFactory, - ioFactory, - psCfg.getWalRecordIteratorBufferSize(), - segmentFileInputFactory - ); - this.walWorkDir = walWorkDir; - this.walArchiveDir = walArchiveDir; - this.psCfg = psCfg; - this.archiver = archiver; - this.start = start; - this.end = end; - this.decompressor = decompressor; - - init(); - - advance(); - } - - /** {@inheritDoc} */ - @Override protected ReadFileHandle initReadHandle( - @NotNull AbstractFileDescriptor desc, - @Nullable FileWALPointer start - ) throws IgniteCheckedException, FileNotFoundException { - AbstractFileDescriptor currDesc = desc; - - if (!desc.file().exists()) { - FileDescriptor zipFile = new FileDescriptor( - new File(walArchiveDir, FileDescriptor.fileName(desc.idx()) - + FilePageStoreManager.ZIP_SUFFIX)); - - if (!zipFile.file.exists()) { - throw new FileNotFoundException("Both compressed and raw segment files are missing in archive " + - "[segmentIdx=" + desc.idx() + "]"); - } - - if (decompressor != null) - decompressor.decompressFile(desc.idx()).get(); - else - currDesc = zipFile; - } - - return (ReadFileHandle) super.initReadHandle(currDesc, start); - } - - /** {@inheritDoc} */ - @Override protected void onClose() throws IgniteCheckedException { - super.onClose(); - - curRec = null; - - final AbstractReadFileHandle handle = closeCurrentWalSegment(); - - if (handle != null && handle.workDir()) - releaseWorkSegment(curWalSegmIdx); - - curWalSegmIdx = Integer.MAX_VALUE; - } - - /** - * @throws IgniteCheckedException If failed to initialize first 
file handle. - */ - private void init() throws IgniteCheckedException { - AbstractFileDescriptor[] descs = loadFileDescriptors(walArchiveDir); - - if (start != null) { - if (!F.isEmpty(descs)) { - if (descs[0].idx() > start.index()) - throw new IgniteCheckedException("WAL history is too short " + - "[descs=" + Arrays.asList(descs) + ", start=" + start + ']'); - - for (AbstractFileDescriptor desc : descs) { - if (desc.idx() == start.index()) { - curWalSegmIdx = start.index(); - - break; - } - } - - if (curWalSegmIdx == -1) { - long lastArchived = descs[descs.length - 1].idx(); - - if (lastArchived > start.index()) - throw new IgniteCheckedException("WAL history is corrupted (segment is missing): " + start); - - // This pointer may be in work files because archiver did not - // copy the file yet, check that it is not too far forward. - curWalSegmIdx = start.index(); - } - } - else { - // This means that whole checkpoint history fits in one segment in WAL work directory. - // Will start from this index right away. - curWalSegmIdx = start.index(); - } - } - else - curWalSegmIdx = !F.isEmpty(descs) ? descs[0].idx() : 0; - - curWalSegmIdx--; - - if (log.isDebugEnabled()) - log.debug("Initialized WAL cursor [start=" + start + ", end=" + end + ", curWalSegmIdx=" + curWalSegmIdx + ']'); - } - - /** {@inheritDoc} */ - @Override protected AbstractReadFileHandle advanceSegment( - @Nullable final AbstractReadFileHandle curWalSegment - ) throws IgniteCheckedException { - if (curWalSegment != null) { - curWalSegment.close(); - - if (curWalSegment.workDir()) - releaseWorkSegment(curWalSegment.idx()); - - } - - // We are past the end marker. 
- if (end != null && curWalSegmIdx + 1 > end.index()) - return null; //stop iteration - - curWalSegmIdx++; - - FileDescriptor fd; - - boolean readArchive = canReadArchiveOrReserveWork(curWalSegmIdx); - - if (readArchive) - fd = new FileDescriptor(new File(walArchiveDir, FileDescriptor.fileName(curWalSegmIdx))); - else { - long workIdx = curWalSegmIdx % psCfg.getWalSegments(); - - fd = new FileDescriptor( - new File(walWorkDir, FileDescriptor.fileName(workIdx)), - curWalSegmIdx); - } - - if (log.isDebugEnabled()) - log.debug("Reading next file [absIdx=" + curWalSegmIdx + ", file=" + fd.file().getAbsolutePath() + ']'); - - ReadFileHandle nextHandle; - - try { - nextHandle = initReadHandle(fd, start != null && curWalSegmIdx == start.index() ? start : null); - } - catch (FileNotFoundException e) { - if (readArchive) - throw new IgniteCheckedException("Missing WAL segment in the archive", e); - else - nextHandle = null; - } - - if (nextHandle == null) { - if (!readArchive) - releaseWorkSegment(curWalSegmIdx); - } - else - nextHandle.workDir = !readArchive; - - curRec = null; - - return nextHandle; - } - - /** {@inheritDoc} */ - @Override protected IgniteCheckedException handleRecordException( - @NotNull Exception e, - @Nullable FileWALPointer ptr) { - - if (e instanceof IgniteCheckedException) - if (X.hasCause(e, IgniteDataIntegrityViolationException.class)) - // This means that there is no explicit last sengment, so we iterate unil the very end. - if (end == null) { - long nextWalSegmentIdx = curWalSegmIdx + 1; - - // Check that we should not look this segment up in archive directory. - // Basically the same check as in "advanceSegment" method. 
- if (archiver != null) - if (!canReadArchiveOrReserveWork(nextWalSegmentIdx)) - try { - long workIdx = nextWalSegmentIdx % dsCfg.getWalSegments(); - - FileDescriptor fd = new FileDescriptor( - new File(walWorkDir, FileDescriptor.fileName(workIdx)), - nextWalSegmentIdx - ); - - try { - ReadFileHandle nextHandle = initReadHandle(fd, null); - - // "nextHandle == null" is true only if current segment is the last one in the - // whole history. Only in such case we ignore crc validation error and just stop - // as if we reached the end of the WAL. - if (nextHandle == null) - return null; - } - catch (IgniteCheckedException | FileNotFoundException initReadHandleException) { - e.addSuppressed(initReadHandleException); - } - } - finally { - releaseWorkSegment(nextWalSegmentIdx); - } - } - - return super.handleRecordException(e, ptr); - } - - /** - * @param absIdx Absolute index to check. - * @return
- * <ul><li>{@code True} if we can safely read the archive,</li>
- * <li>{@code false} if the segment has - * not been archived yet. In this case the corresponding work segment is reserved (will not be deleted until - * release). Use {@link #releaseWorkSegment} for unlock</li></ul>
    - */ - private boolean canReadArchiveOrReserveWork(long absIdx) { - return archiver != null && archiver.checkCanReadArchiveOrReserveWorkSegment(absIdx); - } - - /** - * @param absIdx Absolute index to release. - */ - private void releaseWorkSegment(long absIdx) { - if (archiver != null) - archiver.releaseWorkSegment(absIdx); - } - - /** {@inheritDoc} */ - @Override protected AbstractReadFileHandle createReadFileHandle(SegmentIO fileIO, - RecordSerializer ser, FileInput in) { - return new ReadFileHandle(fileIO, ser, in); - } - } - - /** - * Flushes current file handle for {@link WALMode#BACKGROUND} WALMode. - * Called periodically from scheduler. - */ - private void doFlush() { - final FileWriteHandle hnd = currentHandle(); - try { - hnd.flush(hnd.head.get(), false); - } - catch (Exception e) { - U.warn(log, "Failed to flush WAL record queue", e); - } - } - - /** - * Scans provided folder for a WAL segment files - * @param walFilesDir directory to scan - * @return found WAL file descriptors - */ - private static FileDescriptor[] loadFileDescriptors(@NotNull final File walFilesDir) throws IgniteCheckedException { - final File[] files = walFilesDir.listFiles(WAL_SEGMENT_COMPACTED_OR_RAW_FILE_FILTER); - - if (files == null) { - throw new IgniteCheckedException("WAL files directory does not not denote a " + - "directory, or if an I/O error occurs: [" + walFilesDir.getAbsolutePath() + "]"); - } - return scan(files); - } -} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentArchivedStorage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentArchivedStorage.java index 1ed607e9b7dd8..57ac848fc8295 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentArchivedStorage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentArchivedStorage.java 
@@ -34,6 +34,8 @@ class SegmentArchivedStorage extends SegmentObservable { * no segments archived. */ private volatile long lastAbsArchivedIdx = -1; + /** Latest truncated segment. */ + private volatile long lastTruncatedArchiveIdx = -1; /** * @param segmentLockStorage Protects WAL work segments from moving. @@ -63,10 +65,12 @@ long lastArchivedAbsoluteIndex() { /** * @param lastAbsArchivedIdx New value of last archived segment index. */ - synchronized void setLastArchivedAbsoluteIndex(long lastAbsArchivedIdx) { - this.lastAbsArchivedIdx = lastAbsArchivedIdx; + void setLastArchivedAbsoluteIndex(long lastAbsArchivedIdx) { + synchronized (this) { + this.lastAbsArchivedIdx = lastAbsArchivedIdx; - notifyAll(); + notifyAll(); + } notifyObservers(lastAbsArchivedIdx); } @@ -120,6 +124,13 @@ synchronized void interrupt() { notifyAll(); } + /** + * Resets interrupted flag. + */ + void reset() { + interrupted = false; + } + /** * Check for interrupt flag was set. */ @@ -134,4 +145,18 @@ private void checkInterrupted() throws IgniteInterruptedCheckedException { private synchronized void onSegmentUnlocked(long segmentId) { notifyAll(); } + + /** + * @param lastTruncatedArchiveIdx Last truncated segment. + */ + void lastTruncatedArchiveIdx(long lastTruncatedArchiveIdx) { + this.lastTruncatedArchiveIdx = lastTruncatedArchiveIdx; + } + + /** + * @return Last truncated segment. 
+ */ + long lastTruncatedArchiveIdx() { + return lastTruncatedArchiveIdx; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAware.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAware.java index 3379b74cf5780..b564aec868bb0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAware.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAware.java @@ -27,8 +27,6 @@ * Holder of actual information of latest manipulation on WAL segments. */ public class SegmentAware { - /** Latest truncated segment. */ - private volatile long lastTruncatedArchiveIdx = -1L; /** Segment reservations storage: Protects WAL segments from deletion during WAL log cleanup. */ private final SegmentReservationStorage reservationStorage = new SegmentReservationStorage(); /** Lock on segment protects from archiving segment. */ @@ -36,15 +34,17 @@ public class SegmentAware { /** Manages last archived index, emulates archivation in no-archiver mode. */ private final SegmentArchivedStorage segmentArchivedStorage = buildArchivedStorage(segmentLockStorage); /** Storage of actual information about current index of compressed segments. */ - private final SegmentCompressStorage segmentCompressStorage = buildCompressStorage(segmentArchivedStorage); + private final SegmentCompressStorage segmentCompressStorage; /** Storage of absolute current segment index. */ private final SegmentCurrentStateStorage segmentCurrStateStorage; /** * @param walSegmentsCnt Total WAL segments count. + * @param compactionEnabled Is wal compaction enabled. 
*/ - public SegmentAware(int walSegmentsCnt) { + public SegmentAware(int walSegmentsCnt, boolean compactionEnabled) { segmentCurrStateStorage = buildCurrentStateStorage(walSegmentsCnt, segmentArchivedStorage); + segmentCompressStorage = buildCompressStorage(segmentArchivedStorage, compactionEnabled); } /** @@ -104,16 +104,21 @@ public void awaitSegmentArchived(long awaitIdx) throws IgniteInterruptedCheckedE * there's no segment to archive right now. */ public long waitNextSegmentToCompress() throws IgniteInterruptedCheckedException { - return Math.max(segmentCompressStorage.nextSegmentToCompressOrWait(), lastTruncatedArchiveIdx + 1); + long idx; + + while ((idx = segmentCompressStorage.nextSegmentToCompressOrWait()) <= lastTruncatedArchiveIdx()) + onSegmentCompressed(idx); + + return idx; } /** - * Force set last compressed segment. + * Callback after segment compression finish. * - * @param lastCompressedIdx Segment which was last compressed. + * @param compressedIdx Index of compressed segment. */ - public void lastCompressedIdx(long lastCompressedIdx) { - segmentCompressStorage.lastCompressedIdx(lastCompressedIdx); + public void onSegmentCompressed(long compressedIdx) { + segmentCompressStorage.onSegmentCompressed(compressedIdx); } /** @@ -123,6 +128,20 @@ public long lastCompressedIdx() { return segmentCompressStorage.lastCompressedIdx(); } + /** + * @param idx Minimum raw segment index that should be preserved from deletion. + */ + public void keepUncompressedIdxFrom(long idx) { + segmentCompressStorage.keepUncompressedIdxFrom(idx); + } + + /** + * @return Minimum raw segment index that should be preserved from deletion. + */ + public long keepUncompressedIdxFrom() { + return segmentCompressStorage.keepUncompressedIdxFrom(); + } + /** * Update current WAL index. 
* @@ -136,14 +155,14 @@ public void curAbsWalIdx(long curAbsWalIdx) { * @param lastTruncatedArchiveIdx Last truncated segment; */ public void lastTruncatedArchiveIdx(long lastTruncatedArchiveIdx) { - this.lastTruncatedArchiveIdx = lastTruncatedArchiveIdx; + segmentArchivedStorage.lastTruncatedArchiveIdx(lastTruncatedArchiveIdx); } /** * @return Last truncated segment. */ public long lastTruncatedArchiveIdx() { - return lastTruncatedArchiveIdx; + return segmentArchivedStorage.lastTruncatedArchiveIdx(); } /** @@ -203,6 +222,15 @@ public boolean checkCanReadArchiveOrReserveWorkSegment(long absIdx) { return lastArchivedAbsoluteIndex() >= absIdx || segmentLockStorage.lockWorkSegment(absIdx); } + /** + * Visible for test. + * + * @param absIdx Segment absolute index. Use {@link #releaseWorkSegment} to unlock the segment later.
+ */ + void lockWorkSegment(long absIdx) { + segmentLockStorage.lockWorkSegment(absIdx); + } + /** * @param absIdx Segment absolute index. */ @@ -210,6 +238,17 @@ public void releaseWorkSegment(long absIdx) { segmentLockStorage.releaseWorkSegment(absIdx); } + /** + * Reset interrupted flag. + */ + public void reset() { + segmentArchivedStorage.reset(); + + segmentCompressStorage.reset(); + + segmentCurrStateStorage.reset(); + } + /** * Interrupt waiting on related objects. */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCompressStorage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCompressStorage.java index 30c9a2d50d363..d93bb8423acc4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCompressStorage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCompressStorage.java @@ -18,6 +18,10 @@ package org.apache.ignite.internal.processors.cache.persistence.wal.aware; import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import java.util.ArrayDeque; +import java.util.ArrayList; +import java.util.List; +import java.util.Queue; /** * Storage of actual information about current index of compressed segments. @@ -25,25 +29,50 @@ public class SegmentCompressStorage { /** Flag of interrupt waiting on this object. */ private volatile boolean interrupted; + /** Manages last archived index, emulates archivation in no-archiver mode. */ private final SegmentArchivedStorage segmentArchivedStorage; + + /** If WAL compaction enabled. */ + private final boolean compactionEnabled; + /** Last successfully compressed segment. */ private volatile long lastCompressedIdx = -1L; + /** Last enqueued to compress segment. */ + private long lastEnqueuedToCompressIdx = -1L; + + /** Segments to compress queue. 
*/ + private final Queue<Long> segmentsToCompress = new ArrayDeque<>(); + + /** List of currently compressing segments. */ + private final List<Long> compressingSegments = new ArrayList<>(); + + /** Compressed segment with maximal index. */ + private long lastMaxCompressedIdx = -1L; + + /** Min uncompressed index to keep. */ + private volatile long minUncompressedIdxToKeep = -1L; + /** * @param segmentArchivedStorage Storage of last archived segment. + * @param compactionEnabled If WAL compaction enabled. */ - private SegmentCompressStorage(SegmentArchivedStorage segmentArchivedStorage) { + private SegmentCompressStorage(SegmentArchivedStorage segmentArchivedStorage, boolean compactionEnabled) { this.segmentArchivedStorage = segmentArchivedStorage; + this.compactionEnabled = compactionEnabled; + this.segmentArchivedStorage.addObserver(this::onSegmentArchived); } /** * @param segmentArchivedStorage Storage of last archived segment. + * @param compactionEnabled If WAL compaction enabled. */ - static SegmentCompressStorage buildCompressStorage(SegmentArchivedStorage segmentArchivedStorage) { - SegmentCompressStorage storage = new SegmentCompressStorage(segmentArchivedStorage); + static SegmentCompressStorage buildCompressStorage(SegmentArchivedStorage segmentArchivedStorage, + boolean compactionEnabled) { + SegmentCompressStorage storage = new SegmentCompressStorage(segmentArchivedStorage, compactionEnabled); segmentArchivedStorage.addObserver(storage::onSegmentArchived); @@ -51,12 +80,23 @@ static SegmentCompressStorage buildCompressStorage(SegmentArchivedStorage segmen } /** - * Force set last compressed segment. + * Callback invoked after segment compression finishes. * - * @param lastCompressedIdx Segment which was last compressed. + * @param compressedIdx Index of compressed segment.
*/ - void lastCompressedIdx(long lastCompressedIdx) { - this.lastCompressedIdx = lastCompressedIdx; + synchronized void onSegmentCompressed(long compressedIdx) { + if (compressedIdx > lastMaxCompressedIdx) + lastMaxCompressedIdx = compressedIdx; + + compressingSegments.remove(compressedIdx); + + if (!compressingSegments.isEmpty()) + this.lastCompressedIdx = Math.min(lastMaxCompressedIdx, compressingSegments.get(0) - 1); + else + this.lastCompressedIdx = lastMaxCompressedIdx; + + if (compressedIdx > lastEnqueuedToCompressIdx) + lastEnqueuedToCompressIdx = compressedIdx; } /** @@ -71,13 +111,8 @@ long lastCompressedIdx() { * there's no segment to archive right now. */ synchronized long nextSegmentToCompressOrWait() throws IgniteInterruptedCheckedException { - long segmentToCompress = lastCompressedIdx + 1; - try { - while ( - segmentToCompress > segmentArchivedStorage.lastArchivedAbsoluteIndex() - && !interrupted - ) + while (segmentsToCompress.peek() == null && !interrupted) wait(); } catch (InterruptedException e) { @@ -86,7 +121,13 @@ synchronized long nextSegmentToCompressOrWait() throws IgniteInterruptedCheckedE checkInterrupted(); - return segmentToCompress; + Long idx = segmentsToCompress.poll(); + + assert idx != null; + + compressingSegments.add(idx); + + return idx; } /** @@ -110,7 +151,30 @@ private void checkInterrupted() throws IgniteInterruptedCheckedException { * Callback for waking up compressor when new segment is archived. */ private synchronized void onSegmentArchived(long lastAbsArchivedIdx) { + while (lastEnqueuedToCompressIdx < lastAbsArchivedIdx && compactionEnabled) + segmentsToCompress.add(++lastEnqueuedToCompressIdx); + notifyAll(); } + /** + * @param idx Minimum raw segment index that should be preserved from deletion. + */ + void keepUncompressedIdxFrom(long idx) { + minUncompressedIdxToKeep = idx; + } + + /** + * @return Minimum raw segment index that should be preserved from deletion. 
+ */ + long keepUncompressedIdxFrom() { + return minUncompressedIdxToKeep; + } + + /** + * Reset interrupted flag. + */ + public void reset() { + interrupted = false; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCurrentStateStorage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCurrentStateStorage.java index 5761ef9fcbb46..d08a9b8bdeaae 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCurrentStateStorage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentCurrentStateStorage.java @@ -168,4 +168,13 @@ private void checkInterrupted() throws IgniteInterruptedCheckedException { if (interrupted) throw new IgniteInterruptedCheckedException("Interrupt waiting of change current idx"); } + + /** + * Reset interrupted flag. + */ + public void reset() { + interrupted = false; + + forceInterrupted = false; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentLockStorage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentLockStorage.java index 2e145e7789f43..f638d4d57be70 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentLockStorage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentLockStorage.java @@ -17,8 +17,8 @@ package org.apache.ignite.internal.processors.cache.persistence.wal.aware; -import java.util.HashMap; import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; /** @@ -29,7 +29,7 @@ public class SegmentLockStorage extends SegmentObservable { * Maps absolute segment 
index to locks counter. Lock on segment protects from archiving segment and may come from * {@link FileWriteAheadLogManager.RecordsIterator} during WAL replay. Map itself is guarded by this. */ - private Map<Long, Integer> locked = new HashMap<>(); + private Map<Long, Integer> locked = new ConcurrentHashMap<>(); /** * Check if WAL segment locked (protected from move to archive) * @param absIdx Index for check reservation. * @return {@code True} if index is locked. */ - public synchronized boolean locked(long absIdx) { + public boolean locked(long absIdx) { return locked.containsKey(absIdx); } @@ -47,12 +47,8 @@ public synchronized boolean locked(long absIdx) { * segment later, use {@link #releaseWorkSegment} for unlock */ @SuppressWarnings("NonPrivateFieldAccessedInSynchronizedContext") - synchronized boolean lockWorkSegment(long absIdx) { - Integer cur = locked.get(absIdx); - - cur = cur == null ? 1 : cur + 1; - - locked.put(absIdx, cur); + boolean lockWorkSegment(long absIdx) { + locked.compute(absIdx, (idx, count) -> count == null ? 1 : count + 1); return false; } @@ -61,15 +57,12 @@ synchronized boolean lockWorkSegment(long absIdx) { * @param absIdx Segment absolute index. */ @SuppressWarnings("NonPrivateFieldAccessedInSynchronizedContext") - synchronized void releaseWorkSegment(long absIdx) { - Integer cur = locked.get(absIdx); - - assert cur != null && cur >= 1 : "cur=" + cur + ", absIdx=" + absIdx; + void releaseWorkSegment(long absIdx) { + locked.compute(absIdx, (idx, count) -> { + assert count != null && count >= 1 : "cur=" + count + ", absIdx=" + absIdx; - if (cur == 1) - locked.remove(absIdx); - else - locked.put(absIdx, cur - 1); + return count == 1 ?
null : count - 1; + }); notifyObservers(absIdx); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentObservable.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentObservable.java index ba5ad300cd127..3e915044dd01b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentObservable.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentObservable.java @@ -17,8 +17,8 @@ package org.apache.ignite.internal.processors.cache.persistence.wal.aware; -import java.util.ArrayList; -import java.util.List; +import java.util.Queue; +import java.util.concurrent.ConcurrentLinkedQueue; import java.util.function.Consumer; /** @@ -26,12 +26,12 @@ */ public abstract class SegmentObservable { /** Observers for handle changes of archived index. */ - private final List<Consumer<Long>> observers = new ArrayList<>(); + private final Queue<Consumer<Long>> observers = new ConcurrentLinkedQueue<>(); /** * @param observer Observer for notification about segment's changes. */ - synchronized void addObserver(Consumer<Long> observer) { + void addObserver(Consumer<Long> observer) { observers.add(observer); } @@ -40,7 +40,7 @@ synchronized void addObserver(Consumer<Long> observer) { * * @param segmentId Segment that was changed.
*/ - synchronized void notifyObservers(long segmentId) { + void notifyObservers(long segmentId) { observers.forEach(observer -> observer.accept(segmentId)); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/crc/FastCrc.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/crc/FastCrc.java new file mode 100644 index 0000000000000..0dcbafdb9c978 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/crc/FastCrc.java @@ -0,0 +1,101 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.crc; + +import java.nio.ByteBuffer; +import java.util.zip.CRC32; + +/** + * This CRC calculation implementation works much faster than {@link PureJavaCrc32}. + */ +public final class FastCrc { + /** CRC algo. */ + private static final ThreadLocal<CRC32> CRC = ThreadLocal.withInitial(CRC32::new); + + /** */ + private final CRC32 crc = new CRC32(); + + /** + * Current value. + */ + private int val; + + /** */ + public FastCrc() { + reset(); + } + + /** + * Preparation for further calculations.
+ */ + public void reset() { + val = 0xffffffff; + + crc.reset(); + } + + /** + * @return crc value. + */ + public int getValue() { + return val; + } + + /** + * @param buf Input buffer. + * @param len Data length. + */ + public void update(final ByteBuffer buf, final int len) { + val = calcCrc(crc, buf, len); + } + + /** + * @param buf Input buffer. + * @param len Data length. + * + * @return Crc checksum. + */ + public static int calcCrc(ByteBuffer buf, int len) { + CRC32 crcAlgo = CRC.get(); + + int res = calcCrc(crcAlgo, buf, len); + + crcAlgo.reset(); + + return res; + } + + /** + * @param crcAlgo CRC algorithm. + * @param buf Input buffer. + * @param len Buffer length. + * + * @return Crc checksum. + */ + private static int calcCrc(CRC32 crcAlgo, ByteBuffer buf, int len) { + int initLimit = buf.limit(); + + buf.limit(buf.position() + len); + + crcAlgo.update(buf); + + buf.limit(initLimit); + + return (int)crcAlgo.getValue() ^ 0xFFFFFFFF; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/crc/PureJavaCrc32.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/crc/PureJavaCrc32.java index 6bd4a35b2fdf8..b011f783be005 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/crc/PureJavaCrc32.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/crc/PureJavaCrc32.java @@ -29,7 +29,9 @@ * succession. * * The current version is ~10x to 1.8x as fast as Sun's native java.util.zip.CRC32 in Java 1.6 + * @deprecated Use {@link FastCrc} instead. 
*/ +@Deprecated public class PureJavaCrc32 { /** * the current CRC value diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/AbstractFileHandle.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/AbstractFileHandle.java new file mode 100644 index 0000000000000..127d73787d3fc --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/AbstractFileHandle.java @@ -0,0 +1,47 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle; + +import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; +import org.jetbrains.annotations.NotNull; + +/** + * + */ +public abstract class AbstractFileHandle { + /** I/O interface for read/write operations with file. */ + protected SegmentIO fileIO; + + /** Segment index corresponding to fileIO. */ + private final long segmentIdx; + + /** + * @param fileIO I/O interface for read/write operations of AbstractFileHandle.
+ */ + public AbstractFileHandle(@NotNull SegmentIO fileIO) { + this.fileIO = fileIO; + segmentIdx = fileIO.getSegmentId(); + } + + /** + * @return Absolute WAL segment file index (incremental counter). + */ + public long getSegmentId(){ + return segmentIdx; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManager.java new file mode 100644 index 0000000000000..6597e4615394f --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManager.java @@ -0,0 +1,81 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle; + +import java.io.IOException; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; +import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; + +/** + * Manager of {@link FileWriteHandle}. + */ +public interface FileHandleManager { + /** + * Initialize {@link FileWriteHandle} for the first time. + * + * @param fileIO FileIO. + * @param position Init position. + * @param serializer Serializer for file handle. + * @return Created file handle. + * @throws IOException if handle creation failed. + */ + FileWriteHandle initHandle(SegmentIO fileIO, long position, RecordSerializer serializer) throws IOException; + + /** + * Create next file handle. + * + * @param fileIO FileIO. + * @param serializer Serializer for file handle. + * @return Created file handle. + * @throws IOException if handle creation failed. + */ + FileWriteHandle nextHandle(SegmentIO fileIO, RecordSerializer serializer) throws IOException; + + /** + * Start manager. + */ + void start(); + + /** + * On activate. + */ + void onActivate(); + + /** + * On deactivate. + * + * @throws IgniteCheckedException if the operation failed. + */ + void onDeactivate() throws IgniteCheckedException; + + /** + * Resume logging. + */ + void resumeLogging(); + + /** + * @param ptr Pointer up to which data should be flushed. + * @param explicitFsync {@code true} if fsync required. + * @throws IgniteCheckedException if the operation failed. + * @throws StorageException if the storage failed.
+ */ + void flush(WALPointer ptr, boolean explicitFsync) throws IgniteCheckedException, StorageException; +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManagerFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManagerFactory.java new file mode 100644 index 0000000000000..b0c456e42dd1b --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManagerFactory.java @@ -0,0 +1,90 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle; + +import java.util.function.Supplier; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.persistence.DataStorageMetricsImpl; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; + +/** + * Factory of {@link FileHandleManager}. + */ +public class FileHandleManagerFactory { + /** */ + private final boolean walFsyncWithDedicatedWorker = + IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER, false); + + /** Data storage configuration. */ + private final DataStorageConfiguration dsConf; + + /** + * @param conf Data storage configuration. + */ + public FileHandleManagerFactory(DataStorageConfiguration conf) { + dsConf = conf; + } + + /** + * @param cctx Cache context. + * @param metrics Data storage metrics. + * @param mmap Using mmap. + * @param lastWALPtr Last WAL pointer. + * @param serializer Serializer. + * @param currHandleSupplier Supplier of current handle. + * @return One of implementation of {@link FileHandleManager}. 
+ */ + public FileHandleManager build( + GridCacheSharedContext cctx, + DataStorageMetricsImpl metrics, + boolean mmap, + Supplier<WALPointer> lastWALPtr, + RecordSerializer serializer, + Supplier<FileWriteHandle> currHandleSupplier + ) { + if (dsConf.getWalMode() == WALMode.FSYNC && !walFsyncWithDedicatedWorker) + return new FsyncFileHandleManagerImpl( + cctx, + metrics, + lastWALPtr, + serializer, + currHandleSupplier, + dsConf.getWalMode(), + dsConf.getWalSegmentSize(), + dsConf.getWalFsyncDelayNanos(), + dsConf.getWalThreadLocalBufferSize() + ); + else + return new FileHandleManagerImpl( + cctx, + metrics, + mmap, + lastWALPtr, + serializer, + currHandleSupplier, + dsConf.getWalMode(), + dsConf.getWalBufferSize(), + dsConf.getWalSegmentSize(), + dsConf.getWalFsyncDelayNanos() + ); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManagerImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManagerImpl.java new file mode 100644 index 0000000000000..1daac31cafc60 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileHandleManagerImpl.java @@ -0,0 +1,603 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.MappedByteBuffer; +import java.nio.channels.ClosedByInterruptException; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.locks.LockSupport; +import java.util.function.Supplier; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.persistence.DataStorageMetricsImpl; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; +import org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer; +import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.util.worker.GridWorker; +import org.apache.ignite.thread.IgniteThread; + +import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_SEGMENT_SYNC_TIMEOUT; +import static org.apache.ignite.configuration.WALMode.LOG_ONLY; +import static org.apache.ignite.failure.FailureType.CRITICAL_ERROR; +import static org.apache.ignite.failure.FailureType.SYSTEM_WORKER_TERMINATION; +import static 
org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer.BufferMode.DIRECT; +import static org.apache.ignite.internal.util.IgniteUtils.sleep; + +/** + * Manager for {@link FileWriteHandleImpl}. + */ +public class FileHandleManagerImpl implements FileHandleManager { + /** Default WAL segment sync timeout. */ + private static final long DFLT_WAL_SEGMENT_SYNC_TIMEOUT = 500L; + + /** WAL writer worker. */ + private WALWriter walWriter; + /** WAL segment sync worker. */ + private WalSegmentSyncer walSegmentSyncWorker; + /** Context. */ + protected final GridCacheSharedContext cctx; + /** Logger. */ + private final IgniteLogger log; + /** */ + private final WALMode mode; + /** Persistence metrics tracker. */ + private final DataStorageMetricsImpl metrics; + /** Use mapped byte buffer. */ + private final boolean mmap; + /** Last WAL pointer. */ + private final Supplier<WALPointer> lastWALPtr; + /** */ + private final RecordSerializer serializer; + /** Current handle supplier. */ + private final Supplier<FileWriteHandle> currentHandleSupplier; + /** WAL buffer size. */ + private final int walBufferSize; + /** WAL segment size in bytes. This is the maximum value; actual segments may be shorter. */ + private final long maxWalSegmentSize; + /** Fsync delay. */ + private final long fsyncDelay; + + /** + * @param cctx Context. + * @param metrics Data storage metrics. + * @param mmap Mmap. + * @param lastWALPtr Last WAL pointer. + * @param serializer Serializer. + * @param currentHandleSupplier Current handle supplier. + * @param mode WAL mode. + * @param walBufferSize WAL buffer size. + * @param maxWalSegmentSize Max WAL segment size. + * @param fsyncDelay Fsync delay.
+ */ + public FileHandleManagerImpl( + GridCacheSharedContext cctx, + DataStorageMetricsImpl metrics, + boolean mmap, + Supplier<WALPointer> lastWALPtr, + RecordSerializer serializer, + Supplier<FileWriteHandle> currentHandleSupplier, + WALMode mode, + int walBufferSize, + long maxWalSegmentSize, + long fsyncDelay) { + this.cctx = cctx; + this.log = cctx.logger(FileHandleManagerImpl.class); + this.mode = mode; + this.metrics = metrics; + this.mmap = mmap; + this.lastWALPtr = lastWALPtr; + this.serializer = serializer; + this.currentHandleSupplier = currentHandleSupplier; + this.walBufferSize = walBufferSize; + this.maxWalSegmentSize = maxWalSegmentSize; + this.fsyncDelay = fsyncDelay; + } + + /** {@inheritDoc} */ + @Override public FileWriteHandle initHandle( + SegmentIO fileIO, + long position, + RecordSerializer serializer + ) throws IOException { + SegmentedRingByteBuffer rbuf; + + if (mmap) { + MappedByteBuffer buf = fileIO.map((int)maxWalSegmentSize); + + rbuf = new SegmentedRingByteBuffer(buf, metrics); + } + else + rbuf = new SegmentedRingByteBuffer(walBufferSize, maxWalSegmentSize, DIRECT, metrics); + + rbuf.init(position); + + return new FileWriteHandleImpl( + cctx, fileIO, rbuf, serializer, metrics, walWriter, position, + mode, mmap, true, fsyncDelay, maxWalSegmentSize + ); + } + + /** {@inheritDoc} */ + @Override public FileWriteHandle nextHandle(SegmentIO fileIO, RecordSerializer serializer) throws IOException { + SegmentedRingByteBuffer rbuf; + + if (mmap) { + MappedByteBuffer buf = fileIO.map((int)maxWalSegmentSize); + + rbuf = new SegmentedRingByteBuffer(buf, metrics); + } + else + rbuf = currentHandle().buf.reset(); + + try { + return new FileWriteHandleImpl( + cctx, fileIO, rbuf, serializer, metrics, walWriter, 0, + mode, mmap, false, fsyncDelay, maxWalSegmentSize + ); + } + catch (ClosedByInterruptException e) { + if (rbuf != null) + rbuf.free(); + } + + return null; + } + + /** + * @return Current handle.
+ */ + private FileWriteHandleImpl currentHandle() { + return (FileWriteHandleImpl)currentHandleSupplier.get(); + } + + /** {@inheritDoc} */ + @Override public void start() { + if (mode != WALMode.NONE && mode != WALMode.FSYNC) { + walSegmentSyncWorker = new WalSegmentSyncer(cctx.igniteInstanceName(), + cctx.kernalContext().log(WalSegmentSyncer.class)); + + if (log.isInfoEnabled()) + log.info("Started write-ahead log manager [mode=" + mode + ']'); + } + else + U.quietAndWarn(log, "Started write-ahead log manager in NONE mode, persisted data may be lost in " + + "a case of unexpected node failure. Make sure to deactivate the cluster before shutdown."); + + } + + /** {@inheritDoc} */ + @Override public void onActivate() { + if (!cctx.kernalContext().clientNode()) { + if (walSegmentSyncWorker != null) + new IgniteThread(walSegmentSyncWorker).start(); + } + } + + /** {@inheritDoc} */ + @Override public void onDeactivate() throws IgniteCheckedException { + FileWriteHandleImpl currHnd = currentHandle(); + + if (mode == WALMode.BACKGROUND) { + if (currHnd != null) + currHnd.flush(null); + } + + if (currHnd != null) + currHnd.close(false); + + if (walSegmentSyncWorker != null) + walSegmentSyncWorker.shutdown(); + + if (walWriter != null) + walWriter.shutdown(); + } + + /** {@inheritDoc} */ + @Override public void resumeLogging() { + walWriter = new WALWriter(log); + + if (!mmap) + new IgniteThread(walWriter).start(); + } + + /** {@inheritDoc} */ + @Override public void flush(WALPointer ptr, boolean explicitFsync) throws IgniteCheckedException, StorageException { + if (serializer == null || mode == WALMode.NONE) + return; + + FileWriteHandleImpl cur = currentHandle(); + + // WAL manager was not started (client node). + if (cur == null) + return; + + FileWALPointer filePtr = (FileWALPointer)(ptr == null ? 
lastWALPtr.get() : ptr); + + if (mode == LOG_ONLY) + cur.flushOrWait(filePtr); + + if (!explicitFsync && mode != WALMode.FSYNC) + return; // No need to sync in LOG_ONLY or BACKGROUND unless explicit fsync is required. + + // No need to sync if was rolled over. + if (filePtr != null && !cur.needFsync(filePtr)) + return; + + cur.fsync(filePtr); + } + + /** + * @throws StorageException If node is no longer valid and we missed a WAL operation. + */ + private void checkNode() throws StorageException { + if (cctx.kernalContext().invalid()) + throw new StorageException("Failed to perform WAL operation (environment was invalidated by a " + + "previous error)"); + } + + /** + * WAL writer worker. + */ + public class WALWriter extends GridWorker { + /** Unconditional flush. */ + private static final long UNCONDITIONAL_FLUSH = -1L; + + /** File close. */ + private static final long FILE_CLOSE = -2L; + + /** File force. */ + private static final long FILE_FORCE = -3L; + + /** Err. */ + private volatile Throwable err; + + //TODO: replace with GC-free data structure. + /** Parked threads. */ + final Map<Thread, Long> waiters = new ConcurrentHashMap<>(); + + /** + * Default constructor. + * + * @param log Logger.
+ */ + WALWriter(IgniteLogger log) { + super(cctx.igniteInstanceName(), "wal-write-worker%" + cctx.igniteInstanceName(), log, + cctx.kernalContext().workersRegistry()); + } + + /** {@inheritDoc} */ + @Override protected void body() { + Throwable err = null; + + try { + while (!isCancelled()) { + onIdle(); + + while (waiters.isEmpty()) { + if (!isCancelled()) { + blockingSectionBegin(); + + try { + LockSupport.park(); + } + finally { + blockingSectionEnd(); + } + } + else { + unparkWaiters(Long.MAX_VALUE); + + return; + } + } + + Long pos = null; + + for (Long val : waiters.values()) { + if (val > Long.MIN_VALUE) + pos = val; + } + + updateHeartbeat(); + + if (pos == null) + continue; + else if (pos < UNCONDITIONAL_FLUSH) { + try { + assert pos == FILE_CLOSE || pos == FILE_FORCE : pos; + + if (pos == FILE_CLOSE) + currentHandle().fileIO.close(); + else if (pos == FILE_FORCE) + currentHandle().fileIO.force(); + } + catch (IOException e) { + log.error("Exception in WAL writer thread: ", e); + + err = e; + + unparkWaiters(Long.MAX_VALUE); + + return; + } + + unparkWaiters(pos); + } + + updateHeartbeat(); + + List segs = currentHandle().buf.poll(pos); + + if (segs == null) { + unparkWaiters(pos); + + continue; + } + + for (int i = 0; i < segs.size(); i++) { + SegmentedRingByteBuffer.ReadSegment seg = segs.get(i); + + updateHeartbeat(); + + try { + writeBuffer(seg.position(), seg.buffer()); + } + catch (Throwable e) { + log.error("Exception in WAL writer thread:", e); + + err = e; + } + finally { + seg.release(); + + long p = pos <= UNCONDITIONAL_FLUSH || err != null ? 
Long.MAX_VALUE : currentHandle().written;
+
+                            unparkWaiters(p);
+                        }
+                    }
+                }
+            }
+            catch (Throwable t) {
+                err = t;
+            }
+            finally {
+                unparkWaiters(Long.MAX_VALUE);
+
+                if (err == null && !isCancelled)
+                    err = new IllegalStateException("Worker " + name() + " is terminated unexpectedly");
+
+                if (err instanceof OutOfMemoryError)
+                    cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err));
+                else if (err != null)
+                    cctx.kernalContext().failure().process(new FailureContext(SYSTEM_WORKER_TERMINATION, err));
+            }
+        }
+
+        /**
+         * Shuts down the thread.
+         */
+        private void shutdown() throws IgniteInterruptedCheckedException {
+            U.cancel(this);
+
+            LockSupport.unpark(runner());
+
+            U.join(runner());
+        }
+
+        /**
+         * Unparks waiting threads.
+         *
+         * @param pos Position.
+         */
+        private void unparkWaiters(long pos) {
+            assert pos > Long.MIN_VALUE : pos;
+
+            for (Map.Entry<Thread, Long> e : waiters.entrySet()) {
+                Long val = e.getValue();
+
+                if (val <= pos) {
+                    if (val != Long.MIN_VALUE)
+                        waiters.put(e.getKey(), Long.MIN_VALUE);
+
+                    LockSupport.unpark(e.getKey());
+                }
+            }
+        }
+
+        /**
+         * Forces all pending changes to the file.
+         */
+        void force() throws IgniteCheckedException {
+            flushBuffer(FILE_FORCE);
+        }
+
+        /**
+         * Closes the file.
+         */
+        void close() throws IgniteCheckedException {
+            flushBuffer(FILE_CLOSE);
+        }
+
+        /**
+         * Flushes all data from the buffer.
+         */
+        void flushAll() throws IgniteCheckedException {
+            flushBuffer(UNCONDITIONAL_FLUSH);
+        }
+
+        /**
+         * @param expPos Expected position.
+         */
+        void flushBuffer(long expPos) throws IgniteCheckedException {
+            if (mmap)
+                return;
+
+            Throwable err = walWriter.err;
+
+            if (err != null)
+                cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, err));
+
+            if (expPos == UNCONDITIONAL_FLUSH)
+                expPos = (currentHandle().buf.tail());
+
+            Thread t = Thread.currentThread();
+
+            waiters.put(t, expPos);
+
+            LockSupport.unpark(walWriter.runner());
+
+            while (true) {
+                Long val = waiters.get(t);
+
+                assert val != null : "Only this thread can remove thread from waiters";
+
+                if (val == Long.MIN_VALUE) {
+                    waiters.remove(t);
+
+                    Throwable walWriterError = walWriter.err;
+
+                    if (walWriterError != null)
+                        throw new IgniteCheckedException("Flush buffer failed.", walWriterError);
+
+                    return;
+                }
+                else
+                    LockSupport.park();
+            }
+        }
+
+        /**
+         * @param pos Position in file to start write from. May be checked against the actual position to wait for
+         * previous writes to complete.
+         * @param buf Buffer to write to file.
+         * @throws StorageException If failed.
+         * @throws IgniteCheckedException If failed.
+         */
+        private void writeBuffer(long pos, ByteBuffer buf) throws StorageException, IgniteCheckedException {
+            FileWriteHandleImpl hdl = currentHandle();
+
+            assert hdl.fileIO != null : "Writing to a closed segment.";
+
+            checkNode();
+
+            long lastLogged = U.currentTimeMillis();
+
+            long logBackoff = 2_000;
+
+            // If we were too fast, we need to wait for previous writes to complete.
+            while (hdl.written != pos) {
+                assert hdl.written < pos : "written = " + hdl.written + ", pos = " + pos; // No one can write further than we are now.
+
+                // Writes of buffer blocks may be reordered:
+                // the order of acquiring the lock is not the same as the order of writes.
+                long now = U.currentTimeMillis();
+
+                if (now - lastLogged >= logBackoff) {
+                    if (logBackoff < 60 * 60_000)
+                        logBackoff *= 2;
+
+                    U.warn(log, "Still waiting for a concurrent write to complete [written=" + hdl.written +
+                        ", pos=" + pos + ", lastFsyncPos=" + hdl.lastFsyncPos + ", stop=" + hdl.stop.get() +
+                        ", actualPos=" + hdl.safePosition() + ']');
+
+                    lastLogged = now;
+                }
+
+                checkNode();
+            }
+
+            // Do the write.
+            int size = buf.remaining();
+
+            assert size > 0 : size;
+
+            try {
+                assert hdl.written == hdl.fileIO.position();
+
+                hdl.written += hdl.fileIO.writeFully(buf);
+
+                metrics.onWalBytesWritten(size);
+
+                assert hdl.written == hdl.fileIO.position();
+            }
+            catch (IOException e) {
+                StorageException se = new StorageException("Failed to write buffer.", e);
+
+                cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, se));
+
+                throw se;
+            }
+        }
+    }
+
+    /**
+     * Syncs WAL segment file.
+     */
+    public class WalSegmentSyncer extends GridWorker {
+        /** Sync timeout. */
+        long syncTimeout;
+
+        /**
+         * @param igniteInstanceName Ignite instance name.
+         * @param log Logger.
+         */
+        private WalSegmentSyncer(String igniteInstanceName, IgniteLogger log) {
+            super(igniteInstanceName, "wal-segment-syncer", log);
+
+            syncTimeout = Math.max(IgniteSystemProperties.getLong(IGNITE_WAL_SEGMENT_SYNC_TIMEOUT,
+                DFLT_WAL_SEGMENT_SYNC_TIMEOUT), 100L);
+        }
+
+        /** {@inheritDoc} */
+        @Override protected void body() throws InterruptedException, IgniteInterruptedCheckedException {
+            while (!isCancelled()) {
+                sleep(syncTimeout);
+
+                try {
+                    flush(null, true);
+                }
+                catch (IgniteCheckedException e) {
+                    U.error(log, "Exception when flushing WAL.", e);
+                }
+            }
+        }
+
+        /** Shuts down the worker.
*/ + private void shutdown() { + synchronized (this) { + U.cancel(this); + } + + U.join(this, log); + } + } + +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileWriteHandle.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileWriteHandle.java new file mode 100644 index 0000000000000..410cd56046dcb --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileWriteHandle.java @@ -0,0 +1,113 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.pagemem.wal.record.WALRecord; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; +import org.jetbrains.annotations.Nullable; + +/** + * File write handle. + */ +public interface FileWriteHandle { + + /** + * @return Version of serializer. 
+     */
+    int serializerVersion();
+
+    /**
+     * Performs actions required after logging is resumed.
+     */
+    void finishResumeLogging();
+
+    /**
+     * Write header to segment.
+     *
+     * @throws IgniteCheckedException If failed.
+     */
+    void writeHeader() throws IgniteCheckedException;
+
+    /**
+     * @param rec Record to be added.
+     * @return Pointer or null if roll over to next segment is required or already started by other thread.
+     * @throws StorageException If storage failed.
+     * @throws IgniteCheckedException If failed.
+     */
+    @Nullable WALPointer addRecord(WALRecord rec) throws StorageException, IgniteCheckedException;
+
+    /**
+     * Flush all records.
+     *
+     * @throws IgniteCheckedException If failed.
+     */
+    void flushAll() throws IgniteCheckedException;
+
+    /**
+     * @param ptr Pointer.
+     * @return {@code true} if fsync needed.
+     */
+    boolean needFsync(FileWALPointer ptr);
+
+    /**
+     * @return Pointer to the end of the last written record (probably not fsync-ed).
+     */
+    FileWALPointer position();
+
+    /**
+     * Do fsync.
+     *
+     * @param ptr Pointer up to which fsync is required.
+     * @throws StorageException If storage failed.
+     * @throws IgniteCheckedException If failed.
+     */
+    void fsync(FileWALPointer ptr) throws StorageException, IgniteCheckedException;
+
+    /**
+     * Close buffer.
+     */
+    void closeBuffer();
+
+    /**
+     * Close segment.
+     *
+     * @param rollOver Close for rollover.
+     * @return {@code true} if close succeeded.
+     * @throws IgniteCheckedException If failed.
+     * @throws StorageException If storage failed.
+     */
+    boolean close(boolean rollOver) throws IgniteCheckedException, StorageException;
+
+    /**
+     * Signals next segment available to wake up other worker threads waiting for WAL to write.
+     */
+    void signalNextAvailable();
+
+    /**
+     * Awaits initialization of the next segment.
+     */
+    void awaitNext();
+
+    /**
+     * @return Absolute WAL segment file index (incremental counter).
+ */ + long getSegmentId(); +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileWriteHandleImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileWriteHandleImpl.java new file mode 100644 index 0000000000000..f582dbd496ae8 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FileWriteHandleImpl.java @@ -0,0 +1,601 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle; + +import java.io.IOException; +import java.lang.reflect.Field; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.nio.ByteBuffer; +import java.nio.MappedByteBuffer; +import java.util.List; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLongFieldUpdater; +import java.util.concurrent.locks.Condition; +import java.util.concurrent.locks.Lock; +import java.util.concurrent.locks.ReentrantLock; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.pagemem.wal.record.CheckpointRecord; +import org.apache.ignite.internal.pagemem.wal.record.SwitchSegmentRecord; +import org.apache.ignite.internal.pagemem.wal.record.WALRecord; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.persistence.DataStorageMetricsImpl; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; +import org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer; +import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; +import 
org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactoryImpl; +import org.apache.ignite.internal.util.GridUnsafe; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_SERIALIZER_VERSION; +import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.SWITCH_SEGMENT_RECORD; +import static org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.prepareSerializerVersionBuffer; +import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactory.LATEST_SERIALIZER_VERSION; +import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.HEADER_RECORD_SIZE; +import static org.apache.ignite.internal.util.IgniteUtils.findField; +import static org.apache.ignite.internal.util.IgniteUtils.findNonPublicMethod; + +/** + * File handle for one log segment. + */ +@SuppressWarnings("SignalWithoutCorrespondingAwait") +class FileWriteHandleImpl extends AbstractFileHandle implements FileWriteHandle { + /** {@link MappedByteBuffer#force0(java.io.FileDescriptor, long, long)}. */ + private static final Method force0 = findNonPublicMethod( + MappedByteBuffer.class, "force0", + java.io.FileDescriptor.class, long.class, long.class + ); + /** {@link FileWriteHandleImpl#written} atomic field updater. */ + private static final AtomicLongFieldUpdater WRITTEN_UPD = + AtomicLongFieldUpdater.newUpdater(FileWriteHandleImpl.class, "written"); + + /** {@link MappedByteBuffer#mappingOffset()}. */ + private static final Method mappingOffset = findNonPublicMethod(MappedByteBuffer.class, "mappingOffset"); + + /** {@link MappedByteBuffer#mappingAddress(long)}. 
*/ + private static final Method mappingAddress = findNonPublicMethod( + MappedByteBuffer.class, "mappingAddress", long.class + ); + + /** {@link MappedByteBuffer#fd} */ + private static final Field fd = findField(MappedByteBuffer.class, "fd"); + + /** Page size. */ + private static final int PAGE_SIZE = GridUnsafe.pageSize(); + + /** Serializer latest version to use. */ + private final int serializerVer = + IgniteSystemProperties.getInteger(IGNITE_WAL_SERIALIZER_VERSION, LATEST_SERIALIZER_VERSION); + + /** Use mapped byte buffer. */ + private final boolean mmap; + + /** Created on resume logging. */ + private volatile boolean resume; + + /** + * Position in current file after the end of last written record (incremented after file channel write operation) + */ + volatile long written; + + /** */ + protected volatile long lastFsyncPos; + + /** Stop guard to provide warranty that only one thread will be successful in calling {@link #close(boolean)}. */ + protected final AtomicBoolean stop = new AtomicBoolean(false); + + /** */ + private final Lock lock = new ReentrantLock(); + + /** Condition for timed wait of several threads, see {@link DataStorageConfiguration#getWalFsyncDelayNanos()}. */ + private final Condition fsync = lock.newCondition(); + + /** + * Next segment available condition. Protection from "spurious wakeup" is provided by predicate {@link + * #fileIO}=null. + */ + private final Condition nextSegment = lock.newCondition(); + + /** Buffer. */ + protected final SegmentedRingByteBuffer buf; + + /** */ + private final WALMode mode; + + /** Fsync delay. */ + private final long fsyncDelay; + + /** Persistence metrics tracker. */ + private final DataStorageMetricsImpl metrics; + + /** WAL segment size in bytes. This is maximum value, actual segments may be shorter. */ + private final long maxWalSegmentSize; + + /** Logger. */ + protected final IgniteLogger log; + + /** */ + private final RecordSerializer serializer; + + /** Context. 
*/ + protected final GridCacheSharedContext cctx; + + /** WAL writer worker. */ + private final FileHandleManagerImpl.WALWriter walWriter; + + /** + * @param cctx Context. + * @param fileIO I/O file interface to use + * @param serializer Serializer. + * @param metrics Data storage metrics. + * @param writer WAL writer. + * @param pos Initial position. + * @param mode WAL mode. + * @param mmap Mmap. + * @param resume Created on resume logging flag. + * @param fsyncDelay Fsync delay. + * @param maxWalSegmentSize Max WAL segment size. + * @throws IOException If failed. + */ + FileWriteHandleImpl( + GridCacheSharedContext cctx, SegmentIO fileIO, SegmentedRingByteBuffer rbuf, RecordSerializer serializer, + DataStorageMetricsImpl metrics, FileHandleManagerImpl.WALWriter writer, long pos, WALMode mode, boolean mmap, + boolean resume, long fsyncDelay, long maxWalSegmentSize) throws IOException { + super(fileIO); + assert serializer != null; + + this.mmap = mmap; + this.mode = mode; + this.fsyncDelay = fsyncDelay; + this.metrics = metrics; + this.maxWalSegmentSize = maxWalSegmentSize; + this.log = cctx.logger(FileWriteHandleImpl.class); + this.cctx = cctx; + this.walWriter = writer; + this.serializer = serializer; + this.written = pos; + this.lastFsyncPos = pos; + this.resume = resume; + this.buf = rbuf; + + if (!mmap) + fileIO.position(pos); + } + + /** {@inheritDoc} */ + @Override public int serializerVersion() { + return serializer.version(); + } + + /** {@inheritDoc} */ + @Override public void finishResumeLogging() { + resume = false; + } + + /** + * @throws StorageException If node is no longer valid and we missed a WAL operation. + */ + private void checkNode() throws StorageException { + if (cctx.kernalContext().invalid()) + throw new StorageException("Failed to perform WAL operation (environment was invalidated by a " + + "previous error)"); + } + + /** + * Write serializer version to current handle. 
+ */ + @Override public void writeHeader() { + SegmentedRingByteBuffer.WriteSegment seg = buf.offer(HEADER_RECORD_SIZE); + + assert seg != null && seg.position() > 0; + + prepareSerializerVersionBuffer(getSegmentId(), serializerVer, false, seg.buffer()); + + seg.release(); + } + + /** + * @param rec Record to be added to write queue. + * @return Pointer or null if roll over to next segment is required or already started by other thread. + * @throws StorageException If failed. + * @throws IgniteCheckedException If failed. + */ + @Override @Nullable public WALPointer addRecord(WALRecord rec) throws StorageException, IgniteCheckedException { + assert rec.size() > 0 : rec; + + for (; ; ) { + checkNode(); + + SegmentedRingByteBuffer.WriteSegment seg; + + // Buffer can be in open state in case of resuming with different serializer version. + if (rec.type() == SWITCH_SEGMENT_RECORD && !resume) + seg = buf.offerSafe(rec.size()); + else + seg = buf.offer(rec.size()); + + FileWALPointer ptr = null; + + if (seg != null) { + try { + int pos = (int)(seg.position() - rec.size()); + + ByteBuffer buf = seg.buffer(); + + if (buf == null) + return null; // Can not write to this segment, need to switch to the next one. + + ptr = new FileWALPointer(getSegmentId(), pos, rec.size()); + + rec.position(ptr); + + fillBuffer(buf, rec); + + if (mmap) { + // written field must grow only, but segment with greater position can be serialized + // earlier than segment with smaller position. + while (true) { + long written0 = written; + + if (seg.position() > written0) { + if (WRITTEN_UPD.compareAndSet(this, written0, seg.position())) + break; + } + else + break; + } + } + + return ptr; + } + finally { + seg.release(); + + if (mode == WALMode.BACKGROUND && rec instanceof CheckpointRecord) + flushOrWait(ptr); + } + } + else + walWriter.flushAll(); + } + } + + /** + * Flush or wait for concurrent flush completion. + * + * @param ptr Pointer. 
+ */ + public void flushOrWait(FileWALPointer ptr) throws IgniteCheckedException { + if (ptr != null) { + // If requested obsolete file index, it must be already flushed by close. + if (ptr.index() != getSegmentId()) + return; + } + + flush(ptr); + } + + /** {@inheritDoc} */ + @Override public void flushAll() throws IgniteCheckedException { + flush(null); + } + + /** + * @param ptr Pointer. + */ + public void flush(FileWALPointer ptr) throws IgniteCheckedException { + if (ptr == null) { // Unconditional flush. + walWriter.flushAll(); + + return; + } + + assert ptr.index() == getSegmentId(); + + walWriter.flushBuffer(ptr.fileOffset()); + } + + /** + * @param buf Buffer. + * @param rec WAL record. + * @throws IgniteCheckedException If failed. + */ + private void fillBuffer(ByteBuffer buf, WALRecord rec) throws IgniteCheckedException { + try { + serializer.writeRecord(rec, buf); + } + catch (RuntimeException e) { + throw new IllegalStateException("Failed to write record: " + rec, e); + } + } + + /** + * Non-blocking check if this pointer needs to be sync'ed. + * + * @param ptr WAL pointer to check. + * @return {@code False} if this pointer has been already sync'ed. + */ + @Override public boolean needFsync(FileWALPointer ptr) { + // If index has changed, it means that the log was rolled over and already sync'ed. + // If requested position is smaller than last sync'ed, it also means all is good. + // If position is equal, then our record is the last not synced. + return getSegmentId() == ptr.index() && lastFsyncPos <= ptr.fileOffset(); + } + + /** + * @return Pointer to the end of the last written record (probably not fsync-ed). + */ + @Override public FileWALPointer position() { + lock.lock(); + + try { + return new FileWALPointer(getSegmentId(), (int)written, 0); + } + finally { + lock.unlock(); + } + } + + /** + * @param ptr Pointer to sync. + * @throws StorageException If failed. 
+ */ + @Override public void fsync(FileWALPointer ptr) throws StorageException, IgniteCheckedException { + lock.lock(); + + try { + if (ptr != null) { + if (!needFsync(ptr)) + return; + + if (fsyncDelay > 0 && !stop.get()) { + // Delay fsync to collect as many updates as possible: trade latency for throughput. + U.await(fsync, fsyncDelay, TimeUnit.NANOSECONDS); + + if (!needFsync(ptr)) + return; + } + } + + flushOrWait(ptr); + + if (stop.get()) + return; + + long lastFsyncPos0 = lastFsyncPos; + long written0 = written; + + if (lastFsyncPos0 != written0) { + // Fsync position must be behind. + assert lastFsyncPos0 < written0 : "lastFsyncPos=" + lastFsyncPos0 + ", written=" + written0; + + boolean metricsEnabled = metrics.metricsEnabled(); + + long start = metricsEnabled ? System.nanoTime() : 0; + + if (mmap) { + long pos = ptr == null ? -1 : ptr.fileOffset(); + + List segs = buf.poll(pos); + + if (segs != null) { + assert segs.size() == 1; + + SegmentedRingByteBuffer.ReadSegment seg = segs.get(0); + + int off = seg.buffer().position(); + int len = seg.buffer().limit() - off; + + fsync((MappedByteBuffer)buf.buf, off, len); + + seg.release(); + } + } + else + walWriter.force(); + + lastFsyncPos = written; + + if (fsyncDelay > 0) + fsync.signalAll(); + + long end = metricsEnabled ? System.nanoTime() : 0; + + if (metricsEnabled) + metrics.onFsync(end - start); + } + } + finally { + lock.unlock(); + } + } + + /** + * @param buf Mapped byte buffer. + * @param off Offset. + * @param len Length. 
+ */ + private void fsync(MappedByteBuffer buf, int off, int len) throws IgniteCheckedException { + try { + long mappedOff = (Long)mappingOffset.invoke(buf); + + assert mappedOff == 0 : mappedOff; + + long addr = (Long)mappingAddress.invoke(buf, mappedOff); + + long delta = (addr + off) % PAGE_SIZE; + + long alignedAddr = (addr + off) - delta; + + force0.invoke(buf, fd.get(buf), alignedAddr, len + delta); + } + catch (IllegalAccessException | InvocationTargetException e) { + throw new IgniteCheckedException(e); + } + } + + /** {@inheritDoc} */ + @Override public void closeBuffer() { + buf.close(); + } + + /** + * @return {@code true} If this thread actually closed the segment. + * @throws IgniteCheckedException If failed. + * @throws StorageException If failed. + */ + @Override public boolean close(boolean rollOver) throws IgniteCheckedException, StorageException { + if (stop.compareAndSet(false, true)) { + lock.lock(); + + try { + flushOrWait(null); + + try { + RecordSerializer backwardSerializer = new RecordSerializerFactoryImpl(cctx) + .createSerializer(serializerVer); + + SwitchSegmentRecord segmentRecord = new SwitchSegmentRecord(); + + int switchSegmentRecSize = backwardSerializer.size(segmentRecord); + + if (rollOver && written < (maxWalSegmentSize - switchSegmentRecSize)) { + segmentRecord.size(switchSegmentRecSize); + + WALPointer segRecPtr = addRecord(segmentRecord); + + if (segRecPtr != null) + fsync((FileWALPointer)segRecPtr); + } + + if (mmap) { + List segs = buf.poll(maxWalSegmentSize); + + if (segs != null) { + assert segs.size() == 1; + + segs.get(0).release(); + } + } + + // Do the final fsync. + if (mode != WALMode.NONE) { + if (mmap) + ((MappedByteBuffer)buf.buf).force(); + else + fileIO.force(); + + lastFsyncPos = written; + } + + if (mmap) { + try { + fileIO.close(); + } + catch (IOException ignore) { + // No-op. 
+ } + } + else { + walWriter.close(); + + if (!rollOver) + buf.free(); + } + } + catch (IOException e) { + throw new StorageException("Failed to close WAL write handle [idx=" + getSegmentId() + "]", e); + } + + if (log.isDebugEnabled()) + log.debug("Closed WAL write handle [idx=" + getSegmentId() + "]"); + + return true; + } + finally { + if (mmap) + buf.free(); + + lock.unlock(); + } + } + else + return false; + } + + /** + * Signals next segment available to wake up other worker threads waiting for WAL to write. + */ + @Override public void signalNextAvailable() { + lock.lock(); + + try { + assert cctx.kernalContext().invalid() || + written == lastFsyncPos || mode != WALMode.FSYNC : + "fsync [written=" + written + ", lastFsync=" + lastFsyncPos + ", idx=" + getSegmentId() + ']'; + + fileIO = null; + + nextSegment.signalAll(); + } + finally { + lock.unlock(); + } + } + + /** {@inheritDoc} */ + @Override public void awaitNext() { + lock.lock(); + + try { + while (fileIO != null) + U.awaitQuiet(nextSegment); + } + finally { + lock.unlock(); + } + } + + /** + * @return Safely reads current position of the file channel as String. Will return "null" if channel is null. 
+ */ + public String safePosition() { + FileIO io = fileIO; + + if (io == null) + return "null"; + + try { + return String.valueOf(io.position()); + } + catch (IOException e) { + return "{Failed to read channel position: " + e.getMessage() + '}'; + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FsyncFileHandleManagerImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FsyncFileHandleManagerImpl.java new file mode 100644 index 0000000000000..e456f04c1711d --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FsyncFileHandleManagerImpl.java @@ -0,0 +1,157 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle;
+
+import java.io.IOException;
+import java.util.function.Supplier;
+import org.apache.ignite.IgniteCheckedException;
+import org.apache.ignite.IgniteLogger;
+import org.apache.ignite.configuration.WALMode;
+import org.apache.ignite.internal.pagemem.wal.WALPointer;
+import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
+import org.apache.ignite.internal.processors.cache.persistence.DataStorageMetricsImpl;
+import org.apache.ignite.internal.processors.cache.persistence.StorageException;
+import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer;
+import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO;
+import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer;
+
+/**
+ * Implementation of {@link FileHandleManager} for FSYNC mode.
+ */
+public class FsyncFileHandleManagerImpl implements FileHandleManager {
+    /** Context. */
+    protected final GridCacheSharedContext cctx;
+    /** Logger. */
+    protected final IgniteLogger log;
+    /** */
+    private final WALMode mode;
+    /** Persistence metrics tracker. */
+    private final DataStorageMetricsImpl metrics;
+    /** Last WAL pointer. */
+    private final Supplier<WALPointer> lastWALPtr;
+    /** */
+    protected final RecordSerializer serializer;
+    /** Current handle supplier. */
+    private final Supplier<FileWriteHandle> currentHandleSupplier;
+    /** WAL segment size in bytes. This is a maximum value; actual segments may be shorter. */
+    private final long maxWalSegmentSize;
+    /** Fsync delay. */
+    private final long fsyncDelay;
+    /** Thread local byte buffer size. */
+    private final int tlbSize;
+
+    /**
+     * @param cctx Context.
+     * @param metrics Data storage metrics.
+     * @param ptr Last WAL pointer.
+     * @param serializer Serializer.
+     * @param handle Current handle supplier.
+     * @param mode WAL mode.
+     * @param maxWalSegmentSize Max WAL segment size.
+ * @param fsyncDelay Fsync delay. + * @param tlbSize Thread local byte buffer size. + */ + public FsyncFileHandleManagerImpl(GridCacheSharedContext cctx, + DataStorageMetricsImpl metrics, Supplier ptr, RecordSerializer serializer, + Supplier handle, WALMode mode, + long maxWalSegmentSize, long fsyncDelay, int tlbSize) { + this.cctx = cctx; + this.log = cctx.logger(FsyncFileHandleManagerImpl.class); + this.mode = mode; + this.metrics = metrics; + lastWALPtr = ptr; + this.serializer = serializer; + currentHandleSupplier = handle; + this.maxWalSegmentSize = maxWalSegmentSize; + this.fsyncDelay = fsyncDelay; + this.tlbSize = tlbSize; + } + + /** {@inheritDoc} */ + @Override public FileWriteHandle initHandle(SegmentIO fileIO, long position, + RecordSerializer serializer) throws IOException { + return new FsyncFileWriteHandle( + cctx, fileIO, metrics, serializer, position, + mode, maxWalSegmentSize, tlbSize, fsyncDelay + ); + } + + /** {@inheritDoc} */ + @Override public FileWriteHandle nextHandle(SegmentIO fileIO, + RecordSerializer serializer) throws IOException { + return new FsyncFileWriteHandle( + cctx, fileIO, metrics, serializer, 0, + mode, maxWalSegmentSize, tlbSize, fsyncDelay + ); + } + + /** + * @return Current handle. + */ + private FsyncFileWriteHandle currentHandle() { + return (FsyncFileWriteHandle)currentHandleSupplier.get(); + } + + /** {@inheritDoc} */ + @Override public void start() { + //NOOP. + } + + /** {@inheritDoc} */ + @Override public void onActivate() { + //NOOP. + } + + /** {@inheritDoc} */ + @Override public void onDeactivate() throws IgniteCheckedException { + FsyncFileWriteHandle currHnd = currentHandle(); + + if (mode == WALMode.BACKGROUND) { + if (currHnd != null) + currHnd.flushAllOnStop(); + } + + if (currHnd != null) + currHnd.close(false); + } + + /** {@inheritDoc} */ + @Override public void resumeLogging() { + //NOOP. 
+ } + + /** {@inheritDoc} */ + @Override public void flush(WALPointer ptr, boolean explicitFsync) throws IgniteCheckedException, StorageException { + if (serializer == null || mode == WALMode.NONE) + return; + + FsyncFileWriteHandle cur = currentHandle(); + + // WAL manager was not started (client node). + if (cur == null) + return; + + FileWALPointer filePtr = (FileWALPointer)(ptr == null ? lastWALPtr.get() : ptr); + + // No need to sync if was rolled over. + if (filePtr != null && !cur.needFsync(filePtr)) + return; + + cur.fsync(filePtr, false); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FsyncFileWriteHandle.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FsyncFileWriteHandle.java new file mode 100644 index 0000000000000..2acc3a7994263 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/filehandle/FsyncFileWriteHandle.java @@ -0,0 +1,845 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.filehandle; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.ByteOrder; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; +import java.util.concurrent.locks.Condition; +import java.util.concurrent.locks.Lock; +import java.util.concurrent.locks.ReentrantLock; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.pagemem.wal.record.SwitchSegmentRecord; +import org.apache.ignite.internal.pagemem.wal.record.WALRecord; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.persistence.DataStorageMetricsImpl; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; +import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactoryImpl; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer; +import org.apache.ignite.internal.util.GridUnsafe; +import 
org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_SERIALIZER_VERSION; +import static org.apache.ignite.failure.FailureType.CRITICAL_ERROR; +import static org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.prepareSerializerVersionBuffer; +import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactory.LATEST_SERIALIZER_VERSION; + +/** + * File handle for one log segment. + */ +@SuppressWarnings("SignalWithoutCorrespondingAwait") +class FsyncFileWriteHandle extends AbstractFileHandle implements FileWriteHandle { + /** */ + private final RecordSerializer serializer; + /** Max segment size. */ + private final long maxSegmentSize; + /** Serializer latest version to use. */ + private final int serializerVersion = + IgniteSystemProperties.getInteger(IGNITE_WAL_SERIALIZER_VERSION, LATEST_SERIALIZER_VERSION); + /** + * Accumulated WAL records chain. This reference points to the latest WAL record. When writing, the chain is iterated + * from latest to oldest (see {@link WALRecord#previous()}); records from the chain are saved into the buffer in reverse + * order. + */ + private final AtomicReference<WALRecord> head = new AtomicReference<>(); + /** + * Position in current file after the end of last written record (incremented after file channel write operation) + */ + private volatile long written; + /** */ + private volatile long lastFsyncPos; + /** Stop guard to guarantee that only one thread will succeed in calling {@link #close(boolean)} */ + private final AtomicBoolean stop = new AtomicBoolean(false); + /** */ + private final Lock lock = new ReentrantLock(); + /** Condition activated each time writeBuffer() completes.
Used to wait for a previously flushed write to complete */ + private final Condition writeComplete = lock.newCondition(); + /** Condition for timed wait of several threads, see {@link DataStorageConfiguration#getWalFsyncDelayNanos()} */ + private final Condition fsync = lock.newCondition(); + /** + * Next segment available condition. Protection from "spurious wakeup" is provided by predicate {@link + * #fileIO}=null + */ + private final Condition nextSegment = lock.newCondition(); + /** */ + private final WALMode mode; + /** Thread local byte buffer size, see {@link #tlb} */ + private final int tlbSize; + /** Context. */ + protected final GridCacheSharedContext cctx; + /** Persistence metrics tracker. */ + private final DataStorageMetricsImpl metrics; + /** Logger. */ + protected final IgniteLogger log; + /** Fsync delay. */ + private final long fsyncDelay; + + /** + * Thread local byte buffer for saving serialized WAL records chain, see {@link FsyncFileWriteHandle#head}. + * Introduced to decrease the number of buffer allocations. Used only when the record itself is shorter than {@link + * #tlbSize}. + */ + private final ThreadLocal<ByteBuffer> tlb = new ThreadLocal<ByteBuffer>() { + @Override protected ByteBuffer initialValue() { + ByteBuffer buf = ByteBuffer.allocateDirect(tlbSize); + + buf.order(GridUnsafe.NATIVE_BYTE_ORDER); + + return buf; + } + }; + + /** + * @param cctx Context. + * @param fileIO I/O file interface to use. + * @param metrics Data storage metrics. + * @param serializer Serializer. + * @param pos Position. + * @param mode WAL mode. + * @param maxSegmentSize Max segment size. + * @param size Thread local byte buffer size. + * @param fsyncDelay Fsync delay. + * @throws IOException If failed.
+ */ + FsyncFileWriteHandle( + GridCacheSharedContext cctx, SegmentIO fileIO, + DataStorageMetricsImpl metrics, RecordSerializer serializer, long pos, + WALMode mode, long maxSegmentSize, int size, long fsyncDelay) throws IOException { + super(fileIO); + assert serializer != null; + + this.mode = mode; + tlbSize = size; + this.cctx = cctx; + this.metrics = metrics; + this.log = cctx.logger(FsyncFileWriteHandle.class); + this.fsyncDelay = fsyncDelay; + this.maxSegmentSize = maxSegmentSize; + this.serializer = serializer; + this.written = pos; + this.lastFsyncPos = pos; + + head.set(new FakeRecord(new FileWALPointer(fileIO.getSegmentId(), (int)pos, 0), false)); + + fileIO.position(pos); + } + + /** {@inheritDoc} */ + @Override public int serializerVersion() { + return serializer.version(); + } + + /** {@inheritDoc} */ + @Override public void finishResumeLogging() { + // NOOP. + } + + /** + * Write serializer version to current handle. NOTE: Method mutates {@code fileIO} position, written and + * lastFsyncPos fields. + * + * @throws IgniteCheckedException If failed to write serializer version. + */ + @Override public void writeHeader() throws IgniteCheckedException { + try { + assert fileIO.position() == 0 : "Serializer version can be written only at the beginning of the file " + + fileIO.position(); + + long updatedPosition = writeSerializerVersion(fileIO, getSegmentId(), + serializer.version(), mode); + + written = updatedPosition; + lastFsyncPos = updatedPosition; + head.set(new FakeRecord(new FileWALPointer(getSegmentId(), (int)updatedPosition, 0), false)); + } + catch (IOException e) { + throw new IgniteCheckedException("Unable to write serializer version for segment " + getSegmentId(), e); + } + } + + /** + * Writes record serializer version to provided {@code io}. NOTE: Method mutates position of {@code io}. + * + * @param io I/O interface for file. + * @param idx Segment index. + * @param version Serializer version. + * @return I/O position after writing the version.
+ * @throws IOException If failed to write serializer version. + */ + private static long writeSerializerVersion(FileIO io, long idx, int version, WALMode mode) throws IOException { + ByteBuffer buf = ByteBuffer.allocate(RecordV1Serializer.HEADER_RECORD_SIZE); + buf.order(ByteOrder.nativeOrder()); + + io.writeFully(prepareSerializerVersionBuffer(idx, version, false, buf)); + + // Flush + if (mode == WALMode.FSYNC) + io.force(); + + return io.position(); + } + + /** + * Checks if current head is a close fake record and returns {@code true} if so. + * + * @return {@code true} if current head is close record. + */ + private boolean stopped() { + return stopped(head.get()); + } + + /** + * @param record Record to check. + * @return {@code true} if the record is fake close record. + */ + private boolean stopped(WALRecord record) { + return record instanceof FakeRecord && ((FakeRecord)record).stop; + } + + /** {@inheritDoc} */ + @Nullable @Override public WALPointer addRecord(WALRecord rec) throws StorageException { + assert rec.size() > 0 || rec.getClass() == FakeRecord.class; + + boolean flushed = false; + + for (; ; ) { + WALRecord h = head.get(); + + long nextPos = nextPosition(h); + + if (nextPos + rec.size() >= maxSegmentSize || stopped(h)) { + // Can not write to this segment, need to switch to the next one. + return null; + } + + int newChainSize = h.chainSize() + rec.size(); + + if (newChainSize > tlbSize && !flushed) { + boolean res = h.previous() == null || flush(h, false); + + if (rec.size() > tlbSize) + flushed = res; + + continue; + } + + rec.chainSize(newChainSize); + rec.previous(h); + + FileWALPointer ptr = new FileWALPointer( + getSegmentId(), + (int)nextPos, + rec.size()); + + rec.position(ptr); + + if (head.compareAndSet(h, rec)) + return ptr; + } + } + + /** {@inheritDoc} */ + @Override public void flushAll() throws IgniteCheckedException { + flush(head.get(), false); + } + + /** + * @throws IgniteCheckedException if failed. 
+ */ + public void flushAllOnStop() throws IgniteCheckedException { + flush(head.get(), true); + } + + /** + * @param rec Record. + * @return Position for the next record. + */ + private long nextPosition(WALRecord rec) { + return recordOffset(rec) + rec.size(); + } + + /** + * Gets WAL record offset relative to the WAL segment file beginning. + * + * @param rec WAL record. + * @return File offset. + */ + private static int recordOffset(WALRecord rec) { + FileWALPointer ptr = (FileWALPointer)rec.position(); + + assert ptr != null; + + return ptr.fileOffset(); + } + + /** + * Flush or wait for concurrent flush completion. + * + * @param ptr Pointer. + * @throws StorageException If failed. + */ + private void flushOrWait(FileWALPointer ptr, boolean stop) throws StorageException { + long expWritten; + + if (ptr != null) { + // If requested obsolete file index, it must be already flushed by close. + if (ptr.index() != getSegmentId()) + return; + + expWritten = ptr.fileOffset(); + } + else // We read head position before the flush because otherwise we can get wrong position. + expWritten = recordOffset(head.get()); + + if (flush(ptr, stop)) + return; + else if (stop) { + FakeRecord fr = (FakeRecord)head.get(); + + assert fr.stop : "Invalid fake record on top of the queue: " + fr; + + expWritten = recordOffset(fr); + } + + // Spin-wait for a while before acquiring the lock. + for (int i = 0; i < 64; i++) { + if (written >= expWritten) + return; + } + + // If we did not flush ourselves then await for concurrent flush to complete. + lock.lock(); + + try { + while (written < expWritten && !cctx.kernalContext().invalid()) + U.awaitQuiet(writeComplete); + } + finally { + lock.unlock(); + } + } + + /** + * @param ptr Pointer. + * @return {@code true} If the flush really happened. + * @throws StorageException If failed. + */ + private boolean flush(FileWALPointer ptr, boolean stop) throws StorageException { + if (ptr == null) { // Unconditional flush. 
+ for (; ; ) { + WALRecord expHead = head.get(); + + if (expHead.previous() == null) { + FakeRecord frHead = (FakeRecord)expHead; + + if (frHead.stop == stop || frHead.stop || + head.compareAndSet(expHead, new FakeRecord(frHead.position(), stop))) + return false; + } + + if (flush(expHead, stop)) + return true; + } + } + + assert ptr.index() == getSegmentId(); + + for (; ; ) { + WALRecord h = head.get(); + + // If current chain begin position is greater than requested, then someone else flushed our changes. + if (chainBeginPosition(h) > ptr.fileOffset()) + return false; + + if (flush(h, stop)) + return true; // We are lucky. + } + } + + /** + * @param h Head of the chain. + * @return Chain begin position. + */ + private long chainBeginPosition(WALRecord h) { + return recordOffset(h) + h.size() - h.chainSize(); + } + + /** + * @throws StorageException If node is no longer valid and we missed a WAL operation. + */ + private void checkNode() throws StorageException { + if (cctx.kernalContext().invalid()) + throw new StorageException("Failed to perform WAL operation (environment was invalidated by a " + + "previous error)"); + } + + /** + * @param expHead Expected head of chain. If head was changed, flush is not performed in this thread + * @throws StorageException If failed. + */ + private boolean flush(WALRecord expHead, boolean stop) throws StorageException { + if (expHead.previous() == null) { + FakeRecord frHead = (FakeRecord)expHead; + + if (!stop || frHead.stop) // Protects from CASing terminal FakeRecord(true) to FakeRecord(false) + return false; + } + + // Fail-fast before CAS. + checkNode(); + + if (!head.compareAndSet(expHead, new FakeRecord(new FileWALPointer(getSegmentId(), (int)nextPosition(expHead), 0), stop))) + return false; + + if (expHead.chainSize() == 0) + return false; + + // At this point we grabbed the piece of WAL chain. + // Any failure in this code must invalidate the environment. 
+ try { + // We can safely allow other threads to start building next chains while we are doing flush here. + ByteBuffer buf; + + boolean tmpBuf = false; + + if (expHead.chainSize() > tlbSize) { + buf = GridUnsafe.allocateBuffer(expHead.chainSize()); + + tmpBuf = true; // We need to manually release this temporary direct buffer. + } + else + buf = tlb.get(); + + try { + long pos = fillBuffer(buf, expHead); + + writeBuffer(pos, buf); + } + finally { + if (tmpBuf) + GridUnsafe.freeBuffer(buf); + } + + return true; + } + catch (Throwable e) { + StorageException se = e instanceof StorageException ? (StorageException)e : + new StorageException("Unable to write", new IOException(e)); + + cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, se)); + + // All workers waiting for a next segment must be woken up and stopped + signalNextAvailable(); + + throw se; + } + } + + /** + * Serializes WAL records chain to provided byte buffer. + * + * @param buf Buffer, will be filled with records chain from end to beginning. + * @param head Head of the chain to write to the buffer. + * @return Position in file for this buffer. + * @throws IgniteCheckedException If failed. + */ + private long fillBuffer(ByteBuffer buf, WALRecord head) throws IgniteCheckedException { + final int limit = head.chainSize(); + + assert limit <= buf.capacity(); + + buf.rewind(); + buf.limit(limit); + + do { + buf.position(head.chainSize() - head.size()); + buf.limit(head.chainSize()); // Just to make sure that serializer works in bounds. 
+ + try { + serializer.writeRecord(head, buf); + } + catch (RuntimeException e) { + throw new IllegalStateException("Failed to write record: " + head, e); + } + + assert !buf.hasRemaining() : "Reported record size is greater than actual: " + head; + + head = head.previous(); + } + while (head.previous() != null); + + assert head instanceof FakeRecord : head.getClass(); + + buf.rewind(); + buf.limit(limit); + + return recordOffset(head); + } + + /** + * Non-blocking check if this pointer needs to be sync'ed. + * + * @param ptr WAL pointer to check. + * @return {@code False} if this pointer has been already sync'ed. + */ + @Override public boolean needFsync(FileWALPointer ptr) { + // If index has changed, it means that the log was rolled over and already sync'ed. + // If requested position is smaller than last sync'ed, it also means all is good. + // If position is equal, then our record is the last not synced. + return getSegmentId() == ptr.index() && lastFsyncPos <= ptr.fileOffset(); + } + + /** {@inheritDoc} */ + @Override public FileWALPointer position() { + lock.lock(); + + try { + return new FileWALPointer(getSegmentId(), (int)written, 0); + } + finally { + lock.unlock(); + } + } + + /** {@inheritDoc} */ + @Override public void fsync(FileWALPointer ptr) throws StorageException, IgniteCheckedException { + fsync(ptr, false); + } + + /** {@inheritDoc} */ + @Override public void closeBuffer() { + //NOOP. + } + + /** + * @param ptr Pointer to sync. + * @throws StorageException If failed. + * @throws IgniteInterruptedCheckedException If interrupted. + */ + protected void fsync(FileWALPointer ptr, boolean stop) throws StorageException, IgniteInterruptedCheckedException { + lock.lock(); + + try { + if (ptr != null) { + if (!needFsync(ptr)) + return; + + if (fsyncDelay > 0 && !stopped()) { + // Delay fsync to collect as many updates as possible: trade latency for throughput. 
+ U.await(fsync, fsyncDelay, TimeUnit.NANOSECONDS); + + if (!needFsync(ptr)) + return; + } + } + + flushOrWait(ptr, stop); + + if (stopped()) + return; + + if (lastFsyncPos != written) { + assert lastFsyncPos < written; // Fsync position must be behind. + + boolean metricsEnabled = metrics.metricsEnabled(); + + long start = metricsEnabled ? System.nanoTime() : 0; + + try { + fileIO.force(); + } + catch (IOException e) { + throw new StorageException(e); + } + + lastFsyncPos = written; + + if (fsyncDelay > 0) + fsync.signalAll(); + + long end = metricsEnabled ? System.nanoTime() : 0; + + if (metricsEnabled) + metrics.onFsync(end - start); + } + } + finally { + lock.unlock(); + } + } + + /** + * @return {@code true} If this thread actually closed the segment. + * @throws StorageException If failed. + */ + @Override public boolean close(boolean rollOver) throws StorageException { + if (stop.compareAndSet(false, true)) { + lock.lock(); + + try { + flushOrWait(null, true); + + assert stopped() : "Segment is not closed after close flush: " + head.get(); + + try { + try { + RecordSerializer backwardSerializer = new RecordSerializerFactoryImpl(cctx) + .createSerializer(serializerVersion); + + SwitchSegmentRecord segmentRecord = new SwitchSegmentRecord(); + + int switchSegmentRecSize = backwardSerializer.size(segmentRecord); + + if (rollOver && written < (maxSegmentSize - switchSegmentRecSize)) { + final ByteBuffer buf = ByteBuffer.allocate(switchSegmentRecSize); + + segmentRecord.position(new FileWALPointer(getSegmentId(), (int)written, switchSegmentRecSize)); + backwardSerializer.writeRecord(segmentRecord, buf); + + buf.rewind(); + + written += fileIO.writeFully(buf, written); + } + } + catch (IgniteCheckedException e) { + throw new IOException(e); + } + finally { + assert mode == WALMode.FSYNC; + + // Do the final fsync. 
+ fileIO.force(); + + lastFsyncPos = written; + + fileIO.close(); + } + } + catch (IOException e) { + throw new StorageException("Failed to close WAL write handle [idx=" + getSegmentId() + "]", e); + } + + if (log.isDebugEnabled()) + log.debug("Closed WAL write handle [idx=" + getSegmentId() + "]"); + + return true; + } + finally { + lock.unlock(); + } + } + else + return false; + } + + /** {@inheritDoc} */ + @Override public void signalNextAvailable() { + lock.lock(); + + try { + WALRecord rec = head.get(); + + if (!cctx.kernalContext().invalid()) { + assert rec instanceof FakeRecord : "Expected head FakeRecord, actual head " + + (rec != null ? rec.getClass().getSimpleName() : "null"); + + assert written == lastFsyncPos || mode != WALMode.FSYNC : + "fsync [written=" + written + ", lastFsync=" + lastFsyncPos + ']'; + + fileIO = null; + } + else { + try { + fileIO.close(); + } + catch (IOException e) { + U.error(log, "Failed to close WAL file [idx=" + getSegmentId() + ", fileIO=" + fileIO + "]", e); + } + } + + nextSegment.signalAll(); + } + finally { + lock.unlock(); + } + } + + /** {@inheritDoc} */ + @Override public void awaitNext() { + lock.lock(); + + try { + while (fileIO != null && !cctx.kernalContext().invalid()) + U.awaitQuiet(nextSegment); + } + finally { + lock.unlock(); + } + } + + /** + * @param pos Position in file to start write from. May be checked against actual position to wait previous writes + * to complete. + * @param buf Buffer to write to file. + * @throws StorageException If failed. + */ + @SuppressWarnings("TooBroadScope") + private void writeBuffer(long pos, ByteBuffer buf) throws StorageException { + boolean interrupted = false; + + lock.lock(); + + try { + assert fileIO != null : "Writing to a closed segment."; + + checkNode(); + + long lastLogged = U.currentTimeMillis(); + + long logBackoff = 2_000; + + // If we were too fast, need to wait previous writes to complete. 
+ while (written != pos) { + assert written < pos : "written = " + written + ", pos = " + pos; // No one can write further than we are now. + + // Permutation occurred between blocks write operations. + // Order of acquiring lock is not the same as order of write. + long now = U.currentTimeMillis(); + + if (now - lastLogged >= logBackoff) { + if (logBackoff < 60 * 60_000) + logBackoff *= 2; + + U.warn(log, "Still waiting for a concurrent write to complete [written=" + written + + ", pos=" + pos + ", lastFsyncPos=" + lastFsyncPos + ", stop=" + stop.get() + + ", actualPos=" + safePosition() + ']'); + + lastLogged = now; + } + + try { + writeComplete.await(2, TimeUnit.SECONDS); + } + catch (InterruptedException ignore) { + interrupted = true; + } + + checkNode(); + } + + // Do the write. + int size = buf.remaining(); + + assert size > 0 : size; + + try { + assert written == fileIO.position(); + + fileIO.writeFully(buf); + + written += size; + + metrics.onWalBytesWritten(size); + + assert written == fileIO.position(); + } + catch (IOException e) { + StorageException se = new StorageException("Unable to write", e); + + cctx.kernalContext().failure().process(new FailureContext(CRITICAL_ERROR, se)); + + throw se; + } + } + finally { + writeComplete.signalAll(); + + lock.unlock(); + + if (interrupted) + Thread.currentThread().interrupt(); + } + } + + /** + * @return Safely reads current position of the file channel as String. Will return "null" if channel is null. + */ + public String safePosition() { + FileIO io = this.fileIO; + + if (io == null) + return "null"; + + try { + return String.valueOf(io.position()); + } + catch (IOException e) { + return "{Failed to read channel position: " + e.getMessage() + "}"; + } + } + + /** + * Fake record is zero-sized record, which is not stored into file. Fake record is used for storing position in file + * {@link WALRecord#position()}. Fake record is allowed to have no previous record. 
+ */ + private static final class FakeRecord extends WALRecord { + /** */ + private final boolean stop; + + /** + * @param pos Position. + */ + FakeRecord(FileWALPointer pos, boolean stop) { + position(pos); + + this.stop = stop; + } + + /** {@inheritDoc} */ + @Override public RecordType type() { + return null; + } + + /** {@inheritDoc} */ + @Override public FileWALPointer position() { + return (FileWALPointer)super.position(); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(FakeRecord.class, this, "super", super.toString()); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/FileInput.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/FileInput.java index d19d17b3c7d7f..c9615f597e0f7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/FileInput.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/FileInput.java @@ -19,10 +19,12 @@ import java.io.IOException; import java.nio.ByteBuffer; +import java.util.zip.CRC32; + import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferBackedDataInput; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; import org.jetbrains.annotations.NotNull; /** @@ -56,7 +58,7 @@ public interface FileInput extends ByteBufferBackedDataInput { */ public class Crc32CheckingFileInput implements ByteBufferBackedDataInput, AutoCloseable { /** */ - private final PureJavaCrc32 crc32 = new PureJavaCrc32(); + private final FastCrc crc = new FastCrc(); /** Last calc position. 
*/ private int lastCalcPosition; @@ -93,7 +95,7 @@ public Crc32CheckingFileInput(FileInput delegate, boolean skipCheck) { @Override public void close() throws Exception { updateCrc(); - int val = crc32.getValue(); + int val = crc.getValue(); int writtenCrc = this.readInt(); @@ -118,7 +120,7 @@ private void updateCrc() { buffer().position(lastCalcPosition); - crc32.update(delegate.buffer(), oldPos - lastCalcPosition); + crc.update(delegate.buffer(), oldPos - lastCalcPosition); lastCalcPosition = oldPos; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/SimpleFileInput.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/SimpleFileInput.java index 5918b0b34e93e..1a1562e0909e0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/SimpleFileInput.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/io/SimpleFileInput.java @@ -20,6 +20,7 @@ import java.io.EOFException; import java.io.IOException; import java.nio.ByteBuffer; + import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferExpander; import org.jetbrains.annotations.NotNull; @@ -264,7 +265,7 @@ private void clearBuffer() { /** * @param skipCheck If CRC check should be skipped. 
- * @return autoclosable fileInput, after its closing crc32 will be calculated and compared with saved one + * @return autoclosable fileInput, after its closing crc will be calculated and compared with saved one */ public Crc32CheckingFileInput startRead(boolean skipCheck) { return new Crc32CheckingFileInput(this, skipCheck); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/IgniteWalIteratorFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/IgniteWalIteratorFactory.java index f9388eaa15898..6e5759dd88f5e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/IgniteWalIteratorFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/IgniteWalIteratorFactory.java @@ -174,14 +174,16 @@ public WALIterator iterator( iteratorParametersBuilder.validate(); return new StandaloneWalRecordsIterator(log, - prepareSharedCtx(iteratorParametersBuilder), + iteratorParametersBuilder.sharedCtx == null ? prepareSharedCtx(iteratorParametersBuilder) : + iteratorParametersBuilder.sharedCtx, iteratorParametersBuilder.ioFactory, resolveWalFiles(iteratorParametersBuilder), iteratorParametersBuilder.filter, iteratorParametersBuilder.lowBound, iteratorParametersBuilder.highBound, iteratorParametersBuilder.keepBinary, - iteratorParametersBuilder.bufferSize + iteratorParametersBuilder.bufferSize, + iteratorParametersBuilder.strictBoundsCheck ); } @@ -367,7 +369,7 @@ private FileDescriptor readFileDescriptor(File file, FileIOFactory ioFactory) { kernalCtx, null, null, null, null, null, null, dbMgr, null, null, null, null, null, - null, null,null, null + null, null,null, null, null ); } @@ -409,6 +411,14 @@ public static class IteratorParametersBuilder { */ @Nullable private File marshallerMappingFileStoreDir; + /** + * Cache shared context. 
If the context is specified, binary object conversion and unmarshalling will be + * performed using the processors of this shared context. *
This field can't be specified together with {@link #binaryMetadataFileStoreDir} or + * {@link #marshallerMappingFileStoreDir} fields. + * */ + @Nullable private GridCacheSharedContext sharedCtx; + /** */ @Nullable private IgniteBiPredicate filter; @@ -418,6 +428,9 @@ public static class IteratorParametersBuilder { /** */ private FileWALPointer highBound = DFLT_HIGH_BOUND; + /** Use strict bounds check for WAL segments. */ + private boolean strictBoundsCheck; + /** * @param filesOrDirs Paths to files or directories. * @return IteratorParametersBuilder Self reference. @@ -504,6 +517,16 @@ public IteratorParametersBuilder marshallerMappingFileStoreDir(File marshallerMa return this; } + /** + * @param sharedCtx Cache shared context. + * @return IteratorParametersBuilder Self reference. + */ + public IteratorParametersBuilder sharedContext(GridCacheSharedContext sharedCtx) { + this.sharedCtx = sharedCtx; + + return this; + } + /** * @param filter Record filter for skip records during iteration. * @return IteratorParametersBuilder Self reference. @@ -534,6 +557,16 @@ public IteratorParametersBuilder to(FileWALPointer highBound) { return this; } + /** + * @param flag Use strict check. + * @return IteratorParametersBuilder Self reference. + */ + public IteratorParametersBuilder strictBoundsCheck(boolean flag) { + this.strictBoundsCheck = flag; + + return this; + } + /** * Copy current state of builder to new instance. 
* @@ -548,9 +581,11 @@ public IteratorParametersBuilder copy() { .ioFactory(ioFactory) .binaryMetadataFileStoreDir(binaryMetadataFileStoreDir) .marshallerMappingFileStoreDir(marshallerMappingFileStoreDir) + .sharedContext(sharedCtx) .from(lowBound) .to(highBound) - .filter(filter); + .filter(filter) + .strictBoundsCheck(strictBoundsCheck); } /** @@ -561,6 +596,10 @@ public void validate() throws IllegalArgumentException { A.ensure(U.isPow2(pageSize), "Page size must be a power of 2."); A.ensure(bufferSize >= pageSize * 2, "Buffer to small."); + + A.ensure(sharedCtx == null || (binaryMetadataFileStoreDir == null && + marshallerMappingFileStoreDir == null), "GridCacheSharedContext and binaryMetadataFileStoreDir/" + + "marshallerMappingFileStoreDir can't be specified at the same time"); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneGridKernalContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneGridKernalContext.java index 3f35c5fafe10f..e70a02709f4f8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneGridKernalContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneGridKernalContext.java @@ -41,6 +41,7 @@ import org.apache.ignite.internal.managers.communication.GridIoManager; import org.apache.ignite.internal.managers.deployment.GridDeploymentManager; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager; import org.apache.ignite.internal.managers.failover.GridFailoverManager; import org.apache.ignite.internal.managers.indexing.GridIndexingManager; @@ -457,6 +458,11 @@ protected IgniteConfiguration
prepareIgniteConfiguration() { return null; } + /** {@inheritDoc} */ + @Override public GridEncryptionManager encryption() { + return null; + } + /** {@inheritDoc} */ @Override public WorkersRegistry workersRegistry() { return null; @@ -646,6 +652,11 @@ protected IgniteConfiguration prepareIgniteConfiguration() { return null; } + /** {@inheritDoc} */ + @Override public boolean recoveryMode() { + return false; + } + /** {@inheritDoc} */ @Override public PdsFoldersResolver pdsFolderResolver() { return new PdsFoldersResolver() { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIterator.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIterator.java index c33a45ba30699..56a44cd8ad4e3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIterator.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIterator.java @@ -17,11 +17,11 @@ package org.apache.ignite.internal.processors.cache.persistence.wal.reader; -import java.io.File; import java.io.FileNotFoundException; import java.io.IOException; import java.util.ArrayList; import java.util.List; +import java.util.stream.Collectors; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.internal.GridKernalContext; @@ -29,8 +29,11 @@ import org.apache.ignite.internal.pagemem.wal.record.DataEntry; import org.apache.ignite.internal.pagemem.wal.record.DataRecord; import org.apache.ignite.internal.pagemem.wal.record.FilteredRecord; -import org.apache.ignite.internal.pagemem.wal.record.LazyDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MarshalledDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataEntry; +import 
org.apache.ignite.internal.pagemem.wal.record.MvccDataRecord; import org.apache.ignite.internal.pagemem.wal.record.UnwrapDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.UnwrapMvccDataEntry; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; import org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType; import org.apache.ignite.internal.processors.cache.CacheObject; @@ -43,11 +46,12 @@ import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.ReadFileHandle; import org.apache.ignite.internal.processors.cache.persistence.wal.WalSegmentTailReachedException; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; import org.apache.ignite.internal.processors.cache.persistence.wal.io.FileInput; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentFileInputFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SimpleSegmentFileInputFactory; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; +import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordDataV1Serializer.EncryptedDataEntry; import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializer; import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordSerializerFactoryImpl; import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.SegmentHeader; @@ -115,7 +119,8 @@ class StandaloneWalRecordsIterator extends AbstractWalRecordsIterator { FileWALPointer lowBound, FileWALPointer highBound, boolean keepBinary, - int initialReadBufferSize + int initialReadBufferSize, + boolean strictBoundsCheck ) throws IgniteCheckedException { 
super( log, @@ -126,6 +131,9 @@ class StandaloneWalRecordsIterator extends AbstractWalRecordsIterator { FILE_INPUT_FACTORY ); + if (strictBoundsCheck) + strictCheck(walFiles, lowBound, highBound); + this.lowBound = lowBound; this.highBound = highBound; @@ -138,6 +146,55 @@ class StandaloneWalRecordsIterator extends AbstractWalRecordsIterator { advance(); } + /** + * @param walFiles Wal files. + * @return Printable indexes of segment files. + */ + private static String printIndexes(List<FileDescriptor> walFiles) { + return "[" + String.join(",", walFiles.stream().map(f -> Long.toString(f.idx())).collect(Collectors.toList())) + "]"; + } + + /** + * @param walFiles Wal files. + * @param lowBound Low bound. + * @param highBound High bound. + * + * @throws IgniteCheckedException If failed. + */ + private static void strictCheck(List<FileDescriptor> walFiles, FileWALPointer lowBound, FileWALPointer highBound) throws IgniteCheckedException { + int idx = 0; + + if (lowBound.index() > Long.MIN_VALUE) { + for (; idx < walFiles.size(); idx++) { + FileDescriptor desc = walFiles.get(idx); + + assert desc != null; + + if (desc.idx() == lowBound.index()) + break; + } + } + + if (idx == walFiles.size()) + throw new StrictBoundsCheckException("Wal segments not in bounds. loBoundIndex=" + lowBound.index() + + ", indexes=" + printIndexes(walFiles)); + + long curWalSegmIdx = walFiles.get(idx).idx(); + + for (; idx < walFiles.size() && curWalSegmIdx <= highBound.index(); idx++, curWalSegmIdx++) { + FileDescriptor desc = walFiles.get(idx); + + assert desc != null; + + if (curWalSegmIdx != desc.idx()) + throw new StrictBoundsCheckException("Wal segment " + curWalSegmIdx + " not found in files " + printIndexes(walFiles)); + } + + if (highBound.index() < Long.MAX_VALUE && curWalSegmIdx <= highBound.index()) + throw new StrictBoundsCheckException("Wal segments not in bounds. 
hiBoundIndex=" + highBound.index() + + ", indexes=" + printIndexes(walFiles)); + } + /** * For directory mode sets oldest file as initial segment, for file by file mode, converts all files to descriptors * and gets oldest as initial. @@ -295,11 +352,11 @@ private boolean checkBounds(long idx) { } /** {@inheritDoc} */ - @NotNull @Override protected WALRecord postProcessRecord(@NotNull final WALRecord rec) { + @Override protected @NotNull WALRecord postProcessRecord(@NotNull final WALRecord rec) { GridKernalContext kernalCtx = sharedCtx.kernalContext(); IgniteCacheObjectProcessor processor = kernalCtx.cacheObjects(); - if (processor != null && rec.type() == RecordType.DATA_RECORD) { + if (processor != null && (rec.type() == RecordType.DATA_RECORD || rec.type() == RecordType.MVCC_DATA_RECORD)) { try { return postProcessDataRecord((DataRecord)rec, kernalCtx, processor); } @@ -354,7 +411,9 @@ private boolean checkBounds(long idx) { postProcessedEntries.add(postProcessedEntry); } - DataRecord res = new DataRecord(postProcessedEntries, dataRec.timestamp()); + DataRecord res = dataRec instanceof MvccDataRecord ? 
+ new MvccDataRecord(postProcessedEntries, dataRec.timestamp()) : + new DataRecord(postProcessedEntries, dataRec.timestamp()); res.size(dataRec.size()); res.position(dataRec.position()); @@ -371,18 +430,19 @@ private boolean checkBounds(long idx) { * @return post precessed entry * @throws IgniteCheckedException if failed */ - @NotNull private DataEntry postProcessDataEntry( + private @NotNull DataEntry postProcessDataEntry( final IgniteCacheObjectProcessor processor, final CacheObjectContext fakeCacheObjCtx, final DataEntry dataEntry) throws IgniteCheckedException { + if (dataEntry instanceof EncryptedDataEntry) + return dataEntry; final KeyCacheObject key; final CacheObject val; - final File marshallerMappingFileStoreDir = - fakeCacheObjCtx.kernalContext().marshallerContext().getMarshallerMappingFileStoreDir(); + boolean keepBinary = this.keepBinary || !fakeCacheObjCtx.kernalContext().marshallerContext().initialized(); - if (dataEntry instanceof LazyDataEntry) { - final LazyDataEntry lazyDataEntry = (LazyDataEntry)dataEntry; + if (dataEntry instanceof MarshalledDataEntry) { + final MarshalledDataEntry lazyDataEntry = (MarshalledDataEntry)dataEntry; key = processor.toKeyCacheObject(fakeCacheObjCtx, lazyDataEntry.getKeyType(), @@ -400,18 +460,47 @@ private boolean checkBounds(long idx) { val = dataEntry.value(); } - return new UnwrapDataEntry( - dataEntry.cacheId(), - key, - val, - dataEntry.op(), - dataEntry.nearXidVersion(), - dataEntry.writeVersion(), - dataEntry.expireTime(), - dataEntry.partitionId(), - dataEntry.partitionCounter(), - fakeCacheObjCtx, - keepBinary || marshallerMappingFileStoreDir == null); + return unwrapDataEntry(fakeCacheObjCtx, dataEntry, key, val, keepBinary); + } + + /** + * Unwrap data entry. + * @param coCtx CacheObject context. + * @param dataEntry Data entry. + * @param key Entry key. + * @param val Entry value. + * @param keepBinary Don't convert non primitive types. + * @return Unwrapped entry. 
+ */ + private DataEntry unwrapDataEntry(CacheObjectContext coCtx, DataEntry dataEntry, + KeyCacheObject key, CacheObject val, boolean keepBinary) { + if (dataEntry instanceof MvccDataEntry) + return new UnwrapMvccDataEntry( + dataEntry.cacheId(), + key, + val, + dataEntry.op(), + dataEntry.nearXidVersion(), + dataEntry.writeVersion(), + dataEntry.expireTime(), + dataEntry.partitionId(), + dataEntry.partitionCounter(), + ((MvccDataEntry)dataEntry).mvccVer(), + coCtx, + keepBinary); + else + return new UnwrapDataEntry( + dataEntry.cacheId(), + key, + val, + dataEntry.op(), + dataEntry.nearXidVersion(), + dataEntry.writeVersion(), + dataEntry.expireTime(), + dataEntry.partitionId(), + dataEntry.partitionCounter(), + coCtx, + keepBinary); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StrictBoundsCheckException.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StrictBoundsCheckException.java new file mode 100644 index 0000000000000..89d54eb1a2e28 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StrictBoundsCheckException.java @@ -0,0 +1,35 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.wal.reader; + +import org.apache.ignite.IgniteCheckedException; + +/** + * This exception is used in checking boundaries (StandaloneWalRecordsIterator). + */ +public class StrictBoundsCheckException extends IgniteCheckedException { + /** */ + private static final long serialVersionUID = 0L; + + /** + * @param mesg Message. + */ + public StrictBoundsCheckException(String mesg) { + super(mesg); + } +} \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV1Serializer.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV1Serializer.java index 752777a0b5f2a..604b643e7c5d0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV1Serializer.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV1Serializer.java @@ -20,6 +20,7 @@ import java.io.DataInput; import java.io.EOFException; import java.io.IOException; +import java.io.Serializable; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Collections; @@ -28,19 +29,24 @@ import java.util.Map; import java.util.UUID; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.pagemem.FullPageId; import org.apache.ignite.internal.pagemem.wal.record.CacheState; import org.apache.ignite.internal.pagemem.wal.record.CheckpointRecord; import org.apache.ignite.internal.pagemem.wal.record.DataEntry; import org.apache.ignite.internal.pagemem.wal.record.DataRecord; +import org.apache.ignite.internal.pagemem.wal.record.EncryptedRecord; import 
org.apache.ignite.internal.pagemem.wal.record.LazyDataEntry; import org.apache.ignite.internal.pagemem.wal.record.MemoryRecoveryRecord; import org.apache.ignite.internal.pagemem.wal.record.MetastoreDataRecord; import org.apache.ignite.internal.pagemem.wal.record.PageSnapshot; import org.apache.ignite.internal.pagemem.wal.record.TxRecord; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; +import org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType; +import org.apache.ignite.internal.pagemem.wal.record.WalRecordCacheGroupAware; import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageInsertFragmentRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageInsertRecord; +import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageMvccMarkUpdatedRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageMvccUpdateNewTxStateHintRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageMvccUpdateTxStateHintRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageRemoveRecord; @@ -63,7 +69,6 @@ import org.apache.ignite.internal.pagemem.wal.record.delta.MetaPageUpdateLastSuccessfulSnapshotId; import org.apache.ignite.internal.pagemem.wal.record.delta.MetaPageUpdateNextSnapshotId; import org.apache.ignite.internal.pagemem.wal.record.delta.MetaPageUpdatePartitionDataRecord; -import org.apache.ignite.internal.pagemem.wal.record.delta.DataPageMvccMarkUpdatedRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.NewRootInitRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.PageListMetaResetCountRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.PagesListAddPageRecord; @@ -82,6 +87,7 @@ import org.apache.ignite.internal.pagemem.wal.record.delta.TrackingPageDeltaRecord; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.CacheObjectContext; +import 
org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheOperation; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; @@ -91,11 +97,25 @@ import org.apache.ignite.internal.processors.cache.persistence.tree.io.BPlusInnerIO; import org.apache.ignite.internal.processors.cache.persistence.tree.io.CacheVersionIO; import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferBackedDataInput; +import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferBackedDataInputImpl; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; import org.apache.ignite.internal.processors.cache.persistence.wal.record.HeaderRecord; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.cacheobject.IgniteCacheObjectProcessor; +import org.apache.ignite.internal.util.typedef.T2; +import org.apache.ignite.internal.util.typedef.T3; +import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.spi.encryption.EncryptionSpi; +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.DATA_RECORD; +import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.ENCRYPTED_DATA_RECORD; +import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.ENCRYPTED_RECORD; +import static org.apache.ignite.internal.processors.cache.GridCacheOperation.READ; +import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.REC_TYPE_SIZE; +import static 
org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.putRecordType; /** * Record data V1 serializer. @@ -104,18 +124,36 @@ public class RecordDataV1Serializer implements RecordDataSerializer { /** Length of HEADER record data. */ static final int HEADER_RECORD_DATA_SIZE = /*Magic*/8 + /*Version*/4; - /** Cache shared context */ - private final GridCacheSharedContext cctx; + /** Cache shared context. */ + protected final GridCacheSharedContext cctx; - /** Size of page used for PageMemory regions */ + /** Size of page used for PageMemory regions. */ private final int pageSize; - /** Cache object processor to reading {@link DataEntry DataEntries} */ - private final IgniteCacheObjectProcessor co; + /** Size of page without encryption overhead. */ + private final int realPageSize; + + /** Cache object processor for reading {@link DataEntry DataEntries}. */ + protected final IgniteCacheObjectProcessor co; /** Serializer of {@link TxRecord} records. */ private TxRecordSerializer txRecordSerializer; + /** Encryption SPI instance. */ + private final EncryptionSpi encSpi; + + /** Encryption manager. */ + private final GridEncryptionManager encMgr; + + /** */ + private final boolean encryptionDisabled; + + /** */ + private static final byte ENCRYPTED = 1; + + /** */ + private static final byte PLAIN = 0; + /** * @param cctx Cache shared context. */ @@ -124,10 +162,183 @@ public RecordDataV1Serializer(GridCacheSharedContext cctx) { this.txRecordSerializer = new TxRecordSerializer(); this.co = cctx.kernalContext().cacheObjects(); this.pageSize = cctx.database().pageSize(); + this.encSpi = cctx.gridConfig().getEncryptionSpi(); + this.encMgr = cctx.kernalContext().encryption(); + + encryptionDisabled = encSpi instanceof NoopEncryptionSpi; + + // This happens on offline WAL iteration (we don't have encryption keys available). 
+ if (encSpi != null) + this.realPageSize = CU.encryptedPageSize(pageSize, encSpi); + else + this.realPageSize = pageSize; } /** {@inheritDoc} */ @Override public int size(WALRecord record) throws IgniteCheckedException { + int clSz = plainSize(record); + + if (needEncryption(record)) + return encSpi.encryptedSize(clSz) + 4 /* groupId */ + 4 /* data size */ + REC_TYPE_SIZE; + + return clSz; + } + + /** {@inheritDoc} */ + @Override public WALRecord readRecord(RecordType type, ByteBufferBackedDataInput in) + throws IOException, IgniteCheckedException { + if (type == ENCRYPTED_RECORD) { + if (encSpi == null) { + T2<Integer, RecordType> knownData = skipEncryptedRecord(in, true); + + // This happens on offline WAL iteration (we don't have encryption keys available). + return new EncryptedRecord(knownData.get1(), knownData.get2()); + } + + T3<ByteBufferBackedDataInput, Integer, RecordType> clData = readEncryptedData(in, true); + + // This happens during startup. On the first WAL iteration we restore only the metastore, + // so no encryption keys are available. See GridCacheDatabaseSharedManager#readMetastore. + if (clData.get1() == null) + return new EncryptedRecord(clData.get2(), clData.get3()); + + return readPlainRecord(clData.get3(), clData.get1(), true); + } + + return readPlainRecord(type, in, false); + } + + /** {@inheritDoc} */ + @Override public void writeRecord(WALRecord rec, ByteBuffer buf) throws IgniteCheckedException { + if (needEncryption(rec)) { + int clSz = plainSize(rec); + + ByteBuffer clData = ByteBuffer.allocate(clSz); + + writePlainRecord(rec, clData); + + clData.rewind(); + + writeEncryptedData(((WalRecordCacheGroupAware)rec).groupId(), rec.type(), clData, buf); + + return; + } + + writePlainRecord(rec, buf); + } + + /** + * @param rec Record to check. + * @return {@code True} if this record should be encrypted. 
+ */ + private boolean needEncryption(WALRecord rec) { + if (encryptionDisabled) + return false; + + if (!(rec instanceof WalRecordCacheGroupAware)) + return false; + + return needEncryption(((WalRecordCacheGroupAware)rec).groupId()); + } + + /** + * @param grpId Group id. + * @return {@code True} if this record should be encrypted. + */ + private boolean needEncryption(int grpId) { + if (encryptionDisabled) + return false; + + return encMgr != null && encMgr.groupKey(grpId) != null; + } + + /** + * Reads and decrypts data from the {@code in} stream. + * + * @param in Input stream. + * @param readType If {@code true} plain record type will be read from {@code in}. + * @return Plain data stream, group id and plain record type. + * @throws IOException If failed. + * @throws IgniteCheckedException If failed. + */ + private T3<ByteBufferBackedDataInput, Integer, RecordType> readEncryptedData(ByteBufferBackedDataInput in, + boolean readType) + throws IOException, IgniteCheckedException { + int grpId = in.readInt(); + int encRecSz = in.readInt(); + RecordType plainRecType = null; + + if (readType) + plainRecType = RecordV1Serializer.readRecordType(in); + + byte[] encData = new byte[encRecSz]; + + in.readFully(encData); + + Serializable key = encMgr.groupKey(grpId); + + if (key == null) + return new T3<>(null, grpId, plainRecType); + + byte[] clData = encSpi.decrypt(encData, key); + + return new T3<>(new ByteBufferBackedDataInputImpl().buffer(ByteBuffer.wrap(clData)), grpId, plainRecType); + } + + /** + * Reads encrypted record without decryption. + * Should be used only for an offline WAL iteration. + * + * @param in Data stream. + * @param readType If {@code true} plain record type will be read from {@code in}. + * @return Group id and type of skipped record. 
+ */ + private T2<Integer, RecordType> skipEncryptedRecord(ByteBufferBackedDataInput in, boolean readType) + throws IOException, IgniteCheckedException { + int grpId = in.readInt(); + int encRecSz = in.readInt(); + RecordType plainRecType = null; + + if (readType) + plainRecType = RecordV1Serializer.readRecordType(in); + + int skipped = in.skipBytes(encRecSz); + + assert skipped == encRecSz; + + return new T2<>(grpId, plainRecType); + } + + /** + * Writes encrypted {@code clData} to {@code dst} stream. + * + * @param grpId Group id. + * @param plainRecType Plain record type. + * @param clData Plain data. + * @param dst Destination buffer. + */ + private void writeEncryptedData(int grpId, @Nullable RecordType plainRecType, ByteBuffer clData, ByteBuffer dst) { + int dtSz = encSpi.encryptedSize(clData.capacity()); + + dst.putInt(grpId); + dst.putInt(dtSz); + + if (plainRecType != null) + putRecordType(dst, plainRecType); + + Serializable key = encMgr.groupKey(grpId); + + assert key != null; + + encSpi.encrypt(clData, key, dst); + } + + /** + * @param record Record to measure. + * @return Plain (without encryption) size of the serialized record in bytes. + * @throws IgniteCheckedException If failed. + */ + int plainSize(WALRecord record) throws IgniteCheckedException { switch (record.type()) { case PAGE_RECORD: assert record instanceof PageSnapshot; @@ -313,8 +524,19 @@ assert record instanceof PageSnapshot; } } - /** {@inheritDoc} */ - @Override public WALRecord readRecord(WALRecord.RecordType type, ByteBufferBackedDataInput in) throws IOException, IgniteCheckedException { + /** + * Reads {@code WalRecord} of {@code type} from input. + * Input should be plain (not encrypted). + * + * @param type Record type. + * @param in Input. + * @param encrypted Record was encrypted. + * @return Deserialized record. + * @throws IOException If failed. + * @throws IgniteCheckedException If failed. 
+ */ + WALRecord readPlainRecord(RecordType type, ByteBufferBackedDataInput in, + boolean encrypted) throws IOException, IgniteCheckedException { WALRecord res; switch (type) { @@ -326,7 +548,7 @@ assert record instanceof PageSnapshot; in.readFully(arr); - res = new PageSnapshot(new FullPageId(pageId, cacheId), arr); + res = new PageSnapshot(new FullPageId(pageId, cacheId), arr, encrypted ? realPageSize : pageSize); break; @@ -334,7 +556,7 @@ assert record instanceof PageSnapshot; long msb = in.readLong(); long lsb = in.readLong(); boolean hasPtr = in.readByte() != 0; - int idx = hasPtr ? in.readInt() : 0; + long idx = hasPtr ? in.readLong() : 0; int off = hasPtr ? in.readInt() : 0; int len = hasPtr ? in.readInt() : 0; @@ -401,7 +623,19 @@ assert record instanceof PageSnapshot; List<DataEntry> entries = new ArrayList<>(entryCnt); for (int i = 0; i < entryCnt; i++) - entries.add(readDataEntry(in)); + entries.add(readPlainDataEntry(in)); + + res = new DataRecord(entries, 0L); + + break; + + case ENCRYPTED_DATA_RECORD: + entryCnt = in.readInt(); + + entries = new ArrayList<>(entryCnt); + + for (int i = 0; i < entryCnt; i++) + entries.add(readEncryptedDataEntry(in)); res = new DataRecord(entries, 0L); @@ -900,7 +1134,7 @@ assert record instanceof PageSnapshot; throw new EOFException("END OF SEGMENT"); case TX_RECORD: - res = txRecordSerializer.read(in); + res = txRecordSerializer.readTx(in); break; @@ -911,8 +1145,14 @@ assert record instanceof PageSnapshot; return res; } - /** {@inheritDoc} */ - @Override public void writeRecord(WALRecord rec, ByteBuffer buf) throws IgniteCheckedException { + /** + * Writes {@code rec} to {@code buf} without encryption. + * + * @param rec Record to serialize. + * @param buf Output buffer. + * @throws IgniteCheckedException If failed. 
+ */ + void writePlainRecord(WALRecord rec, ByteBuffer buf) throws IgniteCheckedException { switch (rec.type()) { case PAGE_RECORD: PageSnapshot snap = (PageSnapshot)rec; @@ -997,8 +1237,14 @@ assert record instanceof PageSnapshot; buf.putInt(dataRec.writeEntries().size()); - for (DataEntry dataEntry : dataRec.writeEntries()) - putDataEntry(buf, dataEntry); + boolean encrypted = isDataRecordEncrypted(dataRec); + + for (DataEntry dataEntry : dataRec.writeEntries()) { + if (encrypted) + putEncryptedDataEntry(buf, dataEntry); + else + putPlainDataEntry(buf, dataEntry); + } break; @@ -1479,11 +1725,39 @@ public GridCacheSharedContext cctx() { return cctx; } + /** + * @param buf Buffer to write to. + * @param entry Data entry. + * @throws IgniteCheckedException If failed. + */ + void putEncryptedDataEntry(ByteBuffer buf, DataEntry entry) throws IgniteCheckedException { + DynamicCacheDescriptor desc = cctx.cache().cacheDescriptor(entry.cacheId()); + + if (desc != null && needEncryption(desc.groupId())) { + int clSz = entrySize(entry); + + ByteBuffer clData = ByteBuffer.allocate(clSz); + + putPlainDataEntry(clData, entry); + + clData.rewind(); + + buf.put(ENCRYPTED); + + writeEncryptedData(desc.groupId(), null, clData, buf); + } + else { + buf.put(PLAIN); + + putPlainDataEntry(buf, entry); + } + } + /** * @param buf Buffer to write to. * @param entry Data entry. */ - static void putDataEntry(ByteBuffer buf, DataEntry entry) throws IgniteCheckedException { + void putPlainDataEntry(ByteBuffer buf, DataEntry entry) throws IgniteCheckedException { buf.putInt(entry.cacheId()); if (!entry.key().putValue(buf)) @@ -1532,7 +1806,7 @@ private static void putCacheStates(ByteBuffer buf, Map stat * @param ver Version to write. * @param allowNull Is {@code null}version allowed. 
*/ - private static void putVersion(ByteBuffer buf, GridCacheVersion ver, boolean allowNull) { + static void putVersion(ByteBuffer buf, GridCacheVersion ver, boolean allowNull) { CacheVersionIO.write(buf, ver, allowNull); } @@ -1550,8 +1824,35 @@ private static void putRow(ByteBuffer buf, byte[] rowBytes) { /** * @param in Input to read from. * @return Read entry. + * @throws IOException If failed. + * @throws IgniteCheckedException If failed. */ - DataEntry readDataEntry(ByteBufferBackedDataInput in) throws IOException, IgniteCheckedException { + DataEntry readEncryptedDataEntry(ByteBufferBackedDataInput in) throws IOException, IgniteCheckedException { + boolean needDecryption = in.readByte() == ENCRYPTED; + + if (needDecryption) { + if (encSpi == null) { + skipEncryptedRecord(in, false); + + return new EncryptedDataEntry(); + } + + T3 clData = readEncryptedData(in, false); + + if (clData.get1() == null) + return null; + + return readPlainDataEntry(clData.get1()); + } + + return readPlainDataEntry(in); + } + + /** + * @param in Input to read from. + * @return Read entry. + */ + DataEntry readPlainDataEntry(ByteBufferBackedDataInput in) throws IOException, IgniteCheckedException { int cacheId = in.readInt(); int keySize = in.readInt(); @@ -1621,6 +1922,39 @@ DataEntry readDataEntry(ByteBufferBackedDataInput in) throws IOException, Ignite partCntr); } + /** + * @param rec Record. + * @return Real record type. + */ + RecordType recordType(WALRecord rec) { + if (encryptionDisabled) + return rec.type(); + + if (needEncryption(rec)) + return ENCRYPTED_RECORD; + + if (rec.type() != DATA_RECORD) + return rec.type(); + + return isDataRecordEncrypted((DataRecord)rec) ? ENCRYPTED_DATA_RECORD : DATA_RECORD; + } + + /** + * @param rec Data record. + * @return {@code True} if this data record should be encrypted. 
+ */ + boolean isDataRecordEncrypted(DataRecord rec) { + if (encryptionDisabled) + return false; + + for (DataEntry e : rec.writeEntries()) { + if (cctx.cacheContext(e.cacheId()) != null && needEncryption(cctx.cacheContext(e.cacheId()).groupId())) + return true; + } + + return false; + } + /** * @param buf Buffer to read from. * @return Read map. @@ -1661,7 +1995,7 @@ private Map readPartitionStates(DataInput buf) throws IOExc * @param allowNull Is {@code null}version allowed. * @return Read cache version. */ - private GridCacheVersion readVersion(ByteBufferBackedDataInput in, boolean allowNull) throws IOException { + GridCacheVersion readVersion(ByteBufferBackedDataInput in, boolean allowNull) throws IOException { // To be able to read serialization protocol version. in.ensure(1); @@ -1682,11 +2016,23 @@ private GridCacheVersion readVersion(ByteBufferBackedDataInput in, boolean allow * @return Full data record size. * @throws IgniteCheckedException If failed to obtain the length of one of the entries. */ - private int dataSize(DataRecord dataRec) throws IgniteCheckedException { + protected int dataSize(DataRecord dataRec) throws IgniteCheckedException { + boolean encrypted = isDataRecordEncrypted(dataRec); + int sz = 0; - for (DataEntry entry : dataRec.writeEntries()) - sz += entrySize(entry); + for (DataEntry entry : dataRec.writeEntries()) { + int clSz = entrySize(entry); + + if (!encryptionDisabled && needEncryption(cctx.cacheContext(entry.cacheId()).groupId())) + sz += encSpi.encryptedSize(clSz) + 1 /* encrypted flag */ + 4 /* groupId */ + 4 /* data size */; + else { + sz += clSz; + + if (encrypted) + sz += 1 /* encrypted flag */; + } + } return sz; } @@ -1696,7 +2042,7 @@ private int dataSize(DataRecord dataRec) throws IgniteCheckedException { * @return Entry size. * @throws IgniteCheckedException If failed to get key or value bytes length. 
*/ - private int entrySize(DataEntry entry) throws IgniteCheckedException { + protected int entrySize(DataEntry entry) throws IgniteCheckedException { GridCacheContext cctx = this.cctx.cacheContext(entry.cacheId()); CacheObjectContext coCtx = cctx.cacheObjectContext(); @@ -1735,4 +2081,14 @@ private int cacheStatesSize(Map states) { return size; } + + /** + * Represents encrypted Data Entry ({@link #key}, {@link #val value}) pair. + */ + public static class EncryptedDataEntry extends DataEntry { + /** Constructor. */ + EncryptedDataEntry() { + super(0, null, null, READ, null, null, 0, 0, 0); + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV2Serializer.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV2Serializer.java index b760547e6e933..aaf3cb77720e9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV2Serializer.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordDataV2Serializer.java @@ -32,22 +32,32 @@ import org.apache.ignite.internal.pagemem.wal.record.DataEntry; import org.apache.ignite.internal.pagemem.wal.record.DataRecord; import org.apache.ignite.internal.pagemem.wal.record.ExchangeRecord; +import org.apache.ignite.internal.pagemem.wal.record.LazyMvccDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataRecord; +import org.apache.ignite.internal.pagemem.wal.record.MvccTxRecord; import org.apache.ignite.internal.pagemem.wal.record.SnapshotRecord; import org.apache.ignite.internal.pagemem.wal.record.TxRecord; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; +import org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType; +import 
org.apache.ignite.internal.processors.cache.CacheObject; +import org.apache.ignite.internal.processors.cache.CacheObjectContext; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheOperation; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferBackedDataInput; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; import org.apache.ignite.internal.processors.cache.persistence.wal.record.HeaderRecord; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; /** * Record data V2 serializer. */ -public class RecordDataV2Serializer implements RecordDataSerializer { +public class RecordDataV2Serializer extends RecordDataV1Serializer { /** Length of HEADER record data. */ - static final int HEADER_RECORD_DATA_SIZE = /*Magic*/8 + /*Version*/4; - - /** V1 data serializer delegate. */ - private final RecordDataV1Serializer delegateSerializer; + private static final int HEADER_RECORD_DATA_SIZE = /*Magic*/8 + /*Version*/4; /** Serializer of {@link TxRecord} records. */ private final TxRecordSerializer txRecordSerializer; @@ -55,15 +65,16 @@ public class RecordDataV2Serializer implements RecordDataSerializer { /** * Create an instance of V2 data serializer. * - * @param delegateSerializer V1 data serializer. + * @param cctx Cache shared context. 
*/ - public RecordDataV2Serializer(RecordDataV1Serializer delegateSerializer) { - this.delegateSerializer = delegateSerializer; + public RecordDataV2Serializer(GridCacheSharedContext cctx) { + super(cctx); + this.txRecordSerializer = new TxRecordSerializer(); } /** {@inheritDoc} */ - @Override public int size(WALRecord rec) throws IgniteCheckedException { + @Override protected int plainSize(WALRecord rec) throws IgniteCheckedException { switch (rec.type()) { case HEADER_RECORD: return HEADER_RECORD_DATA_SIZE; @@ -80,8 +91,11 @@ public RecordDataV2Serializer(RecordDataV1Serializer delegateSerializer) { return 18 + cacheStatesSize + (walPtr == null ? 0 : 16); + case MVCC_DATA_RECORD: + return 4/*entry count*/ + 8/*timestamp*/ + dataSize((DataRecord)rec); + case DATA_RECORD: - return delegateSerializer.size(rec) + 8/*timestamp*/; + return super.plainSize(rec) + 8/*timestamp*/; case SNAPSHOT: return 8 + 1; @@ -92,22 +106,26 @@ public RecordDataV2Serializer(RecordDataV1Serializer delegateSerializer) { case TX_RECORD: return txRecordSerializer.size((TxRecord)rec); + case MVCC_TX_RECORD: + return txRecordSerializer.size((MvccTxRecord)rec); + default: - return delegateSerializer.size(rec); + return super.plainSize(rec); } } /** {@inheritDoc} */ - @Override public WALRecord readRecord( - WALRecord.RecordType type, - ByteBufferBackedDataInput in + @Override WALRecord readPlainRecord( + RecordType type, + ByteBufferBackedDataInput in, + boolean encrypted ) throws IOException, IgniteCheckedException { switch (type) { case CHECKPOINT_RECORD: long msb = in.readLong(); long lsb = in.readLong(); boolean hasPtr = in.readByte() != 0; - int idx0 = hasPtr ? in.readInt() : 0; + long idx0 = hasPtr ? in.readLong() : 0; int off = hasPtr ? in.readInt() : 0; int len = hasPtr ? 
in.readInt() : 0; @@ -130,9 +148,32 @@ public RecordDataV2Serializer(RecordDataV1Serializer delegateSerializer) { List entries = new ArrayList<>(entryCnt); for (int i = 0; i < entryCnt; i++) - entries.add(delegateSerializer.readDataEntry(in)); + entries.add(readPlainDataEntry(in)); + + return new DataRecord(entries, timeStamp); + + case MVCC_DATA_RECORD: + entryCnt = in.readInt(); + timeStamp = in.readLong(); + + entries = new ArrayList<>(entryCnt); + + for (int i = 0; i < entryCnt; i++) + entries.add(readMvccDataEntry(in)); + + return new MvccDataRecord(entries, timeStamp); + + case ENCRYPTED_DATA_RECORD: + entryCnt = in.readInt(); + timeStamp = in.readLong(); + + entries = new ArrayList<>(entryCnt); + + for (int i = 0; i < entryCnt; i++) + entries.add(readEncryptedDataEntry(in)); return new DataRecord(entries, timeStamp); + case SNAPSHOT: long snpId = in.readLong(); byte full = in.readByte(); @@ -147,16 +188,18 @@ public RecordDataV2Serializer(RecordDataV1Serializer delegateSerializer) { return new ExchangeRecord(constId, ExchangeRecord.Type.values()[idx], ts); case TX_RECORD: - return txRecordSerializer.read(in); + return txRecordSerializer.readTx(in); + + case MVCC_TX_RECORD: + return txRecordSerializer.readMvccTx(in); default: - return delegateSerializer.readRecord(type, in); + return super.readPlainRecord(type, in, encrypted); } - } /** {@inheritDoc} */ - @Override public void writeRecord(WALRecord rec, ByteBuffer buf) throws IgniteCheckedException { + @Override protected void writePlainRecord(WALRecord rec, ByteBuffer buf) throws IgniteCheckedException { if (rec instanceof HeaderRecord) throw new UnsupportedOperationException("Writing header records is forbidden since version 2 of serializer"); @@ -187,14 +230,21 @@ public RecordDataV2Serializer(RecordDataV1Serializer delegateSerializer) { break; + case MVCC_DATA_RECORD: case DATA_RECORD: DataRecord dataRec = (DataRecord)rec; buf.putInt(dataRec.writeEntries().size()); buf.putLong(dataRec.timestamp()); - for 
(DataEntry dataEntry : dataRec.writeEntries()) - RecordDataV1Serializer.putDataEntry(buf, dataEntry); + boolean encrypted = isDataRecordEncrypted(dataRec); + + for (DataEntry dataEntry : dataRec.writeEntries()) { + if (encrypted) + putEncryptedDataEntry(buf, dataEntry); + else + putPlainDataEntry(buf, dataEntry); + } break; @@ -220,11 +270,118 @@ public RecordDataV2Serializer(RecordDataV1Serializer delegateSerializer) { break; + case MVCC_TX_RECORD: + txRecordSerializer.write((MvccTxRecord)rec, buf); + + break; + default: - delegateSerializer.writeRecord(rec, buf); + super.writePlainRecord(rec, buf); } } + /** {@inheritDoc} */ + @Override void putPlainDataEntry(ByteBuffer buf, DataEntry entry) throws IgniteCheckedException { + if (entry instanceof MvccDataEntry) + putMvccDataEntry(buf, (MvccDataEntry)entry); + else + super.putPlainDataEntry(buf, entry); + } + + /** + * @param buf Buffer to write to. + * @param entry Data entry. + */ + private void putMvccDataEntry(ByteBuffer buf, MvccDataEntry entry) throws IgniteCheckedException { + super.putPlainDataEntry(buf, entry); + + txRecordSerializer.putMvccVersion(buf, entry.mvccVer()); + } + + /** + * @param in Input to read from. + * @return Read entry. 
+ */ + private MvccDataEntry readMvccDataEntry(ByteBufferBackedDataInput in) throws IOException, IgniteCheckedException { + int cacheId = in.readInt(); + + int keySize = in.readInt(); + byte keyType = in.readByte(); + byte[] keyBytes = new byte[keySize]; + in.readFully(keyBytes); + + int valSize = in.readInt(); + + byte valType = 0; + byte[] valBytes = null; + + if (valSize >= 0) { + valType = in.readByte(); + valBytes = new byte[valSize]; + in.readFully(valBytes); + } + + byte ord = in.readByte(); + + GridCacheOperation op = GridCacheOperation.fromOrdinal(ord & 0xFF); + + GridCacheVersion nearXidVer = readVersion(in, true); + GridCacheVersion writeVer = readVersion(in, false); + + int partId = in.readInt(); + long partCntr = in.readLong(); + long expireTime = in.readLong(); + + MvccVersion mvccVer = txRecordSerializer.readMvccVersion(in); + + GridCacheContext cacheCtx = cctx.cacheContext(cacheId); + + if (cacheCtx != null) { + CacheObjectContext coCtx = cacheCtx.cacheObjectContext(); + + KeyCacheObject key = co.toKeyCacheObject(coCtx, keyType, keyBytes); + + if (key.partition() == -1) + key.partition(partId); + + CacheObject val = valBytes != null ? co.toCacheObject(coCtx, valType, valBytes) : null; + + return new MvccDataEntry( + cacheId, + key, + val, + op, + nearXidVer, + writeVer, + expireTime, + partId, + partCntr, + mvccVer + ); + } + else + return new LazyMvccDataEntry( + cctx, + cacheId, + keyType, + keyBytes, + valType, + valBytes, + op, + nearXidVer, + writeVer, + expireTime, + partId, + partCntr, + mvccVer); + } + + /** {@inheritDoc} */ + @Override protected int entrySize(DataEntry entry) throws IgniteCheckedException { + return super.entrySize(entry) + + /*mvcc version*/ ((entry instanceof MvccDataEntry) ? (8 + 8 + 4) : 0); + } + /** * @param buf Buffer to read from. * @return Read map. 
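The RecordDataV2Serializer changes above append a fixed-width MVCC version to each MVCC data entry: the overridden entrySize adds 8 + 8 + 4 bytes, and putMvccDataEntry/readMvccDataEntry delegate to the tx serializer's putMvccVersion/readMvccVersion, which write coordinator version and counter as longs followed by the operation counter as an int. A minimal standalone sketch of that round trip over a plain ByteBuffer (class and method names here are this sketch's own, not Ignite API):

```java
import java.nio.ByteBuffer;

/**
 * Illustrative codec for the fixed-width MVCC version layout used by the WAL
 * serializers: 8 bytes coordinator version + 8 bytes counter + 4 bytes
 * operation counter, written and read in that order.
 */
public class MvccVersionCodec {
    /** Mirrors the serializer's MVCC_VERSION_SIZE constant: 8 + 8 + 4 = 20 bytes. */
    public static final int MVCC_VERSION_SIZE = 8 + 8 + 4;

    /** Writes the three fields in serializer order. */
    public static void put(ByteBuffer buf, long coordVer, long cntr, int opCntr) {
        buf.putLong(coordVer);
        buf.putLong(cntr);
        buf.putInt(opCntr);
    }

    /** Reads the fields back in the same order; returns {coordVer, cntr, opCntr}. */
    public static long[] get(ByteBuffer buf) {
        return new long[] {buf.getLong(), buf.getLong(), buf.getInt()};
    }

    /** Encodes then decodes one version into a freshly allocated 20-byte buffer. */
    public static long[] roundTrip(long coordVer, long cntr, int opCntr) {
        ByteBuffer buf = ByteBuffer.allocate(MVCC_VERSION_SIZE);

        put(buf, coordVer, cntr, opCntr);

        buf.flip();

        return get(buf);
    }

    public static void main(String[] args) {
        long[] v = roundTrip(42L, 7L, 3);

        System.out.println("coordVer=" + v[0] + ", cntr=" + v[1] + ", opCntr=" + v[2]);
    }
}
```

Because the layout is fixed-width, readers can call ensure(MVCC_VERSION_SIZE) before decoding, and size accounting for an MVCC record is a constant addition per entry, which is exactly how the entrySize override above computes it.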
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactory.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactory.java index f46c315426516..afc4e9462219c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactory.java @@ -28,6 +28,8 @@ * Factory for creating {@link RecordSerializer}. */ public interface RecordSerializerFactory { + /** Latest serializer version to use. */ + static final int LATEST_SERIALIZER_VERSION = 2; /** * Factory method for creation {@link RecordSerializer}. * diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactoryImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactoryImpl.java index 2e2e2f8cbe942..96b78e664cc83 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactoryImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordSerializerFactoryImpl.java @@ -22,6 +22,7 @@ import org.apache.ignite.internal.pagemem.wal.record.WALRecord; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.lang.IgniteBiPredicate; +import org.jetbrains.annotations.Nullable; /** * @@ -34,7 +35,7 @@ public class RecordSerializerFactoryImpl implements RecordSerializerFactory { private boolean needWritePointer; /** Read record filter. */ - private IgniteBiPredicate recordDeserializeFilter; + private @Nullable IgniteBiPredicate recordDeserializeFilter; /** * Marshalled mode flag. 
@@ -56,7 +57,7 @@ public RecordSerializerFactoryImpl(GridCacheSharedContext cctx) { */ public RecordSerializerFactoryImpl( GridCacheSharedContext cctx, - IgniteBiPredicate readTypeFilter + @Nullable IgniteBiPredicate readTypeFilter ) { this.cctx = cctx; this.recordDeserializeFilter = readTypeFilter; @@ -77,11 +78,8 @@ public RecordSerializerFactoryImpl( recordDeserializeFilter); case 2: - RecordDataV2Serializer dataV2Serializer = new RecordDataV2Serializer( - new RecordDataV1Serializer(cctx)); - return new RecordV2Serializer( - dataV2Serializer, + new RecordDataV2Serializer(cctx), needWritePointer, marshalledMode, skipPositionCheck, @@ -117,7 +115,7 @@ public IgniteBiPredicate recordDeserializeFilt /** {@inheritDoc} */ @Override public RecordSerializerFactoryImpl recordDeserializeFilter( - IgniteBiPredicate readTypeFilter + @Nullable IgniteBiPredicate readTypeFilter ) { this.recordDeserializeFilter = readTypeFilter; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV1Serializer.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV1Serializer.java index afd770d8efecf..fa32c87709ee7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV1Serializer.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV1Serializer.java @@ -22,6 +22,7 @@ import java.io.IOException; import java.nio.ByteBuffer; import java.nio.ByteOrder; + import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.internal.pagemem.wal.WALPointer; @@ -32,6 +33,7 @@ import org.apache.ignite.internal.processors.cache.persistence.tree.io.CacheVersionIO; import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferBackedDataInput; import 
org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferExpander; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; import org.apache.ignite.internal.processors.cache.persistence.wal.io.FileInput; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentFileInputFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; @@ -39,7 +41,6 @@ import org.apache.ignite.internal.processors.cache.persistence.wal.io.SegmentIO; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SimpleFileInput; import org.apache.ignite.internal.processors.cache.persistence.wal.WalSegmentTailReachedException; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; import org.apache.ignite.internal.processors.cache.persistence.wal.record.HeaderRecord; import org.apache.ignite.internal.processors.cache.persistence.wal.serializer.io.RecordIO; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; @@ -143,7 +144,8 @@ public class RecordV1Serializer implements RecordSerializer { rec.position(ptr); - if (recordFilter != null && !recordFilter.apply(rec.type(), ptr)) + if (recType.purpose() != WALRecord.RecordPurpose.INTERNAL + && recordFilter != null && !recordFilter.apply(rec.type(), ptr)) return FilteredRecord.INSTANCE; else if (marshalledMode) { ByteBuffer buf = heapTlb.get(); @@ -170,7 +172,7 @@ else if (marshalledMode) { /** {@inheritDoc} */ @Override public void writeWithHeaders(WALRecord rec, ByteBuffer buf) throws IgniteCheckedException { // Write record type. - putRecordType(buf, rec); + putRecordType(buf, dataSerializer.recordType(rec)); // SWITCH_SEGMENT_RECORD should have only type, no need to write pointer. if (rec.type() == SWITCH_SEGMENT_RECORD) @@ -326,10 +328,10 @@ private static void putPositionOfRecord(ByteBuffer buf, WALRecord rec) { * Writes record type to given {@code buf}. 
* * @param buf Buffer to write record type. - * @param rec WAL record. + * @param type WAL record type. */ - static void putRecordType(ByteBuffer buf, WALRecord rec) { - buf.put((byte)(rec.type().ordinal() + 1)); + static void putRecordType(ByteBuffer buf, RecordType type) { + buf.put((byte)(type.ordinal() + 1)); } /** @@ -421,7 +423,7 @@ static void writeWithCrc(WALRecord rec, ByteBuffer buf, RecordIO writer) throws buf.position(startPos); // This call will move buffer position to the end of the record again. - int crcVal = PureJavaCrc32.calcCrc32(buf, curPos - startPos); + int crcVal = FastCrc.calcCrc(buf, curPos - startPos); buf.putInt(crcVal); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV2Serializer.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV2Serializer.java index e112522fc31f9..0d78d08291bc7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV2Serializer.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/RecordV2Serializer.java @@ -122,7 +122,8 @@ public class RecordV2Serializer implements RecordSerializer { ", expected pointer [idx=" + exp.index() + ", offset=" + exp.fileOffset() + "]"); } - if (recordFilter != null && !recordFilter.apply(recType, ptr)) { + if (recType.purpose() != WALRecord.RecordPurpose.INTERNAL + && recordFilter != null && !recordFilter.apply(recType, ptr)) { int toSkip = ptr.length() - REC_TYPE_SIZE - FILE_WAL_POINTER_SIZE - CRC_SIZE; assert toSkip >= 0 : "Too small saved record length: " + ptr; @@ -174,7 +175,7 @@ else if (marshalledMode) { ByteBuffer buf ) throws IgniteCheckedException { // Write record type. 
- RecordV1Serializer.putRecordType(buf, record); + RecordV1Serializer.putRecordType(buf, dataSerializer.recordType(record)); // SWITCH_SEGMENT_RECORD should have only type, no need to write pointer. if (record.type() == SWITCH_SEGMENT_RECORD) diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/TxRecordSerializer.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/TxRecordSerializer.java index 2f6d4c712bb80..47915fd88ec8c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/TxRecordSerializer.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/wal/serializer/TxRecordSerializer.java @@ -23,7 +23,10 @@ import java.util.Collection; import java.util.Map; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.pagemem.wal.record.MvccTxRecord; import org.apache.ignite.internal.pagemem.wal.record.TxRecord; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersionImpl; import org.apache.ignite.internal.processors.cache.persistence.tree.io.CacheVersionIO; import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferBackedDataInput; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; @@ -34,14 +37,45 @@ * {@link TxRecord} WAL serializer. */ public class TxRecordSerializer { + /** Mvcc version record size. */ + static final int MVCC_VERSION_SIZE = 8 + 8 + 4; + + /** + * Reads {@link MvccVersion} from given input. + * + * @param in Data input to read from. + * @return Mvcc version. 
+ */ + public MvccVersion readMvccVersion(ByteBufferBackedDataInput in) throws IOException { + in.ensure(MVCC_VERSION_SIZE); + + long coordVer = in.readLong(); + long cntr = in.readLong(); + int opCntr = in.readInt(); + + return new MvccVersionImpl(coordVer, cntr, opCntr); + } + + /** + * Writes {@link MvccVersion} to given buffer. + * + * @param buf Buffer to write. + * @param mvccVer Mvcc version. + */ + public void putMvccVersion(ByteBuffer buf, MvccVersion mvccVer) { + buf.putLong(mvccVer.coordinatorVersion()); + buf.putLong(mvccVer.counter()); + + buf.putInt(mvccVer.operationCounter()); + } + /** * Writes {@link TxRecord} to given buffer. * * @param rec TxRecord. * @param buf Byte buffer. - * @throws IgniteCheckedException In case of fail. */ - public void write(TxRecord rec, ByteBuffer buf) throws IgniteCheckedException { + public void write(TxRecord rec, ByteBuffer buf) { buf.put((byte)rec.state().ordinal()); RecordV1Serializer.putVersion(buf, rec.nearXidVersion(), true); RecordV1Serializer.putVersion(buf, rec.writeVersion(), true); @@ -74,9 +108,8 @@ public void write(TxRecord rec, ByteBuffer buf) throws IgniteCheckedException { * @param in Input * @return TxRecord. * @throws IOException In case of fail. - * @throws IgniteCheckedException In case of fail. */ - public TxRecord read(ByteBufferBackedDataInput in) throws IOException, IgniteCheckedException { + public TxRecord readTx(ByteBufferBackedDataInput in) throws IOException { byte txState = in.readByte(); TransactionState state = TransactionState.fromOrdinal(txState); @@ -113,9 +146,8 @@ public TxRecord read(ByteBufferBackedDataInput in) throws IOException, IgniteChe * * @param rec TxRecord. * @return Size of TxRecord in bytes. - * @throws IgniteCheckedException In case of fail. */ - public int size(TxRecord rec) throws IgniteCheckedException { + public int size(TxRecord rec) { int size = 0; size += /* transaction state. 
*/ 1; @@ -140,4 +172,92 @@ public int size(TxRecord rec) throws IgniteCheckedException { return size; } + + /** + * Reads {@link MvccTxRecord} from given input. + * + * @param in Input + * @return MvccTxRecord. + * @throws IOException In case of fail. + */ + public MvccTxRecord readMvccTx(ByteBufferBackedDataInput in) throws IOException { + byte txState = in.readByte(); + TransactionState state = TransactionState.fromOrdinal(txState); + + GridCacheVersion nearXidVer = RecordV1Serializer.readVersion(in, true); + GridCacheVersion writeVer = RecordV1Serializer.readVersion(in, true); + MvccVersion mvccVer = readMvccVersion(in); + + int participatingNodesSize = in.readInt(); + + Map> participatingNodes = U.newHashMap(participatingNodesSize); + + for (int i = 0; i < participatingNodesSize; i++) { + short primaryNode = in.readShort(); + + int backupNodesSize = in.readInt(); + + Collection backupNodes = new ArrayList<>(backupNodesSize); + + for (int j = 0; j < backupNodesSize; j++) { + short backupNode = in.readShort(); + + backupNodes.add(backupNode); + } + + participatingNodes.put(primaryNode, backupNodes); + } + + long ts = in.readLong(); + + return new MvccTxRecord(state, nearXidVer, writeVer, participatingNodes, mvccVer, ts); + } + + /** + * Writes {@link MvccTxRecord} to given buffer. + * + * @param rec MvccTxRecord. + * @param buf Byte buffer. + * @throws IgniteCheckedException In case of fail. 
+ */ + public void write(MvccTxRecord rec, ByteBuffer buf) throws IgniteCheckedException { + buf.put((byte)rec.state().ordinal()); + + RecordV1Serializer.putVersion(buf, rec.nearXidVersion(), true); + RecordV1Serializer.putVersion(buf, rec.writeVersion(), true); + putMvccVersion(buf, rec.mvccVersion()); + + Map> participatingNodes = rec.participatingNodes(); + + if (participatingNodes != null && !participatingNodes.isEmpty()) { + buf.putInt(participatingNodes.size()); + + for (Map.Entry> e : participatingNodes.entrySet()) { + buf.putShort(e.getKey()); + + Collection backupNodes = e.getValue(); + + buf.putInt(backupNodes.size()); + + for (short backupNode : backupNodes) + buf.putShort(backupNode); + } + } + else + buf.putInt(0); // Put zero size of participating nodes. + + buf.putLong(rec.timestamp()); + } + + + /** + * Returns size of marshalled {@link TxRecord} in bytes. + * + * @param rec TxRecord. + * @return Size of TxRecord in bytes. + * @throws IgniteCheckedException In case of fail. 
+ */ + public int size(MvccTxRecord rec) throws IgniteCheckedException { + return size((TxRecord)rec) + MVCC_VERSION_SIZE; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryAdapter.java index 0e3ab43c88173..ce9a175cf489f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryAdapter.java @@ -864,7 +864,7 @@ private void retryIfPossible(IgniteCheckedException e) { assert waitVer != null; - retryFut = cctx.affinity().affinityReadyFuture(waitVer); + retryFut = cctx.shared().exchange().affinityReadyFuture(waitVer); } else if (e.hasCause(ClusterTopologyCheckedException.class)) { ClusterTopologyCheckedException topEx = X.cause(e, ClusterTopologyCheckedException.class); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryManager.java index 6134bd47546e5..e723bdcba5c46 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryManager.java @@ -172,7 +172,7 @@ public abstract class GridCacheQueryManager extends GridCacheManagerAdapte GridCacheQueryDetailMetricsAdapter> QRY_DETAIL_METRICS_MERGE_FX = GridCacheQueryDetailMetricsAdapter::aggregate; - /** Default is @{code true} */ + /** */ private final boolean isIndexingSpiAllowsBinary = !IgniteSystemProperties.getBoolean(IgniteSystemProperties.IGNITE_UNWRAP_BINARY_FOR_INDEXING_SPI); /** */ @@ -3011,7 +3011,7 @@ private void advance() { val = entry.peek(true, true, topVer, expiryPlc); - 
entry.touch(topVer); + entry.touch(); break; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryRequest.java index b7205b6e17bbb..21c636334e121 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryRequest.java @@ -513,128 +513,128 @@ public int taskHash() { } switch (writer.state()) { - case 3: - if (!writer.writeBoolean("all", all)) - return false; - - writer.incrementState(); - case 4: - if (!writer.writeByteArray("argsBytes", argsBytes)) + if (!writer.writeBoolean("all", all)) return false; writer.incrementState(); case 5: - if (!writer.writeString("cacheName", cacheName)) + if (!writer.writeByteArray("argsBytes", argsBytes)) return false; writer.incrementState(); case 6: - if (!writer.writeBoolean("cancel", cancel)) + if (!writer.writeString("cacheName", cacheName)) return false; writer.incrementState(); case 7: - if (!writer.writeString("clause", clause)) + if (!writer.writeBoolean("cancel", cancel)) return false; writer.incrementState(); case 8: - if (!writer.writeString("clsName", clsName)) + if (!writer.writeString("clause", clause)) return false; writer.incrementState(); case 9: - if (!writer.writeBoolean("fields", fields)) + if (!writer.writeString("clsName", clsName)) return false; writer.incrementState(); case 10: - if (!writer.writeLong("id", id)) + if (!writer.writeBoolean("fields", fields)) return false; writer.incrementState(); case 11: - if (!writer.writeBoolean("incBackups", incBackups)) + if (!writer.writeLong("id", id)) return false; writer.incrementState(); case 12: - if (!writer.writeBoolean("incMeta", incMeta)) + if (!writer.writeBoolean("incBackups", incBackups)) return false; writer.incrementState(); case 13: - if 
(!writer.writeBoolean("keepBinary", keepBinary)) + if (!writer.writeBoolean("incMeta", incMeta)) return false; writer.incrementState(); case 14: - if (!writer.writeByteArray("keyValFilterBytes", keyValFilterBytes)) + if (!writer.writeBoolean("keepBinary", keepBinary)) return false; writer.incrementState(); case 15: - if (!writer.writeInt("pageSize", pageSize)) + if (!writer.writeByteArray("keyValFilterBytes", keyValFilterBytes)) return false; writer.incrementState(); case 16: - if (!writer.writeInt("part", part)) + if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) return false; writer.incrementState(); case 17: - if (!writer.writeByteArray("rdcBytes", rdcBytes)) + if (!writer.writeInt("pageSize", pageSize)) return false; writer.incrementState(); case 18: - if (!writer.writeUuid("subjId", subjId)) + if (!writer.writeInt("part", part)) return false; writer.incrementState(); case 19: - if (!writer.writeInt("taskHash", taskHash)) + if (!writer.writeByteArray("rdcBytes", rdcBytes)) return false; writer.incrementState(); case 20: - if (!writer.writeMessage("topVer", topVer)) + if (!writer.writeUuid("subjId", subjId)) return false; writer.incrementState(); case 21: - if (!writer.writeByteArray("transBytes", transBytes)) + if (!writer.writeInt("taskHash", taskHash)) return false; writer.incrementState(); case 22: - if (!writer.writeByte("type", type != null ? (byte)type.ordinal() : -1)) + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); case 23: - if (!writer.writeMessage("mvccSnapshot", mvccSnapshot)) + if (!writer.writeByteArray("transBytes", transBytes)) + return false; + + writer.incrementState(); + + case 24: + if (!writer.writeByte("type", type != null ? 
(byte)type.ordinal() : -1)) return false; writer.incrementState(); @@ -655,7 +655,7 @@ public int taskHash() { return false; switch (reader.state()) { - case 3: + case 4: all = reader.readBoolean("all"); if (!reader.isLastRead()) @@ -663,7 +663,7 @@ public int taskHash() { reader.incrementState(); - case 4: + case 5: argsBytes = reader.readByteArray("argsBytes"); if (!reader.isLastRead()) @@ -671,7 +671,7 @@ public int taskHash() { reader.incrementState(); - case 5: + case 6: cacheName = reader.readString("cacheName"); if (!reader.isLastRead()) @@ -679,7 +679,7 @@ public int taskHash() { reader.incrementState(); - case 6: + case 7: cancel = reader.readBoolean("cancel"); if (!reader.isLastRead()) @@ -687,7 +687,7 @@ public int taskHash() { reader.incrementState(); - case 7: + case 8: clause = reader.readString("clause"); if (!reader.isLastRead()) @@ -695,7 +695,7 @@ public int taskHash() { reader.incrementState(); - case 8: + case 9: clsName = reader.readString("clsName"); if (!reader.isLastRead()) @@ -703,7 +703,7 @@ public int taskHash() { reader.incrementState(); - case 9: + case 10: fields = reader.readBoolean("fields"); if (!reader.isLastRead()) @@ -711,7 +711,7 @@ public int taskHash() { reader.incrementState(); - case 10: + case 11: id = reader.readLong("id"); if (!reader.isLastRead()) @@ -719,7 +719,7 @@ public int taskHash() { reader.incrementState(); - case 11: + case 12: incBackups = reader.readBoolean("incBackups"); if (!reader.isLastRead()) @@ -727,7 +727,7 @@ public int taskHash() { reader.incrementState(); - case 12: + case 13: incMeta = reader.readBoolean("incMeta"); if (!reader.isLastRead()) @@ -735,7 +735,7 @@ public int taskHash() { reader.incrementState(); - case 13: + case 14: keepBinary = reader.readBoolean("keepBinary"); if (!reader.isLastRead()) @@ -743,7 +743,7 @@ public int taskHash() { reader.incrementState(); - case 14: + case 15: keyValFilterBytes = reader.readByteArray("keyValFilterBytes"); if (!reader.isLastRead()) @@ -751,7 +751,15 @@ 
public int taskHash() { reader.incrementState(); - case 15: + case 16: + mvccSnapshot = reader.readMessage("mvccSnapshot"); + + if (!reader.isLastRead()) + return false; + + reader.incrementState(); + + case 17: pageSize = reader.readInt("pageSize"); if (!reader.isLastRead()) @@ -759,7 +767,7 @@ public int taskHash() { reader.incrementState(); - case 16: + case 18: part = reader.readInt("part"); if (!reader.isLastRead()) @@ -767,7 +775,7 @@ public int taskHash() { reader.incrementState(); - case 17: + case 19: rdcBytes = reader.readByteArray("rdcBytes"); if (!reader.isLastRead()) @@ -775,7 +783,7 @@ public int taskHash() { reader.incrementState(); - case 18: + case 20: subjId = reader.readUuid("subjId"); if (!reader.isLastRead()) @@ -783,7 +791,7 @@ public int taskHash() { reader.incrementState(); - case 19: + case 21: taskHash = reader.readInt("taskHash"); if (!reader.isLastRead()) @@ -791,15 +799,15 @@ public int taskHash() { reader.incrementState(); - case 20: - topVer = reader.readMessage("topVer"); + case 22: + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; reader.incrementState(); - case 21: + case 23: transBytes = reader.readByteArray("transBytes"); if (!reader.isLastRead()) @@ -807,7 +815,7 @@ public int taskHash() { reader.incrementState(); - case 22: + case 24: byte typeOrd; typeOrd = reader.readByte("type"); @@ -819,14 +827,6 @@ public int taskHash() { reader.incrementState(); - case 23: - mvccSnapshot = reader.readMessage("mvccSnapshot"); - - if (!reader.isLastRead()) - return false; - - reader.incrementState(); - } return reader.afterMessageRead(GridCacheQueryRequest.class); @@ -839,7 +839,7 @@ public int taskHash() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 24; + return 25; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryResponse.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryResponse.java index 13e0915875b92..a1650be162c46 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryResponse.java @@ -287,37 +287,37 @@ public boolean fields() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeCollection("dataBytes", dataBytes, MessageCollectionItemType.BYTE_ARR)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeByteArray("errBytes", errBytes)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeBoolean("fields", fields)) return false; writer.incrementState(); - case 6: + case 7: if (!writer.writeBoolean("finished", finished)) return false; writer.incrementState(); - case 7: + case 8: if (!writer.writeCollection("metaDataBytes", metaDataBytes, MessageCollectionItemType.BYTE_ARR)) return false; writer.incrementState(); - case 8: + case 9: if (!writer.writeLong("reqId", reqId)) return false; @@ -339,7 +339,7 @@ public boolean fields() { return false; switch (reader.state()) { - case 3: + case 4: dataBytes = reader.readCollection("dataBytes", MessageCollectionItemType.BYTE_ARR); if (!reader.isLastRead()) @@ -347,7 +347,7 @@ public boolean fields() { reader.incrementState(); - case 4: + case 5: errBytes = reader.readByteArray("errBytes"); if (!reader.isLastRead()) @@ -355,7 +355,7 @@ public boolean fields() { reader.incrementState(); - case 5: + case 6: fields = reader.readBoolean("fields"); if (!reader.isLastRead()) @@ -363,7 +363,7 @@ public boolean fields() { reader.incrementState(); - case 6: + case 7: finished = reader.readBoolean("finished"); if (!reader.isLastRead()) @@ -371,7 +371,7 @@ public boolean fields() { reader.incrementState(); - case 7: + case 8: metaDataBytes = reader.readCollection("metaDataBytes", 
MessageCollectionItemType.BYTE_ARR); if (!reader.isLastRead()) @@ -379,7 +379,7 @@ public boolean fields() { reader.incrementState(); - case 8: + case 9: reqId = reader.readLong("reqId"); if (!reader.isLastRead()) @@ -399,7 +399,7 @@ public boolean fields() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 9; + return 10; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/IgniteQueryErrorCode.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/IgniteQueryErrorCode.java index 5dab5fd0d4b98..afbeb5a884c5a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/IgniteQueryErrorCode.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/IgniteQueryErrorCode.java @@ -80,6 +80,9 @@ public final class IgniteQueryErrorCode { /** Conversion failure. */ public final static int CONVERSION_FAILED = 3013; + /** Query canceled. */ + public static final int QUERY_CANCELED = 3014; + /* 4xxx - cache related runtime errors */ /** Attempt to INSERT a key that is already in cache. */ @@ -106,6 +109,12 @@ public final class IgniteQueryErrorCode { /** Attempt to INSERT, UPDATE or MERGE value that exceed maximum column length. */ public final static int TOO_LONG_VALUE = 4008; + /** Attempt to INSERT, UPDATE or MERGE value which scale exceed maximum DECIMAL column scale. */ + public static final int VALUE_SCALE_OUT_OF_RANGE = 4009; + + /** Attempt to INSERT, UPDATE or MERGE value which scale exceed maximum DECIMAL column scale. */ + public static final int KEY_SCALE_OUT_OF_RANGE = 4010; + /* 5xxx - transactions related runtime errors. */ /** Transaction is already open. */ @@ -120,6 +129,9 @@ public final class IgniteQueryErrorCode { /** Transaction is already completed. */ public final static int TRANSACTION_COMPLETED = 5004; + /** Transaction serialization error. 
*/ + public final static int TRANSACTION_SERIALIZATION_ERROR = 5005; + /** */ private IgniteQueryErrorCode() { // No-op. @@ -148,6 +160,8 @@ public static String codeToSqlState(int statusCode) { case DUPLICATE_KEY: case TOO_LONG_KEY: case TOO_LONG_VALUE: + case KEY_SCALE_OUT_OF_RANGE: + case VALUE_SCALE_OUT_OF_RANGE: return SqlStateCode.CONSTRAINT_VIOLATION; case NULL_KEY: @@ -179,6 +193,12 @@ public static String codeToSqlState(int statusCode) { case TRANSACTION_COMPLETED: return SqlStateCode.TRANSACTION_STATE_EXCEPTION; + case TRANSACTION_SERIALIZATION_ERROR: + return SqlStateCode.SERIALIZATION_FAILURE; + + case QUERY_CANCELED: + return SqlStateCode.QUERY_CANCELLED; + default: return SqlStateCode.INTERNAL_ERROR; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryBatchAck.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryBatchAck.java index ef0157ee70caa..9ba9ad2d1cd28 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryBatchAck.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryBatchAck.java @@ -90,15 +90,14 @@ Map updateCntrs() { } switch (writer.state()) { - case 3: + case 4: if (!writer.writeUuid("routineId", routineId)) return false; writer.incrementState(); - case 4: - if (!writer.writeMap("updateCntrs", updateCntrs, MessageCollectionItemType.INT, - MessageCollectionItemType.LONG)) + case 5: + if (!writer.writeMap("updateCntrs", updateCntrs, MessageCollectionItemType.INT, MessageCollectionItemType.LONG)) return false; writer.incrementState(); @@ -119,7 +118,7 @@ Map updateCntrs() { return false; switch (reader.state()) { - case 3: + case 4: routineId = reader.readUuid("routineId"); if (!reader.isLastRead()) @@ -127,9 +126,8 @@ Map updateCntrs() { reader.incrementState(); - case 4: - 
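The new `codeToSqlState` branches above route the added error codes to standard SQLSTATE classes. A minimal sketch of that mapping — the constant names mirror the patch, but the SQLSTATE string values here are assumptions based on common convention (40001 = serialization failure, 57014 = query canceled), not copied from `SqlStateCode`:

```java
// Hypothetical condensed version of codeToSqlState(...) for the
// codes added in this patch. SQLSTATE strings are assumed values.
public class QueryErrorCodes {
    public static final int QUERY_CANCELED = 3014;
    public static final int TRANSACTION_SERIALIZATION_ERROR = 5005;

    public static String codeToSqlState(int code) {
        switch (code) {
            case TRANSACTION_SERIALIZATION_ERROR:
                return "40001"; // serialization failure: client may retry the tx
            case QUERY_CANCELED:
                return "57014"; // statement was canceled
            default:
                return "50000"; // generic internal error
        }
    }
}
```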
updateCntrs = reader.readMap("updateCntrs", MessageCollectionItemType.INT, - MessageCollectionItemType.LONG, false); + case 5: + updateCntrs = reader.readMap("updateCntrs", MessageCollectionItemType.INT, MessageCollectionItemType.LONG, false); if (!reader.isLastRead()) return false; @@ -153,7 +151,7 @@ Map updateCntrs() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 5; + return 6; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEntry.java index 88005d047933f..b05c275451ffe 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEntry.java @@ -411,7 +411,7 @@ CacheObject oldValue() { writer.incrementState(); case 8: - if (!writer.writeMessage("topVer", topVer)) + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -500,7 +500,7 @@ CacheObject oldValue() { reader.incrementState(); case 8: - topVer = reader.readMessage("topVer"); + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryHandler.java index 307a4eabfdae6..ade360a1d60d3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryHandler.java @@ -144,7 +144,7 @@ public 
class CacheContinuousQueryHandler implements GridContinuousHandler private transient boolean skipPrimaryCheck; /** */ - private boolean locCache; + private transient boolean locOnly; /** */ private boolean keepBinary; @@ -247,10 +247,10 @@ public void notifyExisting(boolean notifyExisting) { } /** - * @param locCache Local cache. + * @param locOnly Local only. */ - public void localCache(boolean locCache) { - this.locCache = locCache; + public void localOnly(boolean locOnly) { + this.locOnly = locOnly; } /** @@ -514,7 +514,7 @@ public void keepBinary(boolean keepBinary) { skipCtx = new CounterSkipContext(part, cntr, topVer); if (loc) { - assert !locCache; + assert !locOnly; final Collection> evts = handleEvent(ctx, skipCtx.entry()); @@ -583,6 +583,10 @@ public void keepBinary(boolean keepBinary) { private String taskName() { return ctx.security().enabled() ? ctx.task().resolveTaskName(taskHash) : null; } + + @Override public boolean isPrimaryOnly() { + return locOnly && !skipPrimaryCheck; + } }; CacheContinuousQueryManager mgr = manager(ctx); @@ -641,7 +645,7 @@ void waitTopologyFuture(GridKernalContext ctx) throws IgniteCheckedException { if (!cctx.isLocal()) { AffinityTopologyVersion topVer = initTopVer; - cacheContext(ctx).affinity().affinityReadyFuture(topVer).get(); + cacheContext(ctx).shared().exchange().affinityReadyFuture(topVer).get(); for (int partId = 0; partId < cacheContext(ctx).affinity().partitions(); partId++) getOrCreatePartitionRecovery(ctx, partId, topVer); @@ -860,7 +864,7 @@ private void onEntryUpdate(CacheContinuousQueryEvent evt, boolean notify, boolea IgniteClosure, ?> trans = getTransformer(); if (loc) { - if (!locCache) { + if (!locOnly) { Collection> evts = handleEvent(ctx, entry); notifyLocalListener(evts, trans); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryListener.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryListener.java
index 7da657fe399dd..029b6d8dca73b 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryListener.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryListener.java
@@ -18,6 +18,7 @@
 package org.apache.ignite.internal.processors.cache.query.continuous;
 
 import java.util.Map;
+import org.apache.ignite.cache.query.ContinuousQuery;
 import org.apache.ignite.internal.GridKernalContext;
 import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
 import org.apache.ignite.internal.processors.cache.GridCacheContext;
@@ -114,4 +115,9 @@ public void onEntryUpdated(CacheContinuousQueryEvent evt, boolean primary,
      * @return Whether to notify on existing entries.
      */
     public boolean notifyExisting();
-}
\ No newline at end of file
+
+    /**
+     * @return {@code True} if this listener should be called on events on primary partitions only.
+     */
+    public boolean isPrimaryOnly();
+}
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryManager.java
index ab60f47deacb5..4c399bf8cb3f8 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryManager.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryManager.java
@@ -58,15 +58,18 @@
 import org.apache.ignite.internal.NodeStoppingException;
 import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
 import org.apache.ignite.internal.processors.cache.CacheObject;
+import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.GridCacheEntryEx;
 import org.apache.ignite.internal.processors.cache.GridCacheManagerAdapter;
 import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
 import org.apache.ignite.internal.processors.cache.KeyCacheObject;
 import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture;
 import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow;
+import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx;
 import org.apache.ignite.internal.processors.continuous.GridContinuousHandler;
-import org.apache.ignite.internal.util.StripedCompositeReadWriteLock;
 import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor;
+import org.apache.ignite.internal.util.GridLongList;
+import org.apache.ignite.internal.util.StripedCompositeReadWriteLock;
 import org.apache.ignite.internal.util.tostring.GridToStringInclude;
 import org.apache.ignite.internal.util.typedef.CI2;
 import org.apache.ignite.internal.util.typedef.F;
@@ -209,6 +212,16 @@ public Lock
getListenerReadLock() { return listenerLock.readLock(); } + /** + * @param tx Transaction. + * @return {@code True} if should notify continuous query manager. + */ + public boolean notifyContinuousQueries(@Nullable IgniteInternalTx tx) { + return cctx.isLocal() || + cctx.isReplicated() || + (!cctx.isNear() && !(tx != null && tx.onePhaseCommit() && !tx.local())); + } + /** * @param lsnrs Listeners to notify. * @param key Entry key. @@ -263,6 +276,37 @@ private void skipUpdateEvent(Map lsnrs, return skipCtx; } + /** + * For cache updates in shared cache group need notify others caches CQ listeners + * that generated counter should be skipped. + * + * @param cctx Cache context. + * @param part Partition. + * @param topVer Topology version. + * @param gaps Even-length array of pairs [start, end] for each gap. + */ + @Nullable public void closeBackupUpdateCountersGaps(GridCacheContext cctx, + int part, + AffinityTopologyVersion topVer, + GridLongList gaps) { + assert gaps != null && gaps.size() % 2 == 0; + + for (int i = 0; i < gaps.size() / 2; i++) { + long gapStart = gaps.get(i * 2); + long gapStop = gaps.get(i * 2 + 1); + + /* + * No user listeners should be called by this invocation. In the common case of partitioned cache or + * replicated cache with non-local-only listener gaps (dummy filtered CQ events) will be added to the + * backup queue without passing it to any listener. In the special case of local-only listener on + * replicated cache there is no backup queues used at all and therefore no gaps occur - all unfiltered + * events are passed to listeners upon arrive. + */ + for (long cntr = gapStart; cntr <= gapStop; cntr++) + skipUpdateEvent(lsnrs, null, part, cntr, false, topVer); + } + } + /** * @param internal Internal entry flag (internal key or not user cache). * @param preload Whether update happened during preloading. 
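The `closeBackupUpdateCountersGaps` method above walks a `GridLongList` laid out as an even-length sequence of `[start, end]` pairs, skipping every counter inside each inclusive range. The iteration can be sketched with a plain `long[]` and an invented helper (no Ignite types):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the gap-walking loop: the input is an even-length array
// of [start, end] pairs, and every counter in each inclusive range
// is processed individually (standing in for skipUpdateEvent(...)).
public class CounterGaps {
    public static List<Long> expand(long[] gaps) {
        assert gaps.length % 2 == 0 : "even-length array of [start, end] pairs";

        List<Long> skipped = new ArrayList<>();

        for (int i = 0; i < gaps.length / 2; i++) {
            long start = gaps[i * 2];
            long end = gaps[i * 2 + 1];

            for (long cntr = start; cntr <= end; cntr++)
                skipped.add(cntr);
        }

        return skipped;
    }
}
```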
@@ -374,7 +418,7 @@ public void onEntryUpdated( boolean recordIgniteEvt = primary && !internal && cctx.events().isRecordable(EVT_CACHE_QUERY_OBJECT_READ); for (CacheContinuousQueryListener lsnr : lsnrCol.values()) { - if (preload && !lsnr.notifyExisting()) + if (preload && !lsnr.notifyExisting() || lsnr.isPrimaryOnly() && !primary) continue; if (!initialized) { @@ -395,7 +439,7 @@ public void onEntryUpdated( cctx.cacheId(), evtType, key, - evtType == REMOVED && lsnr.oldValueRequired() ? oldVal : newVal, + (!internal && evtType == REMOVED && lsnr.oldValueRequired()) ? oldVal : newVal, lsnr.oldValueRequired() ? oldVal : null, lsnr.keepBinary(), partId, @@ -403,7 +447,7 @@ public void onEntryUpdated( topVer, (byte)0); - IgniteCacheProxy jcache = cctx.kernalContext().cache().jcacheProxy(cctx.name()); + IgniteCacheProxy jcache = cctx.kernalContext().cache().jcacheProxy(cctx.name(), true); assert jcache != null : "Failed to get cache proxy [name=" + cctx.name() + ", locStart=" + cctx.startTopologyVersion() + @@ -496,11 +540,6 @@ public UUID executeQuery(@Nullable final CacheEntryUpdatedListener locLsnr, final boolean keepBinary, final boolean includeExpired) throws IgniteCheckedException { - //TODO IGNITE-7953 - if (cctx.transactionalSnapshot()) - throw new UnsupportedOperationException("Continuous queries are not supported for transactional caches " + - "when MVCC is enabled."); - IgniteOutClosure clsr; if (rmtTransFactory != null) { @@ -716,12 +755,14 @@ private UUID executeQuery0(CacheEntryUpdatedListener locLsnr, final CacheContinuousQueryHandler hnd = clsr.apply(); + boolean locOnly = cctx.isLocal() || loc; + hnd.taskNameHash(taskNameHash); hnd.skipPrimaryCheck(skipPrimaryCheck); hnd.notifyExisting(notifyExisting); hnd.internal(internal); hnd.keepBinary(keepBinary); - hnd.localCache(cctx.isLocal()); + hnd.localOnly(locOnly); IgnitePredicate pred = (loc || cctx.config().getCacheMode() == CacheMode.LOCAL) ? 
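The extended listener filter above, `preload && !lsnr.notifyExisting() || lsnr.isPrimaryOnly() && !primary`, relies on `&&` binding tighter than `||`: an event is skipped when it is a preload event the listener did not ask for, or when a primary-only listener sees a non-primary update. A standalone sketch of just that predicate:

```java
// The skip predicate from onEntryUpdated(...), isolated. Because &&
// binds tighter than ||, this reads as:
//   (preload && !notifyExisting) || (primaryOnly && !primary)
public class SkipPredicate {
    public static boolean skip(boolean preload, boolean notifyExisting,
        boolean primaryOnly, boolean primary) {
        return preload && !notifyExisting || primaryOnly && !primary;
    }
}
```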
F.nodeForNodeId(cctx.localNodeId()) : cctx.group().nodeFilter(); @@ -733,13 +774,13 @@ private UUID executeQuery0(CacheEntryUpdatedListener locLsnr, try { id = cctx.kernalContext().continuous().startRoutine( hnd, - internal && loc, + locOnly, bufSize, timeInterval, autoUnsubscribe, pred).get(); - if (hnd.isQuery() && cctx.userCache() && !onStart) + if (hnd.isQuery() && cctx.userCache() && !locOnly && !onStart) hnd.waitTopologyFuture(cctx.kernalContext()); } catch (NodeStoppingException e) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteInternalTx.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteInternalTx.java index 05ebe5d31496b..7588fb0d00e2e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteInternalTx.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteInternalTx.java @@ -641,6 +641,13 @@ public void completedVersions(GridCacheVersion base, */ public void commitError(Throwable e); + /** + * Returns label of transactions. + * + * @return Label of transaction or {@code null} if there was not set. + */ + @Nullable public String label(); + /** * @param mvccSnapshot Mvcc snapshot. 
*/ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxAdapter.java index cb52771fb70b4..9537ba9666ffa 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxAdapter.java @@ -17,6 +17,8 @@ package org.apache.ignite.internal.processors.cache.transactions; +import javax.cache.expiry.ExpiryPolicy; +import javax.cache.processor.EntryProcessor; import java.io.Externalizable; import java.io.IOException; import java.io.ObjectInput; @@ -37,8 +39,6 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; -import javax.cache.expiry.ExpiryPolicy; -import javax.cache.processor.EntryProcessor; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; @@ -49,7 +49,6 @@ import org.apache.ignite.internal.managers.discovery.ConsistentIdMapper; import org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager; import org.apache.ignite.internal.pagemem.wal.WALPointer; -import org.apache.ignite.internal.pagemem.wal.record.TxRecord; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheInvokeEntry; import org.apache.ignite.internal.processors.cache.CacheLazyEntry; @@ -63,7 +62,9 @@ import org.apache.ignite.internal.processors.cache.GridCacheReturn; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; 
import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; @@ -75,9 +76,9 @@ import org.apache.ignite.internal.processors.cache.version.GridCacheVersionConflictContext; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionedEntryEx; import org.apache.ignite.internal.processors.cluster.BaselineTopology; +import org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException; import org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException; import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException; -import org.apache.ignite.internal.util.GridIntIterator; import org.apache.ignite.internal.util.GridSetWrapper; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.lang.GridMetadataAwareAdapter; @@ -91,7 +92,6 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.thread.IgniteThread; import org.apache.ignite.transactions.TransactionConcurrency; @@ -276,6 +276,7 @@ public abstract class IgniteTxAdapter extends GridMetadataAwareAdapter implement protected ConsistentIdMapper consistentIdMapper; /** Mvcc tx update snapshot. */ + @GridToStringInclude protected volatile MvccSnapshot mvccSnapshot; /** Rollback finish future. 
*/ @@ -283,7 +284,12 @@ public abstract class IgniteTxAdapter extends GridMetadataAwareAdapter implement private volatile IgniteInternalFuture rollbackFut; /** */ - private volatile TxCounters txCounters = new TxCounters(); + @SuppressWarnings("unused") + @GridToStringExclude + private volatile TxCounters txCounters; + + /** Transaction from which this transaction was copied by(if it was). */ + private GridNearTxLocal parentTx; /** * Empty constructor required for {@link Externalizable}. @@ -403,6 +409,13 @@ protected IgniteTxAdapter( consistentIdMapper = new ConsistentIdMapper(cctx.discovery()); } + /** + * @param parentTx Transaction from which this transaction was copied by. + */ + public void setParentTx(GridNearTxLocal parentTx) { + this.parentTx = parentTx; + } + /** * @return Mvcc info. */ @@ -765,6 +778,36 @@ public final IgniteCheckedException rollbackException() { "[timeout=" + timeout() + ", tx=" + CU.txString(this) + ']'); } + /** + * @param ex Root cause. + */ + public final IgniteCheckedException heuristicException(Throwable ex) { + return new IgniteTxHeuristicCheckedException("Committing a transaction has produced runtime exception", ex); + } + + /** + * @param log Log. + * @param commit Commit. + * @param e Exception. + */ + public void logTxFinishErrorSafe(@Nullable IgniteLogger log, boolean commit, Throwable e) { + assert e != null : "Exception is expected"; + + final String fmt = "Failed completing the transaction: [commit=%s, tx=%s, plc=%s]"; + + try { + // First try printing a full transaction. This is error prone. 
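The `logTxFinishErrorSafe` helper added in this hunk guards against `toString()` on a live transaction itself throwing: it first tries to log the full representation, and on failure attaches that secondary error as suppressed and falls back to a short, safe form. A self-contained sketch of the same fallback (names invented, no `U.error`/`CU.txString`):

```java
// Sketch of the "log full form, fall back to short form" pattern:
// rendering the full object may throw, so the failure is recorded
// as a suppressed exception and a safe short form is used instead.
public class SafeLog {
    public static String describeSafely(Object full, String shortForm, Throwable cause) {
        try {
            return "Failed completing the transaction: " + full; // may throw in full.toString()
        }
        catch (Throwable t) {
            cause.addSuppressed(t);

            return "Failed completing the transaction: " + shortForm;
        }
    }

    /** Stand-in for an object whose toString() fails mid-logging. */
    public static final class Flaky {
        @Override public String toString() {
            throw new IllegalStateException("broken toString");
        }
    }
}
```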
+ U.error(log, String.format(fmt, commit, this, + cctx.gridConfig().getFailureHandler().getClass().getSimpleName()), e); + } + catch (Throwable e0) { + e.addSuppressed(e0); + + U.error(log, String.format(fmt, commit, CU.txString(this), + cctx.gridConfig().getFailureHandler().getClass().getSimpleName()), e); + } + } + /** {@inheritDoc} */ @Override public GridCacheVersion xidVersion() { return xidVer; @@ -1015,6 +1058,19 @@ public void onePhaseCommit(boolean onePhaseCommit) { return state(state, false); } + /** + * Changing state for this transaction as well as chained(parent) transactions. + * + * @param state Transaction state. + * @return {@code True} if transition was valid, {@code false} otherwise. + */ + public boolean chainState(TransactionState state) { + if (parentTx != null) + parentTx.state(state); + + return state(state); + } + /** {@inheritDoc} */ @SuppressWarnings("ExternalizableWithoutPublicNoArgConstructor") @Override public IgniteInternalFuture finishFuture() { @@ -1158,82 +1214,21 @@ protected final boolean state(TransactionState state, boolean timedOut) { seal(); if (state == PREPARED || state == COMMITTED || state == ROLLED_BACK) { - if (mvccSnapshot != null) { - byte txState; - - switch (state) { - case PREPARED: - txState = TxState.PREPARED; - break; - case ROLLED_BACK: - txState = TxState.ABORTED; - break; - case COMMITTED: - txState = TxState.COMMITTED; - break; - default: - throw new IllegalStateException("Illegal state: " + state); - } - + if (state == PREPARED) { try { - if (!cctx.localNode().isClient()) { - if (dht() && remote()) - cctx.coordinators().updateState(mvccSnapshot, txState, false); - else if (local()) { - IgniteInternalFuture rollbackFut = rollbackFuture(); - - boolean syncUpdate = txState == TxState.PREPARED || txState == TxState.COMMITTED || - rollbackFut == null || rollbackFut.isDone(); - - if (syncUpdate) - cctx.coordinators().updateState(mvccSnapshot, txState); - else { - // If tx was aborted, we need to wait tx log is 
updated on all backups. - rollbackFut.listen(new IgniteInClosure() { - @Override public void apply(IgniteInternalFuture fut) { - try { - cctx.coordinators().updateState(mvccSnapshot, txState); - } - catch (IgniteCheckedException e) { - U.error(log, "Failed to log TxState: " + txState, e); - } - } - }); - } - } - } + cctx.tm().mvccPrepare(this); } catch (IgniteCheckedException e) { - U.error(log, "Failed to log TxState: " + txState, e); - - throw new IgniteException("Failed to log TxState: " + txState, e); - } - } - - // Log tx state change to WAL. - if (cctx.wal() != null && cctx.tm().logTxRecords() && txNodes != null) { + String msg = "Failed to update TxState: " + TxState.PREPARED; - BaselineTopology baselineTop = cctx.kernalContext().state().clusterState().baselineTopology(); + U.error(log, msg, e); - Map> participatingNodes = consistentIdMapper - .mapToCompactIds(topVer, txNodes, baselineTop); - - TxRecord txRecord = new TxRecord( - state, - nearXidVersion(), - writeVersion(), - participatingNodes - ); - - try { - ptr = cctx.wal().log(txRecord); - } - catch (IgniteCheckedException e) { - U.error(log, "Failed to log TxRecord: " + txRecord, e); - - throw new IgniteException("Failed to log TxRecord: " + txRecord, e); + throw new IgniteException(msg, e); } } + + if (!txState().mvccEnabled()) + ptr = cctx.tm().logTxRecord(this); } } } @@ -1858,32 +1853,6 @@ else if (op == DELETE && resVal != null) return F.t(op, ctx); } - /** - * Notify Dr on tx finished. - * - * @param commit {@code True} if commited, {@code False} otherwise. 
- */ - protected void notifyDrManager(boolean commit) { - if (system() || internal()) - return; - - IgniteTxState txState = txState(); - - if (mvccSnapshot == null || txState.cacheIds().isEmpty()) - return; - - GridIntIterator iter = txState.cacheIds().iterator(); - - while (iter.hasNext()) { - int cacheId = iter.next(); - - GridCacheContext ctx0 = cctx.cacheContext(cacheId); - - if (ctx0.isDrEnabled()) - ctx0.dr().onTxFinished(mvccSnapshot, commit, topologyVersionSnapshot()); - } - } - /** * @param e Transaction entry. * @param primaryOnly Flag to include backups into check or not. @@ -2045,21 +2014,51 @@ protected void applyTxSizes() { for (Map.Entry> entry : sizeDeltas.entrySet()) { Integer cacheId = entry.getKey(); - Map partDeltas = entry.getValue(); + Map deltas = entry.getValue(); - assert !F.isEmpty(partDeltas); + assert !F.isEmpty(deltas); GridDhtPartitionTopology top = cctx.cacheContext(cacheId).topology(); - for (Map.Entry e : partDeltas.entrySet()) { - Integer p = e.getKey(); + // Need to reserve on backups only + boolean reserve = dht() && remote(); + + for (Map.Entry e : deltas.entrySet()) { + boolean invalid = false; + int p = e.getKey(); long delta = e.getValue().get(); - GridDhtLocalPartition dhtPart = top.localPartition(p); + try { + GridDhtLocalPartition part = top.localPartition(p); + + if (!reserve || part != null && part.reserve()) { + assert part != null; + + try { + if (part.state() != GridDhtPartitionState.RENTING) + part.dataStore().updateSize(cacheId, delta); + else + invalid = true; + } + finally { + if (reserve) + part.release(); + } + } + else + invalid = true; + } + catch (GridDhtInvalidPartitionException e1) { + invalid = true; + } - assert dhtPart != null; + if (invalid) { + assert reserve; - dhtPart.dataStore().updateSize(cacheId, delta); + if (log.isDebugEnabled()) + log.debug("Trying to apply size delta for invalid partition: " + + "[cacheId=" + cacheId + ", part=" + p + "]"); + } } } } @@ -2296,6 +2295,11 @@ private static class 
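The reworked `applyTxSizes` above reserves a backup partition before applying a size delta, so the partition cannot be evicted mid-update, and drops the delta when the partition is missing, unreservable, or already `RENTING`. A minimal model of that reserve/update/release flow (all types here are invented stand-ins, not Ignite's partition API):

```java
// Toy model of the reserve/update/release dance: on backups the
// partition is reserved before the size delta is applied; missing
// or RENTING partitions are treated as invalid and the delta dropped.
public class PartDelta {
    public enum State { OWNING, RENTING }

    public static final class Partition {
        public final State state;
        public long size;
        public int reservations;

        public Partition(State state) { this.state = state; }

        public boolean reserve() { reservations++; return true; }

        public void release() { reservations--; }
    }

    /** @return true if the delta was applied, false if the partition was invalid. */
    public static boolean apply(Partition part, long delta, boolean reserve) {
        if (part == null)
            return false; // partition not created locally

        if (reserve && !part.reserve())
            return false; // partition is being evicted

        try {
            if (part.state == State.RENTING)
                return false; // eviction in progress: drop the delta

            part.size += delta;

            return true;
        }
        finally {
            if (reserve)
                part.release();
        }
    }
}
```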
TxShadow implements IgniteInternalTx { throw new IllegalStateException("Deserialized transaction can only be used as read-only."); } + /** {@inheritDoc} */ + @Nullable @Override public String label() { + throw new IllegalStateException("Deserialized transaction can only be used as read-only."); + } + /** {@inheritDoc} */ @Override public boolean empty() { throw new IllegalStateException("Deserialized transaction can only be used as read-only."); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxEntry.java index 8e65605f79ec8..2a7ab4e8d7030 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxEntry.java @@ -30,6 +30,7 @@ import org.apache.ignite.internal.GridDirectTransient; import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.processors.cache.CacheEntryPredicate; +import org.apache.ignite.internal.processors.cache.CacheInvalidStateException; import org.apache.ignite.internal.processors.cache.CacheInvokeEntry; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.GridCacheContext; @@ -935,9 +936,10 @@ public void unmarshal(GridCacheSharedContext ctx, boolean near, if (this.ctx == null) { GridCacheContext cacheCtx = ctx.cacheContext(cacheId); - assert cacheCtx != null : "Failed to find cache context [cacheId=" + cacheId + - ", readyTopVer=" + ctx.exchange().readyAffinityVersion() + ']'; - + if (cacheCtx == null) + throw new CacheInvalidStateException( + "Failed to perform cache operation (cache is stopped), cacheId=" + cacheId); + if (cacheCtx.isNear() && !near) cacheCtx = cacheCtx.near().dht().context(); else if (!cacheCtx.isNear() && near) diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxHandler.java
index 314bb52be0df9..078aac4e99dfb 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxHandler.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxHandler.java
@@ -17,9 +17,11 @@
 package org.apache.ignite.internal.processors.cache.transactions;

+import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
 import java.util.UUID;
+import javax.cache.processor.EntryProcessor;
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.IgniteLogger;
 import org.apache.ignite.cluster.ClusterNode;
@@ -29,6 +31,7 @@
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException;
 import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
+import org.apache.ignite.internal.processors.cache.CacheEntryInfoCollection;
 import org.apache.ignite.internal.processors.cache.CacheObject;
 import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.GridCacheEntryEx;
@@ -37,14 +40,14 @@
 import org.apache.ignite.internal.processors.cache.GridCacheMessage;
 import org.apache.ignite.internal.processors.cache.GridCacheReturnCompletableWrapper;
 import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
+import org.apache.ignite.internal.processors.cache.GridCacheUpdateTxResult;
 import org.apache.ignite.internal.processors.cache.KeyCacheObject;
 import org.apache.ignite.internal.processors.cache.distributed.GridCacheTxRecoveryFuture;
 import org.apache.ignite.internal.processors.cache.distributed.GridCacheTxRecoveryRequest;
 import org.apache.ignite.internal.processors.cache.distributed.GridCacheTxRecoveryResponse;
 import org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter;
-import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException;
-import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
-import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology;
+import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter;
+import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxFinishFuture;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxFinishRequest;
@@ -55,6 +58,12 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareRequest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareResponse;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxRemote;
+import org.apache.ignite.internal.processors.cache.distributed.dht.GridInvokeValue;
+import org.apache.ignite.internal.processors.cache.distributed.dht.PartitionUpdateCountersMessage;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishRequest;
@@ -64,7 +73,13 @@
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareResponse;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxRemote;
+import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot;
+import org.apache.ignite.internal.processors.cache.mvcc.msg.PartitionCountersNeighborcastRequest;
+import org.apache.ignite.internal.processors.cache.mvcc.msg.PartitionCountersNeighborcastResponse;
+import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode;
 import org.apache.ignite.internal.processors.cache.version.GridCacheVersion;
+import org.apache.ignite.internal.processors.query.EnlistOperation;
+import org.apache.ignite.internal.processors.query.IgniteSQLException;
 import org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException;
 import org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException;
 import org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException;
@@ -79,6 +94,8 @@
 import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteFutureCancelledException;
+import org.apache.ignite.lang.IgniteUuid;
+import org.apache.ignite.plugin.extensions.communication.Message;
 import org.apache.ignite.thread.IgniteThread;
 import org.apache.ignite.transactions.TransactionState;
 import org.jetbrains.annotations.Nullable;
@@ -89,6 +106,7 @@
 import static org.apache.ignite.internal.processors.cache.GridCacheOperation.TRANSFORM;
 import static org.apache.ignite.internal.processors.cache.GridCacheUtils.isNearEnabled;
 import static org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx.FinalizationStatus.USER_FINISH;
+import static org.apache.ignite.internal.util.lang.GridFunc.isEmpty;
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
 import static org.apache.ignite.transactions.TransactionState.PREPARED;
@@ -143,13 +161,13 @@ private void processNearTxPrepareRequest(UUID nearNodeId, GridNearTxPrepareReque
      * @param nearNode Sender node.
      * @param req Request.
      */
-    private void processNearTxPrepareRequest0(ClusterNode nearNode, GridNearTxPrepareRequest req) {
+    private IgniteInternalFuture processNearTxPrepareRequest0(ClusterNode nearNode, GridNearTxPrepareRequest req) {
         IgniteInternalFuture fut;

         if (req.firstClientRequest() && req.allowWaitTopologyFuture()) {
             for (;;) {
                 if (waitForExchangeFuture(nearNode, req))
-                    return;
+                    return new GridFinishedFuture<>();

                 fut = prepareNearTx(nearNode, req);

@@ -162,6 +180,8 @@ private void processNearTxPrepareRequest0(ClusterNode nearNode, GridNearTxPrepar
         assert req.txState() != null || fut == null || fut.error() != null ||
             (ctx.tm().tx(req.version()) == null && ctx.tm().nearTx(req.version()) == null);
+
+        return fut;
     }

     /**
@@ -243,6 +263,20 @@ public IgniteTxHandler(GridCacheSharedContext ctx) {
                     processCheckPreparedTxResponse(nodeId, res);
                 }
             });
+
+        ctx.io().addCacheHandler(0, PartitionCountersNeighborcastRequest.class,
+            new CI2() {
+                @Override public void apply(UUID nodeId, PartitionCountersNeighborcastRequest req) {
+                    processPartitionCountersRequest(nodeId, req);
+                }
+            });
+
+        ctx.io().addCacheHandler(0, PartitionCountersNeighborcastResponse.class,
+            new CI2() {
+                @Override public void apply(UUID nodeId, PartitionCountersNeighborcastResponse res) {
+                    processPartitionCountersResponse(nodeId, res);
+                }
+            });
     }

     /**
@@ -311,14 +345,17 @@ private IgniteTxEntry unmarshal(@Nullable Collection entries) thr
     }

     /**
+     * @param originTx Transaction for copy.
      * @param req Request.
      * @return Prepare future.
      */
-    public IgniteInternalFuture prepareNearTxLocal(final GridNearTxPrepareRequest req) {
+    public IgniteInternalFuture prepareNearTxLocal(
+        final GridNearTxLocal originTx,
+        final GridNearTxPrepareRequest req) {
         // Make sure not to provide Near entries to DHT cache.
         req.cloneEntries();

-        return prepareNearTx(ctx.localNode(), req);
+        return prepareNearTx(originTx, ctx.localNode(), req);
     }

     /**
@@ -329,6 +366,20 @@ public IgniteInternalFuture prepareNearTxLocal(final
     @Nullable private IgniteInternalFuture prepareNearTx(
         final ClusterNode nearNode,
         final GridNearTxPrepareRequest req
+    ) {
+        return prepareNearTx(null, nearNode, req);
+    }
+
+    /**
+     * @param originTx Transaction for copy.
+     * @param nearNode Node that initiated transaction.
+     * @param req Near prepare request.
+     * @return Prepare future or {@code null} if need retry operation.
+     */
+    @Nullable private IgniteInternalFuture prepareNearTx(
+        final GridNearTxLocal originTx,
+        final ClusterNode nearNode,
+        final GridNearTxPrepareRequest req
     ) {
         IgniteTxEntry firstEntry;

@@ -477,7 +528,9 @@ public IgniteInternalFuture prepareNearTxLocal(final
                 req.txSize(),
                 req.transactionNodes(),
                 req.subjectId(),
-                req.taskNameHash()
+                req.taskNameHash(),
+                req.txLabel(),
+                originTx
             );

             tx = ctx.tm().onCreated(null, tx);
@@ -571,10 +624,26 @@ private boolean waitForExchangeFuture(final ClusterNode node, final GridNearTxPr
                 return;
             }

             ctx.kernalContext().closure().runLocalWithThreadPolicy(thread, () -> {
+                IgniteInternalFuture fut = null;
+
+                Throwable err = null;
+
                 try {
-                    processNearTxPrepareRequest0(node, req);
+                    for (IgniteTxEntry itm : F.concat(false, req.writes(), req.reads())) {
+                        err = topFut.validateCache(itm.context(), req.recovery(), isEmpty(req.writes()),
+                            null, null);
+
+                        if (err != null)
+                            break;
+                    }
+
+                    if (err == null)
+                        fut = processNearTxPrepareRequest0(node, req);
                 }
                 finally {
+                    if (fut == null || fut.error() != null || err != null)
+                        sendResponseOnTimeoutOrError(e, topFut, node, req);
+
                     ctx.io().onMessageProcessed(req);
                 }
             });
@@ -658,6 +727,11 @@ private boolean needRemap(AffinityTopologyVersion expVer,
         if (expVer.equals(curVer))
             return false;

+        AffinityTopologyVersion lastAffChangedTopVer = ctx.exchange().lastAffinityChangedTopologyVersion(expVer);
+
+        if (curVer.compareTo(expVer) <= 0 && curVer.compareTo(lastAffChangedTopVer) >= 0)
+            return false;
+
         // TODO IGNITE-6754 check mvcc crd for mvcc enabled txs.

         for (IgniteTxEntry e : F.concat(false, req.reads(), req.writes())) {
@@ -1016,45 +1090,34 @@ private IgniteInternalFuture finishDhtLocal(UUID nodeId,
         }
         catch (Throwable e) {
             try {
-                U.error(log, "Failed completing transaction [commit=" + req.commit() + ", tx=" + tx + ']', e);
-            }
-            catch (Throwable e0) {
-                ClusterNode node0 = ctx.discovery().node(nodeId);
-
-                U.error(log, "Failed completing transaction [commit=" + req.commit() + ", tx=" +
-                    CU.txString(tx) + ']', e);
-
-                U.error(log, "Failed to log message due to an error: ", e0);
-
-                if (node0 != null && (!node0.isClient() || node0.isLocal())) {
-                    ctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e));
-
-                    throw e;
-                }
-            }
+                if (tx != null) {
+                    tx.commitError(e);

-            if (tx != null) {
-                tx.commitError(e);
+                    tx.systemInvalidate(true);

-                tx.systemInvalidate(true);
+                    try {
+                        IgniteInternalFuture res = tx.rollbackDhtLocalAsync();

-                try {
-                    IgniteInternalFuture res = tx.rollbackDhtLocalAsync();
+                        // Only for error logging.
+                        res.listen(CU.errorLogger(log));

-                    // Only for error logging.
-                    res.listen(CU.errorLogger(log));
+                        return res;
+                    }
+                    catch (Throwable e1) {
+                        e.addSuppressed(e1);
+                    }

-                    return res;
-                }
-                catch (Throwable e1) {
-                    e.addSuppressed(e1);
+                    tx.logTxFinishErrorSafe(log, req.commit(), e);
                 }
-            }

-            if (e instanceof Error)
-                throw (Error)e;
+                if (e instanceof Error)
+                    throw (Error)e;

-            return new GridFinishedFuture<>(e);
+                return new GridFinishedFuture<>(e);
+            }
+            finally {
+                ctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e));
+            }
         }
     }

@@ -1079,20 +1142,26 @@ public IgniteInternalFuture finishColocatedLocal(boolean commi
                 return tx.rollbackAsyncLocal();
         }
         catch (Throwable e) {
-            U.error(log, "Failed completing transaction [commit=" + commit + ", tx=" + tx + ']', e);
-
-            if (e instanceof Error)
-                throw e;
+            try {
+                if (tx != null) {
+                    try {
+                        return tx.rollbackNearTxLocalAsync();
+                    }
+                    catch (Throwable e1) {
+                        e.addSuppressed(e1);
+                    }

-            if (tx != null)
-                try {
-                    return tx.rollbackNearTxLocalAsync();
-                }
-                catch (Throwable e1) {
-                    e.addSuppressed(e1);
+                    tx.logTxFinishErrorSafe(log, commit, e);
                 }

-            return new GridFinishedFuture<>(e);
+                if (e instanceof Error)
+                    throw e;
+
+                return new GridFinishedFuture<>(e);
+            }
+            finally {
+                ctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e));
+            }
         }
     }

@@ -1179,10 +1248,6 @@ else if (e instanceof IgniteTxOptimisticCheckedException) {
                 if (log.isDebugEnabled())
                     log.debug("Optimistic failure for remote transaction (will rollback): " + req);
             }
-            else if (e instanceof IgniteTxHeuristicCheckedException) {
-                U.warn(log, "Failed to commit transaction (all transaction entries were invalidated): " +
-                    CU.txString(dhtTx));
-            }
             else
                 U.error(log, "Failed to process prepare request: " + req, e);

@@ -1407,9 +1472,10 @@ protected void finish(
                     tx.rollbackRemoteTx();
             }
         }
+        catch (IgniteTxHeuristicCheckedException e) {
+            // Already uncommitted.
+        }
         catch (Throwable e) {
-            U.error(log, "Failed completing transaction [commit=" + req.commit() + ", tx=" + tx + ']', e);
-
             // Mark transaction for invalidate.
             tx.invalidate(true);
             tx.systemInvalidate(true);
@@ -1427,6 +1493,8 @@ protected void finish(
     }

     /**
+     * Finish for one-phase distributed tx.
+     *
      * @param tx Transaction.
      * @param req Request.
      */
@@ -1450,22 +1518,27 @@ protected void finish(
             throw e;
         }
         catch (Throwable e) {
-            U.error(log, "Failed committing transaction [tx=" + tx + ']', e);
+            try {
+                // Mark transaction for invalidate.
+                tx.invalidate(true);

-            // Mark transaction for invalidate.
-            tx.invalidate(true);
-            tx.systemInvalidate(true);
+                tx.systemInvalidate(true);

-            try {
-                tx.rollbackRemoteTx();
+                try {
+                    tx.rollbackRemoteTx();
+                }
+                catch (Throwable e1) {
+                    e.addSuppressed(e1);
+                }
+
+                tx.logTxFinishErrorSafe(log, true, e);
+
+                if (e instanceof Error)
+                    throw (Error)e;
             }
-            catch (Throwable e1) {
-                e.addSuppressed(e1);
-                U.error(log, "Failed to automatically rollback transaction: " + tx, e1);
+            finally {
+                ctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, e));
             }
-
-            if (e instanceof Error)
-                throw (Error)e;
         }
     }

@@ -1627,8 +1700,6 @@ private void sendReply(UUID nodeId, GridDhtTxFinishRequest req, boolean committe
             GridDhtTxRemote tx = ctx.tm().tx(req.version());

             if (tx == null) {
-                assert !req.queryUpdate();
-
                 boolean single = req.last() && req.writes().size() == 1;

                 tx = new GridDhtTxRemote(
@@ -1651,8 +1722,10 @@ private void sendReply(UUID nodeId, GridDhtTxFinishRequest req, boolean committe
                     req.subjectId(),
                     req.taskNameHash(),
                     single,
-                    req.storeWriteThrough());
+                    req.storeWriteThrough(),
+                    req.txLabel());
+
                 tx.onePhaseCommit(req.onePhaseCommit());
                 tx.writeVersion(req.writeVersion());

                 tx = ctx.tm().onCreated(null, tx);
@@ -1798,6 +1871,164 @@ private void sendReply(UUID nodeId, GridDhtTxFinishRequest req, boolean committe
         return null;
     }

+    /**
+     * Writes updated values on the backup node.
+     *
+     * @param tx Transaction.
+     * @param ctx Cache context.
+     * @param op Operation.
+     * @param keys Keys.
+     * @param vals Values sent from the primary node.
+     * @param snapshot Mvcc snapshot.
+     * @param batchNum Batch number.
+     * @param futId Future id.
+     * @throws IgniteCheckedException If failed.
+     */
+    public void mvccEnlistBatch(GridDhtTxRemote tx, GridCacheContext ctx, EnlistOperation op, List keys,
+        List vals, MvccSnapshot snapshot, IgniteUuid futId, int batchNum) throws IgniteCheckedException {
+        assert keys != null && (vals == null || vals.size() == keys.size());
+        assert tx != null;
+
+        GridDhtCacheAdapter dht = ctx.dht();
+
+        tx.addActiveCache(ctx, false);
+
+        for (int i = 0; i < keys.size(); i++) {
+            KeyCacheObject key = keys.get(i);
+
+            assert key != null;
+
+            int part = ctx.affinity().partition(key);
+
+            try {
+                GridDhtLocalPartition locPart = ctx.topology().localPartition(part, tx.topologyVersion(), false);
+
+                if (locPart != null && locPart.reserve()) {
+                    try {
+                        // do not process renting partitions.
+                        if (locPart.state() == GridDhtPartitionState.RENTING) {
+                            tx.addInvalidPartition(ctx, part);
+
+                            continue;
+                        }
+
+                        CacheObject val = null;
+                        EntryProcessor entryProc = null;
+                        Object[] invokeArgs = null;
+
+                        boolean needOldVal = ctx.shared().mvccCaching().continuousQueryListeners(ctx, tx, key) != null;
+
+                        Message val0 = vals != null ? vals.get(i) : null;
+
+                        CacheEntryInfoCollection entries =
+                            val0 instanceof CacheEntryInfoCollection ? (CacheEntryInfoCollection)val0 : null;
+
+                        if (entries == null && !op.isDeleteOrLock() && !op.isInvoke())
+                            val = (val0 instanceof CacheObject) ? (CacheObject)val0 : null;
+
+                        if (entries == null && op.isInvoke()) {
+                            assert val0 instanceof GridInvokeValue;
+
+                            GridInvokeValue invokeVal = (GridInvokeValue)val0;
+
+                            entryProc = invokeVal.entryProcessor();
+                            invokeArgs = invokeVal.invokeArgs();
+                        }
+
+                        assert entries != null || entryProc != null || !op.isInvoke() : "entryProc=" + entryProc + ", op=" + op;
+
+                        GridDhtCacheEntry entry = dht.entryExx(key, tx.topologyVersion());
+
+                        GridCacheUpdateTxResult updRes;
+
+                        while (true) {
+                            ctx.shared().database().checkpointReadLock();
+
+                            try {
+                                if (entries == null) {
+                                    switch (op) {
+                                        case DELETE:
+                                            updRes = entry.mvccRemove(
+                                                tx,
+                                                ctx.localNodeId(),
+                                                tx.topologyVersion(),
+                                                snapshot,
+                                                false,
+                                                needOldVal,
+                                                null,
+                                                false);
+
+                                            break;
+
+                                        case INSERT:
+                                        case TRANSFORM:
+                                        case UPSERT:
+                                        case UPDATE:
+                                            updRes = entry.mvccSet(
+                                                tx,
+                                                ctx.localNodeId(),
+                                                val,
+                                                entryProc,
+                                                invokeArgs,
+                                                0,
+                                                tx.topologyVersion(),
+                                                snapshot,
+                                                op.cacheOperation(),
+                                                false,
+                                                false,
+                                                needOldVal,
+                                                null,
+                                                false);
+
+                                            break;
+
+                                        default:
+                                            throw new IgniteSQLException("Cannot acquire lock for operation [op= " +
+                                                op + "]" + "Operation is unsupported at the moment ",
+                                                IgniteQueryErrorCode.UNSUPPORTED_OPERATION);
+                                    }
+                                }
+                                else {
+                                    updRes = entry.mvccUpdateRowsWithPreloadInfo(tx,
+                                        ctx.localNodeId(),
+                                        tx.topologyVersion(),
+                                        entries.infos(),
+                                        op.cacheOperation(),
+                                        snapshot,
+                                        futId,
+                                        batchNum);
+                                }
+
+                                break;
+                            }
+                            catch (GridCacheEntryRemovedException ignore) {
+                                entry = dht.entryExx(key);
+                            }
+                            finally {
+                                ctx.shared().database().checkpointReadUnlock();
+                            }
+                        }
+
+                        if (!updRes.filtered())
+                            ctx.shared().mvccCaching().addEnlisted(key, updRes.newValue(), 0, 0, tx.xidVersion(),
+                                updRes.oldValue(), tx.local(), tx.topologyVersion(), snapshot, ctx.cacheId(), tx, futId, batchNum);
+
+                        assert updRes.updateFuture() == null : "Entry should not be locked on the backup";
+                    }
+                    finally {
+                        locPart.release();
+                    }
+                }
+                else
+                    tx.addInvalidPartition(ctx, part);
+            }
+            catch (GridDhtInvalidPartitionException e) {
+                tx.addInvalidPartition(ctx, e.partition());
+            }
+        }
+    }
+
     /**
      * @param cacheCtx Context.
      * @param key Key
@@ -1847,7 +2078,8 @@ private void invalidateNearEntry(GridCacheContext cacheCtx, KeyCacheObject key,
                 req.nearWrites(),
                 req.txSize(),
                 req.subjectId(),
-                req.taskNameHash()
+                req.taskNameHash(),
+                req.txLabel()
             );

             tx.writeVersion(req.writeVersion());
@@ -1988,4 +2220,125 @@ protected void processCheckPreparedTxResponse(UUID nodeId, GridCacheTxRecoveryRe

         fut.onResult(nodeId, res);
     }
+
+    /**
+     * @param nodeId Node id.
+     * @param req Request.
+     */
+    private void processPartitionCountersRequest(UUID nodeId, PartitionCountersNeighborcastRequest req) {
+        applyPartitionsUpdatesCounters(req.updateCounters());
+
+        try {
+            ctx.io().send(nodeId, new PartitionCountersNeighborcastResponse(req.futId()), SYSTEM_POOL);
+        }
+        catch (ClusterTopologyCheckedException ignored) {
+            if (txRecoveryMsgLog.isDebugEnabled())
+                txRecoveryMsgLog.debug("Failed to send partition counters response, node left [node=" + nodeId + ']');
+        }
+        catch (IgniteCheckedException e) {
+            U.error(txRecoveryMsgLog, "Failed to send partition counters response [node=" + nodeId + ']', e);
+        }
+    }
+
+    /**
+     * @param nodeId Node id.
+     * @param res Response.
+     */
+    private void processPartitionCountersResponse(UUID nodeId, PartitionCountersNeighborcastResponse res) {
+        PartitionCountersNeighborcastFuture fut = ((PartitionCountersNeighborcastFuture)ctx.mvcc().future(res.futId()));
+
+        if (fut == null) {
+            log.warning("Failed to find future for partition counters response [futId=" + res.futId() +
+                ", node=" + nodeId + ']');
+
+            return;
+        }
+
+        fut.onResult(nodeId);
+    }
+
+    /**
+     * Applies partition counter updates for mvcc transactions.
+     *
+     * @param counters Counter values to be updated.
+     */
+    public void applyPartitionsUpdatesCounters(Iterable counters) {
+        if (counters == null)
+            return;
+
+        for (PartitionUpdateCountersMessage counter : counters) {
+            GridCacheContext ctx0 = ctx.cacheContext(counter.cacheId());
+
+            assert ctx0.mvccEnabled();
+
+            GridDhtPartitionTopology top = ctx0.topology();
+
+            assert top != null;
+
+            for (int i = 0; i < counter.size(); i++) {
+                boolean invalid = false;
+
+                try {
+                    GridDhtLocalPartition part = top.localPartition(counter.partition(i));
+
+                    if (part != null && part.reserve()) {
+                        try {
+                            if (part.state() != GridDhtPartitionState.RENTING)
+                                part.updateCounter(counter.initialCounter(i), counter.updatesCount(i));
+                            else
+                                invalid = true;
+                        }
+                        finally {
+                            part.release();
+                        }
+                    }
+                    else
+                        invalid = true;
+                }
+                catch (GridDhtInvalidPartitionException e) {
+                    invalid = true;
+                }
+
+                if (invalid && log.isDebugEnabled())
+                    log.debug("Received partition update counters message for invalid partition: " +
+                        "[cacheId=" + counter.cacheId() + ", part=" + counter.partition(i) + "]");
+            }
+        }
+    }
+
+    /**
+     * @param tx Transaction.
+     * @param node Backup node.
+     * @return Partition counters for the given backup node.
+     */
+    @Nullable public List filterUpdateCountersForBackupNode(
+        IgniteInternalTx tx, ClusterNode node) {
+        TxCounters txCntrs = tx.txCounters(false);
+
+        Collection updCntrs;
+
+        if (txCntrs == null || F.isEmpty(updCntrs = txCntrs.updateCounters()))
+            return null;
+
+        List res = new ArrayList<>(updCntrs.size());
+
+        AffinityTopologyVersion top = tx.topologyVersionSnapshot();
+
+        for (PartitionUpdateCountersMessage partCntrs : updCntrs) {
+            GridDhtPartitionTopology topology = ctx.cacheContext(partCntrs.cacheId()).topology();
+
+            PartitionUpdateCountersMessage resCntrs = new PartitionUpdateCountersMessage(partCntrs.cacheId(), partCntrs.size());
+
+            for (int i = 0; i < partCntrs.size(); i++) {
+                int part = partCntrs.partition(i);
+
+                if (topology.nodes(part, top).indexOf(node) > 0)
+                    resCntrs.add(part, partCntrs.initialCounter(i), partCntrs.updatesCount(i));
+            }
+
+            if (resCntrs.size() > 0)
+                res.add(resCntrs);
+        }
+
+        return res;
+    }
 }
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxImplicitSingleStateImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxImplicitSingleStateImpl.java
index 10acb2266b04d..b1e1b022ef636 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxImplicitSingleStateImpl.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxImplicitSingleStateImpl.java
@@ -23,6 +23,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import org.apache.ignite.IgniteCacheRestartingException;
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.cache.CacheWriteSynchronizationMode;
 import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException;
@@ -148,7 +149,10 @@ public class IgniteTxImplicitSingleStateImpl extends IgniteTxLocalStateAdapter {
             cacheCtx.topology().readLock();

             if (cacheCtx.topology().stopping()) {
-                fut.onDone(new CacheStoppedException(cacheCtx.name()));
+                fut.onDone(
+                    cctx.cache().isCacheRestarting(cacheCtx.name())?
+                        new IgniteCacheRestartingException(cacheCtx.name()):
+                        new CacheStoppedException(cacheCtx.name()));

                 return null;
             }
@@ -296,12 +300,17 @@ public class IgniteTxImplicitSingleStateImpl extends IgniteTxLocalStateAdapter {
     }

     /** {@inheritDoc} */
-    @Override public boolean mvccEnabled(GridCacheSharedContext cctx) {
+    @Override public boolean mvccEnabled() {
         GridCacheContext ctx0 = cacheCtx;

         return ctx0 != null && ctx0.mvccEnabled();
     }

+    /** {@inheritDoc} */
+    @Override public boolean recovery() {
+        return recovery;
+    }
+
     /** {@inheritDoc} */
     @Override public String toString() {
         return S.toString(IgniteTxImplicitSingleStateImpl.class, this);
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxKey.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxKey.java
index efcb48b4714c3..2c3892f564dde 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxKey.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxKey.java
@@ -191,4 +191,4 @@ public void finishUnmarshal(GridCacheContext ctx, ClassLoader ldr) throws Ignite
     @Override public String toString() {
         return S.toString(IgniteTxKey.class, this);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalAdapter.java
index d0e3dcaa3ba08..7e042923ad48b 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalAdapter.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalAdapter.java
@@ -167,6 +167,9 @@ public abstract class IgniteTxLocalAdapter extends IgniteTxAdapter implements Ig
     /** */
     private volatile boolean qryEnlisted;

+    /** Whether to skip update of completed versions map during rollback caused by empty update set in MVCC TX. */
+    private boolean forceSkipCompletedVers;
+
     /**
      * Empty constructor required for {@link Externalizable}.
      */
@@ -555,7 +558,7 @@ protected GridCacheEntryEx entryEx(GridCacheContext cacheCtx, IgniteTxKey key, A
     /** {@inheritDoc} */
     @SuppressWarnings({"CatchGenericClass"})
-    @Override public final void userCommit() throws IgniteCheckedException {
+    @Override public void userCommit() throws IgniteCheckedException {
         TransactionState state = state();

         if (state != COMMITTING) {
@@ -587,7 +590,7 @@ protected GridCacheEntryEx entryEx(GridCacheContext cacheCtx, IgniteTxKey key, A

             WALPointer ptr = null;

-            Exception err = null;
+            IgniteCheckedException err = null;

             cctx.database().checkpointReadLock();

@@ -606,176 +609,175 @@ protected GridCacheEntryEx entryEx(GridCacheContext cacheCtx, IgniteTxKey key, A
                 UUID nodeId = txEntry.nodeId() == null ? this.nodeId : txEntry.nodeId();

-                try {
-                    while (true) {
-                        try {
-                            GridCacheEntryEx cached = txEntry.cached();
+                while (true) {
+                    try {
+                        GridCacheEntryEx cached = txEntry.cached();

-                            // Must try to evict near entries before committing from
-                            // transaction manager to make sure locks are held.
-                            if (!evictNearEntry(txEntry, false)) {
-                                if (cacheCtx.isNear() && cacheCtx.dr().receiveEnabled()) {
-                                    cached.markObsolete(xidVer);
+                        // Must try to evict near entries before committing from
+                        // transaction manager to make sure locks are held.
+                        if (!evictNearEntry(txEntry, false)) {
+                            if (cacheCtx.isNear() && cacheCtx.dr().receiveEnabled()) {
+                                cached.markObsolete(xidVer);

-                                    break;
-                                }
+                                break;
+                            }

-                                if (cached.detached())
-                                    break;
+                            if (cached.detached())
+                                break;

-                                boolean updateNearCache = updateNearCache(cacheCtx, txEntry.key(), topVer);
+                            boolean updateNearCache = updateNearCache(cacheCtx, txEntry.key(), topVer);

-                                boolean metrics = true;
+                            boolean metrics = true;

-                                if (!updateNearCache && cacheCtx.isNear() && txEntry.locallyMapped())
-                                    metrics = false;
+                            if (!updateNearCache && cacheCtx.isNear() && txEntry.locallyMapped())
+                                metrics = false;

-                                boolean evt = !isNearLocallyMapped(txEntry, false);
+                            boolean evt = !isNearLocallyMapped(txEntry, false);

-                                if (!F.isEmpty(txEntry.entryProcessors()) || !F.isEmpty(txEntry.filters()))
-                                    txEntry.cached().unswap(false);
+                            if (!F.isEmpty(txEntry.entryProcessors()) || !F.isEmpty(txEntry.filters()))
+                                txEntry.cached().unswap(false);

-                                IgniteBiTuple res = applyTransformClosures(txEntry,
-                                    true, null);
+                            IgniteBiTuple res = applyTransformClosures(txEntry,
+                                true, null);

-                                GridCacheVersion dhtVer = null;
+                            GridCacheVersion dhtVer = null;

-                                // For near local transactions we must record DHT version
-                                // in order to keep near entries on backup nodes until
-                                // backup remote transaction completes.
-                                if (cacheCtx.isNear()) {
-                                    if (txEntry.op() == CREATE || txEntry.op() == UPDATE ||
-                                        txEntry.op() == DELETE || txEntry.op() == TRANSFORM)
-                                        dhtVer = txEntry.dhtVersion();
+                            // For near local transactions we must record DHT version
+                            // in order to keep near entries on backup nodes until
+                            // backup remote transaction completes.
+                            if (cacheCtx.isNear()) {
+                                if (txEntry.op() == CREATE || txEntry.op() == UPDATE ||
+                                    txEntry.op() == DELETE || txEntry.op() == TRANSFORM)
+                                    dhtVer = txEntry.dhtVersion();

-                                    if ((txEntry.op() == CREATE || txEntry.op() == UPDATE) &&
-                                        txEntry.conflictExpireTime() == CU.EXPIRE_TIME_CALCULATE) {
-                                        ExpiryPolicy expiry = cacheCtx.expiryForTxEntry(txEntry);
+                                if ((txEntry.op() == CREATE || txEntry.op() == UPDATE) &&
+                                    txEntry.conflictExpireTime() == CU.EXPIRE_TIME_CALCULATE) {
+                                    ExpiryPolicy expiry = cacheCtx.expiryForTxEntry(txEntry);

-                                        if (expiry != null) {
-                                            txEntry.cached().unswap(false);
+                                    if (expiry != null) {
+                                        txEntry.cached().unswap(false);

-                                            Duration duration = cached.hasValue() ?
-                                                expiry.getExpiryForUpdate() : expiry.getExpiryForCreation();
+                                        Duration duration = cached.hasValue() ?
+                                            expiry.getExpiryForUpdate() : expiry.getExpiryForCreation();

-                                            txEntry.ttl(CU.toTtl(duration));
-                                        }
+                                        txEntry.ttl(CU.toTtl(duration));
                                     }
                                 }
+                            }

-                                GridCacheOperation op = res.get1();
-                                CacheObject val = res.get2();
+                            GridCacheOperation op = res.get1();
+                            CacheObject val = res.get2();

-                                // Deal with conflicts.
-                                GridCacheVersion explicitVer = txEntry.conflictVersion() != null ?
-                                    txEntry.conflictVersion() : writeVersion();
+                            // Deal with conflicts.
+                            GridCacheVersion explicitVer = txEntry.conflictVersion() != null ?
+                                txEntry.conflictVersion() : writeVersion();

-                                if ((op == CREATE || op == UPDATE) &&
-                                    txEntry.conflictExpireTime() == CU.EXPIRE_TIME_CALCULATE) {
-                                    ExpiryPolicy expiry = cacheCtx.expiryForTxEntry(txEntry);
+                            if ((op == CREATE || op == UPDATE) &&
+                                txEntry.conflictExpireTime() == CU.EXPIRE_TIME_CALCULATE) {
+                                ExpiryPolicy expiry = cacheCtx.expiryForTxEntry(txEntry);

-                                    if (expiry != null) {
-                                        Duration duration = cached.hasValue() ?
-                                            expiry.getExpiryForUpdate() : expiry.getExpiryForCreation();
+                                if (expiry != null) {
+                                    Duration duration = cached.hasValue() ?
+                                        expiry.getExpiryForUpdate() : expiry.getExpiryForCreation();

-                                        long ttl = CU.toTtl(duration);
+                                    long ttl = CU.toTtl(duration);

-                                        txEntry.ttl(ttl);
+                                    txEntry.ttl(ttl);

-                                        if (ttl == CU.TTL_ZERO)
-                                            op = DELETE;
-                                    }
+                                    if (ttl == CU.TTL_ZERO)
+                                        op = DELETE;
                                 }
+                            }

-                                boolean conflictNeedResolve = cacheCtx.conflictNeedResolve();
-
-                                GridCacheVersionConflictContext conflictCtx = null;
-
-                                if (conflictNeedResolve) {
-                                    IgniteBiTuple conflictRes =
-                                        conflictResolve(op, txEntry, val, explicitVer, cached);
+                            boolean conflictNeedResolve = cacheCtx.conflictNeedResolve();

-                                    assert conflictRes != null;
+                            GridCacheVersionConflictContext conflictCtx = null;

-                                    conflictCtx = conflictRes.get2();
+                            if (conflictNeedResolve) {
+                                IgniteBiTuple conflictRes =
+                                    conflictResolve(op, txEntry, val, explicitVer, cached);

-                                    if (conflictCtx.isUseOld())
-                                        op = NOOP;
-                                    else if (conflictCtx.isUseNew()) {
-                                        txEntry.ttl(conflictCtx.ttl());
-                                        txEntry.conflictExpireTime(conflictCtx.expireTime());
-                                    }
-                                    else {
-                                        assert conflictCtx.isMerge();
+                                assert conflictRes != null;

-                                        op = conflictRes.get1();
-                                        val = txEntry.context().toCacheObject(conflictCtx.mergeValue());
-                                        explicitVer = writeVersion();
+                                conflictCtx = conflictRes.get2();

-                                        txEntry.ttl(conflictCtx.ttl());
-                                        txEntry.conflictExpireTime(conflictCtx.expireTime());
-                                    }
+                                if (conflictCtx.isUseOld())
+                                    op = NOOP;
+                                else if (conflictCtx.isUseNew()) {
+                                    txEntry.ttl(conflictCtx.ttl());
+                                    txEntry.conflictExpireTime(conflictCtx.expireTime());
                                 }
-                                else
-                                    // Nullify explicit version so that innerSet/innerRemove will work as usual.
-                                    explicitVer = null;
+                                else {
+                                    assert conflictCtx.isMerge();

-                                if (sndTransformedVals || conflictNeedResolve) {
-                                    assert sndTransformedVals && cacheCtx.isReplicated() || conflictNeedResolve;
+                                    op = conflictRes.get1();
+                                    val = txEntry.context().toCacheObject(conflictCtx.mergeValue());
+                                    explicitVer = writeVersion();

-                                    txEntry.value(val, true, false);
-                                    txEntry.op(op);
-                                    txEntry.entryProcessors(null);
-                                    txEntry.conflictVersion(explicitVer);
+                                    txEntry.ttl(conflictCtx.ttl());
+                                    txEntry.conflictExpireTime(conflictCtx.expireTime());
                                 }
+                            }
+                            else
+                                // Nullify explicit version so that innerSet/innerRemove will work as usual.
+                                explicitVer = null;

-                                if (dhtVer == null)
-                                    dhtVer = explicitVer != null ? explicitVer : writeVersion();
+                            if (sndTransformedVals || conflictNeedResolve) {
+                                assert sndTransformedVals && cacheCtx.isReplicated() || conflictNeedResolve;

-                                if (op == CREATE || op == UPDATE) {
-                                    assert val != null : txEntry;
+                                txEntry.value(val, true, false);
+                                txEntry.op(op);
+                                txEntry.entryProcessors(null);
+                                txEntry.conflictVersion(explicitVer);
+                            }

-                                    GridCacheUpdateTxResult updRes = cached.innerSet(
-                                        this,
-                                        eventNodeId(),
-                                        txEntry.nodeId(),
-                                        val,
-                                        false,
-                                        false,
-                                        txEntry.ttl(),
-                                        evt,
-                                        metrics,
-                                        txEntry.keepBinary(),
-                                        txEntry.hasOldValue(),
-                                        txEntry.oldValue(),
-                                        topVer,
-                                        null,
-                                        cached.detached() ? DR_NONE : drType,
-                                        txEntry.conflictExpireTime(),
-                                        cached.isNear() ? null : explicitVer,
-                                        CU.subjectId(this, cctx),
-                                        resolveTaskName(),
-                                        dhtVer,
-                                        null,
-                                        mvccSnapshot());
-
-                                    if (updRes.success()) {
-                                        txEntry.updateCounter(updRes.updateCounter());
-
-                                        GridLongList waitTxs = updRes.mvccWaitTransactions();
-
-                                        updateWaitTxs(waitTxs);
-                                    }
+                            if (dhtVer == null)
+                                dhtVer = explicitVer != null ? explicitVer : writeVersion();
+
+                            if (op == CREATE || op == UPDATE) {
+                                assert val != null : txEntry;
+
+                                GridCacheUpdateTxResult updRes = cached.innerSet(
+                                    this,
+                                    eventNodeId(),
+                                    txEntry.nodeId(),
+                                    val,
+                                    false,
+                                    false,
+                                    txEntry.ttl(),
+                                    evt,
+                                    metrics,
+                                    txEntry.keepBinary(),
+                                    txEntry.hasOldValue(),
+                                    txEntry.oldValue(),
+                                    topVer,
+                                    null,
+                                    cached.detached() ? DR_NONE : drType,
+                                    txEntry.conflictExpireTime(),
+                                    cached.isNear() ? null : explicitVer,
+                                    CU.subjectId(this, cctx),
+                                    resolveTaskName(),
+                                    dhtVer,
+                                    null,
+                                    mvccSnapshot());
+
+                                if (updRes.success()) {
+                                    txEntry.updateCounter(updRes.updateCounter());
+
+                                    GridLongList waitTxs = updRes.mvccWaitTransactions();
+
+                                    updateWaitTxs(waitTxs);
+                                }

-                                    if (updRes.loggedPointer() != null)
-                                        ptr = updRes.loggedPointer();
+                                if (updRes.loggedPointer() != null)
+                                    ptr = updRes.loggedPointer();

-                                    if (updRes.success() && updateNearCache) {
-                                        final CacheObject val0 = val;
-                                        final boolean metrics0 = metrics;
-                                        final GridCacheVersion dhtVer0 = dhtVer;
+                                if (updRes.success() && updateNearCache) {
+                                    final CacheObject val0 = val;
+                                    final boolean metrics0 = metrics;
+                                    final GridCacheVersion dhtVer0 = dhtVer;

-                                        updateNearEntrySafely(cacheCtx, txEntry.key(), entry -> entry.innerSet(
+                                    updateNearEntrySafely(cacheCtx, txEntry.key(), entry -> entry.innerSet(
                                         null,
                                         eventNodeId(),
                                         nodeId,
@@ -798,46 +800,46 @@ else if (conflictCtx.isUseNew()) {
                                         dhtVer0,
                                         null,
                                         mvccSnapshot())
-                                        );
-                                    }
+                                    );
+                                }
+                            }
+                            else if (op == DELETE) {
+                                GridCacheUpdateTxResult updRes = cached.innerRemove(
+                                    this,
+                                    eventNodeId(),
+                                    txEntry.nodeId(),
+                                    false,
+                                    evt,
+                                    metrics,
+                                    txEntry.keepBinary(),
+                                    txEntry.hasOldValue(),
+                                    txEntry.oldValue(),
+                                    topVer,
+                                    null,
+                                    cached.detached() ? DR_NONE : drType,
+                                    cached.isNear() ? null : explicitVer,
+                                    CU.subjectId(this, cctx),
+                                    resolveTaskName(),
+                                    dhtVer,
+                                    null,
+                                    mvccSnapshot());
+
+                                if (updRes.success()) {
+                                    txEntry.updateCounter(updRes.updateCounter());
+
+                                    GridLongList waitTxs = updRes.mvccWaitTransactions();
+
+                                    updateWaitTxs(waitTxs);
                                 }
-                                else if (op == DELETE) {
-                                    GridCacheUpdateTxResult updRes = cached.innerRemove(
-                                        this,
-                                        eventNodeId(),
-                                        txEntry.nodeId(),
-                                        false,
-                                        evt,
-                                        metrics,
-                                        txEntry.keepBinary(),
-                                        txEntry.hasOldValue(),
-                                        txEntry.oldValue(),
-                                        topVer,
-                                        null,
-                                        cached.detached() ? DR_NONE : drType,
-                                        cached.isNear() ? null : explicitVer,
-                                        CU.subjectId(this, cctx),
-                                        resolveTaskName(),
-                                        dhtVer,
-                                        null,
-                                        mvccSnapshot());
-
-                                    if (updRes.success()) {
-                                        txEntry.updateCounter(updRes.updateCounter());
-
-                                        GridLongList waitTxs = updRes.mvccWaitTransactions();
-
-                                        updateWaitTxs(waitTxs);
-                                    }

-                                    if (updRes.loggedPointer() != null)
-                                        ptr = updRes.loggedPointer();
+                                if (updRes.loggedPointer() != null)
+                                    ptr = updRes.loggedPointer();

-                                    if (updRes.success() && updateNearCache) {
-                                        final boolean metrics0 = metrics;
-                                        final GridCacheVersion dhtVer0 = dhtVer;
+                                if (updRes.success() && updateNearCache) {
+                                    final boolean metrics0 = metrics;
+                                    final GridCacheVersion dhtVer0 = dhtVer;

-                                        updateNearEntrySafely(cacheCtx, txEntry.key(), entry -> entry.innerRemove(
+                                    updateNearEntrySafely(cacheCtx, txEntry.key(), entry -> entry.innerRemove(
                                         null,
                                         eventNodeId(),
                                         nodeId,
@@ -856,144 +858,118 @@ else if (op == DELETE) {
                                         dhtVer0,
                                         null,
                                         mvccSnapshot())
-                                        );
-                                    }
+                                    );
                                 }
-                                else if (op == RELOAD) {
-                                    cached.innerReload();
+                            }
+                            else if (op == RELOAD) {
+                                cached.innerReload();

-                                    if (updateNearCache)
-                                        updateNearEntrySafely(cacheCtx, txEntry.key(), entry -> entry.innerReload());
+                                if (updateNearCache)
+                                    updateNearEntrySafely(cacheCtx, txEntry.key(), entry -> entry.innerReload());
+                            }
+                            else if (op == READ) {
+                                CacheGroupContext grp = cacheCtx.group();
+
+                                if (grp.persistenceEnabled() && grp.walEnabled() &&
+                                    cctx.snapshot().needTxReadLogging()) {
+                                    ptr = cctx.wal().log(new DataRecord(new DataEntry(
+                                        cacheCtx.cacheId(),
+                                        txEntry.key(),
+                                        val,
+                                        op,
+                                        nearXidVersion(),
+                                        writeVersion(),
+                                        0,
+                                        txEntry.key().partition(),
+                                        txEntry.updateCounter())));
                                 }
-                                else if (op == READ) {
-                                    CacheGroupContext grp = cacheCtx.group();
-
-                                    if (grp.persistenceEnabled() && grp.walEnabled() &&
-                                        cctx.snapshot().needTxReadLogging()) {
-                                        ptr = cctx.wal().log(new DataRecord(new DataEntry(
-                                            cacheCtx.cacheId(),
-                                            txEntry.key(),
-                                            val,
-                                            op,
-                                            nearXidVersion(),
-                                            writeVersion(),
-                                            0,
-                                            txEntry.key().partition(),
-                                            txEntry.updateCounter())));
-                                    }
-
-                                    ExpiryPolicy expiry = cacheCtx.expiryForTxEntry(txEntry);

-                                    if (expiry != null) {
-                                        Duration duration = expiry.getExpiryForAccess();
+                                ExpiryPolicy expiry = cacheCtx.expiryForTxEntry(txEntry);

-                                        if (duration != null)
-                                            cached.updateTtl(null, CU.toTtl(duration));
-                                    }
+                                if (expiry != null) {
+                                    Duration duration = expiry.getExpiryForAccess();

-                                    if (log.isDebugEnabled())
-                                        log.debug("Ignoring READ entry when committing: " + txEntry);
+                                    if (duration != null)
+                                        cached.updateTtl(null, CU.toTtl(duration));
                                 }
-                                else {
-                                    assert ownsLock(txEntry.cached()) :
-                                        "Transaction does not own lock for group lock entry during commit [tx=" +
-                                        this + ", txEntry=" + txEntry + ']';
-
-                                    if (conflictCtx == null || !conflictCtx.isUseOld()) {
-                                        if (txEntry.ttl() != CU.TTL_NOT_CHANGED)
-                                            cached.updateTtl(null, txEntry.ttl());
-                                    }

-                                    if (log.isDebugEnabled())
-                                        log.debug("Ignoring NOOP entry when committing: " + txEntry);
-                                }
+                                if (log.isDebugEnabled())
+                                    log.debug("Ignoring READ entry when committing: " + txEntry);
                             }
+                            else {
+                                assert ownsLock(txEntry.cached()) :
+                                    "Transaction does not own lock for group lock entry during commit [tx=" +
+                                    this + ", txEntry=" + txEntry + ']';
+
+                                if (conflictCtx == null || !conflictCtx.isUseOld()) {
+                                    if (txEntry.ttl() != CU.TTL_NOT_CHANGED)
+                                        cached.updateTtl(null, txEntry.ttl());
+                                }

-                            // Check commit locks after set, to make sure that
-                            // we are not changing obsolete entries.
- // (innerSet and innerRemove will throw an exception - // if an entry is obsolete). - if (txEntry.op() != READ) - checkCommitLocks(cached); - - // Break out of while loop. - break; - } - // If entry cached within transaction got removed. - catch (GridCacheEntryRemovedException ignored) { - if (log.isDebugEnabled()) - log.debug("Got removed entry during transaction commit (will retry): " + txEntry); - - txEntry.cached(entryEx(cacheCtx, txEntry.txKey(), topologyVersion())); + if (log.isDebugEnabled()) + log.debug("Ignoring NOOP entry when committing: " + txEntry); + } } - } - } - catch (Throwable ex) { - // We are about to initiate transaction rollback when tx has started to committing. - // Need to remove version from committed list. - cctx.tm().removeCommittedTx(this); - - boolean isNodeStopping = X.hasCause(ex, NodeStoppingException.class); - boolean hasInvalidEnvironmentIssue = X.hasCause(ex, InvalidEnvironmentException.class); - IgniteCheckedException err0 = new IgniteTxHeuristicCheckedException("Failed to locally write to cache " + - "(all transaction entries will be invalidated, however there was a window when " + - "entries for this transaction were visible to others): " + this, ex); + // Check commit locks after set, to make sure that + // we are not changing obsolete entries. + // (innerSet and innerRemove will throw an exception + // if an entry is obsolete). + if (txEntry.op() != READ) + checkCommitLocks(cached); - if (isNodeStopping) { - U.warn(log, "Failed to commit transaction, node is stopping [tx=" + this + - ", err=" + ex + ']'); + // Break out of while loop. 
+ break; } - else if (hasInvalidEnvironmentIssue) { - U.warn(log, "Failed to commit transaction, node is in invalid state and will be stopped [tx=" + this + - ", err=" + ex + ']'); - } - else - U.error(log, "Commit failed.", err0); - - COMMIT_ERR_UPD.compareAndSet(this, null, err0); - - state(UNKNOWN); - - if (hasInvalidEnvironmentIssue) - cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, ex)); - else if (!isNodeStopping) { // Skip fair uncommit in case of node stopping or invalidation. - try { - // Courtesy to minimize damage. - uncommit(); - } - catch (Throwable ex1) { - U.error(log, "Failed to uncommit transaction: " + this, ex1); + // If entry cached within transaction got removed. + catch (GridCacheEntryRemovedException ignored) { + if (log.isDebugEnabled()) + log.debug("Got removed entry during transaction commit (will retry): " + txEntry); - if (ex1 instanceof Error) - throw ex1; - } + txEntry.cached(entryEx(cacheCtx, txEntry.txKey(), topologyVersion())); } - - if (ex instanceof Error) - throw ex; - - throw err0; } + } // Apply cache sizes only for primary nodes. Update counters were applied on prepare state. applyTxSizes(); + cctx.mvccCaching().onTxFinished(this, true); + if (ptr != null && !cctx.tm().logTxRecords()) cctx.wal().flush(ptr, false); } - catch (StorageException e) { - err = e; + catch (Throwable ex) { + // We are about to initiate transaction rollback when tx has started to committing. + // Need to remove version from committed list. 
+ cctx.tm().removeCommittedTx(this); + + if (X.hasCause(ex, NodeStoppingException.class)) { + U.warn(log, "Failed to commit transaction, node is stopping [tx=" + CU.txString(this) + + ", err=" + ex + ']'); + + return; + } + + err = heuristicException(ex); + + COMMIT_ERR_UPD.compareAndSet(this, null, err); + + state(UNKNOWN); + + try { + uncommit(); + } + catch (Throwable e) { + err.addSuppressed(e); + } - throw new IgniteCheckedException("Failed to log transaction record " + - "(transaction will be rolled back): " + this, e); + throw err; } finally { cctx.database().checkpointReadUnlock(); - notifyDrManager(state() == COMMITTING && err == null); - cctx.tm().resetContext(); } } @@ -1089,6 +1065,8 @@ public void tmFinish(boolean commit, boolean nodeStop, boolean clearThreadMap) t assert !needsCompletedVersions || committedVers != null : "Missing committed versions for transaction: " + this; assert !needsCompletedVersions || rolledbackVers != null : "Missing rolledback versions for transaction: " + this; } + + cctx.mvccCaching().onTxFinished(this, commit); } } @@ -1127,8 +1105,6 @@ public Collection rolledbackVersions() { @Override public void userRollback(boolean clearThreadMap) throws IgniteCheckedException { TransactionState state = state(); - notifyDrManager(false); - if (state != ROLLING_BACK && state != ROLLED_BACK) { setRollbackOnly(); @@ -1144,7 +1120,9 @@ public Collection rolledbackVersions() { } if (DONE_FLAG_UPD.compareAndSet(this, 0, 1)) { - cctx.tm().rollbackTx(this, clearThreadMap, false); + cctx.tm().rollbackTx(this, clearThreadMap, forceSkipCompletedVers); + + cctx.mvccCaching().onTxFinished(this, false); if (!internal()) { Collection stores = txState.stores(cctx); @@ -1162,6 +1140,14 @@ assert isWriteToStoreFromDhtValid(stores) : } } + /** + * Forces transaction to skip update of completed versions map during rollback caused by empty update set + * in MVCC TX. 
+ */ + public void forceSkipCompletedVersions() { + forceSkipCompletedVers = true; + } + /** * @param ctx Cache context. * @param key Key. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalState.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalState.java index 01eb4f4417877..e007f903b03ea 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalState.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxLocalState.java @@ -57,4 +57,9 @@ public interface IgniteTxLocalState extends IgniteTxState { * @param partId Partition id. */ public void touchPartition(int cacheId, int partId); + + /** + * @return Recovery mode flag. + */ + public boolean recovery(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxManager.java index 438c8ab266bf0..fbd815ed90aba 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxManager.java @@ -32,16 +32,19 @@ import java.util.concurrent.ConcurrentMap; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteClientDisconnectedException; +import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.binary.BinaryObjectException; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.events.DiscoveryEvent; -import org.apache.ignite.events.Event; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import 
org.apache.ignite.internal.managers.communication.GridIoPolicy; import org.apache.ignite.internal.managers.communication.GridMessageListener; -import org.apache.ignite.internal.managers.eventstorage.GridLocalEventListener; +import org.apache.ignite.internal.managers.discovery.DiscoCache; +import org.apache.ignite.internal.managers.eventstorage.DiscoveryEventListener; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.pagemem.wal.record.MvccTxRecord; import org.apache.ignite.internal.pagemem.wal.record.TxRecord; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheObjectsReleaseFuture; @@ -59,18 +62,22 @@ import org.apache.ignite.internal.processors.cache.distributed.GridCacheTxFinishSync; import org.apache.ignite.internal.processors.cache.distributed.GridCacheTxRecoveryFuture; import org.apache.ignite.internal.processors.cache.distributed.GridDistributedLockCancelledException; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxOnePhaseCommitAckRequest; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxRemote; import org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearLockFuture; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture; import 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; +import org.apache.ignite.internal.processors.cache.mvcc.MvccCoordinator; +import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccRecoveryFinishedMessage; +import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxState; import org.apache.ignite.internal.processors.cache.transactions.TxDeadlockDetection.TxDeadlockFuture; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.processors.cluster.BaselineTopology; import org.apache.ignite.internal.processors.timeout.GridTimeoutObjectAdapter; import org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException; import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException; @@ -102,6 +109,7 @@ import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; import static org.apache.ignite.events.EventType.EVT_NODE_LEFT; import static org.apache.ignite.events.EventType.EVT_TX_STARTED; +import static org.apache.ignite.internal.GridTopic.TOPIC_CACHE_COORDINATOR; import static org.apache.ignite.internal.GridTopic.TOPIC_TX; import static org.apache.ignite.internal.managers.communication.GridIoPolicy.SYSTEM_POOL; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.READ; @@ -250,18 +258,16 @@ public class IgniteTxManager extends GridCacheSharedManagerAdapter { } }; - cctx.gridEvents().addLocalEventListener( - new GridLocalEventListener() { - @Override public void onEvent(Event evt) { - assert evt instanceof DiscoveryEvent; + cctx.gridEvents().addDiscoveryEventListener( + new DiscoveryEventListener() { + @Override public void onEvent(DiscoveryEvent evt, DiscoCache discoCache) { assert evt.type() == EVT_NODE_FAILED || evt.type() == EVT_NODE_LEFT; - DiscoveryEvent discoEvt = (DiscoveryEvent)evt; - - UUID nodeId = discoEvt.eventNode().id(); + UUID nodeId = evt.eventNode().id(); // Wait some time in case there are some 
unprocessed messages from failed node. - cctx.time().addTimeoutObject(new NodeFailureTimeoutObject(nodeId)); + cctx.time().addTimeoutObject( + new NodeFailureTimeoutObject(evt.eventNode(), cctx.coordinators().currentCoordinator())); if (txFinishSync != null) txFinishSync.onNodeLeft(nodeId); @@ -296,6 +302,38 @@ public void rollbackTransactionsForCache(int cacheId) { rollbackTransactionsForCache(cacheId, idMap); } + /** + * @param cachesToStop Caches to stop. + */ + public void rollbackTransactionsForCaches(Set cachesToStop) { + if (!cachesToStop.isEmpty()) { + IgniteTxManager tm = context().tm(); + + Collection active = tm.activeTransactions(); + + GridCompoundFuture compFut = new GridCompoundFuture<>(); + + for (IgniteInternalTx tx : active) { + for (IgniteTxEntry e : tx.allEntries()) { + if (cachesToStop.contains(e.context().cacheId())) { + compFut.add(tx.rollbackAsync()); + + break; + } + } + } + + compFut.markInitialized(); + + try { + compFut.get(); + } + catch (IgniteCheckedException e) { + U.error(log, "Error occurred during tx rollback.", e); + } + } + } + /** * Rollback transactions blocking partition map exchange. * @@ -317,7 +355,7 @@ public void rollbackOnTopologyChange(AffinityTopologyVersion topVer) { */ public void rollbackMvccTxOnCoordinatorChange() { for (IgniteInternalTx tx : activeTransactions()) { - if (tx.mvccSnapshot() != null) + if (tx.mvccSnapshot() != null && tx instanceof GridNearTxLocal) ((GridNearTxLocal)tx).rollbackNearTxLocalAsync(false, false); } } @@ -889,7 +927,8 @@ public void prepareTx(IgniteInternalTx tx, @Nullable Collection e throw new IgniteCheckedException("Transaction is marked for rollback: " + tx); } - if (tx.remainingTime() == -1) { + // One-phase commit tx cannot timeout on prepare because it is expected to be committed.
+ if (tx.remainingTime() == -1 && !tx.onePhaseCommit()) { tx.setRollbackOnly(); throw new IgniteTxTimeoutCheckedException("Transaction timed out: " + this); @@ -2022,7 +2061,7 @@ public IgniteInternalFuture remoteTxFinishFuture(GridCacheVersion nearVer) { */ public void finishTxOnRecovery(final IgniteInternalTx tx, boolean commit) { if (log.isInfoEnabled()) - log.info("Finishing prepared transaction [tx=" + tx + ", commit=" + commit + ']'); + log.info("Finishing prepared transaction [commit=" + commit + ", tx=" + tx + ']'); if (!tx.markFinalizing(RECOVERY_FINISH)) { if (log.isInfoEnabled()) @@ -2042,10 +2081,28 @@ public void finishTxOnRecovery(final IgniteInternalTx tx, boolean commit) { if (commit) tx.commitAsync().listen(new CommitListener(tx)); + else if (tx.mvccSnapshot() != null && !tx.local()) + // remote (backup) mvcc transaction sends partition counters to other backup transaction + // in order to keep counters consistent + neighborcastPartitionCountersAndRollback(tx); else tx.rollbackAsync(); } + /** */ + private void neighborcastPartitionCountersAndRollback(IgniteInternalTx tx) { + TxCounters txCounters = tx.txCounters(false); + + if (txCounters == null || txCounters.updateCounters() == null) + tx.rollbackAsync(); + + PartitionCountersNeighborcastFuture fut = new PartitionCountersNeighborcastFuture(tx, cctx); + + fut.listen(fut0 -> tx.rollbackAsync()); + + fut.init(); + } + /** * Commits transaction in case when node started transaction failed, but all related * transactions were prepared (invalidates transaction if it is not fully prepared). @@ -2396,20 +2453,115 @@ public boolean logTxRecords() { return logTxRecords; } + /** + * Marks MVCC transaction as {@link TxState#COMMITTED} or {@link TxState#ABORTED}. + * + * @param tx Transaction. + * @param commit Commit flag. + * @throws IgniteCheckedException If failed to add version to TxLog. 
+ */ + public void mvccFinish(IgniteTxAdapter tx, boolean commit) throws IgniteCheckedException { + if (!cctx.kernalContext().clientNode() && tx.mvccSnapshot != null && !(tx.near() && tx.remote())) { + WALPointer ptr = null; + + cctx.database().checkpointReadLock(); + + try { + if (cctx.wal() != null) + ptr = cctx.wal().log(newTxRecord(tx)); + + cctx.coordinators().updateState(tx.mvccSnapshot, commit ? TxState.COMMITTED : TxState.ABORTED, tx.local()); + } + finally { + cctx.database().checkpointReadUnlock(); + } + + if (ptr != null) + cctx.wal().flush(ptr, true); + } + } + + /** + * Marks MVCC transaction as {@link TxState#PREPARED}. + * + * @param tx Transaction. + * @throws IgniteCheckedException If failed to add version to TxLog. + */ + public void mvccPrepare(IgniteTxAdapter tx) throws IgniteCheckedException { + if (!cctx.kernalContext().clientNode() && tx.mvccSnapshot != null && !(tx.near() && tx.remote())) { + cctx.database().checkpointReadLock(); + + try { + if (cctx.wal() != null) + cctx.wal().log(newTxRecord(tx)); + + cctx.coordinators().updateState(tx.mvccSnapshot, TxState.PREPARED); + } + finally { + cctx.database().checkpointReadUnlock(); + } + } + } + + /** + * Logs Tx state to WAL if needed. + * + * @param tx Transaction. + * @return WALPointer or {@code null} if nothing was logged. + */ + @Nullable WALPointer logTxRecord(IgniteTxAdapter tx) { + // Log tx state change to WAL. + if (cctx.wal() != null && logTxRecords) { + TxRecord txRecord = newTxRecord(tx); + + try { + return cctx.wal().log(txRecord); + } + catch (IgniteCheckedException e) { + U.error(log, "Failed to log TxRecord: " + txRecord, e); + + throw new IgniteException("Failed to log TxRecord: " + txRecord, e); + } + } + + return null; + } + + /** + * Creates Tx state record for WAL. + * + * @param tx Transaction. + * @return Tx state record. 
+ */ + private TxRecord newTxRecord(IgniteTxAdapter tx) { + BaselineTopology baselineTop = cctx.kernalContext().state().clusterState().baselineTopology(); + + Map> nodes = tx.consistentIdMapper.mapToCompactIds(tx.topVer, tx.txNodes, baselineTop); + + if (tx.txState().mvccEnabled()) + return new MvccTxRecord(tx.state(), tx.nearXidVersion(), tx.writeVersion(), nodes, tx.mvccSnapshot()); + else + return new TxRecord(tx.state(), tx.nearXidVersion(), tx.writeVersion(), nodes); + } + /** * Timeout object for node failure handler. */ private final class NodeFailureTimeoutObject extends GridTimeoutObjectAdapter { - /** Left or failed node. */ - private final UUID evtNodeId; + /** */ + private final ClusterNode node; + /** */ + private final MvccCoordinator mvccCrd; /** - * @param evtNodeId Event node ID. + * @param node Failed node. + * @param mvccCrd Mvcc coordinator at time of node failure. */ - private NodeFailureTimeoutObject(UUID evtNodeId) { + private NodeFailureTimeoutObject(ClusterNode node, MvccCoordinator mvccCrd) { super(IgniteUuid.fromUuid(cctx.localNodeId()), TX_SALVAGE_TIMEOUT); - this.evtNodeId = evtNodeId; + this.node = node; + this.mvccCrd = mvccCrd; } /** @@ -2426,11 +2578,17 @@ private void onTimeout0() { return; } + UUID evtNodeId = node.id(); + try { if (log.isDebugEnabled()) log.debug("Processing node failed event [locNodeId=" + cctx.localNodeId() + ", failedNodeId=" + evtNodeId + ']'); + // Null means that recovery voting is not needed. + GridCompoundFuture allTxFinFut = node.isClient() && mvccCrd != null + ? new GridCompoundFuture<>() : null; + for (final IgniteInternalTx tx : activeTransactions()) { if ((tx.near() && !tx.local()) || (tx.storeWriteThrough() && tx.masterNodeIds().contains(evtNodeId))) { // Invalidate transactions. 
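The mvccFinish()/mvccPrepare() methods added above share a locking discipline: the WAL record is logged and the TxLog state updated while the checkpoint read lock is held, and the durable flush of the returned WALPointer happens only after the lock is released. A minimal standalone sketch of that ordering — the Wal and Database interfaces below are simplified stand-ins, not the real Ignite APIs:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of the locking discipline used by mvccFinish()/mvccPrepare():
 * log to the WAL while holding the checkpoint read lock, fsync after releasing it.
 * Wal and Database are simplified stand-ins, not the real Ignite interfaces.
 */
public class MvccFinishSketch {
    interface Wal { long log(String rec); void flush(long ptr); }
    interface Database { void checkpointReadLock(); void checkpointReadUnlock(); }

    /** Call order is recorded so the discipline can be checked. */
    static final List<String> CALLS = new ArrayList<>();

    static final Wal WAL = new Wal() {
        @Override public long log(String rec) { CALLS.add("log:" + rec); return 1L; }
        @Override public void flush(long ptr) { CALLS.add("flush"); }
    };

    static final Database DB = new Database() {
        @Override public void checkpointReadLock() { CALLS.add("lock"); }
        @Override public void checkpointReadUnlock() { CALLS.add("unlock"); }
    };

    /** Mirrors the shape of mvccFinish: the durable flush happens outside the lock. */
    static void finish(boolean commit) {
        long ptr;

        DB.checkpointReadLock();

        try {
            // The state change record must be written while checkpointing is blocked.
            ptr = WAL.log(commit ? "COMMITTED" : "ABORTED");
        }
        finally {
            DB.checkpointReadUnlock();
        }

        // fsync is comparatively slow, so it is performed after the lock is released.
        WAL.flush(ptr);
    }

    public static void main(String[] args) {
        finish(true);

        System.out.println(String.join(",", CALLS));
    }
}
```

Keeping the flush outside the read lock matters because checkpointing is blocked for every thread holding that lock; only the cheap in-memory append belongs inside.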
@@ -2445,24 +2603,57 @@ private void onTimeout0() { IgniteInternalFuture prepFut = tx.currentPrepareFuture(); if (prepFut != null) { - prepFut.listen(new CI1>() { - @Override public void apply(IgniteInternalFuture fut) { - if (tx.state() == PREPARED) - commitIfPrepared(tx, Collections.singleton(evtNodeId)); - else if (tx.setRollbackOnly()) - tx.rollbackAsync(); - } + prepFut.listen(fut -> { + if (tx.state() == PREPARED) + commitIfPrepared(tx, Collections.singleton(evtNodeId)); + // If we could not mark tx as rollback, it means that transaction is being committed. + else if (tx.setRollbackOnly()) + tx.rollbackAsync(); }); } - else { - // If we could not mark tx as rollback, it means that transaction is being committed. - if (tx.setRollbackOnly()) - tx.rollbackAsync(); - } + // If we could not mark tx as rollback, it means that transaction is being committed. + else if (tx.setRollbackOnly()) + tx.rollbackAsync(); } } + + // Await only mvcc transactions initiated by failed client node. + if (allTxFinFut != null && tx.eventNodeId().equals(evtNodeId) + && tx.mvccSnapshot() != null) + allTxFinFut.add(tx.finishFuture()); } } + + if (allTxFinFut == null) + return; + + allTxFinFut.markInitialized(); + + // Send vote to mvcc coordinator when all recovering transactions have finished. + allTxFinFut.listen(fut -> { + // If mvcc coordinator issued snapshot for recovering transaction has failed during recovery, + // then there is no need to send messages to new coordinator. 
+ try { + cctx.kernalContext().io().sendToGridTopic( + mvccCrd.nodeId(), + TOPIC_CACHE_COORDINATOR, + new MvccRecoveryFinishedMessage(evtNodeId), + SYSTEM_POOL); + } + catch (ClusterTopologyCheckedException e) { + if (log.isInfoEnabled()) + log.info("Mvcc coordinator issued snapshots for recovering transactions " + + "has left the cluster (will ignore) [locNodeId=" + cctx.localNodeId() + + ", failedNodeId=" + evtNodeId + + ", mvccCrdNodeId=" + mvccCrd.nodeId() + ']'); + } + catch (IgniteCheckedException e) { + log.warning("Failed to notify mvcc coordinator that all recovering transactions were " + + "finished [locNodeId=" + cctx.localNodeId() + + ", failedNodeId=" + evtNodeId + + ", mvccCrdNodeId=" + mvccCrd.nodeId() + ']', e); + } + }); } finally { cctx.kernalContext().gateway().readUnlock(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteSingleStateImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteSingleStateImpl.java index 3a2ef374678fd..ee917a6d2650d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteSingleStateImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteSingleStateImpl.java @@ -144,7 +144,7 @@ public class IgniteTxRemoteSingleStateImpl extends IgniteTxRemoteStateAdapter { } /** {@inheritDoc} */ - @Override public boolean mvccEnabled(GridCacheSharedContext cctx) { + @Override public boolean mvccEnabled() { return entry != null && entry.context().mvccEnabled(); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteStateImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteStateImpl.java index 35c3fb3133935..e160ed79b4df7 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteStateImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxRemoteStateImpl.java @@ -211,7 +211,7 @@ else if (cacheCtx.affinity().partition(e.key()) == part) } /** {@inheritDoc} */ - @Override public boolean mvccEnabled(GridCacheSharedContext cctx) { + @Override public boolean mvccEnabled() { for (IgniteTxEntry e : writeMap.values()) { if (e.context().mvccEnabled()) return true; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxState.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxState.java index e42fe7f88d4dd..2039cc9289c75 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxState.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxState.java @@ -190,8 +190,7 @@ public void addActiveCache(GridCacheContext cacheCtx, boolean recovery, IgniteTx public boolean empty(); /** - * @param cctx Context. * @return {@code True} if MVCC mode is enabled for transaction. 
*/ - public boolean mvccEnabled(GridCacheSharedContext cctx); + public boolean mvccEnabled(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxStateImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxStateImpl.java index 371c6d044f039..25667968a4a8c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxStateImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/IgniteTxStateImpl.java @@ -24,7 +24,10 @@ import java.util.HashSet; import java.util.Map; import java.util.Set; + +import org.apache.ignite.IgniteCacheRestartingException; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; import org.apache.ignite.cache.CacheInterceptor; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException; @@ -118,6 +121,9 @@ public class IgniteTxStateImpl extends IgniteTxLocalStateAdapter { for (int i = 0; i < activeCacheIds.size(); i++) { int cacheId = activeCacheIds.get(i); + if (cctx.cacheContext(cacheId) == null) + throw new IgniteException("Cache is stopped, id=" + cacheId); + cctx.cacheContext(cacheId).cache().awaitLastFut(); } } @@ -149,7 +155,7 @@ public class IgniteTxStateImpl extends IgniteTxLocalStateAdapter { assert ctx != null : cacheId; - Throwable err = topFut.validateCache(ctx, recovery != null && recovery, read, null, e.getValue()); + Throwable err = topFut.validateCache(ctx, recovery(), read, null, e.getValue()); if (err != null) { if (invalidCaches != null) @@ -180,6 +186,11 @@ public class IgniteTxStateImpl extends IgniteTxLocalStateAdapter { return null; } + /** {@inheritDoc} */ + @Override public boolean recovery() { + return recovery != null && recovery; + } + /** {@inheritDoc} */ @Override public CacheWriteSynchronizationMode 
syncMode(GridCacheSharedContext cctx) { CacheWriteSynchronizationMode syncMode = CacheWriteSynchronizationMode.FULL_ASYNC; @@ -288,7 +299,10 @@ public class IgniteTxStateImpl extends IgniteTxLocalStateAdapter { nonLocCtx.topology().readLock(); if (nonLocCtx.topology().stopping()) { - fut.onDone(new CacheStoppedException(nonLocCtx.name())); + fut.onDone( + cctx.cache().isCacheRestarting(nonLocCtx.name())? + new IgniteCacheRestartingException(nonLocCtx.name()): + new CacheStoppedException(nonLocCtx.name())); return null; } @@ -377,6 +391,8 @@ public class IgniteTxStateImpl extends IgniteTxLocalStateAdapter { GridCacheContext cacheCtx = cctx.cacheContext(cacheId); + assert cacheCtx != null : "cacheCtx == null, cacheId=" + cacheId; + onTxEnd(cacheCtx, tx, commit); } } @@ -477,7 +493,7 @@ public synchronized Collection allEntriesCopy() { } /** {@inheritDoc} */ - @Override public boolean mvccEnabled(GridCacheSharedContext cctx) { + @Override public boolean mvccEnabled() { return Boolean.TRUE == mvccEnabled; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/PartitionCountersNeighborcastFuture.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/PartitionCountersNeighborcastFuture.java new file mode 100644 index 0000000000000..f7fe9cb721282 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/PartitionCountersNeighborcastFuture.java @@ -0,0 +1,211 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.transactions; + +import java.util.Collection; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.UUID; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; +import org.apache.ignite.internal.processors.cache.GridCacheCompoundIdentityFuture; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.distributed.dht.PartitionUpdateCountersMessage; +import org.apache.ignite.internal.processors.cache.mvcc.msg.PartitionCountersNeighborcastRequest; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.lang.IgniteUuid; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.internal.managers.communication.GridIoPolicy.SYSTEM_POOL; + +/** + * Represents partition update counters delivery to remote nodes. 
+ */ + public class PartitionCountersNeighborcastFuture extends GridCacheCompoundIdentityFuture { + /** */ + private final IgniteUuid futId = IgniteUuid.randomUuid(); + /** */ + private boolean trackable = true; + /** */ + private final GridCacheSharedContext cctx; + /** */ + private final IgniteInternalTx tx; + /** */ + private final IgniteLogger log; + + /** */ + public PartitionCountersNeighborcastFuture( + IgniteInternalTx tx, GridCacheSharedContext cctx) { + super(null); + + this.tx = tx; + + this.cctx = cctx; + + log = cctx.logger(CU.TX_MSG_RECOVERY_LOG_CATEGORY); + } + + /** + * Starts processing. + */ + public void init() { + if (log.isInfoEnabled()) { + log.info("Starting delivery of partition counters to remote nodes [txId=" + tx.nearXidVersion() + + ", futId=" + futId + ']'); + } + + HashSet siblings = siblingBackups(); + + cctx.mvcc().addFuture(this, futId); + + for (UUID peer : siblings) { + List cntrs = cctx.tm().txHandler() + .filterUpdateCountersForBackupNode(tx, cctx.node(peer)); + + if (F.isEmpty(cntrs)) + continue; + + MiniFuture miniFut = new MiniFuture(peer); + + try { + cctx.io().send(peer, new PartitionCountersNeighborcastRequest(cntrs, futId), SYSTEM_POOL); + + add(miniFut); + } + catch (IgniteCheckedException e) { + if (!(e instanceof ClusterTopologyCheckedException)) + log.warning("Failed to send partition counters to remote node [node=" + peer + ']', e); + else + logNodeLeft(peer); + + miniFut.onDone(); + } + } + + markInitialized(); + } + + /** */ + private HashSet siblingBackups() { + Map> txNodes = tx.transactionNodes(); + + assert txNodes != null; + + UUID locNodeId = cctx.localNodeId(); + + HashSet siblings = new HashSet<>(); + + txNodes.values().stream() + .filter(backups -> backups.contains(locNodeId)) + .forEach(siblings::addAll); + + siblings.remove(locNodeId); + + return siblings; + } + + /** {@inheritDoc} */ + @Override public boolean onDone(@Nullable Void res, @Nullable Throwable err) { + boolean comp = super.onDone(res, err); + + if
(comp) + cctx.mvcc().removeFuture(futId); + + return comp; + } + + /** + * Processes a response from a remote peer. Completes a mini future for that peer. + * + * @param nodeId Remote peer node id. + */ + public void onResult(UUID nodeId) { + if (log.isInfoEnabled()) + log.info("Remote peer acked partition counters delivery [futId=" + futId + + ", node=" + nodeId + ']'); + + completeMini(nodeId); + } + + /** {@inheritDoc} */ + @Override public boolean onNodeLeft(UUID nodeId) { + logNodeLeft(nodeId); + + // If a node that left is one of the remote peers, its mini future is completed successfully. + completeMini(nodeId); + + return true; + } + + /** */ + private void completeMini(UUID nodeId) { + for (IgniteInternalFuture fut : futures()) { + assert fut instanceof MiniFuture; + + MiniFuture mini = (MiniFuture)fut; + + if (mini.nodeId.equals(nodeId)) { + cctx.kernalContext().closure().runLocalSafe(mini::onDone); + + break; + } + } + } + + /** */ + private void logNodeLeft(UUID nodeId) { + if (log.isInfoEnabled()) { + log.info("Failed during partition counters delivery to remote node. " + + "Node left cluster (will ignore) [futId=" + futId + + ", node=" + nodeId + ']'); + } + } + + /** {@inheritDoc} */ + @Override public IgniteUuid futureId() { + return futId; + } + + /** {@inheritDoc} */ + @Override public boolean trackable() { + return trackable; + } + + /** {@inheritDoc} */ + @Override public void markNotTrackable() { + trackable = false; + } + + /** + * Component of compound parent future. Represents interaction with one of remote peers.
+ */ + private static class MiniFuture extends GridFutureAdapter { + /** */ + private final UUID nodeId; + + /** */ + private MiniFuture(UUID nodeId) { + this.nodeId = nodeId; + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TransactionProxyImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TransactionProxyImpl.java index 628b8138222f0..110f34da5e632 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TransactionProxyImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TransactionProxyImpl.java @@ -363,9 +363,6 @@ private void leave() { try { return (IgniteFuture)(new IgniteFutureImpl(cctx.rollbackTxAsync(tx))); } - catch (IgniteCheckedException e) { - throw U.convertException(e); - } finally { leave(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxCounters.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxCounters.java index e1a0bd6425a13..550ec09ae00f2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxCounters.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxCounters.java @@ -23,6 +23,7 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicLong; import org.apache.ignite.internal.processors.cache.distributed.dht.PartitionUpdateCountersMessage; +import org.jetbrains.annotations.Nullable; /** * Values which should be tracked during transaction execution and applied on commit. @@ -69,7 +70,7 @@ public void updateCounters(Collection updCntrs) /** * @return Final update counters. 
*/ - public Collection updateCounters() { + @Nullable public Collection updateCounters() { return updCntrs; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksRequest.java index 94fe00527a488..86109c8f323a9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksRequest.java @@ -149,13 +149,13 @@ public Collection txKeys() { } switch (writer.state()) { - case 2: + case 3: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 3: + case 4: if (!writer.writeObjectArray("txKeysArr", txKeysArr, MessageCollectionItemType.MSG)) return false; @@ -177,7 +177,7 @@ public Collection txKeys() { return false; switch (reader.state()) { - case 2: + case 3: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -185,7 +185,7 @@ public Collection txKeys() { reader.incrementState(); - case 3: + case 4: txKeysArr = reader.readObjectArray("txKeysArr", MessageCollectionItemType.MSG, IgniteTxKey.class); if (!reader.isLastRead()) @@ -205,7 +205,7 @@ public Collection txKeys() { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 4; + return 5; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksResponse.java index a5c8f0917da8c..df5caa978b609 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/transactions/TxLocksResponse.java @@ -239,25 +239,25 @@ public void addKey(IgniteTxKey key) { } switch 
(writer.state()) { - case 2: + case 3: if (!writer.writeLong("futId", futId)) return false; writer.incrementState(); - case 3: + case 4: if (!writer.writeObjectArray("locksArr", locksArr, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 4: + case 5: if (!writer.writeObjectArray("nearTxKeysArr", nearTxKeysArr, MessageCollectionItemType.MSG)) return false; writer.incrementState(); - case 5: + case 6: if (!writer.writeObjectArray("txKeysArr", txKeysArr, MessageCollectionItemType.MSG)) return false; @@ -279,7 +279,7 @@ public void addKey(IgniteTxKey key) { return false; switch (reader.state()) { - case 2: + case 3: futId = reader.readLong("futId"); if (!reader.isLastRead()) @@ -287,7 +287,7 @@ public void addKey(IgniteTxKey key) { reader.incrementState(); - case 3: + case 4: locksArr = reader.readObjectArray("locksArr", MessageCollectionItemType.MSG, TxLockList.class); if (!reader.isLastRead()) @@ -295,7 +295,7 @@ public void addKey(IgniteTxKey key) { reader.incrementState(); - case 4: + case 5: nearTxKeysArr = reader.readObjectArray("nearTxKeysArr", MessageCollectionItemType.MSG, IgniteTxKey.class); if (!reader.isLastRead()) @@ -303,7 +303,7 @@ public void addKey(IgniteTxKey key) { reader.incrementState(); - case 5: + case 6: txKeysArr = reader.readObjectArray("txKeysArr", MessageCollectionItemType.MSG, IgniteTxKey.class); if (!reader.isLastRead()) @@ -323,7 +323,7 @@ public void addKey(IgniteTxKey key) { /** {@inheritDoc} */ @Override public byte fieldsCount() { - return 6; + return 7; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccDataRow.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccDataRow.java index 9ded71fc7befa..bdd3166841876 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccDataRow.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccDataRow.java @@ -34,9 +34,9 @@ import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_COUNTER_NA; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_CRD_COUNTER_NA; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_HINTS_BIT_OFF; +import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_HINTS_MASK; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_KEY_ABSENT_BEFORE_MASK; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_KEY_ABSENT_BEFORE_OFF; -import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_OP_COUNTER_MASK; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.MVCC_OP_COUNTER_NA; import static org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.MVCC_INFO_SIZE; @@ -46,38 +46,35 @@ public class MvccDataRow extends DataRow { /** Mvcc coordinator version. */ @GridToStringInclude - protected long mvccCrd; + private long mvccCrd; /** Mvcc counter. */ @GridToStringInclude - protected long mvccCntr; + private long mvccCntr; /** Mvcc operation counter. */ @GridToStringInclude - protected int mvccOpCntr; + private int mvccOpCntr; /** Mvcc tx state. */ @GridToStringInclude - protected byte mvccTxState; + private byte mvccTxState; /** New mvcc coordinator version. */ @GridToStringInclude - protected long newMvccCrd; + private long newMvccCrd; /** New mvcc counter. */ @GridToStringInclude - protected long newMvccCntr; + private long newMvccCntr; /** New mvcc operation counter. */ @GridToStringInclude - protected int newMvccOpCntr; + private int newMvccOpCntr; /** New mvcc tx state. */ @GridToStringInclude - protected byte newMvccTxState; - - /** Flag, whether this key was absent in cache before this transaction. 
*/ - protected boolean keyAbsentBefore; + private byte newMvccTxState; /** * @param link Link. @@ -109,9 +106,9 @@ public MvccDataRow(CacheGroupContext grp, assert MvccUtils.mvccVersionIsValid(crdVer, mvccCntr, mvccOpCntr); assert rowData == RowData.LINK_ONLY - || this.mvccCrd == crdVer && this.mvccCntr == mvccCntr && this.mvccOpCntr == mvccOpCntr : + || mvccCoordinatorVersion() == crdVer && mvccCounter() == mvccCntr && mvccOperationCounter() == mvccOpCntr : "mvccVer=" + new MvccVersionImpl(crdVer, mvccCntr, mvccOpCntr) + - ", dataMvccVer=" + new MvccVersionImpl(this.mvccCrd, this.mvccCntr, this.mvccOpCntr) ; + ", dataMvccVer=" + new MvccVersionImpl(mvccCoordinatorVersion(), mvccCounter(), mvccOperationCounter()) ; if (rowData == RowData.LINK_ONLY) { this.mvccCrd = crdVer; @@ -158,9 +155,8 @@ public MvccDataRow(KeyCacheObject key, CacheObject val, GridCacheVersion ver, in int withHint = PageUtils.getInt(addr, off + 16); - mvccOpCntr = withHint & ~MVCC_OP_COUNTER_MASK; + mvccOpCntr = withHint & ~MVCC_HINTS_MASK; mvccTxState = (byte)(withHint >>> MVCC_HINTS_BIT_OFF); - keyAbsentBefore = ((withHint & MVCC_KEY_ABSENT_BEFORE_MASK) >>> MVCC_KEY_ABSENT_BEFORE_OFF) == 1; assert MvccUtils.mvccVersionIsValid(mvccCrd, mvccCntr, mvccOpCntr); @@ -170,12 +166,9 @@ public MvccDataRow(KeyCacheObject key, CacheObject val, GridCacheVersion ver, in withHint = PageUtils.getInt(addr, off + 36); - newMvccOpCntr = withHint & ~MVCC_OP_COUNTER_MASK; + newMvccOpCntr = withHint & ~MVCC_HINTS_MASK; newMvccTxState = (byte)(withHint >>> MVCC_HINTS_BIT_OFF); - if (newMvccCrd != MVCC_CRD_COUNTER_NA) - keyAbsentBefore = ((withHint & MVCC_KEY_ABSENT_BEFORE_MASK) >>> MVCC_KEY_ABSENT_BEFORE_OFF) == 1; - assert newMvccCrd == MVCC_CRD_COUNTER_NA || MvccUtils.mvccVersionIsValid(newMvccCrd, newMvccCntr, newMvccOpCntr); return MVCC_INFO_SIZE; @@ -193,7 +186,7 @@ public MvccDataRow(KeyCacheObject key, CacheObject val, GridCacheVersion ver, in /** {@inheritDoc} */ @Override public int mvccOperationCounter() 
{ - return mvccOpCntr; + return mvccOpCntr & ~MVCC_KEY_ABSENT_BEFORE_MASK; } /** {@inheritDoc} */ @@ -213,7 +206,7 @@ public MvccDataRow(KeyCacheObject key, CacheObject val, GridCacheVersion ver, in /** {@inheritDoc} */ @Override public int newMvccOperationCounter() { - return newMvccOpCntr; + return newMvccOpCntr & ~MVCC_KEY_ABSENT_BEFORE_MASK; } /** {@inheritDoc} */ @@ -255,9 +248,30 @@ public void newMvccTxState(byte newMvccTxState) { this.newMvccTxState = newMvccTxState; } - /** {@inheritDoc} */ - @Override public boolean isKeyAbsentBefore() { - return keyAbsentBefore; + /** + * @return {@code True} if key absent before. + */ + protected boolean keyAbsentBeforeFlag() { + long withHint = newMvccCrd == MVCC_CRD_COUNTER_NA ? mvccOpCntr : newMvccOpCntr; + + return ((withHint & MVCC_KEY_ABSENT_BEFORE_MASK) >>> MVCC_KEY_ABSENT_BEFORE_OFF) == 1; + } + + /** + * @param flag {@code True} if key is absent before. + */ + protected void keyAbsentBeforeFlag(boolean flag) { + if (flag) { + if (mvccCrd != MVCC_CRD_COUNTER_NA) + mvccOpCntr |= MVCC_KEY_ABSENT_BEFORE_MASK; + + if (newMvccCrd != MVCC_CRD_COUNTER_NA) + newMvccOpCntr |= MVCC_KEY_ABSENT_BEFORE_MASK; + } + else { + mvccOpCntr &= ~MVCC_KEY_ABSENT_BEFORE_MASK; + newMvccOpCntr &= ~MVCC_KEY_ABSENT_BEFORE_MASK; + } } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateDataRow.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateDataRow.java index 2a0b582a5e4e7..75c510b3d67a1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateDataRow.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateDataRow.java @@ -22,6 +22,7 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; import 
org.apache.ignite.internal.processors.cache.CacheEntryPredicate; +import org.apache.ignite.internal.processors.cache.CacheInvokeResult; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; @@ -97,11 +98,14 @@ public class MvccUpdateDataRow extends MvccDataRow implements MvccUpdateResult, /** */ private static final int DELETED = FAST_MISMATCH << 1; - /** Whether tx has overridden it's own update. */ - private static final int OWN_VALUE_OVERRIDDEN = DELETED << 1; + /** Force read full entry instead of header only. Old value == value before tx started. */ + private static final int NEED_OLD_VALUE = DELETED << 1; - /** Force read full entry instead of header only. */ - private static final int NEED_PREV_VALUE = OWN_VALUE_OVERRIDDEN << 1; + /** + * Force read full entry instead of header only. Prev value == value on previous tx step or old value + * if it is a first tx step. + */ + private static final int NEED_PREV_VALUE = NEED_OLD_VALUE << 1; /** */ @GridToStringExclude @@ -132,12 +136,15 @@ public class MvccUpdateDataRow extends MvccDataRow implements MvccUpdateResult, private long resCntr; /** */ - private List historyRows; + private List histRows; /** */ @GridToStringExclude private CacheEntryPredicate filter; + /** */ + private CacheInvokeResult invokeRes; + /** * @param cctx Cache context. * @param key Key. @@ -149,8 +156,9 @@ public class MvccUpdateDataRow extends MvccDataRow implements MvccUpdateResult, * @param newVer Update version. * @param primary Primary node flag. * @param lockOnly Whether no actual update should be done and the only thing to do is to acquire lock. - * @param needHistory Whether to collect rows created or affected by the current tx. + * @param needHist Whether to collect rows created or affected by the current tx. * @param fastUpdate Fast update visit mode. 
+ * @param needOldVal {@code True} if need old value. */ public MvccUpdateDataRow( GridCacheContext cctx, @@ -164,8 +172,9 @@ public MvccUpdateDataRow( @Nullable CacheEntryPredicate filter, boolean primary, boolean lockOnly, - boolean needHistory, + boolean needHist, boolean fastUpdate, + boolean needOldVal, boolean needPrevValue) { super(key, val, @@ -179,7 +188,6 @@ public MvccUpdateDataRow( this.mvccSnapshot = mvccSnapshot; this.cctx = cctx; this.filter = filter; - this.keyAbsentBefore = primary; // True for primary and false for backup (backups do not use this flag). assert !lockOnly || val == null; @@ -191,23 +199,29 @@ public MvccUpdateDataRow( if (primary && (lockOnly || val == null)) flags |= CAN_WRITE | REMOVE_OR_LOCK; - if (needHistory) + if (needHist) flags |= NEED_HISTORY; if (fastUpdate) flags |= FAST_UPDATE; + if (needOldVal) + flags |= NEED_OLD_VALUE; + if(needPrevValue) flags |= NEED_PREV_VALUE; setFlags(flags); + + keyAbsentBeforeFlag(primary); // True for primary and false for backup (backups do not use this flag). } /** {@inheritDoc} */ @Override public int visit(BPlusTree tree, BPlusIO io, long pageAddr, - int idx, IgniteWriteAheadLogManager wal) + int idx, + IgniteWriteAheadLogManager wal) throws IgniteCheckedException { unsetFlags(DIRTY); @@ -219,7 +233,7 @@ public MvccUpdateDataRow( long lockCntr = rowIo.getMvccLockCounter(pageAddr, idx); // We cannot continue while entry is locked by another transaction. - if ((lockCrd != mvccCrd || lockCntr != mvccCntr) + if ((lockCrd != mvccCoordinatorVersion() || lockCntr != mvccCounter()) && isActive(cctx, lockCrd, lockCntr, mvccSnapshot)) { resCrd = lockCrd; resCntr = lockCntr; @@ -251,7 +265,7 @@ && isActive(cctx, lockCrd, lockCntr, mvccSnapshot)) { } if (compare(mvccSnapshot, rowCrd, rowCntr) == 0) { - res = mvccOpCntr == rowOpCntr ? ResultType.VERSION_FOUND : + res = mvccOperationCounter() == rowOpCntr ? ResultType.VERSION_FOUND : removed ? 
ResultType.PREV_NULL : ResultType.PREV_NOT_NULL; if (removed) @@ -259,8 +273,11 @@ && isActive(cctx, lockCrd, lockCntr, mvccSnapshot)) { else { // Actually, full row can be omitted for replace(k,newval) and putIfAbsent, but // operation context is not available here and full row required if filter is set. - if (res == ResultType.PREV_NOT_NULL && (isFlagsSet(NEED_PREV_VALUE) || filter != null)) - oldRow = tree.getRow(io, pageAddr, idx, RowData.FULL); + if (res == ResultType.PREV_NOT_NULL && (isFlagsSet(NEED_PREV_VALUE) || filter != null)) { + oldRow = tree.getRow(io, pageAddr, idx, RowData.NO_KEY); + + oldRow.key(key); + } else oldRow = row; } @@ -269,11 +286,11 @@ && isActive(cctx, lockCrd, lockCntr, mvccSnapshot)) { if(filter != null && !applyFilter(res == ResultType.PREV_NOT_NULL ? oldRow.value() : null)) res = FILTERED; - setFlags(LAST_COMMITTED_FOUND | OWN_VALUE_OVERRIDDEN); + setFlags(LAST_COMMITTED_FOUND); // Copy new key flag from the previous row version if it was created by the current tx. if (isFlagsSet(PRIMARY)) - keyAbsentBefore = row.isKeyAbsentBefore(); + keyAbsentBeforeFlag(row.keyAbsentBeforeFlag()); } } @@ -322,12 +339,15 @@ else if (rowNewCrd == resCrd && rowNewCntr == resCntr) else { res = ResultType.PREV_NOT_NULL; - keyAbsentBefore = false; + keyAbsentBeforeFlag(false); // Actually, full row can be omitted for replace(k,newval) and putIfAbsent, but // operation context is not available here and full row required if filter is set. - if( (isFlagsSet(NEED_PREV_VALUE) || filter != null)) - oldRow = tree.getRow(io, pageAddr, idx, RowData.FULL); + if((isFlagsSet(NEED_PREV_VALUE) || isFlagsSet(NEED_OLD_VALUE) || filter != null)) { + oldRow = tree.getRow(io, pageAddr, idx, RowData.NO_KEY); + + oldRow.key(key); + } else oldRow = row; } @@ -378,10 +398,10 @@ && isVisible(cctx, mvccSnapshot, rowCrd, rowCntr, rowOpCntr, false))) { // Lock entry for primary partition if needed. // If invisible row is found for FAST_UPDATE case we should not lock row. 
if (!isFlagsSet(DELETED) && isFlagsSet(PRIMARY | REMOVE_OR_LOCK) && !isFlagsSet(FAST_MISMATCH)) { - rowIo.setMvccLockCoordinatorVersion(pageAddr, idx, mvccCrd); - rowIo.setMvccLockCounter(pageAddr, idx, mvccCntr); + rowIo.setMvccLockCoordinatorVersion(pageAddr, idx, mvccCoordinatorVersion()); + rowIo.setMvccLockCounter(pageAddr, idx, mvccCounter()); - // TODO Delta record IGNITE-7991 + // Actually, there is no need to log lock delta record into WAL. setFlags(DIRTY); } @@ -427,7 +447,7 @@ && isFlagsSet(LAST_COMMITTED_FOUND | DELETED)) { // We can cleanup previous row only if it was deleted by another // transaction and delete version is less or equal to cleanup one - if (rowNewCrd < mvccCrd || Long.compare(cleanupVer, rowNewCntr) >= 0) + if (rowNewCrd < mvccCoordinatorVersion() || cleanupVer >= rowNewCntr) setFlags(CAN_CLEANUP); } @@ -442,18 +462,18 @@ && isFlagsSet(LAST_COMMITTED_FOUND | DELETED)) { // Row obsoleted by current operation, all rows created or updated with current tx. if (isFlagsSet(NEED_HISTORY) && (row == oldRow - || (rowCrd == mvccCrd && rowCntr == mvccCntr) - || (rowNewCrd == mvccCrd && rowNewCntr == mvccCntr))) { - if (historyRows == null) - historyRows = new ArrayList<>(); + || (rowCrd == mvccCoordinatorVersion() && rowCntr == mvccCounter()) + || (rowNewCrd == mvccCoordinatorVersion() && rowNewCntr == mvccCounter()))) { + if (histRows == null) + histRows = new ArrayList<>(); - historyRows.add(new MvccLinkAwareSearchRow(cacheId, key, rowCrd, rowCntr, rowOpCntr & ~MVCC_OP_COUNTER_MASK, rowLink)); + histRows.add(new MvccLinkAwareSearchRow(cacheId, key, rowCrd, rowCntr, rowOpCntr & ~MVCC_OP_COUNTER_MASK, rowLink)); } if (cleanupVer > MVCC_OP_COUNTER_NA // Do not clean if cleanup version is not assigned. 
&& !isFlagsSet(CAN_CLEANUP) && isFlagsSet(LAST_COMMITTED_FOUND) - && (rowCrd < mvccCrd || Long.compare(cleanupVer, rowCntr) >= 0)) + && (rowCrd < mvccCoordinatorVersion() || Long.compare(cleanupVer, rowCntr) >= 0)) // all further versions are guaranteed to be less than cleanup version setFlags(CAN_CLEANUP); } @@ -526,34 +546,67 @@ public List cleanupRows() { switch (resultType()) { case VERSION_FOUND: case PREV_NULL: + return new MvccVersionImpl(mvccCoordinatorVersion(), mvccCounter(), mvccOperationCounter()); - return new MvccVersionImpl(mvccCrd, mvccCntr, mvccOpCntr); case PREV_NOT_NULL: - + case REMOVED_NOT_NULL: return new MvccVersionImpl(oldRow.mvccCoordinatorVersion(), oldRow.mvccCounter(), oldRow.mvccOperationCounter()); + case LOCKED: case VERSION_MISMATCH: - assert resCrd != MVCC_CRD_COUNTER_NA && resCntr != MVCC_COUNTER_NA; return new MvccVersionImpl(resCrd, resCntr, MVCC_OP_COUNTER_NA); - default: + case FILTERED: + if (oldRow != null) + return new MvccVersionImpl(oldRow.mvccCoordinatorVersion(), oldRow.mvccCounter(), oldRow.mvccOperationCounter()); + else + return new MvccVersionImpl(mvccCoordinatorVersion(), mvccCounter(), mvccOperationCounter()); + + default: throw new IllegalStateException("Unexpected result type: " + resultType()); } } /** {@inheritDoc} */ @Override public List history() { - if (isFlagsSet(NEED_HISTORY) && historyRows == null) - historyRows = new ArrayList<>(); + if (isFlagsSet(NEED_HISTORY) && histRows == null) + histRows = new ArrayList<>(); + + return histRows; + } - return historyRows; + /** {@inheritDoc} */ + @Override public CacheObject newValue() { + return val; + } + + /** {@inheritDoc} */ + @Override public CacheObject oldValue() { + return oldRow == null ? 
null : oldRow.value(); } /** {@inheritDoc} */ - @Override public boolean isOwnValueOverridden() { - return isFlagsSet(OWN_VALUE_OVERRIDDEN); + @Override public boolean isKeyAbsentBefore() { + return keyAbsentBeforeFlag(); + } + + /** */ + public void value(CacheObject val0) { + val = val0; + } + + /** */ + public void invokeResult(CacheInvokeResult invokeRes) { + this.invokeRes = invokeRes; + } + + /** + * @return Invoke result. + */ + @Override public CacheInvokeResult invokeResult(){ + return invokeRes; } /** */ @@ -571,6 +624,11 @@ private int unsetFlags(int flags) { return state &= (~flags); } + /** */ + public void resultType(ResultType type) { + res = type; + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(MvccUpdateDataRow.class, this, "super", super.toString()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateResult.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateResult.java index d76a6e8b05257..f0d33fcf79be2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/MvccUpdateResult.java @@ -18,6 +18,8 @@ package org.apache.ignite.internal.processors.cache.tree.mvcc.data; import java.util.List; +import org.apache.ignite.internal.processors.cache.CacheInvokeResult; +import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion; import org.apache.ignite.internal.processors.cache.tree.mvcc.search.MvccLinkAwareSearchRow; @@ -36,18 +38,27 @@ public interface MvccUpdateResult { public MvccVersion resultVersion(); /** - * * @return Collection of row created or affected by the current tx. */ public List history(); + /** + * @return New value of updated entry. 
+ */ + public CacheObject newValue(); + + /** + * @return Old value. + */ + public CacheObject oldValue(); + /** * @return {@code True} if this key was inserted in the cache with this row in the same transaction. */ public boolean isKeyAbsentBefore(); /** - * @return Flag whether tx has overridden it's own update. + * @return Entry processor invoke result. */ - public boolean isOwnValueOverridden(); + CacheInvokeResult invokeResult(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/ResultType.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/ResultType.java index 16e7e1eeea22e..d8636843d115a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/ResultType.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/mvcc/data/ResultType.java @@ -32,5 +32,7 @@ public enum ResultType { /** */ VERSION_MISMATCH, /** */ - FILTERED + FILTERED, + /** */ + REMOVED_NOT_NULL, } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/CacheInfo.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/CacheInfo.java index 31c0b3fcbe2d0..01231990050bd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/CacheInfo.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/CacheInfo.java @@ -20,7 +20,8 @@ import java.io.IOException; import java.io.ObjectInput; import java.io.ObjectOutput; - +import java.util.LinkedHashMap; +import java.util.Map; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -238,7 +239,7 @@ public CacheAtomicityMode getAtomicityMode() { } /** - * @param atomicityMode + * @param atomicityMode Atomicity mode. 
*/ public void setAtomicityMode(CacheAtomicityMode atomicityMode) { this.atomicityMode = atomicityMode; @@ -272,32 +273,74 @@ public void setAffinityClsName(String affinityClsName) { this.affinityClsName = affinityClsName; } + /** + * Gets name of info for multi line output depending on cache command. + * + * @param cmd Cache command. + * @return Header. + */ + public Object name(VisorViewCacheCmd cmd) { + switch (cmd) { + case CACHES: + return getCacheName(); + + case GROUPS: + return getGrpName(); + + case SEQ: + return getSeqName(); + + default: + throw new IllegalArgumentException("Unknown cache subcommand " + cmd); + } + } + /** * @param cmd Command. */ - public void print(VisorViewCacheCmd cmd) { - if (cmd == null) - cmd = VisorViewCacheCmd.CACHES; + public Map toMap(VisorViewCacheCmd cmd) { + Map map; switch (cmd) { case SEQ: - System.out.println("[seqName=" + getSeqName() + ", curVal=" + seqVal + ']'); + map = new LinkedHashMap<>(2); + + map.put("seqName", getSeqName()); + map.put("curVal", seqVal); break; case GROUPS: - System.out.println("[grpName=" + getGrpName() + ", grpId=" + getGrpId() + ", cachesCnt=" + getCachesCnt() + - ", prim=" + getPartitions() + ", mapped=" + getMapped() + ", mode=" + getMode() + - ", atomicity=" + getAtomicityMode() + ", backups=" + getBackupsCnt() + ", affCls=" + getAffinityClsName() + ']'); + map = new LinkedHashMap<>(10); + + map.put("grpName", getGrpName()); + map.put("grpId", getGrpId()); + map.put("cachesCnt", getCachesCnt()); + map.put("prim", getPartitions()); + map.put("mapped", getMapped()); + map.put("mode", getMode()); + map.put("atomicity", getAtomicityMode()); + map.put("backups", getBackupsCnt()); + map.put("affCls", getAffinityClsName()); break; default: - System.out.println("[cacheName=" + getCacheName() + ", cacheId=" + getCacheId() + - ", grpName=" + getGrpName() + ", grpId=" + getGrpId() + ", prim=" + getPartitions() + - ", mapped=" + getMapped() + ", mode=" + getMode() + ", atomicity=" + getAtomicityMode() 
+ - ", backups=" + getBackupsCnt() + ", affCls=" + getAffinityClsName() + ']'); + map = new LinkedHashMap<>(10); + + map.put("cacheName", getCacheName()); + map.put("cacheId", getCacheId()); + map.put("grpName", getGrpName()); + map.put("grpId", getGrpId()); + map.put("prim", getPartitions()); + map.put("mapped", getMapped()); + map.put("mode", getMode()); + map.put("atomicity", getAtomicityMode()); + map.put("backups", getBackupsCnt()); + map.put("affCls", getAffinityClsName()); } + + return map; } /** {@inheritDoc} */ @@ -345,4 +388,4 @@ public void print(VisorViewCacheCmd cmd) { @Override public String toString() { return S.toString(CacheInfo.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/IdleVerifyResultV2.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/IdleVerifyResultV2.java index c31d6627ed94a..a153063d8998a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/IdleVerifyResultV2.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/IdleVerifyResultV2.java @@ -21,6 +21,7 @@ import java.io.ObjectOutput; import java.util.List; import java.util.Map; +import java.util.UUID; import java.util.function.Consumer; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; @@ -43,6 +44,9 @@ public class IdleVerifyResultV2 extends VisorDataTransferObject { /** Moving partitions. */ private Map> movingPartitions; + /** Exceptions. */ + private Map exceptions; + /** * @param cntrConflicts Counter conflicts. * @param hashConflicts Hash conflicts. 
@@ -51,11 +55,13 @@ public class IdleVerifyResultV2 extends VisorDataTransferObject { public IdleVerifyResultV2( Map> cntrConflicts, Map> hashConflicts, - Map> movingPartitions + Map> movingPartitions, + Map exceptions ) { this.cntrConflicts = cntrConflicts; this.hashConflicts = hashConflicts; this.movingPartitions = movingPartitions; + this.exceptions = exceptions; } /** @@ -64,11 +70,17 @@ public IdleVerifyResultV2( public IdleVerifyResultV2() { } + /** {@inheritDoc} */ + @Override public byte getProtocolVersion() { + return V2; + } + /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { U.writeMap(out, cntrConflicts); U.writeMap(out, hashConflicts); U.writeMap(out, movingPartitions); + U.writeMap(out, exceptions); } /** {@inheritDoc} */ @@ -77,6 +89,9 @@ public IdleVerifyResultV2() { cntrConflicts = U.readMap(in); hashConflicts = U.readMap(in); movingPartitions = U.readMap(in); + + if (protoVer >= V2) + exceptions = U.readMap(in); } /** @@ -107,6 +122,13 @@ public boolean hasConflicts() { return !F.isEmpty(hashConflicts()) || !F.isEmpty(counterConflicts()); } + /** + * @return Exceptions on nodes. + */ + public Map exceptions() { + return exceptions; + } + /** * Print formatted result to given printer. 
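IdleVerifyResultV2 above bumps the protocol to V2, appends the new exceptions map at the end of the serialized form, and guards the read with `protoVer >= V2` so payloads from older nodes still deserialize. The same append-and-guard pattern, sketched with plain Java serialization (names and the version byte layout are illustrative, not Ignite's wire format):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.util.Map;

public class VersionedPayload {
    static final byte V1 = 1, V2 = 2;

    /** Writes the V1 fields first, then appends V2-only fields at the end of the stream. */
    static byte[] write(byte ver, String base, Map<String, String> extra) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();

            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeByte(ver);
                out.writeUTF(base);

                if (ver >= V2)
                    out.writeObject(extra); // New field goes last, like the exceptions map above.
            }

            return bos.toByteArray();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Reads V1 fields unconditionally; V2 fields only if the writer produced them. */
    @SuppressWarnings("unchecked")
    static Map<String, String> readExtra(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            byte ver = in.readByte();

            in.readUTF(); // Base (V1) field.

            return ver >= V2 ? (Map<String, String>)in.readObject() : null;
        }
        catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Appending new fields at the end and gating the read on the writer's version is what keeps `readExternalData` backward compatible with V1 senders.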
* @@ -159,6 +181,16 @@ public void print(Consumer printer) { printer.accept("\n"); } + + if (!F.isEmpty(exceptions())) { + printer.accept("Idle verify failed on nodes:\n"); + + for (Map.Entry e : exceptions().entrySet()) { + printer.accept("Node ID: " + e.getKey() + "\n"); + printer.accept("Exception message:" + "\n"); + printer.accept(e.getValue().getMessage() + "\n"); + } + } } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsDumpTask.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsDumpTask.java index 7bbd9f19b9b15..c0fd36ae5a6b0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsDumpTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsDumpTask.java @@ -30,15 +30,18 @@ import java.util.TreeMap; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteLogger; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.compute.ComputeJob; import org.apache.ignite.compute.ComputeJobResult; +import org.apache.ignite.compute.ComputeJobResultPolicy; import org.apache.ignite.compute.ComputeTaskAdapter; import org.apache.ignite.internal.processors.task.GridInternal; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.visor.verify.VisorIdleVerifyDumpTaskArg; import org.apache.ignite.internal.visor.verify.VisorIdleVerifyTaskArg; import org.apache.ignite.resources.IgniteInstanceResource; +import org.apache.ignite.resources.LoggerResource; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; @@ -57,7 +60,7 @@ public class VerifyBackupPartitionsDumpTask extends ComputeTaskAdapter map( List subgrid, VisorIdleVerifyTaskArg arg) throws IgniteException { @@ -83,6 +91,9 @@ public class 
VerifyBackupPartitionsDumpTask extends ComputeTaskAdapter> clusterHashes = new TreeMap<>(buildPartitionKeyComparator()); for (ComputeJobResult res : results) { + if (res.getException() != null) + continue; + Map nodeHashes = res.getData(); for (Map.Entry e : nodeHashes.entrySet()) { @@ -111,6 +122,22 @@ public class VerifyBackupPartitionsDumpTask extends ComputeTaskAdapter rcvd) throws + IgniteException { + ComputeJobResultPolicy superRes = super.result(res, rcvd); + + // Deny failover. + if (superRes == ComputeJobResultPolicy.FAILOVER) { + superRes = ComputeJobResultPolicy.WAIT; + + log.warning("VerifyBackupPartitionsJobV2 failed on node " + + "[consistentId=" + res.getNode().consistentId() + "]", res.getException()); + } + + return superRes; + } + /** * Checking conditions for adding given record to result. * @@ -140,6 +167,8 @@ record = records.get(i); /** * @param partitions Dump result. + * @param conflictRes Conflict results. + * @param skippedRecords Number of empty partitions. * @return Path where results are written. * @throws IgniteException If failed to write the file. */ @@ -152,7 +181,7 @@ private String writeHashes( ? 
new File("/tmp") : new File(ignite.configuration().getWorkDirectory()); - File out = new File(workDir, IDLE_DUMP_FILE_PREMIX + LocalDateTime.now().format(TIME_FORMATTER) + ".txt"); + File out = new File(workDir, IDLE_DUMP_FILE_PREFIX + LocalDateTime.now().format(TIME_FORMATTER) + ".txt"); ignite.log().info("IdleVerifyDumpTask will write output to " + out.getAbsolutePath()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsTaskV2.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsTaskV2.java index e698e3a2e6bc4..c30d37bdfea11 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsTaskV2.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/verify/VerifyBackupPartitionsTaskV2.java @@ -25,6 +25,7 @@ import java.util.List; import java.util.Map; import java.util.Set; +import java.util.UUID; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.ForkJoinPool; @@ -36,20 +37,28 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteInterruptedException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.compute.ComputeJob; import org.apache.ignite.compute.ComputeJobAdapter; import org.apache.ignite.compute.ComputeJobResult; +import org.apache.ignite.compute.ComputeJobResultPolicy; import org.apache.ignite.compute.ComputeTaskAdapter; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; +import 
org.apache.ignite.internal.processors.cache.GridCacheUtils; +import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.task.GridInternal; import org.apache.ignite.internal.util.lang.GridIterator; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.visor.verify.CacheFilterEnum; +import org.apache.ignite.internal.visor.verify.VisorIdleVerifyDumpTaskArg; import org.apache.ignite.internal.visor.verify.VisorIdleVerifyTaskArg; import org.apache.ignite.lang.IgniteProductVersion; import org.apache.ignite.resources.IgniteInstanceResource; @@ -71,6 +80,10 @@ public class VerifyBackupPartitionsTaskV2 extends ComputeTaskAdapter results) throws IgniteException { Map> clusterHashes = new HashMap<>(); + Map exceptions = new HashMap<>(); for (ComputeJobResult res : results) { + if (res.getException() != null) { + exceptions.put(res.getNode().id(), res.getException()); + + continue; + } + Map nodeHashes = res.getData(); for (Map.Entry e : nodeHashes.entrySet()) { @@ -135,7 +155,23 @@ public class VerifyBackupPartitionsTaskV2 extends ComputeTaskAdapter rcvd) throws + IgniteException { + ComputeJobResultPolicy superRes = super.result(res, rcvd); + + // Deny failover. 
+ if (superRes == ComputeJobResultPolicy.FAILOVER) { + superRes = ComputeJobResultPolicy.WAIT; + + log.warning("VerifyBackupPartitionsJobV2 failed on node " + + "[consistentId=" + res.getNode().consistentId() + "]", res.getException()); + } + + return superRes; } /** @@ -170,13 +206,13 @@ public VerifyBackupPartitionsJobV2(VisorIdleVerifyTaskArg arg) { @Override public Map execute() throws IgniteException { Set grpIds = new HashSet<>(); - Set missingCaches = new HashSet<>(); + if (arg.getCaches() != null && !arg.getCaches().isEmpty()) { + Set missingCaches = new HashSet<>(); - if (arg.getCaches() != null) { for (String cacheName : arg.getCaches()) { DynamicCacheDescriptor desc = ignite.context().cache().cacheDescriptor(cacheName); - if (desc == null) { + if (desc == null || !isCacheMatchFilter(cacheName)) { missingCaches.add(cacheName); continue; @@ -185,25 +221,17 @@ public VerifyBackupPartitionsJobV2(VisorIdleVerifyTaskArg arg) { grpIds.add(desc.groupId()); } - if (!missingCaches.isEmpty()) { - StringBuilder strBuilder = new StringBuilder("The following caches do not exist: "); - - for (String name : missingCaches) - strBuilder.append(name).append(", "); - - strBuilder.delete(strBuilder.length() - 2, strBuilder.length()); - - throw new IgniteException(strBuilder.toString()); - } + handlingMissedCaches(missingCaches); } - else { - Collection groups = ignite.context().cache().cacheGroups(); - - for (CacheGroupContext grp : groups) { - if (!grp.systemCache() && !grp.isLocal()) - grpIds.add(grp.groupId()); + else if (onlySpecificCaches()) { + for (DynamicCacheDescriptor desc : ignite.context().cache().cacheDescriptors().values()) { + if (desc.cacheConfiguration().getCacheMode() != CacheMode.LOCAL + && isCacheMatchFilter(desc.cacheName())) + grpIds.add(desc.groupId()); } } + else + grpIds = getCacheGroupIds(); List>> partHashCalcFutures = new ArrayList<>(); @@ -259,6 +287,118 @@ else if (e.getCause() instanceof IgniteException) return res; } + /** + * Gets filtered 
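The `result(...)` overrides above deny failover so a failed VerifyBackupPartitionsJobV2 is reported in `reduce()` instead of being retried on another node; `reduce()` then skips failed results and collects them per node id. A decoupled sketch of that collection step (NodeResult is an illustrative stand-in for ComputeJobResult, not Ignite API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class VerifyReduceSketch {
    /** Illustrative per-node result: carries either data or a failure. */
    static final class NodeResult {
        final UUID nodeId;
        final Exception err;

        NodeResult(UUID nodeId, Exception err) {
            this.nodeId = nodeId;
            this.err = err;
        }
    }

    /** Mirrors the reduce() change: failed nodes are collected instead of failing the whole task. */
    static Map<UUID, Exception> collectExceptions(List<NodeResult> results) {
        Map<UUID, Exception> exceptions = new HashMap<>();

        for (NodeResult res : results) {
            if (res.err != null)
                exceptions.put(res.nodeId, res.err);
        }

        return exceptions;
    }
}
```

The map keyed by node id is what `IdleVerifyResultV2.exceptions()` exposes above, so the "Idle verify failed on nodes" section can be printed alongside partial conflict results.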
group ids. + */ + private Set getCacheGroupIds() { + Collection groups = ignite.context().cache().cacheGroups(); + + Set grpIds = new HashSet<>(); + + if (arg.excludeCaches() == null || arg.excludeCaches().isEmpty()) { + for (CacheGroupContext grp : groups) { + if (!grp.systemCache() && !grp.isLocal()) + grpIds.add(grp.groupId()); + } + return grpIds; + } + + for (CacheGroupContext grp : groups) { + if (!grp.systemCache() && !grp.isLocal() && !isGrpExcluded(grp)) + grpIds.add(grp.groupId()); + } + + return grpIds; + } + + /** + * @param grp Group. + */ + private boolean isGrpExcluded(CacheGroupContext grp) { + if (arg.excludeCaches().contains(grp.name())) + return true; + + for (GridCacheContext cacheCtx : grp.caches()) { + if (arg.excludeCaches().contains(cacheCtx.name())) + return true; + } + + return false; + } + + /** + * Checks and throws an exception if any caches are missing. + * + * @param missingCaches Missing caches. + */ + private void handlingMissedCaches(Set missingCaches) { + if (missingCaches.isEmpty()) + return; + + StringBuilder strBuilder = new StringBuilder("The following caches do not exist"); + + if (onlySpecificCaches()) { + VisorIdleVerifyDumpTaskArg vdta = (VisorIdleVerifyDumpTaskArg)arg; + + strBuilder.append(" or do not match to the given filter [") + .append(vdta.getCacheFilterEnum()) + .append("]: "); + } + else + strBuilder.append(": "); + + for (String name : missingCaches) + strBuilder.append(name).append(", "); + + strBuilder.delete(strBuilder.length() - 2, strBuilder.length()); + + throw new IgniteException(strBuilder.toString()); + } + + /** + * @return {@code True} if only specific caches are validated, {@code False} otherwise. + */ + private boolean onlySpecificCaches() { + if (arg instanceof VisorIdleVerifyDumpTaskArg) { + VisorIdleVerifyDumpTaskArg vdta = (VisorIdleVerifyDumpTaskArg)arg; + + return vdta.getCacheFilterEnum() != CacheFilterEnum.ALL; + } + + return false; + } + + /** + * @param cacheName Cache name. 
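The missing-caches handling above builds its error message by joining names with a comma separator and then trimming the trailing one. The same joining logic in isolation (assuming the plain "do not exist" branch; the method name is illustrative):

```java
import java.util.Collection;

public class MissingCachesMsg {
    /** Joins missing cache names and trims the trailing separator, as in the handler above.
     *  Caller guarantees a non-empty collection (the handler returns early when it is empty). */
    static String build(Collection<String> missing) {
        StringBuilder sb = new StringBuilder("The following caches do not exist: ");

        for (String name : missing)
            sb.append(name).append(", ");

        sb.delete(sb.length() - 2, sb.length()); // Trim the trailing ", ".

        return sb.toString();
    }
}
```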
+ */ + private boolean isCacheMatchFilter(String cacheName) { + if (arg instanceof VisorIdleVerifyDumpTaskArg) { + DataStorageConfiguration dsc = ignite.context().config().getDataStorageConfiguration(); + DynamicCacheDescriptor desc = ignite.context().cache().cacheDescriptor(cacheName); + CacheConfiguration cc = desc.cacheConfiguration(); + VisorIdleVerifyDumpTaskArg vdta = (VisorIdleVerifyDumpTaskArg)arg; + + switch (vdta.getCacheFilterEnum()) { + case SYSTEM: + return !desc.cacheType().userCache(); + + case NOT_PERSISTENT: + return desc.cacheType().userCache() && !GridCacheUtils.isPersistentCache(cc, dsc); + + case PERSISTENT: + return desc.cacheType().userCache() && GridCacheUtils.isPersistentCache(cc, dsc); + + case ALL: + break; + + default: + assert false: "Illegal cache filter: " + vdta.getCacheFilterEnum(); + } + } + + return true; + } + /** * @param grpCtx Group context. * @param part Local partition. @@ -274,7 +414,6 @@ private Future> calculatePartitionHas }); } - /** * @param grpCtx Group context. * @param part Local partition. 
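`isCacheMatchFilter(...)` above dispatches on the CacheFilterEnum value to decide whether a cache passes. The same decision table as a standalone predicate (the enum and the two boolean flags are simplified stand-ins for the cache-type and persistence checks):

```java
public class CacheFilterSketch {
    enum Filter { ALL, SYSTEM, PERSISTENT, NOT_PERSISTENT }

    /** Mirrors the switch above: decide whether a cache passes the requested filter. */
    static boolean matches(Filter filter, boolean userCache, boolean persistent) {
        switch (filter) {
            case SYSTEM:
                return !userCache;

            case PERSISTENT:
                return userCache && persistent;

            case NOT_PERSISTENT:
                return userCache && !persistent;

            case ALL:
            default:
                return true;
        }
    }
}
```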
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheRawVersionedEntry.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheRawVersionedEntry.java index 586a043cc8d22..ceea0a2d9a5cd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheRawVersionedEntry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheRawVersionedEntry.java @@ -24,6 +24,7 @@ import java.nio.ByteBuffer; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.CacheObjectContext; import org.apache.ignite.internal.processors.cache.CacheObjectValueContext; @@ -39,6 +40,7 @@ /** * Raw versioned entry. */ +@IgniteCodeGeneratingFail public class GridCacheRawVersionedEntry extends DataStreamerEntry implements GridCacheVersionedEntry, GridCacheVersionable, Externalizable { /** */ @@ -381,4 +383,4 @@ public void prepareDirectMarshal(CacheObjectContext ctx) throws IgniteCheckedExc "valBytesLen", valBytes != null ? 
valBytes.length : "n/a", "super", super.toString()); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheVersionManager.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheVersionManager.java index df8af4811b7b3..2a6526aa2a377 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheVersionManager.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/GridCacheVersionManager.java @@ -41,9 +41,6 @@ public class GridCacheVersionManager extends GridCacheSharedManagerAdapter { /** */ public static final GridCacheVersion EVICT_VER = new GridCacheVersion(Integer.MAX_VALUE, 0, 0, 0); - /** */ - public static final GridCacheVersion START_VER = new GridCacheVersion(0, 0, 0, 0); - /** Timestamp used as base time for cache topology version (January 1, 2014). */ public static final long TOP_VER_BASE_TIME = 1388520000000L; @@ -56,6 +53,9 @@ public class GridCacheVersionManager extends GridCacheSharedManagerAdapter { /** Current order for store operations. */ private final AtomicLong loadOrder = new AtomicLong(0); + /** Entry start version. */ + private GridCacheVersion startVer; + /** Last version. 
*/ private volatile GridCacheVersion last; @@ -87,6 +87,8 @@ public class GridCacheVersionManager extends GridCacheSharedManagerAdapter { @Override public void start0() throws IgniteCheckedException { last = new GridCacheVersion(0, order.get(), 0, dataCenterId); + startVer = new GridCacheVersion(0, 0, 0, dataCenterId); + cctx.gridEvents().addLocalEventListener(discoLsnr, EVT_NODE_METRICS_UPDATED); } @@ -104,6 +106,8 @@ public void dataCenterId(byte dataCenterId) { this.dataCenterId = dataCenterId; last = new GridCacheVersion(0, order.get(), 0, dataCenterId); + + startVer = new GridCacheVersion(0, 0, 0, dataCenterId); } /** @@ -301,4 +305,25 @@ private GridCacheVersion next(long topVer, boolean addTime, boolean forLoad, byt public GridCacheVersion last() { return last; } + + /** + * Gets start version. + * + * @return Start version. + */ + public GridCacheVersion startVersion() { + assert startVer != null; + + return startVer; + } + + /** + * Check if given version is start version. + * + * @param ver Version. + * @return {@code True} if given version is start version. + */ + public boolean isStartVersion(GridCacheVersion ver) { + return startVer.equals(ver); + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cacheobject/IgniteCacheObjectProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cacheobject/IgniteCacheObjectProcessor.java index defb3ccb2ad31..4c56ea7c28865 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cacheobject/IgniteCacheObjectProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cacheobject/IgniteCacheObjectProcessor.java @@ -215,5 +215,5 @@ public IncompleteCacheObject toKeyCacheObject(CacheObjectContext ctx, ByteBuffer /** * @return Ignite binary interface. 
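GridCacheVersionManager above replaces the static START_VER constant with a per-manager `startVer` that carries the data center id, rebuilt in `start0()` and whenever `dataCenterId(byte)` changes. A compact sketch of that lifecycle (Ver is a simplified stand-in for GridCacheVersion):

```java
public class VersionManagerSketch {
    /** Simplified stand-in for GridCacheVersion: (topVer, order, nodeOrder, dataCenterId). */
    static final class Ver {
        final int topVer;
        final long order;
        final int nodeOrder;
        final byte dcId;

        Ver(int topVer, long order, int nodeOrder, byte dcId) {
            this.topVer = topVer;
            this.order = order;
            this.nodeOrder = nodeOrder;
            this.dcId = dcId;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof Ver))
                return false;

            Ver v = (Ver)o;

            return topVer == v.topVer && order == v.order && nodeOrder == v.nodeOrder && dcId == v.dcId;
        }

        @Override public int hashCode() {
            return topVer ^ (int)order ^ nodeOrder ^ dcId;
        }
    }

    private Ver startVer;

    /** As above: the start version is rebuilt with the current data center id. */
    void dataCenterId(byte dcId) {
        startVer = new Ver(0, 0, 0, dcId);
    }

    /** Mirrors isStartVersion(): plain equality against the cached start version. */
    boolean isStartVersion(Ver ver) {
        return startVer.equals(ver);
    }
}
```

Making the start version instance state (rather than a shared constant) is what lets it reflect the node's data center id.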
*/ - public IgniteBinary binary(); + public IgniteBinary binary() ; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/BaselineTopology.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/BaselineTopology.java index fcb7a397e8936..5a7e66aefc7bb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/BaselineTopology.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/BaselineTopology.java @@ -298,13 +298,15 @@ public ClusterNode baselineNode(Object consId) { * @return Sorted list of baseline topology nodes. */ public List createBaselineView( - List aliveNodes, - @Nullable IgnitePredicate nodeFilter) - { + Collection aliveNodes, + @Nullable IgnitePredicate nodeFilter + ) { List res = new ArrayList<>(nodeMap.size()); + boolean nullNodeFilter = nodeFilter == null; + for (ClusterNode node : aliveNodes) { - if (nodeMap.containsKey(node.consistentId()) && (nodeFilter == null || CU.affinityNode(node, nodeFilter))) + if (nodeMap.containsKey(node.consistentId()) && (nullNodeFilter || CU.affinityNode(node, nodeFilter))) res.add(node); } @@ -316,7 +318,7 @@ public List createBaselineView( Map consIdMap = new HashMap<>(); for (ClusterNode node : aliveNodes) { - if (nodeMap.containsKey(node.consistentId()) && (nodeFilter == null || CU.affinityNode(node, nodeFilter))) + if (nodeMap.containsKey(node.consistentId()) && (nullNodeFilter || CU.affinityNode(node, nodeFilter))) consIdMap.put(node.consistentId(), node); } @@ -326,7 +328,7 @@ public List createBaselineView( if (!consIdMap.containsKey(consId)) { DetachedClusterNode node = new DetachedClusterNode(consId, e.getValue()); - if (nodeFilter == null || CU.affinityNode(node, nodeFilter)) + if (nullNodeFilter || CU.affinityNode(node, nodeFilter)) consIdMap.put(consId, node); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/ChangeGlobalStateMessage.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/ChangeGlobalStateMessage.java index 81855fcbf44a8..51a65bb9fd3c5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/ChangeGlobalStateMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/ChangeGlobalStateMessage.java @@ -70,6 +70,9 @@ public class ChangeGlobalStateMessage implements DiscoveryCustomMessage { * @param initiatingNodeId Node initiated state change. * @param storedCfgs Configurations read from persistent store. * @param activate New cluster state. + * @param baselineTopology Baseline topology. + * @param forceChangeBaselineTopology Force change baseline topology flag. + * @param timestamp Timestamp. */ public ChangeGlobalStateMessage( UUID reqId, @@ -78,8 +81,7 @@ public ChangeGlobalStateMessage( boolean activate, BaselineTopology baselineTopology, boolean forceChangeBaselineTopology, - long timestamp - ) { + long timestamp) { assert reqId != null; assert initiatingNodeId != null; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/DiscoveryDataClusterState.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/DiscoveryDataClusterState.java index a28bfc9f4a7b0..b3879ee1dd08d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/DiscoveryDataClusterState.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/DiscoveryDataClusterState.java @@ -200,6 +200,23 @@ public boolean active() { return prevState != null ? prevState.baselineTopology() : null; } + /** + * + * @return {@code True} if baseline changed, {@code False} if not. 
+ */ + public boolean baselineChanged() { + BaselineTopology prevBLT = previousBaselineTopology(); + BaselineTopology curBLT = baselineTopology(); + + if (prevBLT == null && curBLT != null) + return true; + + if (prevBLT != null && curBLT != null) + return !prevBLT.equals(curBLT); + + return false; + } + /** + * @return {@code True} if baseline topology is set in the cluster. {@code False} otherwise. + */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/GridClusterStateProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/GridClusterStateProcessor.java index c4a3126a6e42f..778d9f24e8631 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/GridClusterStateProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/GridClusterStateProcessor.java @@ -17,18 +17,6 @@ package org.apache.ignite.internal.processors.cluster; -import java.io.Serializable; -import java.util.ArrayList; -import java.util.Collection; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.UUID; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.ConcurrentMap; -import java.util.concurrent.atomic.AtomicReference; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteCompute; @@ -59,8 +47,11 @@ import org.apache.ignite.internal.processors.task.GridInternal; import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.future.IgniteFinishedFutureImpl; +import org.apache.ignite.internal.util.future.IgniteFutureImpl; import org.apache.ignite.internal.util.tostring.GridToStringExclude; import org.apache.ignite.internal.util.tostring.GridToStringInclude; +import
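`baselineChanged()` above has to handle nulls explicitly: a change is reported only when a baseline appears where none existed, or when both exist and differ; a current baseline of null never counts as a change. The same null-handling in isolation (plain Objects stand in for BaselineTopology):

```java
public class BaselineChangeSketch {
    /** Mirrors baselineChanged(): only reports a change when a new topology is actually set. */
    static boolean changed(Object prevBlt, Object curBlt) {
        if (prevBlt == null && curBlt != null)
            return true;

        if (prevBlt != null && curBlt != null)
            return !prevBlt.equals(curBlt);

        return false; // curBlt == null: no baseline to compare against.
    }
}
```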
org.apache.ignite.internal.util.typedef.C1; import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.internal.util.typedef.CI2; import org.apache.ignite.internal.util.typedef.F; @@ -79,6 +70,19 @@ import org.apache.ignite.spi.discovery.DiscoveryDataBag; import org.jetbrains.annotations.Nullable; +import java.io.Serializable; +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.atomic.AtomicReference; + import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; import static org.apache.ignite.events.EventType.EVT_NODE_LEFT; import static org.apache.ignite.internal.GridComponent.DiscoveryDataExchangeType.STATE_PROC; @@ -173,6 +177,11 @@ public boolean compatibilityMode() { /** {@inheritDoc} */ @Override public boolean publicApiActiveState(boolean waitForTransition) { + return publicApiActiveStateAsync(waitForTransition).get(); + } + + /** {@inheritDoc} */ + @Override public IgniteFuture publicApiActiveStateAsync(boolean asyncWaitForTransition) { if (ctx.isDaemon()) return sendComputeCheckGlobalState(); @@ -184,32 +193,34 @@ public boolean compatibilityMode() { Boolean transitionRes = globalState.transitionResult(); if (transitionRes != null) - return transitionRes; + return new IgniteFinishedFutureImpl<>(transitionRes); else { - if (waitForTransition) { - GridFutureAdapter fut = transitionFuts.get(globalState.transitionRequestId()); + GridFutureAdapter fut = transitionFuts.get(globalState.transitionRequestId()); + if (fut != null) { + if (asyncWaitForTransition) { + return new IgniteFutureImpl<>(fut.chain(new C1, Boolean>() { + @Override public Boolean apply(IgniteInternalFuture fut) { + Boolean res = globalState.transitionResult(); - if (fut != null) { - try { - 
fut.get(); - } - catch (IgniteCheckedException ex) { - throw new IgniteException(ex); - } + assert res != null; + + return res; + } + })); } + else + return new IgniteFinishedFutureImpl<>(globalState.baselineChanged()); + } - transitionRes = globalState.transitionResult(); + transitionRes = globalState.transitionResult(); - assert transitionRes != null; + assert transitionRes != null; - return transitionRes; - } - else - return false; + return new IgniteFinishedFutureImpl<>(transitionRes); } } else - return globalState.active(); + return new IgniteFinishedFutureImpl<>(globalState.active()); } /** {@inheritDoc} */ @@ -404,7 +415,7 @@ private boolean isBaselineSatisfied(BaselineTopology blt, List serv if (msg.requestId().equals(state.transitionRequestId())) { log.info("Received state change finish message: " + msg.clusterActive()); - globalState = globalState.finish(msg.success()); + globalState = state.finish(msg.success()); afterStateChangeFinished(msg.id(), msg.success()); @@ -839,7 +850,8 @@ private IgniteInternalFuture changeGlobalState0(final boolean activate, DiscoveryDataClusterState curState = globalState; - if (!curState.transition() && curState.active() == activate && BaselineTopology.equals(curState.baselineTopology(), blt)) + if (!curState.transition() && curState.active() == activate + && (!activate || BaselineTopology.equals(curState.baselineTopology(), blt))) return new GridFinishedFuture<>(); GridChangeGlobalStateFuture startedFut = null; @@ -893,15 +905,19 @@ private IgniteInternalFuture changeGlobalState0(final boolean activate, forceChangeBaselineTopology, System.currentTimeMillis()); + IgniteInternalFuture resFut = wrapStateChangeFuture(startedFut, msg); + try { if (log.isInfoEnabled()) U.log(log, "Sending " + prettyStr(activate) + " request with BaselineTopology " + blt); ctx.discovery().sendCustomEvent(msg); - if (ctx.isStopping()) - startedFut.onDone(new IgniteCheckedException("Failed to execute " + prettyStr(activate) + " request, " + - "node 
is stopping.")); + if (ctx.isStopping()) { + String errMsg = "Failed to execute " + prettyStr(activate) + " request, node is stopping."; + + startedFut.onDone(new IgniteCheckedException(errMsg)); + } } catch (IgniteCheckedException e) { U.error(log, "Failed to send global state change request: " + activate, e); @@ -909,7 +925,7 @@ private IgniteInternalFuture changeGlobalState0(final boolean activate, startedFut.onDone(e); } - return wrapStateChangeFuture(startedFut, msg); + return resFut; } /** {@inheritDoc} */ @@ -1066,7 +1082,7 @@ private void sendComputeChangeGlobalState( * * @return Cluster state, {@code True} if cluster active, {@code False} if inactive. */ - private boolean sendComputeCheckGlobalState() { + private IgniteFuture sendComputeCheckGlobalState() { AffinityTopologyVersion topVer = ctx.discovery().topologyVersionEx(); if (log.isInfoEnabled()) { @@ -1079,11 +1095,11 @@ private boolean sendComputeCheckGlobalState() { ClusterGroupAdapter clusterGroupAdapter = (ClusterGroupAdapter)ctx.cluster().get().forServers(); if (F.isEmpty(clusterGroupAdapter.nodes())) - return false; + return new IgniteFinishedFutureImpl<>(false); IgniteCompute comp = clusterGroupAdapter.compute(); - return comp.call(new IgniteCallable() { + return comp.callAsync(new IgniteCallable() { @IgniteInstanceResource private Ignite ig; @@ -1160,6 +1176,8 @@ private void onFinalActivate(final StateChangeRequest req) { ctx.task().onActivate(ctx); + ctx.encryption().onActivate(ctx); + if (log.isInfoEnabled()) log.info("Successfully performed final activation steps [nodeId=" + ctx.localNodeId() + ", client=" + client + ", topVer=" + req.topologyVersion() + "]"); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/IGridClusterStateProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/IGridClusterStateProcessor.java index bc72a516b860a..1f677ed0cdef3 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/IGridClusterStateProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/cluster/IGridClusterStateProcessor.java @@ -18,9 +18,6 @@ package org.apache.ignite.internal.processors.cluster; -import java.util.Collection; -import java.util.Map; -import java.util.UUID; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cluster.BaselineNode; import org.apache.ignite.cluster.ClusterNode; @@ -29,8 +26,13 @@ import org.apache.ignite.internal.processors.GridProcessor; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.StateChangeRequest; +import org.apache.ignite.lang.IgniteFuture; import org.jetbrains.annotations.Nullable; +import java.util.Collection; +import java.util.Map; +import java.util.UUID; + /** * */ @@ -40,6 +42,11 @@ public interface IGridClusterStateProcessor extends GridProcessor { */ boolean publicApiActiveState(boolean waitForTransition); + /** + * @return Cluster state to be used on public API. + */ + IgniteFuture publicApiActiveStateAsync(boolean waitForTransition); + /** * @param discoCache Discovery data cache. * @return If transition is in progress returns future which is completed when transition finishes. 
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousMessage.java index ec04ac355e5a6..5e7cc9d3a4576 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousMessage.java @@ -274,4 +274,4 @@ public void dataBytes(byte[] dataBytes) { @Override public String toString() { return S.toString(GridContinuousMessage.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousProcessor.java index 4cbff6d978974..f95740efdd077 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/continuous/GridContinuousProcessor.java @@ -875,11 +875,11 @@ public IgniteInternalFuture startRoutine(GridContinuousHandler hnd, true); } - ctx.discovery().sendCustomEvent(msg); - } - catch (IgniteCheckedException e) { - startFuts.remove(routineId); - locInfos.remove(routineId); + ctx.discovery().sendCustomEvent(msg); + } + catch (IgniteCheckedException e) { + startFuts.remove(routineId); + locInfos.remove(routineId); unregisterHandler(routineId, hnd, true); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImpl.java index bf1e13db1d377..e3ea0be2a32e4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImpl.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImpl.java @@ -85,10 +85,10 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl; import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; -import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.processors.cacheobject.IgniteCacheObjectProcessor; import org.apache.ignite.internal.processors.dr.GridDrType; @@ -725,7 +725,7 @@ private void loadData(Collection entries, GridFutur keys.add(new KeyCacheObjectWrapper(e.getKey())); } - load0(entries, fut, keys, 0); + load0(entries, fut, keys, 0, null, null); } /** {@inheritDoc} */ @@ -798,13 +798,18 @@ private void acquireRemapSemaphore() throws IgniteInterruptedCheckedException { * @param resFut Result future. * @param activeKeys Active keys. * @param remaps Remaps count. + * @param remapNode Node for remap. In case update with {@code allowOverwrite() == false} fails on one node, + * we don't need to send update request to all affinity nodes again, if the topology version has not changed. + * @param remapTopVer Topology version.
*/ private void load0( Collection entries, final GridFutureAdapter resFut, @Nullable final Collection activeKeys, - final int remaps - ) { + final int remaps, + ClusterNode remapNode, + AffinityTopologyVersion remapTopVer + ) { try { assert entries != null; @@ -876,7 +881,10 @@ else if (rcvr != null) if (key.partition() == -1) key.partition(cctx.affinity().partition(key, false)); - nodes = nodes(key, topVer, cctx); + if (!allowOverwrite() && remapNode != null && F.eq(topVer, remapTopVer)) + nodes = Collections.singletonList(remapNode); + else + nodes = nodes(key, topVer, cctx); } catch (IgniteCheckedException e) { resFut.onDone(e); @@ -903,6 +911,7 @@ else if (rcvr != null) } for (final Map.Entry> e : mappings.entrySet()) { + final ClusterNode node = e.getKey(); final UUID nodeId = e.getKey().id(); Buffer buf = bufMappings.get(nodeId); @@ -964,7 +973,7 @@ else if (remaps + 1 > maxRemapCnt) { if (cancelled) closedException(); - load0(entriesForNode, resFut, activeKeys, remaps + 1); + load0(entriesForNode, resFut, activeKeys, remaps + 1, node, topVer); } catch (Throwable ex) { resFut.onDone( @@ -2195,11 +2204,11 @@ protected static class IsolatedUpdater implements StreamReceiver cctx = internalCache.context(); - AffinityTopologyVersion topVer = cctx.isLocal() ? 
- cctx.affinity().affinityTopologyVersion() : - cctx.shared().exchange().readyAffinityVersion(); + GridDhtTopologyFuture topFut = cctx.shared().exchange().lastFinishedFuture(); + + AffinityTopologyVersion topVer = topFut.topologyVersion(); GridCacheVersion ver = cctx.versions().isolatedStreamerVersion(); @@ -2260,6 +2269,13 @@ else if (ttl == CU.TTL_NOT_CHANGED) expiryTime = CU.toExpireTime(ttl); } + if (topFut != null) { + Throwable err = topFut.validateCache(cctx, false, false, entry.key(), null); + + if (err != null) + throw new IgniteCheckedException(err); + } + boolean primary = cctx.affinity().primaryByKey(cctx.localNode(), entry.key(), topVer); entry.initialValue(e.getValue(), @@ -2271,7 +2287,7 @@ else if (ttl == CU.TTL_NOT_CHANGED) primary ? GridDrType.DR_LOAD : GridDrType.DR_PRELOAD, false); - entry.touch(topVer); + entry.touch(); CU.unwindEvicts(cctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerRequest.java index f70ee9c486fa0..87313de15ebd2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerRequest.java @@ -369,7 +369,7 @@ public int partition() { writer.incrementState(); case 13: - if (!writer.writeMessage("topVer", topVer)) + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -508,7 +508,7 @@ public int partition() { reader.incrementState(); case 13: - topVer = reader.readMessage("topVer"); + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerResponse.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerResponse.java index 4cb46e1582022..56f37c2cf0b98 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerResponse.java @@ -169,4 +169,4 @@ public boolean forceLocalDeployment() { @Override public byte fieldsCount() { return 3; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/AtomicDataStructureProxy.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/AtomicDataStructureProxy.java index 2c336a052e648..17672341b2118 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/AtomicDataStructureProxy.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/AtomicDataStructureProxy.java @@ -149,7 +149,7 @@ private IllegalStateException removedError() { * @return Error. 
*/ private IllegalStateException suspendedError() { - throw new IgniteCacheRestartingException(new IgniteFutureImpl<>(suspendFut), "Underlying cache is restarting: " + ctx.name()); + throw new IgniteCacheRestartingException(new IgniteFutureImpl<>(suspendFut), ctx.name()); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/DataStructuresProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/DataStructuresProcessor.java index 92671e00d2e6f..18b21211c2e2c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/DataStructuresProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/DataStructuresProcessor.java @@ -705,6 +705,10 @@ private void removeDataStructure( }); } + /** + * Suspends calls for this cache if it is an atomics cache. + * @param cacheName Cache name to suspend. + */ public void suspend(String cacheName) { for (Map.Entry e : dsMap.entrySet()) { String cacheName0 = ATOMICS_CACHE_NAME + "@" + e.getKey().groupName(); @@ -714,12 +718,24 @@ public void suspend(String cacheName) { } } - public void restart(IgniteInternalCache cache) { + + /** + * Returns this cache to normal operation if it was suspended (and if it is an atomics cache). + * @param cacheName Cache name to restart.
+ */ + public void restart(String cacheName, IgniteInternalCache cache) { for (Map.Entry e : dsMap.entrySet()) { String cacheName0 = ATOMICS_CACHE_NAME + "@" + e.getKey().groupName(); - if (cacheName0.equals(cache.name())) - e.getValue().restart(cache); + if (cacheName0.equals(cacheName)) { + if (cache != null) + e.getValue().restart(cache); + else { + e.getValue().onRemoved(); + + dsMap.remove(e.getKey(), e.getValue()); + } + } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheRemovable.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheRemovable.java index d26a153f0289d..aa66ae01fe614 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheRemovable.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheRemovable.java @@ -35,7 +35,14 @@ public interface GridCacheRemovable { */ public void needCheckNotRemoved(); + /** + * Suspends calls for this object. + */ public void suspend(); + /** + * Returns this object to normal operation. + * @param cache Cache to update with.
+ */ public void restart(IgniteInternalCache cache); } \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheSetProxy.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheSetProxy.java index 729f6ebdb6c4e..541ca3026ac89 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheSetProxy.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridCacheSetProxy.java @@ -365,7 +365,7 @@ public void blockOnRemove() { if (delegate.separated()) { IgniteInternalFuture fut = cctx.kernalContext().cache().dynamicDestroyCache( - cctx.cache().name(), false, true, false); + cctx.cache().name(), false, true, false, null); ((GridFutureAdapter)fut).ignoreInterrupts(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/failure/FailureProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/failure/FailureProcessor.java index b48eff1311719..e47115a103c88 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/failure/FailureProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/failure/FailureProcessor.java @@ -24,6 +24,7 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.failure.FailureContext; import org.apache.ignite.failure.FailureHandler; +import org.apache.ignite.failure.NoOpFailureHandler; import org.apache.ignite.failure.StopNodeOrHaltFailureHandler; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.processors.GridProcessorAdapter; @@ -78,17 +79,20 @@ public FailureProcessor(GridKernalContext ctx) { U.quietAndInfo(log, "Configured failure handler: [hnd=" + hnd + ']'); } + /** + * @return {@code true} if the node will be stopped by the current handler in the near future.
+ */ + public boolean nodeStopping() { + return failureCtx != null && !(hnd instanceof NoOpFailureHandler); + } + /** * This method is used to initialize the local failure handler if {@link IgniteConfiguration} doesn't contain a configured one. * * @return Default {@link FailureHandler} implementation. */ protected FailureHandler getDefaultFailureHandler() { - FailureHandler hnd = new StopNodeOrHaltFailureHandler(); - - hnd.setIgnoredFailureTypes(Collections.singleton(SYSTEM_WORKER_BLOCKED)); - - return hnd; + return new StopNodeOrHaltFailureHandler(); } /** @@ -102,9 +106,10 @@ public FailureContext failureContext() { * Processes failure according to the configured {@link FailureHandler}. * * @param failureCtx Failure context. + * @return {@code true} if this very call led to Ignite node invalidation. */ - public void process(FailureContext failureCtx) { - process(failureCtx, hnd); + public boolean process(FailureContext failureCtx) { + return process(failureCtx, hnd); } /** @@ -112,13 +117,14 @@ public void process(FailureContext failureCtx) { * * @param failureCtx Failure context. * @param hnd Failure handler. + * @return {@code true} if this very call led to Ignite node invalidation. */ - public synchronized void process(FailureContext failureCtx, FailureHandler hnd) { + public synchronized boolean process(FailureContext failureCtx, FailureHandler hnd) { assert failureCtx != null; assert hnd != null; if (this.failureCtx != null) // Node already terminating, no reason to process more errors. - return; + return false; U.error(ignite.log(), "Critical system error detected.
Will be handled accordingly to configured handler " + "[hnd=" + hnd + ", failureCtx=" + failureCtx + ']', failureCtx.error()); @@ -136,5 +142,7 @@ public synchronized void process(FailureContext failureCtx, FailureHandler hnd) log.error("Ignite node is in invalid state due to a critical failure."); } + + return invalidated; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopClassLoader.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopClassLoader.java index e2995bc965fed..7dfc13823b0cf 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopClassLoader.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopClassLoader.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.processors.hadoop; +import java.util.Collections; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.internal.util.ClassCache; @@ -176,11 +177,24 @@ private void initializeNativeLibraries(@Nullable String[] usrLibs) { } // Link libraries to class loader. 
- Vector ldrLibs = nativeLibraries(this); + Object ldrLibsObj = nativeLibraries(this); - synchronized (ldrLibs) { - ldrLibs.addAll(res); + if (ldrLibsObj instanceof Vector) { + Vector ldrLibs = (Vector)ldrLibsObj; + + synchronized (ldrLibs) { + ldrLibs.addAll(res); + } + } + else if (ldrLibsObj instanceof ConcurrentHashMap) { + ConcurrentHashMap ldrLibs = (ConcurrentHashMap)ldrLibsObj; + + synchronized (ldrLibs) { + for (Object nl : res) + ldrLibs.put(nativeLibraryName(nl), nl); + } } + } /** @@ -213,9 +227,17 @@ private static Collection initializeNativeLibraries0(Collection ldrLibObjs = nativeLibraries(ldr); + Object ldrLibObject = nativeLibraries(ldr); - synchronized (ldrLibObjs) { + Collection ldrLibObjs = null; + if (ldrLibObject instanceof Vector) + ldrLibObjs = (Vector)ldrLibObject; + else if (ldrLibObject instanceof ConcurrentHashMap) + ldrLibObjs = ((ConcurrentHashMap)ldrLibObject).values(); + else + ldrLibObjs = Collections.emptySet(); + + synchronized (ldrLibObject) { for (Object ldrLibObj : ldrLibObjs) { String name = nativeLibraryName(ldrLibObj); @@ -225,7 +247,8 @@ private static Collection initializeNativeLibraries0(Collection initializeNativeLibraries0(Collection nativeLibraries(ClassLoader ldr) { + private static Object nativeLibraries(ClassLoader ldr) { assert ldr != null; return U.field(ldr, "nativeLibraries"); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelper.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelper.java index 7936fef92ea6f..ae9985dc6d8ae 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelper.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelper.java @@ -59,4 +59,9 @@ public interface HadoopHelper { * @return Work directory. */ public String workDirectory(); + + /** + * Close helper. 
+ */ + public void close(); } \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopNoopHelper.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopNoopHelper.java index f8f870fabf858..986af1fe5e3da 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopNoopHelper.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopNoopHelper.java @@ -62,6 +62,11 @@ public HadoopNoopHelper(GridKernalContext ctx) { throw unsupported(); } + /** {@inheritDoc} */ + @Override public void close() { + // No-op. + } + /** * @return Exception. */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopDirectShuffleMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopDirectShuffleMessage.java index f6bddb64fa366..e990cb3cd7303 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopDirectShuffleMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopDirectShuffleMessage.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.hadoop.shuffle; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.processors.hadoop.HadoopJobId; import org.apache.ignite.internal.processors.hadoop.message.HadoopMessage; import org.apache.ignite.internal.util.tostring.GridToStringInclude; @@ -35,6 +36,7 @@ /** * Direct shuffle message. 
*/ +@IgniteCodeGeneratingFail public class HadoopDirectShuffleMessage implements Message, HadoopMessage { /** */ private static final long serialVersionUID = 0L; @@ -268,4 +270,4 @@ public int dataLength() { @Override public String toString() { return S.toString(HadoopDirectShuffleMessage.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopShuffleMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopShuffleMessage.java index 07b8c2fd1635a..2660e281f20de 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopShuffleMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/hadoop/shuffle/HadoopShuffleMessage.java @@ -23,6 +23,7 @@ import java.nio.ByteBuffer; import java.util.concurrent.atomic.AtomicLong; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.processors.hadoop.HadoopJobId; import org.apache.ignite.internal.processors.hadoop.message.HadoopMessage; import org.apache.ignite.internal.util.GridUnsafe; @@ -36,6 +37,7 @@ /** * Shuffle message. 
*/ +@IgniteCodeGeneratingFail public class HadoopShuffleMessage implements Message, HadoopMessage { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsAckMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsAckMessage.java index 5ae3fed3e47e0..76a08f9877aab 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsAckMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsAckMessage.java @@ -195,4 +195,4 @@ public IgniteCheckedException error() { @Override public byte fieldsCount() { return 3; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlockKey.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlockKey.java index 736525de1489a..98bf306a479df 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlockKey.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlockKey.java @@ -297,4 +297,4 @@ public boolean evictExclude() { @Override public String toString() { return S.toString(IgfsBlockKey.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlocksMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlocksMessage.java index 2ec54b2ed0663..5448c50b9352d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlocksMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsBlocksMessage.java @@ -176,4 +176,4 @@ public Map blocks() { @Override public byte fieldsCount() { return 3; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsCommunicationMessage.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsCommunicationMessage.java index 412c45baf6daf..7f5daa617bdb7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsCommunicationMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsCommunicationMessage.java @@ -77,4 +77,4 @@ public void finishUnmarshal(Marshaller marsh, @Nullable ClassLoader ldr) throws @Override public byte fieldsCount() { return 0; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsDeleteMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsDeleteMessage.java index b5e96664b4634..2c2c98c82f18b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsDeleteMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsDeleteMessage.java @@ -190,4 +190,4 @@ public IgniteCheckedException error() { @Override public String toString() { return S.toString(IgfsDeleteMessage.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerRequest.java index bb605b857641c..c64b627dac641 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerRequest.java @@ -157,4 +157,4 @@ public Collection fragmentRanges() { @Override public byte fieldsCount() { return 2; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerResponse.java index 
76793bfeb6d0e..f7857dcfad154 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsFragmentizerResponse.java @@ -118,4 +118,4 @@ public IgniteUuid fileId() { @Override public byte fieldsCount() { return 1; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsImpl.java index ff53e00fc6de8..bac536dde4cd5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsImpl.java @@ -276,6 +276,9 @@ public final class IgfsImpl implements IgfsEx { // Restore interrupted flag. if (interrupted) Thread.currentThread().interrupt(); + + if (dualPool != null) + dualPool.shutdownNow(); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsSyncMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsSyncMessage.java index 2b32084b7f3e6..2707de543002e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsSyncMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/igfs/IgfsSyncMessage.java @@ -149,4 +149,4 @@ public boolean response() { @Override public byte fieldsCount() { return 2; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerAbstractConnectionContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerAbstractConnectionContext.java index 1c19d5599cd20..14fcf9d9f631d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerAbstractConnectionContext.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerAbstractConnectionContext.java @@ -65,10 +65,8 @@ public GridKernalContext kernalContext() { return ctx; } - /** - * @return Security context. - */ - @Nullable public SecurityContext securityContext() { + /** {@inheritDoc} */ + @Nullable @Override public SecurityContext securityContext() { return secCtx; } @@ -128,4 +126,10 @@ private AuthenticationContext authenticateExternal(String user, String pwd) thro return authCtx; } + + /** {@inheritDoc} */ + @Override public void onDisconnected() { + if (ctx.security().enabled()) + ctx.security().onSessionExpired(secCtx.subject().id()); + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerConnectionContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerConnectionContext.java index b693cb6f6ef38..c39bfe20a443a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerConnectionContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerConnectionContext.java @@ -20,6 +20,7 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.binary.BinaryReaderExImpl; import org.apache.ignite.internal.processors.authentication.AuthorizationContext; +import org.apache.ignite.internal.processors.security.SecurityContext; import org.jetbrains.annotations.Nullable; /** @@ -76,4 +77,9 @@ void initializeFromHandshake(ClientListenerProtocolVersion ver, BinaryReaderExIm * @return authorization context. */ @Nullable AuthorizationContext authorizationContext(); + + /** + * @return Security context. 
+ */ + @Nullable SecurityContext securityContext(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerMessageParser.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerMessageParser.java index ab80f47710225..c3782ef732fa7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerMessageParser.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerMessageParser.java @@ -27,7 +27,7 @@ public interface ClientListenerMessageParser { * @param msg Message. * @return Request. */ - public ClientListenerRequest decode(byte[] msg); + ClientListenerRequest decode(byte[] msg); /** * Encode response to byte array. @@ -35,5 +35,21 @@ public interface ClientListenerMessageParser { * @param resp Response. * @return Message. */ - public byte[] encode(ClientListenerResponse resp); + byte[] encode(ClientListenerResponse resp); + + /** + * Decodes the command type. Allows the command (message type) to be recognized without decoding the entire message. + * + * @param msg Message. + * @return Command type. + */ + int decodeCommandType(byte[] msg); + + /** + * Decodes the request Id. Allows the request Id, if any, to be recognized without decoding the entire message. + * + * @param msg Message. + * @return Request Id.
+ */ + long decodeRequestId(byte[] msg); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerNioListener.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerNioListener.java index 0eb6ac4d480e3..defde3df5b0a8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerNioListener.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerNioListener.java @@ -27,11 +27,14 @@ import org.apache.ignite.internal.binary.streams.BinaryHeapInputStream; import org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream; import org.apache.ignite.internal.binary.streams.BinaryInputStream; +import org.apache.ignite.internal.processors.authentication.AuthorizationContext; import org.apache.ignite.internal.processors.authentication.IgniteAccessControlException; import org.apache.ignite.internal.processors.odbc.jdbc.JdbcConnectionContext; import org.apache.ignite.internal.processors.odbc.odbc.OdbcConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientStatus; +import org.apache.ignite.internal.processors.security.SecurityContext; +import org.apache.ignite.internal.processors.security.SecurityContextHolder; import org.apache.ignite.internal.util.GridSpinBusyLock; import org.apache.ignite.internal.util.nio.GridNioServerListenerAdapter; import org.apache.ignite.internal.util.nio.GridNioSession; @@ -59,7 +62,7 @@ public class ClientListenerNioListener extends GridNioServerListenerAdapter 0 ? 
idleTimeout : Long.MAX_VALUE) .build(); - srv0.start(); - srv = srv0; ctx.ports().registerPort(port, IgnitePortProtocol.TCP, getClass()); @@ -196,6 +198,14 @@ public ClientListenerProcessor(GridKernalContext ctx) { } } + /** {@inheritDoc} */ + @Override public void onKernalStart(boolean active) throws IgniteCheckedException { + super.onKernalStart(active); + + if (srv != null) + srv.start(); + } + /** * Register an Ignite MBean for managing clients connections. */ @@ -253,6 +263,46 @@ private void unregisterMBean() { throws IgniteCheckedException { proceedSessionOpened(ses); } + + @Override public void onMessageReceived(GridNioSession ses, Object msg) throws IgniteCheckedException { + ClientListenerConnectionContext connCtx = ses.meta(ClientListenerNioListener.CONN_CTX_META_KEY); + + if (connCtx != null && connCtx.parser() != null) { + byte[] inMsg; + + int cmdType; + + long reqId; + + try { + inMsg = (byte[])msg; + + cmdType = connCtx.parser().decodeCommandType(inMsg); + + reqId = connCtx.parser().decodeRequestId(inMsg); + } + catch (Exception e) { + U.error(log, "Failed to parse client request.", e); + + ses.close(); + + return; + } + + if (connCtx.handler().isCancellationCommand(cmdType)) { + proceedMessageReceived(ses, msg); + + CANCEL_COUNTER.incrementAndGet(); + } + else { + connCtx.handler().registerRequest(reqId, cmdType); + + super.onMessageReceived(ses, msg); + } + } + else + super.onMessageReceived(ses, msg); + } }; GridNioFilter codecFilter = new GridNioCodecFilter(new ClientListenerBufferedParser(), log, false); @@ -548,4 +598,4 @@ else if (ctx instanceof OdbcConnectionContext) return sb.append(']').toString(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerRequestHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerRequestHandler.java index cebde08add0b1..6e901e605a23b 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerRequestHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/ClientListenerRequestHandler.java @@ -29,7 +29,7 @@ public interface ClientListenerRequestHandler { * @param req Request. * @return Response. */ - public ClientListenerResponse handle(ClientListenerRequest req); + ClientListenerResponse handle(ClientListenerRequest req); /** * Handle exception. @@ -38,12 +38,33 @@ public interface ClientListenerRequestHandler { * @param e Exception. * @param req Request. * @return Error response. */ - public ClientListenerResponse handleException(Exception e, ClientListenerRequest req); + ClientListenerResponse handleException(Exception e, ClientListenerRequest req); /** * Write successful handshake response. * * @param writer Binary writer. */ - public void writeHandshake(BinaryWriterExImpl writer); + void writeHandshake(BinaryWriterExImpl writer); + + /** + * Detects whether the given command is a cancellation command. + * + * @param cmdId Command Id. + * @return {@code true} if the given command is a cancellation command, {@code false} otherwise. + */ + boolean isCancellationCommand(int cmdId); + + /** + * Registers a request for further cancellation, if any. + * @param reqId Request Id. + * @param cmdType Command type. + */ + void registerRequest(long reqId, int cmdType); + + /** + * Tries to unregister a request. + * @param reqId Request Id. + */ + void unregisterRequest(long reqId); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/SqlStateCode.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/SqlStateCode.java index 68ac200f428f6..67c56883ce234 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/SqlStateCode.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/SqlStateCode.java @@ -3,11 +3,12 @@ * contributor license agreements.
See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. @@ -64,9 +65,15 @@ private SqlStateCode() { /** Transaction state exception. */ public final static String TRANSACTION_STATE_EXCEPTION = "25000"; + /** Serialization failure. */ + public final static String SERIALIZATION_FAILURE = "40001"; + /** Parsing exception. */ public final static String PARSING_EXCEPTION = "42000"; /** Internal error. */ public final static String INTERNAL_ERROR = "50000"; // Generic value for custom "50" class. + + /** Query canceled. */ + public static final String QUERY_CANCELLED = "57014"; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBatchExecuteRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBatchExecuteRequest.java index 404a1c931b477..0568e1c086583 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBatchExecuteRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBatchExecuteRequest.java @@ -72,7 +72,8 @@ protected JdbcBatchExecuteRequest(byte type) { * @param autoCommit Client auto commit flag state. * @param lastStreamBatch {@code true} in case the request is the last batch at the stream.
*/ - public JdbcBatchExecuteRequest(String schemaName, List queries, boolean autoCommit, boolean lastStreamBatch) { + public JdbcBatchExecuteRequest(String schemaName, List queries, boolean autoCommit, + boolean lastStreamBatch) { super(BATCH_EXEC); assert lastStreamBatch || !F.isEmpty(queries); @@ -92,7 +93,8 @@ public JdbcBatchExecuteRequest(String schemaName, List queries, boole * @param autoCommit Client auto commit flag state. * @param lastStreamBatch {@code true} in case the request is the last batch at the stream. */ - protected JdbcBatchExecuteRequest(byte type, String schemaName, List queries, boolean autoCommit, boolean lastStreamBatch) { + protected JdbcBatchExecuteRequest(byte type, String schemaName, List queries, boolean autoCommit, + boolean lastStreamBatch) { super(type); assert lastStreamBatch || !F.isEmpty(queries); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadAckResult.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadAckResult.java index b0750fd5f6aab..0d97009517ca7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadAckResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadAckResult.java @@ -33,8 +33,8 @@ * @see SqlBulkLoadCommand */ public class JdbcBulkLoadAckResult extends JdbcResult { - /** Query ID for matching this command on server in further {@link JdbcBulkLoadBatchRequest} commands. */ - private long qryId; + /** Cursor ID for matching this command on server in further {@link JdbcBulkLoadBatchRequest} commands. 
*/ + private long cursorId; /** * Bulk load parameters, which are parsed on the server side and sent to client to specify @@ -46,30 +46,30 @@ public class JdbcBulkLoadAckResult extends JdbcResult { public JdbcBulkLoadAckResult() { super(BULK_LOAD_ACK); - qryId = 0; + cursorId = 0; params = null; } /** * Constructs a request from server (in form of reply) to send files from client to server. * - * @param qryId Query ID to send in further {@link JdbcBulkLoadBatchRequest}s. + * @param cursorId Cursor ID to send in further {@link JdbcBulkLoadBatchRequest}s. * @param params Various parameters for sending batches from client side. */ - public JdbcBulkLoadAckResult(long qryId, BulkLoadAckClientParameters params) { + public JdbcBulkLoadAckResult(long cursorId, BulkLoadAckClientParameters params) { super(BULK_LOAD_ACK); - this.qryId = qryId; + this.cursorId = cursorId; this.params = params; } /** - * Returns the query ID. + * Returns the cursor ID. * - * @return Query ID. + * @return Cursor ID. */ - public long queryId() { - return qryId; + public long cursorId() { + return cursorId; } /** @@ -86,7 +86,7 @@ public BulkLoadAckClientParameters params() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.writeBinary(writer, ver); - writer.writeLong(qryId); + writer.writeLong(cursorId); writer.writeString(params.localFileName()); writer.writeInt(params.packetSize()); } @@ -96,7 +96,7 @@ public BulkLoadAckClientParameters params() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.readBinary(reader, ver); - qryId = reader.readLong(); + cursorId = reader.readLong(); String locFileName = reader.readString(); int batchSize = reader.readInt(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadBatchRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadBatchRequest.java index 347a5df0c98c0..2467465fc9a98 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadBatchRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadBatchRequest.java @@ -3,11 +3,12 @@ * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. @@ -48,8 +49,8 @@ public class JdbcBulkLoadBatchRequest extends JdbcRequest { */ public static final int CMD_FINISHED_EOF = 2; - /** QueryID of the original COPY command request. */ - private long qryId; + /** CursorId of the original COPY command request. */ + private long cursorId; /** Batch index starting from 0. */ private int batchIdx; @@ -66,7 +67,7 @@ public class JdbcBulkLoadBatchRequest extends JdbcRequest { public JdbcBulkLoadBatchRequest() { super(BULK_LOAD_BATCH); - qryId = -1; + cursorId = -1; batchIdx = -1; cmd = CMD_UNKNOWN; data = null; @@ -76,28 +77,28 @@ public JdbcBulkLoadBatchRequest() { * Creates the request with specified parameters and zero-length data. * Typically used with {@link #CMD_FINISHED_ERROR} and {@link #CMD_FINISHED_EOF}. * - * @param qryId The query ID from the {@link JdbcBulkLoadAckResult}. + * @param cursorId The cursor ID from the {@link JdbcBulkLoadAckResult}.
* @param batchIdx Index of the current batch starting with 0. * @param cmd The command ({@link #CMD_CONTINUE}, {@link #CMD_FINISHED_EOF}, or {@link #CMD_FINISHED_ERROR}). */ @SuppressWarnings("ZeroLengthArrayAllocation") - public JdbcBulkLoadBatchRequest(long qryId, int batchIdx, int cmd) { - this(qryId, batchIdx, cmd, new byte[0]); + public JdbcBulkLoadBatchRequest(long cursorId, int batchIdx, int cmd) { + this(cursorId, batchIdx, cmd, new byte[0]); } /** * Creates the request with the specified parameters. * - * @param qryId The query ID from the {@link JdbcBulkLoadAckResult}. + * @param cursorId The cursor ID from the {@link JdbcBulkLoadAckResult}. * @param batchIdx Index of the current batch starting with 0. * @param cmd The command ({@link #CMD_CONTINUE}, {@link #CMD_FINISHED_EOF}, or {@link #CMD_FINISHED_ERROR}). * @param data The data block (zero length is acceptable). */ @SuppressWarnings("AssignmentOrReturnOfFieldWithMutableType") - public JdbcBulkLoadBatchRequest(long qryId, int batchIdx, int cmd, @NotNull byte[] data) { + public JdbcBulkLoadBatchRequest(long cursorId, int batchIdx, int cmd, @NotNull byte[] data) { super(BULK_LOAD_BATCH); - this.qryId = qryId; + this.cursorId = cursorId; this.batchIdx = batchIdx; assert isCmdValid(cmd) : "Invalid command value: " + cmd; @@ -107,12 +108,12 @@ public JdbcBulkLoadBatchRequest(long qryId, int batchIdx, int cmd, @NotNull byte } /** - * Returns the original query ID. + * Returns the original cursor ID. * - * @return The original query ID. + * @return The original cursor ID. 
*/ - public long queryId() { - return qryId; + public long cursorId() { + return cursorId; } /** @@ -148,7 +149,7 @@ public int cmd() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.writeBinary(writer, ver); - writer.writeLong(qryId); + writer.writeLong(cursorId); writer.writeInt(batchIdx); writer.writeInt(cmd); writer.writeByteArray(data); @@ -159,7 +160,7 @@ public int cmd() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.readBinary(reader, ver); - qryId = reader.readLong(); + cursorId = reader.readLong(); batchIdx = reader.readInt(); int c = reader.readInt(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadProcessor.java index 97577917ff6c1..2018c0f5433f3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcBulkLoadProcessor.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.processors.odbc.jdbc; +import java.io.IOException; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteIllegalStateException; import org.apache.ignite.internal.processors.bulkload.BulkLoadProcessor; @@ -71,7 +72,7 @@ * {@link JdbcBulkLoadBatchRequest#CMD_FINISHED_ERROR} and the processing * is aborted on the both sides. */ -public class JdbcBulkLoadProcessor { +public class JdbcBulkLoadProcessor extends JdbcCursor { /** A core processor that handles incoming data packets. */ private final BulkLoadProcessor processor; @@ -82,8 +83,11 @@ public class JdbcBulkLoadProcessor { * Creates a JDBC-specific adapter for bulk load processor. * * @param processor Bulk load processor from the core to delegate calls to. + * @param reqId Id of the request that created given processor. 
*/ - public JdbcBulkLoadProcessor(BulkLoadProcessor processor) { + public JdbcBulkLoadProcessor(BulkLoadProcessor processor, long reqId) { + super(reqId); + this.processor = processor; nextBatchIdx = 0; } @@ -98,7 +102,7 @@ public JdbcBulkLoadProcessor(BulkLoadProcessor processor) { */ public void processBatch(JdbcBulkLoadBatchRequest req) throws IgniteCheckedException { - if (nextBatchIdx != req.batchIdx()) + if (nextBatchIdx != req.batchIdx() && req.cmd() != CMD_FINISHED_ERROR) throw new IgniteSQLException("Batch #" + (nextBatchIdx + 1) + " is missing. Received #" + req.batchIdx() + " instead."); @@ -127,10 +131,15 @@ public void processBatch(JdbcBulkLoadBatchRequest req) * Closes the underlying objects. * Currently we don't handle normal termination vs. abort. */ - public void close() throws Exception { - processor.close(); + @Override public void close() throws IOException { + try { + processor.close(); - nextBatchIdx = -1; + nextBatchIdx = -1; + } + catch (Exception e) { + throw new IOException("Unable to close processor: " + e.getMessage(), e); + } } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcConnectionContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcConnectionContext.java index c80136d140c9b..bd6b328f94a72 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcConnectionContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcConnectionContext.java @@ -57,8 +57,11 @@ public class JdbcConnectionContext extends ClientListenerAbstractConnectionConte /** Version 2.7.0: adds maximum length for columns feature.*/ static final ClientListenerProtocolVersion VER_2_7_0 = ClientListenerProtocolVersion.create(2, 7, 0); + /** Version 2.8.0: adds query id in order to implement cancel feature.*/ + static final ClientListenerProtocolVersion VER_2_8_0 = ClientListenerProtocolVersion.create(2, 8, 0); + 
/** Current version. */ - private static final ClientListenerProtocolVersion CURRENT_VER = VER_2_7_0; + private static final ClientListenerProtocolVersion CURRENT_VER = VER_2_8_0; /** Supported versions. */ private static final Set SUPPORTED_VERS = new HashSet<>(); @@ -83,6 +86,7 @@ public class JdbcConnectionContext extends ClientListenerAbstractConnectionConte static { SUPPORTED_VERS.add(CURRENT_VER); + SUPPORTED_VERS.add(VER_2_8_0); SUPPORTED_VERS.add(VER_2_7_0); SUPPORTED_VERS.add(VER_2_5_0); SUPPORTED_VERS.add(VER_2_4_0); @@ -207,5 +211,7 @@ public JdbcConnectionContext(GridKernalContext ctx, GridNioSession ses, GridSpin /** {@inheritDoc} */ @Override public void onDisconnected() { handler.onDisconnect(); + + super.onDisconnected(); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcCursor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcCursor.java new file mode 100644 index 0000000000000..8c1571468b86f --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcCursor.java @@ -0,0 +1,60 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.odbc.jdbc; + +import java.io.Closeable; +import java.util.concurrent.atomic.AtomicLong; + +/** + * JDBC Cursor. + */ +public abstract class JdbcCursor implements Closeable { + /** Cursor Id generator. */ + private static final AtomicLong CURSOR_ID_GENERATOR = new AtomicLong(); + + /** Cursor Id. */ + private final long cursorId; + + /** Id of the request that created given cursor. */ + private final long reqId; + + /** + * Constructor. + * + * @param reqId Id of the request that created given cursor. + */ + protected JdbcCursor(long reqId) { + cursorId = CURSOR_ID_GENERATOR.getAndIncrement(); + + this.reqId = reqId; + } + + /** + * @return Cursor Id. + */ + public long cursorId() { + return cursorId; + } + + /** + * @return Id of the request that created given cursor. + */ + public long requestId() { + return reqId; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcMessageParser.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcMessageParser.java index 1718c00bf4c57..66abf6d356538 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcMessageParser.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcMessageParser.java @@ -91,4 +91,19 @@ protected BinaryWriterExImpl createWriter(int cap) { res.writeBinary(writer, ver); return writer.array(); - }} + } + + /** {@inheritDoc} */ + @Override public int decodeCommandType(byte[] msg) { + assert msg != null; + + return JdbcRequest.readType(msg); + } + + /** {@inheritDoc} */ + @Override public long decodeRequestId(byte[] msg) { + assert msg != null; + + return JdbcRequest.readRequestId(msg); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCancelRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCancelRequest.java 
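The `JdbcMessageParser` methods added above delegate to `JdbcRequest.readType` / `JdbcRequest.readRequestId`, which (per the later hunk) read byte 0 as the command type and the eight bytes starting at position 1 as the request id. A self-contained sketch of that header peek, using `ByteBuffer` in place of Ignite's `BinaryHeapInputStream`; the little-endian byte order is an assumption of this sketch, mirroring Ignite's binary streams, not something the diff states explicitly.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/**
 * Sketch of the JdbcRequest header peek: byte 0 = command type,
 * bytes 1..8 = request id. ByteBuffer stands in for BinaryHeapInputStream.
 */
final class JdbcHeaderPeek {
    /** Reads the one-byte command type (equivalent of JdbcRequest.readType). */
    static byte readType(byte[] msg) {
        return msg[0];
    }

    /** Reads the request id that follows the type byte (equivalent of readRequestId). */
    static long readRequestId(byte[] msg) {
        ByteBuffer buf = ByteBuffer.wrap(msg).order(ByteOrder.LITTLE_ENDIAN);

        buf.position(1); // Skip the one-byte command type, as stream.position(1) does.

        return buf.getLong();
    }
}
```

Decoding only these two fixed-offset fields is what lets the listener route a cancel without fully deserializing the request.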
new file mode 100644 index 0000000000000..7d6e854865da7 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCancelRequest.java @@ -0,0 +1,76 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.odbc.jdbc; + +import org.apache.ignite.binary.BinaryObjectException; +import org.apache.ignite.internal.binary.BinaryReaderExImpl; +import org.apache.ignite.internal.binary.BinaryWriterExImpl; +import org.apache.ignite.internal.processors.odbc.ClientListenerProtocolVersion; +import org.apache.ignite.internal.util.typedef.internal.S; + +/** + * JDBC query cancel request. + */ +public class JdbcQueryCancelRequest extends JdbcRequest { + + /** Id of a request to be cancelled. */ + private long reqIdToCancel; + + /** + */ + public JdbcQueryCancelRequest() { + super(QRY_CANCEL); + } + + /** + * @param reqIdToCancel Id of a request to be cancelled. + */ + public JdbcQueryCancelRequest(long reqIdToCancel) { + super(QRY_CANCEL); + + this.reqIdToCancel = reqIdToCancel; + } + + /** + * @return Id of a request to be cancelled. 
+ */ + public long requestIdToBeCancelled() { + return reqIdToCancel; + } + + /** {@inheritDoc} */ + @Override public void writeBinary(BinaryWriterExImpl writer, + ClientListenerProtocolVersion ver) throws BinaryObjectException { + super.writeBinary(writer, ver); + + writer.writeLong(reqIdToCancel); + } + + /** {@inheritDoc} */ + @Override public void readBinary(BinaryReaderExImpl reader, + ClientListenerProtocolVersion ver) throws BinaryObjectException { + super.readBinary(reader, ver); + + reqIdToCancel = reader.readLong(); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(JdbcQueryCancelRequest.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCloseRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCloseRequest.java index 5c631c33d31de..02eb7740a97f7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCloseRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCloseRequest.java @@ -27,8 +27,8 @@ * JDBC query close request. */ public class JdbcQueryCloseRequest extends JdbcRequest { - /** Query ID. */ - private long queryId; + /** Cursor ID. */ + private long cursorId; /** */ @@ -37,19 +37,19 @@ public class JdbcQueryCloseRequest extends JdbcRequest { } /** - * @param queryId Query ID. + * @param cursorId Cursor ID. */ - public JdbcQueryCloseRequest(long queryId) { + public JdbcQueryCloseRequest(long cursorId) { super(QRY_CLOSE); - this.queryId = queryId; + this.cursorId = cursorId; } /** - * @return Query ID. + * @return Cursor ID. 
*/ - public long queryId() { - return queryId; + public long cursorId() { + return cursorId; } /** {@inheritDoc} */ @@ -57,7 +57,7 @@ public long queryId() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.writeBinary(writer, ver); - writer.writeLong(queryId); + writer.writeLong(cursorId); } /** {@inheritDoc} */ @@ -65,7 +65,7 @@ public long queryId() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.readBinary(reader, ver); - queryId = reader.readLong(); + cursorId = reader.readLong(); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCursor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCursor.java index 830daea0c7052..cd566ed55bbc5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCursor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryCursor.java @@ -27,10 +27,7 @@ /** * SQL listener query fetch result. */ -class JdbcQueryCursor { - /** Query ID. */ - private final long queryId; - +class JdbcQueryCursor extends JdbcCursor { /** Fetch size. */ private int pageSize; @@ -44,20 +41,26 @@ class JdbcQueryCursor { private final QueryCursorImpl> cur; /** Query results iterator. */ - private final Iterator> iter; + private Iterator> iter; /** - * @param queryId Query ID. * @param pageSize Fetch size. * @param maxRows Max rows. * @param cur Query cursor. + * @param reqId Id of the request that created given cursor. 
*/ - JdbcQueryCursor(long queryId, int pageSize, int maxRows, QueryCursorImpl> cur) { - this.queryId = queryId; + JdbcQueryCursor(int pageSize, int maxRows, QueryCursorImpl> cur, long reqId) { + super(reqId); + this.pageSize = pageSize; this.maxRows = maxRows; this.cur = cur; + } + /** + * Open iterator. + */ + void openIterator() { iter = cur.iterator(); } @@ -104,17 +107,10 @@ boolean hasNext() { return iter.hasNext() && !(maxRows > 0 && fetched >= maxRows); } - /** - * @return Query ID. - */ - public long queryId() { - return queryId; - } - /** * Close the cursor. */ - public void close() { + @Override public void close() { cur.close(); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryDescriptor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryDescriptor.java new file mode 100644 index 0000000000000..c9621f40d3ccb --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryDescriptor.java @@ -0,0 +1,95 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package org.apache.ignite.internal.processors.odbc.jdbc; + +import org.apache.ignite.internal.processors.query.GridQueryCancel; + +/** + * JDBC query descriptor used for appropriate query cancellation. + */ +public class JdbcQueryDescriptor { + /** Hook for cancellation. */ + private final GridQueryCancel cancelHook; + + /** Canceled flag. */ + private boolean canceled; + + /** Usage count of given descriptor. */ + private int usageCnt; + + /** Execution started flag. */ + private boolean executionStarted; + + /** + * Constructor. + */ + public JdbcQueryDescriptor() { + cancelHook = new GridQueryCancel(); + } + + /** + * @return Hook for cancellation. + */ + public GridQueryCancel cancelHook() { + return cancelHook; + } + + /** + * @return True if descriptor was marked as canceled. + */ + public boolean isCanceled() { + return canceled; + } + + /** + * Marks descriptor as canceled. + */ + public void markCancelled() { + canceled = true; + } + + /** + * Increments usage count. + */ + public void incrementUsageCount() { + usageCnt++; + + executionStarted = true; + } + + /** + * Decrements usage count. + */ + public void decrementUsageCount() { + usageCnt--; + } + + /** + * @return Usage count. + */ + public int usageCount() { + return usageCnt; + } + + /** + * @return True if execution was started, false otherwise. + */ + public boolean isExecutionStarted() { + return executionStarted; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryExecuteResult.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryExecuteResult.java index 342e8ef9b63a2..0b91555081c70 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryExecuteResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryExecuteResult.java @@ -28,8 +28,8 @@ * JDBC query execute result.
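The new `JdbcQueryDescriptor` above carries a cancel flag and usage count per request. A hypothetical usage sketch (not from the patch) of how a request handler might keep descriptors in a per-request registry and consult them when a `QRY_CANCEL` arrives; `Descriptor` is a trimmed stand-in without the `GridQueryCancel` hook.

```java
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical descriptor registry sketch: register on arrival, mark on cancel,
 * check the flag before execution. Names here are illustrative, not Ignite's.
 */
public class DescriptorTracking {
    /** Trimmed stand-in for JdbcQueryDescriptor. */
    static final class Descriptor {
        private boolean canceled;
        private int usageCnt;

        void markCancelled() { canceled = true; }
        boolean isCanceled() { return canceled; }
        void incrementUsageCount() { usageCnt++; }
        void decrementUsageCount() { usageCnt--; }
        int usageCount() { return usageCnt; }
    }

    /** Request id -> descriptor. */
    final ConcurrentHashMap<Long, Descriptor> reqRegistry = new ConcurrentHashMap<>();

    /** Called when an ordinary request is registered. */
    void register(long reqId) {
        reqRegistry.put(reqId, new Descriptor());
    }

    /** Called on QRY_CANCEL; returns false if there is nothing to cancel. */
    boolean cancel(long reqId) {
        Descriptor desc = reqRegistry.get(reqId);

        if (desc == null)
            return false;

        desc.markCancelled();

        return true;
    }

    /** Called before execution; an already-canceled request is not started. */
    boolean tryStart(long reqId) {
        Descriptor desc = reqRegistry.get(reqId);

        if (desc == null || desc.isCanceled())
            return false;

        desc.incrementUsageCount();

        return true;
    }
}
```

The usage count matters because a cursor created by a request can outlive the request itself; the descriptor must stay resolvable until every user releases it.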
*/ public class JdbcQueryExecuteResult extends JdbcResult { - /** Query ID. */ - private long queryId; + /** Cursor ID. */ + private long cursorId; /** Query result rows. */ private List> items; @@ -44,44 +44,44 @@ public class JdbcQueryExecuteResult extends JdbcResult { private long updateCnt; /** - * Condtructor. + * Constructor. */ JdbcQueryExecuteResult() { super(QRY_EXEC); } /** - * @param queryId Query ID. + * @param cursorId Cursor ID. * @param items Query result rows. * @param last Flag indicates the query has no unfetched results. */ - JdbcQueryExecuteResult(long queryId, List> items, boolean last) { + JdbcQueryExecuteResult(long cursorId, List> items, boolean last) { super(QRY_EXEC); - this.queryId = queryId; + this.cursorId = cursorId; this.items = items; this.last = last; this.isQuery = true; } /** - * @param queryId Query ID. + * @param cursorId Cursor ID. * @param updateCnt Update count for DML queries. */ - public JdbcQueryExecuteResult(long queryId, long updateCnt) { + public JdbcQueryExecuteResult(long cursorId, long updateCnt) { super(QRY_EXEC); - this.queryId = queryId; + this.cursorId = cursorId; this.last = true; this.isQuery = false; this.updateCnt = updateCnt; } /** - * @return Query ID. + * @return Cursor ID. 
*/ - public long getQueryId() { - return queryId; + public long cursorId() { + return cursorId; } /** @@ -117,7 +117,7 @@ public long updateCount() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.writeBinary(writer, ver); - writer.writeLong(queryId); + writer.writeLong(cursorId); writer.writeBoolean(isQuery); if (isQuery) { @@ -137,7 +137,7 @@ public long updateCount() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.readBinary(reader, ver); - queryId = reader.readLong(); + cursorId = reader.readLong(); isQuery = reader.readBoolean(); if (isQuery) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryFetchRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryFetchRequest.java index 59ed9a87cf17a..79d61d6718943 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryFetchRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryFetchRequest.java @@ -27,8 +27,8 @@ * JDBC query fetch request. */ public class JdbcQueryFetchRequest extends JdbcRequest { - /** Query ID. */ - private long queryId; + /** Cursor ID. */ + private long cursorId; /** Fetch size. */ private int pageSize; @@ -41,21 +41,21 @@ public class JdbcQueryFetchRequest extends JdbcRequest { } /** - * @param queryId Query ID. + * @param cursorId Cursor ID. * @param pageSize Fetch size. */ - public JdbcQueryFetchRequest(long queryId, int pageSize) { + public JdbcQueryFetchRequest(long cursorId, int pageSize) { super(QRY_FETCH); - this.queryId = queryId; + this.cursorId = cursorId; this.pageSize = pageSize; } /** - * @return Query ID. + * @return Cursor ID. 
*/ - public long queryId() { - return queryId; + public long cursorId() { + return cursorId; } /** @@ -70,7 +70,7 @@ public int pageSize() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.writeBinary(writer, ver); - writer.writeLong(queryId); + writer.writeLong(cursorId); writer.writeInt(pageSize); } @@ -79,9 +79,9 @@ public int pageSize() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.readBinary(reader, ver); - queryId = reader.readLong(); + cursorId = reader.readLong(); pageSize = reader.readInt(); - } + } /** {@inheritDoc} */ @Override public String toString() { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryMetadataRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryMetadataRequest.java index f30ecfd0fd625..08ed43ebe0765 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryMetadataRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcQueryMetadataRequest.java @@ -27,8 +27,8 @@ * JDBC query metadata request. */ public class JdbcQueryMetadataRequest extends JdbcRequest { - /** Query ID. */ - private long qryId; + /** Cursor ID. */ + private long cursorId; /** * Constructor. @@ -38,19 +38,19 @@ public class JdbcQueryMetadataRequest extends JdbcRequest { } /** - * @param qryId Query ID. + * @param cursorId Cursor ID. */ - public JdbcQueryMetadataRequest(long qryId) { + public JdbcQueryMetadataRequest(long cursorId) { super(QRY_META); - this.qryId = qryId; + this.cursorId = cursorId; } /** - * @return Query ID. + * @return Cursor ID. 
*/ - public long queryId() { - return qryId; + public long cursorId() { + return cursorId; } /** {@inheritDoc} */ @@ -58,7 +58,7 @@ public long queryId() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.writeBinary(writer, ver); - writer.writeLong(qryId); + writer.writeLong(cursorId); } /** {@inheritDoc} */ @@ -66,7 +66,7 @@ public long queryId() { ClientListenerProtocolVersion ver) throws BinaryObjectException { super.readBinary(reader, ver); - qryId = reader.readLong(); + cursorId = reader.readLong(); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequest.java index 0674edfb65156..f611c8c49fbc8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequest.java @@ -17,13 +17,18 @@ package org.apache.ignite.internal.processors.odbc.jdbc; +import java.util.concurrent.atomic.AtomicLong; import org.apache.ignite.IgniteException; import org.apache.ignite.binary.BinaryObjectException; import org.apache.ignite.internal.binary.BinaryReaderExImpl; import org.apache.ignite.internal.binary.BinaryWriterExImpl; +import org.apache.ignite.internal.binary.streams.BinaryHeapInputStream; +import org.apache.ignite.internal.binary.streams.BinaryInputStream; import org.apache.ignite.internal.processors.odbc.ClientListenerProtocolVersion; import org.apache.ignite.internal.processors.odbc.ClientListenerRequestNoId; +import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcConnectionContext.VER_2_8_0; + /** * JDBC request. */ @@ -67,26 +72,47 @@ public class JdbcRequest extends ClientListenerRequestNoId implements JdbcRawBin /** Ordered batch request. */ static final byte BATCH_EXEC_ORDERED = 14; + /** Execute cancel request. 
*/ + static final byte QRY_CANCEL = 15; + + /** Request Id generator. */ + private static final AtomicLong REQ_ID_GENERATOR = new AtomicLong(); + /** Request type. */ private byte type; + /** Request id. */ + private long reqId; + /** * @param type Command type. */ public JdbcRequest(byte type) { this.type = type; + + reqId = REQ_ID_GENERATOR.incrementAndGet(); } /** {@inheritDoc} */ @Override public void writeBinary(BinaryWriterExImpl writer, ClientListenerProtocolVersion ver) throws BinaryObjectException { writer.writeByte(type); + + if (ver.compareTo(VER_2_8_0) >= 0) + writer.writeLong(reqId); } /** {@inheritDoc} */ @Override public void readBinary(BinaryReaderExImpl reader, ClientListenerProtocolVersion ver) throws BinaryObjectException { - // No-op. + + if (ver.compareTo(VER_2_8_0) >= 0) + reqId = reader.readLong(); + } + + /** {@inheritDoc} */ + @Override public long requestId() { + return reqId; } /** @@ -174,6 +200,11 @@ public static JdbcRequest readRequest(BinaryReaderExImpl reader, break; + case QRY_CANCEL: + req = new JdbcQueryCancelRequest(); + + break; + default: throw new IgniteException("Unknown SQL listener request ID: [request ID=" + reqType + ']'); } @@ -182,4 +213,28 @@ public static JdbcRequest readRequest(BinaryReaderExImpl reader, return req; } + + /** + * Reads JdbcRequest command type. + * + * @param msg Jdbc request as byte array. + * @return Command type. + */ + public static byte readType(byte[] msg) { + return msg[0]; + } + + /** + * Reads JdbcRequest Id. + * + * @param msg Jdbc request as byte array. + * @return Request Id. 
+ */ + public static long readRequestId(byte[] msg) { + BinaryInputStream stream = new BinaryHeapInputStream(msg); + + stream.position(1); + + return stream.readLong(); + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequestHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequestHandler.java index d59788c3d6d9b..a2a8d4251ce20 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequestHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequestHandler.java @@ -3,11 +3,12 @@ * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
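The `readType()`/`readRequestId()` helpers added to JdbcRequest above peek at a raw message without fully deserializing it: byte 0 carries the command type and bytes 1..8 carry the request ID. A minimal standalone sketch of the same framing, assuming (as Ignite's binary streams do) little-endian byte order and using `ByteBuffer` in place of `BinaryHeapInputStream`; the class name is hypothetical:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of the header peek performed by JdbcRequest.readType()/readRequestId():
// byte 0 is the command type, bytes 1..8 are the request ID (little-endian).
public class JdbcHeaderPeek {
    public static byte readType(byte[] msg) {
        return msg[0]; // First byte: command type.
    }

    public static long readRequestId(byte[] msg) {
        ByteBuffer buf = ByteBuffer.wrap(msg).order(ByteOrder.LITTLE_ENDIAN);

        buf.position(1); // Skip the type byte.

        return buf.getLong(); // Next 8 bytes: request ID.
    }

    public static void main(String[] args) {
        byte[] msg = ByteBuffer.allocate(9).order(ByteOrder.LITTLE_ENDIAN)
            .put((byte)15) // QRY_CANCEL command type.
            .putLong(42L)  // Request ID.
            .array();

        System.out.println(readType(msg) + " " + readRequestId(msg)); // 15 42
    }
}
```

Peeking the ID this way lets the listener route a QRY_CANCEL to the right in-flight request before the full request object is decoded.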
@@ -23,6 +24,7 @@ import java.util.ArrayList; import java.util.Collection; import java.util.Collections; +import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.LinkedHashSet; @@ -31,12 +33,12 @@ import java.util.PriorityQueue; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.atomic.AtomicLong; import javax.cache.configuration.Factory; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.cache.query.BulkLoadContextCursor; import org.apache.ignite.cache.query.FieldsQueryCursor; +import org.apache.ignite.cache.query.QueryCancelledException; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.IgniteVersionUtils; @@ -54,6 +56,7 @@ import org.apache.ignite.internal.processors.odbc.ClientListenerResponse; import org.apache.ignite.internal.processors.odbc.ClientListenerResponseSender; import org.apache.ignite.internal.processors.odbc.odbc.OdbcQueryGetColumnsMetaRequest; +import org.apache.ignite.internal.processors.query.GridQueryCancel; import org.apache.ignite.internal.processors.query.GridQueryIndexDescriptor; import org.apache.ignite.internal.processors.query.GridQueryProperty; import org.apache.ignite.internal.processors.query.GridQueryTypeDescriptor; @@ -76,6 +79,7 @@ import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcConnectionContext.VER_2_3_0; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcConnectionContext.VER_2_4_0; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcConnectionContext.VER_2_7_0; +import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcConnectionContext.VER_2_8_0; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.BATCH_EXEC; import static 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.BATCH_EXEC_ORDERED; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.BULK_LOAD_BATCH; @@ -85,6 +89,7 @@ import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.META_PRIMARY_KEYS; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.META_SCHEMAS; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.META_TABLES; +import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.QRY_CANCEL; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.QRY_CLOSE; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.QRY_EXEC; import static org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequest.QRY_FETCH; @@ -94,8 +99,9 @@ * JDBC request handler. */ public class JdbcRequestHandler implements ClientListenerRequestHandler { - /** Query ID sequence. */ - private static final AtomicLong QRY_ID_GEN = new AtomicLong(); + /** Jdbc query cancelled response. */ + private static final JdbcResponse JDBC_QUERY_CANCELLED_RESPONSE = + new JdbcResponse(IgniteQueryErrorCode.QUERY_CANCELED, QueryCancelledException.ERR_MSG); /** Kernel context. */ private final GridKernalContext ctx; @@ -115,11 +121,8 @@ public class JdbcRequestHandler implements ClientListenerRequestHandler { /** Maximum allowed cursors. */ private final int maxCursors; - /** Current queries cursors. */ - private final ConcurrentHashMap qryCursors = new ConcurrentHashMap<>(); - - /** Current bulk load processors. */ - private final ConcurrentHashMap bulkLoadRequests = new ConcurrentHashMap<>(); + /** Current JDBC cursors. */ + private final ConcurrentHashMap jdbcCursors = new ConcurrentHashMap<>(); /** Ordered batches queue. */ private final PriorityQueue orderedBatchesQueue = new PriorityQueue<>(); @@ -127,6 +130,9 @@ public class JdbcRequestHandler implements ClientListenerRequestHandler { /** Ordered batches mutex. 
*/ private final Object orderedBatchesMux = new Object(); + /** Request mutex. */ + private final Object reqMux = new Object(); + /** Response sender. */ private final ClientListenerResponseSender sender; @@ -137,11 +143,14 @@ public class JdbcRequestHandler implements ClientListenerRequestHandler { private final NestedTxMode nestedTxMode; /** Protocol version. */ - private ClientListenerProtocolVersion protocolVer; + private final ClientListenerProtocolVersion protocolVer; /** Authentication context */ private AuthorizationContext actx; + /** Register that keeps non-cancelled requests. */ + private Map reqRegister = new HashMap<>(); + /** * Constructor. * @param ctx Context. @@ -218,6 +227,32 @@ public JdbcRequestHandler(GridKernalContext ctx, GridSpinBusyLock busyLock, } } + /** {@inheritDoc} */ + @Override public boolean isCancellationCommand(int cmdId) { + return cmdId == JdbcRequest.QRY_CANCEL; + } + + /** {@inheritDoc} */ + @Override public void registerRequest(long reqId, int cmdType) { + assert reqId != 0; + + synchronized (reqMux) { + if (isCancellationSupported() && (cmdType == QRY_EXEC || cmdType == BATCH_EXEC || + cmdType == BATCH_EXEC_ORDERED)) + reqRegister.put(reqId, new JdbcQueryDescriptor()); + } + } + + /** {@inheritDoc} */ + @Override public void unregisterRequest(long reqId) { + assert reqId != 0; + + synchronized (reqMux) { + if (isCancellationSupported()) + reqRegister.remove(reqId); + } + } + /** * Start worker, if it's present. */ @@ -279,6 +314,9 @@ ClientListenerResponse doHandle(JdbcRequest req) { case BULK_LOAD_BATCH: return processBulkLoadFileBatch((JdbcBulkLoadBatchRequest)req); + + case QRY_CANCEL: + return cancelQuery((JdbcQueryCancelRequest)req); } return new JdbcResponse(IgniteQueryErrorCode.UNSUPPORTED_OPERATION, @@ -347,11 +385,16 @@ private ClientListenerResponse executeBatchOrdered(JdbcOrderedBatchExecuteReques * @return Response to send to the client. 
*/ private ClientListenerResponse processBulkLoadFileBatch(JdbcBulkLoadBatchRequest req) { - JdbcBulkLoadProcessor processor = bulkLoadRequests.get(req.queryId()); - if (ctx == null) return new JdbcResponse(IgniteQueryErrorCode.UNEXPECTED_OPERATION, "Unknown query ID: " - + req.queryId() + ". Bulk load session may have been reclaimed due to timeout."); + + req.cursorId() + ". Bulk load session may have been reclaimed due to timeout."); + + JdbcBulkLoadProcessor processor = (JdbcBulkLoadProcessor)jdbcCursors.get(req.cursorId()); + + if (!prepareQueryCancellationMeta(processor)) + return JDBC_QUERY_CANCELLED_RESPONSE; + + boolean unregisterReq = false; try { processor.processBatch(req); @@ -359,10 +402,12 @@ private ClientListenerResponse processBulkLoadFileBatch(JdbcBulkLoadBatchRequest switch (req.cmd()) { case CMD_FINISHED_ERROR: case CMD_FINISHED_EOF: - bulkLoadRequests.remove(req.queryId()); + jdbcCursors.remove(req.cursorId()); processor.close(); + unregisterReq = true; + break; case CMD_CONTINUE: @@ -371,12 +416,18 @@ private ClientListenerResponse processBulkLoadFileBatch(JdbcBulkLoadBatchRequest default: throw new IllegalArgumentException(); } - - return new JdbcResponse(new JdbcQueryExecuteResult(req.queryId(), processor.updateCnt())); + return new JdbcResponse(new JdbcQueryExecuteResult(req.cursorId(), processor.updateCnt())); } catch (Exception e) { U.error(null, "Error processing file batch", e); - return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, "Server error: " + e); + + if (X.cause(e, QueryCancelledException.class) != null) + return exceptionToResult(new QueryCancelledException()); + else + return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, "Server error: " + e); + } + finally { + cleanupQueryCancellationMeta(unregisterReq, processor.requestId()); } } @@ -404,41 +455,27 @@ private ClientListenerResponse processBulkLoadFileBatch(JdbcBulkLoadBatchRequest * or due to {@code IOException} during network operations. 
*/ public void onDisconnect() { - if (busyLock.enterBusy()) - { - if (worker != null) { - worker.cancel(); + if (worker != null) { + worker.cancel(); - try { - worker.join(); - } - catch (InterruptedException e) { - // No-op. - } + try { + worker.join(); } + catch (InterruptedException e) { + // No-op. + } + } - try - { - for (JdbcQueryCursor cursor : qryCursors.values()) - cursor.close(); - - for (JdbcBulkLoadProcessor processor : bulkLoadRequests.values()) { - try { - processor.close(); - } - catch (Exception e) { - U.error(null, "Error closing JDBC bulk load processor.", e); - } - } + for (JdbcCursor cursor : jdbcCursors.values()) + U.close(cursor, log); - bulkLoadRequests.clear(); + jdbcCursors.clear(); - U.close(cliCtx, log); - } - finally { - busyLock.leaveBusy(); - } + synchronized (reqMux) { + reqRegister.clear(); } + + U.close(cliCtx, log); } /** @@ -449,24 +486,40 @@ public void onDisconnect() { */ @SuppressWarnings("unchecked") private JdbcResponse executeQuery(JdbcQueryExecuteRequest req) { - int cursorCnt = qryCursors.size(); + GridQueryCancel cancel = null; - if (maxCursors > 0 && cursorCnt >= maxCursors) - return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, "Too many open cursors (either close other " + - "open cursors or increase the limit through " + - "ClientConnectorConfiguration.maxOpenCursorsPerConnection) [maximum=" + maxCursors + - ", current=" + cursorCnt + ']'); + boolean unregisterReq = false; - long qryId = QRY_ID_GEN.getAndIncrement(); + if (isCancellationSupported()) { + synchronized (reqMux) { + JdbcQueryDescriptor desc = reqRegister.get(req.requestId()); - assert !cliCtx.isStream(); + // Query was already cancelled and unregistered. 
+ if (desc == null) + return null; + + cancel = desc.cancelHook(); + + desc.incrementUsageCount(); + } + } try { + int cursorCnt = jdbcCursors.size(); + + if (maxCursors > 0 && cursorCnt >= maxCursors) + return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, "Too many open cursors (either close other " + + "open cursors or increase the limit through " + + "ClientConnectorConfiguration.maxOpenCursorsPerConnection) [maximum=" + maxCursors + + ", current=" + cursorCnt + ']'); + + assert !cliCtx.isStream(); + String sql = req.sqlQuery(); SqlFieldsQueryEx qry; - switch(req.expectedStatementType()) { + switch (req.expectedStatementType()) { case ANY_STATEMENT_TYPE: qry = new SqlFieldsQueryEx(sql, null); @@ -509,29 +562,36 @@ private JdbcResponse executeQuery(JdbcQueryExecuteRequest req) { qry.setSchema(schemaName); List>> results = ctx.query().querySqlFields(null, qry, cliCtx, true, - protocolVer.compareTo(VER_2_3_0) < 0); + protocolVer.compareTo(VER_2_3_0) < 0, cancel); FieldsQueryCursor> fieldsCur = results.get(0); if (fieldsCur instanceof BulkLoadContextCursor) { - BulkLoadContextCursor blCur = (BulkLoadContextCursor) fieldsCur; + BulkLoadContextCursor blCur = (BulkLoadContextCursor)fieldsCur; BulkLoadProcessor blProcessor = blCur.bulkLoadProcessor(); BulkLoadAckClientParameters clientParams = blCur.clientParams(); - bulkLoadRequests.put(qryId, new JdbcBulkLoadProcessor(blProcessor)); + JdbcBulkLoadProcessor processor = new JdbcBulkLoadProcessor(blProcessor, req.requestId()); + + jdbcCursors.put(processor.cursorId(), processor); - return new JdbcResponse(new JdbcBulkLoadAckResult(qryId, clientParams)); + // responses for the same query on the client side + return new JdbcResponse(new JdbcBulkLoadAckResult(processor.cursorId(), clientParams)); } if (results.size() == 1) { - JdbcQueryCursor cur = new JdbcQueryCursor(qryId, req.pageSize(), req.maxRows(), - (QueryCursorImpl)fieldsCur); + JdbcQueryCursor cur = new JdbcQueryCursor(req.pageSize(), req.maxRows(), + 
(QueryCursorImpl)fieldsCur, req.requestId()); + + jdbcCursors.put(cur.cursorId(), cur); + + cur.openIterator(); JdbcQueryExecuteResult res; if (cur.isQuery()) - res = new JdbcQueryExecuteResult(qryId, cur.fetchRows(), !cur.hasNext()); + res = new JdbcQueryExecuteResult(cur.cursorId(), cur.fetchRows(), !cur.hasNext()); else { List> items = cur.fetchRows(); @@ -540,13 +600,16 @@ private JdbcResponse executeQuery(JdbcQueryExecuteRequest req) { "Invalid result set for not-SELECT query. [qry=" + sql + ", res=" + S.toString(List.class, items) + ']'; - res = new JdbcQueryExecuteResult(qryId, (Long)items.get(0).get(0)); + res = new JdbcQueryExecuteResult(cur.cursorId(), (Long)items.get(0).get(0)); } - if (res.last() && (!res.isQuery() || autoCloseCursors)) + if (res.last() && (!res.isQuery() || autoCloseCursors)) { + jdbcCursors.remove(cur.cursorId()); + + unregisterReq = true; + cur.close(); - else - qryCursors.put(qryId, cur); + } return new JdbcResponse(res); } @@ -561,13 +624,13 @@ private JdbcResponse executeQuery(JdbcQueryExecuteRequest req) { JdbcResultInfo jdbcRes; if (qryCur.isQuery()) { - jdbcRes = new JdbcResultInfo(true, -1, qryId); + JdbcQueryCursor cur = new JdbcQueryCursor(req.pageSize(), req.maxRows(), qryCur, req.requestId()); - JdbcQueryCursor cur = new JdbcQueryCursor(qryId, req.pageSize(), req.maxRows(), qryCur); + jdbcCursors.put(cur.cursorId(), cur); - qryCursors.put(qryId, cur); + jdbcRes = new JdbcResultInfo(true, -1, cur.cursorId()); - qryId = QRY_ID_GEN.getAndIncrement(); + cur.openIterator(); if (items == null) { items = cur.fetchRows(); @@ -584,11 +647,20 @@ private JdbcResponse executeQuery(JdbcQueryExecuteRequest req) { } } catch (Exception e) { - qryCursors.remove(qryId); + // Trying to close all cursors of current request. 
+ clearCursors(req.requestId()); + + unregisterReq = true; U.error(log, "Failed to execute SQL query [reqId=" + req.requestId() + ", req=" + req + ']', e); - return exceptionToResult(e); + if (X.cause(e, QueryCancelledException.class) != null) + return exceptionToResult(new QueryCancelledException()); + else + return exceptionToResult(e); + } + finally { + cleanupQueryCancellationMeta(unregisterReq, req.requestId()); } } @@ -599,23 +671,59 @@ private JdbcResponse executeQuery(JdbcQueryExecuteRequest req) { * @return Response. */ private JdbcResponse closeQuery(JdbcQueryCloseRequest req) { + JdbcCursor cur = jdbcCursors.get(req.cursorId()); + + if (!prepareQueryCancellationMeta(cur)) + return new JdbcResponse(null); + try { - JdbcQueryCursor cur = qryCursors.remove(req.queryId()); + cur = jdbcCursors.remove(req.cursorId()); if (cur == null) return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, - "Failed to find query cursor with ID: " + req.queryId()); + "Failed to find query cursor with ID: " + req.cursorId()); cur.close(); return new JdbcResponse(null); } catch (Exception e) { - qryCursors.remove(req.queryId()); + jdbcCursors.remove(req.cursorId()); - U.error(log, "Failed to close SQL query [reqId=" + req.requestId() + ", req=" + req.queryId() + ']', e); + U.error(log, "Failed to close SQL query [reqId=" + req.requestId() + ", req=" + req + ']', e); - return exceptionToResult(e); + if (X.cause(e, QueryCancelledException.class) != null) + return new JdbcResponse(null); + else + return exceptionToResult(e); + } + finally { + if (isCancellationSupported()) { + boolean clearCursors = false; + + synchronized (reqMux) { + assert cur != null; + + JdbcQueryDescriptor desc = reqRegister.get(cur.requestId()); + + if (desc != null) { + // Query was cancelled during execution. 
+ if (desc.isCanceled()) { + clearCursors = true; + + unregisterRequest(req.requestId()); + } + else { + tryUnregisterRequest(cur.requestId()); + + desc.decrementUsageCount(); + } + } + } + + if (clearCursors) + clearCursors(cur.requestId()); + } } } @@ -626,12 +734,17 @@ private JdbcResponse closeQuery(JdbcQueryCloseRequest req) { * @return Response. */ private JdbcResponse fetchQuery(JdbcQueryFetchRequest req) { - try { - JdbcQueryCursor cur = qryCursors.get(req.queryId()); + final JdbcQueryCursor cur = (JdbcQueryCursor)jdbcCursors.get(req.cursorId()); + + if (!prepareQueryCancellationMeta(cur)) + return JDBC_QUERY_CANCELLED_RESPONSE; + + boolean unregisterReq = false; + try { if (cur == null) return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, - "Failed to find query cursor with ID: " + req.queryId()); + "Failed to find query cursor with ID: " + req.cursorId()); if (req.pageSize() <= 0) return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, @@ -642,7 +755,9 @@ private JdbcResponse fetchQuery(JdbcQueryFetchRequest req) { JdbcQueryFetchResult res = new JdbcQueryFetchResult(cur.fetchRows(), !cur.hasNext()); if (res.last() && (!cur.isQuery() || autoCloseCursors)) { - qryCursors.remove(req.queryId()); + jdbcCursors.remove(req.cursorId()); + + unregisterReq = true; cur.close(); } @@ -652,7 +767,15 @@ private JdbcResponse fetchQuery(JdbcQueryFetchRequest req) { catch (Exception e) { U.error(log, "Failed to fetch SQL query result [reqId=" + req.requestId() + ", req=" + req + ']', e); - return exceptionToResult(e); + if (X.cause(e, QueryCancelledException.class) != null) + return exceptionToResult(new QueryCancelledException()); + else + return exceptionToResult(e); + } + finally { + assert cur != null; + + cleanupQueryCancellationMeta(unregisterReq, cur.requestId()); } } @@ -661,14 +784,17 @@ private JdbcResponse fetchQuery(JdbcQueryFetchRequest req) { * @return Response. 
 */ private JdbcResponse getQueryMeta(JdbcQueryMetadataRequest req) { - try { - JdbcQueryCursor cur = qryCursors.get(req.queryId()); + final JdbcQueryCursor cur = (JdbcQueryCursor)jdbcCursors.get(req.cursorId()); + if (!prepareQueryCancellationMeta(cur)) + return JDBC_QUERY_CANCELLED_RESPONSE; + + try { if (cur == null) return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, - "Failed to find query with ID: " + req.queryId()); + "Failed to find query cursor with ID: " + req.cursorId()); - JdbcQueryMetadataResult res = new JdbcQueryMetadataResult(req.queryId(), + JdbcQueryMetadataResult res = new JdbcQueryMetadataResult(req.cursorId(), cur.meta()); return new JdbcResponse(res); @@ -678,6 +804,11 @@ private JdbcResponse getQueryMeta(JdbcQueryMetadataRequest req) { return exceptionToResult(e); } + finally { + assert cur != null; + + cleanupQueryCancellationMeta(false, cur.requestId()); + } } /** @@ -685,55 +816,80 @@ * @return Response. */ private ClientListenerResponse executeBatch(JdbcBatchExecuteRequest req) { - String schemaName = req.schemaName(); + GridQueryCancel cancel = null; + + if (isCancellationSupported()) { + synchronized (reqMux) { + JdbcQueryDescriptor desc = reqRegister.get(req.requestId()); + + // Query was already cancelled and unregistered. + if (desc == null) + return null; + + cancel = desc.cancelHook(); + + desc.incrementUsageCount(); + } + } - if (F.isEmpty(schemaName)) - schemaName = QueryUtils.DFLT_SCHEMA; + try { + String schemaName = req.schemaName(); - int qryCnt = req.queries().size(); + if (F.isEmpty(schemaName)) + schemaName = QueryUtils.DFLT_SCHEMA; - List updCntsAcc = new ArrayList<>(qryCnt); + int qryCnt = req.queries().size(); - // Send back only the first error. Others will be written to the log. - IgniteBiTuple firstErr = new IgniteBiTuple<>(); + List updCntsAcc = new ArrayList<>(qryCnt); - SqlFieldsQueryEx qry = null; + // Send back only the first error.
Others will be written to the log. + IgniteBiTuple firstErr = new IgniteBiTuple<>(); - for (JdbcQuery q : req.queries()) { - if (q.sql() != null) { // If we have a new query string in the batch, - if (qry != null) // then execute the previous sub-batch and create a new SqlFieldsQueryEx. - executeBatchedQuery(qry, updCntsAcc, firstErr); + SqlFieldsQueryEx qry = null; - qry = new SqlFieldsQueryEx(q.sql(), false); + for (JdbcQuery q : req.queries()) { + if (q.sql() != null) { // If we have a new query string in the batch, + if (qry != null) // then execute the previous sub-batch and create a new SqlFieldsQueryEx. + executeBatchedQuery(qry, updCntsAcc, firstErr, cancel); - qry.setDistributedJoins(cliCtx.isDistributedJoins()); - qry.setEnforceJoinOrder(cliCtx.isEnforceJoinOrder()); - qry.setCollocated(cliCtx.isCollocated()); - qry.setReplicatedOnly(cliCtx.isReplicatedOnly()); - qry.setLazy(cliCtx.isLazy()); - qry.setNestedTxMode(nestedTxMode); - qry.setAutoCommit(req.autoCommit()); + qry = new SqlFieldsQueryEx(q.sql(), false); - qry.setSchema(schemaName); - } + qry.setDistributedJoins(cliCtx.isDistributedJoins()); + qry.setEnforceJoinOrder(cliCtx.isEnforceJoinOrder()); + qry.setCollocated(cliCtx.isCollocated()); + qry.setReplicatedOnly(cliCtx.isReplicatedOnly()); + qry.setLazy(cliCtx.isLazy()); + qry.setNestedTxMode(nestedTxMode); + qry.setAutoCommit(req.autoCommit()); - assert qry != null; + qry.setSchema(schemaName); + } - qry.addBatchedArgs(q.args()); - } + assert qry != null; + + qry.addBatchedArgs(q.args()); + } - if (qry != null) - executeBatchedQuery(qry, updCntsAcc, firstErr); + if (qry != null) + executeBatchedQuery(qry, updCntsAcc, firstErr, cancel); - if (req.isLastStreamBatch()) - cliCtx.disableStreaming(); + if (req.isLastStreamBatch()) + cliCtx.disableStreaming(); - int updCnts[] = U.toIntArray(updCntsAcc); + int updCnts[] = U.toIntArray(updCntsAcc); - if (firstErr.isEmpty()) - return new JdbcResponse(new JdbcBatchExecuteResult(updCnts, 
ClientListenerResponse.STATUS_SUCCESS, null)); - else - return new JdbcResponse(new JdbcBatchExecuteResult(updCnts, firstErr.getKey(), firstErr.getValue())); + if (firstErr.isEmpty()) + return new JdbcResponse(new JdbcBatchExecuteResult(updCnts, ClientListenerResponse.STATUS_SUCCESS, + null)); + else + return new JdbcResponse(new JdbcBatchExecuteResult(updCnts, firstErr.getKey(), firstErr.getValue())); + } + catch (QueryCancelledException e) { + return exceptionToResult(e); + } + finally { + cleanupQueryCancellationMeta(true, req.requestId()); + } } /** @@ -742,10 +898,12 @@ private ClientListenerResponse executeBatch(JdbcBatchExecuteRequest req) { * @param qry Query. * @param updCntsAcc Per query rows updates counter. * @param firstErr First error data - code and message. + * @param cancel Hook for query cancellation. + * @throws QueryCancelledException If query was cancelled during execution. */ - @SuppressWarnings("ForLoopReplaceableByForEach") + @SuppressWarnings({"ForLoopReplaceableByForEach", "unchecked"}) private void executeBatchedQuery(SqlFieldsQueryEx qry, List updCntsAcc, - IgniteBiTuple firstErr) { + IgniteBiTuple firstErr, GridQueryCancel cancel) throws QueryCancelledException { try { if (cliCtx.isStream()) { List cnt = ctx.query().streamBatchedUpdateQuery(qry.getSchema(), cliCtx, qry.getSql(), @@ -757,7 +915,7 @@ private void executeBatchedQuery(SqlFieldsQueryEx qry, List updCntsAcc, return; } - List>> qryRes = ctx.query().querySqlFields(null, qry, cliCtx, true, true); + List>> qryRes = ctx.query().querySqlFields(null, qry, cliCtx, true, true, cancel); for (FieldsQueryCursor> cur : qryRes) { if (cur instanceof BulkLoadContextCursor) @@ -779,7 +937,9 @@ private void executeBatchedQuery(SqlFieldsQueryEx qry, List updCntsAcc, String msg; - if (e instanceof IgniteSQLException) { + if (X.cause(e, QueryCancelledException.class) != null) + throw new QueryCancelledException(); + else if (e instanceof IgniteSQLException) { BatchUpdateException batchCause = 
X.cause(e, BatchUpdateException.class); if (batchCause != null) { @@ -813,7 +973,7 @@ private void executeBatchedQuery(SqlFieldsQueryEx qry, List updCntsAcc, if (firstErr.isEmpty()) firstErr.set(code, msg); else - U.error(log, "Failed to execute batch query [qry=" + qry +']', e); + U.error(log, "Failed to execute batch query [qry=" + qry + ']', e); } } @@ -1007,14 +1167,16 @@ private ClientListenerResponse getPrimaryKeys(JdbcMetaPrimaryKeysRequest req) { fields.add(field); } - final String keyName = table.keyFieldName() == null ? "PK_" + table.schemaName() + "_" + table.tableName() : table.keyFieldName(); if (fields.isEmpty()) { + String keyColName = + table.keyFieldName() == null ? QueryUtils.KEY_FIELD_NAME : table.keyFieldName(); + meta.add(new JdbcPrimaryKeyMeta(table.schemaName(), table.tableName(), keyName, - Collections.singletonList("_KEY"))); + Collections.singletonList(keyColName))); } else meta.add(new JdbcPrimaryKeyMeta(table.schemaName(), table.tableName(), keyName, fields)); @@ -1076,10 +1238,12 @@ private static boolean matches(String str, String ptrn) { * @return resulting {@link JdbcResponse}. 
 */ private JdbcResponse exceptionToResult(Exception e) { + if (e instanceof QueryCancelledException) + return new JdbcResponse(IgniteQueryErrorCode.QUERY_CANCELED, e.getMessage()); if (e instanceof IgniteSQLException) - return new JdbcResponse(((IgniteSQLException) e).statusCode(), e.getMessage()); + return new JdbcResponse(((IgniteSQLException)e).statusCode(), e.getMessage()); else - return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, e.toString()); + return new JdbcResponse(IgniteQueryErrorCode.UNKNOWN, e.getMessage()); } /** @@ -1119,4 +1283,163 @@ private class OrderedBatchWorker extends GridWorker { } } } + + /** + * Cancels the query with the specified request ID. + * + * @param req Query cancellation request. + * @return QueryCancelledException wrapped with JdbcResponse. + */ + private JdbcResponse cancelQuery(JdbcQueryCancelRequest req) { + boolean clearCursors = false; + + GridQueryCancel cancelHook; + + synchronized (reqMux) { + JdbcQueryDescriptor desc = reqRegister.get(req.requestIdToBeCancelled()); + + // Query was already executed. + if (desc == null) + return null; + + // Query was registered, however execution didn't start yet. + else if (!desc.isExecutionStarted()) { + unregisterRequest(req.requestId()); + + return exceptionToResult(new QueryCancelledException()); + } + else { + cancelHook = desc.cancelHook(); + + desc.markCancelled(); + + if (desc.usageCount() == 0) { + clearCursors = true; + + unregisterRequest(req.requestIdToBeCancelled()); + } + } + } + + cancelHook.cancel(); + + if (clearCursors) + clearCursors(req.requestIdToBeCancelled()); + + return null; + } + + /** + * Checks whether query cancellation is supported within the given version of the protocol. + * + * @return True if supported, false otherwise. + */ + private boolean isCancellationSupported() { + return (protocolVer.compareTo(VER_2_8_0) >= 0); + } + + /** + * Unregisters the request if there are no cursors bound to it. + * + * @param reqId ID of the request to unregister.
+ */ + private void tryUnregisterRequest(long reqId) { + assert isCancellationSupported(); + + boolean unregisterReq = true; + + for (JdbcCursor cursor : jdbcCursors.values()) { + if (cursor.requestId() == reqId) { + unregisterReq = false; + + break; + } + } + + if (unregisterReq) + unregisterRequest(reqId); + } + + /** + * Tries to close all cursors of the request with the given ID and removes them from the jdbcCursors map. + * + * @param reqId Request ID. + */ + private void clearCursors(long reqId) { + for (Iterator> it = jdbcCursors.entrySet().iterator(); it.hasNext(); ) { + Map.Entry entry = it.next(); + + JdbcCursor cursor = entry.getValue(); + + if (cursor.requestId() == reqId) { + try { + cursor.close(); + } + catch (Exception e) { + U.error(log, "Failed to close cursor [reqId=" + reqId + ", cursor=" + cursor + ']', e); + } + + it.remove(); + } + } + } + + /** + * Checks whether the query was cancelled; if not, increments the query descriptor usage count. + * + * @param cur JDBC cursor. + * @return False if the query was already cancelled, true otherwise. + */ + private boolean prepareQueryCancellationMeta(JdbcCursor cur) { + if (isCancellationSupported()) { + // Nothing to do - cursor was already removed. + if (cur == null) + return false; + + synchronized (reqMux) { + JdbcQueryDescriptor desc = reqRegister.get(cur.requestId()); + + // Query was already cancelled and unregistered. + if (desc == null) + return false; + + desc.incrementUsageCount(); + } + } + + return true; + } + + /** + * Cleans up cursors or processors and unregisters the request if necessary. + * + * @param unregisterReq Flag that indicates whether it's necessary to unregister the request. + * @param reqId Request ID. + */ + private void cleanupQueryCancellationMeta(boolean unregisterReq, long reqId) { + if (isCancellationSupported()) { + boolean clearCursors = false; + + synchronized (reqMux) { + JdbcQueryDescriptor desc = reqRegister.get(reqId); + + if (desc != null) { + // Query was cancelled during execution.
+ if (desc.isCanceled()) { + clearCursors = true; + + unregisterReq = true; + } + else + desc.decrementUsageCount(); + + if (unregisterReq) + unregisterRequest(reqId); + } + } + + if (clearCursors) + clearCursors(reqId); + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcResultInfo.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcResultInfo.java index 5fab77aa3f00b..ceb7964cbcd30 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcResultInfo.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcResultInfo.java @@ -33,8 +33,8 @@ public class JdbcResultInfo implements JdbcRawBinarylizable { /** Update count. */ private long updCnt; - /** Query ID. */ - private long qryId; + /** Cursor ID. */ + private long cursorId; /** * Default constructor is used for serialization. @@ -46,12 +46,12 @@ public class JdbcResultInfo implements JdbcRawBinarylizable { /** * @param isQuery Query flag. * @param updCnt Update count. - * @param qryId Query ID. + * @param cursorId Cursor ID. */ - public JdbcResultInfo(boolean isQuery, long updCnt, long qryId) { + public JdbcResultInfo(boolean isQuery, long updCnt, long cursorId) { this.isQuery = isQuery; this.updCnt = updCnt; - this.qryId = qryId; + this.cursorId = cursorId; } /** @@ -62,10 +62,10 @@ public boolean isQuery() { } /** - * @return Query ID. + * @return Cursor ID. 
*/ - public long queryId() { - return qryId; + public long cursorId() { + return cursorId; } /** @@ -80,7 +80,7 @@ public long updateCount() { ClientListenerProtocolVersion ver) { writer.writeBoolean(isQuery); writer.writeLong(updCnt); - writer.writeLong(qryId); + writer.writeLong(cursorId); } /** {@inheritDoc} */ @@ -88,7 +88,7 @@ public long updateCount() { ClientListenerProtocolVersion ver) { isQuery = reader.readBoolean(); updCnt = reader.readLong(); - qryId = reader.readLong(); + cursorId = reader.readLong(); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcConnectionContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcConnectionContext.java index 5592aabe0ce35..fa8468c2b5209 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcConnectionContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcConnectionContext.java @@ -17,6 +17,8 @@ package org.apache.ignite.internal.processors.odbc.odbc; +import java.util.HashSet; +import java.util.Set; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.internal.GridKernalContext; @@ -26,15 +28,12 @@ import org.apache.ignite.internal.processors.odbc.ClientListenerMessageParser; import org.apache.ignite.internal.processors.odbc.ClientListenerProtocolVersion; import org.apache.ignite.internal.processors.odbc.ClientListenerRequestHandler; -import org.apache.ignite.internal.processors.query.NestedTxMode; import org.apache.ignite.internal.processors.odbc.ClientListenerResponse; import org.apache.ignite.internal.processors.odbc.ClientListenerResponseSender; +import org.apache.ignite.internal.processors.query.NestedTxMode; import org.apache.ignite.internal.util.GridSpinBusyLock; import org.apache.ignite.internal.util.nio.GridNioSession; -import java.util.HashSet; -import java.util.Set; - 
/** * ODBC Connection Context. */ @@ -146,7 +145,9 @@ public OdbcConnectionContext(GridKernalContext ctx, GridNioSession ses, GridSpin if (ver.compareTo(VER_2_5_0) >= 0) { user = reader.readString(); passwd = reader.readString(); + } + if (ver.compareTo(VER_2_7_0) >= 0) { byte nestedTxModeVal = reader.readByte(); nestedTxMode = NestedTxMode.fromByte(nestedTxModeVal); @@ -188,5 +189,7 @@ public OdbcConnectionContext(GridKernalContext ctx, GridNioSession ses, GridSpin /** {@inheritDoc} */ @Override public void onDisconnected() { handler.onDisconnect(); + + super.onDisconnected(); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcMessageParser.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcMessageParser.java index 0e9b48a545525..0e66c93e85fa6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcMessageParser.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcMessageParser.java @@ -409,6 +409,19 @@ else if (res0 instanceof OdbcQueryGetParamsMetaResult) { return writer.array(); } + /** {@inheritDoc} */ + @Override public int decodeCommandType(byte[] msg) { + assert msg != null; + + return msg[0]; + } + + + /** {@inheritDoc} */ + @Override public long decodeRequestId(byte[] msg) { + return 0; + } + /** * @param writer Writer to use. * @param affectedRows Affected rows. 
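The `decodeCommandType` hook added to `OdbcMessageParser` above returns the first byte of the raw message, so the listener can classify an incoming request without fully deserializing it - presumably so that cancellation commands can be recognized and routed ahead of regular request processing. A minimal standalone sketch of that first-byte dispatch, where `QRY_CANCEL_CMD` is a hypothetical opcode chosen for illustration, not Ignite's actual value:

```java
/**
 * Sketch: classify a raw protocol message by peeking at its first byte,
 * mirroring the shape of OdbcMessageParser.decodeCommandType.
 */
public class CommandPeek {
    /** Hypothetical cancellation opcode (illustrative, not Ignite's real value). */
    static final byte QRY_CANCEL_CMD = 61;

    /** Returns the command type stored in the first byte of a raw message. */
    static int decodeCommandType(byte[] msg) {
        assert msg != null && msg.length > 0;

        return msg[0];
    }

    /** Checks the command byte without touching the rest of the payload. */
    static boolean isCancellation(byte[] msg) {
        return decodeCommandType(msg) == QRY_CANCEL_CMD;
    }

    public static void main(String[] args) {
        System.out.println(isCancellation(new byte[] {QRY_CANCEL_CMD, 0, 0}));
        System.out.println(isCancellation(new byte[] {2, 0, 0}));
    }
}
```

The cheap peek matters because a cancellation request must not wait behind the long-running query it is meant to interrupt; inspecting only the leading byte keeps that check off the normal decode path.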
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcRequestHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcRequestHandler.java index 6f3324dbf4df1..edb8a31c19b0c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcRequestHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcRequestHandler.java @@ -300,6 +300,21 @@ public void onDisconnect() { } } + /** {@inheritDoc} */ + @Override public boolean isCancellationCommand(int cmdId) { + return false; + } + + /** {@inheritDoc} */ + @Override public void registerRequest(long reqId, int cmdType) { + // No-op. + } + + /** {@inheritDoc} */ + @Override public void unregisterRequest(long reqId) { + // No-op. + } + /** * Make query considering handler configuration. * @param schema Schema. @@ -691,7 +706,7 @@ private ClientListenerResponse getTablesMeta(OdbcQueryGetTablesMetaRequest req) for (GridQueryTypeDescriptor table : ctx.query().types(cacheName)) { if (!matches(table.schemaName(), schemaPattern) || !matches(table.tableName(), req.table()) || - !matches("TABLE", req.tableType())) + !matchesTableType("TABLE", req.tableType())) continue; OdbcTableMeta tableMeta = new OdbcTableMeta(null, table.schemaName(), table.tableName(), "TABLE"); @@ -854,6 +869,34 @@ private static byte sqlTypeToBinary(int sqlType) { } } + /** + * Checks whether string matches table type pattern. + * + * @param str String. + * @param ptrn Pattern. + * @return Whether string matches pattern. 
+ */ + private static boolean matchesTableType(String str, String ptrn) { + if (F.isEmpty(ptrn)) + return true; + + if (str == null) + return false; + + String pattern = OdbcUtils.preprocessPattern(ptrn); + + String[] types = pattern.split(","); + + for (String type0 : types) { + String type = OdbcUtils.removeQuotationMarksIfNeeded(type0.trim()); + + if (str.toUpperCase().matches(type)) + return true; + } + + return false; + } + /** * Checks whether string matches SQL pattern. * @@ -868,10 +911,7 @@ private static boolean matches(String str, String ptrn) { if (str == null) return false; - String pattern = ptrn.toUpperCase().replace("%", ".*").replace("_", "."); - - if (pattern.length() >= 2 && pattern.matches("['\"].*['\"]")) - pattern = pattern.substring(1, pattern.length() - 1); + String pattern = OdbcUtils.preprocessPattern(ptrn); return str.toUpperCase().matches(pattern); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcUtils.java index a1c67aad62f3d..d294ac25c583f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/odbc/OdbcUtils.java @@ -25,6 +25,7 @@ import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.odbc.SqlListenerDataTypes; import org.apache.ignite.internal.processors.query.IgniteSQLException; +import org.apache.ignite.internal.util.typedef.F; /** * Various ODBC utility methods. @@ -56,6 +57,27 @@ public static String removeQuotationMarksIfNeeded(String str) { return str; } + /** + * Pre-process table or column pattern. + * + * @param ptrn Pattern to pre-process. + * @return Processed pattern. 
+ */ + public static String preprocessPattern(String ptrn) { + if (F.isEmpty(ptrn)) + return ptrn; + + String ptrn0 = ' ' + removeQuotationMarksIfNeeded(ptrn.toUpperCase()); + + ptrn0 = ptrn0.replaceAll("([^\\\\])%", "$1.*"); + + ptrn0 = ptrn0.replaceAll("([^\\\\])_", "$1."); + + ptrn0 = ptrn0.replaceAll("\\\\(.)", "$1"); + + return ptrn0.substring(1); + } + /** * Private constructor. */ @@ -173,6 +195,7 @@ public static String tryRetrieveH2ErrorMessage(Throwable err) { String msg = err.getMessage(); Throwable e = err.getCause(); + while (e != null) { if (e.getClass().getCanonicalName().equals("org.h2.jdbc.JdbcSQLException")) { msg = e.getMessage(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientConnectionContext.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientConnectionContext.java index ffe38ca68870f..5e68c34ec56ef 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientConnectionContext.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientConnectionContext.java @@ -143,6 +143,8 @@ public ClientResourceRegistry resources() { /** {@inheritDoc} */ @Override public void onDisconnected() { resReg.clean(); + + super.onDisconnected(); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientMessageParser.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientMessageParser.java index c65e64a51b078..0e81c20f831c2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientMessageParser.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientMessageParser.java @@ -388,4 +388,18 @@ public ClientListenerRequest decode(BinaryRawReaderEx reader) { return outStream.arrayCopy(); } + + /** {@inheritDoc} */ + @Override public 
int decodeCommandType(byte[] msg) { + assert msg != null; + + BinaryInputStream inStream = new BinaryHeapInputStream(msg); + + return inStream.readShort(); + } + + /** {@inheritDoc} */ + @Override public long decodeRequestId(byte[] msg) { + return 0; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequest.java index 799b3e733f410..76823b592ce32 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequest.java @@ -19,9 +19,6 @@ import org.apache.ignite.binary.BinaryRawReader; import org.apache.ignite.internal.processors.odbc.ClientListenerRequest; -import org.apache.ignite.internal.processors.security.SecurityContext; -import org.apache.ignite.plugin.security.SecurityException; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Thin client request. @@ -61,30 +58,4 @@ public ClientRequest(long reqId) { public ClientResponse process(ClientConnectionContext ctx) { return new ClientResponse(reqId); } - - /** - * Run the code with converting {@link SecurityException} to {@link IgniteClientException}. - */ - protected static void runWithSecurityExceptionHandler(Runnable runnable) { - try { - runnable.run(); - } - catch (SecurityException ex) { - throw new IgniteClientException( - ClientStatus.SECURITY_VIOLATION, - "Client is not authorized to perform this operation", - ex - ); - } - } - - /** - * Authorize for specified permission. 
- */ - protected void authorize(ClientConnectionContext ctx, SecurityPermission perm) { - SecurityContext secCtx = ctx.securityContext(); - - if (secCtx != null) - runWithSecurityExceptionHandler(() -> ctx.kernalContext().security().authorize(null, perm, secCtx)); - } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequestHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequestHandler.java index 5ed0d38d8eac5..1b59c4892b1b5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequestHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/ClientRequestHandler.java @@ -22,7 +22,7 @@ import org.apache.ignite.internal.processors.odbc.ClientListenerRequest; import org.apache.ignite.internal.processors.odbc.ClientListenerRequestHandler; import org.apache.ignite.internal.processors.odbc.ClientListenerResponse; -import org.apache.ignite.internal.processors.security.SecurityContextHolder; +import org.apache.ignite.plugin.security.SecurityException; /** * Thin client request handler. 
@@ -48,19 +48,15 @@ public class ClientRequestHandler implements ClientListenerRequestHandler { /** {@inheritDoc} */ @Override public ClientListenerResponse handle(ClientListenerRequest req) { - if (authCtx != null) { - AuthorizationContext.context(authCtx); - SecurityContextHolder.set(ctx.securityContext()); - } - try { return ((ClientRequest)req).process(ctx); } - finally { - if (authCtx != null) - AuthorizationContext.clear(); - - SecurityContextHolder.clear(); + catch (SecurityException ex) { + throw new IgniteClientException( + ClientStatus.SECURITY_VIOLATION, + "Client is not authorized to perform this operation", + ex + ); } } @@ -79,4 +75,20 @@ public class ClientRequestHandler implements ClientListenerRequestHandler { @Override public void writeHandshake(BinaryWriterExImpl writer) { writer.writeBoolean(true); } + + /** {@inheritDoc} */ + @Override public boolean isCancellationCommand(int cmdId) { + return false; + } + + + /** {@inheritDoc} */ + @Override public void registerRequest(long reqId, int cmdType) { + // No-op. + } + + /** {@inheritDoc} */ + @Override public void unregisterRequest(long reqId) { + // No-op. 
+ } } \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeyRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeyRequest.java index 5f8e952234bf8..6bcbbe89b2636 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeyRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeyRequest.java @@ -20,7 +20,6 @@ import org.apache.ignite.internal.binary.BinaryRawReaderEx; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Clear key request. @@ -38,8 +37,6 @@ public ClientCacheClearKeyRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_REMOVE); - cache(ctx).clear(key()); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeysRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeysRequest.java index d803f697420f3..04eb7f60c9502 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeysRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearKeysRequest.java @@ -20,7 +20,6 @@ import org.apache.ignite.internal.binary.BinaryRawReaderEx; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import 
org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Clear keys request. @@ -38,8 +37,6 @@ public ClientCacheClearKeysRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_REMOVE); - cache(ctx).clearAll(keys()); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearRequest.java index 7b84522921c7c..0e5f20de1eb1b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheClearRequest.java @@ -20,7 +20,6 @@ import org.apache.ignite.binary.BinaryRawReader; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache clear request. 
@@ -38,8 +37,6 @@ public ClientCacheClearRequest(BinaryRawReader reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_REMOVE); - cache(ctx).clear(); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeyRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeyRequest.java index 386f448bb4b53..8470828e424a1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeyRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeyRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientBooleanResponse; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * ContainsKey request. 
@@ -39,8 +38,6 @@ public ClientCacheContainsKeyRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); - boolean val = cache(ctx).containsKey(key()); return new ClientBooleanResponse(requestId(), val); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeysRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeysRequest.java index b5184bfc1afc9..41e13068db1f5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeysRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheContainsKeysRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientBooleanResponse; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * ContainsKeys request. 
@@ -39,8 +38,6 @@ public ClientCacheContainsKeysRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); - boolean val = cache(ctx).containsKeys(keys()); return new ClientBooleanResponse(requestId(), val); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithConfigurationRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithConfigurationRequest.java index 9f1d63fccabc2..93a18edc77e9d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithConfigurationRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithConfigurationRequest.java @@ -26,7 +26,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientResponse; import org.apache.ignite.internal.processors.platform.client.ClientStatus; import org.apache.ignite.internal.processors.platform.client.IgniteClientException; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache create with configuration request. 
@@ -50,12 +49,10 @@ public ClientCacheCreateWithConfigurationRequest(BinaryRawReader reader, ClientL /** {@inheritDoc} */ @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_CREATE); - try { - // Use security exception handler since the code authorizes "enable on-heap cache" permission - runWithSecurityExceptionHandler(() -> ctx.kernalContext().grid().createCache(cacheCfg)); - } catch (CacheExistsException e) { + ctx.kernalContext().grid().createCache(cacheCfg); + } + catch (CacheExistsException e) { throw new IgniteClientException(ClientStatus.CACHE_EXISTS, e.getMessage()); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithNameRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithNameRequest.java index cacf099b4f81f..526eb6cdeb01a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithNameRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheCreateWithNameRequest.java @@ -24,7 +24,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientResponse; import org.apache.ignite.internal.processors.platform.client.ClientStatus; import org.apache.ignite.internal.processors.platform.client.IgniteClientException; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache create with name request. 
@@ -46,11 +45,10 @@ public ClientCacheCreateWithNameRequest(BinaryRawReader reader) { /** {@inheritDoc} */ @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_CREATE); - try { ctx.kernalContext().grid().createCache(cacheName); - } catch (CacheExistsException e) { + } + catch (CacheExistsException e) { throw new IgniteClientException(ClientStatus.CACHE_EXISTS, e.getMessage()); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheDestroyRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheDestroyRequest.java index b6f85eec3d9fc..6645a03a06b4f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheDestroyRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheDestroyRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientRequest; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache destroy request. 
@@ -43,8 +42,6 @@ public ClientCacheDestroyRequest(BinaryRawReader reader) { /** {@inheritDoc} */ @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_DESTROY); - String cacheName = ClientCacheRequest.cacheDescriptor(ctx, cacheId).cacheName(); ctx.kernalContext().grid().destroyCache(cacheName); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAllRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAllRequest.java index a07305c4ce14b..d07da1cf46cb4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAllRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAllRequest.java @@ -22,7 +22,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientResponse; import java.util.Map; -import org.apache.ignite.plugin.security.SecurityPermission; /** * GetAll request. 
@@ -40,9 +39,7 @@ public ClientCacheGetAllRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); - - Map val = cache(ctx).getAll(keys()); + Map val = cache(ctx).getAll(keys()); return new ClientCacheGetAllResponse(requestId(), val); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutIfAbsentRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutIfAbsentRequest.java index 8713a211bb4eb..836021313c5fc 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutIfAbsentRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutIfAbsentRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientObjectResponse; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache get and put if absent request. 
@@ -39,8 +38,6 @@ public ClientCacheGetAndPutIfAbsentRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_PUT); - Object res = cache(ctx).getAndPutIfAbsent(key(), val()); return new ClientObjectResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutRequest.java index dde5181303cef..7a540e8473ac9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndPutRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientObjectResponse; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache get and put request. 
@@ -39,8 +38,6 @@ public ClientCacheGetAndPutRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_PUT); - Object res = cache(ctx).getAndPut(key(), val()); return new ClientObjectResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndRemoveRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndRemoveRequest.java index 3b9dd4bab88c3..e4fd735b186ac 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndRemoveRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndRemoveRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientObjectResponse; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache get and remove request. 
@@ -39,8 +38,6 @@ public ClientCacheGetAndRemoveRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_REMOVE); - Object val = cache(ctx).getAndRemove(key()); return new ClientObjectResponse(requestId(), val); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndReplaceRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndReplaceRequest.java index 8ba157a762c9c..dba8639e4c07a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndReplaceRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetAndReplaceRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientObjectResponse; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache get and replace request. 
@@ -39,8 +38,6 @@ public ClientCacheGetAndReplaceRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_PUT); - Object res = cache(ctx).getAndReplace(key(), val()); return new ClientObjectResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithConfigurationRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithConfigurationRequest.java index b005fb235cf9f..77368beed7695 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithConfigurationRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithConfigurationRequest.java @@ -26,7 +26,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientResponse; import org.apache.ignite.internal.processors.platform.client.ClientStatus; import org.apache.ignite.internal.processors.platform.client.IgniteClientException; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache get or create with configuration request. 
@@ -50,12 +49,10 @@ public ClientCacheGetOrCreateWithConfigurationRequest(BinaryRawReader reader, Cl /** {@inheritDoc} */ @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_CREATE); - try { - // Use security exception handler since the code authorizes "enable on-heap cache" permission - runWithSecurityExceptionHandler(() -> ctx.kernalContext().grid().getOrCreateCache(cacheCfg)); - } catch (CacheExistsException e) { + ctx.kernalContext().grid().getOrCreateCache(cacheCfg); + } + catch (CacheExistsException e) { throw new IgniteClientException(ClientStatus.CACHE_EXISTS, e.getMessage()); } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithNameRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithNameRequest.java index 3c4ce7b06a694..94dd115d6075f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithNameRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetOrCreateWithNameRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientRequest; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache create with name request. 
@@ -43,8 +42,6 @@ public ClientCacheGetOrCreateWithNameRequest(BinaryRawReader reader) { /** {@inheritDoc} */ @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_CREATE); - ctx.kernalContext().grid().getOrCreateCache(cacheName); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetRequest.java index dc17cbfbce548..41558c2863d03 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientObjectResponse; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache get request. 
@@ -39,8 +38,6 @@ public ClientCacheGetRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); - Object val = cache(ctx).get(key()); return new ClientObjectResponse(requestId(), val); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetSizeRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetSizeRequest.java index 474c206b8cae3..ba185bf7415d8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetSizeRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheGetSizeRequest.java @@ -22,7 +22,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientLongResponse; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache size request. 
@@ -51,8 +50,6 @@ public ClientCacheGetSizeRequest(BinaryRawReader reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); - long res = cache(ctx).sizeLong(modes); return new ClientLongResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheLocalPeekRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheLocalPeekRequest.java index 068bbc9792c18..2002664d00714 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheLocalPeekRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheLocalPeekRequest.java @@ -22,7 +22,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientObjectResponse; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache local peek request. 
@@ -41,8 +40,6 @@ public ClientCacheLocalPeekRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); - Object val = cache(ctx).localPeek(key(), CachePeekMode.ALL); return new ClientObjectResponse(requestId(), val); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheNodePartitionsRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheNodePartitionsRequest.java index b9bf80e60608f..abc9acf646ba1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheNodePartitionsRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheNodePartitionsRequest.java @@ -29,7 +29,6 @@ import org.apache.ignite.internal.processors.odbc.ClientListenerProcessor; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cluster node list request. 
@@ -46,12 +45,10 @@ public ClientCacheNodePartitionsRequest(BinaryRawReader reader) { /** {@inheritDoc} */ @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); IgniteCache cache = cache(ctx); GridDiscoveryManager discovery = ctx.kernalContext().discovery(); - Collection nodes = discovery.cacheNodes(cache.getName(), - new AffinityTopologyVersion(discovery.topologyVersion())); + Collection nodes = discovery.discoCache().cacheNodes(cache.getName()); Affinity aff = ctx.kernalContext().affinity().affinityProxy(cache.getName()); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutAllRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutAllRequest.java index 57e31443b474a..28a7fa57e3ee5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutAllRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutAllRequest.java @@ -23,7 +23,6 @@ import java.util.LinkedHashMap; import java.util.Map; -import org.apache.ignite.plugin.security.SecurityPermission; /** * PutAll request. 
@@ -51,8 +50,6 @@ public ClientCachePutAllRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_PUT); - cache(ctx).putAll(map); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutIfAbsentRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutIfAbsentRequest.java index ec81bc0c0fe48..4dd2cde58ce06 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutIfAbsentRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutIfAbsentRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientBooleanResponse; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache put if absent request. 
@@ -39,8 +38,6 @@ public ClientCachePutIfAbsentRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_PUT); - boolean res = cache(ctx).putIfAbsent(key(), val()); return new ClientBooleanResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutRequest.java index 116460eece965..2c396b7ede87a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCachePutRequest.java @@ -20,7 +20,6 @@ import org.apache.ignite.internal.binary.BinaryRawReaderEx; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache put request. 
@@ -38,8 +37,6 @@ public ClientCachePutRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_PUT); - cache(ctx).put(key(), val()); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveAllRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveAllRequest.java index d90d873968105..f5adc6378912e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveAllRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveAllRequest.java @@ -20,7 +20,6 @@ import org.apache.ignite.binary.BinaryRawReader; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache removeAll request. 
@@ -38,8 +37,6 @@ public ClientCacheRemoveAllRequest(BinaryRawReader reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_REMOVE); - cache(ctx).removeAll(); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveIfEqualsRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveIfEqualsRequest.java index 26c191f5b5553..b86f2f8895d64 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveIfEqualsRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveIfEqualsRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientBooleanResponse; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache remove request with value. 
@@ -39,8 +38,6 @@ public ClientCacheRemoveIfEqualsRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_REMOVE); - boolean res = cache(ctx).remove(key(), val()); return new ClientBooleanResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeyRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeyRequest.java index 5af9743b3cac3..a68c32730f4fe 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeyRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeyRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientBooleanResponse; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Remove request. 
@@ -39,8 +38,6 @@ public ClientCacheRemoveKeyRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_REMOVE); - boolean val = cache(ctx).remove(key()); return new ClientBooleanResponse(requestId(), val); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeysRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeysRequest.java index 62dea00201af8..043b5688a3f43 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeysRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRemoveKeysRequest.java @@ -20,7 +20,6 @@ import org.apache.ignite.internal.binary.BinaryRawReaderEx; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Remove keys request. 
@@ -38,8 +37,6 @@ public ClientCacheRemoveKeysRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_REMOVE); - cache(ctx).removeAll(keys()); return super.process(ctx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceIfEqualsRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceIfEqualsRequest.java index 056367d71d2a9..8645fbb817322 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceIfEqualsRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceIfEqualsRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientBooleanResponse; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache replace request. 
@@ -44,8 +43,6 @@ public ClientCacheReplaceIfEqualsRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_PUT); - boolean res = cache(ctx).replace(key(), val(), newVal); return new ClientBooleanResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceRequest.java index ea04593e7cf15..bd7a642bb39e0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheReplaceRequest.java @@ -21,7 +21,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientBooleanResponse; import org.apache.ignite.internal.processors.platform.client.ClientConnectionContext; import org.apache.ignite.internal.processors.platform.client.ClientResponse; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache replace request. 
@@ -39,8 +38,6 @@ public ClientCacheReplaceRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ, SecurityPermission.CACHE_PUT); - boolean res = cache(ctx).replace(key(), val()); return new ClientBooleanResponse(requestId(), res); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRequest.java index 9e39b5618c686..52b799f345120 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheRequest.java @@ -24,8 +24,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientRequest; import org.apache.ignite.internal.processors.platform.client.ClientStatus; import org.apache.ignite.internal.processors.platform.client.IgniteClientException; -import org.apache.ignite.internal.processors.security.SecurityContext; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Cache get request. @@ -121,34 +119,4 @@ public static DynamicCacheDescriptor cacheDescriptor(ClientConnectionContext ctx protected int cacheId() { return cacheId; } - - /** {@inheritDoc} */ - @Override protected void authorize(ClientConnectionContext ctx, SecurityPermission perm) { - SecurityContext secCtx = ctx.securityContext(); - - if (secCtx != null) { - DynamicCacheDescriptor cacheDesc = cacheDescriptor(ctx, cacheId); - - runWithSecurityExceptionHandler(() -> { - ctx.kernalContext().security().authorize(cacheDesc.cacheName(), perm, secCtx); - }); - } - } - - /** - * Authorize for multiple permissions. - */ - protected void authorize(ClientConnectionContext ctx, SecurityPermission... 
perm) - throws IgniteClientException { - SecurityContext secCtx = ctx.securityContext(); - - if (secCtx != null) { - DynamicCacheDescriptor cacheDesc = cacheDescriptor(ctx, cacheId); - - runWithSecurityExceptionHandler(() -> { - for (SecurityPermission p : perm) - ctx.kernalContext().security().authorize(cacheDesc.cacheName(), p, secCtx); - }); - } - } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheScanQueryRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheScanQueryRequest.java index 70b6966e999c4..26ab236e8be1e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheScanQueryRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/client/cache/ClientCacheScanQueryRequest.java @@ -28,7 +28,6 @@ import org.apache.ignite.internal.processors.platform.client.ClientResponse; import org.apache.ignite.internal.processors.platform.utils.PlatformUtils; import org.apache.ignite.lang.IgniteBiPredicate; -import org.apache.ignite.plugin.security.SecurityPermission; /** * Scan query request. @@ -81,8 +80,6 @@ public ClientCacheScanQueryRequest(BinaryRawReaderEx reader) { /** {@inheritDoc} */ @Override public ClientResponse process(ClientConnectionContext ctx) { - authorize(ctx, SecurityPermission.CACHE_READ); - IgniteCache cache = filterPlatform == FILTER_PLATFORM_JAVA && !isKeepBinary() ? 
rawCache(ctx) : cache(ctx); ScanQuery qry = new ScanQuery() diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/utils/PlatformConfigurationUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/utils/PlatformConfigurationUtils.java index cd67f15246ca3..8338a3b9849cd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/utils/PlatformConfigurationUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/platform/utils/PlatformConfigurationUtils.java @@ -69,6 +69,7 @@ import org.apache.ignite.configuration.SqlConnectorConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.spi.encryption.EncryptionSpi; import org.apache.ignite.events.Event; import org.apache.ignite.failure.FailureHandler; import org.apache.ignite.failure.NoOpFailureHandler; @@ -99,6 +100,8 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi; import org.apache.ignite.spi.eventstorage.EventStorageSpi; import org.apache.ignite.spi.eventstorage.NoopEventStorageSpi; import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi; @@ -220,6 +223,7 @@ public static CacheConfiguration readCacheConfiguration(BinaryRawReaderEx in, Cl ccfg.setQueryDetailMetricsSize(in.readInt()); ccfg.setQueryParallelism(in.readInt()); ccfg.setSqlSchema(in.readString()); + ccfg.setEncryptionEnabled(in.readBoolean()); int qryEntCnt = in.readInt(); @@ -247,9 +251,8 @@ public static CacheConfiguration readCacheConfiguration(BinaryRawReaderEx in, Cl if (keyCnt > 0) { 
CacheKeyConfiguration[] keys = new CacheKeyConfiguration[keyCnt]; - for (int i = 0; i < keyCnt; i++) { + for (int i = 0; i < keyCnt; i++) keys[i] = new CacheKeyConfiguration(in.readString(), in.readString()); - } ccfg.setKeyConfiguration(keys); } @@ -654,6 +657,12 @@ public static void readIgniteConfiguration(BinaryRawReaderEx in, IgniteConfigura cfg.setActiveOnStart(in.readBoolean()); if (in.readBoolean()) cfg.setAuthenticationEnabled(in.readBoolean()); + if (in.readBoolean()) + cfg.setMvccVacuumFrequency(in.readLong()); + if (in.readBoolean()) + cfg.setMvccVacuumThreadCount(in.readInt()); + if (in.readBoolean()) + cfg.setSystemWorkerBlockedTimeout(in.readLong()); int sqlSchemasCnt = in.readInt(); @@ -698,14 +707,17 @@ public static void readIgniteConfiguration(BinaryRawReaderEx in, IgniteConfigura readCacheConfigurations(in, cfg, ver); readDiscoveryConfiguration(in, cfg); + readEncryptionConfiguration(in, cfg, ver); if (in.readBoolean()) { TcpCommunicationSpi comm = new TcpCommunicationSpi(); comm.setAckSendThreshold(in.readInt()); + comm.setConnectionsPerNode(in.readInt()); comm.setConnectTimeout(in.readLong()); comm.setDirectBuffer(in.readBoolean()); comm.setDirectSendBuffer(in.readBoolean()); + comm.setFilterReachableAddresses(in.readBoolean()); comm.setIdleConnectionTimeout(in.readLong()); comm.setLocalAddress(in.readString()); comm.setLocalPort(in.readInt()); @@ -714,11 +726,15 @@ public static void readIgniteConfiguration(BinaryRawReaderEx in, IgniteConfigura comm.setMessageQueueLimit(in.readInt()); comm.setReconnectCount(in.readInt()); comm.setSelectorsCount(in.readInt()); + comm.setSelectorSpins(in.readLong()); + comm.setSharedMemoryPort(in.readInt()); comm.setSlowClientQueueLimit(in.readInt()); comm.setSocketReceiveBuffer(in.readInt()); comm.setSocketSendBuffer(in.readInt()); + comm.setSocketWriteTimeout(in.readLong()); comm.setTcpNoDelay(in.readBoolean()); comm.setUnacknowledgedMessagesBufferSize(in.readInt()); + 
comm.setUsePairedConnections(in.readBoolean()); cfg.setCommunicationSpi(comm); } @@ -938,6 +954,30 @@ else if (ipFinderType == 2) { cfg.setDiscoverySpi(disco); } + /** + * Reads encryption configuration + * @param in Reader. + * @param cfg Configuration. + * @param ver Client version. + */ + private static void readEncryptionConfiguration(BinaryRawReaderEx in, IgniteConfiguration cfg, + ClientListenerProtocolVersion ver) { + if (ver.compareTo(VER_1_2_0) < 0 || !in.readBoolean()) { + cfg.setEncryptionSpi(new NoopEncryptionSpi()); + + return; + } + + KeystoreEncryptionSpi enc = new KeystoreEncryptionSpi(); + + enc.setMasterKeyName(in.readString()); + enc.setKeySize(in.readInt()); + enc.setKeyStorePath(in.readString()); + enc.setKeyStorePassword(in.readCharArray()); + + cfg.setEncryptionSpi(enc); + } + /** * Writes cache configuration. * @@ -999,6 +1039,7 @@ public static void writeCacheConfiguration(BinaryRawWriter writer, CacheConfigur writer.writeInt(ccfg.getQueryDetailMetricsSize()); writer.writeInt(ccfg.getQueryParallelism()); writer.writeString(ccfg.getSqlSchema()); + writer.writeBoolean(ccfg.isEncryptionEnabled()); Collection qryEntities = ccfg.getQueryEntities(); @@ -1201,6 +1242,16 @@ public static void writeIgniteConfiguration(BinaryRawWriter w, IgniteConfigurati w.writeBoolean(cfg.isActiveOnStart()); w.writeBoolean(true); w.writeBoolean(cfg.isAuthenticationEnabled()); + w.writeBoolean(true); + w.writeLong(cfg.getMvccVacuumFrequency()); + w.writeBoolean(true); + w.writeInt(cfg.getMvccVacuumThreadCount()); + if (cfg.getSystemWorkerBlockedTimeout() != null) { + w.writeBoolean(true); + w.writeLong(cfg.getSystemWorkerBlockedTimeout()); + } else { + w.writeBoolean(false); + } if (cfg.getSqlSchemas() == null) w.writeInt(-1); @@ -1245,6 +1296,7 @@ public static void writeIgniteConfiguration(BinaryRawWriter w, IgniteConfigurati w.writeInt(0); writeDiscoveryConfiguration(w, cfg.getDiscoverySpi()); + writeEncryptionConfiguration(w, cfg.getEncryptionSpi(), ver); 
CommunicationSpi comm = cfg.getCommunicationSpi(); @@ -1253,9 +1305,11 @@ public static void writeIgniteConfiguration(BinaryRawWriter w, IgniteConfigurati TcpCommunicationSpi tcp = (TcpCommunicationSpi) comm; w.writeInt(tcp.getAckSendThreshold()); + w.writeInt(tcp.getConnectionsPerNode()); w.writeLong(tcp.getConnectTimeout()); w.writeBoolean(tcp.isDirectBuffer()); w.writeBoolean(tcp.isDirectSendBuffer()); + w.writeBoolean(tcp.isFilterReachableAddresses()); w.writeLong(tcp.getIdleConnectionTimeout()); w.writeString(tcp.getLocalAddress()); w.writeInt(tcp.getLocalPort()); @@ -1264,11 +1318,15 @@ public static void writeIgniteConfiguration(BinaryRawWriter w, IgniteConfigurati w.writeInt(tcp.getMessageQueueLimit()); w.writeInt(tcp.getReconnectCount()); w.writeInt(tcp.getSelectorsCount()); + w.writeLong(tcp.getSelectorSpins()); + w.writeInt(tcp.getSharedMemoryPort()); w.writeInt(tcp.getSlowClientQueueLimit()); w.writeInt(tcp.getSocketReceiveBuffer()); w.writeInt(tcp.getSocketSendBuffer()); + w.writeLong(tcp.getSocketWriteTimeout()); w.writeBoolean(tcp.isTcpNoDelay()); w.writeInt(tcp.getUnacknowledgedMessagesBufferSize()); + w.writeBoolean(tcp.isUsePairedConnections()); } else w.writeBoolean(false); @@ -1453,6 +1511,34 @@ private static void writeDiscoveryConfiguration(BinaryRawWriter w, DiscoverySpi w.writeInt((int)tcp.getTopHistorySize()); } + /** + * Writes encryption configuration. + * + * @param w Writer. + * @param enc Encryption Spi. + * @param ver Client version. 
+ */ + private static void writeEncryptionConfiguration(BinaryRawWriter w, EncryptionSpi enc, + ClientListenerProtocolVersion ver) { + if (ver.compareTo(VER_1_2_0) < 0) + return; + + if (enc instanceof NoopEncryptionSpi) { + w.writeBoolean(false); + + return; + } + + KeystoreEncryptionSpi keystoreEnc = (KeystoreEncryptionSpi)enc; + + w.writeBoolean(true); + + w.writeString(keystoreEnc.getMasterKeyName()); + w.writeInt(keystoreEnc.getKeySize()); + w.writeString(keystoreEnc.getKeyStorePath()); + w.writeCharArray(keystoreEnc.getKeyStorePwd()); + } + /** * Writes enum as byte. * @@ -1827,21 +1913,22 @@ private static DataStorageConfiguration readDataStorageConfiguration(BinaryRawRe .setConcurrencyLevel(in.readInt()) .setWalAutoArchiveAfterInactivity(in.readLong()); + if (in.readBoolean()) + res.setCheckpointReadLockTimeout(in.readLong()); + int cnt = in.readInt(); if (cnt > 0) { DataRegionConfiguration[] regs = new DataRegionConfiguration[cnt]; - for (int i = 0; i < cnt; i++) { + for (int i = 0; i < cnt; i++) regs[i] = readDataRegionConfiguration(in); - } res.setDataRegionConfigurations(regs); } - if (in.readBoolean()) { + if (in.readBoolean()) res.setDefaultDataRegionConfiguration(readDataRegionConfiguration(in)); - } return res; } @@ -1955,25 +2042,31 @@ private static void writeDataStorageConfiguration(BinaryRawWriter w, DataStorage w.writeInt(cfg.getConcurrencyLevel()); w.writeLong(cfg.getWalAutoArchiveAfterInactivity()); + if (cfg.getCheckpointReadLockTimeout() != null) { + w.writeBoolean(true); + w.writeLong(cfg.getCheckpointReadLockTimeout()); + } + else + w.writeBoolean(false); + if (cfg.getDataRegionConfigurations() != null) { w.writeInt(cfg.getDataRegionConfigurations().length); - for (DataRegionConfiguration d : cfg.getDataRegionConfigurations()) { + for (DataRegionConfiguration d : cfg.getDataRegionConfigurations()) writeDataRegionConfiguration(w, d); - } - } else { - w.writeInt(0); } + else + w.writeInt(0); if (cfg.getDefaultDataRegionConfiguration() != 
null) { w.writeBoolean(true); writeDataRegionConfiguration(w, cfg.getDefaultDataRegionConfiguration()); - } else { - w.writeBoolean(false); } - } else { - w.writeBoolean(false); + else + w.writeBoolean(false); } + else + w.writeBoolean(false); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/EnlistOperation.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/EnlistOperation.java index fdb6f1e580772..631bf1899d9f3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/EnlistOperation.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/EnlistOperation.java @@ -46,7 +46,11 @@ public enum EnlistOperation { * This operation locks existing entry protecting it from updates by other transactions * or does nothing if entry does not exist. */ - LOCK(null); + LOCK(null), + /** + * This operation applies entry transformer. + */ + TRANSFORM(GridCacheOperation.UPDATE); /** */ private final GridCacheOperation cacheOp; @@ -68,6 +72,11 @@ public boolean isDeleteOrLock() { return this == DELETE || this == LOCK; } + /** */ + public boolean isInvoke() { + return this == TRANSFORM; + } + /** * Indicates that an operation cannot create new row.
*/ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryIndexing.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryIndexing.java index 7aa4021d7b5a0..868bd29a87f32 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryIndexing.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryIndexing.java @@ -21,14 +21,13 @@ import java.sql.SQLException; import java.util.Collection; import java.util.List; -import javax.cache.Cache; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.cache.query.FieldsQueryCursor; -import org.apache.ignite.cache.query.QueryCursor; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.mvcc.MvccQueryTracker; @@ -70,16 +69,13 @@ public interface GridQueryIndexing { public void onClientDisconnect() throws IgniteCheckedException; /** - * Parses SQL query into two step query and executes it. + * Generate SqlFieldsQuery from SqlQuery. * - * @param schemaName Schema name. * @param cacheName Cache name. * @param qry Query. - * @param keepBinary Keep binary flag. - * @throws IgniteCheckedException If failed. + * @return Fields query. */ - public QueryCursor> queryDistributedSql(String schemaName, String cacheName, SqlQuery qry, - boolean keepBinary) throws IgniteCheckedException; + public SqlFieldsQuery generateFieldsQuery(String cacheName, SqlQuery qry); /** * Detect whether SQL query should be executed in distributed or local manner and execute it. 
@@ -120,18 +116,6 @@ public long streamUpdateQuery(String schemaName, String qry, @Nullable Object[] public List streamBatchedUpdateQuery(String schemaName, String qry, List params, SqlClientContext cliCtx) throws IgniteCheckedException; - /** - * Executes regular query. - * - * @param schemaName Schema name. - * @param cacheName Cache name. - * @param qry Query. - * @param filter Cache name and key filter. - * @param keepBinary Keep binary flag. @return Cursor. - */ - public QueryCursor> queryLocalSql(String schemaName, String cacheName, SqlQuery qry, - IndexingQueryFilter filter, boolean keepBinary) throws IgniteCheckedException; - /** * Queries individual fields (generally used by JDBC drivers). * @@ -293,19 +277,12 @@ public void remove(GridCacheContext cctx, GridQueryTypeDescriptor type, CacheDat throws IgniteCheckedException; /** - * Rebuilds all indexes of given type from hash index. + * Rebuild indexes for the given cache if necessary. * - * @param cacheName Cache name. - * @throws IgniteCheckedException If failed. - */ - public void rebuildIndexesFromHash(String cacheName) throws IgniteCheckedException; - - /** - * Marks all indexes of given type for rebuild from hash index, making them unusable until rebuild finishes. - * - * @param cacheName Cache name. + * @param cctx Cache context. + * @return Future completed when index rebuild finished. */ - public void markForRebuildFromHash(String cacheName); + public IgniteInternalFuture rebuildIndexesFromHash(GridCacheContext cctx); /** * Returns backup filter. 
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryProcessor.java
index eb3f2a70c444b..a428196db90ea 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryProcessor.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryProcessor.java
@@ -3,11 +3,12 @@
  * contributor license agreements. See the NOTICE file distributed with
  * this work for additional information regarding copyright ownership.
  * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -65,7 +66,9 @@ import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager; import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.QueryCursorImpl; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.cache.query.CacheQueryFuture; @@ -93,8 +96,6 @@ import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; import org.apache.ignite.internal.util.GridBoundedConcurrentLinkedHashSet; import org.apache.ignite.internal.util.GridSpinBusyLock; -import org.apache.ignite.internal.util.future.GridCompoundFuture; -import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.lang.GridCloseableIterator; import org.apache.ignite.internal.util.lang.GridClosureException; import org.apache.ignite.internal.util.lang.IgniteOutClosureX; @@ -106,8 +107,6 @@ import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.internal.util.worker.GridWorker; -import org.apache.ignite.internal.util.worker.GridWorkerFuture; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteInClosure; @@ -152,7 +151,7 @@ public class GridQueryProcessor extends GridProcessorAdapter { private final ConcurrentMap typesByName = new ConcurrentHashMap<>(); /** */ - private final GridQueryIndexing idx; + private final @Nullable GridQueryIndexing idx; /** Value object context. 
*/ private final CacheQueryObjectValueContext valCtx; @@ -1484,13 +1483,14 @@ else if (op instanceof SchemaAlterTableDropColumnOperation) { * @param writeSyncMode Write synchronization mode. * @param backups Backups. * @param ifNotExists Quietly ignore this command if table already exists. + * @param encrypted Encrypted flag. * @throws IgniteCheckedException If failed. */ @SuppressWarnings("unchecked") public void dynamicTableCreate(String schemaName, QueryEntity entity, String templateName, String cacheName, String cacheGroup, @Nullable String dataRegion, String affinityKey, @Nullable CacheAtomicityMode atomicityMode, - @Nullable CacheWriteSynchronizationMode writeSyncMode, @Nullable Integer backups, boolean ifNotExists) - throws IgniteCheckedException { + @Nullable CacheWriteSynchronizationMode writeSyncMode, @Nullable Integer backups, boolean ifNotExists, + boolean encrypted) throws IgniteCheckedException { assert !F.isEmpty(templateName); assert backups == null || backups >= 0; @@ -1534,10 +1534,14 @@ else if (QueryUtils.TEMPLATE_REPLICATED.equalsIgnoreCase(templateName)) if (backups != null) ccfg.setBackups(backups); + ccfg.setEncryptionEnabled(encrypted); ccfg.setSqlSchema(schemaName); ccfg.setSqlEscapeAll(true); ccfg.setQueryEntities(Collections.singleton(entity)); + if (!QueryUtils.isCustomAffinityMapper(ccfg.getAffinityMapper())) + ccfg.setAffinityMapper(null); + if (affinityKey != null) ccfg.setKeyConfiguration(new CacheKeyConfiguration(entity.getKeyType(), affinityKey)); @@ -1750,78 +1754,45 @@ public boolean belongsToTable(GridCacheContext cctx, String expCacheName, String /** * Rebuilds indexes for provided caches from corresponding hash indexes. * - * @param cacheIds Cache IDs. + * @param cctx Cache context. * @return Future that will be completed when rebuilding is finished. 
*/ - public IgniteInternalFuture rebuildIndexesFromHash(Set cacheIds) { - if (!busyLock.enterBusy()) - throw new IllegalStateException("Failed to rebuild indexes from hash (grid is stopping)."); - - // Because of alt type ids, there can be few entries in 'types' for a single cache. - // In order to avoid processing a cache more than once, let's track processed names. - Set processedCacheNames = new HashSet<>(); - - try { - GridCompoundFuture fut = new GridCompoundFuture(); - - for (Map.Entry e : types.entrySet()) { - if (cacheIds.contains(CU.cacheId(e.getKey().cacheName())) && - processedCacheNames.add(e.getKey().cacheName())) - fut.add(rebuildIndexesFromHash(e.getKey().cacheName(), e.getValue())); - } - - fut.markInitialized(); - - return fut; - } - finally { - busyLock.leaveBusy(); - } - } - - /** - * @param cacheName Cache name. - * @param desc Type descriptor. - * @return Future that will be completed when rebuilding of all indexes is finished. - */ - private IgniteInternalFuture rebuildIndexesFromHash(@Nullable final String cacheName, - @Nullable final QueryTypeDescriptorImpl desc) { + public IgniteInternalFuture rebuildIndexesFromHash(GridCacheContext cctx) { + // Indexing module is disabled, nothing to rebuild. if (idx == null) - return new GridFinishedFuture<>(new IgniteCheckedException("Indexing is disabled.")); - - if (desc == null) - return new GridFinishedFuture<>(); - - final GridWorkerFuture fut = new GridWorkerFuture<>(); + return null; - idx.markForRebuildFromHash(cacheName); + // No data on non-affinity nodes. + if (!cctx.affinityNode()) + return null; - GridWorker w = new GridWorker(ctx.igniteInstanceName(), "index-rebuild-worker", log) { - @Override protected void body() { - try { - idx.rebuildIndexesFromHash(cacheName); + // No indexes to rebuild when there are no QueryEntities. 
+ if (!cctx.isQueryEnabled()) + return null; - fut.onDone(); - } - catch (Exception e) { - fut.onDone(e); - } - catch (Throwable e) { - U.error(log, "Failed to rebuild indexes for type [cache=" + cacheName + - ", name=" + desc.name() + ']', e); + // No need to rebuild if cache has no data. + boolean empty = true; - fut.onDone(e); + for (IgniteCacheOffheapManager.CacheDataStore store : cctx.offheap().cacheDataStores()) { + if (!store.isEmpty()) { + empty = false; - throw e; - } + break; } - }; + } - fut.setWorker(w); + if (empty) + return null; - ctx.getExecutorService().execute(w); + if (!busyLock.enterBusy()) + throw new IllegalStateException("Failed to rebuild indexes from hash (grid is stopping)."); - return fut; + try { + return idx.rebuildIndexesFromHash(cctx); + } + finally { + busyLock.leaveBusy(); + } } /** @@ -1845,6 +1816,7 @@ public void store(GridCacheContext cctx, CacheDataRow newRow, @Nullable CacheDat assert cctx != null; assert newRow != null; assert prevRowAvailable || prevRow == null; + // No need to acquire busy lock here - operation is protected by GridCacheQueryManager.busyLock KeyCacheObject key = newRow.key(); CacheObject val = newRow.value(); @@ -1855,56 +1827,48 @@ public void store(GridCacheContext cctx, CacheDataRow newRow, @Nullable CacheDat if (idx == null) return; - if (!busyLock.enterBusy()) - throw new NodeStoppingException("Operation has been cancelled (node is stopping)."); - - try { - String cacheName = cctx.name(); + String cacheName = cctx.name(); - CacheObjectContext coctx = cctx.cacheObjectContext(); + CacheObjectContext coctx = cctx.cacheObjectContext(); - QueryTypeDescriptorImpl desc = typeByValue(cacheName, coctx, key, val, true); + QueryTypeDescriptorImpl desc = typeByValue(cacheName, coctx, key, val, true); - if (prevRowAvailable && prevRow != null) { - QueryTypeDescriptorImpl prevValDesc = typeByValue(cacheName, - coctx, - key, - prevRow.value(), - false); + if (prevRowAvailable && prevRow != null) { + 
QueryTypeDescriptorImpl prevValDesc = typeByValue(cacheName, + coctx, + key, + prevRow.value(), + false); - if (prevValDesc != desc) { - if (prevValDesc != null) - idx.remove(cctx, prevValDesc, prevRow); + if (prevValDesc != desc) { + if (prevValDesc != null) + idx.remove(cctx, prevValDesc, prevRow); - // Row has already been removed from another table indexes - prevRow = null; - } + // Row has already been removed from another table indexes + prevRow = null; } + } - if (desc == null) { - int typeId = ctx.cacheObjects().typeId(val); + if (desc == null) { + int typeId = ctx.cacheObjects().typeId(val); - long missedCacheTypeKey = missedCacheTypeKey(cacheName, typeId); + long missedCacheTypeKey = missedCacheTypeKey(cacheName, typeId); - if (!missedCacheTypes.contains(missedCacheTypeKey)) { - if (missedCacheTypes.add(missedCacheTypeKey)) { - LT.warn(log, "Key-value pair is not inserted into any SQL table [cacheName=" + cacheName + - ", " + describeTypeMismatch(cacheName, val) + "]"); + if (!missedCacheTypes.contains(missedCacheTypeKey)) { + if (missedCacheTypes.add(missedCacheTypeKey)) { + LT.warn(log, "Key-value pair is not inserted into any SQL table [cacheName=" + cacheName + + ", " + describeTypeMismatch(cacheName, val) + "]"); - LT.warn(log, " ^-- Value type(s) are specified via CacheConfiguration.indexedTypes or CacheConfiguration.queryEntities"); - LT.warn(log, " ^-- Make sure that same type(s) used when adding Object or BinaryObject to cache"); - LT.warn(log, " ^-- Otherwise, entries will be stored in cache, but not appear as SQL Table rows"); - } + LT.warn(log, " ^-- Value type(s) are specified via CacheConfiguration.indexedTypes or CacheConfiguration.queryEntities"); + LT.warn(log, " ^-- Make sure that same type(s) used when adding Object or BinaryObject to cache"); + LT.warn(log, " ^-- Otherwise, entries will be stored in cache, but not appear as SQL Table rows"); } - - return; } - idx.store(cctx, desc, newRow, prevRow, prevRowAvailable); - } - finally { - 
busyLock.leaveBusy();
+            return;
         }
+
+        idx.store(cctx, desc, newRow, prevRow, prevRowAvailable);
     }

     /**
@@ -2080,7 +2044,13 @@ public UpdateSourceIterator prepareDistributedUpdate(GridCacheContext c
      */
     public List>> querySqlFields(final SqlFieldsQuery qry, final boolean keepBinary,
         final boolean failOnMultipleStmts) {
-        return querySqlFields(null, qry, null, keepBinary, failOnMultipleStmts);
+        return querySqlFields(
+            null,
+            qry,
+            null,
+            keepBinary,
+            failOnMultipleStmts
+        );
     }

     /**
@@ -2091,7 +2061,13 @@ public List>> querySqlFields(final SqlFieldsQuery qry,
      * @return Cursor.
      */
     public FieldsQueryCursor> querySqlFields(final SqlFieldsQuery qry, final boolean keepBinary) {
-        return querySqlFields(null, qry, null, keepBinary, true).get(0);
+        return querySqlFields(
+            null,
+            qry,
+            null,
+            keepBinary,
+            true
+        ).get(0);
     }

     /**
@@ -2105,47 +2081,137 @@ public FieldsQueryCursor> querySqlFields(final SqlFieldsQuery qry, final
      * more then one SQL statement.
      * @return Cursor.
      */
-    @SuppressWarnings("unchecked")
-    public List>> querySqlFields(@Nullable final GridCacheContext cctx,
-        final SqlFieldsQuery qry, final SqlClientContext cliCtx, final boolean keepBinary,
-        final boolean failOnMultipleStmts) {
-        checkxEnabled();
+    public List>> querySqlFields(
+        @Nullable final GridCacheContext cctx,
+        final SqlFieldsQuery qry,
+        final SqlClientContext cliCtx,
+        final boolean keepBinary,
+        final boolean failOnMultipleStmts
+    ) {
+        return querySqlFields(
+            cctx,
+            qry,
+            cliCtx,
+            keepBinary,
+            failOnMultipleStmts,
+            GridCacheQueryType.SQL_FIELDS,
+            null
+        );
+    }

-        validateSqlFieldsQuery(qry, ctx, cctx);
+    /**
+     * Query SQL fields.
+     *
+     * @param cctx Cache context.
+     * @param qry Query.
+     * @param cliCtx Client context.
+     * @param keepBinary Keep binary flag.
+     * @param failOnMultipleStmts If {@code true} the method must throw an exception when the query contains
+     *      more than one SQL statement.
+     * @param cancel Hook for query cancellation.
+     * @return Cursor.
+     */
+    public List>> querySqlFields(
+        @Nullable final GridCacheContext cctx,
+        final SqlFieldsQuery qry,
+        final SqlClientContext cliCtx,
+        final boolean keepBinary,
+        final boolean failOnMultipleStmts,
+        @Nullable final GridQueryCancel cancel
+    ) {
+        return querySqlFields(
+            cctx,
+            qry,
+            cliCtx,
+            keepBinary,
+            failOnMultipleStmts,
+            GridCacheQueryType.SQL_FIELDS,
+            cancel
+        );
+    }

-        if (!ctx.state().publicApiActiveState(true)) {
-            throw new IgniteException("Can not perform the operation because the cluster is inactive. Note, that " +
-                "the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes " +
-                "join the cluster. To activate the cluster call Ignite.active(true).");
-        }
-        if (!busyLock.enterBusy())
-            throw new IllegalStateException("Failed to execute query (grid is stopping).");

+    /**
+     * Query SQL fields.
+     *
+     * @param cctx Cache context.
+     * @param qry Query.
+     * @param cliCtx Client context.
+     * @param keepBinary Keep binary flag.
+     * @param failOnMultipleStmts If {@code true} the method must throw an exception when the query contains
+     *      more than one SQL statement.
+     * @param qryType Real query type.
+     * @param cancel Hook for query cancellation.
+     * @return Cursor.
+     */
+    public List>> querySqlFields(
+        @Nullable final GridCacheContext cctx,
+        final SqlFieldsQuery qry,
+        final SqlClientContext cliCtx,
+        final boolean keepBinary,
+        final boolean failOnMultipleStmts,
+        GridCacheQueryType qryType,
+        @Nullable final GridQueryCancel cancel
+    ) {
+        // Validate.
+ checkxEnabled(); - GridCacheContext oldCctx = curCache.get(); + if (qry.isDistributedJoins() && qry.getPartitions() != null) + throw new CacheException("Using both partitions and distributed JOINs is not supported for the same query"); - curCache.set(cctx); + if (qry.isLocal() && ctx.clientNode() && (cctx == null || cctx.config().getCacheMode() != CacheMode.LOCAL)) + throw new CacheException("Execution of local SqlFieldsQuery on client node disallowed."); - final String schemaName = qry.getSchema() != null ? qry.getSchema() - : (cctx != null ? idx.schema(cctx.name()) : QueryUtils.DFLT_SCHEMA); + return executeQuerySafe(cctx, () -> { + assert idx != null; + + final String schemaName = qry.getSchema() != null ? qry.getSchema() + : (cctx != null ? idx.schema(cctx.name()) : QueryUtils.DFLT_SCHEMA); - try { IgniteOutClosureX>>> clo = new IgniteOutClosureX>>>() { - @Override public List>> applyx() throws IgniteCheckedException { - GridQueryCancel cancel = new GridQueryCancel(); + @Override public List>> applyx() { + GridQueryCancel cancel0 = cancel != null ? cancel : new GridQueryCancel(); + + List>> res = + idx.querySqlFields( + schemaName, + qry, + cliCtx, + keepBinary, + failOnMultipleStmts, + null, + cancel0 + ); + + if (cctx != null) + sendQueryExecutedEvent(qry.getSql(), qry.getArgs(), cctx); + + return res; + } + }; - List>> res = - idx.querySqlFields(schemaName, qry, cliCtx, keepBinary, failOnMultipleStmts, null, cancel); + return executeQuery(qryType, qry.getSql(), cctx, clo, true); + }); + } + + /** + * Execute query setting busy lock, preserving current cache context and properly handling checked exceptions. + * + * @param cctx Cache context. + * @param supplier Code to be executed. + * @return Result. 
+ */ + private T executeQuerySafe(@Nullable final GridCacheContext cctx, SupplierX supplier) { + GridCacheContext oldCctx = curCache.get(); - if (cctx != null) - sendQueryExecutedEvent(qry.getSql(), qry.getArgs(), cctx); + curCache.set(cctx); - return res; - } - }; + if (!busyLock.enterBusy()) + throw new IllegalStateException("Failed to execute query (grid is stopping)."); - return executeQuery(GridCacheQueryType.SQL_FIELDS, qry.getSql(), cctx, clo, true); + try { + return supplier.get(); } catch (IgniteCheckedException e) { throw new CacheException(e); @@ -2157,37 +2223,6 @@ public List>> querySqlFields(@Nullable final GridCache } } - /** - * Validate SQL fields query. - * - * @param qry Query. - * @param ctx Kernal context. - * @param cctx Cache context. - */ - private static void validateSqlFieldsQuery(SqlFieldsQuery qry, GridKernalContext ctx, - @Nullable GridCacheContext cctx) { - if (qry.isReplicatedOnly() && qry.getPartitions() != null) - throw new CacheException("Partitions are not supported in replicated only mode."); - - if (qry.isDistributedJoins() && qry.getPartitions() != null) - throw new CacheException("Using both partitions and distributed JOINs is not supported for the same query"); - - if (qry.isLocal() && ctx.clientNode() && (cctx == null || cctx.config().getCacheMode() != CacheMode.LOCAL)) - throw new CacheException("Execution of local SqlFieldsQuery on client node disallowed."); - } - - /** - * Validate SQL query. - * - * @param qry Query. - * @param ctx Kernal context. - * @param cctx Cache context. - */ - private static void validateSqlQuery(SqlQuery qry, GridKernalContext ctx, GridCacheContext cctx) { - if (qry.isLocal() && ctx.clientNode() && cctx.config().getCacheMode() != CacheMode.LOCAL) - throw new CacheException("Execution of local SqlQuery on client node disallowed."); - } - /** * @param cacheName Cache name. * @param schemaName Schema name. 
@@ -2254,99 +2289,39 @@ public List streamBatchedUpdateQuery(final String schemaName, final SqlCli * @param keepBinary Keep binary flag. * @return Cursor. */ - public QueryCursor> querySql(final GridCacheContext cctx, final SqlQuery qry, - boolean keepBinary) { - validateSqlQuery(qry, ctx, cctx); - - if (qry.isReplicatedOnly() && qry.getPartitions() != null) - throw new CacheException("Partitions are not supported in replicated only mode."); - - if (qry.isDistributedJoins() && qry.getPartitions() != null) - throw new CacheException( - "Using both partitions and distributed JOINs is not supported for the same query"); - - if ((qry.isReplicatedOnly() && cctx.isReplicatedAffinityNode()) || cctx.isLocal() || qry.isLocal()) - return queryLocalSql(cctx, qry, keepBinary); - - return queryDistributedSql(cctx, qry, keepBinary); - } - - /** - * @param cctx Cache context. - * @param qry Query. - * @param keepBinary Keep binary flag. - * @return Cursor. - */ - private QueryCursor> queryDistributedSql(final GridCacheContext cctx, - final SqlQuery qry, final boolean keepBinary) { - checkxEnabled(); - - if (!busyLock.enterBusy()) - throw new IllegalStateException("Failed to execute query (grid is stopping)."); - - try { - final String schemaName = idx.schema(cctx.name()); - - return executeQuery(GridCacheQueryType.SQL, qry.getSql(), cctx, - new IgniteOutClosureX>>() { - @Override public QueryCursor> applyx() throws IgniteCheckedException { - return idx.queryDistributedSql(schemaName, cctx.name(), qry, keepBinary); - } - }, true); - } - catch (IgniteCheckedException e) { - throw new IgniteException(e); - } - finally { - busyLock.leaveBusy(); - } - } - - /** - * @param cctx Cache context. - * @param qry Query. - * @param keepBinary Keep binary flag. - * @return Cursor. 
- */ - private QueryCursor> queryLocalSql(final GridCacheContext cctx, final SqlQuery qry, - final boolean keepBinary) { - if (!busyLock.enterBusy()) - throw new IllegalStateException("Failed to execute query (grid is stopping)."); - - final String schemaName = idx.schema(cctx.name()); + public QueryCursor> querySql( + final GridCacheContext cctx, + final SqlQuery qry, + boolean keepBinary + ) { + // Generate. + String type = qry.getType(); - try { - return executeQuery(GridCacheQueryType.SQL, qry.getSql(), cctx, - new IgniteOutClosureX>>() { - @Override public QueryCursor> applyx() throws IgniteCheckedException { - String type = qry.getType(); + String typeName = typeName(cctx.name(), type); - String typeName = typeName(cctx.name(), type); + qry.setType(typeName); - qry.setType(typeName); + SqlFieldsQuery fieldsQry = idx.generateFieldsQuery(cctx.name(), qry); - sendQueryExecutedEvent( - qry.getSql(), - qry.getArgs(), - cctx); + // Execute. + FieldsQueryCursor> res = querySqlFields( + cctx, + fieldsQry, + null, + keepBinary, + true, + GridCacheQueryType.SQL, + null + ).get(0); - if (cctx.config().getQueryParallelism() > 1) { - qry.setDistributedJoins(true); + // Convert. 
+ QueryKeyValueIterableconverted = new QueryKeyValueIterable<>(res); - return idx.queryDistributedSql(schemaName, cctx.name(), qry, keepBinary); - } - else - return idx.queryLocalSql(schemaName, cctx.name(), qry, idx.backupFilter(requestTopVer.get(), - qry.getPartitions()), keepBinary); - } - }, true); - } - catch (IgniteCheckedException e) { - throw new CacheException(e); - } - finally { - busyLock.leaveBusy(); - } + return new QueryCursorImpl>(converted) { + @Override public void close() { + converted.cursor().close(); + } + }; } /** @@ -2524,15 +2499,15 @@ private void processDynamicAddColumn(QueryTypeDescriptorImpl d, List for (QueryField col : cols) { try { props.add(new QueryBinaryProperty( - ctx, + ctx, col.name(), - null, - Class.forName(col.typeName()), - false, - null, - !col.isNullable(), - null, - col.precision(), + null, + Class.forName(col.typeName()), + false, + null, + !col.isNullable(), + null, + col.precision(), col.scale())); } catch (ClassNotFoundException e) { @@ -2577,31 +2552,24 @@ public PreparedStatement prepareNativeStatement(String schemaName, String sql) t public void remove(GridCacheContext cctx, CacheDataRow row) throws IgniteCheckedException { assert row != null; + // No need to acquire busy lock here - operation is protected by GridCacheQueryManager.busyLock if (log.isDebugEnabled()) - log.debug("Remove [cacheName=" + cctx.name() + ", key=" + row.key()+ ", val=" + row.value() + "]"); + log.debug("Remove [cacheName=" + cctx.name() + ", key=" + row.key() + ", val=" + row.value() + "]"); if (idx == null) return; - if (!busyLock.enterBusy()) - throw new IllegalStateException("Failed to remove from index (grid is stopping)."); - - try { - QueryTypeDescriptorImpl desc = typeByValue(cctx.name(), - cctx.cacheObjectContext(), - row.key(), - row.value(), - false); + QueryTypeDescriptorImpl desc = typeByValue(cctx.name(), + cctx.cacheObjectContext(), + row.key(), + row.value(), + false); - if (desc == null) - return; + if (desc == null) + 
return; - idx.remove(cctx, desc, row); - } - finally { - busyLock.leaveBusy(); - } + idx.remove(cctx, desc, row); } /** @@ -2682,11 +2650,11 @@ public Collection types(@Nullable String cacheName) { * @return Type descriptor. * @throws IgniteCheckedException If failed. */ - private String typeName(@Nullable String cacheName, String typeName) throws IgniteCheckedException { + private String typeName(@Nullable String cacheName, String typeName) throws IgniteException { QueryTypeDescriptorImpl type = typesByName.get(new QueryTypeNameKey(cacheName, typeName)); if (type == null) - throw new IgniteCheckedException("Failed to find SQL table for type: " + typeName); + throw new IgniteException("Failed to find SQL table for type: " + typeName); return type.name(); } @@ -3158,4 +3126,18 @@ private static class TableCacheFilter implements SchemaIndexCacheFilter { return S.toString(TableCacheFilter.class, this); } } + + /** + * Function which can throw exception. + */ + @FunctionalInterface + private interface SupplierX { + /** + * Get value. + * + * @return Value. + * @throws IgniteCheckedException If failed. + */ + T get() throws IgniteCheckedException; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryKeyValueIterable.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryKeyValueIterable.java new file mode 100644 index 0000000000000..41d5145c80e19 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryKeyValueIterable.java @@ -0,0 +1,53 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.query; + +import org.apache.ignite.cache.query.QueryCursor; + +import javax.cache.Cache; +import java.util.Iterator; +import java.util.List; + +/** + * SqlQuery key-value iterable. + */ +public class QueryKeyValueIterable implements Iterable> { + /** Underlying fields query cursor. */ + private final QueryCursor> cur; + + /** + * Constructor. + * + * @param cur Underlying fields query cursor. + */ + public QueryKeyValueIterable(QueryCursor> cur) { + this.cur = cur; + } + + /** {@inheritDoc} */ + @Override public Iterator> iterator() { + return new QueryKeyValueIterator<>(cur.iterator()); + } + + /** + * @return Underlying fields query cursor. + */ + QueryCursor> cursor() { + return cur; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryKeyValueIterator.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryKeyValueIterator.java new file mode 100644 index 0000000000000..02dde9d2d8318 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryKeyValueIterator.java @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.query; + +import org.apache.ignite.internal.processors.cache.CacheEntryImpl; + +import javax.cache.Cache; +import javax.cache.CacheException; +import java.util.Iterator; +import java.util.List; + +/** + * SqlQuery key-value iterator. + */ +public class QueryKeyValueIterator implements Iterator> { + /** Target iterator. */ + private final Iterator> iter; + + /** + * Constructor. + * + * @param iter Target iterator. + */ + public QueryKeyValueIterator(Iterator> iter) { + this.iter = iter; + } + + /** {@inheritDoc} */ + @Override public boolean hasNext() { + return iter.hasNext(); + } + + /** {@inheritDoc} */ + @SuppressWarnings("unchecked") + @Override public Cache.Entry next() { + try { + List row = iter.next(); + + return new CacheEntryImpl<>((K)row.get(0), (V)row.get(1)); + } + catch (CacheException e) { + throw e; + } + catch (Exception e) { + throw new CacheException(e); + } + } + + /** {@inheritDoc} */ + @Override public void remove() { + throw new UnsupportedOperationException(); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryTypeDescriptorImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryTypeDescriptorImpl.java index 2eaeb1feadb32..e02b7dfd859e5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryTypeDescriptorImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryTypeDescriptorImpl.java @@ -17,6 +17,7 @@ package 
org.apache.ignite.internal.processors.query; +import java.math.BigDecimal; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; @@ -36,10 +37,12 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.jetbrains.annotations.Nullable; +import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.KEY_SCALE_OUT_OF_RANGE; import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.NULL_KEY; import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.NULL_VALUE; import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.TOO_LONG_KEY; import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.TOO_LONG_VALUE; +import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.VALUE_SCALE_OUT_OF_RANGE; import static org.apache.ignite.internal.processors.query.QueryUtils.KEY_FIELD_NAME; import static org.apache.ignite.internal.processors.query.QueryUtils.VAL_FIELD_NAME; @@ -580,14 +583,30 @@ else if (F.eq(prop.name(), valFieldName) || (valFieldName == null && F.eq(prop.n isKey ? NULL_KEY : NULL_VALUE); } - if (prop.precision() != -1 && - propVal != null && - String.class == propVal.getClass() && + if (propVal == null || prop.precision() == -1) + continue; + + if (String.class == propVal.getClass() && ((String)propVal).length() > prop.precision()) { throw new IgniteSQLException("Value for a column '" + prop.name() + "' is too long. " + "Maximum length: " + prop.precision() + ", actual length: " + ((CharSequence)propVal).length(), isKey ? TOO_LONG_KEY : TOO_LONG_VALUE); } + else if (BigDecimal.class == propVal.getClass()) { + BigDecimal dec = (BigDecimal)propVal; + + if (dec.precision() > prop.precision()) { + throw new IgniteSQLException("Value for a column '" + prop.name() + "' is out of range. 
" + + "Maximum precision: " + prop.precision() + ", actual precision: " + dec.precision(), + isKey ? TOO_LONG_KEY : TOO_LONG_VALUE); + } + else if (prop.scale() != -1 && + dec.scale() > prop.scale()) { + throw new IgniteSQLException("Value for a column '" + prop.name() + "' is out of range. " + + "Maximum scale : " + prop.scale() + ", actual scale: " + dec.scale(), + isKey ? KEY_SCALE_OUT_OF_RANGE : VALUE_SCALE_OUT_OF_RANGE); + } + } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryUtils.java index d977a49006648..e19247c0ffafa 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/QueryUtils.java @@ -67,6 +67,8 @@ import static org.apache.ignite.IgniteSystemProperties.IGNITE_INDEXING_DISCOVERY_HISTORY_SIZE; import static org.apache.ignite.IgniteSystemProperties.getInteger; +import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.TOO_LONG_VALUE; +import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.VALUE_SCALE_OUT_OF_RANGE; /** * Utility methods for queries. @@ -825,17 +827,10 @@ public static QueryBinaryProperty buildBinaryProperty(GridKernalContext ctx, Str public static QueryClassProperty buildClassProperty(Class keyCls, Class valCls, String pathStr, Class resType, Map aliases, boolean notNull, CacheObjectContext coCtx) throws IgniteCheckedException { - QueryClassProperty res = buildClassProperty( - true, - keyCls, - pathStr, - resType, - aliases, - notNull, - coCtx); - - if (res == null) // We check key before value consistently with BinaryProperty. 
- res = buildClassProperty(false, valCls, pathStr, resType, aliases, notNull, coCtx); + QueryClassProperty res = buildClassProperty(false, valCls, pathStr, resType, aliases, notNull, coCtx); + + if (res == null) // We check value before key consistently with BinaryProperty. + res = buildClassProperty(true, keyCls, pathStr, resType, aliases, notNull, coCtx); if (res == null) throw new IgniteCheckedException(propertyInitializationExceptionMessage(keyCls, valCls, pathStr, resType)); @@ -1271,6 +1266,7 @@ private static void validateQueryEntity(QueryEntity entity) { Map dfltVals = entity.getDefaultFieldValues(); Map precision = entity.getFieldsPrecision(); + Map scale = entity.getFieldsScale(); if (!F.isEmpty(precision)) { for (String fld : precision.keySet()) { @@ -1282,9 +1278,23 @@ private static void validateQueryEntity(QueryEntity entity) { if (dfltVal == null) continue; - if (dfltVal.toString().length() > precision.get(fld)) { + if (dfltVal.getClass() == String.class && dfltVal.toString().length() > precision.get(fld)) { throw new IgniteSQLException("Default value '" + dfltVal + - "' is longer than maximum length " + precision.get(fld)); + "' is longer than maximum length " + precision.get(fld), TOO_LONG_VALUE); + } + else if (dfltVal.getClass() == BigDecimal.class) { + BigDecimal dec = (BigDecimal)dfltVal; + + if (dec.precision() > precision.get(fld)) { + throw new IgniteSQLException("Default value: '" + dfltVal + "' for a column " + fld + + " is out of range. Maximum precision: " + precision.get(fld) + + ", actual precision: " + dec.precision(), TOO_LONG_VALUE); + } + else if (!F.isEmpty(scale) && scale.containsKey(fld) && dec.scale() > scale.get(fld)) { + throw new IgniteSQLException("Default value: '" + dfltVal + "' for a column " + fld + + " is out of range. 
Maximum scale: " + scale.get(fld) + + ", actual scale: " + dec.scale(), VALUE_SCALE_OUT_OF_RANGE); + } } } } @@ -1355,6 +1365,18 @@ public static void checkNotNullAllowed(CacheConfiguration cfg) { "is set.", IgniteQueryErrorCode.UNSUPPORTED_OPERATION); } + /** + * Checks whether affinity key mapper is custom or default. + * + * @param affinityKeyMapper Affinity key mapper. + * @return {@code true} if affinity key mapper is custom. + */ + public static boolean isCustomAffinityMapper(AffinityKeyMapper affinityKeyMapper) { + return affinityKeyMapper != null && + !(affinityKeyMapper instanceof CacheDefaultBinaryAffinityKeyMapper) && + !(affinityKeyMapper instanceof GridCacheDefaultAffinityKeyMapper); + } + /** * Checks if given column can be removed from table using its {@link QueryEntity}. * diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/messages/GridQueryNextPageResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/messages/GridQueryNextPageResponse.java index 6b976c24179cf..7fdd5d6a57ceb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/messages/GridQueryNextPageResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/messages/GridQueryNextPageResponse.java @@ -220,7 +220,7 @@ public Collection plainRows() { writer.incrementState(); case 6: - if (!writer.writeMessage("retry", retry)) + if (!writer.writeAffinityTopologyVersion("retry", retry)) return false; writer.incrementState(); @@ -310,7 +310,7 @@ public Collection plainRows() { reader.incrementState(); case 6: - retry = reader.readMessage("retry"); + retry = reader.readAffinityTopologyVersion("retry"); if (!reader.isLastRead()) return false; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/property/QueryBinaryProperty.java 
b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/property/QueryBinaryProperty.java index 7a47c2fb3f1a3..3f2d233a75cfe 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/property/QueryBinaryProperty.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/property/QueryBinaryProperty.java @@ -131,11 +131,11 @@ public QueryBinaryProperty(GridKernalContext ctx, String propName, QueryBinaryPr if (isKeyProp0 == 0) { // Key is allowed to be a non-binary object here. - // We check key before value consistently with ClassProperty. - if (key instanceof BinaryObject && ((BinaryObject)key).hasField(propName)) - isKeyProp = isKeyProp0 = 1; - else if (val instanceof BinaryObject && ((BinaryObject)val).hasField(propName)) + // We check value before key consistently with ClassProperty. + if (val instanceof BinaryObject && ((BinaryObject)val).hasField(propName)) isKeyProp = isKeyProp0 = -1; + else if (key instanceof BinaryObject && ((BinaryObject)key).hasField(propName)) + isKeyProp = isKeyProp0 = 1; else { if (!warned) { U.warn(log, "Neither key nor value have property \"" + propName + "\" " + diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/SchemaIndexCacheVisitorImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/SchemaIndexCacheVisitorImpl.java index 57a1a49de7cb5..7c40cd12cb848 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/SchemaIndexCacheVisitorImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/SchemaIndexCacheVisitorImpl.java @@ -19,7 +19,6 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.IgniteInterruptedCheckedException; -import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.GridCacheContext; import 
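Both the class-based path in QueryUtils and the binary path in QueryBinaryProperty above now probe the value object before the key object when binding a queried field, so the two resolution paths agree. A toy model of why the probe order is observable when both sides carry a field of the same name (all names here are hypothetical, not Ignite API):

```java
import java.util.Map;

// Toy model of the lookup-order change: a property name is resolved
// against the value first, with the key as fallback. If both sides
// define "id", the value's field now wins on every code path.
public class PropertyLookupOrder {
    static Object resolve(Map<String, Object> keyFields, Map<String, Object> valFields, String prop) {
        if (valFields.containsKey(prop)) // value checked first (new behavior)
            return valFields.get(prop);

        if (keyFields.containsKey(prop)) // key is the fallback
            return keyFields.get(prop);

        return null;
    }

    public static void main(String[] args) {
        Map<String, Object> key = Map.of("id", 1);
        Map<String, Object> val = Map.of("id", 42, "name", "a");

        assert resolve(key, val, "id").equals(42); // value shadows key
        assert resolve(key, val, "name").equals("a");
    }
}
```

With the old key-first order the same call would have returned the key's copy of `id`, which is why the comment in both hunks was updated in tandem.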
org.apache.ignite.internal.processors.cache.GridCacheEntryEx; import org.apache.ignite.internal.processors.cache.GridCacheEntryRemovedException; @@ -244,7 +243,7 @@ private void processKey(KeyCacheObject key, SchemaIndexCacheVisitorClosure clo) entry.updateIndex(rowFilter, clo); } finally { - entry.touch(AffinityTopologyVersion.NONE); + entry.touch(); } break; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/message/SchemaOperationStatusMessage.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/message/SchemaOperationStatusMessage.java index 5f75e607d4173..9e81d5a84b432 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/message/SchemaOperationStatusMessage.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/query/schema/message/SchemaOperationStatusMessage.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.query.schema.message; import org.apache.ignite.internal.GridDirectTransient; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageReader; @@ -30,6 +31,7 @@ /** * Schema operation status message. 
*/ +@IgniteCodeGeneratingFail public class SchemaOperationStatusMessage implements Message { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestCommand.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestCommand.java index 587ed2ee8c1de..4046b1572a213 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestCommand.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestCommand.java @@ -176,6 +176,18 @@ public enum GridRestCommand { /** */ CLUSTER_CURRENT_STATE("currentstate"), + /** */ + BASELINE_CURRENT_STATE("baseline"), + + /** */ + BASELINE_SET("setbaseline"), + + /** */ + BASELINE_ADD("addbaseline"), + + /** */ + BASELINE_REMOVE("removebaseline"), + /** */ AUTHENTICATE("authenticate"), diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestProcessor.java index 30d2f0a1328d2..f24c4d33a88ad 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/GridRestProcessor.java @@ -49,6 +49,7 @@ import org.apache.ignite.internal.processors.rest.handlers.GridRestCommandHandler; import org.apache.ignite.internal.processors.rest.handlers.auth.AuthenticationCommandHandler; import org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler; +import org.apache.ignite.internal.processors.rest.handlers.cluster.GridBaselineCommandHandler; import org.apache.ignite.internal.processors.rest.handlers.cluster.GridChangeStateCommandHandler; import org.apache.ignite.internal.processors.rest.handlers.datastructures.DataStructuresCommandHandler; import 
org.apache.ignite.internal.processors.rest.handlers.log.GridLogCommandHandler; @@ -500,6 +501,9 @@ public GridRestProcessor(GridKernalContext ctx) { if (ses.isTimedOut(sesTtl)) { clientId2SesId.remove(ses.clientId, ses.sesId); sesId2Ses.remove(ses.sesId, ses); + + if (ctx.security().enabled() && ses.secCtx != null && ses.secCtx.subject() != null) + ctx.security().onSessionExpired(ses.secCtx.subject().id()); } } } @@ -528,6 +532,7 @@ public GridRestProcessor(GridKernalContext ctx) { addHandler(new GridChangeStateCommandHandler(ctx)); addHandler(new AuthenticationCommandHandler(ctx)); addHandler(new UserActionCommandHandler(ctx)); + addHandler(new GridBaselineCommandHandler(ctx)); // Start protocols. startTcpProtocol(); @@ -892,6 +897,9 @@ private void authorize(GridRestRequest req, SecurityContext sCtx) throws Securit case CLUSTER_INACTIVE: case CLUSTER_ACTIVATE: case CLUSTER_DEACTIVATE: + case BASELINE_SET: + case BASELINE_ADD: + case BASELINE_REMOVE: perm = SecurityPermission.ADMIN_OPS; break; @@ -909,6 +917,7 @@ private void authorize(GridRestRequest req, SecurityContext sCtx) throws Securit case NAME: case LOG: case CLUSTER_CURRENT_STATE: + case BASELINE_CURRENT_STATE: case AUTHENTICATE: case ADD_USER: case REMOVE_USER: diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/cluster/GridBaselineCommandHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/cluster/GridBaselineCommandHandler.java new file mode 100644 index 0000000000000..88fde87464647 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/cluster/GridBaselineCommandHandler.java @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.rest.handlers.cluster; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cluster.BaselineNode; +import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.cluster.IgniteClusterEx; +import org.apache.ignite.internal.processors.rest.GridRestCommand; +import org.apache.ignite.internal.processors.rest.GridRestResponse; +import org.apache.ignite.internal.processors.rest.handlers.GridRestCommandHandlerAdapter; +import org.apache.ignite.internal.processors.rest.request.GridRestBaselineRequest; +import org.apache.ignite.internal.processors.rest.request.GridRestRequest; +import org.apache.ignite.internal.util.future.GridFinishedFuture; +import org.apache.ignite.internal.util.typedef.internal.U; + +import static java.util.function.Function.identity; +import static java.util.stream.Collectors.toMap; +import static org.apache.ignite.internal.processors.rest.GridRestCommand.BASELINE_ADD; +import static org.apache.ignite.internal.processors.rest.GridRestCommand.BASELINE_CURRENT_STATE; +import static 
org.apache.ignite.internal.processors.rest.GridRestCommand.BASELINE_REMOVE; +import static org.apache.ignite.internal.processors.rest.GridRestCommand.BASELINE_SET; + +/** + * + */ +public class GridBaselineCommandHandler extends GridRestCommandHandlerAdapter { + /** Supported commands. */ + private static final Collection SUPPORTED_COMMANDS = U.sealList(BASELINE_CURRENT_STATE, + BASELINE_SET, BASELINE_ADD, BASELINE_REMOVE); + + /** + * @param ctx Context. + */ + public GridBaselineCommandHandler(GridKernalContext ctx) { + super(ctx); + } + + /** {@inheritDoc} */ + @Override public Collection supportedCommands() { + return SUPPORTED_COMMANDS; + } + + /** {@inheritDoc} */ + @Override public IgniteInternalFuture handleAsync(GridRestRequest req) { + assert req != null; + + assert SUPPORTED_COMMANDS.contains(req.command()); + assert req instanceof GridRestBaselineRequest : "Invalid type of baseline request."; + + + if (log.isDebugEnabled()) + log.debug("Handling baseline REST request: " + req); + + GridRestBaselineRequest req0 = (GridRestBaselineRequest)req; + + try { + IgniteClusterEx cluster = ctx.grid().cluster(); + + List consistentIds = req0.consistentIds(); + + switch (req0.command()) { + case BASELINE_CURRENT_STATE: { + // No-op. 
+ + break; + } + + case BASELINE_SET: { + Long topVer = req0.topologyVersion(); + + if (topVer == null && consistentIds == null) + throw new IgniteCheckedException("Failed to handle request (either topVer or consistentIds should be specified)."); + + if (topVer != null) + cluster.setBaselineTopology(topVer); + else + cluster.setBaselineTopology(filterServerNodesByConsId(consistentIds)); + + break; + } + + case BASELINE_ADD: { + if (consistentIds == null) + throw new IgniteCheckedException(missingParameter("consistentIds")); + + Set baselineTop = new HashSet<>(currentBaseLine()); + + baselineTop.addAll(filterServerNodesByConsId(consistentIds)); + + cluster.setBaselineTopology(baselineTop); + + break; + } + + case BASELINE_REMOVE: { + if (consistentIds == null) + throw new IgniteCheckedException(missingParameter("consistentIds")); + + Collection baseline = currentBaseLine(); + + Set baselineTop = new HashSet<>(baseline); + + baselineTop.removeAll(filterNodesByConsId(baseline, consistentIds)); + + cluster.setBaselineTopology(baselineTop); + + break; + } + + default: + assert false : "Invalid command for baseline handler: " + req; + } + + return new GridFinishedFuture<>(new GridRestResponse(currentState())); + } + catch (IgniteCheckedException e) { + return new GridFinishedFuture<>(e); + } + finally { + if (log.isDebugEnabled()) + log.debug("Handled baseline REST request: " + req); + } + } + + /** + * @return Current baseline. + */ + private Collection currentBaseLine() { + Collection baselineNodes = ctx.grid().cluster().currentBaselineTopology(); + + return baselineNodes != null ? baselineNodes : Collections.emptyList(); + } + + /** + * Collect baseline topology command result. + * + * @return Baseline descriptor. 
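The BASELINE_ADD and BASELINE_REMOVE branches above are plain set arithmetic over the current baseline before a single setBaselineTopology call. The same bookkeeping, reduced to consistent-ID strings (a hypothetical helper class, not the handler's actual types):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the add/remove bookkeeping in GridBaselineCommandHandler:
// copy the current baseline, add or remove the requested consistent IDs,
// and hand the result to one setBaselineTopology(...) call.
public class BaselineSetMath {
    static Set<String> add(Set<String> current, Set<String> ids) {
        Set<String> next = new HashSet<>(current);
        next.addAll(ids);
        return next;
    }

    static Set<String> remove(Set<String> current, Set<String> ids) {
        Set<String> next = new HashSet<>(current);
        next.removeAll(ids);
        return next;
    }

    public static void main(String[] args) {
        Set<String> baseline = Set.of("node1", "node2");

        assert add(baseline, Set.of("node3")).size() == 3;
        assert remove(baseline, Set.of("node2")).equals(Set.of("node1"));
    }
}
```

The handler additionally rejects consistent IDs that match no server node (filterNodesByConsId throws IllegalStateException), so a typo cannot silently shrink the baseline.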
+ */ + private GridBaselineCommandResponse currentState() { + IgniteClusterEx cluster = ctx.grid().cluster(); + + Collection srvrs = cluster.forServers().nodes(); + + return new GridBaselineCommandResponse(cluster.active(), cluster.topologyVersion(), currentBaseLine(), srvrs); + } + + /** + * Filter passed nodes by consistent IDs. + * + * @param nodes Collection of nodes. + * @param consistentIds Collection of consistent IDs. + * @throws IllegalStateException In case of some consistent ID not found in nodes collection. + */ + private Collection filterNodesByConsId(Collection nodes, List consistentIds) { + Map nodeMap = + nodes.stream().collect(toMap(n -> n.consistentId().toString(), identity())); + + Collection filtered = new ArrayList<>(consistentIds.size()); + + for (Object consistentId : consistentIds) { + BaselineNode node = nodeMap.get(consistentId); + + if (node == null) + throw new IllegalStateException("Node not found for consistent ID: " + consistentId); + + filtered.add(node); + } + + return filtered; + } + + /** + * Filter server nodes by consistent IDs. + * + * @param consistentIds Collection of consistent IDs to add. + * @throws IllegalStateException In case of some consistent ID not found in nodes collection. + */ + private Collection filterServerNodesByConsId(List consistentIds) { + return filterNodesByConsId(ctx.grid().cluster().forServers().nodes(), consistentIds); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/cluster/GridBaselineCommandResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/cluster/GridBaselineCommandResponse.java new file mode 100644 index 0000000000000..aae16c0ec1ad9 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/cluster/GridBaselineCommandResponse.java @@ -0,0 +1,161 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.rest.handlers.cluster; + +import java.io.Externalizable; +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import java.util.Collection; +import org.apache.ignite.cluster.BaselineNode; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; + +import static java.util.stream.Collectors.toList; + +/** + * Result for baseline command. + */ +public class GridBaselineCommandResponse implements Externalizable { + /** */ + private static final long serialVersionUID = 0L; + + /** Cluster state. */ + private boolean active; + + /** Current topology version. */ + private long topVer; + + /** Current baseline nodes. */ + private Collection baseline; + + /** Current server nodes. */ + private Collection srvs; + + /** + * @param nodes Nodes to process. + * @return Collection of consistentIds. + */ + private static Collection consistentIds(Collection nodes) { + return nodes.stream().map(n -> String.valueOf(n.consistentId())).collect(toList()); + } + + /** + * Default constructor. + */ + public GridBaselineCommandResponse() { + // No-op. + } + + /** + * Constructor. + * + * @param active Cluster state. 
+ * @param topVer Current topology version. + * @param baseline Current baseline nodes. + * @param srvs Current server nodes. + */ + GridBaselineCommandResponse( + boolean active, + long topVer, + Collection baseline, + Collection srvs + ) { + this.active = active; + this.topVer = topVer; + this.baseline = consistentIds(baseline); + this.srvs = consistentIds(srvs); + } + + /** + * @return Cluster state. + */ + public boolean isActive() { + return active; + } + + /** + * @param active Cluster active. + */ + public void setActive(boolean active) { + this.active = active; + } + + /** + * @return Current topology version. + */ + public long getTopologyVersion() { + return topVer; + } + + /** + * @param topVer Current topology version. + */ + public void setTopologyVersion(long topVer) { + this.topVer = topVer; + } + + /** + * @return Baseline nodes. + */ + public Collection getBaseline() { + return baseline; + } + + /** + * @param baseline Baseline nodes. + */ + public void setBaseline(Collection baseline) { + this.baseline = baseline; + } + + /** + * @return Server nodes. + */ + public Collection getServers() { + return srvs; + } + + /** + * @param srvs Server nodes. 
+ */ + public void setServers(Collection srvs) { + this.srvs = srvs; + } + + /** {@inheritDoc} */ + @Override public void writeExternal(ObjectOutput out) throws IOException { + out.writeBoolean(active); + out.writeLong(topVer); + U.writeCollection(out, baseline); + U.writeCollection(out, srvs); + } + + /** {@inheritDoc} */ + @Override public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { + active = in.readBoolean(); + topVer = in.readLong(); + baseline = U.readCollection(in); + srvs = U.readCollection(in); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(GridBaselineCommandResponse.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultRequest.java index b8b4edbf01bb7..f3ec0e8afacaf 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultRequest.java @@ -171,4 +171,4 @@ public void topic(String topic) { @Override public byte fieldsCount() { return 2; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultResponse.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultResponse.java index b9bb27c067182..88b42f5e80a5c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultResponse.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/task/GridTaskResultResponse.java @@ -217,4 +217,4 @@ public void error(String err) { @Override public byte fieldsCount() { return 4; } -} \ No newline at end of file +} diff --git 
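GridBaselineCommandResponse above serializes by hand via Externalizable, so the field order in writeExternal must mirror readExternal exactly. A generic round-trip sketch of the same pattern, using plain JDK streams in place of Ignite's U.writeCollection/U.readCollection helpers (the class and method names are illustrative only):

```java
import java.io.*;
import java.util.*;

// Minimal Externalizable bean in the same shape as GridBaselineCommandResponse:
// a public no-arg constructor is mandatory, and write/read field order must match.
public class BaselineBean implements Externalizable {
    private boolean active;
    private long topVer;
    private Collection<String> baseline;

    public BaselineBean() { } // required by Externalizable

    public BaselineBean(boolean active, long topVer, Collection<String> baseline) {
        this.active = active;
        this.topVer = topVer;
        this.baseline = baseline;
    }

    boolean active() { return active; }

    long topVer() { return topVer; }

    Collection<String> baseline() { return baseline; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeBoolean(active);
        out.writeLong(topVer);
        out.writeObject(new ArrayList<>(baseline));
    }

    @SuppressWarnings("unchecked")
    @Override public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        active = in.readBoolean();
        topVer = in.readLong();
        baseline = (Collection<String>)in.readObject();
    }

    /** Round-trips the bean through Java serialization. */
    static BaselineBean roundTrip(BaselineBean src) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();

        try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
            oos.writeObject(src);
        }

        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            return (BaselineBean)ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        BaselineBean copy = roundTrip(new BaselineBean(true, 5L, List.of("n1", "n2")));

        assert copy.active() && copy.topVer() == 5L && copy.baseline().size() == 2;
    }
}
```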
a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/top/GridTopologyCommandHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/top/GridTopologyCommandHandler.java index 0390936e77395..edcb3741bc331 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/top/GridTopologyCommandHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/handlers/top/GridTopologyCommandHandler.java @@ -97,6 +97,7 @@ public GridTopologyCommandHandler(GridKernalContext ctx) { boolean mtr = req0.includeMetrics(); boolean attr = req0.includeAttributes(); + boolean caches = req0.includeCaches(); switch (req.command()) { case TOPOLOGY: { @@ -107,7 +108,7 @@ public GridTopologyCommandHandler(GridKernalContext ctx) { new ArrayList<>(allNodes.size()); for (ClusterNode node : allNodes) - top.add(createNodeBean(node, mtr, attr)); + top.add(createNodeBean(node, mtr, attr, caches)); res.setResponse(top); @@ -143,7 +144,7 @@ public boolean apply(ClusterNode n) { }); if (node != null) - res.setResponse(createNodeBean(node, mtr, attr)); + res.setResponse(createNodeBean(node, mtr, attr, caches)); else res.setResponse(null); @@ -196,14 +197,15 @@ public GridClientCacheBean createCacheBean(CacheConfiguration ccfg) { } /** - * Creates node bean out of grid node. Notice that cache attribute is handled separately. + * Creates node bean out of cluster node. Notice that cache attribute is handled separately. * - * @param node Grid node. - * @param mtr {@code true} to add metrics. - * @param attr {@code true} to add attributes. + * @param node Cluster node. + * @param mtr Whether to include node metrics. + * @param attr Whether to include node attributes. + * @param caches Whether to include node caches. * @return Grid Node bean. 
*/ - private GridClientNodeBean createNodeBean(ClusterNode node, boolean mtr, boolean attr) { + private GridClientNodeBean createNodeBean(ClusterNode node, boolean mtr, boolean attr, boolean caches) { assert node != null; GridClientNodeBean nodeBean = new GridClientNodeBean(); @@ -216,14 +218,16 @@ private GridClientNodeBean createNodeBean(ClusterNode node, boolean mtr, boolean nodeBean.setTcpAddresses(nonEmptyList(node.>attribute(ATTR_REST_TCP_ADDRS))); nodeBean.setTcpHostNames(nonEmptyList(node.>attribute(ATTR_REST_TCP_HOST_NAMES))); - Map nodeCaches = ctx.discovery().nodePublicCaches(node); + if (caches) { + Map nodeCaches = ctx.discovery().nodePublicCaches(node); - Collection caches = new ArrayList<>(nodeCaches.size()); + Collection cacheBeans = new ArrayList<>(nodeCaches.size()); - for (CacheConfiguration ccfg : nodeCaches.values()) - caches.add(createCacheBean(ccfg)); + for (CacheConfiguration ccfg : nodeCaches.values()) + cacheBeans.add(createCacheBean(ccfg)); - nodeBean.setCaches(caches); + nodeBean.setCaches(cacheBeans); + } if (mtr) { ClusterMetrics metrics = node.metrics(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/request/GridRestBaselineRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/request/GridRestBaselineRequest.java new file mode 100644 index 0000000000000..a3aa042e5d0db --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/request/GridRestBaselineRequest.java @@ -0,0 +1,65 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.rest.request; + +import java.util.List; +import org.apache.ignite.internal.util.typedef.internal.S; + +/** + * Grid command topology request. + */ +public class GridRestBaselineRequest extends GridRestRequest { + /** Topology version to set. */ + private Long topVer; + + /** Collection of consistent IDs to set. */ + private List consistentIds; + + /** + * @return Topology version to set. + */ + public Long topologyVersion() { + return topVer; + } + + /** + * @param topVer New topology version to set. + */ + public void topologyVersion(Long topVer) { + this.topVer = topVer; + } + + /** + * @return Collection of consistent IDs to set. + */ + public List consistentIds() { + return consistentIds; + } + + /** + * @param consistentIds New collection of consistent IDs to set. 
+ */ + public void consistentIds(List consistentIds) { + this.consistentIds = consistentIds; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(GridRestBaselineRequest.class, this, super.toString()); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/request/GridRestTopologyRequest.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/request/GridRestTopologyRequest.java index b02836743f67b..fadd178945e2b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/request/GridRestTopologyRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/request/GridRestTopologyRequest.java @@ -36,6 +36,9 @@ public class GridRestTopologyRequest extends GridRestRequest { /** Include node attributes flag. */ private boolean includeAttrs; + /** Include caches flag. With default value for compatibility. */ + private boolean includeCaches = true; + /** * @return Include metrics flag. */ @@ -64,6 +67,20 @@ public void includeAttributes(boolean includeAttrs) { this.includeAttrs = includeAttrs; } + /** + * @return Include caches flag. + */ + public boolean includeCaches() { + return includeCaches; + } + + /** + * @param includeCaches Include caches flag. + */ + public void includeCaches(boolean includeCaches) { + this.includeCaches = includeCaches; + } + /** * @return Node identifier, if specified, {@code null} otherwise. 
*/ @@ -96,4 +113,4 @@ public void nodeIp(String nodeIp) { @Override public String toString() { return S.toString(GridRestTopologyRequest.class, this, super.toString()); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/security/SecurityContextHolder.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/security/SecurityContextHolder.java index 14d70c97cbfdb..d01071166ea73 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/security/SecurityContextHolder.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/security/SecurityContextHolder.java @@ -39,15 +39,22 @@ public class SecurityContextHolder { * Set security context. * * @param ctx Context. + * @return Old context. */ - public static void set(@Nullable SecurityContext ctx) { + public static SecurityContext push(@Nullable SecurityContext ctx) { + SecurityContext oldCtx = CTX.get(); + CTX.set(ctx); + + return oldCtx; } /** - * Clear security context. + * Pop security context. + * + * @param oldCtx Old context. 
*/ - public static void clear() { - set(null); + public static void pop(@Nullable SecurityContext oldCtx) { + CTX.set(oldCtx); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/task/GridTaskProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/task/GridTaskProcessor.java index 9007472b034f5..b81665e32002f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/task/GridTaskProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/task/GridTaskProcessor.java @@ -28,6 +28,7 @@ import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.RejectedExecutionException; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.LongAdder; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; @@ -54,6 +55,7 @@ import org.apache.ignite.internal.GridTaskSessionRequest; import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException; import org.apache.ignite.internal.IgniteDeploymentCheckedException; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.compute.ComputeTaskCancelledCheckedException; import org.apache.ignite.internal.managers.communication.GridIoManager; @@ -69,7 +71,9 @@ import org.apache.ignite.internal.util.typedef.C1; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.visor.VisorTaskArgument; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.lang.IgniteUuid; @@ -77,6 +81,7 @@ import org.apache.ignite.plugin.security.SecurityPermission; import 
org.jetbrains.annotations.Nullable; +import static org.apache.ignite.events.EventType.EVT_MANAGEMENT_TASK_STARTED; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; import static org.apache.ignite.events.EventType.EVT_NODE_LEFT; import static org.apache.ignite.events.EventType.EVT_TASK_SESSION_ATTR_SET; @@ -187,7 +192,24 @@ private IgniteClientDisconnectedCheckedException disconnectedError(@Nullable Ign /** {@inheritDoc} */ @SuppressWarnings("TooBroadScope") @Override public void onKernalStop(boolean cancel) { - lock.writeLock(); + boolean interrupted = false; + + while (true) { + try { + if (lock.tryWriteLock(1, TimeUnit.SECONDS)) + break; + else { + LT.warn(log, "Still waiting to acquire write lock on stop"); + + U.sleep(50); + } + } + catch (IgniteInterruptedCheckedException | InterruptedException e) { + LT.warn(log, "Stopping thread was interrupted while waiting for write lock (will wait anyway)"); + + interrupted = true; + } + } try { stopping = true; @@ -196,6 +218,9 @@ private IgniteClientDisconnectedCheckedException disconnectedError(@Nullable Ign } finally { lock.writeUnlock(); + + if (interrupted) + Thread.currentThread().interrupt(); } startLatch.countDown(); @@ -471,7 +496,7 @@ public String resolveTaskName(int taskNameHash) { try { return taskMetaCache().localPeek( - new GridTaskNameHashKey(taskNameHash), null, null); + new GridTaskNameHashKey(taskNameHash), null); } catch (IgniteCheckedException e) { throw new IgniteException(e); @@ -679,7 +704,10 @@ else if (task != null) { top = nodes != null ? 
F.nodeIds(nodes) : null; } - UUID subjId = getThreadContext(TC_SUBJ_ID); + UUID subjId = (UUID)map.get(TC_SUBJ_ID); + + if (subjId == null) + subjId = getThreadContext(TC_SUBJ_ID); if (subjId == null) subjId = ctx.localNodeId(); @@ -742,6 +770,24 @@ else if (task != null) { assert taskWorker0 == null : "Session ID is not unique: " + sesId; + if (ctx.event().isRecordable(EVT_MANAGEMENT_TASK_STARTED) && dep.visorManagementTask(task, taskCls)) { + VisorTaskArgument visorTaskArgument = (VisorTaskArgument)arg; + + Event evt = new TaskEvent( + ctx.discovery().localNode(), + visorTaskArgument != null && visorTaskArgument.getArgument() != null + ? visorTaskArgument.getArgument().toString() : "[]", + EVT_MANAGEMENT_TASK_STARTED, + ses.getId(), + taskCls == null ? null : taskCls.getSimpleName(), + "VisorManagementTask", + false, + subjId + ); + + ctx.event().record(evt); + } + if (!ctx.clientDisconnected()) { if (dep.annotation(taskCls, ComputeTaskMapAsync.class) != null) { try { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/task/GridVisorManagementTask.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/task/GridVisorManagementTask.java new file mode 100644 index 0000000000000..09c87bf017964 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/task/GridVisorManagementTask.java @@ -0,0 +1,38 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.task; + +import java.lang.annotation.Documented; +import java.lang.annotation.ElementType; +import java.lang.annotation.Inherited; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; + +/** + * Indicates that the annotated task is a Visor task invoked by a user. Such tasks can be handled by event listeners. + * + * This annotation is intended for internal use only. + */ +@Documented +@Inherited +@Retention(RetentionPolicy.RUNTIME) +@Target({ElementType.TYPE}) +public @interface GridVisorManagementTask { + // No-op. +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessor.java b/modules/core/src/main/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessor.java index 405e321eb7a1e..7efcea9c8a8ab 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessor.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessor.java @@ -144,15 +144,15 @@ public boolean removeTimeoutObject(GridTimeoutObject timeoutObj) { /** * Wait for a future (listen with timeout). * @param fut Future. - * @param timeout Timeout millis. -1 means expired timeout, 0 - no timeout. + * @param timeout Timeout millis. -1 means expired timeout, 0 means waiting without timeout. + * @param clo Finish closure. First argument contains error on future or null if no errors, - * second is {@code true} if wait timed out.
+ * second is {@code true} if the wait timed out or the passed timeout already meant an expired timeout. */ public void waitAsync(final IgniteInternalFuture<?> fut, long timeout, IgniteBiInClosure<IgniteCheckedException, Boolean> clo) { if (timeout == -1) { - clo.apply(null, false); + clo.apply(null, true); return; } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/GridByteArrayList.java b/modules/core/src/main/java/org/apache/ignite/internal/util/GridByteArrayList.java index 0200d7747002e..e9c7d016bdb4c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/GridByteArrayList.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/GridByteArrayList.java @@ -491,4 +491,4 @@ public InputStream inputStream() { @Override public String toString() { return S.toString(GridByteArrayList.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/GridIntList.java b/modules/core/src/main/java/org/apache/ignite/internal/util/GridIntList.java index ebe2e0765c787..c54584a78c4f1 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/GridIntList.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/GridIntList.java @@ -600,4 +600,4 @@ public GridIntIterator iterator() { } }; } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/GridLongList.java b/modules/core/src/main/java/org/apache/ignite/internal/util/GridLongList.java index d1f20e634a03c..1c022b0a4ce5c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/GridLongList.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/GridLongList.java @@ -26,6 +26,7 @@ import java.nio.ByteBuffer; import java.util.Arrays; import java.util.NoSuchElementException; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import
org.apache.ignite.internal.util.typedef.internal.SB; @@ -38,10 +39,14 @@ * Minimal list API to work with primitive longs. This list exists * to avoid boxing/unboxing when using standard list from Java. */ +@IgniteCodeGeneratingFail public class GridLongList implements Message, Externalizable { /** */ private static final long serialVersionUID = 0L; + /** Empty array. */ + public static final long[] EMPTY_ARRAY = new long[0]; + /** */ private long[] arr; @@ -390,6 +395,9 @@ public int replaceValue(int startIdx, long oldVal, long newVal) { * @return Array copy. */ public long[] array() { + if (arr == null) + return EMPTY_ARRAY; + long[] res = new long[idx]; System.arraycopy(arr, 0, res, 0, idx); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteStopwatch.java b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteStopwatch.java new file mode 100644 index 0000000000000..83cd7bc8d3ba6 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteStopwatch.java @@ -0,0 +1,230 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +/* + * Copyright (C) 2008 The Guava Authors + */ + +package org.apache.ignite.internal.util; + +import java.time.Duration; +import java.util.concurrent.TimeUnit; +import org.jetbrains.annotations.NotNull; + +import static java.util.concurrent.TimeUnit.DAYS; +import static java.util.concurrent.TimeUnit.HOURS; +import static java.util.concurrent.TimeUnit.MICROSECONDS; +import static java.util.concurrent.TimeUnit.MILLISECONDS; +import static java.util.concurrent.TimeUnit.MINUTES; +import static java.util.concurrent.TimeUnit.NANOSECONDS; +import static java.util.concurrent.TimeUnit.SECONDS; + +/** + * An object that measures elapsed time in nanoseconds. It is useful to measure elapsed time using + * this class instead of direct calls to {@link System#nanoTime} for a few reasons: + * + *
+ * <ul>
+ *   <li>An alternate time source can be substituted, for testing or performance reasons.</li>
+ *   <li>As documented by {@code nanoTime}, the value returned has no absolute meaning, and can only
+ *       be interpreted as relative to another timestamp returned by {@code nanoTime} at a different
+ *       time. {@code Stopwatch} is a more effective abstraction because it exposes only these
+ *       relative values, not the absolute ones.</li>
+ * </ul>
+ *
+ * <p>Basic usage:
+ *
+ * <pre>{@code
+ * Stopwatch stopwatch = Stopwatch.createStarted();
+ * doSomething();
+ * stopwatch.stop(); // optional
+ *
+ * Duration duration = stopwatch.elapsed();
+ *
+ * log.info("time: " + stopwatch); // formatted string like "12.3 ms"
+ * }</pre>
+ *
+ * <p>Stopwatch methods are not idempotent; it is an error to start or stop a stopwatch that is
+ * already in the desired state.
+ *
+ * <p>When testing code that uses this class, use {@link #createUnstarted(IgniteTicker)} or {@link
+ * #createStarted(IgniteTicker)} to supply a fake or mock ticker. This allows you to simulate any valid
+ * behavior of the stopwatch.
+ *
+ * <p>Note: This class is not thread-safe.
+ *
+ * <p>Warning for Android users: a stopwatch with default behavior may not continue to keep
+ * time while the device is asleep. Instead, create one like this:
+ *
+ * <pre>{@code
+ * Stopwatch.createStarted(
+ *      new Ticker() {
+ *        public long read() {
+ *          return android.os.SystemClock.elapsedRealtimeNanos();
+ *        }
+ *      });
+ * }</pre>
+ */ +@SuppressWarnings("GoodTime") // lots of violations +public final class IgniteStopwatch { + /** Ticker. */ + private final IgniteTicker ticker; + /** Is running. */ + private boolean isRunning; + /** Elapsed nanos. */ + private long elapsedNanos; + /** Start tick. */ + private long startTick; + + /** + * Creates (but does not start) a new stopwatch using {@link System#nanoTime} as its time source. + */ + public static IgniteStopwatch createUnstarted() { + return new IgniteStopwatch(); + } + + /** + * Creates (but does not start) a new stopwatch, using the specified time source. + */ + public static IgniteStopwatch createUnstarted(IgniteTicker ticker) { + return new IgniteStopwatch(ticker); + } + + /** + * Creates (and starts) a new stopwatch using {@link System#nanoTime} as its time source. + */ + public static IgniteStopwatch createStarted() { + return new IgniteStopwatch().start(); + } + + /** + * Creates (and starts) a new stopwatch, using the specified time source. + */ + public static IgniteStopwatch createStarted(IgniteTicker ticker) { + return new IgniteStopwatch(ticker).start(); + } + + /** + * Default constructor. + */ + IgniteStopwatch() { + this.ticker = IgniteTicker.systemTicker(); + } + + /** + * @param ticker Ticker. + */ + IgniteStopwatch(@NotNull IgniteTicker ticker) { + this.ticker = ticker; + } + + /** + * Returns {@code true} if {@link #start()} has been called on this stopwatch, and {@link #stop()} + * has not been called since the last call to {@code start()}. + */ + public boolean isRunning() { + return isRunning; + } + + /** + * Starts the stopwatch. + * + * @return this {@code Stopwatch} instance + * @throws IllegalStateException if the stopwatch is already running. + */ + public IgniteStopwatch start() { + assert !isRunning : "This stopwatch is already running."; + + isRunning = true; + + startTick = ticker.read(); + + return this; + } + + /** + * Stops the stopwatch. 
Future reads will return the fixed duration that had elapsed up to this + * point. + * + * @return this {@code Stopwatch} instance + * @throws IllegalStateException if the stopwatch is already stopped. + */ + public IgniteStopwatch stop() { + long tick = ticker.read(); + + assert isRunning : "This stopwatch is already stopped."; + + isRunning = false; + elapsedNanos += tick - startTick; + return this; + } + + /** + * Sets the elapsed time for this stopwatch to zero, and places it in a stopped state. + * + * @return this {@code Stopwatch} instance + */ + public IgniteStopwatch reset() { + elapsedNanos = 0; + + isRunning = false; + + return this; + } + + /** + * @return Current elapsed time in nanoseconds. + */ + private long elapsedNanos() { + return isRunning ? ticker.read() - startTick + elapsedNanos : elapsedNanos; + } + + /** + * Returns the current elapsed time shown on this stopwatch, expressed in the desired time unit, + * with any fraction rounded down. + * + *
<p>Note: the overhead of measurement can be more than a microsecond, so it is generally + * not useful to specify {@link TimeUnit#NANOSECONDS} precision here. + * + * <p>
It is generally not a good idea to use an ambiguous, unitless {@code long} to represent + * elapsed time. Therefore, we recommend using {@link #elapsed()} instead, which returns a + * strongly-typed {@link Duration} instance. + */ + public long elapsed(TimeUnit desiredUnit) { + return desiredUnit.convert(elapsedNanos(), NANOSECONDS); + } + + /** + * Returns the current elapsed time shown on this stopwatch as a {@link Duration}. Unlike {@link + * #elapsed(TimeUnit)}, this method does not lose any precision due to rounding. + */ + public Duration elapsed() { + return Duration.ofNanos(elapsedNanos()); + } + + /** + * @param nanos Nanos. + */ + private static TimeUnit chooseUnit(long nanos) { + if (DAYS.convert(nanos, NANOSECONDS) > 0) return DAYS; + if (HOURS.convert(nanos, NANOSECONDS) > 0) return HOURS; + if (MINUTES.convert(nanos, NANOSECONDS) > 0) return MINUTES; + if (SECONDS.convert(nanos, NANOSECONDS) > 0) return SECONDS; + if (MILLISECONDS.convert(nanos, NANOSECONDS) > 0) return MILLISECONDS; + if (MICROSECONDS.convert(nanos, NANOSECONDS) > 0) return MICROSECONDS; + return NANOSECONDS; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteTicker.java b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteTicker.java new file mode 100644 index 0000000000000..1f7f41d9a8d76 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteTicker.java @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/* + * Copyright (C) 2011 The Guava Authors + */ + +package org.apache.ignite.internal.util; + +/** + * A time source; returns a time value representing the number of nanoseconds elapsed since some + * fixed but arbitrary point in time. Note that most users should use {@link IgniteStopwatch} instead of + * interacting with this class directly. + * + *
<p>
Warning: this interface can only be used to measure elapsed time, not wall time. + */ +public abstract class IgniteTicker { + /** Constructor for use by subclasses. */ + protected IgniteTicker() {} + + /** Returns the number of nanoseconds elapsed since this ticker's fixed point of reference. */ + public abstract long read(); + + /** + * A ticker that reads the current time using {@link System#nanoTime}. + */ + public static IgniteTicker systemTicker() { + return SYSTEM_TICKER; + } + + /** System ticker. */ + private static final IgniteTicker SYSTEM_TICKER = + new IgniteTicker() { + @Override public long read() { + return System.nanoTime(); + } + }; +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java index b8ba74262306b..b0385b84ea5d9 100755 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java @@ -17,6 +17,114 @@ package org.apache.ignite.internal.util; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteClientDisconnectedException; +import org.apache.ignite.IgniteDeploymentException; +import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteIllegalStateException; +import org.apache.ignite.IgniteInterruptedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.binary.BinaryRawReader; +import org.apache.ignite.binary.BinaryRawWriter; +import org.apache.ignite.cluster.ClusterGroupEmptyException; +import org.apache.ignite.cluster.ClusterMetrics; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.cluster.ClusterTopologyException; +import org.apache.ignite.compute.ComputeTask; +import org.apache.ignite.compute.ComputeTaskCancelledException; +import 
org.apache.ignite.compute.ComputeTaskName; +import org.apache.ignite.compute.ComputeTaskTimeoutException; +import org.apache.ignite.configuration.AddressResolver; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.EventType; +import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException; +import org.apache.ignite.internal.IgniteDeploymentCheckedException; +import org.apache.ignite.internal.IgniteFutureCancelledCheckedException; +import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.IgniteNodeAttributes; +import org.apache.ignite.internal.cluster.ClusterGroupEmptyCheckedException; +import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; +import org.apache.ignite.internal.compute.ComputeTaskCancelledCheckedException; +import org.apache.ignite.internal.compute.ComputeTaskTimeoutCheckedException; +import org.apache.ignite.internal.events.DiscoveryCustomEvent; +import org.apache.ignite.internal.managers.communication.GridIoManager; +import org.apache.ignite.internal.managers.communication.GridIoPolicy; +import org.apache.ignite.internal.managers.deployment.GridDeploymentInfo; +import org.apache.ignite.internal.mxbean.IgniteStandardMXBean; +import org.apache.ignite.internal.processors.cache.CacheClassLoaderMarker; +import org.apache.ignite.internal.processors.cache.GridCacheAttributes; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cluster.BaselineTopology; +import 
org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException; +import org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException; +import org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException; +import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.future.IgniteFutureImpl; +import org.apache.ignite.internal.util.io.GridFilenameUtils; +import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryNativeLoader; +import org.apache.ignite.internal.util.lang.GridClosureException; +import org.apache.ignite.internal.util.lang.GridPeerDeployAware; +import org.apache.ignite.internal.util.lang.GridTuple; +import org.apache.ignite.internal.util.lang.IgniteThrowableConsumer; +import org.apache.ignite.internal.util.typedef.C1; +import org.apache.ignite.internal.util.typedef.CI1; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.internal.util.typedef.P1; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.internal.util.typedef.internal.A; +import org.apache.ignite.internal.util.typedef.internal.SB; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.util.worker.GridWorker; +import org.apache.ignite.lang.IgniteBiTuple; +import org.apache.ignite.lang.IgniteClosure; +import org.apache.ignite.lang.IgniteFutureCancelledException; +import org.apache.ignite.lang.IgniteFutureTimeoutException; +import org.apache.ignite.lang.IgniteOutClosure; +import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.lang.IgniteProductVersion; +import org.apache.ignite.lang.IgniteUuid; +import org.apache.ignite.lifecycle.LifecycleAware; +import org.apache.ignite.marshaller.Marshaller; +import org.apache.ignite.plugin.PluginProvider; +import 
org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.plugin.extensions.communication.MessageWriter; +import org.apache.ignite.spi.IgniteSpi; +import org.apache.ignite.spi.IgniteSpiException; +import org.apache.ignite.spi.discovery.DiscoverySpi; +import org.apache.ignite.spi.discovery.DiscoverySpiOrderSupport; +import org.apache.ignite.transactions.TransactionDeadlockException; +import org.apache.ignite.transactions.TransactionHeuristicException; +import org.apache.ignite.transactions.TransactionOptimisticException; +import org.apache.ignite.transactions.TransactionRollbackException; +import org.apache.ignite.transactions.TransactionTimeoutException; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import sun.misc.Unsafe; + +import javax.management.DynamicMBean; +import javax.management.JMException; +import javax.management.MBeanRegistrationException; +import javax.management.MBeanServer; +import javax.management.MalformedObjectNameException; +import javax.management.ObjectName; +import javax.naming.Context; +import javax.naming.NamingException; +import javax.net.ssl.HostnameVerifier; +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLContext; +import javax.net.ssl.SSLSession; +import javax.net.ssl.TrustManager; +import javax.net.ssl.X509TrustManager; import java.io.BufferedInputStream; import java.io.BufferedOutputStream; import java.io.ByteArrayInputStream; @@ -32,10 +140,13 @@ import java.io.InputStream; import java.io.InputStreamReader; import java.io.ObjectInput; +import java.io.ObjectInputStream; import java.io.ObjectOutput; +import java.io.ObjectOutputStream; import java.io.OutputStream; import java.io.PrintStream; import java.io.Reader; +import java.io.Serializable; import java.io.StringWriter; import java.io.Writer; import java.lang.annotation.Annotation; @@ -72,6 +183,7 @@ import java.nio.channels.FileLock; import java.nio.channels.SelectionKey; import 
java.nio.channels.Selector; +import java.nio.channels.SocketChannel; import java.nio.charset.Charset; import java.nio.file.DirectoryStream; import java.nio.file.FileVisitResult; @@ -92,7 +204,6 @@ import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Arrays; -import java.util.Calendar; import java.util.Collection; import java.util.Collections; import java.util.Comparator; @@ -107,10 +218,10 @@ import java.util.List; import java.util.Map; import java.util.NoSuchElementException; +import java.util.Random; import java.util.ServiceLoader; import java.util.Set; import java.util.StringTokenizer; -import java.util.TimeZone; import java.util.TreeMap; import java.util.UUID; import java.util.concurrent.BrokenBarrierException; @@ -137,112 +248,12 @@ import java.util.logging.Level; import java.util.logging.Logger; import java.util.regex.Pattern; +import java.util.stream.Collectors; +import java.util.zip.Deflater; import java.util.zip.ZipEntry; import java.util.zip.ZipFile; import java.util.zip.ZipInputStream; import java.util.zip.ZipOutputStream; -import javax.management.DynamicMBean; -import javax.management.JMException; -import javax.management.MBeanRegistrationException; -import javax.management.MBeanServer; -import javax.management.MalformedObjectNameException; -import javax.management.ObjectName; -import javax.naming.Context; -import javax.naming.NamingException; -import javax.net.ssl.HostnameVerifier; -import javax.net.ssl.HttpsURLConnection; -import javax.net.ssl.SSLContext; -import javax.net.ssl.SSLSession; -import javax.net.ssl.TrustManager; -import javax.net.ssl.X509TrustManager; -import org.apache.ignite.Ignite; -import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.IgniteClientDisconnectedException; -import org.apache.ignite.IgniteDeploymentException; -import org.apache.ignite.IgniteException; -import org.apache.ignite.IgniteIllegalStateException; -import org.apache.ignite.IgniteInterruptedException; -import 
org.apache.ignite.IgniteLogger; -import org.apache.ignite.IgniteSystemProperties; -import org.apache.ignite.binary.BinaryRawReader; -import org.apache.ignite.binary.BinaryRawWriter; -import org.apache.ignite.cluster.ClusterGroupEmptyException; -import org.apache.ignite.cluster.ClusterMetrics; -import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.cluster.ClusterTopologyException; -import org.apache.ignite.compute.ComputeTask; -import org.apache.ignite.compute.ComputeTaskCancelledException; -import org.apache.ignite.compute.ComputeTaskName; -import org.apache.ignite.compute.ComputeTaskTimeoutException; -import org.apache.ignite.configuration.AddressResolver; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.events.EventType; -import org.apache.ignite.internal.GridKernalContext; -import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException; -import org.apache.ignite.internal.IgniteDeploymentCheckedException; -import org.apache.ignite.internal.IgniteFutureCancelledCheckedException; -import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; -import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.internal.IgniteInterruptedCheckedException; -import org.apache.ignite.internal.IgniteNodeAttributes; -import org.apache.ignite.internal.cluster.ClusterGroupEmptyCheckedException; -import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; -import org.apache.ignite.internal.compute.ComputeTaskCancelledCheckedException; -import org.apache.ignite.internal.compute.ComputeTaskTimeoutCheckedException; -import org.apache.ignite.internal.events.DiscoveryCustomEvent; -import org.apache.ignite.internal.managers.communication.GridIoManager; -import org.apache.ignite.internal.managers.deployment.GridDeploymentInfo; -import org.apache.ignite.internal.mxbean.IgniteStandardMXBean; -import org.apache.ignite.internal.processors.cache.CacheClassLoaderMarker; -import 
org.apache.ignite.internal.processors.cache.GridCacheAttributes; -import org.apache.ignite.internal.processors.cache.GridCacheContext; -import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; -import org.apache.ignite.internal.processors.cluster.BaselineTopology; -import org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException; -import org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException; -import org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException; -import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException; -import org.apache.ignite.internal.util.future.IgniteFutureImpl; -import org.apache.ignite.internal.util.io.GridFilenameUtils; -import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryNativeLoader; -import org.apache.ignite.internal.util.lang.GridClosureException; -import org.apache.ignite.internal.util.lang.GridPeerDeployAware; -import org.apache.ignite.internal.util.lang.GridTuple; -import org.apache.ignite.internal.util.typedef.C1; -import org.apache.ignite.internal.util.typedef.CI1; -import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.internal.util.typedef.P1; -import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.internal.util.typedef.internal.A; -import org.apache.ignite.internal.util.typedef.internal.SB; -import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.internal.util.worker.GridWorker; -import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.lang.IgniteClosure; -import org.apache.ignite.lang.IgniteFutureCancelledException; -import org.apache.ignite.lang.IgniteFutureTimeoutException; -import org.apache.ignite.lang.IgniteOutClosure; -import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.lang.IgniteProductVersion; -import org.apache.ignite.lang.IgniteUuid; -import 
org.apache.ignite.lifecycle.LifecycleAware; -import org.apache.ignite.marshaller.Marshaller; -import org.apache.ignite.plugin.PluginProvider; -import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.plugin.extensions.communication.MessageWriter; -import org.apache.ignite.spi.IgniteSpi; -import org.apache.ignite.spi.IgniteSpiException; -import org.apache.ignite.spi.discovery.DiscoverySpi; -import org.apache.ignite.spi.discovery.DiscoverySpiOrderSupport; -import org.apache.ignite.transactions.TransactionDeadlockException; -import org.apache.ignite.transactions.TransactionHeuristicException; -import org.apache.ignite.transactions.TransactionOptimisticException; -import org.apache.ignite.transactions.TransactionRollbackException; -import org.apache.ignite.transactions.TransactionTimeoutException; -import org.jetbrains.annotations.NotNull; -import org.jetbrains.annotations.Nullable; -import sun.misc.Unsafe; import static org.apache.ignite.IgniteSystemProperties.IGNITE_DISABLE_HOSTNAME_VERIFIER; import static org.apache.ignite.IgniteSystemProperties.IGNITE_HOME; @@ -272,6 +283,15 @@ */ @SuppressWarnings({"UnusedReturnValue", "UnnecessaryFullyQualifiedName", "RedundantStringConstructorCall"}) public abstract class IgniteUtils { + /** Bytes in a gigabyte. */ + private static final long GB = 1024L * 1024 * 1024; + + /** Minimum checkpointing page buffer size (may be adjusted by Ignite). */ + public static final Long DFLT_MIN_CHECKPOINTING_PAGE_BUFFER_SIZE = GB / 4; + + /** Maximum checkpointing page buffer size (may be adjusted by Ignite). */ + public static final Long DFLT_MAX_CHECKPOINTING_PAGE_BUFFER_SIZE = 2 * GB; + /** {@code True} if {@code unsafe} should be used for array copy. */ private static final boolean UNSAFE_BYTE_ARR_CP = unsafeByteArrayCopyAvailable(); @@ -477,6 +497,9 @@ public abstract class IgniteUtils { /** Ignite Work Directory.
*/ public static final String IGNITE_WORK_DIR = System.getenv(IgniteSystemProperties.IGNITE_WORK_DIR); + /** Random is used to pick a random server node for authentication from a client node. */ + private static final Random RND = new Random(System.currentTimeMillis()); + /** Clock timer. */ private static Thread timer; @@ -7176,183 +7199,52 @@ private static boolean checkNextToken(StringTokenizer t, String str, String date } /** + * Adds values to collection and returns the same collection to allow chaining. * - * @param str ISO date. - * @return Calendar instance. - * @throws IgniteCheckedException Thrown in case of any errors. + * @param c Collection to add values to. + * @param vals Values. + * @param <V> Value type. + * @return Passed in collection. */ - public static Calendar parseIsoDate(String str) throws IgniteCheckedException { - StringTokenizer t = new StringTokenizer(str, "+-:.TZ", true); + public static <V, C extends Collection<? super V>> C addAll(C c, V... vals) { + Collections.addAll(c, vals); - Calendar cal = Calendar.getInstance(); - cal.clear(); + return c; + } - try { - if (t.hasMoreTokens()) - cal.set(Calendar.YEAR, Integer.parseInt(t.nextToken())); - else - return cal; + /** + * Adds entries to the map and returns the same map to allow chaining. + * + * @param m Map to add entries to. + * @param entries Entries. + * @param <K> Key type. + * @param <V> Value type. + * @param <M> Map type. + * @return Passed in map. + */ + public static <K, V, M extends Map<K, V>> M addAll(M m, Map.Entry<K, V>... entries) { + for (Map.Entry<K, V> e : entries) + m.put(e.getKey(), e.getValue()); - if (checkNextToken(t, "-", str) && t.hasMoreTokens()) - cal.set(Calendar.MONTH, Integer.parseInt(t.nextToken()) - 1); - else - return cal; + return m; + } - if (checkNextToken(t, "-", str) && t.hasMoreTokens()) - cal.set(Calendar.DAY_OF_MONTH, Integer.parseInt(t.nextToken())); - else - return cal; + /** + * Adds entries to the map and returns the same map to allow chaining. + * + * @param m Map to add entries to.
+ * @param entries Entries. + * @param <K> Key type. + * @param <V> Value type. + * @param <M> Map type. + * @return Passed in map. + */ + public static <K, V, M extends Map<K, V>> M addAll(M m, IgniteBiTuple<K, V>... entries) { + for (IgniteBiTuple<K, V> t : entries) + m.put(t.get1(), t.get2()); - if (checkNextToken(t, "T", str) && t.hasMoreTokens()) - cal.set(Calendar.HOUR_OF_DAY, Integer.parseInt(t.nextToken())); - else { - cal.set(Calendar.HOUR_OF_DAY, 0); - cal.set(Calendar.MINUTE, 0); - cal.set(Calendar.SECOND, 0); - cal.set(Calendar.MILLISECOND, 0); - - return cal; - } - - if (checkNextToken(t, ":", str) && t.hasMoreTokens()) - cal.set(Calendar.MINUTE, Integer.parseInt(t.nextToken())); - else { - cal.set(Calendar.MINUTE, 0); - cal.set(Calendar.SECOND, 0); - cal.set(Calendar.MILLISECOND, 0); - - return cal; - } - - if (!t.hasMoreTokens()) - return cal; - - String tok = t.nextToken(); - - if (":".equals(tok)) { // Seconds. - if (t.hasMoreTokens()) { - cal.set(Calendar.SECOND, Integer.parseInt(t.nextToken())); - - if (!t.hasMoreTokens()) - return cal; - - tok = t.nextToken(); - - if (".".equals(tok)) { - String nt = t.nextToken(); - - while (nt.length() < 3) - nt += "0"; - - nt = nt.substring(0, 3); // Cut trailing chars.
- - cal.set(Calendar.MILLISECOND, Integer.parseInt(nt)); - - if (!t.hasMoreTokens()) - return cal; - - tok = t.nextToken(); - } - else - cal.set(Calendar.MILLISECOND, 0); - } - else - throw new IgniteCheckedException("Invalid date format: " + str); - } - else { - cal.set(Calendar.SECOND, 0); - cal.set(Calendar.MILLISECOND, 0); - } - - if (!"Z".equals(tok)) { - if (!"+".equals(tok) && !"-".equals(tok)) - throw new IgniteCheckedException("Invalid date format: " + str); - - boolean plus = "+".equals(tok); - - if (!t.hasMoreTokens()) - throw new IgniteCheckedException("Invalid date format: " + str); - - tok = t.nextToken(); - - int tzHour; - int tzMin; - - if (tok.length() == 4) { - tzHour = Integer.parseInt(tok.substring(0, 2)); - tzMin = Integer.parseInt(tok.substring(2, 4)); - } - else { - tzHour = Integer.parseInt(tok); - - if (checkNextToken(t, ":", str) && t.hasMoreTokens()) - tzMin = Integer.parseInt(t.nextToken()); - else - throw new IgniteCheckedException("Invalid date format: " + str); - } - - if (plus) - cal.set(Calendar.ZONE_OFFSET, (tzHour * 60 + tzMin) * 60 * 1000); - else - cal.set(Calendar.ZONE_OFFSET, -(tzHour * 60 + tzMin) * 60 * 1000); - } - else - cal.setTimeZone(TimeZone.getTimeZone("GMT")); - } - catch (NumberFormatException ex) { - throw new IgniteCheckedException("Invalid date format: " + str, ex); - } - - return cal; - } - - /** - * Adds values to collection and returns the same collection to allow chaining. - * - * @param c Collection to add values to. - * @param vals Values. - * @param Value type. - * @return Passed in collection. - */ - public static > C addAll(C c, V... vals) { - Collections.addAll(c, vals); - - return c; - } - - /** - * Adds values to collection and returns the same collection to allow chaining. - * - * @param m Map to add entries to. - * @param entries Entries. - * @param Key type. - * @param Value type. - * @param Map type. - * @return Passed in collection. - */ - public static > M addAll(M m, Map.Entry... 
entries) { - for (Map.Entry e : entries) - m.put(e.getKey(), e.getValue()); - - return m; - } - - /** - * Adds values to collection and returns the same collection to allow chaining. - * - * @param m Map to add entries to. - * @param entries Entries. - * @param Key type. - * @param Value type. - * @param Map type. - * @return Passed in collection. - */ - public static > M addAll(M m, IgniteBiTuple... entries) { - for (IgniteBiTuple t : entries) - m.put(t.get1(), t.get2()); - - return m; - } + return m; + } /** * Utility method creating {@link JMException} with given cause. @@ -7411,20 +7303,6 @@ public static IgniteCheckedException cast(Throwable t) { : new IgniteCheckedException(t); } - /** - * Parses passed string with specified date. - * - * @param src String to parse. - * @param ptrn Pattern. - * @return Parsed date. - * @throws java.text.ParseException If exception occurs while parsing. - */ - public static Date parse(String src, String ptrn) throws java.text.ParseException { - java.text.DateFormat format = new java.text.SimpleDateFormat(ptrn); - - return format.parse(src); - } - /** * Checks if class loader is an internal P2P class loader. * @@ -8668,6 +8546,17 @@ public static long safeAbs(long i) { return i < 0 ? 0 : i; } + /** + * When {@code long} value given is positive returns that value, otherwise returns provided default value. + * + * @param i Input value. + * @param dflt Default value. + * @return {@code i} if {@code i > 0} and {@code dflt} otherwise. + */ + public static long ensurePositive(long i, long dflt) { + return i <= 0 ? dflt : i; + } + /** * Gets wrapper class for a primitive type. * @@ -10305,11 +10194,23 @@ public static void restoreOldIgniteName(@Nullable String oldName, @Nullable Stri } /** + * Zip binary payload using default compression. + * * @param bytes Byte array to compress. * @return Compressed bytes. * @throws IgniteCheckedException If failed. 
*/ public static byte[] zip(@Nullable byte[] bytes) throws IgniteCheckedException { + return zip(bytes, Deflater.DEFAULT_COMPRESSION); + } + + /** + * @param bytes Byte array to compress. + * @param compressionLevel Compression level to use (see {@link Deflater}). + * @return Compressed bytes. + * @throws IgniteCheckedException If failed. + */ + public static byte[] zip(@Nullable byte[] bytes, int compressionLevel) throws IgniteCheckedException { try { if (bytes == null) return null; @@ -10317,6 +10218,8 @@ public static byte[] zip(@Nullable byte[] bytes) throws IgniteCheckedException { ByteArrayOutputStream bos = new ByteArrayOutputStream(); try (ZipOutputStream zos = new ZipOutputStream(bos)) { + zos.setLevel(compressionLevel); + ZipEntry entry = new ZipEntry(""); try { @@ -10338,6 +10241,118 @@ public static byte[] zip(@Nullable byte[] bytes) throws IgniteCheckedException { } } + /** + * Serialize object to byte array. + * + * @param obj Object. + * @return Serialized object. + */ + public static byte[] toBytes(Serializable obj) { + try (ByteArrayOutputStream bos = new ByteArrayOutputStream(); + ObjectOutputStream oos = new ObjectOutputStream(bos)) { + + oos.writeObject(obj); + oos.flush(); + + return bos.toByteArray(); + } + catch (IOException e) { + throw new IgniteException(e); + } + } + + /** + * Deserialize object from byte array. + * + * @param data Serialized object. + * @param <T> Deserialized object type. + * @return Object. + */ + public static <T> T fromBytes(byte[] data) { + try (ByteArrayInputStream bis = new ByteArrayInputStream(data); + ObjectInputStream ois = new ObjectInputStream(bis)) { + + return (T)ois.readObject(); + } + catch (IOException | ClassNotFoundException e) { + throw new IgniteException(e); + } + } + + /** + * Get checkpoint buffer size for the given configuration. + * + * @param regCfg Data region configuration. + * @return Checkpoint buffer size.
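Aside from the patch itself: the new `zip(byte[], int)` overload just threads a compression level into a single-entry ZIP stream. The same pattern can be sketched standalone with plain `java.util.zip` (class names and the `unzip` helper below are illustrative, not part of the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipRoundTrip {
    /** Compresses bytes into a single-entry ZIP stream at the given Deflater level. */
    public static byte[] zip(byte[] bytes, int level) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.setLevel(level);
            zos.putNextEntry(new ZipEntry(""));
            zos.write(bytes);
            zos.closeEntry();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }

        return bos.toByteArray();
    }

    /** Reads the single entry back out of the ZIP stream. */
    public static byte[] unzip(byte[] zipped) {
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipped))) {
            zis.getNextEntry();

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];

            for (int n; (n = zis.read(buf)) != -1; )
                out.write(buf, 0, n);

            return out.toByteArray();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = new byte[10_000]; // Highly compressible all-zero payload.

        byte[] best = zip(data, Deflater.BEST_COMPRESSION);
        byte[] none = zip(data, Deflater.NO_COMPRESSION);

        if (!Arrays.equals(data, unzip(best)))
            throw new AssertionError("Round trip failed");

        // Level 0 stores the payload essentially verbatim, so it must be larger.
        if (none.length <= best.length)
            throw new AssertionError("Expected uncompressed entry to be larger");

        System.out.println("ok");
    }
}
```

This mirrors the patch's structure (empty-named `ZipEntry`, `setLevel` before writing), with I/O errors wrapped into unchecked exceptions for brevity.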
+ */ + public static long checkpointBufferSize(DataRegionConfiguration regCfg) { + if (!regCfg.isPersistenceEnabled()) + return 0L; + + long res = regCfg.getCheckpointPageBufferSize(); + + if (res == 0L) { + if (regCfg.getMaxSize() < GB) + res = Math.min(DFLT_MIN_CHECKPOINTING_PAGE_BUFFER_SIZE, regCfg.getMaxSize()); + else if (regCfg.getMaxSize() < 8 * GB) + res = regCfg.getMaxSize() / 4; + else + res = DFLT_MAX_CHECKPOINTING_PAGE_BUFFER_SIZE; + } + + return res; + } + + /** + * Calculates maximum WAL archive size based on maximum checkpoint buffer size, if the default value of + * {@link DataStorageConfiguration#getMaxWalArchiveSize()} is not overridden. + * + * @return User-set max WAL archive size, or quadruple the maximum checkpoint buffer size. + */ + public static long adjustedWalHistorySize(DataStorageConfiguration dsCfg, @Nullable IgniteLogger log) { + if (dsCfg.getMaxWalArchiveSize() != DataStorageConfiguration.DFLT_WAL_ARCHIVE_MAX_SIZE) + return dsCfg.getMaxWalArchiveSize(); + + // Find out the maximum checkpoint buffer size.
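The sizing rule in `checkpointBufferSize` above can be restated without the Ignite configuration classes. A standalone sketch (class and method names are illustrative; the thresholds come from the patch's constants, 256 MB minimum and 2 GB maximum):

```java
public class CheckpointBufSize {
    public static final long GB = 1024L * 1024 * 1024;
    public static final long MIN = GB / 4; // DFLT_MIN_CHECKPOINTING_PAGE_BUFFER_SIZE
    public static final long MAX = 2 * GB; // DFLT_MAX_CHECKPOINTING_PAGE_BUFFER_SIZE

    /**
     * Mirrors the rule from the patch: an explicit size wins; otherwise small
     * regions (under 1 GB) get min(256 MB, region size), mid-size regions
     * (under 8 GB) get a quarter of the region, and large regions are capped at 2 GB.
     */
    public static long size(boolean persistenceEnabled, long explicitSize, long regionMaxSize) {
        if (!persistenceEnabled)
            return 0L;

        if (explicitSize != 0L)
            return explicitSize;

        if (regionMaxSize < GB)
            return Math.min(MIN, regionMaxSize);

        if (regionMaxSize < 8 * GB)
            return regionMaxSize / 4;

        return MAX;
    }

    public static void main(String[] args) {
        // 512 MB region -> capped at the 256 MB minimum default.
        if (size(true, 0L, GB / 2) != MIN)
            throw new AssertionError();

        // 4 GB region -> a quarter of the region, i.e. 1 GB.
        if (size(true, 0L, 4 * GB) != GB)
            throw new AssertionError();

        // 16 GB region -> hard cap of 2 GB.
        if (size(true, 0L, 16 * GB) != MAX)
            throw new AssertionError();

        System.out.println("ok");
    }
}
```

The same quarter-of-region slope explains why `adjustedWalHistorySize` multiplies the largest checkpoint buffer by four: it sizes the WAL archive back up to roughly the region scale the buffer was derived from.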
+ long maxCpBufSize = 0; + + if (dsCfg.getDataRegionConfigurations() != null) { + for (DataRegionConfiguration regCfg : dsCfg.getDataRegionConfigurations()) { + long cpBufSize = checkpointBufferSize(regCfg); + + if (cpBufSize > regCfg.getMaxSize()) + cpBufSize = regCfg.getMaxSize(); + + if (cpBufSize > maxCpBufSize) + maxCpBufSize = cpBufSize; + } + } + + { + DataRegionConfiguration regCfg = dsCfg.getDefaultDataRegionConfiguration(); + + long cpBufSize = checkpointBufferSize(regCfg); + + if (cpBufSize > regCfg.getMaxSize()) + cpBufSize = regCfg.getMaxSize(); + + if (cpBufSize > maxCpBufSize) + maxCpBufSize = cpBufSize; + } + + long adjustedWalArchiveSize = maxCpBufSize * 4; + + if (adjustedWalArchiveSize > dsCfg.getMaxWalArchiveSize()) { + if (log != null) + U.quietAndInfo(log, "Automatically adjusted max WAL archive size to " + + U.readableSize(adjustedWalArchiveSize, false) + + " (to override, use DataStorageConfiguration.setMaxWalArchiveSize)"); + + return adjustedWalArchiveSize; + } + + return dsCfg.getMaxWalArchiveSize(); + } + /** * Return count of regular file in the directory (including in sub-directories) * @@ -10579,6 +10594,378 @@ public static String toHexString(ByteBuffer buf) { return sb.toString(); } + /** + * @param ctx Kernel context. + * @return Random alive server node. + */ + public static ClusterNode randomServerNode(GridKernalContext ctx) { + Collection<ClusterNode> aliveNodes = ctx.discovery().aliveServerNodes(); + + int rndIdx = RND.nextInt(aliveNodes.size()) + 1; + + int i = 0; + ClusterNode rndNode = null; + + for (Iterator<ClusterNode> it = aliveNodes.iterator(); i < rndIdx && it.hasNext(); i++) + rndNode = it.next(); + + assert rndNode != null; + + return rndNode; + } + + /** + * @param ctx Kernal context. + * @param plc IO Policy. + * @param reserved Number of threads to reserve. + * @return Number of available threads in executor service for {@code plc}. If {@code plc} + * is invalid, return {@code 1}.
+ */ + public static int availableThreadCount(GridKernalContext ctx, byte plc, int reserved) { + IgniteConfiguration cfg = ctx.config(); + + int parallelismLvl; + + switch (plc) { + case GridIoPolicy.P2P_POOL: + parallelismLvl = cfg.getPeerClassLoadingThreadPoolSize(); + + break; + + case GridIoPolicy.SYSTEM_POOL: + parallelismLvl = cfg.getSystemThreadPoolSize(); + + break; + + case GridIoPolicy.PUBLIC_POOL: + parallelismLvl = cfg.getPublicThreadPoolSize(); + + break; + + case GridIoPolicy.MANAGEMENT_POOL: + parallelismLvl = cfg.getManagementThreadPoolSize(); + + break; + + case GridIoPolicy.UTILITY_CACHE_POOL: + parallelismLvl = cfg.getUtilityCacheThreadPoolSize(); + + break; + + case GridIoPolicy.IGFS_POOL: + parallelismLvl = cfg.getIgfsThreadPoolSize(); + + break; + + case GridIoPolicy.SERVICE_POOL: + parallelismLvl = cfg.getServiceThreadPoolSize(); + + break; + + case GridIoPolicy.DATA_STREAMER_POOL: + parallelismLvl = cfg.getDataStreamerThreadPoolSize(); + + break; + + case GridIoPolicy.QUERY_POOL: + parallelismLvl = cfg.getQueryThreadPoolSize(); + + break; + + default: + parallelismLvl = -1; + } + + return Math.max(1, parallelismLvl - reserved); + } + + /** + * Execute operation on data in parallel. + * + * @param executorSvc Service for parallel execution. + * @param srcDatas List of data for parallelization. + * @param operation Logic to execute on each item of data. + * @param <T> Type of data. + * @throws IgniteCheckedException if parallel execution failed. + */ + public static <T, R> Collection<R> doInParallel( + ExecutorService executorSvc, + Collection<T> srcDatas, + IgniteThrowableConsumer<T, R> operation + ) throws IgniteCheckedException, IgniteInterruptedCheckedException { + return doInParallel(srcDatas.size(), executorSvc, srcDatas, operation); + } + + /** + * Execute operation on data in parallel. + * + * @param parallelismLvl Number of threads on which it should be executed. + * @param executorSvc Service for parallel execution.
+ * @param srcDatas List of data for parallelization. + * @param operation Logic to execute on each item of data. + * @param <T> Type of data. + * @param <R> Type of return value. + * @throws IgniteCheckedException if parallel execution failed. + */ + public static <T, R> Collection<R> doInParallel( + int parallelismLvl, + ExecutorService executorSvc, + Collection<T> srcDatas, + IgniteThrowableConsumer<T, R> operation + ) throws IgniteCheckedException, IgniteInterruptedCheckedException { + if (srcDatas.isEmpty()) + return Collections.emptyList(); + + int[] batchSizes = calculateOptimalBatchSizes(parallelismLvl, srcDatas.size()); + + List<Batch<T, R>> batches = new ArrayList<>(batchSizes.length); + + // Set for sharing batches between executor and current thread. + // If executor cannot perform immediately, we will execute task in the current thread. + Set<Batch<T, R>> sharedBatchesSet = new GridConcurrentHashSet<>(batchSizes.length); + + Iterator<T> iterator = srcDatas.iterator(); + + for (int idx = 0; idx < batchSizes.length; idx++) { + int batchSize = batchSizes[idx]; + + Batch<T, R> batch = new Batch<>(batchSize); + + for (int i = 0; i < batchSize; i++) + batch.addTask(iterator.next()); + + batches.add(batch); + } + + batches = batches.stream() + .filter(batch -> !batch.tasks.isEmpty()) + // Add to set only after check that batch is not empty. + .peek(sharedBatchesSet::add) + // Setup future in batch for waiting result. + .peek(batch -> batch.future = executorSvc.submit(() -> { + // Batch was stolen by the main stream. + if (!sharedBatchesSet.remove(batch)) { + return null; + } + + Collection<R> results = new ArrayList<>(batch.tasks.size()); + + for (T item : batch.tasks) + results.add(operation.accept(item)); + + return results; + })) + .collect(Collectors.toList()); + + Throwable error = null; + + // Stealing jobs if executor is busy and cannot process task immediately. + // Perform batches in a current thread. + for (Batch<T, R> batch : sharedBatchesSet) { + // Executor has already taken this batch.
+ if (!sharedBatchesSet.remove(batch)) + continue; + + Collection<R> res = new ArrayList<>(batch.tasks.size()); + + try { + for (T item : batch.tasks) + res.add(operation.accept(item)); + + batch.result(res); + } + catch (Throwable e) { + batch.result(e); + } + } + + // Final result collection. + Collection<R> results = new ArrayList<>(srcDatas.size()); + + for (Batch<T, R> batch : batches) { + try { + Throwable err = batch.error; + + if (err != null) { + error = addSuppressed(error, err); + + continue; + } + + Collection<R> res = batch.result(); + + if (res != null) + results.addAll(res); + else + assert error != null; + } + catch (InterruptedException e) { + Thread.currentThread().interrupt(); + + throw new IgniteInterruptedCheckedException(e); + } + catch (ExecutionException e) { + error = addSuppressed(error, e.getCause()); + } + catch (CancellationException e) { + error = addSuppressed(error, e); + } + } + + if (error != null) { + if (error instanceof IgniteCheckedException) + throw (IgniteCheckedException)error; + + if (error instanceof RuntimeException) + throw (RuntimeException)error; + + if (error instanceof Error) + throw (Error)error; + + throw new IgniteCheckedException(error); + } + + return results; + } + + /** + * Utility method to add the given throwable error to the given throwable root error. If the given + * suppressed throwable is an {@code Error}, but the root error is not, will change the root to the {@code Error}. + * + * @param root Root error to add suppressed error to. + * @param err Error to add. + * @return New root error. + */ + private static Throwable addSuppressed(Throwable root, Throwable err) { + assert err != null; + + if (root == null) + return err; + + if (err instanceof Error && !(root instanceof Error)) { + err.addSuppressed(root); + + root = err; + } + else + root.addSuppressed(err); + + return root; + } + + /** + * The batch of tasks with a batch index in global array. + */ + private static class Batch<T, R> { + /** List of tasks.
*/ + private final List<T> tasks; + + /** */ + private Collection<R> result; + + /** */ + private Throwable error; + + /** */ + private Future<Collection<R>> future; + + /** + * @param batchSize Batch size. + */ + private Batch(int batchSize) { + this.tasks = new ArrayList<>(batchSize); + } + + /** + * @param task Task to add. + */ + public void addTask(T task) { + tasks.add(task); + } + + /** + * @param res Results of the tasks. + */ + public void result(Collection<R> res) { + this.result = res; + } + + /** + * @param e Throwable if a task completed with an error. + */ + public void result(Throwable e) { + this.error = e; + } + + /** + * Get tasks results. + */ + public Collection<R> result() throws ExecutionException, InterruptedException { + assert future != null; + + return result != null ? result : future.get(); + } + } + + /** + * Split number of tasks into optimized batches. + * @param parallelismLvl Level of parallelism. + * @param size number of tasks to split. + * @return array of batch sizes. + */ + public static int[] calculateOptimalBatchSizes(int parallelismLvl, int size) { + int[] batcheSizes = new int[Math.min(parallelismLvl, size)]; + + for (int i = 0; i < size; i++) + batcheSizes[i % batcheSizes.length]++; + + return batcheSizes; + } + + /** + * @param fut Future to wait for completion. + * @throws ExecutionException If the future completed with an error. + */ + private static void getUninterruptibly(Future<?> fut) throws ExecutionException { + boolean interrupted = false; + + while (true) { + try { + fut.get(); + + break; + } + catch (InterruptedException e) { + interrupted = true; + } + } + + if (interrupted) + Thread.currentThread().interrupt(); + } + + /** + * + * @param r Runnable. + * @param fut Grid future adapter. + * @return Runnable with wrapped future.
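The `calculateOptimalBatchSizes` split used by `doInParallel` is easy to check in isolation: tasks are dealt round-robin, so the first `size % parallelismLvl` batches get one extra task. A standalone sketch (class name is illustrative; the loop body matches the patch):

```java
import java.util.Arrays;

public class BatchSizes {
    /** Same round-robin split as calculateOptimalBatchSizes in the patch. */
    public static int[] split(int parallelismLvl, int size) {
        // Never allocate more batches than there are tasks.
        int[] sizes = new int[Math.min(parallelismLvl, size)];

        // Deal tasks round-robin so batch sizes differ by at most one.
        for (int i = 0; i < size; i++)
            sizes[i % sizes.length]++;

        return sizes;
    }

    public static void main(String[] args) {
        // 10 tasks over 4 threads: the remainder goes to the first batches.
        if (!Arrays.equals(split(4, 10), new int[] {3, 3, 2, 2}))
            throw new AssertionError();

        // Fewer tasks than threads: one single-task batch per task.
        if (!Arrays.equals(split(8, 3), new int[] {1, 1, 1}))
            throw new AssertionError();

        System.out.println("ok");
    }
}
```

This is why `doInParallel` can submit one `Batch` per executor thread and still keep the per-batch work nearly even.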
+ */ + public static Runnable wrapIgniteFuture(Runnable r, GridFutureAdapter<?> fut) { + return () -> { + try { + r.run(); + + fut.onDone(); + } + catch (Throwable e) { + fut.onDone(e); + + throw e; + } + }; + } + /** * */ @@ -10736,4 +11123,28 @@ public long getLockUnlockCounter() { return cnt.get(); } } + + /** + * Safely writes the buffer fully to a blocking socket channel. + * Asserts if a non-blocking channel is passed. + * + * @param sockCh Socket channel. + * @param buf Buffer. + * @throws IOException IOException. + */ + public static void writeFully(SocketChannel sockCh, ByteBuffer buf) throws IOException { + int totalWritten = 0; + + assert sockCh.isBlocking() : "SocketChannel should be in blocking mode " + sockCh; + + while (buf.hasRemaining()) { + int written = sockCh.write(buf); + + if (written < 0) + throw new IOException("Error writing buffer to channel " + + "[written = " + written + ", buf " + buf + ", totalWritten = " + totalWritten + "]"); + + totalWritten += written; + } + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/InitializationProtector.java b/modules/core/src/main/java/org/apache/ignite/internal/util/InitializationProtector.java new file mode 100644 index 0000000000000..7c501c42486c9 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/InitializationProtector.java @@ -0,0 +1,79 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.util; + +import java.util.concurrent.locks.Lock; +import java.util.function.Supplier; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.util.lang.IgniteThrowableRunner; + +/** + * Class to avoid multiple initialization of a specific value from various threads. + */ +public class InitializationProtector { + /** Default striped lock concurrency level. */ + private static final int DEFAULT_CONCURRENCY_LEVEL = Runtime.getRuntime().availableProcessors(); + + /** Striped lock. */ + private GridStripedLock stripedLock = new GridStripedLock(DEFAULT_CONCURRENCY_LEVEL); + + /** + * @param protectedKey Unique value by which initialization code should be run only one time. + * @param initializedVal Supplier returning the already initialized value if it exists, or {@code null} to signal + * that initialization is required. + * @param initializationCode Code initializing the value corresponding to protectedKey. Should be idempotent. + * @param <T> Type of initialization value. + * @return Initialized value. + * @throws IgniteCheckedException if initialization failed.
+ */ + public <T> T protect(Object protectedKey, Supplier<T> initializedVal, + IgniteThrowableRunner initializationCode) throws IgniteCheckedException { + T value = initializedVal.get(); + + if (value != null) + return value; + + Lock lock = stripedLock.getLock(protectedKey.hashCode() % stripedLock.concurrencyLevel()); + + lock.lock(); + try { + value = initializedVal.get(); + + if (value != null) + return value; + + initializationCode.run(); + + return initializedVal.get(); + } + finally { + lock.unlock(); + } + } + + /** + * This method allows avoiding simultaneous initialization from multiple threads. + * + * @param protectedKey Unique value by which initialization code should be run by only one thread at a time. + * @param initializationCode Code initializing the value corresponding to protectedKey. Should be idempotent. + * @throws IgniteCheckedException if initialization failed. + */ + public void protect(Object protectedKey, IgniteThrowableRunner initializationCode) throws IgniteCheckedException { + protect(protectedKey, () -> null, initializationCode); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/OffheapReadWriteLock.java b/modules/core/src/main/java/org/apache/ignite/internal/util/OffheapReadWriteLock.java index b119960cfc822..95c007be7ad96 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/OffheapReadWriteLock.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/OffheapReadWriteLock.java @@ -263,7 +263,7 @@ public void writeUnlock(long lock, int tag) { if (lockCount(state) != -1) throw new IllegalMonitorStateException("Attempted to release write lock while not holding it " + - "[lock=" + U.hexLong(lock) + ", state=" + U.hexLong(state)); + "[lock=" + U.hexLong(lock) + ", state=" + U.hexLong(state) + ']'); updated = releaseWithTag(state, tag); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/TimeBag.java
b/modules/core/src/main/java/org/apache/ignite/internal/util/TimeBag.java new file mode 100644 index 0000000000000..156d826e9935c --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/TimeBag.java @@ -0,0 +1,312 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.util; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.PriorityQueue; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.locks.ReentrantReadWriteLock; +import org.jetbrains.annotations.NotNull; + +/** + * Utility class to measure and collect timings of some execution workflow. + */ +public class TimeBag { + /** Initial global stage. */ + private final CompositeStage INITIAL_STAGE = new CompositeStage("", 0, new HashMap<>()); + + /** Lock. */ + private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); + + /** Global stopwatch. */ + private final IgniteStopwatch globalStopwatch = IgniteStopwatch.createStarted(); + + /** Measurement unit. 
*/ + private final TimeUnit measurementUnit; + + /** List of global stages (guarded by {@code lock}). */ + private final List<CompositeStage> stages; + + /** List of current local stages separated by threads (guarded by {@code lock}). */ + private Map<String, List<Stage>> localStages; + + /** Last seen global stage by thread. */ + private final ThreadLocal<CompositeStage> tlLastSeenStage = ThreadLocal.withInitial(() -> INITIAL_STAGE); + + /** Thread-local stopwatch. */ + private final ThreadLocal<IgniteStopwatch> tlStopwatch = ThreadLocal.withInitial(IgniteStopwatch::createUnstarted); + + + /** + * Default constructor. + */ + public TimeBag() { + this(TimeUnit.MILLISECONDS); + } + + /** + * @param measurementUnit Measurement unit. + */ + public TimeBag(TimeUnit measurementUnit) { + this.stages = new ArrayList<>(); + this.localStages = new ConcurrentHashMap<>(); + this.measurementUnit = measurementUnit; + + this.stages.add(INITIAL_STAGE); + } + + /** + * @return Last completed global stage. + */ + private CompositeStage lastCompletedGlobalStage() { + assert !stages.isEmpty() : "No stages :("; + + return stages.get(stages.size() - 1); + } + + /** + * @param description Description. + */ + public void finishGlobalStage(String description) { + lock.writeLock().lock(); + + try { + stages.add( + new CompositeStage(description, globalStopwatch.elapsed(measurementUnit), Collections.unmodifiableMap(localStages)) + ); + + localStages = new ConcurrentHashMap<>(); + + globalStopwatch.reset().start(); + } + finally { + lock.writeLock().unlock(); + } + } + + /** + * @param description Description. + */ + public void finishLocalStage(String description) { + lock.readLock().lock(); + + try { + CompositeStage lastSeen = tlLastSeenStage.get(); + CompositeStage lastCompleted = lastCompletedGlobalStage(); + IgniteStopwatch localStopWatch = tlStopwatch.get(); + + Stage stage; + + // We see this stage first time, get elapsed time from last completed global stage and start tracking local.
+ if (lastSeen != lastCompleted) { + stage = new Stage(description, globalStopwatch.elapsed(measurementUnit)); + + tlLastSeenStage.set(lastCompleted); + } + else + stage = new Stage(description, localStopWatch.elapsed(measurementUnit)); + + localStopWatch.reset().start(); + + // Associate local stage with current thread name. + String threadName = Thread.currentThread().getName(); + + localStages.computeIfAbsent(threadName, t -> new ArrayList<>()).add(stage); + } + finally { + lock.readLock().unlock(); + } + } + + /** + * @return Short name of desired measurement unit. + */ + private String measurementUnitShort() { + switch (measurementUnit) { + case MILLISECONDS: + return "ms"; + case SECONDS: + return "s"; + case NANOSECONDS: + return "ns"; + case MICROSECONDS: + return "mcs"; + case HOURS: + return "h"; + case MINUTES: + return "min"; + case DAYS: + return "days"; + default: + return ""; + } + } + + /** + * @return List of string representations of all stage timings. + */ + public List<String> stagesTimings() { + lock.readLock().lock(); + + try { + List<String> timings = new ArrayList<>(); + + long totalTime = 0; + + // Skip initial stage. + for (int i = 1; i < stages.size(); i++) { + CompositeStage stage = stages.get(i); + + totalTime += stage.time(); + + timings.add(stage.toString()); + } + + // Add last stage with summary time of all global stages. + timings.add(new Stage("Total time", totalTime).toString()); + + return timings; + } + finally { + lock.readLock().unlock(); + } + } + + /** + * @param maxPerCompositeStage Max count of local stages to collect per composite stage. + * @return List of string representations of the longest local stages per each composite stage.
*/ + public List<String> longestLocalStagesTimings(int maxPerCompositeStage) { + lock.readLock().lock(); + + try { + List<String> timings = new ArrayList<>(); + + for (int i = 1; i < stages.size(); i++) { + CompositeStage stage = stages.get(i); + + if (!stage.localStages.isEmpty()) { + PriorityQueue<Stage> stagesByTime = new PriorityQueue<>(); + + for (Map.Entry<String, List<Stage>> threadAndStages : stage.localStages.entrySet()) { + for (Stage locStage : threadAndStages.getValue()) + stagesByTime.add(locStage); + } + + int stageCount = 0; + while (!stagesByTime.isEmpty() && stageCount < maxPerCompositeStage) { + stageCount++; + + Stage locStage = stagesByTime.poll(); + + timings.add(locStage.toString() + " (parent=" + stage.description() + ")"); + } + } + } + + return timings; + } + finally { + lock.readLock().unlock(); + } + } + + /** + * + */ + private class CompositeStage extends Stage { + /** Local stages. */ + private final Map<String, List<Stage>> localStages; + + /** + * @param description Description. + * @param time Time. + * @param localStages Local stages. + */ + public CompositeStage(String description, long time, Map<String, List<Stage>> localStages) { + super(description, time); + + this.localStages = localStages; + } + + /** + * + */ + public Map<String, List<Stage>> localStages() { + return localStages; + } + } + + /** + * + */ + private class Stage implements Comparable<Stage> { + /** Description. */ + private final String description; + + /** Time. */ + private final long time; + + /** + * @param description Description. + * @param time Time.
+ */ + public Stage(String description, long time) { + this.description = description; + this.time = time; + } + + /** + * + */ + public String description() { + return description; + } + + /** + * + */ + public long time() { + return time; + } + + /** {@inheritDoc} */ + @Override public String toString() { + StringBuilder sb = new StringBuilder(); + + sb.append("stage=").append('"').append(description()).append('"'); + sb.append(' ').append('(').append(time()).append(' ').append(measurementUnitShort()).append(')'); + + return sb.toString(); + } + + /** {@inheritDoc} */ + @Override public int compareTo(@NotNull TimeBag.Stage o) { + if (o.time < time) + return -1; + if (o.time > time) + return 1; + return o.description.compareTo(description); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/future/GridFutureAdapter.java b/modules/core/src/main/java/org/apache/ignite/internal/util/future/GridFutureAdapter.java index 8302504dd90de..a191f30f130eb 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/future/GridFutureAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/future/GridFutureAdapter.java @@ -357,13 +357,23 @@ private void unblock(Object waiter) { /** {@inheritDoc} */ @Override public IgniteInternalFuture chain(final IgniteClosure, T> doneCb) { - return new ChainFuture<>(this, doneCb, null); + ChainFuture fut = new ChainFuture<>(this, doneCb, null); + + if (ignoreInterrupts) + fut.ignoreInterrupts(); + + return fut; } /** {@inheritDoc} */ @Override public IgniteInternalFuture chain(final IgniteClosure, T> doneCb, Executor exec) { - return new ChainFuture<>(this, doneCb, exec); + ChainFuture fut = new ChainFuture<>(this, doneCb, exec); + + if (ignoreInterrupts) + fut.ignoreInterrupts(); + + return fut; } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/lang/GridFunc.java b/modules/core/src/main/java/org/apache/ignite/internal/util/lang/GridFunc.java index 
f9c4d9d75388d..ce5076b2382d7 100755 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/lang/GridFunc.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/lang/GridFunc.java @@ -298,13 +298,13 @@ public static List<Integer> range(int fromIncl, int toExcl) { * @param delim Delimiter (optional). * @return Concatenated string. */ - public static String concat(Iterable<String> c, @Nullable String delim) { + public static String concat(Iterable<?> c, @Nullable String delim) { A.notNull(c, "c"); IgniteReducer<String, String> f = new StringConcatReducer(delim); - for (String x : c) - if (!f.collect(x)) + for (Object x : c) + if (!f.collect(x == null ? null : x.toString())) break; return f.reduce(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/lang/IgniteThrowableConsumer.java b/modules/core/src/main/java/org/apache/ignite/internal/util/lang/IgniteThrowableConsumer.java new file mode 100644 index 0000000000000..55feed8cc2d33 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/lang/IgniteThrowableConsumer.java @@ -0,0 +1,39 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package org.apache.ignite.internal.util.lang; + +import java.io.Serializable; +import org.apache.ignite.IgniteCheckedException; + +/** + * Represents an operation that accepts a single input argument, returns a result and is allowed to throw an + * {@code IgniteCheckedException}. + * + * @param <E> Type of closure parameter. + * @param <R> Type of result value. + */ +public interface IgniteThrowableConsumer<E, R> extends Serializable { + /** + * Consumer body. + * + * @param e Consumer parameter. + * @return Result of consumer operation. + * @throws IgniteCheckedException If body execution failed. + */ + public R accept(E e) throws IgniteCheckedException; +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/lang/IgniteThrowableRunner.java b/modules/core/src/main/java/org/apache/ignite/internal/util/lang/IgniteThrowableRunner.java new file mode 100644 index 0000000000000..a5c95e1a3e575 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/lang/IgniteThrowableRunner.java @@ -0,0 +1,30 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package org.apache.ignite.internal.util.lang; + +import org.apache.ignite.IgniteCheckedException; + +/** + * Represents a runnable whose body is allowed to throw an {@code IgniteCheckedException}. + */ +public interface IgniteThrowableRunner { + /** + * Executes the body. + * + * @throws IgniteCheckedException If body execution failed. + */ + void run() throws IgniteCheckedException; +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/nio/GridNioServer.java b/modules/core/src/main/java/org/apache/ignite/internal/util/nio/GridNioServer.java index e4c96b4ac102b..6c95835558609 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/nio/GridNioServer.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/nio/GridNioServer.java @@ -1041,6 +1041,22 @@ private synchronized void offerBalanced(NioOperationFuture req, @Nullable Map queue = ses.meta(BUF_SSL_SYSTEM_META_KEY); @@ -1546,8 +1550,10 @@ private void writeSslSystem(GridSelectorNioSessionImpl ses, WritableByteChannel if (!buf.hasRemaining()) queue.poll(); else - break; + return false; } + + return true; } /** @@ -1600,16 +1606,7 @@ private void processWrite0(SelectionKey key) throws IOException { req = ses.pollFuture(); if (req == null && buf.position() == 0) { - if (ses.procWrite.get()) { - ses.procWrite.set(false); - - if (ses.writeQueue().isEmpty()) { - if ((key.interestOps() & SelectionKey.OP_WRITE) != 0) - key.interestOps(key.interestOps() & (~SelectionKey.OP_WRITE)); - } - else - ses.procWrite.set(true); - } + stopPollingForWrite(key, ses); return; } @@ -1830,6 +1827,9 @@ else if (!closed) { } else if (err != null) lsnr.onFailure(SYSTEM_WORKER_TERMINATION, err); + else + // In case of closed == true, prevent general-case termination handling.
+ cancel(); } } @@ -2859,6 +2859,16 @@ protected GridNioAcceptWorker( this.selector = selector; } + /** {@inheritDoc} */ + @Override public void cancel() { + super.cancel(); + + // If the accept worker was never started then explicitly close the selector; otherwise the selector will be + // closed in the finally block when the worker thread is stopped. + if (runner() == null) + closeSelector(); + } + /** {@inheritDoc} */ @Override protected void body() throws InterruptedException, IgniteInterruptedCheckedException { Throwable err = null; @@ -2906,6 +2916,9 @@ protected GridNioAcceptWorker( lsnr.onFailure(CRITICAL_ERROR, err); else if (err != null) lsnr.onFailure(SYSTEM_WORKER_TERMINATION, err); + else + // In case of closed == true, prevent general-case termination handling. + cancel(); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/nio/ssl/BlockingSslHandler.java b/modules/core/src/main/java/org/apache/ignite/internal/util/nio/ssl/BlockingSslHandler.java index 0099c4675a381..dded870f21b93 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/nio/ssl/BlockingSslHandler.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/nio/ssl/BlockingSslHandler.java @@ -511,7 +511,7 @@ private void readFromNet() throws IgniteCheckedException { */ private void writeNetBuffer() throws IgniteCheckedException { try { - ch.write(outNetBuf); + U.writeFully(ch, outNetBuf); } catch (IOException e) { throw new IgniteCheckedException("Failed to write byte to socket.", e); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/GridToStringBuilder.java b/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/GridToStringBuilder.java index 329682f462525..77c333daa774c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/GridToStringBuilder.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/GridToStringBuilder.java @@ -21,7 +21,6 @@ import
java.io.InputStream; import java.io.OutputStream; import java.io.Serializable; -import java.lang.reflect.Array; import java.lang.reflect.Field; import java.lang.reflect.Modifier; import java.util.ArrayList; @@ -29,16 +28,19 @@ import java.util.Collection; import java.util.Collections; import java.util.EventListener; -import java.util.IdentityHashMap; +import java.util.LinkedList; import java.util.List; import java.util.Map; +import java.util.Queue; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReadWriteLock; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.SB; +import org.apache.ignite.internal.util.typedef.internal.U; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; @@ -48,9 +50,6 @@ /** * Provides auto-generation framework for {@code toString()} output. *

- * In case of recursion, repeatable objects will be shown as "ClassName@hash". - * But fields will be printed only for the first entry to prevent recursion. - *

* Default exclusion policy (can be overridden with {@link GridToStringInclude} * annotation): *

    @@ -84,9 +83,6 @@ *
*/ public class GridToStringBuilder { - /** */ - private static final Object[] EMPTY_ARRAY = new Object[0]; - /** */ private static final Map classCache = new ConcurrentHashMap<>(); @@ -98,30 +94,25 @@ public class GridToStringBuilder { private static final int COLLECTION_LIMIT = IgniteSystemProperties.getInteger(IGNITE_TO_STRING_COLLECTION_LIMIT, 100); - /** Every thread has its own string builder. */ - private static ThreadLocal threadLocSB = new ThreadLocal() { - @Override protected SBLimitedLength initialValue() { - SBLimitedLength sb = new SBLimitedLength(256); + /** */ + private static ThreadLocal> threadCache = new ThreadLocal>() { + @Override protected Queue initialValue() { + Queue queue = new LinkedList<>(); - sb.initLimit(new SBLengthLimit()); + queue.offer(new GridToStringThreadLocal()); - return sb; + return queue; } }; - /** - * Contains objects currently printing in the string builder. - *

- * Since {@code toString()} methods can be chain-called from the same thread we - * have to keep a map of this objects pointed to the position of previous occurrence - * and remove/add them in each {@code toString()} apply. - */ - private static ThreadLocal> savedObjects = new ThreadLocal>() { - @Override protected IdentityHashMap initialValue() { - return new IdentityHashMap<>(); + /** */ + private static ThreadLocal threadCurLen = new ThreadLocal() { + @Override protected SBLengthLimit initialValue() { + return new SBLengthLimit(); } }; + /** * Produces auto-generated output of string presentation for given object and its declaration class. * @@ -270,9 +261,18 @@ public static String toString(Class cls, T obj, assert name3 != null; assert name4 != null; - Object[] addNames = new Object[5]; - Object[] addVals = new Object[5]; - boolean[] addSens = new boolean[5]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] addNames = tmp.getAdditionalNames(); + Object[] addVals = tmp.getAdditionalValues(); + boolean[] addSens = tmp.getAdditionalSensitives(); addNames[0] = name0; addVals[0] = val0; @@ -290,16 +290,20 @@ public static String toString(Class cls, T obj, addVals[4] = val4; addSens[4] = sens4; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, addNames, addVals, addSens, 5); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, addNames, addVals, addSens, 5); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -345,9 +349,18 @@ public static String toString(Class cls, T obj, assert name4 != null; assert name5 != null; - Object[] addNames = new Object[6]; - Object[] addVals = new Object[6]; - boolean[] addSens = new boolean[6]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] addNames = tmp.getAdditionalNames(); + Object[] addVals = tmp.getAdditionalValues(); + boolean[] addSens = tmp.getAdditionalSensitives(); addNames[0] = name0; addVals[0] = val0; @@ -368,16 +381,20 @@ public static String toString(Class cls, T obj, addVals[5] = val5; addSens[5] = sens5; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, addNames, addVals, addSens, 6); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, addNames, addVals, addSens, 6); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -428,9 +445,18 @@ public static String toString(Class cls, T obj, assert name5 != null; assert name6 != null; - Object[] addNames = new Object[7]; - Object[] addVals = new Object[7]; - boolean[] addSens = new boolean[7]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] addNames = tmp.getAdditionalNames(); + Object[] addVals = tmp.getAdditionalValues(); + boolean[] addSens = tmp.getAdditionalSensitives(); addNames[0] = name0; addVals[0] = val0; @@ -454,16 +480,20 @@ public static String toString(Class cls, T obj, addVals[6] = val6; addSens[6] = sens6; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, addNames, addVals, addSens, 7); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, addNames, addVals, addSens, 7); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -527,9 +557,18 @@ public static String toString(Class cls, T obj, assert name2 != null; assert name3 != null; - Object[] addNames = new Object[4]; - Object[] addVals = new Object[4]; - boolean[] addSens = new boolean[4]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] addNames = tmp.getAdditionalNames(); + Object[] addVals = tmp.getAdditionalValues(); + boolean[] addSens = tmp.getAdditionalSensitives(); addNames[0] = name0; addVals[0] = val0; @@ -544,16 +583,20 @@ public static String toString(Class cls, T obj, addVals[3] = val3; addSens[3] = sens3; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, addNames, addVals, addSens, 4); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, addNames, addVals, addSens, 4); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -609,9 +652,18 @@ public static String toString(Class cls, T obj, assert name1 != null; assert name2 != null; - Object[] addNames = new Object[3]; - Object[] addVals = new Object[3]; - boolean[] addSens = new boolean[3]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] addNames = tmp.getAdditionalNames(); + Object[] addVals = tmp.getAdditionalValues(); + boolean[] addSens = tmp.getAdditionalSensitives(); addNames[0] = name0; addVals[0] = val0; @@ -623,16 +675,20 @@ public static String toString(Class cls, T obj, addVals[2] = val2; addSens[2] = sens2; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, addNames, addVals, addSens, 3); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, addNames, addVals, addSens, 3); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -676,9 +732,18 @@ public static String toString(Class cls, T obj, assert name0 != null; assert name1 != null; - Object[] addNames = new Object[2]; - Object[] addVals = new Object[2]; - boolean[] addSens = new boolean[2]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] addNames = tmp.getAdditionalNames(); + Object[] addVals = tmp.getAdditionalValues(); + boolean[] addSens = tmp.getAdditionalSensitives(); addNames[0] = name0; addVals[0] = val0; @@ -687,16 +752,20 @@ public static String toString(Class cls, T obj, addVals[1] = val1; addSens[1] = sens1; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, addNames, addVals, addSens, 2); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, addNames, addVals, addSens, 2); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -730,24 +799,37 @@ public static String toString(Class cls, T obj, String name, @Nullable Ob assert obj != null; assert name != null; - Object[] addNames = new Object[1]; - Object[] addVals = new Object[1]; - boolean[] addSens = new boolean[1]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] addNames = tmp.getAdditionalNames(); + Object[] addVals = tmp.getAdditionalValues(); + boolean[] addSens = tmp.getAdditionalSensitives(); addNames[0] = name; addVals[0] = val; addSens[0] = sens; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, addNames, addVals, addSens, 1); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, addNames, addVals, addSens, 1); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -763,16 +845,30 @@ public static String toString(Class cls, T obj) { assert cls != null; assert obj != null; - SBLimitedLength sb = threadLocSB.get(); + Queue queue = threadCache.get(); + + assert queue != null; - boolean newStr = sb.length() == 0; + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? new GridToStringThreadLocal() : queue.remove(); + + SBLengthLimit lenLim = threadCurLen.get(); + + boolean newStr = false; try { - return toStringImpl(cls, sb, obj, EMPTY_ARRAY, EMPTY_ARRAY, null, 0); + newStr = lenLim.length() == 0; + + return toStringImpl(cls, tmp.getStringBuilder(lenLim), obj, tmp.getAdditionalNames(), + tmp.getAdditionalValues(), null, 0); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -790,164 +886,68 @@ public static String toString(Class cls, T obj, String parent) { } /** - * Print value with length limitation. - * + * Print value with length limitation * @param buf buffer to print to. * @param val value to print, can be {@code null}. 
*/ private static void toString(SBLimitedLength buf, Object val) { - toString(buf, null, val); - } - - /** - * Print value with length limitation. - * - * @param buf buffer to print to. - * @param cls value class. - * @param val value to print. - */ - @SuppressWarnings({"unchecked"}) - private static void toString(SBLimitedLength buf, Class cls, Object val) { - if (val == null) { + if (val == null) buf.a("null"); - - return; - } - - if (cls == null) - cls = val.getClass(); - - if (cls.isPrimitive()) { - buf.a(val); - - return; - } - - IdentityHashMap svdObjs = savedObjects.get(); - - if (handleRecursion(buf, val, svdObjs)) - return; - - svdObjs.put(val, buf.length()); - - try { - if (cls.isArray()) - addArray(buf, cls, val); - else if (val instanceof Collection) - addCollection(buf, (Collection) val); - else if (val instanceof Map) - addMap(buf, (Map) val); - else - buf.a(val); - } - finally { - svdObjs.remove(val); - } - } - - /** - * Writes array to buffer. - * - * @param buf String builder buffer. - * @param arrType Type of the array. - * @param obj Array object. - */ - private static void addArray(SBLimitedLength buf, Class arrType, Object obj) { - if (arrType.getComponentType().isPrimitive()) { - buf.a(arrayToString(obj)); - - return; - } - - Object[] arr = (Object[]) obj; - - buf.a(arrType.getSimpleName()).a(" ["); - - for (int i = 0; i < arr.length; i++) { - toString(buf, arr[i]); - - if (i == COLLECTION_LIMIT - 1 || i == arr.length - 1) - break; - - buf.a(", "); - } - - handleOverflow(buf, arr.length); - - buf.a(']'); + else + toString(buf, val.getClass(), val); } /** - * Writes collection to buffer. - * - * @param buf String builder buffer. - * @param col Collection object. + * Print value with length limitation + * @param buf buffer to print to. + * @param valClass value class. 
+ * @param val value to print */ - private static void addCollection(SBLimitedLength buf, Collection col) { - buf.a(col.getClass().getSimpleName()).a(" ["); - - int cnt = 0; - - for (Object obj : col) { - toString(buf, obj); - - if (++cnt == COLLECTION_LIMIT || cnt == col.size()) - break; - - buf.a(", "); - } + private static void toString(SBLimitedLength buf, Class valClass, Object val) { + if (valClass.isArray()) + buf.a(arrayToString(valClass, val)); + else { + int overflow = 0; + char bracket = ' '; + + if (val instanceof Collection && ((Collection)val).size() > COLLECTION_LIMIT) { + overflow = ((Collection)val).size() - COLLECTION_LIMIT; + bracket = ']'; + val = F.retain((Collection) val, true, COLLECTION_LIMIT); + } + else if (val instanceof Map && ((Map)val).size() > COLLECTION_LIMIT) { + Map tmp = U.newHashMap(COLLECTION_LIMIT); - handleOverflow(buf, col.size()); + overflow = ((Map)val).size() - COLLECTION_LIMIT; - buf.a(']'); - } + bracket= '}'; - /** - * Writes map to buffer. - * - * @param buf String builder buffer. - * @param map Map object. - */ - private static void addMap(SBLimitedLength buf, Map map) { - buf.a(map.getClass().getSimpleName()).a(" {"); + int cntr = 0; - int cnt = 0; + for (Object o : ((Map)val).entrySet()) { + Map.Entry e = (Map.Entry)o; - for (Map.Entry e : map.entrySet()) { - toString(buf, e.getKey()); + tmp.put(e.getKey(), e.getValue()); - buf.a('='); + if (++cntr >= COLLECTION_LIMIT) + break; + } - toString(buf, e.getValue()); + val = tmp; + } - if (++cnt == COLLECTION_LIMIT || cnt == map.size()) - break; + buf.a(val); - buf.a(", "); + if (overflow > 0) { + buf.d(buf.length() - 1); + buf.a("... and ").a(overflow).a(" more").a(bracket); + } } - - handleOverflow(buf, map.size()); - - buf.a('}'); - } - - /** - * Writes overflow message to buffer if needed. - * - * @param buf String builder buffer. - * @param size Size to compare with limit. 
- */ - private static void handleOverflow(SBLimitedLength buf, int size) { - int overflow = size - COLLECTION_LIMIT; - - if (overflow > 0) - buf.a("... and ").a(overflow).a(" more"); } /** * Creates an uniformed string presentation for the given object. * - * @param Type of object. * @param cls Class of the object. * @param buf String builder buffer. * @param obj Object for which to get string presentation. @@ -956,7 +956,9 @@ private static void handleOverflow(SBLimitedLength buf, int size) { * @param addSens Sensitive flag of values or {@code null} if all values are not sensitive. * @param addLen How many additional values will be included. * @return String presentation of the given object. + * @param Type of object. */ + @SuppressWarnings({"unchecked"}) private static String toStringImpl( Class cls, SBLimitedLength buf, @@ -973,56 +975,13 @@ private static String toStringImpl( assert addNames.length == addVals.length; assert addLen <= addNames.length; - boolean newStr = buf.length() == 0; - - IdentityHashMap svdObjs = savedObjects.get(); - - if (newStr) - svdObjs.put(obj, buf.length()); - - try { - String s = toStringImpl0(cls, buf, obj, addNames, addVals, addSens, addLen); - - if (newStr) - return s; - - // Called from another GTSB.toString(), so this string is already in the buffer and shouldn't be returned. - return ""; - } - finally { - if (newStr) - svdObjs.remove(obj); - } - } - - /** - * Creates an uniformed string presentation for the given object. - * - * @param cls Class of the object. - * @param buf String builder buffer. - * @param obj Object for which to get string presentation. - * @param addNames Names of additional values to be included. - * @param addVals Additional values to be included. - * @param addSens Sensitive flag of values or {@code null} if all values are not sensitive. - * @param addLen How many additional values will be included. - * @return String presentation of the given object. - * @param Type of object. 
- */ - @SuppressWarnings({"unchecked"}) - private static String toStringImpl0( - Class cls, - SBLimitedLength buf, - T obj, - Object[] addNames, - Object[] addVals, - @Nullable boolean[] addSens, - int addLen - ) { try { GridToStringClassDescriptor cd = getClassDescriptor(cls); assert cd != null; + buf.setLength(0); + buf.a(cd.getSimpleClassName()).a(" ["); boolean first = true; @@ -1054,6 +1013,7 @@ private static String toStringImpl0( } // Specifically catching all exceptions. catch (Exception e) { + // Remove entry from cache to avoid potential memory leak // in case new class loader got loaded under the same identity hash. classCache.remove(cls.getName() + System.identityHashCode(cls.getClassLoader())); @@ -1076,40 +1036,95 @@ public static String toString(String str, String name, @Nullable Object val) { } /** - * Returns limited string representation of array. - * - * @param arr Array object. Each value is automatically wrapped if it has a primitive type. + * @param arrType Type of the array. + * @param arr Array object. * @return String representation of an array. 
*/ - public static String arrayToString(Object arr) { + @SuppressWarnings({"ConstantConditions", "unchecked"}) + public static String arrayToString(Class arrType, Object arr) { if (arr == null) return "null"; String res; + int more = 0; - int arrLen; - - if (arr instanceof Object[]) { + if (arrType.equals(byte[].class)) { + byte[] byteArr = (byte[])arr; + if (byteArr.length > COLLECTION_LIMIT) { + more = byteArr.length - COLLECTION_LIMIT; + byteArr = Arrays.copyOf(byteArr, COLLECTION_LIMIT); + } + res = Arrays.toString(byteArr); + } + else if (arrType.equals(boolean[].class)) { + boolean[] boolArr = (boolean[])arr; + if (boolArr.length > COLLECTION_LIMIT) { + more = boolArr.length - COLLECTION_LIMIT; + boolArr = Arrays.copyOf(boolArr, COLLECTION_LIMIT); + } + res = Arrays.toString(boolArr); + } + else if (arrType.equals(short[].class)) { + short[] shortArr = (short[])arr; + if (shortArr.length > COLLECTION_LIMIT) { + more = shortArr.length - COLLECTION_LIMIT; + shortArr = Arrays.copyOf(shortArr, COLLECTION_LIMIT); + } + res = Arrays.toString(shortArr); + } + else if (arrType.equals(int[].class)) { + int[] intArr = (int[])arr; + if (intArr.length > COLLECTION_LIMIT) { + more = intArr.length - COLLECTION_LIMIT; + intArr = Arrays.copyOf(intArr, COLLECTION_LIMIT); + } + res = Arrays.toString(intArr); + } + else if (arrType.equals(long[].class)) { + long[] longArr = (long[])arr; + if (longArr.length > COLLECTION_LIMIT) { + more = longArr.length - COLLECTION_LIMIT; + longArr = Arrays.copyOf(longArr, COLLECTION_LIMIT); + } + res = Arrays.toString(longArr); + } + else if (arrType.equals(float[].class)) { + float[] floatArr = (float[])arr; + if (floatArr.length > COLLECTION_LIMIT) { + more = floatArr.length - COLLECTION_LIMIT; + floatArr = Arrays.copyOf(floatArr, COLLECTION_LIMIT); + } + res = Arrays.toString(floatArr); + } + else if (arrType.equals(double[].class)) { + double[] doubleArr = (double[])arr; + if (doubleArr.length > COLLECTION_LIMIT) { + more = 
doubleArr.length - COLLECTION_LIMIT; + doubleArr = Arrays.copyOf(doubleArr, COLLECTION_LIMIT); + } + res = Arrays.toString(doubleArr); + } + else if (arrType.equals(char[].class)) { + char[] charArr = (char[])arr; + if (charArr.length > COLLECTION_LIMIT) { + more = charArr.length - COLLECTION_LIMIT; + charArr = Arrays.copyOf(charArr, COLLECTION_LIMIT); + } + res = Arrays.toString(charArr); + } + else { Object[] objArr = (Object[])arr; - - arrLen = objArr.length; - - if (arrLen > COLLECTION_LIMIT) + if (objArr.length > COLLECTION_LIMIT) { + more = objArr.length - COLLECTION_LIMIT; objArr = Arrays.copyOf(objArr, COLLECTION_LIMIT); - + } res = Arrays.toString(objArr); - } else { - res = toStringWithLimit(arr, COLLECTION_LIMIT); - - arrLen = Array.getLength(arr); } - - if (arrLen > COLLECTION_LIMIT) { + if (more > 0) { StringBuilder resSB = new StringBuilder(res); resSB.deleteCharAt(resSB.length() - 1); - - resSB.append("... and ").append(arrLen - COLLECTION_LIMIT).append(" more]"); + resSB.append("... and ").append(more).append(" more]"); res = resSB.toString(); } @@ -1117,37 +1132,6 @@ public static String arrayToString(Object arr) { return res; } - /** - * Returns limited string representation of array. - * - * @param arr Input array. Each value is automatically wrapped if it has a primitive type. - * @param limit max array items to string limit. - * @return String representation of an array. 
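Every per-type branch in the new arrayToString() applies the same truncation scheme: copy at most COLLECTION_LIMIT elements, render them with Arrays.toString(), then splice a "... and N more]" tail over the closing bracket. A sketch of that scheme for one primitive type (LIMIT and the method name are illustrative; the real constant is COLLECTION_LIMIT):

```java
import java.util.Arrays;

// Truncated array rendering: show at most LIMIT elements, then report how
// many were cut, exactly as the per-type branches above do.
public class ArrayTrunc {
    static final int LIMIT = 3;

    static String truncated(int[] arr) {
        int more = 0;
        int[] shown = arr;

        if (arr.length > LIMIT) {
            more = arr.length - LIMIT;
            shown = Arrays.copyOf(arr, LIMIT);
        }

        String res = Arrays.toString(shown);

        if (more > 0) {
            StringBuilder sb = new StringBuilder(res);
            sb.deleteCharAt(sb.length() - 1);               // drop trailing ']'
            sb.append("... and ").append(more).append(" more]");
            res = sb.toString();
        }

        return res;
    }
}
```

The per-type branches exist because a primitive array cannot be cast to Object[], and this avoids the reflective Array.get() calls of the removed toStringWithLimit().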
- */ - private static String toStringWithLimit(Object arr, int limit) { - int arrIdxMax = Array.getLength(arr) - 1; - - if (arrIdxMax == -1) - return "[]"; - - int idxMax = Math.min(arrIdxMax, limit); - - StringBuilder b = new StringBuilder(); - - b.append('['); - - for (int i = 0; i <= idxMax; ++i) { - b.append(Array.get(arr, i)); - - if (i == idxMax) - return b.append(']').toString(); - - b.append(", "); - } - - return b.toString(); - } - /** * Produces uniformed output of string with context properties * @@ -1160,24 +1144,37 @@ private static String toStringWithLimit(Object arr, int limit) { public static String toString(String str, String name, @Nullable Object val, boolean sens) { assert name != null; - Object[] propNames = new Object[1]; - Object[] propVals = new Object[1]; - boolean[] propSens = new boolean[1]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] propNames = tmp.getAdditionalNames(); + Object[] propVals = tmp.getAdditionalValues(); + boolean[] propSens = tmp.getAdditionalSensitives(); propNames[0] = name; propVals[0] = val; propSens[0] = sens; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(str, sb, propNames, propVals, propSens, 1); + newStr = lenLim.length() == 0; + + return toStringImpl(str, tmp.getStringBuilder(lenLim), propNames, propVals, propSens, 1); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -1214,9 +1211,18 @@ public static String toString(String str, assert name0 != null; assert name1 != null; - Object[] propNames = new Object[2]; - Object[] propVals = new Object[2]; - boolean[] propSens = new boolean[2]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] propNames = tmp.getAdditionalNames(); + Object[] propVals = tmp.getAdditionalValues(); + boolean[] propSens = tmp.getAdditionalSensitives(); propNames[0] = name0; propVals[0] = val0; @@ -1225,16 +1231,20 @@ public static String toString(String str, propVals[1] = val1; propSens[1] = sens1; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(str, sb, propNames, propVals, propSens, 2); + newStr = lenLim.length() == 0; + + return toStringImpl(str, tmp.getStringBuilder(lenLim), propNames, propVals, propSens, 2); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -1261,9 +1271,18 @@ public static String toString(String str, assert name1 != null; assert name2 != null; - Object[] propNames = new Object[3]; - Object[] propVals = new Object[3]; - boolean[] propSens = new boolean[3]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] propNames = tmp.getAdditionalNames(); + Object[] propVals = tmp.getAdditionalValues(); + boolean[] propSens = tmp.getAdditionalSensitives(); propNames[0] = name0; propVals[0] = val0; @@ -1275,16 +1294,20 @@ public static String toString(String str, propVals[2] = val2; propSens[2] = sens2; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(str, sb, propNames, propVals, propSens, 3); + newStr = lenLim.length() == 0; + + return toStringImpl(str, tmp.getStringBuilder(lenLim), propNames, propVals, propSens, 3); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -1316,9 +1339,18 @@ public static String toString(String str, assert name2 != null; assert name3 != null; - Object[] propNames = new Object[4]; - Object[] propVals = new Object[4]; - boolean[] propSens = new boolean[4]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] propNames = tmp.getAdditionalNames(); + Object[] propVals = tmp.getAdditionalValues(); + boolean[] propSens = tmp.getAdditionalSensitives(); propNames[0] = name0; propVals[0] = val0; @@ -1333,16 +1365,20 @@ public static String toString(String str, propVals[3] = val3; propSens[3] = sens3; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(str, sb, propNames, propVals, propSens, 4); + newStr = lenLim.length() == 0; + + return toStringImpl(str, tmp.getStringBuilder(lenLim), propNames, propVals, propSens, 4); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -1379,9 +1415,18 @@ public static String toString(String str, assert name3 != null; assert name4 != null; - Object[] propNames = new Object[5]; - Object[] propVals = new Object[5]; - boolean[] propSens = new boolean[5]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] propNames = tmp.getAdditionalNames(); + Object[] propVals = tmp.getAdditionalValues(); + boolean[] propSens = tmp.getAdditionalSensitives(); propNames[0] = name0; propVals[0] = val0; @@ -1399,16 +1444,20 @@ public static String toString(String str, propVals[4] = val4; propSens[4] = sens4; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(str, sb, propNames, propVals, propSens, 5); + newStr = lenLim.length() == 0; + + return toStringImpl(str, tmp.getStringBuilder(lenLim), propNames, propVals, propSens, 5); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -1450,9 +1499,18 @@ public static String toString(String str, assert name4 != null; assert name5 != null; - Object[] propNames = new Object[6]; - Object[] propVals = new Object[6]; - boolean[] propSens = new boolean[6]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] propNames = tmp.getAdditionalNames(); + Object[] propVals = tmp.getAdditionalValues(); + boolean[] propSens = tmp.getAdditionalSensitives(); propNames[0] = name0; propVals[0] = val0; @@ -1473,16 +1531,20 @@ public static String toString(String str, propVals[5] = val5; propSens[5] = sens5; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(str, sb, propNames, propVals, propSens, 6); + newStr = lenLim.length() == 0; + + return toStringImpl(str, tmp.getStringBuilder(lenLim), propNames, propVals, propSens, 6); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -1529,9 +1591,18 @@ public static String toString(String str, assert name5 != null; assert name6 != null; - Object[] propNames = new Object[7]; - Object[] propVals = new Object[7]; - boolean[] propSens = new boolean[7]; + Queue queue = threadCache.get(); + + assert queue != null; + + // Since string() methods can be chain-called from the same thread we + // have to keep a list of thread-local objects and remove/add them + // in each string() apply. + GridToStringThreadLocal tmp = queue.isEmpty() ? 
new GridToStringThreadLocal() : queue.remove(); + + Object[] propNames = tmp.getAdditionalNames(); + Object[] propVals = tmp.getAdditionalValues(); + boolean[] propSens = tmp.getAdditionalSensitives(); propNames[0] = name0; propVals[0] = val0; @@ -1555,16 +1626,20 @@ public static String toString(String str, propVals[6] = val6; propSens[6] = sens6; - SBLimitedLength sb = threadLocSB.get(); + SBLengthLimit lenLim = threadCurLen.get(); - boolean newStr = sb.length() == 0; + boolean newStr = false; try { - return toStringImpl(str, sb, propNames, propVals, propSens, 7); + newStr = lenLim.length() == 0; + + return toStringImpl(str, tmp.getStringBuilder(lenLim), propNames, propVals, propSens, 7); } finally { + queue.offer(tmp); + if (newStr) - sb.reset(); + lenLim.reset(); } } @@ -1582,7 +1657,7 @@ public static String toString(String str, private static String toStringImpl(String str, SBLimitedLength buf, Object[] propNames, Object[] propVals, boolean[] propSens, int propCnt) { - boolean newStr = buf.length() == 0; + buf.setLength(0); if (str != null) buf.a(str).a(" "); @@ -1593,11 +1668,7 @@ private static String toStringImpl(String str, SBLimitedLength buf, Object[] pro buf.a(']'); - if (newStr) - return buf.toString(); - - // Called from another GTSB.toString(), so this string is already in the buffer and shouldn't be returned. - return ""; + return buf.toString(); } /** @@ -1770,54 +1841,4 @@ public static String compact(@NotNull Collection col) { return sb.toString(); } - - /** - * Checks that object is already saved. - * In positive case this method inserts hash to the saved object entry (if needed) and name@hash for current entry. - * Further toString operations are not needed for current object. - * - * @param buf String builder buffer. - * @param obj Object. - * @param svdObjs Map with saved objects to handle recursion. - * @return {@code True} if object is already saved and name@hash was added to buffer. 
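The repeated hunks above all introduce the same borrow/return pattern: a thread-local Queue of reusable holder objects, so toString() calls that re-enter on the same thread each take a distinct holder instead of clobbering one shared buffer. A minimal sketch, assuming illustrative names (HolderPool, render) rather than the Ignite classes:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Each thread keeps a queue of reusable holders. A call borrows one at the
// top, returns it in finally - mirroring the queue.remove() / queue.offer(tmp)
// pairing in the hunks above, which keeps nested calls from sharing a buffer.
public class HolderPool {
    static class Holder {
        final StringBuilder sb = new StringBuilder(256);
    }

    private static final ThreadLocal<Queue<Holder>> POOL =
        ThreadLocal.withInitial(ArrayDeque::new);

    static String render(String name, Object val) {
        Queue<Holder> queue = POOL.get();

        Holder tmp = queue.isEmpty() ? new Holder() : queue.remove();

        try {
            tmp.sb.setLength(0); // reuse the buffer without reallocating

            return tmp.sb.append(name).append('=').append(val).toString();
        }
        finally {
            queue.offer(tmp); // return the holder for later calls on this thread
        }
    }
}
```

Because the holder is removed from the queue before use and only re-offered afterwards, a re-entrant call on the same thread finds the queue empty and allocates its own holder.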
- * {@code False} if it wasn't saved previously and it should be saved. - */ - private static boolean handleRecursion(SBLimitedLength buf, Object obj, IdentityHashMap svdObjs) { - Integer pos = svdObjs.get(obj); - - if (pos == null) - return false; - - String name = obj.getClass().getSimpleName(); - String hash = '@' + Integer.toHexString(System.identityHashCode(obj)); - String savedName = name + hash; - - if (!buf.isOverflowed() && buf.impl().indexOf(savedName, pos) != pos) { - buf.i(pos + name.length(), hash); - - incValues(svdObjs, obj, hash.length()); - } - - buf.a(savedName); - - return true; - } - - /** - * Increment positions of already presented objects afterward given object. - * - * @param svdObjs Map with objects already presented in the buffer. - * @param obj Object. - * @param hashLen Length of the object's hash. - */ - private static void incValues(IdentityHashMap svdObjs, Object obj, int hashLen) { - Integer baseline = svdObjs.get(obj); - - for (IdentityHashMap.Entry entry : svdObjs.entrySet()) { - Integer pos = entry.getValue(); - - if (pos > baseline) - entry.setValue(pos + hashLen); - } - } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/GridToStringThreadLocal.java b/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/GridToStringThreadLocal.java new file mode 100644 index 0000000000000..2f62727b17802 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/GridToStringThreadLocal.java @@ -0,0 +1,66 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.util.tostring; + +/** + * Helper wrapper containing StringBuilder and additional values. Stored as a thread-local variable. + */ +class GridToStringThreadLocal { + /** */ + private SBLimitedLength sb = new SBLimitedLength(256); + + /** */ + private Object[] addNames = new Object[7]; + + /** */ + private Object[] addVals = new Object[7]; + + /** */ + private boolean[] addSens = new boolean[7]; + + /** + * @param len Length limit. + * @return String builder. + */ + SBLimitedLength getStringBuilder(SBLengthLimit len) { + sb.initLimit(len); + + return sb; + } + + /** + * @return Additional names. + */ + Object[] getAdditionalNames() { + return addNames; + } + + /** + * @return Additional values. + */ + Object[] getAdditionalValues() { + return addVals; + } + + /** + * @return Additional values. + */ + boolean[] getAdditionalSensitives() { + return addSens; + } +} \ No newline at end of file diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/SBLimitedLength.java b/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/SBLimitedLength.java index b524b3d9ad39c..c47cf40c26ef7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/SBLimitedLength.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/tostring/SBLimitedLength.java @@ -50,19 +50,6 @@ void initLimit(SBLengthLimit lenLimit) { tail.reset(); } - /** - * Resets buffer. 
- */ - public void reset() { - super.setLength(0); - - lenLimit.reset(); - - if (tail != null) - tail.reset(); - } - - /** * @return tail string builder. */ @@ -305,11 +292,4 @@ private GridStringBuilder onWrite(int lenBeforeWrite) { return res.toString(); } } - - /** - * @return {@code True} - if buffer limit is reached. - */ - public boolean isOverflowed() { - return lenLimit.overflowed(this); - } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/worker/GridWorker.java b/modules/core/src/main/java/org/apache/ignite/internal/util/worker/GridWorker.java index 3d9163d35f4ff..3f779da9ea3cd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/worker/GridWorker.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/worker/GridWorker.java @@ -99,12 +99,12 @@ protected GridWorker(@Nullable String igniteInstanceName, String name, IgniteLog /** {@inheritDoc} */ @Override public final void run() { + updateHeartbeat(); + // Runner thread must be recorded first as other operations // may depend on it being present. runner = Thread.currentThread(); - updateHeartbeat(); - if (log.isDebugEnabled()) log.debug("Grid runnable started: " + name); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObject.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObject.java index 38d7a0ad0aef5..6fd51cdd3d55c 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObject.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObject.java @@ -26,10 +26,13 @@ import java.util.LinkedHashSet; import java.util.List; import java.util.Set; +import org.apache.ignite.internal.dto.IgniteDataTransferObject; import org.jetbrains.annotations.Nullable; /** - * Base class for data transfer objects. + * Base class for data transfer objects for Visor tasks. 
+ * + * @deprecated Use {@link IgniteDataTransferObject} instead. This class may be removed in Ignite 3.0. */ public abstract class VisorDataTransferObject implements Externalizable { /** */ @@ -47,6 +50,9 @@ public abstract class VisorDataTransferObject implements Externalizable { /** Version 3. */ protected static final byte V3 = 3; + /** Version 4. */ + protected static final byte V4 = 4; + /** * @param col Source collection. * @param Collection type. diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectInput.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectInput.java index 16e933090eb9b..7caf26914ba5e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectInput.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectInput.java @@ -20,12 +20,14 @@ import java.io.IOException; import java.io.ObjectInput; import java.io.ObjectInputStream; +import org.apache.ignite.internal.dto.IgniteDataTransferObjectInput; import org.apache.ignite.internal.util.io.GridByteArrayInputStream; import org.apache.ignite.internal.util.typedef.internal.U; import org.jetbrains.annotations.NotNull; /** * Wrapper for object input. + * @deprecated Use {@link IgniteDataTransferObjectInput} instead. This class may be removed in Ignite 3.0. 
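The `@deprecated` javadoc tag added here is conventionally paired with the `@Deprecated` annotation, which is retained at runtime and lets tooling flag usages. A sketch of that pairing with placeholder class names (LegacyDto/NewDto are not Ignite types):

```java
// Illustrative only: the standard way to mark a class slated for removal,
// matching the javadoc added above - a runtime-retained annotation plus a
// @deprecated tag that names the replacement.
public class DeprecationDemo {
    /**
     * @deprecated Use {@link NewDto} instead. May be removed in a future major release.
     */
    @Deprecated
    static class LegacyDto { }

    static class NewDto { }

    /** @Deprecated has RUNTIME retention, so it is visible via reflection. */
    static boolean isDeprecated(Class<?> cls) {
        return cls.isAnnotationPresent(Deprecated.class);
    }
}
```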
*/ public class VisorDataTransferObjectInput implements ObjectInput { /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectOutput.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectOutput.java index 7fa772e75ca13..4dbd25035bac9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectOutput.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorDataTransferObjectOutput.java @@ -20,12 +20,14 @@ import java.io.IOException; import java.io.ObjectOutput; import java.io.ObjectOutputStream; +import org.apache.ignite.internal.dto.IgniteDataTransferObjectOutput; import org.apache.ignite.internal.util.io.GridByteArrayOutputStream; import org.apache.ignite.internal.util.typedef.internal.U; import org.jetbrains.annotations.NotNull; /** * Wrapper for object output. + * @deprecated Use {@link IgniteDataTransferObjectOutput} instead. This class may be removed in Ignite 3.0. 
*/ public class VisorDataTransferObjectOutput implements ObjectOutput { /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorMultiNodeTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorMultiNodeTask.java index 29ad5195c59e1..dbf1c6630905d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorMultiNodeTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/VisorMultiNodeTask.java @@ -105,7 +105,7 @@ protected Map map0(List subgrid, map.put(job(taskArg), node); if (map.isEmpty()) - ignite.log().error("No mapped jobs: [task=" + getClass().getName() + + ignite.log().warning("No mapped jobs: [task=" + getClass().getName() + ", topVer=" + ignite.cluster().topologyVersion() + ", jobNids=" + nodeIds + ", subGrid=" + U.toShortString(subgrid) + "]"); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/baseline/VisorBaselineTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/baseline/VisorBaselineTask.java index 3c00452c83a48..be5fcca5e85b5 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/baseline/VisorBaselineTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/baseline/VisorBaselineTask.java @@ -27,6 +27,7 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.cluster.IgniteClusterEx; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; @@ -37,6 +38,7 @@ * Task that will collect information about baseline topology and can change its state. 
*/ @GridInternal +@GridVisorManagementTask public class VisorBaselineTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/binary/VisorBinaryMetadataCollectorTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/binary/VisorBinaryMetadataCollectorTask.java index bf072e22485de..c62c658e529ca 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/binary/VisorBinaryMetadataCollectorTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/binary/VisorBinaryMetadataCollectorTask.java @@ -20,6 +20,7 @@ import org.apache.ignite.IgniteBinary; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -29,6 +30,7 @@ * Task that collects binary metadata. */ @GridInternal +@GridVisorManagementTask public class VisorBinaryMetadataCollectorTask extends VisorOneNodeTask { /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCache.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCache.java index 63eb13c3cecc6..495a9cb6dbe7e 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCache.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCache.java @@ -91,6 +91,9 @@ public class VisorCache extends VisorDataTransferObject { /** Cache system state. */ private boolean sys; + /** Checks whether statistics collection is enabled in this cache. */ + private boolean statisticsEnabled; + /** * Create data transfer object for given cache. 
*/ @@ -119,7 +122,7 @@ public VisorCache(IgniteEx ignite, GridCacheAdapter ca, boolean collectMetrics) backupSize = ca.localSizeLong(PEEK_ONHEAP_BACKUP); nearSize = ca.nearSize(); size = primarySize + backupSize + nearSize; - + partitions = ca.affinity().partitions(); near = cctx.isNear(); @@ -127,6 +130,8 @@ public VisorCache(IgniteEx ignite, GridCacheAdapter ca, boolean collectMetrics) metrics = new VisorCacheMetrics(ignite, name); sys = ignite.context().cache().systemCache(name); + + statisticsEnabled = ca.clusterMetrics().isStatisticsEnabled(); } /** @@ -278,9 +283,16 @@ public long offHeapEntriesCount() { return metrics != null ? metrics.getOffHeapEntriesCount() : 0L; } + /** + * @return Checks whether statistics collection is enabled in this cache. + */ + public boolean isStatisticsEnabled() { + return statisticsEnabled; + } + /** {@inheritDoc} */ @Override public byte getProtocolVersion() { - return V2; + return V3; } /** {@inheritDoc} */ @@ -298,6 +310,7 @@ public long offHeapEntriesCount() { out.writeBoolean(near); out.writeObject(metrics); out.writeBoolean(sys); + out.writeBoolean(statisticsEnabled); } /** {@inheritDoc} */ @@ -316,6 +329,9 @@ public long offHeapEntriesCount() { metrics = (VisorCacheMetrics)in.readObject(); sys = protoVer > V1 ? 
in.readBoolean() : metrics != null && metrics.isSystem(); + + if (protoVer > V2) + statisticsEnabled = in.readBoolean(); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheAffinityNodeTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheAffinityNodeTask.java index 447cc11d7d387..439b8d3edcd7b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheAffinityNodeTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheAffinityNodeTask.java @@ -21,6 +21,7 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -30,6 +31,7 @@ * Task that will find affinity node for key. 
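The VisorCache change above is the standard versioned-DTO evolution step: bump getProtocolVersion() from V2 to V3, append the new field last in writeExternalData(), and guard the read with `protoVer > V2` so payloads from older senders still deserialize. A self-contained sketch of that scheme (field and class names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// V3 writers append the new flag after all V2 fields; readers consume it only
// when the sender's protocol version is > V2, so V2 payloads remain readable.
public class VersionedDto {
    static final byte V2 = 2, V3 = 3;

    static byte[] write(byte ver, boolean statisticsEnabled) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);

            out.writeByte(ver);        // protocol version travels first
            out.writeUTF("cacheName"); // stand-in for the pre-existing V2 fields

            if (ver > V2)
                out.writeBoolean(statisticsEnabled); // new field, appended in V3

            return bos.toByteArray();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static boolean read(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));

            byte protoVer = in.readByte();

            in.readUTF(); // skip the V2 fields

            // Older payloads simply lack the field; fall back to the default.
            return protoVer > V2 && in.readBoolean();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Appending new fields at the end and gating reads on the version is what keeps the wire format backward compatible across rolling upgrades.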
*/ @GridInternal +@GridVisorManagementTask public class VisorCacheAffinityNodeTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheClearTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheClearTask.java index bdfc9eb183a1f..68142b97aad60 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheClearTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheClearTask.java @@ -21,6 +21,7 @@ import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.compute.ComputeJobContext; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -32,6 +33,7 @@ * Task that clears specified caches on specified node. 
*/ @GridInternal +@GridVisorManagementTask public class VisorCacheClearTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfiguration.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfiguration.java index b0126788db819..1e9faaabaded2 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfiguration.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfiguration.java @@ -36,6 +36,7 @@ import org.apache.ignite.lang.IgniteUuid; import org.jetbrains.annotations.Nullable; + import static org.apache.ignite.internal.visor.util.VisorTaskUtils.compactClass; import static org.apache.ignite.internal.visor.util.VisorTaskUtils.compactIterable; @@ -489,8 +490,8 @@ public int getQueryDetailMetricsSize() { } /** - * @return {@code true} if data can be read from backup node or {@code false} if data always - * should be read from primary node and never from backup. + * @return {@code true} if data can be read from backup node or {@code false} if data always should be read from + * primary node and never from backup. 
*/ public boolean isReadFromBackup() { return readFromBackup; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorJob.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorJob.java index f72cd7de09421..51477c96a6896 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorJob.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorJob.java @@ -19,16 +19,18 @@ import java.util.Collection; import java.util.Map; +import java.util.regex.Pattern; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorJob; +import org.apache.ignite.internal.visor.util.VisorTaskUtils; import org.apache.ignite.lang.IgniteUuid; /** - * Job that collect cache metrics from node. + * Job that collect cache configuration from node. */ public class VisorCacheConfigurationCollectorJob extends VisorJob> { @@ -38,7 +40,7 @@ public class VisorCacheConfigurationCollectorJob /** * Create job with given argument. * - * @param arg Whether to collect metrics for all caches or for specified cache name only. + * @param arg Whether to collect metrics for all caches or for specified cache name only or by regex. * @param debug Debug flag. 
*/ public VisorCacheConfigurationCollectorJob(VisorCacheConfigurationCollectorTaskArg arg, boolean debug) { @@ -49,18 +51,24 @@ public VisorCacheConfigurationCollectorJob(VisorCacheConfigurationCollectorTaskA @Override protected Map run(VisorCacheConfigurationCollectorTaskArg arg) { Collection> caches = ignite.context().cache().jcaches(); - Collection cacheNames = arg.getCacheNames(); + Pattern ptrn = arg.getRegex() != null ? Pattern.compile(arg.getRegex()) : null; - boolean all = F.isEmpty(cacheNames); + boolean all = F.isEmpty(arg.getCacheNames()); - Map res = U.newHashMap(all ? caches.size() : cacheNames.size()); + boolean hasPtrn = ptrn != null; + + Map res = U.newHashMap(caches.size()); for (IgniteCacheProxy cache : caches) { String cacheName = cache.getName(); - if (all || cacheNames.contains(cacheName)) { - res.put(cacheName, config(cache.getConfiguration(CacheConfiguration.class), - cache.context().dynamicDeploymentId())); + boolean matched = hasPtrn ? ptrn.matcher(cacheName).find() : all || arg.getCacheNames().contains(cacheName); + + if (!VisorTaskUtils.isRestartingCache(ignite, cacheName) && matched) { + VisorCacheConfiguration cfg = + config(cache.getConfiguration(CacheConfiguration.class), cache.context().dynamicDeploymentId()); + + res.put(cacheName, cfg); } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTask.java index fd224a893eb39..418eb1b57b81b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTask.java @@ -17,9 +17,15 @@ package org.apache.ignite.internal.visor.cache; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; import java.util.Map; +import 
org.apache.ignite.IgniteException; +import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; import org.apache.ignite.internal.visor.VisorOneNodeTask; +import org.jetbrains.annotations.Nullable; /** * Task that collect cache metrics from all nodes. @@ -34,4 +40,38 @@ public class VisorCacheConfigurationCollectorTask @Override protected VisorCacheConfigurationCollectorJob job(VisorCacheConfigurationCollectorTaskArg arg) { return new VisorCacheConfigurationCollectorJob(arg, debug); } + + /** {@inheritDoc} */ + @Override protected @Nullable Map reduce0( + List results + ) throws IgniteException { + if (results == null) + return null; + + Map map = new HashMap<>(); + + List resultsExceptions = null; + + for (ComputeJobResult res : results) { + if (res.getException() == null) + map.putAll(res.getData()); + else { + if (resultsExceptions == null) + resultsExceptions = new ArrayList<>(results.size()); + + resultsExceptions.add(new IgniteException("Job failed on node: " + res.getNode().id(), res.getException())); + } + } + + if (resultsExceptions != null) { + IgniteException e = new IgniteException("Reduce failed because a job failed on some nodes"); + + for (Exception ex : resultsExceptions) + e.addSuppressed(ex); + + throw e; + } + + return map; + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTaskArg.java index a0b43dbcf890d..b4b8143be0de0 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTaskArg.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheConfigurationCollectorTaskArg.java @@ -21,6 +21,7 @@ import java.io.ObjectInput; import java.io.ObjectOutput; import java.util.Collection; +import java.util.regex.Pattern; import 
org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorDataTransferObject; @@ -32,9 +33,12 @@ public class VisorCacheConfigurationCollectorTaskArg extends VisorDataTransferOb /** */ private static final long serialVersionUID = 0L; - /** Collection of cache deployment IDs. */ + /** Collection of cache names. */ private Collection cacheNames; + /** Cache name regexp. */ + private String regex; + /** * Default constructor. */ @@ -49,6 +53,16 @@ public VisorCacheConfigurationCollectorTaskArg(Collection cacheNames) { this.cacheNames = cacheNames; } + /** + * @param regex Cache name regexp. + */ + public VisorCacheConfigurationCollectorTaskArg(String regex) { + // Check that the regex is valid; Pattern.compile() throws on malformed input. + Pattern.compile(regex); + + this.regex = regex; + } + /** + * @return Collection of cache deployment IDs. */ @@ -56,14 +70,30 @@ public Collection getCacheNames() { return cacheNames; } + /** + * @return Cache name regexp. 
+ */ + public String getRegex() { + return regex; + } + + /** {@inheritDoc} */ + @Override public byte getProtocolVersion() { + return V2; + } + /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { U.writeCollection(out, cacheNames); + U.writeString(out, regex); } /** {@inheritDoc} */ @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { cacheNames = U.readCollection(in); + + if (protoVer > V1) + regex = U.readString(in); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLoadTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLoadTask.java index 7b25ae4e84cf4..5d1bccd06f3bf 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLoadTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLoadTask.java @@ -26,6 +26,7 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorJob; @@ -35,6 +36,7 @@ * Task to loads caches. 
*/ @GridInternal +@GridVisorManagementTask public class VisorCacheLoadTask extends VisorOneNodeTask> { /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTask.java index 9785b9cf96921..daa4af7bec8ae 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTask.java @@ -24,6 +24,7 @@ import org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -32,6 +33,7 @@ * Collect list of lost partitions. */ @GridInternal +@GridVisorManagementTask public class VisorCacheLostPartitionsTask extends VisorOneNodeTask { /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTaskArg.java index d6404bfdeb5ec..3ccaeaa3ba179 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTaskArg.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheLostPartitionsTaskArg.java @@ -35,6 +35,9 @@ public class VisorCacheLostPartitionsTaskArg extends VisorDataTransferObject { /** List of cache names. */ private List cacheNames; + /** String representation of cache names; created for the toString() method because the collection itself can't be printed. 
*/ + private String modifiedCaches; + /** * Default constructor. */ @@ -47,6 +50,9 @@ public VisorCacheLostPartitionsTaskArg() { */ public VisorCacheLostPartitionsTaskArg(List cacheNames) { this.cacheNames = cacheNames; + + if (cacheNames != null) + modifiedCaches = cacheNames.toString(); } /** @@ -59,11 +65,13 @@ public List getCacheNames() { /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { U.writeCollection(out, cacheNames); + U.writeString(out, modifiedCaches); } /** {@inheritDoc} */ - @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { + @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { cacheNames = U.readList(in); + modifiedCaches = U.readString(in); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheMetricsCollectorTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheMetricsCollectorTask.java index ab1fa8c835d41..4c18ba3c78247 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheMetricsCollectorTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheMetricsCollectorTask.java @@ -30,6 +30,7 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorMultiNodeTask; +import org.apache.ignite.internal.visor.util.VisorTaskUtils; import org.jetbrains.annotations.Nullable; /** @@ -110,15 +111,17 @@ private VisorCacheMetricsCollectorJob(VisorCacheMetricsCollectorTaskArg arg, boo boolean allCaches = cacheNames.isEmpty(); for (IgniteCacheProxy ca : caches) { - GridCacheContext ctx = ca.context(); + String cacheName = ca.getName(); - if (ctx.started() && (ctx.affinityNode() || ctx.isNear())) { - String cacheName = 
ca.getName(); + if (!VisorTaskUtils.isRestartingCache(ignite, cacheName)) { + GridCacheContext ctx = ca.context(); - VisorCacheMetrics cm = new VisorCacheMetrics(ignite, cacheName); + if (ctx.started() && (ctx.affinityNode() || ctx.isNear())) { + VisorCacheMetrics cm = new VisorCacheMetrics(ignite, cacheName); - if ((allCaches || cacheNames.contains(cacheName)) && (showSysCaches || !cm.isSystem())) - res.add(cm); + if ((allCaches || cacheNames.contains(cacheName)) && (showSysCaches || !cm.isSystem())) + res.add(cm); + } } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTask.java index 62e7ac6449cf3..f68efa8622e1b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTask.java @@ -21,6 +21,7 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -31,6 +32,7 @@ * Task that modify value in specified cache. 
*/ @GridInternal +@GridVisorManagementTask public class VisorCacheModifyTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTaskArg.java index bef73ed5fa508..3948a2f086672 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTaskArg.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheModifyTaskArg.java @@ -43,6 +43,9 @@ public class VisorCacheModifyTaskArg extends VisorDataTransferObject { /** Specified value. */ private Object val; + /** String representation of the key and value; created for the toString() method because the objects themselves can't be printed. */ + private String modifiedValues; + /** * Default constructor. */ @@ -61,6 +64,7 @@ public VisorCacheModifyTaskArg(String cacheName, VisorModifyCacheMode mode, Obje this.mode = mode; this.key = key; this.val = val; + this.modifiedValues = "[Key=" + key + ", Value=" + val + "]"; } /** @@ -97,6 +101,7 @@ public Object getValue() { U.writeEnum(out, mode); out.writeObject(key); out.writeObject(val); + U.writeString(out, modifiedValues); } /** {@inheritDoc} */ @@ -105,6 +110,7 @@ public Object getValue() { mode = VisorModifyCacheMode.fromOrdinal(in.readByte()); key = in.readObject(); val = in.readObject(); + modifiedValues = U.readString(in); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCachePartitionsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCachePartitionsTask.java index 93c9c60c571d8..11ae18de754f8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCachePartitionsTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCachePartitionsTask.java @@ -36,10 +36,11 @@ import 
org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorMultiNodeTask; +import org.apache.ignite.internal.visor.util.VisorTaskUtils; import org.jetbrains.annotations.Nullable; -import static org.apache.ignite.internal.visor.util.VisorTaskUtils.log; import static org.apache.ignite.internal.visor.util.VisorTaskUtils.escapeName; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.log; /** * Task that collect keys distribution in partitions. @@ -98,7 +99,7 @@ private VisorCachePartitionsJob(VisorCachePartitionsTaskArg arg, boolean debug) GridCacheAdapter ca = ignite.context().cache().internalCache(cacheName); // Cache was not started. - if (ca == null || !ca.context().started()) + if (ca == null || !ca.context().started() || VisorTaskUtils.isRestartingCache(ignite, cacheName)) return parts; CacheConfiguration cfg = ca.configuration(); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheRebalanceTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheRebalanceTask.java index 87a2ce6713c22..9c3fd37e32bb6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheRebalanceTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheRebalanceTask.java @@ -23,6 +23,7 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorJob; @@ -32,6 +33,7 @@ * Pre-loads caches. Made callable just to conform common pattern. 
*/ @GridInternal +@GridVisorManagementTask public class VisorCacheRebalanceTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTask.java index 2ad88ee75b7b6..aa90d3e52a77d 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTask.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.visor.cache; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -26,6 +27,7 @@ * Reset lost partitions for caches. */ @GridInternal +@GridVisorManagementTask public class VisorCacheResetLostPartitionsTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTaskArg.java index 2f365c82d6fba..5f607304cb8d8 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTaskArg.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheResetLostPartitionsTaskArg.java @@ -35,6 +35,9 @@ public class VisorCacheResetLostPartitionsTaskArg extends VisorDataTransferObjec /** List of cache names. */ private List cacheNames; + /** String representation of cache names; created for the toString() method because the collection itself can't be printed. 
*/ + private String modifiedCaches; + /** * Default constructor. */ @@ -47,6 +50,9 @@ public VisorCacheResetLostPartitionsTaskArg() { */ public VisorCacheResetLostPartitionsTaskArg(List cacheNames) { this.cacheNames = cacheNames; + + if (cacheNames != null) + modifiedCaches = cacheNames.toString(); } /** @@ -59,11 +65,13 @@ public List getCacheNames() { /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { U.writeCollection(out, cacheNames); + U.writeString(out, modifiedCaches); } /** {@inheritDoc} */ @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { cacheNames = U.readList(in); + modifiedCaches = U.readString(in); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStartTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStartTask.java index 5f6337be0147e..7663827b53e6b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStartTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStartTask.java @@ -29,6 +29,7 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; @@ -40,6 +41,7 @@ * Task that start cache or near cache with specified configuration. 
*/ @GridInternal +@GridVisorManagementTask public class VisorCacheStartTask extends VisorMultiNodeTask, Void> { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStopTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStopTask.java index df95c5ea73488..b76fc3df40315 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStopTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheStopTask.java @@ -20,6 +20,7 @@ import java.util.Collection; import java.util.HashSet; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; @@ -29,6 +30,7 @@ * Task that stop specified caches on specified node. */ @GridInternal +@GridVisorManagementTask public class VisorCacheStopTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheToggleStatisticsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheToggleStatisticsTask.java new file mode 100644 index 0000000000000..aebed810d2cc0 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheToggleStatisticsTask.java @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.visor.cache; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.visor.VisorJob; +import org.apache.ignite.internal.visor.VisorOneNodeTask; + +/** + * Switches the statisticsEnabled flag for the specified caches to the specified state. + */ +@GridInternal +public class VisorCacheToggleStatisticsTask extends VisorOneNodeTask { + /** */ + private static final long serialVersionUID = 0L; + + /** {@inheritDoc} */ + @Override protected VisorCachesToggleStatisticsJob job(VisorCacheToggleStatisticsTaskArg arg) { + return new VisorCachesToggleStatisticsJob(arg, debug); + } + + /** + * Job that switches the statisticsEnabled flag for the specified caches to the specified state. + */ + private static class VisorCachesToggleStatisticsJob extends VisorJob { + /** */ + private static final long serialVersionUID = 0L; + + /** + * @param arg Job argument object. + * @param debug Debug flag. 
+ */ + private VisorCachesToggleStatisticsJob(VisorCacheToggleStatisticsTaskArg arg, boolean debug) { + super(arg, debug); + } + + /** {@inheritDoc} */ + @Override protected Void run(VisorCacheToggleStatisticsTaskArg arg) { + try { + ignite.context().cache().enableStatistics(arg.getCacheNames(), arg.getState()); + } + catch (IgniteCheckedException e) { + throw U.convertException(e); + } + + return null; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(VisorCachesToggleStatisticsJob.class, this); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheToggleStatisticsTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheToggleStatisticsTaskArg.java new file mode 100644 index 0000000000000..34359daeacf6c --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/cache/VisorCacheToggleStatisticsTaskArg.java @@ -0,0 +1,87 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.visor.cache; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import java.util.Set; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.visor.VisorDataTransferObject; + +/** + * Argument for {@link VisorCacheToggleStatisticsTask}. + */ +public class VisorCacheToggleStatisticsTaskArg extends VisorDataTransferObject { + /** */ + private static final long serialVersionUID = 0L; + + /** State to set the statisticsEnabled flag to. */ + private boolean state; + + /** Cache names to toggle statisticsEnabled flag. */ + private Set cacheNames; + + /** + * Default constructor. + */ + public VisorCacheToggleStatisticsTaskArg() { + // No-op. + } + + /** + * @param state State to set the statisticsEnabled flag to. + * @param cacheNames Collection of cache names to toggle statisticsEnabled flag. + */ + public VisorCacheToggleStatisticsTaskArg(boolean state, Set cacheNames) { + this.state = state; + this.cacheNames = cacheNames; + } + + /** + * @return State to set the statisticsEnabled flag to. + */ + public boolean getState() { + return state; + } + + /** + * @return Cache names to toggle statisticsEnabled flag. 
+ */ + public Set getCacheNames() { + return cacheNames; + } + + /** {@inheritDoc} */ + @Override protected void writeExternalData(ObjectOutput out) throws IOException { + out.writeBoolean(state); + U.writeCollection(out, cacheNames); + } + + /** {@inheritDoc} */ + @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { + state = in.readBoolean(); + cacheNames = U.readSet(in); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(VisorCacheToggleStatisticsTaskArg.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeCancelSessionsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeCancelSessionsTask.java index 6cd683ca42318..8b29a400e19da 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeCancelSessionsTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeCancelSessionsTask.java @@ -24,6 +24,7 @@ import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.compute.ComputeTaskFuture; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -34,6 +35,7 @@ * Cancels given tasks sessions. 
*/ @GridInternal +@GridVisorManagementTask public class VisorComputeCancelSessionsTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeResetMetricsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeResetMetricsTask.java index eefeb746240d8..ac3e18b4a0d93 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeResetMetricsTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/compute/VisorComputeResetMetricsTask.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.visor.compute; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -26,6 +27,7 @@ * Reset compute grid metrics task. 
*/ @GridInternal +@GridVisorManagementTask public class VisorComputeResetMetricsTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadDumpTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadDumpTask.java index 9c6aa603a10b5..f064028abcf29 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadDumpTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadDumpTask.java @@ -20,6 +20,7 @@ import java.lang.management.ThreadInfo; import java.lang.management.ThreadMXBean; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorJob; @@ -29,6 +30,7 @@ * Creates thread dump. 
*/ @GridInternal +@GridVisorManagementTask public class VisorThreadDumpTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadMonitorInfo.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadMonitorInfo.java index f8ddc68a31e3a..189b8e3cdfba4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadMonitorInfo.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/debug/VisorThreadMonitorInfo.java @@ -71,11 +71,6 @@ public StackTraceElement getStackFrame() { return stackFrame; } - /** {@inheritDoc} */ - @Override public byte getProtocolVersion() { - return 1; - } - /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { try (VisorDataTransferObjectOutput dtout = new VisorDataTransferObjectOutput(out)) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDeploymentEvent.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDeploymentEvent.java index 8b0c2110368b9..e74b54c896c34 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDeploymentEvent.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDeploymentEvent.java @@ -79,11 +79,6 @@ public String getAlias() { return alias; } - /** {@inheritDoc} */ - @Override public byte getProtocolVersion() { - return 1; - } - /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { try (VisorDataTransferObjectOutput dtout = new VisorDataTransferObjectOutput(out)) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDiscoveryEvent.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDiscoveryEvent.java index ec2220d1fd7ba..c5f1d30c53d92 
100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDiscoveryEvent.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridDiscoveryEvent.java @@ -119,11 +119,6 @@ public long getTopologyVersion() { return topVer; } - /** {@inheritDoc} */ - @Override public byte getProtocolVersion() { - return 1; - } - /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { try (VisorDataTransferObjectOutput dtout = new VisorDataTransferObjectOutput(out)) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridJobEvent.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridJobEvent.java index 734d85fa11cb7..2a7ea1862e396 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridJobEvent.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridJobEvent.java @@ -118,11 +118,6 @@ public IgniteUuid getJobId() { return jobId; } - /** {@inheritDoc} */ - @Override public byte getProtocolVersion() { - return 1; - } - /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { try (VisorDataTransferObjectOutput dtout = new VisorDataTransferObjectOutput(out)) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridTaskEvent.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridTaskEvent.java index 11c9a17d38dd5..a0836c481356f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridTaskEvent.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/event/VisorGridTaskEvent.java @@ -118,11 +118,6 @@ public boolean isInternal() { return internal; } - /** {@inheritDoc} */ - @Override public byte getProtocolVersion() { - return 1; - } - /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) 
throws IOException { try (VisorDataTransferObjectOutput dtout = new VisorDataTransferObjectOutput(out)) { diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsFormatTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsFormatTask.java index e4685b73bcf6d..340abea3b0505 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsFormatTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsFormatTask.java @@ -19,6 +19,7 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -27,6 +28,7 @@ * Format IGFS instance. */ @GridInternal +@GridVisorManagementTask public class VisorIgfsFormatTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsProfilerClearTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsProfilerClearTask.java index c730840ab621b..dc1a30ada5060 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsProfilerClearTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsProfilerClearTask.java @@ -28,6 +28,7 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteFileSystem; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorJob; @@ -39,6 +40,7 @@ * Remove 
all IGFS profiler logs. */ @GridInternal +@GridVisorManagementTask public class VisorIgfsProfilerClearTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsResetMetricsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsResetMetricsTask.java index ee90edf3bc78e..b1eaddad9d683 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsResetMetricsTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/igfs/VisorIgfsResetMetricsTask.java @@ -19,6 +19,7 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -27,6 +28,7 @@ * Resets IGFS metrics. 
*/ @GridInternal +@GridVisorManagementTask public class VisorIgfsResetMetricsTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/log/VisorLogSearchTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/log/VisorLogSearchTask.java index e4d2882568d1a..04ee29cda70bc 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/log/VisorLogSearchTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/log/VisorLogSearchTask.java @@ -30,6 +30,7 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.io.GridReversedLinesFileReader; import org.apache.ignite.internal.util.lang.GridTuple3; import org.apache.ignite.internal.util.typedef.internal.S; @@ -48,6 +49,7 @@ * Search text matching in logs */ @GridInternal +@GridVisorManagementTask public class VisorLogSearchTask extends VisorMultiNodeTask> { /** */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/misc/VisorChangeGridActiveStateTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/misc/VisorChangeGridActiveStateTask.java index bde4d6ccfc806..86aee0343202b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/misc/VisorChangeGridActiveStateTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/misc/VisorChangeGridActiveStateTask.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.visor.misc; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import 
org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -26,6 +27,7 @@ * Task for changing grid active state. */ @GridInternal +@GridVisorManagementTask public class VisorChangeGridActiveStateTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorJobResult.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorJobResult.java new file mode 100644 index 0000000000000..5bd818d424848 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorJobResult.java @@ -0,0 +1,91 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.visor.node; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import org.apache.ignite.internal.dto.IgniteDataTransferObject; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; + +/** + * Result object for cache rebalance job. 
+ */ +public class VisorCacheRebalanceCollectorJobResult extends IgniteDataTransferObject { + /** */ + private static final long serialVersionUID = 0L; + + /** Rebalance percent. */ + private double rebalance; + + /** Node baseline state. */ + private VisorNodeBaselineStatus baseline; + + /** + * Default constructor. + */ + public VisorCacheRebalanceCollectorJobResult() { + // No-op. + } + + /** + * @return Rebalance progress. + */ + public double getRebalance() { + return rebalance; + } + + /** + * @param rebalance Rebalance progress. + */ + public void setRebalance(double rebalance) { + this.rebalance = rebalance; + } + + /** + * @return Node baseline status. + */ + public VisorNodeBaselineStatus getBaseline() { + return baseline; + } + + /** + * @param baseline Node baseline status. + */ + public void setBaseline(VisorNodeBaselineStatus baseline) { + this.baseline = baseline; + } + + /** {@inheritDoc} */ + @Override protected void writeExternalData(ObjectOutput out) throws IOException { + out.writeDouble(rebalance); + U.writeEnum(out, baseline); + } + + /** {@inheritDoc} */ + @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { + rebalance = in.readDouble(); + baseline = VisorNodeBaselineStatus.fromOrdinal(in.readByte()); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(VisorCacheRebalanceCollectorJobResult.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTask.java new file mode 100644 index 0000000000000..9d0e04144ea58 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTask.java @@ -0,0 +1,194 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.visor.node; + +import java.util.Collection; +import java.util.List; +import org.apache.ignite.cache.CacheMetrics; +import org.apache.ignite.cluster.BaselineNode; +import org.apache.ignite.compute.ComputeJobResult; +import org.apache.ignite.internal.cluster.IgniteClusterEx; +import org.apache.ignite.internal.processors.cache.CacheGroupContext; +import org.apache.ignite.internal.processors.cache.GridCacheAdapter; +import org.apache.ignite.internal.processors.cache.GridCacheProcessor; +import org.apache.ignite.internal.processors.cache.GridCacheUtils; +import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.visor.VisorJob; +import org.apache.ignite.internal.visor.VisorMultiNodeTask; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.internal.visor.node.VisorNodeBaselineStatus.BASELINE_NOT_AVAILABLE; +import static org.apache.ignite.internal.visor.node.VisorNodeBaselineStatus.NODE_IN_BASELINE; +import static org.apache.ignite.internal.visor.node.VisorNodeBaselineStatus.NODE_NOT_IN_BASELINE; +import static 
org.apache.ignite.internal.visor.util.VisorTaskUtils.MINIMAL_REBALANCE; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.NOTHING_TO_REBALANCE; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.REBALANCE_COMPLETE; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.REBALANCE_NOT_AVAILABLE; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.isProxyCache; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.isRestartingCache; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.log; + +/** + * Collects topology rebalance metrics. + */ +@GridInternal +public class VisorCacheRebalanceCollectorTask extends VisorMultiNodeTask { + /** */ + private static final long serialVersionUID = 0L; + + /** {@inheritDoc} */ + @Override protected VisorCacheRebalanceCollectorJob job(VisorCacheRebalanceCollectorTaskArg arg) { + return new VisorCacheRebalanceCollectorJob(arg, debug); + } + + /** {@inheritDoc} */ + @Nullable @Override protected VisorCacheRebalanceCollectorTaskResult reduce0(List results) { + return reduce(new VisorCacheRebalanceCollectorTaskResult(), results); + } + + /** + * @param taskRes Task result. + * @param results Results. + * @return Topology rebalance metrics collector task result. + */ + protected VisorCacheRebalanceCollectorTaskResult reduce( + VisorCacheRebalanceCollectorTaskResult taskRes, + List results + ) { + for (ComputeJobResult res : results) { + VisorCacheRebalanceCollectorJobResult jobRes = res.getData(); + + if (jobRes != null) { + if (res.getException() == null) + taskRes.getRebalance().put(res.getNode().id(), jobRes.getRebalance()); + + taskRes.getBaseline().put(res.getNode().id(), jobRes.getBaseline()); + } + } + + return taskRes; + } + + /** + * Job that collects rebalance metrics. 
+ */ + private static class VisorCacheRebalanceCollectorJob extends VisorJob { + /** */ + private static final long serialVersionUID = 0L; + + /** + * Create job with given argument. + * + * @param arg Job argument. + * @param debug Debug flag. + */ + private VisorCacheRebalanceCollectorJob(VisorCacheRebalanceCollectorTaskArg arg, boolean debug) { + super(arg, debug); + } + + /** {@inheritDoc} */ + @Override protected VisorCacheRebalanceCollectorJobResult run(VisorCacheRebalanceCollectorTaskArg arg) { + VisorCacheRebalanceCollectorJobResult res = new VisorCacheRebalanceCollectorJobResult(); + + long start0 = U.currentTimeMillis(); + + try { + int partitions = 0; + double total = 0; + double ready = 0; + + GridCacheProcessor cacheProc = ignite.context().cache(); + + boolean rebalanceInProgress = false; + + for (CacheGroupContext grp: cacheProc.cacheGroups()) { + String cacheName = grp.config().getName(); + + if (isProxyCache(ignite, cacheName) || isRestartingCache(ignite, cacheName)) + continue; + + try { + GridCacheAdapter ca = cacheProc.internalCache(cacheName); + + if (ca == null || !ca.context().started()) + continue; + + CacheMetrics cm = ca.localMetrics(); + + partitions += cm.getTotalPartitionsCount(); + + long keysTotal = cm.getEstimatedRebalancingKeys(); + long keysReady = cm.getRebalancedKeys(); + + if (keysReady >= keysTotal) + keysReady = Math.max(keysTotal - 1, 0); + + total += keysTotal; + ready += keysReady; + + if (cm.getRebalancingPartitionsCount() > 0) + rebalanceInProgress = true; + } + catch(IllegalStateException | IllegalArgumentException e) { + if (debug && ignite.log() != null) + ignite.log().error("Ignored cache group: " + grp.cacheOrGroupName(), e); + } + } + + if (partitions == 0) + res.setRebalance(NOTHING_TO_REBALANCE); + else if (total == 0 && rebalanceInProgress) + res.setRebalance(MINIMAL_REBALANCE); + else + res.setRebalance(total > 0 && rebalanceInProgress ? 
Math.max(ready / total, MINIMAL_REBALANCE) : REBALANCE_COMPLETE); + } + catch (Exception e) { + res.setRebalance(REBALANCE_NOT_AVAILABLE); + + ignite.log().error("Failed to collect rebalance metrics", e); + } + + if (GridCacheUtils.isPersistenceEnabled(ignite.configuration())) { + IgniteClusterEx cluster = ignite.cluster(); + + Object consistentId = ignite.localNode().consistentId(); + + Collection baseline = cluster.currentBaselineTopology(); + + boolean inBaseline = baseline.stream().anyMatch(n -> consistentId.equals(n.consistentId())); + + res.setBaseline(inBaseline ? NODE_IN_BASELINE : NODE_NOT_IN_BASELINE); + } + else + res.setBaseline(BASELINE_NOT_AVAILABLE); + + if (debug) + log(ignite.log(), "Collected rebalance metrics", getClass(), start0); + + return res; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(VisorCacheRebalanceCollectorJob.class, this); + } + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTaskArg.java new file mode 100644 index 0000000000000..d97fd50192572 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTaskArg.java @@ -0,0 +1,54 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.visor.node; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.visor.VisorDataTransferObject; + +/** + * Argument for {@link VisorCacheRebalanceCollectorTask} task. + */ +public class VisorCacheRebalanceCollectorTaskArg extends VisorDataTransferObject { + /** */ + private static final long serialVersionUID = 0L; + + /** + * Default constructor. + */ + public VisorCacheRebalanceCollectorTaskArg() { + // No-op. + } + + /** {@inheritDoc} */ + @Override protected void writeExternalData(ObjectOutput out) throws IOException { + // No-op. + } + + /** {@inheritDoc} */ + @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { + // No-op. + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(VisorCacheRebalanceCollectorTaskArg.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTaskResult.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTaskResult.java new file mode 100644 index 0000000000000..1305cd2df87eb --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorCacheRebalanceCollectorTaskResult.java @@ -0,0 +1,92 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.visor.node; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import java.util.HashMap; +import java.util.Map; +import java.util.UUID; +import org.apache.ignite.internal.dto.IgniteDataTransferObject; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; + +/** + * Result object for {@link VisorCacheRebalanceCollectorTask} task. + */ +public class VisorCacheRebalanceCollectorTaskResult extends IgniteDataTransferObject { + /** */ + private static final long serialVersionUID = 0L; + + /** Rebalance state on nodes. */ + private Map rebalance = new HashMap<>(); + + /** Nodes baseline status. */ + private Map baseline = new HashMap<>(); + + /** + * Default constructor. + */ + public VisorCacheRebalanceCollectorTaskResult() { + // No-op. + } + + /** + * @return Rebalance on nodes. + */ + public Map getRebalance() { + return rebalance; + } + + /** + * @return Baseline. + */ + public Map getBaseline() { + return baseline; + } + + /** + * Add specified results. + * + * @param res Results to add. 
+ */ + public void add(VisorCacheRebalanceCollectorTaskResult res) { + assert res != null; + + rebalance.putAll(res.getRebalance()); + baseline.putAll(res.getBaseline()); + } + + /** {@inheritDoc} */ + @Override protected void writeExternalData(ObjectOutput out) throws IOException { + U.writeMap(out, rebalance); + U.writeMap(out, baseline); + } + + /** {@inheritDoc} */ + @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { + rebalance = U.readMap(in); + baseline = U.readMap(in); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(VisorCacheRebalanceCollectorTaskResult.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorGridConfiguration.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorGridConfiguration.java index 85849a56f907c..a9144abc5f45a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorGridConfiguration.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorGridConfiguration.java @@ -126,6 +126,9 @@ public class VisorGridConfiguration extends VisorDataTransferObject { /** Client connector configuration */ private VisorClientConnectorConfiguration clnConnCfg; + /** MVCC configuration. */ + private VisorMvccConfiguration mvccCfg; + /** * Default constructor. */ @@ -192,6 +195,8 @@ public VisorGridConfiguration(IgniteEx ignite) { if (dsCfg != null) dataStorage = new VisorDataStorageConfiguration(dsCfg); + + mvccCfg = new VisorMvccConfiguration(c); } /** @@ -383,9 +388,16 @@ public VisorDataStorageConfiguration getDataStorageConfiguration() { return dataStorage; } + /** + * @return MVCC configuration. 
+ */ + public VisorMvccConfiguration getMvccConfiguration() { + return mvccCfg; + } + /** {@inheritDoc} */ @Override public byte getProtocolVersion() { - return V3; + return V4; } /** {@inheritDoc} */ @@ -417,6 +429,7 @@ public VisorDataStorageConfiguration getDataStorageConfiguration() { U.writeCollection(out, srvcCfgs); out.writeObject(dataStorage); out.writeObject(clnConnCfg); + out.writeObject(mvccCfg); } /** {@inheritDoc} */ @@ -447,11 +460,14 @@ public VisorDataStorageConfiguration getDataStorageConfiguration() { sqlConnCfg = (VisorSqlConnectorConfiguration) in.readObject(); srvcCfgs = U.readList(in); - if (protoVer >= V2) + if (protoVer > V1) dataStorage = (VisorDataStorageConfiguration)in.readObject(); - if (protoVer >= V3) + if (protoVer > V2) clnConnCfg = (VisorClientConnectorConfiguration)in.readObject(); + + if (protoVer > V3) + mvccCfg = (VisorMvccConfiguration)in.readObject(); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorMvccConfiguration.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorMvccConfiguration.java new file mode 100644 index 0000000000000..1bcaa21507818 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorMvccConfiguration.java @@ -0,0 +1,94 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.visor.node; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.visor.VisorDataTransferObject; + +/** + * Data transfer object for MVCC configuration. + */ +public class VisorMvccConfiguration extends VisorDataTransferObject { + /** */ + private static final long serialVersionUID = 0L; + + /** Number of MVCC vacuum cleanup threads. */ + private int mvccVacuumThreadCnt; + + /** Time interval between vacuum runs. */ + private long mvccVacuumFreq; + + /** + * Default constructor. + */ + public VisorMvccConfiguration() { + // No-op. + } + + /** + * Constructor. + * + * @param cfg Ignite configuration. + */ + public VisorMvccConfiguration(IgniteConfiguration cfg) { + assert cfg != null; + + mvccVacuumThreadCnt = cfg.getMvccVacuumThreadCount(); + mvccVacuumFreq = cfg.getMvccVacuumFrequency(); + } + + /** + * @return Number of MVCC vacuum threads. + */ + public int getMvccVacuumThreadCount() { + return mvccVacuumThreadCnt; + } + + /** + * @return Time interval between MVCC vacuum runs in milliseconds. 
+ */ + public long getMvccVacuumFrequency() { + return mvccVacuumFreq; + } + + /** {@inheritDoc} */ + @Override public byte getProtocolVersion() { + return V1; + } + + /** {@inheritDoc} */ + @Override protected void writeExternalData(ObjectOutput out) throws IOException { + out.writeInt(mvccVacuumThreadCnt); + out.writeLong(mvccVacuumFreq); + } + + /** {@inheritDoc} */ + @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { + mvccVacuumThreadCnt = in.readInt(); + mvccVacuumFreq = in.readLong(); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(VisorMvccConfiguration.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeBaselineStatus.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeBaselineStatus.java new file mode 100644 index 0000000000000..ea90be3a6ef2c --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeBaselineStatus.java @@ -0,0 +1,45 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.visor.node; + +import org.jetbrains.annotations.Nullable; + +/** + * Node baseline status. + */ +public enum VisorNodeBaselineStatus { + /** */ + NODE_IN_BASELINE, + /** */ + NODE_NOT_IN_BASELINE, + /** */ + BASELINE_NOT_AVAILABLE; + + /** Enumerated values. */ + private static final VisorNodeBaselineStatus[] VALS = values(); + + /** + * Efficiently gets enumerated value from its ordinal. + * + * @param ord Ordinal value. + * @return Enumerated value or {@code null} if ordinal out of range. + */ + @Nullable public static VisorNodeBaselineStatus fromOrdinal(int ord) { + return ord >= 0 && ord < VALS.length ? VALS[ord] : null; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorJob.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorJob.java index 5fab8d16d5e94..3a115dffd5a1f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorJob.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorJob.java @@ -28,8 +28,9 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.FileSystemConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; +import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; +import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager; import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.processors.igfs.IgfsProcessorAdapter; @@ -50,9 +51,15 @@ import static org.apache.ignite.internal.processors.cache.GridCacheUtils.isSystemCache; import static 
org.apache.ignite.internal.visor.compute.VisorComputeMonitoringHolder.COMPUTE_MONITORING_HOLDER_KEY; import static org.apache.ignite.internal.visor.util.VisorTaskUtils.EVT_MAPPER; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.MINIMAL_REBALANCE; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.NOTHING_TO_REBALANCE; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.REBALANCE_COMPLETE; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.REBALANCE_NOT_AVAILABLE; import static org.apache.ignite.internal.visor.util.VisorTaskUtils.VISOR_TASK_EVTS; import static org.apache.ignite.internal.visor.util.VisorTaskUtils.checkExplicitTaskMonitoring; import static org.apache.ignite.internal.visor.util.VisorTaskUtils.collectEvents; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.isProxyCache; +import static org.apache.ignite.internal.visor.util.VisorTaskUtils.isRestartingCache; import static org.apache.ignite.internal.visor.util.VisorTaskUtils.log; /** @@ -140,18 +147,6 @@ protected boolean compatibleWith(IgniteProductVersion ver) { return false; } - /** - * @param cacheName Cache name to check. - * @return {@code true} if cache on local node is not a data cache or near cache disabled. - */ - private boolean proxyCache(String cacheName) { - GridDiscoveryManager discovery = ignite.context().discovery(); - - ClusterNode locNode = ignite.localNode(); - - return !(discovery.cacheAffinityNode(locNode, cacheName) || discovery.cacheNearNode(locNode, cacheName)); - } - /** * Collect memory metrics. 
* @@ -194,38 +189,51 @@ protected void caches(VisorNodeDataCollectorJobResult res, VisorNodeDataCollecto List resCaches = res.getCaches(); - for (String cacheName : cacheProc.cacheNames()) { - if (proxyCache(cacheName)) - continue; + boolean rebalanceInProgress = false; - boolean sysCache = isSystemCache(cacheName); + for (CacheGroupContext grp : cacheProc.cacheGroups()) { + boolean first = true; - if (arg.getSystemCaches() || !(sysCache || isIgfsCache(cfg, cacheName))) { + for (GridCacheContext cache : grp.caches()) { long start0 = U.currentTimeMillis(); + String cacheName = cache.name(); + try { + if (isProxyCache(ignite, cacheName) || isRestartingCache(ignite, cacheName)) + continue; + GridCacheAdapter ca = cacheProc.internalCache(cacheName); if (ca == null || !ca.context().started()) continue; - CacheMetrics cm = ca.localMetrics(); + if (first) { + CacheMetrics cm = ca.localMetrics(); + + partitions += cm.getTotalPartitionsCount(); + + long keysTotal = cm.getEstimatedRebalancingKeys(); + long keysReady = cm.getRebalancedKeys(); + + if (keysReady >= keysTotal) + keysReady = Math.max(keysTotal - 1, 0); - partitions += cm.getTotalPartitionsCount(); + total += keysTotal; + ready += keysReady; - long partTotal = cm.getEstimatedRebalancingKeys(); - long partReady = cm.getRebalancedKeys(); + if (!rebalanceInProgress && cm.getRebalancingPartitionsCount() > 0) + rebalanceInProgress = true; - if (partReady >= partTotal) - partReady = Math.max(partTotal - 1, 0); + first = false; + } - total += partTotal; - ready += partReady; + boolean addToRes = arg.getSystemCaches() || !(isSystemCache(cacheName) || isIgfsCache(cfg, cacheName)); - if (all || cacheGrps.contains(ca.configuration().getGroupName())) + if (addToRes && (all || cacheGrps.contains(ca.configuration().getGroupName()))) resCaches.add(new VisorCache(ignite, ca, arg.isCollectCacheMetrics())); } - catch(IllegalStateException | IllegalArgumentException e) { + catch (IllegalStateException | IllegalArgumentException e) { 
if (debug && ignite.log() != null) ignite.log().error("Ignored cache: " + cacheName, e); } @@ -237,11 +245,16 @@ protected void caches(VisorNodeDataCollectorJobResult res, VisorNodeDataCollecto } if (partitions == 0) - res.setRebalance(-1); + res.setRebalance(NOTHING_TO_REBALANCE); + else if (total == 0 && rebalanceInProgress) + res.setRebalance(MINIMAL_REBALANCE); else - res.setRebalance(total > 0 ? ready / total : 1); + res.setRebalance(total > 0 && rebalanceInProgress + ? Math.max(ready / total, MINIMAL_REBALANCE) + : REBALANCE_COMPLETE); } catch (Exception e) { + res.setRebalance(REBALANCE_NOT_AVAILABLE); res.setCachesEx(new VisorExceptionWrapper(e)); } } @@ -260,7 +273,8 @@ protected void igfs(VisorNodeDataCollectorJobResult res) { FileSystemConfiguration igfsCfg = igfs.configuration(); - if (proxyCache(igfsCfg.getDataCacheConfiguration().getName()) || proxyCache(igfsCfg.getMetaCacheConfiguration().getName())) + if (isProxyCache(ignite, igfsCfg.getDataCacheConfiguration().getName()) || + isProxyCache(ignite, igfsCfg.getMetaCacheConfiguration().getName())) continue; try { @@ -335,7 +349,8 @@ protected VisorNodeDataCollectorJobResult run(VisorNodeDataCollectorJobResult re if (debug) start0 = log(ignite.log(), "Collected memory metrics", getClass(), start0); - caches(res, arg); + if (ignite.cluster().active()) + caches(res, arg); if (debug) start0 = log(ignite.log(), "Collected caches", getClass(), start0); diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorTaskResult.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorTaskResult.java index eb161f82c6638..f8eb8690ce2ea 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorTaskResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeDataCollectorTaskResult.java @@ -131,7 +131,8 @@ public boolean isEmpty() { readyTopVers.isEmpty() && 
pendingExchanges.isEmpty() && persistenceMetrics.isEmpty() && - persistenceMetricsEx.isEmpty(); + persistenceMetricsEx.isEmpty() && + rebalance.isEmpty(); } /** diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeGcTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeGcTask.java index bdc1104cc5dcf..1e0ca4717d4f7 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeGcTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeGcTask.java @@ -25,6 +25,7 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorMultiNodeTask; @@ -34,6 +35,7 @@ * Task to run gc on nodes. 
*/ @GridInternal +@GridVisorManagementTask public class VisorNodeGcTask extends VisorMultiNodeTask, VisorNodeGcTaskResult> { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodePingTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodePingTask.java index ee2e968ffb11f..656d12aee3d52 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodePingTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodePingTask.java @@ -21,6 +21,7 @@ import org.apache.ignite.cluster.ClusterTopologyException; import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -30,6 +31,7 @@ * Ping other node. 
*/ @GridInternal +@GridVisorManagementTask public class VisorNodePingTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeRestartTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeRestartTask.java index bb293710e8051..693665a672451 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeRestartTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeRestartTask.java @@ -21,6 +21,7 @@ import org.apache.ignite.Ignition; import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorMultiNodeTask; @@ -30,6 +31,7 @@ * Restarts nodes. 
*/ @GridInternal +@GridVisorManagementTask public class VisorNodeRestartTask extends VisorMultiNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeStopTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeStopTask.java index 16bf5d6288cd0..6581e44e352c3 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeStopTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/node/VisorNodeStopTask.java @@ -21,6 +21,7 @@ import org.apache.ignite.Ignition; import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorMultiNodeTask; @@ -30,6 +31,7 @@ * Stops nodes. 
*/ @GridInternal +@GridVisorManagementTask public class VisorNodeStopTask extends VisorMultiNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryCancelTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryCancelTask.java index 207b690bc9bd5..32dc93cabe3d4 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryCancelTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryCancelTask.java @@ -22,6 +22,7 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; import org.jetbrains.annotations.Nullable; @@ -30,6 +31,7 @@ * Task to cancel queries. 
*/ @GridInternal +@GridVisorManagementTask public class VisorQueryCancelTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryResetMetricsTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryResetMetricsTask.java index 1d807f1aeb6e5..a339f8d790474 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryResetMetricsTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryResetMetricsTask.java @@ -19,6 +19,7 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -27,6 +28,7 @@ * Reset compute grid query metrics. 
*/ @GridInternal +@GridVisorManagementTask public class VisorQueryResetMetricsTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryTask.java index b5af1b04b5151..97ee83e128f7a 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/query/VisorQueryTask.java @@ -28,6 +28,7 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.processors.query.GridQueryFieldMetadata; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; @@ -44,6 +45,7 @@ * Task for execute SQL fields query and get first page of results. 
*/ @GridInternal +@GridVisorManagementTask public class VisorQueryTask extends VisorOneNodeTask> { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/service/VisorCancelServiceTask.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/service/VisorCancelServiceTask.java index 53c3bb3fa8914..1bb7ecde4e39f 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/service/VisorCancelServiceTask.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/service/VisorCancelServiceTask.java @@ -19,6 +19,7 @@ import org.apache.ignite.IgniteServices; import org.apache.ignite.internal.processors.task.GridInternal; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorOneNodeTask; @@ -27,6 +28,7 @@ * Task for cancel services with specified name. */ @GridInternal +@GridVisorManagementTask public class VisorCancelServiceTask extends VisorOneNodeTask { /** */ private static final long serialVersionUID = 0L; diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/tx/VisorTxSortOrder.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/tx/VisorTxSortOrder.java index 9a18882c24958..1b4cdff7f0e2b 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/tx/VisorTxSortOrder.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/tx/VisorTxSortOrder.java @@ -17,7 +17,6 @@ package org.apache.ignite.internal.visor.tx; -import org.apache.ignite.internal.commandline.CommandHandler; import org.jetbrains.annotations.Nullable; /** @@ -42,20 +41,4 @@ public enum VisorTxSortOrder { @Nullable public static VisorTxSortOrder fromOrdinal(int ord) { return ord >= 0 && ord < VALS.length ? VALS[ord] : null; } - - /** - * @param name Name. 
- */ - public static VisorTxSortOrder fromString(String name) { - if (DURATION.toString().equals(name)) - return DURATION; - - if (SIZE.toString().equals(name)) - return SIZE; - - if (CommandHandler.CMD_TX_ORDER_START_TIME.equals(name)) - return START_TIME; - - throw new IllegalArgumentException("Sort order is unknown: " + name); - } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/util/VisorTaskUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/util/VisorTaskUtils.java index fda9ba199bc0f..7ab1ffcb18b84 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/util/VisorTaskUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/util/VisorTaskUtils.java @@ -58,6 +58,10 @@ import org.apache.ignite.cache.eviction.AbstractEvictionPolicyFactory; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.events.Event; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; +import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; +import org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl; import org.apache.ignite.internal.processors.igfs.IgfsEx; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; @@ -109,6 +113,19 @@ public class VisorTaskUtils { /** Log files count limit */ public static final int LOG_FILES_COUNT_LIMIT = 5000; + /** */ + public static final int NOTHING_TO_REBALANCE = -1; + + /** */ + public static final int REBALANCE_NOT_AVAILABLE = -2; + + /** */ + public static final double MINIMAL_REBALANCE = 0.01; + + /** */ + public static final int REBALANCE_COMPLETE = 1; + + /** */ private static final int DFLT_BUFFER_SIZE = 4096; @@ -1248,4 +1265,30 @@ public static Collection splitAddresses(String s) { return Arrays.asList(addrs); } + + /** + * @param ignite Ignite. + * @param cacheName Cache name to check. 
+ * @return {@code true} if the local node is neither an affinity node nor a near cache node for the given cache. + */ + public static boolean isProxyCache(IgniteEx ignite, String cacheName) { + GridDiscoveryManager discovery = ignite.context().discovery(); + + ClusterNode locNode = ignite.localNode(); + + return !(discovery.cacheAffinityNode(locNode, cacheName) || discovery.cacheNearNode(locNode, cacheName)); + } + + /** + * Check whether a cache restart is in progress. + * + * @param ignite Grid. + * @param cacheName Cache name to check. + * @return {@code true} when a cache restart is in progress. + */ + public static boolean isRestartingCache(IgniteEx ignite, String cacheName) { + IgniteCacheProxy proxy = ignite.context().cache().jcache(cacheName); + + return proxy instanceof IgniteCacheProxyImpl && ((IgniteCacheProxyImpl) proxy).isRestarting(); + } } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/CacheFilterEnum.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/CacheFilterEnum.java new file mode 100644 index 0000000000000..4e87d5068eaa5 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/CacheFilterEnum.java @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.visor.verify; + +import org.jetbrains.annotations.Nullable; + +/** + * Represents a type of cache(s) that can be used for comparing update counters and checksums between primary and backup partitions. + *
+ * @see org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2 + */ +public enum CacheFilterEnum { + /** All. */ + ALL, + + /** System. */ + SYSTEM, + + /** Persistent. */ + PERSISTENT, + + /** Not persistent. */ + NOT_PERSISTENT; + + /** Enumerated values. */ + private static final CacheFilterEnum[] VALS = values(); + + /** + * Efficiently gets enumerated value from its ordinal. + * + * @param ord Ordinal value. + * @return Enumerated value or {@code null} if ordinal out of range. + */ + public static @Nullable CacheFilterEnum fromOrdinal(int ord) { + return ord >= 0 && ord < VALS.length ? VALS[ord] : null; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/IndexIntegrityCheckIssue.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/IndexIntegrityCheckIssue.java new file mode 100644 index 0000000000000..ec6e5b2440765 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/IndexIntegrityCheckIssue.java @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.visor.verify; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import org.apache.ignite.internal.util.tostring.GridToStringExclude; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.visor.VisorDataTransferObject; + +/** + * + */ +public class IndexIntegrityCheckIssue extends VisorDataTransferObject { + /** */ + private static final long serialVersionUID = 0L; + + /** Cache group name. */ + private String grpName; + + /** Data integrity check error. */ + @GridToStringExclude + private Throwable t; + + /** + * + */ + public IndexIntegrityCheckIssue() { + // Default constructor required for Externalizable. + } + + /** + * @param grpName Group name. + * @param t Data integrity check error. + */ + public IndexIntegrityCheckIssue(String grpName, Throwable t) { + this.grpName = grpName; + this.t = t; + } + + /** {@inheritDoc} */ + @Override protected void writeExternalData(ObjectOutput out) throws IOException { + U.writeString(out, this.grpName); + out.writeObject(t); + } + + /** {@inheritDoc} */ + @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { + this.grpName = U.readString(in); + this.t = (Throwable)in.readObject(); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(IndexIntegrityCheckIssue.class, this) + ", " + t.getClass() + ": " + t.getMessage(); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyDumpTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyDumpTaskArg.java index 6316c2443a73d..3c836a40cdef6 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyDumpTaskArg.java +++ 
b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyDumpTaskArg.java @@ -22,6 +22,7 @@ import java.io.ObjectOutput; import java.util.Set; import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; /** * Arguments for {@link VisorIdleVerifyDumpTask}. @@ -29,9 +30,13 @@ public class VisorIdleVerifyDumpTaskArg extends VisorIdleVerifyTaskArg { /** */ private static final long serialVersionUID = 0L; + /** */ private boolean skipZeros; + /** Cache kind. */ + private CacheFilterEnum cacheFilterEnum; + /** * Default constructor. */ @@ -40,11 +45,14 @@ public VisorIdleVerifyDumpTaskArg() { /** * @param caches Caches. + * @param excludeCaches Caches to exclude. * @param skipZeros Skip zeros partitions. + * @param cacheFilterEnum Cache kind. */ - public VisorIdleVerifyDumpTaskArg(Set caches, boolean skipZeros) { - super(caches); + public VisorIdleVerifyDumpTaskArg(Set caches, Set excludeCaches, boolean skipZeros, CacheFilterEnum cacheFilterEnum) { + super(caches, excludeCaches); this.skipZeros = skipZeros; + this.cacheFilterEnum = cacheFilterEnum; } /** @@ -54,16 +62,37 @@ public boolean isSkipZeros() { return skipZeros; } + /** + * @return Kind fo cache. 
+ */ + public CacheFilterEnum getCacheFilterEnum() { + return cacheFilterEnum; + } + /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { super.writeExternalData(out); + out.writeBoolean(skipZeros); + + U.writeEnum(out, cacheFilterEnum); } /** {@inheritDoc} */ @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { super.readExternalData(protoVer, in); + skipZeros = in.readBoolean(); + + if (protoVer >= V2) + cacheFilterEnum = CacheFilterEnum.fromOrdinal(in.readByte()); + else + cacheFilterEnum = CacheFilterEnum.ALL; + } + + /** {@inheritDoc} */ + @Override public byte getProtocolVersion() { + return V2; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyTaskArg.java index c82af5878ebb6..d645fec16f7b9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyTaskArg.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorIdleVerifyTaskArg.java @@ -35,6 +35,9 @@ public class VisorIdleVerifyTaskArg extends VisorDataTransferObject { /** Caches. */ private Set caches; + /** Exclude caches or groups. */ + private Set excludeCaches; + /** * Default constructor. */ @@ -42,11 +45,21 @@ public VisorIdleVerifyTaskArg() { // No-op. } + /** + * @param caches Caches. + * @param excludeCaches Exclude caches or group. + */ + public VisorIdleVerifyTaskArg(Set caches, Set excludeCaches) { + this.caches = caches; + this.excludeCaches = excludeCaches; + } + /** * @param caches Caches. */ public VisorIdleVerifyTaskArg(Set caches) { this.caches = caches; + this.excludeCaches = excludeCaches; } @@ -57,14 +70,30 @@ public Set getCaches() { return caches; } + /** + * @return Exclude caches or groups. 
+ */ + public Set excludeCaches() { + return excludeCaches; + } + + /** {@inheritDoc} */ + @Override public byte getProtocolVersion() { + return V2; + } + /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { U.writeCollection(out, caches); + U.writeCollection(out, excludeCaches); } /** {@inheritDoc} */ @Override protected void readExternalData(byte protoVer, ObjectInput in) throws IOException, ClassNotFoundException { caches = U.readSet(in); + + if (protoVer >= V2) + excludeCaches = U.readSet(in); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesJobResult.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesJobResult.java index aa74323898bd8..6d7f7652518cd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesJobResult.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesJobResult.java @@ -20,11 +20,14 @@ import java.io.IOException; import java.io.ObjectInput; import java.io.ObjectOutput; +import java.util.Collection; +import java.util.Collections; import java.util.Map; import org.apache.ignite.internal.processors.cache.verify.PartitionKey; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorDataTransferObject; +import org.jetbrains.annotations.NotNull; /** * @@ -39,14 +42,22 @@ public class VisorValidateIndexesJobResult extends VisorDataTransferObject { /** Results of reverse indexes validation from node. */ private Map idxRes; + /** Integrity check issues. */ + private Collection integrityCheckFailures; + /** * @param partRes Results of indexes validation from node. * @param idxRes Results of reverse indexes validation from node. 
+ * @param integrityCheckFailures Collection of index integrity check failures. */ - public VisorValidateIndexesJobResult(Map partRes, - Map idxRes) { + public VisorValidateIndexesJobResult( + @NotNull Map partRes, + @NotNull Map idxRes, + @NotNull Collection integrityCheckFailures + ) { this.partRes = partRes; this.idxRes = idxRes; + this.integrityCheckFailures = integrityCheckFailures; } /** @@ -57,7 +68,7 @@ public VisorValidateIndexesJobResult() { /** {@inheritDoc} */ @Override public byte getProtocolVersion() { - return V2; + return V3; } /** @@ -71,13 +82,30 @@ public Map partitionResult() { * @return Results of reverse indexes validation from node. */ public Map indexResult() { - return idxRes; + return idxRes == null ? Collections.emptyMap() : idxRes; + } + + /** + * @return Collection of failed integrity checks. + */ + public Collection integrityCheckFailures() { + return integrityCheckFailures == null ? Collections.emptyList() : integrityCheckFailures; + } + + /** + * @return {@code true} if any index issues were found on the node, {@code false} otherwise.
+ */ + public boolean hasIssues() { + return (integrityCheckFailures != null && !integrityCheckFailures.isEmpty()) || + (partRes != null && partRes.entrySet().stream().anyMatch(e -> !e.getValue().issues().isEmpty())) || + (idxRes != null && idxRes.entrySet().stream().anyMatch(e -> !e.getValue().issues().isEmpty())); } /** {@inheritDoc} */ @Override protected void writeExternalData(ObjectOutput out) throws IOException { U.writeMap(out, partRes); U.writeMap(out, idxRes); + U.writeCollection(out, integrityCheckFailures); } /** {@inheritDoc} */ @@ -86,6 +114,9 @@ public Map indexResult() { if (protoVer >= V2) idxRes = U.readMap(in); + + if (protoVer >= V3) + integrityCheckFailures = U.readCollection(in); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTaskArg.java index aa49977c7a6fa..6dbd961028e69 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTaskArg.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTaskArg.java @@ -21,12 +21,14 @@ import java.io.ObjectInput; import java.io.ObjectOutput; import java.util.Set; +import java.util.UUID; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorDataTransferObject; + /** - * Arguments for task {@link VisorIdleVerifyTask} + * */ public class VisorValidateIndexesTaskArg extends VisorDataTransferObject { /** */ @@ -41,6 +43,9 @@ public class VisorValidateIndexesTaskArg extends VisorDataTransferObject { /** Check through K element (skip K-1, check Kth). */ private int checkThrough; + /** Nodes on which task will run. */ + private Set nodes; + /** * Default constructor. 
*/ @@ -51,10 +56,11 @@ public VisorValidateIndexesTaskArg() { /** * @param caches Caches. */ - public VisorValidateIndexesTaskArg(Set caches, int checkFirst, int checkThrough) { + public VisorValidateIndexesTaskArg(Set caches, Set nodes, int checkFirst, int checkThrough) { this.caches = caches; this.checkFirst = checkFirst; this.checkThrough = checkThrough; + this.nodes = nodes; } @@ -65,6 +71,13 @@ public Set getCaches() { return caches; } + /** + * @return Nodes on which task will run. If {@code null}, task will run on all server nodes. + */ + public Set getNodes() { + return nodes; + } + /** * @return checkFirst. */ @@ -84,6 +97,7 @@ public int getCheckThrough() { U.writeCollection(out, caches); out.writeInt(checkFirst); out.writeInt(checkThrough); + U.writeCollection(out, nodes); } /** {@inheritDoc} */ @@ -98,11 +112,14 @@ public int getCheckThrough() { checkFirst = -1; checkThrough = -1; } + + if (protoVer > V2) + nodes = U.readSet(in); } /** {@inheritDoc} */ @Override public byte getProtocolVersion() { - return V2; + return V3; } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorViewCacheTaskArg.java b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorViewCacheTaskArg.java index 5fcd66d18d082..6bc2369406ee9 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorViewCacheTaskArg.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/visor/verify/VisorViewCacheTaskArg.java @@ -1,19 +1,19 @@ /* -* Licensed to the Apache Software Foundation (ASF) under one or more -* contributor license agreements. See the NOTICE file distributed with -* this work for additional information regarding copyright ownership. -* The ASF licenses this file to You under the Apache License, Version 2.0 -* (the "License"); you may not use this file except in compliance with -* the License. 
You may obtain a copy of the License at -* -* http://www.apache.org/licenses/LICENSE-2.0 -* -* Unless required by applicable law or agreed to in writing, software -* distributed under the License is distributed on an "AS IS" BASIS, -* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -* See the License for the specific language governing permissions and -* limitations under the License. -*/ + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ package org.apache.ignite.internal.visor.verify; @@ -23,7 +23,6 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.visor.VisorDataTransferObject; -import org.jetbrains.annotations.Nullable; /** * @@ -36,13 +35,13 @@ public class VisorViewCacheTaskArg extends VisorDataTransferObject { private String regex; /** Type. */ - private @Nullable VisorViewCacheCmd cmd; + private VisorViewCacheCmd cmd; /** * @param regex Regex. * @param cmd Command. 
*/ - public VisorViewCacheTaskArg(String regex, @Nullable VisorViewCacheCmd cmd) { + public VisorViewCacheTaskArg(String regex, VisorViewCacheCmd cmd) { this.regex = regex; this.cmd = cmd; } diff --git a/modules/core/src/main/java/org/apache/ignite/internal/worker/FailureHandlingMxBeanImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/worker/FailureHandlingMxBeanImpl.java new file mode 100644 index 0000000000000..61b9afeca74fa --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/internal/worker/FailureHandlingMxBeanImpl.java @@ -0,0 +1,73 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.worker; + +import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; +import org.apache.ignite.mxbean.FailureHandlingMxBean; +import org.jetbrains.annotations.NotNull; + +/** {@inheritDoc} */ +public class FailureHandlingMxBeanImpl implements FailureHandlingMxBean { + /** System worker registry. */ + private final WorkersRegistry workerRegistry; + + /** Database manager. */ + private final IgniteCacheDatabaseSharedManager dbMgr; + + /** + * @param workersRegistry Workers registry. + * @param dbMgr Database manager. 
+ */ + public FailureHandlingMxBeanImpl( + @NotNull WorkersRegistry workersRegistry, + @NotNull IgniteCacheDatabaseSharedManager dbMgr + ) { + this.workerRegistry = workersRegistry; + this.dbMgr = dbMgr; + } + + /** {@inheritDoc} */ + @Override public boolean getLivenessCheckEnabled() { + return workerRegistry.livenessCheckEnabled(); + } + + /** {@inheritDoc} */ + @Override public void setLivenessCheckEnabled(boolean val) { + workerRegistry.livenessCheckEnabled(val); + } + + /** {@inheritDoc} */ + @Override public long getSystemWorkerBlockedTimeout() { + return workerRegistry.getSystemWorkerBlockedTimeout(); + } + + /** {@inheritDoc} */ + @Override public void setSystemWorkerBlockedTimeout(long val) { + workerRegistry.setSystemWorkerBlockedTimeout(val); + } + + /** {@inheritDoc} */ + @Override public long getCheckpointReadLockTimeout() { + return dbMgr.checkpointReadLockTimeout(); + } + + /** {@inheritDoc} */ + @Override public void setCheckpointReadLockTimeout(long val) { + dbMgr.checkpointReadLockTimeout(val); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersControlMXBeanImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersControlMXBeanImpl.java index e6abe6e9d1da9..1f082b53ea4dd 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersControlMXBeanImpl.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersControlMXBeanImpl.java @@ -66,16 +66,6 @@ public WorkersControlMXBeanImpl(WorkersRegistry registry) { return true; } - /** {@inheritDoc} */ - @Override public boolean getHealthMonitoringEnabled() { - return workerRegistry.livenessCheckEnabled(); - } - - /** {@inheritDoc} */ - @Override public void setHealthMonitoringEnabled(boolean val) { - workerRegistry.livenessCheckEnabled(val); - } - /** {@inheritDoc} */ @Override public boolean stopThreadByUniqueName(String name) { Thread[] threads = Thread.getAllStackTraces().keySet().stream() diff --git 
a/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersRegistry.java b/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersRegistry.java index 55740a444d1b2..3d45f3fc5c404 100644 --- a/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersRegistry.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/worker/WorkersRegistry.java @@ -61,8 +61,10 @@ public class WorkersRegistry implements GridWorkerListener { /** */ private final IgniteBiInClosure workerFailedHnd; - /** Worker heartbeat timeout in milliseconds, when exceeded, worker is considered as blocked. */ - private final long heartbeatTimeout; + /** + * Maximum inactivity period for system worker in milliseconds, when exceeded, worker is considered as blocked. + */ + private volatile long sysWorkerBlockedTimeout; /** Time in milliseconds between successive workers checks. */ private final long checkInterval; @@ -72,15 +74,17 @@ public class WorkersRegistry implements GridWorkerListener { /** * @param workerFailedHnd Closure to invoke on worker failure. - * @param heartbeatTimeout Maximum allowed worker heartbeat interval in milliseconds, should be positive. + * @param sysWorkerBlockedTimeout Maximum allowed worker heartbeat interval in milliseconds, non-positive value denotes + * infinite interval. 
*/ public WorkersRegistry( @NotNull IgniteBiInClosure workerFailedHnd, - long heartbeatTimeout, - IgniteLogger log) { + long sysWorkerBlockedTimeout, + IgniteLogger log + ) { this.workerFailedHnd = workerFailedHnd; - this.heartbeatTimeout = heartbeatTimeout; - this.checkInterval = Math.min(DFLT_CHECK_INTERVAL, heartbeatTimeout); + this.sysWorkerBlockedTimeout = U.ensurePositive(sysWorkerBlockedTimeout, Long.MAX_VALUE); + this.checkInterval = Math.min(DFLT_CHECK_INTERVAL, sysWorkerBlockedTimeout); this.log = log; } @@ -127,15 +131,33 @@ public GridWorker worker(String name) { } /** */ - boolean livenessCheckEnabled() { + public boolean livenessCheckEnabled() { return livenessCheckEnabled; } /** */ - void livenessCheckEnabled(boolean val) { + public void livenessCheckEnabled(boolean val) { livenessCheckEnabled = val; } + /** + * Returns maximum inactivity period for system worker. When exceeded, worker is considered as blocked. + * + * @return Maximum inactivity period for system worker in milliseconds. + */ + public long getSystemWorkerBlockedTimeout() { + return sysWorkerBlockedTimeout == Long.MAX_VALUE ? 0 : sysWorkerBlockedTimeout; + } + + /** + * Sets maximum inactivity period for system worker. When exceeded, worker is considered as blocked. + * + * @param val Maximum inactivity period for system worker in milliseconds. 
+ */ + public void setSystemWorkerBlockedTimeout(long val) { + sysWorkerBlockedTimeout = U.ensurePositive(val, Long.MAX_VALUE); + } + /** {@inheritDoc} */ @Override public void onStarted(GridWorker w) { register(w); @@ -143,6 +165,9 @@ void livenessCheckEnabled(boolean val) { /** {@inheritDoc} */ @Override public void onStopped(GridWorker w) { + if (!w.isCancelled()) + workerFailedHnd.apply(w, SYSTEM_WORKER_TERMINATION); + unregister(w.runner().getName()); } @@ -161,7 +186,7 @@ void livenessCheckEnabled(boolean val) { try { lastCheckTs = U.currentTimeMillis(); - long workersToCheck = Math.max(registeredWorkers.size() * checkInterval / heartbeatTimeout, 1); + long workersToCheck = Math.max(registeredWorkers.size() * checkInterval / sysWorkerBlockedTimeout, 1); int workersChecked = 0; @@ -187,7 +212,7 @@ void livenessCheckEnabled(boolean val) { // That is, if worker is dead, but still resides in registeredWorkers // then something went wrong, the only extra thing is to test // whether the iterator refers to actual state of registeredWorkers. 
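The setter and getter above share one convention: non-positive input is normalized to Long.MAX_VALUE internally ("infinite"), and the getter maps that marker back to 0 for JMX consumers. A minimal standalone sketch of that convention; TimeoutHolder and its ensurePositive helper are illustrative stand-ins, assuming U.ensurePositive returns the fallback for non-positive input:

```java
/** Illustrative model of the WorkersRegistry timeout normalization (not Ignite code). */
public class TimeoutHolder {
    /** Internally, "infinite" is represented as Long.MAX_VALUE. */
    private volatile long timeout;

    public TimeoutHolder(long val) {
        timeout = ensurePositive(val, Long.MAX_VALUE);
    }

    /** Returns {@code dflt} when {@code val} is non-positive (assumed U.ensurePositive contract). */
    static long ensurePositive(long val, long dflt) {
        return val <= 0 ? dflt : val;
    }

    /** Setter applies the same normalization as the constructor. */
    public void set(long val) {
        timeout = ensurePositive(val, Long.MAX_VALUE);
    }

    /** Getter maps the internal infinite marker back to 0 for external consumers. */
    public long get() {
        return timeout == Long.MAX_VALUE ? 0 : timeout;
    }
}
```

This keeps the hot path (comparisons against the timeout) branch-free: checks like `delay > timeout` work unchanged, since nothing exceeds Long.MAX_VALUE.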
- GridWorker worker0 = registeredWorkers.get(worker.runner().getName()); + GridWorker worker0 = registeredWorkers.get(runner.getName()); if (worker0 != null && worker0 == worker) workerFailedHnd.apply(worker, SYSTEM_WORKER_TERMINATION); @@ -195,7 +220,7 @@ void livenessCheckEnabled(boolean val) { long heartbeatDelay = U.currentTimeMillis() - worker.heartbeatTs(); - if (heartbeatDelay > heartbeatTimeout) { + if (heartbeatDelay > sysWorkerBlockedTimeout) { GridWorker worker0 = registeredWorkers.get(worker.runner().getName()); if (worker0 != null && worker0 == worker) { diff --git a/modules/core/src/main/java/org/apache/ignite/mxbean/FailureHandlingMxBean.java b/modules/core/src/main/java/org/apache/ignite/mxbean/FailureHandlingMxBean.java new file mode 100644 index 0000000000000..199d7523019ad --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/mxbean/FailureHandlingMxBean.java @@ -0,0 +1,47 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.mxbean; + +/** + * MBean that controls critical failure handling. 
+ */ +@MXBeanDescription("MBean that controls critical failure handling.") +public interface FailureHandlingMxBean { + /** */ + @MXBeanDescription("Enable/disable critical workers liveness checking.") + public boolean getLivenessCheckEnabled(); + + /** */ + public void setLivenessCheckEnabled(boolean val); + + /** */ + @MXBeanDescription("Maximum inactivity period for system worker. Critical failure handler fires if exceeded. " + + "Nonpositive value denotes infinite timeout.") + public long getSystemWorkerBlockedTimeout(); + + /** */ + public void setSystemWorkerBlockedTimeout(long val); + + /** */ + @MXBeanDescription("Timeout for checkpoint read lock acquisition. Critical failure handler fires if exceeded. " + + "Nonpositive value denotes infinite timeout.") + public long getCheckpointReadLockTimeout(); + + /** */ + public void setCheckpointReadLockTimeout(long val); +} diff --git a/modules/core/src/main/java/org/apache/ignite/mxbean/WorkersControlMXBean.java b/modules/core/src/main/java/org/apache/ignite/mxbean/WorkersControlMXBean.java index 18b0084f5fcd4..b999ab7d716de 100644 --- a/modules/core/src/main/java/org/apache/ignite/mxbean/WorkersControlMXBean.java +++ b/modules/core/src/main/java/org/apache/ignite/mxbean/WorkersControlMXBean.java @@ -47,13 +47,6 @@ public interface WorkersControlMXBean { ) public boolean terminateWorker(String name); - /** */ - @MXBeanDescription("Whether workers check each other's health.") - public boolean getHealthMonitoringEnabled(); - - /** */ - public void setHealthMonitoringEnabled(boolean val); - /** * Stops thread by {@code name}, if exists and unique. 
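An MXBean interface like FailureHandlingMxBean above is ultimately registered with an MBeanServer so that tools such as JConsole can change its attributes at runtime. A minimal, self-contained sketch of that round trip; DemoFailureMXBean, DemoFailure, and the ObjectName are made up for illustration and are not Ignite's own registration code:

```java
import java.lang.management.ManagementFactory;
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;

/** Illustrative JMX round trip; DemoFailureMXBean is a stand-in for FailureHandlingMxBean. */
public class MxBeanDemo {
    /** The MXBean suffix tells JMX to expose this interface as an MXBean. */
    public interface DemoFailureMXBean {
        boolean getLivenessCheckEnabled();

        void setLivenessCheckEnabled(boolean val);
    }

    /** Trivial implementation backed by a volatile flag. */
    public static class DemoFailure implements DemoFailureMXBean {
        private volatile boolean livenessCheckEnabled = true;

        @Override public boolean getLivenessCheckEnabled() {
            return livenessCheckEnabled;
        }

        @Override public void setLivenessCheckEnabled(boolean val) {
            livenessCheckEnabled = val;
        }
    }

    /** Registers the bean, flips the attribute through a JMX proxy and returns the new value. */
    public static boolean toggleViaJmx() throws Exception {
        MBeanServer srv = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=FailureHandling");

        srv.registerMBean(new DemoFailure(), name);

        try {
            // A remote JMX client (e.g. JConsole) would change the attribute the same way.
            DemoFailureMXBean proxy = JMX.newMXBeanProxy(srv, name, DemoFailureMXBean.class);

            proxy.setLivenessCheckEnabled(false);

            return proxy.getLivenessCheckEnabled();
        }
        finally {
            srv.unregisterMBean(name);
        }
    }
}
```

The volatile backing field mirrors why WorkersRegistry keeps its flags volatile: JMX writes arrive on management threads while worker threads read them.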
* diff --git a/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageCollectionItemType.java b/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageCollectionItemType.java index 2134912e5a500..2340dd9d6afe6 100644 --- a/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageCollectionItemType.java +++ b/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageCollectionItemType.java @@ -84,7 +84,10 @@ public enum MessageCollectionItemType { IGNITE_UUID, /** Message. */ - MSG; + MSG, + + /** Topology version. */ + AFFINITY_TOPOLOGY_VERSION; /** Enum values. */ private static final MessageCollectionItemType[] VALS = values(); diff --git a/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageReader.java b/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageReader.java index 050204288927d..6feee1a21eab4 100644 --- a/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageReader.java +++ b/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageReader.java @@ -23,6 +23,7 @@ import java.util.LinkedHashMap; import java.util.Map; import java.util.UUID; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.lang.IgniteUuid; /** @@ -229,6 +230,14 @@ public interface MessageReader { */ public IgniteUuid readIgniteUuid(String name); + /** + * Reads {@link AffinityTopologyVersion}. + * + * @param name Field name. + * @return {@link AffinityTopologyVersion}. + */ + public AffinityTopologyVersion readAffinityTopologyVersion(String name); + /** * Reads nested message. 
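The reader/writer additions above give AffinityTopologyVersion a first-class wire representation. Assuming the type is essentially a (long topVer, int minorTopVer) pair, the fixed-layout round trip can be sketched with a plain ByteBuffer; TopVerCodec is illustrative, not Ignite's direct-marshalling code:

```java
import java.nio.ByteBuffer;

/** Illustrative round trip for a (topVer, minorTopVer) pair such as AffinityTopologyVersion. */
public class TopVerCodec {
    /** Writes the 12-byte fixed layout: 8-byte major version followed by 4-byte minor version. */
    public static void write(ByteBuffer buf, long topVer, int minorTopVer) {
        buf.putLong(topVer);
        buf.putInt(minorTopVer);
    }

    /** Reads the pair back in the same order; returns {major, minor} for simplicity. */
    public static long[] read(ByteBuffer buf) {
        return new long[] {buf.getLong(), buf.getInt()};
    }
}
```

A dedicated collection item type lets nested collections of versions use this compact layout instead of a generic nested-message envelope.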
* diff --git a/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageWriter.java b/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageWriter.java index 692955f39cf15..14d4417be3ee2 100644 --- a/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageWriter.java +++ b/modules/core/src/main/java/org/apache/ignite/plugin/extensions/communication/MessageWriter.java @@ -22,6 +22,7 @@ import java.util.Collection; import java.util.Map; import java.util.UUID; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.lang.IgniteUuid; /** @@ -254,6 +255,15 @@ public interface MessageWriter { */ public boolean writeIgniteUuid(String name, IgniteUuid val); + /** + * Writes {@link AffinityTopologyVersion}. + * + * @param name Field name. + * @param val {@link AffinityTopologyVersion}. + * @return Whether value was fully written. + */ + public boolean writeAffinityTopologyVersion(String name, AffinityTopologyVersion val); + /** * Writes nested message. * diff --git a/modules/core/src/main/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilder.java b/modules/core/src/main/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilder.java index abac541ffdb68..659613aa0e8c5 100644 --- a/modules/core/src/main/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilder.java +++ b/modules/core/src/main/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilder.java @@ -126,6 +126,11 @@ public SecurityPermissionSetBuilder appendServicePermissions(String name, Securi * @return {@link SecurityPermissionSetBuilder} refer to same permission builder. */ public SecurityPermissionSetBuilder appendCachePermissions(String name, SecurityPermission... 
perms) { + for (SecurityPermission perm : perms) { + if (perm == SecurityPermission.CACHE_CREATE || perm == SecurityPermission.CACHE_DESTROY) + throw new IgniteException(perm + " should be assigned as system permission, not cache permission"); + } + validate(toCollection("CACHE_"), perms); append(cachePerms, name, toCollection(perms)); @@ -140,7 +145,7 @@ public SecurityPermissionSetBuilder appendCachePermissions(String name, Security * @return {@link SecurityPermissionSetBuilder} refer to same permission builder. */ public SecurityPermissionSetBuilder appendSystemPermissions(SecurityPermission... perms) { - validate(toCollection("EVENTS_", "ADMIN_"), perms); + validate(toCollection("EVENTS_", "ADMIN_", "CACHE_CREATE", "CACHE_DESTROY", "JOIN_AS_SERVER"), perms); sysPerms.addAll(toCollection(perms)); @@ -194,7 +199,7 @@ private void validate(Collection ptrns, SecurityPermission perm) { private final Collection toCollection(T... perms) { assert perms != null; - Collection col = U.newHashSet(perms.length); + Collection col = U.newLinkedHashSet(perms.length); Collections.addAll(col, perms); diff --git a/modules/core/src/main/java/org/apache/ignite/spi/ExponentialBackoffTimeoutStrategy.java b/modules/core/src/main/java/org/apache/ignite/spi/ExponentialBackoffTimeoutStrategy.java new file mode 100644 index 0000000000000..41d34e5e7b566 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/spi/ExponentialBackoffTimeoutStrategy.java @@ -0,0 +1,138 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi; + +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.internal.util.typedef.internal.U; + +/** + * Timeout strategy for retriable network operations that enforces a total timeout and increases + * the per-attempt timeout according to an exponential backoff algorithm. + * + * If failure detection is enabled, the strategy relies on totalTimeout; + * otherwise the total timeout is derived from startTimeout, maxTimeout and retryCnt. + */ +public class ExponentialBackoffTimeoutStrategy implements TimeoutStrategy { + /** Default backoff coefficient to calculate next timeout based on backoff strategy. */ + private static final double DLFT_BACKOFF_COEFF = 2.0; + + /** Max timeout of the next try, ms. */ + private final long maxTimeout; + + /** Total timeout, ms. */ + private final long totalTimeout; + + /** Timestamp of operation start to check totalTimeout. */ + private final long start; + + /** Current calculated timeout, ms. */ + private long currTimeout; + + /** + * Computes the expected total backoff timeout based on initTimeout, maxTimeout, reconCnt and the backoff coefficient. + * + * @param initTimeout Initial timeout. + * @param maxTimeout Max timeout per retry. + * @param reconCnt Reconnection count. + * @return Calculated total backoff timeout.
+ */ + public static long totalBackoffTimeout( + long initTimeout, + long maxTimeout, + long reconCnt + ) { + long totalBackoffTimeout = initTimeout; + + for (int i = 1; i < reconCnt && totalBackoffTimeout < maxTimeout; i++) + totalBackoffTimeout += backoffTimeout(totalBackoffTimeout, maxTimeout); + + return totalBackoffTimeout; + } + + /** + * Calculates the next backoff timeout, capped by {@code maxTimeout}. + * + * @param timeout Timeout. + * @param maxTimeout Maximum timeout for the backoff function. + * @return Next exponential backoff timeout. + */ + public static long backoffTimeout(long timeout, long maxTimeout) { + return (long) Math.min(timeout * DLFT_BACKOFF_COEFF, maxTimeout); + } + + /** + * @param totalTimeout Total timeout. + * @param startTimeout Initial connection timeout. + * @param maxTimeout Max connection timeout. + */ + public ExponentialBackoffTimeoutStrategy( + long totalTimeout, + long startTimeout, + long maxTimeout + ) { + this.totalTimeout = totalTimeout; + + this.maxTimeout = maxTimeout; + + currTimeout = startTimeout; + + start = U.currentTimeMillis(); + } + + /** {@inheritDoc} */ + @Override public long nextTimeout(long timeout) throws IgniteSpiOperationTimeoutException { + long remainingTime = remainingTime(U.currentTimeMillis()); + + if (remainingTime <= 0) + throw new IgniteSpiOperationTimeoutException("Operation timed out [timeoutStrategy=" + this + ']'); + + /* + If timeout is zero, return the current verified timeout and calculate the next timeout. + For a non-zero timeout, just re-verify the previously calculated value so as not to breach totalTimeout. + */ + if (timeout == 0) { + long prevTimeout = currTimeout; + + currTimeout = backoffTimeout(currTimeout, maxTimeout); + + return Math.min(prevTimeout, remainingTime); + } else + return Math.min(timeout, remainingTime); + } + + /** + * Returns remaining time for current totalTimeout chunk. + * + * @param curTs Current timestamp. + * @return Time to wait in millis.
+ */ + public long remainingTime(long curTs) { + return totalTimeout - (curTs - start); + } + + /** {@inheritDoc} */ + @Override public boolean checkTimeout(long timeInFut) { + return remainingTime(U.currentTimeMillis() + timeInFut) <= 0; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(ExponentialBackoffTimeoutStrategy.class, this); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiAdapter.java b/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiAdapter.java index a7e6e8c72de42..f68ecd68e2591 100644 --- a/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiAdapter.java +++ b/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiAdapter.java @@ -41,7 +41,6 @@ import org.apache.ignite.internal.processors.timeout.GridSpiTimeoutObject; import org.apache.ignite.internal.util.IgniteExceptionRegistry; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiPredicate; diff --git a/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiOperationTimeoutHelper.java b/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiOperationTimeoutHelper.java index b2432cea99790..9f1773b8bb292 100644 --- a/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiOperationTimeoutHelper.java +++ b/modules/core/src/main/java/org/apache/ignite/spi/IgniteSpiOperationTimeoutHelper.java @@ -27,8 +27,12 @@ * * A new instance of the class should be created for every complex network based operations that usually consists of * request and response parts. + * */ public class IgniteSpiOperationTimeoutHelper { + // https://issues.apache.org/jira/browse/IGNITE-11221 + // We need to reuse the new ExponentialBackoffTimeoutStrategy logic in TcpDiscovery instead of this class.
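The backoff arithmetic in ExponentialBackoffTimeoutStrategy above is self-contained enough to reproduce standalone. With startTimeout=500, maxTimeout=5000, and a coefficient of 2, per-attempt timeouts grow 500, 1000, 2000, 4000, 5000, and totalBackoffTimeout(500, 5000, 5) accumulates 500 + 1000 + 3000 + 5000 = 9500 ms, since each increment doubles the running total (capped at maxTimeout) and the loop stops once the total exceeds maxTimeout. A minimal re-implementation of the two static helpers, with no Ignite dependencies:

```java
/** Standalone re-implementation of the two static backoff helpers (same math as the diff above). */
public class Backoff {
    /** Same role as DLFT_BACKOFF_COEFF in the strategy. */
    private static final double BACKOFF_COEFF = 2.0;

    /** Next per-attempt timeout: multiplied by the coefficient, but capped by maxTimeout. */
    public static long backoffTimeout(long timeout, long maxTimeout) {
        return (long)Math.min(timeout * BACKOFF_COEFF, maxTimeout);
    }

    /** Worst-case total wait across reconCnt attempts, mirroring totalBackoffTimeout. */
    public static long totalBackoffTimeout(long initTimeout, long maxTimeout, long reconCnt) {
        long total = initTimeout;

        // Each pass adds the next (capped) backoff step; stop once the total passes maxTimeout.
        for (int i = 1; i < reconCnt && total < maxTimeout; i++)
            total += backoffTimeout(total, maxTimeout);

        return total;
    }
}
```

Note that the loop condition compares the running total against maxTimeout (a per-attempt cap), so the computed budget can overshoot maxTimeout by up to one final step, as the 9500 ms example shows.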
+ /** */ private long lastOperStartTs; @@ -84,7 +88,7 @@ public long nextTimeoutChunk(long dfltTimeout) throws IgniteSpiOperationTimeoutE "'failureDetectionTimeout' configuration property [failureDetectionTimeout=" + failureDetectionTimeout + ']'); } - + return timeout; } @@ -103,4 +107,4 @@ public boolean checkFailureTimeoutReached(Exception e) { return (timeout - (U.currentTimeMillis() - lastOperStartTs) <= 0); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/TimeoutStrategy.java b/modules/core/src/main/java/org/apache/ignite/spi/TimeoutStrategy.java new file mode 100644 index 0000000000000..d5bfcfb76f50f --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/spi/TimeoutStrategy.java @@ -0,0 +1,60 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi; + +/** + * Strategy to calculate the next timeout and check if the total timeout is reached. + */ +public interface TimeoutStrategy { + /** + * Gets the next timeout based on the timeout previously calculated by the strategy. + * + * @return Next timeout. + * @throws IgniteSpiOperationTimeoutException If the total timeout is already breached.
+ */ + public long nextTimeout(long currTimeout) throws IgniteSpiOperationTimeoutException; + + /** + * Gets the next timeout. + * + * @return Next timeout. + * @throws IgniteSpiOperationTimeoutException If the total timeout is already breached. + */ + public default long nextTimeout() throws IgniteSpiOperationTimeoutException { + return nextTimeout(0); + } + + /** + * Checks if the total timeout will be reached at now() + timeInFut. + * + * If timeInFut is 0, checks whether the timeout has already been reached. + * + * @param timeInFut Time in the future, in milliseconds. + * @return {@code True} if the total timeout will be reached. + */ + public boolean checkTimeout(long timeInFut); + + /** + * Checks if the total timeout is reached by now. + * + * @return {@code True} if the total timeout is already reached. + */ + public default boolean checkTimeout() { + return checkTimeout(0); + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/collision/jobstealing/JobStealingRequest.java b/modules/core/src/main/java/org/apache/ignite/spi/collision/jobstealing/JobStealingRequest.java index 6ecb14572ee2b..5f028047981e8 100644 --- a/modules/core/src/main/java/org/apache/ignite/spi/collision/jobstealing/JobStealingRequest.java +++ b/modules/core/src/main/java/org/apache/ignite/spi/collision/jobstealing/JobStealingRequest.java @@ -118,4 +118,4 @@ int delta() { @Override public String toString() { return S.toString(JobStealingRequest.class, this); } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.java b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.java index 877d2be0f2fc8..703a8b69b86ca 100755 --- a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.java +++ b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.java @@ -23,7 +23,6 @@ import java.net.ConnectException; import java.net.InetAddress; import
java.net.InetSocketAddress; -import java.net.SocketException; import java.net.SocketTimeoutException; import java.nio.ByteBuffer; import java.nio.ByteOrder; @@ -126,6 +125,7 @@ import org.apache.ignite.plugin.extensions.communication.MessageWriter; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.LoggerResource; +import org.apache.ignite.spi.ExponentialBackoffTimeoutStrategy; import org.apache.ignite.spi.IgnitePortProtocol; import org.apache.ignite.spi.IgniteSpiAdapter; import org.apache.ignite.spi.IgniteSpiConfiguration; @@ -138,6 +138,7 @@ import org.apache.ignite.spi.IgniteSpiOperationTimeoutHelper; import org.apache.ignite.spi.IgniteSpiThread; import org.apache.ignite.spi.IgniteSpiTimeoutObject; +import org.apache.ignite.spi.TimeoutStrategy; import org.apache.ignite.spi.communication.CommunicationListener; import org.apache.ignite.spi.communication.CommunicationSpi; import org.apache.ignite.spi.communication.tcp.internal.ConnectionKey; @@ -161,6 +162,7 @@ import static org.apache.ignite.spi.communication.tcp.messages.RecoveryLastReceivedMessage.ALREADY_CONNECTED; import static org.apache.ignite.spi.communication.tcp.messages.RecoveryLastReceivedMessage.NEED_WAIT; import static org.apache.ignite.spi.communication.tcp.messages.RecoveryLastReceivedMessage.NODE_STOPPING; +import static org.apache.ignite.spi.communication.tcp.messages.RecoveryLastReceivedMessage.UNKNOWN_NODE; /** * TcpCommunicationSpi is default communication SPI which uses @@ -269,6 +271,9 @@ @IgniteSpiMultipleInstancesSupport(true) @IgniteSpiConsistencyChecked(optional = false) public class TcpCommunicationSpi extends IgniteSpiAdapter implements CommunicationSpi<Message> { + /** Time threshold for logging overly long connection establishment. */ + private static final int CONNECTION_ESTABLISH_THRESHOLD_MS = 100; + /** IPC error message.
*/ public static final String OUT_OF_RESOURCES_TCP_MSG = "Failed to allocate shared memory segment " + "(switching to TCP, may be slower)."; @@ -321,6 +326,15 @@ public class TcpCommunicationSpi extends IgniteSpiAdapter implements Communicati */ public static final int DFLT_SELECTORS_CNT = Math.max(4, Runtime.getRuntime().availableProcessors() / 2); + /** Default initial connect/handshake timeout when failure detection is enabled. */ + private static final int DFLT_INITIAL_TIMEOUT = 500; + + /** Default initial delay when the target node is still out of the topology. */ + private static final int DFLT_NEED_WAIT_DELAY = 200; + + /** Default delay between reconnect attempts in case of temporary network issues. */ + private static final int DFLT_RECONNECT_DELAY = 50; + /** * Version when client is ready to wait to connect to server (could be needed when client tries to open connection * before it starts being visible for server) @@ -522,7 +536,11 @@ else if (discoverySpi instanceof IgniteDiscoverySpi) if (unknownNode) { U.warn(log, "Close incoming connection, unknown node [nodeId=" + sndId + ", ses=" + ses + ']'); - ses.close(); + ses.send(new RecoveryLastReceivedMessage(UNKNOWN_NODE)).listen(new CI1<IgniteInternalFuture<?>>() { + @Override public void apply(IgniteInternalFuture<?> fut) { + ses.close(); + } + }); } else { ses.send(new RecoveryLastReceivedMessage(NEED_WAIT)).listen(new CI1<IgniteInternalFuture<?>>() { @@ -2959,6 +2977,7 @@ private GridCommunicationClient reserveClient(ClusterNode node, int connIdx) thr // then we are likely to run on the same host and shared memory communication could be tried. if (shmemPort != null && U.sameMacs(locNode, node)) { try { + // https://issues.apache.org/jira/browse/IGNITE-11126 Rework failure detection logic.
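Later in this patch, the NEED_WAIT handling in createTcpClient() replaces the old doubling sleep with a fixed DFLT_NEED_WAIT_DELAY that is checked against the timeout strategy before each retry. That control flow can be simulated deterministically; RetrySim is an illustrative model of the loop, not the SPI code itself:

```java
/** Illustrative model of the NEED_WAIT retry loop: retry on a fixed delay while the budget allows. */
public class RetrySim {
    /**
     * @param totalTimeout Total budget in ms.
     * @param needWaitDelay Fixed delay per NEED_WAIT round, mirroring DFLT_NEED_WAIT_DELAY.
     * @param roundsUntilVisible Rounds until the target node becomes visible in the topology.
     * @return Number of NEED_WAIT rounds waited, or -1 when the budget would be exceeded first.
     */
    public static int connect(long totalTimeout, long needWaitDelay, int roundsUntilVisible) {
        long elapsed = 0;

        for (int round = 0; ; round++) {
            if (round >= roundsUntilVisible)
                return round; // Handshake succeeds once the node is visible.

            // Mirror connTimeoutStgy.checkTimeout(DFLT_NEED_WAIT_DELAY): give up if the
            // next delay would overshoot the remaining budget.
            if (elapsed + needWaitDelay >= totalTimeout)
                return -1;

            elapsed += needWaitDelay; // Stand-in for U.sleep(DFLT_NEED_WAIT_DELAY).
        }
    }
}
```

Checking the budget before sleeping is the key change: the old code could sleep a final (doubled) interval it had no chance of using, while the new flow fails fast with a descriptive ClusterTopologyCheckedException.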
GridCommunicationClient client = createShmemClient( node, connIdx, @@ -2981,19 +3000,20 @@ else if (log.isDebugEnabled()) } } - connectGate.enter(); + final long start = System.currentTimeMillis(); - try { - GridCommunicationClient client = createTcpClient(node, connIdx); + GridCommunicationClient client = createTcpClient(node, connIdx); - if (log.isDebugEnabled()) - log.debug("TCP client created: " + client); + final long time = System.currentTimeMillis() - start; - return client; - } - finally { - connectGate.leave(); - } + if (time > CONNECTION_ESTABLISH_THRESHOLD_MS) { + if (log.isInfoEnabled()) + log.info("TCP client created [client=" + client + ", duration=" + time + "ms]"); + } + else if (log.isDebugEnabled()) + log.debug("TCP client created [client=" + client + ", duration=" + time + "ms]"); + + return client; } /** @@ -3044,11 +3064,10 @@ else if (log.isDebugEnabled()) try { safeShmemHandshake(client, node.id(), timeoutHelper.nextTimeoutChunk(connTimeout0)); } - catch (HandshakeTimeoutException | IgniteSpiOperationTimeoutException e) { + catch (IgniteSpiOperationTimeoutException e) { client.forceClose(); - if (failureDetectionTimeoutEnabled() && (e instanceof HandshakeTimeoutException || - timeoutHelper.checkFailureTimeoutReached(e))) { + if (failureDetectionTimeoutEnabled() && timeoutHelper.checkFailureTimeoutReached(e)) { if (log.isDebugEnabled()) log.debug("Handshake timed out (failure threshold reached) [failureDetectionTimeout=" + failureDetectionTimeout() + ", err=" + e.getMessage() + ", client=" + client + ']'); @@ -3224,17 +3243,24 @@ protected GridCommunicationClient createTcpClient(ClusterNode node, int connIdx) GridCommunicationClient client = null; IgniteCheckedException errs = null; - int connectAttempts = 1; + long totalTimeout; - for (InetSocketAddress addr : addrs) { - long connTimeout0 = connTimeout; - - int attempt = 1; - - IgniteSpiOperationTimeoutHelper timeoutHelper = new IgniteSpiOperationTimeoutHelper(this, - !node.isClient()); + if 
(failureDetectionTimeoutEnabled()) + totalTimeout = node.isClient() ? clientFailureDetectionTimeout() : failureDetectionTimeout(); + else { + totalTimeout = ExponentialBackoffTimeoutStrategy.totalBackoffTimeout( + connTimeout, + maxConnTimeout, + reconCnt + ); + } - int lastWaitingTimeout = 1; + for (InetSocketAddress addr : addrs) { + TimeoutStrategy connTimeoutStgy = new ExponentialBackoffTimeoutStrategy( + totalTimeout, + failureDetectionTimeoutEnabled() ? DFLT_INITIAL_TIMEOUT : connTimeout, + maxConnTimeout + ); while (client == null) { // Reconnection on handshake timeout. if (stopping) @@ -3248,9 +3274,12 @@ protected GridCommunicationClient createTcpClient(ClusterNode node, int connIdx) break; } - boolean needWait = false; + long timeout = 0; try { + if (getSpiContext().node(node.id()) == null) + throw new ClusterTopologyCheckedException("Failed to send message (node left topology): " + node); + SocketChannel ch = SocketChannel.open(); ch.configureBlocking(true); @@ -3264,13 +3293,6 @@ protected GridCommunicationClient createTcpClient(ClusterNode node, int connIdx) if (sockSndBuf > 0) ch.socket().setSendBufferSize(sockSndBuf); - if (getSpiContext().node(node.id()) == null) { - U.closeQuiet(ch); - - throw new ClusterTopologyCheckedException("Failed to send message " + - "(node left topology): " + node); - } - ConnectionKey connKey = new ConnectionKey(node.id(), connIdx, -1); GridNioRecoveryDescriptor recoveryDesc = outRecoveryDescriptor(node, connKey); @@ -3289,14 +3311,19 @@ protected GridCommunicationClient createTcpClient(ClusterNode node, int connIdx) return null; } - Long rcvCnt; + long rcvCnt; Map meta = new HashMap<>(); GridSslMeta sslMeta = null; try { - ch.socket().connect(addr, (int)timeoutHelper.nextTimeoutChunk(connTimeout)); + timeout = connTimeoutStgy.nextTimeout(); + + ch.socket().connect(addr, (int) timeout); + + if (getSpiContext().node(node.id()) == null) + throw new ClusterTopologyCheckedException("Failed to send message (node left 
topology): " + node); if (isSslEnabled()) { meta.put(SSL_META.ordinal(), sslMeta = new GridSslMeta()); @@ -3310,10 +3337,12 @@ protected GridCommunicationClient createTcpClient(ClusterNode node, int connIdx) Integer handshakeConnIdx = connIdx; + timeout = connTimeoutStgy.nextTimeout(timeout); + rcvCnt = safeTcpHandshake(ch, recoveryDesc, node.id(), - timeoutHelper.nextTimeoutChunk(connTimeout0), + timeout, sslMeta, handshakeConnIdx); @@ -3323,11 +3352,38 @@ protected GridCommunicationClient createTcpClient(ClusterNode node, int connIdx) else if (rcvCnt == NODE_STOPPING) { throw new ClusterTopologyCheckedException("Remote node started stop procedure: " + node.id()); } + else if (rcvCnt == UNKNOWN_NODE) + throw new ClusterTopologyCheckedException("Remote node does not observe current node " + + "in topology : " + node.id()); else if (rcvCnt == NEED_WAIT) { - needWait = true; + //check that failure timeout will be reached after sleep(outOfTopDelay). + if (connTimeoutStgy.checkTimeout(DFLT_NEED_WAIT_DELAY)) { + U.warn(log, "Handshake NEED_WAIT timed out (will stop attempts to perform the handshake) " + + "[node=" + node.id() + + ", connTimeoutStgy=" + connTimeoutStgy + + ", addr=" + addr + + ", failureDetectionTimeoutEnabled=" + failureDetectionTimeoutEnabled() + + ", timeout=" + timeout + ']'); + + throw new ClusterTopologyCheckedException("Failed to connect to node " + + "(current or target node is out of topology on target node within timeout). 
" + + "Make sure that each ComputeTask and cache Transaction has a timeout set " + + "in order to prevent parties from waiting forever in case of network issues " + + "[nodeId=" + node.id() + ", addrs=" + addrs + ']'); + } + else { + if (log.isDebugEnabled()) + log.debug("NEED_WAIT received, handshake after delay [node = " + + node + ", outOfTopologyDelay = " + DFLT_NEED_WAIT_DELAY + "ms]"); - continue; + U.sleep(DFLT_NEED_WAIT_DELAY); + + continue; + } } + else if (rcvCnt < 0) + throw new IgniteCheckedException("Unsupported negative receivedCount [rcvCnt=" + rcvCnt + + ", senderNode=" + node + ']'); meta.put(CONN_IDX_META, connKey); @@ -3347,85 +3403,49 @@ else if (rcvCnt == NEED_WAIT) { if (recoveryDesc != null) recoveryDesc.release(); - - if (needWait) { - if (lastWaitingTimeout < 60000) - lastWaitingTimeout *= 2; - - U.sleep(lastWaitingTimeout); - } } } } - catch (HandshakeTimeoutException | IgniteSpiOperationTimeoutException e) { + catch (IgniteSpiOperationTimeoutException e) { // Handshake is timed out. if (client != null) { client.forceClose(); client = null; } - if (failureDetectionTimeoutEnabled() && (e instanceof HandshakeTimeoutException || - X.hasCause(e, SocketException.class) || - timeoutHelper.checkFailureTimeoutReached(e))) { - - String msg = "Handshake timed out (failure detection timeout is reached) " + - "[failureDetectionTimeout=" + failureDetectionTimeout() + ", addr=" + addr + ']'; - - onException(msg, e); - - if (log.isDebugEnabled()) - log.debug(msg); - - if (errs == null) - errs = new IgniteCheckedException("Failed to connect to node (is node still alive?). 
" + - "Make sure that each ComputeTask and cache Transaction has a timeout set " + - "in order to prevent parties from waiting forever in case of network issues " + - "[nodeId=" + node.id() + ", addrs=" + addrs + ']'); - - errs.addSuppressed(new IgniteCheckedException("Failed to connect to address: " + addr, e)); - - break; - } - - assert !failureDetectionTimeoutEnabled(); - - onException("Handshake timed out (will retry with increased timeout) [timeout=" + connTimeout0 + + onException("Handshake timed out (will retry with increased timeout) [connTimeoutStrategy=" + connTimeoutStgy + ", addr=" + addr + ']', e); if (log.isDebugEnabled()) - log.debug( - "Handshake timed out (will retry with increased timeout) [timeout=" + connTimeout0 + - ", addr=" + addr + ", err=" + e + ']'); - - if (attempt == reconCnt || connTimeout0 > maxConnTimeout) { - U.warn(log, "Handshake timedout (will stop attempts to perform the handshake) " + - "[node=" + node.id() + ", timeout=" + connTimeout0 + - ", maxConnTimeout=" + maxConnTimeout + - ", attempt=" + attempt + ", reconCnt=" + reconCnt + - ", err=" + e.getMessage() + ", addr=" + addr + ']'); + log.debug("Handshake timed out (will retry with increased timeout) [connTimeoutStrategy=" + connTimeoutStgy + + ", addr=" + addr + ", err=" + e + ']' + ); + + if (connTimeoutStgy.checkTimeout()) { + U.warn(log, "Handshake timed out (will stop attempts to perform the handshake) " + + "[node=" + node.id() + ", connTimeoutStrategy=" + connTimeoutStgy + + ", err=" + e.getMessage() + ", addr=" + addr + + ", failureDetectionTimeoutEnabled=" + failureDetectionTimeoutEnabled() + + ", timeout=" + timeout + ']'); + + String msg = "Failed to connect to node (is node still alive?). 
" + + "Make sure that each ComputeTask and cache Transaction has a timeout set " + + "in order to prevent parties from waiting forever in case of network issues " + + "[nodeId=" + node.id() + ", addrs=" + addrs + ']'; if (errs == null) - errs = new IgniteCheckedException("Failed to connect to node (is node still alive?). " + - "Make sure that each ComputeTask and cache Transaction has a timeout set " + - "in order to prevent parties from waiting forever in case of network issues " + - "[nodeId=" + node.id() + ", addrs=" + addrs + ']'); - - errs.addSuppressed(new IgniteCheckedException("Failed to connect to address: " + addr, e)); + errs = new IgniteCheckedException(msg, e); + else + errs.addSuppressed(new IgniteCheckedException(msg, e)); break; } - else { - attempt++; - - connTimeout0 *= 2; - - // Continue loop. - } } catch (ClusterTopologyCheckedException e) { throw e; } catch (Exception e) { + // Most probably IO error on socket connect or handshake. if (client != null) { client.forceClose(); @@ -3437,41 +3457,42 @@ else if (rcvCnt == NEED_WAIT) { if (log.isDebugEnabled()) log.debug("Client creation failed [addr=" + addr + ", err=" + e + ']'); - boolean failureDetThrReached = timeoutHelper.checkFailureTimeoutReached(e); + // check if timeout occured in case of unrecoverable exception + if (connTimeoutStgy.checkTimeout()) { + U.warn(log, "Connection timed out (will stop attempts to perform the connect) " + + "[node=" + node.id() + ", connTimeoutStgy=" + connTimeoutStgy + + ", failureDetectionTimeoutEnabled=" + failureDetectionTimeoutEnabled() + + ", timeout=" + timeout + + ", err=" + e.getMessage() + ", addr=" + addr + ']'); - if (enableTroubleshootingLog) - U.error(log, "Failed to establish connection to a remote node [node=" + node + - ", addr=" + addr + ", connectAttempts=" + connectAttempts + - ", failureDetThrReached=" + failureDetThrReached + ']', e); - - if (failureDetThrReached) - LT.warn(log, "Connect timed out (consider increasing 
'failureDetectionTimeout' " + - "configuration property) [addr=" + addr + ", failureDetectionTimeout=" + - failureDetectionTimeout() + ']'); - else if (X.hasCause(e, SocketTimeoutException.class)) - LT.warn(log, "Connect timed out (consider increasing 'connTimeout' " + - "configuration property) [addr=" + addr + ", connTimeout=" + connTimeout + ']'); + String msg = "Failed to connect to node (is node still alive?). " + + "Make sure that each ComputeTask and cache Transaction has a timeout set " + + "in order to prevent parties from waiting forever in case of network issues " + + "[nodeId=" + node.id() + ", addrs=" + addrs + ']'; - if (errs == null) - errs = new IgniteCheckedException("Failed to connect to node (is node still alive?). " + - "Make sure that each ComputeTask and cache Transaction has a timeout set " + - "in order to prevent parties from waiting forever in case of network issues " + - "[nodeId=" + node.id() + ", addrs=" + addrs + ']'); + if (errs == null) + errs = new IgniteCheckedException(msg, e); + else + errs.addSuppressed(new IgniteCheckedException(msg, e)); - errs.addSuppressed(new IgniteCheckedException("Failed to connect to address " + - "[addr=" + addr + ", err=" + e.getMessage() + ']', e)); + break; + } - // Reconnect for the second time, if connection is not established. - if (!failureDetThrReached && connectAttempts < 5 && - (X.hasCause(e, ConnectException.class, HandshakeException.class, SocketTimeoutException.class))) { - U.sleep(200); + if (isRecoverableException(e)) + U.sleep(DFLT_RECONNECT_DELAY); + else { + String msg = "Failed to connect to node due to unrecoverable exception (is node still alive?). 
" + + "Make sure that each ComputeTask and cache Transaction has a timeout set " + + "in order to prevent parties from waiting forever in case of network issues " + + "[nodeId=" + node.id() + ", addrs=" + addrs + ", err= "+ e + ']'; - connectAttempts++; + if (errs == null) + errs = new IgniteCheckedException(msg, e); + else + errs.addSuppressed(new IgniteCheckedException(msg, e)); - continue; + break; } - - break; } } @@ -3514,17 +3535,11 @@ protected void processClientCreationError( ) throws IgniteCheckedException { assert errs != null; - if (X.hasCause(errs, ConnectException.class)) - LT.warn(log, "Failed to connect to a remote node " + - "(make sure that destination node is alive and " + - "operating system firewall is disabled on local and remote hosts) " + - "[addrs=" + addrs + ']'); - boolean commErrResolve = false; IgniteSpiContext ctx = getSpiContext(); - if (connectionError(errs) && ctx.communicationFailureResolveSupported()) { + if (isRecoverableException(errs) && ctx.communicationFailureResolveSupported()) { commErrResolve = true; ctx.resolveCommunicationFailure(node, errs); @@ -3532,8 +3547,11 @@ protected void processClientCreationError( if (!commErrResolve && enableForcibleNodeKill) { if (ctx.node(node.id()) != null - && (node.isClient() || !getLocalNode().isClient()) && - connectionError(errs)) { + && node.isClient() + && !getLocalNode().isClient() + && isRecoverableException(errs) + ) { + // Only server can fail client for now, as in TcpDiscovery resolveCommunicationFailure() is not supported. String msg = "TcpCommunicationSpi failed to establish connection to node, node will be dropped from " + "cluster [" + "rmtNode=" + node + ']'; @@ -3549,21 +3567,26 @@ protected void processClientCreationError( } } - if (!X.hasCause(errs, SocketTimeoutException.class, HandshakeTimeoutException.class, - IgniteSpiOperationTimeoutException.class)) - throw errs; + throw errs; } /** * @param errs Error. 
- * @return {@code True} if error was caused by some connection IO error.
+ * @return {@code True} if error was caused by some connection IO error or IgniteCheckedException due to timeout.
 */
- private static boolean connectionError(IgniteCheckedException errs) {
- return X.hasCause(errs, ConnectException.class,
+ private boolean isRecoverableException(Exception errs) {
+ return X.hasCause(
+ errs,
+ IOException.class,
 HandshakeException.class,
- SocketTimeoutException.class,
- HandshakeTimeoutException.class,
- IgniteSpiOperationTimeoutException.class);
+ IgniteSpiOperationTimeoutException.class
+ );
+ }
+
+ /** */
+ private IgniteSpiOperationTimeoutException handshakeTimeoutException() {
+ return new IgniteSpiOperationTimeoutException("Failed to perform handshake due to timeout " +
+ "(consider increasing 'connectionTimeout' configuration property).");
 }

 /**
@@ -3589,16 +3612,10 @@ private void safeShmemHandshake(
 client.doHandshake(new HandshakeClosure(rmtNodeId));
 }
 finally {
- boolean cancelled = obj.cancel();
-
- if (cancelled)
+ if (obj.cancel())
 removeTimeoutObject(obj);
-
- // Ignoring whatever happened after timeout - reporting only timeout event.
- if (!cancelled)
- throw new HandshakeTimeoutException(
- new IgniteSpiOperationTimeoutException("Failed to perform handshake due to timeout " +
- "(consider increasing 'connectionTimeout' configuration property)."));
+ else
+ throw handshakeTimeoutException();
 }
 }

@@ -3825,16 +3842,10 @@ else if (log.isDebugEnabled())
 throw new IgniteCheckedException("Failed to read from channel.", e);
 }
 finally {
- boolean cancelled = obj.cancel();
-
- if (cancelled)
+ if (obj.cancel())
 removeTimeoutObject(obj);
-
- // Ignoring whatever happened after timeout - reporting only timeout event.
- if (!cancelled)
- throw new HandshakeTimeoutException(
- new IgniteSpiOperationTimeoutException("Failed to perform handshake due to timeout " +
- "(consider increasing 'connectionTimeout' configuration property)."));
+ else
+ throw handshakeTimeoutException();
 }

 return rcvCnt;
@@ -4044,19 +4055,6 @@ private static class HandshakeException extends IgniteCheckedException {
 }
 }

- /** Internal exception class for proper timeout handling. */
- private static class HandshakeTimeoutException extends IgniteCheckedException {
- /** */
- private static final long serialVersionUID = 0L;
-
- /**
- * @param cause Exception cause
- */
- HandshakeTimeoutException(IgniteSpiOperationTimeoutException cause) {
- super(cause);
- }
- }
-
 /**
 * This worker takes responsibility to shut the server down when stopping,
 * No other thread shall stop passed server.
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage.java b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage.java
index e3be9c99c7ed0..f845b0b5bbc19 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage.java
@@ -19,6 +19,7 @@
 import java.nio.ByteBuffer;
 import java.util.UUID;
+import org.apache.ignite.internal.IgniteCodeGeneratingFail;
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.plugin.extensions.communication.Message;
@@ -29,6 +30,7 @@
 /**
 * Handshake message.
 */
+@IgniteCodeGeneratingFail
 public class HandshakeMessage implements Message {
 /** */
 private static final long serialVersionUID = 0L;
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage2.java b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage2.java
index f27a825740cae..220781371a69f 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage2.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/HandshakeMessage2.java
@@ -19,6 +19,7 @@
 import java.nio.ByteBuffer;
 import java.util.UUID;
+import org.apache.ignite.internal.IgniteCodeGeneratingFail;
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.plugin.extensions.communication.MessageReader;
 import org.apache.ignite.plugin.extensions.communication.MessageWriter;
@@ -26,6 +27,7 @@
 /**
 * Updated handshake message.
 */
+@IgniteCodeGeneratingFail
 public class HandshakeMessage2 extends HandshakeMessage {
 /** */
 private static final long serialVersionUID = 0L;
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/NodeIdMessage.java b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/NodeIdMessage.java
index 2c6aa300bd8be..12e2522ce4f05 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/NodeIdMessage.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/NodeIdMessage.java
@@ -19,6 +19,7 @@
 import java.nio.ByteBuffer;
 import java.util.UUID;
+import org.apache.ignite.internal.IgniteCodeGeneratingFail;
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.plugin.extensions.communication.Message;
@@ -29,6 +30,7 @@
 /**
 * Node ID message.
 */
+@IgniteCodeGeneratingFail
 public class NodeIdMessage implements Message {
 /** */
 private static final long serialVersionUID = 0L;
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/RecoveryLastReceivedMessage.java b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/RecoveryLastReceivedMessage.java
index eef2655a3eafa..fc9f62856791e 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/RecoveryLastReceivedMessage.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/messages/RecoveryLastReceivedMessage.java
@@ -18,6 +18,7 @@
 package org.apache.ignite.spi.communication.tcp.messages;

 import java.nio.ByteBuffer;
+import org.apache.ignite.internal.IgniteCodeGeneratingFail;
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.plugin.extensions.communication.Message;
 import org.apache.ignite.plugin.extensions.communication.MessageReader;
@@ -27,6 +28,9 @@
 /**
 * Recovery acknowledgment message.
 */
+@IgniteCodeGeneratingFail
 public class RecoveryLastReceivedMessage implements Message {
 /** */
 private static final long serialVersionUID = 0L;
@@ -40,6 +42,9 @@ public class RecoveryLastReceivedMessage implements Message {
 /** Need wait. */
 public static final long NEED_WAIT = -3;

+ /** Initiator node is not in current topology. */
+ public static final long UNKNOWN_NODE = -4;
+
 /** Message body size in bytes. */
 private static final int MESSAGE_SIZE = 8;
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoveryDataBag.java b/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoveryDataBag.java
index b8d8908215130..2b6bbca0c5358 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoveryDataBag.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoveryDataBag.java
@@ -23,6 +23,7 @@
 import java.util.Set;
 import java.util.UUID;
 import org.apache.ignite.internal.GridComponent;
+import org.apache.ignite.internal.util.typedef.internal.S;
 import org.jetbrains.annotations.Nullable;

 /**
@@ -312,4 +313,9 @@ public Map commonData() {
 @Nullable public Map localNodeSpecificData() {
 return nodeSpecificData.get(DEFAULT_KEY);
 }
+
+ /** {@inheritDoc} */
+ @Override public String toString() {
+ return S.toString(DiscoveryDataBag.class, this);
+ }
 }
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpi.java b/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpi.java
index 98222a322f785..545e1a043e733 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpi.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpi.java
@@ -99,13 +99,11 @@ public interface DiscoverySpi extends IgniteSpi {
 * {@link org.apache.ignite.events.DiscoveryEvent} for a set of all possible
 * discovery events.
 *

- * Note that as of Ignite 3.0.2 this method is called before
- * method {@link #spiStart(String)} is called. This is done to
- * avoid potential window when SPI is started but the listener is
- * not registered yet.
+ * TODO: This method should be removed from public API in Apache Ignite 3.0
 *
 * @param lsnr Listener to discovery events or {@code null} to unset the listener.
 */
+ @Deprecated
 public void setListener(@Nullable DiscoverySpiListener lsnr);

 /**
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpiListener.java b/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpiListener.java
index 519a235ae103b..db59de07155a7 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpiListener.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/discovery/DiscoverySpiListener.java
@@ -21,7 +21,7 @@
 import java.util.Map;
 import org.apache.ignite.cluster.ClusterNode;
 import org.apache.ignite.events.DiscoveryEvent;
-import org.apache.ignite.internal.IgniteInternalFuture;
+import org.apache.ignite.lang.IgniteFuture;
 import org.jetbrains.annotations.Nullable;

 /**
@@ -52,7 +52,7 @@ public interface DiscoverySpiListener {
 *
 * @return A future that will be completed when notification process has finished.
 */
- public IgniteInternalFuture onDiscovery(
+ public IgniteFuture onDiscovery(
 int type,
 long topVer,
 ClusterNode node,
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ClientImpl.java b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ClientImpl.java
index faaaff735fe2f..b0969dd3eb5b5 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ClientImpl.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ClientImpl.java
@@ -305,6 +305,11 @@ class ClientImpl extends TcpDiscoveryImpl {
 }
 }.start();

+ timer.schedule(
+ new MetricsSender(),
+ spi.metricsUpdateFreq,
+ spi.metricsUpdateFreq);
+
 try {
 joinLatch.await();
@@ -317,11 +322,6 @@ class ClientImpl extends TcpDiscoveryImpl {
 throw new IgniteSpiException("Thread has been interrupted.", e);
 }

- timer.schedule(
- new MetricsSender(),
- spi.metricsUpdateFreq,
- spi.metricsUpdateFreq);
-
 spi.printStartInfo();
 }

@@ -479,12 +479,7 @@ else if (state == DISCONNECTED) {
 Collection top = updateTopologyHistory(topVer + 1, null);

- try {
- lsnr.onDiscovery(EVT_NODE_FAILED, topVer, n, top, new TreeMap<>(topHist), null).get();
- }
- catch (IgniteCheckedException e) {
- throw new IgniteException("Failed to wait for discovery listener notification", e);
- }
+ lsnr.onDiscovery(EVT_NODE_FAILED, topVer, n, top, new TreeMap<>(topHist), null).get();
 }
 }

@@ -1889,6 +1884,7 @@ else if (log.isInfoEnabled()) {
 err = spi.duplicateIdError((TcpDiscoveryDuplicateIdMessage)msg);
 else if (discoMsg instanceof TcpDiscoveryAuthFailedMessage)
 err = spi.authenticationFailedError((TcpDiscoveryAuthFailedMessage)msg);
+ //TODO: https://issues.apache.org/jira/browse/IGNITE-9829
 else if (discoMsg instanceof TcpDiscoveryCheckFailedMessage)
 err = spi.checkFailedError((TcpDiscoveryCheckFailedMessage)msg);
@@ -1903,6 +1899,7 @@ else if (discoMsg instanceof TcpDiscoveryCheckFailedMessage)
 else
 joinError(err);

+ cancel();
 break;
 }
 }
@@ -2594,12 +2591,7 @@ private void notifyDiscovery(
 debugLog.debug("Discovery notification [node=" + node + ", type=" + U.gridEventName(type) +
 ", topVer=" + topVer + ']');

- try {
- lsnr.onDiscovery(type, topVer, node, top, new TreeMap<>(topHist), data).get();
- }
- catch (IgniteCheckedException e) {
- throw new IgniteException("Failed to wait for discovery listener notification", e);
- }
+ lsnr.onDiscovery(type, topVer, node, top, new TreeMap<>(topHist), data).get();
 }
 else if (debugLog.isDebugEnabled())
 debugLog.debug("Skipped discovery notification [node=" + node + ", type=" + U.gridEventName(type) +
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ServerImpl.java b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ServerImpl.java
index 778e8d761ca66..5bf30b0966837 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ServerImpl.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ServerImpl.java
@@ -70,7 +70,6 @@
 import org.apache.ignite.failure.FailureContext;
 import org.apache.ignite.internal.IgniteEx;
 import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException;
-import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.IgniteInterruptedCheckedException;
 import org.apache.ignite.internal.IgniteNodeAttributes;
 import org.apache.ignite.internal.IgnitionEx;
@@ -99,6 +98,7 @@
 import org.apache.ignite.internal.util.worker.GridWorkerListener;
 import org.apache.ignite.internal.worker.WorkersRegistry;
 import org.apache.ignite.lang.IgniteBiTuple;
+import org.apache.ignite.lang.IgniteFuture;
 import org.apache.ignite.lang.IgniteInClosure;
 import org.apache.ignite.lang.IgniteProductVersion;
 import org.apache.ignite.lang.IgniteUuid;
@@ -263,6 +263,9 @@ class ServerImpl extends TcpDiscoveryImpl {
 private final ConcurrentMap>> pingMap = new ConcurrentHashMap<>();

+ /** Last listener future. */
+ private IgniteFuture lastCustomEvtLsnrFut;
+
 /**
 * @param adapter Adapter.
 */
@@ -952,10 +955,10 @@ private void joinTopology() throws IgniteSpiException {
 timeout = threshold - U.currentTimeMillis();
 }
- catch (InterruptedException ignored) {
+ catch (InterruptedException e) {
 Thread.currentThread().interrupt();

- throw new IgniteSpiException("Thread has been interrupted.");
+ throw new IgniteSpiException("Thread has been interrupted.", e);
 }
 }

@@ -1959,9 +1962,6 @@ private IpFinderCleaner() {
 while (!isInterrupted()) {
 Thread.sleep(spi.ipFinderCleanFreq);

- if (!isLocalNodeCoordinator())
- continue;
-
 if (spiStateCopy() != CONNECTED) {
 if (log.isDebugEnabled())
 log.debug("Stopping IP finder cleaner (SPI is not connected to topology).");
@@ -1969,6 +1969,9 @@ private IpFinderCleaner() {
 return;
 }

+ if (!isLocalNodeCoordinator())
+ continue;
+
 if (spi.ipFinder.isShared())
 cleanIpFinder();
 }
@@ -2159,6 +2162,20 @@ private static WorkersRegistry getWorkerRegistry(TcpDiscoverySpi spi) {
 return spi.ignite() instanceof IgniteEx ? ((IgniteEx)spi.ignite()).context().workersRegistry() : null;
 }

+ /**
+ * Waits for all the listeners from the previous discovery message to be completed.
+ */
+ private void waitForLastCustomEventListenerFuture() {
+ if (lastCustomEvtLsnrFut != null) {
+ try {
+ lastCustomEvtLsnrFut.get();
+ }
+ finally {
+ lastCustomEvtLsnrFut = null;
+ }
+ }
+ }
+
 /**
 * Discovery messages history used for client reconnect.
 */
@@ -3210,16 +3227,18 @@ else if (!spi.failureDetectionTimeoutEnabled() && (e instanceof
 assert !forceSndPending || msg instanceof TcpDiscoveryNodeLeftMessage;

- if (failure || forceSndPending) {
+ if (failure || forceSndPending || newNextNode) {
 if (log.isDebugEnabled())
 log.debug("Pending messages will be sent [failure=" + failure +
 ", newNextNode=" + newNextNode +
- ", forceSndPending=" + forceSndPending + ']');
+ ", forceSndPending=" + forceSndPending +
+ ", failedNodes=" + failedNodes + ']');

 if (debugMode)
 debugLog(msg, "Pending messages will be sent [failure=" + failure +
 ", newNextNode=" + newNextNode +
- ", forceSndPending=" + forceSndPending + ']');
+ ", forceSndPending=" + forceSndPending +
+ ", failedNodes=" + failedNodes + ']');

 for (TcpDiscoveryAbstractMessage pendingMsg : pendingMsgs) {
 long tstamp = U.currentTimeMillis();
@@ -4202,6 +4221,15 @@ private void trySendMessageDirectly(TcpDiscoveryNode node, TcpDiscoveryAbstractM
 private void processNodeAddedMessage(TcpDiscoveryNodeAddedMessage msg) {
 assert msg != null;

+ blockingSectionBegin();
+
+ try {
+ waitForLastCustomEventListenerFuture();
+ }
+ finally {
+ blockingSectionEnd();
+ }
+
 TcpDiscoveryNode node = msg.node();

 assert node != null;
@@ -5274,6 +5302,9 @@ private void processMetricsUpdateMessage(TcpDiscoveryMetricsUpdateMessage msg) {
 if (clientNodeIds.contains(clientNode.id()))
 clientNode.clientAliveTime(spi.clientFailureDetectionTimeout());
 else {
+ if (clientNode.clientAliveTime() == 0L)
+ clientNode.clientAliveTime(spi.clientFailureDetectionTimeout());
+
 boolean aliveCheck = clientNode.isClientAlive();

 if (!aliveCheck && isLocalNodeCoordinator()) {
@@ -5639,7 +5670,7 @@ private void notifyDiscoveryListener(TcpDiscoveryCustomEventMessage msg, boolean
 throw new IgniteException("Failed to unmarshal discovery custom message: " + msg, t);
 }

- IgniteInternalFuture fut = lsnr.onDiscovery(DiscoveryCustomEvent.EVT_DISCOVERY_CUSTOM_EVT,
+ IgniteFuture fut = lsnr.onDiscovery(DiscoveryCustomEvent.EVT_DISCOVERY_CUSTOM_EVT,
 msg.topologyVersion(),
 node,
 snapshot,
@@ -5647,13 +5678,18 @@ private void notifyDiscoveryListener(TcpDiscoveryCustomEventMessage msg, boolean
 msgObj);

 if (waitForNotification || msgObj.isMutable()) {
+ blockingSectionBegin();
+
 try {
 fut.get();
 }
- catch (IgniteCheckedException e) {
- throw new IgniteException("Failed to wait for discovery listener notification", e);
+ finally {
+ blockingSectionEnd();
 }
 }
+ else {
+ lastCustomEvtLsnrFut = fut;
+ }

 if (msgObj.isMutable()) {
 try {
@@ -6298,6 +6334,17 @@ else if (msg instanceof TcpDiscoveryClientReconnectMessage) {
 continue;
 }
 else {
+ TcpDiscoveryClientReconnectMessage msg0 = (TcpDiscoveryClientReconnectMessage)msg;
+
+ // If the message is received from the previous node and the node is connecting, forward it to the next node.
+ if (!getLocalNodeId().equals(msg0.routerNodeId()) && state == CONNECTING) {
+ spi.writeToSocket(msg, sock, RES_OK, sockTimeout);
+
+ msgWorker.addMessage(msg);
+
+ continue;
+ }
+
 spi.writeToSocket(msg, sock, RES_CONTINUE_JOIN, sockTimeout);

 break;
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.java b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.java
index 048abf62953ee..dd2019d841f73 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.java
@@ -104,6 +104,7 @@
 import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage;
 import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAuthFailedMessage;
 import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCheckFailedMessage;
+import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryClientReconnectMessage;
 import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryDuplicateIdMessage;
 import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryEnsureDelivery;
 import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryJoinRequestMessage;
@@ -277,6 +278,9 @@ public class TcpDiscoverySpi extends IgniteSpiAdapter implements IgniteDiscovery
 /** Maximum ack timeout value for receiving message acknowledgement in milliseconds (value is 600,000ms). */
 public static final long DFLT_MAX_ACK_TIMEOUT = 10 * 60 * 1000;

+ /** Default SO_LINGER to set for socket, 0 means enabled with 0 timeout. */
+ public static final int DFLT_SO_LINGER = 5;
+
 /** Default connection recovery timeout in ms. */
 public static final long DFLT_CONNECTION_RECOVERY_TIMEOUT = IgniteConfiguration.DFLT_FAILURE_DETECTION_TIMEOUT;
@@ -373,6 +377,9 @@ public class TcpDiscoverySpi extends IgniteSpiAdapter implements IgniteDiscovery
 /** Maximum message acknowledgement timeout. */
 private long maxAckTimeout = DFLT_MAX_ACK_TIMEOUT;

+ /** SO_LINGER to use for sockets. Set negative to disable, non-negative to enable; default is {@link #DFLT_SO_LINGER}. */
+ private int soLinger = DFLT_SO_LINGER;
+
 /** IP finder clean frequency. */
 @SuppressWarnings({"FieldAccessedSynchronizedAndUnsynchronized"})
 protected long ipFinderCleanFreq = DFLT_IP_FINDER_CLEAN_FREQ;
@@ -386,6 +393,9 @@ public class TcpDiscoverySpi extends IgniteSpiAdapter implements IgniteDiscovery
 /** SSL socket factory. */
 protected SSLSocketFactory sslSockFactory;

+ /** SSL enable/disable flag. */
+ protected boolean sslEnable;
+
 /** Context initialization latch. */
 @GridToStringExclude
 private final CountDownLatch ctxInitLatch = new CountDownLatch(1);
@@ -423,6 +433,9 @@ public class TcpDiscoverySpi extends IgniteSpiAdapter implements IgniteDiscovery
 /** */
 private IgniteDiscoverySpiInternalListener internalLsnr;

+ /** For test purposes. */
+ private boolean skipAddrsRandomization = false;
+
 /**
 * Gets current SPI state.
 *
@@ -914,6 +927,17 @@ public TcpDiscoverySpi setAckTimeout(long ackTimeout) {
 return this;
 }

+ /**
+ * Sets SO_LINGER to use for all created sockets.
+ *

+ * If not specified, the default is {@link #DFLT_SO_LINGER}.
+ *

+ */
+ @IgniteSpiConfiguration(optional = true)
+ public void setSoLinger(int soLinger) {
+ this.soLinger = soLinger;
+ }
+
 /**
 * Sets maximum network timeout to use for network operations.
 *

@@ -1250,6 +1274,15 @@ public long getAckTimeout() {
 return ackTimeout;
 }

+ /**
+ * Gets SO_LINGER timeout for socket.
+ *
+ * @return SO_LINGER timeout for socket.
+ */
+ public int getSoLinger() {
+ return soLinger;
+ }
+
 /**
 * Gets network timeout.
 *
@@ -1538,6 +1571,8 @@ Socket createSocket() throws IOException {
 sock.setTcpNoDelay(true);

+ sock.setSoLinger(getSoLinger() >= 0, getSoLinger());
+
 return sock;
 }
 catch (IOException e) {
 if (sock != null)
@@ -1646,8 +1681,13 @@ protected void writeToSocket(Socket sock,
 OutputStream out,
 TcpDiscoveryAbstractMessage msg,
 long timeout) throws IOException, IgniteCheckedException {
- if (internalLsnr != null && msg instanceof TcpDiscoveryJoinRequestMessage)
- internalLsnr.beforeJoin(locNode, log);
+ if (internalLsnr != null) {
+ if (msg instanceof TcpDiscoveryJoinRequestMessage)
+ internalLsnr.beforeJoin(locNode, log);
+
+ if (msg instanceof TcpDiscoveryClientReconnectMessage)
+ internalLsnr.beforeReconnect(locNode, log);
+ }

 assert sock != null;
 assert msg != null;
@@ -1881,7 +1921,7 @@ protected Collection resolvedAddresses() throws IgniteSpiExce
 }
 }

- if (!res.isEmpty())
+ if (!res.isEmpty() && !skipAddrsRandomization)
 Collections.shuffle(res);

 return res;
@@ -2027,6 +2067,8 @@ private void initializeImpl() {
 if (impl != null)
 return;

+ sslEnable = ignite().configuration().getSslContextFactory() != null;
+
 initFailureDetectionTimeout();

 if (!forceSrvMode && (Boolean.TRUE.equals(ignite.configuration().isClientMode()))) {
@@ -2205,7 +2247,7 @@ boolean ipFinderHasLocalAddress() throws IgniteSpiException {
 * @return {@code True} if ssl enabled.
 */
 boolean isSslEnabled() {
- return ignite().configuration().getSslContextFactory() != null;
+ return sslEnable;
 }

 /** {@inheritDoc} */
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/internal/TcpDiscoveryNode.java b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/internal/TcpDiscoveryNode.java
index 13d100652d3c0..763a678b9ade1 100644
--- a/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/internal/TcpDiscoveryNode.java
+++ b/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/internal/TcpDiscoveryNode.java
@@ -124,7 +124,7 @@ public class TcpDiscoveryNode extends GridMetadataAwareAdapter implements Ignite
 /** Alive check time (used by clients). */
 @GridToStringExclude
- private transient long aliveCheckTime;
+ private transient volatile long aliveCheckTime;

 /** Client router node ID. */
 @GridToStringExclude
@@ -499,6 +499,13 @@ public void clientAliveTime(long aliveTime) {
 this.aliveCheckTime = U.currentTimeMillis() + aliveTime;
 }

+ /**
+ * @return Client alive check time.
+ */
+ public long clientAliveTime() {
+ return aliveCheckTime;
+ }
+
 /**
 * @return Client router node ID.
 */
diff --git a/modules/core/src/main/java/org/apache/ignite/spi/encryption/EncryptionSpi.java b/modules/core/src/main/java/org/apache/ignite/spi/encryption/EncryptionSpi.java
new file mode 100644
index 0000000000000..693cefd9138f9
--- /dev/null
+++ b/modules/core/src/main/java/org/apache/ignite/spi/encryption/EncryptionSpi.java
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.spi.encryption;
+
+import java.io.Serializable;
+import java.nio.ByteBuffer;
+import org.apache.ignite.IgniteException;
+import org.apache.ignite.spi.IgniteSpi;
+
+/**
+ * SPI that provides encryption features for Ignite.
+ */
+public interface EncryptionSpi extends IgniteSpi {
+ /**
+ * Returns master key digest.
+ * Should always return the same digest for the same key.
+ * The digest is used for configuration consistency checks.
+ *
+ * @return Master key digest.
+ */
+ byte[] masterKeyDigest();
+
+ /**
+ * Creates a new key for encryption/decryption of cache persistent data: pages, WAL records.
+ *
+ * @return Newly created encryption key.
+ * @throws IgniteException If key creation failed.
+ */
+ Serializable create() throws IgniteException;
+
+ /**
+ * Encrypts data.
+ *
+ * @param data Data to encrypt.
+ * @param key Encryption key.
+ * @param res Destination buffer.
+ */
+ void encrypt(ByteBuffer data, Serializable key, ByteBuffer res);
+
+ /**
+ * Encrypts data without padding info.
+ *
+ * @param data Data to encrypt.
+ * @param key Encryption key.
+ * @param res Destination buffer.
+ */
+ void encryptNoPadding(ByteBuffer data, Serializable key, ByteBuffer res);
+
+ /**
+ * Decrypts data encrypted with {@link #encrypt(ByteBuffer, Serializable, ByteBuffer)}.
+ *
+ * @param data Data to decrypt.
+ * @param key Encryption key.
+ */
+ byte[] decrypt(byte[] data, Serializable key);
+
+ /**
+ * Decrypts data encrypted with {@link #encryptNoPadding(ByteBuffer, Serializable, ByteBuffer)}.
+ *
+ * @param data Data to decrypt.
+ * @param key Encryption key. + * @param res Destination buffer. + */ + void decryptNoPadding(ByteBuffer data, Serializable key, ByteBuffer res); + + /** + * Encrypts key. + * Adds some info to check key integrity on decryption. + * + * @param key Key to encrypt. + * @return Encrypted key. + */ + byte[] encryptKey(Serializable key); + + /** + * Decrypts key and checks its integrity. + * + * @param key Key to decrypt. + * @return Decrypted key. + */ + Serializable decryptKey(byte[] key); + + /** + * @param dataSize Size of plain data in bytes. + * @return Size of encrypted data in bytes for padding encryption mode. + */ + int encryptedSize(int dataSize); + + /** + * @param dataSize Size of plain data in bytes. + * @return Size of encrypted data in bytes for no-padding encryption mode. + */ + int encryptedSizeNoPadding(int dataSize); + + /** + * @return Encrypted data block size. + */ + int blockSize(); +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/KeystoreEncryptionKey.java b/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/KeystoreEncryptionKey.java new file mode 100644 index 0000000000000..c2577c73e03a8 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/KeystoreEncryptionKey.java @@ -0,0 +1,84 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi.encryption.keystore; + +import java.io.Serializable; +import java.security.Key; +import java.util.Arrays; +import java.util.Objects; +import org.jetbrains.annotations.Nullable; + +/** + * {@code EncryptionKey} implementation based on java security. + * + * @see Key + * @see KeystoreEncryptionSpi + */ +public final class KeystoreEncryptionKey implements Serializable { + /** */ + private static final long serialVersionUID = 0L; + + /** + * Encryption key. + */ + private final Key k; + + /** + * Key digest. + */ + @Nullable final byte[] digest; + + /** + * @param k Encryption key. + * @param digest Message digest. + */ + KeystoreEncryptionKey(Key k, @Nullable byte[] digest) { + this.k = k; + this.digest = digest; + } + + /** + * Encryption key. + */ + public Key key() { + return k; + } + + /** {@inheritDoc} */ + @Override public boolean equals(Object o) { + if (this == o) + return true; + + if (o == null || getClass() != o.getClass()) + return false; + + KeystoreEncryptionKey key = (KeystoreEncryptionKey)o; + + return Objects.equals(k, key.k) && + Arrays.equals(digest, key.digest); + } + + /** {@inheritDoc} */ + @Override public int hashCode() { + int result = Objects.hash(k); + + result = 31 * result + Arrays.hashCode(digest); + + return result; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/KeystoreEncryptionSpi.java b/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/KeystoreEncryptionSpi.java new file mode 100644 index 0000000000000..beba0158f7717 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/KeystoreEncryptionSpi.java @@ -0,0 +1,501 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi.encryption.keystore; + +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.Serializable; +import java.net.URL; +import java.nio.ByteBuffer; +import java.security.GeneralSecurityException; +import java.security.InvalidAlgorithmParameterException; +import java.security.InvalidKeyException; +import java.security.KeyStore; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.Arrays; +import java.util.concurrent.ThreadLocalRandom; +import javax.crypto.BadPaddingException; +import javax.crypto.Cipher; +import javax.crypto.IllegalBlockSizeException; +import javax.crypto.KeyGenerator; +import javax.crypto.NoSuchPaddingException; +import javax.crypto.SecretKey; +import javax.crypto.ShortBufferException; +import javax.crypto.spec.IvParameterSpec; +import javax.crypto.spec.SecretKeySpec; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.spi.encryption.EncryptionSpi; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.resources.IgniteInstanceResource; 
+import org.apache.ignite.resources.LoggerResource; +import org.apache.ignite.spi.IgniteSpiAdapter; +import org.apache.ignite.spi.IgniteSpiException; +import org.jetbrains.annotations.Nullable; + +import static javax.crypto.Cipher.DECRYPT_MODE; +import static javax.crypto.Cipher.ENCRYPT_MODE; + +/** + * EncryptionSPI implementation based on JDK-provided cipher algorithm implementations. + * + * @see EncryptionSpi + * @see KeystoreEncryptionKey + */ +public class KeystoreEncryptionSpi extends IgniteSpiAdapter implements EncryptionSpi { + /** + * Default key store entry name to store the encryption master key. + */ + public static final String DEFAULT_MASTER_KEY_NAME = "ignite.master.key"; + + /** + * Algorithm supported by implementation. + */ + public static final String CIPHER_ALGO = "AES"; + + /** + * Default encryption key size. + */ + public static final int DEFAULT_KEY_SIZE = 256; + + /** + * Full name of cipher algorithm. + */ + private static final String AES_WITH_PADDING = "AES/CBC/PKCS5Padding"; + + /** + * Full name of cipher algorithm without padding. + */ + private static final String AES_WITHOUT_PADDING = "AES/CBC/NoPadding"; + + /** + * Algorithm used for digest calculation. + */ + private static final String DIGEST_ALGO = "SHA-512"; + + /** + * Data block size. + */ + private static final int BLOCK_SZ = 16; + + /** + * Path to master key store. + */ + private String keyStorePath; + + /** + * Key store password. + */ + private char[] keyStorePwd; + + /** + * Key size. + */ + private int keySize = DEFAULT_KEY_SIZE; + + /** + * Master key name. + */ + private String masterKeyName = DEFAULT_MASTER_KEY_NAME; + + /** + * Master key. + */ + private KeystoreEncryptionKey masterKey; + + /** Logger.
*/ + @LoggerResource + protected IgniteLogger log; + + /** Ignite. */ + @IgniteInstanceResource + protected Ignite ignite; + + /** */ + private ThreadLocal<Cipher> aesWithPadding = ThreadLocal.withInitial(() -> { + try { + return Cipher.getInstance(AES_WITH_PADDING); + } + catch (NoSuchAlgorithmException | NoSuchPaddingException e) { + throw new IgniteException(e); + } + }); + + /** */ + private ThreadLocal<Cipher> aesWithoutPadding = ThreadLocal.withInitial(() -> { + try { + return Cipher.getInstance(AES_WITHOUT_PADDING); + } + catch (NoSuchAlgorithmException | NoSuchPaddingException e) { + throw new IgniteException(e); + } + }); + + /** {@inheritDoc} */ + @Override public void spiStart(@Nullable String igniteInstanceName) throws IgniteSpiException { + assertParameter(!F.isEmpty(keyStorePath), "KeyStorePath shouldn't be empty"); + assertParameter(keyStorePwd != null && keyStorePwd.length > 0, + "KeyStorePassword shouldn't be empty"); + + try (InputStream keyStoreFile = keyStoreFile()) { + assertParameter(keyStoreFile != null, keyStorePath + " doesn't exist!"); + + KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType()); + + ks.load(keyStoreFile, keyStorePwd); + + if (log != null) + log.info("Successfully loaded keyStore [path=" + keyStorePath + "]"); + + masterKey = new KeystoreEncryptionKey(ks.getKey(masterKeyName, keyStorePwd), null); + } + catch (GeneralSecurityException | IOException e) { + throw new IgniteSpiException(e); + } + } + + /** {@inheritDoc} */ + @Override public void spiStop() throws IgniteSpiException { + ensureStarted(); + + //empty.
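For reference, the keystore handling in spiStart() above boils down to a plain JDK KeyStore round trip. The sketch below is not Ignite code: it uses an in-memory JCEKS store instead of a file, and the alias and password are illustrative (the alias mirrors DEFAULT_MASTER_KEY_NAME).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.Key;
import java.security.KeyStore;
import java.util.Arrays;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeystoreRoundTrip {
    /** Stores an AES master key under an alias and reads it back, as spiStart() does. */
    static boolean roundTrip() throws Exception {
        char[] pwd = "changeit".toCharArray();
        String alias = "ignite.master.key";

        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        SecretKey master = gen.generateKey();

        // JCEKS (unlike plain JKS) can hold SecretKey entries.
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, pwd);
        ks.setKeyEntry(alias, master, pwd, null);

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ks.store(bos, pwd);

        // Mirror spiStart(): load the store and fetch the master key by name.
        KeyStore loaded = KeyStore.getInstance("JCEKS");
        loaded.load(new ByteArrayInputStream(bos.toByteArray()), pwd);
        Key fetched = loaded.getKey(alias, pwd);

        return Arrays.equals(master.getEncoded(), fetched.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // prints "true"
    }
}
```

Note that the store type matters: the SPI uses KeyStore.getDefaultType(), so the keystore file supplied via keyStorePath must be of a type that can hold secret-key entries.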
+ } + + /** {@inheritDoc} */ + @Override public byte[] masterKeyDigest() { + ensureStarted(); + + return makeDigest(masterKey.key().getEncoded()); + } + + /** {@inheritDoc} */ + @Override public KeystoreEncryptionKey create() throws IgniteException { + ensureStarted(); + + try { + KeyGenerator gen = KeyGenerator.getInstance(CIPHER_ALGO); + + gen.init(keySize); + + SecretKey key = gen.generateKey(); + + return new KeystoreEncryptionKey(key, makeDigest(key.getEncoded())); + } + catch (NoSuchAlgorithmException e) { + throw new IgniteException(e); + } + } + + /** {@inheritDoc} */ + @Override public void encrypt(ByteBuffer data, Serializable key, ByteBuffer res) { + doEncryption(data, aesWithPadding.get(), key, res); + } + + /** {@inheritDoc} */ + @Override public void encryptNoPadding(ByteBuffer data, Serializable key, ByteBuffer res) { + doEncryption(data, aesWithoutPadding.get(), key, res); + } + + /** {@inheritDoc} */ + @Override public byte[] decrypt(byte[] data, Serializable key) { + assert key instanceof KeystoreEncryptionKey; + + ensureStarted(); + + try { + SecretKeySpec keySpec = new SecretKeySpec(((KeystoreEncryptionKey)key).key().getEncoded(), CIPHER_ALGO); + + Cipher cipher = aesWithPadding.get(); + + cipher.init(DECRYPT_MODE, keySpec, new IvParameterSpec(data, 0, cipher.getBlockSize())); + + return cipher.doFinal(data, cipher.getBlockSize(), data.length - cipher.getBlockSize()); + } + catch (InvalidAlgorithmParameterException | InvalidKeyException | IllegalBlockSizeException | + BadPaddingException e) { + throw new IgniteSpiException(e); + } + } + + /** {@inheritDoc} */ + @Override public void decryptNoPadding(ByteBuffer data, Serializable key, ByteBuffer res) { + assert key instanceof KeystoreEncryptionKey; + + ensureStarted(); + + try { + SecretKeySpec keySpec = new SecretKeySpec(((KeystoreEncryptionKey)key).key().getEncoded(), CIPHER_ALGO); + + Cipher cipher = aesWithoutPadding.get(); + + byte[] iv = new byte[cipher.getBlockSize()]; + + data.get(iv); + 
+ cipher.init(DECRYPT_MODE, keySpec, new IvParameterSpec(iv)); + + cipher.doFinal(data, res); + } + catch (InvalidAlgorithmParameterException | InvalidKeyException | IllegalBlockSizeException | + ShortBufferException | BadPaddingException e) { + throw new IgniteSpiException(e); + } + } + + /** + * @param data Plain data. + * @param cipher Cipher. + * @param key Encryption key. + */ + private void doEncryption(ByteBuffer data, Cipher cipher, Serializable key, ByteBuffer res) { + assert key instanceof KeystoreEncryptionKey; + + ensureStarted(); + + try { + SecretKeySpec keySpec = new SecretKeySpec(((KeystoreEncryptionKey)key).key().getEncoded(), CIPHER_ALGO); + + byte[] iv = initVector(cipher); + + res.put(iv); + + cipher.init(ENCRYPT_MODE, keySpec, new IvParameterSpec(iv)); + + cipher.doFinal(data, res); + } + catch (ShortBufferException | InvalidAlgorithmParameterException | InvalidKeyException | + IllegalBlockSizeException | BadPaddingException e) { + throw new IgniteSpiException(e); + } + } + + /** {@inheritDoc} */ + @Override public byte[] encryptKey(Serializable key) { + assert key instanceof KeystoreEncryptionKey; + + byte[] serKey = U.toBytes(key); + + byte[] res = new byte[encryptedSize(serKey.length)]; + + encrypt(ByteBuffer.wrap(serKey), masterKey, ByteBuffer.wrap(res)); + + return res; + } + + /** {@inheritDoc} */ + @Override public KeystoreEncryptionKey decryptKey(byte[] data) { + byte[] serKey = decrypt(data, masterKey); + + KeystoreEncryptionKey key = U.fromBytes(serKey); + + byte[] digest = makeDigest(key.key().getEncoded()); + + if (!Arrays.equals(key.digest, digest)) + throw new IgniteException("Key is broken!"); + + return key; + + } + + /** {@inheritDoc} */ + @Override public int encryptedSize(int dataSize) { + return encryptedSize(dataSize, AES_WITH_PADDING); + } + + /** {@inheritDoc} */ + @Override public int encryptedSizeNoPadding(int dataSize) { + return encryptedSize(dataSize, AES_WITHOUT_PADDING); + } + + /** {@inheritDoc} */ + @Override 
public int blockSize() { + return BLOCK_SZ; + } + + /** + * @param dataSize Data size. + * @param algo Encryption algorithm. + * @return Encrypted data size. + */ + private int encryptedSize(int dataSize, String algo) { + int cntBlocks; + + switch (algo) { + case AES_WITH_PADDING: + cntBlocks = 2; + break; + + case AES_WITHOUT_PADDING: + cntBlocks = 1; + break; + + default: + throw new IllegalStateException("Unknown algorithm: " + algo); + } + + return (dataSize/BLOCK_SZ + cntBlocks)*BLOCK_SZ; + } + + /** + * Calculates message digest. + * + * @param msg Message. + * @return Digest. + */ + private byte[] makeDigest(byte[] msg) { + try { + MessageDigest md = MessageDigest.getInstance(DIGEST_ALGO); + + return md.digest(msg); + } + catch (NoSuchAlgorithmException e) { + throw new IgniteException(e); + } + } + + /** + * @param cipher Cipher. + * @return Init vector for encryption. + * @see <a href="https://en.wikipedia.org/wiki/Initialization_vector">Initialization vector</a> + */ + private byte[] initVector(Cipher cipher) { + byte[] iv = new byte[cipher.getBlockSize()]; + + ThreadLocalRandom.current().nextBytes(iv); + + return iv; + } + + /** + * {@code keyStorePath} can be either an absolute path or a path to a classpath resource. + * + * @return Input stream for {@code keyStorePath}, or {@code null} if not found. + */ + private InputStream keyStoreFile() throws IOException { + File abs = new File(keyStorePath); + + if (abs.exists()) + return new FileInputStream(abs); + + URL clsPthRes = KeystoreEncryptionSpi.class.getClassLoader().getResource(keyStorePath); + + if (clsPthRes != null) + return clsPthRes.openStream(); + + return null; + } + + /** + * Ensures the SPI is started. + * + * @throws IgniteException If the SPI is not started. + */ + private void ensureStarted() throws IgniteException { + if (started()) + return; + + throw new IgniteException("EncryptionSpi is not started!"); + } + + /** + * Gets path to the JDK keystore that stores the master key. + * + * @return Key store path.
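The arithmetic in the private encryptedSize() above reserves one 16-byte block for the IV plus, in the padded mode, one extra block of PKCS5 padding. It can be checked against the JDK cipher directly; this sketch is illustrative (the sample sizes 100 and 96 are arbitrary), not Ignite code:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.IvParameterSpec;

public class EncryptedSizeCheck {
    static final int BLOCK_SZ = 16;

    // Same arithmetic as the private encryptedSize() above: one block for the
    // IV, plus one more block of PKCS5 padding in the padded mode.
    static int encryptedSize(int dataSize, boolean padded) {
        int cntBlocks = padded ? 2 : 1;

        return (dataSize / BLOCK_SZ + cntBlocks) * BLOCK_SZ;
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);

        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, gen.generateKey(), new IvParameterSpec(new byte[BLOCK_SZ]));

        int dataSize = 100;

        // PKCS5 rounds 100 bytes up to 112; the formula adds the 16-byte IV on top.
        int cipherTextLen = c.doFinal(new byte[dataSize]).length;

        System.out.println(encryptedSize(dataSize, true) == BLOCK_SZ + cipherTextLen); // prints "true"
    }
}
```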
+ */ + public String getKeyStorePath() { + return keyStorePath; + } + + /** + * Sets path to the JDK keystore that stores the master key. + * + * @param keyStorePath Path to JDK KeyStore. + */ + public void setKeyStorePath(String keyStorePath) { + assert !F.isEmpty(keyStorePath) : "KeyStore path shouldn't be empty"; + assert !started() : "Spi already started"; + + this.keyStorePath = keyStorePath; + } + + /** + * Gets key store password. + * + * @return Key store password. + */ + public char[] getKeyStorePwd() { + return keyStorePwd; + } + + /** + * Sets password to access KeyStore. + * + * @param keyStorePassword Password for Key Store. + */ + public void setKeyStorePassword(char[] keyStorePassword) { + assert keyStorePassword != null && keyStorePassword.length > 0; + assert !started() : "Spi already started"; + + this.keyStorePwd = keyStorePassword; + } + + /** + * Gets encryption key size. + * + * @return Encryption key size. + */ + public int getKeySize() { + return keySize; + } + + /** + * Sets encryption key size. + * + * @param keySize Key size. + */ + public void setKeySize(int keySize) { + assert !started() : "Spi already started"; + + this.keySize = keySize; + } + + /** + * Gets master key name. + * + * @return Master key name. + */ + public String getMasterKeyName() { + return masterKeyName; + } + + /** + * Sets master key name. + * + * @param masterKeyName Master key name.
+ */ + public void setMasterKeyName(String masterKeyName) { + assert !started() : "Spi already started"; + + this.masterKeyName = masterKeyName; + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/package-info.java b/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/package-info.java new file mode 100644 index 0000000000000..71f1634709670 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/package-info.java @@ -0,0 +1,22 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * + * Contains encryption SPI implementation based on the standard JDK keystore. + */ +package org.apache.ignite.spi.encryption.keystore; diff --git a/modules/core/src/main/java/org/apache/ignite/spi/encryption/noop/NoopEncryptionSpi.java b/modules/core/src/main/java/org/apache/ignite/spi/encryption/noop/NoopEncryptionSpi.java new file mode 100644 index 0000000000000..1de64d9dcd510 --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/spi/encryption/noop/NoopEncryptionSpi.java @@ -0,0 +1,101 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements.
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi.encryption.noop; + +import java.io.Serializable; +import java.nio.ByteBuffer; +import org.apache.ignite.IgniteException; +import org.apache.ignite.spi.IgniteSpiAdapter; +import org.apache.ignite.spi.IgniteSpiException; +import org.apache.ignite.spi.IgniteSpiNoop; +import org.apache.ignite.spi.encryption.EncryptionSpi; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi; +import org.jetbrains.annotations.Nullable; + +/** + * No operation {@code EncryptionSPI} implementation. 
+ * + * @see EncryptionSpi + * @see KeystoreEncryptionSpi + */ +@IgniteSpiNoop +public class NoopEncryptionSpi extends IgniteSpiAdapter implements EncryptionSpi { + /** {@inheritDoc} */ + @Override public byte[] masterKeyDigest() { + return null; + } + + /** {@inheritDoc} */ + @Override public Serializable create() throws IgniteException { + throw new IgniteSpiException("You have to configure custom EncryptionSpi implementation."); + } + + /** {@inheritDoc} */ + @Override public void encrypt(ByteBuffer data, Serializable key, ByteBuffer res) { + throw new IgniteSpiException("You have to configure custom EncryptionSpi implementation."); + } + + /** {@inheritDoc} */ + @Override public void encryptNoPadding(ByteBuffer data, Serializable key, ByteBuffer res) { + throw new IgniteSpiException("You have to configure custom EncryptionSpi implementation."); + } + + /** {@inheritDoc} */ + @Override public byte[] decrypt(byte[] data, Serializable key) { + throw new IgniteSpiException("You have to configure custom EncryptionSpi implementation."); + } + + /** {@inheritDoc} */ + @Override public void decryptNoPadding(ByteBuffer data, Serializable key, ByteBuffer res) { + throw new IgniteSpiException("You have to configure custom EncryptionSpi implementation."); + } + + /** {@inheritDoc} */ + @Override public byte[] encryptKey(Serializable key) { + throw new IgniteSpiException("You have to configure custom EncryptionSpi implementation."); + } + + /** {@inheritDoc} */ + @Override public Serializable decryptKey(byte[] key) { + throw new IgniteSpiException("You have to configure custom EncryptionSpi implementation."); + } + + /** {@inheritDoc} */ + @Override public int encryptedSize(int dataSize) { + return dataSize; + } + + /** {@inheritDoc} */ + @Override public int encryptedSizeNoPadding(int dataSize) { + return dataSize; + } + + @Override public int blockSize() { + return 0; + } + + /** {@inheritDoc} */ + @Override public void spiStart(@Nullable String igniteInstanceName) throws 
IgniteSpiException { + // No-op. + } + + /** {@inheritDoc} */ + @Override public void spiStop() throws IgniteSpiException { + // No-op. + } +} diff --git a/modules/core/src/main/java/org/apache/ignite/spi/encryption/noop/package-info.java b/modules/core/src/main/java/org/apache/ignite/spi/encryption/noop/package-info.java new file mode 100644 index 0000000000000..a302c3aff70ab --- /dev/null +++ b/modules/core/src/main/java/org/apache/ignite/spi/encryption/noop/package-info.java @@ -0,0 +1,22 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * + * Contains no-op encryption SPI implementation. 
+ */ +package org.apache.ignite.spi.encryption.noop; diff --git a/modules/web-console/frontend/app/components/ignite-status/index.js b/modules/core/src/main/java/org/apache/ignite/spi/encryption/package-info.java similarity index 86% rename from modules/web-console/frontend/app/components/ignite-status/index.js rename to modules/core/src/main/java/org/apache/ignite/spi/encryption/package-info.java index a8520bc876952..9805aaf3b7426 100644 --- a/modules/web-console/frontend/app/components/ignite-status/index.js +++ b/modules/core/src/main/java/org/apache/ignite/spi/encryption/package-info.java @@ -15,8 +15,8 @@ * limitations under the License. */ -import angular from 'angular'; -import './style.scss'; - -export default angular - .module('ignite-console.ignite-status', []); +/** + * + * Contains APIs for encryption SPI. + */ +package org.apache.ignite.spi.encryption; diff --git a/modules/core/src/main/java/org/apache/ignite/ssl/DelegatingSSLContextSpi.java b/modules/core/src/main/java/org/apache/ignite/ssl/DelegatingSSLContextSpi.java index d8621f2cefd9a..360e92239c6bb 100644 --- a/modules/core/src/main/java/org/apache/ignite/ssl/DelegatingSSLContextSpi.java +++ b/modules/core/src/main/java/org/apache/ignite/ssl/DelegatingSSLContextSpi.java @@ -31,7 +31,6 @@ /** */ class DelegatingSSLContextSpi extends SSLContextSpi { - /** */ private final SSLContext delegate; @@ -57,8 +56,7 @@ class DelegatingSSLContextSpi extends SSLContextSpi { /** {@inheritDoc} */ @Override protected SSLServerSocketFactory engineGetServerSocketFactory() { - return new SSLServerSocketFactoryWrapper(delegate.getServerSocketFactory(), - parameters); + return new SSLServerSocketFactoryWrapper(delegate.getServerSocketFactory(), parameters); } /** {@inheritDoc} */ diff --git a/modules/core/src/main/java/org/apache/ignite/ssl/SSLContextWrapper.java b/modules/core/src/main/java/org/apache/ignite/ssl/SSLContextWrapper.java index 901d42b1f4ba9..67bc834d0886c 100644 --- 
a/modules/core/src/main/java/org/apache/ignite/ssl/SSLContextWrapper.java +++ b/modules/core/src/main/java/org/apache/ignite/ssl/SSLContextWrapper.java @@ -20,10 +20,15 @@ import javax.net.ssl.SSLContext; import javax.net.ssl.SSLParameters; -/** */ -class SSLContextWrapper extends SSLContext { - /** */ - SSLContextWrapper(SSLContext delegate, SSLParameters sslParameters) { +/** + * Wrapper for {@link SSLContext} that extends the source context with custom SSL parameters. + */ +public class SSLContextWrapper extends SSLContext { + /** + * @param delegate Wrapped SSL context. + * @param sslParameters Extended SSL parameters. + */ + public SSLContextWrapper(SSLContext delegate, SSLParameters sslParameters) { super(new DelegatingSSLContextSpi(delegate, sslParameters), delegate.getProvider(), delegate.getProtocol()); diff --git a/modules/core/src/main/java/org/apache/ignite/ssl/SSLSocketFactoryWrapper.java b/modules/core/src/main/java/org/apache/ignite/ssl/SSLSocketFactoryWrapper.java index bfe6d0d6f4bca..992f836702d62 100644 --- a/modules/core/src/main/java/org/apache/ignite/ssl/SSLSocketFactoryWrapper.java +++ b/modules/core/src/main/java/org/apache/ignite/ssl/SSLSocketFactoryWrapper.java @@ -24,9 +24,10 @@ import javax.net.ssl.SSLSocket; import javax.net.ssl.SSLSocketFactory; -/** */ +/** + * SSL socket factory that configures additional SSL parameters for created sockets. + */ class SSLSocketFactoryWrapper extends SSLSocketFactory { - /** */ private final SSLSocketFactory delegate; @@ -49,65 +50,46 @@ class SSLSocketFactoryWrapper extends SSLSocketFactory { return delegate.getSupportedCipherSuites(); } - /** {@inheritDoc} */ - @Override public Socket createSocket() throws IOException { - SSLSocket sock = (SSLSocket)delegate.createSocket(); - + /** + * Configures the socket with SSL parameters. + * + * @param socket Socket to configure. + * @return Configured socket.
+ */ + private Socket configureSocket(Socket socket) { if (parameters != null) - sock.setSSLParameters(parameters); + ((SSLSocket)socket).setSSLParameters(parameters); - return sock; + return socket; } /** {@inheritDoc} */ - @Override public Socket createSocket(Socket sock, String host, int port, boolean autoClose) throws IOException { - SSLSocket sslSock = (SSLSocket)delegate.createSocket(sock, host, port, autoClose); - - if (parameters != null) - sslSock.setSSLParameters(parameters); + @Override public Socket createSocket() throws IOException { + return configureSocket(delegate.createSocket()); + } - return sock; + /** {@inheritDoc} */ + @Override public Socket createSocket(Socket sock, String host, int port, boolean autoClose) throws IOException { + return configureSocket(delegate.createSocket(sock, host, port, autoClose)); } /** {@inheritDoc} */ @Override public Socket createSocket(String host, int port) throws IOException { - SSLSocket sock = (SSLSocket)delegate.createSocket(host, port); - - if (parameters != null) - sock.setSSLParameters(parameters); - - return sock; + return configureSocket(delegate.createSocket(host, port)); } /** {@inheritDoc} */ @Override public Socket createSocket(String host, int port, InetAddress locAddr, int locPort) throws IOException { - SSLSocket sock = (SSLSocket)delegate.createSocket(host, port, locAddr, locPort); - - if (parameters != null) - sock.setSSLParameters(parameters); - - return sock; + return configureSocket(delegate.createSocket(host, port, locAddr, locPort)); } /** {@inheritDoc} */ @Override public Socket createSocket(InetAddress addr, int port) throws IOException { - SSLSocket sock = (SSLSocket)delegate.createSocket(addr, port); - - if (parameters != null) - sock.setSSLParameters(parameters); - - return sock; + return configureSocket(delegate.createSocket(addr, port)); } /** {@inheritDoc} */ - @Override public Socket createSocket(InetAddress addr, int port, InetAddress locAddr, - int locPort) throws IOException { - 
SSLSocket sock = (SSLSocket)delegate.createSocket(addr, port, locAddr, locPort); - - if (parameters != null) - sock.setSSLParameters(parameters); - - return sock; + @Override public Socket createSocket(InetAddress addr, int port, InetAddress locAddr, int locPort) throws IOException { + return configureSocket(delegate.createSocket(addr, port, locAddr, locPort)); } - } diff --git a/modules/core/src/main/java/org/apache/ignite/ssl/SslContextFactory.java b/modules/core/src/main/java/org/apache/ignite/ssl/SslContextFactory.java index edff5c9990373..fb370a57babce 100644 --- a/modules/core/src/main/java/org/apache/ignite/ssl/SslContextFactory.java +++ b/modules/core/src/main/java/org/apache/ignite/ssl/SslContextFactory.java @@ -93,7 +93,7 @@ public class SslContextFactory implements Factory { /** Enabled cipher suites. */ private String[] cipherSuites; - /** Enabled cipher suites. */ + /** Enabled protocols. */ private String[] protocols; /** @@ -289,6 +289,7 @@ public static TrustManager getDisabledTrustManager() { /** * Sets enabled cipher suites. + * * @param cipherSuites enabled cipher suites. */ public void setCipherSuites(String... cipherSuites) { @@ -296,7 +297,8 @@ public void setCipherSuites(String... cipherSuites) { } /** - * Gets enabled cipher suites + * Gets enabled cipher suites. + * * @return enabled cipher suites */ public String[] getCipherSuites() { @@ -304,8 +306,9 @@ public String[] getCipherSuites() { } /** - * Gets enabled cipher suites - * @return enabled cipher suites + * Gets enabled protocols. + * + * @return Enabled protocols. */ public String[] getProtocols() { return protocols; @@ -313,7 +316,8 @@ public String[] getProtocols() { /** * Sets enabled protocols. - * @param protocols enabled protocols. + * + * @param protocols Enabled protocols. */ public void setProtocols(String... 
protocols) { this.protocols = protocols; @@ -515,4 +519,4 @@ private static class DisabledX509TrustManager implements X509TrustManager { throw new IgniteException(e); } } -} \ No newline at end of file +} diff --git a/modules/core/src/main/java/org/apache/ignite/thread/IgniteTaskTrackingThreadPoolExecutor.java b/modules/core/src/main/java/org/apache/ignite/thread/IgniteTaskTrackingThreadPoolExecutor.java deleted file mode 100644 index 6cae57eb78e6a..0000000000000 --- a/modules/core/src/main/java/org/apache/ignite/thread/IgniteTaskTrackingThreadPoolExecutor.java +++ /dev/null @@ -1,180 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.thread; - -import java.lang.Thread.UncaughtExceptionHandler; -import java.util.concurrent.BlockingQueue; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.ThreadFactory; -import java.util.concurrent.atomic.AtomicReference; -import java.util.concurrent.atomic.LongAdder; -import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.IgniteException; -import org.apache.ignite.internal.managers.communication.GridIoPolicy; - -/** - * An {@link ExecutorService} that executes submitted tasks using pooled grid threads. 
- * - * In addition to what it allows to track all enqueued tasks completion or failure during execution. - */ -public class IgniteTaskTrackingThreadPoolExecutor extends IgniteThreadPoolExecutor { - /** */ - private final LongAdder pendingTaskCnt = new LongAdder(); - - /** */ - private final LongAdder completedTaskCnt = new LongAdder(); - - /** */ - private volatile boolean initialized; - - /** */ - private volatile AtomicReference err = new AtomicReference<>(); - - /** - * Creates a new service with the given initial parameters. - * - * @param threadNamePrefix Will be added at the beginning of all created threads. - * @param igniteInstanceName Must be the name of the grid. - * @param corePoolSize The number of threads to keep in the pool, even if they are idle. - * @param maxPoolSize The maximum number of threads to allow in the pool. - * @param keepAliveTime When the number of threads is greater than the core, this is the maximum time - * that excess idle threads will wait for new tasks before terminating. - * @param workQ The queue to use for holding tasks before they are executed. This queue will hold only - * runnable tasks submitted by the {@link #execute(Runnable)} method. - */ - public IgniteTaskTrackingThreadPoolExecutor(String threadNamePrefix, String igniteInstanceName, int corePoolSize, - int maxPoolSize, long keepAliveTime, BlockingQueue workQ) { - super(threadNamePrefix, igniteInstanceName, corePoolSize, maxPoolSize, keepAliveTime, workQ); - } - - /** - * Creates a new service with the given initial parameters. - * - * @param threadNamePrefix Will be added at the beginning of all created threads. - * @param igniteInstanceName Must be the name of the grid. - * @param corePoolSize The number of threads to keep in the pool, even if they are idle. - * @param maxPoolSize The maximum number of threads to allow in the pool. 
- * @param keepAliveTime When the number of threads is greater than the core, this is the maximum time - * that excess idle threads will wait for new tasks before terminating. - * @param workQ The queue to use for holding tasks before they are executed. This queue will hold only - * runnable tasks submitted by the {@link #execute(Runnable)} method. - * @param plc {@link GridIoPolicy} for thread pool. - * @param eHnd Uncaught exception handler for thread pool. - */ - public IgniteTaskTrackingThreadPoolExecutor(String threadNamePrefix, String igniteInstanceName, int corePoolSize, - int maxPoolSize, long keepAliveTime, BlockingQueue workQ, byte plc, - UncaughtExceptionHandler eHnd) { - super(threadNamePrefix, igniteInstanceName, corePoolSize, maxPoolSize, keepAliveTime, workQ, plc, eHnd); - } - - /** - * Creates a new service with the given initial parameters. - * - * @param corePoolSize The number of threads to keep in the pool, even if they are idle. - * @param maxPoolSize The maximum number of threads to allow in the pool. - * @param keepAliveTime When the number of threads is greater than the core, this is the maximum time - * that excess idle threads will wait for new tasks before terminating. - * @param workQ The queue to use for holding tasks before they are executed. This queue will hold only the - * runnable tasks submitted by the {@link #execute(Runnable)} method. - * @param threadFactory Thread factory. 
- */ - public IgniteTaskTrackingThreadPoolExecutor(int corePoolSize, int maxPoolSize, long keepAliveTime, - BlockingQueue workQ, ThreadFactory threadFactory) { - super(corePoolSize, maxPoolSize, keepAliveTime, workQ, threadFactory); - } - - /** {@inheritDoc} */ - @Override public void execute(Runnable cmd) { - pendingTaskCnt.add(1); - - super.execute(cmd); - } - - /** {@inheritDoc} */ - @Override protected void afterExecute(Runnable r, Throwable t) { - super.afterExecute(r, t); - - completedTaskCnt.add(1); - - if (t != null && err.compareAndSet(null, t) || isDone()) { - synchronized (this) { - notifyAll(); - } - } - } - - /** - * Mark this executor as initialized. - * This method should be called when all required tasks are enqueued for execution. - */ - public final void markInitialized() { - initialized = true; - } - - /** - * Check error status. - * - * @return {@code True} if any task execution resulted in error. - */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public final boolean isError() { - return err.get() != null; - } - - /** - * Check done status. - * - * @return {@code True} when all enqueued task are completed. - */ - public final boolean isDone() { - return initialized && completedTaskCnt.sum() == pendingTaskCnt.sum(); - } - - /** - * Wait synchronously until all tasks are completed or error has occurred. - * - * @throws IgniteCheckedException if task execution resulted in error. - */ - public final synchronized void awaitDone() throws IgniteCheckedException { - // There are no guarantee what all enqueued tasks will be finished if an error has occurred. - while(!isError() && !isDone()) { - try { - wait(); - } - catch (InterruptedException e) { - err.set(e); - - Thread.currentThread().interrupt(); - } - } - - if (isError()) - throw new IgniteCheckedException("Task execution resulted in error", err.get()); - } - - /** - * Reset tasks tracking context. - * The method should be called before adding new tasks to the executor. 
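The executor removed above tracked completion by counting submitted versus finished tasks and waking waiters once the counts match after a `markInitialized()` call. That scheme can be illustrated standalone (a minimal sketch with hypothetical names, not the deleted Ignite class — it omits the error tracking and `reset()` support the original had):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

/**
 * Standalone sketch of the task-tracking idea: count submitted vs. completed
 * tasks and wake waiters once the two counters match after initialization.
 */
public class TrackingExecutorSketch {
    private final ExecutorService delegate;
    private final LongAdder pending = new LongAdder();
    private final LongAdder completed = new LongAdder();
    private volatile boolean initialized;

    public TrackingExecutorSketch(ExecutorService delegate) {
        this.delegate = delegate;
    }

    /** Counts the task as pending, then completed, notifying waiters on completion. */
    public void execute(Runnable task) {
        pending.add(1);

        delegate.execute(() -> {
            try {
                task.run();
            }
            finally {
                completed.add(1);

                synchronized (this) {
                    notifyAll(); // Wake awaitDone() waiters.
                }
            }
        });
    }

    /** Call after the last task is enqueued; before that isDone() must not report true. */
    public void markInitialized() {
        initialized = true;

        synchronized (this) {
            notifyAll();
        }
    }

    /** True when all enqueued tasks have completed. */
    public boolean isDone() {
        return initialized && completed.sum() == pending.sum();
    }

    /** Blocks until every enqueued task has completed. */
    public synchronized void awaitDone() throws InterruptedException {
        while (!isDone())
            wait();
    }

    public void shutdown() throws InterruptedException {
        delegate.shutdown();
        delegate.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

The `markInitialized()` gate matters: without it, a waiter could observe `completed == pending` while tasks are still being enqueued and wake up too early.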
- */ - public final void reset() { - initialized = false; - completedTaskCnt.reset(); - pendingTaskCnt.reset(); - err.set(null); - } -} diff --git a/modules/core/src/main/java/org/apache/ignite/thread/IgniteThread.java b/modules/core/src/main/java/org/apache/ignite/thread/IgniteThread.java index 6f65e0e34d39b..e37e6a749bc7c 100644 --- a/modules/core/src/main/java/org/apache/ignite/thread/IgniteThread.java +++ b/modules/core/src/main/java/org/apache/ignite/thread/IgniteThread.java @@ -57,10 +57,10 @@ public class IgniteThread extends Thread { private final byte plc; /** */ - private boolean executingEntryProcessor; + private boolean holdsTopLock; /** */ - private boolean holdsTopLock; + private boolean forbiddenToRequestBinaryMetadata; /** * Creates thread with given worker. @@ -164,17 +164,10 @@ public void compositeRwLockIndex(int compositeRwLockIdx) { } /** - * @return {@code True} if thread is currently executing entry processor. - */ - public boolean executingEntryProcessor() { - return executingEntryProcessor; - } - - /** - * @return {@code True} if thread is currently holds topology lock. + * @return {@code True} if thread is not allowed to request binary metadata to avoid potential deadlock. 
*/ - public boolean holdsTopLock() { - return holdsTopLock; + public boolean isForbiddenToRequestBinaryMetadata() { + return holdsTopLock || forbiddenToRequestBinaryMetadata; } /** @@ -183,11 +176,8 @@ public boolean holdsTopLock() { public static void onEntryProcessorEntered(boolean holdsTopLock) { Thread curThread = Thread.currentThread(); - if (curThread instanceof IgniteThread) { - ((IgniteThread)curThread).executingEntryProcessor = true; - + if (curThread instanceof IgniteThread) ((IgniteThread)curThread).holdsTopLock = holdsTopLock; - } } /** @@ -197,7 +187,27 @@ public static void onEntryProcessorLeft() { Thread curThread = Thread.currentThread(); if (curThread instanceof IgniteThread) - ((IgniteThread)curThread).executingEntryProcessor = false; + ((IgniteThread)curThread).holdsTopLock = false; + } + + /** + * Callback on entering critical section where binary metadata requests are forbidden. + */ + public static void onForbidBinaryMetadataRequestSectionEntered() { + Thread curThread = Thread.currentThread(); + + if (curThread instanceof IgniteThread) + ((IgniteThread)curThread).forbiddenToRequestBinaryMetadata = true; + } + + /** + * Callback on leaving critical section where binary metadata requests are forbidden. 
+ */ + public static void onForbidBinaryMetadataRequestSectionLeft() { + Thread curThread = Thread.currentThread(); + + if (curThread instanceof IgniteThread) + ((IgniteThread)curThread).forbiddenToRequestBinaryMetadata = false; } /** diff --git a/modules/core/src/main/resources/META-INF/classnames.properties b/modules/core/src/main/resources/META-INF/classnames.properties index a0e2ba79eba8b..1b65642f77160 100644 --- a/modules/core/src/main/resources/META-INF/classnames.properties +++ b/modules/core/src/main/resources/META-INF/classnames.properties @@ -1021,11 +1021,6 @@ org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogMan org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileArchiver$2 org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileCompressor$1 org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$RecordsIterator -org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$7 -org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileArchiver$1 -org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileArchiver$2 -org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileCompressor$1 -org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$RecordsIterator org.apache.ignite.internal.processors.cache.persistence.wal.SegmentEofException org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer$BufferMode org.apache.ignite.internal.processors.cache.persistence.wal.SingleSegmentLogicalRecordsIterator @@ -2206,6 +2201,7 @@ org.apache.ignite.internal.visor.util.VisorEventMapper org.apache.ignite.internal.visor.util.VisorExceptionWrapper org.apache.ignite.internal.visor.util.VisorTaskUtils$4 
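The `IgniteThread` change above replaces two purpose-specific flags with a single "forbidden to request binary metadata" guard, toggled by static enter/leave callbacks. The underlying pattern — a custom `Thread` subclass carrying a per-thread flag that code deep in the call stack can check — can be sketched in isolation (hypothetical names, not the Ignite API):

```java
/**
 * Sketch of the per-thread guard-flag pattern: a custom Thread carries a
 * boolean that static enter/leave callbacks toggle, so code anywhere down
 * the call stack can refuse an operation inside a critical section.
 * The field needs no synchronization: only its own thread reads or writes it.
 */
public class GuardedThread extends Thread {
    /** Set while this thread is inside the forbidden section. */
    private boolean forbidden;

    public GuardedThread(Runnable r) {
        super(r);
    }

    /** Callback on entering the critical section. No-op on foreign threads. */
    public static void enterForbiddenSection() {
        Thread t = Thread.currentThread();

        if (t instanceof GuardedThread)
            ((GuardedThread)t).forbidden = true;
    }

    /** Callback on leaving the critical section. */
    public static void leaveForbiddenSection() {
        Thread t = Thread.currentThread();

        if (t instanceof GuardedThread)
            ((GuardedThread)t).forbidden = false;
    }

    /** True only on a GuardedThread currently inside the section. */
    public static boolean isForbidden() {
        Thread t = Thread.currentThread();

        return t instanceof GuardedThread && ((GuardedThread)t).forbidden;
    }
}
```

In practice the enter/leave pair would be wrapped in `try`/`finally` so an exception inside the section cannot leave the flag stuck.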
org.apache.ignite.internal.visor.verify.IndexValidationIssue +org.apache.ignite.internal.visor.verify.IndexIntegrityCheckIssue org.apache.ignite.internal.visor.verify.ValidateIndexesPartitionResult org.apache.ignite.internal.visor.verify.VisorContentionJobResult org.apache.ignite.internal.visor.verify.VisorContentionTask diff --git a/modules/core/src/test/config/log4j-tc-test.xml b/modules/core/src/test/config/log4j-tc-test.xml new file mode 100644 index 0000000000000..302913e2a6969 --- /dev/null +++ b/modules/core/src/test/config/log4j-tc-test.xml @@ -0,0 +1,121 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/modules/core/src/test/java/org/apache/ignite/GridCacheAffinityBackupsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/GridCacheAffinityBackupsSelfTest.java index 5e7b4e26b2bcf..5e6bc25250e98 100644 --- a/modules/core/src/test/java/org/apache/ignite/GridCacheAffinityBackupsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/GridCacheAffinityBackupsSelfTest.java @@ -25,18 +25,16 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests affinity function with different number of backups. */ +@RunWith(JUnit4.class) public class GridCacheAffinityBackupsSelfTest extends GridCommonAbstractTest { - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Number of backups. 
*/ private int backups; @@ -47,8 +45,6 @@ public class GridCacheAffinityBackupsSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setCacheMode(CacheMode.PARTITIONED); @@ -63,6 +59,7 @@ public class GridCacheAffinityBackupsSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testRendezvousBackups() throws Exception { for (int i = 0; i < nodesCnt; i++) checkBackups(i); @@ -97,4 +94,4 @@ private void checkBackups(int backups) throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/GridSuppressedExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/GridSuppressedExceptionSelfTest.java index 55e54fb884fac..2db1a9e205ae5 100644 --- a/modules/core/src/test/java/org/apache/ignite/GridSuppressedExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/GridSuppressedExceptionSelfTest.java @@ -19,16 +19,23 @@ import java.io.IOException; import java.util.List; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.X; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertTrue; /** * */ -public class GridSuppressedExceptionSelfTest extends TestCase { +public class GridSuppressedExceptionSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testHasCause() throws Exception { IgniteCheckedException me = prepareMultiException(); @@ -40,6 +47,7 @@ public void testHasCause() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetCause() throws Exception { IgniteCheckedException me = prepareMultiException(); @@ -55,6 +63,7 @@ public void testGetCause() throws Exception { /** * @throws Exception If failed. */ + @Test public void testXHasCause() throws Exception { IgniteCheckedException me = prepareMultiException(); @@ -71,6 +80,7 @@ public void testXHasCause() throws Exception { /** * @throws Exception If failed. */ + @Test public void testXGetSuppressedList() throws Exception { IgniteCheckedException me = prepareMultiException(); @@ -91,6 +101,7 @@ public void testXGetSuppressedList() throws Exception { /** * @throws Exception If failed. */ + @Test public void testXCause() throws Exception { IgniteCheckedException me = prepareMultiException(); diff --git a/modules/core/src/test/java/org/apache/ignite/IgniteCacheAffinitySelfTest.java b/modules/core/src/test/java/org/apache/ignite/IgniteCacheAffinitySelfTest.java index cf54949f7800b..001fa18365b31 100644 --- a/modules/core/src/test/java/org/apache/ignite/IgniteCacheAffinitySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/IgniteCacheAffinitySelfTest.java @@ -32,6 +32,9 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.affinity.GridAffinityProcessor; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -39,6 +42,7 @@ /** * Tests for {@link GridAffinityProcessor.CacheAffinityProxy}. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheAffinitySelfTest extends IgniteCacheAbstractTest { /** Initial grid count. */ private int GRID_CNT = 3; @@ -87,6 +91,7 @@ public class IgniteCacheAffinitySelfTest extends IgniteCacheAbstractTest { /** * @throws Exception if failed. */ + @Test public void testAffinity() throws Exception { checkAffinity(); @@ -301,4 +306,4 @@ private static void checkEqualPartitionMaps(Map map1, Map< private Collection nodes() { return grid(0).cluster().nodes(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/IgniteCacheEntryProcessorSequentialCallTest.java b/modules/core/src/test/java/org/apache/ignite/cache/IgniteCacheEntryProcessorSequentialCallTest.java index 165bca78b2aa8..94c0bc61e862e 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/IgniteCacheEntryProcessorSequentialCallTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/IgniteCacheEntryProcessorSequentialCallTest.java @@ -19,6 +19,8 @@ import java.util.concurrent.Callable; import java.util.concurrent.CountDownLatch; +import javax.cache.processor.EntryProcessorException; +import javax.cache.processor.MutableEntry; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; @@ -28,13 +30,24 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; -import javax.cache.processor.EntryProcessorException; -import javax.cache.processor.MutableEntry; import org.apache.ignite.transactions.TransactionOptimisticException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class IgniteCacheEntryProcessorSequentialCallTest extends GridCommonAbstractTest { + /** */ + private static final String CACHE = "cache"; + + /** */ + private static final 
String MVCC_CACHE = "mvccCache"; + + /** */ + private String cacheName; + /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrids(2); @@ -45,11 +58,34 @@ public class IgniteCacheEntryProcessorSequentialCallTest extends GridCommonAbstr stopAllGrids(); } + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + cacheName = CACHE; + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - CacheConfiguration cacheCfg = new CacheConfiguration("cache"); + CacheConfiguration ccfg = cacheConfiguration(CACHE); + + CacheConfiguration mvccCfg = cacheConfiguration(MVCC_CACHE) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + + cfg.setCacheConfiguration(ccfg, mvccCfg); + + return cfg; + } + + /** + * + * @param name Cache name. + * @return Cache configuration.

+ */ + private CacheConfiguration cacheConfiguration(String name) { + CacheConfiguration cacheCfg = new CacheConfiguration(name); cacheCfg.setCacheMode(CacheMode.PARTITIONED); cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); @@ -57,15 +93,13 @@ public class IgniteCacheEntryProcessorSequentialCallTest extends GridCommonAbstr cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); cacheCfg.setMaxConcurrentAsyncOperations(0); cacheCfg.setBackups(0); - - cfg.setCacheConfiguration(cacheCfg); - - return cfg; + return cacheCfg; } /** * */ + @Test public void testOptimisticSerializableTxInvokeSequentialCall() throws Exception { transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE); @@ -75,6 +109,7 @@ public void testOptimisticSerializableTxInvokeSequentialCall() throws Exception /** * */ + @Test public void testOptimisticRepeatableReadTxInvokeSequentialCall() throws Exception { transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ); @@ -84,6 +119,7 @@ public void testOptimisticRepeatableReadTxInvokeSequentialCall() throws Exceptio /** * */ + @Test public void testOptimisticReadCommittedTxInvokeSequentialCall() throws Exception { transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.READ_COMMITTED); @@ -93,6 +129,7 @@ public void testOptimisticReadCommittedTxInvokeSequentialCall() throws Exception /** * */ + @Test public void testPessimisticSerializableTxInvokeSequentialCall() throws Exception { transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE); @@ -102,6 +139,7 @@ public void testPessimisticSerializableTxInvokeSequentialCall() throws Exception /** * */ + @Test public void testPessimisticRepeatableReadTxInvokeSequentialCall() throws Exception { 
transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ); @@ -111,12 +149,25 @@ public void testPessimisticRepeatableReadTxInvokeSequentialCall() throws Excepti /** * */ + @Test public void testPessimisticReadCommittedTxInvokeSequentialCall() throws Exception { transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.READ_COMMITTED); transactionInvokeSequentialCallOnNearNode(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.READ_COMMITTED); } + /** + * + */ + @Test + public void testMvccTxInvokeSequentialCall() throws Exception { + cacheName = MVCC_CACHE; + + transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ); + + transactionInvokeSequentialCallOnNearNode(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ); + } + /** * Test for sequential entry processor invoking not null value on primary cache. * In this test entry processor gets value from local node. 
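The "sequential invoke on the same key" semantics exercised by the tests above can be pictured outside Ignite with `ConcurrentHashMap.compute`, which also runs its remapping function atomically per key, so back-to-back calls each observe the value left by the previous one (an analogy with hypothetical names, not the Ignite `invoke` API):

```java
import java.util.concurrent.ConcurrentHashMap;

/**
 * Analogy for sequential entry-processor invocations: compute() runs the
 * remapping function atomically for its key, so consecutive invocations
 * each see the value produced by the previous one.
 */
public class SequentialInvokeSketch {
    private final ConcurrentHashMap<Long, String> cache = new ConcurrentHashMap<>();

    public void put(long key, String val) {
        cache.put(key, val);
    }

    /** Mimics a "not null" entry processor: appends a suffix, failing on absent entries. */
    public String invokeAppend(long key, String suffix) {
        return cache.compute(key, (k, old) -> {
            if (old == null)
                throw new IllegalStateException("Entry not found for key: " + k);

            return old + suffix;
        });
    }
}
```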
@@ -132,12 +183,12 @@ public void transactionInvokeSequentialCallOnPrimaryNode(TransactionConcurrency Ignite primaryIgnite; - if (ignite(0).affinity("cache").isPrimary(ignite(0).cluster().localNode(), key)) + if (ignite(0).affinity(cacheName).isPrimary(ignite(0).cluster().localNode(), key)) primaryIgnite = ignite(0); else primaryIgnite = ignite(1); - IgniteCache cache = primaryIgnite.cache("cache"); + IgniteCache cache = primaryIgnite.cache(cacheName); cache.put(key, val); @@ -171,7 +222,7 @@ public void transactionInvokeSequentialCallOnNearNode(TransactionConcurrency tra Ignite nearIgnite; Ignite primaryIgnite; - if (ignite(0).affinity("cache").isPrimary(ignite(0).cluster().localNode(), key)) { + if (ignite(0).affinity(cacheName).isPrimary(ignite(0).cluster().localNode(), key)) { primaryIgnite = ignite(0); nearIgnite = ignite(1); @@ -182,9 +233,9 @@ public void transactionInvokeSequentialCallOnNearNode(TransactionConcurrency tra nearIgnite = ignite(0); } - primaryIgnite.cache("cache").put(key, val); + primaryIgnite.cache(cacheName).put(key, val); - IgniteCache nearCache = nearIgnite.cache("cache"); + IgniteCache nearCache = nearIgnite.cache(cacheName); NotNullCacheEntryProcessor cacheEntryProcessor = new NotNullCacheEntryProcessor(); @@ -197,17 +248,19 @@ public void transactionInvokeSequentialCallOnNearNode(TransactionConcurrency tra transaction.commit(); } - primaryIgnite.cache("cache").remove(key); + primaryIgnite.cache(cacheName).remove(key); } /** * Test for sequential entry processor invocation. During transaction value is changed externally, which leads to * optimistic conflict exception. 
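The optimistic-conflict scenario the javadoc describes — a value changed externally between the transactional read and the commit — is the same failure mode as a lost compare-and-set. A standalone analogy (plain JDK, not the Ignite transaction machinery):

```java
import java.util.concurrent.atomic.AtomicReference;

/**
 * Analogy for an optimistic-transaction conflict: read a value, let another
 * writer change it, then attempt a CAS against the stale read. The CAS fails,
 * just as an OPTIMISTIC/SERIALIZABLE commit fails after an external update,
 * and the caller is expected to re-read and retry.
 */
public class OptimisticConflictSketch {
    /** Attempts the "commit": succeeds only if the value is still what was read. */
    public static boolean tryUpdate(AtomicReference<String> ref, String expectedRead, String newVal) {
        return ref.compareAndSet(expectedRead, newVal);
    }
}
```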
*/ + @Test + @SuppressWarnings("ThrowableNotThrown") public void testTxInvokeSequentialOptimisticConflict() throws Exception { TestKey key = new TestKey(1L); - IgniteCache cache = ignite(0).cache("cache"); + IgniteCache cache = ignite(0).cache(CACHE); CountDownLatch latch = new CountDownLatch(1); @@ -308,4 +361,4 @@ public void value(String val) { this.val = val; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/IgniteWarmupClosureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/IgniteWarmupClosureSelfTest.java index 1fccb904313d6..f15d70f9b74b4 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/IgniteWarmupClosureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/IgniteWarmupClosureSelfTest.java @@ -19,19 +19,17 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.startup.BasicWarmupClosure; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteWarmupClosureSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ public IgniteWarmupClosureSelfTest(){ super(false); @@ -41,14 +39,8 @@ public IgniteWarmupClosureSelfTest(){ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - 
cfg.getTransactionConfiguration().setTxSerializableEnabled(true); - cfg.setDiscoverySpi(disco); - BasicWarmupClosure warmupClosure = new BasicWarmupClosure(); warmupClosure.setGridCount(2); @@ -72,7 +64,8 @@ public IgniteWarmupClosureSelfTest(){ /** * @throws Exception If failed. */ + @Test public void testWarmupClosure() throws Exception { startGrid(1); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/LargeEntryUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/cache/LargeEntryUpdateTest.java index 008da71b6679a..e842a10165f51 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/LargeEntryUpdateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/LargeEntryUpdateTest.java @@ -31,10 +31,14 @@ import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class LargeEntryUpdateTest extends GridCommonAbstractTest { /** */ private static final int THREAD_COUNT = 10; @@ -95,6 +99,7 @@ public class LargeEntryUpdateTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testEntryUpdate() throws Exception { try (Ignite ignite = startGrid()) { for (int i = 0; i < CACHE_COUNT; ++i) { diff --git a/modules/core/src/test/java/org/apache/ignite/cache/ResetLostPartitionTest.java b/modules/core/src/test/java/org/apache/ignite/cache/ResetLostPartitionTest.java new file mode 100644 index 0000000000000..05f6d8e2f72b2 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/cache/ResetLostPartitionTest.java @@ -0,0 +1,262 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.cache; + +import java.util.Arrays; +import java.util.List; +import java.util.stream.Collectors; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgnitionEx; +import org.apache.ignite.internal.processors.cache.CacheGroupContext; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.LOST; +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; +import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; + +/** + * + */ +@RunWith(JUnit4.class) +public class ResetLostPartitionTest extends GridCommonAbstractTest { + /** Cache name. */ + private static final String[] CACHE_NAMES = {"cacheOne", "cacheTwo", "cacheThree"}; + /** Cache size */ + public static final int CACHE_SIZE = 100000 / CACHE_NAMES.length; + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10560"); + + super.beforeTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + super.afterTest(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setConsistentId(igniteInstanceName); + + DataStorageConfiguration storageCfg = new DataStorageConfiguration(); + + storageCfg.getDefaultDataRegionConfiguration() + .setPersistenceEnabled(true) + .setMaxSize(500L * 1024 * 1024); + + cfg.setDataStorageConfiguration(storageCfg); + + CacheConfiguration[] ccfg = new CacheConfiguration[] { + cacheConfiguration(CACHE_NAMES[0], CacheAtomicityMode.ATOMIC), + cacheConfiguration(CACHE_NAMES[1], CacheAtomicityMode.ATOMIC), + cacheConfiguration(CACHE_NAMES[2], CacheAtomicityMode.TRANSACTIONAL) + }; + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** + * @param cacheName Cache name. + * @param mode Cache atomicity mode. + * @return Configured cache configuration. 
+ */ + private CacheConfiguration cacheConfiguration(String cacheName, CacheAtomicityMode mode) { + return new CacheConfiguration<>(cacheName) + .setCacheMode(CacheMode.PARTITIONED) + .setAtomicityMode(mode) + .setBackups(1) + .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) + .setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE) + .setAffinity(new RendezvousAffinityFunction(false, 1024)) + .setIndexedTypes(String.class, String.class); + } + + /** Client configuration. */ + private IgniteConfiguration getClientConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setPeerClassLoadingEnabled(true); + cfg.setClientMode(true); + + return cfg; + } + + /** + * Test to restore lost partitions after grid reactivation. + * + * @throws Exception if failed. + */ + @Test + public void testReactivateGridBeforeResetLostPartitions() throws Exception { + doRebalanceAfterPartitionsWereLost(true); + } + + /** + * Test to restore lost partitions on a working grid. + * + * @throws Exception if failed. + */ + @Test + public void testResetLostPartitions() throws Exception { + doRebalanceAfterPartitionsWereLost(false); + } + + /** + * @param reactivateGridBeforeResetPart Whether to reactivate the grid before resetting lost partitions. + * @throws Exception if failed. + */ + private void doRebalanceAfterPartitionsWereLost(boolean reactivateGridBeforeResetPart) throws Exception { + startGrids(3); + + grid(0).cluster().active(true); + + Ignite igniteClient = startGrid(getClientConfiguration("client")); + + for (String cacheName : CACHE_NAMES) { + IgniteCache cache = igniteClient.cache(cacheName); + + for (int j = 0; j < CACHE_SIZE; j++) + cache.put(j, "Value" + j); + } + + stopGrid("client"); + + String dn2DirName = grid(1).name().replace(".", "_"); + + stopGrid(1); + + //Clean up the PDS and WAL for the second data node.
+ U.delete(U.resolveWorkDirectory(U.defaultWorkDirectory(), DFLT_STORE_DIR + "/" + dn2DirName, true)); + + U.delete(U.resolveWorkDirectory(U.defaultWorkDirectory(), DFLT_STORE_DIR + "/wal/" + dn2DirName, true)); + + //Here we have two from three data nodes and cache with 1 backup. So there is no data loss expected. + assertEquals(CACHE_NAMES.length * CACHE_SIZE, averageSizeAroundAllNodes()); + + //Start node 2 with empty PDS. Rebalance will be started. + startGrid(1); + + //During rebalance stop node 3. Rebalance will be stopped. + stopGrid(2); + + //Start node 3. + startGrid(2); + + //Loss data expected because rebalance to node 1 have not finished and node 2 was stopped. + assertTrue(CACHE_NAMES.length * CACHE_SIZE > averageSizeAroundAllNodes()); + + for (String cacheName : CACHE_NAMES) { + //Node 1 will have only OWNING partitions. + assertTrue(getPartitionsStates(0, cacheName).stream().allMatch(state -> state == OWNING)); + + //Node 3 will have only OWNING partitions. + assertTrue(getPartitionsStates(2, cacheName).stream().allMatch(state -> state == OWNING)); + } + + boolean hasLost = false; + for (String cacheName : CACHE_NAMES) { + //Node 2 will have OWNING and LOST partitions. + hasLost |= getPartitionsStates(1, cacheName).stream().anyMatch(state -> state == LOST); + } + + assertTrue(hasLost); + + if (reactivateGridBeforeResetPart) { + grid(0).cluster().active(false); + grid(0).cluster().active(true); + } + + //Try to reset lost partitions. + grid(2).resetLostPartitions(Arrays.asList(CACHE_NAMES)); + + awaitPartitionMapExchange(); + + for (String cacheName : CACHE_NAMES) { + //Node 2 will have only OWNING partitions. + assertTrue(getPartitionsStates(1, cacheName).stream().allMatch(state -> state == OWNING)); + } + + //All data was back. + assertEquals(CACHE_NAMES.length * CACHE_SIZE, averageSizeAroundAllNodes()); + + //Stop node 2 for checking rebalance correctness from this node. + stopGrid(2); + + //Rebalance should be successfully finished. 
+ assertEquals(CACHE_NAMES.length * CACHE_SIZE, averageSizeAroundAllNodes()); + } + + /** + * @param gridNumber Grid number. + * @param cacheName Cache name. + * @return Partitions states for given cache name. + */ + private List getPartitionsStates(int gridNumber, String cacheName) { + CacheGroupContext cgCtx = grid(gridNumber).context().cache().cacheGroup(CU.cacheId(cacheName)); + + GridDhtPartitionTopologyImpl top = (GridDhtPartitionTopologyImpl)cgCtx.topology(); + + return top.localPartitions().stream() + .map(GridDhtLocalPartition::state) + .collect(Collectors.toList()); + } + + /** + * Checks that all nodes see the correct size. + */ + private int averageSizeAroundAllNodes() { + int totalSize = 0; + + for (Ignite ignite : IgnitionEx.allGrids()) { + for (String cacheName : CACHE_NAMES) { + totalSize += ignite.cache(cacheName).size(); + } + } + + return totalSize / IgnitionEx.allGrids().size(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AbstractAffinityFunctionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AbstractAffinityFunctionSelfTest.java index 8f8d78a96803b..d11609ee54d9f 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AbstractAffinityFunctionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AbstractAffinityFunctionSelfTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.internal.processors.affinity.GridAffinityFunctionContextImpl; import org.apache.ignite.testframework.GridTestNode; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class AbstractAffinityFunctionSelfTest extends GridCommonAbstractTest { /** MAC prefix. 
*/ private static final String MAC_PREF = "MAC"; @@ -50,6 +54,7 @@ public abstract class AbstractAffinityFunctionSelfTest extends GridCommonAbstrac /** * @throws Exception If failed. */ + @Test public void testNodeRemovedNoBackups() throws Exception { checkNodeRemoved(0); } @@ -57,6 +62,7 @@ public void testNodeRemovedNoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeRemovedOneBackup() throws Exception { checkNodeRemoved(1); } @@ -64,6 +70,7 @@ public void testNodeRemovedOneBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeRemovedTwoBackups() throws Exception { checkNodeRemoved(2); } @@ -71,6 +78,7 @@ public void testNodeRemovedTwoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeRemovedThreeBackups() throws Exception { checkNodeRemoved(3); } @@ -78,6 +86,7 @@ public void testNodeRemovedThreeBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRandomReassignmentNoBackups() throws Exception { checkRandomReassignment(0); } @@ -85,6 +94,7 @@ public void testRandomReassignmentNoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRandomReassignmentOneBackup() throws Exception { checkRandomReassignment(1); } @@ -92,6 +102,7 @@ public void testRandomReassignmentOneBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRandomReassignmentTwoBackups() throws Exception { checkRandomReassignment(2); } @@ -99,6 +110,7 @@ public void testRandomReassignmentTwoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRandomReassignmentThreeBackups() throws Exception { checkRandomReassignment(3); } @@ -107,6 +119,7 @@ public void testRandomReassignmentThreeBackups() throws Exception { * @param backups Number of backups. * @throws Exception If failed. 
*/ + @Test public void testNullKeyForPartitionCalculation() throws Exception { AffinityFunction aff = affinityFunction(); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityClientNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityClientNodeSelfTest.java index 6d1c0209d4eea..82be87b28879d 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityClientNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityClientNodeSelfTest.java @@ -27,20 +27,18 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteNodeAttributes; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * */ +@RunWith(JUnit4.class) public class AffinityClientNodeSelfTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODE_CNT = 4; @@ -60,8 +58,6 @@ public class AffinityClientNodeSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration ccfg1 = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg1.setBackups(1); @@ -109,6 +105,7 @@ public class AffinityClientNodeSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testClientNodeNotInAffinity() throws Exception { checkCache(CACHE1, 2); @@ -238,4 +235,4 @@ private static class TestNodesFilter implements IgnitePredicate { return true; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityDistributionLoggingTest.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityDistributionLoggingTest.java index 0168c7c483f30..60ddfd48165cf 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityDistributionLoggingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityDistributionLoggingTest.java @@ -35,6 +35,9 @@ import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_IGNITE_INSTANCE_NAME; @@ -46,6 +49,7 @@ * * @see EvenDistributionAffinityFunction */ +@RunWith(JUnit4.class) public class AffinityDistributionLoggingTest extends GridCommonAbstractTest { /** Pattern to test. */ private static final String LOG_MESSAGE_PREFIX = "Local node affinity assignment distribution is not ideal "; @@ -102,6 +106,7 @@ public class AffinityDistributionLoggingTest extends GridCommonAbstractTest { /** * @throws Exception In case of an error. */ + @Test public void test2PartitionsIdealDistributionIsNotLogged() throws Exception { System.setProperty(IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD, "0"); @@ -117,6 +122,7 @@ public void test2PartitionsIdealDistributionIsNotLogged() throws Exception { /** * @throws Exception In case of an error. 
*/ + @Test public void test120PartitionsIdeadDistributionIsNotLogged() throws Exception { System.setProperty(IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD, "0.0"); @@ -132,6 +138,7 @@ public void test120PartitionsIdeadDistributionIsNotLogged() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void test5PartitionsNotIdealDistributionIsLogged() throws Exception { System.setProperty(IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD, "50.0"); @@ -147,6 +154,7 @@ public void test5PartitionsNotIdealDistributionIsLogged() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void test5PartitionsNotIdealDistributionSuppressedLoggingOnClientNode() throws Exception { System.setProperty(IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD, "0.0"); @@ -162,6 +170,7 @@ public void test5PartitionsNotIdealDistributionSuppressedLoggingOnClientNode() t /** * @throws Exception In case of an error. */ + @Test public void test7PartitionsNotIdealDistributionSuppressedLogging() throws Exception { System.setProperty(IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD, "50.0"); @@ -177,6 +186,7 @@ public void test7PartitionsNotIdealDistributionSuppressedLogging() throws Except /** * @throws Exception In case of an error. 
*/ + @Test public void test5PartitionsNotIdealDistributionSuppressedLogging() throws Exception { System.setProperty(IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD, "65"); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionBackupFilterAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionBackupFilterAbstractSelfTest.java index 74add0ccb7946..c3785cd207b58 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionBackupFilterAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionBackupFilterAbstractSelfTest.java @@ -29,11 +29,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgniteBiPredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -42,10 +42,8 @@ /** * Base tests of {@link AffinityFunction} implementations with user provided backup filter. */ +@RunWith(JUnit4.class) public abstract class AffinityFunctionBackupFilterAbstractSelfTest extends GridCommonAbstractTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Split attribute name. 
*/ private static final String SPLIT_ATTRIBUTE_NAME = "split-attribute"; @@ -131,13 +129,9 @@ public abstract class AffinityFunctionBackupFilterAbstractSelfTest extends GridC cacheCfg.setRebalanceMode(SYNC); cacheCfg.setAtomicityMode(TRANSACTIONAL); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - spi.setIpFinder(IP_FINDER); - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setCacheConfiguration(cacheCfg); - cfg.setDiscoverySpi(spi); cfg.setUserAttributes(F.asMap(SPLIT_ATTRIBUTE_NAME, splitAttrVal)); return cfg; @@ -156,6 +150,7 @@ public abstract class AffinityFunctionBackupFilterAbstractSelfTest extends GridC /** * @throws Exception If failed. */ + @Test public void testPartitionDistribution() throws Exception { backups = 1; @@ -205,6 +200,7 @@ private void checkPartitions() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionDistributionWithAffinityBackupFilter() throws Exception { backups = 3; diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionExcludeNeighborsAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionExcludeNeighborsAbstractSelfTest.java index eee47c717286b..a151504b2b9b1 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionExcludeNeighborsAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityFunctionExcludeNeighborsAbstractSelfTest.java @@ -32,9 +32,10 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteProductVersion; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.NONE; @@ -42,7 +43,7 @@ /** * Partitioned affinity test. */ -@SuppressWarnings({"PointlessArithmeticExpression", "FieldCanBeLocal"}) +@RunWith(JUnit4.class) public abstract class AffinityFunctionExcludeNeighborsAbstractSelfTest extends GridCommonAbstractTest { /** Number of backups. */ private int backups = 2; @@ -50,9 +51,6 @@ public abstract class AffinityFunctionExcludeNeighborsAbstractSelfTest extends G /** Number of girds. */ private int gridInstanceNum; - /** Ip finder. */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(final String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -72,7 +70,7 @@ public abstract class AffinityFunctionExcludeNeighborsAbstractSelfTest extends G } }; - spi.setIpFinder(ipFinder); + spi.setIpFinder(sharedStaticIpFinder); c.setDiscoverySpi(spi); @@ -108,6 +106,7 @@ private static Collection nodes(Affinity aff, Obj /** * @throws Exception If failed. */ + @Test public void testAffinityMultiNode() throws Exception { int grids = 9; @@ -159,6 +158,7 @@ public void testAffinityMultiNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAffinitySingleNode() throws Exception { Ignite g = startGrid(); @@ -175,4 +175,4 @@ public void testAffinitySingleNode() throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityHistoryCleanupTest.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityHistoryCleanupTest.java index f89d9ee7381ad..74706605ae033 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityHistoryCleanupTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/AffinityHistoryCleanupTest.java @@ -31,19 +31,17 @@ import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class AffinityHistoryCleanupTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -51,8 +49,6 @@ public class AffinityHistoryCleanupTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration[] ccfgs = new CacheConfiguration[4]; for (int i = 0; i < ccfgs.length; i++) { @@ -81,6 +77,7 @@ public 
class AffinityHistoryCleanupTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testAffinityHistoryCleanup() throws Exception { String histProp = System.getProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE); @@ -115,8 +112,8 @@ public void testAffinityHistoryCleanup() throws Exception { topVer(2, 1), // FullHistSize = 3. topVer(3, 0), // FullHistSize = 4. topVer(3, 1), // FullHistSize = 5. - topVer(4, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(4, 1)), // FullHistSize = 5. + topVer(4, 0), // FullHistSize = 6 - 1 = 5. + topVer(4, 1)), // FullHistSize = 6 - 1 = 5. 5); client = true; @@ -126,11 +123,13 @@ public void testAffinityHistoryCleanup() throws Exception { stopGrid(4); checkHistory(ignite, F.asList( + topVer(2, 1), // FullHistSize = 3. + topVer(3, 0), // FullHistSize = 4. topVer(3, 1), // FullHistSize = 5. - topVer(4, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(4, 1), // FullHistSize = 5. - topVer(5, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(6, 0)), // FullHistSize = 5. + topVer(4, 0), // FullHistSize = 6 - 1 = 5. + topVer(4, 1), // FullHistSize = 6 - 1 = 5. + topVer(5, 0), // Client event -> FullHistSize = 5. + topVer(6, 0)), // Client event -> FullHistSize = 5. 5); startGrid(4); @@ -138,11 +137,15 @@ public void testAffinityHistoryCleanup() throws Exception { stopGrid(4); checkHistory(ignite, F.asList( - topVer(4, 1), // FullHistSize = 5. - topVer(5, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(6, 0), // FullHistSize = 5. - topVer(7, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(8, 0)), // FullHistSize = 5. + topVer(2, 1), // FullHistSize = 3. + topVer(3, 0), // FullHistSize = 4. + topVer(3, 1), // FullHistSize = 5. + topVer(4, 0), // FullHistSize = 6 - 1 = 5. + topVer(4, 1), // FullHistSize = 6 - 1 = 5. 
+ topVer(5, 0), // Client event -> FullHistSize = 5. + topVer(6, 0), // Client event -> FullHistSize = 5. + topVer(7, 0), // Client event -> FullHistSize = 5. + topVer(8, 0)), // Client event -> FullHistSize = 5. 5); startGrid(4); @@ -150,11 +153,17 @@ public void testAffinityHistoryCleanup() throws Exception { stopGrid(4); checkHistory(ignite, F.asList( - topVer(6, 0), // FullHistSize = 5. - topVer(7, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(8, 0), // FullHistSize = 5. - topVer(9, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(10, 0)), // FullHistSize = 5. + topVer(2, 1), // FullHistSize = 3. + topVer(3, 0), // FullHistSize = 4. + topVer(3, 1), // FullHistSize = 5. + topVer(4, 0), // FullHistSize = 6 - 1 = 5. + topVer(4, 1), // FullHistSize = 6 - 1 = 5. + topVer(5, 0), // Client event -> FullHistSize = 5. + topVer(6, 0), // Client event -> FullHistSize = 5. + topVer(7, 0), // Client event -> FullHistSize = 5. + topVer(8, 0), // Client event -> FullHistSize = 5. + topVer(9, 0), // Client event -> FullHistSize = 5. + topVer(10, 0)), // Client event -> FullHistSize = 5. 5); client = false; @@ -162,13 +171,30 @@ public void testAffinityHistoryCleanup() throws Exception { startGrid(4); checkHistory(ignite, F.asList( - topVer(8, 0), // FullHistSize = 5. - topVer(9, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(10, 0), // FullHistSize = 5. - topVer(11, 0), // FullHistSize = (6 - IGNITE_AFFINITY_HISTORY_SIZE(5)/2) = 4. - topVer(11, 1)), // FullHistSize = 5. + topVer(3, 1), // FullHistSize = 5. + topVer(4, 0), // FullHistSize = 6 - 1 = 5. + topVer(4, 1), // FullHistSize = 6 - 1 = 5. + topVer(5, 0), // Client event -> FullHistSize = 5. + topVer(6, 0), // Client event -> FullHistSize = 5. + topVer(7, 0), // Client event -> FullHistSize = 5. + topVer(8, 0), // Client event -> FullHistSize = 5. + topVer(9, 0), // Client event -> FullHistSize = 5. 
+            topVer(10, 0), // Client event -> FullHistSize = 5.
+            topVer(11, 0), // FullHistSize = 6 - 1 = 5.
+            topVer(11, 1)), // FullHistSize = 6 - 1 = 5.
             5);
-    }
+
+        stopGrid(4);
+
+        startGrid(4);
+
+        checkHistory(ignite, F.asList(
+            topVer(11, 0), // FullHistSize = 5.
+            topVer(11, 1), // FullHistSize = 5.
+            topVer(12, 0), // FullHistSize = 6 - 1 = 5.
+            topVer(13, 0), // FullHistSize = 5.
+            topVer(13, 1)), // FullHistSize = 6 - 1 = 5.
+            5);
+    }
     finally {
         if (histProp != null)
             System.setProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE, histProp);
@@ -193,7 +219,7 @@ private void checkHistory(Ignite ignite, List expHist,
         for (GridCacheContext cctx : proc.context().cacheContexts()) {
             GridAffinityAssignmentCache aff = GridTestUtils.getFieldValue(cctx.affinity(), "aff");
 
-            AtomicInteger fullHistSize = GridTestUtils.getFieldValue(aff, "fullHistSize");
+            AtomicInteger fullHistSize = GridTestUtils.getFieldValue(aff, "nonShallowHistSize");
 
             assertEquals(expSize, fullHistSize.get());
diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/local/LocalAffinityFunctionTest.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/local/LocalAffinityFunctionTest.java
index e08d2120cea13..b194137c11f28 100755
--- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/local/LocalAffinityFunctionTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/local/LocalAffinityFunctionTest.java
@@ -22,18 +22,16 @@
 import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for local affinity function. */ +@RunWith(JUnit4.class) public class LocalAffinityFunctionTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODE_CNT = 1; @@ -43,8 +41,6 @@ public class LocalAffinityFunctionTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setBackups(1); @@ -61,6 +57,7 @@ public class LocalAffinityFunctionTest extends GridCommonAbstractTest { startGrids(NODE_CNT); } + @Test public void testWronglySetAffinityFunctionForLocalCache() { Ignite node = ignite(NODE_CNT - 1); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunctionSimpleBenchmark.java b/modules/core/src/test/java/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunctionSimpleBenchmark.java index c680a68f58c71..7dec6b299cdd5 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunctionSimpleBenchmark.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunctionSimpleBenchmark.java @@ -65,11 +65,15 @@ import java.util.concurrent.atomic.AtomicInteger; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Simple benchmarks, compatibility test and distribution check utils for affinity functions. * Needs to check changes at the {@link RendezvousAffinityFunction}. 
*/ +@RunWith(JUnit4.class) public class RendezvousAffinityFunctionSimpleBenchmark extends GridCommonAbstractTest { /** MAC prefix. */ private static final String MAC_PREF = "MAC"; @@ -346,6 +350,7 @@ private double chiSquare(List> byNodes, int parts, double goldenNo /** * @throws IOException On error. */ + @Test public void testDistribution() throws IOException { AffinityFunction aff0 = new RendezvousAffinityFunction(true, 1024); @@ -397,6 +402,7 @@ private void affinityDistribution(AffinityFunction aff0, AffinityFunction aff1) /** * */ + @Test public void testAffinityBenchmarkAdd() { mode = TopologyModificationMode.ADD; @@ -410,6 +416,7 @@ public void testAffinityBenchmarkAdd() { /** * */ + @Test public void testAffinityBenchmarkChangeLast() { mode = TopologyModificationMode.CHANGE_LAST_NODE; @@ -503,6 +510,7 @@ private int countPartitionsToMigrate(List> affOld, List nodes = createBaseNodes(nodesCnt); - List> assignment0 = assignPartitions(aff0, nodes, null, backups, 0).get2(); + List structure0 = structureOf(assignPartitions(aff0, nodes, null, backups, 0).get2()); - List> assignment1 = assignPartitions(aff1, nodes, null, backups, 0).get2(); + List structure1 = structureOf(assignPartitions(aff1, nodes, null, backups, 0).get2()); - assertEquals (assignment0, assignment1); + assertEquals (structure0, structure1); } } + /** */ + private List structureOf(List> assignment) { + List res = new ArrayList<>(); + + for (List nodes : assignment) + res.add(nodes != null && !nodes.contains(null) ? 
nodes.size() : null); + + return res; + } + /** * */ diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreListenerRWThroughDisabledTransactionalCacheTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreListenerRWThroughDisabledTransactionalCacheTest.java index 45038154ad60c..e2b658ca47010 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreListenerRWThroughDisabledTransactionalCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreListenerRWThroughDisabledTransactionalCacheTest.java @@ -20,9 +20,13 @@ import java.util.Random; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -35,7 +39,15 @@ * This class tests that redundant calls of {@link CacheStoreSessionListener#onSessionStart(CacheStoreSession)} * and {@link CacheStoreSessionListener#onSessionEnd(CacheStoreSession, boolean)} are not executed. 
*/ +@RunWith(JUnit4.class) public class CacheStoreListenerRWThroughDisabledTransactionalCacheTest extends CacheStoreSessionListenerReadWriteThroughDisabledAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; @@ -44,6 +56,7 @@ public class CacheStoreListenerRWThroughDisabledTransactionalCacheTest extends C /** * Tests {@link IgniteCache#get(Object)} with disabled read-through and write-through modes. */ + @Test public void testTransactionalLookup() { testTransactionalLookup(OPTIMISTIC, READ_COMMITTED); testTransactionalLookup(OPTIMISTIC, REPEATABLE_READ); @@ -74,6 +87,7 @@ private void testTransactionalLookup(TransactionConcurrency concurrency, Transac /** * Tests {@link IgniteCache#put(Object, Object)} with disabled read-through and write-through modes. */ + @Test public void testTransactionalUpdate() { testTransactionalUpdate(OPTIMISTIC, READ_COMMITTED); testTransactionalUpdate(OPTIMISTIC, REPEATABLE_READ); @@ -104,6 +118,7 @@ private void testTransactionalUpdate(TransactionConcurrency concurrency, Transac /** * Tests {@link IgniteCache#remove(Object)} with disabled read-through and write-through modes. 
*/ + @Test public void testTransactionalRemove() { testTransactionalRemove(OPTIMISTIC, READ_COMMITTED); testTransactionalRemove(OPTIMISTIC, REPEATABLE_READ); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreReadFromBackupTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreReadFromBackupTest.java index d8913dcf1992f..8ee347cdae687 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreReadFromBackupTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreReadFromBackupTest.java @@ -33,6 +33,9 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,6 +47,7 @@ * Checks that once value is read from store, it will be loaded in * backups as well. */ +@RunWith(JUnit4.class) public class CacheStoreReadFromBackupTest extends GridCommonAbstractTest { /** */ public static final String CACHE_NAME = "cache"; @@ -100,6 +104,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception If failed. */ + @Test public void testReplicated() throws Exception { cacheMode = REPLICATED; backups = 0; @@ -111,6 +116,7 @@ public void testReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitioned() throws Exception { cacheMode = PARTITIONED; backups = 1; @@ -122,6 +128,7 @@ public void testPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearReplicated() throws Exception { cacheMode = REPLICATED; backups = 0; @@ -133,6 +140,7 @@ public void testNearReplicated() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNearPartitioned() throws Exception { cacheMode = PARTITIONED; backups = 1; diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerAbstractSelfTest.java index 412a879e14932..9db8f4f546a8e 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerAbstractSelfTest.java @@ -29,21 +29,18 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for store session listeners. 
*/ +@RunWith(JUnit4.class) public abstract class CacheStoreSessionListenerAbstractSelfTest extends GridCommonAbstractTest implements Serializable { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ protected static final String URL = "jdbc:h2:mem:example;DB_CLOSE_DELAY=-1"; @@ -68,19 +65,6 @@ public abstract class CacheStoreSessionListenerAbstractSelfTest extends GridComm /** */ protected static final AtomicBoolean fail = new AtomicBoolean(); - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGridsMultiThreaded(3); @@ -109,6 +93,7 @@ public abstract class CacheStoreSessionListenerAbstractSelfTest extends GridComm /** * @throws Exception If failed. */ + @Test public void testAtomicCache() throws Exception { CacheConfiguration cfg = cacheConfiguration(DEFAULT_CACHE_NAME, CacheAtomicityMode.ATOMIC); @@ -134,6 +119,7 @@ public void testAtomicCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransactionalCache() throws Exception { CacheConfiguration cfg = cacheConfiguration(DEFAULT_CACHE_NAME, CacheAtomicityMode.TRANSACTIONAL); @@ -159,6 +145,7 @@ public void testTransactionalCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExplicitTransaction() throws Exception { CacheConfiguration cfg = cacheConfiguration(DEFAULT_CACHE_NAME, CacheAtomicityMode.TRANSACTIONAL); @@ -184,6 +171,7 @@ public void testExplicitTransaction() throws Exception { /** * @throws Exception If failed. 
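The session-listener tests above (`testAtomicCache`, `testExplicitTransaction`, `testCrossCacheTransaction`, `testCommit`, `testRollback`) all check one contract: each cache operation or transaction opens exactly one listener session and closes it exactly once, with the commit flag reflecting the outcome. A minimal stand-alone model of that contract (all names here — `SessionListener`, `Store`, `runInSession` — are hypothetical, not Ignite API):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SessionListenerSketch {
    /** Hypothetical analogue of CacheStoreSessionListener's start/end pair. */
    interface SessionListener {
        void onSessionStart();
        void onSessionEnd(boolean commit);
    }

    /** Counts invocations so the pairing invariant can be asserted. */
    static class CountingListener implements SessionListener {
        final AtomicInteger starts = new AtomicInteger();
        final AtomicInteger ends = new AtomicInteger();

        @Override public void onSessionStart() { starts.incrementAndGet(); }
        @Override public void onSessionEnd(boolean commit) { ends.incrementAndGet(); }
    }

    /** Store facade: wraps each operation in exactly one listener session. */
    static class Store {
        private final SessionListener lsnr;

        Store(SessionListener lsnr) { this.lsnr = lsnr; }

        void runInSession(Runnable op) {
            lsnr.onSessionStart();

            boolean commit = false;

            try {
                op.run();

                commit = true;
            }
            finally {
                // End is called exactly once, commit or rollback.
                lsnr.onSessionEnd(commit);
            }
        }
    }

    public static void main(String[] args) {
        CountingListener lsnr = new CountingListener();
        Store store = new Store(lsnr);

        store.runInSession(() -> { /* write entry */ });
        store.runInSession(() -> { /* read entry */ });

        // One start/end pair per operation, never more.
        assert lsnr.starts.get() == 2 && lsnr.ends.get() == 2;

        System.out.println("sessions: " + lsnr.starts.get());
    }
}
```

The real tests additionally verify that a cross-cache transaction shares a single session across both stores, which this sketch does not model.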
*/ + @Test public void testCrossCacheTransaction() throws Exception { CacheConfiguration cfg1 = cacheConfiguration("cache1", CacheAtomicityMode.TRANSACTIONAL); CacheConfiguration cfg2 = cacheConfiguration("cache2", CacheAtomicityMode.TRANSACTIONAL); @@ -212,6 +200,7 @@ public void testCrossCacheTransaction() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCommit() throws Exception { write.set(true); @@ -241,6 +230,7 @@ public void testCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollback() throws Exception { write.set(true); fail.set(true); @@ -329,4 +319,4 @@ private CacheConfiguration cacheConfiguration(String name, Cac * @return Session listener factory. */ protected abstract Factory sessionListenerFactory(); -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerLifecycleSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerLifecycleSelfTest.java index ff176c5f0f7c1..ba52317e22ea5 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerLifecycleSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerLifecycleSelfTest.java @@ -31,24 +31,30 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lifecycle.LifecycleAware; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * Store session listeners test. */ +@RunWith(JUnit4.class) public class CacheStoreSessionListenerLifecycleSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final Queue evts = new ConcurrentLinkedDeque<>(); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -58,12 +64,6 @@ public class CacheStoreSessionListenerLifecycleSelfTest extends GridCommonAbstra new SessionListenerFactory("Shared 2") ); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -75,6 +75,7 @@ public class CacheStoreSessionListenerLifecycleSelfTest extends GridCommonAbstra /** * @throws Exception If failed. */ + @Test public void testNoCaches() throws Exception { try { startGrid(); @@ -90,6 +91,7 @@ public void testNoCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoOverride() throws Exception { try { Ignite ignite = startGrid(); @@ -152,6 +154,7 @@ public void testNoOverride() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartialOverride() throws Exception { try { Ignite ignite = startGrid(); @@ -227,6 +230,7 @@ public void testPartialOverride() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOverride() throws Exception { try { Ignite ignite = startGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerReadWriteThroughDisabledAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerReadWriteThroughDisabledAbstractTest.java index 774d4f7aac694..70f6a9fef301e 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerReadWriteThroughDisabledAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerReadWriteThroughDisabledAbstractTest.java @@ -39,11 +39,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * This class tests that redundant calls of {@link CacheStoreSessionListener#onSessionStart(CacheStoreSession)} * and {@link CacheStoreSessionListener#onSessionEnd(CacheStoreSession, boolean)} are not executed. */ +@RunWith(JUnit4.class) public abstract class CacheStoreSessionListenerReadWriteThroughDisabledAbstractTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -81,6 +85,7 @@ public abstract class CacheStoreSessionListenerReadWriteThroughDisabledAbstractT * * @throws Exception If failed. */ + @Test public void testLookup() throws Exception { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -97,6 +102,7 @@ public void testLookup() throws Exception { * * @throws Exception If failed. */ + @Test public void testBatchLookup() throws Exception { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -117,6 +123,7 @@ public void testBatchLookup() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testUpdate() throws Exception { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -133,6 +140,7 @@ public void testUpdate() throws Exception { * * @throws Exception If failed. */ + @Test public void testBatchUpdate() throws Exception { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -153,6 +161,7 @@ public void testBatchUpdate() throws Exception { * * @throws Exception If failed. */ + @Test public void testRemove() throws Exception { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -174,6 +183,7 @@ public void testRemove() throws Exception { * * @throws Exception If failed. */ + @Test public void testBatchRemove() throws Exception { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerWriteBehindEnabledTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerWriteBehindEnabledTest.java index c9a912ad51d57..1db21130656c5 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerWriteBehindEnabledTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreSessionListenerWriteBehindEnabledTest.java @@ -44,18 +44,23 @@ import org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore; import org.apache.ignite.resources.CacheStoreSessionResource; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * This class tests that calls of {@link CacheStoreSessionListener#onSessionStart(CacheStoreSession)} * and {@link CacheStoreSessionListener#onSessionEnd(CacheStoreSession, boolean)} are executed from * {@link GridCacheWriteBehindStore} only. 
*/ +@RunWith(JUnit4.class) public class CacheStoreSessionListenerWriteBehindEnabledTest extends GridCacheAbstractSelfTest { /** */ - protected final static int CNT = 100; + protected static final int CNT = 100; /** */ - private final static int WRITE_BEHIND_FLUSH_FREQUENCY = 1000; + private static final int WRITE_BEHIND_FLUSH_FREQUENCY = 1000; /** */ private static final List operations = Collections.synchronizedList(new ArrayList()); @@ -66,6 +71,13 @@ public class CacheStoreSessionListenerWriteBehindEnabledTest extends GridCacheAb /** */ private static final AtomicInteger uninitializedListenerCnt = new AtomicInteger(); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; @@ -106,6 +118,7 @@ public class CacheStoreSessionListenerWriteBehindEnabledTest extends GridCacheAb * Tests that there are no redundant calls of {@link CacheStoreSessionListener#onSessionStart(CacheStoreSession)} * while {@link IgniteCache#get(Object)} performed. */ + @Test public void testLookup() { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -119,6 +132,7 @@ public void testLookup() { * Tests that there are no redundant calls of {@link CacheStoreSessionListener#onSessionStart(CacheStoreSession)} * while {@link IgniteCache#put(Object, Object)} performed. */ + @Test public void testUpdate() { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -132,6 +146,7 @@ public void testUpdate() { * Tests that there are no redundant calls of {@link CacheStoreSessionListener#onSessionStart(CacheStoreSession)} * while {@link IgniteCache#remove(Object)} performed. 
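`CacheStoreSessionListenerWriteBehindEnabledTest` pins down that with write-behind enabled, listener sessions are opened by the background flusher (`GridCacheWriteBehindStore`), once per flushed batch, rather than once per user `put`. A deterministic toy version of that batching behavior (the class and field names are invented for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class WriteBehindSketch {
    /** Pending writes; user operations only touch this buffer. */
    final Map<Integer, String> buf = new LinkedHashMap<>();

    /** How many listener sessions the "store" saw. */
    final AtomicInteger sessions = new AtomicInteger();

    /** The backing store contents. */
    final Map<Integer, String> store = new LinkedHashMap<>();

    void put(int key, String val) {
        buf.put(key, val); // no store (and no listener) interaction here
    }

    /** Background flush: one listener session per non-empty batch. */
    void flush() {
        if (buf.isEmpty())
            return;

        sessions.incrementAndGet();

        store.putAll(buf);
        buf.clear();
    }

    public static void main(String[] args) {
        WriteBehindSketch wb = new WriteBehindSketch();

        for (int i = 0; i < 100; i++)
            wb.put(i, "v" + i);

        wb.flush();

        // 100 puts, but only one flush batch, hence one session.
        assert wb.sessions.get() == 1;
        assert wb.store.size() == 100;

        System.out.println("batches: " + wb.sessions.get());
    }
}
```

In the real store the flush is triggered by `WRITE_BEHIND_FLUSH_FREQUENCY`; here it is invoked explicitly to keep the sketch deterministic.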
*/ + @Test public void testRemove() { IgniteCache cache = grid(0).getOrCreateCache(DEFAULT_CACHE_NAME); @@ -145,6 +160,7 @@ public void testRemove() { /** * Tests that cache store session listeners are notified by write-behind store. */ + @Test public void testFlushSingleValue() throws Exception { CacheConfiguration cfg = cacheConfiguration(getTestIgniteInstanceName()); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreWriteErrorTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreWriteErrorTest.java new file mode 100644 index 0000000000000..ec6c72bc528f2 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheStoreWriteErrorTest.java @@ -0,0 +1,133 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.cache.store; + +import com.google.common.base.Throwables; +import java.util.HashMap; +import java.util.concurrent.Callable; +import javax.cache.Cache; +import javax.cache.configuration.FactoryBuilder; +import javax.cache.integration.CacheLoaderException; +import javax.cache.integration.CacheWriterException; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.binary.BinaryObject; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * This class tests handling exceptions from {@link CacheStore#write(Cache.Entry)}. + */ +@RunWith(JUnit4.class) +public class CacheStoreWriteErrorTest extends GridCommonAbstractTest { + /** */ + public static final String CACHE_NAME = "cache"; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + CacheConfiguration cacheCfg = new CacheConfiguration(CACHE_NAME) + .setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC) + .setCacheStoreFactory(FactoryBuilder.factoryOf(ThrowableCacheStore.class)) + .setWriteThrough(true) + .setStoreKeepBinary(true); + + return super.getConfiguration(gridName) + .setCacheConfiguration(cacheCfg); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + } + + /** + * Checks primary error while saving batch with one entry. + */ + @Test + public void testPrimaryErrorForBatchSize1() { + checkPrimaryError(1); + } + + /** + * Checks primary error while saving batch with two entries. 
+ */ + @Test + public void testPrimaryErrorForBatchSize2() { + checkPrimaryError(2); + } + + /** + * Checks that primary error ({@link CacheWriterException}) is not lost due to unwrapping a key. + * + * @param batchSize Batch size. + */ + private void checkPrimaryError(int batchSize) { + Throwable t = GridTestUtils.assertThrows(log, + new Callable() { + @Override public Object call() throws Exception { + try (Ignite grid = startGrid()) { + IgniteCache cache = grid.cache(CACHE_NAME); + + HashMap batch = new HashMap<>(); + + for (int i = 0; i < batchSize; i++) { + BinaryObject key = grid.binary().builder("KEY_TYPE_NAME").setField("id", i).build(); + + batch.put(key, "VALUE"); + } + + cache.putAllAsync(batch).get(); + } + + return null; + } + }, CacheWriterException.class, null); + + assertTrue("Stacktrace should contain the message of the original exception", + Throwables.getStackTraceAsString(t).contains(ThrowableCacheStore.EXCEPTION_MESSAGE)); + } + + /** + * {@link CacheStore} implementation which throws {@link RuntimeException} for every write operation. + */ + public static class ThrowableCacheStore extends CacheStoreAdapter { + /** */ + private static final String EXCEPTION_MESSAGE = "WRITE CACHE STORE EXCEPTION"; + + /** {@inheritDoc} */ + @Override public Object load(Object o) throws CacheLoaderException { + return null; + } + + /** {@inheritDoc} */ + @Override public void write(Cache.Entry entry) throws CacheWriterException { + throw new RuntimeException(EXCEPTION_MESSAGE); + } + + /** {@inheritDoc} */ + @Override public void delete(Object o) throws CacheWriterException { + // No-op. 
+ } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheTransactionalStoreReadFromBackupTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheTransactionalStoreReadFromBackupTest.java index 4837936621f46..f1b14aba08988 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/CacheTransactionalStoreReadFromBackupTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/CacheTransactionalStoreReadFromBackupTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.cache.store; import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -25,6 +26,13 @@ * */ public class CacheTransactionalStoreReadFromBackupTest extends CacheStoreReadFromBackupTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheBalancingStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheBalancingStoreSelfTest.java index d1e33b60ecf31..067399e6fb5cb 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheBalancingStoreSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheBalancingStoreSelfTest.java @@ -43,14 +43,19 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Store test. 
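The new `CacheStoreWriteErrorTest` asserts that the store's original `RuntimeException` message survives in the stack trace of the `CacheWriterException` the caller sees, i.e. the cause chain is not dropped while unwrapping binary keys. The essence of that check, reduced to plain Java (the `WriterException` wrapper here is a stand-in, not the Ignite class):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class CauseChainSketch {
    /** Stand-in for CacheWriterException: a wrapper that keeps its cause. */
    static class WriterException extends RuntimeException {
        WriterException(String msg, Throwable cause) { super(msg, cause); }
    }

    /** Renders the full trace, including the "Caused by:" chain. */
    static String stackTraceAsString(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    /** Correct wrapping: the original store error is preserved as the cause. */
    static WriterException wrap(RuntimeException storeErr) {
        return new WriterException("Failed to write entry to store", storeErr);
    }

    public static void main(String[] args) {
        RuntimeException storeErr = new RuntimeException("WRITE CACHE STORE EXCEPTION");

        String trace = stackTraceAsString(wrap(storeErr));

        // This is what the test's Throwables.getStackTraceAsString check amounts to.
        assert trace.contains("WRITE CACHE STORE EXCEPTION") : "original error lost";

        System.out.println("cause preserved");
    }
}
```

If the wrapper were constructed without the cause (only a new message), the assertion would fail — which is precisely the regression the batch-size-1 and batch-size-2 cases guard against.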
*/ +@RunWith(JUnit4.class) public class GridCacheBalancingStoreSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testLoads() throws Exception { final int range = 300; @@ -127,6 +132,7 @@ public void testLoads() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentLoad() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -138,6 +144,7 @@ public void testConcurrentLoad() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentLoadCustomThreshold() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -179,6 +186,7 @@ private void doTestConcurrentLoad(int threads, final int keys, int threshold) th /** * @throws Exception If failed. */ + @Test public void testConcurrentLoadAll() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -190,6 +198,7 @@ public void testConcurrentLoadAll() throws Exception { /** * @throws Exception If failed. 
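The balancing-store tests (`testConcurrentLoad`, `testConcurrentLoadAll` and their custom-threshold variants) exercise collapsing of concurrent loads: many threads requesting the same key should produce a single trip to the underlying store. A minimal deterministic sketch of that idea, built on `ConcurrentHashMap.computeIfAbsent` (which guarantees the mapping function runs at most once per key) — the names are assumptions, not the Ignite implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadCollapseSketch {
    /** Counts round-trips to the (slow) underlying store. */
    static final AtomicInteger storeHits = new AtomicInteger();

    static final ConcurrentHashMap<Integer, String> cache = new ConcurrentHashMap<>();

    static String storeLoad(int key) {
        storeHits.incrementAndGet(); // expensive store round-trip
        return "val-" + key;
    }

    /** Concurrent callers for the same key block on one loader invocation. */
    static String load(int key) {
        return cache.computeIfAbsent(key, LoadCollapseSketch::storeLoad);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);

        for (int i = 0; i < 100; i++)
            pool.execute(() -> load(42));

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // 100 concurrent requests, one store hit.
        assert storeHits.get() == 1;

        System.out.println("store hits: " + storeHits.get());
    }
}
```

The real store additionally applies a configurable threshold before collapsing (the "CustomThreshold" variants); this sketch always collapses.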
*/ + @Test public void testConcurrentLoadAllCustomThreshold() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheLoadOnlyStoreAdapterSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheLoadOnlyStoreAdapterSelfTest.java index 2a1e23a8bcbd7..3d894863dda9c 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheLoadOnlyStoreAdapterSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/GridCacheLoadOnlyStoreAdapterSelfTest.java @@ -21,51 +21,55 @@ import java.util.Iterator; import java.util.NoSuchElementException; import javax.cache.integration.CacheLoaderException; +import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.lang.IgniteBiTuple; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ -public class GridCacheLoadOnlyStoreAdapterSelfTest extends GridCacheAbstractSelfTest { +@RunWith(JUnit4.class) +public class GridCacheLoadOnlyStoreAdapterSelfTest extends GridCommonAbstractTest { /** Expected loadAll arguments, hardcoded on call site for convenience. */ private static final Integer[] EXP_ARGS = {1, 2, 3}; /** Store to use. 
*/ private CacheLoadOnlyStoreAdapter store; - /** {@inheritDoc} */ - @Override protected int gridCount() { - return 1; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); - } - - /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { + super.beforeTestsStarted(); + startGrid(0); } /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); } /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { + grid(0).destroyCache(DEFAULT_CACHE_NAME); + super.afterTest(); } - /** {@inheritDoc} */ + /** + * @return Cache configuration. + */ @SuppressWarnings("unchecked") - @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { - CacheConfiguration cfg = super.cacheConfiguration(igniteInstanceName); + private CacheConfiguration cacheConfiguration() { + CacheConfiguration cfg = defaultCacheConfiguration(); assertNotNull(store); @@ -80,59 +84,38 @@ public class GridCacheLoadOnlyStoreAdapterSelfTest extends GridCacheAbstractSelf /** * @throws Exception If failed. */ + @Test public void testStore() throws Exception { - try { - int inputSize = 100; - - store = new TestStore(inputSize); - - startGrids(gridCount()); + int inputSize = 100; - awaitPartitionMapExchange(); + store = new TestStore(inputSize); - jcache().localLoadCache(null, 1, 2, 3); + IgniteCache cache = grid(0).createCache(cacheConfiguration()); - int cnt = 0; + cache.localLoadCache(null, 1, 2, 3); - for (int i = 0; i < gridCount(); i++) - cnt += jcache(i).localSize(); - - assertEquals(inputSize - (inputSize / 10), cnt); - } - finally { - stopAllGrids(); - } + assertEquals(inputSize - (inputSize / 10), cache.localSize()); } /** * @throws Exception If failed. 
*/ + @Test public void testStoreSmallQueueSize() throws Exception { - try { - int inputSize = 1500; - - store = new ParallelTestStore(inputSize); - - store.setBatchSize(1); - store.setBatchQueueSize(1); - store.setThreadsCount(2); + int inputSize = 1500; - startGrids(gridCount()); + store = new ParallelTestStore(inputSize); - awaitPartitionMapExchange(); + store.setBatchSize(1); + store.setBatchQueueSize(1); + store.setThreadsCount(2); - jcache().localLoadCache(null, 1, 2, 3); + IgniteCache cache = grid(0).createCache(cacheConfiguration()); - int cnt = 0; + cache.localLoadCache(null, 1, 2, 3); - for (int i = 0; i < gridCount(); i++) - cnt += jcache(i).localSize(); + assertEquals(inputSize, cache.localSize()); - assertEquals(inputSize, cnt); - } - finally { - stopAllGrids(); - } } /** diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/GridStoreLoadCacheTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/GridStoreLoadCacheTest.java index d88c4318ee27e..2292d30574b7c 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/GridStoreLoadCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/GridStoreLoadCacheTest.java @@ -31,11 +31,15 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test checks that local cacheLoad task never blocks remote * cacheLoad. */ +@RunWith(JUnit4.class) public class GridStoreLoadCacheTest extends GridCommonAbstractTest { /** Barrier. */ private static final CyclicBarrier BARRIER = new CyclicBarrier(3); @@ -61,6 +65,7 @@ public class GridStoreLoadCacheTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
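The rewritten `testStore` expects `inputSize - (inputSize / 10)` entries in cache, which implies the adapter's parse step filters out every 10th raw record (returning `null` skips a record). A self-contained sketch of that load-then-filter flow — `parse` and `loadAll` are illustrative names, not the `CacheLoadOnlyStoreAdapter` API:

```java
import java.util.ArrayList;
import java.util.List;

public class LoadOnlySketch {
    /** Parse hook: returning null drops the record, mimicking TestStore. */
    static String parse(int rec) {
        return rec % 10 == 0 ? null : "val-" + rec;
    }

    /** Iterates raw input and keeps only records the parse step accepts. */
    static List<String> loadAll(int inputSize) {
        List<String> loaded = new ArrayList<>();

        for (int rec = 0; rec < inputSize; rec++) {
            String v = parse(rec);

            if (v != null)
                loaded.add(v);
        }

        return loaded;
    }

    public static void main(String[] args) {
        int inputSize = 100;

        // Matches the test's expectation: inputSize - inputSize / 10.
        assert loadAll(inputSize).size() == inputSize - inputSize / 10;

        System.out.println("loaded: " + loadAll(inputSize).size());
    }
}
```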
*/ + @Test public void test() throws Exception { for (int i = 0; i < 3; i++) { IgniteEx srv1 = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/IgniteCacheExpiryStoreLoadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/IgniteCacheExpiryStoreLoadSelfTest.java index cdc4277974bf7..22d9211bfc1f5 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/IgniteCacheExpiryStoreLoadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/IgniteCacheExpiryStoreLoadSelfTest.java @@ -39,6 +39,9 @@ import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -46,6 +49,7 @@ /** * Test check that cache values removes from cache on expiry. */ +@RunWith(JUnit4.class) public class IgniteCacheExpiryStoreLoadSelfTest extends GridCacheAbstractSelfTest { /** Expected time to live in milliseconds. */ private static final int TIME_TO_LIVE = 1000; @@ -79,6 +83,7 @@ public class IgniteCacheExpiryStoreLoadSelfTest extends GridCacheAbstractSelfTes /** * @throws Exception If failed. */ + @Test public void testLoadCacheWithExpiry() throws Exception { checkLoad(false); } @@ -86,6 +91,7 @@ public void testLoadCacheWithExpiry() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheWithExpiryAsync() throws Exception { checkLoad(true); } @@ -119,6 +125,7 @@ private void checkLoad(boolean async) throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalLoadCacheWithExpiry() throws Exception { checkLocalLoad(false); } @@ -126,6 +133,7 @@ public void testLocalLoadCacheWithExpiry() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLocalLoadCacheWithExpiryAsync() throws Exception { checkLocalLoad(true); } @@ -159,6 +167,7 @@ private void checkLocalLoad(boolean async) throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadAllWithExpiry() throws Exception { IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME) .withExpiryPolicy(new CreatedExpiryPolicy(new Duration(MILLISECONDS, TIME_TO_LIVE))); @@ -239,4 +248,4 @@ private static class TestStore implements CacheStore { // No-op. } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/StoreResourceInjectionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/StoreResourceInjectionSelfTest.java index f043746893691..165a6fd218574 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/StoreResourceInjectionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/StoreResourceInjectionSelfTest.java @@ -19,20 +19,18 @@ import org.apache.ignite.configuration.*; import org.apache.ignite.internal.processors.cache.*; import org.apache.ignite.resources.*; -import org.apache.ignite.spi.discovery.tcp.*; -import org.apache.ignite.spi.discovery.tcp.ipfinder.*; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.*; import org.apache.ignite.testframework.junits.common.*; import javax.cache.configuration.*; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test resource injection. 
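`IgniteCacheExpiryStoreLoadSelfTest` verifies that entries brought in through the store under a `CreatedExpiryPolicy` with `TIME_TO_LIVE` = 1000 ms vanish once the TTL elapses. The mechanism can be sketched with a logical clock so the example stays deterministic (the class is a toy, not the Ignite expiry implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class ExpirySketch {
    /** Mirrors the test's TIME_TO_LIVE constant, in millis. */
    static final long TIME_TO_LIVE = 1000;

    /** Key -> creation timestamp (single-element array as a mutable cell). */
    final Map<String, long[]> entries = new HashMap<>();

    /** Logical clock in millis; advanced explicitly instead of sleeping. */
    long now;

    void loadFromStore(String key) {
        entries.put(key, new long[] {now}); // created-based expiry stamp
    }

    boolean contains(String key) {
        long[] e = entries.get(key);

        if (e == null)
            return false;

        if (now - e[0] >= TIME_TO_LIVE) { // TTL elapsed: evict lazily
            entries.remove(key);

            return false;
        }

        return true;
    }

    public static void main(String[] args) {
        ExpirySketch cache = new ExpirySketch();

        cache.loadFromStore("k");
        assert cache.contains("k");

        cache.now += TIME_TO_LIVE; // advance past the TTL

        assert !cache.contains("k");

        System.out.println("expired as expected");
    }
}
```

The real tests must instead sleep past the wall-clock TTL, which is why they use a generous `WAIT_TIME` margin.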
*/ +@RunWith(JUnit4.class) public class StoreResourceInjectionSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private CacheConfiguration cacheCfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); @@ -45,12 +43,6 @@ public class StoreResourceInjectionSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -62,6 +54,7 @@ public class StoreResourceInjectionSelfTest extends GridCommonAbstractTest { /** * */ + @Test public void testResourcesInStoreFactory() throws Exception { cacheCfg.setCacheStoreFactory(new MyCacheStoreFactory()); @@ -71,6 +64,7 @@ public void testResourcesInStoreFactory() throws Exception { /** * */ + @Test public void testResourcesInLoaderFactory() throws Exception { cacheCfg.setCacheLoaderFactory(new MyCacheStoreFactory()); @@ -80,6 +74,7 @@ public void testResourcesInLoaderFactory() throws Exception { /** * */ + @Test public void testResourcesInWriterFactory() throws Exception { cacheCfg.setCacheWriterFactory(new MyCacheStoreFactory()); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreAbstractSelfTest.java index 703cbe18908b9..6f1e42217ffc9 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreAbstractSelfTest.java @@ -29,8 +29,8 @@ import javax.cache.integration.CacheLoaderException; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.store.jdbc.dialect.H2Dialect; -import org.apache.ignite.cache.store.jdbc.model.Person; import org.apache.ignite.cache.store.jdbc.model.Gender; +import 
org.apache.ignite.cache.store.jdbc.model.Person; import org.apache.ignite.cache.store.jdbc.model.PersonKey; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; @@ -38,10 +38,11 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.marshaller.Marshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -50,10 +51,8 @@ /** * Class for {@link CacheJdbcPojoStore} tests. */ +@RunWith(JUnit4.class) public abstract class CacheJdbcPojoStoreAbstractSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** DB connection URL. 
*/ private static final String DFLT_CONN_URL = "jdbc:h2:mem:TestDatabase;DB_CLOSE_DELAY=-1"; @@ -96,6 +95,13 @@ protected Connection getConnection() throws SQLException { return DriverManager.getConnection(DFLT_CONN_URL, "sa", ""); } + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { Connection conn = getConnection(); @@ -135,12 +141,6 @@ protected Connection getConnection() throws SQLException { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration()); cfg.setMarshaller(marshaller()); @@ -370,6 +370,7 @@ protected void checkCacheLoadWithStatement() throws SQLException { /** * @throws Exception If failed. */ + @Test public void testLoadCacheWithStatement() throws Exception { startTestGrid(false, false, false, false, 512); @@ -379,6 +380,7 @@ public void testLoadCacheWithStatement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheWithStatementTx() throws Exception { startTestGrid(false, false, false, true, 512); @@ -388,6 +390,7 @@ public void testLoadCacheWithStatementTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCache() throws Exception { startTestGrid(false, false, false, false, 512); @@ -397,6 +400,7 @@ public void testLoadCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLoadCacheAll() throws Exception { startTestGrid(false, false, false, false, ORGANIZATION_CNT + PERSON_CNT + 1); @@ -406,6 +410,7 @@ public void testLoadCacheAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheWithSql() throws Exception { startTestGrid(false, false, false, false, 512); @@ -415,6 +420,7 @@ public void testLoadCacheWithSql() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheTx() throws Exception { startTestGrid(false, false, false, true, 512); @@ -424,6 +430,7 @@ public void testLoadCacheTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheWithSqlTx() throws Exception { startTestGrid(false, false, false, true, 512); @@ -433,6 +440,7 @@ public void testLoadCacheWithSqlTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCachePrimitiveKeys() throws Exception { startTestGrid(true, false, false, false, 512); @@ -442,6 +450,7 @@ public void testLoadCachePrimitiveKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCachePrimitiveKeysTx() throws Exception { startTestGrid(true, false, false, true, 512); @@ -532,6 +541,7 @@ private void checkPutRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutRemoveBuiltIn() throws Exception { startTestGrid(true, false, false, false, 512); @@ -541,6 +551,7 @@ public void testPutRemoveBuiltIn() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutRemove() throws Exception { startTestGrid(false, false, false, false, 512); @@ -550,6 +561,7 @@ public void testPutRemove() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutRemoveTxBuiltIn() throws Exception { startTestGrid(true, false, false, true, 512); @@ -559,6 +571,7 @@ public void testPutRemoveTxBuiltIn() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutRemoveTx() throws Exception { startTestGrid(false, false, false, true, 512); @@ -568,6 +581,7 @@ public void testPutRemoveTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadNotRegisteredType() throws Exception { startTestGrid(false, false, false, false, 512); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreBinaryMarshallerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreBinaryMarshallerSelfTest.java index b6d6fe13ac275..b95ed24b111f7 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreBinaryMarshallerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreBinaryMarshallerSelfTest.java @@ -19,10 +19,14 @@ import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.internal.binary.BinaryMarshaller; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link CacheJdbcPojoStore} with binary marshaller. */ +@RunWith(JUnit4.class) public class CacheJdbcPojoStoreBinaryMarshallerSelfTest extends CacheJdbcPojoStoreAbstractSelfTest { /** {@inheritDoc} */ @Override protected Marshaller marshaller(){ @@ -32,6 +36,7 @@ public class CacheJdbcPojoStoreBinaryMarshallerSelfTest extends CacheJdbcPojoSto /** * @throws Exception If failed. */ + @Test public void testLoadCacheNoKeyClasses() throws Exception { startTestGrid(false, true, false, false, 512); @@ -41,6 +46,7 @@ public void testLoadCacheNoKeyClasses() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLoadCacheNoKeyClassesTx() throws Exception { startTestGrid(false, true, false, true, 512); @@ -50,6 +56,7 @@ public void testLoadCacheNoKeyClassesTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheNoValueClasses() throws Exception { startTestGrid(false, false, true, false, 512); @@ -59,6 +66,7 @@ public void testLoadCacheNoValueClasses() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheNoValueClassesTx() throws Exception { startTestGrid(false, false, true, true, 512); @@ -68,6 +76,7 @@ public void testLoadCacheNoValueClassesTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheNoKeyAndValueClasses() throws Exception { startTestGrid(false, true, true, false, 512); @@ -77,6 +86,7 @@ public void testLoadCacheNoKeyAndValueClasses() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheNoKeyAndValueClassesTx() throws Exception { startTestGrid(false, true, true, true, 512); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreTest.java index ea2808fa7cd7c..f2db5dfaa5c55 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStoreTest.java @@ -50,10 +50,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.cache.GridAbstractCacheStoreSelfTest; import org.h2.jdbcx.JdbcConnectionPool; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Class for {@code PojoCacheStore} tests. */ +@RunWith(JUnit4.class) public class CacheJdbcPojoStoreTest extends GridAbstractCacheStoreSelfTest> { /** DB connection URL. 
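The hunks above apply one mechanical change over and over: `@RunWith(JUnit4.class)` on each test class and `@Test` on each test method, migrating suites that JUnit 3 discovered purely by the `test*` naming convention. The difference between the two discovery models can be sketched with plain reflection — a toy illustration using a stand-in annotation, not JUnit itself:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class TestDiscovery {
    /** Marker annotation standing in for org.junit.Test. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Test {}

    /** Example suite: one annotated test, one name-convention test, one helper. */
    public static class SampleSuite {
        @Test public void testAnnotated() {}
        public void testNamedOnly() {}  // found by JUnit 3 convention, invisible to JUnit 4
        public void helper() {}         // found by neither
    }

    /** JUnit 3 style: any public no-arg method whose name starts with "test". */
    public static List<String> byNamingConvention(Class<?> cls) {
        List<String> res = new ArrayList<>();
        for (Method m : cls.getMethods())
            if (m.getName().startsWith("test") && m.getParameterCount() == 0)
                res.add(m.getName());
        return res;
    }

    /** JUnit 4 style: only methods carrying the annotation are tests. */
    public static List<String> byAnnotation(Class<?> cls) {
        List<String> res = new ArrayList<>();
        for (Method m : cls.getMethods())
            if (m.isAnnotationPresent(Test.class))
                res.add(m.getName());
        return res;
    }

    public static void main(String[] args) {
        System.out.println(byNamingConvention(SampleSuite.class)); // both test* methods
        System.out.println(byAnnotation(SampleSuite.class));       // only testAnnotated
    }
}
```

This is why the migration must touch every method: once the class runs under JUnit 4, an unannotated `test*` method silently stops executing.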
*/ private static final String DFLT_CONN_URL = "jdbc:h2:mem:autoCacheStore;DB_CLOSE_DELAY=-1"; @@ -269,6 +273,7 @@ public CacheJdbcPojoStoreTest() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCache() throws Exception { Connection conn = store.openConnection(false); @@ -442,6 +447,7 @@ else if (k instanceof PersonComplexKey && v instanceof Person) { /** * @throws Exception If failed. */ + @Test public void testParallelLoad() throws Exception { Connection conn = store.openConnection(false); @@ -503,6 +509,7 @@ public void testParallelLoad() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWriteRetry() throws Exception { CacheJdbcPojoStore store = store(); @@ -556,6 +563,7 @@ public void testWriteRetry() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { Timestamp k = new Timestamp(System.currentTimeMillis()); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreAbstractMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreAbstractMultithreadedSelfTest.java index b1f8cd3c17746..56d990be00f77 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreAbstractMultithreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreAbstractMultithreadedSelfTest.java @@ -42,12 +42,12 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; 
import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.springframework.beans.BeansException; import org.springframework.beans.factory.xml.XmlBeanDefinitionReader; import org.springframework.context.support.GenericApplicationContext; @@ -61,6 +61,7 @@ /** * */ +@RunWith(JUnit4.class) public abstract class CacheJdbcStoreAbstractMultithreadedSelfTest extends GridCommonAbstractTest { /** Default config with mapping. */ @@ -69,9 +70,6 @@ public abstract class CacheJdbcStoreAbstractMultithreadedSelfTest fut1 = runMultiThreadedAsync(new Callable() { private final Random rnd = new Random(); @@ -243,6 +236,7 @@ public void testMultithreadedPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreadedPutAll() throws Exception { multithreaded(new Callable() { private final Random rnd = new Random(); @@ -285,6 +279,7 @@ public void testMultithreadedPutAll() throws Exception { /** * @throws Exception If failed. 
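The `testMultithreadedPut`/`testMultithreadedPutAll` tests above drive the cache store from many threads via `runMultiThreaded`/`runMultiThreadedAsync`. Those helpers are Ignite test-framework utilities; their core behavior — run the same `Callable` once per thread and surface the first worker failure — can be approximated with a plain `ExecutorService` (a sketch with an illustrative name, not Ignite's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiThreadedRunner {
    /** Runs the task once per thread; rethrows the first worker failure. */
    public static void runMultiThreaded(Callable<?> task, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<?>> futs = new ArrayList<>();

            for (int i = 0; i < threads; i++)
                futs.add(pool.submit(task));

            for (Future<?> f : futs)
                f.get(); // ExecutionException wraps any exception thrown by a worker
        }
        finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger cnt = new AtomicInteger();

        runMultiThreaded(cnt::incrementAndGet, 8);

        System.out.println(cnt.get()); // 8: each thread ran the task exactly once
    }
}
```

Joining every future before returning is what lets a test like `testMultithreadedExplicitTx` assert on totals afterwards: no worker is still mutating the cache when the assertions run.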
*/ + @Test public void testMultithreadedExplicitTx() throws Exception { runMultiThreaded(new Callable() { private final Random rnd = new Random(); diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreSessionListenerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreSessionListenerSelfTest.java index 237cfeb363799..968dc0837da34 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreSessionListenerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/CacheJdbcStoreSessionListenerSelfTest.java @@ -32,12 +32,20 @@ import org.apache.ignite.cache.store.CacheStoreSessionListenerAbstractSelfTest; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.resources.CacheStoreSessionResource; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.h2.jdbcx.JdbcConnectionPool; /** * Tests for {@link CacheJdbcStoreSessionListener}. 
*/ public class CacheJdbcStoreSessionListenerSelfTest extends CacheStoreSessionListenerAbstractSelfTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected Factory> storeFactory() { return new Factory>() { diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/GridCacheJdbcBlobStoreMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/GridCacheJdbcBlobStoreMultithreadedSelfTest.java index 1f6849030a664..a801f1355bdb0 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/GridCacheJdbcBlobStoreMultithreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/GridCacheJdbcBlobStoreMultithreadedSelfTest.java @@ -37,11 +37,12 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -52,10 +53,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheJdbcBlobStoreMultithreadedSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of grids to start. 
*/ private static final int GRID_CNT = 5; @@ -65,6 +64,13 @@ public class GridCacheJdbcBlobStoreMultithreadedSelfTest extends GridCommonAbstr /** Client flag. */ private boolean client; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { startGridsMultiThreaded(GRID_CNT - 2); @@ -90,12 +96,6 @@ public class GridCacheJdbcBlobStoreMultithreadedSelfTest extends GridCommonAbstr @Override protected final IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - if (!client) { CacheConfiguration cc = defaultCacheConfiguration(); @@ -120,6 +120,7 @@ public class GridCacheJdbcBlobStoreMultithreadedSelfTest extends GridCommonAbstr /** * @throws Exception If failed. */ + @Test public void testMultithreadedPut() throws Exception { IgniteInternalFuture fut1 = runMultiThreadedAsync(new Callable() { private final Random rnd = new Random(); @@ -158,6 +159,7 @@ public void testMultithreadedPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreadedPutAll() throws Exception { runMultiThreaded(new Callable() { private final Random rnd = new Random(); @@ -184,6 +186,7 @@ public void testMultithreadedPutAll() throws Exception { /** * @throws Exception If failed. 
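Several suites above gain a `beforeTestsStarted()` (or `setUp()`) override that calls `MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE)`, skipping the whole class when the configured mode cannot support cache stores. The gating idea reduces to a small guard — the feature names and the mode switch below are illustrative assumptions, not Ignite's real checker, which inspects system properties and covers more features:

```java
import java.util.EnumSet;
import java.util.Set;

public class FeatureChecker {
    /** Illustrative feature set. */
    public enum Feature { CACHE_STORE, NEAR_CACHE, LOCAL_CACHE }

    /** Features assumed unsupported in the restricted (e.g. MVCC) mode. */
    private static final Set<Feature> UNSUPPORTED = EnumSet.of(Feature.CACHE_STORE);

    /** Whether the restricted mode is active; real code reads configuration. */
    private static boolean restrictedMode;

    public static void setRestrictedMode(boolean on) {
        restrictedMode = on;
    }

    /** Fails fast, before any test starts, if the suite needs an unsupported feature. */
    public static void failIfNotSupported(Feature f) {
        if (restrictedMode && UNSUPPORTED.contains(f))
            throw new AssertionError("Feature not supported in this mode: " + f);
    }
}
```

Placing the call in a once-per-class hook such as `beforeTestsStarted()` means each suite declares its required feature exactly once, instead of repeating the check in every test method.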
*/ + @Test public void testMultithreadedExplicitTx() throws Exception { runMultiThreaded(new Callable() { private final Random rnd = new Random(); @@ -262,4 +265,4 @@ private void checkOpenedClosedCount() { assertEquals(opened, closed); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/JdbcTypesDefaultTransformerTest.java b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/JdbcTypesDefaultTransformerTest.java index 5e490f7368053..a1d5690c8c3fb 100644 --- a/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/JdbcTypesDefaultTransformerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/cache/store/jdbc/JdbcTypesDefaultTransformerTest.java @@ -28,14 +28,19 @@ import java.sql.Timestamp; import java.util.UUID; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link JdbcTypesDefaultTransformer}. */ +@RunWith(JUnit4.class) public class JdbcTypesDefaultTransformerTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTransformer() throws Exception { // Connection to H2. String jdbcUrl = "jdbc:h2:mem:JdbcTypesDefaultTransformerTest"; diff --git a/modules/core/src/test/java/org/apache/ignite/client/AsyncChannelTest.java b/modules/core/src/test/java/org/apache/ignite/client/AsyncChannelTest.java new file mode 100644 index 0000000000000..e2cb69c0ebc20 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/client/AsyncChannelTest.java @@ -0,0 +1,197 @@ +/* + * Copyright 2019 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.client; + +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.CyclicBarrier; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.locks.Lock; +import javax.cache.Cache; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.Ignition; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CachePeekMode; +import org.apache.ignite.cache.query.Query; +import org.apache.ignite.cache.query.QueryCursor; +import org.apache.ignite.cache.query.ScanQuery; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.ClientConfiguration; +import org.apache.ignite.configuration.ClientConnectorConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Async channel tests. + */ +public class AsyncChannelTest extends GridCommonAbstractTest { + /** Nodes count. */ + private static final int NODES_CNT = 3; + + /** Threads count. */ + private static final int THREADS_CNT = 25; + + /** Cache name. */ + private static final String CACHE_NAME = "tx_cache"; + + /** Client connector address. 
*/ + private static final String CLIENT_CONN_ADDR = "127.0.0.1:" + ClientConnectorConfiguration.DFLT_PORT; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName).setCacheConfiguration( + new CacheConfiguration(CACHE_NAME).setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + startGrids(NODES_CNT); + + awaitPartitionMapExchange(); + } + + /** + * Test that client channel works in async mode. + */ + @Test + public void testAsyncRequests() throws Exception { + try (IgniteClient client = Ignition.startClient(new ClientConfiguration().setAddresses(CLIENT_CONN_ADDR))) { + Ignite ignite = grid(0); + + IgniteCache igniteCache = ignite.cache(CACHE_NAME); + ClientCache clientCache = client.cache(CACHE_NAME); + + clientCache.clear(); + + Lock keyLock = igniteCache.lock(0); + + IgniteInternalFuture fut; + + keyLock.lock(); + + try { + CountDownLatch latch = new CountDownLatch(1); + + fut = GridTestUtils.runAsync(() -> { + latch.countDown(); + + // This request is blocked until we explicitly unlock key in another thread. + clientCache.put(0, 0); + + clientCache.put(1, 1); + + assertEquals(10, clientCache.size(CachePeekMode.PRIMARY)); + }); + + latch.await(); + + for (int i = 2; i < 10; i++) { + clientCache.put(i, i); + + assertEquals((Integer)i, igniteCache.get(i)); + assertEquals((Integer)i, clientCache.get(i)); + } + + // Parallel thread must be blocked on key 0. + assertFalse(clientCache.containsKey(1)); + } + finally { + keyLock.unlock(); + } + + fut.get(); + + assertTrue(clientCache.containsKey(1)); + } + } + + /** + * Test multiple concurrent async requests. 
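`testAsyncRequests` above pins a background thread on a locked key while the main thread keeps issuing requests over the same client channel, proving the channel does not serialize operations. Stripped of Ignite, the `CountDownLatch` + lock handoff it relies on looks like this (plain JDK, names illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class LatchHandoff {
    /** Returns true iff the worker was blocked while we held the lock, then ran after release. */
    public static boolean run() throws InterruptedException {
        ReentrantLock keyLock = new ReentrantLock();
        CountDownLatch started = new CountDownLatch(1);
        int[] progress = new int[1];

        keyLock.lock(); // main thread holds the "key" first

        Thread worker = new Thread(() -> {
            started.countDown(); // signal: worker is running and about to block
            keyLock.lock();      // blocks until the main thread unlocks
            try {
                progress[0]++;
            }
            finally {
                keyLock.unlock();
            }
        });
        worker.start();

        started.await(); // worker is live; it cannot pass keyLock.lock() yet

        // Guaranteed 0: the worker increments only after acquiring the lock we still hold.
        boolean blockedWhileLocked = progress[0] == 0;

        keyLock.unlock();
        worker.join(); // join() gives happens-before visibility of the increment

        return blockedWhileLocked && progress[0] == 1;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

The test's stronger claim is that while one request is blocked server-side, other requests on the same connection still complete — which only holds if the channel multiplexes requests asynchronously.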
+ */ + @Test + public void testConcurrentRequests() throws Exception { + try (IgniteClient client = Ignition.startClient(new ClientConfiguration().setAddresses(CLIENT_CONN_ADDR))) { + ClientCache clientCache = client.cache(CACHE_NAME); + + clientCache.clear(); + + AtomicInteger keyCnt = new AtomicInteger(); + + CyclicBarrier barrier = new CyclicBarrier(THREADS_CNT); + + GridTestUtils.runMultiThreaded(() -> { + try { + barrier.await(); + } + catch (Exception e) { + fail(); + } + + for (int i = 0; i < 100; i++) { + int key = keyCnt.incrementAndGet(); + + clientCache.put(key, key); + + assertEquals(key, (long)clientCache.get(key)); + } + + }, THREADS_CNT, "thin-client-thread"); + } + } + + /** + * Test multiple concurrent async queries. + */ + @Test + public void testConcurrentQueries() throws Exception { + try (IgniteClient client = Ignition.startClient(new ClientConfiguration().setAddresses(CLIENT_CONN_ADDR))) { + ClientCache clientCache = client.cache(CACHE_NAME); + + clientCache.clear(); + + for (int i = 0; i < 10; i++) + clientCache.put(i, i); + + CyclicBarrier barrier = new CyclicBarrier(THREADS_CNT); + + GridTestUtils.runMultiThreaded(() -> { + try { + barrier.await(); + } + catch (Exception e) { + fail(); + } + + for (int i = 0; i < 10; i++) { + Query> qry = new ScanQuery().setPageSize(1); + + try (QueryCursor> cur = clientCache.query(qry)) { + int cacheSize = clientCache.size(CachePeekMode.PRIMARY); + int curSize = cur.getAll().size(); + + assertEquals(cacheSize, curSize); + } + } + }, THREADS_CNT, "thin-client-thread"); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/client/ClientConfigurationTest.java b/modules/core/src/test/java/org/apache/ignite/client/ClientConfigurationTest.java index bcc212abbe130..287c6ec6793bc 100644 --- a/modules/core/src/test/java/org/apache/ignite/client/ClientConfigurationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/client/ClientConfigurationTest.java @@ -25,9 +25,19 @@ import 
java.io.ObjectOutput; import java.io.ObjectOutputStream; import java.util.Collections; +import java.util.Set; +import java.util.function.Predicate; +import java.util.stream.Collectors; + +import org.apache.ignite.Ignite; +import org.apache.ignite.Ignition; +import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.BinaryConfiguration; import org.apache.ignite.configuration.ClientConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Assert; import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; @@ -73,4 +83,36 @@ public void testSerialization() throws IOException, ClassNotFoundException { assertTrue(Comparers.equal(target, desTarget)); } + + /** + * Test check the case when {@link IgniteConfiguration#getRebalanceThreadPoolSize()} is equal to {@link + * IgniteConfiguration#getSystemThreadPoolSize()} + */ + @Test + public void testRebalanceThreadPoolSize() { + GridStringLogger gridStrLog = new GridStringLogger(); + gridStrLog.logLength(1024 * 100); + + IgniteConfiguration cci = Config.getServerConfiguration().setClientMode(true); + cci.setRebalanceThreadPoolSize(cci.getSystemThreadPoolSize()); + cci.setGridLogger(gridStrLog); + + try ( + Ignite si = Ignition.start(Config.getServerConfiguration()); + Ignite ci = Ignition.start(cci)) { + Set collect = si.cluster().nodes().stream() + .filter(new Predicate() { + @Override public boolean test(ClusterNode clusterNode) { + return clusterNode.isClient(); + } + }) + .collect(Collectors.toSet()); + + String log = gridStrLog.toString(); + boolean containsMsg = log.contains("Setting the rebalance pool size has no effect on the client mode"); + + Assert.assertTrue(containsMsg); + Assert.assertEquals(1, collect.size()); + } + } } diff --git a/modules/core/src/test/java/org/apache/ignite/client/Config.java 
b/modules/core/src/test/java/org/apache/ignite/client/Config.java index 3e8e0e1bed027..c279e8f0c9d07 100644 --- a/modules/core/src/test/java/org/apache/ignite/client/Config.java +++ b/modules/core/src/test/java/org/apache/ignite/client/Config.java @@ -17,8 +17,6 @@ package org.apache.ignite.client; -import java.net.InetSocketAddress; -import java.util.Collections; import java.util.UUID; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -34,12 +32,10 @@ public class Config { /** Name of the cache created by default in the cluster. */ public static final String DEFAULT_CACHE_NAME = "default"; + private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder().setShared(true); + /** */ public static IgniteConfiguration getServerConfiguration() { - TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(); - - ipFinder.registerAddresses(Collections.singletonList(new InetSocketAddress("127.0.0.1", 47500))); - TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi(); discoverySpi.setIpFinder(ipFinder); diff --git a/modules/core/src/test/java/org/apache/ignite/client/ConnectToStartingNodeTest.java b/modules/core/src/test/java/org/apache/ignite/client/ConnectToStartingNodeTest.java new file mode 100644 index 0000000000000..6ce382a02e617 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/client/ConnectToStartingNodeTest.java @@ -0,0 +1,89 @@ +/* + * Copyright 2019 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.client; + +import java.util.concurrent.Callable; +import java.util.concurrent.CyclicBarrier; +import org.apache.ignite.Ignite; +import org.apache.ignite.Ignition; +import org.apache.ignite.configuration.ClientConfiguration; +import org.apache.ignite.configuration.ClientConnectorConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.spi.IgniteSpiException; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.Nullable; +import org.junit.Test; + +/** + * Checks that connection with starting node will be established correctly. + */ +public class ConnectToStartingNodeTest extends GridCommonAbstractTest { + /** Client connector address. */ + private static final String CLIENT_CONN_ADDR = "127.0.0.1:" + ClientConnectorConfiguration.DFLT_PORT; + + /** Barrier to suspend discovery SPI start. 
*/ + private final CyclicBarrier barrier = new CyclicBarrier(2); + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName).setDiscoverySpi(new TcpDiscoverySpi() { + @Override public void spiStart(@Nullable String igniteInstanceName) throws IgniteSpiException { + try { + barrier.await(); + } + catch (Exception ignore) { + // No-op. + } + + super.spiStart(igniteInstanceName); + + try { + barrier.await(); + } + catch (Exception ignore) { + // No-op. + } + } + }); + } + + /** + * Test that client can't connect to server before discovery SPI start. + */ + @Test + public void testClientConnectBeforeDiscoveryStart() throws Exception { + IgniteInternalFuture futStartGrid = GridTestUtils.runAsync((Callable)this::startGrid); + + barrier.await(); + + IgniteInternalFuture futStartClient = GridTestUtils.runAsync( + () -> Ignition.startClient(new ClientConfiguration().setAddresses(CLIENT_CONN_ADDR))); + + // Server doesn't accept connection before discovery SPI started. + assertFalse(GridTestUtils.waitForCondition(futStartClient::isDone, 500L)); + + barrier.await(); + + futStartGrid.get(); + + // Server accept connection after discovery SPI started. 
+ assertTrue(GridTestUtils.waitForCondition(futStartClient::isDone, 500L)); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/client/LocalIgniteCluster.java b/modules/core/src/test/java/org/apache/ignite/client/LocalIgniteCluster.java index 18ff338cd1da2..c6e159300fabe 100644 --- a/modules/core/src/test/java/org/apache/ignite/client/LocalIgniteCluster.java +++ b/modules/core/src/test/java/org/apache/ignite/client/LocalIgniteCluster.java @@ -17,18 +17,16 @@ package org.apache.ignite.client; -import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Collection; -import java.util.Collections; import java.util.List; import java.util.Objects; import java.util.Random; import java.util.stream.Collectors; import org.apache.ignite.Ignite; import org.apache.ignite.Ignition; -import org.apache.ignite.configuration.ClientConnectorConfiguration; import org.apache.ignite.configuration.ClientConfiguration; +import org.apache.ignite.configuration.ClientConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; @@ -77,16 +75,18 @@ public static LocalIgniteCluster start(int initSize) { } /** {@inheritDoc} */ - @Override public void close() { + @Override public synchronized void close() { srvs.forEach(Ignite::close); srvs.clear(); + + failedCfgs.clear(); } /** * Remove one random node. */ - public void failNode() { + public synchronized void failNode() { if (srvs.isEmpty()) throw new IllegalStateException("Cannot remove node from empty cluster"); @@ -109,7 +109,7 @@ public void failNode() { /** * Restore one of the failed nodes. 
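`ConnectToStartingNodeTest` above suspends `spiStart()` on a two-party `CyclicBarrier` so the test can observe the exact window in which the node exists but discovery has not started. The same ordering trick, reduced to plain threads with illustrative names:

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicBoolean;

public class GatedStartup {
    /** Returns true iff "startup" was observably incomplete before the gate and complete after. */
    public static boolean run() throws Exception {
        CyclicBarrier barrier = new CyclicBarrier(2);
        AtomicBoolean started = new AtomicBoolean();

        Thread server = new Thread(() -> {
            try {
                barrier.await();   // 1st rendezvous: hold the start until the observer is ready
                started.set(true); // the gated "spiStart" work
                barrier.await();   // 2nd rendezvous: tell the observer the work is done
            }
            catch (Exception ignore) {
                // No-op.
            }
        });
        server.start();

        boolean beforeStart = started.get(); // read before the 1st rendezvous: always false

        barrier.await(); // release the gated startup
        barrier.await(); // wait for it to finish; barrier gives happens-before on the write

        boolean afterStart = started.get();  // always true

        server.join();

        return !beforeStart && afterStart;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

Wrapping both `await()` calls around the gated work is what makes the test deterministic: without the second rendezvous, the observer could race the startup and the `waitForCondition` assertions would flap.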
*/ - public void restoreNode() { + public synchronized void restoreNode() { if (failedCfgs.isEmpty()) throw new IllegalStateException("Cannot restore nodes in healthy cluster"); @@ -154,10 +154,6 @@ public int getInitialSize() { private static IgniteConfiguration getConfiguration(NodeConfiguration nodeCfg) { IgniteConfiguration igniteCfg = Config.getServerConfiguration(); - ((TcpDiscoverySpi)igniteCfg.getDiscoverySpi()).getIpFinder().registerAddresses( - Collections.singletonList(new InetSocketAddress(HOST, nodeCfg.getDiscoveryPort())) - ); - igniteCfg.setClientConnectorConfiguration(new ClientConnectorConfiguration() .setHost(HOST) .setPort(nodeCfg.getClientPort()) diff --git a/modules/core/src/test/java/org/apache/ignite/client/ReliabilityTest.java b/modules/core/src/test/java/org/apache/ignite/client/ReliabilityTest.java index f019fd9ca2475..646c52b5942be 100644 --- a/modules/core/src/test/java/org/apache/ignite/client/ReliabilityTest.java +++ b/modules/core/src/test/java/org/apache/ignite/client/ReliabilityTest.java @@ -17,42 +17,37 @@ package org.apache.ignite.client; +import java.lang.management.ManagementFactory; +import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Random; import java.util.concurrent.Executors; import java.util.concurrent.Future; import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicReference; import java.util.stream.Collectors; import java.util.stream.IntStream; import java.util.stream.Stream; import javax.cache.Cache; +import javax.management.MBeanServerInvocationHandler; +import javax.management.ObjectName; +import org.apache.ignite.Ignite; import org.apache.ignite.Ignition; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.query.Query; import org.apache.ignite.cache.query.QueryCursor; import org.apache.ignite.cache.query.ScanQuery; -import org.apache.ignite.internal.processors.platform.client.ClientStatus; import 
org.apache.ignite.configuration.ClientConfiguration; -import org.apache.ignite.internal.client.thin.ClientServerError; -import org.apache.ignite.testframework.GridTestUtils; -import org.junit.Rule; +import org.apache.ignite.internal.processors.odbc.ClientListenerProcessor; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.mxbean.ClientProcessorMXBean; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Test; -import org.junit.rules.Timeout; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; /** * High Availability tests. */ -public class ReliabilityTest { - /** Per test timeout */ - @Rule - public Timeout globalTimeout = new Timeout((int) GridTestUtils.DFLT_TEST_TIMEOUT); - +public class ReliabilityTest extends GridCommonAbstractTest { /** * Thin client failover. */ @@ -83,6 +78,8 @@ public void testFailover() throws Exception { assertEquals(val, cachedVal); }); + cache.clear(); + // Composite operation failover: query Map data = IntStream.rangeClosed(1, 1000).boxed() .collect(Collectors.toMap(i -> i, i -> String.format("String %s", i))); @@ -127,62 +124,123 @@ public void testFailover() throws Exception { } } - /** */ - @FunctionalInterface - private interface Assertion { - /** */ - void call() throws Exception; + /** + * Test single server failover. + */ + @Test + public void testSingleServerFailover() throws Exception { + try (LocalIgniteCluster cluster = LocalIgniteCluster.start(1); + IgniteClient client = Ignition.startClient(new ClientConfiguration() + .setAddresses(cluster.clientAddresses().iterator().next())) + ) { + ClientCache cache = client.createCache("cache"); + + // Before fail. + cache.put(0, 0); + + // Fail. + dropAllThinClientConnections(Ignition.allGrids().get(0)); + + try { + cache.put(0, 0); + } + catch (Exception expected) { + // No-op. + } + + // Recover after fail.
+ cache.put(0, 0); + } } /** - * Run the assertion while Ignite nodes keep failing/recovering 10 times. + * Test that failover doesn't lead to silent query inconsistency. */ - private static void assertOnUnstableCluster(LocalIgniteCluster cluster, Assertion assertion) { - // Keep changing Ignite cluster topology by adding/removing nodes - final AtomicBoolean isTopStable = new AtomicBoolean(false); + @Test + public void testQueryConsistencyOnFailover() throws Exception { + int CLUSTER_SIZE = 2; - final AtomicReference err = new AtomicReference<>(null); + try (LocalIgniteCluster cluster = LocalIgniteCluster.start(CLUSTER_SIZE); + IgniteClient client = Ignition.startClient(new ClientConfiguration() + .setAddresses(cluster.clientAddresses().toArray(new String[CLUSTER_SIZE]))) + ) { + ClientCache cache = client.createCache("cache"); - Future topChangeFut = Executors.newSingleThreadExecutor().submit(() -> { - for (int i = 0; i < 10 && err.get() == null; i++) { - while (cluster.size() != 1) - cluster.failNode(); + cache.put(0, 0); + cache.put(1, 1); - while (cluster.size() != cluster.getInitialSize()) - cluster.restoreNode(); - } + Query> qry = new ScanQuery().setPageSize(1); - isTopStable.set(true); - }); + try (QueryCursor> cur = cache.query(qry)) { + int cnt = 0; - // Use Ignite while the nodes keep failing - try { - while (err.get() == null && !isTopStable.get()) { - try { - assertion.call(); - } - catch (ClientServerError ex) { - // TODO: fix CACHE_DOES_NOT_EXIST server error and remove this exception handler - if (ex.getCode() != ClientStatus.CACHE_DOES_NOT_EXIST) - throw ex; + for (Iterator> it = cur.iterator(); it.hasNext(); it.next()) { + cnt++; + + if (cnt == 1) { + for (int i = 0; i < CLUSTER_SIZE; i++) + dropAllThinClientConnections(Ignition.allGrids().get(i)); + } } + + fail("ClientReconnectedException must be thrown"); + } + catch (ClientReconnectedException expected) { + // No-op. 
} } - catch (Throwable e) { - err.set(e); - } + } + + /** + * Drop all thin client connections on given Ignite instance. + * + * @param ignite Ignite. + */ + private void dropAllThinClientConnections(Ignite ignite) throws Exception { + ObjectName mbeanName = U.makeMBeanName(ignite.name(), "Clients", + ClientListenerProcessor.class.getSimpleName()); + + ClientProcessorMXBean mxBean = MBeanServerInvocationHandler.newProxyInstance( + ManagementFactory.getPlatformMBeanServer(), mbeanName, ClientProcessorMXBean.class, true); + + mxBean.dropAllConnections(); + } + + /** + * Run the closure while Ignite nodes keep failing/recovering several times. + */ + private void assertOnUnstableCluster(LocalIgniteCluster cluster, Runnable clo) throws Exception { + // Keep changing Ignite cluster topology by adding/removing nodes. + final AtomicBoolean stopFlag = new AtomicBoolean(false); + + Future topChangeFut = Executors.newSingleThreadExecutor().submit(() -> { + try { + for (int i = 0; i < 5 && !stopFlag.get(); i++) { + while (cluster.size() != 1) + cluster.failNode(); + + while (cluster.size() != cluster.getInitialSize()) + cluster.restoreNode(); + + awaitPartitionMapExchange(); + } + } + catch (InterruptedException ignore) { + // No-op. + } + stopFlag.set(true); + }); + + // Use Ignite while nodes keep failing. try { + while (!stopFlag.get()) + clo.run(); + topChangeFut.get(); } - catch (Exception e) { - err.set(e); + finally { + stopFlag.set(true); } - - Throwable ex = err.get(); - - String msg = ex == null ? 
"" : ex.getMessage(); - - assertNull(msg, ex); } } diff --git a/modules/core/src/test/java/org/apache/ignite/client/SslParametersTest.java b/modules/core/src/test/java/org/apache/ignite/client/SslParametersTest.java index 7ac6108197a24..a950d6dc10126 100644 --- a/modules/core/src/test/java/org/apache/ignite/client/SslParametersTest.java +++ b/modules/core/src/test/java/org/apache/ignite/client/SslParametersTest.java @@ -18,23 +18,28 @@ package org.apache.ignite.client; import java.util.concurrent.Callable; -import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.Ignition; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ClientConfiguration; import org.apache.ignite.configuration.ClientConnectorConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.ssl.SslContextFactory; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests cases when node connects to cluster with different set of cipher suites. */ +@RunWith(JUnit4.class) public class SslParametersTest extends GridCommonAbstractTest { - + /** */ public static final String TEST_CACHE_NAME = "TEST"; + /** */ private volatile String[] cipherSuites; @@ -58,8 +63,10 @@ public class SslParametersTest extends GridCommonAbstractTest { return cfg; } - /** {@inheritDoc} */ - protected ClientConfiguration getClientConfiguration() throws Exception { + /** + * @return Client config. 
+ */ + protected ClientConfiguration getClientConfiguration() { ClientConfiguration cfg = new ClientConfiguration(); cfg.setAddresses("127.0.0.1:10800"); @@ -71,9 +78,11 @@ protected ClientConfiguration getClientConfiguration() throws Exception { return cfg; } + /** + * @return SSL factory. + */ @NotNull private SslContextFactory createSslFactory() { - SslContextFactory factory = (SslContextFactory)GridTestUtils.sslTrustedFactory( - "node01", "trustone"); + SslContextFactory factory = (SslContextFactory)GridTestUtils.sslTrustedFactory("node01", "trustone"); factory.setCipherSuites(cipherSuites); factory.setProtocols(protocols); @@ -92,6 +101,7 @@ protected ClientConfiguration getClientConfiguration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSameCipherSuite() throws Exception { cipherSuites = new String[] { "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", @@ -102,12 +112,10 @@ public void testSameCipherSuite() throws Exception { startGrid(); checkSuccessfulClientStart( - new String[][] { - new String[] { - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", - "TLS_RSA_WITH_AES_128_GCM_SHA256", - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" - } + new String[] { + "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", + "TLS_RSA_WITH_AES_128_GCM_SHA256", + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" }, null ); @@ -116,6 +124,7 @@ public void testSameCipherSuite() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOneCommonCipherSuite() throws Exception { cipherSuites = new String[] { "TLS_RSA_WITH_AES_128_GCM_SHA256", @@ -123,13 +132,11 @@ public void testOneCommonCipherSuite() throws Exception { }; startGrid(); - + checkSuccessfulClientStart( - new String[][] { - new String[] { - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" - } + new String[] { + "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" }, null ); @@ -138,19 +145,18 @@ public void testOneCommonCipherSuite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoCommonCipherSuite() throws Exception { cipherSuites = new String[] { "TLS_RSA_WITH_AES_128_GCM_SHA256" }; startGrid(); - + checkClientStartFailure( - new String[][] { - new String[] { - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" - } + new String[] { + "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" }, null ); @@ -159,19 +165,18 @@ public void testNoCommonCipherSuite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNonExistentCipherSuite() throws Exception { cipherSuites = new String[] { "TLS_RSA_WITH_AES_128_GCM_SHA256" }; startGrid(); - + checkClientStartFailure( - new String[][] { - new String[] { - "TLC_FAKE_CIPHER", - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" - } + new String[] { + "TLC_FAKE_CIPHER", + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" }, null, IllegalArgumentException.class, @@ -182,6 +187,7 @@ public void testNonExistentCipherSuite() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNoCommonProtocols() throws Exception { protocols = new String[] { "TLSv1.1", @@ -192,11 +198,9 @@ public void testNoCommonProtocols() throws Exception { checkClientStartFailure( null, - new String[][] { - new String[] { - "TLSv1", - "TLSv1.2", - } + new String[] { + "TLSv1", + "TLSv1.2" } ); } @@ -204,6 +208,7 @@ public void testNoCommonProtocols() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNonExistentProtocol() throws Exception { protocols = new String[] { "SSLv3" @@ -213,11 +218,9 @@ public void testNonExistentProtocol() throws Exception { checkClientStartFailure( null, - new String[][] { - new String[] { - "SSLv3", - "SSLvDoesNotExist" - } + new String[] { + "SSLv3", + "SSLvDoesNotExist" }, IllegalArgumentException.class, "SSLvDoesNotExist" @@ -227,20 +230,19 @@ public void testNonExistentProtocol() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSameProtocols() throws Exception { protocols = new String[] { "TLSv1.1", - "TLSv1.2", + "TLSv1.2" }; startGrid(); checkSuccessfulClientStart(null, - new String[][] { - new String[] { - "TLSv1.1", - "TLSv1.2", - } + new String[] { + "TLSv1.1", + "TLSv1.2" } ); } @@ -248,6 +250,7 @@ public void testSameProtocols() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOneCommonProtocol() throws Exception { protocols = new String[] { "TLSv1", @@ -258,11 +261,9 @@ public void testOneCommonProtocol() throws Exception { startGrid(); checkSuccessfulClientStart(null, - new String[][] { - new String[] { - "TLSv1.1", - "SSLv3" - } + new String[] { + "TLSv1.1", + "SSLv3" } ); } @@ -272,30 +273,26 @@ public void testOneCommonProtocol() throws Exception { * @param protocols list of protocols * @throws Exception If failed. */ - private void checkSuccessfulClientStart(String[][] cipherSuites, String[][] protocols) throws Exception { - int n = Math.max( - cipherSuites != null ? 
cipherSuites.length : 0, - protocols != null ? protocols.length : 0); - - for (int i = 0; i < n; i++) { - this.cipherSuites = cipherSuites != null && i < cipherSuites.length ? cipherSuites[i] : null; - this.protocols = protocols != null && i < protocols.length ? protocols[i] : null; - - IgniteClient client = Ignition.startClient(getClientConfiguration()); + private void checkSuccessfulClientStart(String[] cipherSuites, String[] protocols) throws Exception { + this.cipherSuites = F.isEmpty(cipherSuites) ? null : cipherSuites; + this.protocols = F.isEmpty(protocols) ? null : protocols; + try (IgniteClient client = Ignition.startClient(getClientConfiguration())) { client.getOrCreateCache(TEST_CACHE_NAME); - - client.close(); } } /** * @param cipherSuites list of cipher suites * @param protocols list of protocols - * @throws Exception If failed. */ - private void checkClientStartFailure(String[][] cipherSuites, String[][] protocols) throws Exception { - checkClientStartFailure(cipherSuites, protocols, ClientConnectionException.class, "Ignite cluster is unavailable"); + private void checkClientStartFailure(String[] cipherSuites, String[] protocols) { + checkClientStartFailure( + cipherSuites, + protocols, + ClientConnectionException.class, + "Ignite cluster is unavailable" + ); } /** @@ -303,27 +300,27 @@ private void checkClientStartFailure(String[][] cipherSuites, String[][] protoco * @param protocols list of protocols * @param ex expected exception class * @param msg exception message - * @throws Exception If failed. */ - private void checkClientStartFailure(String[][] cipherSuites, String[][] protocols, Class ex, String msg) throws Exception { - int n = Math.max( - cipherSuites != null ? cipherSuites.length : 0, - protocols != null ? protocols.length : 0); - - for (int i = 0; i < n; i++) { - this.cipherSuites = cipherSuites != null && i < cipherSuites.length ? cipherSuites[i] : null; - this.protocols = protocols != null && i < protocols.length ? 
protocols[i] : null; - - int finalI = i; - - GridTestUtils.assertThrows(null, new Callable() { - @Override public Object call() throws Exception { + private void checkClientStartFailure( + String[] cipherSuites, + String[] protocols, + Class ex, + String msg + ) { + this.cipherSuites = F.isEmpty(cipherSuites) ? null : cipherSuites; + this.protocols = F.isEmpty(protocols) ? null : protocols; + + GridTestUtils.assertThrows( + null, + new Callable() { + @Override public Object call() { Ignition.startClient(getClientConfiguration()); return null; } - }, ex, msg); - } + }, + ex, + msg + ); } - } diff --git a/modules/core/src/test/java/org/apache/ignite/failure/FailureHandlerTriggeredTest.java b/modules/core/src/test/java/org/apache/ignite/failure/FailureHandlerTriggeredTest.java index 23c7e0895b4b9..79a2116b9ad0d 100644 --- a/modules/core/src/test/java/org/apache/ignite/failure/FailureHandlerTriggeredTest.java +++ b/modules/core/src/test/java/org/apache/ignite/failure/FailureHandlerTriggeredTest.java @@ -29,14 +29,19 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test of triggering of failure handler. */ +@RunWith(JUnit4.class) public class FailureHandlerTriggeredTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testFailureHandlerTriggeredOnExchangeWorkerTermination() throws Exception { try { CountDownLatch latch = new CountDownLatch(1); diff --git a/modules/core/src/test/java/org/apache/ignite/failure/FailureHandlingConfigurationTest.java b/modules/core/src/test/java/org/apache/ignite/failure/FailureHandlingConfigurationTest.java new file mode 100644 index 0000000000000..b7a9f1dc54263 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/failure/FailureHandlingConfigurationTest.java @@ -0,0 +1,271 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.failure; + +import java.lang.management.ManagementFactory; +import java.util.concurrent.CountDownLatch; +import javax.management.MBeanServer; +import javax.management.MBeanServerInvocationHandler; +import javax.management.ObjectName; +import org.apache.ignite.Ignite; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.worker.FailureHandlingMxBeanImpl; +import org.apache.ignite.internal.worker.WorkersRegistry; +import org.apache.ignite.mxbean.FailureHandlingMxBean; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.IgniteSystemProperties.IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT; +import static org.apache.ignite.IgniteSystemProperties.IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT; + +/** + * Tests configuration parameters related to failure handling. 
+ */ +@RunWith(JUnit4.class) +public class FailureHandlingConfigurationTest extends GridCommonAbstractTest { + /** */ + private Long checkpointReadLockTimeout; + + /** */ + private Long sysWorkerBlockedTimeout; + + /** */ + private CountDownLatch failureLatch; + + /** */ + private class TestFailureHandler extends AbstractFailureHandler { + /** */ + TestFailureHandler() { + failureLatch = new CountDownLatch(1); + } + + /** {@inheritDoc} */ + @Override protected boolean handle(Ignite ignite, FailureContext failureCtx) { + failureLatch.countDown(); + + return false; + } + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setFailureHandler(new TestFailureHandler()); + + DataRegionConfiguration drCfg = new DataRegionConfiguration(); + drCfg.setPersistenceEnabled(true); + + DataStorageConfiguration dsCfg = new DataStorageConfiguration(); + dsCfg.setDefaultDataRegionConfiguration(drCfg); + + if (checkpointReadLockTimeout != null) + dsCfg.setCheckpointReadLockTimeout(checkpointReadLockTimeout); + + cfg.setDataStorageConfiguration(dsCfg); + + if (sysWorkerBlockedTimeout != null) + cfg.setSystemWorkerBlockedTimeout(sysWorkerBlockedTimeout); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); + + sysWorkerBlockedTimeout = null; + checkpointReadLockTimeout = null; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testCfgParamsPropagation() throws Exception { + sysWorkerBlockedTimeout = 30_000L; + checkpointReadLockTimeout = 20_000L; + + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + WorkersRegistry reg = ignite.context().workersRegistry(); + + IgniteCacheDatabaseSharedManager dbMgr = ignite.context().cache().context().database(); + + FailureHandlingMxBean mBean = getMBean(); + + assertEquals(sysWorkerBlockedTimeout.longValue(), reg.getSystemWorkerBlockedTimeout()); + assertEquals(checkpointReadLockTimeout.longValue(), dbMgr.checkpointReadLockTimeout()); + + assertEquals(sysWorkerBlockedTimeout.longValue(), mBean.getSystemWorkerBlockedTimeout()); + assertEquals(checkpointReadLockTimeout.longValue(), mBean.getCheckpointReadLockTimeout()); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPartialCfgParamsPropagation() throws Exception { + sysWorkerBlockedTimeout = 30_000L; + checkpointReadLockTimeout = null; + + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + WorkersRegistry reg = ignite.context().workersRegistry(); + + IgniteCacheDatabaseSharedManager dbMgr = ignite.context().cache().context().database(); + + FailureHandlingMxBean mBean = getMBean(); + + assertEquals(sysWorkerBlockedTimeout.longValue(), reg.getSystemWorkerBlockedTimeout()); + assertEquals(sysWorkerBlockedTimeout.longValue(), dbMgr.checkpointReadLockTimeout()); + + assertEquals(sysWorkerBlockedTimeout.longValue(), mBean.getSystemWorkerBlockedTimeout()); + assertEquals(sysWorkerBlockedTimeout.longValue(), mBean.getCheckpointReadLockTimeout()); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testNegativeParamValues() throws Exception { + sysWorkerBlockedTimeout = -1L; + checkpointReadLockTimeout = -85L; + + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + WorkersRegistry reg = ignite.context().workersRegistry(); + + IgniteCacheDatabaseSharedManager dbMgr = ignite.context().cache().context().database(); + + FailureHandlingMxBean mBean = getMBean(); + + assertEquals(0L, reg.getSystemWorkerBlockedTimeout()); + assertEquals(-85L, dbMgr.checkpointReadLockTimeout()); + + assertEquals(0L, mBean.getSystemWorkerBlockedTimeout()); + assertEquals(-85L, mBean.getCheckpointReadLockTimeout()); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testOverridingBySysProps() throws Exception { + String prevWorkerProp = System.getProperty(IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT); + String prevCheckpointProp = System.getProperty(IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT); + + long workerPropVal = 80_000; + long checkpointPropVal = 90_000; + + System.setProperty(IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT, String.valueOf(workerPropVal)); + System.setProperty(IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT, String.valueOf(checkpointPropVal)); + + try { + sysWorkerBlockedTimeout = 1L; + checkpointReadLockTimeout = 2L; + + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + WorkersRegistry reg = ignite.context().workersRegistry(); + + IgniteCacheDatabaseSharedManager dbMgr = ignite.context().cache().context().database(); + + FailureHandlingMxBean mBean = getMBean(); + + assertEquals(sysWorkerBlockedTimeout, ignite.configuration().getSystemWorkerBlockedTimeout()); + assertEquals(checkpointReadLockTimeout, + ignite.configuration().getDataStorageConfiguration().getCheckpointReadLockTimeout()); + + assertEquals(workerPropVal, reg.getSystemWorkerBlockedTimeout()); + assertEquals(checkpointPropVal, dbMgr.checkpointReadLockTimeout()); + + assertEquals(workerPropVal, mBean.getSystemWorkerBlockedTimeout()); + 
assertEquals(checkpointPropVal, mBean.getCheckpointReadLockTimeout()); + } + finally { + if (prevWorkerProp != null) + System.setProperty(IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT, prevWorkerProp); + else + System.clearProperty(IGNITE_SYSTEM_WORKER_BLOCKED_TIMEOUT); + + if (prevCheckpointProp != null) + System.setProperty(IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT, prevCheckpointProp); + else + System.clearProperty(IGNITE_CHECKPOINT_READ_LOCK_TIMEOUT); + } + } + + /** + * @throws Exception If failed. + */ + @Test + public void testMBeanParamsChanging() throws Exception { + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + FailureHandlingMxBean mBean = getMBean(); + + mBean.setSystemWorkerBlockedTimeout(80_000L); + assertEquals(80_000L, ignite.context().workersRegistry().getSystemWorkerBlockedTimeout()); + + mBean.setCheckpointReadLockTimeout(90_000L); + assertEquals(90_000L, ignite.context().cache().context().database().checkpointReadLockTimeout()); + + assertTrue(mBean.getLivenessCheckEnabled()); + mBean.setLivenessCheckEnabled(false); + assertFalse(ignite.context().workersRegistry().livenessCheckEnabled()); + ignite.context().workersRegistry().livenessCheckEnabled(true); + assertTrue(mBean.getLivenessCheckEnabled()); + } + + /** */ + private FailureHandlingMxBean getMBean() throws Exception { + ObjectName name = U.makeMBeanName(getTestIgniteInstanceName(0), "Kernal", + FailureHandlingMxBeanImpl.class.getSimpleName()); + + MBeanServer srv = ManagementFactory.getPlatformMBeanServer(); + + assertTrue(srv.isRegistered(name)); + + return MBeanServerInvocationHandler.newProxyInstance(srv, name, FailureHandlingMxBean.class, true); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/failure/IoomFailureHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/failure/IoomFailureHandlerTest.java index a777f815a89e2..6c9304bac9c57 100644 --- a/modules/core/src/test/java/org/apache/ignite/failure/IoomFailureHandlerTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/failure/IoomFailureHandlerTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.internal.mem.IgniteOutOfMemoryException; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * IgniteOutOfMemoryError failure handler test. */ +@RunWith(JUnit4.class) public class IoomFailureHandlerTest extends AbstractFailureHandlerTest { /** Offheap size for memory policy. */ private static final int SIZE = 10 * 1024 * 1024; @@ -45,6 +49,9 @@ public class IoomFailureHandlerTest extends AbstractFailureHandlerTest { /** PDS enabled. */ private boolean pds; + /** MVCC enabled. */ + private boolean mvcc; + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -69,7 +76,7 @@ public class IoomFailureHandlerTest extends AbstractFailureHandlerTest { .setName(DEFAULT_CACHE_NAME) .setCacheMode(CacheMode.PARTITIONED) .setBackups(0) - .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + .setAtomicityMode(mvcc ? CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT : CacheAtomicityMode.TRANSACTIONAL); cfg.setCacheConfiguration(ccfg); @@ -93,22 +100,43 @@ public class IoomFailureHandlerTest extends AbstractFailureHandlerTest { /** * Test IgniteOutOfMemoryException handling with no store. */ + @Test public void testIoomErrorNoStoreHandling() throws Exception { - testIoomErrorHandling(false); + testIoomErrorHandling(false, false); } /** * Test IgniteOutOfMemoryException handling with PDS. */ + @Test public void testIoomErrorPdsHandling() throws Exception { - testIoomErrorHandling(true); + testIoomErrorHandling(true, false); + } + + /** + * Test IgniteOutOfMemoryException handling with no store. 
+ */ + @Test + public void testIoomErrorMvccNoStoreHandling() throws Exception { + testIoomErrorHandling(false, true); + } + + /** + * Test IgniteOutOfMemoryException handling with PDS. + */ + @Test + public void testIoomErrorMvccPdsHandling() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10185"); + + testIoomErrorHandling(true, true); } /** * Test IOOME handling. */ - public void testIoomErrorHandling(boolean pds) throws Exception { + public void testIoomErrorHandling(boolean pds, boolean mvcc) throws Exception { this.pds = pds; + this.mvcc = mvcc; IgniteEx ignite0 = startGrid(0); IgniteEx ignite1 = startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/failure/OomFailureHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/failure/OomFailureHandlerTest.java index 2af94b88ee3e4..243316a887a5d 100644 --- a/modules/core/src/test/java/org/apache/ignite/failure/OomFailureHandlerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/failure/OomFailureHandlerTest.java @@ -36,10 +36,14 @@ import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Out of memory error failure handler test. */ +@RunWith(JUnit4.class) public class OomFailureHandlerTest extends AbstractFailureHandlerTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -64,6 +68,7 @@ public class OomFailureHandlerTest extends AbstractFailureHandlerTest { /** * Test OOME in IgniteCompute. */ + @Test public void testComputeOomError() throws Exception { IgniteEx ignite0 = startGrid(0); IgniteEx ignite1 = startGrid(1); @@ -88,6 +93,7 @@ public void testComputeOomError() throws Exception { /** * Test OOME in EntryProcessor. 
      */
+    @Test
     public void testEntryProcessorOomError() throws Exception {
         IgniteEx ignite0 = startGrid(0);
         IgniteEx ignite1 = startGrid(1);
@@ -121,6 +127,7 @@ public void testEntryProcessorOomError() throws Exception {
     /**
      * Test OOME in service method invocation.
      */
+    @Test
     public void testServiceInvokeOomError() throws Exception {
         IgniteEx ignite0 = startGrid(0);
         IgniteEx ignite1 = startGrid(1);
@@ -149,6 +156,7 @@ public void testServiceInvokeOomError() throws Exception {
     /**
      * Test OOME in service execute.
      */
+    @Test
     public void testServiceExecuteOomError() throws Exception {
         IgniteEx ignite0 = startGrid(0);
         IgniteEx ignite1 = startGrid(1);
@@ -168,6 +176,7 @@ public void testServiceExecuteOomError() throws Exception {
     /**
      * Test OOME in event listener.
      */
+    @Test
     public void testEventListenerOomError() throws Exception {
         IgniteEx ignite0 = startGrid(0);
         IgniteEx ignite1 = startGrid(1);
diff --git a/modules/core/src/test/java/org/apache/ignite/failure/StopNodeFailureHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/failure/StopNodeFailureHandlerTest.java
index fb75aaeb40a92..c6f13f791dd10 100644
--- a/modules/core/src/test/java/org/apache/ignite/failure/StopNodeFailureHandlerTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/failure/StopNodeFailureHandlerTest.java
@@ -26,10 +26,14 @@
 import org.apache.ignite.internal.IgnitionEx;
 import org.apache.ignite.internal.util.typedef.PE;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * {@link StopNodeFailureHandler} tests.
  */
+@RunWith(JUnit4.class)
 public class StopNodeFailureHandlerTest extends GridCommonAbstractTest {
     /** {@inheritDoc} */
     @Override protected FailureHandler getFailureHandler(String igniteInstanceName) {
@@ -43,6 +47,7 @@ public class StopNodeFailureHandlerTest extends GridCommonAbstractTest {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testNodeStopped() throws Exception {
         try {
             IgniteEx ignite0 = startGrid(0);
diff --git a/modules/core/src/test/java/org/apache/ignite/failure/StopNodeOrHaltFailureHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/failure/StopNodeOrHaltFailureHandlerTest.java
index 05e3e6e8faee1..86af2a38a0a42 100644
--- a/modules/core/src/test/java/org/apache/ignite/failure/StopNodeOrHaltFailureHandlerTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/failure/StopNodeOrHaltFailureHandlerTest.java
@@ -28,10 +28,14 @@
 import org.apache.ignite.resources.IgniteInstanceResource;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.multijvm.IgniteProcessProxy;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * {@link StopNodeOrHaltFailureHandler} tests.
  */
+@RunWith(JUnit4.class)
 public class StopNodeOrHaltFailureHandlerTest extends GridCommonAbstractTest {
     /** Number of grids started for tests. */
     public static final int NODES_CNT = 3;
@@ -56,6 +60,7 @@ public class StopNodeOrHaltFailureHandlerTest extends GridCommonAbstractTest {
     /**
      * Tests failed node's JVM is halted after triggering StopNodeOrHaltFailureHandler.
      */
+    @Test
     public void testJvmHalted() throws Exception {
         IgniteEx g = grid(0);
         IgniteEx rmt1 = grid(1);
diff --git a/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersBlockingTest.java b/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersBlockingTest.java
index 3ca7948a38c76..3d7e5dfbe05f7 100644
--- a/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersBlockingTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersBlockingTest.java
@@ -17,6 +17,8 @@
 
 package org.apache.ignite.failure;
 
+import java.util.HashSet;
+import java.util.Set;
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.TimeUnit;
 import org.apache.ignite.Ignite;
@@ -25,10 +27,14 @@
 import org.apache.ignite.internal.util.worker.GridWorker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.thread.IgniteThread;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests the handling of long blocking operations in system-critical workers.
  */
+@RunWith(JUnit4.class)
 public class SystemWorkersBlockingTest extends GridCommonAbstractTest {
     /** Handler latch. */
     private static volatile CountDownLatch hndLatch;
@@ -40,13 +46,22 @@ public class SystemWorkersBlockingTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        cfg.setFailureHandler(new AbstractFailureHandler() {
+        AbstractFailureHandler failureHnd = new AbstractFailureHandler() {
             @Override protected boolean handle(Ignite ignite, FailureContext failureCtx) {
-                hndLatch.countDown();
+                if (failureCtx.type() == FailureType.SYSTEM_WORKER_BLOCKED)
+                    hndLatch.countDown();
 
                 return false;
             }
-        });
+        };
+
+        Set ignoredFailureTypes = new HashSet<>(failureHnd.getIgnoredFailureTypes());
+
+        ignoredFailureTypes.remove(FailureType.SYSTEM_WORKER_BLOCKED);
+
+        failureHnd.setIgnoredFailureTypes(ignoredFailureTypes);
+
+        cfg.setFailureHandler(failureHnd);
 
         cfg.setFailureDetectionTimeout(FAILURE_DETECTION_TIMEOUT);
@@ -72,6 +87,7 @@ public class SystemWorkersBlockingTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBlockingWorker() throws Exception {
         IgniteEx ignite = grid(0);
diff --git a/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersTerminationTest.java b/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersTerminationTest.java
index 638e6f1095141..9dadff6ec4fdd 100644
--- a/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersTerminationTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersTerminationTest.java
@@ -17,31 +17,28 @@
 
 package org.apache.ignite.failure;
 
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.concurrent.CountDownLatch;
-import java.util.concurrent.TimeUnit;
 import org.apache.ignite.Ignite;
 import org.apache.ignite.configuration.DataRegionConfiguration;
 import org.apache.ignite.configuration.DataStorageConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteEx;
-import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.internal.util.worker.GridWorker;
 import org.apache.ignite.internal.worker.WorkersRegistry;
+import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.thread.IgniteThread;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests system critical workers termination.
  */
+@RunWith(JUnit4.class)
 public class SystemWorkersTerminationTest extends GridCommonAbstractTest {
-    /** Handler latch. */
-    private static volatile CountDownLatch hndLatch;
-
     /** */
-    private static final long FAILURE_DETECTION_TIMEOUT = 5_000;
+    private static volatile String failureHndThreadName;
 
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -58,8 +55,6 @@ public class SystemWorkersTerminationTest extends GridCommonAbstractTest {
 
         cfg.setDataStorageConfiguration(dsCfg);
 
-        cfg.setFailureDetectionTimeout(FAILURE_DETECTION_TIMEOUT);
-
         return cfg;
     }
@@ -84,62 +79,29 @@ public class SystemWorkersTerminationTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
-    public void testTermination() throws Exception {
-        Ignite ignite = ignite(0);
-
-        ignite.cluster().active(true);
-
-        WorkersRegistry registry = ((IgniteKernal)ignite).context().workersRegistry();
-
-        Collection threadNames = new ArrayList<>(registry.names());
-
-        int cnt = 0;
-
-        for (String threadName : threadNames) {
-            log.info("Worker termination: " + threadName);
-
-            hndLatch = new CountDownLatch(1);
-
-            GridWorker w = registry.worker(threadName);
-
-            Thread t = w.runner();
-
-            t.interrupt();
-
-            assertTrue(hndLatch.await(3, TimeUnit.SECONDS));
-
-            log.info("Worker is terminated: " + threadName);
-
-            cnt++;
-        }
-
-        assertEquals(threadNames.size(), cnt);
-    }
-
-    /**
-     * @throws Exception If failed.
-     */
+    @Test
     public void testSyntheticWorkerTermination() throws Exception {
-        hndLatch = new CountDownLatch(1);
-
         IgniteEx ignite = grid(0);
 
-        GridWorker worker = new GridWorker(ignite.name(), "test-worker", log) {
+        WorkersRegistry registry = ignite.context().workersRegistry();
+
+        long fdTimeout = ignite.configuration().getFailureDetectionTimeout();
+
+        GridWorker worker = new GridWorker(ignite.name(), "test-worker", log, registry) {
             @Override protected void body() throws InterruptedException {
-                Thread.sleep(ignite.configuration().getFailureDetectionTimeout() / 2);
+                Thread.sleep(fdTimeout / 2);
             }
         };
 
-        new IgniteThread(worker).start();
+        IgniteThread thread = new IgniteThread(worker);
 
-        while (worker.runner() == null)
-            Thread.sleep(10);
+        failureHndThreadName = null;
 
-        ignite.context().workersRegistry().register(worker);
+        thread.start();
 
-        worker.runner().join();
+        thread.join();
 
-        assertTrue(hndLatch.await(ignite.configuration().getFailureDetectionTimeout() * 2, TimeUnit.MILLISECONDS));
+        assertTrue(GridTestUtils.waitForCondition(() -> thread.getName().equals(failureHndThreadName), fdTimeout * 2));
     }
@@ -157,7 +119,8 @@ private void deleteWorkFiles() throws Exception {
     private class TestFailureHandler extends AbstractFailureHandler {
         /** {@inheritDoc} */
         @Override protected boolean handle(Ignite ignite, FailureContext failureCtx) {
-            hndLatch.countDown();
+            if (failureCtx.type() == FailureType.SYSTEM_WORKER_TERMINATION)
+                failureHndThreadName = Thread.currentThread().getName();
 
             return false;
         }
diff --git a/modules/core/src/test/java/org/apache/ignite/failure/TestFailureHandler.java b/modules/core/src/test/java/org/apache/ignite/failure/TestFailureHandler.java
index 09dce9b2a0e17..5ac75d112bc39 100644
--- a/modules/core/src/test/java/org/apache/ignite/failure/TestFailureHandler.java
+++ b/modules/core/src/test/java/org/apache/ignite/failure/TestFailureHandler.java
@@ -52,12 +52,14 @@ public TestFailureHandler(boolean invalidate, CountDownLatch latch) {
     /** {@inheritDoc} */
     @Override protected boolean handle(Ignite ignite, FailureContext failureCtx) {
-        this.failureCtx = failureCtx;
+        if (this.failureCtx == null) {
+            this.failureCtx = failureCtx;
 
-        if (latch != null)
-            latch.countDown();
+            if (latch != null)
+                latch.countDown();
 
-        ignite.log().warning("Handled ignite failure: " + failureCtx);
+            ignite.log().warning("Handled ignite failure: " + failureCtx);
+        }
 
         return invalidate;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsEventsAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsEventsAbstractSelfTest.java
index bb84ae3ec78dd..33a610fa215f9 100644
--- a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsEventsAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsEventsAbstractSelfTest.java
@@ -38,6 +38,9 @@
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -63,6 +66,7 @@
 /**
  * Tests events, generated by {@link org.apache.ignite.IgniteFileSystem} implementation.
  */
+@RunWith(JUnit4.class)
 public abstract class IgfsEventsAbstractSelfTest extends GridCommonAbstractTest {
     /** IGFS. */
     private static IgfsImpl igfs;
@@ -184,6 +188,7 @@ protected static int[] concat(@Nullable int[] arr, int... obj) {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSingleFileNestedDirs() throws Exception {
         final List evtList = new ArrayList<>();
@@ -265,6 +270,7 @@ public void testSingleFileNestedDirs() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDirWithFiles() throws Exception {
         final List evtList = new ArrayList<>();
@@ -346,6 +352,7 @@ public void testDirWithFiles() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSingleEmptyDir() throws Exception {
         final List evtList = new ArrayList<>();
@@ -403,6 +410,7 @@ public void testSingleEmptyDir() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testTwoFiles() throws Exception {
         final List evtList = new ArrayList<>();
@@ -490,6 +498,7 @@ public void testTwoFiles() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDeleteNonRecursive() throws Exception {
         final List evtList = new ArrayList<>();
@@ -544,6 +553,7 @@ public void testDeleteNonRecursive() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testMoveFile() throws Exception {
         final List evtList = new ArrayList<>();
@@ -611,6 +621,7 @@ public void testMoveFile() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testNestedEmptyDirs() throws Exception {
         final List evtList = new ArrayList<>();
@@ -661,6 +672,7 @@ public void testNestedEmptyDirs() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSingleFileOverwrite() throws Exception {
         final List evtList = new ArrayList<>();
@@ -747,6 +759,7 @@ public void testSingleFileOverwrite() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testFileDataEvents() throws Exception {
         final List evtList = new ArrayList<>();
@@ -836,4 +849,4 @@ private static class EventPredicate implements IgnitePredicate {
             return e0.type() == evt && e0.path().equals(path);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerAbstractSelfTest.java
index 0b14b1d91c01f..8ccd2b0b06412 100644
--- a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerAbstractSelfTest.java
@@ -27,9 +27,6 @@
 import org.apache.ignite.internal.processors.igfs.IgfsMetaManager;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteUuid;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -39,9 +36,6 @@
  * Fragmentizer abstract self test.
  */
 public class IgfsFragmentizerAbstractSelfTest extends IgfsCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Test nodes count. */
     protected static final int NODE_CNT = 4;
@@ -55,12 +49,6 @@ public class IgfsFragmentizerAbstractSelfTest extends IgfsCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-
-        discoSpi.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(discoSpi);
-
         FileSystemConfiguration igfsCfg = new FileSystemConfiguration();
 
         igfsCfg.setName("igfs");
diff --git a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerSelfTest.java
index b95fc9cecdac7..1d2425236f5f5 100644
--- a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerSelfTest.java
@@ -28,14 +28,19 @@
 import org.apache.ignite.internal.util.typedef.CA;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests fragmentizer work.
  */
+@RunWith(JUnit4.class)
 public class IgfsFragmentizerSelfTest extends IgfsFragmentizerAbstractSelfTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReadFragmentizing() throws Exception {
         IgniteFileSystem igfs = grid(0).fileSystem("igfs");
@@ -70,6 +75,7 @@ public void testReadFragmentizing() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAppendFragmentizing() throws Exception {
         checkAppendFragmentizing(IGFS_BLOCK_SIZE / 4, false);
     }
@@ -77,6 +83,7 @@ public void testAppendFragmentizing() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAppendFragmentizingAligned() throws Exception {
         checkAppendFragmentizing(IGFS_BLOCK_SIZE, false);
     }
@@ -84,6 +91,7 @@ public void testAppendFragmentizingAligned() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAppendFragmentizingDifferentNodes() throws Exception {
         checkAppendFragmentizing(IGFS_BLOCK_SIZE / 4, true);
     }
@@ -91,6 +99,7 @@ public void testAppendFragmentizingDifferentNodes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAppendFragmentizingAlignedDifferentNodes() throws Exception {
         checkAppendFragmentizing(IGFS_BLOCK_SIZE, true);
     }
@@ -158,6 +167,7 @@ private void checkAppendFragmentizing(int chunkSize, boolean rotate) throws Exce
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testFlushFragmentizing() throws Exception {
         checkFlushFragmentizing(IGFS_BLOCK_SIZE / 4);
     }
@@ -165,6 +175,7 @@ public void testFlushFragmentizing() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testFlushFragmentizingAligned() throws Exception {
         checkFlushFragmentizing(IGFS_BLOCK_SIZE);
     }
@@ -223,6 +234,7 @@ private void checkFlushFragmentizing(int chunkSize) throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDeleteFragmentizing() throws Exception {
         IgfsImpl igfs = (IgfsImpl)grid(0).fileSystem("igfs");
@@ -265,4 +277,4 @@ private static void readFully(InputStream in, byte[] data) throws IOException {
         while(read < data.length)
             read += in.read(data, read, data.length - read);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerTopologySelfTest.java b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerTopologySelfTest.java
index 9cc500624beb6..ffed70d701c2f 100644
--- a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerTopologySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsFragmentizerTopologySelfTest.java
@@ -18,14 +18,19 @@
 package org.apache.ignite.igfs;
 
 import org.apache.ignite.IgniteFileSystem;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests coordinator transfer from one node to other.
  */
+@RunWith(JUnit4.class)
 public class IgfsFragmentizerTopologySelfTest extends IgfsFragmentizerAbstractSelfTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCoordinatorLeave() throws Exception {
         stopGrid(0);
@@ -46,4 +51,4 @@ public void testCoordinatorLeave() throws Exception {
             startGrid(0);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsPathSelfTest.java b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsPathSelfTest.java
index 465a4400905c8..0aba5435b12ea 100644
--- a/modules/core/src/test/java/org/apache/ignite/igfs/IgfsPathSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/igfs/IgfsPathSelfTest.java
@@ -29,10 +29,14 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * {@link IgfsPath} self test.
 */
+@RunWith(JUnit4.class)
 public class IgfsPathSelfTest extends GridCommonAbstractTest {
     /** Marshaller to test {@link Externalizable} interface. */
     private final Marshaller marshaller;
@@ -47,6 +51,7 @@ public IgfsPathSelfTest() throws IgniteCheckedException {
      *
      * @throws Exception In case of any exception.
      */
+    @Test
     public void testMethods() throws Exception {
         IgfsPath path = new IgfsPath("/a/s/d/f");
@@ -97,6 +102,7 @@ private T mu(T obj) throws IgniteCheckedException {
      * @throws Exception In case of any exception.
      */
     @SuppressWarnings("TooBroadScope")
+    @Test
     public void testConstructors() throws Exception {
         String pathStr = "///";
         URI uri = URI.create(pathStr);
@@ -158,4 +164,4 @@ private void expectConstructorThrows(Class cls, final Objec
             }
         }, cls, null);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClassSetTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClassSetTest.java
index c51957a0e7082..38f7098b7430e 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClassSetTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClassSetTest.java
@@ -17,15 +17,19 @@
 
 package org.apache.ignite.internal;
 
-import junit.framework.TestCase;
+import org.junit.Test;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Tests for {@link ClassSet} class.
 */
-public class ClassSetTest extends TestCase {
+public class ClassSetTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAddAndContains() throws Exception {
         ClassSet clsSet = new ClassSet();
@@ -39,6 +43,7 @@ public void testAddAndContains() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAddWithMaskAndContains() throws Exception {
         ClassSet clsSet = new ClassSet();
@@ -52,6 +57,7 @@ public void testAddWithMaskAndContains() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReduceOnAddWithMask() throws Exception {
         ClassSet clsSet = new ClassSet();
@@ -68,4 +74,4 @@ public void testReduceOnAddWithMask() throws Exception {
         assertTrue(clsSet.contains("org.apache.ignite.Ignition"));
         assertTrue(clsSet.contains("org.apache.ignite.NotIgnite"));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClusterBaselineNodesMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClusterBaselineNodesMetricsSelfTest.java
index 46b09ac3b1819..911264aaa4f3c 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClusterBaselineNodesMetricsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClusterBaselineNodesMetricsSelfTest.java
@@ -31,20 +31,18 @@
 import org.apache.ignite.configuration.WALMode;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.mxbean.ClusterMetricsMXBean;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Baseline nodes metrics self test.
 */
 @GridCommonTest(group = "Kernal Self")
+@RunWith(JUnit4.class)
 public class ClusterBaselineNodesMetricsSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
         stopAllGrids();
@@ -53,6 +51,7 @@ public class ClusterBaselineNodesMetricsSelfTest extends GridCommonAbstractTest
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBaselineNodes() throws Exception {
         // Start 2 server nodes.
         IgniteEx ignite0 = startGrid(0);
@@ -125,8 +124,6 @@ public void testBaselineNodes() throws Exception {
 
         String storePath = getClass().getSimpleName().toLowerCase() + "/" + getName();
 
-        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER));
-
         cfg.setDataStorageConfiguration(
             new DataStorageConfiguration()
                 .setWalMode(WALMode.LOG_ONLY)
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupAbstractTest.java
index fbf938d0ea913..23cbe2de0652d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupAbstractTest.java
@@ -53,10 +53,11 @@
 import org.apache.ignite.lang.IgniteRunnable;
 import org.apache.ignite.lang.IgniteUuid;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.events.EventType.EVT_JOB_STARTED;
 
@@ -64,10 +65,8 @@
  * Abstract test for {@link org.apache.ignite.cluster.ClusterGroup}
 */
 @SuppressWarnings("deprecation")
+@RunWith(JUnit4.class)
 public abstract class ClusterGroupAbstractTest extends GridCommonAbstractTest implements Externalizable {
-    /** VM ip finder for TCP discovery. */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Waiting timeout. */
     private static final int WAIT_TIMEOUT = 30000;
@@ -121,7 +120,7 @@ protected ClusterGroupAbstractTest() {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        cfg.setDiscoverySpi(new TcpDiscoverySpi().setForceServerMode(true).setIpFinder(ipFinder));
+        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true);
 
         return cfg;
     }
@@ -184,6 +183,7 @@ private Collection projectionNodeIds() {
     /**
      * Test for projection on not existing node IDs.
      */
+    @Test
     public void testInvalidProjection() {
         Collection ids = new HashSet<>();
@@ -198,6 +198,7 @@ public void testInvalidProjection() {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testProjection() throws Exception {
         assert prj != null;
@@ -217,6 +218,7 @@ public void testProjection() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testRemoteNodes() throws Exception {
         Collection remoteNodeIds = remoteNodeIds();
@@ -247,6 +249,7 @@ public void testRemoteNodes() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testRemoteProjection() throws Exception {
         Collection remoteNodeIds = remoteNodeIds();
@@ -277,6 +280,7 @@ public void testRemoteProjection() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testExecution() throws Exception {
         String name = "oneMoreGrid";
@@ -757,4 +761,4 @@ public static class TestJob extends ComputeJobAdapter {
         @Override public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
             // No-op.
         }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupHostsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupHostsSelfTest.java
index 141f4af7f9194..8d2b43205eeb6 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupHostsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupHostsSelfTest.java
@@ -29,6 +29,9 @@
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link ClusterGroup#forHost(String, String...)}.
@@ -36,6 +39,7 @@
  * @see ClusterGroupSelfTest
 */
 @GridCommonTest(group = "Kernal Self")
+@RunWith(JUnit4.class)
 public class ClusterGroupHostsSelfTest extends GridCommonAbstractTest {
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
@@ -60,6 +64,7 @@ public class ClusterGroupHostsSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testForHosts() throws Exception {
         if (!tcpDiscovery())
             return;
@@ -89,6 +94,7 @@ public void testForHosts() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testHostNames() throws Exception {
         Ignite ignite = grid();
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupSelfTest.java
index 0c4812f61a5e7..0fb9f80bd89a5 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClusterGroupSelfTest.java
@@ -35,11 +35,15 @@
 import org.apache.ignite.marshaller.Marshaller;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link ClusterGroup}.
 */
 @GridCommonTest(group = "Kernal Self")
+@RunWith(JUnit4.class)
 public class ClusterGroupSelfTest extends ClusterGroupAbstractTest {
     /** Nodes count. */
     private static final int NODES_CNT = 4;
@@ -89,6 +93,7 @@ public class ClusterGroupSelfTest extends ClusterGroupAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRandom() throws Exception {
         assertTrue(ignite.cluster().nodes().contains(ignite.cluster().forRandom().node()));
     }
@@ -96,6 +101,7 @@ public void testRandom() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOldest() throws Exception {
         ClusterGroup oldest = ignite.cluster().forOldest();
@@ -121,6 +127,7 @@ public void testOldest() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testYoungest() throws Exception {
         ClusterGroup youngest = ignite.cluster().forYoungest();
@@ -146,6 +153,7 @@ public void testYoungest() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testForDaemons() throws Exception {
         assertEquals(4, ignite.cluster().nodes().size());
@@ -171,6 +179,7 @@ public void testForDaemons() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNewNodes() throws Exception {
         ClusterGroup youngest = ignite.cluster().forYoungest();
         ClusterGroup oldest = ignite.cluster().forOldest();
@@ -194,6 +203,7 @@ public void testNewNodes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testForPredicate() throws Exception {
         IgnitePredicate evenP = new IgnitePredicate() {
             @Override public boolean apply(ClusterNode node) {
@@ -237,6 +247,7 @@ public void testForPredicate() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAgeClusterGroupSerialization() throws Exception {
         Marshaller marshaller = ignite.configuration().getMarshaller();
@@ -260,6 +271,7 @@ public void testAgeClusterGroupSerialization() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClientServer() throws Exception {
         ClusterGroup srv = ignite.cluster().forServers();
@@ -277,6 +289,7 @@ public void testClientServer() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testForCacheNodesOnDynamicCacheCreateDestroy() throws Exception {
         Random rnd = ThreadLocalRandom.current();
@@ -294,6 +307,7 @@ public void testForCacheNodesOnDynamicCacheCreateDestroy() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testForClientNodesOnDynamicCacheCreateDestroy() throws Exception {
         Random rnd = ThreadLocalRandom.current();
@@ -370,6 +384,7 @@ private void addException(AtomicReference exHldr, Exception ex) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testEmptyGroup() throws Exception {
         ClusterGroup emptyGrp = ignite.cluster().forAttribute("nonExistent", "val");
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClusterMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClusterMetricsSelfTest.java
index 7168d3a2cca11..79460511b849f 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClusterMetricsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClusterMetricsSelfTest.java
@@ -34,6 +34,9 @@
 import org.apache.ignite.lang.IgnitePredicate;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.events.EventType.EVT_JOB_FINISHED;
 import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED;
@@ -42,6 +45,7 @@
  * Tests for projection metrics.
 */
 @GridCommonTest(group = "Kernal Self")
+@RunWith(JUnit4.class)
 public class ClusterMetricsSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int NODES_CNT = 4;
@@ -69,6 +73,7 @@ public class ClusterMetricsSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception In case of error.
      */
+    @Test
     public void testEmptyProjection() throws Exception {
         try {
             grid(0).cluster().forPredicate(F.alwaysFalse()).metrics();
@@ -83,6 +88,7 @@ public void testEmptyProjection() throws Exception {
     /**
      *
      */
+    @Test
     public void testTaskExecution() {
         for (int i = 0; i < ITER_CNT; i++) {
             info("Starting new iteration: " + i);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsSelfTest.java
index b77e46374d846..c44cad27a3ce9 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsSelfTest.java
@@ -45,12 +45,12 @@
 import org.apache.ignite.lang.IgnitePredicate;
 import org.apache.ignite.messaging.MessagingListenActor;
 import org.apache.ignite.mxbean.ClusterMetricsMXBean;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED;
 import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_BUILD_VER;
@@ -61,10 +61,8 @@
  * Grid node metrics self test.
 */
 @GridCommonTest(group = "Kernal Self")
+@RunWith(JUnit4.class)
 public class ClusterNodeMetricsSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Test message size. */
     private static final int MSG_SIZE = 1024;
@@ -94,12 +92,6 @@ public class ClusterNodeMetricsSelfTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(spi);
-
         cfg.setCacheConfiguration();
         cfg.setMetricsUpdateFrequency(500);
@@ -120,6 +112,7 @@ public class ClusterNodeMetricsSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAllocatedMemory() throws Exception {
         IgniteEx ignite = grid();
@@ -185,6 +178,7 @@ private void fillCache(final IgniteCache cache) throws Exceptio
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSingleTaskMetrics() throws Exception {
         Ignite ignite = grid();
@@ -243,6 +237,7 @@ public void testSingleTaskMetrics() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInternalTaskMetrics() throws Exception {
         Ignite ignite = grid();
@@ -300,6 +295,7 @@ public void testInternalTaskMetrics() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIoMetrics() throws Exception {
         Ignite ignite0 = grid();
         Ignite ignite1 = startGrid(1);
@@ -353,6 +349,7 @@ public void testIoMetrics() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClusterNodeMetrics() throws Exception {
         final Ignite ignite0 = grid();
         final Ignite ignite1 = startGrid(1);
@@ -381,6 +378,7 @@ public void testClusterNodeMetrics() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testJmxClusterMetrics() throws Exception {
         Ignite node = grid();
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsUpdateTest.java
index 6e6b4a4fd8344..f8dd4b899f8f1 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsUpdateTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/ClusterNodeMetricsUpdateTest.java
@@ -30,19 +30,17 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.util.lang.GridAbsPredicate;
 import org.apache.ignite.lang.IgniteCallable;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class ClusterNodeMetricsUpdateTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private boolean client;
 
@@ -50,8 +48,6 @@ public class ClusterNodeMetricsUpdateTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         cfg.setMetricsUpdateFrequency(500);
 
         cfg.setClientMode(client);
@@ -62,6 +58,7 @@ public class ClusterNodeMetricsUpdateTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
*/ + @Test public void testMetrics() throws Exception { int NODES = 6; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/ComputeJobCancelWithServiceSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/ComputeJobCancelWithServiceSelfTest.java index 6978ba2973ecc..7ff611b0002a2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/ComputeJobCancelWithServiceSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/ComputeJobCancelWithServiceSelfTest.java @@ -28,31 +28,19 @@ import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.compute.ComputeTaskFuture; import org.apache.ignite.compute.ComputeTaskSplitAdapter; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test cancellation of a job that depends on service. 
*/ +@RunWith(JUnit4.class) public class ComputeJobCancelWithServiceSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -61,6 +49,7 @@ public class ComputeJobCancelWithServiceSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testJobCancel() throws Exception { Ignite server = startGrid("server"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/DiscoverySpiTestListener.java b/modules/core/src/test/java/org/apache/ignite/internal/DiscoverySpiTestListener.java index 46d9edc6854b7..051327ac69048 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/DiscoverySpiTestListener.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/DiscoverySpiTestListener.java @@ -42,6 +42,9 @@ public class DiscoverySpiTestListener implements IgniteDiscoverySpiInternalListe /** */ private volatile CountDownLatch joinLatch; + /** */ + private volatile CountDownLatch reconLatch; + /** */ private Set> blockCustomEvtCls; @@ -64,6 +67,13 @@ public void startBlockJoin() { joinLatch = new CountDownLatch(1); } + /** + * + */ + public void startBlockReconnect() { + reconLatch = new CountDownLatch(1); + } + /** * */ @@ -71,6 +81,13 @@ public void stopBlockJoin() { joinLatch.countDown(); } + /** + * + */ + public void stopBlockRestart() { + reconLatch.countDown(); + } + /** {@inheritDoc} */ @Override public void beforeJoin(ClusterNode locNode, IgniteLogger log) { try { @@ -87,6 +104,22 @@ public void stopBlockJoin() { } } + /** {@inheritDoc} 
*/ + @Override public void beforeReconnect(ClusterNode locNode, IgniteLogger log) { + try { + CountDownLatch writeLatch0 = reconLatch; + + if (writeLatch0 != null) { + log.info("Block reconnect"); + + U.await(writeLatch0); + } + } + catch (Exception e) { + throw new IgniteException(e); + } + } + /** {@inheritDoc} */ @Override public boolean beforeSendCustomEvent(DiscoverySpi spi, IgniteLogger log, DiscoverySpiCustomMessage msg) { this.spi = spi; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityMappedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityMappedTest.java index 79c903114fcac..a37646bd1c079 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityMappedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityMappedTest.java @@ -28,20 +28,18 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheModuloAffinityFunction; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests affinity mapping when {@link AffinityKeyMapper} is used. */ +@RunWith(JUnit4.class) public class GridAffinityMappedTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. 
*/ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * */ @@ -53,12 +51,6 @@ public GridAffinityMappedTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - if (igniteInstanceName.endsWith("1")) cfg.setCacheConfiguration(); // Empty cache configuration. else { @@ -88,6 +80,7 @@ public GridAffinityMappedTest() { /** * @throws IgniteCheckedException If failed. */ + @Test public void testMappedAffinity() throws IgniteCheckedException { Ignite g1 = grid(1); Ignite g2 = grid(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityNoCacheSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityNoCacheSelfTest.java index bdd1ce6b9a13e..24d2659673062 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityNoCacheSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityNoCacheSelfTest.java @@ -34,10 +34,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests usage of affinity in case when cache doesn't exist. */ +@RunWith(JUnit4.class) public class GridAffinityNoCacheSelfTest extends GridCommonAbstractTest { /** */ public static final String EXPECTED_MSG = "Failed to find cache"; @@ -52,6 +56,7 @@ public class GridAffinityNoCacheSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testAffinityProxyNoCache() throws Exception { checkAffinityProxyNoCache(new Object()); } @@ -59,6 +64,7 @@ public void testAffinityProxyNoCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityProxyNoCacheCacheObject() throws Exception { checkAffinityProxyNoCache(new TestCacheObject(new Object())); } @@ -81,6 +87,7 @@ private void checkAffinityProxyNoCache(Object key) { /** * @throws Exception If failed. */ + @Test public void testAffinityImplCacheDeleted() throws Exception { checkAffinityImplCacheDeleted(new Object()); } @@ -88,6 +95,7 @@ public void testAffinityImplCacheDeleted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityImplCacheDeletedCacheObject() throws Exception { checkAffinityImplCacheDeleted(new TestCacheObject(new Object())); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityP2PSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityP2PSelfTest.java index 7061e7520f989..db8362ed1aac1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityP2PSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinityP2PSelfTest.java @@ -31,22 +31,20 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheModuloAffinityFunction; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestExternalClassLoader; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4;
import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests affinity and affinity mapper P2P loading. */ +@RunWith(JUnit4.class) public class GridAffinityP2PSelfTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String EXT_AFFINITY_MAPPER_CLS_NAME = "org.apache.ignite.tests.p2p.GridExternalAffinityMapper"; @@ -83,12 +81,6 @@ public GridAffinityP2PSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - c.setDeploymentMode(depMode); if (igniteInstanceName.endsWith("1")) @@ -119,6 +111,7 @@ public GridAffinityP2PSelfTest() { * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { depMode = DeploymentMode.PRIVATE; @@ -130,6 +123,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testIsolatedMode() throws Exception { depMode = DeploymentMode.ISOLATED; @@ -141,6 +135,7 @@ public void testIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testContinuousMode() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -152,6 +147,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testSharedMode() throws Exception { depMode = DeploymentMode.SHARED; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinitySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinitySelfTest.java index f85b2c0f5a8ea..fb9f69c93340c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridAffinitySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridAffinitySelfTest.java @@ -29,30 +29,22 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests affinity mapping. */ +@RunWith(JUnit4.class) public class GridAffinitySelfTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - if (igniteInstanceName.endsWith("1")) cfg.setClientMode(true); else { @@ -84,6 +76,7 @@ public class GridAffinitySelfTest extends GridCommonAbstractTest { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testAffinity() throws Exception { Ignite g1 = grid(1); Ignite g2 = grid(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridAlwaysFailoverSpiFailSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridAlwaysFailoverSpiFailSelfTest.java index c62c8406a1ead..c4cabfb769ec8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridAlwaysFailoverSpiFailSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridAlwaysFailoverSpiFailSelfTest.java @@ -36,11 +36,15 @@ import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Always failover SPI test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridAlwaysFailoverSpiFailSelfTest extends GridCommonAbstractTest { /** */ private boolean isFailoverCalled; @@ -62,7 +66,7 @@ public GridAlwaysFailoverSpiFailSelfTest() { /** * @throws Exception If failed. */ - @SuppressWarnings({"UnusedCatchParameter", "ThrowableInstanceNeverThrown"}) + @Test public void testFailoverTask() throws Exception { isFailoverCalled = false; @@ -86,7 +90,7 @@ public void testFailoverTask() throws Exception { /** * @throws Exception If failed. 
*/ - @SuppressWarnings({"UnusedCatchParameter", "ThrowableInstanceNeverThrown"}) + @Test public void testNoneFailoverTask() throws Exception { isFailoverCalled = false; @@ -164,4 +168,4 @@ private static class GridTestFailoverJob extends ComputeJobAdapter { throw this.argument(0); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridCachePartitionExchangeManagerHistSizeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridCachePartitionExchangeManagerHistSizeTest.java index 69d7368c652c7..bbcb6cb0e31df 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridCachePartitionExchangeManagerHistSizeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridCachePartitionExchangeManagerHistSizeTest.java @@ -19,20 +19,18 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE; /** * Test exchange history size parameter effect. 
*/ +@RunWith(JUnit4.class) public class GridCachePartitionExchangeManagerHistSizeTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private String oldHistVal; @@ -40,8 +38,6 @@ public class GridCachePartitionExchangeManagerHistSizeTest extends GridCommonAbs @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME)); return cfg; @@ -66,6 +62,7 @@ public class GridCachePartitionExchangeManagerHistSizeTest extends GridCommonAbs /** * @throws Exception If failed. */ + @Test public void testSingleExchangeHistSize() throws Exception { System.setProperty(IGNITE_EXCHANGE_HISTORY_SIZE, "1"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridCancelOnGridStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridCancelOnGridStopSelfTest.java index 61ed2b3ecb129..e06032e09a9c0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridCancelOnGridStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridCancelOnGridStopSelfTest.java @@ -32,12 +32,16 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test task cancellation on grid stop. 
*/ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridCancelOnGridStopSelfTest extends GridCommonAbstractTest { /** */ private static CountDownLatch cnt; @@ -53,6 +57,7 @@ public GridCancelOnGridStopSelfTest() { /** * @throws Exception If failed. */ + @Test public void testCancelingJob() throws Exception { cancelCall = false; @@ -108,4 +113,4 @@ private static final class CancelledTask extends ComputeTaskAdapter empty = Collections.emptyList(); @@ -121,4 +126,4 @@ public UUID getSenderId() { return buf.toString(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridComputationBinarylizableClosuresSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridComputationBinarylizableClosuresSelfTest.java index a0e49b01ae1f0..e677d57790653 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridComputationBinarylizableClosuresSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridComputationBinarylizableClosuresSelfTest.java @@ -33,10 +33,14 @@ import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test ensuring that correct closures are serialized. */ +@RunWith(JUnit4.class) public class GridComputationBinarylizableClosuresSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -88,6 +92,7 @@ public class GridComputationBinarylizableClosuresSelfTest extends GridCommonAbst * * @throws Exception If failed. 
*/ + @Test public void testJob() throws Exception { Ignite ignite = startGrid(1); startGrid(2); @@ -110,6 +115,7 @@ public void testJob() throws Exception { * * @throws Exception If failed. */ + @Test public void testMasterLeaveAwareJob() throws Exception { Ignite ignite = startGrid(1); startGrid(2); @@ -134,6 +140,7 @@ public void testMasterLeaveAwareJob() throws Exception { * * @throws Exception If failed. */ + @Test public void testCallable() throws Exception { Ignite ignite = startGrid(1); startGrid(2); @@ -153,6 +160,7 @@ public void testCallable() throws Exception { * * @throws Exception If failed. */ + @Test public void testMasterLeaveAwareCallable() throws Exception { Ignite ignite = startGrid(1); startGrid(2); @@ -174,6 +182,7 @@ public void testMasterLeaveAwareCallable() throws Exception { * * @throws Exception If failed. */ + @Test public void testRunnable() throws Exception { Ignite ignite = startGrid(1); startGrid(2); @@ -193,6 +202,7 @@ public void testRunnable() throws Exception { * * @throws Exception If failed. */ + @Test public void testMasterLeaveAwareRunnable() throws Exception { Ignite ignite = startGrid(1); startGrid(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobAnnotationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobAnnotationSelfTest.java index 55ba0fe98f32b..50173605d6594 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobAnnotationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobAnnotationSelfTest.java @@ -40,11 +40,15 @@ import org.apache.ignite.resources.TaskContinuousMapperResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for various job callback annotations. 
*/ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridContinuousJobAnnotationSelfTest extends GridCommonAbstractTest { /** */ private static final AtomicBoolean fail = new AtomicBoolean(); @@ -70,6 +74,7 @@ public class GridContinuousJobAnnotationSelfTest extends GridCommonAbstractTest /** * @throws Exception If test failed. */ + @Test public void testJobAnnotation() throws Exception { testContinuousJobAnnotation(TestJob.class); } @@ -77,6 +82,7 @@ public void testJobAnnotation() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testJobChildAnnotation() throws Exception { testContinuousJobAnnotation(TestJobChild.class); } @@ -224,4 +230,4 @@ private static class TestJobChild extends TestJob { // No-op. } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobSiblingsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobSiblingsSelfTest.java index 5bbbd855d0df7..4f37ec47a89bd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobSiblingsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousJobSiblingsSelfTest.java @@ -37,11 +37,15 @@ import org.apache.ignite.resources.TaskSessionResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test continuous mapper with siblings. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridContinuousJobSiblingsSelfTest extends GridCommonAbstractTest { /** */ private static final int JOB_COUNT = 10; @@ -49,6 +53,7 @@ public class GridContinuousJobSiblingsSelfTest extends GridCommonAbstractTest { /** * @throws Exception If test failed. 
*/ + @Test public void testContinuousJobSiblings() throws Exception { try { Ignite ignite = startGrid(0); @@ -64,6 +69,7 @@ public void testContinuousJobSiblings() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testContinuousJobSiblingsLocalNode() throws Exception { try { Ignite ignite = startGrid(0); @@ -154,4 +160,4 @@ private static class TestJob extends ComputeJobAdapter { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousTaskSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousTaskSelfTest.java index 6589dced7262f..cb1d4aefcebde 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousTaskSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridContinuousTaskSelfTest.java @@ -55,11 +55,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Continuous task test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridContinuousTaskSelfTest extends GridCommonAbstractTest { /** */ private static final int JOB_COUNT = 10; @@ -70,6 +74,7 @@ public class GridContinuousTaskSelfTest extends GridCommonAbstractTest { /** * @throws Exception If test failed. */ + @Test public void testContinuousJobsChain() throws Exception { try { Ignite ignite = startGrid(0); @@ -89,6 +94,7 @@ public void testContinuousJobsChain() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testContinuousJobsChainMultiThreaded() throws Exception { try { final Ignite ignite = startGrid(0); @@ -121,6 +127,7 @@ public void testContinuousJobsChainMultiThreaded() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testContinuousJobsSessionChain() throws Exception { try { Ignite ignite = startGrid(0); @@ -137,6 +144,7 @@ public void testContinuousJobsSessionChain() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testContinuousSlowMap() throws Exception { try { Ignite ignite = startGrid(0); @@ -154,6 +162,7 @@ public void testContinuousSlowMap() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testClearTimeouts() throws Exception { int holdccTimeout = 4000; @@ -176,6 +185,7 @@ public void testClearTimeouts() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testMultipleHoldccCalls() throws Exception { try { Ignite grid = startGrid(0); @@ -190,6 +200,7 @@ public void testMultipleHoldccCalls() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testClosureWithNestedInternalTask() throws Exception { try { IgniteEx ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridDeploymentMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridDeploymentMultiThreadedSelfTest.java index cabcba16404e1..1b0f6d4de5104 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridDeploymentMultiThreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridDeploymentMultiThreadedSelfTest.java @@ -31,12 +31,16 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; /** * Task deployment tests. 
*/ +@RunWith(JUnit4.class) public class GridDeploymentMultiThreadedSelfTest extends GridCommonAbstractTest { /** */ private static final int THREAD_CNT = 20; @@ -47,6 +51,7 @@ public class GridDeploymentMultiThreadedSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testDeploy() throws Exception { try { final Ignite ignite = startGrid(0); @@ -123,4 +128,4 @@ private static class GridDeploymentTestTask extends ComputeTaskAdapter nodes = ignite.cluster().forRemotes().nodes(); @@ -112,6 +103,7 @@ public void testGetRemoteNodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAllNodes() throws Exception { Collection nodes = ignite.cluster().nodes(); @@ -124,7 +116,7 @@ public void testGetAllNodes() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings({"SuspiciousMethodCalls"}) + @Test public void testGetLocalNode() throws Exception { ClusterNode node = ignite.cluster().localNode(); @@ -139,6 +131,7 @@ public void testGetLocalNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPingNode() throws Exception { ClusterNode node = ignite.cluster().localNode(); @@ -152,6 +145,7 @@ public void testPingNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDiscoveryListener() throws Exception { ClusterNode node = ignite.cluster().localNode(); @@ -221,6 +215,7 @@ else if (EVT_NODE_LEFT == evt.type() || EVT_NODE_FAILED == evt.type()) { * * @throws Exception In case of any exception. */ + @Test public void testCacheNodes() throws Exception { // Validate only original node is available. 
GridDiscoveryManager discoMgr = ((IgniteKernal) ignite).context().discovery(); @@ -353,7 +348,6 @@ private static class GridDiscoveryTestNode extends GridMetadataAwareAdapter impl } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Nullable @Override public T attribute(String name) { return null; } @@ -403,4 +397,4 @@ private static class GridDiscoveryTestNode extends GridMetadataAwareAdapter impl return id().hashCode(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageCheckAllEventsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageCheckAllEventsSelfTest.java index 91869023211fc..53a9daa7a3091 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageCheckAllEventsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageCheckAllEventsSelfTest.java @@ -53,6 +53,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.compute.ComputeJobResultPolicy.FAILOVER; import static org.apache.ignite.compute.ComputeJobResultPolicy.WAIT; @@ -80,6 +83,7 @@ * Test event storage. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridEventStorageCheckAllEventsSelfTest extends GridCommonAbstractTest { /** */ private static Ignite ignite; @@ -132,6 +136,7 @@ private void assertEvent(int evtType, int expType, List evts) { /** * @throws Exception If test failed. */ + @Test public void testCheckpointEvents() throws Exception { long tstamp = startTimestamp(); @@ -158,6 +163,7 @@ public void testCheckpointEvents() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testTaskUndeployEvents() throws Exception { final long tstamp = startTimestamp(); @@ -201,6 +207,7 @@ public void testTaskUndeployEvents() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testSuccessTask() throws Exception { generateEvents(null, new GridAllEventsSuccessTestJob()).get(); @@ -228,6 +235,7 @@ public void testSuccessTask() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testFailoverJobTask() throws Exception { startGrid(0); @@ -269,6 +277,7 @@ public void testFailoverJobTask() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testFailTask() throws Exception { long tstamp = startTimestamp(); @@ -300,6 +309,7 @@ public void testFailTask() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testTimeoutTask() throws Exception { long tstamp = startTimestamp(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageDefaultExceptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageDefaultExceptionTest.java index 2cf727e6f1be9..8346abfc45191 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageDefaultExceptionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageDefaultExceptionTest.java @@ -25,11 +25,15 @@ import org.apache.ignite.spi.eventstorage.NoopEventStorageSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Event storage tests with default no-op spi. 
*/ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridEventStorageDefaultExceptionTest extends GridCommonAbstractTest { /** */ public GridEventStorageDefaultExceptionTest() { @@ -48,6 +52,7 @@ public GridEventStorageDefaultExceptionTest() { /** * @throws Exception In case of error. */ + @Test public void testLocalNodeEventStorage() throws Exception { try { grid().events().localQuery(F.alwaysTrue()); @@ -64,6 +69,7 @@ public void testLocalNodeEventStorage() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoteNodeEventStorage() throws Exception { try { grid().events().remoteQuery(F.alwaysTrue(), 0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageRuntimeConfigurationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageRuntimeConfigurationSelfTest.java index 97626c8e414e6..22f917c36825d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageRuntimeConfigurationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageRuntimeConfigurationSelfTest.java @@ -32,6 +32,9 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_ENTRY_CREATED; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; @@ -44,6 +47,7 @@ /** * Tests for runtime events configuration. */ +@RunWith(JUnit4.class) public class GridEventStorageRuntimeConfigurationSelfTest extends GridCommonAbstractTest { /** */ private int[] inclEvtTypes; @@ -61,6 +65,7 @@ public class GridEventStorageRuntimeConfigurationSelfTest extends GridCommonAbst /** * @throws Exception If failed. 
*/ + @Test public void testEnableWithDefaults() throws Exception { inclEvtTypes = null; @@ -95,6 +100,7 @@ public void testEnableWithDefaults() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEnableWithIncludes() throws Exception { inclEvtTypes = new int[] { EVT_TASK_STARTED, EVT_TASK_FINISHED }; @@ -129,6 +135,7 @@ public void testEnableWithIncludes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDisableWithIncludes() throws Exception { inclEvtTypes = null; @@ -165,6 +172,7 @@ public void testDisableWithIncludes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEnableDisable() throws Exception { inclEvtTypes = null; @@ -188,7 +196,7 @@ public void testEnableDisable() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("UnusedDeclaration") + @Test public void testInvalidTypes() throws Exception { inclEvtTypes = new int[]{EVT_TASK_STARTED}; @@ -224,6 +232,7 @@ public void testInvalidTypes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetters() throws Exception { inclEvtTypes = new int[]{EVT_TASK_STARTED, EVT_TASK_FINISHED, 30000}; @@ -354,4 +363,4 @@ private int[] getEnabledEvents(int limit, Ignite g, int... 
customTypes) { return U.toIntArray(res); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageSelfTest.java index 4f98b0c271171..12185989e5e75 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridEventStorageSelfTest.java @@ -39,6 +39,9 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVTS_ALL_MINUS_METRIC_UPDATE; import static org.apache.ignite.events.EventType.EVTS_JOB_EXECUTION; @@ -56,6 +59,7 @@ * serialized form. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridEventStorageSelfTest extends GridCommonAbstractTest { /** First grid. */ private static Ignite ignite1; @@ -82,6 +86,7 @@ public GridEventStorageSelfTest() { /** * @throws Exception In case of error. */ + @Test public void testAddRemoveGlobalListener() throws Exception { IgnitePredicate lsnr = new IgnitePredicate() { @Override public boolean apply(Event evt) { @@ -99,6 +104,7 @@ public void testAddRemoveGlobalListener() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testAddRemoveDiscoListener() throws Exception { IgnitePredicate lsnr = new IgnitePredicate() { @Override public boolean apply(Event evt) { @@ -117,6 +123,7 @@ public void testAddRemoveDiscoListener() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testLocalNodeEventStorage() throws Exception { TestEventListener lsnr = new TestEventListener(); @@ -164,6 +171,7 @@ public void testLocalNodeEventStorage() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoteNodeEventStorage() throws Exception { IgnitePredicate filter = new TestEventFilter(); @@ -180,6 +188,7 @@ public void testRemoteNodeEventStorage() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoteAndLocalNodeEventStorage() throws Exception { IgnitePredicate filter = new TestEventFilter(); @@ -211,6 +220,7 @@ private void checkGridInternalEvent(Event evt) { /** * @throws Exception In case of error. */ + @Test public void testGridInternalEvents() throws Exception { IgnitePredicate lsnr = new IgnitePredicate() { @Override public boolean apply(Event evt) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridFailFastNodeFailureDetectionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridFailFastNodeFailureDetectionSelfTest.java index 79dc81ae84152..d4bdbda5d14a3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridFailFastNodeFailureDetectionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridFailFastNodeFailureDetectionSelfTest.java @@ -28,9 +28,10 @@ import org.apache.ignite.spi.communication.CommunicationSpi; import org.apache.ignite.spi.discovery.DiscoverySpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; @@ -38,24 
+39,19 @@ /** * Fail fast test. */ +@RunWith(JUnit4.class) public class GridFailFastNodeFailureDetectionSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); + TcpDiscoverySpi disco = (TcpDiscoverySpi)cfg.getDiscoverySpi(); // Set parameters for fast ping failure. disco.setSocketTimeout(100); disco.setNetworkTimeout(100); disco.setReconnectCount(2); - cfg.setDiscoverySpi(disco); cfg.setMetricsUpdateFrequency(10_000); return cfg; @@ -69,6 +65,7 @@ public class GridFailFastNodeFailureDetectionSelfTest extends GridCommonAbstract /** * @throws Exception If failed. */ + @Test public void testFailFast() throws Exception { startGridsMultiThreaded(5); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridFailedInputParametersSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridFailedInputParametersSelfTest.java index 9446db68eabd9..a8437f9d1e1ef 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridFailedInputParametersSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridFailedInputParametersSelfTest.java @@ -23,6 +23,9 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVTS_ALL; @@ -30,6 +33,7 @@ * Test for invalid input parameters. 
*/ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridFailedInputParametersSelfTest extends GridCommonAbstractTest { /** */ private static Ignite ignite; @@ -47,6 +51,7 @@ public GridFailedInputParametersSelfTest() { /** * @throws Exception Thrown in case of any errors. */ + @Test public void testAddEventLocalListener() throws Exception { try { ignite.events().localListen(null, EVTS_ALL); @@ -61,6 +66,7 @@ public void testAddEventLocalListener() throws Exception { /** * @throws Exception Thrown in case of any errors. */ + @Test public void testRemoveEventLocalListener() throws Exception { try { ignite.events().stopLocalListen(null); @@ -75,6 +81,7 @@ public void testRemoveEventLocalListener() throws Exception { /** * @throws Exception Thrown in case of any errors. */ + @Test public void testAddDiscoveryListener() throws Exception { try { ignite.events().localListen(null, EVTS_ALL); @@ -89,6 +96,7 @@ public void testAddDiscoveryListener() throws Exception { /** * @throws Exception Thrown in case of any errors. */ + @Test public void testRemoveDiscoveryListener() throws Exception { try { ignite.events().stopLocalListen(null); @@ -103,6 +111,7 @@ public void testRemoveDiscoveryListener() throws Exception { /** * @throws Exception Thrown in case of any errors. */ + @Test public void testGetNode() throws Exception { try { ignite.cluster().node(null); @@ -117,6 +126,7 @@ public void testGetNode() throws Exception { /** * @throws Exception Thrown in case of any errors. */ + @Test public void testPingNode() throws Exception { try { ignite.cluster().pingNode(null); @@ -131,6 +141,7 @@ public void testPingNode() throws Exception { /** * @throws Exception Thrown in case of any errors. */ + @Test public void testDeployTask() throws Exception { try { ignite.compute().localDeployTask(null, null); @@ -153,4 +164,4 @@ public void testDeployTask() throws Exception { // Check for exceptions. 
ignite.compute().localDeployTask(GridTestTask.class, U.detectClassLoader(GridTestTask.class)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverCustomTopologySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverCustomTopologySelfTest.java index ea0c6eb309cf3..a0485c28d7e1a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverCustomTopologySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverCustomTopologySelfTest.java @@ -40,11 +40,15 @@ import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test failover and custom topology. Topology returns local node if remote node fails. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridFailoverCustomTopologySelfTest extends GridCommonAbstractTest { /** */ private final AtomicInteger failCnt = new AtomicInteger(0); @@ -80,7 +84,8 @@ public GridFailoverCustomTopologySelfTest() { * * @throws Exception If failed. 
*/ - @SuppressWarnings({"WaitNotInLoop", "UnconditionalWait", "unchecked"}) + @SuppressWarnings({"WaitNotInLoop", "UnconditionalWait"}) + @Test public void testFailoverTopology() throws Exception { try { Ignite ignite1 = startGrid(1); @@ -192,4 +197,4 @@ public static class JobTask extends ComputeTaskAdapter { return results.get(0).getData(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverSelfTest.java index 5ed22459d48a0..d10e9c5f6c598 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverSelfTest.java @@ -40,11 +40,15 @@ import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Failover tests. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridFailoverSelfTest extends GridCommonAbstractTest { /** Initial node that job has been mapped to. */ private static final AtomicReference nodeRef = new AtomicReference<>(null); @@ -66,6 +70,7 @@ public GridFailoverSelfTest() { /** * @throws Exception If failed. 
*/ + @Test public void testJobFail() throws Exception { try { Ignite ignite1 = startGrid(1); @@ -160,4 +165,4 @@ private static class JobFailTask implements ComputeTask { return results.get(0).getData(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTaskWithPredicateSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTaskWithPredicateSelfTest.java index 84f31cb9485b0..6074efcc031be 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTaskWithPredicateSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTaskWithPredicateSelfTest.java @@ -43,11 +43,15 @@ import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test failover of a task with Node filter predicate. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridFailoverTaskWithPredicateSelfTest extends GridCommonAbstractTest { /** First node's name. */ private static final String NODE1 = "NODE1"; @@ -98,6 +102,7 @@ public class GridFailoverTaskWithPredicateSelfTest extends GridCommonAbstractTes * * @throws Exception If failed. */ + @Test public void testJobNotFailedOver() throws Exception { failed.set(false); routed.set(false); @@ -129,6 +134,7 @@ public void testJobNotFailedOver() throws Exception { * * @throws Exception If failed. */ + @Test public void testJobFailedOver() throws Exception { failed.set(false); routed.set(false); @@ -166,6 +172,7 @@ public void testJobFailedOver() throws Exception { * * @throws Exception If error happens. 
*/ + @Test public void testJobNotFailedOverWithStaticProjection() throws Exception { failed.set(false); routed.set(false); @@ -255,4 +262,4 @@ private static class JobFailTask implements ComputeTask { } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTopologySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTopologySelfTest.java index 096554902646f..e3fdd1ff23a8c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTopologySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridFailoverTopologySelfTest.java @@ -37,11 +37,15 @@ import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test failover and topology. It doesn't pick the local node if it has been excluded from the topology. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridFailoverTopologySelfTest extends GridCommonAbstractTest { /** */ private final AtomicBoolean failed = new AtomicBoolean(false); @@ -92,7 +96,7 @@ public GridFailoverTopologySelfTest() { * * @throws Exception If failed.
*/ - @SuppressWarnings("unchecked") + @Test public void testFailoverTopology() throws Exception { try { Ignite ignite1 = startGrid(1); @@ -165,4 +169,4 @@ private static class JobFailTask implements ComputeTask { return results.get(0).getData(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridGetOrStartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridGetOrStartSelfTest.java index 74d50cfcbf8d4..402240e9991bc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridGetOrStartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridGetOrStartSelfTest.java @@ -20,12 +20,16 @@ import org.apache.ignite.*; import org.apache.ignite.configuration.*; import org.apache.ignite.testframework.junits.common.*; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * The GridGetOrStartSelfTest tests get or start semantics. See IGNITE-2941 */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridGetOrStartSelfTest extends GridCommonAbstractTest { /** * Default constructor.
@@ -37,6 +41,7 @@ public GridGetOrStartSelfTest() { /** * Tests default Ignite instance */ + @Test public void testDefaultIgniteInstanceGetOrStart() throws Exception { IgniteConfiguration cfg = getConfiguration(null); try(Ignite ignite = Ignition.getOrStart(cfg)) { @@ -54,6 +59,7 @@ public void testDefaultIgniteInstanceGetOrStart() throws Exception { /** * Tests named Ignite instance */ + @Test public void testNamedIgniteInstanceGetOrStart() throws Exception { IgniteConfiguration cfg = getConfiguration("test"); try(Ignite ignite = Ignition.getOrStart(cfg)) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridHomePathSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridHomePathSelfTest.java index 281c360d48b27..b38e5fd3c2fda 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridHomePathSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridHomePathSelfTest.java @@ -22,12 +22,16 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_HOME; /** * */ +@RunWith(JUnit4.class) public class GridHomePathSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -41,6 +45,7 @@ public class GridHomePathSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testHomeOverride() throws Exception { try { startGrid(0); @@ -73,4 +78,4 @@ public void testHomeOverride() throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridJobCheckpointCleanupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridJobCheckpointCleanupSelfTest.java index eca7ebee3067c..b016a2f86fd87 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridJobCheckpointCleanupSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridJobCheckpointCleanupSelfTest.java @@ -38,10 +38,14 @@ import org.apache.ignite.spi.checkpoint.CheckpointSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for checkpoint cleanup. */ +@RunWith(JUnit4.class) public class GridJobCheckpointCleanupSelfTest extends GridCommonAbstractTest { /** Number of currently alive checkpoints. */ private final AtomicInteger cntr = new AtomicInteger(); @@ -64,6 +68,7 @@ public class GridJobCheckpointCleanupSelfTest extends GridCommonAbstractTest { * * @throws Exception if failed. */ + @Test public void testCheckpointCleanup() throws Exception { try { checkpointSpi = new TestCheckpointSpi("task-checkpoints", cntr); @@ -169,4 +174,4 @@ private static class CheckpointCountingTestTask extends ComputeTaskAdapter>() { @Override public IgniteFuture applyx(ClusterGroup grid) { @@ -282,6 +280,7 @@ public void testApply1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testApply2() throws Exception { testMasterLeaveAwareCallback(2, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup grid) { @@ -293,6 +292,7 @@ public void testApply2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testApply3() throws Exception { testMasterLeaveAwareCallback(2, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup grid) { @@ -314,6 +314,7 @@ public void testApply3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRun1() throws Exception { testMasterLeaveAwareCallback(1, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -325,6 +326,7 @@ public void testRun1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRun2() throws Exception { testMasterLeaveAwareCallback(2, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -336,6 +338,7 @@ public void testRun2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCall1() throws Exception { testMasterLeaveAwareCallback(1, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -347,6 +350,7 @@ public void testCall1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCall2() throws Exception { testMasterLeaveAwareCallback(2, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -358,6 +362,7 @@ public void testCall2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCall3() throws Exception { testMasterLeaveAwareCallback(2, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -379,6 +384,7 @@ public void testCall3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBroadcast1() throws Exception { testMasterLeaveAwareCallback(1, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -390,6 +396,7 @@ public void testBroadcast1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBroadcast2() throws Exception { testMasterLeaveAwareCallback(1, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -401,6 +408,7 @@ public void testBroadcast2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBroadcast3() throws Exception { testMasterLeaveAwareCallback(1, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -412,6 +420,7 @@ public void testBroadcast3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityRun() throws Exception { testMasterLeaveAwareCallback(1, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -427,6 +436,7 @@ public void testAffinityRun() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityCall() throws Exception { testMasterLeaveAwareCallback(1, new CX1>() { @Override public IgniteFuture applyx(ClusterGroup prj) { @@ -767,4 +777,4 @@ private void awaitResponse() throws IgniteInterruptedCheckedException { U.await(respLatch); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridJobServicesAddNodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridJobServicesAddNodeTest.java index aab33db8334c7..cbd338e2311e1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridJobServicesAddNodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridJobServicesAddNodeTest.java @@ -32,17 +32,18 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests multiple parallel jobs execution, accessing services(), while starting new nodes. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridJobServicesAddNodeTest extends GridCommonAbstractTest { /** */ private static final int LOG_MOD = 100; @@ -50,9 +51,6 @@ public class GridJobServicesAddNodeTest extends GridCommonAbstractTest { /** */ private static final int MAX_ADD_NODES = 64; - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrid(1); @@ -65,12 +63,6 @@ public class GridJobServicesAddNodeTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - TcpCommunicationSpi commSpi = new TcpCommunicationSpi(); commSpi.setSharedMemoryPort(-1); @@ -83,6 +75,7 @@ public class GridJobServicesAddNodeTest extends GridCommonAbstractTest { /** * @throws Exception If test failed. 
*/ + @Test public void testServiceDescriptorsJob() throws Exception { final int tasks = 5000; final int threads = 10; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingSelfTest.java index 6824d51bcb17e..f2879eff76346 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingSelfTest.java @@ -50,12 +50,16 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Job stealing test. */ @SuppressWarnings("unchecked") @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridJobStealingSelfTest extends GridCommonAbstractTest { /** Task execution timeout in milliseconds. */ private static final int TASK_EXEC_TIMEOUT_MS = 50000; @@ -96,6 +100,7 @@ public GridJobStealingSelfTest() { * * @throws IgniteCheckedException If test failed. */ + @Test public void testTwoJobs() throws IgniteCheckedException { executeAsync(ignite1.compute(), new JobStealingSingleNodeTask(2), null).get(TASK_EXEC_TIMEOUT_MS); @@ -110,7 +115,7 @@ public void testTwoJobs() throws IgniteCheckedException { * * @throws IgniteCheckedException If test failed. */ - @SuppressWarnings("NullArgumentToVariableArgMethod") + @Test public void testTwoJobsNullPredicate() throws IgniteCheckedException { executeAsync(ignite1.compute(), new JobStealingSingleNodeTask(2), null).get(TASK_EXEC_TIMEOUT_MS); @@ -125,7 +130,7 @@ public void testTwoJobsNullPredicate() throws IgniteCheckedException { * * @throws IgniteCheckedException If test failed. 
*/ - @SuppressWarnings("NullArgumentToVariableArgMethod") + @Test public void testTwoJobsTaskNameNullPredicate() throws IgniteCheckedException { executeAsync(ignite1.compute(), JobStealingSingleNodeTask.class.getName(), null).get(TASK_EXEC_TIMEOUT_MS); @@ -140,7 +145,7 @@ public void testTwoJobsTaskNameNullPredicate() throws IgniteCheckedException { * * @throws IgniteCheckedException If test failed. */ - @SuppressWarnings("unchecked") + @Test public void testTwoJobsPartiallyNullPredicate() throws IgniteCheckedException { IgnitePredicate topPred = new IgnitePredicate() { @Override public boolean apply(ClusterNode e) { @@ -161,6 +166,7 @@ public void testTwoJobsPartiallyNullPredicate() throws IgniteCheckedException { * * @throws Exception If failed. */ + @Test public void testProjectionPredicate() throws Exception { final Ignite ignite3 = startGrid(3); @@ -184,6 +190,7 @@ public void testProjectionPredicate() throws Exception { * * @throws Exception If failed. */ + @Test public void testProjectionPredicateInternalStealing() throws Exception { final Ignite ignite3 = startGrid(3); @@ -212,6 +219,7 @@ public void testProjectionPredicateInternalStealing() throws Exception { * * @throws Exception If failed. */ + @Test public void testSingleNodeTopology() throws Exception { IgnitePredicate p = new IgnitePredicate() { @Override public boolean apply(ClusterNode e) { @@ -232,6 +240,7 @@ public void testSingleNodeTopology() throws Exception { * * @throws Exception If failed. */ + @Test public void testSingleNodeProjection() throws Exception { ClusterGroup prj = ignite1.cluster().forNodeIds(Collections.singleton(ignite1.cluster().localNode().id())); @@ -247,7 +256,7 @@ public void testSingleNodeProjection() throws Exception { * * @throws Exception If failed. 
*/ - @SuppressWarnings("NullArgumentToVariableArgMethod") + @Test public void testSingleNodeProjectionNullPredicate() throws Exception { ClusterGroup prj = ignite1.cluster().forNodeIds(Collections.singleton(ignite1.cluster().localNode().id())); @@ -264,6 +273,7 @@ public void testSingleNodeProjectionNullPredicate() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testProjectionPredicateDifferentClassLoaders() throws Exception { final Ignite ignite3 = startGrid(3); @@ -339,7 +349,6 @@ private static class JobStealingSpreadTask extends ComputeTaskAdapter map(List subgrid, @Nullable Object arg) { //assert subgrid.size() == 2 : "Invalid subgrid size: " + subgrid.size(); @@ -360,7 +369,6 @@ private static class JobStealingSpreadTask extends ComputeTaskAdapter results) { for (ComputeJobResult res : results) { log.info("Job result: " + res.getData()); @@ -389,7 +397,6 @@ private static class JobStealingSingleNodeTask extends JobStealingSpreadTask { } /** {@inheritDoc} */ - @SuppressWarnings("ForLoopReplaceableByForEach") @Override public Map map(List subgrid, @Nullable Object arg) { assert subgrid.size() > 1 : "Invalid subgrid size: " + subgrid.size(); @@ -454,4 +461,4 @@ private static final class GridJobStealingJob extends ComputeJobAdapter { return ignite.cluster().localNode().id(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingZeroActiveJobsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingZeroActiveJobsSelfTest.java index 31015cefe4791..aee0e4893ad86 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingZeroActiveJobsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridJobStealingZeroActiveJobsSelfTest.java @@ -39,11 +39,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import 
org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Job stealing test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridJobStealingZeroActiveJobsSelfTest extends GridCommonAbstractTest { /** */ private static Ignite ignite1; @@ -75,6 +79,7 @@ public GridJobStealingZeroActiveJobsSelfTest() { * * @throws IgniteCheckedException If test failed. */ + @Test public void testTwoJobs() throws IgniteCheckedException { ignite1.compute().execute(JobStealingTask.class, null); } @@ -176,4 +181,4 @@ public static final class GridJobStealingJob extends ComputeJobAdapter { return ignite.name(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridJobSubjectIdSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridJobSubjectIdSelfTest.java index 07720675d379e..354354bc7bf57 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridJobSubjectIdSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridJobSubjectIdSelfTest.java @@ -36,10 +36,16 @@ import org.apache.ignite.resources.TaskSessionResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.internal.processors.task.GridTaskThreadContextKey.TC_SUBJ_ID; /** * Test job subject ID propagation. */ +@RunWith(JUnit4.class) public class GridJobSubjectIdSelfTest extends GridCommonAbstractTest { /** Job subject ID. 
*/ private static volatile UUID taskSubjId; @@ -59,6 +65,7 @@ public class GridJobSubjectIdSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { node1 = startGrid(1); + node2 = startGrid(2); } @@ -67,7 +74,14 @@ public class GridJobSubjectIdSelfTest extends GridCommonAbstractTest { stopAllGrids(); node1 = null; + node2 = null; + + evtSubjId = null; + + taskSubjId = null; + + jobSubjId = null; } /** @@ -75,6 +89,7 @@ public class GridJobSubjectIdSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testJobSubjectId() throws Exception { node2.events().localListen(new IgnitePredicate() { @Override public boolean apply(Event evt) { @@ -91,9 +106,40 @@ public void testJobSubjectId() throws Exception { node1.compute().execute(new Task(node2.cluster().localNode().id()), null); assertEquals(taskSubjId, jobSubjId); + assertEquals(taskSubjId, evtSubjId); } + /** + * Test job subject ID propagation in case it was changed. + * + * @throws Exception If failed. + */ + @Test + public void testModifiedSubjectId() throws Exception { + node1.events().localListen(new IgnitePredicate() { + @Override public boolean apply(Event evt) { + JobEvent evt0 = (JobEvent)evt; + + assert evtSubjId == null; + + evtSubjId = evt0.taskSubjectId(); + + return false; + } + }, EventType.EVT_JOB_STARTED); + + UUID uuid = new UUID(100, 100); + + ((IgniteEx)node1).context().task().setThreadContextIfNotNull(TC_SUBJ_ID, uuid); + + ((IgniteEx)node1).context().task().execute(new Task(node1.cluster().localNode().id()), null).get(); + + assertEquals(uuid, jobSubjId); + + assertEquals(uuid, evtSubjId); + } + /** * Task class. 
*/ @@ -157,4 +203,4 @@ public static class Job extends ComputeJobAdapter { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridKernalConcurrentAccessStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridKernalConcurrentAccessStopSelfTest.java index 2db34e0761c62..d797c1a83f177 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridKernalConcurrentAccessStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridKernalConcurrentAccessStopSelfTest.java @@ -20,6 +20,9 @@ import org.apache.ignite.events.Event; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; import static org.apache.ignite.events.EventType.EVT_NODE_JOINED; @@ -28,6 +31,7 @@ /** * Tests kernal stop while it is being accessed from asynchronous even listener. */ +@RunWith(JUnit4.class) public class GridKernalConcurrentAccessStopSelfTest extends GridCommonAbstractTest { /** Grid count. 
*/ private static final int GRIDS = 2; @@ -41,6 +45,7 @@ public class GridKernalConcurrentAccessStopSelfTest extends GridCommonAbstractT /** * */ + @Test public void testConcurrentAccess() { for (int i = 0; i < GRIDS; i++) { grid(i).events().localListen(new IgnitePredicate() { @@ -56,4 +61,4 @@ public void testConcurrentAccess() { }, EVT_NODE_FAILED, EVT_NODE_LEFT, EVT_NODE_JOINED); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridLifecycleBeanSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridLifecycleBeanSelfTest.java index 7fe0924d1f88d..ed14ecff7b51b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridLifecycleBeanSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridLifecycleBeanSelfTest.java @@ -35,6 +35,9 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.lifecycle.LifecycleEventType.AFTER_NODE_START; import static org.apache.ignite.lifecycle.LifecycleEventType.AFTER_NODE_STOP; @@ -45,6 +48,7 @@ * Lifecycle bean test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridLifecycleBeanSelfTest extends GridCommonAbstractTest { /** */ private LifeCycleBaseBean bean; @@ -61,6 +65,7 @@ public class GridLifecycleBeanSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testGetIgnite() throws Exception { final AtomicBoolean done = new AtomicBoolean(); @@ -96,6 +101,7 @@ public void testGetIgnite() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNoErrors() throws Exception { bean = new LifeCycleBaseBean(); @@ -125,6 +131,7 @@ public void testNoErrors() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGridErrorBeforeStart() throws Exception { checkBeforeStart(true); } @@ -132,6 +139,7 @@ public void testGridErrorBeforeStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOtherErrorBeforeStart() throws Exception { checkBeforeStart(false); } @@ -139,6 +147,7 @@ public void testOtherErrorBeforeStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGridErrorAfterStart() throws Exception { checkAfterStart(true); } @@ -146,6 +155,7 @@ public void testGridErrorAfterStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOtherErrorAfterStart() throws Exception { checkAfterStart(false); } @@ -207,6 +217,7 @@ private void checkAfterStart(boolean gridErr) throws Exception { /** * @throws Exception If failed. */ + @Test public void testGridErrorBeforeStop() throws Exception { checkOnStop(BEFORE_NODE_STOP, true); @@ -219,6 +230,7 @@ public void testGridErrorBeforeStop() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOtherErrorBeforeStop() throws Exception { checkOnStop(BEFORE_NODE_STOP, false); @@ -231,6 +243,7 @@ public void testOtherErrorBeforeStop() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGridErrorAfterStop() throws Exception { checkOnStop(AFTER_NODE_STOP, true); @@ -243,6 +256,7 @@ public void testGridErrorAfterStop() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOtherErrorAfterStop() throws Exception { checkOnStop(AFTER_NODE_STOP, false); @@ -356,4 +370,4 @@ private LifeCycleExceptionBean(LifecycleEventType errType, boolean gridErr) { super.onLifecycleEvent(evt); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridListenActorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridListenActorSelfTest.java index ae2a505441148..c89ad0ab60cc5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridListenActorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridListenActorSelfTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.messaging.MessagingListenActor; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link org.apache.ignite.messaging.MessagingListenActor}. */ +@RunWith(JUnit4.class) public class GridListenActorSelfTest extends GridCommonAbstractTest { /** */ private static final int MSG_QTY = 10; @@ -53,6 +57,7 @@ public class GridListenActorSelfTest extends GridCommonAbstractTest { * * @throws Exception Thrown if failed. */ + @Test public void testBasicFlow() throws Exception { final AtomicInteger cnt = new AtomicInteger(0); @@ -86,6 +91,7 @@ public void testBasicFlow() throws Exception { /** * @throws Exception If failed. */ + @Test public void testImmediateStop() throws Exception { doSendReceive(MSG_QTY, 1); } @@ -93,6 +99,7 @@ public void testImmediateStop() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReceiveAll() throws Exception { doSendReceive(MSG_QTY, MSG_QTY); } @@ -102,6 +109,7 @@ public void testReceiveAll() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testRespondToRemote() throws Exception { startGrid(1); @@ -145,6 +153,7 @@ public void testRespondToRemote() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPingPong() throws Exception { final AtomicInteger pingCnt = new AtomicInteger(); final AtomicInteger pongCnt = new AtomicInteger(); @@ -220,4 +229,4 @@ private void doSendReceive(int snd, final int rcv) throws Exception { assert cnt.intValue() == rcv; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridLocalEventListenerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridLocalEventListenerSelfTest.java index 43371789f2914..cd1be316dd667 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridLocalEventListenerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridLocalEventListenerSelfTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.events.EventType; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test ensuring that event listeners are picked by started node. */ +@RunWith(JUnit4.class) public class GridLocalEventListenerSelfTest extends GridCommonAbstractTest { /** Whether event fired. */ private final CountDownLatch fired = new CountDownLatch(1); @@ -67,9 +71,10 @@ public class GridLocalEventListenerSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testListener() throws Exception { startGrids(2); assert fired.await(5000, TimeUnit.MILLISECONDS); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridMBeansTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridMBeansTest.java index 1c3998257e955..48c5cb04b9129 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridMBeansTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridMBeansTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import javax.management.ObjectName; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for the standard JMX beans registered by the kernal. */ +@RunWith(JUnit4.class) public class GridMBeansTest extends GridCommonAbstractTest { /** Executor name for setExecutorConfiguration */ private static final String CUSTOM_EXECUTOR_0 = "Custom executor 0"; @@ -54,6 +58,7 @@ public GridMBeansTest() { } /** Check that kernal bean is available */ + @Test public void testKernalBeans() throws Exception { checkBean("Kernal", "IgniteKernal", "InstanceName", grid().name()); checkBean("Kernal", "ClusterMetricsMXBeanImpl", "TotalServerNodes", 1); @@ -61,6 +66,7 @@ public void testKernalBeans() throws Exception { } /** Check that kernal bean is available */ + @Test public void testExecutorBeans() throws Exception { // standard executors checkBean("Thread Pools", "GridExecutionExecutor", "Terminated", false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleJobsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleJobsSelfTest.java index b9b0925dc5f44..8654db2e22695 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleJobsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleJobsSelfTest.java @@ -31,12 +31,12 @@ 
import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,6 +44,7 @@ * Tests multiple parallel jobs execution. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridMultipleJobsSelfTest extends GridCommonAbstractTest { /** */ private static final int LOG_MOD = 100; @@ -51,9 +52,6 @@ public class GridMultipleJobsSelfTest extends GridCommonAbstractTest { /** */ private static final int TEST_TIMEOUT = 60 * 1000; - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrid(1); @@ -71,12 +69,6 @@ public class GridMultipleJobsSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - if (getTestIgniteInstanceName(1).equals(igniteInstanceName)) c.setCacheConfiguration(/* no configured caches */); else { @@ -100,6 +92,7 @@ public class GridMultipleJobsSelfTest extends GridCommonAbstractTest { /** * @throws Exception If test failed. 
*/ + @Test public void testNotAffinityJobs() throws Exception { /* =========== Test properties =========== */ int jobsNum = 5000; @@ -111,6 +104,7 @@ public void testNotAffinityJobs() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testAffinityJobs() throws Exception { /* =========== Test properties =========== */ int jobsNum = 5000; @@ -224,4 +218,4 @@ public static class AffinityJob implements IgniteCallable { return true; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleSpisSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleSpisSelfTest.java index 32a7e65de0d09..d894c0e208cb5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleSpisSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleSpisSelfTest.java @@ -42,11 +42,15 @@ import org.apache.ignite.spi.loadbalancing.roundrobin.RoundRobinLoadBalancingSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multiple SPIs test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridMultipleSpisSelfTest extends GridCommonAbstractTest { /** */ private boolean isTaskFailoverCalled; @@ -109,7 +113,7 @@ public GridMultipleSpisSelfTest() { /** * @throws Exception If failed. */ - @SuppressWarnings({"UnusedCatchParameter"}) + @Test public void testFailoverTask() throws Exception { // Start local and remote grids. 
Ignite ignite1 = startGrid(1); @@ -311,4 +315,4 @@ private static class GridTestMultipleSpisJob extends ComputeJobAdapter { return argument(0); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleVersionsDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleVersionsDeploymentSelfTest.java index dc5c16daa2def..33bfb22ba86a9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleVersionsDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridMultipleVersionsDeploymentSelfTest.java @@ -44,6 +44,9 @@ import org.apache.ignite.testframework.GridTestClassLoader; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.events.EventType.EVT_TASK_UNDEPLOYED; @@ -52,6 +55,7 @@ * */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridMultipleVersionsDeploymentSelfTest extends GridCommonAbstractTest { /** Excluded classes. */ private static final String[] EXCLUDE_CLASSES = new String[] { @@ -101,6 +105,7 @@ private boolean checkDeployed(Ignite ignite, String taskName) { * @throws Exception If test failed. */ @SuppressWarnings("unchecked") + @Test public void testMultipleVersionsLocalDeploy() throws Exception { try { Ignite ignite = startGrid(1); @@ -160,6 +165,7 @@ public void testMultipleVersionsLocalDeploy() throws Exception { * @throws Exception If test failed. 
*/ @SuppressWarnings("unchecked") + @Test public void testMultipleVersionsP2PDeploy() throws Exception { try { Ignite g1 = startGrid(1); @@ -317,4 +323,4 @@ public static class GridDeploymentTestJob extends ComputeJobAdapter { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridMultithreadedJobStealingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridMultithreadedJobStealingSelfTest.java index 4a76c686ac7f3..5bd6f809f69b5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridMultithreadedJobStealingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridMultithreadedJobStealingSelfTest.java @@ -47,11 +47,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.eclipse.jetty.util.ConcurrentHashSet; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multithreaded job stealing test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridMultithreadedJobStealingSelfTest extends GridCommonAbstractTest { /** */ private Ignite ignite; @@ -81,6 +85,7 @@ public GridMultithreadedJobStealingSelfTest() { * * @throws Exception If test failed. */ + @Test public void testTwoJobsMultithreaded() throws Exception { final AtomicReference fail = new AtomicReference<>(null); @@ -136,6 +141,7 @@ public void testTwoJobsMultithreaded() throws Exception { * * @throws Exception If test failed. 
*/ + @Test public void testJoinedNodeCanStealJobs() throws Exception { final AtomicReference fail = new AtomicReference<>(null); @@ -342,4 +348,4 @@ public JobStealingResult(int stolen, int nonStolen, Set nodes) { '}'; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeFilterSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeFilterSelfTest.java index 7a07deff3769c..c3a71f1c18d0f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeFilterSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeFilterSelfTest.java @@ -23,11 +23,15 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Node filter test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridNodeFilterSelfTest extends GridCommonAbstractTest { /** Grid instance. */ private Ignite ignite; @@ -51,6 +55,7 @@ public GridNodeFilterSelfTest() { /** * @throws Exception If failed. 
*/ + @Test public void testSynchronousExecute() throws Exception { UUID nodeId = ignite.cluster().localNode().id(); @@ -66,4 +71,4 @@ public void testSynchronousExecute() throws Exception { assert rmtNodes.size() == 1; assert rmtNodes.iterator().next().id().equals(rmtNodeId); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeLocalSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeLocalSelfTest.java index 8d8b59f476041..07b4aa9125f10 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeLocalSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeLocalSelfTest.java @@ -26,11 +26,15 @@ import org.apache.ignite.mxbean.IgniteMXBean; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * This test will test node local storage. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridNodeLocalSelfTest extends GridCommonAbstractTest { /** Create test. */ public GridNodeLocalSelfTest() { @@ -42,6 +46,7 @@ public GridNodeLocalSelfTest() { * * @throws Exception If test failed. */ + @Test public void testNodeLocal() throws Exception { Ignite g = G.ignite(getTestIgniteInstanceName()); @@ -70,6 +75,7 @@ public void testNodeLocal() throws Exception { * * @throws Exception if test failed. 
*/ + @Test public void testClearNodeLocalMap() throws Exception { final String key = "key"; final String value = "value"; @@ -86,4 +92,4 @@ public void testClearNodeLocalMap() throws Exception { igniteMXBean.clearNodeLocalMap(); assert nodeLocalMap.isEmpty() : "Not empty node local map"; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogPdsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogPdsSelfTest.java index a768a11ed0233..41c3588a78d17 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogPdsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogPdsSelfTest.java @@ -32,9 +32,6 @@ * Check logging local node metrics with PDS enabled. */ public class GridNodeMetricsLogPdsSelfTest extends GridNodeMetricsLogSelfTest { - /** */ - private static final String UNKNOWN_SIZE = "unknown"; - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -85,7 +82,7 @@ public class GridNodeMetricsLogPdsSelfTest extends GridNodeMetricsLogSelfTest { Set regions = new HashSet<>(); Pattern ptrn = Pattern.compile("(?m).{2,}( {3}(?.+) region|Ignite persistence) " + - "\\[used=(?[-.\\d]+|" + UNKNOWN_SIZE + ")?.*]"); + "\\[used=(?[-.\\d]+)?.*]"); Matcher matcher = ptrn.matcher(logOutput); @@ -96,9 +93,7 @@ public class GridNodeMetricsLogPdsSelfTest extends GridNodeMetricsLogSelfTest { String usedSize = matcher.group("used"); - // TODO https://issues.apache.org/jira/browse/IGNITE-9455 - // TODO The actual value of the metric should be printed when this issue is solved. - int used = UNKNOWN_SIZE.equals(usedSize) ? 
0 : Integer.parseInt(usedSize); + int used = Integer.parseInt(usedSize); assertTrue(used + " should be non negative: " + subj, used >= 0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogSelfTest.java index 4389338a84fb5..80c1bc6da14eb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeMetricsLogSelfTest.java @@ -29,12 +29,16 @@ import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Check logging local node metrics */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "Kernal") +@RunWith(JUnit4.class) public class GridNodeMetricsLogSelfTest extends GridCommonAbstractTest { /** Executor name for setExecutorConfiguration */ private static final String CUSTOM_EXECUTOR_0 = "Custom executor 0"; @@ -77,6 +81,7 @@ public class GridNodeMetricsLogSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testNodeMetricsLog() throws Exception { IgniteCache cache1 = grid(0).createCache("TestCache1"); IgniteCache cache2 = grid(1).createCache("TestCache2"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeVisorAttributesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeVisorAttributesSelfTest.java index 78385a1784bbc..cdff29a13507b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridNodeVisorAttributesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridNodeVisorAttributesSelfTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.Ignite; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Ensures that system properties required by Visor are always passed to node attributes. */ +@RunWith(JUnit4.class) public class GridNodeVisorAttributesSelfTest extends GridCommonAbstractTest { /** System properties required by Visor. */ private static final String[] SYSTEM_PROPS = new String[] { @@ -82,6 +86,7 @@ private void startGridAndCheck() throws Exception { * * @throws Exception If failed. */ + @Test public void testIncludeNull() throws Exception { inclProps = null; @@ -94,6 +99,7 @@ public void testIncludeNull() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("ZeroLengthArrayAllocation") + @Test public void testIncludeEmpty() throws Exception { inclProps = new String[] {}; @@ -105,9 +111,10 @@ public void testIncludeEmpty() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testIncludeNonEmpty() throws Exception { inclProps = new String[] {"prop1", "prop2"}; startGridAndCheck(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridNonHistoryMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridNonHistoryMetricsSelfTest.java index 2fcbf49d7e73d..b224c18b8e660 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridNonHistoryMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridNonHistoryMetricsSelfTest.java @@ -35,12 +35,16 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED; /** * */ +@RunWith(JUnit4.class) public class GridNonHistoryMetricsSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { @@ -66,6 +70,7 @@ public class GridNonHistoryMetricsSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testSingleTaskMetrics() throws Exception { final Ignite ignite = grid(); @@ -126,4 +131,4 @@ private static class TestTask extends ComputeTaskSplitAdapter { return results; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionForCachesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionForCachesSelfTest.java index 3548234902b38..a5ddfff8df515 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionForCachesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionForCachesSelfTest.java @@ -29,23 +29,20 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.DiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests for {@link ClusterGroup#forCacheNodes(String)} method. 
*/ +@RunWith(JUnit4.class) public class GridProjectionForCachesSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "cache"; @@ -56,8 +53,6 @@ public class GridProjectionForCachesSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(discoverySpi()); - List ccfgs = new ArrayList<>(); if (igniteInstanceName.equals(getTestIgniteInstanceName(0))) @@ -72,17 +67,6 @@ else if (igniteInstanceName.equals(getTestIgniteInstanceName(2)) || return cfg; } - /** - * @return Discovery SPI; - */ - private DiscoverySpi discoverySpi() { - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - return spi; - } - /** * @param cacheName Cache name. * @return Cache configuration. @@ -126,6 +110,7 @@ private CacheConfiguration cacheConfiguration( /** * @throws Exception If failed. */ + @Test public void testProjectionForDefaultCache() throws Exception { final ClusterGroup prj = ignite.cluster().forCacheNodes(DEFAULT_CACHE_NAME); @@ -149,6 +134,7 @@ public void testProjectionForDefaultCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testProjectionForNamedCache() throws Exception { final ClusterGroup prj = ignite.cluster().forCacheNodes(CACHE_NAME); @@ -171,6 +157,7 @@ public void testProjectionForNamedCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testProjectionForDataCaches() throws Exception { ClusterGroup prj = ignite.cluster().forDataNodes(DEFAULT_CACHE_NAME); @@ -182,6 +169,7 @@ public void testProjectionForDataCaches() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testProjectionForClientCaches() throws Exception { ClusterGroup prj = ignite.cluster().forClientNodes(CACHE_NAME); @@ -193,6 +181,7 @@ public void testProjectionForClientCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testProjectionForWrongCacheName() throws Exception { ClusterGroup prj = ignite.cluster().forCacheNodes("wrong"); @@ -203,6 +192,7 @@ public void testProjectionForWrongCacheName() throws Exception { /** * @throws Exception If failed. */ + @Test public void testProjections() throws Exception { ClusterNode locNode = ignite.cluster().localNode(); UUID locId = locNode.id(); @@ -320,4 +310,4 @@ private AttributeFilter(String... attrs) { return false; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionLocalJobMultipleArgumentsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionLocalJobMultipleArgumentsSelfTest.java index d9cc732985d36..f80afc96a5732 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionLocalJobMultipleArgumentsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridProjectionLocalJobMultipleArgumentsSelfTest.java @@ -28,20 +28,18 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.lang.IgniteRunnable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests for methods that run job locally with multiple arguments. 
*/ +@RunWith(JUnit4.class) public class GridProjectionLocalJobMultipleArgumentsSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static Collection ids; @@ -66,12 +64,6 @@ public GridProjectionLocalJobMultipleArgumentsSelfTest() { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -84,6 +76,7 @@ public GridProjectionLocalJobMultipleArgumentsSelfTest() { /** * @throws Exception If failed. */ + @Test public void testAffinityCall() throws Exception { Collection res = new ArrayList<>(); @@ -104,6 +97,7 @@ public void testAffinityCall() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityRun() throws Exception { for (int i : F.asList(1, 2, 3)) { grid().compute().affinityRun(DEFAULT_CACHE_NAME, i, new IgniteRunnable() { @@ -122,6 +116,7 @@ public void testAffinityRun() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCall() throws Exception { Collection res = grid().compute().apply(new C1() { @Override public Integer apply(Integer arg) { @@ -138,6 +133,7 @@ public void testCall() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCallWithProducer() throws Exception { Collection args = Arrays.asList(1, 2, 3); @@ -152,4 +148,4 @@ public void testCallWithProducer() throws Exception { assertEquals(36, F.sumInt(res)); assertEquals(3, ids.size()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridReduceSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridReduceSelfTest.java index 827e2a2040c2f..45802619d62bd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridReduceSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridReduceSelfTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test reduce with long operations. */ +@RunWith(JUnit4.class) public class GridReduceSelfTest extends GridCommonAbstractTest { /** Number of nodes in the grid. */ private static final int GRID_CNT = 3; @@ -37,6 +41,7 @@ public class GridReduceSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testReduce() throws Exception { startGrids(GRID_CNT); @@ -79,6 +84,7 @@ public void testReduce() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReduceAsync() throws Exception { startGrids(GRID_CNT); @@ -187,4 +193,4 @@ private static class ReducerTestClosure implements IgniteCallable { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridReleaseTypeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridReleaseTypeSelfTest.java index b3e69a070aa07..0507e0076e6b4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridReleaseTypeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridReleaseTypeSelfTest.java @@ -24,17 +24,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteProductVersion; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test grids starting with non compatible release types. */ +@RunWith(JUnit4.class) public class GridReleaseTypeSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private String nodeVer; @@ -57,7 +56,7 @@ public class GridReleaseTypeSelfTest extends GridCommonAbstractTest { } }; - discoSpi.setIpFinder(IP_FINDER).setForceServerMode(true); + discoSpi.setIpFinder(sharedStaticIpFinder).setForceServerMode(true); cfg.setDiscoverySpi(discoSpi); @@ -74,6 +73,7 @@ public class GridReleaseTypeSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testOsEditionDoesNotSupportRollingUpdates() throws Exception { nodeVer = "1.0.0"; @@ -101,6 +101,7 @@ public void testOsEditionDoesNotSupportRollingUpdates() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOsEditionDoesNotSupportRollingUpdatesClientMode() throws Exception { nodeVer = "1.0.0"; @@ -125,4 +126,4 @@ public void testOsEditionDoesNotSupportRollingUpdatesClientMode() throws Excepti throw e; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridRuntimeExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridRuntimeExceptionSelfTest.java index a1f946b5b867f..c5ad9ad594903 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridRuntimeExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridRuntimeExceptionSelfTest.java @@ -40,6 +40,9 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_TASK_FAILED; @@ -48,6 +51,7 @@ */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridRuntimeExceptionSelfTest extends GridCommonAbstractTest { /** */ private enum FailType { @@ -77,6 +81,7 @@ public GridRuntimeExceptionSelfTest() { /** * @throws Exception If failed. */ + @Test public void testExecuteFailed() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -107,6 +112,7 @@ public void testExecuteFailed() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMapFailed() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -137,6 +143,7 @@ public void testMapFailed() throws Exception { /** * @throws Exception If failed. */ + @Test public void testResultFailed() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -167,6 +174,7 @@ public void testResultFailed() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReduceFailed() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -305,4 +313,4 @@ private static class GridTaskFailedTestJob extends ComputeJobAdapter { return true; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridSameVmStartupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridSameVmStartupSelfTest.java index a04c38e7a7590..9b7cdfcb930f7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridSameVmStartupSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridSameVmStartupSelfTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVTS_DISCOVERY; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; @@ -40,6 +43,7 @@ * events while stopping one them. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridSameVmStartupSelfTest extends GridCommonAbstractTest { /** * @@ -53,6 +57,7 @@ public GridSameVmStartupSelfTest() { * * @throws Exception If failed. 
*/ + @Test public void testSameVmStartup() throws Exception { Ignite ignite1 = startGrid(1); @@ -118,4 +123,4 @@ public void testSameVmStartup() throws Exception { assert G.allGrids().isEmpty(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridSelfTest.java index 08b5402cf940e..32b0e8a73fc47 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridSelfTest.java @@ -29,11 +29,15 @@ import org.apache.ignite.messaging.MessagingListenActor; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link org.apache.ignite.IgniteCluster}. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridSelfTest extends ClusterGroupAbstractTest { /** Nodes count. */ private static final int NODES_CNT = 4; @@ -65,6 +69,7 @@ public class GridSelfTest extends ClusterGroupAbstractTest { } /** {@inheritDoc} */ + @Test @Override public void testRemoteNodes() throws Exception { int size = remoteNodeIds().size(); @@ -85,6 +90,7 @@ public class GridSelfTest extends ClusterGroupAbstractTest { } /** {@inheritDoc} */ + @Test @Override public void testRemoteProjection() throws Exception { ClusterGroup remotePrj = projection().forRemotes(); @@ -109,7 +115,7 @@ public class GridSelfTest extends ClusterGroupAbstractTest { /** * @throws Exception If failed. */ - @SuppressWarnings({"TooBroadScope"}) + @Test public void testAsyncListen() throws Exception { final String hello = "HELLO!"; @@ -155,6 +161,7 @@ public void testAsyncListen() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testForOthers() throws Exception { ClusterNode node0 = grid(0).localNode(); ClusterNode node1 = grid(1).localNode(); @@ -171,4 +178,4 @@ public void testForOthers() throws Exception { assertEquals(1, grid(0).cluster().forOthers(node1, node2, node3).nodes().size()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridSpiExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridSpiExceptionSelfTest.java index a588ce0dd2e99..beef7a0d38f6d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridSpiExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridSpiExceptionSelfTest.java @@ -36,11 +36,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests exceptions that are thrown by event storage and deployment spi. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridSpiExceptionSelfTest extends GridCommonAbstractTest { /** */ private static final String TEST_MSG = "Test exception message"; @@ -66,6 +70,7 @@ public GridSpiExceptionSelfTest() { /** * @throws Exception If failed. 
*/ + @Test public void testSpiFail() throws Exception { Ignite ignite = startGrid(); @@ -182,4 +187,4 @@ private static class GridTestSpiException extends IgniteSpiException { super(msg, cause); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridStartStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridStartStopSelfTest.java index 32f7a21810979..1f664f7531735 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridStartStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridStartStopSelfTest.java @@ -18,10 +18,8 @@ package org.apache.ignite.internal; import java.lang.reflect.InvocationTargetException; -import java.util.ArrayList; import java.util.Arrays; import java.util.HashMap; -import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutorService; @@ -36,7 +34,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.apache.ignite.transactions.Transaction; -import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.IgniteSystemProperties.IGNITE_OVERRIDE_MCAST_GRP; @@ -47,8 +47,9 @@ /** * Checks basic node start/stop operations. 
*/ -@SuppressWarnings({"CatchGenericClass", "InstanceofCatchParameter"}) +@SuppressWarnings({"InstanceofCatchParameter"}) @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridStartStopSelfTest extends GridCommonAbstractTest { /** */ public static final int COUNT = 1; @@ -60,6 +61,7 @@ public class GridStartStopSelfTest extends GridCommonAbstractTest { /** */ + @Test public void testStartStop() { IgniteConfiguration cfg = new IgniteConfiguration(); @@ -81,6 +83,7 @@ public void testStartStop() { /** * @throws Exception If failed. */ + @Test public void testStopWhileInUse() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); @@ -148,6 +151,7 @@ public void testStopWhileInUse() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStoppedState() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); @@ -221,4 +225,4 @@ public void testStoppedState() throws Exception { assertTrue(errs, errs == null || errs.isEmpty()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridStopWithCancelSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridStopWithCancelSelfTest.java index 5cc9e9b6728cd..e621d88acd2ef 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridStopWithCancelSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridStopWithCancelSelfTest.java @@ -33,11 +33,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests grid stop with jobs canceling. 
*/ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridStopWithCancelSelfTest extends GridCommonAbstractTest { /** */ private static CountDownLatch cnt; @@ -60,6 +64,7 @@ public GridStopWithCancelSelfTest() { /** * @throws Exception If an error occurs. */ + @Test public void testStopGrid() throws Exception { cancelCorrect = false; @@ -125,4 +130,4 @@ public static final class CancelledTask extends ComputeTaskAdapter nodeRef = new AtomicReference<>(null); @@ -79,6 +83,7 @@ public GridStopWithWaitSelfTest() { /** * @throws Exception If failed. */ + @Test public void testWait() throws Exception { jobStarted = new CountDownLatch(1); @@ -113,6 +118,7 @@ public void testWait() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWaitFailover() throws Exception { jobStarted = new CountDownLatch(1); @@ -278,4 +284,4 @@ private static class JobFailTask implements ComputeTask { return res.get(0).getData(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskCancelSingleNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskCancelSingleNodeSelfTest.java index 950f89db09c24..22bfc7b0d4066 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskCancelSingleNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskCancelSingleNodeSelfTest.java @@ -38,6 +38,9 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_JOB_CANCELLED; import static org.apache.ignite.events.EventType.EVT_JOB_FINISHED; @@ -47,6 +50,7 @@ /** * Test for task cancellation issue. 
*/ +@RunWith(JUnit4.class) public class GridTaskCancelSingleNodeSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { @@ -61,6 +65,7 @@ public class GridTaskCancelSingleNodeSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testImmediateCancellation() throws Exception { checkCancellation(0L); } @@ -68,6 +73,7 @@ public void testImmediateCancellation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCancellation() throws Exception { checkCancellation(2000L); } @@ -203,4 +209,4 @@ private static class TestTask extends ComputeTaskSplitAdapter { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskContinuousMapperSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskContinuousMapperSelfTest.java index 8f551b6f002bd..42a3b5db8b6f5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskContinuousMapperSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskContinuousMapperSelfTest.java @@ -38,15 +38,20 @@ import org.apache.ignite.resources.TaskContinuousMapperResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * {@link org.apache.ignite.compute.ComputeTaskContinuousMapper} test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridTaskContinuousMapperSelfTest extends GridCommonAbstractTest { /** * @throws Exception If test failed. */ + @Test public void testContinuousMapperMethods() throws Exception { try { Ignite ignite = startGrid(0); @@ -63,6 +68,7 @@ public void testContinuousMapperMethods() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testContinuousMapperLifeCycle() throws Exception { try { Ignite ignite = startGrid(0); @@ -77,6 +83,7 @@ public void testContinuousMapperLifeCycle() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testContinuousMapperNegative() throws Exception { try { Ignite ignite = startGrid(0); @@ -340,4 +347,4 @@ public TestJob(int idx) { return argument(0); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionContextSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionContextSelfTest.java index c31c80508f58f..2f8ce1da6290b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionContextSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionContextSelfTest.java @@ -35,10 +35,14 @@ import org.apache.ignite.resources.TaskSessionResource; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@code GridProjection.withXXX(..)} methods. */ +@RunWith(JUnit4.class) public class GridTaskExecutionContextSelfTest extends GridCommonAbstractTest { /** */ private static final AtomicInteger CNT = new AtomicInteger(); @@ -56,6 +60,7 @@ public class GridTaskExecutionContextSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testWithName() throws Exception { IgniteCallable f = new IgniteCallable() { @TaskSessionResource @@ -80,6 +85,7 @@ public void testWithName() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testWithNoFailoverClosure() throws Exception { final IgniteRunnable r = new GridAbsClosureX() { @Override public void applyx() { @@ -110,6 +116,7 @@ public void testWithNoFailoverClosure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWithNoFailoverTask() throws Exception { final Ignite g = grid(0); @@ -165,4 +172,4 @@ private TestTask(boolean fail) { return F.first(results).getData(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionSelfTest.java index e197908c15ec2..465a15d854e76 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskExecutionSelfTest.java @@ -33,11 +33,15 @@ import org.apache.ignite.resources.JobContextResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Task execution test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridTaskExecutionSelfTest extends GridCommonAbstractTest { /** Grid instance. */ private Ignite ignite; @@ -78,6 +82,7 @@ protected boolean peerClassLoadingEnabled() { /** * @throws Exception If failed. */ + @Test public void testSynchronousExecute() throws Exception { ComputeTaskFuture fut = ignite.compute().executeAsync(GridTestTask.class, "testArg"); @@ -91,6 +96,7 @@ public void testSynchronousExecute() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testJobIdCollision() throws Exception { fail("Test refactoring is needed: https://issues.apache.org/jira/browse/IGNITE-4706"); @@ -155,6 +161,7 @@ public void testJobIdCollision() throws Exception { * * @throws Exception If failed. */ + @Test public void testExecuteTaskWithInvalidName() throws Exception { try { ComputeTaskFuture fut = ignite.compute().execute("invalid.task.name", null); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverAffinityRunTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverAffinityRunTest.java index 1358936a7a100..0d154620a3925 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverAffinityRunTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverAffinityRunTest.java @@ -29,10 +29,11 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -41,10 +42,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridTaskFailoverAffinityRunTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean clientMode; @@ -52,8 +51,6 @@ public class GridTaskFailoverAffinityRunTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { 
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); boolean client = clientMode && igniteInstanceName.equals(getTestIgniteInstanceName(0)); @@ -86,6 +83,7 @@ public class GridTaskFailoverAffinityRunTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNodeRestart() throws Exception { clientMode = false; @@ -95,6 +93,7 @@ public void testNodeRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeRestartClient() throws Exception { clientMode = true; @@ -169,4 +168,4 @@ private static class TestJob implements IgniteCallable { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverSelfTest.java index 77368956d4391..101c5adb634b2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskFailoverSelfTest.java @@ -32,11 +32,15 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for task failover. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridTaskFailoverSelfTest extends GridCommonAbstractTest { /** Don't change it value. */ public static final int SPLIT_COUNT = 2; @@ -49,7 +53,7 @@ public GridTaskFailoverSelfTest() { /** * @throws Exception If test failed. 
*/ - @SuppressWarnings("unchecked") + @Test public void testFailover() throws Exception { Ignite ignite = startGrid(); @@ -120,4 +124,4 @@ public static class GridFailoverTestTask extends ComputeTaskSplitAdapter { /** */ @LoggerResource @@ -220,4 +223,4 @@ public static class GridStopTestJob extends ComputeJobAdapter { return !Thread.currentThread().isInterrupted() ? 0 : 1; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstanceExecutionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstanceExecutionSelfTest.java index 2f153bbf70c6f..4bda43105d477 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstanceExecutionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstanceExecutionSelfTest.java @@ -30,12 +30,16 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Task instance execution test. */ @SuppressWarnings("PublicInnerClass") @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridTaskInstanceExecutionSelfTest extends GridCommonAbstractTest { /** */ private static Object testState; @@ -48,6 +52,7 @@ public GridTaskInstanceExecutionSelfTest() { /** * @throws Exception If failed. 
*/ + @Test public void testSynchronousExecute() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -120,4 +125,4 @@ public Object getState() { return super.reduce(results); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstantiationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstantiationSelfTest.java index b50bcb6f20a55..1e9b2b75de7fd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstantiationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskInstantiationSelfTest.java @@ -32,12 +32,16 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests instantiation of various task types (defined as private inner class, without default constructor, non-public * default constructor). */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridTaskInstantiationSelfTest extends GridCommonAbstractTest { /** * Constructor. @@ -49,6 +53,7 @@ public GridTaskInstantiationSelfTest() { /** * @throws Exception If an error occurs. */ + @Test public void testTasksInstantiation() throws Exception { grid().compute().execute(PrivateClassTask.class, null); @@ -120,4 +125,4 @@ private NoDefaultConstructorTask(Object param) { // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskJobRejectSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskJobRejectSelfTest.java index a6678dadb2581..e5bc96694d08c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskJobRejectSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskJobRejectSelfTest.java @@ -35,6 +35,9 @@ import org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.events.EventType.EVTS_JOB_EXECUTION; @@ -47,6 +50,7 @@ /** * Test that rejected job is not failed over. */ +@RunWith(JUnit4.class) public class GridTaskJobRejectSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -70,6 +74,7 @@ public class GridTaskJobRejectSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testReject() throws Exception { grid(1).events().localListen(new IgnitePredicate() { @Override public boolean apply(Event evt) { @@ -156,4 +161,4 @@ private static final class SleepJob extends ComputeJobAdapter { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskListenerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskListenerSelfTest.java index 3cf9ef8a8086a..3505c4a2ff138 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/GridTaskListenerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/GridTaskListenerSelfTest.java @@ -34,12 +34,15 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * This test checks that GridTaskListener is only called once per task. */ -@SuppressWarnings("deprecation") @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridTaskListenerSelfTest extends GridCommonAbstractTest { /** */ public GridTaskListenerSelfTest() { @@ -51,7 +54,8 @@ public GridTaskListenerSelfTest() { * * @throws Exception If failed. 
*/ - @SuppressWarnings({"BusyWait", "unchecked"}) + @SuppressWarnings({"BusyWait"}) + @Test public void testGridTaskListener() throws Exception { final AtomicInteger cnt = new AtomicInteger(0); @@ -111,4 +115,4 @@ private static class TestTask extends ComputeTaskSplitAdapter 0) @@ -183,7 +176,9 @@ protected BlockTcpCommunicationSpi commSpi(Ignite ignite) { /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); + super.afterTestsStopped(); + + stopAllGrids(); } /** diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectApiExceptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectApiExceptionTest.java index 43da2d15a7d0e..d9ddc64c8f8fb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectApiExceptionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectApiExceptionTest.java @@ -54,6 +54,9 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.spi.discovery.DiscoverySpi; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_DISCONNECTED; @@ -62,6 +65,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectApiExceptionTest extends IgniteClientReconnectAbstractTest { /** Cache key for test put and invoke operation after reconnect */ @@ -84,6 +88,7 @@ public class IgniteClientReconnectApiExceptionTest extends IgniteClientReconnect /** * @throws Exception If failed. */ + @Test public void testErrorOnDisconnect() throws Exception { // Check cache operations. 
cacheOperationsTest(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectAtomicsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectAtomicsTest.java index d1e3ade29b70a..e786c31dac80e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectAtomicsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectAtomicsTest.java @@ -33,10 +33,14 @@ import org.apache.ignite.internal.processors.cache.distributed.near.GridNearLockResponse; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareResponse; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectAtomicsTest extends IgniteClientReconnectAbstractTest { /** {@inheritDoc} */ @Override protected int serverCount() { @@ -51,6 +55,7 @@ public class IgniteClientReconnectAtomicsTest extends IgniteClientReconnectAbstr /** * @throws Exception If failed. */ + @Test public void testAtomicsReconnectClusterRestart() throws Exception { Ignite client = grid(serverCount()); @@ -106,6 +111,7 @@ public void testAtomicsReconnectClusterRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicSeqReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -139,6 +145,7 @@ public void testAtomicSeqReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicSeqReconnectRemoved() throws Exception { Ignite client = grid(serverCount()); @@ -187,6 +194,7 @@ public void testAtomicSeqReconnectRemoved() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAtomicSeqReconnectInProgress() throws Exception { Ignite client = grid(serverCount()); @@ -248,6 +256,7 @@ public void testAtomicSeqReconnectInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicReferenceReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -289,6 +298,7 @@ public void testAtomicReferenceReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicReferenceReconnectRemoved() throws Exception { Ignite client = grid(serverCount()); @@ -342,6 +352,7 @@ public void testAtomicReferenceReconnectRemoved() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicReferenceReconnectInProgress() throws Exception { Ignite client = grid(serverCount()); @@ -409,6 +420,7 @@ public void testAtomicReferenceReconnectInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicStampedReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -450,6 +462,7 @@ public void testAtomicStampedReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicStampedReconnectRemoved() throws Exception { Ignite client = grid(serverCount()); @@ -501,6 +514,7 @@ public void testAtomicStampedReconnectRemoved() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicStampedReconnectInProgress() throws Exception { Ignite client = grid(serverCount()); @@ -569,6 +583,7 @@ public void testAtomicStampedReconnectInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicLongReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -600,6 +615,7 @@ public void testAtomicLongReconnect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAtomicLongReconnectRemoved() throws Exception { Ignite client = grid(serverCount()); @@ -641,6 +657,7 @@ public void testAtomicLongReconnectRemoved() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicLongReconnectInProgress() throws Exception { Ignite client = grid(serverCount()); @@ -696,6 +713,7 @@ public void testAtomicLongReconnectInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLatchReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -737,6 +755,7 @@ public void testLatchReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -778,6 +797,7 @@ public void testSemaphoreReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReentrantLockReconnect() throws Exception { testReentrantLockReconnect(false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectBinaryContexTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectBinaryContexTest.java index 1486476d90a3b..f7740fa217be1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectBinaryContexTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectBinaryContexTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectBinaryContexTest extends IgniteClientReconnectAbstractTest { /** {@inheritDoc} */ @Override protected int serverCount() { @@ -52,6 +56,7 @@ public class IgniteClientReconnectBinaryContexTest extends 
IgniteClientReconnect /** * @throws Exception If failed. */ + @Test public void testReconnectCleaningUsersMetadata() throws Exception { Ignite client = grid(serverCount()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCacheTest.java index bc498da5f9123..fb3530702ffb0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCacheTest.java @@ -74,6 +74,9 @@ import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionRollbackException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -93,6 +96,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectCacheTest extends IgniteClientReconnectAbstractTest { /** */ private static final int SRV_CNT = 3; @@ -156,6 +160,7 @@ public class IgniteClientReconnectCacheTest extends IgniteClientReconnectAbstrac /** * @throws Exception If failed. */ + @Test public void testReconnect() throws Exception { clientMode = true; @@ -335,6 +340,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { /** * @throws Exception If failed. */ + @Test public void testReconnectTransactions() throws Exception { clientMode = true; @@ -404,6 +410,7 @@ public void testReconnectTransactions() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxStateAfterClientReconnect() throws Exception { clientMode = true; @@ -443,6 +450,7 @@ public void testTxStateAfterClientReconnect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectTransactionInProgress1() throws Exception { clientMode = true; @@ -604,6 +612,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { /** * @throws Exception If failed. */ + @Test public void testReconnectTransactionInProgress2() throws Exception { clientMode = true; @@ -663,6 +672,7 @@ private void txInProgressFails(final IgniteEx client, /** * @throws Exception If failed. */ + @Test public void testReconnectExchangeInProgress() throws Exception { clientMode = true; @@ -725,6 +735,7 @@ public void testReconnectExchangeInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectInitialExchangeInProgress() throws Exception { final UUID clientId = UUID.randomUUID(); @@ -813,6 +824,7 @@ public void testReconnectInitialExchangeInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectOperationInProgress() throws Exception { clientMode = true; @@ -885,6 +897,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) /** * @throws Exception If failed. */ + @Test public void testReconnectCacheDestroyed() throws Exception { clientMode = true; @@ -922,6 +935,7 @@ public void testReconnectCacheDestroyed() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectCacheDestroyedAndCreated() throws Exception { clientMode = true; @@ -965,6 +979,7 @@ public void testReconnectCacheDestroyedAndCreated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectMarshallerCache() throws Exception { clientMode = true; @@ -1007,6 +1022,7 @@ public void testReconnectMarshallerCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectClusterRestart() throws Exception { clientMode = true; @@ -1071,6 +1087,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectClusterRestartMultinode() throws Exception { clientMode = true; @@ -1134,6 +1151,7 @@ public void testReconnectClusterRestartMultinode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectMultinode() throws Exception { reconnectMultinode(false); } @@ -1141,6 +1159,7 @@ public void testReconnectMultinode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectMultinodeLongHistory() throws Exception { reconnectMultinode(true); } @@ -1256,6 +1275,7 @@ private void reconnectMultinode(boolean longHist) throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectDestroyCache() throws Exception { clientMode = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCollectionsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCollectionsTest.java index 5be59b0537ea2..5ca64bcf0f90c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCollectionsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectCollectionsTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareResponse; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -37,6 +40,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectCollectionsTest extends IgniteClientReconnectAbstractTest { /** */ private static final CollectionConfiguration TX_CFGS = new CollectionConfiguration(); @@ -65,6 +69,7 @@ public class 
IgniteClientReconnectCollectionsTest extends IgniteClientReconnectA /** * @throws Exception If failed. */ + @Test public void testCollectionsReconnectClusterRestart() throws Exception { Ignite client = grid(serverCount()); @@ -113,6 +118,7 @@ public void testCollectionsReconnectClusterRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueReconnect() throws Exception { queueReconnect(TX_CFGS); @@ -122,6 +128,7 @@ public void testQueueReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueReconnectRemoved() throws Exception { queueReconnectRemoved(TX_CFGS); @@ -131,6 +138,7 @@ public void testQueueReconnectRemoved() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueReconnectInProgress() throws Exception { queueReconnectInProgress(TX_CFGS); @@ -140,6 +148,7 @@ public void testQueueReconnectInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetReconnect() throws Exception { setReconnect(TX_CFGS); @@ -149,6 +158,7 @@ public void testSetReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetReconnectRemoved() throws Exception { setReconnectRemove(TX_CFGS); @@ -158,6 +168,7 @@ public void testSetReconnectRemoved() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetReconnectInProgress() throws Exception { setReconnectInProgress(TX_CFGS); @@ -167,6 +178,7 @@ public void testSetReconnectInProgress() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testServerReconnect() throws Exception { serverNodeReconnect(TX_CFGS); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectComputeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectComputeTest.java index 57d31882db6d7..2dcb580cdc687 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectComputeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectComputeTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectComputeTest extends IgniteClientReconnectAbstractTest { /** {@inheritDoc} */ @Override protected int serverCount() { @@ -44,6 +48,7 @@ public class IgniteClientReconnectComputeTest extends IgniteClientReconnectAbstr /** * @throws Exception If failed. */ + @Test public void testReconnectAffinityCallInProgress() throws Exception { final Ignite client = grid(serverCount()); @@ -98,6 +103,7 @@ public void testReconnectAffinityCallInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectBroadcastInProgress() throws Exception { final Ignite client = grid(serverCount()); @@ -147,6 +153,7 @@ public void testReconnectBroadcastInProgress() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectApplyInProgress() throws Exception { final Ignite client = grid(serverCount()); @@ -192,4 +199,4 @@ public void testReconnectApplyInProgress() throws Exception { assertTrue((Boolean)fut.get(2, TimeUnit.SECONDS)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectContinuousProcessorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectContinuousProcessorTest.java index d68fc1cd3cceb..3e80a3f50c43c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectContinuousProcessorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectContinuousProcessorTest.java @@ -33,6 +33,9 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.resources.IgniteInstanceResource; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_RECONNECTED; @@ -40,6 +43,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectContinuousProcessorTest extends IgniteClientReconnectAbstractTest { /** */ private static volatile CountDownLatch latch; @@ -57,6 +61,7 @@ public class IgniteClientReconnectContinuousProcessorTest extends IgniteClientRe /** * @throws Exception If failed. */ + @Test public void testEventListenerReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -114,6 +119,7 @@ public void testEventListenerReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMessageListenerReconnectAndStopFromServer() throws Exception { testMessageListenerReconnect(false); } @@ -121,6 +127,7 @@ public void testMessageListenerReconnectAndStopFromServer() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMessageListenerReconnectAndStopFromClient() throws Exception { testMessageListenerReconnect(true); } @@ -212,6 +219,7 @@ private void testMessageListenerReconnect(boolean stopFromClient) throws Excepti /** * @throws Exception If failed. */ + @Test public void testCacheContinuousQueryReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -249,6 +257,7 @@ public void testCacheContinuousQueryReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheContinuousQueryReconnectNewServer() throws Exception { Ignite client = grid(serverCount()); @@ -457,4 +466,4 @@ static class DummyJob implements IgniteRunnable { ignite.log().info("Job run."); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDelayedSpiTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDelayedSpiTest.java index a4a091282b6ca..8c70cec1e96ee 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDelayedSpiTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDelayedSpiTest.java @@ -30,11 +30,15 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test cases for emulation of delayed messages sending with {@link TestRecordingCommunicationSpi} for blocking and * resending messages at the moment we need it. 
*/ +@RunWith(JUnit4.class) public class IgniteClientReconnectDelayedSpiTest extends IgniteClientReconnectAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -59,6 +63,7 @@ public class IgniteClientReconnectDelayedSpiTest extends IgniteClientReconnectAb * * @throws Exception If failed. */ + @Test public void testReconnectCacheDestroyedDelayedAffinityChange() throws Exception { Ignite ignite = ignite(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDiscoveryStateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDiscoveryStateTest.java index 6e77742b758f6..d002667632255 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDiscoveryStateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectDiscoveryStateTest.java @@ -28,6 +28,9 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.discovery.DiscoverySpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_DISCONNECTED; import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_RECONNECTED; @@ -35,6 +38,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectDiscoveryStateTest extends IgniteClientReconnectAbstractTest { /** {@inheritDoc} */ @Override protected int serverCount() { @@ -49,6 +53,7 @@ public class IgniteClientReconnectDiscoveryStateTest extends IgniteClientReconne /** * @throws Exception If failed. 
*/ + @Test public void testReconnect() throws Exception { final Ignite client = ignite(serverCount()); @@ -132,4 +137,4 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { assertEquals(10, cluster.nodeLocalMap().get("locMapKey")); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectFailoverTest.java index 57c2e933375c2..dd652ddf29e17 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectFailoverTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -43,6 +46,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectFailoverTest extends IgniteClientReconnectFailoverAbstractTest { /** */ protected static final String ATOMIC_CACHE = "ATOMIC_CACHE"; @@ -75,6 +79,7 @@ public class IgniteClientReconnectFailoverTest extends IgniteClientReconnectFail /** * @throws Exception If failed. */ + @Test public void testReconnectAtomicCache() throws Exception { final Ignite client = grid(serverCount()); @@ -114,6 +119,7 @@ public void testReconnectAtomicCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectTxCache() throws Exception { final Ignite client = grid(serverCount()); @@ -181,6 +187,7 @@ public void testReconnectTxCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectComputeApi() throws Exception { final Ignite client = grid(serverCount()); @@ -200,6 +207,7 @@ public void testReconnectComputeApi() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectStreamerApi() throws Exception { final Ignite client = grid(serverCount()); @@ -236,4 +244,4 @@ public static class DummyClosure implements IgniteCallable { return 1; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectServicesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectServicesTest.java index 1e6dd64f970b2..98803e0f18888 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectServicesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectServicesTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectServicesTest extends IgniteClientReconnectAbstractTest { /** {@inheritDoc} */ @Override protected int serverCount() { @@ -48,6 +52,7 @@ public class IgniteClientReconnectServicesTest extends IgniteClientReconnectAbst /** * @throws Exception If failed. */ + @Test public void testReconnect() throws Exception { Ignite client = grid(serverCount()); @@ -83,6 +88,7 @@ public void testReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServiceRemove() throws Exception { Ignite client = grid(serverCount()); @@ -125,6 +131,7 @@ public void testServiceRemove() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectInDeploying() throws Exception { Ignite client = grid(serverCount()); @@ -172,6 +179,7 @@ public void testReconnectInDeploying() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectInProgress() throws Exception { Ignite client = grid(serverCount()); @@ -262,4 +270,4 @@ public static class TestServiceImpl implements Service, TestService { return ignite.cluster().topologyVersion(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStopTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStopTest.java index b5c3ee86d0d4a..f83b5d80c94ff 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStopTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStopTest.java @@ -27,6 +27,9 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.discovery.DiscoverySpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_DISCONNECTED; import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_RECONNECTED; @@ -34,6 +37,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectStopTest extends IgniteClientReconnectAbstractTest { /** {@inheritDoc} */ @Override protected int serverCount() { @@ -43,6 +47,7 @@ public class IgniteClientReconnectStopTest extends IgniteClientReconnectAbstract /** * @throws Exception If failed. 
*/ + @Test public void testStopWhenDisconnected() throws Exception { clientMode = true; @@ -112,4 +117,4 @@ public void testStopWhenDisconnected() throws Exception { log.info("Expected reconnect exception: " + e); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStreamerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStreamerTest.java index 36b989093440e..88a38b64b790f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStreamerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientReconnectStreamerTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.internal.processors.datastreamer.DataStreamerResponse; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -36,6 +39,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectStreamerTest extends IgniteClientReconnectAbstractTest { /** */ public static final String CACHE_NAME = "streamer"; @@ -66,6 +70,7 @@ public class IgniteClientReconnectStreamerTest extends IgniteClientReconnectAbst /** * @throws Exception If failed. */ + @Test public void testStreamerReconnect() throws Exception { final Ignite client = grid(serverCount()); @@ -130,6 +135,7 @@ public void testStreamerReconnect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStreamerReconnectInProgress() throws Exception { Ignite client = grid(serverCount()); @@ -234,4 +240,4 @@ private void checkStreamerClosed(IgniteDataStreamer streamer) streamer.close(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientRejoinTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientRejoinTest.java index 8744465402ce4..53fd4eb44b1fa 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientRejoinTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteClientRejoinTest.java @@ -49,10 +49,14 @@ import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests client to be able restore connection to cluster if coordination is not available. */ +@RunWith(JUnit4.class) public class IgniteClientRejoinTest extends GridCommonAbstractTest { /** Block. */ private volatile boolean block; @@ -115,6 +119,7 @@ public class IgniteClientRejoinTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testClientsReconnectAfterStart() throws Exception { Ignite srv1 = startGrid("server1"); @@ -192,6 +197,7 @@ public void testClientsReconnectAfterStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientsReconnect() throws Exception { Ignite srv1 = startGrid("server1"); @@ -268,6 +274,7 @@ public void testClientsReconnect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClientsReconnectDisabled() throws Exception { clientReconnectDisabled = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeEmptyClusterGroupTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeEmptyClusterGroupTest.java index f346b8edbfce9..68c1b1a4c7366 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeEmptyClusterGroupTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeEmptyClusterGroupTest.java @@ -28,31 +28,23 @@ import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteRunnable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class IgniteComputeEmptyClusterGroupTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - CacheConfiguration ccfg = defaultCacheConfiguration(); ccfg.setCacheMode(PARTITIONED); @@ -70,6 +62,7 @@ public class IgniteComputeEmptyClusterGroupTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testAsync() throws Exception { ClusterGroup empty = ignite(0).cluster().forNodeId(UUID.randomUUID()); @@ -89,6 +82,7 @@ public void testAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSync() throws Exception { ClusterGroup empty = ignite(0).cluster().forNodeId(UUID.randomUUID()); @@ -177,4 +171,4 @@ private static class FailCallable implements IgniteCallable { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeJobOneThreadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeJobOneThreadTest.java index 76f669e5eadf7..8d1b44437539b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeJobOneThreadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeJobOneThreadTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test of absence of gaps between jobs in compute */ +@RunWith(JUnit4.class) public class IgniteComputeJobOneThreadTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi(); @@ -55,6 +59,7 @@ public class IgniteComputeJobOneThreadTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testNoTimeout() throws Exception { Ignite ignite = ignite(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeResultExceptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeResultExceptionTest.java index fab5de6d25229..59a476cc178b7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeResultExceptionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeResultExceptionTest.java @@ -31,38 +31,48 @@ import org.apache.ignite.compute.ComputeTaskFuture; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Testing that if {@link ComputeTask#result(ComputeJobResult, List)} throws an {@link IgniteException} * then that exception is thrown as the execution result. */ +@RunWith(JUnit4.class) public class IgniteComputeResultExceptionTest extends GridCommonAbstractTest { /** */ + @Test public void testIgniteExceptionExecute() throws Exception { checkExecuteException(new IgniteException()); } /** */ + @Test public void testIgniteExceptionWithCauseExecute() throws Exception { checkExecuteException(new IgniteException(new Exception())); } /** */ + @Test public void testIgniteExceptionWithCauseChainExecute() throws Exception { checkExecuteException(new IgniteException(new Exception(new Throwable()))); } /** */ + @Test public void testCustomExceptionExecute() throws Exception { checkExecuteException(new TaskException()); } /** */ + @Test public void testCustomExceptionWithCauseExecute() throws Exception { checkExecuteException(new TaskException(new Exception())); } /** */ + @Test public void testCustomExceptionWithCauseChainExecute() throws Exception { checkExecuteException(new TaskException(new Exception(new Throwable()))); } @@ -80,32 +90,38 @@ private void 
checkExecuteException(IgniteException resE) throws Exception { } /** */ + @Test public void testIgniteExceptionExecuteAsync() throws Exception { checkExecuteAsyncException(new IgniteException()); } /** */ + @Test public void testIgniteExceptionWithCauseExecuteAsync() throws Exception { checkExecuteAsyncException(new IgniteException(new Exception())); } /** */ + @Test public void testIgniteExceptionWithCauseChainExecuteAsync() throws Exception { checkExecuteAsyncException(new IgniteException(new Exception(new Throwable()))); } /** */ + @Test public void testCustomExceptionExecuteAsync() throws Exception { checkExecuteAsyncException(new TaskException()); } /** */ + @Test public void testCustomExceptionWithCauseExecuteAsync() throws Exception { checkExecuteAsyncException(new TaskException(new Exception())); } /** */ + @Test public void testCustomExceptionWithCauseChainExecuteAsync() throws Exception { checkExecuteAsyncException(new TaskException(new Exception(new Throwable()))); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeTopologyExceptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeTopologyExceptionTest.java index a82373b618507..583b9e42a4293 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeTopologyExceptionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteComputeTopologyExceptionTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.GridClosureCallMode.BALANCE; /** * */ +@RunWith(JUnit4.class) public class IgniteComputeTopologyExceptionTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { @@ -46,6 +50,7 
@@ public class IgniteComputeTopologyExceptionTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCorrectException() throws Exception { Ignite ignite = ignite(0); @@ -72,6 +77,7 @@ public void testCorrectException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCorrectCheckedException() throws Exception { IgniteKernal ignite0 = (IgniteKernal)ignite(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteConcurrentEntryProcessorAccessStopTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteConcurrentEntryProcessorAccessStopTest.java index af65ffba2f51a..ac2e1e8d52364 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteConcurrentEntryProcessorAccessStopTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteConcurrentEntryProcessorAccessStopTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests node stop while it is being accessed from EntryProcessor. */ +@RunWith(JUnit4.class) public class IgniteConcurrentEntryProcessorAccessStopTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -38,6 +42,7 @@ public class IgniteConcurrentEntryProcessorAccessStopTest extends GridCommonAbst * * @throws Exception If failed. 
*/ + @Test public void testConcurrentAccess() throws Exception { CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteConnectionConcurrentReserveAndRemoveTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteConnectionConcurrentReserveAndRemoveTest.java index fb194491b7c6b..3ca51de7fc330 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteConnectionConcurrentReserveAndRemoveTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteConnectionConcurrentReserveAndRemoveTest.java @@ -31,17 +31,15 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.communication.tcp.messages.HandshakeMessage2; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class IgniteConnectionConcurrentReserveAndRemoveTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -53,12 +51,6 @@ public class IgniteConnectionConcurrentReserveAndRemoveTest extends GridCommonAb c.setClientMode(igniteInstanceName.startsWith("client")); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - TestRecordingCommunicationSpi spi = new TestRecordingCommunicationSpi(); 
spi.setIdleConnectionTimeout(Integer.MAX_VALUE); @@ -79,6 +71,7 @@ private static final class TestClosure implements IgniteCallable { } + @Test public void test() throws Exception { IgniteEx svr = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteDiscoveryMassiveNodeFailTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteDiscoveryMassiveNodeFailTest.java index 32ce9783c0332..50b771bc7f9dd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteDiscoveryMassiveNodeFailTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteDiscoveryMassiveNodeFailTest.java @@ -38,12 +38,16 @@ import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests checks case when one node is unable to connect to next in a ring, * but those nodes are not experiencing any connectivity troubles between * each other. */ +@RunWith(JUnit4.class) public class IgniteDiscoveryMassiveNodeFailTest extends GridCommonAbstractTest { /** */ private static final int FAILURE_DETECTION_TIMEOUT = 5_000; @@ -118,6 +122,7 @@ public class IgniteDiscoveryMassiveNodeFailTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testMassiveFailDisabledRecovery() throws Exception { timeout = 0; // Disable previous node check. @@ -176,6 +181,7 @@ private long waitTime() { * * @throws Exception If failed. */ + @Test public void testMassiveFailSelfKill() throws Exception { startGrids(5); @@ -211,6 +217,7 @@ public void testMassiveFailSelfKill() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMassiveFailAndRecovery() throws Exception { startGrids(5); @@ -254,6 +261,7 @@ public void testMassiveFailAndRecovery() throws Exception { * * @throws Exception If failed. */ + @Test public void testMassiveFail() throws Exception { failNodes = true; @@ -270,6 +278,7 @@ public void testMassiveFail() throws Exception { * * @throws Exception If failed. */ + @Test public void testMassiveFailForceNodeFail() throws Exception { failNodes = true; @@ -285,6 +294,7 @@ public void testMassiveFailForceNodeFail() throws Exception { * * @throws Exception If failed. */ + @Test public void testRecoveryOnDisconnect() throws Exception { startGrids(3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteExecutorServiceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteExecutorServiceTest.java index 19f46fae56084..c7b0cf83b2237 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteExecutorServiceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteExecutorServiceTest.java @@ -32,11 +32,15 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid distributed executor test. */ @GridCommonTest(group = "Thread Tests") +@RunWith(JUnit4.class) public class IgniteExecutorServiceTest extends GridCommonAbstractTest { /** */ public IgniteExecutorServiceTest() { @@ -46,6 +50,7 @@ public IgniteExecutorServiceTest() { /** * @throws Exception Thrown in case of test failure. */ + @Test public void testExecute() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -70,6 +75,7 @@ public void testExecute() throws Exception { /** * @throws Exception Thrown in case of test failure. 
*/ + @Test public void testSubmit() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -107,6 +113,7 @@ public void testSubmit() throws Exception { /** * @throws Exception Thrown in case of test failure. */ + @Test public void testSubmitWithFutureTimeout() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -134,6 +141,7 @@ public void testSubmitWithFutureTimeout() throws Exception { * @throws Exception Thrown in case of test failure. */ @SuppressWarnings("TooBroadScope") + @Test public void testInvokeAll() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -164,7 +172,7 @@ public void testInvokeAll() throws Exception { /** * @throws Exception Thrown in case of test failure. */ - @SuppressWarnings("TooBroadScope") + @Test public void testInvokeAllWithTimeout() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -211,6 +219,7 @@ public void testInvokeAllWithTimeout() throws Exception { * @throws Exception Thrown in case of test failure. */ @SuppressWarnings("TooBroadScope") + @Test public void testInvokeAny() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -236,7 +245,7 @@ public void testInvokeAny() throws Exception { /** * @throws Exception Thrown in case of test failure. 
*/ - @SuppressWarnings("TooBroadScope") + @Test public void testInvokeAnyWithTimeout() throws Exception { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -317,4 +326,4 @@ private static class TestRunnable implements Runnable, Serializable { assert ignite != null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteExplicitImplicitDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteExplicitImplicitDeploymentSelfTest.java index 1b253ed821644..a6f0ba52665d5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteExplicitImplicitDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteExplicitImplicitDeploymentSelfTest.java @@ -46,11 +46,15 @@ import org.apache.ignite.testframework.GridTestClassLoader; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class IgniteExplicitImplicitDeploymentSelfTest extends GridCommonAbstractTest { /** */ public IgniteExplicitImplicitDeploymentSelfTest() { @@ -73,6 +77,7 @@ public IgniteExplicitImplicitDeploymentSelfTest() { /** * @throws Exception If test failed. */ + @Test public void testImplicitDeployLocally() throws Exception { execImplicitDeployLocally(true, true, true); } @@ -80,6 +85,7 @@ public void testImplicitDeployLocally() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testImplicitDeployP2P() throws Exception { execImplicitDeployP2P(true, true, true); } @@ -87,6 +93,7 @@ public void testImplicitDeployP2P() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testExplicitDeployLocally() throws Exception { execExplicitDeployLocally(true, true, true); } @@ -94,6 +101,7 @@ public void testExplicitDeployLocally() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testExplicitDeployP2P() throws Exception { execExplicitDeployP2P(true, true, true); } @@ -490,4 +498,4 @@ public static final class GridDeploymentResourceTestJob extends ComputeJobAdapte } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteInternalCacheRemoveTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteInternalCacheRemoveTest.java index ce45dc2e5c644..9ea04bc7dbefe 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteInternalCacheRemoveTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteInternalCacheRemoveTest.java @@ -19,10 +19,14 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteInternalCacheRemoveTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -32,10 +36,11 @@ public class IgniteInternalCacheRemoveTest extends GridCacheAbstractSelfTest { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testRemove() throws IgniteCheckedException { jcache().put("key", 1); assert jcache().remove("key", 1); assert !jcache().remove("key", 1); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteLocalNodeMapBeforeStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteLocalNodeMapBeforeStartTest.java index 7eae2ce2e6f37..e331bfb9b7a44 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteLocalNodeMapBeforeStartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteLocalNodeMapBeforeStartTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.lifecycle.LifecycleEventType; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.lifecycle.LifecycleEventType.AFTER_NODE_START; import static org.apache.ignite.lifecycle.LifecycleEventType.AFTER_NODE_STOP; @@ -35,10 +38,12 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteLocalNodeMapBeforeStartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testNodeLocalMapFromLifecycleBean() throws Exception { IgniteConfiguration cfg = getConfiguration(getTestIgniteInstanceName(0)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteReflectionFactorySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteReflectionFactorySelfTest.java index 899728c66a56d..b9207c6e0e222 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteReflectionFactorySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteReflectionFactorySelfTest.java @@ -20,16 +20,19 @@ import java.io.Serializable; import java.util.HashMap; import java.util.Map; -import junit.framework.TestCase; import org.apache.ignite.configuration.IgniteReflectionFactory; +import org.junit.Test; + +import static junit.framework.Assert.assertEquals; /** * Tests for {@link IgniteReflectionFactory} class. */ -public class IgniteReflectionFactorySelfTest extends TestCase { +public class IgniteReflectionFactorySelfTest { /** * @throws Exception If failed. */ + @Test public void testByteMethod() throws Exception { byte expByteVal = 42; short expShortVal = 42; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteRoundRobinErrorAfterClientReconnectTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteRoundRobinErrorAfterClientReconnectTest.java index 00a33a61ab656..b1ebbfea16fcb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteRoundRobinErrorAfterClientReconnectTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteRoundRobinErrorAfterClientReconnectTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test to reproduce IGNITE-4060. 
*/ +@RunWith(JUnit4.class) public class IgniteRoundRobinErrorAfterClientReconnectTest extends GridCommonAbstractTest { /** Server index. */ private static final int SRV_IDX = 0; @@ -65,6 +69,7 @@ public class IgniteRoundRobinErrorAfterClientReconnectTest extends GridCommonAbs /** * @throws Exception If failed. */ + @Test public void testClientReconnect() throws Exception { final Ignite cli = grid(CLI_IDX); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteSlowClientDetectionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteSlowClientDetectionSelfTest.java index d7387845c6e5d..2993d2144852b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteSlowClientDetectionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteSlowClientDetectionSelfTest.java @@ -37,23 +37,22 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; /** * */ +@RunWith(JUnit4.class) public class IgniteSlowClientDetectionSelfTest extends GridCommonAbstractTest { /** */ public static final String PARTITIONED = "partitioned"; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * @return Node count. 
*/ @@ -65,7 +64,6 @@ private int nodeCount() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setClientReconnectDisabled(true); if (getTestIgniteInstanceName(nodeCount() - 1).equals(igniteInstanceName) || @@ -99,6 +97,7 @@ private int nodeCount() { /** * @throws Exception If failed. */ + @Test public void testSlowClient() throws Exception { final IgniteEx slowClient = grid(nodeCount() - 1); @@ -189,4 +188,4 @@ private static class Listener implements CacheEntryUpdatedListener<Object, Object> { log.info(">>> Received update: " + iterable); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteUpdateNotifierPerClusterSettingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteUpdateNotifierPerClusterSettingSelfTest.java index a348ea555b4fe..fb6cde39136eb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/IgniteUpdateNotifierPerClusterSettingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteUpdateNotifierPerClusterSettingSelfTest.java @@ -21,18 +21,16 @@ import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cluster.ClusterProcessor; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class
IgniteUpdateNotifierPerClusterSettingSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private String backup; @@ -55,8 +53,6 @@ public class IgniteUpdateNotifierPerClusterSettingSelfTest extends GridCommonAbs @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -65,6 +61,7 @@ public class IgniteUpdateNotifierPerClusterSettingSelfTest extends GridCommonAbs /** * @throws Exception If failed. */ + @Test public void testNotifierEnabledForCluster() throws Exception { checkNotifierStatusForCluster(true); } @@ -72,6 +69,7 @@ public void testNotifierEnabledForCluster() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotifierDisabledForCluster() throws Exception { checkNotifierStatusForCluster(false); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/IgniteVersionUtilsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/IgniteVersionUtilsSelfTest.java new file mode 100644 index 0000000000000..95624de55b539 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/IgniteVersionUtilsSelfTest.java @@ -0,0 +1,41 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal; + +import java.util.Calendar; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + */ +@RunWith(JUnit4.class) +public class IgniteVersionUtilsSelfTest extends GridCommonAbstractTest { + /** + * @throws Exception If failed. + */ + @Test + public void testIgniteCopyrights() throws Exception { + final String COPYRIGHT = String.valueOf(Calendar.getInstance().get(Calendar.YEAR)) + " Copyright(C) Apache Software Foundation"; + + assertNotNull(IgniteVersionUtils.COPYRIGHT); + + assertTrue(COPYRIGHT.equals(IgniteVersionUtils.COPYRIGHT)); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/LongJVMPauseDetectorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/LongJVMPauseDetectorTest.java index 267f389bdc1e9..34a58e69ffc7f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/LongJVMPauseDetectorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/LongJVMPauseDetectorTest.java @@ -20,10 +20,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests if LongJVMPauseDetector starts properly. 
*/ +@RunWith(JUnit4.class) public class LongJVMPauseDetectorTest extends GridCommonAbstractTest { /** */ private GridStringLogger strLog; @@ -46,6 +50,7 @@ public class LongJVMPauseDetectorTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testJulMessage() throws Exception { this.strLog = new GridStringLogger(true); @@ -59,6 +64,7 @@ public void testJulMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopWorkerThread() throws Exception { strLog = new GridStringLogger(true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/MarshallerContextLockingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/MarshallerContextLockingSelfTest.java index f31a56da46975..1f153064b6683 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/MarshallerContextLockingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/MarshallerContextLockingSelfTest.java @@ -35,12 +35,16 @@ import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.MarshallerPlatformIds.JAVA_ID; /** * Test marshaller context. */ +@RunWith(JUnit4.class) public class MarshallerContextLockingSelfTest extends GridCommonAbstractTest { /** Inner logger. 
*/ private InnerLogger innerLog; @@ -73,6 +77,7 @@ public class MarshallerContextLockingSelfTest extends GridCommonAbstractTest { /** * Multithreaded test, used custom class loader */ + @Test public void testMultithreadedUpdate() throws Exception { multithreaded(new Callable() { @Override public Object call() throws Exception { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/MemoryLeaksOnRestartNodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/MemoryLeaksOnRestartNodeTest.java index bf0c91104fb53..88fc226e2d479 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/MemoryLeaksOnRestartNodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/MemoryLeaksOnRestartNodeTest.java @@ -24,10 +24,14 @@ import org.apache.ignite.internal.util.GridDebug; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests leaks on node restart with enabled persistence. */ +@RunWith(JUnit4.class) public class MemoryLeaksOnRestartNodeTest extends GridCommonAbstractTest { /** Heap dump file name. */ private static final String HEAP_DUMP_FILE_NAME = "test.hprof"; @@ -70,6 +74,7 @@ public class MemoryLeaksOnRestartNodeTest extends GridCommonAbstractTest { /** * @throws Exception On failed. */ + @Test public void test() throws Exception { // Warmup for (int i = 0; i < RESTARTS / 2; ++i) { @@ -108,4 +113,4 @@ public void test() throws Exception { // Remove dump if successful. 
dumpFile.delete(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TaskNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/TaskNodeRestartTest.java index 7cab4e6dc8423..b8b0c6c2ef2ec 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/TaskNodeRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/TaskNodeRestartTest.java @@ -37,20 +37,18 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class TaskNodeRestartTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 3; @@ -60,8 +58,6 @@ public class TaskNodeRestartTest extends GridCommonAbstractTest { ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - return cfg; } @@ -75,6 +71,7 @@ public class TaskNodeRestartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testTaskNodeRestart() throws Exception { final AtomicBoolean finished = new AtomicBoolean(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TestManagementVisorMultiNodeTask.java b/modules/core/src/test/java/org/apache/ignite/internal/TestManagementVisorMultiNodeTask.java new file mode 100644 index 0000000000000..beccd799043dd --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/TestManagementVisorMultiNodeTask.java @@ -0,0 +1,66 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal; + +import java.util.List; +import org.apache.ignite.compute.ComputeJobResult; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; +import org.apache.ignite.internal.visor.VisorJob; +import org.apache.ignite.internal.visor.VisorMultiNodeTask; +import org.apache.ignite.internal.visor.VisorTaskArgument; +import org.jetbrains.annotations.Nullable; + +/** + * + */ +@GridVisorManagementTask +public class TestManagementVisorMultiNodeTask extends VisorMultiNodeTask { + /** */ + private static final long serialVersionUID = 0L; + + /** {@inheritDoc} */ + @Override protected VisorValidMultiNodeJob job(VisorTaskArgument arg) { + return new VisorValidMultiNodeJob(arg, debug); + } + + /** {@inheritDoc} */ + @Nullable @Override protected Object reduce0(List results) { + return null; + } + + /** + * Valid Management multi node visor job. + */ + private static class VisorValidMultiNodeJob extends VisorJob { + /** */ + private static final long serialVersionUID = 0L; + + /** + * @param arg Argument. + * @param debug Debug flag. + */ + protected VisorValidMultiNodeJob(VisorTaskArgument arg, boolean debug) { + super(arg, debug); + } + + /** {@inheritDoc} */ + @Override protected Object run(VisorTaskArgument arg) { + return null; + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TestManagementVisorOneNodeTask.java b/modules/core/src/test/java/org/apache/ignite/internal/TestManagementVisorOneNodeTask.java new file mode 100644 index 0000000000000..f7479af08b19b --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/TestManagementVisorOneNodeTask.java @@ -0,0 +1,66 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal; + +import java.util.List; +import org.apache.ignite.compute.ComputeJobResult; +import org.apache.ignite.internal.processors.task.GridVisorManagementTask; +import org.apache.ignite.internal.visor.VisorJob; +import org.apache.ignite.internal.visor.VisorOneNodeTask; +import org.apache.ignite.internal.visor.VisorTaskArgument; +import org.jetbrains.annotations.Nullable; + +/** + * + */ +@GridVisorManagementTask +public class TestManagementVisorOneNodeTask extends VisorOneNodeTask { + /** */ + private static final long serialVersionUID = 0L; + + /** {@inheritDoc} */ + @Override protected VisorValidOneNodeJob job(VisorTaskArgument arg) { + return new VisorValidOneNodeJob(arg, debug); + } + + /** {@inheritDoc} */ + @Nullable @Override protected Object reduce0(List results) { + return null; + } + + /** + * Valid Management one node visor job. + */ + private static class VisorValidOneNodeJob extends VisorJob { + /** */ + private static final long serialVersionUID = 0L; + + /** + * @param arg Argument. + * @param debug Debug flag. 
+ */ + protected VisorValidOneNodeJob(VisorTaskArgument arg, boolean debug) { + super(arg, debug); + } + + /** {@inheritDoc} */ + @Override protected Object run(VisorTaskArgument arg) { + return null; + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TestNotManagementVisorMultiNodeTask.java b/modules/core/src/test/java/org/apache/ignite/internal/TestNotManagementVisorMultiNodeTask.java new file mode 100644 index 0000000000000..d8f279363385f --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/TestNotManagementVisorMultiNodeTask.java @@ -0,0 +1,64 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal; + +import java.util.List; +import org.apache.ignite.compute.ComputeJobResult; +import org.apache.ignite.internal.visor.VisorJob; +import org.apache.ignite.internal.visor.VisorMultiNodeTask; +import org.apache.ignite.internal.visor.VisorTaskArgument; +import org.jetbrains.annotations.Nullable; + +/** + * + */ +public class TestNotManagementVisorMultiNodeTask extends VisorMultiNodeTask { + /** */ + private static final long serialVersionUID = 0L; + + /** {@inheritDoc} */ + @Override protected VisorNotManagementMultiNodeJob job(VisorTaskArgument arg) { + return new VisorNotManagementMultiNodeJob(arg, debug); + } + + /** {@inheritDoc} */ + @Nullable @Override protected Object reduce0(List results) { + return null; + } + + /** + * Not management multi node visor job. + */ + private static class VisorNotManagementMultiNodeJob extends VisorJob { + /** */ + private static final long serialVersionUID = 0L; + + /** + * @param arg Argument. + * @param debug Debug flag. + */ + protected VisorNotManagementMultiNodeJob(VisorTaskArgument arg, boolean debug) { + super(arg, debug); + } + + /** {@inheritDoc} */ + @Override protected Object run(VisorTaskArgument arg) { + return null; + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TestNotManagementVisorOneNodeTask.java b/modules/core/src/test/java/org/apache/ignite/internal/TestNotManagementVisorOneNodeTask.java new file mode 100644 index 0000000000000..ea47ee09be730 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/TestNotManagementVisorOneNodeTask.java @@ -0,0 +1,64 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal; + +import java.util.List; +import org.apache.ignite.compute.ComputeJobResult; +import org.apache.ignite.internal.visor.VisorJob; +import org.apache.ignite.internal.visor.VisorOneNodeTask; +import org.apache.ignite.internal.visor.VisorTaskArgument; +import org.jetbrains.annotations.Nullable; + +/** + * + */ +public class TestNotManagementVisorOneNodeTask extends VisorOneNodeTask { + /** */ + private static final long serialVersionUID = 0L; + + /** {@inheritDoc} */ + @Override protected VisorNotManagementOneNodeJob job(VisorTaskArgument arg) { + return new VisorNotManagementOneNodeJob(arg, debug); + } + + /** {@inheritDoc} */ + @Nullable @Override protected Object reduce0(List results) { + return null; + } + + /** + * Not management one node visor job. + */ + private static class VisorNotManagementOneNodeJob extends VisorJob { + /** */ + private static final long serialVersionUID = 0L; + + /** + * @param arg Argument. + * @param debug Debug flag. 
+ */ + protected VisorNotManagementOneNodeJob(VisorTaskArgument arg, boolean debug) { + super(arg, debug); + } + + /** {@inheritDoc} */ + @Override protected Object run(VisorTaskArgument arg) { + return null; + } + } +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TestRecordingCommunicationSpi.java b/modules/core/src/test/java/org/apache/ignite/internal/TestRecordingCommunicationSpi.java index 7b68a6bfd8337..988395f473573 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/TestRecordingCommunicationSpi.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/TestRecordingCommunicationSpi.java @@ -217,6 +217,27 @@ public void waitForBlocked(int size) throws InterruptedException { } } + /** + * @param size Size. + * @param timeout Timeout. + * @return {@code True} if the expected number of messages were blocked before the timeout elapsed, {@code false} otherwise. + * @throws InterruptedException If interrupted. + */ + public boolean waitForBlocked(int size, long timeout) throws InterruptedException { + long t0 = U.currentTimeMillis() + timeout; + + synchronized (this) { + while (blockedMsgs.size() < size) { + wait(1000); + + if (U.currentTimeMillis() >= t0) + return false; + } + } + + return true; + } + /** * @throws InterruptedException If interrupted.
*/ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TransactionMetricsMxBeanImplTest.java b/modules/core/src/test/java/org/apache/ignite/internal/TransactionMetricsMxBeanImplTest.java index 91bd9bcf98271..7a6c832f874b1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/TransactionMetricsMxBeanImplTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/TransactionMetricsMxBeanImplTest.java @@ -17,13 +17,13 @@ package org.apache.ignite.internal; -import javax.management.MBeanServer; -import javax.management.MBeanServerInvocationHandler; -import javax.management.ObjectName; import java.lang.management.ManagementFactory; import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; +import javax.management.MBeanServer; +import javax.management.MBeanServerInvocationHandler; +import javax.management.ObjectName; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheRebalanceMode; @@ -32,11 +32,12 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.mxbean.TransactionMetricsMxBean; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -46,10 +47,8 @@ /** * */ +@RunWith(JUnit4.class) public class TransactionMetricsMxBeanImplTest extends GridCommonAbstractTest { - /** */ - 
private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int TRANSACTIONS = 10; @@ -57,8 +56,6 @@ public class TransactionMetricsMxBeanImplTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { final IgniteConfiguration cfg = super.getConfiguration(name); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); cfg.setLocalHost("127.0.0.1"); @@ -81,9 +78,17 @@ public class TransactionMetricsMxBeanImplTest extends GridCommonAbstractTest { super.afterTest(); } + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.METRICS); + + super.beforeTestsStarted(); + } + /** * */ + @Test public void testTxMetric() throws Exception { //given: int keysNumber = 10; @@ -171,6 +176,7 @@ public void testTxMetric() throws Exception { /** * */ + @Test public void testNearTxInfo() throws Exception { IgniteEx primaryNode1 = startGrid(0); IgniteEx primaryNode2 = startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/TransactionsMXBeanImplTest.java b/modules/core/src/test/java/org/apache/ignite/internal/TransactionsMXBeanImplTest.java index d358c72e79917..c8a91b0aa54d4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/TransactionsMXBeanImplTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/TransactionsMXBeanImplTest.java @@ -27,10 +27,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.mxbean.TransactionsMXBean; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; 
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -38,10 +38,8 @@ /** * */ +@RunWith(JUnit4.class) public class TransactionsMXBeanImplTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -53,8 +51,6 @@ public class TransactionsMXBeanImplTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { final IgniteConfiguration cfg = super.getConfiguration(name); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); final CacheConfiguration cCfg = new CacheConfiguration() @@ -73,6 +69,7 @@ public class TransactionsMXBeanImplTest extends GridCommonAbstractTest { /** * */ + @Test public void testBasic() throws Exception { IgniteEx ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/VisorManagementEventSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/VisorManagementEventSelfTest.java new file mode 100644 index 0000000000000..3ebb79aebf074 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/VisorManagementEventSelfTest.java @@ -0,0 +1,200 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal; + +import java.util.Arrays; +import java.util.List; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicReference; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.compute.ComputeTask; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.EventType; +import org.apache.ignite.events.TaskEvent; +import org.apache.ignite.internal.visor.VisorTaskArgument; +import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.events.EventType.EVT_MANAGEMENT_TASK_STARTED; + +/** + * + */ +@RunWith(JUnit4.class) +public class VisorManagementEventSelfTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = new IgniteConfiguration(); + + // Enable visor management events.
+ cfg.setIncludeEventTypes( + EVT_MANAGEMENT_TASK_STARTED + ); + + cfg.setCacheConfiguration( + new CacheConfiguration() + .setName("TEST") + .setIndexedTypes(Integer.class, Integer.class) + .setStatisticsEnabled(true) + ); + + TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(); + + List<String> addrs = Arrays.asList("127.0.0.1:47500..47502"); + + ipFinder.setAddresses(addrs); + + TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); + + discoSpi.setIpFinder(ipFinder); + + cfg.setDiscoverySpi(discoSpi); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + super.afterTest(); + } + + /** + * This test case starts a valid one node visor task that has the GridVisorManagementTask annotation. + * No exceptions are expected. + * + * @throws Exception If failed. + */ + @Test + public void testManagementOneNodeVisorTask() throws Exception { + IgniteEx ignite = startGrid(0); + + doTestVisorTask(TestManagementVisorOneNodeTask.class, new VisorTaskArgument(), ignite); + } + + /** + * This test case starts a valid multi node visor task that has the GridVisorManagementTask annotation. + * No exceptions are expected. + * + * @throws Exception If failed. + */ + @Test + public void testManagementMultiNodeVisorTask() throws Exception { + IgniteEx ignite = startGrid(0); + + doTestVisorTask(TestManagementVisorMultiNodeTask.class, new VisorTaskArgument(), ignite); + } + + /** + * This test case starts a one node visor task that does not have the GridVisorManagementTask annotation. + * No exceptions are expected. + * + * @throws Exception If failed. + */ + @Test + public void testNotManagementOneNodeVisorTask() throws Exception { + IgniteEx ignite = startGrid(0); + + doTestNotManagementVisorTask(TestNotManagementVisorOneNodeTask.class, new VisorTaskArgument(), ignite); + } + + /** + * This test case starts a multi node visor task that does not have the GridVisorManagementTask annotation.
+ * No exceptions are expected. + * + * @throws Exception If failed. + */ + @Test + public void testNotManagementMultiNodeVisorTask() throws Exception { + IgniteEx ignite = startGrid(0); + + doTestNotManagementVisorTask(TestNotManagementVisorMultiNodeTask.class, new VisorTaskArgument(), ignite); + } + + /** + * @param cls Class of the task. + * @param arg Argument. + * @param ignite Instance of Ignite. + * + * @throws Exception If failed. + */ + private <T, R> void doTestVisorTask( + Class<? extends ComputeTask<VisorTaskArgument<T>, R>> cls, T arg, IgniteEx ignite) throws Exception + { + final AtomicReference<TaskEvent> evt = new AtomicReference<>(); + + final CountDownLatch evtLatch = new CountDownLatch(1); + + ignite.events().localListen(new IgnitePredicate<TaskEvent>() { + @Override public boolean apply(TaskEvent e) { + evt.set(e); + + evtLatch.countDown(); + + return false; + } + }, EventType.EVT_MANAGEMENT_TASK_STARTED); + + for (ClusterNode node : ignite.cluster().forServers().nodes()) + ignite.compute().executeAsync(cls, new VisorTaskArgument<>(node.id(), arg, true)); + + assertTrue(evtLatch.await(10000, TimeUnit.MILLISECONDS)); + + assertNotNull(evt.get()); + } + + /** + * @param cls Class of the task. + * @param arg Argument. + * @param ignite Instance of Ignite. + * + * @throws Exception If failed.
+ */ + private <T, R> void doTestNotManagementVisorTask( + Class<? extends ComputeTask<VisorTaskArgument<T>, R>> cls, T arg, IgniteEx ignite) throws Exception + { + final AtomicReference<TaskEvent> evt = new AtomicReference<>(); + + final CountDownLatch evtLatch = new CountDownLatch(1); + + ignite.events().localListen(new IgnitePredicate<TaskEvent>() { + @Override public boolean apply(TaskEvent e) { + evt.set(e); + + evtLatch.countDown(); + + return false; + } + }, EventType.EVT_MANAGEMENT_TASK_STARTED); + + for (ClusterNode node : ignite.cluster().forServers().nodes()) + ignite.compute().executeAsync(cls, new VisorTaskArgument<>(node.id(), arg, true)); + + assertFalse(evtLatch.await(10000, TimeUnit.MILLISECONDS)); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryArrayIdentityResolverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryArrayIdentityResolverSelfTest.java index 27c39c3feaf52..f13d666eb10de 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryArrayIdentityResolverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryArrayIdentityResolverSelfTest.java @@ -34,12 +34,16 @@ import java.util.ArrayList; import java.util.List; import java.util.Set; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.junit.Assert.assertNotEquals; /** * Array identity resolver self test. */ +@RunWith(JUnit4.class) public class BinaryArrayIdentityResolverSelfTest extends GridCommonAbstractTest { /** Pointers to release. */ private final Set<Long> ptrs = new ConcurrentHashSet<>(); @@ -88,6 +92,7 @@ public class BinaryArrayIdentityResolverSelfTest extends GridCommonAbstractTest /** * Test hash code generation for simple object. */ + @Test public void testHashCode() { InnerClass obj = new InnerClass(1, "2", 3); @@ -99,6 +104,7 @@ public void testHashCode() { /** * Test hash code generation for simple object.
*/ + @Test public void testHashCodeBinarylizable() { InnerClassBinarylizable obj = new InnerClassBinarylizable(1, "2", 3); @@ -110,6 +116,7 @@ public void testHashCodeBinarylizable() { /** * Test equals for simple object. */ + @Test public void testEquals() { InnerClass obj = new InnerClass(1, "2", 3); @@ -128,6 +135,7 @@ public void testEquals() { /** * Test equals for simple object. */ + @Test public void testEqualsBinarilyzable() { InnerClassBinarylizable obj = new InnerClassBinarylizable(1, "2", 3); @@ -148,6 +156,7 @@ public void testEqualsBinarilyzable() { /** * Test equals for different type IDs. */ + @Test public void testEqualsDifferenTypes() { InnerClass obj1 = new InnerClass(1, "2", 3); InnerClassBinarylizable obj2 = new InnerClassBinarylizable(1, "2", 3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicIdMapperSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicIdMapperSelfTest.java index 1d6da2cadd1fb..f0b6213d4a32a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicIdMapperSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicIdMapperSelfTest.java @@ -20,14 +20,19 @@ import org.apache.ignite.binary.BinaryBasicIdMapper; import org.apache.ignite.internal.binary.test.GridBinaryTestClass1; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class BinaryBasicIdMapperSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testLowerCase() throws Exception { BinaryBasicIdMapper mapper = new BinaryBasicIdMapper(true); @@ -40,6 +45,7 @@ public void testLowerCase() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDefaultCase() throws Exception { BinaryBasicIdMapper mapper = new BinaryBasicIdMapper(false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicNameMapperSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicNameMapperSelfTest.java index 70fb8e7d76535..f2c94fec11af6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicNameMapperSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryBasicNameMapperSelfTest.java @@ -20,14 +20,19 @@ import org.apache.ignite.binary.BinaryBasicNameMapper; import org.apache.ignite.internal.binary.test.GridBinaryTestClass1; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class BinaryBasicNameMapperSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSimpleName() throws Exception { BinaryBasicNameMapper mapper = new BinaryBasicNameMapper(true); @@ -39,6 +44,7 @@ public void testSimpleName() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFullName() throws Exception { BinaryBasicNameMapper mapper = new BinaryBasicNameMapper(false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationConsistencySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationConsistencySelfTest.java index 3dfa17925549e..b7d81c4155260 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationConsistencySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationConsistencySelfTest.java @@ -30,12 +30,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK; /** * Tests a check of binary configuration consistency. */ +@RunWith(JUnit4.class) public class BinaryConfigurationConsistencySelfTest extends GridCommonAbstractTest { /** */ private BinaryConfiguration binaryCfg; @@ -66,6 +70,7 @@ public class BinaryConfigurationConsistencySelfTest extends GridCommonAbstractTe /** * @throws Exception If failed. */ + @Test public void testSkipCheckConsistencyFlagEnabled() throws Exception { String backup = System.setProperty(IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK, "true"); @@ -95,6 +100,7 @@ public void testSkipCheckConsistencyFlagEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPositiveNullConfig() throws Exception { binaryCfg = null; @@ -108,6 +114,7 @@ public void testPositiveNullConfig() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPositiveEmptyConfig() throws Exception { binaryCfg = new BinaryConfiguration(); @@ -121,6 +128,7 @@ public void testPositiveEmptyConfig() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPositiveCustomConfig() throws Exception { binaryCfg = customConfig(false); @@ -134,6 +142,7 @@ public void testPositiveCustomConfig() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNegativeNullEmptyConfigs() throws Exception { checkNegative(null, new BinaryConfiguration()); } @@ -141,6 +150,7 @@ public void testNegativeNullEmptyConfigs() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNegativeEmptyNullConfigs() throws Exception { checkNegative(new BinaryConfiguration(), null); } @@ -148,6 +158,7 @@ public void testNegativeEmptyNullConfigs() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNegativeEmptyCustomConfigs() throws Exception { checkNegative(new BinaryConfiguration(), customConfig(false)); } @@ -156,6 +167,7 @@ public void testNegativeEmptyCustomConfigs() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNegativeCustomNullConfigs() throws Exception { checkNegative(customConfig(false), null); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationCustomSerializerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationCustomSerializerSelfTest.java index cedbbaf5f4bc1..32e0b527e4910 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationCustomSerializerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryConfigurationCustomSerializerSelfTest.java @@ -38,10 +38,14 @@ import org.apache.ignite.internal.visor.node.VisorNodePingTask; import org.apache.ignite.internal.visor.node.VisorNodePingTaskArg; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests that node will start with custom binary serializer and thin client will connect to such node. */ +@RunWith(JUnit4.class) public class BinaryConfigurationCustomSerializerSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -88,6 +92,7 @@ public class BinaryConfigurationCustomSerializerSelfTest extends GridCommonAbstr * * @throws Exception If failed. 
*/ + @Test public void testThinClientConnected() throws Exception { UUID nid = ignite(0).cluster().localNode().id(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryEnumsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryEnumsSelfTest.java index c18c0eef3b749..754d33caf6a36 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryEnumsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryEnumsSelfTest.java @@ -39,11 +39,15 @@ import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Contains tests for binary enums. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class BinaryEnumsSelfTest extends GridCommonAbstractTest { /** Cache name. */ private static String CACHE_NAME = "cache"; @@ -144,6 +148,7 @@ private void startUp(boolean register) throws Exception { * * @throws Exception If failed. */ + @Test public void testSimpleRegistered() throws Exception { checkSimple(true); } @@ -153,6 +158,7 @@ public void testSimpleRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testSimpleNotRegistered() throws Exception { checkSimple(false); } @@ -162,6 +168,7 @@ public void testSimpleNotRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testNestedRegistered() throws Exception { checkNested(true); } @@ -171,6 +178,7 @@ public void testNestedRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testNestedNotRegistered() throws Exception { checkNested(false); } @@ -180,6 +188,7 @@ public void testNestedNotRegistered() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSimpleBuilderRegistered() throws Exception { checkSimpleBuilder(true); } @@ -189,6 +198,7 @@ public void testSimpleBuilderRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testSimpleBuilderNotRegistered() throws Exception { checkSimpleBuilder(false); } @@ -198,6 +208,7 @@ public void testSimpleBuilderNotRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testNestedBuilderRegistered() throws Exception { checkNestedBuilder(true); } @@ -207,6 +218,7 @@ public void testNestedBuilderRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testNestedBuilderNotRegistered() throws Exception { checkNestedBuilder(false); } @@ -214,6 +226,7 @@ public void testNestedBuilderNotRegistered() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInstanceFromBytes() throws Exception { startUp(true); @@ -353,6 +366,7 @@ public void checkSimpleBuilder(boolean registered) throws Exception { * * @throws Exception If failed. */ + @Test public void testSimpleArrayRegistered() throws Exception { checkSimpleArray(true); } @@ -362,6 +376,7 @@ public void testSimpleArrayRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testSimpleArrayNotRegistered() throws Exception { checkSimpleArray(false); } @@ -371,6 +386,7 @@ public void testSimpleArrayNotRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testSimpleBuilderArrayRegistered() throws Exception { checkSimpleBuilderArray(true); } @@ -380,6 +396,7 @@ public void testSimpleBuilderArrayRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testSimpleBuilderArrayNotRegistered() throws Exception { checkSimpleBuilderArray(false); } @@ -420,6 +437,7 @@ public void checkSimpleBuilderArray(boolean registered) throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testZeroTypeId() throws Exception { startUp(true); @@ -437,6 +455,7 @@ public void testZeroTypeId() throws Exception { * * @throws Exception */ + @Test public void testBinaryTypeEnumValues() throws Exception { startUp(false); @@ -455,6 +474,7 @@ public void testBinaryTypeEnumValues() throws Exception { * * @throws Exception */ + @Test public void testEnumWrongBinaryConfig() throws Exception { this.register = true; @@ -472,6 +492,7 @@ public void testEnumWrongBinaryConfig() throws Exception { * * @throws Exception */ + @Test public void testEnumValidation() throws Exception { startUp(false); @@ -493,6 +514,7 @@ public void testEnumValidation() throws Exception { * * @throws Exception */ + @Test public void testEnumMerge() throws Exception { startUp(false); @@ -543,6 +565,7 @@ public void testEnumMerge() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeclaredBodyEnumRegistered() throws Exception { checkDeclaredBodyEnum(true); } @@ -552,6 +575,7 @@ public void testDeclaredBodyEnumRegistered() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeclaredBodyEnumNotRegistered() throws Exception { checkDeclaredBodyEnum(false); } @@ -644,7 +668,7 @@ private void validate(BinaryObject obj, EnumType val) { if (register) assertEquals(val.name(), obj.enumName()); } - + /** * Validate single value. 
* diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldExtractionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldExtractionSelfTest.java index a050591b2d693..132253f8d8507 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldExtractionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldExtractionSelfTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.marshaller.MarshallerContextTestImpl; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class BinaryFieldExtractionSelfTest extends GridCommonAbstractTest { /** * Create marshaller. @@ -60,6 +64,7 @@ protected BinaryMarshaller createMarshaller() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimitiveMarshalling() throws Exception { BinaryMarshaller marsh = createMarshaller(); @@ -101,6 +106,7 @@ public void testPrimitiveMarshalling() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimeMarshalling() throws Exception { BinaryMarshaller marsh = createMarshaller(); @@ -122,6 +128,7 @@ public void testTimeMarshalling() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDecimalFieldMarshalling() throws Exception { BinaryMarshaller marsh = createMarshaller(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldsAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldsAbstractSelfTest.java index fd095e99f9d0b..19a349c3354ba 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldsAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFieldsAbstractSelfTest.java @@ -33,10 +33,14 @@ import java.util.Arrays; import java.util.Date; import java.util.UUID; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Contains tests for binary object fields. */ +@RunWith(JUnit4.class) public abstract class BinaryFieldsAbstractSelfTest extends GridCommonAbstractTest { /** Marshaller. */ protected BinaryMarshaller dfltMarsh; @@ -105,6 +109,7 @@ protected static BinaryContext binaryContext(BinaryMarshaller marsh) { * * @throws Exception If failed. */ + @Test public void testByte() throws Exception { check("fByte"); } @@ -114,6 +119,7 @@ public void testByte() throws Exception { * * @throws Exception If failed. */ + @Test public void testByteArray() throws Exception { check("fByteArr"); } @@ -123,6 +129,7 @@ public void testByteArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { check("fBool"); } @@ -132,6 +139,7 @@ public void testBoolean() throws Exception { * * @throws Exception If failed. */ + @Test public void testBooleanArray() throws Exception { check("fBoolArr"); } @@ -141,6 +149,7 @@ public void testBooleanArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testShort() throws Exception { check("fShort"); } @@ -150,6 +159,7 @@ public void testShort() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testShortArray() throws Exception { check("fShortArr"); } @@ -159,6 +169,7 @@ public void testShortArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testChar() throws Exception { check("fChar"); } @@ -168,6 +179,7 @@ public void testChar() throws Exception { * * @throws Exception If failed. */ + @Test public void testCharArray() throws Exception { check("fCharArr"); } @@ -177,6 +189,7 @@ public void testCharArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testInt() throws Exception { check("fInt"); } @@ -186,6 +199,7 @@ public void testInt() throws Exception { * * @throws Exception If failed. */ + @Test public void testIntArray() throws Exception { check("fIntArr"); } @@ -195,6 +209,7 @@ public void testIntArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testLong() throws Exception { check("fLong"); } @@ -204,6 +219,7 @@ public void testLong() throws Exception { * * @throws Exception If failed. */ + @Test public void testLongArray() throws Exception { check("fLongArr"); } @@ -213,6 +229,7 @@ public void testLongArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { check("fFloat"); } @@ -222,6 +239,7 @@ public void testFloat() throws Exception { * * @throws Exception If failed. */ + @Test public void testFloatArray() throws Exception { check("fFloatArr"); } @@ -231,6 +249,7 @@ public void testFloatArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { check("fDouble"); } @@ -240,6 +259,7 @@ public void testDouble() throws Exception { * * @throws Exception If failed. */ + @Test public void testDoubleArray() throws Exception { check("fDoubleArr"); } @@ -249,6 +269,7 @@ public void testDoubleArray() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testString() throws Exception { check("fString"); } @@ -258,6 +279,7 @@ public void testString() throws Exception { * * @throws Exception If failed. */ + @Test public void testStringArray() throws Exception { check("fStringArr"); } @@ -267,6 +289,7 @@ public void testStringArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testDate() throws Exception { check("fDate"); } @@ -276,6 +299,7 @@ public void testDate() throws Exception { * * @throws Exception If failed. */ + @Test public void testDateArray() throws Exception { check("fDateArr"); } @@ -285,6 +309,7 @@ public void testDateArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { check("fTimestamp"); } @@ -294,6 +319,7 @@ public void testTimestamp() throws Exception { * * @throws Exception If failed. */ + @Test public void testTimestampArray() throws Exception { check("fTimestampArr"); } @@ -303,6 +329,7 @@ public void testTimestampArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testUuid() throws Exception { check("fUuid"); } @@ -312,6 +339,7 @@ public void testUuid() throws Exception { * * @throws Exception If failed. */ + @Test public void testUuidArray() throws Exception { check("fUuidArr"); } @@ -321,6 +349,7 @@ public void testUuidArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testDecimal() throws Exception { check("fDecimal"); } @@ -330,6 +359,7 @@ public void testDecimal() throws Exception { * * @throws Exception If failed. */ + @Test public void testDecimalArray() throws Exception { check("fDecimalArr"); } @@ -339,6 +369,7 @@ public void testDecimalArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testObject() throws Exception { check("fObj"); } @@ -348,6 +379,7 @@ public void testObject() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testObjectArray() throws Exception { check("fObjArr"); } @@ -357,6 +389,7 @@ public void testObjectArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testNull() throws Exception { check("fNull"); } @@ -366,6 +399,7 @@ public void testNull() throws Exception { * * @throws Exception If failed. */ + @Test public void testMissing() throws Exception { String fieldName = "fMissing"; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFooterOffsetsAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFooterOffsetsAbstractSelfTest.java index 265d283d3d378..f3e40c59ff287 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFooterOffsetsAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryFooterOffsetsAbstractSelfTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.logger.NullLogger; import org.apache.ignite.marshaller.MarshallerContextTestImpl; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Contains tests for compact offsets. */ +@RunWith(JUnit4.class) public abstract class BinaryFooterOffsetsAbstractSelfTest extends GridCommonAbstractTest { /** 2 pow 8. */ private static int POW_8 = 1 << 8; @@ -78,6 +82,7 @@ protected boolean compactFooter() { * * @throws Exception If failed. */ + @Test public void test1Byte() throws Exception { check(POW_8 >> 2); } @@ -87,6 +92,7 @@ public void test1Byte() throws Exception { * * @throws Exception If failed. */ + @Test public void test1ByteSign() throws Exception { check(POW_8 >> 1); } @@ -96,6 +102,7 @@ public void test1ByteSign() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void test2Bytes() throws Exception { check(POW_16 >> 2); } @@ -105,6 +112,7 @@ public void test2Bytes() throws Exception { * * @throws Exception If failed. */ + @Test public void test2BytesSign() throws Exception { check(POW_16 >> 1); } @@ -114,6 +122,7 @@ public void test2BytesSign() throws Exception { * * @throws Exception If failed. */ + @Test public void test4Bytes() throws Exception { check(POW_16 << 2); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryMarshallerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryMarshallerSelfTest.java index 1ba6edc2bd979..eaca668365fe2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryMarshallerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryMarshallerSelfTest.java @@ -25,7 +25,6 @@ import java.io.Serializable; import java.lang.reflect.Field; import java.lang.reflect.InvocationHandler; -import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.lang.reflect.Proxy; import java.math.BigDecimal; @@ -97,6 +96,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.nio.charset.StandardCharsets.UTF_8; import static org.apache.ignite.internal.binary.streams.BinaryMemoryAllocator.INSTANCE; @@ -106,11 +108,13 @@ /** * Binary marshaller tests. */ -@SuppressWarnings({"OverlyStrongTypeCast", "ArrayHashCode", "ConstantConditions"}) +@SuppressWarnings({"OverlyStrongTypeCast", "ConstantConditions"}) +@RunWith(JUnit4.class) public class BinaryMarshallerSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testNull() throws Exception { assertNull(marshalUnmarshal(null)); } @@ -118,6 +122,7 @@ public void testNull() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { assertEquals((byte)100, marshalUnmarshal((byte)100).byteValue()); } @@ -125,6 +130,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { assertEquals((short)100, marshalUnmarshal((short)100).shortValue()); } @@ -132,6 +138,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInt() throws Exception { assertEquals(100, marshalUnmarshal(100).intValue()); } @@ -139,6 +146,7 @@ public void testInt() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { assertEquals(100L, marshalUnmarshal(100L).longValue()); } @@ -146,6 +154,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { assertEquals(100.001f, marshalUnmarshal(100.001f), 0); } @@ -153,6 +162,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { assertEquals(100.001d, marshalUnmarshal(100.001d), 0); } @@ -160,6 +170,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. */ + @Test public void testChar() throws Exception { assertEquals((char)100, marshalUnmarshal((char)100).charValue()); } @@ -167,6 +178,7 @@ public void testChar() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { assertEquals(true, marshalUnmarshal(true).booleanValue()); } @@ -174,6 +186,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDecimal() throws Exception { BigDecimal val; @@ -190,6 +203,7 @@ public void testDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNegativeScaleDecimal() throws Exception { BigDecimal val; @@ -202,6 +216,7 @@ public void testNegativeScaleDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNegativeScaleRoundingModeDecimal() throws Exception { BigDecimal val; @@ -218,6 +233,7 @@ public void testNegativeScaleRoundingModeDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStringVer1() throws Exception { doTestString(false); } @@ -225,6 +241,7 @@ public void testStringVer1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStringVer2() throws Exception { doTestString(true); } @@ -286,6 +303,7 @@ private void doTestString(boolean ver2) throws Exception { /** * @throws Exception If failed. */ + @Test public void testUuid() throws Exception { UUID uuid = UUID.randomUUID(); @@ -295,6 +313,7 @@ public void testUuid() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteUuid() throws Exception { IgniteUuid uuid = IgniteUuid.randomUuid(); @@ -304,6 +323,7 @@ public void testIgniteUuid() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDate() throws Exception { Date date = new Date(); @@ -316,6 +336,7 @@ public void testDate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestamp() throws Exception { Timestamp ts = new Timestamp(System.currentTimeMillis()); @@ -327,6 +348,7 @@ public void testTimestamp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTime() throws Exception { Time time = new Time(System.currentTimeMillis()); assertEquals(time, marshalUnmarshal(time)); @@ -335,6 +357,7 @@ public void testTime() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTimeArray() throws Exception { Time[] times = new Time[]{new Time(System.currentTimeMillis()), new Time(123456789)}; assertArrayEquals(times, marshalUnmarshal(times)); @@ -343,6 +366,7 @@ public void testTimeArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByteArray() throws Exception { byte[] arr = new byte[] {10, 20, 30}; @@ -352,6 +376,7 @@ public void testByteArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShortArray() throws Exception { short[] arr = new short[] {10, 20, 30}; @@ -361,6 +386,7 @@ public void testShortArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIntArray() throws Exception { int[] arr = new int[] {10, 20, 30}; @@ -370,6 +396,7 @@ public void testIntArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLongArray() throws Exception { long[] arr = new long[] {10, 20, 30}; @@ -379,6 +406,7 @@ public void testLongArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloatArray() throws Exception { float[] arr = new float[] {10.1f, 20.1f, 30.1f}; @@ -388,6 +416,7 @@ public void testFloatArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDoubleArray() throws Exception { double[] arr = new double[] {10.1d, 20.1d, 30.1d}; @@ -397,6 +426,7 @@ public void testDoubleArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCharArray() throws Exception { char[] arr = new char[] {10, 20, 30}; @@ -406,6 +436,7 @@ public void testCharArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBooleanArray() throws Exception { boolean[] arr = new boolean[] {true, false, true}; @@ -415,6 +446,7 @@ public void testBooleanArray() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDecimalArray() throws Exception { BigDecimal[] arr = new BigDecimal[] {BigDecimal.ZERO, BigDecimal.ONE, BigDecimal.TEN}; @@ -424,6 +456,7 @@ public void testDecimalArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStringArray() throws Exception { String[] arr = new String[] {"str1", "str2", "str3"}; @@ -433,6 +466,7 @@ public void testStringArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUuidArray() throws Exception { UUID[] arr = new UUID[] {UUID.randomUUID(), UUID.randomUUID(), UUID.randomUUID()}; @@ -442,6 +476,7 @@ public void testUuidArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDateArray() throws Exception { Date[] arr = new Date[] {new Date(11111), new Date(22222), new Date(33333)}; @@ -451,6 +486,7 @@ public void testDateArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObjectArray() throws Exception { Object[] arr = new Object[] {1, 2, 3}; @@ -460,6 +496,7 @@ public void testObjectArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testException() throws Exception { Exception ex = new RuntimeException(); @@ -471,6 +508,7 @@ public void testException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollection() throws Exception { testCollection(new ArrayList(3)); testCollection(new LinkedHashSet()); @@ -493,6 +531,7 @@ private void testCollection(Collection col) throws Exception { /** * @throws Exception If failed. */ + @Test public void testMap() throws Exception { testMap(new HashMap()); testMap(new LinkedHashMap()); @@ -518,6 +557,7 @@ private void testMap(Map map) throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("unchecked") + @Test public void testCustomCollections() throws Exception { CustomCollections cc = new CustomCollections(); @@ -541,6 +581,7 @@ public void testCustomCollections() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testCustomCollections2() throws Exception { CustomArrayList arrList = new CustomArrayList(); @@ -563,6 +604,7 @@ public void testCustomCollections2() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testCustomCollectionsWithFactory() throws Exception { CustomCollectionsWithFactory cc = new CustomCollectionsWithFactory(); @@ -581,6 +623,7 @@ public void testCustomCollectionsWithFactory() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExternalizableInEnclosing() throws Exception { SimpleEnclosingObject obj = new SimpleEnclosingObject(); obj.simpl = new SimpleExternalizable("field"); @@ -593,6 +636,7 @@ public void testExternalizableInEnclosing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMapEntry() throws Exception { Map.Entry e = new GridMapEntry<>(1, "str1"); @@ -613,6 +657,7 @@ public void testMapEntry() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryObject() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList(new BinaryTypeConfiguration(SimpleObject.class.getName()))); @@ -640,6 +685,7 @@ public void testBinaryObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEnum() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList(new BinaryTypeConfiguration(TestEnum.class.getName()))); @@ -649,6 +695,7 @@ public void testEnum() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDeclaredBodyEnum() throws Exception { final MarshallerContextTestImpl ctx = new MarshallerContextTestImpl(); ctx.registerClassName((byte)0, 1, EnumObject.class.getName()); @@ -675,6 +722,7 @@ public void testDeclaredBodyEnum() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDateAndTimestampInSingleObject() throws Exception { BinaryTypeConfiguration cfg1 = new BinaryTypeConfiguration(DateClass1.class.getName()); @@ -712,6 +760,7 @@ public void testDateAndTimestampInSingleObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimpleObject() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -795,6 +844,7 @@ public void testSimpleObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinary() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()), @@ -916,6 +966,7 @@ public void testBinary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObjectFieldOfExternalizableCollection() throws Exception { EnclosingObj obj = new EnclosingObj(); @@ -927,6 +978,7 @@ public void testObjectFieldOfExternalizableCollection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVoid() throws Exception { Class clazz = Void.class; @@ -940,6 +992,7 @@ public void testVoid() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWriteReplacePrivate() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Collections.singleton( new BinaryTypeConfiguration(TestObject.class.getName()) @@ -957,6 +1010,7 @@ public void testWriteReplacePrivate() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testWriteReplaceInheritable() throws Exception { ImmutableList obj = ImmutableList.of("This is a test"); @@ -1114,6 +1168,7 @@ private void checkSimpleObjectData(SimpleObject obj, BinaryObject po) { /** * @throws Exception If failed. */ + @Test public void testClassWithoutPublicConstructor() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(NoPublicConstructor.class.getName()), @@ -1140,6 +1195,7 @@ public void testClassWithoutPublicConstructor() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomSerializer() throws Exception { BinaryTypeConfiguration type = new BinaryTypeConfiguration(CustomSerializedObject1.class.getName()); @@ -1158,6 +1214,7 @@ public void testCustomSerializer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomSerializerWithGlobal() throws Exception { BinaryTypeConfiguration type1 = new BinaryTypeConfiguration(CustomSerializedObject1.class.getName()); @@ -1184,6 +1241,7 @@ public void testCustomSerializerWithGlobal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomIdMapper() throws Exception { BinaryTypeConfiguration type = new BinaryTypeConfiguration(CustomMappedObject1.class.getName()); @@ -1224,6 +1282,7 @@ else if ("val2".equals(fieldName)) /** * @throws Exception If failed. */ + @Test public void testCustomIdMapperWithGlobal() throws Exception { BinaryTypeConfiguration type1 = new BinaryTypeConfiguration(CustomMappedObject1.class.getName()); @@ -1294,6 +1353,7 @@ else if ("val2".equals(fieldName)) /** * @throws Exception If failed. 
*/ + @Test public void testSimpleNameLowerCaseMappers() throws Exception { BinaryTypeConfiguration innerClassType = new BinaryTypeConfiguration(InnerMappedObject.class.getName()); BinaryTypeConfiguration publicClassType = new BinaryTypeConfiguration(TestMappedObject.class.getName()); @@ -1351,6 +1411,7 @@ else if ("val2".equals(fieldName)) /** * @throws Exception If failed. */ + @Test public void testDynamicObject() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(DynamicObject.class.getName()) @@ -1396,6 +1457,7 @@ public void testDynamicObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCycleLink() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(CycleLinkObject.class.getName()) @@ -1415,6 +1477,7 @@ public void testCycleLink() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDetached() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(DetachedTestObject.class.getName()), @@ -1469,6 +1532,7 @@ public void testDetached() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollectionFields() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(CollectionFieldsObject.class.getName()), @@ -1509,24 +1573,14 @@ public void testCollectionFields() throws Exception { /** * @throws Exception If failed. 
*/ - public void _testDefaultMapping() throws Exception { + @Test + public void testDefaultMapping() throws Exception { BinaryTypeConfiguration customMappingType = new BinaryTypeConfiguration(TestBinary.class.getName()); customMappingType.setIdMapper(new BinaryIdMapper() { @Override public int typeId(String clsName) { - String typeName; - - try { - Method mtd = BinaryContext.class.getDeclaredMethod("typeName", String.class); - - mtd.setAccessible(true); - - typeName = (String)mtd.invoke(null, clsName); - } - catch (NoSuchMethodException | InvocationTargetException | IllegalAccessException e) { - throw new RuntimeException(e); - } + String typeName = BinaryContext.SIMPLE_NAME_LOWER_CASE_MAPPER.typeName(clsName); return typeName.toLowerCase().hashCode(); } @@ -1558,6 +1612,7 @@ public void _testDefaultMapping() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTypeNamesSimpleNameMapper() throws Exception { BinaryTypeConfiguration customType1 = new BinaryTypeConfiguration(Value.class.getName()); @@ -1636,6 +1691,7 @@ public void testTypeNamesSimpleNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTypeNamesFullNameMappers() throws Exception { BinaryTypeConfiguration customType1 = new BinaryTypeConfiguration(Value.class.getName()); @@ -1714,6 +1770,7 @@ public void testTypeNamesFullNameMappers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTypeNamesSimpleNameMappers() throws Exception { BinaryTypeConfiguration customType1 = new BinaryTypeConfiguration(Value.class.getName()); @@ -1801,6 +1858,7 @@ public void testTypeNamesSimpleNameMappers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTypeNamesCustomIdMapper() throws Exception { BinaryTypeConfiguration customType1 = new BinaryTypeConfiguration(Value.class.getName()); @@ -1915,6 +1973,7 @@ else if ("NonExistentClass4".equals(clsName)) /** * @throws Exception If failed. 
*/ + @Test public void testCustomTypeRegistration() throws Exception { BinaryTypeConfiguration customType = new BinaryTypeConfiguration(Value.class.getName()); @@ -1956,6 +2015,7 @@ public void testCustomTypeRegistration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFieldIdMapping() throws Exception { BinaryTypeConfiguration customType1 = new BinaryTypeConfiguration(Value.class.getName()); @@ -2020,6 +2080,7 @@ public void testFieldIdMapping() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDuplicateTypeId() throws Exception { BinaryTypeConfiguration customType1 = new BinaryTypeConfiguration("org.gridgain.Class1"); @@ -2061,6 +2122,7 @@ public void testDuplicateTypeId() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopy() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2184,6 +2246,7 @@ public void testBinaryCopy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyString() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2205,6 +2268,7 @@ public void testBinaryCopyString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyUuid() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2228,6 +2292,7 @@ public void testBinaryCopyUuid() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyByteArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2265,6 +2330,7 @@ private BinaryObject copy(BinaryObject po, Map fields) { /** * @throws Exception If failed. 
*/ + @Test public void testBinaryCopyShortArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2286,6 +2352,7 @@ public void testBinaryCopyShortArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyIntArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2307,6 +2374,7 @@ public void testBinaryCopyIntArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyLongArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2328,6 +2396,7 @@ public void testBinaryCopyLongArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyFloatArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2349,6 +2418,7 @@ public void testBinaryCopyFloatArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyDoubleArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2370,6 +2440,7 @@ public void testBinaryCopyDoubleArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyCharArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2391,6 +2462,7 @@ public void testBinaryCopyCharArray() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBinaryCopyStringArray() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2412,6 +2484,7 @@ public void testBinaryCopyStringArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyObject() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2439,6 +2512,7 @@ public void testBinaryCopyObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyNonPrimitives() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(SimpleObject.class.getName()) @@ -2476,6 +2550,7 @@ public void testBinaryCopyNonPrimitives() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryCopyMixed() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList(new BinaryTypeConfiguration(SimpleObject.class.getName()))); @@ -2520,6 +2595,7 @@ public void testBinaryCopyMixed() throws Exception { /** * @throws Exception If failed. */ + @Test public void testKeepDeserialized() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList(new BinaryTypeConfiguration(SimpleObject.class.getName()))); @@ -2537,6 +2613,7 @@ public void testKeepDeserialized() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOffheapBinary() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList(new BinaryTypeConfiguration(SimpleObject.class.getName()))); @@ -2631,6 +2708,7 @@ public void testOffheapBinary() throws Exception { /** * */ + @Test public void testReadResolve() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Arrays.asList( new BinaryTypeConfiguration(MySingleton.class.getName()), @@ -2648,6 +2726,7 @@ public void testReadResolve() throws Exception { /** * */ + @Test public void testReadResolveOnBinaryAware() throws Exception { BinaryMarshaller marsh = binaryMarshaller(Collections.singletonList( new BinaryTypeConfiguration(MyTestClass.class.getName()))); @@ -2662,6 +2741,7 @@ public void testReadResolveOnBinaryAware() throws Exception { /** * */ + @Test public void testDecimalFields() throws Exception { Collection clsNames = new ArrayList<>(); @@ -2732,6 +2812,7 @@ public void testDecimalFields() throws Exception { /** * @throws IgniteCheckedException If failed. */ + @Test public void testFinalField() throws IgniteCheckedException { BinaryMarshaller marsh = binaryMarshaller(); @@ -2745,6 +2826,7 @@ public void testFinalField() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testThreadLocalArrayReleased() throws Exception { // Checking the writer directly. assertEquals(false, INSTANCE.isAcquired()); @@ -2786,6 +2868,7 @@ public void testThreadLocalArrayReleased() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDuplicateNameSimpleNameMapper() throws Exception { BinaryMarshaller marsh = binaryMarshaller(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true), null, null, null); @@ -2810,6 +2893,7 @@ public void testDuplicateNameSimpleNameMapper() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDuplicateNameFullNameMapper() throws Exception { BinaryMarshaller marsh = binaryMarshaller(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false), null, null, null); @@ -2825,6 +2909,7 @@ public void testDuplicateNameFullNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClass() throws Exception { BinaryMarshaller marsh = binaryMarshaller(); @@ -2838,6 +2923,7 @@ public void testClass() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClassFieldsMarshalling() throws Exception { BinaryMarshaller marsh = binaryMarshaller(); @@ -2861,6 +2947,7 @@ public void testClassFieldsMarshalling() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMarshallingThroughJdk() throws Exception { BinaryMarshaller marsh = binaryMarshaller(); @@ -2897,6 +2984,7 @@ public void testMarshallingThroughJdk() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPredefinedTypeIds() throws Exception { BinaryMarshaller marsh = binaryMarshaller(); @@ -2925,6 +3013,7 @@ public void testPredefinedTypeIds() throws Exception { /** * @throws Exception If failed. */ + @Test public void testProxy() throws Exception { BinaryMarshaller marsh = binaryMarshaller(); @@ -2954,6 +3043,7 @@ public void testProxy() throws Exception { * * @throws Exception If fails. */ + @Test public void testObjectContainingProxy() throws Exception { BinaryMarshaller marsh = binaryMarshaller(); @@ -2983,6 +3073,7 @@ public void testObjectContainingProxy() throws Exception { * * @throws Exception If failed. */ + @Test public void testDuplicateFields() throws Exception { BinaryMarshaller marsh = binaryMarshaller(); @@ -3038,6 +3129,7 @@ public void testDuplicateFields() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSingleHandle() throws Exception { SingleHandleA a = new SingleHandleA(new SingleHandleB()); @@ -3053,6 +3145,7 @@ public void testSingleHandle() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUnregisteredClass() throws Exception { BinaryMarshaller m = binaryMarshaller(null, Collections.singletonList(Value.class.getName())); @@ -3064,6 +3157,7 @@ public void testUnregisteredClass() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMixedRawCollections() throws Exception { Collection excludedClasses = Arrays.asList( ObjectRaw.class.getName(), @@ -3091,6 +3185,7 @@ public void testMixedRawCollections() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryEquals() throws Exception { Collection excludedClasses = Arrays.asList( ObjectRaw.class.getName(), @@ -3226,6 +3321,7 @@ private void checkEquals(Object binObj0, Object binObj1) { /** * @throws Exception If failed. */ + @Test public void testBinaryEqualsComplexObject() throws Exception { List excludedClasses = Arrays.asList( TestClass0.class.getName(), @@ -3284,6 +3380,7 @@ public void testBinaryEqualsComplexObject() throws Exception { * * @throws Exception If failed. */ + @Test public void testFieldOrder() throws Exception { if (BinaryUtils.FIELDS_SORTED_ORDER) return; @@ -3313,6 +3410,7 @@ public void testFieldOrder() throws Exception { * * @throws Exception If failed. */ + @Test public void testFieldOrderByBuilder() throws Exception { if (BinaryUtils.FIELDS_SORTED_ORDER) return; @@ -3402,7 +3500,6 @@ private static class NonSerializableA { * @param strArr Array. * @param shortVal Short value. */ - @SuppressWarnings({"UnusedDeclaration"}) private NonSerializableA(@Nullable String[] strArr, @Nullable Short shortVal) { // No-op. } @@ -3499,7 +3596,6 @@ private static class NonSerializable extends NonSerializableB { * * @param aVal Unused. 
      */
-    @SuppressWarnings({"UnusedDeclaration"})
     private NonSerializable(NonSerializableA aVal) {
     }
@@ -4169,7 +4265,6 @@ private static class SimpleObject {
         private SimpleObject inner;

         /** {@inheritDoc} */
-        @SuppressWarnings("FloatingPointEquality")
         @Override public boolean equals(Object other) {
             if (this == other)
                 return true;
@@ -4525,7 +4620,6 @@ private static class TestBinary implements Binarylizable {
         }

         /** {@inheritDoc} */
-        @SuppressWarnings("FloatingPointEquality")
         @Override public boolean equals(Object other) {
             if (this == other)
                 return true;
@@ -4916,7 +5010,6 @@ private DetachedInnerTestObject(DetachedInnerTestObject inner, UUID id) {

     /** */
-    @SuppressWarnings("UnusedDeclaration")
     private static class CollectionFieldsObject {
         /** */
         private Object[] arr;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderAdditionalSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderAdditionalSelfTest.java
index 2366bac0dfaf5..5a405439d04bb 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderAdditionalSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderAdditionalSelfTest.java
@@ -56,7 +56,6 @@
 import org.apache.ignite.configuration.BinaryConfiguration;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.internal.GridKernalContext;
 import org.apache.ignite.internal.MarshallerPlatformIds;
 import org.apache.ignite.internal.binary.builder.BinaryBuilderEnum;
 import org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl;
@@ -70,6 +69,9 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
@@ -77,6 +79,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class BinaryObjectBuilderAdditionalSelfTest extends GridCommonAbstractTest {
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -128,6 +131,7 @@ protected IgniteBinary binaries() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSimpleTypeFieldRead() throws Exception {
         GridBinaryTestClasses.TestObjectAllTypes exp = new GridBinaryTestClasses.TestObjectAllTypes();

@@ -162,6 +166,7 @@
     /**
      *
      */
+    @Test
     public void testSimpleTypeFieldSerialize() {
         GridBinaryTestClasses.TestObjectAllTypes exp = new GridBinaryTestClasses.TestObjectAllTypes();

@@ -177,6 +182,7 @@
     /**
      * @throws Exception If any error occurs.
      */
+    @Test
     public void testSimpleTypeFieldOverride() throws Exception {
         GridBinaryTestClasses.TestObjectAllTypes exp = new GridBinaryTestClasses.TestObjectAllTypes();

@@ -195,6 +201,7 @@ public void testSimpleTypeFieldOverride() throws Exception {
     /**
      * @throws Exception If any error occurs.
      */
+    @Test
     public void testSimpleTypeFieldSetNull() throws Exception {
         GridBinaryTestClasses.TestObjectAllTypes exp = new GridBinaryTestClasses.TestObjectAllTypes();

@@ -218,6 +225,7 @@
     /**
      * @throws IgniteCheckedException If any error occurs.
*/ + @Test public void testMakeCyclicDependency() throws IgniteCheckedException { GridBinaryTestClasses.TestObjectOuter outer = new GridBinaryTestClasses.TestObjectOuter(); outer.inner = new GridBinaryTestClasses.TestObjectInner(); @@ -238,6 +246,7 @@ public void testMakeCyclicDependency() throws IgniteCheckedException { /** * */ + @Test public void testDateArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -256,6 +265,7 @@ public void testDateArrayModification() { /** * */ + @Test public void testTimestampArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -274,6 +284,7 @@ public void testTimestampArrayModification() { /** * */ + @Test public void testUUIDArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -292,6 +303,7 @@ public void testUUIDArrayModification() { /** * */ + @Test public void testDecimalArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -311,6 +323,7 @@ public void testDecimalArrayModification() { /** * */ + @Test public void testBooleanArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -334,6 +347,7 @@ public void testBooleanArrayModification() { /** * */ + @Test public void testCharArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -352,6 +366,7 @@ public void testCharArrayModification() { /** * */ + @Test public void testDoubleArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -370,6 +385,7 @@ public void testDoubleArrayModification() { /** * */ + @Test public void testFloatArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new 
GridBinaryTestClasses.TestObjectAllTypes(); @@ -390,6 +406,7 @@ public void testFloatArrayModification() { /** * */ + @Test public void testLongArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -408,6 +425,7 @@ public void testLongArrayModification() { /** * */ + @Test public void testIntArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -426,6 +444,7 @@ public void testIntArrayModification() { /** * */ + @Test public void testShortArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -444,6 +463,7 @@ public void testShortArrayModification() { /** * */ + @Test public void testByteArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -462,6 +482,7 @@ public void testByteArrayModification() { /** * */ + @Test public void testStringArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -480,6 +501,7 @@ public void testStringArrayModification() { /** * */ + @Test public void testModifyObjectArray() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = new Object[] {"a"}; @@ -500,6 +522,7 @@ public void testModifyObjectArray() { /** * */ + @Test public void testOverrideObjectArrayField() { BinaryObjectBuilderImpl mutObj = wrap(new GridBinaryTestClasses.TestObjectContainer()); @@ -517,6 +540,7 @@ public void testOverrideObjectArrayField() { /** * */ + @Test public void testDeepArray() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = new Object[] {new Object[] {"a", obj}}; @@ -541,6 +565,7 @@ public void testDeepArray() { /** * */ + @Test public void testArrayListRead() { GridBinaryTestClasses.TestObjectContainer obj = new 
GridBinaryTestClasses.TestObjectContainer(); obj.foo = Lists.newArrayList(obj, "a"); @@ -555,6 +580,7 @@ public void testArrayListRead() { /** * */ + @Test public void testArrayListOverride() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -575,6 +601,7 @@ public void testArrayListOverride() { /** * */ + @Test public void testArrayListModification() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Lists.newArrayList("a", "b", "c"); @@ -602,6 +629,7 @@ public void testArrayListModification() { /** * */ + @Test public void testArrayListClear() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Lists.newArrayList("a", "b", "c"); @@ -618,6 +646,7 @@ public void testArrayListClear() { /** * */ + @Test public void testArrayListWriteUnmodifiable() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -639,6 +668,7 @@ public void testArrayListWriteUnmodifiable() { /** * */ + @Test public void testLinkedListRead() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Lists.newLinkedList(Arrays.asList(obj, "a")); @@ -653,6 +683,7 @@ public void testLinkedListRead() { /** * */ + @Test public void testLinkedListOverride() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -673,6 +704,7 @@ public void testLinkedListOverride() { /** * */ + @Test public void testLinkedListModification() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -701,6 +733,7 @@ public void testLinkedListModification() { /** * */ + @Test public void testLinkedListWriteUnmodifiable() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -722,6 +755,7 @@ public void 
testLinkedListWriteUnmodifiable() { /** * */ + @Test public void testHashSetRead() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Sets.newHashSet(obj, "a"); @@ -736,6 +770,7 @@ public void testHashSetRead() { /** * */ + @Test public void testHashSetOverride() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -757,6 +792,7 @@ public void testHashSetOverride() { /** * */ + @Test public void testHashSetModification() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Sets.newHashSet("a", "b", "c"); @@ -781,6 +817,7 @@ public void testHashSetModification() { /** * */ + @Test public void testHashSetWriteUnmodifiable() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -801,6 +838,7 @@ public void testHashSetWriteUnmodifiable() { /** * */ + @Test public void testMapRead() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Maps.newHashMap(ImmutableMap.of(obj, "a", "b", obj)); @@ -815,6 +853,7 @@ public void testMapRead() { /** * */ + @Test public void testMapOverride() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -832,6 +871,7 @@ public void testMapOverride() { /** * */ + @Test public void testMapModification() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Maps.newHashMap(ImmutableMap.of(1, "a", 2, "b")); @@ -853,6 +893,7 @@ public void testMapModification() { /** * */ + @Test public void testEnumArrayModification() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); @@ -871,6 +912,7 @@ public void testEnumArrayModification() { /** * */ + @Test public void testEditObjectWithRawData() { GridBinaryMarshalerAwareTestClass obj = new 
GridBinaryMarshalerAwareTestClass(); @@ -889,6 +931,7 @@ public void testEditObjectWithRawData() { /** * */ + @Test public void testHashCode() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -902,6 +945,7 @@ public void testHashCode() { /** * */ + @Test public void testCollectionsInCollection() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); obj.foo = Lists.newArrayList( @@ -919,6 +963,7 @@ public void testCollectionsInCollection() { /** * */ + @Test public void testMapEntryOverride() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -934,6 +979,7 @@ public void testMapEntryOverride() { /** * */ + @Test public void testMetadataChangingDoublePut() { BinaryObjectBuilderImpl mutableObj = wrap(new GridBinaryTestClasses.TestObjectContainer()); @@ -950,6 +996,7 @@ public void testMetadataChangingDoublePut() { /** * */ + @Test public void testMetadataChangingDoublePut2() { BinaryObjectBuilderImpl mutableObj = wrap(new GridBinaryTestClasses.TestObjectContainer()); @@ -966,6 +1013,7 @@ public void testMetadataChangingDoublePut2() { /** * */ + @Test public void testMetadataChanging() { GridBinaryTestClasses.TestObjectContainer c = new GridBinaryTestClasses.TestObjectContainer(); @@ -1000,6 +1048,7 @@ public void testMetadataChanging() { /** * */ + @Test public void testWrongMetadataNullField() { BinaryObjectBuilder builder = binaries().builder("SomeType"); @@ -1037,6 +1086,7 @@ public void testWrongMetadataNullField() { /** * */ + @Test public void testWrongMetadataNullField2() { BinaryObjectBuilder builder = binaries().builder("SomeType1"); @@ -1074,6 +1124,7 @@ public void testWrongMetadataNullField2() { /** * */ + @Test public void testCorrectMetadataNullField() { BinaryObjectBuilder builder = binaries().builder("SomeType2"); @@ -1096,6 +1147,7 @@ public void testCorrectMetadataNullField() { /** * */ + @Test public 
void testCorrectMetadataNullField2() { BinaryObjectBuilder builder = binaries().builder("SomeType3"); @@ -1117,6 +1169,7 @@ public void testCorrectMetadataNullField2() { /** * */ + @Test public void testDateInObjectField() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1130,6 +1183,7 @@ public void testDateInObjectField() { /** * */ + @Test public void testTimestampInObjectField() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1143,6 +1197,7 @@ public void testTimestampInObjectField() { /** * */ + @Test public void testDateInCollection() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1156,6 +1211,7 @@ public void testDateInCollection() { /** * */ + @Test public void testTimestampInCollection() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1169,7 +1225,7 @@ public void testTimestampInCollection() { /** * */ - @SuppressWarnings("AssertEqualsBetweenInconvertibleTypes") + @Test public void testDateArrayOverride() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1188,7 +1244,7 @@ public void testDateArrayOverride() { /** * */ - @SuppressWarnings("AssertEqualsBetweenInconvertibleTypes") + @Test public void testTimestampArrayOverride() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1207,6 +1263,7 @@ public void testTimestampArrayOverride() { /** * */ + @Test public void testChangeMap() { GridBinaryTestClasses.Addresses addrs = new GridBinaryTestClasses.Addresses(); @@ -1249,6 +1306,7 @@ public void testChangeMap() { /** * */ + @Test public void testSavingObjectWithNotZeroStart() { GridBinaryTestClasses.TestObjectOuter out = new GridBinaryTestClasses.TestObjectOuter(); GridBinaryTestClasses.TestObjectInner inner = new 
GridBinaryTestClasses.TestObjectInner(); @@ -1268,6 +1326,7 @@ public void testSavingObjectWithNotZeroStart() { /** * */ + @Test public void testBinaryObjectField() { GridBinaryTestClasses.TestObjectContainer container = new GridBinaryTestClasses.TestObjectContainer(toBinary(new GridBinaryTestClasses.TestObjectArrayList())); @@ -1282,6 +1341,7 @@ public void testBinaryObjectField() { /** * */ + @Test public void testAssignBinaryObject() { GridBinaryTestClasses.TestObjectContainer container = new GridBinaryTestClasses.TestObjectContainer(); @@ -1296,6 +1356,7 @@ public void testAssignBinaryObject() { /** * */ + @Test public void testRemoveFromNewObject() { BinaryObjectBuilderImpl wrapper = newWrapper(GridBinaryTestClasses.TestObjectAllTypes.class); @@ -1309,6 +1370,7 @@ public void testRemoveFromNewObject() { /** * */ + @Test public void testRemoveFromExistingObject() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); obj.setDefaultData(); @@ -1323,6 +1385,7 @@ public void testRemoveFromExistingObject() { /** * */ + @Test public void testCyclicArrays() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1344,6 +1407,7 @@ public void testCyclicArrays() { * */ @SuppressWarnings("TypeMayBeWeakened") + @Test public void testCyclicArrayList() { GridBinaryTestClasses.TestObjectContainer obj = new GridBinaryTestClasses.TestObjectContainer(); @@ -1365,6 +1429,7 @@ public void testCyclicArrayList() { /** * @throws Exception If failed. */ + @Test public void testSameBinaryKey() throws Exception { IgniteCache replicatedCache = jcache(0).withKeepBinary(); @@ -1392,6 +1457,7 @@ public void testSameBinaryKey() throws Exception { /** * Ensure that object w/o schema can be re-built. 
*/ + @Test public void testBuildFromObjectWithoutSchema() { BinaryObjectBuilderImpl binBuilder = wrap(new GridBinaryTestClass2()); @@ -1441,6 +1507,7 @@ private BinaryObjectBuilderImpl newWrapper(String typeName) { /** * Check that correct type is stored in binary object. */ + @Test public void testCollectionsSerialization() { final BinaryObjectBuilder root = newWrapper(BigInteger.class); @@ -1522,6 +1589,7 @@ public void testCollectionsSerialization() { * * @throws Exception If failed. */ + @Test public void testBuilderExternalizable() throws Exception { BinaryObjectBuilder builder = newWrapper("TestType"); @@ -1557,6 +1625,7 @@ public void testBuilderExternalizable() throws Exception { * * @throws Exception If failed. */ + @Test public void testEnum() throws Exception { BinaryObjectBuilder builder = newWrapper("TestType"); @@ -1580,6 +1649,7 @@ public void testEnum() throws Exception { /** * Test {@link BinaryObjectBuilder#build()} adds type mapping to the binary marshaller's cache. */ + @Test public void testMarshallerMappings() throws IgniteCheckedException, ClassNotFoundException { String typeName = "TestType"; @@ -1614,6 +1684,7 @@ private TestEnum[] deserializeEnumBinaryArray(Object obj) { /** * @throws Exception If fails */ + @Test public void testBuilderReusage() throws Exception { // Check: rewrite null field value. 
BinaryObjectBuilder builder = newWrapper("SimpleCls1"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderDefaultMappersSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderDefaultMappersSelfTest.java index 1c5fbd7dece09..382687c1ff49b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderDefaultMappersSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectBuilderDefaultMappersSelfTest.java @@ -46,13 +46,16 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.util.GridUnsafe.BIG_ENDIAN; /** * Binary builder test. */ -@SuppressWarnings("ResultOfMethodCallIgnored") +@RunWith(JUnit4.class) public class BinaryObjectBuilderDefaultMappersSelfTest extends GridCommonAbstractTest { /** */ private static IgniteConfiguration cfg; @@ -111,6 +114,7 @@ protected boolean compactFooter() { /** * */ + @Test public void testAllFieldsSerialization() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); obj.setDefaultData(); @@ -124,6 +128,7 @@ public void testAllFieldsSerialization() { /** * @throws Exception If failed. */ + @Test public void testNullField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -152,6 +157,7 @@ public void testNullField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByteField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -168,6 +174,7 @@ public void testByteField() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testShortField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -184,6 +191,7 @@ public void testShortField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIntField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -200,6 +208,7 @@ public void testIntField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLongField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -216,6 +225,7 @@ public void testLongField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloatField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -232,6 +242,7 @@ public void testFloatField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDoubleField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -248,6 +259,7 @@ public void testDoubleField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCharField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -281,6 +293,7 @@ private int expectedHashCode(String fullName) { /** * @throws Exception If failed. */ + @Test public void testBooleanField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -297,6 +310,7 @@ public void testBooleanField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDecimalField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -313,6 +327,7 @@ public void testDecimalField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStringField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -329,6 +344,7 @@ public void testStringField() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDateField() throws Exception { Date date = new Date(); @@ -338,6 +354,7 @@ public void testDateField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestampField() throws Exception { Timestamp ts = new Timestamp(new Date().getTime()); ts.setNanos(1000); @@ -348,6 +365,7 @@ public void testTimestampField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUuidField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -366,6 +384,7 @@ public void testUuidField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByteArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -382,6 +401,7 @@ public void testByteArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShortArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -398,6 +418,7 @@ public void testShortArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIntArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -414,6 +435,7 @@ public void testIntArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLongArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -430,6 +452,7 @@ public void testLongArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloatArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -446,6 +469,7 @@ public void testFloatArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDoubleArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -462,6 +486,7 @@ public void testDoubleArrayField() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCharArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -478,6 +503,7 @@ public void testCharArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBooleanArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -499,6 +525,7 @@ public void testBooleanArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDecimalArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -515,6 +542,7 @@ public void testDecimalArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStringArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -531,6 +559,7 @@ public void testStringArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDateArrayField() throws Exception { Date date1 = new Date(); Date date2 = new Date(date1.getTime() + 1000); @@ -543,6 +572,7 @@ public void testDateArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestampArrayField() throws Exception { Timestamp ts1 = new Timestamp(new Date().getTime()); Timestamp ts2 = new Timestamp(new Date().getTime() + 1000); @@ -558,6 +588,7 @@ public void testTimestampArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUuidArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -576,6 +607,7 @@ public void testUuidArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObjectField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -592,6 +624,7 @@ public void testObjectField() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testObjectArrayField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -613,6 +646,7 @@ public void testObjectArrayField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollectionField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -640,6 +674,7 @@ public void testCollectionField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMapField() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -671,6 +706,7 @@ public void testMapField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSeveralFields() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -701,6 +737,7 @@ public void testSeveralFields() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOffheapBinary() throws Exception { BinaryObjectBuilder builder = builder("Class"); @@ -761,6 +798,7 @@ public void testOffheapBinary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBuildAndDeserialize() throws Exception { BinaryObjectBuilder builder = builder(Value.class.getName()); @@ -777,6 +815,7 @@ public void testBuildAndDeserialize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMetaData2() throws Exception { BinaryObjectBuilder builder = builder("org.test.MetaTest2"); @@ -806,6 +845,7 @@ private String expectedTypeName(String fullClsName) { /** * @throws Exception If failed. 
*/ + @Test public void testMetaData() throws Exception { BinaryObjectBuilder builder = builder("org.test.MetaTest"); @@ -855,6 +895,7 @@ public void testMetaData() throws Exception { /** * */ + @Test public void testGetFromCopiedObj() { BinaryObject objStr = builder(GridBinaryTestClasses.TestObjectAllTypes.class.getName()).setField("str", "aaa").build(); @@ -872,6 +913,7 @@ public void testGetFromCopiedObj() { * */ @SuppressWarnings("unchecked") + @Test public void testCopyFromInnerObjects() { ArrayList list = new ArrayList<>(); list.add(new GridBinaryTestClasses.TestObjectAllTypes()); @@ -896,6 +938,7 @@ public void testCopyFromInnerObjects() { /** * */ + @Test public void testSetBinaryObject() { // Prepare marshaller context. CacheObjectBinaryProcessorImpl proc = ((CacheObjectBinaryProcessorImpl)(grid(0)).context().cacheObjects()); @@ -915,6 +958,7 @@ public void testSetBinaryObject() { /** * */ + @Test public void testPlainBinaryObjectCopyFrom() { GridBinaryTestClasses.TestObjectPlainBinary obj = new GridBinaryTestClasses.TestObjectPlainBinary(toBinary(new GridBinaryTestClasses.TestObjectAllTypes())); @@ -928,6 +972,7 @@ public void testPlainBinaryObjectCopyFrom() { /** * */ + @Test public void testRemoveFromNewObject() { BinaryObjectBuilder builder = builder(GridBinaryTestClasses.TestObjectAllTypes.class.getName()); @@ -941,6 +986,7 @@ public void testRemoveFromNewObject() { /** * */ + @Test public void testRemoveFromExistingObject() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); obj.setDefaultData(); @@ -960,6 +1006,7 @@ public void testRemoveFromExistingObject() { /** * */ + @Test public void testRemoveFromExistingObjectAfterGet() { GridBinaryTestClasses.TestObjectAllTypes obj = new GridBinaryTestClasses.TestObjectAllTypes(); obj.setDefaultData(); @@ -977,6 +1024,7 @@ public void testRemoveFromExistingObjectAfterGet() { /** * @throws IgniteCheckedException If any error occurs. 
*/ + @Test public void testDontBrokeCyclicDependency() throws IgniteCheckedException { GridBinaryTestClasses.TestObjectOuter outer = new GridBinaryTestClasses.TestObjectOuter(); outer.inner = new GridBinaryTestClasses.TestObjectInner(); @@ -1025,7 +1073,6 @@ private BinaryObjectBuilderImpl builder(BinaryObject obj) { /** * */ - @SuppressWarnings("UnusedDeclaration") private static class CustomIdMapper { /** */ private String str = "a"; @@ -1036,7 +1083,6 @@ private static class CustomIdMapper { /** */ - @SuppressWarnings("UnusedDeclaration") private static class Key { /** */ private int i; @@ -1075,7 +1121,6 @@ private Key(int i) { /** */ - @SuppressWarnings("UnusedDeclaration") private static class Value { /** */ private int i; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectExceptionSelfTest.java index 0a10fe74517fb..56517843b9275 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectExceptionSelfTest.java @@ -31,21 +31,19 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * BinaryObjectExceptionSelfTest */ +@RunWith(JUnit4.class) public class BinaryObjectExceptionSelfTest extends GridCommonAbstractTest { /** */ private static final String TEST_KEY = "test_key"; - /** */ - private static final 
TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. */ private final String cacheName = "cache"; @@ -54,7 +52,6 @@ public class BinaryObjectExceptionSelfTest extends GridCommonAbstractTest { IgniteConfiguration cfg = super.getConfiguration(gridName); cfg.setMarshaller(new BinaryMarshaller()); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); cfg.setCacheConfiguration( new CacheConfiguration(cacheName) @@ -86,7 +83,7 @@ public class BinaryObjectExceptionSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ - @SuppressWarnings("WhileLoopReplaceableByForEach") + @Test public void testUnexpectedFieldType() throws Exception { IgniteEx grid = grid(0); @@ -157,6 +154,7 @@ public void testUnexpectedFieldType() throws Exception { * * @throws Exception If failed. */ + @Test public void testFailedMarshallingLogging() throws Exception { BinaryMarshaller marshaller = createStandaloneBinaryMarshaller(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectToStringSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectToStringSelfTest.java index e208daac0032d..7e8b183191895 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectToStringSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectToStringSelfTest.java @@ -23,24 +23,21 @@ import java.util.Map; import org.apache.ignite.binary.BinaryObject; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@code 
BinaryObject.toString()}. */ +@RunWith(JUnit4.class) public class BinaryObjectToStringSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setMarshaller(new BinaryMarshaller()); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); return cfg; } @@ -59,6 +56,7 @@ public class BinaryObjectToStringSelfTest extends GridCommonAbstractTest { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testToString() throws Exception { MyObject obj = new MyObject(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectTypeCompatibilityTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectTypeCompatibilityTest.java index 3ef4a83b20ee8..0000029a5bcb4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectTypeCompatibilityTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryObjectTypeCompatibilityTest.java @@ -36,31 +36,19 @@ import org.apache.ignite.Ignite; import org.apache.ignite.binary.BinaryObject; import org.apache.ignite.binary.BinaryObjectBuilder; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class BinaryObjectTypeCompatibilityTest extends GridCommonAbstractTest { - /** Ip finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final Random RANDOM = new Random(System.currentTimeMillis()); - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration igniteCfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)igniteCfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return igniteCfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -69,6 +57,7 @@ public class BinaryObjectTypeCompatibilityTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCompatibilityWithObject() throws Exception { Ignite ignite = startGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySerialiedFieldComparatorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySerialiedFieldComparatorSelfTest.java index 4278ef43ea82f..e0f0450d62e33 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySerialiedFieldComparatorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySerialiedFieldComparatorSelfTest.java @@ -30,10 +30,14 @@ import java.util.Set; import java.util.UUID; import java.util.concurrent.atomic.AtomicInteger; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Unit tests for serialized field comparer. */ +@RunWith(JUnit4.class) public class BinarySerialiedFieldComparatorSelfTest extends GridCommonAbstractTest { /** Type counter. */ private static final AtomicInteger TYPE_CTR = new AtomicInteger(); @@ -80,6 +84,7 @@ public class BinarySerialiedFieldComparatorSelfTest extends GridCommonAbstractTe * * @throws Exception If failed. 
*/ + @Test public void testByte() throws Exception { checkTwoValues((byte)1, (byte)2); } @@ -89,6 +94,7 @@ public void testByte() throws Exception { * * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { checkTwoValues(true, false); } @@ -98,6 +104,7 @@ public void testBoolean() throws Exception { * * @throws Exception If failed. */ + @Test public void testShort() throws Exception { checkTwoValues((short)1, (short)2); } @@ -107,6 +114,7 @@ public void testShort() throws Exception { * * @throws Exception If failed. */ + @Test public void testChar() throws Exception { checkTwoValues('a', 'b'); } @@ -116,6 +124,7 @@ public void testChar() throws Exception { * * @throws Exception If failed. */ + @Test public void testInt() throws Exception { checkTwoValues(1, 2); } @@ -125,6 +134,7 @@ public void testInt() throws Exception { * * @throws Exception If failed. */ + @Test public void testLong() throws Exception { checkTwoValues(1L, 2L); } @@ -134,6 +144,7 @@ public void testLong() throws Exception { * * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { checkTwoValues(1.0f, 2.0f); } @@ -143,6 +154,7 @@ public void testFloat() throws Exception { * * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { checkTwoValues(1.0d, 2.0d); } @@ -152,6 +164,7 @@ public void testDouble() throws Exception { * * @throws Exception If failed. */ + @Test public void testString() throws Exception { checkTwoValues("str1", "str2"); } @@ -161,6 +174,7 @@ public void testString() throws Exception { * * @throws Exception If failed. */ + @Test public void testDate() throws Exception { long time = System.currentTimeMillis(); @@ -172,6 +186,7 @@ public void testDate() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testTimestamp() throws Exception { long time = System.currentTimeMillis(); @@ -183,6 +198,7 @@ public void testTimestamp() throws Exception { * * @throws Exception If failed. */ + @Test public void testUuid() throws Exception { checkTwoValues(UUID.randomUUID(), UUID.randomUUID()); } @@ -192,6 +208,7 @@ public void testUuid() throws Exception { * * @throws Exception If failed. */ + @Test public void testDecimal() throws Exception { checkTwoValues(new BigDecimal("12.3E+7"), new BigDecimal("12.4E+7")); checkTwoValues(new BigDecimal("12.3E+7"), new BigDecimal("12.3E+8")); @@ -202,6 +219,7 @@ public void testDecimal() throws Exception { * * @throws Exception If failed. */ + @Test public void testInnerObject() throws Exception { checkTwoValues(new InnerClass(1), new InnerClass(2)); } @@ -211,6 +229,7 @@ public void testInnerObject() throws Exception { * * @throws Exception If failed. */ + @Test public void testByteArray() throws Exception { checkTwoValues(new byte[] { 1, 2 }, new byte[] { 1, 3 }); checkTwoValues(new byte[] { 1, 2 }, new byte[] { 1 }); @@ -223,6 +242,7 @@ public void testByteArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testBooleanArray() throws Exception { checkTwoValues(new boolean[] { true, false }, new boolean[] { false, true }); checkTwoValues(new boolean[] { true, false }, new boolean[] { true }); @@ -235,6 +255,7 @@ public void testBooleanArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testShortArray() throws Exception { checkTwoValues(new short[] { 1, 2 }, new short[] { 1, 3 }); checkTwoValues(new short[] { 1, 2 }, new short[] { 1 }); @@ -247,6 +268,7 @@ public void testShortArray() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testCharArray() throws Exception { checkTwoValues(new char[] { 1, 2 }, new char[] { 1, 3 }); checkTwoValues(new char[] { 1, 2 }, new char[] { 1 }); @@ -259,6 +281,7 @@ public void testCharArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testIntArray() throws Exception { checkTwoValues(new int[] { 1, 2 }, new int[] { 1, 3 }); checkTwoValues(new int[] { 1, 2 }, new int[] { 1 }); @@ -271,6 +294,7 @@ public void testIntArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testLongArray() throws Exception { checkTwoValues(new long[] { 1, 2 }, new long[] { 1, 3 }); checkTwoValues(new long[] { 1, 2 }, new long[] { 1 }); @@ -283,6 +307,7 @@ public void testLongArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testFloatArray() throws Exception { checkTwoValues(new float[] { 1.0f, 2.0f }, new float[] { 1.0f, 3.0f }); checkTwoValues(new float[] { 1.0f, 2.0f }, new float[] { 1.0f }); @@ -295,6 +320,7 @@ public void testFloatArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testDoubleArray() throws Exception { checkTwoValues(new double[] { 1.0d, 2.0d }, new double[] { 1.0d, 3.0d }); checkTwoValues(new double[] { 1.0d, 2.0d }, new double[] { 1.0d }); @@ -307,6 +333,7 @@ public void testDoubleArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testStringArray() throws Exception { checkTwoValues(new String[] { "a", "b" }, new String[] { "a", "c" }); checkTwoValues(new String[] { "a", "b" }, new String[] { "a" }); @@ -319,6 +346,7 @@ public void testStringArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testDateArray() throws Exception { long curTime = System.currentTimeMillis(); @@ -337,6 +365,7 @@ public void testDateArray() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testTimestampArray() throws Exception { long curTime = System.currentTimeMillis(); @@ -355,6 +384,7 @@ public void testTimestampArrayField() throws Exception { * * @throws Exception If failed. */ + @Test public void testUuidArray() throws Exception { UUID v1 = UUID.randomUUID(); UUID v2 = UUID.randomUUID(); @@ -371,6 +401,7 @@ public void testUuidArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testDecimalArray() throws Exception { BigDecimal v1 = new BigDecimal("12.3E+7"); BigDecimal v2 = new BigDecimal("12.4E+7"); @@ -395,6 +426,7 @@ public void testDecimalArray() throws Exception { * * @throws Exception If failed. */ + @Test public void testInnerObjectArray() throws Exception { InnerClass v1 = new InnerClass(1); InnerClass v2 = new InnerClass(2); @@ -558,4 +590,4 @@ public InnerClass(int val) { this.val = val; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySimpleNameTestPropertySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySimpleNameTestPropertySelfTest.java index b9077d1c6b2f8..d02fb95d63c6f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySimpleNameTestPropertySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinarySimpleNameTestPropertySelfTest.java @@ -24,6 +24,9 @@ import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.config.GridTestProperties.BINARY_MARSHALLER_USE_SIMPLE_NAME_MAPPER; import static org.apache.ignite.testframework.config.GridTestProperties.MARSH_CLASS_NAME; @@ -31,6 +34,7 @@ /** * Tests testing framework, especially 
BINARY_MARSHALLER_USE_SIMPLE_NAME_MAPPER test property. */ +@RunWith(JUnit4.class) public class BinarySimpleNameTestPropertySelfTest extends GridCommonAbstractTest { /** * flag for facade disabled test. As we use binary marshaller by default al @@ -55,6 +59,7 @@ public class BinarySimpleNameTestPropertySelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testPropertyEnabled() throws Exception { String useSimpleNameBackup = GridTestProperties.getProperty(BINARY_MARSHALLER_USE_SIMPLE_NAME_MAPPER); @@ -72,6 +77,7 @@ public void testPropertyEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPropertyDisabled() throws Exception { checkProperty("org.ignite.test.TestClass"); } @@ -80,6 +86,7 @@ public void testPropertyDisabled() throws Exception { * Check if Binary facade is disabled test. Test uses JDK marshaller to provide warranty facade is not available * @throws Exception If failed. */ + @Test public void testBinaryDisabled() throws Exception { enableJdkMarshaller = true; assertNull(startGrid().binary()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryTreeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryTreeSelfTest.java index 8d84f54d072d4..4d9e3544ff5a4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryTreeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/BinaryTreeSelfTest.java @@ -33,10 +33,14 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for TreeMap and TreeSet structures. */ +@RunWith(JUnit4.class) public class BinaryTreeSelfTest extends GridCommonAbstractTest { /** Data structure size. 
*/ private static final int SIZE = 100; @@ -69,6 +73,7 @@ public class BinaryTreeSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testTreeMapAsValueRegularNoComparator() throws Exception { checkTreeMapAsValue(false, false); } @@ -78,6 +83,7 @@ public void testTreeMapAsValueRegularNoComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeMapAsValueRegularComparator() throws Exception { checkTreeMapAsValue(false, true); } @@ -87,6 +93,7 @@ public void testTreeMapAsValueRegularComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeMapAsValueBinaryNoComparator() throws Exception { checkTreeMapAsValue(true, false); } @@ -96,6 +103,7 @@ public void testTreeMapAsValueBinaryNoComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeMapAsValueBinaryComparator() throws Exception { checkTreeMapAsValue(true, true); } @@ -105,6 +113,7 @@ public void testTreeMapAsValueBinaryComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeMapAsKeyNoComparator() throws Exception { checkTreeMapAsKey(false); } @@ -114,6 +123,7 @@ public void testTreeMapAsKeyNoComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeMapAsKeyComparator() throws Exception { checkTreeMapAsKey(true); } @@ -224,6 +234,7 @@ private TreeMap testMap(boolean useComp) { * * @throws Exception If failed. */ + @Test public void testTreeSetAsValueRegularNoComparator() throws Exception { checkTreeSetAsValue(false, false); } @@ -233,6 +244,7 @@ public void testTreeSetAsValueRegularNoComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeSetAsValueRegularComparator() throws Exception { checkTreeSetAsValue(false, true); } @@ -242,6 +254,7 @@ public void testTreeSetAsValueRegularComparator() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testTreeSetAsValueBinaryNoComparator() throws Exception { checkTreeSetAsValue(true, false); } @@ -251,6 +264,7 @@ public void testTreeSetAsValueBinaryNoComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeSetAsValueBinaryComparator() throws Exception { checkTreeSetAsValue(true, true); } @@ -260,6 +274,7 @@ public void testTreeSetAsValueBinaryComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeSetAsKeyNoComparator() throws Exception { checkTreeSetAsKey(false); } @@ -269,6 +284,7 @@ public void testTreeSetAsKeyNoComparator() throws Exception { * * @throws Exception If failed. */ + @Test public void testTreeSetAsKeyComparator() throws Exception { checkTreeSetAsKey(true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryAffinityKeySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryAffinityKeySelfTest.java index 8d3f34bb1442a..67dc057f32c63 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryAffinityKeySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryAffinityKeySelfTest.java @@ -36,23 +36,21 @@ import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.binary.BinaryTypeConfiguration; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Test for binary object affinity key. 
*/ +@RunWith(JUnit4.class) public class GridBinaryAffinityKeySelfTest extends GridCommonAbstractTest { /** */ private static final AtomicReference nodeId = new AtomicReference<>(); - /** VM ip finder for TCP discovery. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static int GRID_CNT = 5; @@ -85,8 +83,6 @@ public class GridBinaryAffinityKeySelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheCfg); } - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -98,6 +94,7 @@ public class GridBinaryAffinityKeySelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testAffinity() throws Exception { checkAffinity(grid(0)); @@ -167,6 +164,7 @@ private void checkAffinity(Ignite ignite) throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityRun() throws Exception { Affinity aff = grid(0).affinity(DEFAULT_CACHE_NAME); @@ -200,6 +198,7 @@ public void testAffinityRun() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAffinityCall() throws Exception { Affinity aff = grid(0).affinity(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryMarshallerCtxDisabledSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryMarshallerCtxDisabledSelfTest.java index bb7c65d6d2136..ac0fa000ceef2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryMarshallerCtxDisabledSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryMarshallerCtxDisabledSelfTest.java @@ -34,14 +34,19 @@ import org.apache.ignite.marshaller.MarshallerContext; import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridBinaryMarshallerCtxDisabledSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testObjectExchange() throws Exception { BinaryMarshaller marsh = new BinaryMarshaller(); marsh.setContext(new MarshallerContextWithNoStorage()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryWildcardsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryWildcardsSelfTest.java index f69cea45899f0..74b53d76a1fa0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryWildcardsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridBinaryWildcardsSelfTest.java @@ -37,10 +37,14 @@ import org.apache.ignite.logger.NullLogger; import org.apache.ignite.marshaller.MarshallerContextTestImpl; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Wildcards test. 
*/ +@RunWith(JUnit4.class) public class GridBinaryWildcardsSelfTest extends GridCommonAbstractTest { /** */ public static final String CLASS1_FULL_NAME = GridBinaryTestClass1.class.getName(); @@ -54,6 +58,7 @@ public class GridBinaryWildcardsSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testClassNamesFullNameMapper() throws Exception { checkClassNames(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false)); } @@ -61,6 +66,7 @@ public void testClassNamesFullNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClassNamesSimpleNameMapper() throws Exception { checkClassNames(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true)); } @@ -68,6 +74,7 @@ public void testClassNamesSimpleNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClassNamesMixedMappers() throws Exception { checkClassNames(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(true)); } @@ -97,6 +104,7 @@ private void checkClassNames(BinaryNameMapper nameMapper, BinaryIdMapper mapper) /** * @throws Exception If failed. */ + @Test public void testClassNamesCustomMappers() throws Exception { BinaryMarshaller marsh = binaryMarshaller(null, new BinaryIdMapper() { @SuppressWarnings("IfMayBeConditional") @@ -133,6 +141,7 @@ else if (clsName.endsWith("InnerClass")) /** * @throws Exception If failed. */ + @Test public void testTypeConfigurationsSimpleNameIdMapper() throws Exception { checkTypeConfigurations(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true)); } @@ -140,6 +149,7 @@ public void testTypeConfigurationsSimpleNameIdMapper() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTypeConfigurationsFullNameIdMapper() throws Exception { checkTypeConfigurations(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false)); } @@ -185,6 +195,7 @@ private int typeId(String typeName, BinaryNameMapper nameMapper, BinaryIdMapper /** * @throws Exception If failed. */ + @Test public void testTypeConfigurationsWithGlobalMapper() throws Exception { BinaryMarshaller marsh = binaryMarshaller(new BinaryBasicNameMapper(false), new BinaryIdMapper() { @SuppressWarnings("IfMayBeConditional") @@ -221,6 +232,7 @@ else if (clsName.endsWith("InnerClass")) /** * @throws Exception If failed. */ + @Test public void testTypeConfigurationsWithNonGlobalMapper() throws Exception { BinaryMarshaller marsh = binaryMarshaller(new BinaryBasicNameMapper(true), new BinaryIdMapper() { @SuppressWarnings("IfMayBeConditional") @@ -257,6 +269,7 @@ else if (clsName.endsWith("InnerClass")) /** * @throws Exception If failed. */ + @Test public void testOverrideIdMapperSimpleNameMapper() throws Exception { checkOverrideNameMapper(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true)); } @@ -264,6 +277,7 @@ public void testOverrideIdMapperSimpleNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOverrideIdMapperFullNameMapper() throws Exception { checkOverrideNameMapper(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false)); } @@ -310,6 +324,7 @@ private void checkOverrideIdMapper(BinaryNameMapper nameMapper, BinaryIdMapper m /** * @throws Exception If failed. */ + @Test public void testOverrideNameMapperSimpleNameMapper() throws Exception { checkOverrideNameMapper(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true)); } @@ -317,6 +332,7 @@ public void testOverrideNameMapperSimpleNameMapper() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOverrideNameMapperFullNameMapper() throws Exception { checkOverrideNameMapper(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false)); } @@ -363,6 +379,7 @@ private void checkOverrideNameMapper(BinaryNameMapper nameMapper, BinaryIdMapper /** * @throws Exception If failed. */ + @Test public void testClassNamesJarFullNameMapper() throws Exception { checkClassNamesJar(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false)); } @@ -370,6 +387,7 @@ public void testClassNamesJarFullNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClassNamesJarSimpleNameMapper() throws Exception { checkClassNamesJar(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true)); } @@ -399,6 +417,7 @@ private void checkClassNamesJar(BinaryNameMapper nameMapper, BinaryIdMapper idMa /** * @throws Exception If failed. */ + @Test public void testClassNamesWithCustomMapperJar() throws Exception { BinaryMarshaller marsh = binaryMarshaller(new BinaryBasicNameMapper(false), new BinaryIdMapper() { @SuppressWarnings("IfMayBeConditional") @@ -435,6 +454,7 @@ else if (clsName.endsWith("2")) /** * @throws Exception If failed. */ + @Test public void testTypeConfigurationsJarSimpleNameMapper() throws Exception { checkTypeConfigurationJar(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true)); } @@ -442,6 +462,7 @@ public void testTypeConfigurationsJarSimpleNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTypeConfigurationsJarFullNameMapper() throws Exception { checkTypeConfigurationJar(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false)); } @@ -472,6 +493,7 @@ private void checkTypeConfigurationJar(BinaryNameMapper nameMapper, BinaryIdMapp /** * @throws Exception If failed. 
*/ + @Test public void testTypeConfigurationsWithGlobalMapperJar() throws Exception { BinaryMarshaller marsh = binaryMarshaller(new BinaryBasicNameMapper(false), new BinaryIdMapper() { @SuppressWarnings("IfMayBeConditional") @@ -508,6 +530,7 @@ else if (clsName.endsWith("2")) /** * @throws Exception If failed. */ + @Test public void testTypeConfigurationsWithNonGlobalMapperJar() throws Exception { BinaryMarshaller marsh = binaryMarshaller(new BinaryBasicNameMapper(false), new BinaryIdMapper() { @SuppressWarnings("IfMayBeConditional") @@ -544,6 +567,7 @@ else if (clsName.endsWith("2")) /** * @throws Exception If failed. */ + @Test public void testOverrideJarSimpleNameMapper() throws Exception { checkOverrideJar(new BinaryBasicNameMapper(true), new BinaryBasicIdMapper(true)); } @@ -551,6 +575,7 @@ public void testOverrideJarSimpleNameMapper() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOverrideJarFullNameMapper() throws Exception { checkOverrideJar(new BinaryBasicNameMapper(false), new BinaryBasicIdMapper(false)); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridDefaultBinaryMappersBinaryMetaDataSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridDefaultBinaryMappersBinaryMetaDataSelfTest.java index 06fb3f4c00757..649dd1dacb31c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/GridDefaultBinaryMappersBinaryMetaDataSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/GridDefaultBinaryMappersBinaryMetaDataSelfTest.java @@ -38,10 +38,14 @@ import org.apache.ignite.binary.BinaryReader; import org.apache.ignite.binary.BinaryWriter; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Binary meta data test. 
*/ +@RunWith(JUnit4.class) public class GridDefaultBinaryMappersBinaryMetaDataSelfTest extends GridCommonAbstractTest { /** */ private static IgniteConfiguration cfg; @@ -95,6 +99,7 @@ protected IgniteBinary binaries() { /** * @throws Exception If failed. */ + @Test public void testGetAll() throws Exception { binaries().toBinary(new TestObject2()); @@ -155,6 +160,7 @@ else if (expectedTypeName(TestObject2.class.getName()).equals(meta.typeName())) /** * @throws Exception If failed. */ + @Test public void testNoConfiguration() throws Exception { binaries().toBinary(new TestObject3()); @@ -164,6 +170,7 @@ public void testNoConfiguration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReflection() throws Exception { BinaryType meta = binaries().type(TestObject1.class); @@ -208,6 +215,7 @@ private String expectedTypeName(String clsName) { /** * @throws Exception If failed. */ + @Test public void testBinaryMarshalAware() throws Exception { binaries().toBinary(new TestObject2()); @@ -241,6 +249,7 @@ public void testBinaryMarshalAware() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMerge() throws Exception { binaries().toBinary(new TestObject2()); @@ -282,6 +291,7 @@ public void testMerge() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSerializedObject() throws Exception { TestObject1 obj = new TestObject1(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/AbstractBinaryStreamByteOrderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/AbstractBinaryStreamByteOrderSelfTest.java index c68a8862aba4e..e857ef2ea785c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/AbstractBinaryStreamByteOrderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/AbstractBinaryStreamByteOrderSelfTest.java @@ -20,6 +20,9 @@ import java.util.Random; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.GridTestIoUtils.getCharByByteLE; import static org.apache.ignite.GridTestIoUtils.getDoubleByByteLE; @@ -31,6 +34,7 @@ /** * Binary input/output streams byte order sanity tests. */ +@RunWith(JUnit4.class) public abstract class AbstractBinaryStreamByteOrderSelfTest extends GridCommonAbstractTest { /** Array length. */ protected static final int ARR_LEN = 16; @@ -64,6 +68,7 @@ public abstract class AbstractBinaryStreamByteOrderSelfTest extends GridCommonAb /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { short val = (short)RND.nextLong(); @@ -109,6 +114,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShortArray() throws Exception { short[] arr = new short[ARR_LEN]; @@ -128,6 +134,7 @@ public void testShortArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testChar() throws Exception { char val = (char)RND.nextLong(); @@ -157,6 +164,7 @@ public void testChar() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCharArray() throws Exception { char[] arr = new char[ARR_LEN]; @@ -176,6 +184,7 @@ public void testCharArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInt() throws Exception { int val = RND.nextInt(); @@ -228,6 +237,7 @@ public void testInt() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIntArray() throws Exception { int[] arr = new int[ARR_LEN]; @@ -247,6 +257,7 @@ public void testIntArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { long val = RND.nextLong(); @@ -283,6 +294,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLongArray() throws Exception { long[] arr = new long[ARR_LEN]; @@ -302,6 +314,7 @@ public void testLongArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { float val = RND.nextFloat(); @@ -330,6 +343,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloatArray() throws Exception { float[] arr = new float[ARR_LEN]; @@ -349,6 +363,7 @@ public void testFloatArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { double val = RND.nextDouble(); @@ -377,6 +392,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDoubleArray() throws Exception { double[] arr = new double[ARR_LEN]; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/BinaryAbstractOutputStreamTest.java b/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/BinaryAbstractOutputStreamTest.java index ed1d3d65ad19d..7ee9c74211a31 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/BinaryAbstractOutputStreamTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/binary/streams/BinaryAbstractOutputStreamTest.java @@ -18,14 +18,19 @@ package org.apache.ignite.internal.binary.streams; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class BinaryAbstractOutputStreamTest extends GridCommonAbstractTest { /** * */ + @Test public void testCapacity() { assertEquals(256, BinaryAbstractOutputStream.capacity(0, 1)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/commandline/CommandHandlerParsingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/commandline/CommandHandlerParsingTest.java index 0ac5d1a4483b0..45b0aab55b07e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/commandline/CommandHandlerParsingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/commandline/CommandHandlerParsingTest.java @@ -38,6 +38,7 @@ import static org.apache.ignite.internal.commandline.CommandHandler.VI_CHECK_THROUGH; import static org.apache.ignite.internal.commandline.CommandHandler.WAL_DELETE; import static org.apache.ignite.internal.commandline.CommandHandler.WAL_PRINT; +import static org.junit.Assert.assertArrayEquals; /** * Tests Command Handler parsing arguments. 
@@ -167,46 +168,66 @@ public void testExperimentalCommandIsDisabled() { } /** - * Tests parsing and validation for user and password arguments. + * Tests parsing and validation for the SSL arguments. */ - public void testParseAndValidateUserAndPassword() { + public void testParseAndValidateSSLArguments() { CommandHandler hnd = new CommandHandler(); for (Command cmd : Command.values()) { + if (cmd == Command.CACHE || cmd == Command.WAL) continue; // --cache and --wal subcommands require their own specific arguments. try { - hnd.parseAndValidate(asList("--user")); + hnd.parseAndValidate(asList("--truststore")); - fail("expected exception: Expected user name"); + fail("expected exception: Expected truststore"); } catch (IllegalArgumentException e) { e.printStackTrace(); } - try { - hnd.parseAndValidate(asList("--password")); + Arguments args = hnd.parseAndValidate(asList("--keystore", "testKeystore", "--keystore-password", "testKeystorePassword", "--keystore-type", "testKeystoreType", + "--truststore", "testTruststore", "--truststore-password", "testTruststorePassword", "--truststore-type", "testTruststoreType", + "--ssl-key-algorithm", "testSSLKeyAlgorithm", "--ssl-protocol", "testSSLProtocol", cmd.text())); - fail("expected exception: Expected password"); - } - catch (IllegalArgumentException e) { - e.printStackTrace(); - } + assertEquals("testSSLProtocol", args.sslProtocol()); + assertEquals("testSSLKeyAlgorithm", args.sslKeyAlgorithm()); + assertEquals("testKeystore", args.sslKeyStorePath()); + assertArrayEquals("testKeystorePassword".toCharArray(), args.sslKeyStorePassword()); + assertEquals("testKeystoreType", args.sslKeyStoreType()); + assertEquals("testTruststore", args.sslTrustStorePath()); + assertArrayEquals("testTruststorePassword".toCharArray(), args.sslTrustStorePassword()); + assertEquals("testTruststoreType", args.sslTrustStoreType()); + + assertEquals(cmd, args.command()); + } + } + + + /** + * Tests parsing and validation for user and password arguments.
+ */ + public void testParseAndValidateUserAndPassword() { + CommandHandler hnd = new CommandHandler(); + + for (Command cmd : Command.values()) { + if (cmd == Command.CACHE || cmd == Command.WAL) + continue; // --cache and --wal subcommands require their own specific arguments. try { - hnd.parseAndValidate(asList("--user", "testUser", cmd.text())); + hnd.parseAndValidate(asList("--user")); - fail("expected exception: Both user and password should be specified"); + fail("expected exception: Expected user name"); } catch (IllegalArgumentException e) { e.printStackTrace(); } try { - hnd.parseAndValidate(asList("--password", "testPass", cmd.text())); + hnd.parseAndValidate(asList("--password")); - fail("expected exception: Both user and password should be specified"); + fail("expected exception: Expected password"); } catch (IllegalArgumentException e) { e.printStackTrace(); @@ -214,8 +235,8 @@ public void testParseAndValidateUserAndPassword() { Arguments args = hnd.parseAndValidate(asList("--user", "testUser", "--password", "testPass", cmd.text())); - assertEquals("testUser", args.user()); - assertEquals("testPass", args.password()); + assertEquals("testUser", args.getUserName()); + assertEquals("testPass", args.getPassword()); assertEquals(cmd, args.command()); } } @@ -304,7 +325,7 @@ public void testParseAutoConfirmationFlag() { break; } case TX: { - args = hnd.parseAndValidate(asList(cmd.text(), "xid", "xid1", "minDuration", "10", "kill", "--yes")); + args = hnd.parseAndValidate(asList(cmd.text(), "--xid", "xid1", "--min-duration", "10", "--kill", "--yes")); assertEquals(cmd, args.command()); assertEquals(DFLT_HOST, args.host()); @@ -440,8 +461,8 @@ public void testTransactionArguments() { catch (IllegalArgumentException ignored) { } - args = hnd.parseAndValidate(asList("--tx", "minDuration", "120", "minSize", "10", "limit", "100", "order", "SIZE", - "servers")); + args = hnd.parseAndValidate(asList("--tx", "--min-duration", "120", "--min-size", "10", "--limit", "100",
"--order", "SIZE", + "--servers")); VisorTxTaskArg arg = args.transactionArguments(); @@ -451,8 +472,8 @@ public void testTransactionArguments() { assertEquals(VisorTxSortOrder.SIZE, arg.getSortOrder()); assertEquals(VisorTxProjection.SERVER, arg.getProjection()); - args = hnd.parseAndValidate(asList("--tx", "minDuration", "130", "minSize", "1", "limit", "60", "order", "DURATION", - "clients")); + args = hnd.parseAndValidate(asList("--tx", "--min-duration", "130", "--min-size", "1", "--limit", "60", "--order", "DURATION", + "--clients")); arg = args.transactionArguments(); @@ -462,7 +483,7 @@ public void testTransactionArguments() { assertEquals(VisorTxSortOrder.DURATION, arg.getSortOrder()); assertEquals(VisorTxProjection.CLIENT, arg.getProjection()); - args = hnd.parseAndValidate(asList("--tx", "nodes", "1,2,3")); + args = hnd.parseAndValidate(asList("--tx", "--nodes", "1,2,3")); arg = args.transactionArguments(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2ByteOrderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2ByteOrderSelfTest.java index 710e4454562c9..0423b78e0f490 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2ByteOrderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/direct/stream/v2/DirectByteBufferStreamImplV2ByteOrderSelfTest.java @@ -22,13 +22,14 @@ import java.util.ArrayList; import java.util.List; import java.util.Random; -import junit.framework.TestCase; import org.apache.ignite.IgniteException; import org.apache.ignite.internal.direct.stream.DirectByteBufferStream; import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageFactory; import org.jetbrains.annotations.Nullable; +import org.junit.Before; +import 
org.junit.Test; import static org.apache.ignite.GridTestIoUtils.getCharByByteLE; import static org.apache.ignite.GridTestIoUtils.getDoubleByByteLE; @@ -37,11 +38,15 @@ import static org.apache.ignite.GridTestIoUtils.getLongByByteLE; import static org.apache.ignite.GridTestIoUtils.getShortByByteLE; import static org.junit.Assert.assertArrayEquals; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; /** * {@link DirectByteBufferStreamImplV2} byte order sanity tests. */ -public class DirectByteBufferStreamImplV2ByteOrderSelfTest extends TestCase { +public class DirectByteBufferStreamImplV2ByteOrderSelfTest { /** Array length. */ private static final int ARR_LEN = 16; @@ -60,10 +65,9 @@ public class DirectByteBufferStreamImplV2ByteOrderSelfTest extends TestCase { /** Array. */ private byte[] outArr; - /** {@inheritDoc} */ - @Override public void setUp() throws Exception { - super.setUp(); - + /** */ + @Before + public void setUp() throws Exception { outArr = new byte[ARR_LEN * 8 + LEN_BYTES]; buff = ByteBuffer.wrap(outArr); @@ -92,6 +96,7 @@ private static DirectByteBufferStreamImplV2 createStream(ByteBuffer buff) { /** * */ + @Test public void testShortArray() { short[] arr = new short[ARR_LEN]; @@ -111,6 +116,7 @@ public void testShortArray() { /** * */ + @Test public void testCharArray() { char[] arr = new char[ARR_LEN]; @@ -130,6 +136,7 @@ public void testCharArray() { /** * */ + @Test public void testIntArray() { int[] arr = new int[ARR_LEN]; @@ -149,6 +156,7 @@ public void testIntArray() { /** * */ + @Test public void testLongArray() { long[] arr = new long[ARR_LEN]; @@ -168,6 +176,7 @@ public void testLongArray() { /** * */ + @Test public void testFloatArray() { float[] arr = new float[ARR_LEN]; @@ -187,6 +196,7 @@ public void testFloatArray() { /** * */ + @Test public void testDoubleArray() { double[] arr = new double[ARR_LEN]; @@ 
-206,6 +216,7 @@ public void testDoubleArray() { /** * */ + @Test public void testCharArrayInternal() { char[] arr = new char[ARR_LEN]; @@ -226,6 +237,7 @@ public void testCharArrayInternal() { /** * */ + @Test public void testShortArrayInternal() { short[] arr = new short[ARR_LEN]; @@ -246,6 +258,7 @@ public void testShortArrayInternal() { /** * */ + @Test public void testIntArrayInternal() { int[] arr = new int[ARR_LEN]; @@ -266,6 +279,7 @@ public void testIntArrayInternal() { /** * */ + @Test public void testLongArrayInternal() { long[] arr = new long[ARR_LEN]; @@ -286,6 +300,7 @@ public void testLongArrayInternal() { /** * */ + @Test public void testFloatArrayInternal() { float[] arr = new float[ARR_LEN]; @@ -306,6 +321,7 @@ public void testFloatArrayInternal() { /** * */ + @Test public void testDoubleArrayInternal() { double[] arr = new double[ARR_LEN]; @@ -519,4 +535,4 @@ else if (arr.getClass().getComponentType() == double.class) else throw new IllegalArgumentException("Unsupported array type"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/encryption/AbstractEncryptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/encryption/AbstractEncryptionTest.java new file mode 100644 index 0000000000000..f267186464661 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/encryption/AbstractEncryptionTest.java @@ -0,0 +1,245 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.encryption; + +import java.io.File; +import java.io.FileOutputStream; +import java.io.OutputStream; +import java.security.KeyStore; +import java.util.HashSet; +import java.util.Set; +import javax.crypto.KeyGenerator; +import javax.crypto.SecretKey; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.processors.cache.IgniteInternalCache; +import org.apache.ignite.internal.util.IgniteUtils; +import org.apache.ignite.internal.util.typedef.T2; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionKey; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; +import static org.apache.ignite.configuration.WALMode.FSYNC; +import static org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi.CIPHER_ALGO; +import static 
org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi.DEFAULT_MASTER_KEY_NAME; + +/** + * Abstract encryption test. + */ +public abstract class AbstractEncryptionTest extends GridCommonAbstractTest { + /** */ + static final String ENCRYPTED_CACHE = "encrypted"; + + /** */ + public static final String KEYSTORE_PATH = + IgniteUtils.resolveIgnitePath("modules/core/src/test/resources/tde.jks").getAbsolutePath(); + + /** */ + static final String GRID_0 = "grid-0"; + + /** */ + static final String GRID_1 = "grid-1"; + + /** */ + public static final String KEYSTORE_PASSWORD = "love_sex_god"; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(name); + + KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi(); + + encSpi.setKeyStorePath(keystorePath()); + encSpi.setKeyStorePassword(keystorePassword()); + + cfg.setEncryptionSpi(encSpi); + + DataStorageConfiguration memCfg = new DataStorageConfiguration() + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setMaxSize(10L * 1024 * 1024) + .setPersistenceEnabled(true)) + .setPageSize(4 * 1024) + .setWalMode(FSYNC); + + cfg.setDataStorageConfiguration(memCfg); + + return cfg; + } + + /** */ + private char[] keystorePassword() { + return KEYSTORE_PASSWORD.toCharArray(); + } + + /** */ + protected String keystorePath() { + return KEYSTORE_PATH; + } + + /** */ + void checkEncryptedCaches(IgniteEx grid0, IgniteEx grid1) { + Set cacheNames = new HashSet<>(grid0.cacheNames()); + + cacheNames.addAll(grid1.cacheNames()); + + for (String cacheName : cacheNames) { + CacheConfiguration ccfg = grid1.cache(cacheName).getConfiguration(CacheConfiguration.class); + + if (!ccfg.isEncryptionEnabled()) + continue; + + IgniteInternalCache encrypted0 = grid0.cachex(cacheName); + + int grpId = CU.cacheGroupId(cacheName, ccfg.getGroupName()); + + assertNotNull(encrypted0); + + IgniteInternalCache 
encrypted1 = grid1.cachex(cacheName); + + assertNotNull(encrypted1); + + assertTrue(encrypted1.configuration().isEncryptionEnabled()); + + KeystoreEncryptionKey encKey0 = (KeystoreEncryptionKey)grid0.context().encryption().groupKey(grpId); + + assertNotNull(encKey0); + assertNotNull(encKey0.key()); + + if (!grid1.configuration().isClientMode()) { + KeystoreEncryptionKey encKey1 = (KeystoreEncryptionKey)grid1.context().encryption().groupKey(grpId); + + assertNotNull(encKey1); + assertNotNull(encKey1.key()); + + assertEquals(encKey0.key(), encKey1.key()); + } + else + assertNull(grid1.context().encryption().groupKey(grpId)); + } + + checkData(grid0); + } + + /** */ + protected void checkData(IgniteEx grid0) { + IgniteCache cache = grid0.cache(cacheName()); + + assertNotNull(cache); + + for (long i=0; i<100; i++) + assertEquals("" + i, cache.get(i)); + } + + /** */ + protected void createEncryptedCache(IgniteEx grid0, @Nullable IgniteEx grid1, String cacheName, String cacheGroup) + throws IgniteInterruptedCheckedException { + createEncryptedCache(grid0, grid1, cacheName, cacheGroup, true); + } + + /** */ + protected void createEncryptedCache(IgniteEx grid0, @Nullable IgniteEx grid1, String cacheName, String cacheGroup, + boolean putData) throws IgniteInterruptedCheckedException { + CacheConfiguration ccfg = new CacheConfiguration(cacheName) + .setWriteSynchronizationMode(FULL_SYNC) + .setGroupName(cacheGroup) + .setEncryptionEnabled(true); + + IgniteCache cache = grid0.createCache(ccfg); + + if (grid1 != null) + GridTestUtils.waitForCondition(() -> grid1.cachex(cacheName()) != null, 2_000L); + + if (putData) { + for (long i = 0; i < 100; i++) + cache.put(i, "" + i); + + for (long i = 0; i < 100; i++) + assertEquals("" + i, cache.get(i)); + } + } + + /** + * Starts test grid instances. + * + * @param clnPersDir If {@code true}, the persistence dir is cleaned before start. + * @return Started grids. + * @throws Exception If failed.
+ */ + protected T2<IgniteEx, IgniteEx> startTestGrids(boolean clnPersDir) throws Exception { + if (clnPersDir) + cleanPersistenceDir(); + + IgniteEx grid0 = startGrid(GRID_0); + + IgniteEx grid1 = startGrid(GRID_1); + + grid0.cluster().active(true); + + awaitPartitionMapExchange(); + + return new T2<>(grid0, grid1); + } + + /** */ + @NotNull protected String cacheName() { + return ENCRYPTED_CACHE; + } + + /** + * Method to create a new keystore. + * Use it whenever you need a special keystore for encryption tests. + */ + @SuppressWarnings("unused") + protected File createKeyStore(String keystorePath) throws Exception { + KeyStore ks = KeyStore.getInstance("PKCS12"); + + ks.load(null, null); + + KeyGenerator gen = KeyGenerator.getInstance(CIPHER_ALGO); + + gen.init(KeystoreEncryptionSpi.DEFAULT_KEY_SIZE); + + SecretKey key = gen.generateKey(); + + ks.setEntry( + DEFAULT_MASTER_KEY_NAME, + new KeyStore.SecretKeyEntry(key), + new KeyStore.PasswordProtection(KEYSTORE_PASSWORD.toCharArray())); + + File keyStoreFile = new File(keystorePath); + + keyStoreFile.createNewFile(); + + try (OutputStream os = new FileOutputStream(keyStoreFile)) { + ks.store(os, KEYSTORE_PASSWORD.toCharArray()); + } + + return keyStoreFile; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheBigEntryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheBigEntryTest.java new file mode 100644 index 0000000000000..a72e93e8b2f63 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheBigEntryTest.java @@ -0,0 +1,119 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.encryption; + +import java.util.Arrays; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.util.typedef.T2; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionKey; +import org.apache.ignite.testframework.GridTestUtils; +import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Tests to check encryption of an entry bigger than page size. + */ +@RunWith(JUnit4.class) +public class EncryptedCacheBigEntryTest extends AbstractEncryptionTest { + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(false); + + cleanPersistenceDir(); + } + + /** @throws Exception If failed.
*/ + @Test + public void testCreateEncryptedCacheWithBigEntry() throws Exception { + T2<IgniteEx, IgniteEx> grids = startTestGrids(true); + + createEncryptedCache(grids.get1(), grids.get2(), cacheName(), null); + + checkEncryptedCaches(grids.get1(), grids.get2()); + + int grpId = CU.cacheGroupId(cacheName(), null); + + KeystoreEncryptionKey keyBeforeRestart = + (KeystoreEncryptionKey)grids.get1().context().encryption().groupKey(grpId); + + stopAllGrids(); + + grids = startTestGrids(false); + + checkEncryptedCaches(grids.get1(), grids.get2()); + + KeystoreEncryptionKey keyAfterRestart = (KeystoreEncryptionKey)grids.get1().context().encryption().groupKey(grpId); + + assertNotNull(keyAfterRestart); + assertNotNull(keyAfterRestart.key()); + + assertEquals(keyBeforeRestart.key(), keyAfterRestart.key()); + } + + /** {@inheritDoc} */ + @Override protected void createEncryptedCache(IgniteEx grid0, @Nullable IgniteEx grid1, String cacheName, + String cacheGroup, boolean putData) throws IgniteInterruptedCheckedException { + CacheConfiguration ccfg = new CacheConfiguration(cacheName) + .setWriteSynchronizationMode(FULL_SYNC) + .setGroupName(cacheGroup) + .setEncryptionEnabled(true); + + IgniteCache cache = grid0.createCache(ccfg); + + if (grid1 != null) + GridTestUtils.waitForCondition(() -> grid1.cachex(cacheName()) != null, 2_000L); + + if (putData) { + cache.put(1, bigArray(grid0)); + + assertTrue(Arrays.equals(bigArray(grid0), cache.get(1))); + } + } + + /** {@inheritDoc} */ + @Override protected void checkData(IgniteEx grid0) { + IgniteCache cache = grid0.cache(cacheName()); + + assertTrue(Arrays.equals(bigArray(grid0), cache.get(1))); + } + + /** */ + private byte[] bigArray(IgniteEx grid) { + int arrSz = grid.configuration().getDataStorageConfiguration().getPageSize() * 3; + + byte[] bigArr = new byte[arrSz]; + + for (int i=0; i ccfg = new CacheConfiguration<>(ENCRYPTED_CACHE); + + ccfg.setEncryptionEnabled(true); + + IgniteEx grid = grid(0); + + grid.createCache(ccfg); + 
IgniteInternalCache enc = grid.cachex(ENCRYPTED_CACHE); + + assertNotNull(enc); + + KeystoreEncryptionKey key = + (KeystoreEncryptionKey)grid.context().encryption().groupKey(CU.cacheGroupId(ENCRYPTED_CACHE, null)); + + assertNotNull(key); + assertNotNull(key.key()); + } + + /** @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8640") + @Test + public void testCreateEncryptedNotPersistedCacheFail() throws Exception { + GridTestUtils.assertThrowsWithCause(() -> { + CacheConfiguration ccfg = new CacheConfiguration<>(NO_PERSISTENCE_REGION); + + ccfg.setEncryptionEnabled(true); + ccfg.setDataRegionName(NO_PERSISTENCE_REGION); + + grid(0).createCache(ccfg); + }, IgniteCheckedException.class); + } + + /** @throws Exception If failed. */ + @Test + public void testPersistedContentEncrypted() throws Exception { + IgniteCache enc = grid(0).createCache( + new CacheConfiguration(ENCRYPTED_CACHE) + .setEncryptionEnabled(true)); + + IgniteCache plain = grid(0).createCache(new CacheConfiguration<>("plain-cache")); + + assertNotNull(enc); + + String encValPart = "AAAAAAAAAA"; + String plainValPart = "BBBBBBBBBB"; + + StringBuilder longEncVal = new StringBuilder(encValPart.length()*100); + StringBuilder longPlainVal = new StringBuilder(plainValPart.length()*100); + + for (int i = 0; i < 100; i++) { + longEncVal.append(encValPart); + longPlainVal.append(plainValPart); + } + + enc.put(1, longEncVal.toString()); + plain.put(1, longPlainVal.toString()); + + stopAllGrids(false); + + byte[] encValBytes = encValPart.getBytes(StandardCharsets.UTF_8); + byte[] plainValBytes = plainValPart.getBytes(StandardCharsets.UTF_8); + + final boolean[] plainBytesFound = {false}; + + Files.walk(U.resolveWorkDirectory(U.defaultWorkDirectory(), "db", false).toPath()) + .filter(Files::isRegularFile) + .forEach(f -> { + try { + if (Files.size(f) == 0) + return; + + byte[] fileBytes = Files.readAllBytes(f); + + boolean notFound = Bytes.indexOf(fileBytes, encValBytes) == 
-1; + + assertTrue("Value should be encrypted in persisted file. " + f.getFileName(), notFound); + + plainBytesFound[0] = plainBytesFound[0] || Bytes.indexOf(fileBytes, plainValBytes) != -1; + + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + + assertTrue("Plain value should be found in persistent store", plainBytesFound[0]); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheDestroyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheDestroyTest.java new file mode 100644 index 0000000000000..cfbe642bc5e4b --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheDestroyTest.java @@ -0,0 +1,133 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.encryption; + +import java.util.Collection; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage; +import org.apache.ignite.internal.util.typedef.T2; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionKey; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.internal.managers.encryption.GridEncryptionManager.ENCRYPTION_KEY_PREFIX; + +/** + */ +@RunWith(JUnit4.class) +public class EncryptedCacheDestroyTest extends AbstractEncryptionTest { + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testEncryptedCacheDestroy() throws Exception { + T2<IgniteEx, IgniteEx> grids = startTestGrids(true); + + createEncryptedCache(grids.get1(), grids.get2(), cacheName(), null); + + checkEncryptedCaches(grids.get1(), grids.get2()); + + String encryptedCacheName = cacheName(); + + grids.get1().destroyCache(encryptedCacheName); + + checkCacheDestroyed(grids.get2(), encryptedCacheName, null, true); + + stopAllGrids(true); + + grids = startTestGrids(false); + + checkCacheDestroyed(grids.get1(), encryptedCacheName, null, true); + } + + /** + * @throws Exception If failed.
+ */ + @Test + public void testEncryptedCacheFromGroupDestroy() throws Exception { + T2<IgniteEx, IgniteEx> grids = startTestGrids(true); + + String encCacheName = cacheName(); + + String grpName = "group1"; + + createEncryptedCache(grids.get1(), grids.get2(), encCacheName + "2", grpName); + createEncryptedCache(grids.get1(), grids.get2(), encCacheName, grpName); + + checkEncryptedCaches(grids.get1(), grids.get2()); + + grids.get1().destroyCache(encCacheName); + + checkCacheDestroyed(grids.get2(), encCacheName, grpName, false); + + stopAllGrids(true); + + grids = startTestGrids(false); + + checkCacheDestroyed(grids.get1(), encCacheName, grpName, false); + + grids.get1().destroyCache(encCacheName + "2"); + + checkCacheDestroyed(grids.get1(), encCacheName + "2", grpName, true); + + stopAllGrids(true); + + grids = startTestGrids(false); + + checkCacheDestroyed(grids.get1(), encCacheName, grpName, true); + + checkCacheDestroyed(grids.get1(), encCacheName + "2", grpName, true); + } + + /** */ + private void checkCacheDestroyed(IgniteEx grid, String encCacheName, String grpName, boolean keyShouldBeEmpty) + throws Exception { + awaitPartitionMapExchange(); + + Collection<String> cacheNames = grid.cacheNames(); + + for (String cacheName : cacheNames) { + if (cacheName.equals(encCacheName)) + fail(encCacheName + " should be destroyed."); + } + + int grpId = CU.cacheGroupId(encCacheName, grpName); + + KeystoreEncryptionKey encKey = (KeystoreEncryptionKey)grid.context().encryption().groupKey(grpId); + MetaStorage metaStore = grid.context().cache().context().database().metaStorage(); + + if (keyShouldBeEmpty) { + assertNull(encKey); + + assertNull(metaStore.getData(ENCRYPTION_KEY_PREFIX + grpId)); + } else { + assertNotNull(encKey); + + assertNotNull(metaStore.getData(ENCRYPTION_KEY_PREFIX + grpId)); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheGroupCreateTest.java
b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheGroupCreateTest.java new file mode 100644 index 0000000000000..6a7bf83bdb9ca --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheGroupCreateTest.java @@ -0,0 +1,122 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.encryption; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; +import org.apache.ignite.internal.processors.cache.IgniteInternalCache; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionKey; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + */ +@RunWith(JUnit4.class) +public class EncryptedCacheGroupCreateTest extends AbstractEncryptionTest { + /** */ + public static final String ENCRYPTED_GROUP = "encrypted-group"; + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + cleanPersistenceDir(); + + IgniteEx igniteEx = startGrid(0); + + startGrid(1); + + igniteEx.cluster().active(true); + + awaitPartitionMapExchange(); + } + + /** @throws Exception If failed. 
*/ + @Test + public void testCreateEncryptedCacheGroup() throws Exception { + KeystoreEncryptionKey key = createEncryptedCache(ENCRYPTED_CACHE, ENCRYPTED_GROUP); + + CacheConfiguration ccfg = new CacheConfiguration<>(ENCRYPTED_CACHE + "2"); + + ccfg.setEncryptionEnabled(true); + ccfg.setGroupName(ENCRYPTED_GROUP); + + IgniteEx grid = grid(0); + + grid.createCache(ccfg); + + IgniteInternalCache encrypted2 = grid.cachex(ENCRYPTED_CACHE + "2"); + + GridEncryptionManager encMgr = encrypted2.context().kernalContext().encryption(); + + KeystoreEncryptionKey key2 = (KeystoreEncryptionKey)encMgr.groupKey(CU.cacheGroupId(ENCRYPTED_CACHE, ENCRYPTED_GROUP)); + + assertNotNull(key2); + assertNotNull(key2.key()); + + assertEquals(key.key(), key2.key()); + } + + /** @throws Exception If failed. */ + @Test + public void testCreateNotEncryptedCacheInEncryptedGroupFails() throws Exception { + createEncryptedCache(ENCRYPTED_CACHE + "3", ENCRYPTED_GROUP + "3"); + + IgniteEx grid = grid(0); + + GridTestUtils.assertThrowsWithCause(() -> { + grid.createCache(new CacheConfiguration<>(ENCRYPTED_CACHE + "4") + .setEncryptionEnabled(false) + .setGroupName(ENCRYPTED_GROUP + "3")); + }, IgniteCheckedException.class); + } + + /** */ + private KeystoreEncryptionKey createEncryptedCache(String cacheName, String grpName) { + CacheConfiguration ccfg = new CacheConfiguration<>(cacheName); + + ccfg.setEncryptionEnabled(true); + ccfg.setGroupName(grpName); + + IgniteEx grid = grid(0); + + grid.createCache(ccfg); + + IgniteInternalCache enc = grid.cachex(cacheName); + + assertNotNull(enc); + + KeystoreEncryptionKey key = + (KeystoreEncryptionKey)grid.context().encryption().groupKey(CU.cacheGroupId(cacheName, grpName)); + + assertNotNull(key); + assertNotNull(key.key()); + + return key; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheNodeJoinTest.java b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheNodeJoinTest.java new 
file mode 100644 index 0000000000000..cca156aea426f --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheNodeJoinTest.java @@ -0,0 +1,249 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.encryption; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.util.IgniteUtils; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.testframework.GridTestUtils.assertThrowsWithCause; + +/** + */ +@RunWith(JUnit4.class) +public class EncryptedCacheNodeJoinTest extends AbstractEncryptionTest { + /** */ + private static final String GRID_2 = "grid-2"; + + /** */ + private static final String GRID_3 = "grid-3"; + + /** */ + private static final String GRID_4 = "grid-4"; + + /** */ + private static final String GRID_5 = "grid-5"; + + /** */ + public static final String CLIENT = "client"; + + /** */ + private 
boolean configureCache; + + /** */ + private static final String KEYSTORE_PATH_2 = + IgniteUtils.resolveIgnitePath("modules/core/src/test/resources/other_tde_keystore.jks").getAbsolutePath(); + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + configureCache = false; + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String grid) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(grid); + + cfg.setConsistentId(grid); + + if (grid.equals(GRID_0) || + grid.equals(GRID_2) || + grid.equals(GRID_3) || + grid.equals(GRID_4) || + grid.equals(GRID_5)) { + KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi(); + + encSpi.setKeyStorePath(grid.equals(GRID_2) ? KEYSTORE_PATH_2 : KEYSTORE_PATH); + encSpi.setKeyStorePassword(KEYSTORE_PASSWORD.toCharArray()); + + cfg.setEncryptionSpi(encSpi); + } + else + cfg.setEncryptionSpi(null); + + cfg.setClientMode(grid.equals(CLIENT)); + + if (configureCache) + cfg.setCacheConfiguration(cacheConfiguration(grid)); + + return cfg; + } + + /** */ + protected CacheConfiguration cacheConfiguration(String gridName) { + CacheConfiguration ccfg = defaultCacheConfiguration(); + + ccfg.setName(cacheName()); + ccfg.setEncryptionEnabled(gridName.equals(GRID_0)); + + return ccfg; + } + + /** */ + @Test + public void testNodeCantJoinWithoutEncryptionSpi() throws Exception { + startGrid(GRID_0); + + assertThrowsWithCause(() -> { + try { + startGrid(GRID_1); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, IgniteCheckedException.class); + } + + /** */ + @Test + public void testNodeCantJoinWithDifferentKeyStore() throws Exception { + startGrid(GRID_0); + + assertThrowsWithCause(() -> { + try { + startGrid(GRID_2); + } + catch (Exception e) { + throw new RuntimeException(e); + } 
+ }, IgniteCheckedException.class); + } + + /** */ + @Test + public void testNodeCanJoin() throws Exception { + startGrid(GRID_0); + + startGrid(GRID_3).cluster().active(true); + } + + /** */ + @Test + public void testNodeCantJoinWithDifferentCacheKeys() throws Exception { + IgniteEx grid0 = startGrid(GRID_0); + startGrid(GRID_3); + + grid0.cluster().active(true); + + stopGrid(GRID_3, false); + + createEncryptedCache(grid0, null, cacheName(), null, false); + + stopGrid(GRID_0, false); + IgniteEx grid3 = startGrid(GRID_3); + + grid3.cluster().active(true); + + createEncryptedCache(grid3, null, cacheName(), null, false); + + assertThrowsWithCause(() -> { + try { + startGrid(GRID_0); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, IgniteCheckedException.class); + } + + /** */ + @Test + public void testThirdNodeCanJoin() throws Exception { + IgniteEx grid0 = startGrid(GRID_0); + + IgniteEx grid3 = startGrid(GRID_3); + + grid3.cluster().active(true); + + createEncryptedCache(grid0, grid3, cacheName(), null); + + checkEncryptedCaches(grid0, grid3); + + IgniteEx grid4 = startGrid(GRID_4); + + awaitPartitionMapExchange(); + + checkEncryptedCaches(grid0, grid4); + } + + /** */ + @Test + public void testClientNodeJoin() throws Exception { + IgniteEx grid0 = startGrid(GRID_0); + + IgniteEx grid3 = startGrid(GRID_3); + + grid3.cluster().active(true); + + IgniteEx client = startGrid(CLIENT); + + createEncryptedCache(client, grid0, cacheName(), null); + } + + /** */ + @Test + public void testNodeCantJoinWithSameNameButNotEncCache() throws Exception { + configureCache = true; + + IgniteEx grid0 = startGrid(GRID_0); + + grid0.cluster().active(true); + + assertThrowsWithCause(() -> { + try { + startGrid(GRID_5); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, IgniteCheckedException.class); + } + + /** */ + @Test + public void testNodeCantJoinWithSameNameButEncCache() throws Exception { + configureCache = true; + + IgniteEx grid0 = 
startGrid(GRID_5); + + grid0.cluster().active(true); + + assertThrowsWithCause(() -> { + try { + startGrid(GRID_0); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, IgniteCheckedException.class); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCachePreconfiguredRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCachePreconfiguredRestartTest.java new file mode 100644 index 0000000000000..58e2c3d1b1760 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCachePreconfiguredRestartTest.java @@ -0,0 +1,93 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.encryption; + +import org.apache.ignite.IgniteCache; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** */ +@RunWith(JUnit4.class) +public class EncryptedCachePreconfiguredRestartTest extends EncryptedCacheRestartTest { + /** */ + private boolean differentCachesOnNodes; + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + cleanPersistenceDir(); + } + + /** @throws Exception If failed. */ + @Test + public void testDifferentPreconfiguredCachesOnNodes() throws Exception { + differentCachesOnNodes = true; + + super.testCreateEncryptedCache(); + } + + /** {@inheritDoc} */ + @Test + @Override public void testCreateEncryptedCache() throws Exception { + differentCachesOnNodes = false; + + super.testCreateEncryptedCache(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + String cacheName = ENCRYPTED_CACHE + (differentCachesOnNodes ? "." + igniteInstanceName : ""); + + CacheConfiguration ccfg = new CacheConfiguration(cacheName) + .setEncryptionEnabled(true); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** + * @return Cache name. + */ + @NotNull @Override protected String cacheName() { + return ENCRYPTED_CACHE + (differentCachesOnNodes ? "." + GRID_1 : ""); + } + + /** + * Creates encrypted cache. 
+ */ + @Override protected void createEncryptedCache(IgniteEx grid0, IgniteEx grid1, String cacheName, String groupName) { + IgniteCache cache = grid0.cache(cacheName()); + + for (long i=0; i<100; i++) + cache.put(i, "" + i); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheRestartTest.java new file mode 100644 index 0000000000000..638fc53320652 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/encryption/EncryptedCacheRestartTest.java @@ -0,0 +1,69 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.encryption; + +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.util.typedef.T2; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionKey; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** */ +@RunWith(JUnit4.class) +public class EncryptedCacheRestartTest extends AbstractEncryptionTest { + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(false); + + cleanPersistenceDir(); + } + + /** @throws Exception If failed. */ + @Test + public void testCreateEncryptedCache() throws Exception { + T2<IgniteEx, IgniteEx> grids = startTestGrids(true); + + createEncryptedCache(grids.get1(), grids.get2(), cacheName(), null); + + checkEncryptedCaches(grids.get1(), grids.get2()); + + int grpId = CU.cacheGroupId(cacheName(), null); + + KeystoreEncryptionKey keyBeforeRestart = (KeystoreEncryptionKey)grids.get1().context().encryption().groupKey(grpId); + + stopAllGrids(); + + grids = startTestGrids(false); + + checkEncryptedCaches(grids.get1(), grids.get2()); + + KeystoreEncryptionKey keyAfterRestart = (KeystoreEncryptionKey)grids.get1().context().encryption().groupKey(grpId); + + assertNotNull(keyAfterRestart); + assertNotNull(keyAfterRestart.key()); + + assertEquals(keyBeforeRestart.key(), keyAfterRestart.key()); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerLocalMessageListenerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerLocalMessageListenerSelfTest.java index b88eef9e417d1..e969110d64018 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerLocalMessageListenerSelfTest.java +++
b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerLocalMessageListenerSelfTest.java @@ -32,11 +32,11 @@ import org.apache.ignite.spi.IgniteSpiContext; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.NANOSECONDS; @@ -45,10 +45,8 @@ /** * Test Managers to add and remove local message listener. */ +@RunWith(JUnit4.class) public class GridManagerLocalMessageListenerSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final short DIRECT_TYPE = 210; @@ -64,12 +62,6 @@ public class GridManagerLocalMessageListenerSelfTest extends GridCommonAbstractT @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - c.setDiscoverySpi(discoSpi); - TcpCommunicationSpi commSpi = new TcpCommunicationSpi(); c.setCommunicationSpi(commSpi); @@ -85,6 +77,7 @@ public class GridManagerLocalMessageListenerSelfTest extends GridCommonAbstractT /** * @throws Exception If failed. 
*/ + @Test public void testSendMessage() throws Exception { startGridsMultiThreaded(2); @@ -121,6 +114,7 @@ public void testSendMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddLocalMessageListener() throws Exception { startGrid(); @@ -136,6 +130,7 @@ public void testAddLocalMessageListener() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveLocalMessageListener() throws Exception { startGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerMxBeanIllegalArgumentHandleTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerMxBeanIllegalArgumentHandleTest.java index 0276abdff2aa2..7e0caa916a1c4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerMxBeanIllegalArgumentHandleTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerMxBeanIllegalArgumentHandleTest.java @@ -21,17 +21,20 @@ import java.lang.management.MemoryUsage; import java.lang.reflect.Field; import java.lang.reflect.Modifier; -import junit.framework.TestCase; import org.apache.ignite.IgniteLogger; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.jetbrains.annotations.NotNull; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; import org.mockito.Mockito; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; +import static org.junit.Assert.assertEquals; import static org.mockito.Mockito.when; /** @@ -39,7 +42,7 @@ * * Test modifies static final field, used only for development */ -public class GridManagerMxBeanIllegalArgumentHandleTest extends TestCase { +public class GridManagerMxBeanIllegalArgumentHandleTest { /** Original 
value of {@link GridDiscoveryManager#mem} to be restored after test */ private Object mxBeanToRestore; @@ -49,9 +52,9 @@ public class GridManagerMxBeanIllegalArgumentHandleTest extends TestCase { /** If we succeeded to set final field this flag is true, otherwise test assertions will not be performed */ private boolean correctSetupOfTestPerformed; - /** {@inheritDoc} Changes field to always failing mock */ - @Override public void setUp() throws Exception { - super.setUp(); + /** Changes field to always failing mock. */ + @Before + public void setUp() throws Exception { try { final MemoryMXBean memoryMXBean = createAlwaysFailingMxBean(); memMxBeanField = createAccessibleMemField(); @@ -95,13 +98,14 @@ public class GridManagerMxBeanIllegalArgumentHandleTest extends TestCase { * * @throws Exception if field set failed */ - @Override public void tearDown() throws Exception { - super.tearDown(); + @After + public void tearDown() throws Exception { if (correctSetupOfTestPerformed) memMxBeanField.set(null, mxBeanToRestore); } /** Creates minimal disco manager mock, checks illegal state is not propagated */ + @Test public void testIllegalStateIsCatch() { final IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setDiscoverySpi(new TcpDiscoverySpi()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerStopSelfTest.java index 283ef8702ea56..bef42cba028d5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridManagerStopSelfTest.java @@ -47,11 +47,15 @@ import org.apache.ignite.spi.loadbalancing.roundrobin.RoundRobinLoadBalancingSpi; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Managers stop test. * */ +@RunWith(JUnit4.class) public class GridManagerStopSelfTest extends GridCommonAbstractTest { /** Kernal context. */ private GridTestKernalContext ctx; @@ -88,6 +92,7 @@ private void injectLogger(IgniteSpi target) throws IgniteCheckedException { /** * @throws Exception If failed. */ + @Test public void testStopCheckpointManager() throws Exception { SharedFsCheckpointSpi spi = new SharedFsCheckpointSpi(); @@ -103,6 +108,7 @@ public void testStopCheckpointManager() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopCollisionManager() throws Exception { CollisionSpi spi = new FifoQueueCollisionSpi(); @@ -118,6 +124,7 @@ public void testStopCollisionManager() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopCommunicationManager() throws Exception { CommunicationSpi spi = new TcpCommunicationSpi(); @@ -136,6 +143,7 @@ public void testStopCommunicationManager() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopDeploymentManager() throws Exception { DeploymentSpi spi = new LocalDeploymentSpi(); @@ -151,6 +159,7 @@ public void testStopDeploymentManager() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopDiscoveryManager() throws Exception { DiscoverySpi spi = new TcpDiscoverySpi(); @@ -166,6 +175,7 @@ public void testStopDiscoveryManager() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopEventStorageManager() throws Exception { EventStorageSpi spi = new MemoryEventStorageSpi(); @@ -181,6 +191,7 @@ public void testStopEventStorageManager() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStopFailoverManager() throws Exception { AlwaysFailoverSpi spi = new AlwaysFailoverSpi(); @@ -196,6 +207,7 @@ public void testStopFailoverManager() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopLoadBalancingManager() throws Exception { RoundRobinLoadBalancingSpi spi = new RoundRobinLoadBalancingSpi(); @@ -207,4 +219,4 @@ public void testStopLoadBalancingManager() throws Exception { mgr.stop(true); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridNoopManagerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridNoopManagerSelfTest.java index 795bda41fe069..4146c2eba6999 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/GridNoopManagerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/GridNoopManagerSelfTest.java @@ -27,14 +27,19 @@ import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests manager with {@link org.apache.ignite.spi.IgniteSpiNoop} SPI's. */ +@RunWith(JUnit4.class) public class GridNoopManagerSelfTest extends GridCommonAbstractTest { /** * */ + @Test public void testEnabledManager() throws IgniteCheckedException { GridTestKernalContext ctx = new GridTestKernalContext(new GridStringLogger()); @@ -96,4 +101,4 @@ private static class Spi extends IgniteSpiAdapter implements TestSpi { private static class NoopSpi extends Spi { // No-op. 
} -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/IgniteDiagnosticMessagesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/IgniteDiagnosticMessagesTest.java index a29d572cb6949..6de5bb510d14c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/IgniteDiagnosticMessagesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/IgniteDiagnosticMessagesTest.java @@ -26,6 +26,7 @@ import java.util.concurrent.atomic.AtomicReference; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -43,17 +44,19 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_LONG_OPERATIONS_DUMP_TIMEOUT; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; -import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static 
org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -62,10 +65,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteDiagnosticMessagesTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -82,8 +83,6 @@ public class IgniteDiagnosticMessagesTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (testSpi) cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); @@ -109,8 +108,17 @@ public class IgniteDiagnosticMessagesTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testDiagnosticMessages1() throws Exception { - checkBasicDiagnosticInfo(); + checkBasicDiagnosticInfo(CacheAtomicityMode.TRANSACTIONAL); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testDiagnosticMessagesMvcc1() throws Exception { + checkBasicDiagnosticInfo(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); } /** @@ -119,13 +127,40 @@ public void testDiagnosticMessages1() throws Exception { public void testDiagnosticMessages2() throws Exception { connectionsPerNode = 5; - checkBasicDiagnosticInfo(); + checkBasicDiagnosticInfo(CacheAtomicityMode.TRANSACTIONAL); } /** * @throws Exception If failed. */ + @Test + public void testDiagnosticMessagesMvcc2() throws Exception { + connectionsPerNode = 5; + + checkBasicDiagnosticInfo(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testLongRunning() throws Exception { + checkLongRunning(TRANSACTIONAL); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testLongRunningMvcc() throws Exception { + checkLongRunning(TRANSACTIONAL_SNAPSHOT); + } + + /** + * @param atomicityMode Cache atomicity mode. + * @throws Exception If failed. + */ + public void checkLongRunning(CacheAtomicityMode atomicityMode) throws Exception { System.setProperty(IGNITE_LONG_OPERATIONS_DUMP_TIMEOUT, "3500"); try { @@ -139,11 +174,7 @@ public void testLongRunning() throws Exception { awaitPartitionMapExchange(); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - - ccfg.setWriteSynchronizationMode(FULL_SYNC); - ccfg.setCacheMode(PARTITIONED); - ccfg.setAtomicityMode(TRANSACTIONAL); + CacheConfiguration ccfg = cacheConfiguration(atomicityMode); final Ignite node0 = ignite(0); @@ -183,10 +214,40 @@ public void testLongRunning() throws Exception { } } + /** + * @param atomicityMode Cache atomicity mode. + * @return Cache configuration. + */ + @SuppressWarnings("unchecked") + private CacheConfiguration cacheConfiguration(CacheAtomicityMode atomicityMode) { + return defaultCacheConfiguration() + .setAtomicityMode(atomicityMode) + .setWriteSynchronizationMode(FULL_SYNC) + .setNearConfiguration(null); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9322") // Fix diagnostic message or disable test. + @Test + public void testSeveralLongRunningMvccTxs() throws Exception { + checkSeveralLongRunningTxs(TRANSACTIONAL_SNAPSHOT); + } + /** * @throws Exception If failed. */ + @Test public void testSeveralLongRunningTxs() throws Exception { + checkSeveralLongRunningTxs(TRANSACTIONAL); + } + + /** + * @param atomicityMode Cache atomicity mode. + * @throws Exception If failed. 
+ */ + public void checkSeveralLongRunningTxs(CacheAtomicityMode atomicityMode) throws Exception { int timeout = 3500; System.setProperty(IGNITE_LONG_OPERATIONS_DUMP_TIMEOUT, String.valueOf(timeout)); @@ -204,11 +265,7 @@ public void testSeveralLongRunningTxs() throws Exception { awaitPartitionMapExchange(); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - - ccfg.setWriteSynchronizationMode(FULL_SYNC); - ccfg.setCacheMode(PARTITIONED); - ccfg.setAtomicityMode(TRANSACTIONAL); + CacheConfiguration ccfg = cacheConfiguration(atomicityMode); final Ignite node0 = ignite(0); final Ignite node1 = ignite(1); @@ -294,8 +351,29 @@ private int countTxKeysInASingleBlock(String log) { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9322") // Fix diagnostic message or disable test. + @Test + public void testLongRunningMvccTx() throws Exception { + checkLongRunningTx(TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testLongRunningTx() throws Exception { - System.setProperty(IGNITE_LONG_OPERATIONS_DUMP_TIMEOUT, "3500"); + checkLongRunningTx(TRANSACTIONAL); + + } + + /** + * @param atomicityMode Cache atomicity mode. + * @throws Exception If failed. 
+ */ + public void checkLongRunningTx(CacheAtomicityMode atomicityMode) throws Exception { + final int longOpDumpTimeout = 1000; + + System.setProperty(IGNITE_LONG_OPERATIONS_DUMP_TIMEOUT, String.valueOf(longOpDumpTimeout)); try { startGrid(0); @@ -308,11 +386,7 @@ public void testLongRunningTx() throws Exception { awaitPartitionMapExchange(); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - - ccfg.setWriteSynchronizationMode(FULL_SYNC); - ccfg.setCacheMode(PARTITIONED); - ccfg.setAtomicityMode(TRANSACTIONAL); + CacheConfiguration ccfg = cacheConfiguration(atomicityMode); final Ignite node0 = ignite(0); final Ignite node1 = ignite(1); @@ -386,7 +460,24 @@ public void testLongRunningTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoteTx() throws Exception { + checkRemoteTx(TRANSACTIONAL); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testRemoteMvccTx() throws Exception { + checkRemoteTx(TRANSACTIONAL_SNAPSHOT); + } + + /** + * @param atomicityMode Cache atomicity mode. + * @throws Exception If failed. 
+ */ + public void checkRemoteTx(CacheAtomicityMode atomicityMode) throws Exception { int timeout = 3500; System.setProperty(IGNITE_LONG_OPERATIONS_DUMP_TIMEOUT, String.valueOf(timeout)); @@ -404,13 +495,11 @@ public void testRemoteTx() throws Exception { awaitPartitionMapExchange(); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + CacheConfiguration ccfg = cacheConfiguration(atomicityMode).setBackups(1); - ccfg.setWriteSynchronizationMode(FULL_SYNC); - ccfg.setCacheMode(PARTITIONED); - ccfg.setAtomicityMode(TRANSACTIONAL); - ccfg.setBackups(1); - ccfg.setNearConfiguration(new NearCacheConfiguration()); + if (atomicityMode != TRANSACTIONAL_SNAPSHOT || + MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) + ccfg.setNearConfiguration(new NearCacheConfiguration<>()); final Ignite node0 = ignite(0); final Ignite node1 = ignite(1); @@ -462,9 +551,10 @@ public void testRemoteTx() throws Exception { } /** + * @param atomicityMode Cache atomicity mode. * @throws Exception If failed. 
*/ - private void checkBasicDiagnosticInfo() throws Exception { + private void checkBasicDiagnosticInfo(CacheAtomicityMode atomicityMode) throws Exception { startGrids(3); client = true; @@ -473,11 +563,7 @@ private void checkBasicDiagnosticInfo() throws Exception { startGrid(4); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - - ccfg.setWriteSynchronizationMode(FULL_SYNC); - ccfg.setCacheMode(REPLICATED); - ccfg.setAtomicityMode(TRANSACTIONAL); + CacheConfiguration ccfg = cacheConfiguration(atomicityMode).setCacheMode(REPLICATED); ignite(0).createCache(ccfg); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerAbstractSelfTest.java index 45699796d2d62..8d22747386720 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerAbstractSelfTest.java @@ -50,9 +50,6 @@ import org.apache.ignite.resources.TaskSessionResource; import org.apache.ignite.spi.checkpoint.cache.CacheCheckpointSpi; import org.apache.ignite.spi.checkpoint.jdbc.JdbcCheckpointSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.hsqldb.jdbc.jdbcDataSource; @@ -111,9 +108,6 @@ public abstract class GridCheckpointManagerAbstractSelfTest extends GridCommonAb /** */ private static final String SES_VAL_OVERWRITTEN = SES_VAL + "-overwritten"; - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** * Static variable to control whether test should retry checkpoint read attempts. * It is needed for s3-based tests because of weak s3 consistency model. @@ -139,12 +133,6 @@ private GridCheckpointManager checkpoints(Ignite ignite) { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - if (igniteInstanceName.contains("cache")) { String cacheName = "test-checkpoints"; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerSelfTest.java index 414d05c765877..beb3ce26daae6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointManagerSelfTest.java @@ -18,15 +18,20 @@ package org.apache.ignite.internal.managers.checkpoint; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridCommonTest(group = "Checkpoint Manager") +@RunWith(JUnit4.class) public class GridCheckpointManagerSelfTest extends GridCheckpointManagerAbstractSelfTest { /** * @throws Exception Thrown if any exception occurs. */ + @Test public void testCacheBased() throws Exception { doTest("cache"); } @@ -34,6 +39,7 @@ public void testCacheBased() throws Exception { /** * @throws Exception Thrown if any exception occurs. */ + @Test public void testSharedFsBased() throws Exception { doTest("sharedfs"); } @@ -41,6 +47,7 @@ public void testSharedFsBased() throws Exception { /** * @throws Exception Thrown if any exception occurs. 
*/ + @Test public void testDatabaseBased() throws Exception { doTest("jdbc"); } @@ -48,6 +55,7 @@ public void testDatabaseBased() throws Exception { /** * @throws Exception Thrown if any exception occurs. */ + @Test public void testMultiNodeCacheBased() throws Exception { doMultiNodeTest("cache"); } @@ -55,6 +63,7 @@ public void testMultiNodeCacheBased() throws Exception { /** * @throws Exception Thrown if any exception occurs. */ + @Test public void testMultiNodeSharedFsBased() throws Exception { doMultiNodeTest("sharedfs"); } @@ -62,7 +71,8 @@ public void testMultiNodeSharedFsBased() throws Exception { /** * @throws Exception Thrown if any exception occurs. */ + @Test public void testMultiNodeDatabaseBased() throws Exception { doMultiNodeTest("jdbc"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointTaskSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointTaskSelfTest.java index e36f54a77a9e7..8b071bbc1c8c1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointTaskSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/checkpoint/GridCheckpointTaskSelfTest.java @@ -36,12 +36,11 @@ import org.apache.ignite.resources.TaskSessionResource; import org.apache.ignite.spi.checkpoint.CheckpointSpi; import org.apache.ignite.spi.checkpoint.cache.CacheCheckpointSpi; -import org.apache.ignite.spi.discovery.DiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -49,10 +48,8 @@ /** * Checkpoint tests. */ +@RunWith(JUnit4.class) public class GridCheckpointTaskSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Checkpoints cache name. */ private static final String CACHE_NAME = "checkpoints.cache"; @@ -65,7 +62,6 @@ public class GridCheckpointTaskSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheConfiguration()); cfg.setCheckpointSpi(checkpointSpi()); - cfg.setDiscoverySpi(discoverySpi()); return cfg; } @@ -94,17 +90,6 @@ private CheckpointSpi checkpointSpi() { return spi; } - /** - * @return Discovery SPI. - */ - private DiscoverySpi discoverySpi() { - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - return spi; - } - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { startGrid(1); @@ -122,6 +107,7 @@ private DiscoverySpi discoverySpi() { /** * @throws Exception If failed. */ + @Test public void testFailover() throws Exception { grid(1).compute().execute(FailoverTestTask.class, null); } @@ -129,6 +115,7 @@ public void testFailover() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReduce() throws Exception { grid(1).compute().execute(ReduceTestTask.class, null); } @@ -238,4 +225,4 @@ private static class ReduceTestTask extends ComputeTaskAdapter { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/GridCommunicationManagerListenersSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/GridCommunicationManagerListenersSelfTest.java index 03b7921f5b571..fb5e2921fd9d0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/GridCommunicationManagerListenersSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/GridCommunicationManagerListenersSelfTest.java @@ -33,11 +33,15 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid communication manager self test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridCommunicationManagerListenersSelfTest extends GridCommonAbstractTest { /** */ public GridCommunicationManagerListenersSelfTest() { @@ -47,7 +51,7 @@ public GridCommunicationManagerListenersSelfTest() { /** * Works fine. */ - @SuppressWarnings({"deprecation"}) + @Test public void testDifferentListeners() { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -68,6 +72,7 @@ public void testDifferentListeners() { /** * Fails on the 1001st time. */ + @Test public void testMultipleExecutionsWithoutListeners() { checkLoop(1001); } @@ -76,7 +81,7 @@ public void testMultipleExecutionsWithoutListeners() { * This is the workaround- as long as we keep a message listener in * the stack, our FIFO bug isn't exposed. Comment above out to see. 
*/ - @SuppressWarnings({"deprecation"}) + @Test public void testOneListener() { Ignite ignite = G.ignite(getTestIgniteInstanceName()); @@ -101,6 +106,7 @@ public void testOneListener() { * Now, our test will fail on the first message added after our safety * message listener has been removed. */ + @Test public void testSingleExecutionWithoutListeners() { checkLoop(1); } @@ -163,4 +169,4 @@ private static class MessageListeningTask extends ComputeTaskSplitAdapter() { @Override public Object call() throws Exception { @@ -101,6 +106,7 @@ public void testSendIfOneOfNodesIsLocalAndTopicIsEnum() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSendUserMessageThinVersionIfOneOfNodesIsLocal() throws Exception { Object msg = new Object(); @@ -125,6 +131,7 @@ public void testSendUserMessageThinVersionIfOneOfNodesIsLocal() throws Exception /** * @throws Exception If failed. */ + @Test public void testSendUserMessageUnorderedThickVersionIfOneOfNodesIsLocal() throws Exception { Object msg = new Object(); @@ -149,6 +156,7 @@ public void testSendUserMessageUnorderedThickVersionIfOneOfNodesIsLocal() throws /** * @throws Exception If failed. 
*/ + @Test public void testSendUserMessageOrderedThickVersionIfOneOfNodesIsLocal() throws Exception { Object msg = new Object(); @@ -243,4 +251,4 @@ private static class TestMessage implements Message { return 0; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceMultipleConnectionsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceMultipleConnectionsTest.java index 444f086522a8b..e6aeee1313fe8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceMultipleConnectionsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceMultipleConnectionsTest.java @@ -17,9 +17,12 @@ package org.apache.ignite.internal.managers.communication; +import org.junit.Ignore; + /** * */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-5689") public class IgniteCommunicationBalanceMultipleConnectionsTest extends IgniteCommunicationBalanceTest { /** {@inheritDoc} */ @Override protected int connectionsPerNode() { @@ -28,6 +31,6 @@ public class IgniteCommunicationBalanceMultipleConnectionsTest extends IgniteCom /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-5689"); + // No-op. 
} } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceTest.java index 666bc1dfbf6c2..4b4de15fc3e72 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteCommunicationBalanceTest.java @@ -36,19 +36,17 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCommunicationBalanceTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -68,8 +66,6 @@ public class IgniteCommunicationBalanceTest extends GridCommonAbstractTest { if (selectors > 0) commSpi.setSelectorsCount(selectors); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); if (sslEnabled()) @@ -109,6 +105,7 @@ protected int connectionsPerNode() { /** * @throws Exception If failed. */ + @Test public void testBalance1() throws Exception { if (sslEnabled()) return; @@ -210,6 +207,7 @@ public void testBalance1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBalance2() throws Exception { System.setProperty(IgniteSystemProperties.IGNITE_IO_BALANCE_PERIOD, "1000"); @@ -317,6 +315,7 @@ private void waitNioBalanceStop(List nodes, long timeout) throws Excepti /** * @throws Exception If failed. */ + @Test public void testRandomBalance() throws Exception { System.setProperty(GridNioServer.IGNITE_IO_BALANCE_RANDOM_BALANCE, "true"); System.setProperty(IgniteSystemProperties.IGNITE_IO_BALANCE_PERIOD, "500"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessagesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessagesTest.java index 65231e7097222..57e500c2874a8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessagesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteIoTestMessagesTest.java @@ -24,18 +24,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteIoTestMessagesTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -43,8 +41,6 @@ public class IgniteIoTestMessagesTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = 
super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -66,6 +62,7 @@ public class IgniteIoTestMessagesTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testIoTestMessages() throws Exception { for (Ignite node : G.allGrids()) { IgniteKernal ignite = (IgniteKernal)node; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteVariousConnectionNumberTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteVariousConnectionNumberTest.java index 2ea1f90b854ff..e68047e269ec3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteVariousConnectionNumberTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/communication/IgniteVariousConnectionNumberTest.java @@ -28,11 +28,11 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -40,10 +40,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteVariousConnectionNumberTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 6; @@ -57,8 
+55,6 @@ public class IgniteVariousConnectionNumberTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - int connections = rnd.nextInt(10) + 1; log.info("Node connections [name=" + igniteInstanceName + ", connections=" + connections + ']'); @@ -86,6 +82,7 @@ public class IgniteVariousConnectionNumberTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testVariousConnectionNumber() throws Exception { startGridsMultiThreaded(3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/DeploymentRequestOfUnknownClassProcessingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/DeploymentRequestOfUnknownClassProcessingTest.java new file mode 100644 index 0000000000000..1c6027f710bc4 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/DeploymentRequestOfUnknownClassProcessingTest.java @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.managers.deployment; + +import java.net.URL; +import java.util.UUID; +import java.util.concurrent.TimeUnit; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.managers.communication.GridIoPolicy; +import org.apache.ignite.internal.managers.communication.GridMessageListener; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.GridTestExternalClassLoader; +import org.apache.ignite.testframework.ListeningTestLogger; +import org.apache.ignite.testframework.LogListener; +import org.apache.ignite.testframework.config.GridTestProperties; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +import static org.apache.ignite.internal.GridTopic.TOPIC_CLASSLOAD; + +/** + * Tests the processing of deployment request with an attempt to load a class with an unknown class name. 
+ */ +public class DeploymentRequestOfUnknownClassProcessingTest extends GridCommonAbstractTest { + /** */ + private static final String TEST_TOPIC_NAME = "TEST_TOPIC_NAME"; + + /** */ + private static final String UNKNOWN_CLASS_NAME = "unknown.UnknownClassName"; + + /** */ + private final ListeningTestLogger remNodeLog = new ListeningTestLogger(false, log); + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setPeerClassLoadingEnabled(true); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + startGrid(getConfiguration(getTestIgniteInstanceName(0))); + + IgniteConfiguration cfg = getConfiguration(getTestIgniteInstanceName(1)); + + cfg.setGridLogger(remNodeLog); + + startGrid(cfg); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testResponseReceivingOnDeploymentRequestOfUnknownClass() throws Exception { + IgniteEx locNode = grid(0); + IgniteEx remNode = grid(1); + + // Register deployment on the remote node so it attempts to load the class when the request is received. + GridTestExternalClassLoader ldr = new GridTestExternalClassLoader(new URL[] { + new URL(GridTestProperties.getProperty("p2p.uri.cls")) + }); + + Class task = ldr.loadClass("org.apache.ignite.tests.p2p.P2PTestTaskExternalPath1"); + + GridDeployment locDep = remNode.context().deploy().deploy(task, task.getClassLoader()); + + final GridFutureAdapter testResultFut = new GridFutureAdapter<>(); + + final LogListener remNodeLogLsnr = LogListener + .matches(s -> s.startsWith("Failed to resolve class: " + UNKNOWN_CLASS_NAME)).build(); + + remNodeLog.registerListener(remNodeLogLsnr); + + locNode.context().io().addMessageListener(TEST_TOPIC_NAME, new GridMessageListener() { + @Override public void onMessage(UUID nodeId, Object msg, byte plc) { + try { + assertTrue(msg instanceof GridDeploymentResponse); + + GridDeploymentResponse resp = (GridDeploymentResponse)msg; + + assertFalse("Unexpected response result, success=" + resp.success(), resp.success()); + + String errMsg = resp.errorMessage(); + + assertNotNull("Response should contain an error message.", errMsg); + + assertTrue("Response contains unexpected error message, errorMessage=" + errMsg, + errMsg.startsWith("Requested resource not found (ignoring locally): " + UNKNOWN_CLASS_NAME)); + + testResultFut.onDone(); + } + catch (Error e) { + testResultFut.onDone(e); + } + } + }); + + GridDeploymentRequest req = new GridDeploymentRequest(TEST_TOPIC_NAME, locDep.classLoaderId(), + UNKNOWN_CLASS_NAME, false); + + req.responseTopicBytes(U.marshal(locNode.context(), req.responseTopic())); + + locNode.context().io().sendToGridTopic(remNode.localNode(), TOPIC_CLASSLOAD, req, GridIoPolicy.P2P_POOL); + + // Checks that the expected response has been received.
+ testResultFut.get(5_000, TimeUnit.MILLISECONDS); + + // Checks that the error has been logged on the remote node. + assertTrue(remNodeLogLsnr.check()); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentManagerStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentManagerStopSelfTest.java index 1d1a20a20c5bd..71aabb1a092b9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentManagerStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentManagerStopSelfTest.java @@ -30,15 +30,20 @@ import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid deployment manager stop test. */ @GridCommonTest(group = "Kernal Self") +@RunWith(JUnit4.class) public class GridDeploymentManagerStopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testOnKernalStop() throws Exception { DeploymentSpi spi = new GridTestDeploymentSpi(); @@ -106,4 +111,4 @@ private static class GridTestDeploymentSpi implements DeploymentSpi { /** {@inheritDoc} */ @Override public void onClientReconnected(boolean clusterRestarted) { /* No-op. 
*/ } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentMessageCountSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentMessageCountSelfTest.java index 6b9a6818346cd..b417de41e487e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentMessageCountSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/deployment/GridDeploymentMessageCountSelfTest.java @@ -20,23 +20,20 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicInteger; -import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteException; -import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.compute.ComputeTaskFuture; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.managers.communication.GridIoMessage; -import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -44,16 +41,11 @@ /** * Tests message count for different deployment 
scenarios. */ +@RunWith(JUnit4.class) public class GridDeploymentMessageCountSelfTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Test p2p task. */ private static final String TEST_TASK = "org.apache.ignite.tests.p2p.SingleSplitTestTask"; - /** Test p2p value. */ - private static final String TEST_VALUE = "org.apache.ignite.tests.p2p.CacheDeploymentTestValue"; - /** SPIs. */ private Map commSpis = new ConcurrentHashMap<>(); @@ -61,12 +53,6 @@ public class GridDeploymentMessageCountSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - cfg.setPeerClassLoadingEnabled(true); CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -86,7 +72,7 @@ public class GridDeploymentMessageCountSelfTest extends GridCommonAbstractTest { } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") + @Test public void testTaskDeployment() throws Exception { ClassLoader ldr = getExternalClassLoader(); @@ -123,49 +109,6 @@ public void testTaskDeployment() throws Exception { } } - /** - * @throws Exception If failed. 
- */ - public void testCacheValueDeploymentOnPut() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-4551"); - - ClassLoader ldr = getExternalClassLoader(); - - Class valCls = ldr.loadClass(TEST_VALUE); - - try { - startGrids(2); - - IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); - - cache.put("key", valCls.newInstance()); - - for (int i = 0; i < 2; i++) - assertNotNull("For grid: " + i, grid(i).cache(DEFAULT_CACHE_NAME).localPeek("key", CachePeekMode.ONHEAP)); - - for (MessageCountingCommunicationSpi spi : commSpis.values()) { - assertTrue(spi.deploymentMessageCount() > 0); - - spi.resetCount(); - } - - for (int i = 0; i < 10; i++) { - String key = "key" + i; - - cache.put(key, valCls.newInstance()); - - for (int k = 0; k < 2; k++) - assertNotNull(grid(k).cache(DEFAULT_CACHE_NAME).localPeek(key, CachePeekMode.ONHEAP)); - } - - for (MessageCountingCommunicationSpi spi : commSpis.values()) - assertEquals(0, spi.deploymentMessageCount()); - } - finally { - stopAllGrids(); - } - } - /** * */ @@ -174,12 +117,12 @@ private class MessageCountingCommunicationSpi extends TcpCommunicationSpi { private AtomicInteger msgCnt = new AtomicInteger(); /** {@inheritDoc} */ - @Override public void sendMessage(ClusterNode node, Message msg, IgniteInClosure ackClosure) + @Override public void sendMessage(ClusterNode node, Message msg, IgniteInClosure ackC) throws IgniteSpiException { if (isDeploymentMessage((GridIoMessage)msg)) msgCnt.incrementAndGet(); - super.sendMessage(node, msg, ackClosure); + super.sendMessage(node, msg, ackC); } /** @@ -213,4 +156,4 @@ private boolean isDeploymentMessage(GridIoMessage msg) { return dep; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAliveCacheSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAliveCacheSelfTest.java index 8fad6404f1936..b58f056293a27 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAliveCacheSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAliveCacheSelfTest.java @@ -32,9 +32,10 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -43,10 +44,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridDiscoveryManagerAliveCacheSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int PERM_NODES_CNT = 5; @@ -96,19 +95,17 @@ public class GridDiscoveryManagerAliveCacheSelfTest extends GridCommonAbstractTe cCfg.setRebalanceMode(SYNC); cCfg.setWriteSynchronizationMode(FULL_SYNC); - TcpDiscoverySpi disc = new TcpDiscoverySpi(); + TcpDiscoverySpi disc = (TcpDiscoverySpi)cfg.getDiscoverySpi(); if (clientMode && ((igniteInstanceName.charAt(igniteInstanceName.length() - 1) - '0') & 1) != 0) cfg.setClientMode(true); else cfg.setClientFailureDetectionTimeout(50000); - disc.setIpFinder(IP_FINDER); disc.setAckTimeout(1000); disc.setSocketTimeout(1000); cfg.setCacheConfiguration(cCfg); - cfg.setDiscoverySpi(disc); cfg.setMetricsUpdateFrequency(500); return cfg; @@ -159,6 +156,7 @@ private void doTestAlive() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAlives() throws Exception { clientMode = false; @@ -168,6 +166,7 @@ public void testAlives() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAlivesClient() throws Exception { clientMode = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAttributesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAttributesSelfTest.java index 69f95e8aab1d9..8db1551e94594 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAttributesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManagerAttributesSelfTest.java @@ -27,27 +27,25 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.TestReconnectPluginProvider; import org.apache.ignite.spi.discovery.tcp.TestReconnectProcessor; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_BINARY_MARSHALLER_USE_STRING_SERIALIZATION_VER_2; import static org.apache.ignite.IgniteSystemProperties.IGNITE_OPTIMIZED_MARSHALLER_USE_DEFAULT_SUID; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SECURITY_COMPATIBILITY_MODE; -import static org.apache.ignite.IgniteSystemProperties.IGNITE_SERVICES_COMPATIBILITY_MODE; import static org.apache.ignite.configuration.DeploymentMode.CONTINUOUS; import static org.apache.ignite.configuration.DeploymentMode.SHARED; /** * Tests for node attributes consistency checks. 
*/ +@RunWith(JUnit4.class) public abstract class GridDiscoveryManagerAttributesSelfTest extends GridCommonAbstractTest { /** */ private static final String PREFER_IPV4 = "java.net.preferIPv4Stack"; - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static DeploymentMode mode; @@ -71,12 +69,6 @@ public abstract class GridDiscoveryManagerAttributesSelfTest extends GridCommonA cfg.setDeploymentMode(mode); cfg.setPeerClassLoadingEnabled(p2pEnabled); - TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi(); - - discoverySpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoverySpi); - return cfg; } @@ -95,6 +87,7 @@ public abstract class GridDiscoveryManagerAttributesSelfTest extends GridCommonA /** * @throws Exception If failed. */ + @Test public void testPreferIpV4StackTrue() throws Exception { testPreferIpV4Stack(true); } @@ -102,6 +95,7 @@ public void testPreferIpV4StackTrue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPreferIpV4StackFalse() throws Exception { testPreferIpV4Stack(false); } @@ -116,6 +110,7 @@ public void testPreferIpV4StackFalse() throws Exception { * * @throws Exception If failed. */ + @Test public void testPreferIpV4StackDifferentValues() throws Exception { System.setProperty(PREFER_IPV4, "true"); @@ -137,6 +132,7 @@ public void testPreferIpV4StackDifferentValues() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testUseDefaultSuid() throws Exception { try { doTestUseDefaultSuid(Boolean.TRUE.toString(), Boolean.FALSE.toString(), true); @@ -184,6 +180,7 @@ private void doTestUseDefaultSuid(String first, String second, boolean fail) thr } } + @Test public void testUseStringSerVer2() throws Exception { String old = System.getProperty(IGNITE_BINARY_MARSHALLER_USE_STRING_SERIALIZATION_VER_2); @@ -248,42 +245,7 @@ private void doTestUseStrSerVer2(String first, String second, boolean fail) thro /** * @throws Exception If failed. */ - public void testServiceCompatibilityEnabled() throws Exception { - String backup = System.getProperty(IGNITE_SERVICES_COMPATIBILITY_MODE); - - try { - doTestServiceCompatibilityEnabled(true, null, true); - doTestServiceCompatibilityEnabled(false, null, true); - doTestServiceCompatibilityEnabled(null, false, true); - doTestServiceCompatibilityEnabled(true, false, true); - doTestServiceCompatibilityEnabled(null, true, true); - doTestServiceCompatibilityEnabled(false, true, true); - - doTestServiceCompatibilityEnabled(true, true, false); - doTestServiceCompatibilityEnabled(false, false, false); - doTestServiceCompatibilityEnabled(null, null, false); - } - finally { - if (backup != null) - System.setProperty(IGNITE_SERVICES_COMPATIBILITY_MODE, backup); - else - System.clearProperty(IGNITE_SERVICES_COMPATIBILITY_MODE); - } - } - - /** - * @param first Service compatibility enabled flag for first node. - * @param second Service compatibility enabled flag for second node. - * @param fail Fail flag. - * @throws Exception If failed. - */ - private void doTestServiceCompatibilityEnabled(Object first, Object second, boolean fail) throws Exception { - doTestCompatibilityEnabled(IGNITE_SERVICES_COMPATIBILITY_MODE, first, second, fail); - } - - /** - * @throws Exception If failed. 
- */ + @Test public void testSecurityCompatibilityEnabled() throws Exception { TestReconnectPluginProvider.enabled = true; TestReconnectProcessor.enabled = true; @@ -370,6 +332,7 @@ private void doTestCompatibilityEnabled(String prop, Object first, Object second /** * @throws Exception If failed. */ + @Test public void testDifferentDeploymentModes() throws Exception { IgniteEx g = startGrid(0); @@ -391,6 +354,7 @@ public void testDifferentDeploymentModes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDifferentPeerClassLoadingEnabledFlag() throws Exception { IgniteEx g = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/IgniteTopologyPrintFormatSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/IgniteTopologyPrintFormatSelfTest.java index 86ad4589eaa85..fb572db8d7a84 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/IgniteTopologyPrintFormatSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/discovery/IgniteTopologyPrintFormatSelfTest.java @@ -24,18 +24,21 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.managers.GridManagerAdapter; import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger; import org.apache.log4j.Level; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ 
+@RunWith(JUnit4.class) public class IgniteTopologyPrintFormatSelfTest extends GridCommonAbstractTest { /** */ public static final String TOPOLOGY_SNAPSHOT = "Topology snapshot"; @@ -46,26 +49,18 @@ public class IgniteTopologyPrintFormatSelfTest extends GridCommonAbstractTest { /** */ public static final String CLIENT_NODE = ">>> Number of client nodes"; - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disc = new TcpDiscoverySpi(); - disc.setIpFinder(IP_FINDER); - if (igniteInstanceName.endsWith("client")) cfg.setClientMode(true); if (igniteInstanceName.endsWith("client_force_server")) { cfg.setClientMode(true); - disc.setForceServerMode(true); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); } - cfg.setDiscoverySpi(disc); - return cfg; } @@ -85,6 +80,7 @@ public class IgniteTopologyPrintFormatSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testServerLogs() throws Exception { MockLogger log = new MockLogger(); @@ -96,6 +92,7 @@ public void testServerLogs() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerDebugLogs() throws Exception { MockLogger log = new MockLogger(); @@ -109,9 +106,13 @@ public void testServerDebugLogs() throws Exception { * @throws Exception If failed. 
*/ private void doServerLogTest(MockLogger log) throws Exception { + String nodeId8; + try { Ignite server = startGrid("server"); + nodeId8 = U.id8(server.cluster().localNode().id()); + setLogger(log, server); Ignite server1 = startGrid("server1"); @@ -124,7 +125,7 @@ private void doServerLogTest(MockLogger log) throws Exception { assertTrue(F.forAny(log.logs(), new IgnitePredicate() { @Override public boolean apply(String s) { - return s.contains("Topology snapshot [ver=2, servers=2, clients=0,") + return s.contains("Topology snapshot [ver=2, locNode=" + nodeId8 + ", servers=2, clients=0,") || (s.contains(">>> Number of server nodes: 2") && s.contains(">>> Number of client nodes: 0")); } })); @@ -133,6 +134,7 @@ private void doServerLogTest(MockLogger log) throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerAndClientLogs() throws Exception { MockLogger log = new MockLogger(); @@ -144,6 +146,7 @@ public void testServerAndClientLogs() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerAndClientDebugLogs() throws Exception { MockLogger log = new MockLogger(); @@ -157,9 +160,13 @@ public void testServerAndClientDebugLogs() throws Exception { * @throws Exception If failed. 
*/ private void doServerAndClientTest(MockLogger log) throws Exception { + String nodeId8; + try { Ignite server = startGrid("server"); + nodeId8 = U.id8(server.cluster().localNode().id()); + setLogger(log, server); Ignite server1 = startGrid("server1"); @@ -174,7 +181,7 @@ private void doServerAndClientTest(MockLogger log) throws Exception { assertTrue(F.forAny(log.logs(), new IgnitePredicate() { @Override public boolean apply(String s) { - return s.contains("Topology snapshot [ver=4, servers=2, clients=2,") + return s.contains("Topology snapshot [ver=4, locNode=" + nodeId8 + ", servers=2, clients=2,") || (s.contains(">>> Number of server nodes: 2") && s.contains(">>> Number of client nodes: 2")); } })); @@ -183,6 +190,7 @@ private void doServerAndClientTest(MockLogger log) throws Exception { /** * @throws Exception If failed. */ + @Test public void testForceServerAndClientLogs() throws Exception { MockLogger log = new MockLogger(); @@ -194,6 +202,7 @@ public void testForceServerAndClientLogs() throws Exception { /** * @throws Exception If failed. */ + @Test public void testForceServerAndClientDebugLogs() throws Exception { MockLogger log = new MockLogger(); @@ -207,9 +216,13 @@ public void testForceServerAndClientDebugLogs() throws Exception { * @throws Exception If failed. 
*/ private void doForceServerAndClientTest(MockLogger log) throws Exception { + String nodeId8; + try { Ignite server = startGrid("server"); + nodeId8 = U.id8(server.cluster().localNode().id()); + setLogger(log, server); Ignite server1 = startGrid("server1"); @@ -225,7 +238,7 @@ private void doForceServerAndClientTest(MockLogger log) throws Exception { assertTrue(F.forAny(log.logs(), new IgnitePredicate() { @Override public boolean apply(String s) { - return s.contains("Topology snapshot [ver=5, servers=2, clients=3,") + return s.contains("Topology snapshot [ver=5, locNode=" + nodeId8 + ", servers=2, clients=3,") || (s.contains(">>> Number of server nodes: 2") && s.contains(">>> Number of client nodes: 3")); } })); @@ -286,4 +299,4 @@ public void clear() { logs.clear(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/managers/events/GridEventStorageManagerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/managers/events/GridEventStorageManagerSelfTest.java index 0479b81beb104..9103e6cc2396d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/managers/events/GridEventStorageManagerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/managers/events/GridEventStorageManagerSelfTest.java @@ -28,12 +28,16 @@ import org.apache.ignite.lang.IgniteFutureTimeoutException; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVTS_ALL; /** * Tests for {@link GridEventStorageManager}. */ +@RunWith(JUnit4.class) public class GridEventStorageManagerSelfTest extends GridCommonAbstractTest { /** * @@ -60,6 +64,7 @@ public GridEventStorageManagerSelfTest() { /** * @throws Exception If failed. 
*/ + @Test public void testWaitForEvent() throws Exception { Ignite ignite = grid(); @@ -92,6 +97,7 @@ public void testWaitForEvent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWaitForEventContinuationTimeout() throws Exception { Ignite ignite = grid(); @@ -111,6 +117,7 @@ public void testWaitForEventContinuationTimeout() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUserEvent() throws Exception { Ignite ignite = grid(); @@ -126,4 +133,4 @@ public void testUserEvent() throws Exception { info("Caught expected exception: " + e); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerEnumSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerEnumSelfTest.java index c7a58f74a2b6a..5dfebc4d7a590 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerEnumSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerEnumSelfTest.java @@ -17,7 +17,6 @@ package org.apache.ignite.internal.marshaller.optimized; -import junit.framework.TestCase; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.configuration.IgniteConfiguration; @@ -25,11 +24,14 @@ import org.apache.ignite.marshaller.MarshallerContextTestImpl; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; /** * */ -public class OptimizedMarshallerEnumSelfTest extends TestCase { +public class OptimizedMarshallerEnumSelfTest { private String igniteHome = System.getProperty("user.dir"); @@ -37,6 +39,7 @@ public class OptimizedMarshallerEnumSelfTest extends TestCase { /** * @throws 
Exception If failed. */ + @Test public void testEnumSerialisation() throws Exception { OptimizedMarshaller marsh = new OptimizedMarshaller(); @@ -84,4 +87,4 @@ private enum TestEnum { public abstract String getTestString(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerNodeFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerNodeFailoverTest.java index 7bd0a5d492b73..33bb3eaaacaec 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerNodeFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerNodeFailoverTest.java @@ -30,11 +30,11 @@ import org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.marshaller.Marshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -42,10 +42,8 @@ /** * */ +@RunWith(JUnit4.class) public class OptimizedMarshallerNodeFailoverTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean cache; @@ -56,12 +54,6 @@ public class OptimizedMarshallerNodeFailoverTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String 
igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setMarshaller(new OptimizedMarshaller()); cfg.setWorkDirectory(workDir); @@ -84,6 +76,7 @@ public class OptimizedMarshallerNodeFailoverTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testClassCacheUpdateFailover1() throws Exception { classCacheUpdateFailover(false); } @@ -91,6 +84,7 @@ public void testClassCacheUpdateFailover1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClassCacheUpdateFailover2() throws Exception { classCacheUpdateFailover(true); } @@ -141,6 +135,7 @@ private void classCacheUpdateFailover(boolean stopSrv) throws Exception { /** * @throws Exception If failed. */ + @Test public void testRestartAllNodes() throws Exception { cache = true; @@ -355,4 +350,4 @@ static class TestClass19 implements Serializable {} * */ static class TestClass20 implements Serializable {} -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSelfTest.java index fa95abc43c164..9238cf012d7b3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSelfTest.java @@ -29,11 +29,15 @@ import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Optimized marshaller self test. 
*/ @GridCommonTest(group = "Marshaller") +@RunWith(JUnit4.class) public class OptimizedMarshallerSelfTest extends GridMarshallerAbstractTest { /** {@inheritDoc} */ @Override protected Marshaller marshaller() { @@ -43,6 +47,7 @@ public class OptimizedMarshallerSelfTest extends GridMarshallerAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTestMarshalling() throws Exception { final String msg = "PASSED"; @@ -70,6 +75,7 @@ public void testTestMarshalling() throws Exception { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testMarshallingSelfLink() throws IgniteCheckedException { SelfLink sl = new SelfLink("a string 1"); @@ -83,6 +89,7 @@ public void testMarshallingSelfLink() throws IgniteCheckedException { /** * @throws Exception If failed. */ + @Test public void testInvalid() throws Exception { GridTestUtils.assertThrows( log, @@ -105,6 +112,7 @@ public void testInvalid() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNested() throws Exception { NestedTestObject obj = new NestedTestObject("String", 100); @@ -281,4 +289,4 @@ public void link(SelfLink link) { this.link = link; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSerialPersistentFieldsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSerialPersistentFieldsSelfTest.java index 5a9d10cd1fe61..560884cf5b01e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSerialPersistentFieldsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerSerialPersistentFieldsSelfTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller; import org.apache.ignite.marshaller.GridMarshallerAbstractTest; import 
org.apache.ignite.marshaller.Marshaller; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that Optimized Marshaller works with classes with serialPersistentFields. */ +@RunWith(JUnit4.class) public class OptimizedMarshallerSerialPersistentFieldsSelfTest extends GridMarshallerAbstractTest { /** {@inheritDoc} */ @Override protected Marshaller marshaller() { @@ -38,6 +42,7 @@ public class OptimizedMarshallerSerialPersistentFieldsSelfTest extends GridMars /** * @throws Exception If failed. */ + @Test public void testOptimizedMarshaller() throws Exception { unmarshal(marshal(new TestClass())); @@ -113,4 +118,4 @@ private void readObject(ObjectInputStream s) throws IOException, ClassNotFoundEx s.readObject(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerTest.java index a7e29c41a16ee..0da247335243e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedMarshallerTest.java @@ -45,10 +45,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class OptimizedMarshallerTest extends GridCommonAbstractTest { /** * @return Marshaller. @@ -68,6 +72,7 @@ private OptimizedMarshaller marshaller() { * * @throws IgniteCheckedException If marshalling failed. 
*/ + @Test public void testNonSerializable() throws IgniteCheckedException { OptimizedMarshaller marsh = marshaller(); @@ -83,6 +88,7 @@ public void testNonSerializable() throws IgniteCheckedException { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testNonSerializable1() throws IgniteCheckedException { OptimizedMarshaller marsh = marshaller(); @@ -104,6 +110,7 @@ public void testNonSerializable1() throws IgniteCheckedException { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testNonSerializable2() throws IgniteCheckedException { OptimizedMarshaller marsh = marshaller(); @@ -137,6 +144,7 @@ public void testNonSerializable2() throws IgniteCheckedException { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testNonSerializable3() throws IgniteCheckedException { OptimizedMarshaller marsh = marshaller(); @@ -154,6 +162,7 @@ public void testNonSerializable3() throws IgniteCheckedException { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testNonSerializable4() throws IgniteCheckedException { OptimizedMarshaller marsh = marshaller(); @@ -173,6 +182,7 @@ public void testNonSerializable4() throws IgniteCheckedException { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testNonSerializable5() throws IgniteCheckedException { Marshaller marsh = marshaller(); @@ -188,6 +198,7 @@ public void testNonSerializable5() throws IgniteCheckedException { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testSerializable() throws IgniteCheckedException { Marshaller marsh = marshaller(); @@ -199,6 +210,7 @@ public void testSerializable() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testSerializableAfterChangingValue() throws IgniteCheckedException { Marshaller marsh = marshaller(); @@ -220,6 +232,7 @@ public void testSerializableAfterChangingValue() throws IgniteCheckedException { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testExternalizable() throws IgniteCheckedException { Marshaller marsh = marshaller(); @@ -233,6 +246,7 @@ public void testExternalizable() throws IgniteCheckedException { /** * Tests {@link OptimizedMarshaller#setRequireSerializable(boolean)}. */ + @Test public void testRequireSerializable() { OptimizedMarshaller marsh = marshaller(); @@ -253,6 +267,7 @@ public void testRequireSerializable() { * * @throws IgniteCheckedException If marshalling failed. */ + @Test public void testProxy() throws IgniteCheckedException { OptimizedMarshaller marsh = marshaller(); @@ -286,6 +301,7 @@ public void testProxy() throws IgniteCheckedException { /** * @throws Exception If failed. */ + @Test public void testDescriptorCache() throws Exception { try { Ignite ignite = startGridsMultiThreaded(2); @@ -322,6 +338,7 @@ public void testDescriptorCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPerformance() throws Exception { System.gc(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedObjectStreamSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedObjectStreamSelfTest.java index 5efc5b97ceda9..ed2d8004ba14a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedObjectStreamSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/marshaller/optimized/OptimizedObjectStreamSelfTest.java @@ -68,12 +68,16 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.junit.Assert.assertArrayEquals; /** * Test for optimized object streams. */ +@RunWith(JUnit4.class) public class OptimizedObjectStreamSelfTest extends GridCommonAbstractTest { /** */ private static final MarshallerContext CTX = new MarshallerContextTestImpl(); @@ -84,6 +88,7 @@ public class OptimizedObjectStreamSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNull() throws Exception { assertNull(marshalUnmarshal(null)); } @@ -91,6 +96,7 @@ public void testNull() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByte() throws Exception { byte val = 10; @@ -100,6 +106,7 @@ public void testByte() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { short val = 100; @@ -109,6 +116,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInteger() throws Exception { int val = 100; @@ -118,6 +126,7 @@ public void testInteger() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLong() throws Exception { long val = 1000L; @@ -127,6 +136,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { float val = 10.0f; @@ -136,6 +146,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { double val = 100.0d; @@ -145,6 +156,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBoolean() throws Exception { assertEquals(Boolean.TRUE, marshalUnmarshal(Boolean.TRUE)); @@ -154,6 +166,7 @@ public void testBoolean() throws Exception { /** * @throws Exception If failed. */ + @Test public void testChar() throws Exception { char val = 10; @@ -163,6 +176,7 @@ public void testChar() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByteArray() throws Exception { byte[] arr = marshalUnmarshal(new byte[] {1, 2}); @@ -172,6 +186,7 @@ public void testByteArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShortArray() throws Exception { short[] arr = marshalUnmarshal(new short[] {1, 2}); @@ -181,6 +196,7 @@ public void testShortArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIntArray() throws Exception { int[] arr = marshalUnmarshal(new int[] {1, 2}); @@ -190,6 +206,7 @@ public void testIntArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLongArray() throws Exception { long[] arr = marshalUnmarshal(new long[] {1L, 2L}); @@ -199,6 +216,7 @@ public void testLongArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloatArray() throws Exception { float[] arr = marshalUnmarshal(new float[] {1.0f, 2.0f}); @@ -208,6 +226,7 @@ public void testFloatArray() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDoubleArray() throws Exception { double[] arr = marshalUnmarshal(new double[] {1.0d, 2.0d}); @@ -217,6 +236,7 @@ public void testDoubleArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBooleanArray() throws Exception { boolean[] arr = marshalUnmarshal(new boolean[] {true, false, false}); @@ -229,6 +249,7 @@ public void testBooleanArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCharArray() throws Exception { char[] arr = marshalUnmarshal(new char[] {1, 2}); @@ -238,6 +259,7 @@ public void testCharArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObject() throws Exception { TestObject obj = new TestObject(); @@ -252,6 +274,7 @@ public void testObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRequireSerializable() throws Exception { try { OptimizedMarshaller marsh = new OptimizedMarshaller(true); @@ -275,6 +298,7 @@ public void testRequireSerializable() throws Exception { * * @throws Exception If failed. */ + @Test public void testFailedUnmarshallingLogging() throws Exception { OptimizedMarshaller marsh = new OptimizedMarshaller(true); @@ -297,6 +321,7 @@ public void testFailedUnmarshallingLogging() throws Exception { * * @throws Exception If failed. */ + @Test public void testFailedMarshallingLogging() throws Exception { OptimizedMarshaller marsh = new OptimizedMarshaller(true); @@ -316,6 +341,7 @@ public void testFailedMarshallingLogging() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPool() throws Exception { final TestObject obj = new TestObject(); @@ -351,6 +377,7 @@ public void testPool() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObjectWithNulls() throws Exception { TestObject obj = new TestObject(); @@ -363,6 +390,7 @@ public void testObjectWithNulls() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testObjectArray() throws Exception { TestObject obj1 = new TestObject(); @@ -390,6 +418,7 @@ public void testObjectArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExternalizable() throws Exception { ExternalizableTestObject1 obj = new ExternalizableTestObject1(); @@ -404,6 +433,7 @@ public void testExternalizable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExternalizableWithNulls() throws Exception { ExternalizableTestObject2 obj = new ExternalizableTestObject2(); @@ -423,6 +453,7 @@ public void testExternalizableWithNulls() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLink() throws Exception { for (int i = 0; i < 20; i++) { LinkTestObject1 obj1 = new LinkTestObject1(); @@ -442,6 +473,7 @@ public void testLink() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCycleLink() throws Exception { for (int i = 0; i < 20; i++) { CycleLinkTestObject obj = new CycleLinkTestObject(); @@ -456,6 +488,7 @@ public void testCycleLink() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoDefaultConstructor() throws Exception { NoDefaultConstructorTestObject obj = new NoDefaultConstructorTestObject(100); @@ -465,6 +498,7 @@ public void testNoDefaultConstructor() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEnum() throws Exception { assertEquals(TestEnum.B, marshalUnmarshal(TestEnum.B)); @@ -476,6 +510,7 @@ public void testEnum() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollection() throws Exception { TestObject obj1 = new TestObject(); @@ -499,6 +534,7 @@ public void testCollection() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMap() throws Exception { TestObject obj1 = new TestObject(); @@ -522,6 +558,7 @@ public void testMap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUuid() throws Exception { UUID uuid = UUID.randomUUID(); @@ -531,6 +568,7 @@ public void testUuid() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDate() throws Exception { Date date = new Date(); @@ -540,6 +578,7 @@ public void testDate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransient() throws Exception { TransientTestObject obj = marshalUnmarshal(new TransientTestObject(100, 200, "str1", "str2")); @@ -552,6 +591,7 @@ public void testTransient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWriteReadObject() throws Exception { WriteReadTestObject obj = marshalUnmarshal(new WriteReadTestObject(100, "str")); @@ -562,6 +602,7 @@ public void testWriteReadObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWriteReplace() throws Exception { ReplaceTestObject obj = marshalUnmarshal(new ReplaceTestObject(100)); @@ -571,6 +612,7 @@ public void testWriteReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWriteReplaceNull() throws Exception { ReplaceNullTestObject obj = marshalUnmarshal(new ReplaceNullTestObject()); @@ -580,6 +622,7 @@ public void testWriteReplaceNull() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadResolve() throws Exception { ResolveTestObject obj = marshalUnmarshal(new ResolveTestObject(100)); @@ -589,6 +632,7 @@ public void testReadResolve() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArrayDeque() throws Exception { Queue queue = new ArrayDeque<>(); @@ -608,6 +652,7 @@ public void testArrayDeque() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testArrayList() throws Exception { Collection list = new ArrayList<>(); @@ -620,6 +665,7 @@ public void testArrayList() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHashMap() throws Exception { Map map = new HashMap<>(); @@ -632,6 +678,7 @@ public void testHashMap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHashSet() throws Exception { Collection set = new HashSet<>(); @@ -645,6 +692,7 @@ public void testHashSet() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("UseOfObsoleteCollectionType") + @Test public void testHashtable() throws Exception { Map map = new Hashtable<>(); @@ -657,6 +705,7 @@ public void testHashtable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIdentityHashMap() throws Exception { Map map = new IdentityHashMap<>(); @@ -669,6 +718,7 @@ public void testIdentityHashMap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLinkedHashMap() throws Exception { Map map = new LinkedHashMap<>(); @@ -681,6 +731,7 @@ public void testLinkedHashMap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLinkedHashSet() throws Exception { Collection set = new LinkedHashSet<>(); @@ -693,6 +744,7 @@ public void testLinkedHashSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLinkedList() throws Exception { Collection list = new LinkedList<>(); @@ -705,6 +757,7 @@ public void testLinkedList() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPriorityQueue() throws Exception { Queue queue = new PriorityQueue<>(); @@ -724,6 +777,7 @@ public void testPriorityQueue() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testProperties() throws Exception { Properties dflts = new Properties(); @@ -743,6 +797,7 @@ public void testProperties() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTreeMap() throws Exception { Map map = new TreeMap<>(); @@ -755,6 +810,7 @@ public void testTreeMap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTreeSet() throws Exception { Collection set = new TreeSet<>(); @@ -768,6 +824,7 @@ public void testTreeSet() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("UseOfObsoleteCollectionType") + @Test public void testVector() throws Exception { Collection vector = new Vector<>(); @@ -780,6 +837,7 @@ public void testVector() throws Exception { /** * @throws Exception If failed. */ + @Test public void testString() throws Exception { assertEquals("Latin", marshalUnmarshal("Latin")); assertEquals("Кириллица", marshalUnmarshal("Кириллица")); @@ -789,6 +847,7 @@ public void testString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadLine() throws Exception { OptimizedObjectInputStream in = new OptimizedObjectInputStream(new GridUnsafeDataInput()); @@ -805,6 +864,7 @@ public void testReadLine() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHierarchy() throws Exception { C c = new C(100, "str", 200, "str", 300, "str"); @@ -821,6 +881,7 @@ public void testHierarchy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInet4Address() throws Exception { Inet4Address addr = (Inet4Address)InetAddress.getByName("localhost"); @@ -830,6 +891,7 @@ public void testInet4Address() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClass() throws Exception { assertEquals(int.class, marshalUnmarshal(int.class)); assertEquals(Long.class, marshalUnmarshal(Long.class)); @@ -839,6 +901,7 @@ public void testClass() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWriteReadFields() throws Exception { WriteReadFieldsTestObject obj = marshalUnmarshal(new WriteReadFieldsTestObject(100, "str")); @@ -849,6 +912,7 @@ public void testWriteReadFields() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWriteFields() throws Exception { WriteFieldsTestObject obj = marshalUnmarshal(new WriteFieldsTestObject(100, "str")); @@ -859,6 +923,7 @@ public void testWriteFields() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigInteger() throws Exception { BigInteger b = new BigInteger("54654865468745468465321414646834562346475457488"); @@ -868,6 +933,7 @@ public void testBigInteger() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimal() throws Exception { BigDecimal b = new BigDecimal("849572389457208934572093574.123512938654126458542145"); @@ -877,6 +943,7 @@ public void testBigDecimal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimpleDateFormat() throws Exception { SimpleDateFormat f = new SimpleDateFormat("MM/dd/yyyy"); @@ -886,6 +953,7 @@ public void testSimpleDateFormat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testComplexObject() throws Exception { ComplexTestObject obj = new ComplexTestObject(); @@ -959,6 +1027,7 @@ public void testComplexObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadToArray() throws Exception { OptimizedObjectInputStream in = OptimizedObjectStreamRegistry.in(); @@ -1006,6 +1075,7 @@ public void testReadToArray() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testHandleTableGrow() throws Exception { List c = new ArrayList<>(); @@ -1025,6 +1095,7 @@ public void testHandleTableGrow() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncorrectExternalizable() throws Exception { GridTestUtils.assertThrows( log, @@ -1040,6 +1111,7 @@ public void testIncorrectExternalizable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExcludedClass() throws Exception { Class[] exclClasses = U.staticField(MarshallerExclusions.class, "EXCL_CLASSES"); @@ -1052,6 +1124,7 @@ public void testExcludedClass() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInet6Address() throws Exception { final InetAddress address = Inet6Address.getByAddress(new byte[16]); @@ -1061,7 +1134,7 @@ public void testInet6Address() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testPutFieldsWithDefaultWriteObject() throws Exception { try { marshalUnmarshal(new CustomWriteObjectMethodObject("test")); @@ -1074,7 +1147,7 @@ public void testPutFieldsWithDefaultWriteObject() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("ThrowableInstanceNeverThrown") + @Test public void testThrowable() throws Exception { Throwable t = new Throwable("Throwable"); @@ -1084,6 +1157,7 @@ public void testThrowable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNestedReadWriteObject() throws Exception { NestedReadWriteObject[] arr = new NestedReadWriteObject[5]; @@ -1460,7 +1534,6 @@ private TransientTestObject(int val1, int val2, String str1, String str2) { /** * Test object with {@code writeObject} and {@code readObject} methods. 
*/ - @SuppressWarnings("TransientFieldNotInitialized") private static class WriteReadTestObject implements Serializable { /** */ private int val; @@ -1552,7 +1625,6 @@ private static class WriteFieldsTestObject implements Serializable { private int val; /** */ - @SuppressWarnings("UnusedDeclaration") private String str; /** @@ -1772,7 +1844,6 @@ String stringB() { /** * Class C. */ - @SuppressWarnings("MethodOverridesPrivateMethodOfSuperclass") private static class C extends B { /** */ @SuppressWarnings("InstanceVariableMayNotBeInitializedByReadObject") @@ -2171,7 +2242,6 @@ private ComplexTestObject(byte byteVal1, short shortVal1, int intVal1, long long /** * Test enum. */ - @SuppressWarnings("JavaDoc") private enum TestEnum { /** */ A, @@ -2327,4 +2397,4 @@ private void readObject(ObjectInputStream os){ } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageIdUtilsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageIdUtilsSelfTest.java index 8b419444815cd..9b6d0b0b8d9a2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageIdUtilsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageIdUtilsSelfTest.java @@ -22,14 +22,19 @@ import org.apache.ignite.internal.pagemem.PageIdUtils; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class PageIdUtilsSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testRotatePageId() throws Exception { assertEquals(0x0102FFFFFFFFFFFFL, PageIdUtils.rotatePageId(0x0002FFFFFFFFFFFFL)); assertEquals(0x0B02FFFFFFFFFFFFL, PageIdUtils.rotatePageId(0x0A02FFFFFFFFFFFFL)); @@ -41,6 +46,7 @@ public void testRotatePageId() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEffectivePageId() throws Exception { assertEquals(0x0000FFFFFFFFFFFFL, PageIdUtils.effectivePageId(0x0002FFFFFFFFFFFFL)); assertEquals(0x0000FFFFFFFFFFFFL, PageIdUtils.effectivePageId(0x0A02FFFFFFFFFFFFL)); @@ -51,6 +57,7 @@ public void testEffectivePageId() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLinkConstruction() throws Exception { assertEquals(0x00FFFFFFFFFFFFFFL, PageIdUtils.link(0xFFFFFFFFFFFFFFL, 0)); assertEquals(0x01FFFFFFFFFFFFFFL, PageIdUtils.link(0xFFFFFFFFFFFFFFL, 1)); @@ -71,6 +78,7 @@ public void testLinkConstruction() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOffsetExtraction() throws Exception { assertEquals(0, PageIdUtils.itemId(0x00FFFFFFFFFFFFFFL)); assertEquals(1, PageIdUtils.itemId(0x01FFFFFFFFFFFFFFL)); @@ -91,6 +99,7 @@ public void testOffsetExtraction() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPageIdFromLink() throws Exception { assertEquals(0x00FFFFFFFFFFFFFFL, PageIdUtils.pageId(0x00FFFFFFFFFFFFFFL)); assertEquals(0x00FFFFFFFFFFFFFFL, PageIdUtils.pageId(0x10FFFFFFFFFFFFFFL)); @@ -121,6 +130,7 @@ public void testPageIdFromLink() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRandomIds() throws Exception { Random rnd = new Random(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoLoadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoLoadSelfTest.java index 6ad977f486a36..68581dc7d2172 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoLoadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/pagemem/impl/PageMemoryNoLoadSelfTest.java @@ -37,10 +37,14 @@ import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class PageMemoryNoLoadSelfTest extends GridCommonAbstractTest { /** */ protected static final int PAGE_SIZE = 8 * 1024; @@ -59,6 +63,7 @@ public class PageMemoryNoLoadSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPageTearingInner() throws Exception { PageMemory mem = memory(); @@ -77,8 +82,8 @@ public void testPageTearingInner() throws Exception { ", page2Id=" + fullId2.pageId() + ", page2=" + page2 + ']'); try { - writePage(mem, fullId1.pageId(), page1, 1); - writePage(mem, fullId2.pageId(), page2, 2); + writePage(mem, fullId1, page1, 1); + writePage(mem, fullId2, page2, 2); readPage(mem, fullId1.pageId(), page1, 1); readPage(mem, fullId2.pageId(), page2, 2); @@ -96,13 +101,14 @@ public void testPageTearingInner() throws Exception { } } finally { - mem.stop(); + mem.stop(true); } } /** * @throws Exception If failed. 
*/ + @Test public void testLoadedPagesCount() throws Exception { PageMemory mem = memory(); @@ -121,13 +127,14 @@ public void testLoadedPagesCount() throws Exception { assertEquals(mem.loadedPages(), expPages); } finally { - mem.stop(); + mem.stop(true); } } /** * @throws Exception If failed. */ + @Test public void testPageTearingSequential() throws Exception { PageMemory mem = memory(); @@ -149,7 +156,7 @@ public void testPageTearingSequential() throws Exception { if (i % 64 == 0) info("Writing page [idx=" + i + ", pageId=" + fullId.pageId() + ", page=" + page + ']'); - writePage(mem, fullId.pageId(), page, i + 1); + writePage(mem, fullId, page, i + 1); } finally { mem.releasePage(fullId.groupId(), fullId.pageId(), page); @@ -173,13 +180,14 @@ public void testPageTearingSequential() throws Exception { } } finally { - mem.stop(); + mem.stop(true); } } /** * @throws Exception If failed. */ + @Test public void testPageHandleDeallocation() throws Exception { PageMemory mem = memory(); @@ -200,13 +208,14 @@ public void testPageHandleDeallocation() throws Exception { assertFalse(handles.add(allocatePage(mem))); } finally { - mem.stop(); + mem.stop(true); } } /** * @throws Exception If failed. */ + @Test public void testPageIdRotation() throws Exception { PageMemory mem = memory(); @@ -230,7 +239,7 @@ public void testPageIdRotation() throws Exception { assertNotNull(pageAddr); try { - PAGE_IO.initNewPage(pageAddr, id.pageId(), mem.pageSize()); + PAGE_IO.initNewPage(pageAddr, id.pageId(), mem.realPageSize(id.groupId())); long updId = PageIdUtils.rotatePageId(id.pageId()); @@ -304,7 +313,7 @@ public void testPageIdRotation() throws Exception { } } finally { - mem.stop(); + mem.stop(true); } } @@ -332,21 +341,21 @@ protected PageMemory memory() throws Exception { /** * @param mem Page memory. - * @param pageId Page ID. + * @param fullId Page ID. * @param page Page pointer. * @param val Value to write. 
*/ - private void writePage(PageMemory mem, long pageId, long page, int val) { - long pageAddr = mem.writeLock(-1, pageId, page); + private void writePage(PageMemory mem, FullPageId fullId, long page, int val) { + long pageAddr = mem.writeLock(-1, fullId.pageId(), page); try { - PAGE_IO.initNewPage(pageAddr, pageId, mem.pageSize()); + PAGE_IO.initNewPage(pageAddr, fullId.pageId(), mem.realPageSize(fullId.groupId())); for (int i = PageIO.COMMON_HEADER_END; i < PAGE_SIZE; i++) PageUtils.putByte(pageAddr, i, (byte)val); } finally { - mem.writeUnlock(-1, pageId, page, null, true); + mem.writeUnlock(-1, fullId.pageId(), page, null, true); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/GridCacheTxLoadFromStoreOnLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/GridCacheTxLoadFromStoreOnLockSelfTest.java index 6293723f18ae6..5ac51b4b2780b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/GridCacheTxLoadFromStoreOnLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/GridCacheTxLoadFromStoreOnLockSelfTest.java @@ -30,33 +30,33 @@ import org.apache.ignite.cache.store.CacheStoreAdapter; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) 
public class GridCacheTxLoadFromStoreOnLockSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - cfg.getTransactionConfiguration().setTxSerializableEnabled(true); - cfg.setDiscoverySpi(disco); - return cfg; } @@ -68,6 +68,7 @@ public class GridCacheTxLoadFromStoreOnLockSelfTest extends GridCommonAbstractTe /** * @throws Exception If failed. */ + @Test public void testLoadedValueOneBackup() throws Exception { checkLoadedValue(1); } @@ -75,6 +76,7 @@ public void testLoadedValueOneBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadedValueNoBackups() throws Exception { checkLoadedValue(0); } @@ -152,4 +154,4 @@ private static class Store extends CacheStoreAdapter implement // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorAbstractSelfTest.java index 74c8ef22ea54d..dc8aa3db1a51f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorAbstractSelfTest.java @@ -28,10 +28,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -39,6 +40,7 @@ * Tests for {@link GridAffinityProcessor}. */ @GridCommonTest(group = "Affinity Processor") +@RunWith(JUnit4.class) public abstract class GridAffinityProcessorAbstractSelfTest extends GridCommonAbstractTest { /** Number of grids started for tests. Should not be less than 2. */ private static final int NODES_CNT = 3; @@ -46,9 +48,6 @@ public abstract class GridAffinityProcessorAbstractSelfTest extends GridCommonAb /** Cache name. */ private static final String CACHE_NAME = "cache"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Flag to start grid with cache. 
*/ private boolean withCache; @@ -56,12 +55,7 @@ public abstract class GridAffinityProcessorAbstractSelfTest extends GridCommonAb @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setForceServerMode(true); - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); if (withCache) { CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -107,7 +101,7 @@ public abstract class GridAffinityProcessorAbstractSelfTest extends GridCommonAb * * @throws Exception In case of any exception. */ - @SuppressWarnings("AssertEqualsBetweenInconvertibleTypes") + @Test public void testAffinityProcessor() throws Exception { Random rnd = new Random(); @@ -159,6 +153,7 @@ public void testAffinityProcessor() throws Exception { * * @throws Exception In case of any exception. 
*/ + @Test public void testPerformance() throws Exception { IgniteKernal grid = (IgniteKernal)grid(0); GridAffinityProcessor aff = grid.context().affinity(); @@ -184,4 +179,4 @@ public void testPerformance() throws Exception { assertTrue(diff < 25000); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorMemoryLeakTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorMemoryLeakTest.java index 3b6857d5ad2ca..a926029110f41 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorMemoryLeakTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/affinity/GridAffinityProcessorMemoryLeakTest.java @@ -21,45 +21,34 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; - -import static org.apache.ignite.IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE; -import static org.apache.ignite.IgniteSystemProperties.getInteger; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link GridAffinityProcessor}. 
*/ @GridCommonTest(group = "Affinity Processor") +@RunWith(JUnit4.class) public class GridAffinityProcessorMemoryLeakTest extends GridCommonAbstractTest { - /** Max value for affinity history size name. Should be the same as in GridAffinityAssignmentCache.MAX_HIST_SIZE */ - private final int MAX_HIST_SIZE = getInteger(IGNITE_AFFINITY_HISTORY_SIZE, 100); - /** Cache name. */ private static final String CACHE_NAME = "cache"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setForceServerMode(true); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -84,47 +73,60 @@ public class GridAffinityProcessorMemoryLeakTest extends GridCommonAbstractTest * * @throws Exception In case of any exception. 
*/ + @Ignore("https://ggsystems.atlassian.net/browse/GG-24138") + @Test public void testAffinityProcessor() throws Exception { - Ignite ignite = startGrid(0); + System.setProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE, "10"); - IgniteKernal grid = (IgniteKernal)grid(0); + try { + int maxHistSize = 10; - IgniteCache cache; + Ignite ignite = startGrid(0); - IgniteCache globalCache = getOrCreateGlobalCache(ignite); + IgniteKernal grid = (IgniteKernal)grid(0); - IgniteDataStreamer globalStreamer; + IgniteCache cache; - int count = MAX_HIST_SIZE * 4; + IgniteCache globalCache = getOrCreateGlobalCache(ignite); - int size; + IgniteDataStreamer globalStreamer; - do { - try { - cache = createLocalCache(ignite, count); + int cnt = maxHistSize * 30; - cache.put("Key" + count, "Value" + count); + int expLimit = cnt / 2; - cache.destroy(); + int size; - globalStreamer = createGlobalStreamer(ignite, globalCache); + do { + try { + cache = createLocalCache(ignite, cnt); - globalStreamer.addData("GlobalKey" + count, "GlobalValue" + count); + cache.put("Key" + cnt, "Value" + cnt); - globalStreamer.flush(); + cache.destroy(); - globalStreamer.close(); + globalStreamer = createGlobalStreamer(ignite, globalCache); - size = ((ConcurrentSkipListMap)GridTestUtils.getFieldValue(grid.context().affinity(), "affMap")).size(); + globalStreamer.addData("GlobalKey" + cnt, "GlobalValue" + cnt); - assertTrue("Cache has size that bigger then expected [size=" + size + "" + - ", expLimit=" + MAX_HIST_SIZE * 3 + "]", size < MAX_HIST_SIZE * 3); - } - catch (Exception e) { - fail("Error was handled [" + e.getMessage() + "]"); + globalStreamer.flush(); + + globalStreamer.close(); + + size = ((ConcurrentSkipListMap)GridTestUtils.getFieldValue(grid.context().affinity(), "affMap")).size(); + + assertTrue("Cache has size that bigger then expected [size=" + size + + ", expLimit=" + expLimit + "]", size < expLimit); + } + catch (Exception e) { + fail("Error was handled [" + e.getMessage() + "]"); 
+ } } + while (cnt-- > 0); + } + finally { + System.clearProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE); } - while (count-- > 0); } /** diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/Authentication1kUsersNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/Authentication1kUsersNodeRestartTest.java index 0ca621e3bc2ad..10f112162304b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/Authentication1kUsersNodeRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/Authentication1kUsersNodeRestartTest.java @@ -22,29 +22,24 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link IgniteAuthenticationProcessor} on unstable topology. 
*/ +@RunWith(JUnit4.class) public class Authentication1kUsersNodeRestartTest extends GridCommonAbstractTest { /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - + private static final int USERS_COUNT = 1000; /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - cfg.setAuthenticationEnabled(true); cfg.setDataStorageConfiguration(new DataStorageConfiguration() @@ -71,6 +66,7 @@ public class Authentication1kUsersNodeRestartTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void test1kUsersNodeRestartServer() throws Exception { final int USERS_COUNT = 1000; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationConfigurationClusterTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationConfigurationClusterTest.java index fb61d60ad656a..2e7dfc7e5ee83 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationConfigurationClusterTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationConfigurationClusterTest.java @@ -24,19 +24,17 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for disabled {@link IgniteAuthenticationProcessor}. */ +@RunWith(JUnit4.class) public class AuthenticationConfigurationClusterTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** * @param idx Node index. * @param authEnabled Authentication enabled. @@ -49,12 +47,6 @@ private IgniteConfiguration configuration(int idx, boolean authEnabled, boolean cfg.setClientMode(client); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - cfg.setAuthenticationEnabled(authEnabled); cfg.setDataStorageConfiguration(new DataStorageConfiguration() @@ -82,6 +74,7 @@ private IgniteConfiguration configuration(int idx, boolean authEnabled, boolean /** * @throws Exception If failed. */ + @Test public void testServerNodeJoinDisabled() throws Exception { checkNodeJoinDisabled(false); } @@ -89,6 +82,7 @@ public void testServerNodeJoinDisabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientNodeJoinDisabled() throws Exception { checkNodeJoinDisabled(true); } @@ -96,6 +90,7 @@ public void testClientNodeJoinDisabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerNodeJoinEnabled() throws Exception { checkNodeJoinEnabled(false); } @@ -103,6 +98,7 @@ public void testServerNodeJoinEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientNodeJoinEnabled() throws Exception { checkNodeJoinEnabled(true); } @@ -146,6 +142,7 @@ private void checkNodeJoinEnabled(boolean client) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDisabledAuthentication() throws Exception { startGrid(configuration(0, false, false)); @@ -191,6 +188,7 @@ public void testDisabledAuthentication() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEnableAuthenticationWithoutPersistence() throws Exception { GridTestUtils.assertThrowsAnyCause(log, new Callable() { @Override public Object call() throws Exception { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationOnNotActiveClusterTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationOnNotActiveClusterTest.java index 638c378bb1837..751577707d7e0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationOnNotActiveClusterTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationOnNotActiveClusterTest.java @@ -21,18 +21,17 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link IgniteAuthenticationProcessor}. */ +@RunWith(JUnit4.class) public class AuthenticationOnNotActiveClusterTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Nodes count. 
*/ protected static final int NODES_COUNT = 4; @@ -46,12 +45,6 @@ public class AuthenticationOnNotActiveClusterTest extends GridCommonAbstractTest if (getTestIgniteInstanceIndex(igniteInstanceName) == CLI_NODE) cfg.setClientMode(true); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - cfg.setAuthenticationEnabled(true); cfg.setDataStorageConfiguration(new DataStorageConfiguration() @@ -79,6 +72,7 @@ public class AuthenticationOnNotActiveClusterTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testDefaultUser() throws Exception { startGrids(NODES_COUNT); @@ -93,6 +87,7 @@ public void testDefaultUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotDefaultUser() throws Exception { startGrids(NODES_COUNT + 1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNPEOnStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNPEOnStartTest.java index 661c875e61916..cc8bc325006ee 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNPEOnStartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNPEOnStartTest.java @@ -22,29 +22,21 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; 
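The recurring change across these files is the move from JUnit 3-style discovery (public methods whose names start with `test`, run via `GridCommonAbstractTest`'s JUnit 3 base) to explicit JUnit 4 annotations: `@RunWith(JUnit4.class)` on the class and `@Test` on each method. The two discovery rules can be illustrated with a minimal, self-contained sketch; note the `@Test` annotation below is a stand-in declared locally, not the real `org.junit.Test`, so the example runs without JUnit on the classpath:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class RunnerDiscoverySketch {
    /** Stand-in for org.junit.Test, only to illustrate annotation-based discovery. */
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    /** Sample suite with one method per discovery style. */
    public static class SampleSuite {
        public void testLegacyStyle() {}      // JUnit 3: matched by the "test" name prefix.
        @Test public void annotatedCase() {}  // JUnit 4: matched by the annotation.
    }

    /** True if a JUnit 3-style runner would pick up the method (name-based). */
    static boolean junit3Discovers(Method m) {
        return m.getName().startsWith("test");
    }

    /** True if a JUnit 4-style runner would pick up the method (annotation-based). */
    static boolean junit4Discovers(Method m) {
        return m.isAnnotationPresent(Test.class);
    }

    public static void main(String[] args) throws Exception {
        Method legacy = SampleSuite.class.getMethod("testLegacyStyle");
        Method annotated = SampleSuite.class.getMethod("annotatedCase");

        System.out.println("testLegacyStyle: junit3=" + junit3Discovers(legacy)
            + " junit4=" + junit4Discovers(legacy));
        System.out.println("annotatedCase: junit3=" + junit3Discovers(annotated)
            + " junit4=" + junit4Discovers(annotated));
    }
}
```

Since the patch keeps the existing `testXxx` names and only adds annotations, every migrated method satisfies both rules, which is why the rename-free migration is safe under either runner.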
+import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for NPE on start node simultaneous. */ +@RunWith(JUnit4.class) public class AuthenticationProcessorNPEOnStartTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - cfg.setAuthenticationEnabled(true); cfg.setDataStorageConfiguration(new DataStorageConfiguration() @@ -72,6 +64,7 @@ public class AuthenticationProcessorNPEOnStartTest extends GridCommonAbstractTes /** * @throws Exception If failed. */ + @Test public void test() throws Exception { final AtomicInteger nodeIdx = new AtomicInteger(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNodeRestartTest.java index 7496cfe9bdde6..4534a4f73108e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNodeRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorNodeRestartTest.java @@ -18,7 +18,6 @@ package org.apache.ignite.internal.processors.authentication; import java.util.Random; -import java.util.concurrent.Callable; import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.configuration.DataRegionConfiguration; @@ -26,19 +25,17 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import 
org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link IgniteAuthenticationProcessor} on unstable topology. */ +@RunWith(JUnit4.class) public class AuthenticationProcessorNodeRestartTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Nodes count. */ private static final int NODES_COUNT = 4; @@ -46,10 +43,10 @@ public class AuthenticationProcessorNodeRestartTest extends GridCommonAbstractTe private static final int RESTARTS = 10; /** Client node. */ - protected static final int CLI_NODE = NODES_COUNT - 1; + private static final int CLI_NODE = NODES_COUNT - 1; /** Authorization context for default user. */ - protected AuthorizationContext actxDflt; + private AuthorizationContext actxDflt; /** Random. */ private static final Random RND = new Random(System.currentTimeMillis()); @@ -61,12 +58,6 @@ public class AuthenticationProcessorNodeRestartTest extends GridCommonAbstractTe if (getTestIgniteInstanceIndex(igniteInstanceName) == CLI_NODE) cfg.setClientMode(true); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - cfg.setAuthenticationEnabled(true); cfg.setDataStorageConfiguration(new DataStorageConfiguration() @@ -102,6 +93,7 @@ public class AuthenticationProcessorNodeRestartTest extends GridCommonAbstractTe /** * @throws Exception If failed. 
*/ + @Test public void testConcurrentAddUpdateRemoveNodeRestartCoordinator() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-7472"); @@ -111,52 +103,50 @@ public void testConcurrentAddUpdateRemoveNodeRestartCoordinator() throws Excepti final AtomicInteger usrCnt = new AtomicInteger(); - GridTestUtils.runMultiThreaded(new Runnable() { - @Override public void run() { - AuthorizationContext.context(actxDflt); - - String user = "test" + usrCnt.getAndIncrement(); + GridTestUtils.runMultiThreaded(() -> { + AuthorizationContext.context(actxDflt); - try { - int state = 0; - while (!restartFut.isDone()) { - try { - switch (state) { - case 0: - grid(CLI_NODE).context().authentication().addUser(user, "passwd_" + user); + String user = "test" + usrCnt.getAndIncrement(); - break; + try { + int state = 0; + while (!restartFut.isDone()) { + try { + switch (state) { + case 0: + grid(CLI_NODE).context().authentication().addUser(user, "passwd_" + user); - case 1: - grid(CLI_NODE).context().authentication().updateUser(user, "new_passwd_" + user); + break; - break; + case 1: + grid(CLI_NODE).context().authentication().updateUser(user, "new_passwd_" + user); - case 2: - grid(CLI_NODE).context().authentication().removeUser(user); + break; - break; + case 2: + grid(CLI_NODE).context().authentication().removeUser(user); - default: - fail("Invalid state: " + state); - } + break; - state = ++state > 2 ? 0 : state; - } - catch (UserManagementException e) { - U.error(log, e); - fail("Unexpected exception on user operation"); - } - catch (IgniteCheckedException e) { - // Reconnect - U.error(log, e); + default: + fail("Invalid state: " + state); } + + state = ++state > 2 ? 
0 : state; + } + catch (UserManagementException e) { + U.error(log, e); + fail("Unexpected exception on user operation"); + } + catch (IgniteCheckedException e) { + // Reconnect + U.error(log, e); } } - catch (Exception e) { - U.error(log, "Unexpected exception on concurrent add/remove: " + user, e); - fail(); - } + } + catch (Exception e) { + U.error(log, "Unexpected exception on concurrent add/remove: " + user, e); + fail(); } }, 10, "user-op"); @@ -166,6 +156,7 @@ public void testConcurrentAddUpdateRemoveNodeRestartCoordinator() throws Excepti /** * @throws Exception If failed. */ + @Test public void testConcurrentAuthorize() throws Exception { final int testUsersCnt = 10; @@ -174,88 +165,90 @@ public void testConcurrentAuthorize() throws Exception { for (int i = 0; i < testUsersCnt; ++i) grid(CLI_NODE).context().authentication().addUser("test" + i, "passwd_test" + i); - final IgniteInternalFuture restartFut = GridTestUtils.runAsync(new Runnable() { - @Override public void run() { - try { - for (int i = 0; i < RESTARTS; ++i) { - int nodeIdx = RND.nextInt(NODES_COUNT - 1); + final IgniteInternalFuture restartFut = GridTestUtils.runAsync(() -> { + try { + for (int i = 0; i < RESTARTS; ++i) { + int nodeIdx = RND.nextInt(NODES_COUNT - 1); - stopGrid(nodeIdx); + stopGrid(nodeIdx); - U.sleep(500); + U.sleep(500); - startGrid(nodeIdx); + startGrid(nodeIdx); - U.sleep(500); - } - } - catch (Exception e) { - e.printStackTrace(System.err); - fail("Unexpected exception on server restart: " + e.getMessage()); + U.sleep(500); } } + catch (Exception e) { + e.printStackTrace(System.err); + fail("Unexpected exception on server restart: " + e.getMessage()); + } }); final AtomicInteger usrCnt = new AtomicInteger(); - GridTestUtils.runMultiThreaded(new Runnable() { - @Override public void run() { - String user = "test" + usrCnt.getAndIncrement(); + GridTestUtils.runMultiThreaded(() -> { + String user = "test" + usrCnt.getAndIncrement(); - try { - while (!restartFut.isDone()) { - 
AuthorizationContext actx = grid(CLI_NODE).context().authentication() - .authenticate(user, "passwd_" + user); + try { + while (!restartFut.isDone()) { + AuthorizationContext actx = grid(CLI_NODE).context().authentication() + .authenticate(user, "passwd_" + user); - assertNotNull(actx); - } - } - catch (IgniteCheckedException e) { - // Skip exception if server down. - if (!e.getMessage().contains("Failed to send message (node may have left the grid or " - + "TCP connection cannot be established due to firewall issues)")) { - e.printStackTrace(); - fail("Unexpected exception: " + e.getMessage()); - } + assertNotNull(actx); } - catch (Exception e) { + } + catch (IgniteCheckedException e) { + // Skip exception if server down. + if (!serverDownMessage(e.getMessage())) { e.printStackTrace(); fail("Unexpected exception: " + e.getMessage()); } } + catch (Exception e) { + e.printStackTrace(); + fail("Unexpected exception: " + e.getMessage()); + } }, testUsersCnt, "user-op"); restartFut.get(); } + /** + * Exception messages from {@code org.apache.ignite.internal.managers.communication.GridIoManager#send}. + */ + private boolean serverDownMessage(String text) { + return text.contains("Failed to send message (node may have left the grid or " + + "TCP connection cannot be established due to firewall issues)") + || text.contains("Failed to send message, node left"); + } + /** * @return Future. 
*/ - protected IgniteInternalFuture restartCoordinator() { - return GridTestUtils.runAsync(new Runnable() { - @Override public void run() { - try { - int restarts = 0; - - while (restarts < RESTARTS) { - for (int i = 0; i < CLI_NODE; ++i, ++restarts) { - if (restarts >= RESTARTS) - break; + private IgniteInternalFuture restartCoordinator() { + return GridTestUtils.runAsync(() -> { + try { + int restarts = 0; - stopGrid(i); + while (restarts < RESTARTS) { + for (int i = 0; i < CLI_NODE; ++i, ++restarts) { + if (restarts >= RESTARTS) + break; - U.sleep(500); + stopGrid(i); - startGrid(i); + U.sleep(500); - U.sleep(500); - } + startGrid(i); + + U.sleep(500); } } - catch (Exception e) { - U.error(log, "Unexpected exception on coordinator restart", e); - fail(); - } + } + catch (Exception e) { + U.error(log, "Unexpected exception on coordinator restart", e); + fail(); } }); } @@ -263,48 +256,45 @@ protected IgniteInternalFuture restartCoordinator() { /** * @throws Exception If failed. */ + @Test public void test1kUsersNodeRestartServer() throws Exception { final AtomicInteger usrCnt = new AtomicInteger(); - GridTestUtils.runMultiThreaded(new Runnable() { - @Override public void run() { - AuthorizationContext.context(actxDflt); + GridTestUtils.runMultiThreaded(() -> { + AuthorizationContext.context(actxDflt); - try { - while (usrCnt.get() < 200) { - String user = "test" + usrCnt.getAndIncrement(); + try { + while (usrCnt.get() < 200) { + String user = "test" + usrCnt.getAndIncrement(); - System.out.println("+++ CREATE " + user); - grid(0).context().authentication().addUser(user, "init"); - } - } - catch (Exception e) { - e.printStackTrace(); - fail("Unexpected exception on add / remove"); + System.out.println("+++ CREATE " + user); + grid(0).context().authentication().addUser(user, "init"); } } + catch (Exception e) { + e.printStackTrace(); + fail("Unexpected exception on add / remove"); + } }, 3, "user-op"); usrCnt.set(0); - GridTestUtils.runMultiThreaded(new Runnable() 
{ - @Override public void run() { - AuthorizationContext.context(actxDflt); + GridTestUtils.runMultiThreaded(() -> { + AuthorizationContext.context(actxDflt); - try { - while (usrCnt.get() < 200) { - String user = "test" + usrCnt.getAndIncrement(); + try { + while (usrCnt.get() < 200) { + String user = "test" + usrCnt.getAndIncrement(); - System.out.println("+++ ALTER " + user); + System.out.println("+++ ALTER " + user); - grid(0).context().authentication().updateUser(user, "passwd_" + user); - } - } - catch (Exception e) { - e.printStackTrace(); - fail("Unexpected exception on add / remove"); + grid(0).context().authentication().updateUser(user, "passwd_" + user); } } + catch (Exception e) { + e.printStackTrace(); + fail("Unexpected exception on add / remove"); + } }, 3, "user-op"); System.out.println("+++ STOP"); @@ -321,51 +311,32 @@ public void test1kUsersNodeRestartServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentAddUpdateRemoveNodeRestartServer() throws Exception { - final IgniteInternalFuture restartFut = GridTestUtils.runAsync(new Runnable() { - @Override public void run() { - try { - for (int i = 0; i < RESTARTS; ++i) { - stopGrid(1); - - U.sleep(500); - - startGrid(1); - - U.sleep(500); - } - } - catch (Exception e) { - e.printStackTrace(System.err); - fail("Unexpected exception on server restart: " + e.getMessage()); - } - } - }); + IgniteInternalFuture restartFut = loopServerRestarts(); AuthorizationContext.context(actxDflt); final AtomicInteger usrCnt = new AtomicInteger(); - GridTestUtils.runMultiThreaded(new Runnable() { - @Override public void run() { - AuthorizationContext.context(actxDflt); + GridTestUtils.runMultiThreaded(() -> { + AuthorizationContext.context(actxDflt); - String user = "test" + usrCnt.getAndIncrement(); + String user = "test" + usrCnt.getAndIncrement(); - try { - while (!restartFut.isDone()) { - grid(CLI_NODE).context().authentication().addUser(user, "init"); + try { + while 
(!restartFut.isDone()) { + grid(CLI_NODE).context().authentication().addUser(user, "init"); - grid(CLI_NODE).context().authentication().updateUser(user, "passwd_" + user); + grid(CLI_NODE).context().authentication().updateUser(user, "passwd_" + user); - grid(CLI_NODE).context().authentication().removeUser(user); - } - } - catch (Exception e) { - e.printStackTrace(); - fail("Unexpected exception on add / remove"); + grid(CLI_NODE).context().authentication().removeUser(user); } } + catch (Exception e) { + e.printStackTrace(); + fail("Unexpected exception on add / remove"); + } }, 10, "user-op"); restartFut.get(); @@ -374,53 +345,53 @@ public void testConcurrentAddUpdateRemoveNodeRestartServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentFailedOperationNodeRestartServer() throws Exception { - final IgniteInternalFuture restartFut = GridTestUtils.runAsync(new Runnable() { - @Override public void run() { - try { - for (int i = 0; i < RESTARTS; ++i) { - stopGrid(1); - - U.sleep(500); - - startGrid(1); - - U.sleep(500); - } - } - catch (Exception e) { - e.printStackTrace(System.err); - fail("Unexpected exception on server restart: " + e.getMessage()); - } - } - }); + IgniteInternalFuture restartFut = loopServerRestarts(); AuthorizationContext.context(actxDflt); grid(CLI_NODE).context().authentication().addUser("test", "test"); - GridTestUtils.runMultiThreaded(new Runnable() { - @Override public void run() { - AuthorizationContext.context(actxDflt); + GridTestUtils.runMultiThreaded(() -> { + AuthorizationContext.context(actxDflt); - try { - while (!restartFut.isDone()) { - GridTestUtils.assertThrows(log, new Callable() { - @Override public Object call() throws Exception { - grid(CLI_NODE).context().authentication().addUser("test", "test"); + try { + while (!restartFut.isDone()) { + GridTestUtils.assertThrows(log, () -> { + grid(CLI_NODE).context().authentication().addUser("test", "test"); - return null; - } - }, 
UserManagementException.class, "User already exists"); - } - } - catch (Exception e) { - e.printStackTrace(); - fail("Unexpected error on failed operation"); + return null; + }, UserManagementException.class, "User already exists"); } } + catch (Exception e) { + e.printStackTrace(); + fail("Unexpected error on failed operation"); + } }, 10, "user-op"); restartFut.get(); } + + /** */ + private IgniteInternalFuture loopServerRestarts() { + return GridTestUtils.runAsync(() -> { + try { + for (int i = 0; i < RESTARTS; ++i) { + stopGrid(1); + + U.sleep(500); + + startGrid(1); + + U.sleep(500); + } + } + catch (Exception e) { + e.printStackTrace(System.err); + fail("Unexpected exception on server restart: " + e.getMessage()); + } + }); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorSelfTest.java index 6c79c7f380411..b47b0e7a87e86 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/authentication/AuthenticationProcessorSelfTest.java @@ -28,19 +28,17 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link IgniteAuthenticationProcessor}. 
*/ +@RunWith(JUnit4.class) public class AuthenticationProcessorSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Nodes count. */ protected static final int NODES_COUNT = 4; @@ -75,12 +73,6 @@ private static String randomString(int len) { if (getTestIgniteInstanceIndex(igniteInstanceName) == CLI_NODE) cfg.setClientMode(true); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - cfg.setAuthenticationEnabled(true); cfg.setDataStorageConfiguration(new DataStorageConfiguration() @@ -116,6 +108,7 @@ private static String randomString(int len) { /** * @throws Exception If failed. */ + @Test public void testDefaultUser() throws Exception { for (int i = 0; i < NODES_COUNT; ++i) { AuthorizationContext actx = grid(i).context().authentication().authenticate("ignite", "ignite"); @@ -128,6 +121,7 @@ public void testDefaultUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDefaultUserUpdate() throws Exception { AuthorizationContext.context(actxDflt); @@ -153,6 +147,7 @@ public void testDefaultUserUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveDefault() throws Exception { AuthorizationContext.context(actxDflt); @@ -179,6 +174,7 @@ public void testRemoveDefault() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUserManagementPermission() throws Exception { AuthorizationContext.context(actxDflt); @@ -232,6 +228,7 @@ public void testUserManagementPermission() throws Exception { /** * @throws Exception If failed. */ + @Test public void testProceedUsersOnJoinNode() throws Exception { AuthorizationContext.context(actxDflt); @@ -259,6 +256,7 @@ public void testProceedUsersOnJoinNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAuthenticationInvalidUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -291,6 +289,7 @@ public void testAuthenticationInvalidUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddUpdateRemoveUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -308,6 +307,7 @@ public void testAddUpdateRemoveUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -329,6 +329,7 @@ public void testUpdateUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateRemoveDoesNotExistsUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -361,6 +362,7 @@ public void testUpdateRemoveDoesNotExistsUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddAlreadyExistsUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -387,6 +389,7 @@ public void testAddAlreadyExistsUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAuthorizeOnClientDisconnect() throws Exception { AuthorizationContext.context(actxDflt); @@ -431,6 +434,7 @@ public void testAuthorizeOnClientDisconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentAddRemove() throws Exception { final AtomicInteger usrCnt = new AtomicInteger(); @@ -457,6 +461,7 @@ public void testConcurrentAddRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUserPersistence() throws Exception { AuthorizationContext.context(actxDflt); @@ -494,6 +499,7 @@ public void testUserPersistence() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDefaultUserPersistence() throws Exception { AuthorizationContext.context(actxDflt); @@ -529,6 +535,7 @@ public void testDefaultUserPersistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvalidUserNamePassword() throws Exception { AuthorizationContext.context(actxDflt); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/AtomicCacheAffinityConfigurationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/AtomicCacheAffinityConfigurationTest.java index 3fd4e1f476807..726450969e1c7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/AtomicCacheAffinityConfigurationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/AtomicCacheAffinityConfigurationTest.java @@ -29,9 +29,13 @@ import org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class AtomicCacheAffinityConfigurationTest extends GridCommonAbstractTest { /** Affinity function. */ private AffinityFunction affinityFunction; @@ -48,6 +52,7 @@ public class AtomicCacheAffinityConfigurationTest extends GridCommonAbstractTest * @throws Exception If failed. * */ + @Test public void testRendezvousAffinity() throws Exception { try { affinityFunction = new RendezvousAffinityFunction(false, 10); @@ -80,6 +85,7 @@ public void testRendezvousAffinity() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTestAffinity() throws Exception { try { affinityFunction = new TestAffinityFunction("Some value"); @@ -112,6 +118,7 @@ public void testTestAffinity() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDefaultAffinity() throws Exception { try { affinityFunction = null; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/BinaryMetadataRegistrationInsideEntryProcessorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/BinaryMetadataRegistrationInsideEntryProcessorTest.java index 73dae4bb39a5d..0c878c7d4602a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/BinaryMetadataRegistrationInsideEntryProcessorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/BinaryMetadataRegistrationInsideEntryProcessorTest.java @@ -17,7 +17,7 @@ package org.apache.ignite.internal.processors.cache; -import java.util.Arrays; +import java.util.Collections; import java.util.HashMap; import java.util.Map; import javax.cache.processor.EntryProcessor; @@ -29,10 +29,14 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class BinaryMetadataRegistrationInsideEntryProcessorTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "test-cache"; @@ -40,7 +44,7 @@ public class BinaryMetadataRegistrationInsideEntryProcessorTest extends GridComm /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration() { TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder() - .setAddresses(Arrays.asList("127.0.0.1:47500..47509")); + .setAddresses(Collections.singletonList("127.0.0.1:47500..47509")); return new IgniteConfiguration() .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder)) @@ -50,6 +54,7 @@ public class BinaryMetadataRegistrationInsideEntryProcessorTest extends GridComm /** * @throws 
Exception If failed; */ + @Test public void test() throws Exception { Ignite ignite = startGrids(2); @@ -60,9 +65,9 @@ public void test() throws Exception { cache.invoke(i, new CustomProcessor()); } catch (Exception e) { - Map value = cache.get(1); + Map val = cache.get(1); - if ((value != null) && (value.get(1) != null) && (value.get(1).getObj() == CustomEnum.ONE)) + if ((val != null) && (val.get(1).anEnum == CustomEnum.ONE) && val.get(1).obj.data.equals("test")) System.out.println("Data was saved."); else System.out.println("Data wasn't saved."); @@ -82,7 +87,7 @@ private static class CustomProcessor implements EntryProcessor map = new HashMap<>(); - map.put(1, new CustomObj(CustomEnum.ONE)); + map.put(1, new CustomObj(new CustomInnerObject("test"), CustomEnum.ONE)); entry.setValue(map); @@ -95,27 +100,20 @@ private static class CustomProcessor implements EntryProcessor { clo.apply(i, i); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheComparatorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheComparatorTest.java index 0bd587de85d40..94b3a33aa4d7e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheComparatorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheComparatorTest.java @@ -17,17 +17,20 @@ package org.apache.ignite.internal.processors.cache; -import junit.framework.TestCase; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.query.QuerySchema; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; /** * Test for CacheComparators from ClusterCachesInfo */ -public class CacheComparatorTest extends TestCase { +public class CacheComparatorTest { /** * Test if comparator not violates its general contract */ + @Test public void testDirect() { DynamicCacheDescriptor desc1 = new 
DynamicCacheDescriptor(null, new CacheConfiguration().setName("1111"), CacheType.DATA_STRUCTURES, diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConcurrentReadThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConcurrentReadThroughTest.java index 130280dd3fcb0..87190f01737ed 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConcurrentReadThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConcurrentReadThroughTest.java @@ -24,7 +24,6 @@ import javax.cache.configuration.Factory; import javax.cache.integration.CacheLoaderException; import org.apache.ignite.Ignite; -import org.apache.ignite.IgniteCompute; import org.apache.ignite.IgniteException; import org.apache.ignite.cache.store.CacheStoreAdapter; import org.apache.ignite.configuration.CacheConfiguration; @@ -32,31 +31,35 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test was added to check fix for IGNITE-4465. 
*/ +@RunWith(JUnit4.class) public class CacheConcurrentReadThroughTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SYS_THREADS = 16; /** */ private boolean client; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); if (!client) { @@ -77,6 +80,7 @@ public class CacheConcurrentReadThroughTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testConcurrentReadThrough() throws Exception { startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationLeakTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationLeakTest.java index 6b0386729e738..d85c52bbfa174 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationLeakTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationLeakTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheConfigurationLeakTest extends GridCommonAbstractTest { /** * @@ -60,6 +64,7 @@ public CacheConfigurationLeakTest() { /** * @throws Exception If failed. 
*/ + @Test public void testCacheCreateLeak() throws Exception { final Ignite ignite = grid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConnectionLeakStoreTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConnectionLeakStoreTxTest.java index 27dbe62577a61..5e16e173523c2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConnectionLeakStoreTxTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheConnectionLeakStoreTxTest.java @@ -34,14 +34,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.resources.CacheStoreSessionResource; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.cache.TestCacheSession; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -51,10 +52,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheConnectionLeakStoreTxTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. 
*/ private static final String CACHE_NAME = "cache"; @@ -71,11 +70,6 @@ public class CacheConnectionLeakStoreTxTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); cfg.setClientMode(client); return cfg; @@ -101,6 +95,7 @@ public class CacheConnectionLeakStoreTxTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupAtomic() throws Exception { checkConnectionLeak(CacheAtomicityMode.ATOMIC, null, null); } @@ -108,6 +103,7 @@ public void testConnectionLeakOneBackupAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupAtomicLoadFromStore() throws Exception { isLoadFromStore = true; @@ -117,6 +113,7 @@ public void testConnectionLeakOneBackupAtomicLoadFromStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupOptimisticRepeatableRead() throws Exception { checkConnectionLeak(CacheAtomicityMode.TRANSACTIONAL, OPTIMISTIC, REPEATABLE_READ); } @@ -124,6 +121,7 @@ public void testConnectionLeakOneBackupOptimisticRepeatableRead() throws Excepti /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupOptimisticRepeatableReadLoadFromStore() throws Exception { isLoadFromStore = true; @@ -133,6 +131,7 @@ public void testConnectionLeakOneBackupOptimisticRepeatableReadLoadFromStore() t /** * @throws Exception If failed. 
*/ + @Test public void testConnectionLeakOneBackupOptimisticReadCommitted() throws Exception { checkConnectionLeak(CacheAtomicityMode.TRANSACTIONAL, OPTIMISTIC, READ_COMMITTED); } @@ -140,6 +139,7 @@ public void testConnectionLeakOneBackupOptimisticReadCommitted() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupOptimisticReadCommittedLoadFromStore() throws Exception { isLoadFromStore = true; @@ -149,6 +149,7 @@ public void testConnectionLeakOneBackupOptimisticReadCommittedLoadFromStore() th /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupPessimisticRepeatableRead() throws Exception { checkConnectionLeak(CacheAtomicityMode.TRANSACTIONAL, PESSIMISTIC, REPEATABLE_READ); } @@ -156,6 +157,7 @@ public void testConnectionLeakOneBackupPessimisticRepeatableRead() throws Except /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupPessimisticReadCommitted() throws Exception { checkConnectionLeak(CacheAtomicityMode.TRANSACTIONAL, PESSIMISTIC, READ_COMMITTED); } @@ -163,12 +165,33 @@ public void testConnectionLeakOneBackupPessimisticReadCommitted() throws Excepti /** * @throws Exception If failed. */ + @Test public void testConnectionLeakOneBackupPessimisticReadCommittedLoadFromStore() throws Exception { isLoadFromStore = true; checkConnectionLeak(CacheAtomicityMode.TRANSACTIONAL, PESSIMISTIC, READ_COMMITTED); } + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testConnectionLeakOneBackupMvccPessimisticRepeatableRead() throws Exception { + checkConnectionLeak(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, PESSIMISTIC, REPEATABLE_READ); + } + + /** + * @throws Exception If failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testConnectionLeakOneBackupMvccPessimisticRepeatableReadLoadFromStore() throws Exception { + isLoadFromStore = true; + + checkConnectionLeak(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, PESSIMISTIC, REPEATABLE_READ); + } + /** * @param atomicityMode Atomicity mode. * @param txConcurrency Transaction concurrency. @@ -283,4 +306,4 @@ private void addSession() { sessions.remove(ses == null ? NULL : ses); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDataRegionConfigurationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDataRegionConfigurationTest.java index 614c90051ca55..852fca470b174 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDataRegionConfigurationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDataRegionConfigurationTest.java @@ -29,10 +29,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheDataRegionConfigurationTest extends GridCommonAbstractTest { /** */ private volatile CacheConfiguration ccfg; @@ -77,6 +81,7 @@ private void checkStartGridException(Class ex, String messa /** * Verifies that proper exception is thrown when DataRegion is misconfigured for cache. */ + @Test public void testMissingDataRegion() { ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -88,6 +93,7 @@ public void testMissingDataRegion() { /** * Verifies that {@link IgniteOutOfMemoryException} is thrown when cache is configured with too small DataRegion. 
*/ + @Test public void testTooSmallDataRegion() throws Exception { memCfg = new DataStorageConfiguration(); @@ -142,6 +148,7 @@ public void testTooSmallDataRegion() throws Exception { /** * Verifies that with enough memory allocated adding values to cache doesn't cause any exceptions. */ + @Test public void testProperlySizedMemoryPolicy() throws Exception { memCfg = new DataStorageConfiguration(); @@ -177,6 +184,7 @@ public void testProperlySizedMemoryPolicy() throws Exception { * Verifies that {@link IgniteCheckedException} is thrown when swap and persistence are enabled at the same time * for a data region. */ + @Test public void testSetPersistenceAndSwap() { DataRegionConfiguration invCfg = new DataRegionConfiguration(); @@ -200,6 +208,7 @@ public void testSetPersistenceAndSwap() { /** * Verifies that {@link IgniteCheckedException} is thrown when page eviction threshold is less than 0.5. */ + @Test public void testSetSmallInvalidEviction() { final double SMALL_EVICTION_THRESHOLD = 0.1D; DataRegionConfiguration invCfg = new DataRegionConfiguration(); @@ -222,6 +231,7 @@ public void testSetSmallInvalidEviction() { /** * Verifies that {@link IgniteCheckedException} is thrown when page eviction threshold is greater than 0.999. */ + @Test public void testSetBigInvalidEviction() { final double BIG_EVICTION_THRESHOLD = 1.0D; DataRegionConfiguration invCfg = new DataRegionConfiguration(); @@ -244,6 +254,7 @@ public void testSetBigInvalidEviction() { /** * Verifies that {@link IgniteCheckedException} is thrown when empty pages pool size is less than 10 */ + @Test public void testInvalidSmallEmptyPagesPoolSize() { final int SMALL_PAGES_POOL_SIZE = 5; DataRegionConfiguration invCfg = new DataRegionConfiguration(); @@ -267,6 +278,7 @@ public void testInvalidSmallEmptyPagesPoolSize() { * Verifies that {@link IgniteCheckedException} is thrown when empty pages pool size is greater than * DataRegionConfiguration.getMaxSize() / DataStorageConfiguration.getPageSize() / 10. 
*/ + @Test public void testInvalidBigEmptyPagesPoolSize() { final int DFLT_PAGE_SIZE = 1024; long expectedMaxPoolSize; @@ -298,6 +310,7 @@ public void testInvalidBigEmptyPagesPoolSize() { * Verifies that {@link IgniteCheckedException} is thrown when IgniteCheckedException if validation of * memory metrics properties fails. Metrics rate time interval must not be less than 1000ms. */ + @Test public void testInvalidMetricsProperties() { final long SMALL_RATE_TIME_INTERVAL_MS = 999; DataRegionConfiguration invCfg = new DataRegionConfiguration(); @@ -321,6 +334,7 @@ public void testInvalidMetricsProperties() { * Verifies that {@link IgniteCheckedException} is thrown when IgniteCheckedException if validation of * memory metrics properties fails. Metrics sub interval count must be positive. */ + @Test public void testInvalidSubIntervalCount() { final int NEG_SUB_INTERVAL_COUNT = -1000; DataRegionConfiguration invCfg = new DataRegionConfiguration(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteQueueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteQueueTest.java index 871bc338e9698..c4f0c1bced992 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteQueueTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteQueueTest.java @@ -28,16 +28,22 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_CACHE_REMOVED_ENTRIES_TTL; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class CacheDeferredDeleteQueueTest extends GridCommonAbstractTest { /** */ private static String ttlProp; @@ -65,16 +71,28 @@ public class CacheDeferredDeleteQueueTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testDeferredDeleteQueue() throws Exception { testQueue(ATOMIC, false); testQueue(TRANSACTIONAL, false); + testQueue(TRANSACTIONAL_SNAPSHOT, false); + testQueue(ATOMIC, true); testQueue(TRANSACTIONAL, true); } + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testDeferredDeleteQueueMvcc() throws Exception { + testQueue(TRANSACTIONAL_SNAPSHOT, true); + } + /** * @param atomicityMode Cache atomicity mode. * @param nearCache {@code True} if need create near cache. 
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteSanitySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteSanitySelfTest.java index dd7579905b29d..ae63b1ba9c471 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteSanitySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDeferredDeleteSanitySelfTest.java @@ -23,9 +23,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -33,6 +38,7 @@ /** * Sanity tests of deferred delete for different cache configurations. */ +@RunWith(JUnit4.class) public class CacheDeferredDeleteSanitySelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -42,6 +48,7 @@ public class CacheDeferredDeleteSanitySelfTest extends GridCommonAbstractTest { /** * @throws Exception If fails. */ + @Test public void testDeferredDelete() throws Exception { testDeferredDelete(LOCAL, ATOMIC, false, false); testDeferredDelete(LOCAL, TRANSACTIONAL, false, false); @@ -63,6 +70,35 @@ public void testDeferredDelete() throws Exception { testDeferredDelete(REPLICATED, TRANSACTIONAL, true, true); } + /** + * @throws Exception If fails. 
+ */ + @Test + public void testDeferredDeleteMvcc() throws Exception { + testDeferredDelete(PARTITIONED, TRANSACTIONAL_SNAPSHOT, false, true); + testDeferredDelete(REPLICATED, TRANSACTIONAL_SNAPSHOT, false, true); + } + + /** + * @throws Exception If fails. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testDeferredDeleteMvccNear() throws Exception { + testDeferredDelete(PARTITIONED, TRANSACTIONAL_SNAPSHOT, true, false); + testDeferredDelete(REPLICATED, TRANSACTIONAL_SNAPSHOT, true, true); + } + + /** + * @throws Exception If fails. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9530") + @Test + public void testDeferredDeleteMvccLocal() throws Exception { + testDeferredDelete(LOCAL, TRANSACTIONAL_SNAPSHOT, false, false); + testDeferredDelete(LOCAL, TRANSACTIONAL_SNAPSHOT, true, false); + } + /** * @param mode Mode. * @param atomicityMode Atomicity mode. diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDhtLocalPartitionAfterRemoveSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDhtLocalPartitionAfterRemoveSelfTest.java index 09f2a6a8b7d67..3d380f096cd7b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDhtLocalPartitionAfterRemoveSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheDhtLocalPartitionAfterRemoveSelfTest.java @@ -22,12 +22,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * Test for remove operation. 
*/ +@RunWith(JUnit4.class) public class CacheDhtLocalPartitionAfterRemoveSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 3; @@ -54,6 +58,7 @@ public class CacheDhtLocalPartitionAfterRemoveSelfTest extends GridCommonAbstrac /** * @throws Exception If failed. */ + @Test public void testMemoryUsage() throws Exception { assertEquals(10_000, GridDhtLocalPartition.MAX_DELETE_QUEUE_SIZE); @@ -105,4 +110,4 @@ public TestKey(String key) { return key.equals(((TestKey)obj).key); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEntryProcessorCopySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEntryProcessorCopySelfTest.java index aabd3b6445826..bb293f096807d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEntryProcessorCopySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEntryProcessorCopySelfTest.java @@ -31,14 +31,17 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; -import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for value copy in entry processor. */ +@RunWith(JUnit4.class) public class CacheEntryProcessorCopySelfTest extends GridCommonAbstractTest { /** Old value. */ private static final int OLD_VAL = 100; @@ -69,6 +72,7 @@ public class CacheEntryProcessorCopySelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testMutableEntryWithP2PEnabled() throws Exception { doTestMutableEntry(true); } @@ -76,6 +80,7 @@ public void testMutableEntryWithP2PEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMutableEntryWithP2PDisabled() throws Exception { doTestMutableEntry(false); } @@ -158,7 +163,7 @@ private void doTest(boolean cpOnRead, final boolean mutate, int expVal, int expC CacheObject obj = entry.peekVisibleValue(); - entry.touch(AffinityTopologyVersion.NONE); + entry.touch(); int actCnt = cnt.get(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEnumOperationsAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEnumOperationsAbstractTest.java index 148b60e473060..61d4289db314b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEnumOperationsAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEnumOperationsAbstractTest.java @@ -29,22 +29,23 @@ import org.apache.ignite.internal.binary.BinaryEnumObjectImpl; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.marshaller.Marshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public abstract class CacheEnumOperationsAbstractTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -52,8 +53,6 @@ public abstract class CacheEnumOperationsAbstractTest extends GridCommonAbstract @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -82,6 +81,7 @@ public abstract class CacheEnumOperationsAbstractTest extends GridCommonAbstract /** * @throws Exception If failed. */ + @Test public void testAtomic() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, ATOMIC); @@ -91,8 +91,21 @@ public void testAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTx() throws Exception { - CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, ATOMIC); + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL); + + enumOperations(ccfg); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testMvccTx() throws Exception { + Assume.assumeTrue("https://issues.apache.org/jira/browse/IGNITE-7187", singleNode()); + + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL_SNAPSHOT); enumOperations(ccfg); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEventWithTxLabelTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEventWithTxLabelTest.java new file mode 100644 index 0000000000000..0adb63ccc5b05 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheEventWithTxLabelTest.java @@ -0,0 +1,492 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.IntStream; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheEntryProcessor; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.CacheEvent; +import org.apache.ignite.events.Event; +import org.apache.ignite.internal.IgniteKernal; +import org.apache.ignite.internal.util.lang.IgnitePair; +import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; +import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ; +import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED; + +/** + * Test to check passing transaction's label for EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_PUT, + * EVT_CACHE_OBJECT_REMOVED events. + */ +@RunWith(JUnit4.class) +public class CacheEventWithTxLabelTest extends GridCommonAbstractTest { + /** Types event to be checked. */ + private static final int[] CACHE_EVENT_TYPES = {EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_REMOVED}; + + /** Transaction label. 
*/ + private static final String TX_LABEL = "TX_LABEL"; + + /** Number of server nodes. */ + private static final int SRVS = 3; + + /** Number of client nodes. */ + private static final int CLIENTS = 1; + + /** Cache name. */ + public static final String CACHE_NAME = "cache"; + + /** Client or server mode to start Ignite instance. */ + private static boolean client; + + /** Key related to primary node. */ + private Integer primaryKey = 0; + + /** Key related to backup node. */ + private Integer backupKey = 0; + + /** Current cache backup count. */ + private int backupCnt; + + /** Current transaction isolation level. */ + private TransactionIsolation isolation; + + /** Current transaction concurrency level. */ + private TransactionConcurrency concurrency; + + /** Information about all failed tests. */ + private ArrayList<String> errors = new ArrayList<>(); + + /** Count of errors on previous iteration of testing. */ + private int prevErrCnt = 0; + + /** List of events received without the expected tx label across test runs. */ + private static List<CacheEvent> wrongEvts = Collections.synchronizedList(new ArrayList<>()); + + /** Simple entry processor used by the tests. */ + private static CacheEntryProcessor<Integer, Integer, Object> entryProcessor = (CacheEntryProcessor<Integer, Integer, Object>)(entry, objects) -> entry.getValue(); + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName).setClientMode(client); + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + client = false; + + startGridsMultiThreaded(SRVS); + + client = true; + + startGridsMultiThreaded(SRVS, CLIENTS); + + client = false; + + waitForDiscovery(primary(), backup1(), backup2(), client()); + + registerEventListeners(primary(), backup1(), backup2(), client()); + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + super.afterTestsStopped(); + +
stopAllGrids(); + } + + /** + * Check all cases for passing transaction label in cache event. + * + * @throws Exception If failed. + */ + @Test + public void testPassTxLabelInCacheEventForAllCases() throws Exception { + Ignite[] nodes = {client(), primary(), backup1(), backup2()}; + + for (int backupCnt = 0; backupCnt < SRVS; backupCnt++) { + this.backupCnt = backupCnt; + + prepareCache(backupCnt); + + for (TransactionIsolation isolation : TransactionIsolation.values()) { + this.isolation = isolation; + + for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { + this.concurrency = concurrency; + + for (int i = 0; i < nodes.length - 1; i++) { + Ignite nodeForPut = nodes[i]; + Ignite nodeForGet = nodes[i + 1]; + + singleWriteReadRemoveTest(nodeForPut, nodeForGet); + + multiWriteReadRemoveTest(nodeForPut, nodeForGet); + + singleNodeBatchWriteReadRemoveTest(nodeForPut, nodeForGet); + + multiNodeBatchWriteReadRemoveTest(nodeForPut, nodeForGet); + + writeInvokeRemoveTest(nodeForPut, nodeForGet); + + writeInvokeAllRemoveTest(nodeForPut, nodeForGet); + } + } + } + } + + String listOfFailedTests = String.join(",\n", errors); + + Assert.assertTrue("Received " + prevErrCnt + " cache events with incorrect tx label.\n" + + "Failed tests: " + listOfFailedTests, + errors.isEmpty()); + } + + /** + * Check for errors after a test run. If an error occurred, information about the failed test is added to the errors list. + * + * @param testName Name of the test whose result will be checked.
+ * @param node1 First node. + * @param node2 Second node. + */ + private void checkResult(String testName, Ignite node1, Ignite node2) { + int currErrCnt = wrongEvts.size(); + + if (prevErrCnt != currErrCnt) { + prevErrCnt = currErrCnt; + + errors.add(String.format("%s backCnt-%s, %s, %s, node1-%s, node2-%s", + testName, backupCnt, isolation, concurrency, nodeType(node1), nodeType(node2))); + } + } + + /** + * @param node Ignite node. + * @return Node type in the test. + */ + private String nodeType(Ignite node) { + if (client().equals(node)) + return "CLIENT"; + else if (primary().equals(node)) + return "PRIMARY"; + else if (backup1().equals(node)) + return "BACKUP1"; + else if (backup2().equals(node)) + return "BACKUP2"; + else + return "UNKNOWN"; + } + + /** + * Test single put, get, remove operations. + * + * @param instanceToPut Ignite instance to put test data. + * @param instanceToGet Ignite instance to get test data. + */ + private void singleWriteReadRemoveTest(Ignite instanceToPut, Ignite instanceToGet) { + runTransactionally(instanceToPut, (Ignite ign) -> { + ign.cache(CACHE_NAME).put(primaryKey, 3); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).get(primaryKey); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).remove(primaryKey); + }); + + checkResult("singleWriteReadRemoveTest", instanceToPut, instanceToGet); + } + + /** + * Test multi put, get, remove operations. + * + * @param instanceToPut Ignite instance to put test data. + * @param instanceToGet Ignite instance to get test data.
*/ + private void multiWriteReadRemoveTest(Ignite instanceToPut, Ignite instanceToGet) { + runTransactionally(instanceToPut, (Ignite ign) -> { + ign.cache(CACHE_NAME).put(primaryKey, 2); + ign.cache(CACHE_NAME).put(backupKey, 3); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).get(primaryKey); + ign.cache(CACHE_NAME).get(backupKey); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).remove(primaryKey); + ign.cache(CACHE_NAME).remove(backupKey); + }); + + checkResult("multiWriteReadRemoveTest", instanceToPut, instanceToGet); + } + + /** + * Test multi-node batch write-read-remove operations. + * + * @param instanceToPut Ignite instance to put test data. + * @param instanceToGet Ignite instance to get test data. + */ + private void multiNodeBatchWriteReadRemoveTest(Ignite instanceToPut, Ignite instanceToGet) { + Map<Integer, Integer> keyValuesMap = IntStream.range(0, 100).boxed() + .collect(Collectors.toMap(Function.identity(), Function.identity())); + + runTransactionally(instanceToPut, (Ignite ign) -> { + ign.cache(CACHE_NAME).putAll(keyValuesMap); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).getAll(keyValuesMap.keySet()); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).removeAll(keyValuesMap.keySet()); + }); + + checkResult("multiNodeBatchWriteReadRemoveTest", instanceToPut, instanceToGet); + } + + /** + * Test single-node batch write-read-remove operations. + * + * @param instanceToPut Ignite instance to put test data. + * @param instanceToGet Ignite instance to get test data.
*/ + private void singleNodeBatchWriteReadRemoveTest(Ignite instanceToPut, Ignite instanceToGet) { + IgnitePair<Integer> keys = evaluatePrimaryAndBackupKeys(primaryKey + 1, backupKey + 1); + + Map<Integer, Integer> keyValuesMap = new HashMap<>(); + keyValuesMap.put(primaryKey, 1); + keyValuesMap.put(keys.get1(), 2); + + runTransactionally(instanceToPut, (Ignite ign) -> { + ign.cache(CACHE_NAME).putAll(keyValuesMap); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).getAll(keyValuesMap.keySet()); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).removeAll(keyValuesMap.keySet()); + }); + + checkResult("singleNodeBatchWriteReadRemoveTest", instanceToPut, instanceToGet); + } + + /** + * Test put-invoke-remove operations. + * + * @param instanceToPut Ignite instance to put test data. + * @param instanceToGet Ignite instance to get test data. + */ + @SuppressWarnings("unchecked") + private void writeInvokeRemoveTest(Ignite instanceToPut, Ignite instanceToGet) { + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).put(primaryKey, 3); + }); + + runTransactionally(instanceToPut, (Ignite ign) -> { + ign.cache(CACHE_NAME).invoke(primaryKey, entryProcessor); + ign.cache(CACHE_NAME).invoke(backupKey, entryProcessor); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).remove(primaryKey); + }); + + checkResult("writeInvokeRemoveTest", instanceToPut, instanceToGet); + } + + /** + * Test putAll-invokeAll-removeAll operations. + * + * @param instanceToPut Ignite instance to put test data. + * @param instanceToGet Ignite instance to get test data.
*/ + @SuppressWarnings("unchecked") + private void writeInvokeAllRemoveTest(Ignite instanceToPut, Ignite instanceToGet) { + Map<Integer, Integer> keyValuesMap = IntStream.range(0, 100).boxed() + .collect(Collectors.toMap(Function.identity(), Function.identity())); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).putAll(keyValuesMap); + }); + + runTransactionally(instanceToPut, (Ignite ign) -> { + ign.cache(CACHE_NAME).invokeAll(keyValuesMap.keySet(), entryProcessor); + }); + + runTransactionally(instanceToGet, (Ignite ign) -> { + ign.cache(CACHE_NAME).removeAll(keyValuesMap.keySet()); + }); + + checkResult("writeInvokeAllRemoveTest", instanceToPut, instanceToGet); + } + + /** + * Run a command in a transaction. + * + * @param startNode Ignite node to start transaction and run passed command. + * @param cmdInTx Command to execute in the transaction. + */ + private void runTransactionally(Ignite startNode, Consumer<Ignite> cmdInTx) { + try (Transaction tx = startNode.transactions().withLabel(TX_LABEL).txStart(concurrency, isolation)) { + cmdInTx.accept(startNode); + + tx.commit(); + } + } + + /** + * Add event listeners for cache event types to the passed Ignite instances. + * + * @param igns Ignite instances. + */ + private void registerEventListeners(Ignite... igns) { + if (igns != null) { + for (Ignite ign : igns) { + ign.events().enableLocal(CACHE_EVENT_TYPES); + ign.events().localListen((IgnitePredicate<Event>)event -> { + CacheEvent cacheEvt = (CacheEvent)event; + + if (!TX_LABEL.equals(cacheEvt.txLabel())) { + log.error("Received event with incorrect label " + cacheEvt.txLabel() + "," + + " expected label " + TX_LABEL); + + wrongEvts.add(cacheEvt); + } + + return true; + }, CACHE_EVENT_TYPES); + } + } + } + + /** + * Create a cache with the given number of backups and determine primary and backup keys. If the cache was + * created before, it is destroyed before a new one is created. + * + * @param cacheBackups Number of backups for cache.
+ * @throws InterruptedException If failed. + */ + private void prepareCache(int cacheBackups) throws InterruptedException { + IgniteCache<Integer, Integer> cache = client().cache(CACHE_NAME); + + if (cache != null) + cache.destroy(); + + client().createCache( + new CacheConfiguration() + .setName(CACHE_NAME) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setCacheMode(CacheMode.PARTITIONED) + .setBackups(cacheBackups) + ); + + awaitPartitionMapExchange(); + + IgnitePair<Integer> keys = evaluatePrimaryAndBackupKeys(0, 0); + + primaryKey = keys.get1(); + backupKey = keys.get2(); + } + + /** + * Evaluate primary and backup keys. + * + * @param primaryKeyStart Value from which to start searching for the primary key. + * @param backupKeyStart Value from which to start searching for the backup key. + * @return Pair of results: the first is the found primary key, the second is the found backup key. + */ + private IgnitePair<Integer> evaluatePrimaryAndBackupKeys(final int primaryKeyStart, final int backupKeyStart) { + int primaryKey = primaryKeyStart; + int backupKey = backupKeyStart; + + while (!client().affinity(CACHE_NAME).isPrimary(((IgniteKernal)primary()).localNode(), primaryKey)) + primaryKey++; + + while (!client().affinity(CACHE_NAME).isBackup(((IgniteKernal)primary()).localNode(), backupKey) + && backupKey < 100 + backupKeyStart) + backupKey++; + + return new IgnitePair<>(primaryKey, backupKey); + } + + /** + * Return primary node. + * + * @return Primary node. + */ + private Ignite primary() { + return ignite(0); + } + + /** + * Return first backup node. + * + * @return First backup node. + */ + private Ignite backup1() { + return ignite(1); + } + + /** + * Return second backup node. + * + * @return Second backup node. + */ + private Ignite backup2() { + return ignite(2); + } + + /** + * Return client node. + * + * @return Client node.
+ */ + private Ignite client() { + return ignite(3); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheExchangeMessageDuplicatedStateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheExchangeMessageDuplicatedStateTest.java index cdd0002d573ac..86919349b0662 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheExchangeMessageDuplicatedStateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheExchangeMessageDuplicatedStateTest.java @@ -39,20 +39,18 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.getFieldValue; /** * */ +@RunWith(JUnit4.class) public class CacheExchangeMessageDuplicatedStateTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String AFF1_CACHE1 = "a1c1"; @@ -75,8 +73,6 @@ public class CacheExchangeMessageDuplicatedStateTest extends GridCommonAbstractT @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi(); @@ -146,6 +142,7 @@ public class 
CacheExchangeMessageDuplicatedStateTest extends GridCommonAbstractT /** * @throws Exception If failed. */ + @Test public void testExchangeMessages() throws Exception { ignite(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheFutureExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheFutureExceptionSelfTest.java index a51765c652d4c..2c522e7db9ebc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheFutureExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheFutureExceptionSelfTest.java @@ -30,20 +30,19 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; /** * Cache future self test. 
*/ +@RunWith(JUnit4.class) public class CacheFutureExceptionSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static volatile boolean fail; @@ -53,12 +52,6 @@ public class CacheFutureExceptionSelfTest extends GridCommonAbstractTest { cfg.setIgniteInstanceName(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - if (igniteInstanceName.equals(getTestIgniteInstanceName(1))) cfg.setClientMode(true); @@ -73,6 +66,7 @@ public class CacheFutureExceptionSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testAsyncCacheFuture() throws Exception { startGrid(0); @@ -93,6 +87,11 @@ public void testAsyncCacheFuture() throws Exception { * @throws Exception If failed. */ private void testGet(boolean nearCache, boolean cpyOnRead) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + if (!MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) + return; + } + fail = false; Ignite srv = grid(0); @@ -157,4 +156,4 @@ private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundE in.readObject(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryAbstractTest.java index 6a7cfc6da4772..bfba956c0dbaf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryAbstractTest.java @@ -42,6 +42,9 @@ import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionOptimisticException; import org.jetbrains.annotations.Nullable; +import org.junit.Test; 
+import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -58,6 +61,7 @@ /** * Test getEntry and getEntries methods. */ +@RunWith(JUnit4.class) public abstract class CacheGetEntryAbstractTest extends GridCacheAbstractSelfTest { /** */ private static final String UPDATED_ENTRY_ERR = "Impossible to get version for entry updated in transaction"; @@ -98,6 +102,7 @@ public abstract class CacheGetEntryAbstractTest extends GridCacheAbstractSelfTes /** * @throws Exception If failed. */ + @Test public void testNear() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -113,6 +118,7 @@ public void testNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearTransactional() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -128,6 +134,7 @@ public void testNearTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitioned() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -142,6 +149,7 @@ public void testPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionedTransactional() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -156,6 +164,7 @@ public void testPartitionedTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocal() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -170,9 +179,8 @@ public void testLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalTransactional() throws Exception { - // TODO: fails since d13520e9a05bd9e9b987529472d6317951b72f96, need to review changes. 
- CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); cfg.setWriteSynchronizationMode(FULL_SYNC); @@ -186,6 +194,7 @@ public void testLocalTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplicated() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -200,6 +209,7 @@ public void testReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplicatedTransactional() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -215,7 +225,7 @@ public void testReplicatedTransactional() throws Exception { * @param cfg Cache configuration. * @throws Exception If failed. */ - private void test(CacheConfiguration cfg) throws Exception { + protected void test(CacheConfiguration cfg) throws Exception { test(cfg, true); test(cfg, false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticReadCommittedSeltTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticReadCommittedSelfTest.java similarity index 95% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticReadCommittedSeltTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticReadCommittedSelfTest.java index c04612d08d235..09275468b338e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticReadCommittedSeltTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticReadCommittedSelfTest.java @@ -23,7 +23,7 @@ /** * Test getEntry and getEntries methods. 
*/ -public class CacheGetEntryOptimisticReadCommittedSeltTest extends CacheGetEntryAbstractTest { +public class CacheGetEntryOptimisticReadCommittedSelfTest extends CacheGetEntryAbstractTest { /** {@inheritDoc} */ @Override protected TransactionConcurrency concurrency() { return TransactionConcurrency.OPTIMISTIC; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticRepeatableReadSeltTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticRepeatableReadSelfTest.java similarity index 95% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticRepeatableReadSeltTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticRepeatableReadSelfTest.java index 6153869821a0b..2c6a20440e196 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticRepeatableReadSeltTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticRepeatableReadSelfTest.java @@ -23,7 +23,7 @@ /** * Test getEntry and getEntries methods. 
*/ -public class CacheGetEntryOptimisticRepeatableReadSeltTest extends CacheGetEntryAbstractTest { +public class CacheGetEntryOptimisticRepeatableReadSelfTest extends CacheGetEntryAbstractTest { /** {@inheritDoc} */ @Override protected TransactionConcurrency concurrency() { return TransactionConcurrency.OPTIMISTIC; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticSerializableSeltTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticSerializableSelfTest.java similarity index 95% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticSerializableSeltTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticSerializableSelfTest.java index 6ded4a9acd2e2..63161e245d401 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticSerializableSeltTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryOptimisticSerializableSelfTest.java @@ -23,7 +23,7 @@ /** * Test getEntry and getEntries methods. 
*/ -public class CacheGetEntryOptimisticSerializableSeltTest extends CacheGetEntryAbstractTest { +public class CacheGetEntryOptimisticSerializableSelfTest extends CacheGetEntryAbstractTest { /** {@inheritDoc} */ @Override protected TransactionConcurrency concurrency() { return TransactionConcurrency.OPTIMISTIC; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticReadCommittedSeltTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticReadCommittedSelfTest.java similarity index 95% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticReadCommittedSeltTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticReadCommittedSelfTest.java index 975d2718af5e5..0a291f768d997 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticReadCommittedSeltTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticReadCommittedSelfTest.java @@ -23,7 +23,7 @@ /** * Test getEntry and getEntries methods. 
*/ -public class CacheGetEntryPessimisticReadCommittedSeltTest extends CacheGetEntryAbstractTest { +public class CacheGetEntryPessimisticReadCommittedSelfTest extends CacheGetEntryAbstractTest { /** {@inheritDoc} */ @Override protected TransactionConcurrency concurrency() { return TransactionConcurrency.PESSIMISTIC; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticRepeatableReadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticRepeatableReadSelfTest.java new file mode 100644 index 0000000000000..6852997a825b2 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticRepeatableReadSelfTest.java @@ -0,0 +1,109 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Ignore; +import org.junit.Test; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.LOCAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheMode.REPLICATED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Test getEntry and getEntries methods. + */ +public class CacheGetEntryPessimisticRepeatableReadSelfTest extends CacheGetEntryAbstractTest { + /** {@inheritDoc} */ + @Override protected TransactionConcurrency concurrency() { + return TransactionConcurrency.PESSIMISTIC; + } + + /** {@inheritDoc} */ + @Override protected TransactionIsolation isolation() { + return TransactionIsolation.REPEATABLE_READ; + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testNearTransactionalMvcc() throws Exception { + CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + cfg.setWriteSynchronizationMode(FULL_SYNC); + cfg.setCacheMode(PARTITIONED); + cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + cfg.setName("nearT"); + cfg.setNearConfiguration(new NearCacheConfiguration()); + + test(cfg); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testPartitionedTransactionalMvcc() throws Exception { + CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + cfg.setWriteSynchronizationMode(FULL_SYNC); + cfg.setCacheMode(PARTITIONED); + cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + cfg.setName("partitionedT"); + + test(cfg); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9530") + @Test + public void testLocalTransactionalMvcc() throws Exception { + CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + cfg.setWriteSynchronizationMode(FULL_SYNC); + cfg.setCacheMode(LOCAL); + cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + cfg.setName("localT"); + + test(cfg); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testReplicatedTransactionalMvcc() throws Exception { + CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + cfg.setWriteSynchronizationMode(FULL_SYNC); + cfg.setCacheMode(REPLICATED); + cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + cfg.setName("replicatedT"); + + test(cfg); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticSerializableSeltTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticSerializableSelfTest.java similarity index 95% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticSerializableSeltTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticSerializableSelfTest.java index 70f71ced31342..dfaed7e4149fa 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticSerializableSeltTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticSerializableSelfTest.java @@ -23,7 +23,7 @@ /** * Test getEntry and 
getEntries methods. */ -public class CacheGetEntryPessimisticSerializableSeltTest extends CacheGetEntryAbstractTest { +public class CacheGetEntryPessimisticSerializableSelfTest extends CacheGetEntryAbstractTest { /** {@inheritDoc} */ @Override protected TransactionConcurrency concurrency() { return TransactionConcurrency.PESSIMISTIC; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetFromJobTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetFromJobTest.java index 7c9eeec7f9278..05a7c58ef9b89 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetFromJobTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetFromJobTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Job tries to get cache during topology change. */ +@RunWith(JUnit4.class) public class CacheGetFromJobTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -49,6 +53,7 @@ public class CacheGetFromJobTest extends GridCacheAbstractSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testTopologyChange() throws Exception { final AtomicReference err = new AtomicReference<>(); @@ -113,4 +118,4 @@ public TestJob() { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetsDistributionAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetsDistributionAbstractTest.java new file mode 100644 index 0000000000000..c4d490983e6cd --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetsDistributionAbstractTest.java @@ -0,0 +1,383 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.TreeSet; +import java.util.UUID; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.TransactionConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.util.lang.GridAbsPredicate; +import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; +import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_MACS; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Tests distribution of a replicated cache's 'get' requests. + */ +@RunWith(JUnit4.class) +public abstract class CacheGetsDistributionAbstractTest extends GridCommonAbstractTest { + /** Client node instance name. */ + private static final String CLIENT_NAME = "client"; + + /** Value prefix.
*/ + private static final String VAL_PREFIX = "val"; + + /** */ + private static final int PRIMARY_KEYS_NUMBER = 1_000; + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + assert gridCount() >= 1 : "At least one grid must be started"; + + startGridsMultiThreaded(gridCount()); + + IgniteConfiguration clientCfg = getConfiguration(CLIENT_NAME); + + clientCfg.setClientMode(true); + + startGrid(clientCfg); + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + + super.afterTestsStopped(); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME); + + if (cache != null) + cache.destroy(); + + // Setting different MAC addresses for all nodes + Map macs = getClusterMacs(); + + int idx = 0; + + for (Map.Entry entry : macs.entrySet()) + entry.setValue("x2-xx-xx-xx-xx-x" + idx++); + + replaceMacAddresses(G.allGrids(), macs); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + TransactionConfiguration txCfg = new TransactionConfiguration() + .setDefaultTxIsolation(transactionIsolation()) + .setDefaultTxConcurrency(transactionConcurrency()); + + cfg.setTransactionConfiguration(txCfg); + + return cfg; + } + + /** + * @return Grids count to start. + */ + protected int gridCount() { + return 4; + } + + /** + * @return Cache configuration. 
+ */ + protected CacheConfiguration cacheConfiguration() { + CacheConfiguration ccfg = defaultCacheConfiguration(); + + ccfg.setCacheMode(cacheMode()); + ccfg.setAtomicityMode(atomicityMode()); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + ccfg.setReadFromBackup(true); + ccfg.setStatisticsEnabled(true); + + if (cacheMode() == CacheMode.PARTITIONED) + ccfg.setBackups(backupsCount()); + + return ccfg; + } + + /** + * @return Cache mode. + */ + protected abstract CacheMode cacheMode(); + + /** + * @return Cache atomicity mode. + */ + protected abstract CacheAtomicityMode atomicityMode(); + + /** + * @return Cache transaction isolation. + */ + protected TransactionIsolation transactionIsolation() { + return REPEATABLE_READ; + } + + /** + * @return Cache transaction concurrency. + */ + protected TransactionConcurrency transactionConcurrency() { + return PESSIMISTIC; + } + + /** + * @return Backups count. + */ + protected int backupsCount() { + return gridCount() - 1; + } + + /** + * Tests the distribution of generated 'get' operation requests. + * + * @throws Exception In case of an error. + * @see #runTestBalancingDistribution(boolean) + */ + @Test + public void testGetRequestsGeneratorDistribution() throws Exception { + runTestBalancingDistribution(false); + } + + /** + * Tests the distribution of generated 'getAll' operation requests. + * + * @throws Exception In case of an error. + * @see #runTestBalancingDistribution(boolean) + */ + @Test + public void testGetAllRequestsGeneratorDistribution() throws Exception { + runTestBalancingDistribution(true); + } + + /** + * @param batchMode Whether 'getAll' (batch) or 'get' operations are used in the test. + * @throws Exception In case of an error.
+ */ + protected void runTestBalancingDistribution(boolean batchMode) throws Exception { + IgniteCache cache = grid(0).createCache(cacheConfiguration()); + + List keys = primaryKeys(cache, PRIMARY_KEYS_NUMBER); + + for (Integer key : keys) + cache.put(key, VAL_PREFIX + key); + + IgniteCache clientCache = grid(CLIENT_NAME).cache(DEFAULT_CACHE_NAME) + .withAllowAtomicOpsInTx(); + + assertTrue(GridTestUtils.waitForCondition( + new GridAbsPredicate() { + int batchSize = 10; + int idx = 0; + + @Override public boolean apply() { + if (idx >= PRIMARY_KEYS_NUMBER) + idx = 0; + + try (Transaction tx = grid(CLIENT_NAME).transactions().txStart()) { + if (batchMode) { + Set keys0 = new TreeSet<>(); + + for (int i = idx; i < idx + batchSize && i < PRIMARY_KEYS_NUMBER; i++) + keys0.add(keys.get(i)); + + idx += batchSize; + + Map results = clientCache.getAll(keys0); + + for (Map.Entry entry : results.entrySet()) + assertEquals(VAL_PREFIX + entry.getKey(), entry.getValue()); + } + else { + for (int i = idx; i < idx + gridCount() && i < PRIMARY_KEYS_NUMBER; i++) { + Integer key = keys.get(i); + + assertEquals(VAL_PREFIX + key, clientCache.get(key)); + } + + idx += gridCount(); + } + + tx.commit(); + } + + for (int i = 0; i < gridCount(); i++) { + IgniteEx ignite = grid(i); + + long getsCnt = ignite.cache(DEFAULT_CACHE_NAME).localMetrics().getCacheGets(); + + if (getsCnt == 0) + return false; + } + + return true; + } + }, + getTestTimeout()) + ); + } + + /** + * Tests that 'get' operation requests are routed to the node with the same MAC address as the requester. + * + * @throws Exception In case of an error. + * @see #runTestSameHostDistribution(UUID, boolean) + */ + @Test + public void testGetRequestsDistribution() throws Exception { + UUID destId = grid(0).localNode().id(); + + runTestSameHostDistribution(destId, false); + } + + /** + * Tests that 'getAll' operation requests are routed to the node with the same MAC address as the requester. + * + * @throws Exception In case of an error.
+ * @see #runTestSameHostDistribution(UUID, boolean) + */ + @Test + public void testGetAllRequestsDistribution() throws Exception { + UUID destId = grid(gridCount() - 1).localNode().id(); + + runTestSameHostDistribution(destId, true); + } + + /** + * Tests that 'get' and 'getAll' requests are routed to the node with the same MAC address as the requester. + * + * @param destId Destination Ignite instance id for requests distribution. + * @param batchMode Whether 'getAll' (batch) or 'get' operations are used in the test. + * @throws Exception In case of an error. + */ + protected void runTestSameHostDistribution(final UUID destId, final boolean batchMode) throws Exception { + Map macs = getClusterMacs(); + + String clientMac = macs.get(grid(CLIENT_NAME).localNode().id()); + + macs.put(destId, clientMac); + + replaceMacAddresses(G.allGrids(), macs); + + IgniteCache cache = grid(0).createCache(cacheConfiguration()); + + List keys = primaryKeys(cache, PRIMARY_KEYS_NUMBER); + + for (Integer key : keys) + cache.put(key, VAL_PREFIX + key); + + IgniteCache clientCache = grid(CLIENT_NAME).cache(DEFAULT_CACHE_NAME) + .withAllowAtomicOpsInTx(); + + try (Transaction tx = grid(CLIENT_NAME).transactions().txStart()) { + if (batchMode) { + Map results = clientCache.getAll(new TreeSet<>(keys)); + + for (Map.Entry entry : results.entrySet()) + assertEquals(VAL_PREFIX + entry.getKey(), entry.getValue()); + } + else { + for (Integer key : keys) + assertEquals(VAL_PREFIX + key, clientCache.get(key)); + } + + tx.commit(); + } + + for (int i = 0; i < gridCount(); i++) { + IgniteEx ignite = grid(i); + + long getsCnt = ignite.cache(DEFAULT_CACHE_NAME).localMetrics().getCacheGets(); + + if (destId.equals(ignite.localNode().id())) + assertEquals(PRIMARY_KEYS_NUMBER, getsCnt); + else + assertEquals(0L, getsCnt); + } + } + + /** + * @param instances Started Ignite instances. + * @param macs Mapping of node UUIDs to MAC addresses.
+ */ + private void replaceMacAddresses(List instances, Map macs) { + for (Ignite ignite : instances) { + for (ClusterNode node : ignite.cluster().nodes()) { + String mac = macs.get(node.id()); + + assertNotNull(mac); + + Map attrs = new HashMap<>(node.attributes()); + + attrs.put(ATTR_MACS, mac); + + ((TcpDiscoveryNode)node).setAttributes(attrs); + } + } + } + + /** + * @return Cluster nodes MAC addresses. + */ + private Map getClusterMacs() { + Map macs = new HashMap<>(); + + for (Ignite ignite : G.allGrids()) { + ClusterNode node = ignite.cluster().localNode(); + + String mac = node.attribute(ATTR_MACS); + + assert mac != null; + + macs.put(node.id(), mac); + } + + return macs; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupLocalConfigurationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupLocalConfigurationSelfTest.java index 51f900165dbed..eb8d39531edba 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupLocalConfigurationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupLocalConfigurationSelfTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheGroupLocalConfigurationSelfTest extends GridCommonAbstractTest { /** */ private static final String SECOND_NODE_NAME = "secondNode"; @@ -84,6 +88,7 @@ public class CacheGroupLocalConfigurationSelfTest extends GridCommonAbstractTest * * @throws Exception If failed. 
*/ + @Test public void testDefaultGroupLocalAttributesPreserved() throws Exception { useNonDfltCacheGrp = false; @@ -96,6 +101,7 @@ public void testDefaultGroupLocalAttributesPreserved() throws Exception { * * @throws Exception If failed. */ + @Test public void testNonDefaultGroupLocalAttributesPreserved() throws Exception { useNonDfltCacheGrp = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMBeanTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMBeanTest.java index f8108eb7123a2..9dca0b1aef17e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMBeanTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupMetricsMBeanTest.java @@ -34,6 +34,7 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.affinity.AffinityFunction; import org.apache.ignite.cache.affinity.AffinityFunctionContext; @@ -51,10 +52,14 @@ import org.apache.ignite.mxbean.CacheGroupMetricsMXBean; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Cache group JMX metrics test. 
*/ +@RunWith(JUnit4.class) public class CacheGroupMetricsMBeanTest extends GridCommonAbstractTest implements Serializable { /** */ private boolean pds = false; @@ -132,21 +137,25 @@ private static class RoundRobinVariableSizeAffinityFunction implements AffinityF .setGroupName("group1") .setCacheMode(CacheMode.PARTITIONED) .setBackups(3) - .setAffinity(new RoundRobinVariableSizeAffinityFunction()); + .setAffinity(new RoundRobinVariableSizeAffinityFunction()) + .setAtomicityMode(atomicityMode()); CacheConfiguration cCfg2 = new CacheConfiguration() .setName("cache2") .setGroupName("group2") - .setCacheMode(CacheMode.REPLICATED); + .setCacheMode(CacheMode.REPLICATED) + .setAtomicityMode(atomicityMode()); CacheConfiguration cCfg3 = new CacheConfiguration() .setName("cache3") .setGroupName("group2") - .setCacheMode(CacheMode.REPLICATED); + .setCacheMode(CacheMode.REPLICATED) + .setAtomicityMode(atomicityMode()); CacheConfiguration cCfg4 = new CacheConfiguration() .setName("cache4") - .setCacheMode(CacheMode.PARTITIONED); + .setCacheMode(CacheMode.PARTITIONED) + .setAtomicityMode(atomicityMode()); cfg.setCacheConfiguration(cCfg1, cCfg2, cCfg3, cCfg4); @@ -163,6 +172,13 @@ private static class RoundRobinVariableSizeAffinityFunction implements AffinityF return cfg; } + /** + * @return Cache atomicity mode. + */ + protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.ATOMIC; + } + /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -234,6 +250,7 @@ private Map> arrayToAssignmentMap(int[][] arr) { /** * @throws Exception If failed. */ + @Test public void testCacheGroupMetrics() throws Exception { pds = false; @@ -336,6 +353,7 @@ public void testCacheGroupMetrics() throws Exception { /** * Test allocated pages counts for cache groups. 
*/ + @Test public void testAllocatedPages() throws Exception { pds = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupsMetricsRebalanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupsMetricsRebalanceTest.java index af2dc633d825e..bc1c9e53d2355 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupsMetricsRebalanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGroupsMetricsRebalanceTest.java @@ -17,6 +17,8 @@ package org.apache.ignite.internal.processors.cache; +import java.util.Collections; +import java.util.Random; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.ignite.Ignite; @@ -31,14 +33,18 @@ import org.apache.ignite.events.CacheRebalancingEvent; import org.apache.ignite.events.Event; import org.apache.ignite.events.EventType; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.visor.VisorTaskArgument; +import org.apache.ignite.internal.visor.node.VisorNodeDataCollectorTask; +import org.apache.ignite.internal.visor.node.VisorNodeDataCollectorTaskArg; +import org.apache.ignite.internal.visor.node.VisorNodeDataCollectorTaskResult; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; import static 
org.apache.ignite.IgniteSystemProperties.IGNITE_REBALANCE_STATISTICS_TIME_INTERVAL; import static org.apache.ignite.testframework.GridTestUtils.runAsync; @@ -48,9 +54,6 @@ * */ public class CacheGroupsMetricsRebalanceTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE1 = "cache1"; @@ -83,13 +86,11 @@ public class CacheGroupsMetricsRebalanceTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration cfg1 = new CacheConfiguration() .setName(CACHE1) .setGroupName(GROUP) .setCacheMode(CacheMode.PARTITIONED) - .setAtomicityMode(CacheAtomicityMode.ATOMIC) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setRebalanceMode(CacheRebalanceMode.ASYNC) .setRebalanceBatchSize(100) .setStatisticsEnabled(true); @@ -100,7 +101,7 @@ public class CacheGroupsMetricsRebalanceTest extends GridCommonAbstractTest { CacheConfiguration cfg3 = new CacheConfiguration() .setName(CACHE3) .setCacheMode(CacheMode.PARTITIONED) - .setAtomicityMode(CacheAtomicityMode.ATOMIC) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setRebalanceMode(CacheRebalanceMode.ASYNC) .setRebalanceBatchSize(100) .setStatisticsEnabled(true) @@ -114,6 +115,7 @@ public class CacheGroupsMetricsRebalanceTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testRebalance() throws Exception { Ignite ignite = startGrids(4); @@ -171,6 +173,74 @@ public void testRebalance() throws Exception { /** * @throws Exception If failed. 
*/ + @Test + public void testRebalanceProgressUnderLoad() throws Exception { + Ignite ignite = startGrids(4); + + IgniteCache cache1 = ignite.cache(CACHE1); + + Random r = new Random(); + + GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + for (int i = 0; i < 100_000; i++) { + int next = r.nextInt(); + + cache1.put(next, CACHE1 + "-" + next); + } + } + }); + + IgniteEx ig = startGrid(4); + + GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + for (int i = 0; i < 100_000; i++) { + int next = r.nextInt(); + + cache1.put(next, CACHE1 + "-" + next); + } + } + }); + + CountDownLatch latch = new CountDownLatch(1); + + ig.events().localListen(new IgnitePredicate() { + @Override public boolean apply(Event evt) { + latch.countDown(); + + return false; + } + }, EventType.EVT_CACHE_REBALANCE_STOPPED); + + latch.await(); + + VisorNodeDataCollectorTaskArg taskArg = new VisorNodeDataCollectorTaskArg(); + taskArg.setCacheGroups(Collections.emptySet()); + + VisorTaskArgument arg = new VisorTaskArgument<>( + Collections.singletonList(ignite.cluster().localNode().id()), + taskArg, + false + ); + + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + VisorNodeDataCollectorTaskResult res = ignite.compute().execute(VisorNodeDataCollectorTask.class, arg); + + CacheMetrics snapshot = ig.cache(CACHE1).metrics(); + + return snapshot.getRebalancedKeys() > snapshot.getEstimatedRebalancingKeys() + && res.getRebalance().get(ignite.cluster().localNode().id()) == 1.0 + && snapshot.getRebalancingPartitionsCount() == 0; + } + }, 5000); + } + + /** + * @throws Exception If failed. 
+ */ + @Test public void testRebalanceEstimateFinishTime() throws Exception { System.setProperty(IGNITE_REBALANCE_STATISTICS_TIME_INTERVAL, String.valueOf(1000)); @@ -277,7 +347,7 @@ public void testRebalanceEstimateFinishTime() throws Exception { @Override public boolean apply() { return ig2.cache(CACHE1).localMetrics().getKeysToRebalanceLeft() == 0; } - }, timeLeft + 10_000L); + }, timeLeft + 12_000L); log.info("[timePassed=" + timePassed + ", timeLeft=" + timeLeft + ", Time to rebalance=" + (finishTime - startTime) + @@ -292,12 +362,13 @@ public void testRebalanceEstimateFinishTime() throws Exception { long diff = finishTime - currTime; - assertTrue("Expected less than 10000, but actual: " + diff, Math.abs(diff) < 10_000L); + assertTrue("Expected less than 12000, but actual: " + diff, Math.abs(diff) < 12_000L); } /** * @throws Exception If failed. */ + @Test public void testRebalanceDelay() throws Exception { Ignite ig1 = startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterLocalSanityTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterLocalSanityTest.java index acb929b5672ef..9f370ddcf9231 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterLocalSanityTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterLocalSanityTest.java @@ -47,11 +47,15 @@ import org.apache.ignite.transactions.TransactionIsolation; import org.eclipse.jetty.util.BlockingArrayQueue; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; @@ -60,6 +64,7 @@ /** * */ +@RunWith(JUnit4.class) public class CacheInterceptorPartitionCounterLocalSanityTest extends GridCommonAbstractTest { /** */ private static final int NODES = 1; @@ -104,6 +109,7 @@ public class CacheInterceptorPartitionCounterLocalSanityTest extends GridCommonA /** * @throws Exception If failed. */ + @Test public void testLocal() throws Exception { CacheConfiguration ccfg = cacheConfiguration(ATOMIC,false); @@ -113,6 +119,7 @@ public void testLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalWithStore() throws Exception { CacheConfiguration ccfg = cacheConfiguration(ATOMIC,true); @@ -122,6 +129,7 @@ public void testLocalWithStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalTx() throws Exception { CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL,false); @@ -131,12 +139,37 @@ public void testLocalTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalTxWithStore() throws Exception { CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL,true); doTestPartitionCounterOperation(ccfg); } + /** + * @throws Exception If failed. + */ + @Test + public void testLocalMvccTx() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL_SNAPSHOT,false); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testLocalMvccTxWithStore() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL_SNAPSHOT,true); + + doTestPartitionCounterOperation(ccfg); + } + /** * @param ccfg Cache configuration. * @throws Exception If failed. @@ -187,7 +220,7 @@ private void randomUpdate( Transaction tx = null; - if (cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL && rnd.nextBoolean()) + if (atomicityMode(cache) == TRANSACTIONAL && rnd.nextBoolean()) tx = ignite.transactions().txStart(txRandomConcurrency(rnd), txRandomIsolation(rnd)); try { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterRandomOperationsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterRandomOperationsTest.java index 74170a6559a65..e69cc6b1f0b68 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterRandomOperationsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheInterceptorPartitionCounterRandomOperationsTest.java @@ -48,20 +48,21 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.eclipse.jetty.util.BlockingArrayQueue; import 
org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -72,10 +73,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheInterceptorPartitionCounterRandomOperationsTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 5; @@ -103,8 +102,6 @@ public class CacheInterceptorPartitionCounterRandomOperationsTest extends GridCo @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -137,6 +134,7 @@ public class CacheInterceptorPartitionCounterRandomOperationsTest extends GridCo /** * @throws Exception If failed. */ + @Test public void testAtomic() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, @@ -149,6 +147,7 @@ public void testAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicWithStore() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, @@ -161,6 +160,7 @@ public void testAtomicWithStore() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAtomicReplicated() throws Exception { CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 0, @@ -173,6 +173,7 @@ public void testAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicReplicatedWithStore() throws Exception { CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 0, @@ -185,6 +186,7 @@ public void testAtomicReplicatedWithStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicNoBackups() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, @@ -197,6 +199,7 @@ public void testAtomicNoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTx() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, @@ -209,6 +212,7 @@ public void testTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxWithStore() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, @@ -221,6 +225,7 @@ public void testTxWithStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxExplicit() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, @@ -233,6 +238,7 @@ public void testTxExplicit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxReplicated() throws Exception { CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 0, @@ -245,6 +251,7 @@ public void testTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxReplicatedWithStore() throws Exception { CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 0, @@ -257,6 +264,7 @@ public void testTxReplicatedWithStore() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxNoBackups() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, @@ -269,6 +277,7 @@ public void testTxNoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoBackupsWithStore() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, @@ -281,6 +290,7 @@ public void testTxNoBackupsWithStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoBackupsExplicit() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, @@ -290,12 +300,119 @@ public void testTxNoBackupsExplicit() throws Exception { doTestPartitionCounterOperation(ccfg); } + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTx() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, + 1, + TRANSACTIONAL_SNAPSHOT, + false); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTxWithStore() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, + 1, + TRANSACTIONAL_SNAPSHOT, + true); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTxExplicit() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, + 1, + TRANSACTIONAL_SNAPSHOT, + false); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTxReplicated() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(REPLICATED, + 0, + TRANSACTIONAL_SNAPSHOT, + false); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testMvccTxReplicatedWithStore() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(REPLICATED, + 0, + TRANSACTIONAL_SNAPSHOT, + true); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTxNoBackups() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, + 0, + TRANSACTIONAL_SNAPSHOT, + false); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTxNoBackupsWithStore() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, + 0, + TRANSACTIONAL_SNAPSHOT, + true); + + doTestPartitionCounterOperation(ccfg); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTxNoBackupsExplicit() throws Exception { + CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, + 0, + TRANSACTIONAL_SNAPSHOT, + false); + + doTestPartitionCounterOperation(ccfg); + } + /** * @param ccfg Cache configuration. * @throws Exception If failed. */ protected void doTestPartitionCounterOperation(CacheConfiguration ccfg) throws Exception { + if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT) + fail("https://issues.apache.org/jira/browse/IGNITE-9323"); + ignite(0).createCache(ccfg); try { @@ -345,7 +462,9 @@ private void randomUpdate( Transaction tx = null; - if (cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL && rnd.nextBoolean()) + CacheAtomicityMode atomicityMode = GridCommonAbstractTest.atomicityMode(cache); + + if (atomicityMode == TRANSACTIONAL && rnd.nextBoolean()) tx = ignite.transactions().txStart(txRandomConcurrency(rnd), txRandomIsolation(rnd)); try { @@ -578,8 +697,7 @@ private void randomUpdate( Object key, boolean rmv ) { - Collection nodes = - cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL ? + Collection nodes = atomicityMode(cache) == TRANSACTIONAL ? 
affinity(cache).mapKeyToPrimaryAndBackups(key) : Collections.singletonList(affinity(cache).mapKeyToNode(key)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheKeepBinaryTransactionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheKeepBinaryTransactionTest.java index 841e49c7c1a66..538bf8dda3611 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheKeepBinaryTransactionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheKeepBinaryTransactionTest.java @@ -26,22 +26,30 @@ import org.apache.ignite.configuration.TransactionConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.binary.BinaryMarshaller; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that no deserialization happens with binary objects and keepBinary set flag. 
*/ +@RunWith(JUnit4.class) public class CacheKeepBinaryTransactionTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); TransactionConfiguration txCfg = new TransactionConfiguration(); - txCfg.setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC); - txCfg.setDefaultTxIsolation(TransactionIsolation.REPEATABLE_READ); + + if (!MvccFeatureChecker.forcedMvcc()) { + txCfg.setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC); + txCfg.setDefaultTxIsolation(TransactionIsolation.REPEATABLE_READ); + } cfg.setTransactionConfiguration(txCfg); @@ -65,6 +73,7 @@ public class CacheKeepBinaryTransactionTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testBinaryGet() throws Exception { IgniteEx ignite = grid(0); IgniteCache<Object, Object> cache = ignite.cache("tx-cache").withKeepBinary(); @@ -80,6 +89,7 @@ public void testBinaryGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBinaryContains() throws Exception { IgniteEx ignite = grid(0); IgniteCache<Object, Object> cache = ignite.cache("tx-cache").withKeepBinary(); @@ -95,6 +105,7 @@ public void testBinaryContains() throws Exception { /** * @throws Exception If failed.
*/ + @Test public void testBinaryPutGetContains() throws Exception { IgniteEx ignite = grid(0); IgniteCache<Object, Object> cache = ignite.cache("tx-cache").withKeepBinary(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsEntitiesCountTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsEntitiesCountTest.java index 1ab7c40ced6a0..df396b6bd459d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsEntitiesCountTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsEntitiesCountTest.java @@ -17,15 +17,23 @@ package org.apache.ignite.internal.processors.cache; +import java.util.ArrayList; +import java.util.Collection; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * This test checks that entries count metrics, calculated by method @@ -33,6 +41,7 @@ * over local partitions to get all set of metrics), have the same values as metrics, calculated by individual methods * (which use iteration over local partition per each method call). */ +@RunWith(JUnit4.class) public class CacheMetricsEntitiesCountTest extends GridCommonAbstractTest { /** Grid count.
*/ private static final int GRID_CNT = 3; @@ -48,35 +57,45 @@ public class CacheMetricsEntitiesCountTest extends GridCommonAbstractTest { CachePeekMode.ONHEAP, CachePeekMode.PRIMARY, CachePeekMode.BACKUP, CachePeekMode.NEAR}; /** Cache count. */ - private static final int CACHE_CNT = 4; + private static int cacheCnt = 4; /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - CacheConfiguration ccfg0 = new CacheConfiguration<>() - .setName(CACHE_PREFIX + 0) - .setCacheMode(CacheMode.LOCAL); + Collection<CacheConfiguration> ccfgs = new ArrayList<>(4); - CacheConfiguration ccfg1 = new CacheConfiguration<>() - .setName(CACHE_PREFIX + 1) + ccfgs.add(new CacheConfiguration<>() + .setName(CACHE_PREFIX + 0) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) - .setCacheMode(CacheMode.REPLICATED); + .setCacheMode(CacheMode.REPLICATED) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); - CacheConfiguration ccfg2 = new CacheConfiguration<>() - .setName(CACHE_PREFIX + 2) + ccfgs.add(new CacheConfiguration<>() + .setName(CACHE_PREFIX + 1) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) .setCacheMode(CacheMode.PARTITIONED) - .setBackups(1); + .setBackups(1) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); - CacheConfiguration ccfg3 = new CacheConfiguration<>() - .setName(CACHE_PREFIX + 3) + ccfgs.add(new CacheConfiguration<>() + .setName(CACHE_PREFIX + 2) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) .setCacheMode(CacheMode.PARTITIONED) .setBackups(1) - .setNearConfiguration(new NearCacheConfiguration<>()); + .setNearConfiguration(new NearCacheConfiguration<>()) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.LOCAL_CACHE)) { + ccfgs.add(new CacheConfiguration<>() + 
.setName(CACHE_PREFIX + 3) + .setCacheMode(CacheMode.LOCAL) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + } + + cacheCnt = ccfgs.size(); - cfg.setCacheConfiguration(ccfg0, ccfg1, ccfg2, ccfg3); + cfg.setCacheConfiguration(U.toArray(ccfgs, new CacheConfiguration[cacheCnt])); return cfg; } @@ -89,15 +108,16 @@ public class CacheMetricsEntitiesCountTest extends GridCommonAbstractTest { /** * Test entities count, calculated by different implementations. */ + @Test public void testEnitiesCount() throws Exception { awaitPartitionMapExchange(); for (int igniteIdx = 0; igniteIdx < GRID_CNT; igniteIdx++) - for (int cacheIdx = 0; cacheIdx < CACHE_CNT; cacheIdx++) + for (int cacheIdx = 0; cacheIdx < cacheCnt; cacheIdx++) fillCache(igniteIdx, cacheIdx); for (int igniteIdx = 0; igniteIdx < GRID_CNT; igniteIdx++) - for (int cacheIdx = 0; cacheIdx < CACHE_CNT; cacheIdx++) + for (int cacheIdx = 0; cacheIdx < cacheCnt; cacheIdx++) checkCache(igniteIdx, cacheIdx); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsForClusterGroupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsForClusterGroupSelfTest.java index 3f196a57ad80b..73ff37ce819e5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsForClusterGroupSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsForClusterGroupSelfTest.java @@ -21,6 +21,7 @@ import java.util.Map; import java.util.concurrent.CountDownLatch; import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.CacheMetrics; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; @@ -30,13 +31,18 @@ import org.apache.ignite.internal.managers.discovery.IgniteClusterNode; import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.lang.IgnitePredicate; 
+import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED; /** * Test for cluster wide cache metrics. */ +@RunWith(JUnit4.class) public class CacheMetricsForClusterGroupSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 3; @@ -63,30 +69,31 @@ public class CacheMetricsForClusterGroupSelfTest extends GridCommonAbstractTest private boolean daemon; /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - cfg.setDaemon(daemon); + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.METRICS); - return cfg; + super.beforeTestsStarted(); } /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - startGrids(GRID_CNT); + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - daemon = true; + cfg.setDaemon(daemon); - startGrid(GRID_CNT); + return cfg; } /** * Test cluster group metrics in case of statistics enabled. */ + @Test public void testMetricsStatisticsEnabled() throws Exception { - createCaches(true); + startGrids(); try { + createCaches(true); + populateCacheData(cache1, ENTRY_CNT_CACHE1); populateCacheData(cache2, ENTRY_CNT_CACHE2); @@ -107,7 +114,7 @@ public void testMetricsStatisticsEnabled() throws Exception { assertMetrics(cache2, true); } finally { - destroyCaches(); + stopAllGrids(); } } @@ -116,10 +123,13 @@ public void testMetricsStatisticsEnabled() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMetricsStatisticsDisabled() throws Exception { - createCaches(false); + startGrids(); try { + createCaches(false); + populateCacheData(cache1, ENTRY_CNT_CACHE1); populateCacheData(cache2, ENTRY_CNT_CACHE2); @@ -140,10 +150,62 @@ public void testMetricsStatisticsDisabled() throws Exception { assertMetrics(cache2, false); } finally { - destroyCaches(); + stopAllGrids(); + } + } + + /** + * Tests that only local metrics are updating if discovery updates disabled. + */ + @Test + public void testMetricsDiscoveryUpdatesDisabled() throws Exception { + System.setProperty(IgniteSystemProperties.IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE, "true"); + + try { + startGrids(); + + try { + createCaches(true); + + populateCacheData(cache1, ENTRY_CNT_CACHE1); + populateCacheData(cache2, ENTRY_CNT_CACHE2); + + readCacheData(cache1, ENTRY_CNT_CACHE1); + readCacheData(cache2, ENTRY_CNT_CACHE2); + + awaitMetricsUpdate(); + + Collection<ClusterNode> nodes = grid(0).cluster().forRemotes().nodes(); + + for (ClusterNode node : nodes) { + Map<Integer, CacheMetrics> metrics = ((IgniteClusterNode)node).cacheMetrics(); + assertNotNull(metrics); + assertTrue(metrics.isEmpty()); + } + + assertOnlyLocalMetricsUpdating(CACHE1); + assertOnlyLocalMetricsUpdating(CACHE2); + } + finally { + stopAllGrids(); + } + } + finally { + System.setProperty(IgniteSystemProperties.IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE, "false"); } } + /** + * Start grids. + */ + private void startGrids() throws Exception { + startGrids(GRID_CNT); + + daemon = true; + + startGrid(GRID_CNT); + } + /** * @param statisticsEnabled Statistics enabled. */ @@ -267,6 +329,34 @@ private void assertMetrics(IgniteCache cache, boolean expectNo } } + /** + * Asserts that only local metrics are updating. + * + * @param cacheName Cache name.
+ */ + private void assertOnlyLocalMetricsUpdating(String cacheName) { + for (int i = 0; i < GRID_CNT; i++) { + IgniteCache cache = grid(i).cache(cacheName); + + CacheMetrics clusterMetrics = cache.metrics(grid(i).cluster().forCacheNodes(cacheName)); + CacheMetrics locMetrics = cache.localMetrics(); + + assertEquals(clusterMetrics.name(), locMetrics.name()); + + assertEquals(0L, clusterMetrics.getCacheGets()); + assertEquals(0L, cache.mxBean().getCacheGets()); + assertEquals(locMetrics.getCacheGets(), cache.localMxBean().getCacheGets()); + + assertEquals(0L, clusterMetrics.getCachePuts()); + assertEquals(0L, cache.mxBean().getCachePuts()); + assertEquals(locMetrics.getCachePuts(), cache.localMxBean().getCachePuts()); + + assertEquals(0L, clusterMetrics.getCacheHits()); + assertEquals(0L, cache.mxBean().getCacheHits()); + assertEquals(locMetrics.getCacheHits(), cache.localMxBean().getCacheHits()); + } + } + /** * @param ms Milliseconds. * @param f Function. diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsManageTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsManageTest.java index ae00ac9659225..8a94166386465 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsManageTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheMetricsManageTest.java @@ -48,19 +48,19 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.mxbean.CacheMetricsMXBean; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheMetricsManageTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE1 = "cache1"; @@ -79,20 +79,27 @@ public class CacheMetricsManageTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testJmxNoPdsStatisticsEnable() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-9224", MvccFeatureChecker.forcedMvcc()); + testJmxStatisticsEnable(false); } /** * @throws Exception If failed. */ + @Test public void testJmxPdsStatisticsEnable() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10421", MvccFeatureChecker.forcedMvcc()); + testJmxStatisticsEnable(true); } /** * @throws Exception If failed. 
*/ + @Test public void testCacheManagerStatisticsEnable() throws Exception { final CacheManager mgr1 = Caching.getCachingProvider().getCacheManager(); final CacheManager mgr2 = Caching.getCachingProvider().getCacheManager(); @@ -101,7 +108,7 @@ public void testCacheManagerStatisticsEnable() throws Exception { .setName(CACHE1) .setGroupName(GROUP) .setCacheMode(CacheMode.PARTITIONED) - .setAtomicityMode(CacheAtomicityMode.ATOMIC); + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); mgr1.createCache(CACHE1, cfg1); @@ -132,6 +139,7 @@ public void testCacheManagerStatisticsEnable() throws Exception { /** * */ + @Test public void testPublicApiStatisticsEnable() throws Exception { Ignite ig1 = startGrid(1); startGrid(2); @@ -159,6 +167,7 @@ public void testPublicApiStatisticsEnable() throws Exception { /** * */ + @Test public void testMultiThreadStatisticsEnable() throws Exception { startGrids(5); @@ -215,6 +224,7 @@ public void testMultiThreadStatisticsEnable() throws Exception { /** * */ + @Test public void testCacheApiClearStatistics() throws Exception { startGrids(3); @@ -232,6 +242,7 @@ public void testCacheApiClearStatistics() throws Exception { /** * */ + @Test public void testClearStatisticsAfterDisableStatistics() throws Exception { startGrids(3); @@ -253,6 +264,7 @@ public void testClearStatisticsAfterDisableStatistics() throws Exception { /** * */ + @Test public void testClusterApiClearStatistics() throws Exception { startGrids(3); @@ -277,6 +289,7 @@ public void testClusterApiClearStatistics() throws Exception { /** * */ + @Test public void testJmxApiClearStatistics() throws Exception { startGrids(3); @@ -484,13 +497,11 @@ private CacheMetricsMXBean mxBean(int nodeIdx, String cacheName, Class cache = ignite.cache(DEFAULT_CACHE_NAME); + + for (boolean commit : new boolean[] {true, false}) { + try (Transaction tx = txs.txStart(PESSIMISTIC, REPEATABLE_READ)) { + checkFastTxFinish(tx, commit); + } + + for (int i = 0; i < 100; i++) { + try (Transaction tx = 
txs.txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.get(i); + + checkNormalTxFinish(tx, commit); + } + } + + for (int i = 0; i < 100; i++) { + try (Transaction tx = txs.txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.put(i, i); + + checkNormalTxFinish(tx, commit); + } + } + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesSelfTest.java index d7d06a1c090a6..de04b9e1b527d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesSelfTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that validates {@link Ignite#cacheNames()} implementation. */ +@RunWith(JUnit4.class) public class CacheNamesSelfTest extends GridCommonAbstractTest { /** */ private boolean client; @@ -57,6 +61,7 @@ public class CacheNamesSelfTest extends GridCommonAbstractTest { /** * @throws Exception In case of failure. 
*/ + @Test public void testCacheNames() throws Exception { try { startGridsMultiThreaded(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesWithSpecialCharactersTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesWithSpecialCharactersTest.java index ecb8227e58f53..0272de8d61695 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesWithSpecialCharactersTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNamesWithSpecialCharactersTest.java @@ -17,21 +17,26 @@ package org.apache.ignite.internal.processors.cache; +import java.util.Collection; import org.apache.ignite.Ignite; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -import java.util.Collection; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that validates {@link Ignite#cacheNames()} implementation. */ +@RunWith(JUnit4.class) public class CacheNamesWithSpecialCharactersTest extends GridCommonAbstractTest { + /** */ + private static final String CACHE_NAME_1 = "--№=+:(replicated)"; - public static final String CACHE_NAME_1 = "--№=+:(replicated)"; - public static final String CACHE_NAME_2 = ":_&:: (partitioned)"; + /** */ + private static final String CACHE_NAME_2 = ":_&:: (partitioned)"; /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -53,6 +58,7 @@ public class CacheNamesWithSpecialCharactersTest extends GridCommonAbstractTest /** * @throws Exception In case of failure. 
*/ + @Test public void testCacheNames() throws Exception { try { startGridsMultiThreaded(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearReaderUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearReaderUpdateTest.java index 96167f5d038dc..b6aff93c79a34 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearReaderUpdateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearReaderUpdateTest.java @@ -39,15 +39,17 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionOptimisticException; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -60,10 +62,9 @@ /** * */ +@RunWith(JUnit4.class) +@Ignore("https://issues.apache.org/jira/browse/IGNITE-627") public class CacheNearReaderUpdateTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -82,8 
+83,6 @@ public class CacheNearReaderUpdateTest extends GridCommonAbstractTest { cfg.setPeerClassLoadingEnabled(false); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -91,6 +90,8 @@ public class CacheNearReaderUpdateTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + super.beforeTestsStarted(); startGridsMultiThreaded(SRVS); @@ -110,6 +111,7 @@ public class CacheNearReaderUpdateTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNoBackups() throws Exception { runTestGetUpdateMultithreaded(cacheConfiguration(PARTITIONED, FULL_SYNC, 0, false, false)); } @@ -117,6 +119,7 @@ public void testNoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOneBackup() throws Exception { runTestGetUpdateMultithreaded(cacheConfiguration(PARTITIONED, FULL_SYNC, 1, false, false)); } @@ -124,6 +127,7 @@ public void testOneBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOneBackupNearEnabled() throws Exception { runTestGetUpdateMultithreaded(cacheConfiguration(PARTITIONED, FULL_SYNC, 1, false, true)); } @@ -131,6 +135,7 @@ public void testOneBackupNearEnabled() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOneBackupStoreEnabled() throws Exception { runTestGetUpdateMultithreaded(cacheConfiguration(PARTITIONED, FULL_SYNC, 1, true, false)); } @@ -336,6 +341,9 @@ private CacheConfiguration cacheConfiguration( int backups, boolean storeEnabled, boolean nearCache) { + if (storeEnabled) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setCacheMode(cacheMode); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearUpdateTopologyChangeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearUpdateTopologyChangeAbstractTest.java index 21b3c727dcd4c..6cf9b013b3686 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearUpdateTopologyChangeAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNearUpdateTopologyChangeAbstractTest.java @@ -27,6 +27,9 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CachePeekMode.ONHEAP; @@ -34,6 +37,7 @@ /** * */ +@RunWith(JUnit4.class) public abstract class CacheNearUpdateTopologyChangeAbstractTest extends IgniteCacheAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -53,6 +57,7 @@ public abstract class CacheNearUpdateTopologyChangeAbstractTest extends IgniteCa /** * @throws Exception If failed. 
*/ + @Test public void testNearUpdateTopologyChange() throws Exception { awaitPartitionMapExchange(); @@ -140,4 +145,4 @@ public void testNearUpdateTopologyChange() throws Exception { assertEquals((Object)2, nearCache.localPeek(key, ONHEAP)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNoAffinityExchangeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNoAffinityExchangeTest.java new file mode 100644 index 0000000000000..ab452756b6875 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheNoAffinityExchangeTest.java @@ -0,0 +1,511 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache; + +import java.util.Collections; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.locks.Lock; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteInterruptedException; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessageV2; +import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.lang.IgniteBiPredicate; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeAddFinishedMessage; +import 
org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeFailedMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeLeftMessage; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheNoAffinityExchangeTest extends GridCommonAbstractTest { + /** */ + private volatile boolean startClient; + + /** */ + private volatile boolean startClientCaches; + + /** Tx cache name from client static configuration. */ + private static final String PARTITIONED_TX_CLIENT_CACHE_NAME = "p-tx-client-cache"; + + /** */ + private final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder().setShared(true); + + /** */ + private final TcpDiscoveryIpFinder CLIENT_IP_FINDER = new TcpDiscoveryVmIpFinder() + .setAddresses(Collections.singleton("127.0.0.1:47500")); + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + + super.beforeTestsStarted(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); + + cfg.setDiscoverySpi(new TestDiscoverySpi().setIpFinder(IP_FINDER)); + + cfg.setActiveOnStart(false); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration().setDefaultDataRegionConfiguration( + new DataRegionConfiguration().setMaxSize(200 * 1024 * 1024))); + + if (startClient) { + cfg.setClientMode(true); + + // It is necessary to ensure that client always connects to grid(0). 
+ ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(CLIENT_IP_FINDER); + + if (startClientCaches) { + CacheConfiguration txCfg = new CacheConfiguration() + .setName(PARTITIONED_TX_CLIENT_CACHE_NAME) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) + .setAffinity(new RendezvousAffinityFunction(false, 32)) + .setBackups(2); + + cfg.setCacheConfiguration(txCfg); + } + } + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + startClient = false; + + startClientCaches = false; + + super.afterTest(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testNoAffinityChangeOnClientJoin() throws Exception { + Ignite ig = startGrids(4); + + ig.cluster().active(true); + + IgniteCache atomicCache = ig.createCache(new CacheConfiguration() + .setName("atomic").setAtomicityMode(CacheAtomicityMode.ATOMIC)); + + IgniteCache txCache = ig.createCache(new CacheConfiguration() + .setName("tx").setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + + assertTrue(GridTestUtils.waitForCondition(() -> + new AffinityTopologyVersion(4, 3).equals(grid(3).context().discovery().topologyVersionEx()), + 5_000)); + + TestDiscoverySpi discoSpi = (TestDiscoverySpi) grid(2).context().discovery().getInjectedDiscoverySpi(); + + CountDownLatch latch = new CountDownLatch(1); + + discoSpi.latch = latch; + + startClient = true; + + startGrid(4); + + assertTrue(GridTestUtils.waitForCondition(() -> + new AffinityTopologyVersion(5, 0).equals(grid(0).context().discovery().topologyVersionEx()) && + new AffinityTopologyVersion(5, 0).equals(grid(1).context().discovery().topologyVersionEx()) && + new AffinityTopologyVersion(4, 3).equals(grid(2).context().discovery().topologyVersionEx()) && + new AffinityTopologyVersion(4, 3).equals(grid(3).context().discovery().topologyVersionEx()), + 10_000)); + + for (int k = 0; k < 100; k++) { + 
atomicCache.put(k, k); + txCache.put(k, k); + + Lock lock = txCache.lock(k); + lock.lock(); + lock.unlock(); + } + + for (int k = 0; k < 100; k++) { + assertEquals(Integer.valueOf(k), atomicCache.get(k)); + assertEquals(Integer.valueOf(k), txCache.get(k)); + } + + assertEquals(new AffinityTopologyVersion(5, 0), grid(0).context().discovery().topologyVersionEx()); + assertEquals(new AffinityTopologyVersion(5, 0), grid(1).context().discovery().topologyVersionEx()); + assertEquals(new AffinityTopologyVersion(4, 3), grid(2).context().discovery().topologyVersionEx()); + assertEquals(new AffinityTopologyVersion(4, 3), grid(3).context().discovery().topologyVersionEx()); + + latch.countDown(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testNoAffinityChangeOnClientLeft() throws Exception { + Ignite ig = startGrids(4); + + ig.cluster().active(true); + + IgniteCache atomicCache = ig.createCache(new CacheConfiguration() + .setName("atomic").setAtomicityMode(CacheAtomicityMode.ATOMIC)); + + IgniteCache txCache = ig.createCache(new CacheConfiguration() + .setName("tx").setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + + assertTrue(GridTestUtils.waitForCondition(() -> + new AffinityTopologyVersion(4, 3).equals(grid(3).context().discovery().topologyVersionEx()), + 5_000)); + + startClient = true; + + startGrid(4); + + TestDiscoverySpi discoSpi = (TestDiscoverySpi)grid(2).context().discovery().getInjectedDiscoverySpi(); + + CountDownLatch latch = new CountDownLatch(1); + + discoSpi.latch = latch; + + stopGrid(4); + + assertTrue(GridTestUtils.waitForCondition(() -> + new AffinityTopologyVersion(6, 0).equals(grid(0).context().discovery().topologyVersionEx()) && + new AffinityTopologyVersion(6, 0).equals(grid(1).context().discovery().topologyVersionEx()) && + new AffinityTopologyVersion(5, 0).equals(grid(2).context().discovery().topologyVersionEx()) && + new AffinityTopologyVersion(5, 0).equals(grid(3).context().discovery().topologyVersionEx()), + 
10_000)); + + for (int k = 0; k < 100; k++) { + atomicCache.put(k, k); + txCache.put(k, k); + + Lock lock = txCache.lock(k); + lock.lock(); + lock.unlock(); + } + + for (int k = 0; k < 100; k++) { + assertEquals(Integer.valueOf(k), atomicCache.get(k)); + assertEquals(Integer.valueOf(k), txCache.get(k)); + } + + assertEquals(new AffinityTopologyVersion(6, 0), grid(0).context().discovery().topologyVersionEx()); + assertEquals(new AffinityTopologyVersion(6, 0), grid(1).context().discovery().topologyVersionEx()); + assertEquals(new AffinityTopologyVersion(5, 0), grid(2).context().discovery().topologyVersionEx()); + assertEquals(new AffinityTopologyVersion(5, 0), grid(3).context().discovery().topologyVersionEx()); + + latch.countDown(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testNoAffinityChangeOnClientLeftWithMergedExchanges() throws Exception { + System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_MERGE_DELAY, "1000"); + + try { + Ignite ig = startGridsMultiThreaded(4); + + ig.cluster().active(true); + + IgniteCache atomicCache = ig.createCache(new CacheConfiguration() + .setName("atomic").setAtomicityMode(CacheAtomicityMode.ATOMIC).setCacheMode(CacheMode.REPLICATED)); + + IgniteCache txCache = ig.createCache(new CacheConfiguration() + .setName("tx").setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL).setCacheMode(CacheMode.REPLICATED)); + + startClient = true; + + Ignite client = startGrid("client"); + + startClient = false; + + stopGrid(1); + stopGrid(2); + stopGrid(3); + + awaitPartitionMapExchange(); + + atomicCache.put(-1, -1); + txCache.put(-1, -1); + + TestRecordingCommunicationSpi.spi(ig).blockMessages(new IgniteBiPredicate() { + @Override public boolean apply(ClusterNode node, Message msg) { + return msg instanceof GridDhtPartitionSupplyMessageV2; + } + }); + + startGridsMultiThreaded(1, 3); + + CountDownLatch latch = new CountDownLatch(1); + for (Ignite ignite : G.allGrids()) { + if (ignite.cluster().localNode().order() 
== 9) { + TestDiscoverySpi discoSpi = + (TestDiscoverySpi)((IgniteEx)ignite).context().discovery().getInjectedDiscoverySpi(); + + discoSpi.latch = latch; + + break; + } + } + + client.close(); + + for (int k = 0; k < 100; k++) { + atomicCache.put(k, k); + txCache.put(k, k); + + Lock lock = txCache.lock(k); + lock.lock(); + lock.unlock(); + } + + for (int k = 0; k < 100; k++) { + assertEquals(Integer.valueOf(k), atomicCache.get(k)); + assertEquals(Integer.valueOf(k), txCache.get(k)); + } + + latch.countDown(); + } + finally { + System.clearProperty(IgniteSystemProperties.IGNITE_EXCHANGE_MERGE_DELAY); + } + } + + /** + * Checks the case when the number of client events is greater than the affinity history size. + * + * @throws Exception If failed. + */ + @Test + public void testMulipleClientLeaveJoin() throws Exception { + System.setProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE, "10"); + + try { + doTestMulipleClientLeaveJoin(); + } + finally { + System.clearProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE); + } + } + + /** + * Checks the case when the number of client events is so large that the history consists only of client event versions. + * + * @throws Exception If failed. + */ + @Test + public void testMulipleClientLeaveJoinLinksLimitOverflow() throws Exception { + System.setProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE, "2"); + + try { + doTestMulipleClientLeaveJoin(); + } + finally { + System.clearProperty(IgniteSystemProperties.IGNITE_AFFINITY_HISTORY_SIZE); + } + } + + /** + * Tests that multiple client events won't fail transactions due to affinity assignment history expiration. + * + * @throws Exception If failed.
+ */ + public void doTestMulipleClientLeaveJoin() throws Exception { + Ignite ig = startGrids(2); + + ig.cluster().active(true); + + startClient = true; + + IgniteEx stableClient = startGrid(2); + + IgniteCache stableClientTxCacheProxy = stableClient.createCache( + new CacheConfiguration() + .setName("tx") + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setBackups(1) + .setAffinity(new RendezvousAffinityFunction(false, 32))); + + awaitPartitionMapExchange(); + + IgniteInternalFuture fut = GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + for (int i = 0; i < 10; i++) { + try { + startGrid(3); + + stopGrid(3); + } + catch (Exception e) { + throw new RuntimeException(e); + } + } + } + }); + + CountDownLatch clientTxLatch = new CountDownLatch(1); + + IgniteInternalFuture loadFut = GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + try (Transaction tx = stableClient.transactions().txStart()) { + ThreadLocalRandom r = ThreadLocalRandom.current(); + + stableClientTxCacheProxy.put(r.nextInt(100), r.nextInt()); + + try { + clientTxLatch.await(); + } + catch (InterruptedException e) { + throw new IgniteInterruptedException(e); + } + + tx.commit(); + } + } + }); + + fut.get(); + + clientTxLatch.countDown(); + + loadFut.get(); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testAffinityChangeOnClientConnectWithStaticallyConfiguredCaches() throws Exception { + Ignite ig = startGrids(2); + + ig.cluster().active(true); + + TestDiscoverySpi discoSpi = (TestDiscoverySpi)grid(1).context().discovery().getInjectedDiscoverySpi(); + + CountDownLatch latch = new CountDownLatch(1); + + discoSpi.latch = latch; + + startClient = true; + + startClientCaches = true; + + Ignite client = startGrid(2); + + assertTrue(GridTestUtils.waitForCondition(() -> { + AffinityTopologyVersion topVer0 = grid(0).context().discovery().topologyVersionEx(); + AffinityTopologyVersion topVer1 = grid(1).context().discovery().topologyVersionEx(); + + return topVer0.topologyVersion() == 3 && topVer1.topologyVersion() == 2; + }, 10_000)); + + final IgniteCache txCache = client.cache(PARTITIONED_TX_CLIENT_CACHE_NAME); + + final AtomicBoolean updated = new AtomicBoolean(); + + GridTestUtils.runAsync(() -> { + for (int i = 0; i < 32; ++i) + txCache.put(i, i); + + updated.set(true); + }); + + assertFalse(GridTestUtils.waitForCondition(updated::get, 5_000)); + + latch.countDown(); + + assertTrue(GridTestUtils.waitForCondition(updated::get, 5_000)); + + for (int i = 0; i < 32; ++i) + assertEquals(Integer.valueOf(i), txCache.get(i)); + + assertEquals("Expected major topology version is 3.", + 3, grid(1).context().discovery().topologyVersionEx().topologyVersion()); + } + + /** + * + */ + public static class TestDiscoverySpi extends TcpDiscoverySpi { + /** */ + private volatile CountDownLatch latch; + + /** {@inheritDoc} */ + @Override protected void startMessageProcess(TcpDiscoveryAbstractMessage msg) { + if (msg instanceof TcpDiscoveryNodeAddFinishedMessage + || msg instanceof TcpDiscoveryNodeLeftMessage + || msg instanceof TcpDiscoveryNodeFailedMessage) { + CountDownLatch latch0 = latch; + + if (latch0 != null) + try { + latch0.await(); + } + catch (InterruptedException ex) { + throw new IgniteException(ex); + } + } + + super.startMessageProcess(msg); + } 
+ } + +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapMapEntrySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapMapEntrySelfTest.java index e520d455c1aaa..fae80ebe2024e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapMapEntrySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapMapEntrySelfTest.java @@ -25,9 +25,14 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheEntry; import org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -35,6 +40,7 @@ /** * Cache map entry self test. */ +@RunWith(JUnit4.class) public class CacheOffheapMapEntrySelfTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -76,24 +82,37 @@ private CacheConfiguration cacheConfiguration(String gridName, cfg.setAtomicityMode(atomicityMode); cfg.setName(cacheName); + if (atomicityMode == TRANSACTIONAL_SNAPSHOT && !MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) + cfg.setNearConfiguration(null); + return cfg; } /** * @throws Exception If failed. 
*/ + @Test public void testCacheMapEntry() throws Exception { checkCacheMapEntry(ATOMIC, LOCAL, GridLocalCacheEntry.class); checkCacheMapEntry(TRANSACTIONAL, LOCAL, GridLocalCacheEntry.class); + if (MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.LOCAL_CACHE)) + checkCacheMapEntry(TRANSACTIONAL_SNAPSHOT, LOCAL, GridLocalCacheEntry.class); + checkCacheMapEntry(ATOMIC, PARTITIONED, GridNearCacheEntry.class); checkCacheMapEntry(TRANSACTIONAL, PARTITIONED, GridNearCacheEntry.class); + if (MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.CACHE_STORE)) + checkCacheMapEntry(TRANSACTIONAL_SNAPSHOT, PARTITIONED, GridDhtCacheEntry.class); + checkCacheMapEntry(ATOMIC, REPLICATED, GridDhtCacheEntry.class); checkCacheMapEntry(TRANSACTIONAL, REPLICATED, GridDhtCacheEntry.class); + + if (MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.CACHE_STORE)) + checkCacheMapEntry(TRANSACTIONAL_SNAPSHOT, REPLICATED, GridDhtCacheEntry.class); } /** @@ -135,4 +154,4 @@ private void checkCacheMapEntry(CacheAtomicityMode atomicityMode, jcache.destroy(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOptimisticTransactionsWithFilterTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOptimisticTransactionsWithFilterTest.java index d953e04355faa..e98cb78b2e621 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOptimisticTransactionsWithFilterTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheOptimisticTransactionsWithFilterTest.java @@ -27,13 +27,13 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -46,10 +46,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheOptimisticTransactionsWithFilterTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -60,8 +58,6 @@ public class CacheOptimisticTransactionsWithFilterTest extends GridCommonAbstrac @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -92,6 +88,7 @@ protected int serversNumber() { /** * @throws Exception If failed. */ + @Test public void testCasReplace() throws Exception { executeTestForAllCaches(new TestClosure() { @Override public void apply(Ignite ignite, String cacheName) throws Exception { @@ -179,6 +176,7 @@ public void testCasReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsent() throws Exception { executeTestForAllCaches(new TestClosure() { @Override public void apply(Ignite ignite, String cacheName) throws Exception { @@ -238,6 +236,7 @@ public void testPutIfAbsent() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReplace() throws Exception { executeTestForAllCaches(new TestClosure() { @Override public void apply(Ignite ignite, String cacheName) throws Exception { @@ -297,6 +296,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveWithOldValue() throws Exception { executeTestForAllCaches(new TestClosure() { @Override public void apply(Ignite ignite, String cacheName) throws Exception { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutEventListenerErrorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutEventListenerErrorSelfTest.java index 8891f64233476..aa6a1b1aaf5b8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutEventListenerErrorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutEventListenerErrorSelfTest.java @@ -31,28 +31,21 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for cache put with error in event listener. 
*/ +@RunWith(JUnit4.class) public class CachePutEventListenerErrorSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT); return cfg; @@ -60,6 +53,8 @@ public class CachePutEventListenerErrorSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + startGridsMultiThreaded(3); Ignition.setClientMode(true); @@ -89,6 +84,7 @@ public class CachePutEventListenerErrorSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPartitionedAtomicOnHeap() throws Exception { doTest(CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC); } @@ -96,6 +92,7 @@ public void testPartitionedAtomicOnHeap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionedTransactionalOnHeap() throws Exception { doTest(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL); } @@ -103,6 +100,7 @@ public void testPartitionedTransactionalOnHeap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplicatedAtomicOnHeap() throws Exception { doTest(CacheMode.REPLICATED, CacheAtomicityMode.ATOMIC); } @@ -110,6 +108,7 @@ public void testReplicatedAtomicOnHeap() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReplicatedTransactionalOnHeap() throws Exception { doTest(CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutIfAbsentTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutIfAbsentTest.java index 00ba25fb7651c..256b561736e4c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutIfAbsentTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CachePutIfAbsentTest.java @@ -26,14 +26,14 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -43,22 +43,11 @@ /** * */ +@RunWith(JUnit4.class) public class CachePutIfAbsentTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRVS = 4; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - 
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { super.beforeTestsStarted(); @@ -111,6 +100,7 @@ private CacheConfiguration cacheConfiguration( /** * @throws Exception If failed. */ + @Test public void testTxConflictGetAndPutIfAbsent() throws Exception { Ignite ignite0 = ignite(0); @@ -129,6 +119,9 @@ public void testTxConflictGetAndPutIfAbsent() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + try (Transaction tx = txs.txStart(concurrency, isolation)) { Object old = cache.getAndPutIfAbsent(key, 3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughLocalRestartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughLocalRestartSelfTest.java index 58fa8d6f62828..5e8b1eb442abe 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughLocalRestartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughLocalRestartSelfTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -29,4 +30,11 @@ public class CacheReadThroughLocalRestartSelfTest extends CacheReadThroughRestar @Override protected CacheMode cacheMode() { return LOCAL; } + + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); 
+ + super.setUp(); + } } \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughRestartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughRestartSelfTest.java index 422ed586be455..dd60974d45b7a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughRestartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheReadThroughRestartSelfTest.java @@ -24,12 +24,13 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,9 +38,14 @@ /** * Test for read through store. 
*/ +@RunWith(JUnit4.class) public class CacheReadThroughRestartSelfTest extends GridCacheAbstractSelfTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } /** {@inheritDoc} */ @Override protected int gridCount() { @@ -62,12 +68,6 @@ public class CacheReadThroughRestartSelfTest extends GridCacheAbstractSelfTest { cfg.setCacheConfiguration(cc); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -84,6 +84,7 @@ public class CacheReadThroughRestartSelfTest extends GridCacheAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testReadThroughInTx() throws Exception { testReadThroughInTx(false); } @@ -91,6 +92,7 @@ public void testReadThroughInTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadEntryThroughInTx() throws Exception { testReadThroughInTx(true); } @@ -116,6 +118,9 @@ private void testReadThroughInTx(boolean needVer) throws Exception { for (TransactionConcurrency txConcurrency : TransactionConcurrency.values()) { for (TransactionIsolation txIsolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(txConcurrency, txIsolation)) + continue; + try (Transaction tx = ignite.transactions().txStart(txConcurrency, txIsolation, 100000, 1000)) { for (int k = 0; k < 1000; k++) { String key = "key" + k; @@ -139,6 +144,7 @@ private void testReadThroughInTx(boolean needVer) throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadThrough() throws Exception { testReadThrough(false); } @@ -146,6 +152,7 @@ public void testReadThrough() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReadEntryThrough() throws Exception { testReadThrough(true); } @@ -175,4 +182,4 @@ private void testReadThrough(boolean needVer) throws Exception { assertNotNull("Null value for key: " + key, cache.get(key)); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalanceConfigValidationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalanceConfigValidationTest.java index 2f31b31372396..c3eee1b13d0da 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalanceConfigValidationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalanceConfigValidationTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheRebalanceConfigValidationTest extends GridCommonAbstractTest { /** Rebalance pool size. */ private int rebalancePoolSize; @@ -43,6 +47,7 @@ public class CacheRebalanceConfigValidationTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testParameterConsistency() throws Exception { rebalancePoolSize = 2; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalancingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalancingSelfTest.java index 75e62964b3ac8..c3085f6c83a28 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalancingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRebalancingSelfTest.java @@ -35,11 +35,16 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for rebalancing. 
*/ +@RunWith(JUnit4.class) public class CacheRebalancingSelfTest extends GridCommonAbstractTest { /** Cache name with one backups */ private static final String REBALANCE_TEST_CACHE_NAME = "rebalanceCache"; @@ -48,12 +53,12 @@ public class CacheRebalancingSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - CacheConfiguration rebalabceCacheCfg = new CacheConfiguration<>(); - rebalabceCacheCfg.setBackups(1); - rebalabceCacheCfg.setName(REBALANCE_TEST_CACHE_NAME); - rebalabceCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); + CacheConfiguration ccfg = new CacheConfiguration<>(); + ccfg.setBackups(1); + ccfg.setName(REBALANCE_TEST_CACHE_NAME); + ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); - cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME), rebalabceCacheCfg); + cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME), ccfg); return cfg; } @@ -68,7 +73,10 @@ public class CacheRebalancingSelfTest extends GridCommonAbstractTest { /** * @throws Exception If fails. */ + @Test public void testRebalanceLocalCacheFuture() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + startGrid( getTestIgniteInstanceName(0), getConfiguration(getTestIgniteInstanceName(0)) @@ -86,6 +94,7 @@ public void testRebalanceLocalCacheFuture() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalanceFuture() throws Exception { IgniteEx ig0 = startGrid(0); @@ -121,6 +130,7 @@ private static IgniteInternalFuture internalFuture(IgniteFuture fut) { * * @throws Exception If failed. 
*/ + @Test public void testDisableRebalancing() throws Exception { IgniteEx ig0 = startGrid(0); IgniteEx ig1 = startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRemoveAllSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRemoveAllSelfTest.java index a27bdda62a177..665e7cee9adb5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRemoveAllSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheRemoveAllSelfTest.java @@ -24,11 +24,24 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * Test remove all method. */ +@RunWith(JUnit4.class) public class CacheRemoveAllSelfTest extends GridCacheAbstractSelfTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10082"); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected long getTestTimeout() { return 2 * 60 * 1000; @@ -42,6 +55,7 @@ public class CacheRemoveAllSelfTest extends GridCacheAbstractSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testRemoveAll() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -76,4 +90,4 @@ public void testRemoveAll() throws Exception { 0, locCache.localSize()); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheSerializableTransactionsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheSerializableTransactionsTest.java index 714ae6aa59bb4..37c4fabd279c6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheSerializableTransactionsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheSerializableTransactionsTest.java @@ -67,15 +67,15 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionOptimisticException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -92,10 +92,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheSerializableTransactionsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private 
static final boolean FAST = false; @@ -117,8 +115,6 @@ public class CacheSerializableTransactionsTest extends GridCommonAbstractTest { cfg.setPeerClassLoadingEnabled(false); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); cfg.setClientMode(client); @@ -147,6 +143,7 @@ public class CacheSerializableTransactionsTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTxStreamerLoad() throws Exception { txStreamerLoad(false); } @@ -154,6 +151,7 @@ public void testTxStreamerLoad() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxStreamerLoadAllowOverwrite() throws Exception { txStreamerLoad(true); } @@ -240,6 +238,7 @@ private void txStreamerLoad(Ignite ignite, /** * @throws Exception If failed. */ + @Test public void testTxLoadFromStore() throws Exception { Ignite ignite0 = ignite(0); @@ -295,6 +294,7 @@ public void testTxLoadFromStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxCommitReadOnly1() throws Exception { Ignite ignite0 = ignite(0); @@ -353,6 +353,7 @@ public void testTxCommitReadOnly1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxCommitReadOnly2() throws Exception { Ignite ignite0 = ignite(0); @@ -419,6 +420,7 @@ public void testTxCommitReadOnly2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxCommit() throws Exception { Ignite ignite0 = ignite(0); Ignite ignite1 = ignite(1); @@ -500,6 +502,7 @@ public void testTxCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxRollback() throws Exception { Ignite ignite0 = ignite(0); Ignite ignite1 = ignite(1); @@ -567,6 +570,7 @@ public void testTxRollback() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxCommitReadOnlyGetAll() throws Exception { testTxCommitReadOnlyGetAll(false); } @@ -574,6 +578,7 @@ public void testTxCommitReadOnlyGetAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxCommitReadOnlyGetEntries() throws Exception { testTxCommitReadOnlyGetAll(true); } @@ -642,6 +647,7 @@ private void testTxCommitReadOnlyGetAll(boolean needVer) throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxCommitReadWriteTwoNodes() throws Exception { Ignite ignite0 = ignite(0); @@ -673,6 +679,7 @@ public void testTxCommitReadWriteTwoNodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictRead1() throws Exception { txConflictRead(true, false); } @@ -680,6 +687,7 @@ public void testTxConflictRead1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictRead2() throws Exception { txConflictRead(false, false); } @@ -687,6 +695,7 @@ public void testTxConflictRead2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadEntry1() throws Exception { txConflictRead(true, true); } @@ -694,6 +703,7 @@ public void testTxConflictReadEntry1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadEntry2() throws Exception { txConflictRead(false, true); } @@ -778,6 +788,7 @@ private void txConflictRead(boolean noVal, boolean needVer) throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadWrite1() throws Exception { txConflictReadWrite(true, false, false); } @@ -785,6 +796,7 @@ public void testTxConflictReadWrite1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadWrite2() throws Exception { txConflictReadWrite(false, false, false); } @@ -792,6 +804,7 @@ public void testTxConflictReadWrite2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxConflictReadRemove1() throws Exception { txConflictReadWrite(true, true, false); } @@ -799,6 +812,7 @@ public void testTxConflictReadRemove1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadRemove2() throws Exception { txConflictReadWrite(false, true, false); } @@ -806,6 +820,7 @@ public void testTxConflictReadRemove2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadEntryWrite1() throws Exception { txConflictReadWrite(true, false, true); } @@ -813,6 +828,7 @@ public void testTxConflictReadEntryWrite1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadEntryWrite2() throws Exception { txConflictReadWrite(false, false, true); } @@ -820,6 +836,7 @@ public void testTxConflictReadEntryWrite2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadEntryRemove1() throws Exception { txConflictReadWrite(true, true, true); } @@ -827,6 +844,7 @@ public void testTxConflictReadEntryRemove1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReadEntryRemove2() throws Exception { txConflictReadWrite(false, true, true); } @@ -923,6 +941,7 @@ private void txConflictReadWrite(boolean noVal, boolean rmv, boolean needVer) th /** * @throws Exception If failed. 
*/ + @Test public void testTxConflictReadWrite3() throws Exception { Ignite ignite0 = ignite(0); @@ -988,6 +1007,7 @@ public void testTxConflictReadWrite3() throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictGetAndPut1() throws Exception { txConflictGetAndPut(true, false); } @@ -995,6 +1015,7 @@ public void testTxConflictGetAndPut1() throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictGetAndPut2() throws Exception { txConflictGetAndPut(false, false); } @@ -1002,6 +1023,7 @@ public void testTxConflictGetAndPut2() throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictGetAndRemove1() throws Exception { txConflictGetAndPut(true, true); } @@ -1009,6 +1031,7 @@ public void testTxConflictGetAndRemove1() throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictGetAndRemove2() throws Exception { txConflictGetAndPut(false, true); } @@ -1081,6 +1104,7 @@ private void txConflictGetAndPut(boolean noVal, boolean rmv) throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictInvoke1() throws Exception { txConflictInvoke(true, false); } @@ -1088,6 +1112,7 @@ public void testTxConflictInvoke1() throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictInvoke2() throws Exception { txConflictInvoke(false, false); } @@ -1095,6 +1120,7 @@ public void testTxConflictInvoke2() throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictInvoke3() throws Exception { txConflictInvoke(true, true); } @@ -1102,6 +1128,7 @@ public void testTxConflictInvoke3() throws Exception { /** * @throws Exception If failed */ + @Test public void testTxConflictInvoke4() throws Exception { txConflictInvoke(false, true); } @@ -1174,6 +1201,7 @@ private void txConflictInvoke(boolean noVal, boolean rmv) throws Exception { /** * @throws Exception If failed */ + @Test public 
void testTxConflictInvokeAll() throws Exception { Ignite ignite0 = ignite(0); @@ -1268,6 +1296,7 @@ public void testTxConflictInvokeAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictPutIfAbsent() throws Exception { Ignite ignite0 = ignite(0); @@ -1354,6 +1383,7 @@ public void testTxConflictPutIfAbsent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictGetAndPutIfAbsent() throws Exception { Ignite ignite0 = ignite(0); @@ -1440,6 +1470,7 @@ public void testTxConflictGetAndPutIfAbsent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictReplace() throws Exception { Ignite ignite0 = ignite(0); @@ -1565,6 +1596,7 @@ public void testTxConflictReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictGetAndReplace() throws Exception { Ignite ignite0 = ignite(0); @@ -1690,6 +1722,7 @@ public void testTxConflictGetAndReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictRemoveWithOldValue() throws Exception { Ignite ignite0 = ignite(0); @@ -1827,6 +1860,7 @@ public void testTxConflictRemoveWithOldValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictCasReplace() throws Exception { Ignite ignite0 = ignite(0); @@ -1964,6 +1998,7 @@ public void testTxConflictCasReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictRemoveReturnBoolean1() throws Exception { txConflictRemoveReturnBoolean(false); } @@ -1971,6 +2006,7 @@ public void testTxConflictRemoveReturnBoolean1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxConflictRemoveReturnBoolean2() throws Exception { txConflictRemoveReturnBoolean(true); } @@ -2135,6 +2171,7 @@ private void txConflictRemoveReturnBoolean(boolean noVal) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxNoConflictPut1() throws Exception { txNoConflictUpdate(true, false, false); } @@ -2142,6 +2179,7 @@ public void testTxNoConflictPut1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoConflictPut2() throws Exception { txNoConflictUpdate(false, false, false); } @@ -2149,6 +2187,7 @@ public void testTxNoConflictPut2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoConflictPut3() throws Exception { txNoConflictUpdate(false, false, true); } @@ -2156,6 +2195,7 @@ public void testTxNoConflictPut3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoConflictRemove1() throws Exception { txNoConflictUpdate(true, true, false); } @@ -2163,6 +2203,7 @@ public void testTxNoConflictRemove1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoConflictRemove2() throws Exception { txNoConflictUpdate(false, true, false); } @@ -2170,6 +2211,7 @@ public void testTxNoConflictRemove2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoConflictRemove3() throws Exception { txNoConflictUpdate(false, true, true); } @@ -2285,6 +2327,7 @@ private void txNoConflictUpdate(boolean noVal, boolean rmv, boolean getAfterUpda /** * @throws Exception If failed. */ + @Test public void testTxNoConflictContainsKey1() throws Exception { txNoConflictContainsKey(false); } @@ -2292,6 +2335,7 @@ public void testTxNoConflictContainsKey1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoConflictContainsKey2() throws Exception { txNoConflictContainsKey(true); } @@ -2375,6 +2419,7 @@ private void txNoConflictContainsKey(boolean noVal) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxRollbackIfLocked1() throws Exception { Ignite ignite0 = ignite(0); @@ -2434,6 +2479,7 @@ public void testTxRollbackIfLocked1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxRollbackIfLocked2() throws Exception { rollbackIfLockedPartialLock(false); } @@ -2441,6 +2487,7 @@ public void testTxRollbackIfLocked2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxRollbackIfLocked3() throws Exception { rollbackIfLockedPartialLock(true); } @@ -2507,6 +2554,7 @@ private void rollbackIfLockedPartialLock(boolean locKey) throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoReadLockConflict() throws Exception { checkNoReadLockConflict(false); } @@ -2514,6 +2562,7 @@ public void testNoReadLockConflict() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoReadLockConflictGetEntry() throws Exception { checkNoReadLockConflict(true); } @@ -2614,6 +2663,7 @@ private void checkNoReadLockConflict(final Ignite ignite, /** * @throws Exception If failed. */ + @Test public void testNoReadLockConflictMultiNode() throws Exception { Ignite ignite0 = ignite(0); @@ -2675,6 +2725,7 @@ public void testNoReadLockConflictMultiNode() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("UnnecessaryLocalVariable") + @Test public void testReadLockPessimisticTxConflict() throws Exception { Ignite ignite0 = ignite(0); @@ -2729,6 +2780,7 @@ public void testReadLockPessimisticTxConflict() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("UnnecessaryLocalVariable") + @Test public void testReadWriteTxConflict() throws Exception { Ignite ignite0 = ignite(0); @@ -2782,6 +2834,7 @@ public void testReadWriteTxConflict() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReadWriteTransactionsNoDeadlock() throws Exception { checkReadWriteTransactionsNoDeadlock(false); } @@ -2789,6 +2842,7 @@ public void testReadWriteTransactionsNoDeadlock() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadWriteTransactionsNoDeadlockMultinode() throws Exception { checkReadWriteTransactionsNoDeadlock(true); } @@ -2853,6 +2907,7 @@ private void checkReadWriteTransactionsNoDeadlock(final boolean multiNode) throw /** * @throws Exception If failed. */ + @Test public void testReadWriteAccountTx() throws Exception { final CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, FULL_SYNC, @@ -3032,6 +3087,7 @@ public void testReadWriteAccountTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearCacheReaderUpdate() throws Exception { Ignite ignite0 = ignite(0); @@ -3080,6 +3136,7 @@ public void testNearCacheReaderUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollbackNearCache1() throws Exception { rollbackNearCacheWrite(true); } @@ -3087,6 +3144,7 @@ public void testRollbackNearCache1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollbackNearCache2() throws Exception { rollbackNearCacheWrite(false); } @@ -3166,6 +3224,7 @@ private void rollbackNearCacheWrite(boolean near) throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollbackNearCache3() throws Exception { rollbackNearCacheRead(true); } @@ -3173,6 +3232,7 @@ public void testRollbackNearCache3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollbackNearCache4() throws Exception { rollbackNearCacheRead(false); } @@ -3246,6 +3306,7 @@ private void rollbackNearCacheRead(boolean near) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCrossCacheTx() throws Exception { Ignite ignite0 = ignite(0); @@ -3509,6 +3570,7 @@ public void testCrossCacheTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRandomOperations() throws Exception { Ignite ignite0 = ignite(0); @@ -3608,6 +3670,7 @@ private void randomOperation(ThreadLocalRandom rnd, IgniteCache cache0 = grid(0).getOrCreateCache(getNearConfig()); @@ -635,7 +654,10 @@ public void testNearClose() throws Exception { * * @throws Exception If failed. */ + @Test public void testNearCloseWithTry() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + startGridsMultiThreaded(gridCount()); String curVal = null; @@ -673,7 +695,10 @@ public void testNearCloseWithTry() throws Exception { * * @throws Exception If failed. */ + @Test public void testLocalClose() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + memCfg = new DataStorageConfiguration(); startGridsMultiThreaded(gridCount()); @@ -724,7 +749,10 @@ public void testLocalClose() throws Exception { * * @throws Exception If failed. */ + @Test public void testLocalCloseWithTry() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + memCfg = new DataStorageConfiguration(); startGridsMultiThreaded(gridCount()); @@ -756,6 +784,7 @@ public void testLocalCloseWithTry() throws Exception { /** * Tests start -> destroy -> start -> close using CacheManager. */ + @Test public void testTckStyleCreateDestroyClose() throws Exception { startGridsMultiThreaded(gridCount()); @@ -788,6 +817,7 @@ public void testTckStyleCreateDestroyClose() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConcurrentUseAndCloseFromClient() throws Exception { testConcurrentUseAndClose(true); } @@ -795,6 +825,7 @@ public void testConcurrentUseAndCloseFromClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentUseAndCloseFromServer() throws Exception { testConcurrentUseAndClose(false); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeAbstractTest.java index b75500fe9649a..ff4fb753c71d4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeAbstractTest.java @@ -37,9 +37,6 @@ import org.apache.ignite.internal.processors.cache.store.CacheLocalStore; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; @@ -57,9 +54,6 @@ * */ public abstract class CacheStoreUsageMultinodeAbstractTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ protected boolean client; @@ -87,8 +81,6 @@ public abstract class CacheStoreUsageMultinodeAbstractTest extends GridCommonAbs cfg.setClientMode(client); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (cache) cfg.setCacheConfiguration(cacheConfiguration()); diff 
--git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartAbstractTest.java index 745e97b81324b..80fc6da1c85ca 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartAbstractTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class CacheStoreUsageMultinodeDynamicStartAbstractTest extends CacheStoreUsageMultinodeAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -40,6 +44,7 @@ public abstract class CacheStoreUsageMultinodeDynamicStartAbstractTest extends C /** * @throws Exception If failed. */ + @Test public void testDynamicStart() throws Exception { checkStoreWithDynamicStart(false); } @@ -47,6 +52,7 @@ public void testDynamicStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDynamicStartNearEnabled() throws Exception { nearCache = true; @@ -56,6 +62,7 @@ public void testDynamicStartNearEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDynamicFromClientStart() throws Exception { checkStoreWithDynamicStart(true); } @@ -63,6 +70,7 @@ public void testDynamicFromClientStart() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDynamicStartFromClientNearEnabled() throws Exception { nearCache = true; @@ -72,6 +80,7 @@ public void testDynamicStartFromClientNearEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDynamicStartLocalStore() throws Exception { locStore = true; @@ -81,6 +90,7 @@ public void testDynamicStartLocalStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDynamicStartFromClientLocalStore() throws Exception { locStore = true; @@ -90,6 +100,7 @@ public void testDynamicStartFromClientLocalStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDynamicStartLocalStoreNearEnabled() throws Exception { locStore = true; @@ -101,6 +112,7 @@ public void testDynamicStartLocalStoreNearEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDynamicStartWriteBehindStore() throws Exception { writeBehind = true; @@ -110,6 +122,7 @@ public void testDynamicStartWriteBehindStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDynamicStartFromClientWriteBehindStore() throws Exception { writeBehind = true; @@ -119,6 +132,7 @@ public void testDynamicStartFromClientWriteBehindStore() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDynamicStartWriteBehindStoreNearEnabled() throws Exception { writeBehind = true; @@ -161,4 +175,4 @@ private void checkStoreWithDynamicStart(boolean clientStart) throws Exception { cache.destroy(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartTxTest.java index 4511fc5a3d826..e217a1f878382 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartTxTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeDynamicStartTxTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -25,6 +26,13 @@ * */ public class CacheStoreUsageMultinodeDynamicStartTxTest extends CacheStoreUsageMultinodeDynamicStartAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartAbstractTest.java index e16f16ed937dc..a3b0ea5f58796 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartAbstractTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartAbstractTest.java
@@ -17,9 +17,14 @@
 package org.apache.ignite.internal.processors.cache;
 
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
+
 /**
  *
  */
+@RunWith(JUnit4.class)
 public abstract class CacheStoreUsageMultinodeStaticStartAbstractTest extends CacheStoreUsageMultinodeAbstractTest {
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
@@ -31,6 +36,7 @@ public abstract class CacheStoreUsageMultinodeStaticStartAbstractTest extends Ca
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfiguration() throws Exception {
         checkStoreUpdateStaticConfig(true);
     }
@@ -38,6 +44,7 @@ public void testStaticConfiguration() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationNearEnabled() throws Exception {
         nearCache = true;
 
@@ -47,6 +54,7 @@ public void testStaticConfigurationNearEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationLocalStore() throws Exception {
         locStore = true;
 
@@ -56,6 +64,7 @@ public void testStaticConfigurationLocalStore() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationLocalStoreNearEnabled() throws Exception {
         locStore = true;
 
@@ -67,6 +76,7 @@ public void testStaticConfigurationLocalStoreNearEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationTxLocalStoreNoClientStore() throws Exception {
         locStore = true;
 
@@ -76,6 +86,7 @@ public void testStaticConfigurationTxLocalStoreNoClientStore() throws Exception
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationTxLocalStoreNoClientStoreNearEnabled() throws Exception {
         locStore = true;
 
@@ -87,6 +98,7 @@ public void testStaticConfigurationTxLocalStoreNoClientStoreNearEnabled() throws
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationTxWriteBehindStore() throws Exception {
         writeBehind = true;
 
@@ -96,6 +108,7 @@ public void testStaticConfigurationTxWriteBehindStore() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationTxWriteBehindStoreNearEnabled() throws Exception {
         writeBehind = true;
 
@@ -107,6 +120,7 @@ public void testStaticConfigurationTxWriteBehindStoreNearEnabled() throws Except
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationTxWriteBehindStoreNoClientStore() throws Exception {
         writeBehind = true;
 
@@ -116,6 +130,7 @@ public void testStaticConfigurationTxWriteBehindStoreNoClientStore() throws Exce
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStaticConfigurationTxWriteBehindStoreNoClientStoreNearEnabled() throws Exception {
         writeBehind = true;
 
@@ -148,4 +163,4 @@ private void checkStoreUpdateStaticConfig(boolean clientStore) throws Exception
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartTxTest.java
index 2f11fc862b47c..8fa7e65b2c37e 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartTxTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheStoreUsageMultinodeStaticStartTxTest.java
@@ -18,6 +18,7 @@
 package org.apache.ignite.internal.processors.cache;
 
 import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 
@@ -25,6 +26,13 @@
  *
  */
 public class CacheStoreUsageMultinodeStaticStartTxTest extends CacheStoreUsageMultinodeStaticStartAbstractTest {
+    /** {@inheritDoc} */
+    @Override public void setUp() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
+
+        super.setUp();
+    }
+
     /** {@inheritDoc} */
     @Override protected CacheAtomicityMode atomicityMode() {
         return TRANSACTIONAL;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxFastFinishTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxFastFinishTest.java
index 79316bfa457b4..407788ccf07d7 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxFastFinishTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxFastFinishTest.java
@@ -27,14 +27,14 @@
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFastFinishFuture;
 import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx;
 import org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -48,10 +48,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class CacheTxFastFinishTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private boolean client;
 
@@ -62,8 +60,6 @@ public class CacheTxFastFinishTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
 
         ccfg.setCacheMode(PARTITIONED);
@@ -91,6 +87,7 @@ public class CacheTxFastFinishTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testFastFinishTxNearCache() throws Exception {
         nearCache = true;
 
@@ -100,6 +97,7 @@ public void testFastFinishTxNearCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testFastFinishTx() throws Exception {
         fastFinishTx();
     }
@@ -107,7 +105,7 @@ public void testFastFinishTx() throws Exception {
     /**
      * @throws Exception If failed.
      */
-    private void fastFinishTx() throws Exception {
+    protected void fastFinishTx() throws Exception {
         startGrid(0);
 
         fastFinishTx(ignite(0));
@@ -142,7 +140,7 @@ private void fastFinishTx() throws Exception {
     /**
      * @param ignite Node.
      */
-    private void fastFinishTx(Ignite ignite) {
+    protected void fastFinishTx(Ignite ignite) {
         IgniteTransactions txs = ignite.transactions();
 
         IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
@@ -202,7 +200,7 @@
      * @param tx Transaction.
      * @param commit Commit flag.
      */
-    private void checkFastTxFinish(Transaction tx, boolean commit) {
+    protected void checkFastTxFinish(Transaction tx, boolean commit) {
         if (commit)
             tx.commit();
         else
@@ -218,7 +216,7 @@ private void checkFastTxFinish(Transaction tx, boolean commit) {
      * @param tx Transaction.
      * @param commit Commit flag.
      */
-    private void checkNormalTxFinish(Transaction tx, boolean commit) {
+    protected void checkNormalTxFinish(Transaction tx, boolean commit) {
         IgniteInternalTx tx0 = ((TransactionProxyImpl)tx).tx();
 
         if (commit) {
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxNotAllowReadFromBackupTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxNotAllowReadFromBackupTest.java
index f0ec084a55764..dcc9a07e2eed2 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxNotAllowReadFromBackupTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheTxNotAllowReadFromBackupTest.java
@@ -29,22 +29,20 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
 import org.jetbrains.annotations.NotNull;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for query with BinaryMarshaller and different serialization modes and with reflective serializer.
  */
+@RunWith(JUnit4.class)
 public class CacheTxNotAllowReadFromBackupTest extends GridCommonAbstractTest {
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int NODES = 2;
 
@@ -61,7 +59,6 @@ public class CacheTxNotAllowReadFromBackupTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
         ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
 
         return cfg;
@@ -77,6 +74,7 @@ public class CacheTxNotAllowReadFromBackupTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBackupConsistencyReplicated() throws Exception {
         CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
 
@@ -105,6 +103,7 @@ public void testBackupConsistencyReplicated() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBackupConsistencyReplicatedFullSync() throws Exception {
         CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
 
@@ -133,6 +132,7 @@ public void testBackupConsistencyReplicatedFullSync() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBackupConsistencyPartitioned() throws Exception {
         CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
 
@@ -162,6 +162,7 @@ public void testBackupConsistencyPartitioned() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBackupConsistencyPartitionedFullSync() throws Exception {
         CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
 
@@ -188,6 +189,76 @@ public void testBackupConsistencyPartitionedFullSync() throws Exception {
         checkBackupConsistencyGetAll(cfg, TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE);
     }
 
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testBackupConsistencyReplicatedMvcc() throws Exception {
+        CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
+
+        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
+        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
+        cfg.setCacheMode(CacheMode.REPLICATED);
+        cfg.setReadFromBackup(false);
+
+        checkBackupConsistency(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+
+        checkBackupConsistencyGetAll(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testBackupConsistencyReplicatedFullSyncMvcc() throws Exception {
+        CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
+
+        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
+        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
+        cfg.setCacheMode(CacheMode.REPLICATED);
+        cfg.setReadFromBackup(false);
+
+        checkBackupConsistency(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+
+        checkBackupConsistencyGetAll(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testBackupConsistencyPartitionedMvcc() throws Exception {
+        CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
+
+        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
+        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
+        cfg.setCacheMode(CacheMode.PARTITIONED);
+        cfg.setBackups(NODES - 1);
+        cfg.setReadFromBackup(false);
+
+        checkBackupConsistency(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+
+        checkBackupConsistencyGetAll(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testBackupConsistencyPartitionedFullSyncMvcc() throws Exception {
+        CacheConfiguration cfg = new CacheConfiguration<>("test-cache");
+
+        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
+        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
+        cfg.setCacheMode(CacheMode.PARTITIONED);
+        cfg.setBackups(NODES - 1);
+        cfg.setReadFromBackup(false);
+
+        checkBackupConsistency(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+
+        checkBackupConsistencyGetAll(cfg, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
+    }
+
     /**
      * @param ccfg Cache configuration.
      * @throws Exception If failed.
@@ -287,4 +358,4 @@ private void checkBackupConsistencyGetAll(CacheConfiguration c
         }
         return batches;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheValidatorMetricsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheValidatorMetricsTest.java
index 4a950dda8f1a1..84216c51d5daf 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheValidatorMetricsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheValidatorMetricsTest.java
@@ -21,6 +21,7 @@
 import java.util.Collection;
 import java.util.List;
 import org.apache.ignite.Ignite;
+import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.cache.CacheMode;
 import org.apache.ignite.cache.PartitionLossPolicy;
 import org.apache.ignite.cluster.ClusterNode;
@@ -29,10 +30,14 @@
 import org.apache.ignite.configuration.TopologyValidator;
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Cache validator metrics test.
  */
+@RunWith(JUnit4.class)
 public class CacheValidatorMetricsTest extends GridCommonAbstractTest implements Serializable {
     /** Cache name 1. */
     private static String CACHE_NAME_1 = "cache1";
@@ -48,11 +53,13 @@ public class CacheValidatorMetricsTest extends GridCommonAbstractTest implements
             .setName(CACHE_NAME_1)
             .setCacheMode(CacheMode.PARTITIONED)
             .setBackups(0)
+            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
             .setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_ALL);
 
         CacheConfiguration cCfg2 = new CacheConfiguration()
             .setName(CACHE_NAME_2)
             .setCacheMode(CacheMode.REPLICATED)
+            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
             .setTopologyValidator(new TopologyValidator() {
                 @Override public boolean validate(Collection nodes) {
                     return nodes.size() == 2;
@@ -90,6 +97,7 @@ void assertCacheStatus(String cacheName, boolean validForReading, boolean validF
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testCacheValidatorMetrics() throws Exception {
         startGrid(1);
 
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeAbstractTest.java
new file mode 100644
index 0000000000000..3134f1c4ff1f3
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeAbstractTest.java
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.processors.cache;
+
+import java.util.Collection;
+import java.util.Collections;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.QueryEntity;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.internal.IgniteEx;
+import org.apache.ignite.internal.util.typedef.F;
+import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+
+import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
+import static org.apache.ignite.cache.CacheMode.PARTITIONED;
+import static org.apache.ignite.cache.CacheMode.REPLICATED;
+
+/**
+ *
+ */
+public class ClusterReadOnlyModeAbstractTest extends GridCommonAbstractTest {
+    /** */
+    private static final int SRVS = 3;
+
+    /** Replicated atomic cache. */
+    private static final String REPL_ATOMIC_CACHE = "repl_atomic_cache";
+
+    /** Replicated transactional cache. */
+    private static final String REPL_TX_CACHE = "repl_tx_cache";
+
+    /** Replicated mvcc transactional cache. */
+    private static final String REPL_MVCC_CACHE = "repl_mvcc_cache";
+
+    /** Partitioned atomic cache. */
+    private static final String PART_ATOMIC_CACHE = "part_atomic_cache";
+
+    /** Partitioned transactional cache. */
+    private static final String PART_TX_CACHE = "part_tx_cache";
+
+    /** Partitioned mvcc transactional cache. */
+    private static final String PART_MVCC_CACHE = "part_mvcc_cache";
+
+    /** Cache names. */
+    protected static final Collection CACHE_NAMES = F.asList(REPL_ATOMIC_CACHE, REPL_TX_CACHE,
+        PART_ATOMIC_CACHE, PART_TX_CACHE);
+
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        super.beforeTestsStarted();
+
+        startGridsMultiThreaded(SRVS);
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void afterTest() throws Exception {
+        super.afterTest();
+
+        changeClusterReadOnlyMode(false);
+    }
+
+    /** {@inheritDoc} */
+    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
+        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
+
+        cfg.setCacheConfiguration(
+            cacheConfiguration(REPL_ATOMIC_CACHE, REPLICATED, ATOMIC, null),
+            cacheConfiguration(REPL_TX_CACHE, REPLICATED, TRANSACTIONAL, null),
+            cacheConfiguration(REPL_MVCC_CACHE, REPLICATED, TRANSACTIONAL_SNAPSHOT, "mvcc_repl_grp"),
+            cacheConfiguration(PART_ATOMIC_CACHE, PARTITIONED, ATOMIC, "part_grp"),
+            cacheConfiguration(PART_TX_CACHE, PARTITIONED, TRANSACTIONAL, "part_grp"),
+            cacheConfiguration(PART_MVCC_CACHE, PARTITIONED, TRANSACTIONAL_SNAPSHOT, "mvcc_part_grp")
+        );
+
+        return cfg;
+    }
+
+    /**
+     * @param name Cache name.
+     * @param cacheMode Cache mode.
+     * @param atomicityMode Atomicity mode.
+     * @param grpName Cache group name.
+     */
+    private CacheConfiguration cacheConfiguration(String name, CacheMode cacheMode,
+        CacheAtomicityMode atomicityMode, String grpName) {
+        return new CacheConfiguration()
+            .setName(name)
+            .setCacheMode(cacheMode)
+            .setAtomicityMode(atomicityMode)
+            .setGroupName(grpName)
+            .setQueryEntities(Collections.singletonList(new QueryEntity(Integer.class, Integer.class)));
+    }
+
+    /**
+     * Change read only mode on all nodes.
+     *
+     * @param readOnly Read only.
+     */
+    protected void changeClusterReadOnlyMode(boolean readOnly) {
+        for (int idx = 0; idx < SRVS; idx++) {
+            IgniteEx ignite = grid(idx);
+
+            ignite.context().cache().context().readOnlyMode(readOnly);
+        }
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeTest.java
new file mode 100644
index 0000000000000..2da280c74a773
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeTest.java
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.processors.cache;
+
+import java.util.Random;
+import javax.cache.CacheException;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl;
+import org.apache.ignite.internal.util.typedef.G;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
+
+/**
+ * Tests cache get/put/remove and data streaming in read-only cluster mode.
+ */
+@RunWith(JUnit4.class)
+public class ClusterReadOnlyModeTest extends ClusterReadOnlyModeAbstractTest {
+    /**
+     * Tests cache get/put/remove.
+     */
+    @Test
+    public void testCacheGetPutRemove() {
+        assertCachesReadOnlyMode(false);
+
+        changeClusterReadOnlyMode(true);
+
+        assertCachesReadOnlyMode(true);
+
+        changeClusterReadOnlyMode(false);
+
+        assertCachesReadOnlyMode(false);
+    }
+
+    /**
+     * Tests data streamer.
+     */
+    @Test
+    public void testDataStreamerReadOnly() {
+        assertDataStreamerReadOnlyMode(false);
+
+        changeClusterReadOnlyMode(true);
+
+        assertDataStreamerReadOnlyMode(true);
+
+        changeClusterReadOnlyMode(false);
+
+        assertDataStreamerReadOnlyMode(false);
+    }
+
+    /**
+     * Asserts that all caches are in read-only or in read/write mode on all nodes.
+     *
+     * @param readOnly If {@code true} then cache must be in read only mode, else in read/write mode.
+     */
+    private void assertCachesReadOnlyMode(boolean readOnly) {
+        Random rnd = new Random();
+
+        for (Ignite ignite : G.allGrids()) {
+            for (String cacheName : CACHE_NAMES) {
+                IgniteCache cache = ignite.cache(cacheName);
+
+                for (int i = 0; i < 10; i++) {
+                    cache.get(rnd.nextInt(100)); // All gets must succeed.
+
+                    if (readOnly) {
+                        // All puts must fail.
+                        try {
+                            cache.put(rnd.nextInt(100), rnd.nextInt());
+
+                            fail("Put must fail for cache " + cacheName);
+                        }
+                        catch (Exception e) {
+                            // No-op.
+                        }
+
+                        // All removes must fail.
+                        try {
+                            cache.remove(rnd.nextInt(100));
+
+                            fail("Remove must fail for cache " + cacheName);
+                        }
+                        catch (Exception e) {
+                            // No-op.
+                        }
+                    }
+                    else {
+                        cache.put(rnd.nextInt(100), rnd.nextInt()); // All puts must succeed.
+
+                        cache.remove(rnd.nextInt(100)); // All removes must succeed.
+                    }
+                }
+            }
+        }
+    }
+
+    /**
+     * @param readOnly If {@code true} then data streamer must fail, else succeed.
+     */
+    private void assertDataStreamerReadOnlyMode(boolean readOnly) {
+        Random rnd = new Random();
+
+        for (Ignite ignite : G.allGrids()) {
+            for (String cacheName : CACHE_NAMES) {
+                boolean failed = false;
+
+                try (IgniteDataStreamer streamer = ignite.dataStreamer(cacheName)) {
+                    for (int i = 0; i < 10; i++) {
+                        ((DataStreamerImpl)streamer).maxRemapCount(5);
+
+                        streamer.addData(rnd.nextInt(1000), rnd.nextInt());
+                    }
+                }
+                catch (CacheException ignored) {
+                    failed = true;
+                }
+
+                if (failed != readOnly)
+                    fail("Streaming to " + cacheName + " must " + (readOnly ? "fail" : "succeed"));
+            }
+        }
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterStateAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterStateAbstractTest.java
index d5f9c900b0fe9..fc8d2f22edf86 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterStateAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ClusterStateAbstractTest.java
@@ -42,11 +42,14 @@
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
-@SuppressWarnings("TooBroadScope")
+@RunWith(JUnit4.class)
 public abstract class ClusterStateAbstractTest extends GridCommonAbstractTest {
     /** Entry count. */
     public static final int ENTRY_CNT = 5000;
@@ -106,6 +109,7 @@ public abstract class ClusterStateAbstractTest extends GridCommonAbstractTest {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testDynamicCacheStart() throws Exception {
         activeOnStart = false;
 
@@ -118,14 +122,14 @@ public void testDynamicCacheStart() throws Exception {
 
         forbidden.clear();
 
-        grid(0).active(true);
+        grid(0).cluster().active(true);
 
         IgniteCache cache2 = grid(0).createCache(new CacheConfiguration<>("cache2"));
 
         for (int k = 0; k < ENTRY_CNT; k++)
             cache2.put(k, k);
 
-        grid(0).active(false);
+        grid(0).cluster().active(false);
 
         checkInactive(GRID_CNT);
 
@@ -135,6 +139,7 @@ public void testDynamicCacheStart() throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testNoRebalancing() throws Exception {
         activeOnStart = false;
 
@@ -147,7 +152,7 @@ public void testNoRebalancing() throws Exception {
 
         forbidden.clear();
 
-        grid(0).active(true);
+        grid(0).cluster().active(true);
 
         awaitPartitionMapExchange();
 
@@ -158,7 +163,7 @@ public void testNoRebalancing() throws Exception {
 
         for (int g = 0; g < GRID_CNT; g++) {
             // Tests that state changes are propagated to existing and new nodes.
-            assertTrue(grid(g).active());
+            assertTrue(grid(g).cluster().active());
 
             IgniteCache cache0 = grid(g).cache(CACHE_NAME);
 
@@ -191,12 +196,12 @@ public void testNoRebalancing() throws Exception {
             assertEquals(k, cache0.get(k));
         }
 
-        grid(0).active(false);
+        grid(0).cluster().active(false);
 
         GridTestUtils.waitForCondition(new GridAbsPredicate() {
             @Override public boolean apply() {
                 for (int g = 0; g < GRID_CNT; g++) {
-                    if (grid(g).active())
+                    if (grid(g).cluster().active())
                         return false;
                 }
 
@@ -216,6 +221,7 @@ public void testNoRebalancing() throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testActivationFromClient() throws Exception {
         forbidden.add(GridDhtPartitionSupplyMessage.class);
         forbidden.add(GridDhtPartitionDemandMessage.class);
@@ -234,7 +240,7 @@ public void testActivationFromClient() throws Exception {
 
         forbidden.clear();
 
-        cl.active(true);
+        cl.cluster().active(true);
 
         awaitPartitionMapExchange();
 
@@ -245,7 +251,7 @@ public void testActivationFromClient() throws Exception {
 
         for (int g = 0; g < GRID_CNT + 1; g++) {
             // Tests that state changes are propagated to existing and new nodes.
-            assertTrue(grid(g).active());
+            assertTrue(grid(g).cluster().active());
 
             IgniteCache cache0 = grid(g).cache(CACHE_NAME);
 
@@ -253,12 +259,12 @@ public void testActivationFromClient() throws Exception {
             assertEquals(k, cache0.get(k));
         }
 
-        cl.active(false);
+        cl.cluster().active(false);
 
        GridTestUtils.waitForCondition(new GridAbsPredicate() {
             @Override public boolean apply() {
                 for (int g = 0; g < GRID_CNT + 1; g++) {
-                    if (grid(g).active())
+                    if (grid(g).cluster().active())
                         return false;
                 }
 
@@ -274,6 +280,7 @@ public void testActivationFromClient() throws Exception {
      *
      * @throws Exception If fails.
      */
+    @Test
     public void testDeactivationWithPendingLock() throws Exception {
         startGrids(GRID_CNT);
 
@@ -281,16 +288,20 @@ public void testDeactivationWithPendingLock() throws Exception {
 
         lock.lock();
 
-        GridTestUtils.assertThrowsAnyCause(log, new Callable() {
-            @Override public Object call() throws Exception {
-                grid(0).active(false);
-
-                return null;
-            }
-        }, IgniteException.class,
-            "Failed to deactivate cluster (must invoke the method outside of an active transaction or lock).");
+        try {
+            //noinspection ThrowableNotThrown
+            GridTestUtils.assertThrowsAnyCause(log, new Callable() {
+                @Override public Object call() {
+                    grid(0).cluster().active(false);
 
-        lock.unlock();
+                    return null;
+                }
+            }, IgniteException.class,
+                "Failed to deactivate cluster (must invoke the method outside of an active transaction).");
+        }
+        finally {
+            lock.unlock();
+        }
     }
 
     /**
@@ -298,6 +309,7 @@ public void testDeactivationWithPendingLock() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDeactivationWithPendingTransaction() throws Exception {
         startGrids(GRID_CNT);
 
@@ -308,25 +320,25 @@ public void testDeactivationWithPendingTransaction() throws Exception {
     }
 
     /**
-     * @throws Exception if failed.
      */
     private void deactivateWithPendingTransaction(TransactionConcurrency concurrency,
-        TransactionIsolation isolation) throws Exception {
+        TransactionIsolation isolation) {
         final Ignite ignite0 = grid(0);
 
         final IgniteCache cache0 = ignite0.cache(CACHE_NAME);
 
-        try (Transaction tx = ignite0.transactions().txStart(concurrency, isolation)) {
+        try (Transaction ignore = ignite0.transactions().txStart(concurrency, isolation)) {
             cache0.put(1, "1");
 
+            //noinspection ThrowableNotThrown
             GridTestUtils.assertThrowsAnyCause(log, new Callable() {
-                @Override public Object call() throws Exception {
-                    grid(0).active(false);
+                @Override public Object call() {
+                    grid(0).cluster().active(false);
 
                     return null;
                 }
             }, IgniteException.class,
-                "Failed to deactivate cluster (must invoke the method outside of an active transaction or lock).");
+                "Failed to deactivate cluster (must invoke the method outside of an active transaction).");
         }
 
         assertNull(cache0.get(1));
@@ -338,7 +350,7 @@ private void deactivateWithPendingTransaction(TransactionConcurrency concurrency
      */
     private void checkInactive(int cnt) {
         for (int g = 0; g < cnt; g++)
-            assertFalse(grid(g).active());
+            assertFalse(grid(g).cluster().active());
     }
 
     /**
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ConcurrentCacheStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ConcurrentCacheStartTest.java
index 28a6683e19aed..8828c577dee63 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ConcurrentCacheStartTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ConcurrentCacheStartTest.java
@@ -24,14 +24,19 @@
 import org.apache.ignite.internal.IgniteEx;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class) public class ConcurrentCacheStartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void test() throws Exception { try { final IgniteEx ignite = (IgniteEx) startGrids(4); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheLockTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheLockTest.java index 8f0e20f604ece..387bf3a2236ca 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheLockTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheLockTest.java @@ -22,20 +22,19 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * */ +@RunWith(JUnit4.class) public class CrossCacheLockTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int GRID_CNT = 4; @@ -45,12 +44,17 @@ public class CrossCacheLockTest extends GridCommonAbstractTest { /** */ private static final String CACHE2 = "cache2"; + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected 
IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (igniteInstanceName.equals(getTestIgniteInstanceName(GRID_CNT - 1))) cfg.setClientMode(true); @@ -80,6 +84,7 @@ public class CrossCacheLockTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testLockUnlock() throws Exception { for (int i = 0; i < GRID_CNT; i++) { Ignite ignite = ignite(i); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheTxRandomOperationsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheTxRandomOperationsTest.java index cc9823bc63e7e..2337de8af9054 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheTxRandomOperationsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CrossCacheTxRandomOperationsTest.java @@ -35,14 +35,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; 
import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -57,10 +58,8 @@ /** * */ +@RunWith(JUnit4.class) public class CrossCacheTxRandomOperationsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE1 = "cache1"; @@ -77,8 +76,6 @@ public class CrossCacheTxRandomOperationsTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (igniteInstanceName.equals(getTestIgniteInstanceName(GRID_CNT - 1))) cfg.setClientMode(true); @@ -92,6 +89,9 @@ public class CrossCacheTxRandomOperationsTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + if (nearCacheEnabled()) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + super.beforeTestsStarted(); startGridsMultiThreaded(GRID_CNT - 1); @@ -109,6 +109,7 @@ protected boolean nearCacheEnabled() { /** * @throws Exception If failed. */ + @Test public void testTxOperations() throws Exception { txOperations(PARTITIONED, FULL_SYNC, false); } @@ -116,6 +117,7 @@ public void testTxOperations() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCacheTxOperations() throws Exception { txOperations(PARTITIONED, FULL_SYNC, true); } @@ -123,6 +125,7 @@ public void testCrossCacheTxOperations() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCacheTxOperationsPrimarySync() throws Exception { txOperations(PARTITIONED, PRIMARY_SYNC, true); } @@ -130,6 +133,7 @@ public void testCrossCacheTxOperationsPrimarySync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCrossCacheTxOperationsReplicated() throws Exception { txOperations(REPLICATED, FULL_SYNC, true); } @@ -137,6 +141,7 @@ public void testCrossCacheTxOperationsReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCacheTxOperationsReplicatedPrimarySync() throws Exception { txOperations(REPLICATED, PRIMARY_SYNC, true); } @@ -194,6 +199,13 @@ protected void createCache(CacheMode cacheMode, private void txOperations(CacheMode cacheMode, CacheWriteSynchronizationMode writeSync, boolean crossCacheTx) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + assert !nearCacheEnabled(); + + if(writeSync != CacheWriteSynchronizationMode.FULL_SYNC) + return; + } + Ignite ignite = ignite(0); try { @@ -203,12 +215,14 @@ private void txOperations(CacheMode cacheMode, txOperations(PESSIMISTIC, REPEATABLE_READ, crossCacheTx, false); txOperations(PESSIMISTIC, REPEATABLE_READ, crossCacheTx, true); - txOperations(OPTIMISTIC, REPEATABLE_READ, crossCacheTx, false); - txOperations(OPTIMISTIC, REPEATABLE_READ, crossCacheTx, true); + if(!MvccFeatureChecker.forcedMvcc()) { + txOperations(OPTIMISTIC, REPEATABLE_READ, crossCacheTx, false); + txOperations(OPTIMISTIC, REPEATABLE_READ, crossCacheTx, true); - if (writeSync == FULL_SYNC) { - txOperations(OPTIMISTIC, SERIALIZABLE, crossCacheTx, false); - txOperations(OPTIMISTIC, SERIALIZABLE, crossCacheTx, true); + if (writeSync == FULL_SYNC) { + txOperations(OPTIMISTIC, SERIALIZABLE, crossCacheTx, false); + txOperations(OPTIMISTIC, SERIALIZABLE, crossCacheTx, true); + } } } finally { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/DataStorageConfigurationValidationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/DataStorageConfigurationValidationTest.java index 9471a826fa426..8a6e2c1f39b27 100644 --- 
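The `txOperations` hunk above narrows what runs under forced MVCC: near caches are asserted off, non-`FULL_SYNC` scenarios return early, and the `OPTIMISTIC` combinations are skipped entirely (MVCC transactions in the patch run only the `PESSIMISTIC`/`REPEATABLE_READ` path). A plain-Java sketch of that filtering logic, with hypothetical enum and method names (simplified to one run per combination, whereas the real test also varies `crossCacheTx` and client flags):

```java
import java.util.ArrayList;
import java.util.List;

public class MvccTxFilterSketch {
    enum Concurrency { OPTIMISTIC, PESSIMISTIC }
    enum Isolation { READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE }
    enum WriteSync { FULL_SYNC, PRIMARY_SYNC }

    /** Mirrors the guard added in txOperations(): under forced MVCC only
     *  PESSIMISTIC/REPEATABLE_READ on FULL_SYNC caches is exercised. */
    static List<String> plannedRuns(boolean forcedMvcc, WriteSync writeSync) {
        List<String> runs = new ArrayList<>();

        if (forcedMvcc && writeSync != WriteSync.FULL_SYNC)
            return runs; // Whole scenario skipped, as in the patched test.

        runs.add(Concurrency.PESSIMISTIC + "/" + Isolation.REPEATABLE_READ);

        if (!forcedMvcc) {
            runs.add(Concurrency.OPTIMISTIC + "/" + Isolation.REPEATABLE_READ);

            if (writeSync == WriteSync.FULL_SYNC)
                runs.add(Concurrency.OPTIMISTIC + "/" + Isolation.SERIALIZABLE);
        }

        return runs;
    }

    public static void main(String[] args) {
        System.out.println(plannedRuns(true, WriteSync.PRIMARY_SYNC).size());  // 0
        System.out.println(plannedRuns(true, WriteSync.FULL_SYNC).size());    // 1
        System.out.println(plannedRuns(false, WriteSync.FULL_SYNC).size());   // 3
    }
}
```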
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/DataStorageConfigurationValidationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/DataStorageConfigurationValidationTest.java @@ -18,19 +18,22 @@ package org.apache.ignite.internal.processors.cache; import java.util.concurrent.Callable; -import junit.framework.TestCase; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; /** * Tests assertions in DataStorageConfiguration. */ -public class DataStorageConfigurationValidationTest extends TestCase { +public class DataStorageConfigurationValidationTest { /** * Tests {@link DataStorageConfiguration#walSegmentSize} property assertion. * * @throws Exception If failed. */ + @Test public void testWalSegmentSizeOverflow() throws Exception { final DataStorageConfiguration cfg = new DataStorageConfiguration(); @@ -46,6 +49,7 @@ public void testWalSegmentSizeOverflow() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetWalSegmentSizeShouldThrowExceptionWhenSizeLessThen512Kb() throws Exception { final DataStorageConfiguration cfg = new DataStorageConfiguration(); @@ -61,6 +65,7 @@ public void testSetWalSegmentSizeShouldThrowExceptionWhenSizeLessThen512Kb() thr /** * @throws Exception If failed. 
*/ + @Test public void testSetWalSegmentSizeShouldBeOkWhenSizeBetween512KbAnd2Gb() throws Exception { final DataStorageConfiguration cfg = new DataStorageConfiguration(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/EntryVersionConsistencyReadThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/EntryVersionConsistencyReadThroughTest.java index ebf253d50cd73..09228601ef454 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/EntryVersionConsistencyReadThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/EntryVersionConsistencyReadThroughTest.java @@ -38,16 +38,23 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * Tests consistency of entry's versions after invokeAll. */ +@RunWith(JUnit4.class) public class EntryVersionConsistencyReadThroughTest extends GridCommonAbstractTest { /** */ private static final int NODES_CNT = 5; @@ -102,6 +109,7 @@ private CacheConfiguration> createCacheConfiguration(CacheA /** * @throws Exception If failed. 
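The `DataStorageConfigurationValidationTest` hunk shows the core of the migration: the class stops extending JUnit 3's `junit.framework.TestCase` and each method gains `@Test`. JUnit 3 discovered tests by the `test*` naming convention on `TestCase` subclasses; JUnit 4 discovers them by annotation via reflection, so no base class is required. A self-contained sketch of annotation-based discovery, using a hand-rolled `@Test` stand-in rather than JUnit itself:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationDiscoverySketch {
    /** Stand-in for org.junit.Test: RUNTIME retention so reflection sees it. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test { }

    /** No TestCase base class: methods are found by annotation, not by name. */
    static class WalSegmentSizeChecks {
        @Test public void walSegmentSizeOverflow() { }
        @Test public void sizeBetween512KbAnd2Gb() { }
        public void helperNotATest() { }   // Skipped: no annotation.
    }

    static int runAnnotated(Class<?> cls) throws Exception {
        int run = 0;
        Object target = cls.getDeclaredConstructor().newInstance();
        for (Method m : cls.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                m.invoke(target);   // JUnit 4 invokes tests the same way.
                run++;
            }
        }
        return run;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAnnotated(WalSegmentSizeChecks.class)); // prints 2
    }
}
```

This is also why the hunk switches from `TestCase`'s inherited `assertEquals` to the static import `org.junit.Assert.assertEquals`: without the base class, assertions must be imported explicitly.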
*/ + @Test public void testInvokeAllTransactionalCache() throws Exception { check(false, createCacheConfiguration(TRANSACTIONAL)); } @@ -109,6 +117,18 @@ public void testInvokeAllTransactionalCache() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testInvokeAllMvccTxCache() throws Exception { + Assume.assumeTrue("https://issues.apache.org/jira/browse/IGNITE-8582", + MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.CACHE_STORE)); + + check(false, createCacheConfiguration(TRANSACTIONAL_SNAPSHOT)); + } + + /** + * @throws Exception If failed. + */ + @Test public void testInvokeAllAtomicCache() throws Exception { check(false, createCacheConfiguration(ATOMIC)); } @@ -116,6 +136,7 @@ public void testInvokeAllAtomicCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAtomicCache() throws Exception { check(true, createCacheConfiguration(ATOMIC)); } @@ -123,10 +144,22 @@ public void testInvokeAtomicCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeTransactionalCache() throws Exception { check(true, createCacheConfiguration(TRANSACTIONAL)); } + /** + * @throws Exception If failed. + */ + @Test + public void testInvokeMvccTxCache() throws Exception { + Assume.assumeTrue("https://issues.apache.org/jira/browse/IGNITE-8582", + MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.CACHE_STORE)); + + check(true, createCacheConfiguration(TRANSACTIONAL_SNAPSHOT)); + } + /** * Tests entry's versions consistency after invokeAll. 
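The new `testInvokeAllMvccTxCache` above uses `org.junit.Assume` to skip, rather than fail, when the cache-store feature is unsupported under MVCC, carrying the tracking JIRA URL as the message. `Assume` works by throwing an assumption-violation exception that JUnit-style runners report as "skipped" instead of "failed". A minimal self-contained sketch of that mechanism (hand-rolled stand-ins, not JUnit's own classes):

```java
public class AssumeSketch {
    /** Stand-in for org.junit.AssumptionViolatedException. */
    static class AssumptionViolated extends RuntimeException {
        AssumptionViolated(String msg) { super(msg); }
    }

    static void assumeTrue(String msg, boolean cond) {
        if (!cond)
            throw new AssumptionViolated(msg);
    }

    enum Outcome { PASSED, SKIPPED, FAILED }

    /** An annotation-aware runner treats assumption violations as skips. */
    static Outcome run(Runnable testBody) {
        try {
            testBody.run();
            return Outcome.PASSED;
        }
        catch (AssumptionViolated e) {
            return Outcome.SKIPPED;   // Environment doesn't apply: not a failure.
        }
        catch (RuntimeException e) {
            return Outcome.FAILED;
        }
    }

    public static void main(String[] args) {
        Outcome o = run(() -> {
            assumeTrue("https://issues.apache.org/jira/browse/IGNITE-8582", false);
            // check(false, createCacheConfiguration(...)) would run here.
        });
        System.out.println(o); // prints SKIPPED
    }
}
```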
* diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridAbstractCacheInterceptorRebalanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridAbstractCacheInterceptorRebalanceTest.java index 611f42b76f34e..30a86f0f7c763 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridAbstractCacheInterceptorRebalanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridAbstractCacheInterceptorRebalanceTest.java @@ -33,13 +33,13 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -49,10 +49,8 @@ /** * */ +@RunWith(JUnit4.class) public abstract class GridAbstractCacheInterceptorRebalanceTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "test_cache"; @@ -87,8 +85,6 @@ public abstract class GridAbstractCacheInterceptorRebalanceTest extends GridComm cfg.setCacheConfiguration(ccfg); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -109,6 +105,7 @@ public abstract class GridAbstractCacheInterceptorRebalanceTest 
extends GridComm /** * @throws Exception If fail. */ + @Test public void testRebalanceUpdate() throws Exception { interceptor = new RebalanceUpdateInterceptor(); @@ -122,6 +119,7 @@ public void testRebalanceUpdate() throws Exception { /** * @throws Exception If fail. */ + @Test public void testRebalanceUpdateInvoke() throws Exception { interceptor = new RebalanceUpdateInterceptor(); @@ -137,6 +135,7 @@ public void testRebalanceUpdateInvoke() throws Exception { /** * @throws Exception If fail. */ + @Test public void testRebalanceRemoveInvoke() throws Exception { interceptor = new RebalanceUpdateInterceptor(); @@ -152,6 +151,7 @@ public void testRebalanceRemoveInvoke() throws Exception { /** * @throws Exception If fail. */ + @Test public void testRebalanceRemove() throws Exception { interceptor = new RebalanceRemoveInterceptor(); @@ -165,6 +165,7 @@ public void testRebalanceRemove() throws Exception { /** * @throws Exception If fail. */ + @Test public void testPutIfAbsent() throws Exception { interceptor = new RebalanceUpdateInterceptor(); @@ -178,6 +179,7 @@ public void testPutIfAbsent() throws Exception { /** * @throws Exception If fail. 
*/ + @Test public void testGetAndPut() throws Exception { interceptor = new RebalanceUpdateInterceptor(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractByteArrayValuesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractByteArrayValuesSelfTest.java index cabe41f23d2cd..00f5dafb27bf4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractByteArrayValuesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractByteArrayValuesSelfTest.java @@ -17,44 +17,18 @@ package org.apache.ignite.internal.processors.cache; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; /** * Base class for various tests for byte array values. */ public abstract class GridCacheAbstractByteArrayValuesSelfTest extends GridCommonAbstractTest { - /** Regular cache name. */ - protected static final String CACHE_REGULAR = "cache"; - /** Key 1. */ protected static final Integer KEY_1 = 1; /** Key 2. */ protected static final Integer KEY_2 = 2; - /** Use special key for swap test, otherwise entry with readers is not evicted. */ - protected static final Integer SWAP_TEST_KEY = 3; - - /** Shared IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - - return c; - } - /** * Wrap provided values into byte array. * diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverSelfTest.java index 26f75295257ac..755e8cb5278c5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverSelfTest.java @@ -41,12 +41,16 @@ import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; /** * Failover tests for cache. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractFailoverSelfTest extends GridCacheAbstractSelfTest { /** */ private static final long TEST_TIMEOUT = 3 * 60 * 1000; @@ -129,6 +133,7 @@ public abstract class GridCacheAbstractFailoverSelfTest extends GridCacheAbstrac /** * @throws Exception If failed. */ + @Test public void testTopologyChange() throws Exception { testTopologyChange(null, null); } @@ -136,6 +141,7 @@ public void testTopologyChange() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConstantTopologyChange() throws Exception { testConstantTopologyChange(null, null); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverTxSelfTest.java index 353386b921db0..c5433f5841e9c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverTxSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFailoverTxSelfTest.java @@ -17,6 +17,10 @@ package org.apache.ignite.internal.processors.cache; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; @@ -26,10 +30,12 @@ /** * */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractFailoverTxSelfTest extends GridCacheAbstractFailoverSelfTest { /** * @throws Exception If failed. */ + @Test public void testOptimisticReadCommittedTxConstantTopologyChange() throws Exception { testConstantTopologyChange(OPTIMISTIC, READ_COMMITTED); } @@ -37,6 +43,7 @@ public void testOptimisticReadCommittedTxConstantTopologyChange() throws Excepti /** * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableReadTxConstantTopologyChange() throws Exception { testConstantTopologyChange(OPTIMISTIC, REPEATABLE_READ); } @@ -44,6 +51,7 @@ public void testOptimisticRepeatableReadTxConstantTopologyChange() throws Except /** * @throws Exception If failed. 
*/ + @Test public void testOptimisticSerializableTxConstantTopologyChange() throws Exception { testConstantTopologyChange(OPTIMISTIC, SERIALIZABLE); } @@ -51,6 +59,7 @@ public void testOptimisticSerializableTxConstantTopologyChange() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testPessimisticReadCommittedTxConstantTopologyChange() throws Exception { testConstantTopologyChange(PESSIMISTIC, READ_COMMITTED); } @@ -58,6 +67,7 @@ public void testPessimisticReadCommittedTxConstantTopologyChange() throws Except /** * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableReadTxConstantTopologyChange() throws Exception { testConstantTopologyChange(PESSIMISTIC, REPEATABLE_READ); } @@ -65,6 +75,7 @@ public void testPessimisticRepeatableReadTxConstantTopologyChange() throws Excep /** * @throws Exception If failed. */ + @Test public void testPessimisticSerializableTxConstantTopologyChange() throws Exception { testConstantTopologyChange(PESSIMISTIC, SERIALIZABLE); } @@ -72,6 +83,7 @@ public void testPessimisticSerializableTxConstantTopologyChange() throws Excepti /** * @throws Exception If failed. */ + @Test public void testOptimisticReadCommittedTxTopologyChange() throws Exception { testTopologyChange(OPTIMISTIC, READ_COMMITTED); } @@ -79,6 +91,7 @@ public void testOptimisticReadCommittedTxTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableReadTxTopologyChange() throws Exception { testTopologyChange(OPTIMISTIC, REPEATABLE_READ); } @@ -86,6 +99,7 @@ public void testOptimisticRepeatableReadTxTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSerializableTxTopologyChange() throws Exception { testTopologyChange(OPTIMISTIC, SERIALIZABLE); } @@ -93,6 +107,7 @@ public void testOptimisticSerializableTxTopologyChange() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticReadCommittedTxTopologyChange() throws Exception { testTopologyChange(PESSIMISTIC, READ_COMMITTED); } @@ -100,6 +115,7 @@ public void testPessimisticReadCommittedTxTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableReadTxTopologyChange() throws Exception { testTopologyChange(PESSIMISTIC, REPEATABLE_READ); } @@ -107,7 +123,8 @@ public void testPessimisticRepeatableReadTxTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticSerializableTxTopologyChange() throws Exception { testTopologyChange(PESSIMISTIC, SERIALIZABLE); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiMultithreadedSelfTest.java index 140efb0fd8be6..809ec59fb21d8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiMultithreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiMultithreadedSelfTest.java @@ -39,10 +39,14 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multithreaded cache API tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractFullApiMultithreadedSelfTest extends GridCacheAbstractSelfTest { /** */ private static final Random RND = new Random(); @@ -163,6 +167,7 @@ private Set rangeKeys(int fromIncl, int toExcl) { /** * @throws Exception In case of error. 
*/ + @Test public void testContainsKey() throws Exception { runTest(new CI1>() { @Override public void apply(IgniteCache cache) { @@ -175,6 +180,7 @@ public void testContainsKey() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGet() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -189,6 +195,7 @@ public void testGet() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAsyncOld() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -210,6 +217,7 @@ public void testGetAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAsync() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -225,6 +233,7 @@ public void testGetAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAll() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -243,6 +252,7 @@ public void testGetAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllAsyncOld() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -265,6 +275,7 @@ public void testGetAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllAsync() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -283,6 +294,7 @@ public void testGetAllAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemove() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -314,6 +326,7 @@ public void testRemove() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testRemoveAsyncOld() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -352,6 +365,7 @@ public void testRemoveAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAsync() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -384,6 +398,7 @@ public void testRemoveAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAll() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -404,6 +419,7 @@ public void testRemoveAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllAsyncOld() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -428,6 +444,7 @@ public void testRemoveAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllAsync() throws Exception { runTest(new CIX1>() { @Override public void applyx(IgniteCache cache) { @@ -463,4 +480,4 @@ private V removeAsync(IgniteCache cache, K key) { private boolean removeAsync(IgniteCache cache, K key, V val) { return cache.removeAsync(key, val).get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiSelfTest.java index f238786c780f0..1d1cb255c5404 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractFullApiSelfTest.java @@ -99,10 +99,14 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import 
org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Assume; +import org.junit.Ignore; +import org.junit.Test; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -174,6 +178,13 @@ public abstract class GridCacheAbstractFullApiSelfTest extends GridCacheAbstract } }; + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-9543", MvccFeatureChecker.forcedMvcc()); + + super.setUp(); + } + /** Dflt grid. */ protected transient Ignite dfltIgnite; @@ -273,20 +284,14 @@ public abstract class GridCacheAbstractFullApiSelfTest extends GridCacheAbstract * Checks that any invoke returns result. * * @throws Exception if something goes bad. - * - * TODO https://issues.apache.org/jira/browse/IGNITE-4380. */ - public void _testInvokeAllMultithreaded() throws Exception { + @Ignore("https://issues.apache.org/jira/browse/IGNITE-4380") + @Test + public void testInvokeAllMultithreaded() throws Exception { final IgniteCache cache = jcache(); final int threadCnt = 4; final int cnt = 5000; - // Concurrent invoke can not be used for ATOMIC cache in CLOCK mode. - if (atomicityMode() == ATOMIC && - cacheMode() != LOCAL && - false) - return; - final Set keys = Collections.singleton("myKey"); GridTestUtils.runMultiThreaded(new Runnable() { @@ -305,6 +310,7 @@ public void _testInvokeAllMultithreaded() throws Exception { /** * Checks that skipStore flag gets overridden inside a transaction. 
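The `GridCacheAbstractFullApiSelfTest` hunk above also replaces the old convention of disabling a test by renaming it (`_testInvokeAllMultithreaded`) with `@Ignore("JIRA-url")` plus `@Test`, so the skip stays visible in test reports along with its tracking ticket instead of silently disappearing. A self-contained sketch of how an annotation-aware runner can report such methods, again with hand-rolled annotation stand-ins rather than JUnit's:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.LinkedHashMap;
import java.util.Map;

public class IgnoreSketch {
    @Retention(RetentionPolicy.RUNTIME) @interface Test { }

    /** Stand-in for org.junit.Ignore: carries the reason, e.g. a JIRA link. */
    @Retention(RetentionPolicy.RUNTIME) @interface Ignore { String value(); }

    /** Hypothetical test class mirroring the pattern in the hunk. */
    static class FullApiChecks {
        @Ignore("https://issues.apache.org/jira/browse/IGNITE-4380")
        @Test public void invokeAllMultithreaded() { }

        @Test public void writeThroughTx() { }
    }

    /** Maps each @Test method to "RUN" or "IGNORED: reason". */
    static Map<String, String> plan(Class<?> cls) {
        Map<String, String> report = new LinkedHashMap<>();

        for (Method m : cls.getDeclaredMethods()) {
            if (!m.isAnnotationPresent(Test.class))
                continue;

            Ignore ign = m.getAnnotation(Ignore.class);

            report.put(m.getName(), ign == null ? "RUN" : "IGNORED: " + ign.value());
        }

        return report;
    }

    public static void main(String[] args) {
        plan(FullApiChecks.class).forEach((name, status) ->
            System.out.println(name + " -> " + status));
    }
}
```

With the `_test` prefix, the method count in reports simply dropped; with `@Ignore`, every run records the disabled test and the reason it is disabled.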
*/ + @Test public void testWriteThroughTx() { String key = "writeThroughKey"; @@ -331,6 +337,7 @@ public void testWriteThroughTx() { /** * Checks that skipStore flag gets overridden inside a transaction. */ + @Test public void testNoReadThroughTx() { String key = "writeThroughKey"; @@ -412,6 +419,7 @@ protected IgniteCache fullCache() { /** * @throws Exception In case of error. */ + @Test public void testSize() throws Exception { assert jcache().localSize() == 0; @@ -477,6 +485,7 @@ public void testSize() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testContainsKey() throws Exception { jcache().put("testContainsKey", 1); @@ -487,6 +496,7 @@ public void testContainsKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContainsKeyTx() throws Exception { if (!txEnabled()) return; @@ -521,6 +531,7 @@ public void testContainsKeyTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContainsKeysTx() throws Exception { if (!txEnabled()) return; @@ -562,6 +573,7 @@ public void testContainsKeysTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveInExplicitLocks() throws Exception { if (lockingEnabled()) { IgniteCache cache = jcache(); @@ -587,6 +599,7 @@ public void testRemoveInExplicitLocks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAllSkipStore() throws Exception { IgniteCache jcache = jcache(); @@ -602,6 +615,7 @@ public void testRemoveAllSkipStore() throws Exception { /** * @throws IgniteCheckedException If failed. */ + @Test public void testAtomicOps() throws IgniteCheckedException { IgniteCache c = jcache(); @@ -634,6 +648,7 @@ public void testAtomicOps() throws IgniteCheckedException { /** * @throws Exception In case of error. 
*/ + @Test public void testGet() throws Exception { IgniteCache cache = jcache(); @@ -648,6 +663,7 @@ public void testGet() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetEntry() throws Exception { IgniteCache cache = jcache(); @@ -672,6 +688,7 @@ public void testGetEntry() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAsync() throws Exception { IgniteCache cache = jcache(); @@ -692,6 +709,7 @@ public void testGetAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -721,6 +739,7 @@ public void testGetAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAll() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -801,6 +820,7 @@ public void testGetAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetEntries() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -917,6 +937,7 @@ public void testGetEntries() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllWithLastNull() throws Exception { final IgniteCache cache = jcache(); @@ -937,6 +958,7 @@ public void testGetAllWithLastNull() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllWithFirstNull() throws Exception { final IgniteCache cache = jcache(); @@ -957,6 +979,7 @@ public void testGetAllWithFirstNull() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllWithInTheMiddle() throws Exception { final IgniteCache cache = jcache(); @@ -978,6 +1001,7 @@ public void testGetAllWithInTheMiddle() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetTxNonExistingKey() throws Exception { if (txShouldBeUsed()) { try (Transaction ignored = transactions().txStart()) { @@ -989,6 +1013,7 @@ public void testGetTxNonExistingKey() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllAsync() throws Exception { final IgniteCache cache = jcache(); @@ -1016,6 +1041,7 @@ public void testGetAllAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllAsyncOld() throws Exception { final IgniteCache cache = jcache(); @@ -1047,6 +1073,7 @@ public void testGetAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPut() throws Exception { IgniteCache cache = jcache(); @@ -1083,6 +1110,7 @@ public void testPut() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutTx() throws Exception { if (txShouldBeUsed()) { IgniteCache cache = jcache(); @@ -1122,6 +1150,7 @@ public void testPutTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformOptimisticReadCommitted() throws Exception { checkTransform(OPTIMISTIC, READ_COMMITTED); } @@ -1129,6 +1158,7 @@ public void testTransformOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformOptimisticRepeatableRead() throws Exception { checkTransform(OPTIMISTIC, REPEATABLE_READ); } @@ -1136,6 +1166,7 @@ public void testTransformOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformPessimisticReadCommitted() throws Exception { checkTransform(PESSIMISTIC, READ_COMMITTED); } @@ -1143,6 +1174,7 @@ public void testTransformPessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransformPessimisticRepeatableRead() throws Exception { checkTransform(PESSIMISTIC, REPEATABLE_READ); } @@ -1150,6 +1182,7 @@ public void testTransformPessimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteTransformOptimisticReadCommitted() throws Exception { checkIgniteTransform(OPTIMISTIC, READ_COMMITTED); } @@ -1157,6 +1190,7 @@ public void testIgniteTransformOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteTransformOptimisticRepeatableRead() throws Exception { checkIgniteTransform(OPTIMISTIC, REPEATABLE_READ); } @@ -1164,6 +1198,7 @@ public void testIgniteTransformOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteTransformPessimisticReadCommitted() throws Exception { checkIgniteTransform(PESSIMISTIC, READ_COMMITTED); } @@ -1171,6 +1206,7 @@ public void testIgniteTransformPessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteTransformPessimisticRepeatableRead() throws Exception { checkIgniteTransform(PESSIMISTIC, REPEATABLE_READ); } @@ -1287,6 +1323,7 @@ private void checkTransform(TransactionConcurrency concurrency, TransactionIsola /** * @throws Exception If failed. */ + @Test public void testTransformAllOptimisticReadCommitted() throws Exception { checkTransformAll(OPTIMISTIC, READ_COMMITTED); } @@ -1294,6 +1331,7 @@ public void testTransformAllOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformAllOptimisticRepeatableRead() throws Exception { checkTransformAll(OPTIMISTIC, REPEATABLE_READ); } @@ -1301,6 +1339,7 @@ public void testTransformAllOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransformAllPessimisticReadCommitted() throws Exception { checkTransformAll(PESSIMISTIC, READ_COMMITTED); } @@ -1308,6 +1347,7 @@ public void testTransformAllPessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformAllPessimisticRepeatableRead() throws Exception { checkTransformAll(PESSIMISTIC, REPEATABLE_READ); } @@ -1398,6 +1438,7 @@ private void checkTransformAll(TransactionConcurrency concurrency, TransactionIs /** * @throws Exception If failed. */ + @Test public void testTransformAllWithNulls() throws Exception { final IgniteCache cache = jcache(); @@ -1444,6 +1485,7 @@ public void testTransformAllWithNulls() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformSequentialOptimisticNoStart() throws Exception { checkTransformSequential0(false, OPTIMISTIC); } @@ -1451,6 +1493,7 @@ public void testTransformSequentialOptimisticNoStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformSequentialPessimisticNoStart() throws Exception { checkTransformSequential0(false, PESSIMISTIC); } @@ -1458,6 +1501,7 @@ public void testTransformSequentialPessimisticNoStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformSequentialOptimisticWithStart() throws Exception { checkTransformSequential0(true, OPTIMISTIC); } @@ -1465,6 +1509,7 @@ public void testTransformSequentialOptimisticWithStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformSequentialPessimisticWithStart() throws Exception { checkTransformSequential0(true, PESSIMISTIC); } @@ -1521,6 +1566,7 @@ private void checkTransformSequential0(boolean startVal, TransactionConcurrency /** * @throws Exception If failed. 
*/ + @Test public void testTransformAfterRemoveOptimistic() throws Exception { checkTransformAfterRemove(OPTIMISTIC); } @@ -1528,6 +1574,7 @@ public void testTransformAfterRemoveOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformAfterRemovePessimistic() throws Exception { checkTransformAfterRemove(PESSIMISTIC); } @@ -1564,6 +1611,7 @@ private void checkTransformAfterRemove(TransactionConcurrency concurrency) throw /** * @throws Exception If failed. */ + @Test public void testTransformReturnValueGetOptimisticReadCommitted() throws Exception { checkTransformReturnValue(false, OPTIMISTIC, READ_COMMITTED); } @@ -1571,6 +1619,7 @@ public void testTransformReturnValueGetOptimisticReadCommitted() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testTransformReturnValueGetOptimisticRepeatableRead() throws Exception { checkTransformReturnValue(false, OPTIMISTIC, REPEATABLE_READ); } @@ -1578,6 +1627,7 @@ public void testTransformReturnValueGetOptimisticRepeatableRead() throws Excepti /** * @throws Exception If failed. */ + @Test public void testTransformReturnValueGetPessimisticReadCommitted() throws Exception { checkTransformReturnValue(false, PESSIMISTIC, READ_COMMITTED); } @@ -1585,6 +1635,7 @@ public void testTransformReturnValueGetPessimisticReadCommitted() throws Excepti /** * @throws Exception If failed. */ + @Test public void testTransformReturnValueGetPessimisticRepeatableRead() throws Exception { checkTransformReturnValue(false, PESSIMISTIC, REPEATABLE_READ); } @@ -1592,6 +1643,7 @@ public void testTransformReturnValueGetPessimisticRepeatableRead() throws Except /** * @throws Exception If failed. */ + @Test public void testTransformReturnValuePutInTx() throws Exception { checkTransformReturnValue(true, OPTIMISTIC, READ_COMMITTED); } @@ -1637,6 +1689,7 @@ private void checkTransformReturnValue(boolean put, /** * @throws Exception In case of error. 
*/ + @Test public void testGetAndPutAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -1663,6 +1716,7 @@ public void testGetAndPutAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAndPutAsync() throws Exception { IgniteCache cache = jcache(); @@ -1683,6 +1737,7 @@ public void testGetAndPutAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAsyncOld0() throws Exception { IgniteCache cacheAsync = jcache().withAsync(); @@ -1701,6 +1756,7 @@ public void testPutAsyncOld0() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAsync0() throws Exception { IgniteCache cache = jcache(); @@ -1715,6 +1771,7 @@ public void testPutAsync0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -1750,6 +1807,7 @@ public void testInvokeAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAsync() throws Exception { IgniteCache cache = jcache(); @@ -1777,6 +1835,7 @@ public void testInvokeAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvoke() throws Exception { final IgniteCache cache = jcache(); @@ -1820,6 +1879,7 @@ public void testInvoke() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutx() throws Exception { if (txShouldBeUsed()) checkPut(true); @@ -1828,6 +1888,7 @@ public void testPutx() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutxNoTx() throws Exception { checkPut(false); } @@ -1872,6 +1933,7 @@ private void checkPut(boolean inTx) throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAsyncOld() throws Exception { Transaction tx = txShouldBeUsed() ? 
transactions().txStart() : null; @@ -1917,6 +1979,7 @@ public void testPutAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAsync() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -1956,6 +2019,7 @@ public void testPutAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAll() throws Exception { Map map = F.asMap("key1", 1, "key2", 2); @@ -1982,6 +2046,7 @@ public void testPutAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testNullInTx() throws Exception { if (!txShouldBeUsed()) return; @@ -2073,6 +2138,7 @@ public void testNullInTx() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAllWithNulls() throws Exception { final IgniteCache cache = jcache(); @@ -2201,6 +2267,7 @@ public void testPutAllWithNulls() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAllAsyncOld() throws Exception { Map map = F.asMap("key1", 1, "key2", 2); @@ -2231,6 +2298,7 @@ public void testPutAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAllAsync() throws Exception { Map map = F.asMap("key1", 1, "key2", 2); @@ -2255,6 +2323,7 @@ public void testPutAllAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAndPutIfAbsent() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -2332,6 +2401,7 @@ public void testGetAndPutIfAbsent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutIfAbsentAsyncOld() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -2404,6 +2474,7 @@ public void testGetAndPutIfAbsentAsyncOld() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAndPutIfAbsentAsync() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -2465,6 +2536,7 @@ public void testGetAndPutIfAbsentAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsent() throws Exception { IgniteCache cache = jcache(); @@ -2510,6 +2582,7 @@ public void testPutIfAbsent() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutxIfAbsentAsync() throws Exception { if (txShouldBeUsed()) checkPutxIfAbsentAsync(true); @@ -2518,6 +2591,7 @@ public void testPutxIfAbsentAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutxIfAbsentAsyncNoTx() throws Exception { checkPutxIfAbsentAsync(false); } @@ -2652,6 +2726,7 @@ private void checkPutxIfAbsentAsync(boolean inTx) throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutIfAbsentAsyncConcurrentOld() throws Exception { IgniteCache cacheAsync = jcache().withAsync(); @@ -2670,6 +2745,7 @@ public void testPutIfAbsentAsyncConcurrentOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutIfAbsentAsyncConcurrent() throws Exception { IgniteCache cache = jcache(); @@ -2684,6 +2760,7 @@ public void testPutIfAbsentAsyncConcurrent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndReplace() throws Exception { IgniteCache cache = jcache(); @@ -2769,6 +2846,7 @@ public void testGetAndReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplace() throws Exception { IgniteCache cache = jcache(); @@ -2817,6 +2895,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAndReplaceAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -2897,6 +2976,7 @@ public void testGetAndReplaceAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndReplaceAsync() throws Exception { IgniteCache cache = jcache(); @@ -2959,6 +3039,7 @@ public void testGetAndReplaceAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplacexAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3021,6 +3102,7 @@ public void testReplacexAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplacexAsync() throws Exception { IgniteCache cache = jcache(); @@ -3071,6 +3153,7 @@ public void testReplacexAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAndRemove() throws Exception { IgniteCache cache = jcache(); @@ -3089,6 +3172,7 @@ public void testGetAndRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndRemoveObject() throws Exception { IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME); @@ -3119,6 +3203,7 @@ public void testGetAndRemoveObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutObject() throws Exception { IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME); @@ -3145,6 +3230,7 @@ public void testGetAndPutObject() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeletedEntriesFlag() throws Exception { if (cacheMode() != LOCAL && cacheMode() != REPLICATED) { final int cnt = 3; @@ -3165,6 +3251,7 @@ public void testDeletedEntriesFlag() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveLoad() throws Exception { int cnt = 10; @@ -3196,6 +3283,7 @@ public void testRemoveLoad() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemoveLoadAsync() throws Exception { if (isMultiJvm()) return; @@ -3230,6 +3318,7 @@ public void testRemoveLoadAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3264,6 +3353,7 @@ public void testRemoveAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAsync() throws Exception { IgniteCache cache = jcache(); @@ -3288,6 +3378,7 @@ public void testRemoveAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemove() throws Exception { IgniteCache cache = jcache(); @@ -3301,6 +3392,7 @@ public void testRemove() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemovexAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3322,6 +3414,7 @@ public void testRemovexAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemovexAsync() throws Exception { IgniteCache cache = jcache(); @@ -3337,6 +3430,7 @@ public void testRemovexAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGlobalRemoveAll() throws Exception { globalRemoveAll(false); } @@ -3344,6 +3438,7 @@ public void testGlobalRemoveAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGlobalRemoveAllAsync() throws Exception { globalRemoveAll(true); } @@ -3475,6 +3570,7 @@ protected long hugeRemoveAllEntryCount() { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllWithNulls() throws Exception { final IgniteCache cache = jcache(); @@ -3529,6 +3625,7 @@ public void testRemoveAllWithNulls() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testRemoveAllDuplicates() throws Exception { jcache().removeAll(ImmutableSet.of("key1", "key1", "key1")); } @@ -3536,6 +3633,7 @@ public void testRemoveAllDuplicates() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllDuplicatesTx() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart()) { @@ -3549,6 +3647,7 @@ public void testRemoveAllDuplicatesTx() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllEmpty() throws Exception { jcache().removeAll(); } @@ -3556,6 +3655,7 @@ public void testRemoveAllEmpty() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3581,6 +3681,7 @@ public void testRemoveAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllAsync() throws Exception { IgniteCache cache = jcache(); @@ -3602,6 +3703,7 @@ public void testRemoveAllAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testLoadAll() throws Exception { IgniteCache cache = jcache(); @@ -3639,6 +3741,7 @@ public void testLoadAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAfterClear() throws Exception { IgniteEx ignite = grid(0); @@ -3685,6 +3788,7 @@ public void testRemoveAfterClear() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testClear() throws Exception { IgniteCache cache = jcache(); @@ -3829,6 +3933,7 @@ protected void checkUnlocked(final Collection keys0) throws IgniteChecke /** * @throws Exception If failed. */ + @Test public void testGlobalClearAll() throws Exception { globalClearAll(false, false); } @@ -3836,6 +3941,7 @@ public void testGlobalClearAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGlobalClearAllAsyncOld() throws Exception { globalClearAll(true, true); } @@ -3843,6 +3949,7 @@ public void testGlobalClearAllAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearAllAsync() throws Exception { globalClearAll(true, false); } @@ -3881,6 +3988,7 @@ protected void globalClearAll(boolean async, boolean oldAsync) throws Exception * @throws Exception In case of error. */ @SuppressWarnings("BusyWait") + @Test public void testLockUnlock() throws Exception { if (lockingEnabled()) { final CountDownLatch lockCnt = new CountDownLatch(1); @@ -3940,6 +4048,7 @@ public void testLockUnlock() throws Exception { * @throws Exception In case of error. */ @SuppressWarnings("BusyWait") + @Test public void testLockUnlockAll() throws Exception { if (lockingEnabled()) { IgniteCache cache = jcache(); @@ -3995,6 +4104,7 @@ public void testLockUnlockAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPeek() throws Exception { Ignite ignite = primaryIgnite("key"); IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -4011,6 +4121,7 @@ public void testPeek() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPeekTxRemoveOptimistic() throws Exception { checkPeekTxRemove(OPTIMISTIC); } @@ -4018,6 +4129,7 @@ public void testPeekTxRemoveOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPeekTxRemovePessimistic() throws Exception { checkPeekTxRemove(PESSIMISTIC); } @@ -4047,6 +4159,7 @@ private void checkPeekTxRemove(TransactionConcurrency concurrency) throws Except /** * @throws Exception If failed. */ + @Test public void testPeekRemove() throws Exception { IgniteCache cache = primaryCache("key"); @@ -4057,9 +4170,9 @@ public void testPeekRemove() throws Exception { } /** - * TODO GG-11133. * @throws Exception In case of error. 
*/ + @Test public void testEvictExpired() throws Exception { final IgniteCache cache = jcache(); @@ -4111,10 +4224,9 @@ public void testEvictExpired() throws Exception { } /** - * TODO GG-11133. - * * @throws Exception If failed. */ + @Test public void testPeekExpired() throws Exception { final IgniteCache c = jcache(); @@ -4146,10 +4258,9 @@ public void testPeekExpired() throws Exception { } /** - * TODO GG-11133. - * * @throws Exception If failed. */ + @Test public void testPeekExpiredTx() throws Exception { if (txShouldBeUsed()) { final IgniteCache c = jcache(); @@ -4180,6 +4291,7 @@ public void testPeekExpiredTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTtlTx() throws Exception { if (txShouldBeUsed()) checkTtl(true, false); @@ -4188,6 +4300,7 @@ public void testTtlTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTtlNoTx() throws Exception { checkTtl(false, false); } @@ -4195,6 +4308,7 @@ public void testTtlNoTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTtlNoTxOldEntry() throws Exception { checkTtl(false, true); } @@ -4205,10 +4319,6 @@ public void testTtlNoTxOldEntry() throws Exception { * @throws Exception If failed. */ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { - // TODO GG-11133. 
- if (true) - return; - int ttl = 1000; final ExpiryPolicy expiry = new TouchedExpiryPolicy(new Duration(MILLISECONDS, ttl)); @@ -4273,7 +4383,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertTrue(curEntryTtl.get2() > startTime); expireTimes[i] = curEntryTtl.get2(); @@ -4302,7 +4411,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertTrue(curEntryTtl.get2() > startTime); expireTimes[i] = curEntryTtl.get2(); @@ -4331,7 +4439,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertTrue(curEntryTtl.get2() > startTime); expireTimes[i] = curEntryTtl.get2(); @@ -4364,7 +4471,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertEquals(expireTimes[i], (long)curEntryTtl.get2()); } } @@ -4373,7 +4479,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { storeStgy.removeFromStore(key); assertTrue(GridTestUtils.waitForCondition(new GridAbsPredicateX() { - @SuppressWarnings("unchecked") @Override public boolean applyx() { try { Integer val = c.get(key); @@ -4450,6 +4555,7 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { /** * @throws Exception In case of error. */ + @Test public void testLocalEvict() throws Exception { IgniteCache cache = jcache(); @@ -4492,6 +4598,7 @@ public void testLocalEvict() throws Exception { /** * JUnit. 
*/ + @Test public void testCacheProxy() { IgniteCache cache = jcache(); @@ -4499,10 +4606,9 @@ public void testCacheProxy() { } /** - * TODO GG-11133. - * * @throws Exception If failed. */ + @Test public void testCompactExpired() throws Exception { final IgniteCache cache = jcache(); @@ -4536,6 +4642,7 @@ public void testCompactExpired() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimisticTxMissingKey() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(OPTIMISTIC, READ_COMMITTED)) { @@ -4552,6 +4659,7 @@ public void testOptimisticTxMissingKey() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimisticTxMissingKeyNoCommit() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(OPTIMISTIC, READ_COMMITTED)) { @@ -4566,6 +4674,7 @@ public void testOptimisticTxMissingKeyNoCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticTxReadCommittedInTx() throws Exception { checkRemovexInTx(OPTIMISTIC, READ_COMMITTED); } @@ -4573,6 +4682,7 @@ public void testOptimisticTxReadCommittedInTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticTxRepeatableReadInTx() throws Exception { checkRemovexInTx(OPTIMISTIC, REPEATABLE_READ); } @@ -4580,6 +4690,7 @@ public void testOptimisticTxRepeatableReadInTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTxReadCommittedInTx() throws Exception { checkRemovexInTx(PESSIMISTIC, READ_COMMITTED); } @@ -4587,6 +4698,7 @@ public void testPessimisticTxReadCommittedInTx() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticTxRepeatableReadInTx() throws Exception { checkRemovexInTx(PESSIMISTIC, REPEATABLE_READ); } @@ -4635,6 +4747,7 @@ private void checkRemovexInTx(TransactionConcurrency concurrency, TransactionIso * * @throws Exception If failed. */ + @Test public void testPessimisticTxMissingKey() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(PESSIMISTIC, READ_COMMITTED)) { @@ -4651,6 +4764,7 @@ public void testPessimisticTxMissingKey() throws Exception { * * @throws Exception If failed. */ + @Test public void testPessimisticTxMissingKeyNoCommit() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(PESSIMISTIC, READ_COMMITTED)) { @@ -4665,6 +4779,7 @@ public void testPessimisticTxMissingKeyNoCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTxRepeatableRead() throws Exception { if (txShouldBeUsed()) { try (Transaction ignored = transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { @@ -4678,6 +4793,7 @@ public void testPessimisticTxRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTxRepeatableReadOnUpdate() throws Exception { if (txShouldBeUsed()) { try (Transaction ignored = transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { @@ -4691,6 +4807,7 @@ public void testPessimisticTxRepeatableReadOnUpdate() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testToMap() throws Exception { IgniteCache cache = jcache(); @@ -4819,6 +4936,7 @@ protected IgnitePair entryTtl(IgniteCache cache, String key) { /** * @throws Exception If failed. */ + @Test public void testIterator() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -4846,6 +4964,7 @@ public void testIterator() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIgniteCacheIterator() throws Exception { IgniteCache cache = jcache(0); @@ -4892,6 +5011,7 @@ public void testIgniteCacheIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIteratorLeakOnCancelCursor() throws Exception { IgniteCache cache = jcache(0); @@ -5046,7 +5166,7 @@ private void waitForIteratorsCleared(IgniteCache cache, int sec // If AssertionFailedError is in the chain, assume we need to wait and retry. if (!X.hasCause(t, AssertionFailedError.class)) throw t; - + if (i == 9) { for (int j = 0; j < gridCount(); j++) executeOnLocalOrRemoteJvm(j, new PrintIteratorStateTask()); @@ -5086,6 +5206,7 @@ private void checkIteratorEmpty(IgniteCache cache) throws Excep /** * @throws Exception If failed. */ + @Test public void testLocalClearKey() throws Exception { addKeys(); @@ -5137,6 +5258,7 @@ protected void checkLocalRemovedKey(String keyToRmv) { /** * @throws Exception If failed. */ + @Test public void testLocalClearKeys() throws Exception { Map> keys = addKeys(); @@ -5208,6 +5330,7 @@ protected Map> addKeys() { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKey() throws Exception { testGlobalClearKey(false, Arrays.asList("key25"), false); } @@ -5215,6 +5338,7 @@ public void testGlobalClearKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeyAsyncOld() throws Exception { testGlobalClearKey(true, Arrays.asList("key25"), true); } @@ -5222,6 +5346,7 @@ public void testGlobalClearKeyAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeyAsync() throws Exception { testGlobalClearKey(true, Arrays.asList("key25"), false); } @@ -5229,6 +5354,7 @@ public void testGlobalClearKeyAsync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGlobalClearKeys() throws Exception { testGlobalClearKey(false, Arrays.asList("key25", "key100", "key150"), false); } @@ -5236,6 +5362,7 @@ public void testGlobalClearKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeysAsyncOld() throws Exception { testGlobalClearKey(true, Arrays.asList("key25", "key100", "key150"), true); } @@ -5243,6 +5370,7 @@ public void testGlobalClearKeysAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeysAsync() throws Exception { testGlobalClearKey(true, Arrays.asList("key25", "key100", "key150"), false); } @@ -5309,6 +5437,7 @@ protected void testGlobalClearKey(boolean async, Collection keysToRmv, b /** * @throws Exception If failed. */ + @Test public void testWithSkipStore() throws Exception { IgniteCache cache = jcache(0); @@ -5518,6 +5647,7 @@ public void testWithSkipStore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWithSkipStoreRemoveAll() throws Exception { if (atomicityMode() == TRANSACTIONAL || (atomicityMode() == ATOMIC && nearEnabled())) // TODO IGNITE-373. return; @@ -5559,6 +5689,7 @@ public void testWithSkipStoreRemoveAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWithSkipStoreTx() throws Exception { if (txShouldBeUsed()) { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -5861,6 +5992,7 @@ protected CacheStartMode cacheStartType() { /** * @throws Exception If failed. */ + @Test public void testGetOutTx() throws Exception { checkGetOutTx(false); } @@ -5868,6 +6000,7 @@ public void testGetOutTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetOutTxAsync() throws Exception { checkGetOutTx(true); } @@ -5945,6 +6078,7 @@ private void checkGetOutTx(boolean async) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransformException() throws Exception { final IgniteCache cache = jcache(); @@ -5966,6 +6100,7 @@ public void testTransformException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLockInsideTransaction() throws Exception { if (txEnabled()) { GridTestUtils.assertThrows( @@ -6007,6 +6142,7 @@ public void testLockInsideTransaction() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformResourceInjection() throws Exception { ClusterGroup servers = grid(0).cluster().forServers(); @@ -6390,7 +6526,7 @@ public CheckEntriesTask(Collection keys) { size++; - e.touch(null); + e.touch(); } } @@ -6492,11 +6628,16 @@ public EntryTtlTask(String key, boolean useDhtForNearCache) { if (useDhtForNearCache && internalCache.context().isNear()) internalCache = internalCache.context().near().dht(); - GridCacheEntryEx entry = internalCache.peekEx(key); + GridCacheEntryEx entry = internalCache.entryEx(key); + + entry.unswap(); + + IgnitePair pair = new IgnitePair<>(entry.ttl(), entry.expireTime()); + + if (!entry.isNear()) + entry.context().cache().removeEntry(entry); - return entry != null ? 
- new IgnitePair<>(entry.ttl(), entry.expireTime()) : - new IgnitePair(null, null); + return pair; } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractIteratorsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractIteratorsSelfTest.java index 85e3d6544298c..cfee6aafd8736 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractIteratorsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractIteratorsSelfTest.java @@ -23,10 +23,16 @@ import org.apache.ignite.internal.util.typedef.CA; import org.apache.ignite.internal.util.typedef.CAX; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for cache iterators. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractIteratorsSelfTest extends GridCacheAbstractSelfTest { /** Key prefix. */ protected static final String KEY_PREFIX = "testKey"; @@ -47,6 +53,7 @@ public abstract class GridCacheAbstractIteratorsSelfTest extends GridCacheAbstra /** * @throws Exception If failed. */ + @Test public void testCacheIterator() throws Exception { int cnt = 0; @@ -68,6 +75,7 @@ public void testCacheIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheIteratorMultithreaded() throws Exception { for (int i = 0; i < gridCount(); i++) jcache(i).removeAll(); @@ -95,7 +103,10 @@ public void testCacheIteratorMultithreaded() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testEntrySetIterator() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10082", MvccFeatureChecker.forcedMvcc()); + assert jcache().localSize(CachePeekMode.ALL) == entryCount(); int cnt = 0; @@ -118,6 +129,7 @@ public void testEntrySetIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntrySetIteratorMultithreaded() throws Exception { for (int i = 0; i < gridCount(); i++) jcache(i).removeAll(); @@ -142,4 +154,4 @@ public void testEntrySetIteratorMultithreaded() throws Exception { }, 3, "iterator-thread"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractLocalStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractLocalStoreSelfTest.java index bc1996b8b596c..5f52d3e224722 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractLocalStoreSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractLocalStoreSelfTest.java @@ -59,13 +59,13 @@ import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_LOCAL_STORE_KEEPS_PRIMARY_ONLY; import static 
org.apache.ignite.cache.CacheMode.REPLICATED; @@ -75,10 +75,8 @@ /** * */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractLocalStoreSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ public static final TestLocalStore LOCAL_STORE_1 = new TestLocalStore<>(); @@ -134,12 +132,6 @@ public GridCacheAbstractLocalStoreSelfTest() { cfg.setCacheConfiguration(cacheCfg, cacheBackup1Cfg, cacheBackup2Cfg); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - return cfg; } @@ -204,6 +196,7 @@ protected NearCacheConfiguration nearConfiguration() { /** * @throws Exception If failed. */ + @Test public void testEvict() throws Exception { Ignite ignite1 = startGrid(1); @@ -232,6 +225,7 @@ public void testEvict() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimaryNode() throws Exception { Ignite ignite1 = startGrid(1); @@ -280,6 +274,7 @@ public void testPrimaryNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBackupRestorePrimary() throws Exception { testBackupRestore(); } @@ -287,6 +282,7 @@ public void testBackupRestorePrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBackupRestore() throws Exception { final IgniteEx ignite1 = startGrid(1); Ignite ignite2 = startGrid(2); @@ -406,6 +402,7 @@ public void testBackupRestore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalStoreCorrespondsAffinityWithBackups() throws Exception { testLocalStoreCorrespondsAffinity(BACKUP_CACHE_2); } @@ -413,6 +410,7 @@ public void testLocalStoreCorrespondsAffinityWithBackups() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLocalStoreCorrespondsAffinityWithBackup() throws Exception { testLocalStoreCorrespondsAffinity(BACKUP_CACHE_1); } @@ -420,6 +418,7 @@ public void testLocalStoreCorrespondsAffinityWithBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalStoreCorrespondsAffinityNoBackups() throws Exception { testLocalStoreCorrespondsAffinity(DEFAULT_CACHE_NAME); } @@ -506,6 +505,7 @@ private void testLocalStoreCorrespondsAffinity(String name) throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalStoreWithNearKeysPrimary() throws Exception { try { System.setProperty(IGNITE_LOCAL_STORE_KEEPS_PRIMARY_ONLY, "true"); @@ -520,6 +520,7 @@ public void testLocalStoreWithNearKeysPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalStoreWithNearKeysPrimaryAndBackups() throws Exception { testLocalStoreWithNearKeys(); } @@ -527,6 +528,7 @@ public void testLocalStoreWithNearKeysPrimaryAndBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalStoreWithNearKeys() throws Exception { if (getCacheMode() == REPLICATED) return; @@ -619,6 +621,7 @@ public void testLocalStoreWithNearKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBackupNode() throws Exception { Ignite ignite1 = startGrid(1); @@ -684,6 +687,7 @@ public void testBackupNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSwap() throws Exception { Ignite ignite1 = startGrid(1); @@ -883,4 +887,4 @@ else if (igniteInstanceName.endsWith("5")) return LOCAL_STORE_6; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractMetricsSelfTest.java index eb4d2d5a7ef04..873987a079f24 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractMetricsSelfTest.java @@ -45,12 +45,16 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; /** * Cache metrics test. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractMetricsSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int KEY_CNT = 500; @@ -169,6 +173,7 @@ protected int expectedMissesPerPut(boolean isPrimary) { /** * @throws Exception If failed. */ + @Test public void testGetMetricsDisable() throws Exception { // Disable statistics. for (int i = 0; i < gridCount(); i++) { @@ -262,6 +267,7 @@ public Object process(MutableEntry entry, /** * @throws Exception If failed. */ + @Test public void testGetMetricsSnapshot() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -271,6 +277,7 @@ public void testGetMetricsSnapshot() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAndRemoveAsyncAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -288,6 +295,7 @@ public void testGetAndRemoveAsyncAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAsyncValAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -315,6 +323,7 @@ public void testRemoveAsyncValAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -332,6 +341,7 @@ public void testRemoveAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAllAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -356,6 +366,7 @@ public void testRemoveAllAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAllAsyncAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -385,6 +396,7 @@ public void testRemoveAllAsyncAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -406,6 +418,7 @@ public void testGetAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAllAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -430,6 +443,7 @@ public void testGetAllAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAllAsyncAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -458,6 +472,7 @@ public void testGetAllAsyncAvgTime() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutAvgTime() throws Exception { final IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -475,6 +490,7 @@ public void testPutAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAsyncAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -491,6 +507,7 @@ public void testPutAsyncAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutAsyncAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -518,6 +535,7 @@ public void testGetAndPutAsyncAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsentAsyncAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -543,6 +561,7 @@ public void testPutIfAbsentAsyncAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutIfAbsentAsyncAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -568,6 +587,7 @@ public void testGetAndPutIfAbsentAsyncAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllAvgTime() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -591,6 +611,7 @@ public void testPutAllAvgTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutsReads() throws Exception { IgniteCache cache0 = grid(0).cache(DEFAULT_CACHE_NAME); @@ -649,6 +670,7 @@ public void testPutsReads() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMissHitPercentage() throws Exception { IgniteCache cache0 = grid(0).cache(DEFAULT_CACHE_NAME); @@ -683,6 +705,7 @@ public void testMissHitPercentage() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMisses() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -725,6 +748,7 @@ public void testMisses() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMissesOnEmptyCache() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -763,6 +787,7 @@ public void testMissesOnEmptyCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoves() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -779,6 +804,7 @@ public void testRemoves() throws Exception { * * @throws Exception If failed. */ + @Test public void testCacheSizeWorksAsSize() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -816,6 +842,7 @@ public void testCacheSizeWorksAsSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxEvictions() throws Exception { if (grid(0).cache(DEFAULT_CACHE_NAME).getConfiguration(CacheConfiguration.class).getAtomicityMode() != CacheAtomicityMode.ATOMIC) checkTtl(true); @@ -824,6 +851,7 @@ public void testTxEvictions() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNonTxEvictions() throws Exception { if (grid(0).cache(DEFAULT_CACHE_NAME).getConfiguration(CacheConfiguration.class).getAtomicityMode() == CacheAtomicityMode.ATOMIC) checkTtl(false); @@ -1038,6 +1066,7 @@ private void checkTtl(boolean inTx) throws Exception { /** * @throws IgniteCheckedException If failed. */ + @Test public void testInvocationRemovesOnEmptyCache() throws IgniteCheckedException { testInvocationRemoves(true); } @@ -1045,6 +1074,7 @@ public void testInvocationRemovesOnEmptyCache() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testInvocationRemoves() throws IgniteCheckedException { testInvocationRemoves(false); } @@ -1066,34 +1096,35 @@ private void testInvocationRemoves(boolean emptyCache) throws IgniteCheckedExcep assertEquals(1, cache0.localMetrics().getEntryProcessorRemovals()); - if (emptyCache) { - assertEquals(1, cache0.localMetrics().getEntryProcessorMisses()); - - assertEquals(100f, cache0.localMetrics().getEntryProcessorMissPercentage()); - assertEquals(0f, cache0.localMetrics().getEntryProcessorHitPercentage()); - } - else { - assertEquals(1, cache0.localMetrics().getEntryProcessorHits()); - - assertEquals(0f, cache0.localMetrics().getEntryProcessorMissPercentage()); - assertEquals(100f, cache0.localMetrics().getEntryProcessorHitPercentage()); - } - - for (int i = 1; i < gridCount(); i++) { - Ignite ignite = ignite(i); - - IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); - - if (affinity(cache).isPrimaryOrBackup(ignite.cluster().localNode(), key)) - assertEquals(1, cache.localMetrics().getEntryProcessorRemovals()); - } - - assertEquals(1, cache0.localMetrics().getEntryProcessorInvocations()); +// if (emptyCache) { +// assertEquals(1, cache0.localMetrics().getEntryProcessorMisses()); +// +// assertEquals(100f, cache0.localMetrics().getEntryProcessorMissPercentage()); +// assertEquals(0f, cache0.localMetrics().getEntryProcessorHitPercentage()); +// } +// else { +// assertEquals(1, cache0.localMetrics().getEntryProcessorHits()); +// +// assertEquals(0f, cache0.localMetrics().getEntryProcessorMissPercentage()); +// assertEquals(100f, cache0.localMetrics().getEntryProcessorHitPercentage()); +// } +// +// for (int i = 1; i < gridCount(); i++) { +// Ignite ignite = ignite(i); +// +// IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); +// +// if (affinity(cache).isPrimaryOrBackup(ignite.cluster().localNode(), key)) +// assertEquals(1, cache.localMetrics().getEntryProcessorRemovals()); +// } +// +// assertEquals(1, 
cache0.localMetrics().getEntryProcessorInvocations()); } /** * @throws IgniteCheckedException If failed. */ + @Test public void testUpdateInvocationsOnEmptyCache() throws IgniteCheckedException { testUpdateInvocations(true); } @@ -1101,6 +1132,7 @@ public void testUpdateInvocationsOnEmptyCache() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testUpdateInvocations() throws IgniteCheckedException { testUpdateInvocations(false); } @@ -1150,6 +1182,7 @@ private void testUpdateInvocations(final boolean emptyCache) throws IgniteChecke /** * @throws IgniteCheckedException If failed. */ + @Test public void testReadOnlyInvocationsOnEmptyCache() throws IgniteCheckedException { testReadOnlyInvocations(true); } @@ -1157,6 +1190,7 @@ public void testReadOnlyInvocationsOnEmptyCache() throws IgniteCheckedException /** * @throws IgniteCheckedException If failed. */ + @Test public void testReadOnlyInvocations() throws IgniteCheckedException { testReadOnlyInvocations(false); } @@ -1201,6 +1235,7 @@ private void testReadOnlyInvocations(final boolean emptyCache) throws IgniteChec /** * @throws IgniteCheckedException If failed. */ + @Test public void testInvokeAvgTime() throws IgniteCheckedException { IgniteCache cache0 = grid(0).cache(DEFAULT_CACHE_NAME); @@ -1265,6 +1300,7 @@ public Object process(MutableEntry entry, /** * @throws IgniteCheckedException If failed. */ + @Test public void testInvokeAsyncAvgTime() throws IgniteCheckedException { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -1282,6 +1318,7 @@ public void testInvokeAsyncAvgTime() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testInvokeAllAvgTime() throws IgniteCheckedException { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -1299,6 +1336,7 @@ public void testInvokeAllAvgTime() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testInvokeAllAsyncAvgTime() throws IgniteCheckedException { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -1317,6 +1355,7 @@ public void testInvokeAllAsyncAvgTime() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testInvokeAllMultipleKeysAvgTime() throws IgniteCheckedException { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -1336,6 +1375,7 @@ public void testInvokeAllMultipleKeysAvgTime() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testInvokeAllAsyncMultipleKeysAvgTime() throws IgniteCheckedException { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractRemoveFailureTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractRemoveFailureTest.java index 036153dcc0b91..324f69ad75eb5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractRemoveFailureTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractRemoveFailureTest.java @@ -47,14 +47,15 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import java.util.concurrent.ConcurrentHashMap; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -67,10 +68,8 @@ /** * Tests that removes are not lost when topology changes. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractRemoveFailureTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int GRID_CNT = 3; @@ -96,7 +95,7 @@ public abstract class GridCacheAbstractRemoveFailureTest extends GridCommonAbstr @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER).setForceServerMode(true); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); if (testClientNode() && getTestIgniteInstanceName(0).equals(igniteInstanceName)) cfg.setClientMode(true); @@ -151,6 +150,7 @@ protected boolean testClientNode() { /** * @throws Exception If failed. */ + @Test public void testPutAndRemove() throws Exception { putAndRemove(duration(), null, null); } @@ -158,6 +158,7 @@ public void testPutAndRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAndRemovePessimisticTx() throws Exception { if (atomicityMode() != CacheAtomicityMode.TRANSACTIONAL) return; @@ -168,6 +169,7 @@ public void testPutAndRemovePessimisticTx() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutAndRemoveOptimisticSerializableTx() throws Exception { if (atomicityMode() != CacheAtomicityMode.TRANSACTIONAL) return; @@ -485,4 +487,4 @@ protected static int random(int min, int max) { return ThreadLocalRandom.current().nextInt(min, max); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractSelfTest.java index 89f0ca79b2a69..9504b812ee83d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractSelfTest.java @@ -53,6 +53,7 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; @@ -92,7 +93,8 @@ public abstract class GridCacheAbstractSelfTest extends GridCommonAbstractTest { assert cnt >= 1 : "At least one grid must be started"; - initStoreStrategy(); + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.CACHE_STORE)) + initStoreStrategy(); startGrids(cnt); @@ -256,7 +258,7 @@ protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throw cfg.setIndexedTypes(idxTypes); if (cacheMode() == PARTITIONED) - cfg.setBackups(1); + cfg.setBackups(backups()); return cfg; } @@ -361,6 +363,13 @@ protected IgniteTransactions transactions() { return grid(0).transactions(); } + /** + * @return Backups. 
+ */ + protected int backups() { + return 1; + } + /** * @param idx Index of grid. * @return Default cache. diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractTxReadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractTxReadTest.java index c6d99ec19659a..5c57384408971 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractTxReadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractTxReadTest.java @@ -24,12 +24,16 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * Tests value read inside transaction. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractTxReadTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -53,6 +57,7 @@ public abstract class GridCacheAbstractTxReadTest extends GridCacheAbstractSelfT /** * @throws IgniteCheckedException If failed */ + @Test public void testTxReadOptimisticReadCommitted() throws IgniteCheckedException { checkTransactionalRead(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.READ_COMMITTED); } @@ -60,6 +65,7 @@ public void testTxReadOptimisticReadCommitted() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed */ + @Test public void testTxReadOptimisticRepeatableRead() throws IgniteCheckedException { checkTransactionalRead(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ); } @@ -67,6 +73,7 @@ public void testTxReadOptimisticRepeatableRead() throws 
IgniteCheckedException { /** * @throws IgniteCheckedException If failed */ + @Test public void testTxReadOptimisticSerializable() throws IgniteCheckedException { checkTransactionalRead(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE); } @@ -74,6 +81,7 @@ public void testTxReadOptimisticSerializable() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed */ + @Test public void testTxReadPessimisticReadCommitted() throws IgniteCheckedException { checkTransactionalRead(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.READ_COMMITTED); } @@ -81,6 +89,7 @@ public void testTxReadPessimisticReadCommitted() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed */ + @Test public void testTxReadPessimisticRepeatableRead() throws IgniteCheckedException { checkTransactionalRead(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ); } @@ -88,6 +97,7 @@ public void testTxReadPessimisticRepeatableRead() throws IgniteCheckedException /** * @throws IgniteCheckedException If failed */ + @Test public void testTxReadPessimisticSerializable() throws IgniteCheckedException { checkTransactionalRead(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE); } @@ -135,4 +145,4 @@ protected void checkTransactionalRead(TransactionConcurrency concurrency, Transa @Override protected int gridCount() { return 1; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractUsersAffinityMapperSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractUsersAffinityMapperSelfTest.java index 37355f77ef688..e0b20720aed28 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractUsersAffinityMapperSelfTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAbstractUsersAffinityMapperSelfTest.java @@ -32,23 +32,21 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.GridArgumentCheck; import org.apache.ignite.lang.IgniteRunnable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; /** * Test affinity mapper. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractUsersAffinityMapperSelfTest extends GridCommonAbstractTest { /** */ private static final int KEY_CNT = 1000; - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ public static final AffinityKeyMapper AFFINITY_MAPPER = new UsersAffinityKeyMapper(); @@ -79,12 +77,6 @@ protected GridCacheAbstractUsersAffinityMapperSelfTest() { cfg.setCacheConfiguration(cacheCfg); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - return cfg; } @@ -106,6 +98,7 @@ protected GridCacheAbstractUsersAffinityMapperSelfTest() { /** * @throws Exception If failed. */ + @Test public void testAffinityMapper() throws Exception { IgniteCache cache = startGrid(0).cache(DEFAULT_CACHE_NAME); @@ -212,4 +205,4 @@ private static class NoopClosure implements IgniteRunnable { // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityApiSelfTest.java index c1465250d7397..f5686896c5414 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityApiSelfTest.java @@ -35,6 +35,9 @@ import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.affinity.GridAffinityFunctionContextImpl; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -42,6 +45,7 @@ /** * Affinity API tests. */ +@RunWith(JUnit4.class) public class GridCacheAffinityApiSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int GRID_CNT = 4; @@ -85,6 +89,7 @@ private AffinityKeyMapper affinityMapper() { * * @throws Exception If failed. */ + @Test public void testPartitions() throws Exception { assertEquals(affinity().partitions(), grid(0).affinity(DEFAULT_CACHE_NAME).partitions()); } @@ -94,6 +99,7 @@ public void testPartitions() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartition() throws Exception { String key = "key"; @@ -105,6 +111,7 @@ public void testPartition() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testPrimaryPartitionsOneNode() throws Exception { AffinityFunctionContext ctx = new GridAffinityFunctionContextImpl(new ArrayList<>(grid(0).cluster().nodes()), null, null, @@ -134,6 +141,7 @@ public void testPrimaryPartitionsOneNode() throws Exception { * * @throws Exception If failed. */ + @Test public void testPrimaryPartitions() throws Exception { // Pick 2 nodes and create a projection over them. ClusterNode n0 = grid(0).localNode(); @@ -171,6 +179,7 @@ public void testPrimaryPartitions() throws Exception { * * @throws Exception If failed. */ + @Test public void testBackupPartitions() throws Exception { // Pick 2 nodes and create a projection over them. ClusterNode n0 = grid(0).localNode(); @@ -206,6 +215,7 @@ public void testBackupPartitions() throws Exception { * * @throws Exception If failed. */ + @Test public void testAllPartitions() throws Exception { // Pick 2 nodes and create a projection over them. ClusterNode n0 = grid(0).localNode(); @@ -234,6 +244,7 @@ public void testAllPartitions() throws Exception { * * @throws Exception If failed. */ + @Test public void testMapPartitionToNode() throws Exception { int part = RND.nextInt(affinity().partitions()); @@ -253,6 +264,7 @@ public void testMapPartitionToNode() throws Exception { * * @throws Exception If failed. */ + @Test public void testMapPartitionsToNode() throws Exception { Map map = grid(0).affinity(DEFAULT_CACHE_NAME).mapPartitionsToNodes(F.asList(0, 1, 5, 19, 12)); @@ -273,6 +285,7 @@ public void testMapPartitionsToNode() throws Exception { * * @throws Exception If failed. */ + @Test public void testMapPartitionsToNodeArray() throws Exception { Map map = grid(0).affinity(DEFAULT_CACHE_NAME).mapPartitionsToNodes(F.asList(0, 1, 5, 19, 12)); @@ -293,6 +306,7 @@ public void testMapPartitionsToNodeArray() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMapPartitionsToNodeCollection() throws Exception { Collection parts = new LinkedList<>(); @@ -337,6 +351,7 @@ private Iterable nodes(List> assignment, Affinity /** * @throws Exception If failed. */ + @Test public void testPartitionWithAffinityMapper() throws Exception { AffinityKey key = new AffinityKey<>(1, 2); @@ -345,4 +360,4 @@ public void testPartitionWithAffinityMapper() throws Exception { for (int i = 0; i < gridCount(); i++) assertEquals(expPart, grid(i).affinity(DEFAULT_CACHE_NAME).partition(key)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityMapperSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityMapperSelfTest.java index 09fc296344331..272e3e2a38a78 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityMapperSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityMapperSelfTest.java @@ -24,10 +24,14 @@ import org.apache.ignite.cache.affinity.AffinityKeyMapper; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test affinity mapper. 
*/ +@RunWith(JUnit4.class) public class GridCacheAffinityMapperSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -37,6 +41,7 @@ public class GridCacheAffinityMapperSelfTest extends GridCommonAbstractTest { /** * */ + @Test public void testMethodAffinityMapper() { AffinityKeyMapper mapper = new GridCacheDefaultAffinityKeyMapper(); @@ -63,6 +68,7 @@ public void testMethodAffinityMapper() { /** * */ + @Test public void testFieldAffinityMapper() { AffinityKeyMapper mapper = new GridCacheDefaultAffinityKeyMapper(); @@ -89,6 +95,7 @@ public void testFieldAffinityMapper() { /** * */ + @Test public void testFieldAffinityMapperWithWrongClass() { AffinityKeyMapper mapper = new GridCacheDefaultAffinityKeyMapper(); @@ -143,4 +150,4 @@ public Object affinityKey() { return affKey; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityRoutingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityRoutingSelfTest.java index ddb38e0520c8b..a9f9e11c117a1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityRoutingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAffinityRoutingSelfTest.java @@ -32,12 +32,12 @@ import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.JobContextResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; 
+import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -45,6 +45,7 @@ /** * Affinity routing tests. */ +@RunWith(JUnit4.class) public class GridCacheAffinityRoutingSelfTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 4; @@ -58,9 +59,6 @@ public class GridCacheAffinityRoutingSelfTest extends GridCommonAbstractTest { /** */ private static final int MAX_FAILOVER_ATTEMPTS = 5; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * Constructs test. */ @@ -72,12 +70,6 @@ public GridCacheAffinityRoutingSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - AlwaysFailoverSpi failSpi = new AlwaysFailoverSpi(); failSpi.setMaximumFailoverAttempts(MAX_FAILOVER_ATTEMPTS); cfg.setFailoverSpi(failSpi); @@ -129,6 +121,7 @@ public GridCacheAffinityRoutingSelfTest() { * * @throws Exception If failed. */ + @Test public void testAffinityRun() throws Exception { for (int i = 0; i < KEY_CNT; i++) grid(0).compute().affinityRun(NON_DFLT_CACHE_NAME, i, new CheckRunnable(i, i)); @@ -137,6 +130,7 @@ public void testAffinityRun() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityCallRestartFails() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -150,6 +144,7 @@ public void testAffinityCallRestartFails() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAffinityCallRestart() throws Exception { assertEquals(MAX_FAILOVER_ATTEMPTS, grid(0).compute().affinityCall(NON_DFLT_CACHE_NAME, "key", @@ -159,6 +154,7 @@ public void testAffinityCallRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityRunRestartFails() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -172,6 +168,7 @@ public void testAffinityRunRestartFails() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityRunRestart() throws Exception { grid(0).compute().affinityRun(NON_DFLT_CACHE_NAME, "key", new FailedRunnable("key", MAX_FAILOVER_ATTEMPTS)); } @@ -181,6 +178,7 @@ public void testAffinityRunRestart() throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinityRunComplexKey() throws Exception { for (int i = 0; i < KEY_CNT; i++) { AffinityTestKey key = new AffinityTestKey(i); @@ -195,6 +193,7 @@ public void testAffinityRunComplexKey() throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinityCall() throws Exception { for (int i = 0; i < KEY_CNT; i++) grid(0).compute().affinityCall(NON_DFLT_CACHE_NAME, i, new CheckCallable(i, i)); @@ -205,6 +204,7 @@ public void testAffinityCall() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testAffinityCallComplexKey() throws Exception { for (int i = 0; i < KEY_CNT; i++) { final AffinityTestKey key = new AffinityTestKey(i); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAsyncOperationsLimitSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAsyncOperationsLimitSelfTest.java index cebab2f45a99e..fad6806dfe5c2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAsyncOperationsLimitSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAsyncOperationsLimitSelfTest.java @@ -22,10 +22,14 @@ import org.apache.ignite.internal.util.GridAtomicInteger; import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.lang.IgniteFuture; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks that number of concurrent asynchronous operations is limited when configuration parameter is set. */ +@RunWith(JUnit4.class) public class GridCacheAsyncOperationsLimitSelfTest extends GridCacheAbstractSelfTest { /** */ public static final int MAX_CONCURRENT_ASYNC_OPS = 50; @@ -47,6 +51,7 @@ public class GridCacheAsyncOperationsLimitSelfTest extends GridCacheAbstractSelf /** * @throws Exception If failed. 
*/ + @Test public void testAsyncOps() throws Exception { final AtomicInteger cnt = new AtomicInteger(); final GridAtomicInteger max = new GridAtomicInteger(); @@ -70,4 +75,4 @@ public void testAsyncOps() throws Exception { assertTrue("Maximum number of permits exceeded: " + max.get(), max.get() <= 51); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicEntryProcessorDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicEntryProcessorDeploymentSelfTest.java index 4f513b3845668..729fcc7f0b86c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicEntryProcessorDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicEntryProcessorDeploymentSelfTest.java @@ -27,11 +27,12 @@ import org.apache.ignite.configuration.DeploymentMode; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -41,10 +42,8 @@ /** * Cache EntryProcessor + Deployment. */ +@RunWith(JUnit4.class) public class GridCacheAtomicEntryProcessorDeploymentSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Test value. */ protected static String TEST_VALUE = "org.apache.ignite.tests.p2p.CacheDeploymentTestValue"; @@ -54,6 +53,14 @@ public class GridCacheAtomicEntryProcessorDeploymentSelfTest extends GridCommonA /** */ protected boolean clientMode; + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + assert atomicityMode() != ATOMIC; + + fail("https://issues.apache.org/jira/browse/IGNITE-10359"); + } + } /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -68,12 +75,6 @@ public class GridCacheAtomicEntryProcessorDeploymentSelfTest extends GridCommonA cfg.setCacheConfiguration(cacheConfiguration()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(null); return cfg; @@ -119,6 +120,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception In case of error. */ + @Test public void testInvokeDeployment() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -128,6 +130,7 @@ public void testInvokeDeployment() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testInvokeDeployment2() throws Exception { depMode = DeploymentMode.SHARED; @@ -137,6 +140,7 @@ public void testInvokeDeployment2() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testInvokeAllDeployment() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -146,6 +150,7 @@ public void testInvokeAllDeployment() throws Exception { /** * @throws Exception In case of error.
*/ + @Test public void testInvokeAllDeployment2() throws Exception { depMode = DeploymentMode.SHARED; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicMessageCountSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicMessageCountSelfTest.java index ca94c7f60137d..c45490a03c4b4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicMessageCountSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheAtomicMessageCountSelfTest.java @@ -35,9 +35,10 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -45,10 +46,8 @@ /** * Tests messages being sent between nodes in ATOMIC mode. */ +@RunWith(JUnit4.class) public class GridCacheAtomicMessageCountSelfTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Starting grid index. */ private int idx; @@ -59,12 +58,7 @@ public class GridCacheAtomicMessageCountSelfTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setForceServerMode(true); - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); CacheConfiguration cCfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -87,6 +81,7 @@ public class GridCacheAtomicMessageCountSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testPartitioned() throws Exception { checkMessages(false); } @@ -94,6 +89,7 @@ public void testPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClient() throws Exception { checkMessages(true); } @@ -203,4 +199,4 @@ public void resetCount() { cntMap.clear(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicApiAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicApiAbstractTest.java index f766d01517d65..03e8e6817553b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicApiAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicApiAbstractTest.java @@ -38,8 +38,12 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestThread; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test;
+import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.events.EventType.EVTS_CACHE; @@ -50,6 +54,7 @@ * Test cases for multi-threaded tests. */ @SuppressWarnings("LockAcquiredButNotSafelyReleased") +@RunWith(JUnit4.class) public abstract class GridCacheBasicApiAbstractTest extends GridCommonAbstractTest { /** Grid. */ private Ignite ignite; @@ -88,7 +93,10 @@ protected GridCacheBasicApiAbstractTest() { * * @throws Exception If test failed. */ + @Test public void testBasicLock() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); Lock lock = cache.lock(1); @@ -105,7 +113,10 @@ public void testBasicLock() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testSingleLockReentry() throws IgniteCheckedException { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); Lock lock = cache.lock(1); @@ -133,7 +144,10 @@ public void testSingleLockReentry() throws IgniteCheckedException { * * @throws Exception If test failed. 
*/ + @Test public void testReentry() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); Lock lock = cache.lock(1); @@ -172,7 +186,10 @@ public void testReentry() throws Exception { /** * */ + @Test public void testInterruptLock() throws InterruptedException { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); final Lock lock = cache.lock(1); @@ -216,7 +233,10 @@ public void testInterruptLock() throws InterruptedException { /** * */ + @Test public void testInterruptLockWithTimeout() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); startGrid(1); @@ -275,7 +295,10 @@ public void testInterruptLockWithTimeout() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testManyLockReentries() throws IgniteCheckedException { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); Integer key = 1; @@ -318,7 +341,10 @@ public void testManyLockReentries() throws IgniteCheckedException { /** * @throws Exception If test failed. */ + @Test public void testLockMultithreaded() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); final CountDownLatch l1 = new CountDownLatch(1); @@ -436,7 +462,11 @@ public void testLockMultithreaded() throws Exception { * * @throws Exception If error occur. 
*/ + @Test public void testBasicOps() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); CountDownLatch latch = new CountDownLatch(1); @@ -497,7 +527,11 @@ public void testBasicOps() throws Exception { /** * @throws Exception If error occur. */ + @Test public void testBasicOpsWithReentry() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); int key = (int)System.currentTimeMillis(); @@ -569,7 +603,10 @@ public void testBasicOpsWithReentry() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testMultiLocks() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); Collection keys = Arrays.asList(1, 2, 3); @@ -600,6 +637,7 @@ public void testMultiLocks() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testGetPutRemove() throws IgniteCheckedException { IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -624,7 +662,12 @@ public void testGetPutRemove() throws IgniteCheckedException { * * @throws Exception In case of error. 
*/ + @Test public void testPutWithExpiration() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); CacheEventListener lsnr = new CacheEventListener(new CountDownLatch(1)); @@ -699,4 +742,4 @@ void await() throws InterruptedException { return true; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreAbstractTest.java index 84ce5385256c3..5537e9b73ac9b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreAbstractTest.java @@ -30,12 +30,13 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.P2; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -46,13 +47,18 @@ /** * Basic store test. 
*/ +@RunWith(JUnit4.class) public abstract class GridCacheBasicStoreAbstractTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache store. */ private static final GridCacheTestStore store = new GridCacheTestStore(); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** * */ @@ -80,12 +86,6 @@ protected GridCacheBasicStoreAbstractTest() { @Override protected final IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(cacheMode()); @@ -113,6 +113,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws IgniteCheckedException If failed. */ + @Test public void testNotExistingKeys() throws IgniteCheckedException { IgniteCache cache = jcache(); Map map = store.getMap(); @@ -137,6 +138,7 @@ public void testNotExistingKeys() throws IgniteCheckedException { } /** @throws Exception If test fails. */ + @Test public void testWriteThrough() throws Exception { IgniteCache cache = jcache(); @@ -208,6 +210,7 @@ public void testWriteThrough() throws Exception { } /** @throws Exception If test failed. */ + @Test public void testReadThrough() throws Exception { IgniteCache cache = jcache(); @@ -301,6 +304,7 @@ public void testReadThrough() throws Exception { } /** @throws Exception If test failed. */ + @Test public void testLoadCache() throws Exception { IgniteCache cache = jcache(); @@ -331,6 +335,7 @@ public void testLoadCache() throws Exception { } /** @throws Exception If test failed. */ + @Test public void testLoadCacheWithPredicate() throws Exception { IgniteCache cache = jcache(); @@ -368,6 +373,7 @@ public void testLoadCacheWithPredicate() throws Exception { } /** @throws Exception If test failed. */ + @Test public void testReloadCache() throws Exception { IgniteCache cache = jcache(); @@ -438,6 +444,7 @@ public void testReloadCache() throws Exception { } /** @throws Exception If test failed. */ + @Test public void testReloadAll() throws Exception { IgniteCache cache = jcache(); @@ -501,7 +508,7 @@ public void testReloadAll() throws Exception { } /** @throws Exception If test failed. */ - @SuppressWarnings("StringEquality") + @Test public void testReload() throws Exception { IgniteCache cache = jcache(); @@ -587,4 +594,4 @@ private void checkLastMethod(@Nullable String mtd) { assert lastMtd.equals(mtd) : "Last method does not match [expected=" + mtd + ", lastMtd=" + lastMtd + ']'; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreMultithreadedAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreMultithreadedAbstractTest.java index 22b02dc51ad97..a65a0c69cb88f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreMultithreadedAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheBasicStoreMultithreadedAbstractTest.java @@ -29,12 +29,16 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * Basic store test.
*/ +@RunWith(JUnit4.class) public abstract class GridCacheBasicStoreMultithreadedAbstractTest extends GridCommonAbstractTest { /** Cache store. */ private CacheStore store; @@ -90,6 +94,7 @@ protected GridCacheBasicStoreMultithreadedAbstractTest() { /** * @throws Exception If failed. */ + @Test public void testConcurrentGet() throws Exception { final AtomicInteger cntr = new AtomicInteger(); @@ -129,4 +134,4 @@ public void testConcurrentGet() throws Exception { assertEquals(1, cntr.get()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearAllSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearAllSelfTest.java index cae58e1816e4d..756fe0051df95 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearAllSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearAllSelfTest.java @@ -25,10 +25,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -38,6 +39,7 @@ * Test {@link IgniteCache#clear()} operation in multinode environment with nodes * having caches with different names. 
*/ +@RunWith(JUnit4.class) public class GridCacheClearAllSelfTest extends GridCommonAbstractTest { /** Grid nodes count. */ private static final int GRID_CNT = 3; @@ -57,15 +59,20 @@ public class GridCacheClearAllSelfTest extends GridCommonAbstractTest { /** Test attribute name. */ private static final String TEST_ATTRIBUTE = "TestAttribute"; - /** VM IP finder for TCP discovery SPI. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name which will be passed to grid configuration. */ private CacheMode cacheMode = PARTITIONED; /** Cache mode which will be passed to grid configuration. */ private String cacheName = CACHE_NAME; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-7952"); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -84,12 +91,6 @@ public class GridCacheClearAllSelfTest extends GridCommonAbstractTest { cfg.setUserAttributes(F.asMap(TEST_ATTRIBUTE, cacheName)); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } @@ -122,6 +123,7 @@ private void startNodes() throws Exception { * * @throws Exception In case of exception. */ + @Test public void testGlobalClearAllPartitioned() throws Exception { cacheMode = PARTITIONED; @@ -135,6 +137,7 @@ public void testGlobalClearAllPartitioned() throws Exception { * * @throws Exception In case of exception. 
*/ + @Test public void testGlobalClearAllReplicated() throws Exception { cacheMode = REPLICATED; @@ -195,4 +198,4 @@ private AttributeFilter(String attrValue) { return F.eq(attrValue, clusterNode.attribute(TEST_ATTRIBUTE)); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearLocallySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearLocallySelfTest.java index 82c4659a74aa4..f3c669e4e7047 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearLocallySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearLocallySelfTest.java @@ -27,11 +27,11 @@ import org.apache.ignite.internal.IgniteNodeAttributes; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -43,6 +43,7 @@ /** * Test {@link IgniteCache#localClearAll(java.util.Set)} operations in multinode environment with nodes having caches with different names. */ +@RunWith(JUnit4.class) public class GridCacheClearLocallySelfTest extends GridCommonAbstractTest { /** Local cache. */ private static final String CACHE_LOCAL = "cache_local"; @@ -59,9 +60,6 @@ public class GridCacheClearLocallySelfTest extends GridCommonAbstractTest { /** Grid nodes count. */ private static final int GRID_CNT = 3; - /** VM IP finder for TCP discovery SPI. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Local caches. */ private IgniteCache[] cachesLoc; @@ -116,12 +114,6 @@ public class GridCacheClearLocallySelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(ccfgLoc, ccfgPartitioned, ccfgColocated, ccfgReplicated); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } @@ -170,6 +162,7 @@ private void startUp() throws Exception { * * @throws Exception If failed. */ + @Test public void testLocalNoSplit() throws Exception { test(Mode.TEST_LOCAL, CLEAR_ALL_SPLIT_THRESHOLD / 2); } @@ -179,6 +172,7 @@ public void testLocalNoSplit() throws Exception { * * @throws Exception If failed. */ + @Test public void testLocalSplit() throws Exception { test(Mode.TEST_LOCAL, CLEAR_ALL_SPLIT_THRESHOLD + 1); } @@ -188,6 +182,7 @@ public void testLocalSplit() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitionedNoSplit() throws Exception { test(Mode.TEST_PARTITIONED, CLEAR_ALL_SPLIT_THRESHOLD / 2); } @@ -197,6 +192,7 @@ public void testPartitionedNoSplit() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitionedSplit() throws Exception { test(Mode.TEST_PARTITIONED, CLEAR_ALL_SPLIT_THRESHOLD + 1); } @@ -206,6 +202,7 @@ public void testPartitionedSplit() throws Exception { * * @throws Exception If failed. */ + @Test public void testColocatedNoSplit() throws Exception { test(Mode.TEST_COLOCATED, CLEAR_ALL_SPLIT_THRESHOLD / 2); } @@ -215,6 +212,7 @@ public void testColocatedNoSplit() throws Exception { * * @throws Exception If failed.
*/ + @Test public void testColocatedSplit() throws Exception { test(Mode.TEST_COLOCATED, CLEAR_ALL_SPLIT_THRESHOLD + 1); } @@ -224,6 +222,7 @@ public void testColocatedSplit() throws Exception { * * @throws Exception If failed. */ + @Test public void testReplicatedNoSplit() throws Exception { test(Mode.TEST_REPLICATED, CLEAR_ALL_SPLIT_THRESHOLD / 2); } @@ -233,6 +232,7 @@ public void testReplicatedNoSplit() throws Exception { * * @throws Exception If failed. */ + @Test public void testReplicatedSplit() throws Exception { test(Mode.TEST_REPLICATED, CLEAR_ALL_SPLIT_THRESHOLD + 1); } @@ -378,4 +378,4 @@ private AttributeFilter(String... attrs) { return false; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearSelfTest.java index 81c44134b6d90..8a24410032a45 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheClearSelfTest.java @@ -25,37 +25,21 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for cache clear. */ +@RunWith(JUnit4.class) public class GridCacheClearSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGridsMultiThreaded(3); @@ -74,6 +58,7 @@ public class GridCacheClearSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testClearPartitioned() throws Exception { testClear(CacheMode.PARTITIONED, false, null); } @@ -81,6 +66,7 @@ public void testClearPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClearPartitionedNear() throws Exception { testClear(CacheMode.PARTITIONED, true, null); } @@ -88,6 +74,7 @@ public void testClearPartitionedNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClearReplicated() throws Exception { testClear(CacheMode.REPLICATED, false, null); } @@ -95,6 +82,7 @@ public void testClearReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClearReplicatedNear() throws Exception { testClear(CacheMode.REPLICATED, true, null); } @@ -102,6 +90,7 @@ public void testClearReplicatedNear() throws Exception { /** * @throws Exception If failed.
      */
+    @Test
     public void testClearKeyPartitioned() throws Exception {
         testClear(CacheMode.PARTITIONED, false, Collections.singleton(3));
     }
@@ -109,6 +98,7 @@ public void testClearKeyPartitioned() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClearKeyPartitionedNear() throws Exception {
         testClear(CacheMode.PARTITIONED, true, Collections.singleton(3));
     }
@@ -116,6 +106,7 @@ public void testClearKeyPartitionedNear() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClearKeyReplicated() throws Exception {
         testClear(CacheMode.REPLICATED, false, Collections.singleton(3));
     }
@@ -123,6 +114,7 @@ public void testClearKeyReplicated() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClearKeyReplicatedNear() throws Exception {
         testClear(CacheMode.REPLICATED, true, Collections.singleton(3));
     }
@@ -130,6 +122,7 @@ public void testClearKeyReplicatedNear() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClearKeysPartitioned() throws Exception {
         testClear(CacheMode.PARTITIONED, false, F.asSet(2, 6, 9));
     }
@@ -137,6 +130,7 @@ public void testClearKeysPartitioned() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClearKeysPartitionedNear() throws Exception {
         testClear(CacheMode.PARTITIONED, true, F.asSet(2, 6, 9));
     }
@@ -144,6 +138,7 @@ public void testClearKeysPartitionedNear() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClearKeysReplicated() throws Exception {
         testClear(CacheMode.REPLICATED, false, F.asSet(2, 6, 9));
     }
@@ -151,6 +146,7 @@ public void testClearKeysReplicated() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClearKeysReplicatedNear() throws Exception {
         testClear(CacheMode.REPLICATED, true, F.asSet(2, 6, 9));
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentGetCacheOnClientTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentGetCacheOnClientTest.java
index fb83405f08e7c..6d51de76152ad 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentGetCacheOnClientTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentGetCacheOnClientTest.java
@@ -23,33 +23,22 @@
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteEx;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.testframework.GridTestUtils.runAsync;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheConcurrentGetCacheOnClientTest extends GridCommonAbstractTest{
-    /** Ip finder. */
-    private final static TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
-    /**
-     * @param gridName Grid name.
-     */
-    @Override protected IgniteConfiguration getConfiguration(final String gridName) throws Exception {
-        final IgniteConfiguration cfg = super.getConfiguration(gridName);
-
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
-
-        return cfg;
-    }
-
     /**
      *
      */
+    @Test
     public void test() throws Exception {
         IgniteConfiguration node1cfg = getConfiguration("node1");
         IgniteConfiguration node2cfg = getConfiguration("node2");
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapSelfTest.java
index ae1f8224dc23a..1f08236094307 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentMapSelfTest.java
@@ -27,10 +27,10 @@
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -38,10 +38,8 @@
 /**
  * Grid cache concurrent hash map self test.
  */
+@RunWith(JUnit4.class)
 public class GridCacheConcurrentMapSelfTest extends GridCommonAbstractTest {
-    /** Ip finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
@@ -51,12 +49,6 @@ public class GridCacheConcurrentMapSelfTest extends GridCommonAbstractTest {
         cc.setCacheMode(LOCAL);
         cc.setWriteSynchronizationMode(FULL_SYNC);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
         cfg.setCacheConfiguration(cc);
 
         return cfg;
@@ -75,6 +67,7 @@ public class GridCacheConcurrentMapSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRehash() throws Exception {
         IgniteCache c = grid().cache(DEFAULT_CACHE_NAME);
@@ -106,6 +99,7 @@ public void testRehash() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRehashRandom() throws Exception {
         IgniteCache c = grid().cache(DEFAULT_CACHE_NAME);
@@ -145,6 +139,7 @@ public void testRehashRandom() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRehashMultithreaded1() throws Exception {
         final AtomicInteger tidGen = new AtomicInteger();
@@ -217,6 +212,7 @@ public void testRehashMultithreaded1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRehashMultithreaded2() throws Exception {
         final AtomicInteger tidGen = new AtomicInteger(0);
@@ -310,7 +306,7 @@ public void testRehashMultithreaded2() throws Exception {
     /**
      * @throws Exception If failed.
      */
-    @SuppressWarnings("ResultOfObjectAllocationIgnored")
+    @Test
     public void testEmptyWeakIterator() throws Exception {
         final IgniteCache c = grid().cache(DEFAULT_CACHE_NAME);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentTxMultiNodeLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentTxMultiNodeLoadTest.java
index 0d2dc008d8946..41d536a88f0e6 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentTxMultiNodeLoadTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConcurrentTxMultiNodeLoadTest.java
@@ -67,12 +67,12 @@
 import org.apache.ignite.lang.IgnitePredicate;
 import org.apache.ignite.lang.IgniteUuid;
 import org.apache.ignite.resources.IgniteInstanceResource;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.NONE;
@@ -83,10 +83,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheConcurrentTxMultiNodeLoadTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Timers. */
     private static final ConcurrentMap>> timers = new ConcurrentHashMap<>();
@@ -153,12 +151,6 @@ public class GridCacheConcurrentTxMultiNodeLoadTe
         else
             c.setCacheConfiguration();
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         c.setPeerClassLoadingEnabled(false);
 
         return c;
@@ -172,6 +164,7 @@ public class GridCacheConcurrentTxMultiNodeLoadTe
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testEvictions() throws Exception {
         try {
             cacheOn = true;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConditionalDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConditionalDeploymentSelfTest.java
index b0d9f0e135359..15f368fa29405 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConditionalDeploymentSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConditionalDeploymentSelfTest.java
@@ -26,11 +26,10 @@
 import org.apache.ignite.internal.util.IgniteUtils;
 import org.apache.ignite.internal.util.typedef.CO;
 import org.apache.ignite.plugin.extensions.communication.Message;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
-import org.apache.ignite.testframework.config.GridTestProperties;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -40,10 +39,8 @@
 /**
  * Cache + conditional deployment test.
  */
+@RunWith(JUnit4.class)
 public class GridCacheConditionalDeploymentSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /**
      *
      */
@@ -61,12 +58,6 @@ public class GridCacheConditionalDeploymentSelfTe
         cfg.setCacheConfiguration(cacheConfiguration());
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
         return cfg;
     }
@@ -105,6 +96,7 @@ protected CacheConfiguration cacheConfiguration() throws Exception {
     /**
      * @throws Exception In case of error.
      */
+    @Test
     public void testNoDeploymentInfo() throws Exception {
         GridCacheIoManager ioMgr = cacheIoManager();
@@ -122,6 +114,7 @@ public void testNoDeploymentInfo() throws Exception {
     /**
      * @throws Exception In case of error.
      */
+    @Test
     public void testAddedDeploymentInfo() throws Exception {
         GridCacheContext ctx = cacheContext();
@@ -145,6 +138,7 @@ public void testAddedDeploymentInfo() throws Exception {
     /**
      * @throws Exception In case of error.
      */
+    @Test
     public void testAddedDeploymentInfo2() throws Exception {
         GridCacheContext ctx = cacheContext();
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationConsistencySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationConsistencySelfTest.java
index 3f4efc293e045..833ba5a6963ee 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationConsistencySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationConsistencySelfTest.java
@@ -42,15 +42,16 @@
 import org.apache.ignite.internal.util.typedef.T2;
 import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.lang.IgniteCallable;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridStringLogger;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
@@ -65,6 +66,7 @@
  *
  */
 @SuppressWarnings("unchecked")
+@RunWith(JUnit4.class)
 public class GridCacheConfigurationConsistencySelfTest extends GridCommonAbstractTest {
     /** */
     private boolean cacheEnabled;
@@ -93,9 +95,6 @@ public class GridCacheConfigurationConsistencySelfTest extends GridCommonAbstrac
     /** */
     private int backups;
 
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
         stopAllGrids();
@@ -110,12 +109,6 @@ public class GridCacheConfigurationConsistencySelfTest extends GridCommonAbstrac
             cfg.setGridLogger(strLog);
         }
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
         cfg.setDeploymentMode(depMode);
 
         if (cacheEnabled) {
@@ -141,6 +134,7 @@ public class GridCacheConfigurationConsistencySelfTest extends GridCommonAbstrac
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCacheUtilsCheckAttributeMismatch() throws Exception {
         Ignite ignite = startGrid(1);
@@ -188,6 +182,7 @@ public void testCacheUtilsCheckAttributeMismatch() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNullCacheMode() throws Exception {
         // Grid with null cache mode.
         // This is a legal case. The default cache mode should be used.
@@ -202,6 +197,7 @@ public void testNullCacheMode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testWithCacheAndWithoutCache() throws Exception {
         // 1st grid without cache.
         cacheEnabled = false;
@@ -221,6 +217,7 @@ public void testWithCacheAndWithoutCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSameCacheDifferentModes() throws Exception {
         // 1st grid with replicated cache.
         cacheEnabled = true;
@@ -249,6 +246,7 @@ public void testSameCacheDifferentModes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentCacheDifferentModes() throws Exception {
         // 1st grid with local cache.
         cacheEnabled = true;
@@ -286,6 +284,7 @@ public void testDifferentCacheDifferentModes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentDeploymentModes() throws Exception {
         // 1st grid with SHARED mode.
         cacheEnabled = true;
@@ -311,6 +310,7 @@
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentAffinities() throws Exception {
         cacheMode = PARTITIONED;
@@ -335,6 +335,7 @@ public void testDifferentAffinities() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentPreloadModes() throws Exception {
         checkSecondGridStartFails(
             new C1() {
@@ -357,6 +358,7 @@ public void testDifferentPreloadModes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentEvictionEnabled() throws Exception {
         checkSecondGridStartFails(
             new C1() {
@@ -379,6 +381,7 @@ public void testDifferentEvictionEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentEvictionPolicyEnabled() throws Exception {
         checkSecondGridStartFails(
             new C1() {
@@ -401,6 +404,7 @@ public void testDifferentEvictionPolicyEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentEvictionPolicies() throws Exception {
         checkSecondGridStartFails(
             new C1() {
@@ -425,6 +429,7 @@ public void testDifferentEvictionPolicies() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentEvictionPolicyFactories() throws Exception {
         checkSecondGridStartFails(
             new C1() {
@@ -449,6 +454,7 @@ public void testDifferentEvictionPolicyFactories() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentEvictionFilters() throws Exception {
         checkSecondGridStartFails(
             new C1() {
@@ -471,6 +477,7 @@ public void testDifferentEvictionFilters() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentAffinityMappers() throws Exception {
         checkSecondGridStartFails(
             new C1() {
@@ -493,6 +500,7 @@ public void testDifferentAffinityMappers() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentAtomicity() throws Exception {
         cacheMode = PARTITIONED;
@@ -519,6 +527,34 @@ public void testDifferentAtomicity() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
+    public void testDifferentTxAtomicity() throws Exception {
+        cacheMode = PARTITIONED;
+
+        checkSecondGridStartFails(
+            new C1() {
+                /** {@inheritDoc} */
+                @Override public Void apply(CacheConfiguration cfg) {
+                    cfg.setNearConfiguration(null);
+                    cfg.setAtomicityMode(TRANSACTIONAL);
+                    return null;
+                }
+            },
+            new C1() {
+                /** {@inheritDoc} */
+                @Override public Void apply(CacheConfiguration cfg) {
+                    cfg.setNearConfiguration(null);
+                    cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
+                    return null;
+                }
+            }
+        );
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
     public void testDifferentSynchronization() throws Exception {
         cacheMode = PARTITIONED;
@@ -543,6 +579,7 @@ public void testDifferentSynchronization() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAffinityFunctionConsistency() throws Exception {
         cacheEnabled = true;
         cacheMode = PARTITIONED;
@@ -589,6 +626,7 @@ public void testAffinityFunctionConsistency() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAttributesWarnings() throws Exception {
         cacheEnabled = true;
@@ -624,6 +662,7 @@ public void testAttributesWarnings() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPartitionedOnlyAttributesIgnoredForReplicated() throws Exception {
         cacheEnabled = true;
@@ -663,6 +702,7 @@ public void testPartitionedOnlyAttributesIgnoredForReplicated() throws Exception
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIgnoreMismatchForLocalCaches() throws Exception {
         cacheEnabled = true;
@@ -710,6 +750,7 @@ public void testIgnoreMismatchForLocalCaches() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStoreCheckAtomic() throws Exception {
         cacheEnabled = true;
@@ -755,6 +796,7 @@ public void testStoreCheckAtomic() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStoreCheckTransactional() throws Exception {
         cacheEnabled = true;
@@ -802,6 +844,7 @@ public void testStoreCheckTransactional() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAffinityForReplicatedCache() throws Exception {
         cacheEnabled = true;
@@ -822,6 +865,7 @@ public void testAffinityForReplicatedCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDifferentInterceptors() throws Exception {
         cacheMode = PARTITIONED;
@@ -906,4 +950,4 @@ private static class TestCacheInterceptor extends CacheInterceptorAdapter implem
     private static class TestCacheDefaultAffinityKeyMapper extends GridCacheDefaultAffinityKeyMapper {
         // No-op, just different class.
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationValidationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationValidationSelfTest.java
index 7b793b78d8138..81452980f2f1c 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationValidationSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheConfigurationValidationSelfTest.java
@@ -21,10 +21,10 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
@@ -35,6 +35,7 @@
 /**
  * Attribute validation self test.
  */
+@RunWith(JUnit4.class)
 public class GridCacheConfigurationValidationSelfTest extends GridCommonAbstractTest {
     /** */
     private static final String NON_DFLT_CACHE_NAME = "non-dflt-cache";
@@ -65,9 +66,6 @@ public class GridCacheConfigurationValidationSelfTest extends GridCommonAbstract
     private static final String RESERVED_FOR_DATASTRUCTURES_CACHE_GROUP_NAME_IGNITE_INSTANCE_NAME =
         "reservedForDsCacheGroupNameCheckFails";
 
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /**
      * Constructs test.
      */
@@ -79,12 +77,6 @@ public GridCacheConfigurationValidationSelfTest() {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
         // Default cache config.
         CacheConfiguration dfltCacheCfg = defaultCacheConfiguration();
@@ -138,6 +130,7 @@ else if (igniteInstanceName.contains(DUP_DFLT_CACHES_IGNITE_INSTANCE_NAME))
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDuplicateCacheConfigurations() throws Exception {
         // This grid should not start.
         startInvalidGrid(DUP_CACHES_IGNITE_INSTANCE_NAME);
@@ -149,6 +142,7 @@ public void testDuplicateCacheConfigurations() throws Exception {
     /**
      * @throws Exception If fails.
      */
+    @Test
     public void testCacheAttributesValidation() throws Exception {
         try {
             startGrid(0);
@@ -215,4 +209,4 @@ public TestRendezvousAffinityFunction() {
     private static class TestCacheDefaultAffinityKeyMapper extends GridCacheDefaultAffinityKeyMapper {
         // No-op. Just to have another class name.
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDaemonNodeAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDaemonNodeAbstractSelfTest.java
index 807725159eebf..c55187eead73c 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDaemonNodeAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDaemonNodeAbstractSelfTest.java
@@ -27,12 +27,12 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
@@ -41,10 +41,8 @@
 /**
  * Test cache operations with daemon node.
  */
+@RunWith(JUnit4.class)
 public abstract class GridCacheDaemonNodeAbstractSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Daemon flag. */
     protected boolean daemon;
@@ -61,12 +59,6 @@ public abstract class GridCacheDaemonNodeAbstractSelfTest extends GridCommonAbst
         c.setDaemon(daemon);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         c.setConnectorConfiguration(null);
 
         CacheConfiguration cc = defaultCacheConfiguration();
@@ -90,6 +82,7 @@ public abstract class GridCacheDaemonNodeAbstractSelfTest extends GridCommonAbst
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testImplicit() throws Exception {
         try {
             startGridsMultiThreaded(3);
@@ -121,6 +114,7 @@ public void testImplicit() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testExplicit() throws Exception {
         try {
             startGridsMultiThreaded(3);
@@ -162,6 +156,7 @@ public void testExplicit() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testMapKeyToNode() throws Exception {
         try {
             // Start normal nodes.
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentSelfTest.java
index ff3ab362fe726..3396932188129 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheDeploymentSelfTest.java
@@ -30,10 +30,10 @@
 import org.apache.ignite.internal.binary.BinaryMarshaller;
 import org.apache.ignite.internal.util.typedef.T2;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -45,10 +45,8 @@
 /**
  * Cache + Deployment test.
  */
+@RunWith(JUnit4.class)
 public class GridCacheDeploymentSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Name for Ignite instance without cache. */
     protected static final String IGNITE_INSTANCE_NAME = "grid-no-cache";
@@ -84,12 +82,6 @@ public class GridCacheDeploymentSelfTest extends GridCommonAbstractTest {
         else
             cfg.setCacheConfiguration(cacheConfiguration());
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
         cfg.setConnectorConfiguration(null);
 
         return cfg;
@@ -124,6 +116,7 @@ protected boolean isCacheUndeployed(Ignite g) {
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment() throws Exception {
         try {
             depMode = CONTINUOUS;
@@ -150,6 +143,7 @@ public void testDeployment() throws Exception {
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment2() throws Exception {
         try {
             depMode = CONTINUOUS;
@@ -185,6 +179,7 @@ public void testDeployment2() throws Exception {
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment3() throws Exception {
         try {
             depMode = SHARED;
@@ -231,12 +226,14 @@ public void testDeployment3() throws Exception {
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment4() throws Exception {
         doDeployment4(false);
     }
 
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment4BackupLeavesGrid() throws Exception {
         doDeployment4(true);
     }
@@ -288,6 +285,7 @@ private void doDeployment4(boolean backupLeavesGrid) throws Exception {
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment5() throws Exception {
         ClassLoader ldr = getExternalClassLoader();
@@ -345,6 +343,7 @@ public void testDeployment5() throws Exception {
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment6() throws Exception {
         try {
             depMode = SHARED;
@@ -378,6 +377,7 @@ public void testDeployment6() throws Exception {
     /** @throws Exception If failed. */
     @SuppressWarnings("unchecked")
+    @Test
     public void testDeployment7() throws Exception {
         try {
             depMode = SHARED;
@@ -410,6 +410,7 @@ public void testDeployment7() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitionedDeploymentPreloading() throws Exception {
         ClassLoader ldr = getExternalClassLoader();
@@ -434,6 +435,7 @@ public void testPartitionedDeploymentPreloading() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCacheUndeploymentSharedMode() throws Exception {
         testCacheUndeployment(SHARED);
     }
@@ -441,6 +443,7 @@ public void testCacheUndeploymentSharedMode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCacheUndeploymentContMode() throws Exception {
         testCacheUndeployment(CONTINUOUS);
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryMemorySizeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryMemorySizeSelfTest.java
index 016f3a099afc5..e23ff3f21e74c 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryMemorySizeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryMemorySizeSelfTest.java
@@ -35,10 +35,11 @@
 import org.apache.ignite.marshaller.Marshaller;
 import org.apache.ignite.marshaller.MarshallerContext;
 import org.apache.ignite.marshaller.jdk.JdkMarshaller;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.LOCAL;
@@ -48,10 +49,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheEntryMemorySizeSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Null reference size (optimized marshaller writes one byte for null reference). */
     private static final int NULL_REF_SIZE = 1;
@@ -83,12 +82,6 @@ public class GridCacheEntryMemorySizeSelfTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
         cfg.setMarshaller(createMarshaller());
 
         return cfg;
@@ -115,12 +108,26 @@ public class GridCacheEntryMemorySizeSelfTest extends GridCommonAbstractTest {
         startGrids(2);
     }
 
+    /** {@inheritDoc} */
+    @Override protected void afterTestsStopped() throws Exception {
+        stopAllGrids();
+
+        super.afterTestsStopped();
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void afterTest() throws Exception {
+        grid(0).destroyCache(DEFAULT_CACHE_NAME);
+
+        super.afterTest();
+    }
+
     /**
      * @param nearEnabled {@code True} if near cache should be enabled.
      * @param mode Cache mode.
      * @return Created cache.
      */
-    private IgniteCache testCache(boolean nearEnabled, CacheMode mode) {
+    private IgniteCache createCache(boolean nearEnabled, CacheMode mode) {
         CacheConfiguration cacheCfg = defaultCacheConfiguration();
 
         cacheCfg.setCacheMode(mode);
@@ -176,8 +183,11 @@ protected Marshaller createMarshaller() throws IgniteCheckedException {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testLocal() throws Exception {
-        IgniteCache cache = testCache(false, LOCAL);
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
+        IgniteCache cache = createCache(false, LOCAL);
 
         try {
             cache.put(1, new Value(new byte[1024]));
@@ -199,8 +209,9 @@ public void testLocal() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testReplicated() throws Exception {
-        IgniteCache cache = testCache(false, REPLICATED);
+        IgniteCache cache = createCache(false, REPLICATED);
 
         try {
             cache.put(1, new Value(new byte[1024]));
@@ -222,8 +233,11 @@ public void testReplicated() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitionedNearEnabled() throws Exception {
-        IgniteCache cache = testCache(true, PARTITIONED);
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+
+        IgniteCache cache = createCache(true, PARTITIONED);
 
         try {
             int[] keys = new int[3];
@@ -274,8 +288,9 @@ public void testPartitionedNearEnabled() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitionedNearDisabled() throws Exception {
-        IgniteCache cache = testCache(false, PARTITIONED);
+        IgniteCache cache = createCache(false, PARTITIONED);
 
         try {
             int[] keys = new int[3];
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryVersionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryVersionSelfTest.java
index 9e933be2acfca..a692455b4177c 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryVersionSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEntryVersionSelfTest.java
@@ -24,13 +24,14 @@
 import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.processors.cache.version.GridCacheVersion;
 import org.apache.ignite.internal.util.typedef.F;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.cache.version.GridCacheVersionManager.TOP_VER_BASE_TIME; @@ -38,10 +39,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheEntryVersionSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Atomicity mode. */ private CacheAtomicityMode atomicityMode; @@ -49,10 +48,6 @@ public class GridCacheEntryVersionSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setCacheMode(PARTITIONED); @@ -62,14 +57,13 @@ public class GridCacheEntryVersionSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(ccfg); - cfg.setDiscoverySpi(discoSpi); - return cfg; } /** * @throws Exception If failed. */ + @Test public void testVersionAtomic() throws Exception { atomicityMode = ATOMIC; @@ -79,12 +73,23 @@ public void testVersionAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVersionTransactional() throws Exception { atomicityMode = TRANSACTIONAL; checkVersion(); } + /** + * @throws Exception If failed. + */ + @Test + public void testVersionMvccTx() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + + checkVersion(); + } + /** * @throws Exception If failed. 
*/ @@ -155,4 +160,4 @@ private void checkVersion() throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionEventAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionEventAbstractTest.java index 554a7a9ed3db4..6cc11f9a0c107 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionEventAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheEvictionEventAbstractTest.java @@ -32,10 +32,11 @@ import org.apache.ignite.events.Event; import org.apache.ignite.events.EventType; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_ENTRY_EVICTED; import static org.apache.ignite.events.EventType.EVT_JOB_MAPPED; @@ -45,9 +46,15 @@ /** * Eviction event self test. 
*/ +@RunWith(JUnit4.class) public abstract class GridCacheEvictionEventAbstractTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + + super.setUp(); + } /** * @@ -60,12 +67,6 @@ protected GridCacheEvictionEventAbstractTest() { @Override protected IgniteConfiguration getConfiguration() throws Exception { IgniteConfiguration c = super.getConfiguration(); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(cacheMode()); @@ -93,6 +94,7 @@ protected GridCacheEvictionEventAbstractTest() { /** * @throws Exception If failed. */ + @Test public void testEvictionEvent() throws Exception { Ignite g = grid(); @@ -120,4 +122,4 @@ public void testEvictionEvent() throws Exception { assertTrue("Failed to wait for eviction event", latch.await(10, TimeUnit.SECONDS)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFinishPartitionsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFinishPartitionsSelfTest.java index 9732272d5f1e1..2d70e544bf8be 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFinishPartitionsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFinishPartitionsSelfTest.java @@ -34,7 +34,11 @@ import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,6 +48,7 @@ /** * Abstract class for cache tests. */ +@RunWith(JUnit4.class) public class GridCacheFinishPartitionsSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int GRID_CNT = 1; @@ -56,6 +61,13 @@ public class GridCacheFinishPartitionsSelfTest extends GridCacheAbstractSelfTest return GRID_CNT; } + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { grid = (IgniteKernal)grid(0); @@ -85,6 +97,7 @@ public class GridCacheFinishPartitionsSelfTest extends GridCacheAbstractSelfTest /** * @throws Exception If failed. */ + @Test public void testTxFinishPartitions() throws Exception { String key = "key"; String val = "value"; @@ -167,6 +180,7 @@ private long runTransactions(final String key, final int keyPart, final Collecti * * @throws Exception If failed. */ + @Test public void testMvccFinishPartitions() throws Exception { String key = "key"; @@ -193,6 +207,7 @@ public void testMvccFinishPartitions() throws Exception { * * @throws Exception If failed. */ + @Test public void testMvccFinishKeys() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -226,6 +241,7 @@ public void testMvccFinishKeys() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMvccFinishPartitionsContinuousLockAcquireRelease() throws Exception { int key = 1; @@ -330,4 +346,4 @@ private long runLock(String key, int keyPart, Collection waitParts) thr return end.get() - start; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQueryMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQueryMultithreadedSelfTest.java index 90f8d60915cd5..b993948931a7f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQueryMultithreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQueryMultithreadedSelfTest.java @@ -28,6 +28,9 @@ import org.apache.ignite.internal.processors.cache.query.CacheQueryFuture; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.S; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -35,6 +38,7 @@ /** * Multithreaded reduce query tests with lots of data. */ +@RunWith(JUnit4.class) public class GridCacheFullTextQueryMultithreadedSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int GRID_CNT = 3; @@ -69,6 +73,7 @@ public class GridCacheFullTextQueryMultithreadedSelfTest extends GridCacheAbstra * @throws Exception In case of error. 
*/ @SuppressWarnings({"TooBroadScope"}) + @Test public void testH2Text() throws Exception { int duration = 20 * 1000; final int keyCnt = 5000; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheGetAndTransformStoreAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheGetAndTransformStoreAbstractTest.java index f140945656e26..d180b794de9a4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheGetAndTransformStoreAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheGetAndTransformStoreAbstractTest.java @@ -29,10 +29,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -41,13 +42,18 @@ /** * Basic get and transform store test. */ +@RunWith(JUnit4.class) public abstract class GridCacheGetAndTransformStoreAbstractTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache store. 
*/ private static final GridCacheTestStore store = new GridCacheTestStore(); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** * */ @@ -75,12 +81,6 @@ protected GridCacheGetAndTransformStoreAbstractTest() { @Override protected final IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(cacheMode()); @@ -108,6 +108,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception If failed. */ + @Test public void testGetAndTransform() throws Exception { final AtomicBoolean finish = new AtomicBoolean(); @@ -165,4 +166,4 @@ private static class Processor implements EntryProcessor, return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheIncrementTransformTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheIncrementTransformTest.java index 33b094f1f2f52..3ccb7c11eaf83 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheIncrementTransformTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheIncrementTransformTest.java @@ -30,11 +30,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,10 +44,8 @@ /** * Tests cache in-place modification logic with iterative value increment. */ +@RunWith(JUnit4.class) public class GridCacheIncrementTransformTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of nodes to test on. */ private static final int GRID_CNT = 4; @@ -71,12 +69,6 @@ public class GridCacheIncrementTransformTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -102,6 +94,7 @@ public class GridCacheIncrementTransformTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testIncrement() throws Exception { testIncrement(false); } @@ -109,6 +102,7 @@ public void testIncrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrementRestart() throws Exception { final AtomicBoolean stop = new AtomicBoolean(); final AtomicReference error = new AtomicReference<>(); @@ -230,4 +224,4 @@ private static class Processor implements EntryProcessor> emptyFilter() { /** * @throws Exception If failed. */ + @Test public void testSmall() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -135,6 +128,7 @@ public void testSmall() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLarge() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -164,6 +158,7 @@ public void testLarge() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFiltered() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -243,4 +238,4 @@ void reset() { i = 0; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheKeyCheckSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheKeyCheckSelfTest.java index ebb88a3a821ca..56c241e08a214 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheKeyCheckSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheKeyCheckSelfTest.java @@ -22,9 +22,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -34,10 +34,8 @@ /** * Tests for cache key check. */ +@RunWith(JUnit4.class) public class GridCacheKeyCheckSelfTest extends GridCacheAbstractSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Atomicity mode. 
*/ private CacheAtomicityMode atomicityMode; @@ -55,12 +53,6 @@ public class GridCacheKeyCheckSelfTest extends GridCacheAbstractSelfTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCacheConfiguration(cacheConfiguration()); return cfg; @@ -84,6 +76,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testGetTransactional() throws Exception { checkGet(TRANSACTIONAL); } @@ -91,6 +84,7 @@ public void testGetTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAtomic() throws Exception { checkGet(ATOMIC); } @@ -98,6 +92,7 @@ public void testGetAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutTransactional() throws Exception { checkPut(TRANSACTIONAL); } @@ -105,6 +100,7 @@ public void testPutTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAtomic() throws Exception { checkPut(ATOMIC); } @@ -112,6 +108,7 @@ public void testPutAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveTransactional() throws Exception { checkRemove(TRANSACTIONAL); } @@ -119,6 +116,7 @@ public void testRemoveTransactional() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemoveAtomic() throws Exception { checkRemove(ATOMIC); } @@ -204,4 +202,4 @@ public int getSomeVal() { return someVal; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLeakTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLeakTest.java index 1fed55fafdbdd..63449f1621418 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLeakTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLeakTest.java @@ -25,10 +25,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.internal.CU; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -38,10 +38,8 @@ /** * Leak test. */ +@RunWith(JUnit4.class) public class GridCacheLeakTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache name. 
*/ private static final String CACHE_NAME = "igfs-data"; @@ -52,12 +50,6 @@ public class GridCacheLeakTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCacheConfiguration(cacheConfiguration()); return cfg; @@ -87,6 +79,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testLeakTransactional() throws Exception { checkLeak(TRANSACTIONAL); } @@ -94,6 +87,7 @@ public void testLeakTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLeakAtomic() throws Exception { checkLeak(ATOMIC); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLifecycleAwareSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLifecycleAwareSelfTest.java index 22d94fbbc27b9..6654edc6a463b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLifecycleAwareSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLifecycleAwareSelfTest.java @@ -49,12 +49,16 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridAbstractLifecycleAwareSelfTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Test for {@link LifecycleAware} support in {@link CacheConfiguration}. 
*/ +@RunWith(JUnit4.class) public class GridCacheLifecycleAwareSelfTest extends GridAbstractLifecycleAwareSelfTest { /** */ private static final String CACHE_NAME = "cache"; @@ -365,6 +369,7 @@ public TestTopologyValidator() { /** {@inheritDoc} */ @SuppressWarnings("ErrorNotRethrown") + @Test @Override public void testLifecycleAware() throws Exception { for (boolean nearEnabled : new boolean[] {true, false}) { near = nearEnabled; @@ -390,4 +395,4 @@ public TestTopologyValidator() { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLocalTxStoreExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLocalTxStoreExceptionSelfTest.java index 71f1f7f9765e0..fccf5ecd9bf4e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLocalTxStoreExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLocalTxStoreExceptionSelfTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -25,6 +26,13 @@ * */ public class GridCacheLocalTxStoreExceptionSelfTest extends IgniteTxStoreExceptionAbstractSelfTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallerTxAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallerTxAbstractTest.java index 0d616d4a1f141..57c019d304497 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallerTxAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallerTxAbstractTest.java @@ -22,12 +22,11 @@ import java.io.ObjectInput; import java.io.ObjectOutput; import java.util.UUID; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; @@ -35,6 +34,7 @@ /** * Test transaction with wrong marshalling. */ +@RunWith(JUnit4.class) public abstract class GridCacheMarshallerTxAbstractTest extends GridCommonAbstractTest { /** * Wrong Externalizable class. @@ -57,9 +57,6 @@ private static class GridCacheWrongValue1 { private long val2 = 9; } - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * Constructs a test. */ @@ -67,24 +64,12 @@ protected GridCacheMarshallerTxAbstractTest() { super(true /* start grid. */); } - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - - return cfg; - } - /** * JUnit. * * @throws Exception If failed. 
*/ + @Test public void testValueMarshallerFail() throws Exception { String key = UUID.randomUUID().toString(); String value = UUID.randomUUID().toString(); @@ -131,4 +116,4 @@ public void testValueMarshallerFail() throws Exception { tx.close(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallingNodeJoinSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallingNodeJoinSelfTest.java index df3430f2bbfc2..001d88fba3a35 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallingNodeJoinSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMarshallingNodeJoinSelfTest.java @@ -36,20 +36,26 @@ import org.apache.ignite.events.EventType; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.PE; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; /** */ +@RunWith(JUnit4.class) public class GridCacheMarshallingNodeJoinSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + 
MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -66,12 +72,6 @@ public class GridCacheMarshallingNodeJoinSelfTest extends GridCommonAbstractTest cfg.setCacheConfiguration(cacheCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -83,6 +83,7 @@ public class GridCacheMarshallingNodeJoinSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testNodeJoin() throws Exception { final CountDownLatch allowJoin = new CountDownLatch(1); final CountDownLatch joined = new CountDownLatch(2); @@ -168,4 +169,4 @@ public TestObject() { // No-op. } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMissingCommitVersionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMissingCommitVersionSelfTest.java index a6f202218d28c..a9f6d392e5af5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMissingCommitVersionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMissingCommitVersionSelfTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MAX_COMPLETED_TX_COUNT; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -38,6 +41,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheMissingCommitVersionSelfTest extends 
GridCommonAbstractTest { /** */ private volatile boolean putFailed; @@ -86,6 +90,7 @@ public GridCacheMissingCommitVersionSelfTest() { /** * @throws Exception If failed. */ + @Test public void testMissingCommitVersion() throws Exception { final IgniteCache cache = jcache(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMixedPartitionExchangeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMixedPartitionExchangeSelfTest.java index f6f47517e4de2..7867be29c2a5d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMixedPartitionExchangeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMixedPartitionExchangeSelfTest.java @@ -30,11 +30,13 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -46,18 +48,24 @@ * exchange should be skipped in this case). */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class GridCacheMixedPartitionExchangeSelfTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. 
*/ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Flag indicating whether to include cache to the node configuration. */ private boolean cache; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-9470"); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder).setForceServerMode(true); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); if (cache) cfg.setCacheConfiguration(cacheConfiguration()); @@ -84,6 +92,7 @@ private CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testNodeJoinLeave() throws Exception { try { cache = true; @@ -169,4 +178,4 @@ public void testNodeJoinLeave() throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultiUpdateLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultiUpdateLockSelfTest.java index 367d8ac833d66..fbe629d1a9c69 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultiUpdateLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultiUpdateLockSelfTest.java @@ -29,11 +29,11 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.checkpoint.noop.NoopCheckpointSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -44,10 +44,8 @@ /** * Tests multi-update locks. */ +@RunWith(JUnit4.class) public class GridCacheMultiUpdateLockSelfTest extends GridCommonAbstractTest { - /** Shared IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Near enabled flag. */ private boolean nearEnabled; @@ -55,12 +53,6 @@ public class GridCacheMultiUpdateLockSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCacheConfiguration(cacheConfiguration()); cfg.setCheckpointSpi(new NoopCheckpointSpi()); @@ -89,6 +81,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testMultiUpdateLocksNear() throws Exception { checkMultiUpdateLocks(true); } @@ -96,6 +89,7 @@ public void testMultiUpdateLocksNear() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultiUpdateLocksColocated() throws Exception { checkMultiUpdateLocks(false); } @@ -202,4 +196,4 @@ private void checkMultiUpdateLocks(boolean nearEnabled) throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultinodeUpdateAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultinodeUpdateAbstractSelfTest.java index 800f4bac7b67c..298e2917bea35 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultinodeUpdateAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMultinodeUpdateAbstractSelfTest.java @@ -26,14 +26,19 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.testframework.MvccFeatureChecker.assertMvccWriteConflict; /** * Multinode update test. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public abstract class GridCacheMultinodeUpdateAbstractSelfTest extends GridCacheAbstractSelfTest { /** */ protected static volatile boolean failed; @@ -74,7 +79,11 @@ public abstract class GridCacheMultinodeUpdateAbstractSelfTest extends GridCache /** * @throws Exception If failed. 
*/ + @Test public void testInvoke() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10778"); + IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); final Integer key = primaryKey(cache); @@ -86,8 +95,12 @@ public void testInvoke() throws Exception { Integer expVal = 0; - for (int i = 0; i < iterations(); i++) { - log.info("Iteration: " + i); + final long endTime = System.currentTimeMillis() + GridTestUtils.SF.applyLB(60_000, 10_000); + + int iter = 0; + + while (System.currentTimeMillis() < endTime) { + log.info("Iteration: " + iter++); final AtomicInteger gridIdx = new AtomicInteger(); @@ -97,8 +110,20 @@ public void testInvoke() throws Exception { final IgniteCache cache = grid(idx).cache(DEFAULT_CACHE_NAME); - for (int i = 0; i < ITERATIONS_PER_THREAD && !failed; i++) - cache.invoke(key, new IncProcessor()); + for (int i = 0; i < ITERATIONS_PER_THREAD && !failed; i++) { + boolean updated = false; + + while (!updated) { + try { + cache.invoke(key, new IncProcessor()); + + updated = true; + } + catch (Exception e) { + assertMvccWriteConflict(e); + } + } + } return null; } @@ -116,13 +141,6 @@ public void testInvoke() throws Exception { } } - /** - * @return Number of iterations. - */ - protected int iterations() { - return atomicityMode() == ATOMIC ? 
30 : 15; - } - /** * */ @@ -144,4 +162,4 @@ protected static class IncProcessor implements EntryProcessor cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -107,6 +112,7 @@ public void testAllTrueFlags() { /** * */ + @Test public void testAllFalseFlags() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -140,4 +146,4 @@ public void testAllFalseFlags() { for (GridCacheMvccCandidate.Mask mask : GridCacheMvccCandidate.Mask.values()) assertFalse("Mask check failed [mask=" + mask + ", c=" + c + ']', mask.get(flags)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccManagerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccManagerSelfTest.java index 0a284e989c1ee..dcc4dcb88871a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccManagerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccManagerSelfTest.java @@ -24,11 +24,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -38,10 +38,8 @@ /** * Tests for {@link GridCacheMvccManager}. 
*/ +@RunWith(JUnit4.class) public class GridCacheMvccManagerSelfTest extends GridCommonAbstractTest { - /** VM ip finder for TCP discovery. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Cache mode. */ private CacheMode mode; @@ -49,12 +47,6 @@ public class GridCacheMvccManagerSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration()); return cfg; @@ -72,6 +64,7 @@ protected CacheConfiguration cacheConfiguration() { } /** @throws Exception If failed. */ + @Test public void testLocalCache() throws Exception { mode = LOCAL; @@ -79,6 +72,7 @@ public void testLocalCache() throws Exception { } /** @throws Exception If failed. */ + @Test public void testReplicatedCache() throws Exception { mode = REPLICATED; @@ -86,6 +80,7 @@ public void testReplicatedCache() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPartitionedCache() throws Exception { mode = PARTITIONED; @@ -117,4 +112,4 @@ private void testCandidates(int gridCnt) throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccMultiThreadedUpdateSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccMultiThreadedUpdateSelfTest.java new file mode 100644 index 0000000000000..1f9cbf027766c --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccMultiThreadedUpdateSelfTest.java @@ -0,0 +1,207 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import java.util.concurrent.Callable; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteTransactions; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.transactions.Transaction; +import org.junit.Test; + +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Multithreaded update test. 
+ */ +public class GridCacheMvccMultiThreadedUpdateSelfTest extends GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest { + /** */ + public static final int THREADS = 5; + + /** {@inheritDoc} */ + @Override protected long getTestTimeout() { + return 5 * 60_000; + } + + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-9470"); + + super.beforeTestsStarted(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testTransformTx() throws Exception { + testTransformTx(keyForNode(0)); + + if (gridCount() > 1) + testTransformTx(keyForNode(1)); + } + + /** + * @param key Key. + * @throws Exception If failed. + */ + private void testTransformTx(final Integer key) throws Exception { + final IgniteCache<Integer, Integer> cache = grid(0).cache(DEFAULT_CACHE_NAME); + + cache.put(key, 0); + + final int ITERATIONS_PER_THREAD = iterations(); + + GridTestUtils.runMultiThreaded(new Callable<Void>() { + @Override public Void call() throws Exception { + IgniteTransactions txs = ignite(0).transactions(); + + for (int i = 0; i < ITERATIONS_PER_THREAD && !failed; i++) { + if (i % 500 == 0) + log.info("Iteration " + i); + + try (Transaction tx = txs.txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.invoke(key, new IncProcessor()); + + tx.commit(); + } + } + + return null; + } + }, THREADS, "transform"); + + for (int i = 0; i < gridCount(); i++) { + Integer val = (Integer)grid(i).cache(DEFAULT_CACHE_NAME).get(key); + + assertEquals("Unexpected value for grid " + i, (Integer)(ITERATIONS_PER_THREAD * THREADS), val); + } + + if (failed) { + for (int g = 0; g < gridCount(); g++) + info("Value for cache [g=" + g + ", val=" + grid(g).cache(DEFAULT_CACHE_NAME).get(key) + ']'); + + assertFalse(failed); + } + } + + /** + * @throws
Exception If failed. + */ + @Test + public void testPutTxPessimistic() throws Exception { + testPutTx(keyForNode(0)); + + if (gridCount() > 1) + testPutTx(keyForNode(1)); + } + + /** + * @param key Key. + * @throws Exception If failed. + */ + private void testPutTx(final Integer key) throws Exception { + final IgniteCache<Integer, Integer> cache = grid(0).cache(DEFAULT_CACHE_NAME); + + cache.put(key, 0); + + final int ITERATIONS_PER_THREAD = iterations(); + + GridTestUtils.runMultiThreaded(new Callable<Void>() { + @Override public Void call() throws Exception { + for (int i = 0; i < ITERATIONS_PER_THREAD; i++) { + if (i % 500 == 0) + log.info("Iteration " + i); + + try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + Integer val = cache.getAndPut(key, i); + + assertNotNull(val); + + tx.commit(); + } + } + + return null; + } + }, THREADS, "put"); + + for (int i = 0; i < gridCount(); i++) { + Integer val = (Integer)grid(i).cache(DEFAULT_CACHE_NAME).get(key); + + assertNotNull("Unexpected value for grid " + i, val); + } + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutxIfAbsentTxPessimistic() throws Exception { + testPutxIfAbsentTx(keyForNode(0)); + + if (gridCount() > 1) + testPutxIfAbsentTx(keyForNode(1)); + } + + /** + * @param key Key. + * @throws Exception If failed.
+ */ + private void testPutxIfAbsentTx(final Integer key) throws Exception { + final IgniteCache<Integer, Integer> cache = grid(0).cache(DEFAULT_CACHE_NAME); + + cache.put(key, 0); + + final int ITERATIONS_PER_THREAD = iterations(); + + GridTestUtils.runMultiThreaded(new Callable<Void>() { + @Override public Void call() throws Exception { + for (int i = 0; i < ITERATIONS_PER_THREAD && !failed; i++) { + if (i % 500 == 0) + log.info("Iteration " + i); + + try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.putIfAbsent(key, 100); + + tx.commit(); + } + } + + return null; + } + }, THREADS, "putxIfAbsent"); + + for (int i = 0; i < gridCount(); i++) { + Integer val = (Integer)grid(i).cache(DEFAULT_CACHE_NAME).get(key); + + assertEquals("Unexpected value for grid " + i, (Integer)0, val); + } + + assertFalse(failed); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccPartitionedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccPartitionedSelfTest.java index 119002ca75717..7117adf3c0cc4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccPartitionedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccPartitionedSelfTest.java @@ -25,11 +25,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,6 +37,7 @@ /** * Test cases for multi-threaded tests in partitioned cache. */ +@RunWith(JUnit4.class) public class GridCacheMvccPartitionedSelfTest extends GridCommonAbstractTest { /** */ private static final UUID nodeId = UUID.randomUUID(); @@ -44,9 +45,6 @@ public class GridCacheMvccPartitionedSelfTest extends GridCommonAbstractTest { /** Grid. */ private IgniteKernal grid; - /** VM ip finder for TCP discovery. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * */ @@ -68,12 +66,6 @@ public GridCacheMvccPartitionedSelfTest() { @Override protected IgniteConfiguration getConfiguration() throws Exception { IgniteConfiguration cfg = super.getConfiguration(); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -88,6 +80,7 @@ public GridCacheMvccPartitionedSelfTest() { /** * Tests remote candidates. */ + @Test public void testNearLocalsWithPending() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -122,6 +115,7 @@ public void testNearLocalsWithPending() { /** * Tests remote candidates. */ + @Test public void testNearLocalsWithCommitted() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -155,6 +149,7 @@ public void testNearLocalsWithCommitted() { /** * Tests remote candidates. */ + @Test public void testNearLocalsWithRolledback() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -188,6 +183,7 @@ public void testNearLocalsWithRolledback() { /** * Tests remote candidates. */ + @Test public void testNearLocals() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -218,6 +214,7 @@ public void testNearLocals() { /** * Tests remote candidates. 
*/ + @Test public void testNearLocalsWithOwned() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -257,6 +254,7 @@ public void testNearLocalsWithOwned() { /** * @throws Exception If failed. */ + @Test public void testAddPendingRemote0() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -289,6 +287,7 @@ public void testAddPendingRemote0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddPendingRemote1() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -332,6 +331,7 @@ public void testAddPendingRemote1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddPendingRemote2() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -376,6 +376,7 @@ public void testAddPendingRemote2() throws Exception { /** * Tests salvageRemote method */ + @Test public void testSalvageRemote() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -434,6 +435,7 @@ public void testSalvageRemote() { /** * @throws Exception If failed. */ + @Test public void testNearRemoteConsistentOrdering0() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -472,6 +474,7 @@ public void testNearRemoteConsistentOrdering0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearRemoteConsistentOrdering1() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -517,6 +520,7 @@ public void testNearRemoteConsistentOrdering1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearRemoteConsistentOrdering2() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -562,6 +566,7 @@ public void testNearRemoteConsistentOrdering2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNearRemoteConsistentOrdering3() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -601,6 +606,7 @@ public void testNearRemoteConsistentOrdering3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSerializableReadLocksAdd() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -678,6 +684,7 @@ public void testSerializableReadLocksAdd() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSerializableReadLocksAssign() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -817,6 +824,7 @@ private void checkCandidates(CacheLockCandidates all, GridCacheVersion...vers) { /** * @throws Exception If failed. */ + @Test public void testSerializableLocks() throws Exception { checkSerializableAdd(false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccSelfTest.java index 6f52cce302d09..ef653d308f135 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheMvccSelfTest.java @@ -27,23 +27,21 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.marshaller.Marshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * 
Test cases for multi-threaded tests. */ +@RunWith(JUnit4.class) public class GridCacheMvccSelfTest extends GridCommonAbstractTest { /** Grid. */ private IgniteKernal grid; - /** VM ip finder for TCP discovery. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * */ @@ -65,12 +63,6 @@ public GridCacheMvccSelfTest() { @Override protected IgniteConfiguration getConfiguration() throws Exception { IgniteConfiguration cfg = super.getConfiguration(); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(REPLICATED); @@ -83,6 +75,7 @@ public GridCacheMvccSelfTest() { /** * @throws Exception If failed. */ + @Test public void testMarshalUnmarshalCandidate() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -115,6 +108,7 @@ public void testMarshalUnmarshalCandidate() throws Exception { /** * Tests remote candidates. */ + @Test public void testRemotes() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -266,6 +260,7 @@ public void testRemotes() { /** * Tests that orderOwned does not reorder owned locks. */ + @Test public void testNearRemoteWithOwned() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -308,6 +303,7 @@ public void testNearRemoteWithOwned() { /** * Tests that orderOwned does not reorder owned locks. */ + @Test public void testNearRemoteWithOwned1() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -354,6 +350,7 @@ public void testNearRemoteWithOwned1() { /** * Tests that orderOwned does not reorder owned locks. */ + @Test public void testNearRemoteWithOwned2() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -402,6 +399,7 @@ public void testNearRemoteWithOwned2() { /** * Tests remote candidates. 
*/ + @Test public void testLocal() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -476,6 +474,7 @@ public void testLocal() { /** * Tests assignment of local candidates when remote exist. */ + @Test public void testLocalWithRemote() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -512,6 +511,7 @@ public void testLocalWithRemote() { /** * */ + @Test public void testCompletedWithBaseInTheMiddle() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -565,6 +565,7 @@ public void testCompletedWithBaseInTheMiddle() { /** * */ + @Test public void testCompletedWithCompletedBaseInTheMiddle() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -607,6 +608,7 @@ public void testCompletedWithCompletedBaseInTheMiddle() { /** * */ + @Test public void testCompletedTwiceWithBaseInTheMiddle() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -653,6 +655,7 @@ public void testCompletedTwiceWithBaseInTheMiddle() { /** * */ + @Test public void testCompletedWithBaseInTheMiddleNoChange() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -692,6 +695,7 @@ public void testCompletedWithBaseInTheMiddleNoChange() { /** * */ + @Test public void testCompletedWithBaseInTheBeginning() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -734,6 +738,7 @@ public void testCompletedWithBaseInTheBeginning() { /** * This case should never happen, nevertheless we need to test for it. */ + @Test public void testCompletedWithBaseInTheBeginningNoChange() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -773,6 +778,7 @@ public void testCompletedWithBaseInTheBeginningNoChange() { /** * This case should never happen, nevertheless we need to test for it. 
*/ + @Test public void testCompletedWithBaseInTheEndNoChange() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -812,6 +818,7 @@ public void testCompletedWithBaseInTheEndNoChange() { /** * */ + @Test public void testCompletedWithBaseNotPresentInTheMiddle() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -853,6 +860,7 @@ public void testCompletedWithBaseNotPresentInTheMiddle() { /** * */ + @Test public void testCompletedWithBaseNotPresentInTheMiddleNoChange() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -888,6 +896,7 @@ public void testCompletedWithBaseNotPresentInTheMiddleNoChange() { /** * */ + @Test public void testCompletedWithBaseNotPresentInTheBeginning() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -929,6 +938,7 @@ public void testCompletedWithBaseNotPresentInTheBeginning() { /** * */ + @Test public void testCompletedWithBaseNotPresentInTheBeginningNoChange() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -970,6 +980,7 @@ public void testCompletedWithBaseNotPresentInTheBeginningNoChange() { /** * */ + @Test public void testCompletedWithBaseNotPresentInTheEndNoChange() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -1007,6 +1018,7 @@ public void testCompletedWithBaseNotPresentInTheEndNoChange() { /** * Test local and remote candidates together. */ + @Test public void testLocalAndRemote() { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -1133,6 +1145,7 @@ public void testLocalAndRemote() { /** * @throws Exception If test failed. */ + @Test public void testMultipleLocalAndRemoteLocks1() throws Exception { UUID nodeId = UUID.randomUUID(); @@ -1206,6 +1219,7 @@ public void testMultipleLocalAndRemoteLocks1() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testMultipleLocalAndRemoteLocks2() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -1293,6 +1307,7 @@ public void testMultipleLocalAndRemoteLocks2() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testMultipleLocalLocks() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -1334,6 +1349,7 @@ public void testMultipleLocalLocks() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"ObjectEquality"}) + @Test public void testUsedCandidates() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -1399,6 +1415,7 @@ public void testUsedCandidates() throws Exception { /** * Test 2 keys with candidates in reverse order. */ + @Test public void testReverseOrder1() { UUID id = UUID.randomUUID(); @@ -1452,6 +1469,7 @@ public void testReverseOrder1() { * * @throws Exception If failed. */ + @Test public void testReverseOrder2() throws Exception { UUID id = UUID.randomUUID(); @@ -1514,6 +1532,7 @@ public void testReverseOrder2() throws Exception { * * @throws Exception If failed. */ + @Test public void testReverseOrder3() throws Exception { GridCacheAdapter cache = grid.internalCache(DEFAULT_CACHE_NAME); @@ -1562,6 +1581,7 @@ public void testReverseOrder3() throws Exception { * * @throws Exception If failed. 
      */
+    @Test
     public void testReverseOrder4() throws Exception {
         UUID id = UUID.randomUUID();
 
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheNestedTxAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheNestedTxAbstractTest.java
index 762cc4d9d1484..fc38fb88d956b 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheNestedTxAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheNestedTxAbstractTest.java
@@ -26,14 +26,13 @@
 import java.util.concurrent.locks.Lock;
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.IgniteException;
-import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.GridKernalContext;
 import org.apache.ignite.internal.IgniteKernal;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
@@ -43,10 +42,8 @@
 /**
  * Nested transaction emulation.
  */
+@RunWith(JUnit4.class)
 public class GridCacheNestedTxAbstractTest extends GridCommonAbstractTest {
-    /** */
-    protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Counter key. */
     private static final String CNTR_KEY = "CNTR_KEY";
 
@@ -62,19 +59,6 @@ public class GridCacheNestedTxAbstractTest extends GridCommonAbstractTest {
     /** */
     private static final AtomicInteger globalCntr = new AtomicInteger();
 
-    /** {@inheritDoc} */
-    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
-        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
-
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
-        return cfg;
-    }
-
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
         for (int i = 0; i < GRID_CNT; i++)
@@ -105,6 +89,7 @@ protected GridCacheNestedTxAbstractTest() {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testTwoTx() throws Exception {
         final IgniteCache c = grid(0).cache(DEFAULT_CACHE_NAME);
 
@@ -136,6 +121,7 @@ public void testTwoTx() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testLockAndTx() throws Exception {
         final IgniteCache c = grid(0).cache(DEFAULT_CACHE_NAME);
 
@@ -212,6 +198,7 @@ public void testLockAndTx() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testLockAndTx1() throws Exception {
         final IgniteCache c = grid(0).cache(DEFAULT_CACHE_NAME);
 
@@ -281,4 +268,4 @@ public void testLockAndTx1() throws Exception {
         for (int i = 0; i < globalCntr.get(); i++)
             assertNotNull(c1.get(i));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheObjectToStringSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheObjectToStringSelfTest.java
index 33b7033df911e..aa810122ac05c 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheObjectToStringSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheObjectToStringSelfTest.java
@@ -27,11 +27,12 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
 import org.apache.ignite.internal.IgniteKernal;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.LOCAL;
@@ -43,10 +44,8 @@
 /**
  * Tests that common cache objects' toString() methods do not lead to stack overflow.
  */
+@RunWith(JUnit4.class)
 public class GridCacheObjectToStringSelfTest extends GridCommonAbstractTest {
-    /** VM ip finder for TCP discovery. */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Cache mode for test. */
     private CacheMode cacheMode;
 
@@ -56,14 +55,17 @@ public class GridCacheObjectToStringSelfTest extends GridCommonAbstractTest {
     /** Near enabled flag. */
     private boolean nearEnabled;
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION);
+
+        super.beforeTestsStarted();
+    }
+
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-        discoSpi.setIpFinder(ipFinder);
-        cfg.setDiscoverySpi(discoSpi);
-
         CacheConfiguration cacheCfg = defaultCacheConfiguration();
 
         cacheCfg.setCacheMode(cacheMode);
@@ -85,6 +87,7 @@ public class GridCacheObjectToStringSelfTest extends GridCommonAbstractTest {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testLocalCacheFifoEvictionPolicy() throws Exception {
         cacheMode = LOCAL;
         evictionPlc = new FifoEvictionPolicy();
@@ -93,6 +96,7 @@ public void testLocalCacheFifoEvictionPolicy() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testLocalCacheLruEvictionPolicy() throws Exception {
         cacheMode = LOCAL;
         evictionPlc = new LruEvictionPolicy();
@@ -101,6 +105,7 @@ public void testLocalCacheLruEvictionPolicy() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testReplicatedCacheFifoEvictionPolicy() throws Exception {
         cacheMode = REPLICATED;
         evictionPlc = new FifoEvictionPolicy();
@@ -109,6 +114,7 @@ public void testReplicatedCacheFifoEvictionPolicy() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
    public void testReplicatedCacheLruEvictionPolicy() throws Exception {
         cacheMode = REPLICATED;
         evictionPlc = new LruEvictionPolicy();
@@ -117,6 +123,7 @@ public void testReplicatedCacheLruEvictionPolicy() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitionedCacheFifoEvictionPolicy() throws Exception {
         cacheMode = PARTITIONED;
         nearEnabled = true;
@@ -126,6 +133,7 @@ public void testPartitionedCacheFifoEvictionPolicy() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitionedCacheLruEvictionPolicy() throws Exception {
         cacheMode = PARTITIONED;
         nearEnabled = true;
@@ -135,6 +143,7 @@ public void testPartitionedCacheLruEvictionPolicy() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testColocatedCacheFifoEvictionPolicy() throws Exception {
         cacheMode = PARTITIONED;
         nearEnabled = false;
@@ -144,6 +153,7 @@ public void testColocatedCacheFifoEvictionPolicy() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testColocatedCacheLruEvictionPolicy() throws Exception {
         cacheMode = PARTITIONED;
         nearEnabled = false;
@@ -188,4 +198,4 @@ private void checkToString() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest.java
index afc41ffcd34ab..c0b0ddab0ca47 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest.java
@@ -28,6 +28,9 @@
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.lang.IgnitePredicate;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -35,6 +38,7 @@
 /**
  * Multithreaded update test with off heap enabled.
 */
+@RunWith(JUnit4.class)
 public abstract class GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest extends GridCacheAbstractSelfTest {
     /** */
     protected static volatile boolean failed;
@@ -71,6 +75,7 @@ public abstract class GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest extend
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTransform() throws Exception {
         testTransform(keyForNode(0));
 
@@ -115,6 +120,7 @@ private void testTransform(final Integer key) throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPut() throws Exception {
         testPut(keyForNode(0));
 
@@ -161,6 +167,7 @@ private void testPut(final Integer key) throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutxIfAbsent() throws Exception {
         testPutxIfAbsent(keyForNode(0));
 
@@ -205,6 +212,7 @@ private void testPutxIfAbsent(final Integer key) throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGet() throws Exception {
         testPutGet(keyForNode(0));
 
@@ -345,4 +353,4 @@ else if (e.getValue() == null) {
             return true;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateSelfTest.java
index 40e2e23322f9e..4f50d384920f8 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapMultiThreadedUpdateSelfTest.java
@@ -23,6 +23,9 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
@@ -31,6 +34,7 @@
 /**
  * Multithreaded update test with off heap enabled.
 */
+@RunWith(JUnit4.class)
 public class GridCacheOffHeapMultiThreadedUpdateSelfTest extends GridCacheOffHeapMultiThreadedUpdateAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected long getTestTimeout() {
@@ -40,6 +44,7 @@ public class GridCacheOffHeapMultiThreadedUpdateSelfTest extends GridCacheOffHea
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTransformTx() throws Exception {
         info(">>> PESSIMISTIC node 0");
 
@@ -109,6 +114,7 @@ private void testTransformTx(final Integer key, final TransactionConcurrency txC
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutTxPessimistic() throws Exception {
         testPutTx(keyForNode(0), PESSIMISTIC);
 
@@ -121,6 +127,7 @@ public void testPutTxPessimistic() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testPutTxOptimistic() throws Exception {
         testPutTx(keyForNode(0), OPTIMISTIC);
 
@@ -170,6 +177,7 @@ private void testPutTx(final Integer key, final TransactionConcurrency txConcurr
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutxIfAbsentTxPessimistic() throws Exception {
         testPutxIfAbsentTx(keyForNode(0), PESSIMISTIC);
 
@@ -182,6 +190,7 @@ public void testPutxIfAbsentTxPessimistic() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testPutxIfAbsentTxOptimistic() throws Exception {
         testPutxIfAbsentTx(keyForNode(0), OPTIMISTIC);
 
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapUpdateSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapUpdateSelfTest.java
index b8f68585a2a03..dac3e6713628b 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapUpdateSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapUpdateSelfTest.java
@@ -26,6 +26,9 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
@@ -35,6 +38,7 @@
 /**
  * Check for specific support issue.
 */
+@RunWith(JUnit4.class)
 public class GridCacheOffheapUpdateSelfTest extends GridCommonAbstractTest {
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -56,6 +60,7 @@ public class GridCacheOffheapUpdateSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testUpdateInPessimisticTxOnRemoteNode() throws Exception {
         try {
             Ignite ignite = startGrids(2);
@@ -99,6 +104,7 @@ public void testUpdateInPessimisticTxOnRemoteNode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReadEvictedPartition() throws Exception {
         try {
             Ignite grid = startGrid(0);
@@ -127,16 +133,23 @@ public void testReadEvictedPartition() throws Exception {
             assertEquals(10, cache.get(key));
 
-            try (Transaction ignored = grid.transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-                assertEquals(10, cache.get(key));
-            }
+            if(((IgniteCacheProxy)cache).context().config().getAtomicityMode() != CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT) {
+                try (Transaction ignored = grid.transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                    assertEquals(10, cache.get(key));
+                }
 
-            try (Transaction ignored = grid.transactions().txStart(PESSIMISTIC, READ_COMMITTED)) {
-                assertEquals(10, cache.get(key));
+                try (Transaction ignored = grid.transactions().txStart(PESSIMISTIC, READ_COMMITTED)) {
+                    assertEquals(10, cache.get(key));
+                }
+            }
+            else {
+                try (Transaction ignored = grid.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+                    assertEquals(10, cache.get(key));
+                }
             }
         }
         finally {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOnCopyFlagAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOnCopyFlagAbstractSelfTest.java
index 2a90bf608b3f1..15cb4891b3533 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOnCopyFlagAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOnCopyFlagAbstractSelfTest.java
@@ -36,21 +36,19 @@
 import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteBiTuple;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.junit.Assert.assertNotEquals;
 
 /**
  * Tests that cache value is copied for get, interceptor and invoke closure.
  */
+@RunWith(JUnit4.class)
 public abstract class GridCacheOnCopyFlagAbstractSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     public static final int ITER_CNT = 1000;
 
@@ -83,12 +81,6 @@ public abstract class GridCacheOnCopyFlagAbstractSelfTest extends GridCommonAbst
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(IP_FINDER);
-
-        c.setDiscoverySpi(spi);
-
         c.setPeerClassLoadingEnabled(p2pEnabled);
 
         c.getTransactionConfiguration().setTxSerializableEnabled(true);
@@ -117,6 +109,7 @@ protected CacheConfiguration cacheConfiguration() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCopyOnReadFlagP2PEnabled() throws Exception {
         doTest(true);
     }
@@ -124,6 +117,7 @@ public void testCopyOnReadFlagP2PEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCopyOnReadFlagP2PDisbaled() throws Exception {
         doTest(false);
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOrderedPreloadingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOrderedPreloadingSelfTest.java
index 34b5876f3e0fd..c81a2b332e960 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOrderedPreloadingSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOrderedPreloadingSelfTest.java
@@ -31,10 +31,10 @@
 import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.util.future.GridFutureAdapter;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
@@ -43,10 +43,8 @@
 /**
  * Checks ordered preloading.
  */
+@RunWith(JUnit4.class)
 public class GridCacheOrderedPreloadingSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Number of grids in test. */
     private static final int GRID_CNT = 4;
 
@@ -86,12 +84,6 @@ public class GridCacheOrderedPreloadingSelfTest extends GridCommonAbstractTest {
             cacheConfig(firstCacheMode, 1, FIRST_CACHE_NAME),
             cacheConfig(secondCacheMode, 2, SECOND_CACHE_NAME));
 
-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-
-        discoSpi.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(discoSpi);
-
         Map, int[]> listeners = new HashMap<>();
 
         listeners.put(new IgnitePredicate() {
@@ -125,6 +117,7 @@ private CacheConfiguration cacheConfig(CacheMode cacheMode, int preloadOrder, St
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPreloadOrderPartitionedPartitioned() throws Exception {
         checkPreloadOrder(PARTITIONED, PARTITIONED);
     }
@@ -132,6 +125,7 @@ public void testPreloadOrderPartitionedPartitioned() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPreloadOrderReplicatedReplicated() throws Exception {
         checkPreloadOrder(REPLICATED, REPLICATED);
     }
@@ -139,6 +133,7 @@ public void testPreloadOrderReplicatedReplicated() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPreloadOrderPartitionedReplicated() throws Exception {
         checkPreloadOrder(PARTITIONED, REPLICATED);
     }
@@ -146,6 +141,7 @@ public void testPreloadOrderPartitionedReplicated() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPreloadOrderReplicatedPartitioned() throws Exception {
         checkPreloadOrder(REPLICATED, PARTITIONED);
     }
@@ -203,4 +199,4 @@ private void checkPreloadOrder(CacheMode first, CacheMode second) throws Excepti
     private int gridIdx(Event evt) {
         return getTestIgniteInstanceIndex((String)evt.node().attributes().get(GRID_NAME_ATTR));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheP2PUndeploySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheP2PUndeploySelfTest.java
index 8de8e04d0b61c..21b936745aedc 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheP2PUndeploySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheP2PUndeploySelfTest.java
@@ -32,11 +32,11 @@
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.marshaller.jdk.JdkMarshaller;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -48,16 +48,11 @@
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridCacheP2PUndeploySelfTest extends GridCommonAbstractTest {
     /** Test p2p value. */
     private static final String TEST_VALUE = "org.apache.ignite.tests.p2p.GridCacheDeploymentTestValue3";
 
-    /** */
-    private static final long OFFHEAP = 0;// 4 * 1024 * 1024;
-
-    /** */
-    private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private final AtomicInteger idxGen = new AtomicInteger();
 
@@ -73,12 +68,6 @@ public class GridCacheP2PUndeploySelfTest extends GridCommonAbstractTest {
         cfg.setNetworkTimeout(2000);
 
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
         cfg.setMarshaller(new JdkMarshaller());
 
         CacheConfiguration repCacheCfg = defaultCacheConfiguration();
@@ -121,6 +110,7 @@
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testSwapP2PReplicated() throws Exception {
         offheap = false;
 
@@ -128,6 +118,7 @@ public void testSwapP2PReplicated() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testOffHeapP2PReplicated() throws Exception {
         offheap = true;
 
@@ -135,6 +126,7 @@ public void testOffHeapP2PReplicated() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testSwapP2PPartitioned() throws Exception {
         offheap = false;
 
@@ -142,6 +134,7 @@ public void testSwapP2PPartitioned() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testOffheapP2PPartitioned() throws Exception {
         offheap = true;
 
@@ -149,6 +142,7 @@ public void testOffheapP2PPartitioned() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testSwapP2PReplicatedNoPreloading() throws Exception {
         mode = NONE;
         offheap = false;
 
@@ -157,6 +151,7 @@ public void testSwapP2PReplicatedNoPreloading() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testOffHeapP2PReplicatedNoPreloading() throws Exception {
         mode = NONE;
         offheap = true;
 
@@ -165,6 +160,7 @@ public void testOffHeapP2PReplicatedNoPreloading() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testSwapP2PPartitionedNoPreloading() throws Exception {
         mode = NONE;
         offheap = false;
 
@@ -173,6 +169,7 @@ public void testSwapP2PPartitionedNoPreloading() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testOffHeapP2PPartitionedNoPreloading() throws Exception {
         mode = NONE;
         offheap = true;
 
@@ -299,4 +296,4 @@ private boolean waitCacheEmpty(IgniteCache cache, long timeout)
         return cache.localSize() == 0;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedGetSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedGetSelfTest.java
index f5cc65b9b8ec5..b44dceb59b425 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedGetSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedGetSelfTest.java
@@ -29,11 +29,11 @@
 import org.apache.ignite.internal.managers.communication.GridMessageListener;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetRequest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearSingleGetRequest;
-import org.apache.ignite.spi.discovery.DiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
@@ -43,10 +43,8 @@
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedGetSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int GRID_CNT = 3;
 
@@ -63,23 +61,11 @@ public class GridCachePartitionedGetSelfTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        cfg.setDiscoverySpi(discoverySpi());
         cfg.setCacheConfiguration(cacheConfiguration());
 
         return cfg;
     }
 
-    /**
-     * @return Discovery SPI;
-     */
-    private DiscoverySpi discoverySpi() {
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(IP_FINDER);
-
-        return spi;
-    }
-
     /**
      * @return Cache configuration.
      */
@@ -109,6 +95,7 @@ private CacheConfiguration cacheConfiguration() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetFromPrimaryNode() throws Exception {
         for (int i = 0; i < GRID_CNT; i++) {
             IgniteCache c = grid(i).cache(DEFAULT_CACHE_NAME);
@@ -128,7 +115,13 @@ public void testGetFromPrimaryNode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetFromBackupNode() throws Exception {
+        if (MvccFeatureChecker.forcedMvcc())
+            fail("https://issues.apache.org/jira/browse/IGNITE-10274");
+
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION);
+
         for (int i = 0; i < GRID_CNT; i++) {
             IgniteCache c = grid(i).cache(DEFAULT_CACHE_NAME);
 
@@ -159,6 +152,7 @@ public void testGetFromBackupNode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetFromNearNode() throws Exception {
         for (int i = 0; i < GRID_CNT; i++) {
             IgniteCache c = grid(i).cache(DEFAULT_CACHE_NAME);
@@ -235,4 +229,4 @@ private void prepare() throws Exception {
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedProjectionAffinitySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedProjectionAffinitySelfTest.java
index 7f589fe82fa75..e3c4129d174fb 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedProjectionAffinitySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedProjectionAffinitySelfTest.java
@@ -23,11 +23,11 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.util.typedef.F;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -36,7 +36,7 @@
 /**
  * Partitioned affinity test for projections.
 */
-@SuppressWarnings({"PointlessArithmeticExpression"})
+@RunWith(JUnit4.class)
 public class GridCachePartitionedProjectionAffinitySelfTest extends GridCommonAbstractTest {
     /** Backup count. */
     private static final int BACKUPS = 1;
@@ -44,9 +44,6 @@ public class GridCachePartitionedProjectionAffinitySelfTest extends GridCommonAb
     /** Grid count. */
     private static final int GRIDS = 3;
 
-    /** */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
@@ -61,12 +58,6 @@ public class GridCachePartitionedProjectionAffinitySelfTest extends GridCommonAb
         cfg.setCacheConfiguration(cacheCfg);
 
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
         return cfg;
     }
 
@@ -81,6 +72,7 @@ public class GridCachePartitionedProjectionAffinitySelfTest extends GridCommonAb
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testAffinity() throws Exception {
         waitTopologyUpdate();
 
@@ -92,7 +84,7 @@ public void testAffinity() throws Exception {
     }
 
     /** @throws Exception If failed. */
-    @SuppressWarnings("deprecation")
+    @Test
     public void testProjectionAffinity() throws Exception {
         waitTopologyUpdate();
 
@@ -110,7 +102,6 @@ public void testProjectionAffinity() throws Exception {
     }
 
     /** @throws Exception If failed. */
-    @SuppressWarnings("BusyWait")
     private void waitTopologyUpdate() throws Exception {
         GridTestUtils.waitTopologyUpdate(DEFAULT_CACHE_NAME, BACKUPS, log());
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedWritesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedWritesTest.java
index 0d00bef472dac..9e4f251bf78b0 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedWritesTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePartitionedWritesTest.java
@@ -27,8 +27,12 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 
@@ -38,10 +42,16 @@
  * wrapped only in near cache (see {@link GridCacheProcessor} init logic).
 */
 @SuppressWarnings({"unchecked"})
+@RunWith(JUnit4.class)
 public class GridCachePartitionedWritesTest extends GridCommonAbstractTest {
     /** Cache store. */
     private CacheStore store;
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
+    }
+
     /** {@inheritDoc} */
     @Override protected final IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
@@ -78,6 +88,7 @@ public class GridCachePartitionedWritesTest extends GridCommonAbstractTest {
     }
 
     /** @throws Exception If test fails. */
+    @Test
     public void testWrite() throws Exception {
         final AtomicInteger putCnt = new AtomicInteger();
         final AtomicInteger rmvCnt = new AtomicInteger();
@@ -143,4 +154,4 @@ public void testWrite() throws Exception {
             stopGrid();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePreloadingEvictionsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePreloadingEvictionsSelfTest.java
index f13485252c41f..d97b6ac61b774 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePreloadingEvictionsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePreloadingEvictionsSelfTest.java
@@ -29,7 +29,6 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.events.Event;
-import org.apache.ignite.internal.IgniteEx;
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.IgniteInterruptedCheckedException;
 import org.apache.ignite.internal.IgniteKernal;
@@ -39,12 +38,13 @@
 import org.apache.ignite.internal.util.typedef.internal.SB;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -55,13 +55,11 @@
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridCachePreloadingEvictionsSelfTest extends GridCommonAbstractTest {
     /** */
     private static final String VALUE = createValue();
 
-    public static final CachePeekMode[] ALL_PEEK_MODES = new CachePeekMode[]{CachePeekMode.ALL};
-
-    /** */
-    private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
+    public static final CachePeekMode[] ALL_PEEK_MODES = new CachePeekMode[] {CachePeekMode.ALL};
 
     /** */
     private final AtomicInteger idxGen = new AtomicInteger();
 
@@ -70,12 +68,6 @@ public class GridCachePreloadingEvictionsSelfTest extends GridCommonAbstractTest
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
         CacheConfiguration partCacheCfg = defaultCacheConfiguration();
 
         partCacheCfg.setCacheMode(PARTITIONED);
@@ -101,8 +93,11 @@ public class GridCachePreloadingEvictionsSelfTest extends GridCommonAbstractTest
     /**
      * @throws Exception If failed.
*/ + @Test public void testEvictions() throws Exception { try { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); + final Ignite ignite1 = startGrid(1); final IgniteCache cache1 = ignite1.cache(DEFAULT_CACHE_NAME); @@ -219,22 +214,14 @@ private void sleepUntilCashesEqualize(final Ignite ignite1, final Ignite ignite2 * @throws Exception If failed. */ private void checkCachesConsistency(Ignite ignite1, Ignite ignite2) throws Exception { - IgniteKernal g1 = (IgniteKernal) ignite1; - IgniteKernal g2 = (IgniteKernal) ignite2; + IgniteKernal g1 = (IgniteKernal)ignite1; + IgniteKernal g2 = (IgniteKernal)ignite2; GridCacheAdapter cache1 = g1.internalCache(DEFAULT_CACHE_NAME); GridCacheAdapter cache2 = g2.internalCache(DEFAULT_CACHE_NAME); - for (int i = 0; i < 3; i++) { - if (cache1.size(ALL_PEEK_MODES) != cache2.size(ALL_PEEK_MODES)) { - U.warn(log, "Sizes do not match (will retry in 1000 ms) [s1=" + cache1.size(ALL_PEEK_MODES) + - ", s2=" + cache2.size(ALL_PEEK_MODES) + ']'); - - U.sleep(1000); - } - else - break; - } + // Sleeping to allow the cache sizes to settle down. 
+ U.sleep(3000); info("Cache1 size: " + cache1.size(ALL_PEEK_MODES)); info("Cache2 size: " + cache2.size(ALL_PEEK_MODES)); @@ -243,7 +230,7 @@ private void checkCachesConsistency(Ignite ignite1, Ignite ignite2) throws Excep "Sizes do not match [s1=" + cache1.size(ALL_PEEK_MODES) + ", s2=" + cache2.size(ALL_PEEK_MODES) + ']'; for (Integer key : cache1.keySet()) { - Object e = cache1.localPeek(key, new CachePeekMode[] {CachePeekMode.ONHEAP}, null); + Object e = cache1.localPeek(key, new CachePeekMode[] {CachePeekMode.ONHEAP}); if (e != null) assert cache2.containsKey(key) : "Cache2 does not contain key: " + key; @@ -261,4 +248,4 @@ private static String createValue() { return sb.toString(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePutAllFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePutAllFailoverSelfTest.java index 1d9a4eca4ffd6..9d078b8dcd405 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePutAllFailoverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCachePutAllFailoverSelfTest.java @@ -57,12 +57,13 @@ import org.apache.ignite.spi.IgniteSpiConsistencyChecked; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.failover.FailoverContext; import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -72,10 +73,8 @@ /** * Tests putAll() method along with failover and different configurations. */ +@RunWith(JUnit4.class) public class GridCachePutAllFailoverSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Size of the test map. */ private static final int TEST_MAP_SIZE = 30_000; @@ -120,21 +119,10 @@ public class GridCachePutAllFailoverSelfTest extends GridCommonAbstractTest { /** Test failover SPI. */ private MasterFailoverSpi failoverSpi = new MasterFailoverSpi((IgnitePredicate)workerNodesFilter); - /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true"); - - super.beforeTestsStarted(); - } - - /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); - } - /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverColocatedNearEnabledThreeBackups() throws Exception { checkPutAllFailoverColocated(true, 7, 3); } @@ -142,6 +130,7 @@ public void testPutAllFailoverColocatedNearEnabledThreeBackups() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverColocatedNearDisabledThreeBackups() throws Exception { checkPutAllFailoverColocated(false, 7, 3); } @@ -149,6 +138,7 @@ public void testPutAllFailoverColocatedNearDisabledThreeBackups() throws Excepti /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverNearEnabledOneBackup() throws Exception { checkPutAllFailover(true, 3, 1); } @@ -156,6 +146,7 @@ public void testPutAllFailoverNearEnabledOneBackup() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutAllFailoverNearDisabledOneBackup() throws Exception { checkPutAllFailover(false, 3, 1); } @@ -163,6 +154,7 @@ public void testPutAllFailoverNearDisabledOneBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverNearEnabledTwoBackups() throws Exception { checkPutAllFailover(true, 5, 2); } @@ -170,6 +162,7 @@ public void testPutAllFailoverNearEnabledTwoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverNearDisabledTwoBackups() throws Exception { checkPutAllFailover(false, 5, 2); } @@ -177,6 +170,7 @@ public void testPutAllFailoverNearDisabledTwoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverNearEnabledThreeBackups() throws Exception { checkPutAllFailover(true, 7, 3); } @@ -184,6 +178,7 @@ public void testPutAllFailoverNearEnabledThreeBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverNearDisabledThreeBackups() throws Exception { checkPutAllFailover(false, 7, 3); } @@ -191,6 +186,7 @@ public void testPutAllFailoverNearDisabledThreeBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverColocatedNearEnabledOneBackup() throws Exception { checkPutAllFailoverColocated(true, 3, 1); } @@ -198,6 +194,7 @@ public void testPutAllFailoverColocatedNearEnabledOneBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverColocatedNearDisabledOneBackup() throws Exception { checkPutAllFailoverColocated(false, 3, 1); } @@ -205,6 +202,7 @@ public void testPutAllFailoverColocatedNearDisabledOneBackup() throws Exception /** * @throws Exception If failed. 
*/ + @Test public void testPutAllFailoverColocatedNearEnabledTwoBackups() throws Exception { checkPutAllFailoverColocated(true, 5, 2); } @@ -212,6 +210,7 @@ public void testPutAllFailoverColocatedNearEnabledTwoBackups() throws Exception /** * @throws Exception If failed. */ + @Test public void testPutAllFailoverColocatedNearDisabledTwoBackups() throws Exception { checkPutAllFailoverColocated(false, 5, 2); } @@ -671,10 +670,9 @@ protected CacheAtomicityMode atomicityMode() { cfg.setDeploymentMode(DeploymentMode.CONTINUOUS); - TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi(); + TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); discoverySpi.setAckTimeout(60000); - discoverySpi.setIpFinder(ipFinder); discoverySpi.setForceServerMode(true); cfg.setDiscoverySpi(discoverySpi); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexingDisabledSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexingDisabledSelfTest.java index 92a70847fbc95..fbbacd80480d6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexingDisabledSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexingDisabledSelfTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.cache.query.TextQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheQueryIndexingDisabledSelfTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -56,6 +60,7 @@ private void doTest(Callable c, String expectedMsg) { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testSqlFieldsQuery() throws IgniteCheckedException { // Should not throw despite the cache not having QueryEntities. jcache().query(new SqlFieldsQuery("select * from dual")).getAll(); @@ -64,6 +69,7 @@ public void testSqlFieldsQuery() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testTextQuery() throws IgniteCheckedException { doTest(new Callable() { @Override public Object call() throws IgniteCheckedException { @@ -75,6 +81,7 @@ public void testTextQuery() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testSqlQuery() throws IgniteCheckedException { // Failure occurs not on validation stage, hence specific error message. doTest(new Callable() { @@ -87,7 +94,8 @@ public void testSqlQuery() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. */ + @Test public void testScanQuery() throws IgniteCheckedException { jcache().query(new ScanQuery<>(null)).getAll(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryInternalKeysSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryInternalKeysSelfTest.java index 288e5721dc091..18d37195aea90 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryInternalKeysSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryInternalKeysSelfTest.java @@ -31,6 +31,9 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.P1; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -38,6 +41,7 @@ /** * Cache query 
internal keys self test. */ +@RunWith(JUnit4.class) public class GridCacheQueryInternalKeysSelfTest extends GridCacheAbstractSelfTest { /** Grid count. */ private static final int GRID_CNT = 2; @@ -68,6 +72,7 @@ public class GridCacheQueryInternalKeysSelfTest extends GridCacheAbstractSelfTes * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testInternalKeysPreloading() throws Exception { try { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -112,4 +117,4 @@ public void testInternalKeysPreloading() throws Exception { } }); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySqlFieldInlineSizeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySqlFieldInlineSizeSelfTest.java index a2571602471d4..11ceef6c75247 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySqlFieldInlineSizeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySqlFieldInlineSizeSelfTest.java @@ -26,15 +26,20 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests cache configuration with inlineSize property of the QuerySqlField annotation. */ -@SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "unchecked", "unused"}) +@SuppressWarnings({"unchecked", "unused"}) +@RunWith(JUnit4.class) public class GridCacheQuerySqlFieldInlineSizeSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testSingleFieldIndexes() throws Exception { CacheConfiguration ccfg = defaultCacheConfiguration(); @@ -57,6 +62,7 @@ else if(idx.getFields().containsKey("val1")) /** * @throws Exception If failed. */ + @Test public void testGroupIndex() throws Exception { CacheConfiguration ccfg = defaultCacheConfiguration(); @@ -76,6 +82,7 @@ public void testGroupIndex() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGroupIndexInvalidAnnotaion() throws Exception { final CacheConfiguration ccfg = defaultCacheConfiguration(); @@ -91,6 +98,7 @@ public void testGroupIndexInvalidAnnotaion() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNegativeInlineSize() throws Exception { final CacheConfiguration ccfg = defaultCacheConfiguration(); @@ -157,4 +165,4 @@ static class TestValueNegativeInlineSize { @QuerySqlField(index = true, inlineSize = -10) String val; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReferenceCleanupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReferenceCleanupSelfTest.java index e6a40a64c1819..0849d3a700270 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReferenceCleanupSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReferenceCleanupSelfTest.java @@ -37,8 +37,12 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import 
static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.testframework.GridTestUtils.cacheContext; @@ -46,10 +50,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheReferenceCleanupSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Cache mode for the current test. */ private CacheMode mode; @@ -60,12 +62,6 @@ public class GridCacheReferenceCleanupSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(mode); @@ -79,6 +75,7 @@ public class GridCacheReferenceCleanupSelfTest extends GridCommonAbstractTest { } /** @throws Exception If failed. */ + @Test public void testAtomicLongPartitioned() throws Exception { mode = CacheMode.PARTITIONED; @@ -93,6 +90,7 @@ public void testAtomicLongPartitioned() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAtomicLongReplicated() throws Exception { mode = CacheMode.REPLICATED; @@ -107,7 +105,10 @@ public void testAtomicLongReplicated() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAtomicLongLocal() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + mode = CacheMode.LOCAL; try { @@ -119,6 +120,7 @@ public void testAtomicLongLocal() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOneAsyncOpPartitioned() throws Exception { mode = CacheMode.PARTITIONED; @@ -133,6 +135,7 @@ public void testOneAsyncOpPartitioned() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testOneAsyncOpReplicated() throws Exception { mode = CacheMode.REPLICATED; @@ -147,7 +150,10 @@ public void testOneAsyncOpReplicated() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOneAsyncOpLocal() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + mode = CacheMode.LOCAL; try { @@ -159,6 +165,7 @@ public void testOneAsyncOpLocal() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSeveralAsyncOpsPartitioned() throws Exception { mode = CacheMode.PARTITIONED; @@ -173,6 +180,7 @@ public void testSeveralAsyncOpsPartitioned() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSeveralAsyncOpsReplicated() throws Exception { mode = CacheMode.REPLICATED; @@ -187,7 +195,10 @@ public void testSeveralAsyncOpsReplicated() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSeveralAsyncOpsLocal() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + mode = CacheMode.LOCAL; try { @@ -199,6 +210,7 @@ public void testSeveralAsyncOpsLocal() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSyncOpAsyncCommitPartitioned() throws Exception { mode = CacheMode.PARTITIONED; @@ -213,6 +225,7 @@ public void testSyncOpAsyncCommitPartitioned() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSyncOpAsyncCommitReplicated() throws Exception { mode = CacheMode.REPLICATED; @@ -227,7 +240,10 @@ public void testSyncOpAsyncCommitReplicated() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSyncOpAsyncCommitLocal() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + mode = CacheMode.LOCAL; try { @@ -239,6 +255,7 @@ public void testSyncOpAsyncCommitLocal() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testAsyncOpsAsyncCommitPartitioned() throws Exception { mode = CacheMode.PARTITIONED; @@ -253,6 +270,7 @@ public void testAsyncOpsAsyncCommitPartitioned() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAsyncOpsAsyncCommitReplicated() throws Exception { mode = CacheMode.REPLICATED; @@ -267,7 +285,10 @@ public void testAsyncOpsAsyncCommitReplicated() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAsyncOpsAsyncCommitLocal() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + mode = CacheMode.LOCAL; try { @@ -502,4 +523,4 @@ private TestValue(int i) { this.i = i; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReloadSelfTest.java index 50bed2a611f99..2006a478037bc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReloadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReloadSelfTest.java @@ -30,13 +30,18 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Checks that CacheProjection.reload() operations are performed correctly. */ +@RunWith(JUnit4.class) public class GridCacheReloadSelfTest extends GridCommonAbstractTest { /** Maximum allowed number of cache entries. 
*/ public static final int MAX_CACHE_ENTRIES = 500; @@ -53,6 +58,26 @@ public class GridCacheReloadSelfTest extends GridCommonAbstractTest { /** Near enabled flag. */ private boolean nearEnabled = true; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); + + if (nearEnabled) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.beforeTestsStarted(); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); + + if (nearEnabled) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } + /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { cacheMode = null; @@ -118,7 +143,10 @@ public class GridCacheReloadSelfTest extends GridCommonAbstractTest { * * @throws Exception If error occurs. */ + @Test public void testReloadEvictionLocalCache() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + cacheMode = CacheMode.LOCAL; doTest(); @@ -130,6 +158,7 @@ public void testReloadEvictionLocalCache() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testReloadEvictionPartitionedCacheNearEnabled() throws Exception { cacheMode = PARTITIONED; @@ -142,6 +171,7 @@ public void testReloadEvictionPartitionedCacheNearEnabled() throws Exception { * * @throws Exception If error occurs. 
*/ + @Test public void testReloadEvictionPartitionedCacheNearDisabled() throws Exception { cacheMode = PARTITIONED; nearEnabled = false; @@ -154,6 +184,7 @@ public void testReloadEvictionPartitionedCacheNearDisabled() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testReloadEvictionReplicatedCache() throws Exception { cacheMode = CacheMode.REPLICATED; @@ -180,4 +211,4 @@ private void doTest() throws Exception { stopGrid(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReplicatedSynchronousCommitTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReplicatedSynchronousCommitTest.java index deca296f59033..fddfe1cfeb00c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReplicatedSynchronousCommitTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReplicatedSynchronousCommitTest.java @@ -35,17 +35,18 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * Test cases for preload tests. 
*/ +@RunWith(JUnit4.class) public class GridCacheReplicatedSynchronousCommitTest extends GridCommonAbstractTest { /** */ private static final int ADDITION_CACHE_NUMBER = 2; @@ -56,9 +57,6 @@ public class GridCacheReplicatedSynchronousCommitTest extends GridCommonAbstract /** */ private final Collection commSpis = new ConcurrentLinkedDeque<>(); - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * */ @@ -84,12 +82,6 @@ public GridCacheReplicatedSynchronousCommitTest() { commSpis.add(commSpi); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - return c; } @@ -103,6 +95,7 @@ public GridCacheReplicatedSynchronousCommitTest() { /** * @throws Exception If test failed. */ + @Test public void testSynchronousCommit() throws Exception { try { Ignite firstIgnite = startGrid("1"); @@ -129,6 +122,7 @@ public void testSynchronousCommit() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testSynchronousCommitNodeLeave() throws Exception { try { Ignite ignite1 = startGrid("1"); @@ -202,4 +196,4 @@ public int messagesCount() { super.sendMessage(node, msg, ackClosure); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReturnValueTransferSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReturnValueTransferSelfTest.java index 1b06e00cf5095..1ad3f4cca2ba0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReturnValueTransferSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheReturnValueTransferSelfTest.java @@ -32,6 +32,9 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -39,6 +42,7 @@ /** * Tests transform for extra traffic. */ +@RunWith(JUnit4.class) public class GridCacheReturnValueTransferSelfTest extends GridCommonAbstractTest { /** Distribution mode. */ private boolean cache; @@ -76,6 +80,7 @@ public class GridCacheReturnValueTransferSelfTest extends GridCommonAbstractTest * @throws Exception If failed. * TODO IGNITE-581 enable when fixed. */ + @Test public void testTransformTransactionalNoBackups() throws Exception { // Test works too long and fails. fail("https://issues.apache.org/jira/browse/IGNITE-581"); @@ -87,6 +92,7 @@ public void testTransformTransactionalNoBackups() throws Exception { * @throws Exception If failed. * TODO IGNITE-581 enable when fixed. 
*/ + @Test public void testTransformTransactionalOneBackup() throws Exception { // Test works too long and fails. fail("https://issues.apache.org/jira/browse/IGNITE-581"); @@ -179,4 +185,4 @@ public TestObject() { assert !failDeserialization; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheSlowTxWarnTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheSlowTxWarnTest.java index a37bbf36da7d5..09f81d3e35bf4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheSlowTxWarnTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheSlowTxWarnTest.java @@ -22,11 +22,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,10 +37,8 @@ * {@link org.apache.ignite.IgniteSystemProperties#IGNITE_SLOW_TX_WARN_TIMEOUT} * system property. */ +@RunWith(JUnit4.class) public class GridCacheSlowTxWarnTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -63,18 +61,13 @@ public class GridCacheSlowTxWarnTest extends GridCommonAbstractTest { c.setCacheConfiguration(cc1, cc2, cc3); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - return c; } /** * @throws Exception If failed. */ + @Test public void testWarningOutput() throws Exception { try { IgniteKernal g = (IgniteKernal)startGrid(1); @@ -147,4 +140,4 @@ private void checkCache(Ignite g, String cacheName, boolean simulateTimeout, tx.close(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStopSelfTest.java index 7f82d75d02ca1..f7efc9e398e46 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStopSelfTest.java @@ -27,21 +27,23 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteException; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -51,15 +53,10 @@ /** * Tests correct cache stopping. */ +@RunWith(JUnit4.class) public class GridCacheStopSelfTest extends GridCommonAbstractTest { /** */ - private static final String EXPECTED_MSG = "Cache has been closed or destroyed"; - - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** */ - private boolean atomic; + private CacheAtomicityMode atomicityMode = TRANSACTIONAL; /** */ private boolean replicated; @@ -68,21 +65,14 @@ public class GridCacheStopSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disc = new TcpDiscoverySpi(); - - disc.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disc); + CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME) + .setAtomicityMode(atomicityMode); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - - ccfg.setCacheMode(replicated ? REPLICATED : PARTITIONED); - - if (!replicated) + if (replicated) + ccfg.setCacheMode(REPLICATED); + else ccfg.setBackups(1); - ccfg.setAtomicityMode(atomic ? 
ATOMIC : TRANSACTIONAL); - cfg.setCacheConfiguration(ccfg); return cfg; @@ -96,21 +86,29 @@ public class GridCacheStopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStopExplicitTransactions() throws Exception { + atomicityMode = TRANSACTIONAL; + testStop(true); } /** * @throws Exception If failed. */ + @Test public void testStopImplicitTransactions() throws Exception { + atomicityMode = TRANSACTIONAL; + testStop(false); } /** * @throws Exception If failed. */ + @Test public void testStopExplicitTransactionsReplicated() throws Exception { + atomicityMode = TRANSACTIONAL; replicated = true; testStop(true); @@ -119,7 +117,51 @@ public void testStopExplicitTransactionsReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopImplicitTransactionsReplicated() throws Exception { + atomicityMode = TRANSACTIONAL; + replicated = true; + + testStop(false); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testStopExplicitMvccTransactions() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + + testStop(true); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testStopImplicitMvccTransactions() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + + testStop(false); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testStopExplicitMvccTransactionsReplicated() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + replicated = true; + + testStop(true); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testStopImplicitMvccTransactionsReplicated() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; replicated = true; testStop(false); @@ -128,8 +170,9 @@ public void testStopImplicitTransactionsReplicated() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStopAtomic() throws Exception { - atomic = true; + atomicityMode = ATOMIC; testStop(false); } @@ -137,6 +180,7 @@ public void testStopAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStopMultithreaded() throws Exception { try { startGrid(0); @@ -218,6 +262,7 @@ public void testStopMultithreaded() throws Exception { * @param node Node. * @param cache Cache. */ + @SuppressWarnings("unchecked") private void cacheOperations(Ignite node, IgniteCache cache) { ThreadLocalRandom rnd = ThreadLocalRandom.current(); @@ -227,10 +272,12 @@ private void cacheOperations(Ignite node, IgniteCache cache) { cache.get(key); - try (Transaction tx = node.transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) { - cache.put(key, key); + if (cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() != TRANSACTIONAL_SNAPSHOT) { + try (Transaction tx = node.transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) { + cache.put(key, key); - tx.commit(); + tx.commit(); + } } try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { @@ -260,7 +307,7 @@ private void testStop(final boolean startTx) throws Exception { CacheConfiguration ccfg = cache.getConfiguration(CacheConfiguration.class); - assertEquals(atomic ? ATOMIC : TRANSACTIONAL, ccfg.getAtomicityMode()); + assertEquals(atomicityMode, ccfg.getAtomicityMode()); assertEquals(replicated ? REPLICATED : PARTITIONED, ccfg.getCacheMode()); Collection<IgniteInternalFuture<?>> putFuts = new ArrayList<>(); @@ -272,7 +319,8 @@ private void testStop(final boolean startTx) throws Exception { @Override public Void call() throws Exception { try { if (startTx) { - TransactionConcurrency concurrency = key % 2 == 0 ? OPTIMISTIC : PESSIMISTIC; + TransactionConcurrency concurrency = + atomicityMode != TRANSACTIONAL_SNAPSHOT && (key % 2 == 0) ?
OPTIMISTIC : PESSIMISTIC; try (Transaction tx = grid(0).transactions().txStart(concurrency, REPEATABLE_READ)) { cache.put(key, key); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreManagerDeserializationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreManagerDeserializationTest.java index a1623d2880497..0a97c53d2a818 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreManagerDeserializationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreManagerDeserializationTest.java @@ -39,10 +39,11 @@ import org.apache.ignite.internal.processors.cache.store.CacheLocalStore; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.marshaller.jdk.JdkMarshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -56,16 +57,21 @@ * https://issues.apache.org/jira/browse/IGNITE-2753 * */ +@RunWith(JUnit4.class) public class GridCacheStoreManagerDeserializationTest extends GridCommonAbstractTest { - /** IP finder. */ - protected static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache store. */ protected static final GridCacheLocalTestStore store = new GridCacheLocalTestStore(); /** Test cache name. 
*/ protected static final String CACHE_NAME = "cache_name"; + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** * @return Cache mode. */ @@ -81,7 +87,6 @@ private CacheWriteSynchronizationMode cacheWriteSynchronizationMode() { } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override protected IgniteConfiguration getConfiguration(final String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -90,12 +95,6 @@ private CacheWriteSynchronizationMode cacheWriteSynchronizationMode() { else c.setMarshaller(new JdkMarshaller()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - c.setCacheConfiguration(cacheConfiguration()); return c; @@ -124,7 +123,7 @@ protected CacheConfiguration cacheConfiguration() { cc.setBackups(0); - cc.setAtomicityMode(CacheAtomicityMode.ATOMIC); + cc.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); return cc; } @@ -144,6 +143,7 @@ protected CacheConfiguration cacheConfiguration() { * * @throws Exception If failed. */ + @Test public void testStream() throws Exception { final Ignite grid = startGrid(); @@ -169,6 +169,7 @@ public void testStream() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitionMove() throws Exception { final Ignite grid = startGrid("binaryGrid1"); @@ -203,13 +204,12 @@ public void testPartitionMove() throws Exception { } /** - * TODO GG-11148. - * * Check whether binary objects are stored without unmarshalling via stream API. * * @throws Exception If failed. 
*/ - public void _testBinaryStream() throws Exception { + @Test + public void testBinaryStream() throws Exception { final Ignite grid = startGrid("binaryGrid"); final IgniteCache cache = grid.createCache(CACHE_NAME).withKeepBinary(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStorePutxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStorePutxSelfTest.java deleted file mode 100644 index 2175abb43d39e..0000000000000 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStorePutxSelfTest.java +++ /dev/null @@ -1,159 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.internal.processors.cache; - -import java.util.Collection; -import java.util.Collections; -import java.util.Map; -import java.util.concurrent.atomic.AtomicInteger; -import org.apache.ignite.IgniteCache; -import org.apache.ignite.cache.store.CacheStore; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.lang.IgniteBiInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import org.apache.ignite.transactions.Transaction; -import org.jetbrains.annotations.Nullable; - -import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; -import static org.apache.ignite.cache.CacheMode.PARTITIONED; -import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; - -/** - * Tests for reproduce problem with GG-6895: - * putx calls CacheStore.load() when null GridPredicate passed in to avoid IDE warnings - */ -public class GridCacheStorePutxSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** */ - private static AtomicInteger loads; - - /** {@inheritDoc} */ - @SuppressWarnings("unchecked") - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - - ccfg.setCacheMode(PARTITIONED); - ccfg.setAtomicityMode(TRANSACTIONAL); - ccfg.setWriteSynchronizationMode(FULL_SYNC); - ccfg.setCacheStoreFactory(singletonFactory(new TestStore())); - ccfg.setReadThrough(true); - ccfg.setWriteThrough(true); - 
ccfg.setLoadPreviousValue(true); - - cfg.setCacheConfiguration(ccfg); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - - return cfg; - } - - /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - loads = new AtomicInteger(); - - startGrid(); - } - - /** {@inheritDoc} */ - @Override protected void afterTest() throws Exception { - stopGrid(); - } - - /** - * @throws Exception If failed. - */ - public void testPutShouldNotTriggerLoad() throws Exception { - jcache().put(1, 1); - jcache().put(2, 2); - - assertEquals(0, loads.get()); - } - - /** - * @throws Exception If failed. - */ - public void testPutShouldNotTriggerLoadWithTx() throws Exception { - IgniteCache<Integer, Integer> cache = jcache(); - - try (Transaction tx = grid().transactions().txStart()) { - cache.put(1, 1); - cache.put(2, 2); - - tx.commit(); - } - - assertEquals(0, loads.get()); - } - - /** */ - private static class TestStore implements CacheStore<Integer, Integer> { - /** {@inheritDoc} */ - @Nullable @Override public Integer load(Integer key) { - loads.incrementAndGet(); - - return null; - } - - /** {@inheritDoc} */ - @Override public void loadCache(IgniteBiInClosure<Integer, Integer> clo, @Nullable Object... args) { - // No-op. - } - - /** {@inheritDoc} */ - @Override public Map<Integer, Integer> loadAll(Iterable<? extends Integer> keys) { - return Collections.emptyMap(); - } - - /** {@inheritDoc} */ - @Override public void write(javax.cache.Cache.Entry<? extends Integer, ? extends Integer> entry) { - // No-op. - } - - /** {@inheritDoc} */ - @Override public void writeAll(Collection<javax.cache.Cache.Entry<? extends Integer, ? extends Integer>> entries) { - // No-op. - } - - /** {@inheritDoc} */ - @Override public void delete(Object key) { - // No-op. - } - - /** {@inheritDoc} */ - @Override public void deleteAll(Collection<?> keys) { - // No-op. - } - - /** {@inheritDoc} */ - @Override public void sessionEnd(boolean commit) { - // No-op.
- } - } -} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreValueBytesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreValueBytesSelfTest.java index 237ae72544ee0..6a74c08956163 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreValueBytesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheStoreValueBytesSelfTest.java @@ -22,10 +22,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -33,23 +33,15 @@ /** * Test for {@link CacheConfiguration#isStoreKeepBinary()}. */ +@RunWith(JUnit4.class) public class GridCacheStoreValueBytesSelfTest extends GridCommonAbstractTest { /** */ private boolean storeValBytes; - /** VM ip finder for TCP discovery. 
*/ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration ccfg = defaultCacheConfiguration(); ccfg.setCacheMode(REPLICATED); @@ -74,6 +66,7 @@ public class GridCacheStoreValueBytesSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testEnabled() throws Exception { storeValBytes = true; @@ -92,4 +85,4 @@ public void testEnabled() throws Exception { assert entry.valueBytes() != null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheSwapPreloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheSwapPreloadSelfTest.java deleted file mode 100644 index 261411fa52217..0000000000000 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheSwapPreloadSelfTest.java +++ /dev/null @@ -1,245 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.internal.processors.cache; - -import java.util.Collections; -import java.util.HashSet; -import java.util.Random; -import java.util.Set; -import java.util.TreeSet; -import java.util.concurrent.Callable; -import java.util.concurrent.atomic.AtomicBoolean; -import javax.cache.Cache; -import org.apache.ignite.IgniteCache; -import org.apache.ignite.cache.CacheMode; -import org.apache.ignite.cache.CachePeekMode; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import org.jetbrains.annotations.Nullable; - -import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; -import static org.apache.ignite.cache.CacheMode.PARTITIONED; -import static org.apache.ignite.cache.CacheMode.REPLICATED; -import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; -import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; - -/** - * Test for cache swap preloading. - */ -public class GridCacheSwapPreloadSelfTest extends GridCommonAbstractTest { - /** Entry count. 
*/ - private static final int ENTRY_CNT = 15_000; - - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** */ - private CacheMode cacheMode; - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - - cfg.setNetworkTimeout(2000); - - CacheConfiguration cacheCfg = defaultCacheConfiguration(); - - cacheCfg.setWriteSynchronizationMode(FULL_SYNC); - cacheCfg.setCacheMode(cacheMode); - cacheCfg.setRebalanceMode(SYNC); - cacheCfg.setAtomicityMode(TRANSACTIONAL); - - if (cacheMode == PARTITIONED) - cacheCfg.setBackups(1); - - cfg.setCacheConfiguration(cacheCfg); - - return cfg; - } - - /** @throws Exception If failed. */ - public void testSwapReplicated() throws Exception { - cacheMode = REPLICATED; - - checkSwap(); - } - - /** @throws Exception If failed. */ - public void testSwapPartitioned() throws Exception { - cacheMode = PARTITIONED; - - checkSwap(); - } - - /** @throws Exception If failed. */ - private void checkSwap() throws Exception { - try { - startGrid(0); - - IgniteCache<Integer, Integer> cache = grid(0).cache(DEFAULT_CACHE_NAME); - - Set<Integer> keys = new HashSet<>(); - - // Populate. - for (int i = 0; i < ENTRY_CNT; i++) { - keys.add(i); - - cache.put(i, i); - } - - info("Put finished."); - - // Evict all.
- cache.localEvict(keys); - - info("Evict finished."); - - for (int i = 0; i < ENTRY_CNT; i++) - assertNull(cache.localPeek(i, CachePeekMode.ONHEAP)); - - assert cache.localSize(CachePeekMode.PRIMARY, CachePeekMode.BACKUP, CachePeekMode.NEAR, - CachePeekMode.ONHEAP) == 0; - - startGrid(1); - - int size = grid(1).cache(DEFAULT_CACHE_NAME).localSize(CachePeekMode.ALL); - - info("New node cache size: " + size); - - assertEquals(ENTRY_CNT, size); - } - finally { - stopAllGrids(); - } - } - - /** - * @throws Exception If failed. - */ - public void testSwapReplicatedMultithreaded() throws Exception { - cacheMode = REPLICATED; - - checkSwapMultithreaded(); - } - - /** @throws Exception If failed. */ - public void testSwapPartitionedMultithreaded() throws Exception { - cacheMode = PARTITIONED; - - checkSwapMultithreaded(); - } - - /** @throws Exception If failed. */ - private void checkSwapMultithreaded() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-614"); - - final AtomicBoolean done = new AtomicBoolean(); - IgniteInternalFuture<?> fut = null; - - try { - startGrid(0); - - final IgniteCache<Integer, Integer> cache = grid(0).cache(DEFAULT_CACHE_NAME); - - assertNotNull(cache); - - // Populate.
- for (int i = 0; i < ENTRY_CNT; i++) - cache.put(i, i); - - Set<Integer> keys = new HashSet<>(); - - for (Cache.Entry<Integer, Integer> entry : cache.localEntries()) - keys.add(entry.getKey()); - - cache.localEvict(keys); - - fut = multithreadedAsync(new Callable<Object>() { - @Nullable @Override public Object call() throws Exception { - Random rnd = new Random(); - - while (!done.get()) { - int key = rnd.nextInt(ENTRY_CNT); - - Integer i = cache.get(key); - - assertNotNull(i); - assertEquals(Integer.valueOf(key), i); - - cache.localEvict(Collections.singleton(rnd.nextInt(ENTRY_CNT))); - } - - return null; - } - }, 10); - - startGrid(1); - - done.set(true); - - fut.get(); - - fut = null; - - int size = grid(1).cache(DEFAULT_CACHE_NAME).localSize(CachePeekMode.PRIMARY, CachePeekMode.BACKUP, - CachePeekMode.NEAR, CachePeekMode.ONHEAP); - - info("New node cache size: " + size); - - if (size != ENTRY_CNT) { - Set<Integer> keySet = new TreeSet<>(); - - int next = 0; - - for (Cache.Entry<Integer, Integer> e : grid(1).cache(DEFAULT_CACHE_NAME).localEntries()) - keySet.add(e.getKey()); - - for (Integer i : keySet) { - while (next < i) - info("Missing key: " + next++); - - next++; - } - } - - assertEquals(ENTRY_CNT, size); - } - finally { - done.set(true); - - try { - if (fut != null) - fut.get(); - } - finally { - stopAllGrids(); - } - } - } -} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTcpClientDiscoveryMultiThreadedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTcpClientDiscoveryMultiThreadedTest.java index 8a2f5d5bc8521..0bb3cd56d8f20 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTcpClientDiscoveryMultiThreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTcpClientDiscoveryMultiThreadedTest.java @@ -31,12 +31,16 @@ import org.apache.ignite.internal.IgniteInternalFuture; import
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests {@link TcpDiscoverySpi} in client mode with multiple client nodes that interact with a cache concurrently. */ +@RunWith(JUnit4.class) public class GridCacheTcpClientDiscoveryMultiThreadedTest extends GridCacheAbstractSelfTest { /** Server nodes count. */ private static int srvNodesCnt; @@ -104,6 +108,7 @@ public class GridCacheTcpClientDiscoveryMultiThreadedTest extends GridCacheAbstr /** * @throws Exception If failed. */ + @Test public void testCacheConcurrentlyWithMultipleClientNodes() throws Exception { srvNodesCnt = 2; clientNodesCnt = 3; @@ -190,4 +195,4 @@ private void performSimpleOperationsOnCache(IgniteCache cache) for (int i = 100; i < 200; i++) assertEquals(i, (int) cache.get(i)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTestEntryEx.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTestEntryEx.java index 1a3c8d7801729..6e640110afd76 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTestEntryEx.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTestEntryEx.java @@ -23,6 +23,7 @@ import java.util.UUID; import javax.cache.Cache; import javax.cache.expiry.ExpiryPolicy; +import javax.cache.processor.EntryProcessor; import javax.cache.processor.EntryProcessorResult; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.eviction.EvictableEntry; @@ -41,6 +42,7 @@ import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorClosure; import org.apache.ignite.internal.util.lang.GridMetadataAwareAdapter; import 
org.apache.ignite.internal.util.lang.GridTuple3; +import org.apache.ignite.lang.IgniteUuid; import org.jetbrains.annotations.Nullable; /** @@ -106,6 +108,11 @@ public class GridCacheTestEntryEx extends GridMetadataAwareAdapter implements Gr return false; } + /** {@inheritDoc} */ + @Override public boolean isMvcc() { + return false; + } + /** {@inheritDoc} */ @Override public boolean detached() { return false; @@ -482,8 +489,8 @@ void recheckLock() { /** {@inheritDoc} */ @Override public GridCacheUpdateTxResult mvccSet(@Nullable IgniteInternalTx tx, UUID affNodeId, CacheObject val, - long ttl0, AffinityTopologyVersion topVer, MvccSnapshot mvccVer, - GridCacheOperation op, boolean needHistory, boolean noCreate, CacheEntryPredicate filter, boolean retVal) + EntryProcessor entryProc, Object[] invokeArgs, long ttl0, AffinityTopologyVersion topVer, MvccSnapshot mvccVer, GridCacheOperation op, boolean needHistory, + boolean noCreate, boolean needOldVal, CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException, GridCacheEntryRemovedException { rawPut(val, ttl); @@ -492,7 +499,7 @@ void recheckLock() { /** {@inheritDoc} */ @Override public GridCacheUpdateTxResult mvccRemove(@Nullable IgniteInternalTx tx, UUID affNodeId, - AffinityTopologyVersion topVer, MvccSnapshot mvccVer, boolean needHistory, + AffinityTopologyVersion topVer, MvccSnapshot mvccVer, boolean needHistory, boolean needOldVal, CacheEntryPredicate filter, boolean retVal) throws IgniteCheckedException, GridCacheEntryRemovedException { obsoleteVer = ver; @@ -904,18 +911,21 @@ GridCacheMvccCandidate anyOwner() { return false; } + /** {@inheritDoc} */ + @Nullable @Override public CacheObject mvccPeek(boolean onheapOnly) { + return null; + } + /** {@inheritDoc} */ @Nullable @Override public CacheObject peek(boolean heap, boolean offheap, AffinityTopologyVersion topVer, - @Nullable IgniteCacheExpiryPolicy plc) - { + @Nullable IgniteCacheExpiryPolicy plc) { return null; } /** {@inheritDoc} */ - 
@Nullable @Override public CacheObject peek( - @Nullable IgniteCacheExpiryPolicy plc) + @Nullable @Override public CacheObject peek() throws GridCacheEntryRemovedException, IgniteCheckedException { return null; } @@ -946,12 +956,14 @@ GridCacheMvccCandidate anyOwner() { AffinityTopologyVersion topVer, List entries, GridCacheOperation op, - MvccSnapshot mvccVer) throws IgniteCheckedException, GridCacheEntryRemovedException { + MvccSnapshot mvccVer, + IgniteUuid futId, + int batchNum) throws IgniteCheckedException, GridCacheEntryRemovedException { return null; } /** {@inheritDoc} */ - @Override public void touch(AffinityTopologyVersion topVer) { - context().evicts().touch(this, topVer); + @Override public void touch() { + context().evicts().touch(this); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTransactionalAbstractMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTransactionalAbstractMetricsSelfTest.java index c4458797dde9f..e3d9a5ca8f702 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTransactionalAbstractMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTransactionalAbstractMetricsSelfTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionMetrics; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -37,6 +40,7 @@ /** * Transactional cache metrics test. 
*/ +@RunWith(JUnit4.class) public abstract class GridCacheTransactionalAbstractMetricsSelfTest extends GridCacheAbstractMetricsSelfTest { /** */ private static final int TX_CNT = 3; @@ -47,6 +51,7 @@ public abstract class GridCacheTransactionalAbstractMetricsSelfTest extends Grid /** * @throws Exception If failed. */ + @Test public void testOptimisticReadCommittedCommits() throws Exception { testCommits(OPTIMISTIC, READ_COMMITTED, true); } @@ -54,6 +59,7 @@ public void testOptimisticReadCommittedCommits() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticReadCommittedCommitsNoData() throws Exception { testCommits(OPTIMISTIC, READ_COMMITTED, false); } @@ -61,6 +67,7 @@ public void testOptimisticReadCommittedCommitsNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableReadCommits() throws Exception { testCommits(OPTIMISTIC, REPEATABLE_READ, true); } @@ -68,6 +75,7 @@ public void testOptimisticRepeatableReadCommits() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableReadCommitsNoData() throws Exception { testCommits(OPTIMISTIC, REPEATABLE_READ, false); } @@ -75,6 +83,7 @@ public void testOptimisticRepeatableReadCommitsNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSerializableCommits() throws Exception { testCommits(OPTIMISTIC, SERIALIZABLE, true); } @@ -82,6 +91,7 @@ public void testOptimisticSerializableCommits() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSerializableCommitsNoData() throws Exception { testCommits(OPTIMISTIC, SERIALIZABLE, false); } @@ -89,6 +99,7 @@ public void testOptimisticSerializableCommitsNoData() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticReadCommittedCommits() throws Exception { testCommits(PESSIMISTIC, READ_COMMITTED, true); } @@ -96,6 +107,7 @@ public void testPessimisticReadCommittedCommits() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticReadCommittedCommitsNoData() throws Exception { testCommits(PESSIMISTIC, READ_COMMITTED, false); } @@ -103,6 +115,7 @@ public void testPessimisticReadCommittedCommitsNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableReadCommits() throws Exception { testCommits(PESSIMISTIC, REPEATABLE_READ, true); } @@ -110,6 +123,7 @@ public void testPessimisticRepeatableReadCommits() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableReadCommitsNoData() throws Exception { testCommits(PESSIMISTIC, REPEATABLE_READ, false); } @@ -117,6 +131,7 @@ public void testPessimisticRepeatableReadCommitsNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticSerializableCommits() throws Exception { testCommits(PESSIMISTIC, SERIALIZABLE, true); } @@ -124,6 +139,7 @@ public void testPessimisticSerializableCommits() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticSerializableCommitsNoData() throws Exception { testCommits(PESSIMISTIC, SERIALIZABLE, false); } @@ -131,6 +147,7 @@ public void testPessimisticSerializableCommitsNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticReadCommittedRollbacks() throws Exception { testRollbacks(OPTIMISTIC, READ_COMMITTED, true); } @@ -138,6 +155,7 @@ public void testOptimisticReadCommittedRollbacks() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOptimisticReadCommittedRollbacksNoData() throws Exception { testRollbacks(OPTIMISTIC, READ_COMMITTED, false); } @@ -145,6 +163,7 @@ public void testOptimisticReadCommittedRollbacksNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableReadRollbacks() throws Exception { testRollbacks(OPTIMISTIC, REPEATABLE_READ, true); } @@ -152,6 +171,7 @@ public void testOptimisticRepeatableReadRollbacks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableReadRollbacksNoData() throws Exception { testRollbacks(OPTIMISTIC, REPEATABLE_READ, false); } @@ -159,6 +179,7 @@ public void testOptimisticRepeatableReadRollbacksNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSerializableRollbacks() throws Exception { testRollbacks(OPTIMISTIC, SERIALIZABLE, true); } @@ -166,6 +187,7 @@ public void testOptimisticSerializableRollbacks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSerializableRollbacksNoData() throws Exception { testRollbacks(OPTIMISTIC, SERIALIZABLE, false); } @@ -173,6 +195,7 @@ public void testOptimisticSerializableRollbacksNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticReadCommittedRollbacks() throws Exception { testRollbacks(PESSIMISTIC, READ_COMMITTED, true); } @@ -180,6 +203,7 @@ public void testPessimisticReadCommittedRollbacks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticReadCommittedRollbacksNoData() throws Exception { testRollbacks(PESSIMISTIC, READ_COMMITTED, false); } @@ -187,6 +211,7 @@ public void testPessimisticReadCommittedRollbacksNoData() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticRepeatableReadRollbacks() throws Exception { testRollbacks(PESSIMISTIC, REPEATABLE_READ, true); } @@ -194,6 +219,7 @@ public void testPessimisticRepeatableReadRollbacks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableReadRollbacksNoData() throws Exception { testRollbacks(PESSIMISTIC, REPEATABLE_READ, false); } @@ -201,6 +227,7 @@ public void testPessimisticRepeatableReadRollbacksNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticSerializableRollbacks() throws Exception { testRollbacks(PESSIMISTIC, SERIALIZABLE, true); } @@ -208,6 +235,7 @@ public void testPessimisticSerializableRollbacks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticSerializableRollbacksNoData() throws Exception { testRollbacks(PESSIMISTIC, SERIALIZABLE, false); } @@ -215,6 +243,7 @@ public void testPessimisticSerializableRollbacksNoData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSuspendedReadCommittedTxTimeoutRollbacks() throws Exception { doTestSuspendedTxTimeoutRollbacks(OPTIMISTIC, READ_COMMITTED); } @@ -222,6 +251,7 @@ public void testOptimisticSuspendedReadCommittedTxTimeoutRollbacks() throws Exce /** * @throws Exception If failed. */ + @Test public void testOptimisticSuspendedRepeatableReadTxTimeoutRollbacks() throws Exception { doTestSuspendedTxTimeoutRollbacks(OPTIMISTIC, REPEATABLE_READ); } @@ -229,6 +259,7 @@ public void testOptimisticSuspendedRepeatableReadTxTimeoutRollbacks() throws Exc /** * @throws Exception If failed. 
*/ + @Test public void testOptimisticSuspendedSerializableTxTimeoutRollbacks() throws Exception { doTestSuspendedTxTimeoutRollbacks(OPTIMISTIC, SERIALIZABLE); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerEvictionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerEvictionSelfTest.java index 66ef47c8be5d2..dab80fddf2e3d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerEvictionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerEvictionSelfTest.java @@ -28,14 +28,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * TTL manager eviction self test. */ +@RunWith(JUnit4.class) public class GridCacheTtlManagerEvictionSelfTest extends GridCommonAbstractTest { /** */ private static final int ENTRIES_TO_PUT = 10_100; @@ -43,21 +45,20 @@ public class GridCacheTtlManagerEvictionSelfTest extends GridCommonAbstractTest /** */ private static final int ENTRIES_LIMIT = 1_000; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache mode. 
*/ private volatile CacheMode cacheMode; /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); - discoSpi.setIpFinder(IP_FINDER); + super.beforeTestsStarted(); + } - cfg.setDiscoverySpi(discoSpi); + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -75,6 +76,7 @@ public class GridCacheTtlManagerEvictionSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testLocalEviction() throws Exception { checkEviction(CacheMode.LOCAL); } @@ -82,6 +84,7 @@ public void testLocalEviction() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionedEviction() throws Exception { checkEviction(CacheMode.PARTITIONED); } @@ -89,6 +92,7 @@ public void testPartitionedEviction() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReplicatedEviction() throws Exception { checkEviction(CacheMode.REPLICATED); } @@ -131,4 +135,4 @@ private void checkEviction(CacheMode mode) throws Exception { Ignition.stopAll(true); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerLoadTest.java index b117c86c806c9..f8fe56097d1ce 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerLoadTest.java @@ -25,6 +25,9 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -32,10 +35,12 @@ /** * Check ttl manager for memory leak. */ +@RunWith(JUnit4.class) public class GridCacheTtlManagerLoadTest extends GridCacheTtlManagerSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testLoad() throws Exception { cacheMode = REPLICATED; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerNotificationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerNotificationTest.java index d25b2af645ce6..4bc0b98623b99 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerNotificationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerNotificationTest.java @@ -33,12 +33,12 @@ import org.apache.ignite.events.Event; import org.apache.ignite.events.EventType; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.eclipse.jetty.util.BlockingArrayQueue; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.SECONDS; @@ -46,6 +46,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheTtlManagerNotificationTest extends GridCommonAbstractTest { /** Count of caches in multi caches test. */ private static final int CACHES_CNT = 10; @@ -53,9 +54,6 @@ public class GridCacheTtlManagerNotificationTest extends GridCommonAbstractTest /** Prefix for cache name fir multi caches test. */ private static final String CACHE_PREFIX = "cache-"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Test cache mode. 
*/ protected CacheMode cacheMode; @@ -63,12 +61,6 @@ public class GridCacheTtlManagerNotificationTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - CacheConfiguration[] ccfgs = new CacheConfiguration[CACHES_CNT + 1]; ccfgs[0] = createCacheConfiguration(DEFAULT_CACHE_NAME); @@ -98,6 +90,7 @@ private CacheConfiguration createCacheConfiguration(String name) { /** * @throws Exception If failed. */ + @Test public void testThatNotificationWorkAsExpected() throws Exception { try (final Ignite g = startGrid(0)) { final BlockingArrayQueue queue = new BlockingArrayQueue<>(); @@ -134,6 +127,7 @@ public void testThatNotificationWorkAsExpected() throws Exception { * * @throws Exception If failed. */ + @Test public void testThatNotificationWorkAsExpectedInMultithreadedMode() throws Exception { final CyclicBarrier barrier = new CyclicBarrier(21); final AtomicInteger keysRangeGen = new AtomicInteger(); @@ -185,6 +179,7 @@ public void testThatNotificationWorkAsExpectedInMultithreadedMode() throws Excep * * @throws Exception If failed. 
*/ + @Test public void testThatNotificationWorkAsExpectedManyCaches() throws Exception { final int smallDuration = 4_000; @@ -294,4 +289,4 @@ private static class CacheFiller implements Runnable { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerSelfTest.java index 52f19b78ca65a..1648ff3cd742f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTtlManagerSelfTest.java @@ -25,11 +25,12 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.CAX; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -39,22 +40,21 @@ /** * TTL manager self test. */ +@RunWith(JUnit4.class) public class GridCacheTtlManagerSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Test cache mode. 
*/ protected CacheMode cacheMode; /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); - discoSpi.setIpFinder(IP_FINDER); + super.beforeTestsStarted(); + } - cfg.setDiscoverySpi(discoSpi); + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -69,6 +69,7 @@ public class GridCacheTtlManagerSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testLocalTtl() throws Exception { checkTtl(LOCAL); } @@ -76,6 +77,7 @@ public void testLocalTtl() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionedTtl() throws Exception { checkTtl(PARTITIONED); } @@ -83,6 +85,7 @@ public void testPartitionedTtl() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReplicatedTtl() throws Exception { checkTtl(REPLICATED); } @@ -121,4 +124,4 @@ private void checkTtl(CacheMode mode) throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTxPartitionedLocalStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTxPartitionedLocalStoreSelfTest.java index 272665b7555d5..9000c95f51e11 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTxPartitionedLocalStoreSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheTxPartitionedLocalStoreSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -27,6 +28,13 @@ * */ public class GridCacheTxPartitionedLocalStoreSelfTest extends GridCacheAbstractLocalStoreSelfTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode getAtomicMode() { return TRANSACTIONAL; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheUtilsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheUtilsSelfTest.java index e28daf335f633..8dc33e6a8632e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheUtilsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheUtilsSelfTest.java @@ -35,10 +35,14 @@ import 
org.apache.ignite.marshaller.MarshallerContextTestImpl; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid cache utils test. */ +@RunWith(JUnit4.class) public class GridCacheUtilsSelfTest extends GridCommonAbstractTest { /** * Does not override equals and hashCode. @@ -74,7 +78,6 @@ private static class WrongEquals { * @param obj Object. * @return {@code False}. */ - @SuppressWarnings("CovariantEquals") @Override public boolean equals(Object obj) { return false; } @@ -118,7 +121,7 @@ private static class ExtendsClassWithEqualsAndHashCode2 extends EqualsAndHashCod /** */ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testCacheKeyValidation() throws IgniteCheckedException { CU.validateCacheKey("key"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueBytesPreloadingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueBytesPreloadingSelfTest.java index e32d8aaa63bef..2d0eed043b95d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueBytesPreloadingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueBytesPreloadingSelfTest.java @@ -22,6 +22,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -30,6 +33,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheValueBytesPreloadingSelfTest extends 
GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -61,6 +65,7 @@ protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throw /** * @throws Exception If failed. */ + @Test public void testOnHeapTiered() throws Exception { startGrids(1); @@ -108,4 +113,4 @@ private void checkByteArrays() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueConsistencyAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueConsistencyAbstractSelfTest.java index 19f98ff8ab373..9f0a564500de3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueConsistencyAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheValueConsistencyAbstractSelfTest.java @@ -26,6 +26,10 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -35,6 +39,7 @@ /** * */ +@RunWith(JUnit4.class) public abstract class GridCacheValueConsistencyAbstractSelfTest extends GridCacheAbstractSelfTest { /** Number of threads for test. 
*/ private static final int THREAD_CNT = 16; @@ -76,6 +81,14 @@ public abstract class GridCacheValueConsistencyAbstractSelfTest extends GridCach super.beforeTestsStarted(); } + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + if (nearEnabled()) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } + /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { super.afterTestsStopped(); @@ -98,6 +111,7 @@ public abstract class GridCacheValueConsistencyAbstractSelfTest extends GridCach /** * @throws Exception If failed. */ + @Test public void testPutRemove() throws Exception { awaitPartitionMapExchange(); @@ -156,6 +170,7 @@ public void testPutRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutRemoveAll() throws Exception { awaitPartitionMapExchange(); @@ -219,6 +234,7 @@ public void testPutRemoveAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutConsistencyMultithreaded() throws Exception { if (nearEnabled()) fail("https://issues.apache.org/jira/browse/IGNITE-627"); @@ -272,6 +288,7 @@ public void testPutConsistencyMultithreaded() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutRemoveConsistencyMultithreaded() throws Exception { if (nearEnabled()) fail("https://issues.apache.org/jira/browse/IGNITE-627"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVariableTopologySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVariableTopologySelfTest.java index 25817a191c3ee..7756d4c70f882 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVariableTopologySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVariableTopologySelfTest.java @@ -31,13 +31,14 @@ import org.apache.ignite.internal.util.typedef.CAX; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionRollbackException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -45,12 +46,18 @@ /** * Affinity routing tests. 
*/ +@RunWith(JUnit4.class) public class GridCacheVariableTopologySelfTest extends GridCommonAbstractTest { /** */ private static final Random RAND = new Random(); - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-7388"); + + super.setUp(); + } /** Constructs test. */ public GridCacheVariableTopologySelfTest() { @@ -61,12 +68,6 @@ public GridCacheVariableTopologySelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - // Default cache configuration. CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -108,6 +109,7 @@ private void startGrids(int cnt, int startIdx) throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testNodeStop() throws Exception { // -- Test parameters. 
-- // int nodeCnt = 3; @@ -201,4 +203,4 @@ public void testNodeStop() throws Exception { //debugFut.get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionMultinodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionMultinodeTest.java index 0fd3fb9ca5ca3..bb604093e770e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionMultinodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionMultinodeTest.java @@ -30,9 +30,14 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.jetbrains.annotations.Nullable; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -41,6 +46,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheVersionMultinodeTest extends GridCacheAbstractSelfTest { /** */ private CacheAtomicityMode atomicityMode; @@ -89,6 +95,7 @@ public class GridCacheVersionMultinodeTest extends GridCacheAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testVersionTx() throws Exception { atomicityMode = TRANSACTIONAL; @@ -98,6 +105,7 @@ public void testVersionTx() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testVersionTxNearEnabled() throws Exception { atomicityMode = TRANSACTIONAL; @@ -109,6 +117,31 @@ public void testVersionTxNearEnabled() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testVersionMvccTx() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + + checkVersion(); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testVersionMvccTxNearEnabled() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + + near = true; + + checkVersion(); + } + + /** + * @throws Exception If failed. + */ + @Test public void testVersionAtomicPrimary() throws Exception { atomicityMode = ATOMIC; @@ -118,6 +151,7 @@ public void testVersionAtomicPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVersionAtomicPrimaryNearEnabled() throws Exception { atomicityMode = ATOMIC; @@ -138,17 +172,19 @@ private void checkVersion() throws Exception { checkVersion(String.valueOf(i), null); // Update. } - if (atomicityMode == TRANSACTIONAL) { + if (atomicityMode != ATOMIC) { for (int i = 100; i < 200; i++) { checkVersion(String.valueOf(i), PESSIMISTIC); // Create. checkVersion(String.valueOf(i), PESSIMISTIC); // Update. } - for (int i = 200; i < 300; i++) { - checkVersion(String.valueOf(i), OPTIMISTIC); // Create. + if (atomicityMode != TRANSACTIONAL_SNAPSHOT) { + for (int i = 200; i < 300; i++) { + checkVersion(String.valueOf(i), OPTIMISTIC); // Create. - checkVersion(String.valueOf(i), OPTIMISTIC); // Update. + checkVersion(String.valueOf(i), OPTIMISTIC); // Update. 
+ } } } } @@ -224,4 +260,4 @@ private void checkEntryVersion(String key) throws Exception { assertTrue(verified); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionSelfTest.java index 9cec7ecf7b270..f70dfd25c8d8a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionSelfTest.java @@ -23,14 +23,19 @@ import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheVersionSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTopologyVersionDrId() throws Exception { GridCacheVersion ver = version(10, 0); @@ -76,6 +81,7 @@ public void testTopologyVersionDrId() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMarshalling() throws Exception { GridCacheVersion ver = version(1, 1); GridCacheVersionEx verEx = new GridCacheVersionEx(2, 2, 0, ver); @@ -100,4 +106,4 @@ public void testMarshalling() throws Exception { private GridCacheVersion version(int nodeOrder, int drId) { return new GridCacheVersion(0, 0, nodeOrder, drId); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionTopologyChangeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionTopologyChangeTest.java index 5c9ecd71f6d13..b4f7981396f11 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionTopologyChangeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheVersionTopologyChangeTest.java @@ -31,32 +31,22 @@ import org.apache.ignite.cache.affinity.AffinityFunction; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils.SF; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class GridCacheVersionTopologyChangeTest extends 
GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -67,6 +57,7 @@ public class GridCacheVersionTopologyChangeTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testVersionIncreaseAtomic() throws Exception { checkVersionIncrease(cacheConfigurations(ATOMIC)); } @@ -74,10 +65,19 @@ public void testVersionIncreaseAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVersionIncreaseTx() throws Exception { checkVersionIncrease(cacheConfigurations(TRANSACTIONAL)); } + /** + * @throws Exception If failed. + */ + @Test + public void testVersionIncreaseMvccTx() throws Exception { + checkVersionIncrease(cacheConfigurations(TRANSACTIONAL_SNAPSHOT)); + } + /** * @param ccfgs Cache configurations. * @throws Exception If failed. 
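The `checkVersionIncrease` tests above assert that cache entry versions keep growing across topology changes. Ignite's `GridCacheVersion` achieves this by comparing the topology version before the per-topology update order, so a version stamped after a node joins always compares greater than any pre-join version even though the update counter restarts. A minimal plain-Java sketch of that ordering property (class and field names here are illustrative simplifications, not Ignite's actual `GridCacheVersion` internals, which carry additional fields such as node order and DR id):

```java
// Sketch of lexicographic version ordering: topology version first,
// then the per-topology update order. Simplified for illustration.
public class CacheVersionSketch {
    static final class Ver implements Comparable<Ver> {
        final int topVer;  // bumped on every topology change (node join/leave)
        final long order;  // per-topology update counter, restarts after a bump

        Ver(int topVer, long order) {
            this.topVer = topVer;
            this.order = order;
        }

        @Override public int compareTo(Ver o) {
            // Topology version dominates; order breaks ties within a topology.
            int c = Integer.compare(topVer, o.topVer);
            return c != 0 ? c : Long.compare(order, o.order);
        }
    }

    public static void main(String[] args) {
        Ver beforeJoin = new Ver(1, 100);
        // A node joins: topology version bumps, update counter restarts at 1.
        Ver afterJoin = new Ver(2, 1);

        // Despite the smaller counter, the post-join version must compare greater.
        if (afterJoin.compareTo(beforeJoin) <= 0)
            throw new AssertionError("version must increase across topology change");

        System.out.println("version increased after topology change");
    }
}
```

This is the invariant the atomic, transactional, and MVCC variants of the test exercise for their respective cache configurations.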
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridDataStorageConfigurationConsistencySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridDataStorageConfigurationConsistencySelfTest.java index 3c728f7898045..fe730bb2ddf50 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridDataStorageConfigurationConsistencySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridDataStorageConfigurationConsistencySelfTest.java @@ -21,28 +21,21 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests a check of memory configuration consistency. */ +@RunWith(JUnit4.class) public class GridDataStorageConfigurationConsistencySelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - DataStorageConfiguration memCfg = new DataStorageConfiguration(); // Nodes will have different page size. 
@@ -56,6 +49,7 @@ public class GridDataStorageConfigurationConsistencySelfTest extends GridCommonA /** * @throws Exception If failed. */ + @Test public void testMemoryConfigurationConsistency() throws Exception { GridTestUtils.assertThrows(log, new Callable() { /** {@inheritDoc} */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridEvictionPolicyMBeansTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridEvictionPolicyMBeansTest.java index b093bb2f33307..d9ab3ab6b45cc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridEvictionPolicyMBeansTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridEvictionPolicyMBeansTest.java @@ -25,17 +25,29 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.IgniteUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for the eviction policy JMX beans registered by the kernal. 
*/ +@RunWith(JUnit4.class) public class GridEvictionPolicyMBeansTest extends GridCommonAbstractTest { /** Create test and auto-start the grid */ public GridEvictionPolicyMBeansTest() { super(true); } + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); + + super.beforeTestsStarted(); + } + /** * {@inheritDoc} * @@ -60,8 +72,6 @@ public GridEvictionPolicyMBeansTest() { ncf.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(40, 10, 500)); - cache1.setNearConfiguration(ncf); - CacheConfiguration cache2 = defaultCacheConfiguration(); cache2.setName("cache2"); @@ -81,7 +91,11 @@ public GridEvictionPolicyMBeansTest() { lep.setMaxMemorySize(500); lep.setMaxSize(40); ncf.setNearEvictionPolicy(lep); - cache2.setNearConfiguration(ncf); + + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) { + cache1.setNearConfiguration(ncf); + cache2.setNearConfiguration(ncf); + } cfg.setCacheConfiguration(cache1, cache2); @@ -89,22 +103,27 @@ public GridEvictionPolicyMBeansTest() { } /** Check that eviction bean is available */ + @Test public void testEvictionPolicyBeans() throws Exception{ checkBean("cache1", "org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy", "MaxSize", 100); checkBean("cache1", "org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy", "BatchSize", 10); checkBean("cache1", "org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy", "MaxMemorySize", 20L); - checkBean("cache1-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxSize", 40); - checkBean("cache1-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "BatchSize", 10); - checkBean("cache1-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxMemorySize", 500L); + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) { + 
checkBean("cache1-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxSize", 40); + checkBean("cache1-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "BatchSize", 10); + checkBean("cache1-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxMemorySize", 500L); + } checkBean("cache2", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxSize", 30); checkBean("cache2", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "BatchSize", 10); checkBean("cache2", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxMemorySize", 125L); - checkBean("cache2-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxSize", 40); - checkBean("cache2-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "BatchSize", 10); - checkBean("cache2-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxMemorySize", 500L); + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) { + checkBean("cache2-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxSize", 40); + checkBean("cache2-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "BatchSize", 10); + checkBean("cache2-near", "org.apache.ignite.cache.eviction.lru.LruEvictionPolicy", "MaxMemorySize", 500L); + } } /** Checks that a bean with the specified group and name is available and has the expected attribute */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalCacheStoreManagerDeserializationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalCacheStoreManagerDeserializationTest.java index 827b3cf3a1e54..c2fea0cfc94c9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalCacheStoreManagerDeserializationTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalCacheStoreManagerDeserializationTest.java @@ -27,6 +27,9 @@ import javax.cache.expiry.ExpiryPolicy; import java.util.UUID; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks whether storing to local store doesn't cause binary objects unmarshalling, @@ -36,6 +39,7 @@ * https://issues.apache.org/jira/browse/IGNITE-2753 * */ +@RunWith(JUnit4.class) public class GridLocalCacheStoreManagerDeserializationTest extends GridCacheStoreManagerDeserializationTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -50,6 +54,7 @@ public class GridLocalCacheStoreManagerDeserializationTest extends GridCacheStor * * @throws Exception */ + @Test public void testUpdate() throws Exception { // Goal is to check correct saving to store (no exception must be thrown) @@ -78,6 +83,7 @@ public void testUpdate() throws Exception { * * @throws Exception */ + @Test public void testBinaryUpdate() throws Exception { // Goal is to check correct saving to store (no exception must be thrown) final Ignite grid = startGrid("binaryGrid"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalIgniteSerializationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalIgniteSerializationTest.java index 00b3137149742..1ced797f3f8aa 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalIgniteSerializationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridLocalIgniteSerializationTest.java @@ -39,10 +39,14 @@ import java.io.ObjectOutputStream; import java.io.Serializable; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for local Ignite instance processing during serialization/deserialization. 
*/ +@RunWith(JUnit4.class) public class GridLocalIgniteSerializationTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache_name"; @@ -63,6 +67,7 @@ public class GridLocalIgniteSerializationTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testPutGetSimple() throws Exception { checkPutGet(new SimpleTestObject("one"), null); } @@ -70,6 +75,7 @@ public void testPutGetSimple() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutGetSerializable() throws Exception { checkPutGet(new SerializableTestObject("test"), null); } @@ -77,6 +83,7 @@ public void testPutGetSerializable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutGetExternalizable() throws Exception { checkPutGet(new ExternalizableTestObject("test"), null); } @@ -84,6 +91,7 @@ public void testPutGetExternalizable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutGetBinarylizable() throws Exception { checkPutGet(new BinarylizableTestObject("test"), "binaryIgnite"); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridProjectionForCachesOnDaemonNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridProjectionForCachesOnDaemonNodeSelfTest.java index 85b237323191f..068ebcffa010f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridProjectionForCachesOnDaemonNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/GridProjectionForCachesOnDaemonNodeSelfTest.java @@ -21,19 +21,16 @@ import org.apache.ignite.cluster.ClusterGroup; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.DiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests of cache related cluster projections for daemon node. */ +@RunWith(JUnit4.class) public class GridProjectionForCachesOnDaemonNodeSelfTest extends GridCommonAbstractTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Daemon node. */ private static boolean daemonNode; @@ -47,24 +44,11 @@ public class GridProjectionForCachesOnDaemonNodeSelfTest extends GridCommonAbstr @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(discoverySpi()); - cfg.setDaemon(daemonNode); return cfg; } - /** - * @return Discovery SPI; - */ - private DiscoverySpi discoverySpi() { - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - return spi; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { ignite = startGrid(0); @@ -89,6 +73,7 @@ private DiscoverySpi discoverySpi() { /** * @throws Exception If failed. */ + @Test public void testForDataNodes() throws Exception { ClusterGroup grp = ignite.cluster().forDataNodes(DEFAULT_CACHE_NAME); @@ -107,6 +92,7 @@ public void testForDataNodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testForClientNodes() throws Exception { ClusterGroup grp = ignite.cluster().forClientNodes(DEFAULT_CACHE_NAME); @@ -125,6 +111,7 @@ public void testForClientNodes() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testForCacheNodes() throws Exception { ClusterGroup grp = ignite.cluster().forCacheNodes(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteAbstractDynamicCacheStartFailTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteAbstractDynamicCacheStartFailTest.java index db13c11d1a947..ee3a8e0816be6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteAbstractDynamicCacheStartFailTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteAbstractDynamicCacheStartFailTest.java @@ -1,786 +1,1200 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.internal.processors.cache; - -import java.util.ArrayList; -import java.util.Collection; -import java.util.List; -import java.util.UUID; -import java.util.concurrent.Callable; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.atomic.AtomicInteger; -import javax.cache.CacheException; -import javax.cache.configuration.Factory; -import org.apache.ignite.Ignite; -import org.apache.ignite.IgniteCache; -import org.apache.ignite.IgniteDataStreamer; -import org.apache.ignite.cache.CacheAtomicityMode; -import org.apache.ignite.cache.CacheMode; -import org.apache.ignite.cache.affinity.AffinityFunctionContext; -import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; -import org.apache.ignite.cache.query.annotations.QuerySqlField; -import org.apache.ignite.cache.store.CacheStore; -import org.apache.ignite.cluster.BaselineNode; -import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.testframework.GridTestUtils; - -/** - * Tests the recovery after a dynamic cache start failure. - */ -public abstract class IgniteAbstractDynamicCacheStartFailTest extends GridCacheAbstractSelfTest { - /** */ - private static final String DYNAMIC_CACHE_NAME = "TestDynamicCache"; - - /** */ - private static final String CLIENT_GRID_NAME = "client"; - - /** */ - protected static final String EXISTING_CACHE_NAME = "existing-cache";; - - /** */ - private static final int PARTITION_COUNT = 16; - - /** Coordinator node index. */ - private int crdIdx = 0; - - /** {@inheritDoc} */ - @Override protected int gridCount() { - return 3; - } - - /** - * Returns {@code true} if persistence is enabled. 
- * - * @return {@code true} if persistence is enabled. - */ - protected boolean persistenceEnabled() { - return false; - } - - /** - * @throws Exception If failed. - */ - public void testBrokenAffinityFunStartOnServerFailedOnClient() throws Exception { - final String clientName = CLIENT_GRID_NAME + "testBrokenAffinityFunStartOnServerFailedOnClient"; - - IgniteConfiguration clientCfg = getConfiguration(clientName); - - clientCfg.setClientMode(true); - - Ignite client = startGrid(clientName, clientCfg); - - CacheConfiguration cfg = new CacheConfiguration(); - - cfg.setName(DYNAMIC_CACHE_NAME + "-server-1"); - - cfg.setAffinity(new BrokenAffinityFunction(false, clientName)); - - try { - IgniteCache cache = ignite(0).getOrCreateCache(cfg); - } - catch (CacheException e) { - fail("Exception should not be thrown."); - } - - stopGrid(clientName); - } - - /** - * @throws Exception If failed. - */ - public void testBrokenAffinityFunStartOnServerFailedOnServer() throws Exception { - final String clientName = CLIENT_GRID_NAME + "testBrokenAffinityFunStartOnServerFailedOnServer"; - - IgniteConfiguration clientCfg = getConfiguration(clientName); - - clientCfg.setClientMode(true); - - Ignite client = startGrid(clientName, clientCfg); - - CacheConfiguration cfg = new CacheConfiguration(); - - cfg.setName(DYNAMIC_CACHE_NAME + "-server-2"); - - cfg.setAffinity(new BrokenAffinityFunction(false, getTestIgniteInstanceName(0))); - - try { - IgniteCache cache = ignite(0).getOrCreateCache(cfg); - - fail("Expected exception was not thrown."); - } - catch (CacheException e) { - } - - stopGrid(clientName); - } - - /** - * @throws Exception If failed. 
- */ - public void testBrokenAffinityFunStartOnClientFailOnServer() throws Exception { - final String clientName = CLIENT_GRID_NAME + "testBrokenAffinityFunStartOnClientFailOnServer"; - - IgniteConfiguration clientCfg = getConfiguration(clientName); - - clientCfg.setClientMode(true); - - Ignite client = startGrid(clientName, clientCfg); - - CacheConfiguration cfg = new CacheConfiguration(); - - cfg.setName(DYNAMIC_CACHE_NAME + "-client-2"); - - cfg.setAffinity(new BrokenAffinityFunction(false, getTestIgniteInstanceName(0))); - - try { - IgniteCache cache = client.getOrCreateCache(cfg); - - fail("Expected exception was not thrown."); - } - catch (CacheException e) { - } - - stopGrid(clientName); - } - - /** - * Test cache start with broken affinity function that throws an exception on all nodes. - */ - public void testBrokenAffinityFunOnAllNodes() { - final boolean failOnAllNodes = true; - final int unluckyNode = 0; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = 0; - - testDynamicCacheStart( - createCacheConfigsWithBrokenAffinityFun( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Test cache start with broken affinity function that throws an exception on initiator node. - */ - public void testBrokenAffinityFunOnInitiator() { - final boolean failOnAllNodes = false; - final int unluckyNode = 1; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = 1; - - testDynamicCacheStart( - createCacheConfigsWithBrokenAffinityFun( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Test cache start with broken affinity function that throws an exception on non-initiator node. 
- */ - public void testBrokenAffinityFunOnNonInitiator() { - final boolean failOnAllNodes = false; - final int unluckyNode = 1; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = 2; - - testDynamicCacheStart( - createCacheConfigsWithBrokenAffinityFun( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Test cache start with broken affinity function that throws an exception on coordinator node. - */ - public void testBrokenAffinityFunOnCoordinatorDiffInitiator() { - final boolean failOnAllNodes = false; - final int unluckyNode = crdIdx; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = (crdIdx + 1) % gridCount(); - - testDynamicCacheStart( - createCacheConfigsWithBrokenAffinityFun( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Test cache start with broken affinity function that throws an exception on initiator node. - */ - public void testBrokenAffinityFunOnCoordinator() { - final boolean failOnAllNodes = false; - final int unluckyNode = crdIdx; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = crdIdx; - - testDynamicCacheStart( - createCacheConfigsWithBrokenAffinityFun( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Tests cache start with node filter and broken affinity function that throws an exception on initiator node. - */ - public void testBrokenAffinityFunWithNodeFilter() { - final boolean failOnAllNodes = false; - final int unluckyNode = 0; - final int unluckyCfg = 0; - final int numOfCaches = 1; - final int initiator = 0; - - testDynamicCacheStart( - createCacheConfigsWithBrokenAffinityFun( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, true), - initiator); - } - - /** - * Tests cache start with broken cache store that throws an exception on all nodes. 
- */ - public void testBrokenCacheStoreOnAllNodes() { - final boolean failOnAllNodes = true; - final int unluckyNode = 0; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = 0; - - testDynamicCacheStart( - createCacheConfigsWithBrokenCacheStore( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Tests cache start with broken cache store that throws an exception on initiator node. - */ - public void testBrokenCacheStoreOnInitiator() { - final boolean failOnAllNodes = false; - final int unluckyNode = 1; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = 1; - - testDynamicCacheStart( - createCacheConfigsWithBrokenCacheStore( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Tests cache start with broken cache store that throws an exception on non-initiator node. - */ - public void testBrokenCacheStoreOnNonInitiator() { - final boolean failOnAllNodes = false; - final int unluckyNode = 1; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = 2; - - testDynamicCacheStart( - createCacheConfigsWithBrokenCacheStore( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Tests cache start with broken cache store that throws an exception on initiator node. - */ - public void testBrokenCacheStoreOnCoordinatorDiffInitiator() { - final boolean failOnAllNodes = false; - final int unluckyNode = crdIdx; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = (crdIdx + 1) % gridCount(); - - testDynamicCacheStart( - createCacheConfigsWithBrokenCacheStore( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Tests cache start with broken cache store that throws an exception on coordinator node. 
- */ - public void testBrokenCacheStoreFunOnCoordinator() { - final boolean failOnAllNodes = false; - final int unluckyNode = crdIdx; - final int unluckyCfg = 1; - final int numOfCaches = 3; - final int initiator = crdIdx; - - testDynamicCacheStart( - createCacheConfigsWithBrokenCacheStore( - failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), - initiator); - } - - /** - * Tests multiple creation of cache with broken affinity function. - */ - public void testCreateCacheMultipleTimes() { - final boolean failOnAllNodes = false; - final int unluckyNode = 1; - final int unluckyCfg = 0; - final int numOfAttempts = 15; - - CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun( - failOnAllNodes, unluckyNode, unluckyCfg, 1, false).get(0); - - for (int i = 0; i < numOfAttempts; ++i) { - try { - IgniteCache cache = ignite(0).getOrCreateCache(cfg); - - fail("Expected exception was not thrown"); - } - catch (CacheException e) { - } - } - } - - /** - * Tests that a cache with the same name can be started after failure if cache configuration is corrected. - * - * @throws Exception If test failed. - */ - public void testCacheStartAfterFailure() throws Exception { - CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun( - false, 1, 0, 1, false).get(0); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Object call() throws Exception { - grid(0).getOrCreateCache(cfg); - return null; - } - }, CacheException.class, null); - - // Correct the cache configuration. Default constructor creates a good affinity function. - cfg.setAffinity(new BrokenAffinityFunction()); - - IgniteCache cache = grid(0).getOrCreateCache(createCacheConfiguration(EXISTING_CACHE_NAME)); - - checkCacheOperations(cache); - } - - /** - * Tests that other cache (existed before the failed start) is still operable after the failure. - * - * @throws Exception If test failed. 
- */ - public void testExistingCacheAfterFailure() throws Exception { - IgniteCache cache = grid(0).getOrCreateCache(createCacheConfiguration(EXISTING_CACHE_NAME)); - - CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun( - false, 1, 0, 1, false).get(0); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Object call() throws Exception { - grid(0).getOrCreateCache(cfg); - return null; - } - }, CacheException.class, null); - - checkCacheOperations(cache); - } - - /** - * Tests that other cache works as expected after the failure and further topology changes. - * - * @throws Exception If test failed. - */ - public void testTopologyChangesAfterFailure() throws Exception { - final String clientName = "testTopologyChangesAfterFailure"; - - IgniteCache cache = grid(0).getOrCreateCache(createCacheConfiguration(EXISTING_CACHE_NAME)); - - checkCacheOperations(cache); - - CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun( - false, 0, 0, 1, false).get(0); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Object call() throws Exception { - grid(0).getOrCreateCache(cfg); - return null; - } - }, CacheException.class, null); - - awaitPartitionMapExchange(); - - checkCacheOperations(cache); - - // Start a new server node and check cache operations. - Ignite serverNode = startGrid(gridCount() + 1); - - if (persistenceEnabled()) { - // Start a new client node to perform baseline change. - // TODO: This change is workaround: - // Sometimes problem with caches configuration deserialization from test thread arises. 
- final String clientName1 = "baseline-changer"; - - IgniteConfiguration clientCfg = getConfiguration(clientName1); - - clientCfg.setClientMode(true); - - Ignite clientNode = startGrid(clientName1, clientCfg); - - List baseline = new ArrayList<>(grid(0).cluster().currentBaselineTopology()); - - baseline.add(serverNode.cluster().localNode()); - - clientNode.cluster().setBaselineTopology(baseline); - } - - awaitPartitionMapExchange(); - - checkCacheOperations(serverNode.cache(EXISTING_CACHE_NAME)); - - // Start a new client node and check cache operations. - IgniteConfiguration clientCfg = getConfiguration(clientName); - - clientCfg.setClientMode(true); - - Ignite clientNode = startGrid(clientName, clientCfg); - - checkCacheOperations(clientNode.cache(EXISTING_CACHE_NAME)); - } - - public void testConcurrentClientNodeJoins() throws Exception { - final int clientCnt = 3; - final int numberOfAttempts = 5; - - IgniteCache cache = grid(0).getOrCreateCache(createCacheConfiguration(EXISTING_CACHE_NAME)); - - final AtomicInteger attemptCnt = new AtomicInteger(); - final CountDownLatch stopLatch = new CountDownLatch(clientCnt); - - IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(new Callable() { - @Override public Object call() throws Exception { - String clientName = Thread.currentThread().getName(); - - try { - for (int i = 0; i < numberOfAttempts; ++i) { - int uniqueCnt = attemptCnt.getAndIncrement(); - - IgniteConfiguration clientCfg = getConfiguration(clientName + uniqueCnt); - - clientCfg.setClientMode(true); - - final Ignite clientNode = startGrid(clientName, clientCfg); - - CacheConfiguration cfg = new CacheConfiguration(); - - cfg.setName(clientName + uniqueCnt); - - String instanceName = getTestIgniteInstanceName(uniqueCnt % gridCount()); - - cfg.setAffinity(new BrokenAffinityFunction(false, instanceName)); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Object call() throws Exception { - clientNode.getOrCreateCache(cfg); - 
return null; - } - }, CacheException.class, null); - - stopGrid(clientName, true); - } - } - catch (Exception e) { - fail("Unexpected exception: " + e.getMessage()); - } - finally { - stopLatch.countDown(); - } - - return null; - } - }, clientCnt, "start-client-thread"); - - stopLatch.await(); - - assertEquals(numberOfAttempts * clientCnt, attemptCnt.get()); - - checkCacheOperations(cache); - } - - protected void testDynamicCacheStart(final Collection cfgs, final int initiatorId) { - assert initiatorId < gridCount(); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Object call() throws Exception { - grid(initiatorId).getOrCreateCaches(cfgs); - return null; - } - }, CacheException.class, null); - - for (CacheConfiguration cfg: cfgs) { - IgniteCache cache = grid(initiatorId).cache(cfg.getName()); - - assertNull(cache); - } - } - - /** - * Creates new cache configuration with the given name. - * - * @param cacheName Cache name. - * @return New cache configuration. - */ - protected CacheConfiguration createCacheConfiguration(String cacheName) { - CacheConfiguration cfg = new CacheConfiguration() - .setName(cacheName) - .setCacheMode(CacheMode.PARTITIONED) - .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) - .setAffinity(new BrokenAffinityFunction()); - - return cfg; - } - - /** - * Create list of cache configurations. - * - * @param failOnAllNodes {@code true} if affinity function should be broken on all nodes. - * @param unluckyNode Node, where exception is raised. - * @param unluckyCfg Unlucky cache configuration number. - * @param cacheNum Number of caches. - * @param useFilter {@code true} if NodeFilter should be used. - * - * @return List of cache configurations. 
- */ - protected List createCacheConfigsWithBrokenAffinityFun( - boolean failOnAllNodes, - int unluckyNode, - final int unluckyCfg, - int cacheNum, - boolean useFilter - ) { - assert unluckyCfg >= 0 && unluckyCfg < cacheNum; - - final UUID uuid = ignite(unluckyNode).cluster().localNode().id(); - - List cfgs = new ArrayList<>(); - - for (int i = 0; i < cacheNum; ++i) { - CacheConfiguration cfg = createCacheConfiguration(DYNAMIC_CACHE_NAME + "-" + i); - - if (i == unluckyCfg) - cfg.setAffinity(new BrokenAffinityFunction(failOnAllNodes, getTestIgniteInstanceName(unluckyNode))); - - if (useFilter) - cfg.setNodeFilter(new NodeFilter(uuid)); - - cfgs.add(cfg); - } - - return cfgs; - } - - /** - * Create list of cache configurations. - * - * @param failOnAllNodes {@code true} if cache store should be broken on all nodes. - * @param unluckyNode Node, where exception is raised. - * @param unluckyCfg Unlucky cache configuration number. - * @param cacheNum Number of caches. - * @param useFilter {@code true} if NodeFilter should be used. - * - * @return List of cache configurations. - */ - protected List createCacheConfigsWithBrokenCacheStore( - boolean failOnAllNodes, - int unluckyNode, - int unluckyCfg, - int cacheNum, - boolean useFilter - ) { - assert unluckyCfg >= 0 && unluckyCfg < cacheNum; - - final UUID uuid = ignite(unluckyNode).cluster().localNode().id(); - - List cfgs = new ArrayList<>(); - - for (int i = 0; i < cacheNum; ++i) { - CacheConfiguration cfg = new CacheConfiguration(); - - cfg.setName(DYNAMIC_CACHE_NAME + "-" + i); - - if (i == unluckyCfg) - cfg.setCacheStoreFactory(new BrokenStoreFactory(failOnAllNodes, getTestIgniteInstanceName(unluckyNode))); - - if (useFilter) - cfg.setNodeFilter(new NodeFilter(uuid)); - - cfgs.add(cfg); - } - - return cfgs; - } - - /** - * Test the basic cache operations. - * - * @param cache Cache. - * @throws Exception If test failed. 
- */ - protected void checkCacheOperations(IgniteCache cache) throws Exception { - int cnt = 1000; - - // Check cache operations. - for (int i = 0; i < cnt; ++i) - cache.put(i, new Value(i)); - - for (int i = 0; i < cnt; ++i) { - Value v = cache.get(i); - - assertNotNull(v); - assertEquals(i, v.getValue()); - } - - // Check Data Streamer functionality. - try (IgniteDataStreamer streamer = grid(0).dataStreamer(cache.getName())) { - for (int i = 0; i < 10_000; ++i) - streamer.addData(i, new Value(i)); - } - } - - /** - * - */ - public static class Value { - @QuerySqlField - private final int fieldVal; - - public Value(int fieldVal) { - this.fieldVal = fieldVal; - } - - public int getValue() { - return fieldVal; - } - } - - /** - * Filter specifying on which node the cache should be started. - */ - public static class NodeFilter implements IgnitePredicate { - /** Cache should be created node with certain UUID. */ - public UUID uuid; - - /** - * @param uuid node ID. - */ - public NodeFilter(UUID uuid) { - this.uuid = uuid; - } - - /** {@inheritDoc} */ - @Override public boolean apply(ClusterNode clusterNode) { - return clusterNode.id().equals(uuid); - } - } - - /** - * Affinity function that throws an exception when affinity nodes are calculated on the given node. - */ - public static class BrokenAffinityFunction extends RendezvousAffinityFunction { - /** */ - private static final long serialVersionUID = 0L; - - /** */ - @IgniteInstanceResource - private Ignite ignite; - - /** Exception should arise on all nodes. */ - private boolean eOnAllNodes = false; - - /** Exception should arise on node with certain name. */ - private String gridName; - - /** - * Constructs a good affinity function. - */ - public BrokenAffinityFunction() { - super(false, PARTITION_COUNT); - // No-op. - } - - /** - * @param eOnAllNodes {@code True} if exception should be thrown on all nodes. - * @param gridName Exception should arise on node with certain name. 
- */ - public BrokenAffinityFunction(boolean eOnAllNodes, String gridName) { - super(false, PARTITION_COUNT); - - this.eOnAllNodes = eOnAllNodes; - this.gridName = gridName; - } - - /** {@inheritDoc} */ - @Override public List> assignPartitions(AffinityFunctionContext affCtx) { - if (eOnAllNodes || ignite.name().equals(gridName)) - throw new IllegalStateException("Simulated exception [locNodeId=" - + ignite.cluster().localNode().id() + "]"); - else - return super.assignPartitions(affCtx); - } - } - - /** - * Factory that throws an exception is got created. - */ - public static class BrokenStoreFactory implements Factory> { - /** */ - @IgniteInstanceResource - private Ignite ignite; - - /** Exception should arise on all nodes. */ - boolean eOnAllNodes = true; - - /** Exception should arise on node with certain name. */ - public static String gridName; - - /** - * @param eOnAllNodes {@code True} if exception should be thrown on all nodes. - * @param gridName Exception should arise on node with certain name. - */ - public BrokenStoreFactory(boolean eOnAllNodes, String gridName) { - this.eOnAllNodes = eOnAllNodes; - - this.gridName = gridName; - } - - /** {@inheritDoc} */ - @Override public CacheStore create() { - if (eOnAllNodes || ignite.name().equals(gridName)) - throw new IllegalStateException("Simulated exception [locNodeId=" - + ignite.cluster().localNode().id() + "]"); - else - return null; - } - } -} +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import java.io.ObjectInputStream; +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.Callable; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicInteger; +import javax.cache.CacheException; +import javax.cache.configuration.Factory; +import javax.management.Attribute; +import javax.management.AttributeList; +import javax.management.AttributeNotFoundException; +import javax.management.InstanceAlreadyExistsException; +import javax.management.InstanceNotFoundException; +import javax.management.IntrospectionException; +import javax.management.InvalidAttributeValueException; +import javax.management.ListenerNotFoundException; +import javax.management.MBeanException; +import javax.management.MBeanInfo; +import javax.management.MBeanRegistrationException; +import javax.management.MBeanServer; +import javax.management.NotCompliantMBeanException; +import javax.management.NotificationFilter; +import javax.management.NotificationListener; +import javax.management.ObjectInstance; +import javax.management.ObjectName; +import javax.management.OperationsException; +import javax.management.QueryExp; +import javax.management.ReflectionException; +import javax.management.loading.ClassLoaderRepository; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteDataStreamer; +import 
org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.affinity.AffinityFunctionContext;
+import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
+import org.apache.ignite.cache.query.annotations.QuerySqlField;
+import org.apache.ignite.cache.store.CacheStore;
+import org.apache.ignite.cluster.BaselineNode;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.internal.IgniteInternalFuture;
+import org.apache.ignite.lang.IgnitePredicate;
+import org.apache.ignite.resources.IgniteInstanceResource;
+import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
+
+/**
+ * Tests the recovery after a dynamic cache start failure.
+ */
+@RunWith(JUnit4.class)
+public abstract class IgniteAbstractDynamicCacheStartFailTest extends GridCacheAbstractSelfTest {
+    /** */
+    private static final String DYNAMIC_CACHE_NAME = "TestDynamicCache";
+
+    /** */
+    private static final String CLIENT_GRID_NAME = "client";
+
+    /** */
+    protected static final String EXISTING_CACHE_NAME = "existing-cache";
+
+    /** */
+    private static final int PARTITION_COUNT = 16;
+
+    /** Failure MBean server. */
+    private static FailureMBeanServer mbSrv;
+
+    /** Coordinator node index. */
+    private int crdIdx = 0;
+
+    /** {@inheritDoc} */
+    @Override protected int gridCount() {
+        return 3;
+    }
+
+    /**
+     * Returns {@code true} if persistence is enabled.
+     *
+     * @return {@code true} if persistence is enabled.
+     */
+    protected boolean persistenceEnabled() {
+        return false;
+    }
+
+    /**
+     * @throws Exception If failed.
+ */ + @Test + public void testBrokenAffinityFunStartOnServerFailedOnClient() throws Exception { + final String clientName = CLIENT_GRID_NAME + "testBrokenAffinityFunStartOnServerFailedOnClient"; + + IgniteConfiguration clientCfg = getConfiguration(clientName); + + clientCfg.setClientMode(true); + + Ignite client = startGrid(clientName, clientCfg); + + CacheConfiguration cfg = new CacheConfiguration(); + + cfg.setName(DYNAMIC_CACHE_NAME + "-server-1"); + + cfg.setAffinity(new BrokenAffinityFunction(false, clientName)); + + try { + IgniteCache cache = ignite(0).getOrCreateCache(cfg); + } + catch (CacheException e) { + fail("Exception should not be thrown."); + } + + stopGrid(clientName); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testBrokenAffinityFunStartOnServerFailedOnServer() throws Exception { + final String clientName = CLIENT_GRID_NAME + "testBrokenAffinityFunStartOnServerFailedOnServer"; + + IgniteConfiguration clientCfg = getConfiguration(clientName); + + clientCfg.setClientMode(true); + + Ignite client = startGrid(clientName, clientCfg); + + CacheConfiguration cfg = new CacheConfiguration(); + + cfg.setName(DYNAMIC_CACHE_NAME + "-server-2"); + + cfg.setAffinity(new BrokenAffinityFunction(false, getTestIgniteInstanceName(0))); + + try { + IgniteCache cache = ignite(0).getOrCreateCache(cfg); + + fail("Expected exception was not thrown."); + } + catch (CacheException e) { + } + + stopGrid(clientName); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testBrokenAffinityFunStartOnClientFailOnServer() throws Exception { + final String clientName = CLIENT_GRID_NAME + "testBrokenAffinityFunStartOnClientFailOnServer"; + + IgniteConfiguration clientCfg = getConfiguration(clientName); + + clientCfg.setClientMode(true); + + Ignite client = startGrid(clientName, clientCfg); + + CacheConfiguration cfg = new CacheConfiguration(); + + cfg.setName(DYNAMIC_CACHE_NAME + "-client-2"); + + cfg.setAffinity(new BrokenAffinityFunction(false, getTestIgniteInstanceName(0))); + + try { + IgniteCache cache = client.getOrCreateCache(cfg); + + fail("Expected exception was not thrown."); + } + catch (CacheException e) { + } + + stopGrid(clientName); + } + + /** + * Test cache start with broken affinity function that throws an exception on all nodes. + */ + @Test + public void testBrokenAffinityFunOnAllNodes() { + final boolean failOnAllNodes = true; + final int unluckyNode = 0; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = 0; + + testDynamicCacheStart( + createCacheConfigsWithBrokenAffinityFun( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Test cache start with broken affinity function that throws an exception on initiator node. + */ + @Test + public void testBrokenAffinityFunOnInitiator() { + final boolean failOnAllNodes = false; + final int unluckyNode = 1; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = 1; + + testDynamicCacheStart( + createCacheConfigsWithBrokenAffinityFun( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Test cache start with broken affinity function that throws an exception on non-initiator node. 
+ */ + @Test + public void testBrokenAffinityFunOnNonInitiator() { + final boolean failOnAllNodes = false; + final int unluckyNode = 1; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = 2; + + testDynamicCacheStart( + createCacheConfigsWithBrokenAffinityFun( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Test cache start with broken affinity function that throws an exception on coordinator node. + */ + @Test + public void testBrokenAffinityFunOnCoordinatorDiffInitiator() { + final boolean failOnAllNodes = false; + final int unluckyNode = crdIdx; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = (crdIdx + 1) % gridCount(); + + testDynamicCacheStart( + createCacheConfigsWithBrokenAffinityFun( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Test cache start with broken affinity function that throws an exception on initiator node. + */ + @Test + public void testBrokenAffinityFunOnCoordinator() { + final boolean failOnAllNodes = false; + final int unluckyNode = crdIdx; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = crdIdx; + + testDynamicCacheStart( + createCacheConfigsWithBrokenAffinityFun( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Tests cache start with node filter and broken affinity function that throws an exception on initiator node. + */ + @Test + public void testBrokenAffinityFunWithNodeFilter() { + final boolean failOnAllNodes = false; + final int unluckyNode = 0; + final int unluckyCfg = 0; + final int numOfCaches = 1; + final int initiator = 0; + + testDynamicCacheStart( + createCacheConfigsWithBrokenAffinityFun( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, true), + initiator); + } + + /** + * Tests cache start with broken cache store that throws an exception on all nodes. 
+ */ + @Test + public void testBrokenCacheStoreOnAllNodes() { + final boolean failOnAllNodes = true; + final int unluckyNode = 0; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = 0; + + testDynamicCacheStart( + createCacheConfigsWithBrokenCacheStore( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Tests cache start with broken cache store that throws an exception on initiator node. + */ + @Test + public void testBrokenCacheStoreOnInitiator() { + final boolean failOnAllNodes = false; + final int unluckyNode = 1; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = 1; + + testDynamicCacheStart( + createCacheConfigsWithBrokenCacheStore( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Tests cache start that throws an Ignite checked exception on initiator node. + */ + @Test + public void testThrowsIgniteCheckedExceptionOnInitiator() { + final int unluckyNode = 1; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = 1; + + testDynamicCacheStart( + createCacheConfigsWithFailureMbServer(unluckyNode, unluckyCfg, numOfCaches), + initiator); + } + + /** + * Tests cache start with broken cache store that throws an exception on non-initiator node. + */ + @Test + public void testBrokenCacheStoreOnNonInitiator() { + final boolean failOnAllNodes = false; + final int unluckyNode = 1; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = 2; + + testDynamicCacheStart( + createCacheConfigsWithBrokenCacheStore( + failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false), + initiator); + } + + /** + * Tests cache start that throws an Ignite checked exception on non-initiator node. 
+     */
+    @Test
+    public void testThrowsIgniteCheckedExceptionOnNonInitiator() {
+        final int unluckyNode = 1;
+        final int unluckyCfg = 1;
+        final int numOfCaches = 3;
+        final int initiator = 2;
+
+        testDynamicCacheStart(
+            createCacheConfigsWithFailureMbServer(unluckyNode, unluckyCfg, numOfCaches),
+            initiator);
+    }
+
+    /**
+     * Tests cache start with broken cache store that throws an exception on coordinator node
+     * that is not the initiator node.
+     */
+    @Test
+    public void testBrokenCacheStoreOnCoordinatorDiffInitiator() {
+        final boolean failOnAllNodes = false;
+        final int unluckyNode = crdIdx;
+        final int unluckyCfg = 1;
+        final int numOfCaches = 3;
+        final int initiator = (crdIdx + 1) % gridCount();
+
+        testDynamicCacheStart(
+            createCacheConfigsWithBrokenCacheStore(
+                failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false),
+            initiator);
+    }
+
+    /**
+     * Tests cache start that throws an Ignite checked exception on coordinator node
+     * that is not the initiator node.
+     */
+    @Test
+    public void testThrowsIgniteCheckedExceptionOnCoordinatorDiffInitiator() {
+        final int unluckyNode = crdIdx;
+        final int unluckyCfg = 1;
+        final int numOfCaches = 3;
+        final int initiator = (crdIdx + 1) % gridCount();
+
+        testDynamicCacheStart(
+            createCacheConfigsWithFailureMbServer(unluckyNode, unluckyCfg, numOfCaches),
+            initiator);
+    }
+
+    /**
+     * Tests cache start with broken cache store that throws an exception on coordinator node.
+     */
+    @Test
+    public void testBrokenCacheStoreFunOnCoordinator() {
+        final boolean failOnAllNodes = false;
+        final int unluckyNode = crdIdx;
+        final int unluckyCfg = 1;
+        final int numOfCaches = 3;
+        final int initiator = crdIdx;
+
+        testDynamicCacheStart(
+            createCacheConfigsWithBrokenCacheStore(
+                failOnAllNodes, unluckyNode, unluckyCfg, numOfCaches, false),
+            initiator);
+    }
+
+    /**
+     * Tests cache start that throws an Ignite checked exception on coordinator node.
+ */ + @Test + public void testThrowsIgniteCheckedExceptionOnCoordinator() { + final int unluckyNode = crdIdx; + final int unluckyCfg = 1; + final int numOfCaches = 3; + final int initiator = crdIdx; + + testDynamicCacheStart( + createCacheConfigsWithFailureMbServer(unluckyNode, unluckyCfg, numOfCaches), + initiator); + } + + /** + * Tests multiple creation of cache with broken affinity function. + */ + @Test + public void testCreateCacheMultipleTimes() { + final boolean failOnAllNodes = false; + final int unluckyNode = 1; + final int unluckyCfg = 0; + final int numOfAttempts = 15; + + CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun( + failOnAllNodes, unluckyNode, unluckyCfg, 1, false).get(0); + + for (int i = 0; i < numOfAttempts; ++i) { + try { + IgniteCache cache = ignite(0).getOrCreateCache(cfg); + + fail("Expected exception was not thrown"); + } + catch (CacheException e) { + } + } + } + + /** + * Tests that a cache with the same name can be started after failure if cache configuration is corrected. + * + * @throws Exception If test failed. + */ + @Test + public void testCacheStartAfterFailure() throws Exception { + CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun( + false, 1, 0, 1, false).get(0); + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Object call() throws Exception { + grid(0).getOrCreateCache(cfg); + return null; + } + }, CacheException.class, null); + + // Correct the cache configuration. Default constructor creates a good affinity function. + cfg.setAffinity(new BrokenAffinityFunction()); + + checkCacheOperations(grid(0).getOrCreateCache(cfg)); + } + + /** + * Tests that other cache (existed before the failed start) is still operable after the failure. + * + * @throws Exception If test failed. 
+     */
+    @Test
+    public void testExistingCacheAfterFailure() throws Exception {
+        IgniteCache cache = grid(0).getOrCreateCache(createCacheConfiguration(EXISTING_CACHE_NAME));
+
+        CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun(
+            false, 1, 0, 1, false).get(0);
+
+        GridTestUtils.assertThrows(log, new Callable() {
+            @Override public Object call() throws Exception {
+                grid(0).getOrCreateCache(cfg);
+                return null;
+            }
+        }, CacheException.class, null);
+
+        checkCacheOperations(cache);
+    }
+
+    /**
+     * Tests that the other cache works as expected after the failure and further topology changes.
+     *
+     * @throws Exception If test failed.
+     */
+    @Test
+    public void testTopologyChangesAfterFailure() throws Exception {
+        final String clientName = "testTopologyChangesAfterFailure";
+
+        IgniteCache cache = grid(0).getOrCreateCache(createCacheConfiguration(EXISTING_CACHE_NAME));
+
+        checkCacheOperations(cache);
+
+        CacheConfiguration cfg = createCacheConfigsWithBrokenAffinityFun(
+            false, 0, 0, 1, false).get(0);
+
+        GridTestUtils.assertThrows(log, new Callable() {
+            @Override public Object call() throws Exception {
+                grid(0).getOrCreateCache(cfg);
+                return null;
+            }
+        }, CacheException.class, null);
+
+        awaitPartitionMapExchange();
+
+        checkCacheOperations(cache);
+
+        // Start a new server node and check cache operations.
+        Ignite serverNode = startGrid(gridCount() + 1);
+
+        if (persistenceEnabled()) {
+            // Start a new client node to perform baseline change.
+            // TODO: This change is a workaround:
+            // sometimes a problem with cache configuration deserialization from the test thread arises.
+ final String clientName1 = "baseline-changer"; + + IgniteConfiguration clientCfg = getConfiguration(clientName1); + + clientCfg.setClientMode(true); + + Ignite clientNode = startGrid(clientName1, clientCfg); + + List baseline = new ArrayList<>(grid(0).cluster().currentBaselineTopology()); + + baseline.add(serverNode.cluster().localNode()); + + clientNode.cluster().setBaselineTopology(baseline); + } + + awaitPartitionMapExchange(); + + checkCacheOperations(serverNode.cache(EXISTING_CACHE_NAME)); + + // Start a new client node and check cache operations. + IgniteConfiguration clientCfg = getConfiguration(clientName); + + clientCfg.setClientMode(true); + + Ignite clientNode = startGrid(clientName, clientCfg); + + checkCacheOperations(clientNode.cache(EXISTING_CACHE_NAME)); + } + + @Test + public void testConcurrentClientNodeJoins() throws Exception { + final int clientCnt = 3; + final int numberOfAttempts = 5; + + IgniteCache cache = grid(0).getOrCreateCache(createCacheConfiguration(EXISTING_CACHE_NAME)); + + final AtomicInteger attemptCnt = new AtomicInteger(); + final CountDownLatch stopLatch = new CountDownLatch(clientCnt); + + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(new Callable() { + @Override public Object call() throws Exception { + String clientName = Thread.currentThread().getName(); + + try { + for (int i = 0; i < numberOfAttempts; ++i) { + int uniqueCnt = attemptCnt.getAndIncrement(); + + IgniteConfiguration clientCfg = getConfiguration(clientName + uniqueCnt); + + clientCfg.setClientMode(true); + + final Ignite clientNode = startGrid(clientName, clientCfg); + + CacheConfiguration cfg = new CacheConfiguration(); + + cfg.setName(clientName + uniqueCnt); + + String instanceName = getTestIgniteInstanceName(uniqueCnt % gridCount()); + + cfg.setAffinity(new BrokenAffinityFunction(false, instanceName)); + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Object call() throws Exception { + 
clientNode.getOrCreateCache(cfg); + return null; + } + }, CacheException.class, null); + + stopGrid(clientName, true); + } + } + catch (Exception e) { + fail("Unexpected exception: " + e.getMessage()); + } + finally { + stopLatch.countDown(); + } + + return null; + } + }, clientCnt, "start-client-thread"); + + stopLatch.await(); + + assertEquals(numberOfAttempts * clientCnt, attemptCnt.get()); + + checkCacheOperations(cache); + } + + protected void testDynamicCacheStart(final Collection cfgs, final int initiatorId) { + assert initiatorId < gridCount(); + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Object call() throws Exception { + grid(initiatorId).getOrCreateCaches(cfgs); + return null; + } + }, CacheException.class, null); + + for (CacheConfiguration cfg: cfgs) { + IgniteCache cache = grid(initiatorId).cache(cfg.getName()); + + assertNull(cache); + } + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration res = super.getConfiguration(igniteInstanceName); + + if (mbSrv == null) + mbSrv = new FailureMBeanServer(res.getMBeanServer()); + + res.setMBeanServer(mbSrv); + + return res; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + mbSrv.clear(); + + for (String cacheName : grid(0).cacheNames()) { + if (!(EXISTING_CACHE_NAME.equals(cacheName) || DEFAULT_CACHE_NAME.equals(cacheName))) + grid(0).cache(cacheName).destroy(); + } + } + + /** + * Creates new cache configuration with the given name. + * + * @param cacheName Cache name. + * @return New cache configuration. 
+ */ + protected CacheConfiguration createCacheConfiguration(String cacheName) { + CacheConfiguration cfg = new CacheConfiguration() + .setName(cacheName) + .setCacheMode(CacheMode.PARTITIONED) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setAffinity(new BrokenAffinityFunction()); + + return cfg; + } + + /** + * Create list of cache configurations. + * + * @param failOnAllNodes {@code true} if affinity function should be broken on all nodes. + * @param unluckyNode Node, where exception is raised. + * @param unluckyCfg Unlucky cache configuration number. + * @param cacheNum Number of caches. + * @param useFilter {@code true} if NodeFilter should be used. + * + * @return List of cache configurations. + */ + protected List createCacheConfigsWithBrokenAffinityFun( + boolean failOnAllNodes, + int unluckyNode, + final int unluckyCfg, + int cacheNum, + boolean useFilter + ) { + assert unluckyCfg >= 0 && unluckyCfg < cacheNum; + + final UUID uuid = ignite(unluckyNode).cluster().localNode().id(); + + List cfgs = new ArrayList<>(); + + for (int i = 0; i < cacheNum; ++i) { + CacheConfiguration cfg = createCacheConfiguration(DYNAMIC_CACHE_NAME + "-" + i); + + if (i == unluckyCfg) + cfg.setAffinity(new BrokenAffinityFunction(failOnAllNodes, getTestIgniteInstanceName(unluckyNode))); + + if (useFilter) + cfg.setNodeFilter(new NodeFilter(uuid)); + + cfgs.add(cfg); + } + + return cfgs; + } + + /** + * Create list of cache configurations. + * + * @param failOnAllNodes {@code true} if cache store should be broken on all nodes. + * @param unluckyNode Node, where exception is raised. + * @param unluckyCfg Unlucky cache configuration number. + * @param cacheNum Number of caches. + * @param useFilter {@code true} if NodeFilter should be used. + * + * @return List of cache configurations. 
+ */ + protected List createCacheConfigsWithBrokenCacheStore( + boolean failOnAllNodes, + int unluckyNode, + int unluckyCfg, + int cacheNum, + boolean useFilter + ) { + assert unluckyCfg >= 0 && unluckyCfg < cacheNum; + + final UUID uuid = ignite(unluckyNode).cluster().localNode().id(); + + List cfgs = new ArrayList<>(); + + for (int i = 0; i < cacheNum; ++i) { + CacheConfiguration cfg = new CacheConfiguration(); + + cfg.setName(DYNAMIC_CACHE_NAME + "-" + i); + + if (i == unluckyCfg) + cfg.setCacheStoreFactory(new BrokenStoreFactory(failOnAllNodes, getTestIgniteInstanceName(unluckyNode))); + + if (useFilter) + cfg.setNodeFilter(new NodeFilter(uuid)); + + cfgs.add(cfg); + } + + return cfgs; + } + + /** + * Create list of cache configurations. + * + * @param unluckyNode Node, where exception is raised. + * @param unluckyCfg Unlucky cache configuration number. + * @param cacheNum Number of caches. + * + * @return List of cache configurations. + */ + private List createCacheConfigsWithFailureMbServer( + int unluckyNode, + int unluckyCfg, + int cacheNum + ) { + assert unluckyCfg >= 0 && unluckyCfg < cacheNum; + + List cfgs = new ArrayList<>(); + + for (int i = 0; i < cacheNum; ++i) { + CacheConfiguration cfg = new CacheConfiguration(); + + String cacheName = DYNAMIC_CACHE_NAME + "-" + i; + + cfg.setName(cacheName); + + if (i == unluckyCfg) + mbSrv.cache(cacheName); + + cfgs.add(cfg); + } + + mbSrv.node(getTestIgniteInstanceName(unluckyNode)); + + return cfgs; + } + + /** + * Test the basic cache operations. + * + * @param cache Cache. + * @throws Exception If test failed. + */ + protected void checkCacheOperations(IgniteCache cache) throws Exception { + int cnt = 1000; + + // Check cache operations. + for (int i = 0; i < cnt; ++i) + cache.put(i, new Value(i)); + + for (int i = 0; i < cnt; ++i) { + Value v = cache.get(i); + + assertNotNull(v); + assertEquals(i, v.getValue()); + } + + // Check Data Streamer functionality. 
+        try (IgniteDataStreamer streamer = grid(0).dataStreamer(cache.getName())) {
+            for (int i = 0; i < 10_000; ++i)
+                streamer.addData(i, new Value(i));
+        }
+    }
+
+    /**
+     * Test value class.
+     */
+    public static class Value {
+        @QuerySqlField
+        private final int fieldVal;
+
+        public Value(int fieldVal) {
+            this.fieldVal = fieldVal;
+        }
+
+        public int getValue() {
+            return fieldVal;
+        }
+    }
+
+    /**
+     * Filter specifying on which node the cache should be started.
+     */
+    public static class NodeFilter implements IgnitePredicate<ClusterNode> {
+        /** Cache should be created on the node with this UUID. */
+        public UUID uuid;
+
+        /**
+         * @param uuid Node ID.
+         */
+        public NodeFilter(UUID uuid) {
+            this.uuid = uuid;
+        }
+
+        /** {@inheritDoc} */
+        @Override public boolean apply(ClusterNode clusterNode) {
+            return clusterNode.id().equals(uuid);
+        }
+    }
+
+    /**
+     * Affinity function that throws an exception when affinity nodes are calculated on the given node.
+     */
+    public static class BrokenAffinityFunction extends RendezvousAffinityFunction {
+        /** */
+        private static final long serialVersionUID = 0L;
+
+        /** */
+        @IgniteInstanceResource
+        private Ignite ignite;
+
+        /** Exception should arise on all nodes. */
+        private boolean eOnAllNodes = false;
+
+        /** Exception should arise on the node with the given name. */
+        private String gridName;
+
+        /**
+         * Constructs a good affinity function.
+         */
+        public BrokenAffinityFunction() {
+            super(false, PARTITION_COUNT);
+            // No-op.
+        }
+
+        /**
+         * @param eOnAllNodes {@code True} if exception should be thrown on all nodes.
+         * @param gridName Name of the node where the exception should arise.
+         */
+        public BrokenAffinityFunction(boolean eOnAllNodes, String gridName) {
+            super(false, PARTITION_COUNT);
+
+            this.eOnAllNodes = eOnAllNodes;
+            this.gridName = gridName;
+        }
+
+        /** {@inheritDoc} */
+        @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext affCtx) {
+            if (eOnAllNodes || ignite.name().equals(gridName))
+                throw new IllegalStateException("Simulated exception [locNodeId="
+                    + ignite.cluster().localNode().id() + "]");
+            else
+                return super.assignPartitions(affCtx);
+        }
+    }
+
+    /**
+     * Factory that throws an exception when the cache store is created.
+     */
+    public static class BrokenStoreFactory implements Factory<CacheStore<Integer, Value>> {
+        /** */
+        @IgniteInstanceResource
+        private Ignite ignite;
+
+        /** Exception should arise on all nodes. */
+        boolean eOnAllNodes = true;
+
+        /** Exception should arise on the node with the given name. */
+        public static String gridName;
+
+        /**
+         * @param eOnAllNodes {@code True} if exception should be thrown on all nodes.
+         * @param gridName Name of the node where the exception should arise.
+         */
+        public BrokenStoreFactory(boolean eOnAllNodes, String gridName) {
+            this.eOnAllNodes = eOnAllNodes;
+
+            this.gridName = gridName;
+        }
+
+        /** {@inheritDoc} */
+        @Override public CacheStore<Integer, Value> create() {
+            if (eOnAllNodes || ignite.name().equals(gridName))
+                throw new IllegalStateException("Simulated exception [locNodeId="
+                    + ignite.cluster().localNode().id() + "]");
+            else
+                return null;
+        }
+    }
+
+    /** Failure MBean server. */
+    private class FailureMBeanServer implements MBeanServer {
+        /** */
+        private final MBeanServer origin;
+
+        /** Names of caches whose MBean registration must fail. */
+        private final Set<String> caches = new HashSet<>();
+
+        /** Names of nodes on which MBean registration must fail. */
+        private final Set<String> nodes = new HashSet<>();
+
+        /** */
+        private FailureMBeanServer(MBeanServer origin) {
+            this.origin = origin;
+        }
+
+        /** Add cache name to failure set. */
+        void cache(String cache) {
+            caches.add('\"' + cache + '\"');
+        }
+
+        /** Add node name to failure set.
*/
+        void node(String node) {
+            nodes.add(node);
+        }
+
+        /** Clear the failure sets of caches and nodes. */
+        void clear() {
+            caches.clear();
+            nodes.clear();
+        }
+
+        /** {@inheritDoc} */
+        @Override public ObjectInstance registerMBean(Object obj, ObjectName name)
+            throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {
+            String node = name.getKeyProperty("igniteInstanceName");
+
+            if (nodes.contains(node) && caches.contains(name.getKeyProperty("group")))
+                throw new MBeanRegistrationException(new Exception("Simulated exception [node=" + node + ']'));
+
+            return origin.registerMBean(obj, name);
+        }
+
+        /** {@inheritDoc} */
+        @Override public ObjectInstance createMBean(String clsName, ObjectName name)
+            throws ReflectionException, InstanceAlreadyExistsException, MBeanException, NotCompliantMBeanException {
+            return origin.createMBean(clsName, name);
+        }
+
+        /** {@inheritDoc} */
+        @Override public ObjectInstance createMBean(String clsName, ObjectName name, ObjectName ldrName)
+            throws ReflectionException, InstanceAlreadyExistsException,
+            MBeanException, NotCompliantMBeanException, InstanceNotFoundException {
+            return origin.createMBean(clsName, name, ldrName);
+        }
+
+        /** {@inheritDoc} */
+        @Override public ObjectInstance createMBean(String clsName, ObjectName name, Object[] params,
+            String[] signature) throws ReflectionException, InstanceAlreadyExistsException,
+            MBeanException, NotCompliantMBeanException {
+            return origin.createMBean(clsName, name, params, signature);
+        }
+
+        /** {@inheritDoc} */
+        @Override public ObjectInstance createMBean(String clsName, ObjectName name, ObjectName ldrName,
+            Object[] params, String[] signature) throws ReflectionException, InstanceAlreadyExistsException,
+            MBeanException, NotCompliantMBeanException, InstanceNotFoundException {
+            return origin.createMBean(clsName, name, ldrName, params, signature);
+        }
+
+        /** {@inheritDoc} */
+        @Override public void unregisterMBean(ObjectName name)
throws InstanceNotFoundException, MBeanRegistrationException {
+            origin.unregisterMBean(name);
+        }
+
+        /** {@inheritDoc} */
+        @Override public ObjectInstance getObjectInstance(ObjectName name) throws InstanceNotFoundException {
+            return origin.getObjectInstance(name);
+        }
+
+        /** {@inheritDoc} */
+        @Override public Set<ObjectInstance> queryMBeans(ObjectName name, QueryExp qry) {
+            return origin.queryMBeans(name, qry);
+        }
+
+        /** {@inheritDoc} */
+        @Override public Set<ObjectName> queryNames(ObjectName name, QueryExp qry) {
+            return origin.queryNames(name, qry);
+        }
+
+        /** {@inheritDoc} */
+        @Override public boolean isRegistered(ObjectName name) {
+            return origin.isRegistered(name);
+        }
+
+        /** {@inheritDoc} */
+        @Override public Integer getMBeanCount() {
+            return origin.getMBeanCount();
+        }
+
+        /** {@inheritDoc} */
+        @Override public Object getAttribute(ObjectName name, String attribute)
+            throws MBeanException, AttributeNotFoundException, InstanceNotFoundException, ReflectionException {
+            return origin.getAttribute(name, attribute);
+        }
+
+        /** {@inheritDoc} */
+        @Override public AttributeList getAttributes(ObjectName name,
+            String[] attrs) throws InstanceNotFoundException, ReflectionException {
+            return origin.getAttributes(name, attrs);
+        }
+
+        /** {@inheritDoc} */
+        @Override public void setAttribute(ObjectName name,
+            Attribute attribute) throws InstanceNotFoundException, AttributeNotFoundException,
+            InvalidAttributeValueException, MBeanException, ReflectionException {
+            origin.setAttribute(name, attribute);
+        }
+
+        /** {@inheritDoc} */
+        @Override public AttributeList setAttributes(ObjectName name,
+            AttributeList attrs) throws InstanceNotFoundException, ReflectionException {
+            return origin.setAttributes(name, attrs);
+        }
+
+        /** {@inheritDoc} */
+        @Override public Object invoke(ObjectName name, String operationName, Object[] params,
+            String[] signature) throws InstanceNotFoundException, MBeanException, ReflectionException {
+            return origin.invoke(name, operationName, params,
signature); + } + + /** {@inheritDoc} */ + @Override public String getDefaultDomain() { + return origin.getDefaultDomain(); + } + + /** {@inheritDoc} */ + @Override public String[] getDomains() { + return origin.getDomains(); + } + + /** {@inheritDoc} */ + @Override public void addNotificationListener(ObjectName name, NotificationListener lsnr, + NotificationFilter filter, Object handback) throws InstanceNotFoundException { + origin.addNotificationListener(name, lsnr, filter, handback); + } + + /** {@inheritDoc} */ + @Override public void addNotificationListener(ObjectName name, ObjectName lsnr, + NotificationFilter filter, Object handback) throws InstanceNotFoundException { + origin.addNotificationListener(name, lsnr, filter, handback); + } + + /** {@inheritDoc} */ + @Override public void removeNotificationListener(ObjectName name, + ObjectName lsnr) throws InstanceNotFoundException, ListenerNotFoundException { + origin.removeNotificationListener(name, lsnr); + } + + /** {@inheritDoc} */ + @Override public void removeNotificationListener(ObjectName name, ObjectName lsnr, + NotificationFilter filter, Object handback) throws InstanceNotFoundException, ListenerNotFoundException { + origin.removeNotificationListener(name, lsnr, filter, handback); + } + + /** {@inheritDoc} */ + @Override public void removeNotificationListener(ObjectName name, + NotificationListener lsnr) throws InstanceNotFoundException, ListenerNotFoundException { + origin.removeNotificationListener(name, lsnr); + } + + /** {@inheritDoc} */ + @Override public void removeNotificationListener(ObjectName name, NotificationListener lsnr, + NotificationFilter filter, Object handback) throws InstanceNotFoundException, ListenerNotFoundException { + origin.removeNotificationListener(name, lsnr, filter, handback); + } + + /** {@inheritDoc} */ + @Override public MBeanInfo getMBeanInfo( + ObjectName name) throws InstanceNotFoundException, IntrospectionException, ReflectionException { + return 
origin.getMBeanInfo(name); + } + + /** {@inheritDoc} */ + @Override public boolean isInstanceOf(ObjectName name, String clsName) throws InstanceNotFoundException { + return origin.isInstanceOf(name, clsName); + } + + /** {@inheritDoc} */ + @Override public Object instantiate(String clsName) throws ReflectionException, MBeanException { + return origin.instantiate(clsName); + } + + /** {@inheritDoc} */ + @Override public Object instantiate(String clsName, + ObjectName ldrName) throws ReflectionException, MBeanException, InstanceNotFoundException { + return origin.instantiate(clsName, ldrName); + } + + /** {@inheritDoc} */ + @Override public Object instantiate(String clsName, Object[] params, + String[] signature) throws ReflectionException, MBeanException { + return origin.instantiate(clsName, params, signature); + } + + /** {@inheritDoc} */ + @Override public Object instantiate(String clsName, ObjectName ldrName, Object[] params, + String[] signature) throws ReflectionException, MBeanException, InstanceNotFoundException { + return origin.instantiate(clsName, ldrName, params, signature); + } + + /** {@inheritDoc} */ + @Override @Deprecated public ObjectInputStream deserialize(ObjectName name, byte[] data) + throws OperationsException { + return origin.deserialize(name, data); + } + + /** {@inheritDoc} */ + @Override @Deprecated public ObjectInputStream deserialize(String clsName, byte[] data) + throws OperationsException, ReflectionException { + return origin.deserialize(clsName, data); + } + + /** {@inheritDoc} */ + @Override @Deprecated public ObjectInputStream deserialize(String clsName, ObjectName ldrName, byte[] data) + throws OperationsException, ReflectionException { + return origin.deserialize(clsName, ldrName, data); + } + + /** {@inheritDoc} */ + @Override public ClassLoader getClassLoaderFor(ObjectName mbeanName) throws InstanceNotFoundException { + return origin.getClassLoaderFor(mbeanName); + } + + /** {@inheritDoc} */ + @Override public ClassLoader 
getClassLoader(ObjectName ldrName) throws InstanceNotFoundException { + return origin.getClassLoader(ldrName); + } + + /** {@inheritDoc} */ + @Override public ClassLoaderRepository getClassLoaderRepository() { + return origin.getClassLoaderRepository(); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractStopBusySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractStopBusySelfTest.java index 257ca77399f08..1950630d4be56 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractStopBusySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractStopBusySelfTest.java @@ -47,12 +47,12 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; -import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -60,6 +60,7 @@ /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheAbstractStopBusySelfTest extends GridCommonAbstractTest { /** */ public static final int CLN_GRD = 0; @@ -70,9 +71,6 @@ public abstract class IgniteCacheAbstractStopBusySelfTest extends GridCommonAbst /** */ public static final String CACHE_NAME = "StopTest"; - /** */ - public final TcpDiscoveryIpFinder finder = new 
TcpDiscoveryVmIpFinder(true); - /** */ private AtomicBoolean suspended = new AtomicBoolean(false); @@ -119,7 +117,7 @@ protected CacheAtomicityMode atomicityMode(){ cfg.setCommunicationSpi(commSpi); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(finder).setForceServerMode(true)); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); cfg.setCacheConfiguration(cacheCfg); @@ -166,6 +164,7 @@ protected CacheAtomicityMode atomicityMode(){ /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { executeTest(new Callable() { /** {@inheritDoc} */ @@ -184,6 +183,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemove() throws Exception { executeTest(new Callable() { /** {@inheritDoc} */ @@ -202,6 +202,7 @@ public void testRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAsync() throws Exception { executeTest(new Callable() { /** {@inheritDoc} */ @@ -220,6 +221,7 @@ public void testPutAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { bannedMsg.set(GridNearSingleGetRequest.class); @@ -240,6 +242,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAll() throws Exception { bannedMsg.set(GridNearGetRequest.class); @@ -303,6 +306,7 @@ private void executeTest(Callable call) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutBatch() throws Exception { assert !suspended.get(); @@ -407,4 +411,4 @@ private class StopRunnable implements Runnable { info("Grid stopped."); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractTest.java index 370a7a8af70d0..403a800ee2b1d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractTest.java @@ -34,8 +34,6 @@ import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.concurrent.ConcurrentHashMap; @@ -47,9 +45,6 @@ * Abstract class for cache tests. 
*/ public abstract class IgniteCacheAbstractTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ public static final Map storeMap = new ConcurrentHashMap<>(); @@ -87,9 +82,9 @@ protected void startGrids() throws Exception { ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); - TcpDiscoverySpi disco = new TcpDiscoverySpi().setForceServerMode(true); + TcpDiscoverySpi disco = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - disco.setIpFinder(ipFinder); + disco.setForceServerMode(true); if (isDebug()) disco.setAckTimeout(Integer.MAX_VALUE); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAtomicStopBusySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAtomicStopBusySelfTest.java index 281397a7c98c8..58ab3bb3279b3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAtomicStopBusySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAtomicStopBusySelfTest.java @@ -19,10 +19,14 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateRequest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Stopped node when client operations are executing. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheAtomicStopBusySelfTest extends IgniteCacheAbstractStopBusySelfTest { /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { @@ -30,6 +34,7 @@ public class IgniteCacheAtomicStopBusySelfTest extends IgniteCacheAbstractStopBu } /** {@inheritDoc} */ + @Test @Override public void testPut() throws Exception { bannedMsg.set(GridNearAtomicSingleUpdateRequest.class); @@ -37,6 +42,7 @@ public class IgniteCacheAtomicStopBusySelfTest extends IgniteCacheAbstractStopBu } /** {@inheritDoc} */ + @Test @Override public void testPutBatch() throws Exception { bannedMsg.set(GridNearAtomicSingleUpdateRequest.class); @@ -44,6 +50,7 @@ public class IgniteCacheAtomicStopBusySelfTest extends IgniteCacheAbstractStopBu } /** {@inheritDoc} */ + @Test @Override public void testPutAsync() throws Exception { bannedMsg.set(GridNearAtomicSingleUpdateRequest.class); @@ -51,9 +58,10 @@ public class IgniteCacheAtomicStopBusySelfTest extends IgniteCacheAbstractStopBu } /** {@inheritDoc} */ + @Test @Override public void testRemove() throws Exception { bannedMsg.set(GridNearAtomicSingleUpdateRequest.class); super.testPut(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryEntryProcessorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryEntryProcessorSelfTest.java index 963fe88b899e5..12208865d4bb9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryEntryProcessorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryEntryProcessorSelfTest.java @@ -30,18 +30,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.S; -import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheBinaryEntryProcessorSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRV_CNT = 4; @@ -52,8 +50,6 @@ public class IgniteCacheBinaryEntryProcessorSelfTest extends GridCommonAbstractT @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (getTestIgniteInstanceName(SRV_CNT).equals(igniteInstanceName)) cfg.setClientMode(true); @@ -88,6 +84,7 @@ private CacheConfiguration cacheConfiguration(CacheMode cach /** * @throws Exception If failed. */ + @Test public void testPartitionedTransactional() throws Exception { checkInvokeBinaryObject(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL); } @@ -95,6 +92,7 @@ public void testPartitionedTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplicatedTransactional() throws Exception { checkInvokeBinaryObject(CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL); } @@ -102,6 +100,7 @@ public void testReplicatedTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionedAtomic() throws Exception { checkInvokeBinaryObject(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL); } @@ -109,6 +108,7 @@ public void testPartitionedAtomic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReplicatedAtomic() throws Exception { checkInvokeBinaryObject(CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryObjectsScanSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryObjectsScanSelfTest.java index 64c2bbc6e6f89..7c7ae98f21960 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryObjectsScanSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheBinaryObjectsScanSelfTest.java @@ -26,19 +26,17 @@ import org.apache.ignite.cache.query.ScanQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheBinaryObjectsScanSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CLS_NAME = "org.apache.ignite.tests.p2p.cache.Person"; @@ -68,11 +66,6 @@ public class IgniteCacheBinaryObjectsScanSelfTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - 
cfg.setDiscoverySpi(discoSpi); cfg.setIncludeEventTypes(getIncludeEventTypes()); cfg.setMarshaller(null); @@ -120,6 +113,7 @@ private void populateCache(ClassLoader ldr) throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanNoClasses() throws Exception { Ignite client = grid("client"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsFullApiTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsFullApiTest.java index 84d0a98e9229f..a7a2fdc25d17f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsFullApiTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsFullApiTest.java @@ -88,6 +88,9 @@ import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -113,7 +116,8 @@ /** * Full API cache test. */ -@SuppressWarnings({"TransientFieldInNonSerializableClass", "unchecked"}) +@SuppressWarnings({"unchecked"}) +@RunWith(JUnit4.class) public class IgniteCacheConfigVariationsFullApiTest extends IgniteCacheConfigVariationsAbstractTest { /** Test timeout */ private static final long TEST_TIMEOUT = 60 * 1000; @@ -168,6 +172,7 @@ public class IgniteCacheConfigVariationsFullApiTest extends IgniteCacheConfigVar /** * @throws Exception In case of error. */ + @Test public void testSize() throws Exception { assert jcache().localSize() == 0; @@ -271,6 +276,7 @@ else if (cacheMode() == PARTITIONED) /** * @throws Exception In case of error. 
*/ + @Test public void testContainsKey() throws Exception { Map vals = new HashMap<>(); @@ -292,6 +298,7 @@ public void testContainsKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContainsKeyTx() throws Exception { if (!txEnabled()) return; @@ -326,6 +333,7 @@ public void testContainsKeyTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContainsKeysTx() throws Exception { if (!txEnabled()) return; @@ -367,6 +375,7 @@ public void testContainsKeysTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveInExplicitLocks() throws Exception { if (lockingEnabled()) { IgniteCache cache = jcache(); @@ -392,6 +401,7 @@ public void testRemoveInExplicitLocks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAllSkipStore() throws Exception { if (!storeEnabled()) return; @@ -410,6 +420,7 @@ public void testRemoveAllSkipStore() throws Exception { /** * @throws IgniteCheckedException If failed. */ + @Test public void testAtomicOps() throws IgniteCheckedException { IgniteCache c = jcache(); @@ -442,6 +453,7 @@ public void testAtomicOps() throws IgniteCheckedException { /** * @throws Exception In case of error. */ + @Test public void testGet() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() { @@ -461,6 +473,7 @@ public void testGet() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -489,6 +502,7 @@ public void testGetAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAsync() throws Exception { IgniteCache cache = jcache(); @@ -509,6 +523,7 @@ public void testGetAsync() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testGetAll() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() { @@ -600,6 +615,7 @@ public void testGetAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllWithNulls() throws Exception { final IgniteCache cache = jcache(); @@ -620,6 +636,7 @@ public void testGetAllWithNulls() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetTxNonExistingKey() throws Exception { if (txShouldBeUsed()) { try (Transaction ignored = transactions().txStart()) { @@ -631,6 +648,7 @@ public void testGetTxNonExistingKey() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllAsyncOld() throws Exception { final IgniteCache cache = jcache(); @@ -662,6 +680,7 @@ public void testGetAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAllAsync() throws Exception { final IgniteCache cache = jcache(); @@ -689,6 +708,7 @@ public void testGetAllAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPut() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -734,6 +754,7 @@ public void testPut() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutTx() throws Exception { if (txShouldBeUsed()) { IgniteCache cache = jcache(); @@ -773,6 +794,7 @@ public void testPutTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeOptimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -784,6 +806,7 @@ public void testInvokeOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvokeOptimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -795,6 +818,7 @@ public void testInvokeOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokePessimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -806,6 +830,7 @@ public void testInvokePessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokePessimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -817,6 +842,7 @@ public void testInvokePessimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteInvokeOptimisticReadCommitted1() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -828,6 +854,7 @@ public void testIgniteInvokeOptimisticReadCommitted1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteInvokeOptimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -839,6 +866,7 @@ public void testIgniteInvokeOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteInvokePessimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -850,6 +878,7 @@ public void testIgniteInvokePessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIgniteInvokePessimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -945,6 +974,7 @@ private void checkInvoke(TransactionConcurrency concurrency, TransactionIsolatio /** * @throws Exception If failed. */ + @Test public void testInvokeAllOptimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -956,6 +986,7 @@ public void testInvokeAllOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAllOptimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -967,6 +998,7 @@ public void testInvokeAllOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAllPessimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -978,6 +1010,7 @@ public void testInvokeAllPessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAllPessimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -989,6 +1022,7 @@ public void testInvokeAllPessimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAllAsyncOptimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1000,6 +1034,7 @@ public void testInvokeAllAsyncOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvokeAllAsyncOptimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1011,6 +1046,7 @@ public void testInvokeAllAsyncOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAllAsyncPessimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1022,6 +1058,7 @@ public void testInvokeAllAsyncPessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAllAsyncPessimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1225,6 +1262,7 @@ private void checkInvokeAllAsync(TransactionConcurrency concurrency, Transaction /** * @throws Exception If failed. */ + @Test public void testInvokeAllWithNulls() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1277,6 +1315,7 @@ public void testInvokeAllWithNulls() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeSequentialOptimisticNoStart() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1288,6 +1327,7 @@ public void testInvokeSequentialOptimisticNoStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeSequentialPessimisticNoStart() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1299,6 +1339,7 @@ public void testInvokeSequentialPessimisticNoStart() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvokeSequentialOptimisticWithStart() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1310,6 +1351,7 @@ public void testInvokeSequentialOptimisticWithStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeSequentialPessimisticWithStart() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1374,6 +1416,7 @@ private void checkInvokeSequential0(boolean startVal, TransactionConcurrency con /** * @throws Exception If failed. */ + @Test public void testInvokeAfterRemoveOptimistic() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1385,6 +1428,7 @@ public void testInvokeAfterRemoveOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAfterRemovePessimistic() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1427,6 +1471,7 @@ private void checkInvokeAfterRemove(TransactionConcurrency concurrency) throws E /** * @throws Exception If failed. */ + @Test public void testInvokeReturnValueGetOptimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1438,6 +1483,7 @@ public void testInvokeReturnValueGetOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReturnValueGetOptimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1449,6 +1495,7 @@ public void testInvokeReturnValueGetOptimisticRepeatableRead() throws Exception /** * @throws Exception If failed. 
*/ + @Test public void testInvokeReturnValueGetPessimisticReadCommitted() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1460,6 +1507,7 @@ public void testInvokeReturnValueGetPessimisticReadCommitted() throws Exception /** * @throws Exception If failed. */ + @Test public void testInvokeReturnValueGetPessimisticRepeatableRead() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1471,6 +1519,7 @@ public void testInvokeReturnValueGetPessimisticRepeatableRead() throws Exception /** * @throws Exception If failed. */ + @Test public void testInvokeReturnValuePutInTx() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1524,6 +1573,7 @@ private void checkInvokeReturnValue(boolean put, /** * @throws Exception In case of error. */ + @Test public void testGetAndPutAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -1550,6 +1600,7 @@ public void testGetAndPutAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAndPutAsync() throws Exception { IgniteCache cache = jcache(); @@ -1570,6 +1621,7 @@ public void testGetAndPutAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAsyncOld0() throws Exception { IgniteCache cacheAsync = jcache().withAsync(); @@ -1588,6 +1640,7 @@ public void testPutAsyncOld0() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAsync0() throws Exception { IgniteFuture fut1 = jcache().getAndPutAsync("key1", 0); @@ -1600,6 +1653,7 @@ public void testPutAsync0() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvokeAsyncOld() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1647,6 +1701,7 @@ public void testInvokeAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeAsync() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1687,6 +1742,7 @@ public void testInvokeAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvoke() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -1741,6 +1797,7 @@ public void testInvoke() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutx() throws Exception { if (txShouldBeUsed()) checkPut(true); @@ -1749,6 +1806,7 @@ public void testPutx() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutxNoTx() throws Exception { checkPut(false); } @@ -1793,6 +1851,7 @@ private void checkPut(boolean inTx) throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAsyncOld() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -1838,6 +1897,7 @@ public void testPutAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAsync() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -1877,6 +1937,7 @@ public void testPutAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAll() throws Exception { Map map = F.asMap("key1", 1, "key2", 2); @@ -1903,6 +1964,7 @@ public void testPutAll() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testNullInTx() throws Exception { if (!txShouldBeUsed()) return; @@ -1994,6 +2056,7 @@ public void testNullInTx() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAllWithNulls() throws Exception { final IgniteCache cache = jcache(); @@ -2122,6 +2185,7 @@ public void testPutAllWithNulls() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAllAsyncOld() throws Exception { Map map = F.asMap("key1", 1, "key2", 2); @@ -2152,6 +2216,7 @@ public void testPutAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutAllAsync() throws Exception { Map map = F.asMap("key1", 1, "key2", 2); @@ -2176,6 +2241,7 @@ public void testPutAllAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAndPutIfAbsent() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -2262,6 +2328,7 @@ public void testGetAndPutIfAbsent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutIfAbsentAsyncOld() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -2343,6 +2410,7 @@ public void testGetAndPutIfAbsentAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutIfAbsentAsync() throws Exception { Transaction tx = txShouldBeUsed() ? transactions().txStart() : null; @@ -2412,6 +2480,7 @@ public void testGetAndPutIfAbsentAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsent() throws Exception { IgniteCache cache = jcache(); @@ -2466,6 +2535,7 @@ public void testPutIfAbsent() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testPutxIfAbsentAsyncOld() throws Exception { if (txShouldBeUsed()) checkPutxIfAbsentAsyncOld(true); @@ -2474,6 +2544,7 @@ public void testPutxIfAbsentAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutxIfAbsentAsyncOldNoTx() throws Exception { checkPutxIfAbsentAsyncOld(false); } @@ -2481,6 +2552,7 @@ public void testPutxIfAbsentAsyncOldNoTx() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutxIfAbsentAsync() throws Exception { if (txShouldBeUsed()) checkPutxIfAbsentAsync(true); @@ -2489,6 +2561,7 @@ public void testPutxIfAbsentAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutxIfAbsentAsyncNoTx() throws Exception { checkPutxIfAbsentAsync(false); } @@ -2641,6 +2714,7 @@ private void checkPutxIfAbsentAsync(boolean inTx) throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutIfAbsentAsyncOldConcurrent() throws Exception { IgniteCache cacheAsync = jcache().withAsync(); @@ -2659,6 +2733,7 @@ public void testPutIfAbsentAsyncOldConcurrent() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPutIfAbsentAsyncConcurrent() throws Exception { IgniteCache cache = jcache(); @@ -2673,6 +2748,7 @@ public void testPutIfAbsentAsyncConcurrent() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndReplace() throws Exception { IgniteCache cache = jcache(); @@ -2767,6 +2843,7 @@ public void testGetAndReplace() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplace() throws Exception { IgniteCache cache = jcache(); @@ -2824,6 +2901,7 @@ public void testReplace() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAndReplaceAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -2913,6 +2991,7 @@ public void testGetAndReplaceAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndReplaceAsync() throws Exception { IgniteCache cache = jcache(); @@ -2984,6 +3063,7 @@ public void testGetAndReplaceAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplacexAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3055,6 +3135,7 @@ public void testReplacexAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplacexAsync() throws Exception { IgniteCache cache = jcache(); @@ -3114,6 +3195,7 @@ public void testReplacexAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetAndRemove() throws Exception { IgniteCache cache = jcache(); @@ -3133,6 +3215,7 @@ public void testGetAndRemove() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testGetAndRemoveObject() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -3181,6 +3264,7 @@ public void testGetAndRemoveObject() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndPutSerializableObject() throws Exception { IgniteCache cache = ignite(0).cache(cacheName()); @@ -3203,11 +3287,10 @@ public void testGetAndPutSerializableObject() throws Exception { } /** - * TODO: GG-11241. - * * @throws Exception If failed. */ - public void _testDeletedEntriesFlag() throws Exception { + @Test + public void testDeletedEntriesFlag() throws Exception { if (cacheMode() != LOCAL && cacheMode() != REPLICATED) { final int cnt = 3; @@ -3227,6 +3310,7 @@ public void _testDeletedEntriesFlag() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemoveLoad() throws Exception { if (!storeEnabled()) return; @@ -3261,6 +3345,7 @@ public void testRemoveLoad() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3295,6 +3380,7 @@ public void testRemoveAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAsync() throws Exception { IgniteCache cache = jcache(); @@ -3319,6 +3405,7 @@ public void testRemoveAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemove() throws Exception { IgniteCache cache = jcache(); @@ -3332,6 +3419,7 @@ public void testRemove() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemovexAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3353,6 +3441,7 @@ public void testRemovexAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemovexAsync() throws Exception { IgniteCache cache = jcache(); @@ -3368,6 +3457,7 @@ public void testRemovexAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGlobalRemoveAll() throws Exception { globalRemoveAll(false, false); } @@ -3375,6 +3465,7 @@ public void testGlobalRemoveAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGlobalRemoveAllAsyncOld() throws Exception { globalRemoveAll(true, true); } @@ -3382,6 +3473,7 @@ public void testGlobalRemoveAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGlobalRemoveAllAsync() throws Exception { globalRemoveAll(true, false); } @@ -3474,6 +3566,7 @@ protected long hugeRemoveAllEntryCount() { /** * @throws Exception In case of error. 
*/ + @Test public void testRemoveAllWithNulls() throws Exception { final IgniteCache cache = jcache(); @@ -3528,6 +3621,7 @@ public void testRemoveAllWithNulls() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllDuplicates() throws Exception { jcache().removeAll(ImmutableSet.of("key1", "key1", "key1")); } @@ -3535,6 +3629,7 @@ public void testRemoveAllDuplicates() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllDuplicatesTx() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart()) { @@ -3548,6 +3643,7 @@ public void testRemoveAllDuplicatesTx() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllEmpty() throws Exception { jcache().removeAll(); } @@ -3555,6 +3651,7 @@ public void testRemoveAllEmpty() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllAsyncOld() throws Exception { IgniteCache cache = jcache(); @@ -3580,6 +3677,7 @@ public void testRemoveAllAsyncOld() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testRemoveAllAsync() throws Exception { IgniteCache cache = jcache(); @@ -3601,6 +3699,7 @@ public void testRemoveAllAsync() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testLoadAll() throws Exception { if (!storeEnabled()) return; @@ -3641,6 +3740,7 @@ public void testLoadAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveAfterClear() throws Exception { IgniteEx ignite = grid(0); @@ -3687,6 +3787,7 @@ public void testRemoveAfterClear() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testClear() throws Exception { IgniteCache cache = jcache(); @@ -3834,6 +3935,7 @@ protected void checkUnlocked(final Collection keys0) throws IgniteChecke /** * @throws Exception If failed. 
*/ + @Test public void testGlobalClearAll() throws Exception { globalClearAll(false, false); } @@ -3841,6 +3943,7 @@ public void testGlobalClearAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearAllAsyncOld() throws Exception { globalClearAll(true, true); } @@ -3848,6 +3951,7 @@ public void testGlobalClearAllAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearAllAsync() throws Exception { globalClearAll(true, false); } @@ -3887,6 +3991,7 @@ protected void globalClearAll(boolean async, boolean oldAsync) throws Exception * @throws Exception In case of error. */ @SuppressWarnings("BusyWait") + @Test public void testLockUnlock() throws Exception { if (lockingEnabled()) { final CountDownLatch lockCnt = new CountDownLatch(1); @@ -3946,6 +4051,7 @@ public void testLockUnlock() throws Exception { * @throws Exception In case of error. */ @SuppressWarnings("BusyWait") + @Test public void testLockUnlockAll() throws Exception { if (lockingEnabled()) { IgniteCache cache = jcache(); @@ -4001,6 +4107,7 @@ public void testLockUnlockAll() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testPeek() throws Exception { Ignite ignite = primaryIgnite("key"); IgniteCache cache = ignite.cache(cacheName()); @@ -4017,6 +4124,7 @@ public void testPeek() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPeekTxRemoveOptimistic() throws Exception { checkPeekTxRemove(OPTIMISTIC); } @@ -4024,6 +4132,7 @@ public void testPeekTxRemoveOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPeekTxRemovePessimistic() throws Exception { checkPeekTxRemove(PESSIMISTIC); } @@ -4053,6 +4162,7 @@ private void checkPeekTxRemove(TransactionConcurrency concurrency) throws Except /** * @throws Exception If failed. 
*/ + @Test public void testPeekRemove() throws Exception { IgniteCache cache = primaryCache("key"); @@ -4063,10 +4173,10 @@ public void testPeekRemove() throws Exception { } /** - * TODO GG-11133. * @throws Exception In case of error. */ - public void _testEvictExpired() throws Exception { + @Test + public void testEvictExpired() throws Exception { final IgniteCache cache = jcache(); final String key = primaryKeysForCache(1).get(0); @@ -4123,6 +4233,7 @@ public void _testEvictExpired() throws Exception { * * @throws Exception If failed. */ + @Test public void testPeekExpired() throws Exception { final IgniteCache c = jcache(); @@ -4158,6 +4269,7 @@ public void testPeekExpired() throws Exception { * * @throws Exception If failed. */ + @Test public void testPeekExpiredTx() throws Exception { if (txShouldBeUsed()) { final IgniteCache c = jcache(); @@ -4188,6 +4300,7 @@ public void testPeekExpiredTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTtlTx() throws Exception { if (txShouldBeUsed()) checkTtl(true, false); @@ -4196,6 +4309,7 @@ public void testTtlTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTtlNoTx() throws Exception { checkTtl(false, false); } @@ -4203,6 +4317,7 @@ public void testTtlNoTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTtlNoTxOldEntry() throws Exception { checkTtl(false, true); } @@ -4213,12 +4328,31 @@ public void testTtlNoTxOldEntry() throws Exception { * @throws Exception If failed. */ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { - // TODO GG-11133. 
- if (true) - return; + int[] ttlVals = {600, 1000, 3000}; - int ttl = 1000; + int i = 0; + while (i < ttlVals.length) { + try { + checkTtl0(inTx, oldEntry, ttlVals[i]); + break; + } + catch (AssertionFailedError e) { + if (i < ttlVals.length - 1) + info("TTL check failed, retrying with increased TTL"); + else + throw e; + } + i++; + } + } + /** + * @param inTx In tx flag. + * @param oldEntry {@code True} to check TTL on old entry, {@code false} on new. + * @param ttl TTL value. + * @throws Exception If failed. + */ + private void checkTtl0(boolean inTx, boolean oldEntry, int ttl) throws Exception { final ExpiryPolicy expiry = new TouchedExpiryPolicy(new Duration(MILLISECONDS, ttl)); final IgniteCache c = jcache(); @@ -4280,7 +4414,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertTrue(curEntryTtl.get2() > startTime); expireTimes[i] = curEntryTtl.get2(); } @@ -4308,7 +4441,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertTrue(curEntryTtl.get2() > startTime); expireTimes[i] = curEntryTtl.get2(); } @@ -4336,7 +4468,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertTrue(curEntryTtl.get2() > startTime); expireTimes[i] = curEntryTtl.get2(); } @@ -4368,7 +4499,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { assertNotNull(curEntryTtl.get1()); assertNotNull(curEntryTtl.get2()); - assertEquals(ttl, (long)curEntryTtl.get1()); assertEquals(expireTimes[i], (long)curEntryTtl.get2()); } } @@ -4377,7 +4507,6 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { 
storeStgy.removeFromStore(key); assertTrue(GridTestUtils.waitForCondition(new GridAbsPredicateX() { - @SuppressWarnings("unchecked") @Override public boolean applyx() { try { Integer val = c.get(key); @@ -4418,11 +4547,7 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { // Ensure that old TTL and expire time are not longer "visible". entryTtl = entryTtl(srvNodeCache, key); - - assertNotNull(entryTtl.get1()); - assertNotNull(entryTtl.get2()); - assertEquals(0, (long)entryTtl.get1()); - assertEquals(0, (long)entryTtl.get2()); + assertNull(entryTtl); // Ensure that next update will not pick old expire time. @@ -4439,7 +4564,7 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { tx.close(); } - U.sleep(2000); + U.sleep(ttl + 500); entryTtl = entryTtl(srvNodeCache, key); @@ -4454,6 +4579,7 @@ private void checkTtl(boolean inTx, boolean oldEntry) throws Exception { /** * @throws Exception In case of error. */ + @Test public void testLocalEvict() throws Exception { IgniteCache cache = jcache(); @@ -4507,6 +4633,7 @@ private void checkKeyAfterLocalEvict(IgniteCache cache, String /** * JUnit. */ + @Test public void testCacheProxy() { IgniteCache cache = jcache(); @@ -4518,6 +4645,7 @@ public void testCacheProxy() { * * @throws Exception If failed. */ + @Test public void testCompactExpired() throws Exception { final IgniteCache cache = jcache(); @@ -4551,6 +4679,7 @@ public void testCompactExpired() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimisticTxMissingKey() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(OPTIMISTIC, READ_COMMITTED)) { @@ -4567,6 +4696,7 @@ public void testOptimisticTxMissingKey() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testOptimisticTxMissingKeyNoCommit() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(OPTIMISTIC, READ_COMMITTED)) { @@ -4581,6 +4711,7 @@ public void testOptimisticTxMissingKeyNoCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticTxReadCommittedInTx() throws Exception { checkRemovexInTx(OPTIMISTIC, READ_COMMITTED); } @@ -4588,6 +4719,7 @@ public void testOptimisticTxReadCommittedInTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticTxRepeatableReadInTx() throws Exception { checkRemovexInTx(OPTIMISTIC, REPEATABLE_READ); } @@ -4595,6 +4727,7 @@ public void testOptimisticTxRepeatableReadInTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTxReadCommittedInTx() throws Exception { checkRemovexInTx(PESSIMISTIC, READ_COMMITTED); } @@ -4602,6 +4735,7 @@ public void testPessimisticTxReadCommittedInTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTxRepeatableReadInTx() throws Exception { checkRemovexInTx(PESSIMISTIC, REPEATABLE_READ); } @@ -4654,6 +4788,7 @@ private void checkRemovexInTx(final TransactionConcurrency concurrency, * * @throws Exception If failed. */ + @Test public void testPessimisticTxMissingKey() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(PESSIMISTIC, READ_COMMITTED)) { @@ -4670,6 +4805,7 @@ public void testPessimisticTxMissingKey() throws Exception { * * @throws Exception If failed. */ + @Test public void testPessimisticTxMissingKeyNoCommit() throws Exception { if (txShouldBeUsed()) { try (Transaction tx = transactions().txStart(PESSIMISTIC, READ_COMMITTED)) { @@ -4684,6 +4820,7 @@ public void testPessimisticTxMissingKeyNoCommit() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticTxRepeatableRead() throws Exception { if (txShouldBeUsed()) { try (Transaction ignored = transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { @@ -4697,6 +4834,7 @@ public void testPessimisticTxRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTxRepeatableReadOnUpdate() throws Exception { if (txShouldBeUsed()) { try (Transaction ignored = transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { @@ -4710,6 +4848,7 @@ public void testPessimisticTxRepeatableReadOnUpdate() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testToMap() throws Exception { IgniteCache cache = jcache(); @@ -4867,6 +5006,7 @@ protected IgnitePair entryTtl(IgniteCache cache, String key) { /** * @throws Exception If failed. */ + @Test public void testIterator() throws Exception { IgniteCache cache = grid(0).cache(cacheName()); @@ -4894,6 +5034,7 @@ public void testIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteCacheIterator() throws Exception { IgniteCache cache = jcache(0); @@ -5084,6 +5225,7 @@ private void checkIteratorEmpty(IgniteCache cache) throws Excep /** * @throws Exception If failed. */ + @Test public void testLocalClearKey() throws Exception { addKeys(); @@ -5135,6 +5277,7 @@ protected void checkLocalRemovedKey(String keyToRmv) { /** * @throws Exception If failed. */ + @Test public void testLocalClearKeys() throws Exception { Map> keys = addKeys(); @@ -5206,6 +5349,7 @@ protected Map> addKeys() { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKey() throws Exception { testGlobalClearKey(false, false, Arrays.asList("key25")); } @@ -5213,6 +5357,7 @@ public void testGlobalClearKey() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGlobalClearKeyAsyncOld() throws Exception { testGlobalClearKey(true, true, Arrays.asList("key25")); } @@ -5220,6 +5365,7 @@ public void testGlobalClearKeyAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeyAsync() throws Exception { testGlobalClearKey(true, false, Arrays.asList("key25")); } @@ -5227,6 +5373,7 @@ public void testGlobalClearKeyAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeys() throws Exception { testGlobalClearKey(false, false, Arrays.asList("key25", "key100", "key150")); } @@ -5234,6 +5381,7 @@ public void testGlobalClearKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeysAsyncOld() throws Exception { testGlobalClearKey(true, true, Arrays.asList("key25", "key100", "key150")); } @@ -5241,6 +5389,7 @@ public void testGlobalClearKeysAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGlobalClearKeysAsync() throws Exception { testGlobalClearKey(true, false, Arrays.asList("key25", "key100", "key150")); } @@ -5307,6 +5456,7 @@ protected void testGlobalClearKey(boolean async, boolean oldAsync, Collection cache = grid(0).cache(cacheName()); @@ -5850,6 +6002,7 @@ private void checkEmpty(IgniteCache cache, IgniteCache pair = new IgnitePair<>(entry.ttl(), entry.expireTime()); + + if (!entry.isNear()) + entry.context().cache().removeEntry(entry); - return entry != null ? 
- new IgnitePair<>(entry.ttl(), entry.expireTime()) : - new IgnitePair(null, null); + return pair; } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationDefaultTemplateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationDefaultTemplateTest.java index c1af11ee9ac9c..fb080a207850a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationDefaultTemplateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationDefaultTemplateTest.java @@ -21,24 +21,20 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheConfigurationDefaultTemplateTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration templateCfg = new CacheConfiguration(DEFAULT_CACHE_NAME); templateCfg.setName("org.apache.ignite.template*"); @@ -59,6 +55,7 @@ public class IgniteCacheConfigurationDefaultTemplateTest extends GridCommonAbstr /** * @throws Exception If failed. 
*/ + @Test public void testDefaultTemplate() throws Exception { Ignite ignite = startGrid(0); @@ -110,4 +107,4 @@ private void checkGetOrCreate(Ignite ignite, String name, int expBackups) { assertEquals(name, cfg.getName()); assertEquals(expBackups, cfg.getBackups()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationTemplateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationTemplateTest.java index cad629d0035b0..4cb07a48c10c4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationTemplateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationTemplateTest.java @@ -31,20 +31,19 @@ import org.apache.ignite.events.EventType; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheConfigurationTemplateTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String TEMPLATE1 = "org.apache.ignite*"; @@ -64,7 +63,7 @@ public class IgniteCacheConfigurationTemplateTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - 
((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder).setForceServerMode(true); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); if (addTemplate) { CacheConfiguration dfltCfg = new CacheConfiguration("*"); @@ -102,6 +101,7 @@ public class IgniteCacheConfigurationTemplateTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testCreateFromTemplate() throws Exception { addTemplate = true; @@ -156,6 +156,7 @@ public void testCreateFromTemplate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetOrCreateFromTemplate() throws Exception { addTemplate = true; @@ -233,6 +234,7 @@ public void testGetOrCreateFromTemplate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartClientNodeFirst() throws Exception { addTemplate = true; clientMode = true; @@ -259,6 +261,7 @@ public void testStartClientNodeFirst() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddCacheConfigurationMultinode() throws Exception { addTemplate = true; @@ -306,6 +309,7 @@ public void testAddCacheConfigurationMultinode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoPartitionExchangeForTemplate() throws Exception{ final int GRID_CNT = 3; @@ -355,6 +359,7 @@ public void testNoPartitionExchangeForTemplate() throws Exception{ /** * @throws Exception If failed. 
*/ + @Test public void testTemplateCleanup() throws Exception { startGridsMultiThreaded(3); @@ -447,4 +452,4 @@ private void checkNoTemplateCaches(int nodes) { assertNull(ignite.cache(TEMPLATE3)); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAbstractSelfTest.java index 7d98968cbf291..66259b720fb92 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAbstractSelfTest.java @@ -28,12 +28,16 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * Tests various scenarios for {@code containsKey()} method. */ +@RunWith(JUnit4.class) public abstract class IgniteCacheContainsKeyAbstractSelfTest extends GridCacheAbstractSelfTest { /** * @return Number of grids to start. @@ -76,6 +80,7 @@ public abstract class IgniteCacheContainsKeyAbstractSelfTest extends GridCacheAb /** * @throws Exception If failed. */ + @Test public void testDistributedContains() throws Exception { String key = "1"; @@ -91,6 +96,7 @@ public void testDistributedContains() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testContainsInTx() throws Exception { if (atomicityMode() == TRANSACTIONAL) { String key = "1"; @@ -137,4 +143,4 @@ private boolean txContainsKey(Transaction tx, String key) { return entry != null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAtomicTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAtomicTest.java index 981d245245a20..73324685437a3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAtomicTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheContainsKeyAtomicTest.java @@ -23,6 +23,9 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -31,6 +34,7 @@ /** * Verifies that containsKey() works as expected on atomic cache. */ +@RunWith(JUnit4.class) public class IgniteCacheContainsKeyAtomicTest extends GridCacheAbstractSelfTest { /** Cache name. */ public static final String CACHE_NAME = "replicated"; @@ -53,6 +57,7 @@ public class IgniteCacheContainsKeyAtomicTest extends GridCacheAbstractSelfTest /** * @throws Exception If failed. */ + @Test public void testContainsPutIfAbsent() throws Exception { checkPutIfAbsent(false); } @@ -60,6 +65,7 @@ public void testContainsPutIfAbsent() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testContainsPutIfAbsentAll() throws Exception { checkPutIfAbsent(true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCopyOnReadDisabledAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCopyOnReadDisabledAbstractTest.java index 61f8136ab796d..f2eb5dcbd3dc8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCopyOnReadDisabledAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCopyOnReadDisabledAbstractTest.java @@ -22,10 +22,14 @@ import javax.cache.processor.MutableEntry; import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheCopyOnReadDisabledAbstractTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -51,6 +55,7 @@ public abstract class IgniteCacheCopyOnReadDisabledAbstractTest extends GridCach /** * @throws Exception If failed. 
*/ + @Test public void testCopyOnReadDisabled() throws Exception { IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCreateRestartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCreateRestartSelfTest.java index e8544517bcda5..4b6e1d4663104 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCreateRestartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCreateRestartSelfTest.java @@ -25,22 +25,20 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheCreateRestartSelfTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "partitioned"; - /** IP finder. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 4; @@ -52,8 +50,6 @@ public class IgniteCacheCreateRestartSelfTest extends GridCommonAbstractTest { cfg.setPeerClassLoadingEnabled(false); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -70,6 +66,7 @@ public class IgniteCacheCreateRestartSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testStopOriginatingNode() throws Exception { startGrids(NODES); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDynamicStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDynamicStopSelfTest.java index b1047ba829d35..1a0c575095db7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDynamicStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDynamicStopSelfTest.java @@ -29,10 +29,14 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheDynamicStopSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -44,6 +48,7 @@ public class IgniteCacheDynamicStopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStopStartCacheWithDataLoaderNoOverwrite() throws Exception { checkStopStartCacheWithDataLoader(false); } @@ -51,6 +56,7 @@ public void testStopStartCacheWithDataLoaderNoOverwrite() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStopStartCacheWithDataLoaderOverwrite() throws Exception { checkStopStartCacheWithDataLoader(true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerAbstractTest.java index b4e564faa8ebd..b612c6bcacbf8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerAbstractTest.java @@ -67,7 +67,11 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static javax.cache.event.EventType.CREATED; @@ -81,6 +85,7 @@ /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheEntryListenerAbstractTest extends IgniteCacheAbstractTest { /** */ private static volatile List> evts; @@ -103,6 +108,13 @@ public abstract class IgniteCacheEntryListenerAbstractTest extends IgniteCacheAb /** */ private static AtomicBoolean serialized = new AtomicBoolean(false); + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { @@ -152,6 +164,7 @@ public abstract class IgniteCacheEntryListenerAbstractTest extends IgniteCacheAb /** * 
@throws Exception If failed. */ + @Test public void testExceptionIgnored() throws Exception { CacheEntryListenerConfiguration lsnrCfg = new MutableCacheEntryListenerConfiguration<>( new Factory>() { @@ -211,6 +224,7 @@ public void testExceptionIgnored() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoOldValue() throws Exception { CacheEntryListenerConfiguration lsnrCfg = new MutableCacheEntryListenerConfiguration<>( new Factory>() { @@ -242,6 +256,7 @@ public void testNoOldValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSynchronousEventsObjectKeyValue() throws Exception { useObjects = true; @@ -251,6 +266,7 @@ public void testSynchronousEventsObjectKeyValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSynchronousEvents() throws Exception { final CacheEntryCreatedListener lsnr = new CreateUpdateRemoveExpireListener() { @Override public void onRemoved(Iterable> evts) { @@ -348,6 +364,7 @@ private void awaitLatch() { /** * @throws Exception If failed. */ + @Test public void testSynchronousEventsListenerNodeFailed() throws Exception { if (cacheMode() != PARTITIONED) return; @@ -402,6 +419,7 @@ public void testSynchronousEventsListenerNodeFailed() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentRegisterDeregister() throws Exception { final int THREADS = 10; @@ -438,6 +456,7 @@ public void testConcurrentRegisterDeregister() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSerialization() throws Exception { if (cacheMode() == LOCAL) return; @@ -541,6 +560,7 @@ private Object value(Integer val) { /** * @throws Exception If failed. */ + @Test public void testEventsObjectKeyValue() throws Exception { useObjects = true; @@ -550,6 +570,7 @@ public void testEventsObjectKeyValue() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testEvents() throws Exception { IgniteCache cache = jcache(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerEagerTtlDisabledTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerEagerTtlDisabledTest.java index a1bf26b1632dd..90e28618eac33 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerEagerTtlDisabledTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerEagerTtlDisabledTest.java @@ -19,11 +19,19 @@ import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * Tests expire events when {@link CacheConfiguration#isEagerTtl()} is disabled. */ public class IgniteCacheEntryListenerEagerTtlDisabledTest extends IgniteCacheEntryListenerTxTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected boolean eagerTtl() { return false; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerExpiredEventsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerExpiredEventsTest.java index 524d0fb34bec2..ed72511928a11 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerExpiredEventsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerExpiredEventsTest.java @@ -30,14 +30,13 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; -import 
org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -48,22 +47,11 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheEntryListenerExpiredEventsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static AtomicInteger evtCntr; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { super.beforeTestsStarted(); @@ -74,6 +62,7 @@ public class IgniteCacheEntryListenerExpiredEventsTest extends GridCommonAbstrac /** * @throws Exception If failed. */ + @Test public void testExpiredEventAtomic() throws Exception { checkExpiredEvents(cacheConfiguration(PARTITIONED, ATOMIC)); } @@ -81,6 +70,7 @@ public void testExpiredEventAtomic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testExpiredEventTx() throws Exception { checkExpiredEvents(cacheConfiguration(PARTITIONED, TRANSACTIONAL)); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerTxLocalTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerTxLocalTest.java index 5da10c626ca24..0212ddd6d983c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerTxLocalTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryListenerTxLocalTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -28,6 +29,13 @@ * */ public class IgniteCacheEntryListenerTxLocalTest extends IgniteCacheEntryListenerAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorCallTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorCallTest.java index 4efe51303cbca..02a6fbed7ed50 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorCallTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorCallTest.java @@ -26,17 +26,18 @@ import org.apache.ignite.configuration.CacheConfiguration; 
import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_IGNITE_INSTANCE_NAME; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -47,10 +48,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheEntryProcessorCallTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ static final AtomicInteger callCnt = new AtomicInteger(); @@ -76,8 +75,6 @@ public class IgniteCacheEntryProcessorCallTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -99,7 +96,8 @@ public class IgniteCacheEntryProcessorCallTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ - public void testEntryProcessorCall() throws Exception { + @Test + public void testEntryProcessorCallOnAtomicCache() throws Exception { { CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setBackups(1); @@ -117,7 +115,13 @@ public void testEntryProcessorCall() throws Exception { checkEntryProcessorCallCount(ccfg, 1); } + } + /** + * @throws Exception If failed. + */ + @Test + public void testEntryProcessorCallOnTxCache() throws Exception { { CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setBackups(1); @@ -137,6 +141,30 @@ public void testEntryProcessorCall() throws Exception { } } + /** + * @throws Exception If failed. + */ + @Test + public void testEntryProcessorCallOnMvccCache() throws Exception { + { + CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); + ccfg.setBackups(1); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + + checkEntryProcessorCallCount(ccfg, 2); + } + + { + CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); + ccfg.setBackups(0); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + + checkEntryProcessorCallCount(ccfg, 1); + } + } + /** * @param ccfg Cache configuration. * @param expCallCnt Expected entry processor calls count. 
@@ -163,18 +191,22 @@ private void checkEntryProcessorCallCount(CacheConfiguration if (ccfg.getAtomicityMode() == TRANSACTIONAL) { checkEntryProcessCall(key++, clientCache1, OPTIMISTIC, REPEATABLE_READ, expCallCnt + 1); - checkEntryProcessCall(key++, clientCache1, PESSIMISTIC, REPEATABLE_READ, expCallCnt + 1); checkEntryProcessCall(key++, clientCache1, OPTIMISTIC, SERIALIZABLE, expCallCnt + 1); + checkEntryProcessCall(key++, clientCache1, PESSIMISTIC, REPEATABLE_READ, expCallCnt + 1); } + else if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT) + checkEntryProcessCall(key++, clientCache1, PESSIMISTIC, REPEATABLE_READ, expCallCnt); for (int i = 100; i < 110; i++) { checkEntryProcessCall(key++, srvCache, null, null, expCallCnt); if (ccfg.getAtomicityMode() == TRANSACTIONAL) { - checkEntryProcessCall(key++, srvCache, OPTIMISTIC, REPEATABLE_READ, expCallCnt + 1); - checkEntryProcessCall(key++, srvCache, PESSIMISTIC, REPEATABLE_READ, expCallCnt + 1); - checkEntryProcessCall(key++, srvCache, OPTIMISTIC, SERIALIZABLE, expCallCnt + 1); + checkEntryProcessCall(key++, clientCache1, OPTIMISTIC, REPEATABLE_READ, expCallCnt + 1); + checkEntryProcessCall(key++, clientCache1, OPTIMISTIC, SERIALIZABLE, expCallCnt + 1); + checkEntryProcessCall(key++, clientCache1, PESSIMISTIC, REPEATABLE_READ, expCallCnt + 1); } + else if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT) + checkEntryProcessCall(key++, clientCache1, PESSIMISTIC, REPEATABLE_READ, expCallCnt); } for (int i = 0; i < NODES; i++) @@ -182,7 +214,6 @@ private void checkEntryProcessorCallCount(CacheConfiguration } /** - * * @param key Key. * @param cache Cache. * @param concurrency Transaction concurrency. @@ -205,6 +236,9 @@ private void checkEntryProcessCall(Integer key, ", concurrency=" + concurrency + ", isolation=" + isolation + "]"); + int expCallCntOnGet = cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL_SNAPSHOT ? 
+ 1 : expCallCnt; + Transaction tx; TestReturnValue retVal; @@ -237,7 +271,7 @@ private void checkEntryProcessCall(Integer key, if (tx != null) tx.commit(); - assertEquals(expCallCnt, callCnt.get()); + assertEquals(expCallCntOnGet, callCnt.get()); checkReturnValue(retVal, "0"); checkCacheValue(cache.getName(), key, new TestValue(0)); @@ -415,7 +449,7 @@ public Integer value() { if (o == null || getClass() != o.getClass()) return false; - TestValue testVal = (TestValue) o; + TestValue testVal = (TestValue)o; return val.equals(testVal.val); @@ -473,7 +507,7 @@ public Object argument() { if (o == null || getClass() != o.getClass()) return false; - TestReturnValue testVal = (TestReturnValue) o; + TestReturnValue testVal = (TestReturnValue)o; return val.equals(testVal.val); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorNodeJoinTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorNodeJoinTest.java index ec982949efb3c..60ce7e88cf327 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorNodeJoinTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheEntryProcessorNodeJoinTest.java @@ -38,13 +38,15 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.GridTestUtils.SF; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -53,18 +55,19 @@ /** * Tests cache in-place modification logic with iterative value increment. */ +@RunWith(JUnit4.class) public class IgniteCacheEntryProcessorNodeJoinTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of nodes to test on. */ private static final int GRID_CNT = 2; /** Number of increment iterations. */ - private static final int INCREMENTS = 100; + private final int INCREMENTS = SF.apply(100); + + /** Number of test iterations. */ + private final int ITERATIONS = SF.applyLB(10, 2); /** */ - private static final int KEYS = 50; + private final int KEYS = SF.apply(50); /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -72,18 +75,12 @@ public class IgniteCacheEntryProcessorNodeJoinTest extends GridCommonAbstractTes cfg.setCacheConfiguration(cacheConfiguration()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - TcpCommunicationSpi commSpi = new TcpCommunicationSpi(); commSpi.setSharedMemoryPort(-1); cfg.setCommunicationSpi(commSpi); - cfg.setDiscoverySpi(disco); - return cfg; } @@ -111,7 +108,7 @@ protected CacheAtomicityMode atomicityMode() { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { - startGrids(GRID_CNT); + startGridsMultiThreaded(GRID_CNT, true); } /** {@inheritDoc} */ @@ -122,6 +119,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception If failed. 
*/ + @Test public void testSingleEntryProcessorNodeJoin() throws Exception { checkEntryProcessorNodeJoin(false); } @@ -129,6 +127,7 @@ public void testSingleEntryProcessorNodeJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAllEntryProcessorNodeJoin() throws Exception { checkEntryProcessorNodeJoin(true); } @@ -136,11 +135,12 @@ public void testAllEntryProcessorNodeJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntryProcessorNodeLeave() throws Exception { startGrid(GRID_CNT); // TODO: IGNITE-1525 (test fails with one-phase commit). - boolean createCache = atomicityMode() == TRANSACTIONAL; + boolean createCache = atomicityMode() != ATOMIC; String cacheName = DEFAULT_CACHE_NAME; @@ -160,7 +160,7 @@ public void testEntryProcessorNodeLeave() throws Exception { final int RESTART_IDX = GRID_CNT + 1; - for (int iter = 0; iter < 10; iter++) { + for (int iter = 0; iter < ITERATIONS; iter++) { log.info("Iteration: " + iter); startGrid(RESTART_IDX); @@ -179,7 +179,7 @@ public void testEntryProcessorNodeLeave() throws Exception { } }, "stop-thread"); - int increments = checkIncrement(cacheName, iter % 2 == 2, fut, latch); + int increments = checkIncrement(cacheName, iter % 2 == 1, fut, latch); assert increments >= INCREMENTS; @@ -212,28 +212,27 @@ private void checkEntryProcessorNodeJoin(boolean invokeAll) throws Exception { final AtomicReference error = new AtomicReference<>(); final int started = 6; - try { - IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(new Runnable() { - @Override public void run() { - try { - for (int i = 0; i < started; i++) { - U.sleep(1_000); + IgniteInternalFuture fut = GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + try { + for (int i = 0; i < started && !stop.get(); i++) { + U.sleep(1_000); + if (!stop.get()) startGrid(GRID_CNT + i); - } - } - catch (Exception e) { - error.compareAndSet(null, e); } } - }, 1, "starter"); + catch 
(Exception e) { + error.compareAndSet(null, e); + } + } + }, "starter"); + try { try { checkIncrement(DEFAULT_CACHE_NAME, invokeAll, null, null); } finally { - stop.set(true); - fut.get(getTestTimeout()); } @@ -247,8 +246,10 @@ private void checkEntryProcessorNodeJoin(boolean invokeAll) throws Exception { } } finally { - for (int i = 0; i < started; i++) - stopGrid(GRID_CNT + i); + stop.set(true); + + if (!fut.isDone()) + fut.cancel(); } } @@ -257,8 +258,8 @@ private void checkEntryProcessorNodeJoin(boolean invokeAll) throws Exception { * @param invokeAll If {@code true} tests invokeAll operation. * @param fut If not null then executes updates while future is not done. * @param latch Latch to count down when first update is done. - * @throws Exception If failed. * @return Number of increments. + * @throws Exception If failed. */ private int checkIncrement( String cacheName, @@ -289,7 +290,7 @@ private int checkIncrement( EntryProcessorResult res = resMap.get(key); assertNotNull(res); - assertEquals(k + 1, (Object) res.get()); + assertEquals(k + 1, (Object)res.get()); } } else { @@ -323,55 +324,56 @@ private int checkIncrement( /** * @throws Exception If failed. 
*/ + @Test public void testReplaceNodeJoin() throws Exception { final AtomicReference error = new AtomicReference<>(); final int started = 6; - try { - int keys = 100; + int keys = 100; - final AtomicBoolean done = new AtomicBoolean(false); + final AtomicBoolean stop = new AtomicBoolean(false); - for (int i = 0; i < keys; i++) - ignite(0).cache(DEFAULT_CACHE_NAME).put(i, 0); + for (int i = 0; i < keys; i++) + ignite(0).cache(DEFAULT_CACHE_NAME).put(i, 0); - IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(new Runnable() { - @Override public void run() { - try { - for (int i = 0; i < started; i++) { - U.sleep(1_000); + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(new Runnable() { + @Override public void run() { + try { + for (int i = 0; i < started && !stop.get(); i++) { + U.sleep(1_000); - IgniteEx grid = startGrid(GRID_CNT + i); + if (stop.get()) + continue; - info("Test started grid [idx=" + (GRID_CNT + i) + ", nodeId=" + grid.localNode().id() + ']'); - } - } - catch (Exception e) { - error.compareAndSet(null, e); - } - finally { - done.set(true); + IgniteEx grid = startGrid(GRID_CNT + i); + + info("Test started grid [idx=" + (GRID_CNT + i) + ", nodeId=" + grid.localNode().id() + ']'); } } - }, 1, "starter"); + catch (Exception e) { + error.compareAndSet(null, e); + } + finally { + stop.set(true); + } + } + }, 1, "starter"); + try { int updVal = 0; - try { - while (!done.get()) { - info("Will put: " + (updVal + 1)); + while (!stop.get()) { + info("Will put: " + (updVal + 1)); - for (int i = 0; i < keys; i++) - assertTrue("Failed [key=" + i + ", oldVal=" + updVal+ ']', - ignite(0).cache(DEFAULT_CACHE_NAME).replace(i, updVal, updVal + 1)); + for (int i = 0; i < keys; i++) + assertTrue("Failed [key=" + i + ", oldVal=" + updVal+ ']', + ignite(0).cache(DEFAULT_CACHE_NAME).replace(i, updVal, updVal + 1)); - updVal++; - } - } - finally { - fut.get(getTestTimeout()); + updVal++; } + fut.get(getTestTimeout()); + for (int i = 0; i < keys; 
i++) { for (int g = 0; g < GRID_CNT + started; g++) { Integer val = ignite(g).cache(DEFAULT_CACHE_NAME).get(i); @@ -386,8 +388,10 @@ public void testReplaceNodeJoin() throws Exception { } } finally { - for (int i = 0; i < started; i++) - stopGrid(GRID_CNT + i); + stop.set(true); + + if (!fut.isDone()) + fut.cancel(); } } @@ -417,4 +421,4 @@ private Processor(String val) { return vals.size(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheExpireAndUpdateConsistencyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheExpireAndUpdateConsistencyTest.java index 5257a4a9f7d4e..d47e5167bfc6c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheExpireAndUpdateConsistencyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheExpireAndUpdateConsistencyTest.java @@ -42,12 +42,12 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -60,10 +60,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheExpireAndUpdateConsistencyTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new 
TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -74,8 +72,6 @@ public class IgniteCacheExpireAndUpdateConsistencyTest extends GridCommonAbstrac @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -97,6 +93,7 @@ public class IgniteCacheExpireAndUpdateConsistencyTest extends GridCommonAbstrac /** * @throws Exception If failed. */ + @Test public void testAtomic1() throws Exception { updateAndEventConsistencyTest(cacheConfiguration(ATOMIC, 0)); } @@ -104,6 +101,7 @@ public void testAtomic1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomic2() throws Exception { updateAndEventConsistencyTest(cacheConfiguration(ATOMIC, 1)); } @@ -111,6 +109,7 @@ public void testAtomic2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomic3() throws Exception { updateAndEventConsistencyTest(cacheConfiguration(ATOMIC, 2)); } @@ -118,6 +117,7 @@ public void testAtomic3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTx1() throws Exception { updateAndEventConsistencyTest(cacheConfiguration(TRANSACTIONAL, 0)); } @@ -125,6 +125,7 @@ public void testTx1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTx2() throws Exception { updateAndEventConsistencyTest(cacheConfiguration(TRANSACTIONAL, 1)); } @@ -132,6 +133,7 @@ public void testTx2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTx3() throws Exception { updateAndEventConsistencyTest(cacheConfiguration(TRANSACTIONAL, 2)); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGetCustomCollectionsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGetCustomCollectionsSelfTest.java index 9e6fd5a57fc7a..489c53106c653 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGetCustomCollectionsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGetCustomCollectionsSelfTest.java @@ -28,18 +28,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheGetCustomCollectionsSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -55,18 +53,13 @@ public class IgniteCacheGetCustomCollectionsSelfTest extends GridCommonAbstractT cfg.setCacheConfiguration(mapCacheConfig); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } /** * @throws Exception If failed. 
*/ + @Test public void testPutGet() throws Exception { startGrids(3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsTest.java index 2104c5d91ecb4..abdd07f5afeca 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsTest.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.processors.cache; +import com.google.common.collect.Sets; import java.io.Serializable; import java.util.ArrayList; import java.util.Arrays; @@ -49,7 +50,6 @@ import javax.cache.integration.CacheWriterException; import javax.cache.processor.EntryProcessorException; import javax.cache.processor.MutableEntry; -import com.google.common.collect.Sets; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; @@ -94,16 +94,18 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import 
static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -117,16 +119,17 @@ * */ @SuppressWarnings({"unchecked", "ThrowableNotThrown"}) +@RunWith(JUnit4.class) public class IgniteCacheGroupsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String GROUP1 = "grp1"; /** */ private static final String GROUP2 = "grp2"; + /** */ + private static final String GROUP3 = "grp3"; + /** */ private static final String CACHE1 = "cache1"; @@ -146,8 +149,6 @@ public class IgniteCacheGroupsTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); if (ccfgs != null) { @@ -174,6 +175,7 @@ public class IgniteCacheGroupsTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCloseCache1() throws Exception { startGrid(0); @@ -208,6 +210,7 @@ public void testCloseCache1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateDestroyCaches1() throws Exception { createDestroyCaches(1); } @@ -215,6 +218,7 @@ public void testCreateDestroyCaches1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateDestroyCaches2() throws Exception { createDestroyCaches(5); } @@ -222,6 +226,7 @@ public void testCreateDestroyCaches2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateCacheWithSameNameInAnotherGroup() throws Exception { startGridsMultiThreaded(2); @@ -240,6 +245,7 @@ public void testCreateCacheWithSameNameInAnotherGroup() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCreateDestroyCachesAtomicPartitioned() throws Exception { createDestroyCaches(PARTITIONED, ATOMIC); } @@ -247,6 +253,7 @@ public void testCreateDestroyCachesAtomicPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateDestroyCachesTxPartitioned() throws Exception { createDestroyCaches(PARTITIONED, TRANSACTIONAL); } @@ -254,6 +261,15 @@ public void testCreateDestroyCachesTxPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCreateDestroyCachesMvccTxPartitioned() throws Exception { + createDestroyCaches(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCreateDestroyCachesAtomicReplicated() throws Exception { createDestroyCaches(REPLICATED, ATOMIC); } @@ -261,6 +277,7 @@ public void testCreateDestroyCachesAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateDestroyCachesTxReplicated() throws Exception { createDestroyCaches(REPLICATED, TRANSACTIONAL); } @@ -268,6 +285,16 @@ public void testCreateDestroyCachesTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test + public void testCreateDestroyCachesMvccTxReplicated() throws Exception { + createDestroyCaches(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testScanQueryAtomicPartitioned() throws Exception { scanQuery(PARTITIONED, ATOMIC); } @@ -275,6 +302,7 @@ public void testScanQueryAtomicPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryTxPartitioned() throws Exception { scanQuery(PARTITIONED, TRANSACTIONAL); } @@ -282,6 +310,15 @@ public void testScanQueryTxPartitioned() throws Exception { /** * @throws Exception If failed. 
*/ + @Test + public void testScanQueryMvccTxPartitioned() throws Exception { + scanQuery(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testScanQueryAtomicReplicated() throws Exception { scanQuery(REPLICATED, ATOMIC); } @@ -289,6 +326,7 @@ public void testScanQueryAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryTxReplicated() throws Exception { scanQuery(REPLICATED, TRANSACTIONAL); } @@ -296,6 +334,17 @@ public void testScanQueryTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10750") + @Test + public void testScanQueryMvccTxReplicated() throws Exception { + scanQuery(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + + /** + * @throws Exception If failed. + */ + @Test public void testScanQueryAtomicLocal() throws Exception { scanQuery(LOCAL, ATOMIC); } @@ -303,6 +352,7 @@ public void testScanQueryAtomicLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryTxLocal() throws Exception { scanQuery(LOCAL, TRANSACTIONAL); } @@ -310,6 +360,17 @@ public void testScanQueryTxLocal() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testScanQueryMvccTxLocal() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + scanQuery(LOCAL, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testEntriesTtlAtomicPartitioned() throws Exception { entriesTtl(PARTITIONED, ATOMIC); } @@ -317,6 +378,7 @@ public void testEntriesTtlAtomicPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntriesTtlTxPartitioned() throws Exception { entriesTtl(PARTITIONED, TRANSACTIONAL); } @@ -324,6 +386,17 @@ public void testEntriesTtlTxPartitioned() throws Exception { /** * @throws Exception If failed. 
*/ + @Test + public void testEntriesTtlMvccTxPartitioned() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7311"); + + entriesTtl(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testEntriesTtlAtomicReplicated() throws Exception { entriesTtl(REPLICATED, ATOMIC); } @@ -331,6 +404,7 @@ public void testEntriesTtlAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntriesTtlTxReplicated() throws Exception { entriesTtl(REPLICATED, TRANSACTIONAL); } @@ -338,6 +412,17 @@ public void testEntriesTtlTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testEntriesTtlMvccTxReplicated() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7311"); + + entriesTtl(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testEntriesTtlAtomicLocal() throws Exception { entriesTtl(LOCAL, ATOMIC); } @@ -345,6 +430,7 @@ public void testEntriesTtlAtomicLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntriesTtlTxLocal() throws Exception { entriesTtl(LOCAL, TRANSACTIONAL); } @@ -352,6 +438,18 @@ public void testEntriesTtlTxLocal() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testEntriesTtlMvccTxLocal() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + fail("https://issues.apache.org/jira/browse/IGNITE-7311"); + + entriesTtl(LOCAL, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCacheIteratorAtomicPartitioned() throws Exception { cacheIterator(PARTITIONED, ATOMIC); } @@ -359,6 +457,7 @@ public void testCacheIteratorAtomicPartitioned() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheIteratorTxPartitioned() throws Exception { cacheIterator(PARTITIONED, TRANSACTIONAL); } @@ -366,6 +465,15 @@ public void testCacheIteratorTxPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCacheIteratorMvccTxPartitioned() throws Exception { + cacheIterator(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCacheIteratorAtomicReplicated() throws Exception { cacheIterator(REPLICATED, ATOMIC); } @@ -373,6 +481,7 @@ public void testCacheIteratorAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheIteratorTxReplicated() throws Exception { cacheIterator(REPLICATED, TRANSACTIONAL); } @@ -380,6 +489,15 @@ public void testCacheIteratorTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCacheIteratorMvccTxReplicated() throws Exception { + cacheIterator(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCacheIteratorAtomicLocal() throws Exception { cacheIterator(LOCAL, ATOMIC); } @@ -387,6 +505,7 @@ public void testCacheIteratorAtomicLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheIteratorTxLocal() throws Exception { cacheIterator(LOCAL, TRANSACTIONAL); } @@ -394,6 +513,17 @@ public void testCacheIteratorTxLocal() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCacheIteratorMvccTxLocal() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + cacheIterator(LOCAL, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. 
+ */ + @Test public void testScanQueryMultiplePartitionsAtomicPartitioned() throws Exception { scanQueryMultiplePartitions(PARTITIONED, ATOMIC); } @@ -401,6 +531,7 @@ public void testScanQueryMultiplePartitionsAtomicPartitioned() throws Exception /** * @throws Exception If failed. */ + @Test public void testScanQueryMultiplePartitionsTxPartitioned() throws Exception { scanQueryMultiplePartitions(PARTITIONED, TRANSACTIONAL); } @@ -408,6 +539,15 @@ public void testScanQueryMultiplePartitionsTxPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testScanQueryMultiplePartitionsMvccTxPartitioned() throws Exception { + scanQueryMultiplePartitions(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testScanQueryMultiplePartitionsAtomicReplicated() throws Exception { scanQueryMultiplePartitions(REPLICATED, ATOMIC); } @@ -415,6 +555,7 @@ public void testScanQueryMultiplePartitionsAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryMultiplePartitionsTxReplicated() throws Exception { scanQueryMultiplePartitions(REPLICATED, TRANSACTIONAL); } @@ -422,6 +563,15 @@ public void testScanQueryMultiplePartitionsTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testScanQueryMultiplePartitionsMvccTxReplicated() throws Exception { + scanQueryMultiplePartitions(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testContinuousQueryTxReplicated() throws Exception { continuousQuery(REPLICATED, TRANSACTIONAL); } @@ -429,6 +579,15 @@ public void testContinuousQueryTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testContinuousQueryMvccTxReplicated() throws Exception { + continuousQuery(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. 
+ */ + @Test public void testContinuousQueryTxPartitioned() throws Exception { continuousQuery(PARTITIONED, TRANSACTIONAL); } @@ -436,6 +595,15 @@ public void testContinuousQueryTxPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testContinuousQueryMvccTxPartitioned() throws Exception { + continuousQuery(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testContinuousQueryTxLocal() throws Exception { continuousQuery(LOCAL, TRANSACTIONAL); } @@ -443,6 +611,17 @@ public void testContinuousQueryTxLocal() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testContinuousQueryMvccTxLocal() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + continuousQuery(LOCAL, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testContinuousQueryAtomicReplicated() throws Exception { continuousQuery(REPLICATED, ATOMIC); } @@ -450,6 +629,7 @@ public void testContinuousQueryAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContinuousQueryAtomicPartitioned() throws Exception { continuousQuery(PARTITIONED, ATOMIC); } @@ -457,6 +637,7 @@ public void testContinuousQueryAtomicPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContinuousQueryAtomicLocal() throws Exception { continuousQuery(LOCAL, ATOMIC); } @@ -1194,6 +1375,7 @@ private CacheConfiguration[] staticConfigurations1(boolean cache3) { /** * @throws Exception If failed. */ + @Test public void testDiscoveryDataConsistency1() throws Exception { ccfgs = staticConfigurations1(true); Ignite srv0 = startGrid(0); @@ -1288,6 +1470,7 @@ private CacheConfiguration[] cacheConfigurations(int cnt, String grp, String bas /** * @throws Exception If failed. 
*/ + @Test public void testStartManyCaches() throws Exception { final int CACHES = 5_000; @@ -1330,6 +1513,7 @@ public void testStartManyCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalance1() throws Exception { Ignite srv0 = startGrid(0); @@ -1341,6 +1525,10 @@ public void testRebalance1() throws Exception { srv0.createCache(cacheConfiguration(GROUP2, "c3", PARTITIONED, TRANSACTIONAL, 2, false)); IgniteCache srv0Cache4 = srv0.createCache(cacheConfiguration(GROUP2, "c4", PARTITIONED, TRANSACTIONAL, 2, false)); + IgniteCache srv0Cache5 = + srv0.createCache(cacheConfiguration(GROUP3, "c5", PARTITIONED, TRANSACTIONAL_SNAPSHOT, 2, false)); + IgniteCache srv0Cache6 = + srv0.createCache(cacheConfiguration(GROUP3, "c6", PARTITIONED, TRANSACTIONAL_SNAPSHOT, 2, false)); final int ITEMS = 1_000; @@ -1349,6 +1537,9 @@ public void testRebalance1() throws Exception { srv0Cache3.put(new Key1(i), i); srv0Cache4.put(new Key1(i), -i); + + srv0Cache5.put(new Key1(i), i); + srv0Cache6.put(new Key1(i), -i); } assertEquals(ITEMS, srv0Cache1.size()); @@ -1356,6 +1547,8 @@ public void testRebalance1() throws Exception { assertEquals(0, srv0Cache2.size()); assertEquals(ITEMS, srv0Cache3.size()); assertEquals(ITEMS, srv0Cache4.localSize()); + assertEquals(ITEMS, srv0Cache5.size()); + assertEquals(ITEMS, srv0Cache6.localSize()); startGrid(1); @@ -1368,6 +1561,8 @@ public void testRebalance1() throws Exception { IgniteCache cache2 = node.cache("c2"); IgniteCache cache3 = node.cache("c3"); IgniteCache cache4 = node.cache("c4"); + IgniteCache cache5 = node.cache("c5"); + IgniteCache cache6 = node.cache("c6"); assertEquals(ITEMS * 2, cache1.size(CachePeekMode.ALL)); assertEquals(ITEMS, cache1.localSize(CachePeekMode.ALL)); @@ -1380,11 +1575,20 @@ public void testRebalance1() throws Exception { assertEquals(ITEMS * 2, cache4.size(CachePeekMode.ALL)); assertEquals(ITEMS, cache4.localSize(CachePeekMode.ALL)); + assertEquals(ITEMS * 2, 
cache5.size(CachePeekMode.ALL)); + assertEquals(ITEMS, cache5.localSize(CachePeekMode.ALL)); + + assertEquals(ITEMS * 2, cache6.size(CachePeekMode.ALL)); + assertEquals(ITEMS, cache6.localSize(CachePeekMode.ALL)); + + for (int k = 0; k < ITEMS; k++) { assertEquals(i, cache1.localPeek(new Key1(i))); assertNull(cache2.localPeek(new Key1(i))); assertEquals(i, cache3.localPeek(new Key1(i))); assertEquals(-i, cache4.localPeek(new Key1(i))); + assertEquals(i, cache5.localPeek(new Key1(i))); + assertEquals(-i, cache6.localPeek(new Key1(i))); } } @@ -1402,6 +1606,8 @@ public void testRebalance1() throws Exception { IgniteCache cache2 = node.cache("c2"); IgniteCache cache3 = node.cache("c3"); IgniteCache cache4 = node.cache("c4"); + IgniteCache cache5 = node.cache("c5"); + IgniteCache cache6 = node.cache("c6"); assertEquals(ITEMS * 3, cache1.size(CachePeekMode.ALL)); assertEquals(ITEMS, cache1.localSize(CachePeekMode.ALL)); @@ -1409,6 +1615,8 @@ public void testRebalance1() throws Exception { assertEquals(ITEMS * 2, cache2.localSize(CachePeekMode.ALL)); assertEquals(ITEMS, cache3.localSize(CachePeekMode.ALL)); assertEquals(ITEMS, cache4.localSize(CachePeekMode.ALL)); + assertEquals(ITEMS, cache5.localSize(CachePeekMode.ALL)); + assertEquals(ITEMS, cache6.localSize(CachePeekMode.ALL)); } IgniteCache srv2Cache1 = srv2.cache("c1"); @@ -1424,6 +1632,7 @@ public void testRebalance1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalance2() throws Exception { Ignite srv0 = startGrid(0); @@ -1507,6 +1716,7 @@ public void testRebalance2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoKeyIntersectTx() throws Exception { testNoKeyIntersect(TRANSACTIONAL); } @@ -1514,6 +1724,15 @@ public void testNoKeyIntersectTx() throws Exception { /** * @throws Exception If failed. 
*/ + @Test + public void testNoKeyIntersectMvccTx() throws Exception { + testNoKeyIntersect(TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testNoKeyIntersectAtomic() throws Exception { testNoKeyIntersect(ATOMIC); } @@ -1825,6 +2044,7 @@ private void testNoKeyIntersectTxLocks(final IgniteCache cache1, final IgniteCac /** * @throws Exception If failed. */ + @Test public void testCacheApiTxPartitioned() throws Exception { cacheApiTest(PARTITIONED, TRANSACTIONAL); } @@ -1832,6 +2052,17 @@ public void testCacheApiTxPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCacheApiMvccTxPartitioned() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7952"); + + cacheApiTest(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCacheApiTxReplicated() throws Exception { cacheApiTest(REPLICATED, TRANSACTIONAL); } @@ -1839,6 +2070,17 @@ public void testCacheApiTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCacheApiMvccTxReplicated() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7952"); + + cacheApiTest(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCacheApiAtomicPartitioned() throws Exception { cacheApiTest(PARTITIONED, ATOMIC); } @@ -1846,6 +2088,7 @@ public void testCacheApiAtomicPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheApiAtomicReplicated() throws Exception { cacheApiTest(REPLICATED, ATOMIC); } @@ -2638,6 +2881,7 @@ private void cacheInvokeAsync(IgniteCache cache) { /** * @throws Exception If failed. 
*/ + @Test public void testLoadCacheAtomicPartitioned() throws Exception { loadCache(PARTITIONED, ATOMIC); } @@ -2645,6 +2889,7 @@ public void testLoadCacheAtomicPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheAtomicReplicated() throws Exception { loadCache(REPLICATED, ATOMIC); } @@ -2652,6 +2897,7 @@ public void testLoadCacheAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheTxPartitioned() throws Exception { loadCache(PARTITIONED, TRANSACTIONAL); } @@ -2659,6 +2905,17 @@ public void testLoadCacheTxPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testLoadCacheMvccTxPartitioned() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7954"); + + loadCache(PARTITIONED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testLoadCacheTxReplicated() throws Exception { loadCache(REPLICATED, TRANSACTIONAL); } @@ -2666,6 +2923,17 @@ public void testLoadCacheTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testLoadCacheMvccTxReplicated() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7954"); + + loadCache(REPLICATED, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testLoadCacheAtomicLocal() throws Exception { loadCache(LOCAL, ATOMIC); } @@ -2673,10 +2941,21 @@ public void testLoadCacheAtomicLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheTxLocal() throws Exception { loadCache(LOCAL, TRANSACTIONAL); } + /** + * @throws Exception If failed. + */ + @Test + public void testLoadCacheMvccTxLocal() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + loadCache(LOCAL, TRANSACTIONAL_SNAPSHOT); + } + /** * @param cacheMode Cache mode. * @param atomicityMode Atomicity mode. 
@@ -2724,6 +3003,7 @@ private void loadCache(CacheMode cacheMode, CacheAtomicityMode atomicityMode) th /** * @throws Exception If failed. */ + @Test public void testConcurrentOperationsSameKeys() throws Exception { final int SRVS = 4; final int CLIENTS = 4; @@ -2828,6 +3108,7 @@ private IgniteInternalFuture updateFuture(final int nodes, /** * @throws Exception If failed. */ + @Test public void testConcurrentOperationsAndCacheDestroy() throws Exception { final int SRVS = 4; final int CLIENTS = 4; @@ -2854,6 +3135,8 @@ public void testConcurrentOperationsAndCacheDestroy() throws Exception { srv0.createCache( cacheConfiguration(GROUP2, GROUP2 + "-" + i, PARTITIONED, TRANSACTIONAL, grp2Backups, i % 2 == 0)); + + // TODO IGNITE-7164: add Mvcc cache to test. } final AtomicInteger idx = new AtomicInteger(); @@ -2975,6 +3258,7 @@ public void testConcurrentOperationsAndCacheDestroy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStaticConfigurationsValidation() throws Exception { ccfgs = new CacheConfiguration[2]; @@ -3015,6 +3299,7 @@ public void testStaticConfigurationsValidation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheIdConflict() throws Exception { ccfgs = new CacheConfiguration[]{new CacheConfiguration("AaAaAa"), new CacheConfiguration("AaAaBB")}; @@ -3058,6 +3343,7 @@ public void testCacheIdConflict() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheGroupIdConflict1() throws Exception { ccfgs = new CacheConfiguration[]{new CacheConfiguration(CACHE1).setGroupName("AaAaAa"), new CacheConfiguration(CACHE2).setGroupName("AaAaBB")}; @@ -3102,6 +3388,7 @@ public void testCacheGroupIdConflict1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheGroupIdConflict2() throws Exception { ccfgs = new CacheConfiguration[]{new CacheConfiguration("AaAaAa"), new CacheConfiguration(CACHE2).setGroupName("AaAaBB")}; @@ -3146,6 +3433,7 @@ public void testCacheGroupIdConflict2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheGroupIdConflict3() throws Exception { ccfgs = new CacheConfiguration[]{new CacheConfiguration(CACHE2).setGroupName("AaAaBB"), new CacheConfiguration("AaAaAa")}; @@ -3190,6 +3478,7 @@ public void testCacheGroupIdConflict3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheGroupNameConflict1() throws Exception { ccfgs = new CacheConfiguration[]{new CacheConfiguration("cache1"), new CacheConfiguration("cache2").setGroupName("cache1")}; @@ -3233,6 +3522,7 @@ public void testCacheGroupNameConflict1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheGroupNameConflict2() throws Exception { ccfgs = new CacheConfiguration[]{new CacheConfiguration("cache2").setGroupName("cache1"), new CacheConfiguration("cache1")}; @@ -3276,6 +3566,7 @@ public void testCacheGroupNameConflict2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConfigurationConsistencyValidation() throws Exception { startGrids(2); @@ -3305,6 +3596,16 @@ public void testConfigurationConsistencyValidation() throws Exception { assertTrue("Unexpected message: " + e.getMessage(), e.getMessage().contains("Backups mismatch for caches related to the same group [groupName=grp1")); } + + try { + ignite(i).createCache(cacheConfiguration(GROUP1, "c2", PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, false)); + + fail(); + } + catch (CacheException e) { + assertTrue("Unexpected message: " + e.getMessage(), + e.getMessage().contains("Atomicity mode mismatch for caches related to the same group [groupName=grp1")); + } } } @@ -3321,6 +3622,8 @@ private CacheConfiguration[] interceptorConfigurations() { ccfgs[4] = cacheConfiguration(GROUP1, "c5", PARTITIONED, ATOMIC, 2, false); ccfgs[5] = cacheConfiguration(GROUP1, "c6", PARTITIONED, TRANSACTIONAL, 2, false); + //TODO IGNITE-9323: Check Mvcc mode. + return ccfgs; } @@ -3329,6 +3632,7 @@ private CacheConfiguration[] interceptorConfigurations() { * * @throws Exception If failed. */ + @Test public void testInterceptors() throws Exception { for (int i = 0; i < 4; i++) { ccfgs = interceptorConfigurations(); @@ -3384,6 +3688,8 @@ private CacheConfiguration[] cacheStoreConfigurations() { ccfgs[4] = cacheConfiguration(GROUP1, "c5", PARTITIONED, ATOMIC, 2, false); ccfgs[5] = cacheConfiguration(GROUP1, "c6", PARTITIONED, TRANSACTIONAL, 2, false); + //TODO IGNITE-8582: Check Mvcc mode. + return ccfgs; } @@ -3392,6 +3698,7 @@ private CacheConfiguration[] cacheStoreConfigurations() { * * @throws Exception If failed. */ + @Test public void testCacheStores() throws Exception { for (int i = 0; i < 4; i++) { ccfgs = cacheStoreConfigurations(); @@ -3464,6 +3771,7 @@ private CacheConfiguration[] mapperConfigurations() { /** * @throws Exception If failed. 
*/ + @Test public void testAffinityMappers() throws Exception { for (int i = 0; i < 4; i++) { ccfgs = mapperConfigurations(); @@ -3526,6 +3834,7 @@ private void checkAffinityMappers(Ignite node) { /** * @throws Exception If failed. */ + @Test public void testContinuousQueriesMultipleGroups1() throws Exception { continuousQueriesMultipleGroups(1); } @@ -3533,6 +3842,7 @@ public void testContinuousQueriesMultipleGroups1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testContinuousQueriesMultipleGroups2() throws Exception { continuousQueriesMultipleGroups(4); } @@ -3559,7 +3869,9 @@ private void continuousQueriesMultipleGroups(int srvs) throws Exception { client.createCache(cacheConfiguration(null, "c7", PARTITIONED, ATOMIC, 1, false)); client.createCache(cacheConfiguration(null, "c8", PARTITIONED, TRANSACTIONAL, 1, false)); - String[] cacheNames = {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"}; + client.createCache(cacheConfiguration(GROUP3, "c9", PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, false)); + + String[] cacheNames = {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9"}; AtomicInteger c1 = registerListener(client, "c1"); @@ -3577,6 +3889,7 @@ private void continuousQueriesMultipleGroups(int srvs) throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheIdSort() throws Exception { Ignite node = startGrid(0); @@ -3651,6 +3964,7 @@ public void testCacheIdSort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDataCleanup() throws Exception { Ignite node = startGrid(0); @@ -3711,6 +4025,7 @@ public void testDataCleanup() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRestartsAndCacheCreateDestroy() throws Exception { final int SRVS = 5; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheIncrementTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheIncrementTxTest.java index b3a505597b8dd..b0b547219ad02 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheIncrementTxTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheIncrementTxTest.java @@ -32,12 +32,12 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -47,10 +47,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheIncrementTxTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRVS = 4; @@ -58,8 +56,6 @@ public class IgniteCacheIncrementTxTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - 
((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); if (getTestIgniteInstanceName(SRVS).equals(igniteInstanceName)) @@ -80,6 +76,7 @@ public class IgniteCacheIncrementTxTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testIncrementTxTopologyChange0() throws Exception { nodeJoin(cacheConfiguration(0)); } @@ -87,6 +84,7 @@ public void testIncrementTxTopologyChange0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrementTxTopologyChange1() throws Exception { nodeJoin(cacheConfiguration(1)); } @@ -94,6 +92,7 @@ public void testIncrementTxTopologyChange1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrementTxTopologyChange2() throws Exception { nodeJoin(cacheConfiguration(2)); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInterceptorSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInterceptorSelfTestSuite.java index f457e5df8d4db..5afe40618f95b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInterceptorSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInterceptorSelfTestSuite.java @@ -17,50 +17,62 @@ package org.apache.ignite.internal.processors.cache; +import java.util.Collection; import junit.framework.TestSuite; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Cache interceptor suite. */ -public class IgniteCacheInterceptorSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheInterceptorSelfTestSuite { /** * @return Cache API test suite. - * @throws Exception If failed. 
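The hunks above follow the patch's recurring JUnit 3 → JUnit 4 migration pattern: each test class gains `@RunWith(JUnit4.class)` and every `test*` method gains an explicit `@Test` annotation, because JUnit 4 discovers tests by annotation rather than by the `test` name prefix. A minimal, self-contained model of that discovery rule (the annotation and classes below are hypothetical stand-ins, not Ignite or JUnit code):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class JUnit4MigrationSketch {
    /** Stand-in for org.junit.Test: JUnit 4 runners select methods by this annotation. */
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Test { }

    /** Hypothetical test class: under JUnit 3 the name prefix was enough; a JUnit 4 runner only sees annotated methods. */
    public static class SampleTest {
        @Test public void testAnnotated() { }

        public void testNotAnnotated() { } // Silently skipped by an annotation-driven runner.
    }

    /** Collects the methods an annotation-driven (JUnit 4-style) runner would execute. */
    public static List<String> discover(Class<?> cls) {
        List<String> found = new ArrayList<>();

        for (Method m : cls.getMethods())
            if (m.isAnnotationPresent(Test.class))
                found.add(m.getName());

        return found;
    }
}
```

This is why the patch must touch every test method: a `test*` method left without `@Test` would stop running once the class is executed by a JUnit 4 runner.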
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { + return suite(null); + } + + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. + */ + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("CacheInterceptor Test Suite"); - suite.addTestSuite(GridCacheInterceptorLocalSelfTest.class); - suite.addTestSuite(GridCacheInterceptorLocalWithStoreSelfTest.class); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorLocalSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorLocalWithStoreSelfTest.class, ignoredTests); - suite.addTestSuite(GridCacheInterceptorLocalAtomicSelfTest.class); - suite.addTestSuite(GridCacheInterceptorLocalAtomicWithStoreSelfTest.class); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorLocalAtomicSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorLocalAtomicWithStoreSelfTest.class, ignoredTests); - suite.addTestSuite(GridCacheInterceptorAtomicSelfTest.class); - suite.addTestSuite(GridCacheInterceptorAtomicNearEnabledSelfTest.class); - suite.addTestSuite(GridCacheInterceptorAtomicWithStoreSelfTest.class); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorAtomicSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorAtomicNearEnabledSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorAtomicWithStoreSelfTest.class, ignoredTests); - suite.addTestSuite(GridCacheInterceptorAtomicReplicatedSelfTest.class); - suite.addTestSuite(GridCacheInterceptorAtomicWithStoreReplicatedSelfTest.class); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorAtomicReplicatedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorAtomicWithStoreReplicatedSelfTest.class, ignoredTests); - suite.addTestSuite(GridCacheInterceptorSelfTest.class); - 
suite.addTestSuite(GridCacheInterceptorNearEnabledSelfTest.class); - suite.addTestSuite(GridCacheInterceptorWithStoreSelfTest.class); - suite.addTestSuite(GridCacheInterceptorReplicatedSelfTest.class); - suite.addTestSuite(GridCacheInterceptorReplicatedWithStoreSelfTest.class); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorNearEnabledSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorWithStoreSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorReplicatedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorReplicatedWithStoreSelfTest.class, ignoredTests); // TODO GG-11141. -// suite.addTestSuite(GridCacheOnCopyFlagTxPartitionedSelfTest.class); -// suite.addTestSuite(GridCacheOnCopyFlagReplicatedSelfTest.class); -// suite.addTestSuite(GridCacheOnCopyFlagLocalSelfTest.class); -// suite.addTestSuite(GridCacheOnCopyFlagAtomicSelfTest.class); +// GridTestUtils.addTestIfNeeded(suite,GridCacheOnCopyFlagTxPartitionedSelfTest.class, ignoredTests); +// GridTestUtils.addTestIfNeeded(suite,GridCacheOnCopyFlagReplicatedSelfTest.class, ignoredTests); +// GridTestUtils.addTestIfNeeded(suite,GridCacheOnCopyFlagLocalSelfTest.class, ignoredTests); +// GridTestUtils.addTestIfNeeded(suite,GridCacheOnCopyFlagAtomicSelfTest.class, ignoredTests); - suite.addTestSuite(CacheInterceptorPartitionCounterRandomOperationsTest.class); - suite.addTestSuite(CacheInterceptorPartitionCounterLocalSanityTest.class); + GridTestUtils.addTestIfNeeded(suite,CacheInterceptorPartitionCounterRandomOperationsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,CacheInterceptorPartitionCounterLocalSanityTest.class, ignoredTests); - suite.addTestSuite(GridCacheInterceptorAtomicRebalanceTest.class); - suite.addTestSuite(GridCacheInterceptorTransactionalRebalanceTest.class); + 
GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorAtomicRebalanceTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,GridCacheInterceptorTransactionalRebalanceTest.class, ignoredTests); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeAbstractTest.java index d9a04280eff3b..955781025ae03 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeAbstractTest.java @@ -48,7 +48,11 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -58,6 +62,7 @@ /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheInvokeAbstractTest extends IgniteCacheAbstractTest { /** */ private Integer lastKey = 0; @@ -65,21 +70,24 @@ public abstract class IgniteCacheInvokeAbstractTest extends IgniteCacheAbstractT /** * @throws Exception If failed. */ + @Test public void testInvoke() throws Exception { IgniteCache cache = jcache(); invoke(cache, null); - if (atomicityMode() == TRANSACTIONAL) { - invoke(cache, PESSIMISTIC); + if (atomicityMode() != ATOMIC) { + invoke(cache, PESSIMISTIC); // Tx or Mvcc tx. - invoke(cache, OPTIMISTIC); + if (atomicityMode() == TRANSACTIONAL) + invoke(cache, OPTIMISTIC); } } /** * @throws Exception If failed. 
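The suite conversion above replaces each `suite.addTestSuite(X.class)` call with `GridTestUtils.addTestIfNeeded(suite, X.class, ignoredTests)`, so a caller can pass a collection of test classes to exclude (the new `suite(Collection)` overload) while `suite()` keeps the old behavior by passing `null`. A simplified model of that filter, using a plain list in place of the JUnit `TestSuite` (names other than the method name are hypothetical):

```java
import java.util.Collection;
import java.util.List;

public class SuiteFilterSketch {
    /** Simplified model of GridTestUtils.addTestIfNeeded: add the class unless it is listed in ignoredTests. */
    public static void addTestIfNeeded(List<Class<?>> suite, Class<?> testCls, Collection<Class<?>> ignoredTests) {
        if (ignoredTests == null || !ignoredTests.contains(testCls))
            suite.add(testCls);
    }
}
```

The real helper also has to wrap the class appropriately for the `@RunWith(AllTests.class)` runner; the sketch only captures the null-tolerant ignore check.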
*/ + @Test public void testInternalInvokeNullable() throws Exception { IgniteInternalCache cache = grid(0).cachex(DEFAULT_CACHE_NAME); @@ -230,15 +238,17 @@ private void invoke(final IgniteCache cache, @Nullable Transac /** * @throws Exception If failed. */ + @Test public void testInvokeAll() throws Exception { IgniteCache cache = jcache(); invokeAll(cache, null); - if (atomicityMode() == TRANSACTIONAL) { + if (atomicityMode() != ATOMIC) { invokeAll(cache, PESSIMISTIC); - invokeAll(cache, OPTIMISTIC); + if (atomicityMode() == TRANSACTIONAL) + invokeAll(cache, OPTIMISTIC); } } @@ -305,6 +315,7 @@ static class MyClass3{} /** * @throws Exception If failed. */ + @Test public void testInvokeAllAppliedOnceOnBinaryTypeRegistration() { IgniteCache cache = jcache(); @@ -996,4 +1007,4 @@ public String value() { return S.toString(TestValue.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughAbstractTest.java index 9baa176e1e003..5efa2ee59e6b9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughAbstractTest.java @@ -36,9 +36,6 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import 
org.apache.ignite.transactions.TransactionConcurrency; @@ -53,9 +50,6 @@ * */ public abstract class IgniteCacheInvokeReadThroughAbstractTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static volatile boolean failed; @@ -66,8 +60,6 @@ public abstract class IgniteCacheInvokeReadThroughAbstractTest extends GridCommo @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughSingleNodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughSingleNodeTest.java index 406e5afe9ff7d..44029457956c4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughSingleNodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughSingleNodeTest.java @@ -17,8 +17,16 @@ package org.apache.ignite.internal.processors.cache; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import org.apache.ignite.testframework.MvccFeatureChecker; + import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -26,7 +34,15 @@ /** * */ +@RunWith(JUnit4.class) public class 
IgniteCacheInvokeReadThroughSingleNodeTest extends IgniteCacheInvokeReadThroughAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected void startNodes() throws Exception { startGrid(0); @@ -35,6 +51,7 @@ public class IgniteCacheInvokeReadThroughSingleNodeTest extends IgniteCacheInvok /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomic() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, ATOMIC, 1, false)); } @@ -42,6 +59,7 @@ public void testInvokeReadThroughAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomicNearCache() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, ATOMIC, 1, true)); } @@ -49,6 +67,7 @@ public void testInvokeReadThroughAtomicNearCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomicReplicated() throws Exception { invokeReadThrough(cacheConfiguration(REPLICATED, ATOMIC, 0, false)); } @@ -56,6 +75,7 @@ public void testInvokeReadThroughAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomicLocal() throws Exception { invokeReadThrough(cacheConfiguration(LOCAL, ATOMIC, 0, false)); } @@ -63,6 +83,7 @@ public void testInvokeReadThroughAtomicLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughTx() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL, 1, false)); } @@ -70,6 +91,7 @@ public void testInvokeReadThroughTx() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvokeReadThroughTxNearCache() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL, 1, true)); } @@ -77,6 +99,7 @@ public void testInvokeReadThroughTxNearCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughTxReplicated() throws Exception { invokeReadThrough(cacheConfiguration(REPLICATED, TRANSACTIONAL, 0, false)); } @@ -84,7 +107,44 @@ public void testInvokeReadThroughTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughTxLocal() throws Exception { invokeReadThrough(cacheConfiguration(LOCAL, TRANSACTIONAL, 0, false)); } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTx() throws Exception { + invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, false)); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTxNearCache() throws Exception { + invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, true)); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTxReplicated() throws Exception { + invokeReadThrough(cacheConfiguration(REPLICATED, TRANSACTIONAL_SNAPSHOT, 0, false)); + } + + /** + * @throws Exception If failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTxLocal() throws Exception { + invokeReadThrough(cacheConfiguration(LOCAL, TRANSACTIONAL_SNAPSHOT, 0, false)); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughTest.java index 8fd37588a43eb..cb14e18613ddb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInvokeReadThroughTest.java @@ -17,15 +17,31 @@ package org.apache.ignite.internal.processors.cache; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import org.apache.ignite.testframework.MvccFeatureChecker; + import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheInvokeReadThroughTest extends IgniteCacheInvokeReadThroughAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected void startNodes() throws Exception { startGridsMultiThreaded(4); @@ -38,6 +54,7 @@ public class IgniteCacheInvokeReadThroughTest extends IgniteCacheInvokeReadThrou /** * @throws Exception If failed. 
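The MVCC-related hunks above use two complementary guards: new `TRANSACTIONAL_SNAPSHOT` test variants are added up front but disabled with `@Ignore("https://issues.apache.org/jira/browse/IGNITE-8582")` until the feature works, while `setUp()` calls `MvccFeatureChecker.failIfNotSupported(Feature.CACHE_STORE)` to abort whole test classes early when the suite is forced into MVCC mode. A minimal model of the second guard (the enum values and mode flag are hypothetical simplifications of the real checker):

```java
import java.util.EnumSet;
import java.util.Set;

public class MvccGuardSketch {
    /** Hypothetical subset of MvccFeatureChecker.Feature. */
    public enum Feature { CACHE_STORE, NEAR_CACHE }

    /** Features treated as unsupported when the suite runs in forced-MVCC mode; empty otherwise. */
    private static Set<Feature> unsupported = EnumSet.noneOf(Feature.class);

    public static void setForcedMvccMode(boolean on) {
        unsupported = on ? EnumSet.of(Feature.CACHE_STORE, Feature.NEAR_CACHE) : EnumSet.noneOf(Feature.class);
    }

    /** Simplified model of failIfNotSupported: fail fast in setUp() instead of producing a misleading late failure. */
    public static void failIfNotSupported(Feature f) {
        if (unsupported.contains(f))
            throw new AssertionError("MVCC mode does not support " + f + " yet");
    }
}
```

Failing in `setUp()` keeps the unsupported combination visible in test reports, whereas `@Ignore` is reserved for cases already tracked by a JIRA ticket.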
*/ + @Test public void testInvokeReadThroughAtomic0() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, ATOMIC, 0, false)); } @@ -45,6 +62,7 @@ public void testInvokeReadThroughAtomic0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomic1() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, ATOMIC, 1, false)); } @@ -52,6 +70,7 @@ public void testInvokeReadThroughAtomic1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomic2() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, ATOMIC, 2, false)); } @@ -59,6 +78,7 @@ public void testInvokeReadThroughAtomic2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomicNearCache() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, ATOMIC, 1, true)); } @@ -66,6 +86,7 @@ public void testInvokeReadThroughAtomicNearCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughAtomicReplicated() throws Exception { invokeReadThrough(cacheConfiguration(REPLICATED, ATOMIC, 0, false)); } @@ -73,6 +94,7 @@ public void testInvokeReadThroughAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughTx0() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL, 0, false)); } @@ -80,6 +102,7 @@ public void testInvokeReadThroughTx0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughTx1() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL, 1, false)); } @@ -87,6 +110,7 @@ public void testInvokeReadThroughTx1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvokeReadThroughTx2() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL, 2, false)); } @@ -94,6 +118,7 @@ public void testInvokeReadThroughTx2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughTxNearCache() throws Exception { invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL, 1, true)); } @@ -101,7 +126,53 @@ public void testInvokeReadThroughTxNearCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeReadThroughTxReplicated() throws Exception { invokeReadThrough(cacheConfiguration(REPLICATED, TRANSACTIONAL, 0, false)); } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTx0() throws Exception { + invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 0, false)); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTx1() throws Exception { + invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, false)); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTx2() throws Exception { + invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 2, false)); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTxNearCache() throws Exception { + invokeReadThrough(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, true)); + } + + /** + * @throws Exception If failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582") + @Test + public void testInvokeReadThroughMvccTxReplicated() throws Exception { + invokeReadThrough(cacheConfiguration(REPLICATED, TRANSACTIONAL_SNAPSHOT, 0, false)); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLoadRebalanceEvictionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLoadRebalanceEvictionSelfTest.java index d545d09747682..a56916da1f891 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLoadRebalanceEvictionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLoadRebalanceEvictionSelfTest.java @@ -40,23 +40,21 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.lang.IgniteBiInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheLoadRebalanceEvictionSelfTest extends GridCommonAbstractTest { /** */ public static final int LRU_MAX_SIZE = 10; - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int ENTRIES_CNT = 10000; @@ -64,12 +62,6 @@ public class IgniteCacheLoadRebalanceEvictionSelfTest extends GridCommonAbstract @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = 
super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - LruEvictionPolicy evictionPolicy = new LruEvictionPolicy<>(); evictionPolicy.setMaxSize(LRU_MAX_SIZE); @@ -95,6 +87,7 @@ public class IgniteCacheLoadRebalanceEvictionSelfTest extends GridCommonAbstract /** * @throws Exception If failed. */ + @Test public void testStartRebalancing() throws Exception { List> futs = new ArrayList<>(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheManyAsyncOperationsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheManyAsyncOperationsTest.java index ca9d15b311e85..856bea9307cef 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheManyAsyncOperationsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheManyAsyncOperationsTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.lang.IgniteFuture; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -36,6 +39,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheManyAsyncOperationsTest extends IgniteCacheAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -70,6 +74,7 @@ public class IgniteCacheManyAsyncOperationsTest extends IgniteCacheAbstractTest /** * @throws Exception If failed. 
*/ + @Test public void testManyAsyncOperations() throws Exception { try (Ignite client = startGrid(gridCount())) { assertTrue(client.configuration().isClientMode()); @@ -108,4 +113,4 @@ public void testManyAsyncOperations() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/context/IgniteCachePartitionedExecutionContextTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMvccTxInvokeTest.java similarity index 70% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/context/IgniteCachePartitionedExecutionContextTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMvccTxInvokeTest.java index 36d2c2b7f3262..a2da68cc2d207 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/context/IgniteCachePartitionedExecutionContextTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMvccTxInvokeTest.java @@ -15,30 +15,39 @@ * limitations under the License. 
*/ -package org.apache.ignite.internal.processors.cache.context; +package org.apache.ignite.internal.processors.cache; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ -public class IgniteCachePartitionedExecutionContextTest extends IgniteCacheAbstractExecutionContextTest { +@RunWith(JUnit4.class) +public class IgniteCacheMvccTxInvokeTest extends IgniteCacheInvokeAbstractTest { + /** {@inheritDoc} */ + @Override protected int gridCount() { + return 3; + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { - return CacheMode.PARTITIONED; + return PARTITIONED; } /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { - return ATOMIC; + return TRANSACTIONAL_SNAPSHOT; } /** {@inheritDoc} */ @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMvccTxNearEnabledInvokeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMvccTxNearEnabledInvokeTest.java new file mode 100644 index 0000000000000..fb3e280c5edbf --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMvccTxNearEnabledInvokeTest.java @@ -0,0 +1,32 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.junit.Ignore; + +/** + * + */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") +public class IgniteCacheMvccTxNearEnabledInvokeTest extends IgniteCacheMvccTxInvokeTest { + /** {@inheritDoc} */ + @Override protected NearCacheConfiguration nearConfiguration() { + return new NearCacheConfiguration(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNearLockValueSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNearLockValueSelfTest.java index 37fe5fa830ad5..8dc44638130f8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNearLockValueSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNearLockValueSelfTest.java @@ -28,10 +28,12 @@ import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearLockRequest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import 
org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; @@ -39,12 +41,12 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheNearLockValueSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + startGrid(1); startGrid(0); @@ -54,7 +56,7 @@ public class IgniteCacheNearLockValueSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setForceServerMode(true).setIpFinder(IP_FINDER)); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); if (getTestIgniteInstanceName(0).equals(igniteInstanceName)) cfg.setClientMode(true); @@ -71,6 +73,7 @@ public class IgniteCacheNearLockValueSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testDhtVersion() throws Exception { CacheConfiguration pCfg = new CacheConfiguration<>("partitioned"); @@ -114,4 +117,4 @@ public void testDhtVersion() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoSyncForGetTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoSyncForGetTest.java index b14fecc5d4a27..42fa66ab501fd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoSyncForGetTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoSyncForGetTest.java @@ -39,24 +39,23 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheNoSyncForGetTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static volatile CountDownLatch processorStartLatch; @@ -70,8 +69,6 @@ public class IgniteCacheNoSyncForGetTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -91,6 +88,7 @@ public class IgniteCacheNoSyncForGetTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testAtomicGet() throws Exception { getTest(ATOMIC); } @@ -98,10 +96,19 @@ public void testAtomicGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxGet() throws Exception { getTest(TRANSACTIONAL); } + /** + * @throws Exception If failed. + */ + @Test + public void testMvccTxGet() throws Exception { + getTest(TRANSACTIONAL_SNAPSHOT); + } + /** * @param atomicityMode Cache atomicity mode. * @throws Exception If failed. 
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectPutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectPutSelfTest.java index f924e83b92ce0..916d494449853 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectPutSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectPutSelfTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cacheobject.IgniteCacheObjectProcessor; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheObjectPutSelfTest extends GridCommonAbstractTest { /** */ public static final String CACHE_NAME = "partitioned"; @@ -58,6 +62,7 @@ public class IgniteCacheObjectPutSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPrimitiveValues() throws Exception { IgniteEx ignite = grid(0); @@ -100,6 +105,7 @@ public void testPrimitiveValues() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClassValues() throws Exception { IgniteEx ignite = grid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingErrorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingErrorTest.java index 55ff31a62ed17..89f75a65b001f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingErrorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingErrorTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks behavior on exception while unmarshalling key. */ +@RunWith(JUnit4.class) public class IgniteCacheP2pUnmarshallingErrorTest extends IgniteCacheAbstractTest { /** Allows to change behavior of readExternal method. */ protected static final AtomicInteger readCnt = new AtomicInteger(); @@ -152,6 +156,7 @@ protected void failGet(int k) { * * @throws Exception If failed. */ + @Test public void testResponseMessageOnUnmarshallingFailed() throws Exception { // GridNearAtomicFullUpdateRequest unmarshalling failed test. 
readCnt.set(1); @@ -281,4 +286,4 @@ public TestValue() { throw new IOException("Class can not be unmarshalled."); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingNearErrorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingNearErrorTest.java index 546ec06f5cf24..5dca856e5accf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingNearErrorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingNearErrorTest.java @@ -20,10 +20,14 @@ import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks behavior on exception while unmarshalling key. */ +@RunWith(JUnit4.class) public class IgniteCacheP2pUnmarshallingNearErrorTest extends IgniteCacheP2pUnmarshallingErrorTest { /** {@inheritDoc} */ @Override protected NearCacheConfiguration nearConfiguration() { @@ -43,6 +47,7 @@ public class IgniteCacheP2pUnmarshallingNearErrorTest extends IgniteCacheP2pUnma } /** {@inheritDoc} */ + @Test @Override public void testResponseMessageOnUnmarshallingFailed() throws InterruptedException { //GridCacheEvictionRequest unmarshalling failed test. readCnt.set(5); //2 for each put. @@ -55,4 +60,4 @@ public class IgniteCacheP2pUnmarshallingNearErrorTest extends IgniteCacheP2pUnma // Wait for eviction complete. 
Thread.sleep(1000); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingRebalanceErrorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingRebalanceErrorTest.java index 28fc31c003af7..1183d6a17300b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingRebalanceErrorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingRebalanceErrorTest.java @@ -24,10 +24,14 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks behavior on exception while unmarshalling key. */ +@RunWith(JUnit4.class) public class IgniteCacheP2pUnmarshallingRebalanceErrorTest extends IgniteCacheP2pUnmarshallingErrorTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -35,6 +39,7 @@ public class IgniteCacheP2pUnmarshallingRebalanceErrorTest extends IgniteCacheP2 } /** {@inheritDoc} */ + @Test @Override public void testResponseMessageOnUnmarshallingFailed() throws Exception { //GridDhtPartitionSupplyMessage unmarshalling failed test. 
readCnt.set(Integer.MAX_VALUE); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingTxErrorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingTxErrorTest.java index ba77c701dd54e..9fb14d2eafa98 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingTxErrorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingTxErrorTest.java @@ -24,6 +24,9 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -32,6 +35,7 @@ /** * Checks behavior on exception while unmarshalling key. */ +@RunWith(JUnit4.class) public class IgniteCacheP2pUnmarshallingTxErrorTest extends IgniteCacheP2pUnmarshallingErrorTest { /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { @@ -77,6 +81,7 @@ private void failPessimictic() { } /** {@inheritDoc} */ + @Test @Override public void testResponseMessageOnUnmarshallingFailed() { //GridNearTxPrepareRequest unmarshalling failed test readCnt.set(2); @@ -100,4 +105,4 @@ private void failPessimictic() { jcache(0).put(new TestKey(String.valueOf(++key)), ""); //No failure at client side. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionMapUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionMapUpdateTest.java index 87549cf4d4865..7e0694a7223f6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionMapUpdateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionMapUpdateTest.java @@ -28,20 +28,18 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class IgniteCachePartitionMapUpdateTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE1_ATTR = "cache1"; @@ -67,8 +65,6 @@ public class IgniteCachePartitionMapUpdateTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration ccfg1 = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg1.setName(CACHE1); @@ -115,6 +111,7 @@ public class IgniteCachePartitionMapUpdateTest extends GridCommonAbstractTest { /** * @throws Exception If 
failed. */ + @Test public void testPartitionMapUpdate1() throws Exception { cache1 = false; cache2 = false; @@ -156,6 +153,7 @@ public void testPartitionMapUpdate1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionMapUpdate2() throws Exception { startClientCache = true; @@ -165,6 +163,7 @@ public void testPartitionMapUpdate2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRandom() throws Exception { ThreadLocalRandom rnd = ThreadLocalRandom.current(); @@ -202,6 +201,7 @@ public void testRandom() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRandom2() throws Exception { startClientCache = true; @@ -227,4 +227,4 @@ public AttributeFilter(String attrName) { return F.eq(node.attribute(attrName), "true"); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePeekModesAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePeekModesAbstractTest.java index ae1cf8851b34f..a813422ba0e58 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePeekModesAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePeekModesAbstractTest.java @@ -42,6 +42,9 @@ import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.spi.IgniteSpiCloseableIterator; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -62,6 +65,7 @@ *
 *     <li>{@link IgniteCache#localEntries(CachePeekMode...)}</li>
 * </ul> */ +@RunWith(JUnit4.class) public abstract class IgniteCachePeekModesAbstractTest extends IgniteCacheAbstractTest { /** */ private static final int HEAP_ENTRIES = 30; @@ -106,6 +110,7 @@ protected boolean hasNearCache() { /** * @throws Exception If failed. */ + @Test public void testLocalPeek() throws Exception { if (cacheMode() == LOCAL) { checkAffinityLocalCache(); @@ -371,6 +376,7 @@ private void checkStorage(int nodeIdx) throws Exception { /** * @throws Exception If failed. */ + @Test public void testSize() throws Exception { checkEmpty(); @@ -479,6 +485,7 @@ public void testSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalPartitionSize() throws Exception { if (cacheMode() != LOCAL) return; @@ -534,6 +541,7 @@ public void testLocalPartitionSize() throws Exception { /** * @throws InterruptedException If failed. */ + @Test public void testLocalPartitionSizeFlags() throws InterruptedException { if (true) // TODO GG-11148. return; @@ -595,6 +603,7 @@ public void testLocalPartitionSizeFlags() throws InterruptedException { /** * @throws Exception If failed. */ + @Test public void testNonLocalPartitionSize() throws Exception { if (true) // TODO GG-11148. return; @@ -1280,6 +1289,7 @@ private void checkPrimarySize(int exp) { /** * @throws Exception If failed. 
*/ + @Test public void testLocalEntries() throws Exception { if (cacheMode() == LOCAL) { IgniteCache cache0 = jcache(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutAllRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutAllRestartTest.java index c90030e87f2cf..6ef030f059b55 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutAllRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutAllRestartTest.java @@ -32,11 +32,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -44,13 +44,11 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCachePutAllRestartTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "partitioned"; - /** IP finder. 
*/ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 4; @@ -62,8 +60,6 @@ public class IgniteCachePutAllRestartTest extends GridCommonAbstractTest { ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setName(CACHE_NAME); @@ -79,20 +75,10 @@ public class IgniteCachePutAllRestartTest extends GridCommonAbstractTest { return cfg; } - /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true"); - - super.beforeTestsStarted(); - } - - /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { + super.afterTest(); + stopAllGrids(); } @@ -104,6 +90,7 @@ public class IgniteCachePutAllRestartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStopNode() throws Exception { startGrids(NODES); @@ -165,6 +152,7 @@ public void testStopNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStopOriginatingNode() throws Exception { startGrids(NODES); @@ -222,4 +210,4 @@ public void testStopOriginatingNode() throws Exception { startGrid(node); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutStackOverflowSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutStackOverflowSelfTest.java index 86f34feb5e3da..568246fcf7f52 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutStackOverflowSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePutStackOverflowSelfTest.java @@ -21,13 +21,12 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -36,23 +35,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCachePutStackOverflowSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = 
super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrid(0); @@ -61,6 +45,7 @@ public class IgniteCachePutStackOverflowSelfTest extends GridCommonAbstractTest /** * @throws Exception if failed. */ + @Test public void testStackLocal() throws Exception { checkCache(CacheMode.LOCAL); } @@ -68,6 +53,7 @@ public void testStackLocal() throws Exception { /** * @throws Exception if failed. */ + @Test public void testStackPartitioned() throws Exception { checkCache(CacheMode.PARTITIONED); } @@ -75,6 +61,7 @@ public void testStackPartitioned() throws Exception { /** * @throws Exception if failed. */ + @Test public void testStackReplicated() throws Exception { checkCache(CacheMode.REPLICATED); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionSelfTest.java index 00431b195a021..0135732d96cfe 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionSelfTest.java @@ -28,6 +28,7 @@ import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.IgniteCacheConfigVariationsAbstractTest; import javax.cache.configuration.Factory; @@ -36,11 +37,15 @@ import javax.cache.expiry.Duration; import javax.cache.expiry.ExpiryPolicy; import java.util.concurrent.TimeUnit; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheReadThroughEvictionSelfTest extends IgniteCacheConfigVariationsAbstractTest { /** */ private static final int TIMEOUT = 400; @@ -48,6 +53,13 @@ public class IgniteCacheReadThroughEvictionSelfTest extends IgniteCacheConfigVar /** */ private static final int KEYS = 100; + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { storeStgy.resetStore(); @@ -56,7 +68,10 @@ public class IgniteCacheReadThroughEvictionSelfTest extends IgniteCacheConfigVar /** * @throws Exception if failed. */ + @Test public void testReadThroughWithExpirePolicy() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); + Ignite ig = testedGrid(); CacheConfiguration cc = variationConfig("expire"); @@ -96,7 +111,10 @@ public void testReadThroughWithExpirePolicy() throws Exception { /** * @throws Exception if failed. */ + @Test public void testReadThroughExpirePolicyConfigured() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); + Ignite ig = testedGrid(); CacheConfiguration cc = variationConfig("expireConfig"); @@ -149,7 +167,10 @@ public void testReadThroughExpirePolicyConfigured() throws Exception { /** * @throws Exception if failed. */ + @Test public void testReadThroughEvictionPolicy() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); + Ignite ig = testedGrid(); CacheConfiguration cc = variationConfig("eviction"); @@ -185,6 +206,7 @@ public void testReadThroughEvictionPolicy() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testReadThroughSkipStore() throws Exception { Ignite ig = testedGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionsVariationsSuite.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionsVariationsSuite.java index 400b2e95eee65..e17c60e384784 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionsVariationsSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughEvictionsVariationsSuite.java @@ -22,16 +22,18 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteCacheReadThroughEvictionsVariationsSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheReadThroughEvictionsVariationsSuite { /** * @return Cache API test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return new ConfigVariationsTestSuiteBuilder( "Cache Read Through Variations Test", IgniteCacheReadThroughEvictionSelfTest.class) diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughStoreCallTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughStoreCallTest.java index 423c4a156575e..96fdad68b18bd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughStoreCallTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheReadThroughStoreCallTest.java @@ -36,14 +36,15 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.lang.IgniteRunnable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -54,22 +55,25 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheReadThroughStoreCallTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final Map storeMap = new 
ConcurrentHashMap<>(); /** */ protected boolean client; + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -87,6 +91,7 @@ public class IgniteCacheReadThroughStoreCallTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testMultiNode() throws Exception { startGridsMultiThreaded(4); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheScanPredicateDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheScanPredicateDeploymentSelfTest.java index 37f08797ed8f8..0b37291ab2fee 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheScanPredicateDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheScanPredicateDeploymentSelfTest.java @@ -24,10 +24,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteBiPredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static 
org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,10 +37,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheScanPredicateDeploymentSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Test value. */ protected static final String TEST_PREDICATE = "org.apache.ignite.tests.p2p.CacheDeploymentAlwaysTruePredicate"; @@ -53,12 +51,6 @@ public class IgniteCacheScanPredicateDeploymentSelfTest extends GridCommonAbstra cfg.setCacheConfiguration(cacheConfiguration()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setConnectorConfiguration(null); return cfg; @@ -87,6 +79,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception In case of error. */ + @Test public void testDeployScanPredicate() throws Exception { startGrids(4); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSerializationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSerializationSelfTest.java index 436c1982ff43d..da800a64f4fe3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSerializationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSerializationSelfTest.java @@ -24,10 +24,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteCallable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -35,10 +35,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheSerializationSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 3; @@ -49,8 +47,6 @@ public class IgniteCacheSerializationSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (getTestIgniteInstanceName(CLIENT).equals(igniteInstanceName)) cfg.setClientMode(true); @@ -83,6 +79,7 @@ private CacheConfiguration cacheConfiguration(CacheMode cacheM /** * @throws Exception If failed. */ + @Test public void testSerializeClosure() throws Exception { Ignite client = ignite(CLIENT); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartStopLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartStopLoadTest.java index daa5d8337b842..15467ee4eadf7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartStopLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartStopLoadTest.java @@ -27,11 +27,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for dynamic cache start. 
*/ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheStartStopLoadTest extends GridCommonAbstractTest { /** */ private static final long DURATION = 20_000L; @@ -76,6 +80,7 @@ public int nodeCount() { /** * @throws Exception If failed. */ + @Test public void testMemoryLeaks() throws Exception { final Ignite ignite = ignite(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartTest.java index c889c31ac7b4a..b95698e292cd4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStartTest.java @@ -24,19 +24,17 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.internal.CU; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheStartTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "c1"; @@ -50,8 +48,6 @@ public class IgniteCacheStartTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - 
((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); if (ccfg != null) @@ -71,6 +67,7 @@ public class IgniteCacheStartTest extends GridCommonAbstractTest { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testStartAndNodeJoin() throws Exception { Ignite node0 = startGrid(0); @@ -103,6 +100,7 @@ public void testStartAndNodeJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartFromJoiningNode1() throws Exception { checkStartFromJoiningNode(false); } @@ -110,6 +108,7 @@ public void testStartFromJoiningNode1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartFromJoiningNode2() throws Exception { checkStartFromJoiningNode(true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreCollectionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreCollectionTest.java index 13841e3f3556d..f9442e2b3d942 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreCollectionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreCollectionTest.java @@ -25,41 +25,47 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; 
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.*; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheStoreCollectionTest extends GridCommonAbstractTest { /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + private static final String CACHE1 = "cache1"; + + /** */ + private static final String CACHE2 = "cache2"; + + /** */ + private static final String CACHE3 = "cache3"; /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - CacheConfiguration ccfg1 = new CacheConfiguration<>(DEFAULT_CACHE_NAME); - ccfg1.setName("cache1"); + CacheConfiguration ccfg1 = new CacheConfiguration<>(CACHE1); ccfg1.setAtomicityMode(ATOMIC); ccfg1.setWriteSynchronizationMode(FULL_SYNC); - CacheConfiguration ccfg2 = new CacheConfiguration<>(DEFAULT_CACHE_NAME); - ccfg2.setName("cache2"); + CacheConfiguration ccfg2 = new CacheConfiguration<>(CACHE2); ccfg2.setAtomicityMode(TRANSACTIONAL); ccfg2.setWriteSynchronizationMode(FULL_SYNC); - cfg.setCacheConfiguration(ccfg1, ccfg2); + CacheConfiguration ccfg3 = new CacheConfiguration<>(CACHE3); + ccfg3.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + ccfg3.setWriteSynchronizationMode(FULL_SYNC); - cfg.setMarshaller(null); + cfg.setCacheConfiguration(ccfg1, ccfg2, ccfg3); return cfg; } @@ -74,12 +80,15 @@ public class IgniteCacheStoreCollectionTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testStoreMap() throws Exception { - IgniteCache cache1 = ignite(0).cache("cache1"); - IgniteCache cache2 = ignite(0).cache("cache2"); + IgniteCache cache1 = ignite(0).cache(CACHE1); + IgniteCache cache2 = ignite(0).cache(CACHE2); + IgniteCache cache3 = ignite(0).cache(CACHE3); checkStoreMap(cache1); checkStoreMap(cache2); + checkStoreMap(cache3); } /** diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreValueAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreValueAbstractTest.java index bd259a5aa88ea..f4ffb1249f391 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreValueAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStoreValueAbstractTest.java @@ -45,12 +45,16 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheStoreValueAbstractTest extends IgniteCacheAbstractTest { /** */ private boolean cpyOnRead; @@ -125,6 +129,7 @@ public abstract class IgniteCacheStoreValueAbstractTest extends IgniteCacheAbstr /** * @throws Exception If failed. */ + @Test public void testValueNotStored() throws Exception { cpyOnRead = true; @@ -296,6 +301,7 @@ private void checkNoValue(Affinity aff, Object key) { /** * @throws Exception If failed. 
*/ + @Test public void testValueStored() throws Exception { cpyOnRead = false; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTransactionalStopBusySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTransactionalStopBusySelfTest.java index 06e60454dc837..2ee30b145a0f5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTransactionalStopBusySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTransactionalStopBusySelfTest.java @@ -18,12 +18,17 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Stopped node when client operations are executing. */ +@RunWith(JUnit4.class) public class IgniteCacheTransactionalStopBusySelfTest extends IgniteCacheAbstractStopBusySelfTest { /** {@inheritDoc} */ + @Test @Override public void testPut() throws Exception { bannedMsg.set(GridNearTxPrepareRequest.class); @@ -31,6 +36,7 @@ public class IgniteCacheTransactionalStopBusySelfTest extends IgniteCacheAbstrac } /** {@inheritDoc} */ + @Test @Override public void testPutBatch() throws Exception { bannedMsg.set(GridNearTxPrepareRequest.class); @@ -38,6 +44,7 @@ public class IgniteCacheTransactionalStopBusySelfTest extends IgniteCacheAbstrac } /** {@inheritDoc} */ + @Test @Override public void testPutAsync() throws Exception { bannedMsg.set(GridNearTxPrepareRequest.class); @@ -45,9 +52,10 @@ public class IgniteCacheTransactionalStopBusySelfTest extends IgniteCacheAbstrac } /** {@inheritDoc} */ + @Test @Override public void testRemove() throws Exception { bannedMsg.set(GridNearTxPrepareRequest.class); super.testPut(); } -} \ No newline at end of file +} diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxCopyOnReadDisabledTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxCopyOnReadDisabledTest.java index 2909f0a296cf8..1b44df8678ee7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxCopyOnReadDisabledTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxCopyOnReadDisabledTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -25,8 +26,14 @@ * */ public class IgniteCacheTxCopyOnReadDisabledTest extends IgniteCacheCopyOnReadDisabledAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalInvokeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalInvokeTest.java index 55cc431dde553..107ad7dae88e0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalInvokeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalInvokeTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; 
import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxLocalInvokeTest extends IgniteCacheInvokeAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalPeekModesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalPeekModesTest.java index 3439590040fcf..ccc2950d57eec 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalPeekModesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalPeekModesTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -27,6 +28,13 @@ * */ public class IgniteCacheTxLocalPeekModesTest extends IgniteCachePeekModesAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; @@ -41,4 +49,4 @@ public class IgniteCacheTxLocalPeekModesTest extends IgniteCachePeekModesAbstrac @Override protected CacheMode cacheMode() { return LOCAL; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalStoreValueTest.java 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalStoreValueTest.java index b3987265c1da4..9a0385fea1348 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalStoreValueTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxLocalStoreValueTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxLocalStoreValueTest extends IgniteCacheStoreValueAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; @@ -47,4 +55,4 @@ public class IgniteCacheTxLocalStoreValueTest extends IgniteCacheStoreValueAbstr @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearEnabledStoreValueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearEnabledStoreValueTest.java index fc719fa9862c2..d3f0618b64ca9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearEnabledStoreValueTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearEnabledStoreValueTest.java @@ -18,13 +18,21 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.configuration.NearCacheConfiguration; +import 
org.apache.ignite.testframework.MvccFeatureChecker; /** * */ public class IgniteCacheTxNearEnabledStoreValueTest extends IgniteCacheTxStoreValueTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected NearCacheConfiguration nearConfiguration() { return new NearCacheConfiguration(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearPeekModesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearPeekModesTest.java index aa4faaf6049ea..193e6ee37dac1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearPeekModesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxNearPeekModesTest.java @@ -17,16 +17,31 @@ package org.apache.ignite.internal.processors.cache; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import org.apache.ignite.testframework.MvccFeatureChecker; + /** * Tests peek modes with near tx cache. */ +@RunWith(JUnit4.class) public class IgniteCacheTxNearPeekModesTest extends IgniteCacheTxPeekModesTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected boolean hasNearCache() { return true; } /** {@inheritDoc} */ + @Test @Override public void testLocalPeek() throws Exception { // TODO: uncomment and re-open ticket if fails. 
// fail("https://issues.apache.org/jira/browse/IGNITE-1824"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPeekModesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPeekModesTest.java index ea22f53b0b5bc..9ce23b49bc0a9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPeekModesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPeekModesTest.java @@ -19,6 +19,8 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -26,6 +28,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheTxPeekModesTest extends IgniteCachePeekModesAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -41,12 +44,4 @@ public class IgniteCacheTxPeekModesTest extends IgniteCachePeekModesAbstractTest @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; } - - /** {@inheritDoc} */ - @Override public void testLocalPeek() throws Exception { - // TODO: uncomment and re-open ticket if fails. 
-// fail("https://issues.apache.org/jira/browse/IGNITE-1824"); - - super.testLocalPeek(); - } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPreloadNoWriteTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPreloadNoWriteTest.java index ebd0a96c8980e..641e3acfbd613 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPreloadNoWriteTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxPreloadNoWriteTest.java @@ -25,11 +25,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -41,22 +41,14 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheTxPreloadNoWriteTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration ccfg = new 
CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setCacheMode(REPLICATED); @@ -80,6 +72,7 @@ public class IgniteCacheTxPreloadNoWriteTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTxNoWrite() throws Exception { txNoWrite(true); } @@ -87,6 +80,7 @@ public void testTxNoWrite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxNoWriteRollback() throws Exception { txNoWrite(false); } @@ -129,4 +123,4 @@ private void txNoWrite(boolean commit) throws Exception { // Try to start one more node. startGrid(2); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxStoreValueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxStoreValueTest.java index f3f52ebdf196d..eb5edd0baab6e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxStoreValueTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheTxStoreValueTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxStoreValueTest extends IgniteCacheStoreValueAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.INTERCEPTOR); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 4; @@ -47,4 +55,4 @@ public class IgniteCacheTxStoreValueTest extends IgniteCacheStoreValueAbstractTe @Override protected NearCacheConfiguration nearConfiguration() { 
return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachingProviderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachingProviderSelfTest.java index 37dda12853077..67f16b0d8d2da 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachingProviderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachingProviderSelfTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCachingProviderSelfTest extends IgniteCacheAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -90,6 +94,7 @@ public class IgniteCachingProviderSelfTest extends IgniteCacheAbstractTest { /** * */ + @Test public void testStartIgnite() { javax.cache.spi.CachingProvider cachingProvider = Caching.getCachingProvider(); @@ -118,6 +123,7 @@ public void testStartIgnite() { /** * @throws Exception If failed. 
*/ + @Test public void testCloseManager() throws Exception { startGridsMultiThreaded(1); @@ -131,4 +137,4 @@ public void testCloseManager() throws Exception { assertNotSame(cacheMgr, cachingProvider.getCacheManager()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientAffinityAssignmentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientAffinityAssignmentSelfTest.java index 0a7a680f81d21..34b0e5d9309d1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientAffinityAssignmentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientAffinityAssignmentSelfTest.java @@ -28,19 +28,17 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.lang.GridAbsPredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests affinity assignment for different affinity types. 
*/ +@RunWith(JUnit4.class) public class IgniteClientAffinityAssignmentSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ public static final int PARTS = 256; @@ -51,8 +49,6 @@ public class IgniteClientAffinityAssignmentSelfTest extends GridCommonAbstractTe @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (cache) { CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -75,6 +71,7 @@ public class IgniteClientAffinityAssignmentSelfTest extends GridCommonAbstractTe /** * @throws Exception If failed. */ + @Test public void testRendezvousAssignment() throws Exception { checkAffinityFunction(); } @@ -200,4 +197,4 @@ private void awaitTopology(final long topVer) throws Exception { assertEquals(topVer, cache.context().affinity().affinityTopologyVersion().topologyVersion()); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheInitializationFailTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheInitializationFailTest.java index b1df28e37f86b..ba04a502181ac 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheInitializationFailTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheInitializationFailTest.java @@ -26,7 +26,6 @@ import java.util.Random; import java.util.Set; import java.util.concurrent.Callable; -import javax.cache.Cache; import javax.cache.CacheException; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; @@ -34,13 +33,13 @@ import org.apache.ignite.IgniteDataStreamer; import 
org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.query.FieldsQueryCursor; -import org.apache.ignite.cache.query.QueryCursor; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.mvcc.MvccQueryTracker; @@ -65,11 +64,16 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test checks whether cache initialization error on client side * doesn't causes hangs and doesn't impact other caches. */ +@RunWith(JUnit4.class) public class IgniteClientCacheInitializationFailTest extends GridCommonAbstractTest { /** Failed cache name. */ private static final String CACHE_NAME = "cache"; @@ -80,12 +84,18 @@ public class IgniteClientCacheInitializationFailTest extends GridCommonAbstractT /** Tx cache name. */ private static final String TX_CACHE_NAME = "tx-cache"; + /** Mvcc tx cache name. */ + private static final String MVCC_TX_CACHE_NAME = "mvcc-tx-cache"; + /** Near atomic cache name. */ private static final String NEAR_ATOMIC_CACHE_NAME = "near-atomic-cache"; /** Near tx cache name. */ private static final String NEAR_TX_CACHE_NAME = "near-tx-cache"; + /** Near mvcc tx cache name. */ + private static final String NEAR_MVCC_TX_CACHE_NAME = "near-mvcc-tx-cache"; + /** Failed caches. 
*/ private static final Set FAILED_CACHES; @@ -96,6 +106,8 @@ public class IgniteClientCacheInitializationFailTest extends GridCommonAbstractT set.add(TX_CACHE_NAME); set.add(NEAR_ATOMIC_CACHE_NAME); set.add(NEAR_TX_CACHE_NAME); + set.add(MVCC_TX_CACHE_NAME); + set.add(NEAR_MVCC_TX_CACHE_NAME); FAILED_CACHES = Collections.unmodifiableSet(set); } @@ -123,7 +135,13 @@ public class IgniteClientCacheInitializationFailTest extends GridCommonAbstractT ccfg2.setName(TX_CACHE_NAME); ccfg2.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); - cfg.setCacheConfiguration(ccfg1, ccfg2); + CacheConfiguration ccfg3 = new CacheConfiguration<>(); + + ccfg3.setIndexedTypes(Integer.class, String.class); + ccfg3.setName(MVCC_TX_CACHE_NAME); + ccfg3.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + + cfg.setCacheConfiguration(ccfg1, ccfg2, ccfg3); } else { GridQueryProcessor.idxCls = FailedIndexing.class; @@ -137,6 +155,7 @@ public class IgniteClientCacheInitializationFailTest extends GridCommonAbstractT /** * @throws Exception If failed. */ + @Test public void testAtomicCacheInitialization() throws Exception { checkCacheInitialization(ATOMIC_CACHE_NAME); } @@ -144,6 +163,7 @@ public void testAtomicCacheInitialization() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransactionalCacheInitialization() throws Exception { checkCacheInitialization(TX_CACHE_NAME); } @@ -151,6 +171,15 @@ public void testTransactionalCacheInitialization() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testMvccTransactionalCacheInitialization() throws Exception { + checkCacheInitialization(MVCC_TX_CACHE_NAME); + } + + /** + * @throws Exception If failed. + */ + @Test public void testAtomicNearCacheInitialization() throws Exception { checkCacheInitialization(NEAR_ATOMIC_CACHE_NAME); } @@ -158,10 +187,20 @@ public void testAtomicNearCacheInitialization() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransactionalNearCacheInitialization() throws Exception { checkCacheInitialization(NEAR_TX_CACHE_NAME); } + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testMvccTransactionalNearCacheInitialization() throws Exception { + checkCacheInitialization(NEAR_MVCC_TX_CACHE_NAME); + } + /** * @param cacheName Cache name. * @throws Exception If failed. @@ -200,12 +239,15 @@ private void checkFailedCache(final Ignite client, final String cacheName) { IgniteCache cache; // Start cache with near enabled. - if (NEAR_ATOMIC_CACHE_NAME.equals(cacheName) || NEAR_TX_CACHE_NAME.equals(cacheName)) { + if (NEAR_ATOMIC_CACHE_NAME.equals(cacheName) || NEAR_TX_CACHE_NAME.equals(cacheName) || + NEAR_MVCC_TX_CACHE_NAME.equals(cacheName)) { CacheConfiguration ccfg = new CacheConfiguration(cacheName) .setNearConfiguration(new NearCacheConfiguration()); if (NEAR_TX_CACHE_NAME.equals(cacheName)) ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + else if (NEAR_MVCC_TX_CACHE_NAME.equals(cacheName)) + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); cache = client.getOrCreateCache(ccfg); } @@ -231,19 +273,18 @@ private static class FailedIndexing implements GridQueryIndexing { } /** {@inheritDoc} */ - @Override public void start(GridKernalContext ctx, GridSpinBusyLock busyLock) throws IgniteCheckedException { - // No-op + @Override public SqlFieldsQuery generateFieldsQuery(String cacheName, SqlQuery qry) { + return null; } /** {@inheritDoc} */ - @Override public void stop() throws IgniteCheckedException { + @Override public void start(GridKernalContext ctx, GridSpinBusyLock busyLock) throws IgniteCheckedException { // No-op } /** {@inheritDoc} */ - @Override public QueryCursor> queryDistributedSql(String schemaName, String cacheName, - SqlQuery qry, boolean keepBinary) throws IgniteCheckedException { - return null; + @Override public void stop() throws 
IgniteCheckedException { + // No-op } /** {@inheritDoc} */ @@ -264,12 +305,6 @@ private static class FailedIndexing implements GridQueryIndexing { return 0; } - /** {@inheritDoc} */ - @Override public QueryCursor> queryLocalSql(String schemaName, String cacheName, - SqlQuery qry, IndexingQueryFilter filter, boolean keepBinary) throws IgniteCheckedException { - return null; - } - /** {@inheritDoc} */ @Override public FieldsQueryCursor> queryLocalSqlFields(String schemaName, SqlFieldsQuery qry, boolean keepBinary, IndexingQueryFilter filter, GridQueryCancel cancel) throws IgniteCheckedException { @@ -345,13 +380,8 @@ private static class FailedIndexing implements GridQueryIndexing { } /** {@inheritDoc} */ - @Override public void rebuildIndexesFromHash(String cacheName) throws IgniteCheckedException { - // No-op - } - - /** {@inheritDoc} */ - @Override public void markForRebuildFromHash(String cacheName) { - // No-op + @Override public IgniteInternalFuture rebuildIndexesFromHash(GridCacheContext cctx) { + return null; } /** {@inheritDoc} */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheStartFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheStartFailoverTest.java index a2d9da719ce98..8fb9795e16be2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheStartFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientCacheStartFailoverTest.java @@ -41,30 +41,29 @@ import org.apache.ignite.internal.TestRecordingCommunicationSpi; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtAffinityAssignmentResponse; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class IgniteClientCacheStartFailoverTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -72,8 +71,6 @@ public class IgniteClientCacheStartFailoverTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); @@ -81,6 +78,13 @@ public class IgniteClientCacheStartFailoverTest extends GridCommonAbstractTest { return 
cfg; } + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + client = false; + } + /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -91,6 +95,7 @@ public class IgniteClientCacheStartFailoverTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testClientStartCoordinatorFailsAtomic() throws Exception { clientStartCoordinatorFails(ATOMIC); } @@ -98,10 +103,19 @@ public void testClientStartCoordinatorFailsAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientStartCoordinatorFailsTx() throws Exception { clientStartCoordinatorFails(TRANSACTIONAL); } + /** + * @throws Exception If failed. + */ + @Test + public void testClientStartCoordinatorFailsMvccTx() throws Exception { + clientStartCoordinatorFails(TRANSACTIONAL_SNAPSHOT); + } + /** * @param atomicityMode Cache atomicity mode. * @throws Exception If failed. @@ -152,6 +166,7 @@ private void clientStartCoordinatorFails(CacheAtomicityMode atomicityMode) throw /** * @throws Exception If failed. */ + @Test public void testClientStartLastServerFailsAtomic() throws Exception { clientStartLastServerFails(ATOMIC); } @@ -159,10 +174,20 @@ public void testClientStartLastServerFailsAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientStartLastServerFailsTx() throws Exception { clientStartLastServerFails(TRANSACTIONAL); } + /** + * @throws Exception If failed. + */ + @Test + public void testClientStartLastServerFailsMvccTx() throws Exception { + clientStartLastServerFails(TRANSACTIONAL_SNAPSHOT); + } + + /** * @param atomicityMode Cache atomicity mode. * @throws Exception If failed. @@ -248,6 +273,7 @@ private void clientStartLastServerFails(CacheAtomicityMode atomicityMode) throws /** * @throws Exception If failed. 
*/ + @Test public void testRebalanceState() throws Exception { final int SRVS = 3; @@ -319,6 +345,7 @@ public void testRebalanceState() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalanceStateConcurrentStart() throws Exception { final int SRVS1 = 3; final int CLIENTS = 5; @@ -413,6 +440,7 @@ public void testRebalanceStateConcurrentStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientStartCloseServersRestart() throws Exception { final int SRVS = 4; final int CLIENTS = 4; @@ -547,6 +575,18 @@ private List startCaches(Ignite node, int keys) { cache.putAll(map); } + // TODO: uncomment TRANSACTIONAL_SNAPSHOT cache creation when IGNITE-9470 is fixed. + /* for (int i = 0; i < 3; i++) { + CacheConfiguration ccfg = cacheConfiguration("mvcc-" + i, TRANSACTIONAL_SNAPSHOT, i); + + IgniteCache cache = node.createCache(ccfg); + + cacheNames.add(ccfg.getName()); + + cache.putAll(map); + }*/ + + return cacheNames; } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTest.java index 6ed0ef7e4093e..b7ed96ff11628 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTest.java @@ -50,11 +50,14 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -63,16 +66,17 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClusterActivateDeactivateTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ static final String CACHE_NAME_PREFIX = "cache-"; /** Non-persistent data region name. */ private static final String NO_PERSISTENCE_REGION = "no-persistence-region"; + /** */ + private static final int DEFAULT_CACHES_COUNT = 2; + /** */ boolean client; @@ -103,7 +107,7 @@ public class IgniteClusterActivateDeactivateTest extends GridCommonAbstractTest spi.setJoinTimeout(2 * 60_000); } - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(sharedStaticIpFinder); cfg.setConsistentId(igniteInstanceName); @@ -120,11 +124,13 @@ public class IgniteClusterActivateDeactivateTest extends GridCommonAbstractTest DataStorageConfiguration memCfg = new DataStorageConfiguration(); memCfg.setPageSize(4 * 1024); memCfg.setDefaultDataRegionConfiguration(new DataRegionConfiguration() - .setMaxSize(300L * 1024 * 1024) + .setMaxSize(150L * 1024 * 1024) .setPersistenceEnabled(persistenceEnabled())); + memCfg.setWalSegments(2); + memCfg.setWalSegmentSize(512 * 1024); memCfg.setDataRegionConfigurations(new DataRegionConfiguration() - .setMaxSize(300L * 1024 * 1024) + .setMaxSize(150L * 1024 * 1024) .setName(NO_PERSISTENCE_REGION) .setPersistenceEnabled(false)); @@ -132,6 +138,7 @@ public class IgniteClusterActivateDeactivateTest extends GridCommonAbstractTest memCfg.setWalMode(WALMode.LOG_ONLY); cfg.setDataStorageConfiguration(memCfg); + 
cfg.setFailureDetectionTimeout(60_000); if (testSpi) { TestRecordingCommunicationSpi spi = new TestRecordingCommunicationSpi(); @@ -162,6 +169,7 @@ protected boolean persistenceEnabled() { /** * @throws Exception If failed. */ + @Test public void testActivateSimple_SingleNode() throws Exception { activateSimple(1, 0, 0); } @@ -169,6 +177,7 @@ public void testActivateSimple_SingleNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActivateSimple_5_Servers() throws Exception { activateSimple(5, 0, 0); } @@ -176,6 +185,7 @@ public void testActivateSimple_5_Servers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActivateSimple_5_Servers2() throws Exception { activateSimple(5, 0, 4); } @@ -183,6 +193,7 @@ public void testActivateSimple_5_Servers2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActivateSimple_5_Servers_5_Clients() throws Exception { activateSimple(5, 4, 0); } @@ -190,6 +201,7 @@ public void testActivateSimple_5_Servers_5_Clients() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testActivateSimple_5_Servers_5_Clients_FromClient() throws Exception { activateSimple(5, 4, 6); } @@ -226,7 +238,7 @@ private void activateSimple(int srvs, int clients, int activateFrom) throws Exce assertTrue(ignite(i).cluster().active()); for (int i = 0; i < srvs + clients; i++) { - for (int c = 0; c < 2; c++) + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) checkCache(ignite(i), CACHE_NAME_PREFIX + c, true); checkCache(ignite(i), CU.UTILITY_CACHE_NAME, true); @@ -238,7 +250,7 @@ private void activateSimple(int srvs, int clients, int activateFrom) throws Exce startGrid(srvs + clients); - for (int c = 0; c < 2; c++) + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) checkCache(ignite(srvs + clients), CACHE_NAME_PREFIX + c, true); checkCaches(srvs + clients + 1, CACHES); @@ -247,18 +259,78 @@ private void activateSimple(int srvs, int clients, int activateFrom) throws Exce startGrid(srvs + clients + 1); - for (int c = 0; c < 2; c++) + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) checkCache(ignite(srvs + clients + 1), CACHE_NAME_PREFIX + c, false); checkCaches(srvs + clients + 2, CACHES); } + /** + * @throws Exception If failed. + */ + @Test + public void testReActivateSimple_5_Servers_4_Clients_FromClient() throws Exception { + reactivateSimple(5, 4, 6); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testReActivateSimple_5_Servers_4_Clients_FromServer() throws Exception { + reactivateSimple(5, 4, 0); + } + + /** + * @param srvs Number of servers. + * @param clients Number of clients. + * @param activateFrom Index of node starting activation. + * @throws Exception If failed.
+ */ + public void reactivateSimple(int srvs, int clients, int activateFrom) throws Exception { + activateSimple(srvs, clients, activateFrom); + + rolloverSegmentAtLeastTwice(activateFrom); + + for (int i = 0; i < srvs + clients; i++) { + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) + checkCache(ignite(i), CACHE_NAME_PREFIX + c, true); + + checkCache(ignite(i), CU.UTILITY_CACHE_NAME, true); + } + + ignite(activateFrom).cluster().active(false); + + ignite(activateFrom).cluster().active(true); + + rolloverSegmentAtLeastTwice(activateFrom); + + for (int i = 0; i < srvs + clients; i++) { + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) + checkCache(ignite(i), CACHE_NAME_PREFIX + c, true); + + checkCache(ignite(i), CU.UTILITY_CACHE_NAME, true); + } + } + + /** + * Work directory has 2 segments by default. This method does a full rollover cycle. + */ + private void rolloverSegmentAtLeastTwice(int activateFrom) { + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) { + IgniteCache cache = ignite(activateFrom).cache(CACHE_NAME_PREFIX + c); + // This should be enough, including free pages, meta pages, etc. + for (int i = 0; i < 1000; i++) + cache.put(i, i); + } + } + /** + * @param nodes Number of nodes. + * @param caches Number of caches. */ final void checkCaches(int nodes, int caches) { - for (int i = 0; i < nodes; i++) { + for (int i = 0; i < nodes; i++) { for (int c = 0; c < caches; c++) { IgniteCache cache = ignite(i).cache(CACHE_NAME_PREFIX + c); @@ -278,20 +350,29 @@ final void checkCaches(int nodes, int caches) { /** * @throws Exception If failed. */ + @Test public void testJoinWhileActivate1_Server() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + + joinWhileActivate1(false, false); } /** * @throws Exception If failed.
*/ + @Test public void testJoinWhileActivate1_WithCache_Server() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + joinWhileActivate1(false, true); } /** * @throws Exception If failed. */ + @Test public void testJoinWhileActivate1_Client() throws Exception { joinWhileActivate1(true, false); } @@ -321,7 +402,7 @@ private void joinWhileActivate1(final boolean startClient, final boolean withNew activeFut.get(); startFut.get(); - for (int c = 0; c < 2; c++) + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) checkCache(ignite(2), CACHE_NAME_PREFIX + c, true); if (withNewCache) { @@ -378,7 +459,7 @@ private IgniteInternalFuture startNodesAndBlockStatusChange( } if (blockMsgNodes.length == 0) - blockMsgNodes = new int[]{1}; + blockMsgNodes = new int[] {1}; final AffinityTopologyVersion STATE_CHANGE_TOP_VER = new AffinityTopologyVersion(srvs + clients, minorVer); @@ -426,6 +507,7 @@ private void blockExchangeSingleMessage(TestRecordingCommunicationSpi spi, final /** * @throws Exception If failed. */ + @Test public void testJoinWhileDeactivate1_Server() throws Exception { joinWhileDeactivate1(false, false); } @@ -433,6 +515,7 @@ public void testJoinWhileDeactivate1_Server() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinWhileDeactivate1_WithCache_Server() throws Exception { joinWhileDeactivate1(false, true); } @@ -440,6 +523,7 @@ public void testJoinWhileDeactivate1_WithCache_Server() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJoinWhileDeactivate1_Client() throws Exception { joinWhileDeactivate1(true, false); } @@ -473,7 +557,7 @@ private void joinWhileDeactivate1(final boolean startClient, final boolean withN ignite(2).cluster().active(true); - for (int c = 0; c < 2; c++) + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) checkCache(ignite(2), CACHE_NAME_PREFIX + c, true); if (withNewCache) { @@ -503,6 +587,7 @@ private void joinWhileDeactivate1(final boolean startClient, final boolean withN /** * @throws Exception If failed. */ + @Test public void testConcurrentJoinAndActivate() throws Exception { for (int iter = 0; iter < 3; iter++) { log.info("Iteration: " + iter); @@ -553,6 +638,7 @@ public void testConcurrentJoinAndActivate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeactivateSimple_SingleNode() throws Exception { deactivateSimple(1, 0, 0); } @@ -560,6 +646,7 @@ public void testDeactivateSimple_SingleNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeactivateSimple_5_Servers() throws Exception { deactivateSimple(5, 0, 0); } @@ -567,6 +654,7 @@ public void testDeactivateSimple_5_Servers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeactivateSimple_5_Servers2() throws Exception { deactivateSimple(5, 0, 4); } @@ -574,6 +662,7 @@ public void testDeactivateSimple_5_Servers2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeactivateSimple_5_Servers_5_Clients() throws Exception { deactivateSimple(5, 4, 0); } @@ -581,6 +670,7 @@ public void testDeactivateSimple_5_Servers_5_Clients() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDeactivateSimple_5_Servers_5_Clients_FromClient() throws Exception { deactivateSimple(5, 4, 6); } @@ -645,7 +735,7 @@ private void deactivateSimple(int srvs, int clients, int deactivateFrom) throws } for (int i = 0; i < srvs; i++) { - for (int c = 0; c < 2; c++) + for (int c = 0; c < DEFAULT_CACHES_COUNT; c++) checkCache(ignite(i), CACHE_NAME_PREFIX + c, true); } @@ -670,6 +760,7 @@ private void startWithCaches1(int srvs, int clients) throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectClusterActive() throws Exception { testReconnectSpi = true; @@ -708,6 +799,7 @@ public void testClientReconnectClusterActive() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectClusterInactive() throws Exception { testReconnectSpi = true; @@ -747,6 +839,7 @@ public void testClientReconnectClusterInactive() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectClusterDeactivated() throws Exception { clientReconnectClusterDeactivated(false); } @@ -754,6 +847,7 @@ public void testClientReconnectClusterDeactivated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectClusterDeactivateInProgress() throws Exception { clientReconnectClusterDeactivated(true); } @@ -782,8 +876,7 @@ private void clientReconnectClusterDeactivated(final boolean transition) throws checkCaches1(SRVS + CLIENTS); // Wait for late affinity assignment to finish. - grid(0).context().cache().context().exchange().affinityReadyFuture( - new AffinityTopologyVersion(SRVS + CLIENTS, 1)).get(); + awaitPartitionMapExchange(); final AffinityTopologyVersion STATE_CHANGE_TOP_VER = new AffinityTopologyVersion(SRVS + CLIENTS + 1, 1); @@ -848,6 +941,7 @@ private void clientReconnectClusterDeactivated(final boolean transition) throws /** * @throws Exception If failed. 
*/ + @Test public void testClientReconnectClusterActivated() throws Exception { clientReconnectClusterActivated(false); } @@ -855,6 +949,7 @@ public void testClientReconnectClusterActivated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectClusterActivateInProgress() throws Exception { clientReconnectClusterActivated(true); } @@ -935,10 +1030,11 @@ private void clientReconnectClusterActivated(final boolean transition) throws Ex /** * @throws Exception If failed. */ + @Test public void testInactiveTopologyChanges() throws Exception { testSpi = true; - testSpiRecord = new Class[]{GridDhtPartitionsSingleMessage.class, GridDhtPartitionsFullMessage.class}; + testSpiRecord = new Class[] {GridDhtPartitionsSingleMessage.class, GridDhtPartitionsFullMessage.class}; active = false; @@ -991,6 +1087,7 @@ public void testInactiveTopologyChanges() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActivateFailover1() throws Exception { stateChangeFailover1(true); } @@ -998,6 +1095,7 @@ public void testActivateFailover1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeactivateFailover1() throws Exception { stateChangeFailover1(false); } @@ -1048,6 +1146,7 @@ private void stateChangeFailover1(boolean activate) throws Exception { /** * @throws Exception If failed. */ + @Test public void testActivateFailover2() throws Exception { stateChangeFailover2(true); } @@ -1055,6 +1154,7 @@ public void testActivateFailover2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeactivateFailover2() throws Exception { stateChangeFailover2(false); } @@ -1116,6 +1216,8 @@ private void stateChangeFailover2(boolean activate) throws Exception { /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8220") + @Test public void testActivateFailover3() throws Exception { stateChangeFailover3(true); } @@ -1123,6 +1225,8 @@ public void testActivateFailover3() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8220") + @Test public void testDeactivateFailover3() throws Exception { stateChangeFailover3(false); } @@ -1132,21 +1236,19 @@ public void testDeactivateFailover3() throws Exception { * @throws Exception If failed. */ private void stateChangeFailover3(boolean activate) throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-8220"); - testReconnectSpi = true; startNodesAndBlockStatusChange(4, 0, 0, !activate); client = false; - IgniteInternalFuture startFut1 = GridTestUtils.runAsync((Callable) () -> { + IgniteInternalFuture startFut1 = GridTestUtils.runAsync(() -> { startGrid(4); return null; }, "start-node1"); - IgniteInternalFuture startFut2 = GridTestUtils.runAsync((Callable) () -> { + IgniteInternalFuture startFut2 = GridTestUtils.runAsync(() -> { startGrid(5); return null; @@ -1178,12 +1280,13 @@ private void stateChangeFailover3(boolean activate) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClusterStateNotWaitForDeactivation() throws Exception { testSpi = true; final int nodes = 2; - IgniteEx crd = (IgniteEx) startGrids(nodes); + IgniteEx crd = (IgniteEx)startGrids(nodes); crd.cluster().active(true); @@ -1315,7 +1418,7 @@ final void checkNoCaches(int nodes) { GridCacheProcessor cache = ((IgniteEx)ignite(i)).context().cache(); assertTrue(cache.caches().isEmpty()); - assertTrue(cache.internalCaches().isEmpty()); + assertTrue(cache.internalCaches().stream().allMatch(c -> c.context().isRecoveryMode())); } } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTestWithPersistence.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTestWithPersistence.java index 0972ea2a8f6ab..70d0034131ceb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTestWithPersistence.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTestWithPersistence.java @@ -28,6 +28,7 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; @@ -40,11 +41,16 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteClusterActivateDeactivateTestWithPersistence extends 
IgniteClusterActivateDeactivateTest { /** {@inheritDoc} */ @Override protected boolean persistenceEnabled() { @@ -73,6 +79,7 @@ public class IgniteClusterActivateDeactivateTestWithPersistence extends IgniteCl /** * @throws Exception If failed. */ + @Test public void testActivateCachesRestore_SingleNode() throws Exception { activateCachesRestore(1, false); } @@ -80,6 +87,7 @@ public void testActivateCachesRestore_SingleNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActivateCachesRestore_SingleNode_WithNewCaches() throws Exception { activateCachesRestore(1, true); } @@ -87,6 +95,7 @@ public void testActivateCachesRestore_SingleNode_WithNewCaches() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testActivateCachesRestore_5_Servers() throws Exception { activateCachesRestore(5, false); } @@ -94,10 +103,63 @@ public void testActivateCachesRestore_5_Servers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActivateCachesRestore_5_Servers_WithNewCaches() throws Exception { activateCachesRestore(5, true); } + /** {@inheritDoc} */ + @Test + @Override public void testReActivateSimple_5_Servers_4_Clients_FromServer() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10750"); + + super.testReActivateSimple_5_Servers_4_Clients_FromServer(); + } + + /** + * Test deactivation on cluster that is not yet activated. + * + * @throws Exception If failed. 
+ */ + @Test + public void testDeactivateInactiveCluster() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10582"); + + ccfgs = new CacheConfiguration[] { + new CacheConfiguration<>("test_cache_1") + .setGroupName("test_cache") + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL), + new CacheConfiguration<>("test_cache_2") + .setGroupName("test_cache") + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + }; + + Ignite ignite = startGrids(3); + + ignite.cluster().active(true); + + ignite.cache("test_cache_1") + .put("key1", "val1"); + ignite.cache("test_cache_2") + .put("key1", "val1"); + + ignite.cluster().active(false); + + assertFalse(ignite.cluster().active()); + + stopAllGrids(); + + ignite = startGrids(2); + + assertFalse(ignite.cluster().active()); + + ignite.cluster().active(false); + + assertFalse(ignite.cluster().active()); + } + /** */ private Map startGridsAndLoadData(int srvs) throws Exception { Ignite srv = startGrids(srvs); @@ -188,6 +250,7 @@ private void activateCachesRestore(int srvs, boolean withNewCaches) throws Excep /** * @see IGNITE-7330 for more information about context of the test */ + @Test public void testClientJoinsWhenActivationIsInProgress() throws Exception { startGridsAndLoadData(5); @@ -246,6 +309,7 @@ private void checkCachesData(Map cacheData, DataStorageConfigu /** * @throws Exception If failed. 
*/ + @Test public void testActivateCacheRestoreConfigurationConflict() throws Exception { final int SRVS = 3; @@ -253,13 +317,15 @@ public void testActivateCacheRestoreConfigurationConflict() throws Exception { srv.cluster().active(true); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); srv.createCache(ccfg); stopAllGrids(); - ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME + 1); + ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME + 1) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); ccfg.setGroupName(DEFAULT_CACHE_NAME); @@ -282,7 +348,11 @@ public void testActivateCacheRestoreConfigurationConflict() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeactivateDuringEvictionAndRebalance() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10786"); + IgniteEx srv = (IgniteEx) startGrids(3); srv.cluster().active(true); @@ -291,7 +361,8 @@ public void testDeactivateDuringEvictionAndRebalance() throws Exception { .setBackups(1) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) .setIndexedTypes(Integer.class, Integer.class) - .setAffinity(new RendezvousAffinityFunction(false, 64)); + .setAffinity(new RendezvousAffinityFunction(false, 64)) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); IgniteCache cache = srv.createCache(ccfg); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse.java new file mode 100644 index 0000000000000..c95fe3dc4a505 --- /dev/null +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse.java @@ -0,0 +1,67 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Test; + +/** + * + */ +public class IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse extends + IgniteClusterActivateDeactivateTestWithPersistence { + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + System.setProperty(IgniteSystemProperties.IGNITE_REUSE_MEMORY_ON_DEACTIVATE, "true"); + + super.beforeTest(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + System.clearProperty(IgniteSystemProperties.IGNITE_REUSE_MEMORY_ON_DEACTIVATE); + } + + /** {@inheritDoc} */ + @Test + @Override public void testDeactivateDuringEvictionAndRebalance() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10788"); + + super.testDeactivateDuringEvictionAndRebalance(); + } + + /** 
{@inheritDoc} */ + @Test + @Override public void testDeactivateInactiveCluster() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10788"); + + super.testDeactivateInactiveCluster(); + } + + @Override public void testReActivateSimple_5_Servers_4_Clients_FromServer() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10750"); + + super.testReActivateSimple_5_Servers_4_Clients_FromServer(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDaemonNodeMarshallerCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDaemonNodeMarshallerCacheTest.java index 2f9bd533ae916..a985a460b2378 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDaemonNodeMarshallerCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDaemonNodeMarshallerCacheTest.java @@ -24,10 +24,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -36,10 +36,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteDaemonNodeMarshallerCacheTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean daemon; @@ -49,8 
+47,6 @@ public class IgniteDaemonNodeMarshallerCacheTest extends GridCommonAbstractTest cfg.setDaemon(daemon); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -64,6 +60,7 @@ public class IgniteDaemonNodeMarshallerCacheTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testMarshalOnDaemonNode1() throws Exception { marshalOnDaemonNode(true); } @@ -71,6 +68,7 @@ public void testMarshalOnDaemonNode1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMarshalOnDaemonNode2() throws Exception { marshalOnDaemonNode(false); } @@ -191,4 +189,4 @@ public TestClass2(int val) { this.val = val; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheAndNodeStop.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheAndNodeStop.java index 50f45a7ed3f36..febfe61bb1073 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheAndNodeStop.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheAndNodeStop.java @@ -20,30 +20,19 @@ import org.apache.ignite.*; import org.apache.ignite.configuration.*; import org.apache.ignite.internal.*; -import org.apache.ignite.spi.discovery.tcp.*; -import org.apache.ignite.spi.discovery.tcp.ipfinder.*; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.*; import org.apache.ignite.testframework.*; import org.apache.ignite.testframework.junits.common.*; import java.util.concurrent.*; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteDynamicCacheAndNodeStop extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected 
IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { super.afterTest(); @@ -54,6 +43,7 @@ public class IgniteDynamicCacheAndNodeStop extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCacheAndNodeStop() throws Exception { final Ignite ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheFilterTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheFilterTest.java index 53c1cb29fe452..e0919763a4ab8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheFilterTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheFilterTest.java @@ -21,14 +21,15 @@ import java.util.Map; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -37,10 +38,8 @@ /** * */ +@RunWith(JUnit4.class) 
public class IgniteDynamicCacheFilterTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String ATTR_NAME = "cacheAttr"; @@ -51,8 +50,6 @@ public class IgniteDynamicCacheFilterTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setWriteSynchronizationMode(FULL_SYNC); @@ -60,6 +57,7 @@ public class IgniteDynamicCacheFilterTest extends GridCommonAbstractTest { ccfg.setRebalanceMode(SYNC); ccfg.setNodeFilter(new TestNodeFilter("A")); + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); if (attrVal != null) { Map attrs = new HashMap<>(); @@ -84,6 +82,7 @@ public class IgniteDynamicCacheFilterTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testCofiguredCacheFilter() throws Exception { attrVal = "A"; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheMultinodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheMultinodeTest.java index fb1643efeb526..b7865c126d659 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheMultinodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheMultinodeTest.java @@ -26,11 +26,11 @@ import org.apache.ignite.Ignite; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -38,10 +38,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteDynamicCacheMultinodeTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 6; @@ -52,8 +50,6 @@ public class IgniteDynamicCacheMultinodeTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -73,6 +69,7 @@ 
public class IgniteDynamicCacheMultinodeTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testGetOrCreateCache() throws Exception { createCacheMultinode(TestOp.GET_OR_CREATE_CACHE); } @@ -80,6 +77,7 @@ public void testGetOrCreateCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetOrCreateCaches() throws Exception { createCacheMultinode(TestOp.GET_OR_CREATE_CACHES); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartCoordinatorFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartCoordinatorFailoverTest.java index 36e1879a1c96b..f0cbd77c13478 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartCoordinatorFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartCoordinatorFailoverTest.java @@ -1,262 +1,258 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.internal.processors.cache; - -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.concurrent.Callable; -import java.util.concurrent.CountDownLatch; -import javax.cache.CacheException; -import org.apache.ignite.Ignite; -import org.apache.ignite.IgniteCache; -import org.apache.ignite.IgniteDataStreamer; -import org.apache.ignite.IgniteException; -import org.apache.ignite.cache.affinity.AffinityFunctionContext; -import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; -import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.GridJobExecuteResponse; -import org.apache.ignite.internal.managers.communication.GridIoMessage; -import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; -import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage; -import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.lang.IgniteInClosure; -import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.IgniteSpiException; -import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; -import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage; -import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -public class IgniteDynamicCacheStartCoordinatorFailoverTest extends 
GridCommonAbstractTest { - /** Default IP finder for single-JVM cloud grid. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** Latch which blocks DynamicCacheChangeFailureMessage until main thread has sent node fail signal. */ - private static volatile CountDownLatch latch; - - /** */ - private static final String COORDINATOR_ATTRIBUTE = "coordinator"; - - /** Client mode flag. */ - private Boolean appendCustomAttribute; - - /** {@inheritDoc} */ - @Override protected void afterTest() throws Exception { - stopAllGrids(); - } - - /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - latch = new CountDownLatch(1); - } - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - - TcpCommunicationSpi commSpi = new CustomCommunicationSpi(); - commSpi.setLocalPort(GridTestUtils.getNextCommPort(getClass())); - - cfg.setCommunicationSpi(commSpi); - - cfg.setFailureDetectionTimeout(15_000); - - if (appendCustomAttribute) { - Map attrs = new HashMap<>(); - - attrs.put(COORDINATOR_ATTRIBUTE, Boolean.TRUE); - - cfg.setUserAttributes(attrs); - } - - return cfg; - } - - /** - * Tests coordinator failover during cache start failure. - * - * @throws Exception If test failed. - */ - public void testCoordinatorFailure() throws Exception { - // Start coordinator node. 
- appendCustomAttribute = true; - - Ignite g = startGrid(0); - - appendCustomAttribute = false; - - Ignite g1 = startGrid(1); - Ignite g2 = startGrid(2); - - awaitPartitionMapExchange(); - - CacheConfiguration cfg = new CacheConfiguration(); - - cfg.setName("test-coordinator-failover"); - - cfg.setAffinity(new BrokenAffinityFunction(false, getTestIgniteInstanceName(2))); - - GridTestUtils.runAsync(new Callable() { - @Override public Object call() throws Exception { - GridTestUtils.assertThrows(log, new Callable() { - @Override public Object call() throws Exception { - g1.getOrCreateCache(cfg); - return null; - } - }, CacheException.class, null); - - return null; - } - }, "cache-starter-thread"); - - latch.await(); - - stopGrid(0, true); - - awaitPartitionMapExchange(); - - // Correct the cache configuration. - cfg.setAffinity(new RendezvousAffinityFunction()); - - IgniteCache cache = g1.getOrCreateCache(cfg); - - checkCacheOperations(g1, cache); - } - - /** - * Test the basic cache operations. - * - * @param cache Cache. - * @throws Exception If test failed. - */ - protected void checkCacheOperations(Ignite ignite, IgniteCache cache) throws Exception { - int cnt = 1000; - - // Check base cache operations. - for (int i = 0; i < cnt; ++i) - cache.put(i, i); - - for (int i = 0; i < cnt; ++i) { - Integer v = (Integer) cache.get(i); - - assertNotNull(v); - assertEquals(i, v.intValue()); - } - - // Check Data Streamer capabilities. - try (IgniteDataStreamer streamer = ignite.dataStreamer(cache.getName())) { - for (int i = 0; i < 10_000; ++i) - streamer.addData(i, i); - } - } - - /** - * Communication SPI which could optionally block outgoing messages. - */ - private static class CustomCommunicationSpi extends TcpCommunicationSpi { - /** - * Send message optionally either blocking it or throwing an exception if it is of - * {@link GridJobExecuteResponse} type. - * - * @param node Destination node. - * @param msg Message to be sent. - * @param ackClosure Ack closure. 
- * @throws org.apache.ignite.spi.IgniteSpiException If failed. - */ - @Override public void sendMessage(ClusterNode node, Message msg, IgniteInClosure ackClosure) - throws IgniteSpiException { - - if (msg instanceof GridIoMessage) { - GridIoMessage msg0 = (GridIoMessage)msg; - - if (msg0.message() instanceof GridDhtPartitionsSingleMessage) { - Boolean attr = (Boolean) node.attributes().get(COORDINATOR_ATTRIBUTE); - - GridDhtPartitionsSingleMessage singleMsg = (GridDhtPartitionsSingleMessage) msg0.message(); - - Exception err = singleMsg.getError(); - - if (Boolean.TRUE.equals(attr) && err != null) { - // skip message - latch.countDown(); - - return; - } - } - } - - super.sendMessage(node, msg, ackClosure); - } - } - - /** - * Affinity function that throws an exception when affinity nodes are calculated on the given node. - */ - public static class BrokenAffinityFunction extends RendezvousAffinityFunction { - /** */ - private static final long serialVersionUID = 0L; - - /** */ - @IgniteInstanceResource - private Ignite ignite; - - /** Exception should arise on all nodes. */ - private boolean eOnAllNodes = false; - - /** Exception should arise on node with certain name. */ - private String gridName; - - /** - * Default constructor. - */ - public BrokenAffinityFunction() { - // No-op. - } - - /** - * @param eOnAllNodes {@code True} if exception should be thrown on all nodes. - * @param gridName Exception should arise on node with certain name. 
- */ - public BrokenAffinityFunction(boolean eOnAllNodes, String gridName) { - this.eOnAllNodes = eOnAllNodes; - this.gridName = gridName; - } - - /** {@inheritDoc} */ - @Override public List> assignPartitions(AffinityFunctionContext affCtx) { - if (eOnAllNodes || ignite.name().equals(gridName)) - throw new IllegalStateException("Simulated exception [locNodeId=" - + ignite.cluster().localNode().id() + "]"); - else - return super.assignPartitions(affCtx); - } - } -} +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.CountDownLatch; +import javax.cache.CacheException; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.IgniteException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.affinity.AffinityFunctionContext; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.GridJobExecuteResponse; +import org.apache.ignite.internal.managers.communication.GridIoMessage; +import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteInClosure; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.resources.IgniteInstanceResource; +import org.apache.ignite.spi.IgniteSpiException; +import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** */ +@RunWith(JUnit4.class) +public class IgniteDynamicCacheStartCoordinatorFailoverTest extends GridCommonAbstractTest { + /** Latch which blocks DynamicCacheChangeFailureMessage until main thread has sent node fail signal. 
*/ + private static volatile CountDownLatch latch; + + /** */ + private static final String COORDINATOR_ATTRIBUTE = "coordinator"; + + /** Whether to mark the node with the coordinator attribute. */ + private Boolean appendCustomAttribute; + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + latch = new CountDownLatch(1); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + TcpCommunicationSpi commSpi = new CustomCommunicationSpi(); + commSpi.setLocalPort(GridTestUtils.getNextCommPort(getClass())); + + cfg.setCommunicationSpi(commSpi); + + cfg.setFailureDetectionTimeout(15_000); + + if (appendCustomAttribute) { + Map attrs = new HashMap<>(); + + attrs.put(COORDINATOR_ATTRIBUTE, Boolean.TRUE); + + cfg.setUserAttributes(attrs); + } + + return cfg; + } + + /** + * Tests coordinator failover during cache start failure. + * + * @throws Exception If test failed. + */ + @Test + public void testCoordinatorFailure() throws Exception { + // Start coordinator node.
+ appendCustomAttribute = true; + + Ignite g = startGrid(0); + + appendCustomAttribute = false; + + Ignite g1 = startGrid(1); + Ignite g2 = startGrid(2); + + awaitPartitionMapExchange(); + + CacheConfiguration cfg = new CacheConfiguration(); + + cfg.setName("test-coordinator-failover"); + + cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + + cfg.setAffinity(new BrokenAffinityFunction(false, getTestIgniteInstanceName(2))); + + GridTestUtils.runAsync(new Callable() { + @Override public Object call() throws Exception { + GridTestUtils.assertThrows(log, new Callable() { + @Override public Object call() throws Exception { + g1.getOrCreateCache(cfg); + return null; + } + }, CacheException.class, null); + + return null; + } + }, "cache-starter-thread"); + + latch.await(); + + stopGrid(0, true); + + awaitPartitionMapExchange(); + + // Correct the cache configuration. + cfg.setAffinity(new RendezvousAffinityFunction()); + + IgniteCache cache = g1.getOrCreateCache(cfg); + + checkCacheOperations(g1, cache); + } + + /** + * Test the basic cache operations. + * + * @param cache Cache. + * @throws Exception If test failed. + */ + protected void checkCacheOperations(Ignite ignite, IgniteCache cache) throws Exception { + int cnt = 1000; + + // Check base cache operations. + for (int i = 0; i < cnt; ++i) + cache.put(i, i); + + for (int i = 0; i < cnt; ++i) { + Integer v = (Integer) cache.get(i); + + assertNotNull(v); + assertEquals(i, v.intValue()); + } + + // Check Data Streamer capabilities. + try (IgniteDataStreamer streamer = ignite.dataStreamer(cache.getName())) { + for (int i = 0; i < 10_000; ++i) + streamer.addData(i, i); + } + } + + /** + * Communication SPI which could optionally block outgoing messages. + */ + private static class CustomCommunicationSpi extends TcpCommunicationSpi { + /** + * Send message optionally either blocking it or throwing an exception if it is of + * {@link GridJobExecuteResponse} type. + * + * @param node Destination node. 
+ * @param msg Message to be sent. + * @param ackClosure Ack closure. + * @throws org.apache.ignite.spi.IgniteSpiException If failed. + */ + @Override public void sendMessage(ClusterNode node, Message msg, IgniteInClosure ackClosure) + throws IgniteSpiException { + + if (msg instanceof GridIoMessage) { + GridIoMessage msg0 = (GridIoMessage)msg; + + if (msg0.message() instanceof GridDhtPartitionsSingleMessage) { + Boolean attr = (Boolean) node.attributes().get(COORDINATOR_ATTRIBUTE); + + GridDhtPartitionsSingleMessage singleMsg = (GridDhtPartitionsSingleMessage) msg0.message(); + + Exception err = singleMsg.getError(); + + if (Boolean.TRUE.equals(attr) && err != null) { + // skip message + latch.countDown(); + + return; + } + } + } + + super.sendMessage(node, msg, ackClosure); + } + } + + /** + * Affinity function that throws an exception when affinity nodes are calculated on the given node. + */ + public static class BrokenAffinityFunction extends RendezvousAffinityFunction { + /** */ + private static final long serialVersionUID = 0L; + + /** */ + @IgniteInstanceResource + private Ignite ignite; + + /** Exception should arise on all nodes. */ + private boolean eOnAllNodes = false; + + /** Exception should arise on node with certain name. */ + private String gridName; + + /** + * Default constructor. + */ + public BrokenAffinityFunction() { + // No-op. + } + + /** + * @param eOnAllNodes {@code True} if exception should be thrown on all nodes. + * @param gridName Exception should arise on node with certain name. 
+ */ + public BrokenAffinityFunction(boolean eOnAllNodes, String gridName) { + this.eOnAllNodes = eOnAllNodes; + this.gridName = gridName; + } + + /** {@inheritDoc} */ + @Override public List> assignPartitions(AffinityFunctionContext affCtx) { + if (eOnAllNodes || ignite.name().equals(gridName)) + throw new IllegalStateException("Simulated exception [locNodeId=" + + ignite.cluster().localNode().id() + "]"); + else + return super.assignPartitions(affCtx); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartFailWithPersistenceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartFailWithPersistenceTest.java index 24c9342be20ce..bc00d3416b2e4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartFailWithPersistenceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartFailWithPersistenceTest.java @@ -24,6 +24,8 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Assume; /** * Tests the recovery after a dynamic cache start failure, with enabled persistence. 
@@ -34,6 +36,7 @@ public class IgniteDynamicCacheStartFailWithPersistenceTest extends IgniteAbstra return 5 * 60 * 1000; } + /** {@inheritDoc} */ @Override protected boolean persistenceEnabled() { return true; } @@ -56,6 +59,8 @@ public class IgniteDynamicCacheStartFailWithPersistenceTest extends IgniteAbstra /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10421", MvccFeatureChecker.forcedMvcc()); + cleanPersistenceDir(); startGrids(gridCount()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartNoExchangeTimeoutTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartNoExchangeTimeoutTest.java index c3e3e884cbc5d..65e00010fc997 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartNoExchangeTimeoutTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartNoExchangeTimeoutTest.java @@ -36,24 +36,23 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class IgniteDynamicCacheStartNoExchangeTimeoutTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 4; @@ -61,8 +60,6 @@ public class IgniteDynamicCacheStartNoExchangeTimeoutTest extends GridCommonAbst @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi) cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setCommunicationSpi(new TestCommunicationSpi()); if (igniteInstanceName.equals(getTestIgniteInstanceName(NODES - 1))) @@ -90,6 +87,7 @@ public class IgniteDynamicCacheStartNoExchangeTimeoutTest extends GridCommonAbst /** * @throws Exception If failed. */ + @Test public void testMultinodeCacheStart() throws Exception { for (int i = 0; i < 10; i++) { log.info("Iteration: " + i); @@ -121,6 +119,7 @@ public void testMultinodeCacheStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOldestNotAffinityNode1() throws Exception { for (CacheConfiguration ccfg : cacheConfigurations()) oldestNotAffinityNode1(ccfg); @@ -149,6 +148,7 @@ private void oldestNotAffinityNode1(final CacheConfiguration ccfg) throws Except /** * @throws Exception If failed. */ + @Test public void testOldestNotAffinityNode2() throws Exception { for (CacheConfiguration ccfg : cacheConfigurations()) oldestNotAffinityNode2(ccfg); @@ -180,6 +180,7 @@ private void oldestNotAffinityNode2(final CacheConfiguration ccfg) throws Except /** * @throws Exception If failed. 
*/ + @Test public void testNotAffinityNode1() throws Exception { for (CacheConfiguration ccfg : cacheConfigurations()) notAffinityNode1(ccfg); @@ -208,6 +209,7 @@ private void notAffinityNode1(final CacheConfiguration ccfg) throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotAffinityNode2() throws Exception { for (CacheConfiguration ccfg : cacheConfigurations()) notAffinityNode2(ccfg); @@ -239,6 +241,7 @@ private void notAffinityNode2(final CacheConfiguration ccfg) throws Exception { /** * @throws Exception If failed. */ + @Test public void testOldestChanged1() throws Exception { IgniteEx ignite0 = grid(0); @@ -266,6 +269,7 @@ public void testOldestChanged1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOldestChanged2() throws Exception { IgniteEx ignite0 = grid(0); @@ -291,6 +295,7 @@ public void testOldestChanged2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOldestChanged3() throws Exception { IgniteEx ignite0 = grid(0); @@ -400,7 +405,7 @@ private List cacheConfigurations() { { CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - ccfg.setName("cache-4"); + ccfg.setName("cache-6"); ccfg.setAtomicityMode(TRANSACTIONAL); ccfg.setBackups(1); ccfg.setWriteSynchronizationMode(FULL_SYNC); @@ -408,6 +413,39 @@ private List cacheConfigurations() { res.add(ccfg); } + { + CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + ccfg.setName("cache-7"); + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + ccfg.setBackups(0); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + + res.add(ccfg); + } + + { + CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + ccfg.setName("cache-8"); + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + ccfg.setBackups(1); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + + res.add(ccfg); + } + + { + CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + 
ccfg.setName("cache-9"); + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + ccfg.setBackups(1); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + + res.add(ccfg); + } + return res; } @@ -476,4 +514,4 @@ private static class TestCommunicationSpi extends TcpCommunicationSpi { super.sendMessage(node, msg, ackClosure); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartSelfTest.java index 53de23f90a602..e0b6a89f8f17d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartSelfTest.java @@ -19,11 +19,13 @@ import java.util.ArrayList; import java.util.Collection; +import java.util.Collections; import java.util.List; import java.util.concurrent.Callable; import java.util.concurrent.ConcurrentLinkedDeque; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; @@ -47,22 +49,23 @@ import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for dynamic cache start. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteDynamicCacheStartSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String DYNAMIC_CACHE_NAME = "TestDynamicCache"; @@ -104,8 +107,6 @@ public int nodeCount() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (client) { cfg.setClientMode(true); @@ -140,6 +141,7 @@ public int nodeCount() { /** * @throws Exception If failed. */ + @Test public void testStartStopCacheMultithreadedSameNode() throws Exception { final IgniteEx kernal = grid(0); @@ -191,7 +193,7 @@ public void testStartStopCacheMultithreadedSameNode() throws Exception { GridTestUtils.runMultiThreaded(new Callable() { @Override public Object call() throws Exception { - futs.add(kernal.context().cache().dynamicDestroyCache(DYNAMIC_CACHE_NAME, false, true, false)); + futs.add(kernal.context().cache().dynamicDestroyCache(DYNAMIC_CACHE_NAME, false, true, false, null)); return null; } @@ -206,6 +208,7 @@ public void testStartStopCacheMultithreadedSameNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStartCacheMultithreadedDifferentNodes() throws Exception { final Collection> futs = new ConcurrentLinkedDeque<>(); @@ -259,7 +262,7 @@ public void testStartCacheMultithreadedDifferentNodes() throws Exception { @Override public Object call() throws Exception { IgniteEx kernal = grid(ThreadLocalRandom.current().nextInt(nodeCount())); - futs.add(kernal.context().cache().dynamicDestroyCache(DYNAMIC_CACHE_NAME, false, true, false)); + futs.add(kernal.context().cache().dynamicDestroyCache(DYNAMIC_CACHE_NAME, false, true, false, null)); return null; } @@ -274,6 +277,7 @@ public void testStartCacheMultithreadedDifferentNodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartStopCacheSimpleTransactional() throws Exception { checkStartStopCacheSimple(CacheAtomicityMode.TRANSACTIONAL); } @@ -281,6 +285,15 @@ public void testStartStopCacheSimpleTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testStartStopCacheSimpleTransactionalMvcc() throws Exception { + checkStartStopCacheSimple(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testStartStopCacheSimpleAtomic() throws Exception { checkStartStopCacheSimple(CacheAtomicityMode.ATOMIC); } @@ -288,6 +301,7 @@ public void testStartStopCacheSimpleAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartStopCachesSimpleTransactional() throws Exception { checkStartStopCachesSimple(CacheAtomicityMode.TRANSACTIONAL); } @@ -295,6 +309,15 @@ public void testStartStopCachesSimpleTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testStartStopCachesSimpleTransactionalMvcc() throws Exception { + checkStartStopCachesSimple(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. 
+ */ + @Test public void testStartStopCachesSimpleAtomic() throws Exception { checkStartStopCachesSimple(CacheAtomicityMode.ATOMIC); } @@ -432,6 +455,7 @@ private void checkStartStopCachesSimple(CacheAtomicityMode mode) throws Exceptio /** * @throws Exception If failed. */ + @Test public void testStartStopCacheAddNode() throws Exception { final IgniteEx kernal = grid(0); @@ -484,6 +508,7 @@ public void testStartStopCacheAddNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployFilter() throws Exception { try { testAttribute = false; @@ -550,6 +575,7 @@ public void testDeployFilter() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailWhenConfiguredCacheExists() throws Exception { GridTestUtils.assertThrowsInherited(log, new Callable() { @Override public Object call() throws Exception { @@ -571,6 +597,7 @@ public void testFailWhenConfiguredCacheExists() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailWhenOneOfConfiguredCacheExists() throws Exception { GridTestUtils.assertThrowsInherited(log, new Callable() { @Override public Object call() throws Exception { @@ -601,6 +628,7 @@ public void testFailWhenOneOfConfiguredCacheExists() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientCache() throws Exception { try { testAttribute = false; @@ -644,6 +672,7 @@ public void testClientCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartFromClientNode() throws Exception { try { testAttribute = false; @@ -686,6 +715,7 @@ public void testStartFromClientNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartNearCacheFromClientNode() throws Exception { try { testAttribute = false; @@ -731,6 +761,7 @@ public void testStartNearCacheFromClientNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testEvents() throws Exception { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -802,6 +833,7 @@ public void testEvents() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearNodesCache() throws Exception { try { testAttribute = false; @@ -845,6 +877,7 @@ public void testNearNodesCache() throws Exception { } /** {@inheritDoc} */ + @Test public void testGetOrCreate() throws Exception { try { final CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -906,6 +939,7 @@ public void testGetOrCreate() throws Exception { } /** {@inheritDoc} */ + @Test public void testGetOrCreateCollection() throws Exception { final int cacheCnt = 3; @@ -942,6 +976,7 @@ public void testGetOrCreateCollection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetOrCreateMultiNode() throws Exception { try { final AtomicInteger cnt = new AtomicInteger(); @@ -984,6 +1019,7 @@ public void testGetOrCreateMultiNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetOrCreateMultiNodeTemplate() throws Exception { final AtomicInteger idx = new AtomicInteger(); @@ -1003,6 +1039,7 @@ public void testGetOrCreateMultiNodeTemplate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetOrCreateNearOnlyMultiNode() throws Exception { checkGetOrCreateNear(true); } @@ -1010,6 +1047,7 @@ public void testGetOrCreateNearOnlyMultiNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetOrCreateNearMultiNode() throws Exception { checkGetOrCreateNear(false); } @@ -1131,6 +1169,7 @@ private void lightCheckDynamicCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerNodesLeftEvent() throws Exception { testAttribute = false; @@ -1185,6 +1224,7 @@ public void testServerNodesLeftEvent() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDaemonNode() throws Exception { daemon = true; @@ -1217,6 +1257,7 @@ public void testDaemonNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAwaitPartitionMapExchange() throws Exception { IgniteCache cache = grid(0).getOrCreateCache(new CacheConfiguration(DYNAMIC_CACHE_NAME)); @@ -1242,6 +1283,7 @@ public void testAwaitPartitionMapExchange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartStopWithClientJoin() throws Exception { Ignite ignite1 = ignite(1); @@ -1296,6 +1338,7 @@ public void testStartStopWithClientJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartStopSameCacheMultinode() throws Exception { final AtomicInteger idx = new AtomicInteger(); @@ -1327,4 +1370,62 @@ public void testStartStopSameCacheMultinode() throws Exception { fut.get(); } + + /** + * @throws Exception If failed. + */ + @Test + public void testCacheRestartIsAllowedOnlyToItsInitiator() throws Exception { + IgniteEx kernal = grid(ThreadLocalRandom.current().nextInt(nodeCount())); + + CacheConfiguration ccfg = new CacheConfiguration("testCacheRestartIsAllowedOnlyToItsInitiator"); + + kernal.createCache(ccfg); + + IgniteUuid restartId = IgniteUuid.randomUuid(); + + kernal.context().cache().dynamicDestroyCache(ccfg.getName(), false, true, true, restartId) + .get(getTestTimeout(), TimeUnit.MILLISECONDS); + + try { + kernal.createCache(ccfg); + + fail(); + } + catch (Exception e) { + assertTrue(X.hasCause(e, CacheExistsException.class)); + + System.out.println("User couldn't start new cache with the same name"); + } + + try { + kernal.context().cache().dynamicStartCache(ccfg, ccfg.getName(), null, true, false, true).get(); + + fail(); + } + catch (Exception e) { + assertTrue(X.hasCause(e, CacheExistsException.class)); + + System.out.println("We couldn't start new cache with private API"); + } + + StoredCacheData storedCacheData = new
StoredCacheData(ccfg); + + try { + kernal.context().cache().dynamicStartCachesByStoredConf(Collections.singleton(storedCacheData), true, true, false, IgniteUuid.randomUuid()).get(); + + fail(); + } + catch (Exception e) { + assertTrue(X.hasCause(e, CacheExistsException.class)); + + System.out.println("We couldn't start new cache with wrong restart id."); + } + + kernal.context().cache().dynamicStartCachesByStoredConf(Collections.singleton(storedCacheData), true, true, false, restartId).get(); + + System.out.println("We successfully restarted cache with initial restartId."); + + kernal.destroyCache(ccfg.getName()); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartStopConcurrentTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartStopConcurrentTest.java index 15d10fc5b60e6..62511392a4705 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartStopConcurrentTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheStartStopConcurrentTest.java @@ -19,36 +19,24 @@ import org.apache.ignite.Ignite; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgniteRunnable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteDynamicCacheStartStopConcurrentTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 4; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { super.beforeTestsStarted(); @@ -59,6 +47,7 @@ public class IgniteDynamicCacheStartStopConcurrentTest extends GridCommonAbstrac /** * @throws Exception If failed. */ + @Test public void testConcurrentStartStop() throws Exception { awaitPartitionMapExchange(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheWithConfigStartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheWithConfigStartSelfTest.java index ec6b82de7c6d0..c42e2a22109db 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheWithConfigStartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicCacheWithConfigStartSelfTest.java @@ -17,21 +17,20 @@ package org.apache.ignite.internal.processors.cache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteDynamicCacheWithConfigStartSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE_NAME = "partitioned"; @@ -42,12 +41,6 @@ public class IgniteDynamicCacheWithConfigStartSelfTest extends GridCommonAbstrac @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - if (client) cfg.setCacheConfiguration(cacheConfiguration()); @@ -63,6 +56,7 @@ private CacheConfiguration cacheConfiguration() { CacheConfiguration ccfg = new CacheConfiguration<>(CACHE_NAME); ccfg.setIndexedTypes(String.class, String.class); + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); return ccfg; } @@ -70,6 +64,7 @@ private CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. 
*/ + @Test public void testStartCacheOnClient() throws Exception { int srvCnt = 3; @@ -95,4 +90,4 @@ public void testStartCacheOnClient() throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicClientCacheStartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicClientCacheStartSelfTest.java index 0cb08561521b0..afd8e45cba51f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicClientCacheStartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicClientCacheStartSelfTest.java @@ -37,14 +37,17 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; - -import static org.apache.ignite.cache.CacheAtomicityMode.*; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheAtomicityMode.values; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_IGNITE_INSTANCE_NAME; @@ -52,10 +55,8 @@ /** 
* Tests that cache specified in configuration start on client nodes. */ +@RunWith(JUnit4.class) public class IgniteDynamicClientCacheStartSelfTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private CacheConfiguration ccfg; @@ -66,8 +67,6 @@ public class IgniteDynamicClientCacheStartSelfTest extends GridCommonAbstractTes @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); if (ccfg != null) @@ -88,6 +87,7 @@ public class IgniteDynamicClientCacheStartSelfTest extends GridCommonAbstractTes /** * @throws Exception If failed. */ + @Test public void testConfiguredCacheOnClientNode() throws Exception { ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -133,6 +133,7 @@ public void testConfiguredCacheOnClientNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearCacheStartError() throws Exception { ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -172,6 +173,7 @@ public void testNearCacheStartError() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplicatedCacheClient() throws Exception { ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -205,6 +207,7 @@ public void testReplicatedCacheClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplicatedWithNearCacheClient() throws Exception { ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -240,6 +243,7 @@ public void testReplicatedWithNearCacheClient() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCreateCloseClientCache1() throws Exception { Ignite ignite0 = startGrid(0); @@ -265,6 +269,7 @@ public void testCreateCloseClientCache1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateCloseClientCache2_1() throws Exception { createCloseClientCache2(false); } @@ -272,6 +277,7 @@ public void testCreateCloseClientCache2_1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateCloseClientCache2_2() throws Exception { createCloseClientCache2(true); } @@ -279,6 +285,7 @@ public void testCreateCloseClientCache2_2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartMultipleClientCaches() throws Exception { startMultipleClientCaches(null); } @@ -286,6 +293,7 @@ public void testStartMultipleClientCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartMultipleClientCachesForGroup() throws Exception { startMultipleClientCaches("testGrp"); } @@ -377,6 +385,7 @@ private void startCachesForGroup(Ignite srv, * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testStartNewAndClientCaches() throws Exception { final int SRVS = 4; @@ -394,8 +403,9 @@ public void testStartNewAndClientCaches() throws Exception { cfgs.addAll(cacheConfigurations(null, ATOMIC)); cfgs.addAll(cacheConfigurations(null, TRANSACTIONAL)); + cfgs.addAll(cacheConfigurations(null, TRANSACTIONAL_SNAPSHOT)); - assertEquals(6, cfgs.size()); + assertEquals(9, cfgs.size()); Collection caches = client.getOrCreateCaches(cfgs); @@ -551,6 +561,7 @@ private void checkNoCache(Ignite ignite, final String cacheName) throws Exceptio /** * @throws Exception If failed. 
*/ + @Test public void testStartClientCachesOnCoordinatorWithGroup() throws Exception { startGrids(3); @@ -598,4 +609,4 @@ public CachePredicate(List excludeNodes) { return !excludeNodes.contains(name); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteExchangeFutureHistoryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteExchangeFutureHistoryTest.java index a5930c992935e..7e5e17848e713 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteExchangeFutureHistoryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteExchangeFutureHistoryTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks that top value at {@link GridCachePartitionExchangeManager#exchangeFutures()} is the newest one. */ +@RunWith(JUnit4.class) public class IgniteExchangeFutureHistoryTest extends IgniteCacheAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -55,6 +59,7 @@ public class IgniteExchangeFutureHistoryTest extends IgniteCacheAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testExchangeFutures() throws Exception { GridCachePartitionExchangeManager mgr = ((IgniteKernal)grid(0)).internalCache(DEFAULT_CACHE_NAME).context().shared().exchange(); @@ -71,4 +76,4 @@ public void testExchangeFutures() throws Exception { assertEquals(futs.get(j), sortedFuts.get(j)); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteIncompleteCacheObjectSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteIncompleteCacheObjectSelfTest.java index 5b6d7f3b95c29..cdfb8a359613c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteIncompleteCacheObjectSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteIncompleteCacheObjectSelfTest.java @@ -26,16 +26,21 @@ import java.nio.ByteBuffer; import java.util.concurrent.ThreadLocalRandom; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Simple test for arbitrary CacheObject reading/writing. */ +@RunWith(JUnit4.class) public class IgniteIncompleteCacheObjectSelfTest extends GridCommonAbstractTest { /** * Test case when requested data cut on cache object header. * * @throws Exception If failed. 
*/ + @Test public void testIncompleteObject() throws Exception { final byte[] data = new byte[1024]; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteInternalCacheTypesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteInternalCacheTypesTest.java index 7f3807e63bd69..eb043dbf54de1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteInternalCacheTypesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteInternalCacheTypesTest.java @@ -23,11 +23,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.internal.CU; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.managers.communication.GridIoPolicy.SYSTEM_POOL; import static org.apache.ignite.internal.managers.communication.GridIoPolicy.UTILITY_CACHE_POOL; @@ -35,10 +35,8 @@ /** * Sanity test for cache types. 
*/ +@RunWith(JUnit4.class) public class IgniteInternalCacheTypesTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE1 = "cache1"; @@ -49,8 +47,6 @@ public class IgniteInternalCacheTypesTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (igniteInstanceName.equals(getTestIgniteInstanceName(0))) { CacheConfiguration ccfg = defaultCacheConfiguration(); @@ -65,6 +61,7 @@ public class IgniteInternalCacheTypesTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCacheTypes() throws Exception { Ignite ignite0 = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClassNameConflictTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClassNameConflictTest.java index 64c781741fd51..56fd4d3cb6f4f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClassNameConflictTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClassNameConflictTest.java @@ -31,15 +31,18 @@ import org.apache.ignite.configuration.BinaryConfiguration; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; -import org.apache.ignite.internal.util.future.GridFinishedFuture; +import org.apache.ignite.internal.util.future.IgniteFinishedFutureImpl; import org.apache.ignite.internal.util.typedef.internal.U; 
+import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.spi.discovery.DiscoverySpiCustomMessage; import org.apache.ignite.spi.discovery.DiscoverySpiListener; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -53,6 +56,7 @@ * and {@link org.apache.ignite.internal.processors.marshaller.MappingProposedMessage} is sent * with not-null conflictingClsName field. */ +@RunWith(JUnit4.class) public class IgniteMarshallerCacheClassNameConflictTest extends GridCommonAbstractTest { /** */ private volatile boolean bbClsRejected; @@ -104,6 +108,7 @@ public class IgniteMarshallerCacheClassNameConflictTest extends GridCommonAbstra /** * @throws Exception If failed. 
*/ + @Test public void testCachePutGetClassesWithNameConflict() throws Exception { Ignite srv1 = startGrid(0); Ignite srv2 = startGrid(1); @@ -193,7 +198,7 @@ private DiscoverySpiListenerWrapper(DiscoverySpiListener delegate) { } /** {@inheritDoc} */ - @Override public IgniteInternalFuture onDiscovery( + @Override public IgniteFuture onDiscovery( int type, long topVer, ClusterNode node, @@ -221,7 +226,7 @@ else if (conflClsName.contains(BB.class.getSimpleName())) if (delegate != null) return delegate.onDiscovery(type, topVer, node, topSnapshot, topHist, spiCustomMsg); - return new GridFinishedFuture(); + return new IgniteFinishedFutureImpl<>(); } /** {@inheritDoc} */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClientRequestsMappingOnMissTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClientRequestsMappingOnMissTest.java index 057b97005b081..da287a7020515 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClientRequestsMappingOnMissTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheClientRequestsMappingOnMissTest.java @@ -41,6 +41,9 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -49,6 +52,7 @@ /** * Tests for client requesting missing mappings from server nodes with and without server nodes failures. 
*/ +@RunWith(JUnit4.class) public class IgniteMarshallerCacheClientRequestsMappingOnMissTest extends GridCommonAbstractTest { /** * Need to point client node to a different working directory @@ -114,6 +118,7 @@ private void cleanupMarshallerFileStore() throws IOException { /** * @throws Exception If failed. */ + @Test public void testRequestedMappingIsStoredInFS() throws Exception { Ignite srv1 = startGrid(0); @@ -151,6 +156,7 @@ public void testRequestedMappingIsStoredInFS() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoNodesDieOnRequest() throws Exception { Ignite srv1 = startGrid(0); @@ -181,6 +187,7 @@ public void testNoNodesDieOnRequest() throws Exception { /** * */ + @Test public void testOneNodeDiesOnRequest() throws Exception { CountDownLatch nodeStopLatch = new CountDownLatch(1); @@ -216,6 +223,7 @@ public void testOneNodeDiesOnRequest() throws Exception { /** * */ + @Test public void testTwoNodesDieOnRequest() throws Exception { CountDownLatch nodeStopLatch = new CountDownLatch(2); @@ -252,6 +260,7 @@ public void testTwoNodesDieOnRequest() throws Exception { /** * */ + @Test public void testAllNodesDieOnRequest() throws Exception { CountDownLatch nodeStopLatch = new CountDownLatch(3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheConcurrentReadWriteTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheConcurrentReadWriteTest.java index 4e477f2c96b1b..a16e17d1e59ec 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheConcurrentReadWriteTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheConcurrentReadWriteTest.java @@ -27,11 +27,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -40,18 +40,14 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteMarshallerCacheConcurrentReadWriteTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setPeerClassLoadingEnabled(false); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -75,6 +71,7 @@ public class IgniteMarshallerCacheConcurrentReadWriteTest extends GridCommonAbst /** * @throws Exception If failed. 
*/ + @Test public void testConcurrentReadWrite() throws Exception { Ignite ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheFSRestoreTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheFSRestoreTest.java index 7aa61ebd8f035..15560d32e9b72 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheFSRestoreTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMarshallerCacheFSRestoreTest.java @@ -34,20 +34,24 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.PersistentStoreConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; import org.apache.ignite.internal.processors.marshaller.MappingProposedMessage; -import org.apache.ignite.internal.util.future.GridFinishedFuture; +import org.apache.ignite.internal.util.future.IgniteFinishedFutureImpl; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.spi.discovery.DiscoverySpiCustomMessage; import org.apache.ignite.spi.discovery.DiscoverySpiListener; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteMarshallerCacheFSRestoreTest extends GridCommonAbstractTest { /** */ private volatile boolean isDuplicateObserved = true; @@ -128,6 +132,7 @@ private void cleanUpWorkDir() throws Exception { * * This test must never hang on proposing of MarshallerMapping. 
*/ + @Test public void testFileMappingReadAndPropose() throws Exception { isPersistenceEnabled = false; @@ -185,6 +190,7 @@ private void prepareMarshallerFileStore() throws Exception { * @see IGNITE-6536 JIRA provides more information * about this case. */ + @Test public void testNodeStartFailsOnCorruptedStorage() throws Exception { isPersistenceEnabled = true; @@ -245,7 +251,7 @@ private DiscoverySpiListenerWrapper(DiscoverySpiListener delegate) { } /** {@inheritDoc} */ - @Override public IgniteInternalFuture onDiscovery( + @Override public IgniteFuture onDiscovery( int type, long topVer, ClusterNode node, @@ -271,7 +277,7 @@ private DiscoverySpiListenerWrapper(DiscoverySpiListener delegate) { if (delegate != null) return delegate.onDiscovery(type, topVer, node, topSnapshot, topHist, spiCustomMsg); - return new GridFinishedFuture(); + return new IgniteFinishedFutureImpl<>(); } /** {@inheritDoc} */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMvccTxMultiThreadedAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMvccTxMultiThreadedAbstractTest.java new file mode 100644 index 0000000000000..f7c434a4c3ba1 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMvccTxMultiThreadedAbstractTest.java @@ -0,0 +1,132 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import java.util.concurrent.Callable; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Tests for local transactions. + */ +@RunWith(JUnit4.class) +public abstract class IgniteMvccTxMultiThreadedAbstractTest extends IgniteTxAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9470"); + + super.beforeTestsStarted(); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9470"); + } + + /** + * @return Thread count. + */ + protected abstract int threadCount(); + + /** + * @throws IgniteCheckedException If test failed.
+ */ + @Test + public void testPessimisticRepeatableReadCommitMultithreaded() throws Exception { + checkCommitMultithreaded(PESSIMISTIC, REPEATABLE_READ); + + finalChecks(); + } + + /** + * @throws IgniteCheckedException If test failed. + */ + @Test + public void testPessimisticRepeatableReadRollbackMultithreaded() throws Exception { + checkRollbackMultithreaded(PESSIMISTIC, REPEATABLE_READ); + + finalChecks(); + } + + /** + * @param concurrency Concurrency. + * @param isolation Isolation. + * @throws Exception If check failed. + */ + protected void checkCommitMultithreaded(final TransactionConcurrency concurrency, + final TransactionIsolation isolation) throws Exception { + GridTestUtils.runMultiThreaded(new Callable() { + @Nullable @Override public Object call() throws Exception { + Thread t = Thread.currentThread(); + + t.setName(t.getName() + "-id-" + t.getId()); + + info("Starting commit thread: " + Thread.currentThread().getName()); + + try { + checkCommit(concurrency, isolation); + } + finally { + info("Finished commit thread: " + Thread.currentThread().getName()); + } + + return null; + } + }, threadCount(), concurrency + "-" + isolation); + } + + /** + * @param concurrency Concurrency. + * @param isolation Isolation. + * @throws Exception If check failed. 
+ */ + protected void checkRollbackMultithreaded(final TransactionConcurrency concurrency, + final TransactionIsolation isolation) throws Exception { + final ConcurrentMap map = new ConcurrentHashMap<>(); + GridTestUtils.runMultiThreaded(new Callable() { + @Nullable @Override public Object call() throws Exception { + Thread t = Thread.currentThread(); + + t.setName(t.getName() + "-id-" + t.getId()); + + info("Starting rollback thread: " + Thread.currentThread().getName()); + + try { + checkRollback(map, concurrency, isolation); + + return null; + } + finally { + info("Finished rollback thread: " + Thread.currentThread().getName()); + } + } + }, threadCount(), concurrency + "-" + isolation); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMvccTxSingleThreadedAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMvccTxSingleThreadedAbstractTest.java new file mode 100644 index 0000000000000..51e96e120ee75 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteMvccTxSingleThreadedAbstractTest.java @@ -0,0 +1,56 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.IgniteCheckedException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Tests for local transactions. + */ +@RunWith(JUnit4.class) +public abstract class IgniteMvccTxSingleThreadedAbstractTest extends IgniteTxAbstractTest { + /** + * @throws IgniteCheckedException If test failed. + */ + @Test + public void testPessimisticRepeatableReadCommit() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10261"); + + checkCommit(PESSIMISTIC, REPEATABLE_READ); + + finalChecks(); + } + + /** + * @throws IgniteCheckedException If test failed. + */ + @Test + public void testPessimisticRepeatableReadRollback() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10261"); + + checkRollback(PESSIMISTIC, REPEATABLE_READ); + + finalChecks(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteNearClientCacheCloseTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteNearClientCacheCloseTest.java index e7ab805c3de7c..692f01207a8c9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteNearClientCacheCloseTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteNearClientCacheCloseTest.java @@ -33,23 +33,23 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class IgniteNearClientCacheCloseTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -57,8 +57,6 @@ public class IgniteNearClientCacheCloseTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -74,6 +72,7 @@ public class IgniteNearClientCacheCloseTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNearCacheCloseAtomic1() throws Exception { nearCacheClose(1, false, ATOMIC); @@ -83,6 +82,7 @@ public void testNearCacheCloseAtomic1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearCacheCloseAtomic2() throws Exception { nearCacheClose(4, false, ATOMIC); @@ -92,6 +92,7 @@ public void testNearCacheCloseAtomic2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNearCacheCloseTx1() throws Exception { nearCacheClose(1, false, TRANSACTIONAL); @@ -101,12 +102,35 @@ public void testNearCacheCloseTx1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearCacheCloseTx2() throws Exception { nearCacheClose(4, false, TRANSACTIONAL); nearCacheClose(4, true, TRANSACTIONAL); } + /** + * @throws Exception If failed. + */ + @Test + public void testNearCacheCloseMvccTx1() throws Exception { + nearCacheClose(1, false, TRANSACTIONAL_SNAPSHOT); + + if (MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) + nearCacheClose(1, true, TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testNearCacheCloseMvccTx2() throws Exception { + nearCacheClose(4, false, TRANSACTIONAL_SNAPSHOT); + + if (MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) + nearCacheClose(4, true, TRANSACTIONAL_SNAPSHOT); + } + /** * @param srvs Number of server nodes. * @param srvNearCache {@code True} to enable near cache on server nodes. @@ -162,6 +186,7 @@ private void nearCacheClose(int srvs, boolean srvNearCache, CacheAtomicityMode a /** * @throws Exception If failed. 
*/ + @Test public void testConcurrentUpdateAndNearCacheClose() throws Exception { final int SRVS = 4; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitInvokeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitInvokeTest.java index bccebaa0ab505..35d09b8dc939e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitInvokeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitInvokeTest.java @@ -33,11 +33,11 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC; @@ -46,10 +46,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteOnePhaseCommitInvokeTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -60,8 +58,6 @@ public class IgniteOnePhaseCommitInvokeTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - TestRecordingCommunicationSpi commSpi = new 
TestRecordingCommunicationSpi(); cfg.setCommunicationSpi(commSpi); @@ -91,6 +87,7 @@ public class IgniteOnePhaseCommitInvokeTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testOnePhaseInvoke() throws Exception { boolean flags[] = {true, false}; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearReadersTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearReadersTest.java index f796ceace8a3e..8f5e35dbc5824 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearReadersTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearReadersTest.java @@ -27,14 +27,14 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionRollbackException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -43,10 +43,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteOnePhaseCommitNearReadersTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** 
*/ private boolean client; @@ -57,8 +55,6 @@ public class IgniteOnePhaseCommitNearReadersTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); if (testSpi) { @@ -80,6 +76,7 @@ public class IgniteOnePhaseCommitNearReadersTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testPutReadersUpdate1() throws Exception { putReadersUpdate(1); } @@ -87,6 +84,7 @@ public void testPutReadersUpdate1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutReadersUpdate2() throws Exception { putReadersUpdate(0); } @@ -145,6 +143,7 @@ private void putReadersUpdate(int backups) throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutReaderUpdatePrimaryFails1() throws Exception { putReaderUpdatePrimaryFails(1); } @@ -152,6 +151,7 @@ public void testPutReaderUpdatePrimaryFails1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutReaderUpdatePrimaryFails2() throws Exception { putReaderUpdatePrimaryFails(0); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearSelfTest.java index 55323bb64501d..542ca53c7bddd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOnePhaseCommitNearSelfTest.java @@ -40,6 +40,9 @@ import java.util.*; import java.util.concurrent.*; import java.util.concurrent.atomic.*; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.*; import static org.apache.ignite.transactions.TransactionIsolation.*; @@ -47,6 +50,7 @@ /** * Checks one-phase commit scenarios. */ +@RunWith(JUnit4.class) public class IgniteOnePhaseCommitNearSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 4; @@ -88,6 +92,7 @@ protected CacheConfiguration cacheConfiguration(String igniteInstanceName) { /** * @throws Exception If failed. 
*/ + @Test public void testOnePhaseCommitFromNearNode() throws Exception { backups = 1; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOutOfMemoryPropagationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOutOfMemoryPropagationTest.java index 34fb74951d213..bf0a286b0b04f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOutOfMemoryPropagationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteOutOfMemoryPropagationTest.java @@ -38,10 +38,14 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteOutOfMemoryPropagationTest extends GridCommonAbstractTest { /** */ public static final int NODES = 3; @@ -67,11 +71,13 @@ public class IgniteOutOfMemoryPropagationTest extends GridCommonAbstractTest { } /** */ + @Test public void testPutOOMPropagation() throws Exception { testOOMPropagation(false); } /** */ + @Test public void testStreamerOOMPropagation() throws Exception { testOOMPropagation(true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePdsDataRegionMetricsTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePdsDataRegionMetricsTxTest.java new file mode 100644 index 0000000000000..09b1213737fe5 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePdsDataRegionMetricsTxTest.java @@ -0,0 +1,55 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsDataRegionMetricsTest; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class IgnitePdsDataRegionMetricsTxTest extends IgnitePdsDataRegionMetricsTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName) + .setMvccVacuumFrequency(Long.MAX_VALUE); + } + + /** {@inheritDoc} */ + @Override protected CacheConfiguration cacheConfiguration() { + return super.cacheConfiguration().setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10662") + @Test + @Override public void testMemoryUsageMultipleNodes() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10583"); + + super.testMemoryUsageMultipleNodes(); + } +} diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllLargeBatchSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllLargeBatchSelfTest.java index 56a438107c40f..45e52e90119ee 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllLargeBatchSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllLargeBatchSelfTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -42,6 +46,7 @@ /** * Tests putAll method with large number of keys. */ +@RunWith(JUnit4.class) public class IgnitePutAllLargeBatchSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 4; @@ -52,6 +57,13 @@ public class IgnitePutAllLargeBatchSelfTest extends GridCommonAbstractTest { /** Backups. 
*/ private int backups = 1; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -79,6 +91,7 @@ public CacheConfiguration cacheConfiguration(String igniteInstanceName) { /** * @throws Exception If failed. */ + @Test public void testPutAllPessimisticOneBackupPartitioned() throws Exception { backups = 1; @@ -88,6 +101,7 @@ public void testPutAllPessimisticOneBackupPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllPessimisticOneBackupNear() throws Exception { backups = 1; @@ -97,6 +111,7 @@ public void testPutAllPessimisticOneBackupNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllOptimisticOneBackupPartitioned() throws Exception { backups = 1; @@ -106,6 +121,7 @@ public void testPutAllOptimisticOneBackupPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllOptimisticOneBackupNear() throws Exception { backups = 1; @@ -115,6 +131,7 @@ public void testPutAllOptimisticOneBackupNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllPessimisticTwoBackupsPartitioned() throws Exception { backups = 2; @@ -124,6 +141,7 @@ public void testPutAllPessimisticTwoBackupsPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllPessimisticTwoBackupsNear() throws Exception { backups = 2; @@ -133,6 +151,7 @@ public void testPutAllPessimisticTwoBackupsNear() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutAllOptimisticTwoBackupsPartitioned() throws Exception { backups = 2; @@ -142,6 +161,7 @@ public void testPutAllOptimisticTwoBackupsPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllOptimisticTwoBackupsNear() throws Exception { backups = 2; @@ -236,6 +256,7 @@ private void checkPutAll(TransactionConcurrency concurrency, boolean nearEnabled /** * @throws Exception If failed. */ + @Test public void testPreviousValuePartitionedOneBackup() throws Exception { backups = 1; nearEnabled = false; @@ -246,6 +267,7 @@ public void testPreviousValuePartitionedOneBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPreviousValuePartitionedTwoBackups() throws Exception { backups = 2; nearEnabled = false; @@ -256,6 +278,7 @@ public void testPreviousValuePartitionedTwoBackups() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPreviousValueNearOneBackup() throws Exception { backups = 1; nearEnabled = true; @@ -266,6 +289,7 @@ public void testPreviousValueNearOneBackup() throws Exception { /** * @throws Exception If failed. 
     */
+    @Test
     public void testPreviousValueNearTwoBackups() throws Exception {
         backups = 2;
         nearEnabled = true;
@@ -304,4 +328,4 @@ private void checkPreviousValue() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllUpdateNonPreloadedPartitionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllUpdateNonPreloadedPartitionSelfTest.java
index 2503e219455e7..99fc4c93cfa6a 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllUpdateNonPreloadedPartitionSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgnitePutAllUpdateNonPreloadedPartitionSelfTest.java
@@ -25,8 +25,12 @@
 import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
 import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
@@ -34,6 +38,7 @@
 /**
  * Tests .
  */
+@RunWith(JUnit4.class)
 public class IgnitePutAllUpdateNonPreloadedPartitionSelfTest extends GridCommonAbstractTest {
     /** Grid count. */
     private static final int GRID_CNT = 4;
@@ -41,6 +46,13 @@ public class IgnitePutAllUpdateNonPreloadedPartitionSelfTest extends GridCommonA
     /** Backups. */
     private int backups = 1;
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
+
+        super.beforeTestsStarted();
+    }
+
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
@@ -70,6 +82,7 @@ public CacheConfiguration cacheConfiguration(String igniteInstanceName) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimistic() throws Exception {
         backups = 2;
@@ -129,4 +142,4 @@ public void testPessimistic() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStartCacheInTransactionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStartCacheInTransactionSelfTest.java
index b037a7b8ef6a8..00be0508ab7cc 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStartCacheInTransactionSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStartCacheInTransactionSelfTest.java
@@ -26,12 +26,13 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
@@ -40,10 +41,8 @@
 /**
  * Check starting cache in transaction.
  */
+@RunWith(JUnit4.class)
 public class IgniteStartCacheInTransactionSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final String EXPECTED_MSG = "Cannot start/stop cache within lock or transaction.";
@@ -51,8 +50,6 @@ public class IgniteStartCacheInTransactionSelfTes
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
 
         ccfg.setAtomicityMode(atomicityMode());
@@ -90,6 +87,7 @@ public CacheConfiguration cacheConfiguration(String cacheName) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStartCache() throws Exception {
         final Ignite ignite = grid(0);
@@ -116,6 +114,7 @@ public void testStartCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStartConfigurationCache() throws Exception {
         final Ignite ignite = grid(0);
@@ -142,6 +141,7 @@ public void testStartConfigurationCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStartConfigurationCacheWithNear() throws Exception {
         final Ignite ignite = grid(0);
@@ -168,6 +168,7 @@ public void testStartConfigurationCacheWithNear() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetOrCreateCache() throws Exception {
         final Ignite ignite = grid(0);
@@ -194,6 +195,7 @@ public void testGetOrCreateCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetOrCreateCacheConfiguration() throws Exception {
         final Ignite ignite = grid(0);
@@ -220,6 +222,7 @@ public void testGetOrCreateCacheConfiguration() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStopCache() throws Exception {
         final Ignite ignite = grid(0);
@@ -246,10 +249,13 @@ public void testStopCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLockCache() throws Exception {
         if (atomicityMode() != TRANSACTIONAL)
             return;
 
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
+
         final Ignite ignite = grid(0);
 
         final String key = "key";
@@ -268,4 +274,4 @@ public void testLockCache() throws Exception {
 
         lock.unlock();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStaticCacheStartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStaticCacheStartSelfTest.java
index 0c64a79a81c2a..75a924e89fa56 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStaticCacheStartSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteStaticCacheStartSelfTest.java
@@ -25,10 +25,14 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests cache deploy on topology from static configuration.
  */
+@RunWith(JUnit4.class)
 public class IgniteStaticCacheStartSelfTest extends GridCommonAbstractTest {
     /** */
     private static final String CACHE_NAME = "TestCache";
@@ -58,6 +62,7 @@ public class IgniteStaticCacheStartSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDeployCacheOnNodeStart() throws Exception {
         startGrids(3);
@@ -99,4 +104,4 @@ public void testDeployCacheOnNodeStart() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteSystemCacheOnClientTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteSystemCacheOnClientTest.java
index cc83c8fbb8c86..82eb70626922c 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteSystemCacheOnClientTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteSystemCacheOnClientTest.java
@@ -23,25 +23,20 @@
 import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.util.lang.GridAbsPredicate;
 import org.apache.ignite.internal.util.typedef.internal.CU;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
-import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteSystemCacheOnClientTest extends GridCommonAbstractTest {
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
-
         if (igniteInstanceName.equals(getTestIgniteInstanceName(1)))
             cfg.setClientMode(true);
@@ -51,6 +46,7 @@ public class IgniteSystemCacheOnClientTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSystemCacheOnClientNode() throws Exception {
         startGrids(2);
@@ -65,4 +61,4 @@ public void testSystemCacheOnClientNode() throws Exception {
         assertEquals(1, affNodes.size());
         assertTrue(affNodes.contains(ignite(0).cluster().localNode()));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractCacheTest.java
index e2a4a08e7e8d4..5688afa892704 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractCacheTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractCacheTest.java
@@ -32,10 +32,14 @@
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Topology validator test.
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteTopologyValidatorAbstractCacheTest extends IgniteCacheAbstractTest implements Serializable {
     /** key-value used at test. */
     private static String KEY_VAL = "1";
@@ -241,6 +245,7 @@ void assertEmpty(String cacheName) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTopologyValidator() throws Exception {
         putValid(DEFAULT_CACHE_NAME);
         remove(DEFAULT_CACHE_NAME);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheGroupsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheGroupsTest.java
index 07875350886e8..a56ccea167d1d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheGroupsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheGroupsTest.java
@@ -19,7 +19,11 @@
 import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.configuration.NearCacheConfiguration;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
@@ -29,6 +33,7 @@
 /**
  * Topology validator test
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteTopologyValidatorAbstractTxCacheGroupsTest
     extends IgniteTopologyValidatorCacheGroupsAbstractTest {
     /** {@inheritDoc} */
@@ -42,11 +47,22 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheGroupsTest
     }
 
     /** {@inheritDoc} */
+    @Test
     @Override public void testTopologyValidator() throws Exception {
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-            putValid(CACHE_NAME_1);
-            putValid(CACHE_NAME_3);
-            commitFailed(tx);
+        try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            putInvalid(CACHE_NAME_1);
+        }
+
+        try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            putInvalid(CACHE_NAME_3);
+        }
+
+        if (!MvccFeatureChecker.forcedMvcc()) {
+            try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                putValid(CACHE_NAME_1);
+                putValid(CACHE_NAME_3);
+                commitFailed(tx);
+            }
         }
 
         assertEmpty(CACHE_NAME_1); // Rolled back.
@@ -69,10 +85,19 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheGroupsTest
         startGrid(1);
 
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-            putValid(CACHE_NAME_1);
-            putValid(CACHE_NAME_3);
-            tx.commit();
+        if (!MvccFeatureChecker.forcedMvcc()) {
+            try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                putValid(CACHE_NAME_1);
+                putValid(CACHE_NAME_3);
+                tx.commit();
+            }
+        }
+        else {
+            try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+                putValid(CACHE_NAME_1);
+                putValid(CACHE_NAME_3);
+                tx.commit();
+            }
         }
 
         remove(CACHE_NAME_1);
@@ -96,19 +121,30 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheGroupsTest
         assertEmpty(CACHE_NAME_3); // Rolled back.
 
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-            putValid(CACHE_NAME_1);
-            putValid(CACHE_NAME_3);
-            commitFailed(tx);
+        if (!MvccFeatureChecker.forcedMvcc()) {
+            try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                putValid(CACHE_NAME_1);
+                putValid(CACHE_NAME_3);
+                commitFailed(tx);
+            }
         }
 
         assertEmpty(CACHE_NAME_1); // Rolled back.
         assertEmpty(CACHE_NAME_3); // Rolled back.
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-            putValid(DEFAULT_CACHE_NAME);
-            putValid(CACHE_NAME_3);
-            tx.commit();
+        if (!MvccFeatureChecker.forcedMvcc()) {
+            try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                putValid(DEFAULT_CACHE_NAME);
+                putValid(CACHE_NAME_3);
+                tx.commit();
+            }
+        }
+        else {
+            try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+                putValid(DEFAULT_CACHE_NAME);
+                putValid(CACHE_NAME_3);
+                tx.commit();
+            }
         }
 
         remove(DEFAULT_CACHE_NAME);
@@ -123,4 +159,4 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheGroupsTest
         remove(DEFAULT_CACHE_NAME);
         remove(CACHE_NAME_3);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheTest.java
index 781d88de40134..ecb62ae824d11 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorAbstractTxCacheTest.java
@@ -19,7 +19,11 @@
 import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.configuration.NearCacheConfiguration;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
@@ -29,6 +33,7 @@
 /**
  * Topology validator test
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteTopologyValidatorAbstractTxCacheTest extends IgniteTopologyValidatorAbstractCacheTest {
     /** {@inheritDoc} */
     @Override protected CacheAtomicityMode atomicityMode() {
@@ -41,21 +46,24 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheTest extends IgniteT
     }
 
     /** {@inheritDoc} */
+    @Test
     @Override public void testTopologyValidator() throws Exception {
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-            putValid(CACHE_NAME_1);
-            commitFailed(tx);
-        }
-
         try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
             putInvalid(CACHE_NAME_1);
         }
 
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-            putValid(CACHE_NAME_1);
-            putValid(DEFAULT_CACHE_NAME);
-            putValid(CACHE_NAME_2);
-            commitFailed(tx);
+        if (!MvccFeatureChecker.forcedMvcc()) {
+            try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                putValid(CACHE_NAME_1);
+                commitFailed(tx);
+            }
+
+            try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                putValid(CACHE_NAME_1);
+                putValid(DEFAULT_CACHE_NAME);
+                putValid(CACHE_NAME_2);
+                commitFailed(tx);
+            }
         }
 
         assertEmpty(DEFAULT_CACHE_NAME); // rolled back
@@ -72,7 +80,7 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheTest extends IgniteT
         startGrid(1);
 
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+        try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
             putValid(CACHE_NAME_1);
 
             tx.commit();
         }
@@ -96,16 +104,19 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheTest extends IgniteT
         assertEmpty(DEFAULT_CACHE_NAME); // rolled back
         assertEmpty(CACHE_NAME_1); // rolled back
 
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
-            putValid(CACHE_NAME_1);
-            commitFailed(tx);
+        if (!MvccFeatureChecker.forcedMvcc()) {
+            try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                putValid(CACHE_NAME_1);
+                commitFailed(tx);
+            }
         }
+
         try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
             putInvalid(CACHE_NAME_1);
         }
 
-        try (Transaction tx = grid(0).transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+        try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
             putValid(DEFAULT_CACHE_NAME);
             putValid(CACHE_NAME_2);
 
             tx.commit();
@@ -123,4 +134,4 @@ public abstract class IgniteTopologyValidatorAbstractTxCacheTest extends IgniteT
         remove(DEFAULT_CACHE_NAME);
         remove(CACHE_NAME_2);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorCacheGroupsAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorCacheGroupsAbstractTest.java
index 8613225d5fc42..b42091073b055 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorCacheGroupsAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorCacheGroupsAbstractTest.java
@@ -23,10 +23,14 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.TopologyValidator;
 import org.apache.ignite.internal.util.typedef.F;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteTopologyValidatorCacheGroupsAbstractTest extends IgniteTopologyValidatorAbstractCacheTest {
     /** group name 1. */
     protected static final String GROUP_1 = "group1";
@@ -79,6 +83,7 @@ public abstract class IgniteTopologyValidatorCacheGroupsAbstractTest extends Ign
     /**
     * @throws Exception If failed.
      */
+    @Test
     @Override public void testTopologyValidator() throws Exception {
         putValid(DEFAULT_CACHE_NAME);
         remove(DEFAULT_CACHE_NAME);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorGridSplitCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorGridSplitCacheTest.java
index c315ba50f4707..4fa3d8f8e0758 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorGridSplitCacheTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorGridSplitCacheTest.java
@@ -42,6 +42,10 @@
 import org.apache.ignite.resources.LoggerResource;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC;
@@ -52,6 +56,7 @@
  * #DC_NODE_ATTR}. If only nodes from single DC are left in topology, grid is moved into inoperative state until special
  * activator node'll enter a topology, enabling grid operations.
  */
+@RunWith(JUnit4.class)
 public class IgniteTopologyValidatorGridSplitCacheTest extends IgniteCacheTopologySplitAbstractTest {
     /** */
     private static final String DC_NODE_ATTR = "dc";
@@ -197,6 +202,9 @@ private String testCacheName(int idx) {
     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
+        if (MvccFeatureChecker.forcedMvcc())
+            fail("https://issues.apache.org/jira/browse/IGNITE-7952");
+
         super.beforeTest();
 
         startGridsMultiThreaded(GRID_CNT);
@@ -220,6 +228,7 @@ protected void stopGrids(int... grids) {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testTopologyValidator() throws Exception {
         testTopologyValidator0(false);
     }
@@ -229,6 +238,7 @@ public void testTopologyValidator() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testTopologyValidatorWithCacheGroup() throws Exception {
         testTopologyValidator0(true);
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheGroupsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheGroupsTest.java
index acfad80fc5a9d..d9cf301885882 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheGroupsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheGroupsTest.java
@@ -18,6 +18,7 @@
 package org.apache.ignite.internal.processors.cache;
 
 import org.apache.ignite.configuration.NearCacheConfiguration;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 
 /**
  * Topology validator test.
@@ -28,4 +29,11 @@ public class IgniteTopologyValidatorNearPartitionedTxCacheGroupsTest extends
     @Override protected NearCacheConfiguration nearConfiguration() {
         return new NearCacheConfiguration();
     }
+
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+
+        super.beforeTestsStarted();
+    }
 }
\ No newline at end of file
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheTest.java
index 800c2e654d333..c1a8fa81898eb 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTopologyValidatorNearPartitionedTxCacheTest.java
@@ -18,11 +18,19 @@
 package org.apache.ignite.internal.processors.cache;
 
 import org.apache.ignite.configuration.NearCacheConfiguration;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 
 /**
  * Topology validator test
 */
 public class IgniteTopologyValidatorNearPartitionedTxCacheTest extends IgniteTopologyValidatorPartitionedTxCacheTest {
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+
+        super.beforeTestsStarted();
+    }
+
     /** {@inheritDoc} */
     @Override protected NearCacheConfiguration nearConfiguration() {
         return new NearCacheConfiguration();
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxAbstractTest.java
index 1830db03c367d..93054ab0d026d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxAbstractTest.java
@@ -29,13 +29,9 @@
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.cache.affinity.AffinityFunction;
 import org.apache.ignite.configuration.CacheConfiguration;
-import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
@@ -56,9 +52,6 @@ abstract class IgniteTxAbstractTest extends GridCommonAbstractTest {
     /** Execution count. */
     private static final AtomicInteger cntr = new AtomicInteger();
 
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /**
      * Start grid by default.
      */
@@ -66,19 +59,6 @@ protected IgniteTxAbstractTest() {
         super(false /*start grid. */);
     }
 
-    /** {@inheritDoc} */
-    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
-        IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
-
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
-        return c;
-    }
-
     /**
      * @return Grid count.
      */
@@ -115,12 +95,16 @@ private void debug(String msg) {
         info(msg);
     }
 
-    /**
-     * @throws Exception If failed.
-     */
+    /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
-        for (int i = 0; i < gridCount(); i++)
-            startGrid(i);
+        startGridsMultiThreaded(gridCount());
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void afterTestsStopped() throws Exception {
+        stopAllGrids();
+
+        super.afterTestsStopped();
     }
 
     /**
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConcurrentGetAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConcurrentGetAbstractTest.java
index 5fb076633da19..a0451a262646d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConcurrentGetAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConcurrentGetAbstractTest.java
@@ -20,15 +20,14 @@
 import java.util.UUID;
 import java.util.concurrent.Callable;
 import org.apache.ignite.Ignite;
-import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
 import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
@@ -36,13 +35,11 @@
 /**
  * Checks multithreaded put/get cache operations on one node.
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteTxConcurrentGetAbstractTest extends GridCommonAbstractTest {
     /** Debug flag. */
     private static final boolean DEBUG = false;
 
-    /** */
-    protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int THREAD_NUM = 20;
@@ -54,19 +51,6 @@ protected IgniteTxConcurrentGetAbstractTest() {
         super(true /** Start grid. */);
     }
 
-    /** {@inheritDoc} */
-    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
-        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
-
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
-        return cfg;
-    }
-
     /**
      * @param g Grid.
      * @return Near cache.
@@ -88,6 +72,7 @@ GridDhtCacheAdapter dht(Ignite g) {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGet() throws Exception {
         // Random key.
         final String key = UUID.randomUUID().toString();
@@ -138,4 +123,4 @@ private String txGet(Ignite ignite, String key) throws Exception {
             return val;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConfigCacheSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConfigCacheSelfTest.java
index 680381a21d799..accf079fb519b 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConfigCacheSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxConfigCacheSelfTest.java
@@ -41,24 +41,24 @@
 import org.apache.ignite.plugin.extensions.communication.Message;
 import org.apache.ignite.spi.IgniteSpiException;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionTimeoutException;
+import org.junit.Assume;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
-import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;
+import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
 
 /**
  * Test checks that grid transaction configuration doesn't influence system caches.
  */
+@RunWith(JUnit4.class)
 public class IgniteTxConfigCacheSelfTest extends GridCommonAbstractTest {
-    /** Ip finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Test cache name. */
     private static final String CACHE_NAME = "cache_name";
@@ -69,8 +69,6 @@ public class IgniteTxConfigCacheSelfTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         TcpCommunicationSpi commSpi = new TestCommunicationSpi();
 
         cfg.setCommunicationSpi(commSpi);
@@ -108,7 +106,10 @@ public CacheAtomicityMode atomicityMode() {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testUserTxTimeout() throws Exception {
+        Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-7952", MvccFeatureChecker.forcedMvcc());
+
         final Ignite ignite = grid(0);
 
         final IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME);
@@ -122,6 +123,7 @@ public void testUserTxTimeout() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSystemCacheTx() throws Exception {
         final Ignite ignite = grid(0);
@@ -214,7 +216,7 @@ protected void checkExplicitTxTimeout(final IgniteCache cache, f
      * @throws Exception If failed.
      */
     protected void checkStartTxSuccess(final IgniteInternalCache cache) throws Exception {
-        try (final GridNearTxLocal tx = CU.txStartInternal(cache.context(), cache, PESSIMISTIC, READ_COMMITTED)) {
+        try (final GridNearTxLocal tx = CU.txStartInternal(cache.context(), cache, PESSIMISTIC, REPEATABLE_READ)) {
             assert tx != null;
 
             sleepForTxFailure();
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxExceptionAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxExceptionAbstractSelfTest.java
index ac294b05067fb..48c18b3e20330 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxExceptionAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxExceptionAbstractSelfTest.java
@@ -41,11 +41,16 @@
 import org.apache.ignite.spi.indexing.IndexingQueryFilter;
 import org.apache.ignite.spi.indexing.IndexingSpi;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionHeuristicException;
 import org.apache.ignite.transactions.TransactionIsolation;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Assume;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
@@ -53,6 +58,7 @@
 /**
  * Tests that transaction is invalidated in case of {@link IgniteTxHeuristicCheckedException}.
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteTxExceptionAbstractSelfTest extends GridCacheAbstractSelfTest {
     /** */
     private static final int PRIMARY = 0;
@@ -77,8 +83,6 @@ public abstract class IgniteTxExceptionAbstractSelfTest extends GridCacheAbstrac
         cfg.setIndexingSpi(new TestIndexingSpi());
 
-        cfg.getTransactionConfiguration().setTxSerializableEnabled(true);
-
         return cfg;
     }
@@ -89,7 +93,7 @@ public abstract class IgniteTxExceptionAbstractSelfTest extends GridCacheAbstrac
         ccfg.setCacheStoreFactory(null);
         ccfg.setReadThrough(false);
         ccfg.setWriteThrough(false);
-        ccfg.setLoadPreviousValue(true);
+        ccfg.setLoadPreviousValue(false);
 
         ccfg.setIndexedTypes(Integer.class, Integer.class);
@@ -98,6 +102,8 @@ public abstract class IgniteTxExceptionAbstractSelfTest extends GridCacheAbstrac
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
+        Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10377", MvccFeatureChecker.forcedMvcc());
+
         super.beforeTestsStarted();
 
         lastKey = 0;
@@ -131,6 +137,7 @@ public abstract class IgniteTxExceptionAbstractSelfTest extends GridCacheAbstrac
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutNear() throws Exception {
         checkPut(true, keyForNode(grid(0).localNode(), NOT_PRIMARY_AND_BACKUP));
@@ -140,6 +147,7 @@ public void testPutNear() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutPrimary() throws Exception {
         checkPut(true, keyForNode(grid(0).localNode(), PRIMARY));
@@ -149,6 +157,7 @@ public void testPutPrimary() throws Exception {
     /**
      * @throws Exception If failed.
*/ + @Test public void testPutBackup() throws Exception { checkPut(true, keyForNode(grid(0).localNode(), BACKUP)); @@ -158,6 +167,7 @@ public void testPutBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAll() throws Exception { checkPutAll(true, keyForNode(grid(0).localNode(), PRIMARY), keyForNode(grid(0).localNode(), PRIMARY), @@ -181,6 +191,7 @@ public void testPutAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveNear() throws Exception { checkRemove(false, keyForNode(grid(0).localNode(), NOT_PRIMARY_AND_BACKUP)); @@ -190,7 +201,10 @@ public void testRemoveNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemovePrimary() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-9470", MvccFeatureChecker.forcedMvcc()); + checkRemove(false, keyForNode(grid(0).localNode(), PRIMARY)); checkRemove(true, keyForNode(grid(0).localNode(), PRIMARY)); @@ -199,6 +213,7 @@ public void testRemovePrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveBackup() throws Exception { checkRemove(false, keyForNode(grid(0).localNode(), BACKUP)); @@ -208,6 +223,7 @@ public void testRemoveBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformNear() throws Exception { checkTransform(false, keyForNode(grid(0).localNode(), NOT_PRIMARY_AND_BACKUP)); @@ -217,6 +233,7 @@ public void testTransformNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformPrimary() throws Exception { checkTransform(false, keyForNode(grid(0).localNode(), PRIMARY)); @@ -226,6 +243,7 @@ public void testTransformPrimary() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransformBackup() throws Exception { checkTransform(false, keyForNode(grid(0).localNode(), BACKUP)); @@ -235,6 +253,7 @@ public void testTransformBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutNearTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -248,6 +267,7 @@ public void testPutNearTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutPrimaryTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -261,6 +281,7 @@ public void testPutPrimaryTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutBackupTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -274,6 +295,7 @@ public void testPutBackupTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutMultipleKeysTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -311,6 +333,10 @@ public void testPutMultipleKeysTx() throws Exception { */ private void checkPutTx(boolean putBefore, TransactionConcurrency concurrency, TransactionIsolation isolation, final Integer... keys) throws Exception { + if (MvccFeatureChecker.forcedMvcc() && + !MvccFeatureChecker.isSupported(concurrency, isolation)) + return; + assertTrue(keys.length > 0); info("Test transaction [concurrency=" + concurrency + ", isolation=" + isolation + ']'); @@ -477,6 +503,8 @@ private void checkPut(boolean putBefore, final Integer key) throws Exception { * @throws Exception If failed. 
*/ private void checkTransform(boolean putBefore, final Integer key) throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-9470", MvccFeatureChecker.forcedMvcc()); + if (putBefore) { TestIndexingSpi.forceFail(false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiNodeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiNodeAbstractTest.java index 3df934ff44fdd..0edc1e1779600 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiNodeAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiNodeAbstractTest.java @@ -34,7 +34,6 @@ import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter; @@ -44,12 +43,12 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static 
org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; @@ -58,7 +57,7 @@ /** * Checks basic multi-node transactional operations. */ -@SuppressWarnings({"PointlessBooleanExpression", "ConstantConditions", "PointlessArithmeticExpression"}) +@RunWith(JUnit4.class) public abstract class IgniteTxMultiNodeAbstractTest extends GridCommonAbstractTest { /** Debug flag. */ private static final boolean DEBUG = false; @@ -66,9 +65,6 @@ public abstract class IgniteTxMultiNodeAbstractTest extends GridCommonAbstractTe /** */ protected static final int GRID_CNT = 4; - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ protected static final int RETRIES = 300; @@ -90,19 +86,6 @@ public abstract class IgniteTxMultiNodeAbstractTest extends GridCommonAbstractTe /** Number of backups for partitioned tests. */ protected int backups = 2; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { backups = 0; @@ -163,7 +146,6 @@ private static UUID primaryId(Ignite ignite, Object key) { * @param itemKey Item key. * @param retry Retry count. */ - @SuppressWarnings("unchecked") private void onItemNear(boolean putCntr, Ignite ignite, String itemKey, int retry) { IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -214,7 +196,6 @@ private void onItemNear(boolean putCntr, Ignite ignite, String itemKey, int retr * @param itemKey Item key. * @param retry Retry count. 
*/ - @SuppressWarnings("unchecked") private void onItemPrimary(boolean putCntr, Ignite ignite, String itemKey, int retry) { IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -267,7 +248,6 @@ private void onItemPrimary(boolean putCntr, Ignite ignite, String itemKey, int r * @param retry Retry count. * @throws IgniteCheckedException If failed. */ - @SuppressWarnings("unchecked") private void onRemoveItemQueried(boolean putCntr, Ignite ignite, int retry) throws IgniteCheckedException { IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -356,7 +336,6 @@ private void onRemoveItemQueried(boolean putCntr, Ignite ignite, int retry) thro * @param ignite Grid. * @param retry Retry count. */ - @SuppressWarnings("unchecked") private void onRemoveItemSimple(boolean putCntr, Ignite ignite, int retry) { IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -494,6 +473,7 @@ private void removeRetriesSimple(Ignite ignite, boolean putCntr) { * * @throws Exception If failed. */ + @Test public void testPutOneEntryInTx() throws Exception { // resetLog4j(Level.INFO, true, GridCacheTxManager.class.getName()); @@ -514,6 +494,7 @@ public void testPutOneEntryInTx() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutTwoEntriesInTx() throws Exception { // resetLog4j(Level.INFO, true, GridCacheTxManager.class.getName()); @@ -538,6 +519,7 @@ public void testPutTwoEntriesInTx() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutOneEntryInTxMultiThreaded() throws Exception { // resetLog4j(Level.INFO, true, GridCacheTxManager.class.getName()); @@ -577,6 +559,7 @@ public void testPutOneEntryInTxMultiThreaded() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutTwoEntryInTxMultiThreaded() throws Exception { // resetLog4j(Level.INFO, true, GridCacheTxManager.class.getName()); @@ -617,6 +600,7 @@ public void testPutTwoEntryInTxMultiThreaded() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testRemoveInTxQueried() throws Exception { //resetLog4j(Level.INFO, true, GridCacheTxManager.class.getPackage().getName()); @@ -659,6 +643,7 @@ public void testRemoveInTxQueried() throws Exception { * * @throws Exception If failed. */ + @Test public void testRemoveInTxSimple() throws Exception { startGrids(GRID_CNT); @@ -705,6 +690,7 @@ public void testRemoveInTxSimple() throws Exception { * * @throws Exception If failed. */ + @Test public void testRemoveInTxQueriedMultiThreaded() throws Exception { //resetLog4j(Level.INFO, true, GridCacheTxManager.class.getPackage().getName()); @@ -808,7 +794,6 @@ protected class PutTwoEntriesInTxJob implements IgniteCallable { private Ignite ignite; /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override public Integer call() throws IgniteCheckedException { assertNotNull(ignite); @@ -835,7 +820,6 @@ protected class PutOneEntryInTxJob implements IgniteCallable { private Ignite ignite; /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override public Integer call() throws IgniteCheckedException { assertNotNull(ignite); @@ -862,7 +846,6 @@ protected class RemoveInTxJobQueried implements IgniteCallable { private Ignite ignite; /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override public Integer call() throws IgniteCheckedException { assertNotNull(ignite); @@ -889,7 +872,6 @@ protected class RemoveInTxJobSimple implements IgniteCallable { private Ignite ignite; /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override public Integer call() throws IgniteCheckedException { assertNotNull(ignite); @@ -905,4 +887,4 @@ protected class RemoveInTxJobSimple implements IgniteCallable { return S.toString(RemoveInTxJobSimple.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiThreadedAbstractTest.java 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiThreadedAbstractTest.java index 5a1a1db2fdafe..1ce99ce1683ec 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiThreadedAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxMultiThreadedAbstractTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionOptimisticException; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -44,7 +47,7 @@ /** * Tests for local transactions. */ -@SuppressWarnings( {"BusyWait"}) +@RunWith(JUnit4.class) public abstract class IgniteTxMultiThreadedAbstractTest extends IgniteTxAbstractTest { /** * @return Thread count. @@ -86,7 +89,6 @@ protected void checkCommitMultithreaded(final TransactionConcurrency concurrency protected void checkRollbackMultithreaded(final TransactionConcurrency concurrency, final TransactionIsolation isolation) throws Exception { final ConcurrentMap map = new ConcurrentHashMap<>(); - GridTestUtils.runMultiThreaded(new Callable() { @Nullable @Override public Object call() throws Exception { Thread t = Thread.currentThread(); @@ -110,6 +112,7 @@ protected void checkRollbackMultithreaded(final TransactionConcurrency concurren /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticReadCommittedCommitMultithreaded() throws Exception { checkCommitMultithreaded(PESSIMISTIC, READ_COMMITTED); @@ -119,6 +122,7 @@ public void testPessimisticReadCommittedCommitMultithreaded() throws Exception { /** * @throws IgniteCheckedException If test failed. 
*/ + @Test public void testPessimisticRepeatableReadCommitMultithreaded() throws Exception { checkCommitMultithreaded(PESSIMISTIC, REPEATABLE_READ); @@ -128,6 +132,7 @@ public void testPessimisticRepeatableReadCommitMultithreaded() throws Exception /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticSerializableCommitMultithreaded() throws Exception { checkCommitMultithreaded(PESSIMISTIC, SERIALIZABLE); @@ -137,6 +142,7 @@ public void testPessimisticSerializableCommitMultithreaded() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticReadCommittedCommitMultithreaded() throws Exception { checkCommitMultithreaded(OPTIMISTIC, READ_COMMITTED); @@ -146,6 +152,7 @@ public void testOptimisticReadCommittedCommitMultithreaded() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticRepeatableReadCommitMultithreaded() throws Exception { checkCommitMultithreaded(OPTIMISTIC, REPEATABLE_READ); @@ -155,6 +162,7 @@ public void testOptimisticRepeatableReadCommitMultithreaded() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticSerializableCommitMultithreaded() throws Exception { checkCommitMultithreaded(OPTIMISTIC, SERIALIZABLE); @@ -164,6 +172,7 @@ public void testOptimisticSerializableCommitMultithreaded() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticReadCommittedRollbackMultithreaded() throws Exception { checkRollbackMultithreaded(PESSIMISTIC, READ_COMMITTED); @@ -173,6 +182,7 @@ public void testPessimisticReadCommittedRollbackMultithreaded() throws Exception /** * @throws IgniteCheckedException If test failed. 
*/ + @Test public void testPessimisticRepeatableReadRollbackMultithreaded() throws Exception { checkRollbackMultithreaded(PESSIMISTIC, REPEATABLE_READ); @@ -182,6 +192,7 @@ public void testPessimisticRepeatableReadRollbackMultithreaded() throws Exceptio /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticSerializableRollbackMultithreaded() throws Exception { checkRollbackMultithreaded(PESSIMISTIC, SERIALIZABLE); @@ -191,6 +202,7 @@ public void testPessimisticSerializableRollbackMultithreaded() throws Exception /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticReadCommittedRollbackMultithreaded() throws Exception { checkRollbackMultithreaded(OPTIMISTIC, READ_COMMITTED); @@ -200,6 +212,7 @@ public void testOptimisticReadCommittedRollbackMultithreaded() throws Exception /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticRepeatableReadRollbackMultithreaded() throws Exception { checkRollbackMultithreaded(OPTIMISTIC, REPEATABLE_READ); @@ -209,6 +222,7 @@ public void testOptimisticRepeatableReadRollbackMultithreaded() throws Exception /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticSerializableRollbackMultithreaded() throws Exception { checkRollbackMultithreaded(OPTIMISTIC, SERIALIZABLE); @@ -218,6 +232,7 @@ public void testOptimisticSerializableRollbackMultithreaded() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOptimisticSerializableConsistency() throws Exception { final IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxReentryAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxReentryAbstractSelfTest.java index 39066a24a24b2..a2e9fc7cc02bd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxReentryAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxReentryAbstractSelfTest.java @@ -34,11 +34,11 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -47,10 +47,8 @@ /** * Tests reentry in pessimistic repeatable read tx. */ +@RunWith(JUnit4.class) public abstract class IgniteTxReentryAbstractSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** @return Cache mode. 
*/ protected abstract CacheMode cacheMode(); @@ -76,12 +74,7 @@ public abstract class IgniteTxReentryAbstractSelfTest extends GridCommonAbstract @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - cfg.setCommunicationSpi(new CountingCommunicationSpi()); - cfg.setDiscoverySpi(discoSpi); CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -98,6 +91,7 @@ public abstract class IgniteTxReentryAbstractSelfTest extends GridCommonAbstract } /** @throws Exception If failed. */ + @Test public void testLockReentry() throws Exception { startGridsMultiThreaded(gridCount(), true); @@ -180,4 +174,4 @@ public int dhtLocks() { return dhtLocks.get(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxSingleThreadedAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxSingleThreadedAbstractTest.java index f7034d1736076..72ccb7b66d9f0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxSingleThreadedAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxSingleThreadedAbstractTest.java @@ -18,6 +18,9 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.IgniteCheckedException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -28,11 +31,12 @@ /** * Tests for local transactions. 
*/ -@SuppressWarnings( {"BusyWait"}) +@RunWith(JUnit4.class) public abstract class IgniteTxSingleThreadedAbstractTest extends IgniteTxAbstractTest { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticReadCommittedCommit() throws Exception { checkCommit(PESSIMISTIC, READ_COMMITTED); @@ -42,6 +46,7 @@ public void testPessimisticReadCommittedCommit() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticRepeatableReadCommit() throws Exception { checkCommit(PESSIMISTIC, REPEATABLE_READ); @@ -51,6 +56,7 @@ public void testPessimisticRepeatableReadCommit() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticSerializableCommit() throws Exception { checkCommit(PESSIMISTIC, SERIALIZABLE); @@ -60,6 +66,7 @@ public void testPessimisticSerializableCommit() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticReadCommittedCommit() throws Exception { checkCommit(OPTIMISTIC, READ_COMMITTED); @@ -69,6 +76,7 @@ public void testOptimisticReadCommittedCommit() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticRepeatableReadCommit() throws Exception { checkCommit(OPTIMISTIC, REPEATABLE_READ); @@ -78,6 +86,7 @@ public void testOptimisticRepeatableReadCommit() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticSerializableCommit() throws Exception { checkCommit(OPTIMISTIC, SERIALIZABLE); @@ -87,6 +96,7 @@ public void testOptimisticSerializableCommit() throws Exception { /** * @throws IgniteCheckedException If test failed. 
*/ + @Test public void testPessimisticReadCommittedRollback() throws Exception { checkRollback(PESSIMISTIC, READ_COMMITTED); @@ -96,6 +106,7 @@ public void testPessimisticReadCommittedRollback() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticRepeatableReadRollback() throws Exception { checkRollback(PESSIMISTIC, REPEATABLE_READ); @@ -105,6 +116,7 @@ public void testPessimisticRepeatableReadRollback() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticSerializableRollback() throws Exception { checkRollback(PESSIMISTIC, SERIALIZABLE); @@ -114,6 +126,7 @@ public void testPessimisticSerializableRollback() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticReadCommittedRollback() throws Exception { checkRollback(OPTIMISTIC, READ_COMMITTED); @@ -123,6 +136,7 @@ public void testOptimisticReadCommittedRollback() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticRepeatableReadRollback() throws Exception { checkRollback(OPTIMISTIC, REPEATABLE_READ); @@ -132,9 +146,10 @@ public void testOptimisticRepeatableReadRollback() throws Exception { /** * @throws IgniteCheckedException If test failed. 
*/ + @Test public void testOptimisticSerializableRollback() throws Exception { checkRollback(OPTIMISTIC, SERIALIZABLE); finalChecks(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxStoreExceptionAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxStoreExceptionAbstractSelfTest.java index 863ab38ad74ea..54218d5b881c1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxStoreExceptionAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/IgniteTxStoreExceptionAbstractSelfTest.java @@ -37,11 +37,15 @@ import org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionRollbackException; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -49,6 +53,7 @@ /** * Tests that transaction is invalidated in case of {@link IgniteTxHeuristicCheckedException}. */ +@RunWith(JUnit4.class) public abstract class IgniteTxStoreExceptionAbstractSelfTest extends GridCacheAbstractSelfTest { /** Index SPI throwing exception. 
*/ private static TestStore store; @@ -94,6 +99,8 @@ public abstract class IgniteTxStoreExceptionAbstractSelfTest extends GridCacheAb /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + store = new TestStore(); super.beforeTestsStarted(); @@ -118,6 +125,7 @@ public abstract class IgniteTxStoreExceptionAbstractSelfTest extends GridCacheAb /** * @throws Exception If failed. */ + @Test public void testPutNear() throws Exception { checkPut(true, keyForNode(grid(0).localNode(), NOT_PRIMARY_AND_BACKUP)); @@ -127,6 +135,7 @@ public void testPutNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutPrimary() throws Exception { checkPut(true, keyForNode(grid(0).localNode(), PRIMARY)); @@ -136,6 +145,7 @@ public void testPutPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutBackup() throws Exception { checkPut(true, keyForNode(grid(0).localNode(), BACKUP)); @@ -145,6 +155,7 @@ public void testPutBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAll() throws Exception { checkPutAll(true, keyForNode(grid(0).localNode(), PRIMARY), keyForNode(grid(0).localNode(), PRIMARY), @@ -168,6 +179,7 @@ public void testPutAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoveNear() throws Exception { checkRemove(false, keyForNode(grid(0).localNode(), NOT_PRIMARY_AND_BACKUP)); @@ -177,6 +189,7 @@ public void testRemoveNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemovePrimary() throws Exception { checkRemove(false, keyForNode(grid(0).localNode(), PRIMARY)); @@ -186,6 +199,7 @@ public void testRemovePrimary() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemoveBackup() throws Exception { checkRemove(false, keyForNode(grid(0).localNode(), BACKUP)); @@ -195,6 +209,7 @@ public void testRemoveBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformNear() throws Exception { checkTransform(false, keyForNode(grid(0).localNode(), NOT_PRIMARY_AND_BACKUP)); @@ -204,6 +219,7 @@ public void testTransformNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformPrimary() throws Exception { checkTransform(false, keyForNode(grid(0).localNode(), PRIMARY)); @@ -213,6 +229,7 @@ public void testTransformPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformBackup() throws Exception { checkTransform(false, keyForNode(grid(0).localNode(), BACKUP)); @@ -222,6 +239,7 @@ public void testTransformBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutNearTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -235,6 +253,7 @@ public void testPutNearTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutPrimaryTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -248,6 +267,7 @@ public void testPutPrimaryTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutBackupTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -261,6 +281,7 @@ public void testPutBackupTx() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutMultipleKeysTx() throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { @@ -672,4 +693,4 @@ public void forceFail(boolean fail) { throw new CacheWriterException("Store exception"); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/InterceptorCacheConfigVariationsFullApiTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/InterceptorCacheConfigVariationsFullApiTest.java index b6f5333e04806..a323067fcfb00 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/InterceptorCacheConfigVariationsFullApiTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/InterceptorCacheConfigVariationsFullApiTest.java @@ -23,11 +23,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.lang.IgniteBiTuple; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Full API cache test. */ -@SuppressWarnings({"TransientFieldInNonSerializableClass", "unchecked"}) +@SuppressWarnings({"unchecked"}) +@RunWith(JUnit4.class) public class InterceptorCacheConfigVariationsFullApiTest extends IgniteCacheConfigVariationsFullApiTest { /** */ private static volatile boolean validate = true; @@ -42,16 +46,19 @@ public class InterceptorCacheConfigVariationsFullApiTest extends IgniteCacheConf } /** {@inheritDoc} */ + @Test @Override public void testTtlNoTx() throws Exception { // No-op. } /** {@inheritDoc} */ + @Test @Override public void testTtlNoTxOldEntry() throws Exception { // No-op. } /** {@inheritDoc} */ + @Test @Override public void testTtlTx() throws Exception { // No-op. 
} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MarshallerCacheJobRunNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MarshallerCacheJobRunNodeRestartTest.java index 2ebc232c922df..cef26366d82cc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MarshallerCacheJobRunNodeRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MarshallerCacheJobRunNodeRestartTest.java @@ -25,19 +25,17 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.lang.IgniteInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class MarshallerCacheJobRunNodeRestartTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -45,8 +43,6 @@ public class MarshallerCacheJobRunNodeRestartTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -55,6 +51,7 @@ public class MarshallerCacheJobRunNodeRestartTest extends GridCommonAbstractTest /** * @throws Exception If failed. 
*/ + @Test public void testJobRun() throws Exception { for (int i = 0; i < 5; i++) { U.resolveWorkDirectory(U.defaultWorkDirectory(), "marshaller", true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MemoryPolicyConfigValidationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MemoryPolicyConfigValidationTest.java index 6dec847954f82..c73f6d6299d7f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MemoryPolicyConfigValidationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MemoryPolicyConfigValidationTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.configuration.MemoryConfiguration; import org.apache.ignite.configuration.MemoryPolicyConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class MemoryPolicyConfigValidationTest extends GridCommonAbstractTest { /** */ private static final String VALID_DEFAULT_MEM_PLC_NAME = "valid_dlft_mem_plc"; @@ -240,6 +244,7 @@ private MemoryPolicyConfiguration createMemoryPolicy(String name, long initialSi /** * 'sysMemPlc' name is reserved for MemoryPolicyConfiguration for system caches. */ + @Test public void testReservedMemoryPolicyMisuse() throws Exception { violationType = ValidationViolationType.SYSTEM_MEMORY_POLICY_NAME_MISUSE; @@ -249,6 +254,7 @@ public void testReservedMemoryPolicyMisuse() throws Exception { /** * If user defines a default, it must be present among configured memory policies. */ + @Test public void testMissingUserDefinedDefault() throws Exception { violationType = ValidationViolationType.MISSING_USER_DEFINED_DEFAULT; @@ -258,6 +264,7 @@ public void testMissingUserDefinedDefault() throws Exception { /** * Names of all MemoryPolicies must be distinct.
*/ + @Test public void testNamesConflict() throws Exception { violationType = ValidationViolationType.NAMES_CONFLICT; @@ -267,6 +274,7 @@ public void testNamesConflict() throws Exception { /** * User-defined policy must have a non-null non-empty name. */ + @Test public void testNullNameOnUserDefinedPolicy() throws Exception { violationType = ValidationViolationType.NULL_NAME_ON_USER_DEFINED_POLICY; @@ -276,6 +284,7 @@ public void testNullNameOnUserDefinedPolicy() throws Exception { /** * MemoryPolicy must be configured with size of at least 1MB. */ + @Test public void testMemoryTooSmall() throws Exception { violationType = ValidationViolationType.TOO_SMALL_MEMORY_SIZE; @@ -285,6 +294,7 @@ public void testMemoryTooSmall() throws Exception { /** * MemoryPolicy's maxSize must not be smaller than its initialSize. */ + @Test public void testMaxSizeSmallerThanInitialSize() throws Exception { violationType = ValidationViolationType.MAX_SIZE_IS_SMALLER_THAN_INITIAL_SIZE; @@ -294,6 +304,7 @@ public void testMaxSizeSmallerThanInitialSize() throws Exception { /** * User-defined size of default MemoryPolicy must be at least 1MB. */ + @Test public void testUserDefinedDefaultMemoryTooSmall() throws Exception { violationType = ValidationViolationType.TOO_SMALL_USER_DEFINED_DFLT_MEM_PLC_SIZE; @@ -304,6 +315,7 @@ public void testUserDefinedDefaultMemoryTooSmall() throws Exception { * Defining the size of the default MemoryPolicy twice, through the defaultMemoryPolicySize property * and via a MemoryPolicyConfiguration, is prohibited.
*/ + @Test public void testDefaultMemoryPolicySizeDefinedTwice() throws Exception { violationType = ValidationViolationType.DEFAULT_SIZE_IS_DEFINED_TWICE; @@ -313,6 +325,7 @@ public void testDefaultMemoryPolicySizeDefinedTwice() throws Exception { /** * */ + @Test public void testRateTimeIntervalPropertyIsNegative() throws Exception { violationType = ValidationViolationType.LTE_ZERO_RATE_TIME_INTERVAL; @@ -322,6 +335,7 @@ public void testRateTimeIntervalPropertyIsNegative() throws Exception { /** * */ + @Test public void testSubIntervalsPropertyIsNegative() throws Exception { violationType = ValidationViolationType.LTE_ZERO_SUB_INTERVALS; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MvccCacheGroupMetricsMBeanTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MvccCacheGroupMetricsMBeanTest.java new file mode 100644 index 0000000000000..2642eb5925c97 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/MvccCacheGroupMetricsMBeanTest.java @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.cache.CacheAtomicityMode; + +/** + * + */ +public class MvccCacheGroupMetricsMBeanTest extends CacheGroupMetricsMBeanTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/NonAffinityCoordinatorDynamicStartStopTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/NonAffinityCoordinatorDynamicStartStopTest.java index f88f5b95d600c..300763bfcb8ac 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/NonAffinityCoordinatorDynamicStartStopTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/NonAffinityCoordinatorDynamicStartStopTest.java @@ -27,18 +27,16 @@ import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class NonAffinityCoordinatorDynamicStartStopTest extends GridCommonAbstractTest { - /** Ip finder. 
*/ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String TEST_ATTRIBUTE = "test-attribute"; @@ -56,9 +54,6 @@ public class NonAffinityCoordinatorDynamicStartStopTest extends GridCommonAbstra @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - discoverySpi.setIpFinder(ipFinder); - DataStorageConfiguration memCfg = new DataStorageConfiguration().setDefaultDataRegionConfiguration( new DataRegionConfiguration().setMaxSize(200L * 1024 * 1024)); @@ -98,6 +93,7 @@ public class NonAffinityCoordinatorDynamicStartStopTest extends GridCommonAbstra /** * @throws Exception If failed. */ + @Test public void testStartStop() throws Exception { startGrids(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/OffheapCacheMetricsForClusterGroupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/OffheapCacheMetricsForClusterGroupSelfTest.java index 9ff6db7c2a5ee..81ac4c2b21da2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/OffheapCacheMetricsForClusterGroupSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/OffheapCacheMetricsForClusterGroupSelfTest.java @@ -17,20 +17,25 @@ package org.apache.ignite.internal.processors.cache; +import java.util.concurrent.CountDownLatch; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.Event; import org.apache.ignite.events.EventType; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import 
java.util.concurrent.CountDownLatch; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED; /** * Test for cluster wide offheap cache metrics. */ +@RunWith(JUnit4.class) public class OffheapCacheMetricsForClusterGroupSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 3; @@ -63,6 +68,8 @@ public class OffheapCacheMetricsForClusterGroupSelfTest extends GridCommonAbstra startGrid("client-" + i); } + /** */ + @Test public void testGetOffHeapPrimaryEntriesCount() throws Exception { String cacheName = "testGetOffHeapPrimaryEntriesCount"; IgniteCache cache = grid("client-0").createCache(cacheConfiguration(cacheName)); @@ -140,11 +147,16 @@ private void assertGetOffHeapPrimaryEntriesCount(String cacheName, int count) th } } + /** + * @param cacheName Cache name. + * @return Cache configuration. + */ private static CacheConfiguration cacheConfiguration(String cacheName) { CacheConfiguration cfg = new CacheConfiguration<>(cacheName); cfg.setBackups(1); cfg.setStatisticsEnabled(true); + cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); return cfg; } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedAtomicCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedAtomicCacheGetsDistributionTest.java index 2241a956e0908..963ccc81d02c8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedAtomicCacheGetsDistributionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedAtomicCacheGetsDistributionTest.java @@ -17,33 +17,23 @@ package org.apache.ignite.internal.processors.cache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; -import org.apache.ignite.configuration.CacheConfiguration; 
+import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests of partitioned atomic cache's 'get' requests distribution. */ -public class PartitionedAtomicCacheGetsDistributionTest extends ReplicatedAtomicCacheGetsDistributionTest { +public class PartitionedAtomicCacheGetsDistributionTest extends CacheGetsDistributionAbstractTest { /** {@inheritDoc} */ - @Override protected CacheMode cacheMode() { - return PARTITIONED; + @Override protected CacheAtomicityMode atomicityMode() { + return ATOMIC; } /** {@inheritDoc} */ - @Override protected CacheConfiguration cacheConfiguration() { - CacheConfiguration cacheCfg = super.cacheConfiguration(); - - cacheCfg.setBackups(backupsCount()); - - return cacheCfg; - } - - /** - * @return Backups count. - */ - protected int backupsCount() { - return gridCount() - 1; + @Override protected CacheMode cacheMode() { + return PARTITIONED; } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedMvccTxPessimisticCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedMvccTxPessimisticCacheGetsDistributionTest.java new file mode 100644 index 0000000000000..de9d9a658afc6 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedMvccTxPessimisticCacheGetsDistributionTest.java @@ -0,0 +1,32 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.transactions.TransactionIsolation; + +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Tests of pessimistic transactional partitioned cache's 'get' requests distribution. + */ +public class PartitionedMvccTxPessimisticCacheGetsDistributionTest extends PartitionedTransactionalPessimisticCacheGetsDistributionTest { + /** {@inheritDoc} */ + @Override protected TransactionIsolation transactionIsolation() { + return REPEATABLE_READ; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalOptimisticCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalOptimisticCacheGetsDistributionTest.java index 4c882294f89eb..e518e32ef5340 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalOptimisticCacheGetsDistributionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalOptimisticCacheGetsDistributionTest.java @@ -18,29 +18,36 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static 
org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; /** * Tests of optimistic transactional partitioned cache's 'get' requests distribution. */ -public class PartitionedTransactionalOptimisticCacheGetsDistributionTest extends PartitionedAtomicCacheGetsDistributionTest { +public class PartitionedTransactionalOptimisticCacheGetsDistributionTest extends CacheGetsDistributionAbstractTest { /** {@inheritDoc} */ - @Override protected CacheAtomicityMode atomicityMode() { - return TRANSACTIONAL; + @Override protected CacheMode cacheMode() { + return PARTITIONED; } /** {@inheritDoc} */ - @Override protected TransactionIsolation transactionIsolation() { - return READ_COMMITTED; + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL; } /** {@inheritDoc} */ @Override protected TransactionConcurrency transactionConcurrency() { return OPTIMISTIC; } + + /** {@inheritDoc} */ + @Override protected TransactionIsolation transactionIsolation() { + return READ_COMMITTED; + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalPessimisticCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalPessimisticCacheGetsDistributionTest.java index 78ea7a6f8d30d..ce1f7b66ceaa5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalPessimisticCacheGetsDistributionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionedTransactionalPessimisticCacheGetsDistributionTest.java @@ -17,17 +17,37 @@ package org.apache.ignite.internal.processors.cache; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; import 
org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; /** * Tests of pessimistic transactional partitioned cache's 'get' requests distribution. */ -public class PartitionedTransactionalPessimisticCacheGetsDistributionTest - extends PartitionedTransactionalOptimisticCacheGetsDistributionTest { +public class PartitionedTransactionalPessimisticCacheGetsDistributionTest extends CacheGetsDistributionAbstractTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return PARTITIONED; + } + + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL; + } + /** {@inheritDoc} */ @Override protected TransactionConcurrency transactionConcurrency() { return PESSIMISTIC; } + + /** {@inheritDoc} */ + @Override protected TransactionIsolation transactionIsolation() { + return READ_COMMITTED; + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeCoordinatorFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeCoordinatorFailoverTest.java index a2adcf77a7bac..dc42b29379038 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeCoordinatorFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeCoordinatorFailoverTest.java @@ -18,45 +18,76 @@ package org.apache.ignite.internal.processors.cache; import java.util.concurrent.CountDownLatch; +import java.util.function.Function; +import java.util.function.Supplier; import org.apache.ignite.Ignite; import 
org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteException; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.managers.communication.GridIoMessage; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.spi.IgniteSpiException; +import org.apache.ignite.spi.communication.CommunicationSpi; +import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Advanced coordinator failure scenarios during PME. 
*/ +@RunWith(JUnit4.class) public class PartitionsExchangeCoordinatorFailoverTest extends GridCommonAbstractTest { + /** */ + private static final String CACHE_NAME = "cache"; + + /** */ + private Supplier<CommunicationSpi> spiFactory = TcpCommunicationSpi::new; + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setConsistentId(igniteInstanceName); - cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); - - IgnitePredicate<ClusterNode> nodeFilter = node -> node.consistentId().equals(igniteInstanceName); + cfg.setCommunicationSpi(spiFactory.get()); cfg.setCacheConfiguration( - new CacheConfiguration("cache-" + igniteInstanceName) - .setBackups(1) - .setNodeFilter(nodeFilter) - .setAffinity(new RendezvousAffinityFunction(false, 32)) + new CacheConfiguration(CACHE_NAME) + .setBackups(2) + .setAffinity(new RendezvousAffinityFunction(false, 32)) ); + // Add cache that exists only on coordinator node. + if (igniteInstanceName.equals("crd")) { + IgnitePredicate<ClusterNode> nodeFilter = node -> node.consistentId().equals(igniteInstanceName); + + cfg.setCacheConfiguration( + new CacheConfiguration(CACHE_NAME + 0) + .setBackups(2) + .setNodeFilter(nodeFilter) + .setAffinity(new RendezvousAffinityFunction(false, 32)) + ); + } + return cfg; } @@ -75,7 +106,10 @@ public class PartitionsExchangeCoordinatorFailoverTest extends GridCommonAbstrac /** * Tests that new coordinator is able to finish old exchanges in case of incomplete coordinator initialization. */ + @Test public void testNewCoordinatorCompletedExchange() throws Exception { + spiFactory = TestRecordingCommunicationSpi::new; + IgniteEx crd = (IgniteEx) startGrid("crd"); IgniteEx newCrd = startGrid(1); @@ -145,11 +179,207 @@ public void testNewCoordinatorCompletedExchange() throws Exception { // Check that all caches are operable.
for (Ignite grid : G.allGrids()) { - IgniteCache cache = grid.cache("cache-" + grid.cluster().localNode().consistentId()); + IgniteCache cache = grid.cache(CACHE_NAME); Assert.assertNotNull(cache); cache.put(0, 0); } } + + /** + * Test checks that delayed full messages are processed correctly when the coordinator changes. + * + * @throws Exception If failed. + */ + @Test + public void testDelayedFullMessageReplacedIfCoordinatorChanged() throws Exception { + spiFactory = TestRecordingCommunicationSpi::new; + + IgniteEx crd = startGrid("crd"); + + IgniteEx newCrd = startGrid(1); + + IgniteEx problemNode = startGrid(2); + + crd.cluster().active(true); + + awaitPartitionMapExchange(); + + blockSendingFullMessage(crd, problemNode); + + IgniteInternalFuture joinNextNodeFut = GridTestUtils.runAsync(() -> startGrid(3)); + + joinNextNodeFut.get(); + + U.sleep(5000); + + blockSendingFullMessage(newCrd, problemNode); + + IgniteInternalFuture stopCoordinatorFut = GridTestUtils.runAsync(() -> stopGrid("crd")); + + stopCoordinatorFut.get(); + + U.sleep(5000); + + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(newCrd); + + spi.stopBlock(true); + + awaitPartitionMapExchange(); + } + + /** + * Test that the exchange coordinator is initialized correctly in case of merged exchanges and caches without affinity nodes. + * + * @throws Exception If failed. + */ + @Test + public void testCoordinatorChangeAfterExchangesMerge() throws Exception { + // Delay demand message sending to suspend late affinity assignment.
+ spiFactory = () -> new DynamicDelayingCommunicationSpi(msg -> { + final int delay = 5_000; + + if (msg instanceof GridDhtPartitionDemandMessage) { + GridDhtPartitionDemandMessage demandMessage = (GridDhtPartitionDemandMessage) msg; + + if (demandMessage.groupId() == GridCacheUtils.cacheId(GridCacheUtils.UTILITY_CACHE_NAME)) + return 0; + + return delay; + } + + return 0; + }); + + final IgniteEx crd = startGrid("crd"); + + startGrid(1); + + for (int k = 0; k < 1024; k++) + crd.cache(CACHE_NAME).put(k, k); + + // Delay sending single messages to ensure exchanges are merged. + spiFactory = () -> new DynamicDelayingCommunicationSpi(msg -> { + final int delay = 1_000; + + if (msg instanceof GridDhtPartitionsSingleMessage) { + GridDhtPartitionsSingleMessage singleMsg = (GridDhtPartitionsSingleMessage) msg; + + if (singleMsg.exchangeId() != null) + return delay; + } + + return 0; + }); + + // This should trigger exchanges merge. + startGridsMultiThreaded(2, 2); + + // Delay sending single message from new node to have time to shut down the coordinator. + spiFactory = () -> new DynamicDelayingCommunicationSpi(msg -> { + final int delay = 5_000; + + if (msg instanceof GridDhtPartitionsSingleMessage) { + GridDhtPartitionsSingleMessage singleMsg = (GridDhtPartitionsSingleMessage) msg; + + if (singleMsg.exchangeId() != null) + return delay; + } + + return 0; + }); + + // Trigger next exchange. + IgniteInternalFuture startNodeFut = GridTestUtils.runAsync(() -> startGrid(4)); + + // Wait until other nodes have sent their messages to the coordinator. + U.sleep(2_500); + + // And then stop coordinator node. + stopGrid("crd", true); + + startNodeFut.get(); + + awaitPartitionMapExchange(); + + // Check that all caches are operable.
+ for (Ignite grid : G.allGrids()) { + IgniteCache cache = grid.cache(CACHE_NAME); + + Assert.assertNotNull(cache); + + for (int k = 0; k < 1024; k++) + Assert.assertEquals(k, cache.get(k)); + + for (int k = 0; k < 1024; k++) + cache.put(k, k); + } + } + + /** + * Blocks sending of full messages from the coordinator to a non-coordinator node. + * @param from Coordinator node. + * @param to Non-coordinator node. + */ + private void blockSendingFullMessage(IgniteEx from, IgniteEx to) { + // Block FullMessage for newly joined nodes. + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(from); + + // Delay sending full messages (without exchange id). + spi.blockMessages((node, msg) -> { + if (msg instanceof GridDhtPartitionsFullMessage) { + GridDhtPartitionsFullMessage fullMsg = (GridDhtPartitionsFullMessage) msg; + + if (fullMsg.exchangeId() != null && node.order() == to.localNode().order()) { + log.warning("Blocked sending " + msg + " to " + to.localNode()); + + return true; + } + } + + return false; + }); + } + + /** + * Communication SPI that allows delaying message sending by a predicate. + */ + class DynamicDelayingCommunicationSpi extends TcpCommunicationSpi { + /** Function that returns delay in milliseconds for given message. */ + private final Function<Message, Integer> delayMessageFunc; + + /** */ + DynamicDelayingCommunicationSpi() { + this(msg -> 0); + } + + /** + * @param delayMessageFunc Function to calculate delay for message.
+ */ + DynamicDelayingCommunicationSpi(final Function<Message, Integer> delayMessageFunc) { + this.delayMessageFunc = delayMessageFunc; + } + + /** {@inheritDoc} */ + @Override public void sendMessage(ClusterNode node, Message msg, IgniteInClosure<IgniteException> ackC) + throws IgniteSpiException { + try { + GridIoMessage ioMsg = (GridIoMessage)msg; + + int delay = delayMessageFunc.apply(ioMsg.message()); + + if (delay > 0) { + log.warning(String.format("Delay sending %s to %s", msg, node)); + + U.sleep(delay); + } + } + catch (IgniteInterruptedCheckedException e) { + throw new IgniteSpiException(e); + } + + super.sendMessage(node, msg, ackC); + } + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeOnDiscoveryHistoryOverflowTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeOnDiscoveryHistoryOverflowTest.java index c0896c87a0c4f..1a3909091e6dd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeOnDiscoveryHistoryOverflowTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/PartitionsExchangeOnDiscoveryHistoryOverflowTest.java @@ -32,6 +32,9 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_DISCOVERY_HISTORY_SIZE; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -44,6 +47,7 @@ /** * Test discovery history overflow. */ +@RunWith(JUnit4.class) public class PartitionsExchangeOnDiscoveryHistoryOverflowTest extends IgniteCacheAbstractTest { /** */ private static final int CACHES_COUNT = 30; @@ -121,6 +125,7 @@ public class PartitionsExchangeOnDiscoveryHistoryOverflowTest extends IgniteCach /** * @throws Exception In case of error.
*/ + @Test public void testDynamicCacheCreation() throws Exception { for (int iter = 0; iter < 5; iter++) { log.info("Iteration: " + iter); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedAtomicCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedAtomicCacheGetsDistributionTest.java index 1aaea761d084a..ef90408e4a20a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedAtomicCacheGetsDistributionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedAtomicCacheGetsDistributionTest.java @@ -17,350 +17,23 @@ package org.apache.ignite.internal.processors.cache; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.TreeSet; -import java.util.UUID; -import org.apache.ignite.Ignite; -import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; -import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.configuration.TransactionConfiguration; -import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.internal.util.lang.GridAbsPredicate; -import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; -import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.transactions.Transaction; -import org.apache.ignite.transactions.TransactionConcurrency; -import org.apache.ignite.transactions.TransactionIsolation; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.REPLICATED; -import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; -import static 
org.apache.ignite.internal.IgniteNodeAttributes.ATTR_MACS; -import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; -import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; /** * Tests of replicated cache's 'get' requests distribution. */ -public class ReplicatedAtomicCacheGetsDistributionTest extends GridCacheAbstractSelfTest { - /** Cache name. */ - private static final String CACHE_NAME = "getsDistributionTest"; - - /** Client nodes instance's name. */ - private static final String CLIENT_NAME = "client"; - - /** Value prefix. */ - private static final String VAL_PREFIX = "val"; - - /** */ - private static final int PRIMARY_KEYS_NUMBER = 1_000; - - /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); - - IgniteConfiguration clientCfg = getConfiguration(CLIENT_NAME); - - clientCfg.setClientMode(true); - - startGrid(clientCfg); - } - - /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - super.beforeTest(); - - IgniteCache cache = ignite(0).cache(CACHE_NAME); - - if (cache != null) - cache.destroy(); - - // Setting different MAC addresses for all nodes - Map macs = getClusterMacs(); - - int idx = 0; - - for (Map.Entry entry : macs.entrySet()) - entry.setValue("x2-xx-xx-xx-xx-x" + idx++); - - replaceMacAddresses(G.allGrids(), macs); - } - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - cfg.setTransactionConfiguration(transactionConfiguration()); - - return cfg; - } - +public class ReplicatedAtomicCacheGetsDistributionTest extends CacheGetsDistributionAbstractTest { /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return ATOMIC; } /** {@inheritDoc} */ - @Override protected int gridCount() { - return 4; - } - - /** - * Test 'get' operations 
requests generator distribution. - * - * @throws Exception In case of an error. - * @see #runTestBalancingDistribution(boolean) - */ - public void testGetRequestsGeneratorDistribution() throws Exception { - runTestBalancingDistribution(false); - } - - /** - * Test 'getAll' operations requests generator distribution. - * - * @throws Exception In case of an error. - * @see #runTestBalancingDistribution(boolean) - */ - public void testGetAllRequestsGeneratorDistribution() throws Exception { - runTestBalancingDistribution(true); - } - - /** - * @param batchMode Whenever 'get' or 'getAll' operations are used in the test. - * @throws Exception In case of an error. - */ - protected void runTestBalancingDistribution(boolean batchMode) throws Exception { - IgniteCache cache = grid(0).createCache(cacheConfiguration()); - - List keys = primaryKeys(cache, PRIMARY_KEYS_NUMBER); - - for (Integer key : keys) - cache.put(key, VAL_PREFIX + key); - - IgniteCache clientCache = grid(CLIENT_NAME).getOrCreateCache(CACHE_NAME) - .withAllowAtomicOpsInTx(); - - assertTrue(GridTestUtils.waitForCondition( - new GridAbsPredicate() { - int batchSize = 10; - int idx = 0; - - @Override public boolean apply() { - if (idx >= PRIMARY_KEYS_NUMBER) - idx = 0; - - try (Transaction tx = grid(CLIENT_NAME).transactions().txStart()) { - if (batchMode) { - Set keys0 = new TreeSet<>(); - - for (int i = idx; i < idx + batchSize && i < PRIMARY_KEYS_NUMBER; i++) - keys0.add(keys.get(i)); - - idx += batchSize; - - Map results = clientCache.getAll(keys0); - - for (Map.Entry entry : results.entrySet()) - assertEquals(VAL_PREFIX + entry.getKey(), entry.getValue()); - } - else { - for (int i = idx; i < idx + gridCount() && i < PRIMARY_KEYS_NUMBER; i++) { - Integer key = keys.get(i); - - assertEquals(VAL_PREFIX + key, clientCache.get(key)); - } - - idx += gridCount(); - } - - tx.commit(); - } - - for (int i = 0; i < gridCount(); i++) { - IgniteEx ignite = grid(i); - - long getsCnt = 
ignite.cache(CACHE_NAME).localMetrics().getCacheGets(); - - if (getsCnt == 0) - return false; - } - - return true; - } - }, - getTestTimeout()) - ); - } - - /** - * Tests that the 'get' operation requests are routed to node with same MAC address as at requester. - * - * @throws Exception In case of an error. - * @see #runTestSameHostDistribution(UUID, boolean) - */ - public void testGetRequestsDistribution() throws Exception { - UUID destId = grid(0).localNode().id(); - - runTestSameHostDistribution(destId, false); - } - - /** - * Tests that the 'getAll' operation requests are routed to node with same MAC address as at requester. - * - * @throws Exception In case of an error. - * @see #runTestSameHostDistribution(UUID, boolean) - */ - public void testGetAllRequestsDistribution() throws Exception { - UUID destId = grid(gridCount() - 1).localNode().id(); - - runTestSameHostDistribution(destId, true); - } - - /** - * Tests that the 'get' and 'getAll' requests are routed to node with same MAC address as at requester. - * - * @param destId Destination Ignite instance id for requests distribution. - * @param batchMode Test mode. - * @throws Exception In case of an error. 
- */ - protected void runTestSameHostDistribution(final UUID destId, final boolean batchMode) throws Exception { - Map macs = getClusterMacs(); - - String clientMac = macs.get(grid(CLIENT_NAME).localNode().id()); - - macs.put(destId, clientMac); - - replaceMacAddresses(G.allGrids(), macs); - - IgniteCache cache = grid(0).createCache(cacheConfiguration()); - - List keys = primaryKeys(cache, PRIMARY_KEYS_NUMBER); - - for (Integer key : keys) - cache.put(key, VAL_PREFIX + key); - - IgniteCache clientCache = grid(CLIENT_NAME).getOrCreateCache(CACHE_NAME) - .withAllowAtomicOpsInTx(); - - try (Transaction tx = grid(CLIENT_NAME).transactions().txStart()) { - if (batchMode) { - Map results = clientCache.getAll(new TreeSet<>(keys)); - - for (Map.Entry entry : results.entrySet()) - assertEquals(VAL_PREFIX + entry.getKey(), entry.getValue()); - } - else { - for (Integer key : keys) - assertEquals(VAL_PREFIX + key, clientCache.get(key)); - } - - tx.commit(); - } - - for (int i = 0; i < gridCount(); i++) { - IgniteEx ignite = grid(i); - - long getsCnt = ignite.cache(CACHE_NAME).localMetrics().getCacheGets(); - - if (destId.equals(ignite.localNode().id())) - assertEquals(PRIMARY_KEYS_NUMBER, getsCnt); - else - assertEquals(0L, getsCnt); - } - } - - /** - * @return Transaction configuration. - */ - protected TransactionConfiguration transactionConfiguration() { - TransactionConfiguration txCfg = new TransactionConfiguration(); - - txCfg.setDefaultTxIsolation(transactionIsolation()); - txCfg.setDefaultTxConcurrency(transactionConcurrency()); - - return txCfg; - } - - /** - * @return Cache transaction isolation. - */ - protected TransactionIsolation transactionIsolation() { - return REPEATABLE_READ; - } - - /** - * @return Cache transaction concurrency. - */ - protected TransactionConcurrency transactionConcurrency() { - return PESSIMISTIC; - } - - /** - * @return Caching mode. 
- */ @Override protected CacheMode cacheMode() { return REPLICATED; } - - /** - * @return Cache configuration. - */ - protected CacheConfiguration cacheConfiguration() { - CacheConfiguration cfg = new CacheConfiguration(CACHE_NAME); - - cfg.setCacheMode(cacheMode()); - cfg.setAtomicityMode(atomicityMode()); - cfg.setWriteSynchronizationMode(FULL_SYNC); - cfg.setReadFromBackup(true); - cfg.setStatisticsEnabled(true); - - return cfg; - } - - /** - * @param instances Started Ignite instances. - * @param macs Mapping MAC addresses to UUID. - */ - private void replaceMacAddresses(List instances, Map macs) { - for (Ignite ignite : instances) { - for (ClusterNode node : ignite.cluster().nodes()) { - String mac = macs.get(node.id()); - - assertNotNull(mac); - - Map attrs = new HashMap<>(node.attributes()); - - attrs.put(ATTR_MACS, mac); - - ((TcpDiscoveryNode)node).setAttributes(attrs); - } - } - } - - /** - * @return Cluster nodes MAC addresses. - */ - private Map getClusterMacs() { - Map macs = new HashMap<>(); - - for (Ignite ignite : G.allGrids()) { - ClusterNode node = ignite.cluster().localNode(); - - String mac = node.attribute(ATTR_MACS); - - assert mac != null; - - macs.put(node.id(), mac); - } - - return macs; - } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedMvccTxPessimisticCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedMvccTxPessimisticCacheGetsDistributionTest.java new file mode 100644 index 0000000000000..6e2d67c3137d7 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedMvccTxPessimisticCacheGetsDistributionTest.java @@ -0,0 +1,32 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.transactions.TransactionIsolation; + +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Tests of MVCC pessimistic transactional replicated cache's 'get' requests distribution. + */ +public class ReplicatedMvccTxPessimisticCacheGetsDistributionTest extends ReplicatedTransactionalPessimisticCacheGetsDistributionTest { + /** {@inheritDoc} */ + @Override protected TransactionIsolation transactionIsolation() { + return REPEATABLE_READ; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalOptimisticCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalOptimisticCacheGetsDistributionTest.java index 3bc680972c050..4744f0a1eaba2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalOptimisticCacheGetsDistributionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalOptimisticCacheGetsDistributionTest.java @@ -18,17 +18,24 @@ package org.apache.ignite.internal.processors.cache; import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; import org.apache.ignite.transactions.TransactionConcurrency; 
import org.apache.ignite.transactions.TransactionIsolation; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; /** * Tests of optimistic transactional replicated cache's 'get' requests distribution. */ -public class ReplicatedTransactionalOptimisticCacheGetsDistributionTest extends ReplicatedAtomicCacheGetsDistributionTest { +public class ReplicatedTransactionalOptimisticCacheGetsDistributionTest extends CacheGetsDistributionAbstractTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return REPLICATED; + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalPessimisticCacheGetsDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalPessimisticCacheGetsDistributionTest.java index 7bace3c74fddc..2648851d426be 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalPessimisticCacheGetsDistributionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/ReplicatedTransactionalPessimisticCacheGetsDistributionTest.java @@ -17,17 +17,37 @@ package org.apache.ignite.internal.processors.cache; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.REPLICATED; import static 
org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; /** * Tests of pessimistic transactional replicated cache's 'get' requests distribution. */ -public class ReplicatedTransactionalPessimisticCacheGetsDistributionTest - extends ReplicatedTransactionalOptimisticCacheGetsDistributionTest { +public class ReplicatedTransactionalPessimisticCacheGetsDistributionTest extends CacheGetsDistributionAbstractTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return REPLICATED; + } + + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL; + } + /** {@inheritDoc} */ @Override protected TransactionConcurrency transactionConcurrency() { return PESSIMISTIC; } + + /** {@inheritDoc} */ + @Override protected TransactionIsolation transactionIsolation() { + return READ_COMMITTED; + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/SetTxTimeoutOnPartitionMapExchangeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/SetTxTimeoutOnPartitionMapExchangeTest.java index c9ae34fcc18cd..dc9fa97f1d69c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/SetTxTimeoutOnPartitionMapExchangeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/SetTxTimeoutOnPartitionMapExchangeTest.java @@ -30,7 +30,6 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; @@ -41,14 +40,14 @@ import org.apache.ignite.internal.util.typedef.G; import 
org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.mxbean.TransactionsMXBean; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionRollbackException; import org.apache.ignite.transactions.TransactionTimeoutException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.util.typedef.X.hasCause; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -57,10 +56,8 @@ /** * */ +@RunWith(JUnit4.class) public class SetTxTimeoutOnPartitionMapExchangeTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Wait condition timeout. 
*/ private static final long WAIT_CONDITION_TIMEOUT = 10_000L; @@ -71,19 +68,10 @@ public class SetTxTimeoutOnPartitionMapExchangeTest extends GridCommonAbstractTe stopAllGrids(); } - /** {@inheritDoc} */ - @SuppressWarnings("unchecked") - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** * */ + @Test public void testDefaultTxTimeoutOnPartitionMapExchange() throws Exception { IgniteEx ig1 = startGrid(1); IgniteEx ig2 = startGrid(2); @@ -100,6 +88,7 @@ public void testDefaultTxTimeoutOnPartitionMapExchange() throws Exception { /** * */ + @Test public void testJmxSetTxTimeoutOnPartitionMapExchange() throws Exception { startGrid(1); startGrid(2); @@ -122,6 +111,7 @@ public void testJmxSetTxTimeoutOnPartitionMapExchange() throws Exception { /** * */ + @Test public void testClusterSetTxTimeoutOnPartitionMapExchange() throws Exception { Ignite ig1 = startGrid(1); Ignite ig2 = startGrid(2); @@ -141,6 +131,7 @@ public void testClusterSetTxTimeoutOnPartitionMapExchange() throws Exception { * * @throws Exception If fails. */ + @Test public void testSetTxTimeoutDuringPartitionMapExchange() throws Exception { IgniteEx ig = (IgniteEx) startGrids(2); @@ -152,6 +143,7 @@ public void testSetTxTimeoutDuringPartitionMapExchange() throws Exception { * * @throws Exception If fails. 
*/ + @Test public void testSetTxTimeoutOnClientDuringPartitionMapExchange() throws Exception { IgniteEx ig = (IgniteEx) startGrids(2); IgniteEx client = startGrid(getConfiguration("client").setClientMode(true)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAbstractSelfTest.java index 15376411d9afb..9bc1347d562db 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAbstractSelfTest.java @@ -17,16 +17,20 @@ package org.apache.ignite.internal.processors.cache; +import java.util.concurrent.Callable; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.lang.IgniteInClosureX; import org.apache.ignite.internal.util.typedef.internal.U; - -import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -38,6 +42,7 @@ * Test dynamic WAL mode change. */ +@RunWith(JUnit4.class) public abstract class WalModeChangeAbstractSelfTest extends WalModeChangeCommonAbstractSelfTest { /** Whether coordinator node should be filtered out. */ private final boolean filterOnCrd; @@ -72,6 +77,7 @@ protected WalModeChangeAbstractSelfTest(boolean filterOnCrd, boolean jdbc) { * * @throws Exception If failed. 
*/ + @Test public void testNullCacheName() throws Exception { forAllNodes(new IgniteInClosureX() { @Override public void applyx(Ignite ignite) throws IgniteCheckedException { @@ -91,6 +97,7 @@ public void testNullCacheName() throws Exception { * * @throws Exception If failed. */ + @Test public void testNoCache() throws Exception { forAllNodes(new IgniteInClosureX() { @Override public void applyx(Ignite ignite) throws IgniteCheckedException { @@ -111,6 +118,7 @@ public void testNoCache() throws Exception { * * @throws Exception If failed. */ + @Test public void testSharedCacheGroup() throws Exception { forAllNodes(new IgniteInClosureX() { @Override public void applyx(Ignite ignite) throws IgniteCheckedException { @@ -146,6 +154,7 @@ public void testSharedCacheGroup() throws Exception { * * @throws Exception If failed. */ + @Test public void testPersistenceDisabled() throws Exception { forAllNodes(new IgniteInClosureX() { @Override public void applyx(Ignite ignite) throws IgniteCheckedException { @@ -182,11 +191,14 @@ public void testPersistenceDisabled() throws Exception { * * @throws Exception If failed. */ + @Test public void testLocalCache() throws Exception { if (jdbc) // Doesn't make sense for JDBC. return; + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + forAllNodes(new IgniteInClosureX() { @Override public void applyx(Ignite ignite) throws IgniteCheckedException { createCache(ignite, cacheConfig(LOCAL).setDataRegionName(REGION_VOLATILE)); @@ -217,6 +229,7 @@ public void testLocalCache() throws Exception { * * @throws Exception If failed. */ + @Test public void testEnableDisablePartitionedAtomic() throws Exception { checkEnableDisable(PARTITIONED, ATOMIC); } @@ -226,6 +239,7 @@ public void testEnableDisablePartitionedAtomic() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testEnableDisablePartitionedTransactional() throws Exception { checkEnableDisable(PARTITIONED, TRANSACTIONAL); } @@ -235,6 +249,7 @@ public void testEnableDisablePartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testEnableDisableReplicatedAtomic() throws Exception { checkEnableDisable(REPLICATED, ATOMIC); } @@ -244,6 +259,7 @@ public void testEnableDisableReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testEnableDisableReplicatedTransactional() throws Exception { checkEnableDisable(REPLICATED, TRANSACTIONAL); } @@ -272,4 +288,46 @@ private void checkEnableDisable(CacheMode mode, CacheAtomicityMode atomicityMode } }); } + + /** + * Test {@link WalStateManager#prohibitWALDisabling(boolean)} feature. + * + * @throws Exception If failed. + */ + @Test + public void testDisablingProhibition() throws Exception { + forAllNodes(new IgniteInClosureX() { + @Override public void applyx(Ignite ig) throws IgniteCheckedException { + assert ig instanceof IgniteEx; + + IgniteEx ignite = (IgniteEx)ig; + + createCache(ignite, cacheConfig(CACHE_NAME, PARTITIONED, TRANSACTIONAL)); + + WalStateManager stateMgr = ignite.context().cache().context().walState(); + + assertFalse(stateMgr.prohibitWALDisabling()); + + stateMgr.prohibitWALDisabling(true); + assertTrue(stateMgr.prohibitWALDisabling()); + + try { + walDisable(ignite, CACHE_NAME); + + fail(); + } + catch (Exception e) { + // No-op. 
+ } + + stateMgr.prohibitWALDisabling(false); + assertFalse(stateMgr.prohibitWALDisabling()); + + createCache(ignite, cacheConfig(CACHE_NAME, PARTITIONED, TRANSACTIONAL)); + + assertWalDisable(ignite, CACHE_NAME, true); + assertWalEnable(ignite, CACHE_NAME, true); + } + }); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAdvancedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAdvancedSelfTest.java index be0f5df158761..da5196c6d796e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAdvancedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeAdvancedSelfTest.java @@ -29,6 +29,12 @@ import org.apache.ignite.internal.IgniteClientReconnectAbstractTest; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Assume; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,6 +43,7 @@ * Concurrent and advanced tests for WAL state change. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class WalModeChangeAdvancedSelfTest extends WalModeChangeCommonAbstractSelfTest { /** * Constructor. @@ -64,6 +71,7 @@ public WalModeChangeAdvancedSelfTest() { * * @throws Exception If failed. */ + @Test public void testCacheCleanup() throws Exception { Ignite srv = startGrid(config(SRV_1, false, false)); @@ -132,7 +140,10 @@ public void testCacheCleanup() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testJoin() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10421", MvccFeatureChecker.forcedMvcc()); + checkJoin(false); } @@ -141,6 +152,7 @@ public void testJoin() throws Exception { * * @throws Exception If failed. */ + @Test public void testJoinCoordinator() throws Exception { checkJoin(true); } @@ -209,6 +221,7 @@ private void checkJoin(boolean crdFiltered) throws Exception { * * @throws Exception If failed. */ + @Test public void testServerRestartNonCoordinator() throws Exception { checkNodeRestart(false); } @@ -218,9 +231,9 @@ public void testServerRestartNonCoordinator() throws Exception { * * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7472") + @Test public void testServerRestartCoordinator() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-7472"); - checkNodeRestart(true); } @@ -300,6 +313,7 @@ public void checkNodeRestart(boolean failCrd) throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnect() throws Exception { final Ignite srv = startGrid(config(SRV_1, false, false)); Ignite cli = startGrid(config(CLI, true, false)); @@ -358,6 +372,7 @@ public void testClientReconnect() throws Exception { * * @throws Exception If failed. */ + @Test public void testCacheDestroy() throws Exception { final Ignite srv = startGrid(config(SRV_1, false, false)); Ignite cli = startGrid(config(CLI, true, false)); @@ -418,6 +433,7 @@ public void testCacheDestroy() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testConcurrentOperations() throws Exception { final Ignite srv1 = startGrid(config(SRV_1, false, false)); final Ignite srv2 = startGrid(config(SRV_2, false, false)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeCommonAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeCommonAbstractSelfTest.java index a902bfa977ccd..944b85d97456f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeCommonAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WalModeChangeCommonAbstractSelfTest.java @@ -31,7 +31,6 @@ import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteClientReconnectAbstractTest; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -39,7 +38,6 @@ import org.apache.ignite.internal.util.lang.IgniteInClosureX; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; @@ -50,9 +48,6 @@ */ public abstract class WalModeChangeCommonAbstractSelfTest extends GridCommonAbstractTest { - /** Shared IP finder. */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Node filter. 
*/ private static final IgnitePredicate FILTER = new CacheNodeFilter(); @@ -303,8 +298,6 @@ protected IgniteConfiguration config(String name, boolean cli, boolean filter) t cfg.setClientMode(cli); cfg.setLocalHost("127.0.0.1"); - cfg.setDiscoverySpi(new IgniteClientReconnectAbstractTest.TestTcpDiscoverySpi().setIpFinder(IP_FINDER)); - DataRegionConfiguration regionCfg = new DataRegionConfiguration() .setPersistenceEnabled(true) .setMaxSize(DataStorageConfiguration.DFLT_DATA_REGION_INITIAL_SIZE); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WithKeepBinaryCacheFullApiTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WithKeepBinaryCacheFullApiTest.java index a8eb01d65e439..0fb737e71fdef 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WithKeepBinaryCacheFullApiTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/WithKeepBinaryCacheFullApiTest.java @@ -36,6 +36,9 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.testframework.junits.IgniteConfigVariationsAbstractTest.DataMode.PLANE_OBJECT; @@ -45,6 +48,7 @@ * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class WithKeepBinaryCacheFullApiTest extends IgniteCacheConfigVariationsAbstractTest { /** */ protected static volatile boolean interceptorBinaryObjExp = true; @@ -123,6 +127,7 @@ public class WithKeepBinaryCacheFullApiTest extends IgniteCacheConfigVariationsA * @throws Exception If failed. 
*/ @SuppressWarnings("serial") + @Test public void testRemovePutGet() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -178,6 +183,7 @@ public void testRemovePutGet() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testRemovePutGetAsync() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -234,6 +240,7 @@ public void testRemovePutGetAsync() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testPutAllGetAll() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -301,6 +308,7 @@ public void testPutAllGetAll() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testPutAllGetAllAsync() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -374,6 +382,7 @@ public void testPutAllGetAllAsync() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testInvoke() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -439,6 +448,7 @@ public void testInvoke() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeTx() throws Exception { if (!txShouldBeUsed()) return; @@ -558,6 +568,7 @@ public void checkInvokeTx(final TransactionConcurrency conc, final TransactionIs * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testInvokeAsync() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -623,6 +634,7 @@ public void testInvokeAsync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testInvokeAsyncTx() throws Exception { if (!txShouldBeUsed()) return; @@ -744,6 +756,7 @@ public void checkInvokeAsyncTx(final TransactionConcurrency conc, final Transact * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testInvokeAll() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -795,6 +808,7 @@ public void testInvokeAll() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("serial") + @Test public void testInvokeAllTx() throws Exception { if (!txShouldBeUsed()) return; @@ -927,6 +941,7 @@ private void checkInvokeAllResult(IgniteCache cache, Map updatesQueue = new LinkedBlockingDeque<>(UPDATES_COUNT); - - /** */ - private static volatile BlockingDeque srvResurrectQueue = new LinkedBlockingDeque<>(1); + private static final int UPDATES_COUNT = 1_000; /** */ - private static final CountDownLatch START_LATCH = new CountDownLatch(1); + private static final int RESTART_DELAY = 1_000; /** */ - private static final CountDownLatch FINISH_LATCH_NO_CLIENTS = new CountDownLatch(5); + private static final int GRID_CNT = 5; /** */ - private static volatile AtomicBoolean stopFlag0 = new AtomicBoolean(false); + private static final String BINARY_TYPE_NAME = "TestBinaryType"; /** */ - private static volatile AtomicBoolean stopFlag1 = new AtomicBoolean(false); + private static final int BINARY_TYPE_ID = 708045005; /** */ - private static volatile AtomicBoolean stopFlag2 = new AtomicBoolean(false); + private final Queue updatesQueue = new ConcurrentLinkedQueue<>(); /** */ - private static volatile AtomicBoolean stopFlag3 = new AtomicBoolean(false); + private final List updatesList = new ArrayList<>(UPDATES_COUNT); /** */ - private static volatile AtomicBoolean stopFlag4 = new AtomicBoolean(false); + private final CountDownLatch startLatch = new CountDownLatch(1); - /** */ - private static final String BINARY_TYPE_NAME = "TestBinaryType"; - - /** 
*/ - private static final int BINARY_TYPE_ID = 708045005; /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { for (int i = 0; i < UPDATES_COUNT; i++) { FieldType fType = null; + Object fVal = null; + switch (i % 4) { case 0: fType = FieldType.NUMBER; + fVal = getNumberFieldVal(); break; case 1: fType = FieldType.STRING; + fVal = getStringFieldVal(); break; case 2: fType = FieldType.ARRAY; + fVal = getArrayFieldVal(); break; case 3: fType = FieldType.OBJECT; + fVal = new Object(); } - updatesQueue.add(new BinaryUpdateDescription(i, "f" + (i + 1), fType)); + BinaryUpdateDescription desc = new BinaryUpdateDescription(i, "f" + (i + 1), fType, fVal); + + updatesQueue.add(desc); + updatesList.add(desc); } } @@ -144,12 +128,10 @@ public class BinaryMetadataUpdatesFlowTest extends GridCommonAbstractTest { cfg.setPeerClassLoadingEnabled(false); - if (applyDiscoveryHook) { - final DiscoveryHook hook = discoveryHook != null ? discoveryHook : new DiscoveryHook(); - + if (discoveryHook != null) { TcpDiscoverySpi discoSpi = new TcpDiscoverySpi() { @Override public void setListener(@Nullable DiscoverySpiListener lsnr) { - super.setListener(GridTestUtils.DiscoverySpiListenerWrapper.wrap(lsnr, hook)); + super.setListener(GridTestUtils.DiscoverySpiListenerWrapper.wrap(lsnr, discoveryHook)); } }; @@ -158,11 +140,11 @@ public class BinaryMetadataUpdatesFlowTest extends GridCommonAbstractTest { cfg.setMetricsUpdateFrequency(1000); } - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(sharedStaticIpFinder); cfg.setMarshaller(new BinaryMarshaller()); - cfg.setClientMode(clientMode); + cfg.setClientMode("client".equals(gridName) || getTestIgniteInstanceIndex(gridName) >= GRID_CNT); CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -181,185 +163,147 @@ public class BinaryMetadataUpdatesFlowTest extends GridCommonAbstractTest { } /** - * Starts new ignite node and submits 
computation job to it. - * @param idx Index. - * @param stopFlag Stop flag. - * @throws Exception If failed. + * Starts computation job. + * + * @param idx Grid index on which computation job should start. + * @param restartIdx The index of the node to be restarted. + * @param workersCntr The current number of computation threads. */ - private void startComputation(int idx, AtomicBoolean stopFlag) throws Exception { - clientMode = false; - - final IgniteEx ignite0 = startGrid(idx); + private void startComputation(int idx, AtomicInteger restartIdx, AtomicInteger workersCntr) { + Ignite ignite = grid(idx); - ClusterGroup cg = ignite0.cluster().forNodeId(ignite0.localNode().id()); + ClusterGroup cg = ignite.cluster().forLocal(); - ignite0.compute(cg).withAsync().call(new BinaryObjectAdder(ignite0, updatesQueue, 30, stopFlag)); + ignite.compute(cg).callAsync(new BinaryObjectAdder(startLatch, idx, updatesQueue, restartIdx, workersCntr)); } /** - * @param idx Index. - * @param deafClient Deaf client. - * @param observedIds Observed ids. * @throws Exception If failed. */ - private void startListening(int idx, boolean deafClient, Set observedIds) throws Exception { - clientMode = true; + @Test + public void testFlowNoConflicts() throws Exception { + startGridsMultiThreaded(GRID_CNT); - ContinuousQuery qry = new ContinuousQuery(); + doTestFlowNoConflicts(); - qry.setLocalListener(new CQListener(observedIds)); + awaitPartitionMapExchange(); - if (deafClient) { - applyDiscoveryHook = true; - discoveryHook = new DiscoveryHook() { - @Override public void handleDiscoveryMessage(DiscoverySpiCustomMessage msg) { - DiscoveryCustomMessage customMsg = msg == null ? 
null - : (DiscoveryCustomMessage) IgniteUtils.field(msg, "delegate"); + Ignite randomNode = G.allGrids().get(0); - if (customMsg instanceof MetadataUpdateProposedMessage) { - if (((MetadataUpdateProposedMessage) customMsg).typeId() == BINARY_TYPE_ID) - GridTestUtils.setFieldValue(customMsg, "typeId", 1); - } - else if (customMsg instanceof MetadataUpdateAcceptedMessage) { - if (((MetadataUpdateAcceptedMessage) customMsg).typeId() == BINARY_TYPE_ID) - GridTestUtils.setFieldValue(customMsg, "typeId", 1); - } - } - }; + IgniteCache cache = randomNode.cache(DEFAULT_CACHE_NAME); - IgniteEx client = startGrid(idx); + int cacheEntries = cache.size(CachePeekMode.PRIMARY); - client.cache(DEFAULT_CACHE_NAME).withKeepBinary().query(qry); - } - else { - applyDiscoveryHook = false; + assertTrue("Cache cannot contain more entries than were put in it;", cacheEntries <= UPDATES_COUNT); - IgniteEx client = startGrid(idx); + assertEquals("There are less than expected entries, data loss occurred;", UPDATES_COUNT, cacheEntries); - client.cache(DEFAULT_CACHE_NAME).withKeepBinary().query(qry); - } + validateCache(randomNode); } /** - * + * @throws Exception If failed. 
*/ - private static class CQListener implements CacheEntryUpdatedListener { - /** */ - private final Set observedIds; - - /** - * @param observedIds - */ - CQListener(Set observedIds) { - this.observedIds = observedIds; - } - - /** {@inheritDoc} */ - @Override public void onUpdated(Iterable iterable) throws CacheEntryListenerException { - for (Object o : iterable) { - if (o instanceof CacheQueryEntryEvent) { - CacheQueryEntryEvent e = (CacheQueryEntryEvent) o; + @Test + public void testFlowNoConflictsWithClients() throws Exception { + startGridsMultiThreaded(GRID_CNT); - BinaryObjectImpl val = (BinaryObjectImpl) e.getValue(); + if (!tcpDiscovery()) + return; - Integer seqNum = val.field(SEQ_NUM_FLD); + discoveryHook = new DiscoveryHook() { + @Override public void handleDiscoveryMessage(DiscoverySpiCustomMessage msg) { + DiscoveryCustomMessage customMsg = msg == null ? null + : (DiscoveryCustomMessage) IgniteUtils.field(msg, "delegate"); - observedIds.add(seqNum); + if (customMsg instanceof MetadataUpdateProposedMessage) { + if (((MetadataUpdateProposedMessage) customMsg).typeId() == BINARY_TYPE_ID) + GridTestUtils.setFieldValue(customMsg, "typeId", 1); + } + else if (customMsg instanceof MetadataUpdateAcceptedMessage) { + if (((MetadataUpdateAcceptedMessage) customMsg).typeId() == BINARY_TYPE_ID) + GridTestUtils.setFieldValue(customMsg, "typeId", 1); } } - } - } - - /** - * @throws Exception If failed. 
- */ - public void testFlowNoConflicts() throws Exception { - startComputation(0, stopFlag0); + }; - startComputation(1, stopFlag1); + Ignite deafClient = startGrid(GRID_CNT); - startComputation(2, stopFlag2); + discoveryHook = null; - startComputation(3, stopFlag3); + Ignite regClient = startGrid(GRID_CNT + 1); - startComputation(4, stopFlag4); + doTestFlowNoConflicts(); - Thread killer = new Thread(new ServerNodeKiller()); - Thread resurrection = new Thread(new ServerNodeResurrection()); - killer.setName("node-killer-thread"); - killer.start(); - resurrection.setName("node-resurrection-thread"); - resurrection.start(); + awaitPartitionMapExchange(); - START_LATCH.countDown(); + validateCache(deafClient); + validateCache(regClient); + } - while (!updatesQueue.isEmpty()) - Thread.sleep(1000); - FINISH_LATCH_NO_CLIENTS.await(); + /** + * Validates that all updates are readable on the specified node. + * + * @param ignite Ignite instance. + */ + private void validateCache(Ignite ignite) { + String name = ignite.name(); - IgniteEx ignite0 = grid(0); + for (Cache.Entry entry : ignite.cache(DEFAULT_CACHE_NAME).withKeepBinary()) { + BinaryObject binObj = (BinaryObject)entry.getValue(); - IgniteCache cache0 = ignite0.cache(DEFAULT_CACHE_NAME); + Integer idx = binObj.field(SEQ_NUM_FLD); - int cacheEntries = cache0.size(CachePeekMode.PRIMARY); + BinaryUpdateDescription desc = updatesList.get(idx - 1); - assertTrue("Cache cannot contain more entries than were put in it;", cacheEntries <= UPDATES_COUNT); + Object val = binObj.field(desc.fieldName); - assertEquals("There are less than expected entries, data loss occurred;", UPDATES_COUNT, cacheEntries); + String errMsg = "Field " + desc.fieldName + " has unexpected value (index=" + idx + ", node=" + name + ")"; - killer.interrupt(); - resurrection.interrupt(); + if (desc.fieldType == FieldType.OBJECT) + assertTrue(errMsg, val instanceof BinaryObject); + else if (desc.fieldType == FieldType.ARRAY) + assertArrayEquals(errMsg,
(byte[])desc.val, (byte[])val); + else + assertEquals(errMsg, desc.val, binObj.field(desc.fieldName)); + } } /** * @throws Exception If failed. */ - public void testFlowNoConflictsWithClients() throws Exception { - startComputation(0, stopFlag0); - - if (!tcpDiscovery()) - return; - - startComputation(1, stopFlag1); - - startComputation(2, stopFlag2); - - startComputation(3, stopFlag3); - - startComputation(4, stopFlag4); - - final Set deafClientObservedIds = new ConcurrentHashSet<>(); + private void doTestFlowNoConflicts() throws Exception { + final AtomicBoolean stopFlag = new AtomicBoolean(); + final AtomicInteger restartIdx = new AtomicInteger(-1); + final AtomicInteger workersCntr = new AtomicInteger(0); - startListening(5, true, deafClientObservedIds); + try { + for (int i = 0; i < GRID_CNT; i++) + startComputation(i, restartIdx, workersCntr); - final Set regClientObservedIds = new ConcurrentHashSet<>(); + IgniteInternalFuture fut = + GridTestUtils.runAsync(new NodeRestarter(stopFlag, restartIdx, workersCntr), "worker"); - startListening(6, false, regClientObservedIds); + startLatch.countDown(); - START_LATCH.countDown(); - - Thread killer = new Thread(new ServerNodeKiller()); - Thread resurrection = new Thread(new ServerNodeResurrection()); - killer.setName("node-killer-thread"); - killer.start(); - resurrection.setName("node-resurrection-thread"); - resurrection.start(); - - while (!updatesQueue.isEmpty()) - Thread.sleep(1000); + fut.get(); - killer.interrupt(); - resurrection.interrupt(); + GridTestUtils.waitForCondition(() -> workersCntr.get() == 0, 5_000); + } + finally { + stopFlag.set(true); + } } /** * @throws Exception If failed. 
*/ + @Test public void testConcurrentMetadataUpdates() throws Exception { startGrid(0); - final Ignite client = startGrid(getConfiguration("client").setClientMode(true)); + final Ignite client = startGrid(getConfiguration("client")); final IgniteCache cache = client.cache(DEFAULT_CACHE_NAME).withKeepBinary(); @@ -395,101 +339,6 @@ public void testConcurrentMetadataUpdates() throws Exception { fut.get(); } - /** - * Runnable responsible for stopping (gracefully) server nodes during metadata updates process. - */ - private final class ServerNodeKiller implements Runnable { - /** {@inheritDoc} */ - @Override public void run() { - Thread curr = Thread.currentThread(); - try { - START_LATCH.await(); - - while (!curr.isInterrupted()) { - int idx = ThreadLocalRandom.current().nextInt(5); - - AtomicBoolean stopFlag; - - switch (idx) { - case 0: - stopFlag = stopFlag0; - break; - case 1: - stopFlag = stopFlag1; - break; - case 2: - stopFlag = stopFlag2; - break; - case 3: - stopFlag = stopFlag3; - break; - default: - stopFlag = stopFlag4; - } - - stopFlag.set(true); - - while (stopFlag.get()) - Thread.sleep(10); - - stopGrid(idx); - - srvResurrectQueue.put(idx); - - Thread.sleep(RESTART_DELAY); - } - } - catch (Exception ignored) { - // No-op. - } - } - } - - /** - * {@link Runnable} object to restart nodes killed by {@link ServerNodeKiller}. 
- */ - private final class ServerNodeResurrection implements Runnable { - /** {@inheritDoc} */ - @Override public void run() { - Thread curr = Thread.currentThread(); - - try { - START_LATCH.await(); - - while (!curr.isInterrupted()) { - Integer idx = srvResurrectQueue.takeFirst(); - - AtomicBoolean stopFlag; - - switch (idx) { - case 0: - stopFlag = stopFlag0; - break; - case 1: - stopFlag = stopFlag1; - break; - case 2: - stopFlag = stopFlag2; - break; - case 3: - stopFlag = stopFlag3; - break; - default: - stopFlag = stopFlag4; - } - - clientMode = false; - applyDiscoveryHook = false; - - startComputation(idx, stopFlag); - } - } - catch (Exception ignored) { - // No-op. - } - } - } - /** * Instruction for node to perform add new binary object action on cache in keepBinary mode. * @@ -506,15 +355,20 @@ private static final class BinaryUpdateDescription { /** */ private FieldType fieldType; + /** */ + private Object val; + /** * @param itemId Item id. * @param fieldName Field name. * @param fieldType Field type. + * @param val Field value. 
*/ - private BinaryUpdateDescription(int itemId, String fieldName, FieldType fieldType) { + private BinaryUpdateDescription(int itemId, String fieldName, FieldType fieldType, Object val) { this.itemId = itemId; this.fieldName = fieldName; this.fieldType = fieldType; + this.val = val; } } @@ -565,20 +419,7 @@ private static byte[] getArrayFieldVal() { */ private static BinaryObject newBinaryObject(BinaryObjectBuilder builder, BinaryUpdateDescription desc) { builder.setField(SEQ_NUM_FLD, desc.itemId + 1); - - switch (desc.fieldType) { - case NUMBER: - builder.setField(desc.fieldName, getNumberFieldVal()); - break; - case STRING: - builder.setField(desc.fieldName, getStringFieldVal()); - break; - case ARRAY: - builder.setField(desc.fieldName, getArrayFieldVal()); - break; - case OBJECT: - builder.setField(desc.fieldName, new Object()); - } + builder.setField(desc.fieldName, desc.val); return builder.build(); } @@ -589,60 +430,142 @@ private static BinaryObject newBinaryObject(BinaryObjectBuilder builder, BinaryU */ private static final class BinaryObjectAdder implements IgniteCallable { /** */ - private final IgniteEx ignite; + private final CountDownLatch startLatch; + + /** */ + private final int idx; /** */ private final Queue updatesQueue; /** */ - private final long timeout; + private final AtomicInteger restartIdx; /** */ - private final AtomicBoolean stopFlag; + private final AtomicInteger workersCntr; + + /** */ + @IgniteInstanceResource + private Ignite ignite; /** - * @param ignite Ignite. + * @param startLatch Startup latch. + * @param idx Ignite instance index. * @param updatesQueue Updates queue. - * @param timeout Timeout. - * @param stopFlag Stop flag. + * @param restartIdx The index of the node to be restarted. + * @param workersCntr The number of active computation threads. 
*/ - BinaryObjectAdder(IgniteEx ignite, Queue updatesQueue, long timeout, AtomicBoolean stopFlag) { - this.ignite = ignite; + BinaryObjectAdder( + CountDownLatch startLatch, + int idx, + Queue updatesQueue, + AtomicInteger restartIdx, + AtomicInteger workersCntr + ) { + this.startLatch = startLatch; + this.idx = idx; this.updatesQueue = updatesQueue; - this.timeout = timeout; - this.stopFlag = stopFlag; + this.restartIdx = restartIdx; + this.workersCntr = workersCntr; } /** {@inheritDoc} */ @Override public Object call() throws Exception { - START_LATCH.await(); + startLatch.await(); IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME).withKeepBinary(); - while (!updatesQueue.isEmpty()) { - BinaryUpdateDescription desc = updatesQueue.poll(); + workersCntr.incrementAndGet(); - if (desc == null) - break; + try { + while (!updatesQueue.isEmpty()) { + BinaryUpdateDescription desc = updatesQueue.poll(); - BinaryObjectBuilder builder = ignite.binary().builder(BINARY_TYPE_NAME); + if (desc == null) + break; - BinaryObject bo = newBinaryObject(builder, desc); + BinaryObjectBuilder builder = ignite.binary().builder(BINARY_TYPE_NAME); - cache.put(desc.itemId, bo); + BinaryObject bo = newBinaryObject(builder, desc); - if (stopFlag.get()) - break; - else - Thread.sleep(timeout); - } + cache.put(desc.itemId, bo); - if (updatesQueue.isEmpty()) - FINISH_LATCH_NO_CLIENTS.countDown(); + if (restartIdx.get() == idx) + break; + } + } + finally { + workersCntr.decrementAndGet(); - stopFlag.set(false); + if (restartIdx.get() == idx) + restartIdx.set(-1); + } return null; } } + + /** + * Restarts random server node and computation job. + */ + private final class NodeRestarter implements Runnable { + /** Stop thread flag. */ + private final AtomicBoolean stopFlag; + + /** The index of the node to be restarted. */ + private final AtomicInteger restartIdx; + + /** The current number of computation threads. 
*/ + private final AtomicInteger workersCntr; + + /** + * @param stopFlag Stop thread flag. + * @param restartIdx The index of the node to be restarted. + * @param workersCntr The current number of computation threads. + */ + NodeRestarter(AtomicBoolean stopFlag, AtomicInteger restartIdx, AtomicInteger workersCntr) { + this.stopFlag = stopFlag; + this.restartIdx = restartIdx; + this.workersCntr = workersCntr; + } + + /** {@inheritDoc} */ + @Override public void run() { + try { + startLatch.await(); + + while (!shouldStop()) { + int idx = ThreadLocalRandom.current().nextInt(5); + + restartIdx.set(idx); + + while (restartIdx.get() != -1) { + if (shouldStop()) + return; + + Thread.sleep(10); + } + + stopGrid(idx); + + if (shouldStop()) + return; + + startGrid(idx); + + startComputation(idx, restartIdx, workersCntr); + + Thread.sleep(RESTART_DELAY); + } + } + catch (Exception ignore) { + // No-op. + } + } + + /** */ + private boolean shouldStop() { + return updatesQueue.isEmpty() || stopFlag.get() || Thread.currentThread().isInterrupted(); + } + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/BinaryTxCacheLocalEntriesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/BinaryTxCacheLocalEntriesSelfTest.java index 3528161b589f2..1240dd94113bf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/BinaryTxCacheLocalEntriesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/BinaryTxCacheLocalEntriesSelfTest.java @@ -24,10 +24,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class BinaryTxCacheLocalEntriesSelfTest extends 
GridCacheAbstractSelfTest { /** */ private static final String FIELD = "user-name"; @@ -58,6 +62,7 @@ public class BinaryTxCacheLocalEntriesSelfTest extends GridCacheAbstractSelfTest /** * @throws Exception If failed. */ + @Test public void testLocalEntries() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME).withKeepBinary(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/CacheKeepBinaryWithInterceptorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/CacheKeepBinaryWithInterceptorTest.java index 6cef6d20c2790..144988ca1d547 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/CacheKeepBinaryWithInterceptorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/CacheKeepBinaryWithInterceptorTest.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.processors.cache.binary; +import javax.cache.Cache; import org.apache.ignite.IgniteCache; import org.apache.ignite.binary.BinaryObject; import org.apache.ignite.cache.CacheAtomicityMode; @@ -24,47 +25,47 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import javax.cache.Cache; - -import static org.apache.ignite.cache.CacheAtomicityMode.*; -import static org.apache.ignite.cache.CacheWriteSynchronizationMode.*; +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; 
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheKeepBinaryWithInterceptorTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setMarshaller(null); return cfg; } /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); + @Override protected void afterTest() throws Exception { + stopAllGrids(); - startGrid(0); + super.afterTest(); } /** * @throws Exception If failed. */ + @Test public void testKeepBinaryWithInterceptor() throws Exception { + startGrid(0); + keepBinaryWithInterceptor(cacheConfiguration(ATOMIC, false)); keepBinaryWithInterceptor(cacheConfiguration(TRANSACTIONAL, false)); @@ -80,6 +81,23 @@ public void testKeepBinaryWithInterceptor() throws Exception { keepBinaryWithInterceptorPrimitives(cacheConfiguration(TRANSACTIONAL, true)); } + /** + * @throws Exception If failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9323") + @Test + public void testKeepBinaryWithInterceptorOnMvccCache() throws Exception { + startGrid(0); + + keepBinaryWithInterceptor(cacheConfiguration(TRANSACTIONAL_SNAPSHOT, false)); + keepBinaryWithInterceptorPrimitives(cacheConfiguration(TRANSACTIONAL_SNAPSHOT, true)); + + startGridsMultiThreaded(1, 3); + + keepBinaryWithInterceptor(cacheConfiguration(TRANSACTIONAL_SNAPSHOT, false)); + keepBinaryWithInterceptorPrimitives(cacheConfiguration(TRANSACTIONAL_SNAPSHOT, true)); + } + /** * @param ccfg Cache configuration. */ @@ -230,7 +248,8 @@ static class TestInterceptor1 implements CacheInterceptor entry, BinaryObject newVal) { + @Nullable @Override public BinaryObject onBeforePut(Cache.Entry entry, + BinaryObject newVal) { System.out.println("Before put [e=" + entry + ", newVal=" + newVal + ']'); onBeforePut++; @@ -255,7 +274,8 @@ static class TestInterceptor1 implements CacheInterceptor onBeforeRemove(Cache.Entry entry) { + @Nullable @Override public IgniteBiTuple onBeforeRemove( + Cache.Entry entry) { assertEquals(1, (int)entry.getKey().field("key")); assertEquals(10, (int)entry.getValue().field("val")); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest.java index c65f41c45cb85..35a4d7630841d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest.java @@ -22,10 +22,14 @@ import org.apache.ignite.binary.BinaryObject; import org.apache.ignite.configuration.DeploymentMode; import 
org.apache.ignite.internal.processors.cache.GridCacheAtomicEntryProcessorDeploymentSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Cache EntryProcessor + Deployment. */ +@RunWith(JUnit4.class) public class GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest extends GridCacheAtomicEntryProcessorDeploymentSelfTest { /** {@inheritDoc} */ @@ -41,6 +45,7 @@ public class GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest /** * @throws Exception In case of error. */ + @Test public void testGetDeployment() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -50,6 +55,7 @@ public void testGetDeployment() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetDeployment2() throws Exception { depMode = DeploymentMode.SHARED; @@ -59,6 +65,7 @@ public void testGetDeployment2() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testGetDeploymentWithKeepBinary() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -68,6 +75,7 @@ public void testGetDeploymentWithKeepBinary() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testGetDeployment2WithKeepBinary() throws Exception { depMode = DeploymentMode.SHARED; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectMetadataExchangeMultinodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectMetadataExchangeMultinodeTest.java index 01ef73d167e9c..eebcce33d16a1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectMetadataExchangeMultinodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectMetadataExchangeMultinodeTest.java @@ -18,10 +18,9 @@ import java.util.Map; import java.util.UUID; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.Future; +import java.util.concurrent.CyclicBarrier; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; -import junit.framework.AssertionFailedError; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.binary.BinaryObject; @@ -39,26 +38,28 @@ import org.apache.ignite.internal.managers.communication.GridMessageListener; import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; import org.apache.ignite.internal.util.IgniteUtils; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteCallable; +import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.spi.discovery.DiscoverySpiCustomMessage; import org.apache.ignite.spi.discovery.DiscoverySpiListener; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; 
import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.GridTestUtils.DiscoveryHook; import org.apache.ignite.testframework.GridTestUtils.DiscoverySpiListenerWrapper; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheBinaryObjectMetadataExchangeMultinodeTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean clientMode; @@ -74,6 +75,9 @@ public class GridCacheBinaryObjectMetadataExchangeMultinodeTest extends GridComm /** */ private static final int BINARY_TYPE_ID = 708045005; + /** */ + private static final long MAX_AWAIT = 9_000; + /** */ private static final AtomicInteger metadataReqsCounter = new AtomicInteger(0); @@ -91,7 +95,7 @@ public class GridCacheBinaryObjectMetadataExchangeMultinodeTest extends GridComm }); } - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(sharedStaticIpFinder); cfg.setMarshaller(new BinaryMarshaller()); @@ -107,35 +111,6 @@ public class GridCacheBinaryObjectMetadataExchangeMultinodeTest extends GridComm return cfg; } - /** - * - */ - private static final class ErrorHolder { - /** */ - private volatile Error e; - - /** - * @param e Exception. 
- */ - void error(Error e) { - this.e = e; - } - - /** - * - */ - void fail() { - throw e; - } - - /** - * - */ - boolean isEmpty() { - return e == null; - } - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { super.afterTest(); @@ -143,45 +118,29 @@ boolean isEmpty() { stopAllGrids(); } - /** */ - private static final CountDownLatch LATCH1 = new CountDownLatch(1); - /** * Verifies that if thread tries to read metadata with ongoing update it gets blocked * until acknowledge message arrives. */ + @Test public void testReadRequestBlockedOnUpdatingMetadata() throws Exception { - applyDiscoveryHook = true; - discoveryHook = new DiscoveryHook() { - @Override public void handleDiscoveryMessage(DiscoverySpiCustomMessage msg) { - DiscoveryCustomMessage customMsg = msg == null ? null - : (DiscoveryCustomMessage) IgniteUtils.field(msg, "delegate"); - - if (customMsg instanceof MetadataUpdateAcceptedMessage) { - if (((MetadataUpdateAcceptedMessage)customMsg).typeId() == BINARY_TYPE_ID) - try { - Thread.sleep(300); - } - catch (InterruptedException ignored) { - // No-op. - } - } - } - }; - - final IgniteEx ignite0 = startGrid(0); + final CyclicBarrier barrier = new CyclicBarrier(2); applyDiscoveryHook = false; - final IgniteEx ignite1 = startGrid(1); + final Ignite ignite0 = startGrid(0); + final Ignite ignite1 = startGrid(1); - final ErrorHolder errorHolder = new ErrorHolder(); + final GridFutureAdapter finishFut = new GridFutureAdapter(); applyDiscoveryHook = true; discoveryHook = new DiscoveryHook() { private volatile IgniteEx ignite; @Override public void handleDiscoveryMessage(DiscoverySpiCustomMessage msg) { + if (finishFut.isDone()) + return; + DiscoveryCustomMessage customMsg = msg == null ? 
null : (DiscoveryCustomMessage) IgniteUtils.field(msg, "delegate"); @@ -192,10 +151,17 @@ public void testReadRequestBlockedOnUpdatingMetadata() throws Exception { Object transport = U.field(binaryProc, "transport"); try { + barrier.await(MAX_AWAIT, TimeUnit.MILLISECONDS); + Map syncMap = U.field(transport, "syncMap"); - int size = syncMap.size(); - assertEquals("unexpected size of syncMap: ", 1, size); + GridTestUtils.waitForCondition(new PA() { + @Override public boolean apply() { + return syncMap.size() == 1; + } + }, MAX_AWAIT); + + assertEquals("unexpected size of syncMap: ", 1, syncMap.size()); Object syncKey = syncMap.keySet().iterator().next(); @@ -204,9 +170,11 @@ public void testReadRequestBlockedOnUpdatingMetadata() throws Exception { int ver = U.field(syncKey, "ver"); assertEquals("unexpected pendingVersion: ", 2, ver); + + finishFut.onDone(); } - catch (AssertionFailedError err) { - errorHolder.error(err); + catch (Throwable t) { + finishFut.onDone(t); } } } @@ -220,47 +188,41 @@ public void testReadRequestBlockedOnUpdatingMetadata() throws Exception { final IgniteEx ignite2 = startGrid(2); discoveryHook.ignite(ignite2); - ignite0.executorService().submit(new Runnable() { + // Unfinished PME may affect max await timeout. + awaitPartitionMapExchange(); + + // Update metadata (version 1). + ignite0.executorService(ignite0.cluster().forLocal()).submit(new Runnable() { @Override public void run() { addIntField(ignite0, "f1", 101, 1); } }).get(); - UUID id2 = ignite2.localNode().id(); - - ClusterGroup cg2 = ignite2.cluster().forNodeId(id2); - - Future fut = ignite1.executorService().submit(new Runnable() { + // Update metadata (version 2). + ignite1.executorService(ignite1.cluster().forLocal()).submit(new Runnable() { @Override public void run() { - LATCH1.countDown(); addStringField(ignite1, "f2", "str", 2); } }); - ignite2.compute(cg2).withAsync().call(new IgniteCallable() { + // Read metadata. 
+ IgniteFuture readFut = ignite2.compute(ignite2.cluster().forLocal()).callAsync(new IgniteCallable() { @Override public Object call() throws Exception { - try { - LATCH1.await(); - } - catch (InterruptedException ignored) { - // No-op. - } - - Object fieldVal = ((BinaryObject) ignite2.cache(DEFAULT_CACHE_NAME).withKeepBinary().get(1)).field("f1"); + barrier.await(MAX_AWAIT, TimeUnit.MILLISECONDS); - return fieldVal; + return ((BinaryObject) ignite2.cache(DEFAULT_CACHE_NAME).withKeepBinary().get(1)).field("f1"); } }); - fut.get(); + finishFut.get(MAX_AWAIT); - if (!errorHolder.isEmpty()) - errorHolder.fail(); + assertEquals(101, readFut.get(MAX_AWAIT)); } /** * Verifies that all sequential updates that don't introduce any conflicts are accepted and observed by all nodes. */ + @Test public void testSequentialUpdatesNoConflicts() throws Exception { IgniteEx ignite0 = startGrid(0); @@ -294,6 +256,7 @@ public void testSequentialUpdatesNoConflicts() throws Exception { /** * Verifies that client is able to detect obsolete metadata situation and request up-to-date from the cluster. */ + @Test public void testClientRequestsUpToDateMetadata() throws Exception { final IgniteEx ignite0 = startGrid(0); @@ -329,6 +292,7 @@ public void testClientRequestsUpToDateMetadata() throws Exception { /** * Verifies that client resends request for up-to-date metadata in case of failure on server received first request. 
*/ + @Test public void testClientRequestsUpToDateMetadataOneNodeDies() throws Exception { final Ignite srv0 = startGrid(0); replaceWithStoppingMappingRequestListener(((GridKernalContext)U.field(srv0, "ctx")).io(), 0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectUserClassloaderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectUserClassloaderSelfTest.java index 40d8d041bcca6..a32e04a3218ea 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectUserClassloaderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectUserClassloaderSelfTest.java @@ -32,10 +32,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -43,6 +43,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheBinaryObjectUserClassloaderSelfTest extends GridCommonAbstractTest { /** */ private static volatile boolean customBinaryConf = false; @@ -53,9 +54,6 @@ public class GridCacheBinaryObjectUserClassloaderSelfTest extends GridCommonAbst /** */ private static volatile boolean useWrappingLoader = false; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} 
*/ @Override protected void afterTest() throws Exception { super.afterTest(); @@ -67,12 +65,6 @@ public class GridCacheBinaryObjectUserClassloaderSelfTest extends GridCommonAbst @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration(igniteInstanceName)); cfg.setMarshaller(new BinaryMarshaller()); @@ -140,6 +132,7 @@ CacheConfiguration cacheConfiguration(String igniteInstanceName) { /** * @throws Exception If test failed. */ + @Test public void testConfigurationRegistration() throws Exception { useWrappingLoader = false; @@ -149,6 +142,7 @@ public void testConfigurationRegistration() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testConfigurationRegistrationWithWrappingLoader() throws Exception { useWrappingLoader = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractDataStreamerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractDataStreamerSelfTest.java index 472a7a03b8f7b..05fac42eb405f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractDataStreamerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractDataStreamerSelfTest.java @@ -39,12 +39,16 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC; /** * Test for binary objects stored in cache. */ +@RunWith(JUnit4.class) public abstract class GridCacheBinaryObjectsAbstractDataStreamerSelfTest extends GridCommonAbstractTest { /** */ private static final int THREAD_CNT = 64; @@ -111,6 +115,7 @@ protected int gridCount() { * @throws Exception If failed. */ @SuppressWarnings("BusyWait") + @Test public void testGetPut() throws Exception { final AtomicBoolean flag = new AtomicBoolean(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractMultiThreadedSelfTest.java index e2a1a8b041ae2..16222296c8b48 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractMultiThreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractMultiThreadedSelfTest.java @@ -41,20 +41,18 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC; /** * Test for binary objects stored in cache. 
*/ +@RunWith(JUnit4.class) public abstract class GridCacheBinaryObjectsAbstractMultiThreadedSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int THREAD_CNT = 64; @@ -65,8 +63,6 @@ public abstract class GridCacheBinaryObjectsAbstractMultiThreadedSelfTest extend @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - CacheConfiguration cacheCfg = new CacheConfiguration(DEFAULT_CACHE_NAME); cacheCfg.setCacheMode(cacheMode()); @@ -124,7 +120,9 @@ protected int gridCount() { /** * @throws Exception If failed. */ - @SuppressWarnings("BusyWait") public void testGetPut() throws Exception { + @SuppressWarnings("BusyWait") + @Test + public void testGetPut() throws Exception { final AtomicBoolean flag = new AtomicBoolean(); final LongAdder cnt = new LongAdder(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractSelfTest.java index cc2f2c4cb2011..91a53307529e3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryObjectsAbstractSelfTest.java @@ -68,15 +68,15 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -87,10 +87,8 @@ /** * Test for binary objects stored in cache. */ +@RunWith(JUnit4.class) public abstract class GridCacheBinaryObjectsAbstractSelfTest extends GridCommonAbstractTest { - /** */ - public static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int ENTRY_CNT = 100; @@ -102,12 +100,6 @@ public abstract class GridCacheBinaryObjectsAbstractSelfTest extends GridCommonA @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = createCacheConfig(); cacheCfg.setCacheStoreFactory(singletonFactory(new TestStore())); @@ -222,6 +214,7 @@ protected boolean onheapCacheEnabled() { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testCircularReference() throws Exception { IgniteCache c = keepBinaryCache(); @@ -266,6 +259,7 @@ public void testCircularReference() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGet() throws Exception { IgniteCache c = jcache(0); @@ -290,6 +284,67 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testReplace() throws Exception { + IgniteCache c = jcache(0); + + for (int i = 0; i < ENTRY_CNT; i++) + c.put(i, new TestObject(i)); + + for (int i = 0; i < ENTRY_CNT; i++) { + TestObject obj = c.get(i); + + assertEquals(i, obj.val); + } + + IgniteCache kpc = keepBinaryCache(); + + BinaryObjectBuilder bldr = grid(0).binary().builder(TestObject.class.getName()); + + bldr.setField("val", -42); + + BinaryObject testObj = bldr.build(); + + for (int i = 0; i < ENTRY_CNT; i++) { + BinaryObject po = kpc.get(i); + + assertEquals(i, (int)po.field("val")); + + assertTrue(kpc.replace(i, po, testObj)); + } + } + + /** + * @throws Exception If failed. + */ + @Test + public void testRemove() throws Exception { + IgniteCache c = jcache(0); + + for (int i = 0; i < ENTRY_CNT; i++) + c.put(i, new TestObject(i)); + + for (int i = 0; i < ENTRY_CNT; i++) { + TestObject obj = c.get(i); + + assertEquals(i, obj.val); + } + + IgniteCache kpc = keepBinaryCache(); + + for (int i = 0; i < ENTRY_CNT; i++) { + BinaryObject po = kpc.get(i); + + assertEquals(i, (int)po.field("val")); + + assertTrue(kpc.remove(i, po)); + } + } + + /** + * @throws Exception If failed. + */ + @Test public void testIterator() throws Exception { IgniteCache c = jcache(0); @@ -329,6 +384,7 @@ public void testIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollection() throws Exception { IgniteCache> c = jcache(0); @@ -375,6 +431,7 @@ public void testCollection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMap() throws Exception { IgniteCache> c = jcache(0); @@ -420,6 +477,7 @@ public void testMap() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSingletonList() throws Exception { IgniteCache> c = jcache(0); @@ -447,6 +505,7 @@ public void testSingletonList() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAsync() throws Exception { IgniteCache c = jcache(0); @@ -473,6 +532,7 @@ public void testGetAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBasicArrays() throws Exception { IgniteCache cache = jcache(0); @@ -506,6 +566,7 @@ public void testBasicArrays() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomArrays() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-3244"); @@ -542,6 +603,7 @@ private void checkArrayClass(IgniteCache cache, Object arr) { /** * @throws Exception If failed. */ + @Test public void testGetTx1() throws Exception { checkGetTx(PESSIMISTIC, REPEATABLE_READ); } @@ -549,6 +611,7 @@ public void testGetTx1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetTx2() throws Exception { checkGetTx(PESSIMISTIC, READ_COMMITTED); } @@ -595,6 +658,7 @@ private void checkGetTx(TransactionConcurrency concurrency, TransactionIsolation /** * @throws Exception If failed. */ + @Test public void testGetTxAsync1() throws Exception { checkGetAsyncTx(PESSIMISTIC, REPEATABLE_READ); } @@ -602,6 +666,7 @@ public void testGetTxAsync1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetTxAsync2() throws Exception { checkGetAsyncTx(PESSIMISTIC, READ_COMMITTED); } @@ -649,6 +714,7 @@ private void checkGetAsyncTx(TransactionConcurrency concurrency, TransactionIsol /** * @throws Exception If failed. */ + @Test public void testGetAsyncTx() throws Exception { if (atomicityMode() != TRANSACTIONAL) return; @@ -684,6 +750,7 @@ public void testGetAsyncTx() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAll() throws Exception { IgniteCache c = jcache(0); @@ -724,6 +791,7 @@ public void testGetAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAllAsync() throws Exception { IgniteCache c = jcache(0); @@ -764,6 +832,7 @@ public void testGetAllAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAllTx1() throws Exception { checkGetAllTx(PESSIMISTIC, REPEATABLE_READ); } @@ -771,6 +840,7 @@ public void testGetAllTx1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAllTx2() throws Exception { checkGetAllTx(PESSIMISTIC, READ_COMMITTED); } @@ -836,6 +906,7 @@ private void checkGetAllTx(TransactionConcurrency concurrency, TransactionIsolat /** * @throws Exception If failed. */ + @Test public void testGetAllAsyncTx1() throws Exception { checkGetAllAsyncTx(PESSIMISTIC, REPEATABLE_READ); } @@ -843,6 +914,7 @@ public void testGetAllAsyncTx1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAllAsyncTx2() throws Exception { checkGetAllAsyncTx(PESSIMISTIC, READ_COMMITTED); } @@ -908,6 +980,7 @@ private void checkGetAllAsyncTx(TransactionConcurrency concurrency, TransactionI * */ @SuppressWarnings("unchecked") + @Test public void testCrossFormatObjectsIdentity() { IgniteCache c = binKeysCache(); @@ -929,6 +1002,7 @@ public void testCrossFormatObjectsIdentity() { * */ @SuppressWarnings("unchecked") + @Test public void testPutWithArrayHashing() { IgniteCache c = binKeysCache(); @@ -963,6 +1037,7 @@ public void testPutWithArrayHashing() { * */ @SuppressWarnings("unchecked") + @Test public void testPutWithFieldsHashing() { IgniteCache c = binKeysCache(); @@ -996,6 +1071,7 @@ public void testPutWithFieldsHashing() { * */ @SuppressWarnings("unchecked") + @Test public void testPutWithCustomHashing() { IgniteCache c = binKeysCache(); @@ -1026,6 +1102,7 @@ public void testPutWithCustomHashing() { /** * @throws Exception if failed. */ + @Test public void testKeepBinaryTxOverwrite() throws Exception { if (atomicityMode() != TRANSACTIONAL) return; @@ -1050,6 +1127,7 @@ public void testKeepBinaryTxOverwrite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCache() throws Exception { for (int i = 0; i < gridCount(); i++) jcache(i).localLoadCache(null); @@ -1066,6 +1144,7 @@ public void testLoadCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheAsync() throws Exception { for (int i = 0; i < gridCount(); i++) jcache(i).loadCacheAsync(null).get(); @@ -1082,6 +1161,7 @@ public void testLoadCacheAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheFilteredAsync() throws Exception { for (int i = 0; i < gridCount(); i++) { IgniteCache c = jcache(i); @@ -1106,6 +1186,7 @@ public void testLoadCacheFilteredAsync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransform() throws Exception { IgniteCache c = keepBinaryCache(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryStoreAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryStoreAbstractSelfTest.java index 1b7cb298bc620..b62252f6f7c83 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryStoreAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheBinaryStoreAbstractSelfTest.java @@ -31,20 +31,18 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; import java.util.concurrent.ConcurrentHashMap; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for cache store with binary. 
*/ +@RunWith(JUnit4.class) public abstract class GridCacheBinaryStoreAbstractSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final TestStore STORE = new TestStore(); @@ -77,12 +75,6 @@ public abstract class GridCacheBinaryStoreAbstractSelfTest extends GridCommonAbs cfg.setCacheConfiguration(cacheCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - GridCacheBinaryStoreAbstractSelfTest.cfg = cfg; return cfg; @@ -110,6 +102,7 @@ public abstract class GridCacheBinaryStoreAbstractSelfTest extends GridCommonAbs /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { jcache().put(new Key(1), new Value(1)); @@ -119,6 +112,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAll() throws Exception { Map map = new HashMap<>(); @@ -133,6 +127,7 @@ public void testPutAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoad() throws Exception { populateMap(STORE.map(), 1); @@ -146,6 +141,7 @@ public void testLoad() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadAll() throws Exception { populateMap(STORE.map(), 1, 2, 3); @@ -170,6 +166,7 @@ public void testLoadAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemove() throws Exception { for (int i = 1; i <= 3; i++) jcache().put(new Key(i), new Value(i)); @@ -182,6 +179,7 @@ public void testRemove() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemoveAll() throws Exception { for (int i = 1; i <= 3; i++) jcache().put(new Key(i), new Value(i)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataMultinodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataMultinodeTest.java index 81614cb9ff7a5..7c40031251b85 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataMultinodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataMultinodeTest.java @@ -31,27 +31,26 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.binary.BinaryObjectBuilder; import org.apache.ignite.binary.BinaryType; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class GridCacheClientNodeBinaryObjectMetadataMultinodeTest extends GridCommonAbstractTest { - /** */ - protected 
static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -61,7 +60,7 @@ public class GridCacheClientNodeBinaryObjectMetadataMultinodeTest extends GridCo cfg.setPeerClassLoadingEnabled(false); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder).setForceServerMode(true); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); cfg.setMarshaller(new BinaryMarshaller()); @@ -86,12 +85,13 @@ public class GridCacheClientNodeBinaryObjectMetadataMultinodeTest extends GridCo /** * @throws Exception If failed. */ + @Test public void testClientMetadataInitialization() throws Exception { startGrids(2); final AtomicBoolean stop = new AtomicBoolean(); - final ConcurrentHashSet allTypes = new ConcurrentHashSet<>(); + final GridConcurrentHashSet allTypes = new GridConcurrentHashSet<>(); IgniteInternalFuture fut; @@ -181,6 +181,7 @@ public void testClientMetadataInitialization() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailoverOnStart() throws Exception { startGrids(4); @@ -267,6 +268,7 @@ public void testFailoverOnStart() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClientStartsFirst() throws Exception { client = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataTest.java index 0316edc54307a..3215713ef89e4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/GridCacheClientNodeBinaryObjectMetadataTest.java @@ -32,12 +32,16 @@ import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; /** * */ +@RunWith(JUnit4.class) public class GridCacheClientNodeBinaryObjectMetadataTest extends GridCacheAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -94,6 +98,7 @@ public class GridCacheClientNodeBinaryObjectMetadataTest extends GridCacheAbstra /** * @throws Exception If failed. 
*/ + @Test public void testBinaryMetadataOnClient() throws Exception { Ignite ignite0 = ignite(gridCount() - 1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteOptimisticTxSuspendResumeMultiServerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/datastreaming/DataStreamProcessorPersistenceBinarySelfTest.java similarity index 73% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteOptimisticTxSuspendResumeMultiServerTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/datastreaming/DataStreamProcessorPersistenceBinarySelfTest.java index b7003d4df6288..84fb997375273 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteOptimisticTxSuspendResumeMultiServerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/datastreaming/DataStreamProcessorPersistenceBinarySelfTest.java @@ -15,16 +15,14 @@ * limitations under the License. */ -package org.apache.ignite.internal.processors.cache.distributed; +package org.apache.ignite.internal.processors.cache.binary.datastreaming; /** * */ -public class IgniteOptimisticTxSuspendResumeMultiServerTest extends IgniteOptimisticTxSuspendResumeTest { - /** - * @return Number of server nodes. 
- */ - @Override protected int serversNumber() { - return 4; +public class DataStreamProcessorPersistenceBinarySelfTest extends DataStreamProcessorBinarySelfTest { + /** {@inheritDoc} */ + @Override public boolean persistenceEnabled() { + return true; } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/datastreaming/GridDataStreamerImplSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/datastreaming/GridDataStreamerImplSelfTest.java index c629871333952..9cacc5abf5264 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/datastreaming/GridDataStreamerImplSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/binary/datastreaming/GridDataStreamerImplSelfTest.java @@ -37,10 +37,10 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -48,10 +48,8 @@ /** * Tests for {@code IgniteDataStreamerImpl}. */ +@RunWith(JUnit4.class) public class GridDataStreamerImplSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of keys to load via data streamer. 
*/ private static final int KEYS_COUNT = 1000; @@ -62,11 +60,6 @@ public class GridDataStreamerImplSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - if (binaries) { BinaryMarshaller marsh = new BinaryMarshaller(); @@ -100,6 +93,7 @@ private CacheConfiguration cacheConfiguration() { * * @throws Exception If failed. */ + @Test public void testAddDataFromMap() throws Exception { try { binaries = false; @@ -148,6 +142,7 @@ public void testAddDataFromMap() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddMissingBinary() throws Exception { try { binaries = true; @@ -183,6 +178,7 @@ public void testAddMissingBinary() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddBinaryDataFromMap() throws Exception { try { binaries = true; @@ -242,6 +238,7 @@ public void testAddBinaryDataFromMap() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testAddBinaryCreatedWithBuilder() throws Exception { try { binaries = true; @@ -258,7 +255,7 @@ public void testAddBinaryCreatedWithBuilder() throws Exception { BinaryObjectBuilder obj = g0.binary().builder("NoExistedClass"); obj.setField("id", i); - obj.setField("name", String.valueOf("name = " + i)); + obj.setField("name", "name = " + i); dataLdr.addData(i, obj.build()); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/context/IgniteCacheAbstractExecutionContextTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/context/IgniteCacheAbstractExecutionContextTest.java index 10a5912fc91c7..2f96c9f1b2a17 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/context/IgniteCacheAbstractExecutionContextTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/context/IgniteCacheAbstractExecutionContextTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest; import org.apache.ignite.testframework.GridTestExternalClassLoader; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheAbstractExecutionContextTest extends IgniteCacheAbstractTest { /** */ public static final String TEST_VALUE = "org.apache.ignite.tests.p2p.CacheDeploymentTestValue"; @@ -62,6 +66,7 @@ public abstract class IgniteCacheAbstractExecutionContextTest extends IgniteCach /** * @throws Exception If failed. 
*/ + @Test public void testUsersClassLoader() throws Exception { UsersClassLoader testClassLdr = (UsersClassLoader)grid(0).configuration().getClassLoader(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractDataStructuresFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractDataStructuresFailoverSelfTest.java index 797e90f6a0d4f..e408b97e9bba7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractDataStructuresFailoverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractDataStructuresFailoverSelfTest.java @@ -69,6 +69,9 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.testframework.GridTestUtils.waitForCondition; @@ -76,6 +79,7 @@ /** * Failover tests for cache data structures. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractDataStructuresFailoverSelfTest extends IgniteCollectionAbstractTest { /** */ private static final long TEST_TIMEOUT = 3 * 60 * 1000; @@ -171,6 +175,7 @@ protected IgniteEx startClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicLongFailsWhenServersLeft() throws Exception { client = true; @@ -201,6 +206,7 @@ public void testAtomicLongFailsWhenServersLeft() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAtomicLongTopologyChange() throws Exception { try (IgniteAtomicLong atomic = grid(0).atomicLong(STRUCTURE_NAME, 10, true)) { Ignite g = startGrid(NEW_IGNITE_INSTANCE_NAME); @@ -218,6 +224,7 @@ public void testAtomicLongTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicLongConstantTopologyChange() throws Exception { doTestAtomicLong(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -225,6 +232,7 @@ public void testAtomicLongConstantTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicLongConstantMultipleTopologyChange() throws Exception { doTestAtomicLong(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -263,6 +271,7 @@ private void doTestAtomicLong(ConstantTopologyChangeWorker topWorker) throws Exc /** * @throws Exception If failed. */ + @Test public void testAtomicReferenceTopologyChange() throws Exception { try (IgniteAtomicReference atomic = grid(0).atomicReference(STRUCTURE_NAME, 10, true)) { Ignite g = startGrid(NEW_IGNITE_INSTANCE_NAME); @@ -280,6 +289,7 @@ public void testAtomicReferenceTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicReferenceConstantTopologyChange() throws Exception { doTestAtomicReference(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -287,6 +297,7 @@ public void testAtomicReferenceConstantTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicReferenceConstantMultipleTopologyChange() throws Exception { doTestAtomicReference(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -325,6 +336,7 @@ private void doTestAtomicReference(ConstantTopologyChangeWorker topWorker) throw /** * @throws Exception If failed. 
*/ + @Test public void testAtomicStampedTopologyChange() throws Exception { try (IgniteAtomicStamped atomic = grid(0).atomicStamped(STRUCTURE_NAME, 10, 10, true)) { Ignite g = startGrid(NEW_IGNITE_INSTANCE_NAME); @@ -348,6 +360,7 @@ public void testAtomicStampedTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicStampedConstantTopologyChange() throws Exception { doTestAtomicStamped(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -355,6 +368,7 @@ public void testAtomicStampedConstantTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicStampedConstantMultipleTopologyChange() throws Exception { doTestAtomicStamped(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -405,6 +419,7 @@ private void doTestAtomicStamped(ConstantTopologyChangeWorker topWorker) throws /** * @throws Exception If failed. */ + @Test public void testCountDownLatchTopologyChange() throws Exception { try (IgniteCountDownLatch latch = grid(0).countDownLatch(STRUCTURE_NAME, 20, true, true)) { try { @@ -427,6 +442,7 @@ public void testCountDownLatchTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreFailoverSafe() throws Exception { try (final IgniteSemaphore semaphore = grid(0).semaphore(STRUCTURE_NAME, 20, true, true)) { Ignite g = startGrid(NEW_IGNITE_INSTANCE_NAME); @@ -450,6 +466,7 @@ public void testSemaphoreFailoverSafe() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreNonFailoverSafe() throws Exception { try (IgniteSemaphore sem = grid(0).semaphore(STRUCTURE_NAME, 20, false, true)) { Ignite g = startGrid(NEW_IGNITE_INSTANCE_NAME); @@ -481,6 +498,7 @@ public void testSemaphoreNonFailoverSafe() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCanCloseSetInInterruptedThread() throws Exception { doCloseByInterruptedThread(grid(0).set(STRUCTURE_NAME, new CollectionConfiguration())); } @@ -488,6 +506,7 @@ public void testCanCloseSetInInterruptedThread() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCanCloseQueueInInterruptedThread() throws Exception { doCloseByInterruptedThread(grid(0).queue(STRUCTURE_NAME, 0, new CollectionConfiguration())); } @@ -495,6 +514,7 @@ public void testCanCloseQueueInInterruptedThread() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCanCloseAtomicLongInInterruptedThread() throws Exception { doCloseByInterruptedThread(grid(0).atomicLong(STRUCTURE_NAME, 10, true)); } @@ -502,6 +522,7 @@ public void testCanCloseAtomicLongInInterruptedThread() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCanCloseAtomicReferenceInInterruptedThread() throws Exception { doCloseByInterruptedThread(grid(0).atomicReference(STRUCTURE_NAME, 10, true)); } @@ -509,6 +530,7 @@ public void testCanCloseAtomicReferenceInInterruptedThread() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCanCloseCountDownLatchInInterruptedThread() throws Exception { IgniteCountDownLatch latch = grid(0).countDownLatch(STRUCTURE_NAME, 1, true, true); latch.countDown(); @@ -519,6 +541,7 @@ public void testCanCloseCountDownLatchInInterruptedThread() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCanCloseAtomicStampedInInterruptedThread() throws Exception { doCloseByInterruptedThread(grid(0).atomicStamped(STRUCTURE_NAME, 10, 10,true)); } @@ -526,6 +549,7 @@ public void testCanCloseAtomicStampedInInterruptedThread() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCanCloseSemaphoreInInterruptedThread() throws Exception { doCloseByInterruptedThread(grid(0).semaphore(STRUCTURE_NAME, 1, true, true)); } @@ -553,6 +577,7 @@ private void doCloseByInterruptedThread(final Closeable closeableDs) throws Exce /** * @throws Exception If failed. */ + @Test public void testSemaphoreSingleNodeFailure() throws Exception { final Ignite i1 = grid(0); @@ -606,6 +631,7 @@ public void testSemaphoreSingleNodeFailure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreConstantTopologyChangeFailoverSafe() throws Exception { doTestSemaphore(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), true); } @@ -613,6 +639,7 @@ public void testSemaphoreConstantTopologyChangeFailoverSafe() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreConstantTopologyChangeNonFailoverSafe() throws Exception { doTestSemaphore(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), false); } @@ -620,6 +647,7 @@ public void testSemaphoreConstantTopologyChangeNonFailoverSafe() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testSemaphoreMultipleTopologyChangeFailoverSafe() throws Exception { doTestSemaphore(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), true); } @@ -627,6 +655,7 @@ public void testSemaphoreMultipleTopologyChangeFailoverSafe() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreMultipleTopologyChangeNonFailoverSafe() throws Exception { doTestSemaphore(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), false); } @@ -705,6 +734,7 @@ private void doTestSemaphore(ConstantTopologyChangeWorker topWorker, final boole /** * @throws Exception If failed. 
*/ + @Test public void testReentrantLockFailsWhenServersLeft() throws Exception { testReentrantLockFailsWhenServersLeft(false); } @@ -712,6 +742,7 @@ public void testReentrantLockFailsWhenServersLeft() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFairReentrantLockFailsWhenServersLeft() throws Exception { testReentrantLockFailsWhenServersLeft(true); } @@ -785,6 +816,7 @@ public void testReentrantLockFailsWhenServersLeft(final boolean fair) throws Exc /** * @throws Exception If failed. */ + @Test public void testReentrantLockConstantTopologyChangeFailoverSafe() throws Exception { doTestReentrantLock(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), true, false); } @@ -792,6 +824,7 @@ public void testReentrantLockConstantTopologyChangeFailoverSafe() throws Excepti /** * @throws Exception If failed. */ + @Test public void testReentrantLockConstantMultipleTopologyChangeFailoverSafe() throws Exception { doTestReentrantLock(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), true, false); } @@ -799,6 +832,7 @@ public void testReentrantLockConstantMultipleTopologyChangeFailoverSafe() throws /** * @throws Exception If failed. */ + @Test public void testReentrantLockConstantTopologyChangeNonFailoverSafe() throws Exception { doTestReentrantLock(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), false, false); } @@ -806,6 +840,7 @@ public void testReentrantLockConstantTopologyChangeNonFailoverSafe() throws Exce /** * @throws Exception If failed. */ + @Test public void testReentrantLockConstantMultipleTopologyChangeNonFailoverSafe() throws Exception { doTestReentrantLock(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), false, false); } @@ -813,6 +848,7 @@ public void testReentrantLockConstantMultipleTopologyChangeNonFailoverSafe() thr /** * @throws Exception If failed. 
*/ + @Test public void testFairReentrantLockConstantTopologyChangeFailoverSafe() throws Exception { doTestReentrantLock(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), true, true); } @@ -820,6 +856,7 @@ public void testFairReentrantLockConstantTopologyChangeFailoverSafe() throws Exc /** * @throws Exception If failed. */ + @Test public void testFairReentrantLockConstantMultipleTopologyChangeFailoverSafe() throws Exception { doTestReentrantLock(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), true, true); } @@ -827,6 +864,7 @@ public void testFairReentrantLockConstantMultipleTopologyChangeFailoverSafe() th /** * @throws Exception If failed. */ + @Test public void testFairReentrantLockConstantTopologyChangeNonFailoverSafe() throws Exception { doTestReentrantLock(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), false, true); } @@ -834,6 +872,7 @@ public void testFairReentrantLockConstantTopologyChangeNonFailoverSafe() throws /** * @throws Exception If failed. */ + @Test public void testFairReentrantLockConstantMultipleTopologyChangeNonFailoverSafe() throws Exception { doTestReentrantLock(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT), false, true); } @@ -916,6 +955,7 @@ private void doTestReentrantLock( /** * @throws Exception If failed. */ + @Test public void testCountDownLatchConstantTopologyChange() throws Exception { doTestCountDownLatch(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -923,6 +963,7 @@ public void testCountDownLatchConstantTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCountDownLatchConstantMultipleTopologyChange() throws Exception { doTestCountDownLatch(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -966,6 +1007,7 @@ private void doTestCountDownLatch(ConstantTopologyChangeWorker topWorker) throws /** * @throws Exception If failed. 
*/ + @Test public void testFifoQueueTopologyChange() throws Exception { try { grid(0).queue(STRUCTURE_NAME, 0, config(false)).put(10); @@ -988,6 +1030,7 @@ public void testFifoQueueTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueTopologyChange() throws Exception { ConstantTopologyChangeWorker topWorker = new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT); @@ -1042,6 +1085,7 @@ public void testQueueTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueConstantTopologyChange() throws Exception { int topChangeThreads = collectionCacheMode() == CacheMode.PARTITIONED ? 1 : TOP_CHANGE_THREAD_CNT; @@ -1051,6 +1095,7 @@ public void testQueueConstantTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueConstantMultipleTopologyChange() throws Exception { int topChangeThreads = collectionCacheMode() == CacheMode.PARTITIONED ? 1 : TOP_CHANGE_THREAD_CNT; @@ -1121,6 +1166,7 @@ private void doTestQueue(ConstantTopologyChangeWorker topWorker) throws Exceptio /** * @throws Exception If failed. */ + @Test public void testAtomicSequenceInitialization() throws Exception { checkAtomicSequenceInitialization(false); } @@ -1128,6 +1174,7 @@ public void testAtomicSequenceInitialization() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicSequenceInitializationOnStableNodes() throws Exception { checkAtomicSequenceInitialization(true); } @@ -1211,6 +1258,7 @@ private void checkAtomicSequenceInitialization(boolean limitProjection) throws E /** * @throws Exception If failed. */ + @Test public void testAtomicSequenceTopologyChange() throws Exception { try (IgniteAtomicSequence s = grid(0).atomicSequence(STRUCTURE_NAME, 10, true)) { Ignite g = startGrid(NEW_IGNITE_INSTANCE_NAME); @@ -1226,6 +1274,7 @@ public void testAtomicSequenceTopologyChange() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAtomicSequenceConstantTopologyChange() throws Exception { doTestAtomicSequence(new ConstantTopologyChangeWorker(TOP_CHANGE_THREAD_CNT, true)); } @@ -1233,6 +1282,7 @@ public void testAtomicSequenceConstantTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicSequenceConstantMultipleTopologyChange() throws Exception { doTestAtomicSequence(multipleTopologyChangeWorker(TOP_CHANGE_THREAD_CNT)); } @@ -1272,6 +1322,7 @@ private void doTestAtomicSequence(ConstantTopologyChangeWorker topWorker) throws /** * @throws Exception If failed. */ + @Test public void testUncommitedTxLeave() throws Exception { final int val = 10; @@ -1320,7 +1371,7 @@ private class ConstantTopologyChangeWorker { protected final AtomicBoolean failed = new AtomicBoolean(false); /** */ - private final int topChangeThreads; + protected final int topChangeThreads; /** Flag to enable circular topology change. */ private boolean circular; @@ -1445,7 +1496,7 @@ public MultipleTopologyChangeWorker(int topChangeThreads) { throw F.wrap(e); } } - }, TOP_CHANGE_THREAD_CNT, "topology-change-thread"); + }, topChangeThreads, "topology-change-thread"); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractQueueFailoverDataConsistencySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractQueueFailoverDataConsistencySelfTest.java index 5b3d201917d44..1cf1b11a87e0d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractQueueFailoverDataConsistencySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAbstractQueueFailoverDataConsistencySelfTest.java @@ -37,6 +37,9 @@ import org.apache.ignite.internal.processors.datastructures.GridCacheQueueHeaderKey; import 
org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,6 +47,7 @@ /** * Queue failover test. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractQueueFailoverDataConsistencySelfTest extends IgniteCollectionAbstractTest { /** */ private static final String QUEUE_NAME = "FailoverQueueTest"; @@ -104,6 +108,7 @@ public abstract class GridCacheAbstractQueueFailoverDataConsistencySelfTest exte /** * @throws Exception If failed. */ + @Test public void testAddFailover() throws Exception { testAddFailover(false); } @@ -111,6 +116,7 @@ public void testAddFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddFailoverCollocated() throws Exception { testAddFailover(true); } @@ -199,6 +205,7 @@ private void testAddFailover(IgniteQueue queue, final List kil /** * @throws Exception If failed. */ + @Test public void testPollFailover() throws Exception { testPollFailover(false); } @@ -206,6 +213,7 @@ public void testPollFailover() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPollFailoverCollocated() throws Exception { testPollFailover(true); } @@ -380,4 +388,4 @@ private int primaryQueueNode(IgniteQueue queue) throws Exception { return -1; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceApiSelfAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceApiSelfAbstractTest.java index 015d154ed0e37..a4d4094d0cc91 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceApiSelfAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceApiSelfAbstractTest.java @@ -25,6 +25,9 @@ import org.apache.ignite.configuration.AtomicConfiguration; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -32,6 +35,7 @@ /** * Basic tests for atomic reference. */ +@RunWith(JUnit4.class) public abstract class GridCacheAtomicReferenceApiSelfAbstractTest extends IgniteAtomicsAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -43,6 +47,7 @@ public abstract class GridCacheAtomicReferenceApiSelfAbstractTest extends Ignite * * @throws Exception If failed. */ + @Test public void testPrepareAtomicReference() throws Exception { /* Name of first atomic. */ String atomicName1 = UUID.randomUUID().toString(); @@ -81,6 +86,7 @@ public void testPrepareAtomicReference() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSetAndGet() throws Exception { String atomicName = UUID.randomUUID().toString(); @@ -100,6 +106,7 @@ public void testSetAndGet() throws Exception { * * @throws Exception If failed. */ + @Test public void testCompareAndSetSimpleValue() throws Exception { String atomicName = UUID.randomUUID().toString(); @@ -123,6 +130,7 @@ public void testCompareAndSetSimpleValue() throws Exception { * * @throws Exception If failed. */ + @Test public void testCompareAndSetNullValue() throws Exception { String atomicName = UUID.randomUUID().toString(); @@ -142,6 +150,7 @@ public void testCompareAndSetNullValue() throws Exception { * * @throws Exception If failed. */ + @Test public void testIsolation() throws Exception { Ignite ignite = grid(0); @@ -192,6 +201,7 @@ public void testIsolation() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultipleStructuresInDifferentGroups() throws Exception { Ignite ignite = grid(0); @@ -253,4 +263,4 @@ public void testMultipleStructuresInDifferentGroups() throws Exception { assertNotNull(ignite.atomicReference("ref4", cfg, "d", false)); assertNotNull(ignite.atomicReference("ref6", cfg, "f", false)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceMultiNodeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceMultiNodeAbstractTest.java index 8331454583934..b6c88a6a0a9cd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceMultiNodeAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicReferenceMultiNodeAbstractTest.java @@ -22,10 +22,14 @@ import org.apache.ignite.IgniteAtomicReference; import org.apache.ignite.IgniteAtomicStamped; import 
org.apache.ignite.lang.IgniteCallable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * AtomicReference and AtomicStamped multi node tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheAtomicReferenceMultiNodeAbstractTest extends IgniteAtomicsAbstractTest { /** */ protected static final int GRID_CNT = 4; @@ -40,6 +44,7 @@ public abstract class GridCacheAtomicReferenceMultiNodeAbstractTest extends Igni * * @throws Exception If failed. */ + @Test public void testAtomicReference() throws Exception { // Get random name of reference. final String refName = UUID.randomUUID().toString(); @@ -96,6 +101,7 @@ public void testAtomicReference() throws Exception { * * @throws Exception If failed. */ + @Test public void testAtomicStamped() throws Exception { // Get random name of stamped. final String stampedName = UUID.randomUUID().toString(); @@ -153,4 +159,4 @@ public void testAtomicStamped() throws Exception { } }); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicStampedApiSelfAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicStampedApiSelfAbstractTest.java index 9629e8b715b1f..f5e93ae13b129 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicStampedApiSelfAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheAtomicStampedApiSelfAbstractTest.java @@ -25,6 +25,9 @@ import org.apache.ignite.configuration.AtomicConfiguration; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -32,6 +35,7 @@ /** * Basic tests for atomic stamped. */ +@RunWith(JUnit4.class) public abstract class GridCacheAtomicStampedApiSelfAbstractTest extends IgniteAtomicsAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -43,6 +47,7 @@ public abstract class GridCacheAtomicStampedApiSelfAbstractTest extends IgniteAt * * @throws Exception If failed. */ + @Test public void testPrepareAtomicStamped() throws Exception { /** Name of first atomic. */ String atomicName1 = UUID.randomUUID().toString(); @@ -81,6 +86,7 @@ public void testPrepareAtomicStamped() throws Exception { * * @throws Exception If failed. */ + @Test public void testSetAndGet() throws Exception { String atomicName = UUID.randomUUID().toString(); @@ -105,6 +111,7 @@ public void testSetAndGet() throws Exception { * * @throws Exception If failed. */ + @Test public void testCompareAndSetSimpleValue() throws Exception { String atomicName = UUID.randomUUID().toString(); @@ -132,6 +139,7 @@ public void testCompareAndSetSimpleValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIsolation() throws Exception { Ignite ignite = grid(0); @@ -186,6 +194,7 @@ public void testIsolation() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMultipleStructuresInDifferentGroups() throws Exception { Ignite ignite = grid(0); @@ -226,4 +235,4 @@ public void testMultipleStructuresInDifferentGroups() throws Exception { assertNotNull(ignite.atomicStamped("atomic1", "a", 1, false)); assertNotNull(ignite.atomicStamped("atomic3", cfg, "c", 3, false)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueApiSelfAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueApiSelfAbstractTest.java index ef97c91f03230..d25f4057a79b0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueApiSelfAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueApiSelfAbstractTest.java @@ -43,6 +43,9 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -51,6 +54,7 @@ /** * Queue basic tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheQueueApiSelfAbstractTest extends IgniteCollectionAbstractTest { /** */ private static final int QUEUE_CAPACITY = 3; @@ -68,6 +72,7 @@ public abstract class GridCacheQueueApiSelfAbstractTest extends IgniteCollection * * @throws Exception If failed. */ + @Test public void testPrepareQueue() throws Exception { // Random sequence names. String queueName1 = UUID.randomUUID().toString(); @@ -99,6 +104,7 @@ public void testPrepareQueue() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testAddUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -117,6 +123,7 @@ public void testAddUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddDeleteUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -137,6 +144,7 @@ public void testAddDeleteUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testCollectionMethods() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -213,6 +221,7 @@ public void testCollectionMethods() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddPollUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -235,6 +244,7 @@ public void testAddPollUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddPeekUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -258,6 +268,7 @@ public void testAddPeekUnbounded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIterator() throws Exception { checkIterator(false); } @@ -265,6 +276,7 @@ public void testIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIteratorCollocated() throws Exception { checkIterator(true); } @@ -328,6 +340,7 @@ private void checkIterator(boolean collocated) { * * @throws Exception If failed. */ + @Test public void testPutGetUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -350,6 +363,7 @@ public void testPutGetUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutGetMultithreadUnbounded() throws Exception { // Random queue name. 
String queueName = UUID.randomUUID().toString(); @@ -378,6 +392,7 @@ public void testPutGetMultithreadUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutGetMultithreadBounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -405,6 +420,7 @@ public void testPutGetMultithreadBounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testQueueRemoveMultithreadBounded() throws Exception { // Random queue name. final String queueName = UUID.randomUUID().toString(); @@ -485,6 +501,7 @@ public void testQueueRemoveMultithreadBounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutRemoveUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -510,6 +527,7 @@ public void testPutRemoveUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutRemoveMultiThreadedUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -544,6 +562,7 @@ public void testPutRemoveMultiThreadedUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutRemovePeekPollUnbounded() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -575,6 +594,7 @@ public void testPutRemovePeekPollUnbounded() throws Exception { * * @throws Exception If failed. */ + @Test public void testRemovePeek() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -608,6 +628,7 @@ public void testRemovePeek() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReuseCache() throws Exception { CollectionConfiguration colCfg = collectionConfiguration(); @@ -621,6 +642,7 @@ public void testReuseCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNotReuseCache() throws Exception { CollectionConfiguration colCfg1 = collectionConfiguration(); @@ -641,6 +663,7 @@ public void testNotReuseCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFilterNode() throws Exception { CollectionConfiguration colCfg1 = collectionConfiguration(); @@ -667,6 +690,7 @@ public void testFilterNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSystemCache() throws Exception { CollectionConfiguration colCfg = collectionConfiguration(); @@ -687,6 +711,7 @@ public void testSystemCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityRun() throws Exception { final CollectionConfiguration colCfg = collectionConfiguration(); @@ -735,6 +760,7 @@ public void testAffinityRun() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityCall() throws Exception { final CollectionConfiguration colCfg = collectionConfiguration(); @@ -788,6 +814,7 @@ public void testAffinityCall() throws Exception { * * @throws Exception If failed. */ + @Test public void testIsolation() throws Exception { Ignite ignite = grid(0); @@ -841,6 +868,7 @@ public void testIsolation() throws Exception { * * @throws Exception If failed. */ + @Test public void testCacheReuse() throws Exception { Ignite ignite = grid(0); @@ -877,6 +905,7 @@ public void testCacheReuse() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMultipleStructuresInDifferentGroups() throws Exception { Ignite ignite = grid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueCleanupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueCleanupSelfTest.java index 75183b0c36609..6489eb803fd76 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueCleanupSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueCleanupSelfTest.java @@ -36,6 +36,9 @@ import org.apache.ignite.internal.processors.datastructures.GridCacheQueueHeaderKey; import org.apache.ignite.internal.util.typedef.PAX; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -43,6 +46,7 @@ /** * Tests cleanup of orphaned queue items. */ +@RunWith(JUnit4.class) public class GridCacheQueueCleanupSelfTest extends IgniteCollectionAbstractTest { /** */ private static final String QUEUE_NAME1 = "CleanupTestQueue1"; @@ -77,6 +81,7 @@ public class GridCacheQueueCleanupSelfTest extends IgniteCollectionAbstractTest /** * @throws Exception If failed. 
*/ + @Test public void testCleanup() throws Exception { IgniteQueue queue = grid(0).queue(QUEUE_NAME1, 0, config(false)); @@ -234,4 +239,4 @@ private IgniteInternalFuture startAddPollThread(final Ignite ignite, final At } }); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueClientDisconnectTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueClientDisconnectTest.java index dac3ff6730e41..0b66ebc0cf6ab 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueClientDisconnectTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueClientDisconnectTest.java @@ -1,127 +1,121 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.internal.processors.cache.datastructures; - -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; -import org.apache.ignite.Ignite; -import org.apache.ignite.IgniteClientDisconnectedException; -import org.apache.ignite.IgniteQueue; -import org.apache.ignite.cache.CacheAtomicityMode; -import org.apache.ignite.configuration.CollectionConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -/** - * - */ -public class GridCacheQueueClientDisconnectTest extends GridCommonAbstractTest { - /** */ - private static final String IGNITE_QUEUE_NAME = "ignite-queue-client-reconnect-test"; - - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** */ - private static final int FAILURE_DETECTION_TIMEOUT = 10_000; - - /** */ - private boolean clientMode; - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - spi.setClientReconnectDisabled(false); - - cfg.setDiscoverySpi(spi); - - cfg.setFailureDetectionTimeout(FAILURE_DETECTION_TIMEOUT); - cfg.setClientFailureDetectionTimeout(FAILURE_DETECTION_TIMEOUT); - - if (clientMode) - cfg.setClientMode(true); - - return cfg; - } - - /** - * @param cacheAtomicityMode Atomicity mode. - * @return Configuration. 
- */ - private static CollectionConfiguration collectionConfiguration(CacheAtomicityMode cacheAtomicityMode) { - CollectionConfiguration colCfg = new CollectionConfiguration(); - - colCfg.setAtomicityMode(cacheAtomicityMode); - - return colCfg; - } - - /** - * @throws Exception If failed. - */ - public void testClientDisconnect() throws Exception { - try { - Ignite server = startGrid(0); - - clientMode = true; - - Ignite client = startGrid(1); - - awaitPartitionMapExchange(); - - final IgniteQueue queue = client.queue( - IGNITE_QUEUE_NAME, 0, collectionConfiguration(CacheAtomicityMode.ATOMIC)); - - final CountDownLatch latch = new CountDownLatch(1); - - GridTestUtils.runAsync(new Runnable() { - @Override public void run() { - try { - Object value = queue.take(); - } - catch (IgniteClientDisconnectedException icd) { - latch.countDown(); - } - catch (Exception e) { - } - } - }); - - U.sleep(5000); - - server.close(); - - boolean countReachedZero = latch.await(FAILURE_DETECTION_TIMEOUT * 2, TimeUnit.MILLISECONDS); - - assertTrue("IgniteClientDisconnectedException was not thrown", countReachedZero); - } - finally { - stopAllGrids(); - } - } -} +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.datastructures; + +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteClientDisconnectedException; +import org.apache.ignite.IgniteQueue; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.CollectionConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class GridCacheQueueClientDisconnectTest extends GridCommonAbstractTest { + /** */ + private static final String IGNITE_QUEUE_NAME = "ignite-queue-client-reconnect-test"; + + /** */ + private static final int FAILURE_DETECTION_TIMEOUT = 10_000; + + /** */ + private boolean clientMode; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setClientReconnectDisabled(false); + + cfg.setFailureDetectionTimeout(FAILURE_DETECTION_TIMEOUT); + cfg.setClientFailureDetectionTimeout(FAILURE_DETECTION_TIMEOUT); + + if (clientMode) + cfg.setClientMode(true); + + return cfg; + } + + /** + * @param cacheAtomicityMode Atomicity mode. + * @return Configuration. 
+ */ + private static CollectionConfiguration collectionConfiguration(CacheAtomicityMode cacheAtomicityMode) { + CollectionConfiguration colCfg = new CollectionConfiguration(); + + colCfg.setAtomicityMode(cacheAtomicityMode); + + return colCfg; + } + + /** + * @throws Exception If failed. + */ + @Test + public void testClientDisconnect() throws Exception { + try { + Ignite server = startGrid(0); + + clientMode = true; + + Ignite client = startGrid(1); + + awaitPartitionMapExchange(); + + final IgniteQueue queue = client.queue( + IGNITE_QUEUE_NAME, 0, collectionConfiguration(CacheAtomicityMode.ATOMIC)); + + final CountDownLatch latch = new CountDownLatch(1); + + GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + try { + Object value = queue.take(); + } + catch (IgniteClientDisconnectedException icd) { + latch.countDown(); + } + catch (Exception e) { + } + } + }); + + U.sleep(5000); + + server.close(); + + boolean countReachedZero = latch.await(FAILURE_DETECTION_TIMEOUT * 2, TimeUnit.MILLISECONDS); + + assertTrue("IgniteClientDisconnectedException was not thrown", countReachedZero); + } + finally { + stopAllGrids(); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueJoinedNodeSelfAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueJoinedNodeSelfAbstractTest.java index eb8c3c08eef78..a418b01eb2616 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueJoinedNodeSelfAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueJoinedNodeSelfAbstractTest.java @@ -33,21 +33,20 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.resources.IgniteInstanceResource; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that joining node is able to take items from queue. * See GG-2311 for more information. */ +@RunWith(JUnit4.class) public abstract class GridCacheQueueJoinedNodeSelfAbstractTest extends IgniteCollectionAbstractTest { /** */ protected static final int GRID_CNT = 3; - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ protected static final int ITEMS_CNT = 300; @@ -59,6 +58,7 @@ public abstract class GridCacheQueueJoinedNodeSelfAbstractTest extends IgniteCol /** * @throws Exception If failed. */ + @Test public void testTakeFromJoined() throws Exception { String queueName = UUID.randomUUID().toString(); @@ -283,4 +283,4 @@ private void awaitDone() throws IgniteInterruptedCheckedException { return S.toString(TakeJob.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeAbstractSelfTest.java index f62d165778f47..4e691dd8ea8ab 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeAbstractSelfTest.java @@ -39,10 +39,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Queue multi 
node tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheQueueMultiNodeAbstractSelfTest extends IgniteCollectionAbstractTest { /** */ private static final int GRID_CNT = 4; @@ -110,6 +114,7 @@ public abstract class GridCacheQueueMultiNodeAbstractSelfTest extends IgniteColl /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { String queueName = UUID.randomUUID().toString(); @@ -125,6 +130,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutPollCollocated() throws Exception { try { final String queueName = UUID.randomUUID().toString(); @@ -227,7 +233,7 @@ public void testPutPollCollocated() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("unchecked") + @Test public void testAddAll() throws Exception { try { String queueName = UUID.randomUUID().toString(); @@ -256,6 +262,7 @@ public void testAddAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { String queueName = UUID.randomUUID().toString(); @@ -279,6 +286,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutTake() throws Exception { String queueName = UUID.randomUUID().toString(); @@ -298,6 +306,7 @@ public void testPutTake() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddMultinode() throws Exception { testAddMultinode(true); @@ -361,6 +370,7 @@ private void testAddMultinode(final boolean collocated) throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddPollMultinode() throws Exception { testAddPollMultinode(true); @@ -465,6 +475,7 @@ private void testAddPollMultinode(final boolean collocated) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIterator() throws Exception { final String queueName = UUID.randomUUID().toString(); @@ -513,6 +524,7 @@ public void testIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSerialization() throws Exception { // Random queue name. String queueName = UUID.randomUUID().toString(); @@ -776,4 +788,4 @@ protected static class PutTakeJob implements IgniteCallable { return S.toString(PutTakeJob.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeConsistencySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeConsistencySelfTest.java index a48c3e29982ec..8d79ca36fe5d8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeConsistencySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueMultiNodeConsistencySelfTest.java @@ -31,6 +31,9 @@ import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -40,6 +43,7 @@ /** * Consistency test for cache queue in multi node environment. */ +@RunWith(JUnit4.class) public class GridCacheQueueMultiNodeConsistencySelfTest extends IgniteCollectionAbstractTest { /** */ protected static final int GRID_CNT = 3; @@ -93,6 +97,7 @@ public class GridCacheQueueMultiNodeConsistencySelfTest extends IgniteCollection /** * @throws Exception If failed. 
*/ + @Test public void testIteratorIfBackupDisabled() throws Exception { backups = 0; @@ -102,6 +107,7 @@ public void testIteratorIfBackupDisabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIteratorIfNoPreloadingAndBackupDisabledAndRepartitionForced() throws Exception { backups = 0; @@ -113,6 +119,7 @@ public void testIteratorIfNoPreloadingAndBackupDisabledAndRepartitionForced() th /** * @throws Exception If failed. */ + @Test public void testIteratorIfBackupEnabled() throws Exception { backups = 1; @@ -122,6 +129,7 @@ public void testIteratorIfBackupEnabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIteratorIfBackupEnabledAndOneNodeIsKilled() throws Exception { backups = 1; @@ -203,4 +211,4 @@ private void checkCacheQueue() throws Exception { queue0.close(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueRotativeMultiNodeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueRotativeMultiNodeAbstractTest.java index d10a020a4a459..b816efd98638c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueRotativeMultiNodeAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheQueueRotativeMultiNodeAbstractTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Queue multi node tests. 
*/ +@RunWith(JUnit4.class) public abstract class GridCacheQueueRotativeMultiNodeAbstractTest extends IgniteCollectionAbstractTest { /** */ protected static final int GRID_CNT = 4; @@ -79,6 +83,7 @@ public abstract class GridCacheQueueRotativeMultiNodeAbstractTest extends Ignite * * @throws Exception If failed. */ + @Test public void testPutRotativeNodes() throws Exception { String queueName = UUID.randomUUID().toString(); @@ -108,6 +113,7 @@ public void testPutRotativeNodes() throws Exception { * * @throws Exception If failed. */ + @Test public void testPutTakeRotativeNodes() throws Exception { String queueName = UUID.randomUUID().toString(); @@ -137,6 +143,7 @@ public void testPutTakeRotativeNodes() throws Exception { * * @throws Exception If failed. */ + @Test public void testTakeRemoveRotativeNodes() throws Exception { lthTake = new CountDownLatch(1); @@ -378,4 +385,4 @@ protected static class RemoveQueueJob implements IgniteCallable { return S.toString(RemoveQueueJob.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceApiSelfAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceApiSelfAbstractTest.java index 81292c7cbefcf..0924d4c0af11c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceApiSelfAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceApiSelfAbstractTest.java @@ -38,6 +38,9 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static 
org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -46,6 +49,7 @@ /** * Cache sequence basic tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheSequenceApiSelfAbstractTest extends IgniteAtomicsAbstractTest { /** */ protected static final int BATCH_SIZE = 3; @@ -145,6 +149,7 @@ public abstract class GridCacheSequenceApiSelfAbstractTest extends IgniteAtomics /** * @throws Exception If failed. */ + @Test public void testPrepareSequence() throws Exception { // Random sequence names. String locSeqName1 = UUID.randomUUID().toString(); @@ -169,6 +174,7 @@ public void testPrepareSequence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddWrongValue() throws Exception { for (IgniteAtomicSequence seq : seqArr) { try { @@ -194,6 +200,7 @@ public void testAddWrongValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndIncrement() throws Exception { for (int i = 0; i < MAX_LOOPS_NUM; i++) { for (IgniteAtomicSequence seq : seqArr) @@ -207,6 +214,7 @@ public void testGetAndIncrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrementAndGet() throws Exception { for (int i = 0; i < MAX_LOOPS_NUM; i++) { for (IgniteAtomicSequence seq : seqArr) @@ -220,6 +228,7 @@ public void testIncrementAndGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddAndGet() throws Exception { for (int i = 1; i < MAX_LOOPS_NUM; i++) { for (IgniteAtomicSequence seq : seqArr) @@ -233,6 +242,7 @@ public void testAddAndGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndAdd() throws Exception { for (int i = 1; i < MAX_LOOPS_NUM; i++) { for (IgniteAtomicSequence seq : seqArr) @@ -246,6 +256,7 @@ public void testGetAndAdd() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAndAddInTx() throws Exception { try (Transaction tx = grid().transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { for (int i = 1; i < MAX_LOOPS_NUM; i++) { @@ -261,6 +272,7 @@ public void testGetAndAddInTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSequenceIntegrity0() throws Exception { // Random sequence names. String locSeqName1 = UUID.randomUUID().toString(); @@ -294,6 +306,7 @@ public void testSequenceIntegrity0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSequenceIntegrity1() throws Exception { sequenceIntegrity(1, 0); sequenceIntegrity(7, -1500); @@ -303,6 +316,7 @@ public void testSequenceIntegrity1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultiThreadedSequenceIntegrity() throws Exception { multiThreadedSequenceIntegrity(1, 0); multiThreadedSequenceIntegrity(7, -1500); @@ -312,6 +326,7 @@ public void testMultiThreadedSequenceIntegrity() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemove() throws Exception { String locSeqName = UUID.randomUUID().toString(); @@ -334,6 +349,7 @@ public void testRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheSets() throws Exception { // Make new atomic sequence in cache. IgniteAtomicSequence seq = grid().atomicSequence(UUID.randomUUID().toString(), 0, true); @@ -363,6 +379,7 @@ public void testCacheSets() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultipleStructuresInDifferentGroups() throws Exception { Ignite ignite = grid(0); @@ -404,6 +421,7 @@ public void testMultipleStructuresInDifferentGroups() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSequenceReserveSizeFromExplicitConfiguration() throws Exception { Ignite ignite = grid(0); @@ -631,4 +649,4 @@ private void removeSequence(String name) throws Exception { assertNull(grid().atomicSequence(name, 0, false)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceMultiNodeAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceMultiNodeAbstractSelfTest.java index 7747d1bd2a80b..48d36b537c17d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceMultiNodeAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSequenceMultiNodeAbstractSelfTest.java @@ -34,10 +34,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.LoggerResource; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Sequence multi node tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheSequenceMultiNodeAbstractSelfTest extends IgniteAtomicsAbstractTest implements Externalizable { /** */ @@ -59,6 +63,7 @@ public abstract class GridCacheSequenceMultiNodeAbstractSelfTest extends IgniteA * * @throws Exception If failed. */ + @Test public void testIncrementAndGet() throws Exception { Collection res = new HashSet<>(); @@ -94,6 +99,7 @@ public void testIncrementAndGet() throws Exception { * * @throws Exception If failed. */ + @Test public void testGetAndIncrement() throws Exception { Collection res = new HashSet<>(); @@ -129,6 +135,7 @@ public void testGetAndIncrement() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMarshalling() throws Exception { String seqName = UUID.randomUUID().toString(); @@ -266,4 +273,4 @@ private static class GetAndIncrementJob implements IgniteCallable> { return S.toString(GetAndIncrementJob.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetAbstractSelfTest.java index 59f13d9b0151e..bb1b8d54e521f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetAbstractSelfTest.java @@ -32,6 +32,7 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteCluster; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteSet; import org.apache.ignite.cache.CacheMode; @@ -52,6 +53,9 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -62,6 +66,7 @@ /** * Cache set tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheSetAbstractSelfTest extends IgniteCollectionAbstractTest { /** */ protected static final String SET_NAME = "testSet"; @@ -166,6 +171,7 @@ private void assertSetIteratorsCleared() { /** * @throws Exception If failed. 
*/ + @Test public void testCreateRemove() throws Exception { testCreateRemove(false); } @@ -173,6 +179,7 @@ public void testCreateRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateRemoveCollocated() throws Exception { testCreateRemove(true); } @@ -181,13 +188,22 @@ public void testCreateRemoveCollocated() throws Exception { * @param collocated Collocation flag. * @throws Exception If failed. */ - private void testCreateRemove(boolean collocated) throws Exception { + protected void testCreateRemove(boolean collocated) throws Exception { + testCreateRemove(collocated, 0); + } + + /** + * @param collocated Collocation flag. + * @param nodeIdx Index of the node from which to create set. + * @throws Exception If failed. + */ + protected void testCreateRemove(boolean collocated, int nodeIdx) throws Exception { for (int i = 0; i < gridCount(); i++) assertNull(grid(i).set(SET_NAME, null)); CollectionConfiguration colCfg0 = config(collocated); - IgniteSet set0 = grid(0).set(SET_NAME, colCfg0); + IgniteSet set0 = grid(nodeIdx).set(SET_NAME, colCfg0); assertNotNull(set0); @@ -233,6 +249,7 @@ private void testCreateRemove(boolean collocated) throws Exception { /** * @throws Exception If failed. */ + @Test public void testApi() throws Exception { testApi(false); } @@ -240,6 +257,7 @@ public void testApi() throws Exception { /** * @throws Exception If failed. */ + @Test public void testApiCollocated() throws Exception { testApi(true); } @@ -396,6 +414,7 @@ private void testApi(boolean collocated) throws Exception { /** * @throws Exception If failed. */ + @Test public void testIterator() throws Exception { testIterator(false); } @@ -403,6 +422,7 @@ public void testIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIteratorCollocated() throws Exception { testIterator(true); } @@ -411,11 +431,19 @@ public void testIteratorCollocated() throws Exception { * @param collocated Collocation flag. 
* @throws Exception If failed. */ - @SuppressWarnings("deprecation") - private void testIterator(boolean collocated) throws Exception { + protected void testIterator(boolean collocated) throws Exception { + testIterator(collocated, 0); + } + + /** + * @param collocated Collocation flag. + * @param nodeIdx Index of the node from which to create set. + * @throws Exception If failed. + */ + protected void testIterator(boolean collocated, int nodeIdx) throws Exception { CollectionConfiguration colCfg = config(collocated); - final IgniteSet set0 = grid(0).set(SET_NAME, colCfg); + final IgniteSet set0 = grid(nodeIdx).set(SET_NAME, colCfg); for (int i = 0; i < gridCount(); i++) { IgniteSet set = grid(i).set(SET_NAME, null); @@ -488,6 +516,7 @@ private void testIterator(boolean collocated) throws Exception { /** * @throws Exception If failed. */ + @Test public void testIteratorClose() throws Exception { testIteratorClose(false); } @@ -495,6 +524,7 @@ public void testIteratorClose() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIteratorCloseCollocated() throws Exception { testIteratorClose(true); } @@ -573,6 +603,7 @@ private void createIterators(IgniteSet set) { /** * @throws Exception If failed. */ + @Test public void testNodeJoinsAndLeaves() throws Exception { if (collectionCacheMode() == LOCAL) return; @@ -583,6 +614,7 @@ public void testNodeJoinsAndLeaves() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeJoinsAndLeavesCollocated() throws Exception { if (collectionCacheMode() == LOCAL) return; @@ -633,6 +665,7 @@ private void testNodeJoinsAndLeaves(boolean collocated) throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreaded() throws Exception { testMultithreaded(false); } @@ -640,6 +673,7 @@ public void testMultithreaded() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultithreadedCollocated() throws Exception { if (collectionCacheMode() != PARTITIONED) return; @@ -719,6 +753,7 @@ private void testMultithreaded(final boolean collocated) throws Exception { /** * @throws Exception If failed. */ + @Test public void testCleanup() throws Exception { testCleanup(false); } @@ -726,6 +761,7 @@ public void testCleanup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCleanupCollocated() throws Exception { testCleanup(true); } @@ -734,7 +770,6 @@ public void testCleanupCollocated() throws Exception { * @param collocated Collocation flag. * @throws Exception If failed. */ - @SuppressWarnings("WhileLoopReplaceableByForEach") private void testCleanup(boolean collocated) throws Exception { CollectionConfiguration colCfg = config(collocated); @@ -833,6 +868,7 @@ private void testCleanup(boolean collocated) throws Exception { /** * @throws Exception If failed. */ + @Test public void testSerialization() throws Exception { final IgniteSet set = grid(0).set(SET_NAME, config(false)); @@ -841,7 +877,9 @@ public void testSerialization() throws Exception { for (int i = 0; i < 10; i++) set.add(i); - Collection c = grid(0).compute().broadcast(new IgniteCallable() { + IgniteCluster cluster = grid(0).cluster(); + + Collection c = grid(0).compute(cluster).broadcast(new IgniteCallable() { @Override public Integer call() throws Exception { assertEquals(SET_NAME, set.name()); @@ -858,6 +896,7 @@ public void testSerialization() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityRun() throws Exception { final CollectionConfiguration colCfg = collectionConfiguration(); @@ -908,6 +947,7 @@ public void testAffinityRun() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAffinityCall() throws Exception { final CollectionConfiguration colCfg = collectionConfiguration(); @@ -963,6 +1003,7 @@ public void testAffinityCall() throws Exception { * * @throws Exception If failed. */ + @Test public void testIsolation() throws Exception { CollectionConfiguration colCfg = collectionConfiguration(); @@ -1008,6 +1049,7 @@ public void testIsolation() throws Exception { /** * Test that non collocated sets are stored in a separated cache. */ + @Test public void testCacheReuse() { testCacheReuse(false); } @@ -1015,6 +1057,7 @@ public void testCacheReuse() { /** * Test that collocated sets within the same group and compatible configurations are stored in the same cache. */ + @Test public void testCacheReuseCollocated() { testCacheReuse(true); } @@ -1062,7 +1105,8 @@ private void testCacheReuse(boolean collocated) { * * @throws Exception If failed. */ - public void _testMultipleStructuresInDifferentGroups() throws Exception { + @Test + public void testMultipleStructuresInDifferentGroups() throws Exception { Ignite ignite = grid(0); CollectionConfiguration cfg1 = collectionConfiguration(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetFailoverAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetFailoverAbstractSelfTest.java index ce320bd4a5ec6..16c93635c8391 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetFailoverAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/GridCacheSetFailoverAbstractSelfTest.java @@ -36,12 +36,16 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; 
import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Set failover tests. */ +@RunWith(JUnit4.class) public abstract class GridCacheSetFailoverAbstractSelfTest extends IgniteCollectionAbstractTest { /** */ private static final String SET_NAME = "testFailoverSet"; @@ -88,6 +92,7 @@ public abstract class GridCacheSetFailoverAbstractSelfTest extends IgniteCollect * @throws Exception If failed. */ @SuppressWarnings("WhileLoopReplaceableByForEach") + @Test public void testNodeRestart() throws Exception { IgniteSet set = grid(0).set(SET_NAME, config(false)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicLongApiAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicLongApiAbstractSelfTest.java index d2cb361fee6ff..6b02ec46bb48e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicLongApiAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicLongApiAbstractSelfTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * Cache atomic long api test. */ +@RunWith(JUnit4.class) public abstract class IgniteAtomicLongApiAbstractSelfTest extends IgniteAtomicsAbstractTest { /** Random number generator. */ private static final Random RND = new Random(); @@ -61,6 +65,7 @@ public abstract class IgniteAtomicLongApiAbstractSelfTest extends IgniteAtomicsA /** * @throws Exception If failed. 
*/ + @Test public void testCreateRemove() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -102,6 +107,7 @@ public void testCreateRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIncrementAndGet() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -118,6 +124,7 @@ public void testIncrementAndGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndIncrement() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -134,6 +141,7 @@ public void testGetAndIncrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDecrementAndGet() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -150,6 +158,7 @@ public void testDecrementAndGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndDecrement() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -166,6 +175,7 @@ public void testGetAndDecrement() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndAdd() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -184,6 +194,7 @@ public void testGetAndAdd() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAddAndGet() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -202,6 +213,7 @@ public void testAddAndGet() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetAndSet() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -220,7 +232,7 @@ public void testGetAndSet() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings({"NullableProblems", "ConstantConditions"}) + @Test public void testCompareAndSet() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -246,6 +258,7 @@ public void testCompareAndSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetAndSetInTx() throws Exception { info("Running test [name=" + getName() + ", cacheMode=" + atomicsCacheMode() + ']'); @@ -273,6 +286,7 @@ public void testGetAndSetInTx() throws Exception { * * @throws Exception If failed. */ + @Test public void testIsolation() throws Exception { Ignite ignite = grid(0); @@ -299,6 +313,7 @@ public void testIsolation() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultipleStructuresInDifferentGroups() throws Exception { Ignite ignite = grid(0); @@ -339,4 +354,4 @@ public void testMultipleStructuresInDifferentGroups() throws Exception { assertNotNull(ignite.atomicLong("atomic1", 1, false)); assertNotNull(ignite.atomicLong("atomic3", cfg, 3, false)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicsAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicsAbstractTest.java index 8c8e30fe051ef..4ebc67532deda 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicsAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteAtomicsAbstractTest.java @@ -20,28 +20,16 @@ import org.apache.ignite.cache.CacheMode; import 
org.apache.ignite.configuration.AtomicConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; /** * */ public abstract class IgniteAtomicsAbstractTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - AtomicConfiguration atomicCfg = atomicConfiguration(); assertNotNull(atomicCfg); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteClientDataStructuresAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteClientDataStructuresAbstractTest.java index 118ea52c0ab1c..909a87046545f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteClientDataStructuresAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteClientDataStructuresAbstractTest.java @@ -39,18 +39,17 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteClientDataStructuresAbstractTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODE_CNT = 4; @@ -65,8 +64,6 @@ public abstract class IgniteClientDataStructuresAbstractTest extends GridCommonA ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); } - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); return cfg; @@ -87,6 +84,7 @@ public abstract class IgniteClientDataStructuresAbstractTest extends GridCommonA /** * @throws Exception If failed. */ + @Test public void testSequence() throws Exception { Ignite clientNode = clientIgnite(); Ignite srvNode = serverNode(); @@ -147,6 +145,7 @@ private void testSequence(Ignite creator, Ignite other) throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicLong() throws Exception { Ignite clientNode = clientIgnite(); Ignite srvNode = serverNode(); @@ -226,6 +225,7 @@ private void assertAtomicLongClosedCorrect(IgniteAtomicLong atomicLong) { /** * @throws Exception If failed. */ + @Test public void testSet() throws Exception { Ignite clientNode = clientIgnite(); Ignite srvNode = serverNode(); @@ -274,6 +274,7 @@ private void testSet(Ignite creator, Ignite other) throws Exception { /** * @throws Exception If failed. */ + @Test public void testLatch() throws Exception { Ignite clientNode = clientIgnite(); Ignite srvNode = serverNode(); @@ -352,6 +353,7 @@ private void testLatch(Ignite creator, final Ignite other) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSemaphore() throws Exception { Ignite clientNode = clientIgnite(); Ignite srvNode = serverNode(); @@ -430,6 +432,7 @@ private void testSemaphore(Ignite creator, final Ignite other) throws Exception /** * @throws Exception If failed. */ + @Test public void testReentrantLock() throws Exception { Ignite clientNode = clientIgnite(); Ignite srvNode = serverNode(); @@ -520,6 +523,7 @@ private void testReentrantLock(Ignite creator, final Ignite other) throws Except /** * @throws Exception If failed. */ + @Test public void testQueue() throws Exception { Ignite clientNode = clientIgnite(); Ignite srvNode = serverNode(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCollectionAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCollectionAbstractTest.java index f7e12dd29da3e..f3fbbfa21382f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCollectionAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCollectionAbstractTest.java @@ -30,9 +30,6 @@ import org.apache.ignite.internal.processors.datastructures.GridCacheSetImpl; import org.apache.ignite.internal.processors.datastructures.GridCacheSetProxy; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; @@ -42,21 +39,12 @@ * */ public abstract class IgniteCollectionAbstractTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** 
{@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - return cfg; } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCountDownLatchAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCountDownLatchAbstractSelfTest.java index 4dcdbb7a6cf4b..7cbf3ccb42a7d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCountDownLatchAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteCountDownLatchAbstractSelfTest.java @@ -48,6 +48,9 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.MINUTES; @@ -58,6 +61,7 @@ /** * Cache count down latch self test. */ +@RunWith(JUnit4.class) public abstract class IgniteCountDownLatchAbstractSelfTest extends IgniteAtomicsAbstractTest implements Externalizable { /** */ @@ -77,6 +81,7 @@ public abstract class IgniteCountDownLatchAbstractSelfTest extends IgniteAtomics /** * @throws Exception If failed. */ + @Test public void testLatch() throws Exception { checkLatch(); } @@ -87,6 +92,7 @@ public void testLatch() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testIsolation() throws Exception { Ignite ignite = grid(0); @@ -325,6 +331,7 @@ private void removeLatch(String latchName) /** * @throws Exception If failed. */ + @Test public void testLatchMultinode1() throws Exception { if (gridCount() == 1) return; @@ -382,6 +389,7 @@ public void testLatchMultinode1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLatchBroadcast() throws Exception { Ignite ignite = grid(0); ClusterGroup srvsGrp = ignite.cluster().forServers(); @@ -434,6 +442,7 @@ private IgniteCountDownLatch createLatch2(Ignite client, int numOfSrvs) { /** * @throws Exception If failed. */ + @Test public void testLatchMultinode2() throws Exception { if (gridCount() == 1) return; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureUniqueNameTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureUniqueNameTest.java index db4754c0a7f54..fa32d698e8c17 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureUniqueNameTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureUniqueNameTest.java @@ -41,6 +41,9 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -48,6 +51,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteDataStructureUniqueNameTest extends IgniteCollectionAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -81,6 +85,7 @@ public class IgniteDataStructureUniqueNameTest extends IgniteCollectionAbstractT 
/** * @throws Exception If failed. */ + @Test public void testUniqueNameMultithreaded() throws Exception { testUniqueName(true); } @@ -88,6 +93,7 @@ public void testUniqueNameMultithreaded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUniqueNameMultinode() throws Exception { testUniqueName(false); } @@ -95,6 +101,7 @@ public void testUniqueNameMultinode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateRemove() throws Exception { final String name = IgniteUuid.randomUuid().toString(); @@ -232,6 +239,7 @@ public void testCreateRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUniqueNamePerGroup() throws Exception { Ignite ignite = ignite(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureWithJobTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureWithJobTest.java index 2b99a911e02bb..5b471dc620190 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureWithJobTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteDataStructureWithJobTest.java @@ -23,32 +23,20 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteQueue; import org.apache.ignite.configuration.CollectionConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteDataStructureWithJobTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** {@inheritDoc} */ @Override protected long getTestTimeout() { return 2 * 60_000; @@ -57,6 +45,7 @@ public class IgniteDataStructureWithJobTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testJobWithRestart() throws Exception { Ignite ignite = startGrid(0); @@ -103,4 +92,4 @@ public void testJobWithRestart() throws Exception { fut.get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteExchangeLatchManagerCoordinatorFailTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteExchangeLatchManagerCoordinatorFailTest.java index d8f23acdfc1da..9aa1fc62ca276 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteExchangeLatchManagerCoordinatorFailTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteExchangeLatchManagerCoordinatorFailTest.java @@ -31,10 +31,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link 
ExchangeLatchManager} functionality when latch coordinator is failed. */ +@RunWith(JUnit4.class) public class IgniteExchangeLatchManagerCoordinatorFailTest extends GridCommonAbstractTest { /** */ private static final String LATCH_NAME = "test"; @@ -145,6 +149,7 @@ public class IgniteExchangeLatchManagerCoordinatorFailTest extends GridCommonAbs * Node 3 state -> {@link #all} * Node 4 state -> {@link #beforeCreate} */ + @Test public void testCoordinatorFail1() throws Exception { List> nodeStates = Lists.newArrayList( beforeCreate, @@ -164,6 +169,7 @@ public void testCoordinatorFail1() throws Exception { * Node 3 state -> {@link #all} * Node 4 state -> {@link #beforeCreate} */ + @Test public void testCoordinatorFail2() throws Exception { List> nodeStates = Lists.newArrayList( beforeCountDown, @@ -183,6 +189,7 @@ public void testCoordinatorFail2() throws Exception { * Node 3 state -> {@link #all} * Node 4 state -> {@link #beforeCreate} */ + @Test public void testCoordinatorFail3() throws Exception { List> nodeStates = Lists.newArrayList( all, diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteLockAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteLockAbstractSelfTest.java index 812ff23ffd789..0052183540a40 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteLockAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteLockAbstractSelfTest.java @@ -58,7 +58,10 @@ import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; import org.junit.Rule; +import org.junit.Test; import org.junit.rules.ExpectedException; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MICROSECONDS; import static java.util.concurrent.TimeUnit.MILLISECONDS; @@ -70,6 +73,7 @@ 
/** * Cache reentrant lock self test. */ +@RunWith(JUnit4.class) public abstract class IgniteLockAbstractSelfTest extends IgniteAtomicsAbstractTest implements Externalizable { /** */ @@ -93,6 +97,7 @@ public abstract class IgniteLockAbstractSelfTest extends IgniteAtomicsAbstractTe /** * @throws Exception If failed. */ + @Test public void testReentrantLock() throws Exception { checkReentrantLock(false); @@ -102,6 +107,7 @@ public void testReentrantLock() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailover() throws Exception { if (atomicsCacheMode() == LOCAL) return; @@ -121,6 +127,7 @@ public void testFailover() throws Exception { * * @throws Exception If failed. */ + @Test public void testIsolation() throws Exception { Ignite ignite = grid(0); @@ -414,6 +421,7 @@ private void removeReentrantLock(String lockName, final boolean fair) /** * @throws Exception If failed. */ + @Test public void testLockSerialization() throws Exception { final IgniteLock lock = grid(0).reentrantLock("lock", true, true, true); @@ -443,6 +451,7 @@ public void testLockSerialization() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInitialization() throws Exception { // Test #name() method. { @@ -682,6 +691,7 @@ public void testInitialization() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReentrantLockMultinode1() throws Exception { testReentrantLockMultinode1(false); @@ -759,6 +769,7 @@ private void testReentrantLockMultinode1(final boolean fair) throws Exception { /** * @throws Exception If failed. */ + @Test public void testLockInterruptibly() throws Exception { testLockInterruptibly(false); @@ -836,6 +847,7 @@ private void testLockInterruptibly(final boolean fair) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLockInterruptiblyMultinode() throws Exception { testLockInterruptiblyMultinode(false); @@ -959,6 +971,7 @@ private void testLockInterruptiblyMultinode(final boolean fair) throws Exception /** * @throws Exception If failed. */ + @Test public void testLock() throws Exception { testLock(false); @@ -1042,6 +1055,7 @@ private void testLock(final boolean fair) throws Exception { /** * @throws Exception If failed. */ + @Test public void testTryLock() throws Exception { testTryLock(false); @@ -1125,6 +1139,7 @@ private void testTryLock(final boolean fair) throws Exception { /** * @throws Exception If failed. */ + @Test public void testTryLockTimed() throws Exception { testTryLockTimed(false); @@ -1201,6 +1216,7 @@ private void testTryLockTimed(final boolean fair) throws Exception { /** * @throws Exception If failed. */ + @Test public void testConditionAwaitUninterruptibly() throws Exception { testConditionAwaitUninterruptibly(false); @@ -1290,6 +1306,7 @@ private void testConditionAwaitUninterruptibly(final boolean fair) throws Except /** * @throws Exception If failed. */ + @Test public void testConditionInterruptAwait() throws Exception { testConditionInterruptAwait(false); @@ -1369,6 +1386,7 @@ private void testConditionInterruptAwait(final boolean fair) throws Exception { /** * @throws Exception If failed. */ + @Test public void testHasQueuedThreads() throws Exception { testHasQueuedThreads(false); @@ -1446,6 +1464,7 @@ private void testHasQueuedThreads(final boolean fair) throws Exception { /** * @throws Exception If failed. */ + @Test public void testHasConditionQueuedThreads() throws Exception { testHasConditionQueuedThreads(false); @@ -1557,6 +1576,7 @@ private void testHasConditionQueuedThreads(final boolean fair) throws Exception * on the OS thread scheduling, certain deviation from uniform distribution is tolerated. * @throws Exception If failed. 
*/ + @Test public void testFairness() throws Exception { if (gridCount() == 1) return; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSemaphoreAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSemaphoreAbstractSelfTest.java index 445d469952409..8ddabb7e459a2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSemaphoreAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSemaphoreAbstractSelfTest.java @@ -47,7 +47,10 @@ import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; import org.junit.Rule; +import org.junit.Test; import org.junit.rules.ExpectedException; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MICROSECONDS; import static java.util.concurrent.TimeUnit.MILLISECONDS; @@ -59,6 +62,7 @@ /** * Cache semaphore self test. */ +@RunWith(JUnit4.class) public abstract class IgniteSemaphoreAbstractSelfTest extends IgniteAtomicsAbstractTest implements Externalizable { /** */ @@ -82,6 +86,7 @@ public abstract class IgniteSemaphoreAbstractSelfTest extends IgniteAtomicsAbstr /** * @throws Exception If failed. */ + @Test public void testSemaphore() throws Exception { checkSemaphore(); checkSemaphoreSerialization(); @@ -90,6 +95,7 @@ public void testSemaphore() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailover() throws Exception { if (atomicsCacheMode() == LOCAL) return; @@ -104,6 +110,7 @@ public void testFailover() throws Exception { * * @throws Exception If failed. */ + @Test public void testIsolation() throws Exception { Ignite ignite = grid(0); @@ -282,6 +289,7 @@ private void checkSemaphore() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSemaphoreClosing() throws Exception { IgniteConfiguration cfg; GridStringLogger stringLogger; @@ -472,6 +480,7 @@ private void removeSemaphore(final String semaphoreName) throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreMultinode1() throws Exception { if (gridCount() == 1) return; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSequenceInternalCleanupTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSequenceInternalCleanupTest.java index d5a76f859266b..b6199e160bd31 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSequenceInternalCleanupTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/IgniteSequenceInternalCleanupTest.java @@ -26,16 +26,17 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** */ +@RunWith(JUnit4.class) public class IgniteSequenceInternalCleanupTest extends GridCommonAbstractTest { /** */ public static final int GRIDS_CNT = 5; @@ -46,9 +47,6 @@ public class IgniteSequenceInternalCleanupTest extends GridCommonAbstractTest { /** */ public static final int CACHES_CNT = 10; - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new 
TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -59,12 +57,6 @@ public class IgniteSequenceInternalCleanupTest extends GridCommonAbstractTest { cfg.setActiveOnStart(false); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - AtomicConfiguration atomicCfg = atomicConfiguration(); assertNotNull(atomicCfg); @@ -98,6 +90,7 @@ protected AtomicConfiguration atomicConfiguration() { } /** */ + @Test public void testDeactivate() throws Exception { try { Ignite ignite = startGridsMultiThreaded(GRIDS_CNT); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverNoWaitingAcquirerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverNoWaitingAcquirerTest.java index 862d240339ca1..73a8220207ea9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverNoWaitingAcquirerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverNoWaitingAcquirerTest.java @@ -21,19 +21,19 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.AtomicConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.concurrent.TimeUnit; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; 
import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * - * Class to test the retrieval of a permit on a semaphore after initial semaphore owner has been closed. + * Class to test the retrieval of a permit on a semaphore after initial semaphore owner has been closed. * * IGNITE-7090 * @@ -42,10 +42,8 @@ * * */ +@RunWith(JUnit4.class) public class SemaphoreFailoverNoWaitingAcquirerTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Grid count. */ private static final int GRID_CNT = 3; @@ -56,12 +54,6 @@ public class SemaphoreFailoverNoWaitingAcquirerTest extends GridCommonAbstractTe @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - AtomicConfiguration atomicCfg = atomicConfiguration(); assertNotNull(atomicCfg); @@ -74,6 +66,7 @@ public class SemaphoreFailoverNoWaitingAcquirerTest extends GridCommonAbstractTe /** * @throws Exception If failed. */ + @Test public void testReleasePermitsPartitioned() throws Exception { atomicsCacheMode = PARTITIONED; @@ -83,6 +76,7 @@ public void testReleasePermitsPartitioned() throws Exception { /** * @throws Exception If failed. 
      */
+    @Test
     public void testReleasePermitsReplicated() throws Exception {
         atomicsCacheMode = REPLICATED;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverSafeReleasePermitsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverSafeReleasePermitsTest.java
index 210b234e600b5..d1de54dede2d9 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverSafeReleasePermitsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/SemaphoreFailoverSafeReleasePermitsTest.java
@@ -23,10 +23,10 @@
 import org.apache.ignite.cache.CacheMode;
 import org.apache.ignite.configuration.AtomicConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
@@ -34,10 +34,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class SemaphoreFailoverSafeReleasePermitsTest extends GridCommonAbstractTest {
-    /** */
-    protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Grid count. */
     private static final int GRID_CNT = 3;
@@ -48,12 +46,6 @@ public class SemaphoreFailoverSafeReleasePermitsTest extends GridCommonAbstractT
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
         AtomicConfiguration atomicCfg = atomicConfiguration();

         assertNotNull(atomicCfg);
@@ -66,6 +58,7 @@ public class SemaphoreFailoverSafeReleasePermitsTest extends GridCommonAbstractT
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReleasePermitsPartitioned() throws Exception {
         atomicsCacheMode = PARTITIONED;
@@ -75,6 +68,7 @@ public void testReleasePermitsPartitioned() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReleasePermitsReplicated() throws Exception {
         atomicsCacheMode = REPLICATED;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalCountDownLatchSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalCountDownLatchSelfTest.java
index 9182c398971c3..d39cf5d3111ae 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalCountDownLatchSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalCountDownLatchSelfTest.java
@@ -24,6 +24,9 @@
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteCountDownLatchAbstractSelfTest;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static java.util.concurrent.TimeUnit.MINUTES;
 import static java.util.concurrent.TimeUnit.SECONDS;
@@ -32,6 +35,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteLocalCountDownLatchSelfTest extends IgniteCountDownLatchAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheMode atomicsCacheMode() {
@@ -44,6 +48,7 @@ public class IgniteLocalCountDownLatchSelfTest extends IgniteCountDownLatchAbstr
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testLatch() throws Exception {
         // Test main functionality.
         IgniteCountDownLatch latch = grid(0).countDownLatch("latch", 2, false, true);
@@ -92,4 +97,4 @@ public class IgniteLocalCountDownLatchSelfTest extends IgniteCountDownLatchAbstr
         checkRemovedLatch(latch);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalLockSelfTest.java
index 7e1a11c17678a..b3f1a9d9a6380 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalLockSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalLockSelfTest.java
@@ -24,6 +24,9 @@
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteLockAbstractSelfTest;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static java.util.concurrent.TimeUnit.MINUTES;
 import static org.apache.ignite.cache.CacheMode.LOCAL;
@@ -31,6 +34,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteLocalLockSelfTest extends IgniteLockAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheMode atomicsCacheMode() {
@@ -43,6 +47,7 @@ public class IgniteLocalLockSelfTest extends IgniteLockAbstractSelfTest {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testReentrantLock() throws Exception {
         // Test main functionality.
         IgniteLock lock = grid(0).reentrantLock("lock", true, false, true);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalSemaphoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalSemaphoreSelfTest.java
index a516fc11e2d57..2af67e0d8ab14 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalSemaphoreSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/local/IgniteLocalSemaphoreSelfTest.java
@@ -24,6 +24,9 @@
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteSemaphoreAbstractSelfTest;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static java.util.concurrent.TimeUnit.MINUTES;
 import static org.apache.ignite.cache.CacheMode.LOCAL;
@@ -31,6 +34,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteLocalSemaphoreSelfTest extends IgniteSemaphoreAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheMode atomicsCacheMode() {
@@ -43,6 +47,7 @@ public class IgniteLocalSemaphoreSelfTest extends IgniteSemaphoreAbstractSelfTes
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testSemaphore() throws Exception {
         // Test main functionality.
         IgniteSemaphore semaphore = grid(0).semaphore("semaphore", -2, false, true);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceMultiThreadedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceMultiThreadedTest.java
index 4db9bd3339904..31d77f86c73cc 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceMultiThreadedTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceMultiThreadedTest.java
@@ -28,12 +28,16 @@
 import org.apache.ignite.internal.processors.datastructures.GridCacheAtomicSequenceImpl;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;

 /**
  * Cache partitioned multi-threaded tests.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedAtomicSequenceMultiThreadedTest extends IgniteAtomicsAbstractTest {
     /** Number of threads for multithreaded test. */
     private static final int THREAD_NUM = 30;
@@ -62,6 +66,7 @@ public class GridCachePartitionedAtomicSequenceMultiThreadedTest extends IgniteA
     }

     /** @throws Exception If failed. */
+    @Test
     public void testValues() throws Exception {
         String seqName = UUID.randomUUID().toString();
@@ -137,16 +142,19 @@ public void testValues() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testUpdatedSync() throws Exception {
         checkUpdate(true);
     }

     /** @throws Exception If failed. */
+    @Test
     public void testPreviousSync() throws Exception {
         checkUpdate(false);
     }

     /** @throws Exception If failed. */
+    @Test
     public void testIncrementAndGet() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -163,6 +171,7 @@ public void testIncrementAndGet() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testIncrementAndGetAsync() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -179,6 +188,7 @@ public void testIncrementAndGetAsync() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testGetAndIncrement() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -195,6 +205,7 @@ public void testGetAndIncrement() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testGetAndIncrementAsync() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -211,6 +222,7 @@ public void testGetAndIncrementAsync() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testAddAndGet() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -227,6 +239,7 @@ public void testAddAndGet() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testGetAndAdd() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -243,6 +256,7 @@ public void testGetAndAdd() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testMixed1() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -264,6 +278,7 @@ public void testMixed1() throws Exception {
     }

     /** @throws Exception If failed. */
+    @Test
     public void testMixed2() throws Exception {
         // Random sequence names.
         String seqName = UUID.randomUUID().toString();
@@ -285,6 +300,7 @@ public void testMixed2() throws Exception {

     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testMultipleSequences() throws Exception {
         final int seqCnt = 5;
         final int threadCnt = 5;
@@ -369,4 +385,4 @@ private void checkUpdate(boolean updated) throws Exception {
     private abstract static class GridInUnsafeClosure<E> {
         public abstract void apply(E p) throws IgniteCheckedException;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceTxSelfTest.java
index 19f97bd75ebf6..68727b2a79f62 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceTxSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSequenceTxSelfTest.java
@@ -27,17 +27,18 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteRunnable;
 import org.apache.ignite.resources.IgniteInstanceResource;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;

 /**
  * Tests {@link IgniteAtomicSequence} operations inside started user transaction.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedAtomicSequenceTxSelfTest extends GridCommonAbstractTest {
     /** Number of threads. */
     private static final int THREAD_NUM = 8;
@@ -54,19 +55,10 @@ public class GridCachePartitionedAtomicSequenceTxSelfTest extends GridCommonAbst
     /** Latch. */
     private static CountDownLatch latch;

-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
         cfg.setPublicThreadPoolSize(THREAD_NUM);

         AtomicConfiguration atomicCfg = atomicConfiguration();
@@ -107,6 +99,7 @@ protected AtomicConfiguration atomicConfiguration() {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testTransactionIncrement() throws Exception {
         ignite(0).atomicSequence(SEQ_NAME, 0, true);
@@ -123,6 +116,7 @@ public void testTransactionIncrement() throws Exception {
     /**
      * Tests isolation of system and user transactions.
      */
+    @Test
     public void testIsolation() {
         IgniteAtomicSequence seq = ignite(0).atomicSequence(SEQ_NAME, 0, true);
@@ -166,4 +160,4 @@ private static class IncrementClosure implements IgniteRunnable {
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSetFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSetFailoverSelfTest.java
index 4673549ca88aa..9e88a187fcf2d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSetFailoverSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedAtomicSetFailoverSelfTest.java
@@ -19,12 +19,16 @@
 import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.internal.processors.cache.datastructures.GridCacheSetFailoverAbstractSelfTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;

 /**
  * Set failover tests.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedAtomicSetFailoverSelfTest extends GridCacheSetFailoverAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheAtomicityMode collectionCacheAtomicityMode() {
@@ -32,6 +36,7 @@ public class GridCachePartitionedAtomicSetFailoverSelfTest extends GridCacheSetF
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testNodeRestart() throws Exception {
         fail("https://issues.apache.org/jira/browse/IGNITE-170");
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedDataStructuresFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedDataStructuresFailoverSelfTest.java
index eecfefe949816..ecb2df9b7b1c9 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedDataStructuresFailoverSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedDataStructuresFailoverSelfTest.java
@@ -38,18 +38,4 @@ public class GridCachePartitionedDataStructuresFailoverSelfTest
     @Override protected CacheAtomicityMode collectionCacheAtomicityMode() {
         return TRANSACTIONAL;
     }
-
-    /**
-     *
-     */
-    @Override public void testReentrantLockConstantTopologyChangeNonFailoverSafe() {
-        fail("https://issues.apache.org/jira/browse/IGNITE-6454");
-    }
-
-    /**
-     *
-     */
-    @Override public void testFairReentrantLockConstantTopologyChangeNonFailoverSafe() {
-        fail("https://issues.apache.org/jira/browse/IGNITE-6454");
-    }
 }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedNodeRestartTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedNodeRestartTxSelfTest.java
index 00217184d5252..baf3bcde73f61 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedNodeRestartTxSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedNodeRestartTxSelfTest.java
@@ -25,11 +25,11 @@
 import org.apache.ignite.internal.processors.datastructures.GridCacheAtomicLongValue;
 import org.apache.ignite.internal.processors.datastructures.GridCacheInternalKey;
 import org.apache.ignite.internal.processors.datastructures.GridCacheInternalKeyImpl;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -41,10 +41,8 @@
 /**
  * Test with variable number of nodes.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedNodeRestartTxSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int INIT_GRID_NUM = 3;
@@ -62,12 +60,6 @@ public GridCachePartitionedNodeRestartTxSelfTest() {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(spi);
-
         CacheConfiguration cacheCfg = defaultCacheConfiguration();

         cacheCfg.setCacheMode(PARTITIONED);
@@ -94,6 +86,7 @@ public GridCachePartitionedNodeRestartTxSelfTest() {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSimple() throws Exception {
         String key = UUID.randomUUID().toString();
@@ -115,6 +108,7 @@ public void testSimple() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testCustom() throws Exception {
         String key = UUID.randomUUID().toString();
@@ -136,6 +130,7 @@ public void testCustom() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomic() throws Exception {
         String key = UUID.randomUUID().toString();
@@ -299,4 +294,4 @@ private void prepareAtomic(String key) throws Exception {
         stopGrid(0);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueCreateMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueCreateMultiNodeSelfTest.java
index 4412c57b938b6..7979e3cb1fbff 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueCreateMultiNodeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueCreateMultiNodeSelfTest.java
@@ -33,6 +33,9 @@
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteCollectionAbstractTest;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static java.util.concurrent.TimeUnit.MILLISECONDS;
@@ -46,6 +49,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedQueueCreateMultiNodeSelfTest extends IgniteCollectionAbstractTest {
     /** {@inheritDoc} */
     @Override protected int gridCount() {
@@ -107,6 +111,7 @@ protected CacheConfiguration cacheConfiguration() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueueCreation() throws Exception {
         final AtomicInteger idx = new AtomicInteger();
@@ -161,6 +166,7 @@ public void testQueueCreation() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTx() throws Exception {
         if (cacheConfiguration().getAtomicityMode() != TRANSACTIONAL)
             return;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueEntryMoveSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueEntryMoveSelfTest.java
index cd66f4dbc10ad..ea34e74e6bf12 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueEntryMoveSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedQueueEntryMoveSelfTest.java
@@ -38,6 +38,9 @@
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteCollectionAbstractTest;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -45,6 +48,7 @@
 /**
  * Cache queue test with changing topology.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedQueueEntryMoveSelfTest extends IgniteCollectionAbstractTest {
     /** Queue capacity. */
     private static final int QUEUE_CAP = 5;
@@ -82,6 +86,7 @@ public class GridCachePartitionedQueueEntryMoveSelfTest extends IgniteCollection
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueue() throws Exception {
         final String queueName = "qq";
@@ -212,4 +217,4 @@ private Collection nodes(AffinityFunction aff, int part, Collection
         return assignment.get(part);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedSetFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedSetFailoverSelfTest.java
index ec57deab71c5f..a37944dc77c53 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedSetFailoverSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/GridCachePartitionedSetFailoverSelfTest.java
@@ -19,19 +19,24 @@
 import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.internal.processors.cache.datastructures.GridCacheSetFailoverAbstractSelfTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;

 /**
  * Set failover tests.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedSetFailoverSelfTest extends GridCacheSetFailoverAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheAtomicityMode collectionCacheAtomicityMode() {
         return TRANSACTIONAL;
     }

+    @Test
     @Override public void testNodeRestart(){
         fail("https://issues.apache.org/jira/browse/IGNITE-1593");
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedCountDownLatchSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedCountDownLatchSelfTest.java
index fc9356e21631c..dc1702b7bf0df 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedCountDownLatchSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedCountDownLatchSelfTest.java
@@ -19,6 +19,8 @@
 import org.apache.ignite.cache.CacheMode;
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteCountDownLatchAbstractSelfTest;
+import org.junit.Ignore;
+import org.junit.Test;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -32,7 +34,9 @@ public class IgnitePartitionedCountDownLatchSelfTest extends IgniteCountDownLatc
     }

     /** {@inheritDoc} */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-1793")
+    @Test
     @Override public void testLatch() throws Exception {
-        fail("https://issues.apache.org/jira/browse/IGNITE-1793");
+        // No-op.
     }
 }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedQueueNoBackupsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedQueueNoBackupsTest.java
index aa075c0a9eb16..bb68bbfc9eb0d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedQueueNoBackupsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedQueueNoBackupsTest.java
@@ -28,6 +28,9 @@
 import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.GridCacheMapEntry;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -35,6 +38,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgnitePartitionedQueueNoBackupsTest extends GridCachePartitionedQueueApiSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheMode collectionCacheMode() {
@@ -58,6 +62,7 @@ public class IgnitePartitionedQueueNoBackupsTest extends GridCachePartitionedQue
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCollocation() throws Exception {
         IgniteQueue queue = grid(0).queue("queue", 0, config(true));
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedSetNoBackupsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedSetNoBackupsSelfTest.java
index 5f09dfa84a54b..1c8618d1e2cc6 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedSetNoBackupsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/partitioned/IgnitePartitionedSetNoBackupsSelfTest.java
@@ -26,10 +26,14 @@
 import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.GridCacheMapEntry;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgnitePartitionedSetNoBackupsSelfTest extends GridCachePartitionedSetSelfTest {
     /** {@inheritDoc} */
     @Override protected CollectionConfiguration collectionConfiguration() {
@@ -43,6 +47,7 @@ public class IgnitePartitionedSetNoBackupsSelfTest extends GridCachePartitionedS
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCollocation() throws Exception {
         Set set0 = grid(0).set(SET_NAME, config(true));
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/replicated/GridCacheReplicatedDataStructuresFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/replicated/GridCacheReplicatedDataStructuresFailoverSelfTest.java
index 27fbdcf1d48d7..b093d12f1bb33 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/replicated/GridCacheReplicatedDataStructuresFailoverSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/datastructures/replicated/GridCacheReplicatedDataStructuresFailoverSelfTest.java
@@ -38,32 +38,4 @@ public class GridCacheReplicatedDataStructuresFailoverSelfTest
     @Override protected CacheAtomicityMode collectionCacheAtomicityMode() {
         return TRANSACTIONAL;
     }
-
-    /**
-     *
-     */
-    @Override public void testFairReentrantLockConstantMultipleTopologyChangeNonFailoverSafe() {
-        fail("https://issues.apache.org/jira/browse/IGNITE-6454");
-    }
-
-    /**
-     *
-     */
-    @Override public void testReentrantLockConstantMultipleTopologyChangeNonFailoverSafe() {
-        fail("https://issues.apache.org/jira/browse/IGNITE-6454");
-    }
-
-    /**
-     *
-     */
-    @Override public void testReentrantLockConstantTopologyChangeNonFailoverSafe() {
-        fail("https://issues.apache.org/jira/browse/IGNITE-6454");
-    }
-
-    /**
-     *
-     */
-    @Override public void testFairReentrantLockConstantTopologyChangeNonFailoverSafe() {
-        fail("https://issues.apache.org/jira/browse/IGNITE-6454");
-    }
 }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/Cache64kPartitionsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/Cache64kPartitionsTest.java
index e54251e2b836c..078c5737dae0a 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/Cache64kPartitionsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/Cache64kPartitionsTest.java
@@ -24,10 +24,14 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.WALMode;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class Cache64kPartitionsTest extends GridCommonAbstractTest {
     /** */
     private boolean persistenceEnabled;
@@ -62,6 +66,7 @@ public class Cache64kPartitionsTest extends GridCommonAbstractTest {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testManyPartitionsNoPersistence() throws Exception {
         checkManyPartitions();
     }
@@ -69,6 +74,7 @@ public void testManyPartitionsNoPersistence() throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testManyPartitionsWithPersistence() throws Exception {
         persistenceEnabled = true;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAbstractRestartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAbstractRestartSelfTest.java
index dd9362671c504..61d96bc71339b 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAbstractRestartSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAbstractRestartSelfTest.java
@@ -32,10 +32,14 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Abstract restart test.
*/ +@RunWith(JUnit4.class) public abstract class CacheAbstractRestartSelfTest extends IgniteCacheAbstractTest { /** */ private volatile CountDownLatch cacheCheckedLatch = new CountDownLatch(1); @@ -72,6 +76,7 @@ protected int updatersNumber() { /** * @throws Exception If failed. */ + @Test public void testRestart() throws Exception { final int clientGrid = gridCount() - 1; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAffinityEarlyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAffinityEarlyTest.java index 46669acc56807..8d48651f143e4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAffinityEarlyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAffinityEarlyTest.java @@ -30,14 +30,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.binary.BinaryMarshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheAffinityEarlyTest extends GridCommonAbstractTest { /** Grid count. */ private static int GRID_CNT = 8; @@ -54,18 +55,10 @@ public class CacheAffinityEarlyTest extends GridCommonAbstractTest { /** Futs. 
*/ private Collection<IgniteInternalFuture> futs = new ArrayList<>(GRID_CNT); - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - cfg.setMarshaller(new BinaryMarshaller()); return cfg; @@ -79,6 +72,7 @@ public class CacheAffinityEarlyTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStartNodes() throws Exception { for (int i = 0; i < iters; i++) { try { @@ -156,10 +150,10 @@ private IgniteCache getCache(Ignite grid) { CacheConfiguration ccfg = defaultCacheConfiguration(); ccfg.setCacheMode(CacheMode.PARTITIONED); - ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC); + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); ccfg.setBackups(1); ccfg.setNearConfiguration(null); return grid.getOrCreateCache(ccfg); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsFailoverAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsFailoverAbstractTest.java index f1377df8ddf9f..eff8118dc76ed 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsFailoverAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsFailoverAbstractTest.java @@ -39,12 +39,16 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public abstract class CacheAsyncOperationsFailoverAbstractTest extends GridCacheAbstractSelfTest { /** */ private static final int NODE_CNT = 4; @@ -96,6 +100,7 @@ public abstract class CacheAsyncOperationsFailoverAbstractTest extends GridCache /** * @throws Exception If failed. */ + @Test public void testPutAllAsyncFailover() throws Exception { putAllAsyncFailover(5, 10); } @@ -103,6 +108,7 @@ public void testPutAllAsyncFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllAsyncFailoverManyThreads() throws Exception { putAllAsyncFailover(ignite(0).configuration().getSystemThreadPoolSize() * 2, 3); } @@ -110,6 +116,7 @@ public void testPutAllAsyncFailoverManyThreads() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAsyncFailover() throws Exception { IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME); @@ -372,4 +379,4 @@ public long value() { return S.toString(TestValue.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsTest.java index 0b6d7b1cba115..4f93bb4adb573 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAsyncOperationsTest.java @@ -30,23 +30,22 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class CacheAsyncOperationsTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static volatile CountDownLatch latch; @@ -54,8 +53,6 @@ public class CacheAsyncOperationsTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (igniteInstanceName.equals(getTestIgniteInstanceName(1))) cfg.setClientMode(true); @@ -72,6 +69,7 @@ public class CacheAsyncOperationsTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testAsyncOperationsTx() throws Exception { asyncOperations(TRANSACTIONAL); } @@ -79,6 +77,15 @@ public void testAsyncOperationsTx() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testAsyncOperationsMvccTx() throws Exception { + asyncOperations(TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. + */ + @Test public void testAsyncOperationsAtomic() throws Exception { asyncOperations(ATOMIC); } @@ -255,4 +262,4 @@ private static class TestStore extends CacheStoreAdapter { // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAtomicPrimarySyncBackPressureTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAtomicPrimarySyncBackPressureTest.java index 62707c7a48166..053400177a8b2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAtomicPrimarySyncBackPressureTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheAtomicPrimarySyncBackPressureTest.java @@ -34,12 +34,16 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks that back-pressure control restricts uncontrolled growing * of backup message queue. This means, if queue too big - any reads * will be stopped until received acks from backup nodes. */ +@RunWith(JUnit4.class) public class CacheAtomicPrimarySyncBackPressureTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -73,6 +77,7 @@ public class CacheAtomicPrimarySyncBackPressureTest extends GridCommonAbstractTe /** * @throws Exception If failed. */ + @Test public void testClientPut() throws Exception { Ignite srv1 = startGrid("server1"); Ignite srv2 = startGrid("server2"); @@ -85,6 +90,7 @@ public void testClientPut() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testServerPut() throws Exception { Ignite srv1 = startGrid("server1"); Ignite srv2 = startGrid("server2"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBaselineTopologyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBaselineTopologyTest.java index 053ed82c569d4..272735032786b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBaselineTopologyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBaselineTopologyTest.java @@ -52,10 +52,11 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -64,6 +65,7 @@ /** * */ +@RunWith(JUnit4.class) public class CacheBaselineTopologyTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache"; @@ -83,9 +85,6 @@ public class CacheBaselineTopologyTest extends GridCommonAbstractTest { /** */ private static final String DATA_NODE = "dataNodeUserAttr"; - /** */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { super.beforeTest(); @@ -110,11 +109,6 @@ public class CacheBaselineTopologyTest extends 
GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setConsistentId(igniteInstanceName); if (disableAutoActivation) @@ -151,6 +145,7 @@ public class CacheBaselineTopologyTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testRebalanceForCacheWithNodeFilter() throws Exception { try { final int EMPTY_NODE_IDX = 2; @@ -232,6 +227,7 @@ private static class DataNodeFilter implements IgnitePredicate { /** * @throws Exception If failed. */ + @Test public void testTopologyChangesWithFixedBaseline() throws Exception { startGrids(NODE_COUNT); @@ -367,6 +363,7 @@ public void testTopologyChangesWithFixedBaseline() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBaselineTopologyChangesFromServer() throws Exception { testBaselineTopologyChanges(false); } @@ -374,6 +371,7 @@ public void testBaselineTopologyChangesFromServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBaselineTopologyChangesFromClient() throws Exception { testBaselineTopologyChanges(true); } @@ -381,6 +379,7 @@ public void testBaselineTopologyChangesFromClient() throws Exception { /** * @throws Exception if failed. */ + @Test public void testClusterActiveWhileBaselineChanging() throws Exception { startGrids(NODE_COUNT); @@ -542,6 +541,7 @@ private void testBaselineTopologyChanges(boolean fromClient) throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimaryLeft() throws Exception { startGrids(NODE_COUNT); @@ -629,6 +629,7 @@ else if (grid(i).localNode().equals(affNodes.get(1))) /** * @throws Exception If failed. 
*/ + @Test public void testPrimaryLeftAndClusterRestart() throws Exception { startGrids(NODE_COUNT); @@ -740,6 +741,7 @@ else if (grid(i).localNode().equals(affNodes.get(1))) { /** * @throws Exception if failed. */ + @Test public void testMetadataUpdate() throws Exception { startGrids(5); @@ -776,6 +778,7 @@ public void testMetadataUpdate() throws Exception { /** * @throws Exception if failed. */ + @Test public void testClusterRestoredOnRestart() throws Exception { startGrids(5); @@ -811,6 +814,7 @@ public void testClusterRestoredOnRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNonPersistentCachesIgnoreBaselineTopology() throws Exception { Ignite ig = startGrids(4); @@ -832,6 +836,7 @@ public void testNonPersistentCachesIgnoreBaselineTopology() throws Exception { /** * @throws Exception if failed. */ + @Test public void testAffinityAssignmentChangedAfterRestart() throws Exception { int parts = 32; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnGetAllTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnGetAllTest.java new file mode 100644 index 0000000000000..dd301b13417a1 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnGetAllTest.java @@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import java.util.HashSet; +import java.util.Random; +import java.util.Set; +import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheMode.REPLICATED; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheBlockOnGetAllTest extends CacheBlockOnReadAbstractTest { + + /** {@inheritDoc} */ + @Override @NotNull protected CacheReadBackgroundOperation getReadOperation() { + return new IntCacheReadBackgroundOperation() { + /** Random. 
*/ + private Random random = new Random(); + + /** {@inheritDoc} */ + @Override public void doRead() { + Set keys = new HashSet<>(); + + for (int i = 0; i < 500; i++) + keys.add(random.nextInt(entriesCount())); + + cache().getAll(keys); + } + }; + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStopBaselineAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStopBaselineAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStopBaselineTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStopBaselineTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testCreateCacheAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testCreateCacheAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testCreateCacheTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, 
cacheMode = REPLICATED) + @Test + @Override public void testCreateCacheTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testDestroyCacheAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testDestroyCacheAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testDestroyCacheTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testDestroyCacheTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStartServerAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStartServerAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStartServerTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void 
testStartServerTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStopServerAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStopServerAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStopServerTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStopServerTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testUpdateBaselineTopologyAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testUpdateBaselineTopologyAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testUpdateBaselineTopologyTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testUpdateBaselineTopologyTransactionalReplicated() { + 
fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStartClientAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStartClientAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStartClientTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStartClientTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStopClientAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStopClientAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStopClientTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStopClientTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } +} diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnReadAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnReadAbstractTest.java new file mode 100644 index 0000000000000..297d2d29c4a94 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnReadAbstractTest.java @@ -0,0 +1,1487 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Objects; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.BiConsumer; +import java.util.function.Predicate; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteInterruptedException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.DiscoveryEvent; +import org.apache.ignite.events.EventType; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; +import 
org.apache.ignite.internal.events.DiscoveryCustomEvent; +import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; +import org.apache.ignite.internal.processors.cache.DynamicCacheChangeBatch; +import org.apache.ignite.internal.processors.cache.ExchangeActions; +import org.apache.ignite.internal.processors.cache.ExchangeActions.CacheActionData; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionExchangeId; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; +import org.apache.ignite.internal.processors.cluster.ChangeGlobalStateMessage; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteBiPredicate; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.spi.discovery.tcp.TestTcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeAddFinishedMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeLeftMessage; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheMode.REPLICATED; + +/** + * + */ +@RunWith(JUnit4.class) +public abstract class CacheBlockOnReadAbstractTest extends 
GridCommonAbstractTest { + /** Default cache entries count. */ + private static final int DFLT_CACHE_ENTRIES_CNT = 2 * 1024; + + /** Ip finder. */ + private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + + /** List of baseline nodes started at the beginning of the test. */ + protected final List baseline = new CopyOnWriteArrayList<>(); + + /** List of server nodes started at the beginning of the test. */ + protected final List srvs = new CopyOnWriteArrayList<>(); + + /** List of client nodes started at the beginning of the test. */ + protected final List clients = new CopyOnWriteArrayList<>(); + + /** Start node in client mode. */ + private volatile boolean startNodesInClientMode; + + /** Latch that is used to wait until all required messages are blocked. */ + private volatile CountDownLatch cntFinishedReadOperations; + + /** Custom ip finder. Replaces {@link #IP_FINDER} if present at the moment of node starting. */ + private volatile TcpDiscoveryIpFinder customIpFinder; + + /** Discovery message processor. Used in every started node. */ + private volatile BiConsumer discoveryMsgProcessor; + + /** + * Number of baseline servers to start before test. + * + * @see Params#baseline() + */ + protected int baselineServersCount() { + return currentTestParams().baseline(); + } + + /** + * Number of non-baseline servers to start before test. + * + * @see Params#servers() + */ + protected int serversCount() { + return currentTestParams().servers(); + } + + /** + * Number of clients to start before test. + * + * @see Params#clients() + */ + protected int clientsCount() { + return currentTestParams().clients(); + } + + /** + * Number of backups to configure in caches by default. + */ + protected int backupsCount() { + return Math.min(3, baselineServersCount() - 1); + } + + /** + * Number of milliseconds to warmup reading process. Used to lower fluctuations in run time. Might be 0. 
+ * + * @see Params#warmup() + */ + protected long warmup() { + return currentTestParams().warmup(); + } + + /** + * Number of milliseconds to wait on the potentially blocking operation. + * + * @see Params#timeout() + */ + protected long timeout() { + return currentTestParams().timeout(); + } + + /** + * Cache atomicity mode. + * + * @see Params#atomicityMode() + */ + protected CacheAtomicityMode atomicityMode() { + return currentTestParams().atomicityMode(); + } + + /** + * Cache mode. + * + * @see Params#cacheMode() + */ + protected CacheMode cacheMode() { + return currentTestParams().cacheMode(); + } + + /** + * Whether allowing {@link ClusterTopologyCheckedException} as the valid reading result or not. + * + * @see Params#allowException() + */ + protected boolean allowException() { + return currentTestParams().allowException(); + } + + /** + * @param startNodesInClientMode Start nodes on client mode. + */ + public void startNodesInClientMode(boolean startNodesInClientMode) { + this.startNodesInClientMode = startNodesInClientMode; + } + + /** List of baseline nodes started at the beginning of the test. */ + public List baseline() { + return baseline; + } + + /** List of server nodes started at the beginning of the test. */ + public List servers() { + return srvs; + } + + /** List of client nodes started at the beginning of the test. */ + public List clients() { + return clients; + } + + /** + * Annotation to configure test methods in {@link CacheBlockOnReadAbstractTest}. Its values are used throughout + * test implementation. + */ + @Target(ElementType.METHOD) + @Retention(RetentionPolicy.RUNTIME) + public @interface Params { + /** + * Number of baseline servers to start before test. + */ + int baseline() default 3; + + /** + * Number of non-baseline servers to start before test. + */ + int servers() default 1; + + /** + * Number of clients to start before test. + */ + int clients() default 1; + + /** + * Number of milliseconds to warmup reading process. 
Used to lower fluctuations in run time. Might be 0. + */ + long warmup() default 2000L; + + /** + * Number of milliseconds to wait on the potentially blocking operation. + */ + long timeout() default 3000L; + + /** + * Cache atomicity mode. + */ + CacheAtomicityMode atomicityMode(); + + /** + * Cache mode. + */ + CacheMode cacheMode(); + + /** + * Whether allowing {@link ClusterTopologyCheckedException} as the valid reading result or not. + */ + boolean allowException() default false; + } + + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setConsistentId(igniteInstanceName); + + cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); + + cfg.setDiscoverySpi(new TestTcpDiscoverySpi() { + /** {@inheritDoc} */ + @Override protected void startMessageProcess(TcpDiscoveryAbstractMessage msg) { + if (discoveryMsgProcessor != null) + discoveryMsgProcessor.accept(msg, igniteInstanceName); + } + }.setIpFinder(customIpFinder == null ? IP_FINDER : customIpFinder)); + + cfg.setDataStorageConfiguration( + new DataStorageConfiguration() + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setPersistenceEnabled(true) + ) + ); + + cfg.setClientMode(startNodesInClientMode); + + return cfg; + } + + /** {@inheritDoc} */ + @Override public void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + // Checking prerequisites. + assertTrue("Positive timeout is required for the test.", timeout() > 0); + + assertTrue("No baseline servers were requested.", baselineServersCount() > 0); + + int idx = 0; + + // Start baseline nodes. + for (int i = 0; i < baselineServersCount(); i++) + baseline.add(startGrid(idx++)); + + // Activate cluster. + baseline.get(0).cluster().active(true); + + // Start server nodes in activated cluster. 
+ for (int i = 0; i < serversCount(); i++) + srvs.add(startGrid(idx++)); + + // Start client nodes. + startNodesInClientMode(true); + + customIpFinder = new TcpDiscoveryVmIpFinder(false) + .setAddresses( + Collections.singletonList("127.0.0.1:47500") + ); + + for (int i = 0; i < clientsCount(); i++) + clients.add(startGrid(idx++)); + + customIpFinder = null; + } + + /** {@inheritDoc} */ + @Override public void afterTest() throws Exception { + baseline.clear(); + + srvs.clear(); + + clients.clear(); + + grid(0).cluster().active(false); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testCreateCacheAtomicPartitioned() throws Exception { + testCreateCacheTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testCreateCacheAtomicReplicated() throws Exception { + testCreateCacheTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testCreateCacheTransactionalPartitioned() throws Exception { + testCreateCacheTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testCreateCacheTransactionalReplicated() throws Exception { + doTest( + asMessagePredicate(CacheBlockOnReadAbstractTest::createCachePredicate), + () -> baseline.get(0).createCache(UUID.randomUUID().toString()) + ); + } + + /** + * @throws Exception If failed. + */ + @Params(timeout = 5000L, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testDestroyCacheAtomicPartitioned() throws Exception { + testDestroyCacheTransactionalReplicated(); + } + + /** + * @throws Exception If failed. 
+ */ + @Params(timeout = 5000L, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testDestroyCacheAtomicReplicated() throws Exception { + testDestroyCacheTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(timeout = 5000L, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testDestroyCacheTransactionalPartitioned() throws Exception { + testDestroyCacheTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(timeout = 5000L, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testDestroyCacheTransactionalReplicated() throws Exception { + List cacheNames = new ArrayList<>(Arrays.asList( + UUID.randomUUID().toString(), + UUID.randomUUID().toString(), + UUID.randomUUID().toString()) + ); + + for (String cacheName : cacheNames) + baseline.get(0).createCache(cacheName); + + doTest( + asMessagePredicate(CacheBlockOnReadAbstractTest::destroyCachePredicate), + () -> baseline.get(0).destroyCache(cacheNames.remove(0)) + ); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testStartServerAtomicPartitioned() throws Exception { + testStartServerTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testStartServerAtomicReplicated() throws Exception { + testStartServerTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testStartServerTransactionalPartitioned() throws Exception { + testStartServerTransactionalReplicated(); + } + + /** + * @throws Exception If failed. 
+ */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testStartServerTransactionalReplicated() throws Exception { + startNodesInClientMode(false); + + doTest( + asMessagePredicate(discoEvt -> discoEvt.type() == EventType.EVT_NODE_JOINED), + () -> startGrid(UUID.randomUUID().toString()) + ); + } + + /** + * @throws Exception If failed. + */ + @Params(servers = 4, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testStopServerAtomicPartitioned() throws Exception { + testStopServerTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(servers = 4, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testStopServerAtomicReplicated() throws Exception { + testStopServerTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(servers = 4, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testStopServerTransactionalPartitioned() throws Exception { + testStopServerTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(servers = 4, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testStopServerTransactionalReplicated() throws Exception { + doTest( + asMessagePredicate(discoEvt -> discoEvt.type() == EventType.EVT_NODE_LEFT), + () -> stopGrid(srvs.remove(srvs.size() - 1).name()) + ); + } + + /** + * @throws Exception If failed. + */ + @Params(baseline = 4, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testRestartBaselineAtomicPartitioned() throws Exception { + testRestartBaselineTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(baseline = 4, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testRestartBaselineAtomicReplicated() throws Exception { + testRestartBaselineTransactionalReplicated(); + } + + /** + * @throws Exception If failed. 
+ */ + @Params(baseline = 4, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testRestartBaselineTransactionalPartitioned() throws Exception { + testRestartBaselineTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(baseline = 4, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testRestartBaselineTransactionalReplicated() throws Exception { + doTest( + asMessagePredicate(discoEvt -> discoEvt.type() == EventType.EVT_NODE_JOINED), + () -> { + IgniteEx node = baseline.get(baseline.size() - 1); + + TestRecordingCommunicationSpi.spi(node).stopBlock(); + + stopGrid(node.name()); + + for (int i = 0; i < baselineServersCount() - 2; i++) + cntFinishedReadOperations.countDown(); + + startGrid(node.name()); + } + ); + } + + /** + * @throws Exception If failed. + */ + @Params(timeout = 5000L, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testUpdateBaselineTopologyAtomicPartitioned() throws Exception { + testUpdateBaselineTopologyTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(timeout = 5000L, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testUpdateBaselineTopologyAtomicReplicated() throws Exception { + testUpdateBaselineTopologyTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(timeout = 5000L, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testUpdateBaselineTopologyTransactionalPartitioned() throws Exception { + testUpdateBaselineTopologyTransactionalReplicated(); + } + + /** + * @throws Exception If failed. 
+ */ + @Params(timeout = 5000L, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testUpdateBaselineTopologyTransactionalReplicated() throws Exception { + doTest( + asMessagePredicate(discoEvt -> { + if (discoEvt instanceof DiscoveryCustomEvent) { + DiscoveryCustomEvent discoCustomEvt = (DiscoveryCustomEvent)discoEvt; + + DiscoveryCustomMessage customMsg = discoCustomEvt.customMessage(); + + return customMsg instanceof ChangeGlobalStateMessage; + } + + return false; + }), + () -> { + startNodesInClientMode(false); + + IgniteEx ignite = startGrid(UUID.randomUUID().toString()); + + baseline.get(0).cluster().setBaselineTopology(baseline.get(0).context().discovery().topologyVersion()); + + baseline.add(ignite); + } + ); + } + + /** + * @throws Exception If failed. + */ + @Params(baseline = 9, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testStopBaselineAtomicPartitioned() throws Exception { + testStopBaselineTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(baseline = 9, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testStopBaselineAtomicReplicated() throws Exception { + testStopBaselineTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(baseline = 9, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testStopBaselineTransactionalPartitioned() throws Exception { + testStopBaselineTransactionalReplicated(); + } + + /** + * @throws Exception If failed. 
+ */ + @Params(baseline = 9, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testStopBaselineTransactionalReplicated() throws Exception { + AtomicInteger cntDownCntr = new AtomicInteger(0); + + doTest( + asMessagePredicate(discoEvt -> discoEvt.type() == EventType.EVT_NODE_LEFT), + () -> { + IgniteEx node = baseline.get(baseline.size() - cntDownCntr.get() - 1); + + TestRecordingCommunicationSpi.spi(node).stopBlock(); + + cntDownCntr.incrementAndGet(); + + for (int i = 0; i < cntDownCntr.get(); i++) + cntFinishedReadOperations.countDown(); // This node and previously stopped nodes as well. + + stopGrid(node.name()); + } + ); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testStartClientAtomicPartitioned() throws Exception { + testStartClientTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testStartClientAtomicReplicated() throws Exception { + testStartClientTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testStartClientTransactionalPartitioned() throws Exception { + testStartClientTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + public void testStartClientTransactionalReplicated() throws Exception { + doTest( + TcpDiscoveryNodeAddFinishedMessage.class, + () -> { + startNodesInClientMode(true); + + customIpFinder = new TcpDiscoveryVmIpFinder(false) + .setAddresses( + Collections.singletonList("127.0.0.1:47502") + ); + + try { + startGrid(UUID.randomUUID().toString()); + } + finally { + customIpFinder = null; + } + } + ); + } + + /** + * @throws Exception If failed. 
+ */ + @Params(atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + public void testStopClientAtomicPartitioned() throws Exception { + testStopClientTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + public void testStopClientAtomicReplicated() throws Exception { + testStopClientTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + public void testStopClientTransactionalPartitioned() throws Exception { + testStopClientTransactionalReplicated(); + } + + /** + * @throws Exception If failed. + */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED, timeout = 5_000L) + @Test + public void testStopClientTransactionalReplicated() throws Exception { + startNodesInClientMode(true); + + customIpFinder = new TcpDiscoveryVmIpFinder(false) + .setAddresses( + Collections.singletonList("127.0.0.1:47502") + ); + + for (int i = 0; i < 3; i++) + clients.add(startGrid(UUID.randomUUID().toString())); + + customIpFinder = null; + + doTest( + TcpDiscoveryNodeLeftMessage.class, + () -> stopGrid(clients.remove(clients.size() - 1).name()) + ); + } + + /** + * Checks that given discovery event is from "Create cache" operation. + * + * @param discoEvt Discovery event. 
+ */ + private static boolean createCachePredicate(DiscoveryEvent discoEvt) { + if (discoEvt instanceof DiscoveryCustomEvent) { + + DiscoveryCustomEvent discoCustomEvt = (DiscoveryCustomEvent)discoEvt; + + DiscoveryCustomMessage customMsg = discoCustomEvt.customMessage(); + + if (customMsg instanceof DynamicCacheChangeBatch) { + DynamicCacheChangeBatch cacheChangeBatch = (DynamicCacheChangeBatch)customMsg; + + ExchangeActions exchangeActions = U.field(cacheChangeBatch, "exchangeActions"); + + Collection startRequests = exchangeActions.cacheStartRequests(); + + return !startRequests.isEmpty(); + } + } + + return false; + } + + /** + * Checks that given discovery event is from "Destroy cache" operation. + * + * @param discoEvt Discovery event. + */ + private static boolean destroyCachePredicate(DiscoveryEvent discoEvt) { + if (discoEvt instanceof DiscoveryCustomEvent) { + + DiscoveryCustomEvent discoCustomEvt = (DiscoveryCustomEvent)discoEvt; + + DiscoveryCustomMessage customMsg = discoCustomEvt.customMessage(); + + if (customMsg instanceof DynamicCacheChangeBatch) { + DynamicCacheChangeBatch cacheChangeBatch = (DynamicCacheChangeBatch)customMsg; + + ExchangeActions exchangeActions = U.field(cacheChangeBatch, "exchangeActions"); + + Collection stopRequests = exchangeActions.cacheStopRequests(); + + return !stopRequests.isEmpty(); + } + } + + return false; + } + + /** + * Read operation that is going to be executed during the blocking operation. + */ + @NotNull protected abstract CacheReadBackgroundOperation getReadOperation(); + + /** + * Checks that {@code block} closure doesn't block the read operation. + * Does it for client, baseline and regular server node. + * + * @param blockMsgPred Predicate that checks whether the message corresponds to the {@code block} or not. + * @param block Blocking operation. + * @throws Exception If failed. 
+ */ + public void doTest(Predicate blockMsgPred, RunnableX block) throws Exception { + BackgroundOperation backgroundOperation = new BlockMessageOnBaselineBackgroundOperation( + block, + blockMsgPred + ); + + doTest(backgroundOperation); + } + + /** + * Checks that {@code block} closure doesn't block read operation. + * Does it for client, baseline and regular server node. + * + * @param blockMsgCls Class of discovery message to block. + * @param block Blocking operation. + * @throws Exception If failed. + */ + public void doTest(Class blockMsgCls, RunnableX block) throws Exception { + BlockDiscoveryMessageBackgroundOperation backgroundOperation = new BlockDiscoveryMessageBackgroundOperation( + block, + blockMsgCls + ); + + doTest(backgroundOperation); + } + + /** + * Checks that {@code block} closure doesn't block read operation. + * Does it for client, baseline and regular server node. + * + * @param backgroundOperation Background operation. + * @throws Exception If failed. + */ + public void doTest(BackgroundOperation backgroundOperation) throws Exception { + CacheReadBackgroundOperation readOperation = getReadOperation(); + + readOperation.initCache(baseline.get(0), true); + + // Warmup. 
+ if (warmup() > 0) { + try (AutoCloseable read = readOperation.start()) { + Thread.sleep(warmup()); + } + + assertEquals( + readOperation.readOperationsFailed() + " read operations failed during warmup.", + 0, + readOperation.readOperationsFailed() + ); + + assertTrue( + "No read operations were finished during warmup.", + readOperation.readOperationsFinishedUnderBlock() > 0 + ); + } + + doTest0(clients.get(0), readOperation, backgroundOperation); + + doTest0(srvs.get(0), readOperation, backgroundOperation); + + doTest0(baseline.get(0), readOperation, backgroundOperation); + + try (AutoCloseable read = readOperation.start()) { + Thread.sleep(500L); + } + + assertEquals( + readOperation.readOperationsFailed() + " read operations failed during finish stage.", + 0, + readOperation.readOperationsFailed() + ); + + assertTrue( + "No read operations were finished during finish stage.", + readOperation.readOperationsFinishedUnderBlock() > 0 + ); + } + + /** + * Internal part for {@link CacheBlockOnReadAbstractTest#doTest(Predicate, RunnableX)}. + * + * @param ignite Ignite instance. Client / baseline / server node. + * @param readOperation Read operation. + * @param backgroundOperation Background operation. + */ + private void doTest0( + IgniteEx ignite, + CacheReadBackgroundOperation readOperation, + BackgroundOperation backgroundOperation + ) throws Exception { + // Reinit internal cache state with given ignite instance. + readOperation.initCache(ignite, false); + + cntFinishedReadOperations = new CountDownLatch(baseline.size() - 1); + + // Read while potentially blocking operation is executing. + try (AutoCloseable block = backgroundOperation.start()) { + cntFinishedReadOperations.await(5 * timeout(), TimeUnit.MILLISECONDS); + + // Possible if test itself is wrong. 
+ assertEquals("Messages weren't blocked in time", 0, cntFinishedReadOperations.getCount()); + + try (AutoCloseable read = readOperation.start()) { + Thread.sleep(timeout()); + } + } + finally { + cntFinishedReadOperations = null; + } + + log.info("Operations finished: " + readOperation.readOperationsFinishedUnderBlock()); + log.info("Longest operation took " + readOperation.maxReadDuration() + "ms"); + + // None of the read operations should fail. + assertEquals( + readOperation.readOperationsFailed() + " read operations failed.", + 0, + readOperation.readOperationsFailed() + ); + + assertTrue( + "No read operations were finished during timeout.", + readOperation.readOperationsFinishedUnderBlock() > 0 + ); + + // No read operation lasted as long as the blocking timeout. + assertNotAlmostEqual(timeout(), readOperation.maxReadDuration()); + + // On average every read operation was much faster than the blocking timeout. + double avgDuration = (double)timeout() / readOperation.readOperationsFinishedUnderBlock(); + + assertTrue("Average duration was too long.", avgDuration < timeout() * 0.25); + } + + /** + * Utility class that allows starting and stopping some background operation many times. + */ + protected abstract static class BackgroundOperation { + /** */ + private IgniteInternalFuture fut; + + /** + * Invoked strictly before the background thread is started. + */ + protected void init() { + // No-op. + } + + /** + * Operation itself. Will be executed in a separate thread. Thread interruption has to be considered as a valid + * way to stop the operation. + */ + protected abstract void execute(); + + /** + * @return Allowed time to wait in the {@link BackgroundOperation#stop()} method before canceling the background thread. + */ + protected abstract long stopTimeout(); + + /** + * Start a separate thread and execute method {@link BackgroundOperation#execute()} in it. + * + * @return {@link AutoCloseable} that invokes {@link BackgroundOperation#stop()} on closing. 
+ */ + AutoCloseable start() { + if (fut != null) + throw new UnsupportedOperationException("Only one simultaneous operation is allowed"); + + init(); + + CountDownLatch threadStarted = new CountDownLatch(1); + + fut = GridTestUtils.runAsync(() -> { + try { + threadStarted.countDown(); + + execute(); + } + catch (Exception e) { + throw new IgniteException("Unexpected exception in background operation thread", e); + } + }); + + try { + threadStarted.await(); + } + catch (InterruptedException e) { + try { + fut.cancel(); + } + catch (IgniteCheckedException e1) { + e.addSuppressed(e1); + } + + throw new IgniteException(e); + } + + return this::stop; + } + + /** + * Interrupt the operation started in the {@link BackgroundOperation#start()} method and join the interrupted thread. + */ + void stop() throws Exception { + if (fut == null) + return; + + try { + fut.get(stopTimeout()); + } + catch (IgniteFutureTimeoutCheckedException e) { + fut.cancel(); + + fut.get(); + } + finally { + fut = null; + } + } + } + + /** + * @param discoEvtPred Predicate that tests discovery events. + * @return New predicate that tests any message based on the {@code discoEvtPred} predicate. + */ + public static Predicate asMessagePredicate(Predicate discoEvtPred) { + return msg -> { + if (msg instanceof GridDhtPartitionsFullMessage) { + GridDhtPartitionsFullMessage fullMsg = (GridDhtPartitionsFullMessage)msg; + + GridDhtPartitionExchangeId exchangeId = fullMsg.exchangeId(); + + if (exchangeId != null) + return discoEvtPred.test(U.field(exchangeId, "discoEvt")); + } + + return false; + }; + } + + /** + * Background operation that executes some node request and doesn't allow its exchange messages to be fully + * processed until the operation is stopped. + */ + protected class BlockMessageOnBaselineBackgroundOperation extends BackgroundOperation { + /** */ + private final RunnableX block; + + /** */ + private final Predicate blockMsg; + + /** + * @param block Blocking operation. 
+ * @param blockMsgPred Predicate that checks whether to block the message or not. + * + * @see BlockMessageOnBaselineBackgroundOperation#blockMessage(ClusterNode, Message) + */ + protected BlockMessageOnBaselineBackgroundOperation( + RunnableX block, + Predicate blockMsgPred + ) { + this.block = block; + blockMsg = blockMsgPred; + } + + /** {@inheritDoc} */ + @Override protected void execute() { + for (IgniteEx server : baseline) { + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(server); + + spi.blockMessages(this::blockMessage); + } + + block.run(); + } + + /** + * Function to pass into {@link TestRecordingCommunicationSpi#blockMessages(IgniteBiPredicate)}. + * + * @param node Node that receives the message. + * @param msg Message. + * @return Whether the given message should be blocked or not. + */ + private boolean blockMessage(ClusterNode node, Message msg) { + boolean block = blockMsg.test(msg) + && baseline.stream().map(IgniteEx::name).anyMatch(node.consistentId()::equals); + + if (block) + cntFinishedReadOperations.countDown(); + + return block; + } + + /** {@inheritDoc} */ + @Override protected long stopTimeout() { + // Should be big enough so the thread stops on its own. Otherwise the test will fail, but that's fine. + return 30_000L; + } + + /** {@inheritDoc} */ + @Override void stop() throws Exception { + for (IgniteEx server : baseline) { + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(server); + + spi.stopBlock(); + } + + super.stop(); + } + } + + /** + * Background operation that executes some node request and doesn't allow its discovery messages to be fully + * processed until the operation is stopped. + */ + protected class BlockDiscoveryMessageBackgroundOperation extends BackgroundOperation { + /** */ + private final RunnableX block; + + /** */ + private final Class blockMsgCls; + + /** */ + private volatile CountDownLatch blockLatch; + + /** + * @param block Blocking operation. 
+ * @param blockMsgCls Class of message to block. + * + * @see BlockMessageOnBaselineBackgroundOperation#blockMessage(ClusterNode, Message) + */ + protected BlockDiscoveryMessageBackgroundOperation( + RunnableX block, + Class blockMsgCls + ) { + this.block = block; + this.blockMsgCls = blockMsgCls; + } + + /** {@inheritDoc} */ + @Override protected void execute() { + try { + blockLatch = new CountDownLatch(1); + + discoveryMsgProcessor = this::processMessage; + + for (int i = 0; i < baselineServersCount() - 2; i++) + cntFinishedReadOperations.countDown(); + + block.run(); + } + finally { + discoveryMsgProcessor = null; + } + } + + /** + * Process discovery SPI message. + * + * @param msg Message. + * @param receiverConsistentId Consistent ID of message receiver. + */ + private void processMessage(TcpDiscoveryAbstractMessage msg, String receiverConsistentId) { + if (!blockMsgCls.isInstance(msg)) + return; + + boolean baselineSnd = Objects.equals( + baseline.get(1).localNode().consistentId(), + receiverConsistentId + ); + + if (baselineSnd) { + cntFinishedReadOperations.countDown(); + + try { + blockLatch.await(); + } + catch (InterruptedException ignore) { + } + } + } + + /** {@inheritDoc} */ + @Override protected long stopTimeout() { + // Should be big enough so the thread stops on its own. Otherwise the test will fail, but that's fine. + return 30_000L; + } + + /** {@inheritDoc} */ + @Override void stop() throws Exception { + blockLatch.countDown(); + + super.stop(); + } + } + + /** + * Runnable that can throw exceptions. + */ + @FunctionalInterface + public interface RunnableX extends Runnable { + /** + * Closure body. + * + * @throws Exception If failed. + */ + void runx() throws Exception; + + /** {@inheritDoc} */ + @Override default void run() { + try { + runx(); + } + catch (Exception e) { + throw new IgniteException(e); + } + } + } + + /** + * {@link BackgroundOperation} implementation for cache reading operations. 
+ */ + protected abstract class ReadBackgroundOperation extends BackgroundOperation { + + /** Counter for successfully finished operations. */ + private final AtomicInteger readOperationsFinishedUnderBlock = new AtomicInteger(); + + /** Counter for failed operations. */ + private final AtomicInteger readOperationsFailed = new AtomicInteger(); + + /** Duration of the longest read operation. */ + private final AtomicLong maxReadDuration = new AtomicLong(-1); + + /** + * Do single iteration of reading operation. Will be executed in a loop. + */ + protected abstract void doRead() throws Exception; + + + /** {@inheritDoc} */ + @Override protected void init() { + readOperationsFinishedUnderBlock.set(0); + + readOperationsFailed.set(0); + + maxReadDuration.set(-1); + } + + /** {@inheritDoc} */ + @Override protected void execute() { + Set loggedMessages = new HashSet<>(); + + while (!Thread.currentThread().isInterrupted()) { + long prevTs = System.currentTimeMillis(); + + try { + doRead(); + + readOperationsFinishedUnderBlock.incrementAndGet(); + } + catch (Exception e) { + boolean threadInterrupted = X.hasCause(e, + InterruptedException.class, + IgniteInterruptedException.class, + IgniteInterruptedCheckedException.class + ); + + if (threadInterrupted) + Thread.currentThread().interrupt(); + else if (allowException() && X.hasCause(e, ClusterTopologyCheckedException.class)) + readOperationsFinishedUnderBlock.incrementAndGet(); + else { + readOperationsFailed.incrementAndGet(); + + if (loggedMessages.add(e.getMessage())) + log.error("Error during read operation execution", e); + + continue; + } + } + + maxReadDuration.set(Math.max(maxReadDuration.get(), System.currentTimeMillis() - prevTs)); + } + } + + /** {@inheritDoc} */ + @Override protected long stopTimeout() { + return 0; + } + + /** + * @return Number of successfully finished operations. 
+ */ + public int readOperationsFinishedUnderBlock() { + return readOperationsFinishedUnderBlock.get(); + } + + /** + * @return Number of failed operations. + */ + public int readOperationsFailed() { + return readOperationsFailed.get(); + } + + /** + * @return Duration of the longest read operation. + */ + public long maxReadDuration() { + return maxReadDuration.get(); + } + } + + /** + * + */ + protected abstract class CacheReadBackgroundOperation extends ReadBackgroundOperation { + /** + * {@link CacheReadBackgroundOperation#cache()} method backing field. Updated on each + * {@link CacheReadBackgroundOperation#initCache(IgniteEx, boolean)} invocation. + */ + private IgniteCache cache; + + /** + * Reinit internal cache using passed ignite instance and fill it with data if required. + * + * @param ignite Node to get or create cache from. + * @param fillData Whether the cache should be filled with new data or not. + */ + public void initCache(IgniteEx ignite, boolean fillData) { + cache = ignite.getOrCreateCache( + createCacheConfiguration() + .setAtomicityMode(atomicityMode()) + .setCacheMode(cacheMode()) + ); + + if (fillData) { + try (IgniteDataStreamer dataStreamer = ignite.dataStreamer(cache.getName())) { + dataStreamer.allowOverwrite(true); + + for (int i = 0; i < entriesCount(); i++) + dataStreamer.addData(createKey(i), createValue(i)); + } + } + } + + /** + * @return Cache configuration. + */ + protected CacheConfiguration createCacheConfiguration() { + return new CacheConfiguration(DEFAULT_CACHE_NAME) + .setBackups(backupsCount()) + .setAffinity( + new RendezvousAffinityFunction() + .setPartitions(32) + ); + } + + /** + * @return Current cache. + */ + protected final IgniteCache cache() { + return cache; + } + + /** + * @return Count of cache entries to create in {@link CacheReadBackgroundOperation#initCache(IgniteEx, boolean)} + * method. + */ + protected int entriesCount() { + return DFLT_CACHE_ENTRIES_CNT; + } + + /** + * @param idx Unique number. 
+ * @return Key to be used for inserting into cache. + * @see CacheReadBackgroundOperation#createValue(int) + */ + protected abstract KeyType createKey(int idx); + + /** + * @param idx Unique number. + * @return Value to be used for inserting into cache. + * @see CacheReadBackgroundOperation#createKey(int) + */ + protected abstract ValueType createValue(int idx); + } + + /** + * {@link CacheReadBackgroundOperation} implementation for (int -> int) cache. Keys and values are equal by default. + */ + protected abstract class IntCacheReadBackgroundOperation + extends CacheReadBackgroundOperation { + /** {@inheritDoc} */ + @Override protected Integer createKey(int idx) { + return idx; + } + + /** {@inheritDoc} */ + @Override protected Integer createValue(int idx) { + return idx; + } + } + + /** + * @return {@link Params} annotation object from the current test method. + */ + protected Params currentTestParams() { + Params params = currentTestAnnotation(Params.class); + + assertNotNull("Test " + getName() + " is not annotated with @Params annotation.", params); + + return params; + } + + /** + * Assert that two numbers are close to each other. + */ + private static void assertAlmostEqual(long exp, long actual) { + assertTrue(String.format("Numbers differ too much [exp=%d, actual=%d]", exp, actual), almostEqual(exp, actual)); + } + + /** + * Assert that two numbers are not close to each other. + */ + private static void assertNotAlmostEqual(long exp, long actual) { + assertFalse(String.format("Numbers are almost equal [exp=%d, actual=%d]", exp, actual), almostEqual(exp, actual)); + } + + /** + * Check that two numbers are close to each other. 
+ */ + private static boolean almostEqual(long exp, long actual) { + double rel = (double)(actual - exp) / exp; + + return Math.abs(rel) < 0.05; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnScanTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnScanTest.java new file mode 100644 index 0000000000000..2b0289b3e8149 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnScanTest.java @@ -0,0 +1,137 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import java.util.Objects; +import java.util.Random; +import org.apache.ignite.cache.query.ScanQuery; +import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheMode.REPLICATED; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheBlockOnScanTest extends CacheBlockOnReadAbstractTest { + + /** {@inheritDoc} */ + @Override @NotNull protected CacheReadBackgroundOperation getReadOperation() { + return new IntCacheReadBackgroundOperation() { + /** Random. */ + private Random random = new Random(); + + /** {@inheritDoc} */ + @Override public void doRead() { + int idx = random.nextInt(entriesCount()); + + cache().query(new ScanQuery<>((k, v) -> Objects.equals(k, idx))).getAll(); + } + }; + } + + /** {@inheritDoc} */ + @Params(baseline = 9, atomicityMode = ATOMIC, cacheMode = PARTITIONED, allowException = true) + @Test + @Override public void testStopBaselineAtomicPartitioned() throws Exception { + super.testStopBaselineAtomicPartitioned(); + } + + /** {@inheritDoc} */ + @Params(baseline = 9, atomicityMode = ATOMIC, cacheMode = REPLICATED, allowException = true) + @Test + @Override public void testStopBaselineAtomicReplicated() throws Exception { + super.testStopBaselineAtomicReplicated(); + } + + /** {@inheritDoc} */ + @Params(baseline = 9, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED, allowException = true) + @Test + @Override public void testStopBaselineTransactionalPartitioned() throws Exception { + super.testStopBaselineTransactionalPartitioned(); + } + + /** {@inheritDoc} */ + @Params(baseline = 9, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED, 
allowException = true) + @Test + @Override public void testStopBaselineTransactionalReplicated() throws Exception { + super.testStopBaselineTransactionalReplicated(); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStartClientAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStartClientTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStopClientAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStopClientTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStartClientAtomicPartitioned() throws Exception { + super.testStartClientAtomicPartitioned(); + } + + /** {@inheritDoc} */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStartClientTransactionalPartitioned() throws Exception { + super.testStartClientTransactionalPartitioned(); + } + + /** {@inheritDoc} */ + @Params(atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStopClientAtomicPartitioned() throws Exception { + super.testStopClientAtomicPartitioned(); + } + + /** {@inheritDoc} */ + @Params(atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStopClientTransactionalPartitioned() throws Exception { + 

super.testStopClientTransactionalPartitioned(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnSingleGetTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnSingleGetTest.java new file mode 100644 index 0000000000000..08c15b4e4442a --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheBlockOnSingleGetTest.java @@ -0,0 +1,274 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import java.util.Random; +import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheMode.REPLICATED; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheBlockOnSingleGetTest extends CacheBlockOnReadAbstractTest { + + /** {@inheritDoc} */ + @Override @NotNull protected CacheReadBackgroundOperation getReadOperation() { + return new IntCacheReadBackgroundOperation() { + /** Random. */ + private Random random = new Random(); + + /** {@inheritDoc} */ + @Override public void doRead() { + for (int i = 0; i < 300; i++) + cache().get(random.nextInt(entriesCount())); + } + }; + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStopBaselineAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStopBaselineAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStopBaselineTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStopBaselineTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9915"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, 
atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testCreateCacheAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testCreateCacheAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testCreateCacheTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testCreateCacheTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testDestroyCacheAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testDestroyCacheAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testDestroyCacheTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testDestroyCacheTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void 
testStartServerAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStartServerAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStartServerTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStartServerTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStopServerAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStopServerAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStopServerTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStopServerTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testUpdateBaselineTopologyAtomicPartitioned() { + 
fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testUpdateBaselineTopologyAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testUpdateBaselineTopologyTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testUpdateBaselineTopologyTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9883"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStartClientAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStartClientAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStartClientTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStartClientTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = PARTITIONED) + @Test + @Override public void testStopClientAtomicPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } 
+ + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = ATOMIC, cacheMode = REPLICATED) + @Test + @Override public void testStopClientAtomicReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = PARTITIONED) + @Test + @Override public void testStopClientTransactionalPartitioned() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } + + /** {@inheritDoc} */ + @Params(baseline = 1, atomicityMode = TRANSACTIONAL, cacheMode = REPLICATED) + @Test + @Override public void testStopClientTransactionalReplicated() { + fail("https://issues.apache.org/jira/browse/IGNITE-9987"); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheClientsConcurrentStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheClientsConcurrentStartTest.java index 58d4ea2c2af58..4e7bbab5e56eb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheClientsConcurrentStartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheClientsConcurrentStartTest.java @@ -36,8 +36,6 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; @@ -52,6 +50,9 @@ import java.util.UUID; import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.atomic.AtomicBoolean; 
+import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.*; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.*; @@ -59,10 +60,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheClientsConcurrentStartTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRV_CNT = 4; @@ -99,8 +98,6 @@ public class CacheClientsConcurrentStartTest extends GridCommonAbstractTest { } }; - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setMarshaller(null); cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); @@ -127,6 +124,7 @@ public class CacheClientsConcurrentStartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStartNodes() throws Exception { for (int i = 0; i < ITERATIONS; i++) { try { @@ -247,4 +245,4 @@ private CacheConfiguration cacheConfiguration(String cacheName) { return ccfg; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDataLossOnPartitionMoveTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDataLossOnPartitionMoveTest.java index 9ff072500b6fc..02c7ee8ce385a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDataLossOnPartitionMoveTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDataLossOnPartitionMoveTest.java @@ -36,15 +36,20 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.TestRecordingCommunicationSpi; import org.apache.ignite.internal.processors.cache.GridCacheUtils; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.testframework.GridTestNode; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.EVICTED; import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; @@ -52,6 +57,7 @@ /** * */ +@RunWith(JUnit4.class) public class CacheDataLossOnPartitionMoveTest extends GridCommonAbstractTest { /** */ public static final long MB = 1024 * 1024L; @@ -118,7 +124,10 @@ private String grp(int idx) { /** * @throws Exception if failed. 
*/ + @Test public void testDataLossOnPartitionMove() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10421", MvccFeatureChecker.forcedMvcc()); + try { Ignite ignite = startGridsMultiThreaded(GRIDS_CNT / 2, false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDiscoveryDataConcurrentJoinTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDiscoveryDataConcurrentJoinTest.java index 4cf89b2521031..1fa054df51311 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDiscoveryDataConcurrentJoinTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheDiscoveryDataConcurrentJoinTest.java @@ -31,23 +31,21 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.GridAtomicInteger; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryJoinRequestMessage; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * */ -@SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheDiscoveryDataConcurrentJoinTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Iteration. 
*/ private static final int ITERATIONS = 3; @@ -87,7 +85,7 @@ public class CacheDiscoveryDataConcurrentJoinTest extends GridCommonAbstractTest } }; - testSpi.setIpFinder(ipFinder); + testSpi.setIpFinder(sharedStaticIpFinder); testSpi.setJoinTimeout(60_000); cfg.setDiscoverySpi(testSpi); @@ -120,6 +118,7 @@ public class CacheDiscoveryDataConcurrentJoinTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testConcurrentJoin() throws Exception { for (int iter = 0; iter < ITERATIONS; iter++) { log.info("Iteration: " + iter); @@ -170,6 +169,7 @@ public void testConcurrentJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentJoinCacheWithGroup() throws Exception { withCacheGrp = true; @@ -212,4 +212,4 @@ private CacheConfiguration cacheConfiguration(String cacheName) { private void checkCache(Ignite node, final String cacheName) { assertNotNull(((IgniteKernal)node).context().cache().cache(cacheName)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheExchangeMergeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheExchangeMergeTest.java index d95a7bf978aa3..189b4dd029253 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheExchangeMergeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheExchangeMergeTest.java @@ -66,19 +66,22 @@ import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Assert; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -88,10 +91,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheExchangeMergeTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final long WAIT_SECONDS = 15; @@ -120,8 +121,6 @@ public class CacheExchangeMergeTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - if (testSpi) cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); else if (testDelaySpi) @@ -149,7 +148,12 @@ else if (testDelaySpi) cacheConfiguration("c7", TRANSACTIONAL, PARTITIONED, 1), cacheConfiguration("c8", TRANSACTIONAL, PARTITIONED, 2), cacheConfiguration("c9", TRANSACTIONAL, PARTITIONED, 10), - cacheConfiguration("c10", TRANSACTIONAL, REPLICATED, 0) + cacheConfiguration("c10", TRANSACTIONAL, REPLICATED, 0), + cacheConfiguration("c11", 
TRANSACTIONAL_SNAPSHOT, PARTITIONED, 0), + cacheConfiguration("c12", TRANSACTIONAL_SNAPSHOT, PARTITIONED, 1), + cacheConfiguration("c13", TRANSACTIONAL_SNAPSHOT, PARTITIONED, 2), + cacheConfiguration("c14", TRANSACTIONAL_SNAPSHOT, PARTITIONED, 10), + cacheConfiguration("c15", TRANSACTIONAL_SNAPSHOT, REPLICATED, 0) ); } @@ -203,6 +207,7 @@ private CacheConfiguration cacheConfiguration(String name, /** * @throws Exception If failed. */ + @Test public void testDelayExchangeMessages() throws Exception { testDelaySpi = true; @@ -275,6 +280,7 @@ public void testDelayExchangeMessages() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeStartRandomClientsServers() throws Exception { for (int iter = 0; iter < 3; iter++) { ThreadLocalRandom rnd = ThreadLocalRandom.current(); @@ -331,6 +337,7 @@ public void testMergeStartRandomClientsServers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeStartStopRandomClientsServers() throws Exception { for (int iter = 0; iter < 3; iter++) { log.info("Iteration: " + iter); @@ -403,6 +410,7 @@ public void testMergeStartStopRandomClientsServers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentStartServers() throws Exception { concurrentStart(false); } @@ -410,6 +418,7 @@ public void testConcurrentStartServers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentStartServersAndClients() throws Exception { concurrentStart(true); } @@ -419,7 +428,9 @@ public void testConcurrentStartServersAndClients() throws Exception { * @throws Exception If failed. 
*/ private void concurrentStart(final boolean withClients) throws Exception { - for (int i = 0; i < 5; i++) { + int iterations = GridTestUtils.SF.applyLB(5, 1); + + for (int i = 0; i < iterations; i++) { log.info("Iteration: " + i); startGrid(0); @@ -456,6 +467,8 @@ private void concurrentStart(final boolean withClients) throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10186") + @Test public void testMergeServerAndClientJoin1() throws Exception { final IgniteEx srv0 = startGrid(0); @@ -494,6 +507,7 @@ public void testMergeServerAndClientJoin1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartCacheOnJoinAndJoinMerge_2_nodes() throws Exception { startCacheOnJoinAndJoinMerge1(2, false); } @@ -501,6 +515,7 @@ public void testStartCacheOnJoinAndJoinMerge_2_nodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartCacheOnJoinAndJoinMerge_4_nodes() throws Exception { startCacheOnJoinAndJoinMerge1(4, false); } @@ -508,6 +523,7 @@ public void testStartCacheOnJoinAndJoinMerge_4_nodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartCacheOnJoinAndJoinMerge_WithClients() throws Exception { startCacheOnJoinAndJoinMerge1(5, true); } @@ -544,6 +560,7 @@ private void startCacheOnJoinAndJoinMerge1(int nodes, boolean withClients) throw /** * @throws Exception If failed. */ + @Test public void testMergeAndHistoryCleanup() throws Exception { final int histSize = 5; @@ -608,6 +625,7 @@ private void checkHistorySize(int histSize) { /** * @throws Exception If failed. */ + @Test public void testStartCacheOnJoinAndMergeWithFail() throws Exception { cfgCache = false; @@ -633,6 +651,7 @@ public void testStartCacheOnJoinAndMergeWithFail() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStartCacheOnJoinAndCoordinatorFailed1() throws Exception { cfgCache = false; @@ -654,6 +673,7 @@ public void testStartCacheOnJoinAndCoordinatorFailed1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartCacheOnJoinAndCoordinatorFailed2() throws Exception { cfgCache = false; @@ -675,6 +695,7 @@ public void testStartCacheOnJoinAndCoordinatorFailed2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeServersJoin1() throws Exception { IgniteEx srv0 = startGrid(0); @@ -702,6 +723,7 @@ public void testMergeServersJoin1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeServerJoin1ClientsInTopology() throws Exception { IgniteEx srv0 = startGrid(0); @@ -739,6 +761,7 @@ public void testMergeServerJoin1ClientsInTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeAndNewCoordinator() throws Exception { final Ignite srv0 = startGrids(3); @@ -758,6 +781,7 @@ public void testMergeAndNewCoordinator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeServersFail1_1() throws Exception { mergeServersFail1(false); } @@ -765,6 +789,7 @@ public void testMergeServersFail1_1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeServersFail1_2() throws Exception { mergeServersFail1(true); } @@ -810,6 +835,7 @@ private void mergeServersFail1(boolean waitRebalance) throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeServersAndClientsFail1() throws Exception { mergeServersAndClientsFail(false); } @@ -817,11 +843,11 @@ public void testMergeServersAndClientsFail1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeServersAndClientsFail2() throws Exception { mergeServersAndClientsFail(true); } - /** * @param waitRebalance Wait for rebalance end before start tested topology change. 
* @throws Exception If failed. @@ -859,6 +885,7 @@ private void mergeServersAndClientsFail(boolean waitRebalance) throws Exception /** * @throws Exception If failed. */ + @Test public void testJoinExchangeCoordinatorChange_NoMerge_1() throws Exception { for (CoordinatorChangeMode mode : CoordinatorChangeMode.values()) { exchangeCoordinatorChangeNoMerge(4, true, mode); @@ -870,6 +897,7 @@ public void testJoinExchangeCoordinatorChange_NoMerge_1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinExchangeCoordinatorChange_NoMerge_2() throws Exception { for (CoordinatorChangeMode mode : CoordinatorChangeMode.values()) { exchangeCoordinatorChangeNoMerge(8, true, mode); @@ -881,6 +909,7 @@ public void testJoinExchangeCoordinatorChange_NoMerge_2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailExchangeCoordinatorChange_NoMerge_1() throws Exception { for (CoordinatorChangeMode mode : CoordinatorChangeMode.values()) { exchangeCoordinatorChangeNoMerge(5, false, mode); @@ -892,6 +921,7 @@ public void testFailExchangeCoordinatorChange_NoMerge_1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFailExchangeCoordinatorChange_NoMerge_2() throws Exception { for (CoordinatorChangeMode mode : CoordinatorChangeMode.values()) { exchangeCoordinatorChangeNoMerge(8, false, mode); @@ -903,6 +933,7 @@ public void testFailExchangeCoordinatorChange_NoMerge_2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMergeJoinExchangesCoordinatorChange1_4_servers() throws Exception { for (CoordinatorChangeMode mode : CoordinatorChangeMode.values()) { mergeJoinExchangesCoordinatorChange1(4, mode); @@ -914,6 +945,7 @@ public void testMergeJoinExchangesCoordinatorChange1_4_servers() throws Exceptio /** * @throws Exception If failed. 
*/ + @Test public void testMergeJoinExchangesCoordinatorChange1_8_servers() throws Exception { for (CoordinatorChangeMode mode : CoordinatorChangeMode.values()) { mergeJoinExchangesCoordinatorChange1(8, mode); @@ -955,6 +987,7 @@ private void mergeJoinExchangesCoordinatorChange1(final int srvs, CoordinatorCha /** * @throws Exception If failed. */ + @Test public void testMergeJoinExchangesCoordinatorChange2_4_servers() throws Exception { mergeJoinExchangeCoordinatorChange2(4, 2, F.asList(1, 2, 3, 4), F.asList(5)); @@ -998,6 +1031,7 @@ private void mergeJoinExchangeCoordinatorChange2(final int srvs, /** * @throws Exception If failed. */ + @Test public void testMergeExchangeCoordinatorChange4() throws Exception { testSpi = true; @@ -1378,7 +1412,7 @@ private void checkNodeCaches(final Ignite node) throws Exception { assertEquals(err, e.getValue(), res.get(e.getKey())); } - if (cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL) { + if (atomicityMode(cache) == TRANSACTIONAL) { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) checkNodeCaches(err, node, cache, concurrency, isolation); @@ -1456,11 +1490,12 @@ private void checkExchanges(Ignite node, long... 
vers) { for (int i = futs.size() - 1; i >= 0; i--) { GridDhtPartitionsExchangeFuture fut = futs.get(i); - if (fut.exchangeDone() && fut.firstEvent().type() != EVT_DISCOVERY_CUSTOM_EVT) { + if (!fut.isMerged() && fut.exchangeDone() && fut.firstEvent().type() != EVT_DISCOVERY_CUSTOM_EVT) { AffinityTopologyVersion resVer = fut.topologyVersion(); - if (resVer != null) - doneVers.add(resVer); + Assert.assertNotNull(resVer); + + doneVers.add(resVer); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetFutureHangsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetFutureHangsSelfTest.java index 3d1fe11bec8b9..16962e6431bbe 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetFutureHangsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetFutureHangsSelfTest.java @@ -35,21 +35,19 @@ import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC; /** * Test for reproducing problems during simultaneously Ignite instances stopping and cache requests executing. 
*/ +@RunWith(JUnit4.class) public class CacheGetFutureHangsSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Grid count. */ private static final int GRID_CNT = 8; @@ -64,8 +62,6 @@ public class CacheGetFutureHangsSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); cfg.setMarshaller(new BinaryMarshaller()); @@ -90,6 +86,7 @@ public class CacheGetFutureHangsSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testContainsKeyFailover() throws Exception { int cnt = 3; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetInsideLockChangingTopologyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetInsideLockChangingTopologyTest.java index 80aa9eee5d006..e768c8dd0b385 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetInsideLockChangingTopologyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGetInsideLockChangingTopologyTest.java @@ -37,13 +37,13 @@ import org.apache.ignite.internal.processors.cache.GridCacheAlwaysEvictionPolicy; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -56,10 +56,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheGetInsideLockChangingTopologyTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static ThreadLocal client = new ThreadLocal<>(); @@ -84,8 +82,6 @@ public class CacheGetInsideLockChangingTopologyTest extends GridCommonAbstractTe ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - Boolean clientMode = client.get(); client.set(null); @@ -119,8 +115,6 @@ private CacheConfiguration cacheConfiguration(String name, CacheAtomicityMode at /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true"); - super.beforeTestsStarted(); startGridsMultiThreaded(SRVS); @@ -149,12 +143,15 @@ private CacheConfiguration cacheConfiguration(String name, CacheAtomicityMode at /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); + stopAllGrids(); + + super.afterTestsStopped(); } /** * @throws Exception If failed. */ + @Test public void testTxGetInsideLockStopPrimary() throws Exception { getInsideLockStopPrimary(ignite(SRVS), TX_CACHE1); getInsideLockStopPrimary(ignite(SRVS + 1), TX_CACHE1); @@ -166,6 +163,7 @@ public void testTxGetInsideLockStopPrimary() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAtomicGetInsideLockStopPrimary() throws Exception { getInsideLockStopPrimary(ignite(SRVS), ATOMIC_CACHE); @@ -175,6 +173,7 @@ public void testAtomicGetInsideLockStopPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicGetInsideTxStopPrimary() throws Exception { getInsideTxStopPrimary(ignite(SRVS), ATOMIC_CACHE); @@ -184,6 +183,7 @@ public void testAtomicGetInsideTxStopPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadCommittedPessimisticStopPrimary() throws Exception { getReadCommittedStopPrimary(ignite(SRVS), TX_CACHE1, PESSIMISTIC); getReadCommittedStopPrimary(ignite(SRVS + 1), TX_CACHE1, PESSIMISTIC); @@ -195,6 +195,7 @@ public void testReadCommittedPessimisticStopPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadCommittedOptimisticStopPrimary() throws Exception { getReadCommittedStopPrimary(ignite(SRVS), TX_CACHE1, OPTIMISTIC); getReadCommittedStopPrimary(ignite(SRVS + 1), TX_CACHE1, OPTIMISTIC); @@ -363,6 +364,7 @@ private void getInsideTxStopPrimary(Ignite ignite, String cacheName) throws Exce /** * @throws Exception If failed. 
*/ + @Test public void testMultithreaded() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-2204"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGroupsPreloadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGroupsPreloadTest.java index 88596380c2aca..34d9dca53ca15 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGroupsPreloadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheGroupsPreloadTest.java @@ -22,18 +22,17 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheGroupsPreloadTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE1 = "cache1"; @@ -67,8 +66,6 @@ public class CacheGroupsPreloadTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration cfg1 = defaultCacheConfiguration() .setName(CACHE1) .setGroupName(GROUP1) @@ -90,6 +87,7 @@ public class CacheGroupsPreloadTest extends GridCommonAbstractTest { /** * 
@throws Exception If failed. */ + @Test public void testCachePreload1() throws Exception { cachePreloadTest(); } @@ -97,6 +95,7 @@ public void testCachePreload1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCachePreload2() throws Exception { atomicityMode = CacheAtomicityMode.TRANSACTIONAL; @@ -106,6 +105,18 @@ public void testCachePreload2() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testCachePreloadMvcc2() throws Exception { + atomicityMode = CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + + cachePreloadTest(); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCachePreload3() throws Exception { cacheMode = CacheMode.REPLICATED; @@ -115,6 +126,7 @@ public void testCachePreload3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCachePreload4() throws Exception { cacheMode = CacheMode.REPLICATED; atomicityMode = CacheAtomicityMode.TRANSACTIONAL; @@ -125,6 +137,18 @@ public void testCachePreload4() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCachePreloadMvcc4() throws Exception { + cacheMode = CacheMode.REPLICATED; + atomicityMode = CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + + cachePreloadTest(); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCachePreload5() throws Exception { sameGrp = false; @@ -134,6 +158,7 @@ public void testCachePreload5() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCachePreload6() throws Exception { sameGrp = false; atomicityMode = CacheAtomicityMode.TRANSACTIONAL; @@ -144,6 +169,19 @@ public void testCachePreload6() throws Exception { /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testCachePreloadMvcc6() throws Exception { + sameGrp = false; + atomicityMode = CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + + cachePreloadTest(); + } + + /** + * @throws Exception If failed. + */ + @Test public void testCachePreload7() throws Exception { sameGrp = false; cacheMode = CacheMode.REPLICATED; @@ -154,6 +192,7 @@ public void testCachePreload7() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCachePreload8() throws Exception { sameGrp = false; cacheMode = CacheMode.REPLICATED; @@ -162,6 +201,18 @@ public void testCachePreload8() throws Exception { cachePreloadTest(); } + /** + * @throws Exception If failed. + */ + @Test + public void testCachePreloadMvcc8() throws Exception { + sameGrp = false; + cacheMode = CacheMode.REPLICATED; + atomicityMode = CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + + cachePreloadTest(); + } + /** * @throws Exception If failed. */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentNodeJoinValidationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentNodeJoinValidationTest.java index 48b33b6c20b7e..0256f9355d6ed 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentNodeJoinValidationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentNodeJoinValidationTest.java @@ -19,18 +19,16 @@ import org.apache.ignite.Ignite; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheLateAffinityAssignmentNodeJoinValidationTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean lateAff; @@ -43,8 +41,6 @@ public class CacheLateAffinityAssignmentNodeJoinValidationTest extends GridCommo cfg.setLateAffinityAssignment(lateAff); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -60,6 +56,7 @@ public class CacheLateAffinityAssignmentNodeJoinValidationTest extends GridCommo /** * @throws Exception If failed. */ + @Test public void testJoinValidation1() throws Exception { checkNodeJoinValidation(false); } @@ -67,6 +64,7 @@ public void testJoinValidation1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJoinValidation2() throws Exception { checkNodeJoinValidation(true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentTest.java index c777abaf4caeb..e4fc758c632b1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLateAffinityAssignmentTest.java @@ -54,13 +54,12 @@ import org.apache.ignite.events.DiscoveryEvent; import org.apache.ignite.internal.DiscoverySpiTestListener; import org.apache.ignite.internal.GridKernalContext; -import org.apache.ignite.internal.cluster.NodeOrderComparator; -import org.apache.ignite.internal.cluster.NodeOrderLegacyComparator; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.IgniteNodeAttributes; import org.apache.ignite.internal.TestRecordingCommunicationSpi; import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException; +import org.apache.ignite.internal.cluster.NodeOrderComparator; import org.apache.ignite.internal.managers.discovery.IgniteDiscoverySpi; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.affinity.GridAffinityFunctionContextImpl; @@ -90,14 +89,16 @@ import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import 
org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -109,10 +110,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheLateAffinityAssignmentTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -158,14 +157,11 @@ public class CacheLateAffinityAssignmentTest extends GridCommonAbstractTest { cfg.setCommunicationSpi(commSpi); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); + TcpDiscoverySpi discoSpi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); discoSpi.setForceServerMode(forceSrvMode); - discoSpi.setIpFinder(ipFinder); discoSpi.setNetworkTimeout(60_000); - cfg.setDiscoverySpi(discoSpi); - cfg.setClientFailureDetectionTimeout(100000); CacheConfiguration[] ccfg; @@ -234,6 +230,7 @@ protected AffinityFunction affinityFunction(@Nullable Integer parts) { * * @throws Exception If failed. */ + @Test public void testDelayedAffinityCalculation() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -293,6 +290,7 @@ public void testDelayedAffinityCalculation() throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinitySimpleSequentialStart() throws Exception { startServer(0, 1); @@ -314,6 +312,7 @@ public void testAffinitySimpleSequentialStart() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAffinitySimpleSequentialStartNoCacheOnCoordinator() throws Exception { cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String igniteInstanceName) { @@ -334,6 +333,7 @@ public void testAffinitySimpleSequentialStartNoCacheOnCoordinator() throws Excep /** * @throws Exception If failed. */ + @Test public void testAffinitySimpleNoCacheOnCoordinator1() throws Exception { cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String igniteInstanceName) { @@ -373,6 +373,7 @@ public void testAffinitySimpleNoCacheOnCoordinator1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinitySimpleNoCacheOnCoordinator2() throws Exception { System.setProperty(IGNITE_EXCHANGE_COMPATIBILITY_VER_1, "true"); @@ -432,6 +433,7 @@ public void testAffinitySimpleNoCacheOnCoordinator2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateCloseClientCacheOnCoordinator1() throws Exception { cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String igniteInstanceName) { @@ -457,6 +459,7 @@ public void testCreateCloseClientCacheOnCoordinator1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateCloseClientCacheOnCoordinator2() throws Exception { cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String igniteInstanceName) { @@ -503,6 +506,7 @@ public void testCreateCloseClientCacheOnCoordinator2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheDestroyAndCreate1() throws Exception { cacheDestroyAndCreate(true); } @@ -510,6 +514,7 @@ public void testCacheDestroyAndCreate1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheDestroyAndCreate2() throws Exception { cacheDestroyAndCreate(false); } @@ -579,6 +584,7 @@ private void cacheDestroyAndCreate(boolean cacheOnCrd) throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinitySimpleNodeLeave1() throws Exception { affinitySimpleNodeLeave(2); } @@ -588,6 +594,7 @@ public void testAffinitySimpleNodeLeave1() throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinitySimpleNodeLeave2() throws Exception { affinitySimpleNodeLeave(4); } @@ -623,6 +630,7 @@ private void affinitySimpleNodeLeave(int cnt) throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinitySimpleNodeLeaveClientAffinity() throws Exception { startServer(0, 1); @@ -644,6 +652,7 @@ public void testAffinitySimpleNodeLeaveClientAffinity() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeLeaveExchangeWaitAffinityMessage() throws Exception { System.setProperty(IGNITE_EXCHANGE_COMPATIBILITY_VER_1, "true"); @@ -693,6 +702,7 @@ public void testNodeLeaveExchangeWaitAffinityMessage() throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinitySimpleClientNodeEvents1() throws Exception { affinitySimpleClientNodeEvents(1); } @@ -702,6 +712,7 @@ public void testAffinitySimpleClientNodeEvents1() throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinitySimpleClientNodeEvents2() throws Exception { affinitySimpleClientNodeEvents(3); } @@ -737,6 +748,7 @@ private void affinitySimpleClientNodeEvents(int srvs) throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentMultipleJoin1() throws Exception { delayAssignmentMultipleJoin(2); } @@ -746,6 +758,7 @@ public void testDelayAssignmentMultipleJoin1() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDelayAssignmentMultipleJoin2() throws Exception { delayAssignmentMultipleJoin(4); } @@ -794,6 +807,7 @@ private void delayAssignmentMultipleJoin(int joinCnt) throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentClientJoin() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -818,6 +832,7 @@ public void testDelayAssignmentClientJoin() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentClientLeave() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -848,6 +863,7 @@ public void testDelayAssignmentClientLeave() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentClientCacheStart() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -884,6 +900,7 @@ public void testDelayAssignmentClientCacheStart() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentCacheStart() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -918,6 +935,7 @@ public void testDelayAssignmentCacheStart() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentCacheDestroy() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -952,6 +970,7 @@ public void testDelayAssignmentCacheDestroy() throws Exception { * * @throws Exception If failed. */ + @Test public void testAffinitySimpleStopRandomNode() throws Exception { //fail("IGNITE-GG-12292"); @@ -1001,6 +1020,7 @@ public void testAffinitySimpleStopRandomNode() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentCoordinatorLeave1() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -1025,6 +1045,7 @@ public void testDelayAssignmentCoordinatorLeave1() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDelayAssignmentCoordinatorLeave2() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -1054,6 +1075,7 @@ public void testDelayAssignmentCoordinatorLeave2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg1() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(4, 3, false, 2); } @@ -1062,6 +1084,7 @@ public void testBlockedFinishMsg1() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg2() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(4, 3, false); } @@ -1070,6 +1093,7 @@ public void testBlockedFinishMsg2() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg3() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(4, 3, false, 1); } @@ -1078,6 +1102,7 @@ public void testBlockedFinishMsg3() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg4() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(5, 3, false); } @@ -1086,6 +1111,7 @@ public void testBlockedFinishMsg4() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg5() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(5, 3, false, 1); } @@ -1094,6 +1120,7 @@ public void testBlockedFinishMsg5() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg6() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(5, 3, false, 2); } @@ -1102,6 +1129,7 @@ public void testBlockedFinishMsg6() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg7() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(5, 3, false, 2, 4); } @@ -1110,6 +1138,7 @@ public void testBlockedFinishMsg7() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testBlockedFinishMsg8() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(6, 3, false, 2, 4); } @@ -1118,6 +1147,7 @@ public void testBlockedFinishMsg8() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsg9() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(5, 1, false, 4); } @@ -1126,6 +1156,7 @@ public void testBlockedFinishMsg9() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockedFinishMsgForClient() throws Exception { doTestCoordLeaveBlockedFinishExchangeMessage(5, 1, true, 4); } @@ -1223,6 +1254,7 @@ private void doTestCoordLeaveBlockedFinishExchangeMessage(int cnt, * * @throws Exception If failed. */ + @Test public void testCoordinatorLeaveAfterNodeLeavesDelayAssignment() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -1272,6 +1304,7 @@ public void testCoordinatorLeaveAfterNodeLeavesDelayAssignment() throws Exceptio * * @throws Exception If failed. */ + @Test public void testNodeLeftExchangeCoordinatorLeave1() throws Exception { nodeLeftExchangeCoordinatorLeave(3); } @@ -1281,6 +1314,7 @@ public void testNodeLeftExchangeCoordinatorLeave1() throws Exception { * * @throws Exception If failed. */ + @Test public void testNodeLeftExchangeCoordinatorLeave2() throws Exception { nodeLeftExchangeCoordinatorLeave(5); } @@ -1335,6 +1369,7 @@ private void nodeLeftExchangeCoordinatorLeave(int nodes) throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinExchangeBecomeCoordinator() throws Exception { long topVer = 0; @@ -1396,6 +1431,7 @@ public void testJoinExchangeBecomeCoordinator() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentAffinityChanged() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -1438,6 +1474,7 @@ public void testDelayAssignmentAffinityChanged() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDelayAssignmentAffinityChanged2() throws Exception { System.setProperty(IGNITE_EXCHANGE_COMPATIBILITY_VER_1, "true"); @@ -1519,6 +1556,7 @@ public void testDelayAssignmentAffinityChanged2() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelayAssignmentCacheDestroyCreate() throws Exception { Ignite ignite0 = startServer(0, 1); @@ -1572,6 +1610,7 @@ public void testDelayAssignmentCacheDestroyCreate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientCacheStartClose() throws Exception { cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String igniteInstanceName) { @@ -1600,6 +1639,7 @@ public void testClientCacheStartClose() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheStartDestroy() throws Exception { startGridsMultiThreaded(3, false); @@ -1634,6 +1674,7 @@ public void testCacheStartDestroy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInitCacheReceivedOnJoin() throws Exception { cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String s) { @@ -1677,6 +1718,7 @@ public void testInitCacheReceivedOnJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientStartFirst1() throws Exception { clientStartFirst(1); } @@ -1684,6 +1726,7 @@ public void testClientStartFirst1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientStartFirst2() throws Exception { clientStartFirst(3); } @@ -1724,7 +1767,11 @@ private void clientStartFirst(int clients) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRandomOperations() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10261"); + forceSrvMode = true; final int MAX_SRVS = 10; @@ -1935,6 +1982,7 @@ public void testRandomOperations() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentStartStaticCaches() throws Exception { concurrentStartStaticCaches(false); } @@ -1942,6 +1990,7 @@ public void testConcurrentStartStaticCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentStartStaticCachesWithClientNodes() throws Exception { concurrentStartStaticCaches(true); } @@ -2014,6 +2063,7 @@ private void concurrentStartStaticCaches(boolean withClients) throws Exception { /** * @throws Exception If failed. */ + @Test public void testServiceReassign() throws Exception { skipCheckOrder = true; @@ -2047,7 +2097,11 @@ public void testServiceReassign() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoForceKeysRequests() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10604"); + cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String s) { return null; @@ -2157,6 +2211,7 @@ public void testNoForceKeysRequests() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStreamer1() throws Exception { cacheC = new IgniteClosure() { @Override public CacheConfiguration[] apply(String s) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLoadingConcurrentGridStartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLoadingConcurrentGridStartSelfTest.java index cbd012443de9b..723161131ce0d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLoadingConcurrentGridStartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheLoadingConcurrentGridStartSelfTest.java @@ -22,6 +22,7 @@ import java.util.LinkedHashSet; import java.util.Set; import java.util.concurrent.Callable; +import java.util.concurrent.CountDownLatch; import javax.cache.Cache; import javax.cache.configuration.FactoryBuilder; import javax.cache.integration.CacheLoaderException; @@ -48,8 +49,12 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -59,6 +64,7 @@ /** * Tests for cache data loading during simultaneous grids start. */ +@RunWith(JUnit4.class) public class CacheLoadingConcurrentGridStartSelfTest extends GridCommonAbstractTest implements Serializable { /** Grids count */ private static int GRIDS_CNT = 5; @@ -78,6 +84,11 @@ public class CacheLoadingConcurrentGridStartSelfTest extends GridCommonAbstractT /** Restarts. 
*/ protected volatile boolean restarts; + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -135,6 +146,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception if failed */ + @Test public void testLoadCacheWithDataStreamer() throws Exception { configured = true; @@ -162,6 +174,7 @@ public void testLoadCacheWithDataStreamer() throws Exception { /** * @throws Exception if failed */ + @Test public void testLoadCacheFromStore() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-4210"); @@ -175,6 +188,7 @@ public void testLoadCacheFromStore() throws Exception { /** * @throws Exception if failed */ + @Test public void testLoadCacheWithDataStreamerSequentialClient() throws Exception { client = true; @@ -189,6 +203,7 @@ public void testLoadCacheWithDataStreamerSequentialClient() throws Exception { /** * @throws Exception if failed */ + @Test public void testLoadCacheWithDataStreamerSequentialClientWithConfig() throws Exception { client = true; configured = true; @@ -205,6 +220,7 @@ public void testLoadCacheWithDataStreamerSequentialClientWithConfig() throws Exc /** * @throws Exception if failed */ + @Test public void testLoadCacheWithDataStreamerSequential() throws Exception { loadCacheWithDataStreamerSequential(); } @@ -212,6 +228,7 @@ public void testLoadCacheWithDataStreamerSequential() throws Exception { /** * @throws Exception if failed */ + @Test public void testLoadCacheWithDataStreamerSequentialWithConfigAndRestarts() throws Exception { restarts = true; configured = true; @@ -228,6 +245,7 @@ public void testLoadCacheWithDataStreamerSequentialWithConfigAndRestarts() throw /** * @throws Exception if failed */ + @Test public void 
testLoadCacheWithDataStreamerSequentialWithConfig() throws Exception { configured = true; @@ -261,8 +279,11 @@ private void loadCacheWithDataStreamerSequential() throws Exception { } }); + CountDownLatch startNodesLatch = new CountDownLatch(1); IgniteInternalFuture fut = runAsync(new Callable() { @Override public Object call() throws Exception { + startNodesLatch.await(); + for (int i = 2; i < GRIDS_CNT; i++) startGrid(i); @@ -272,24 +293,35 @@ private void loadCacheWithDataStreamerSequential() throws Exception { final HashSet set = new HashSet<>(); - IgniteInClosure f = new IgniteInClosure() { - @Override public void apply(Ignite grid) { - try (IgniteDataStreamer dataStreamer = grid.dataStreamer(DEFAULT_CACHE_NAME)) { - dataStreamer.allowOverwrite(allowOverwrite); + boolean stop = false; + int insertedKeys = 0; - ((DataStreamerImpl)dataStreamer).maxRemapCount(Integer.MAX_VALUE); + startNodesLatch.countDown(); - for (int i = 0; i < KEYS_CNT; i++) { - set.add(dataStreamer.addData(i, "Data")); + try (IgniteDataStreamer dataStreamer = g0.dataStreamer(DEFAULT_CACHE_NAME)) { + dataStreamer.allowOverwrite(allowOverwrite); + ((DataStreamerImpl)dataStreamer).maxRemapCount(Integer.MAX_VALUE); - if (i % 100000 == 0) - log.info("Streaming " + i + "'th entry."); - } - } - } - }; + long startingEndTs = -1L; - f.apply(g0); + while (!stop) { + set.add(dataStreamer.addData(insertedKeys, "Data")); + insertedKeys = insertedKeys + 1; + + if (insertedKeys % 100000 == 0) + log.info("Streaming " + insertedKeys + "'th entry."); + + // Once all nodes have started, keep restarting nodes for 1 second, then stop the restarts. + if (fut.isDone() && startingEndTs == -1) + startingEndTs = System.currentTimeMillis(); + + if (startingEndTs != -1) // Node startup has finished; measure the restart window from this point. + restarts = (System.currentTimeMillis() - startingEndTs) < 1000; + + // Stop the test once all keys are inserted or the restart window has elapsed.
+ stop = insertedKeys >= KEYS_CNT || (fut.isDone() && !restarts); + } + } log.info("Data loaded."); @@ -305,10 +337,10 @@ private void loadCacheWithDataStreamerSequential() throws Exception { long size = cache.size(CachePeekMode.PRIMARY); - if (size != KEYS_CNT) { + if (size != insertedKeys) { Set failedKeys = new LinkedHashSet<>(); - for (int i = 0; i < KEYS_CNT; i++) + for (int i = 0; i < insertedKeys; i++) if (!cache.containsKey(i)) { log.info("Actual cache size: " + size); @@ -336,7 +368,7 @@ private void loadCacheWithDataStreamerSequential() throws Exception { assert failedKeys.isEmpty() : "Some failed keys: " + failedKeys.toString(); } - assertCacheSize(); + assertCacheSize(insertedKeys); } /** @@ -361,20 +393,20 @@ protected void loadCache(IgniteInClosure f) throws Exception { fut.get(); } - assertCacheSize(); + assertCacheSize(KEYS_CNT); } /** * @throws Exception If failed. */ - private void assertCacheSize() throws Exception { + private void assertCacheSize(int expectedCacheSize) throws Exception { final IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); boolean consistentCache = GridTestUtils.waitForCondition(new GridAbsPredicate() { @Override public boolean apply() { int size = cache.size(CachePeekMode.PRIMARY); - if (size != KEYS_CNT) + if (size != expectedCacheSize) log.info("Cache size: " + size); int total = 0; @@ -382,10 +414,10 @@ private void assertCacheSize() throws Exception { for (int i = 0; i < GRIDS_CNT; i++) total += grid(i).cache(DEFAULT_CACHE_NAME).localSize(CachePeekMode.PRIMARY); - if (total != KEYS_CNT) + if (total != expectedCacheSize) log.info("Total size: " + size); - return size == KEYS_CNT && KEYS_CNT == total; + return size == expectedCacheSize && expectedCacheSize == total; } }, 2 * 60_000); @@ -418,4 +450,4 @@ private static class TestCacheStoreAdapter extends CacheStoreAdapter f; + + Exception err = null; + + while ((f = q.poll()) != null) { + try { + f.get(60_000); + } + catch (Exception e) { + error("Test operation 
failed: " + e, e); - lock.unlock(); + if (err == null) + err = e; + } + } + + if (err != null) + fail("Test operation failed, see log for details"); + } + finally { + stopAllGrids(); + } + } + + /** + * @throws Exception If failed. + */ + @Test + public void testLockNodeStop() throws Exception { + final int nodeCnt = 3; + int threadCnt = 2; + final int keys = 100; + + try { + final AtomicBoolean stop = new AtomicBoolean(false); + + Queue> q = new ArrayDeque<>(nodeCnt); + + for (int i = 0; i < nodeCnt; i++) { + final Ignite ignite = startGrid(i); + + IgniteInternalFuture f = GridTestUtils.runMultiThreadedAsync(new Runnable() { + @Override public void run() { + while (!Thread.currentThread().isInterrupted() && !stop.get()) { + try { + IgniteCache cache = ignite.cache(REPLICATED_TEST_CACHE); + + for (int i = 0; i < keys; i++) { + Lock lock = cache.lock(i); + lock.lock(); + + try { + cache.put(i, i); + } + finally { + lock.unlock(); + } + } + } + catch (Exception e) { + log.info("Ignore error: " + e); + + break; } } } @@ -168,12 +251,31 @@ public void testLockTopologyChange() throws Exception { U.sleep(1_000); } + U.sleep(ThreadLocalRandom.current().nextLong(500) + 500); + + // Stop all nodes, check that threads executing cache operations do not hang. + stopAllGrids(); + stop.set(true); IgniteInternalFuture f; - while ((f = q.poll()) != null) - f.get(2_000); + Exception err = null; + + while ((f = q.poll()) != null) { + try { + f.get(60_000); + } + catch (Exception e) { + error("Test operation failed: " + e, e); + + if (err == null) + err = e; + } + } + + if (err != null) + fail("Test operation failed, see log for details"); } finally { stopAllGrids(); @@ -183,6 +285,7 @@ public void testLockTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxLockRelease() throws Exception { startGrids(2); @@ -231,6 +334,7 @@ public void testTxLockRelease() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLockRelease2() throws Exception { final Ignite ignite0 = startGrid(0); @@ -277,6 +381,7 @@ public void testLockRelease2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLockRelease3() throws Exception { startGrid(0); @@ -311,6 +416,7 @@ public void testLockRelease3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxLockRelease2() throws Exception { final Ignite ignite0 = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheOperationsInterruptTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheOperationsInterruptTest.java new file mode 100644 index 0000000000000..50de443ad8a34 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheOperationsInterruptTest.java @@ -0,0 +1,167 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; + +import java.util.concurrent.Callable; +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.atomic.AtomicBoolean; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.REPLICATED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheOperationsInterruptTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + + ccfg.setAtomicityMode(TRANSACTIONAL); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + ccfg.setCacheMode(REPLICATED); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + super.afterTest(); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testInterruptPessimisticTx() throws Exception { + final int NODES = 3; + + startGrids(NODES); + + awaitPartitionMapExchange(); + + ThreadLocalRandom rnd = ThreadLocalRandom.current(); + + Ignite node = ignite(0); + + IgniteCache cache = node.cache(DEFAULT_CACHE_NAME); + + final int KEYS = 100; + + final boolean changeTop = true; + + for (int i = 0; i < 10; i++) { + info("Iteration: " + i); + + final AtomicBoolean stop = new AtomicBoolean(); + + try { + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(new Runnable() { + @Override public void run() { + Ignite node = ignite(0); + + IgniteCache cache = node.cache(DEFAULT_CACHE_NAME); + + ThreadLocalRandom rnd = ThreadLocalRandom.current(); + + while (!stop.get()) { + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + for (int i = 0; i < KEYS; i++) { + if (rnd.nextBoolean()) + cache.get(i); + } + } + } + } + }, 3, "tx-thread"); + + IgniteInternalFuture changeTopFut = null; + + if (changeTop) { + changeTopFut = GridTestUtils.runAsync(new Callable() { + @Override public Void call() throws Exception { + while (!stop.get()) { + startGrid(NODES); + + stopGrid(NODES); + } + + return null; + } + }); + } + + U.sleep(rnd.nextInt(500)); + + fut.cancel(); + + U.sleep(rnd.nextInt(500)); + + stop.set(true); + + try { + fut.get(); + } + catch (Exception e) { + info("Ignore error: " + e); + } + + if (changeTopFut != null) + changeTopFut.get(); + + info("Try get"); + + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + for (int k = 0; k < KEYS; k++) + cache.get(k); + } + + info("Try get done"); + + startGrid(NODES); + stopGrid(NODES); + } + finally { + stop.set(true); + } + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePageWriteLockUnlockTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePageWriteLockUnlockTest.java 
index 84fd91656f9c2..7af5c8d346b20 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePageWriteLockUnlockTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePageWriteLockUnlockTest.java @@ -21,6 +21,7 @@ import javax.cache.Cache; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; @@ -37,12 +38,16 @@ import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; /** * */ +@RunWith(JUnit4.class) public class CachePageWriteLockUnlockTest extends GridCommonAbstractTest { /** */ public static final int PARTITION = 0; @@ -51,8 +56,9 @@ public class CachePageWriteLockUnlockTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME). 
- setAffinity(new RendezvousAffinityFunction(false, 32))); + cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME) + .setAffinity(new RendezvousAffinityFunction(false, 32)) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); cfg.setActiveOnStart(false); @@ -72,6 +78,7 @@ public class CachePageWriteLockUnlockTest extends GridCommonAbstractTest { /** * */ + @Test public void testPreloadPartition() throws Exception { try { IgniteEx grid0 = startGrid(0); @@ -100,6 +107,8 @@ public void testPreloadPartition() throws Exception { grid0 = startGrid(0); + grid0.cluster().active(true); + preloadPartition(grid0, DEFAULT_CACHE_NAME, PARTITION); Iterator> it = grid0.cache(DEFAULT_CACHE_NAME).iterator(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheParallelStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheParallelStartTest.java new file mode 100644 index 0000000000000..1f5bd356d5cef --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheParallelStartTest.java @@ -0,0 +1,197 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import java.util.ArrayList; +import java.util.Collection; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.CacheGroupContext; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Test covers parallel start and stop of caches. 
+ */ +@RunWith(JUnit4.class) +public class CacheParallelStartTest extends GridCommonAbstractTest { + /** */ + private static final int CACHES_COUNT = 500; + + /** */ + private static final String STATIC_CACHE_PREFIX = "static-cache-"; + + /** */ + private static final String STATIC_CACHE_CACHE_GROUP_NAME = "static-cache-group"; + + /** + * {@inheritDoc} + */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setSystemThreadPoolSize(Runtime.getRuntime().availableProcessors() * 3); + + long sz = 100 * 1024 * 1024; + + DataStorageConfiguration memCfg = new DataStorageConfiguration().setPageSize(1024) + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration().setPersistenceEnabled(false).setInitialSize(sz).setMaxSize(sz)) + .setWalMode(WALMode.LOG_ONLY).setCheckpointFrequency(24L * 60 * 60 * 1000); + + cfg.setDataStorageConfiguration(memCfg); + + ArrayList staticCaches = new ArrayList<>(CACHES_COUNT); + + for (int i = 0; i < CACHES_COUNT; i++) + staticCaches.add(cacheConfiguration(STATIC_CACHE_PREFIX + i)); + + cfg.setCacheConfiguration(staticCaches.toArray(new CacheConfiguration[CACHES_COUNT])); + + return cfg; + } + + /** + * @param cacheName Cache name. + * @return Cache configuration. 
+ */ + private CacheConfiguration cacheConfiguration(String cacheName) { + CacheConfiguration cfg = defaultCacheConfiguration(); + + cfg.setName(cacheName); + cfg.setBackups(1); + cfg.setGroupName(STATIC_CACHE_CACHE_GROUP_NAME); + cfg.setIndexedTypes(Long.class, Long.class); + + return cfg; + } + + /** + * {@inheritDoc} + */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + cleanupTestData(); + } + + /** + * {@inheritDoc} + */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + cleanupTestData(); + } + + /** */ + private void cleanupTestData() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + System.clearProperty(IgniteSystemProperties.IGNITE_ALLOW_START_CACHES_IN_PARALLEL); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testParallelStartAndStop() throws Exception { + testParallelStartAndStop(true); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testSequentialStartAndStop() throws Exception { + testParallelStartAndStop(false); + } + + /** + * + */ + private void testParallelStartAndStop(boolean parallel) throws Exception { + System.setProperty(IgniteSystemProperties.IGNITE_ALLOW_START_CACHES_IN_PARALLEL, String.valueOf(parallel)); + + IgniteEx igniteEx = startGrid(0); + + IgniteEx igniteEx2 = startGrid(1); + + igniteEx.cluster().active(true); + + assertCaches(igniteEx); + + assertCaches(igniteEx2); + + igniteEx.cluster().active(false); + + assertCachesAfterStop(igniteEx); + + assertCachesAfterStop(igniteEx2); + } + + /** + * + */ + private void assertCachesAfterStop(IgniteEx igniteEx) { + assertNull(igniteEx + .context() + .cache() + .cacheGroup(CU.cacheId(STATIC_CACHE_CACHE_GROUP_NAME))); + + assertTrue(igniteEx.context().cache().cacheGroups().isEmpty()); + + for (int i = 0; i < CACHES_COUNT; i++) { + assertNull(igniteEx.context().cache().cache(STATIC_CACHE_PREFIX + i)); + 
assertNull(igniteEx.context().cache().internalCache(STATIC_CACHE_PREFIX + i)); + } + } + + /** + * + */ + private void assertCaches(IgniteEx igniteEx) { + Collection caches = igniteEx + .context() + .cache() + .cacheGroup(CU.cacheId(STATIC_CACHE_CACHE_GROUP_NAME)) + .caches(); + + assertEquals(CACHES_COUNT, caches.size()); + + @Nullable CacheGroupContext cacheGroup = igniteEx + .context() + .cache() + .cacheGroup(CU.cacheId(STATIC_CACHE_CACHE_GROUP_NAME)); + + for (GridCacheContext cacheContext : caches) + assertEquals(cacheContext.group(), cacheGroup); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePartitionStateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePartitionStateTest.java index 95059187cfdcf..158de18fe2347 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePartitionStateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePartitionStateTest.java @@ -37,10 +37,10 @@ import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.EVICTED; @@ -50,10 +50,8 @@ /** * */ +@RunWith(JUnit4.class) public class CachePartitionStateTest extends GridCommonAbstractTest { - /** */ - private 
static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -64,8 +62,6 @@ public class CachePartitionStateTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); cfg.setClientMode(client); @@ -89,6 +85,7 @@ public class CachePartitionStateTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPartitionState1_1() throws Exception { partitionState1(0, true); } @@ -96,6 +93,7 @@ public void testPartitionState1_1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionState1_2() throws Exception { partitionState1(1, true); } @@ -103,6 +101,7 @@ public void testPartitionState1_2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionState1_2_NoCacheOnCoordinator() throws Exception { partitionState1(1, false); } @@ -110,6 +109,7 @@ public void testPartitionState1_2_NoCacheOnCoordinator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionState1_3() throws Exception { partitionState1(100, true); } @@ -117,6 +117,7 @@ public void testPartitionState1_3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionState2_1() throws Exception { partitionState2(0, true); } @@ -124,6 +125,7 @@ public void testPartitionState2_1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionState2_2() throws Exception { partitionState2(1, true); } @@ -131,6 +133,7 @@ public void testPartitionState2_2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPartitionState2_2_NoCacheOnCoordinator() throws Exception { partitionState2(1, false); } @@ -138,6 +141,7 @@ public void testPartitionState2_2_NoCacheOnCoordinator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionState2_3() throws Exception { partitionState2(100, true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePutAllFailoverAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePutAllFailoverAbstractTest.java index 57a150fbfd2b1..cc7d5ece5d6c3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePutAllFailoverAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CachePutAllFailoverAbstractTest.java @@ -38,12 +38,15 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public abstract class CachePutAllFailoverAbstractTest extends GridCacheAbstractSelfTest { /** */ private static final int NODE_CNT = 2; @@ -95,6 +98,7 @@ public abstract class CachePutAllFailoverAbstractTest extends GridCacheAbstractS /** * @throws Exception If failed. */ + @org.junit.Test public void testPutAllFailover() throws Exception { testPutAllFailover(Test.PUT_ALL); } @@ -102,6 +106,7 @@ public void testPutAllFailover() throws Exception { /** * @throws Exception If failed. */ + @org.junit.Test public void testPutAllFailoverPessimisticTx() throws Exception { if (atomicityMode() == CacheAtomicityMode.ATOMIC) return; @@ -112,6 +117,7 @@ public void testPutAllFailoverPessimisticTx() throws Exception { /** * @throws Exception If failed. 
*/ + @org.junit.Test public void testPutAllFailoverAsync() throws Exception { testPutAllFailover(Test.PUT_ALL_ASYNC); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheRentingStateRepairTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheRentingStateRepairTest.java index b21c95ea88a9f..0933a51cb2ad0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheRentingStateRepairTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheRentingStateRepairTest.java @@ -31,22 +31,21 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.waitForCondition; /** * */ +@RunWith(JUnit4.class) public class CacheRentingStateRepairTest extends GridCommonAbstractTest { /** */ public static final int PARTS = 1024; - /** */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -69,12 +68,6 @@ public class CacheRentingStateRepairTest extends GridCommonAbstractTest { cfg.setConsistentId(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - 
discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - long sz = 100 * 1024 * 1024; DataStorageConfiguration memCfg = new DataStorageConfiguration().setPageSize(1024) @@ -95,6 +88,7 @@ public class CacheRentingStateRepairTest extends GridCommonAbstractTest { /** * */ + @Test public void testRentingStateRepairAfterRestart() throws Exception { try { IgniteEx g0 = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheResultIsNotNullOnPartitionLossTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheResultIsNotNullOnPartitionLossTest.java new file mode 100644 index 0000000000000..4e042633fafef --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheResultIsNotNullOnPartitionLossTest.java @@ -0,0 +1,226 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import java.util.Collections; +import java.util.List; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; +import java.util.stream.Collectors; +import java.util.stream.IntStream; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteClientDisconnectedException; +import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.cache.PartitionLossPolicy; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.EventType; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.NodeStoppingException; +import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; +import org.apache.ignite.internal.processors.cache.CacheInvalidStateException; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheResultIsNotNullOnPartitionLossTest extends GridCommonAbstractTest { + /** Number of servers to be started. */ + private static final int SERVERS = 5; + + /** Index of the node that is going to be the only client node. */ + private static final int CLIENT_IDX = SERVERS; + + /** Number of cache entries to insert into the test cache.
*/ + private static final int CACHE_ENTRIES_CNT = 60; + + /** True if {@link #getConfiguration(String)} is expected to configure a client node on subsequent invocations. */ + private boolean isClient; + + /** Client Ignite instance. */ + private IgniteEx client; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setIncludeEventTypes(EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST); + + cfg.setCacheConfiguration( + new CacheConfiguration<>(DEFAULT_CACHE_NAME) + .setCacheMode(CacheMode.PARTITIONED) + .setBackups(0) + .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) + .setAffinity(new RendezvousAffinityFunction(false, 50)) + .setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE) + ); + + if (isClient) + cfg.setClientMode(true); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + List list = IntStream.range(0, SERVERS).boxed().collect(Collectors.toList()); + + Collections.shuffle(list); + + for (Integer i : list) + startGrid(i); + + isClient = true; + + client = startGrid(CLIENT_IDX); + + try (IgniteDataStreamer dataStreamer = client.dataStreamer(DEFAULT_CACHE_NAME)) { + dataStreamer.allowOverwrite(true); + + for (int i = 0; i < CACHE_ENTRIES_CNT; i++) + dataStreamer.addData(i, i); + } + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testCacheResultIsNotNullOnClient() throws Exception { + testCacheResultIsNotNull0(client); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testCacheResultIsNotNullOnLastServer() throws Exception { + testCacheResultIsNotNull0(grid(SERVERS - 1)); + } + + /** + * @throws Exception If failed.
+ */ + @Test + public void testCacheResultIsNotNullOnServer() throws Exception { + testCacheResultIsNotNull0(grid(SERVERS - 2)); + } + /** + * @throws Exception If failed. + */ + private void testCacheResultIsNotNull0(IgniteEx ignite) throws Exception { + AtomicBoolean stopReading = new AtomicBoolean(); + + AtomicReference unexpectedThrowable = new AtomicReference<>(); + + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); + + CountDownLatch readerThreadStarted = new CountDownLatch(1); + + IgniteInternalFuture nullCacheValFoundFut = GridTestUtils.runAsync(() -> { + readerThreadStarted.countDown(); + + while (!stopReading.get()) + for (int i = 0; i < CACHE_ENTRIES_CNT && !stopReading.get(); i++) { + try { + if (cache.get(i) == null) + return true; + } + catch (Throwable t) { + if (expectedThrowableClass(t)) { + try { + cache.put(i, i); + + unexpectedThrowable.set(new RuntimeException("Cache put was successful for entry " + i)); + } + catch (Throwable t2) { + if (!expectedThrowableClass(t2)) + unexpectedThrowable.set(t2); + } + } + else + unexpectedThrowable.set(t); + + break; + } + } + + return false; + }); + + try { + readerThreadStarted.await(1, TimeUnit.SECONDS); + + for (int i = 0; i < SERVERS - 1; i++) { + grid(i).close(); + + Thread.sleep(400L); + } + } + finally { + // Ask reader thread to finish its execution. 
+ stopReading.set(true); + } + + assertFalse("Null value was returned by cache.get instead of exception.", nullCacheValFoundFut.get()); + + Throwable throwable = unexpectedThrowable.get(); + if (throwable != null) { + throwable.printStackTrace(); + + fail(throwable.getMessage()); + } + } + + /** + * @return {@code True} if the given throwable is caused by one of the expected exception classes. + */ + private boolean expectedThrowableClass(Throwable throwable) { + return X.hasCause( + throwable, + IgniteClientDisconnectedException.class, + CacheInvalidStateException.class, + ClusterTopologyCheckedException.class, + IllegalStateException.class, + NodeStoppingException.class + ); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheStartOnJoinTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheStartOnJoinTest.java index d59f4709506d1..e04f27b9f2be8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheStartOnJoinTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheStartOnJoinTest.java @@ -38,23 +38,23 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryJoinRequestMessage; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.testframework.MvccFeatureChecker.assertMvccWriteConflict; /** * */ 
@SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheStartOnJoinTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Iteration. */ private static final int ITERATIONS = 3; @@ -92,7 +92,7 @@ public class CacheStartOnJoinTest extends GridCommonAbstractTest { } }; - testSpi.setIpFinder(ipFinder); + testSpi.setIpFinder(sharedStaticIpFinder); testSpi.setJoinTimeout(60_000); cfg.setDiscoverySpi(testSpi); @@ -128,6 +128,7 @@ public class CacheStartOnJoinTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testConcurrentClientsStart1() throws Exception { concurrentClientsStart(false); } @@ -135,6 +136,7 @@ public void testConcurrentClientsStart1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentClientsStart2() throws Exception { concurrentClientsStart(true); } @@ -184,7 +186,18 @@ private void doTest(final boolean createCache) throws Exception { if (createCache) { for (int c = 0; c < 5; c++) { for (IgniteCache cache : node.getOrCreateCaches(cacheConfigurations())) { - cache.put(c, c); + boolean updated = false; + + while (!updated) { + try { + cache.put(c, c); + + updated = true; + } + catch (Exception e) { + assertMvccWriteConflict(e); + } + } assertEquals(c, cache.get(c)); } @@ -256,4 +269,4 @@ private CacheConfiguration cacheConfiguration(String cacheName) { private void checkCache(Ignite node, final String cacheName) { assertNotNull(((IgniteKernal)node).context().cache().cache(cacheName)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTryLockMultithreadedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTryLockMultithreadedTest.java index 82d9c129bcebd..ca4ad037e747e 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTryLockMultithreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTryLockMultithreadedTest.java @@ -23,11 +23,12 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -37,10 +38,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheTryLockMultithreadedTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRVS = 2; @@ -51,8 +50,6 @@ public class CacheTryLockMultithreadedTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); @@ -77,9 +74,15 @@ public class CacheTryLockMultithreadedTest extends GridCommonAbstractTest { startGrid(SRVS); } + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + 
MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + } + /** * @throws Exception If failed. */ + @Test public void testTryLock() throws Exception { Ignite client = grid(SRVS); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTxNearUpdateTopologyChangeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTxNearUpdateTopologyChangeTest.java index df02bb963ddf5..f00f56b78cd47 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTxNearUpdateTopologyChangeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/CacheTxNearUpdateTopologyChangeTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.internal.processors.cache.CacheNearUpdateTopologyChangeAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -26,6 +27,13 @@ * */ public class CacheTxNearUpdateTopologyChangeTest extends CacheNearUpdateTopologyChangeAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractDistributedByteArrayValuesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractDistributedByteArrayValuesSelfTest.java index 6210a302f0b67..85a91b7714132 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractDistributedByteArrayValuesSelfTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractDistributedByteArrayValuesSelfTest.java @@ -17,17 +17,19 @@ package org.apache.ignite.internal.processors.cache.distributed; -import java.util.Arrays; -import java.util.Collections; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; -import org.apache.ignite.cache.CachePeekMode; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheAbstractByteArrayValuesSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -37,19 +39,34 @@ /** * Tests for byte array values in distributed caches. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractDistributedByteArrayValuesSelfTest extends GridCacheAbstractByteArrayValuesSelfTest { - /** Grids. */ - protected static Ignite[] ignites; + /** */ + private static final String CACHE = "cache"; + + /** */ + private static final String MVCC_CACHE = "mvccCache"; /** Regular caches. */ private static IgniteCache[] caches; + /** Mvcc caches. 
*/ + private static IgniteCache[] mvccCaches; + /** {@inheritDoc} */ + @SuppressWarnings("unchecked") @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - c.setCacheConfiguration(cacheConfiguration()); + CacheConfiguration mvccCfg = cacheConfiguration(MVCC_CACHE) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT) + .setNearConfiguration(null); // TODO IGNITE-7187: remove near cache disabling. + + + CacheConfiguration ccfg = cacheConfiguration(CACHE); + + c.setCacheConfiguration(ccfg, mvccCfg); c.setPeerClassLoadingEnabled(peerClassLoading()); @@ -69,12 +86,13 @@ protected int gridCount() { } /** + * @param name Cache name. * @return Cache configuration. */ - protected CacheConfiguration cacheConfiguration() { + protected CacheConfiguration cacheConfiguration(String name) { CacheConfiguration cfg = cacheConfiguration0(); - cfg.setName(CACHE_REGULAR); + cfg.setName(name); return cfg; } @@ -87,26 +105,31 @@ protected CacheConfiguration cacheConfiguration() { /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + int gridCnt = gridCount(); assert gridCnt > 0; - ignites = new Ignite[gridCnt]; - caches = new IgniteCache[gridCnt]; + mvccCaches = new IgniteCache[gridCnt]; - for (int i = 0; i < gridCnt; i++) { - ignites[i] = startGrid(i); + startGridsMultiThreaded(gridCnt); - caches[i] = ignites[i].cache(CACHE_REGULAR); + for (int i = 0; i < gridCnt; i++) { + caches[i] = grid(i).cache(CACHE); + mvccCaches[i] = grid(i).cache(MVCC_CACHE); } } /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { caches = null; + mvccCaches = null; - ignites = null; + stopAllGrids(); + + super.afterTestsStopped(); } /** @@ -114,6 +137,7 @@ protected CacheConfiguration cacheConfiguration() { * * @throws Exception If failed. 
*/ + @Test public void testPessimistic() throws Exception { testTransaction0(caches, PESSIMISTIC, KEY_1, wrap(1)); } @@ -123,6 +147,7 @@ public void testPessimistic() throws Exception { * * @throws Exception If failed. */ + @Test public void testPessimisticMixed() throws Exception { testTransactionMixed0(caches, PESSIMISTIC, KEY_1, wrap(1), KEY_2, 1); } @@ -132,6 +157,7 @@ public void testPessimisticMixed() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimistic() throws Exception { testTransaction0(caches, OPTIMISTIC, KEY_1, wrap(1)); } @@ -141,10 +167,32 @@ public void testOptimistic() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimisticMixed() throws Exception { testTransactionMixed0(caches, OPTIMISTIC, KEY_1, wrap(1), KEY_2, 1); } + /** + * Check whether MVCC cache with byte array entry works correctly in PESSIMISTIC transaction. + * + * @throws Exception If failed. + */ + @Test + public void testPessimisticMvcc() throws Exception { + testTransaction0(mvccCaches, PESSIMISTIC, KEY_1, wrap(1)); + } + + /** + * Check whether MVCC cache with mixed byte array and regular entries works correctly in PESSIMISTIC transaction. + * + * @throws Exception If failed. + */ + @Test + public void testPessimisticMvccMixed() throws Exception { + testTransactionMixed0(mvccCaches, PESSIMISTIC, KEY_1, wrap(1), KEY_2, 1); + } + + + /** * Test transaction behavior. 
* @@ -172,6 +220,9 @@ private void testTransaction0(IgniteCache[] caches, Transaction */ private void testTransactionMixed0(IgniteCache[] caches, TransactionConcurrency concurrency, Integer key1, byte[] val1, @Nullable Integer key2, @Nullable Object val2) throws Exception { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, REPEATABLE_READ)) + return; + for (IgniteCache cache : caches) { info("Checking cache: " + cache.getName()); @@ -227,4 +278,4 @@ private void testTransactionMixed0(IgniteCache[] caches, Transa assertNull(cache.get(key1)); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractJobExecutionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractJobExecutionTest.java index 589435951d247..0242b773d6ec3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractJobExecutionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractJobExecutionTest.java @@ -22,20 +22,19 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.util.typedef.CX1; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import 
org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; @@ -44,29 +43,14 @@ /** * Tests cache access from within jobs. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractJobExecutionTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Job counter. */ private static final AtomicInteger cntr = new AtomicInteger(0); /** */ private static final int GRID_CNT = 4; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGridsMultiThreaded(GRID_CNT, true); @@ -97,6 +81,7 @@ public abstract class GridCacheAbstractJobExecutionTest extends GridCommonAbstra /** * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableRead() throws Exception { checkTransactions(PESSIMISTIC, REPEATABLE_READ, 1000); } @@ -104,6 +89,7 @@ public void testPessimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticSerializable() throws Exception { checkTransactions(PESSIMISTIC, SERIALIZABLE, 1000); } @@ -192,4 +178,4 @@ private void checkTransactions( } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractNodeRestartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractNodeRestartSelfTest.java index f32aab5fe9b6a..71432ecd92c40 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractNodeRestartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractNodeRestartSelfTest.java @@ -42,12 +42,12 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -60,7 +60,7 @@ /** * Test node restart. */ -@SuppressWarnings({"PointlessArithmeticExpression"}) +@RunWith(JUnit4.class) public abstract class GridCacheAbstractNodeRestartSelfTest extends GridCommonAbstractTest { /** Cache name. 
*/ protected static final String CACHE_NAME = "TEST_CACHE"; @@ -119,9 +119,6 @@ public abstract class GridCacheAbstractNodeRestartSelfTest extends GridCommonAbs /** Retries. */ private int retries = DFLT_RETRIES; - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -129,16 +126,12 @@ public abstract class GridCacheAbstractNodeRestartSelfTest extends GridCommonAbs ((TcpCommunicationSpi)c.getCommunicationSpi()).setSharedMemoryPort(-1); // Discovery. - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); + TcpDiscoverySpi disco = (TcpDiscoverySpi)c.getDiscoverySpi(); disco.setSocketTimeout(30_000); disco.setAckTimeout(30_000); disco.setNetworkTimeout(30_000); - c.setDiscoverySpi(disco); - CacheConfiguration ccfg = cacheConfiguration(); if (evict) { @@ -160,20 +153,10 @@ public abstract class GridCacheAbstractNodeRestartSelfTest extends GridCommonAbs */ protected abstract CacheConfiguration cacheConfiguration(); - /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true"); - - super.beforeTestsStarted(); - } - - /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { + super.afterTest(); + stopAllGrids(); } @@ -217,6 +200,7 @@ private void startGrids() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRestart() throws Exception { rebalancMode = SYNC; partitions = 3; @@ -293,6 +277,7 @@ protected TransactionConcurrency txConcurrency() { /** * @throws Exception If failed. 
*/ + @Test public void testRestartWithPutTwoNodesNoBackups() throws Throwable { backups = 0; nodeCnt = 2; @@ -309,6 +294,7 @@ public void testRestartWithPutTwoNodesNoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxTwoNodesNoBackups() throws Throwable { backups = 0; nodeCnt = 2; @@ -325,6 +311,7 @@ public void testRestartWithTxTwoNodesNoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithPutTwoNodesOneBackup() throws Throwable { backups = 1; nodeCnt = 2; @@ -341,6 +328,7 @@ public void testRestartWithPutTwoNodesOneBackup() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxTwoNodesOneBackup() throws Throwable { backups = 1; nodeCnt = 2; @@ -357,6 +345,7 @@ public void testRestartWithTxTwoNodesOneBackup() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithPutFourNodesNoBackups() throws Throwable { backups = 0; nodeCnt = 4; @@ -373,6 +362,7 @@ public void testRestartWithPutFourNodesNoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxFourNodesNoBackups() throws Throwable { backups = 0; nodeCnt = 4; @@ -389,6 +379,7 @@ public void testRestartWithTxFourNodesNoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithPutFourNodesOneBackups() throws Throwable { backups = 1; nodeCnt = 4; @@ -405,6 +396,7 @@ public void testRestartWithPutFourNodesOneBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithPutFourNodesOneBackupsOffheapEvict() throws Throwable { backups = 1; nodeCnt = 4; @@ -421,6 +413,7 @@ public void testRestartWithPutFourNodesOneBackupsOffheapEvict() throws Throwable /** * @throws Exception If failed. 
*/ + @Test public void testRestartWithTxFourNodesOneBackups() throws Throwable { backups = 1; nodeCnt = 4; @@ -437,6 +430,7 @@ public void testRestartWithTxFourNodesOneBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxFourNodesOneBackupsOffheapEvict() throws Throwable { backups = 1; nodeCnt = 4; @@ -453,6 +447,7 @@ public void testRestartWithTxFourNodesOneBackupsOffheapEvict() throws Throwable /** * @throws Exception If failed. */ + @Test public void testRestartWithPutSixNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 6; @@ -469,6 +464,7 @@ public void testRestartWithPutSixNodesTwoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxSixNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 6; @@ -485,6 +481,7 @@ public void testRestartWithTxSixNodesTwoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithPutEightNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 8; @@ -501,6 +498,7 @@ public void testRestartWithPutEightNodesTwoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxEightNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 8; @@ -517,6 +515,7 @@ public void testRestartWithTxEightNodesTwoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithPutTenNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 10; @@ -533,6 +532,7 @@ public void testRestartWithPutTenNodesTwoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxTenNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 10; @@ -549,6 +549,7 @@ public void testRestartWithTxTenNodesTwoBackups() throws Throwable { /** * @throws Exception If failed. 
*/ + @Test public void testRestartWithTxPutAllTenNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 10; @@ -565,6 +566,7 @@ public void testRestartWithTxPutAllTenNodesTwoBackups() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testRestartWithTxPutAllFourNodesTwoBackups() throws Throwable { backups = 2; nodeCnt = 4; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPartitionedByteArrayValuesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPartitionedByteArrayValuesSelfTest.java index 4af2571ada55b..78429446a84ca 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPartitionedByteArrayValuesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPartitionedByteArrayValuesSelfTest.java @@ -18,9 +18,7 @@ package org.apache.ignite.internal.processors.cache.distributed; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.configuration.TransactionConfiguration; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -31,22 +29,9 @@ */ public abstract class GridCacheAbstractPartitionedByteArrayValuesSelfTest extends GridCacheAbstractDistributedByteArrayValuesSelfTest { - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TransactionConfiguration tCfg = new TransactionConfiguration(); - - tCfg.setTxSerializableEnabled(true); - - cfg.setTransactionConfiguration(tCfg); - - return cfg; - } - /** {@inheritDoc} 
*/ @Override protected CacheConfiguration cacheConfiguration0() { - CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + CacheConfiguration cfg = new CacheConfiguration(); cfg.setCacheMode(PARTITIONED); cfg.setAtomicityMode(TRANSACTIONAL); @@ -61,4 +46,4 @@ public abstract class GridCacheAbstractPartitionedByteArrayValuesSelfTest extend * @return Distribution mode. */ protected abstract NearCacheConfiguration nearConfiguration(); -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPrimarySyncSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPrimarySyncSelfTest.java index 625cb18a3d792..30729e06d326a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPrimarySyncSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheAbstractPrimarySyncSelfTest.java @@ -21,11 +21,12 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,13 +38,11 @@ /** * Test ensuring that PRIMARY_SYNC mode works correctly. 
*/ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractPrimarySyncSelfTest extends GridCommonAbstractTest { /** Grids count. */ private static final int GRID_CNT = 3; - /** IP_FINDER. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -59,12 +58,6 @@ public abstract class GridCacheAbstractPrimarySyncSelfTest extends GridCommonAbs cfg.setCacheConfiguration(ccfg); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } @@ -75,6 +68,12 @@ public abstract class GridCacheAbstractPrimarySyncSelfTest extends GridCommonAbs startGrids(GRID_CNT); } + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + if (nearConfiguration() != null) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } + /** * @return Distribution mode. */ @@ -83,6 +82,7 @@ public abstract class GridCacheAbstractPrimarySyncSelfTest extends GridCommonAbs /** * @throws Exception If failed. 
      */
+    @Test
     public void testPrimarySync() throws Exception {
         for (int i = 0; i < GRID_CNT; i++) {
             for (int j = 0; j < GRID_CNT; j++) {
@@ -102,4 +102,4 @@ public void testPrimarySync() throws Exception {
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheBasicOpAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheBasicOpAbstractTest.java
index a55ff2d2ef5f7..1e981c365b6c4 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheBasicOpAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheBasicOpAbstractTest.java
@@ -24,16 +24,16 @@
 import org.apache.ignite.Ignite;
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.IgniteCheckedException;
-import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.events.Event;
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.lang.IgniteFuture;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static java.util.concurrent.TimeUnit.SECONDS;
@@ -45,6 +45,7 @@
 /**
  * Simple cache test.
 */
+@RunWith(JUnit4.class)
 public abstract class GridCacheBasicOpAbstractTest extends GridCommonAbstractTest {
     /** Grid 1. */
     private static Ignite ignite1;
@@ -55,22 +56,6 @@ public abstract class GridCacheBasicOpAbstractTes
     /** Grid 3. */
     private static Ignite ignite3;
 
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
-    /** {@inheritDoc} */
-    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
-        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
-
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
-        return cfg;
-    }
-
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
         startGridsMultiThreaded(3);
@@ -89,6 +74,11 @@ public abstract class GridCacheBasicOpAbstractTes
 
     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS);
+
+        if (MvccFeatureChecker.forcedMvcc())
+            fail("https://issues.apache.org/jira/browse/IGNITE-7952");
+
         for (Ignite g : G.allGrids())
             g.cache(DEFAULT_CACHE_NAME).clear();
     }
@@ -97,6 +87,7 @@ public abstract class GridCacheBasicOpAbstractTes
      *
      * @throws Exception If error occur.
      */
+    @Test
     public void testBasicOps() throws Exception {
         CountDownLatch latch = new CountDownLatch(3);
 
@@ -172,6 +163,7 @@ public void testBasicOps() throws Exception {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testBasicOpsAsync() throws Exception {
         CountDownLatch latch = new CountDownLatch(3);
 
@@ -252,6 +244,7 @@ public void testBasicOpsAsync() throws Exception {
      *
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testOptimisticTransaction() throws Exception {
         CountDownLatch latch = new CountDownLatch(9);
 
@@ -326,7 +319,10 @@ public void testOptimisticTransaction() throws Exception {
      *
      * @throws Exception In case of error.
      */
+    @Test
     public void testPutWithExpiration() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION);
+
         IgniteCache cache1 = ignite1.cache(DEFAULT_CACHE_NAME);
         IgniteCache cache2 = ignite2.cache(DEFAULT_CACHE_NAME);
         IgniteCache cache3 = ignite3.cache(DEFAULT_CACHE_NAME);
@@ -390,4 +386,4 @@ void setLatch(CountDownLatch latch) {
             return true;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheClientModesAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheClientModesAbstractSelfTest.java
index 33766f3725986..0bf51f644e1fa 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheClientModesAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheClientModesAbstractSelfTest.java
@@ -30,6 +30,10 @@
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.apache.ignite.testframework.MvccFeatureChecker;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
@@ -37,6 +41,7 @@
 /**
  * Tests near-only cache.
 */
+@RunWith(JUnit4.class)
 public abstract class GridCacheClientModesAbstractSelfTest extends GridCacheAbstractSelfTest {
     /** Grid cnt. */
     private static AtomicInteger gridCnt;
@@ -51,6 +56,9 @@ public abstract class GridCacheClientModesAbstractSelfTest extends GridCacheAbst
 
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
+        if (nearEnabled())
+            MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+
         gridCnt = new AtomicInteger();
 
         super.beforeTestsStarted();
@@ -59,6 +67,12 @@ public abstract class GridCacheClientModesAbstractSelfTest extends GridCacheAbst
         grid(nearOnlyIgniteInstanceName).createNearCache(DEFAULT_CACHE_NAME, nearConfiguration());
     }
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTest() throws Exception {
+        if (nearEnabled())
+            MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+    }
+
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
@@ -109,6 +123,7 @@ protected boolean isClientStartedLast() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutFromClientNode() throws Exception {
         IgniteCache nearOnly = nearOnlyCache();
 
@@ -151,6 +166,7 @@ public void testPutFromClientNode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetFromClientNode() throws Exception {
         IgniteCache dht = dhtCache();
 
@@ -190,6 +206,7 @@ public void testGetFromClientNode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearOnlyAffinity() throws Exception {
         for (int i = 0; i < gridCount(); i++) {
             Ignite g = grid(i);
@@ -278,4 +295,4 @@ public TestClass2(int val) {
             this.val = val;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetAbstractSelfTest.java
index aea6ce6f3c409..cb830ab352975 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetAbstractSelfTest.java
@@ -27,6 +27,9 @@
 import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -36,6 +39,7 @@
 /**
  *
 */
+@RunWith(JUnit4.class)
 public abstract class GridCacheEntrySetAbstractSelfTest extends GridCacheAbstractSelfTest {
     /** */
     private static final int GRID_CNT = 2;
@@ -61,6 +65,7 @@ public abstract class GridCacheEntrySetAbstrac
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testEntrySet() throws Exception {
         for (int i = 0; i < 10; i++) {
             log.info("Iteration: " + i);
@@ -114,4 +119,4 @@ private void putAndCheckEntrySet(IgniteCache cache) throws Excep
             tx.commit();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetIterationPreloadingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetIterationPreloadingSelfTest.java
index 53d3a7aede3d0..160f8fc22d9d2 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetIterationPreloadingSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEntrySetIterationPreloadingSelfTest.java
@@ -27,10 +27,14 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests entry wrappers after preloading happened.
 */
+@RunWith(JUnit4.class)
 public class GridCacheEntrySetIterationPreloadingSelfTest extends GridCacheAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected int gridCount() {
@@ -63,6 +67,7 @@ public class GridCacheEntrySetIterationPreloadingSelfTest extends GridCacheAbstr
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIteration() throws Exception {
         try {
             final IgniteCache cache = jcache();
@@ -93,4 +98,4 @@ public void testIteration() throws Exception {
             stopGrid(1);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEventAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEventAbstractTest.java
index daa1557d7273a..abaaa7e021b51 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEventAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheEventAbstractTest.java
@@ -40,7 +40,11 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.lang.IgnitePredicate;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.events.EventType.EVTS_CACHE;
 import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
@@ -52,6 +56,7 @@
 /**
  * Tests events.
 */
+@RunWith(JUnit4.class)
 public abstract class GridCacheEventAbstractTest extends GridCacheAbstractSelfTest {
     /** */
     private static final boolean TEST_INFO = true;
@@ -77,6 +82,8 @@ protected boolean partitioned() {
 
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS);
+
         super.beforeTestsStarted();
 
         gridCnt = gridCount();
@@ -222,6 +229,7 @@ private Map pairs(int size) {
      * Note: test was disabled for REPPLICATED cache case because IGNITE-607.
      * This comment should be removed if test passed stably.
      */
+    @Test
     public void testGetPutRemove() throws Exception {
         runTest(
             new TestCacheRunnable() {
@@ -249,6 +257,7 @@ public void testGetPutRemove() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testGetPutRemoveTx1() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -282,6 +291,7 @@ public void testGetPutRemoveTx1() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testGetPutRemoveTx2() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -322,6 +332,7 @@ public void testGetPutRemoveTx2() throws Exception {
      * Note: test was disabled for REPPLICATED cache case because IGNITE-607.
      * This comment should be removed if test passed stably.
      */
+    @Test
     public void testGetPutRemoveAsync() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -348,6 +359,7 @@ public void testGetPutRemoveAsync() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testGetPutRemoveAsyncTx1() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -381,6 +393,7 @@ public void testGetPutRemoveAsyncTx1() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testGetPutRemoveAsyncTx2() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -418,6 +431,7 @@ public void testGetPutRemoveAsyncTx2() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testPutRemovex() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -442,6 +456,7 @@ public void testPutRemovex() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testPutRemovexTx1() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -472,6 +487,7 @@ public void testPutRemovexTx1() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testPutRemovexTx2() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -507,6 +523,7 @@ public void testPutRemovexTx2() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testPutIfAbsent() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -538,6 +555,7 @@ public void testPutIfAbsent() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testPutIfAbsentTx() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -576,6 +594,7 @@ public void testPutIfAbsentTx() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testPutIfAbsentAsync() throws Exception {
         runTest(new TestCacheRunnable() {
             @Override public void run(IgniteCache cache) throws IgniteCheckedException {
@@ -610,6 +629,7 @@ public void testPutIfAbsentAsync() throws Exception {
      * @throws Exception If test failed.
      */
     @SuppressWarnings("unchecked")
+    @Test
     public void testPutIfAbsentAsyncTx() throws Exception {
         IgniteBiTuple[] evts = new IgniteBiTuple[] {F.t(EVT_CACHE_OBJECT_PUT, 2 * gridCnt)};
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheLockAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheLockAbstractTest.java
index 16c1f5a8cce87..6d5ac3016c642 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheLockAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheLockAbstractTest.java
@@ -33,13 +33,14 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteInternalFuture;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestThread;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
@@ -49,6 +50,7 @@
  * Test cases for multi-threaded tests.
 */
 @SuppressWarnings({"FieldCanBeLocal"})
+@RunWith(JUnit4.class)
 public abstract class GridCacheLockAbstractTest extends GridCommonAbstractTest {
     /** Grid1. */
     private static Ignite ignite1;
@@ -62,8 +64,12 @@ public abstract class GridCacheLockAbstractTest {
     /** (for convenience). */
     private static IgniteCache cache2;
 
-    /** Ip-finder. */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
+    /** {@inheritDoc} */
+    @Override public void setUp() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
+
+        super.setUp();
+    }
 
     /**
      *
@@ -76,12 +82,6 @@ protected GridCacheLockAbstractTest() {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
         cfg.setCacheConfiguration(cacheConfiguration());
 
         return cfg;
@@ -195,6 +195,7 @@ private boolean locked(Iterable keys, int idx) {
      * @throws Exception If test failed.
      */
     @SuppressWarnings({"TooBroadScope"})
+    @Test
     public void testLockSingleThread() throws Exception {
         int k = 1;
         String v = String.valueOf(k);
@@ -229,7 +230,7 @@ public void testLockSingleThread() throws Exception {
     /**
      * @throws Exception If test failed.
      */
-    @SuppressWarnings({"TooBroadScope"})
+    @Test
     public void testLock() throws Exception {
         final int kv = 1;
 
@@ -320,6 +321,7 @@ public void testLock() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testLockAndPut() throws Exception {
         final CountDownLatch l1 = new CountDownLatch(1);
         final CountDownLatch l2 = new CountDownLatch(1);
@@ -405,7 +407,7 @@ public void testLockAndPut() throws Exception {
     /**
      * @throws Exception If test failed.
      */
-    @SuppressWarnings({"TooBroadScope"})
+    @Test
     public void testLockTimeoutTwoThreads() throws Exception {
         int keyCnt = 1;
 
@@ -501,6 +503,7 @@ public void testLockTimeoutTwoThreads() throws Exception {
     /**
      * @throws Throwable If failed.
      */
+    @Test
     public void testLockReentrancy() throws Throwable {
         Affinity aff = ignite1.affinity(DEFAULT_CACHE_NAME);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMixedModeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMixedModeSelfTest.java
index 3d6c08806db2b..abfff2a3392bc 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMixedModeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMixedModeSelfTest.java
@@ -24,10 +24,16 @@
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests cache puts in mixed mode.
+ *
+ * TODO IGNITE-10345: Remove test in ignite 3.0.
 */
+@RunWith(JUnit4.class)
 public class GridCacheMixedModeSelfTest extends GridCommonAbstractTest {
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -63,6 +69,7 @@ private CacheConfiguration cacheConfiguration(String igniteInstanceName) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBasicOps() throws Exception {
         IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
@@ -78,4 +85,4 @@ public void testBasicOps() throws Exception {
         for (int i = 0; i < 1000; i++)
             assertNull(cache.get(i));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeAbstractTest.java
index 1d56ab6cbc00c..963ff33682c6d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeAbstractTest.java
@@ -26,16 +26,16 @@
 import org.apache.ignite.Ignite;
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.cache.CachePeekMode;
-import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.events.Event;
 import org.apache.ignite.internal.util.tostring.GridToStringExclude;
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.lang.IgniteFuture;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static java.util.concurrent.TimeUnit.SECONDS;
@@ -47,6 +47,7 @@
 /**
  * Multi-node cache test.
 */
+@RunWith(JUnit4.class)
 public abstract class GridCacheMultiNodeAbstractTest extends GridCommonAbstractTest {
     /** Grid 1. */
     private static Ignite ignite1;
@@ -66,25 +67,9 @@ public abstract class GridCacheMultiNodeAbstractT
     /** Cache 3. */
     private static IgniteCache cache3;
 
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Listeners. */
     private static Collection lsnrs = new ArrayList<>();
 
-    /** {@inheritDoc} */
-    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
-        IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
-
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
-        return c;
-    }
-
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
         ignite1 = startGrid(1);
@@ -96,6 +81,11 @@ public abstract class GridCacheMultiNodeAbstractT
         cache3 = ignite3.cache(DEFAULT_CACHE_NAME);
     }
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS);
+    }
+
     /** {@inheritDoc} */
     @Override protected void afterTestsStopped() throws Exception {
         cache1 = null;
@@ -144,6 +134,7 @@ private void addListener(Ignite ignite, CacheEventListener lsnr, int... type) {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testBasicPut() throws Exception {
         checkPuts(3, ignite1);
     }
@@ -151,6 +142,7 @@ public void testBasicPut() throws Exception {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testMultiNodePut() throws Exception {
         checkPuts(1, ignite1, ignite2, ignite3);
         checkPuts(1, ignite2, ignite1, ignite3);
@@ -160,6 +152,7 @@ public void testMultiNodePut() throws Exception {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testMultiValuePut() throws Exception {
         checkPuts(1, ignite1);
     }
@@ -167,6 +160,7 @@ public void testMultiValuePut() throws Exception {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testMultiValueMultiNodePut() throws Exception {
         checkPuts(3, ignite1, ignite2, ignite3);
         checkPuts(3, ignite2, ignite1, ignite3);
@@ -181,6 +175,8 @@ public void testMultiValueMultiNodePut() throws Exception {
      * @throws Exception If check fails.
      */
     private void checkPuts(int cnt, Ignite... ignites) throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
+
         CountDownLatch latch = new CountDownLatch(ignites.length * cnt);
 
         CacheEventListener lsnr = new CacheEventListener(latch, EVT_CACHE_OBJECT_PUT);
@@ -227,7 +223,10 @@ private void checkPuts(int cnt, Ignite... ignites) throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testLockUnlock() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
+
         CacheEventListener lockLsnr1 = new CacheEventListener(ignite1, new CountDownLatch(1), EVT_CACHE_OBJECT_LOCKED);
 
         addListener(ignite1, lockLsnr1, EVT_CACHE_OBJECT_LOCKED);
@@ -273,6 +272,7 @@ public void testLockUnlock() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testConcurrentPutAsync() throws Exception {
         CountDownLatch latch = new CountDownLatch(9);
 
@@ -323,6 +323,7 @@ public void testConcurrentPutAsync() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testGlobalClearAll() throws Exception {
         cache1.put(1, "val1");
         cache2.put(2, "val2");
@@ -410,4 +411,4 @@ void setLatch(CountDownLatch latch) {
             "grid", ignite != null ? ignite.name() : "N/A", "evts", evts);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeLockAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeLockAbstractTest.java
index cc3b894cb41d0..5d6987fa36384 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeLockAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultiNodeLockAbstractTest.java
@@ -34,12 +34,13 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestThread;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.events.EventType.EVTS_CACHE;
 import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_UNLOCKED;
@@ -47,6 +48,7 @@
 /**
  * Test cases for multi-threaded tests.
 */
+@RunWith(JUnit4.class)
 public abstract class GridCacheMultiNodeLockAbstractTest extends GridCommonAbstractTest {
     /** */
     private static final String CACHE2 = "cache2";
@@ -57,12 +59,17 @@ public abstract class GridCacheMultiNodeLockAbstr
     /** Grid 2. */
     private static Ignite ignite2;
 
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Listeners. */
     private static Collection> lsnrs = new ArrayList<>();
 
+    /** {@inheritDoc} */
+    @Override public void setUp() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS);
+
+        super.setUp();
+    }
+
     /**
      *
     */
@@ -74,12 +81,6 @@ protected GridCacheMultiNodeLockAbstractTest() {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
         CacheConfiguration ccfg1 = cacheConfiguration().setName(DEFAULT_CACHE_NAME);
         CacheConfiguration ccfg2 = cacheConfiguration().setName(CACHE2);
@@ -225,6 +226,7 @@ private void checkUnlocked(IgniteCache cache, Iterable
      *
      * @throws Exception If test failed.
      */
+    @Test
     public void testBasicLock() throws Exception {
         IgniteCache cache = ignite1.cache(DEFAULT_CACHE_NAME);
 
@@ -266,6 +268,7 @@ private String entries(int key) throws IgniteCheckedException {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testMultiNodeLock() throws Exception {
         IgniteCache cache1 = ignite1.cache(DEFAULT_CACHE_NAME);
         IgniteCache cache2 = ignite2.cache(DEFAULT_CACHE_NAME);
@@ -324,6 +327,7 @@ public void testMultiNodeLock() throws Exception {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testMultiNodeLockWithKeyLists() throws Exception {
         IgniteCache cache1 = ignite1.cache(DEFAULT_CACHE_NAME);
         IgniteCache cache2 = ignite2.cache(DEFAULT_CACHE_NAME);
@@ -401,6 +405,7 @@ public void testMultiNodeLockWithKeyLists() throws Exception {
     /**
      * @throws IgniteCheckedException If test failed.
      */
+    @Test
     public void testLockReentry() throws IgniteCheckedException {
         IgniteCache cache = ignite1.cache(DEFAULT_CACHE_NAME);
@@ -429,6 +434,7 @@ public void testLockReentry() throws IgniteCheckedException {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testLockMultithreaded() throws Exception {
         final IgniteCache cache = ignite1.cache(DEFAULT_CACHE_NAME);
@@ -547,6 +553,7 @@ public void testLockMultithreaded() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTwoCaches() throws Exception {
         IgniteCache cache1 = ignite1.cache(DEFAULT_CACHE_NAME);
         IgniteCache cache2 = ignite1.cache(CACHE2);
@@ -616,4 +623,4 @@ private class UnlockListener implements IgnitePredicate {
             return true;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultithreadedFailoverAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultithreadedFailoverAbstractTest.java
index 7bb0012e77862..76b9da60cc62b 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultithreadedFailoverAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheMultithreadedFailoverAbstractTest.java
@@ -55,12 +55,12 @@
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.internal.util.typedef.T2;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -71,6 +71,7 @@
 /**
  * Base test for all multithreaded cache scenarios w/ and w/o failover.
 */
+@RunWith(JUnit4.class)
 public class GridCacheMultithreadedFailoverAbstractTest extends GridCommonAbstractTest {
     /** Node name prefix. */
     private static final String NODE_PREFIX = "node";
@@ -87,9 +88,6 @@ public class GridCacheMultithreadedFailoverAbstra
     /** Proceed put condition. */
     private final Condition putCond = lock.newCondition();
 
-    /** Shared IP finder. */
-    private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Caches comparison start latch. */
     private CountDownLatch cmpLatch;
 
@@ -223,11 +221,6 @@ private IgniteConfiguration configuration(int idx) throws Exception {
 
         IgniteConfiguration cfg = getConfiguration(nodeName(idx));
 
-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-
-        discoSpi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(discoSpi);
         cfg.setLocalHost("127.0.0.1");
         cfg.setCacheConfiguration(ccfg);
         cfg.setConnectorConfiguration(null);
@@ -240,6 +233,7 @@ private IgniteConfiguration configuration(int idx) throws Exception {
      *
      * @throws Exception If failed.
*/ + @Test public void test() throws Exception { startUp(); @@ -599,4 +593,4 @@ private boolean compareCaches(Map expVals) throws Exception { return !failed; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheNodeFailureAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheNodeFailureAbstractTest.java index 8de2d79b097fd..ba154174a28f1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheNodeFailureAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheNodeFailureAbstractTest.java @@ -34,13 +34,14 @@ import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteState.STOPPED; import static org.apache.ignite.IgniteSystemProperties.IGNITE_TX_SALVAGE_TIMEOUT; @@ -56,6 +57,7 @@ /** * Tests for node failure in transactions. */ +@RunWith(JUnit4.class) public abstract class GridCacheNodeFailureAbstractTest extends GridCommonAbstractTest { /** Random number generator. 
*/ private static final Random RAND = new Random(); @@ -69,12 +71,16 @@ public abstract class GridCacheNodeFailureAbstractTest extends GridCommonAbstrac /** */ private static final String VALUE = "test"; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Grid instances. */ private static final List IGNITEs = new ArrayList<>(); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + + super.setUp(); + } + /** * Start grid by default. */ @@ -86,12 +92,6 @@ protected GridCacheNodeFailureAbstractTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - c.setDeploymentMode(DeploymentMode.SHARED); return c; @@ -137,10 +137,11 @@ protected GridCacheNodeFailureAbstractTest() { /** * @throws IgniteCheckedException If test failed. - * + * Note: test was disabled for the REPLICATED cache case because of IGNITE-601. * This comment should be removed once the test passes stably. */ + @Test public void testPessimisticReadCommitted() throws Throwable { checkTransaction(PESSIMISTIC, READ_COMMITTED); } @@ -148,6 +149,7 @@ public void testPessimisticReadCommitted() throws Throwable { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticRepeatableRead() throws Throwable { checkTransaction(PESSIMISTIC, REPEATABLE_READ); } @@ -155,6 +157,7 @@ public void testPessimisticRepeatableRead() throws Throwable { /** * @throws IgniteCheckedException If test failed.
*/ + @Test public void testPessimisticSerializable() throws Throwable { checkTransaction(PESSIMISTIC, SERIALIZABLE); } @@ -232,10 +235,11 @@ private void checkTransaction(TransactionConcurrency concurrency, TransactionIso /** * @throws Exception If check failed. - * + * Note: test was disabled for the REPLICATED cache case because of IGNITE-601. * This comment should be removed once the test passes stably. */ + @Test public void testLock() throws Exception { int idx = 0; @@ -293,4 +297,4 @@ public void testLock() throws Exception { assert !checkCache.isLocalLocked(KEY, false); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionEvictionDuringReadThroughSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionEvictionDuringReadThroughSelfTest.java index 01316f64284c9..e1e57598392d6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionEvictionDuringReadThroughSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionEvictionDuringReadThroughSelfTest.java @@ -40,10 +40,16 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) +@Ignore("https://issues.apache.org/jira/browse/IGNITE-5759") public class GridCachePartitionEvictionDuringReadThroughSelfTest extends GridCommonAbstractTest { /** Failing key. */ private static final int FAILING_KEY = 3; @@ -83,16 +89,15 @@ public class GridCachePartitionEvictionDuringReadThroughSelfTest extends GridCom /** * @throws Exception if failed.
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-5759") + @Test public void testPartitionRent() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-5759"); - startGrid(DATA_READ_GRID_IDX); final AtomicBoolean done = new AtomicBoolean(); IgniteInternalFuture gridAndCacheAccessFut = GridTestUtils.runMultiThreadedAsync(new Callable() { - @Override - public Integer call() throws Exception { + @Override public Integer call() throws Exception { final Set keysSet = new LinkedHashSet<>(); keysSet.add(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionNotLoadedEventSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionNotLoadedEventSelfTest.java index d25304beda669..b96a964788cbe 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionNotLoadedEventSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionNotLoadedEventSelfTest.java @@ -34,13 +34,16 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.GridTestUtils.SF; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.util.TestTcpCommunicationSpi; import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Ignore; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -48,21 +51,22 @@ /** * */ +@RunWith(JUnit4.class) public class GridCachePartitionNotLoadedEventSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private int backupCnt; + /** {@inheritDoc} */ + @Override public void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - disco.setIpFinder(ipFinder); - cfg.setDiscoverySpi(disco); - if (igniteInstanceName.matches(".*\\d")) { String idStr = UUID.randomUUID().toString(); @@ -97,13 +101,12 @@ public class GridCachePartitionNotLoadedEventSelfTest extends GridCommonAbstract /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-5968") + @Test public void testPrimaryAndBackupDead() throws Exception { backupCnt = 1; - startGrid(0); - startGrid(1); - startGrid(2); - startGrid(3); + startGridsMultiThreaded(4); awaitPartitionMapExchange(); @@ -140,22 +143,25 @@ public void testPrimaryAndBackupDead() throws Exception { assert !cache.containsKey(key); - GridTestUtils.waitForCondition(new GridAbsPredicate() { + final long awaitingTimeoutMs = SF.apply(5 * 60 * 1000); + + assertTrue(GridTestUtils.waitForCondition(new GridAbsPredicate() { @Override public boolean apply() { return !lsnr1.lostParts.isEmpty(); } - }, getTestTimeout()); + }, awaitingTimeoutMs)); - GridTestUtils.waitForCondition(new GridAbsPredicate() { + assertTrue(GridTestUtils.waitForCondition(new GridAbsPredicate() { @Override public boolean apply() { return !lsnr2.lostParts.isEmpty(); } - }, getTestTimeout()); + }, awaitingTimeoutMs)); } /** * @throws Exception If failed. */ + @Test public void testPrimaryDead() throws Exception { startGrid(0); startGrid(1); @@ -190,6 +196,7 @@ public void testPrimaryDead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStableTopology() throws Exception { backupCnt = 1; @@ -226,6 +233,7 @@ public void testStableTopology() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMapPartitioned() throws Exception { backupCnt = 0; diff --git a/modules/web-console/frontend/app/modules/branding/header-logo.directive.js b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionedNearDisabledMvccTxMultiThreadedSelfTest.java similarity index 66% rename from modules/web-console/frontend/app/modules/branding/header-logo.directive.js rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionedNearDisabledMvccTxMultiThreadedSelfTest.java index 60e5d5d68aec0..cf8115c52213f 100644 --- a/modules/web-console/frontend/app/modules/branding/header-logo.directive.js +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionedNearDisabledMvccTxMultiThreadedSelfTest.java @@ -15,25 +15,17 @@ * limitations under the License. */ -import template from './header-logo.pug'; +package org.apache.ignite.internal.processors.cache.distributed; + +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedMvccTxMultiThreadedSelfTest; /** - * @param {import('./branding.service').default} branding + * */ -export default function factory(branding) { - function controller() { - const ctrl = this; - - ctrl.url = branding.headerLogo; +public class GridCachePartitionedNearDisabledMvccTxMultiThreadedSelfTest + extends GridCachePartitionedMvccTxMultiThreadedSelfTest { + /** {@inheritDoc} */ + @Override protected boolean nearEnabled() { + return false; } - - return { - restrict: 'E', - template, - controller, - controllerAs: 'logo', - replace: true - }; } - -factory.$inject = ['IgniteBranding']; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionedReloadAllAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionedReloadAllAbstractSelfTest.java index 
d7186443ca39d..52a27c0de6654 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionedReloadAllAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePartitionedReloadAllAbstractSelfTest.java @@ -36,10 +36,10 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -47,6 +47,7 @@ /** * Check reloadAll() on partitioned cache. */ +@RunWith(JUnit4.class) public abstract class GridCachePartitionedReloadAllAbstractSelfTest extends GridCommonAbstractTest { /** Amount of nodes in the grid. */ private static final int GRID_CNT = 4; @@ -54,9 +55,6 @@ public abstract class GridCachePartitionedReloadAllAbstractSelfTest extends Grid /** Amount of backups in partitioned cache. */ private static final int BACKUP_CNT = 1; - /** IP finder. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Map where dummy cache store values are stored. 
*/ private final Map map = new ConcurrentHashMap<>(); @@ -68,12 +66,6 @@ public abstract class GridCachePartitionedReloadAllAbstractSelfTest extends Grid @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); if (!nearEnabled()) @@ -175,6 +167,7 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception If test failed. */ + @Test public void testReloadAll() throws Exception { // Fill caches with values. for (IgniteCache cache : caches) { @@ -203,4 +196,4 @@ public void testReloadAll() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePreloadEventsAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePreloadEventsAbstractSelfTest.java index 59af680c80002..80b906ccb081a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePreloadEventsAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCachePreloadEventsAbstractSelfTest.java @@ -28,11 +28,12 @@ import org.apache.ignite.events.Event; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -42,18 +43,17 @@ /** * */ +@RunWith(JUnit4.class) public abstract class GridCachePreloadEventsAbstractSelfTest extends GridCommonAbstractTest { - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + } /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - disco.setIpFinder(ipFinder); - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration()); MemoryEventStorageSpi evtStorageSpi = new MemoryEventStorageSpi(); @@ -95,6 +95,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception if failed. 
*/ + @Test public void testPreloadEvents() throws Exception { Ignite g1 = startGrid("g1"); @@ -130,4 +131,4 @@ protected void checkPreloadEvents(Collection evts, Ignite g, Collection c, int key, int at error("Attempt: " + attempt); error("Node: " + c.unwrap(Ignite.class).cluster().localNode().id()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTransformEventSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTransformEventSelfTest.java index 6f27ce728da73..e7810cf411102 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTransformEventSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/GridCacheTransformEventSelfTest.java @@ -36,19 +36,20 @@ import org.apache.ignite.configuration.TransactionConfiguration; import org.apache.ignite.events.CacheEvent; import org.apache.ignite.events.Event; +import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; -import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -64,6 +65,7 @@ * Test for TRANSFORM events recording. */ @SuppressWarnings("ConstantConditions") +@RunWith(JUnit4.class) public class GridCacheTransformEventSelfTest extends GridCommonAbstractTest { /** Nodes count. */ private static final int GRID_CNT = 3; @@ -83,9 +85,6 @@ public class GridCacheTransformEventSelfTest extends GridCommonAbstractTest { /** Two keys in form of a set. */ private Set keys; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Nodes. */ private Ignite[] ignites; @@ -96,7 +95,7 @@ public class GridCacheTransformEventSelfTest extends GridCommonAbstractTest { private IgniteCache[] caches; /** Recorded events. */ - private ConcurrentHashSet evts; + private GridConcurrentHashSet evts; /** Cache mode. 
*/ private CacheMode cacheMode; @@ -114,10 +113,6 @@ public class GridCacheTransformEventSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - TransactionConfiguration tCfg = cfg.getTransactionConfiguration(); tCfg.setDefaultTxConcurrency(txConcurrency); @@ -134,7 +129,6 @@ public class GridCacheTransformEventSelfTest extends GridCommonAbstractTest { if (cacheMode == PARTITIONED) ccfg.setBackups(BACKUP_CNT); - cfg.setDiscoverySpi(discoSpi); cfg.setCacheConfiguration(ccfg); cfg.setLocalHost("127.0.0.1"); cfg.setIncludeEventTypes(EVT_CACHE_OBJECT_READ); @@ -174,7 +168,7 @@ private void initialize(CacheMode cacheMode, CacheAtomicityMode atomicityMode, this.txConcurrency = txConcurrency; this.txIsolation = txIsolation; - evts = new ConcurrentHashSet<>(); + evts = new GridConcurrentHashSet<>(); startGridsMultiThreaded(GRID_CNT, true); @@ -278,6 +272,7 @@ private boolean backup(int gridIdx, Object key) { * * @throws Exception If failed. */ + @Test public void testTxLocalOptimisticRepeatableRead() throws Exception { checkTx(LOCAL, OPTIMISTIC, REPEATABLE_READ); } @@ -287,6 +282,7 @@ public void testTxLocalOptimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxLocalOptimisticReadCommitted() throws Exception { checkTx(LOCAL, OPTIMISTIC, READ_COMMITTED); } @@ -296,6 +292,7 @@ public void testTxLocalOptimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxLocalOptimisticSerializable() throws Exception { checkTx(LOCAL, OPTIMISTIC, SERIALIZABLE); } @@ -305,6 +302,7 @@ public void testTxLocalOptimisticSerializable() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testTxLocalPessimisticRepeatableRead() throws Exception { checkTx(LOCAL, PESSIMISTIC, REPEATABLE_READ); } @@ -314,6 +312,7 @@ public void testTxLocalPessimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxLocalPessimisticReadCommitted() throws Exception { checkTx(LOCAL, PESSIMISTIC, READ_COMMITTED); } @@ -323,15 +322,29 @@ public void testTxLocalPessimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxLocalPessimisticSerializable() throws Exception { checkTx(LOCAL, PESSIMISTIC, SERIALIZABLE); } + /** + * Test TRANSACTIONAL_SNAPSHOT LOCAL cache with PESSIMISTIC/REPEATABLE_READ transaction. + * + * @throws Exception If failed. + */ + @Test + public void testMvccTxLocalPessimisticRepeatableRead() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + checkMvccTx(LOCAL, PESSIMISTIC, REPEATABLE_READ); + } + /** * Test TRANSACTIONAL PARTITIONED cache with OPTIMISTIC/REPEATABLE_READ transaction. * * @throws Exception If failed. */ + @Test public void testTxPartitionedOptimisticRepeatableRead() throws Exception { checkTx(PARTITIONED, OPTIMISTIC, REPEATABLE_READ); } @@ -341,6 +354,7 @@ public void testTxPartitionedOptimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxPartitionedOptimisticReadCommitted() throws Exception { checkTx(PARTITIONED, OPTIMISTIC, READ_COMMITTED); } @@ -350,6 +364,7 @@ public void testTxPartitionedOptimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxPartitionedOptimisticSerializable() throws Exception { checkTx(PARTITIONED, OPTIMISTIC, SERIALIZABLE); } @@ -359,6 +374,7 @@ public void testTxPartitionedOptimisticSerializable() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testTxPartitionedPessimisticRepeatableRead() throws Exception { checkTx(PARTITIONED, PESSIMISTIC, REPEATABLE_READ); } @@ -368,6 +384,7 @@ public void testTxPartitionedPessimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxPartitionedPessimisticReadCommitted() throws Exception { checkTx(PARTITIONED, PESSIMISTIC, READ_COMMITTED); } @@ -377,15 +394,30 @@ public void testTxPartitionedPessimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxPartitionedPessimisticSerializable() throws Exception { checkTx(PARTITIONED, PESSIMISTIC, SERIALIZABLE); } + /** + * Test TRANSACTIONAL_SNAPSHOT PARTITIONED cache with PESSIMISTIC/REPEATABLE_READ transaction. + * + * @throws Exception If failed. + */ + @Test + public void testMvccTxPartitionedPessimisticRepeatableRead() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9321"); + + checkMvccTx(PARTITIONED, PESSIMISTIC, REPEATABLE_READ); + } + + /** * Test TRANSACTIONAL REPLICATED cache with OPTIMISTIC/REPEATABLE_READ transaction. * * @throws Exception If failed. */ + @Test public void testTxReplicatedOptimisticRepeatableRead() throws Exception { checkTx(REPLICATED, OPTIMISTIC, REPEATABLE_READ); } @@ -395,6 +427,7 @@ public void testTxReplicatedOptimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxReplicatedOptimisticReadCommitted() throws Exception { checkTx(REPLICATED, OPTIMISTIC, READ_COMMITTED); } @@ -404,6 +437,7 @@ public void testTxReplicatedOptimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxReplicatedOptimisticSerializable() throws Exception { checkTx(REPLICATED, OPTIMISTIC, SERIALIZABLE); } @@ -413,6 +447,7 @@ public void testTxReplicatedOptimisticSerializable() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testTxReplicatedPessimisticRepeatableRead() throws Exception { checkTx(REPLICATED, PESSIMISTIC, REPEATABLE_READ); } @@ -422,6 +457,7 @@ public void testTxReplicatedPessimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxReplicatedPessimisticReadCommitted() throws Exception { checkTx(REPLICATED, PESSIMISTIC, READ_COMMITTED); } @@ -431,15 +467,29 @@ public void testTxReplicatedPessimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxReplicatedPessimisticSerializable() throws Exception { checkTx(REPLICATED, PESSIMISTIC, SERIALIZABLE); } + /** + * Test TRANSACTIONAL_SNAPSHOT REPLICATED cache with PESSIMISTIC/REPEATABLE_READ transaction. + * + * @throws Exception If failed. + */ + @Test + public void testMvccTxReplicatedPessimisticRepeatableRead() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-9321"); + + checkMvccTx(REPLICATED, PESSIMISTIC, REPEATABLE_READ); + } + /** * Test ATOMIC LOCAL cache. * * @throws Exception If failed. */ + @Test public void testAtomicLocal() throws Exception { checkAtomic(LOCAL); } @@ -449,6 +499,7 @@ public void testAtomicLocal() throws Exception { * * @throws Exception If failed. */ + @Test public void testAtomicPartitioned() throws Exception { checkAtomic(PARTITIONED); } @@ -458,6 +509,7 @@ public void testAtomicPartitioned() throws Exception { * * @throws Exception If failed. */ + @Test public void testAtomicReplicated() throws Exception { checkAtomic(REPLICATED); } @@ -494,6 +546,21 @@ private void checkAtomic(CacheMode cacheMode) throws Exception { checkEventNodeIdsStrict(TransformerWithInjection.class.getName(), primaryIdsForKeys(key1, key2)); } + /** + * Check TRANSACTIONAL_SNAPSHOT cache. + * + * @param cacheMode Cache mode. + * @param txConcurrency TX concurrency. + * @param txIsolation TX isolation. + * @throws Exception If failed. 
+ */ + private void checkMvccTx(CacheMode cacheMode, TransactionConcurrency txConcurrency, + TransactionIsolation txIsolation) throws Exception { + initialize(cacheMode, TRANSACTIONAL_SNAPSHOT, txConcurrency, txIsolation); + + checkTx0(); + } + /** * Check TRANSACTIONAL cache. * @@ -506,6 +573,14 @@ private void checkTx(CacheMode cacheMode, TransactionConcurrency txConcurrency, TransactionIsolation txIsolation) throws Exception { initialize(cacheMode, TRANSACTIONAL, txConcurrency, txIsolation); + checkTx0(); + } + + /** + * Check TX cache. + */ + private void checkTx0() { + System.out.println("BEFORE: " + evts.size()); caches[0].invoke(key1, new Transformer()); @@ -564,7 +639,6 @@ private UUID[] primaryIdsForKeys(int... keys) { * @param keys Keys. * @return Node IDs. */ - @SuppressWarnings("UnusedDeclaration") private UUID[] idsForKeys(boolean primaryOnly, int... keys) { List res = new ArrayList<>(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateChangingTopologySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateChangingTopologySelfTest.java index edfeb967d70e1..c17ce417561dc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateChangingTopologySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateChangingTopologySelfTest.java @@ -46,20 +46,18 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; 
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests specific scenario when binary metadata should be updated from a system thread * and topology has been already changed since the original transaction start. */ +@RunWith(JUnit4.class) public class IgniteBinaryMetadataUpdateChangingTopologySelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -73,12 +71,6 @@ public class IgniteBinaryMetadataUpdateChangingTopologySelfTest extends GridComm cfg.setCacheConfiguration(ccfg); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCommunicationSpi(new TestCommunicationSpi()); return cfg; @@ -92,6 +84,7 @@ public class IgniteBinaryMetadataUpdateChangingTopologySelfTest extends GridComm /** * @throws Exception If failed. */ + @Test public void testNoDeadlockOptimistic() throws Exception { int key1 = primaryKey(ignite(1).cache("cache")); int key2 = primaryKey(ignite(2).cache("cache")); @@ -131,6 +124,7 @@ public void testNoDeadlockOptimistic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNoDeadlockInvoke() throws Exception { int key1 = primaryKey(ignite(1).cache("cache")); int key2 = primaryKey(ignite(2).cache("cache")); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateNodeRestartTest.java index b95ad4ad5b8ba..1eb931223ca62 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateNodeRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteBinaryMetadataUpdateNodeRestartTest.java @@ -36,11 +36,11 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -50,10 +50,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteBinaryMetadataUpdateNodeRestartTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String ATOMIC_CACHE = "atomicCache"; @@ -73,15 +71,13 @@ public class IgniteBinaryMetadataUpdateNodeRestartTest extends GridCommonAbstrac @Override protected IgniteConfiguration getConfiguration(String 
igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); cfg.setMarshaller(null); CacheConfiguration ccfg1 = cacheConfiguration(TX_CACHE, TRANSACTIONAL); CacheConfiguration ccfg2 = cacheConfiguration(ATOMIC_CACHE, ATOMIC); - + cfg.setCacheConfiguration(ccfg1, ccfg2); cfg.setClientMode(client); @@ -89,16 +85,11 @@ public class IgniteBinaryMetadataUpdateNodeRestartTest extends GridCommonAbstrac return cfg; } - /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); - - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL,"true"); - } - /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); + stopAllGrids(); + + super.afterTestsStopped(); } /** @@ -121,6 +112,7 @@ private CacheConfiguration cacheConfiguration(String name, CacheAtomicityMode at /** * @throws Exception If failed. 
     */
+    @Test
     public void testNodeRestart() throws Exception {
         for (int i = 0; i < 10; i++) {
             log.info("Iteration: " + i);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCache150ClientsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCache150ClientsTest.java
index b7ae84400c2fe..301f5bd55b992 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCache150ClientsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCache150ClientsTest.java
@@ -25,6 +25,7 @@
 import java.util.concurrent.atomic.AtomicInteger;
 import org.apache.ignite.Ignite;
 import org.apache.ignite.IgniteCache;
+import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.ClientConnectorConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
@@ -32,25 +33,25 @@
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

-import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
-import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCache150ClientsTest extends GridCommonAbstractTest {
     /** */
-    protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
+    private static final int CACHES = 10;

     /** */
-    private static final int CACHES = 10;
+    private static final int CLIENTS = 150;

     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -66,7 +67,6 @@ public class IgniteCache150ClientsTest extends GridCommonAbstractTest {
         ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setLocalPortRange(200);
         ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
         ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setJoinTimeout(0);

         cfg.setClientFailureDetectionTimeout(200000);
@@ -80,7 +80,7 @@ public class IgniteCache150ClientsTest extends GridCommonAbstractTest {
             CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);

             ccfg.setCacheMode(PARTITIONED);
-            ccfg.setAtomicityMode(i % 2 == 0 ? ATOMIC : TRANSACTIONAL);
+            ccfg.setAtomicityMode(CacheAtomicityMode.values()[i % 3]);
             ccfg.setWriteSynchronizationMode(PRIMARY_SYNC);
             ccfg.setBackups(1);
@@ -109,13 +109,12 @@ public class IgniteCache150ClientsTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void test150Clients() throws Exception {
         Ignite srv = startGrid(0);

         assertFalse(srv.configuration().isClientMode());

-        final int CLIENTS = 150;
-
         final AtomicInteger idx = new AtomicInteger(1);

         final CountDownLatch latch = new CountDownLatch(CLIENTS);
@@ -196,4 +195,4 @@ private void checkNodes(int expCnt) {
                 ignite.cluster().nodes().size());
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheAtomicNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheAtomicNodeRestartTest.java
index 6b62912e22eb9..cf171393a37d4 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheAtomicNodeRestartTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheAtomicNodeRestartTest.java
@@ -19,20 +19,18 @@
 import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedNodeRestartTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheAtomicNodeRestartTest extends GridCachePartitionedNodeRestartTest {
     /** {@inheritDoc} */
     @Override protected CacheAtomicityMode atomicityMode() {
         return ATOMIC;
     }
-
-    /** {@inheritDoc} */
-    @Override public void testRestartWithPutFourNodesNoBackups() {
-        fail("https://issues.apache.org/jira/browse/IGNITE-1587");
-    }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientMultiNodeUpdateTopologyLockTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientMultiNodeUpdateTopologyLockTest.java
index 7711bbbaa927e..3ced0500f9d18 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientMultiNodeUpdateTopologyLockTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientMultiNodeUpdateTopologyLockTest.java
@@ -32,12 +32,12 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteBiPredicate;
 import org.apache.ignite.plugin.extensions.communication.Message;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC;
@@ -48,10 +48,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheClientMultiNodeUpdateTopologyLockTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final String TEST_CACHE = "testCache";
@@ -64,8 +62,6 @@ public class IgniteCacheClientMultiNodeUpdateTopologyLockTest extends GridCommon
         cfg.setConsistentId(gridName);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
-
         TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi();

         cfg.setCommunicationSpi(commSpi);
@@ -78,6 +74,7 @@ public class IgniteCacheClientMultiNodeUpdateTopologyLockTest extends GridCommon
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testPessimisticTx() throws Exception {
         startGrids(3);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeChangingTopologyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeChangingTopologyTest.java
index 300ecb938f42f..a35833c7dfe16 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeChangingTopologyTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeChangingTopologyTest.java
@@ -82,15 +82,17 @@
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
 import org.eclipse.jetty.util.ConcurrentHashSet;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -105,10 +107,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheClientNodeChangingTopologyTest extends GridCommonAbstractTest {
-    /** */
-    protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private CacheConfiguration ccfg;
@@ -124,7 +124,7 @@ public class IgniteCacheClientNodeChangingTopologyTest extends GridCommonAbstrac
         cfg.setConsistentId(igniteInstanceName);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder).setForceServerMode(true);
+        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true);

         cfg.setClientMode(client);
@@ -139,6 +139,11 @@ public class IgniteCacheClientNodeChangingTopologyTest extends GridCommonAbstrac
         return cfg;
     }

+    /** {@inheritDoc} */
+    @Override public void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
+    }
+
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
         super.afterTest();
@@ -149,6 +154,7 @@ public class IgniteCacheClientNodeChangingTopologyTest extends GridCommonAbstrac
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomicPutAllPrimaryMode() throws Exception {
         atomicPut(true, null);
     }
@@ -156,6 +162,7 @@ public void testAtomicPutAllPrimaryMode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomicPutAllNearEnabledPrimaryMode() throws Exception {
         atomicPut(true, new NearCacheConfiguration());
     }
@@ -163,6 +170,7 @@ public void testAtomicPutAllNearEnabledPrimaryMode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomicPutPrimaryMode() throws Exception {
         atomicPut(false, null);
     }
@@ -288,6 +296,7 @@ private void atomicPut(final boolean putAll,
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomicNoRemapPrimaryMode() throws Exception {
         atomicNoRemap();
     }
@@ -375,6 +384,7 @@ private void atomicNoRemap() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomicGetAndPutPrimaryMode() throws Exception {
         atomicGetAndPut();
     }
@@ -442,6 +452,7 @@ private void atomicGetAndPut() throws Exception {
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testTxPutAll() throws Exception {
         ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
@@ -509,6 +520,7 @@ public void testTxPutAll() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTx() throws Exception {
         pessimisticTx(null);
     }
@@ -516,6 +528,7 @@ public void testPessimisticTx() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTxNearEnabled() throws Exception {
         pessimisticTx(new NearCacheConfiguration());
     }
@@ -734,6 +747,7 @@ private Integer findKey(Affinity aff, int part) {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTx2() throws Exception {
         ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
@@ -827,6 +841,7 @@ public void testPessimisticTx2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTxNearEnabledNoRemap() throws Exception {
         pessimisticTxNoRemap(new NearCacheConfiguration());
     }
@@ -834,6 +849,7 @@ public void testPessimisticTxNearEnabledNoRemap() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTxNoRemap() throws Exception {
         pessimisticTxNoRemap(null);
     }
@@ -948,6 +964,7 @@ private CacheConfiguration testPessimisticTx3Cfg() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTx3() throws Exception {
         for (int iter = 0; iter < 5; iter++) {
             info("Iteration: " + iter);
@@ -1013,6 +1030,7 @@ public void testPessimisticTx3() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticSerializableTx() throws Exception {
         optimisticSerializableTx(null);
     }
@@ -1020,6 +1038,7 @@ public void testOptimisticSerializableTx() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticSerializableTxNearEnabled() throws Exception {
         optimisticSerializableTx(new NearCacheConfiguration());
     }
@@ -1165,6 +1184,7 @@ private void optimisticSerializableTx(NearCacheConfiguration nearCfg) throws Exc
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLock() throws Exception {
         lock(null);
     }
@@ -1172,6 +1192,7 @@ public void testLock() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLockNearEnabled() throws Exception {
         lock(new NearCacheConfiguration());
     }
@@ -1302,6 +1323,7 @@ private boolean unlocked(Ignite ignite) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTxMessageClientFirstFlag() throws Exception {
         ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
@@ -1402,6 +1424,7 @@ private void checkClientLockMessages(List msgs, int expCnt) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticTxMessageClientFirstFlag() throws Exception {
         ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
@@ -1497,6 +1520,7 @@ private void checkClientPrepareMessages(List msgs, int expCnt) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLockRemoveAfterClientFailed() throws Exception {
         ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
@@ -1555,6 +1579,7 @@ public void testLockRemoveAfterClientFailed() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLockFromClientBlocksExchange() throws Exception {
         ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
@@ -1703,7 +1728,7 @@ private void checkData(final Map map,
                     }
                 }
                 finally {
-                    entry.touch(entry.context().affinity().affinityTopologyVersion());
+                    entry.touch();
                 }
             }
             else
@@ -1730,6 +1755,7 @@ private void checkData(final Map map,
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomicPrimaryPutAllMultinode() throws Exception {
         multinode(ATOMIC, TestType.PUT_ALL);
     }
@@ -1737,6 +1763,7 @@ public void testAtomicPrimaryPutAllMultinode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticTxPutAllMultinode() throws Exception {
         multinode(TRANSACTIONAL, TestType.OPTIMISTIC_TX);
     }
@@ -1744,6 +1771,7 @@ public void testOptimisticTxPutAllMultinode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticSerializableTxPutAllMultinode() throws Exception {
         multinode(TRANSACTIONAL, TestType.OPTIMISTIC_SERIALIZABLE_TX);
     }
@@ -1751,6 +1779,7 @@ public void testOptimisticSerializableTxPutAllMultinode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTxPutAllMultinode() throws Exception {
         multinode(TRANSACTIONAL, TestType.PESSIMISTIC_TX);
     }
@@ -1758,6 +1787,7 @@ public void testPessimisticTxPutAllMultinode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLockAllMultinode() throws Exception {
         multinode(TRANSACTIONAL, TestType.LOCK);
     }
@@ -1993,6 +2023,7 @@ private void multinode(CacheAtomicityMode atomicityMode, final TestType testType
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testServersLeaveOnStart() throws Exception {
         ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeConcurrentStart.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeConcurrentStart.java
index cdb69133ed262..d92351bdd5f49 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeConcurrentStart.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodeConcurrentStart.java
@@ -23,20 +23,18 @@
 import org.apache.ignite.Ignite;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheClientNodeConcurrentStart extends GridCommonAbstractTest {
-    /** */
-    protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int NODES_CNT = 6;
@@ -49,8 +47,6 @@ public class IgniteCacheClientNodeConcurrentStart extends GridCommonAbstractTest
         assertNotNull(clientNodes);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
-
         boolean client = false;

         for (Integer clientIdx : clientNodes) {
@@ -76,6 +72,7 @@ public class IgniteCacheClientNodeConcurrentStart extends GridCommonAbstractTest
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentStart() throws Exception {
         ThreadLocalRandom rnd = ThreadLocalRandom.current();
@@ -111,4 +108,4 @@ public void testConcurrentStart() throws Exception {
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodePartitionsExchangeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodePartitionsExchangeTest.java
index 510140299931c..8f399c101821a 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodePartitionsExchangeTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientNodePartitionsExchangeTest.java
@@ -53,20 +53,19 @@
 import org.apache.ignite.resources.LoggerResource;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.internal.processors.cache.ExchangeContext.IGNITE_EXCHANGE_COMPATIBILITY_VER_1;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheClientNodePartitionsExchangeTest extends GridCommonAbstractTest {
-    /** */
-    protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private boolean client;
@@ -74,7 +73,7 @@ public class IgniteCacheClientNodePartitionsExchangeTest extends GridCommonAbstr
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder).setForceServerMode(true);
+        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true);

         cfg.setClientMode(client);
@@ -97,6 +96,7 @@ public class IgniteCacheClientNodePartitionsExchangeTest extends GridCommonAbstr
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testServerNodeLeave() throws Exception {
         Ignite ignite0 = startGrid(0);
@@ -146,6 +146,7 @@ public void testServerNodeLeave() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSkipPreload() throws Exception {
         Ignite ignite0 = startGrid(0);
@@ -198,6 +199,7 @@ public void testSkipPreload() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPartitionsExchange() throws Exception {
         partitionsExchange(false);
     }
@@ -205,6 +207,7 @@ public void testPartitionsExchange() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPartitionsExchangeCompatibilityMode() throws Exception {
         System.setProperty(IGNITE_EXCHANGE_COMPATIBILITY_VER_1, "true");
@@ -474,6 +477,7 @@ private void waitForTopologyUpdate(int expNodes, final AffinityTopologyVersion t
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClientOnlyCacheStart() throws Exception {
         clientOnlyCacheStart(false, false);
     }
@@ -481,6 +485,7 @@ public void testClientOnlyCacheStart() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearOnlyCacheStart() throws Exception {
         clientOnlyCacheStart(true, false);
     }
@@ -488,6 +493,7 @@ public void testNearOnlyCacheStart() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClientOnlyCacheStartFromServerNode() throws Exception {
         clientOnlyCacheStart(false, true);
     }
@@ -495,6 +501,7 @@ public void testClientOnlyCacheStartFromServerNode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearOnlyCacheStartFromServerNode() throws Exception {
         clientOnlyCacheStart(true, true);
     }
@@ -691,4 +698,4 @@ int partitionsFullMessages() {
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientReconnectTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientReconnectTest.java
index a0796a3e22ec7..7ebd602250bc1 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientReconnectTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheClientReconnectTest.java
@@ -36,10 +36,11 @@
 import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -49,10 +50,8 @@
 /**
  * Test for customer scenario.
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheClientReconnectTest extends GridCommonAbstractTest {
-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int SRV_CNT = 3;
@@ -80,8 +79,6 @@ public class IgniteCacheClientReconnectTest extends GridCommonAbstractTest {
         cfg.setPeerClassLoadingEnabled(false);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
-
         if (!client) {
             CacheConfiguration[] ccfgs = new CacheConfiguration[CACHES];
@@ -129,6 +126,7 @@ public class IgniteCacheClientReconnectTest extends GridCommonAbstractTest {
      *
      * @throws Exception If failed
      */
+    @Test
     public void testClientReconnectOnExchangeHistoryExhaustion() throws Exception {
         System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "1");
@@ -161,6 +159,7 @@ public void testClientReconnectOnExchangeHistoryExhaustion() throws Exception {
      *
      * @throws Exception If failed
      */
+    @Test
     public void testClientInForceServerModeStopsOnExchangeHistoryExhaustion() throws Exception {
         System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "1");
@@ -268,6 +267,7 @@ private void verifyAffinityTopologyVersions() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClientReconnect() throws Exception {
         startGrids(SRV_CNT);
@@ -342,4 +342,4 @@ private void putGet(Ignite ignite) {
             assertEquals(key, cache.get(key));
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheConnectionRecoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheConnectionRecoveryTest.java
index 5241c37f680c8..0edef90a15fa1 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheConnectionRecoveryTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheConnectionRecoveryTest.java
@@ -35,25 +35,25 @@
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static java.util.concurrent.TimeUnit.SECONDS;
 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheConnectionRecoveryTest extends GridCommonAbstractTest {
-    /** */
-    private static TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private boolean client;
@@ -67,15 +67,14 @@ public class IgniteCacheConnectionRecoveryTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);

         cfg.setClientMode(client);

         cfg.setCacheConfiguration(
             cacheConfiguration("cache1", TRANSACTIONAL),
-            cacheConfiguration("cache2", ATOMIC));
+            cacheConfiguration("cache2", TRANSACTIONAL_SNAPSHOT),
+            cacheConfiguration("cache3", ATOMIC));

         return cfg;
     }
@@ -94,6 +93,8 @@ public class IgniteCacheConnectionRecoveryTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
+    @SuppressWarnings("unchecked")
     public void testConnectionRecovery() throws Exception {
         final Map data = new TreeMap<>();
@@ -115,16 +116,27 @@ public void testConnectionRecovery() throws Exception {
                 Thread.currentThread().setName("test-thread-" + idx0 + "-" + node.name());

-                IgniteCache cache1 = node.cache("cache1");
-                IgniteCache cache2 = node.cache("cache2");
+                IgniteCache[] caches = {
+                    node.cache("cache1"),
+                    node.cache("cache2"),
+                    node.cache("cache3")};

                 int iter = 0;

                 while (U.currentTimeMillis() < stopTime) {
                     try {
-                        cache1.putAllAsync(data).get(15, SECONDS);
-
-                        cache2.putAllAsync(data).get(15, SECONDS);
+                        for (IgniteCache cache : caches) {
+                            while (true) {
+                                try {
+                                    cache.putAllAsync(data).get(15, SECONDS);
+
+                                    break;
+                                }
+                                catch (Exception e) {
+                                    MvccFeatureChecker.assertMvccWriteConflict(e);
+                                }
+                            }
+                        }

                         CyclicBarrier b = barrierRef.get();
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutMultiNodeSelfTest.java
index 23fc941472bc1..e26a18582d1fc 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutMultiNodeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutMultiNodeSelfTest.java
@@ -30,31 +30,25 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.binary.BinaryMarshaller;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheCreatePutMultiNodeSelfTest extends GridCommonAbstractTest {
     /** Grid count. */
     private static final int GRID_CNT = 4;

-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-        discoSpi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(discoSpi);
-
         cfg.setMarshaller(new BinaryMarshaller());

         return cfg;
@@ -68,6 +62,7 @@ public class IgniteCacheCreatePutMultiNodeSelfTes
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStartNodes() throws Exception {
         try {
             Collection> futs = new ArrayList<>(GRID_CNT);
@@ -94,8 +89,18 @@ public void testStartNodes() throws Exception {
                     IgniteCache cache = getCache(ignite, cacheName);

-                    for (int i = 0; i < 100; i++)
-                        cache.getAndPut(i, i);
+                    for (int i = 0; i < 100; i++) {
+                        while (true) {
+                            try {
+                                cache.getAndPut(i, i);
+
+                                break;
+                            }
+                            catch (Exception e) {
+                                MvccFeatureChecker.assertMvccWriteConflict(e);
+                            }
+                        }
+                    }

                     barrier.await();
@@ -139,10 +144,10 @@ private IgniteCache getCache(Ignite grid, String cacheName) {
         CacheConfiguration ccfg = new CacheConfiguration<>(cacheName);

         ccfg.setCacheMode(CacheMode.PARTITIONED);
-        ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
+        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
         ccfg.setBackups(1);
         ccfg.setNearConfiguration(null);

         return grid.getOrCreateCache(ccfg);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutTest.java
index 646084c869a32..8173456dd5660 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheCreatePutTest.java
@@ -29,28 +29,28 @@
 import org.apache.ignite.internal.binary.BinaryMarshaller;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
+import static org.apache.ignite.testframework.MvccFeatureChecker.assertMvccWriteConflict;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheCreatePutTest extends GridCommonAbstractTest {
     /** Grid count. */
     private static final int GRID_CNT = 3;

-    /** */
-    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private boolean client;
@@ -62,11 +62,6 @@ public class IgniteCacheCreatePutTest extends GridCommonAbstractTest {
         cfg.setPeerClassLoadingEnabled(false);

-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-        discoSpi.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(discoSpi);
-
         cfg.setMarshaller(new BinaryMarshaller());

         CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
@@ -98,6 +93,7 @@ public class IgniteCacheCreatePutTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testStartNodes() throws Exception {
         long stopTime = System.currentTimeMillis() + 2 * 60_000;
@@ -140,6 +136,7 @@ public void testStartNodes() throws Exception {
     /**
      * @throws Exception If failed.
*/ + @Test public void testUpdatesAndCacheStart() throws Exception { final int NODES = 4; @@ -149,6 +146,7 @@ public void testUpdatesAndCacheStart() throws Exception { ignite0.createCache(cacheConfiguration("atomic-cache", ATOMIC)); ignite0.createCache(cacheConfiguration("tx-cache", TRANSACTIONAL)); + ignite0.createCache(cacheConfiguration("mvcc-tx-cache", TRANSACTIONAL_SNAPSHOT)); final long stopTime = System.currentTimeMillis() + 60_000; @@ -162,6 +160,7 @@ public void testUpdatesAndCacheStart() throws Exception { IgniteCache cache1 = node.cache("atomic-cache"); IgniteCache cache2 = node.cache("tx-cache"); + IgniteCache cache3 = node.cache("mvcc-tx-cache"); ThreadLocalRandom rnd = ThreadLocalRandom.current(); @@ -174,6 +173,13 @@ public void testUpdatesAndCacheStart() throws Exception { cache2.put(key, key); + try { + cache3.put(key, key); + } + catch (Exception e) { + assertMvccWriteConflict(e); // Do not retry. + } + if (iter++ % 1000 == 0) log.info("Update iteration: " + iter); } @@ -246,4 +252,4 @@ private CacheConfiguration cacheConfiguration(String name, CacheAtomicityMode at return ccfg; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheFailedUpdateResponseTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheFailedUpdateResponseTest.java index ebcff7c43898e..57b1c8c3acc4e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheFailedUpdateResponseTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheFailedUpdateResponseTest.java @@ -38,11 +38,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import 
org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.testframework.GridTestUtils.assertThrows; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -51,8 +56,9 @@ import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE; /** - * Checks that no future hangs on non-srializable exceptions and values. + * Checks that no future hangs on non-serializable exceptions and values. */ +@RunWith(JUnit4.class) public class IgniteCacheFailedUpdateResponseTest extends GridCommonAbstractTest { /** Atomic cache. */ private static final String ATOMIC_CACHE = "atomic"; @@ -60,25 +66,34 @@ public class IgniteCacheFailedUpdateResponseTest extends GridCommonAbstractTest /** Tx cache. */ private static final String TX_CACHE = "tx"; + /** Mvcc tx cache. */ + private static final String MVCC_TX_CACHE = "mvcc-tx"; + /** Atomic cache. */ private IgniteCache atomicCache; /** Tx cache. */ private IgniteCache txCache; + /** Mvcc tx cache. 
*/ + private IgniteCache mvccTxCache; + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); CacheConfiguration atomicCfg = new CacheConfiguration(ATOMIC_CACHE); CacheConfiguration txCfg = new CacheConfiguration(TX_CACHE); + CacheConfiguration mvccTxCfg = new CacheConfiguration(MVCC_TX_CACHE); atomicCfg.setBackups(1); txCfg.setBackups(1); + mvccTxCfg.setBackups(1); txCfg.setAtomicityMode(TRANSACTIONAL); + mvccTxCfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); - cfg.setCacheConfiguration(atomicCfg, txCfg); + cfg.setCacheConfiguration(atomicCfg, txCfg, mvccTxCfg); cfg.setClientMode(igniteInstanceName.contains("client")); @@ -98,11 +113,13 @@ public class IgniteCacheFailedUpdateResponseTest extends GridCommonAbstractTest @Override protected void beforeTest() throws Exception { atomicCache = grid("client").cache(ATOMIC_CACHE); txCache = grid("client").cache(TX_CACHE); + mvccTxCache = grid("client").cache(MVCC_TX_CACHE); } /** * @throws Exception If failed. */ + @Test public void testInvokeAtomic() throws Exception { testInvoke(atomicCache); testInvokeAll(atomicCache); @@ -111,6 +128,7 @@ public void testInvokeAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvokeTx() throws Exception { testInvoke(txCache); testInvokeAll(txCache); @@ -134,11 +152,33 @@ public void testInvokeTx() throws Exception { doInTransaction(client, OPTIMISTIC, SERIALIZABLE, clos); } + /** + * @throws Exception If failed. 
+ */ + @Test + public void testInvokeMvccTx() throws Exception { + testInvoke(mvccTxCache); + testInvokeAll(mvccTxCache); + + IgniteEx client = grid("client"); + + Callable clos = new Callable() { + @Override public Object call() throws Exception { + testInvoke(mvccTxCache); + testInvokeAll(mvccTxCache); + + return null; + } + }; + + doInTransaction(client, PESSIMISTIC, REPEATABLE_READ, clos); + } + /** * @param cache Cache. */ private void testInvoke(final IgniteCache cache) throws Exception { - Class exp = grid("client").transactions().tx() == null + Class exp = grid("client").transactions().tx() == null || ((IgniteCacheProxy)cache).context().mvccEnabled() ? EntryProcessorException.class : NonSerializableException.class; @@ -174,7 +214,7 @@ private void testInvokeAll(final IgniteCache cache) throws Excep assertNotNull(epRes); // In transactions EP will be invoked locally. - Class exp = grid("client").transactions().tx() == null + Class exp = grid("client").transactions().tx() == null || ((IgniteCacheProxy)cache).context().mvccEnabled() ? 
EntryProcessorException.class : NonSerializableException.class; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGetRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGetRestartTest.java index e194e2820bbac..df4e3be10e495 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGetRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGetRestartTest.java @@ -35,11 +35,11 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -49,10 +49,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheGetRestartTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final long TEST_TIME = 60_000; @@ -72,8 +70,6 @@ public class IgniteCacheGetRestartTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - 
((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); Boolean clientMode = client.get(); @@ -91,8 +87,6 @@ public class IgniteCacheGetRestartTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true"); - super.beforeTestsStarted(); startGrids(SRVS); @@ -108,7 +102,9 @@ public class IgniteCacheGetRestartTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); + super.afterTestsStopped(); + + stopAllGrids(); } /** {@inheritDoc} */ @@ -119,6 +115,7 @@ public class IgniteCacheGetRestartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testGetRestartReplicated() throws Exception { CacheConfiguration cache = cacheConfiguration(REPLICATED, 0, false); @@ -128,6 +125,7 @@ public void testGetRestartReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetRestartPartitioned1() throws Exception { CacheConfiguration cache = cacheConfiguration(PARTITIONED, 1, false); @@ -137,6 +135,7 @@ public void testGetRestartPartitioned1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetRestartPartitioned2() throws Exception { CacheConfiguration cache = cacheConfiguration(PARTITIONED, 2, false); @@ -146,6 +145,7 @@ public void testGetRestartPartitioned2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetRestartPartitionedNearEnabled() throws Exception { CacheConfiguration cache = cacheConfiguration(PARTITIONED, 1, true); @@ -209,25 +209,28 @@ private void checkRestart(final CacheConfiguration ccfg, final int restartCnt) t log.info("Restart node [node=" + nodeIdx + ", client=" + clientMode + ']'); - Ignite ignite = startGrid(nodeIdx); + try { + Ignite ignite = startGrid(nodeIdx); - IgniteCache cache; + IgniteCache cache; - if (clientMode && ccfg.getNearConfiguration() != null) - cache = ignite.createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); - else - cache = ignite.cache(ccfg.getName()); + if (clientMode && ccfg.getNearConfiguration() != null) + cache = ignite.createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); + else + cache = ignite.cache(ccfg.getName()); - checkGet(cache); - - IgniteInternalFuture syncFut = ((IgniteCacheProxy)cache).context().preloader().syncFuture(); - - while (!syncFut.isDone() && U.currentTimeMillis() < stopTime) checkGet(cache); - checkGet(cache); + IgniteInternalFuture syncFut = ((IgniteCacheProxy)cache).context().preloader().syncFuture(); - stopGrid(nodeIdx); + while (!syncFut.isDone() && U.currentTimeMillis() < stopTime) + checkGet(cache); + + checkGet(cache); + } + finally { + stopGrid(nodeIdx); + } } return null; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGroupsPartitionLossPolicySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGroupsPartitionLossPolicySelfTest.java index 186255318dc49..f0e2c416cae3a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGroupsPartitionLossPolicySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheGroupsPartitionLossPolicySelfTest.java @@ -37,10 +37,10 @@ import org.apache.ignite.internal.util.typedef.F; import 
org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.P1; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -48,10 +48,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheGroupsPartitionLossPolicySelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -72,8 +70,6 @@ public class IgniteCacheGroupsPartitionLossPolicySelfTest extends GridCommonAbst @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); CacheConfiguration ccfg1 = new CacheConfiguration(CACHE_1) @@ -100,6 +96,7 @@ public class IgniteCacheGroupsPartitionLossPolicySelfTest extends GridCommonAbst /** * @throws Exception if failed. */ + @Test public void testReadOnlySafe() throws Exception { partLossPlc = PartitionLossPolicy.READ_ONLY_SAFE; @@ -109,6 +106,7 @@ public void testReadOnlySafe() throws Exception { /** * @throws Exception if failed. */ + @Test public void testReadOnlyAll() throws Exception { partLossPlc = PartitionLossPolicy.READ_ONLY_ALL; @@ -118,6 +116,7 @@ public void testReadOnlyAll() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testReadWriteSafe() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; @@ -127,6 +126,7 @@ public void testReadWriteSafe() throws Exception { /** * @throws Exception if failed. */ + @Test public void testReadWriteAll() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_ALL; @@ -136,6 +136,7 @@ public void testReadWriteAll() throws Exception { /** * @throws Exception if failed. */ + @Test public void testIgnore() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-5078"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheManyClientsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheManyClientsTest.java index 1e7b32a4232cc..64d577953aa05 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheManyClientsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheManyClientsTest.java @@ -36,11 +36,12 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -49,10 +50,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheManyClientsTest extends GridCommonAbstractTest { - /** */ - protected 
static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRVS = 4; @@ -80,7 +79,6 @@ public class IgniteCacheManyClientsTest extends GridCommonAbstractTest { ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setLocalPortRange(200); ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinderCleanFrequency(10 * 60_000); ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setJoinTimeout(2 * 60_000); @@ -114,6 +112,7 @@ public class IgniteCacheManyClientsTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testManyClientsClientDiscovery() throws Throwable { clientDiscovery = true; @@ -123,6 +122,7 @@ public void testManyClientsClientDiscovery() throws Throwable { /** * @throws Exception If failed. */ + @Test public void testManyClientsSequentiallyClientDiscovery() throws Exception { clientDiscovery = true; @@ -132,6 +132,7 @@ public void testManyClientsSequentiallyClientDiscovery() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testManyClientsForceServerMode() throws Throwable { manyClientsPutGet(); } @@ -331,4 +332,4 @@ private void manyClientsPutGet() throws Throwable { stop.set(true); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryAbstractTest.java index 8b75695736e6f..944f86fd6fac2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryAbstractTest.java @@ -38,6 +38,9 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -45,6 +48,7 @@ /** * Tests message delivery after reconnection. */ +@RunWith(JUnit4.class) public abstract class IgniteCacheMessageRecoveryAbstractTest extends GridCommonAbstractTest { /** Grid count. */ public static final int GRID_CNT = 3; @@ -107,6 +111,7 @@ protected int connectionsPerNode() { /** * @throws Exception If failed. 
*/ + @Test public void testMessageRecovery() throws Exception { final Ignite ignite = grid(0); @@ -195,4 +200,4 @@ static boolean closeSessions(Ignite ignite) throws Exception { return closed; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryIdleConnectionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryIdleConnectionTest.java index 0f4aaa7a85c17..fcabcefb0e1ea 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryIdleConnectionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageRecoveryIdleConnectionTest.java @@ -30,23 +30,22 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheMessageRecoveryIdleConnectionTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ 
private static final int NODES = 3; @@ -57,8 +56,6 @@ public class IgniteCacheMessageRecoveryIdleConnectionTest extends GridCommonAbst @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - TcpCommunicationSpi commSpi = new TcpCommunicationSpi(); commSpi.setIdleConnectionTimeout(IDLE_TIMEOUT); @@ -84,6 +81,7 @@ public class IgniteCacheMessageRecoveryIdleConnectionTest extends GridCommonAbst /** * @throws Exception If failed. */ + @Test public void testCacheOperationsIdleConnectionCloseTx() throws Exception { cacheOperationsIdleConnectionClose(TRANSACTIONAL); } @@ -91,6 +89,15 @@ public void testCacheOperationsIdleConnectionCloseTx() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testCacheOperationsIdleConnectionCloseMvccTx() throws Exception { + cacheOperationsIdleConnectionClose(TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. 
+ */ + @Test public void testCacheOperationsIdleConnectionCloseAtomic() throws Exception { cacheOperationsIdleConnectionClose(ATOMIC); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageWriteTimeoutTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageWriteTimeoutTest.java index 3ba319bde6bd2..93cff7fd3de8c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageWriteTimeoutTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheMessageWriteTimeoutTest.java @@ -26,25 +26,21 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheMessageWriteTimeoutTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - TcpCommunicationSpi commSpi = (TcpCommunicationSpi)cfg.getCommunicationSpi(); // Try provoke connection close on socket writeTimeout. 
@@ -74,6 +70,7 @@ public class IgniteCacheMessageWriteTimeoutTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testMessageQueueLimit() throws Exception { for (int i = 0; i < 3; i++) { log.info("Iteration: " + i); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNearRestartRollbackSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNearRestartRollbackSelfTest.java index 54b78209c56a2..c04fb0a863bf4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNearRestartRollbackSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNearRestartRollbackSelfTest.java @@ -43,12 +43,13 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionRollbackException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; @@ -56,10 +57,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheNearRestartRollbackSelfTest extends GridCommonAbstractTest { - /** Shared IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** * The number of entries to put to the test cache. 
*/ @@ -69,19 +68,14 @@ public class IgniteCacheNearRestartRollbackSelfTest extends GridCommonAbstractTe @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - cfg.setClientFailureDetectionTimeout(50000); - cfg.setDiscoverySpi(discoSpi); cfg.setCacheConfiguration(cacheConfiguration(igniteInstanceName)); if (getTestIgniteInstanceName(3).equals(igniteInstanceName)) { cfg.setClientMode(true); - discoSpi.setForceServerMode(true); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); } TcpCommunicationSpi commSpi = new TcpCommunicationSpi(); @@ -93,18 +87,6 @@ public class IgniteCacheNearRestartRollbackSelfTest extends GridCommonAbstractTe return cfg; } - /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); - - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL,"true"); - } - - /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); - } - /** * @param igniteInstanceName Ignite instance name. * @return Cache configuration. @@ -128,7 +110,7 @@ protected CacheConfiguration cacheConfiguration(String igniteIns /** * @throws Exception If failed. 
*/ - @SuppressWarnings("SynchronizationOnLocalVariableOrMethodParameter") + @Test public void testRestarts() throws Exception { startGrids(4); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNodeJoinAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNodeJoinAbstractTest.java index 002a28d3fca84..b34da817a3b42 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNodeJoinAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheNodeJoinAbstractTest.java @@ -28,12 +28,16 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheNodeJoinAbstractTest extends IgniteCacheAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -57,6 +61,7 @@ public abstract class IgniteCacheNodeJoinAbstractTest extends IgniteCacheAbstrac /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { final IgniteCache cache = jcache(0); @@ -111,6 +116,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testScanQuery() throws Exception { final IgniteCache cache = jcache(0); @@ -148,4 +154,4 @@ public void testScanQuery() throws Exception { stopGrid(1); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePartitionLossPolicySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePartitionLossPolicySelfTest.java index 1616e8f41359d..49979759d92f6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePartitionLossPolicySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePartitionLossPolicySelfTest.java @@ -18,23 +18,27 @@ package org.apache.ignite.internal.processors.cache.distributed; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collection; -import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; +import java.util.TreeSet; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Semaphore; import java.util.concurrent.atomic.AtomicBoolean; import javax.cache.CacheException; +import junit.framework.AssertionFailedError; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.PartitionLossPolicy; import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cache.query.ScanQuery; import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.CacheRebalancingEvent; import 
org.apache.ignite.events.Event; @@ -43,28 +47,30 @@ import org.apache.ignite.internal.TestDelayingCommunicationSpi; import org.apache.ignite.internal.managers.communication.GridIoMessage; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsAbstractMessage; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.P1; +import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import static java.util.Arrays.asList; +import static java.util.Collections.singletonList; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class IgniteCachePartitionLossPolicySelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -72,37 +78,44 @@ public class IgniteCachePartitionLossPolicySelfTest extends GridCommonAbstractTe private PartitionLossPolicy partLossPlc; /** */ - protected static final String CACHE_NAME = "partitioned"; + private int backups; /** */ - 
private int backups = 0; + private final AtomicBoolean delayPartExchange = new AtomicBoolean(false); /** */ - private final AtomicBoolean delayPartExchange = new AtomicBoolean(false); + private final TopologyChanger killSingleNode = new TopologyChanger( + false, singletonList(3), asList(0, 1, 2, 4), 0); /** */ - private final TopologyChanger killSingleNode = new TopologyChanger(false, Arrays.asList(3), Arrays.asList(0, 1, 2, 4),0); + private boolean isPersistenceEnabled; /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setCommunicationSpi(new TestDelayingCommunicationSpi() { + /** {@inheritDoc} */ @Override protected boolean delayMessage(Message msg, GridIoMessage ioMsg) { - return delayPartExchange.get() && (msg instanceof GridDhtPartitionsFullMessage || msg instanceof GridDhtPartitionsAbstractMessage); + return delayPartExchange.get() && + (msg instanceof GridDhtPartitionsFullMessage || msg instanceof GridDhtPartitionsAbstractMessage); } - @Override protected int delayMillis() { - return 250; - } }); cfg.setClientMode(client); cfg.setCacheConfiguration(cacheConfiguration()); + cfg.setConsistentId(gridName); + + cfg.setDataStorageConfiguration( + new DataStorageConfiguration() + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setPersistenceEnabled(isPersistenceEnabled) + )); + return cfg; } @@ -110,7 +123,7 @@ public class IgniteCachePartitionLossPolicySelfTest extends GridCommonAbstractTe * @return Cache configuration. 
*/ protected CacheConfiguration cacheConfiguration() { - CacheConfiguration cacheCfg = new CacheConfiguration<>(CACHE_NAME); + CacheConfiguration cacheCfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); cacheCfg.setCacheMode(PARTITIONED); cacheCfg.setBackups(backups); @@ -121,23 +134,32 @@ protected CacheConfiguration cacheConfiguration() { return cacheCfg; } - /** {@inheritDoc} */ - @Override protected void afterTest() throws Exception { - stopAllGrids(); - } - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { super.beforeTest(); delayPartExchange.set(false); + partLossPlc = PartitionLossPolicy.IGNORE; + backups = 0; + + isPersistenceEnabled = false; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + super.afterTest(); } /** * @throws Exception if failed. */ + @Test public void testReadOnlySafe() throws Exception { partLossPlc = PartitionLossPolicy.READ_ONLY_SAFE; @@ -147,6 +169,19 @@ public void testReadOnlySafe() throws Exception { /** * @throws Exception if failed. */ + @Test + public void testReadOnlySafeWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_ONLY_SAFE; + + isPersistenceEnabled = true; + + checkLostPartition(false, true, killSingleNode); + } + + /** + * @throws Exception if failed. + */ + @Test public void testReadOnlyAll() throws Exception { partLossPlc = PartitionLossPolicy.READ_ONLY_ALL; @@ -156,6 +191,21 @@ public void testReadOnlyAll() throws Exception { /** * @throws Exception if failed. */ + @Test + public void testReadOnlyAllWithPersistence() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10041"); + + partLossPlc = PartitionLossPolicy.READ_ONLY_ALL; + + isPersistenceEnabled = true; + + checkLostPartition(false, false, killSingleNode); + } + + /** + * @throws Exception if failed. 
+ */ + @Test public void testReadWriteSafe() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; @@ -165,6 +215,19 @@ public void testReadWriteSafe() throws Exception { /** * @throws Exception if failed. */ + @Test + public void testReadWriteSafeWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; + + isPersistenceEnabled = true; + + checkLostPartition(true, true, killSingleNode); + } + + /** + * @throws Exception if failed. + */ + @Test public void testReadWriteAll() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_ALL; @@ -174,80 +237,238 @@ public void testReadWriteAll() throws Exception { /** * @throws Exception if failed. */ + @Test + public void testReadWriteAllWithPersistence() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10041"); + + partLossPlc = PartitionLossPolicy.READ_WRITE_ALL; + + isPersistenceEnabled = true; + + checkLostPartition(true, false, killSingleNode); + } + + /** + * @throws Exception if failed. + */ + @Test public void testReadWriteSafeAfterKillTwoNodes() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; - checkLostPartition(true, true, new TopologyChanger(false, Arrays.asList(3, 2), Arrays.asList(0, 1, 4), 0)); + checkLostPartition(true, true, new TopologyChanger(false, asList(3, 2), asList(0, 1, 4), 0)); } /** * @throws Exception if failed. */ + @Test + public void testReadWriteSafeAfterKillTwoNodesWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; + + isPersistenceEnabled = true; + + checkLostPartition(true, true, new TopologyChanger(false, asList(3, 2), asList(0, 1, 4), 0)); + } + + /** + * @throws Exception if failed. 
+ */ + @Test public void testReadWriteSafeAfterKillTwoNodesWithDelay() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; - checkLostPartition(true, true, new TopologyChanger(false, Arrays.asList(3, 2), Arrays.asList(0, 1, 4), 20)); + checkLostPartition(true, true, new TopologyChanger(false, asList(3, 2), asList(0, 1, 4), 20)); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testReadWriteSafeAfterKillTwoNodesWithDelayWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; + + isPersistenceEnabled = true; + + checkLostPartition(true, true, new TopologyChanger(false, asList(3, 2), asList(0, 1, 4), 20)); } /** * @throws Exception if failed. */ + @Test public void testReadWriteSafeWithBackupsAfterKillThreeNodes() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; backups = 1; - checkLostPartition(true, true, new TopologyChanger(true, Arrays.asList(3, 2, 1), Arrays.asList(0, 4), 0)); + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 2, 1), asList(0, 4), 0)); } /** * @throws Exception if failed. */ + @Test + public void testReadWriteSafeWithBackupsAfterKillThreeNodesWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; + + backups = 1; + + isPersistenceEnabled = true; + + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 2, 1), asList(0, 4), 0)); + } + + /** + * @throws Exception if failed. + */ + @Test public void testReadWriteSafeAfterKillCrd() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; - checkLostPartition(true, true, new TopologyChanger(true, Arrays.asList(3, 0), Arrays.asList(1, 2, 4), 0)); + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 0), asList(1, 2, 4), 0)); + } + + /** + * @throws Exception if failed. 
+ */ + @Test + public void testReadWriteSafeAfterKillCrdWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; + + isPersistenceEnabled = true; + + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 0), asList(1, 2, 4), 0)); } /** * @throws Exception if failed. */ + @Test public void testReadWriteSafeWithBackups() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; backups = 1; - checkLostPartition(true, true, new TopologyChanger(true, Arrays.asList(3, 2), Arrays.asList(0, 1, 4), 0)); + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 2), asList(0, 1, 4), 0)); } /** * @throws Exception if failed. */ + @Test + public void testReadWriteSafeWithBackupsWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; + + backups = 1; + + isPersistenceEnabled = true; + + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 2), asList(0, 1, 4), 0)); + } + + /** + * @throws Exception if failed. + */ + @Test public void testReadWriteSafeWithBackupsAfterKillCrd() throws Exception { partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; backups = 1; - checkLostPartition(true, true, new TopologyChanger(true, Arrays.asList(3, 0), Arrays.asList(1, 2, 4), 0)); + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 0), asList(1, 2, 4), 0)); } /** - * @param topChanger topology changer. * @throws Exception if failed. */ - public void testIgnore(TopologyChanger topChanger) throws Exception { + @Test + public void testReadWriteSafeWithBackupsAfterKillCrdWithPersistence() throws Exception { + partLossPlc = PartitionLossPolicy.READ_WRITE_SAFE; + + backups = 1; + + isPersistenceEnabled = true; + + checkLostPartition(true, true, new TopologyChanger(true, asList(3, 0), asList(1, 2, 4), 0)); + } + + /** + * @throws Exception if failed. 
+ */ + @Test + public void testIgnore() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-5078"); + partLossPlc = PartitionLossPolicy.IGNORE; + + checkIgnore(killSingleNode); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testIgnoreWithPersistence() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-5078"); + + fail("https://issues.apache.org/jira/browse/IGNITE-10041"); + + partLossPlc = PartitionLossPolicy.IGNORE; + + isPersistenceEnabled = true; + + checkIgnore(killSingleNode); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testIgnoreKillThreeNodes() throws Exception { + partLossPlc = PartitionLossPolicy.IGNORE; + + // TODO aliveNodes should include node 4, but it fails due to https://issues.apache.org/jira/browse/IGNITE-5078. + // TODO need to add 4 to the aliveNodes after IGNITE-5078 is fixed. + // TopologyChanger onlyCrdIsAlive = new TopologyChanger(false, Arrays.asList(1, 2, 3), Arrays.asList(0, 4), 0); + TopologyChanger onlyCrdIsAlive = new TopologyChanger(false, asList(1, 2, 3), singletonList(0), 0); + + checkIgnore(onlyCrdIsAlive); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testIgnoreKillThreeNodesWithPersistence() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10041"); + + partLossPlc = PartitionLossPolicy.IGNORE; + + isPersistenceEnabled = true; + + // TODO aliveNodes should include node 4, but it fails due to https://issues.apache.org/jira/browse/IGNITE-5078. + // TODO need to add 4 to the aliveNodes after IGNITE-5078 is fixed. + // TopologyChanger onlyCrdIsAlive = new TopologyChanger(false, Arrays.asList(1, 2, 3), Arrays.asList(0, 4), 0); + TopologyChanger onlyCrdIsAlive = new TopologyChanger(false, asList(1, 2, 3), singletonList(0), 0); + + checkIgnore(onlyCrdIsAlive); + } + + /** + * @param topChanger topology changer. + * @throws Exception if failed. 
+ */ + private void checkIgnore(TopologyChanger topChanger) throws Exception { topChanger.changeTopology(); for (Ignite ig : G.allGrids()) { - IgniteCache cache = ig.cache(CACHE_NAME); + IgniteCache cache = ig.cache(DEFAULT_CACHE_NAME); Collection lost = cache.lostPartitions(); assertTrue("[grid=" + ig.name() + ", lost=" + lost.toString() + ']', lost.isEmpty()); - int parts = ig.affinity(CACHE_NAME).partitions(); + int parts = ig.affinity(DEFAULT_CACHE_NAME).partitions(); for (int i = 0; i < parts; i++) { cache.get(i); @@ -266,14 +487,14 @@ public void testIgnore(TopologyChanger topChanger) throws Exception { private void checkLostPartition(boolean canWrite, boolean safe, TopologyChanger topChanger) throws Exception { assert partLossPlc != null; - int part = topChanger.changeTopology().get(0); + List lostParts = topChanger.changeTopology(); // Wait for all grids (servers and client) have same topology version // to make sure that all nodes received map with lost partition. - GridTestUtils.waitForCondition(() -> { + boolean success = GridTestUtils.waitForCondition(() -> { AffinityTopologyVersion last = null; for (Ignite ig : G.allGrids()) { - AffinityTopologyVersion ver = ((IgniteEx) ig).context().cache().context().exchange().readyAffinityVersion(); + AffinityTopologyVersion ver = ((IgniteEx)ig).context().cache().context().exchange().readyAffinityVersion(); if (last != null && !last.equals(ver)) return false; @@ -284,71 +505,143 @@ private void checkLostPartition(boolean canWrite, boolean safe, TopologyChanger return true; }, 10000); + assertTrue("Failed to wait for new topology", success); + for (Ignite ig : G.allGrids()) { info("Checking node: " + ig.cluster().localNode().id()); - IgniteCache cache = ig.cache(CACHE_NAME); + IgniteCache cache = ig.cache(DEFAULT_CACHE_NAME); - verifyCacheOps(canWrite, safe, part, ig); + verifyLostPartitions(ig, lostParts); - // Check we can read and write to lost partition in recovery mode. 
- IgniteCache recoverCache = cache.withPartitionRecover(); + verifyCacheOps(canWrite, safe, ig); - for (int lostPart : recoverCache.lostPartitions()) { - recoverCache.get(lostPart); - recoverCache.put(lostPart, lostPart); - } + validateQuery(safe, ig); - // Check that writing in recover mode does not clear partition state. - verifyCacheOps(canWrite, safe, part, ig); + // TODO withPartitionRecover doesn't work with BLT - https://issues.apache.org/jira/browse/IGNITE-10041. + if (!isPersistenceEnabled) { + // Check we can read and write to lost partition in recovery mode. + IgniteCache recoverCache = cache.withPartitionRecover(); - // Validate queries. - validateQuery(safe, part, ig); + for (int lostPart : recoverCache.lostPartitions()) { + recoverCache.get(lostPart); + recoverCache.put(lostPart, lostPart); + } + + // Check that writing in recover mode does not clear partition state. + verifyLostPartitions(ig, lostParts); + + verifyCacheOps(canWrite, safe, ig); + + validateQuery(safe, ig); + } } - // Check that partition state does not change after we start a new node. - IgniteEx grd = startGrid(3); + checkNewNode(true, canWrite, safe); + checkNewNode(false, canWrite, safe); - info("Newly started node: " + grd.cluster().localNode().id()); + // Bring all nodes back. + for (int i : topChanger.killNodes) { + IgniteEx grd = startGrid(i); - for (Ignite ig : G.allGrids()) - verifyCacheOps(canWrite, safe, part, ig); + info("Newly started node: " + grd.cluster().localNode().id()); - ignite(4).resetLostPartitions(Collections.singletonList(CACHE_NAME)); + // Check that partition state does not change after we start each node. + // TODO With persistence enabled LOST partitions become OWNING after a node joins back - https://issues.apache.org/jira/browse/IGNITE-10044. 
+ if (!isPersistenceEnabled) { + for (Ignite ig : G.allGrids()) { + verifyCacheOps(canWrite, safe, ig); + + // TODO Query effectively waits for rebalance due to https://issues.apache.org/jira/browse/IGNITE-10057 + // TODO and after resetLostPartition there is another OWNING copy in the cluster due to https://issues.apache.org/jira/browse/IGNITE-10058. + // TODO Uncomment after https://issues.apache.org/jira/browse/IGNITE-10058 is fixed. +// validateQuery(safe, ig); + } + } + } + + ignite(4).resetLostPartitions(singletonList(DEFAULT_CACHE_NAME)); awaitPartitionMapExchange(true, true, null); for (Ignite ig : G.allGrids()) { - IgniteCache cache = ig.cache(CACHE_NAME); + IgniteCache cache = ig.cache(DEFAULT_CACHE_NAME); assertTrue(cache.lostPartitions().isEmpty()); - int parts = ig.affinity(CACHE_NAME).partitions(); + int parts = ig.affinity(DEFAULT_CACHE_NAME).partitions(); for (int i = 0; i < parts; i++) { cache.get(i); cache.put(i, i); } + + for (int i = 0; i < parts; i++) { + checkQueryPasses(ig, false, i); + + if (shouldExecuteLocalQuery(ig, i)) + checkQueryPasses(ig, true, i); + + } + + checkQueryPasses(ig, false); } } /** - * + * @param client Client flag. + * @param canWrite Can write flag. + * @param safe Safe flag. + * @throws Exception If failed to start a new node. + */ + private void checkNewNode( + boolean client, + boolean canWrite, + boolean safe + ) throws Exception { + this.client = client; + + try { + IgniteEx cl = (IgniteEx)startGrid("newNode"); + + CacheGroupContext grpCtx = cl.context().cache().cacheGroup(CU.cacheId(DEFAULT_CACHE_NAME)); + + assertTrue(grpCtx.needsRecovery()); + + verifyCacheOps(canWrite, safe, cl); + + validateQuery(safe, cl); + } + finally { + stopGrid("newNode", false); + + this.client = false; + } + } + + /** + * @param node Node. + * @param lostParts Lost partition IDs. 
+ */ + private void verifyLostPartitions(Ignite node, List lostParts) { + IgniteCache cache = node.cache(DEFAULT_CACHE_NAME); + + Set actualSortedLostParts = new TreeSet<>(cache.lostPartitions()); + Set expSortedLostParts = new TreeSet<>(lostParts); + + assertEqualsCollections(expSortedLostParts, actualSortedLostParts); + } + + /** * @param canWrite {@code True} if writes are allowed. * @param safe {@code True} if lost partition should trigger exception. - * @param part Lost partition ID. * @param ig Ignite instance. */ - private void verifyCacheOps(boolean canWrite, boolean safe, int part, Ignite ig) { - IgniteCache cache = ig.cache(CACHE_NAME); - - Collection lost = cache.lostPartitions(); + private void verifyCacheOps(boolean canWrite, boolean safe, Ignite ig) { + IgniteCache cache = ig.cache(DEFAULT_CACHE_NAME); - assertTrue("Failed to find expected lost partition [exp=" + part + ", lost=" + lost + ']', - lost.contains(part)); - - int parts = ig.affinity(CACHE_NAME).partitions(); + int parts = ig.affinity(DEFAULT_CACHE_NAME).partitions(); // Check read. for (int i = 0; i < parts; i++) { @@ -395,8 +688,8 @@ private void verifyCacheOps(boolean canWrite, boolean safe, int part, Ignite ig) * @param nodes List of nodes to find partition. * @return List of partitions that aren't primary or backup for specified nodes. */ - protected List noPrimaryOrBackupPartition(List nodes) { - Affinity aff = ignite(4).affinity(CACHE_NAME); + private List noPrimaryOrBackupPartition(List nodes) { + Affinity aff = ignite(4).affinity(DEFAULT_CACHE_NAME); List parts = new ArrayList<>(); @@ -424,15 +717,125 @@ protected List noPrimaryOrBackupPartition(List nodes) { * Validate query execution on a node. * * @param safe Safe flag. - * @param part Partition. * @param node Node. */ - protected void validateQuery(boolean safe, int part, Ignite node) { + private void validateQuery(boolean safe, Ignite node) { + // Get node lost and remaining partitions. 
+ IgniteCache cache = node.cache(DEFAULT_CACHE_NAME); + + Collection lostParts = cache.lostPartitions(); + + int part = cache.lostPartitions().stream().findFirst().orElseThrow(AssertionFailedError::new); + + Integer remainingPart = null; + + for (int i = 0; i < node.affinity(DEFAULT_CACHE_NAME).partitions(); i++) { + if (lostParts.contains(i)) + continue; + + remainingPart = i; + + break; + } + + assertNotNull("Failed to find a partition that isn't lost", remainingPart); + + // 1. Check query against all partitions. + validateQuery0(safe, node); + + // 2. Check query against LOST partition. + validateQuery0(safe, node, part); + + // 3. Check query on remaining partition. + checkQueryPasses(node, false, remainingPart); + + if (shouldExecuteLocalQuery(node, remainingPart)) + checkQueryPasses(node, true, remainingPart); + + // 4. Check query over two partitions - normal and LOST. + validateQuery0(safe, node, part, remainingPart); + } + + /** + * Query validation routine. + * + * @param safe Safe flag. + * @param node Node. + * @param parts Partitions. + */ + private void validateQuery0(boolean safe, Ignite node, int... parts) { + if (safe) + checkQueryFails(node, false, parts); + else + checkQueryPasses(node, false, parts); + + if (shouldExecuteLocalQuery(node, parts)) { + if (safe) + checkQueryFails(node, true, parts); + else + checkQueryPasses(node, true, parts); + } + } + + /** + * @return true if the given node is primary for all given partitions. + */ + private boolean shouldExecuteLocalQuery(Ignite node, int... parts) { + if (parts == null || parts.length == 0) + return false; + + int numOfPrimaryParts = 0; + + for (int nodePrimaryPart : node.affinity(DEFAULT_CACHE_NAME).primaryPartitions(node.cluster().localNode())) { + for (int part : parts) { + if (part == nodePrimaryPart) + numOfPrimaryParts++; + } + } + + return numOfPrimaryParts == parts.length; + } + + /** + * @param node Node. + * @param loc Local flag. + * @param parts Partitions. 
+ */ + protected void checkQueryPasses(Ignite node, boolean loc, int... parts) { + // Scan queries don't support multiple partitions. + if (parts != null && parts.length > 1) + return; + + // TODO Local scan queries fail in non-safe modes - https://issues.apache.org/jira/browse/IGNITE-10059. + if (loc) + return; + + IgniteCache cache = node.cache(DEFAULT_CACHE_NAME); + + ScanQuery qry = new ScanQuery(); + + if (parts != null && parts.length > 0) + qry.setPartition(parts[0]); + + if (loc) + qry.setLocal(true); + + cache.query(qry).getAll(); + } + + /** + * @param node Node. + * @param loc Local flag. + * @param parts Partitions. + */ + protected void checkQueryFails(Ignite node, boolean loc, int... parts) { + // TODO Scan queries never fail due to partition loss - https://issues.apache.org/jira/browse/IGNITE-9902. + // TODO Need to add an actual check after https://issues.apache.org/jira/browse/IGNITE-9902 is fixed. // No-op. } /** */ - class TopologyChanger { + private class TopologyChanger { /** Flag to delay partition exchange */ private boolean delayExchange; @@ -451,7 +854,7 @@ class TopologyChanger { * @param aliveNodes List of nodes to be alive. * @param stopDelay Delay between stopping nodes. 
*/ - public TopologyChanger(boolean delayExchange, List killNodes, List aliveNodes, + private TopologyChanger(boolean delayExchange, List killNodes, List aliveNodes, long stopDelay) { this.delayExchange = delayExchange; this.killNodes = killNodes; @@ -463,13 +866,16 @@ public TopologyChanger(boolean delayExchange, List killNodes, List changeTopology() throws Exception { + private List changeTopology() throws Exception { startGrids(4); - Affinity aff = ignite(0).affinity(CACHE_NAME); + if (isPersistenceEnabled) + grid(0).cluster().active(true); + + Affinity aff = ignite(0).affinity(DEFAULT_CACHE_NAME); for (int i = 0; i < aff.partitions(); i++) - ignite(0).cache(CACHE_NAME).put(i, i); + ignite(0).cache(DEFAULT_CACHE_NAME).put(i, i); client = true; @@ -497,14 +903,13 @@ protected List changeTopology() throws Exception { lostMap.add(semaphoreMap); - grid(i).events().localListen(new P1() { @Override public boolean apply(Event evt) { assert evt.type() == EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST; CacheRebalancingEvent cacheEvt = (CacheRebalancingEvent)evt; - if (F.eq(CACHE_NAME, cacheEvt.cacheName())) { + if (F.eq(DEFAULT_CACHE_NAME, cacheEvt.cacheName())) { if (semaphoreMap.containsKey(cacheEvt.partition())) semaphoreMap.get(cacheEvt.partition()).release(); } @@ -512,7 +917,6 @@ protected List changeTopology() throws Exception { return true; } }, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST); - } if (delayExchange) @@ -538,16 +942,15 @@ protected List changeTopology() throws Exception { for (Map map : lostMap) { for (Map.Entry entry : map.entrySet()) - assertTrue("Failed to wait for partition LOST event for partition:" + entry.getKey(), entry.getValue().tryAcquire(1)); + assertTrue("Failed to wait for partition LOST event for partition: " + entry.getKey(), entry.getValue().tryAcquire(1)); } for (Map map : lostMap) { for (Map.Entry entry : map.entrySet()) - assertFalse("Partition LOST event raised twice for partition:" + entry.getKey(), 
entry.getValue().tryAcquire(1)); + assertFalse("Partition LOST event raised twice for partition: " + entry.getKey(), entry.getValue().tryAcquire(1)); } return parts; } } - } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePrimarySyncTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePrimarySyncTest.java index e9e22ee90429f..19a23cc2d0c7e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePrimarySyncTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePrimarySyncTest.java @@ -24,17 +24,20 @@ import org.apache.ignite.IgniteTransactions; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static 
org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -45,12 +48,19 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCachePrimarySyncTest extends GridCommonAbstractTest { /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + private static final int SRVS = 4; /** */ - private static final int SRVS = 4; + private static final String ATOMIC_CACHE = "atomicCache"; + + /** */ + private static final String TX_CACHE = "txCache"; + + /** */ + private static final String MVCC_CACHE = "mvccCache"; /** */ private boolean clientMode; @@ -59,21 +69,22 @@ public class IgniteCachePrimarySyncTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); + CacheConfiguration ccfg1 = new CacheConfiguration<>(ATOMIC_CACHE) + .setAtomicityMode(ATOMIC) + .setBackups(2) + .setWriteSynchronizationMode(PRIMARY_SYNC); - CacheConfiguration ccfg1 = new CacheConfiguration<>(DEFAULT_CACHE_NAME); - ccfg1.setName("cache1"); - ccfg1.setAtomicityMode(ATOMIC); - ccfg1.setBackups(2); - ccfg1.setWriteSynchronizationMode(PRIMARY_SYNC); + CacheConfiguration ccfg2 = new CacheConfiguration<>(TX_CACHE) + .setAtomicityMode(TRANSACTIONAL) + .setBackups(2) + .setWriteSynchronizationMode(PRIMARY_SYNC); - CacheConfiguration ccfg2 = new CacheConfiguration<>(DEFAULT_CACHE_NAME); - ccfg2.setName("cache2"); - ccfg2.setAtomicityMode(TRANSACTIONAL); - ccfg2.setBackups(2); - ccfg2.setWriteSynchronizationMode(PRIMARY_SYNC); + CacheConfiguration ccfg3 = new CacheConfiguration<>(MVCC_CACHE) + .setAtomicityMode(TRANSACTIONAL_SNAPSHOT) + .setBackups(2) + .setWriteSynchronizationMode(PRIMARY_SYNC); - cfg.setCacheConfiguration(ccfg1, ccfg2); + cfg.setCacheConfiguration(ccfg1, ccfg2, ccfg3); cfg.setClientMode(clientMode); @@ -96,18 +107,25 @@ public 
class IgniteCachePrimarySyncTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPutGet() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10520", MvccFeatureChecker.forcedMvcc()); + Ignite ignite = ignite(SRVS); - checkPutGet(ignite.cache("cache1"), null, null, null); + checkPutGet(ignite.cache(ATOMIC_CACHE), null, null, null); + + checkPutGet(ignite.cache(TX_CACHE), null, null, null); + + checkPutGet(ignite.cache(MVCC_CACHE), null, null, null); - checkPutGet(ignite.cache("cache2"), null, null, null); + checkPutGet(ignite.cache(TX_CACHE), ignite.transactions(), OPTIMISTIC, REPEATABLE_READ); - checkPutGet(ignite.cache("cache2"), ignite.transactions(), OPTIMISTIC, REPEATABLE_READ); + checkPutGet(ignite.cache(TX_CACHE), ignite.transactions(), OPTIMISTIC, SERIALIZABLE); - checkPutGet(ignite.cache("cache2"), ignite.transactions(), OPTIMISTIC, SERIALIZABLE); + checkPutGet(ignite.cache(TX_CACHE), ignite.transactions(), PESSIMISTIC, READ_COMMITTED); - checkPutGet(ignite.cache("cache2"), ignite.transactions(), PESSIMISTIC, READ_COMMITTED); + checkPutGet(ignite.cache(MVCC_CACHE), ignite.transactions(), PESSIMISTIC, REPEATABLE_READ); } /** diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePutGetRestartAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePutGetRestartAbstractTest.java index 5c3265faea6c5..0650ea08fd16d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePutGetRestartAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCachePutGetRestartAbstractTest.java @@ -35,6 +35,9 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; +import 
org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -45,6 +48,7 @@ /** * Test for specific user scenario. */ +@RunWith(JUnit4.class) public abstract class IgniteCachePutGetRestartAbstractTest extends IgniteCacheAbstractTest { /** */ private static final int ENTRY_CNT = 1000; @@ -104,6 +108,7 @@ public abstract class IgniteCachePutGetRestartAbstractTest extends IgniteCacheAb /** * @throws Exception If failed. */ + @Test public void testTxPutGetRestart() throws Exception { int clientGrid = gridCount() - 1; @@ -240,4 +245,4 @@ private void updateCache(IgniteCache cache, IgniteTransactions log.error("Update failed: " + e, e); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheReadFromBackupTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheReadFromBackupTest.java index 2bb5fbba81e72..960e51a22b81d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheReadFromBackupTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheReadFromBackupTest.java @@ -43,23 +43,23 @@ import org.apache.ignite.internal.processors.cache.distributed.near.GridNearSingleGetRequest; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheReadFromBackupTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 4; @@ -71,8 +71,6 @@ public class IgniteCacheReadFromBackupTest extends GridCommonAbstractTest { cfg.setCommunicationSpi(commSpi); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -86,8 +84,25 @@ public class IgniteCacheReadFromBackupTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testGetFromBackupStoreReadThroughEnabled() throws Exception { - for (CacheConfiguration ccfg : cacheConfigurations()) { + checkGetFromBackupStoreReadThroughEnabled(cacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10274") + @Test + public void testMvccGetFromBackupStoreReadThroughEnabled() throws Exception { + checkGetFromBackupStoreReadThroughEnabled(mvccCacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + private void checkGetFromBackupStoreReadThroughEnabled(List> cacheCfgs) throws Exception { + for (CacheConfiguration ccfg : cacheCfgs) { ccfg.setCacheStoreFactory(new TestStoreFactory()); ccfg.setReadThrough(true); @@ -130,8 +145,25 @@ public void testGetFromBackupStoreReadThroughEnabled() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGetFromBackupStoreReadThroughDisabled() throws Exception { - for (CacheConfiguration ccfg : cacheConfigurations()) { + checkGetFromBackupStoreReadThroughDisabled(cacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10274") + @Test + public void testMvccGetFromBackupStoreReadThroughDisabled() throws Exception { + checkGetFromBackupStoreReadThroughDisabled(mvccCacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + private void checkGetFromBackupStoreReadThroughDisabled(List> cacheCfgs) throws Exception { + for (CacheConfiguration ccfg : cacheCfgs) { ccfg.setCacheStoreFactory(new TestStoreFactory()); ccfg.setReadThrough(false); @@ -158,8 +190,25 @@ public void testGetFromBackupStoreReadThroughDisabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetFromPrimaryPreloadInProgress() throws Exception { - for (final CacheConfiguration ccfg : cacheConfigurations()) { + checkGetFromPrimaryPreloadInProgress(cacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10274") + @Test + public void testMvccGetFromPrimaryPreloadInProgress() throws Exception { + checkGetFromPrimaryPreloadInProgress(mvccCacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + private void checkGetFromPrimaryPreloadInProgress(List> cacheCfgs) throws Exception { + for (final CacheConfiguration ccfg : cacheCfgs) { boolean near = (ccfg.getNearConfiguration() != null); log.info("Test cache [mode=" + ccfg.getCacheMode() + @@ -244,8 +293,26 @@ public void testGetFromPrimaryPreloadInProgress() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNoPrimaryReadPreloadFinished() throws Exception { - for (CacheConfiguration ccfg : cacheConfigurations()) { + checkNoPrimaryReadPreloadFinished(cacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10274") + @Test + public void testMvccNoPrimaryReadPreloadFinished() throws Exception { + checkNoPrimaryReadPreloadFinished(mvccCacheConfigurations()); + + } + + /** + * @throws Exception If failed. + */ + private void checkNoPrimaryReadPreloadFinished(List> cacheCfgs) throws Exception { + for (CacheConfiguration ccfg : cacheCfgs) { boolean near = (ccfg.getNearConfiguration() != null); log.info("Test cache [mode=" + ccfg.getCacheMode() + @@ -369,6 +436,21 @@ private List> cacheConfigurations() { return ccfgs; } + /** + * @return Cache configurations to test. + */ + private List> mvccCacheConfigurations() { + List> ccfgs = new ArrayList<>(); + + ccfgs.add(cacheConfiguration(REPLICATED, TRANSACTIONAL_SNAPSHOT, 0, false)); + + ccfgs.add(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, false)); + ccfgs.add(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 1, true)); + ccfgs.add(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, 2, false)); + + return ccfgs; + } + /** * @param cacheMode Cache mode. * @param atomicityMode Cache atomicity mode. 
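The hunks above repeat one migration pattern throughout this patch: each former test body becomes a private check* method, a @Test entry point calls it with the classic cache configurations, and a second, @Ignore'd entry point (tracking IGNITE-10274) calls it with the TRANSACTIONAL_SNAPSHOT configurations so unsupported MVCC modes are skipped rather than failing the run. A standalone, plain-JDK sketch of that delegate-and-skip idea follows; the names here are illustrative stand-ins, not Ignite test-framework APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the delegation pattern used in this patch:
// each public test entry point builds its own configuration list and
// hands it to one shared check method, which skips unsupported
// combinations instead of failing. "Mode" and isSupported() are
// stand-ins for Ignite's atomicity modes and MvccFeatureChecker.
public class DelegationSketch {
    enum Mode { TRANSACTIONAL, TRANSACTIONAL_SNAPSHOT }

    // Stand-in for MvccFeatureChecker.isSupported(...): pretend the
    // snapshot mode is not supported yet (cf. IGNITE-10274).
    static boolean isSupported(Mode mode) {
        return mode != Mode.TRANSACTIONAL_SNAPSHOT;
    }

    // Shared body: returns the modes actually exercised, silently
    // skipping unsupported ones, mirroring the check* helpers above.
    static List<Mode> check(List<Mode> modes) {
        List<Mode> run = new ArrayList<>();

        for (Mode m : modes) {
            if (!isSupported(m))
                continue; // Mode not supported.

            run.add(m);
        }

        return run;
    }

    public static void main(String[] args) {
        // Only TRANSACTIONAL survives the filter.
        System.out.println(check(List.of(Mode.TRANSACTIONAL, Mode.TRANSACTIONAL_SNAPSHOT)));
    }
}
```

The benefit of this shape over the old single-method loop is that the MVCC variant can be ignored independently (and later re-enabled) without touching the shared assertion logic.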
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheServerNodeConcurrentStart.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheServerNodeConcurrentStart.java index 0b5280d4c4392..f680758516845 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheServerNodeConcurrentStart.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheServerNodeConcurrentStart.java @@ -21,9 +21,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -32,10 +33,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheServerNodeConcurrentStart extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int ITERATIONS = 2; @@ -43,7 +42,6 @@ public class IgniteCacheServerNodeConcurrentStart extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinderCleanFrequency(getTestTimeout() * 2); 
((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); @@ -81,6 +79,7 @@ public class IgniteCacheServerNodeConcurrentStart extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testConcurrentStart() throws Exception { for (int i = 0; i < ITERATIONS; i++) { log.info("Iteration: " + i); @@ -98,4 +97,4 @@ public void testConcurrentStart() throws Exception { log.info("Iteration finished, time: " + (System.currentTimeMillis() - start) / 1000f); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSingleGetMessageTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSingleGetMessageTest.java index 974bcf21dc292..37d89ca27c864 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSingleGetMessageTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSingleGetMessageTest.java @@ -29,14 +29,16 @@ import org.apache.ignite.internal.TestRecordingCommunicationSpi; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearSingleGetRequest; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearSingleGetResponse; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -46,10 +48,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheSingleGetMessageTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRVS = 4; @@ -66,8 +66,6 @@ public class IgniteCacheSingleGetMessageTest extends GridCommonAbstractTest { cfg.setCommunicationSpi(commSpi); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -87,12 +85,27 @@ public class IgniteCacheSingleGetMessageTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSingleGetMessage() throws Exception { + checkSingleGetMessage(cacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7371") + @Test + public void testMvccSingleGetMessage() throws Exception { + checkSingleGetMessage(mvccCacheConfigurations()); + } + + /** + * @throws Exception If failed. + */ + public void checkSingleGetMessage(List> ccfgs) throws Exception { assertFalse(ignite(0).configuration().isClientMode()); assertTrue(ignite(SRVS).configuration().isClientMode()); - List> ccfgs = cacheConfigurations(); - for (int i = 0; i < ccfgs.size(); i++) { CacheConfiguration ccfg = ccfgs.get(i); @@ -281,6 +294,19 @@ private List> cacheConfigurations() { return ccfgs; } + /** + * @return Mvcc cache configurations to test. 
+ */ + private List> mvccCacheConfigurations() { + List> ccfgs = new ArrayList<>(); + + ccfgs.add(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, FULL_SYNC, 0)); + ccfgs.add(cacheConfiguration(PARTITIONED, TRANSACTIONAL_SNAPSHOT, FULL_SYNC, 1)); + ccfgs.add(cacheConfiguration(REPLICATED, TRANSACTIONAL_SNAPSHOT, FULL_SYNC, 0)); + + return ccfgs; + } + /** * @param cacheMode Cache mode. * @param atomicityMode Cache atomicity mode. @@ -297,7 +323,6 @@ private CacheConfiguration cacheConfiguration( ccfg.setCacheMode(cacheMode); ccfg.setAtomicityMode(atomicityMode); - ccfg.setAtomicityMode(TRANSACTIONAL); ccfg.setWriteSynchronizationMode(syncMode); if (cacheMode == PARTITIONED) diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSizeFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSizeFailoverTest.java index cd859506895e4..b80c31cfae4e0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSizeFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSizeFailoverTest.java @@ -26,11 +26,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ 
-39,16 +39,12 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheSizeFailoverTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -73,6 +69,7 @@ public class IgniteCacheSizeFailoverTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSize() throws Exception { startGrids(2); @@ -123,4 +120,4 @@ public void testSize() throws Exception { fut.get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSystemTransactionsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSystemTransactionsSelfTest.java index eb19873c816f5..a7646f816493f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSystemTransactionsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheSystemTransactionsSelfTest.java @@ -19,9 +19,8 @@ import java.util.Map; import org.apache.ignite.IgniteCache; -import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; import 
org.apache.ignite.internal.processors.cache.IgniteInternalCache; @@ -30,7 +29,13 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -41,19 +46,23 @@ /** * Tests that system transactions do not interact with user transactions. */ -public class IgniteCacheSystemTransactionsSelfTest extends GridCacheAbstractSelfTest { +@RunWith(JUnit4.class) +public class IgniteCacheSystemTransactionsSelfTest extends GridCommonAbstractTest { + /** */ + private static final int NODES_CNT = 4; + /** {@inheritDoc} */ - @Override protected int gridCount() { - return 4; + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + startGridsMultiThreaded(NODES_CNT); } /** {@inheritDoc} */ - @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { - CacheConfiguration ccfg = super.cacheConfiguration(igniteInstanceName); + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); - ccfg.setAtomicityMode(TRANSACTIONAL); - - return ccfg; + super.afterTestsStopped(); } /** {@inheritDoc} */ @@ -67,10 +76,19 @@ public class IgniteCacheSystemTransactionsSelfTest extends GridCacheAbstractSelf } } + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName) + 
.setCacheConfiguration(defaultCacheConfiguration().setAtomicityMode(TRANSACTIONAL)); + } + /** * @throws Exception If failed. */ + @Test public void testSystemTxInsideUserTx() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10473", MvccFeatureChecker.forcedMvcc()); + IgniteKernal ignite = (IgniteKernal)grid(0); IgniteCache jcache = ignite.cache(DEFAULT_CACHE_NAME); @@ -100,19 +118,22 @@ public void testSystemTxInsideUserTx() throws Exception { checkTransactionsCommitted(); - checkEntries(DEFAULT_CACHE_NAME, "1", "11", "2", "22", "3", null); - checkEntries(CU.UTILITY_CACHE_NAME, "1", null, "2", "2", "3", "3"); + checkEntries(DEFAULT_CACHE_NAME, "1", "11", "2", "22", "3", null); + checkEntries(CU.UTILITY_CACHE_NAME, "1", null, "2", "2", "3", "3"); } /** * @throws Exception If failed. */ + @Test public void testGridNearTxLocalDuplicateAsyncCommit() throws Exception { IgniteKernal ignite = (IgniteKernal)grid(0); IgniteInternalCache utilityCache = ignite.context().cache().utilityCache(); - try (GridNearTxLocal itx = utilityCache.txStartEx(OPTIMISTIC, SERIALIZABLE)) { + try (GridNearTxLocal itx = MvccFeatureChecker.forcedMvcc() ? + utilityCache.txStartEx(PESSIMISTIC, REPEATABLE_READ) : + utilityCache.txStartEx(OPTIMISTIC, SERIALIZABLE)) { utilityCache.put("1", "1"); itx.commitNearTxLocalAsync(); @@ -124,7 +145,7 @@ public void testGridNearTxLocalDuplicateAsyncCommit() throws Exception { * @throws Exception If failed. */ private void checkTransactionsCommitted() throws Exception { - for (int i = 0; i < gridCount(); i++) { + for (int i = 0; i < NODES_CNT; i++) { IgniteKernal kernal = (IgniteKernal)grid(i); IgniteTxManager tm = kernal.context().cache().context().tm(); @@ -149,7 +170,7 @@ private void checkTransactionsCommitted() throws Exception { * @throws Exception If failed. */ private void checkEntries(String cacheName, Object... 
vals) throws Exception { - for (int g = 0; g < gridCount(); g++) { + for (int g = 0; g < NODES_CNT; g++) { IgniteKernal kernal = (IgniteKernal)grid(g); GridCacheAdapter cache = kernal.context().cache().internalCache(cacheName); @@ -170,4 +191,4 @@ private void checkEntries(String cacheName, Object... vals) throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheThreadLocalTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheThreadLocalTxTest.java index c8eac20782fe5..6363a1f4e7e15 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheThreadLocalTxTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheThreadLocalTxTest.java @@ -26,23 +26,22 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.lang.IgniteFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheThreadLocalTxTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ 
-50,8 +49,6 @@ public class IgniteCacheThreadLocalTxTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -67,6 +64,7 @@ public class IgniteCacheThreadLocalTxTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSingleNode() throws Exception { threadLocalTx(startGrid(0)); } @@ -74,6 +72,7 @@ public void testSingleNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultiNode() throws Exception { startGridsMultiThreaded(4); @@ -106,6 +105,9 @@ private void threadLocalTx(Ignite node) throws Exception { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + for (boolean read : reads) { for (boolean write : writes) { for (int i = 0; i < endOps; i++) diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheTxIteratorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheTxIteratorSelfTest.java index 6a00ea4133ff3..8c58309de946c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheTxIteratorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCacheTxIteratorSelfTest.java @@ -27,20 +27,23 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; -import 
org.apache.ignite.internal.processors.cache.GridCacheContext; -import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; -import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils; import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.MvccFeatureChecker.Feature; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import javax.cache.Cache; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheTxIteratorSelfTest extends GridCommonAbstractTest { /** */ public static final String CACHE_NAME = "testCache"; @@ -87,6 +90,7 @@ private CacheConfiguration cacheConfiguration( /** * @throws Exception if failed. */ + @Test public void testModesSingleNode() throws Exception { checkModes(1); } @@ -94,6 +98,7 @@ public void testModesSingleNode() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testModesMultiNode() throws Exception { checkModes(3); } @@ -112,9 +117,9 @@ public void checkModes(int gridCnt) throws Exception { checkTxCache(CacheMode.PARTITIONED, atomMode, true, false); } - checkTxCache(CacheMode.PARTITIONED, atomMode, false, true); + checkTxCache(mode, atomMode, false, true); - checkTxCache(CacheMode.PARTITIONED, atomMode, false, false); + checkTxCache(mode, atomMode, false, false); } } } @@ -132,6 +137,13 @@ private void checkTxCache( boolean nearEnabled, boolean useEvicPlc ) throws Exception { + if (atomMode == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT) { + if (!MvccFeatureChecker.isSupported(mode) || + (nearEnabled && !MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) || + (useEvicPlc && !MvccFeatureChecker.isSupported(Feature.EVICTION))) + return; // Nothing to do. Mode is not supported. + } + final Ignite ignite = grid(0); final CacheConfiguration ccfg = cacheConfiguration( @@ -156,14 +168,11 @@ private void checkTxCache( for (TransactionIsolation iso : TransactionIsolation.values()) { for (TransactionConcurrency con : TransactionConcurrency.values()) { - try (Transaction transaction = ignite.transactions().txStart(con, iso)) { - //TODO: IGNITE-7187: Fix when ticket will be implemented. (Near cache) - //TODO: IGNITE-7956: Fix when ticket will be implemented. (Eviction) - if (((IgniteCacheProxy)cache).context().mvccEnabled() && - ((iso != TransactionIsolation.REPEATABLE_READ && con != TransactionConcurrency.PESSIMISTIC) - || nearEnabled || useEvicPlc)) - return; // Nothing to do. Mode is not supported. + if (atomMode == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT && + !MvccFeatureChecker.isSupported(con, iso)) + continue; // Mode not supported. 
+ try (Transaction transaction = ignite.transactions().txStart(con, iso)) { assertEquals(val, cache.get(key)); transaction.commit(); @@ -233,4 +242,4 @@ private TestClass(String data) { return S.toString(TestClass.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCrossCacheTxStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCrossCacheTxStoreSelfTest.java index 870ce675ec688..85d6b759877dc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCrossCacheTxStoreSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteCrossCacheTxStoreSelfTest.java @@ -37,13 +37,18 @@ import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.resources.CacheStoreSessionResource; import org.apache.ignite.resources.IgniteInstanceResource; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCrossCacheTxStoreSelfTest extends GridCommonAbstractTest { /** */ private static Map firstStores = new ConcurrentHashMap<>(); @@ -61,7 +66,7 @@ public class IgniteCrossCacheTxStoreSelfTest extends GridCommonAbstractTest { CacheConfiguration cfg3 = cacheConfiguration("cacheC", new SecondStoreFactory()); CacheConfiguration cfg4 = cacheConfiguration("cacheD", null); - cfg.setCacheConfiguration(cfg1, cfg2, cfg3, cfg4); + cfg.setCacheConfiguration(cfg4, cfg2, cfg3, cfg1); return cfg; } @@ -90,8 +95,12 @@ private CacheConfiguration cacheConfiguration(String cacheName, Factory { private Ignite ignite; /** {@inheritDoc} 
*/ - @Override public CacheStore create() { + @Override public synchronized CacheStore create() { String igniteInstanceName = ignite.name(); - CacheStore store = firstStores.get(igniteInstanceName); - - if (store == null) - store = F.addIfAbsent(firstStores, igniteInstanceName, new TestStore()); - - return store; + return firstStores.computeIfAbsent(igniteInstanceName, (k) -> new TestStore()); } } @@ -384,12 +391,7 @@ private static class SecondStoreFactory implements Factory { @Override public CacheStore create() { String igniteInstanceName = ignite.name(); - CacheStore store = secondStores.get(igniteInstanceName); - - if (store == null) - store = F.addIfAbsent(secondStores, igniteInstanceName, new TestStore()); - - return store; + return secondStores.computeIfAbsent(igniteInstanceName, (k) -> new TestStore()); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteMvccTxTimeoutAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteMvccTxTimeoutAbstractTest.java new file mode 100644 index 0000000000000..f2977bcae96da --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteMvccTxTimeoutAbstractTest.java @@ -0,0 +1,139 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed; + +import java.util.Random; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.TransactionConfiguration; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import org.apache.ignite.transactions.TransactionTimeoutException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Simple cache test. + */ +@RunWith(JUnit4.class) +public class IgniteMvccTxTimeoutAbstractTest extends GridCommonAbstractTest { + /** Random number generator. */ + private static final Random RAND = new Random(); + + /** Grid count. */ + private static final int GRID_COUNT = 2; + + /** Transaction timeout. */ + private static final long TIMEOUT = 50; + + /** + * @throws Exception If failed. + */ + @Override protected void beforeTestsStarted() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7388"); + + startGridsMultiThreaded(GRID_COUNT, true); + } + + /** + * @throws Exception If failed. 
+ */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration c = super.getConfiguration(igniteInstanceName); + + TransactionConfiguration txCfg = c.getTransactionConfiguration(); + + txCfg.setDefaultTxTimeout(TIMEOUT); + + return c; + } + + /** + * @param i Grid index. + * @return Cache. + */ + @Override protected IgniteCache jcache(int i) { + return grid(i).cache(DEFAULT_CACHE_NAME); + } + + /** + * @throws IgniteCheckedException If test failed. + */ + @Test + public void testPessimisticRepeatableRead() throws Exception { + checkTransactionTimeout(PESSIMISTIC, REPEATABLE_READ); + } + + /** + * @param concurrency Concurrency. + * @param isolation Isolation. + * @throws IgniteCheckedException If test failed. + */ + private void checkTransactionTimeout(TransactionConcurrency concurrency, + TransactionIsolation isolation) throws Exception { + int idx = RAND.nextInt(GRID_COUNT); + + IgniteCache cache = jcache(idx); + + Transaction tx = ignite(idx).transactions().txStart(concurrency, isolation, TIMEOUT, 0); + + try { + info("Storing value in cache [key=1, val=1]"); + + cache.put(1, "1"); + + long sleep = TIMEOUT * 2; + + info("Going to sleep for (ms): " + sleep); + + Thread.sleep(sleep); + + info("Storing value in cache [key=1, val=2]"); + + cache.put(1, "2"); + + info("Committing transaction: " + tx); + + tx.commit(); + + assert false : "Timeout never happened for transaction: " + tx; + } + catch (Exception e) { + if (!(X.hasCause(e, TransactionTimeoutException.class))) + throw e; + + info("Received expected timeout exception [msg=" + e.getMessage() + ", tx=" + tx + ']'); + } + finally { + tx.close(); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteNoClassOnServerAbstractTest.java 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteNoClassOnServerAbstractTest.java index 1d92376a3a935..68034498e21a3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteNoClassOnServerAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteNoClassOnServerAbstractTest.java @@ -30,6 +30,7 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; import static java.util.concurrent.TimeUnit.SECONDS; @@ -73,6 +74,7 @@ private IgniteConfiguration createConfiguration() { /** * @throws Exception If failed. */ + @Test public final void testNoClassOnServerNode() throws Exception { info("Run test with client: " + clientClassName()); @@ -132,4 +134,4 @@ public final void testNoClassOnServerNode() throws Exception { } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteOptimisticTxSuspendResumeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteOptimisticTxSuspendResumeTest.java index 4b613c2cd355c..0b05a08659000 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteOptimisticTxSuspendResumeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteOptimisticTxSuspendResumeTest.java @@ -19,17 +19,21 @@ import java.util.ArrayList; import java.util.Arrays; +import java.util.HashMap; +import java.util.IdentityHashMap; import java.util.List; +import java.util.Map; import java.util.concurrent.Callable; +import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.Ignite; 
import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteException; import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.internal.util.typedef.CI2; import org.apache.ignite.internal.util.typedef.PA; @@ -40,6 +44,9 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionTimeoutException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -54,6 +61,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteOptimisticTxSuspendResumeTest extends GridCommonAbstractTest { /** Transaction timeout. */ private static final long TX_TIMEOUT = 200; @@ -62,7 +70,13 @@ public class IgniteOptimisticTxSuspendResumeTest extends GridCommonAbstractTest private static final int FUT_TIMEOUT = 5000; /** */ - private boolean client = false; + private static final int CLIENT_CNT = 2; + + /** */ + private static final int SERVER_CNT = 4; + + /** */ + private static final int GRID_CNT = CLIENT_CNT + SERVER_CNT; /** * List of closures to execute transaction operation that prohibited in suspended state. 
@@ -109,6 +123,10 @@ public class IgniteOptimisticTxSuspendResumeTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + int idx = getTestIgniteInstanceIndex(igniteInstanceName); + + boolean client = idx >= SERVER_CNT && idx < GRID_CNT; + cfg.setClientMode(client); return cfg; @@ -118,16 +136,21 @@ public class IgniteOptimisticTxSuspendResumeTest extends GridCommonAbstractTest @Override protected void beforeTestsStarted() throws Exception { super.beforeTestsStarted(); - startGrids(serversNumber()); + startGridsMultiThreaded(gridCount()); + } - if (serversNumber() > 1) { - client = true; + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); - startGrid(serversNumber()); + Ignite client = ignite(gridCount() - 1); - startGrid(serversNumber() + 1); + assertTrue(client.cluster().localNode().isClient()); + + for (CacheConfiguration ccfg : cacheConfigurations()) { + grid(0).createCache(ccfg); - client = false; + client.createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); } awaitPartitionMapExchange(); @@ -138,11 +161,19 @@ public class IgniteOptimisticTxSuspendResumeTest extends GridCommonAbstractTest stopAllGrids(true); } + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + for (CacheConfiguration ccfg : cacheConfigurations()) + ignite(0).destroyCache(ccfg.getName()); + + super.afterTest(); + } + /** * @return Number of server nodes. */ - protected int serversNumber() { - return 1; + protected int gridCount() { + return GRID_CNT; } /** @@ -150,6 +181,7 @@ protected int serversNumber() { * * @throws Exception If failed. 
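The hunk above replaces the mutable `client` flag with a rule that derives client mode from the test instance index. A minimal plain-Java sketch of that index rule, using the same constants the patch introduces (the `isClient` helper name is hypothetical, added only for illustration):

```java
public class ClientIndexRule {
    // Mirrors the constants introduced in the patch.
    static final int SERVER_CNT = 4;
    static final int CLIENT_CNT = 2;
    static final int GRID_CNT = SERVER_CNT + CLIENT_CNT;

    /** A grid index maps to a client node iff it falls in [SERVER_CNT, GRID_CNT). */
    static boolean isClient(int idx) {
        return idx >= SERVER_CNT && idx < GRID_CNT;
    }

    public static void main(String[] args) {
        // Indexes 0..3 are servers, 4..5 are clients.
        for (int i = 0; i < SERVER_CNT; i++)
            if (isClient(i)) throw new AssertionError("server index " + i);
        for (int i = SERVER_CNT; i < GRID_CNT; i++)
            if (!isClient(i)) throw new AssertionError("client index " + i);
        System.out.println("ok");
    }
}
```

Deriving the role from the index keeps `getConfiguration` stateless, so grids can be started concurrently via `startGridsMultiThreaded` without racing on a shared flag.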
*/ + @Test public void testResumeTxInAnotherThread() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -211,12 +243,13 @@ public void testResumeTxInAnotherThread() throws Exception { * * @throws Exception If failed. */ + @Test public void testCrossCacheTxInAnotherThread() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { for (TransactionIsolation isolation : TransactionIsolation.values()) { - final IgniteCache otherCache = - ignite.getOrCreateCache(cacheConfiguration(PARTITIONED, 0, false).setName("otherCache")); + final IgniteCache otherCache = ignite.getOrCreateCache( + cacheConfiguration("otherCache", PARTITIONED, 0, false)); final Transaction tx = ignite.transactions().txStart(OPTIMISTIC, isolation); @@ -271,6 +304,7 @@ public void testCrossCacheTxInAnotherThread() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxRollback() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -321,6 +355,7 @@ public void testTxRollback() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultiTxSuspendResume() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -365,6 +400,7 @@ public void testMultiTxSuspendResume() throws Exception { * * @throws Exception If failed. */ + @Test public void testOpsProhibitedOnSuspendedTxFromOtherThread() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -397,6 +433,7 @@ public void testOpsProhibitedOnSuspendedTxFromOtherThread() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testOpsProhibitedOnSuspendedTx() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -425,6 +462,7 @@ public void testOpsProhibitedOnSuspendedTx() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxTimeoutOnResumed() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -435,10 +473,7 @@ public void testTxTimeoutOnResumed() throws Exception { tx.suspend(); - long start = U.currentTimeMillis(); - - while(TX_TIMEOUT >= U.currentTimeMillis() - start) - Thread.sleep(TX_TIMEOUT * 2); + U.sleep(TX_TIMEOUT * 2); GridTestUtils.assertThrowsWithCause(new Callable() { @Override public Object call() throws Exception { @@ -467,6 +502,7 @@ public void testTxTimeoutOnResumed() throws Exception { * * @throws Exception If failed. */ + @Test public void testTxTimeoutOnSuspend() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -475,10 +511,7 @@ public void testTxTimeoutOnSuspend() throws Exception { cache.put(1, 1); - long start = U.currentTimeMillis(); - - while(TX_TIMEOUT >= U.currentTimeMillis() - start) - Thread.sleep(TX_TIMEOUT * 2); + U.sleep(TX_TIMEOUT * 2); GridTestUtils.assertThrowsWithCause(new Callable() { @Override public Object call() throws Exception { @@ -510,6 +543,7 @@ public void testTxTimeoutOnSuspend() throws Exception { * * @throws Exception If failed. */ + @Test public void testSuspendTxAndStartNew() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -550,6 +584,7 @@ public void testSuspendTxAndStartNew() throws Exception { * * @throws Exception If failed. 
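The two timeout hunks above collapse a busy-wait loop into a single `U.sleep(TX_TIMEOUT * 2)`: the old loop already slept past the timeout on its first iteration, so one sleep of twice the timeout is equivalent and clearer. A standalone sketch of the idea, with plain `Thread.sleep` standing in for Ignite's `U.sleep`:

```java
public class TimeoutWait {
    /** Same value as the test's transaction timeout. */
    static final long TX_TIMEOUT = 200;

    /** Sleeps long enough that a transaction started with TX_TIMEOUT must have expired. */
    static long waitPastTimeout() {
        long start = System.currentTimeMillis();

        try {
            // Single sleep replaces the old while-loop over elapsed time.
            Thread.sleep(TX_TIMEOUT * 2);
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        long elapsed = waitPastTimeout();

        // The elapsed time strictly exceeds the transaction timeout.
        if (elapsed <= TX_TIMEOUT)
            throw new AssertionError("slept only " + elapsed + " ms");
        System.out.println("ok");
    }
}
```

After the sleep returns, the suspended transaction is guaranteed to be past its timeout, which is exactly what the subsequent `assertThrowsWithCause` checks rely on.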
*/ + @Test public void testSuspendTxAndStartNewWithoutCommit() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -598,34 +633,94 @@ public void testSuspendTxAndStartNewWithoutCommit() throws Exception { * * @throws Exception If failed. */ + @Test public void testSuspendTxAndResumeAfterTopologyChange() throws Exception { - executeTestForAllCaches(new CI2Exc>() { - @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { + Ignite srv = ignite(ThreadLocalRandom.current().nextInt(SERVER_CNT)); + Ignite client = ignite(SERVER_CNT); + Ignite clientNear = ignite(SERVER_CNT + 1); + + Map>> cacheKeys = generateKeys(srv, TransactionIsolation.values().length); + + doCheckSuspendTxAndResume(srv, cacheKeys); + doCheckSuspendTxAndResume(client, cacheKeys); + doCheckSuspendTxAndResume(clientNear, cacheKeys); + } + + /** + * @param node Ignite instance. + * @param cacheKeys Different key types mapped to cache name. + * @throws Exception If failed.

+ */ + private void doCheckSuspendTxAndResume(Ignite node, Map>> cacheKeys) throws Exception { + ClusterNode locNode = node.cluster().localNode(); + + log.info("Run test for node [node=" + locNode.id() + ", client=" + locNode.isClient() + ']'); + + Map, Map> cacheTxMap = new IdentityHashMap<>(); + + for (Map.Entry>> cacheKeysEntry : cacheKeys.entrySet()) { + String cacheName = cacheKeysEntry.getKey(); + + IgniteCache cache = node.cache(cacheName); + + Map suspendedTxs = new IdentityHashMap<>(); + + for (List keysList : cacheKeysEntry.getValue()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { - Transaction tx = ignite.transactions().txStart(OPTIMISTIC, isolation); + Transaction tx = node.transactions().txStart(OPTIMISTIC, isolation); - cache.put(1, 1); + int key = keysList.get(isolation.ordinal()); + + cache.put(key, key); tx.suspend(); - assertEquals(SUSPENDED, tx.state()); + suspendedTxs.put(tx, key); - try (IgniteEx g = startGrid(serversNumber() + 3)) { - tx.resume(); + String msg = "node=" + node.cluster().localNode() + + ", cache=" + cacheName + ", isolation=" + isolation + ", key=" + key; - assertEquals(ACTIVE, tx.state()); + assertEquals(msg, SUSPENDED, tx.state()); + } + } - assertEquals(1, (int)cache.get(1)); + cacheTxMap.put(cache, suspendedTxs); + } - tx.commit(); + int newNodeIdx = gridCount(); - assertEquals(1, (int)cache.get(1)); - } + startGrid(newNodeIdx); - cache.removeAll(); + try { + for (Map.Entry, Map> entry : cacheTxMap.entrySet()) { + IgniteCache cache = entry.getKey(); + + for (Map.Entry suspendedTx : entry.getValue().entrySet()) { + Transaction tx = suspendedTx.getKey(); + + Integer key = suspendedTx.getValue(); + + tx.resume(); + + String msg = "node=" + node.cluster().localNode() + + ", cache=" + cache.getName() + ", isolation=" + tx.isolation() + ", key=" + key; + + assertEquals(msg, ACTIVE, tx.state()); + + assertEquals(msg, key, cache.get(key)); + + tx.commit(); + + assertEquals(msg, key, cache.get(key)); } } - 
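The refactored `doCheckSuspendTxAndResume` above tracks suspended transactions in `IdentityHashMap`s. A plausible reason: each loop iteration produces distinct `Transaction` and `IgniteCache` objects that must stay separate even if a value-based `equals` would consider two of them equal. The class below is an illustrative stand-in (not Ignite API) demonstrating that distinction:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityMapDemo {
    /** A handle whose equals() is value-based, like two proxies for the same cache. */
    static final class Handle {
        final String name;

        Handle(String name) { this.name = name; }

        @Override public boolean equals(Object o) {
            return o instanceof Handle && ((Handle)o).name.equals(name);
        }

        @Override public int hashCode() { return name.hashCode(); }
    }

    /** Returns {HashMap size, IdentityHashMap size} after inserting two value-equal keys. */
    static int[] compareSizes() {
        Handle a = new Handle("cache1");
        Handle b = new Handle("cache1"); // equal by value, distinct by reference

        Map<Handle, Integer> byValue = new HashMap<>();
        byValue.put(a, 1);
        byValue.put(b, 2); // overwrites: HashMap collapses value-equal keys

        Map<Handle, Integer> byRef = new IdentityHashMap<>();
        byRef.put(a, 1);
        byRef.put(b, 2); // kept separate: IdentityHashMap compares with ==

        return new int[] {byValue.size(), byRef.size()};
    }

    public static void main(String[] args) {
        int[] sizes = compareSizes();

        if (sizes[0] != 1 || sizes[1] != 2)
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

With reference identity, every suspended transaction keeps its own map entry, so the later resume/commit pass visits each one exactly once.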
}); + } + finally { + stopGrid(newNodeIdx); + + for (IgniteCache cache : cacheTxMap.keySet()) + cache.removeAll(); + } } /** @@ -633,6 +728,7 @@ public void testSuspendTxAndResumeAfterTopologyChange() throws Exception { * * @throws Exception If failed. */ + @Test public void testResumeActiveTx() throws Exception { executeTestForAllCaches(new CI2Exc>() { @Override public void applyx(Ignite ignite, final IgniteCache cache) throws Exception { @@ -666,10 +762,10 @@ public void testResumeActiveTx() throws Exception { private List> cacheConfigurations() { List> cfgs = new ArrayList<>(); - cfgs.add(cacheConfiguration(PARTITIONED, 0, false)); - cfgs.add(cacheConfiguration(PARTITIONED, 1, false)); - cfgs.add(cacheConfiguration(PARTITIONED, 1, true)); - cfgs.add(cacheConfiguration(REPLICATED, 0, false)); + cfgs.add(cacheConfiguration("cache1", PARTITIONED, 0, false)); + cfgs.add(cacheConfiguration("cache2", PARTITIONED, 1, false)); + cfgs.add(cacheConfiguration("cache3", PARTITIONED, 1, true)); + cfgs.add(cacheConfiguration("cache4", REPLICATED, 0, false)); return cfgs; } @@ -681,10 +777,11 @@ private List> cacheConfigurations() { * @return Cache configuration. */ private CacheConfiguration cacheConfiguration( + String name, CacheMode cacheMode, int backups, boolean nearCache) { - CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); + CacheConfiguration ccfg = new CacheConfiguration<>(name); ccfg.setCacheMode(cacheMode); ccfg.setAtomicityMode(TRANSACTIONAL); @@ -701,37 +798,56 @@ private CacheConfiguration cacheConfiguration( /** * @param c Closure. - * @throws Exception If failed. 
*/ - private void executeTestForAllCaches(CI2> c) throws Exception { - for (CacheConfiguration ccfg : cacheConfigurations()) { - ignite(0).createCache(ccfg); + private void executeTestForAllCaches(CI2> c) { + for (int i = 0; i < gridCount(); i++) { + Ignite ignite = ignite(i); - log.info("Run test for cache [cache=" + ccfg.getCacheMode() + - ", backups=" + ccfg.getBackups() + - ", near=" + (ccfg.getNearConfiguration() != null) + "]"); + ClusterNode locNode = ignite.cluster().localNode(); - awaitPartitionMapExchange(); + log.info("Run test for node [node=" + locNode.id() + ", client=" + locNode.isClient() + ']'); - int srvNum = serversNumber(); - if (serversNumber() > 1) { - ignite(serversNumber() + 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); - srvNum += 2; - } + for (CacheConfiguration ccfg : cacheConfigurations()) + c.apply(ignite, ignite.cache(ccfg.getName())); + } + } - try { - for (int i = 0; i < srvNum; i++) { - Ignite ignite = ignite(i); + /** + * Generates list of keys (primary, backup and neither primary nor backup). + * + * @param ignite Ignite instance. + * @param keysCnt The number of keys generated for each type of key. + * @return List of different keys mapped to cache name. + */ + private Map>> generateKeys(Ignite ignite, int keysCnt) { + Map>> cacheKeys = new HashMap<>(); - log.info("Run test for node [node=" + i + ", client=" + ignite.configuration().isClientMode() + ']'); + for (CacheConfiguration cfg : cacheConfigurations()) { + String cacheName = cfg.getName(); - c.apply(ignite, ignite.cache(ccfg.getName())); - } - } - finally { - ignite(0).destroyCache(ccfg.getName()); + IgniteCache cache = ignite.cache(cacheName); + + List> keys = new ArrayList<>(); + + // Generate different keys: 0 - primary, 1 - backup, 2 - neither primary nor backup. 
+ for (int type = 0; type < 3; type++) { + if (type == 1 && cfg.getCacheMode() == PARTITIONED && cfg.getBackups() == 0) + continue; + + if (type == 2 && cfg.getCacheMode() == REPLICATED) + continue; + + List keys0 = findKeys(cache, keysCnt, type * 100_000, type); + + assertEquals(cacheName, keysCnt, keys0.size()); + + keys.add(keys0); } + + cacheKeys.put(cacheName, keys); } + + return cacheKeys; } /** @@ -750,7 +866,7 @@ public static abstract class CI2Exc implements CI2 { */ public abstract void applyx(E1 e1, E2 e2) throws Exception; - /** {@inheritdoc} */ + /** {@inheritDoc} */ @Override public void apply(E1 e1, E2 e2) { try { applyx(e1, e2); @@ -775,7 +891,7 @@ public static abstract class CI1Exc implements CI1 { */ public abstract void applyx(T o) throws Exception; - /** {@inheritdoc} */ + /** {@inheritDoc} */ @Override public void apply(T o) { try { applyx(o); @@ -797,7 +913,7 @@ public static abstract class RunnableX implements Runnable { */ public abstract void runx() throws Exception; - /** {@inheritdoc} */ + /** {@inheritDoc} */ @Override public void run() { try { runx(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgnitePessimisticTxSuspendResumeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgnitePessimisticTxSuspendResumeTest.java index 57a1470010578..81ec3fe4ca18b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgnitePessimisticTxSuspendResumeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgnitePessimisticTxSuspendResumeTest.java @@ -24,16 +24,21 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; 
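The key-generation loop above skips key types that cannot exist for a given cache configuration: backup keys when a PARTITIONED cache has no backups, and "neither primary nor backup" keys for REPLICATED caches (every node owns every key). A plain-Java sketch of those skip rules (the `typeApplies` helper name is hypothetical):

```java
public class KeyTypeFilter {
    enum CacheMode { PARTITIONED, REPLICATED }

    /**
     * Mirrors the skip rules in generateKeys(): key type 0 = primary,
     * 1 = backup, 2 = neither primary nor backup.
     */
    static boolean typeApplies(int type, CacheMode mode, int backups) {
        if (type == 1 && mode == CacheMode.PARTITIONED && backups == 0)
            return false; // no backups -> no backup keys exist

        if (type == 2 && mode == CacheMode.REPLICATED)
            return false; // replicated cache: every node owns every key

        return true;
    }

    public static void main(String[] args) {
        if (typeApplies(1, CacheMode.PARTITIONED, 0)) throw new AssertionError();
        if (!typeApplies(1, CacheMode.PARTITIONED, 1)) throw new AssertionError();
        if (typeApplies(2, CacheMode.REPLICATED, 0)) throw new AssertionError();
        if (!typeApplies(0, CacheMode.REPLICATED, 0)) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The `type * 100_000` offset in the patched code then keeps the key ranges for the three types disjoint, so a primary key can never collide with a backup key.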
import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class IgnitePessimisticTxSuspendResumeTest extends GridCommonAbstractTest { /** * Creates new cache configuration. @@ -63,6 +68,7 @@ protected CacheConfiguration getCacheConfiguration() { * * @throws Exception If failed. */ + @Test public void testSuspendPessimisticTx() throws Exception { try (Ignite g = startGrid()) { IgniteCache cache = jcache(); @@ -70,6 +76,10 @@ public void testSuspendPessimisticTx() throws Exception { IgniteTransactions txs = g.transactions(); for (TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && + !MvccFeatureChecker.isSupported(TransactionConcurrency.PESSIMISTIC, isolation)) + continue; + final Transaction tx = txs.txStart(TransactionConcurrency.PESSIMISTIC, isolation); cache.put(1, "1"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteRejectConnectOnNodeStopTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteRejectConnectOnNodeStopTest.java index 97d685f225357..bd4278973b8fd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteRejectConnectOnNodeStopTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteRejectConnectOnNodeStopTest.java @@ -31,20 +31,19 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MINUTES; /** * Sanity test to check that node starts to reject connections when stop procedure started. */ +@RunWith(JUnit4.class) public class IgniteRejectConnectOnNodeStopTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -62,7 +61,7 @@ public class IgniteRejectConnectOnNodeStopTest extends GridCommonAbstractTest { discoSpi.setReconnectCount(2); discoSpi.setAckTimeout(30_000); discoSpi.setSocketTimeout(30_000); - discoSpi.setIpFinder(IP_FINDER); + discoSpi.setIpFinder(sharedStaticIpFinder); TcpCommunicationSpi commSpi = (TcpCommunicationSpi)cfg.getCommunicationSpi(); @@ -89,6 +88,7 @@ public class IgniteRejectConnectOnNodeStopTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testNodeStop() throws Exception { Ignite srv = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCachePrimarySyncTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCachePrimarySyncTest.java index bdf0b12dee21d..cfda1c2ec50cb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCachePrimarySyncTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCachePrimarySyncTest.java @@ -54,15 +54,17 @@ import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.MvccFeatureChecker.Feature; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_ASYNC; @@ -72,10 +74,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteTxCachePrimarySyncTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int 
SRVS = 4; @@ -92,8 +92,6 @@ public class IgniteTxCachePrimarySyncTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(clientMode); TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi(); @@ -133,6 +131,7 @@ public class IgniteTxCachePrimarySyncTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSingleKeyCommitFromPrimary() throws Exception { singleKeyCommitFromPrimary(cacheConfiguration(DEFAULT_CACHE_NAME, PRIMARY_SYNC, 1, true, false)); @@ -148,6 +147,16 @@ public void testSingleKeyCommitFromPrimary() throws Exception { * @throws Exception If failed. */ private void singleKeyCommitFromPrimary(CacheConfiguration ccfg) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + if (ccfg.getCacheStoreFactory() != null && + !MvccFeatureChecker.isSupported(Feature.CACHE_STORE)) + return; + + if (ccfg.getNearConfiguration() != null && + !MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) + return; + } + Ignite ignite = ignite(0); IgniteCache cache = ignite.createCache(ccfg); @@ -164,6 +173,9 @@ private void singleKeyCommitFromPrimary(CacheConfiguration ccfg) for (final TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (final TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + singleKeyCommitFromPrimary(node, ccfg, new IgniteBiInClosure>() { @Override public void apply(Integer key, IgniteCache cache) { Ignite ignite = cache.unwrap(Ignite.class); @@ -246,6 +258,7 @@ private void singleKeyCommitFromPrimary( /** * @throws Exception If failed. 
*/ + @Test public void testSingleKeyPrimaryNodeFail1() throws Exception { singleKeyPrimaryNodeLeft(cacheConfiguration(DEFAULT_CACHE_NAME, PRIMARY_SYNC, 1, true, false)); @@ -255,6 +268,7 @@ public void testSingleKeyPrimaryNodeFail1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingleKeyPrimaryNodeFail2() throws Exception { singleKeyPrimaryNodeLeft(cacheConfiguration(DEFAULT_CACHE_NAME, PRIMARY_SYNC, 2, true, false)); @@ -266,12 +280,19 @@ public void testSingleKeyPrimaryNodeFail2() throws Exception { * @throws Exception If failed. */ private void singleKeyPrimaryNodeLeft(CacheConfiguration ccfg) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + if (ccfg.getCacheStoreFactory() != null && + !MvccFeatureChecker.isSupported(Feature.CACHE_STORE)) + return; + } + Ignite ignite = ignite(0); IgniteCache cache = ignite.createCache(ccfg); try { - ignite(NODES - 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) + ignite(NODES - 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); for (int i = 0; i < NODES; i++) { Ignite node = ignite(i); @@ -284,6 +305,9 @@ private void singleKeyPrimaryNodeLeft(CacheConfiguration ccfg) t for (final TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (final TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + singleKeyPrimaryNodeLeft(node, ccfg, new IgniteBiInClosure>() { @Override public void apply(Integer key, IgniteCache cache) { Ignite ignite = cache.unwrap(Ignite.class); @@ -371,6 +395,7 @@ private void singleKeyPrimaryNodeLeft( /** * @throws Exception If failed. 
*/ + @Test public void testSingleKeyCommit() throws Exception { singleKeyCommit(cacheConfiguration(DEFAULT_CACHE_NAME, PRIMARY_SYNC, 1, true, false)); @@ -386,12 +411,23 @@ public void testSingleKeyCommit() throws Exception { * @throws Exception If failed. */ private void singleKeyCommit(CacheConfiguration ccfg) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + if (ccfg.getCacheStoreFactory() != null && + !MvccFeatureChecker.isSupported(Feature.CACHE_STORE)) + return; + + if (ccfg.getNearConfiguration() != null && + !MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) + return; + } + Ignite ignite = ignite(0); IgniteCache cache = ignite.createCache(ccfg); try { - ignite(NODES - 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) + ignite(NODES - 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); for (int i = 1; i < NODES; i++) { Ignite node = ignite(i); @@ -406,6 +442,9 @@ private void singleKeyCommit(CacheConfiguration ccfg) throws Exc for (final TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (final TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + singleKeyCommit(node, ccfg, new IgniteBiInClosure>() { @Override public void apply(Integer key, IgniteCache cache) { Ignite ignite = cache.unwrap(Ignite.class); @@ -513,6 +552,7 @@ private void singleKeyCommit( /** * @throws Exception If failed. */ + @Test public void testWaitPrimaryResponse() throws Exception { checkWaitPrimaryResponse(cacheConfiguration(DEFAULT_CACHE_NAME, PRIMARY_SYNC, 1, true, false)); @@ -528,12 +568,23 @@ public void testWaitPrimaryResponse() throws Exception { * @throws Exception If failed. 
*/ private void checkWaitPrimaryResponse(CacheConfiguration ccfg) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + if (ccfg.getCacheStoreFactory() != null && + !MvccFeatureChecker.isSupported(Feature.CACHE_STORE)) + return; + + if (ccfg.getNearConfiguration() != null && + !MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) + return; + } + Ignite ignite = ignite(0); IgniteCache cache = ignite.createCache(ccfg); try { - ignite(NODES - 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) + ignite(NODES - 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); for (int i = 1; i < NODES; i++) { Ignite node = ignite(i); @@ -561,6 +612,9 @@ private void checkWaitPrimaryResponse(CacheConfiguration ccfg) t for (final TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (final TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + checkWaitPrimaryResponse(node, ccfg, new IgniteBiInClosure>() { @Override public void apply(Integer key, IgniteCache cache) { Ignite ignite = cache.unwrap(Ignite.class); @@ -651,7 +705,11 @@ private void checkWaitPrimaryResponse( /** * @throws Exception If failed. */ + @Test public void testOnePhaseMessages() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + return; // Not supported. Commit flow differs for Mvcc mode. 
+ checkOnePhaseMessages(cacheConfiguration(DEFAULT_CACHE_NAME, PRIMARY_SYNC, 1, false, false)); } @@ -678,6 +736,9 @@ private void checkOnePhaseMessages(CacheConfiguration ccfg) thro for (final TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (final TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + checkOnePhaseMessages(node, ccfg, new IgniteBiInClosure>() { @Override public void apply(Integer key, IgniteCache cache) { Ignite ignite = cache.unwrap(Ignite.class); @@ -754,18 +815,19 @@ private void checkOnePhaseMessages( /** * @throws Exception If failed. */ + @Test public void testTxSyncMode() throws Exception { Ignite ignite = ignite(0); List> caches = new ArrayList<>(); try { - caches.add(createCache(ignite, cacheConfiguration("fullSync1", FULL_SYNC, 1, false, false), true)); - caches.add(createCache(ignite, cacheConfiguration("fullSync2", FULL_SYNC, 1, false, false), true)); - caches.add(createCache(ignite, cacheConfiguration("fullAsync1", FULL_ASYNC, 1, false, false), true)); - caches.add(createCache(ignite, cacheConfiguration("fullAsync2", FULL_ASYNC, 1, false, false), true)); - caches.add(createCache(ignite, cacheConfiguration("primarySync1", PRIMARY_SYNC, 1, false, false), true)); - caches.add(createCache(ignite, cacheConfiguration("primarySync2", PRIMARY_SYNC, 1, false, false), true)); + caches.add(createCache(ignite, cacheConfiguration("fullSync1", FULL_SYNC, 1, false, false))); + caches.add(createCache(ignite, cacheConfiguration("fullSync2", FULL_SYNC, 1, false, false))); + caches.add(createCache(ignite, cacheConfiguration("fullAsync1", FULL_ASYNC, 1, false, false))); + caches.add(createCache(ignite, cacheConfiguration("fullAsync2", FULL_ASYNC, 1, false, false))); + caches.add(createCache(ignite, cacheConfiguration("primarySync1", PRIMARY_SYNC, 1, false, false))); + caches.add(createCache(ignite, 
cacheConfiguration("primarySync2", PRIMARY_SYNC, 1, false, false))); for (int i = 0; i < NODES; i++) { checkTxSyncMode(ignite(i), true); @@ -805,7 +867,8 @@ private void waitKeyRemoved(final String cacheName, final Object key) throws Exc * @param key Cache key. * @throws Exception If failed. */ - private void waitKeyUpdated(Ignite ignite, int expNodes, final String cacheName, final Object key) throws Exception { + private void waitKeyUpdated(Ignite ignite, int expNodes, final String cacheName, + final Object key) throws Exception { Affinity aff = ignite.affinity(cacheName); final Collection nodes = aff.mapKeyToPrimaryAndBackups(key); @@ -834,14 +897,12 @@ private void waitKeyUpdated(Ignite ignite, int expNodes, final String cacheName, /** * @param ignite Node. * @param ccfg Cache configuration. - * @param nearCache If {@code true} creates near cache on one of client nodes. * @return Created cache. */ - private IgniteCache createCache(Ignite ignite, CacheConfiguration ccfg, - boolean nearCache) { + private IgniteCache createCache(Ignite ignite, CacheConfiguration ccfg) { IgniteCache cache = ignite.createCache(ccfg); - if (nearCache) + if (!MvccFeatureChecker.forcedMvcc() || MvccFeatureChecker.isSupported(Feature.NEAR_CACHE)) ignite(NODES - 1).createNearCache(ccfg.getName(), new NearCacheConfiguration<>()); return cache; @@ -866,6 +927,9 @@ private void checkTxSyncMode(Ignite ignite, boolean commit) { for (TransactionConcurrency concurrency : TransactionConcurrency.values()) { for (TransactionIsolation isolation : TransactionIsolation.values()) { + if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation)) + continue; + try (Transaction tx = txs.txStart(concurrency, isolation)) { fullSync1.put(key++, 1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCacheWriteSynchronizationModesMultithreadedTest.java 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCacheWriteSynchronizationModesMultithreadedTest.java index bed8a41149a80..a3c72c0fbe33c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCacheWriteSynchronizationModesMultithreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxCacheWriteSynchronizationModesMultithreadedTest.java @@ -45,14 +45,16 @@ import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionOptimisticException; import org.jetbrains.annotations.NotNull; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_ASYNC; @@ -66,10 +68,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteTxCacheWriteSynchronizationModesMultithreadedTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRVS = 4; @@ -89,8 +89,6 @@ public class IgniteTxCacheWriteSynchronizationModesMultithreadedTest extends Gri @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws 
Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(clientMode); ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); @@ -105,29 +103,31 @@ public class IgniteTxCacheWriteSynchronizationModesMultithreadedTest extends Gri /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-9470", MvccFeatureChecker.forcedMvcc()); - System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL,"true"); + super.beforeTestsStarted(); - startGrids(SRVS); + startGridsMultiThreaded(SRVS); clientMode = true; - for (int i = 0; i < CLIENTS; i++) { - Ignite client = startGrid(SRVS + i); + startGridsMultiThreaded(SRVS, CLIENTS); - assertTrue(client.configuration().isClientMode()); - } + for (int i = 0; i < CLIENTS; i++) + assertTrue(grid(SRVS + i).configuration().isClientMode()); } /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { - System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); + stopAllGrids(); + + super.afterTestsStopped(); } /** * @throws Exception If failed. */ + @Test public void testMultithreadedPrimarySyncRestart() throws Exception { multithreadedTests(PRIMARY_SYNC, true); } @@ -135,6 +135,7 @@ public void testMultithreadedPrimarySyncRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreadedPrimarySync() throws Exception { multithreadedTests(PRIMARY_SYNC, false); } @@ -142,6 +143,7 @@ public void testMultithreadedPrimarySync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreadedFullSync() throws Exception { multithreadedTests(FULL_SYNC, false); } @@ -149,6 +151,7 @@ public void testMultithreadedFullSync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultithreadedFullSyncRestart() throws Exception { multithreadedTests(FULL_SYNC, true); } @@ -156,6 +159,7 @@ public void testMultithreadedFullSyncRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreadedFullAsync() throws Exception { multithreadedTests(FULL_ASYNC, false); } @@ -163,6 +167,7 @@ public void testMultithreadedFullAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreadedFullAsyncRestart() throws Exception { multithreadedTests(FULL_ASYNC, true); } @@ -195,6 +200,14 @@ private void multithreaded(CacheWriteSynchronizationMode syncMode, boolean store, boolean nearCache, boolean restart) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) { + if (store && !MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.CACHE_STORE)) + return; + + if (nearCache && !MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.NEAR_CACHE)) + return; + } + final Ignite ignite = ignite(0); createCache(ignite, cacheConfiguration(DEFAULT_CACHE_NAME, syncMode, backups, store), nearCache); @@ -271,36 +284,38 @@ private void multithreaded(CacheWriteSynchronizationMode syncMode, } }); - commitMultithreaded(new IgniteBiInClosure>() { - @Override public void apply(Ignite ignite, IgniteCache cache) { - ThreadLocalRandom rnd = ThreadLocalRandom.current(); + if (!MvccFeatureChecker.forcedMvcc()) { + commitMultithreaded(new IgniteBiInClosure>() { + @Override public void apply(Ignite ignite, IgniteCache cache) { + ThreadLocalRandom rnd = ThreadLocalRandom.current(); - Map map = new LinkedHashMap<>(); + Map map = new LinkedHashMap<>(); - for (int i = 0; i < 10; i++) { - Integer key = rnd.nextInt(MULTITHREADED_TEST_KEYS); + for (int i = 0; i < 10; i++) { + Integer key = rnd.nextInt(MULTITHREADED_TEST_KEYS); - map.put(key, rnd.nextInt()); - } - - while (true) { - try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) { - for (Map.Entry e : 
map.entrySet()) - cache.put(e.getKey(), e.getValue()); - - tx.commit(); - - break; - } - catch (TransactionOptimisticException ignored) { - // Retry. + map.put(key, rnd.nextInt()); } - catch (CacheException | IgniteException ignored) { - break; + + while (true) { + try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) { + for (Map.Entry e : map.entrySet()) + cache.put(e.getKey(), e.getValue()); + + tx.commit(); + + break; + } + catch (TransactionOptimisticException ignored) { + // Retry. + } + catch (CacheException | IgniteException ignored) { + break; + } } } - } - }); + }); + } } finally { stop.set(true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxConcurrentRemoveObjectsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxConcurrentRemoveObjectsTest.java new file mode 100644 index 0000000000000..76d3bc61a2deb --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxConcurrentRemoveObjectsTest.java @@ -0,0 +1,178 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package org.apache.ignite.internal.processors.cache.distributed;
+
+import java.util.UUID;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.IgniteSystemProperties;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.internal.IgniteEx;
+import org.apache.ignite.internal.processors.cache.CacheGroupContext;
+import org.apache.ignite.internal.processors.cache.KeyCacheObject;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
+import org.apache.ignite.internal.processors.cache.version.GridCacheVersion;
+import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
+import org.apache.ignite.transactions.TransactionConcurrency;
+import org.apache.ignite.transactions.TransactionIsolation;
+
+import static org.apache.ignite.IgniteSystemProperties.IGNITE_CACHE_REMOVED_ENTRIES_TTL;
+import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
+import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;
+
+/**
+ *
+ */
+@RunWith(JUnit4.class)
+public class IgniteTxConcurrentRemoveObjectsTest extends GridCommonAbstractTest {
+    /** Cache partitions. */
+    private static final int CACHE_PARTITIONS = 16;
+
+    /** Cache entries count. */
+    private static final int CACHE_ENTRIES_COUNT = 512 * CACHE_PARTITIONS;
+
+    /** New value for {@link IgniteSystemProperties#IGNITE_CACHE_REMOVED_ENTRIES_TTL} property. */
+    private static final long newIgniteCacheRemovedEntriesTtl = 50L;
+
+    /** Old value of {@link IgniteSystemProperties#IGNITE_CACHE_REMOVED_ENTRIES_TTL} property. */
+    private static long oldIgniteCacheRmvEntriesTtl;
+
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        super.beforeTestsStarted();
+
+        oldIgniteCacheRmvEntriesTtl = Long.getLong(IGNITE_CACHE_REMOVED_ENTRIES_TTL, 10_000);
+
+        System.setProperty(IGNITE_CACHE_REMOVED_ENTRIES_TTL, Long.toString(newIgniteCacheRemovedEntriesTtl));
+
+        startGrid(0);
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void afterTestsStopped() throws Exception {
+        stopAllGrids();
+
+        System.setProperty(IGNITE_CACHE_REMOVED_ENTRIES_TTL, Long.toString(oldIgniteCacheRmvEntriesTtl));
+
+        super.afterTestsStopped();
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void afterTest() throws Exception {
+        grid(0).destroyCache(DEFAULT_CACHE_NAME);
+
+        super.afterTest();
+    }
+
+    /**
+     * @return Cache configuration.
+     */
+    private CacheConfiguration<Integer, String> cacheConfiguration() {
+        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>();
+
+        ccfg.setName(DEFAULT_CACHE_NAME);
+
+        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+
+        ccfg.setAffinity(new RendezvousAffinityFunction().setPartitions(CACHE_PARTITIONS));
+
+        return ccfg;
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testOptimisticTxLeavesObjectsInLocalPartition() throws Exception {
+        checkTxLeavesObjectsInLocalPartition(cacheConfiguration(), TransactionConcurrency.OPTIMISTIC, SERIALIZABLE);
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testPessimisticTxLeavesObjectsInLocalPartition() throws Exception {
+        checkTxLeavesObjectsInLocalPartition(cacheConfiguration(), TransactionConcurrency.PESSIMISTIC, SERIALIZABLE);
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testMvccTxLeavesObjectsInLocalPartition() throws Exception {
+        checkTxLeavesObjectsInLocalPartition(cacheConfiguration().setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT),
+            TransactionConcurrency.PESSIMISTIC, REPEATABLE_READ);
+    }
+
+    /**
+     * Too many deletes in single transaction may overflow {@link GridDhtLocalPartition#rmvQueue} and entries will be
+     * deleted synchronously in {@link GridDhtLocalPartition#onDeferredDelete(int, KeyCacheObject, GridCacheVersion)}.
+     * This should not corrupt internal map state in {@link GridDhtLocalPartition}.
+     *
+     * @throws Exception If failed.
+     */
+    public void checkTxLeavesObjectsInLocalPartition(CacheConfiguration<Integer, String> ccfg,
+        TransactionConcurrency optimistic, TransactionIsolation isolation) throws Exception {
+        IgniteEx igniteEx = grid(0);
+
+        igniteEx.getOrCreateCache(ccfg);
+
+        try (IgniteDataStreamer<Integer, String> dataStreamer = igniteEx.dataStreamer(DEFAULT_CACHE_NAME)) {
+            for (int i = 0; i < CACHE_ENTRIES_COUNT; i++)
+                dataStreamer.addData(i, UUID.randomUUID().toString());
+        }
+
+        IgniteEx client = startGrid(
+            getConfiguration()
+                .setClientMode(true)
+                .setIgniteInstanceName(UUID.randomUUID().toString())
+        );
+
+        awaitPartitionMapExchange();
+
+        assertEquals(CACHE_ENTRIES_COUNT, client.getOrCreateCache(DEFAULT_CACHE_NAME).size());
+
+        try (Transaction tx = client.transactions().txStart(optimistic, isolation)) {
+            IgniteCache<Integer, String> cache = client.getOrCreateCache(cacheConfiguration());
+
+            for (int v = 0; v < CACHE_ENTRIES_COUNT; v++) {
+                cache.get(v);
+
+                cache.remove(v);
+            }
+
+            tx.commit();
+        }
+
+        GridTestUtils.waitForCondition(
+            () -> igniteEx.context().cache().cacheGroups().stream()
+                .filter(CacheGroupContext::userCache)
+                .flatMap(cgctx -> cgctx.topology().localPartitions().stream())
+                .mapToInt(GridDhtLocalPartition::internalSize)
+                .max().orElse(-1) == 0,
+            newIgniteCacheRemovedEntriesTtl * 10
+        );
+    }
+}
diff --git
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxConsistencyRestartAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxConsistencyRestartAbstractSelfTest.java index a530cab0834d8..f775c76da9747 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxConsistencyRestartAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxConsistencyRestartAbstractSelfTest.java @@ -33,11 +33,11 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -48,10 +48,8 @@ /** * */ +@RunWith(JUnit4.class) public abstract class IgniteTxConsistencyRestartAbstractSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Grid count. 
*/ private static final int GRID_CNT = 4; @@ -62,12 +60,6 @@ public abstract class IgniteTxConsistencyRestartAbstractSelfTest extends GridCom @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCacheConfiguration(cacheConfiguration(igniteInstanceName)); return cfg; @@ -105,6 +97,7 @@ public CacheConfiguration cacheConfiguration(String igniteInstanceName) { /** * @throws Exception If failed. */ + @Test public void testTxConsistency() throws Exception { startGridsMultiThreaded(GRID_CNT); @@ -208,4 +201,4 @@ public void testTxConsistency() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxGetAfterStopTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxGetAfterStopTest.java index 837763f0ecab8..0236ad2224a55 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxGetAfterStopTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxGetAfterStopTest.java @@ -24,6 +24,9 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -32,6 +35,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteTxGetAfterStopTest extends IgniteCacheAbstractTest { /** */ private CacheMode cacheMode; @@ -74,6 +78,7 @@ public class 
IgniteTxGetAfterStopTest extends IgniteCacheAbstractTest { /** * @throws Exception If failed. */ + @Test public void testReplicated() throws Exception { getAfterStop(REPLICATED, null); } @@ -81,6 +86,7 @@ public void testReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitioned() throws Exception { getAfterStop(PARTITIONED, new NearCacheConfiguration()); } @@ -88,6 +94,7 @@ public void testPartitioned() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPartitionedNearDisabled() throws Exception { getAfterStop(PARTITIONED, null); } @@ -130,4 +137,4 @@ private void getAfterStop(CacheMode cacheMode, @Nullable NearCacheConfiguration assertEquals(key0, cache0.get(key0)); assertNull(cache1.get(key1)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxOriginatingNodeFailureAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxOriginatingNodeFailureAbstractSelfTest.java index 8bdfafef579cc..b1276917fc038 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxOriginatingNodeFailureAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxOriginatingNodeFailureAbstractSelfTest.java @@ -48,12 +48,16 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; /** * Abstract test for originating node failure. 
*/ +@RunWith(JUnit4.class) public abstract class IgniteTxOriginatingNodeFailureAbstractSelfTest extends GridCacheAbstractSelfTest { /** */ protected static final int GRID_CNT = 5; @@ -67,6 +71,7 @@ public abstract class IgniteTxOriginatingNodeFailureAbstractSelfTest extends Gri /** * @throws Exception If failed. */ + @Test public void testManyKeysCommit() throws Exception { Collection keys = new ArrayList<>(200); @@ -79,6 +84,7 @@ public void testManyKeysCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testManyKeysRollback() throws Exception { Collection keys = new ArrayList<>(200); @@ -314,4 +320,4 @@ protected void testTxOriginatingNodeFails(Collection keys, final boolea private boolean ignoredMessage(GridIoMessage msg) { return ignoreMsgCls != null && ignoreMsgCls.isAssignableFrom(msg.message().getClass()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPessimisticOriginatingNodeFailureAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPessimisticOriginatingNodeFailureAbstractSelfTest.java index 751455582c4b9..5fbe8b115276e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPessimisticOriginatingNodeFailureAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPessimisticOriginatingNodeFailureAbstractSelfTest.java @@ -23,10 +23,12 @@ import java.util.HashMap; import java.util.HashSet; import java.util.Map; +import java.util.Optional; import java.util.Set; import java.util.UUID; import java.util.concurrent.Callable; import java.util.concurrent.TimeUnit; +import java.util.stream.StreamSupport; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteException; @@ -40,6 +42,8 @@ import 
org.apache.ignite.internal.managers.communication.GridIoMessage; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager; @@ -57,12 +61,17 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; /** * Abstract test for originating node failure. */ +@RunWith(JUnit4.class) public abstract class IgniteTxPessimisticOriginatingNodeFailureAbstractSelfTest extends GridCacheAbstractSelfTest { /** */ protected static final int GRID_CNT = 5; @@ -79,6 +88,7 @@ public abstract class IgniteTxPessimisticOriginatingNodeFailureAbstractSelfTest /** * @throws Exception If failed. */ + @Test public void testManyKeysCommit() throws Exception { Collection keys = new ArrayList<>(200); @@ -91,6 +101,7 @@ public void testManyKeysCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testManyKeysRollback() throws Exception { Collection keys = new ArrayList<>(200); @@ -103,6 +114,7 @@ public void testManyKeysRollback() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPrimaryNodeFailureCommit() throws Exception { checkPrimaryNodeCrash(true); } @@ -110,6 +122,7 @@ public void testPrimaryNodeFailureCommit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimaryNodeFailureRollback() throws Exception { checkPrimaryNodeCrash(false); } @@ -268,8 +281,10 @@ protected void testTxOriginatingNodeFails(Collection keys, final boolea assertNotNull(cache); - assertEquals("Failed to check entry value on node: " + checkNodeId, - fullFailure ? initVal : val, cache.localPeek(key)); + if (atomicityMode() != TRANSACTIONAL_SNAPSHOT) { + assertEquals("Failed to check entry value on node: " + checkNodeId, + fullFailure ? initVal : val, cache.localPeek(key)); + } return null; } @@ -278,8 +293,22 @@ protected void testTxOriginatingNodeFails(Collection keys, final boolea } for (Map.Entry e : map.entrySet()) { - for (Ignite g : G.allGrids()) - assertEquals(fullFailure ? initVal : e.getValue(), g.cache(DEFAULT_CACHE_NAME).get(e.getKey())); + long cntr0 = -1; + + for (Ignite g : G.allGrids()) { + Integer key = e.getKey(); + + assertEquals(fullFailure ? initVal : e.getValue(), g.cache(DEFAULT_CACHE_NAME).get(key)); + + if (g.affinity(DEFAULT_CACHE_NAME).isPrimaryOrBackup(((IgniteEx)g).localNode(), key)) { + long nodeCntr = updateCoutner(g, key); + + if (cntr0 == -1) + cntr0 = nodeCntr; + + assertEquals(cntr0, nodeCntr); + } + } } } @@ -402,6 +431,9 @@ private void checkPrimaryNodeCrash(final boolean commmit) throws Exception { assertFalse(e.getValue().isEmpty()); + if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) + continue; + for (ClusterNode node : e.getValue()) { final UUID checkNodeId = node.id(); @@ -425,8 +457,22 @@ private void checkPrimaryNodeCrash(final boolean commmit) throws Exception { } for (Map.Entry e : map.entrySet()) { - for (Ignite g : G.allGrids()) - assertEquals(!commmit ? 
initVal : e.getValue(), g.cache(DEFAULT_CACHE_NAME).get(e.getKey())); + long cntr0 = -1; + + for (Ignite g : G.allGrids()) { + Integer key = e.getKey(); + + assertEquals(!commmit ? initVal : e.getValue(), g.cache(DEFAULT_CACHE_NAME).get(key)); + + if (g.affinity(DEFAULT_CACHE_NAME).isPrimaryOrBackup(((IgniteEx)g).localNode(), key)) { + long nodeCntr = updateCoutner(g, key); + + if (cntr0 == -1) + cntr0 = nodeCntr; + + assertEquals(cntr0, nodeCntr); + } + } } } @@ -529,4 +575,21 @@ private boolean ignoredMessage(GridIoMessage msg) { else return false; } -} \ No newline at end of file + + /** */ + private static long updateCoutner(Ignite ign, Object key) { + return dataStore(((IgniteEx)ign).cachex(DEFAULT_CACHE_NAME).context(), key) + .map(IgniteCacheOffheapManager.CacheDataStore::updateCounter) + .orElse(0L); + } + + /** */ + private static Optional dataStore( + GridCacheContext cctx, Object key) { + int p = cctx.affinity().partition(key); + + return StreamSupport.stream(cctx.offheap().cacheDataStores().spliterator(), false) + .filter(ds -> ds.partId() == p) + .findFirst(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPreloadAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPreloadAbstractTest.java index c94457f942a88..26a04afc33320 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPreloadAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxPreloadAbstractTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import 
org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -45,6 +49,7 @@ /** * Tests transaction during cache preloading. */ +@RunWith(JUnit4.class) public abstract class IgniteTxPreloadAbstractTest extends GridCacheAbstractSelfTest { /** */ private static final int GRID_CNT = 6; @@ -77,6 +82,7 @@ public abstract class IgniteTxPreloadAbstractTest extends GridCacheAbstractSelfT /** * @throws Exception If failed. */ + @Test public void testRemoteTxPreloading() throws Exception { IgniteCache cache = jcache(0); @@ -146,13 +152,16 @@ public void testRemoteTxPreloading() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalTxPreloadingOptimistic() throws Exception { - testLocalTxPreloading(OPTIMISTIC); + if (!MvccFeatureChecker.forcedMvcc()) // Do not check optimistic tx for mvcc. + testLocalTxPreloading(OPTIMISTIC); } /** * @throws Exception If failed. */ + @Test public void testLocalTxPreloadingPessimistic() throws Exception { testLocalTxPreloading(PESSIMISTIC); } @@ -186,7 +195,7 @@ private void testLocalTxPreloading(TransactionConcurrency txConcurrency) throws IgniteTransactions txs = ignite(i).transactions(); - try (Transaction tx = txs.txStart(txConcurrency, TransactionIsolation.READ_COMMITTED)) { + try (Transaction tx = txs.txStart(txConcurrency, TransactionIsolation.REPEATABLE_READ)) { cache.invoke(TX_KEY, new EntryProcessor() { @Override public Void process(MutableEntry e, Object... 
args) { Integer val = e.getValue(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxRemoveTimeoutObjectsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxRemoveTimeoutObjectsTest.java index 571362143ea5f..b186b45e983e1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxRemoveTimeoutObjectsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxRemoveTimeoutObjectsTest.java @@ -32,8 +32,12 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionTimeoutException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE; @@ -41,6 +45,7 @@ /** * Test correctness of rollback a transaction with timeout during the grid stop. */ +@RunWith(JUnit4.class) public class IgniteTxRemoveTimeoutObjectsTest extends GridCacheAbstractSelfTest { /** */ private static final int PUT_CNT = 1000; @@ -55,9 +60,21 @@ public class IgniteTxRemoveTimeoutObjectsTest extends GridCacheAbstractSelfTest return 60_000; } + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-7388"); + + if (nearEnabled()) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** * @throws Exception If failed. 
*/ + @Test public void testTxRemoveTimeoutObjects() throws Exception { IgniteCache cache0 = grid(0).cache(DEFAULT_CACHE_NAME); IgniteCache cache1 = grid(1).cache(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxTimeoutAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxTimeoutAbstractTest.java index a0ec70a9c3519..6bcee94add948 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxTimeoutAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/IgniteTxTimeoutAbstractTest.java @@ -17,18 +17,20 @@ package org.apache.ignite.internal.processors.cache.distributed; -import java.util.ArrayList; -import java.util.List; import java.util.Random; -import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.TransactionConfiguration; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionTimeoutException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -39,6 +41,7 @@ /** * Simple cache test. */ +@RunWith(JUnit4.class) public class IgniteTxTimeoutAbstractTest extends GridCommonAbstractTest { /** Random number generator. 
*/ private static final Random RAND = new Random(); @@ -46,9 +49,6 @@ public class IgniteTxTimeoutAbstractTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_COUNT = 2; - /** Grid instances. */ - private static final List IGNITEs = new ArrayList<>(); - /** Transaction timeout. */ private static final long TIMEOUT = 50; @@ -56,15 +56,25 @@ public class IgniteTxTimeoutAbstractTest extends GridCommonAbstractTest { * @throws Exception If failed. */ @Override protected void beforeTestsStarted() throws Exception { - for (int i = 0; i < GRID_COUNT; i++) - IGNITEs.add(startGrid(i)); + startGridsMultiThreaded(GRID_COUNT); } /** * @throws Exception If failed. */ @Override protected void afterTestsStopped() throws Exception { - IGNITEs.clear(); + stopAllGrids(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration c = super.getConfiguration(igniteInstanceName); + + TransactionConfiguration txCfg = c.getTransactionConfiguration(); + + txCfg.setDefaultTxTimeout(TIMEOUT); + + return c; } /** @@ -72,12 +82,13 @@ public class IgniteTxTimeoutAbstractTest extends GridCommonAbstractTest { * @return Cache. */ @Override protected IgniteCache jcache(int i) { - return IGNITEs.get(i).cache(DEFAULT_CACHE_NAME); + return grid(i).cache(DEFAULT_CACHE_NAME); } /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticReadCommitted() throws Exception { checkTransactionTimeout(PESSIMISTIC, READ_COMMITTED); } @@ -85,6 +96,7 @@ public void testPessimisticReadCommitted() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testPessimisticRepeatableRead() throws Exception { checkTransactionTimeout(PESSIMISTIC, REPEATABLE_READ); } @@ -92,6 +104,7 @@ public void testPessimisticRepeatableRead() throws Exception { /** * @throws IgniteCheckedException If test failed. 
*/ + @Test public void testPessimisticSerializable() throws Exception { checkTransactionTimeout(PESSIMISTIC, SERIALIZABLE); } @@ -99,6 +112,7 @@ public void testPessimisticSerializable() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticReadCommitted() throws Exception { checkTransactionTimeout(OPTIMISTIC, READ_COMMITTED); } @@ -106,6 +120,7 @@ public void testOptimisticReadCommitted() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticRepeatableRead() throws Exception { checkTransactionTimeout(OPTIMISTIC, REPEATABLE_READ); } @@ -113,6 +128,7 @@ public void testOptimisticRepeatableRead() throws Exception { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testOptimisticSerializable() throws Exception { checkTransactionTimeout(OPTIMISTIC, SERIALIZABLE); } @@ -124,7 +140,6 @@ public void testOptimisticSerializable() throws Exception { */ private void checkTransactionTimeout(TransactionConcurrency concurrency, TransactionIsolation isolation) throws Exception { - int idx = RAND.nextInt(GRID_COUNT); IgniteCache cache = jcache(idx); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/CacheGetReadFromBackupFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/CacheGetReadFromBackupFailoverTest.java new file mode 100644 index 0000000000000..8fc5ac8cf0cd3 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/CacheGetReadFromBackupFailoverTest.java @@ -0,0 +1,255 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed.dht; + +import java.util.Collections; +import java.util.UUID; +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; +import java.util.stream.Collectors; +import javax.cache.CacheException; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.IgniteIllegalStateException; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.failure.AbstractFailureHandler; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteNodeAttributes; +import org.apache.ignite.internal.NodeStoppingException; +import org.apache.ignite.internal.processors.cache.GridCacheFuture; +import org.apache.ignite.internal.processors.cache.GridCacheMvccManager; +import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.internal.util.typedef.internal.U; +import 
org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Test for getting values on unstable topology with read from backup enabled. + */ +@RunWith(JUnit4.class) +public class CacheGetReadFromBackupFailoverTest extends GridCommonAbstractTest { + /** Tx cache name. */ + private static final String TX_CACHE = "txCache"; + /** Atomic cache name. */ + private static final String ATOMIC_CACHE = "atomicCache"; + /** Keys count. */ + private static final int KEYS_CNT = 50000; + /** Stop load flag. */ + private static final AtomicBoolean stop = new AtomicBoolean(); + /** Error. */ + private static final AtomicReference err = new AtomicReference<>(); + + /** + * @return Grid count. 
+ */ + public int gridCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setFailureHandler(new AbstractFailureHandler() { + @Override protected boolean handle(Ignite ignite, FailureContext failureCtx) { + err.compareAndSet(null, failureCtx.error()); + stop.set(true); + return false; + } + }); + + cfg.setConsistentId(igniteInstanceName); + + CacheConfiguration txCcfg = new CacheConfiguration(TX_CACHE) + .setAtomicityMode(TRANSACTIONAL) + .setCacheMode(PARTITIONED) + .setBackups(1) + .setWriteSynchronizationMode(FULL_SYNC) + .setReadFromBackup(true); + + CacheConfiguration atomicCcfg = new CacheConfiguration(ATOMIC_CACHE) + .setAtomicityMode(ATOMIC) + .setCacheMode(PARTITIONED) + .setBackups(1) + .setWriteSynchronizationMode(FULL_SYNC) + .setReadFromBackup(true); + + cfg.setCacheConfiguration(txCcfg, atomicCcfg); + + // Enforce different MAC addresses to emulate distributed environment by default. + cfg.setUserAttributes(Collections.singletonMap( + IgniteNodeAttributes.ATTR_MACS_OVERRIDE, UUID.randomUUID().toString())); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + stop.set(false); + + err.set(null); + + startGrids(gridCount()); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + } + + /** + * @throws Exception If failed.
+ */ + @Test + public void testFailover() throws Exception { + Ignite ignite = ignite(0); + + ignite.cluster().active(true); + + ThreadLocalRandom rnd = ThreadLocalRandom.current(); + + try (IgniteDataStreamer stmr = ignite.dataStreamer(TX_CACHE)) { + for (int i = 0; i < KEYS_CNT; i++) + stmr.addData(i, rnd.nextLong()); + } + + try (IgniteDataStreamer stmr = ignite.dataStreamer(ATOMIC_CACHE)) { + for (int i = 0; i < KEYS_CNT; i++) + stmr.addData(i, rnd.nextLong()); + } + + AtomicInteger idx = new AtomicInteger(-1); + + AtomicInteger successGet = new AtomicInteger(); + + IgniteInternalFuture fut = GridTestUtils.runAsync(() -> { + ThreadLocalRandom rnd0 = ThreadLocalRandom.current(); + + while (!stop.get()) { + Ignite ig = null; + + while (ig == null) { + int n = rnd0.nextInt(gridCount()); + + if (idx.get() != n) { + try { + ig = ignite(n); + } + catch (IgniteIllegalStateException e) { + // No-op. + } + } + } + + try { + if (rnd.nextBoolean()) { + ig.cache(TX_CACHE).get(rnd0.nextLong(KEYS_CNT)); + ig.cache(ATOMIC_CACHE).get(rnd0.nextLong(KEYS_CNT)); + } + else { + ig.cache(TX_CACHE).getAll(rnd.longs(16, 0, KEYS_CNT).boxed().collect(Collectors.toSet())); + ig.cache(ATOMIC_CACHE).getAll(rnd.longs(16, 0, KEYS_CNT).boxed().collect(Collectors.toSet())); + } + + successGet.incrementAndGet(); + } + catch (CacheException e) { + if (!X.hasCause(e, NodeStoppingException.class)) + throw e; + } + + } + }, "load-thread"); + + long startTime = System.currentTimeMillis(); + + while (System.currentTimeMillis() - startTime < 30 * 1000L) { + int idx0 = idx.get(); + + if (idx0 >= 0) + startGrid(idx0); + + U.sleep(500); + + int next = rnd.nextInt(gridCount()); + + idx.set(next); + + stopGrid(next); + + U.sleep(500); + } + + stop.set(true); + + while (true){ + try { + fut.get(10_000); + + break; + } + catch (IgniteFutureTimeoutCheckedException e) { + for (Ignite i : G.allGrids()) { + IgniteEx ex = (IgniteEx)i; + + log.info(">>>> " + ex.context().localNodeId()); + + GridCacheMvccManager 
mvcc = ex.context().cache().context().mvcc(); + + for (GridCacheFuture fut0 : mvcc.activeFutures()) { + log.info("activeFut - " + fut0); + } + + for (GridCacheFuture fut0 : mvcc.atomicFutures()) { + log.info("atomicFut - " + fut0); + } + } + } + } + + Assert.assertTrue(String.valueOf(successGet.get()), successGet.get() > 50); + + Throwable e = err.get(); + + if (e != null) { + log.error("Test failed", e); + + fail("Test failed"); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/CachePartitionPartialCountersMapSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/CachePartitionPartialCountersMapSelfTest.java index a4afbcace876b..5f4f4d9b6fa04 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/CachePartitionPartialCountersMapSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/CachePartitionPartialCountersMapSelfTest.java @@ -19,9 +19,15 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.CachePartitionPartialCountersMap; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +/** */ +@RunWith(JUnit4.class) public class CachePartitionPartialCountersMapSelfTest extends GridCommonAbstractTest { - + /** */ + @Test public void testAddAndRemove() throws Exception { CachePartitionPartialCountersMap map = new CachePartitionPartialCountersMap(10); @@ -54,4 +60,15 @@ public void testAddAndRemove() throws Exception { } } -} \ No newline at end of file + /** */ + @Test + public void testEmptyMap() throws Exception { + CachePartitionPartialCountersMap map = CachePartitionPartialCountersMap.EMPTY; + + assertFalse(map.remove(1)); + + map.trim(); + + assertNotNull(map.toString()); + } +} diff --git
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest.java index 86738f5211662..5ed637baf3370 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest.java @@ -17,11 +17,15 @@ package org.apache.ignite.internal.processors.cache.distributed.dht; +import java.util.Arrays; import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractPartitionedByteArrayValuesSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.junit.Assert.assertArrayEquals; @@ -29,13 +33,11 @@ /** * Tests for byte array values in PARTITIONED-ONLY caches. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest extends GridCacheAbstractPartitionedByteArrayValuesSelfTest { - /** Offheap cache name. */ - protected static final String CACHE_ATOMIC = "cache_atomic"; - - /** Offheap cache name. */ - protected static final String CACHE_ATOMIC_OFFHEAP = "cache_atomic_offheap"; + /** */ + public static final String ATOMIC_CACHE = "atomicCache"; /** Atomic caches. 
*/ private static IgniteCache[] cachesAtomic; @@ -46,10 +48,16 @@ public abstract class GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest ex CacheConfiguration atomicCacheCfg = cacheConfiguration0(); - atomicCacheCfg.setName(CACHE_ATOMIC); + atomicCacheCfg.setName(ATOMIC_CACHE); atomicCacheCfg.setAtomicityMode(ATOMIC); - c.setCacheConfiguration(cacheConfiguration(), atomicCacheCfg); + int size = c.getCacheConfiguration().length; + + CacheConfiguration[] configs = Arrays.copyOf(c.getCacheConfiguration(), size + 1); + + configs[size] = atomicCacheCfg; + + c.setCacheConfiguration(configs); c.setPeerClassLoadingEnabled(peerClassLoading()); @@ -71,7 +79,7 @@ public abstract class GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest ex cachesAtomic = new IgniteCache[gridCnt]; for (int i = 0; i < gridCount(); i++) - cachesAtomic[i] = ignites[i].cache(CACHE_ATOMIC); + cachesAtomic[i] = grid(i).cache(ATOMIC_CACHE); } /** {@inheritDoc} */ @@ -86,6 +94,7 @@ public abstract class GridCacheAbstractPartitionedOnlyByteArrayValuesSelfTest ex * * @throws Exception If failed. 
*/ + @Test public void testAtomic() throws Exception { testAtomic0(cachesAtomic); } @@ -110,4 +119,4 @@ private void testAtomic0(IgniteCache[] caches) throws Exception assertNull(cache.get(KEY_1)); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractTransformWriteThroughSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractTransformWriteThroughSelfTest.java index 34c96fd999394..0b9b16f05b3b2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractTransformWriteThroughSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAbstractTransformWriteThroughSelfTest.java @@ -28,12 +28,13 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheGenericTestStore; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,6 +45,7 @@ /** * Tests write-through. */ +@RunWith(JUnit4.class) public abstract class GridCacheAbstractTransformWriteThroughSelfTest extends GridCommonAbstractTest { /** Grid count. 
*/ protected static final int GRID_CNT = 3; @@ -66,9 +68,6 @@ public abstract class GridCacheAbstractTransformWriteThroughSelfTest extends Gri /** Keys number. */ public static final int KEYS_CNT = 30; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Value increment processor. */ private static final EntryProcessor INCR_CLOS = new EntryProcessor() { @Override public Void process(MutableEntry e, Object... args) { @@ -104,12 +103,6 @@ public abstract class GridCacheAbstractTransformWriteThroughSelfTest extends Gri @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - GridCacheGenericTestStore store = new GridCacheGenericTestStore<>(); stores.add(store); @@ -132,6 +125,8 @@ public abstract class GridCacheAbstractTransformWriteThroughSelfTest extends Gri /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + super.beforeTestsStarted(); for (int i = 0; i < GRID_CNT; i++) @@ -147,6 +142,8 @@ public abstract class GridCacheAbstractTransformWriteThroughSelfTest extends Gri /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + super.beforeTest(); for (GridCacheGenericTestStore store : stores) @@ -156,6 +153,7 @@ public abstract class GridCacheAbstractTransformWriteThroughSelfTest extends Gri /** * @throws Exception If failed. 
*/ + @Test public void testTransformOptimisticNearUpdate() throws Exception { checkTransform(OPTIMISTIC, NEAR_NODE, OP_UPDATE); } @@ -163,6 +161,7 @@ public void testTransformOptimisticNearUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformOptimisticPrimaryUpdate() throws Exception { checkTransform(OPTIMISTIC, PRIMARY_NODE, OP_UPDATE); } @@ -170,6 +169,7 @@ public void testTransformOptimisticPrimaryUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformOptimisticBackupUpdate() throws Exception { checkTransform(OPTIMISTIC, BACKUP_NODE, OP_UPDATE); } @@ -177,6 +177,7 @@ public void testTransformOptimisticBackupUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformOptimisticNearDelete() throws Exception { checkTransform(OPTIMISTIC, NEAR_NODE, OP_DELETE); } @@ -184,6 +185,7 @@ public void testTransformOptimisticNearDelete() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformOptimisticPrimaryDelete() throws Exception { checkTransform(OPTIMISTIC, PRIMARY_NODE, OP_DELETE); } @@ -191,6 +193,7 @@ public void testTransformOptimisticPrimaryDelete() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformOptimisticBackupDelete() throws Exception { checkTransform(OPTIMISTIC, BACKUP_NODE, OP_DELETE); } @@ -198,6 +201,7 @@ public void testTransformOptimisticBackupDelete() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformPessimisticNearUpdate() throws Exception { checkTransform(PESSIMISTIC, NEAR_NODE, OP_UPDATE); } @@ -205,6 +209,7 @@ public void testTransformPessimisticNearUpdate() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransformPessimisticPrimaryUpdate() throws Exception { checkTransform(PESSIMISTIC, PRIMARY_NODE, OP_UPDATE); } @@ -212,6 +217,7 @@ public void testTransformPessimisticPrimaryUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformPessimisticBackupUpdate() throws Exception { checkTransform(PESSIMISTIC, BACKUP_NODE, OP_UPDATE); } @@ -219,6 +225,7 @@ public void testTransformPessimisticBackupUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformPessimisticNearDelete() throws Exception { checkTransform(PESSIMISTIC, NEAR_NODE, OP_DELETE); } @@ -226,6 +233,7 @@ public void testTransformPessimisticNearDelete() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformPessimisticPrimaryDelete() throws Exception { checkTransform(PESSIMISTIC, PRIMARY_NODE, OP_DELETE); } @@ -233,6 +241,7 @@ public void testTransformPessimisticPrimaryDelete() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransformPessimisticBackupDelete() throws Exception { checkTransform(PESSIMISTIC, BACKUP_NODE, OP_DELETE); } @@ -334,4 +343,4 @@ else if (nodeType == BACKUP_NODE) { return keys; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicFullApiSelfTest.java index b027aead6e6a1..b83e1c1a83402 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicFullApiSelfTest.java @@ -27,12 +27,16 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedFullApiSelfTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; /** * Multi node test for disabled near cache. */ +@RunWith(JUnit4.class) public class GridCacheAtomicFullApiSelfTest extends GridCachePartitionedFullApiSelfTest { /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { @@ -57,6 +61,7 @@ public class GridCacheAtomicFullApiSelfTest extends GridCachePartitionedFullApiS /** * @throws Exception If failed. */ + @Test public void testLock() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -74,6 +79,7 @@ public void testLock() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test @Override public void testGetAll() throws Exception { jcache().put("key1", 1); jcache().put("key2", 2); @@ -108,4 +114,4 @@ public void testLock() throws Exception { assertEquals(2, (int)map2.get("key2")); assertNull(map2.get("key9999")); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicNearCacheSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicNearCacheSelfTest.java index 687a1bfc28066..1ae62a505de43 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicNearCacheSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheAtomicNearCacheSelfTest.java @@ -45,6 +45,9 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -55,6 +58,7 @@ /** * Tests near cache with various atomic cache configuration. */ +@RunWith(JUnit4.class) public class GridCacheAtomicNearCacheSelfTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 4; @@ -100,6 +104,7 @@ public class GridCacheAtomicNearCacheSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNoBackups() throws Exception { doStartGrids(0); @@ -109,6 +114,7 @@ public void testNoBackups() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testWithBackups() throws Exception { doStartGrids(2); @@ -822,4 +828,4 @@ private Processor(Integer newVal) { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedDebugTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedDebugTest.java index 282b79219f412..a19af9f4cc49f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedDebugTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedDebugTest.java @@ -42,13 +42,13 @@ import org.apache.ignite.internal.processors.cache.GridCacheTestStore; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -60,10 +60,8 @@ /** * Tests for colocated cache. */ +@RunWith(JUnit4.class) public class GridCacheColocatedDebugTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Test thread count. 
*/ private static final int THREAD_CNT = 10; @@ -75,12 +73,6 @@ public class GridCacheColocatedDebugTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -106,6 +98,7 @@ public class GridCacheColocatedDebugTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSimplestPessimistic() throws Exception { checkSinglePut(false, PESSIMISTIC, REPEATABLE_READ); } @@ -113,6 +106,7 @@ public void testSimplestPessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimpleOptimistic() throws Exception { checkSinglePut(true, OPTIMISTIC, REPEATABLE_READ); } @@ -120,6 +114,7 @@ public void testSimpleOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReentry() throws Exception { checkReentry(PESSIMISTIC, REPEATABLE_READ); } @@ -127,6 +122,7 @@ public void testReentry() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedInTxSeparatePessimistic() throws Exception { checkDistributedPut(true, true, PESSIMISTIC, REPEATABLE_READ); } @@ -134,6 +130,7 @@ public void testDistributedInTxSeparatePessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedInTxPessimistic() throws Exception { checkDistributedPut(true, false, PESSIMISTIC, REPEATABLE_READ); } @@ -141,6 +138,7 @@ public void testDistributedInTxPessimistic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDistributedSeparatePessimistic() throws Exception { checkDistributedPut(false, true, PESSIMISTIC, REPEATABLE_READ); } @@ -148,6 +146,7 @@ public void testDistributedSeparatePessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedPessimistic() throws Exception { checkDistributedPut(false, false, PESSIMISTIC, REPEATABLE_READ); } @@ -155,6 +154,7 @@ public void testDistributedPessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedNonLocalInTxSeparatePessimistic() throws Exception { checkNonLocalPuts(true, true, PESSIMISTIC, REPEATABLE_READ); } @@ -162,6 +162,7 @@ public void testDistributedNonLocalInTxSeparatePessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedNonLocalInTxPessimistic() throws Exception { checkNonLocalPuts(true, false, PESSIMISTIC, REPEATABLE_READ); } @@ -169,6 +170,7 @@ public void testDistributedNonLocalInTxPessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedNonLocalSeparatePessimistic() throws Exception { checkNonLocalPuts(false, true, PESSIMISTIC, REPEATABLE_READ); } @@ -176,6 +178,7 @@ public void testDistributedNonLocalSeparatePessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedNonLocalPessimistic() throws Exception { checkNonLocalPuts(false, false, PESSIMISTIC, REPEATABLE_READ); } @@ -183,6 +186,7 @@ public void testDistributedNonLocalPessimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollbackSeparatePessimistic() throws Exception { checkRollback(true, PESSIMISTIC, REPEATABLE_READ); } @@ -190,6 +194,7 @@ public void testRollbackSeparatePessimistic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDistributedInTxSeparateOptimistic() throws Exception { checkDistributedPut(true, true, OPTIMISTIC, REPEATABLE_READ); } @@ -197,6 +202,7 @@ public void testDistributedInTxSeparateOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedInTxOptimistic() throws Exception { checkDistributedPut(true, false, OPTIMISTIC, REPEATABLE_READ); } @@ -204,6 +210,7 @@ public void testDistributedInTxOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedNonLocalInTxSeparateOptimistic() throws Exception { checkNonLocalPuts(true, true, OPTIMISTIC, REPEATABLE_READ); } @@ -211,6 +218,7 @@ public void testDistributedNonLocalInTxSeparateOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedNonLocalInTxOptimistic() throws Exception { checkNonLocalPuts(true, false, OPTIMISTIC, REPEATABLE_READ); } @@ -218,6 +226,7 @@ public void testDistributedNonLocalInTxOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollbackSeparateOptimistic() throws Exception { checkRollback(true, OPTIMISTIC, REPEATABLE_READ); } @@ -225,6 +234,7 @@ public void testRollbackSeparateOptimistic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRollback() throws Exception { checkRollback(false, PESSIMISTIC, REPEATABLE_READ); } @@ -232,6 +242,7 @@ public void testRollback() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutsMultithreadedColocated() throws Exception { checkPutsMultithreaded(true, false, 100000); } @@ -239,6 +250,7 @@ public void testPutsMultithreadedColocated() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutsMultithreadedRemote() throws Exception { checkPutsMultithreaded(false, true, 100000); } @@ -246,6 +258,7 @@ public void testPutsMultithreadedRemote() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutsMultithreadedMixed() throws Exception { checkPutsMultithreaded(true, true, 100000); } @@ -352,6 +365,7 @@ public void checkPutsMultithreaded(boolean loc, boolean remote, final long maxIt /** * @throws Exception If failed. */ + @Test public void testLockLockedLocal() throws Exception { checkLockLocked(true); } @@ -359,6 +373,7 @@ public void testLockLockedLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLockLockedRemote() throws Exception { checkLockLocked(false); } @@ -430,6 +445,7 @@ private void checkLockLocked(boolean loc) throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticGet() throws Exception { storeEnabled = false; @@ -711,6 +727,7 @@ private void checkNonLocalPuts(boolean explicitTx, boolean separate, Transaction /** * @throws Exception If failed. */ + @Test public void testWriteThrough() throws Exception { storeEnabled = true; @@ -897,6 +914,7 @@ private void checkRollback(boolean separate, TransactionConcurrency concurrency, /** * @throws Exception If failed. */ + @Test public void testExplicitLocks() throws Exception { storeEnabled = false; @@ -923,6 +941,7 @@ public void testExplicitLocks() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testExplicitLocksDistributed() throws Exception { storeEnabled = false; @@ -989,4 +1008,4 @@ private static Integer forPrimary(Ignite g, int prev) { throw new IllegalArgumentException("Can not find key being primary for node: " + g.cluster().localNode().id()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedMvccTxSingleThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedMvccTxSingleThreadedSelfTest.java new file mode 100644 index 0000000000000..717d77113413f --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedMvccTxSingleThreadedSelfTest.java @@ -0,0 +1,85 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package org.apache.ignite.internal.processors.cache.distributed.dht;
+
+import org.apache.ignite.cache.CacheWriteSynchronizationMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.internal.processors.cache.IgniteMvccTxSingleThreadedAbstractTest;
+
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
+import static org.apache.ignite.cache.CacheMode.PARTITIONED;
+import static org.apache.ignite.cache.CacheRebalanceMode.NONE;
+
+/**
+ * Test Mvcc txs in single-threaded mode for colocated cache.
+ */
+public class GridCacheColocatedMvccTxSingleThreadedSelfTest extends IgniteMvccTxSingleThreadedAbstractTest {
+    /** {@inheritDoc} */
+    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
+        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
+
+        CacheConfiguration ccfg = defaultCacheConfiguration();
+
+        ccfg.setCacheMode(PARTITIONED);
+        ccfg.setBackups(1);
+        ccfg.setNearConfiguration(null);
+        ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
+
+        ccfg.setEvictionPolicy(null);
+
+        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
+
+        ccfg.setRebalanceMode(NONE);
+
+        cfg.setCacheConfiguration(ccfg);
+
+        return cfg;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected int gridCount() {
+        return 4;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected int keyCount() {
+        return 3;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected int maxKeyValue() {
+        return 3;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected int iterations() {
+        return 3000;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected boolean isTestDebug() {
+        return false;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected boolean printMemoryStats() {
+        return true;
+    }
+
+}
\ No newline at end of file
diff --git
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedOptimisticTransactionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedOptimisticTransactionSelfTest.java index 5ed6b38cc01c0..fa87637b382a8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedOptimisticTransactionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedOptimisticTransactionSelfTest.java @@ -22,11 +22,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,6 +37,7 @@ /** * Test ensuring that values are visible inside OPTIMISTIC transaction in co-located cache. */ +@RunWith(JUnit4.class) public class GridCacheColocatedOptimisticTransactionSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 3; @@ -50,9 +51,6 @@ public class GridCacheColocatedOptimisticTransactionSelfTest extends GridCommonA /** Value. */ private static final String VAL = "val"; - /** Shared IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Grids. 
*/ private static Ignite[] ignites; @@ -65,10 +63,6 @@ public class GridCacheColocatedOptimisticTransactionSelfTest extends GridCommonA c.getTransactionConfiguration().setTxSerializableEnabled(true); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - CacheConfiguration cc = new CacheConfiguration(DEFAULT_CACHE_NAME); cc.setName(CACHE); @@ -78,7 +72,6 @@ public class GridCacheColocatedOptimisticTransactionSelfTest extends GridCommonA cc.setBackups(1); cc.setWriteSynchronizationMode(FULL_SYNC); - c.setDiscoverySpi(disco); c.setCacheConfiguration(cc); return c; @@ -110,6 +103,7 @@ public class GridCacheColocatedOptimisticTransactionSelfTest extends GridCommonA * * @throws Exception If failed. */ + @Test public void testOptimisticTransaction() throws Exception { for (IgniteCache cache : caches) { Transaction tx = cache.unwrap(Ignite.class).transactions().txStart(OPTIMISTIC, REPEATABLE_READ); @@ -148,4 +142,4 @@ public void testOptimisticTransaction() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedTxSingleThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedTxSingleThreadedSelfTest.java index c55a606d20068..90502dafe4dcc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedTxSingleThreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheColocatedTxSingleThreadedSelfTest.java @@ -22,9 +22,6 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.processors.cache.IgniteTxSingleThreadedAbstractTest; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.log4j.Level; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -38,9 +35,6 @@ public class GridCacheColocatedTxSingleThreadedSelfTest extends IgniteTxSingleTh /** Cache debug flag. */ private static final boolean CACHE_DEBUG = false; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @SuppressWarnings({"ConstantConditions"}) @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -61,12 +55,6 @@ public class GridCacheColocatedTxSingleThreadedSelfTest extends IgniteTxSingleTh cc.setRebalanceMode(NONE); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - c.setCacheConfiguration(cc); if (CACHE_DEBUG) diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEntrySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEntrySelfTest.java index 7767da9721017..9f834d907f47e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEntrySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEntrySelfTest.java @@ -34,11 +34,12 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -46,23 +47,15 @@ /** * Unit tests for dht entry. */ +@RunWith(JUnit4.class) public class GridCacheDhtEntrySelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 2; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -78,12 +71,16 @@ public class GridCacheDhtEntrySelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + startGridsMultiThreaded(GRID_CNT); } /** {@inheritDoc} */ @SuppressWarnings({"SizeReplaceableByIsEmpty"}) @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + for (int i = 0; i < GRID_CNT; i++) { assert near(grid(i)).size() == 0 : "Near cache size is not zero for grid: " + i; assert dht(grid(i)).size() == 0 : "DHT cache size is not zero for grid: " + i; @@ -94,7 +91,6 @@ public class GridCacheDhtEntrySelfTest extends GridCommonAbstractTest { } /** {@inheritDoc} */ - @SuppressWarnings({"SizeReplaceableByIsEmpty"}) @Override protected void afterTest() throws Exception { for (int i = 0; i < GRID_CNT; i++) { near(grid(i)).removeAll(); @@ -126,7 +122,7 @@ 
private IgniteCache near(Ignite g) { * @param g Grid. * @return Dht cache. */ - @SuppressWarnings({"unchecked", "TypeMayBeWeakened"}) + @SuppressWarnings({"unchecked"}) private GridDhtCacheAdapter dht(Ignite g) { return ((GridNearCacheAdapter)((IgniteKernal)g).internalCache(DEFAULT_CACHE_NAME)).dht(); } @@ -140,6 +136,7 @@ private Ignite grid(UUID nodeId) { } /** @throws Exception If failed. */ + @Test public void testClearWithReaders() throws Exception { Integer key = 1; @@ -188,6 +185,7 @@ public void testClearWithReaders() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRemoveWithReaders() throws Exception { Integer key = 1; @@ -236,7 +234,7 @@ public void testRemoveWithReaders() throws Exception { } /** @throws Exception If failed. */ - @SuppressWarnings({"AssertWithSideEffects"}) + @Test public void testEvictWithReaders() throws Exception { Integer key = 1; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionNearReadersSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionNearReadersSelfTest.java index 36da261d331ef..427bbf7d794c2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionNearReadersSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionNearReadersSelfTest.java @@ -37,10 +37,10 @@ import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -50,13 +50,11 @@ /** * Tests for dht cache eviction. */ +@RunWith(JUnit4.class) public class GridCacheDhtEvictionNearReadersSelfTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 4; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Default constructor. */ public GridCacheDhtEvictionNearReadersSelfTest() { super(false /* don't start grid. */); @@ -66,12 +64,6 @@ public GridCacheDhtEvictionNearReadersSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -188,6 +180,7 @@ private IgnitePredicate nodeEvent(final UUID nodeId) { * * @throws Exception If failed. 
*/ + @Test public void testReaders() throws Exception { Integer key = 1; @@ -280,4 +273,4 @@ public void testReaders() throws Exception { assertNull(localPeek(dhtOther, key)); assertNull(localPeek(nearOther, key)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionsDisabledSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionsDisabledSelfTest.java index d31015d9e419d..00ed47c299df1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionsDisabledSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtEvictionsDisabledSelfTest.java @@ -21,10 +21,10 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -33,10 +33,8 @@ /** * Test cache closure execution. 
*/ +@RunWith(JUnit4.class) public class GridCacheDhtEvictionsDisabledSelfTest extends GridCommonAbstractTest { - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * */ @@ -48,12 +46,6 @@ public GridCacheDhtEvictionsDisabledSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setName("test"); @@ -73,6 +65,7 @@ public GridCacheDhtEvictionsDisabledSelfTest() { } /** @throws Exception If failed. */ + @Test public void testOneNode() throws Exception { checkNodes(startGridsMultiThreaded(1)); @@ -81,6 +74,7 @@ public void testOneNode() throws Exception { } /** @throws Exception If failed. */ + @Test public void testTwoNodes() throws Exception { checkNodes(startGridsMultiThreaded(2)); @@ -89,6 +83,7 @@ public void testTwoNodes() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testThreeNodes() throws Exception { checkNodes(startGridsMultiThreaded(3)); @@ -124,4 +119,4 @@ private void checkNodes(Ignite g) throws Exception { assertEquals(v1, v2); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMappingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMappingSelfTest.java index bd0af345fed34..36e2aa55b226b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMappingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMappingSelfTest.java @@ -24,10 +24,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -36,12 +37,15 @@ /** * Tests dht mapping. */ +@RunWith(JUnit4.class) public class GridCacheDhtMappingSelfTest extends GridCommonAbstractTest { /** Number of key backups. */ private static final int BACKUPS = 1; - /** IP finder. 
*/ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -55,17 +59,13 @@ public class GridCacheDhtMappingSelfTest extends GridCommonAbstractTest { cacheCfg.setBackups(BACKUPS); cacheCfg.setAtomicityMode(TRANSACTIONAL); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); cfg.setCacheConfiguration(cacheCfg); return cfg; } /** @throws Exception If failed. */ + @Test public void testMapping() throws Exception { int nodeCnt = 5; @@ -95,4 +95,4 @@ public void testMapping() throws Exception { // Test key should be on primary and backup node only. assertEquals(1 + BACKUPS, cnt); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMultiBackupTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMultiBackupTest.java index e1e5315dda4d3..1f5bc28bbe165 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMultiBackupTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtMultiBackupTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheDhtMultiBackupTest extends GridCommonAbstractTest { /** * @@ -41,6 +45,7 @@ public 
GridCacheDhtMultiBackupTest() { /** * @throws Exception If failed */ + @Test public void testPut() throws Exception { try { Ignite g = G.start("examples/config/example-cache.xml"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadBigDataSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadBigDataSelfTest.java index 86dbd4fb86d89..5b4fcc02a8f6b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadBigDataSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadBigDataSelfTest.java @@ -27,10 +27,10 @@ import org.apache.ignite.lifecycle.LifecycleBean; import org.apache.ignite.lifecycle.LifecycleEventType; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC; @@ -42,6 +42,7 @@ /** * Test large cache counts. */ +@RunWith(JUnit4.class) public class GridCacheDhtPreloadBigDataSelfTest extends GridCommonAbstractTest { /** Size of values in KB. */ private static final int KBSIZE = 10 * 1024; @@ -70,9 +71,6 @@ public class GridCacheDhtPreloadBigDataSelfTest extends GridCommonAbstractTest { /** */ private LifecycleBean lbean; - /** IP finder. 
*/ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * */ @@ -93,14 +91,9 @@ public GridCacheDhtPreloadBigDataSelfTest() { cc.setAffinity(new RendezvousAffinityFunction(false, partitions)); cc.setBackups(backups); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - if (lbean != null) c.setLifecycleBeans(lbean); - c.setDiscoverySpi(disco); c.setCacheConfiguration(cc); c.setDeploymentMode(CONTINUOUS); c.setNetworkTimeout(1000); @@ -125,6 +118,7 @@ public GridCacheDhtPreloadBigDataSelfTest() { /** * @throws Exception If failed. */ + @Test public void testLargeObjects() throws Exception { preloadMode = SYNC; @@ -159,6 +153,7 @@ public void testLargeObjects() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLargeObjectsWithLifeCycleBean() throws Exception { preloadMode = SYNC; partitions = 23; @@ -230,4 +225,4 @@ private byte[] value(int size) { @Override protected long getTestTimeout() { return 6 * 60 * 1000; // 6 min. 
} -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadDelayedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadDelayedSelfTest.java index 9941d58af0dae..f3c627637b716 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadDelayedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadDelayedSelfTest.java @@ -35,6 +35,7 @@ import org.apache.ignite.events.Event; import org.apache.ignite.events.EventType; import org.apache.ignite.internal.IgniteKernal; +import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader; @@ -45,11 +46,11 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -60,6 +61,7 @@ /** * Test cases for partitioned cache {@link GridDhtPreloader preloader}. 
*/ +@RunWith(JUnit4.class) public class GridCacheDhtPreloadDelayedSelfTest extends GridCommonAbstractTest { /** Key count. */ private static final int KEY_CNT = 100; @@ -73,9 +75,6 @@ public class GridCacheDhtPreloadDelayedSelfTest extends GridCommonAbstractTest { /** Preload delay. */ private long delay = -1; - /** IP finder. */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -92,11 +91,6 @@ public class GridCacheDhtPreloadDelayedSelfTest extends GridCommonAbstractTest { cc.setBackups(1); cc.setAtomicityMode(TRANSACTIONAL); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); c.setCacheConfiguration(cc); return c; @@ -110,6 +104,7 @@ public class GridCacheDhtPreloadDelayedSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testManualPreload() throws Exception { delay = -1; @@ -189,6 +184,7 @@ public void testManualPreload() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDelayedPreload() throws Exception { delay = PRELOAD_DELAY; @@ -260,6 +256,7 @@ public void testDelayedPreload() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAutomaticPreload() throws Exception { delay = 0; preloadMode = CacheRebalanceMode.SYNC; @@ -293,6 +290,7 @@ public void testAutomaticPreload() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAutomaticPreloadWithEmptyCache() throws Exception { preloadMode = SYNC; @@ -343,6 +341,7 @@ public void testAutomaticPreloadWithEmptyCache() throws Exception { /** * @throws Exception If failed. 
      */
+    @Test
     public void testManualPreloadSyncMode() throws Exception {
         preloadMode = CacheRebalanceMode.SYNC;
 
         delay = -1;
@@ -358,6 +357,7 @@ public void testManualPreloadSyncMode() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPreloadManyNodes() throws Exception {
         delay = 0;
         preloadMode = ASYNC;
@@ -390,7 +390,10 @@ public void testPreloadManyNodes() throws Exception {
      * @return Topology.
      */
     private GridDhtPartitionTopology topology(Ignite g) {
-        return ((GridNearCacheAdapter)((IgniteKernal)g).internalCache(DEFAULT_CACHE_NAME)).dht().topology();
+        GridCacheAdapter internalCache = ((IgniteKernal)g).internalCache(DEFAULT_CACHE_NAME);
+
+        return internalCache.isNear() ? ((GridNearCacheAdapter)internalCache).dht().topology() :
+            internalCache.context().dht().topology();
     }
 
     /**
@@ -479,4 +482,4 @@ private final void checkMaps(final boolean strict,
     final GridDhtCacheAdapter c, int cnt) {
         for (int i = 0; i < cnt; i++)
             c.put(i, Integer.toString(i));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMessageCountTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMessageCountTest.java
index 886a8864ac6bf..41f0af499c1de 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMessageCountTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMessageCountTest.java
@@ -28,10 +28,10 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage;
 import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -39,6 +39,7 @@
 /**
  * Test cases for partitioned cache {@link GridDhtPreloader preloader}.
  */
+@RunWith(JUnit4.class)
 public class GridCacheDhtPreloadMessageCountTest extends GridCommonAbstractTest {
     /** Key count. */
     private static final int KEY_CNT = 1000;
@@ -46,9 +47,6 @@ public class GridCacheDhtPreloadMessageCountTest extends GridCommonAbstractTest
     /** Preload mode. */
     private CacheRebalanceMode preloadMode = CacheRebalanceMode.SYNC;
 
-    /** IP finder. */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
@@ -63,11 +61,6 @@ public class GridCacheDhtPreloadMessageCountTest extends GridCommonAbstractTest
         cc.setAffinity(new RendezvousAffinityFunction(false, 521));
         cc.setBackups(1);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
         c.setCacheConfiguration(cc);
 
         TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi();
@@ -87,6 +80,7 @@ public class GridCacheDhtPreloadMessageCountTest extends GridCommonAbstractTest
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAutomaticPreload() throws Exception {
         Ignite g0 = startGrid(0);
@@ -134,4 +128,4 @@ private void checkCache(IgniteCache c, int keyCnt) {
             assertEquals(Integer.valueOf(i), c.localPeek(key, CachePeekMode.ONHEAP));
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMultiThreadedSelfTest.java
index 6c32a67b98f24..3349d55d884b2 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMultiThreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadMultiThreadedSelfTest.java
@@ -29,20 +29,19 @@
 import org.apache.ignite.failure.NoOpFailureHandler;
 import org.apache.ignite.internal.util.typedef.G;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * MultiThreaded load test for DHT preloader.
  */
+@RunWith(JUnit4.class)
 public class GridCacheDhtPreloadMultiThreadedSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /**
      * Creates new test.
      */
@@ -53,6 +52,7 @@ public GridCacheDhtPreloadMultiThreadedSelfTest() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNodeLeaveBeforePreloadingComplete() throws Exception {
         try {
             final CountDownLatch startLatch = new CountDownLatch(1);
@@ -110,6 +110,7 @@ public void testNodeLeaveBeforePreloadingComplete() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentNodesStart() throws Exception {
         try {
             multithreadedAsync(
@@ -136,7 +137,10 @@ public void testConcurrentNodesStart() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentNodesStartStop() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
         try {
             multithreadedAsync(
                 new Callable() {
@@ -177,8 +181,6 @@ public void testConcurrentNodesStartStop() throws Exception {
             }
         }
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         return cfg;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPerformanceTest.java
index 4b08a09b9557a..582404fa5d0d1 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPerformanceTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPerformanceTest.java
@@ -27,23 +27,21 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import java.util.concurrent.Callable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test cases for partitioned cache {@link GridDhtPreloader preloader}.
  */
+@RunWith(JUnit4.class)
 public class GridCacheDhtPreloadPerformanceTest extends GridCommonAbstractTest {
     /** */
     private static final int THREAD_CNT = 30;
 
-    /** IP finder. */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
@@ -74,12 +72,6 @@ public class GridCacheDhtPreloadPerformanceTest extends GridCommonAbstractTest {
             1300));
         cc1.setBackups(2);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         c.setIgfsThreadPoolSize(1);
         c.setSystemThreadPoolSize(2);
         c.setPublicThreadPoolSize(2);
@@ -101,6 +93,7 @@ public class GridCacheDhtPreloadPerformanceTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentStartPerformance() throws Exception {
 //
 //        for (int i = 0; i < 10; i++) {
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPutGetSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPutGetSelfTest.java
index 71911e8f9c965..d9e12c76bde1e 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPutGetSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadPutGetSelfTest.java
@@ -29,12 +29,13 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC;
@@ -45,6 +46,7 @@
 /**
  * Test cases for partitioned cache {@link GridDhtPreloader preloader}.
  */
+@RunWith(JUnit4.class)
 public class GridCacheDhtPreloadPutGetSelfTest extends GridCommonAbstractTest {
     /** Key count. */
     private static final int KEY_CNT = 1000;
@@ -61,9 +63,6 @@ public class GridCacheDhtPreloadPutGetSelfTest extends GridCommonAbstractTest {
     /** Preload mode. */
     private CacheRebalanceMode preloadMode;
 
-    /** IP finder. */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
@@ -79,11 +78,6 @@ public class GridCacheDhtPreloadPutGetSelfTest extends GridCommonAbstractTest {
         cacheCfg.setAffinity(new RendezvousAffinityFunction());
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
         cfg.setCacheConfiguration(cacheCfg);
 
         return cfg;
@@ -92,6 +86,7 @@ public class GridCacheDhtPreloadPutGetSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetAsync0() throws Exception {
         preloadMode = ASYNC;
         backups = 0;
@@ -102,6 +97,7 @@ public void testPutGetAsync0() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetAsync1() throws Exception {
         preloadMode = ASYNC;
         backups = 1;
@@ -112,6 +108,7 @@ public void testPutGetAsync1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetAsync2() throws Exception {
         preloadMode = ASYNC;
         backups = 2;
@@ -122,6 +119,7 @@ public void testPutGetAsync2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetSync0() throws Exception {
         preloadMode = SYNC;
         backups = 0;
@@ -132,6 +130,7 @@ public void testPutGetSync0() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetSync1() throws Exception {
         preloadMode = SYNC;
         backups = 1;
@@ -142,6 +141,7 @@ public void testPutGetSync1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetSync2() throws Exception {
         preloadMode = SYNC;
         backups = 2;
@@ -152,6 +152,7 @@ public void testPutGetSync2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetNone0() throws Exception {
         preloadMode = NONE;
         backups = 0;
@@ -162,7 +163,11 @@ public void testPutGetNone0() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetNone1() throws Exception {
+        if (MvccFeatureChecker.forcedMvcc())
+            fail("https://issues.apache.org/jira/browse/IGNITE-10261");
+
         preloadMode = NONE;
         backups = 1;
@@ -172,7 +177,11 @@ public void testPutGetNone1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutGetNone2() throws Exception {
+        if (MvccFeatureChecker.forcedMvcc())
+            fail("https://issues.apache.org/jira/browse/IGNITE-10261");
+
         preloadMode = NONE;
         backups = 2;
@@ -273,4 +282,4 @@ private void performTest() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadSelfTest.java
index 82ed95e52531f..6b8b5698ef2f2 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadSelfTest.java
@@ -43,10 +43,10 @@
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC;
@@ -62,6 +62,7 @@
 /**
  * Test cases for partitioned cache {@link GridDhtPreloader preloader}.
  */
+@RunWith(JUnit4.class)
 public class GridCacheDhtPreloadSelfTest extends GridCommonAbstractTest {
     /** Flag to print preloading events. */
     private static final boolean DEBUG = false;
@@ -90,9 +91,6 @@ public class GridCacheDhtPreloadSelfTest extends GridCommonAbstractTest {
     /** Number of partitions. */
     private int partitions = DFLT_PARTITIONS;
 
-    /** IP finder. */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /**
      *
     */
@@ -104,11 +102,6 @@ public GridCacheDhtPreloadSelfTest() {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
         cfg.setCacheConfiguration(cacheConfiguration(igniteInstanceName));
         cfg.setDeploymentMode(CONTINUOUS);
@@ -158,6 +151,7 @@ protected boolean onheapCacheEnabled() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testActivePartitionTransferSyncSameCoordinator() throws Exception {
         preloadMode = SYNC;
 
@@ -167,6 +161,7 @@ public void testActivePartitionTransferSyncSameCoordinator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testActivePartitionTransferAsyncSameCoordinator() throws Exception {
         checkActivePartitionTransfer(1000, 4, true, false);
     }
@@ -174,6 +169,7 @@ public void testActivePartitionTransferAsyncSameCoordinator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testActivePartitionTransferSyncChangingCoordinator() throws Exception {
         preloadMode = SYNC;
 
@@ -183,6 +179,7 @@ public void testActivePartitionTransferSyncChangingCoordinator() throws Exceptio
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testActivePartitionTransferAsyncChangingCoordinator() throws Exception {
         checkActivePartitionTransfer(1000, 4, false, false);
     }
@@ -190,6 +187,7 @@ public void testActivePartitionTransferAsyncChangingCoordinator() throws Excepti
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testActivePartitionTransferSyncRandomCoordinator() throws Exception {
         preloadMode = SYNC;
 
@@ -199,6 +197,7 @@ public void testActivePartitionTransferSyncRandomCoordinator() throws Exception
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testActivePartitionTransferAsyncRandomCoordinator() throws Exception {
         checkActivePartitionTransfer(1000, 4, false, true);
     }
@@ -212,7 +211,6 @@ public void testActivePartitionTransferAsyncRandomCoordinator() throws Exception
      */
    private void checkActivePartitionTransfer(int keyCnt, int nodeCnt, boolean sameCoord, boolean shuffle)
        throws Exception {
-
        try {
            Ignite ignite1 = startGrid(0);
@@ -350,6 +348,7 @@ private void checkActiveState(Iterable grids) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultiplePartitionBatchesSyncPreload() throws Exception {
         preloadMode = SYNC;
         preloadBatchSize = 100;
@@ -361,6 +360,7 @@ public void testMultiplePartitionBatchesSyncPreload() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultiplePartitionBatchesAsyncPreload() throws Exception {
         preloadBatchSize = 100;
         partitions = 2;
@@ -371,6 +371,7 @@ public void testMultiplePartitionBatchesAsyncPreload() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleNodesSyncPreloadSameCoordinator() throws Exception {
         preloadMode = SYNC;
 
@@ -380,6 +381,7 @@ public void testMultipleNodesSyncPreloadSameCoordinator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleNodesAsyncPreloadSameCoordinator() throws Exception {
         checkNodes(1000, 4, true, false);
     }
@@ -387,6 +389,7 @@ public void testMultipleNodesAsyncPreloadSameCoordinator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleNodesSyncPreloadChangingCoordinator() throws Exception {
         preloadMode = SYNC;
 
@@ -396,6 +399,7 @@ public void testMultipleNodesSyncPreloadChangingCoordinator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleNodesAsyncPreloadChangingCoordinator() throws Exception {
         checkNodes(1000, 4, false, false);
     }
@@ -403,6 +407,7 @@ public void testMultipleNodesAsyncPreloadChangingCoordinator() throws Exception
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleNodesSyncPreloadRandomCoordinator() throws Exception {
         preloadMode = SYNC;
 
@@ -412,6 +417,7 @@ public void testMultipleNodesSyncPreloadRandomCoordinator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleNodesAsyncPreloadRandomCoordinator() throws Exception {
         checkNodes(1000, 4, false, true);
     }
@@ -456,7 +462,6 @@ private void stopGrids(Iterable grids) {
      */
     private void checkNodes(int keyCnt, int nodeCnt, boolean sameCoord, boolean shuffle)
         throws Exception {
-
         try {
             Ignite ignite1 = startGrid(0);
@@ -627,4 +632,4 @@ private String top2string(Iterable grids) {
         return "Grid partition maps [" + map.toString() + ']';
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadStartStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadStartStopSelfTest.java
index e77e5c8d1f6f7..e91d81a54df85 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadStartStopSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadStartStopSelfTest.java
@@ -34,10 +34,10 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader;
 import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
 import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -50,6 +50,7 @@
 /**
  * Test cases for partitioned cache {@link GridDhtPreloader preloader}.
  */
+@RunWith(JUnit4.class)
 public class GridCacheDhtPreloadStartStopSelfTest extends GridCommonAbstractTest {
     /** */
     private static final long TEST_TIMEOUT = 5 * 60 * 1000;
@@ -81,9 +82,6 @@ public class GridCacheDhtPreloadStartStopSelfTest extends GridCommonAbstractTest
     /** */
     private int cacheCnt = DFLT_CACHE_CNT;
 
-    /** IP finder. */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /**
      *
     */
@@ -113,11 +111,6 @@ public GridCacheDhtPreloadStartStopSelfTest() {
             cacheCfgs[i] = cacheCfg;
         }
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
         cfg.setCacheConfiguration(cacheCfgs);
         cfg.setDeploymentMode(CONTINUOUS);
@@ -164,6 +157,7 @@ private void stopGrids(Iterable grids) {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testDeadlock() throws Exception {
         info("Testing deadlock...");
@@ -263,4 +257,4 @@ private void checkKeys(IgniteCache c, int cnt) {
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadUnloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadUnloadSelfTest.java
index bc5e3d472138d..d74f7422d9642 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadUnloadSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheDhtPreloadUnloadSelfTest.java
@@ -28,10 +28,10 @@
 import org.apache.ignite.lifecycle.LifecycleBean;
 import org.apache.ignite.lifecycle.LifecycleEventType;
 import org.apache.ignite.resources.IgniteInstanceResource;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -44,6 +44,7 @@
  * Test large cache counts.
  */
 @SuppressWarnings({"BusyWait"})
+@RunWith(JUnit4.class)
 public class GridCacheDhtPreloadUnloadSelfTest extends GridCommonAbstractTest {
     /** Default backups. */
     private static final int DFLT_BACKUPS = 1;
@@ -69,9 +70,6 @@ public class GridCacheDhtPreloadUnloadSelfTest extends GridCommonAbstractTest {
     /** */
     private LifecycleBean lbean;
 
-    /** IP finder. */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Network timeout. */
     private long netTimeout = 1000;
@@ -96,14 +94,9 @@ public GridCacheDhtPreloadUnloadSelfTest() {
         cc.setBackups(backups);
         cc.setAtomicityMode(TRANSACTIONAL);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
         if (lbean != null)
             c.setLifecycleBeans(lbean);
 
-        c.setDiscoverySpi(disco);
         c.setCacheConfiguration(cc);
         c.setDeploymentMode(CONTINUOUS);
         c.setNetworkTimeout(netTimeout);
@@ -121,6 +114,7 @@ public GridCacheDhtPreloadUnloadSelfTest() {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testUnloadZeroBackupsTwoNodes() throws Exception {
         preloadMode = SYNC;
         backups = 0;
@@ -148,6 +142,7 @@ public void testUnloadZeroBackupsTwoNodes() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testUnloadOneBackupTwoNodes() throws Exception {
         preloadMode = SYNC;
         backups = 1;
@@ -226,6 +221,7 @@ private void waitForUnload(long gridCnt, long cnt, long wait) throws Interrupted
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testUnloadOneBackupThreeNodes() throws Exception {
         preloadMode = SYNC;
         backups = 1;
@@ -258,6 +254,7 @@ public void testUnloadOneBackupThreeNodes() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testUnloadOneBackThreeNodesWithLifeCycleBean() throws Exception {
         preloadMode = SYNC;
         backups = 1;
@@ -324,4 +321,4 @@ private String value(int size) {
         return b.toString();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheGlobalLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheGlobalLoadTest.java
index 6024030ab6f15..ea9360dc49543 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheGlobalLoadTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCacheGlobalLoadTest.java
@@ -32,8 +32,12 @@
 import org.apache.ignite.lang.IgniteBiInClosure;
 import org.apache.ignite.lang.IgniteBiPredicate;
 import org.apache.ignite.resources.IgniteInstanceResource;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.jetbrains.annotations.Nullable;
 import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -42,6 +46,7 @@
  * Load cache test.
  */
 @SuppressWarnings("unchecked")
+@RunWith(JUnit4.class)
 public class GridCacheGlobalLoadTest extends IgniteCacheAbstractTest {
     /** */
     private static ConcurrentMap map;
@@ -69,9 +74,17 @@ public class GridCacheGlobalLoadTest extends IgniteCacheAbstractTest {
         return new NearCacheConfiguration();
     }
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
+
+        super.beforeTestsStarted();
+    }
+
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLoadCache() throws Exception {
         loadCache(false, false);
     }
@@ -79,6 +92,7 @@ public void testLoadCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLoadCacheAsyncOld() throws Exception {
         loadCache(true, true);
     }
@@ -86,6 +100,7 @@ public void testLoadCacheAsyncOld() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLoadCacheAsync() throws Exception {
         loadCache(true, false);
     }
@@ -242,4 +257,4 @@ private static class TestStore extends CacheStoreAdapter {
             fail();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledLockSelfTest.java
index 102185c24da03..ff738c6714cea 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledLockSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledLockSelfTest.java
@@ -19,10 +19,14 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedLockSelfTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedNearDisabledLockSelfTest extends GridCachePartitionedLockSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheConfiguration cacheConfiguration() {
@@ -41,7 +45,8 @@ public class GridCachePartitionedNearDisabledLockSelfTest extends GridCacheParti
     }
 
     /** {@inheritDoc} */
+    @Test
     @Override public void testLockReentrancy() throws Throwable {
         fail("https://issues.apache.org/jira/browse/IGNITE-835");
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledMetricsSelfTest.java
index 6c2da72db43b0..2abfe79b97977 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledMetricsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedNearDisabledMetricsSelfTest.java
@@ -23,6 +23,9 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
@@ -31,6 +34,7 @@
 /**
  * Metrics test for partitioned cache with disabled near cache.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedNearDisabledMetricsSelfTest extends GridCacheAbstractSelfTest {
     /** */
     private static final int GRID_CNT = 2;
@@ -77,6 +81,7 @@ public class GridCachePartitionedNearDisabledMetricsSelfTest extends GridCacheAb
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGettingRemovedKey() throws Exception {
         fail("https://issues.apache.org/jira/browse/IGNITE-819");
 
@@ -120,4 +125,4 @@ public void testGettingRemovedKey() throws Exception {
         assertEquals(0, hits);
         assertEquals(1, misses);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTopologyChangeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTopologyChangeSelfTest.java
index 2051616e85f3b..6344c12543635 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTopologyChangeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTopologyChangeSelfTest.java
@@ -41,11 +41,11 @@
 import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
@@ -58,6 +58,7 @@
 /**
  * Tests that new transactions do not start until partition exchange is completed.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedTopologyChangeSelfTest extends GridCommonAbstractTest {
     /** Partition does not belong to node. */
     private static final int PARTITION_READER = 0;
@@ -68,9 +69,6 @@ public class GridCachePartitionedTopologyChangeSelfTest extends GridCommonAbstra
     /** Node is backup for partition. */
     private static final int PARTITION_BACKUP = 2;
 
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
         fail("https://issues.apache.org/jira/browse/IGNITE-807");
@@ -80,13 +78,6 @@ public class GridCachePartitionedTopologyChangeSelfTest extends GridCommonAbstra
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
 
-        // Discovery.
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         CacheConfiguration cc = defaultCacheConfiguration();
 
         cc.setCacheMode(PARTITIONED);
@@ -103,6 +94,7 @@ public class GridCachePartitionedTopologyChangeSelfTest extends GridCommonAbstra
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTxNodeJoined() throws Exception {
         checkTxNodeJoined(PARTITION_READER);
     }
@@ -110,6 +102,7 @@ public void testNearTxNodeJoined() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPrimaryTxNodeJoined() throws Exception {
         checkTxNodeJoined(PARTITION_PRIMARY);
     }
@@ -117,6 +110,7 @@ public void testPrimaryTxNodeJoined() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBackupTxNodeJoined() throws Exception {
         checkTxNodeJoined(PARTITION_BACKUP);
     }
@@ -124,6 +118,7 @@ public void testBackupTxNodeJoined() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTxNodeLeft() throws Exception {
         checkTxNodeLeft(PARTITION_READER);
     }
@@ -131,6 +126,7 @@ public void testNearTxNodeLeft() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPrimaryTxNodeLeft() throws Exception {
         // This test does not make sense because if node is primary for some partition,
         // it will reside on node until node leaves grid.
@@ -139,6 +135,7 @@ public void testPrimaryTxNodeLeft() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBackupTxNodeLeft() throws Exception {
         checkTxNodeLeft(PARTITION_BACKUP);
     }
@@ -146,6 +143,7 @@ public void testBackupTxNodeLeft() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testExplicitLocks() throws Exception {
         try {
             startGridsMultiThreaded(2);
@@ -617,4 +615,4 @@ private List partitions(Ignite node, int partType) {
         return res;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTxOriginatingNodeFailureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTxOriginatingNodeFailureSelfTest.java
index 07bbf6cbde098..66f25bdd077ae 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTxOriginatingNodeFailureSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedTxOriginatingNodeFailureSelfTest.java
@@ -27,10 +27,14 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.internal.processors.cache.distributed.IgniteTxOriginatingNodeFailureAbstractSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests transaction consistency when originating node fails.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedTxOriginatingNodeFailureSelfTest extends
     IgniteTxOriginatingNodeFailureAbstractSelfTest {
     /** */
@@ -58,6 +62,7 @@ public class GridCachePartitionedTxOriginatingNodeFailureSelfTest extends
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxFromPrimary() throws Exception {
         ClusterNode txNode = grid(originatingNode()).localNode();
@@ -79,6 +84,7 @@ public void testTxFromPrimary() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxFromBackup() throws Exception {
         ClusterNode txNode = grid(originatingNode()).localNode();
@@ -100,6 +106,7 @@ public void testTxFromBackup() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxFromNotColocated() throws Exception {
         ClusterNode txNode = grid(originatingNode()).localNode();
@@ -122,6 +129,7 @@ public void testTxFromNotColocated() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxAllNodes() throws Exception {
         List allNodes = new ArrayList<>(GRID_CNT);
@@ -148,4 +156,4 @@ public void testTxAllNodes() throws Exception {
         testTxOriginatingNodeFails(keys, false);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedUnloadEventsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedUnloadEventsSelfTest.java
index d00dd7a71dcc3..9db4ae8e5d7f0 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedUnloadEventsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionedUnloadEventsSelfTest.java
@@ -32,10 +32,10 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
@@ -44,10 +44,8 @@
 /**
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedUnloadEventsSelfTest extends GridCommonAbstractTest {
-    /** */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int EVENTS_COUNT = 40;
@@ -58,10 +56,6 @@ public class GridCachePartitionedUnloadEventsSelfTest extends GridCommonAbstract
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-        disco.setIpFinder(ipFinder);
-        cfg.setDiscoverySpi(disco);
-
         CacheConfiguration ccfg = cacheConfiguration();
 
         CacheConfiguration ccfgEvtsDisabled = new CacheConfiguration<>(ccfg);
@@ -89,6 +83,7 @@ protected CacheConfiguration cacheConfiguration() {
     /**
     * @throws Exception if failed.
*/ + @Test public void testUnloadEvents() throws Exception { final Ignite g1 = startGrid("g1"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidationTest.java index fcc1293a959f7..90f1477cd05cf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidationTest.java @@ -51,14 +51,19 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCachePartitionsStateValidationTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -102,6 +107,7 @@ public class GridCachePartitionsStateValidationTest extends GridCommonAbstractTe * * @throws Exception If failed. */ + @Test public void testValidationIfPartitionCountersAreInconsistent() throws Exception { IgniteEx ignite = (IgniteEx) startGrids(2); ignite.cluster().active(true); @@ -134,7 +140,11 @@ public void testValidationIfPartitionCountersAreInconsistent() throws Exception * * @throws Exception If failed. 
*/ + @Test public void testPartitionCountersConsistencyOnExchange() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10766"); + IgniteEx ignite = (IgniteEx) startGrids(4); ignite.cluster().active(true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidatorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidatorSelfTest.java index 9cf296a8eeae6..9d21162f5cb34 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidatorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridCachePartitionsStateValidatorSelfTest.java @@ -33,12 +33,16 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.mockito.Matchers; import org.mockito.Mockito; /** * Test correct behaviour of {@link GridDhtPartitionsStateValidator} class. */ +@RunWith(JUnit4.class) public class GridCachePartitionsStateValidatorSelfTest extends GridCommonAbstractTest { /** Mocks and stubs. 
*/ private final UUID localNodeId = UUID.randomUUID(); @@ -100,6 +104,7 @@ private GridDhtPartitionsSingleMessage from(@Nullable Map queue = new ConcurrentLinkedQueue<>(); - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -77,11 +76,7 @@ public class IgniteAtomicLongChangingTopologySelfTest extends GridCommonAbstract @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi).setNetworkTimeout(30_000); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setNetworkTimeout(30_000); AtomicConfiguration atomicCfg = new AtomicConfiguration(); atomicCfg.setCacheMode(PARTITIONED); @@ -108,6 +103,7 @@ public class IgniteAtomicLongChangingTopologySelfTest extends GridCommonAbstract /** * @throws Exception If failed. */ + @Test public void testQueueCreateNodesJoin() throws Exception { CountDownLatch startLatch = new CountDownLatch(GRID_CNT); final AtomicBoolean run = new AtomicBoolean(true); @@ -136,6 +132,7 @@ public void testQueueCreateNodesJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientAtomicLongCreateCloseFailover() throws Exception { testFailoverWithClient(new IgniteInClosure() { @Override public void apply(Ignite ignite) { @@ -151,6 +148,7 @@ public void testClientAtomicLongCreateCloseFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientQueueCreateCloseFailover() throws Exception { testFailoverWithClient(new IgniteInClosure() { @Override public void apply(Ignite ignite) { @@ -172,6 +170,7 @@ public void testClientQueueCreateCloseFailover() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClientSetCreateCloseFailover() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-9015"); @@ -181,6 +180,7 @@ public void testClientSetCreateCloseFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientCollocatedSetCreateCloseFailover() throws Exception { checkClientSetCreateCloseFailover(true); } @@ -284,6 +284,7 @@ private IgniteInternalFuture restartThread(final AtomicBoolean finished) { /** * @throws Exception If failed. */ + @Test public void testIncrementConsistency() throws Exception { startGrids(GRID_CNT); @@ -323,6 +324,7 @@ public void testIncrementConsistency() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueClose() throws Exception { startGrids(GRID_CNT); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheClearDuringRebalanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheClearDuringRebalanceTest.java index 8561c5c5211b9..1ec27d11dbd71 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheClearDuringRebalanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheClearDuringRebalanceTest.java @@ -24,23 +24,28 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; 
+import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheClearDuringRebalanceTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache"; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + if(MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-7952"); + } /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { @@ -53,12 +58,6 @@ public class IgniteCacheClearDuringRebalanceTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - cfg.setCacheConfiguration(new CacheConfiguration(CACHE_NAME) .setCacheMode(PARTITIONED)); @@ -68,6 +67,7 @@ public class IgniteCacheClearDuringRebalanceTest extends GridCommonAbstractTest /** * @throws Exception If failed. 
*/ + @Test public void testClearAll() throws Exception { final IgniteEx node = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCommitDelayTxRecoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCommitDelayTxRecoveryTest.java index 7b8854d40d2ac..2e629c63bde4c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCommitDelayTxRecoveryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCommitDelayTxRecoveryTest.java @@ -38,12 +38,12 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -55,10 +55,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheCommitDelayTxRecoveryTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SRVS = 4; @@ -78,8 +76,6 @@ public class IgniteCacheCommitDelayTxRecoveryTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = 
super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); cfg.setClientMode(client); @@ -97,6 +93,7 @@ public class IgniteCacheCommitDelayTxRecoveryTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testRecovery1() throws Exception { checkRecovery(1, false); } @@ -104,6 +101,7 @@ public void testRecovery1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRecovery2() throws Exception { checkRecovery(2, false); } @@ -111,6 +109,7 @@ public void testRecovery2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRecoveryStoreEnabled1() throws Exception { checkRecovery(1, true); } @@ -118,6 +117,7 @@ public void testRecoveryStoreEnabled1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRecoveryStoreEnabled2() throws Exception { checkRecovery(2, true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheConcurrentPutGetRemove.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheConcurrentPutGetRemove.java index a83e9998ca699..5686954090c50 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheConcurrentPutGetRemove.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheConcurrentPutGetRemove.java @@ -27,12 +27,11 @@ import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -41,22 +40,11 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheConcurrentPutGetRemove extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 4; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { super.beforeTestsStarted(); @@ -67,6 +55,7 @@ public class IgniteCacheConcurrentPutGetRemove extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPutGetRemoveAtomic() throws Exception { putGetRemove(cacheConfiguration(ATOMIC, 1)); } @@ -74,6 +63,7 @@ public void testPutGetRemoveAtomic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutGetRemoveTx() throws Exception { putGetRemove(cacheConfiguration(TRANSACTIONAL, 1)); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCrossCacheTxFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCrossCacheTxFailoverTest.java index 40f9dc4f3ac90..aae3e1650aafb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCrossCacheTxFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheCrossCacheTxFailoverTest.java @@ -40,15 +40,15 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -63,10 +63,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheCrossCacheTxFailoverTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String CACHE1 = "cache1"; @@ -86,8 +84,6 @@ public class 
IgniteCacheCrossCacheTxFailoverTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (igniteInstanceName.equals(getTestIgniteInstanceName(GRID_CNT - 1))) cfg.setClientMode(true); @@ -135,6 +131,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name, /** * @throws Exception If failed. */ + @Test public void testCrossCachePessimisticTxFailover() throws Exception { crossCacheTxFailover(PARTITIONED, true, PESSIMISTIC, REPEATABLE_READ); } @@ -142,6 +139,7 @@ public void testCrossCachePessimisticTxFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCachePessimisticTxFailoverDifferentAffinity() throws Exception { crossCacheTxFailover(PARTITIONED, false, PESSIMISTIC, REPEATABLE_READ); } @@ -149,6 +147,7 @@ public void testCrossCachePessimisticTxFailoverDifferentAffinity() throws Except /** * @throws Exception If failed. */ + @Test public void testCrossCacheOptimisticTxFailover() throws Exception { crossCacheTxFailover(PARTITIONED, true, OPTIMISTIC, REPEATABLE_READ); } @@ -156,6 +155,7 @@ public void testCrossCacheOptimisticTxFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCacheOptimisticSerializableTxFailover() throws Exception { crossCacheTxFailover(PARTITIONED, true, OPTIMISTIC, SERIALIZABLE); } @@ -163,6 +163,7 @@ public void testCrossCacheOptimisticSerializableTxFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCacheOptimisticTxFailoverDifferentAffinity() throws Exception { crossCacheTxFailover(PARTITIONED, false, OPTIMISTIC, REPEATABLE_READ); } @@ -170,6 +171,7 @@ public void testCrossCacheOptimisticTxFailoverDifferentAffinity() throws Excepti /** * @throws Exception If failed. 
*/ + @Test public void testCrossCachePessimisticTxFailoverReplicated() throws Exception { crossCacheTxFailover(REPLICATED, true, PESSIMISTIC, REPEATABLE_READ); } @@ -177,6 +179,7 @@ public void testCrossCachePessimisticTxFailoverReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCacheOptimisticTxFailoverReplicated() throws Exception { crossCacheTxFailover(REPLICATED, true, OPTIMISTIC, REPEATABLE_READ); } @@ -184,6 +187,7 @@ public void testCrossCacheOptimisticTxFailoverReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCrossCachePessimisticTxFailoverDifferentAffinityReplicated() throws Exception { crossCacheTxFailover(PARTITIONED, false, PESSIMISTIC, REPEATABLE_READ); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheLockFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheLockFailoverSelfTest.java index f813ef89000fe..e97931f9478a4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheLockFailoverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheLockFailoverSelfTest.java @@ -31,12 +31,24 @@ import org.apache.ignite.lang.IgniteFutureTimeoutException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheLockFailoverSelfTest extends GridCacheAbstractSelfTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + + 
super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 2; @@ -85,6 +97,7 @@ public class IgniteCacheLockFailoverSelfTest extends GridCacheAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testLockFailover() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -149,6 +162,7 @@ public Object call() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUnlockPrimaryLeft() throws Exception { GridCacheAdapter cache = ((IgniteKernal)grid(0)).internalCache(DEFAULT_CACHE_NAME); @@ -166,4 +180,4 @@ public void testUnlockPrimaryLeft() throws Exception { startGrid(1); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheMultiTxLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheMultiTxLockSelfTest.java index b5771054a56cf..abef429530146 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheMultiTxLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheMultiTxLockSelfTest.java @@ -22,6 +22,7 @@ import java.util.TreeMap; import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.TimeUnit; +import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -29,12 +30,12 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager; -import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; -import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -43,28 +44,30 @@ /** * Tests explicit lock. */ +@RunWith(JUnit4.class) public class IgniteCacheMultiTxLockSelfTest extends GridCommonAbstractTest { /** */ public static final String CACHE_NAME = "part_cache"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private volatile boolean run = true; /** */ private boolean client; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration c = super.getConfiguration(igniteInstanceName); + /** Unexpected lock error. */ + private volatile Throwable err; - TcpDiscoverySpi disco = new TcpDiscoverySpi(); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); - disco.setIpFinder(ipFinder); + super.setUp(); + } - c.setDiscoverySpi(disco); + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration c = super.getConfiguration(igniteInstanceName); CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); @@ -95,6 +98,7 @@ public class IgniteCacheMultiTxLockSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testExplicitLockOneKey() throws Exception { checkExplicitLock(1, false); } @@ -102,6 +106,7 @@ public void testExplicitLockOneKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExplicitLockManyKeys() throws Exception { checkExplicitLock(4, false); } @@ -109,6 +114,7 @@ public void testExplicitLockManyKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testExplicitLockManyKeysWithClient() throws Exception { checkExplicitLock(4, true); } @@ -121,6 +127,8 @@ public void testExplicitLockManyKeysWithClient() throws Exception { public void checkExplicitLock(int keys, boolean testClient) throws Exception { Collection threads = new ArrayList<>(); + err = null; + try { // Start grid 1. IgniteEx grid1 = startGrid(1); @@ -172,6 +180,8 @@ public void checkExplicitLock(int keys, boolean testClient) throws Exception { assertEquals("txMap is not empty:" + i, 0, tm.idMapSize()); } + + assertNull(err); } finally { stopAllGrids(); @@ -198,7 +208,6 @@ private void stopThreads(Iterable threads) { * @param keys Number of keys. * @return Running thread. 
     */
-    @SuppressWarnings("TypeMayBeWeakened")
     private Thread runCacheOperations(final IgniteInternalCache cache, final int keys) {
         Thread t = new Thread() {
             @Override public void run() {
@@ -216,7 +225,7 @@ private Thread runCacheOperations(final IgniteInternalCache cache
                     else
                         cache.removeAll(vals.keySet());
                 }
-                catch (Exception e) {
+                catch (IgniteCheckedException e) {
                     U.error(log(), "Failed cache operation.", e);
                 }
                 finally {
@@ -225,8 +234,12 @@ private Thread runCacheOperations(final IgniteInternalCache cache
                         U.sleep(100);
                     }
-                    catch (Exception e){
+                    catch (Throwable e){
                         U.error(log(), "Failed unlock.", e);
+
+                        err = e;
+
+                        return;
                     }
                 }
             }
@@ -254,4 +267,4 @@ private TreeMap generateValues(int cnt) {
 
         return res;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedBackupNodeFailureRecoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedBackupNodeFailureRecoveryTest.java
index 98520ab01a0de..6863601a6e7d6 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedBackupNodeFailureRecoveryTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedBackupNodeFailureRecoveryTest.java
@@ -33,6 +33,9 @@
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest;
 import org.apache.ignite.internal.util.typedef.PA;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -43,8 +46,9 @@
 /** */
+@RunWith(JUnit4.class)
 public class IgniteCachePartitionedBackupNodeFailureRecoveryTest extends IgniteCacheAbstractTest {
-    /** {@inheritDoc}*/
+    /** {@inheritDoc} */
     @Override protected int gridCount() {
         return 3;
     }
@@ -80,6 +84,7 @@ public class IgniteCachePartitionedBackupNodeFailureRecoveryTest extends IgniteC
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testBackUpFail() throws Exception {
         final IgniteEx node1 = grid(0);
         final IgniteEx node2 = grid(1);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest.java
index 23304a47130c1..bb3fff05f5264 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest.java
@@ -34,4 +34,4 @@ public class IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest
 
         return ccfg;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePrimaryNodeFailureRecoveryAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePrimaryNodeFailureRecoveryAbstractTest.java
index 00f972921eb1b..48755e8e9eb91 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePrimaryNodeFailureRecoveryAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePrimaryNodeFailureRecoveryAbstractTest.java
@@ -19,8 +19,11 @@
 import java.util.ArrayList;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.List;
+import java.util.Optional;
 import java.util.UUID;
+import java.util.stream.StreamSupport;
 import org.apache.ignite.Ignite;
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.IgniteException;
@@ -33,10 +36,13 @@
 import org.apache.ignite.cluster.ClusterNode;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.internal.IgniteEx;
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.managers.communication.GridIoMessage;
+import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest;
 import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx;
@@ -53,8 +59,12 @@
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.internal.processors.cache.ExchangeContext.IGNITE_EXCHANGE_COMPATIBILITY_VER_1;
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
@@ -65,6 +75,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteCachePrimaryNodeFailureRecoveryAbstractTest extends IgniteCacheAbstractTest {
     /** {@inheritDoc} */
     @Override protected int gridCount() {
@@ -113,33 +124,47 @@ public abstract class IgniteCachePrimaryNodeFailureRecoveryAbstractTest extends
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryNodeFailureRecovery1() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryNodeFailure(false, false, true);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryNodeFailureRecovery2() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryNodeFailure(true, false, true);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryNodeFailureRollback1() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryNodeFailure(false, true, true);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryNodeFailureRollback2() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryNodeFailure(true, true, true);
     }
 
+
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryNodeFailureRecovery1() throws Exception {
         primaryNodeFailure(false, false, false);
     }
@@ -147,6 +172,7 @@ public void testPessimisticPrimaryNodeFailureRecovery1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryNodeFailureRecovery2() throws Exception {
         primaryNodeFailure(true, false, false);
     }
@@ -154,6 +180,7 @@ public void testPessimisticPrimaryNodeFailureRecovery2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryNodeFailureRollback1() throws Exception {
         primaryNodeFailure(false, true, false);
     }
@@ -161,6 +188,7 @@ public void testPessimisticPrimaryNodeFailureRollback1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryNodeFailureRollback2() throws Exception {
         primaryNodeFailure(true, true, false);
     }
@@ -245,8 +273,8 @@ private void primaryNodeFailure(boolean locBackupKey, final boolean rollback, bo
         GridTestUtils.waitForCondition(new GridAbsPredicate() {
             @Override public boolean apply() {
                 try {
-                    checkKey(key1, rollback ? null : key1Nodes);
-                    checkKey(key2, rollback ? null : key2Nodes);
+                    checkKey(key1, rollback, key1Nodes, 0);
+                    checkKey(key2, rollback, key2Nodes, 0);
 
                     return true;
                 }
@@ -258,41 +286,54 @@ private void primaryNodeFailure(boolean locBackupKey, final boolean rollback, bo
             }
         }, 5000);
 
-        checkKey(key1, rollback ? null : key1Nodes);
-        checkKey(key2, rollback ? null : key2Nodes);
+        checkKey(key1, rollback, key1Nodes, 0);
+        checkKey(key2, rollback, key2Nodes, 0);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryAndOriginatingNodeFailureRecovery1() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryAndOriginatingNodeFailure(false, false, true);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryAndOriginatingNodeFailureRecovery2() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryAndOriginatingNodeFailure(true, false, true);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryAndOriginatingNodeFailureRollback1() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryAndOriginatingNodeFailure(false, true, true);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticPrimaryAndOriginatingNodeFailureRollback2() throws Exception {
+        if (atomicityMode() == TRANSACTIONAL_SNAPSHOT) return;
+
         primaryAndOriginatingNodeFailure(true, true, true);
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryAndOriginatingNodeFailureRecovery1() throws Exception {
         primaryAndOriginatingNodeFailure(false, false, false);
     }
@@ -300,6 +341,7 @@ public void testPessimisticPrimaryAndOriginatingNodeFailureRecovery1() throws Ex
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryAndOriginatingNodeFailureRecovery2() throws Exception {
         primaryAndOriginatingNodeFailure(true, false, false);
     }
@@ -307,6 +349,7 @@ public void testPessimisticPrimaryAndOriginatingNodeFailureRecovery2() throws Ex
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryAndOriginatingNodeFailureRollback1() throws Exception {
         primaryAndOriginatingNodeFailure(false, true, false);
     }
@@ -314,6 +357,7 @@ public void testPessimisticPrimaryAndOriginatingNodeFailureRollback1() throws Ex
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticPrimaryAndOriginatingNodeFailureRollback2() throws Exception {
         primaryAndOriginatingNodeFailure(true, true, false);
     }
@@ -327,14 +371,14 @@ public void testPessimisticPrimaryAndOriginatingNodeFailureRollback2() throws Ex
     private void primaryAndOriginatingNodeFailure(final boolean locBackupKey,
         final boolean rollback,
         boolean optimistic)
-        throws Exception
-    {
+        throws Exception {
         // TODO IGNITE-6174: when exchanges can be merged test fails because of IGNITE-6174.
         System.setProperty(IGNITE_EXCHANGE_COMPATIBILITY_VER_1, "true");
 
         try {
-            IgniteCache cache0 = jcache(0);
-            IgniteCache cache2 = jcache(2);
+            int orig = 0;
+
+            IgniteCache origCache = jcache(orig);
 
             Affinity aff = ignite(0).affinity(DEFAULT_CACHE_NAME);
@@ -342,7 +386,7 @@ private void primaryAndOriginatingNodeFailure(final boolean locBackupKey,
 
             for (int key = 0; key < 10_000; key++) {
                 if (aff.isPrimary(ignite(1).cluster().localNode(), key)) {
-                    if (locBackupKey == aff.isBackup(ignite(0).cluster().localNode(), key)) {
+                    if (locBackupKey == aff.isBackup(ignite(orig).cluster().localNode(), key)) {
                         key0 = key;
 
                         break;
@@ -353,27 +397,27 @@ private void primaryAndOriginatingNodeFailure(final boolean locBackupKey,
             assertNotNull(key0);
 
             final Integer key1 = key0;
-            final Integer key2 = primaryKey(cache2);
+            final Integer key2 = primaryKey(jcache(2));
 
-            int backups = cache0.getConfiguration(CacheConfiguration.class).getBackups();
+            int backups = origCache.getConfiguration(CacheConfiguration.class).getBackups();
 
             final Collection key1Nodes =
-                (locBackupKey && backups < 2) ? null : aff.mapKeyToPrimaryAndBackups(key1);
+                (locBackupKey && backups < 2) ? Collections.emptyList() : aff.mapKeyToPrimaryAndBackups(key1);
             final Collection key2Nodes = aff.mapKeyToPrimaryAndBackups(key2);
 
-            TestCommunicationSpi commSpi = (TestCommunicationSpi)ignite(0).configuration().getCommunicationSpi();
+            TestCommunicationSpi commSpi = (TestCommunicationSpi)ignite(orig).configuration().getCommunicationSpi();
 
-            IgniteTransactions txs = ignite(0).transactions();
+            IgniteTransactions txs = ignite(orig).transactions();
 
             Transaction tx = txs.txStart(optimistic ? OPTIMISTIC : PESSIMISTIC, REPEATABLE_READ);
 
             log.info("Put key1 [key1=" + key1 + ", nodes=" + U.nodeIds(aff.mapKeyToPrimaryAndBackups(key1)) + ']');
 
-            cache0.put(key1, key1);
+            origCache.put(key1, key1);
 
             log.info("Put key2 [key2=" + key2 + ", nodes=" + U.nodeIds(aff.mapKeyToPrimaryAndBackups(key2)) + ']');
 
-            cache0.put(key2, key2);
+            origCache.put(key2, key2);
 
             log.info("Start prepare.");
@@ -399,13 +443,13 @@ private void primaryAndOriginatingNodeFailure(final boolean locBackupKey,
 
             log.info("Stop originating node.");
 
-            stopGrid(0);
+            stopGrid(orig);
 
             GridTestUtils.waitForCondition(new GridAbsPredicate() {
                 @Override public boolean apply() {
                     try {
-                        checkKey(key1, rollback ? null : key1Nodes);
-                        checkKey(key2, rollback ? null : key2Nodes);
+                        checkKey(key1, rollback, key1Nodes, 0);
+                        checkKey(key2, rollback, key2Nodes, 0);
 
                         return true;
                     }
                     catch (AssertionError e) {
@@ -416,24 +460,23 @@ private void primaryAndOriginatingNodeFailure(final boolean locBackupKey,
                 }
             }, 5000);
 
-            checkKey(key1, rollback ? null : key1Nodes);
-            checkKey(key2, rollback ? null : key2Nodes);
+            checkKey(key1, rollback, key1Nodes, 0);
+            checkKey(key2, rollback, key2Nodes, 0);
         }
         finally {
             System.clearProperty(IGNITE_EXCHANGE_COMPATIBILITY_VER_1);
         }
     }
 
-    /**
-     * @param key Key.
-     * @param keyNodes Key nodes.
-     */
-    private void checkKey(Integer key, Collection keyNodes) {
-        if (keyNodes == null) {
-            for (Ignite ignite : G.allGrids()) {
-                IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
+    /** */
+    private void checkKey(Integer key, boolean rollback, Collection keyNodes, long initUpdCntr) {
+        if (rollback) {
+            if (atomicityMode() != TRANSACTIONAL_SNAPSHOT) {
+                for (Ignite ignite : G.allGrids()) {
+                    IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
 
-                assertNull("Unexpected value for: " + ignite.name(), cache.localPeek(key));
+                    assertNull("Unexpected value for: " + ignite.name(), cache.localPeek(key));
+                }
             }
 
             for (Ignite ignite : G.allGrids()) {
@@ -441,10 +484,34 @@ private void checkKey(Integer key, Collection keyNodes) {
 
                 assertNull("Unexpected value for: " + ignite.name(), cache.get(key));
             }
+
+            boolean found = keyNodes.isEmpty();
+
+            long cntr0 = -1;
+
+            for (ClusterNode node : keyNodes) {
+                try {
+                    long nodeCntr = updateCoutner(grid(node), key);
+
+                    found = true;
+
+                    if (cntr0 == -1)
+                        cntr0 = nodeCntr;
+
+                    assertEquals(cntr0, nodeCntr);
+                }
+                catch (IgniteIllegalStateException ignore) {
+                    // No-op.
+                }
+            }
+
+            assertTrue("Failed to find key node.", found);
         }
-        else {
+        else if (!keyNodes.isEmpty()) {
             boolean found = false;
 
+            long cntr0 = -1;
+
             for (ClusterNode node : keyNodes) {
                 try {
                     Ignite ignite = grid(node);
@@ -454,6 +521,13 @@ private void checkKey(Integer key, Collection keyNodes) {
                         ignite.cache(DEFAULT_CACHE_NAME);
 
                     assertEquals("Unexpected value for: " + ignite.name(), key, key);
+
+                    long nodeCntr = updateCoutner(ignite, key);
+
+                    if (cntr0 == -1)
+                        cntr0 = nodeCntr;
+
+                    assertTrue(nodeCntr == cntr0 && nodeCntr > initUpdCntr);
                 }
                 catch (IgniteIllegalStateException ignore) {
                     // No-op.
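The reworked checkKey() above now cross-checks partition update counters on every node that owns the key, using the dataStore()/updateCoutner() helpers added later in this diff: stream over the node's per-partition data stores, keep the one whose partition id matches the key's partition, and read its counter with a default of 0 when the partition is not held locally. The same lookup pattern can be sketched in plain JDK terms; the Store interface and all names below are hypothetical stand-ins for Ignite's IgniteCacheOffheapManager.CacheDataStore API, assumed only for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

/**
 * Plain-JDK sketch of the per-partition counter lookup used by the test's helpers.
 * Store is a hypothetical stand-in for IgniteCacheOffheapManager.CacheDataStore.
 */
public class DataStoreLookup {
    /** Minimal stand-in for a per-partition cache data store. */
    interface Store {
        int partId();
        long updateCounter();
    }

    /** Finds the data store owning the given partition, if this node holds it. */
    static Optional<Store> dataStore(List<Store> stores, int part) {
        return stores.stream()
            .filter(ds -> ds.partId() == part)
            .findFirst();
    }

    /** Reads the partition's update counter, defaulting to 0 when the partition is absent. */
    static long updateCounter(List<Store> stores, int part) {
        return dataStore(stores, part).map(Store::updateCounter).orElse(0L);
    }

    /** Creates a fixed-value store for demonstration. */
    static Store store(int part, long cntr) {
        return new Store() {
            @Override public int partId() { return part; }
            @Override public long updateCounter() { return cntr; }
        };
    }

    public static void main(String[] args) {
        List<Store> stores = Arrays.asList(store(0, 5L), store(1, 7L));

        System.out.println(updateCounter(stores, 1)); // partition held locally
        System.out.println(updateCounter(stores, 9)); // partition not held: falls back to 0
    }
}
```

The orElse(0L) default mirrors why the test treats a missing partition as counter 0 rather than failing: after a node stop, some grids no longer hold the partition at all.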
@@ -498,6 +572,23 @@ private void waitPrepared(Ignite ignite) throws Exception {
         assertTrue("Failed to wait for tx.", wait);
     }
 
+    /** */
+    private static long updateCoutner(Ignite ign, Object key) {
+        return dataStore(((IgniteEx)ign).cachex(DEFAULT_CACHE_NAME).context(), key)
+            .map(IgniteCacheOffheapManager.CacheDataStore::updateCounter)
+            .orElse(0L);
+    }
+
+    /** */
+    private static Optional dataStore(
+        GridCacheContext cctx, Object key) {
+        int p = cctx.affinity().partition(key);
+
+        return StreamSupport.stream(cctx.offheap().cacheDataStores().spliterator(), false)
+            .filter(ds -> ds.partId() == p)
+            .findFirst();
+    }
+
     /**
      *
      */
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAbstractSelfTest.java
index fbb7c3ae65ffc..14c739cb6af2f 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAbstractSelfTest.java
@@ -51,11 +51,10 @@
 import org.apache.ignite.internal.util.typedef.X;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -64,10 +63,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public abstract class IgniteCachePutRetryAbstractSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     protected static final long DURATION = 60_000;
@@ -120,8 +117,6 @@ protected CacheConfiguration cacheConfiguration(boolean evict, boolean store) th
 
         cfg.setIncludeEventTypes();
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
 
         AtomicConfiguration acfg = new AtomicConfiguration();
@@ -137,8 +132,6 @@ protected CacheConfiguration cacheConfiguration(boolean evict, boolean store) th
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
-        System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true");
-
         super.beforeTestsStarted();
 
         startGridsMultiThreaded(GRID_CNT);
@@ -146,7 +139,9 @@ protected CacheConfiguration cacheConfiguration(boolean evict, boolean store) th
     /** {@inheritDoc} */
     @Override protected void afterTestsStopped() throws Exception {
-        System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL);
+        super.afterTestsStopped();
+
+        stopAllGrids();
     }
 
     /** {@inheritDoc} */
@@ -169,6 +164,7 @@ protected CacheConfiguration cacheConfiguration(boolean evict, boolean store) th
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testPut() throws Exception {
        checkRetry(Test.PUT, false, false);
     }
@@ -176,6 +172,7 @@ public void testPut() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testGetAndPut() throws Exception {
         checkRetry(Test.GET_AND_PUT, false, false);
     }
@@ -183,6 +180,7 @@ public void testGetAndPut() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testPutStoreEnabled() throws Exception {
         checkRetry(Test.PUT, false, true);
     }
@@ -190,6 +188,7 @@ public void testPutStoreEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testPutAll() throws Exception {
         checkRetry(Test.PUT_ALL, false, false);
     }
@@ -197,6 +196,7 @@ public void testPutAll() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testPutAsync() throws Exception {
         checkRetry(Test.PUT_ASYNC, false, false);
     }
@@ -204,6 +204,7 @@ public void testPutAsync() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testPutAsyncStoreEnabled() throws Exception {
         checkRetry(Test.PUT_ASYNC, false, true);
     }
@@ -211,6 +212,7 @@ public void testPutAsyncStoreEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testInvoke() throws Exception {
         checkRetry(Test.INVOKE, false, false);
     }
@@ -218,6 +220,7 @@ public void testInvoke() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testInvokeAll() throws Exception {
         checkRetry(Test.INVOKE_ALL, false, false);
     }
@@ -225,6 +228,7 @@ public void testInvokeAll() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testInvokeAllEvict() throws Exception {
         checkRetry(Test.INVOKE_ALL, true, false);
     }
@@ -468,6 +472,7 @@ private void checkNoAtomicFutures() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testFailsWithNoRetries() throws Exception {
         checkFailsWithNoRetries(false);
     }
@@ -475,6 +480,7 @@ public void testFailsWithNoRetries() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testFailsWithNoRetriesAsync() throws Exception {
         checkFailsWithNoRetries(true);
     }
@@ -628,4 +634,4 @@ private static class TestCacheStore extends CacheStoreAdapter {
             STORE_MAP.remove(key);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAtomicSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAtomicSelfTest.java
index d7e9981e67e19..fc37e5f4dfb65 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAtomicSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryAtomicSelfTest.java
@@ -28,6 +28,8 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -37,6 +39,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCachePutRetryAtomicSelfTest extends IgniteCachePutRetryAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected CacheAtomicityMode atomicityMode() {
@@ -46,6 +49,7 @@ public class IgniteCachePutRetryAtomicSelfTest extends IgniteCachePutRetryAbstra
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testPutInsideTransaction() throws Exception {
         ignite(0).createCache(cacheConfiguration(false, false));
@@ -104,4 +108,4 @@ public void testPutInsideTransaction() throws Exception {
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryTransactionalSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryTransactionalSelfTest.java
index 161025fdb1e8d..8c7f859334093 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryTransactionalSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCachePutRetryTransactionalSelfTest.java
@@ -34,6 +34,8 @@
 import org.apache.ignite.cache.CacheEntryProcessor;
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.util.typedef.internal.U;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.testframework.GridTestUtils.runAsync;
@@ -42,6 +44,7 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCachePutRetryTransactionalSelfTest extends IgniteCachePutRetryAbstractSelfTest {
     /** */
     private static final int FACTOR = 1000;
@@ -54,6 +57,7 @@ public class IgniteCachePutRetryTransactionalSelfTest extends IgniteCachePutRetr
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testAtomicLongRetries() throws Exception {
         final AtomicBoolean finished = new AtomicBoolean();
@@ -91,6 +95,7 @@ public void testAtomicLongRetries() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testExplicitTransactionRetriesSingleValue() throws Exception {
         checkRetry(Test.TX_PUT, false, false);
     }
@@ -98,6 +103,7 @@ public void testExplicitTransactionRetriesSingleValue() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testExplicitTransactionRetriesSingleValueStoreEnabled() throws Exception {
         checkRetry(Test.TX_PUT, false, true);
     }
@@ -105,6 +111,7 @@ public void testExplicitTransactionRetriesSingleValueStoreEnabled() throws Excep
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testExplicitTransactionRetries() throws Exception {
         explicitTransactionRetries(false, false);
     }
@@ -112,6 +119,7 @@ public void testExplicitTransactionRetries() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testExplicitTransactionRetriesStoreEnabled() throws Exception {
         explicitTransactionRetries(false, true);
     }
@@ -119,6 +127,7 @@ public void testExplicitTransactionRetriesStoreEnabled() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testExplicitTransactionRetriesEvictionEnabled() throws Exception {
         explicitTransactionRetries(true, false);
     }
@@ -202,6 +211,7 @@ public void explicitTransactionRetries(boolean evict, boolean store) throws Exce
     /**
      * @throws Exception If failed.
      */
+    @org.junit.Test
     public void testOriginatingNodeFailureForcesOnePhaseCommitDataCleanup() throws Exception {
         ignite(0).createCache(cacheConfiguration(false, false));
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheStartWithLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheStartWithLoadTest.java
index acccc5be07fed..594db17d17ecb 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheStartWithLoadTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheStartWithLoadTest.java
@@ -38,10 +38,14 @@
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
 import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheStartWithLoadTest extends GridCommonAbstractTest {
     /** */
     static final String CACHE_NAME = "tx_repl";
@@ -75,6 +79,7 @@ protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testNoRebalanceDuringCacheStart() throws Exception {
         IgniteEx crd = (IgniteEx)startGrids(4);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheTxRecoveryRollbackTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheTxRecoveryRollbackTest.java
index 11c4c67f415c8..59f2a3256675d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheTxRecoveryRollbackTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCacheTxRecoveryRollbackTest.java
@@ -49,13 +49,13 @@
 import org.apache.ignite.lang.IgniteBiPredicate;
 import org.apache.ignite.plugin.extensions.communication.Message;
 import org.apache.ignite.resources.LoggerResource;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC;
@@ -67,10 +67,8 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheTxRecoveryRollbackTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static ConcurrentHashMap storeMap = new ConcurrentHashMap<>();
@@ -83,8 +81,6 @@ public class IgniteCacheTxRecoveryRollbackTest extends GridCommonAbstractTest {
 
         cfg.setConsistentId(gridName);
 
-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
-
         TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi();
 
         cfg.setCommunicationSpi(commSpi);
@@ -115,6 +111,7 @@ public class IgniteCacheTxRecoveryRollbackTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTx1Implicit() throws Exception {
         nearTx1(null);
     }
@@ -122,6 +119,7 @@ public void testNearTx1Implicit() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTx1Optimistic() throws Exception {
         nearTx1(OPTIMISTIC);
     }
@@ -129,6 +127,7 @@ public void testNearTx1Optimistic() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTx1Pessimistic() throws Exception {
         nearTx1(PESSIMISTIC);
     }
@@ -213,6 +212,7 @@ private void nearTx1(final TransactionConcurrency concurrency) throws Exception
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTx2Implicit() throws Exception {
         nearTx2(null);
     }
@@ -220,6 +220,7 @@ public void testNearTx2Implicit() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTx2Optimistic() throws Exception {
         nearTx2(OPTIMISTIC);
     }
@@ -227,6 +228,7 @@ public void testNearTx2Optimistic() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNearTx2Pessimistic() throws Exception {
         nearTx2(PESSIMISTIC);
     }
@@ -319,6 +321,7 @@ private void nearTx2(final TransactionConcurrency concurrency) throws Exception
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxWithStoreImplicit() throws Exception {
         txWithStore(null, true);
     }
@@ -326,6 +329,7 @@ public void testTxWithStoreImplicit() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxWithStoreOptimistic() throws Exception {
         txWithStore(OPTIMISTIC, true);
     }
@@ -333,6 +337,7 @@ public void testTxWithStoreOptimistic() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxWithStorePessimistic() throws Exception {
         txWithStore(PESSIMISTIC, true);
     }
@@ -340,6 +345,7 @@ public void testTxWithStorePessimistic() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxWithStoreNoWriteThroughImplicit() throws Exception {
         txWithStore(null, false);
     }
@@ -347,6 +353,7 @@ public void testTxWithStoreNoWriteThroughImplicit() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxWithStoreNoWriteThroughOptimistic() throws Exception {
         txWithStore(OPTIMISTIC, false);
     }
@@ -354,6 +361,7 @@ public void testTxWithStoreNoWriteThroughOptimistic() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testTxWithStoreNoWriteThroughPessimistic() throws Exception {
         txWithStore(PESSIMISTIC, false);
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheMvccTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheMvccTxSelfTest.java
new file mode 100644
index 0000000000000..be7591cd52c43
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheMvccTxSelfTest.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.internal.processors.cache.distributed.dht;
+
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.junit.Test;
+
+import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
+import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
+
+/**
+ *
+ */
+public class IgniteCrossCacheMvccTxSelfTest extends IgniteCrossCacheTxAbstractSelfTest {
+    /** {@inheritDoc} */
+    @Override public CacheAtomicityMode atomicityMode() {
+        return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testPessimisticRepeatableRead() throws Exception {
+        checkTxsSingleOp(PESSIMISTIC, REPEATABLE_READ);
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheTxAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheTxAbstractSelfTest.java
new file mode 100644
index 0000000000000..d2769e392e947
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheTxAbstractSelfTest.java
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.processors.cache.distributed.dht;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ThreadLocalRandom;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.NearCacheConfiguration;
+import org.apache.ignite.internal.IgniteEx;
+import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.apache.ignite.transactions.Transaction;
+import org.apache.ignite.transactions.TransactionConcurrency;
+import org.apache.ignite.transactions.TransactionIsolation;
+
+import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
+
+/**
+ * Tests specific combinations of cross-cache transactions.
+ */
+public abstract class IgniteCrossCacheTxAbstractSelfTest extends GridCommonAbstractTest {
+    /** */
+    private static final String FIRST_CACHE = "FirstCache";
+
+    /** */
+    private static final String SECOND_CACHE = "SecondCache";
+
+    /** */
+    private static final int TX_CNT = 500;
+
+    /**
+     * @return Node count for this test.
+     */
+    private int nodeCount() {
+        return 4;
+    }
+
+    /**
+     * @return {@code True} if near cache should be enabled.
+     */
+    protected boolean nearEnabled() {
+        return false;
+    }
+
+    /** {@inheritDoc} */
+    @SuppressWarnings("unchecked")
+    @Override protected void beforeTestsStarted() throws Exception {
+        startGridsMultiThreaded(nodeCount());
+
+        CacheConfiguration firstCfg = new CacheConfiguration(FIRST_CACHE);
+        firstCfg.setBackups(1);
+        firstCfg.setAtomicityMode(atomicityMode());
+        firstCfg.setWriteSynchronizationMode(FULL_SYNC);
+
+        grid(0).createCache(firstCfg);
+
+        CacheConfiguration secondCfg = new CacheConfiguration(SECOND_CACHE);
+        secondCfg.setBackups(1);
+        secondCfg.setAtomicityMode(atomicityMode());
+        secondCfg.setWriteSynchronizationMode(FULL_SYNC);
+
+        if (nearEnabled())
+            secondCfg.setNearConfiguration(new NearCacheConfiguration());
+
+        grid(0).createCache(secondCfg);
+    }
+
+    /**
+     * @return Atomicity mode.
+     */
+    public abstract CacheAtomicityMode atomicityMode();
+
+    /**
+     * @param concurrency Concurrency.
+     * @param isolation Isolation.
+     * @throws Exception If failed.
+     */
+    protected void checkTxsSingleOp(TransactionConcurrency concurrency, TransactionIsolation isolation) throws Exception {
+        Map firstCheck = new HashMap<>();
+        Map secondCheck = new HashMap<>();
+
+        for (int i = 0; i < TX_CNT; i++) {
+            int grid = ThreadLocalRandom.current().nextInt(nodeCount());
+
+            IgniteCache first = grid(grid).cache(FIRST_CACHE);
+            IgniteCache second = grid(grid).cache(SECOND_CACHE);
+
+            try (Transaction tx = grid(grid).transactions().txStart(concurrency, isolation)) {
+                try {
+                    int size = ThreadLocalRandom.current().nextInt(24) + 1;
+
+                    for (int k = 0; k < size; k++) {
+                        boolean rnd = ThreadLocalRandom.current().nextBoolean();
+
+                        IgniteCache cache = rnd ? first : second;
+                        Map check = rnd ? firstCheck : secondCheck;
+
+                        String val = rnd ? "first" + i : "second" + i;
+
+                        cache.put(k, val);
+                        check.put(k, val);
+                    }
+
+                    tx.commit();
+                }
+                catch (Throwable e) {
+                    e.printStackTrace();
+
+                    throw e;
+                }
+            }
+
+            if (i > 0 && i % 100 == 0)
+                info("Finished iteration: " + i);
+        }
+
+        for (int g = 0; g < nodeCount(); g++) {
+            IgniteEx grid = grid(g);
+
+            assertEquals(0, grid.context().cache().context().tm().idMapSize());
+
+            ClusterNode locNode = grid.localNode();
+
+            IgniteCache firstCache = grid.cache(FIRST_CACHE);
+
+            for (Map.Entry entry : firstCheck.entrySet()) {
+                boolean primary = grid.affinity(FIRST_CACHE).isPrimary(locNode, entry.getKey());
+
+                boolean backup = grid.affinity(FIRST_CACHE).isBackup(locNode, entry.getKey());
+
+                assertEquals("Invalid value found first cache [primary=" + primary + ", backup=" + backup +
+                    ", node=" + locNode.id() + ", key=" + entry.getKey() + ']',
+                    entry.getValue(), firstCache.get(entry.getKey()));
+            }
+
+            for (Map.Entry entry : secondCheck.entrySet()) {
+                boolean primary = grid.affinity(SECOND_CACHE).isPrimary(locNode, entry.getKey());
+
+                boolean backup = grid.affinity(SECOND_CACHE).isBackup(locNode, entry.getKey());
+
+                assertEquals("Invalid value found second cache [primary=" + primary + ", backup=" + backup +
+                    ", node=" + locNode.id() + ", key=" + entry.getKey() + ']',
+                    entry.getValue(), grid.cache(SECOND_CACHE).get(entry.getKey()));
+            }
+        }
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheTxSelfTest.java
index 91c66850a6f23..b8ae501d67372 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheTxSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/IgniteCrossCacheTxSelfTest.java
@@ -14,28 +14,13 @@
  * See the License for the specific language governing
permissions and * limitations under the License. */ - package org.apache.ignite.internal.processors.cache.distributed.dht; -import java.util.HashMap; -import java.util.Map; -import java.util.concurrent.ThreadLocalRandom; -import org.apache.ignite.IgniteCache; -import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import org.apache.ignite.transactions.Transaction; -import org.apache.ignite.transactions.TransactionConcurrency; -import org.apache.ignite.transactions.TransactionIsolation; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; -import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; @@ -43,70 +28,19 @@ import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE; /** - * Tests specific combinations of cross-cache transactions. 
+ * */ -public class IgniteCrossCacheTxSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** */ - private static final String FIRST_CACHE = "FirstCache"; - - /** */ - private static final String SECOND_CACHE = "SecondCache"; - - /** */ - private static final int TX_CNT = 500; - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - - /** - * @return Node count for this test. - */ - private int nodeCount() { - return 4; - } - - /** - * @return {@code True} if near cache should be enabled. - */ - protected boolean nearEnabled() { - return false; - } - +@RunWith(JUnit4.class) +public class IgniteCrossCacheTxSelfTest extends IgniteCrossCacheTxAbstractSelfTest { /** {@inheritDoc} */ - @SuppressWarnings("unchecked") - @Override protected void beforeTestsStarted() throws Exception { - startGridsMultiThreaded(nodeCount()); - - CacheConfiguration firstCfg = new CacheConfiguration(FIRST_CACHE); - firstCfg.setBackups(1); - firstCfg.setAtomicityMode(TRANSACTIONAL); - firstCfg.setWriteSynchronizationMode(FULL_SYNC); - - grid(0).createCache(firstCfg); - - CacheConfiguration secondCfg = new CacheConfiguration(SECOND_CACHE); - secondCfg.setBackups(1); - secondCfg.setAtomicityMode(TRANSACTIONAL); - secondCfg.setWriteSynchronizationMode(FULL_SYNC); - - if (nearEnabled()) - secondCfg.setNearConfiguration(new NearCacheConfiguration()); - - grid(0).createCache(secondCfg); + @Override public CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL; } /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticReadCommitted() throws Exception { checkTxsSingleOp(PESSIMISTIC, READ_COMMITTED); } @@ -114,6 +48,7 @@ public void testPessimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableRead() throws Exception { checkTxsSingleOp(PESSIMISTIC, REPEATABLE_READ); } @@ -121,6 +56,7 @@ public void testPessimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticReadCommitted() throws Exception { checkTxsSingleOp(OPTIMISTIC, READ_COMMITTED); } @@ -128,6 +64,7 @@ public void testOptimisticReadCommitted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableRead() throws Exception { checkTxsSingleOp(OPTIMISTIC, REPEATABLE_READ); } @@ -135,82 +72,9 @@ public void testOptimisticRepeatableRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSerializable() throws Exception { checkTxsSingleOp(OPTIMISTIC, SERIALIZABLE); } - /** - * @param concurrency Concurrency. - * @param isolation Isolation. - * @throws Exception If failed. - */ - private void checkTxsSingleOp(TransactionConcurrency concurrency, TransactionIsolation isolation) throws Exception { - Map firstCheck = new HashMap<>(); - Map secondCheck = new HashMap<>(); - - for (int i = 0; i < TX_CNT; i++) { - int grid = ThreadLocalRandom.current().nextInt(nodeCount()); - - IgniteCache first = grid(grid).cache(FIRST_CACHE); - IgniteCache second = grid(grid).cache(SECOND_CACHE); - - try (Transaction tx = grid(grid).transactions().txStart(concurrency, isolation)) { - try { - int size = ThreadLocalRandom.current().nextInt(24) + 1; - - for (int k = 0; k < size; k++) { - boolean rnd = ThreadLocalRandom.current().nextBoolean(); - - IgniteCache cache = rnd ? first : second; - Map check = rnd ? firstCheck : secondCheck; - - String val = rnd ? 
"first" + i : "second" + i; - - cache.put(k, val); - check.put(k, val); - } - - tx.commit(); - } - catch (Throwable e) { - e.printStackTrace(); - - throw e; - } - } - - if (i > 0 && i % 100 == 0) - info("Finished iteration: " + i); - } - - for (int g = 0; g < nodeCount(); g++) { - IgniteEx grid = grid(g); - - assertEquals(0, grid.context().cache().context().tm().idMapSize()); - - ClusterNode locNode = grid.localNode(); - - IgniteCache firstCache = grid.cache(FIRST_CACHE); - - for (Map.Entry entry : firstCheck.entrySet()) { - boolean primary = grid.affinity(FIRST_CACHE).isPrimary(locNode, entry.getKey()); - - boolean backup = grid.affinity(FIRST_CACHE).isBackup(locNode, entry.getKey()); - - assertEquals("Invalid value found first cache [primary=" + primary + ", backup=" + backup + - ", node=" + locNode.id() + ", key=" + entry.getKey() + ']', - entry.getValue(), firstCache.get(entry.getKey())); - } - - for (Map.Entry entry : secondCheck.entrySet()) { - boolean primary = grid.affinity(SECOND_CACHE).isPrimary(locNode, entry.getKey()); - - boolean backup = grid.affinity(SECOND_CACHE).isBackup(locNode, entry.getKey()); - - assertEquals("Invalid value found second cache [primary=" + primary + ", backup=" + backup + - ", node=" + locNode.id() + ", key=" + entry.getKey() + ']', - entry.getValue(), grid.cache(SECOND_CACHE).get(entry.getKey())); - } - } - } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/NotMappedPartitionInTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/NotMappedPartitionInTxTest.java index e09cf53c51c11..0bad828931db9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/NotMappedPartitionInTxTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/NotMappedPartitionInTxTest.java @@ -39,6 +39,9 @@ import org.apache.ignite.transactions.TransactionConcurrency; 
import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -49,6 +52,7 @@ /** */ @SuppressWarnings({"unchecked", "ThrowableNotThrown"}) +@RunWith(JUnit4.class) public class NotMappedPartitionInTxTest extends GridCommonAbstractTest { /** Cache. */ private static final String CACHE = "testCache"; @@ -62,23 +66,34 @@ public class NotMappedPartitionInTxTest extends GridCommonAbstractTest { /** Is client. */ private boolean isClient; + /** Atomicity mode. */ + private CacheAtomicityMode atomicityMode = CacheAtomicityMode.TRANSACTIONAL; + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + atomicityMode = CacheAtomicityMode.TRANSACTIONAL; + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { return super.getConfiguration(gridName) .setClientMode(isClient) .setCacheConfiguration( new CacheConfiguration(CACHE) - .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setAtomicityMode(atomicityMode) .setCacheMode(CacheMode.REPLICATED) .setAffinity(new TestAffinity()), new CacheConfiguration(CACHE2) - .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + .setAtomicityMode(atomicityMode)); } /** * */ - public void testOneServerOptimistic() throws Exception { + @Test + public void testOneServerTx() throws Exception { try { isClient = false; startGrid(0); @@ -86,37 +101,11 @@ public void testOneServerOptimistic() throws Exception { isClient = true; final IgniteEx client = startGrid(1); - GridTestUtils.assertThrowsAnyCause(log, new Callable() { - @Override public Void call() throws Exception { - testNotMapped(client, OPTIMISTIC, REPEATABLE_READ); - - return null; - 
} - }, ClusterTopologyServerNotFoundException.class, "Failed to map keys to nodes (partition is not mapped to any node)"); - } - finally { - stopAllGrids(); - } - } - - /** - * - */ - public void testOneServerOptimisticSerializable() throws Exception { - try { - isClient = false; - startGrid(0); + checkNotMapped(client, OPTIMISTIC, REPEATABLE_READ); - isClient = true; - final IgniteEx client = startGrid(1); - - GridTestUtils.assertThrowsAnyCause(log, new Callable() { - @Override public Void call() throws Exception { - testNotMapped(client, OPTIMISTIC, SERIALIZABLE); + checkNotMapped(client, OPTIMISTIC, SERIALIZABLE); - return null; - } - }, ClusterTopologyServerNotFoundException.class, "Failed to map keys to nodes (partition is not mapped to any node)"); + checkNotMapped(client, PESSIMISTIC, READ_COMMITTED); } finally { stopAllGrids(); @@ -126,21 +115,20 @@ public void testOneServerOptimisticSerializable() throws Exception { /** * */ - public void testOneServerPessimistic() throws Exception { + @Test + public void testOneServerMvcc() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10377"); + try { + atomicityMode = CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + isClient = false; startGrid(0); isClient = true; final IgniteEx client = startGrid(1); - GridTestUtils.assertThrowsAnyCause(log, new Callable() { - @Override public Void call() throws Exception { - testNotMapped(client, PESSIMISTIC, READ_COMMITTED); - - return null; - } - }, ClusterTopologyServerNotFoundException.class, "Failed to lock keys (all partition nodes left the grid)"); + checkNotMapped(client, PESSIMISTIC, REPEATABLE_READ); } finally { stopAllGrids(); @@ -150,21 +138,20 @@ public void testOneServerPessimistic() throws Exception { /** * */ - public void testFourServersOptimistic() throws Exception { + @Test + public void testFourServersTx() throws Exception { try { isClient = false; - startGrids(4); + startGridsMultiThreaded(4); isClient = true; final IgniteEx client = 
startGrid(4); - GridTestUtils.assertThrowsAnyCause(log, new Callable() { - @Override public Void call() throws Exception { - testNotMapped(client, OPTIMISTIC, REPEATABLE_READ); + checkNotMapped(client, OPTIMISTIC, REPEATABLE_READ); - return null; - } - }, ClusterTopologyServerNotFoundException.class, "Failed to map keys to nodes (partition is not mapped to any node)"); + checkNotMapped(client, OPTIMISTIC, SERIALIZABLE); + + checkNotMapped(client, PESSIMISTIC, READ_COMMITTED); } finally { stopAllGrids(); @@ -174,21 +161,20 @@ public void testFourServersOptimistic() throws Exception { /** * */ - public void testFourServersOptimisticSerializable() throws Exception { + @Test + public void testFourServersMvcc() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10377"); + try { + atomicityMode = CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + isClient = false; - startGrids(4); + startGridsMultiThreaded(4); isClient = true; final IgniteEx client = startGrid(4); - GridTestUtils.assertThrowsAnyCause(log, new Callable() { - @Override public Void call() throws Exception { - testNotMapped(client, OPTIMISTIC, SERIALIZABLE); - - return null; - } - }, ClusterTopologyServerNotFoundException.class, "Failed to map keys to nodes (partition is not mapped to any node)"); + checkNotMapped(client, PESSIMISTIC, READ_COMMITTED); } finally { stopAllGrids(); @@ -196,50 +182,37 @@ public void testFourServersOptimisticSerializable() throws Exception { } /** - * + * @param client Ignite client. */ - public void testFourServersPessimistic() throws Exception { - try { - isClient = false; - startGrids(4); + private void checkNotMapped(final IgniteEx client, final TransactionConcurrency concurrency, + final TransactionIsolation isolation) { + String msg = concurrency == PESSIMISTIC ? 
"Failed to lock keys (all partition nodes left the grid)" : + "Failed to map keys to nodes (partition is not mapped to any node"; - isClient = true; - final IgniteEx client = startGrid(4); - GridTestUtils.assertThrowsAnyCause(log, new Callable() { - @Override public Void call() throws Exception { - testNotMapped(client, PESSIMISTIC, READ_COMMITTED); + GridTestUtils.assertThrowsAnyCause(log, new Callable() { + @Override public Void call() { + IgniteCache cache2 = client.cache(CACHE2); + IgniteCache cache1 = client.cache(CACHE).withKeepBinary(); - return null; - } - }, ClusterTopologyServerNotFoundException.class, "Failed to lock keys (all partition nodes left the grid)"); - } - finally { - stopAllGrids(); - } - } + try (Transaction tx = client.transactions().txStart(concurrency, isolation)) { - /** - * @param client Ignite client. - */ - private void testNotMapped(IgniteEx client, TransactionConcurrency concurrency, TransactionIsolation isolation) { - IgniteCache cache2 = client.cache(CACHE2); - IgniteCache cache1 = client.cache(CACHE).withKeepBinary(); + Map param = new TreeMap<>(); + param.put(TEST_KEY + 1, 1); + param.put(TEST_KEY + 1, 3); + param.put(TEST_KEY, 3); - try(Transaction tx = client.transactions().txStart(concurrency, isolation)) { + cache1.put(TEST_KEY, 3); - Map param = new TreeMap<>(); - param.put(TEST_KEY + 1, 1); - param.put(TEST_KEY + 1, 3); - param.put(TEST_KEY, 3); + cache1.putAll(param); + cache2.putAll(param); - cache1.put(TEST_KEY, 3); - - cache1.putAll(param); - cache2.putAll(param); + tx.commit(); + } - tx.commit(); - } + return null; + } + }, ClusterTopologyServerNotFoundException.class, msg); } /** */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/TxRecoveryStoreEnabledTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/TxRecoveryStoreEnabledTest.java index 30ac83d5f50b9..d4e9395197908 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/TxRecoveryStoreEnabledTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/TxRecoveryStoreEnabledTest.java @@ -43,6 +43,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -51,6 +54,7 @@ /** * */ +@RunWith(JUnit4.class) public class TxRecoveryStoreEnabledTest extends GridCommonAbstractTest { /** Nodes count. */ private static final int NODES_CNT = 2; @@ -102,6 +106,7 @@ public class TxRecoveryStoreEnabledTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testOptimistic() throws Exception { checkTxRecovery(OPTIMISTIC); } @@ -109,6 +114,7 @@ public void testOptimistic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimistic() throws Exception { checkTxRecovery(PESSIMISTIC); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/AtomicPutAllChangingTopologyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/AtomicPutAllChangingTopologyTest.java index 1eb8347d078ec..b49e2dfe1bed1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/AtomicPutAllChangingTopologyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/AtomicPutAllChangingTopologyTest.java @@ -28,14 +28,13 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -45,10 +44,8 @@ import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** */ +@RunWith(JUnit4.class) public class AtomicPutAllChangingTopologyTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES_CNT = 3; @@ 
-74,18 +71,10 @@ private CacheConfiguration cacheConfig() { .setName(CACHE_NAME); } - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(gridName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** * @throws Exception If failed. */ + @Test public void testPutAllOnChangingTopology() throws Exception { List futs = new LinkedList<>(); @@ -209,4 +198,4 @@ private void checkCacheState(Ignite node, IgniteCache cache) t ", actual2=" + locSize2 + "]", locSize, CACHE_SIZE); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicInvalidPartitionHandlingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicInvalidPartitionHandlingSelfTest.java index ba17da04e378d..877a7a633bf48 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicInvalidPartitionHandlingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicInvalidPartitionHandlingSelfTest.java @@ -48,10 +48,11 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static 
org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -63,10 +64,8 @@ * Test GridDhtInvalidPartitionException handling in ATOMIC cache during restarts. */ @SuppressWarnings("ErrorNotRethrown") +@RunWith(JUnit4.class) public class GridCacheAtomicInvalidPartitionHandlingSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Delay flag. */ private static volatile boolean delay; @@ -77,7 +76,7 @@ public class GridCacheAtomicInvalidPartitionHandlingSelfTest extends GridCommonA @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER).setForceServerMode(true)); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); CacheConfiguration ccfg = cacheConfiguration(); @@ -131,6 +130,7 @@ protected boolean testClientNode() { /** * @throws Exception If failed. */ + @Test public void testPrimaryFullSync() throws Exception { checkRestarts(FULL_SYNC); } @@ -138,6 +138,7 @@ public void testPrimaryFullSync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimaryPrimarySync() throws Exception { checkRestarts(PRIMARY_SYNC); } @@ -145,6 +146,7 @@ public void testPrimaryPrimarySync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPrimaryFullAsync() throws Exception { checkRestarts(FULL_ASYNC); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicPreloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicPreloadSelfTest.java index a14c3efb3d3e1..da7ce8f034d60 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicPreloadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridCacheAtomicPreloadSelfTest.java @@ -35,6 +35,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -44,6 +47,7 @@ /** * Simple test for preloading in ATOMIC cache. */ +@RunWith(JUnit4.class) public class GridCacheAtomicPreloadSelfTest extends GridCommonAbstractTest { /** */ private boolean nearEnabled; @@ -68,6 +72,7 @@ public class GridCacheAtomicPreloadSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPessimisticSimpleTxsNear() throws Exception { checkSimpleTxs(true, PESSIMISTIC); } @@ -75,6 +80,7 @@ public void testPessimisticSimpleTxsNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticSimpleTxsColocated() throws Exception { checkSimpleTxs(false, PESSIMISTIC); } @@ -82,6 +88,7 @@ public void testPessimisticSimpleTxsColocated() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOptimisticSimpleTxsColocated() throws Exception { checkSimpleTxs(false, OPTIMISTIC); } @@ -89,6 +96,7 @@ public void testOptimisticSimpleTxsColocated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticSimpleTxsNear() throws Exception { checkSimpleTxs(false, OPTIMISTIC); } @@ -218,4 +226,4 @@ private List generateKeys(ClusterNode node, IgniteCache return keys; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/IgniteCacheAtomicProtocolTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/IgniteCacheAtomicProtocolTest.java index 14c85717ae857..34809b6d81673 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/IgniteCacheAtomicProtocolTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/IgniteCacheAtomicProtocolTest.java @@ -44,11 +44,11 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC; @@ -59,10 +59,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheAtomicProtocolTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new 
TcpDiscoveryVmIpFinder(true); - /** */ private static final String TEST_CACHE = "testCache"; @@ -81,7 +79,6 @@ public class IgniteCacheAtomicProtocolTest extends GridCommonAbstractTest { cfg.setConsistentId(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); cfg.setClientFailureDetectionTimeout(Integer.MAX_VALUE); TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi(); @@ -122,6 +119,7 @@ private void blockRebalance() { /** * @throws Exception If failed. */ + @Test public void testPutAllPrimaryFailure1() throws Exception { putAllPrimaryFailure(true, false); } @@ -129,6 +127,7 @@ public void testPutAllPrimaryFailure1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllPrimaryFailure1_UnstableTopology() throws Exception { blockRebalance = true; @@ -138,6 +137,7 @@ public void testPutAllPrimaryFailure1_UnstableTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllPrimaryFailure2() throws Exception { putAllPrimaryFailure(true, true); } @@ -145,6 +145,7 @@ public void testPutAllPrimaryFailure2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllPrimaryFailure2_UnstableTopology() throws Exception { blockRebalance = true; @@ -214,6 +215,7 @@ private void putAllPrimaryFailure(boolean fail0, boolean fail1) throws Exception /** * @throws Exception If failed. */ + @Test public void testPutAllBackupFailure1() throws Exception { putAllBackupFailure1(); } @@ -221,6 +223,7 @@ public void testPutAllBackupFailure1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllBackupFailure1_UnstableTopology() throws Exception { blockRebalance = true; @@ -275,6 +278,7 @@ private void putAllBackupFailure1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutBackupFailure1() throws Exception { putBackupFailure1(); } @@ -282,6 +286,7 @@ public void testPutBackupFailure1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutBackupFailure1_UnstableTopology() throws Exception { blockRebalance = true; @@ -331,6 +336,7 @@ private void putBackupFailure1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFullAsyncPutRemap() throws Exception { fullAsyncRemap(false); } @@ -338,6 +344,7 @@ public void testFullAsyncPutRemap() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFullAsyncPutAllRemap() throws Exception { fullAsyncRemap(true); } @@ -406,6 +413,7 @@ private void fullAsyncRemap(boolean putAll) throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutPrimarySync() throws Exception { startGrids(2); @@ -448,6 +456,7 @@ public void testPutPrimarySync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutNearNodeFailure() throws Exception { startGrids(2); @@ -485,6 +494,7 @@ public void testPutNearNodeFailure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllNearNodeFailure() throws Exception { final int SRVS = 4; @@ -545,6 +555,7 @@ public void testPutAllNearNodeFailure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheOperations0() throws Exception { cacheOperations(0); } @@ -552,6 +563,7 @@ public void testCacheOperations0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheOperations_UnstableTopology0() throws Exception { blockRebalance = true; @@ -561,6 +573,7 @@ public void testCacheOperations_UnstableTopology0() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheOperations1() throws Exception { cacheOperations(1); } @@ -568,6 +581,7 @@ public void testCacheOperations1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheOperations_UnstableTopology1() throws Exception { blockRebalance = true; @@ -577,6 +591,7 @@ public void testCacheOperations_UnstableTopology1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheOperations2() throws Exception { cacheOperations(2); } @@ -584,6 +599,7 @@ public void testCacheOperations2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheOperations_UnstableTopology2() throws Exception { blockRebalance = true; @@ -639,6 +655,7 @@ private void cacheOperations(int backups) throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutMissedDhtRequest_UnstableTopology() throws Exception { blockRebalance = true; @@ -678,6 +695,7 @@ public void testPutMissedDhtRequest_UnstableTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllMissedDhtRequest_UnstableTopology1() throws Exception { putAllMissedDhtRequest_UnstableTopology(true, false); } @@ -685,6 +703,7 @@ public void testPutAllMissedDhtRequest_UnstableTopology1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllMissedDhtRequest_UnstableTopology2() throws Exception { putAllMissedDhtRequest_UnstableTopology(true, true); } @@ -750,6 +769,7 @@ private void putAllMissedDhtRequest_UnstableTopology(boolean fail0, boolean fail /** * @throws Exception If failed. */ + @Test public void testPutReaderUpdate1() throws Exception { readerUpdateDhtFails(false, false, false); @@ -761,6 +781,7 @@ public void testPutReaderUpdate1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutReaderUpdate2() throws Exception { readerUpdateDhtFails(true, false, false); @@ -772,6 +793,7 @@ public void testPutReaderUpdate2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllReaderUpdate1() throws Exception { readerUpdateDhtFails(false, false, true); @@ -783,6 +805,7 @@ public void testPutAllReaderUpdate1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllReaderUpdate2() throws Exception { readerUpdateDhtFails(true, false, true); @@ -806,13 +829,15 @@ private void readerUpdateDhtFails(boolean updateNearEnabled, startServers(2); - // Waiting for minor topology changing because of late affinity assignment. - awaitPartitionMapExchange(); - Ignite srv0 = ignite(0); Ignite srv1 = ignite(1); - List keys = primaryKeys(srv0.cache(TEST_CACHE), putAll ? 3 : 1); + IgniteCache cache = srv0.cache(TEST_CACHE); + + // Waiting for minor topology changing because of late affinity assignment. + awaitPartitionMapExchange(); + + List keys = primaryKeys(cache, putAll ? 
3 : 1); ccfg = null; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAbstractNearPartitionedByteArrayValuesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAbstractNearPartitionedByteArrayValuesSelfTest.java index c6c849ac9bd42..a352f8fa631df 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAbstractNearPartitionedByteArrayValuesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAbstractNearPartitionedByteArrayValuesSelfTest.java @@ -19,6 +19,8 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractPartitionedByteArrayValuesSelfTest; +import org.junit.Ignore; +import org.junit.Test; /** * Tests for byte array values in NEAR-PARTITIONED caches. @@ -29,4 +31,18 @@ public abstract class GridCacheAbstractNearPartitionedByteArrayValuesSelfTest ex @Override protected NearCacheConfiguration nearConfiguration() { return new NearCacheConfiguration(); } -} \ No newline at end of file + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + @Override public void testPessimisticMvcc() throws Exception { + super.testPessimisticMvcc(); + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + @Override public void testPessimisticMvccMixed() throws Exception { + super.testPessimisticMvccMixed(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest.java index 5de7af055f262..7a9812abfe0e8 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest.java @@ -36,6 +36,9 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -44,6 +47,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCacheNearOnlyMultiNodeFullApiSelfTest { /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { @@ -80,16 +84,19 @@ public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCache } /** {@inheritDoc} */ - @Override public void _testReaderTtlNoTx() throws Exception { + @Test + @Override public void testReaderTtlNoTx() { // No-op. } /** {@inheritDoc} */ - @Override public void _testReaderTtlTx() throws Exception { + @Test + @Override public void testReaderTtlTx() { // No-op. 
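The hunks above add JUnit 4 `@Test` annotations and `@RunWith(JUnit4.class)` runners, and rename muted `_testXxx` methods back to their real names. The practical difference from JUnit 3 is that test discovery moves from the `testXxx` naming convention to annotation scanning, and muting becomes an explicit `@Ignore` with a tracking link instead of an underscore prefix. A minimal self-contained sketch of that discovery model, using locally defined stand-ins for `org.junit.Test` and `org.junit.Ignore` so no JUnit dependency is assumed:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class MiniRunner {
    /** Local stand-in for org.junit.Test: marks a method as discoverable. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test { }

    /** Local stand-in for org.junit.Ignore: mutes a test, keeping a reason. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Ignore { String value(); }

    /** Example test class: method names no longer need a "test" prefix. */
    public static class SampleTest {
        @Test public void putAllPrimaryFailure() { /* test body */ }

        @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
        @Test public void pessimisticMvcc() { /* muted until the ticket is fixed */ }

        public void helper() { /* not discovered: no annotation */ }
    }

    /** Runs every @Test method that is not @Ignore'd; returns executed names. */
    public static List<String> run(Class<?> cls) throws Exception {
        List<String> executed = new ArrayList<>();
        Object instance = cls.getDeclaredConstructor().newInstance();

        for (Method m : cls.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class) && !m.isAnnotationPresent(Ignore.class)) {
                m.invoke(instance);

                executed.add(m.getName());
            }
        }

        return executed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(SampleTest.class));
    }
}
```

The `@Ignore("https://...")` form mirrors the pattern used throughout this patch: the test stays compiled and visible to the runner, and the JIRA URL documents why it is skipped.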
} /** {@inheritDoc} */ + @Test @Override public void testSize() throws Exception { IgniteCache cache = jcache(); @@ -119,6 +126,7 @@ public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCache } /** {@inheritDoc} */ + @Test @Override public void testClear() throws Exception { IgniteCache nearCache = jcache(); IgniteCache primary = fullCache(); @@ -154,6 +162,7 @@ public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCache } /** {@inheritDoc} */ + @Test @Override public void testLocalClearKeys() throws Exception { IgniteCache nearCache = jcache(); IgniteCache primary = fullCache(); @@ -185,6 +194,7 @@ public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCache } /** {@inheritDoc} */ + @Test @Override public void testEvictExpired() throws Exception { IgniteCache cache = jcache(); @@ -232,6 +242,7 @@ public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCache } /** {@inheritDoc} */ + @Test @Override public void testLocalEvict() throws Exception { IgniteCache cache = jcache(); @@ -281,6 +292,7 @@ public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCache } /** {@inheritDoc} */ + @Test @Override public void testPeekExpired() throws Exception { IgniteCache c = jcache(); @@ -303,4 +315,4 @@ public class GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest extends GridCache assert c.localSize() == 0 : "Cache is not empty."; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest.java index c78c976869cad..7e64ce03a4792 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest.java @@ -18,26 +18,23 @@ package org.apache.ignite.internal.processors.cache.distributed.near; import java.util.Collection; -import java.util.Collections; import java.util.HashMap; import java.util.Map; -import javax.cache.expiry.Duration; -import javax.cache.expiry.TouchedExpiryPolicy; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CachePeekMode; -import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.internal.util.lang.GridAbsPredicate; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; -import static org.apache.ignite.testframework.GridTestUtils.waitForCondition; /** * Tests NEAR_ONLY cache. 
*/ +@RunWith(JUnit4.class) public class GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest extends GridCacheNearOnlyMultiNodeFullApiSelfTest { /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { @@ -82,6 +79,7 @@ public class GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest extends GridCacheNe } /** {@inheritDoc} */ + @Test @Override public void testClear() throws Exception { IgniteCache nearCache = jcache(); IgniteCache primary = fullCache(); @@ -118,4 +116,4 @@ public class GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest extends GridCacheNe for (String key : keys) assertEquals((Integer)i++, nearCache.localPeek(key, CachePeekMode.ONHEAP)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicPartitionedTckMetricsSelfTestImpl.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicPartitionedTckMetricsSelfTestImpl.java index da0c7a92d94f8..08a44ec10e9f8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicPartitionedTckMetricsSelfTestImpl.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheAtomicPartitionedTckMetricsSelfTestImpl.java @@ -21,10 +21,14 @@ import javax.cache.processor.EntryProcessorException; import javax.cache.processor.MutableEntry; import org.apache.ignite.IgniteCache; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Partitioned atomic cache metrics test. 
*/ +@RunWith(JUnit4.class) public class GridCacheAtomicPartitionedTckMetricsSelfTestImpl extends GridCacheAtomicPartitionedMetricsSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -34,6 +38,7 @@ public class GridCacheAtomicPartitionedTckMetricsSelfTestImpl extends GridCacheA /** * @throws Exception If failed. */ + @Test public void testEntryProcessorRemove() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -68,6 +73,7 @@ public void testEntryProcessorRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheStatistics() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -108,6 +114,7 @@ public void testCacheStatistics() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConditionReplace() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -158,6 +165,7 @@ public void testConditionReplace() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutIfAbsent() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -177,7 +185,7 @@ public void testPutIfAbsent() throws Exception { assertEquals(putCount, cache.localMetrics().getCachePuts()); result = cache.putIfAbsent(1, 1); - + ++hitCount; cache.containsKey(123); @@ -187,4 +195,4 @@ public void testPutIfAbsent() throws Exception { assertEquals(putCount, cache.localMetrics().getCachePuts()); assertEquals(missCount, cache.localMetrics().getCacheMisses()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheGetStoreErrorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheGetStoreErrorSelfTest.java index cb537ee67dfcf..731b623505e44 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheGetStoreErrorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheGetStoreErrorSelfTest.java @@ -28,11 +28,12 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.IgniteReflectionFactory; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -45,27 +46,27 @@ /** * Checks that exception is 
propagated to user when cache store throws an exception. */ +@RunWith(JUnit4.class) public class GridCacheGetStoreErrorSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Near enabled flag. */ private boolean nearEnabled; /** Cache mode for test. */ private CacheMode cacheMode; + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(cacheMode); @@ -88,21 +89,25 @@ public class GridCacheGetStoreErrorSelfTest extends GridCommonAbstractTest { } /** @throws Exception If failed. */ + @Test public void testGetErrorNear() throws Exception { checkGetError(true, PARTITIONED); } /** @throws Exception If failed. */ + @Test public void testGetErrorColocated() throws Exception { checkGetError(false, PARTITIONED); } /** @throws Exception If failed. */ + @Test public void testGetErrorReplicated() throws Exception { checkGetError(false, REPLICATED); } /** @throws Exception If failed. */ + @Test public void testGetErrorLocal() throws Exception { checkGetError(false, LOCAL); } @@ -116,7 +121,7 @@ private void checkGetError(boolean nearEnabled, CacheMode cacheMode) throws Exce this.nearEnabled = nearEnabled; this.cacheMode = cacheMode; - startGrids(3); + startGridsMultiThreaded(3); try { GridTestUtils.assertThrows(log, new Callable() { @@ -166,4 +171,4 @@ public static class TestStore extends CacheStoreAdapter { // No-op. 
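The `MvccFeatureChecker.failIfNotSupported(...)` call added to `beforeTestsStarted()` above aborts the whole suite up front when the configured cache mode lacks a required feature, instead of letting every test fail individually. The shape of that guard can be sketched without Ignite; the `Feature` enum and the unsupported set below are illustrative stand-ins, not Ignite's actual lists:

```java
import java.util.EnumSet;
import java.util.Set;

public class FeatureGuard {
    /** Illustrative feature enum (stand-in for MvccFeatureChecker.Feature). */
    enum Feature { CACHE_STORE, METRICS, NEAR_CACHE }

    /** Illustrative set of features the current cache mode does not support. */
    private static final Set<Feature> UNSUPPORTED = EnumSet.of(Feature.NEAR_CACHE);

    /** Fails fast, mirroring the failIfNotSupported(...) call in the diff. */
    static void failIfNotSupported(Feature f) {
        if (UNSUPPORTED.contains(f))
            throw new AssertionError("Feature is not supported in this mode: " + f);
    }

    public static void main(String[] args) {
        failIfNotSupported(Feature.CACHE_STORE); // supported: no-op, suite proceeds

        try {
            failIfNotSupported(Feature.NEAR_CACHE); // unsupported: suite aborts early
        }
        catch (AssertionError e) {
            System.out.println("skipped: " + e.getMessage());
        }
    }
}
```

Placing the check in `beforeTestsStarted()` (rather than in each test) means the guard runs once per suite, before any grids are started, which is why the diff also calls `super.beforeTestsStarted()` only after the check passes.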
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheMvccNearEvictionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheMvccNearEvictionSelfTest.java new file mode 100644 index 0000000000000..3cc5b247e5e02 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheMvccNearEvictionSelfTest.java @@ -0,0 +1,33 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.distributed.near; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.junit.Ignore; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + +/** + * Test for mvcc cache. 
+ */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-7187,https://issues.apache.org/jira/browse/IGNITE-7956") +public class GridCacheMvccNearEvictionSelfTest extends GridCacheNearEvictionSelfTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL_SNAPSHOT; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearAtomicMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearAtomicMetricsSelfTest.java index 4d82c71181fa5..da8d8e20d40d9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearAtomicMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearAtomicMetricsSelfTest.java @@ -22,10 +22,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Atomic cache metrics test. */ +@RunWith(JUnit4.class) public class GridCacheNearAtomicMetricsSelfTest extends GridCacheNearMetricsSelfTest { /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { @@ -39,6 +43,7 @@ public class GridCacheNearAtomicMetricsSelfTest extends GridCacheNearMetricsSelf /** * Checks that enabled near cache does not affect metrics. 
*/ + @Test public void testNearCachePutRemoveGetMetrics() { IgniteEx initiator = grid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearClientHitTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearClientHitTest.java index 1dd62e49863be..083979b5925ce 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearClientHitTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearClientHitTest.java @@ -26,31 +26,21 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CachePeekMode.NEAR; /** * */ +@RunWith(JUnit4.class) public class GridCacheNearClientHitTest extends GridCommonAbstractTest { - /** Ip finder. */ - private final static TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private final static String CACHE_NAME = "test-near-cache"; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(final String igniteInstanceName) throws Exception { - final IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** * @param igniteInstanceName Node name. * @return Configuration. 
@@ -71,14 +61,11 @@ private CacheConfiguration cacheConfiguration() { CacheConfiguration cfg = new CacheConfiguration<>(); cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC); - cfg.setCacheMode(CacheMode.PARTITIONED); - cfg.setBackups(1); - cfg.setCopyOnRead(false); - cfg.setName(CACHE_NAME); + cfg.setNearConfiguration(new NearCacheConfiguration<>()); return cfg; } @@ -97,6 +84,7 @@ private NearCacheConfiguration nearCacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testLocalPeekAfterPrimaryNodeLeft() throws Exception { try { Ignite crd = startGrid("coordinator", getConfiguration("coordinator")); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearEvictionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearEvictionSelfTest.java index 9c7c93345d9d5..37124c76e2c32 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearEvictionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearEvictionSelfTest.java @@ -27,11 +27,11 @@ import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -41,13 +41,11 @@ /** * Tests entries distribution 
between primary-backup-near caches according to nodes count in grid. */ +@RunWith(JUnit4.class) public class GridCacheNearEvictionSelfTest extends GridCommonAbstractTest { /** Grid count. */ private int gridCnt; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -66,12 +64,6 @@ public class GridCacheNearEvictionSelfTest extends GridCommonAbstractTest { c.setCacheConfiguration(cc); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - return c; } @@ -83,6 +75,7 @@ protected CacheAtomicityMode atomicityMode() { } /** @throws Exception If failed. */ + @Test public void testNearEnabledOneNode() throws Exception { gridCnt = 1; @@ -106,6 +99,7 @@ public void testNearEnabledOneNode() throws Exception { } /** @throws Exception If failed. */ + @Test public void testNearEnabledTwoNodes() throws Exception { gridCnt = 2; @@ -139,6 +133,7 @@ public void testNearEnabledTwoNodes() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testNearEnabledThreeNodes() throws Exception { gridCnt = 3; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMetricsSelfTest.java index 70b06f34f2d9f..17f2e34d6b6e1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMetricsSelfTest.java @@ -29,12 +29,17 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * Cache metrics test. */ +@RunWith(JUnit4.class) public class GridCacheNearMetricsSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int KEY_CNT = 50; @@ -83,6 +88,8 @@ protected int keyCount() { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.METRICS); + super.beforeTest(); for (int i = 0; i < gridCount(); i++) { @@ -107,6 +114,7 @@ protected int keyCount() { /** * @throws Exception If failed. */ + @Test public void testNearCacheDoesNotAffectCacheSize() throws Exception { IgniteCache cache0 = grid(0).cache(DEFAULT_CACHE_NAME); @@ -132,6 +140,7 @@ public void testNearCacheDoesNotAffectCacheSize() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPrimaryPut() throws Exception { Ignite g0 = grid(0); @@ -187,6 +196,7 @@ public void testPrimaryPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBackupPut() throws Exception { Ignite g0 = grid(0); @@ -241,6 +251,7 @@ else if (affinity(jcache).isBackup(g.cluster().localNode(), key)){ /** * @throws Exception If failed. */ + @Test public void testNearPut() throws Exception { Ignite g0 = grid(0); @@ -293,6 +304,7 @@ else if (affinity(jcache).isBackup(g.cluster().localNode(), key)){ /** * @throws Exception If failed. */ + @Test public void testPrimaryRead() throws Exception { Ignite g0 = grid(0); @@ -349,6 +361,7 @@ public void testPrimaryRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBackupRead() throws Exception { Ignite g0 = grid(0); @@ -400,6 +413,7 @@ public void testBackupRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearRead() throws Exception { Ignite g0 = grid(0); @@ -453,6 +467,7 @@ else if (affinity(jcache).isBackup(g.cluster().localNode(), key)){ /** * @throws Exception If failed. */ + @Test public void testCreateReadRemoveInvokesFromPrimary() throws Exception { Ignite g0 = grid(0); @@ -508,6 +523,7 @@ else if (affinity(jcache).isBackup(g.cluster().localNode(), key)) { /** * @throws Exception If failed. */ + @Test public void testCreateReadRemoveInvokesFromBackup() throws Exception { Ignite g0 = grid(0); @@ -563,6 +579,7 @@ else if (affinity(jcache).isBackup(g.cluster().localNode(), key)) { /** * @throws Exception If failed. */ + @Test public void testCreateReadRemoveInvokesFromNear() throws Exception { Ignite g0 = grid(0); @@ -618,6 +635,7 @@ public void testCreateReadRemoveInvokesFromNear() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReadRemoveInvokesFromPrimary() throws Exception { Ignite g0 = grid(0); @@ -664,6 +682,7 @@ else if (affinity(jcache).isBackup(g.cluster().localNode(), key)) { /** * @throws Exception If failed. */ + @Test public void testReadRemoveInvokesFromBackup() throws Exception { Ignite g0 = grid(0); @@ -709,6 +728,7 @@ else if (affinity(jcache).isBackup(g.cluster().localNode(), key)) { /** * @throws Exception If failed. */ + @Test public void testReadRemoveInvokesFromNear() throws Exception { Ignite g0 = grid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiGetSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiGetSelfTest.java index e27c9ad8fe9ee..4a417893d6286 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiGetSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiGetSelfTest.java @@ -20,24 +20,25 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteException; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionOptimisticException; -import org.apache.log4j.Level; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.NONE; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; @@ -49,10 +50,8 @@ /** * Test getting the same value twice within the same transaction. */ +@RunWith(JUnit4.class) public class GridCacheNearMultiGetSelfTest extends GridCommonAbstractTest { - /** Cache debug flag. */ - private static final boolean CACHE_DEBUG = false; - /** Number of gets. */ private static final int GET_CNT = 5; @@ -60,43 +59,33 @@ public class GridCacheNearMultiGetSelfTest extends GridCommonAbstractTest { private static final int GRID_CNT = 3; /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @SuppressWarnings({"ConstantConditions"}) - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - - c.getTransactionConfiguration().setTxSerializableEnabled(true); + private CacheAtomicityMode atomicityMode; + /** + * @return Cache configuration. 
+ */ + private CacheConfiguration cacheConfiguration() { CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); cc.setBackups(1); - cc.setAtomicityMode(TRANSACTIONAL); - + cc.setAtomicityMode(atomicityMode); cc.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); - cc.setRebalanceMode(NONE); + cc.setNearConfiguration(new NearCacheConfiguration()); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - - c.setCacheConfiguration(cc); - - if (CACHE_DEBUG) - resetLog4j(Level.DEBUG, false, GridCacheProcessor.class.getPackage().getName()); - - return c; + return cc; } /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { - for (int i = 0; i < GRID_CNT; i++) - startGrid(i); + startGridsMultiThreaded(GRID_CNT); + } + + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + atomicityMode = TRANSACTIONAL; } /** {@inheritDoc} */ @@ -111,6 +100,8 @@ public class GridCacheNearMultiGetSelfTest extends GridCommonAbstractTest { assertEquals("Cache size mismatch for grid [igniteInstanceName=" + g.name() + ", entrySet=" + entrySet(c) + ']', 0, c.size()); } + + grid(0).destroyCache(DEFAULT_CACHE_NAME); } /** @return {@code True} if debug enabled. */ @@ -119,101 +110,137 @@ private boolean isTestDebug() { } /** @throws Exception If failed. */ + @Test public void testOptimisticReadCommittedNoPut() throws Exception { checkDoubleGet(OPTIMISTIC, READ_COMMITTED, false); } /** @throws Exception If failed. */ + @Test public void testOptimisticReadCommittedWithPut() throws Exception { checkDoubleGet(OPTIMISTIC, READ_COMMITTED, true); } /** @throws Exception If failed. */ + @Test public void testOptimisticReadCommitted() throws Exception { checkDoubleGet(OPTIMISTIC, READ_COMMITTED, false); checkDoubleGet(OPTIMISTIC, READ_COMMITTED, true); } /** @throws Exception If failed. 
*/ + @Test public void testOptimisticRepeatableReadNoPut() throws Exception { checkDoubleGet(OPTIMISTIC, REPEATABLE_READ, false); } /** @throws Exception If failed. */ + @Test public void testOptimisticRepeatableReadWithPut() throws Exception { checkDoubleGet(OPTIMISTIC, REPEATABLE_READ, true); } /** @throws Exception If failed. */ + @Test public void testOptimisticRepeatableRead() throws Exception { checkDoubleGet(OPTIMISTIC, REPEATABLE_READ, false); checkDoubleGet(OPTIMISTIC, REPEATABLE_READ, true); } /** @throws Exception If failed. */ + @Test public void testOptimisticSerializableNoPut() throws Exception { checkDoubleGet(OPTIMISTIC, SERIALIZABLE, false); } /** @throws Exception If failed. */ + @Test public void testOptimisticSerializableWithPut() throws Exception { checkDoubleGet(OPTIMISTIC, SERIALIZABLE, true); } /** @throws Exception If failed. */ + @Test public void testOptimisticSerializable() throws Exception { checkDoubleGet(OPTIMISTIC, SERIALIZABLE, false); checkDoubleGet(OPTIMISTIC, SERIALIZABLE, true); } /** @throws Exception If failed. */ + @Test public void testPessimisticReadCommittedNoPut() throws Exception { checkDoubleGet(PESSIMISTIC, READ_COMMITTED, false); } /** @throws Exception If failed. */ + @Test public void testPessimisticReadCommittedWithPut() throws Exception { checkDoubleGet(PESSIMISTIC, READ_COMMITTED, true); } /** @throws Exception If failed. */ + @Test public void testPessimisticReadCommitted() throws Exception { checkDoubleGet(PESSIMISTIC, READ_COMMITTED, false); checkDoubleGet(PESSIMISTIC, READ_COMMITTED, true); } /** @throws Exception If failed. */ + @Test public void testPessimisticRepeatableReadNoPut() throws Exception { checkDoubleGet(PESSIMISTIC, REPEATABLE_READ, false); } /** @throws Exception If failed. */ + @Test public void testPessimisticRepeatableReadWithPut() throws Exception { checkDoubleGet(PESSIMISTIC, REPEATABLE_READ, true); } /** @throws Exception If failed. 
*/ + @Test public void testPessimisticRepeatableRead() throws Exception { checkDoubleGet(PESSIMISTIC, REPEATABLE_READ, false); checkDoubleGet(PESSIMISTIC, REPEATABLE_READ, true); } /** @throws Exception If failed. */ + @Test public void testPessimisticSerializableNoPut() throws Exception { checkDoubleGet(PESSIMISTIC, SERIALIZABLE, false); } /** @throws Exception If failed. */ + @Test public void testPessimisticSerializableWithPut() throws Exception { checkDoubleGet(PESSIMISTIC, SERIALIZABLE, true); } /** @throws Exception If failed. */ + @Test public void testPessimisticSerializable() throws Exception { checkDoubleGet(PESSIMISTIC, SERIALIZABLE, false); checkDoubleGet(PESSIMISTIC, SERIALIZABLE, true); } + /** @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testMvccPessimisticRepeatableReadNoPut() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + + checkDoubleGet(PESSIMISTIC, REPEATABLE_READ, false); + } + + /** @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testMvccPessimisticRepeatableReadWithPut() throws Exception { + atomicityMode = TRANSACTIONAL_SNAPSHOT; + + checkDoubleGet(PESSIMISTIC, REPEATABLE_READ, true); + } + /** * @param concurrency Concurrency. * @param isolation Isolation. @@ -223,7 +250,7 @@ public void testPessimisticSerializable() throws Exception { private void checkDoubleGet(TransactionConcurrency concurrency, TransactionIsolation isolation, boolean put) throws Exception { IgniteEx ignite = grid(0); - IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); + IgniteCache cache = ignite.getOrCreateCache(cacheConfiguration()); Integer key = 1; @@ -308,4 +335,4 @@ private void checkDoubleGet(TransactionConcurrency concurrency, TransactionIsola ", t=" + t + (t != tx ? 
"tx=" + tx : "tx=''") + ']'; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiNodeSelfTest.java index 06e18627ab95f..602029a2f785b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearMultiNodeSelfTest.java @@ -53,12 +53,13 @@ import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -69,6 +70,7 @@ /** * Multi node test for near cache. */ +@RunWith(JUnit4.class) public class GridCacheNearMultiNodeSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 2; @@ -79,8 +81,14 @@ public class GridCacheNearMultiNodeSelfTest extends GridCommonAbstractTest { /** Cache store. 
*/ private static TestStore store = new TestStore(); - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + + super.setUp(); + } /** Grid counter. */ private AtomicInteger cntr = new AtomicInteger(0); @@ -103,12 +111,6 @@ public GridCacheNearMultiNodeSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -247,6 +249,7 @@ private Map, Set>> mapKeys(int cnt) { } /** Test mappings. */ + @Test public void testMappings() { mapDebug = false; @@ -307,6 +310,7 @@ public void testMappings() { } /** @throws Exception If failed. */ + @Test public void testReadThroughAndPut() throws Exception { Integer key = 100000; @@ -330,6 +334,7 @@ public void testReadThroughAndPut() throws Exception { } /** @throws Exception If failed. */ + @Test public void testReadThrough() throws Exception { ClusterNode loc = grid(0).localNode(); @@ -362,7 +367,7 @@ public void testReadThrough() throws Exception { GridCacheAdapter dhtCache = dht(G.ignite(n.id())); - String s = dhtCache.localPeek(key, null, null); + String s = dhtCache.localPeek(key, null); assert s != null : "Value is null for key: " + key; assertEquals(s, Integer.toString(key)); @@ -375,6 +380,7 @@ public void testReadThrough() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings({"ConstantConditions"}) + @Test public void testOptimisticWriteThrough() throws Exception { IgniteCache near = jcache(0); @@ -420,6 +426,7 @@ public void testOptimisticWriteThrough() throws Exception { } /** @throws Exception If failed. */ + @Test public void testNoTransactionSinglePutx() throws Exception { IgniteCache near = jcache(0); @@ -437,6 +444,7 @@ public void testNoTransactionSinglePutx() throws Exception { } /** @throws Exception If failed. */ + @Test public void testNoTransactionSinglePut() throws Exception { IgniteCache near = jcache(0); @@ -479,6 +487,7 @@ public void testNoTransactionSinglePut() throws Exception { } /** @throws Exception If failed. */ + @Test public void testNoTransactionWriteThrough() throws Exception { IgniteCache near = jcache(0); @@ -508,6 +517,7 @@ public void testNoTransactionWriteThrough() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"ConstantConditions"}) + @Test public void testPessimisticWriteThrough() throws Exception { IgniteCache near = jcache(0); @@ -522,7 +532,7 @@ public void testPessimisticWriteThrough() throws Exception { assertEquals("2", near.get(2)); assertEquals("3", near.get(3)); - assertNotNull(dht(primaryGrid(3)).localPeek(3, null, null)); + assertNotNull(dht(primaryGrid(3)).localPeek(3, null)); tx.commit(); } @@ -547,6 +557,7 @@ public void testPessimisticWriteThrough() throws Exception { } /** @throws Exception If failed. */ + @Test public void testConcurrentOps() throws Exception { // Don't create missing values. store.create(false); @@ -578,11 +589,13 @@ public void testConcurrentOps() throws Exception { } /** @throws Exception If failed. */ + @Test public void testBackupsLocalAffinity() throws Exception { checkBackupConsistency(2); } /** @throws Exception If failed. 
*/ + @Test public void testBackupsRemoteAffinity() throws Exception { checkBackupConsistency(1); } @@ -609,11 +622,13 @@ private void checkBackupConsistency(int key) throws Exception { } /** @throws Exception If failed. */ + @Test public void testSingleLockLocalAffinity() throws Exception { checkSingleLock(2); } /** @throws Exception If failed. */ + @Test public void testSingleLockRemoteAffinity() throws Exception { checkSingleLock(1); } @@ -724,11 +739,13 @@ private void checkSingleLock(int key) throws Exception { } /** @throws Throwable If failed. */ + @Test public void testSingleLockReentryLocalAffinity() throws Throwable { checkSingleLockReentry(2); } /** @throws Throwable If failed. */ + @Test public void testSingleLockReentryRemoteAffinity() throws Throwable { checkSingleLockReentry(1); } @@ -791,11 +808,13 @@ private void checkSingleLockReentry(int key) throws Throwable { } /** @throws Exception If failed. */ + @Test public void testTransactionSingleGetLocalAffinity() throws Exception { checkTransactionSingleGet(2); } /** @throws Exception If failed. */ + @Test public void testTransactionSingleGetRemoteAffinity() throws Exception { checkTransactionSingleGet(1); } @@ -837,11 +856,13 @@ private void checkTransactionSingleGet(int key) throws Exception { } /** @throws Exception If failed. */ + @Test public void testTransactionSingleGetRemoveLocalAffinity() throws Exception { checkTransactionSingleGetRemove(2); } /** @throws Exception If failed. 
*/ + @Test public void testTransactionSingleGetRemoveRemoteAffinity() throws Exception { checkTransactionSingleGetRemove(1); } @@ -951,4 +972,4 @@ boolean isEmpty() { map.remove(key); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOneNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOneNodeSelfTest.java index d4f485aaaf2f7..635f80dd819c5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOneNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOneNodeSelfTest.java @@ -26,12 +26,17 @@ import org.apache.ignite.cache.store.CacheStoreAdapter; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -42,10 +47,18 @@ /** * Single node test for near cache. */ +@RunWith(JUnit4.class) public class GridCacheNearOneNodeSelfTest extends GridCommonAbstractTest { /** Cache store. 
*/ private static TestStore store = new TestStore(); + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** * */ @@ -80,8 +93,8 @@ public GridCacheNearOneNodeSelfTest() { cacheCfg.setCacheMode(PARTITIONED); cacheCfg.setBackups(1); cacheCfg.setAtomicityMode(TRANSACTIONAL); - cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); + cacheCfg.setNearConfiguration(new NearCacheConfiguration()); cacheCfg.setCacheStoreFactory(singletonFactory(store)); cacheCfg.setReadThrough(true); @@ -94,6 +107,7 @@ public GridCacheNearOneNodeSelfTest() { } /** @throws Exception If failed. */ + @Test public void testRemove() throws Exception { IgniteCache near = jcache(); @@ -122,6 +136,7 @@ public void testRemove() throws Exception { } /** @throws Exception If failed. */ + @Test public void testReadThrough() throws Exception { IgniteCache near = jcache(); @@ -151,7 +166,7 @@ public void testReadThrough() throws Exception { * * @throws Exception If failed. */ - @SuppressWarnings({"ConstantConditions"}) + @Test public void testOptimisticTxWriteThrough() throws Exception { IgniteCache near = jcache(); GridCacheAdapter dht = dht(); @@ -184,6 +199,7 @@ public void testOptimisticTxWriteThrough() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSingleLockPut() throws Exception { IgniteCache near = jcache(); @@ -206,6 +222,7 @@ public void testSingleLockPut() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSingleLock() throws Exception { IgniteCache near = jcache(); @@ -237,6 +254,7 @@ public void testSingleLock() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSingleLockReentry() throws Exception { IgniteCache near = jcache(); @@ -281,6 +299,7 @@ public void testSingleLockReentry() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testTransactionSingleGet() throws Exception { IgniteCache cache = jcache(); @@ -300,6 +319,7 @@ public void testTransactionSingleGet() throws Exception { } /** @throws Exception If failed. */ + @Test public void testTransactionSingleGetRemove() throws Exception { IgniteCache cache = jcache(); @@ -385,4 +405,4 @@ boolean isEmpty() { map.remove(key); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyMultiNodeFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyMultiNodeFullApiSelfTest.java index ed436d006d872..103abddb19f67 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyMultiNodeFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyMultiNodeFullApiSelfTest.java @@ -50,6 +50,9 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_LOCKED; @@ -58,6 +61,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheNearOnlyMultiNodeFullApiSelfTest extends GridCachePartitionedMultiNodeFullApiSelfTest { /** */ private static AtomicInteger cnt; @@ -175,6 +179,7 @@ protected boolean clientHasNearCache() { } /** {@inheritDoc} */ + @Test @Override public void testSize() throws Exception { IgniteCache nearCache = jcache(); @@ -204,29 +209,28 @@ protected boolean clientHasNearCache() { } /** {@inheritDoc} */ + @Test @Override public void testLoadAll() throws Exception { // Not needed for near-only cache. 
} /** - * TODO GG-11133. - * @throws Exception If failed. */ - public void _testReaderTtlTx() throws Exception { + @Test + public void testReaderTtlTx() throws Exception { // IgniteProcessProxy#transactions is not implemented. - if (isMultiJvm()) + if (isMultiJvm() || !txShouldBeUsed()) return; checkReaderTtl(true); } /** - * TODO GG-11133. - * @throws Exception If failed. */ - public void _testReaderTtlNoTx() throws Exception { + @Test + public void testReaderTtlNoTx() throws Exception { checkReaderTtl(false); } @@ -252,7 +256,7 @@ private void checkReaderTtl(boolean inTx) throws Exception { final String key = primaryKeysForCache(fullCache(), 1).get(0); - c.put(key, 1); + fullCache().put(key, 1); info("Finished first put."); @@ -285,6 +289,8 @@ private void checkReaderTtl(boolean inTx) throws Exception { tx.close(); } + jcache(nearIdx).get(key); // Create entry on near node. + long[] expireTimes = new long[gridCount()]; for (int i = 0; i < gridCount(); i++) { @@ -300,8 +306,8 @@ else if (i == nearIdx) if (entryTtl != null) { assertNotNull(entryTtl.get1()); assertNotNull(entryTtl.get2()); - assertEquals(ttl, (long)entryTtl.get1()); - assertTrue(entryTtl.get2() > startTime); + assertTrue("Invalid expire time [expire=" + entryTtl.get2() + ", start=" + startTime + ']', + entryTtl.get2() > startTime); expireTimes[i] = entryTtl.get2(); } } @@ -333,7 +339,6 @@ else if (i == nearIdx) if (entryTtl != null) { assertNotNull(entryTtl.get1()); assertNotNull(entryTtl.get2()); - assertEquals(ttl, (long)entryTtl.get1()); assertTrue(entryTtl.get2() > startTime); expireTimes[i] = entryTtl.get2(); } @@ -363,7 +368,6 @@ else if (i == nearIdx) if (entryTtl != null) { assertNotNull(entryTtl.get1()); assertNotNull(entryTtl.get2()); - assertEquals(ttl, (long)entryTtl.get1()); assertEquals(expireTimes[i], (long)entryTtl.get2()); } } @@ -435,6 +439,7 @@ else if (i == nearIdx) } /** {@inheritDoc} */ + @Test @Override public void testClear() throws Exception { IgniteCache nearCache = 
jcache(); IgniteCache primary = fullCache(); @@ -487,6 +492,7 @@ else if (i == nearIdx) } /** {@inheritDoc} */ + @Test @Override public void testLocalClearKeys() throws Exception { IgniteCache nearCache = jcache(); IgniteCache primary = fullCache(); @@ -554,6 +560,7 @@ else if (i == nearIdx) /** {@inheritDoc} */ @SuppressWarnings("BusyWait") + @Test @Override public void testLockUnlock() throws Exception { if (lockingEnabled()) { final CountDownLatch lockCnt = new CountDownLatch(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlySelfTest.java index bf8c1ca089fbe..ae94888cd29c1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlySelfTest.java @@ -22,6 +22,9 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesAbstractSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -32,10 +35,12 @@ * Near only self test. */ @SuppressWarnings("RedundantMethodOverride") +@RunWith(JUnit4.class) public abstract class GridCacheNearOnlySelfTest extends GridCacheClientModesAbstractSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testUpdateNearOnlyReader() throws Exception { IgniteCache dhtCache = dhtCache(); @@ -114,4 +119,4 @@ public static class CasePartitionedTransactional extends GridCacheNearOnlySelfTe return TRANSACTIONAL; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyTopologySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyTopologySelfTest.java index 2124bc8ef7e11..92df08162600d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyTopologySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearOnlyTopologySelfTest.java @@ -27,11 +27,13 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -43,16 +45,19 @@ /** * Near-only cache node startup test. */ +@RunWith(JUnit4.class) public class GridCacheNearOnlyTopologySelfTest extends GridCommonAbstractTest { - /** Shared ip finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Near only flag. 
*/ private boolean cilent; /** Use cache flag. */ private boolean cache = true; + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -71,42 +76,43 @@ public class GridCacheNearOnlyTopologySelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheCfg); } - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setForceServerMode(true); - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); return cfg; } /** @throws Exception If failed. */ + @Test public void testStartupFirstOneNode() throws Exception { checkStartupNearNode(0, 2); } /** @throws Exception If failed. */ + @Test public void testStartupLastOneNode() throws Exception { checkStartupNearNode(1, 2); } /** @throws Exception If failed. */ + @Test public void testStartupFirstTwoNodes() throws Exception { checkStartupNearNode(0, 3); } /** @throws Exception If failed. */ + @Test public void testStartupInMiddleTwoNodes() throws Exception { checkStartupNearNode(1, 3); } /** @throws Exception If failed. */ + @Test public void testStartupLastTwoNodes() throws Exception { checkStartupNearNode(2, 3); } /** @throws Exception If failed. */ + @Test public void testKeyMapping() throws Exception { try { cache = true; @@ -129,6 +135,7 @@ public void testKeyMapping() throws Exception { } /** @throws Exception If failed. */ + @Test public void testKeyMappingOnComputeNode() throws Exception { try { cache = true; @@ -160,6 +167,7 @@ public void testKeyMappingOnComputeNode() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testNodeLeave() throws Exception { try { cache = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearPartitionedClearSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearPartitionedClearSelfTest.java index cbdc8551dd7f1..d55efe1a31ec1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearPartitionedClearSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearPartitionedClearSelfTest.java @@ -26,10 +26,11 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheGenericTestStore; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -40,6 +41,7 @@ * Test clear operation in NEAR_PARTITIONED transactional cache. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class GridCacheNearPartitionedClearSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 3; @@ -53,11 +55,10 @@ public class GridCacheNearPartitionedClearSelfTest extends GridCommonAbstractTes /** */ private static CacheStore store = new GridCacheGenericTestStore<>(); - /** Shared IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + startGrids(GRID_CNT); } @@ -72,12 +73,6 @@ public class GridCacheNearPartitionedClearSelfTest extends GridCommonAbstractTes cfg.setLocalHost("127.0.0.1"); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setName(CACHE_NAME); @@ -102,6 +97,7 @@ public class GridCacheNearPartitionedClearSelfTest extends GridCommonAbstractTes * * @throws Exception If failed. */ + @Test public void testClear() throws Exception { IgniteCache cache = cacheForIndex(0); @@ -151,4 +147,4 @@ private int primaryKey0(Ignite ignite, IgniteCache cache) throws Exception { private IgniteCache cacheForIndex(int idx) { return grid(idx).cache(CACHE_NAME); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReaderPreloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReaderPreloadSelfTest.java index e143260f798d2..e085bfbbc331a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReaderPreloadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReaderPreloadSelfTest.java @@ -30,7 +30,11 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -42,6 +46,7 @@ * read it on B. Finally the key is updated again and we ensure that it was updated on the near node B as well. I.e. * with this test we ensures that node B is considered as near reader for that key in case put occurred during preload. */ +@RunWith(JUnit4.class) public class GridCacheNearReaderPreloadSelfTest extends GridCommonAbstractTest { /** Test iterations count. */ private static final int REPEAT_CNT = 10; @@ -61,6 +66,11 @@ public class GridCacheNearReaderPreloadSelfTest extends GridCommonAbstractTest { /** Cache on backup node. */ private IgniteCache cache3; + /** {@inheritDoc} */ + @Override public void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } + /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { cache1 = null; @@ -75,6 +85,7 @@ public class GridCacheNearReaderPreloadSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testNearReaderPreload() throws Exception { for (int i = 0; i < REPEAT_CNT; i++) { startUp(); @@ -198,4 +209,4 @@ private void checkCache(IgniteCache cache, int key, int expVal assert F.eq(expVal, val) : "Unexpected cache value [key=" + key + ", expected=" + expVal + ", actual=" + val + ']'; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReadersSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReadersSelfTest.java index cbb2032302ee2..4fbb1f026e904 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReadersSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearReadersSelfTest.java @@ -44,11 +44,12 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -59,19 +60,22 @@ /** * Checks that readers are properly handled. */ +@RunWith(JUnit4.class) public class GridCacheNearReadersSelfTest extends GridCommonAbstractTest { /** Number of grids. 
*/ private int grids = 2; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Grid counter. */ - private static AtomicInteger cntr = new AtomicInteger(0); + private AtomicInteger cntr = new AtomicInteger(0); /** Test cache affinity. */ private GridCacheModuloAffinityFunction aff = new GridCacheModuloAffinityFunction(); + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -92,12 +96,6 @@ public class GridCacheNearReadersSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setUserAttributes(F.asMap(GridCacheModuloAffinityFunction.IDX_ATTR, cntr.getAndIncrement())); return cfg; @@ -140,6 +138,7 @@ private Ignite grid(UUID nodeId) { } /** @throws Exception If failed. */ + @Test public void testTwoNodesTwoKeysNoBackups() throws Exception { aff.backups(0); grids = 2; @@ -233,6 +232,7 @@ public void testTwoNodesTwoKeysNoBackups() throws Exception { } /** @throws Exception If failed. */ + @Test public void testTwoNodesTwoKeysOneBackup() throws Exception { aff.backups(1); grids = 2; @@ -349,6 +349,7 @@ public void testTwoNodesTwoKeysOneBackup() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPutAllManyKeysOneReader() throws Exception { aff.backups(1); grids = 4; @@ -385,6 +386,7 @@ public void testPutAllManyKeysOneReader() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testPutAllManyKeysTwoReaders() throws Exception { aff.backups(1); grids = 5; @@ -428,6 +430,7 @@ public void testPutAllManyKeysTwoReaders() throws Exception { } /** @throws Exception If failed. */ + @Test public void testBackupEntryReaders() throws Exception { aff.backups(1); grids = 2; @@ -465,7 +468,7 @@ public void testBackupEntryReaders() throws Exception { } /** @throws Exception If failed. */ - @SuppressWarnings({"SizeReplaceableByIsEmpty"}) + @Test public void testImplicitLockReaders() throws Exception { grids = 3; aff.reset(grids, 1); @@ -545,6 +548,7 @@ public void testImplicitLockReaders() throws Exception { } /** @throws Exception If failed. */ + @Test public void testExplicitLockReaders() throws Exception { if (atomicityMode() == ATOMIC) return; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxExceptionSelfTest.java index d6e3804728cb4..277e829013756 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxExceptionSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.IgniteTxExceptionAbstractSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -26,6 +27,13 @@ * Tests near cache. 
*/ public class GridCacheNearTxExceptionSelfTest extends IgniteTxExceptionAbstractSelfTest { + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { return PARTITIONED; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxForceKeyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxForceKeyTest.java index 21b90e2bddfe5..f1acc3cf4611f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxForceKeyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxForceKeyTest.java @@ -22,10 +22,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC; @@ -34,16 +34,12 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheNearTxForceKeyTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) 
throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setAtomicityMode(TRANSACTIONAL); @@ -63,6 +59,7 @@ public class GridCacheNearTxForceKeyTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testNearTx() throws Exception { Ignite ignite0 = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxMultiNodeSelfTest.java index 07ee991ba17b8..1328fc4b89fc6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxMultiNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxMultiNodeSelfTest.java @@ -33,13 +33,13 @@ import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -52,10 +52,8 @@ /** * 
Tests near transactions. */ +@RunWith(JUnit4.class) public class GridCacheNearTxMultiNodeSelfTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int GRID_CNT = 3; @@ -78,12 +76,6 @@ public class GridCacheNearTxMultiNodeSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(cacheCfg); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - return cfg; } @@ -96,6 +88,7 @@ public class GridCacheNearTxMultiNodeSelfTest extends GridCommonAbstractTest { * @throws Exception If failed. */ @SuppressWarnings( {"unchecked"}) + @Test public void testTxCleanup() throws Exception { backups = 1; @@ -184,6 +177,7 @@ public void testTxCleanup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxReadersUpdate() throws Exception { startGridsMultiThreaded(GRID_CNT); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxPreloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxPreloadSelfTest.java index caf6cc7c7e74e..0cec87e90d9dd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxPreloadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheNearTxPreloadSelfTest.java @@ -21,6 +21,7 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.IgniteTxPreloadAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +29,13 @@ * Tests cache transaction during preloading. 
*/ public class GridCacheNearTxPreloadSelfTest extends IgniteTxPreloadAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { return PARTITIONED; @@ -42,4 +50,4 @@ public class GridCacheNearTxPreloadSelfTest extends IgniteTxPreloadAbstractTest return cfg; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinityExcludeNeighborsPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinityExcludeNeighborsPerformanceTest.java index 4bcaf88a01f8c..4d5a91fd3014c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinityExcludeNeighborsPerformanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinityExcludeNeighborsPerformanceTest.java @@ -28,10 +28,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.GridTimer; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.NONE; @@ -39,7 +39,7 @@ /** * Partitioned affinity test. 
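The hunks above and below repeat one pattern: the suite is moving from JUnit 3-style name-based test discovery to annotation-driven JUnit 4, so each test class gains `@RunWith(JUnit4.class)` and each `public void test*()` method gains `@Test`. A minimal sketch of the target shape (class and method names are hypothetical):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

/**
 * Hypothetical migrated test class. Under the old JUnit 3-style runner,
 * methods were discovered by the {@code test*} naming convention; after
 * migration the runner executes only methods carrying {@code @Test}.
 */
@RunWith(JUnit4.class)
public class ExampleMigratedSelfTest {
    /** @throws Exception If failed. */
    @Test
    public void testBasicOperation() throws Exception {
        // Test body is unchanged by the migration.
    }
}
```

The same patch also drops per-class `TcpDiscoveryVmIpFinder`/`TcpDiscoverySpi` wiring; the affected tests appear to rely instead on the discovery configuration the common test framework supplies by default.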
*/ -@SuppressWarnings({"PointlessArithmeticExpression"}) +@RunWith(JUnit4.class) public class GridCachePartitionedAffinityExcludeNeighborsPerformanceTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRIDS = 3; @@ -50,9 +50,6 @@ public class GridCachePartitionedAffinityExcludeNeighborsPerformanceTest extends /** */ private boolean excNeighbores; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static Collection msgs = new ArrayList<>(); @@ -60,12 +57,6 @@ public class GridCachePartitionedAffinityExcludeNeighborsPerformanceTest extends @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); @@ -108,6 +99,7 @@ private static Collection nodes(Affinity aff, Obj /** * @throws Exception If failed. */ + @Test public void testCountPerformance() throws Exception { excNeighbores = false; @@ -189,6 +181,7 @@ private long checkCountPerformance0(Ignite g, int cnt) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTimedPerformance() throws Exception { excNeighbores = false; @@ -253,4 +246,4 @@ private int checkTimedPerformance(long dur, String testName) throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinitySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinitySelfTest.java index b212d63ed30b3..e252a1ba7cebe 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinitySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedAffinitySelfTest.java @@ -37,11 +37,11 @@ import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.LoggerResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -53,7 +53,7 @@ /** * Partitioned affinity test. */ -@SuppressWarnings({"PointlessArithmeticExpression"}) +@RunWith(JUnit4.class) public class GridCachePartitionedAffinitySelfTest extends GridCommonAbstractTest { /** Backup count. */ private static final int BACKUPS = 1; @@ -64,9 +64,6 @@ public class GridCachePartitionedAffinitySelfTest extends GridCommonAbstractTest /** Fail flag. 
*/ private static AtomicBoolean failFlag = new AtomicBoolean(false); - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -77,14 +74,9 @@ public class GridCachePartitionedAffinitySelfTest extends GridCommonAbstractTest cacheCfg.setRebalanceMode(SYNC); cacheCfg.setAtomicityMode(TRANSACTIONAL); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setCacheConfiguration(cacheCfg); - cfg.setDiscoverySpi(spi); return cfg; } @@ -112,6 +104,7 @@ private static Collection nodes(Affinity aff, Obj } /** @throws Exception If failed. */ + @Test public void testAffinity() throws Exception { waitTopologyUpdate(); @@ -158,12 +151,12 @@ private static void partitionMap(Ignite g) { } /** @throws Exception If failed. */ - @SuppressWarnings("BusyWait") private void waitTopologyUpdate() throws Exception { GridTestUtils.waitTopologyUpdate(DEFAULT_CACHE_NAME, BACKUPS, log()); } /** @throws Exception If failed. 
*/ + @Test public void testAffinityWithPut() throws Exception { waitTopologyUpdate(); @@ -287,4 +280,4 @@ private ListenerJob(int keyCnt, String master) { EVT_CACHE_OBJECT_REMOVED); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicOpSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicOpSelfTest.java index 5c0eefeb55691..df888da1894e4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicOpSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicOpSelfTest.java @@ -22,6 +22,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheBasicOpAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -29,6 +32,7 @@ /** * Simple cache test. 
*/ +@RunWith(JUnit4.class) public class GridCachePartitionedBasicOpSelfTest extends GridCacheBasicOpAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -48,22 +52,26 @@ public class GridCachePartitionedBasicOpSelfTest extends GridCacheBasicOpAbstrac } /** {@inheritDoc} */ + @Test @Override public void testBasicOps() throws Exception { super.testBasicOps(); } /** {@inheritDoc} */ + @Test @Override public void testBasicOpsAsync() throws Exception { super.testBasicOpsAsync(); } /** {@inheritDoc} */ + @Test @Override public void testOptimisticTransaction() throws Exception { super.testOptimisticTransaction(); } /** {@inheritDoc} */ + @Test @Override public void testPutWithExpiration() throws Exception { super.testPutWithExpiration(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicStoreMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicStoreMultiNodeSelfTest.java index 91c9b22b793bd..4a1063f4f627f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicStoreMultiNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedBasicStoreMultiNodeSelfTest.java @@ -30,11 +30,12 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheTestStore; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -45,10 +46,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridCachePartitionedBasicStoreMultiNodeSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of grids to start. */ private static final int GRID_CNT = 3; @@ -57,6 +56,11 @@ public class GridCachePartitionedBasicStoreMultiNodeSelfTest extends GridCommonA /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + if (nearCacheConfiguration() != null) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + for (GridCacheTestStore store : stores) store.resetTimestamp(); } @@ -89,12 +93,6 @@ public class GridCachePartitionedBasicStoreMultiNodeSelfTest extends GridCommonA @Override protected final IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); @@ -124,6 +122,7 @@ protected NearCacheConfiguration nearCacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testPutFromPrimary() throws Exception { IgniteCache cache = jcache(0); @@ -137,6 +136,7 @@ public void testPutFromPrimary() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutFromBackup() throws Exception { IgniteCache cache = jcache(0); @@ -150,6 +150,7 @@ public void testPutFromBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutFromNear() throws Exception { IgniteCache cache = jcache(0); @@ -163,6 +164,7 @@ public void testPutFromNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsentFromPrimary() throws Exception { IgniteCache cache = jcache(0); @@ -176,6 +178,7 @@ public void testPutIfAbsentFromPrimary() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsentFromBackup() throws Exception { IgniteCache cache = jcache(0); @@ -189,6 +192,7 @@ public void testPutIfAbsentFromBackup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutIfAbsentFromNear() throws Exception { IgniteCache cache = jcache(0); @@ -202,6 +206,7 @@ public void testPutIfAbsentFromNear() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAll() throws Exception { IgniteCache cache = jcache(0); @@ -218,6 +223,7 @@ public void testPutAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultipleOperations() throws Exception { IgniteCache cache = jcache(0); @@ -279,4 +285,4 @@ static class StoreFactory implements Factory { return store; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest.java index eef5aebaacc98..3c9ce6d370024 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.lang.IgniteClosure; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for local cache. 
*/ +@RunWith(JUnit4.class) public class GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest extends GridCachePartitionedFullApiSelfTest { /** {@inheritDoc} */ @Override protected NearCacheConfiguration nearConfiguration() { @@ -49,6 +53,7 @@ public class GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest extends Grid /** * */ + @Test public void testMapKeysToNodes() { grid(0).affinity(DEFAULT_CACHE_NAME).mapKeysToNodes(Arrays.asList("1", "2")); } @@ -56,6 +61,7 @@ public void testMapKeysToNodes() { /** * */ + @Test public void testMapKeyToNode() { assert grid(0).affinity(DEFAULT_CACHE_NAME).mapKeyToNode("1") == null; } @@ -77,4 +83,4 @@ public void testMapKeyToNode() { } }; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEventSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEventSelfTest.java index 1c79db7e7a111..ce782172c2860 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEventSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEventSelfTest.java @@ -21,6 +21,7 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheEventAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -30,6 +31,13 @@ * Tests events. 
*/ public class GridCachePartitionedEventSelfTest extends GridCacheEventAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { CacheConfiguration cfg = defaultCacheConfiguration(); @@ -57,4 +65,4 @@ public class GridCachePartitionedEventSelfTest extends GridCacheEventAbstractTes @Override protected boolean partitioned() { return true; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEvictionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEvictionSelfTest.java index d6f9baff10acf..33ecded8eea8a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEvictionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedEvictionSelfTest.java @@ -30,12 +30,12 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static 
org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -49,6 +49,7 @@ /** * Tests for partitioned cache automatic eviction. */ +@RunWith(JUnit4.class) public class GridCachePartitionedEvictionSelfTest extends GridCacheAbstractSelfTest { /** */ private static final boolean TEST_INFO = true; @@ -62,9 +63,6 @@ public class GridCachePartitionedEvictionSelfTest extends GridCacheAbstractSelfT /** */ private static final int KEY_CNT = 100; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected int gridCount() { return GRID_CNT; @@ -76,12 +74,6 @@ public class GridCachePartitionedEvictionSelfTest extends GridCacheAbstractSelfT c.getTransactionConfiguration().setTxSerializableEnabled(true); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); @@ -117,6 +109,7 @@ private IgniteCache cache(ClusterNode node) { * * @throws Exception If failed. */ + @Test public void testEvictionTxPessimisticReadCommitted() throws Exception { doTestEviction(PESSIMISTIC, READ_COMMITTED); } @@ -126,6 +119,7 @@ public void testEvictionTxPessimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testEvictionTxPessimisticRepeatableRead() throws Exception { doTestEviction(PESSIMISTIC, REPEATABLE_READ); } @@ -135,6 +129,7 @@ public void testEvictionTxPessimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testEvictionTxPessimisticSerializable() throws Exception { doTestEviction(PESSIMISTIC, SERIALIZABLE); } @@ -144,6 +139,7 @@ public void testEvictionTxPessimisticSerializable() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testEvictionTxOptimisticReadCommitted() throws Exception { doTestEviction(OPTIMISTIC, READ_COMMITTED); } @@ -153,6 +149,7 @@ public void testEvictionTxOptimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testEvictionTxOptimisticRepeatableRead() throws Exception { doTestEviction(OPTIMISTIC, REPEATABLE_READ); } @@ -162,6 +159,7 @@ public void testEvictionTxOptimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testEvictionTxOptimisticSerializable() throws Exception { doTestEviction(OPTIMISTIC, SERIALIZABLE); } @@ -226,4 +224,4 @@ private void doTestEviction(TransactionConcurrency concurrency, TransactionIsola assertEquals(0, near(jcache(0)).nearSize()); assertEquals(0, near(jcache(1)).nearSize()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedExplicitLockNodeFailureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedExplicitLockNodeFailureSelfTest.java index 96fb8f65bb56c..1b806deb44cd4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedExplicitLockNodeFailureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedExplicitLockNodeFailureSelfTest.java @@ -30,10 +30,11 @@ import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -43,25 +44,22 @@ /** * Tests for node failure in transactions. */ +@RunWith(JUnit4.class) public class GridCachePartitionedExplicitLockNodeFailureSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ public static final int GRID_CNT = 4; + /** {@inheritDoc} */ + @Override public void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); c.getTransactionConfiguration().setTxSerializableEnabled(true); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); @@ -84,6 +82,7 @@ public class GridCachePartitionedExplicitLockNodeFailureSelfTest extends GridCom /** @throws Exception If check failed. 
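The recurring `beforeTest()`/`setUp()` overrides in these hunks all funnel through `MvccFeatureChecker.failIfNotSupported`, which aborts a test early when the suite runs in MVCC mode and the test exercises a feature MVCC does not yet support (near caches, cache stores, entry locks, and cache events in this patch). A self-contained sketch of that guard pattern — the enum values mirror the diff, but everything else (class name, the unsupported set, the exception type) is assumed for illustration:

```java
import java.util.EnumSet;

/** Hypothetical stand-in for the MVCC feature guard used in the diff. */
public class FeatureGuardDemo {
    /** Features named by the checks in the hunks above. */
    public enum Feature { NEAR_CACHE, CACHE_STORE, ENTRY_LOCK, CACHE_EVENTS }

    /** Assumed set of features unsupported under MVCC, for illustration. */
    private static final EnumSet<Feature> UNSUPPORTED =
        EnumSet.of(Feature.NEAR_CACHE, Feature.ENTRY_LOCK);

    /** Fails fast, so an unsupported test never reaches grid startup. */
    public static void failIfNotSupported(Feature feature) {
        if (UNSUPPORTED.contains(feature))
            throw new IllegalStateException("Not supported in MVCC mode: " + feature);
    }
}
```

In the real tests the call sits at the top of `beforeTest()`/`setUp()`, so an unsupported combination is rejected before any grids are started.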
*/ @SuppressWarnings("ErrorNotRethrown") + @Test public void testLockFromNearOrBackup() throws Exception { startGrids(GRID_CNT); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFilteredPutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFilteredPutSelfTest.java index 20c32c1ff4d7f..6c8bd925ec386 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFilteredPutSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFilteredPutSelfTest.java @@ -24,12 +24,11 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; -import org.apache.ignite.spi.discovery.DiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,31 +36,17 @@ /** * Test filtered put. 
*/ +@RunWith(JUnit4.class) public class GridCachePartitionedFilteredPutSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(discoverySpi()); cfg.setCacheConfiguration(cacheConfiguration()); return cfg; } - /** - * @return Discovery SPI; - */ - private DiscoverySpi discoverySpi() { - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - return spi; - } - /** * @return Cache configuration. */ @@ -89,6 +74,7 @@ private CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testPutAndRollbackCheckNear() throws Exception { doPutAndRollback(); @@ -100,6 +86,7 @@ public void testPutAndRollbackCheckNear() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutAndRollbackCheckDht() throws Exception { doPutAndRollback(); @@ -125,4 +112,4 @@ private void doPutAndRollback() throws Exception { assert c.localPeek(1, CachePeekMode.ONHEAP) == null; assert c.get(1) == null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFullApiSelfTest.java index ca12f702e1ab1..6674076a65e60 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedFullApiSelfTest.java @@ -20,12 +20,10 @@ import javax.cache.Cache; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheMode; -import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.GridCacheAbstractFullApiSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheAdapter; +import org.junit.Test; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -59,6 +57,7 @@ public class GridCachePartitionedFullApiSelfTest extends GridCacheAbstractFullAp /** * @throws Exception If failed. 
*/ + @Test public void testUpdate() throws Exception { if (gridCount() > 1) { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -84,4 +83,4 @@ public void testUpdate() throws Exception { assertEquals(1, cnt); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedHitsAndMissesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedHitsAndMissesSelfTest.java index 64f594084ea7e..b432fd7f5cccf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedHitsAndMissesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedHitsAndMissesSelfTest.java @@ -28,11 +28,12 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.stream.StreamReceiver; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_ASYNC; @@ -42,6 +43,7 @@ /** * Test for issue GG-3997 Total Hits and Misses display wrong value for in-memory database. */ +@RunWith(JUnit4.class) public class GridCachePartitionedHitsAndMissesSelfTest extends GridCommonAbstractTest { /** Amount of grids to start. 
*/ private static final int GRID_CNT = 3; @@ -49,18 +51,10 @@ public class GridCachePartitionedHitsAndMissesSelfTest extends GridCommonAbstrac /** Count of total numbers to generate. */ private static final int CNT = 2000; - /** IP Finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - // DiscoverySpi. - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - disco.setIpFinder(IP_FINDER); - cfg.setDiscoverySpi(disco); - // Cache. cfg.setCacheConfiguration(cacheConfiguration()); @@ -98,7 +92,10 @@ protected CacheConfiguration cacheConfiguration() throws Exception { * * @throws Exception If failed. */ + @Test public void testHitsAndMisses() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.METRICS); + startGrids(GRID_CNT); awaitPartitionMapExchange(); @@ -167,4 +164,4 @@ private static class IncrementingUpdater implements StreamReceiver { // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMetricsSelfTest.java index a67e01f4e667e..e946c435a1619 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMetricsSelfTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheTransactionalAbstractMetricsSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -32,6 +33,13 @@ public class GridCachePartitionedMetricsSelfTest extends GridCacheTransactionalA /** */ private static final int GRID_CNT = 2; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.METRICS); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeCounterSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeCounterSelfTest.java index 6d5302f29708b..02df9fd4764a0 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeCounterSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeCounterSelfTest.java @@ -49,11 +49,11 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.LoggerResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -64,7 +64,8 @@ /** * Multiple put test. */ -@SuppressWarnings({"UnusedAssignment", "TooBroadScope", "PointlessBooleanExpression", "PointlessArithmeticExpression"}) +@SuppressWarnings({"UnusedAssignment", "TooBroadScope"}) +@RunWith(JUnit4.class) public class GridCachePartitionedMultiNodeCounterSelfTest extends GridCommonAbstractTest { /** Debug flag. 
*/ private static final boolean DEBUG = false; @@ -81,9 +82,6 @@ public class GridCachePartitionedMultiNodeCounterSelfTest extends GridCommonAbst /** */ private static final String CNTR_KEY = "CNTR_KEY"; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static CountDownLatch startLatchMultiNode; @@ -105,12 +103,6 @@ public GridCachePartitionedMultiNodeCounterSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - // Default cache configuration. CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -189,6 +181,7 @@ public List grids(int max, Ignite... exclude) { } /** @throws Exception If failed. */ + @Test public void testMultiNearAndPrimary() throws Exception { // resetLog4j(Level.INFO, true, GridCacheTxManager.class.getName()); @@ -204,6 +197,7 @@ public void testMultiNearAndPrimary() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOneNearAndPrimary() throws Exception { // resetLog4j(Level.INFO, true, GridCacheTxManager.class.getName()); @@ -519,6 +513,7 @@ private void checkNearAndPrimary(int gridCnt, int priThreads, int nearThreads) t } /** @throws Exception If failed. */ + @Test public void testMultiNearAndPrimaryMultiNode() throws Exception { int gridCnt = 4; @@ -528,6 +523,7 @@ public void testMultiNearAndPrimaryMultiNode() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testOneNearAndPrimaryMultiNode() throws Exception { int gridCnt = 2; @@ -779,4 +775,4 @@ private void onPrimary() { return S.toString(IncrementItemJob.class, this); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeFullApiSelfTest.java index 4450eb6a380dd..ac85fcde26f6f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeFullApiSelfTest.java @@ -32,6 +32,7 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -68,6 +69,7 @@ public Collection affinityNodes() { /** * @throws Exception If failed. */ + @Test public void testPutAllRemoveAll() throws Exception { for (int i = 0; i < gridCount(); i++) info(">>>>> Grid" + i + ": " + grid(i).localNode().id()); @@ -95,6 +97,7 @@ public void testPutAllRemoveAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutAllPutAll() throws Exception { for (int i = 0; i < gridCount(); i++) info(">>>>> Grid" + i + ": " + grid(i).localNode().id()); @@ -134,6 +137,7 @@ public void testPutAllPutAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutDebug() throws Exception { for (int i = 0; i < gridCount(); i++) info(">>>>> Grid" + i + ": " + grid(i).localNode().id()); @@ -168,6 +172,7 @@ public void testPutDebug() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPeekPartitionedModes() throws Exception { jcache().put("key", 1); @@ -207,6 +212,7 @@ else if (!aff.isPrimaryOrBackup(grid(i).localNode(), "key")) { /** * @throws Exception If failed. */ + @Test public void testPeekAsyncPartitionedModes() throws Exception { jcache().put("key", 1); @@ -237,6 +243,7 @@ else if (!grid(i).affinity(DEFAULT_CACHE_NAME).isPrimaryOrBackup(grid(i).localNo * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testNearDhtKeySize() throws Exception { List keys = new ArrayList<>(5); @@ -312,6 +319,7 @@ else if (ignite1 == null) /** * @throws Exception If failed. */ + @Test public void testAffinity() throws Exception { for (int i = 0; i < gridCount(); i++) info("Grid " + i + ": " + grid(i).localNode().id()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeLockSelfTest.java index 3122f60b823b1..e1d77d09560e2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiNodeLockSelfTest.java @@ -21,6 +21,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheMultiNodeLockAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import 
static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +31,7 @@ /** * Test cases for multi-threaded tests. */ +@RunWith(JUnit4.class) public class GridCachePartitionedMultiNodeLockSelfTest extends GridCacheMultiNodeLockAbstractTest { /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration() { @@ -47,27 +51,32 @@ public class GridCachePartitionedMultiNodeLockSelfTest extends GridCacheMultiNod } /** {@inheritDoc} */ + @Test @Override public void testBasicLock() throws Exception { super.testBasicLock(); } /** {@inheritDoc} */ + @Test @Override public void testLockMultithreaded() throws Exception { super.testLockMultithreaded(); } /** {@inheritDoc} */ + @Test @Override public void testLockReentry() throws IgniteCheckedException { super.testLockReentry(); } /** {@inheritDoc} */ + @Test @Override public void testMultiNodeLock() throws Exception { super.testMultiNodeLock(); } /** {@inheritDoc} */ + @Test @Override public void testMultiNodeLockWithKeyLists() throws Exception { super.testMultiNodeLockWithKeyLists(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiThreadedPutGetSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiThreadedPutGetSelfTest.java index c99d482b56291..84365cd2484b0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiThreadedPutGetSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMultiThreadedPutGetSelfTest.java @@ -26,13 +26,13 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheAlwaysEvictionPolicy; import 
org.apache.ignite.internal.util.typedef.CAX; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -45,6 +45,7 @@ /** * Multithreaded partition cache put get test. */ +@RunWith(JUnit4.class) public class GridCachePartitionedMultiThreadedPutGetSelfTest extends GridCommonAbstractTest { /** */ private static final boolean TEST_INFO = true; @@ -58,9 +59,6 @@ public class GridCachePartitionedMultiThreadedPutGetSelfTest extends GridCommonA /** Number of transactions per thread. */ private static final int TX_CNT = 500; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -87,12 +85,6 @@ public class GridCachePartitionedMultiThreadedPutGetSelfTest extends GridCommonA c.setCacheConfiguration(cc); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - return c; } @@ -121,6 +113,7 @@ public class GridCachePartitionedMultiThreadedPutGetSelfTest extends GridCommonA * * @throws Exception If failed. 
*/ + @Test public void testPessimisticReadCommitted() throws Exception { doTest(PESSIMISTIC, READ_COMMITTED); } @@ -130,6 +123,7 @@ public void testPessimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testPessimisticRepeatableRead() throws Exception { doTest(PESSIMISTIC, REPEATABLE_READ); } @@ -139,6 +133,7 @@ public void testPessimisticRepeatableRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testPessimisticSerializable() throws Exception { doTest(PESSIMISTIC, SERIALIZABLE); } @@ -148,6 +143,7 @@ public void testPessimisticSerializable() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimisticReadCommitted() throws Exception { doTest(OPTIMISTIC, READ_COMMITTED); } @@ -157,6 +153,7 @@ public void testOptimisticReadCommitted() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimisticRepeatableRead() throws Exception { doTest(OPTIMISTIC, REPEATABLE_READ); } @@ -166,6 +163,7 @@ public void testOptimisticRepeatableRead() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testOptimisticSerializable() throws Exception { doTest(OPTIMISTIC, SERIALIZABLE); } @@ -209,4 +207,4 @@ private void doTest(final TransactionConcurrency concurrency, final TransactionI } }, THREAD_CNT); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxMultiThreadedSelfTest.java new file mode 100644 index 0000000000000..0034330865e67 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxMultiThreadedSelfTest.java @@ -0,0 +1,100 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed.near; + +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.processors.cache.IgniteMvccTxMultiThreadedAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; + +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Tests for multi-threaded transactions on a partitioned cache. + */ +public class GridCachePartitionedMvccTxMultiThreadedSelfTest extends IgniteMvccTxMultiThreadedAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + if (nearEnabled()) + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg = defaultCacheConfiguration(); + + ccfg.setCacheMode(PARTITIONED); + ccfg.setBackups(1); + + ccfg.setEvictionPolicy(null); + + ccfg.setWriteSynchronizationMode(FULL_SYNC); + + ccfg.setNearConfiguration(nearEnabled() ? new NearCacheConfiguration() : null); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** + * @return {@code True} if near cache is enabled.
+ */ + protected boolean nearEnabled() { + return true; + } + + /** {@inheritDoc} */ + @Override protected int gridCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int keyCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int maxKeyValue() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int threadCount() { + return 5; + } + + /** {@inheritDoc} */ + @Override protected int iterations() { + return 1000; + } + + /** {@inheritDoc} */ + @Override protected boolean isTestDebug() { + return false; + } + + /** {@inheritDoc} */ + @Override protected boolean printMemoryStats() { + return true; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxSingleThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxSingleThreadedSelfTest.java new file mode 100644 index 0000000000000..140483912d894 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxSingleThreadedSelfTest.java @@ -0,0 +1,84 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed.near; + +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.processors.cache.IgniteMvccTxSingleThreadedAbstractTest; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheRebalanceMode.NONE; + +/** + * Tests for single-threaded transactions on a partitioned cache. + */ +public class GridCachePartitionedMvccTxSingleThreadedSelfTest extends IgniteMvccTxSingleThreadedAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg = defaultCacheConfiguration(); + + ccfg.setCacheMode(PARTITIONED); + ccfg.setBackups(1); + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + + ccfg.setNearConfiguration(null); + ccfg.setEvictionPolicy(null); + + ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC); + + ccfg.setRebalanceMode(NONE); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected int gridCount() { + return 4; + } + + /** {@inheritDoc} */ + @Override protected int keyCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int maxKeyValue() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int iterations() { + return 3000; + } + + /** {@inheritDoc} */ + @Override protected boolean isTestDebug() { + return false; + } + + /** {@inheritDoc} */ + @Override protected boolean printMemoryStats() { + return true; + } +} \ No newline at end of file diff
--git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxTimeoutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxTimeoutSelfTest.java new file mode 100644 index 0000000000000..aad95aab29bff --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedMvccTxTimeoutSelfTest.java @@ -0,0 +1,47 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements.  See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License.  You may obtain a copy of the License at + * + *      http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed.near; + +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.processors.cache.distributed.IgniteMvccTxTimeoutAbstractTest; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; + +/** + * Tests transaction timeouts on a partitioned cache.
+ */ +public class GridCachePartitionedMvccTxTimeoutSelfTest extends IgniteMvccTxTimeoutAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration c = super.getConfiguration(igniteInstanceName); + + CacheConfiguration cc = defaultCacheConfiguration(); + + cc.setCacheMode(PARTITIONED); + cc.setBackups(1); + cc.setAtomicityMode(TRANSACTIONAL); + + //cc.setRebalanceMode(NONE); + + c.setCacheConfiguration(cc); + + return c; + } +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedNodeRestartTest.java index d7a0cdda00ce5..38e9d35cb0b3e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedNodeRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedNodeRestartTest.java @@ -21,6 +21,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractNodeRestartSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -29,6 +32,7 @@ /** * Test node restart.
*/ +@RunWith(JUnit4.class) public class GridCachePartitionedNodeRestartTest extends GridCacheAbstractNodeRestartSelfTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -57,77 +61,92 @@ public class GridCachePartitionedNodeRestartTest extends GridCacheAbstractNodeRe } /** {@inheritDoc} */ + @Test @Override public void testRestart() throws Exception { super.testRestart(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutTwoNodesNoBackups() throws Throwable { super.testRestartWithPutTwoNodesNoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutTwoNodesOneBackup() throws Throwable { super.testRestartWithPutTwoNodesOneBackup(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutFourNodesNoBackups() throws Throwable { super.testRestartWithPutFourNodesNoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutFourNodesOneBackups() throws Throwable { super.testRestartWithPutFourNodesOneBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutSixNodesTwoBackups() throws Throwable { super.testRestartWithPutSixNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutEightNodesTwoBackups() throws Throwable { super.testRestartWithPutEightNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutTenNodesTwoBackups() throws Throwable { super.testRestartWithPutTenNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxEightNodesTwoBackups() throws Throwable { super.testRestartWithTxEightNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxFourNodesNoBackups() throws Throwable { super.testRestartWithTxFourNodesNoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxFourNodesOneBackups() throws Throwable { 
super.testRestartWithTxFourNodesOneBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxSixNodesTwoBackups() throws Throwable { super.testRestartWithTxSixNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxTenNodesTwoBackups() throws Throwable { super.testRestartWithTxTenNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxTwoNodesNoBackups() throws Throwable { super.testRestartWithTxTwoNodesNoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxTwoNodesOneBackup() throws Throwable { super.testRestartWithTxTwoNodesOneBackup(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedOptimisticTxNodeRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedOptimisticTxNodeRestartTest.java index df9b27f771f19..75263b092bb3c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedOptimisticTxNodeRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedOptimisticTxNodeRestartTest.java @@ -23,6 +23,9 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractNodeRestartSelfTest; import org.apache.ignite.transactions.TransactionConcurrency; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -31,6 +34,7 @@ /** * Test node restart. 
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedOptimisticTxNodeRestartTest extends GridCacheAbstractNodeRestartSelfTest {
     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
@@ -77,69 +81,84 @@ protected boolean nearEnabled() {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestart() throws Exception {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithPutTwoNodesNoBackups() throws Throwable {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithPutTwoNodesOneBackup() throws Throwable {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithPutFourNodesNoBackups() throws Throwable {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithPutFourNodesOneBackups() throws Throwable {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithPutSixNodesTwoBackups() throws Throwable {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithPutEightNodesTwoBackups() throws Throwable {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithPutTenNodesTwoBackups() throws Throwable {
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithTxEightNodesTwoBackups() throws Throwable {
         super.testRestartWithTxEightNodesTwoBackups();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithTxFourNodesNoBackups() throws Throwable {
         super.testRestartWithTxFourNodesNoBackups();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithTxFourNodesOneBackups() throws Throwable {
         super.testRestartWithTxFourNodesOneBackups();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithTxSixNodesTwoBackups() throws Throwable {
         super.testRestartWithTxSixNodesTwoBackups();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithTxTenNodesTwoBackups() throws Throwable {
         super.testRestartWithTxTenNodesTwoBackups();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithTxTwoNodesNoBackups() throws Throwable {
         super.testRestartWithTxTwoNodesNoBackups();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRestartWithTxTwoNodesOneBackup() throws Throwable {
         super.testRestartWithTxTwoNodesOneBackup();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedPreloadLifecycleSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedPreloadLifecycleSelfTest.java
index 3a52d64a0de0c..862dd57ac61d4 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedPreloadLifecycleSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedPreloadLifecycleSelfTest.java
@@ -28,6 +28,9 @@
 import org.apache.ignite.resources.IgniteInstanceResource;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
@@ -36,7 +39,7 @@
 /**
  * Tests for replicated cache preloader.
  */
-@SuppressWarnings({"PublicInnerClass"})
+@RunWith(JUnit4.class)
 public class GridCachePartitionedPreloadLifecycleSelfTest extends GridCachePreloadLifecycleAbstractTest {
     /** Grid count. */
     private int gridCnt = 5;
@@ -57,7 +60,6 @@ public class GridCachePartitionedPreloadLifecycleSelfTest extends GridCachePrelo
         cc1.setRebalanceMode(preloadMode);
         cc1.setEvictionPolicy(null);
         cc1.setCacheStoreFactory(null);
-        cc1.setEvictionPolicy(null);

         // Identical configuration.
         CacheConfiguration cc2 = new CacheConfiguration(cc1);
@@ -158,6 +160,7 @@ public void checkCache(Object[] keys) throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLifecycleBean1() throws Exception {
         checkCache(keys(true, DFLT_KEYS.length, DFLT_KEYS));
     }
@@ -165,6 +168,7 @@ public void testLifecycleBean1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLifecycleBean2() throws Exception {
         checkCache(keys(false, DFLT_KEYS.length, DFLT_KEYS));
     }
@@ -172,6 +176,7 @@ public void testLifecycleBean2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLifecycleBean3() throws Exception {
         checkCache(keys(true, 500));
     }
@@ -179,6 +184,7 @@ public void testLifecycleBean3() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLifecycleBean4() throws Exception {
         checkCache(keys(false, 500));
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedStorePutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedStorePutSelfTest.java
index 96d4603fb3cc0..e63291fc09545 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedStorePutSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedStorePutSelfTest.java
@@ -18,17 +18,19 @@
 package org.apache.ignite.internal.processors.cache.distributed.near;

 import java.util.concurrent.atomic.AtomicInteger;
+import javax.cache.Cache;
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.cache.store.CacheStoreAdapter;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.cache.distributed.GridCacheModuloAffinityFunction;
 import org.apache.ignite.internal.util.typedef.F;
-import org.apache.ignite.spi.discovery.DiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -37,44 +39,31 @@
 /**
  * Test that store is called correctly on puts.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedStorePutSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final AtomicInteger CNT = new AtomicInteger(0);

     /** */
-    private IgniteCache cache1;
+    private static AtomicInteger loads;

-    /** */
-    private IgniteCache cache2;
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);

-    /** */
-    private IgniteCache cache3;
+        super.beforeTestsStarted();
+    }

     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        cfg.setDiscoverySpi(discoverySpi());
         cfg.setCacheConfiguration(cacheConfiguration());
         cfg.setUserAttributes(F.asMap(IDX_ATTR, CNT.getAndIncrement()));

         return cfg;
     }

-    /**
-     * @return Discovery SPI.
-     */
-    private DiscoverySpi discoverySpi() {
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(IP_FINDER);
-
-        return spi;
-    }
-
     /**
      * @return Cache configuration.
      */
@@ -95,9 +84,9 @@ private CacheConfiguration cacheConfiguration() {

     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
-        cache1 = startGrid(1).cache(DEFAULT_CACHE_NAME);
-        cache2 = startGrid(2).cache(DEFAULT_CACHE_NAME);
-        cache3 = startGrid(3).cache(DEFAULT_CACHE_NAME);
+        loads = new AtomicInteger();
+
+        startGridsMultiThreaded(3);
     }

     /** {@inheritDoc} */
@@ -105,21 +94,36 @@ private CacheConfiguration cacheConfiguration() {
         stopAllGrids();
     }

-    /**
-     * @throws Exception If failed.
-     */
-    public void testPutx() throws Throwable {
-        info("Putting to the first node.");
+    /** */
+    @Test
+    public void testPutShouldNotTriggerLoad() {
+        checkPut(0);
+
+        assertEquals(0, loads.get());

-        cache1.put(0, 1);
+        checkPut(1);

-        info("Putting to the second node.");
+        assertEquals(0, loads.get());

-        cache2.put(0, 2);
+        checkPut(2);

-        info("Putting to the third node.");
+        assertEquals(0, loads.get());
+    }
+
+    /** */
+    public void checkPut(int idx) {
+        IgniteCache cache = grid(idx).cache(DEFAULT_CACHE_NAME);
+
+        cache.put(0, 1);
+
+        try (Transaction tx = grid(idx).transactions().txStart()) {
+            cache.put(1, 1);
+            cache.put(2, 2);
+
+            tx.commit();
+        }

-        cache3.put(0, 3);
+        assertEquals(0, loads.get());
     }

     /**
@@ -128,13 +132,13 @@ public void testPutx() throws Throwable {
     private static class TestStore extends CacheStoreAdapter {
         /** {@inheritDoc} */
         @Override public Object load(Object key) {
-            assert false;
+            loads.incrementAndGet();

             return null;
         }

         /** {@inheritDoc} */
-        @Override public void write(javax.cache.Cache.Entry e) {
+        @Override public void write(Cache.Entry e) {
             // No-op
         }

@@ -143,4 +147,4 @@ private static class TestStore extends CacheStoreAdapter {
             // No-op
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiNodeSelfTest.java
index fd4108c90c137..a032d6c301916 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiNodeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiNodeSelfTest.java
@@ -20,6 +20,9 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.cache.IgniteTxMultiNodeAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -27,6 +30,7 @@
 /**
  * Test basic cache operations in transactions.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedTxMultiNodeSelfTest extends IgniteTxMultiNodeAbstractTest {
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -45,37 +49,44 @@ public class GridCachePartitionedTxMultiNodeSelfTest extends IgniteTxMultiNodeAb
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testPutOneEntryInTx() throws Exception {
         super.testPutOneEntryInTx();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testPutOneEntryInTxMultiThreaded() throws Exception {
         super.testPutOneEntryInTxMultiThreaded();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testPutTwoEntriesInTx() throws Exception {
         super.testPutTwoEntriesInTx();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testPutTwoEntryInTxMultiThreaded() throws Exception {
         super.testPutTwoEntryInTxMultiThreaded();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRemoveInTxQueried() throws Exception {
         super.testRemoveInTxQueried();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRemoveInTxQueriedMultiThreaded() throws Exception {
         super.testRemoveInTxQueriedMultiThreaded();
     }

     /** {@inheritDoc} */
+    @Test
     @Override public void testRemoveInTxSimple() throws Exception {
         super.testRemoveInTxSimple();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiThreadedSelfTest.java
index a1bcd46c62024..1c09b08099fc0 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiThreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxMultiThreadedSelfTest.java
@@ -22,9 +22,6 @@
 import org.apache.ignite.configuration.NearCacheConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
 import org.apache.ignite.internal.processors.cache.IgniteTxMultiThreadedAbstractTest;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.log4j.Level;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -37,9 +34,6 @@ public class GridCachePartitionedTxMultiThreadedSelfTest extends IgniteTxMultiTh
     /** Cache debug flag. */
     private static final boolean CACHE_DEBUG = false;

-    /** */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @SuppressWarnings({"ConstantConditions"})
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -60,12 +54,6 @@ public class GridCachePartitionedTxMultiThreadedSelfTest extends IgniteTxMultiTh

         c.setCacheConfiguration(cc);

-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         if (CACHE_DEBUG)
             resetLog4j(Level.DEBUG, true, GridCacheProcessor.class.getPackage().getName());
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSalvageSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSalvageSelfTest.java
index 2e4ad92351a30..fb973ba5d181d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSalvageSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSalvageSelfTest.java
@@ -30,12 +30,12 @@
 import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.IgniteSystemProperties.IGNITE_TX_SALVAGE_TIMEOUT;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -47,6 +47,7 @@
 /**
  * Test tx salvage.
  */
+@RunWith(JUnit4.class)
 public class GridCachePartitionedTxSalvageSelfTest extends GridCommonAbstractTest {
     /** Grid count. */
     private static final int GRID_CNT = 5;
@@ -66,20 +67,10 @@ public class GridCachePartitionedTxSalvageSelfTest extends GridCommonAbstractTes
     /** Salvage timeout system property value before alteration. */
     private static String salvageTimeoutOld;

-    /** Standard VM IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);

-        // Discovery.
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         CacheConfiguration cc = defaultCacheConfiguration();

         cc.setCacheMode(PARTITIONED);
@@ -123,6 +114,7 @@ public class GridCachePartitionedTxSalvageSelfTest extends GridCommonAbstractTes
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticTxSalvageBeforeTimeout() throws Exception {
         checkSalvageBeforeTimeout(OPTIMISTIC, true);
     }
@@ -130,6 +122,7 @@ public void testOptimisticTxSalvageBeforeTimeout() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticcTxSalvageBeforeTimeout() throws Exception {
         checkSalvageBeforeTimeout(PESSIMISTIC, false);
     }
@@ -137,6 +130,7 @@ public void testPessimisticcTxSalvageBeforeTimeout() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOptimisticTxSalvageAfterTimeout() throws Exception {
         checkSalvageAfterTimeout(OPTIMISTIC, true);
     }
@@ -144,6 +138,7 @@ public void testOptimisticTxSalvageAfterTimeout() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPessimisticTxSalvageAfterTimeout() throws Exception {
         checkSalvageAfterTimeout(PESSIMISTIC, false);
     }
@@ -263,4 +258,4 @@ private void checkTxsNotEmpty(GridCacheContext ctx, int exp) {
         assertEquals("Some transactions were salvaged unexpectedly", exp, size);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSingleThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSingleThreadedSelfTest.java
index 17936a7383ad2..d70776c8fe8e0 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSingleThreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxSingleThreadedSelfTest.java
@@ -22,9 +22,6 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
 import org.apache.ignite.internal.processors.cache.IgniteTxSingleThreadedAbstractTest;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.log4j.Level;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -38,9 +35,6 @@ public class GridCachePartitionedTxSingleThreadedSelfTest extends IgniteTxSingle
     /** Cache debug flag. */
     private static final boolean CACHE_DEBUG = false;

-    /** */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @SuppressWarnings({"ConstantConditions"})
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -60,12 +54,6 @@ public class GridCachePartitionedTxSingleThreadedSelfTest extends IgniteTxSingle

         cc.setRebalanceMode(NONE);

-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(spi);
-
         c.setCacheConfiguration(cc);

         if (CACHE_DEBUG)
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxTimeoutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxTimeoutSelfTest.java
index 202476e160919..1a88ae0f77936 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxTimeoutSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePartitionedTxTimeoutSelfTest.java
@@ -19,11 +19,7 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.configuration.TransactionConfiguration;
 import org.apache.ignite.internal.processors.cache.distributed.IgniteTxTimeoutAbstractTest;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -32,27 +28,10 @@
  * Simple cache test.
  */
 public class GridCachePartitionedTxTimeoutSelfTest extends IgniteTxTimeoutAbstractTest {
-    /** Transaction timeout. */
-    private static final long TIMEOUT = 50;
-
-    /** */
-    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);

-        TransactionConfiguration txCfg = c.getTransactionConfiguration();
-
-        txCfg.setTxSerializableEnabled(true);
-        txCfg.setDefaultTxTimeout(TIMEOUT);
-
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(spi);
-
         CacheConfiguration cc = defaultCacheConfiguration();

         cc.setCacheMode(PARTITIONED);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePutArrayValueSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePutArrayValueSelfTest.java
index 14a0c0f931d38..366c918cad2da 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePutArrayValueSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCachePutArrayValueSelfTest.java
@@ -22,22 +22,35 @@
 import java.io.ObjectInput;
 import java.io.ObjectOutput;
 import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest;
 import org.apache.ignite.internal.processors.cache.GridCacheInternal;
 import org.apache.ignite.internal.util.typedef.internal.S;
+import org.apache.ignite.testframework.MvccFeatureChecker;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;

 /**
  * Specific test case for GG-3946
  */
+@RunWith(JUnit4.class)
 public class GridCachePutArrayValueSelfTest extends GridCacheAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected int gridCount() {
         return 4;
     }

+    @Override protected void initStoreStrategy() throws IgniteCheckedException {
+        if (!MvccFeatureChecker.isSupported(MvccFeatureChecker.Feature.CACHE_STORE))
+            return;
+
+        super.initStoreStrategy();
+    }
+
     /** {@inheritDoc} */
     @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception {
         CacheConfiguration cacheCfg = super.cacheConfiguration(igniteInstanceName);
@@ -51,6 +64,7 @@ public class GridCachePutArrayValueSelfTest extends GridCacheAbstractSelfTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInternalKeys() throws Exception {
         assert gridCount() >= 2;
@@ -117,4 +131,4 @@ public InternalKey(long key) {
             return S.toString(InternalKey.class, this);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheRendezvousAffinityClientSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheRendezvousAffinityClientSelfTest.java
index 51c2fc249e728..71daeca500df3 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheRendezvousAffinityClientSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridCacheRendezvousAffinityClientSelfTest.java
@@ -32,10 +32,14 @@
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Tests rendezvous affinity function with CLIENT_ONLY node (GG-8768).
  */
+@RunWith(JUnit4.class)
 public class GridCacheRendezvousAffinityClientSelfTest extends GridCommonAbstractTest {
     /** Client node. */
     private boolean client;
@@ -63,6 +67,7 @@ public class GridCacheRendezvousAffinityClientSelfTest extends GridCommonAbstrac
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testClientNode() throws Exception {
         try {
             client = true;
@@ -111,4 +116,4 @@ public void testClientNode() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheStoreUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheStoreUpdateTest.java
index bf8ad78128745..6a70afa6dcaba 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheStoreUpdateTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridNearCacheStoreUpdateTest.java
@@ -39,13 +39,18 @@
 import org.apache.ignite.configuration.NearCacheConfiguration;
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Check that near cache is updated when entry loaded from store.
  */
+@RunWith(JUnit4.class)
 public class GridNearCacheStoreUpdateTest extends GridCommonAbstractTest {
     /** */
     private static final String CACHE_NAME = "cache";
@@ -71,6 +76,9 @@ public class GridNearCacheStoreUpdateTest extends GridCommonAbstractTest {

     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
+
         srv = startGrid("server");
         client = startGrid("client");
     }
@@ -83,6 +91,7 @@ public class GridNearCacheStoreUpdateTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If fail.
      */
+    @Test
     public void testAtomicUpdateNear() throws Exception {
         cache = client.createCache(cacheConfiguration(), new NearCacheConfiguration());

@@ -92,6 +101,7 @@ public void testAtomicUpdateNear() throws Exception {
     /**
      * @throws Exception If fail.
      */
+    @Test
     public void testTransactionAtomicUpdateNear() throws Exception {
         cache = client.createCache(cacheConfiguration(), new NearCacheConfiguration());

@@ -101,6 +111,7 @@ public void testTransactionAtomicUpdateNear() throws Exception {
     /**
      * @throws Exception If fail.
      */
+    @Test
     public void testPessimisticRepeatableReadUpdateNear() throws Exception {
         cache = client.createCache(cacheConfiguration().setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL),
             new NearCacheConfiguration());

@@ -111,6 +122,7 @@ public void testPessimisticRepeatableReadUpdateNear() throws Exception {
     /**
      * @throws Exception If fail.
      */
+    @Test
     public void testPessimisticReadCommittedUpdateNear() throws Exception {
         cache = client.createCache(cacheConfiguration().setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL),
             new NearCacheConfiguration());

@@ -121,6 +133,7 @@ public void testPessimisticReadCommittedUpdateNear() throws Exception {
     /**
      * @throws Exception If fail.
      */
+    @Test
     public void testOptimisticSerializableUpdateNear() throws Exception {
         cache = client.createCache(cacheConfiguration().setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL),
             new NearCacheConfiguration());
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridPartitionedBackupLoadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridPartitionedBackupLoadSelfTest.java
index 23b0e188c1a72..2b4945b1a4493 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridPartitionedBackupLoadSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/GridPartitionedBackupLoadSelfTest.java
@@ -21,15 +21,14 @@
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicInteger;
 import org.apache.ignite.IgniteCache;
-import org.apache.ignite.cache.CachePeekMode;
 import org.apache.ignite.cache.store.CacheStoreAdapter;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.DiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -37,10 +36,8 @@
 /**
  * Test that persistent store is not used when loading invalidated entry from backup node.
  */
+@RunWith(JUnit4.class)
 public class GridPartitionedBackupLoadSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int GRID_CNT = 3;
@@ -54,23 +51,11 @@ public class GridPartitionedBackupLoadSelfTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        cfg.setDiscoverySpi(discoverySpi());
         cfg.setCacheConfiguration(cacheConfiguration());

         return cfg;
     }

-    /**
-     * @return Discovery SPI.
-     */
-    private DiscoverySpi discoverySpi() {
-        TcpDiscoverySpi spi = new TcpDiscoverySpi();
-
-        spi.setIpFinder(IP_FINDER);
-
-        return spi;
-    }
-
     /**
      * @return Cache configuration.
      */
@@ -91,6 +76,9 @@ private CacheConfiguration cacheConfiguration() {

     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
+        if (MvccFeatureChecker.forcedMvcc())
+            fail("https://issues.apache.org/jira/browse/IGNITE-8582");
+
         startGridsMultiThreaded(GRID_CNT);
     }

@@ -102,6 +90,7 @@ private CacheConfiguration cacheConfiguration() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBackupLoad() throws Exception {
         grid(0).cache(DEFAULT_CACHE_NAME).put(1, 1);
@@ -159,4 +148,4 @@ public Integer get(Integer key) {
             return map.get(key);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearOnlyTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearOnlyTxTest.java
index ca12a9925ca99..6939eb376694e 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearOnlyTxTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearOnlyTxTest.java
@@ -29,7 +29,11 @@
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
@@ -39,7 +43,15 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheNearOnlyTxTest extends IgniteCacheAbstractTest {
+    /** {@inheritDoc} */
+    @Override public void setUp() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+
+        super.setUp();
+    }
+
     /** {@inheritDoc} */
     @Override protected int gridCount() {
         return 2;
@@ -76,6 +88,7 @@ public class IgniteCacheNearOnlyTxTest extends IgniteCacheAbstractTest {
     /**
      * @throws Exception If failed.
*/ + @Test public void testNearOnlyPutMultithreaded() throws Exception { final Ignite ignite1 = ignite(1); @@ -113,6 +126,7 @@ public void testNearOnlyPutMultithreaded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOptimisticTx() throws Exception { txMultithreaded(true); } @@ -120,6 +134,7 @@ public void testOptimisticTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTx() throws Exception { txMultithreaded(false); } @@ -174,6 +189,7 @@ private void txMultithreaded(final boolean optimistic) throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentTx() throws Exception { final Ignite ignite1 = ignite(1); @@ -217,4 +233,4 @@ public void testConcurrentTx() throws Exception { fut1.get(); fut2.get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearReadCommittedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearReadCommittedTest.java index ad9bce047b2b2..3a1a55d5805f3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearReadCommittedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearReadCommittedTest.java @@ -22,7 +22,11 @@ import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -32,7 +36,15 @@ 
* */ @SuppressWarnings("RedundantMethodOverride") +@RunWith(JUnit4.class) public class IgniteCacheNearReadCommittedTest extends GridCacheAbstractSelfTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 2; @@ -51,6 +63,7 @@ public class IgniteCacheNearReadCommittedTest extends GridCacheAbstractSelfTest /** * @throws Exception If failed. */ + @Test public void testReadCommittedCacheCleanup() throws Exception { IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME); @@ -70,4 +83,4 @@ public void testReadCommittedCacheCleanup() throws Exception { assertEquals(0, cache.localSize(CachePeekMode.ALL)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearTxRollbackTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearTxRollbackTest.java index 06b625681b1ab..27e27818d62ba 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearTxRollbackTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheNearTxRollbackTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -41,6 +44,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheNearTxRollbackTest extends IgniteCacheAbstractTest { /** {@inheritDoc} */ @Override protected int 
gridCount() { @@ -74,6 +78,7 @@ public class IgniteCacheNearTxRollbackTest extends IgniteCacheAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPutAllRollback() throws Exception { IgniteCache cache = jcache(0); @@ -135,4 +140,4 @@ private static class TestCommunicationSpi extends TcpCommunicationSpi { super.sendMessage(node, msg, ackClosure); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheMultithreadedUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheMultithreadedUpdateTest.java index 703da8a59c72b..08b702749db3f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheMultithreadedUpdateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheMultithreadedUpdateTest.java @@ -27,11 +27,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -40,10 +40,8 @@ /** * */ +@RunWith(JUnit4.class) public class NearCacheMultithreadedUpdateTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private 
boolean client; @@ -54,8 +52,6 @@ public class NearCacheMultithreadedUpdateTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -77,6 +73,7 @@ public class NearCacheMultithreadedUpdateTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testUpdateMultithreadedTx() throws Exception { updateMultithreaded(TRANSACTIONAL, false); } @@ -84,6 +81,7 @@ public void testUpdateMultithreadedTx() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateMultithreadedTxRestart() throws Exception { updateMultithreaded(TRANSACTIONAL, true); } @@ -91,6 +89,7 @@ public void testUpdateMultithreadedTxRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateMultithreadedAtomic() throws Exception { updateMultithreaded(ATOMIC, false); } @@ -98,6 +97,7 @@ public void testUpdateMultithreadedAtomic() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testUpdateMultithreadedAtomicRestart() throws Exception { updateMultithreaded(ATOMIC, true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCachePutAllMultinodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCachePutAllMultinodeTest.java index ca60060775b15..3af1875833627 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCachePutAllMultinodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCachePutAllMultinodeTest.java @@ -31,10 +31,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,10 +44,8 @@ /** * */ +@RunWith(JUnit4.class) public class NearCachePutAllMultinodeTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of grids to start. 
*/ private static final int GRID_CNT = 3; @@ -62,12 +60,6 @@ public class NearCachePutAllMultinodeTest extends GridCommonAbstractTest { @Override protected final IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - if (!client) { CacheConfiguration cc = defaultCacheConfiguration(); @@ -112,6 +104,7 @@ public class NearCachePutAllMultinodeTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testMultithreadedPutAll() throws Exception { final AtomicInteger idx = new AtomicInteger(); @@ -163,4 +156,4 @@ static class TestFactory implements Factory { }; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheSyncUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheSyncUpdateTest.java index d25301337b7ad..a83fadea7637b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheSyncUpdateTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NearCacheSyncUpdateTest.java @@ -24,45 +24,43 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class NearCacheSyncUpdateTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); - return cfg; + startGridsMultiThreaded(3); } /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); - startGridsMultiThreaded(3); + super.afterTestsStopped(); } /** * @throws Exception If failed. */ + @Test public void testNearCacheSyncUpdateAtomic() throws Exception { nearCacheSyncUpdateTx(ATOMIC); } @@ -70,10 +68,20 @@ public void testNearCacheSyncUpdateAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearCacheSyncUpdateTx() throws Exception { nearCacheSyncUpdateTx(TRANSACTIONAL); } + /** + * @throws Exception If failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187") + @Test + public void testNearCacheSyncUpdateMvccTx() throws Exception { + nearCacheSyncUpdateTx(TRANSACTIONAL_SNAPSHOT); + } + /** * @param atomicityMode Atomicity mode. * @throws Exception If failed. diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NoneRebalanceModeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NoneRebalanceModeSelfTest.java index 5d82f832ba920..e960762cfeead 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NoneRebalanceModeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/NoneRebalanceModeSelfTest.java @@ -22,6 +22,9 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.NONE; import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; @@ -29,6 +32,7 @@ /** * Test none rebalance mode. */ +@RunWith(JUnit4.class) public class NoneRebalanceModeSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @SuppressWarnings({"ConstantConditions"}) @@ -46,12 +50,22 @@ public class NoneRebalanceModeSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + startGrid(0); } + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + + super.afterTestsStopped(); + } + /** * @throws Exception If failed. 
*/ + @Test public void testRemoveAll() throws Exception { GridNearTransactionalCache cache = (GridNearTransactionalCache)((IgniteKernal)grid(0)).internalCache(DEFAULT_CACHE_NAME); @@ -60,4 +74,4 @@ public void testRemoveAll() throws Exception { grid(0).cache(DEFAULT_CACHE_NAME).removeAll(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/CacheManualRebalancingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/CacheManualRebalancingTest.java index 3a6ad48e3057b..0f71d13731c49 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/CacheManualRebalancingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/CacheManualRebalancingTest.java @@ -24,7 +24,6 @@ import org.apache.ignite.IgniteLogger; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CachePeekMode; -import org.apache.ignite.compute.ComputeTaskFuture; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; @@ -32,23 +31,22 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC; import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** */ +@RunWith(JUnit4.class) public class CacheManualRebalancingTest extends GridCommonAbstractTest { /** */ private static final String MYCACHE = "mycache"; - /** */ - public static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ public static final int NODES_CNT = 2; @@ -56,8 +54,6 @@ public class CacheManualRebalancingTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(final String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setCacheConfiguration(cacheConfiguration(), new CacheConfiguration(DEFAULT_CACHE_NAME)); return cfg; @@ -91,6 +87,7 @@ private static CacheConfiguration cacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testRebalance() throws Exception { // Fill cache with large dataset to make rebalancing slow. try (IgniteDataStreamer streamer = grid(0).dataStreamer(MYCACHE)) { @@ -104,11 +101,9 @@ public void testRebalance() throws Exception { int newNodeCacheSize; // Start manual rebalancing. 
- IgniteCompute compute = newNode.compute().withAsync(); + IgniteCompute compute = newNode.compute(); - compute.broadcast(new MyCallable()); - - final ComputeTaskFuture rebalanceTaskFuture = compute.future(); + final IgniteFuture rebalanceTaskFuture = compute.broadcastAsync(new MyCallable()); boolean rebalanceFinished = GridTestUtils.waitForCondition(new GridAbsPredicate() { @Override public boolean apply() { @@ -146,7 +141,7 @@ public static class MyCallable implements IgniteRunnable { assertNotNull(cache); boolean finished; - + log.info("Start rebalancing cache: " + cacheName + ", size: " + cache.localSize()); do { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.java index 96127aa5cd0f5..67a6a0ff03073 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.java @@ -36,18 +36,17 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class 
GridCacheRabalancingDelayedPartitionMapExchangeSelfTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Map of destination node ID to runnable with logic for real message sending. * To apply real message sending use run method */ private final ConcurrentHashMap rs = new ConcurrentHashMap<>(); @@ -119,6 +118,7 @@ public class DelayableCommunicationSpi extends TcpCommunicationSpi { /** * @throws Exception e if failed. */ + @Test public void test() throws Exception { IgniteEx ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingAsyncSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingAsyncSelfTest.java index 0a8698a597cd8..15830fdecc68a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingAsyncSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingAsyncSelfTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.discovery.tcp.TestTcpDiscoverySpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheRebalancingAsyncSelfTest extends GridCacheRebalancingSyncSelfTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -43,6 +47,7 @@ public class GridCacheRebalancingAsyncSelfTest extends GridCacheRebalancingSyncS /** * @throws Exception Exception. 
*/ + @Test public void testNodeFailedAtRebalancing() throws Exception { IgniteEx ignite = startGrid(0); @@ -65,4 +70,4 @@ public void testNodeFailedAtRebalancing() throws Exception { checkSupplyContextMapIsEmpty(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingCancelTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingCancelTest.java index 3965290480a3d..725080e10d4b1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingCancelTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingCancelTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache.distributed.rebalancing; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CacheRebalanceMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; @@ -31,27 +32,23 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test cases for checking cancellation rebalancing process if some events occurs. 
*/ +@RunWith(JUnit4.class) public class GridCacheRebalancingCancelTest extends GridCommonAbstractTest { /** */ private static final String DHT_PARTITIONED_CACHE = "cacheP"; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration dfltCfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)dfltCfg.getDiscoverySpi()).setIpFinder(ipFinder); - dfltCfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); return dfltCfg; @@ -62,6 +59,7 @@ public class GridCacheRebalancingCancelTest extends GridCommonAbstractTest { * * @throws Exception Exception. */ + @Test public void testClientNodeJoinAtRebalancing() throws Exception { final IgniteEx ignite0 = startGrid(0); @@ -71,6 +69,7 @@ public void testClientNodeJoinAtRebalancing() throws Exception { .setRebalanceMode(CacheRebalanceMode.ASYNC) .setBackups(1) .setRebalanceOrder(2) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setAffinity(new RendezvousAffinityFunction(false))); for (int i = 0; i < 2048; i++) diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingOrderingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingOrderingTest.java index 43db9312af217..7638acdae881b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingOrderingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingOrderingTest.java @@ -59,6 +59,9 @@ import org.apache.ignite.services.ServiceContext; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; 
+import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -102,6 +105,7 @@ * * */ +@RunWith(JUnit4.class) public class GridCacheRebalancingOrderingTest extends GridCommonAbstractTest { /** {@link Random} for test key generation. */ private final static Random RANDOM = new Random(); @@ -268,6 +272,7 @@ private ServerStarter startServers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEvents() throws Exception { Ignite ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionCountersMvccTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionCountersMvccTest.java new file mode 100644 index 0000000000000..4166cfcabb5df --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionCountersMvccTest.java @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.distributed.rebalancing; + +import org.apache.ignite.cache.CacheAtomicityMode; + +/** + * + */ +public class GridCacheRebalancingPartitionCountersMvccTest extends GridCacheRebalancingPartitionCountersTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionCountersTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionCountersTest.java index 258e36e531322..912b38373d09d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionCountersTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionCountersTest.java @@ -24,6 +24,7 @@ import java.util.HashMap; import java.util.List; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; @@ -37,10 +38,14 @@ import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheRebalancingPartitionCountersTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache"; @@ -61,6 +66,7 @@ public class GridCacheRebalancingPartitionCountersTest extends GridCommonAbstrac 
.setMaxSize(100L * 1024 * 1024)) .setWalMode(WALMode.LOG_ONLY)) .setCacheConfiguration(new CacheConfiguration(CACHE_NAME) + .setAtomicityMode(atomicityMode()) .setBackups(2) .setRebalanceBatchSize(4096) // Force to create several supply messages during rebalancing. .setAffinity( @@ -79,6 +85,13 @@ public class GridCacheRebalancingPartitionCountersTest extends GridCommonAbstrac U.delete(U.resolveWorkDirectory(U.defaultWorkDirectory(), "db", false)); } + /** + * @return Cache atomicity mode. + */ + protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.ATOMIC; + } + /** * */ @@ -93,6 +106,7 @@ private boolean contains(int[] arr, int a) { /** * Tests that after rebalancing all partition update counters have the same value on all nodes. */ + @Test public void test() throws Exception { IgniteEx ignite = (IgniteEx)startGrids(3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionDistributionTest.java index f0cbd3885ac2d..ec5d0d0b1dc64 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionDistributionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingPartitionDistributionTest.java @@ -33,11 +33,14 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.assertions.Assertion; import org.apache.ignite.testframework.junits.common.GridRollingRestartAbstractTest; - +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test the behavior of the partition rebalancing during a rolling restart. 
*/ +@RunWith(JUnit4.class) public class GridCacheRebalancingPartitionDistributionTest extends GridRollingRestartAbstractTest { /** The maximum allowable deviation from a perfect distribution. */ private static final double MAX_DEVIATION = 0.20; @@ -60,6 +63,7 @@ public class GridCacheRebalancingPartitionDistributionTest extends GridRollingRe * The test performs rolling restart and checks no server drops out and the partitions are balanced during * redistribution. */ + @Test public void testRollingRestart() throws InterruptedException { awaitPartitionMapExchange(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncCheckDataTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncCheckDataTest.java index 8a43b65d75a70..4b5e1a8f7189c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncCheckDataTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncCheckDataTest.java @@ -21,13 +21,14 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -35,19 +36,16 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheRebalancingSyncCheckDataTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setCacheMode(REPLICATED); ccfg.setRebalanceMode(SYNC); + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); cfg.setCacheConfiguration(ccfg); @@ -57,6 +55,7 @@ public class GridCacheRebalancingSyncCheckDataTest extends GridCommonAbstractTes /** * @throws Exception If failed. */ + @Test public void testDataRebalancing() throws Exception { Ignite ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncSelfTest.java index 0bb35d1fb7526..b3831d30b99ae 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingSyncSelfTest.java @@ -21,11 +21,16 @@ import java.util.List; import java.util.Map; import java.util.Random; +import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; 
+import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CacheRebalanceMode; +import org.apache.ignite.cache.query.ScanQuery; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -35,12 +40,12 @@ import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionMap; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsFullMessage; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; import org.apache.ignite.internal.util.lang.GridAbsPredicateX; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.PA; @@ -51,10 +56,15 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.GridTestUtils.SF; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assume; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; import static org.apache.ignite.cache.CacheRebalanceMode.NONE; @@ -62,15 +72,13 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheRebalancingSyncSelfTest extends GridCommonAbstractTest { /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** */ - private static final int TEST_SIZE = 100_000; + private static final int TEST_SIZE = SF.applyLB(100_000, 10_000); /** */ - private static final long TOPOLOGY_STILLNESS_TIME = 30_000L; + private static final long TOPOLOGY_STILLNESS_TIME = SF.applyLB(30_000, 5_000); /** partitioned cache name. 
*/ protected static final String CACHE_NAME_DHT_PARTITIONED = "cacheP"; @@ -103,7 +111,6 @@ public class GridCacheRebalancingSyncSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration iCfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)iCfg.getDiscoverySpi()).setIpFinder(ipFinder); ((TcpDiscoverySpi)iCfg.getDiscoverySpi()).setForceServerMode(true); TcpCommunicationSpi commSpi = new CollectingCommunicationSpi(); @@ -125,6 +132,7 @@ public class GridCacheRebalancingSyncSelfTest extends GridCommonAbstractTest { cachePCfg.setRebalanceBatchSize(1); cachePCfg.setRebalanceBatchesPrefetchCount(1); cachePCfg.setRebalanceOrder(2); + cachePCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); CacheConfiguration cachePCfg2 = new CacheConfiguration<>(DEFAULT_CACHE_NAME); @@ -133,7 +141,8 @@ public class GridCacheRebalancingSyncSelfTest extends GridCommonAbstractTest { cachePCfg2.setRebalanceMode(CacheRebalanceMode.SYNC); cachePCfg2.setBackups(1); cachePCfg2.setRebalanceOrder(2); - cachePCfg2.setRebalanceDelay(5000); + cachePCfg2.setRebalanceDelay(SF.applyLB(5000, 500)); + cachePCfg2.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); CacheConfiguration cacheRCfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); @@ -142,6 +151,8 @@ public class GridCacheRebalancingSyncSelfTest extends GridCommonAbstractTest { cacheRCfg.setRebalanceMode(CacheRebalanceMode.SYNC); cacheRCfg.setRebalanceBatchSize(1); cacheRCfg.setRebalanceBatchesPrefetchCount(Integer.MAX_VALUE); + cacheRCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + ((TcpCommunicationSpi)iCfg.getCommunicationSpi()).setSharedMemoryPort(-1); // Workaround for shmem failures when prefetch count is Integer.MAX_VALUE.
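For context on the `SF.applyLB(...)` calls introduced above: they shrink long-running test constants (entry counts, delays, timeouts) when a test scale factor is configured, while keeping them at or above a lower bound. A minimal stand-alone sketch of that lower-bound behavior follows; the class name, the setter, and the default scale of 1.0 are assumptions for illustration only — the real `GridTestUtils.SF` helper has its own configuration mechanism:

```java
/**
 * Illustrative stand-in for the scale-factor helper used in the tests above.
 * Assumption: applyLB(value, lowerBound) multiplies the full-size value by a
 * scale factor in (0, 1] and never returns less than the lower bound.
 */
public class ScaleFactorSketch {
    /** Scale factor; 1.0 means "run at full size" (hypothetical default). */
    private static double scale = 1.0;

    /** Sets the scale factor, e.g. from a CI system property. */
    public static void setScale(double s) {
        scale = s;
    }

    /** Scales {@code value} down, clamping the result at {@code lowerBound}. */
    public static int applyLB(int value, int lowerBound) {
        return Math.max((int)(value * scale), lowerBound);
    }

    public static void main(String[] args) {
        // Full-size run: constants are unchanged.
        System.out.println(applyLB(100_000, 10_000)); // prints 100000

        // Scaled-down run: 100_000 * 0.05 = 5_000, clamped up to 10_000.
        setScale(0.05);
        System.out.println(applyLB(100_000, 10_000)); // prints 10000
    }
}
```

With this shape, `TEST_SIZE = SF.applyLB(100_000, 10_000)` keeps the full workload on nightly runs but caps how small a quick run can make it.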
CacheConfiguration cacheRCfg2 = new CacheConfiguration<>(DEFAULT_CACHE_NAME); @@ -150,6 +161,7 @@ public class GridCacheRebalancingSyncSelfTest extends GridCommonAbstractTest { cacheRCfg2.setCacheMode(CacheMode.REPLICATED); cacheRCfg2.setRebalanceMode(CacheRebalanceMode.SYNC); cacheRCfg2.setRebalanceOrder(4); + cacheRCfg2.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); iCfg.setCacheConfiguration(cachePCfg, cachePCfg2, cacheRCfg, cacheRCfg2); @@ -177,12 +189,16 @@ protected void generateData(Ignite ignite, int from, int iter) { * @param iter Iteration. */ protected void generateData(Ignite ignite, String name, int from, int iter) { - for (int i = from; i < from + TEST_SIZE; i++) { - if ((i + 1) % (TEST_SIZE / 10) == 0) - log.info("Prepared " + (i + 1) * 100 / (TEST_SIZE) + "% entries. [count=" + TEST_SIZE + - ", iteration=" + iter + ", cache=" + name + "]"); + try (IgniteDataStreamer dataStreamer = ignite.dataStreamer(name)) { + dataStreamer.allowOverwrite(true); + + for (int i = from; i < from + TEST_SIZE; i++) { + if ((i + 1) % (TEST_SIZE / 10) == 0) + log.info("Prepared " + (i + 1) * 100 / (TEST_SIZE) + "% entries. [count=" + TEST_SIZE + + ", iteration=" + iter + ", cache=" + name + "]"); - ignite.cache(name).put(i, i + name.hashCode() + iter); + dataStreamer.addData(i, i + name.hashCode() + iter); + } } } @@ -192,26 +208,46 @@ protected void generateData(Ignite ignite, String name, int from, int iter) { * @param iter Iteration. 
*/ protected void checkData(Ignite ignite, int from, int iter) { - checkData(ignite, CACHE_NAME_DHT_PARTITIONED, from, iter); - checkData(ignite, CACHE_NAME_DHT_PARTITIONED_2, from, iter); - checkData(ignite, CACHE_NAME_DHT_REPLICATED, from, iter); - checkData(ignite, CACHE_NAME_DHT_REPLICATED_2, from, iter); + checkData(ignite, CACHE_NAME_DHT_PARTITIONED, from, iter, true); + checkData(ignite, CACHE_NAME_DHT_PARTITIONED_2, from, iter, true); + checkData(ignite, CACHE_NAME_DHT_REPLICATED, from, iter, true); + checkData(ignite, CACHE_NAME_DHT_REPLICATED_2, from, iter, true); } /** * @param ignite Ignite. + * @param name Cache name. * @param from Start from key. * @param iter Iteration. - * @param name Cache name. + * @param scan If {@code true}, a scan query is used instead of a per-key "get" loop. Should be {@code false} when + * run in parallel with other operations, and {@code true} otherwise, because scanning is much faster. */ - protected void checkData(Ignite ignite, String name, int from, int iter) { - for (int i = from; i < from + TEST_SIZE; i++) { - if ((i + 1) % (TEST_SIZE / 10) == 0) - log.info("<" + name + "> Checked " + (i + 1) * 100 / (TEST_SIZE) + "% entries. [count=" + TEST_SIZE + - ", iteration=" + iter + ", cache=" + name + "]"); - - assertEquals("Value does not match [key=" + i + ", cache=" + name + ']', - ignite.cache(name).get(i), i + name.hashCode() + iter); + protected void checkData(Ignite ignite, String name, int from, int iter, boolean scan) { + IgniteCache cache = ignite.cache(name); + + if (scan) { + AtomicInteger cnt = new AtomicInteger(); + + cache.query(new ScanQuery((k, v) -> k >= from && k < from + TEST_SIZE)).forEach(entry -> { + if (cnt.incrementAndGet() % (TEST_SIZE / 10) == 0) + log.info("<" + name + "> Checked " + cnt.get() * 100 / TEST_SIZE + "% entries. 
[count=" + + TEST_SIZE + ", iteration=" + iter + ", cache=" + name + "]"); + + assertEquals("Value does not match [key=" + entry.getKey() + ", cache=" + name + ']', + entry.getValue().intValue(), entry.getKey() + name.hashCode() + iter); + }); + + assertEquals(TEST_SIZE, cnt.get()); + } + else { + for (int i = from; i < from + TEST_SIZE; i++) { + if ((i + 1) % (TEST_SIZE / 10) == 0) + log.info("<" + name + "> Checked " + (i + 1) * 100 / (TEST_SIZE) + "% entries. [count=" + + TEST_SIZE + ", iteration=" + iter + ", cache=" + name + "]"); + + assertEquals("Value does not match [key=" + i + ", cache=" + name + ']', + cache.get(i).intValue(), i + name.hashCode() + iter); + } } } @@ -225,6 +261,8 @@ protected void checkData(Ignite ignite, String name, int from, int iter) { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10560") + @Test public void testSimpleRebalancing() throws Exception { IgniteKernal ignite = (IgniteKernal)startGrid(0); @@ -276,6 +314,7 @@ public void testSimpleRebalancing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadRebalancing() throws Exception { final Ignite ignite = startGrid(0); @@ -309,7 +348,7 @@ public void testLoadRebalancing() throws Exception { Thread t2 = new Thread() { @Override public void run() { while (!concurrentStartFinished) - checkData(ignite, CACHE_NAME_DHT_PARTITIONED, 0, 0); + checkData(ignite, CACHE_NAME_DHT_PARTITIONED, 0, 0, false); } }; @@ -464,7 +503,10 @@ protected void awaitPartitionMessagesAbsent() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testComplexRebalancing() throws Exception { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10561", MvccFeatureChecker.forcedMvcc()); + final Ignite ignite = startGrid(0); generateData(ignite, 0, 0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingUnmarshallingFailedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingUnmarshallingFailedSelfTest.java index 6d72a52b362de..07e652f25263b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingUnmarshallingFailedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingUnmarshallingFailedSelfTest.java @@ -22,25 +22,25 @@ import java.io.ObjectInput; import java.io.ObjectOutput; import java.util.concurrent.atomic.AtomicInteger; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.CacheRebalanceMode; import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.thread.IgniteThread; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheRebalancingUnmarshallingFailedSelfTest extends GridCommonAbstractTest { - /** */ - 
protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** partitioned cache name. */ protected static String CACHE = "cache"; @@ -109,6 +109,7 @@ public TestKey() { cfg.setCacheMode(CacheMode.PARTITIONED); cfg.setRebalanceMode(CacheRebalanceMode.SYNC); cfg.setBackups(0); + cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); iCfg.setCacheConfiguration(cfg); @@ -118,6 +119,7 @@ public TestKey() { /** * @throws Exception e. */ + @Test public void test() throws Exception { String marshClsName = GridTestProperties.getProperty(GridTestProperties.MARSH_CLASS_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingWithAsyncClearingMvccTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingWithAsyncClearingMvccTest.java new file mode 100644 index 0000000000000..e08a43f9b52fe --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingWithAsyncClearingMvccTest.java @@ -0,0 +1,31 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.distributed.rebalancing; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.junit.Ignore; + +/** + * + */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-10421") +public class GridCacheRebalancingWithAsyncClearingMvccTest extends GridCacheRebalancingWithAsyncClearingTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingWithAsyncClearingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingWithAsyncClearingTest.java index 1b176ae8a07ba..fe4d685cf86b8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingWithAsyncClearingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/GridCacheRebalancingWithAsyncClearingTest.java @@ -21,6 +21,7 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cache.PartitionLossPolicy; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; @@ -31,16 +32,20 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; -import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridCacheRebalancingWithAsyncClearingTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache"; @@ -63,7 +68,8 @@ public class GridCacheRebalancingWithAsyncClearingTest extends GridCommonAbstrac .setMaxSize(100L * 1024 * 1024)) ); - cfg.setCacheConfiguration(new CacheConfiguration(CACHE_NAME) + cfg.setCacheConfiguration(new CacheConfiguration<>(CACHE_NAME) + .setAtomicityMode(atomicityMode()) .setBackups(2) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) .setIndexedTypes(Integer.class, Integer.class) @@ -92,11 +98,19 @@ public class GridCacheRebalancingWithAsyncClearingTest extends GridCommonAbstrac cleanPersistenceDir(); } + /** + * @return Atomicity mode. + */ + protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.ATOMIC; + } + /** * Test that partition clearing doesn't block partitions map exchange. * * @throws Exception If failed. */ + @Test public void testPartitionClearingNotBlockExchange() throws Exception { System.setProperty(IgniteSystemProperties.IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE, "1"); @@ -189,6 +203,7 @@ public void testPartitionClearingNotBlockExchange() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testCorrectRebalancingCurrentlyRentingPartitions() throws Exception { IgniteEx ignite = (IgniteEx) startGrids(3); ignite.cluster().active(true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/IgniteRebalanceOnCachesStoppingOrDestroyingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/IgniteRebalanceOnCachesStoppingOrDestroyingTest.java new file mode 100644 index 0000000000000..e36eaa81501fc --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/rebalancing/IgniteRebalanceOnCachesStoppingOrDestroyingTest.java @@ -0,0 +1,325 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.distributed.rebalancing; + +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import java.util.UUID; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.CountDownLatch; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.TransactionConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.failure.StopNodeFailureHandler; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.managers.communication.GridIoMessage; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage; +import org.apache.ignite.internal.util.future.GridFutureAdapter; +import org.apache.ignite.internal.util.lang.IgniteThrowableConsumer; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.lang.IgniteRunnable; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class IgniteRebalanceOnCachesStoppingOrDestroyingTest extends 
GridCommonAbstractTest { + /** */ + private static final String CACHE_1 = "cache_1"; + + /** */ + private static final String CACHE_2 = "cache_2"; + + /** */ + private static final String CACHE_3 = "cache_3"; + + /** */ + private static final String CACHE_4 = "cache_4"; + + /** */ + private static final String GROUP_1 = "group_1"; + + /** */ + private static final String GROUP_2 = "group_2"; + + /** */ + private static final int REBALANCE_BATCH_SIZE = 50 * 1024; + + /** Number of loaded keys in each cache. */ + private static final int KEYS_SIZE = 3000; + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setCommunicationSpi(new RebalanceBlockingSPI()); + + cfg.setFailureHandler(new StopNodeFailureHandler()); + + cfg.setRebalanceThreadPoolSize(4); + + cfg.setTransactionConfiguration(new TransactionConfiguration() + .setDefaultTxTimeout(1000)); + + cfg.setDataStorageConfiguration( + new DataStorageConfiguration() + .setWalMode(WALMode.LOG_ONLY) + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setPersistenceEnabled(true) + .setMaxSize(100L * 1024 * 1024))); + + return cfg; + } + + /** + * + */ + @Test + public void testStopCachesOnDeactivationFirstGroup() throws Exception { + testStopCachesOnDeactivation(GROUP_1); + } + + /** + * + */ + @Test + public void testStopCachesOnDeactivationSecondGroup() throws Exception { + testStopCachesOnDeactivation(GROUP_2); + } + + /** + * @param groupName Group name. + * @throws Exception If failed. 
+ */ + private void testStopCachesOnDeactivation(String groupName) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10582"); + + performTest(ig -> { + ig.cluster().active(false); + + // Reactivate to avoid a possible long wait in awaitPartitionMapExchange caused by {@link CacheAffinityChangeMessage}. + ig.cluster().active(true); + + return null; + }, groupName); + } + + /** + * + */ + @Test + public void testDestroySpecificCachesInDifferentCacheGroupsFirstGroup() throws Exception { + testDestroySpecificCachesInDifferentCacheGroups(GROUP_1); + } + + /** + * + */ + @Test + public void testDestroySpecificCachesInDifferentCacheGroupsSecondGroup() throws Exception { + testDestroySpecificCachesInDifferentCacheGroups(GROUP_2); + } + + /** + * @param groupName Group name. + * @throws Exception If failed. + */ + private void testDestroySpecificCachesInDifferentCacheGroups(String groupName) throws Exception { + performTest(ig -> { + ig.destroyCaches(Arrays.asList(CACHE_1, CACHE_3)); + + return null; + }, groupName); + } + + /** + * + */ + @Test + public void testDestroySpecificCacheAndCacheGroupFirstGroup() throws Exception { + testDestroySpecificCacheAndCacheGroup(GROUP_1); + } + + /** + * + */ + @Test + public void testDestroySpecificCacheAndCacheGroupSecondGroup() throws Exception { + testDestroySpecificCacheAndCacheGroup(GROUP_2); + } + + /** + * @param groupName Group name. + * @throws Exception If failed. + */ + private void testDestroySpecificCacheAndCacheGroup(String groupName) throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10582"); + + performTest(ig -> { + ig.destroyCaches(Arrays.asList(CACHE_1, CACHE_3, CACHE_4)); + + return null; + }, groupName); + } + + /** + * @param testAction Action that triggers the stop or destruction of caches. + * @param groupName Group name. + * @throws Exception If failed. 
+ */ + private void performTest(IgniteThrowableConsumer testAction, String groupName) throws Exception { + IgniteEx ig0 = (IgniteEx)startGrids(2); + + ig0.cluster().active(true); + + stopGrid(1); + + loadData(ig0); + + IgniteEx ig1 = startGrid(1); + + RebalanceBlockingSPI commSpi = (RebalanceBlockingSPI)ig1.configuration().getCommunicationSpi(); + + // Complete futures for the groups we don't need to wait for. + commSpi.resumeRebalanceFutures.forEach((k, v) -> { + if (k != CU.cacheId(groupName)) + v.onDone(); + }); + + CountDownLatch latch = commSpi.suspendRebalanceInMiddleLatch.get(CU.cacheId(groupName)); + + assert latch != null; + + // Wait until rebalancing of the group reaches a middle point. + latch.await(); + + testAction.accept(ig0); + + // Resume rebalancing after the action is performed. + commSpi.resumeRebalanceFutures.get(CU.cacheId(groupName)).onDone(); + + awaitPartitionMapExchange(true, true, null, true); + + assertNull(grid(1).context().failure().failureContext()); + } + + /** + * @param ig Ignite instance. 
+ */ + private void loadData(Ignite ig) { + List configs = Stream.of( + F.t(CACHE_1, GROUP_1), + F.t(CACHE_2, GROUP_1), + F.t(CACHE_3, GROUP_2), + F.t(CACHE_4, GROUP_2) + ).map(names -> new CacheConfiguration<>(names.get1()) + .setGroupName(names.get2()) + .setRebalanceBatchSize(REBALANCE_BATCH_SIZE) + .setCacheMode(CacheMode.REPLICATED) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + ).collect(Collectors.toList()); + + ig.getOrCreateCaches(configs); + + configs.forEach(cfg -> { + try (IgniteDataStreamer streamer = ig.dataStreamer(cfg.getName())) { + for (int i = 0; i < KEYS_SIZE; i++) + streamer.addData(i, new byte[1024]); + + streamer.flush(); + } + }); + } + + /** + * + */ + private static class RebalanceBlockingSPI extends TcpCommunicationSpi { + /** */ + private final Map resumeRebalanceFutures = new ConcurrentHashMap<>(); + + /** */ + private final Map suspendRebalanceInMiddleLatch = new ConcurrentHashMap<>(); + + /** */ + RebalanceBlockingSPI() { + resumeRebalanceFutures.put(CU.cacheId(GROUP_1), new GridFutureAdapter()); + resumeRebalanceFutures.put(CU.cacheId(GROUP_2), new GridFutureAdapter()); + suspendRebalanceInMiddleLatch.put(CU.cacheId(GROUP_1), new CountDownLatch(3)); + suspendRebalanceInMiddleLatch.put(CU.cacheId(GROUP_2), new CountDownLatch(3)); + } + + /** {@inheritDoc} */ + @Override protected void notifyListener(UUID sndId, Message msg, IgniteRunnable msgC) { + if (msg instanceof GridIoMessage && + ((GridIoMessage)msg).message() instanceof GridDhtPartitionSupplyMessage) { + GridDhtPartitionSupplyMessage msg0 = (GridDhtPartitionSupplyMessage)((GridIoMessage)msg).message(); + + CountDownLatch latch = suspendRebalanceInMiddleLatch.get(msg0.groupId()); + + if (latch != null) { + if (latch.getCount() > 0) + latch.countDown(); + else { + resumeRebalanceFutures.get(msg0.groupId()).listen(f -> super.notifyListener(sndId, msg, msgC)); + + return; + } + } + } + + super.notifyListener(sndId, msg, msgC); + } + } +} diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheAbstractReplicatedByteArrayValuesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheAbstractReplicatedByteArrayValuesSelfTest.java index 9fd2f29c15623..b4e58fcc4ec6b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheAbstractReplicatedByteArrayValuesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheAbstractReplicatedByteArrayValuesSelfTest.java @@ -41,7 +41,7 @@ public abstract class GridCacheAbstractReplicatedByteArrayValuesSelfTest extends /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration0() { - CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + CacheConfiguration cfg = new CacheConfiguration(); cfg.setCacheMode(REPLICATED); cfg.setAtomicityMode(TRANSACTIONAL); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedLockSelfTest.java index 0595f7d1a3285..c37e814546ea6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedLockSelfTest.java @@ -19,12 +19,16 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.distributed.GridCacheLockAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * Test cases for multi-threaded tests. 
*/ +@RunWith(JUnit4.class) public class GridCacheReplicatedLockSelfTest extends GridCacheLockAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -32,7 +36,8 @@ public class GridCacheReplicatedLockSelfTest extends GridCacheLockAbstractTest { } /** {@inheritDoc} */ + @Test @Override public void testLockReentrancy() throws Throwable { fail("https://issues.apache.org/jira/browse/IGNITE-835"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMetricsSelfTest.java index 16be6733a9392..374a765ba446e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMetricsSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheTransactionalAbstractMetricsSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -29,6 +30,13 @@ public class GridCacheReplicatedMetricsSelfTest extends GridCacheTransactionalAb /** */ private static final int GRID_CNT = 2; + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.METRICS); + + super.beforeTestsStarted(); + } + /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { CacheConfiguration cfg = super.cacheConfiguration(igniteInstanceName); diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeLockSelfTest.java index a34024424ea9a..42fa698c6765f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeLockSelfTest.java @@ -19,18 +19,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheMultiNodeLockAbstractTest; +import org.junit.Ignore; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * Test cases for multi-threaded tests. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-601") public class GridCacheReplicatedMultiNodeLockSelfTest extends GridCacheMultiNodeLockAbstractTest { - /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-601"); - } - /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration() { CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -39,4 +36,4 @@ public class GridCacheReplicatedMultiNodeLockSelfTest extends GridCacheMultiNode return cacheCfg; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeSelfTest.java index f1eed3a693905..6fe1ebad96a1c 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMultiNodeSelfTest.java @@ -20,18 +20,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheMultiNodeAbstractTest; +import org.junit.Ignore; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * Test cases for multi-threaded tests. */ +@Ignore("https://issues.apache.org/jira/browse/IGNITE-601") public class GridCacheReplicatedMultiNodeSelfTest extends GridCacheMultiNodeAbstractTest { - /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-601"); - } - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -44,4 +41,4 @@ public class GridCacheReplicatedMultiNodeSelfTest extends GridCacheMultiNodeAbst return cfg; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxMultiThreadedSelfTest.java new file mode 100644 index 0000000000000..2df5d45b5bfd5 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxMultiThreadedSelfTest.java @@ -0,0 +1,83 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements.
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed.replicated; + +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.processors.cache.IgniteMvccTxMultiThreadedAbstractTest; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.REPLICATED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Tests for replicated transactions.
+ */ +public class GridCacheReplicatedMvccTxMultiThreadedSelfTest extends IgniteMvccTxMultiThreadedAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg = defaultCacheConfiguration(); + + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + ccfg.setCacheMode(REPLICATED); + ccfg.setEvictionPolicy(null); + + ccfg.setWriteSynchronizationMode(FULL_SYNC); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected int gridCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int keyCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int maxKeyValue() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int threadCount() { + return 5; + } + + /** {@inheritDoc} */ + @Override protected int iterations() { + return 1000; + } + + /** {@inheritDoc} */ + @Override protected boolean isTestDebug() { + return false; + } + + /** {@inheritDoc} */ + @Override protected boolean printMemoryStats() { + return true; + } +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxSingleThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxSingleThreadedSelfTest.java new file mode 100644 index 0000000000000..7e8063431aa45 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxSingleThreadedSelfTest.java @@ -0,0 +1,77 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed.replicated; + +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.processors.cache.IgniteMvccTxSingleThreadedAbstractTest; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.REPLICATED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Tests for replicated transactions. 
+ */ +public class GridCacheReplicatedMvccTxSingleThreadedSelfTest extends IgniteMvccTxSingleThreadedAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg = defaultCacheConfiguration(); + + ccfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + ccfg.setCacheMode(REPLICATED); + ccfg.setEvictionPolicy(null); + ccfg.setWriteSynchronizationMode(FULL_SYNC); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected int gridCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int keyCount() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int maxKeyValue() { + return 3; + } + + /** {@inheritDoc} */ + @Override protected int iterations() { + return 20; + } + + /** {@inheritDoc} */ + @Override protected boolean isTestDebug() { + return false; + } + + /** {@inheritDoc} */ + @Override protected boolean printMemoryStats() { + return true; + } +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxTimeoutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxTimeoutSelfTest.java new file mode 100644 index 0000000000000..2f953236c467b --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedMvccTxTimeoutSelfTest.java @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.distributed.replicated; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.processors.cache.distributed.IgniteMvccTxTimeoutAbstractTest; + +import static org.apache.ignite.cache.CacheMode.REPLICATED; + +/** + * Simple cache test. 
+ */ +public class GridCacheReplicatedMvccTxTimeoutSelfTest extends IgniteMvccTxTimeoutAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg = defaultCacheConfiguration(); + + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + ccfg.setCacheMode(REPLICATED); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedNodeRestartSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedNodeRestartSelfTest.java index bff8755218786..fde81ce96cfea 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedNodeRestartSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedNodeRestartSelfTest.java @@ -19,6 +19,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractNodeRestartSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -27,6 +30,7 @@ /** * Tests node restart. 
*/ +@RunWith(JUnit4.class) public class GridCacheReplicatedNodeRestartSelfTest extends GridCacheAbstractNodeRestartSelfTest { /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration() { @@ -55,81 +59,97 @@ public class GridCacheReplicatedNodeRestartSelfTest extends GridCacheAbstractNod } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutTwoNodesNoBackups() throws Throwable { super.testRestartWithPutTwoNodesNoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutTwoNodesOneBackup() throws Throwable { // No-op. } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutFourNodesOneBackups() throws Throwable { super.testRestartWithPutFourNodesOneBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutFourNodesNoBackups() throws Throwable { // No-op. } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutSixNodesTwoBackups() throws Throwable { super.testRestartWithPutSixNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutEightNodesTwoBackups() throws Throwable { super.testRestartWithPutEightNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithPutTenNodesTwoBackups() throws Throwable { super.testRestartWithPutTenNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxTwoNodesNoBackups() throws Throwable { // No-op. } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxTwoNodesOneBackup() throws Throwable { super.testRestartWithTxTwoNodesOneBackup(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxFourNodesOneBackups() throws Throwable { super.testRestartWithTxFourNodesOneBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxFourNodesNoBackups() throws Throwable { // No-op. 
} /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxSixNodesTwoBackups() throws Throwable { super.testRestartWithTxSixNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxEightNodesTwoBackups() throws Throwable { super.testRestartWithTxEightNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxTenNodesTwoBackups() throws Throwable { super.testRestartWithTxTenNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxPutAllTenNodesTwoBackups() throws Throwable { super.testRestartWithTxPutAllTenNodesTwoBackups(); } /** {@inheritDoc} */ + @Test @Override public void testRestartWithTxPutAllFourNodesTwoBackups() throws Throwable { super.testRestartWithTxPutAllFourNodesTwoBackups(); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiNodeBasicTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiNodeBasicTest.java index ae97c0b15ed4b..35b8d9bc6cb9a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiNodeBasicTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiNodeBasicTest.java @@ -20,6 +20,9 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.IgniteTxMultiNodeAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -27,6 +30,7 @@ /** * Test basic cache operations in transactions. 
*/ +@RunWith(JUnit4.class) public class GridCacheReplicatedTxMultiNodeBasicTest extends IgniteTxMultiNodeAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -44,36 +48,43 @@ public class GridCacheReplicatedTxMultiNodeBasicTest extends IgniteTxMultiNodeAb } /** {@inheritDoc} */ + @Test @Override public void testPutOneEntryInTx() throws Exception { super.testPutOneEntryInTx(); } /** {@inheritDoc} */ + @Test @Override public void testPutTwoEntriesInTx() throws Exception { super.testPutTwoEntriesInTx(); } /** {@inheritDoc} */ + @Test @Override public void testPutOneEntryInTxMultiThreaded() throws Exception { super.testPutOneEntryInTxMultiThreaded(); } /** {@inheritDoc} */ + @Test @Override public void testPutTwoEntryInTxMultiThreaded() throws Exception { super.testPutTwoEntryInTxMultiThreaded(); } /** {@inheritDoc} */ + @Test @Override public void testRemoveInTxQueried() throws Exception { super.testRemoveInTxQueried(); } /** {@inheritDoc} */ + @Test @Override public void testRemoveInTxSimple() throws Exception { super.testRemoveInTxSimple(); } /** {@inheritDoc} */ + @Test @Override public void testRemoveInTxQueriedMultiThreaded() throws Exception { super.testRemoveInTxQueriedMultiThreaded(); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiThreadedSelfTest.java index bcd7e582aeeee..30c79d7f49340 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiThreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxMultiThreadedSelfTest.java @@ -20,12 +20,7 @@ import 
org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; -import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.processors.cache.IgniteTxMultiThreadedAbstractTest; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.log4j.Level; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -34,15 +29,6 @@ * Tests for replicated transactions. */ public class GridCacheReplicatedTxMultiThreadedSelfTest extends IgniteTxMultiThreadedAbstractTest { - /** Cache debug flag. */ - private static final boolean CACHE_DEBUG = false; - - /** Log to file flag. */ - private static final boolean LOG_TO_FILE = true; - - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @SuppressWarnings({"unchecked"}) @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -50,8 +36,6 @@ public class GridCacheReplicatedTxMultiThreadedSelfTest extends IgniteTxMultiThr TransactionConfiguration tCfg = new TransactionConfiguration(); - tCfg.setTxSerializableEnabled(true); - c.setTransactionConfiguration(tCfg); CacheConfiguration cc = defaultCacheConfiguration(); @@ -64,15 +48,6 @@ public class GridCacheReplicatedTxMultiThreadedSelfTest extends IgniteTxMultiThr c.setCacheConfiguration(cc); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - - if (CACHE_DEBUG) - resetLog4j(Level.DEBUG, LOG_TO_FILE, GridCacheProcessor.class.getPackage().getName()); - return c; } diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxOriginatingNodeFailureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxOriginatingNodeFailureSelfTest.java index 79308c8037e91..8730c5c3082ca 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxOriginatingNodeFailureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxOriginatingNodeFailureSelfTest.java @@ -35,4 +35,4 @@ public class GridCacheReplicatedTxOriginatingNodeFailureSelfTest extends @Override protected Class ignoreMessageClass() { return GridDistributedTxPrepareRequest.class; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxSingleThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxSingleThreadedSelfTest.java index 68d8a93e4b44d..80b2f3b4b22fc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxSingleThreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxSingleThreadedSelfTest.java @@ -19,12 +19,7 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.processors.cache.GridCacheProcessor; import org.apache.ignite.internal.processors.cache.IgniteTxSingleThreadedAbstractTest; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.log4j.Level; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -33,42 +28,19 @@ * Tests for replicated transactions. */ public class GridCacheReplicatedTxSingleThreadedSelfTest extends IgniteTxSingleThreadedAbstractTest { - /** Cache debug flag. */ - private static final boolean CACHE_DEBUG = false; - - /** Log to file flag. */ - private static final boolean LOG_TO_FILE = true; - - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @SuppressWarnings({"unchecked"}) + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - - c.getTransactionConfiguration().setTxSerializableEnabled(true); - - CacheConfiguration cc = defaultCacheConfiguration(); - - cc.setCacheMode(REPLICATED); - - cc.setEvictionPolicy(null); - - cc.setWriteSynchronizationMode(FULL_SYNC); - - c.setCacheConfiguration(cc); - - TcpDiscoverySpi spi = new TcpDiscoverySpi(); + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - spi.setIpFinder(ipFinder); + CacheConfiguration ccfg = defaultCacheConfiguration(); - c.setDiscoverySpi(spi); + ccfg.setCacheMode(REPLICATED); + ccfg.setEvictionPolicy(null); + ccfg.setWriteSynchronizationMode(FULL_SYNC); - if (CACHE_DEBUG) - resetLog4j(Level.DEBUG, LOG_TO_FILE, GridCacheProcessor.class.getPackage().getName()); + cfg.setCacheConfiguration(ccfg); - return c; + return cfg; } /** {@inheritDoc} */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxTimeoutSelfTest.java 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxTimeoutSelfTest.java index a63a302692028..9a748ba30543d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxTimeoutSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedTxTimeoutSelfTest.java @@ -20,9 +20,6 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.IgniteTxTimeoutAbstractTest; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -30,31 +27,16 @@ * Simple cache test. */ public class GridCacheReplicatedTxTimeoutSelfTest extends IgniteTxTimeoutAbstractTest { - /** Transaction timeout. 
*/ - private static final long TIMEOUT = 50; - - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - c.getTransactionConfiguration().setDefaultTxTimeout(TIMEOUT); - c.getTransactionConfiguration().setTxSerializableEnabled(true); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(REPLICATED); c.setCacheConfiguration(cc); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - c.setDiscoverySpi(spi); - return c; } } \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheSyncReplicatedPreloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheSyncReplicatedPreloadSelfTest.java index e55a43497dc42..6bdbd56d56ad1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheSyncReplicatedPreloadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheSyncReplicatedPreloadSelfTest.java @@ -23,11 +23,12 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -37,10 +38,8 @@ /** * Multithreaded tests for replicated cache preloader. */ +@RunWith(JUnit4.class) public class GridCacheSyncReplicatedPreloadSelfTest extends GridCommonAbstractTest { - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * Constructs test. */ @@ -52,12 +51,6 @@ public GridCacheSyncReplicatedPreloadSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(REPLICATED); @@ -86,7 +79,11 @@ public GridCacheSyncReplicatedPreloadSelfTest() { * @throws Exception If test failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testNodeRestart() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10082"); + int keyCnt = 1000; int retries = 20; @@ -116,6 +113,7 @@ public void testNodeRestart() throws Exception { * @throws Exception If test failed. 
*/ @SuppressWarnings({"TooBroadScope"}) + @Test public void testNodeRestartMultithreaded() throws Exception { final int keyCnt = 1000; final int retries = 50; @@ -157,4 +155,4 @@ public void testNodeRestartMultithreaded() throws Exception { }, threadCnt); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridReplicatedTxPreloadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridReplicatedTxPreloadTest.java index 393cfb93b0386..98d86fd6290c8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridReplicatedTxPreloadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridReplicatedTxPreloadTest.java @@ -20,12 +20,16 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.distributed.IgniteTxPreloadAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * Tests cache transaction during preloading. 
*/ +@RunWith(JUnit4.class) public class GridReplicatedTxPreloadTest extends IgniteTxPreloadAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -38,6 +42,7 @@ public class GridReplicatedTxPreloadTest extends IgniteTxPreloadAbstractTest { } /** {@inheritDoc} */ + @Test @Override public void testLocalTxPreloadingOptimistic() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-1755"); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheSyncRebalanceModeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheSyncRebalanceModeSelfTest.java index 8f96639e5302d..2caf38fa3c675 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheSyncRebalanceModeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheSyncRebalanceModeSelfTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheSyncRebalanceModeSelfTest extends GridCommonAbstractTest { /** Entry count. */ public static final int CNT = 100_000; @@ -60,6 +64,7 @@ public class IgniteCacheSyncRebalanceModeSelfTest extends GridCommonAbstractTest /** * @throws Exception if failed. */ + @Test public void testStaticCache() throws Exception { IgniteEx ignite = startGrid(0); @@ -85,6 +90,7 @@ public void testStaticCache() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testDynamicCache() throws Exception { IgniteEx ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadLifecycleSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadLifecycleSelfTest.java index 7a55c8a0bae76..f178e24c7bf80 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadLifecycleSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadLifecycleSelfTest.java @@ -28,6 +28,9 @@ import org.apache.ignite.lifecycle.LifecycleBean; import org.apache.ignite.lifecycle.LifecycleEventType; import org.apache.ignite.resources.IgniteInstanceResource; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -38,7 +41,7 @@ /** * Tests for replicated cache preloader. */ -@SuppressWarnings({"PublicInnerClass"}) +@RunWith(JUnit4.class) public class GridCacheReplicatedPreloadLifecycleSelfTest extends GridCachePreloadLifecycleAbstractTest { /** */ private static boolean quiet = true; @@ -160,6 +163,7 @@ public void checkCache(Object[] keys) throws Exception { /** * @throws Exception If failed. */ + @Test public void testLifecycleBean1() throws Exception { checkCache(keys(true, DFLT_KEYS.length, DFLT_KEYS)); } @@ -167,6 +171,7 @@ public void testLifecycleBean1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLifecycleBean2() throws Exception { checkCache(keys(false, DFLT_KEYS.length, DFLT_KEYS)); } @@ -174,6 +179,7 @@ public void testLifecycleBean2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLifecycleBean3() throws Exception { checkCache(keys(true, 500)); } @@ -181,6 +187,7 @@ public void testLifecycleBean3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLifecycleBean4() throws Exception { checkCache(keys(false, 500)); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadSelfTest.java index f1140916d507d..a3413a407ff2c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadSelfTest.java @@ -17,12 +17,9 @@ package org.apache.ignite.internal.processors.cache.distributed.replicated.preloader; -import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; -import java.util.HashSet; import java.util.Iterator; -import java.util.List; import java.util.Map; import java.util.Random; import java.util.UUID; @@ -39,10 +36,7 @@ import org.apache.ignite.cache.CacheEntryEventSerializableFilter; import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.cache.CacheRebalanceMode; -import org.apache.ignite.cache.affinity.AffinityFunction; -import org.apache.ignite.cache.affinity.AffinityFunctionContext; import org.apache.ignite.cache.affinity.AffinityKeyMapper; -import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import 
org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.Event; @@ -51,16 +45,18 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; +import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.P2; -import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.plugin.CachePluginConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheRebalanceMode.ASYNC; @@ -77,6 +73,7 @@ * Tests for replicated cache preloader. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class GridCacheReplicatedPreloadSelfTest extends GridCommonAbstractTest { /** */ private CacheRebalanceMode preloadMode = ASYNC; @@ -84,9 +81,6 @@ public class GridCacheReplicatedPreloadSelfTest extends GridCommonAbstractTest { /** */ private int batchSize = 4096; - /** */ - private int poolSize = 2; - /** */ private volatile boolean extClassloadingAtCfg = false; @@ -99,9 +93,6 @@ public class GridCacheReplicatedPreloadSelfTest extends GridCommonAbstractTest { /** Disable p2p. 
*/ private volatile boolean disableP2p = false; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static volatile CountDownLatch latch; @@ -119,11 +110,7 @@ public class GridCacheReplicatedPreloadSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); + cfg.setRebalanceThreadPoolSize(2); cfg.setCacheConfiguration(cacheConfiguration(igniteInstanceName)); @@ -173,7 +160,6 @@ CacheConfiguration cacheConfiguration(String igniteInstanceName) { cacheCfg.setWriteSynchronizationMode(FULL_SYNC); cacheCfg.setRebalanceMode(preloadMode); cacheCfg.setRebalanceBatchSize(batchSize); - cacheCfg.setRebalanceThreadPoolSize(poolSize); if (extClassloadingAtCfg) loadExternalClassesToCfg(cacheCfg); @@ -220,6 +206,7 @@ private void loadExternalClassesToCfg(CacheConfiguration cacheCfg) { /** * @throws Exception If failed. */ + @Test public void testSingleNode() throws Exception { preloadMode = SYNC; @@ -234,6 +221,7 @@ public void testSingleNode() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testIntegrity() throws Exception { preloadMode = SYNC; @@ -309,6 +297,7 @@ public void testIntegrity() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDeployment() throws Exception { // TODO GG-11141. if (true) @@ -387,6 +376,7 @@ public void testDeployment() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testExternalClassesAtConfiguration() throws Exception { try { extClassloadingAtCfg = true; @@ -442,6 +432,7 @@ public void testExternalClassesAtConfiguration() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testExternalClassesAtConfigurationDynamicStart() throws Exception { try { extClassloadingAtCfg = false; @@ -480,6 +471,7 @@ public void testExternalClassesAtConfigurationDynamicStart() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testExternalClassesAtConfigurationDynamicStart2() throws Exception { try { extClassloadingAtCfg = false; @@ -518,6 +510,7 @@ public void testExternalClassesAtConfigurationDynamicStart2() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testExternalClassesAtMessage() throws Exception { try { useExtClassLoader = true; @@ -570,15 +563,20 @@ public void testExternalClassesAtMessage() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testExternalClassesAtEventP2pDisabled() throws Exception { - testExternalClassesAtEvent0(true); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + testExternalClassesAtEvent0(true); } /** * @throws Exception If test failed. */ + @Test public void testExternalClassesAtEvent() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + testExternalClassesAtEvent0(false); } @@ -625,6 +623,7 @@ private void testExternalClassesAtEvent0(boolean p2p) throws Exception { /** * @throws Exception If test failed. */ + @Test public void testSync() throws Exception { preloadMode = SYNC; batchSize = 512; @@ -649,6 +648,7 @@ public void testSync() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testAsync() throws Exception { preloadMode = ASYNC; batchSize = 256; @@ -656,54 +656,34 @@ public void testAsync() throws Exception { try { IgniteCache cache1 = startGrid(1).cache(DEFAULT_CACHE_NAME); - int keyCnt = 2000; + final int keyCnt = 2000; for (int i = 0; i < keyCnt; i++) cache1.put(i, "val" + i); - IgniteCache cache2 = startGrid(2).cache(DEFAULT_CACHE_NAME); + final IgniteCache cache2 = startGrid(2).cache(DEFAULT_CACHE_NAME); int size = cache2.localSize(CachePeekMode.ALL); info("Size of cache2: " + size); - assert waitCacheSize(cache2, keyCnt, getTestTimeout()) : - "Actual cache size: " + cache2.localSize(CachePeekMode.ALL); + boolean awaitSize = GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + return cache2.localSize(CachePeekMode.ALL) >= keyCnt; + } + }, getTestTimeout()); + + assertTrue("Actual cache size: " + cache2.localSize(CachePeekMode.ALL), awaitSize); } finally { stopAllGrids(); } } - /** - * @param cache Cache. - * @param expSize Lower bound of expected size. - * @param timeout Timeout. - * @return {@code true} if success. - * @throws InterruptedException If thread was interrupted. - */ - @SuppressWarnings({"BusyWait"}) - private boolean waitCacheSize(IgniteCache cache, int expSize, long timeout) - throws InterruptedException { - assert cache != null; - assert expSize > 0; - assert timeout >= 0; - - long end = System.currentTimeMillis() + timeout; - - while (cache.localSize(CachePeekMode.ALL) < expSize) { - Thread.sleep(50); - - if (end - System.currentTimeMillis() <= 0) - break; - } - - return cache.localSize(CachePeekMode.ALL) >= expSize; - } - /** * @throws Exception If test failed. */ + @Test public void testBatchSize1() throws Exception { preloadMode = SYNC; batchSize = 1; // 1 byte but one entry should be in batch anyway. @@ -728,6 +708,7 @@ public void testBatchSize1() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testBatchSize1000() throws Exception { preloadMode = SYNC; batchSize = 1000; // 1000 bytes. @@ -752,6 +733,7 @@ public void testBatchSize1000() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testBatchSize10000() throws Exception { preloadMode = SYNC; batchSize = 10000; // 10000 bytes. @@ -777,7 +759,11 @@ public void testBatchSize10000() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testMultipleNodes() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10082"); + preloadMode = ASYNC; batchSize = 256; @@ -788,7 +774,7 @@ public void testMultipleNodes() throws Exception { info("Beginning data population..."); - int cnt = 2500; + final int cnt = 2500; Map map = null; @@ -816,7 +802,7 @@ assert grid(gridIdx).cache(DEFAULT_CACHE_NAME).localSize(CachePeekMode.ALL) == c info("Cache size is OK for grid index: " + gridIdx); } - IgniteCache lastCache = startGrid(gridCnt).cache(DEFAULT_CACHE_NAME); + final IgniteCache lastCache = startGrid(gridCnt).cache(DEFAULT_CACHE_NAME); // Let preloading start. Thread.sleep(1000); @@ -828,8 +814,15 @@ assert grid(gridIdx).cache(DEFAULT_CACHE_NAME).localSize(CachePeekMode.ALL) == c stopGrid(idx); - assert waitCacheSize(lastCache, cnt, 20 * 1000) : - "Actual cache size: " + lastCache.localSize(CachePeekMode.ALL); + awaitPartitionMapExchange(true, true, null); + + boolean awaitSize = GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + return lastCache.localSize(CachePeekMode.ALL) >= cnt; + } + }, 20_000); + + assertTrue("Actual cache size: " + lastCache.localSize(CachePeekMode.ALL), awaitSize); } finally { stopAllGrids(); @@ -839,6 +832,7 @@ assert waitCacheSize(lastCache, cnt, 20 * 1000) : /** * @throws Exception If test failed. 
*/ + @Test public void testConcurrentStartSync() throws Exception { preloadMode = SYNC; batchSize = 10000; @@ -854,6 +848,7 @@ public void testConcurrentStartSync() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testConcurrentStartAsync() throws Exception { preloadMode = ASYNC; batchSize = 10000; @@ -866,65 +861,6 @@ public void testConcurrentStartAsync() throws Exception { } } - /** - * Test affinity. - */ - @SuppressWarnings({"PublicInnerClass"}) - private static class TestAffinityFunction implements AffinityFunction { - /** {@inheritDoc} */ - @Override public int partitions() { - return 2; - } - - /** {@inheritDoc} */ - @Override public int partition(Object key) { - if (key instanceof Number) - return ((Number)key).intValue() % 2; - - return key == null ? 0 : U.safeAbs(key.hashCode() % 2); - } - - /** {@inheritDoc} */ - @Override public List> assignPartitions(AffinityFunctionContext affCtx) { - List> res = new ArrayList<>(partitions()); - - for (int part = 0; part < partitions(); part++) - res.add(nodes(part, affCtx.currentTopologySnapshot())); - - return res; - } - - /** {@inheritDoc} */ - @SuppressWarnings({"RedundantTypeArguments"}) - public List nodes(int part, Collection nodes) { - Collection col = new HashSet<>(nodes); - - if (col.size() <= 1) - return new ArrayList<>(col); - - for (Iterator iter = col.iterator(); iter.hasNext(); ) { - ClusterNode node = iter.next(); - - boolean even = node.attribute("EVEN"); - - if ((even && part != 0) || (!even && part != 1)) - iter.remove(); - } - - return new ArrayList<>(col); - } - - /** {@inheritDoc} */ - @Override public void reset() { - // No-op. - } - - /** {@inheritDoc} */ - @Override public void removeNode(UUID nodeId) { - // No-op. 
- } - } - /** * */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadStartStopEventsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadStartStopEventsSelfTest.java index 07c50d3786d5c..c43f8991c9d46 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadStartStopEventsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/preloader/GridCacheReplicatedPreloadStartStopEventsSelfTest.java @@ -23,10 +23,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.Event; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.events.EventType.EVT_CACHE_REBALANCE_STARTED; @@ -35,10 +35,8 @@ /** * Tests that preload start/preload stop events are fired only once for replicated cache. 
*/ +@RunWith(JUnit4.class) public class GridCacheReplicatedPreloadStartStopEventsSelfTest extends GridCommonAbstractTest { - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { super.afterTest(); @@ -50,8 +48,6 @@ public class GridCacheReplicatedPreloadStartStopEventsSelfTest extends GridCommo @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); ccfg.setCacheMode(REPLICATED); @@ -64,6 +60,7 @@ public class GridCacheReplicatedPreloadStartStopEventsSelfTest extends GridCommo /** * @throws Exception If failed. */ + @Test public void testStartStopEvents() throws Exception { Ignite ignite = startGrid(0); @@ -92,4 +89,4 @@ else if (e.type() == EVT_CACHE_REBALANCE_STOPPED) assertTrue("Unexpected start count: " + preloadStartCnt.get(), preloadStartCnt.get() <= 1); assertTrue("Unexpected stop count: " + preloadStopCnt.get(), preloadStopCnt.get() <= 1); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/DhtAndNearEvictionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/DhtAndNearEvictionTest.java index 84434696c9179..aaab659136b28 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/DhtAndNearEvictionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/DhtAndNearEvictionTest.java @@ -25,6 +25,7 @@ import javax.cache.integration.CacheWriterException; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; import 
org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory; import org.apache.ignite.cache.store.CacheStoreAdapter; import org.apache.ignite.configuration.CacheConfiguration; @@ -35,13 +36,18 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checking that DHT and near cache evictions work correctly when both are set. * * This is a regression test for IGNITE-9315. */ +@RunWith(JUnit4.class) public class DhtAndNearEvictionTest extends GridCommonAbstractTest { /** */ public GridStringLogger strLog; @@ -59,6 +65,13 @@ public class DhtAndNearEvictionTest extends GridCommonAbstractTest { return cfg; } + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EVICTION); + + super.beforeTestsStarted(); + } + /** */ @Override protected void beforeTest() throws Exception { super.beforeTest(); @@ -84,7 +97,10 @@ public class DhtAndNearEvictionTest extends GridCommonAbstractTest { *
* <li>backups=1</li>
  • * */ + @Test public void testConcurrentWritesAndReadsWithReadThrough() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + startGrid(0); startGrid(1); @@ -97,6 +113,7 @@ public void testConcurrentWritesAndReadsWithReadThrough() throws Exception { ) .setReadThrough(true) .setCacheStoreFactory(DummyCacheStore.factoryOf()) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setBackups(1); grid(0).createCache(ccfg); @@ -136,12 +153,14 @@ public void testConcurrentWritesAndReadsWithReadThrough() throws Exception { /** * Checking rebalancing which used to be affected by IGNITE-9315. */ + @Test public void testRebalancing() throws Exception { Ignite grid0 = startGrid(0); CacheConfiguration ccfg = new CacheConfiguration("mycache") .setOnheapCacheEnabled(true) .setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(500)) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setNearConfiguration( new NearCacheConfiguration() .setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100)) diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionAbstractTest.java index a6c10baec92b3..3351508b22c4f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionAbstractTest.java @@ -38,12 +38,12 @@ import org.apache.ignite.internal.util.typedef.C2; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -61,11 +61,9 @@ /** * Base class for eviction tests. */ +@RunWith(JUnit4.class) public abstract class EvictionAbstractTest> extends GridCommonAbstractTest { - /** IP finder. */ - protected static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Put entry size. */ protected static final int PUT_ENTRY_SIZE = 10; @@ -126,12 +124,6 @@ public abstract class EvictionAbstractTest> c.setCacheConfiguration(cc); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - c.setIncludeEventTypes(EVT_TASK_FAILED, EVT_TASK_FINISHED, EVT_JOB_MAPPED); c.setIncludeProperties(); @@ -147,6 +139,7 @@ public abstract class EvictionAbstractTest> /** * @throws Exception If failed. */ + @Test public void testMaxSizePolicy() throws Exception { plcMax = 3; plcMaxMemSize = 0; @@ -158,6 +151,7 @@ public void testMaxSizePolicy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizePolicyWithBatch() throws Exception { plcMax = 3; plcMaxMemSize = 0; @@ -169,6 +163,7 @@ public void testMaxSizePolicyWithBatch() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxMemSizePolicy() throws Exception { plcMax = 0; plcMaxMemSize = 3 * MockEntry.ENTRY_SIZE; @@ -182,6 +177,7 @@ public void testMaxMemSizePolicy() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMaxMemSizePolicyWithBatch() throws Exception { plcMax = 3; plcMaxMemSize = 10 * MockEntry.ENTRY_SIZE; @@ -193,6 +189,7 @@ public void testMaxMemSizePolicyWithBatch() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizeMemory() throws Exception { int max = 10; @@ -206,6 +203,7 @@ public void testMaxSizeMemory() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizeMemoryWithBatch() throws Exception { int max = 10; @@ -219,6 +217,7 @@ public void testMaxSizeMemoryWithBatch() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxMemSizeMemory() throws Exception { int max = 10; @@ -232,6 +231,7 @@ public void testMaxMemSizeMemory() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizeRandom() throws Exception { plcMax = 10; plcMaxMemSize = 0; @@ -243,6 +243,7 @@ public void testMaxSizeRandom() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizeRandomWithBatch() throws Exception { plcMax = 10; plcMaxMemSize = 0; @@ -254,6 +255,7 @@ public void testMaxSizeRandomWithBatch() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxMemSizeRandom() throws Exception { plcMax = 0; plcMaxMemSize = 10 * MockEntry.KEY_SIZE; @@ -265,6 +267,7 @@ public void testMaxMemSizeRandom() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizeAllowEmptyEntries() throws Exception { plcMax = 10; plcMaxMemSize = 0; @@ -276,6 +279,7 @@ public void testMaxSizeAllowEmptyEntries() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizeAllowEmptyEntriesWithBatch() throws Exception { plcMax = 10; plcMaxMemSize = 0; @@ -287,6 +291,7 @@ public void testMaxSizeAllowEmptyEntriesWithBatch() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMaxMemSizeAllowEmptyEntries() throws Exception { plcMax = 0; plcMaxMemSize = 10 * MockEntry.KEY_SIZE; @@ -298,6 +303,7 @@ public void testMaxMemSizeAllowEmptyEntries() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizePut() throws Exception { plcMax = 100; plcBatchSize = 1; @@ -309,6 +315,7 @@ public void testMaxSizePut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxSizePutWithBatch() throws Exception { plcMax = 100; plcBatchSize = 2; @@ -320,6 +327,7 @@ public void testMaxSizePutWithBatch() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMaxMemSizePut() throws Exception { int max = 100; @@ -653,6 +661,7 @@ protected static String string(Iterable c) { } /** @throws Exception If failed. */ + @Test public void testMaxSizePartitionedNearDisabled() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -665,6 +674,7 @@ public void testMaxSizePartitionedNearDisabled() throws Exception { } /** @throws Exception If failed. */ + @Test public void testMaxSizePartitionedNearDisabledWithBatch() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -678,6 +688,7 @@ public void testMaxSizePartitionedNearDisabledWithBatch() throws Exception { } /** @throws Exception If failed. */ + @Test public void testMaxMemSizePartitionedNearDisabled() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -691,6 +702,7 @@ public void testMaxMemSizePartitionedNearDisabled() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPartitionedNearEnabled() throws Exception { mode = PARTITIONED; nearEnabled = true; @@ -704,6 +716,7 @@ public void testPartitionedNearEnabled() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testPartitionedNearDisabledMultiThreaded() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -715,6 +728,7 @@ public void testPartitionedNearDisabledMultiThreaded() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPartitionedNearEnabledMultiThreaded() throws Exception { mode = PARTITIONED; nearEnabled = true; @@ -1028,4 +1042,4 @@ public Collection queue() { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionPolicyFactoryAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionPolicyFactoryAbstractTest.java index 0662d2d3f4940..847c2cd928b5b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionPolicyFactoryAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/EvictionPolicyFactoryAbstractTest.java @@ -41,12 +41,12 @@ import org.apache.ignite.internal.util.typedef.C2; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -64,11 +64,9 @@ /** * Base class for eviction tests. */ +@RunWith(JUnit4.class) public abstract class EvictionPolicyFactoryAbstractTest> extends GridCommonAbstractTest { - /** IP finder. 
*/ - protected static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Put entry size. */ protected static final int PUT_ENTRY_SIZE = 10; @@ -140,12 +138,6 @@ public abstract class EvictionPolicyFactoryAbstractTest c) { } /** @throws Exception If failed. */ + @Test public void testMaxSizePartitionedNearDisabled() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -693,6 +702,7 @@ public void testMaxSizePartitionedNearDisabled() throws Exception { } /** @throws Exception If failed. */ + @Test public void testMaxSizePartitionedNearDisabledWithBatch() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -706,6 +716,7 @@ public void testMaxSizePartitionedNearDisabledWithBatch() throws Exception { } /** @throws Exception If failed. */ + @Test public void testMaxMemSizePartitionedNearDisabled() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -719,6 +730,7 @@ public void testMaxMemSizePartitionedNearDisabled() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPartitionedNearEnabled() throws Exception { mode = PARTITIONED; nearEnabled = true; @@ -732,6 +744,7 @@ public void testPartitionedNearEnabled() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPartitionedNearDisabledMultiThreaded() throws Exception { mode = PARTITIONED; nearEnabled = false; @@ -743,6 +756,7 @@ public void testPartitionedNearDisabledMultiThreaded() throws Exception { } /** @throws Exception If failed. 
     */
+    @Test
     public void testPartitionedNearEnabledMultiThreaded() throws Exception {
         mode = PARTITIONED;
         nearEnabled = true;
@@ -1068,4 +1082,4 @@ public Collection queue() {
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionConsistencySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionConsistencySelfTest.java
index 5ff0be23a5cf5..3512cec410d51 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionConsistencySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionConsistencySelfTest.java
@@ -35,24 +35,23 @@
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.processors.cache.CacheEvictableEntryImpl;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
-import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;
+import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheConcurrentEvictionConsistencySelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Default iteration count. */
     private static final int ITERATION_CNT = 50000;
@@ -73,7 +72,7 @@ public class GridCacheConcurrentEvictionConsistencySelfTest extends GridCommonAb
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
 
         c.getTransactionConfiguration().setDefaultTxConcurrency(PESSIMISTIC);
-        c.getTransactionConfiguration().setDefaultTxIsolation(READ_COMMITTED);
+        c.getTransactionConfiguration().setDefaultTxIsolation(REPEATABLE_READ);
 
         CacheConfiguration cc = defaultCacheConfiguration();
 
@@ -88,13 +87,14 @@ public class GridCacheConcurrentEvictionConsistencySelfTest extends GridCommonAb
 
         c.setCacheConfiguration(cc);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
+        return c;
+    }
 
-        c.setDiscoverySpi(disco);
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
 
-        return c;
+        super.beforeTestsStarted();
     }
 
     /** {@inheritDoc} */
@@ -105,6 +105,7 @@ public class GridCacheConcurrentEvictionConsistencySelfTest extends GridCommonAb
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencyFifoLocalTwoKeys() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy<>();
         plc.setMaxSize(1);
@@ -120,6 +121,7 @@ public void testPolicyConsistencyFifoLocalTwoKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencyLruLocalTwoKeys() throws Exception {
         LruEvictionPolicy plc = new LruEvictionPolicy<>();
         plc.setMaxSize(1);
@@ -135,6 +137,7 @@ public void testPolicyConsistencyLruLocalTwoKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencySortedLocalTwoKeys() throws Exception {
         SortedEvictionPolicy plc = new SortedEvictionPolicy<>();
         plc.setMaxSize(1);
@@ -150,6 +153,7 @@ public void testPolicyConsistencySortedLocalTwoKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencyFifoLocalFewKeys() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy<>();
         plc.setMaxSize(POLICY_QUEUE_SIZE);
@@ -164,6 +168,7 @@ public void testPolicyConsistencyFifoLocalFewKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencyLruLocalFewKeys() throws Exception {
         LruEvictionPolicy plc = new LruEvictionPolicy<>();
         plc.setMaxSize(POLICY_QUEUE_SIZE);
@@ -178,6 +183,7 @@ public void testPolicyConsistencyLruLocalFewKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencySortedLocalFewKeys() throws Exception {
         SortedEvictionPolicy plc = new SortedEvictionPolicy<>();
         plc.setMaxSize(POLICY_QUEUE_SIZE);
@@ -192,6 +198,7 @@ public void testPolicyConsistencySortedLocalFewKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencyFifoLocal() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy<>();
         plc.setMaxSize(POLICY_QUEUE_SIZE);
@@ -206,6 +213,7 @@ public void testPolicyConsistencyFifoLocal() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencyLruLocal() throws Exception {
         LruEvictionPolicy plc = new LruEvictionPolicy<>();
         plc.setMaxSize(POLICY_QUEUE_SIZE);
@@ -220,6 +228,7 @@ public void testPolicyConsistencyLruLocal() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistencySortedLocal() throws Exception {
         SortedEvictionPolicy plc = new SortedEvictionPolicy<>();
         plc.setMaxSize(POLICY_QUEUE_SIZE);
@@ -251,14 +260,21 @@ private void checkPolicyConsistency() throws Exception {
 
                         int j = rnd.nextInt(keyCnt);
 
-                        try (Transaction tx = ignite.transactions().txStart()) {
-                            // Put or remove?
-                            if (rnd.nextBoolean())
-                                cache.put(j, j);
-                            else
-                                cache.remove(j);
-
-                            tx.commit();
+                        while (true) {
+                            try (Transaction tx = ignite.transactions().txStart()) {
+                                // Put or remove?
+                                if (rnd.nextBoolean())
+                                    cache.put(j, j);
+                                else
+                                    cache.remove(j);
+
+                                tx.commit();
+
+                                break;
+                            }
+                            catch (Exception e) {
+                                MvccFeatureChecker.assertMvccWriteConflict(e);
+                            }
                         }
 
                         if (i != 0 && i % 5000 == 0)
@@ -344,4 +360,4 @@ else if (plc instanceof SortedEvictionPolicy) {
 
         return Collections.emptyList();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionsSelfTest.java
index 45d98bfc89e27..64dda3aaabfa5 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheConcurrentEvictionsSelfTest.java
@@ -22,6 +22,7 @@
 import org.apache.ignite.Ignite;
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.CachePeekMode;
 import org.apache.ignite.cache.eviction.EvictionPolicy;
 import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
 import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
@@ -29,24 +30,25 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteInternalFuture;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.GridTestUtils.SF;
+import org.apache.ignite.testframework.MvccFeatureChecker;
+import org.apache.ignite.testframework.GridTestUtils.SF;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
 import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
-import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;
+import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheConcurrentEvictionsSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Replicated cache. */
     private CacheMode mode = REPLICATED;
 
@@ -64,9 +66,9 @@ public class GridCacheConcurrentEvictionsSelfTest extends GridCommonAbstractTest
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
 
         c.getTransactionConfiguration().setDefaultTxConcurrency(PESSIMISTIC);
-        c.getTransactionConfiguration().setDefaultTxIsolation(READ_COMMITTED);
+        c.getTransactionConfiguration().setDefaultTxIsolation(REPEATABLE_READ);
 
-        CacheConfiguration cc = defaultCacheConfiguration();
+        CacheConfiguration cc = defaultCacheConfiguration();
 
         cc.setCacheMode(mode);
@@ -79,12 +81,6 @@ public class GridCacheConcurrentEvictionsSelfTest extends GridCommonAbstractTest
 
         c.setCacheConfiguration(cc);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         return c;
     }
@@ -95,9 +91,17 @@ public class GridCacheConcurrentEvictionsSelfTest extends GridCommonAbstractTest
         plc = null;
     }
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
+        super.beforeTestsStarted();
+    }
+
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentPutsFifoLocal() throws Exception {
         mode = LOCAL;
 
@@ -105,8 +109,8 @@ public void testConcurrentPutsFifoLocal() throws Exception {
         plc.setMaxSize(1000);
 
         this.plc = plc;
-        warmUpPutsCnt = 100000;
-        iterCnt = 100000;
+        warmUpPutsCnt = SF.applyLB(100_000, 10_000);
+        iterCnt = SF.applyLB(100_000, 10_000);
 
         checkConcurrentPuts();
     }
@@ -114,6 +118,7 @@ public void testConcurrentPutsFifoLocal() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentPutsLruLocal() throws Exception {
         mode = LOCAL;
 
@@ -121,8 +126,8 @@ public void testConcurrentPutsLruLocal() throws Exception {
         plc.setMaxSize(1000);
 
         this.plc = plc;
-        warmUpPutsCnt = 100000;
-        iterCnt = 100000;
+        warmUpPutsCnt = SF.applyLB(100_000, 10_000);
+        iterCnt = SF.applyLB(100_000, 10_000);
 
         checkConcurrentPuts();
     }
@@ -130,6 +135,7 @@ public void testConcurrentPutsLruLocal() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentPutsSortedLocal() throws Exception {
         mode = LOCAL;
 
@@ -137,8 +143,8 @@ public void testConcurrentPutsSortedLocal() throws Exception {
         plc.setMaxSize(1000);
 
         this.plc = plc;
-        warmUpPutsCnt = 100000;
-        iterCnt = 100000;
+        warmUpPutsCnt = SF.applyLB(100_000, 10_000);
+        iterCnt = SF.applyLB(100_000, 10_000);
 
         checkConcurrentPuts();
     }
@@ -166,21 +172,21 @@ private void checkConcurrentPuts() throws Exception {
 
             final AtomicInteger idx = new AtomicInteger();
 
-            int threadCnt = 30;
+            int threadCnt = SF.applyLB(30, 8);
 
             long start = System.currentTimeMillis();
 
             IgniteInternalFuture fut = multithreadedAsync(
                 new Callable() {
-                    @Override public Object call() throws Exception {
+                    @Override public Object call() {
                         for (int i = 0; i < iterCnt; i++) {
                             int j = idx.incrementAndGet();
 
                             cache.put(j, j);
 
-                            if (i != 0 && i % 10000 == 0)
+                            if (i != 0 && i % 1000 == 0)
                                 // info("Puts count: " + i);
-                                info("Stats [putsCnt=" + i + ", size=" + cache.size() + ']');
+                                info("Stats [putsCnt=" + i + ", size=" + cache.size(CachePeekMode.ONHEAP) + ']');
                         }
 
                         return null;
@@ -191,11 +197,13 @@ private void checkConcurrentPuts() throws Exception {
 
             fut.get();
 
-            info("Test results [threadCnt=" + threadCnt + ", iterCnt=" + iterCnt + ", cacheSize=" + cache.size() +
+            info("Test results [threadCnt=" + threadCnt + ", iterCnt=" + iterCnt + ", cacheSize=" + cache.size(CachePeekMode.ONHEAP) +
                 ", duration=" + (System.currentTimeMillis() - start) + ']');
+
+            assertTrue(cache.size(CachePeekMode.ONHEAP) <= 1000);
         }
         finally {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesAbstractSelfTest.java
index f28e56b6fcda2..a7fb137afaeda 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesAbstractSelfTest.java
@@ -32,23 +32,22 @@
 import org.apache.ignite.internal.IgniteInterruptedCheckedException;
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 
 /**
  * Tests that cache handles {@code setAllowEmptyEntries} flag correctly.
  */
+@RunWith(JUnit4.class)
 public abstract class GridCacheEmptyEntriesAbstractSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private EvictionPolicy plc;
@@ -96,12 +95,6 @@ public abstract class GridCacheEmptyEntriesAbstractSelfTest extends GridCommonAb
 
         c.setCacheConfiguration(cc);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         return c;
     }
@@ -121,6 +114,7 @@ public abstract class GridCacheEmptyEntriesAbstractSelfTest extends GridCommonAb
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testFifo() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy();
         plc.setMaxSize(50);
@@ -172,6 +166,9 @@ private void checkPolicy0() throws Exception {
         for (TransactionIsolation isolation : TransactionIsolation.values()) {
             txIsolation = isolation;
 
+            if (MvccFeatureChecker.forcedMvcc() && !MvccFeatureChecker.isSupported(concurrency, isolation))
+                continue;
+
             Ignite g = startGrids();
 
             IgniteCache cache = g.cache(DEFAULT_CACHE_NAME);
@@ -307,4 +304,4 @@ private void checkEmpty(IgniteCache cache) throws IgniteInterrup
             }
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesLocalSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesLocalSelfTest.java
index 018cc2e699dbc..e8f171ea6dfad 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesLocalSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesLocalSelfTest.java
@@ -19,10 +19,15 @@
 
 import org.apache.ignite.Ignite;
 import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.testframework.MvccFeatureChecker;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheEmptyEntriesLocalSelfTest extends GridCacheEmptyEntriesAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected Ignite startGrids() throws Exception {
@@ -35,7 +40,17 @@ public class GridCacheEmptyEntriesLocalSelfTest extends GridCacheEmptyEntriesAbs
     }
 
     /** {@inheritDoc} */
+    @Test
     @Override public void testFifo() throws Exception {
         super.testFifo();
     }
-}
\ No newline at end of file
+
+
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
+        super.beforeTestsStarted();
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesPartitionedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesPartitionedSelfTest.java
index 5d9faebd465dc..88c9fbc9a60a4 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesPartitionedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEmptyEntriesPartitionedSelfTest.java
@@ -19,10 +19,15 @@
 
 import org.apache.ignite.Ignite;
 import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.testframework.MvccFeatureChecker;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test allow empty entries flag on partitioned cache.
  */
+@RunWith(JUnit4.class)
 public class GridCacheEmptyEntriesPartitionedSelfTest extends GridCacheEmptyEntriesAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected Ignite startGrids() throws Exception {
@@ -35,7 +40,15 @@ public class GridCacheEmptyEntriesPartitionedSelfTest extends GridCacheEmptyEntr
     }
 
     /** {@inheritDoc} */
+    @Test
     @Override public void testFifo() throws Exception {
         super.testFifo();
     }
-}
\ No newline at end of file
+
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
+
+        super.beforeTestsStarted();
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictableEntryEqualsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictableEntryEqualsSelfTest.java
index 98c8b776ba386..ec00f6ebc08b0 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictableEntryEqualsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictableEntryEqualsSelfTest.java
@@ -26,14 +26,19 @@
 
 import org.apache.ignite.cache.eviction.EvictionPolicy;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for EvictableEntry.equals().
  */
+@RunWith(JUnit4.class)
 public class GridCacheEvictableEntryEqualsSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testEquals() throws Exception {
         try (Ignite ignite = startGrid()) {
             CacheConfiguration cfg = new CacheConfiguration<>("test");
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionFilterSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionFilterSelfTest.java
index b432c2d584595..bb393a431c288 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionFilterSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionFilterSelfTest.java
@@ -32,10 +32,11 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.LOCAL;
@@ -46,10 +47,8 @@
 
 /**
  * Base class for eviction tests.
  */
+@RunWith(JUnit4.class)
 public class GridCacheEvictionFilterSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Replicated cache. */
     private CacheMode mode = REPLICATED;
@@ -94,23 +93,21 @@ public class GridCacheEvictionFilterSelfTest extends GridCommonAbstractTest {
 
         c.setCacheConfiguration(cc);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         return c;
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testLocal() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
         mode = LOCAL;
 
         checkEvictionFilter();
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testReplicated() throws Exception {
         mode = REPLICATED;
 
@@ -118,7 +115,10 @@ public void testReplicated() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitioned() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
+
         mode = PARTITIONED;
         nearEnabled = true;
 
@@ -126,6 +126,7 @@ public void testPartitioned() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitionedNearDisabled() throws Exception {
         mode = PARTITIONED;
         nearEnabled = false;
@@ -190,6 +191,7 @@ private void checkEvictionFilter() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testPartitionedMixed() throws Exception {
         mode = PARTITIONED;
         nearEnabled = false;
@@ -256,4 +258,4 @@ ConcurrentMap counts() {
 
             return cnts;
         }
    }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionLockUnlockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionLockUnlockSelfTest.java
index 55b7b63691470..d9e1b9e1cb2b1 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionLockUnlockSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionLockUnlockSelfTest.java
@@ -30,10 +30,11 @@
 import org.apache.ignite.configuration.NearCacheConfiguration;
 import org.apache.ignite.events.Event;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
@@ -45,10 +46,8 @@
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheEvictionLockUnlockSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Evict latch. */
     private static CountDownLatch evictLatch;
@@ -86,16 +85,20 @@ public class GridCacheEvictionLockUnlockSelfTest extends GridCommonAbstractTest
 
         c.setCacheConfiguration(cc);
 
-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
+        return c;
+    }
 
-        discoSpi.setIpFinder(ipFinder);
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK);
 
-        c.setDiscoverySpi(discoSpi);
-        return c;
+        super.beforeTestsStarted();
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testLocal() throws Exception {
         mode = LOCAL;
         gridCnt = 1;
@@ -104,6 +107,7 @@ public void testLocal() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testReplicated() throws Exception {
         mode = REPLICATED;
         gridCnt = 3;
@@ -112,6 +116,7 @@ public void testReplicated() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testPartitioned() throws Exception {
         mode = PARTITIONED;
         gridCnt = 3;
@@ -182,4 +187,4 @@ private static class EvictionPolicy implements org.apache.ignite.cache.eviction.
             entry.evict();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionTouchSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionTouchSelfTest.java
index a91c5b6527e0d..cea9078246f93 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionTouchSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/GridCacheEvictionTouchSelfTest.java
@@ -37,11 +37,12 @@
 import org.apache.ignite.configuration.TransactionConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheGenericTestStore;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -51,10 +52,8 @@
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCacheEvictionTouchSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private EvictionPolicy plc;
@@ -99,13 +98,14 @@ public class GridCacheEvictionTouchSelfTest extends GridCommonAbstractTest {
 
         c.setCacheConfiguration(cc);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
+        return c;
+    }
 
-        c.setDiscoverySpi(disco);
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
 
-        return c;
+        super.beforeTestsStarted();
     }
 
     /** {@inheritDoc} */
@@ -118,6 +118,7 @@ public class GridCacheEvictionTouchSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPolicyConsistency() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy<>();
         plc.setMaxSize(500);
@@ -173,6 +174,7 @@ public void testPolicyConsistency() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testEvictSingle() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy<>();
         plc.setMaxSize(500);
@@ -203,6 +205,7 @@ public void testEvictSingle() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testEvictAll() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy<>();
         plc.setMaxSize(500);
@@ -238,6 +241,7 @@ public void testEvictAll() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReload() throws Exception {
         FifoEvictionPolicy plc = new FifoEvictionPolicy<>();
         plc.setMaxSize(100);
@@ -269,4 +273,4 @@ public void testReload() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicyFactorySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicyFactorySelfTest.java
index d53cb6f36c923..17af3d9641420 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicyFactorySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicyFactorySelfTest.java
@@ -22,10 +22,14 @@
 import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
 import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
 import org.apache.ignite.internal.processors.cache.eviction.EvictionPolicyFactoryAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * LRU Eviction policy tests.
  */
+@RunWith(JUnit4.class)
 public class LruEvictionPolicyFactorySelfTest extends EvictionPolicyFactoryAbstractTest<LruEvictionPolicy<Integer, Integer>> {
     /** {@inheritDoc} */
     @Override protected Factory<LruEvictionPolicy<Integer, Integer>> createPolicyFactory() {
@@ -45,6 +49,7 @@ public class LruEvictionPolicyFactorySelfTest extends EvictionPolicyFactoryAbstr
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testMiddleAccess() throws Exception {
         policyFactory = createPolicyFactory();
 
@@ -349,4 +354,4 @@ public void testMiddleAccess() throws Exception {
             assertTrue(policy(i).queue().size() <= plcMax + plcBatchSize);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicySelfTest.java
index 3c5334911ed9a..af29970400087 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruEvictionPolicySelfTest.java
@@ -21,15 +21,20 @@
 import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
 import org.apache.ignite.internal.processors.cache.CacheEvictableEntryImpl;
 import org.apache.ignite.internal.processors.cache.eviction.EvictionAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * LRU Eviction policy tests.
  */
+@RunWith(JUnit4.class)
 public class LruEvictionPolicySelfTest extends EvictionAbstractTest<LruEvictionPolicy<Integer, Integer>> {
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testMiddleAccess() throws Exception {
         startGrid();
 
@@ -350,4 +355,4 @@ public void testMiddleAccess() throws Exception {
 
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearEvictionPolicySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearEvictionPolicySelfTest.java
index 27295c69ca35e..7ed7b87a1c828 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearEvictionPolicySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearEvictionPolicySelfTest.java
@@ -25,13 +25,15 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC;
@@ -39,10 +41,8 @@
 
 /**
  * LRU near eviction tests (GG-8884).
  */
+@RunWith(JUnit4.class)
 public class LruNearEvictionPolicySelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Maximum size for near eviction policy. */
     private static final int EVICTION_MAX_SIZE = 10;
@@ -74,18 +74,13 @@ public class LruNearEvictionPolicySelfTest extends GridCommonAbstractTest {
 
         c.setCacheConfiguration(cc);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         return c;
     }
 
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testAtomicNearEvictionMaxSize() throws Exception {
         atomicityMode = ATOMIC;
 
@@ -95,12 +90,24 @@ public void testAtomicNearEvictionMaxSize() throws Exception {
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testTransactionalNearEvictionMaxSize() throws Exception {
         atomicityMode = TRANSACTIONAL;
 
         checkNearEvictionMaxSize();
     }
 
+    /**
+     * @throws Exception If failed.
+     */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187,https://issues.apache.org/jira/browse/IGNITE-7956")
+    @Test
+    public void testMvccTransactionalNearEvictionMaxSize() throws Exception {
+        atomicityMode = TRANSACTIONAL_SNAPSHOT;
+
+        checkNearEvictionMaxSize();
+    }
+
     /**
      * @throws Exception If failed.
     */
@@ -139,4 +146,4 @@ private void checkNearEvictionMaxSize() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearOnlyNearEvictionPolicySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearOnlyNearEvictionPolicySelfTest.java
index a329e83f56252..12502cc182326 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearOnlyNearEvictionPolicySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/lru/LruNearOnlyNearEvictionPolicySelfTest.java
@@ -26,12 +26,15 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
+import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
 import static org.apache.ignite.cache.CacheMode.REPLICATED;
 import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
@@ -40,10 +43,8 @@
 
 /**
  * LRU near eviction tests for NEAR_ONLY distribution mode (GG-8884).
  */
+@RunWith(JUnit4.class)
 public class LruNearOnlyNearEvictionPolicySelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** Grid count. */
     private static final int GRID_COUNT = 2;
@@ -84,7 +85,7 @@ public class LruNearOnlyNearEvictionPolicySelfTest extends GridCommonAbstractTes
             c.setCacheConfiguration(cc);
         }
 
-        c.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder).setForceServerMode(true));
+        ((TcpDiscoverySpi)c.getDiscoverySpi()).setForceServerMode(true);
 
         cnt++;
 
@@ -94,6 +95,7 @@ public class LruNearOnlyNearEvictionPolicySelfTest extends GridCommonAbstractTes
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testPartitionedAtomicNearEvictionMaxSize() throws Exception {
         atomicityMode = ATOMIC;
         cacheMode = PARTITIONED;
@@ -104,6 +106,7 @@ public void testPartitionedAtomicNearEvictionMaxSize() throws Exception {
     /**
      * @throws Exception If failed.
    */
+    @Test
     public void testPartitionedTransactionalNearEvictionMaxSize() throws Exception {
         atomicityMode = TRANSACTIONAL;
         cacheMode = PARTITIONED;
@@ -114,6 +117,19 @@ public void testPartitionedTransactionalNearEvictionMaxSize() throws Exception {
     /**
      * @throws Exception If failed.
    */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187,https://issues.apache.org/jira/browse/IGNITE-7956")
+    @Test
+    public void testPartitionedMvccTransactionalNearEvictionMaxSize() throws Exception {
+        atomicityMode = TRANSACTIONAL_SNAPSHOT;
+        cacheMode = PARTITIONED;
+
+        checkNearEvictionMaxSize();
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
     public void testReplicatedAtomicNearEvictionMaxSize() throws Exception {
         atomicityMode = ATOMIC;
         cacheMode = REPLICATED;
@@ -124,6 +140,7 @@ public void testReplicatedAtomicNearEvictionMaxSize() throws Exception {
     /**
      * @throws Exception If failed.
    */
+    @Test
     public void testReplicatedTransactionalNearEvictionMaxSize() throws Exception {
         atomicityMode = TRANSACTIONAL;
         cacheMode = REPLICATED;
@@ -131,6 +148,18 @@ public void testReplicatedTransactionalNearEvictionMaxSize() throws Exception {
         checkNearEvictionMaxSize();
     }
 
+    /**
+     * @throws Exception If failed.
+     */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187,https://issues.apache.org/jira/browse/IGNITE-7956")
+    @Test
+    public void testReplicatedMvccTransactionalNearEvictionMaxSize() throws Exception {
+        atomicityMode = TRANSACTIONAL_SNAPSHOT;
+        cacheMode = REPLICATED;
+
+        checkNearEvictionMaxSize();
+    }
+
     /**
      * @throws Exception If failed.
    */
@@ -174,4 +203,4 @@ private void checkNearEvictionMaxSize() throws Exception {
             stopAllGrids();
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionAbstractTest.java
index 072ca7fc0d3bb..88f714815347f 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionAbstractTest.java
@@ -26,9 +26,6 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.DataStorageConfiguration;
 import org.apache.ignite.configuration.NearCacheConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.NotNull;
 
@@ -36,17 +33,14 @@
  *
  */
 public class PageEvictionAbstractTest extends GridCommonAbstractTest {
-    /** */
-    protected static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Offheap size for memory policy. */
-    private static final int SIZE = 96 * 1024 * 1024;
+    private static final int SIZE = 20 * 1024 * 1024;
 
     /** Page size. */
     static final int PAGE_SIZE = 2048;
 
     /** Number of entries.
*/ - static final int ENTRIES = 80_000; + static final int ENTRIES = 12_000; /** Empty pages pool size. */ private static final int EMPTY_PAGES_POOL_SIZE = 100; @@ -86,8 +80,6 @@ protected boolean nearEnabled() { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - DataStorageConfiguration dbCfg = new DataStorageConfiguration(); DataRegionConfiguration plc = new DataRegionConfiguration(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionDataStreamerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionDataStreamerTest.java index b5aab69f5016e..ed70d8587887e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionDataStreamerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionDataStreamerTest.java @@ -34,10 +34,10 @@ public class PageEvictionDataStreamerTest extends PageEvictionMultinodeAbstractT } /** {@inheritDoc} */ - @Override protected void createCacheAndTestEvcition(CacheConfiguration cfg) throws Exception { - IgniteCache cache = clientGrid.getOrCreateCache(cfg); + @Override protected void createCacheAndTestEviction(CacheConfiguration cfg) throws Exception { + IgniteCache cache = clientGrid().getOrCreateCache(cfg); - try (IgniteDataStreamer ldr = clientGrid.dataStreamer(cfg.getName())) { + try (IgniteDataStreamer ldr = clientGrid().dataStreamer(cfg.getName())) { ldr.allowOverwrite(true); for (int i = 1; i <= ENTRIES; i++) { @@ -60,6 +60,6 @@ public class PageEvictionDataStreamerTest extends PageEvictionMultinodeAbstractT // Eviction started, no OutOfMemory occurred, success. 
assertTrue(resultingSize < ENTRIES); - clientGrid.destroyCache(cfg.getName()); + clientGrid().destroyCache(cfg.getName()); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMetricTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMetricTest.java index a451c36bf8d7f..9af960fbc4493 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMetricTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMetricTest.java @@ -25,10 +25,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class PageEvictionMetricTest extends PageEvictionAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -43,7 +48,24 @@ public class PageEvictionMetricTest extends PageEvictionAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPageEvictionMetric() throws Exception { + checkPageEvictionMetric(CacheAtomicityMode.ATOMIC); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10448") + @Test + public void testPageEvictionMetricMvcc() throws Exception { + checkPageEvictionMetric(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. 
+ */ + private void checkPageEvictionMetric(CacheAtomicityMode atomicityMode) throws Exception { IgniteEx ignite = startGrid(0); DataRegionMetricsImpl metrics = @@ -52,7 +74,7 @@ public void testPageEvictionMetric() throws Exception { metrics.enableMetrics(); CacheConfiguration cfg = cacheConfig("evict-metric", null, - CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC, CacheWriteSynchronizationMode.PRIMARY_SYNC); + CacheMode.PARTITIONED, atomicityMode, CacheWriteSynchronizationMode.PRIMARY_SYNC); IgniteCache cache = ignite.getOrCreateCache(cfg); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeAbstractTest.java index 777c2d7e999cc..d1e731770e97e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeAbstractTest.java @@ -25,10 +25,15 @@ import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class PageEvictionMultinodeAbstractTest extends PageEvictionAbstractTest { /** Cache modes. */ private static final CacheMode[] CACHE_MODES = {CacheMode.PARTITIONED, CacheMode.REPLICATED}; @@ -41,16 +46,19 @@ public abstract class PageEvictionMultinodeAbstractTest extends PageEvictionAbst private static final CacheWriteSynchronizationMode[] WRITE_MODES = {CacheWriteSynchronizationMode.PRIMARY_SYNC, CacheWriteSynchronizationMode.FULL_SYNC, CacheWriteSynchronizationMode.FULL_ASYNC}; - /** Client grid. 
*/ - Ignite clientGrid; - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGridsMultiThreaded(4, false); - clientGrid = startGrid("client"); + startGrid("client"); } + /** + * @return Client grid. + */ + Ignite clientGrid() { + return grid("client"); + } /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -70,6 +78,7 @@ public abstract class PageEvictionMultinodeAbstractTest extends PageEvictionAbst /** * @throws Exception If failed. */ + @Test public void testPageEviction() throws Exception { for (int i = 0; i < CACHE_MODES.length; i++) { for (int j = 0; j < ATOMICITY_MODES.length; j++) { @@ -78,19 +87,34 @@ public void testPageEviction() throws Exception { CacheConfiguration cfg = cacheConfig( "evict" + i + j + k, null, CACHE_MODES[i], ATOMICITY_MODES[j], WRITE_MODES[k]); - createCacheAndTestEvcition(cfg); + createCacheAndTestEviction(cfg); } } } } } + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10448") + @Test + public void testPageEvictionMvcc() throws Exception { + for (int i = 0; i < CACHE_MODES.length; i++) { + CacheConfiguration cfg = cacheConfig( + "evict" + i, null, CACHE_MODES[i], CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, + CacheWriteSynchronizationMode.FULL_SYNC); + + createCacheAndTestEviction(cfg); + } + } + /** * @param cfg Config. * @throws Exception If failed. */ - protected void createCacheAndTestEvcition(CacheConfiguration cfg) throws Exception { - IgniteCache cache = clientGrid.getOrCreateCache(cfg); + protected void createCacheAndTestEviction(CacheConfiguration cfg) throws Exception { + IgniteCache cache = clientGrid().getOrCreateCache(cfg); for (int i = 1; i <= ENTRIES; i++) { ThreadLocalRandom r = ThreadLocalRandom.current(); @@ -118,6 +142,6 @@ else if (r.nextInt() % 13 == 0) // Eviction started, no OutOfMemory occurred, success. 
assertTrue(resultingSize < ENTRIES * 10 / 11); - clientGrid.destroyCache(cfg.getName()); + clientGrid().destroyCache(cfg.getName()); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeMixedRegionsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeMixedRegionsTest.java index 9a96a642a07f3..bb36d2a84e52d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeMixedRegionsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionMultinodeMixedRegionsTest.java @@ -48,7 +48,7 @@ public class PageEvictionMultinodeMixedRegionsTest extends PageEvictionMultinode super.beforeTestsStarted(); - clientGrid.active(true); + clientGrid().cluster().active(true); } /** {@inheritDoc} */ diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionPagesRecyclingAndReusingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionPagesRecyclingAndReusingTest.java index 9c777fbe36c80..6d8147136842b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionPagesRecyclingAndReusingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionPagesRecyclingAndReusingTest.java @@ -28,10 +28,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.persistence.tree.reuse.ReuseList; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class PageEvictionPagesRecyclingAndReusingTest extends PageEvictionAbstractTest { /** Test 
timeout. */ private static final long TEST_TIMEOUT = 10 * 60 * 1000; @@ -57,6 +62,7 @@ public class PageEvictionPagesRecyclingAndReusingTest extends PageEvictionAbstra /** * @throws Exception If failed. */ + @Test public void testPagesRecyclingAndReusingAtomicReplicated() throws Exception { testPagesRecyclingAndReusing(CacheAtomicityMode.ATOMIC, CacheMode.REPLICATED); } @@ -64,6 +70,7 @@ public void testPagesRecyclingAndReusingAtomicReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPagesRecyclingAndReusingAtomicLocal() throws Exception { testPagesRecyclingAndReusing(CacheAtomicityMode.ATOMIC, CacheMode.LOCAL); } @@ -71,6 +78,7 @@ public void testPagesRecyclingAndReusingAtomicLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPagesRecyclingAndReusingTxReplicated() throws Exception { testPagesRecyclingAndReusing(CacheAtomicityMode.TRANSACTIONAL, CacheMode.REPLICATED); } @@ -78,10 +86,38 @@ public void testPagesRecyclingAndReusingTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPagesRecyclingAndReusingTxLocal() throws Exception { testPagesRecyclingAndReusing(CacheAtomicityMode.TRANSACTIONAL, CacheMode.LOCAL); } + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10448") + @Test + public void testPagesRecyclingAndReusingMvccTxPartitioned() throws Exception { + testPagesRecyclingAndReusing(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.PARTITIONED); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10448") + @Test + public void testPagesRecyclingAndReusingMvccTxReplicated() throws Exception { + testPagesRecyclingAndReusing(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.REPLICATED); + } + + /** + * @throws Exception If failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7956,https://issues.apache.org/jira/browse/IGNITE-9530") + @Test + public void testPagesRecyclingAndReusingMvccTxLocal() throws Exception { + testPagesRecyclingAndReusing(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.LOCAL); + } + /** * @param atomicityMode Atomicity mode. * @param cacheMode Cache mode. diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionReadThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionReadThroughTest.java index ff713616c742f..e63e3143a825e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionReadThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionReadThroughTest.java @@ -29,10 +29,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataPageEvictionMode; import org.apache.ignite.configuration.IgniteConfiguration; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class PageEvictionReadThroughTest extends PageEvictionAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -42,6 +47,7 @@ public class PageEvictionReadThroughTest extends PageEvictionAbstractTest { /** * @throws Exception If failed. */ + @Test public void testEvictionWithReadThroughAtomicReplicated() throws Exception { testEvictionWithReadThrough(CacheAtomicityMode.ATOMIC, CacheMode.REPLICATED); } @@ -49,6 +55,7 @@ public void testEvictionWithReadThroughAtomicReplicated() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testEvictionWithReadThroughAtomicLocal() throws Exception { testEvictionWithReadThrough(CacheAtomicityMode.ATOMIC, CacheMode.LOCAL); } @@ -56,6 +63,7 @@ public void testEvictionWithReadThroughAtomicLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEvictionWithReadThroughTxReplicated() throws Exception { testEvictionWithReadThrough(CacheAtomicityMode.TRANSACTIONAL, CacheMode.REPLICATED); } @@ -63,10 +71,38 @@ public void testEvictionWithReadThroughTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEvictionWithReadThroughTxLocal() throws Exception { testEvictionWithReadThrough(CacheAtomicityMode.TRANSACTIONAL, CacheMode.LOCAL); } + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582,https://issues.apache.org/jira/browse/IGNITE-7956") + @Test + public void testEvictionWithReadThroughMvccTxReplicated() throws Exception { + testEvictionWithReadThrough(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.REPLICATED); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582,https://issues.apache.org/jira/browse/IGNITE-7956") + @Test + public void testEvictionWithReadThroughMvccTxPartitioned() throws Exception { + testEvictionWithReadThrough(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.PARTITIONED); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7956,https://issues.apache.org/jira/browse/IGNITE-8582,https://issues.apache.org/jira/browse/IGNITE-9530") + @Test + public void testEvictionWithReadThroughMvccTxLocal() throws Exception { + testEvictionWithReadThrough(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.LOCAL); + } + /** * @param atomicityMode Atomicity mode. * @param cacheMode Cache mode. 
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionTouchOrderTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionTouchOrderTest.java index 43356490fe21e..16c4e8a5a6146 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionTouchOrderTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionTouchOrderTest.java @@ -23,10 +23,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataPageEvictionMode; import org.apache.ignite.configuration.IgniteConfiguration; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class PageEvictionTouchOrderTest extends PageEvictionAbstractTest { /** Test entries number. */ private static final int SAFE_ENTRIES = 1000; @@ -45,6 +50,7 @@ public class PageEvictionTouchOrderTest extends PageEvictionAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTouchOrderWithFairFifoEvictionAtomicReplicated() throws Exception { testTouchOrderWithFairFifoEviction(CacheAtomicityMode.ATOMIC, CacheMode.REPLICATED); } @@ -52,6 +58,7 @@ public void testTouchOrderWithFairFifoEvictionAtomicReplicated() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testTouchOrderWithFairFifoEvictionAtomicLocal() throws Exception { testTouchOrderWithFairFifoEviction(CacheAtomicityMode.ATOMIC, CacheMode.LOCAL); } @@ -59,6 +66,7 @@ public void testTouchOrderWithFairFifoEvictionAtomicLocal() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTouchOrderWithFairFifoEvictionTxReplicated() throws Exception { testTouchOrderWithFairFifoEviction(CacheAtomicityMode.TRANSACTIONAL, CacheMode.REPLICATED); } @@ -66,10 +74,38 @@ public void testTouchOrderWithFairFifoEvictionTxReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTouchOrderWithFairFifoEvictionTxLocal() throws Exception { testTouchOrderWithFairFifoEviction(CacheAtomicityMode.TRANSACTIONAL, CacheMode.LOCAL); } + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10448,https://issues.apache.org/jira/browse/IGNITE-7956") + @Test + public void testTouchOrderWithFairFifoEvictionMvccTxReplicated() throws Exception { + testTouchOrderWithFairFifoEviction(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.REPLICATED); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10448,https://issues.apache.org/jira/browse/IGNITE-7956") + @Test + public void testTouchOrderWithFairFifoEvictionMvccTxPartitioned() throws Exception { + testTouchOrderWithFairFifoEviction(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.PARTITIONED); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7956,https://issues.apache.org/jira/browse/IGNITE-9530") + @Test + public void testTouchOrderWithFairFifoEvictionMvccTxLocal() throws Exception { + testTouchOrderWithFairFifoEviction(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, CacheMode.LOCAL); + } + /** * @param atomicityMode Atomicity mode. * @param cacheMode Cache mode. 
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionWithRebalanceAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionWithRebalanceAbstractTest.java index 3ad104b031160..3d3955b976a89 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionWithRebalanceAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/paged/PageEvictionWithRebalanceAbstractTest.java @@ -23,19 +23,41 @@ import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class PageEvictionWithRebalanceAbstractTest extends PageEvictionAbstractTest { /** * @throws Exception If failed. */ + @Test public void testEvictionWithRebalance() throws Exception { + checkEvictionWithRebalance(CacheAtomicityMode.ATOMIC); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10448") + @Test + public void testEvictionWithRebalanceMvcc() throws Exception { + checkEvictionWithRebalance(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + } + + /** + * @throws Exception If failed. 
+ */ + private void checkEvictionWithRebalance(CacheAtomicityMode atomicityMode) throws Exception { startGridsMultiThreaded(4); CacheConfiguration cfg = cacheConfig("evict-rebalance", null, CacheMode.PARTITIONED, - CacheAtomicityMode.ATOMIC, CacheWriteSynchronizationMode.PRIMARY_SYNC); + atomicityMode, CacheWriteSynchronizationMode.PRIMARY_SYNC); IgniteCache cache = ignite(0).getOrCreateCache(cfg); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/sorted/SortedEvictionPolicyPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/sorted/SortedEvictionPolicyPerformanceTest.java index 39b01ed4cc089..554bbbeffe63b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/sorted/SortedEvictionPolicyPerformanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/eviction/sorted/SortedEvictionPolicyPerformanceTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * {@link SortedEvictionPolicy} performance test. */ +@RunWith(JUnit4.class) public class SortedEvictionPolicyPerformanceTest extends GridCommonAbstractTest { /** Threads. */ private static final int THREADS = 8; @@ -87,6 +91,7 @@ public class SortedEvictionPolicyPerformanceTest extends GridCommonAbstractTest /** * Tests throughput. 
*/ + @Test public void testThroughput() throws Exception { final LongAdder cnt = new LongAdder(); final AtomicBoolean finished = new AtomicBoolean(); @@ -128,4 +133,4 @@ else if (p >= pPut && p < pGet) } }, THREADS); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheClientNearCacheExpiryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheClientNearCacheExpiryTest.java index 3417ba8513a34..4ec23fe6b8f8e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheClientNearCacheExpiryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheClientNearCacheExpiryTest.java @@ -33,6 +33,9 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -40,6 +43,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheClientNearCacheExpiryTest extends IgniteCacheAbstractTest { /** */ private static final int NODES = 3; @@ -80,6 +84,7 @@ public class IgniteCacheClientNearCacheExpiryTest extends IgniteCacheAbstractTes /** * @throws Exception If failed. 
*/ + @Test public void testExpirationOnClient() throws Exception { Ignite ignite = grid(NODES - 1); @@ -123,4 +128,4 @@ public void testExpirationOnClient() throws Exception { for (int i = KEYS_COUNT ; i < KEYS_COUNT * 2; i++) assertNull(cache.localPeek(i)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyAbstractTest.java index c04d9d8d9663a..9baba8e17de35 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyAbstractTest.java @@ -58,6 +58,9 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -70,6 +73,7 @@ /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheExpiryPolicyAbstractTest extends IgniteCacheAbstractTest { /** */ private static final long TTL_FOR_EXPIRE = 500L; @@ -118,6 +122,7 @@ public abstract class IgniteCacheExpiryPolicyAbstractTest extends IgniteCacheAbs /** * @throws Exception if failed. */ + @Test public void testCreateUpdate0() throws Exception { startGrids(1); @@ -149,6 +154,7 @@ public void testCreateUpdate0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testZeroOnCreate() throws Exception { factory = CreatedExpiryPolicy.factoryOf(Duration.ZERO); @@ -176,6 +182,7 @@ private void zeroOnCreate(Integer key) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testZeroOnUpdate() throws Exception { factory = new FactoryBuilder.SingletonFactory<>(new TestPolicy(null, 0L, null)); @@ -207,6 +214,7 @@ private void zeroOnUpdate(Integer key) throws Exception { /** * @throws Exception If failed. */ + @Test public void testZeroOnAccess() throws Exception { factory = new FactoryBuilder.SingletonFactory<>(new TestPolicy(null, null, 0L)); @@ -252,6 +260,7 @@ public void testZeroOnAccess() throws Exception { /** * @throws Exception If failed. */ + @Test public void testZeroOnAccessEagerTtlDisabled() throws Exception { disableEagerTtl = true; @@ -277,6 +286,7 @@ private void zeroOnAccess(Integer key) throws Exception { /** * @throws Exception If failed. */ + @Test public void testEternal() throws Exception { factory = EternalExpiryPolicy.factoryOf(); @@ -298,6 +308,7 @@ public void testEternal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNullFactory() throws Exception { factory = null; // Should work as eternal. @@ -349,6 +360,7 @@ private void eternal(Integer key) throws Exception { /** * @throws Exception If failed. */ + @Test public void testAccess() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-305"); @@ -595,6 +607,7 @@ private void accessGetAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateUpdate() throws Exception { factory = new FactoryBuilder.SingletonFactory<>(new TestPolicy(60_000L, 61_000L, null)); @@ -835,6 +848,7 @@ private void createUpdate(Integer key, @Nullable TransactionConcurrency txConcur /** * @throws Exception If failed. */ + @Test public void testNearCreateUpdate() throws Exception { if (cacheMode() != PARTITIONED) return; @@ -958,6 +972,7 @@ private void nearPutAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNearAccess() throws Exception { if (cacheMode() != PARTITIONED) return; @@ -1005,6 +1020,7 @@ public void testNearAccess() throws Exception { * * @throws Exception If failed. */ + @Test public void testNearExpiresOnClient() throws Exception { if (cacheMode() != PARTITIONED) return; @@ -1042,6 +1058,7 @@ public void testNearExpiresOnClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNearExpiresWithCacheStore() throws Exception { if(cacheMode() != PARTITIONED) return; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyTestSuite.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyTestSuite.java index 61d9b4c296de6..e922537f0ca6e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyTestSuite.java @@ -17,51 +17,54 @@ package org.apache.ignite.internal.processors.cache.expiry; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.cache.store.IgniteCacheExpiryStoreLoadSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheTtlManagerNotificationTest; import org.apache.ignite.internal.processors.cache.IgniteCacheEntryListenerExpiredEventsTest; import org.apache.ignite.internal.processors.cache.IgniteCacheExpireAndUpdateConsistencyTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteCacheExpiryPolicyTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheExpiryPolicyTestSuite { /** * @return Cache Expiry Policy test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Cache Expiry Policy Test Suite"); - suite.addTestSuite(IgniteCacheLargeValueExpireTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheLargeValueExpireTest.class)); - suite.addTestSuite(IgniteCacheAtomicLocalExpiryPolicyTest.class); - //suite.addTestSuite(IgniteCacheAtomicLocalOnheapExpiryPolicyTest.class); - suite.addTestSuite(IgniteCacheAtomicExpiryPolicyTest.class); - //suite.addTestSuite(IgniteCacheAtomicOnheapExpiryPolicyTest.class); - suite.addTestSuite(IgniteCacheAtomicWithStoreExpiryPolicyTest.class); - suite.addTestSuite(IgniteCacheAtomicReplicatedExpiryPolicyTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicLocalExpiryPolicyTest.class)); + //suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicLocalOnheapExpiryPolicyTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicExpiryPolicyTest.class)); + //suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicOnheapExpiryPolicyTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicWithStoreExpiryPolicyTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicReplicatedExpiryPolicyTest.class)); - suite.addTestSuite(IgniteCacheTxLocalExpiryPolicyTest.class); - suite.addTestSuite(IgniteCacheTxExpiryPolicyTest.class); - suite.addTestSuite(IgniteCacheTxWithStoreExpiryPolicyTest.class); - suite.addTestSuite(IgniteCacheTxReplicatedExpiryPolicyTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheTxLocalExpiryPolicyTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheTxExpiryPolicyTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheTxWithStoreExpiryPolicyTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheTxReplicatedExpiryPolicyTest.class)); - suite.addTestSuite(IgniteCacheAtomicExpiryPolicyWithStoreTest.class); - suite.addTestSuite(IgniteCacheTxExpiryPolicyWithStoreTest.class); + 
suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicExpiryPolicyWithStoreTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheTxExpiryPolicyWithStoreTest.class)); - suite.addTestSuite(IgniteCacheExpiryStoreLoadSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheExpiryStoreLoadSelfTest.class)); - suite.addTestSuite(IgniteCacheClientNearCacheExpiryTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheClientNearCacheExpiryTest.class)); - suite.addTestSuite(IgniteCacheEntryListenerExpiredEventsTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheEntryListenerExpiredEventsTest.class)); - suite.addTestSuite(IgniteCacheExpireAndUpdateConsistencyTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheExpireAndUpdateConsistencyTest.class)); // Eager ttl expiration tests. - suite.addTestSuite(GridCacheTtlManagerNotificationTest.class); - suite.addTestSuite(IgniteCacheOnlyOneTtlCleanupThreadExistsTest.class); + suite.addTest(new JUnit4TestAdapter(GridCacheTtlManagerNotificationTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheOnlyOneTtlCleanupThreadExistsTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyWithStoreAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyWithStoreAbstractTest.java index 747cb43536739..6a74b0fd44af2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyWithStoreAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheExpiryPolicyWithStoreAbstractTest.java @@ -45,10 +45,14 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheExpiryPolicyWithStoreAbstractTest extends IgniteCacheAbstractTest { /** {@inheritDoc} */ @Override protected NearCacheConfiguration nearConfiguration() { @@ -72,6 +76,7 @@ public abstract class IgniteCacheExpiryPolicyWithStoreAbstractTest extends Ignit /** * @throws Exception If failed. */ + @Test public void testLoadAll() throws Exception { IgniteCache cache = jcache(0); @@ -124,6 +129,7 @@ public void testLoadAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCache() throws Exception { IgniteCache cache = jcache(0); @@ -150,6 +156,7 @@ public void testLoadCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadThrough() throws Exception { IgniteCache cache = jcache(0); @@ -182,6 +189,7 @@ public void testReadThrough() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetReadThrough() throws Exception { getReadThrough(false, null, null); getReadThrough(true, null, null); @@ -354,4 +362,4 @@ private static class TestExpiryPolicyFactory implements Factory { }; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheLargeValueExpireTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheLargeValueExpireTest.java index 7e0b1d75c5ff9..9719af96f3133 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheLargeValueExpireTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheLargeValueExpireTest.java @@ -30,18 +30,16 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheLargeValueExpireTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int PAGE_SIZE = 1024; @@ -49,8 +47,6 @@ public class IgniteCacheLargeValueExpireTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - DataStorageConfiguration dbCfg = new DataStorageConfiguration(); dbCfg.setPageSize(1024); @@ -62,6 +58,7 @@ public class IgniteCacheLargeValueExpireTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
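The `IgniteCacheLargeValueExpireTest` hunk above drops the static `TcpDiscoveryVmIpFinder` wiring (the shared test framework now configures discovery itself), leaving only the storage tuning. A configuration-only sketch of the resulting override, not runnable without `ignite-core` on the classpath; it mirrors the diff's shape rather than reproducing the exact file:

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

/** Configuration sketch only; assumes ignite-core test classes are available. */
public class LargeValueExpireConfigSketch extends GridCommonAbstractTest {
    /** Page size used by the test (the diff itself keeps the 1024 literal inline). */
    private static final int PAGE_SIZE = 1024;

    @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
        IgniteConfiguration cfg = super.getConfiguration(gridName);

        // No per-test IP finder any more: discovery comes from the base class,
        // so only the small-page storage configuration remains.
        DataStorageConfiguration dbCfg = new DataStorageConfiguration();
        dbCfg.setPageSize(PAGE_SIZE);
        cfg.setDataStorageConfiguration(dbCfg);

        return cfg;
    }
}
```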
*/ + @Test public void testExpire() throws Exception { try (Ignite ignite = startGrid(0)) { checkExpire(ignite, true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheOnlyOneTtlCleanupThreadExistsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheOnlyOneTtlCleanupThreadExistsTest.java index f47de7dd79664..0c2ef8aee458c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheOnlyOneTtlCleanupThreadExistsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheOnlyOneTtlCleanupThreadExistsTest.java @@ -21,11 +21,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks that one and only one Ttl cleanup worker thread must exists, and only * if at least one cache with set 'eagerTtl' flag exists. */ +@RunWith(JUnit4.class) public class IgniteCacheOnlyOneTtlCleanupThreadExistsTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME1 = "cache-1"; @@ -36,6 +40,7 @@ public class IgniteCacheOnlyOneTtlCleanupThreadExistsTest extends GridCommonAbst /** * @throws Exception If failed. 
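The `IgniteCacheExpiryPolicyTestSuite` rewrite earlier in this diff swaps `suite.addTestSuite(X.class)` for `suite.addTest(new JUnit4TestAdapter(X.class))` and annotates the suite with `@RunWith(AllTests.class)`: the adapter bridges JUnit 4 test classes into the JUnit 3 `TestSuite` API. A self-contained sketch of that pattern (the nested test class is hypothetical):

```java
import junit.framework.JUnit4TestAdapter;
import junit.framework.TestSuite;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.AllTests;

@RunWith(AllTests.class)
public class ExpirySuiteSketch {
    /** Hypothetical stand-in for a migrated JUnit 4 test class. */
    public static class SampleExpiryTest {
        @Test
        public void testExpire() { /* elided */ }
    }

    /** The AllTests runner invokes this static factory to obtain the suite. */
    public static TestSuite suite() {
        TestSuite suite = new TestSuite("Sketch Suite");

        // JUnit4TestAdapter implements junit.framework.Test, so a JUnit 4
        // class can be added where the old suite API expects a JUnit 3 test.
        suite.addTest(new JUnit4TestAdapter(SampleExpiryTest.class));

        return suite;
    }
}
```

This keeps the existing suite files (and anything that invokes their static `suite()` methods) working while the individual tests move to JUnit 4.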
*/ + @Test public void testOnlyOneTtlCleanupThreadExists() throws Exception { try (final Ignite g = startGrid(0)) { checkCleanupThreadExists(false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheTxExpiryPolicyWithStoreTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheTxExpiryPolicyWithStoreTest.java index f5888f88ac696..f795ec92365b6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheTxExpiryPolicyWithStoreTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/expiry/IgniteCacheTxExpiryPolicyWithStoreTest.java @@ -21,6 +21,9 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +31,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheTxExpiryPolicyWithStoreTest extends IgniteCacheExpiryPolicyWithStoreAbstractTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -45,6 +49,7 @@ public class IgniteCacheTxExpiryPolicyWithStoreTest extends IgniteCacheExpiryPol } /** {@inheritDoc} */ + @Test @Override public void testGetReadThrough() throws Exception { super.testGetReadThrough(); @@ -62,4 +67,4 @@ public class IgniteCacheTxExpiryPolicyWithStoreTest extends IgniteCacheExpiryPol getReadThrough(false, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE); getReadThrough(true, TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE); } -} \ No newline at end of file +} diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoadAllAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoadAllAbstractTest.java index 6b4f9761311ca..b8d6e304289a3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoadAllAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoadAllAbstractTest.java @@ -22,6 +22,7 @@ import java.util.HashSet; import java.util.Map; import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; import javax.cache.Cache; import javax.cache.configuration.Factory; import javax.cache.integration.CacheLoader; @@ -34,11 +35,15 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest; -import java.util.concurrent.ConcurrentHashMap; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * Test for {@link Cache#loadAll(Set, boolean, CompletionListener)}. 
*/ +@RunWith(JUnit4.class) public abstract class IgniteCacheLoadAllAbstractTest extends IgniteCacheAbstractTest { /** */ private volatile boolean writeThrough = true; @@ -46,6 +51,13 @@ public abstract class IgniteCacheLoadAllAbstractTest extends IgniteCacheAbstract /** */ private static ConcurrentHashMap storeMap; + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throws Exception { @@ -77,6 +89,7 @@ public abstract class IgniteCacheLoadAllAbstractTest extends IgniteCacheAbstract /** * @throws Exception If failed. */ + @Test public void testLoadAll() throws Exception { IgniteCache cache0 = jcache(0); @@ -259,4 +272,4 @@ private static class CacheWriterFactory implements Factory { }; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoaderWriterAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoaderWriterAbstractTest.java index 5cdac5bbc97d5..c36844ad1b9f6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoaderWriterAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheLoaderWriterAbstractTest.java @@ -38,10 +38,14 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest; import org.apache.ignite.lifecycle.LifecycleAware; import org.apache.ignite.resources.IgniteInstanceResource; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheLoaderWriterAbstractTest extends IgniteCacheAbstractTest { /** */ private 
static AtomicInteger ldrCallCnt = new AtomicInteger(); @@ -81,6 +85,7 @@ public abstract class IgniteCacheLoaderWriterAbstractTest extends IgniteCacheAbs /** * @throws Exception If failed. */ + @Test public void testLoaderWriter() throws Exception { IgniteCache cache = jcache(0); @@ -162,6 +167,7 @@ public void testLoaderWriter() throws Exception { /** * */ + @Test public void testLoaderException() { IgniteCache cache = jcache(0); @@ -180,6 +186,7 @@ public void testLoaderException() { /** * */ + @Test public void testWriterException() { IgniteCache cache = jcache(0); @@ -198,6 +205,7 @@ public void testWriterException() { /** * @throws Exception If failed. */ + @Test public void testLoaderWriterBulk() throws Exception { Map vals = new HashMap<>(); @@ -432,4 +440,4 @@ class TestWriter implements CacheWriter, LifecycleAware { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoLoadPreviousValueAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoLoadPreviousValueAbstractTest.java index 4eb6269833aa1..835d05eff76fb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoLoadPreviousValueAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoLoadPreviousValueAbstractTest.java @@ -31,6 +31,9 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -38,6 +41,7 @@ /** * Test for configuration property {@link CacheConfiguration#isLoadPreviousValue()}. 
*/ +@RunWith(JUnit4.class) public abstract class IgniteCacheNoLoadPreviousValueAbstractTest extends IgniteCacheAbstractTest { /** */ private Integer lastKey = 0; @@ -72,6 +76,7 @@ public abstract class IgniteCacheNoLoadPreviousValueAbstractTest extends IgniteC /** * @throws Exception If failed. */ + @Test public void testNoLoadPreviousValue() throws Exception { IgniteCache cache = jcache(0); @@ -215,4 +220,4 @@ protected Collection keys() throws Exception { return keys; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoReadThroughAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoReadThroughAbstractTest.java index 37f6fce4a299c..49014a956aafd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoReadThroughAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoReadThroughAbstractTest.java @@ -37,6 +37,9 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -44,6 +47,7 @@ /** * Test for configuration property {@link CacheConfiguration#isReadThrough}. */ +@RunWith(JUnit4.class) public abstract class IgniteCacheNoReadThroughAbstractTest extends IgniteCacheAbstractTest { /** */ private Integer lastKey = 0; @@ -86,6 +90,7 @@ public abstract class IgniteCacheNoReadThroughAbstractTest extends IgniteCacheAb /** * @throws Exception If failed. 
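Note that `@RunWith(JUnit4.class)` is applied to the abstract base classes above, not to each concrete subclass: `@RunWith` is an `@Inherited` annotation, and the runner collects `@Test` methods from superclasses, so mode-specific subclasses need no further annotation. A sketch under those assumptions (class names are hypothetical stand-ins for the diff's hierarchy):

```java
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

public class AbstractBaseSketch {
    /** The runner annotation sits on the abstract base, as in the diff. */
    @RunWith(JUnit4.class)
    public abstract static class NoReadThroughAbstract {
        @Test
        public void testNoReadThrough() {
            // The real test exercises the cache via hooks like gridCount()
            // that subclasses override (elided here).
        }

        /** Varied per concrete subclass, mirroring gridCount() in the diff. */
        protected abstract int gridCount();
    }

    /** Concrete mode-specific subclass; inherits both runner and test. */
    public static class TxNoReadThrough extends NoReadThroughAbstract {
        @Override protected int gridCount() { return 3; }
    }

    public static void main(String[] args) {
        // Only the concrete class is runnable; the abstract base is not.
        Result r = JUnitCore.runClasses(TxNoReadThrough.class);
        System.out.println("success=" + r.wasSuccessful());
    }
}
```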
*/ + @Test public void testNoReadThrough() throws Exception { IgniteCache cache = jcache(0); @@ -340,4 +345,4 @@ private static class NoReadThroughStoreFactory implements Factory { }; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoWriteThroughAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoWriteThroughAbstractTest.java index 5ab86ec92e694..e7664ddb3dc30 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoWriteThroughAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheNoWriteThroughAbstractTest.java @@ -33,6 +33,9 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -40,6 +43,7 @@ /** * Test for configuration property {@link CacheConfiguration#isWriteThrough}. */ +@RunWith(JUnit4.class) public abstract class IgniteCacheNoWriteThroughAbstractTest extends IgniteCacheAbstractTest { /** */ private Integer lastKey = 0; @@ -75,6 +79,7 @@ public abstract class IgniteCacheNoWriteThroughAbstractTest extends IgniteCacheA * @throws Exception If failed. 
*/ @SuppressWarnings("UnnecessaryLocalVariable") + @Test public void testNoWriteThrough() throws Exception { IgniteCache cache = jcache(0); @@ -349,4 +354,4 @@ protected Collection keys() throws Exception { return keys; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreNodeRestartAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreNodeRestartAbstractTest.java index 9c455b6cccf43..25c20da4afb08 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreNodeRestartAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreNodeRestartAbstractTest.java @@ -22,10 +22,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheStoreNodeRestartAbstractTest extends IgniteCacheAbstractTest { /** */ protected static final String CACHE_NAME1 = "cache1"; @@ -73,6 +77,7 @@ public abstract class IgniteCacheStoreNodeRestartAbstractTest extends IgniteCach /** * @throws Exception If failed. 
*/ + @Test public void testMarshaller() throws Exception { grid(0).cache(CACHE_NAME1).put("key1", new UserObject("key1")); @@ -113,4 +118,4 @@ public String getField() { return field; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionAbstractTest.java index 40e36c2dbedbb..0a6d9361eed1b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionAbstractTest.java @@ -43,12 +43,16 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheStoreSessionAbstractTest extends IgniteCacheAbstractTest { /** */ protected static volatile List expData; @@ -113,6 +117,7 @@ protected List testKeys(IgniteCache cache, int cnt) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStoreSession() throws Exception { assertEquals(DEFAULT_CACHE_NAME, jcache(0).getName()); @@ -365,4 +370,4 @@ private void checkSession(String mtd) { assertEquals(exp.expCacheName, ses.cacheName()); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionWriteBehindAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionWriteBehindAbstractTest.java index 832676d16931a..b07cea6a6ae8f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionWriteBehindAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheStoreSessionWriteBehindAbstractTest.java @@ -37,10 +37,14 @@ import org.apache.ignite.resources.CacheStoreSessionResource; import org.apache.ignite.resources.IgniteInstanceResource; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public abstract class IgniteCacheStoreSessionWriteBehindAbstractTest extends IgniteCacheAbstractTest { /** */ private static final String CACHE_NAME1 = "cache1"; @@ -114,6 +118,7 @@ protected CacheConfiguration cacheConfiguration(String igniteInstanceName) throw /** * @throws Exception If failed. 
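Several subclasses in this diff gain a `setUp()` override that calls `MvccFeatureChecker.failIfNotSupported(...)` before delegating to `super.setUp()`: the guard aborts a test early when a forced MVCC mode does not support the feature (cache store, local cache, near cache) the test depends on, before any expensive grid startup runs. The guard-then-delegate shape can be sketched with a stand-in checker; `MvccFeatureChecker` itself is Ignite test-framework code, and everything below (the `Feature` set, the always-throwing `CACHE_STORE` case) is hypothetical:

```java
import java.util.EnumSet;
import java.util.Set;

public class SetUpGuardSketch {
    enum Feature { CACHE_STORE, LOCAL_CACHE, NEAR_CACHE }

    /** Hypothetical stand-in: pretend only NEAR_CACHE is supported here. */
    static final Set<Feature> SUPPORTED = EnumSet.of(Feature.NEAR_CACHE);

    static void failIfNotSupported(Feature f) {
        if (!SUPPORTED.contains(f))
            throw new AssertionError("MVCC mode does not support " + f);
    }

    /** Plays the role of the abstract test base. */
    static class BaseTest {
        boolean started;
        public void setUp() { started = true; /* real tests start grids here */ }
    }

    /** Mirrors the diff's overrides: guard first, then delegate upward. */
    static class CacheStoreTest extends BaseTest {
        @Override public void setUp() {
            // Abort before the expensive base-class setup when unsupported.
            failIfNotSupported(Feature.CACHE_STORE);
            super.setUp();
        }
    }
}
```

Putting the check ahead of `super.setUp()` is the point of the pattern: an unsupported combination fails fast instead of paying for cluster startup first.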
*/ + @Test public void testSession() throws Exception { testCache(DEFAULT_CACHE_NAME); @@ -209,7 +214,7 @@ protected class TestStore implements CacheStore { /** {@inheritDoc} */ @Override public void writeAll(Collection> entries) throws CacheWriterException { log.info("writeAll: " + entries); - + assertTrue("Unexpected entries: " + entries, entries.size() == 10 || entries.size() == 1); checkSession("writeAll"); @@ -293,4 +298,4 @@ public ExpectedData(String expMtd, String expCacheName) { this.expCacheName = expCacheName; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLoaderWriterTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLoaderWriterTest.java index fc4ec781a80ba..29ba7f1f3f6e5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLoaderWriterTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLoaderWriterTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxLoaderWriterTest extends IgniteCacheLoaderWriterAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 3; @@ -47,4 +55,4 @@ public class IgniteCacheTxLoaderWriterTest extends IgniteCacheLoaderWriterAbstra @Override protected NearCacheConfiguration nearConfiguration() { 
return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalLoadAllTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalLoadAllTest.java index 97374bf3304e0..391fde2940edf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalLoadAllTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalLoadAllTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -28,6 +29,12 @@ * */ public class IgniteCacheTxLocalLoadAllTest extends IgniteCacheLoadAllAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } /** {@inheritDoc} */ @Override protected int gridCount() { return 1; @@ -47,4 +54,4 @@ public class IgniteCacheTxLocalLoadAllTest extends IgniteCacheLoadAllAbstractTes @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoLoadPreviousValueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoLoadPreviousValueTest.java index 686b4ca310fb0..9b4afdba072a5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoLoadPreviousValueTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoLoadPreviousValueTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxLocalNoLoadPreviousValueTest extends IgniteCacheNoLoadPreviousValueAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; @@ -47,4 +55,4 @@ public class IgniteCacheTxLocalNoLoadPreviousValueTest extends IgniteCacheNoLoad @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoReadThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoReadThroughTest.java index 5235c9d454b1b..470e65e1b9754 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoReadThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoReadThroughTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static 
org.apache.ignite.cache.CacheMode.LOCAL; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxLocalNoReadThroughTest extends IgniteCacheNoReadThroughAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; @@ -47,4 +55,4 @@ public class IgniteCacheTxLocalNoReadThroughTest extends IgniteCacheNoReadThroug @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoWriteThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoWriteThroughTest.java index 7985c57207122..3e5ae2d75ad7e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoWriteThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxLocalNoWriteThroughTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxLocalNoWriteThroughTest extends IgniteCacheNoWriteThroughAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 1; @@ -47,4 +55,4 @@ public class IgniteCacheTxLocalNoWriteThroughTest extends 
IgniteCacheNoWriteThro @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoLoadPreviousValueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoLoadPreviousValueTest.java index ce42e393647b7..019cf1e1e33f3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoLoadPreviousValueTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoLoadPreviousValueTest.java @@ -18,13 +18,21 @@ package org.apache.ignite.internal.processors.cache.integration; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * */ public class IgniteCacheTxNearEnabledNoLoadPreviousValueTest extends IgniteCacheTxNoLoadPreviousValueTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected NearCacheConfiguration nearConfiguration() { return new NearCacheConfiguration(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoWriteThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoWriteThroughTest.java index ea12cce995bea..7a58800546d0b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoWriteThroughTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNearEnabledNoWriteThroughTest.java @@ -18,13 +18,21 @@ package org.apache.ignite.internal.processors.cache.integration; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * */ public class IgniteCacheTxNearEnabledNoWriteThroughTest extends IgniteCacheTxNoWriteThroughTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected NearCacheConfiguration nearConfiguration() { return new NearCacheConfiguration(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoLoadPreviousValueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoLoadPreviousValueTest.java index ce9ef9b467ab2..fe71fe9cacc22 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoLoadPreviousValueTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoLoadPreviousValueTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxNoLoadPreviousValueTest extends IgniteCacheNoLoadPreviousValueAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + 
MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 3; @@ -47,4 +55,4 @@ public class IgniteCacheTxNoLoadPreviousValueTest extends IgniteCacheNoLoadPrevi @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoReadThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoReadThroughTest.java index 90bc768ed60bf..b477b8821d153 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoReadThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoReadThroughTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxNoReadThroughTest extends IgniteCacheNoReadThroughAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 3; @@ -47,4 +55,4 @@ public class IgniteCacheTxNoReadThroughTest extends IgniteCacheNoReadThroughAbst @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoWriteThroughTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoWriteThroughTest.java index 325d96994d51f..12ca66c231913 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoWriteThroughTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxNoWriteThroughTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -28,6 +29,13 @@ * */ public class IgniteCacheTxNoWriteThroughTest extends IgniteCacheNoWriteThroughAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 3; @@ -47,4 +55,4 @@ public class IgniteCacheTxNoWriteThroughTest extends IgniteCacheNoWriteThroughAb @Override protected NearCacheConfiguration nearConfiguration() { return null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionTest.java index b9e884b9b4d84..741dd2a37d438 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionTest.java @@ -25,9 +25,13 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,7 +41,15 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheTxStoreSessionTest extends IgniteCacheStoreSessionAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected int gridCount() { return 3; @@ -61,6 +73,7 @@ public class IgniteCacheTxStoreSessionTest extends IgniteCacheStoreSessionAbstra /** * @throws Exception If failed. */ + @Test public void testStoreSessionTx() throws Exception { testTxPut(jcache(0), null, null); @@ -256,6 +269,7 @@ private Transaction startTx(TransactionConcurrency concurrency, TransactionIsola /** * @throws Exception If failed. 
*/ + @Test public void testSessionCrossCacheTx() throws Exception { IgniteCache cache0 = ignite(0).cache(DEFAULT_CACHE_NAME); @@ -292,4 +306,4 @@ public void testSessionCrossCacheTx() throws Exception { assertEquals(0, expData.size()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindCoalescingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindCoalescingTest.java index a90b4f18cf04d..cd88184eaf025 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindCoalescingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindCoalescingTest.java @@ -22,6 +22,7 @@ import javax.cache.integration.CacheWriterException; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -30,6 +31,13 @@ * parameter. 
*/ public class IgniteCacheTxStoreSessionWriteBehindCoalescingTest extends IgniteCacheStoreSessionWriteBehindAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; @@ -85,4 +93,4 @@ private class TestNonCoalescingStore extends TestStore { entLatch.countDown(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindTest.java index b72aba39b3492..533ab31c8f1de 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/integration/IgniteCacheTxStoreSessionWriteBehindTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache.integration; import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -25,8 +26,15 @@ * */ public class IgniteCacheTxStoreSessionWriteBehindTest extends IgniteCacheStoreSessionWriteBehindAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheAtomicLocalTckMetricsSelfTestImpl.java 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheAtomicLocalTckMetricsSelfTestImpl.java index 23e7a8b748ab6..f2d73eb978577 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheAtomicLocalTckMetricsSelfTestImpl.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheAtomicLocalTckMetricsSelfTestImpl.java @@ -21,14 +21,19 @@ import javax.cache.processor.EntryProcessorException; import javax.cache.processor.MutableEntry; import org.apache.ignite.IgniteCache; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Local atomic cache metrics test with tck specific. */ +@RunWith(JUnit4.class) public class GridCacheAtomicLocalTckMetricsSelfTestImpl extends GridCacheAtomicLocalMetricsSelfTest { /** * @throws Exception If failed. */ + @Test public void testEntryProcessorRemove() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -63,6 +68,7 @@ public void testEntryProcessorRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheStatistics() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -103,6 +109,7 @@ public void testCacheStatistics() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConditionReplace() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -153,6 +160,7 @@ public void testConditionReplace() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutIfAbsent() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -180,4 +188,4 @@ public void testPutIfAbsent() throws Exception { assertEquals(putCount, cache.localMetrics().getCachePuts()); assertEquals(missCount, cache.localMetrics().getCacheMisses()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheDaemonNodeLocalSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheDaemonNodeLocalSelfTest.java index 8578db7306232..c1ad352c9b4d9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheDaemonNodeLocalSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheDaemonNodeLocalSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.GridCacheDaemonNodeAbstractSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -26,8 +27,15 @@ * Tests local cache with daemon node. 
*/ public class GridCacheDaemonNodeLocalSelfTest extends GridCacheDaemonNodeAbstractSelfTest { + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.beforeTest(); + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { return LOCAL; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicApiSelfTest.java index d35590f07739d..84bcdd7d537b1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicApiSelfTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheBasicApiAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -27,6 +28,13 @@ * Basic API tests. 
*/ public class GridCacheLocalBasicApiSelfTest extends GridCacheBasicApiAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -39,4 +47,4 @@ public class GridCacheLocalBasicApiSelfTest extends GridCacheBasicApiAbstractTes return cfg; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicStoreSelfTest.java index 183b456c71b00..b17dd681b7695 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicStoreSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalBasicStoreSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.GridCacheBasicStoreAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -26,8 +27,15 @@ * Test store with local cache. 
*/ public class GridCacheLocalBasicStoreSelfTest extends GridCacheBasicStoreAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { return LOCAL; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalByteArrayValuesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalByteArrayValuesSelfTest.java index a6be82b58d651..f5dc05b941e7f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalByteArrayValuesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalByteArrayValuesSelfTest.java @@ -26,9 +26,13 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheAbstractByteArrayValuesSelfTest; import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -40,6 +44,7 @@ /** * Byte values test for LOCAL cache. */ +@RunWith(JUnit4.class) public class GridCacheLocalByteArrayValuesSelfTest extends GridCacheAbstractByteArrayValuesSelfTest { /** Grid. 
*/ private static Ignite ignite; @@ -55,7 +60,6 @@ public class GridCacheLocalByteArrayValuesSelfTest extends GridCacheAbstractByte CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - ccfg.setName(CACHE_REGULAR); ccfg.setAtomicityMode(TRANSACTIONAL); ccfg.setCacheMode(LOCAL); ccfg.setWriteSynchronizationMode(FULL_SYNC); @@ -67,16 +71,18 @@ public class GridCacheLocalByteArrayValuesSelfTest extends GridCacheAbstractByte /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + ignite = startGrid(1); - cache = ignite.cache(CACHE_REGULAR); + cache = ignite.cache(DEFAULT_CACHE_NAME); } /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { cache = null; - ignite = null; + stopAllGrids(); } /** @@ -84,6 +90,7 @@ public class GridCacheLocalByteArrayValuesSelfTest extends GridCacheAbstractByte * * @throws Exception If failed. */ + @Test public void testPessimistic() throws Exception { testTransaction(cache, PESSIMISTIC, KEY_1, wrap(1)); } @@ -93,6 +100,7 @@ public void testPessimistic() throws Exception { * * @throws Exception If failed. */ + @Test public void testPessimisticMixed() throws Exception { testTransactionMixed(cache, PESSIMISTIC, KEY_1, wrap(1), KEY_2, 1); } @@ -102,6 +110,7 @@ public void testPessimisticMixed() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimistic() throws Exception { testTransaction(cache, OPTIMISTIC, KEY_1, wrap(1)); } @@ -111,6 +120,7 @@ public void testOptimistic() throws Exception { * * @throws Exception If failed. */ + @Test public void testOptimisticMixed() throws Exception { testTransactionMixed(cache, OPTIMISTIC, KEY_1, wrap(1), KEY_2, 1); } @@ -121,6 +131,7 @@ public void testOptimisticMixed() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("TooBroadScope") + @Test public void testSwap() throws Exception { // TODO GG-11148. // assert cache.getConfiguration(CacheConfiguration.class).isSwapEnabled(); @@ -197,4 +208,4 @@ private void testTransactionMixed(IgniteCache cache, Transactio tx.close(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEventSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEventSelfTest.java index 0bf01c2f26732..02439d809afea 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEventSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEventSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.distributed.GridCacheEventAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -26,6 +27,14 @@ * Tests events. 
*/ public class GridCacheLocalEventSelfTest extends GridCacheEventAbstractTest { + /** {@inheritDoc} */ + @Override public void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS); + + super.beforeTest(); + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { return LOCAL; @@ -35,4 +44,4 @@ public class GridCacheLocalEventSelfTest extends GridCacheEventAbstractTest { @Override protected int gridCount() { return 1; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEvictionEventSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEvictionEventSelfTest.java index b2d98c61c1150..f5ce88e30aa46 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEvictionEventSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalEvictionEventSelfTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.GridCacheEvictionEventAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -27,6 +28,13 @@ * Tests local cache eviction event. 
*/ public class GridCacheLocalEvictionEventSelfTest extends GridCacheEvictionEventAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { return CacheMode.LOCAL; @@ -36,4 +44,4 @@ public class GridCacheLocalEvictionEventSelfTest extends GridCacheEvictionEventA @Override protected CacheAtomicityMode atomicityMode() { return TRANSACTIONAL; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalFullApiSelfTest.java index aaf69c96d432a..c55be312bb0f0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalFullApiSelfTest.java @@ -27,12 +27,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.GridCacheAbstractFullApiSelfTest; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Tests for local cache. */ +@RunWith(JUnit4.class) public class GridCacheLocalFullApiSelfTest extends GridCacheAbstractFullApiSelfTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -51,6 +55,7 @@ public class GridCacheLocalFullApiSelfTest extends GridCacheAbstractFullApiSelfT /** * @throws Exception In case of error. 
*/ + @Test public void testMapKeysToNodes() throws Exception { IgniteCache cache = jcache(); @@ -85,6 +90,7 @@ public void testMapKeysToNodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalClearAsync() throws Exception { localCacheClear(true); } @@ -92,6 +98,7 @@ public void testLocalClearAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalClear() throws Exception { localCacheClear(false); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalGetAndTransformStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalGetAndTransformStoreSelfTest.java index 19b4b6116be2f..8fbec61ccafb4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalGetAndTransformStoreSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalGetAndTransformStoreSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.GridCacheGetAndTransformStoreAbstractTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -26,8 +27,15 @@ * Test get and transform for store with local cache. 
*/ public class GridCacheLocalGetAndTransformStoreSelfTest extends GridCacheGetAndTransformStoreAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { return LOCAL; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIsolatedNodesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIsolatedNodesSelfTest.java index 53122c78f5391..31d7170e305cb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIsolatedNodesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIsolatedNodesSelfTest.java @@ -23,13 +23,18 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Isolated nodes LOCAL cache self test. */ +@RunWith(JUnit4.class) public class GridCacheLocalIsolatedNodesSelfTest extends GridCommonAbstractTest { /** * @@ -40,6 +45,8 @@ public GridCacheLocalIsolatedNodesSelfTest() { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + startGrids(3); } @@ -52,6 +59,7 @@ public GridCacheLocalIsolatedNodesSelfTest() { * * @throws Exception If test failed. 
*/ + @Test public void testIsolatedNodes() throws Exception { Ignite g1 = grid(0); UUID nid1 = g1.cluster().localNode().id(); @@ -115,4 +123,4 @@ private static class NodeIdFilter implements IgnitePredicate { } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIteratorsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIteratorsSelfTest.java index 6af80ca2a1869..6fa8edc00327b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIteratorsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalIteratorsSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.GridCacheAbstractIteratorsSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -40,4 +41,11 @@ public class GridCacheLocalIteratorsSelfTest extends GridCacheAbstractIteratorsS @Override protected int entryCount() { return 1000; } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.beforeTestsStarted(); + } } \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLoadAllSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLoadAllSelfTest.java index f10cefdb5ede9..638a49f848e68 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLoadAllSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLoadAllSelfTest.java @@ -27,14 +27,26 @@ import 
org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Load-All self test. */ +@RunWith(JUnit4.class) public class GridCacheLocalLoadAllSelfTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** * */ @@ -46,6 +58,7 @@ public GridCacheLocalLoadAllSelfTest() { * * @throws Exception If test failed. */ + @Test public void testCacheGetAll() throws Exception { Ignite ignite = grid(); @@ -107,4 +120,4 @@ private static class TestStore extends CacheStoreAdapter { // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLockSelfTest.java index f4160edba4aec..4bd013ef1a244 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalLockSelfTest.java @@ -28,16 +28,27 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestThread; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Test cases for multi-threaded tests. */ -@SuppressWarnings({"ProhibitedExceptionThrown"}) +@RunWith(JUnit4.class) public class GridCacheLocalLockSelfTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override public void setUp() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE); + + super.setUp(); + } + /** Grid. */ private Ignite ignite; @@ -80,6 +91,7 @@ public GridCacheLocalLockSelfTest() { /** * @throws IgniteCheckedException If test failed. */ + @Test public void testLockReentry() throws IgniteCheckedException { IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -125,6 +137,7 @@ public void testLockReentry() throws IgniteCheckedException { /** * @throws Exception If test failed. 
     */
+    @Test
     public void testLock() throws Throwable {
         final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
 
@@ -135,7 +148,6 @@ public void testLock() throws Throwable {
         final Lock lock = cache.lock(1);
 
         GridTestThread t1 = new GridTestThread(new Callable() {
-            @SuppressWarnings({"CatchGenericClass"})
             @Nullable @Override public Object call() throws Exception {
                 info("Before lock for key 1");
 
@@ -172,7 +184,6 @@ public void testLock() throws Throwable {
         });
 
         GridTestThread t2 = new GridTestThread(new Callable() {
-            @SuppressWarnings({"CatchGenericClass"})
             @Nullable @Override public Object call() throws Exception {
                 info("Waiting for latch1...");
 
@@ -243,6 +254,7 @@ public void testLock() throws Throwable {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testLockAndPut() throws Throwable {
         final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
 
@@ -321,4 +333,4 @@ public void testLockAndPut() throws Throwable {
         assert !cache.isLocalLocked(1, true);
         assert !cache.isLocalLocked(1, false);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMetricsSelfTest.java
index 16a2e3393fd75..d66799e38f4a3 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMetricsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMetricsSelfTest.java
@@ -20,6 +20,7 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheTransactionalAbstractMetricsSelfTest;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 
@@ -30,6 +31,15 @@ public class GridCacheLocalMetricsSelfTest extends GridCacheTransactionalAbstrac
     /** */
     private static final int GRID_CNT = 1;
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.METRICS);
+
+
+        super.beforeTestsStarted();
+    }
+
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMultithreadedSelfTest.java
index f6dc5351892a5..f5b659acdabe3 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMultithreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalMultithreadedSelfTest.java
@@ -34,14 +34,26 @@
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestThread;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 
 /**
  * Multithreaded local cache locking test.
  */
+@RunWith(JUnit4.class)
 public class GridCacheLocalMultithreadedSelfTest extends GridCommonAbstractTest {
+    /** {@inheritDoc} */
+    @Override public void setUp() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
+        super.setUp();
+    }
+
     /** Cache. */
     private IgniteCache cache;
 
@@ -88,6 +100,7 @@ public GridCacheLocalMultithreadedSelfTest() {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testBasicLocks() throws Throwable {
         GridTestUtils.runMultiThreaded(new Callable() {
             /** {@inheritDoc} */
@@ -114,6 +127,7 @@ public void testBasicLocks() throws Throwable {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testMultiLocks() throws Throwable {
         GridTestUtils.runMultiThreaded(new Callable() {
             /** {@inheritDoc} */
@@ -142,6 +156,7 @@ public void testMultiLocks() throws Throwable {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testSlidingKeysLocks() throws Throwable {
         final AtomicInteger cnt = new AtomicInteger();
 
@@ -174,6 +189,7 @@ public void testSlidingKeysLocks() throws Throwable {
     /**
      * @throws Exception If test fails.
      */
+    @Test
     public void testSingleLockTimeout() throws Exception {
         final CountDownLatch l1 = new CountDownLatch(1);
         final CountDownLatch l2 = new CountDownLatch(1);
@@ -240,6 +256,7 @@ public void testSingleLockTimeout() throws Exception {
     /**
      * @throws Exception If test fails.
*/ + @Test public void testMultiLockTimeout() throws Exception { final CountDownLatch l1 = new CountDownLatch(1); final CountDownLatch l2 = new CountDownLatch(1); @@ -350,4 +367,4 @@ public void testMultiLockTimeout() throws Exception { private String thread() { return "Thread [id=" + Thread.currentThread().getId() + ", name=" + Thread.currentThread().getName() + ']'; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxExceptionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxExceptionSelfTest.java index 63a900dd1141d..ae1e2d0e62d10 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxExceptionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxExceptionSelfTest.java @@ -19,6 +19,7 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.processors.cache.IgniteTxExceptionAbstractSelfTest; +import org.apache.ignite.testframework.MvccFeatureChecker; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -26,6 +27,13 @@ * Tests local cache. 
 */
 public class GridCacheLocalTxExceptionSelfTest extends IgniteTxExceptionAbstractSelfTest {
+    /** {@inheritDoc} */
+    @Override protected void beforeTestsStarted() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
+        super.beforeTestsStarted();
+    }
+
     /** {@inheritDoc} */
     @Override protected int gridCount() {
         return 1;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxMultiThreadedSelfTest.java
index 388a7bfb78958..5babd8fa3457b 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxMultiThreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxMultiThreadedSelfTest.java
@@ -21,6 +21,7 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
 import org.apache.ignite.internal.processors.cache.IgniteTxMultiThreadedAbstractTest;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
 
@@ -33,6 +34,11 @@ public class GridCacheLocalTxMultiThreadedSelfTest extends IgniteTxMultiThreaded
     /** Cache debug flag. */
     private static final boolean CACHE_DEBUG = false;
 
+    /** {@inheritDoc} */
+    @Override protected void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+    }
+
     /** {@inheritDoc} */
     @SuppressWarnings({"ConstantConditions"})
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
@@ -90,4 +96,4 @@ public class GridCacheLocalTxMultiThreadedSelfTest extends IgniteTxMultiThreaded
     @Override protected boolean printMemoryStats() {
         return true;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxSingleThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxSingleThreadedSelfTest.java
index 83c74e201abd2..9022693c11de3 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxSingleThreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxSingleThreadedSelfTest.java
@@ -21,6 +21,7 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
 import org.apache.ignite.internal.processors.cache.IgniteTxSingleThreadedAbstractTest;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
 
@@ -30,6 +31,11 @@
  * Tests for local transactions.
 */
 public class GridCacheLocalTxSingleThreadedSelfTest extends IgniteTxSingleThreadedAbstractTest {
+    /** {@inheritDoc} */
+    @Override public void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+    }
+
     /** Cache debug flag. */
     private static final boolean CACHE_DEBUG = false;
 
@@ -85,4 +91,4 @@ public class GridCacheLocalTxSingleThreadedSelfTest extends IgniteTxSingleThread
     @Override protected boolean printMemoryStats() {
         return true;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxTimeoutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxTimeoutSelfTest.java
index 160e2512b180c..e7297e91b0e9a 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxTimeoutSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/local/GridCacheLocalTxTimeoutSelfTest.java
@@ -25,11 +25,15 @@
 import org.apache.ignite.internal.util.typedef.X;
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
 import org.apache.ignite.transactions.TransactionTimeoutException;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheMode.LOCAL;
 import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
@@ -41,7 +45,15 @@
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridCacheLocalTxTimeoutSelfTest extends GridCommonAbstractTest {
+    /** {@inheritDoc} */
+    @Override public void setUp() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
+
+        super.setUp();
+    }
+
     /** Grid. */
     private Ignite ignite;
 
@@ -89,6 +101,7 @@ public GridCacheLocalTxTimeoutSelfTest() {
     /**
      * @throws IgniteCheckedException If test failed.
     */
+    @Test
     public void testPessimisticReadCommitted() throws Exception {
         checkTransactionTimeout(PESSIMISTIC, READ_COMMITTED);
     }
@@ -96,6 +109,7 @@ public void testPessimisticReadCommitted() throws Exception {
     /**
      * @throws IgniteCheckedException If test failed.
     */
+    @Test
     public void testPessimisticRepeatableRead() throws Exception {
         checkTransactionTimeout(PESSIMISTIC, REPEATABLE_READ);
     }
@@ -103,6 +117,7 @@ public void testPessimisticRepeatableRead() throws Exception {
     /**
      * @throws IgniteCheckedException If test failed.
     */
+    @Test
     public void testPessimisticSerializable() throws Exception {
         checkTransactionTimeout(PESSIMISTIC, SERIALIZABLE);
     }
@@ -110,6 +125,7 @@ public void testPessimisticSerializable() throws Exception {
     /**
      * @throws IgniteCheckedException If test failed.
     */
+    @Test
     public void testOptimisticReadCommitted() throws Exception {
         checkTransactionTimeout(OPTIMISTIC, READ_COMMITTED);
     }
@@ -117,6 +133,7 @@ public void testOptimisticReadCommitted() throws Exception {
     /**
      * @throws IgniteCheckedException If test failed.
     */
+    @Test
     public void testOptimisticRepeatableRead() throws Exception {
         checkTransactionTimeout(OPTIMISTIC, REPEATABLE_READ);
     }
@@ -124,6 +141,7 @@ public void testOptimisticRepeatableRead() throws Exception {
     /**
      * @throws IgniteCheckedException If test failed.
*/ + @Test public void testOptimisticSerializable() throws Exception { checkTransactionTimeout(OPTIMISTIC, SERIALIZABLE); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest.java index 5d28cb7ef37a1..a72f80c811504 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest.java @@ -18,10 +18,14 @@ package org.apache.ignite.internal.processors.cache.multijvm; import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearEnabledMultiNodeFullApiSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multi-JVM tests. 
*/ +@RunWith(JUnit4.class) public class GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest extends GridCacheAtomicNearEnabledMultiNodeFullApiSelfTest { /** {@inheritDoc} */ @@ -30,7 +34,8 @@ public class GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest extends } /** {@inheritDoc} */ + @Test @Override public void testPutAllPutAll() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-1112"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest.java index 9b4db5b7bd791..56cc0eb45256c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/multijvm/GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest.java @@ -18,20 +18,17 @@ package org.apache.ignite.internal.processors.cache.multijvm; import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedNearOnlyMultiNodeFullApiSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multi-JVM tests. 
*/ +@RunWith(JUnit4.class) public class GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest extends GridCacheReplicatedNearOnlyMultiNodeFullApiSelfTest { /** {@inheritDoc} */ @Override protected boolean isMultiJvm() { return true; } - - /** {@inheritDoc} */ - @Override public void testNearDhtKeySize() throws Exception { - if (isMultiJvm()) - fail("https://issues.apache.org/jira/browse/IGNITE-648"); - } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractBasicCoordinatorFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractBasicCoordinatorFailoverTest.java index c1718b5bc4d6a..6e9f9e41be1c9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractBasicCoordinatorFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractBasicCoordinatorFailoverTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.cache.mvcc; import java.util.ArrayList; +import java.util.Iterator; import java.util.LinkedHashMap; import java.util.List; import java.util.Map; @@ -27,19 +28,27 @@ import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; +import java.util.stream.Collectors; +import javax.cache.Cache; import javax.cache.CacheException; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.query.QueryCursor; +import org.apache.ignite.cache.query.ScanQuery; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.cluster.ClusterTopologyException; import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager; import org.apache.ignite.internal.processors.cache.distributed.TestCacheNodeExcludingFilter; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetRequest; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccSnapshotResponse; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.lang.IgniteBiPredicate; @@ -50,6 +59,9 @@ import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; @@ -58,6 +70,7 @@ /** * Base class for Mvcc coordinator failover test. */ +@RunWith(JUnit4.class) public abstract class CacheMvccAbstractBasicCoordinatorFailoverTest extends CacheMvccAbstractTest { /** * @param concurrency Transaction concurrency. @@ -675,4 +688,182 @@ protected void checkCoordinatorChangeActiveQueryClientFails_Simple(@Nullable Ign for (Ignite node : G.allGrids()) checkActiveQueriesCleanup(node); } + + /** + * @throws Exception If failed. + */ + @Test + public void testMultipleCoordinatorsLeft2Persistence() throws Exception { + persistence = true; + + checkCoordinatorsLeft(2, false); + } + + /** + * @throws Exception If failed. 
+     */
+    @Test
+    public void testMultipleCoordinatorsLeft3Persistence() throws Exception {
+        persistence = true;
+
+        checkCoordinatorsLeft(3, true);
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testMultipleCoordinatorsLeft4() throws Exception {
+        checkCoordinatorsLeft(4, true);
+    }
+
+    /**
+     * @param num Number of coordinators to stop.
+     * @throws Exception If failed.
+     */
+    @SuppressWarnings("unchecked")
+    private void checkCoordinatorsLeft(int num, boolean stopCrdFirst) throws Exception {
+        disableScheduledVacuum = true;
+
+        final int DATA_NODES = 3;
+
+        final int NODES = num + DATA_NODES;
+
+        nodeAttr = CRD_ATTR;
+
+        // Do not use startMultithreaded here.
+        startGrids(num);
+
+        nodeAttr = null;
+
+        startGridsMultiThreaded(num, DATA_NODES);
+
+        List<Ignite> victims = new ArrayList<>(num);
+        List<Ignite> survivors = new ArrayList<>(DATA_NODES);
+
+        for (int i = 0; i < NODES; i++) {
+            if (i < num)
+                victims.add(grid(i));
+            else
+                survivors.add(grid(i));
+        }
+
+        if (log.isInfoEnabled()) {
+            log.info("Nodes to be stopped [" +
+                victims.stream().
+                    map(n -> n.cluster().localNode().id().toString())
+                    .collect(Collectors.joining(", ")) + ']');
+
+            log.info("Nodes not to be stopped [" +
+                survivors.stream().
+                    map(n -> n.cluster().localNode().id().toString())
+                    .collect(Collectors.joining(", ")) + ']');
+        }
+
+        Ignite nearNode = survivors.get(0);
+
+        if (persistence)
+            nearNode.cluster().active(true);
+
+        CacheConfiguration ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, DATA_NODES - 1, DFLT_PARTITION_COUNT)
+            .setNodeFilter(new CoordinatorNodeFilter());
+
+        IgniteCache cache = nearNode.createCache(ccfg);
+
+        try (Transaction tx = nearNode.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            for (int key = 0; key < 10; key++)
+                cache.put(key, 1);
+
+            tx.commit();
+        }
+
+        List<Thread> stopThreads = victims.stream()
+            .map(v -> new Thread(() -> stopGrid(v.name())))
+            .collect(Collectors.toList());
+
+        ScanQuery<Object, Object> scan = new ScanQuery<>();
+
+        QueryCursor<Cache.Entry<Object, Object>> cur = survivors.get(0).cache(DEFAULT_CACHE_NAME).query(scan);
+
+        Iterator<Cache.Entry<Object, Object>> it = cur.iterator();
+
+        assertTrue(it.hasNext());
+        assertEquals(1, it.next().getValue());
+
+        if (log.isInfoEnabled())
+            log.info("Start stopping nodes.");
+
+        // Stop nodes and join threads.
+        if (stopCrdFirst) {
+            for (Thread t : stopThreads)
+                t.start();
+        }
+        else {
+            // We should stop the oldest node last.
+ GridCachePartitionExchangeManager exch = ((IgniteEx)survivors.get(1)).context().cache().context().exchange(); + + GridDhtTopologyFuture lastFinished = exch.lastFinishedFuture(); + + for (int i = 1; i < stopThreads.size(); i++) + stopThreads.get(i).start(); + + while (lastFinished == exch.lastTopologyFuture()) + doSleep(1); + + stopThreads.get(0).start(); + } + + for (Thread t : stopThreads) + t.join(); + + if (log.isInfoEnabled()) + log.info("All nodes stopped."); + + assertTrue(it.hasNext()); + assertEquals(1, it.next().getValue()); + + for (Ignite node : survivors) { + for (int key = 0; key < 10; key++) + assertEquals(1, node.cache(DEFAULT_CACHE_NAME).get(key)); + } + + try (Transaction tx = nearNode.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + for (int key = 0; key < 10; key++) + cache.put(key, 2); + + tx.commit(); + } + catch (Exception e) { + stopAllGrids(true); + + fail(X.getFullStackTrace(e)); + } + + for (Ignite node : survivors) { + for (int key = 0; key < 10; key++) + assertEquals(2, node.cache(DEFAULT_CACHE_NAME).get(key)); + } + + try (Transaction tx = nearNode.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + for (int key = 0; key < 10; key++) + cache.put(key, 3); + + tx.commit(); + } + catch (Exception e) { + stopAllGrids(true); + + fail(X.getFullStackTrace(e)); + } + + for (Ignite node : survivors) { + for (int key = 0; key < 10; key++) + assertEquals(3, node.cache(DEFAULT_CACHE_NAME).get(key)); + } + + while (it.hasNext()) + assertEquals(1, (int)it.next().getValue()); + + cur.close(); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractCoordinatorFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractCoordinatorFailoverTest.java index 60f1a2f788bbe..eee39580a1af4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractCoordinatorFailoverTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractCoordinatorFailoverTest.java @@ -17,6 +17,11 @@ package org.apache.ignite.internal.processors.cache.mvcc; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.GET; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SCAN; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.WriteMode.PUT; @@ -28,10 +33,13 @@ /** * Mvcc cache API coordinator failover test. */ +@RunWith(JUnit4.class) public abstract class CacheMvccAbstractCoordinatorFailoverTest extends CacheMvccAbstractBasicCoordinatorFailoverTest { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testAccountsTxGet_Server_Backups0_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -42,6 +50,8 @@ public void testAccountsTxGet_Server_Backups0_CoordinatorFails_Persistence() thr /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testAccountsTxGet_SingleNode_CoordinatorFails() throws Exception { accountsTxReadAll(1, 0, 0, 1, null, true, GET, PUT, DFLT_TEST_TIME, RestartMode.RESTART_CRD); @@ -50,6 +60,8 @@ public void testAccountsTxGet_SingleNode_CoordinatorFails() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10750") + @Test public void testAccountsTxScan_Server_Backups0_CoordinatorFails() throws Exception { accountsTxReadAll(2, 0, 0, 64, null, true, SCAN, PUT, DFLT_TEST_TIME, RestartMode.RESTART_CRD); @@ -58,6 +70,7 @@ public void testAccountsTxScan_Server_Backups0_CoordinatorFails() throws Excepti /** * @throws Exception If failed. 
*/ + @Test public void testAccountsTxScan_SingleNode_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -69,6 +82,7 @@ public void testAccountsTxScan_SingleNode_CoordinatorFails_Persistence() throws /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_Server_Backups0_RestartCoordinator_GetPut() throws Exception { putAllGetAll(RestartMode.RESTART_CRD , 2, 0, 0, 64, null, GET, PUT); @@ -77,6 +91,7 @@ public void testPutAllGetAll_Server_Backups0_RestartCoordinator_GetPut() throws /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_SingleNode_RestartCoordinator_GetPut_Persistence() throws Exception { persistence = true; @@ -87,6 +102,8 @@ public void testPutAllGetAll_SingleNode_RestartCoordinator_GetPut_Persistence() /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testUpdate_N_Objects_Servers_Backups0__PutGet_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -97,8 +114,9 @@ public void testUpdate_N_Objects_Servers_Backups0__PutGet_CoordinatorFails_Persi /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testUpdate_N_Objects_SingleNode__PutGet_CoordinatorFails() throws Exception { - updateNObjectsTest(7, 1, 0, 0, 1, DFLT_TEST_TIME, null, GET, PUT, RestartMode.RESTART_CRD); } @@ -106,6 +124,7 @@ public void testUpdate_N_Objects_SingleNode__PutGet_CoordinatorFails() throws Ex /** * @throws Exception If failed. */ + @Test public void testCoordinatorFailureSimplePessimisticTxPutGet() throws Exception { coordinatorFailureSimple(PESSIMISTIC, REPEATABLE_READ, GET, PUT); } @@ -113,6 +132,7 @@ public void testCoordinatorFailureSimplePessimisticTxPutGet() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReadInProgressCoordinatorFailsSimple_FromClientPutGet() throws Exception { readInProgressCoordinatorFailsSimple(true, null, GET, PUT); } @@ -120,6 +140,7 @@ public void testReadInProgressCoordinatorFailsSimple_FromClientPutGet() throws E /** * @throws Exception If failed. */ + @Test public void testCoordinatorChangeActiveQueryClientFails_Simple() throws Exception { checkCoordinatorChangeActiveQueryClientFails_Simple(null, GET, PUT); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractTest.java index c191849fb7f91..3ef55b30774a0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractTest.java @@ -17,7 +17,6 @@ package org.apache.ignite.internal.processors.cache.mvcc; -import java.math.BigDecimal; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; @@ -40,6 +39,7 @@ import java.util.concurrent.locks.ReentrantReadWriteLock; import java.util.stream.Collectors; import javax.cache.Cache; +import javax.cache.CacheException; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; @@ -63,14 +63,20 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteFutureCancelledCheckedException; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.processors.cache.GridCacheContext; import 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.query.IgniteSQLException; +import org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException; +import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException; import org.apache.ignite.internal.util.future.GridCompoundIdentityFuture; import org.apache.ignite.internal.util.lang.GridAbsPredicate; +import org.apache.ignite.internal.util.lang.GridCloseableIterator; import org.apache.ignite.internal.util.lang.GridInClosure3; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; @@ -82,13 +88,11 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionException; import org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; @@ -106,9 +110,6 @@ * */ public abstract class CacheMvccAbstractTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ protected static final ObjectCodec INTEGER_CODEC = new IntegerCodec(); @@ -166,8 +167,6 @@ public abstract class CacheMvccAbstractTest extends GridCommonAbstractTest { if (disableScheduledVacuum) cfg.setMvccVacuumFrequency(Integer.MAX_VALUE); - 
((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (testSpi) cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); @@ -228,8 +227,7 @@ public abstract class CacheMvccAbstractTest extends GridCommonAbstractTest { persistence = false; try { - if(disableScheduledVacuum) - verifyOldVersionsCleaned(); + verifyOldVersionsCleaned(); verifyCoordinatorInternalState(); } @@ -681,19 +679,19 @@ else if (writeMode == WriteMode.DML) } case SQL_SUM: { - BigDecimal sum; + Long sum; if (rnd.nextBoolean()) { List> res = cache.cache.query(sumQry).getAll(); assertEquals(1, res.size()); - sum = (BigDecimal)res.get(0).get(0); + sum = (Long)res.get(0).get(0); } else { Map res = readAllByMode(cache.cache, keys, readMode, ACCOUNT_CODEC); - sum = (BigDecimal)((Map.Entry)res.entrySet().iterator().next()).getValue(); + sum = (Long)((Map.Entry)res.entrySet().iterator().next()).getValue(); } assertEquals(ACCOUNT_START_VAL * ACCOUNTS, sum.intValue()); @@ -939,6 +937,8 @@ protected void putAllGetAll( tx.commit(); + v++; + first = false; } @@ -946,10 +946,8 @@ protected void putAllGetAll( Map res = readAllByMode(cache.cache, keys, readMode, INTEGER_CODEC); for (Integer k : keys) - assertEquals("key=" + k, v, (Object)res.get(k)); + assertEquals("key=" + k, v - 1, (Object)res.get(k)); } - - v++; } catch (Exception e) { handleTxException(e); @@ -1361,6 +1359,13 @@ final void readWriteTest( } }, readers, "reader"); + GridTestUtils.runAsync(() -> { + while (System.currentTimeMillis() < stopTime) + doSleep(1000); + + stop.set(true); + }); + while (System.currentTimeMillis() < stopTime && !stop.get()) { Thread.sleep(1000); @@ -1402,8 +1407,10 @@ final void readWriteTest( Ignite srv = startGrid(idx); + cache0 = new TestCache(srv.cache(DEFAULT_CACHE_NAME)); + synchronized (caches) { - caches.set(idx, new TestCache(srv.cache(DEFAULT_CACHE_NAME))); + caches.set(idx, cache0); } awaitPartitionMapExchange(); @@ -1417,8 +1424,6 @@ final void readWriteTest( } } - stop.set(true); - 
Exception ex = null; try { @@ -1476,8 +1481,64 @@ final CacheConfiguration cacheConfiguration( * @param e Exception. */ protected void handleTxException(Exception e) { - if (log.isTraceEnabled()) - log.trace("Exception during tx execution: " + X.getFullStackTrace(e)); + if (log.isDebugEnabled()) + log.debug("Exception during tx execution: " + X.getFullStackTrace(e)); + + if (X.hasCause(e, IgniteFutureCancelledCheckedException.class)) + return; + + if (X.hasCause(e, ClusterTopologyException.class)) + return; + + if (X.hasCause(e, ClusterTopologyCheckedException.class)) + return; + + if (X.hasCause(e, IgniteTxRollbackCheckedException.class)) + return; + + if (X.hasCause(e, TransactionException.class)) + return; + + if (X.hasCause(e, IgniteTxTimeoutCheckedException.class)) + return; + + if (X.hasCause(e, CacheException.class)) { + CacheException cacheEx = X.cause(e, CacheException.class); + + if (cacheEx != null && cacheEx.getMessage() != null) { + if (cacheEx.getMessage().contains("Data node has left the grid during query execution")) + return; + } + + if (cacheEx != null && cacheEx.getMessage() != null) { + if (cacheEx.getMessage().contains("Query was interrupted.")) + return; + } + + if (cacheEx != null && cacheEx.getMessage() != null) { + if (cacheEx.getMessage().contains("Failed to fetch data from node")) + return; + } + + if (cacheEx != null && cacheEx.getMessage() != null) { + if (cacheEx.getMessage().contains("Failed to send message")) + return; + } + } + + if (X.hasCause(e, IgniteSQLException.class)) { + IgniteSQLException sqlEx = X.cause(e, IgniteSQLException.class); + + if (sqlEx != null && sqlEx.getMessage() != null) { + if (sqlEx.getMessage().contains("Transaction is already completed.")) + return; + + if (sqlEx.getMessage().contains("Cannot serialize transaction due to write conflict")) + return; + } + } + + fail("Unexpected tx exception. 
" + X.getFullStackTrace(e)); } /** @@ -1533,12 +1594,21 @@ final void verifyCoordinatorInternalState() throws Exception { * @throws Exception If failed. */ protected void verifyOldVersionsCleaned() throws Exception { - runVacuumSync(); + boolean retry; + + try { + runVacuumSync(); + + // Check versions. + retry = !checkOldVersions(false); + } + catch (Exception e) { + U.warn(log(), "Failed to perform vacuum, will retry.", e); - // Check versions. - boolean cleaned = checkOldVersions(false); + retry = true; + } - if (!cleaned) { // Retry on a stable topology with a newer snapshot. + if (retry) { // Retry on a stable topology with a newer snapshot. awaitPartitionMapExchange(); runVacuumSync(); @@ -1559,24 +1629,23 @@ private boolean checkOldVersions(boolean failIfNotCleaned) throws IgniteCheckedE for (IgniteCacheProxy cache : ((IgniteKernal)node).caches()) { GridCacheContext cctx = cache.context(); - if (!cctx.userCache() || !cctx.group().mvccEnabled()) + if (!cctx.userCache() || !cctx.group().mvccEnabled() || F.isEmpty(cctx.group().caches()) || cctx.shared().closed(cctx)) continue; - for (Iterator it = cache.withKeepBinary().iterator(); it.hasNext(); ) { - IgniteBiTuple entry = (IgniteBiTuple)it.next(); - - KeyCacheObject key = cctx.toCacheKeyObject(entry.getKey()); + try (GridCloseableIterator it = (GridCloseableIterator)cache.withKeepBinary().iterator()) { + while (it.hasNext()) { + IgniteBiTuple entry = (IgniteBiTuple)it.next(); - List> vers = cctx.offheap().mvccAllVersions(cctx, key) - .stream().filter(t -> t.get1() != null).collect(Collectors.toList()); + KeyCacheObject key = cctx.toCacheKeyObject(entry.getKey()); - if (vers.size() > 1) { - if (failIfNotCleaned) - fail("[key=" + key.value(null, false) + "; vers=" + vers + ']'); - else { - U.closeQuiet((AutoCloseable)it); + List> vers = cctx.offheap().mvccAllVersions(cctx, key) + .stream().filter(t -> t.get1() != null).collect(Collectors.toList()); - return false; + if (vers.size() > 1) { + if (failIfNotCleaned) 
+ fail("[key=" + key.value(null, false) + "; vers=" + vers + ']'); + else + return false; } } } @@ -1604,10 +1673,6 @@ private void runVacuumSync() throws IgniteCheckedException { assert GridTestUtils.getFieldValue(crd, "txLog") != null; - Throwable vacuumError = crd.vacuumError(); - - assertNull(X.getFullStackTrace(vacuumError), vacuumError); - fut.add(crd.runVacuum()); } } @@ -2172,7 +2237,10 @@ enum ReadMode { SQL, /** */ - SQL_SUM + SQL_SUM, + + /** */ + INVOKE } /** @@ -2183,7 +2251,10 @@ enum WriteMode { DML, /** */ - PUT + PUT, + + /** */ + INVOKE } /** diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccClusterRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccClusterRestartTest.java index 5cabffce784fa..0c3e73c14aa4d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccClusterRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccClusterRestartTest.java @@ -27,11 +27,11 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -41,18 +41,14 @@ /** * */ +@RunWith(JUnit4.class) public class CacheMvccClusterRestartTest extends GridCommonAbstractTest { - /** */ - 
private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); cfg.setConsistentId(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - DataStorageConfiguration storageCfg = new DataStorageConfiguration(); storageCfg.setWalMode(WALMode.LOG_ONLY); @@ -81,8 +77,6 @@ public class CacheMvccClusterRestartTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-9394"); - cleanPersistenceDir(); super.beforeTest(); @@ -100,6 +94,7 @@ public class CacheMvccClusterRestartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testRestart1() throws Exception { restart1(3, 3); } @@ -107,6 +102,7 @@ public void testRestart1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRestart2() throws Exception { restart1(1, 3); } @@ -114,6 +110,7 @@ public void testRestart2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRestart3() throws Exception { restart1(3, 1); } @@ -128,7 +125,7 @@ private void restart1(int srvBefore, int srvAfter) throws Exception { IgniteCache cache = srv0.createCache(cacheConfiguration()); - Set keys = new HashSet<>(primaryKeys(cache, 1, 0)); + Set keys = new HashSet<>(primaryKeys(cache, 100, 0)); try (Transaction tx = srv0.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { for (Integer k : keys) diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccConfigurationValidationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccConfigurationValidationTest.java index 8b704688663c2..92ec8122ccaec 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccConfigurationValidationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccConfigurationValidationTest.java @@ -17,54 +17,47 @@ package org.apache.ignite.internal.processors.cache.mvcc; +import java.io.Serializable; +import java.util.UUID; import java.util.concurrent.Callable; -import java.util.concurrent.TimeUnit; +import javax.cache.Cache; import javax.cache.CacheException; import javax.cache.configuration.Factory; -import javax.cache.configuration.FactoryBuilder; -import javax.cache.expiry.CreatedExpiryPolicy; -import javax.cache.expiry.Duration; import javax.cache.expiry.ExpiryPolicy; + import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.cache.CacheInterceptorAdapter; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheInterceptor; import org.apache.ignite.cache.CacheMode; -import org.apache.ignite.cache.store.CacheStore; -import org.apache.ignite.cache.store.CacheStoreReadFromBackupTest; import org.apache.ignite.configuration.CacheConfiguration; import 
org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testframework.configvariations.ConfigVariations; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.LOCAL; +import static org.apache.ignite.configuration.DataPageEvictionMode.RANDOM_2_LRU; +import static org.apache.ignite.configuration.DataPageEvictionMode.RANDOM_LRU; /** * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheMvccConfigurationValidationTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(gridName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { 
stopAllGrids(); @@ -75,14 +68,17 @@ public class CacheMvccConfigurationValidationTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @SuppressWarnings("ThrowableNotThrown") + @Test public void testMvccModeMismatchForGroup1() throws Exception { final Ignite node = startGrid(0); node.createCache(new CacheConfiguration("cache1").setGroupName("grp1").setAtomicityMode(ATOMIC)); GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node.createCache(new CacheConfiguration("cache2").setGroupName("grp1").setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); + @Override public Void call() { + node.createCache( + new CacheConfiguration("cache2").setGroupName("grp1").setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); return null; } @@ -94,25 +90,31 @@ public void testMvccModeMismatchForGroup1() throws Exception { /** * @throws Exception If failed. */ + @SuppressWarnings("ThrowableNotThrown") + @Test public void testMvccModeMismatchForGroup2() throws Exception { final Ignite node = startGrid(0); - node.createCache(new CacheConfiguration("cache1").setGroupName("grp1").setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); + node.createCache( + new CacheConfiguration("cache1").setGroupName("grp1").setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { + @Override public Void call() { node.createCache(new CacheConfiguration("cache2").setGroupName("grp1").setAtomicityMode(ATOMIC)); return null; } }, CacheException.class, null); - node.createCache(new CacheConfiguration("cache2").setGroupName("grp1").setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); + node.createCache( + new CacheConfiguration("cache2").setGroupName("grp1").setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); } /** * @throws Exception If failed. 
*/ + @SuppressWarnings("ThrowableNotThrown") + @Test public void testMvccLocalCacheDisabled() throws Exception { final Ignite node1 = startGrid(1); final Ignite node2 = startGrid(2); @@ -125,7 +127,7 @@ public void testMvccLocalCacheDisabled() throws Exception { cache1.put(2,2); GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { + @Override public Void call() { node1.createCache(new CacheConfiguration("cache2").setCacheMode(CacheMode.LOCAL) .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); @@ -144,108 +146,8 @@ public void testMvccLocalCacheDisabled() throws Exception { /** * @throws Exception If failed. */ - public void testMvccExpiredPolicyCacheDisabled() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-8640"); - - final Ignite node1 = startGrid(1); - final Ignite node2 = startGrid(2); - - IgniteCache cache1 = node1.createCache(new CacheConfiguration("cache1") - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - cache1.put(1,1); - cache1.put(2,2); - cache1.put(2,2); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node1.createCache(new CacheConfiguration("cache2") - .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 1))) - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - return null; - } - }, CacheException.class, null); - - IgniteCache cache3 = node2.createCache(new CacheConfiguration("cache3") - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - cache3.put(1, 1); - cache3.put(2, 2); - cache3.put(3, 3); - } - - /** - * @throws Exception If failed. 
- */ - public void testMvccThirdPartyStoreCacheDisabled() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-8640"); - - final Ignite node1 = startGrid(1); - final Ignite node2 = startGrid(2); - - IgniteCache cache1 = node1.createCache(new CacheConfiguration("cache1") - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - cache1.put(1,1); - cache1.put(2,2); - cache1.put(2,2); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node1.createCache(new CacheConfiguration("cache2") - .setCacheStoreFactory(FactoryBuilder.factoryOf(CacheStoreReadFromBackupTest.TestStore.class)) - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - return null; - } - }, CacheException.class, null); - - IgniteCache cache3 = node2.createCache(new CacheConfiguration("cache3") - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - cache3.put(1, 1); - cache3.put(2, 2); - cache3.put(3, 3); - } - - /** - * @throws Exception If failed. - */ - public void testMvccInterceptorCacheDisabled() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-8640"); - - final Ignite node1 = startGrid(1); - final Ignite node2 = startGrid(2); - - IgniteCache cache1 = node1.createCache(new CacheConfiguration("cache1") - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - cache1.put(1,1); - cache1.put(2,2); - cache1.put(2,2); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node1.createCache(new CacheConfiguration("cache2") - .setInterceptor(new ConfigVariations.NoopInterceptor()) - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - return null; - } - }, CacheException.class, null); - - IgniteCache cache3 = node2.createCache(new CacheConfiguration("cache3") - .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); - - cache3.put(1, 1); - cache3.put(2, 2); - cache3.put(3, 3); - } - - /** - * @throws Exception If failed. 
- */ + @SuppressWarnings("ThrowableNotThrown") + @Test public void testNodeRestartWithCacheModeChangedTxToMvcc() throws Exception { cleanPersistenceDir(); @@ -292,6 +194,8 @@ public void testNodeRestartWithCacheModeChangedTxToMvcc() throws Exception { /** * @throws Exception If failed. */ + @SuppressWarnings("ThrowableNotThrown") + @Test public void testNodeRestartWithCacheModeChangedMvccToTx() throws Exception { cleanPersistenceDir(); @@ -299,6 +203,7 @@ public void testNodeRestartWithCacheModeChangedMvccToTx() throws Exception { DataStorageConfiguration storageCfg = new DataStorageConfiguration(); DataRegionConfiguration regionCfg = new DataRegionConfiguration(); regionCfg.setPersistenceEnabled(true); + regionCfg.setPageEvictionMode(RANDOM_LRU); storageCfg.setDefaultDataRegionConfiguration(regionCfg); IgniteConfiguration cfg = getConfiguration("testGrid"); cfg.setDataStorageConfiguration(storageCfg); @@ -338,87 +243,192 @@ public void testNodeRestartWithCacheModeChangedMvccToTx() throws Exception { /** * @throws Exception If failed. */ - public void testTxCacheWithCacheStore() throws Exception { - checkTransactionalModeConflict("cacheStoreFactory", new TestFactory(), - "Transactional cache may not have a third party cache store when MVCC is enabled."); - } + @Test + public void testMvccInMemoryEvictionDisabled() throws Exception { + final String memRegName = "in-memory-evictions"; - /** - * @throws Exception If failed. - */ - public void testTxCacheWithExpiryPolicy() throws Exception { - checkTransactionalModeConflict("expiryPolicyFactory0", CreatedExpiryPolicy.factoryOf(Duration.FIVE_MINUTES), - "Transactional cache may not have expiry policy when MVCC is enabled."); + // Enable in-memory eviction. 
+ DataRegionConfiguration regionCfg = new DataRegionConfiguration(); + regionCfg.setPersistenceEnabled(false); + regionCfg.setPageEvictionMode(RANDOM_2_LRU); + regionCfg.setName(memRegName); + + DataStorageConfiguration storageCfg = new DataStorageConfiguration(); + storageCfg.setDefaultDataRegionConfiguration(regionCfg); + + IgniteConfiguration cfg = getConfiguration("testGrid"); + cfg.setDataStorageConfiguration(storageCfg); + + Ignite node = startGrid(cfg); + + CacheConfiguration ccfg1 = new CacheConfiguration("test1") + .setAtomicityMode(TRANSACTIONAL_SNAPSHOT) + .setDataRegionName(memRegName); + + try { + node.createCache(ccfg1); + + fail("In memory evictions should be disabled for MVCC caches."); + } + catch (Exception e) { + assertTrue(X.getFullStackTrace(e).contains("Data pages evictions cannot be used with TRANSACTIONAL_SNAPSHOT")); + } } /** + * Test TRANSACTIONAL_SNAPSHOT and near cache. + * * @throws Exception If failed. */ - public void testTxCacheWithInterceptor() throws Exception { - checkTransactionalModeConflict("interceptor", new CacheInterceptorAdapter(), - "Transactional cache may not have an interceptor when MVCC is enabled."); + @SuppressWarnings("unchecked") + @Test + public void testTransactionalSnapshotLimitations() throws Exception { + assertCannotStart( + mvccCacheConfig().setCacheMode(LOCAL), + "LOCAL cache mode cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode" + ); + + assertCannotStart( + mvccCacheConfig().setNearConfiguration(new NearCacheConfiguration<>()), + "near cache cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode" + ); + + assertCannotStart( + mvccCacheConfig().setReadThrough(true), + "readThrough cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode" + ); + + assertCannotStart( + mvccCacheConfig().setWriteThrough(true), + "writeThrough cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode" + ); + + assertCannotStart( + mvccCacheConfig().setWriteBehindEnabled(true), + "writeBehindEnabled cannot be used 
with TRANSACTIONAL_SNAPSHOT atomicity mode" + ); + + assertCannotStart( + mvccCacheConfig().setExpiryPolicyFactory(new TestExpiryPolicyFactory()), + "expiry policy cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode" + ); + + assertCannotStart( + mvccCacheConfig().setInterceptor(new TestCacheInterceptor()), + "interceptor cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode" + ); } /** - * Check that setting specified property conflicts with transactional cache atomicity mode. - * @param propName Property name. - * @param obj Property value. - * @param errMsg Expected error message. - * @throws IgniteCheckedException if failed. + * Checks if passed in {@code 'Throwable'} has given class in {@code 'cause'} hierarchy + * including that throwable itself and it contains passed message. + *

    + * Note that this method includes {@link Throwable#getSuppressed()} + * in the check. + * + * @param t Throwable to check (if {@code null}, {@code false} is returned). + * @param cls Cause class to check (if {@code null}, {@code false} is returned). + * @param msg Message to check. + * @return {@code True} if one of the causing exceptions is an instance of the passed in class + * and it contains the passed message, {@code false} otherwise. */ - @SuppressWarnings("ThrowableNotThrown") - private void checkTransactionalModeConflict(String propName, Object obj, String errMsg) - throws Exception { - final String setterName = "set" + propName.substring(0, 1).toUpperCase() + propName.substring(1); + private boolean hasCauseWithMessage(@Nullable Throwable t, Class cls, String msg) { + if (t == null) + return false; + + assert cls != null; + + for (Throwable th = t; th != null; th = th.getCause()) { + if (cls.isAssignableFrom(th.getClass()) && th.getMessage() != null && th.getMessage().contains(msg)) + return true; + + for (Throwable n : th.getSuppressed()) { + if (hasCauseWithMessage(n, cls, msg)) + return true; + } - try (final Ignite node = startGrid(0)) { - final CacheConfiguration cfg = new TestConfiguration("cache"); + if (th.getCause() == th) + break; + } - cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + return false; + } - U.invoke(TestConfiguration.class, cfg, setterName, obj); + /** + * Make sure cache cannot be started with the given configuration. + * + * @param ccfg Cache configuration. + * @param msg Message. + * @throws Exception If failed. 
+ */ + @SuppressWarnings("unchecked") + private void assertCannotStart(CacheConfiguration ccfg, String msg) throws Exception { + Ignite node = startGrid(0); - GridTestUtils.assertThrows(log, new Callable() { - @SuppressWarnings("unchecked") - @Override public Void call() { - node.getOrCreateCache(cfg); + try { + try { + node.getOrCreateCache(ccfg); - return null; + fail("Cache should not start."); + } + catch (Exception e) { + if (msg != null) { + assert e.getMessage() != null : "Error message is null"; + assertTrue(hasCauseWithMessage(e, IgniteCheckedException.class, msg)); } - }, IgniteCheckedException.class, errMsg); + } + } + finally { + stopAllGrids(); } } /** - * Dummy class to overcome ambiguous method name "setExpiryPolicyFactory". + * @return MVCC-enabled cache configuration. */ - private final static class TestConfiguration extends CacheConfiguration { - /** - * - */ - TestConfiguration(String cacheName) { - super(cacheName); - } + private static CacheConfiguration mvccCacheConfig() { + return new CacheConfiguration().setName(DEFAULT_CACHE_NAME + UUID.randomUUID()) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); + } - /** - * - */ - @SuppressWarnings("unused") - public void setExpiryPolicyFactory0(Factory plcFactory) { - super.setExpiryPolicyFactory(plcFactory); + /** + * Test expiry policy. + */ + private static class TestExpiryPolicyFactory implements Factory, Serializable { + /** {@inheritDoc} */ + @Override public ExpiryPolicy create() { + return null; } } /** - * + * Test cache interceptor. */ - private static class TestFactory implements Factory { - /** Serial version uid. 
*/ - private static final long serialVersionUID = 0L; + private static class TestCacheInterceptor implements CacheInterceptor, Serializable { + /** {@inheritDoc} */ + @Nullable + @Override public Object onGet(Object key, @Nullable Object val) { + return null; + } /** {@inheritDoc} */ - @Override public CacheStore create() { + @Nullable @Override public Object onBeforePut(Cache.Entry entry, Object newVal) { return null; } + + /** {@inheritDoc} */ + @Override public void onAfterPut(Cache.Entry entry) { + // No-op. + } + + /** {@inheritDoc} */ + @Nullable @Override public IgniteBiTuple onBeforeRemove(Cache.Entry entry) { + return null; + } + + /** {@inheritDoc} */ + @Override public void onAfterRemove(Cache.Entry entry) { + // No-op. + } } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccIteratorWithConcurrentTransactionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccIteratorWithConcurrentTransactionTest.java index 90c5b6e737034..db5e151c26d3e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccIteratorWithConcurrentTransactionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccIteratorWithConcurrentTransactionTest.java @@ -25,14 +25,19 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.util.lang.IgniteClosure2X; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheMvccIteratorWithConcurrentTransactionTest extends CacheMvccAbstractFeatureTest { /** * @throws Exception if failed. 
*/ + @Test public void testScanQuery() throws Exception { doTestConsistency(clo); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccLocalEntriesWithConcurrentTransactionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccLocalEntriesWithConcurrentTransactionTest.java index f4c9781744f05..0040093db9c31 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccLocalEntriesWithConcurrentTransactionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccLocalEntriesWithConcurrentTransactionTest.java @@ -26,14 +26,19 @@ import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.internal.util.lang.IgniteClosure2X; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheMvccLocalEntriesWithConcurrentTransactionTest extends CacheMvccAbstractFeatureTest { /** * @throws Exception if failed. 
*/ + @Test public void testLocalEntries() throws Exception { doTestConsistency(clo); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccOperationChecksTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccOperationChecksTest.java index 5aedf17089d86..9a3d44be54264 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccOperationChecksTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccOperationChecksTest.java @@ -33,6 +33,9 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -40,6 +43,7 @@ /** * */ +@RunWith(JUnit4.class) public class CacheMvccOperationChecksTest extends CacheMvccAbstractTest { /** Empty Class[]. */ private final static Class[] E = new Class[]{}; @@ -62,6 +66,7 @@ public class CacheMvccOperationChecksTest extends CacheMvccAbstractTest { /** * @throws Exception if failed. */ + @Test public void testClearOperationsUnsupported() throws Exception { checkOperationUnsupported("clear", m("Clear"), E); @@ -80,6 +85,7 @@ public void testClearOperationsUnsupported() throws Exception { /** * @throws Exception if failed. */ + @Test public void testLoadOperationsUnsupported() throws Exception { checkOperationUnsupported("loadCache", m("Load"), t(IgniteBiPredicate.class, Object[].class), P, new Object[]{ 1 }); @@ -97,6 +103,7 @@ public void testLoadOperationsUnsupported() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testLockOperationsUnsupported() throws Exception { checkOperationUnsupported("lock", m("Lock"), t(Object.class), 1); @@ -106,14 +113,7 @@ public void testLockOperationsUnsupported() throws Exception { /** * @throws Exception if failed. */ - public void testPeekOperationsUnsupported() throws Exception { - checkOperationUnsupported("localPeek", m("Peek"), t(Object.class, CachePeekMode[].class), 1, - new CachePeekMode[]{CachePeekMode.NEAR}); - } - - /** - * @throws Exception if failed. - */ + @Test public void testEvictOperationsUnsupported() throws Exception { checkOperationUnsupported("localEvict", m("Evict"), t(Collection.class), Collections.singleton(1)); } @@ -121,6 +121,7 @@ public void testEvictOperationsUnsupported() throws Exception { /** * @throws Exception if failed. */ + @Test public void testWithExpiryPolicyUnsupported() throws Exception { checkOperationUnsupported("withExpiryPolicy", m("withExpiryPolicy"), t(ExpiryPolicy.class), EternalExpiryPolicy.factoryOf().create()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedCoordinatorFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedCoordinatorFailoverTest.java index 3ea1c5bb35317..dcaf720faa40b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedCoordinatorFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedCoordinatorFailoverTest.java @@ -17,8 +17,16 @@ package org.apache.ignite.internal.processors.cache.mvcc; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import static 
org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.GET; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SCAN; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.WriteMode.PUT; @@ -28,6 +36,7 @@ /** * Coordinator failover test for partitioned caches. */ +@RunWith(JUnit4.class) public class CacheMvccPartitionedCoordinatorFailoverTest extends CacheMvccAbstractCoordinatorFailoverTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -37,6 +46,8 @@ public class CacheMvccPartitionedCoordinatorFailoverTest extends CacheMvccAbstra /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testAccountsTxGet_ClientServer_Backups2_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -47,6 +58,8 @@ public void testAccountsTxGet_ClientServer_Backups2_CoordinatorFails_Persistence /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testAccountsTxGet_Server_Backups1_CoordinatorFails() throws Exception { accountsTxReadAll(2, 0, 1, DFLT_PARTITION_COUNT, null, true, GET, PUT, DFLT_TEST_TIME, RestartMode.RESTART_CRD); @@ -55,6 +68,8 @@ public void testAccountsTxGet_Server_Backups1_CoordinatorFails() throws Exceptio /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10750") + @Test public void testAccountsTxScan_ClientServer_Backups2_CoordinatorFails() throws Exception { accountsTxReadAll(4, 2, 2, DFLT_PARTITION_COUNT, null, true, SCAN, PUT, DFLT_TEST_TIME, RestartMode.RESTART_CRD); @@ -63,6 +78,8 @@ public void testAccountsTxScan_ClientServer_Backups2_CoordinatorFails() throws E /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testAccountsTxScan_Server_Backups1_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -73,6 +90,7 @@ public void testAccountsTxScan_Server_Backups1_CoordinatorFails_Persistence() th /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_ClientServer_Backups2_RestartCoordinator_GetPut() throws Exception { putAllGetAll(RestartMode.RESTART_CRD, 4, 2, 2, DFLT_PARTITION_COUNT, null, GET, PUT); @@ -81,6 +99,7 @@ public void testPutAllGetAll_ClientServer_Backups2_RestartCoordinator_GetPut() t /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_ClientServer_Backups1_RestartCoordinator_GetPut_Persistence() throws Exception { persistence = true; @@ -91,6 +110,8 @@ public void testPutAllGetAll_ClientServer_Backups1_RestartCoordinator_GetPut_Per /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testUpdate_N_Objects_ClientServer_Backups1_PutGet_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -101,6 +122,8 @@ public void testUpdate_N_Objects_ClientServer_Backups1_PutGet_CoordinatorFails_P /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10750") + @Test public void testUpdate_N_Objects_ClientServer_Backups1__PutGet_CoordinatorFails() throws Exception { updateNObjectsTest(10, 3, 2, 1, DFLT_PARTITION_COUNT, DFLT_TEST_TIME, null, GET, PUT, RestartMode.RESTART_CRD); @@ -110,6 +133,8 @@ public void testUpdate_N_Objects_ClientServer_Backups1__PutGet_CoordinatorFails( /** * @throws Exception If failed. 
     */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752")
+    @Test
     public void testGetReadInProgressCoordinatorFails() throws Exception {
         readInProgressCoordinatorFails(false, false, PESSIMISTIC, REPEATABLE_READ, GET, PUT, null);
     }
@@ -117,6 +142,7 @@ public void testGetReadInProgressCoordinatorFails() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetReadInsideTxInProgressCoordinatorFails() throws Exception {
         readInProgressCoordinatorFails(false, true, PESSIMISTIC, REPEATABLE_READ, GET, PUT, null);
     }
@@ -124,6 +150,7 @@ public void testGetReadInsideTxInProgressCoordinatorFails() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGetReadInProgressCoordinatorFails_ReadDelay() throws Exception {
         readInProgressCoordinatorFails(true, false, PESSIMISTIC, REPEATABLE_READ, GET, PUT, null);
     }
@@ -131,6 +158,8 @@ public void testGetReadInProgressCoordinatorFails_ReadDelay() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752")
+    @Test
     public void testGetReadInsideTxInProgressCoordinatorFails_ReadDelay() throws Exception {
         readInProgressCoordinatorFails(true, true, PESSIMISTIC, REPEATABLE_READ, GET, PUT, null);
     }
@@ -138,7 +167,40 @@ public void testGetReadInsideTxInProgressCoordinatorFails_ReadDelay() throws Exc
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testReadInProgressCoordinatorFailsSimple_FromServerPutGet() throws Exception {
         readInProgressCoordinatorFailsSimple(false, null, GET, PUT);
     }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testActivateDeactivateCluster() throws Exception {
+        disableScheduledVacuum = true;
+        persistence = true;
+
+        final int DATA_NODES = 3;
+
+        // Do not use startMultithreaded here.
+ startGrids(DATA_NODES); + + Ignite near = grid(DATA_NODES - 1); + + CacheConfiguration ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, DATA_NODES - 1, DFLT_PARTITION_COUNT); + + near.cluster().active(true); + + IgniteCache cache = near.createCache(ccfg); + + cache.put(1, 1); + + near.cluster().active(false); + + stopGrid(0); + + near.cluster().active(true); + + assertEquals(1, cache.get(1)); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorLazyStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorLazyStartTest.java index 064e7bbd2a7af..00439b5acc9c6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorLazyStartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorLazyStartTest.java @@ -23,11 +23,15 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for a lazy MVCC processor start. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheMvccProcessorLazyStartTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -37,6 +41,7 @@ public class CacheMvccProcessorLazyStartTest extends CacheMvccAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPreconfiguredCacheMvccNotStarted() throws Exception { CacheConfiguration ccfg = cacheConfiguration(CacheMode.PARTITIONED, CacheWriteSynchronizationMode.FULL_SYNC, 0, 1); ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); @@ -60,6 +65,7 @@ public void testPreconfiguredCacheMvccNotStarted() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPreconfiguredCacheMvccStarted() throws Exception { CacheConfiguration ccfg = cacheConfiguration(CacheMode.PARTITIONED, CacheWriteSynchronizationMode.FULL_SYNC, 0, 1); @@ -82,6 +88,7 @@ public void testPreconfiguredCacheMvccStarted() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMvccRestartedWithDynamicCache() throws Exception { persistence = true; @@ -129,6 +136,7 @@ public void testMvccRestartedWithDynamicCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMvccStartedWithDynamicCache() throws Exception { IgniteEx node1 = startGrid(1); IgniteEx node2 = startGrid(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorTest.java index dc902fc99fb14..657bc964875e1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccProcessorTest.java @@ -22,12 +22,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxState; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class CacheMvccProcessorTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -37,6 +41,7 @@ public class CacheMvccProcessorTest extends CacheMvccAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTreeWithPersistence() throws Exception { persistence = true; @@ -46,8 +51,9 @@ public void testTreeWithPersistence() throws Exception { /** * @throws Exception If failed. 
     */
+    @Test
     public void testTreeWithoutPersistence() throws Exception {
-        persistence = true;
+        persistence = false;

         checkTreeOperations();
     }
@@ -66,12 +72,18 @@ private void checkTreeOperations() throws Exception {

         assertEquals(TxState.NA, mvccProcessor.state(new MvccVersionImpl(1, 1, MvccUtils.MVCC_OP_COUNTER_NA)));

-        mvccProcessor.updateState(new MvccVersionImpl(1, 1, MvccUtils.MVCC_OP_COUNTER_NA), TxState.PREPARED);
-        mvccProcessor.updateState(new MvccVersionImpl(1, 2, MvccUtils.MVCC_OP_COUNTER_NA), TxState.PREPARED);
-        mvccProcessor.updateState(new MvccVersionImpl(1, 3, MvccUtils.MVCC_OP_COUNTER_NA), TxState.COMMITTED);
-        mvccProcessor.updateState(new MvccVersionImpl(1, 4, MvccUtils.MVCC_OP_COUNTER_NA), TxState.ABORTED);
-        mvccProcessor.updateState(new MvccVersionImpl(1, 5, MvccUtils.MVCC_OP_COUNTER_NA), TxState.ABORTED);
-        mvccProcessor.updateState(new MvccVersionImpl(1, 6, MvccUtils.MVCC_OP_COUNTER_NA), TxState.PREPARED);
+        grid.context().cache().context().database().checkpointReadLock();
+        try {
+            mvccProcessor.updateState(new MvccVersionImpl(1, 1, MvccUtils.MVCC_OP_COUNTER_NA), TxState.PREPARED);
+            mvccProcessor.updateState(new MvccVersionImpl(1, 2, MvccUtils.MVCC_OP_COUNTER_NA), TxState.PREPARED);
+            mvccProcessor.updateState(new MvccVersionImpl(1, 3, MvccUtils.MVCC_OP_COUNTER_NA), TxState.COMMITTED);
+            mvccProcessor.updateState(new MvccVersionImpl(1, 4, MvccUtils.MVCC_OP_COUNTER_NA), TxState.ABORTED);
+            mvccProcessor.updateState(new MvccVersionImpl(1, 5, MvccUtils.MVCC_OP_COUNTER_NA), TxState.ABORTED);
+            mvccProcessor.updateState(new MvccVersionImpl(1, 6, MvccUtils.MVCC_OP_COUNTER_NA), TxState.PREPARED);
+        }
+        finally {
+            grid.context().cache().context().database().checkpointReadUnlock();
+        }

         if (persistence) {
             stopGrid(0, false);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccRemoteTxOnNearNodeStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccRemoteTxOnNearNodeStartTest.java
new file mode 100644
index 0000000000000..20ca8da19c7fa
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccRemoteTxOnNearNodeStartTest.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.processors.cache.mvcc;
+
+import com.google.common.collect.ImmutableMap;
+import java.util.ArrayList;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.affinity.Affinity;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.transactions.Transaction;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
+
+import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
+import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
+
+/** */
+@RunWith(JUnit4.class)
+public class CacheMvccRemoteTxOnNearNodeStartTest extends CacheMvccAbstractTest {
+    /** {@inheritDoc} */
+    @Override protected CacheMode cacheMode() {
+        return CacheMode.PARTITIONED;
+    }
+
+    /**
+     * Ensures that remote transaction on near node is started
+     * when first request is sent to OWNING partition and second to MOVING partition.
+     * @throws Exception if failed.
+     */
+    @Test
+    public void testRemoteTxOnNearNodeIsStartedIfPartitionIsMoving() throws Exception {
+        startGridsMultiThreaded(3);
+
+        IgniteCache cache = grid(0).getOrCreateCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME)
+            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT)
+            .setCacheMode(cacheMode())
+            .setBackups(1)
+        );
+
+        ArrayList keys = new ArrayList<>();
+
+        Affinity aff = grid(0).affinity(DEFAULT_CACHE_NAME);
+
+        for (int i = 0; i < 100; i++) {
+            if (aff.isPrimary(grid(1).localNode(), i) && aff.isBackup(grid(0).localNode(), i)) {
+                keys.add(i);
+                break;
+            }
+        }
+
+        for (int i = 0; i < 100; i++) {
+            if (aff.isPrimary(grid(1).localNode(), i) && aff.isBackup(grid(2).localNode(), i)) {
+                keys.add(i);
+                break;
+            }
+        }
+
+        assert keys.size() == 2;
+
+        stopGrid(2);
+
+        try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            cache.putAll(ImmutableMap.of(
+                keys.get(0), 0,
+                keys.get(1), 1)
+            );
+
+            tx.commit();
+        }
+
+        // assert transaction was committed without errors
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccScanQueryWithConcurrentTransactionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccScanQueryWithConcurrentTransactionTest.java
index 8af6a5b9c03ab..f618b3ffbad17
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccScanQueryWithConcurrentTransactionTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccScanQueryWithConcurrentTransactionTest.java
@@ -28,14 +28,19 @@
 import org.apache.ignite.internal.util.lang.IgniteClosure2X;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteBiPredicate;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class CacheMvccScanQueryWithConcurrentTransactionTest extends CacheMvccAbstractFeatureTest {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testScanQuery() throws Exception {
         doTestConsistency(clo);
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeWithConcurrentTransactionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeWithConcurrentTransactionTest.java
index 2b8b73ed2f455..54de75552cec3
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeWithConcurrentTransactionTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeWithConcurrentTransactionTest.java
@@ -21,14 +21,19 @@
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.internal.util.lang.IgniteClosure2X;
 import org.apache.ignite.internal.util.typedef.internal.U;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class CacheMvccSizeWithConcurrentTransactionTest extends CacheMvccAbstractFeatureTest {
     /**
      * @throws Exception if failed.
*/ + @Test public void testSize() throws Exception { doTestConsistency(clo); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTransactionsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTransactionsTest.java index af74996b49807..88da2f402faa8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTransactionsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTransactionsTest.java @@ -23,7 +23,6 @@ import java.util.Collections; import java.util.HashMap; import java.util.HashSet; -import java.util.LinkedHashMap; import java.util.LinkedHashSet; import java.util.List; import java.util.Map; @@ -41,12 +40,15 @@ import javax.cache.Cache; import javax.cache.expiry.Duration; import javax.cache.expiry.TouchedExpiryPolicy; +import javax.cache.processor.EntryProcessorException; +import javax.cache.processor.MutableEntry; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteTransactions; +import org.apache.ignite.cache.CacheEntryProcessor; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.cache.query.ScanQuery; @@ -71,6 +73,7 @@ import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccAckRequestTx; import org.apache.ignite.internal.processors.cache.mvcc.msg.MvccSnapshotResponse; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; +import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.lang.GridInClosure3; import org.apache.ignite.internal.util.typedef.CI1; @@ -90,6 +93,10 @@ import 
org.apache.ignite.transactions.TransactionIsolation; import org.jetbrains.annotations.Nullable; import org.junit.Assert; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -106,6 +113,7 @@ * TODO IGNITE-6739: test with cache groups. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheMvccTransactionsTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -115,6 +123,7 @@ public class CacheMvccTransactionsTest extends CacheMvccAbstractTest { /** * @throws Exception if failed. */ + @Test public void testEmptyTx() throws Exception { Ignite node = startGrids(2); @@ -134,6 +143,7 @@ public void testEmptyTx() throws Exception { /** * @throws Exception if failed. */ + @Test public void testImplicitTxOps() throws Exception { checkTxWithAllCaches(new CI1>() { @Override public void apply(IgniteCache cache) { @@ -198,6 +208,34 @@ public void testImplicitTxOps() throws Exception { val = (Integer)checkAndGet(false, cache, key, SCAN, GET); assertNull(val); + + val = cache.getAndPutIfAbsent(key, 1); + + assertNull(val); + + val = (Integer)checkAndGet(false, cache, key, SCAN, GET); + + assertEquals((Integer)1, val); + + val = cache.getAndPutIfAbsent(key, 1); + + assertEquals((Integer)1, val); + + val = (Integer)checkAndGet(false, cache, key, SCAN, GET); + + assertEquals((Integer)1, val); + + assertFalse(cache.remove(key, 2)); + + val = (Integer)checkAndGet(false, cache, key, SCAN, GET); + + assertEquals((Integer)1, val); + + cache.remove(key, 1); + + val = (Integer)checkAndGet(false, cache, key, SCAN, GET); + + assertNull(val); } } catch (Exception e) { @@ -210,6 +248,7 @@ public void testImplicitTxOps() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPessimisticTx1() throws Exception { checkTxWithAllCaches(new CI1>() { @Override public void apply(IgniteCache cache) { @@ -250,6 +289,7 @@ public void testPessimisticTx1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPessimisticTx2() throws Exception { checkTxWithAllCaches(new CI1>() { @Override public void apply(IgniteCache cache) { @@ -282,6 +322,57 @@ public void testPessimisticTx2() throws Exception { }); } + /** + * @throws Exception If failed. + */ + @Test + public void testPessimisticTx3() throws Exception { + checkTxWithAllCaches(new CI1>() { + @Override public void apply(IgniteCache cache) { + try { + IgniteTransactions txs = cache.unwrap(Ignite.class).transactions(); + + List keys = testKeys(cache); + + for (Integer key : keys) { + log.info("Test key: " + key); + + try (Transaction tx = txs.txStart(PESSIMISTIC, REPEATABLE_READ)) { + Integer val = cache.get(key); + + assertNull(val); + + Integer res = cache.invoke(key, new CacheEntryProcessor() { + @Override public Integer process(MutableEntry entry, + Object... arguments) throws EntryProcessorException { + + entry.setValue(key); + + return -key; + } + }); + + assertEquals(Integer.valueOf(-key), res); + + val = (Integer)checkAndGet(true, cache, key, GET, SCAN); + + assertEquals(key, val); + + tx.commit(); + } + + Integer val = (Integer)checkAndGet(false, cache, key, SCAN, GET); + + assertEquals(key, val); + } + } + catch (Exception e) { + throw new IgniteException(e); + } + } + }); + } + /** * @param c Closure to run. * @throws Exception If failed. @@ -323,6 +414,7 @@ private void checkTxWithAllCaches(IgniteInClosure> /** * @throws Exception If failed. */ + @Test public void testWithCacheGroups() throws Exception { Ignite srv0 = startGrid(0); @@ -381,6 +473,7 @@ public void testWithCacheGroups() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheRecreate() throws Exception { cacheRecreate(null); } @@ -388,6 +481,7 @@ public void testCacheRecreate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActiveQueriesCleanup() throws Exception { activeQueriesCleanup(false); } @@ -395,6 +489,7 @@ public void testActiveQueriesCleanup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActiveQueriesCleanupTx() throws Exception { activeQueriesCleanup(true); } @@ -459,6 +554,7 @@ private void activeQueriesCleanup(final boolean tx) throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxReadIsolationSimple() throws Exception { Ignite srv0 = startGrids(4); @@ -545,6 +641,7 @@ public void testTxReadIsolationSimple() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutGetAllSimple() throws Exception { Ignite node = startGrid(0); @@ -604,6 +701,7 @@ public void testPutGetAllSimple() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutRemoveSimple() throws Exception { putRemoveSimple(false); } @@ -611,6 +709,7 @@ public void testPutRemoveSimple() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPutRemoveSimple_LargeKeys() throws Exception { putRemoveSimple(true); } @@ -745,6 +844,7 @@ private void checkValues(Map expVals, IgniteCache accounts) { /** * @throws Exception If failed */ + @Test public void testOperationsSequenceScanConsistency_SingleNode_SinglePartition() throws Exception { operationsSequenceConsistency(1, 0, 0, 1, SCAN); } @@ -1820,6 +1971,7 @@ public void testOperationsSequenceScanConsistency_SingleNode_SinglePartition() t /** * @throws Exception If failed */ + @Test public void testOperationsSequenceScanConsistency_SingleNode() throws Exception { operationsSequenceConsistency(1, 0, 0, 64, SCAN); } @@ -1827,6 +1979,7 @@ public void testOperationsSequenceScanConsistency_SingleNode() throws Exception /** * @throws Exception If failed */ + @Test public void testOperationsSequenceScanConsistency_ClientServer_Backups0() throws Exception { operationsSequenceConsistency(4, 2, 0, 64, SCAN); } @@ -1834,6 +1987,7 @@ public void testOperationsSequenceScanConsistency_ClientServer_Backups0() throws /** * @throws Exception If failed */ + @Test public void testOperationsSequenceScanConsistency_ClientServer_Backups1() throws Exception { operationsSequenceConsistency(4, 2, 1, 64, SCAN); } @@ -1841,6 +1995,7 @@ public void testOperationsSequenceScanConsistency_ClientServer_Backups1() throws /** * @throws Exception If failed */ + @Test public void testOperationsSequenceGetConsistency_SingleNode_SinglePartition() throws Exception { operationsSequenceConsistency(1, 0, 0, 1, GET); } @@ -1848,6 +2003,7 @@ public void testOperationsSequenceGetConsistency_SingleNode_SinglePartition() th /** * @throws Exception If failed */ + @Test public void testOperationsSequenceGetConsistency_SingleNode() throws Exception { operationsSequenceConsistency(1, 0, 0, 64, GET); } @@ -1855,6 +2011,7 @@ public void testOperationsSequenceGetConsistency_SingleNode() throws Exception { /** * @throws Exception If failed */ + @Test public void 
testOperationsSequenceGetConsistency_ClientServer_Backups0() throws Exception { operationsSequenceConsistency(4, 2, 0, 64, GET); } @@ -1862,6 +2019,7 @@ public void testOperationsSequenceGetConsistency_ClientServer_Backups0() throws /** * @throws Exception If failed */ + @Test public void testOperationsSequenceGetConsistency_ClientServer_Backups1() throws Exception { operationsSequenceConsistency(4, 2, 1, 64, GET); } @@ -2009,11 +2167,11 @@ private void operationsSequenceConsistency( } /** - * TODO IGNITE-5935 enable when recovery is implemented. - * * @throws Exception If failed. */ - public void _testNodesRestartNoHang() throws Exception { + @Ignore("https://issues.apache.org/jira/browse/IGNITE-5935") + @Test + public void testNodesRestartNoHang() throws Exception { final int srvs = 4; final int clients = 4; final int writers = 6; @@ -2128,6 +2286,7 @@ public void _testNodesRestartNoHang() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActiveQueryCleanupOnNodeFailure() throws Exception { testSpi = true; @@ -2175,9 +2334,8 @@ public void testActiveQueryCleanupOnNodeFailure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalanceSimple() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-9451"); - Ignite srv0 = startGrid(0); IgniteCache cache = (IgniteCache)srv0.createCache( @@ -2256,9 +2414,8 @@ public void testRebalanceSimple() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalanceWithRemovedValuesSimple() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-9451"); - Ignite node = startGrid(0); IgniteTransactions txs = node.transactions(); @@ -2310,6 +2467,7 @@ public void testRebalanceWithRemovedValuesSimple() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxPrepareFailureSimplePessimisticTx() throws Exception { testSpi = true; @@ -2383,9 +2541,9 @@ public void testTxPrepareFailureSimplePessimisticTx() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testMvccCoordinatorChangeSimple() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-9722"); - Ignite srv0 = startGrid(0); final List cacheNames = new ArrayList<>(); @@ -2469,6 +2627,7 @@ private void checkPutGet(List cacheNames) { /** * @throws Exception If failed. */ + @Test public void testMvccCoordinatorInfoConsistency() throws Exception { for (int i = 0; i < 4; i++) { startGrid(i); @@ -2501,6 +2660,7 @@ public void testMvccCoordinatorInfoConsistency() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMvccCoordinatorInfoConsistency_Persistence() throws Exception { persistence = true; @@ -2531,6 +2691,7 @@ private void checkCoordinatorsConsistency(@Nullable Integer expNodes) { /** * @throws Exception If failed. */ + @Test public void testGetVersionRequestFailover() throws Exception { final int NODES = 5; @@ -2621,6 +2782,7 @@ public void testGetVersionRequestFailover() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadWithStreamer() throws Exception { startGridsMultiThreaded(5); @@ -2654,6 +2816,7 @@ public void testLoadWithStreamer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdate_N_Objects_SingleNode_SinglePartition_Get() throws Exception { int[] nValues = {3, 5, 10}; @@ -2667,6 +2830,7 @@ public void testUpdate_N_Objects_SingleNode_SinglePartition_Get() throws Excepti /** * @throws Exception If failed. 
*/ + @Test public void testUpdate_N_Objects_SingleNode_Get() throws Exception { int[] nValues = {3, 5, 10}; @@ -2680,6 +2844,7 @@ public void testUpdate_N_Objects_SingleNode_Get() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdate_N_Objects_SingleNode_SinglePartition_Scan() throws Exception { int[] nValues = {3, 5, 10}; @@ -2693,6 +2858,7 @@ public void testUpdate_N_Objects_SingleNode_SinglePartition_Scan() throws Except /** * @throws Exception If failed. */ + @Test public void testUpdate_N_Objects_SingleNode_Scan() throws Exception { int[] nValues = {3, 5, 10}; @@ -2706,6 +2872,8 @@ public void testUpdate_N_Objects_SingleNode_Scan() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10750") + @Test public void testUpdate_N_Objects_ClientServer_Backups2_Get() throws Exception { int[] nValues = {3, 5, 10}; @@ -2719,6 +2887,7 @@ public void testUpdate_N_Objects_ClientServer_Backups2_Get() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdate_N_Objects_ClientServer_Backups1_Scan() throws Exception { int[] nValues = {3, 5, 10}; @@ -2732,6 +2901,8 @@ public void testUpdate_N_Objects_ClientServer_Backups1_Scan() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9470") + @Test public void testImplicitPartsScan_SingleNode_SinglePartition() throws Exception { doImplicitPartsScanTest(1, 0, 0, 1, 10_000); } @@ -2739,6 +2910,8 @@ public void testImplicitPartsScan_SingleNode_SinglePartition() throws Exception /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9470") + @Test public void testImplicitPartsScan_SingleNode() throws Exception { doImplicitPartsScanTest(1, 0, 0, 64, 10_000); } @@ -2746,6 +2919,8 @@ public void testImplicitPartsScan_SingleNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9470") + @Test public void testImplicitPartsScan_ClientServer_Backups0() throws Exception { doImplicitPartsScanTest(4, 2, 0, 64, 10_000); } @@ -2753,6 +2928,8 @@ public void testImplicitPartsScan_ClientServer_Backups0() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9470") + @Test public void testImplicitPartsScan_ClientServer_Backups1() throws Exception { doImplicitPartsScanTest(4, 2, 1, 64, 10_000); } @@ -2760,6 +2937,8 @@ public void testImplicitPartsScan_ClientServer_Backups1() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9470") + @Test public void testImplicitPartsScan_ClientServer_Backups2() throws Exception { doImplicitPartsScanTest(4, 2, 2, 64, 10_000); } @@ -2958,6 +3137,7 @@ private void doImplicitPartsScanTest( /** * @throws IgniteCheckedException If failed. */ + @Test public void testSize() throws Exception { Ignite node = startGrid(0); @@ -3055,6 +3235,34 @@ public void testSize() throws Exception { assertEquals(size, cache.size()); } + // Check rollback create. + for (int i = 0; i < KEYS; i++) { + if (i % 2 == 0) { + final Integer key = i; + + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.put(key, i); + + tx.rollback(); + } + + assertEquals(size, cache.size()); + } + } + + // Check rollback update. + for (int i = 0; i < KEYS; i++) { + final Integer key = i; + + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.put(key, -1); + + tx.rollback(); + } + + assertEquals(size, cache.size()); + } + // Check rollback remove. for (int i = 0; i < KEYS; i++) { final Integer key = i; @@ -3093,6 +3301,7 @@ public void testSize() throws Exception { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testInternalApi() throws Exception { Ignite node = startGrid(0); @@ -3104,7 +3313,7 @@ public void testInternalApi() throws Exception { MvccProcessorImpl crd = mvccProcessor(node); // Start query to prevent cleanup. - IgniteInternalFuture fut = crd.requestSnapshotAsync(); + IgniteInternalFuture fut = crd.requestSnapshotAsync((IgniteInternalTx)null); fut.get(); @@ -3193,9 +3402,9 @@ public void testInternalApi() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311") + @Test public void testExpiration() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-7311"); - final IgniteEx node = startGrid(0); IgniteCache cache = node.createCache(cacheConfiguration(PARTITIONED, FULL_SYNC, 1, 64)); @@ -3248,9 +3457,9 @@ public void testExpiration() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311") + @Test public void testChangeExpireTime() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-7311"); - final IgniteEx node = startGrid(0); IgniteCache cache = node.createCache(cacheConfiguration(PARTITIONED, FULL_SYNC, 1, 64)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxFailoverTest.java new file mode 100644 index 0000000000000..31969092342d4 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxFailoverTest.java @@ -0,0 +1,277 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.mvcc; + +import java.util.concurrent.CyclicBarrier; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteTransactions; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; +import org.apache.ignite.internal.processors.cache.WalStateManager; +import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; +import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Ignore; +import 
org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Check Tx state recovery from WAL. + */ +@RunWith(JUnit4.class) +public class CacheMvccTxFailoverTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + super.afterTest(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName) + .setDataStorageConfiguration(new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setMaxSize(100_000_000L) + .setPersistenceEnabled(true)) + .setWalMode(WALMode.BACKGROUND) + ) + .setMvccVacuumFrequency(Long.MAX_VALUE) + .setCacheConfiguration(cacheConfiguration()); + } + + /** + * @return Cache configuration. + */ + @SuppressWarnings("unchecked") + protected CacheConfiguration cacheConfiguration() { + return defaultCacheConfiguration() + .setNearConfiguration(null) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT) + .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); + } + + /** + * @throws Exception If fails. + */ + @Test + public void testSingleNodeTxMissedRollback() throws Exception { + checkSingleNodeRestart(true, false, true); + } + + /** + * @throws Exception If fails. + */ + @Test + public void testSingleNodeTxMissedRollbackRecoverFromWAL() throws Exception { + checkSingleNodeRestart(true, true, true); + } + + /** + * @throws Exception If fails. + */ + @Test + public void testSingleNodeTxMissedCommit() throws Exception { + checkSingleNodeRestart(false, false, true); + } + + /** + * @throws Exception If fails. 
+ */ + @Test + public void testSingleNodeTxMissedCommitRecoverFromWAL() throws Exception { + checkSingleNodeRestart(false, true, true); + } + + /** + * @throws Exception If fails. + */ + @Test + public void testSingleNodeRollbackedTxRecoverFromWAL() throws Exception { + checkSingleNodeRestart(true, true, false); + } + + /** + * @throws Exception If fails. + */ + @Test + public void testSingleNodeCommitedTxRecoverFromWAL() throws Exception { + checkSingleNodeRestart(false, true, false); + } + + + /** + * @param rollBack If {@code true} then Tx will be rolled back, committed otherwise. + * @param recoverFromWAL If {@code true} then Tx recovery from WAL will be checked, + * binary recovery from latest checkpoint otherwise. + * @param omitTxFinish If {@code true} then unfinished Tx state will be restored as if node fails during commit. + * @throws Exception If fails. + */ + public void checkSingleNodeRestart(boolean rollBack, boolean recoverFromWAL, boolean omitTxFinish) throws Exception { + IgniteEx node = startGrid(0); + + node.cluster().active(true); + + IgniteCache cache = node.getOrCreateCache(DEFAULT_CACHE_NAME); + + cache.put(1, 1); + cache.put(2, 1); + + IgniteTransactions txs = node.transactions(); + + IgniteWriteAheadLogManager wal = node.context().cache().context().wal(); + + if (recoverFromWAL) { + // Force checkpoint. See https://issues.apache.org/jira/browse/IGNITE-10187 for details. + node.context().cache().context().database().waitForCheckpoint(null); + + ((GridCacheDatabaseSharedManager)node.context().cache().context().database()).enableCheckpoints(false).get(); + } + + GridTimeoutProcessor.CancelableTask flushTask = GridTestUtils.getFieldValue(wal, FileWriteAheadLogManager.class, "backgroundFlushSchedule"); + WalStateManager.WALDisableContext wctx = GridTestUtils.getFieldValue(wal, FileWriteAheadLogManager.class, "walDisableContext"); + + // Disable checkpoint and WAL flusher.
+ node.context().timeout().removeTimeoutObject(flushTask); + + try (Transaction tx = txs.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { + assertEquals((Integer)1, cache.get(1)); + cache.put(2, 2); + + flushTask.onTimeout(); // Flush WAL. + + if (!recoverFromWAL) { + // Force checkpoint, then disable. + node.context().cache().context().database().waitForCheckpoint(null); + + ((GridCacheDatabaseSharedManager)node.context().cache().context().database()).enableCheckpoints(false).get(); + } + + if (omitTxFinish) + GridTestUtils.setFieldValue(wctx, "disableWal", true); // Disable WAL. + + if (rollBack) + tx.rollback(); + else + tx.commit(); + } + + stopGrid(0); + + node = startGrid(0); + + node.cluster().active(true); + + cache = node.cache(DEFAULT_CACHE_NAME); + + assertEquals((Integer)1, cache.get(1)); + + if (omitTxFinish || rollBack) + assertEquals((Integer)1, cache.get(2)); // Commit/rollback marker was saved neither in WAL nor in checkpoint. + else + assertEquals((Integer)2, cache.get(2)); + + cache.put(2, 3); + + assertEquals((Integer)3, cache.get(2)); + } + + + /** + * @throws Exception If fails.
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10219") + @Test + public void testLostRollbackOnBackup() throws Exception { + IgniteEx node = startGrid(0); + + startGrid(1); + + node.cluster().active(true); + + final CyclicBarrier barrier = new CyclicBarrier(2); + + GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + try { + barrier.await(); + + stopGrid(1); + + barrier.await(); + + startGrid(1); + + barrier.await(); + } + catch (Exception e) { + e.printStackTrace(); + barrier.reset(); + } + } + }); + + IgniteCache cache = node.getOrCreateCache(DEFAULT_CACHE_NAME); + + Integer key = primaryKey(cache); + + cache.put(key, 0); + + IgniteTransactions txs = node.transactions(); + + try (Transaction tx = txs.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { + assertEquals((Integer)0, cache.get(key)); + + cache.put(key, 1); + + barrier.await(); + + barrier.await(); // Await backup node stop. + + Thread.sleep(1000); + + tx.rollback(); + } + + barrier.await(); + + assertEquals((Integer)0, cache.get(key)); + + cache.put(key, 2); + + assertEquals((Integer)2, cache.get(key)); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccVacuumTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccVacuumTest.java index 8c96b2e4a6a6e..29bb6e7a7f693 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccVacuumTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccVacuumTest.java @@ -25,12 +25,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.worker.GridWorker; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Vacuum test. 
*/ +@RunWith(JUnit4.class) public class CacheMvccVacuumTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -40,6 +44,7 @@ public class CacheMvccVacuumTest extends CacheMvccAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStartStopVacuumInMemory() throws Exception { Ignite node0 = startGrid(0); Ignite node1 = startGrid(1); @@ -70,6 +75,7 @@ public void testStartStopVacuumInMemory() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStartStopVacuumPersistence() throws Exception { persistence = true; @@ -84,6 +90,12 @@ public void testStartStopVacuumPersistence() throws Exception { ensureNoVacuum(node0); ensureNoVacuum(node1); + node1.createCache(new CacheConfiguration<>("test1") + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + + ensureNoVacuum(node0); + ensureNoVacuum(node1); + node1.createCache(new CacheConfiguration<>("test2") .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT)); @@ -125,6 +137,7 @@ public void testStartStopVacuumPersistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testVacuumNotStartedWithoutMvcc() throws Exception { IgniteConfiguration cfg = getConfiguration("grid1"); @@ -136,6 +149,7 @@ public void testVacuumNotStartedWithoutMvcc() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testVacuumNotStartedWithoutMvccPersistence() throws Exception { persistence = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccCachePeekTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccCachePeekTest.java new file mode 100644 index 0000000000000..7bcce6affd38e --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccCachePeekTest.java @@ -0,0 +1,166 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.mvcc; + +import java.util.concurrent.CountDownLatch; +import java.util.stream.Stream; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.CachePeekMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** */ +@RunWith(JUnit4.class) +public class MvccCachePeekTest extends CacheMvccAbstractTest { + /** */ + private interface ThrowingRunnable { + /** */ + void run() throws Exception; + } + + /** */ + private IgniteCache cache; + + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.PARTITIONED; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + startGridsMultiThreaded(3); + } + + /** + * @throws Exception if failed. 
+ */ + @Test + public void testPeek() throws Exception { + doWithCache(this::checkPeekSerial); + doWithCache(this::checkPeekDoesNotSeeAbortedVersions); + doWithCache(this::checkPeekDoesNotSeeActiveVersions); + doWithCache(this::checkPeekOnheap); + doWithCache(this::checkPeekNearCache); + } + + /** */ + private void doWithCache(ThrowingRunnable action) throws Exception { + cache = grid(0).getOrCreateCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME) + .setAtomicityMode(TRANSACTIONAL_SNAPSHOT) + .setBackups(1) + .setCacheMode(cacheMode())); + + try { + action.run(); + } + finally { + cache.destroy(); + } + } + + /** */ + private void checkPeekSerial() throws Exception { + Stream.of(primaryKey(cache), backupKey(cache)).forEach(key -> { + assertNull(cache.localPeek(key)); + + cache.put(key, 1); + + assertEquals(1, cache.localPeek(key)); + + cache.put(key, 2); + + assertEquals(2, cache.localPeek(key)); + }); + } + + /** */ + private void checkPeekDoesNotSeeAbortedVersions() throws Exception { + Integer pk = primaryKey(cache); + + cache.put(pk, 1); + + try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.put(pk, 2); + + tx.rollback(); + } + + assertEquals(1, cache.localPeek(pk)); + } + + /** */ + private void checkPeekDoesNotSeeActiveVersions() throws Exception { + Integer pk = primaryKey(cache); + + cache.put(pk, 1); + + CountDownLatch writeCompleted = new CountDownLatch(1); + CountDownLatch checkCompleted = new CountDownLatch(1); + + IgniteInternalFuture fut = GridTestUtils.runAsync(() -> { + try (Transaction tx = grid(0).transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.put(pk, 2); + + writeCompleted.countDown(); + checkCompleted.await(); + + tx.commit(); + } + + return null; + }); + + writeCompleted.await(); + + assertEquals(1, cache.localPeek(pk)); + + checkCompleted.countDown(); + + fut.get(); + } + + /** */ + private void checkPeekOnheap() throws Exception { + Stream.of(primaryKey(cache), backupKey(cache), 
nearKey(cache)).forEach(key -> { + cache.put(key, 1); + + assertNull(cache.localPeek(key, CachePeekMode.ONHEAP)); + }); + } + + /** */ + private void checkPeekNearCache() throws Exception { + Stream.of(primaryKey(cache), backupKey(cache), nearKey(cache)).forEach(key -> { + cache.put(key, 1); + + assertNull(cache.localPeek(key, CachePeekMode.NEAR)); + }); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUnsupportedTxModesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUnsupportedTxModesTest.java new file mode 100644 index 0000000000000..e1b3e41193796 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccUnsupportedTxModesTest.java @@ -0,0 +1,377 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.mvcc; + +import com.google.common.collect.ImmutableMap; +import java.util.Collections; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheEntryProcessor; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionException; +import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static java.util.Collections.singleton; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; +import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE; + +/** */ +@RunWith(JUnit4.class) +public class MvccUnsupportedTxModesTest extends GridCommonAbstractTest { + /** */ + private static IgniteCache cache; + /** */ + private static final CacheEntryProcessor testEntryProcessor = (entry, arguments) -> null; + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + IgniteEx ign = startGrid(0); + + cache = ign.getOrCreateCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME) + .setAtomicityMode(TRANSACTIONAL_SNAPSHOT)); + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + + super.afterTestsStopped(); + } + + /** */ + @Test + 
public void testGetAndPutIfAbsent() { + checkOperation(() -> cache.getAndPutIfAbsent(1, 1)); + } + + /** */ + @Test + public void testGetAndPutIfAbsentAsync() { + checkOperation(() -> cache.getAndPutIfAbsentAsync(1, 1)); + } + + /** */ + @Test + public void testGet() { + checkOperation(() -> cache.get(1)); + } + + /** */ + @Test + public void testGetAsync() { + checkOperation(() -> cache.getAsync(1)); + } + + /** */ + @Test + public void testGetEntry() { + checkOperation(() -> cache.getEntry(1)); + } + + /** */ + @Test + public void testGetEntryAsync() { + checkOperation(() -> cache.getEntryAsync(1)); + } + + /** */ + @Test + public void testGetAll() { + checkOperation(() -> cache.getAll(singleton(1))); + } + + /** */ + @Test + public void testGetAllAsync() { + checkOperation(() -> cache.getAllAsync(singleton(1))); + } + + /** */ + @Test + public void testGetEntries() { + checkOperation(() -> cache.getEntries(singleton(1))); + } + + /** */ + @Test + public void testGetEntriesAsync() { + checkOperation(() -> cache.getEntriesAsync(singleton(1))); + } + + /** */ + @Test + public void testContainsKey() { + checkOperation(() -> cache.containsKey(1)); + } + + /** */ + @Test + public void testContainsKeyAsync() { + checkOperation(() -> cache.containsKeyAsync(1)); + } + + /** */ + @Test + public void testContainsKeys() { + checkOperation(() -> cache.containsKeys(singleton(1))); + } + + /** */ + @Test + public void testContainsKeysAsync() { + checkOperation(() -> cache.containsKeysAsync(singleton(1))); + } + + /** */ + @Test + public void testPut() { + checkOperation(() -> cache.put(1, 1)); + } + + /** */ + @Test + public void testPutAsync() { + checkOperation(() -> cache.putAsync(1, 1)); + } + + /** */ + @Test + public void testGetAndPut() { + checkOperation(() -> cache.getAndPut(1, 1)); + } + + /** */ + @Test + public void testGetAndPutAsync() { + checkOperation(() -> cache.getAndPutAsync(1, 1)); + } + + /** */ + @Test + public void testPutAll() { + checkOperation(() -> 
cache.putAll(ImmutableMap.of(1, 1))); + } + + /** */ + @Test + public void testPutAllAsync() { + checkOperation(() -> cache.putAllAsync(ImmutableMap.of(1, 1))); + } + + /** */ + @Test + public void testPutIfAbsent() { + checkOperation(() -> cache.putIfAbsent(1, 1)); + } + + /** */ + @Test + public void testPutIfAbsentAsync() { + checkOperation(() -> cache.putIfAbsentAsync(1, 1)); + } + + /** */ + @Test + public void testRemove1() { + checkOperation(() -> cache.remove(1)); + } + + /** */ + @Test + public void testRemoveAsync1() { + checkOperation(() -> cache.removeAsync(1)); + } + + /** */ + @Test + public void testRemove2() { + checkOperation(() -> cache.remove(1, 1)); + } + + /** */ + @Test + public void testRemoveAsync2() { + checkOperation(() -> cache.removeAsync(1, 1)); + } + + /** */ + @Test + public void testGetAndRemove() { + checkOperation(() -> cache.getAndRemove(1)); + } + + /** */ + @Test + public void testGetAndRemoveAsync() { + checkOperation(() -> cache.getAndRemoveAsync(1)); + } + + /** */ + @Test + public void testReplace1() { + checkOperation(() -> cache.replace(1, 1, 1)); + } + + /** */ + @Test + public void testReplaceAsync1() { + checkOperation(() -> cache.replaceAsync(1, 1, 1)); + } + + /** */ + @Test + public void testReplace2() { + checkOperation(() -> cache.replace(1, 1)); + } + + /** */ + @Test + public void testReplaceAsync2() { + checkOperation(() -> cache.replaceAsync(1, 1)); + } + + /** */ + @Test + public void testGetAndReplace() { + checkOperation(() -> cache.getAndReplace(1, 1)); + } + + /** */ + @Test + public void testGetAndReplaceAsync() { + checkOperation(() -> cache.getAndReplaceAsync(1, 1)); + } + + /** */ + @Test + public void testRemoveAll1() { + checkOperation(() -> cache.removeAll(singleton(1))); + } + + /** */ + @Test + public void testRemoveAllAsync1() { + checkOperation(() -> cache.removeAllAsync(singleton(1))); + } + + /** */ + @Test + public void testInvoke1() { + checkOperation(() -> cache.invoke(1, 
testEntryProcessor)); + } + + /** */ + @Test + public void testInvokeAsync1() { + checkOperation(() -> cache.invokeAsync(1, testEntryProcessor)); + } + + /** */ + @Test + public void testInvoke2() { + checkOperation(() -> cache.invoke(1, testEntryProcessor)); + } + + /** */ + @Test + public void testInvokeAsync2() { + checkOperation(() -> cache.invokeAsync(1, testEntryProcessor)); + } + + /** */ + @Test + public void testInvokeAll1() { + checkOperation(() -> cache.invokeAll(singleton(1), testEntryProcessor)); + } + + /** */ + @Test + public void testInvokeAllAsync1() { + checkOperation(() -> cache.invokeAllAsync(singleton(1), testEntryProcessor)); + } + + /** */ + @Test + public void testInvokeAll2() { + checkOperation(() -> cache.invokeAll(singleton(1), testEntryProcessor)); + } + + /** */ + @Test + public void testInvokeAllAsync2() { + checkOperation(() -> cache.invokeAllAsync(singleton(1), testEntryProcessor)); + } + + /** */ + @Test + public void testInvokeAll3() { + checkOperation(() -> cache.invokeAll(Collections.singletonMap(1, testEntryProcessor))); + } + + /** */ + @Test + public void testInvokeAllAsync3() { + checkOperation(() -> cache.invokeAllAsync(Collections.singletonMap(1, testEntryProcessor))); + } + + /** + * @param action Action. 
+ */ + private void checkOperation(Runnable action) { + assertNotSupportedInTx(action, OPTIMISTIC, READ_COMMITTED); + assertNotSupportedInTx(action, OPTIMISTIC, REPEATABLE_READ); + assertNotSupportedInTx(action, OPTIMISTIC, SERIALIZABLE); + + assertSupportedInTx(action, PESSIMISTIC, READ_COMMITTED); + assertSupportedInTx(action, PESSIMISTIC, REPEATABLE_READ); + assertSupportedInTx(action, PESSIMISTIC, SERIALIZABLE); + } + + /** */ + private void assertNotSupportedInTx(Runnable action, TransactionConcurrency conc, TransactionIsolation iso) { + try (Transaction ignored = grid(0).transactions().txStart(conc, iso)) { + action.run(); + + fail("Action failure is expected."); + } + catch (TransactionException e) { + assertEquals("Only pessimistic transactions are supported when MVCC is enabled.", e.getMessage()); + } + } + + /** */ + private void assertSupportedInTx(Runnable action, TransactionConcurrency conc, TransactionIsolation iso) { + try (Transaction tx = grid(0).transactions().txStart(conc, iso)) { + action.run(); + + tx.commit(); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/CheckpointReadLockFailureTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/CheckpointReadLockFailureTest.java new file mode 100644 index 0000000000000..b3fede0ad2827 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/CheckpointReadLockFailureTest.java @@ -0,0 +1,127 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence; + +import java.util.HashSet; +import java.util.Set; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.apache.ignite.Ignite; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.failure.AbstractFailureHandler; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureType; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Tests critical failure handling on checkpoint read lock acquisition errors. 
+ */ +public class CheckpointReadLockFailureTest extends GridCommonAbstractTest { + /** */ + private static final AbstractFailureHandler FAILURE_HND = new AbstractFailureHandler() { + @Override protected boolean handle(Ignite ignite, FailureContext failureCtx) { + if (failureCtx.type() != FailureType.SYSTEM_CRITICAL_OPERATION_TIMEOUT) + return true; + + if (hndLatch != null) + hndLatch.countDown(); + + return false; + } + }; + + /** */ + private static volatile CountDownLatch hndLatch; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName) + .setFailureHandler(FAILURE_HND) + .setDataStorageConfiguration(new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setPersistenceEnabled(true)) + .setCheckpointFrequency(Integer.MAX_VALUE) + .setCheckpointReadLockTimeout(1)); + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + Set ignoredFailureTypes = new HashSet<>(FAILURE_HND.getIgnoredFailureTypes()); + ignoredFailureTypes.remove(FailureType.SYSTEM_CRITICAL_OPERATION_TIMEOUT); + + FAILURE_HND.setIgnoredFailureTypes(ignoredFailureTypes); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + cleanPersistenceDir(); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testFailureTypeOnTimeout() throws Exception { + hndLatch = new CountDownLatch(1); + + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + GridCacheDatabaseSharedManager db = (GridCacheDatabaseSharedManager)ig.context().cache().context().database(); + + IgniteInternalFuture acquireWriteLock = GridTestUtils.runAsync(() -> { + db.checkpointLock.writeLock().lock(); + + try { + doSleep(Long.MAX_VALUE); + } + finally { + db.checkpointLock.writeLock().unlock(); + } + }); + + GridTestUtils.waitForCondition(() -> db.checkpointLock.writeLock().isHeldByCurrentThread(), 5000); + + IgniteInternalFuture acquireReadLock = GridTestUtils.runAsync(() -> { + db.checkpointReadLock(); + db.checkpointReadUnlock(); + }); + + assertTrue(hndLatch.await(5, TimeUnit.SECONDS)); + + acquireWriteLock.cancel(); + + acquireReadLock.get(5, TimeUnit.SECONDS); + + GridTestUtils.waitForCondition(acquireWriteLock::isCancelled, 5000); + + stopGrid(0); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteBaselineAffinityTopologyActivationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteBaselineAffinityTopologyActivationTest.java index f44e7929ec762..273a3b3d949e4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteBaselineAffinityTopologyActivationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteBaselineAffinityTopologyActivationTest.java @@ -54,12 +54,16 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class IgniteBaselineAffinityTopologyActivationTest 
extends GridCommonAbstractTest { /** */ private String consId; @@ -117,6 +121,7 @@ public class IgniteBaselineAffinityTopologyActivationTest extends GridCommonAbst * (it is the node that once wasn't presented in branchingHistory but hasn't participated in any branching point) * joins the cluster after restart, cluster gets activated. */ + @Test public void testAutoActivationWithCompatibleOldNode() throws Exception { startGridWithConsistentId("A"); startGridWithConsistentId("B"); @@ -168,6 +173,7 @@ public void testAutoActivationWithCompatibleOldNode() throws Exception { * IgniteCluster::setBaselineTopology(long topVer) should throw an exception * when online node from current BaselineTopology is not presented in topology version. */ + @Test public void testBltChangeTopVerRemoveOnlineNodeFails() throws Exception { Ignite ignite = startGridWithConsistentId("A"); @@ -198,6 +204,7 @@ public void testBltChangeTopVerRemoveOnlineNodeFails() throws Exception { /** * Verifies that online nodes cannot be removed from BaselineTopology (this may change in future). */ + @Test public void testOnlineNodesCannotBeRemovedFromBaselineTopology() throws Exception { Ignite nodeA = startGridWithConsistentId("A"); Ignite nodeB = startGridWithConsistentId("B"); @@ -224,6 +231,7 @@ public void testOnlineNodesCannotBeRemovedFromBaselineTopology() throws Exceptio /** * */ + @Test public void testNodeFailsToJoinWithIncompatiblePreviousBaselineTopology() throws Exception { startGridWithConsistentId("A"); startGridWithConsistentId("B"); @@ -272,6 +280,7 @@ public void testNodeFailsToJoinWithIncompatiblePreviousBaselineTopology() throws * Verifies scenario when parts of grid were activated independently they are not allowed to join * into the same grid again (due to risks of incompatible data modifications). 
*/ + @Test public void testIncompatibleBltNodeIsProhibitedToJoinCluster() throws Exception { startGridWithConsistentId("A"); startGridWithConsistentId("B"); @@ -315,6 +324,7 @@ public void testIncompatibleBltNodeIsProhibitedToJoinCluster() throws Exception /** * Test verifies that node with out-of-data but still compatible Baseline Topology is allowed to join the cluster. */ + @Test public void testNodeWithOldBltIsAllowedToJoinCluster() throws Exception { final long expectedHash1 = (long)"A".hashCode() + "B".hashCode() + "C".hashCode(); @@ -376,6 +386,7 @@ public void testNodeWithOldBltIsAllowedToJoinCluster() throws Exception { * * @throws Exception If failed. */ + @Test public void testNodeJoinsDuringPartitionMapExchange() throws Exception { startGridWithConsistentId("A"); startGridWithConsistentId("B"); @@ -477,6 +488,7 @@ private void checkBaselineTopologyOnNode( * * @throws Exception If failed. */ + @Test public void testNodeWithBltIsNotAllowedToJoinClusterDuringFirstActivation() throws Exception { Ignite nodeC = startGridWithConsistentId("C"); @@ -528,6 +540,7 @@ public void testNodeWithBltIsNotAllowedToJoinClusterDuringFirstActivation() thro * Verifies that when new node outside of baseline topology joins active cluster with BLT already set * it receives BLT from the cluster and stores it locally. 
*/ + @Test public void testNewNodeJoinsToActiveCluster() throws Exception { Ignite nodeA = startGridWithConsistentId("A"); Ignite nodeB = startGridWithConsistentId("B"); @@ -561,6 +574,7 @@ public void testNewNodeJoinsToActiveCluster() throws Exception { /** * */ + @Test public void testRemoveNodeFromBaselineTopology() throws Exception { final long expectedActivationHash = (long)"A".hashCode() + "C".hashCode(); @@ -615,6 +629,7 @@ public void testRemoveNodeFromBaselineTopology() throws Exception { /** * */ + @Test public void testAddNodeToBaselineTopology() throws Exception { final long expectedActivationHash = (long)"A".hashCode() + "B".hashCode() + "C".hashCode() + "D".hashCode(); @@ -646,6 +661,7 @@ public void testAddNodeToBaselineTopology() throws Exception { /** * Verifies that baseline topology is removed successfully through baseline changing API. */ + @Test public void testRemoveBaselineTopology() throws Exception { BaselineTopologyVerifier verifier = new BaselineTopologyVerifier() { @Override public void verify(BaselineTopology blt) { @@ -705,6 +721,7 @@ private Ignite startGridWithConsistentId(String consId) throws Exception { * Verifies that when new node joins already active cluster and new activation request is issued, * no changes to BaselineTopology branching history happen. */ + @Test public void testActivationHashIsNotUpdatedOnMultipleActivationRequests() throws Exception { final long expectedActivationHash = (long)"A".hashCode(); @@ -731,6 +748,7 @@ public void testActivationHashIsNotUpdatedOnMultipleActivationRequests() throws * Verifies that grid is autoactivated when full BaselineTopology is preset even on one node * and then all other nodes from BaselineTopology are started. 
*/ + @Test public void testAutoActivationWithBaselineTopologyPreset() throws Exception { Ignite ig = startGridWithConsistentId("A"); @@ -771,6 +789,7 @@ private BaselineNode createBaselineNodeWithConsId(String consId) { } /** */ + @Test public void testAutoActivationSimple() throws Exception { startGrids(3); @@ -804,6 +823,7 @@ public void testAutoActivationSimple() throws Exception { /** * */ + @Test public void testNoAutoActivationOnJoinNewNodeToInactiveCluster() throws Exception { startGrids(2); @@ -829,6 +849,7 @@ public void testNoAutoActivationOnJoinNewNodeToInactiveCluster() throws Exceptio /** * Verifies that neither BaselineTopology nor BaselineTopologyHistory are changed when cluster is deactivated. */ + @Test public void testBaselineTopologyRemainsTheSameOnClusterDeactivation() throws Exception { startGrids(2); @@ -858,6 +879,7 @@ public void testBaselineTopologyRemainsTheSameOnClusterDeactivation() throws Exc /** * */ + @Test public void testBaselineHistorySyncWithNewNode() throws Exception { final long expectedBranchingHash = "A".hashCode() + "B".hashCode() + "C".hashCode(); @@ -899,6 +921,7 @@ public void testBaselineHistorySyncWithNewNode() throws Exception { /** * */ + @Test public void testBaselineHistorySyncWithOldNodeWithCompatibleHistory() throws Exception { final long expectedBranchingHash0 = "A".hashCode() + "B".hashCode() + "C".hashCode(); @@ -955,6 +978,7 @@ public void testBaselineHistorySyncWithOldNodeWithCompatibleHistory() throws Exc /** * @throws Exception if failed. 
*/ + @Test public void testBaselineNotDeletedOnDeactivation() throws Exception { Ignite nodeA = startGridWithConsistentId("A"); startGridWithConsistentId("B"); @@ -990,6 +1014,7 @@ public void testBaselineNotDeletedOnDeactivation() throws Exception { /** * */ + @Test public void testNodeWithBltIsProhibitedToJoinNewCluster() throws Exception { BaselineTopologyVerifier nullVerifier = new BaselineTopologyVerifier() { @Override public void verify(BaselineTopology blt) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteDataStorageMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteDataStorageMetricsSelfTest.java index 4db1de90d1f4f..8472d04408310 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteDataStorageMetricsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteDataStorageMetricsSelfTest.java @@ -35,11 +35,11 @@ import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.PAX; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -48,10 +48,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteDataStorageMetricsSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new 
TcpDiscoveryVmIpFinder(true); - /** */ private static final String GROUP1 = "grp1"; @@ -92,8 +90,6 @@ public class IgniteDataStorageMetricsSelfTest extends GridCommonAbstractTest { cfg.setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(false)); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setCacheConfiguration(cacheConfiguration(GROUP1, "cache", PARTITIONED, ATOMIC, 1, null), cacheConfiguration(null, "cache-np", PARTITIONED, ATOMIC, 1, "no-persistence")); @@ -141,6 +137,7 @@ private CacheConfiguration cacheConfiguration( /** * @throws Exception if failed. */ + @Test public void testPersistenceMetrics() throws Exception { final IgniteEx ig = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinaryMetadataOnClusterRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinaryMetadataOnClusterRestartTest.java index 28c1e9b2a5838..edcd99a334049 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinaryMetadataOnClusterRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinaryMetadataOnClusterRestartTest.java @@ -44,10 +44,14 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsBinaryMetadataOnClusterRestartTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache1"; @@ -74,6 +78,8 @@ public class IgnitePdsBinaryMetadataOnClusterRestartTest extends GridCommonAbstr @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = 
super.getConfiguration(gridName); + cfg.setConsistentId(gridName); + if (customWorkSubDir != null) cfg.setWorkDirectory(Paths.get(U.defaultWorkDirectory(), customWorkSubDir).toString()); @@ -116,6 +122,7 @@ public class IgnitePdsBinaryMetadataOnClusterRestartTest extends GridCommonAbstr /** * @see IGNITE-7258 refer to the following JIRA for more context about the problem verified by the test. */ + @Test public void testUpdatedBinaryMetadataIsPreservedOnJoinToOldCoordinator() throws Exception { Ignite ignite0 = startGridInASeparateWorkDir("A"); Ignite ignite1 = startGridInASeparateWorkDir("B"); @@ -169,6 +176,7 @@ public void testUpdatedBinaryMetadataIsPreservedOnJoinToOldCoordinator() throws /** * @see IGNITE-7258 refer to the following JIRA for more context about the problem verified by the test. */ + @Test public void testNewBinaryMetadataIsWrittenOnOldCoordinator() throws Exception { Ignite ignite0 = startGridInASeparateWorkDir("A"); Ignite ignite1 = startGridInASeparateWorkDir("B"); @@ -223,6 +231,7 @@ public void testNewBinaryMetadataIsWrittenOnOldCoordinator() throws Exception { * * @see IGNITE-7258 refer to the following JIRA for more context about the problem verified by the test. */ + @Test public void testNewBinaryMetadataIsPropagatedToAllOutOfDataNodes() throws Exception { Ignite igniteA = startGridInASeparateWorkDir("A"); startGridInASeparateWorkDir("B"); @@ -290,6 +299,7 @@ private String binaryTypeName(BinaryObject bObj) { * * @see IGNITE-7258 refer to the following JIRA for more context about the problem verified by the test. 
*/ + @Test public void testNodeWithIncompatibleMetadataIsProhibitedToJoinTheCluster() throws Exception { final String decimalFieldName = "decField"; @@ -358,8 +368,8 @@ private void copyIncompatibleBinaryMetadata(String fromWorkDir, ) throws Exception { String workDir = U.defaultWorkDirectory(); - Path fromFile = Paths.get(workDir, fromWorkDir, "binary_meta", "node00-" + fromConsId, fileName); - Path toFile = Paths.get(workDir, toWorkDir, "binary_meta", "node00-" + toConsId, fileName); + Path fromFile = Paths.get(workDir, fromWorkDir, "binary_meta", fromConsId, fileName); + Path toFile = Paths.get(workDir, toWorkDir, "binary_meta", toConsId, fileName); Files.copy(fromFile, toFile, StandardCopyOption.REPLACE_EXISTING); } @@ -368,6 +378,7 @@ private void copyIncompatibleBinaryMetadata(String fromWorkDir, * Test verifies that binary metadata from regular java classes is saved and restored correctly * on cluster restart. */ + @Test public void testStaticMetadataIsRestoredOnRestart() throws Exception { clientMode = false; @@ -443,6 +454,7 @@ private void examineDynamicMetadata(int nodesCount, BinaryObjectExaminer... exam * Test verifies that metadata for binary types built with BinaryObjectBuilder is saved and updated correctly * on cluster restart. */ + @Test public void testDynamicMetadataIsRestoredOnRestart() throws Exception { clientMode = false; //1: start two nodes, add single BinaryObject @@ -509,6 +521,7 @@ public void testDynamicMetadataIsRestoredOnRestart() throws Exception { /** * */ + @Test public void testBinaryEnumMetadataIsRestoredOnRestart() throws Exception { clientMode = false; @@ -550,6 +563,7 @@ public void testBinaryEnumMetadataIsRestoredOnRestart() throws Exception { /** * Test verifies that metadata is saved, stored and delivered to client nodes correctly. 
*/ + @Test public void testMixedMetadataIsRestoredOnRestart() throws Exception { clientMode = false; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinarySortObjectFieldsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinarySortObjectFieldsTest.java index 70a0203b5ebb2..78998be885c62 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinarySortObjectFieldsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsBinarySortObjectFieldsTest.java @@ -23,15 +23,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.PersistentStoreConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsBinarySortObjectFieldsTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "ignitePdsBinarySortObjectFieldsTestCache"; @@ -81,9 +82,6 @@ public void setVal(Long val) { } } - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void afterTestsStopped() throws Exception { cleanPersistenceDir(); @@ -102,8 +100,6 @@ public void setVal(Long val) { cfg.setConsistentId(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration()); return 
cfg; @@ -121,6 +117,7 @@ public void setVal(Long val) { /** * @throws Exception if failed. */ + @Test public void testGivenCacheWithPojoValueAndPds_WhenPut_ThenNoHangup() throws Exception { System.setProperty("IGNITE_BINARY_SORT_OBJECT_FIELDS", "true"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheAssignmentNodeRestartsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheAssignmentNodeRestartsTest.java index 032b422a60bc8..dbb3dcd545e5c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheAssignmentNodeRestartsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheAssignmentNodeRestartsTest.java @@ -41,11 +41,13 @@ import org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.lang.IgniteUuid; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assume; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -55,10 +57,8 @@ /** * The test validates assignment after nodes restart with enabled persistence. 
*/ +@RunWith(JUnit4.class) public class IgnitePdsCacheAssignmentNodeRestartsTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -75,8 +75,6 @@ public class IgnitePdsCacheAssignmentNodeRestartsTest extends GridCommonAbstract .setWalMode(WALMode.LOG_ONLY) ); - ((TcpDiscoverySpi) cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -127,8 +125,11 @@ private CacheConfiguration cacheConfiguration(String name, /** * @throws Exception If failed. */ + @Test public void testAssignmentAfterRestarts() throws Exception { try { + Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10582", MvccFeatureChecker.forcedMvcc()); + System.setProperty(IGNITE_PDS_CHECKPOINT_TEST_SKIP_SYNC, "true"); final int gridsCnt = 5; @@ -258,4 +259,4 @@ private void checkAffinity() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheConfigurationFileConsistencyCheckTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheConfigurationFileConsistencyCheckTest.java index 74a3950239195..d3b8a81797eaa 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheConfigurationFileConsistencyCheckTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheConfigurationFileConsistencyCheckTest.java @@ -39,11 +39,11 @@ import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.marshaller.jdk.JdkMarshaller; -import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.CACHE_DATA_FILENAME; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.CACHE_DATA_TMP_FILENAME; @@ -51,10 +51,8 @@ /** * Tests that ignite can start when caches' configurations with same name in different groups stored. */ +@RunWith(JUnit4.class) public class IgnitePdsCacheConfigurationFileConsistencyCheckTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int CACHES = 4; @@ -71,9 +69,7 @@ public class IgnitePdsCacheConfigurationFileConsistencyCheckTest extends GridCom @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - return cfg.setDiscoverySpi(new TcpDiscoverySpi() - .setIpFinder(IP_FINDER)) - .setDataStorageConfiguration(new DataStorageConfiguration() + return cfg.setDataStorageConfiguration(new DataStorageConfiguration() .setDefaultDataRegionConfiguration(new DataRegionConfiguration() .setMaxSize(200 * 1024 * 1024) .setPersistenceEnabled(true))); @@ -102,6 +98,7 @@ public class IgnitePdsCacheConfigurationFileConsistencyCheckTest extends GridCom * * @throws Exception If fails. 
*/ + @Test public void testStartDuplicatedCacheConfigurations() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(NODES); @@ -129,6 +126,7 @@ public void testStartDuplicatedCacheConfigurations() throws Exception { * * @throws Exception If failed. */ + @Test public void testTmpCacheConfigurationsDelete() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(NODES); @@ -168,6 +166,7 @@ public void testTmpCacheConfigurationsDelete() throws Exception { * * @throws Exception If failed. */ + @Test public void testCorruptedCacheConfigurationsValidation() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(NODES); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest.java index 279d3c8c4e620..801f15eaa6402 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest.java @@ -27,18 +27,16 @@ import org.apache.ignite.failure.StopNodeFailureHandler; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new 
TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -46,9 +44,7 @@ public class IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest extends GridComm if ("client".equals(igniteInstanceName)) cfg.setClientMode(true).setFailureHandler(new StopNodeFailureHandler()); - return cfg.setDiscoverySpi(new TcpDiscoverySpi() - .setIpFinder(IP_FINDER)) - .setDataStorageConfiguration(new DataStorageConfiguration() + return cfg.setDataStorageConfiguration(new DataStorageConfiguration() .setDefaultDataRegionConfiguration(new DataRegionConfiguration() .setPersistenceEnabled(true))); } @@ -71,6 +67,7 @@ public class IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest extends GridComm * Tests that joining node metadata correctly handled on client. * @throws Exception If fails. */ + @Test public void testJoiningNodeBinaryMetaOnClient() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheRebalancingAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheRebalancingAbstractTest.java index 389a7feacaaff..dbbf09ffe1c73 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheRebalancingAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheRebalancingAbstractTest.java @@ -54,22 +54,20 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; 
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.runMultiThreadedAsync; /** * Test for rebalancing and persistence integration. */ +@RunWith(JUnit4.class) public abstract class IgnitePdsCacheRebalancingAbstractTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Default cache. */ private static final String CACHE = "cache"; @@ -150,11 +148,6 @@ public abstract class IgnitePdsCacheRebalancingAbstractTest extends GridCommonAb cfg.setDataStorageConfiguration(dsCfg); - cfg.setDiscoverySpi( - new TcpDiscoverySpi() - .setIpFinder(IP_FINDER) - ); - return cfg; } @@ -211,6 +204,7 @@ protected long checkpointFrequency() { * * @throws Exception If fails. */ + @Test public void testRebalancingOnRestart() throws Exception { Ignite ignite0 = startGrid(0); @@ -262,6 +256,7 @@ public void testRebalancingOnRestart() throws Exception { * * @throws Exception If fails. */ + @Test public void testRebalancingOnRestartAfterCheckpoint() throws Exception { IgniteEx ignite0 = startGrid(0); @@ -323,6 +318,7 @@ public void testRebalancingOnRestartAfterCheckpoint() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTopologyChangesWithConstantLoad() throws Exception { final long timeOut = U.currentTimeMillis() + 5 * 60 * 1000; @@ -520,6 +516,7 @@ else if (nodesCnt.get() >= maxNodesCnt) /** * @throws Exception If failed. */ + @Test public void testForceRebalance() throws Exception { testForceRebalance(CACHE); } @@ -527,6 +524,7 @@ public void testForceRebalance() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testForceRebalanceClientTopology() throws Exception { filteredCacheEnabled = true; @@ -582,6 +580,7 @@ private void testForceRebalance(String cacheName) throws Exception { /** * @throws Exception If failed */ + @Test public void testPartitionCounterConsistencyOnUnstableTopology() throws Exception { final Ignite ig = startGrids(4); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheStartStopWithFreqCheckpointTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheStartStopWithFreqCheckpointTest.java new file mode 100644 index 0000000000000..b4dae392d3011 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCacheStartStopWithFreqCheckpointTest.java @@ -0,0 +1,184 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.CacheRebalanceMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class IgnitePdsCacheStartStopWithFreqCheckpointTest extends GridCommonAbstractTest { + /** Caches. */ + private static final int CACHES = 10; + + /** Cache name. 
*/ + private static final String CACHE_NAME = "test"; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setConsistentId(igniteInstanceName); + + DataStorageConfiguration dsCfg = new DataStorageConfiguration() + .setWalMode(WALMode.LOG_ONLY) + .setCheckpointFrequency(1000) + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setMaxSize(512 * 1024 * 1024) + .setPersistenceEnabled(true) + ); + + cfg.setDataStorageConfiguration(dsCfg); + + CacheConfiguration[] ccfgs = new CacheConfiguration[CACHES]; + + for (int i = 0; i < ccfgs.length; i++) + ccfgs[i] = cacheConfiguration(i); + + cfg.setCacheConfiguration(ccfgs); + + return cfg; + } + + /** @return Cache configuration for the given cache index. */ + private CacheConfiguration cacheConfiguration(int cacheIdx) { + return new CacheConfiguration(CACHE_NAME + cacheIdx) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setCacheMode(CacheMode.REPLICATED) + .setBackups(0) + .setRebalanceMode(CacheRebalanceMode.NONE); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * Tests that no checkpoint deadlock occurs during the cache start/stop process when a short checkpoint frequency is set. + * + * @throws Exception If failed.
+ */ + @Test + public void testCheckpointDeadlock() throws Exception { + IgniteEx crd = startGrid(0); + + crd.cluster().active(true); + + for (int cacheId = 0; cacheId < CACHES; cacheId++) { + IgniteCache cache = crd.getOrCreateCache(CACHE_NAME + cacheId); + + for (int key = 0; key < 4096; key++) + cache.put(key, key); + } + + forceCheckpoint(); + + final AtomicBoolean stopFlag = new AtomicBoolean(); + + IgniteInternalFuture cacheStartStopFut = GridTestUtils.runAsync(() -> { + while (!stopFlag.get()) { + List<String> cacheNames = new ArrayList<>(); + for (int i = 0; i < CACHES / 2; i++) + cacheNames.add(CACHE_NAME + i); + + try { + // Stop caches without destroying their data. + crd.context().cache().dynamicDestroyCaches(cacheNames, false, false).get(); + } + catch (IgniteCheckedException e) { + throw new IgniteException("Failed to destroy cache", e); + } + + List<CacheConfiguration> cachesToStart = new ArrayList<>(); + for (int i = 0; i < CACHES / 2; i++) + cachesToStart.add(cacheConfiguration(i)); + + crd.getOrCreateCaches(cachesToStart); + } + }); + + U.sleep(60_000); + + log.info("Stopping cache start/stop process."); + + stopFlag.set(true); + + try { + cacheStartStopFut.get(30, TimeUnit.SECONDS); + } + catch (IgniteFutureTimeoutCheckedException e) { + U.dumpThreads(log); + + log.warning("Cache start/stop future hangs. Interrupting checkpointer..."); + + interruptCheckpointer(crd); + + // Should succeed. + cacheStartStopFut.get(); + + Assert.fail("Checkpoint and exchange are probably deadlocked (see thread dump above for details)."); + } + } + + /** + * Interrupts the checkpointer thread of the given node. + * + * @param node Node.
+ */ + private void interruptCheckpointer(IgniteEx node) { + GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager) node.context().cache().context().database(); + + dbMgr.checkpointerThread().interrupt(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTest.java index 25d54abb345e3..51ff1b79b6b40 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTest.java @@ -42,11 +42,16 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Cause by https://issues.apache.org/jira/browse/IGNITE-7278 */ +@RunWith(JUnit4.class) public class IgnitePdsContinuousRestartTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 4; @@ -72,7 +77,7 @@ public IgnitePdsContinuousRestartTest() { /** * @param cancel Cancel. 
*/ - public IgnitePdsContinuousRestartTest(boolean cancel) { + protected IgnitePdsContinuousRestartTest(boolean cancel) { this.cancel = cancel; } @@ -112,6 +117,14 @@ public IgnitePdsContinuousRestartTest(boolean cancel) { cleanPersistenceDir(); } + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10561"); + + super.beforeTest(); + } + /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -122,6 +135,7 @@ public IgnitePdsContinuousRestartTest(boolean cancel) { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_1000_500_1_1() throws Exception { checkRebalancingDuringLoad(1000, 500, 1, 1); } @@ -129,6 +143,7 @@ public void testRebalancingDuringLoad_1000_500_1_1() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_8000_500_1_1() throws Exception { checkRebalancingDuringLoad(8000, 500, 1, 1); } @@ -136,6 +151,7 @@ public void testRebalancingDuringLoad_8000_500_1_1() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_1000_20000_1_1() throws Exception { checkRebalancingDuringLoad(1000, 20000, 1, 1); } @@ -143,6 +159,7 @@ public void testRebalancingDuringLoad_1000_20000_1_1() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_8000_8000_1_1() throws Exception { checkRebalancingDuringLoad(8000, 8000, 1, 1); } @@ -150,6 +167,7 @@ public void testRebalancingDuringLoad_8000_8000_1_1() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_1000_500_8_1() throws Exception { checkRebalancingDuringLoad(1000, 500, 8, 1); } @@ -157,6 +175,7 @@ public void testRebalancingDuringLoad_1000_500_8_1() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testRebalancingDuringLoad_8000_500_8_1() throws Exception { checkRebalancingDuringLoad(8000, 500, 8, 1); } @@ -164,6 +183,7 @@ public void testRebalancingDuringLoad_8000_500_8_1() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_1000_20000_8_1() throws Exception { checkRebalancingDuringLoad(1000, 20000, 8, 1); } @@ -171,6 +191,7 @@ public void testRebalancingDuringLoad_1000_20000_8_1() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_8000_8000_8_1() throws Exception { checkRebalancingDuringLoad(8000, 8000, 8, 1); } @@ -178,6 +199,7 @@ public void testRebalancingDuringLoad_8000_8000_8_1() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_1000_500_8_16() throws Exception { checkRebalancingDuringLoad(1000, 500, 8, 16); } @@ -185,6 +207,7 @@ public void testRebalancingDuringLoad_1000_500_8_16() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_8000_500_8_16() throws Exception { checkRebalancingDuringLoad(8000, 500, 8, 16); } @@ -192,6 +215,7 @@ public void testRebalancingDuringLoad_8000_500_8_16() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_1000_20000_8_16() throws Exception { checkRebalancingDuringLoad(1000, 20000, 8, 16); } @@ -199,6 +223,7 @@ public void testRebalancingDuringLoad_1000_20000_8_16() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRebalancingDuringLoad_8000_8000_8_16() throws Exception { checkRebalancingDuringLoad(8000, 8000, 8, 16); } @@ -206,14 +231,18 @@ public void testRebalancingDuringLoad_8000_8000_8_16() throws Exception { /** * @throws Exception if failed. 
*/ - public void testRebalncingDuringLoad_10_10_1_1() throws Exception { + @Test + public void testRebalancingDuringLoad_10_10_1_1() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10583"); checkRebalancingDuringLoad(10, 10, 1, 1); } /** * @throws Exception if failed. */ - public void testRebalncingDuringLoad_10_500_8_16() throws Exception { + @Test + public void testRebalancingDuringLoad_10_500_8_16() throws Exception { checkRebalancingDuringLoad(10, 500, 8, 16); } @@ -258,7 +287,16 @@ private void checkRebalancingDuringLoad( map.put(key, new Person("fn" + key, "ln" + key)); } - cache.putAll(map); + while (true) { + try { + cache.putAll(map); + + break; + } + catch (Exception e) { + MvccFeatureChecker.assertMvccWriteConflict(e); + } + } } return null; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithExpiryPolicy.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithExpiryPolicy.java index d5b3f5527a6f0..0fa8144d11b01 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithExpiryPolicy.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithExpiryPolicy.java @@ -25,17 +25,12 @@ import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; /** * Cause by https://issues.apache.org/jira/browse/IGNITE-5879 */ 
public class IgnitePdsContinuousRestartTestWithExpiryPolicy extends IgnitePdsContinuousRestartTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * Default constructor. */ @@ -47,9 +42,6 @@ public IgnitePdsContinuousRestartTestWithExpiryPolicy() { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - discoverySpi.setIpFinder(ipFinder); - CacheConfiguration ccfg = new CacheConfiguration(); ccfg.setName(CACHE_NAME); @@ -64,4 +56,11 @@ public IgnitePdsContinuousRestartTestWithExpiryPolicy() { return cfg; } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); + + super.beforeTest(); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes.java index 110e67708af40..13f1a19f4f53a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes.java @@ -22,17 +22,11 @@ import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; /** * Adding shared group and indexes to testing. It would impact how we evict partitions. */ public class IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes extends IgnitePdsContinuousRestartTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Cache 2 singleton group name. */ public static final String CACHE_GROUP_NAME = "Group2"; @@ -47,9 +41,6 @@ public IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes() { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - discoverySpi.setIpFinder(ipFinder); - CacheConfiguration ccfg2 = new CacheConfiguration(); ccfg2.setName(CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedIndexTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedIndexTest.java index 14d0fb6d4dfb7..6f3c9e58a6b37 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedIndexTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedIndexTest.java @@ -48,10 +48,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.multijvm.IgniteProcessProxy; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test to reproduce corrupted indexes problem after partition file eviction and truncation. */ +@RunWith(JUnit4.class) public class IgnitePdsCorruptedIndexTest extends GridCommonAbstractTest { /** Cache name. 
*/ private static final String CACHE = "cache"; @@ -123,6 +127,7 @@ public class IgnitePdsCorruptedIndexTest extends GridCommonAbstractTest { /** * */ + @Test public void testCorruption() throws Exception { final String corruptedNodeName = "corrupted"; @@ -317,16 +322,6 @@ private static boolean isPartitionFile(File file) { return file.getName().contains("part") && file.getName().endsWith("bin"); } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - FileIO delegate = delegateFactory.create(file); - - if (isPartitionFile(file)) - return new HaltOnTruncateFileIO(delegate, file); - - return delegate; - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... modes) throws IOException { FileIO delegate = delegateFactory.create(file, modes); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedStoreTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedStoreTest.java index 059b5eefdcf73..25924102d88cf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedStoreTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsCorruptedStoreTest.java @@ -37,6 +37,7 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.pagemem.PageIdAllocator; import org.apache.ignite.internal.pagemem.PageIdUtils; import org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; @@ -53,10 +54,10 @@ import org.apache.ignite.lang.IgniteBiClosure; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import 
org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; import static org.apache.ignite.IgniteSystemProperties.IGNITE_PDS_SKIP_CRC; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; import static org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage.METASTORAGE_CACHE_ID; @@ -64,6 +65,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgnitePdsCorruptedStoreTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME1 = "cache1"; @@ -150,6 +152,7 @@ private CacheConfiguration cacheConfiguration(String name) { /** * @throws Exception If test failed. */ + @Test public void testNodeInvalidatedWhenPersistenceIsCorrupted() throws Exception { Ignite ignite = startGrid(0); @@ -180,6 +183,9 @@ public void testNodeInvalidatedWhenPersistenceIsCorrupted() throws Exception { startGrid(0); } catch (IgniteCheckedException ex) { + if (X.hasCause(ex, StorageException.class, IOException.class)) + return; // Success; + throw ex; } @@ -191,6 +197,7 @@ public void testNodeInvalidatedWhenPersistenceIsCorrupted() throws Exception { * * @throws Exception In case of fail */ + @Test public void testWrongPageCRC() throws Exception { System.setProperty(IGNITE_PDS_SKIP_CRC, "true"); @@ -224,6 +231,7 @@ public void testWrongPageCRC() throws Exception { /** * Test node invalidation when meta storage is corrupted. 
*/ + @Test public void testMetaStorageCorruption() throws Exception { IgniteEx ignite = startGrid(0); @@ -231,7 +239,7 @@ public void testMetaStorageCorruption() throws Exception { MetaStorage metaStorage = ignite.context().cache().context().database().metaStorage(); - corruptTreeRoot(ignite, (PageMemoryEx)metaStorage.pageMemory(), METASTORAGE_CACHE_ID, 0); + corruptTreeRoot(ignite, (PageMemoryEx)metaStorage.pageMemory(), METASTORAGE_CACHE_ID, PageIdAllocator.METASTORE_PARTITION); stopGrid(0); @@ -250,6 +258,7 @@ public void testMetaStorageCorruption() throws Exception { /** * Test node invalidation when cache meta is corrupted. */ + @Test public void testCacheMetaCorruption() throws Exception { IgniteEx ignite = startGrid(0); @@ -324,6 +333,7 @@ private void corruptTreeRoot(IgniteEx ignite, PageMemoryEx pageMem, int grpId, i /** * Test node invalidation when meta store is read only. */ + @Test public void testReadOnlyMetaStore() throws Exception { IgniteEx ignite0 = startGrid(0); @@ -368,6 +378,7 @@ public void testReadOnlyMetaStore() throws Exception { /** * Test node invalidation due to checkpoint error. */ + @Test public void testCheckpointFailure() throws Exception { IgniteEx ignite = startGrid(0); @@ -458,11 +469,6 @@ private static class FailingFileIOFactory implements FileIOFactory { /** Create FileIO closure. */ private volatile IgniteBiClosure createClo; - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
openOption) throws IOException { FileIO fileIO = null; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheAbstractTest.java index 434007ece5f8e..2fe9ca74b5cd4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheAbstractTest.java @@ -29,18 +29,12 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; /** * Base class for {@link IgnitePdsDestroyCacheTest} and {@link IgnitePdsDestroyCacheWithoutCheckpointsTest} */ public abstract class IgnitePdsDestroyCacheAbstractTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ protected static final int CACHES = 3; @@ -54,9 +48,7 @@ public abstract class IgnitePdsDestroyCacheAbstractTest extends GridCommonAbstra @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - return cfg.setDiscoverySpi(new TcpDiscoverySpi() - .setIpFinder(IP_FINDER)) - .setDataStorageConfiguration(new DataStorageConfiguration() + return cfg.setDataStorageConfiguration(new DataStorageConfiguration() .setDefaultDataRegionConfiguration(new 
DataRegionConfiguration() .setMaxSize(200 * 1024 * 1024) .setPersistenceEnabled(true))); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheTest.java index 99d6f01c07d5d..06e3a6a3b77c1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheTest.java @@ -18,16 +18,21 @@ package org.apache.ignite.internal.processors.cache.persistence; import org.apache.ignite.Ignite; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test correct clean up cache configuration data after destroying cache. */ +@RunWith(JUnit4.class) public class IgnitePdsDestroyCacheTest extends IgnitePdsDestroyCacheAbstractTest { /** * Test destroy non grouped caches. * * @throws Exception If failed. */ + @Test public void testDestroyCaches() throws Exception { Ignite ignite = startGrids(NODES); @@ -43,6 +48,7 @@ public void testDestroyCaches() throws Exception { * * @throws Exception If failed. */ + @Test public void testDestroyGroupCaches() throws Exception { Ignite ignite = startGrids(NODES); @@ -58,6 +64,7 @@ public void testDestroyGroupCaches() throws Exception { * * @throws Exception If failed. */ + @Test public void testDestroyCachesAbruptly() throws Exception { Ignite ignite = startGrids(NODES); @@ -67,12 +74,13 @@ public void testDestroyCachesAbruptly() throws Exception { checkDestroyCachesAbruptly(ignite); } - + /** * Test destroy group caches abruptly with checkpoints. * * @throws Exception If failed. 
*/ + @Test public void testDestroyGroupCachesAbruptly() throws Exception { Ignite ignite = startGrids(NODES); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheWithoutCheckpointsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheWithoutCheckpointsTest.java index 1bb6f5d8de8ba..8042a767c2373 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheWithoutCheckpointsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDestroyCacheWithoutCheckpointsTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.typedef.G; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Check that cluster survives after destroy caches abruptly with disabled checkpoints. */ +@RunWith(JUnit4.class) public class IgnitePdsDestroyCacheWithoutCheckpointsTest extends IgnitePdsDestroyCacheAbstractTest { /** * {@inheritDoc} @@ -39,6 +43,7 @@ public class IgnitePdsDestroyCacheWithoutCheckpointsTest extends IgnitePdsDestro * * @throws Exception If failed. */ + @Test public void testDestroyCachesAbruptlyWithoutCheckpoints() throws Exception { Ignite ignite = startGrids(NODES); @@ -56,6 +61,7 @@ public void testDestroyCachesAbruptlyWithoutCheckpoints() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDestroyGroupCachesAbruptlyWithoutCheckpoints() throws Exception { Ignite ignite = startGrids(NODES); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDynamicCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDynamicCacheTest.java index 42dc56364eb56..63bf42c7ec2f6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDynamicCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsDynamicCacheTest.java @@ -34,10 +34,14 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.processors.database.IgniteDbDynamicCacheSelfTest; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsDynamicCacheTest extends IgniteDbDynamicCacheSelfTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -84,6 +88,7 @@ public class IgnitePdsDynamicCacheTest extends IgniteDbDynamicCacheSelfTest { /** * @throws Exception If failed. */ + @Test public void testRestartAndCreate() throws Exception { startGrids(3); @@ -151,6 +156,7 @@ public void testRestartAndCreate() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDynamicCacheSavingOnNewNode() throws Exception { Ignite ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsExchangeDuringCheckpointTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsExchangeDuringCheckpointTest.java index f1609726e0d05..a072d45c47f6d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsExchangeDuringCheckpointTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsExchangeDuringCheckpointTest.java @@ -24,25 +24,23 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsExchangeDuringCheckpointTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Non-persistent data region name. 
*/ private static final String NO_PERSISTENCE_REGION = "no-persistence-region"; /** * */ + @Test public void testExchangeOnNodeLeft() throws Exception { for (int i = 0; i < 5; i++) { startGrids(3); @@ -64,6 +62,7 @@ public void testExchangeOnNodeLeft() throws Exception { /** * */ + @Test public void testExchangeOnNodeJoin() throws Exception { for (int i = 0; i < 5; i++) { startGrids(2); @@ -111,12 +110,6 @@ public void testExchangeOnNodeJoin() throws Exception { cfg.setCacheConfiguration(ccfg, ccfgNp); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsMarshallerMappingRestoreOnNodeStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsMarshallerMappingRestoreOnNodeStartTest.java index 27bfe28014cf5..c8c8aafcd304b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsMarshallerMappingRestoreOnNodeStartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsMarshallerMappingRestoreOnNodeStartTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsMarshallerMappingRestoreOnNodeStartTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -68,6 +72,7 @@ public class IgnitePdsMarshallerMappingRestoreOnNodeStartTest extends GridCommon * Test verifies that binary metadata from regular java classes is saved and restored 
correctly * on cluster restart. */ + @Test public void testStaticMetadataIsRestoredOnRestart() throws Exception { startGrids(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsNoSpaceLeftOnDeviceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsNoSpaceLeftOnDeviceTest.java new file mode 100644 index 0000000000000..d53a344b6018f --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsNoSpaceLeftOnDeviceTest.java @@ -0,0 +1,154 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence; + +import java.io.File; +import java.io.IOException; +import java.nio.file.OpenOption; +import java.util.concurrent.atomic.AtomicReference; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.failure.StopNodeFailureHandler; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; +import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; +import org.apache.ignite.internal.processors.cache.persistence.wal.reader.StandaloneGridKernalContext; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class IgnitePdsNoSpaceLeftOnDeviceTest extends GridCommonAbstractTest { + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + final DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration(); + + dataStorageConfiguration.getDefaultDataRegionConfiguration().setPersistenceEnabled(true).setMaxSize(1 << 24); + dataStorageConfiguration.setFileIOFactory(new FailingFileIOFactory()); + + cfg.setDataStorageConfiguration(dataStorageConfiguration); + + CacheConfiguration ccfg = new CacheConfiguration(); + + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + ccfg.setName(DEFAULT_CACHE_NAME); + ccfg.setBackups(1); + 
+        cfg.setCacheConfiguration(ccfg);
+
+        cfg.setFailureHandler(new StopNodeFailureHandler());
+
+        return cfg;
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void beforeTest() throws Exception {
+        cleanPersistenceDir();
+    }
+
+    /** {@inheritDoc} */
+    @Override protected void afterTest() throws Exception {
+        stopAllGrids();
+
+        cleanPersistenceDir();
+    }
+
+    /**
+     * Test to validate IGNITE-9120:
+     * metadata writer does not propagate error to failure handler.
+     *
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testWhileWritingBinaryMetadataFile() throws Exception {
+        final IgniteEx ignite0 = startGrid(0);
+
+        final IgniteEx ignite1 = startGrid(1);
+
+        FailingFileIOFactory.setUnluckyConsistentId(ignite1.localNode().consistentId().toString());
+
+        ignite0.cluster().active(true);
+
+        final IgniteCache cache = ignite0.cache(DEFAULT_CACHE_NAME);
+
+        for (int i = 0; i < 30; i++) {
+            try (Transaction tx = ignite0.transactions().txStart()) {
+                cache.put(1, ignite0.binary().builder("test").setField("field1", i).build());
+
+                cache.put(1 << i, ignite0.binary().builder("test").setField("field1", i).build());
+
+                tx.commit();
+            }
+            catch (Exception e) {
+                // Ignore errors caused by the emulated disk failure.
+            }
+        }
+
+        waitForTopology(1);
+    }
+
+    /**
+     * Generates the "No space left on device" error when writing a binary_meta file on the second node.
+     */
+    private static class FailingFileIOFactory implements FileIOFactory {
+        /** Serial version uid. */
+        private static final long serialVersionUID = 0L;
+
+        /** Delegate factory. */
+        private final FileIOFactory delegateFactory = new RandomAccessFileIOFactory();
+
+        /**
+         * Node ConsistentId for which the error will be generated.
+         */
+        private static final AtomicReference<String> unluckyConsistentId = new AtomicReference<>();
+
+        /** {@inheritDoc} */
+        @Override public FileIO create(File file, OpenOption...
modes) throws IOException { + if (unluckyConsistentId.get() != null + && file.getAbsolutePath().contains(unluckyConsistentId.get()) + && file.getAbsolutePath().contains(StandaloneGridKernalContext.BINARY_META_FOLDER)) + throw new IOException("No space left on device"); + + return delegateFactory.create(file, modes); + } + + /** + * Set node ConsistentId for which the error will be generated + * + * @param unluckyConsistentId Node ConsistentId. + */ + public static void setUnluckyConsistentId(String unluckyConsistentId) { + FailingFileIOFactory.unluckyConsistentId.set(unluckyConsistentId); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPageSizesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPageSizesTest.java index 353bc504d2489..885f6a3896970 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPageSizesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPageSizesTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsPageSizesTest extends GridCommonAbstractTest { /** Cache name. */ private final String cacheName = "cache"; @@ -76,6 +80,7 @@ public class IgnitePdsPageSizesTest extends GridCommonAbstractTest { /** * @throws Exception if failed. */ + @Test public void testPageSize_1k() throws Exception { checkPageSize(1024); } @@ -83,6 +88,7 @@ public void testPageSize_1k() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testPageSize_2k() throws Exception { checkPageSize(2 * 1024); } @@ -90,6 +96,7 @@ public void testPageSize_2k() throws Exception { /** * @throws Exception if failed. */ + @Test public void testPageSize_4k() throws Exception { checkPageSize(4 * 1024); } @@ -97,6 +104,7 @@ public void testPageSize_4k() throws Exception { /** * @throws Exception if failed. */ + @Test public void testPageSize_8k() throws Exception { checkPageSize(8 * 1024); } @@ -104,6 +112,7 @@ public void testPageSize_8k() throws Exception { /** * @throws Exception if failed. */ + @Test public void testPageSize_16k() throws Exception { checkPageSize(16 * 1024); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPartitionFilesDestroyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPartitionFilesDestroyTest.java index 3605700d739fc..a2a4b0bd9cc8d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPartitionFilesDestroyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPartitionFilesDestroyTest.java @@ -43,18 +43,20 @@ import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; /** * Test class to check that partition files after eviction are destroyed correctly on next checkpoint or crash recovery. 
*/ +@RunWith(JUnit4.class) public class IgnitePdsPartitionFilesDestroyTest extends GridCommonAbstractTest { - /** Cache name. */ - private static final String CACHE = "cache"; - /** Partitions count. */ private static final int PARTS_CNT = 32; @@ -81,7 +83,7 @@ public class IgnitePdsPartitionFilesDestroyTest extends GridCommonAbstractTest { cfg.setDataStorageConfiguration(dsCfg); - CacheConfiguration ccfg = new CacheConfiguration<>(CACHE) + CacheConfiguration ccfg = defaultCacheConfiguration() .setBackups(1) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) .setAffinity(new RendezvousAffinityFunction(false, PARTS_CNT)); @@ -93,6 +95,9 @@ public class IgnitePdsPartitionFilesDestroyTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + stopAllGrids(); cleanPersistenceDir(); @@ -119,7 +124,7 @@ public class IgnitePdsPartitionFilesDestroyTest extends GridCommonAbstractTest { private void loadData(IgniteEx ignite, int keysCnt, int multiplier) { log.info("Load data: keys=" + keysCnt); - try (IgniteDataStreamer streamer = ignite.dataStreamer(CACHE)) { + try (IgniteDataStreamer streamer = ignite.dataStreamer(DEFAULT_CACHE_NAME)) { streamer.allowOverwrite(true); for (int k = 0; k < keysCnt; k++) @@ -134,7 +139,7 @@ private void loadData(IgniteEx ignite, int keysCnt, int multiplier) { private void checkData(IgniteEx ignite, int keysCnt, int multiplier) { log.info("Check data: " + ignite.name() + ", keys=" + keysCnt); - IgniteCache cache = ignite.cache(CACHE); + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); for (int k = 0; k < keysCnt; k++) Assert.assertEquals("node = " + ignite.name() + ", key = " + k, (Integer) (k * multiplier), cache.get(k)); @@ -145,6 +150,7 @@ private void checkData(IgniteEx ignite, int keysCnt, int multiplier) { * * @throws Exception If failed. 
*/ + @Test public void testPartitionFileDestroyAfterCheckpoint() throws Exception { IgniteEx crd = (IgniteEx) startGrids(2); @@ -177,6 +183,7 @@ public void testPartitionFileDestroyAfterCheckpoint() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitionFileDestroyAndRecreate() throws Exception { IgniteEx crd = startGrid(0); @@ -224,6 +231,7 @@ public void testPartitionFileDestroyAndRecreate() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitionFileDestroyCrashRecovery1() throws Exception { IgniteEx crd = startGrid(0); @@ -277,6 +285,7 @@ public void testPartitionFileDestroyCrashRecovery1() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitionFileDestroyCrashRecovery2() throws Exception { IgniteEx crd = startGrid(0); @@ -338,6 +347,7 @@ public void testPartitionFileDestroyCrashRecovery2() throws Exception { * * @throws Exception If failed. */ + @Test public void testDestroyWhenPartitionsAreEmpty() throws Exception { IgniteEx crd = (IgniteEx) startGrids(2); @@ -346,7 +356,7 @@ public void testDestroyWhenPartitionsAreEmpty() throws Exception { forceCheckpoint(); // Evict arbitrary partition. 
- List parts = crd.cachex(CACHE).context().topology().localPartitions(); + List parts = crd.cachex(DEFAULT_CACHE_NAME).context().topology().localPartitions(); for (GridDhtLocalPartition part : parts) if (part.state() != GridDhtPartitionState.EVICTED) { part.rent(false).get(); @@ -375,12 +385,12 @@ public void testDestroyWhenPartitionsAreEmpty() throws Exception { private void checkPartitionFiles(IgniteEx ignite, boolean exists) throws IgniteCheckedException { int evicted = 0; - GridDhtPartitionTopology top = ignite.cachex(CACHE).context().topology(); + GridDhtPartitionTopology top = ignite.cachex(DEFAULT_CACHE_NAME).context().topology(); for (int p = 0; p < PARTS_CNT; p++) { GridDhtLocalPartition part = top.localPartition(p); - File partFile = partitionFile(ignite, CACHE, p); + File partFile = partitionFile(ignite, DEFAULT_CACHE_NAME, p); if (exists) { if (part != null && part.state() == GridDhtPartitionState.EVICTED) @@ -449,16 +459,6 @@ private static boolean isPartitionFile(File file) { return file.getName().contains("part") && file.getName().endsWith("bin"); } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - FileIO delegate = delegateFactory.create(file); - - if (isPartitionFile(file)) - return new FailingFileIO(delegate); - - return delegate; - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { FileIO delegate = delegateFactory.create(file, modes); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPartitionsStateRecoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPartitionsStateRecoveryTest.java new file mode 100644 index 0000000000000..9e4398a6ec51e --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsPartitionsStateRecoveryTest.java @@ -0,0 +1,177 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence; + +import java.util.Arrays; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheRebalanceMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class IgnitePdsPartitionsStateRecoveryTest extends GridCommonAbstractTest { + /** Partitions count. 
*/ + private static final int PARTS_CNT = 32; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setConsistentId(igniteInstanceName); + + DataStorageConfiguration dsCfg = new DataStorageConfiguration() + .setWalMode(WALMode.LOG_ONLY) + .setWalSegmentSize(16 * 1024 * 1024) + .setCheckpointFrequency(20 * 60 * 1000) + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setMaxSize(512 * 1024 * 1024) + .setPersistenceEnabled(true) + ); + + cfg.setDataStorageConfiguration(dsCfg); + + CacheConfiguration ccfg = defaultCacheConfiguration() + .setBackups(0) + .setRebalanceMode(CacheRebalanceMode.NONE) // Disable rebalance to prevent owning MOVING partitions. + .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) + .setAffinity(new RendezvousAffinityFunction(false, PARTS_CNT)); + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + System.setProperty(GridCacheDatabaseSharedManager.IGNITE_PDS_SKIP_CHECKPOINT_ON_NODE_STOP, "true"); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + System.clearProperty(GridCacheDatabaseSharedManager.IGNITE_PDS_SKIP_CHECKPOINT_ON_NODE_STOP); + } + + /** + * Test checks that partition state is recovered properly if last checkpoint was skipped and there are logical updates to apply. + * + * @throws Exception If failed. 
+ */ + @Test + public void testPartitionsStateConsistencyAfterRecovery() throws Exception { + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); + + for (int key = 0; key < 4096; key++) + cache.put(key, key); + + forceCheckpoint(); + + for (int key = 0; key < 4096; key++) { + int[] payload = new int[4096]; + Arrays.fill(payload, key); + + cache.put(key, payload); + } + + GridDhtPartitionTopology topology = ignite.cachex(DEFAULT_CACHE_NAME).context().topology(); + + Assert.assertFalse(topology.hasMovingPartitions()); + + log.info("Stopping grid..."); + + stopGrid(0); + + ignite = startGrid(0); + + awaitPartitionMapExchange(); + + topology = ignite.cachex(DEFAULT_CACHE_NAME).context().topology(); + + Assert.assertFalse("Node restored moving partitions after join to topology.", topology.hasMovingPartitions()); + } + + /** + * Test checks that partition state is recovered properly if only logical updates exist. + * + * @throws Exception If failed. 
+ */ + @Test + public void testPartitionsStateConsistencyAfterRecoveryNoCheckpoints() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10603"); + + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); + + forceCheckpoint(); + + for (int key = 0; key < 4096; key++) { + int[] payload = new int[4096]; + Arrays.fill(payload, key); + + cache.put(key, payload); + } + + GridDhtPartitionTopology topology = ignite.cachex(DEFAULT_CACHE_NAME).context().topology(); + + Assert.assertFalse(topology.hasMovingPartitions()); + + log.info("Stopping grid..."); + + stopGrid(0); + + ignite = startGrid(0); + + awaitPartitionMapExchange(); + + topology = ignite.cachex(DEFAULT_CACHE_NAME).context().topology(); + + Assert.assertFalse("Node restored moving partitions after join to topology.", topology.hasMovingPartitions()); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRecoveryAfterFileCorruptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRecoveryAfterFileCorruptionTest.java index af6c8b740c6e0..ef8f0efa3bc78 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRecoveryAfterFileCorruptionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRecoveryAfterFileCorruptionTest.java @@ -23,6 +23,7 @@ import java.util.Collection; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheRebalanceMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; @@ -48,18 +49,16 @@ import 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * This test generates WAL & Page Store with N pages, then rewrites pages with zeroes and tries to acquire all pages. */ +@RunWith(JUnit4.class) public class IgnitePdsRecoveryAfterFileCorruptionTest extends GridCommonAbstractTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Total pages. */ private static final int totalPages = 512; @@ -78,6 +77,8 @@ public class IgnitePdsRecoveryAfterFileCorruptionTest extends GridCommonAbstract ccfg.setRebalanceMode(CacheRebalanceMode.NONE); + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + cfg.setCacheConfiguration(ccfg); DataStorageConfiguration memCfg = new DataStorageConfiguration() @@ -92,11 +93,6 @@ public class IgnitePdsRecoveryAfterFileCorruptionTest extends GridCommonAbstract cfg.setDataStorageConfiguration(memCfg); - cfg.setDiscoverySpi( - new TcpDiscoverySpi() - .setIpFinder(ipFinder) - ); - return cfg; } @@ -117,6 +113,7 @@ public class IgnitePdsRecoveryAfterFileCorruptionTest extends GridCommonAbstract /** * @throws Exception if failed. 
*/ + @Test public void testPageRecoveryAfterFileCorruption() throws Exception { IgniteEx ig = startGrid(0); @@ -202,7 +199,7 @@ private void initPage(PageMemory mem, PageIO pageIO, FullPageId fullId) throws I final long pageAddr = mem.writeLock(fullId.groupId(), fullId.pageId(), page); try { - pageIO.initNewPage(pageAddr, fullId.pageId(), mem.pageSize()); + pageIO.initNewPage(pageAddr, fullId.pageId(), mem.realPageSize(fullId.groupId())); } finally { mem.writeUnlock(fullId.groupId(), fullId.pageId(), page, null, true); @@ -261,7 +258,7 @@ private void checkRestore(IgniteEx ig, FullPageId[] pages) throws IgniteCheckedE try { long pageAddr = mem.readLock(fullId.groupId(), fullId.pageId(), page); - for (int j = PageIO.COMMON_HEADER_END; j < mem.pageSize(); j += 4) + for (int j = PageIO.COMMON_HEADER_END; j < mem.realPageSize(fullId.groupId()); j += 4) assertEquals(j + (int)fullId.pageId(), PageUtils.getInt(pageAddr, j)); mem.readUnlock(fullId.groupId(), fullId.pageId(), page); @@ -305,7 +302,7 @@ private void generateWal( PageIO.setPageId(pageAddr, fullId.pageId()); try { - for (int j = PageIO.COMMON_HEADER_END; j < mem.pageSize(); j += 4) + for (int j = PageIO.COMMON_HEADER_END; j < mem.realPageSize(fullId.groupId()); j += 4) PageUtils.putInt(pageAddr, j, j + (int)fullId.pageId()); } finally { @@ -346,7 +343,7 @@ private void generateWal( cp += cpEnd - cpStart; tmpBuf.rewind(); - for (int j = PageIO.COMMON_HEADER_END; j < mem.pageSize(); j += 4) + for (int j = PageIO.COMMON_HEADER_END; j < mem.realPageSize(fullId.groupId()); j += 4) assertEquals(j + (int)fullId.pageId(), tmpBuf.getInt(j)); tmpBuf.rewind(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRemoveDuringRebalancingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRemoveDuringRebalancingTest.java index e51901d476b78..3970565361410 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRemoveDuringRebalancingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsRemoveDuringRebalancingTest.java @@ -33,21 +33,19 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsRemoveDuringRebalancingTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); @@ -71,11 +69,6 @@ public class IgnitePdsRemoveDuringRebalancingTest extends GridCommonAbstractTest cfg.setDataStorageConfiguration(memCfg); - cfg.setDiscoverySpi( - new TcpDiscoverySpi() - .setIpFinder(IP_FINDER) - ); - return cfg; } @@ -100,6 +93,7 @@ public class IgnitePdsRemoveDuringRebalancingTest extends GridCommonAbstractTest /** * @throws Exception if failed. 
*/ + @Test public void testRemovesDuringRebalancing() throws Exception { IgniteEx ig = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTaskCancelingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTaskCancelingTest.java index b36bac0123d6e..c2c6ced642c55 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTaskCancelingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTaskCancelingTest.java @@ -52,14 +52,14 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test handle of task canceling with PDS enabled. */ +@RunWith(JUnit4.class) public class IgnitePdsTaskCancelingTest extends GridCommonAbstractTest { /** Slow file IO enabled. */ private static final AtomicBoolean slowFileIoEnabled = new AtomicBoolean(false); @@ -91,6 +91,9 @@ public class IgnitePdsTaskCancelingTest extends GridCommonAbstractTest { cfg.setDataStorageConfiguration(getDataStorageConfiguration()); + + // Set the thread pool size according to NUM_TASKS. + cfg.setPublicThreadPoolSize(16); + return cfg; } @@ -114,6 +117,7 @@ private DataStorageConfiguration getDataStorageConfiguration() { /** * Checks that tasks canceling does not lead to node failure. 
*/ + @Test public void testFailNodesOnCanceledTask() throws Exception { cleanPersistenceDir(); @@ -185,6 +189,7 @@ public void testFailNodesOnCanceledTask() throws Exception { /** * Test FilePageStore with multiple interrupted threads. */ + @Test public void testFilePageStoreInterruptThreads() throws Exception { failure.set(false); @@ -287,11 +292,6 @@ private static class SlowIOFactory implements FileIOFactory { /** */ private final FileIOFactory delegateFactory = new RandomAccessFileIOFactory(); - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... openOption) throws IOException { final FileIO delegate = delegateFactory.create(file, openOption); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTxCacheRebalancingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTxCacheRebalancingTest.java index 3b324c37436fd..ebaf168d43985 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTxCacheRebalancingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePdsTxCacheRebalancingTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsTxCacheRebalancingTest extends IgnitePdsCacheRebalancingAbstractTest { /** {@inheritDoc} */ @Override protected CacheConfiguration cacheConfiguration(String cacheName) { @@ -52,6 +56,7 @@ public class IgnitePdsTxCacheRebalancingTest extends 
IgnitePdsCacheRebalancingAb /** * @throws Exception If failed. */ + @Test public void testTopologyChangesWithConstantLoadExplicitTx() throws Exception { explicitTx = true; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreCacheGroupsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreCacheGroupsTest.java index 96854137617fd..cfad1036cc6c7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreCacheGroupsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreCacheGroupsTest.java @@ -19,7 +19,9 @@ import java.io.Serializable; import java.util.Arrays; +import java.util.HashMap; import java.util.List; +import java.util.Map; import java.util.Objects; import java.util.Random; import java.util.concurrent.ThreadLocalRandom; @@ -37,14 +39,15 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteKernal; +import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; import org.apache.ignite.internal.processors.platform.cache.expiry.PlatformExpiryPolicy; import org.apache.ignite.internal.util.tostring.GridToStringInclude; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -54,10 +57,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgnitePersistentStoreCacheGroupsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String GROUP1 = "grp1"; @@ -95,8 +96,6 @@ public class IgnitePersistentStoreCacheGroupsTest extends GridCommonAbstractTest cfg.setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(false)); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (ccfgs != null) { cfg.setCacheConfiguration(ccfgs); @@ -123,6 +122,7 @@ protected int entriesCount() { /** * @throws Exception If failed. */ + @Test public void testClusterRestartStaticCaches1() throws Exception { clusterRestart(1, true); } @@ -130,6 +130,7 @@ public void testClusterRestartStaticCaches1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClusterRestartStaticCaches2() throws Exception { clusterRestart(3, true); } @@ -137,6 +138,7 @@ public void testClusterRestartStaticCaches2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClusterRestartDynamicCaches1() throws Exception { clusterRestart(1, false); } @@ -144,6 +146,7 @@ public void testClusterRestartDynamicCaches1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClusterRestartDynamicCaches2() throws Exception { clusterRestart(3, false); } @@ -152,6 +155,7 @@ public void testClusterRestartDynamicCaches2() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testClusterRestartCachesWithH2Indexes() throws Exception { CacheConfiguration[] ccfgs1 = new CacheConfiguration[5]; @@ -215,8 +219,9 @@ public void testClusterRestartCachesWithH2Indexes() throws Exception { /** * @throws Exception If failed. 
*/ - public void _testExpiryPolicy() throws Exception { - long ttl = 10000; + @Test + public void testExpiryPolicy() throws Exception { + long ttl = 10 * 60000; CacheConfiguration[] ccfgs1 = new CacheConfiguration[5]; @@ -232,20 +237,33 @@ public void _testExpiryPolicy() throws Exception { Ignite node = ignite(0); - node.active(true); + node.cluster().active(true); node.createCaches(Arrays.asList(ccfgs1)); ExpiryPolicy plc = new PlatformExpiryPolicy(ttl, -2, -2); + Map> expTimes = new HashMap<>(); + for (String cacheName : caches) { + Map cacheExpTimes = new HashMap<>(); + expTimes.put(cacheName, cacheExpTimes); + IgniteCache cache = node.cache(cacheName).withExpiryPolicy(plc); - for (int i = 0; i < entriesCount(); i++) - cache.put(i, cacheName + i); - } + for (int i = 0; i < entriesCount(); i++) { + Integer key = i; - long deadline = System.currentTimeMillis() + (long)(ttl * 1.2); + cache.put(key, cacheName + i); + + IgniteKernal primaryNode = (IgniteKernal)primaryCache(i, cacheName).unwrap(Ignite.class); + GridCacheEntryEx entry = primaryNode.internalCache(cacheName).entryEx(key); + entry.unswap(); + + assertTrue(entry.expireTime() > 0); + cacheExpTimes.put(key, entry.expireTime()); + } + } stopAllGrids(); @@ -253,30 +271,29 @@ public void _testExpiryPolicy() throws Exception { node = ignite(0); - node.active(true); + node.cluster().active(true); for (String cacheName : caches) { IgniteCache cache = node.cache(cacheName); - for (int i = 0; i < entriesCount(); i++) - assertEquals(cacheName + i, cache.get(i)); + for (int i = 0; i < entriesCount(); i++) { + Integer key = i; - assertEquals(entriesCount(), cache.size()); - } + assertEquals(cacheName + i, cache.get(i)); - // Wait for expiration. 
- Thread.sleep(Math.max(deadline - System.currentTimeMillis(), 0)); + IgniteKernal primaryNode = (IgniteKernal)primaryCache(i, cacheName).unwrap(Ignite.class); + GridCacheEntryEx entry = primaryNode.internalCache(cacheName).entryEx(key); + entry.unswap(); - for (String cacheName : caches) { - IgniteCache cache = node.cache(cacheName); - - assertEquals(0, cache.size()); + assertEquals(expTimes.get(cacheName).get(key), (Long)entry.expireTime()); + } } } /** * @throws Exception If failed. */ + @Test public void testCreateDropCache() throws Exception { ccfgs = new CacheConfiguration[]{cacheConfiguration(GROUP1, "c1", PARTITIONED, ATOMIC, 1) .setIndexedTypes(Integer.class, Person.class)}; @@ -293,6 +310,7 @@ public void testCreateDropCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateDropCache1() throws Exception { CacheConfiguration ccfg1 = cacheConfiguration(GROUP1, "c1", PARTITIONED, ATOMIC, 1); @@ -317,6 +335,7 @@ public void testCreateDropCache1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCreateDropCache2() throws Exception { CacheConfiguration ccfg1 = cacheConfiguration(GROUP1, "c1", PARTITIONED, ATOMIC, 1) .setIndexedTypes(Integer.class, Person.class); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreDataStructuresTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreDataStructuresTest.java index dc4e17e657b30..04de2369094d3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreDataStructuresTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgnitePersistentStoreDataStructuresTest.java @@ -22,6 +22,7 @@ import org.apache.ignite.IgniteAtomicLong; import org.apache.ignite.IgniteAtomicSequence; import org.apache.ignite.IgniteCountDownLatch; +import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLock; import org.apache.ignite.IgniteQueue; import org.apache.ignite.IgniteSemaphore; @@ -33,19 +34,17 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePersistentStoreDataStructuresTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static volatile boolean 
autoActivationEnabled = false; @@ -53,8 +52,6 @@ public class IgnitePersistentStoreDataStructuresTest extends GridCommonAbstractT @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - DataStorageConfiguration memCfg = new DataStorageConfiguration() .setDefaultDataRegionConfiguration( new DataRegionConfiguration().setMaxSize(200 * 1024 * 1024).setPersistenceEnabled(true)) @@ -91,6 +88,7 @@ public class IgnitePersistentStoreDataStructuresTest extends GridCommonAbstractT /** * @throws Exception If failed. */ + @Test public void testQueue() throws Exception { Ignite ignite = startGrids(4); @@ -116,6 +114,7 @@ public void testQueue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomic() throws Exception { Ignite ignite = startGrids(4); @@ -141,6 +140,7 @@ public void testAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSequence() throws Exception { Ignite ignite = startGrids(4); @@ -170,6 +170,7 @@ public void testSequence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSequenceAfterAutoactivation() throws Exception { final String seqName = "testSequence"; @@ -185,10 +186,19 @@ public void testSequenceAfterAutoactivation() throws Exception { final Ignite node = startGrids(2); - IgniteInternalFuture fut = GridTestUtils.runAsync(new Runnable() { - @Override public void run() { - // Should not hang. - node.atomicSequence(seqName, 0, false); + IgniteInternalFuture fut = GridTestUtils.runAsync(() -> { + while (true) { + try { + // Should not hang. + node.atomicSequence(seqName, 0, false); + + break; + } + catch (IgniteException e) { + // Can fail on not yet activated cluster. Retry until success. 
+ assertTrue(e.getMessage() + .contains("Can not perform the operation because the cluster is inactive")); + } } }); @@ -205,6 +215,7 @@ public void testSequenceAfterAutoactivation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSet() throws Exception { Ignite ignite = startGrids(4); @@ -236,6 +247,7 @@ public void testSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLockVolatility() throws Exception { Ignite ignite = startGrids(4); @@ -259,6 +271,7 @@ public void testLockVolatility() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSemaphoreVolatility() throws Exception { Ignite ignite = startGrids(4); @@ -282,6 +295,7 @@ public void testSemaphoreVolatility() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLatchVolatility() throws Exception { Ignite ignite = startGrids(4); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteRebalanceScheduleResendPartitionsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteRebalanceScheduleResendPartitionsTest.java index bb08a8dc5631f..d52bdd81c5efb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteRebalanceScheduleResendPartitionsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteRebalanceScheduleResendPartitionsTest.java @@ -27,6 +27,7 @@ import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; @@ -44,26 +45,24 @@ import org.apache.ignite.resources.LoggerResource; import 
org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.runAsync; /** * */ +@RunWith(JUnit4.class) public class IgniteRebalanceScheduleResendPartitionsTest extends GridCommonAbstractTest { - /** */ - public static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { IgniteConfiguration cfg = super.getConfiguration(name); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setConsistentId(name); cfg.setAutoActivationEnabled(false); @@ -79,8 +78,8 @@ public class IgniteRebalanceScheduleResendPartitionsTest extends GridCommonAbstr cfg.setCacheConfiguration( new CacheConfiguration(DEFAULT_CACHE_NAME) - .setAffinity( - new RendezvousAffinityFunction(false, 32)) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setAffinity(new RendezvousAffinityFunction(false, 32)) .setBackups(1) ); @@ -110,7 +109,11 @@ public class IgniteRebalanceScheduleResendPartitionsTest extends GridCommonAbstr * * @throws Exception If failed. 
*/ + @Test public void test() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + Ignite ig0 = startGrids(3); ig0.cluster().active(true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWalModeChangeDuringRebalancingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWalModeChangeDuringRebalancingSelfTest.java index 4dd5f51b28772..caf16cd11a9bd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWalModeChangeDuringRebalancingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWalModeChangeDuringRebalancingSelfTest.java @@ -23,12 +23,14 @@ import java.nio.MappedByteBuffer; import java.nio.file.OpenOption; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.atomic.AtomicReference; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; @@ -51,18 +53,30 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.testframework.GridTestUtils.waitForCondition; /** * */ +@RunWith(JUnit4.class) public class LocalWalModeChangeDuringRebalancingSelfTest extends GridCommonAbstractTest { /** */ private static boolean disableWalDuringRebalancing = true; + /** */ + private static boolean enablePendingTxTracker = false; + + /** */ + private static int dfltCacheBackupCnt = 0; + /** */ private static final AtomicReference supplyMessageLatch = new AtomicReference<>(); @@ -91,10 +105,13 @@ public class LocalWalModeChangeDuringRebalancingSelfTest extends GridCommonAbstr cfg.setCacheConfiguration( new CacheConfiguration(DEFAULT_CACHE_NAME) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) // Test checks internal state before and after rebalance, so it is configured to be triggered manually - .setRebalanceDelay(-1), + .setRebalanceDelay(-1) + .setBackups(dfltCacheBackupCnt), new CacheConfiguration(REPL_CACHE) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setRebalanceDelay(-1) .setCacheMode(CacheMode.REPLICATED) ); @@ -147,6 +164,9 @@ public class LocalWalModeChangeDuringRebalancingSelfTest extends GridCommonAbstr System.setProperty(IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING, Boolean.toString(disableWalDuringRebalancing)); + System.setProperty(IgniteSystemProperties.IGNITE_PENDING_TX_TRACKER_ENABLED, + Boolean.toString(enablePendingTxTracker)); + return cfg; } @@ -184,6 +204,17 @@ public class LocalWalModeChangeDuringRebalancingSelfTest extends GridCommonAbstr cleanPersistenceDir(); disableWalDuringRebalancing = true; + enablePendingTxTracker = false; + dfltCacheBackupCnt = 0; + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + super.afterTestsStopped(); + + System.clearProperty(IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING); + + System.clearProperty(IgniteSystemProperties.IGNITE_PENDING_TX_TRACKER_ENABLED); } /** @@ -196,14 +227,22 @@ protected int getKeysCount() { /** * @throws 
Exception If failed. */ + @Test public void testWalDisabledDuringRebalancing() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + doTestSimple(); } /** * @throws Exception If failed. */ + @Test public void testWalNotDisabledIfParameterSetToFalse() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + disableWalDuringRebalancing = false; doTestSimple(); @@ -286,7 +325,66 @@ else if (timestamp >= rebalanceStartedTimestamp && timestamp <= rebalanceFinishe /** * @throws Exception If failed. */ + @Test + public void testWalDisabledDuringRebalancingWithPendingTxTracker() throws Exception { + enablePendingTxTracker = true; + dfltCacheBackupCnt = 2; + + Ignite ignite = startGrids(3); + + ignite.cluster().active(true); + + ignite.cluster().setBaselineTopology(3); + + IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); + + stopGrid(2); + + awaitExchange((IgniteEx)ignite); + + doLoad(cache, 4, 10_000); + + IgniteEx newIgnite = startGrid(2); + + awaitExchange(newIgnite); + + CacheGroupContext grpCtx = newIgnite.cachex(DEFAULT_CACHE_NAME).context().group(); + + assertFalse(grpCtx.walEnabled()); + + long rebalanceStartedTs = System.currentTimeMillis(); + + for (Ignite g : G.allGrids()) + g.cache(DEFAULT_CACHE_NAME).rebalance(); + + awaitPartitionMapExchange(); + + assertTrue(grpCtx.walEnabled()); + + long rebalanceFinishedTs = System.currentTimeMillis(); + + CheckpointHistory cpHist = + ((GridCacheDatabaseSharedManager)newIgnite.context().cache().context().database()).checkpointHistory(); + + assertNotNull(cpHist); + + // Ensure there was a checkpoint on WAL re-activation. + assertEquals( + 1, + cpHist.checkpoints() + .stream() + .filter(ts -> rebalanceStartedTs <= ts && ts <= rebalanceFinishedTs) + .count()); + } + + /** + * @throws Exception If failed. 
+ */ + @Test public void testLocalAndGlobalWalStateInterdependence() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + Ignite ignite = startGrids(3); ignite.cluster().active(true); @@ -325,6 +423,7 @@ public void testLocalAndGlobalWalStateInterdependence() throws Exception { * * @throws Exception If failed. */ + @Test public void testWithExchangesMerge() throws Exception { final int nodeCnt = 4; final int keyCnt = getKeysCount(); @@ -378,14 +477,22 @@ public void testWithExchangesMerge() throws Exception { /** * @throws Exception If failed. */ + @Test public void testParallelExchangeDuringRebalance() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + doTestParallelExchange(supplyMessageLatch); } /** * @throws Exception If failed. */ + @Test public void testParallelExchangeDuringCheckpoint() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + doTestParallelExchange(fileIOLatch); } @@ -440,7 +547,11 @@ private void doTestParallelExchange(AtomicReference latchRef) th /** * @throws Exception If failed. */ + @Test public void testDataClearedAfterRestartWithDisabledWal() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + Ignite ignite = startGrid(0); ignite.cluster().active(true); @@ -481,6 +592,7 @@ public void testDataClearedAfterRestartWithDisabledWal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWalNotDisabledAfterShrinkingBaselineTopology() throws Exception { Ignite ignite = startGrids(4); @@ -531,6 +643,34 @@ private void awaitExchange(IgniteEx ig) throws IgniteCheckedException { ig.context().cache().context().exchange().lastTopologyFuture().get(); } + /** + * Puts random values to the cache in multiple threads until the given time interval expires. 
+ * + * @param cache Cache to modify. + * @param threadCnt Number of threads to be used. + * @param duration Time interval in milliseconds. + * @throws Exception When something goes wrong. + */ + private void doLoad(IgniteCache cache, int threadCnt, long duration) throws Exception { + GridTestUtils.runMultiThreaded(() -> { + long stopTs = U.currentTimeMillis() + duration; + + int keysCnt = getKeysCount(); + + ThreadLocalRandom rnd = ThreadLocalRandom.current(); + + do { + try { + cache.put(rnd.nextInt(keysCnt), rnd.nextInt()); + } + catch (Exception ex) { + MvccFeatureChecker.assertMvccWriteConflict(ex); + } + } + while (U.currentTimeMillis() < stopTs); + }, threadCnt, "load-cache"); + } + /** * */ @@ -545,11 +685,6 @@ private static class TestFileIOFactory implements FileIOFactory { this.delegate = delegate; } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return new TestFileIO(delegate.create(file)); - } - /** {@inheritDoc} */ - @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { return new TestFileIO(delegate.create(file, modes)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest.java similarity index 81% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest.java rename to modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest.java index 4f2817f29109c..ff736373f4d6e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest.java @@ -20,6 +20,7 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; @@ -33,10 +34,11 @@ import org.apache.ignite.internal.processors.cache.persistence.wal.reader.IgniteWalIteratorFactory; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; 
+import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.lang.String.valueOf; import static org.apache.ignite.IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING; @@ -47,13 +49,13 @@ /** * */ -public class LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest extends GridCommonAbstractTest { - +@RunWith(JUnit4.class) +public class LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest extends GridCommonAbstractTest { /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + private final int NODES = 3; /** */ - private final int NODES = 3; + private CacheAtomicityMode atomicityMode; /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { @@ -61,8 +63,6 @@ public class LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest extends Grid cfg.setConsistentId(name); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration( new DataStorageConfiguration() .setWalPath(walPath(name)) @@ -76,8 +76,8 @@ public class LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest extends Grid cfg.setCacheConfiguration( new CacheConfiguration(DEFAULT_CACHE_NAME) - .setAffinity( - new RendezvousAffinityFunction(false, 3)) + .setAtomicityMode(atomicityMode) + .setAffinity(new RendezvousAffinityFunction(false, 3)) ); return cfg; @@ -97,15 +97,48 @@ public class LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest extends Grid super.afterTest(); System.clearProperty(IGNITE_DISABLE_WAL_DURING_REBALANCING); + + stopAllGrids(); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10652") + @Test + public void testAtomic() throws Exception { + atomicityMode = CacheAtomicityMode.ATOMIC; + + check(); } /** * @throws Exception If failed. 
*/ - public void test() throws Exception { - Ignite ig = startGrids(NODES); + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10652") + @Test + public void testTx() throws Exception { + atomicityMode = CacheAtomicityMode.TRANSACTIONAL; - ig.cluster().active(true); + check(); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10421") + @Test + public void testMvcc() throws Exception { + atomicityMode = CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + + check(); + } + + /** + * @throws Exception If failed. + */ + public void check() throws Exception { + Ignite ig = startGridsMultiThreaded(NODES); int entries = 100_000; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/MemoryPolicyInitializationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/MemoryPolicyInitializationTest.java index fac021ef9b84e..82def5c4f072e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/MemoryPolicyInitializationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/MemoryPolicyInitializationTest.java @@ -27,12 +27,16 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.configuration.MemoryConfiguration.DFLT_MEM_PLC_DEFAULT_NAME; /** * */ +@RunWith(JUnit4.class) public class MemoryPolicyInitializationTest extends GridCommonAbstractTest { /** */ private static final String CUSTOM_NON_DEFAULT_MEM_PLC_NAME = "custom_mem_plc"; @@ -68,6 +72,7 @@ public class MemoryPolicyInitializationTest extends GridCommonAbstractTest { /** * Verifies that expected memory policies are allocated when used 
doesn't provide any MemoryPolicyConfiguration. */ + @Test public void testNoConfigProvided() throws Exception { memCfg = null; @@ -84,6 +89,7 @@ public void testNoConfigProvided() throws Exception { * Verifies that expected memory policies are allocated when used provides MemoryPolicyConfiguration * with non-default custom MemoryPolicy. */ + @Test public void testCustomConfigNoDefault() throws Exception { prepareCustomNoDefaultConfig(); @@ -103,6 +109,7 @@ public void testCustomConfigNoDefault() throws Exception { * User is allowed to configure memory policy with 'default' name, * in that case Ignite instance will use this user-defined memory policy as a default one. */ + @Test public void testCustomConfigOverridesDefault() throws Exception { prepareCustomConfigWithOverridingDefault(); @@ -127,6 +134,7 @@ public void testCustomConfigOverridesDefault() throws Exception { * At the same time user still can create a memory policy with name 'default' * which although won't be used as default. */ + @Test public void testCustomConfigOverridesDefaultNameAndDeclaresDefault() throws Exception { prepareCustomConfigWithOverriddenDefaultName(); @@ -150,6 +158,7 @@ public void testCustomConfigOverridesDefaultNameAndDeclaresDefault() throws Exce * with specified default memory policy name and specified custom memory policy name * all started with correct memory policy. */ + @Test public void testCachesOnOverriddenMemoryPolicy() throws Exception { prepareCustomConfigWithOverridingDefaultAndCustom(); @@ -184,6 +193,7 @@ public void testCachesOnOverriddenMemoryPolicy() throws Exception { * with specified default memory policy name and specified custom memory policy name * all started with correct memory policy. 
*/ + @Test public void testCachesOnUserDefinedDefaultMemoryPolicy() throws Exception { prepareCustomConfigWithOverriddenDefaultName(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/PersistenceDirectoryWarningLoggingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/PersistenceDirectoryWarningLoggingTest.java index bb55c8200ee33..2de83c2380c12 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/PersistenceDirectoryWarningLoggingTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/PersistenceDirectoryWarningLoggingTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests that warning is logged when persistence store directory equals {@code System.getProperty("java.io.tmpdir")}. */ +@RunWith(JUnit4.class) public class PersistenceDirectoryWarningLoggingTest extends GridCommonAbstractTest { /** Warning message to test. */ private static final String WARN_MSG_PREFIX = "Persistence store directory is in the temp " + @@ -60,6 +64,7 @@ public class PersistenceDirectoryWarningLoggingTest extends GridCommonAbstractTe /** * @throws Exception If failed. */ + @Test public void testPdsDirWarningSuppressed() throws Exception { startGrid(); @@ -69,6 +74,7 @@ public void testPdsDirWarningSuppressed() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPdsDirWarningIsLogged() throws Exception { IgniteConfiguration cfg = getConfiguration("0"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/SingleNodePersistenceSslTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/SingleNodePersistenceSslTest.java new file mode 100644 index 0000000000000..118d57de9a45c --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/SingleNodePersistenceSslTest.java @@ -0,0 +1,73 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence; + +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.failure.StopNodeFailureHandler; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Checks that the cluster works correctly with persistence and SSL enabled. 
+ */ +public class SingleNodePersistenceSslTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override public IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + return super.getConfiguration(igniteInstanceName) + .setSslContextFactory(GridTestUtils.sslFactory()) + .setFailureHandler(new StopNodeFailureHandler()) + .setDataStorageConfiguration( + new DataStorageConfiguration().setDefaultDataRegionConfiguration( + new DataRegionConfiguration().setPersistenceEnabled(true) + ) + ); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * Checks that the cluster can be started and activated. + * + * @throws Exception If test failed. + */ + @Test + public void testActivate() throws Exception { + startGrids(2).cluster().active(true); + + assertTrue(grid(0).cluster().active()); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/ClientAffinityAssignmentWithBaselineTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/ClientAffinityAssignmentWithBaselineTest.java index 13a98e4a7ae8c..41853617f3346 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/ClientAffinityAssignmentWithBaselineTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/ClientAffinityAssignmentWithBaselineTest.java @@ -49,16 +49,21 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; /** * Checks that client affinity assignment cache is calculated correctly regardless of current baseline topology. */ +@RunWith(JUnit4.class) public class ClientAffinityAssignmentWithBaselineTest extends GridCommonAbstractTest { /** Nodes count. */ private static final int DEFAULT_NODES_COUNT = 5; @@ -156,6 +161,9 @@ public class ClientAffinityAssignmentWithBaselineTest extends GridCommonAbstract /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10261"); + stopAllGrids(); cleanPersistenceDir(); @@ -225,6 +233,7 @@ else if (REPLICATED_TX_CACHE_NAME.equals(cacheName)) { /** * */ + @Test public void testPartitionedAtomicCache() throws Exception { testChangingBaselineDown(PARTITIONED_ATOMIC_CACHE_NAME, false); } @@ -232,6 +241,7 @@ public void testPartitionedAtomicCache() throws Exception { /** * */ + @Test public void testPartitionedTxCache() throws Exception { testChangingBaselineDown(PARTITIONED_TX_CACHE_NAME, false); } @@ -239,6 +249,7 @@ public void testPartitionedTxCache() throws Exception { /** * Test that activation after client join won't break cache. 
*/ + @Test public void testLateActivation() throws Exception { testChangingBaselineDown(PARTITIONED_TX_CACHE_NAME, true); } @@ -246,6 +257,7 @@ public void testLateActivation() throws Exception { /** * */ + @Test public void testReplicatedAtomicCache() throws Exception { testChangingBaselineDown(REPLICATED_ATOMIC_CACHE_NAME, false); } @@ -253,6 +265,7 @@ public void testReplicatedAtomicCache() throws Exception { /** * */ + @Test public void testReplicatedTxCache() throws Exception { testChangingBaselineDown(REPLICATED_TX_CACHE_NAME, false); } @@ -328,6 +341,7 @@ private void testChangingBaselineDown(String cacheName, boolean lateActivation) /** * Tests that rejoin of baseline node with clear LFS under load won't break cache. */ + @Test public void testRejoinWithCleanLfs() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(DEFAULT_NODES_COUNT - 1); startGrid("flaky"); @@ -394,6 +408,7 @@ public void testRejoinWithCleanLfs() throws Exception { /** * Test that changing baseline down under cross-cache txs load won't break cache. */ + @Test public void testCrossCacheTxs() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(DEFAULT_NODES_COUNT); @@ -455,6 +470,7 @@ public void testCrossCacheTxs() throws Exception { /** * Tests that join of non-baseline node while long transactions are running won't break dynamically started cache. */ + @Test public void testDynamicCacheLongTransactionNodeStart() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(4); @@ -526,13 +542,14 @@ public void testDynamicCacheLongTransactionNodeStart() throws Exception { * Tests that if dynamic cache has no affinity nodes at the moment of start, * it will still work correctly when affinity nodes will appear. 
*/ + @Test public void testDynamicCacheStartNoAffinityNodes() throws Exception { fail("IGNITE-8652"); IgniteEx ig0 = startGrid(0); ig0.cluster().active(true); - + IgniteEx client = (IgniteEx)startGrid("client"); CacheConfiguration dynamicCacheCfg = new CacheConfiguration() @@ -544,7 +561,7 @@ public void testDynamicCacheStartNoAffinityNodes() throws Exception { .setNodeFilter(new ConsistentIdNodeFilter((Serializable)ig0.localNode().consistentId())); IgniteCache dynamicCache = client.getOrCreateCache(dynamicCacheCfg); - + for (int i = 1; i < 4; i++) startGrid(i); @@ -552,29 +569,29 @@ public void testDynamicCacheStartNoAffinityNodes() throws Exception { for (int i = 0; i < ENTRIES; i++) dynamicCache.put(i, "abacaba" + i); - + AtomicBoolean releaseTx = new AtomicBoolean(false); CountDownLatch allTxsDoneLatch = new CountDownLatch(10); - + for (int i = 0; i < 10; i++) { final int i0 = i; - + GridTestUtils.runAsync(new Runnable() { @Override public void run() { try (Transaction tx = client.transactions().txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { dynamicCache.put(i0, "txtxtxtx" + i0); - + while (!releaseTx.get()) LockSupport.parkNanos(1_000_000); - + tx.commit(); - + System.out.println("Tx #" + i0 + " committed"); } catch (Throwable t) { System.out.println("Tx #" + i0 + " failed"); - + t.printStackTrace(); } finally { @@ -583,7 +600,7 @@ public void testDynamicCacheStartNoAffinityNodes() throws Exception { } }); } - + GridTestUtils.runAsync(new Runnable() { @Override public void run() { try { @@ -608,6 +625,7 @@ public void testDynamicCacheStartNoAffinityNodes() throws Exception { /** * Tests that join of non-baseline node while long transactions are running won't break cache started on client join. 
*/ + @Test public void testClientJoinCacheLongTransactionNodeStart() throws Exception { IgniteEx ig0 = (IgniteEx)startGrids(4); @@ -784,7 +802,7 @@ private void startTxLoadThread( IgniteCache cache = ig.cache(cacheName).withAllowAtomicOpsInTx(); - boolean pessimistic = r.nextBoolean(); + boolean pessimistic = atomicityMode(cache) == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT || r.nextBoolean(); boolean rollback = r.nextBoolean(); @@ -860,7 +878,8 @@ private void startCrossCacheTxLoadThread( IgniteCache cache1 = ig.cache(cacheName1); IgniteCache cache2 = ig.cache(cacheName2); - boolean pessimistic = r.nextBoolean(); + boolean pessimistic = atomicityMode(cache1) == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT || + atomicityMode(cache2) == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT || r.nextBoolean(); boolean rollback = r.nextBoolean(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/IgniteAbsentEvictionNodeOutOfBaselineTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/IgniteAbsentEvictionNodeOutOfBaselineTest.java index cddd701beea46..c84f754085e67 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/IgniteAbsentEvictionNodeOutOfBaselineTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/baseline/IgniteAbsentEvictionNodeOutOfBaselineTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test absenting eviction for joined node if it is out of baseline. 
*/ +@RunWith(JUnit4.class) public class IgniteAbsentEvictionNodeOutOfBaselineTest extends GridCommonAbstractTest { /** */ private static final String TEST_CACHE_NAME = "test"; @@ -75,6 +79,7 @@ public class IgniteAbsentEvictionNodeOutOfBaselineTest extends GridCommonAbstrac /** * Removed partitions if node is out of baseline. */ + @Test public void testPartitionsRemovedIfJoiningNodeNotInBaseline() throws Exception { //given: start 3 nodes with data Ignite ignite0 = startGrids(3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/CheckpointBufferDeadlockTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/CheckpointBufferDeadlockTest.java index 3afafe691f462..de9eaf19a2f58 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/CheckpointBufferDeadlockTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/CheckpointBufferDeadlockTest.java @@ -52,25 +52,19 @@ import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl; import org.apache.ignite.internal.util.typedef.internal.CU; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger; - -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; +import org.junit.Test; +import org.junit.runner.RunWith; 
+import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CheckpointBufferDeadlockTest extends GridCommonAbstractTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Max size. */ private static final int MAX_SIZE = 500 * 1024 * 1024; @@ -102,8 +96,6 @@ public class CheckpointBufferDeadlockTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration( new DataStorageConfiguration() .setFileIOFactory(new SlowCheckpointFileIOFactory()) @@ -150,6 +142,7 @@ public class CheckpointBufferDeadlockTest extends GridCommonAbstractTest { /** * */ + @Test public void testFourCheckpointThreads() throws Exception { checkpointThreads = 4; @@ -159,6 +152,7 @@ public void testFourCheckpointThreads() throws Exception { /** * */ + @Test public void testOneCheckpointThread() throws Exception { checkpointThreads = 1; @@ -335,11 +329,6 @@ private static class SlowCheckpointFileIOFactory implements FileIOFactory { /** Delegate factory. */ private final FileIOFactory delegateFactory = new RandomAccessFileIOFactory(); - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
openOption) throws IOException { final FileIO delegate = delegateFactory.create(file, openOption); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/CheckpointFailingIoFactory.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/CheckpointFailingIoFactory.java new file mode 100644 index 0000000000000..2d48cd614b449 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/CheckpointFailingIoFactory.java @@ -0,0 +1,88 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db; + +import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; +import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; + +import java.io.File; +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.file.OpenOption; + +/** + * + */ +class CheckpointFailingIoFactory implements FileIOFactory { + /** */ + private volatile boolean fail; + + /** + * Will fail immediately. + */ + public CheckpointFailingIoFactory() { + this(true); + } + + /** + * @param failImmediately Fail immediately flag. + */ + public CheckpointFailingIoFactory(boolean failImmediately) { + fail = failImmediately; + } + + /** + * After this call all subsequent write calls will fail. + */ + public void startFailing() { + fail = true; + } + + /** {@inheritDoc} */ + @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { + FileIO delegate = new RandomAccessFileIOFactory().create(file, modes); + + if (file.getName().contains("part-")) + return new FileIODecorator(delegate) { + @Override public int write(ByteBuffer srcBuf) throws IOException { + if (fail) + throw new IOException("test"); + else + return super.write(srcBuf); + } + + @Override public int write(ByteBuffer srcBuf, long position) throws IOException { + if (fail) + throw new IOException("test"); + else + return super.write(srcBuf, position); + } + + @Override public int write(byte[] buf, int off, int len) throws IOException { + if (fail) + throw new IOException("test"); + else + return super.write(buf, off, len); + } + }; + + return delegate; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgniteLogicalRecoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgniteLogicalRecoveryTest.java new file mode 100644 index 0000000000000..6bfebfc9a9d29 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgniteLogicalRecoveryTest.java @@ -0,0 +1,673 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ThreadLocalRandom; +import java.util.function.Predicate; +import java.util.stream.Collectors; +import com.google.common.collect.Lists; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteIllegalStateException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cache.query.annotations.QuerySqlField; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.failure.FailureHandler; +import org.apache.ignite.failure.StopNodeFailureHandler; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheGroupIdMessage; +import org.apache.ignite.internal.processors.cache.GridCacheUtils; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage; +import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; +import 
org.apache.ignite.internal.util.future.GridCompoundFuture; +import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.Nullable; +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * A set of tests that check correctness of logical recovery performed during node start. + */ +@RunWith(JUnit4.class) +public class IgniteLogicalRecoveryTest extends GridCommonAbstractTest { + /** */ + private static final int[] EVTS_DISABLED = {}; + + /** Shared group name. */ + private static final String SHARED_GROUP_NAME = "group"; + + /** Dynamic cache prefix. */ + private static final String DYNAMIC_CACHE_PREFIX = "dynamic-cache-"; + + /** Cache prefix. */ + private static final String CACHE_PREFIX = "cache-"; + + /** Io factory. 
*/ + private FileIOFactory ioFactory; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setIncludeEventTypes(EVTS_DISABLED); + + cfg.setConsistentId(igniteInstanceName); + + cfg.setCacheConfiguration( + cacheConfiguration(CACHE_PREFIX + 0, CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC), + cacheConfiguration(CACHE_PREFIX + 1, CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL), + cacheConfiguration(CACHE_PREFIX + 2, CacheMode.REPLICATED, CacheAtomicityMode.ATOMIC), + cacheConfiguration(CACHE_PREFIX + 3, CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL), + cacheConfiguration(CACHE_PREFIX + 4, SHARED_GROUP_NAME, CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL), + cacheConfiguration(CACHE_PREFIX + 5, SHARED_GROUP_NAME, CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL) + ); + + DataStorageConfiguration dsCfg = new DataStorageConfiguration() + .setWalMode(WALMode.LOG_ONLY) + .setCheckpointFrequency(1024 * 1024 * 1024) // Disable automatic checkpoints. + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setName("dflt") + .setInitialSize(256 * 1024 * 1024) + .setMaxSize(256 * 1024 * 1024) + .setPersistenceEnabled(true) + ); + + cfg.setDataStorageConfiguration(dsCfg); + + if (ioFactory != null) + dsCfg.setFileIOFactory(ioFactory); + + TestRecordingCommunicationSpi spi = new TestRecordingCommunicationSpi(); + + spi.record(GridDhtPartitionDemandMessage.class); + + cfg.setCommunicationSpi(spi); + + return cfg; + } + + /** + * @param name Name. + * @param cacheMode Cache mode. + * @param atomicityMode Atomicity mode. + */ + private CacheConfiguration cacheConfiguration(String name, CacheMode cacheMode, CacheAtomicityMode atomicityMode) { + return cacheConfiguration(name, null, cacheMode, atomicityMode); + } + + /** + * @param name Name. + * @param groupName Group name. 
+ * @param cacheMode Cache mode. + * @param atomicityMode Atomicity mode. + */ + protected CacheConfiguration cacheConfiguration(String name, @Nullable String groupName, CacheMode cacheMode, CacheAtomicityMode atomicityMode) { + CacheConfiguration cfg = new CacheConfiguration<>(name) + .setGroupName(groupName) + .setCacheMode(cacheMode) + .setAtomicityMode(atomicityMode) + .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) + .setBackups(2) + .setAffinity(new RendezvousAffinityFunction(false, 32)); + + cfg.setIndexedTypes(Integer.class, Integer.class); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + System.setProperty(GridCacheDatabaseSharedManager.IGNITE_PDS_SKIP_CHECKPOINT_ON_NODE_STOP, "true"); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + System.clearProperty(GridCacheDatabaseSharedManager.IGNITE_PDS_SKIP_CHECKPOINT_ON_NODE_STOP); + } + + /** + * + */ + @Test + public void testRecoveryOnJoinToActiveCluster() throws Exception { + IgniteEx crd = (IgniteEx) startGridsMultiThreaded(3); + + crd.cluster().active(true); + + IgniteEx node = grid(2); + + AggregateCacheLoader cacheLoader = new AggregateCacheLoader(node); + + cacheLoader.loadByTime(5_000).get(); + + forceCheckpoint(); + + cacheLoader.loadByTime(5_000).get(); + + stopGrid(2, true); + + node = startGrid(2); + + awaitPartitionMapExchange(); + + cacheLoader.consistencyCheck(node); + + checkNoRebalanceAfterRecovery(); + + checkCacheContextsConsistencyAfterRecovery(); + } + + /** + * + */ + @Test + public void testRecoveryOnJoinToInactiveCluster() throws Exception { + IgniteEx crd = (IgniteEx) startGridsMultiThreaded(3); + + crd.cluster().active(true); + + IgniteEx node = grid(2); + + AggregateCacheLoader cacheLoader = new AggregateCacheLoader(node); + + cacheLoader.loadByTime(5_000).get(); + + 
forceCheckpoint(); + + cacheLoader.loadByTime(5_000).get(); + + stopGrid(2, true); + + crd.cluster().active(false); + + node = startGrid(2); + + crd.cluster().active(true); + + awaitPartitionMapExchange(); + + checkNoRebalanceAfterRecovery(); + + cacheLoader.consistencyCheck(node); + + checkCacheContextsConsistencyAfterRecovery(); + } + + /** + * + */ + @Test + public void testRecoveryOnDynamicallyStartedCaches() throws Exception { + List dynamicCaches = Lists.newArrayList( + cacheConfiguration(DYNAMIC_CACHE_PREFIX + 0, CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL), + cacheConfiguration(DYNAMIC_CACHE_PREFIX + 1, CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL), + cacheConfiguration(DYNAMIC_CACHE_PREFIX + 2, CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC), + cacheConfiguration(DYNAMIC_CACHE_PREFIX + 3, CacheMode.REPLICATED, CacheAtomicityMode.ATOMIC) + ); + + doTestWithDynamicCaches(dynamicCaches); + } + + /** + * + */ + @Test + public void testRecoveryWithMvccCaches() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10582"); + + List dynamicCaches = Lists.newArrayList( + cacheConfiguration(DYNAMIC_CACHE_PREFIX + 0, CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT), + cacheConfiguration(DYNAMIC_CACHE_PREFIX + 1, CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT) + ); + + doTestWithDynamicCaches(dynamicCaches); + } + + /** + * @param dynamicCaches Dynamic caches. 
+ */ + private void doTestWithDynamicCaches(List dynamicCaches) throws Exception { + IgniteEx crd = (IgniteEx) startGridsMultiThreaded(3); + + crd.cluster().active(true); + + IgniteEx node = grid(2); + + node.getOrCreateCaches(dynamicCaches); + + AggregateCacheLoader cacheLoader = new AggregateCacheLoader(node); + + cacheLoader.loadByTime(5_000).get(); + + forceCheckpoint(); + + cacheLoader.loadByTime(5_000).get(); + + stopGrid(2, true); + + startGrid(2); + + awaitPartitionMapExchange(); + + checkNoRebalanceAfterRecovery(); + + for (int idx = 0; idx < 3; idx++) + cacheLoader.consistencyCheck(grid(idx)); + + checkCacheContextsConsistencyAfterRecovery(); + } + + /** + * + */ + @Test + public void testRecoveryOnJoinToDifferentBlt() throws Exception { + IgniteEx crd = (IgniteEx) startGridsMultiThreaded(3); + + crd.cluster().active(true); + + IgniteEx node = grid(2); + + AggregateCacheLoader cacheLoader = new AggregateCacheLoader(node); + + cacheLoader.loadByTime(5_000).get(); + + forceCheckpoint(); + + cacheLoader.loadByTime(5_000).get(); + + stopGrid(2, true); + + resetBaselineTopology(); + + startGrid(2); + + resetBaselineTopology(); + + awaitPartitionMapExchange(); + + for (int idx = 0; idx < 3; idx++) + cacheLoader.consistencyCheck(grid(idx)); + + checkCacheContextsConsistencyAfterRecovery(); + } + + /** + * + */ + @Test + public void testRecoveryOnCrashDuringCheckpointOnNodeStart() throws Exception { + IgniteEx crd = (IgniteEx) startGridsMultiThreaded(3, false); + + crd.cluster().active(true); + + IgniteEx node = grid(2); + + AggregateCacheLoader cacheLoader = new AggregateCacheLoader(node); + + cacheLoader.loadByTime(5_000).get(); + + forceCheckpoint(); + + cacheLoader.loadByTime(5_000).get(); + + stopGrid(2, false); + + ioFactory = new CheckpointFailingIoFactory(); + + IgniteInternalFuture startNodeFut = GridTestUtils.runAsync(() -> startGrid(2)); + + try { + startNodeFut.get(); + } + catch (Exception expected) { } + + // Wait until the node leaves the cluster.
+ GridTestUtils.waitForCondition(() -> { + try { + grid(2); + } + catch (IgniteIllegalStateException e) { + return true; + } + + return false; + }, getTestTimeout()); + + ioFactory = null; + + // Start the node again and check recovery. + startGrid(2); + + awaitPartitionMapExchange(); + + checkNoRebalanceAfterRecovery(); + + for (int idx = 0; idx < 3; idx++) + cacheLoader.consistencyCheck(grid(idx)); + } + + /** + * Checks that cache contexts have consistent parameters after recovery has finished and nodes have joined the topology. + */ + private void checkCacheContextsConsistencyAfterRecovery() throws Exception { + IgniteEx crd = grid(0); + + Collection cacheNames = crd.cacheNames(); + + for (String cacheName : cacheNames) { + for (int nodeIdx = 1; nodeIdx < 3; nodeIdx++) { + IgniteEx node = grid(nodeIdx); + + GridCacheContext one = cacheContext(crd, cacheName); + GridCacheContext other = cacheContext(node, cacheName); + + checkCacheContextsConsistency(one, other); + } + } + } + + /** + * @return Cache context with the given name from the node. + */ + private GridCacheContext cacheContext(IgniteEx node, String cacheName) { + return node.cachex(cacheName).context(); + } + + /** + * Checks that cluster-wide parameters are consistent between two caches. + * + * @param one Cache context. + * @param other Cache context.
+ */ + private void checkCacheContextsConsistency(GridCacheContext one, GridCacheContext other) { + Assert.assertEquals(one.statisticsEnabled(), other.statisticsEnabled()); + Assert.assertEquals(one.dynamicDeploymentId(), other.dynamicDeploymentId()); + Assert.assertEquals(one.keepBinary(), other.keepBinary()); + Assert.assertEquals(one.updatesAllowed(), other.updatesAllowed()); + Assert.assertEquals(one.group().receivedFrom(), other.group().receivedFrom()); + } + + /** {@inheritDoc} */ + @Override protected FailureHandler getFailureHandler(String igniteInstanceName) { + return new StopNodeFailureHandler(); + } + + /** {@inheritDoc} */ + @Override protected long getTestTimeout() { + return 120 * 1000; + } + + /** + * Checks that there was no rebalance for any cache (excluding the system cache). + */ + private void checkNoRebalanceAfterRecovery() { + int sysCacheGroupId = CU.cacheId(GridCacheUtils.UTILITY_CACHE_NAME); + + List nodes = G.allGrids(); + + for (final Ignite node : nodes) { + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(node); + + Set mvccCaches = ((IgniteEx) node).context().cache().cacheGroups().stream() + .flatMap(group -> group.caches().stream()) + .filter(cache -> cache.config().getAtomicityMode() == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT) + .map(GridCacheContext::groupId) + .collect(Collectors.toSet()); + + List rebalancedGroups = spi.recordedMessages(true).stream() + .map(msg -> (GridDhtPartitionDemandMessage) msg) + .map(GridCacheGroupIdMessage::groupId) + .filter(grpId -> grpId != sysCacheGroupId) + // TODO: Remove the following filter when failover for MVCC is fixed. + .filter(grpId -> !mvccCaches.contains(grpId)) + .distinct() + .collect(Collectors.toList()); + + Assert.assertTrue("There was an unexpected rebalance for some groups" + + " [node=" + node.name() + ", groups=" + rebalancedGroups + ']', rebalancedGroups.isEmpty()); + } + } + + /** + * + */ + private static class AggregateCacheLoader { + /** Ignite.
*/ + final IgniteEx ignite; + + /** Cache loaders. */ + final List cacheLoaders; + + /** + * @param ignite Ignite. + */ + public AggregateCacheLoader(IgniteEx ignite) { + this.ignite = ignite; + + List cacheLoaders = new ArrayList<>(); + + for (String cacheName : ignite.cacheNames()) + cacheLoaders.add(new CacheLoader(ignite, cacheName)); + + this.cacheLoaders = cacheLoaders; + } + + /** + * @param timeMillis Loading time in milliseconds. + */ + public IgniteInternalFuture loadByTime(int timeMillis) { + GridCompoundFuture loadFut = new GridCompoundFuture(); + + for (CacheLoader cacheLoader : cacheLoaders) { + long endTime = U.currentTimeMillis() + timeMillis; + + cacheLoader.stopPredicate = it -> U.currentTimeMillis() >= endTime; + + loadFut.add(GridTestUtils.runAsync(cacheLoader)); + } + + loadFut.markInitialized(); + + return loadFut; + } + + /** + * @param ignite Ignite node to check consistency from. + */ + public void consistencyCheck(IgniteEx ignite) { + for (CacheLoader cacheLoader : cacheLoaders) + cacheLoader.consistencyCheck(ignite); + } + } + + /** + * + */ + static class CacheLoader implements Runnable { + /** Keys space. */ + static final int KEYS_SPACE = 3096; + + /** Ignite. */ + final IgniteEx ignite; + + /** Stop predicate. */ + volatile Predicate stopPredicate; + + /** Cache name. */ + final String cacheName; + + /** Local cache. */ + final Map locCache = new ConcurrentHashMap<>(); + + /** + * @param ignite Ignite. + * @param cacheName Cache name. 
+ */ + public CacheLoader(IgniteEx ignite, String cacheName) { + this.ignite = ignite; + this.cacheName = cacheName; + } + + /** {@inheritDoc} */ + @Override public void run() { + final Predicate predicate = stopPredicate; + + while (!predicate.test(ignite)) { + ThreadLocalRandom rnd = ThreadLocalRandom.current(); + + int key = rnd.nextInt(KEYS_SPACE); + + boolean remove = rnd.nextInt(100) <= 20; + + try { + IgniteCache cache = ignite.getOrCreateCache(cacheName); + + if (remove) { + cache.remove(key); + + locCache.remove(key); + } + else { + int[] payload = new int[KEYS_SPACE]; + Arrays.fill(payload, key); + + TestValue val = new TestValue(key, payload); + + cache.put(key, val); + + locCache.put(key, val); + } + + // Throttle against GC. + U.sleep(1); + } + catch (Exception ignored) { } + } + } + + /** + * + */ + public void consistencyCheck(IgniteEx ignite) { + IgniteCache cache = ignite.getOrCreateCache(cacheName); + + for (int key = 0; key < KEYS_SPACE; key++) { + TestValue expectedVal = locCache.get(key); + TestValue actualVal = cache.get(key); + + Assert.assertEquals("Consistency check failed for: " + cache.getName() + ", key=" + key, + expectedVal, actualVal); + } + } + + /** {@inheritDoc} */ + @Override public boolean equals(Object o) { + if (this == o) + return true; + if (o == null || getClass() != o.getClass()) + return false; + + CacheLoader loader = (CacheLoader) o; + + return Objects.equals(cacheName, loader.cacheName); + } + + /** {@inheritDoc} */ + @Override public int hashCode() { + return Objects.hash(cacheName); + } + } + + /** + * Test payload with indexed field. + */ + static class TestValue { + /** Indexed field. */ + @QuerySqlField(index = true) + private final int indexedField; + + /** Payload. */ + private final int[] payload; + + /** + * @param indexedField Indexed field. + * @param payload Payload. 
+ */ + public TestValue(int indexedField, int[] payload) { + this.indexedField = indexedField; + this.payload = payload; + } + + /** {@inheritDoc} */ + @Override public boolean equals(Object o) { + if (this == o) + return true; + if (o == null || getClass() != o.getClass()) + return false; + + TestValue testValue = (TestValue) o; + + return indexedField == testValue.indexedField && + Arrays.equals(payload, testValue.payload); + } + + /** {@inheritDoc} */ + @Override public int hashCode() { + int result = Objects.hash(indexedField); + + result = 31 * result + Arrays.hashCode(payload); + + return result; + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsCacheRestoreTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsCacheRestoreTest.java index 2db3ca20d2b52..3d8d60c82597b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsCacheRestoreTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsCacheRestoreTest.java @@ -25,21 +25,18 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsCacheRestoreTest extends GridCommonAbstractTest { - /** */ - private 
static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Non-persistent data region name. */ private static final String NO_PERSISTENCE_REGION = "no-persistence-region"; @@ -50,8 +47,6 @@ public class IgnitePdsCacheRestoreTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (ccfgs != null) { cfg.setCacheConfiguration(ccfgs); @@ -93,6 +88,7 @@ public class IgnitePdsCacheRestoreTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testRestoreAndNewCache1() throws Exception { restoreAndNewCache(false); } @@ -100,6 +96,7 @@ public void testRestoreAndNewCache1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRestoreAndNewCache2() throws Exception { restoreAndNewCache(true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsCacheWalDisabledOnRebalancingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsCacheWalDisabledOnRebalancingTest.java new file mode 100644 index 0000000000000..0ecde09998b8b --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsCacheWalDisabledOnRebalancingTest.java @@ -0,0 +1,272 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.persistence.db; + +import java.io.File; +import java.nio.file.Files; +import java.nio.file.Paths; +import java.util.Collection; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteBiPredicate; +import org.apache.ignite.mxbean.CacheGroupMetricsMXBean; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; + +import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; + +/** + * Test scenarios with rebalancing, IGNITE_DISABLE_WAL_DURING_REBALANCING optimization and topology changes + * such as client nodes join/leave, server nodes from BLT leave/join, server nodes out of BLT join/leave. + */ +@RunWith(JUnit4.class) +public class IgnitePdsCacheWalDisabledOnRebalancingTest extends GridCommonAbstractTest { + /** Block message predicate to set to Communication SPI in node configuration. */ + private IgniteBiPredicate blockMessagePredicate; + + /** */ + private static final int CACHE1_PARTS_NUM = 8; + + /** */ + private static final int CACHE2_PARTS_NUM = 16; + + /** */ + private static final int CACHE3_PARTS_NUM = 32; + + /** */ + private static final int CACHE_SIZE = 2_000; + + /** */ + private static final String CACHE3_NAME = "cache3"; + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + cleanPersistenceDir(); + + System.setProperty(IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING, "true"); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + + System.clearProperty(IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg1 = new CacheConfiguration("cache1") + .setAtomicityMode(CacheAtomicityMode.ATOMIC) + .setCacheMode(CacheMode.REPLICATED) + .setAffinity(new RendezvousAffinityFunction(false, CACHE1_PARTS_NUM)); + + CacheConfiguration ccfg2 = new CacheConfiguration("cache2") + .setBackups(1) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setCacheMode(CacheMode.PARTITIONED) + .setAffinity(new 
RendezvousAffinityFunction(false, CACHE2_PARTS_NUM)); + + CacheConfiguration ccfg3 = new CacheConfiguration(CACHE3_NAME) + .setBackups(2) + .setAtomicityMode(CacheAtomicityMode.ATOMIC) + .setCacheMode(CacheMode.PARTITIONED) + .setAffinity(new RendezvousAffinityFunction(false, CACHE3_PARTS_NUM)); + + cfg.setCacheConfiguration(ccfg1, ccfg2, ccfg3); + + if ("client".equals(igniteInstanceName)) + cfg.setClientMode(true); + else { + DataStorageConfiguration dsCfg = new DataStorageConfiguration() + .setConcurrencyLevel(Runtime.getRuntime().availableProcessors() * 4) + .setWalMode(WALMode.LOG_ONLY) + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setPersistenceEnabled(true) + .setMaxSize(256 * 1024 * 1024)); + + cfg.setDataStorageConfiguration(dsCfg); + } + + TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi(); + commSpi.blockMessages(blockMessagePredicate); + + cfg.setCommunicationSpi(commSpi); + + return cfg; + } + + /** + * If a client node joins and leaves the topology during rebalancing, rebalancing finishes successfully + * and all partitions are owned as expected.
+ */ + @Test + public void testClientJoinsLeavesDuringRebalancing() throws Exception { + Ignite ig0 = startGrids(2); + + ig0.active(true); + + for (int i = 0; i < 3; i++) + fillCache(ig0.getOrCreateCache("cache" + i), CACHE_SIZE); + + String ig1Name = "node01-" + grid(1).localNode().consistentId(); + + stopGrid(1); + + cleanPersistenceFiles(ig1Name); + + int groupId = ((IgniteEx) ig0).cachex(CACHE3_NAME).context().groupId(); + + blockMessagePredicate = (node, msg) -> { + if (msg instanceof GridDhtPartitionDemandMessage) + return ((GridDhtPartitionDemandMessage) msg).groupId() == groupId; + + return false; + }; + + IgniteEx ig1 = startGrid(1); + + startGrid("client"); + + stopGrid("client"); + + CacheGroupMetricsMXBean mxBean = ig1.cachex(CACHE3_NAME).context().group().mxBean(); + + assertTrue("Unexpected moving partitions count: " + mxBean.getLocalNodeMovingPartitionsCount(), + mxBean.getLocalNodeMovingPartitionsCount() == CACHE3_PARTS_NUM); + + TestRecordingCommunicationSpi commSpi = (TestRecordingCommunicationSpi) ig1 + .configuration().getCommunicationSpi(); + + commSpi.stopBlock(); + + boolean waitResult = GridTestUtils.waitForCondition( + () -> mxBean.getLocalNodeMovingPartitionsCount() == 0, + 30_000); + + assertTrue("Failed to wait for owning all partitions, parts in moving state: " + + mxBean.getLocalNodeMovingPartitionsCount(), waitResult); + } + + /** + * If server nodes from BLT leave topology and then join again after additional keys were put to caches, + * rebalance starts. + * + * Test verifies that all moving partitions get owned after rebalance finishes. + * + * @throws Exception If failed. 
+ */ + @Test + public void testServerNodesFromBltLeavesAndJoinsDuringRebalancing() throws Exception { + Ignite ig0 = startGridsMultiThreaded(4); + + fillCache(ig0.cache(CACHE3_NAME), CACHE_SIZE); + + List<Integer> nonAffinityKeys1 = nearKeys(grid(1).cache(CACHE3_NAME), 100, CACHE_SIZE / 2); + List<Integer> nonAffinityKeys2 = nearKeys(grid(2).cache(CACHE3_NAME), 100, CACHE_SIZE / 2); + + stopGrid(1); + stopGrid(2); + + Set<Integer> nonAffinityKeysSet = new HashSet<>(); + + nonAffinityKeysSet.addAll(nonAffinityKeys1); + nonAffinityKeysSet.addAll(nonAffinityKeys2); + + fillCache(ig0.cache(CACHE3_NAME), nonAffinityKeysSet); + + int groupId = ((IgniteEx) ig0).cachex(CACHE3_NAME).context().groupId(); + + blockMessagePredicate = (node, msg) -> { + if (msg instanceof GridDhtPartitionDemandMessage) + return ((GridDhtPartitionDemandMessage) msg).groupId() == groupId; + + return false; + }; + + IgniteEx ig1 = startGrid(1); + + CacheGroupMetricsMXBean mxBean = ig1.cachex(CACHE3_NAME).context().group().mxBean(); + + TestRecordingCommunicationSpi commSpi = (TestRecordingCommunicationSpi) ig1 + .configuration().getCommunicationSpi(); + + startGrid(2); + + commSpi.stopBlock(); + + boolean allOwned = GridTestUtils.waitForCondition( + () -> mxBean.getLocalNodeMovingPartitionsCount() == 0, 30_000); + + assertTrue("Partitions were not owned, there are " + mxBean.getLocalNodeMovingPartitionsCount() + + " partitions in MOVING state", allOwned); + } + + /** */ + private void cleanPersistenceFiles(String igName) throws Exception { + String ig1DbPath = Paths.get(DFLT_STORE_DIR, igName).toString(); + + File igDbDir = U.resolveWorkDirectory(U.defaultWorkDirectory(), ig1DbPath, false); + + U.delete(igDbDir); + Files.createDirectory(igDbDir.toPath()); + + String ig1DbWalPath = Paths.get(DFLT_STORE_DIR, "wal", igName).toString(); + + U.delete(U.resolveWorkDirectory(U.defaultWorkDirectory(), ig1DbWalPath, false)); + } + + /** */ + private void fillCache(IgniteCache cache, int cacheSize) { + for (int i = 0; i < cacheSize;
i++) + cache.put(i, "value_" + i); + } + + /** */ + private void fillCache(IgniteCache cache, Collection<Integer> keys) { + for (Integer key : keys) + cache.put(key, "value_" + key); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsDataRegionMetricsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsDataRegionMetricsTest.java index 4a22a2bcd7f1d..27072a4d0f955 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsDataRegionMetricsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsDataRegionMetricsTest.java @@ -36,6 +36,7 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.pagemem.PageIdAllocator; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl; import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore; @@ -43,22 +44,25 @@ import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.internal.CU; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.nio.file.Files.newDirectoryStream; import static
org.apache.ignite.configuration.DataStorageConfiguration.DFLT_DATA_REG_DEFAULT_NAME; +import static org.apache.ignite.internal.processors.cache.GridCacheUtils.UTILITY_CACHE_NAME; +import static org.apache.ignite.internal.processors.cache.mvcc.txlog.TxLog.TX_LOG_CACHE_NAME; +import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.META_STORAGE_NAME; +import static org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage.METASTORAGE_CACHE_ID; +import static org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage.METASTORAGE_CACHE_NAME; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsDataRegionMetricsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final long INIT_REGION_SIZE = 10 << 20; @@ -81,8 +85,6 @@ public class IgnitePdsDataRegionMetricsTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - DataStorageConfiguration memCfg = new DataStorageConfiguration() .setDefaultDataRegionConfiguration( new DataRegionConfiguration() @@ -94,16 +96,22 @@ public class IgnitePdsDataRegionMetricsTest extends GridCommonAbstractTest { cfg.setDataStorageConfiguration(memCfg); - CacheConfiguration ccfg = new CacheConfiguration<>() - .setName(DEFAULT_CACHE_NAME) - .setCacheMode(CacheMode.PARTITIONED) - .setBackups(1); + CacheConfiguration ccfg = cacheConfiguration(); cfg.setCacheConfiguration(ccfg); return cfg; } + /** + * @return Ignite cache configuration. 
+ */ + protected CacheConfiguration cacheConfiguration() { + return (CacheConfiguration)new CacheConfiguration<>(DEFAULT_CACHE_NAME) + .setCacheMode(CacheMode.PARTITIONED) + .setBackups(1); + } + /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { super.beforeTest(); @@ -121,6 +129,7 @@ public class IgnitePdsDataRegionMetricsTest extends GridCommonAbstractTest { } /** */ + @Test public void testMemoryUsageSingleNode() throws Exception { DataRegionMetrics initMetrics = null; @@ -136,7 +145,7 @@ public void testMemoryUsageSingleNode() throws Exception { assertTrue(currMetrics.getTotalAllocatedPages() >= currMetrics.getPhysicalMemoryPages()); - final IgniteCache cache = node.getOrCreateCache(DEFAULT_CACHE_NAME); + final IgniteCache cache = node.cache(DEFAULT_CACHE_NAME); Map map = new HashMap<>(); @@ -148,9 +157,9 @@ public void testMemoryUsageSingleNode() throws Exception { cache.putAll(map); - forceCheckpoint(); + forceCheckpoint(node); - checkMetricsConsistency(node, DEFAULT_CACHE_NAME); + checkMetricsConsistency(node); } currMetrics = getDfltRegionMetrics(node); @@ -164,13 +173,14 @@ public void testMemoryUsageSingleNode() throws Exception { } /** */ + @Test public void testMemoryUsageMultipleNodes() throws Exception { IgniteEx node0 = startGrid(0); IgniteEx node1 = startGrid(1); node0.cluster().active(true); - final IgniteCache cache = node0.getOrCreateCache(DEFAULT_CACHE_NAME); + final IgniteCache cache = node0.cache(DEFAULT_CACHE_NAME); Map map = new HashMap<>(); @@ -183,8 +193,8 @@ public void testMemoryUsageMultipleNodes() throws Exception { forceCheckpoint(); - checkMetricsConsistency(node0, DEFAULT_CACHE_NAME); - checkMetricsConsistency(node1, DEFAULT_CACHE_NAME); + checkMetricsConsistency(node0); + checkMetricsConsistency(node1); IgniteEx node2 = startGrid(2); @@ -194,9 +204,9 @@ public void testMemoryUsageMultipleNodes() throws Exception { forceCheckpoint(); - checkMetricsConsistency(node0, DEFAULT_CACHE_NAME); - 
checkMetricsConsistency(node1, DEFAULT_CACHE_NAME); - checkMetricsConsistency(node2, DEFAULT_CACHE_NAME); + checkMetricsConsistency(node0); + checkMetricsConsistency(node1); + checkMetricsConsistency(node2); stopGrid(1, true); @@ -206,8 +216,8 @@ public void testMemoryUsageMultipleNodes() throws Exception { forceCheckpoint(); - checkMetricsConsistency(node0, DEFAULT_CACHE_NAME); - checkMetricsConsistency(node2, DEFAULT_CACHE_NAME); + checkMetricsConsistency(node0); + checkMetricsConsistency(node2); } /** @@ -215,6 +225,7 @@ public void testMemoryUsageMultipleNodes() throws Exception { * * @throws Exception If failed. */ + @Test public void testCheckpointBufferSize() throws Exception { IgniteEx ig = startGrid(0); @@ -232,6 +243,7 @@ public void testCheckpointBufferSize() throws Exception { * * @throws Exception If failed. */ + @Test public void testUsedCheckpointBuffer() throws Exception { IgniteEx ig = startGrid(0); @@ -292,15 +304,26 @@ private static DataRegionMetrics getDfltRegionMetrics(Ignite node) { throw new RuntimeException("No metrics found for default data region"); } + /** */ + private void checkMetricsConsistency(final IgniteEx node) throws Exception { + checkMetricsConsistency(node, DEFAULT_CACHE_NAME); + checkMetricsConsistency(node, UTILITY_CACHE_NAME); + checkMetricsConsistency(node, TX_LOG_CACHE_NAME); + checkMetricsConsistency(node, METASTORAGE_CACHE_NAME); + } + /** */ private void checkMetricsConsistency(final IgniteEx node, String cacheName) throws Exception { FilePageStoreManager pageStoreMgr = (FilePageStoreManager)node.context().cache().context().pageStore(); assert pageStoreMgr != null : "Persistence is not enabled"; - File cacheWorkDir = pageStoreMgr.cacheWorkDir( - node.getOrCreateCache(cacheName).getConfiguration(CacheConfiguration.class) - ); + boolean metaStore = METASTORAGE_CACHE_NAME.equals(cacheName); + boolean txLog = TX_LOG_CACHE_NAME.equals(cacheName); + + File cacheWorkDir = metaStore ? 
new File(pageStoreMgr.workDir(), META_STORAGE_NAME) : + txLog ? new File(pageStoreMgr.workDir(), TX_LOG_CACHE_NAME) : + pageStoreMgr.cacheWorkDir(node.cachex(cacheName).configuration()); long totalPersistenceSize = 0; @@ -310,20 +333,23 @@ private void checkMetricsConsistency(final IgniteEx node, String cacheName) thro for (Path path : files) { File file = path.toFile(); - FilePageStore store = (FilePageStore)pageStoreMgr.getStore(CU.cacheId(cacheName), partId(file)); + FilePageStore store = (FilePageStore)pageStoreMgr.getStore(metaStore ? + METASTORAGE_CACHE_ID : CU.cacheId(cacheName), partId(file)); totalPersistenceSize += path.toFile().length() - store.headerSize(); } } - long totalAllocatedPagesFromMetrics = node.context().cache().context() - .cacheContext(CU.cacheId(DEFAULT_CACHE_NAME)) - .group() - .dataRegion() - .memoryMetrics() - .getTotalAllocatedPages(); + GridCacheSharedContext cctx = node.context().cache().context(); + + String regionName = metaStore ? GridCacheDatabaseSharedManager.METASTORE_DATA_REGION_NAME : + txLog ? 
TX_LOG_CACHE_NAME : + cctx.cacheContext(CU.cacheId(cacheName)).group().dataRegion().config().getName(); + + long totalAllocatedPagesFromMetrics = cctx.database().memoryMetrics(regionName).getTotalAllocatedPages(); - assertEquals(totalPersistenceSize / pageStoreMgr.pageSize(), totalAllocatedPagesFromMetrics); + assertEquals("Number of allocated pages is different than in metrics for [node=" + node.name() + ", cache=" + cacheName + "]", + totalPersistenceSize / pageStoreMgr.pageSize(), totalAllocatedPagesFromMetrics); } /** diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsMultiNodePutGetRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsMultiNodePutGetRestartTest.java index d386c172ed05b..1af977c08b9b7 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsMultiNodePutGetRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsMultiNodePutGetRestartTest.java @@ -37,18 +37,16 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsMultiNodePutGetRestartTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int GRID_CNT = 3; @@ -78,8 +76,6 @@ public class 
IgnitePdsMultiNodePutGetRestartTest extends GridCommonAbstractTest cfg.setCacheConfiguration(ccfg); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setMarshaller(null); BinaryConfiguration bCfg = new BinaryConfiguration(); @@ -108,6 +104,7 @@ public class IgnitePdsMultiNodePutGetRestartTest extends GridCommonAbstractTest /** * @throws Exception if failed. */ + @Test public void testPutGetSimple() throws Exception { String home = U.getIgniteHome(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionDuringPartitionClearTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionDuringPartitionClearTest.java index 720b6045d1690..a83afd1018808 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionDuringPartitionClearTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionDuringPartitionClearTest.java @@ -34,11 +34,16 @@ import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsPageEvictionDuringPartitionClearTest extends GridCommonAbstractTest { /** */ public static final String CACHE_NAME = "cache"; @@ -85,7 +90,11 @@ public class IgnitePdsPageEvictionDuringPartitionClearTest extends GridCommonAbs /** * @throws Exception if failed. 
*/ + @Test public void testPageEvictionOnNodeStart() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + for (int r = 0; r < 3; r++) { cleanPersistenceDir(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionTest.java index 3c07a164f481e..8ae0d06f0fae6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPageEvictionTest.java @@ -33,18 +33,16 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsPageEvictionTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Test entry count. */ public static final int ENTRY_CNT = 1_000_000; @@ -73,11 +71,6 @@ public class IgnitePdsPageEvictionTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(ccfg); - cfg.setDiscoverySpi( - new TcpDiscoverySpi() - .setIpFinder(IP_FINDER) - ); - cfg.setMarshaller(null); return cfg; @@ -102,6 +95,7 @@ public class IgnitePdsPageEvictionTest extends GridCommonAbstractTest { /** * @throws Exception if failed. 
*/ + @Test public void testPageEvictionSql() throws Exception { IgniteEx ig = grid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPartitionPreloadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPartitionPreloadTest.java new file mode 100644 index 0000000000000..e9c1753b0ff21 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsPartitionPreloadTest.java @@ -0,0 +1,714 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db; + +import java.util.Collections; +import java.util.List; +import java.util.function.Predicate; +import java.util.function.Supplier; +import javax.cache.Cache; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.LOCAL; + +/** + * Test partition preload for various cache modes. + */ +@RunWith(JUnit4.class) +public class IgnitePdsPartitionPreloadTest extends GridCommonAbstractTest { + /** Test entry count. */ + public static final int ENTRY_CNT = 500; + + /** Grid count.
*/ + private static final int GRIDS_CNT = 3; + + /** */ + private static final String CLIENT_GRID_NAME = "client"; + + /** */ + public static final String DEFAULT_REGION = "default"; + + /** */ + private Supplier<CacheConfiguration> cfgFactory; + + /** */ + private static final String TEST_ATTR = "testId"; + + /** */ + private static final String NO_CACHE_NODE = "node0"; + + /** */ + private static final String PRIMARY_NODE = "node1"; + + /** */ + private static final String BACKUP_NODE = "node2"; + + /** */ + public static final String MEM = "mem"; + + /** */ + public static final int MB = 1024 * 1024; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + cfg.setClientMode(CLIENT_GRID_NAME.equals(gridName)); + + if (!cfg.isClientMode()) { + String val = "node" + getTestIgniteInstanceIndex(gridName); + cfg.setUserAttributes(Collections.singletonMap(TEST_ATTR, val)); + cfg.setConsistentId(val); + } + + DataStorageConfiguration memCfg = new DataStorageConfiguration() + .setDataRegionConfigurations(new DataRegionConfiguration().setName(MEM).setInitialSize(10 * MB)) + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration(). + setMetricsEnabled(true). + setMaxSize(50L * MB). + setPersistenceEnabled(true). + setName(DEFAULT_REGION)) + .setWalMode(WALMode.LOG_ONLY) + .setWalSegmentSize(16 * MB) + .setPageSize(1024) + .setMetricsEnabled(true); + + cfg.setDataStorageConfiguration(memCfg); + + cfg.setCacheConfiguration(cfgFactory.get()); + + return cfg; + } + + /** + * @param atomicityMode Atomicity mode.
+ */ + private CacheConfiguration cacheConfiguration(CacheAtomicityMode atomicityMode) { + CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); + + ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); + ccfg.setAffinity(new RendezvousAffinityFunction(false, 32)); + ccfg.setBackups(1); + ccfg.setNodeFilter(new TestIgnitePredicate()); + ccfg.setAtomicityMode(atomicityMode); + + return ccfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** */ + @Test + public void testLocalPreloadPartitionClient() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL).setDataRegionName(MEM); + + startGridsMultiThreaded(GRIDS_CNT); + + IgniteEx client = startGrid("client"); + + assertNotNull(client.cache(DEFAULT_CACHE_NAME)); + + assertFalse(client.cache(DEFAULT_CACHE_NAME).localPreloadPartition(0)); + assertFalse(grid(0).cache(DEFAULT_CACHE_NAME).localPreloadPartition(0)); + } + + /** */ + @Test + public void testLocalPreloadPartitionClientMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT).setDataRegionName(MEM); + + startGridsMultiThreaded(GRIDS_CNT); + + IgniteEx client = startGrid("client"); + + assertNotNull(client.cache(DEFAULT_CACHE_NAME)); + + assertFalse(client.cache(DEFAULT_CACHE_NAME).localPreloadPartition(0)); + assertFalse(grid(0).cache(DEFAULT_CACHE_NAME).localPreloadPartition(0)); + } + + /** */ + @Test + public void testLocalPreloadPartitionPrimary() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition( + () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.LOCAL); + } + + /** */ + @Test + public void
testLocalPreloadPartitionPrimaryMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition( + () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.LOCAL); + } + + /** */ + @Test + public void testLocalPreloadPartitionBackup() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition( + () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.LOCAL); + } + + /** */ + @Test + public void testLocalPreloadPartitionBackupMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition( + () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.LOCAL); + } + + /** */ + @Test + public void testPreloadPartitionInMemoryRemote() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL).setDataRegionName(MEM); + + startGridsMultiThreaded(GRIDS_CNT); + + IgniteEx client = startGrid("client"); + + assertNotNull(client.cache(DEFAULT_CACHE_NAME)); + + try { + client.cache(DEFAULT_CACHE_NAME).preloadPartition(0); + + fail("Exception is expected"); + } + catch (Exception e) { + log.error("Expected", e); + } + } + + /** */ + @Test + public void testPreloadPartitionInMemoryRemoteMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT).setDataRegionName(MEM); + + startGridsMultiThreaded(GRIDS_CNT); + + IgniteEx client = startGrid("client"); + + assertNotNull(client.cache(DEFAULT_CACHE_NAME)); + + try { + client.cache(DEFAULT_CACHE_NAME).preloadPartition(0); + + fail("Exception is expected"); + } + catch (Exception e) { + log.error("Expected", e); + } + } + + /** */ + @Test + public void testPreloadPartitionInMemoryLocal() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL).setDataRegionName(MEM); + + startGridsMultiThreaded(GRIDS_CNT); + + int key = 0; + + 
Ignite prim = primaryNode(key, DEFAULT_CACHE_NAME); + + int part = prim.affinity(DEFAULT_CACHE_NAME).partition(key); + + try { + prim.cache(DEFAULT_CACHE_NAME).preloadPartition(part); + + fail("Exception is expected"); + } + catch (Exception e) { + log.error("Expected", e); + } + } + + /** */ + @Test + public void testPreloadPartitionInMemoryLocalMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT).setDataRegionName(MEM); + + startGridsMultiThreaded(GRIDS_CNT); + + int key = 0; + + Ignite prim = primaryNode(key, DEFAULT_CACHE_NAME); + + int part = prim.affinity(DEFAULT_CACHE_NAME).partition(key); + + try { + prim.cache(DEFAULT_CACHE_NAME).preloadPartition(part); + + fail("Exception is expected"); + } + catch (Exception e) { + log.error("Expected", e); + } + } + + /** */ + @Test + public void testPreloadPartitionTransactionalClientSync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition(() -> { + try { + return startGrid(CLIENT_GRID_NAME); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalClientSyncMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition(() -> { + try { + return startGrid(CLIENT_GRID_NAME); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalClientAsync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition(() -> { + try { + return startGrid(CLIENT_GRID_NAME); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalClientAsyncMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition(() -> { + try { + 
return startGrid(CLIENT_GRID_NAME); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalNodeFilteredSync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition(() -> grid(0), PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalNodeFilteredSyncMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition(() -> grid(0), PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalNodeFilteredAsync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition(() -> grid(0), PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalNodeFilteredAsyncMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition(() -> grid(0), PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalPrimarySync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition( + () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalPrimarySyncMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition( + () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalPrimaryAsync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition( + () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalPrimaryAsyncMvcc() throws
Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition( + () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalBackupSync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition( + () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalBackupSyncMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition( + () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalBackupAsync() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL); + + preloadPartition( + () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionTransactionalBackupAsyncMvcc() throws Exception { + cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT); + + preloadPartition( + () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC); + } + + /** */ + @Test + public void testPreloadPartitionAtomicClientSync() throws Exception { + cfgFactory = () -> cacheConfiguration(ATOMIC); + + preloadPartition(() -> { + try { + return startGrid(CLIENT_GRID_NAME); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, PreloadMode.SYNC); + } + + /** */ + @Test + public void testPreloadPartitionAtomicClientAsync() throws Exception { + cfgFactory = () -> cacheConfiguration(ATOMIC); + + preloadPartition(() -> { + try { + return startGrid(CLIENT_GRID_NAME); + } + catch (Exception e) { + throw new RuntimeException(e); + } + }, PreloadMode.ASYNC); + } + 
+    /** */
+    @Test
+    public void testPreloadPartitionAtomicNodeFilteredSync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(ATOMIC);
+
+        preloadPartition(() -> grid(0), PreloadMode.SYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadPartitionAtomicNodeFilteredAsync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(ATOMIC);
+
+        preloadPartition(() -> grid(0), PreloadMode.ASYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadPartitionAtomicPrimarySync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(ATOMIC);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadPartitionAtomicPrimaryAsync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(ATOMIC);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadPartitionAtomicBackupSync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(ATOMIC);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadPartitionAtomicBackupAsync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(ATOMIC);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(BackupNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadLocalTransactionalSync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(TRANSACTIONAL).setCacheMode(LOCAL);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadLocalTransactionalSyncMvcc() throws Exception {
+        fail("https://issues.apache.org/jira/browse/IGNITE-9530");
+
+        cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT).setCacheMode(LOCAL);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.SYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadLocalTransactionalAsync() throws Exception {
+        cfgFactory = () -> cacheConfiguration(TRANSACTIONAL).setCacheMode(LOCAL);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC);
+    }
+
+    /** */
+    @Test
+    public void testPreloadLocalTransactionalAsyncMvcc() throws Exception {
+        fail("https://issues.apache.org/jira/browse/IGNITE-9530");
+
+        cfgFactory = () -> cacheConfiguration(TRANSACTIONAL_SNAPSHOT).setCacheMode(LOCAL);
+
+        preloadPartition(
+            () -> G.allGrids().stream().filter(PrimaryNodePredicate.INSTANCE).findFirst().get(), PreloadMode.ASYNC);
+    }
+
+    /**
+     * @param execNodeFactory Test node factory.
+     * @param preloadMode Preload mode.
+     */
+    private void preloadPartition(Supplier<Ignite> execNodeFactory, PreloadMode preloadMode) throws Exception {
+        Ignite crd = startGridsMultiThreaded(GRIDS_CNT);
+
+        Ignite testNode = grid(1);
+
+        Object consistentId = testNode.cluster().localNode().consistentId();
+
+        assertEquals(PRIMARY_NODE, testNode.cluster().localNode().consistentId());
+
+        boolean locCacheMode = testNode.cache(DEFAULT_CACHE_NAME).getConfiguration(CacheConfiguration.class).getCacheMode() == LOCAL;
+
+        Integer key = primaryKey(testNode.cache(DEFAULT_CACHE_NAME));
+
+        int preloadPart = crd.affinity(DEFAULT_CACHE_NAME).partition(key);
+
+        int cnt = 0;
+
+        try (IgniteDataStreamer<Integer, Integer> streamer = testNode.dataStreamer(DEFAULT_CACHE_NAME)) {
+            int k = 0;
+
+            while (cnt < ENTRY_CNT) {
+                if (testNode.affinity(DEFAULT_CACHE_NAME).partition(k) == preloadPart) {
+                    streamer.addData(k, k);
+
+                    cnt++;
+                }
+
+                k++;
+            }
+        }
+
+        forceCheckpoint();
+
+        stopAllGrids();
+
+        startGridsMultiThreaded(GRIDS_CNT);
+
+        testNode = G.allGrids().stream().
+            filter(ignite -> PRIMARY_NODE.equals(ignite.cluster().localNode().consistentId())).findFirst().get();
+
+        if (!locCacheMode)
+            assertEquals(testNode, primaryNode(key, DEFAULT_CACHE_NAME));
+
+        Ignite execNode = execNodeFactory.get();
+
+        switch (preloadMode) {
+            case SYNC:
+                execNode.cache(DEFAULT_CACHE_NAME).preloadPartition(preloadPart);
+
+                if (locCacheMode) {
+                    testNode = G.allGrids().stream().filter(ignite ->
+                        ignite.cluster().localNode().consistentId().equals(consistentId)).findFirst().get();
+                }
+
+                break;
+            case ASYNC:
+                execNode.cache(DEFAULT_CACHE_NAME).preloadPartitionAsync(preloadPart).get();
+
+                if (locCacheMode) {
+                    testNode = G.allGrids().stream().filter(ignite ->
+                        ignite.cluster().localNode().consistentId().equals(consistentId)).findFirst().get();
+                }
+
+                break;
+            case LOCAL:
+                assertTrue(execNode.cache(DEFAULT_CACHE_NAME).localPreloadPartition(preloadPart));
+
+                testNode = execNode; // For local preloading testNode == execNode
+
+                break;
+        }
+
+        long c0 = testNode.dataRegionMetrics(DEFAULT_REGION).getPagesRead();
+
+        // After partition preloading no pages should be read from store.
+        List<Cache.Entry<Object, Object>> list = U.arrayList(testNode.cache(DEFAULT_CACHE_NAME).localEntries(), 1000);
+
+        assertEquals(ENTRY_CNT, list.size());
+
+        assertEquals("Read pages count must be same", c0, testNode.dataRegionMetrics(DEFAULT_REGION).getPagesRead());
+    }
+
+    /** */
+    private static class TestIgnitePredicate implements IgnitePredicate<ClusterNode> {
+        /** {@inheritDoc} */
+        @Override public boolean apply(ClusterNode node) {
+            return !NO_CACHE_NODE.equals(node.attribute(TEST_ATTR));
+        }
+    }
+
+    /** */
+    private static class PrimaryNodePredicate implements Predicate<Ignite> {
+        /** */
+        private static final PrimaryNodePredicate INSTANCE = new PrimaryNodePredicate();
+
+        /** {@inheritDoc} */
+        @Override public boolean test(Ignite ignite) {
+            return PRIMARY_NODE.equals(ignite.cluster().localNode().consistentId());
+        }
+    }
+
+    /** */
+    private static class BackupNodePredicate implements Predicate<Ignite> {
+        /** */
+        private static final BackupNodePredicate INSTANCE = new BackupNodePredicate();
+
+        /** {@inheritDoc} */
+        @Override public boolean test(Ignite ignite) {
+            return BACKUP_NODE.equals(ignite.cluster().localNode().consistentId());
+        }
+    }
+
+    /** */
+    private enum PreloadMode {
+        /** Sync. */ SYNC,
+        /** Async. */ ASYNC,
+        /** Local. */ LOCAL;
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsRebalancingOnNotStableTopologyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsRebalancingOnNotStableTopologyTest.java
index 8b3c8757c57fc..0798bcbecc648 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsRebalancingOnNotStableTopologyTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsRebalancingOnNotStableTopologyTest.java
@@ -33,8 +33,12 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.WALMode;
 import org.apache.ignite.internal.util.typedef.internal.U;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.multijvm.IgniteProcessProxy;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * We start writing to unstable cluster.
@@ -42,6 +46,7 @@
  * There will be entries in WAL which belongs to evicted partitions.
  * We should ignore them (not throw exceptions). This point is tested.
  */
+@RunWith(JUnit4.class)
 public class IgnitePdsRebalancingOnNotStableTopologyTest extends GridCommonAbstractTest {
     /** Checkpoint frequency. */
     private static final long CHECKPOINT_FREQUENCY = 2_000_000;
@@ -49,13 +54,14 @@ public class IgnitePdsRebalancingOnNotStableTopologyTest extends GridCommonAbstr
     /** Cluster size. */
     private static final int CLUSTER_SIZE = 5;

-    /** */
-    private static final String CACHE_NAME = "cache1";
-
     /**
      * @throws Exception When fails.
      */
+    @Test
     public void test() throws Exception {
+        if (MvccFeatureChecker.forcedMvcc())
+            fail("https://issues.apache.org/jira/browse/IGNITE-10421");
+
         Ignite ex = startGrid(0);

         ex.active(true);
@@ -79,7 +85,7 @@ public void test() throws Exception {

                     startLatch.countDown();

-                    IgniteCache cache1 = ex1.cache(CACHE_NAME);
+                    IgniteCache cache1 = ex1.cache(DEFAULT_CACHE_NAME);

                     int key = keyCnt.get();
@@ -139,7 +145,7 @@ public void test() throws Exception {

         checkTopology(CLUSTER_SIZE);

-        IgniteCache cache1 = ex.cache(CACHE_NAME);
+        IgniteCache cache1 = ex.cache(DEFAULT_CACHE_NAME);

         assert keyCnt.get() > 0;
@@ -155,7 +161,7 @@

         cfg.setActiveOnStart(false);

-        CacheConfiguration ccfg = new CacheConfiguration<>(CACHE_NAME);
+        CacheConfiguration ccfg = defaultCacheConfiguration();

         ccfg.setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE);
         ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsReserveWalSegmentsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsReserveWalSegmentsTest.java
index 5885b7a5ed869..c24ea69647aa1 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsReserveWalSegmentsTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsReserveWalSegmentsTest.java
@@ -29,21 +29,18 @@
 import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager;
 import org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory;
 import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
-import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.IgniteSystemProperties.IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE;

 /**
  * Test correctness of truncating unused WAL segments.
  */
+@RunWith(JUnit4.class)
 public class IgnitePdsReserveWalSegmentsTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
         System.setProperty(IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE, "2");
@@ -52,8 +49,6 @@ public class IgnitePdsReserveWalSegmentsTest extends GridCommonAbstractTest {

         cfg.setConsistentId(gridName);

-        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER));
-
         CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME);

         ccfg.setAffinity(new RendezvousAffinityFunction(false, 32));
@@ -96,6 +91,7 @@
      *
      * @throws Exception if failed.
      */
+    @Test
     public void testWalManagerRangeReservation() throws Exception {
         IgniteEx ig0 = prepareGrid(4);
@@ -125,6 +121,7 @@ public void testWalManagerRangeReservation() throws Exception {
      *
      * @throws Exception if failed.
      */
+    @Test
     public void testWalDoesNotTruncatedWhenSegmentReserved() throws Exception {
         IgniteEx ig0 = prepareGrid(4);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsReserveWalSegmentsWithCompactionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsReserveWalSegmentsWithCompactionTest.java
new file mode 100644
index 0000000000000..bc34f2905b61c
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsReserveWalSegmentsWithCompactionTest.java
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db; + +import org.apache.ignite.configuration.IgniteConfiguration; + +/** + * + */ +public class IgnitePdsReserveWalSegmentsWithCompactionTest extends IgnitePdsReserveWalSegmentsTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + cfg.getDataStorageConfiguration().setWalCompactionEnabled(true); + + return cfg; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsTransactionsHangTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsTransactionsHangTest.java index 36afe32e1bda2..57022a8cded02 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsTransactionsHangTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsTransactionsHangTest.java @@ -40,12 +40,13 @@ import org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static 
org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; @@ -53,10 +54,8 @@ /** * Checks that transactions don't hang during checkpoint creation. */ +@RunWith(JUnit4.class) public class IgnitePdsTransactionsHangTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Page cache size. */ private static final long PAGE_CACHE_SIZE = 512L * 1024 * 1024; @@ -100,10 +99,6 @@ public class IgnitePdsTransactionsHangTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - cfg.setDiscoverySpi(discoSpi); - BinaryConfiguration binaryCfg = new BinaryConfiguration(); binaryCfg.setCompactFooter(false); cfg.setBinaryConfiguration(binaryCfg); @@ -164,6 +159,7 @@ private CacheConfiguration getCacheConfiguration() { * * @throws Exception If failed. 
* */ + @Test public void testTransactionsDontHang() throws Exception { try { final Ignite g = startGrids(2); @@ -193,10 +189,17 @@ public void testTransactionsDontHang() throws Exception { TestEntity entity = TestEntity.newTestEntity(locRandom); - try (Transaction tx = g.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { - cache.put(randomKey, entity); + while (true) { + try (Transaction tx = g.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.put(randomKey, entity); + + tx.commit(); - tx.commit(); + break; + } + catch (Exception e) { + MvccFeatureChecker.assertMvccWriteConflict(e); + } } operationCnt.increment(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWholeClusterRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWholeClusterRestartTest.java index 7db84a3d8c49b..c7375c6b4953f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWholeClusterRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWholeClusterRestartTest.java @@ -34,10 +34,14 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.spi.checkpoint.noop.NoopCheckpointSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsWholeClusterRestartTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 5; @@ -45,9 +49,6 @@ public class IgnitePdsWholeClusterRestartTest extends GridCommonAbstractTest { /** */ private static final int ENTRIES_COUNT = 1_000; - /** */ - public static final String CACHE_NAME = "cache1"; - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { 
         IgniteConfiguration cfg = super.getConfiguration(gridName);
@@ -59,9 +60,8 @@ public class IgnitePdsWholeClusterRestartTest extends GridCommonAbstractTest {

         cfg.setDataStorageConfiguration(memCfg);

-        CacheConfiguration ccfg1 = new CacheConfiguration();
+        CacheConfiguration ccfg1 = defaultCacheConfiguration();

-        ccfg1.setName(CACHE_NAME);
         ccfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
         ccfg1.setRebalanceMode(CacheRebalanceMode.SYNC);
         ccfg1.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
@@ -97,6 +97,7 @@ public class IgnitePdsWholeClusterRestartTest extends GridCommonAbstractTest {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testRestarts() throws Exception {
         startGrids(GRID_CNT);
@@ -104,7 +105,7 @@ public void testRestarts() throws Exception {

         awaitPartitionMapExchange();

-        try (IgniteDataStreamer ds = ignite(0).dataStreamer(CACHE_NAME)) {
+        try (IgniteDataStreamer ds = ignite(0).dataStreamer(DEFAULT_CACHE_NAME)) {
             for (int i = 0; i < ENTRIES_COUNT; i++)
                 ds.addData(i, i);
         }
@@ -131,9 +132,9 @@ public void testRestarts() throws Exception {
                 Ignite ig = ignite(g);

                 for (int k = 0; k < ENTRIES_COUNT; k++)
-                    assertEquals("Failed to read [g=" + g + ", part=" + ig.affinity(CACHE_NAME).partition(k) +
-                        ", nodes=" + ig.affinity(CACHE_NAME).mapKeyToPrimaryAndBackups(k) + ']',
-                        k, ig.cache(CACHE_NAME).get(k));
+                    assertEquals("Failed to read [g=" + g + ", part=" + ig.affinity(DEFAULT_CACHE_NAME).partition(k) +
+                        ", nodes=" + ig.affinity(DEFAULT_CACHE_NAME).mapKeyToPrimaryAndBackups(k) + ']',
+                        k, ig.cache(DEFAULT_CACHE_NAME).get(k));
             }
         }
         finally {
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWithTtlTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWithTtlTest.java
index bb371dce29bca..408a967b94acd 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWithTtlTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWithTtlTest.java
@@ -24,6 +24,7 @@
 import javax.cache.expiry.Duration;
 import javax.cache.expiry.ExpiryPolicy;
 import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.cache.CachePeekMode;
 import org.apache.ignite.cache.CacheRebalanceMode;
 import org.apache.ignite.cache.CacheWriteSynchronizationMode;
@@ -34,33 +35,46 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.configuration.WALMode;
 import org.apache.ignite.internal.IgniteEx;
-import org.apache.ignite.internal.IgniteInterruptedCheckedException;
+import org.apache.ignite.internal.processors.cache.GridCacheContext;
+import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
+import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager;
 import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
+import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
 import org.apache.ignite.internal.util.lang.GridAbsPredicate;
+import org.apache.ignite.internal.util.lang.GridCursor;
 import org.apache.ignite.internal.util.typedef.PA;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.apache.ignite.testframework.MvccFeatureChecker;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Test TTL worker with persistence enabled
  */
+@RunWith(JUnit4.class)
 public class IgnitePdsWithTtlTest extends GridCommonAbstractTest {
     /** */
     public static final String CACHE_NAME = "expirableCache";

     /** */
-    private static final int EXPIRATION_TIMEOUT = 10;
+    public static final String GROUP_NAME = "group1";
+
+    /** */
+    public static final int PART_SIZE = 32;

     /** */
-    public static final int ENTRIES = 100_000;
+    private static final int EXPIRATION_TIMEOUT = 10;

     /** */
-    private static final TcpDiscoveryVmIpFinder FINDER = new TcpDiscoveryVmIpFinder(true);
+    public static final int ENTRIES = 50_000;

     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
+        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION);
+
         super.beforeTest();

         cleanPersistenceDir();
@@ -80,36 +94,42 @@ public class IgnitePdsWithTtlTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         final IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-        disco.setIpFinder(FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
-        final CacheConfiguration ccfg = new CacheConfiguration();
-        ccfg.setName(CACHE_NAME);
-        ccfg.setAffinity(new RendezvousAffinityFunction(false, 32));
-        ccfg.setExpiryPolicyFactory(AccessedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, EXPIRATION_TIMEOUT)));
-        ccfg.setEagerTtl(true);
-        ccfg.setGroupName("group1");
-        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
-        ccfg.setRebalanceMode(CacheRebalanceMode.SYNC);
-
         cfg.setDataStorageConfiguration(
             new DataStorageConfiguration()
                 .setDefaultDataRegionConfiguration(
                     new DataRegionConfiguration()
-                        .setMaxSize(192L * 1024 * 1024)
+                        .setMaxSize(2L * 1024 * 1024 * 1024)
                         .setPersistenceEnabled(true)
                 ).setWalMode(WALMode.LOG_ONLY));

-        cfg.setCacheConfiguration(ccfg);
+        cfg.setCacheConfiguration(getCacheConfiguration(CACHE_NAME));

         return cfg;
     }

+    /**
+     * Returns a new cache configuration with the given name and {@code GROUP_NAME} group.
+     * @param name Cache name.
+     * @return Cache configuration.
+     */
+    private CacheConfiguration getCacheConfiguration(String name) {
+        CacheConfiguration ccfg = new CacheConfiguration();
+
+        ccfg.setName(name);
+        ccfg.setGroupName(GROUP_NAME);
+        ccfg.setAffinity(new RendezvousAffinityFunction(false, PART_SIZE));
+        ccfg.setExpiryPolicyFactory(AccessedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, EXPIRATION_TIMEOUT)));
+        ccfg.setEagerTtl(true);
+        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
+        ccfg.setRebalanceMode(CacheRebalanceMode.SYNC);
+
+        return ccfg;
+    }
+
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testTtlIsApplied() throws Exception {
         loadAndWaitForCleanup(false);
     }
@@ -117,6 +137,32 @@ public void testTtlIsApplied() throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
+    public void testTtlIsAppliedForMultipleCaches() throws Exception {
+        IgniteEx srv = startGrid(0);
+        srv.cluster().active(true);
+
+        int cacheCnt = 2;
+
+        // Create a new caches in the same group.
+        // It is important that initially created cache CACHE_NAME remains empty.
+        for (int i = 0; i < cacheCnt; ++i) {
+            String cacheName = CACHE_NAME + "-" + i;
+
+            srv.getOrCreateCache(getCacheConfiguration(cacheName));
+
+            fillCache(srv.cache(cacheName));
+        }
+
+        waitAndCheckExpired(srv, srv.cache(CACHE_NAME + "-" + (cacheCnt - 1)));
+
+        stopAllGrids();
+    }
+
+    /**
+     * @throws Exception if failed.
+     */
+    @Test
     public void testTtlIsAppliedAfterRestart() throws Exception {
         loadAndWaitForCleanup(true);
     }
@@ -131,6 +177,8 @@ private void loadAndWaitForCleanup(boolean restartGrid) throws Exception {
         fillCache(srv.cache(CACHE_NAME));

         if (restartGrid) {
+            srv.context().cache().context().database().wakeupForCheckpoint("test-checkpoint");
+
             stopGrid(0);

             srv = startGrid(0);
             srv.cluster().active(true);
@@ -138,9 +186,9 @@ private void loadAndWaitForCleanup(boolean restartGrid) throws Exception {

         final IgniteCache cache = srv.cache(CACHE_NAME);

-        pringStatistics((IgniteCacheProxy)cache, "After restart from LFS");
+        printStatistics((IgniteCacheProxy)cache, "After restart from LFS");

-        waitAndCheckExpired(cache);
+        waitAndCheckExpired(srv, cache);

         stopAllGrids();
     }
@@ -148,6 +196,7 @@ private void loadAndWaitForCleanup(boolean restartGrid) throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testRebalancingWithTtlExpirable() throws Exception {
         IgniteEx srv = startGrid(0);
         srv.cluster().active(true);
@@ -161,9 +210,9 @@ public void testRebalancingWithTtlExpirable() throws Exception {

         final IgniteCache cache = srv.cache(CACHE_NAME);

-        pringStatistics((IgniteCacheProxy)cache, "After rebalancing start");
+        printStatistics((IgniteCacheProxy)cache, "After rebalancing start");

-        waitAndCheckExpired(cache);
+        waitAndCheckExpired(srv, cache);

         stopAllGrids();
     }
@@ -171,6 +220,7 @@ public void testRebalancingWithTtlExpirable() throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testStartStopAfterRebalanceWithTtlExpirable() throws Exception {
         try {
             IgniteEx srv = startGrid(0);
@@ -219,28 +269,51 @@ protected void fillCache(IgniteCache cache) {
         for (int i = 0; i < ENTRIES; i++)
             cache.get(i); // touch entries

-        pringStatistics((IgniteCacheProxy)cache, "After cache puts");
+        printStatistics((IgniteCacheProxy)cache, "After cache puts");
     }

     /** */
     protected void waitAndCheckExpired(
-        final IgniteCache cache) throws IgniteInterruptedCheckedException {
-        GridTestUtils.waitForCondition(new PA() {
+        IgniteEx srv,
+        final IgniteCache cache
+    ) throws IgniteCheckedException {
+        boolean awaited = GridTestUtils.waitForCondition(new PA() {
             @Override public boolean apply() {
                 return cache.size() == 0;
             }
-        }, TimeUnit.SECONDS.toMillis(EXPIRATION_TIMEOUT + 1));
+        }, TimeUnit.SECONDS.toMillis(EXPIRATION_TIMEOUT + EXPIRATION_TIMEOUT / 2));
+
+        assertTrue("Cache is not empty. size=" + cache.size(), awaited);
+
+        printStatistics((IgniteCacheProxy)cache, "After timeout");
+
+        GridCacheSharedContext ctx = srv.context().cache().context();
+        GridCacheContext cctx = ctx.cacheContext(CU.cacheId(CACHE_NAME));

-        pringStatistics((IgniteCacheProxy)cache, "After timeout");
+        // Check partitions through internal API.
+ for (int partId = 0; partId < PART_SIZE; ++partId) { + GridDhtLocalPartition locPart = cctx.dht().topology().localPartition(partId); + + if (locPart == null) + continue; + + IgniteCacheOffheapManager.CacheDataStore dataStore = + ctx.cache().cacheGroup(CU.cacheId(GROUP_NAME)).offheap().dataStore(locPart); + + GridCursor cur = dataStore.cursor(); + + assertFalse(cur.next()); + assertEquals(0, locPart.fullSize()); + } for (int i = 0; i < ENTRIES; i++) assertNull(cache.get(i)); } /** */ - private void pringStatistics(IgniteCacheProxy cache, String msg) { + private void printStatistics(IgniteCacheProxy cache, String msg) { System.out.println(msg + " {{"); cache.context().printMemoryStats(); System.out.println("}} " + msg); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWithTtlTest2.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWithTtlTest2.java new file mode 100644 index 0000000000000..6a0e39deaa9e9 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgnitePdsWithTtlTest2.java @@ -0,0 +1,145 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.db; + +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import javax.cache.expiry.Duration; +import javax.cache.expiry.ModifiedExpiryPolicy; +import org.apache.ignite.Ignite; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.NoOpFailureHandler; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; + +@RunWith(JUnit4.class) +public class IgnitePdsWithTtlTest2 extends GridCommonAbstractTest { + /** */ + public static AtomicBoolean handleFired = new AtomicBoolean(false); + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION); + + super.beforeTest(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + // Protection in case the test failed to finish, e.g. due to an error.
+ stopAllGrids(); + + cleanPersistenceDir(); + } + + /** */ + public CacheConfiguration getCacheConfiguration(String name) { + CacheConfiguration ccfg = new CacheConfiguration(); + + ccfg.setName(name); + + ccfg.setAtomicityMode(ATOMIC); + + ccfg.setBackups(1); + + ccfg.setAffinity(new RendezvousAffinityFunction(false, 32768)); + + ccfg.setEagerTtl(true); + + ccfg.setExpiryPolicyFactory(ModifiedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 20))); + + return ccfg; + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + final IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setDataStorageConfiguration( + new DataStorageConfiguration() + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setMaxSize(100L * 1024 * 1024) + .setPersistenceEnabled(true) + )); + + cfg.setCacheConfiguration( + getCacheConfiguration("cache_1"), + getCacheConfiguration("cache_2"), + getCacheConfiguration("cache_3"), + getCacheConfiguration("cache_4"), + getCacheConfiguration("cache_5"), + getCacheConfiguration("cache_6"), + getCacheConfiguration("cache_7"), + getCacheConfiguration("cache_8"), + getCacheConfiguration("cache_9"), + getCacheConfiguration("cache_10"), + getCacheConfiguration("cache_11"), + getCacheConfiguration("cache_12"), + getCacheConfiguration("cache_13"), + getCacheConfiguration("cache_14"), + getCacheConfiguration("cache_15"), + getCacheConfiguration("cache_16"), + getCacheConfiguration("cache_17"), + getCacheConfiguration("cache_18"), + getCacheConfiguration("cache_19"), + getCacheConfiguration("cache_20") + ); + + cfg.setFailureHandler(new CustomStopNodeOrHaltFailureHandler()); + + return cfg; + } + + /** + * @throws Exception if failed.
+ */ + @Test + public void testTtlIsAppliedToManyCaches() throws Exception { + handleFired.set(false); + + startGrid(0); + + assertFalse(handleFired.get()); + } + + private class CustomStopNodeOrHaltFailureHandler extends NoOpFailureHandler { + /** {@inheritDoc} */ + @Override public boolean onFailure(Ignite ignite, FailureContext failureCtx) { + boolean res = super.handle(ignite, failureCtx); + + handleFired.set(true); + + return res; + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgniteSequentialNodeCrashRecoveryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgniteSequentialNodeCrashRecoveryTest.java new file mode 100644 index 0000000000000..b4ad8c853a69e --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IgniteSequentialNodeCrashRecoveryTest.java @@ -0,0 +1,339 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db; + +import java.io.File; +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.file.OpenOption; +import java.util.Collection; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.FailureHandler; +import org.apache.ignite.failure.StopNodeFailureHandler; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgnitionEx; +import org.apache.ignite.internal.pagemem.FullPageId; +import org.apache.ignite.internal.pagemem.PageIdAllocator; +import org.apache.ignite.internal.pagemem.PageIdUtils; +import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; +import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl; +import org.apache.ignite.internal.util.typedef.internal.CU; +import org.apache.ignite.internal.util.typedef.internal.S; +import 
org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * + */ +public class IgniteSequentialNodeCrashRecoveryTest extends GridCommonAbstractTest { + /** */ + private static final int PAGE_SIZE = 4096; + + /** */ + private FileIOFactory fileIoFactory; + + /** */ + private FailureHandler failureHnd; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + DataStorageConfiguration dsCfg = new DataStorageConfiguration() + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setMaxSize(512 * 1024 * 1024).setPersistenceEnabled(true)) + // Set large checkpoint frequency to make sure no checkpoint happens right after the node start. + .setCheckpointFrequency(getTestTimeout()) + .setPageSize(PAGE_SIZE); + + if (fileIoFactory != null) + dsCfg.setFileIOFactory(fileIoFactory); + + cfg + .setDataStorageConfiguration(dsCfg) + .setConsistentId(igniteInstanceName); + + if (failureHnd != null) + cfg.setFailureHandler(failureHnd); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * @throws Exception if failed. 
+ */ + @Test + public void testCrashOnCheckpointAfterLogicalRecovery() throws Exception { + IgniteEx g = startGrid(0); + + g.cluster().active(true); + + g.getOrCreateCache(new CacheConfiguration<>("cache") + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setAffinity(new RendezvousAffinityFunction(false, 8))); + + disableCheckpoints(g); + + { + IgniteCache cache = g.cache("cache"); + + // Now that checkpoints are disabled, put some data to the cache. + GridTestUtils.runMultiThreaded(() -> { + for (int i = 0; i < 400; i++) + cache.put(i % 100, Thread.currentThread().getName()); + }, 64, "update-thread"); + } + + Collection dirtyAfterLoad = captureDirtyPages(g); + + stopGrid(0); + + CheckpointFailingIoFactory f = (CheckpointFailingIoFactory)(fileIoFactory = new CheckpointFailingIoFactory(false)); + StopLatchFailureHandler fh = (StopLatchFailureHandler)(failureHnd = new StopLatchFailureHandler()); + + // Now start the node. Since the checkpoint was disabled, logical recovery will be performed. + g = startGrid(0); + + fileIoFactory = null; + failureHnd = null; + + // Capture dirty pages after logical recovery & updates. + Collection dirtyAfterRecoveryAndUpdates = captureDirtyPages(g); + + f.startFailing(); + + triggerCheckpoint(g); + + assertTrue("Failed to wait for checkpoint failure", fh.waitFailed()); + + // Capture pages we marked on first run and did not mark on second run. 
+ dirtyAfterLoad.removeAll(dirtyAfterRecoveryAndUpdates); + + assertFalse(dirtyAfterLoad.isEmpty()); + + fileIoFactory = new CheckingIoFactory(dirtyAfterLoad); + + g = startGrid(0); + + { + IgniteCache cache = g.cache("cache"); + + for (int i = 0; i < 400; i++) + cache.put(100 + (i % 100), Thread.currentThread().getName()); + + for (int i = 0; i < 200; i++) + assertTrue("i=" + i, cache.containsKey(i)); + } + } + + /** + * + */ + private void disableCheckpoints(IgniteEx g) throws Exception { + GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)g.context() + .cache().context().database(); + + dbMgr.enableCheckpoints(false).get(); + } + + /** + * @param ig Ignite instance. + */ + private void triggerCheckpoint(IgniteEx ig) { + GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)ig.context() + .cache().context().database(); + + dbMgr.wakeupForCheckpoint("test-should-fail"); + } + + /** + * @param g Ignite instance. + * @throws IgniteCheckedException If failed. + */ + private Collection captureDirtyPages(IgniteEx g) throws IgniteCheckedException { + GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)g.context() + .cache().context().database(); + + // Capture a set of dirty pages. + PageMemoryImpl pageMem = (PageMemoryImpl)dbMgr.dataRegion("default").pageMemory(); + + return pageMem.dirtyPages(); + } + + /** + * + */ + private class StopLatchFailureHandler extends StopNodeFailureHandler { + /** */ + private CountDownLatch stopLatch = new CountDownLatch(1); + + /** {@inheritDoc} */ + @Override public boolean handle(Ignite ignite, FailureContext failureCtx) { + new Thread( + new Runnable() { + @Override public void run() { + U.error(ignite.log(), "Stopping local node on Ignite failure: [failureCtx=" + failureCtx + ']'); + + IgnitionEx.stop(ignite.name(), true, true); + + stopLatch.countDown(); + } + }, + "node-stopper" + ).start(); + + return true; + } + + /** + * @return {@code true} if wait succeeded. 
+ * @throws InterruptedException If current thread was interrupted. + */ + public boolean waitFailed() throws InterruptedException { + return stopLatch.await(getTestTimeout(), TimeUnit.MILLISECONDS); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(StopNodeFailureHandler.class, this, "super", super.toString()); + } + } + + /** + * + */ + private static class CheckingIoFactory implements FileIOFactory { + /** */ + private final transient Collection forbiddenPages; + + /** + * @param forbiddenPages Forbidden pages. + */ + private CheckingIoFactory(Collection forbiddenPages) { + this.forbiddenPages = forbiddenPages; + } + + /** {@inheritDoc} */ + @Override public FileIO create(File file, OpenOption... modes) throws IOException { + FileIO delegate = new RandomAccessFileIOFactory().create(file, modes); + + if (file.getName().contains("part-")) + return new CheckingFileIO(file, delegate, forbiddenPages); + + return delegate; + } + } + + /** + * + */ + private static class CheckingFileIO extends FileIODecorator { + /** */ + private int grpId; + + /** */ + private int partId; + + /** */ + private Collection forbiddenPages; + + /** + * @param file File. + * @param delegate Delegate. + * @param forbiddenPages Forbidden pages. + */ + public CheckingFileIO(File file, FileIO delegate, Collection forbiddenPages) { + super(delegate); + this.forbiddenPages = forbiddenPages; + + String fileName = file.getName(); + + int start = fileName.indexOf("part-") + 5; + int end = fileName.indexOf(".bin"); + partId = Integer.parseInt(fileName.substring(start, end)); + + String path = file.getPath(); + + if (path.contains(File.separator + "metastorage" + File.separator)) + grpId = MetaStorage.METASTORAGE_CACHE_ID; + else { + start = path.indexOf("cache-") + 6; + end = path.indexOf(File.separator, start); + + grpId = start >= 0 ? 
CU.cacheId(path.substring(start, end)) : 0; + } + } + + /** {@inheritDoc} */ + @Override public int write(ByteBuffer srcBuf) throws IOException { + throw new AssertionError("Should not be called"); + } + + /** {@inheritDoc} */ + @Override public int write(ByteBuffer srcBuf, long position) throws IOException { + FullPageId fId = new FullPageId( + PageIdUtils.pageId(partId, PageIdAllocator.FLAG_DATA, (int)(position / PAGE_SIZE) - 1), + grpId); + + if (forbiddenPages.contains(fId)) + throw new AssertionError("Attempted to write invalid page on recovery: " + fId); + + return super.write(srcBuf, position); + } + + /** {@inheritDoc} */ + @Override public int write(byte[] buf, int off, int len) throws IOException { + throw new AssertionError("Should not be called"); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/SlowHistoricalRebalanceSmallHistoryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/SlowHistoricalRebalanceSmallHistoryTest.java index bd1a1a9cc3575..bb47aef925d59 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/SlowHistoricalRebalanceSmallHistoryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/SlowHistoricalRebalanceSmallHistoryTest.java @@ -22,6 +22,7 @@ import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.cluster.ClusterNode; @@ -39,18 +40,16 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class SlowHistoricalRebalanceSmallHistoryTest extends GridCommonAbstractTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Slow rebalance cache name. */ private static final String SLOW_REBALANCE_CACHE = "b13813ce"; @@ -69,8 +68,6 @@ public class SlowHistoricalRebalanceSmallHistoryTest extends GridCommonAbstractT cfg.setConsistentId(name); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration( new DataStorageConfiguration() .setWalHistorySize(WAL_HISTORY_SIZE) @@ -118,13 +115,14 @@ public class SlowHistoricalRebalanceSmallHistoryTest extends GridCommonAbstractT /** * Checks that we reserve and release the same WAL index on exchange. 
*/ + @Test public void testReservation() throws Exception { IgniteEx ig = startGrid(0); ig.cluster().active(true); - ig.getOrCreateCache(new CacheConfiguration<>() - .setName(SLOW_REBALANCE_CACHE) + ig.getOrCreateCache(new CacheConfiguration<>(SLOW_REBALANCE_CACHE) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setAffinity(new RendezvousAffinityFunction(false, 1)) .setBackups(1) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) @@ -141,8 +139,8 @@ public void testReservation() throws Exception { resetBaselineTopology(); - IgniteCache anotherCache = ig.getOrCreateCache(new CacheConfiguration<>() - .setName(REGULAR_CACHE) + IgniteCache anotherCache = ig.getOrCreateCache(new CacheConfiguration<>(REGULAR_CACHE) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) .setAffinity(new RendezvousAffinityFunction(false, 1)) .setBackups(1) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) @@ -193,9 +191,6 @@ public void testReservation() throws Exception { * */ private static class RebalanceBlockingSPI extends TcpCommunicationSpi { - /** */ - public static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override public void sendMessage(ClusterNode node, Message msg) throws IgniteSpiException { if (msg instanceof GridIoMessage && ((GridIoMessage)msg).message() instanceof GridDhtPartitionSupplyMessage) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteCheckpointDirtyPagesForLowLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteCheckpointDirtyPagesForLowLoadTest.java index 8baa1c301512c..49967204ad3f1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteCheckpointDirtyPagesForLowLoadTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteCheckpointDirtyPagesForLowLoadTest.java @@ -36,11 +36,15 @@ import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test generates low load to grid having some shared groups. Test checks that pages marked dirty after some time will * become reasonably low for 1 put. */ +@RunWith(JUnit4.class) public class IgniteCheckpointDirtyPagesForLowLoadTest extends GridCommonAbstractTest { /** Caches in group. */ private static final int CACHES_IN_GRP = 1; @@ -99,6 +103,7 @@ public class IgniteCheckpointDirtyPagesForLowLoadTest extends GridCommonAbstract /** * @throws Exception if failed. */ + @Test public void testManyCachesAndNotManyPuts() throws Exception { try { IgniteEx ignite = startGrid(0); @@ -206,4 +211,4 @@ private int waitForCurrentCheckpointPagesCounterUpdated(GridCacheDatabaseSharedM return currCpPages; -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteMassLoadSandboxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteMassLoadSandboxTest.java index f38db0a832448..43948912c5199 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteMassLoadSandboxTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/checkpoint/IgniteMassLoadSandboxTest.java @@ -57,6 +57,9 @@ import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; @@ -64,6 +67,7 @@ * Sandbox test to measure progress of grid write operations. If no progress occur during period of time, then thread * dumps are generated. */ +@RunWith(JUnit4.class) public class IgniteMassLoadSandboxTest extends GridCommonAbstractTest { /** Cache name. Random to cover external stores possible problems. */ public static final String CACHE_NAME = "partitioned" + new Random().nextInt(10000000); @@ -173,6 +177,7 @@ public class IgniteMassLoadSandboxTest extends GridCommonAbstractTest { * * @throws Exception if failed. */ + @Test public void testContinuousPutMultithreaded() throws Exception { try { // System.setProperty(IgniteSystemProperties.IGNITE_DIRTY_PAGES_PARALLEL, "true"); @@ -233,6 +238,7 @@ public void testContinuousPutMultithreaded() throws Exception { * * @throws Exception if failed. */ + @Test public void testDataStreamerContinuousPutMultithreaded() throws Exception { try { // System.setProperty(IgniteSystemProperties.IGNITE_DIRTY_PAGES_PARALLEL, "true"); @@ -309,6 +315,7 @@ public void testDataStreamerContinuousPutMultithreaded() throws Exception { * * @throws Exception if failed. */ + @Test public void testCoveredWalLogged() throws Exception { GridStringLogger log0 = null; @@ -484,6 +491,7 @@ private static boolean keepInDb(int id) { * * @throws Exception if failed. 
*/ + @Test public void testPutRemoveMultithreaded() throws Exception { setWalArchAndWorkToSameVal = false; customWalMode = WALMode.LOG_ONLY; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/DefaultPageSizeBackwardsCompatibilityTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/DefaultPageSizeBackwardsCompatibilityTest.java index 739beb4e376e3..7709d9cf81e5f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/DefaultPageSizeBackwardsCompatibilityTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/DefaultPageSizeBackwardsCompatibilityTest.java @@ -18,7 +18,6 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; -import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; @@ -26,18 +25,16 @@ import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class DefaultPageSizeBackwardsCompatibilityTest extends GridCommonAbstractTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Client mode. 
*/ private boolean set2kPageSize = true; @@ -51,9 +48,6 @@ public class DefaultPageSizeBackwardsCompatibilityTest extends GridCommonAbstrac @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - discoverySpi.setIpFinder(IP_FINDER); - DataStorageConfiguration memCfg = new DataStorageConfiguration(); if (set2kPageSize) @@ -100,7 +94,8 @@ public class DefaultPageSizeBackwardsCompatibilityTest extends GridCommonAbstrac /** * @throws Exception If failed. */ - public void testStartFrom2kDefaultStore() throws Exception { + @Test + public void testStartFrom16kDefaultStore() throws Exception { startGrids(2); Ignite ig = ignite(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheDestroyDuringCheckpointTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheDestroyDuringCheckpointTest.java index 62cc244dc1346..1d95bfec22c40 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheDestroyDuringCheckpointTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheDestroyDuringCheckpointTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for cache creation/deletion with frequent checkpoints. 
*/ +@RunWith(JUnit4.class) public class IgnitePdsCacheDestroyDuringCheckpointTest extends GridCommonAbstractTest { /** */ private static final String NAME_PREFIX = "CACHE-"; @@ -90,6 +94,7 @@ private DataStorageConfiguration createDbConfig() { /** * @throws Exception If fail. */ + @Test public void testCacheCreatePutCheckpointDestroy() throws Exception { IgniteEx ig = startGrid(0); ig.active(true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheIntegrationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheIntegrationTest.java index ba9a00098e81e..c0d351fd85c4d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheIntegrationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCacheIntegrationTest.java @@ -38,19 +38,17 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsCacheIntegrationTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int GRID_CNT = 3; @@ -83,8 +81,6 @@ public class IgnitePdsCacheIntegrationTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(ccfg); - 
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setMarshaller(null); BinaryConfiguration bCfg = new BinaryConfiguration(); @@ -111,6 +107,7 @@ public class IgnitePdsCacheIntegrationTest extends GridCommonAbstractTest { /** * @throws Exception if failed. */ + @Test public void testPutGetSimple() throws Exception { startGrids(GRID_CNT); @@ -146,6 +143,7 @@ public void testPutGetSimple() throws Exception { /** * @throws Exception if failed. */ + @Test public void testPutMultithreaded() throws Exception { startGrids(4); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimpleTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimpleTest.java index 0cfcec8450025..6957d551acb4b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimpleTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimpleTest.java @@ -24,13 +24,16 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Puts data into grid, waits for checkpoint to start and then verifies data */ +@RunWith(JUnit4.class) public class IgnitePdsCheckpointSimpleTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @@ -65,6 +68,7 @@ public class IgnitePdsCheckpointSimpleTest extends GridCommonAbstractTest { * Checks if same data can be loaded after checkpoint. * @throws Exception if failed. 
*/ + @Test public void testRecoveryAfterCpEnd() throws Exception { IgniteEx ignite = startGrid(0); @@ -94,7 +98,7 @@ public void testRecoveryAfterCpEnd() throws Exception { * @param i key. * @return value with extra data, which allows to verify */ - @NotNull private String valueWithRedundancyForKey(int i) { + private @NotNull String valueWithRedundancyForKey(int i) { return Strings.repeat(Integer.toString(i), 10); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimulationWithRealCpDisabledTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimulationWithRealCpDisabledTest.java index 5fa618b3b947e..051505c50aaad 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimulationWithRealCpDisabledTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsCheckpointSimulationWithRealCpDisabledTest.java @@ -35,6 +35,7 @@ import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheRebalanceMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; @@ -53,6 +54,8 @@ import org.apache.ignite.internal.pagemem.wal.record.CheckpointRecord; import org.apache.ignite.internal.pagemem.wal.record.DataEntry; import org.apache.ignite.internal.pagemem.wal.record.DataRecord; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MvccDataRecord; import org.apache.ignite.internal.pagemem.wal.record.PageSnapshot; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; import 
org.apache.ignite.internal.pagemem.wal.record.delta.PartitionMetaStateRecord; @@ -62,6 +65,7 @@ import org.apache.ignite.internal.processors.cache.GridCacheOperation; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.KeyCacheObject; +import org.apache.ignite.internal.processors.cache.mvcc.MvccVersionImpl; import org.apache.ignite.internal.processors.cache.persistence.DummyPageIO; import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; @@ -75,20 +79,18 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** - * Test simulated chekpoints, + * Test simulated checkpoints, * Disables integrated check pointer thread */ +@RunWith(JUnit4.class) public class IgnitePdsCheckpointSimulationWithRealCpDisabledTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int TOTAL_PAGES = 1000; @@ -96,17 +98,24 @@ public class IgnitePdsCheckpointSimulationWithRealCpDisabledTest extends GridCom private static final boolean VERBOSE = false; /** Cache name. */ - private static final String cacheName = "cache"; + private static final String CACHE_NAME = "cache"; + + /** Mvcc cache name. 
*/ + private static final String MVCC_CACHE_NAME = "mvccCache"; + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - CacheConfiguration ccfg = new CacheConfiguration(cacheName); + CacheConfiguration ccfg = new CacheConfiguration(CACHE_NAME) + .setRebalanceMode(CacheRebalanceMode.NONE); - ccfg.setRebalanceMode(CacheRebalanceMode.NONE); + CacheConfiguration mvccCfg = new CacheConfiguration(MVCC_CACHE_NAME) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT) + .setRebalanceMode(CacheRebalanceMode.NONE); - cfg.setCacheConfiguration(ccfg); + cfg.setCacheConfiguration(ccfg, mvccCfg); cfg.setDataStorageConfiguration( new DataStorageConfiguration() @@ -120,12 +129,6 @@ public class IgnitePdsCheckpointSimulationWithRealCpDisabledTest extends GridCom ) ); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } @@ -146,10 +149,11 @@ public class IgnitePdsCheckpointSimulationWithRealCpDisabledTest extends GridCom /** * @throws Exception if failed. */ + @Test public void testCheckpointSimulationMultiThreaded() throws Exception { IgniteEx ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); GridCacheSharedContext shared = ig.context().cache().context(); @@ -164,7 +168,7 @@ public void testCheckpointSimulationMultiThreaded() throws Exception { // Must put something in partition 0 in order to initialize meta page. // Otherwise we will violate page store integrity rules. 
- ig.cache(cacheName).put(0, 0); + ig.cache(CACHE_NAME).put(0, 0); PageMemory mem = shared.database().dataRegion(null).pageMemory(); @@ -172,7 +176,7 @@ public void testCheckpointSimulationMultiThreaded() throws Exception { try { res = runCheckpointing(ig, (PageMemoryImpl)mem, pageStore, shared.wal(), - shared.cache().cache(cacheName).context().cacheId()); + shared.cache().cache(CACHE_NAME).context().cacheId()); } catch (Throwable th) { log().error("Error while running checkpointing", th); @@ -187,7 +191,7 @@ public void testCheckpointSimulationMultiThreaded() throws Exception { ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); shared = ig.context().cache().context(); @@ -203,14 +207,15 @@ public void testCheckpointSimulationMultiThreaded() throws Exception { /** * @throws Exception if failed. */ + @Test public void testGetForInitialWrite() throws Exception { IgniteEx ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); GridCacheSharedContext shared = ig.context().cache().context(); - int cacheId = shared.cache().cache(cacheName).context().cacheId(); + int cacheId = shared.cache().cache(CACHE_NAME).context().cacheId(); GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)shared.database(); @@ -238,12 +243,13 @@ public void testGetForInitialWrite() throws Exception { long pageAddr = mem.writeLock(fullId.groupId(), fullId.pageId(), page); try { - DataPageIO.VERSIONS.latest().initNewPage(pageAddr, fullId.pageId(), mem.pageSize()); + DataPageIO.VERSIONS.latest().initNewPage(pageAddr, fullId.pageId(), + mem.realPageSize(fullId.groupId())); for (int i = PageIO.COMMON_HEADER_END + DataPageIO.ITEMS_OFF; i < mem.pageSize(); i++) PageUtils.putByte(pageAddr, i, (byte)0xAB); - PageIO.printPage(pageAddr, mem.pageSize()); + PageIO.printPage(pageAddr, mem.realPageSize(fullId.groupId())); } finally { mem.writeUnlock(fullId.groupId(), fullId.pageId(), page, null, true); @@ -263,7 +269,7 @@ public void testGetForInitialWrite() throws 
Exception { ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); shared = ig.context().cache().context(); @@ -302,13 +308,29 @@ public void testGetForInitialWrite() throws Exception { /** * @throws Exception if failed. */ + @Test public void testDataWalEntries() throws Exception { + checkDataWalEntries(false); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testMvccDataWalEntries() throws Exception { + checkDataWalEntries(true); + } + + /** + * @throws Exception if failed. + */ + private void checkDataWalEntries(boolean mvcc) throws Exception { IgniteEx ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); GridCacheSharedContext sharedCtx = ig.context().cache().context(); - GridCacheContext cctx = sharedCtx.cache().cache(cacheName).context(); + GridCacheContext cctx = sharedCtx.cache().cache(mvcc ? MVCC_CACHE_NAME : CACHE_NAME).context(); GridCacheDatabaseSharedManager db = (GridCacheDatabaseSharedManager)sharedCtx.database(); IgniteWriteAheadLogManager wal = sharedCtx.wal(); @@ -331,7 +353,10 @@ public void testDataWalEntries() throws Exception { if (op != GridCacheOperation.DELETE) val = cctx.toCacheObject("value-" + i); - entries.add(new DataEntry(cctx.cacheId(), key, val, op, null, cctx.versions().next(), 0L, + entries.add(mvcc ? + new MvccDataEntry(cctx.cacheId(), key, val, op, null, cctx.versions().next(), 0L, + cctx.affinity().partition(i), i, new MvccVersionImpl(1000L, 10L, i + 1 /* Non-zero */)) : + new DataEntry(cctx.cacheId(), key, val, op, null, cctx.versions().next(), 0L, cctx.affinity().partition(i), i)); } @@ -342,17 +367,17 @@ public void testDataWalEntries() throws Exception { wal.flush(start, false); for (DataEntry entry : entries) - wal.log(new DataRecord(entry)); + wal.log(mvcc ? new MvccDataRecord((MvccDataEntry) entry) : new DataRecord(entry)); // Data will not be written to the page store. 
stopAllGrids(); ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); sharedCtx = ig.context().cache().context(); - cctx = sharedCtx.cache().cache(cacheName).context(); + cctx = sharedCtx.cache().cache(mvcc ? MVCC_CACHE_NAME : CACHE_NAME).context(); db = (GridCacheDatabaseSharedManager)sharedCtx.database(); wal = sharedCtx.wal(); @@ -378,7 +403,10 @@ public void testDataWalEntries() throws Exception { while (idx < entries.size()) { IgniteBiTuple dataRecTup = it.next(); - assert dataRecTup.get2() instanceof DataRecord; + if (!mvcc) + assert dataRecTup.get2() instanceof DataRecord; + else + assert dataRecTup.get2() instanceof MvccDataRecord; DataRecord dataRec = (DataRecord)dataRecTup.get2(); @@ -401,6 +429,13 @@ public void testDataWalEntries() throws Exception { assertEquals(entry.nearXidVersion(), readEntry.nearXidVersion()); assertEquals(entry.partitionCounter(), readEntry.partitionCounter()); + if (mvcc) { + assert entry instanceof MvccDataEntry; + assert readEntry instanceof MvccDataEntry; + + assertEquals(((MvccDataEntry) entry).mvccVer(), ((MvccDataEntry) readEntry).mvccVer()); + } + idx++; } } @@ -409,13 +444,14 @@ public void testDataWalEntries() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testPageWalEntries() throws Exception { IgniteEx ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); GridCacheSharedContext sharedCtx = ig.context().cache().context(); - int cacheId = sharedCtx.cache().cache(cacheName).context().cacheId(); + int cacheId = sharedCtx.cache().cache(CACHE_NAME).context().cacheId(); GridCacheDatabaseSharedManager db = (GridCacheDatabaseSharedManager)sharedCtx.database(); PageMemory pageMem = sharedCtx.database().dataRegion(null).pageMemory(); @@ -459,7 +495,7 @@ public void testPageWalEntries() throws Exception { ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); sharedCtx = ig.context().cache().context(); @@ -509,14 +545,15 @@ public void testPageWalEntries() throws Exception { /** * @throws Exception if failed. */ + @Test public void testDirtyFlag() throws Exception { IgniteEx ig = startGrid(0); - ig.active(true); + ig.cluster().active(true); GridCacheSharedContext shared = ig.context().cache().context(); - int cacheId = shared.cache().cache(cacheName).context().cacheId(); + int cacheId = shared.cache().cache(CACHE_NAME).context().cacheId(); GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)shared.database(); @@ -544,7 +581,7 @@ public void testDirtyFlag() throws Exception { long pageAddr = mem.writeLock(fullId.groupId(), fullId.pageId(), page); try { - pageIO.initNewPage(pageAddr, fullId.pageId(), mem.pageSize()); + pageIO.initNewPage(pageAddr, fullId.pageId(), mem.realPageSize(fullId.groupId())); assertTrue(mem.isDirty(fullId.groupId(), fullId.pageId(), page)); } @@ -632,7 +669,7 @@ private void writePageData(FullPageId fullId, PageMemory mem) throws Exception { long pageAddr = mem.writeLock(fullId.groupId(), fullId.pageId(), page); try { - DataPageIO.VERSIONS.latest().initNewPage(pageAddr, fullId.pageId(), mem.pageSize()); + DataPageIO.VERSIONS.latest().initNewPage(pageAddr, fullId.pageId(), mem.realPageSize(fullId.groupId())); ThreadLocalRandom rnd = 
ThreadLocalRandom.current(); @@ -707,7 +744,7 @@ else if (rec instanceof PageSnapshot) { long pageAddr = mem.readLock(fullId.groupId(), fullId.pageId(), page); try { - for (int i = PageIO.COMMON_HEADER_END; i < mem.pageSize(); i++) { + for (int i = PageIO.COMMON_HEADER_END; i < mem.realPageSize(fullId.groupId()); i++) { int expState = state & 0xFF; int pageState = PageUtils.getByte(pageAddr, i) & 0xFF; int walState = walData[i] & 0xFF; @@ -818,7 +855,8 @@ private IgniteBiTuple, WALPointer> runCheckpointing( ", bhc=" + U.hexLong(System.identityHashCode(pageAddr)) + ", page=" + U.hexLong(System.identityHashCode(page)) + ']'); - for (int i = PageIO.COMMON_HEADER_END; i < mem.pageSize(); i++) + for (int i = PageIO.COMMON_HEADER_END; i < mem.realPageSize(fullId.groupId()); + i++) { assertEquals("Verify page failed [fullId=" + fullId + ", i=" + i + ", state=" + state + @@ -826,6 +864,7 @@ private IgniteBiTuple, WALPointer> runCheckpointing( ", bhc=" + U.hexLong(System.identityHashCode(pageAddr)) + ", page=" + U.hexLong(System.identityHashCode(page)) + ']', state & 0xFF, PageUtils.getByte(pageAddr, i) & 0xFF); + } } state = (state + 1) & 0xFF; @@ -836,7 +875,7 @@ private IgniteBiTuple, WALPointer> runCheckpointing( ", bhc=" + U.hexLong(System.identityHashCode(pageAddr)) + ", page=" + U.hexLong(System.identityHashCode(page)) + ']'); - for (int i = PageIO.COMMON_HEADER_END; i < mem.pageSize(); i++) + for (int i = PageIO.COMMON_HEADER_END; i < mem.realPageSize(fullId.groupId()); i++) PageUtils.putByte(pageAddr, i, (byte)state); resMap.put(fullId, state); @@ -926,7 +965,7 @@ private IgniteBiTuple, WALPointer> runCheckpointing( Integer first = null; - for (int i = PageIO.COMMON_HEADER_END; i < mem.pageSize(); i++) { + for (int i = PageIO.COMMON_HEADER_END; i < mem.realPageSize(fullId.groupId()); i++) { int val = tmpBuf.get(i) & 0xFF; if (first == null) @@ -1027,7 +1066,7 @@ private void initPage(PageMemory mem, PageIO pageIO, FullPageId fullId) throws I final long pageAddr = 
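The `runCheckpointing` loops in the hunks above fill every payload byte of a page with a rolling 8-bit state and later verify it on read-back, which is why each loop bound was switched from `mem.pageSize()` to `mem.realPageSize(fullId.groupId())`. Stripped of page locking and WAL bookkeeping, the fill/verify invariant can be sketched standalone (the class is illustrative, not from the patch; `headerEnd` stands in for `PageIO.COMMON_HEADER_END`):

```java
/** Standalone sketch of the fill/verify pattern used by the checkpointing stress test. */
public class PageStateSketch {
    /** Fill the payload with an 8-bit state, leaving header bytes untouched (mirrors the writer loop). */
    static void fill(byte[] page, int headerEnd, int state) {
        for (int i = headerEnd; i < page.length; i++)
            page[i] = (byte)state;
    }

    /** Verify every payload byte matches the expected state (mirrors the verifier loop). */
    static boolean verify(byte[] page, int headerEnd, int state) {
        for (int i = headerEnd; i < page.length; i++)
            if ((page[i] & 0xFF) != (state & 0xFF))
                return false;

        return true;
    }

    public static void main(String[] args) {
        byte[] page = new byte[64];

        fill(page, 8, 0xAB);

        // The same masking keeps the rolling state within one byte: state = (state + 1) & 0xFF.
        System.out.println(verify(page, 8, 0xAB)); // true
        System.out.println(verify(page, 8, 0xAC)); // false
    }
}
```

The `& 0xFF` masks matter because Java bytes are signed; without them a state such as `0xAB` would never compare equal to the stored byte.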
mem.writeLock(fullId.groupId(), fullId.pageId(), page); try { - pageIO.initNewPage(pageAddr, fullId.pageId(), mem.pageSize()); + pageIO.initNewPage(pageAddr, fullId.pageId(), mem.realPageSize(fullId.groupId())); } finally { mem.writeUnlock(fullId.groupId(), fullId.pageId(), page, null, true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsDiskErrorsRecoveringTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsDiskErrorsRecoveringTest.java index bd30696a1e260..2b329c0651b4c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsDiskErrorsRecoveringTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsDiskErrorsRecoveringTest.java @@ -25,7 +25,6 @@ import java.util.Arrays; import java.util.concurrent.atomic.AtomicLong; import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.IgniteException; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheRebalanceMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; @@ -39,6 +38,7 @@ import org.apache.ignite.failure.StopNodeFailureHandler; import org.apache.ignite.internal.GridKernalState; import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; @@ -51,15 +51,16 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import static 
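Several call sites above replace `mem.pageSize()` with `mem.realPageSize(fullId.groupId())` when initializing or scanning page payloads. The distinction (inferred from the call sites, not spelled out in the patch) is that a cache group's usable payload can be smaller than the configured page size when the group reserves trailing bytes per page, e.g. for encryption metadata. A minimal arithmetic sketch, with the overhead figure purely illustrative:

```java
/** Illustrative model of configured vs. usable ("real") page size; the numbers are not Ignite's. */
public class RealPageSizeSketch {
    static final int CONFIGURED_PAGE_SIZE = 4096;

    /** Hypothetical trailing bytes reserved per page for a group (0 for plain groups). */
    static int reservedBytes(boolean groupReservesSpace) {
        return groupReservesSpace ? 16 : 0;
    }

    /** Usable payload bound: loops like the ones above must iterate up to this, not the full size. */
    static int realPageSize(boolean groupReservesSpace) {
        return CONFIGURED_PAGE_SIZE - reservedBytes(groupReservesSpace);
    }

    public static void main(String[] args) {
        System.out.println(realPageSize(false)); // 4096
        System.out.println(realPageSize(true));  // 4080
    }
}
```

Using the configured size as the bound would write into the reserved tail, which is exactly the class of bug the patched loop bounds avoid.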
java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_MMAP; /** * Tests node recovering after disk errors during interaction with persistent storage. */ +@RunWith(JUnit4.class) public class IgnitePdsDiskErrorsRecoveringTest extends GridCommonAbstractTest { /** */ private static final int PAGE_SIZE = DataStorageConfiguration.DFLT_PAGE_SIZE; @@ -97,6 +98,8 @@ public class IgnitePdsDiskErrorsRecoveringTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + cfg.setConsistentId(igniteInstanceName); + DataStorageConfiguration dsCfg = new DataStorageConfiguration() .setDefaultDataRegionConfiguration( new DataRegionConfiguration().setMaxSize(100L * 1024 * 1024).setPersistenceEnabled(true)) @@ -125,93 +128,49 @@ public class IgnitePdsDiskErrorsRecoveringTest extends GridCommonAbstractTest { /** * Test node stopping & recovering on cache initialization fail. */ + @Test public void testRecoveringOnCacheInitFail() throws Exception { // Fail to initialize page store. 2 extra pages is needed for MetaStorage. ioFactory = new FilteringFileIOFactory(".bin", new LimitedSizeFileIOFactory(new RandomAccessFileIOFactory(), 2 * PAGE_SIZE)); - final IgniteEx grid = startGrid(0); - boolean failed = false; - try { - grid.cluster().active(true); - } catch (Exception expected) { - log.warning("Expected cache error", expected); - - failed = true; - } - - Assert.assertTrue("Cache initialization must failed", failed); - - // Grid should be automatically stopped after checkpoint fail. - awaitStop(grid); - - // Grid should be successfully recovered after stopping. 
- ioFactory = null; - - IgniteEx recoveredGrid = startGrid(0); - recoveredGrid.active(true); - } - - /** - * Test node stopping & recovering on start marker writing fail during activation. - * - * @throws Exception If test failed. - */ - public void testRecoveringOnNodeStartMarkerWriteFail() throws Exception { - // Fail to write node start marker tmp file at the second checkpoint. Pass only initial checkpoint. - ioFactory = new FilteringFileIOFactory("started.bin" + FilePageStoreManager.TMP_SUFFIX, new LimitedSizeFileIOFactory(new RandomAccessFileIOFactory(), 20)); - - IgniteEx grid = startGrid(0); - grid.cluster().active(true); - - for (int i = 0; i < 1000; i++) { - byte payload = (byte) i; - byte[] data = new byte[2048]; - Arrays.fill(data, payload); - grid.cache(CACHE_NAME).put(i, data); - } + IgniteInternalFuture startGridFut = GridTestUtils.runAsync(() -> { + try { + IgniteEx grid = startGrid(0); - stopAllGrids(); + grid.cluster().active(true); + } + catch (Exception e) { + throw new RuntimeException("Failed to start node.", e); + } + }); - boolean activationFailed = false; try { - grid = startGrid(0); - grid.cluster().active(true); + startGridFut.get(); } - catch (IgniteException e) { - log.warning("Activation test exception", e); + catch (Exception e) { + Assert.assertTrue(e.getMessage().contains("Failed to start node.")); - activationFailed = true; + failed = true; } - Assert.assertTrue("Activation must be failed", activationFailed); - - // Grid should be automatically stopped after checkpoint fail. - awaitStop(grid); + Assert.assertTrue("Cache initialization must fail", failed); // Grid should be successfully recovered after stopping.
ioFactory = null; - IgniteEx recoveredGrid = startGrid(0); - recoveredGrid.cluster().active(true); - - for (int i = 0; i < 1000; i++) { - byte payload = (byte) i; - byte[] data = new byte[2048]; - Arrays.fill(data, payload); + IgniteEx grid = startGrid(0); - byte[] actualData = (byte[]) recoveredGrid.cache(CACHE_NAME).get(i); - Assert.assertArrayEquals(data, actualData); - } + grid.cluster().active(true); } - /** * Test node stopping & recovering on checkpoint begin fail. * * @throws Exception If test failed. */ + @Test public void testRecoveringOnCheckpointBeginFail() throws Exception { // Fail to write checkpoint start marker tmp file at the second checkpoint. Pass only initial checkpoint. ioFactory = new FilteringFileIOFactory("START.bin" + FilePageStoreManager.TMP_SUFFIX, new LimitedSizeFileIOFactory(new RandomAccessFileIOFactory(), 20)); @@ -262,6 +221,7 @@ public void testRecoveringOnCheckpointBeginFail() throws Exception { /** * Test node stopping & recovering on checkpoint pages write fail. */ + @Test public void testRecoveringOnCheckpointWriteFail() throws Exception { // Fail write partition and index files at the second checkpoint. Pass only initial checkpoint. ioFactory = new FilteringFileIOFactory(".bin", new LimitedSizeFileIOFactory(new RandomAccessFileIOFactory(), 128 * PAGE_SIZE)); @@ -311,20 +271,26 @@ public void testRecoveringOnCheckpointWriteFail() throws Exception { /** * Test node stopping & recovering on WAL writing fail with enabled MMAP (Batch allocation for WAL segments). */ + @Test public void testRecoveringOnWALWritingFail1() throws Exception { // Allow to allocate only 1 wal segment, fail on write to second. ioFactory = new FilteringFileIOFactory(".wal", new LimitedSizeFileIOFactory(new RandomAccessFileIOFactory(), WAL_SEGMENT_SIZE)); + System.setProperty(IGNITE_WAL_MMAP, "true"); + doTestRecoveringOnWALWritingFail(); } /** * Test node stopping & recovering on WAL writing fail with disabled MMAP. 
*/ + @Test public void testRecoveringOnWALWritingFail2() throws Exception { // Fail somewhere on the second wal segment. ioFactory = new FilteringFileIOFactory(".wal", new LimitedSizeFileIOFactory(new RandomAccessFileIOFactory(), (long) (1.5 * WAL_SEGMENT_SIZE))); + System.setProperty(IGNITE_WAL_MMAP, "false"); + doTestRecoveringOnWALWritingFail(); } @@ -332,18 +298,23 @@ public void testRecoveringOnWALWritingFail2() throws Exception { * Test node stopping & recovery on WAL writing fail. */ private void doTestRecoveringOnWALWritingFail() throws Exception { - final IgniteEx grid = startGrid(0); + IgniteEx grid = startGrid(0); FileWriteAheadLogManager wal = (FileWriteAheadLogManager)grid.context().cache().context().wal(); + wal.setFileIOFactory(ioFactory); grid.cluster().active(true); int failedPosition = -1; - for (int i = 0; i < 1000; i++) { + final int keysCount = 2000; + + final int dataSize = 2048; + + for (int i = 0; i < keysCount; i++) { byte payload = (byte) i; - byte[] data = new byte[2048]; + byte[] data = new byte[dataSize]; Arrays.fill(data, payload); try { @@ -357,7 +328,7 @@ private void doTestRecoveringOnWALWritingFail() throws Exception { } // We must be able to put something into cache before fail. - Assert.assertTrue(failedPosition > 0); + Assert.assertTrue("One of the cache puts must fail", failedPosition > 0); // Grid should be automatically stopped after WAL fail. awaitStop(grid); @@ -365,15 +336,16 @@ private void doTestRecoveringOnWALWritingFail() throws Exception { ioFactory = null; // Grid should be successfully recovered after stopping.
- IgniteEx recoveredGrid = startGrid(0); - recoveredGrid.cluster().active(true); + grid = startGrid(0); + + grid.cluster().active(true); for (int i = 0; i < failedPosition; i++) { byte payload = (byte) i; - byte[] data = new byte[2048]; + byte[] data = new byte[dataSize]; Arrays.fill(data, payload); - byte[] actualData = (byte[]) recoveredGrid.cache(CACHE_NAME).get(i); + byte[] actualData = (byte[]) grid.cache(CACHE_NAME).get(i); Assert.assertArrayEquals(data, actualData); } } @@ -472,11 +444,6 @@ private static class FilteringFileIOFactory implements FileIOFactory { this.pattern = pattern; } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, WRITE, READ); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... modes) throws IOException { if (file.getName().endsWith(pattern)) @@ -507,11 +474,6 @@ private LimitedSizeFileIOFactory(FileIOFactory delegate, long fsSpaceBytes) { this.availableSpaceBytes = new AtomicLong(fsSpaceBytes); } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return new LimitedSizeFileIO(delegate.create(file), availableSpaceBytes); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { return new LimitedSizeFileIO(delegate.create(file, modes), availableSpaceBytes); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsNoActualWalHistoryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsNoActualWalHistoryTest.java index c2967fef1cb6d..41e027e6c6712 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsNoActualWalHistoryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsNoActualWalHistoryTest.java @@ -36,10 +36,14 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgnitePdsNoActualWalHistoryTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -98,6 +102,7 @@ public class IgnitePdsNoActualWalHistoryTest extends GridCommonAbstractTest { /** * @throws Exception if failed. 
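The `LimitedSizeFileIOFactory` used throughout `IgnitePdsDiskErrorsRecoveringTest` hands every created `FileIO` a shared byte budget (`availableSpaceBytes`) and makes writes fail once it is exhausted, simulating a full disk. The same idea can be sketched with a plain `OutputStream` wrapper (names here are illustrative, not Ignite's):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicLong;

/** Sketch of a shared write budget in the spirit of LimitedSizeFileIO. */
class LimitedOutputStream extends OutputStream {
    private final OutputStream delegate;

    /** Shared across all streams produced by one "factory", like availableSpaceBytes above. */
    private final AtomicLong budget;

    LimitedOutputStream(OutputStream delegate, AtomicLong budget) {
        this.delegate = delegate;
        this.budget = budget;
    }

    /** Fails with IOException once the shared budget runs out. */
    @Override public void write(int b) throws IOException {
        if (budget.decrementAndGet() < 0)
            throw new IOException("Simulated disk full");

        delegate.write(b);
    }

    public static void main(String[] args) {
        AtomicLong budget = new AtomicLong(4);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();

        try (OutputStream out = new LimitedOutputStream(sink, budget)) {
            for (int i = 0; i < 10; i++)
                out.write(i);
        }
        catch (IOException e) {
            System.out.println("failed after " + sink.size() + " bytes"); // failed after 4 bytes
        }
    }
}
```

Because the `AtomicLong` is shared, the budget is consumed across every stream the factory creates, so the failure point is global rather than per file, matching how the test triggers a fault at "the second checkpoint" or "the second wal segment".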
*/ + @Test public void testWalBig() throws Exception { try { IgniteEx ignite = startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsPageReplacementTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsPageReplacementTest.java index 432393e763b44..7be0625237717 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsPageReplacementTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsPageReplacementTest.java @@ -40,11 +40,15 @@ import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for page replacement (rotation with disk) process with enabled persistence. * A lot of reader threads tries to acquire page and checkpointer threads write data. */ +@RunWith(JUnit4.class) public class IgnitePdsPageReplacementTest extends GridCommonAbstractTest { /** */ private static final int NUMBER_OF_SEGMENTS = 64; @@ -113,6 +117,7 @@ private DataStorageConfiguration createDbConfig() { /** * @throws Exception If fail. 
*/ + @Test public void testPageReplacement() throws Exception { final IgniteEx ig = startGrid(0); @@ -201,7 +206,7 @@ private void initPage(PageMemory mem, PageIO pageIO, FullPageId fullId) throws I final long pageAddr = mem.writeLock(fullId.groupId(), fullId.pageId(), page); try { - pageIO.initNewPage(pageAddr, fullId.pageId(), mem.pageSize()); + pageIO.initNewPage(pageAddr, fullId.pageId(), mem.realPageSize(fullId.groupId())); } finally { mem.writeUnlock(fullId.groupId(), fullId.pageId(), page, null, true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsThreadInterruptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsThreadInterruptionTest.java index adaa764aa4ca6..4b7db7dd4d546 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsThreadInterruptionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/file/IgnitePdsThreadInterruptionTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test what interruptions of writing threads do not affect PDS. */ +@RunWith(JUnit4.class) public class IgnitePdsThreadInterruptionTest extends GridCommonAbstractTest { /** */ private static final int PAGE_SIZE = 1 << 12; // 4096 @@ -106,6 +110,7 @@ private DataStorageConfiguration storageConfiguration() { * * @throws Exception If failed. 
*/ + @Test public void testInterruptsOnLFSRead() throws Exception { final Ignite ignite = startGrid(); @@ -185,6 +190,7 @@ public void testInterruptsOnLFSRead() throws Exception { * * @throws Exception */ + @Test public void testInterruptsOnWALWrite() throws Exception { final Ignite ignite = startGrid(); @@ -256,4 +262,4 @@ public void testInterruptsOnWALWrite() throws Exception { log.info("Verified keys: " + verifiedKeys); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/filename/IgniteUidAsConsistentIdMigrationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/filename/IgniteUidAsConsistentIdMigrationTest.java index 6e8ef6598f1da..b3e36c9e1c543 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/filename/IgniteUidAsConsistentIdMigrationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/filename/IgniteUidAsConsistentIdMigrationTest.java @@ -36,6 +36,9 @@ import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_CONSISTENT_ID_BY_HOST_WITHOUT_PORT; import static org.apache.ignite.IgniteSystemProperties.IGNITE_DATA_STORAGE_FOLDER_BY_CONSISTENT_ID; @@ -44,6 +47,7 @@ /** * Test for new and old style persistent storage folders generation */ +@RunWith(JUnit4.class) public class IgniteUidAsConsistentIdMigrationTest extends GridCommonAbstractTest { /** Cache name for test. */ public static final String CACHE_NAME = "dummy"; @@ -160,6 +164,7 @@ private void deleteWorkFiles() throws IgniteCheckedException { * * @throws Exception if failed. 
*/ + @Test public void testNewStyleIdIsGenerated() throws Exception { final Ignite ignite = startActivateFillDataGrid(0); @@ -174,6 +179,7 @@ public void testNewStyleIdIsGenerated() throws Exception { * * @throws Exception if failed. */ + @Test public void testNewStyleIdIsGeneratedInCustomStorePath() throws Exception { placeStorageInTemp = true; final Ignite ignite = startActivateFillDataGrid(0); @@ -196,6 +202,7 @@ public void testNewStyleIdIsGeneratedInCustomStorePath() throws Exception { * * @throws Exception if failed. */ + @Test public void testPreconfiguredConsitentIdIsApplied() throws Exception { this.configuredConsistentId = "someConfiguredConsistentId"; Ignite ignite = startActivateFillDataGrid(0); @@ -210,6 +217,7 @@ public void testPreconfiguredConsitentIdIsApplied() throws Exception { * * @throws Exception if failed */ + @Test public void testRestartOnExistingOldStyleId() throws Exception { final String expDfltConsistentId = "127.0.0.1:47500"; @@ -244,6 +252,7 @@ public void testRestartOnExistingOldStyleId() throws Exception { * * @throws Exception if failed */ + @Test public void testStartWithoutActivate() throws Exception { //start stop grid without activate startGrid(0); @@ -259,6 +268,7 @@ public void testStartWithoutActivate() throws Exception { * * @throws Exception if failed */ + @Test public void testRestartOnSameFolderWillCauseSameUuidGeneration() throws Exception { final UUID uuid; { @@ -290,6 +300,7 @@ public void testRestartOnSameFolderWillCauseSameUuidGeneration() throws Exceptio * * @throws Exception if failed */ + @Test public void testStartNodeAfterDeactivate() throws Exception { final UUID uuid; { @@ -365,6 +376,7 @@ public void testStartNodeAfterDeactivate() throws Exception { * * @throws Exception if failed */ + @Test public void testNodeIndexIncremented() throws Exception { final Ignite ignite0 = startGrid(0); final Ignite ignite1 = startGrid(1); @@ -387,6 +399,7 @@ public void testNodeIndexIncremented() throws Exception { * * 
@throws Exception if failed */ + @Test public void testNewStyleAlwaysSmallestNodeIndexIsCreated() throws Exception { final Ignite ignite0 = startGrid(0); final Ignite ignite1 = startGrid(1); @@ -419,6 +432,7 @@ public void testNewStyleAlwaysSmallestNodeIndexIsCreated() throws Exception { * * @throws Exception if failed */ + @Test public void testNewStyleAlwaysSmallestNodeIndexIsCreatedMultithreaded() throws Exception { final Ignite ignite0 = startGridsMultiThreaded(11); @@ -446,6 +460,7 @@ public void testNewStyleAlwaysSmallestNodeIndexIsCreatedMultithreaded() throws E * * @throws Exception if failed. */ + @Test public void testStartTwoOldStyleNodes() throws Exception { final String expDfltConsistentId1 = "127.0.0.1:47500"; @@ -495,6 +510,7 @@ public void testStartTwoOldStyleNodes() throws Exception { * * @throws Exception if failed. */ + @Test public void testStartOldStyleNodesByCompatibleProperty() throws Exception { clearPropsAfterTest = true; System.setProperty(IGNITE_DATA_STORAGE_FOLDER_BY_CONSISTENT_ID, "true"); @@ -537,6 +553,7 @@ public void testStartOldStyleNodesByCompatibleProperty() throws Exception { * * @throws Exception if failed. */ + @Test public void testStartOldStyleNoPortsNodesByCompatibleProperty() throws Exception { clearPropsAfterTest = true; System.setProperty(IGNITE_DATA_STORAGE_FOLDER_BY_CONSISTENT_ID, "true"); @@ -577,6 +594,7 @@ public void testStartOldStyleNoPortsNodesByCompatibleProperty() throws Exception * * @throws Exception if failed. 
*/ + @Test public void testOldStyleNodeWithUnexpectedPort() throws Exception { this.configuredConsistentId = "127.0.0.1:49999"; //emulated old-style node with not appropriate consistent ID final Ignite ignite = startActivateFillDataGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/FsyncWalRolloverDoesNotBlockTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/FsyncWalRolloverDoesNotBlockTest.java new file mode 100644 index 0000000000000..81da76350747c --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/FsyncWalRolloverDoesNotBlockTest.java @@ -0,0 +1,86 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.pagemem.wal.record.CheckpointRecord; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_WAL_PATH; +import static org.apache.ignite.configuration.WALMode.FSYNC; + +/** */ +@RunWith(JUnit4.class) +public class FsyncWalRolloverDoesNotBlockTest extends GridCommonAbstractTest { + /** */ + private static class RolloverRecord extends CheckpointRecord { + /** */ + private RolloverRecord() { + super(null); + } + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(name); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration().setPersistenceEnabled(true)) + .setWalMode(FSYNC) + .setWalArchivePath(DFLT_WAL_PATH)); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** */ + @Test + public void test() throws Exception { + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + ig.context().cache().context().database().checkpointReadLock(); + + try { + ig.context().cache().context().wal().log(new RolloverRecord()); + } + finally { + ig.context().cache().context().database().checkpointReadUnlock(); + } + } 
+} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteNodeStoppedDuringDisableWALTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteNodeStoppedDuringDisableWALTest.java index a744ab10c1671..f6c2de11299ef 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteNodeStoppedDuringDisableWALTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteNodeStoppedDuringDisableWALTest.java @@ -23,11 +23,9 @@ import java.nio.file.Path; import java.nio.file.SimpleFileVisitor; import java.nio.file.attribute.BasicFileAttributes; -import java.util.HashMap; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteDataStreamer; -import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -39,16 +37,16 @@ import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import org.apache.ignite.internal.processors.cache.persistence.filename.PdsFoldersResolver; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.nio.file.FileVisitResult.CONTINUE; import static 
java.nio.file.Files.walkFileTree; import static org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.CP_FILE_NAME_PATTERN; - import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.INDEX_FILE_NAME; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.META_STORAGE_NAME; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.PART_FILE_PREFIX; @@ -59,16 +57,12 @@ /*** * */ +@RunWith(JUnit4.class) public class IgniteNodeStoppedDuringDisableWALTest extends GridCommonAbstractTest { - /** */ - public static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { IgniteConfiguration cfg = super.getConfiguration(name); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration( new DataStorageConfiguration() .setDefaultDataRegionConfiguration( @@ -79,7 +73,7 @@ public class IgniteNodeStoppedDuringDisableWALTest extends GridCommonAbstractTes cfg.setAutoActivationEnabled(false); - cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME)); + cfg.setCacheConfiguration(defaultCacheConfiguration()); return cfg; } @@ -92,9 +86,15 @@ public class IgniteNodeStoppedDuringDisableWALTest extends GridCommonAbstractTes } /** + * Test checks that after WAL is globally disabled and node is stopped, persistent store is cleaned properly after node restart. + * * @throws Exception If failed. 
*/ + @Test public void test() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10421"); + for (NodeStopPoint nodeStopPoint : NodeStopPoint.values()) { testStopNodeWithDisableWAL(nodeStopPoint); @@ -177,7 +177,7 @@ private void testStopNodeWithDisableWAL(NodeStopPoint nodeStopPoint) throws Exce boolean fail = false; try (WALIterator it = sharedContext.wal().replay(null)) { - dbMgr.applyUpdatesOnRecovery(it, (tup) -> true, (entry) -> true, new HashMap<>()); + dbMgr.applyUpdatesOnRecovery(it, (ptr, rec) -> true, (entry) -> true); } catch (IgniteCheckedException e) { if (nodeStopPoint.needCleanUp) @@ -190,6 +190,8 @@ private void testStopNodeWithDisableWAL(NodeStopPoint nodeStopPoint) throws Exce String msg = nodeStopPoint.toString(); + int pageSize = ig1.configuration().getDataStorageConfiguration().getPageSize(); + if (nodeStopPoint.needCleanUp) { PdsFoldersResolver foldersResolver = ((IgniteEx)ig1).context().pdsFolderResolver(); @@ -215,14 +217,14 @@ private void testStopNodeWithDisableWAL(NodeStopPoint nodeStopPoint) throws Exce if (CP_FILE_NAME_PATTERN.matcher(name).matches()) failed = true; - if (name.startsWith(PART_FILE_PREFIX)) + if (name.startsWith(PART_FILE_PREFIX) && path.toFile().length() > pageSize) failed = true; - if (name.startsWith(INDEX_FILE_NAME)) + if (name.startsWith(INDEX_FILE_NAME) && path.toFile().length() > pageSize) failed = true; if (failed) - fail(msg + " " + filePath); + fail(msg + " " + filePath + " " + path.toFile().length()); return CONTINUE; } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWALTailIsReachedDuringIterationOverArchiveTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWALTailIsReachedDuringIterationOverArchiveTest.java index e3c2c6c5fab1f..e8a181b702553 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWALTailIsReachedDuringIterationOverArchiveTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWALTailIsReachedDuringIterationOverArchiveTest.java @@ -44,11 +44,11 @@ import org.apache.ignite.internal.processors.cache.persistence.wal.reader.IgniteWalIteratorFactory.IteratorParametersBuilder; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.nio.ByteBuffer.allocate; import static java.nio.file.StandardOpenOption.WRITE; @@ -57,10 +57,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteWALTailIsReachedDuringIterationOverArchiveTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** WAL segment size. */ private static final int WAL_SEGMENT_SIZE = 10 * 1024 * 1024; @@ -78,8 +76,6 @@ public class IgniteWALTailIsReachedDuringIterationOverArchiveTest extends GridCo ) ); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME)); return cfg; @@ -118,6 +114,7 @@ public class IgniteWALTailIsReachedDuringIterationOverArchiveTest extends GridCo /** * @throws Exception If failed. */ + @Test public void testStandAloneIterator() throws Exception { IgniteEx ig = grid(); @@ -133,6 +130,7 @@ public void testStandAloneIterator() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testWALManagerIterator() throws Exception { IgniteEx ig = grid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushFailoverTest.java index 351a42cb5eb66..d3988d2a3b3fb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushFailoverTest.java @@ -24,7 +24,6 @@ import java.nio.MappedByteBuffer; import java.nio.file.OpenOption; import org.apache.ignite.IgniteCache; -import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; @@ -44,14 +43,14 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; - -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteWalFlushFailoverTest extends GridCommonAbstractTest { /** */ private static final String TEST_CACHE = "testCache"; @@ -105,6 +104,7 @@ public class IgniteWalFlushFailoverTest extends GridCommonAbstractTest { * * @throws Exception In case of fail */ + @Test public void testErrorOnFlushByTimeout() throws Exception { flushByTimeout = true; flushingErrorTest(); @@ -115,6 +115,7 @@ public void testErrorOnFlushByTimeout() throws Exception { * * @throws Exception In case of fail */ + @Test public void 
testErrorOnDirectFlush() throws Exception { flushByTimeout = false; flushingErrorTest(); @@ -185,11 +186,6 @@ private static class FailingFileIOFactory implements FileIOFactory { this.fail = fail; } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... modes) throws IOException { final FileIO delegate = delegateFactory.create(file, modes); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushMultiNodeFailoverAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushMultiNodeFailoverAbstractSelfTest.java index a28ec5fe1e74c..86ff1f3fc6ff0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushMultiNodeFailoverAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFlushMultiNodeFailoverAbstractSelfTest.java @@ -20,8 +20,9 @@ import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; - import java.nio.MappedByteBuffer; +import java.nio.file.OpenOption; +import java.util.concurrent.atomic.AtomicBoolean; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteSystemProperties; @@ -40,24 +41,21 @@ import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; -import org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.testframework.GridTestUtils; +import 
org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; - -import java.nio.file.OpenOption; -import java.util.concurrent.atomic.AtomicBoolean; - -import static java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests error recovery while node flushing */ +@RunWith(JUnit4.class) public abstract class IgniteWalFlushMultiNodeFailoverAbstractSelfTest extends GridCommonAbstractTest { /** */ private static final String TEST_CACHE = "testCache"; @@ -75,6 +73,9 @@ public abstract class IgniteWalFlushMultiNodeFailoverAbstractSelfTest extends Gr /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10550"); + super.beforeTest(); stopAllGrids(); @@ -145,6 +146,7 @@ protected boolean mmap() { * * @throws Exception In case of fail */ + @Test public void testFailWhileStart() throws Exception { failWhilePut(true); } @@ -154,6 +156,7 @@ public void testFailWhileStart() throws Exception { * * @throws Exception In case of fail */ + @Test public void testFailAfterStart() throws Exception { failWhilePut(false); } @@ -171,7 +174,7 @@ private void failWhilePut(boolean failWhileStart) throws Exception { for (int i = 0; i < ITRS; i++) { while (!Thread.currentThread().isInterrupted()) { try (Transaction tx = grid.transactions().txStart( - TransactionConcurrency.PESSIMISTIC, TransactionIsolation.READ_COMMITTED)) { + TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { cache.put(i, "testValue" + i); tx.commit(); @@ 
-232,8 +235,6 @@ private void failWhilePut(boolean failWhileStart) throws Exception { private void setFileIOFactory(IgniteWriteAheadLogManager wal) { if (wal instanceof FileWriteAheadLogManager) ((FileWriteAheadLogManager)wal).setFileIOFactory(new FailingFileIOFactory(canFail)); - else if (wal instanceof FsyncModeFileWriteAheadLogManager) - ((FsyncModeFileWriteAheadLogManager)wal).setFileIOFactory(new FailingFileIOFactory(canFail)); else fail(wal.getClass().toString()); } @@ -256,11 +257,6 @@ private static class FailingFileIOFactory implements FileIOFactory { this.fail = fail; } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... modes) throws IOException { final FileIO delegate = delegateFactory.create(file, modes); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFormatFileFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFormatFileFailoverTest.java index 5a1a6fa179654..617ec9a2c6f4f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFormatFileFailoverTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalFormatFileFailoverTest.java @@ -1,258 +1,260 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.internal.processors.cache.persistence.db.wal; - -import java.io.File; -import java.io.IOException; -import java.nio.file.OpenOption; -import java.util.Arrays; -import java.util.concurrent.atomic.AtomicReference; -import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.cache.CacheAtomicityMode; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.DataRegionConfiguration; -import org.apache.ignite.configuration.DataStorageConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.configuration.WALMode; -import org.apache.ignite.failure.FailureHandler; -import org.apache.ignite.failure.TestFailureHandler; -import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; -import org.apache.ignite.internal.processors.cache.persistence.StorageException; -import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; -import org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator; -import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; -import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; -import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; -import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -import static 
java.nio.file.StandardOpenOption.CREATE; -import static java.nio.file.StandardOpenOption.READ; -import static java.nio.file.StandardOpenOption.WRITE; - -/** - * - */ -public class IgniteWalFormatFileFailoverTest extends GridCommonAbstractTest { - /** */ - private static final String TEST_CACHE = "testCache"; - - /** */ - private static final String formatFile = "formatFile"; - - /** Fail method name reference. */ - private final AtomicReference failMtdNameRef = new AtomicReference<>(); - - /** */ - private boolean fsync; - - /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - cleanPersistenceDir(); - } - - /** {@inheritDoc} */ - @Override protected void afterTest() throws Exception { - stopAllGrids(); - - cleanPersistenceDir(); - } - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(gridName); - - cfg.setCacheConfiguration(new CacheConfiguration(TEST_CACHE) - .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); - - DataStorageConfiguration memCfg = new DataStorageConfiguration() - .setDefaultDataRegionConfiguration(new DataRegionConfiguration() - .setMaxSize(2048L * 1024 * 1024) - .setPersistenceEnabled(true)) - .setWalMode(fsync ? WALMode.FSYNC : WALMode.BACKGROUND) - .setWalBufferSize(1024 * 1024) - .setWalSegmentSize(512 * 1024) - .setFileIOFactory(new FailingFileIOFactory(failMtdNameRef)); - - cfg.setDataStorageConfiguration(memCfg); - - cfg.setFailureHandler(new TestFailureHandler(false)); - - return cfg; - } - - /** - * @throws Exception If failed. - */ - public void testNodeStartFailedFsync() throws Exception { - fsync = true; - - failMtdNameRef.set(formatFile); - - checkCause(GridTestUtils.assertThrows(log, () -> startGrid(0), IgniteCheckedException.class, null)); - } - - /** - * @throws Exception If failed. 
- */ - public void testFailureHandlerTriggeredFsync() throws Exception { - fsync = true; - - failFormatFileOnClusterActivate(); - } - - /** - * @throws Exception If failed. - */ - public void testFailureHandlerTriggered() throws Exception { - fsync = false; - - failFormatFileOnClusterActivate(); - } - - /** - * @throws Exception If failed. - */ - private void failFormatFileOnClusterActivate() throws Exception { - failMtdNameRef.set(null); - - startGrid(0); - startGrid(1); - - if (!fsync) { - setFileIOFactory(grid(0).context().cache().context().wal()); - setFileIOFactory(grid(1).context().cache().context().wal()); - } - - failMtdNameRef.set(formatFile); - - grid(0).cluster().active(true); - - checkCause(failureHandler(0).awaitFailure(2000).error()); - checkCause(failureHandler(1).awaitFailure(2000).error()); - } - - /** - * @param mtdName Method name. - */ - private static boolean isCalledFrom(String mtdName) { - return isCalledFrom(Thread.currentThread().getStackTrace(), mtdName); - } - - /** - * @param stackTrace Stack trace. - * @param mtdName Method name. - */ - private static boolean isCalledFrom(StackTraceElement[] stackTrace, String mtdName) { - return Arrays.stream(stackTrace).map(StackTraceElement::getMethodName).anyMatch(mtdName::equals); - } - - /** - * @param gridIdx Grid index. - * @return Failure handler configured for grid with given index. - */ - private TestFailureHandler failureHandler(int gridIdx) { - FailureHandler hnd = grid(gridIdx).configuration().getFailureHandler(); - - assertTrue(hnd instanceof TestFailureHandler); - - return (TestFailureHandler)hnd; - } - - /** - * @param t Throwable. 
- */ - private void checkCause(Throwable t) { - StorageException e = X.cause(t, StorageException.class); - - assertNotNull(e); - assertNotNull(e.getMessage()); - assertTrue(e.getMessage().contains("Failed to format WAL segment file")); - - IOException ioe = X.cause(e, IOException.class); - - assertNotNull(ioe); - assertNotNull(ioe.getMessage()); - assertTrue(ioe.getMessage().contains("No space left on device")); - - assertTrue(isCalledFrom(ioe.getStackTrace(), formatFile)); - } - - /** */ - private void setFileIOFactory(IgniteWriteAheadLogManager wal) { - if (wal instanceof FileWriteAheadLogManager) - ((FileWriteAheadLogManager)wal).setFileIOFactory(new FailingFileIOFactory(failMtdNameRef)); - else - fail(wal.getClass().toString()); - } - - /** - * Create File I/O which fails if specific method call present in stack trace. - */ - private static class FailingFileIOFactory implements FileIOFactory { - /** Serial version uid. */ - private static final long serialVersionUID = 0L; - - /** Delegate factory. */ - private final FileIOFactory delegateFactory = new RandomAccessFileIOFactory(); - - /** Fail method name reference. */ - private final AtomicReference failMtdNameRef; - - /** - * @param failMtdNameRef Fail method name reference. - */ - FailingFileIOFactory(AtomicReference failMtdNameRef) { - assertNotNull(failMtdNameRef); - - this.failMtdNameRef = failMtdNameRef; - } - - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - - /** {@inheritDoc} */ - @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { - final FileIO delegate = delegateFactory.create(file, modes); - - return new FileIODecorator(delegate) { - @Override public int write(byte[] buf, int off, int len) throws IOException { - conditionalFail(); - - return super.write(buf, off, len); - } - - @Override public void clear() throws IOException { - conditionalFail(); - - super.clear(); - } - - private void conditionalFail() throws IOException { - String failMtdName = failMtdNameRef.get(); - - if (failMtdName != null && isCalledFrom(failMtdName)) - throw new IOException("No space left on device"); - } - }; - } - } -} +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import java.io.File; +import java.io.IOException; +import java.nio.file.OpenOption; +import java.util.Arrays; +import java.util.concurrent.atomic.AtomicReference; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.failure.FailureHandler; +import org.apache.ignite.failure.TestFailureHandler; +import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; +import org.apache.ignite.internal.processors.cache.persistence.StorageException; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIO; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator; +import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; +import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class IgniteWalFormatFileFailoverTest extends GridCommonAbstractTest { + /** */ + private static final String TEST_CACHE = "testCache"; + + /** */ + private static final String formatFile = "formatFile"; + + /** Fail method name reference. 
*/ + private final AtomicReference failMtdNameRef = new AtomicReference<>(); + + /** */ + private boolean fsync; + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + cfg.setCacheConfiguration(new CacheConfiguration(TEST_CACHE) + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)); + + DataStorageConfiguration memCfg = new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setMaxSize(2048L * 1024 * 1024) + .setPersistenceEnabled(true)) + .setWalMode(fsync ? WALMode.FSYNC : WALMode.BACKGROUND) + .setWalBufferSize(1024 * 1024) + .setWalSegmentSize(512 * 1024) + .setFileIOFactory(new FailingFileIOFactory(failMtdNameRef)); + + cfg.setDataStorageConfiguration(memCfg); + + cfg.setFailureHandler(new TestFailureHandler(false)); + + return cfg; + } + + /** + * @throws Exception If failed. + */ + @Test + public void testNodeStartFailedFsync() throws Exception { + fsync = true; + + failMtdNameRef.set(formatFile); + + checkCause(GridTestUtils.assertThrows(log, () -> startGrid(0), IgniteCheckedException.class, null)); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testFailureHandlerTriggeredFsync() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10035"); + + fsync = true; + + failFormatFileOnClusterActivate(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testFailureHandlerTriggered() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10035"); + + fsync = false; + + failFormatFileOnClusterActivate(); + } + + /** + * @throws Exception If failed. 
+ */ + private void failFormatFileOnClusterActivate() throws Exception { + failMtdNameRef.set(null); + + startGrid(0); + startGrid(1); + + if (!fsync) { + setFileIOFactory(grid(0).context().cache().context().wal()); + setFileIOFactory(grid(1).context().cache().context().wal()); + } + + failMtdNameRef.set(formatFile); + + grid(0).cluster().active(true); + + checkCause(failureHandler(0).awaitFailure(2000).error()); + checkCause(failureHandler(1).awaitFailure(2000).error()); + } + + /** + * @param mtdName Method name. + */ + private static boolean isCalledFrom(String mtdName) { + return isCalledFrom(Thread.currentThread().getStackTrace(), mtdName); + } + + /** + * @param stackTrace Stack trace. + * @param mtdName Method name. + */ + private static boolean isCalledFrom(StackTraceElement[] stackTrace, String mtdName) { + return Arrays.stream(stackTrace).map(StackTraceElement::getMethodName).anyMatch(mtdName::equals); + } + + /** + * @param gridIdx Grid index. + * @return Failure handler configured for grid with given index. + */ + private TestFailureHandler failureHandler(int gridIdx) { + FailureHandler hnd = grid(gridIdx).configuration().getFailureHandler(); + + assertTrue(hnd instanceof TestFailureHandler); + + return (TestFailureHandler)hnd; + } + + /** + * @param t Throwable. 
*/ + private void checkCause(Throwable t) { + StorageException e = X.cause(t, StorageException.class); + + assertNotNull(e); + assertNotNull(e.getMessage()); + assertTrue(e.getMessage().contains("Failed to format WAL segment file")); + + IOException ioe = X.cause(e, IOException.class); + + assertNotNull(ioe); + assertNotNull(ioe.getMessage()); + assertTrue(ioe.getMessage().contains("No space left on device")); + + assertTrue(isCalledFrom(ioe.getStackTrace(), formatFile)); + } + + /** */ + private void setFileIOFactory(IgniteWriteAheadLogManager wal) { + if (wal instanceof FileWriteAheadLogManager) + ((FileWriteAheadLogManager)wal).setFileIOFactory(new FailingFileIOFactory(failMtdNameRef)); + else + fail(wal.getClass().toString()); + } + + /** + * Creates a file I/O which fails if a specific method call is present in the stack trace. + */ + private static class FailingFileIOFactory implements FileIOFactory { + /** Serial version uid. */ + private static final long serialVersionUID = 0L; + + /** Delegate factory. */ + private final FileIOFactory delegateFactory = new RandomAccessFileIOFactory(); + + /** Fail method name reference. */ + private final AtomicReference<String> failMtdNameRef; + + /** + * @param failMtdNameRef Fail method name reference. + */ + FailingFileIOFactory(AtomicReference<String> failMtdNameRef) { + assertNotNull(failMtdNameRef); + + this.failMtdNameRef = failMtdNameRef; + } + + /** {@inheritDoc} */ + @Override public FileIO create(File file, OpenOption...
modes) throws IOException { + final FileIO delegate = delegateFactory.create(file, modes); + + return new FileIODecorator(delegate) { + @Override public int write(byte[] buf, int off, int len) throws IOException { + conditionalFail(); + + return super.write(buf, off, len); + } + + @Override public void clear() throws IOException { + conditionalFail(); + + super.clear(); + } + + private void conditionalFail() throws IOException { + String failMtdName = failMtdNameRef.get(); + + if (failMtdName != null && isCalledFrom(failMtdName)) + throw new IOException("No space left on device"); + } + }; + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalHistoryReservationsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalHistoryReservationsTest.java index 6d2b1f79227b5..278862e958e59 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalHistoryReservationsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalHistoryReservationsTest.java @@ -38,14 +38,19 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_PDS_WAL_REBALANCE_THRESHOLD; /** * */ +@RunWith(JUnit4.class) public class IgniteWalHistoryReservationsTest extends GridCommonAbstractTest { /** */ private volatile boolean client; @@ -106,7 +111,10 @@ public class IgniteWalHistoryReservationsTest extends GridCommonAbstractTest { /** * @throws 
Exception If failed. */ + @Test public void testReservedOnExchange() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + System.setProperty(IGNITE_PDS_WAL_REBALANCE_THRESHOLD, "0"); final int entryCnt = 10_000; @@ -114,7 +122,7 @@ public void testReservedOnExchange() throws Exception { final IgniteEx ig0 = (IgniteEx)startGrids(initGridCnt + 1); - ig0.active(true); + ig0.cluster().active(true); stopGrid(initGridCnt); @@ -251,12 +259,16 @@ private void printProgress(int k) { /** * @throws Exception If failed. */ + @Test public void testRemovesArePreloadedIfHistoryIsAvailable() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10551"); + int entryCnt = 10_000; IgniteEx ig0 = (IgniteEx)startGrids(2); - ig0.active(true); + ig0.cluster().active(true); IgniteCache cache = ig0.cache("cache1"); @@ -270,6 +282,8 @@ public void testRemovesArePreloadedIfHistoryIsAvailable() throws Exception { IgniteEx ig1 = startGrid(1); + awaitPartitionMapExchange(); + IgniteCache cache1 = ig1.cache("cache1"); assertEquals(entryCnt / 2, cache.size()); @@ -286,8 +300,6 @@ public void testRemovesArePreloadedIfHistoryIsAvailable() throws Exception { } } - cache.rebalance().get(); - for (int p = 0; p < ig1.affinity("cache1").partitions(); p++) { GridDhtLocalPartition p0 = ig0.context().cache().cache("cache1").context().topology().localPartition(p); GridDhtLocalPartition p1 = ig1.context().cache().cache("cache1").context().topology().localPartition(p); @@ -300,12 +312,16 @@ public void testRemovesArePreloadedIfHistoryIsAvailable() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNodeIsClearedIfHistoryIsUnavailable() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10551"); + int entryCnt = 10_000; IgniteEx ig0 = (IgniteEx)startGrids(2); - ig0.active(true); + ig0.cluster().active(true); IgniteCache cache = ig0.cache("cache1"); @@ -330,6 +346,8 @@ public void testNodeIsClearedIfHistoryIsUnavailable() throws Exception { IgniteEx ig1 = startGrid(1); + awaitPartitionMapExchange(); + IgniteCache cache1 = ig1.cache("cache1"); assertEquals(entryCnt / 2, cache.size()); @@ -346,8 +364,6 @@ public void testNodeIsClearedIfHistoryIsUnavailable() throws Exception { } } - cache.rebalance().get(); - for (int p = 0; p < ig1.affinity("cache1").partitions(); p++) { GridDhtLocalPartition p0 = ig0.context().cache().cache("cache1").context().topology().localPartition(p); GridDhtLocalPartition p1 = ig1.context().cache().cache("cache1").context().topology().localPartition(p); @@ -360,7 +376,11 @@ public void testNodeIsClearedIfHistoryIsUnavailable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWalHistoryPartiallyRemoved() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10551"); + int entryCnt = 10_000; IgniteEx ig0 = (IgniteEx)startGrids(2); @@ -399,7 +419,10 @@ public void testWalHistoryPartiallyRemoved() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNodeLeftDuringExchange() throws Exception { + MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.ENTRY_LOCK); + System.setProperty(IGNITE_PDS_WAL_REBALANCE_THRESHOLD, "0"); final int entryCnt = 10_000; @@ -407,7 +430,7 @@ public void testNodeLeftDuringExchange() throws Exception { final Ignite ig0 = startGrids(initGridCnt); - ig0.active(true); + ig0.cluster().active(true); IgniteCache cache = ig0.cache("cache1"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorExceptionDuringReadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorExceptionDuringReadTest.java index ccd889aaaaf42..36896bcea4664 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorExceptionDuringReadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorExceptionDuringReadTest.java @@ -34,19 +34,17 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteWalIteratorExceptionDuringReadTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int WAL_SEGMENT_SIZE = 1024 * 1024 * 20; @@ -54,8 +52,6 @@ public class 
IgniteWalIteratorExceptionDuringReadTest extends GridCommonAbstract @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration( new DataStorageConfiguration() .setWalSegmentSize(WAL_SEGMENT_SIZE) @@ -81,6 +77,7 @@ public class IgniteWalIteratorExceptionDuringReadTest extends GridCommonAbstract /** * @throws Exception If failed. */ + @Test public void test() throws Exception { IgniteEx ig = (IgniteEx)startGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorSwitchSegmentTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorSwitchSegmentTest.java index 74db28fed0072..4eeb22395fa14 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorSwitchSegmentTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalIteratorSwitchSegmentTest.java @@ -47,7 +47,6 @@ import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator; import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; -import org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager; import org.apache.ignite.internal.processors.cache.persistence.wal.aware.SegmentAware; import org.apache.ignite.internal.processors.cache.persistence.wal.io.FileInput; import org.apache.ignite.internal.processors.cache.persistence.wal.reader.StandaloneGridKernalContext; @@ -63,6 +62,9 @@ import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.METASTORE_DATA_RECORD; import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.HEADER_RECORD_SIZE; @@ -72,6 +74,7 @@ /*** * Test check correct switch segment if in the tail of segment have garbage. */ +@RunWith(JUnit4.class) public class IgniteWalIteratorSwitchSegmentTest extends GridCommonAbstractTest { /** Segment file size. */ private static final int SEGMENT_SIZE = 1024 * 1024; @@ -88,12 +91,6 @@ public class IgniteWalIteratorSwitchSegmentTest extends GridCommonAbstractTest { 2 }; - /** FileWriteAheadLogManagers for check. */ - private Class[] checkWalManagers = new Class[] { - FileWriteAheadLogManager.class, - FsyncModeFileWriteAheadLogManager.class - }; - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { super.beforeTest(); @@ -113,6 +110,7 @@ public class IgniteWalIteratorSwitchSegmentTest extends GridCommonAbstractTest { * * @throws Exception If some thing failed. */ + @Test public void testCheckSerializer() throws Exception { for (int serVer : checkSerializerVers) { checkInvariantSwitchSegmentSize(serVer); @@ -153,7 +151,7 @@ private void checkInvariantSwitchSegmentSize(int serVer) throws Exception { null, null, null, - + null, null) ).createSerializer(serVer); @@ -169,17 +167,14 @@ private void checkInvariantSwitchSegmentSize(int serVer) throws Exception { * * @throws Exception If some thing failed. 
*/ + @Test public void testInvariantSwitchSegment() throws Exception { for (int serVer : checkSerializerVers) { - for (Class walMgrClass : checkWalManagers) { - try { - log.info("checking wal manager " + walMgrClass + " with serializer version " + serVer); - - checkInvariantSwitchSegment(walMgrClass, serVer); - } - finally { - U.delete(Paths.get(U.defaultWorkDirectory())); - } + try { + checkInvariantSwitchSegment(serVer); + } + finally { + U.delete(Paths.get(U.defaultWorkDirectory())); } } } @@ -189,26 +184,26 @@ public void testInvariantSwitchSegment() throws Exception { * * @throws Exception If some thing failed. */ + @Test public void testSwitchReadingSegmentFromWorkToArchive() throws Exception { for (int serVer : checkSerializerVers) { - try { - checkSwitchReadingSegmentDuringIteration(FileWriteAheadLogManager.class, serVer); - } - finally { - U.delete(Paths.get(U.defaultWorkDirectory())); - } + try { + checkSwitchReadingSegmentDuringIteration(serVer); + } + finally { + U.delete(Paths.get(U.defaultWorkDirectory())); + } } } /** - * @param walMgrClass WAL manager class. * @param serVer WAL serializer version. * @throws Exception If some thing failed. */ - private void checkInvariantSwitchSegment(Class walMgrClass, int serVer) throws Exception { + private void checkInvariantSwitchSegment(int serVer) throws Exception { String workDir = U.defaultWorkDirectory(); - T2 initTup = initiate(walMgrClass, serVer, workDir); + T2 initTup = initiate(serVer, workDir); IgniteWriteAheadLogManager walMgr = initTup.get1(); @@ -271,9 +266,8 @@ private void checkInvariantSwitchSegment(Class walMgrClass, int serVer) throws E // Add more record for rollover to the next segment. 
recordsToWrite += 100; - for (int i = 0; i < recordsToWrite; i++) { + for (int i = 0; i < recordsToWrite; i++) walMgr.log(new MetastoreDataRecord(rec.key(), rec.value())); - } walMgr.flush(null, true); @@ -326,14 +320,13 @@ private void checkInvariantSwitchSegment(Class walMgrClass, int serVer) throws E } /** - * @param walMgrClass WAL manager class. * @param serVer WAL serializer version. * @throws Exception If some thing failed. */ - private void checkSwitchReadingSegmentDuringIteration(Class walMgrClass, int serVer) throws Exception { + private void checkSwitchReadingSegmentDuringIteration(int serVer) throws Exception { String workDir = U.defaultWorkDirectory(); - T2 initTup = initiate(walMgrClass, serVer, workDir); + T2 initTup = initiate(serVer, workDir); IgniteWriteAheadLogManager walMgr = initTup.get1(); @@ -369,8 +362,7 @@ private void checkSwitchReadingSegmentDuringIteration(Class walMgrClass, int ser () -> { // Check that switch segment works as expected and all record is reachable. try (WALIterator it = walMgr.replay(null)) { - Object handle = getFieldValueHierarchy(it, "currWalSegment"); - + Object handle = getFieldValueHierarchy(it, "currWalSegment"); FileInput in = getFieldValueHierarchy(handle, "in"); Object delegate = getFieldValueHierarchy(in.io(), "delegate"); Channel ch = getFieldValueHierarchy(delegate, "ch"); @@ -423,14 +415,12 @@ private void checkSwitchReadingSegmentDuringIteration(Class walMgrClass, int ser /*** * Initiate WAL manager. * - * @param walMgrClass WAL manager class. * @param serVer WAL serializer version. * @param workDir Work directory path. * @return Tuple of WAL manager and WAL record serializer. * @throws IgniteCheckedException If some think failed. 
*/ private T2 initiate( - Class walMgrClass, int serVer, String workDir ) throws IgniteCheckedException { @@ -448,6 +438,7 @@ private T2 initiate( .setWalMode(WALMode.FSYNC) .setWalPath(workDir + WORK_SUB_DIR) .setWalArchivePath(workDir + ARCHIVE_SUB_DIR) + .setFileIOFactory(new RandomAccessFileIOFactory()) ); cfg.setEventStorageSpi(new NoopEventStorageSpi()); @@ -464,18 +455,9 @@ private T2 initiate( } }; - IgniteWriteAheadLogManager walMgr = null; + IgniteWriteAheadLogManager walMgr = new FileWriteAheadLogManager(kctx); - if (walMgrClass.equals(FileWriteAheadLogManager.class)) { - walMgr = new FileWriteAheadLogManager(kctx); - - GridTestUtils.setFieldValue(walMgr, "serializerVer", serVer); - } - else if (walMgrClass.equals(FsyncModeFileWriteAheadLogManager.class)) { - walMgr = new FsyncModeFileWriteAheadLogManager(kctx); - - GridTestUtils.setFieldValue(walMgr, "serializerVersion", serVer); - } + GridTestUtils.setFieldValue(walMgr, "serializerVer", serVer); GridCacheSharedContext ctx = new GridCacheSharedContext<>( kctx, @@ -494,6 +476,7 @@ else if (walMgrClass.equals(FsyncModeFileWriteAheadLogManager.class)) { null, null, null, + null, null ); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRebalanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRebalanceTest.java index 57565bfd58c48..ab4b9e057889a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRebalanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRebalanceTest.java @@ -63,6 +63,9 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
java.nio.file.StandardOpenOption.CREATE; import static java.nio.file.StandardOpenOption.READ; @@ -72,6 +75,7 @@ /** * Historical WAL rebalance base test. */ +@RunWith(JUnit4.class) public class IgniteWalRebalanceTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -151,6 +155,7 @@ public class IgniteWalRebalanceTest extends GridCommonAbstractTest { * * @throws Exception if failed. */ + @Test public void testSimple() throws Exception { IgniteEx ig0 = startGrid(0); IgniteEx ig1 = startGrid(1); @@ -190,6 +195,7 @@ public void testSimple() throws Exception { * * @throws Exception If failed. */ + @Test public void testRebalanceRemoves() throws Exception { IgniteEx ig0 = startGrid(0); IgniteEx ig1 = startGrid(1); @@ -237,6 +243,7 @@ public void testRebalanceRemoves() throws Exception { * * @throws Exception If failed. */ + @Test public void testWithLocalWalChange() throws Exception { System.setProperty(IgniteSystemProperties.IGNITE_DISABLE_WAL_DURING_REBALANCING, "true"); @@ -326,6 +333,7 @@ else if (k % 3 == 1) * * @throws Exception If failed. */ + @Test public void testWithGlobalWalChange() throws Exception { // Prepare some data. IgniteEx crd = (IgniteEx) startGrids(3); @@ -405,6 +413,7 @@ public void testWithGlobalWalChange() throws Exception { * * @throws Exception If failed. */ + @Test public void testRebalanceCancelOnSupplyError() throws Exception { // Prepare some data. IgniteEx crd = (IgniteEx) startGrids(3); @@ -612,11 +621,6 @@ static class FailingIOFactory implements FileIOFactory { this.delegate = delegate; } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, WRITE, READ); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { FileIO delegateIO = delegate.create(file, modes); @@ -645,4 +649,4 @@ public void reset() { failRead = false; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryPPCTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryPPCTest.java index d3e6278084026..f0cff40f51716 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryPPCTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryPPCTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteWalRecoveryPPCTest extends GridCommonAbstractTest { /** */ private boolean fork; @@ -138,6 +142,7 @@ public class IgniteWalRecoveryPPCTest extends GridCommonAbstractTest { /** * @throws Exception if failed. 
*/ + @Test public void testWalSimple() throws Exception { try { IgniteEx ignite = startGrid(1); @@ -234,6 +239,7 @@ else if (i % 2 == 0) /** * */ + @Test public void testDynamicallyStartedNonPersistentCache() throws Exception { try { IgniteEx ignite = startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoverySeveralRestartsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoverySeveralRestartsTest.java index db20ace425b6f..cb2ea1d1e95dd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoverySeveralRestartsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoverySeveralRestartsTest.java @@ -37,10 +37,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteWalRecoverySeveralRestartsTest extends GridCommonAbstractTest { /** */ public static final int PAGE_SIZE = 1024; @@ -113,6 +117,7 @@ public class IgniteWalRecoverySeveralRestartsTest extends GridCommonAbstractTest /** * @throws Exception if failed. */ + @Test public void testWalRecoverySeveralRestarts() throws Exception { try { IgniteEx ignite = startGrid(1); @@ -168,6 +173,7 @@ public void testWalRecoverySeveralRestarts() throws Exception { /** * @throws Exception if failed. */ + @Test public void testWalRecoveryWithDynamicCache() throws Exception { try { IgniteEx ignite = startGrid(1); @@ -221,6 +227,7 @@ public void testWalRecoveryWithDynamicCache() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testWalRecoveryWithDynamicCacheLargeObjects() throws Exception { try { IgniteEx ignite = startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalSerializerVersionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalSerializerVersionTest.java index dcb8e7f452719..33d66c04b25bb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalSerializerVersionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalSerializerVersionTest.java @@ -43,11 +43,11 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteCallable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_SERIALIZER_VERSION; import static org.apache.ignite.transactions.TransactionState.PREPARED; @@ -55,16 +55,12 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteWalSerializerVersionTest extends GridCommonAbstractTest { - /** Ip finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { IgniteConfiguration cfg = super.getConfiguration(name); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration(new DataStorageConfiguration() .setDefaultDataRegionConfiguration(new DataRegionConfiguration() .setPersistenceEnabled(true) @@ -76,6 +72,7 @@ public class IgniteWalSerializerVersionTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCheckDifferentSerializerVersions() throws Exception { System.setProperty(IGNITE_WAL_SERIALIZER_VERSION, "1"); @@ -127,6 +124,7 @@ public void testCheckDifferentSerializerVersions() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCheckDifferentSerializerVersionsAndLogTimestamp() throws Exception { IgniteCallable<List<WALRecord>> recordsFactory = new IgniteCallable<List<WALRecord>>() { @Override public List<WALRecord> call() throws Exception { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionAfterRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionAfterRestartTest.java new file mode 100644 index 0000000000000..73933a3496e70 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionAfterRestartTest.java @@ -0,0 +1,158 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License.
You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.stream.Collectors; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.events.WalSegmentCompactedEvent; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteBiTuple; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.events.EventType.EVT_WAL_SEGMENT_COMPACTED; + +/** */ +@RunWith(JUnit4.class) +public class WalCompactionAfterRestartTest 
extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(name); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setPersistenceEnabled(true) + .setMaxSize(200L * 1024 * 1024)) + .setWalMode(WALMode.LOG_ONLY) + .setWalSegmentSize(512 * 1024) + .setWalCompactionEnabled(true) + .setMaxWalArchiveSize(2 * 512 * 1024) + ); + + CacheConfiguration ccfg = new CacheConfiguration(); + + ccfg.setName(DEFAULT_CACHE_NAME); + ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); + ccfg.setAffinity(new RendezvousAffinityFunction(false, 16)); + ccfg.setBackups(0); + + cfg.setCacheConfiguration(ccfg); + cfg.setConsistentId(name); + + cfg.setIncludeEventTypes(EVT_WAL_SEGMENT_COMPACTED); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void test() throws Exception { + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + doCachePuts(ig, 10_000); + + ig.cluster().active(false); + + stopGrid(0); + + IgniteEx ig0 = startGrid(0); + + ig0.cluster().active(true); + + List<IgniteBiTuple<Long, Long>> discrepancies = Collections.synchronizedList(new ArrayList<>()); + + ig0.events().localListen(e -> { + long evtSegIdx = ((WalSegmentCompactedEvent)e).getAbsWalSegmentIdx(); + long lastCompactedIdx = ig0.context().cache().context().wal().lastCompactedSegment(); + + if (lastCompactedIdx < 0 || lastCompactedIdx > evtSegIdx) + discrepancies.add(F.t(evtSegIdx, lastCompactedIdx)); + + return true; + }, EVT_WAL_SEGMENT_COMPACTED); + + doCachePuts(ig0, 5_000); + + stopGrid(0); + + if (!discrepancies.isEmpty()) { + fail("Discrepancies (EVT_WAL_SEGMENT_COMPACTED index vs. lastCompactedSegment):" + System.lineSeparator() + + discrepancies.stream() + .map(t -> String.format("%d <-> %d", t.get1(), t.get2())) + .collect(Collectors.joining(System.lineSeparator()))); + } + } + + /** */ + private void doCachePuts(IgniteEx ig, long millis) throws IgniteCheckedException { + IgniteCache cache = ig.getOrCreateCache(DEFAULT_CACHE_NAME); + + AtomicBoolean stop = new AtomicBoolean(); + + IgniteInternalFuture putFut = GridTestUtils.runMultiThreadedAsync(() -> { + ThreadLocalRandom rnd = ThreadLocalRandom.current(); + + while (!stop.get()) + cache.put(rnd.nextInt(), "Ignite".getBytes()); + }, 4, "cache-filler"); + + U.sleep(millis); + + stop.set(true); + + putFut.get(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionSwitchOnTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionSwitchOnTest.java new file mode 100644 index 0000000000000..70183ae151313 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionSwitchOnTest.java @@
-0,0 +1,144 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import java.io.File; +import java.io.FileFilter; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; +import org.apache.ignite.internal.util.lang.GridAbsPredicate; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Load without compaction -> Stop -> Enable WAL Compaction -> Start. + */ +public class WalCompactionSwitchOnTest extends GridCommonAbstractTest { + /** Compaction enabled. 
*/ + private boolean compactionEnabled; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration() + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setPersistenceEnabled(true) + .setMaxSize(256 * 1024 * 1024)) + .setWalSegmentSize(512 * 1024) + .setWalSegments(100) + .setWalCompactionEnabled(compactionEnabled)); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); + } + + /** + * Load without compaction -> Stop -> Enable WAL Compaction -> Start. + * + * @throws Exception On exception. + */ + @Test + public void testWalCompactionSwitch() throws Exception { + IgniteEx ex = startGrid(0); + + ex.cluster().active(true); + + IgniteCache cache = ex.getOrCreateCache( + new CacheConfiguration() + .setName("c1") + .setGroupName("g1") + .setCacheMode(CacheMode.PARTITIONED) + ); + + for (int i = 0; i < 500; i++) + cache.put(i, i); + + File walDir = U.resolveWorkDirectory( + ex.configuration().getWorkDirectory(), + "db/wal/node00-" + ex.localNode().consistentId(), + false + ); + + forceCheckpoint(); + + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + File[] archivedFiles = walDir.listFiles(new FileFilter() { + @Override public boolean accept(File pathname) { + return pathname.getName().endsWith(".wal"); + } + }); + + return archivedFiles.length == 39; + } + }, 5000); + + stopGrid(0); + + compactionEnabled = true; + + ex = startGrid(0); + + ex.cluster().active(true); + + File archiveDir = U.resolveWorkDirectory( + ex.configuration().getWorkDirectory(), + "db/wal/archive/node00-" + ex.localNode().consistentId(), + false + ); + + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + File[] 
archivedFiles = archiveDir.listFiles(new FileFilter() { + @Override public boolean accept(File pathname) { + return pathname.getName().endsWith(FilePageStoreManager.ZIP_SUFFIX); + } + }); + + return archivedFiles.length == 20; + } + }, 5000); + + File[] tmpFiles = archiveDir.listFiles(new FileFilter() { + @Override public boolean accept(File pathname) { + return pathname.getName().endsWith(FilePageStoreManager.TMP_SUFFIX); + } + }); + + assertEquals(0, tmpFiles.length); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionTest.java index e61745505380b..9cf44cd2735c3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalCompactionTest.java @@ -1,19 +1,19 @@ /* -* Licensed to the Apache Software Foundation (ASF) under one or more -* contributor license agreements. See the NOTICE file distributed with -* this work for additional information regarding copyright ownership. -* The ASF licenses this file to You under the Apache License, Version 2.0 -* (the "License"); you may not use this file except in compliance with -* the License. You may obtain a copy of the License at -* -* http://www.apache.org/licenses/LICENSE-2.0 -* -* Unless required by applicable law or agreed to in writing, software -* distributed under the License is distributed on an "AS IS" BASIS, -* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -* See the License for the specific language governing permissions and -* limitations under the License. 
-*/ + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ package org.apache.ignite.internal.processors.cache.persistence.db.wal; import java.io.File; @@ -21,6 +21,7 @@ import java.io.RandomAccessFile; import java.util.Arrays; import java.util.Comparator; +import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheWriteSynchronizationMode; @@ -36,20 +37,18 @@ import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import org.apache.ignite.internal.processors.cache.persistence.wal.FileDescriptor; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_PAGE_SIZE; /** * */ +@RunWith(JUnit4.class) public class WalCompactionTest extends 
GridCommonAbstractTest { - /** Ip finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Wal segment size. */ private static final int WAL_SEGMENT_SIZE = 4 * 1024 * 1024; @@ -69,8 +68,6 @@ public class WalCompactionTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { IgniteConfiguration cfg = super.getConfiguration(name); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration(new DataStorageConfiguration() .setDefaultDataRegionConfiguration(new DataRegionConfiguration() .setPersistenceEnabled(true) @@ -117,6 +114,7 @@ public class WalCompactionTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testApplyingUpdatesFromCompactedWal() throws Exception { testApplyingUpdatesFromCompactedWal(false); } @@ -126,6 +124,7 @@ public void testApplyingUpdatesFromCompactedWal() throws Exception { * * @throws Exception If failed. */ + @Test public void testApplyingUpdatesFromCompactedWalWhenCompressorDisabled() throws Exception { testApplyingUpdatesFromCompactedWal(true); } @@ -149,8 +148,10 @@ private void testApplyingUpdatesFromCompactedWal(boolean switchOffCompressor) th } // Spam WAL to move all data records to compressible WAL zone. - for (int i = 0; i < WAL_SEGMENT_SIZE / DFLT_PAGE_SIZE * 2; i++) - ig.context().cache().context().wal().log(new PageSnapshot(new FullPageId(-1, -1), new byte[DFLT_PAGE_SIZE])); + for (int i = 0; i < WAL_SEGMENT_SIZE / DFLT_PAGE_SIZE * 2; i++) { + ig.context().cache().context().wal().log(new PageSnapshot(new FullPageId(-1, -1), new byte[DFLT_PAGE_SIZE], + DFLT_PAGE_SIZE)); + } // WAL archive segment is allowed to be compressed when it's at least one checkpoint away from current WAL head. 
ig.context().cache().context().database().wakeupForCheckpoint("Forced checkpoint").get(); @@ -219,11 +220,26 @@ else if (arr[i] != 1) { } assertFalse(fail); + + // Check that WAL compaction is successfully reset when the baseline topology changes. + stopAllGrids(); + + Ignite ignite = startGrids(2); + + ignite.cluster().active(true); + + resetBaselineTopology(); + + // This node will join with a different baseline topology. + startGrid(2); + + awaitPartitionMapExchange(); } /** * */ + @Test public void testCompressorToleratesEmptyWalSegmentsFsync() throws Exception { testCompressorToleratesEmptyWalSegments(WALMode.FSYNC); } @@ -231,6 +247,7 @@ public void testCompressorToleratesEmptyWalSegmentsFsync() throws Exception { /** * */ + @Test public void testCompressorToleratesEmptyWalSegmentsLogOnly() throws Exception { testCompressorToleratesEmptyWalSegments(WALMode.LOG_ONLY); } @@ -304,6 +321,7 @@ private void testCompressorToleratesEmptyWalSegments(WALMode walMode) throws Exc /** * @throws Exception If failed. */ + @Test public void testSeekingStartInCompactedSegment() throws Exception { IgniteEx ig = (IgniteEx)startGrids(3); ig.cluster().active(true); @@ -347,8 +365,10 @@ public void testSeekingStartInCompactedSegment() throws Exception { } // Spam WAL to move all data records to compressible WAL zone. - for (int i = 0; i < WAL_SEGMENT_SIZE / DFLT_PAGE_SIZE * 2; i++) - ig.context().cache().context().wal().log(new PageSnapshot(new FullPageId(-1, -1), new byte[DFLT_PAGE_SIZE])); + for (int i = 0; i < WAL_SEGMENT_SIZE / DFLT_PAGE_SIZE * 2; i++) { + ig.context().cache().context().wal().log(new PageSnapshot(new FullPageId(-1, -1), new byte[DFLT_PAGE_SIZE], + DFLT_PAGE_SIZE)); + } // WAL archive segment is allowed to be compressed when it's at least one checkpoint away from current WAL head.
ig.context().cache().context().database().wakeupForCheckpoint("Forced checkpoint").get(); @@ -368,13 +388,12 @@ public void testSeekingStartInCompactedSegment() throws Exception { File[] cpMarkers = cpMarkersDir.listFiles(new FilenameFilter() { @Override public boolean accept(File dir, String name) { - return !( - name.equals(cpMarkersToSave[0].getName()) || - name.equals(cpMarkersToSave[1].getName()) || - name.equals(cpMarkersToSave[2].getName()) || - name.equals(cpMarkersToSave[3].getName()) || - name.equals(cpMarkersToSave[4].getName()) - ); + for (File cpMarker : cpMarkersToSave) { + if (cpMarker.getName().equals(name)) + return false; + } + + return true; } }); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalDeletionArchiveAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalDeletionArchiveAbstractTest.java index f8aeb6a86c4ba..27a121c8cb4cd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalDeletionArchiveAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalDeletionArchiveAbstractTest.java @@ -23,6 +23,8 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteException; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -34,16 +36,17 @@ import org.apache.ignite.internal.processors.cache.persistence.wal.FileDescriptor; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE; /** * */ +@RunWith(JUnit4.class) public abstract class WalDeletionArchiveAbstractTest extends GridCommonAbstractTest { - /** */ - public static final String CACHE_NAME = "SomeCache"; - /** * Start grid with override default configuration via customConfigurator. */ @@ -71,6 +74,13 @@ private Ignite startGrid(Consumer customConfigurator) return ignite; } + /** */ + private CacheConfiguration cacheConfiguration() { + CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); + + return ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); + } + /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { stopAllGrids(); @@ -95,6 +105,7 @@ private Ignite startGrid(Consumer customConfigurator) /** * History size parameters consistency check. Should be set just one of wal history size or max wal archive size. */ + @Test public void testGridDoesNotStart_BecauseBothWalHistorySizeAndMaxWalArchiveSizeUsed() throws Exception { //given: wal history size and max wal archive size are both set. IgniteConfiguration configuration = getConfiguration(getTestIgniteInstanceName()); @@ -125,6 +136,7 @@ private String findSourceMessage(Throwable ex) { /** * Correct delete archived wal files. 
*/ + @Test public void testCorrectDeletedArchivedWalFiles() throws Exception { //given: configured grid with setted max wal archive size long maxWalArchiveSize = 2 * 1024 * 1024; @@ -136,7 +148,7 @@ public void testCorrectDeletedArchivedWalFiles() throws Exception { long allowedThresholdWalArchiveSize = maxWalArchiveSize / 2; - IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME); + IgniteCache cache = ignite.getOrCreateCache(cacheConfiguration()); //when: put to cache more than 2 MB for (int i = 0; i < 500; i++) @@ -165,6 +177,7 @@ public void testCorrectDeletedArchivedWalFiles() throws Exception { /** * Checkpoint triggered depends on wal size. */ + @Test public void testCheckpointStarted_WhenWalHasTooBigSizeWithoutCheckpoint() throws Exception { //given: configured grid with max wal archive size = 1MB, wal segment size = 512KB Ignite ignite = startGrid(dbCfg -> { @@ -173,7 +186,7 @@ public void testCheckpointStarted_WhenWalHasTooBigSizeWithoutCheckpoint() throws GridCacheDatabaseSharedManager dbMgr = gridDatabase(ignite); - IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME); + IgniteCache cache = ignite.getOrCreateCache(cacheConfiguration()); for (int i = 0; i < 500; i++) cache.put(i, i); @@ -191,6 +204,7 @@ public void testCheckpointStarted_WhenWalHasTooBigSizeWithoutCheckpoint() throws * * @deprecated Test old removing process depends on WalHistorySize. 
*/ + @Test public void testCheckpointHistoryRemovingByWalHistorySize() throws Exception { //given: configured grid with wal history size = 10 int walHistorySize = 10; @@ -201,7 +215,7 @@ public void testCheckpointHistoryRemovingByWalHistorySize() throws Exception { GridCacheDatabaseSharedManager dbMgr = gridDatabase(ignite); - IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME); + IgniteCache cache = ignite.getOrCreateCache(cacheConfiguration()); //when: put to cache and do checkpoint int testNumberOfCheckpoint = walHistorySize * 2; @@ -225,6 +239,7 @@ public void testCheckpointHistoryRemovingByWalHistorySize() throws Exception { * Correct delete checkpoint history from memory depends on IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE. WAL files * doesn't delete because deleting was disabled. */ + @Test public void testCorrectDeletedCheckpointHistoryButKeepWalFiles() throws Exception { System.setProperty(IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE, "2"); //given: configured grid with disabled WAL removing. 
@@ -234,7 +249,7 @@ public void testCorrectDeletedCheckpointHistoryButKeepWalFiles() throws Exceptio GridCacheDatabaseSharedManager dbMgr = gridDatabase(ignite); - IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME); + IgniteCache cache = ignite.getOrCreateCache(cacheConfiguration()); //when: put to cache for (int i = 0; i < 500; i++) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalPathsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalPathsTest.java index 7141fed0807f4..d64953e578c41 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalPathsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalPathsTest.java @@ -23,8 +23,12 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** Tests equal paths to WAL storage and WAL archive. */ +@RunWith(JUnit4.class) public class WalPathsTest extends GridCommonAbstractTest { /** WalPath and WalArchivePath. */ private File walDir; @@ -67,6 +71,7 @@ private IgniteConfiguration getConfig(boolean relativePath) throws Exception { * * @throws Exception If failed. */ + @Test public void testWalStoreAndArchivePathsEquality() throws Exception { IgniteConfiguration cfg = getConfig(false); @@ -78,6 +83,7 @@ public void testWalStoreAndArchivePathsEquality() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testWalStoreAndArchiveAbsolutAndRelativePathsEquality() throws Exception { final IgniteConfiguration cfg = getConfig(true); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRecoveryTxLogicalRecordsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRecoveryTxLogicalRecordsTest.java index 261167a7d6170..9837ce6f1d6d9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRecoveryTxLogicalRecordsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRecoveryTxLogicalRecordsTest.java @@ -49,8 +49,8 @@ import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager; import org.apache.ignite.internal.processors.cache.IgniteRebalanceIterator; -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtDemandedPartitionsMap; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; @@ -65,10 +65,14 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class WalRecoveryTxLogicalRecordsTest extends GridCommonAbstractTest { /** Cache name. 
*/ private static final String CACHE_NAME = "cache"; @@ -146,6 +150,7 @@ public class WalRecoveryTxLogicalRecordsTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testWalTxSimple() throws Exception { Ignite ignite = startGrid(); @@ -221,6 +226,7 @@ public void testWalTxSimple() throws Exception { /** * @throws Exception if failed. */ + @Test public void testWalRecoveryRemoves() throws Exception { Ignite ignite = startGrid(); @@ -309,6 +315,7 @@ public void testWalRecoveryRemoves() throws Exception { /** * @throws Exception if failed. */ + @Test public void testHistoricalRebalanceIterator() throws Exception { System.setProperty(IgniteSystemProperties.IGNITE_PDS_WAL_REBALANCE_THRESHOLD, "0"); @@ -473,6 +480,7 @@ public void testHistoricalRebalanceIterator() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWalAfterPreloading() throws Exception { Ignite ignite = startGrid(); @@ -515,6 +523,7 @@ public void testWalAfterPreloading() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRecoveryRandomPutRemove() throws Exception { try { pageSize = 1024; @@ -572,6 +581,7 @@ public void testRecoveryRandomPutRemove() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRecoveryNoPageLost1() throws Exception { recoveryNoPageLost(false); } @@ -579,13 +589,17 @@ public void testRecoveryNoPageLost1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRecoveryNoPageLost2() throws Exception { recoveryNoPageLost(true); } /** + * Test checks that the number of pages in each page store is the same before and after node restart. + * + * @throws Exception If failed.
*/ + @Test public void testRecoveryNoPageLost3() throws Exception { try { pageSize = 1024; @@ -679,7 +693,7 @@ private void recoveryNoPageLost(boolean checkpoint) throws Exception { pages = allocatedPages(ignite, CACHE2_NAME); - ignite.close(); + stopGrid(0, true); } } finally { @@ -721,6 +735,7 @@ private List allocatedPages(Ignite ignite, String cacheName) throws Exc /** * @throws Exception If failed. */ + @Test public void testFreeListRecovery() throws Exception { try { pageSize = 1024; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingFsyncTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingFsyncTest.java new file mode 100644 index 0000000000000..7454e5f68dfe2 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingFsyncTest.java @@ -0,0 +1,32 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import org.apache.ignite.configuration.WALMode; +import org.jetbrains.annotations.NotNull; + +/** + * + */ +public class WalRolloverRecordLoggingFsyncTest extends WalRolloverRecordLoggingTest { + + /** {@inheritDoc} */ + @NotNull @Override public WALMode walMode() { + return WALMode.FSYNC; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingLogOnlyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingLogOnlyTest.java new file mode 100644 index 0000000000000..765fdeb5c4556 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingLogOnlyTest.java @@ -0,0 +1,32 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import org.apache.ignite.configuration.WALMode; +import org.jetbrains.annotations.NotNull; + +/** + * + */ +public class WalRolloverRecordLoggingLogOnlyTest extends WalRolloverRecordLoggingTest { + + /** {@inheritDoc} */ + @NotNull @Override public WALMode walMode() { + return WALMode.LOG_ONLY; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingTest.java new file mode 100644 index 0000000000000..e8e39b34aadf1 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverRecordLoggingTest.java @@ -0,0 +1,154 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import java.util.concurrent.ThreadLocalRandom; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.failure.StopNodeOrHaltFailureHandler; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; +import org.apache.ignite.internal.pagemem.wal.record.CheckpointRecord; +import org.apache.ignite.internal.pagemem.wal.record.RolloverType; +import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public abstract class WalRolloverRecordLoggingTest extends GridCommonAbstractTest { + /** */ + private static class RolloverRecord extends CheckpointRecord { + /** */ + private RolloverRecord() { + super(null); + } + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(name); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setPersistenceEnabled(true) + .setMaxSize(40 * 1024 * 1024)) + .setWalMode(walMode()) + 
.setWalSegmentSize(4 * 1024 * 1024) + ); + + cfg.setFailureHandler(new StopNodeOrHaltFailureHandler(false, 0)); + + return cfg; + } + + /** + * @return Wal mode. + */ + @NotNull public abstract WALMode walMode(); + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** */ + @Test + public void testAvoidInfinityWaitingOnRolloverOfSegment() throws Exception { + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteCache cache = ig.getOrCreateCache(DEFAULT_CACHE_NAME); + + long startTime = U.currentTimeMillis(); + long duration = 5_000; + + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync( + () -> { + ThreadLocalRandom random = ThreadLocalRandom.current(); + + while (U.currentTimeMillis() - startTime < duration) + cache.put(random.nextInt(100_000), random.nextInt(100_000)); + }, + 8, "cache-put-thread"); + + Thread t = new Thread(() -> { + do { + try { + U.sleep(100); + } + catch (IgniteInterruptedCheckedException e) { + // No-op. 
+ } + + ig.context().cache().context().database().wakeupForCheckpoint("test"); + } while (U.currentTimeMillis() - startTime < duration); + }); + + t.start(); + + IgniteWriteAheadLogManager walMgr = ig.context().cache().context().wal(); + + IgniteCacheDatabaseSharedManager dbMgr = ig.context().cache().context().database(); + + RolloverRecord rec = new RolloverRecord(); + + do { + try { + dbMgr.checkpointReadLock(); + + try { + walMgr.log(rec, RolloverType.NEXT_SEGMENT); + } + finally { + dbMgr.checkpointReadUnlock(); + } + } + catch (IgniteCheckedException e) { + log.error(e.getMessage(), e); + } + } while (U.currentTimeMillis() - startTime < duration); + + fut.get(); + + t.join(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverTypesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverTypesTest.java new file mode 100644 index 0000000000000..aa1a65bf6780a --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/WalRolloverTypesTest.java @@ -0,0 +1,370 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. 
+*/ +package org.apache.ignite.internal.processors.cache.persistence.db.wal; + +import java.util.concurrent.ThreadLocalRandom; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; +import org.apache.ignite.internal.pagemem.wal.WALPointer; +import org.apache.ignite.internal.pagemem.wal.record.CheckpointRecord; +import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.spi.checkpoint.noop.NoopCheckpointSpi; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_WAL_ARCHIVE_PATH; +import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_WAL_PATH; +import static org.apache.ignite.configuration.WALMode.FSYNC; +import static org.apache.ignite.configuration.WALMode.LOG_ONLY; +import static org.apache.ignite.internal.pagemem.wal.record.RolloverType.CURRENT_SEGMENT; +import static org.apache.ignite.internal.pagemem.wal.record.RolloverType.NEXT_SEGMENT; +import static org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.HEADER_RECORD_SIZE; + +/** + * + */ +@RunWith(JUnit4.class) +public class WalRolloverTypesTest extends 
GridCommonAbstractTest { + /** */ + private WALMode walMode; + + /** */ + private boolean disableWALArchiving; + + /** */ + private static class AdHocWALRecord extends CheckpointRecord { + /** */ + private AdHocWALRecord() { + super(null); + } + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String name) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(name); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setPersistenceEnabled(true) + .setMaxSize(20 * 1024 * 1024)) + .setWalMode(walMode) + .setWalArchivePath(disableWALArchiving ? DFLT_WAL_PATH : DFLT_WAL_ARCHIVE_PATH) + .setWalSegmentSize(4 * 1024 * 1024)) + .setCheckpointSpi(new NoopCheckpointSpi()) + ; + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** */ + @Test + public void testCurrentSegmentTypeLogOnlyModeArchiveOn() throws Exception { + checkCurrentSegmentType(LOG_ONLY, false); + } + + /** */ + @Test + public void testCurrentSegmentTypeLogOnlyModeArchiveOff() throws Exception { + checkCurrentSegmentType(LOG_ONLY, true); + } + + /** */ + @Test + public void testCurrentSegmentTypeLogFsyncModeArchiveOn() throws Exception { + checkCurrentSegmentType(FSYNC, false); + } + + /** */ + @Test + public void testCurrentSegmentTypeLogFsyncModeArchiveOff() throws Exception { + checkCurrentSegmentType(FSYNC, true); + } + + /** */ + @Test + public void testNextSegmentTypeLogOnlyModeArchiveOn() throws Exception { + checkNextSegmentType(LOG_ONLY, false); + } + + /** */ + @Test + public void testNextSegmentTypeLogOnlyModeArchiveOff() throws Exception { + checkNextSegmentType(LOG_ONLY, true); + } + + /** */ + @Test + public void 
testNextSegmentTypeFsyncModeArchiveOn() throws Exception { + checkNextSegmentType(FSYNC, false); + } + + /** */ + @Test + public void testNextSegmentTypeFsyncModeArchiveOff() throws Exception { + checkNextSegmentType(FSYNC, true); + } + + /** */ + private void checkCurrentSegmentType(WALMode mode, boolean disableArch) throws Exception { + walMode = mode; + disableWALArchiving = disableArch; + + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteWriteAheadLogManager walMgr = ig.context().cache().context().wal(); + + ig.context().cache().context().database().checkpointReadLock(); + + try { + WALPointer ptr = walMgr.log(new AdHocWALRecord(), CURRENT_SEGMENT); + + assertEquals(0, ((FileWALPointer)ptr).index()); + } + finally { + ig.context().cache().context().database().checkpointReadUnlock(); + } + } + + /** */ + private void checkNextSegmentType(WALMode mode, boolean disableArch) throws Exception { + walMode = mode; + disableWALArchiving = disableArch; + + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteWriteAheadLogManager walMgr = ig.context().cache().context().wal(); + + ig.context().cache().context().database().checkpointReadLock(); + + try { + WALPointer ptr = walMgr.log(new AdHocWALRecord(), NEXT_SEGMENT); + + assertEquals(1, ((FileWALPointer)ptr).index()); + } + finally { + ig.context().cache().context().database().checkpointReadUnlock(); + } + } + + /** */ + @Test + public void testNextSegmentTypeWithCacheActivityLogOnlyModeArchiveOn() throws Exception { + checkNextSegmentTypeWithCacheActivity(LOG_ONLY, false); + } + + /** */ + @Test + public void testNextSegmentTypeWithCacheActivityLogOnlyModeArchiveOff() throws Exception { + checkNextSegmentTypeWithCacheActivity(LOG_ONLY, true); + } + + /** */ + @Test + public void testNextSegmentTypeWithCacheActivityFsyncModeArchiveOn() throws Exception { + checkNextSegmentTypeWithCacheActivity(FSYNC, false); + } + + /** */ + @Test + public void 
testNextSegmentTypeWithCacheActivityFsyncModeArchiveOff() throws Exception { + checkNextSegmentTypeWithCacheActivity(FSYNC, true); + } + + /** + * Under load, ensures the record gets into very beginning of the segment in {@code NEXT_SEGMENT} log mode. + */ + private void checkNextSegmentTypeWithCacheActivity(WALMode mode, boolean disableArch) throws Exception { + walMode = mode; + disableWALArchiving = disableArch; + + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteCache cache = ig.getOrCreateCache(DEFAULT_CACHE_NAME); + + final long testDuration = 30_000; + + long startTime = U.currentTimeMillis(); + + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync( + () -> { + ThreadLocalRandom random = ThreadLocalRandom.current(); + + while (U.currentTimeMillis() - startTime < testDuration) + cache.put(random.nextInt(100), random.nextInt(100_000)); + }, + 8, "cache-put-thread"); + + IgniteWriteAheadLogManager walMgr = ig.context().cache().context().wal(); + + IgniteCacheDatabaseSharedManager dbMgr = ig.context().cache().context().database(); + + AdHocWALRecord markerRecord = new AdHocWALRecord(); + + WALPointer ptr0; + WALPointer ptr1; + + do { + try { + U.sleep(1000); + + ptr0 = walMgr.log(markerRecord); + + dbMgr.checkpointReadLock(); + + try { + ptr1 = walMgr.log(markerRecord, NEXT_SEGMENT); + } + finally { + dbMgr.checkpointReadUnlock(); + } + + assertTrue(ptr0 instanceof FileWALPointer); + assertTrue(ptr1 instanceof FileWALPointer); + + assertTrue(((FileWALPointer)ptr0).index() < ((FileWALPointer)ptr1).index()); + + assertEquals(HEADER_RECORD_SIZE, ((FileWALPointer)ptr1).fileOffset()); + } + catch (IgniteCheckedException e) { + log.error(e.getMessage(), e); + } + } + while (U.currentTimeMillis() - startTime < testDuration); + + fut.get(); + } + + /** */ + @Test + public void testCurrentSegmentTypeWithCacheActivityLogOnlyModeArchiveOn() throws Exception { + checkCurrentSegmentTypeWithCacheActivity(LOG_ONLY, false); + } + + /** */ + @Test 
+ public void testCurrentSegmentTypeWithCacheActivityLogOnlyModeArchiveOff() throws Exception { + checkCurrentSegmentTypeWithCacheActivity(LOG_ONLY, true); + } + + /** */ + @Test + public void testCurrentSegmentTypeWithCacheActivityFsyncModeArchiveOn() throws Exception { + checkCurrentSegmentTypeWithCacheActivity(FSYNC, false); + } + + /** */ + @Test + public void testCurrentSegmentTypeWithCacheActivityFsyncModeArchiveOff() throws Exception { + checkCurrentSegmentTypeWithCacheActivity(FSYNC, true); + } + + /** + * Under load, ensures the record gets into the very end of the segment in {@code CURRENT_SEGMENT} log mode. + */ + private void checkCurrentSegmentTypeWithCacheActivity(WALMode mode, boolean disableArch) throws Exception { + walMode = mode; + disableWALArchiving = disableArch; + + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteCache cache = ig.getOrCreateCache(DEFAULT_CACHE_NAME); + + final long testDuration = 30_000; + + long startTime = U.currentTimeMillis(); + + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync( + () -> { + ThreadLocalRandom random = ThreadLocalRandom.current(); + + while (U.currentTimeMillis() - startTime < testDuration) + cache.put(random.nextInt(100), random.nextInt(100_000)); + }, + 8, "cache-put-thread"); + + IgniteWriteAheadLogManager walMgr = ig.context().cache().context().wal(); + + IgniteCacheDatabaseSharedManager dbMgr = ig.context().cache().context().database(); + + AdHocWALRecord markerRecord = new AdHocWALRecord(); + + WALPointer ptr0; + WALPointer ptr1; + + do { + try { + U.sleep(1000); + + dbMgr.checkpointReadLock(); + + try { + ptr0 = walMgr.log(markerRecord, CURRENT_SEGMENT); + } + finally { + dbMgr.checkpointReadUnlock(); + } + + ptr1 = walMgr.log(markerRecord); + + assertTrue(ptr0 instanceof FileWALPointer); + assertTrue(ptr1 instanceof FileWALPointer); + + assertTrue(((FileWALPointer)ptr0).index() < ((FileWALPointer)ptr1).index()); + } + catch (IgniteCheckedException e) { + 
log.error(e.getMessage(), e); + } + } + while (U.currentTimeMillis() - startTime < testDuration); + + fut.get(); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteAbstractWalIteratorInvalidCrcTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteAbstractWalIteratorInvalidCrcTest.java index 0b53bb8252051..a855ae561ce68 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteAbstractWalIteratorInvalidCrcTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteAbstractWalIteratorInvalidCrcTest.java @@ -46,11 +46,12 @@ import org.apache.ignite.internal.processors.cache.persistence.wal.reader.IgniteWalIteratorFactory; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.nio.ByteBuffer.allocate; import static java.nio.file.StandardOpenOption.WRITE; @@ -59,10 +60,8 @@ /** * */ +@RunWith(JUnit4.class) public abstract class IgniteAbstractWalIteratorInvalidCrcTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Size of inserting dummy value. 
*/ private static final int VALUE_SIZE = 4 * 1024; @@ -82,8 +81,6 @@ public abstract class IgniteAbstractWalIteratorInvalidCrcTest extends GridCommon @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setDataStorageConfiguration( new DataStorageConfiguration() .setWalSegmentSize(WAL_SEGMENT_SIZE) @@ -150,6 +147,7 @@ public abstract class IgniteAbstractWalIteratorInvalidCrcTest extends GridCommon * Test that iteration fails if one of archive segments contains record with invalid CRC. * @throws Exception If failed. */ + @Test public void testArchiveCorruptedPtr() throws Exception { doTest((archiveDescs, descs) -> archiveDescs.get(random.nextInt(archiveDescs.size())), false, true); } @@ -159,6 +157,7 @@ public void testArchiveCorruptedPtr() throws Exception { * and it is not the tail segment. * @throws Exception If failed. */ + @Test public void testNotTailCorruptedPtr() throws Exception { doTest((archiveDescs, descs) -> descs.get(random.nextInt(descs.size() - 1)), true, true); } @@ -168,6 +167,7 @@ public void testNotTailCorruptedPtr() throws Exception { * Test that iteration does not fail if tail segment in working directory contains record with invalid CRC. * @throws Exception If failed. 
*/ + @Test public void testTailCorruptedPtr() throws Exception { doTest((archiveDescs, descs) -> descs.get(descs.size() - 1), false, false); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteDataIntegrityTests.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteDataIntegrityTests.java index 59dd3b7e7dc9b..208db7438f3fa 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteDataIntegrityTests.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteDataIntegrityTests.java @@ -17,35 +17,40 @@ package org.apache.ignite.internal.processors.cache.persistence.db.wal.crc; -import junit.framework.TestCase; import java.io.EOFException; import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.util.concurrent.ThreadLocalRandom; + import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory; import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory; import org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferExpander; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; import org.apache.ignite.internal.processors.cache.persistence.wal.io.FileInput; import org.apache.ignite.internal.processors.cache.persistence.wal.io.SimpleFileInput; import org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException; -import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; /** * */ -public class IgniteDataIntegrityTests extends TestCase { +public class IgniteDataIntegrityTests { 
/** File input. */ private SimpleFileInput fileInput; /** Buffer expander. */ private ByteBufferExpander expBuf; - /** {@inheritDoc} */ - @Override protected void setUp() throws Exception { - super.setUp(); - + /** */ + @Before + public void setUp() throws Exception { File file = File.createTempFile("integrity", "dat"); file.deleteOnExit(); @@ -59,6 +64,7 @@ public class IgniteDataIntegrityTests extends TestCase { ); ByteBuffer buf = ByteBuffer.allocate(1024); + ThreadLocalRandom curr = ThreadLocalRandom.current(); for (int i = 0; i < 1024; i+=16) { @@ -66,7 +72,7 @@ public class IgniteDataIntegrityTests extends TestCase { buf.putInt(curr.nextInt()); buf.putInt(curr.nextInt()); buf.position(i); - buf.putInt(PureJavaCrc32.calcCrc32(buf, 12)); + buf.putInt(FastCrc.calcCrc(buf, 12)); } buf.rewind(); @@ -75,8 +81,9 @@ public class IgniteDataIntegrityTests extends TestCase { fileInput.io().force(); } - /** {@inheritDoc} */ - @Override protected void tearDown() throws Exception { + /** */ + @After + public void tearDown() throws Exception { fileInput.io().close(); expBuf.close(); } @@ -84,6 +91,7 @@ public class IgniteDataIntegrityTests extends TestCase { /** * */ + @Test public void testSuccessfulPath() throws Exception { checkIntegrity(); } @@ -91,6 +99,7 @@ public void testSuccessfulPath() throws Exception { /** * */ + @Test public void testIntegrityViolationChecking() throws Exception { toggleOneRandomBit(0, 1024 - 16); @@ -106,6 +115,7 @@ public void testIntegrityViolationChecking() throws Exception { /** * */ + @Test public void testSkipingLastCorruptedEntry() throws Exception { toggleOneRandomBit(1024 - 16, 1024); @@ -121,6 +131,7 @@ public void testSkipingLastCorruptedEntry() throws Exception { /** * */ + @Test public void testExpandBuffer() { ByteBufferExpander expBuf = new ByteBufferExpander(24, ByteOrder.nativeOrder()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgnitePureJavaCrcCompatibility.java 
b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgnitePureJavaCrcCompatibility.java new file mode 100644 index 0000000000000..4f49990c603b0 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgnitePureJavaCrcCompatibility.java @@ -0,0 +1,58 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.persistence.db.wal.crc; + +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc; +import org.apache.ignite.internal.processors.cache.persistence.wal.crc.PureJavaCrc32; + +import java.nio.ByteBuffer; +import java.util.concurrent.ThreadLocalRandom; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; + +/** + * Compatibility test: compares the legacy {@link PureJavaCrc32} implementation against the {@code java.util.zip.CRC32}-based {@link FastCrc}. + */ +public class IgnitePureJavaCrcCompatibility { + /** + * Tests that both CRC implementations produce equal results.
+ * @throws Exception If failed. + */ + @Test + public void testAlgoEqual() throws Exception { + ByteBuffer buf = ByteBuffer.allocate(1024); + + ThreadLocalRandom curr = ThreadLocalRandom.current(); + + for (int i = 0; i < 1024; i+=16) { + buf.putInt(curr.nextInt()); + buf.putInt(curr.nextInt()); + buf.putInt(curr.nextInt()); + // Checksum the same 12 bytes of each 16-byte block with both implementations. + + buf.position(i); + int crc0 = FastCrc.calcCrc(buf, 12); + + buf.position(i); + int crc1 = PureJavaCrc32.calcCrc32(buf, 12); + + assertEquals(crc0, crc1); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteReplayWalIteratorInvalidCrcTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteReplayWalIteratorInvalidCrcTest.java index 756ef78798032..0fbc7777134bf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteReplayWalIteratorInvalidCrcTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/crc/IgniteReplayWalIteratorInvalidCrcTest.java @@ -22,10 +22,14 @@ import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; import org.apache.ignite.internal.pagemem.wal.WALIterator; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteReplayWalIteratorInvalidCrcTest extends IgniteAbstractWalIteratorInvalidCrcTest { /** {@inheritDoc} */ @NotNull @Override protected WALMode getWalMode() { @@ -49,6 +53,7 @@ public class IgniteReplayWalIteratorInvalidCrcTest extends IgniteAbstractWalIter * {@inheritDoc} * Case is not relevant to the replay iterator.
*/ + @Test @Override public void testNotTailCorruptedPtr() { } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/reader/IgniteWalReaderTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/reader/IgniteWalReaderTest.java index beab13883a7e0..c167559a91803 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/reader/IgniteWalReaderTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/reader/IgniteWalReaderTest.java @@ -38,6 +38,7 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; import javax.cache.Cache; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; @@ -55,13 +56,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; import org.apache.ignite.events.WalSegmentArchivedEvent; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.pagemem.wal.WALIterator; import org.apache.ignite.internal.pagemem.wal.WALPointer; import org.apache.ignite.internal.pagemem.wal.record.DataEntry; import org.apache.ignite.internal.pagemem.wal.record.DataRecord; -import org.apache.ignite.internal.pagemem.wal.record.LazyDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.MarshalledDataEntry; import org.apache.ignite.internal.pagemem.wal.record.TxRecord; import org.apache.ignite.internal.pagemem.wal.record.UnwrapDataEntry; +import org.apache.ignite.internal.pagemem.wal.record.UnwrappedDataEntry; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.GridCacheOperation; @@ -76,20 +79,21 @@ import 
org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.logger.NullLogger; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.Arrays.fill; import static org.apache.ignite.events.EventType.EVT_WAL_SEGMENT_ARCHIVED; import static org.apache.ignite.events.EventType.EVT_WAL_SEGMENT_COMPACTED; import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.DATA_RECORD; -import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.TX_RECORD; +import static org.apache.ignite.internal.pagemem.wal.record.WALRecord.RecordType.MVCC_DATA_RECORD; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.CREATE; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.DELETE; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; @@ -98,10 +102,8 @@ /** * Test suite for WAL segments reader and event generator. */ +@RunWith(JUnit4.class) public class IgniteWalReaderTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Wal segments count */ private static final int WAL_SEGMENTS = 10; @@ -123,9 +125,6 @@ public class IgniteWalReaderTest extends GridCommonAbstractTest { /** Custom wal mode. 
*/ private WALMode customWalMode; - /** Clear properties in afterTest() method. */ - private boolean clearProps; - /** Set WAL and Archive path to same value. */ private boolean setWalAndArchiveToSameVal; @@ -136,8 +135,6 @@ public class IgniteWalReaderTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - CacheConfiguration ccfg = new CacheConfiguration<>(CACHE_NAME); ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); @@ -184,8 +181,6 @@ public class IgniteWalReaderTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { - stopAllGrids(); - cleanPersistenceDir(); } @@ -195,13 +190,13 @@ public class IgniteWalReaderTest extends GridCommonAbstractTest { cleanPersistenceDir(); - if (clearProps) - System.clearProperty(IgniteSystemProperties.IGNITE_WAL_LOG_TX_RECORDS); + System.clearProperty(IgniteSystemProperties.IGNITE_WAL_LOG_TX_RECORDS); } /** * @throws Exception if failed. */ + @Test public void testFillWalAndReadRecords() throws Exception { setWalAndArchiveToSameVal = false; @@ -267,7 +262,7 @@ private int iterateAndCount(WALIterator walIter) throws IgniteCheckedException { WALRecord walRecord = tup.get2(); - if (walRecord.type() == DATA_RECORD) { + if (walRecord.type() == DATA_RECORD || walRecord.type() == MVCC_DATA_RECORD) { DataRecord record = (DataRecord)walRecord; for (DataEntry entry : record.writeEntries()) { @@ -293,6 +288,7 @@ private int iterateAndCount(WALIterator walIter) throws IgniteCheckedException { * * @throws Exception if failed. 
*/ + @Test public void testArchiveCompletedEventFired() throws Exception { assertTrue(checkWhetherWALRelatedEventFired(EVT_WAL_SEGMENT_ARCHIVED)); } @@ -302,6 +298,7 @@ public void testArchiveCompletedEventFired() throws Exception { * * @throws Exception if failed. */ + @Test public void testArchiveCompactedEventFired() throws Exception { boolean oldEnableWalCompaction = enableWalCompaction; @@ -353,6 +350,7 @@ private boolean checkWhetherWALRelatedEventFired(int evtType) throws Exception { * * @throws Exception if failure occurs. */ + @Test public void testArchiveIncompleteSegmentAfterInactivity() throws Exception { AtomicBoolean waitingForEvt = new AtomicBoolean(); @@ -399,6 +397,7 @@ public void testArchiveIncompleteSegmentAfterInactivity() throws Exception { * * @throws Exception if failed. */ + @Test public void testFillWalForExactSegmentsCount() throws Exception { customWalMode = WALMode.FSYNC; @@ -483,6 +482,7 @@ private boolean remove(Map m, Object key, Object val) { * * @throws Exception if failed. */ + @Test public void testTxFillWalAndExtractDataRecords() throws Exception { Ignite ignite0 = startGrid(); @@ -584,6 +584,7 @@ private void scanIterateAndCount( /** * @throws Exception if failed. */ + @Test public void testFillWalWithDifferentTypes() throws Exception { Ignite ig = startGrid(); @@ -778,6 +779,7 @@ else if (val12 instanceof BinaryObject) { * * @throws Exception if failed. */ + @Test public void testReadEmptyWal() throws Exception { customWalMode = WALMode.FSYNC; @@ -809,6 +811,60 @@ public void testReadEmptyWal() throws Exception { ); } + /** + * Tests WAL iterator which uses shared cache context of currently started Ignite node. 
+ */ + @Test + public void testIteratorWithCurrentKernelContext() throws Exception { + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + int cntEntries = 100; + + putDummyRecords(ignite, cntEntries); + + String workDir = U.defaultWorkDirectory(); + + IgniteWalIteratorFactory factory = new IgniteWalIteratorFactory(log); + + IteratorParametersBuilder iterParametersBuilder = + createIteratorParametersBuilder(workDir, genDbSubfolderName(ignite, 0)) + .filesOrDirs(workDir) + .binaryMetadataFileStoreDir(null) + .marshallerMappingFileStoreDir(null) + .sharedContext(ignite.context().cache().context()); + + AtomicInteger cnt = new AtomicInteger(); + + IgniteBiInClosure objConsumer = (key, val) -> { + if (val instanceof IndexedObject) { + assertEquals(key, ((IndexedObject)val).iVal); + assertEquals(key, cnt.getAndIncrement()); + } + }; + + iterateAndCountDataRecord(factory.iterator(iterParametersBuilder.copy()), objConsumer, null); + + assertEquals(cntEntries, cnt.get()); + + // Test without converting non primary types. + iterParametersBuilder.keepBinary(true); + + cnt.set(0); + + IgniteBiInClosure binObjConsumer = (key, val) -> { + if (val instanceof BinaryObject) { + assertEquals(key, ((BinaryObject)val).field("iVal")); + assertEquals(key, cnt.getAndIncrement()); + } + }; + + iterateAndCountDataRecord(factory.iterator(iterParametersBuilder.copy()), binObjConsumer, null); + + assertEquals(cntEntries, cnt.get()); + } + /** * Creates and fills cache with data. * @@ -845,6 +901,7 @@ private void createCache2(Ignite ig, CacheAtomicityMode mode) { * * @throws Exception if failed. */ + @Test public void testRemoveOperationPresentedForDataEntry() throws Exception { runRemoveOperationTest(CacheAtomicityMode.TRANSACTIONAL); } @@ -854,7 +911,11 @@ public void testRemoveOperationPresentedForDataEntry() throws Exception { * * @throws Exception if failed. 
*/ + @Test public void testRemoveOperationPresentedForDataEntryForAtomic() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + return; + runRemoveOperationTest(CacheAtomicityMode.ATOMIC); } @@ -941,6 +1002,7 @@ private void runRemoveOperationTest(CacheAtomicityMode mode) throws Exception { * * @throws Exception if failed. */ + @Test public void testPutAllTxIntoTwoNodes() throws Exception { Ignite ignite = startGrid("node0"); Ignite ignite1 = startGrid(1); @@ -1040,9 +1102,8 @@ public void testPutAllTxIntoTwoNodes() throws Exception { * * @throws Exception if failed. */ + @Test public void testTxRecordsReadWoBinaryMeta() throws Exception { - clearProps = true; - System.setProperty(IgniteSystemProperties.IGNITE_WAL_LOG_TX_RECORDS, "true"); Ignite ignite = startGrid("node0"); @@ -1081,6 +1142,7 @@ public void testTxRecordsReadWoBinaryMeta() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCheckBoundsIterator() throws Exception { Ignite ignite = startGrid("node0"); @@ -1287,26 +1349,34 @@ private Map iterateAndCountDataRecord( WALRecord walRecord = tup.get2(); - if (walRecord.type() == DATA_RECORD && walRecord instanceof DataRecord) { - DataRecord dataRecord = (DataRecord)walRecord; + WALRecord.RecordType type = walRecord.type(); - if (dataRecordHnd != null) - dataRecordHnd.apply(dataRecord); + //noinspection EnumSwitchStatementWhichMissesCases + switch (type) { + case DATA_RECORD: + // Fallthrough. 
+ case MVCC_DATA_RECORD: { + assert walRecord instanceof DataRecord; - List entries = dataRecord.writeEntries(); + DataRecord dataRecord = (DataRecord)walRecord; - for (DataEntry entry : entries) { - GridCacheVersion globalTxId = entry.nearXidVersion(); + if (dataRecordHnd != null) + dataRecordHnd.apply(dataRecord); - Object unwrappedKeyObj; - Object unwrappedValObj; + List entries = dataRecord.writeEntries(); - if (entry instanceof UnwrapDataEntry) { - UnwrapDataEntry unwrapDataEntry = (UnwrapDataEntry)entry; + for (DataEntry entry : entries) { + GridCacheVersion globalTxId = entry.nearXidVersion(); + + Object unwrappedKeyObj; + Object unwrappedValObj; + + if (entry instanceof UnwrappedDataEntry) { + UnwrappedDataEntry unwrapDataEntry = (UnwrappedDataEntry)entry; unwrappedKeyObj = unwrapDataEntry.unwrappedKey(); unwrappedValObj = unwrapDataEntry.unwrappedValue(); } - else if (entry instanceof LazyDataEntry) { + else if (entry instanceof MarshalledDataEntry) { unwrappedKeyObj = null; unwrappedValObj = null; //can't check value @@ -1314,35 +1384,43 @@ else if (entry instanceof LazyDataEntry) { else { final CacheObject val = entry.value(); - unwrappedValObj = val instanceof BinaryObject ? val : val.value(null, false); + unwrappedValObj = val instanceof BinaryObject ? val : val.value(null, false); - final CacheObject key = entry.key(); + final CacheObject key = entry.key(); - unwrappedKeyObj = key instanceof BinaryObject ? key : key.value(null, false); - } + unwrappedKeyObj = key instanceof BinaryObject ? 
key : key.value(null, false); + } - if (DUMP_RECORDS) - log.info("//Entry operation " + entry.op() + "; cache Id" + entry.cacheId() + "; " + - "under transaction: " + globalTxId + - //; entry " + entry + - "; Key: " + unwrappedKeyObj + - "; Value: " + unwrappedValObj); + if (DUMP_RECORDS) + log.info("//Entry operation " + entry.op() + "; cache Id" + entry.cacheId() + "; " + + "under transaction: " + globalTxId + + //; entry " + entry + + "; Key: " + unwrappedKeyObj + + "; Value: " + unwrappedValObj); - if (cacheObjHnd != null && (unwrappedKeyObj != null || unwrappedValObj != null)) - cacheObjHnd.apply(unwrappedKeyObj, unwrappedValObj); + if (cacheObjHnd != null && (unwrappedKeyObj != null || unwrappedValObj != null)) + cacheObjHnd.apply(unwrappedKeyObj, unwrappedValObj); - Integer entriesUnderTx = entriesUnderTxFound.get(globalTxId); + Integer entriesUnderTx = entriesUnderTxFound.get(globalTxId); - entriesUnderTxFound.put(globalTxId, entriesUnderTx == null ? 1 : entriesUnderTx + 1); + entriesUnderTxFound.put(globalTxId, entriesUnderTx == null ? 
1 : entriesUnderTx + 1); + } } - } - else if (walRecord.type() == TX_RECORD && walRecord instanceof TxRecord) { - TxRecord txRecord = (TxRecord)walRecord; - GridCacheVersion globalTxId = txRecord.nearXidVersion(); - if (DUMP_RECORDS) - log.info("//Tx Record, state: " + txRecord.state() + - "; nearTxVersion" + globalTxId); + break; + + case TX_RECORD: + // Fallthrough + case MVCC_TX_RECORD: { + assert walRecord instanceof TxRecord; + + TxRecord txRecord = (TxRecord)walRecord; + GridCacheVersion globalTxId = txRecord.nearXidVersion(); + + if (DUMP_RECORDS) + log.info("//Tx Record, state: " + txRecord.state() + + "; nearTxVersion" + globalTxId); + } } } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/FileDownloaderTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/FileDownloaderTest.java index 6f01d93c3b487..b5c11bef768a3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/FileDownloaderTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/FileDownloaderTest.java @@ -28,6 +28,9 @@ import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.runAsync; import static org.junit.Assert.*; @@ -35,6 +38,7 @@ /** * FileDownloader test */ +@RunWith(JUnit4.class) public class FileDownloaderTest extends GridCommonAbstractTest { /** */ private static final Path DOWNLOADER_PATH = new File("download").toPath(); @@ -68,6 +72,7 @@ public class FileDownloaderTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void test() throws Exception { assertTrue(UPLOADER_PATH.toFile().createNewFile()); assertTrue(!DOWNLOADER_PATH.toFile().exists()); @@ -120,4 +125,4 @@ public void test() throws Exception { assertArrayEquals(Files.readAllBytes(UPLOADER_PATH), Files.readAllBytes(DOWNLOADER_PATH)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/IgniteMetaStorageBasicTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/IgniteMetaStorageBasicTest.java index 18375150d06d4..91e233b04620d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/IgniteMetaStorageBasicTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/metastorage/IgniteMetaStorageBasicTest.java @@ -17,26 +17,33 @@ package org.apache.ignite.internal.processors.cache.persistence.metastorage; import java.io.Serializable; +import java.util.Collection; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Random; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; /** * Single place to add for basic MetaStorage tests. */ public class IgniteMetaStorageBasicTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); @@ -51,12 +58,6 @@ public class IgniteMetaStorageBasicTest extends GridCommonAbstractTest { cfg.setDataStorageConfiguration(storageCfg); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } @@ -76,6 +77,348 @@ public class IgniteMetaStorageBasicTest extends GridCommonAbstractTest { cleanPersistenceDir(); } + /** + * + */ + @Test + public void testMetaStorageMassivePutFixed() throws Exception { + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteCacheDatabaseSharedManager db = ig.context().cache().context().database(); + + MetaStorage metaStorage = db.metaStorage(); + + assertNotNull(metaStorage); + + Random rnd = new Random(); + + db.checkpointReadLock(); + + int size; + try { + for (int i = 0; i < 10_000; i++) { + size = rnd.nextBoolean() ? 
3500 : 2 * 3500; + String key = "TEST_KEY_" + (i % 1000); + + byte[] arr = new byte[size]; + rnd.nextBytes(arr); + + metaStorage.remove(key); + + metaStorage.putData(key, arr/*b.toString().getBytes()*/); + } + } + finally { + db.checkpointReadUnlock(); + } + } + + /** + * + */ + @Test + public void testMetaStorageMassivePutRandom() throws Exception { + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteCacheDatabaseSharedManager db = ig.context().cache().context().database(); + + MetaStorage metaStorage = db.metaStorage(); + + assertNotNull(metaStorage); + + Random rnd = new Random(); + + db.checkpointReadLock(); + + int size; + try { + for (int i = 0; i < 50_000; i++) { + size = 100 + rnd.nextInt(9000); + + String key = "TEST_KEY_" + (i % 2_000); + + byte[] arr = new byte[size]; + rnd.nextBytes(arr); + + metaStorage.remove(key); + + metaStorage.putData(key, arr); + } + } + finally { + db.checkpointReadUnlock(); + } + + stopGrid(); + } + + /** + * @param metaStorage Meta storage. + * @param size Size. 
+ * @param from Start key index. + */ + private Map<String, byte[]> putDataToMetaStorage(MetaStorage metaStorage, int size, int from) throws IgniteCheckedException { + Map<String, byte[]> res = new HashMap<>(); + + for (Iterator<IgniteBiTuple<String, byte[]>> it = generateTestData(size, from).iterator(); it.hasNext(); ) { + IgniteBiTuple<String, byte[]> d = it.next(); + + metaStorage.putData(d.getKey(), d.getValue()); + + res.put(d.getKey(), d.getValue()); + } + + return res; + } + + /** + * Testing data migration between metastorage partitions (delete partition case) + */ + @Test + public void testDeletePartitionFromMetaStorageMigration() throws Exception { + final Map<String, byte[]> testData = new HashMap<>(); + + MetaStorage.PRESERVE_LEGACY_METASTORAGE_PARTITION_ID = true; + + try { + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteCacheDatabaseSharedManager db = ig.context().cache().context().database(); + + MetaStorage metaStorage = db.metaStorage(); + + assertNotNull(metaStorage); + + db.checkpointReadLock(); + + try { + testData.putAll(putDataToMetaStorage(metaStorage, 1_000, 0)); + } + finally { + db.checkpointReadUnlock(); + } + + db.waitForCheckpoint("Test"); + + ((GridCacheDatabaseSharedManager)db).enableCheckpoints(false); + + db.checkpointReadLock(); + + try { + testData.putAll(putDataToMetaStorage(metaStorage, 1_000, 1_000)); + } + finally { + db.checkpointReadUnlock(); + } + + stopGrid(0); + + MetaStorage.PRESERVE_LEGACY_METASTORAGE_PARTITION_ID = false; + + IgniteConfiguration cfg = getConfiguration(getTestIgniteInstanceName(0)); + + cfg.getDataStorageConfiguration().setCheckpointFrequency(3600 * 1000L); + + ig = (IgniteEx)startGrid(getTestIgniteInstanceName(0), optimize(cfg), null); + + ig.cluster().active(true); + + db = ig.context().cache().context().database(); + + metaStorage = db.metaStorage(); + + assertNotNull(metaStorage); + + db.checkpointReadLock(); + + try { + testData.putAll(putDataToMetaStorage(metaStorage, 1_000, 2_000)); + } + finally { + db.checkpointReadUnlock(); + } + + db.waitForCheckpoint("Test"); + + stopGrid(0); + + ig = 
startGrid(0); + + ig.cluster().active(true); + + db = ig.context().cache().context().database(); + + metaStorage = db.metaStorage(); + + assertNotNull(metaStorage); + + db.checkpointReadLock(); + try { + Collection<IgniteBiTuple<String, byte[]>> read = metaStorage.readAll(); + + int cnt = 0; + for (IgniteBiTuple<String, byte[]> r : read) { + byte[] test = testData.get(r.get1()); + + if (test != null) { + Assert.assertArrayEquals(r.get2(), test); + + cnt++; + } + } + + assertEquals(cnt, testData.size()); + } + finally { + db.checkpointReadUnlock(); + } + } + finally { + MetaStorage.PRESERVE_LEGACY_METASTORAGE_PARTITION_ID = false; + } + + } + + /** + * Testing data migration between metastorage partitions + */ + @Test + public void testMetaStorageMigration() throws Exception { + final Map<String, byte[]> testData = new HashMap<>(5_000); + + generateTestData(5_000, -1).forEach(t -> testData.put(t.get1(), t.get2())); + + MetaStorage.PRESERVE_LEGACY_METASTORAGE_PARTITION_ID = true; + + try { + IgniteEx ig = startGrid(0); + + ig.cluster().active(true); + + IgniteCacheDatabaseSharedManager db = ig.context().cache().context().database(); + + MetaStorage metaStorage = db.metaStorage(); + + assertNotNull(metaStorage); + + db.checkpointReadLock(); + + try { + for (Map.Entry<String, byte[]> v : testData.entrySet()) + metaStorage.putData(v.getKey(), v.getValue()); + } + finally { + db.checkpointReadUnlock(); + } + + stopGrid(0); + + MetaStorage.PRESERVE_LEGACY_METASTORAGE_PARTITION_ID = false; + + ig = startGrid(0); + + ig.cluster().active(true); + + db = ig.context().cache().context().database(); + + metaStorage = db.metaStorage(); + + assertNotNull(metaStorage); + + db.checkpointReadLock(); + + try { + Collection<IgniteBiTuple<String, byte[]>> read = metaStorage.readAll(); + + int cnt = 0; + for (IgniteBiTuple<String, byte[]> r : read) { + byte[] test = testData.get(r.get1()); + + if (test != null) { + Assert.assertArrayEquals(r.get2(), test); + + cnt++; + } + } + + assertEquals(cnt, testData.size()); + } + finally { + db.checkpointReadUnlock(); + } + } + finally { + 
MetaStorage.PRESERVE_LEGACY_METASTORAGE_PARTITION_ID = false; + } + } + + /** + * Testing temporary storage + */ + @Test + public void testMetaStoreMigrationTmpStorage() throws Exception { + List<IgniteBiTuple<String, byte[]>> data = generateTestData(2_000, -1).collect(Collectors.toList()); + + // memory + try (MetaStorage.TmpStorage tmpStorage = new MetaStorage.TmpStorage(4 * 1024 * 1024, log)) { + for (IgniteBiTuple<String, byte[]> item : data) + tmpStorage.add(item.get1(), item.get2()); + + compare(tmpStorage.stream().iterator(), data.iterator()); + } + + // file + try (MetaStorage.TmpStorage tmpStorage = new MetaStorage.TmpStorage(4 * 1024, log)) { + for (IgniteBiTuple<String, byte[]> item : data) + tmpStorage.add(item.get1(), item.get2()); + + compare(tmpStorage.stream().iterator(), data.iterator()); + } + } + + /** + * Test data generation + */ + private static Stream<IgniteBiTuple<String, byte[]>> generateTestData(int size, int fromKey) { + final AtomicInteger idx = new AtomicInteger(fromKey); + final Random rnd = new Random(); + + return Stream.generate(() -> { + byte[] val = new byte[1024]; + + rnd.nextBytes(val); + + return new IgniteBiTuple<>("KEY_" + (fromKey < 0 ? rnd.nextInt() : idx.getAndIncrement()), val); + }).limit(size); + } + + /** + * Compare two iterators + * + * @param it It. + * @param it1 It 1. + */ + private static void compare(Iterator<IgniteBiTuple<String, byte[]>> it, Iterator<IgniteBiTuple<String, byte[]>> it1) { + while (true) { + Assert.assertEquals(it.hasNext(), it1.hasNext()); + + if (!it.hasNext()) + break; + + IgniteBiTuple<String, byte[]> i = it.next(); + IgniteBiTuple<String, byte[]> i1 = it1.next(); + + Assert.assertEquals(i.get1(), i1.get1()); + + Assert.assertArrayEquals(i.get2(), i1.get2()); + } + } + /** * Verifies that MetaStorage after massive amounts of keys stored and updated keys restores its state successfully * after restart. @@ -85,6 +428,7 @@ public class IgniteMetaStorageBasicTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testMetaStorageMassivePutUpdateRestart() throws Exception { IgniteEx ig = startGrid(0); @@ -106,6 +450,35 @@ public void testMetaStorageMassivePutUpdateRestart() throws Exception { verifyKeys(ig, KEYS_CNT, KEY_PREFIX, UPDATED_VAL_PREFIX); } + /** + * @throws Exception If failed. + */ + @Test + public void testRecoveryOfMetastorageWhenNodeNotInBaseline() throws Exception { + IgniteEx ig0 = startGrid(0); + + ig0.cluster().active(true); + + final byte KEYS_CNT = 100; + final String KEY_PREFIX = "test.key."; + final String NEW_VAL_PREFIX = "new.val."; + final String UPDATED_VAL_PREFIX = "updated.val."; + + startGrid(1); + + // Disable checkpoints in order to check whether recovery works. + forceCheckpoint(grid(1)); + disableCheckpoints(grid(1)); + + loadKeys(grid(1), KEYS_CNT, KEY_PREFIX, NEW_VAL_PREFIX, UPDATED_VAL_PREFIX); + + stopGrid(1, true); + + startGrid(1); + + verifyKeys(grid(1), KEYS_CNT, KEY_PREFIX, UPDATED_VAL_PREFIX); + } + /** */ private void loadKeys(IgniteEx ig, byte keysCnt, @@ -144,4 +517,19 @@ private void verifyKeys(IgniteEx ig, Assert.assertEquals(valPrefix + i, val); } } + + /** + * Disable checkpoints on a specific node. + * + * @param node Ignite node. + * @throws IgniteCheckedException If failed. 
+ */ + private void disableCheckpoints(Ignite node) throws IgniteCheckedException { + assert !node.cluster().localNode().isClient(); + + GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)((IgniteEx)node).context() + .cache().context().database(); + + dbMgr.enableCheckpoints(false).get(); + } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreePageMemoryImplTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreePageMemoryImplTest.java index 7719b43feef92..f51056f73f867 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreePageMemoryImplTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreePageMemoryImplTest.java @@ -17,17 +17,25 @@ package org.apache.ignite.internal.processors.cache.persistence.pagemem; +import java.util.Collections; import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.mem.DirectMemoryProvider; import org.apache.ignite.internal.mem.unsafe.UnsafeMemoryProvider; import org.apache.ignite.internal.pagemem.FullPageId; import org.apache.ignite.internal.pagemem.PageMemory; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.persistence.CheckpointWriteProgressSupplier; import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.database.BPlusTreeSelfTest; +import org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor; +import 
org.apache.ignite.internal.processors.subscription.GridInternalSubscriptionProcessor; import org.apache.ignite.internal.util.typedef.CIX3; +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi; import org.apache.ignite.testframework.junits.GridTestKernalContext; +import org.mockito.Mockito; /** * @@ -44,8 +52,18 @@ public class BPlusTreePageMemoryImplTest extends BPlusTreeSelfTest { DirectMemoryProvider provider = new UnsafeMemoryProvider(log); + IgniteConfiguration cfg = new IgniteConfiguration(); + + cfg.setEncryptionSpi(new NoopEncryptionSpi()); + + GridTestKernalContext cctx = new GridTestKernalContext(log, cfg); + + cctx.add(new IgnitePluginProcessor(cctx, cfg, Collections.emptyList())); + cctx.add(new GridInternalSubscriptionProcessor(cctx)); + cctx.add(new GridEncryptionManager(cctx)); + GridCacheSharedContext sharedCtx = new GridCacheSharedContext<>( - new GridTestKernalContext(log), + cctx, null, null, null, @@ -61,6 +79,7 @@ public class BPlusTreePageMemoryImplTest extends BPlusTreeSelfTest { null, null, null, + null, null ); @@ -78,7 +97,7 @@ public class BPlusTreePageMemoryImplTest extends BPlusTreeSelfTest { () -> true, new DataRegionMetricsImpl(new DataRegionConfiguration()), PageMemoryImpl.ThrottlingPolicy.DISABLED, - null + Mockito.mock(CheckpointWriteProgressSupplier.class) ); mem.start(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreeReuseListPageMemoryImplTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreeReuseListPageMemoryImplTest.java index 71eb129544c1b..9a7d63b1e901d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreeReuseListPageMemoryImplTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/BPlusTreeReuseListPageMemoryImplTest.java @@ -17,7 +17,10 @@ package 
org.apache.ignite.internal.processors.cache.persistence.pagemem; +import java.util.Collections; import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.mem.DirectMemoryProvider; import org.apache.ignite.internal.mem.unsafe.UnsafeMemoryProvider; import org.apache.ignite.internal.pagemem.FullPageId; @@ -27,7 +30,10 @@ import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.database.BPlusTreeReuseSelfTest; +import org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor; +import org.apache.ignite.internal.processors.subscription.GridInternalSubscriptionProcessor; import org.apache.ignite.internal.util.lang.GridInClosure3X; +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.mockito.Mockito; @@ -46,8 +52,18 @@ public class BPlusTreeReuseListPageMemoryImplTest extends BPlusTreeReuseSelfTest DirectMemoryProvider provider = new UnsafeMemoryProvider(log); + IgniteConfiguration cfg = new IgniteConfiguration(); + + cfg.setEncryptionSpi(new NoopEncryptionSpi()); + + GridTestKernalContext cctx = new GridTestKernalContext(log, cfg); + + cctx.add(new IgnitePluginProcessor(cctx, cfg, Collections.emptyList())); + cctx.add(new GridInternalSubscriptionProcessor(cctx)); + cctx.add(new GridEncryptionManager(cctx)); + GridCacheSharedContext sharedCtx = new GridCacheSharedContext<>( - new GridTestKernalContext(log), + cctx, null, null, null, @@ -63,6 +79,7 @@ public class BPlusTreeReuseListPageMemoryImplTest extends BPlusTreeReuseSelfTest null, null, null, + null, null ); diff --git 
a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FillFactorMetricTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FillFactorMetricTest.java index ac65c6dcb2676..0e0b286c88917 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FillFactorMetricTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FillFactorMetricTest.java @@ -29,18 +29,17 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for fillFactor metrics. */ +@RunWith(JUnit4.class) public class FillFactorMetricTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String MY_DATA_REGION = "MyPolicy"; @@ -56,7 +55,6 @@ public class FillFactorMetricTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { return super.getConfiguration(igniteInstanceName) - .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)) .setDataStorageConfiguration( new DataStorageConfiguration().setDataRegionConfigurations( new DataRegionConfiguration() @@ -95,6 +93,7 @@ protected CacheConfiguration cacheCfg() { * * @throws Exception if failed. 
*/ + @Test public void testEmptyCachePagesFillFactor() throws Exception { startGrids(1); @@ -112,6 +111,7 @@ public void testEmptyCachePagesFillFactor() throws Exception { /** * throws if failed. */ + @Test public void testFillAndEmpty() throws Exception { final AtomicBoolean stopLoadFlag = new AtomicBoolean(); final AtomicBoolean doneFlag = new AtomicBoolean(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FullPageIdTableTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FullPageIdTableTest.java index e337bb13ab50f..fd23ce595c087 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FullPageIdTableTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/FullPageIdTableTest.java @@ -108,7 +108,7 @@ public void testRandomOperations() throws Exception { } } finally { - prov.shutdown(); + prov.shutdown(true); } } @@ -225,7 +225,7 @@ else if (check.size() >= elementsCnt * 2 / 3) { finally { long msPassed = U.currentTimeMillis() - seed; System.err.println("Seed used [" + seed + "] duration ["+ msPassed+ "] ms"); - prov.shutdown(); + prov.shutdown(true); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IgnitePageMemReplaceDelayedWriteUnitTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IgnitePageMemReplaceDelayedWriteUnitTest.java index aa1e37d594e86..7b354992ba5e8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IgnitePageMemReplaceDelayedWriteUnitTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IgnitePageMemReplaceDelayedWriteUnitTest.java @@ -31,23 +31,29 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import 
org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.mem.DirectMemoryProvider; import org.apache.ignite.internal.mem.unsafe.UnsafeMemoryProvider; import org.apache.ignite.internal.pagemem.FullPageId; import org.apache.ignite.internal.pagemem.PageIdAllocator; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.persistence.CheckpointWriteProgressSupplier; import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl; import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; +import org.apache.ignite.internal.processors.subscription.GridInternalSubscriptionProcessor; import org.apache.ignite.internal.util.GridMultiCollectionWrapper; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.logger.NullLogger; +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; import org.mockito.Mockito; +import org.mockito.invocation.InvocationOnMock; +import org.mockito.stubbing.Answer; import static org.mockito.Matchers.any; import static org.mockito.Mockito.mock; @@ -127,7 +133,7 @@ public void testReplacementWithDelayCausesLockForRead() throws IgniteCheckedExce assert totalEvicted.get() > 0; - memory.stop(); + memory.stop(true); } /** @@ -182,7 +188,7 @@ public void testBackwardCompatibilityMode() throws IgniteCheckedException { assert totalEvicted.get() > 0; - memory.stop(); + memory.stop(true); } /** @@ -212,6 +218,7 @@ private PageMemoryImpl createPageMemory(IgniteConfiguration 
cfg, ReplacedPageWri GridCacheSharedContext sctx = Mockito.mock(GridCacheSharedContext.class); + when(sctx.gridConfig()).thenReturn(cfg); when(sctx.pageStore()).thenReturn(new NoOpPageStoreManager()); when(sctx.wal()).thenReturn(new NoOpWALManager()); when(sctx.database()).thenReturn(db); @@ -220,6 +227,17 @@ private PageMemoryImpl createPageMemory(IgniteConfiguration cfg, ReplacedPageWri GridKernalContext kernalCtx = mock(GridKernalContext.class); when(kernalCtx.config()).thenReturn(cfg); + when(kernalCtx.log(any(Class.class))).thenReturn(log); + when(kernalCtx.internalSubscriptionProcessor()).thenAnswer(new Answer() { + @Override public Object answer(InvocationOnMock mock) throws Throwable { + return new GridInternalSubscriptionProcessor(kernalCtx); + } + }); + when(kernalCtx.encryption()).thenAnswer(new Answer() { + @Override public Object answer(InvocationOnMock mock) throws Throwable { + return new GridEncryptionManager(kernalCtx); + } + }); when(sctx.kernalContext()).thenReturn(kernalCtx); DataRegionConfiguration regCfg = cfg.getDataStorageConfiguration().getDefaultDataRegionConfiguration(); @@ -231,7 +249,8 @@ private PageMemoryImpl createPageMemory(IgniteConfiguration cfg, ReplacedPageWri DirectMemoryProvider provider = new UnsafeMemoryProvider(log); PageMemoryImpl memory = new PageMemoryImpl(provider, sizes, sctx, pageSize, - pageWriter, null, () -> true, memMetrics, PageMemoryImpl.ThrottlingPolicy.DISABLED, null); + pageWriter, null, () -> true, memMetrics, PageMemoryImpl.ThrottlingPolicy.DISABLED, + mock(CheckpointWriteProgressSupplier.class)); memory.start(); return memory; @@ -244,6 +263,8 @@ private PageMemoryImpl createPageMemory(IgniteConfiguration cfg, ReplacedPageWri @NotNull private IgniteConfiguration getConfiguration(long overallSize) { IgniteConfiguration cfg = new IgniteConfiguration(); + cfg.setEncryptionSpi(new NoopEncryptionSpi()); + cfg.setDataStorageConfiguration( new DataStorageConfiguration() .setDefaultDataRegionConfiguration( diff 
--git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IndexStoragePageMemoryImplTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IndexStoragePageMemoryImplTest.java index 43fbb6e4a0e51..cbf9dea928c62 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IndexStoragePageMemoryImplTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/IndexStoragePageMemoryImplTest.java @@ -18,18 +18,26 @@ package org.apache.ignite.internal.processors.cache.persistence.pagemem; import java.io.File; +import java.util.Collections; import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager; import org.apache.ignite.internal.mem.DirectMemoryProvider; import org.apache.ignite.internal.mem.file.MappedFileMemoryProvider; import org.apache.ignite.internal.pagemem.FullPageId; import org.apache.ignite.internal.pagemem.PageMemory; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.persistence.CheckpointWriteProgressSupplier; import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl; import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager; import org.apache.ignite.internal.processors.database.IndexStorageSelfTest; +import org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor; +import org.apache.ignite.internal.processors.subscription.GridInternalSubscriptionProcessor; import org.apache.ignite.internal.util.lang.GridInClosure3X; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi; import 
org.apache.ignite.testframework.junits.GridTestKernalContext; +import org.mockito.Mockito; /** * @@ -59,8 +67,18 @@ public class IndexStoragePageMemoryImplTest extends IndexStorageSelfTest { DirectMemoryProvider provider = new MappedFileMemoryProvider(log(), allocationPath); + IgniteConfiguration cfg = new IgniteConfiguration(); + + cfg.setEncryptionSpi(new NoopEncryptionSpi()); + + GridTestKernalContext cctx = new GridTestKernalContext(log, cfg); + + cctx.add(new IgnitePluginProcessor(cctx, cfg, Collections.emptyList())); + cctx.add(new GridInternalSubscriptionProcessor(cctx)); + cctx.add(new GridEncryptionManager(cctx)); + GridCacheSharedContext sharedCtx = new GridCacheSharedContext<>( - new GridTestKernalContext(log), + cctx, null, null, null, @@ -76,6 +94,7 @@ public class IndexStoragePageMemoryImplTest extends IndexStorageSelfTest { null, null, null, + null, null ); @@ -93,7 +112,7 @@ public class IndexStoragePageMemoryImplTest extends IndexStorageSelfTest { () -> true, new DataRegionMetricsImpl(new DataRegionConfiguration()), PageMemoryImpl.ThrottlingPolicy.DISABLED, - null + Mockito.mock(CheckpointWriteProgressSupplier.class) ); } } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpPageStoreManager.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpPageStoreManager.java index 39c7dc92d2e43..44071eaa93e07 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpPageStoreManager.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpPageStoreManager.java @@ -23,6 +23,7 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Predicate; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.configuration.CacheConfiguration; 
      import org.apache.ignite.internal.GridKernalContext;
     @@ -234,6 +235,11 @@ public class NoOpPageStoreManager implements IgnitePageStoreManager {
              // No-op.
          }
      
     +    /** {@inheritDoc} */
     +    @Override public void cleanupPageStoreIfMatch(Predicate cacheGrpPred, boolean cleanFiles) {
     +        // No-op.
     +    }
     +
          /** {@inheritDoc} */
          @Override public boolean checkAndInitCacheWorkDir(CacheConfiguration cacheCfg) throws IgniteCheckedException {
              return false;
     diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpWALManager.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpWALManager.java
     index 811a231524522..b482c84075e66 100644
     --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpWALManager.java
     +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/NoOpWALManager.java
     @@ -20,13 +20,15 @@
      import org.apache.ignite.IgniteCheckedException;
      import org.apache.ignite.internal.GridKernalContext;
      import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager;
     -import org.apache.ignite.internal.processors.cache.persistence.StorageException;
      import org.apache.ignite.internal.pagemem.wal.WALIterator;
      import org.apache.ignite.internal.pagemem.wal.WALPointer;
     +import org.apache.ignite.internal.pagemem.wal.record.RolloverType;
      import org.apache.ignite.internal.pagemem.wal.record.WALRecord;
      import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
     -import org.apache.ignite.internal.processors.cache.persistence.wal.FileDescriptor;
     +import org.apache.ignite.internal.processors.cache.persistence.StorageException;
     +import org.apache.ignite.lang.IgniteBiPredicate;
      import org.apache.ignite.lang.IgniteFuture;
     +import org.jetbrains.annotations.Nullable;
      
      /**
       *
     @@ -57,18 +59,33 @@ public class NoOpWALManager implements IgniteWriteAheadLogManager {
              return null;
          }
      
     +    /** {@inheritDoc} */
     +    @Override public WALPointer log(WALRecord entry, RolloverType rollOverType) {
     +        return null;
     +    }
     +
          /** {@inheritDoc} */
          @Override public void flush(WALPointer ptr, boolean explicitFsync) throws IgniteCheckedException, StorageException {
          }
      
     +    /** {@inheritDoc} */
     +    @Override public WALRecord read(WALPointer ptr) throws IgniteCheckedException, StorageException {
     +        return null;
     +    }
     +
          /** {@inheritDoc} */
          @Override public WALIterator replay(WALPointer start) throws IgniteCheckedException, StorageException {
              return null;
          }
      
          /** {@inheritDoc} */
     -    @Override public boolean reserve(WALPointer start) throws IgniteCheckedException {
     +    @Override public WALIterator replay(WALPointer start, @Nullable IgniteBiPredicate recordDeserializeFilter) throws IgniteCheckedException, StorageException {
     +        return null;
     +    }
     +
     +    /** {@inheritDoc} */
     +    @Override public boolean reserve(WALPointer start) {
              return false;
          }
     @@ -102,11 +119,6 @@ public class NoOpWALManager implements IgniteWriteAheadLogManager {
              return false;
          }
      
     -    /** {@inheritDoc} */
     -    @Override public void cleanupWalDirectories() throws IgniteCheckedException {
     -        // No-op.
     -    }
     -
          /** {@inheritDoc} */
          @Override public void start(GridCacheSharedContext cctx) throws IgniteCheckedException {
              // No-op.
     diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageIdDistributionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageIdDistributionTest.java
     index 626e7941b3abc..5ad2f5b463835 100644
     --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageIdDistributionTest.java
     +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageIdDistributionTest.java
     @@ -29,17 +29,19 @@
      import org.apache.ignite.internal.mem.unsafe.UnsafeMemoryProvider;
      import org.apache.ignite.internal.pagemem.FullPageId;
      import org.apache.ignite.internal.pagemem.PageIdUtils;
     -import org.apache.ignite.internal.processors.cache.persistence.pagemem.FullPageIdTable;
     -import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl;
      import org.apache.ignite.internal.util.typedef.T2;
      import org.apache.ignite.internal.util.typedef.internal.CU;
      import org.apache.ignite.internal.util.typedef.internal.U;
      import org.apache.ignite.logger.java.JavaLogger;
      import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     +import org.junit.Test;
     +import org.junit.runner.RunWith;
     +import org.junit.runners.JUnit4;
      
      /**
       *
       */
     +@RunWith(JUnit4.class)
      public class PageIdDistributionTest extends GridCommonAbstractTest {
          /** */
          private static final int[] CACHE_IDS = new int[] {
     @@ -62,6 +64,7 @@ public class PageIdDistributionTest extends GridCommonAbstractTest {
          /**
           *
           */
     +    @Test
          public void testDistributions() {
              printPageIdDistribution(
                  CU.cacheId("partitioned"), 1024, 10_000, 32, 2.5f);
     @@ -142,7 +145,7 @@ private void printPageIdDistribution(
          }
      
          /**
     -     * Uncomment and run this test manually to get data to plot histogram for per-element distance from ideal.
     +     * If needed run this test manually to get data to plot histogram for per-element distance from ideal.
           * You can use Octave to plot the histogram:
           * <pre>
          *     all = csvread("histo.txt");
    @@ -151,7 +154,8 @@ private void printPageIdDistribution(
          *
          * @throws Exception If failed.
          */
    -    public void _testRealHistory() throws Exception {
    +    @Test
    +    public void testRealHistory() throws Exception {
             int capacity = CACHE_IDS.length * PARTS * PAGES;
     
             info("Capacity: " + capacity);
    @@ -228,7 +232,7 @@ public void _testRealHistory() throws Exception {
                 }
             }
             finally {
    -            prov.shutdown();
    +            prov.shutdown(true);
             }
         }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplNoLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplNoLoadTest.java
    index 52aff0ca16acd..1190899496f6b 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplNoLoadTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplNoLoadTest.java
    @@ -18,7 +18,10 @@
     package org.apache.ignite.internal.processors.cache.persistence.pagemem;
     
     import java.io.File;
    +import java.util.Collections;
     import org.apache.ignite.configuration.DataRegionConfiguration;
    +import org.apache.ignite.configuration.IgniteConfiguration;
    +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager;
     import org.apache.ignite.internal.mem.DirectMemoryProvider;
     import org.apache.ignite.internal.mem.file.MappedFileMemoryProvider;
     import org.apache.ignite.internal.pagemem.FullPageId;
    @@ -26,15 +29,24 @@
     import org.apache.ignite.internal.pagemem.impl.PageMemoryNoLoadSelfTest;
     import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
     import org.apache.ignite.internal.processors.cache.persistence.CheckpointLockStateChecker;
    +import org.apache.ignite.internal.processors.cache.persistence.CheckpointWriteProgressSupplier;
     import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl;
     import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager;
    +import org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor;
    +import org.apache.ignite.internal.processors.subscription.GridInternalSubscriptionProcessor;
     import org.apache.ignite.internal.util.lang.GridInClosure3X;
     import org.apache.ignite.internal.util.typedef.internal.U;
    +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi;
     import org.apache.ignite.testframework.junits.GridTestKernalContext;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
    +import org.mockito.Mockito;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class PageMemoryImplNoLoadTest extends PageMemoryNoLoadSelfTest {
         /**
          * @return Page memory implementation.
    @@ -49,8 +61,18 @@ public class PageMemoryImplNoLoadTest extends PageMemoryNoLoadSelfTest {
     
             DirectMemoryProvider provider = new MappedFileMemoryProvider(log(), memDir);
     
    +        IgniteConfiguration cfg = new IgniteConfiguration();
    +
    +        cfg.setEncryptionSpi(new NoopEncryptionSpi());
    +
    +        GridTestKernalContext cctx = new GridTestKernalContext(log, cfg);
    +
    +        cctx.add(new IgnitePluginProcessor(cctx, cfg, Collections.emptyList()));
    +        cctx.add(new GridInternalSubscriptionProcessor(cctx));
    +        cctx.add(new GridEncryptionManager(cctx));
    +
             GridCacheSharedContext sharedCtx = new GridCacheSharedContext<>(
    -            new GridTestKernalContext(log),
    +            cctx,
                 null,
                 null,
                 null,
    @@ -66,6 +88,7 @@ public class PageMemoryImplNoLoadTest extends PageMemoryNoLoadSelfTest {
                 null,
                 null,
                 null,
    +            null,
                 null
             );
     
    @@ -88,11 +111,12 @@ public class PageMemoryImplNoLoadTest extends PageMemoryNoLoadSelfTest {
                 },
                 new DataRegionMetricsImpl(new DataRegionConfiguration()),
                 PageMemoryImpl.ThrottlingPolicy.DISABLED,
    -            null
    +            Mockito.mock(CheckpointWriteProgressSupplier.class)
             );
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testPageHandleDeallocation() throws Exception {
             // No-op.
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplTest.java
    index 000131a86b1ec..7591dd74b2111 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImplTest.java
    @@ -30,6 +30,7 @@
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.failure.NoOpFailureHandler;
     import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException;
    +import org.apache.ignite.internal.managers.encryption.GridEncryptionManager;
     import org.apache.ignite.internal.mem.DirectMemoryProvider;
     import org.apache.ignite.internal.mem.IgniteOutOfMemoryException;
     import org.apache.ignite.internal.mem.unsafe.UnsafeMemoryProvider;
    @@ -44,12 +45,17 @@
     import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO;
     import org.apache.ignite.internal.processors.failure.FailureProcessor;
     import org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor;
    +import org.apache.ignite.internal.processors.subscription.GridInternalSubscriptionProcessor;
     import org.apache.ignite.internal.util.lang.GridInClosure3X;
     import org.apache.ignite.plugin.PluginProvider;
    +import org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.GridTestKernalContext;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     import org.mockito.Mockito;
     
     import static org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.CHECKPOINT_POOL_OVERFLOW_ERROR_MSG;
    @@ -57,6 +63,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class PageMemoryImplTest extends GridCommonAbstractTest {
         /** Mb. */
         private static final long MB = 1024 * 1024;
    @@ -70,6 +77,7 @@ public class PageMemoryImplTest extends GridCommonAbstractTest {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testThatAllocationTooMuchPagesCauseToOOMException() throws Exception {
             PageMemoryImpl memory = createPageMemory(PageMemoryImpl.ThrottlingPolicy.DISABLED);
     
    @@ -87,6 +95,7 @@ public void testThatAllocationTooMuchPagesCauseToOOMException() throws Exception
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCheckpointBufferOverusageDontCauseWriteLockLeak() throws Exception {
             PageMemoryImpl memory = createPageMemory(PageMemoryImpl.ThrottlingPolicy.DISABLED);
     
    @@ -140,6 +149,7 @@ public void testCheckpointBufferOverusageDontCauseWriteLockLeak() throws Excepti
          * Tests that checkpoint buffer won't be overflowed with enabled CHECKPOINT_BUFFER_ONLY throttling.
          * @throws Exception If failed.
          */
    +    @Test
         public void testCheckpointBufferCantOverflowMixedLoad() throws Exception {
             testCheckpointBufferCantOverflowWithThrottlingMixedLoad(PageMemoryImpl.ThrottlingPolicy.CHECKPOINT_BUFFER_ONLY);
         }
    @@ -148,6 +158,7 @@ public void testCheckpointBufferCantOverflowMixedLoad() throws Exception {
          * Tests that checkpoint buffer won't be overflowed with enabled SPEED_BASED throttling.
          * @throws Exception If failed.
          */
    +    @Test
         public void testCheckpointBufferCantOverflowMixedLoadSpeedBased() throws Exception {
             testCheckpointBufferCantOverflowWithThrottlingMixedLoad(PageMemoryImpl.ThrottlingPolicy.SPEED_BASED);
         }
    @@ -156,6 +167,7 @@ public void testCheckpointBufferCantOverflowMixedLoadSpeedBased() throws Excepti
          * Tests that checkpoint buffer won't be overflowed with enabled TARGET_RATIO_BASED throttling.
          * @throws Exception If failed.
          */
    +    @Test
         public void testCheckpointBufferCantOverflowMixedLoadRatioBased() throws Exception {
             testCheckpointBufferCantOverflowWithThrottlingMixedLoad(PageMemoryImpl.ThrottlingPolicy.TARGET_RATIO_BASED);
         }
    @@ -271,9 +283,13 @@ private PageMemoryImpl createPageMemory(PageMemoryImpl.ThrottlingPolicy throttli
             IgniteConfiguration igniteCfg = new IgniteConfiguration();
             igniteCfg.setDataStorageConfiguration(new DataStorageConfiguration());
             igniteCfg.setFailureHandler(new NoOpFailureHandler());
    +        igniteCfg.setEncryptionSpi(new NoopEncryptionSpi());
     
             GridTestKernalContext kernalCtx = new GridTestKernalContext(new GridTestLog4jLogger(), igniteCfg);
    +
             kernalCtx.add(new IgnitePluginProcessor(kernalCtx, igniteCfg, Collections.emptyList()));
    +        kernalCtx.add(new GridInternalSubscriptionProcessor(kernalCtx));
    +        kernalCtx.add(new GridEncryptionManager(kernalCtx));
     
             FailureProcessor failureProc = new FailureProcessor(kernalCtx);
     
    @@ -298,6 +314,7 @@ private PageMemoryImpl createPageMemory(PageMemoryImpl.ThrottlingPolicy throttli
                 null,
                 null,
                 null,
    +            null,
                 null
             );
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryNoStoreLeakTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryNoStoreLeakTest.java
    index 65e8c361bc3f8..fdd47c1656f5f 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryNoStoreLeakTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryNoStoreLeakTest.java
    @@ -25,6 +25,9 @@
     import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl;
     import org.apache.ignite.internal.util.typedef.internal.D;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Base scenario for memory leak:
    @@ -33,6 +36,7 @@
      * 3. IgniteCacheDatabaseSharedManager started and onActive called here. Memory allocated;
      * 4. Call active(true) again. Activation successfull, non heap memory leak introduced;
      */
    +@RunWith(JUnit4.class)
     public class PageMemoryNoStoreLeakTest extends GridCommonAbstractTest {
         /** */
         private static final int PAGE_SIZE = 4 * 1024;
    @@ -46,6 +50,7 @@ public class PageMemoryNoStoreLeakTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPageDoubleInitMemoryLeak() throws Exception {
             long initVMsize = D.getCommittedVirtualMemorySize();
     
    @@ -71,7 +76,7 @@ public void testPageDoubleInitMemoryLeak() throws Exception {
                     mem.start();
                 }
                 finally {
    -                mem.stop();
    +                mem.stop(true);
                 }
     
                 long committedVMSize = D.getCommittedVirtualMemorySize();
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSandboxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSandboxTest.java
    index c417b07995ebb..0bfe22a1a3d73 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSandboxTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSandboxTest.java
    @@ -38,21 +38,19 @@
     import org.apache.ignite.internal.processors.cache.ratemetrics.HitRateMetrics;
     import org.apache.ignite.internal.util.typedef.internal.S;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Test to visualize and debug {@link PagesWriteThrottle}.
      * Prints puts/gets rate, number of dirty pages, pages written in current checkpoint and pages in checkpoint buffer.
      * Not intended to be part of any test suite.
      */
    +@RunWith(JUnit4.class)
     public class PagesWriteThrottleSandboxTest extends GridCommonAbstractTest {
    -    /** Ip finder. */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** Cache name. */
         private static final String CACHE_NAME = "cache1";
     
    @@ -60,9 +58,6 @@ public class PagesWriteThrottleSandboxTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(gridName);
     
    -        TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi();
    -        discoverySpi.setIpFinder(ipFinder);
    -
             DataStorageConfiguration dbCfg = new DataStorageConfiguration()
                 .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
                     .setMaxSize(4000L * 1024 * 1024)
    @@ -112,6 +107,7 @@ public class PagesWriteThrottleSandboxTest extends GridCommonAbstractTest {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testThrottle() throws Exception {
             startGrids(1).active(true);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSmokeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSmokeTest.java
    index 7ff134850387b..1dae7630937c5 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSmokeTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PagesWriteThrottleSmokeTest.java
    @@ -44,23 +44,17 @@
     import org.apache.ignite.internal.processors.cache.ratemetrics.HitRateMetrics;
     import org.apache.ignite.internal.util.typedef.internal.S;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    -
    -import static java.nio.file.StandardOpenOption.CREATE;
    -import static java.nio.file.StandardOpenOption.READ;
    -import static java.nio.file.StandardOpenOption.WRITE;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class PagesWriteThrottleSmokeTest extends GridCommonAbstractTest {
    -    /** Ip finder. */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** Slow checkpoint enabled. */
         private static final AtomicBoolean slowCheckpointEnabled = new AtomicBoolean(true);
     
    @@ -71,9 +65,6 @@ public class PagesWriteThrottleSmokeTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(gridName);
     
    -        TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi();
    -        discoverySpi.setIpFinder(ipFinder);
    -
             DataStorageConfiguration dbCfg = new DataStorageConfiguration()
                 .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
                     .setMaxSize(400L * 1024 * 1024)
    @@ -127,6 +118,7 @@ public class PagesWriteThrottleSmokeTest extends GridCommonAbstractTest {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testThrottle() throws Exception {
             startGrids(2).active(true);
     
    @@ -287,11 +279,6 @@ private static class SlowCheckpointFileIOFactory implements FileIOFactory {
             /** Delegate factory. */
             private final FileIOFactory delegateFactory = new RandomAccessFileIOFactory();
     
    -        /** {@inheritDoc} */
    -        @Override public FileIO create(File file) throws IOException {
    -            return create(file, CREATE, READ, WRITE);
    -        }
    -
             /** {@inheritDoc} */
             @Override public FileIO create(File file, OpenOption... openOption) throws IOException {
                 final FileIO delegate = delegateFactory.create(file, openOption);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/AbstractNodeJoinTemplate.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/AbstractNodeJoinTemplate.java
    index 219db8d166433..3d24f1b568819 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/AbstractNodeJoinTemplate.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/AbstractNodeJoinTemplate.java
    @@ -35,9 +35,6 @@
     import org.apache.ignite.lang.IgniteCallable;
     import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.lang.IgniteInClosure;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.junit.Assert;
     
    @@ -293,15 +290,9 @@ protected CacheConfiguration[] allCacheConfigurations() {
                 }
             };
     
    -    /** Ip finder. */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String name) throws Exception {
             return super.getConfiguration(name)
    -            .setDiscoverySpi(
    -                new TcpDiscoverySpi()
    -                    .setIpFinder(ipFinder))
                 .setDataStorageConfiguration(
                     new DataStorageConfiguration()
                         .setDefaultDataRegionConfiguration(
    @@ -786,7 +777,9 @@ public Runnable checkCacheEmpty() {
     
                         Map caches = caches(ig);
     
    -                    Assert.assertEquals(0, caches.size());
    +                    for (GridCacheAdapter cacheAdapter : caches.values())
    +                        Assert.assertTrue("Cache should be in recovery mode: " + cacheAdapter.context(),
    +                            cacheAdapter.context().isRecoveryMode());
                     }
                 });
             }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateCacheTest.java
    index 938b3c81b1650..13517ef49e21d 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateCacheTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateCacheTest.java
    @@ -23,14 +23,19 @@
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
     import org.apache.ignite.internal.util.typedef.F;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteChangeGlobalStateCacheTest extends IgniteChangeGlobalStateAbstractTest {
         /**
          *
          */
    +    @Test
         public void testCheckValueAfterActivation(){
             String cacheName = "my-cache";
     
    @@ -64,6 +69,7 @@ public void testCheckValueAfterActivation(){
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoreKeyValueAfterActivate() throws Exception {
             String cacheName = "my-cache";
     
    @@ -117,6 +123,7 @@ public void testMoreKeyValueAfterActivate() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testDeActivateAndActivateCacheValue() throws Exception {
             String chName = "myCache";
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStreamerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStreamerTest.java
    index 16be316154b21..dd0bb0982c7e9 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStreamerTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStreamerTest.java
    @@ -20,10 +20,14 @@
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.IgniteDataStreamer;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteChangeGlobalStateDataStreamerTest extends IgniteChangeGlobalStateAbstractTest {
         /** {@inheritDoc} */
         @Override protected int backUpNodes() {
    @@ -38,6 +42,7 @@ public class IgniteChangeGlobalStateDataStreamerTest extends IgniteChangeGlobalS
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeActivateAndActivateDataStreamer() throws Exception {
             Ignite ig1 = primary(0);
             Ignite ig2 = primary(1);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStructureTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStructureTest.java
    index 8902a36ea5622..ef0f18fc922be 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStructureTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateDataStructureTest.java
    @@ -27,16 +27,21 @@
     import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
     import org.apache.ignite.internal.util.typedef.F;
     import org.apache.ignite.internal.util.typedef.internal.U;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.testframework.GridTestUtils.runAsync;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteChangeGlobalStateDataStructureTest extends IgniteChangeGlobalStateAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeActivateAndActivateAtomicLong() throws Exception {
             String lName = "myLong";
     
    @@ -108,6 +113,7 @@ public void testDeActivateAndActivateAtomicLong() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeActivateAndActivateCountDownLatch() throws Exception {
             final AtomicInteger cnt = new AtomicInteger();
     
    @@ -202,6 +208,7 @@ public void testDeActivateAndActivateCountDownLatch() throws Exception {
         /**
          *
          */
    +    @Test
         public void testDeActivateAndActivateAtomicSequence(){
             String seqName = "mySeq";
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateFailOverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateFailOverTest.java
    index 1ef269e555c20..9a7acf8f31cc0 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateFailOverTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateFailOverTest.java
    @@ -26,6 +26,9 @@
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.internal.IgniteInternalFuture;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.lang.Thread.sleep;
     import static org.apache.ignite.testframework.GridTestUtils.runAsync;
    @@ -33,6 +36,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteChangeGlobalStateFailOverTest extends IgniteChangeGlobalStateAbstractTest {
         /** {@inheritDoc} */
         @Override protected int primaryNodes() {
    @@ -57,6 +61,7 @@ public class IgniteChangeGlobalStateFailOverTest extends IgniteChangeGlobalState
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActivateDeActivateOnFixTopology() throws Exception {
             final Ignite igB1 = backUp(0);
             final Ignite igB2 = backUp(1);
    @@ -143,6 +148,7 @@ public void testActivateDeActivateOnFixTopology() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActivateDeActivateOnJoiningNode() throws Exception {
             final Ignite igB1 = backUp(0);
             final Ignite igB2 = backUp(1);
    @@ -276,6 +282,7 @@ public void testActivateDeActivateOnJoiningNode() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActivateDeActivateOnFixTopologyWithPutValues() throws Exception {
             final Ignite igB1 = backUp(0);
             final Ignite igB2 = backUp(1);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateServiceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateServiceTest.java
    index e6c9ae5976e03..e2b1476dc95d0 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateServiceTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateServiceTest.java
    @@ -25,10 +25,14 @@
     import org.apache.ignite.services.ServiceConfiguration;
     import org.apache.ignite.services.ServiceContext;
     import org.apache.ignite.services.ServiceDescriptor;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteChangeGlobalStateServiceTest extends IgniteChangeGlobalStateAbstractTest {
         /** {@inheritDoc} */
         @Override protected int backUpClientNodes() {
    @@ -48,6 +52,7 @@ public class IgniteChangeGlobalStateServiceTest extends IgniteChangeGlobalStateA
         /**
          *
          */
    +    @Test
         public void testDeployService() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-6629");
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateTest.java
    index 9152ab90319df..7b7430e78167a 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteChangeGlobalStateTest.java
    @@ -32,6 +32,9 @@
     import org.apache.ignite.internal.IgniteInternalFuture;
     import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
     import org.apache.ignite.internal.util.typedef.F;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.testframework.GridTestUtils.assertThrows;
     import static org.apache.ignite.testframework.GridTestUtils.runAsync;
    @@ -39,10 +42,12 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteChangeGlobalStateTest extends IgniteChangeGlobalStateAbstractTest {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testStopPrimaryAndActivateFromServerNode() throws Exception {
             Ignite ig1P = primary(0);
             Ignite ig2P = primary(1);
    @@ -72,6 +77,7 @@ public void testStopPrimaryAndActivateFromServerNode() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testStopPrimaryAndActivateFromClientNode() throws Exception {
             Ignite ig1P = primary(0);
             Ignite ig2P = primary(1);
    @@ -113,6 +119,7 @@ public void testStopPrimaryAndActivateFromClientNode() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testConcurrentActivateFromClientNodeAndServerNode() throws Exception {
             final Ignite ig1B = backUp(0);
             final Ignite ig2B = backUp(1);
    @@ -210,6 +217,7 @@ public void testConcurrentActivateFromClientNodeAndServerNode() throws Exception
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testConcurrentActivateFromServerNode() throws Exception {
             final Ignite ig1B = backUp(0);
             final Ignite ig2B = backUp(1);
    @@ -265,6 +273,7 @@ public void testConcurrentActivateFromServerNode() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActiveAndInActiveAtTheSameTimeCluster() throws Exception {
             Ignite ig1P = primary(0);
             Ignite ig2P = primary(1);
    @@ -302,6 +311,7 @@ public void testActiveAndInActiveAtTheSameTimeCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActivateOnAlreadyActivatedCluster() throws Exception {
             Ignite ig1P = primary(0);
             Ignite ig2P = primary(1);
    @@ -359,6 +369,7 @@ public void testActivateOnAlreadyActivatedCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTryUseCacheInActiveCluster() throws Exception {
             Ignite ig1B = backUp(0);
             Ignite ig2B = backUp(1);
    @@ -401,6 +412,7 @@ private void checkExceptionTryUseCache(final Ignite ig) {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTryUseServiceInActiveCluster() throws Exception {
             Ignite ig1B = backUp(0);
             Ignite ig2B = backUp(1);
    @@ -443,6 +455,7 @@ private void checkExceptionTryUseService(final Ignite ig) {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTryUseDataStructureInActiveCluster() throws Exception {
             Ignite ig1B = backUp(0);
             Ignite ig2B = backUp(1);
    @@ -485,6 +498,7 @@ private void checkExceptionTryUseDataStructure(final Ignite ig){
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFailGetLock() throws Exception {
             Ignite ig1P = primary(0);
             Ignite ig2P = primary(1);
    @@ -542,8 +556,9 @@ public void testFailGetLock() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActivateAfterFailGetLock() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-1094");
    +        fail("https://issues.apache.org/jira/browse/IGNITE-10723");
     
             Ignite ig1P = primary(0);
             Ignite ig2P = primary(1);
    @@ -618,6 +633,7 @@ public void testActivateAfterFailGetLock() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testDeActivateFromServerNode() throws Exception {
             Ignite ig1 = primary(0);
             Ignite ig2 = primary(1);
    @@ -637,6 +653,7 @@ public void testDeActivateFromServerNode() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testDeActivateFromClientNode() throws Exception {
             Ignite ig1 = primary(0);
             Ignite ig2 = primary(1);
    @@ -664,6 +681,7 @@ public void testDeActivateFromClientNode() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testDeActivateCheckCacheDestroy() throws Exception {
             String chName = "myCache";
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteNoParrallelClusterIsAllowedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteNoParrallelClusterIsAllowedTest.java
    index 5c986eea23cff..60386f11beb29 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteNoParrallelClusterIsAllowedTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteNoParrallelClusterIsAllowedTest.java
    @@ -19,19 +19,19 @@
     
     import junit.framework.AssertionFailedError;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteNoParrallelClusterIsAllowedTest extends IgniteChangeGlobalStateAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder vmIpFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testSimple() throws Exception {
             startPrimaryNodes(primaryNodes());
     
    @@ -76,7 +76,7 @@ private void tryToStartBackupClusterWhatShouldFail() {
                 while (true) {
                     String message = e.getMessage();
     
    -                if (message.contains("Failed to acquire file lock during"))
    +                if (message.contains("Failed to acquire file lock ["))
                         break;
     
                     if (e.getCause() != null)
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteStandByClusterTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteStandByClusterTest.java
    index 6900af87d2f39..9a793d4bbfad1 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteStandByClusterTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/IgniteStandByClusterTest.java
    @@ -46,28 +46,24 @@
     import org.apache.ignite.plugin.PluginContext;
     import org.apache.ignite.plugin.PluginProvider;
     import org.apache.ignite.plugin.PluginValidationException;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.Nullable;
     import org.junit.Assert;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteStandByClusterTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder vmIpFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /**
          *
          */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(vmIpFinder));
    -
             cfg.setDataStorageConfiguration(new DataStorageConfiguration()
                 .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
                     .setMaxSize(100L * 1024 * 1024)
    @@ -81,6 +77,7 @@ public class IgniteStandByClusterTest extends GridCommonAbstractTest {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testStartDynamicCachesAfterActivation() throws Exception {
             final String cacheName0 = "cache0";
             final String cacheName = "cache";
    @@ -146,6 +143,7 @@ public void testStartDynamicCachesAfterActivation() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testStaticCacheStartAfterActivationWithCacheFilter() throws Exception {
             String cache1 = "cache1";
             String cache2 = "cache2";
    @@ -213,6 +211,7 @@ public void testStaticCacheStartAfterActivationWithCacheFilter() throws Exceptio
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testSimple() throws Exception {
             IgniteEx ig = startGrid(0);
     
    @@ -238,6 +237,7 @@ public void testSimple() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testJoinDaemonAndDaemonStop() throws Exception {
             IgniteEx ig = startGrid(0);
     
    @@ -258,6 +258,7 @@ public void testJoinDaemonAndDaemonStop() throws Exception {
         /**
          * Check that daemon node does not move cluster to compatibility mode.
          */
    +    @Test
         public void testJoinDaemonToBaseline() throws Exception {
             Ignite ignite0 = startGrid(0);
     
    @@ -279,6 +280,7 @@ public void testJoinDaemonToBaseline() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testCheckStatusFromDaemon() throws Exception {
             IgniteEx ig = startGrid(0);
     
    @@ -309,6 +311,7 @@ public void testCheckStatusFromDaemon() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testRestartCluster() throws Exception {
             IgniteEx ig1 = startGrid(getConfiguration("node1"));
             IgniteEx ig2 = startGrid(getConfiguration("node2"));
    @@ -344,6 +347,7 @@ public void testRestartCluster() throws Exception {
         /**
          * @throws Exception if fail.
          */
    +    @Test
         public void testActivateDeActivateCallbackForPluginProviders() throws Exception {
             IgniteEx ig1 = startGrid(getConfiguration("node1"));
             IgniteEx ig2 = startGrid(getConfiguration("node2"));
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToActiveCluster.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToActiveCluster.java
    index 8e90e78337500..628316cfc128d 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToActiveCluster.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToActiveCluster.java
    @@ -24,10 +24,14 @@
     import org.apache.ignite.internal.processors.cache.GridCacheAdapter;
     import org.apache.ignite.internal.util.typedef.internal.CU;
     import org.junit.Assert;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class JoinActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         /** {@inheritDoc} */
         @Override public JoinNodeTestPlanBuilder withOutConfigurationTemplate() throws Exception {
    @@ -193,26 +197,31 @@ public class JoinActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         // Server node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinWithOutConfiguration() throws Exception {
             withOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationOnJoin() throws Exception {
             staticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationInCluster() throws Exception {
             staticCacheConfigurationInClusterTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationSameOnBoth() throws Exception {
             staticCacheConfigurationSameOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationDifferentOnBoth() throws Exception {
             staticCacheConfigurationDifferentOnBothTemplate().execute();
         }
    @@ -220,16 +229,19 @@ public class JoinActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         // Client node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientWithOutConfiguration() throws Exception {
             joinClientWithOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationOnJoin() throws Exception {
             joinClientStaticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationInCluster() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-5518");
     
    @@ -237,6 +249,7 @@ public class JoinActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationSameOnBoth() throws Exception {
             joinClientStaticCacheConfigurationSameOnBothTemplate().execute();
         }
    @@ -244,6 +257,7 @@ public class JoinActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         /**
          *
          */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationDifferentOnBoth() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-5518");
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToInActiveCluster.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToInActiveCluster.java
    index 59e0691ea4e6a..db762a86fc43e 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToInActiveCluster.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinActiveNodeToInActiveCluster.java
    @@ -18,10 +18,14 @@
     package org.apache.ignite.internal.processors.cache.persistence.standbycluster.join;
     
     import org.apache.ignite.internal.processors.cache.persistence.standbycluster.AbstractNodeJoinTemplate;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class JoinActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate {
         /** {@inheritDoc} */
         @Override public JoinNodeTestPlanBuilder withOutConfigurationTemplate() throws Exception {
    @@ -153,26 +157,31 @@ public class JoinActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate {
         // Server node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinWithOutConfiguration() throws Exception {
             withOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationOnJoin() throws Exception {
             staticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationInCluster() throws Exception {
             staticCacheConfigurationInClusterTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationSameOnBoth() throws Exception {
             staticCacheConfigurationSameOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationDifferentOnBoth() throws Exception {
             staticCacheConfigurationDifferentOnBothTemplate().execute();
         }
    @@ -180,16 +189,19 @@ public class JoinActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate {
         // Client node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientWithOutConfiguration() throws Exception {
             joinClientWithOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationOnJoin() throws Exception {
             joinClientStaticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationInCluster() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-5518");
     
    @@ -197,11 +209,13 @@ public class JoinActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate {
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationSameOnBoth() throws Exception {
             joinClientStaticCacheConfigurationSameOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationDifferentOnBoth() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-5518");
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToActiveCluster.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToActiveCluster.java
    index d9b0dd4a48327..2ed470fbd38c3 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToActiveCluster.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToActiveCluster.java
    @@ -24,10 +24,14 @@
     import org.apache.ignite.internal.processors.cache.GridCacheAdapter;
     import org.apache.ignite.internal.util.typedef.internal.CU;
     import org.junit.Assert;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class JoinInActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         /** {@inheritDoc} */
         @Override public JoinNodeTestPlanBuilder withOutConfigurationTemplate() throws Exception {
    @@ -192,26 +196,31 @@ public class JoinInActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         // Server node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinWithOutConfiguration() throws Exception {
             withOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationOnJoin() throws Exception {
             staticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationInCluster() throws Exception {
             staticCacheConfigurationInClusterTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationSameOnBoth() throws Exception {
             staticCacheConfigurationSameOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationDifferentOnBoth() throws Exception {
             staticCacheConfigurationDifferentOnBothTemplate().execute();
         }
    @@ -219,26 +228,31 @@ public class JoinInActiveNodeToActiveCluster extends AbstractNodeJoinTemplate {
         // Client node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientWithOutConfiguration() throws Exception {
             joinClientWithOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationOnJoin() throws Exception {
             joinClientStaticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationInCluster() throws Exception {
             joinClientStaticCacheConfigurationInClusterTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationSameOnBoth() throws Exception {
             joinClientStaticCacheConfigurationSameOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationDifferentOnBoth() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-5518");
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToInActiveCluster.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToInActiveCluster.java
    index dabd0a38f3dc0..d77564d61a614 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToInActiveCluster.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/JoinInActiveNodeToInActiveCluster.java
    @@ -18,10 +18,14 @@
     package org.apache.ignite.internal.processors.cache.persistence.standbycluster.join;
     
     import org.apache.ignite.internal.processors.cache.persistence.standbycluster.AbstractNodeJoinTemplate;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class JoinInActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate {
         /** {@inheritDoc} */
         @Override public JoinNodeTestPlanBuilder withOutConfigurationTemplate() throws Exception {
    @@ -153,26 +157,31 @@ public class JoinInActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate
         // Server node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinWithOutConfiguration() throws Exception {
             withOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationOnJoin() throws Exception {
             staticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationInCluster() throws Exception {
             staticCacheConfigurationInClusterTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationSameOnBoth() throws Exception {
             staticCacheConfigurationSameOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testStaticCacheConfigurationDifferentOnBoth() throws Exception {
             staticCacheConfigurationDifferentOnBothTemplate().execute();
         }
    @@ -180,16 +189,19 @@ public class JoinInActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate
         // Client node join.
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientWithOutConfiguration() throws Exception {
             joinClientWithOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationOnJoin() throws Exception {
             joinClientStaticCacheConfigurationOnJoinTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationInCluster() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-5518");
     
    @@ -197,11 +209,13 @@ public class JoinInActiveNodeToInActiveCluster extends AbstractNodeJoinTemplate
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationSameOnBoth() throws Exception {
             joinClientStaticCacheConfigurationSameOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationDifferentOnBoth() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-5518");
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/persistence/JoinActiveNodeToActiveClusterWithPersistence.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/persistence/JoinActiveNodeToActiveClusterWithPersistence.java
    index 1ccfb7d3d7008..cd655809ab2f5 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/persistence/JoinActiveNodeToActiveClusterWithPersistence.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/join/persistence/JoinActiveNodeToActiveClusterWithPersistence.java
    @@ -20,10 +20,14 @@
     import org.apache.ignite.internal.processors.cache.persistence.standbycluster.join.JoinActiveNodeToActiveCluster;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.processors.cache.persistence.standbycluster.AbstractNodeJoinTemplate;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class JoinActiveNodeToActiveClusterWithPersistence extends JoinActiveNodeToActiveCluster {
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration cfg(String name) throws Exception {
    @@ -67,21 +71,25 @@ private AbstractNodeJoinTemplate.JoinNodeTestPlanBuilder persistent(AbstractNode
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinWithOutConfiguration() throws Exception {
             withOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientWithOutConfiguration() throws Exception {
             joinClientWithOutConfigurationTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationDifferentOnBoth() throws Exception {
             staticCacheConfigurationDifferentOnBothTemplate().execute();
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testJoinClientStaticCacheConfigurationInCluster() throws Exception {
             staticCacheConfigurationInClusterTemplate().execute();
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteAbstractStandByClientReconnectTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteAbstractStandByClientReconnectTest.java
    index d01e11a78e558..176d34edf2281 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteAbstractStandByClientReconnectTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteAbstractStandByClientReconnectTest.java
    @@ -31,9 +31,9 @@
     import org.apache.ignite.events.Event;
     import org.apache.ignite.events.EventType;
     import org.apache.ignite.internal.IgniteEx;
    -import org.apache.ignite.internal.IgniteInternalFuture;
     import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor;
     import org.apache.ignite.internal.util.typedef.internal.CU;
    +import org.apache.ignite.lang.IgniteFuture;
     import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.spi.discovery.DiscoverySpiCustomMessage;
     import org.apache.ignite.spi.discovery.DiscoverySpiListener;
    @@ -389,7 +389,7 @@ private AwaitDiscoverySpiListener(
             }
     
             /** {@inheritDoc} */
    -        @Override public IgniteInternalFuture onDiscovery(
    +        @Override public IgniteFuture onDiscovery(
                 int type,
                 long topVer,
                 ClusterNode node,
    @@ -397,7 +397,7 @@ private AwaitDiscoverySpiListener(
                 @Nullable Map<Long, Collection<ClusterNode>> topHist,
                 @Nullable DiscoverySpiCustomMessage data
             ) {
    -            IgniteInternalFuture fut = delegate.onDiscovery(type, topVer, node, topSnapshot, topHist, data);
    +            IgniteFuture fut = delegate.onDiscovery(type, topVer, node, topSnapshot, topHist, data);
     
                 if (type == EVT_CLIENT_NODE_DISCONNECTED) {
                     try {
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectTest.java
    index d2244d450ebd8..28bcbb16b68b1 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectTest.java
    @@ -19,14 +19,19 @@
     
     import java.util.concurrent.CountDownLatch;
     import org.apache.ignite.internal.IgniteEx;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteStandByClientReconnectTest extends IgniteAbstractStandByClientReconnectTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActiveClientReconnectToActiveCluster() throws Exception {
             CountDownLatch activateLatch = new CountDownLatch(1);
     
    @@ -108,6 +113,7 @@ public void testActiveClientReconnectToActiveCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActiveClientReconnectToInActiveCluster() throws Exception {
             CountDownLatch activateLatch = new CountDownLatch(1);
     
    @@ -188,6 +194,7 @@ public void testActiveClientReconnectToInActiveCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInActiveClientReconnectToActiveCluster() throws Exception {
             CountDownLatch activateLatch = new CountDownLatch(1);
     
    @@ -245,6 +252,7 @@ public void testInActiveClientReconnectToActiveCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInActiveClientReconnectToInActiveCluster() throws Exception {
             startNodes(null);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectToNewClusterTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectToNewClusterTest.java
    index c1f672b29af5d..a9abff134fbe5 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectToNewClusterTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/standbycluster/reconnect/IgniteStandByClientReconnectToNewClusterTest.java
    @@ -22,14 +22,19 @@
     import java.util.Set;
     import java.util.concurrent.CountDownLatch;
     import org.apache.ignite.internal.IgniteEx;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteStandByClientReconnectToNewClusterTest extends IgniteAbstractStandByClientReconnectTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActiveClientReconnectsToActiveCluster() throws Exception {
             CountDownLatch activateLatch = new CountDownLatch(1);
     
    @@ -109,6 +114,7 @@ public void testActiveClientReconnectsToActiveCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testActiveClientReconnectsToInactiveCluster() throws Exception {
             startNodes(null);
     
    @@ -189,6 +195,7 @@ public void testActiveClientReconnectsToInactiveCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInactiveClientReconnectsToActiveCluster() throws Exception {
             CountDownLatch activateLatch = new CountDownLatch(1);
     
    @@ -250,6 +257,7 @@ public void testInactiveClientReconnectsToActiveCluster() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInactiveClientReconnectsToInactiveCluster() throws Exception {
             startNodes(null);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/TrackingPageIOTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/TrackingPageIOTest.java
    index cacea48c5289c..e4f0fe9588e9e 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/TrackingPageIOTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/tree/io/TrackingPageIOTest.java
    @@ -25,17 +25,22 @@
     import java.util.Set;
     import java.util.TreeSet;
     import java.util.concurrent.ThreadLocalRandom;
    -import junit.framework.TestCase;
     import org.apache.ignite.internal.pagemem.PageIdAllocator;
     import org.apache.ignite.internal.pagemem.PageIdUtils;
     import org.apache.ignite.internal.processors.cache.persistence.snapshot.TrackingPageIsCorruptedException;
     import org.apache.ignite.internal.util.GridUnsafe;
     import org.jetbrains.annotations.NotNull;
    +import org.junit.Test;
    +
    +import static org.junit.Assert.assertEquals;
    +import static org.junit.Assert.assertFalse;
    +import static org.junit.Assert.assertTrue;
    +import static org.junit.Assert.fail;
     
     /**
      *
      */
    -public class TrackingPageIOTest extends TestCase {
    +public class TrackingPageIOTest {
         /** Page size. */
         public static final int PAGE_SIZE = 4096;
     
    @@ -46,6 +51,7 @@ public class TrackingPageIOTest extends TestCase {
         /**
          *
          */
    +    @Test
         public void testBasics() throws Exception {
             ByteBuffer buf = createBuffer();
     
    @@ -72,6 +78,7 @@ public void testBasics() throws Exception {
         /**
          *
          */
    +    @Test
         public void testMarkingRandomly() throws Exception {
             ByteBuffer buf = createBuffer();
     
    @@ -82,6 +89,7 @@ public void testMarkingRandomly() throws Exception {
         /**
          *
          */
    +    @Test
         public void testZeroingRandomly() throws Exception {
             ByteBuffer buf = createBuffer();
     
    @@ -142,6 +150,7 @@ private void checkMarkingRandomly(ByteBuffer buf, int backupId, boolean testZero
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFindNextChangedPage() throws Exception {
             ByteBuffer buf = createBuffer();
     
    @@ -197,6 +206,7 @@ else if (setIdx.contains(pageId))
         /**
          *
          */
    +    @Test
         public void testMerging() throws Exception {
             ByteBuffer buf = createBuffer();
     
    @@ -235,6 +245,7 @@ public void testMerging() throws Exception {
         /**
          *
          */
    +    @Test
         public void testMerging_MarksShouldBeDropForSuccessfulBackup() throws Exception {
             ByteBuffer buf = createBuffer();
     
    @@ -303,6 +314,7 @@ private void generateMarking(
          *
          * @throws Exception if failed.
          */
    +    @Test
         public void testThatWeDontFailIfSnapshotTagWasLost() throws Exception {
             ByteBuffer buf = createBuffer();
     
    @@ -356,4 +368,4 @@ public void testThatWeDontFailIfSnapshotTagWasLost() throws Exception {
                 assertFalse(io.wasChanged(buf, id, oldTag, oldTag - 1, PAGE_SIZE));
             }
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalDeltaConsistencyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalDeltaConsistencyTest.java
    index 7f872fd005cb7..3b324d58ec1ef 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalDeltaConsistencyTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/AbstractWalDeltaConsistencyTest.java
    @@ -18,6 +18,7 @@
     package org.apache.ignite.internal.processors.cache.persistence.wal;
     
     import org.apache.ignite.Ignite;
    +import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.DataRegionConfiguration;
     import org.apache.ignite.configuration.DataStorageConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    @@ -46,6 +47,22 @@ public abstract class AbstractWalDeltaConsistencyTest extends GridCommonAbstract
             return cfg;
         }
     
    +    /**
    +     * @param name Cache name.
    +     * @return Cache configuration.
    +     */
    +    @SuppressWarnings("unchecked")
     +    protected <K, V> CacheConfiguration<K, V> cacheConfiguration(String name) {
    +        return defaultCacheConfiguration().setName(name);
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void afterTestsStopped() throws Exception {
    +        stopAllGrids();
    +
    +        super.afterTestsStopped();
    +    }
    +
         /**
          * Check page memory on each checkpoint.
          */
    @@ -76,6 +93,8 @@ protected boolean checkPagesOnCheckpoint() {
         protected DataStorageConfiguration getDataStorageConfiguration() {
             return new DataStorageConfiguration()
                 .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
    +                .setInitialSize(256 * 1024 * 1024)
    +                .setMaxSize(256 * 1024 * 1024)
                     .setPersistenceEnabled(true)
                     .setName("dflt-plc"));
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/CpTriggeredWalDeltaConsistencyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/CpTriggeredWalDeltaConsistencyTest.java
    index 781111cea1cdf..ca68ec66a29b4 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/CpTriggeredWalDeltaConsistencyTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/CpTriggeredWalDeltaConsistencyTest.java
    @@ -19,25 +19,37 @@
     
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.internal.IgniteEx;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Checkpoint triggered WAL delta records consistency test.
      */
    +@RunWith(JUnit4.class)
     public class CpTriggeredWalDeltaConsistencyTest extends AbstractWalDeltaConsistencyTest {
         /** {@inheritDoc} */
         @Override protected boolean checkPagesOnCheckpoint() {
             return true;
         }
     
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        stopAllGrids();
    +
    +        super.afterTest();
    +    }
    +
         /**
    -     *
    +     * @throws Exception If failed.
          */
    +    @Test
         public final void testPutRemoveCacheDestroy() throws Exception {
             IgniteEx ignite = startGrid(0);
     
             ignite.cluster().active(true);
     
    -        IgniteCache cache0 = ignite.getOrCreateCache("cache0");
    +        IgniteCache cache0 = ignite.createCache(cacheConfiguration("cache0"));
     
             for (int i = 0; i < 3_000; i++)
                 cache0.put(i, "Cache value " + i);
    @@ -49,7 +61,7 @@ public final void testPutRemoveCacheDestroy() throws Exception {
                 cache0.remove(i);
     
             for (int i = 5; i >= 0; i--) {
    -            IgniteCache cache1 = ignite.getOrCreateCache("cache1");
    +            IgniteCache cache1 = ignite.getOrCreateCache(cacheConfiguration("cache1"));
     
                 for (int j = 0; j < 300; j++)
                     cache1.put(j + i * 100, "Cache value " + j);
    @@ -59,7 +71,5 @@ public final void testPutRemoveCacheDestroy() throws Exception {
             }
     
             forceCheckpoint();
    -
    -        stopAllGrids();
         }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/ExplicitWalDeltaConsistencyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/ExplicitWalDeltaConsistencyTest.java
    index 1b9a18a8de1e6..2cd8259f6bfc8 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/ExplicitWalDeltaConsistencyTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/ExplicitWalDeltaConsistencyTest.java
    @@ -20,20 +20,33 @@
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.processors.cache.persistence.wal.memtracker.PageMemoryTrackerPluginProvider;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * WAL delta records consistency test with explicit checks.
      */
    +@RunWith(JUnit4.class)
     public class ExplicitWalDeltaConsistencyTest extends AbstractWalDeltaConsistencyTest {
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        stopAllGrids();
    +
    +        super.afterTest();
    +    }
    +
         /**
    -     *
    +     * @throws Exception If failed.
          */
    +    @Test
         public final void testPutRemoveAfterCheckpoint() throws Exception {
             IgniteEx ignite = startGrid(0);
     
             ignite.cluster().active(true);
     
    -        IgniteCache cache = ignite.getOrCreateCache(DEFAULT_CACHE_NAME);
    +        IgniteCache cache = ignite.createCache(cacheConfiguration(DEFAULT_CACHE_NAME));
     
             for (int i = 0; i < 5_000; i++)
                 cache.put(i, "Cache value " + i);
    @@ -55,30 +68,34 @@ public final void testPutRemoveAfterCheckpoint() throws Exception {
                 cache.remove(i);
     
             assertTrue(PageMemoryTrackerPluginProvider.tracker(ignite).checkPages(true));
    -
    -        stopAllGrids();
         }
     
         /**
    -     *
    +     * @throws Exception If failed.
          */
    +    @Test
         public final void testNotEmptyPds() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            fail("https://issues.apache.org/jira/browse/IGNITE-10822");
    +
             IgniteEx ignite = startGrid(0);
     
             ignite.cluster().active(true);
     
    -        IgniteCache cache = ignite.getOrCreateCache(DEFAULT_CACHE_NAME);
    +        IgniteCache cache = ignite.createCache(cacheConfiguration(DEFAULT_CACHE_NAME));
     
             for (int i = 0; i < 3_000; i++)
                 cache.put(i, "Cache value " + i);
     
    +        forceCheckpoint();
    +
             stopGrid(0);
     
             ignite = startGrid(0);
     
             ignite.cluster().active(true);
     
    -        cache = ignite.getOrCreateCache(DEFAULT_CACHE_NAME);
    +        cache = ignite.getOrCreateCache(cacheConfiguration(DEFAULT_CACHE_NAME));
     
             for (int i = 2_000; i < 5_000; i++)
                 cache.put(i, "Changed cache value " + i);
    @@ -87,7 +104,5 @@ public final void testNotEmptyPds() throws Exception {
                 cache.remove(i);
     
             assertTrue(PageMemoryTrackerPluginProvider.tracker(ignite).checkPages(true));
    -
    -        stopAllGrids();
         }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SegmentedRingByteBufferTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SegmentedRingByteBufferTest.java
    index 87f3fa781daef..032a357e6ea70 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SegmentedRingByteBufferTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SegmentedRingByteBufferTest.java
    @@ -39,6 +39,9 @@
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer.BufferMode.DIRECT;
     import static org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBuffer.BufferMode.ONHEAP;
    @@ -46,10 +49,12 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class SegmentedRingByteBufferTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAligned() throws Exception {
             doTestAligned(ONHEAP);
         }
    @@ -57,6 +62,7 @@ public void testAligned() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAlignedDirect() throws Exception {
             doTestAligned(DIRECT);
         }
    @@ -64,6 +70,7 @@ public void testAlignedDirect() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNotAligned() throws Exception {
             doTestNotAligned(ONHEAP);
         }
    @@ -71,6 +78,7 @@ public void testNotAligned() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNotAlignedDirect() throws Exception {
             doTestNotAligned(DIRECT);
         }
    @@ -78,6 +86,7 @@ public void testNotAlignedDirect() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoOverflowMultiThreaded() throws Exception {
             doTestNoOverflowMultiThreaded(ONHEAP);
         }
    @@ -85,6 +94,7 @@ public void testNoOverflowMultiThreaded() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoOverflowMultiThreadedDirect() throws Exception {
             doTestNoOverflowMultiThreaded(DIRECT);
         }
    @@ -92,6 +102,7 @@ public void testNoOverflowMultiThreadedDirect() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiThreaded() throws Exception {
             doTestMultiThreaded(ONHEAP);
         }
    @@ -99,6 +110,7 @@ public void testMultiThreaded() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiThreadedDirect() throws Exception {
             doTestMultiThreaded(DIRECT);
         }
    @@ -106,6 +118,7 @@ public void testMultiThreadedDirect() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiThreaded2() throws Exception {
             doTestMultiThreaded2(ONHEAP);
         }
    @@ -113,6 +126,7 @@ public void testMultiThreaded2() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiThreadedDirect2() throws Exception {
             doTestMultiThreaded2(DIRECT);
         }
    @@ -782,4 +796,4 @@ public int size() {
                 return res;
             }
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SysPropWalDeltaConsistencyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SysPropWalDeltaConsistencyTest.java
    index db9f10448b1fa..bf32501605901 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SysPropWalDeltaConsistencyTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/SysPropWalDeltaConsistencyTest.java
    @@ -21,10 +21,14 @@
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.processors.cache.persistence.wal.memtracker.PageMemoryTrackerPluginProvider;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * WAL delta records consistency test enabled by system property.
      */
    +@RunWith(JUnit4.class)
     public class SysPropWalDeltaConsistencyTest extends AbstractWalDeltaConsistencyTest {
         /** {@inheritDoc} */
         @Override protected void beforeTestsStarted() throws Exception {
    @@ -40,6 +44,13 @@ public class SysPropWalDeltaConsistencyTest extends AbstractWalDeltaConsistencyT
             System.clearProperty(PageMemoryTrackerPluginProvider.IGNITE_ENABLE_PAGE_MEMORY_TRACKER);
         }
     
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        stopAllGrids();
    +
    +        super.afterTest();
    +    }
    +
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
    @@ -50,14 +61,15 @@ public class SysPropWalDeltaConsistencyTest extends AbstractWalDeltaConsistencyT
         }
     
         /**
    -     *
    +     * @throws Exception If failed.
          */
    +    @Test
         public final void testPutRemoveMultinode() throws Exception {
             IgniteEx ignite0 = startGrid(0);
     
             ignite0.cluster().active(true);
     
    -        IgniteCache cache0 = ignite0.getOrCreateCache("cache0");
    +        IgniteCache cache0 = ignite0.createCache(cacheConfiguration("cache0"));
     
             for (int i = 0; i < 3_000; i++)
                 cache0.put(i, "Cache value " + i);
    @@ -70,13 +82,11 @@ public final void testPutRemoveMultinode() throws Exception {
             for (int i = 1_000; i < 4_000; i++)
                 cache0.remove(i);
     
    -        IgniteCache cache1 = ignite1.getOrCreateCache("cache1");
    +        IgniteCache cache1 = ignite1.createCache(cacheConfiguration("cache1"));
     
             for (int i = 0; i < 1_000; i++)
                 cache1.put(i, "Cache value " + i);
     
             forceCheckpoint();
    -
    -        stopAllGrids();
         }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAwareTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAwareTest.java
    index 82876845541ed..0bd9fcb853e40 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAwareTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/aware/SegmentAwareTest.java
    @@ -17,27 +17,71 @@
     package org.apache.ignite.internal.processors.cache.persistence.wal.aware;
     
     import java.util.concurrent.CountDownLatch;
    -import junit.framework.TestCase;
     import org.apache.ignite.IgniteCheckedException;
     import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException;
     import org.apache.ignite.internal.IgniteInternalFuture;
     import org.apache.ignite.internal.IgniteInterruptedCheckedException;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.junit.Test;
     
     import static org.hamcrest.CoreMatchers.is;
    +import static org.junit.Assert.assertEquals;
    +import static org.junit.Assert.assertFalse;
     import static org.junit.Assert.assertThat;
    +import static org.junit.Assert.assertTrue;
    +import static org.junit.Assert.fail;
     
     /**
      * Test for {@link SegmentAware}.
      */
    -public class SegmentAwareTest extends TestCase {
    +public class SegmentAwareTest {
    +    /**
     +     * Checks that the call chains SegmentArchivedStorage.markAsMovedToArchive -> SegmentLockStorage.locked and
     +     * SegmentLockStorage.releaseWorkSegment -> SegmentArchivedStorage.onSegmentUnlocked do not deadlock.
     +     *
     +     * @throws IgniteCheckedException If failed.
    +     */
    +    @Test
    +    public void testAvoidDeadlockArchiverAndLockStorage() throws IgniteCheckedException {
    +        SegmentAware aware = new SegmentAware(10, false);
    +
    +        int iterationCnt = 100_000;
    +        int segmentToHandle = 1;
    +
    +        IgniteInternalFuture archiverThread = GridTestUtils.runAsync(() -> {
    +            int i = iterationCnt;
    +
    +            while (i-- > 0) {
    +                try {
    +                    aware.markAsMovedToArchive(segmentToHandle);
    +                }
    +                catch (IgniteInterruptedCheckedException e) {
    +                    throw new RuntimeException(e);
    +                }
    +            }
    +        });
    +
    +        IgniteInternalFuture lockerThread = GridTestUtils.runAsync(() -> {
    +            int i = iterationCnt;
    +
    +            while (i-- > 0) {
    +                aware.lockWorkSegment(segmentToHandle);
    +
    +                aware.releaseWorkSegment(segmentToHandle);
    +            }
    +        });
    +
    +        archiverThread.get();
    +        lockerThread.get();
    +    }
     
         /**
          * Waiting finished when work segment is set.
          */
    +    @Test
         public void testFinishAwaitSegment_WhenExactWaitingSegmentWasSet() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegment(5));
     
    @@ -51,9 +95,10 @@ public void testFinishAwaitSegment_WhenExactWaitingSegmentWasSet() throws Ignite
         /**
          * Waiting finished when work segment greater than expected is set.
          */
    +    @Test
         public void testFinishAwaitSegment_WhenGreaterThanWaitingSegmentWasSet() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegment(5));
     
    @@ -67,9 +112,10 @@ public void testFinishAwaitSegment_WhenGreaterThanWaitingSegmentWasSet() throws
         /**
      * Waiting finished when the next segment equals the awaited one.
          */
    +    @Test
         public void testFinishAwaitSegment_WhenNextSegmentEqualToWaitingOne() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegment(5));
     
    @@ -89,9 +135,10 @@ public void testFinishAwaitSegment_WhenNextSegmentEqualToWaitingOne() throws Ign
         /**
          * Waiting finished when interrupt was triggered.
          */
    +    @Test
         public void testFinishAwaitSegment_WhenInterruptWasCall() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegment(5));
     
    @@ -105,9 +152,10 @@ public void testFinishAwaitSegment_WhenInterruptWasCall() throws IgniteCheckedEx
         /**
          * Waiting finished when next work segment triggered.
          */
    +    @Test
         public void testFinishWaitSegmentForArchive_WhenWorkSegmentIncremented() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.curAbsWalIdx(5);
             aware.setLastArchivedAbsoluteIndex(4);
    @@ -124,9 +172,10 @@ public void testFinishWaitSegmentForArchive_WhenWorkSegmentIncremented() throws
         /**
      * Waiting finished when a work segment greater than the awaited one is set.
          */
    +    @Test
         public void testFinishWaitSegmentForArchive_WhenWorkSegmentGreaterValue() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.curAbsWalIdx(5);
             aware.setLastArchivedAbsoluteIndex(4);
    @@ -143,9 +192,10 @@ public void testFinishWaitSegmentForArchive_WhenWorkSegmentGreaterValue() throws
         /**
          * Waiting finished when interrupt was triggered.
          */
    +    @Test
         public void testFinishWaitSegmentForArchive_WhenInterruptWasCall() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.curAbsWalIdx(5);
             aware.setLastArchivedAbsoluteIndex(4);
    @@ -162,9 +212,10 @@ public void testFinishWaitSegmentForArchive_WhenInterruptWasCall() throws Ignite
         /**
          * Should correct calculate next segment.
          */
    +    @Test
         public void testCorrectCalculateNextSegmentIndex() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.curAbsWalIdx(5);
     
    @@ -178,9 +229,10 @@ public void testCorrectCalculateNextSegmentIndex() throws IgniteCheckedException
         /**
          * Waiting finished when segment archived.
          */
    +    @Test
         public void testFinishWaitNextAbsoluteIndex_WhenMarkAsArchivedFirstSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(2);
    +        SegmentAware aware = new SegmentAware(2, false);
     
             aware.curAbsWalIdx(1);
             aware.setLastArchivedAbsoluteIndex(-1);
    @@ -197,9 +249,10 @@ public void testFinishWaitNextAbsoluteIndex_WhenMarkAsArchivedFirstSegment() thr
         /**
          * Waiting finished when segment archived.
          */
    +    @Test
         public void testFinishWaitNextAbsoluteIndex_WhenSetToArchivedFirst() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(2);
    +        SegmentAware aware = new SegmentAware(2, false);
     
             aware.curAbsWalIdx(1);
             aware.setLastArchivedAbsoluteIndex(-1);
    @@ -216,9 +269,10 @@ public void testFinishWaitNextAbsoluteIndex_WhenSetToArchivedFirst() throws Igni
         /**
          * Waiting finished when force interrupt was triggered.
          */
    +    @Test
         public void testFinishWaitNextAbsoluteIndex_WhenOnlyForceInterruptWasCall() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(2);
    +        SegmentAware aware = new SegmentAware(2, false);
     
             aware.curAbsWalIdx(2);
             aware.setLastArchivedAbsoluteIndex(-1);
    @@ -241,9 +295,10 @@ public void testFinishWaitNextAbsoluteIndex_WhenOnlyForceInterruptWasCall() thro
         /**
          * Waiting finished when segment archived.
          */
    +    @Test
         public void testFinishSegmentArchived_WhenSetExactWaitingSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegmentArchived(5));
     
    @@ -257,9 +312,10 @@ public void testFinishSegmentArchived_WhenSetExactWaitingSegment() throws Ignite
         /**
          * Waiting finished when segment archived.
          */
    -    public void testFinishSegmentArchived_WhenMarkExactWatingSegment() throws IgniteCheckedException, InterruptedException {
    +    @Test
    +    public void testFinishSegmentArchived_WhenMarkExactWaitingSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegmentArchived(5));
     
    @@ -273,9 +329,10 @@ public void testFinishSegmentArchived_WhenMarkExactWatingSegment() throws Ignite
         /**
          * Waiting finished when segment archived.
          */
    -    public void testFinishSegmentArchived_WhenSetGreaterThanWatingSegment() throws IgniteCheckedException, InterruptedException {
    +    @Test
    +    public void testFinishSegmentArchived_WhenSetGreaterThanWaitingSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegmentArchived(5));
     
    @@ -289,9 +346,10 @@ public void testFinishSegmentArchived_WhenSetGreaterThanWatingSegment() throws I
         /**
          * Waiting finished when segment archived.
          */
    -    public void testFinishSegmentArchived_WhenMarkGreaterThanWatingSegment() throws IgniteCheckedException, InterruptedException {
    +    @Test
    +    public void testFinishSegmentArchived_WhenMarkGreaterThanWaitingSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             IgniteInternalFuture future = awaitThread(() -> aware.awaitSegmentArchived(5));
     
    @@ -305,9 +363,10 @@ public void testFinishSegmentArchived_WhenMarkGreaterThanWatingSegment() throws
         /**
          * Waiting finished when interrupt was triggered.
          */
    +    @Test
         public void testFinishSegmentArchived_WhenInterruptWasCall() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.curAbsWalIdx(5);
             aware.setLastArchivedAbsoluteIndex(4);
    @@ -324,9 +383,10 @@ public void testFinishSegmentArchived_WhenInterruptWasCall() throws IgniteChecke
         /**
          * Waiting finished when release work segment.
          */
    +    @Test
         public void testMarkAsMovedToArchive_WhenReleaseLockedSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.checkCanReadArchiveOrReserveWorkSegment(5);
     
    @@ -342,9 +402,10 @@ public void testMarkAsMovedToArchive_WhenReleaseLockedSegment() throws IgniteChe
         /**
          * Waiting finished and increment archived segment when interrupt was call.
          */
    +    @Test
         public void testMarkAsMovedToArchive_WhenInterruptWasCall() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
             aware.checkCanReadArchiveOrReserveWorkSegment(5);
     
             IgniteInternalFuture future = awaitThread(() -> aware.markAsMovedToArchive(5));
    @@ -362,11 +423,12 @@ public void testMarkAsMovedToArchive_WhenInterruptWasCall() throws IgniteChecked
         /**
          * Waiting finished when segment archived.
          */
    +    @Test
         public void testFinishWaitSegmentToCompress_WhenSetLastArchivedSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, true);
     
    -        aware.lastCompressedIdx(5);
    +        aware.onSegmentCompressed(5);
     
             IgniteInternalFuture future = awaitThread(aware::waitNextSegmentToCompress);
     
    @@ -380,11 +442,12 @@ public void testFinishWaitSegmentToCompress_WhenSetLastArchivedSegment() throws
         /**
          * Waiting finished when segment archived.
          */
    +    @Test
         public void testFinishWaitSegmentToCompress_WhenMarkLastArchivedSegment() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, true);
     
    -        aware.lastCompressedIdx(5);
    +        aware.onSegmentCompressed(5);
     
             IgniteInternalFuture future = awaitThread(aware::waitNextSegmentToCompress);
     
    @@ -398,28 +461,24 @@ public void testFinishWaitSegmentToCompress_WhenMarkLastArchivedSegment() throws
         /**
          * Next segment for compress based on truncated archive idx.
          */
    +    @Test
         public void testCorrectCalculateNextCompressSegment() throws IgniteCheckedException, InterruptedException {
    -        //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, true);
     
    -        aware.lastCompressedIdx(5);
             aware.setLastArchivedAbsoluteIndex(6);
    -        aware.lastTruncatedArchiveIdx(7);
     
    -        //when:
    -        long segmentToCompress = aware.waitNextSegmentToCompress();
    -
    -        //then: segment to compress greater than truncated archive idx
    -        assertEquals(8, segmentToCompress);
    +        for (int exp = 0; exp <= 6; exp++)
    +            assertEquals(exp, aware.waitNextSegmentToCompress());
         }
     
         /**
          * Waiting finished when interrupt was call.
          */
    +    @Test
         public void testFinishWaitSegmentToCompress_WhenInterruptWasCall() throws IgniteCheckedException, InterruptedException {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    -        aware.lastCompressedIdx(5);
    +        SegmentAware aware = new SegmentAware(10, true);
    +        aware.onSegmentCompressed(5);
     
             IgniteInternalFuture future = awaitThread(aware::waitNextSegmentToCompress);
     
    @@ -430,12 +489,37 @@ public void testFinishWaitSegmentToCompress_WhenInterruptWasCall() throws Ignite
             assertTrue(future.get(20) instanceof IgniteInterruptedCheckedException);
         }
     
    +    /**
    +     * Tests that {@link SegmentAware#lastCompressedIdx} advances in proper order when segments are compressed out of order.
    +     */
    +    @Test
    +    public void testLastCompressedIdxProperOrdering() throws IgniteInterruptedCheckedException {
    +        SegmentAware aware = new SegmentAware(10, true);
    +
    +        for (int i = 0; i < 5; i++) {
    +            aware.setLastArchivedAbsoluteIndex(i);
    +            aware.waitNextSegmentToCompress();
    +        }
    +
    +        aware.onSegmentCompressed(0);
    +
    +        aware.onSegmentCompressed(4);
    +        assertEquals(0, aware.lastCompressedIdx());
    +        aware.onSegmentCompressed(1);
    +        assertEquals(1, aware.lastCompressedIdx());
    +        aware.onSegmentCompressed(3);
    +        assertEquals(1, aware.lastCompressedIdx());
    +        aware.onSegmentCompressed(2);
    +        assertEquals(4, aware.lastCompressedIdx());
    +    }
    +
         /**
          * Segment reserve correctly.
          */
    +    @Test
         public void testReserveCorrectly() {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             //when: reserve one segment twice and one segment once.
             aware.reserve(5);
    @@ -476,9 +560,10 @@ public void testReserveCorrectly() {
         /**
          * Should fail when release unreserved segment.
          */
    +    @Test
         public void testAssertFail_WhenReleaseUnreservedSegment() {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.reserve(5);
             try {
    @@ -495,9 +580,10 @@ public void testAssertFail_WhenReleaseUnreservedSegment() {
         /**
          * Segment locked correctly.
          */
    +    @Test
         public void testReserveWorkSegmentCorrectly() {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             //when: lock one segment twice.
             aware.checkCanReadArchiveOrReserveWorkSegment(5);
    @@ -528,9 +614,10 @@ public void testReserveWorkSegmentCorrectly() {
         /**
          * Should fail when release unlocked segment.
          */
    +    @Test
         public void testAssertFail_WhenReleaseUnreservedWorkSegment() {
             //given: thread which awaited segment.
    -        SegmentAware aware = new SegmentAware(10);
    +        SegmentAware aware = new SegmentAware(10, false);
     
             aware.checkCanReadArchiveOrReserveWorkSegment(5);
             try {
    @@ -598,4 +685,4 @@ interface Waiter {
              */
             void await() throws IgniteInterruptedCheckedException;
         }
    -}
    \ No newline at end of file
    +}
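The new `testLastCompressedIdxProperOrdering` test above pins down the contract behind the `lastCompressedIdx()`/`onSegmentCompressed()` change: compression workers may finish segments out of order, but the last-compressed index must only advance over a contiguous prefix of compressed segments. A minimal sketch of that bookkeeping (illustrative only, not `SegmentAware`'s actual internals):

```java
import java.util.TreeSet;

/**
 * Illustrative tracker: buffers out-of-order completions and advances
 * the last-compressed index only across a contiguous prefix.
 */
class CompressedIdxTracker {
    /** Segments reported compressed but not yet contiguous with the prefix. */
    private final TreeSet<Long> buffered = new TreeSet<>();

    /** Highest index such that all segments up to and including it are compressed. */
    private long lastCompressedIdx = -1;

    /** Called when a worker finishes compressing the given segment. */
    synchronized void onSegmentCompressed(long idx) {
        buffered.add(idx);

        // Advance the prefix while the next expected index is present.
        while (buffered.remove(lastCompressedIdx + 1))
            lastCompressedIdx++;
    }

    /** @return Last contiguously compressed segment index. */
    synchronized long lastCompressedIdx() {
        return lastCompressedIdx;
    }
}
```

Fed the completion order used in the test (0, 4, 1, 3, 2), the index reads 0, 0, 1, 1, 4 after each step, matching the assertions above.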
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTracker.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTracker.java
    index 6649993de89c8..6944e215fce45 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTracker.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTracker.java
    @@ -31,7 +31,6 @@
     import java.util.concurrent.locks.ReentrantLock;
     import org.apache.ignite.IgniteCheckedException;
     import org.apache.ignite.IgniteLogger;
    -import org.apache.ignite.configuration.WALMode;
     import org.apache.ignite.internal.GridKernalContext;
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.mem.DirectMemoryProvider;
    @@ -46,29 +45,33 @@
     import org.apache.ignite.internal.pagemem.wal.WALPointer;
     import org.apache.ignite.internal.pagemem.wal.record.PageSnapshot;
     import org.apache.ignite.internal.pagemem.wal.record.WALRecord;
    -import org.apache.ignite.internal.pagemem.wal.record.delta.InitNewPageRecord;
     import org.apache.ignite.internal.pagemem.wal.record.delta.PageDeltaRecord;
    -import org.apache.ignite.internal.pagemem.wal.record.delta.RecycleRecord;
     import org.apache.ignite.internal.processors.cache.CacheGroupContext;
     import org.apache.ignite.internal.processors.cache.GridCacheProcessor;
     import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
    +import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils;
    +import org.apache.ignite.internal.processors.cache.mvcc.txlog.TxLog;
     import org.apache.ignite.internal.processors.cache.persistence.DataRegion;
     import org.apache.ignite.internal.processors.cache.persistence.DbCheckpointListener;
     import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager;
     import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager;
     import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage;
    -import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx;
     import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl;
    +import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO;
     import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager;
    -import org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager;
    +import org.apache.ignite.internal.processors.cache.tree.AbstractDataLeafIO;
     import org.apache.ignite.internal.util.GridUnsafe;
     import org.apache.ignite.internal.util.typedef.internal.CU;
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.plugin.IgnitePlugin;
     import org.apache.ignite.plugin.PluginContext;
    +import org.apache.ignite.spi.encryption.EncryptionSpi;
     import org.mockito.Mockito;
     
    +import static org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.T_CACHE_ID_DATA_REF_MVCC_LEAF;
    +import static org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO.T_DATA_REF_MVCC_LEAF;
    +
     /**
      * Page memory tracker.
      *
    @@ -149,40 +152,21 @@ public class PageMemoryTracker implements IgnitePlugin {
          */
         IgniteWriteAheadLogManager createWalManager() {
             if (isEnabled()) {
    -            if (ctx.igniteConfiguration().getDataStorageConfiguration().getWalMode() == WALMode.FSYNC) {
    -                return new FsyncModeFileWriteAheadLogManager(gridCtx) {
    -                    @Override public WALPointer log(WALRecord record) throws IgniteCheckedException {
    -                        WALPointer res = super.log(record);
    -
    -                        applyWalRecord(record);
    -
    -                        return res;
    -                    }
    -
    -                    @Override public void resumeLogging(WALPointer lastPtr) throws IgniteCheckedException {
    -                        super.resumeLogging(lastPtr);
    -
    -                        emptyPds = (lastPtr == null);
    -                    }
    -                };
    -            }
    -            else {
    -                return new FileWriteAheadLogManager(gridCtx) {
    -                    @Override public WALPointer log(WALRecord record) throws IgniteCheckedException {
    -                        WALPointer res = super.log(record);
    +            return new FileWriteAheadLogManager(gridCtx) {
    +                @Override public WALPointer log(WALRecord record) throws IgniteCheckedException {
    +                    WALPointer res = super.log(record);
     
    -                        applyWalRecord(record);
    +                    applyWalRecord(record);
     
    -                        return res;
    -                    }
    +                    return res;
    +                }
     
    -                    @Override public void resumeLogging(WALPointer lastPtr) throws IgniteCheckedException {
    -                        super.resumeLogging(lastPtr);
    +                @Override public void resumeLogging(WALPointer lastPtr) throws IgniteCheckedException {
    +                    super.resumeLogging(lastPtr);
     
    -                        emptyPds = (lastPtr == null);
    -                    }
    -                };
    -            }
    +                    emptyPds = (lastPtr == null);
    +                }
    +            };
             }
     
             return null;
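With `FsyncModeFileWriteAheadLogManager` gone, `createWalManager()` above keeps only the single `FileWriteAheadLogManager` subclass. The pattern it retains is a decorator-style override of `log()`: delegate to the real WAL first, then mirror the record into the tracker. A self-contained sketch with simplified, hypothetical stand-in types (not Ignite's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplified WAL manager: log() appends a record and
// returns its pointer (a plain counter here).
class WalManager {
    private long nextPtr;

    /** Appends a record to the (real) WAL and returns its pointer. */
    long log(String record) {
        return nextPtr++;
    }
}

// The tracker subclasses the manager and mirrors every logged record,
// without changing what reaches the actual WAL.
class TrackingWalManager extends WalManager {
    /** Records mirrored into the tracker's shadow state. */
    final List<String> applied = new ArrayList<>();

    @Override long log(String record) {
        long ptr = super.log(record); // Delegate to the real WAL first.

        applied.add(record);          // Then apply the record to the tracker.

        return ptr;
    }
}
```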
    @@ -222,9 +206,21 @@ synchronized void start() {
     
             pageSize = ctx.igniteConfiguration().getDataStorageConfiguration().getPageSize();
     
    +        EncryptionSpi encSpi = ctx.igniteConfiguration().getEncryptionSpi();
    +
             pageMemoryMock = Mockito.mock(PageMemory.class);
     
             Mockito.doReturn(pageSize).when(pageMemoryMock).pageSize();
    +        Mockito.when(pageMemoryMock.realPageSize(Mockito.anyInt())).then(mock -> {
    +            int grpId = (Integer) mock.getArguments()[0];
    +
    +            if (gridCtx.encryption().groupKey(grpId) == null)
    +                return pageSize;
    +
    +            return pageSize
    +                - (encSpi.encryptedSizeNoPadding(pageSize) - pageSize)
    +                - encSpi.blockSize() /* For CRC. */;
    +        });
     
             GridCacheSharedContext sharedCtx = gridCtx.cache().context();
     
    @@ -283,7 +279,7 @@ synchronized void stop() {
     
             stats.clear();
     
    -        memoryProvider.shutdown();
    +        memoryProvider.shutdown(true);
     
             if (checkpointLsnr != null) {
                 ((GridCacheDatabaseSharedManager)gridCtx.cache().context().database())
    @@ -421,8 +417,6 @@ private void applyWalRecord(WALRecord record) throws IgniteCheckedException {
                 try {
                     PageUtils.putBytes(page.address(), 0, snapshot.pageData());
     
    -                page.fullPageId(fullPageId);
    -
                     page.changeHistory().clear();
     
                     page.changeHistory().add(record);
    @@ -446,12 +440,6 @@ else if (record instanceof PageDeltaRecord) {
                 try {
                     deltaRecord.applyDelta(pageMemoryMock, page.address());
     
    -                // Set new fullPageId after recycle or after new page init, because pageId tag is changed.
    -                if (record instanceof RecycleRecord)
    -                    page.fullPageId(new FullPageId(((RecycleRecord)record).newPageId(), grpId));
    -                else if (record instanceof InitNewPageRecord)
    -                    page.fullPageId(new FullPageId(((InitNewPageRecord)record).newPageId(), grpId));
    -
                     page.changeHistory().add(record);
                 }
                 finally {
    @@ -462,18 +450,7 @@ else if (record instanceof InitNewPageRecord)
                 return;
     
             // Increment statistics.
    -        AtomicInteger statCnt = stats.get(record.type());
    -
    -        if (statCnt == null) {
    -            statCnt = new AtomicInteger();
    -
    -            AtomicInteger oldCnt = stats.putIfAbsent(record.type(), statCnt);
    -
    -            if (oldCnt != null)
    -                statCnt = oldCnt;
    -        }
    -
    -        statCnt.incrementAndGet();
    +        stats.computeIfAbsent(record.type(), r -> new AtomicInteger()).incrementAndGet();
         }
     
         /**
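The statistics hunk above replaces a hand-rolled `putIfAbsent` retry loop with `ConcurrentMap.computeIfAbsent`, which performs the create-if-missing step atomically. A self-contained sketch of the same idiom (`String` stands in for Ignite's `WALRecord.RecordType` here):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Per-type record counters: computeIfAbsent atomically installs a fresh
// AtomicInteger on first use, so the manual putIfAbsent race handling
// in the removed code is no longer needed.
class RecordStats {
    private final ConcurrentMap<String, AtomicInteger> stats = new ConcurrentHashMap<>();

    /** Increments the counter for the given record type. */
    void onRecord(String type) {
        stats.computeIfAbsent(type, t -> new AtomicInteger()).incrementAndGet();
    }

    /** @return Number of records seen for the given type. */
    int count(String type) {
        AtomicInteger cnt = stats.get(type);

        return cnt == null ? 0 : cnt.get();
    }
}
```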
    @@ -486,6 +463,9 @@ private long pageStoreAllocatedPages() {
     
             long totalAllocated = pageStoreMgr.pagesAllocated(MetaStorage.METASTORAGE_CACHE_ID);
     
    +        if (MvccUtils.mvccEnabled(gridCtx))
    +            totalAllocated += pageStoreMgr.pagesAllocated(TxLog.TX_LOG_CACHE_ID);
    +
             for (CacheGroupContext ctx : gridCtx.cache().cacheGroups())
                 totalAllocated += pageStoreMgr.pagesAllocated(ctx.groupId());
     
    @@ -509,15 +489,6 @@ public boolean checkPages(boolean checkAll) throws IgniteCheckedException {
             synchronized (pageAllocatorMux) {
                 long totalAllocated = pageStoreAllocatedPages();
     
    -            long metaId = ((PageMemoryEx)cacheProc.context().database().metaStorage().pageMemory()).metaPageId(
    -                MetaStorage.METASTORAGE_CACHE_ID);
    -
    -            // Meta storage meta page is counted as allocated, but never used in current implementation.
    -            // This behavior will be fixed by https://issues.apache.org/jira/browse/IGNITE-8735
    -            if (!pages.containsKey(new FullPageId(metaId, MetaStorage.METASTORAGE_CACHE_ID))
    -                && pages.containsKey(new FullPageId(metaId + 1, MetaStorage.METASTORAGE_CACHE_ID)))
    -                totalAllocated--;
    -
                 log.info(">>> Total tracked pages: " + pages.size());
                 log.info(">>> Total allocated pages: " + totalAllocated);
     
    @@ -542,6 +513,8 @@ public boolean checkPages(boolean checkAll) throws IgniteCheckedException {
     
                 if (fullPageId.groupId() == MetaStorage.METASTORAGE_CACHE_ID)
                     pageMem = cacheProc.context().database().metaStorage().pageMemory();
    +            else if (fullPageId.groupId() == TxLog.TX_LOG_CACHE_ID)
    +                pageMem = cacheProc.context().database().dataRegion(TxLog.TX_LOG_CACHE_NAME).pageMemory();
                 else {
                     CacheGroupContext ctx = cacheProc.cacheGroup(fullPageId.groupId());
     
    @@ -563,7 +536,7 @@ public boolean checkPages(boolean checkAll) throws IgniteCheckedException {
                 long rmtPage = pageMem.acquirePage(fullPageId.groupId(), fullPageId.pageId());
     
                 try {
    -                long rmtPageAddr = pageMem.readLock(fullPageId.groupId(), fullPageId.pageId(), rmtPage);
    +                long rmtPageAddr = pageMem.readLockForce(fullPageId.groupId(), fullPageId.pageId(), rmtPage);
     
                     try {
                         page.lock();
    @@ -576,20 +549,8 @@ public boolean checkPages(boolean checkAll) throws IgniteCheckedException {
     
                                 dumpHistory(page);
                             }
    -                        else {
    -                            ByteBuffer locBuf = GridUnsafe.wrapPointer(page.address(), pageSize);
    -                            ByteBuffer rmtBuf = GridUnsafe.wrapPointer(rmtPageAddr, pageSize);
    -
    -                            if (!locBuf.equals(rmtBuf)) {
    -                                res = false;
    -
    -                                log.error("Page buffers are not equals: " + fullPageId);
    -
    -                                dumpDiff(locBuf, rmtBuf);
    -
    -                                dumpHistory(page);
    -                            }
    -                        }
    +                        else if (!comparePages(fullPageId, page, rmtPageAddr))
    +                            res = false;
     
                             if (!res && !checkAll)
                                 return false;
    @@ -611,6 +572,52 @@ public boolean checkPages(boolean checkAll) throws IgniteCheckedException {
             return res;
         }
     
    +    /**
    +     * Compare pages content.
    +     *
    +     * @param fullPageId Full page ID.
    +     * @param expectedPage Expected page.
    +     * @param actualPageAddr Actual page address.
    +     * @return {@code True} if pages are equal, {@code False} otherwise.
    +     * @throws IgniteCheckedException If failed.
    +     */
    +    private boolean comparePages(FullPageId fullPageId, DirectMemoryPage expectedPage, long actualPageAddr) throws IgniteCheckedException {
    +        long expPageAddr = expectedPage.address();
    +
    +        GridCacheProcessor cacheProc = gridCtx.cache();
    +
    +        ByteBuffer locBuf = GridUnsafe.wrapPointer(expPageAddr, pageSize);
    +        ByteBuffer rmtBuf = GridUnsafe.wrapPointer(actualPageAddr, pageSize);
    +
    +        PageIO pageIo = PageIO.getPageIO(actualPageAddr);
    +
    +        if (pageIo.getType() == T_DATA_REF_MVCC_LEAF || pageIo.getType() == T_CACHE_ID_DATA_REF_MVCC_LEAF) {
    +            assert cacheProc.cacheGroup(fullPageId.groupId()).mvccEnabled();
    +
    +            AbstractDataLeafIO io = (AbstractDataLeafIO)pageIo;
    +
    +            int cnt = io.getCount(actualPageAddr);
    +
    +            // Reset lock info, as there is no sense in logging it to WAL.
    +            for (int i = 0; i < cnt; i++) {
    +                io.setMvccLockCoordinatorVersion(expPageAddr, i, io.getMvccLockCoordinatorVersion(actualPageAddr, i));
    +                io.setMvccLockCounter(expPageAddr, i, io.getMvccLockCounter(actualPageAddr, i));
    +            }
    +            }
    +        }
    +
    +        if (!locBuf.equals(rmtBuf)) {
    +            log.error("Page buffers are not equal: " + fullPageId);
    +
    +            dumpDiff(locBuf, rmtBuf);
    +
    +            dumpHistory(expectedPage);
    +
    +            return false;
    +        }
    +
    +        return true;
    +    }
    +
         /**
          * Dump statistics to log.
          */
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTrackerPluginProvider.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTrackerPluginProvider.java
    index c5f83b5ef68dd..cad3b0269dd75 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTrackerPluginProvider.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/memtracker/PageMemoryTrackerPluginProvider.java
    @@ -20,12 +20,16 @@
     import java.io.Serializable;
     import java.util.UUID;
     import org.apache.ignite.Ignite;
    +import org.apache.ignite.IgniteCheckedException;
     import org.apache.ignite.IgniteLogger;
     import org.apache.ignite.cluster.ClusterNode;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.GridKernalContext;
    +import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.pagemem.store.IgnitePageStoreManager;
     import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager;
    +import org.apache.ignite.internal.processors.cache.persistence.DatabaseLifecycleListener;
    +import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager;
     import org.apache.ignite.internal.processors.cluster.IgniteChangeGlobalStateSupport;
     import org.apache.ignite.internal.util.typedef.internal.CU;
     import org.apache.ignite.plugin.CachePluginContext;
    @@ -43,7 +47,7 @@
      * PageMemory tracker plugin provider.
      */
     public class PageMemoryTrackerPluginProvider implements PluginProvider,
    -    IgniteChangeGlobalStateSupport {
    +    IgniteChangeGlobalStateSupport, DatabaseLifecycleListener {
         /** System property name to implicitly enable page memory tracker . */
         public static final String IGNITE_ENABLE_PAGE_MEMORY_TRACKER = "IGNITE_ENABLE_PAGE_MEMORY_TRACKER";
     
    @@ -132,7 +136,7 @@ else if (IgnitePageStoreManager.class.equals(cls))
     
         /** {@inheritDoc} */
         @Override public void start(PluginContext ctx) {
    -        // No-op
    +        ((IgniteEx)ctx.grid()).context().internalSubscriptionProcessor().registerDatabaseListener(this);
         }
     
         /** {@inheritDoc} */
    @@ -197,4 +201,15 @@ public static PageMemoryTracker tracker(Ignite ignite) {
                 return null;
             }
         }
    +
    +    @Override public void beforeBinaryMemoryRestore(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException {
    +        if (plugin != null) {
    +            try {
    +                plugin.start();
    +            }
    +            catch (Exception e) {
    +                log.error("Can't start plugin", e);
    +            }
    +        }
    +    }
     }
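The plugin provider change above registers the provider as a `DatabaseLifecycleListener` during `start()`, so the database manager can invoke `beforeBinaryMemoryRestore()` and (re)start the tracker before pages are replayed from WAL. A hypothetical, simplified version of that subscription mechanism (illustrative names, not Ignite's actual API):

```java
import java.util.ArrayList;
import java.util.List;

// Listeners subscribe during plugin start; the database manager
// notifies them before it restores binary memory, so a tracker can
// start before WAL replay begins.
interface DbLifecycleListener {
    void beforeBinaryMemoryRestore();
}

class SubscriptionProcessor {
    private final List<DbLifecycleListener> dbListeners = new ArrayList<>();

    /** Called from a plugin's start() to subscribe for database events. */
    void registerDatabaseListener(DbLifecycleListener lsnr) {
        dbListeners.add(lsnr);
    }

    /** Called by the database manager before binary memory restore. */
    void onBeforeBinaryMemoryRestore() {
        for (DbLifecycleListener lsnr : dbListeners)
            lsnr.beforeBinaryMemoryRestore();
    }
}
```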
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIteratorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIteratorTest.java
    index cf660c8b86832..89ab63a1b9308 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIteratorTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/wal/reader/StandaloneWalRecordsIteratorTest.java
    @@ -21,6 +21,8 @@
     import java.io.IOException;
     import java.nio.file.OpenOption;
     import java.nio.file.StandardOpenOption;
    +import java.util.List;
    +import java.util.Random;
     import java.util.concurrent.atomic.AtomicInteger;
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCheckedException;
    @@ -30,27 +32,32 @@
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager;
     import org.apache.ignite.internal.pagemem.wal.WALIterator;
    +import org.apache.ignite.internal.pagemem.wal.WALPointer;
    +import org.apache.ignite.internal.pagemem.wal.record.RolloverType;
     import org.apache.ignite.internal.pagemem.wal.record.SnapshotRecord;
    +import org.apache.ignite.internal.pagemem.wal.record.WALRecord;
     import org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager;
     import org.apache.ignite.internal.processors.cache.persistence.file.FileIO;
     import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIO;
     import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory;
    +import org.apache.ignite.internal.processors.cache.persistence.wal.FileDescriptor;
    +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWALPointer;
     import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
    +import org.apache.ignite.lang.IgniteBiTuple;
    +import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.internal.processors.cache.persistence.wal.reader.IgniteWalIteratorFactory.IteratorParametersBuilder;
     
     /**
      * Tests that StandaloneWalRecordsIterator correctly closes file descriptors associated with WAL files.
      */
    +@RunWith(JUnit4.class)
     public class StandaloneWalRecordsIteratorTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String name) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(name);
    @@ -61,9 +68,6 @@ public class StandaloneWalRecordsIteratorTest extends GridCommonAbstractTest {
                         new DataRegionConfiguration()
                             .setPersistenceEnabled(true)
                     )
    -        ).setDiscoverySpi(
    -            new TcpDiscoverySpi()
    -                .setIpFinder(IP_FINDER)
             );
     
             return cfg;
    @@ -88,11 +92,11 @@ public class StandaloneWalRecordsIteratorTest extends GridCommonAbstractTest {
         }
     
         /**
    -     * Check correct closing file descriptors.
    +     * Generates archived WAL segment files on a temporary grid.
          *
    -     * @throws Exception if test failed.
    +     * @return Path to the directory with the archived WAL segments.
          */
    -    public void testCorrectClosingFileDescriptors() throws Exception {
    +    private String createWalFiles() throws Exception {
             IgniteEx ig = (IgniteEx)startGrid();
     
             String archiveWalDir = getArchiveWalDirPath(ig);
    @@ -108,7 +110,7 @@ public void testCorrectClosingFileDescriptors() throws Exception {
                 sharedMgr.checkpointReadLock();
     
                 try {
    -                walMgr.log(new SnapshotRecord(i, false));
    +                walMgr.log(new SnapshotRecord(i, false), RolloverType.NEXT_SEGMENT);
                 }
                 finally {
                     sharedMgr.checkpointReadUnlock();
    @@ -117,8 +119,19 @@ public void testCorrectClosingFileDescriptors() throws Exception {
     
             stopGrid();
     
    +        return archiveWalDir;
    +    }
    +
    +    /**
    +     * Check correct closing file descriptors.
    +     *
    +     * @throws Exception if test failed.
    +     */
    +    @Test
    +    public void testCorrectClosingFileDescriptors() throws Exception {
    +
             // Iterate by all archived WAL segments.
    -        createWalIterator(archiveWalDir).forEach(x -> {
    +        createWalIterator(createWalFiles()).forEach(x -> {
             });
     
             assertTrue("At least one WAL file must be opened!", CountedFileIO.getCountOpenedWalFiles() > 0);
    @@ -126,6 +139,63 @@ public void testCorrectClosingFileDescriptors() throws Exception {
             assertTrue("All WAL files must be closed at least once!", CountedFileIO.getCountOpenedWalFiles() <= CountedFileIO.getCountClosedWalFiles());
         }
     
    +    /**
    +     * Checks that strict bounds validation rejects ranges not covered by the available WAL segments.
    +     *
    +     * @throws Exception if test failed.
    +     */
    +    @Test
    +    public void testStrictBounds() throws Exception {
    +        String dir = createWalFiles();
    +
    +        FileWALPointer lowBound = null, highBound = null;
    +
    +        for (IgniteBiTuple<WALPointer, WALRecord> p : createWalIterator(dir, null, null, false)) {
    +            if (lowBound == null)
    +                lowBound = (FileWALPointer) p.get1();
    +
    +            highBound = (FileWALPointer) p.get1();
    +        }
    +
    +        assertNotNull(lowBound);
    +
    +        assertNotNull(highBound);
    +
    +        createWalIterator(dir, lowBound, highBound, true);
    +
    +        final FileWALPointer lBound = lowBound;
    +        final FileWALPointer hBound = highBound;
    +
    +        //noinspection ThrowableNotThrown
    +        GridTestUtils.assertThrows(log, () -> {
    +            createWalIterator(dir, new FileWALPointer(lBound.index() - 1, 0, 0), hBound, true);
    +
    +            return 0;
    +        }, IgniteCheckedException.class, null);
    +
    +        //noinspection ThrowableNotThrown
    +        GridTestUtils.assertThrows(log, () -> {
    +            createWalIterator(dir, lBound, new FileWALPointer(hBound.index() + 1, 0, 0), true);
    +
    +            return 0;
    +        }, IgniteCheckedException.class, null);
    +
    +        List<FileDescriptor> walFiles = listWalFiles(dir);
    +
    +        assertNotNull(walFiles);
    +
    +        assertFalse(walFiles.isEmpty());
    +
    +        assertTrue(walFiles.get(new Random().nextInt(walFiles.size())).file().delete());
    +
    +        //noinspection ThrowableNotThrown
    +        GridTestUtils.assertThrows(log, () -> {
    +            createWalIterator(dir, lBound, hBound, true);
    +
    +            return 0;
    +        }, IgniteCheckedException.class, null);
    +    }
    +
         /**
          * Creates WALIterator associated with files inside walDir.
          *
    @@ -141,6 +211,41 @@ private WALIterator createWalIterator(String walDir) throws IgniteCheckedExcepti
             return new IgniteWalIteratorFactory(log).iterator(params.filesOrDirs(walDir));
         }
     
    +
    +    /**
    +     * @param walDir Wal directory.
    +     */
    +    private List<FileDescriptor> listWalFiles(String walDir) throws IgniteCheckedException {
    +        IteratorParametersBuilder params = new IteratorParametersBuilder();
    +
    +        params.ioFactory(new RandomAccessFileIOFactory());
    +
    +        return new IgniteWalIteratorFactory(log).resolveWalFiles(params.filesOrDirs(walDir));
    +    }
    +
    +    /**
    +     * @param walDir Wal directory.
    +     * @param lowBound Low bound.
    +     * @param highBound High bound.
    +     * @param strictCheck Strict check.
    +     */
    +    private WALIterator createWalIterator(String walDir, FileWALPointer lowBound, FileWALPointer highBound, boolean strictCheck)
    +                    throws IgniteCheckedException {
    +        IteratorParametersBuilder params = new IteratorParametersBuilder();
    +
    +        params.ioFactory(new RandomAccessFileIOFactory()).
    +            filesOrDirs(walDir).
    +            strictBoundsCheck(strictCheck);
    +
    +        if (lowBound != null)
    +            params.from(lowBound);
    +
    +        if (highBound != null)
    +            params.to(highBound);
    +
    +        return new IgniteWalIteratorFactory(log).iterator(params);
    +    }
    +
         /**
          * Evaluate path to directory with WAL archive.
          *
    @@ -160,11 +265,6 @@ private String getArchiveWalDirPath(Ignite ignite) throws IgniteCheckedException
          *
          */
         private static class CountedFileIOFactory extends RandomAccessFileIOFactory {
    -        /** {@inheritDoc} */
    -        @Override public FileIO create(File file) throws IOException {
    -            return create(file, StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
    -        }
    -
             /** {@inheritDoc} */
             @Override public FileIO create(File file, OpenOption... modes) throws IOException {
                 return new CountedFileIO(file, modes);
    @@ -213,4 +313,4 @@ public CountedFileIO(File file, OpenOption... modes) throws IOException {
              */
             public static int getCountClosedWalFiles() { return WAL_CLOSE_COUNTER.get(); }
         }
    -}
    \ No newline at end of file
    +}
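The new `testStrictBounds` test expects iteration to fail when the requested bounds are not fully covered by the archived segments, including after a randomly chosen WAL file is deleted. The validation idea can be sketched in isolation (a hypothetical, self-contained simplification — the real check lives inside `IgniteWalIteratorFactory`, and `WalBoundsCheck` below is not an Ignite class):

```java
/**
 * Sketch of strict WAL bounds validation: a [from, to] segment range is only
 * iterable if the on-disk segment indices are contiguous and fully cover it.
 */
final class WalBoundsCheck {
    /**
     * @param segments Sorted segment indices found on disk.
     * @param from Low bound segment index (inclusive).
     * @param to High bound segment index (inclusive).
     */
    static void check(long[] segments, long from, long to) {
        if (segments.length == 0)
            throw new IllegalStateException("No WAL segments found");

        // A gap (e.g. a deleted segment file) makes complete iteration impossible.
        for (int i = 1; i < segments.length; i++)
            if (segments[i] != segments[i - 1] + 1)
                throw new IllegalStateException("Gap after segment " + segments[i - 1]);

        if (from < segments[0])
            throw new IllegalStateException("Low bound " + from + " precedes first segment");

        if (to > segments[segments.length - 1])
            throw new IllegalStateException("High bound " + to + " exceeds last segment");
    }

    public static void main(String[] args) {
        long[] segs = {3, 4, 5, 6};

        check(segs, 3, 6); // Exact range: accepted.

        try {
            check(segs, 2, 6); // One segment before the archive: rejected.
            throw new AssertionError("Expected rejection");
        }
        catch (IllegalStateException expected) {
            System.out.println("Rejected: " + expected.getMessage());
        }
    }
}
```

This mirrors the three failing cases the test drives: a low bound before the first segment, a high bound past the last one, and a hole created by deleting a random file.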
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/CacheScanQueryFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/CacheScanQueryFailoverTest.java
    index 0633138f0c245..f5f13e4c161fe 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/CacheScanQueryFailoverTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/CacheScanQueryFailoverTest.java
    @@ -32,6 +32,9 @@
     import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.LOCAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -39,6 +42,7 @@
     /**
      * ScanQuery failover test. Tests scenario where user supplied closures throw unhandled errors.
      */
    +@RunWith(JUnit4.class)
     public class CacheScanQueryFailoverTest extends GridCommonAbstractTest {
         /** */
         private static final String LOCAL_CACHE_NAME = "local";
    @@ -85,6 +89,7 @@ public class CacheScanQueryFailoverTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testScanQueryWithFailedClosures() throws Exception {
             Ignite srv = startGrids(4);
             Ignite client = startGrid("client");
    @@ -103,6 +108,7 @@ public void testScanQueryWithFailedClosures() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testScanQueryOverLocalCacheWithFailedClosures() throws Exception {
             Ignite srv = startGrids(4);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryTransformerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryTransformerSelfTest.java
    index 53acd22acecf1..54916bf4c29e1 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryTransformerSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryTransformerSelfTest.java
    @@ -40,24 +40,21 @@
     import org.apache.ignite.lang.IgniteCallable;
     import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.resources.IgniteInstanceResource;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Test for scan query with transformer.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheQueryTransformerSelfTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER));
             cfg.setMarshaller(null);
     
             return cfg;
    @@ -80,6 +77,7 @@ public class GridCacheQueryTransformerSelfTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testGetKeys() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -111,6 +109,7 @@ public void testGetKeys() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testGetKeysFiltered() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -148,6 +147,7 @@ public void testGetKeysFiltered() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testGetObjectField() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -179,6 +179,7 @@ public void testGetObjectField() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testGetObjectFieldPartitioned() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -219,6 +220,7 @@ public void testGetObjectFieldPartitioned() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testGetObjectFieldFiltered() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -256,6 +258,7 @@ public void testGetObjectFieldFiltered() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testKeepBinary() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -289,6 +292,7 @@ public void testKeepBinary() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testKeepBinaryFiltered() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -328,6 +332,7 @@ public void testKeepBinaryFiltered() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocal() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -369,6 +374,7 @@ public void testLocal() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalFiltered() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -416,6 +422,7 @@ public void testLocalFiltered() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalKeepBinary() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -457,6 +464,7 @@ public void testLocalKeepBinary() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalKeepBinaryFiltered() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -504,6 +512,7 @@ public void testLocalKeepBinaryFiltered() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testUnsupported() throws Exception {
             final IgniteCache cache = grid().createCache("test-cache");
     
    @@ -592,6 +601,7 @@ public void testUnsupported() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPageSize() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    @@ -629,6 +639,7 @@ public void testPageSize() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalInjection() throws Exception {
             IgniteCache cache = grid().createCache("test-cache");
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCircularQueueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCircularQueueTest.java
    index a07bd64d80581..e7449be7d17b7 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCircularQueueTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/GridCircularQueueTest.java
    @@ -20,13 +20,18 @@
     import java.util.ArrayDeque;
     import org.apache.ignite.internal.util.GridRandom;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      */
    +@RunWith(JUnit4.class)
     public class GridCircularQueueTest extends GridCommonAbstractTest {
         /**
          *
          */
    +    @Test
         public void testQueue() {
             GridCacheQueryManager.CircularQueue<Integer> q = new GridCacheQueryManager.CircularQueue<>(4);
     
    @@ -115,4 +120,4 @@ private void check(GridCacheQueryManager.CircularQueue q, ArrayDeque d) {
             for (Object o : d)
                 assertEquals(q.get(i++), o);
         }
    -}
    \ No newline at end of file
    +}
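GridCircularQueueTest above validates the queue against a plain `ArrayDeque`. For illustration, a minimal fixed-capacity ring buffer with the same oldest-first indexing can be sketched as follows (a simplified stand-in, not Ignite's `GridCacheQueryManager.CircularQueue`, whose add/remove semantics differ in detail):

```java
import java.util.ArrayDeque;

/** Simplified fixed-capacity ring buffer; the oldest entry is evicted when full. */
final class Ring<T> {
    private final Object[] buf;
    private int head; // Index of the oldest element.
    private int size; // Current number of elements.

    Ring(int cap) { buf = new Object[cap]; }

    /** Appends an element, overwriting the oldest one when the buffer is full. */
    void add(T item) {
        buf[(head + size) % buf.length] = item;

        if (size < buf.length)
            size++;
        else
            head = (head + 1) % buf.length; // Overwrote the oldest element.
    }

    /** Removes the {@code n} oldest elements, like the queue's batched remove. */
    void remove(int n) {
        assert n <= size;

        head = (head + n) % buf.length;
        size -= n;
    }

    /** @return The i-th oldest element. */
    @SuppressWarnings("unchecked")
    T get(int i) { return (T)buf[(head + i) % buf.length]; }

    int size() { return size; }

    public static void main(String[] args) {
        Ring<Integer> q = new Ring<>(4);
        ArrayDeque<Integer> ref = new ArrayDeque<>();

        // Mirror every operation against a reference deque, as the test does.
        for (int i = 0; i < 10; i++) {
            q.add(i);
            ref.add(i);

            if (ref.size() > 4)
                ref.poll();
        }

        int i = 0;
        for (int v : ref)
            assert q.get(i++) == v : "Mismatch at " + i;

        System.out.println("oldest=" + q.get(0) + ", size=" + q.size());
    }
}
```

Checking every step against a reference collection, as the test's `check()` helper does, is the standard way to exercise index arithmetic like the `(head + i) % capacity` mapping above.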
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IgniteCacheQueryCacheDestroySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IgniteCacheQueryCacheDestroySelfTest.java
    index d0d392b50b604..d97ad29f80fc0 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IgniteCacheQueryCacheDestroySelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IgniteCacheQueryCacheDestroySelfTest.java
    @@ -37,10 +37,14 @@
     import org.apache.ignite.lang.IgniteBiPredicate;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * The test for the destruction of the cache during the execution of the query
      */
    +@RunWith(JUnit4.class)
     public class IgniteCacheQueryCacheDestroySelfTest extends GridCommonAbstractTest {
         /** */
         private static final String CACHE_NAME = "cache";
    @@ -55,6 +59,7 @@ public class IgniteCacheQueryCacheDestroySelfTest extends GridCommonAbstractTest
         /**
          * The main test code.
          */
    +    @Test
         public void testQueue() throws Throwable {
             startGridsMultiThreaded(GRID_CNT);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQuerySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQuerySelfTest.java
    index 5f2e2edaccaf0..33dd0c5c6f8d9 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQuerySelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQuerySelfTest.java
    @@ -26,12 +26,10 @@
     import java.util.TreeMap;
     import java.util.concurrent.Callable;
     import javax.cache.Cache;
    -import junit.framework.TestCase;
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.IgniteSystemProperties;
     import org.apache.ignite.IgniteTransactions;
    -import org.apache.ignite.Ignition;
     import org.apache.ignite.binary.BinaryObject;
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.query.QueryCursor;
    @@ -42,41 +40,32 @@
     import org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException;
     import org.apache.ignite.spi.IgniteSpiAdapter;
     import org.apache.ignite.spi.IgniteSpiException;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.spi.indexing.IndexingQueryFilter;
     import org.apache.ignite.spi.indexing.IndexingSpi;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.apache.ignite.transactions.TransactionState;
     import org.jetbrains.annotations.NotNull;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Indexing Spi query only test
      */
    -public class IndexingSpiQuerySelfTest extends TestCase {
    -    public static final String CACHE_NAME = "test-cache";
    +@RunWith(JUnit4.class)
    +public class IndexingSpiQuerySelfTest extends GridCommonAbstractTest {
    +    private IndexingSpi indexingSpi;
     
         /** {@inheritDoc} */
    -    @Override public void tearDown() throws Exception {
    -        Ignition.stopAll(true);
    -    }
    -
    -    /**
    -     * @return Configuration.
    -     */
    -    protected IgniteConfiguration configuration() {
    -        IgniteConfiguration cfg = new IgniteConfiguration();
    +    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    +        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(ipFinder);
    -
    -        cfg.setDiscoverySpi(disco);
    +        cfg.setIndexingSpi(indexingSpi);
     
             return cfg;
         }
    @@ -86,17 +75,23 @@ protected  CacheConfiguration cacheConfiguration(String cacheName) {
             return new CacheConfiguration<>(cacheName);
         }
     
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        super.afterTest();
    +
    +        stopAllGrids();
    +    }
    +
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testSimpleIndexingSpi() throws Exception {
    -        IgniteConfiguration cfg = configuration();
    -
    -        cfg.setIndexingSpi(new MyIndexingSpi());
    +        indexingSpi = new MyIndexingSpi();
     
    -        Ignite ignite = Ignition.start(cfg);
    +        Ignite ignite = startGrid(0);
     
    -        CacheConfiguration ccfg = cacheConfiguration(CACHE_NAME);
    +        CacheConfiguration ccfg = cacheConfiguration(DEFAULT_CACHE_NAME);
     
             IgniteCache cache = ignite.createCache(ccfg);
     
    @@ -112,14 +107,13 @@ public void testSimpleIndexingSpi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testIndexingSpiWithDisabledQueryProcessor() throws Exception {
    -        IgniteConfiguration cfg = configuration();
    +        indexingSpi = new MyIndexingSpi();
     
    -        cfg.setIndexingSpi(new MyIndexingSpi());
    +        Ignite ignite = startGrid(0);
     
    -        Ignite ignite = Ignition.start(cfg);
    -
    -        CacheConfiguration ccfg = cacheConfiguration(CACHE_NAME);
    +        CacheConfiguration ccfg = cacheConfiguration(DEFAULT_CACHE_NAME);
     
             IgniteCache cache = ignite.createCache(ccfg);
     
    @@ -135,14 +129,13 @@ public void testIndexingSpiWithDisabledQueryProcessor() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBinaryIndexingSpi() throws Exception {
    -        IgniteConfiguration cfg = configuration();
    -
    -        cfg.setIndexingSpi(new MyBinaryIndexingSpi());
    +        indexingSpi = new MyBinaryIndexingSpi();
     
    -        Ignite ignite = Ignition.start(cfg);
    +        Ignite ignite = startGrid(0);
     
    -        CacheConfiguration ccfg = cacheConfiguration(CACHE_NAME);
    +        CacheConfiguration ccfg = cacheConfiguration(DEFAULT_CACHE_NAME);
     
             IgniteCache cache = ignite.createCache(ccfg);
     
    @@ -165,46 +158,48 @@ public void testBinaryIndexingSpi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonBinaryIndexingSpi() throws Exception {
             System.setProperty(IgniteSystemProperties.IGNITE_UNWRAP_BINARY_FOR_INDEXING_SPI, "true");
     
    -        IgniteConfiguration cfg = configuration();
    -
    -        cfg.setIndexingSpi(new MyIndexingSpi());
    +        try {
    +            indexingSpi = new MyIndexingSpi();
     
    -        Ignite ignite = Ignition.start(cfg);
    +            Ignite ignite = startGrid(0);
     
    -        CacheConfiguration ccfg = cacheConfiguration(CACHE_NAME);
    +            CacheConfiguration ccfg = cacheConfiguration(DEFAULT_CACHE_NAME);
     
    -        IgniteCache cache = ignite.createCache(ccfg);
    +            IgniteCache cache = ignite.createCache(ccfg);
     
    -        for (int i = 0; i < 10; i++) {
    -            PersonKey key = new PersonKey(i);
    +            for (int i = 0; i < 10; i++) {
    +                PersonKey key = new PersonKey(i);
     
    -            cache.put(key, new Person("John Doe " + i));
    -        }
    +                cache.put(key, new Person("John Doe " + i));
    +            }
     
    -        QueryCursor<Cache.Entry<PersonKey, Person>> cursor = cache.query(
    -            new SpiQuery<PersonKey, Person>().setArgs(new PersonKey(2), new PersonKey(5)));
    +            QueryCursor<Cache.Entry<PersonKey, Person>> cursor = cache.query(
    +                new SpiQuery<PersonKey, Person>().setArgs(new PersonKey(2), new PersonKey(5)));
     
    -        for (Cache.Entry entry : cursor)
    -            System.out.println(entry);
    +            for (Cache.Entry entry : cursor)
    +                System.out.println(entry);
     
    -        cache.remove(new PersonKey(9));
    +            cache.remove(new PersonKey(9));
    +        }
    +        finally {
    +            System.clearProperty(IgniteSystemProperties.IGNITE_UNWRAP_BINARY_FOR_INDEXING_SPI);
    +        }
         }
     
         /**
          * @throws Exception If failed.
          */
    -    @SuppressWarnings("ThrowableResultOfMethodCallIgnored")
    +    @Test
         public void testIndexingSpiFailure() throws Exception {
    -        IgniteConfiguration cfg = configuration();
    -
    -        cfg.setIndexingSpi(new MyBrokenIndexingSpi());
    +        indexingSpi = new MyBrokenIndexingSpi();
     
    -        Ignite ignite = Ignition.start(cfg);
    +        Ignite ignite = startGrid(0);
     
    -        CacheConfiguration ccfg = cacheConfiguration(CACHE_NAME);
    +        CacheConfiguration ccfg = cacheConfiguration(DEFAULT_CACHE_NAME);
     
             ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
     
    @@ -369,4 +364,4 @@ static class Person implements Serializable {
                 this.name = name;
             }
         }
    -}
    \ No newline at end of file
    +}
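The `testNonBinaryIndexingSpi` rework above wraps the body in try/finally so IGNITE_UNWRAP_BINARY_FOR_INDEXING_SPI is always cleared, even when the test throws. The same pattern can be packaged as a reusable guard (a sketch with a hypothetical property name; restoring the previous value is slightly safer than unconditionally clearing):

```java
/** Sketch: temporarily set a system property and guarantee cleanup on close. */
final class SystemPropertyScope implements AutoCloseable {
    private final String key;
    private final String prev; // Previous value, restored on close (null = was unset).

    SystemPropertyScope(String key, String val) {
        this.key = key;
        prev = System.setProperty(key, val);
    }

    @Override public void close() {
        if (prev == null)
            System.clearProperty(key);
        else
            System.setProperty(key, prev); // Restore instead of blindly clearing.
    }

    public static void main(String[] args) {
        String key = "example.flag"; // Hypothetical property name.

        try (SystemPropertyScope ignored = new SystemPropertyScope(key, "true")) {
            System.out.println(key + "=" + System.getProperty(key));
        }

        System.out.println("after: " + System.getProperty(key));
    }
}
```

try-with-resources gives the same guarantee as the explicit try/finally in the patch, and also handles the case where the property already had a value before the test started.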
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQueryTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQueryTxSelfTest.java
    index e59deed2eb767..1cf3e90f7a48a 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQueryTxSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/IndexingSpiQueryTxSelfTest.java
    @@ -17,6 +17,11 @@
     
     package org.apache.ignite.internal.processors.cache.query;
     
    +import java.util.Collection;
    +import java.util.Iterator;
    +import java.util.concurrent.Callable;
    +import java.util.concurrent.atomic.AtomicInteger;
    +import javax.cache.Cache;
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.IgniteTransactions;
     import org.apache.ignite.cache.CacheAtomicityMode;
    @@ -36,62 +41,72 @@
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.apache.ignite.transactions.TransactionState;
     import org.jetbrains.annotations.Nullable;
    -
    -import java.util.Collection;
    -import java.util.Iterator;
    -import java.util.concurrent.Callable;
    -import java.util.concurrent.atomic.AtomicInteger;
    -import javax.cache.Cache;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Indexing Spi transactional query test
      */
    +@RunWith(JUnit4.class)
     public class IndexingSpiQueryTxSelfTest extends GridCacheAbstractSelfTest {
    -    /** */
    -    private static AtomicInteger cnt;
    -
         /** {@inheritDoc} */
         @Override protected int gridCount() {
             return 4;
         }
     
    -    /** {@inheritDoc} */
    -    @Override protected void beforeTestsStarted() throws Exception {
    -        cnt = new AtomicInteger();
    -
    -        super.beforeTestsStarted();
    -    }
    -
         /** {@inheritDoc} */
         @SuppressWarnings("unchecked")
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
             ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true);
     
    -        if (cnt.getAndIncrement() == 0)
    -            cfg.setClientMode(true);
    -        else {
    -            cfg.setIndexingSpi(new MyBrokenIndexingSpi());
    +        cfg.setClientMode("client".equals(igniteInstanceName));
    +        cfg.setIndexingSpi(new MyBrokenIndexingSpi());
     
    -            CacheConfiguration ccfg = cacheConfiguration(igniteInstanceName);
    -            ccfg.setName("test-cache");
    -            ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    +        CacheConfiguration ccfg = cacheConfiguration(igniteInstanceName);
    +        ccfg.setName(DEFAULT_CACHE_NAME);
    +        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
     
    -            ccfg.setIndexedTypes(Integer.class, Integer.class);
    +        ccfg.setIndexedTypes(Integer.class, Integer.class);
    +
    +        cfg.setCacheConfiguration(ccfg);
     
    -            cfg.setCacheConfiguration(ccfg);
    -        }
             return cfg;
         }
     
    +    /** */
    +    @Test
    +    public void testIndexingSpiWithTxClient() throws Exception {
    +        IgniteEx client = startGrid("client");
    +
    +        assertNotNull(client.cache(DEFAULT_CACHE_NAME));
    +
    +        doTestIndexingSpiWithTx(client, 0);
    +    }
    +
    +    /** */
    +    @Test
    +    public void testIndexingSpiWithTxLocal() throws Exception {
    +        IgniteEx ignite = (IgniteEx)primaryNode(0, DEFAULT_CACHE_NAME);
    +
    +        doTestIndexingSpiWithTx(ignite, 0);
    +    }
    +
    +    /** */
    +    @Test
    +    public void testIndexingSpiWithTxNotLocal() throws Exception {
    +        IgniteEx ignite = (IgniteEx)primaryNode(0, DEFAULT_CACHE_NAME);
    +
    +        doTestIndexingSpiWithTx(ignite, 1);
    +    }
    +
         /**
          * @throws Exception If failed.
          */
         @SuppressWarnings("ThrowableResultOfMethodCallIgnored")
    -    public void testIndexingSpiWithTx() throws Exception {
    -        IgniteEx ignite = grid(0);
    -
    -        final IgniteCache cache = ignite.cache("test-cache");
    +    private void doTestIndexingSpiWithTx(IgniteEx ignite, int key) throws Exception {
    +        final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
     
             final IgniteTransactions txs = ignite.transactions();
     
    @@ -104,7 +119,7 @@ public void testIndexingSpiWithTx() throws Exception {
                             Transaction tx;
     
                             try (Transaction tx0 = tx = txs.txStart(concurrency, isolation)) {
    -                            cache.put(1, 1);
    +                            cache.put(key, key);
     
                                 tx0.commit();
                             }
    @@ -114,6 +129,8 @@ public void testIndexingSpiWithTx() throws Exception {
                             return null;
                         }
                     }, IgniteTxHeuristicCheckedException.class);
    +
    +                checkFutures();
                 }
             }
         }
    @@ -135,7 +152,7 @@ private static class MyBrokenIndexingSpi extends IgniteSpiAdapter implements Ind
             /** {@inheritDoc} */
         @Override public Iterator<Cache.Entry<?, ?>> query(@Nullable String cacheName, Collection<Object> params,
             @Nullable IndexingQueryFilter filters) throws IgniteSpiException {
    -           return null;
    +            return null;
             }
     
             /** {@inheritDoc} */
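The first hunk above replaces an order-dependent rule (the first node to call `getConfiguration` became the client) with a name-based rule, so every node now also gets the broken indexing SPI and the cache configuration. A minimal standalone sketch of that change, with illustrative names rather than the Ignite test API:

```java
// Illustrative sketch only (not Ignite's API): the refactor derives client
// mode from the instance name instead of a start-order counter, so the test
// no longer depends on which node happens to start first.
public class ClientModeSelection {
    /** Old, order-dependent rule: the first-started node became the client. */
    static boolean clientByStartOrder(int startIdx) {
        return startIdx == 0;
    }

    /** New, order-independent rule: the node named "client" is the client. */
    static boolean clientByName(String igniteInstanceName) {
        return "client".equals(igniteInstanceName);
    }

    public static void main(String[] args) {
        boolean ok = clientByName("client")
            && !clientByName("server-0")
            && clientByStartOrder(0); // old rule picked whichever node started first
        System.out.println(ok ? "OK" : "FAIL");
    }
}
```

With the name-based rule, `startGrid("client")` in `testIndexingSpiWithTxClient` can be called in any order relative to the server nodes.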
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/ScanQueryOffheapExpiryPolicySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/ScanQueryOffheapExpiryPolicySelfTest.java
    index a3b1d4169dbea..83d82966e0aaf 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/ScanQueryOffheapExpiryPolicySelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/ScanQueryOffheapExpiryPolicySelfTest.java
    @@ -29,6 +29,9 @@
     import javax.cache.expiry.CreatedExpiryPolicy;
     import javax.cache.expiry.Duration;
     import java.util.concurrent.TimeUnit;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CachePeekMode.OFFHEAP;
     import static org.apache.ignite.cache.CachePeekMode.ONHEAP;
    @@ -36,6 +39,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class ScanQueryOffheapExpiryPolicySelfTest extends GridCommonAbstractTest {
     
         /** Nodes count. */
    @@ -74,6 +78,7 @@ public class ScanQueryOffheapExpiryPolicySelfTest extends GridCommonAbstractTest
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEntriesMovedFromOnHeap() throws Exception {
             Ignite ignite0 = grid(0);
             Ignite ignite1 = grid(1);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchAckTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchAckTest.java
    index 3ee6a20e04827..78b621ed6966a 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchAckTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchAckTest.java
    @@ -36,14 +36,15 @@
     import org.apache.ignite.lang.IgniteRunnable;
     import org.apache.ignite.plugin.extensions.communication.Message;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    @@ -52,10 +53,8 @@
     /**
      * Continuous queries tests.
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousBatchAckTest extends GridCommonAbstractTest implements Serializable {
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         protected static final String CLIENT = "_client";
     
    @@ -86,12 +85,6 @@ else if (igniteInstanceName.endsWith(SERVER2))
             else
                 cfg.setCommunicationSpi(new FailedTcpCommunicationSpi(false, false));
     
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(disco);
    -
             return cfg;
         }
     
    @@ -117,6 +110,7 @@ else if (igniteInstanceName.endsWith(SERVER2))
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartition() throws Exception {
             checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 1, ATOMIC, false));
         }
    @@ -124,6 +118,7 @@ public void testPartition() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionWithFilter() throws Exception {
             filterOn.set(true);
     
    @@ -133,6 +128,7 @@ public void testPartitionWithFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionNoBackups() throws Exception {
             checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 0, ATOMIC, false));
         }
    @@ -140,6 +136,7 @@ public void testPartitionNoBackups() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionTx() throws Exception {
             checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL, false));
         }
    @@ -147,6 +144,7 @@ public void testPartitionTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionTxWithFilter() throws Exception {
             filterOn.set(true);
     
    @@ -156,6 +154,7 @@ public void testPartitionTxWithFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionTxNoBackup() throws Exception {
             checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL, false));
         }
    @@ -163,6 +162,7 @@ public void testPartitionTxNoBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionTxNoBackupWithFilter() throws Exception {
             filterOn.set(true);
     
    @@ -172,6 +172,7 @@ public void testPartitionTxNoBackupWithFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicated() throws Exception {
             checkBackupAcknowledgeMessage(cacheConfiguration(REPLICATED, 1, ATOMIC, false));
         }
    @@ -179,6 +180,7 @@ public void testReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicatedTx() throws Exception {
             checkBackupAcknowledgeMessage(cacheConfiguration(REPLICATED, 1, TRANSACTIONAL, false));
         }
    @@ -186,12 +188,69 @@ public void testReplicatedTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicatedTxWithFilter() throws Exception {
             filterOn.set(true);
     
             checkBackupAcknowledgeMessage(cacheConfiguration(REPLICATED, 1, TRANSACTIONAL, true));
         }
     
    +    // MVCC tests.
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testPartitionMvccTx() throws Exception {
    +        checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL_SNAPSHOT, false));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testPartitionMvccTxWithFilter() throws Exception {
    +        filterOn.set(true);
    +
    +        checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL_SNAPSHOT, true));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testPartitionMvccTxNoBackup() throws Exception {
    +        checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL_SNAPSHOT, false));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testPartitionMvccTxNoBackupWithFilter() throws Exception {
    +        filterOn.set(true);
    +
    +        checkBackupAcknowledgeMessage(cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL_SNAPSHOT, true));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testReplicatedMvccTx() throws Exception {
    +        checkBackupAcknowledgeMessage(cacheConfiguration(REPLICATED, 1, TRANSACTIONAL_SNAPSHOT, false));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testReplicatedMvccTxWithFilter() throws Exception {
    +        filterOn.set(true);
    +
    +        checkBackupAcknowledgeMessage(cacheConfiguration(REPLICATED, 1, TRANSACTIONAL_SNAPSHOT, true));
    +    }
    +
         /**
          * @param ccfg Cache configuration.
          * @throws Exception If failed.
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchForceServerModeAckTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchForceServerModeAckTest.java
    index 35a1fe49beeae..289d5fd0beb65 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchForceServerModeAckTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousBatchForceServerModeAckTest.java
    @@ -19,17 +19,11 @@
     
     import java.io.Serializable;
     import org.apache.ignite.configuration.IgniteConfiguration;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     
     /**
      * Continuous queries tests.
      */
     public class CacheContinuousBatchForceServerModeAckTest extends CacheContinuousBatchAckTest implements Serializable {
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** {@inheritDoc} */
         @SuppressWarnings("unchecked")
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    @@ -41,40 +35,13 @@ public class CacheContinuousBatchForceServerModeAckTest extends CacheContinuousB
                 FailedTcpCommunicationSpi spi = new FailedTcpCommunicationSpi(true, false);
     
                 cfg.setCommunicationSpi(spi);
    -
    -            TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -            disco.setForceServerMode(true);
    -
    -            disco.setIpFinder(IP_FINDER);
    -
    -            cfg.setDiscoverySpi(disco);
             }
    -        else if (igniteInstanceName.endsWith(SERVER2)) {
    +        else if (igniteInstanceName.endsWith(SERVER2))
                 cfg.setCommunicationSpi(new FailedTcpCommunicationSpi(false, true));
     
    -            TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -            disco.setIpFinder(IP_FINDER);
    -
    -            cfg.setDiscoverySpi(disco);
    -        }
    -        else {
    +        else
                 cfg.setCommunicationSpi(new FailedTcpCommunicationSpi(false, false));
     
    -            TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -            disco.setIpFinder(IP_FINDER);
    -
    -            cfg.setDiscoverySpi(disco);
    -        }
    -
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(disco);
    -
             return cfg;
         }
     }
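The hunk above deletes four identical `TcpDiscoverySpi` setup blocks, one per branch plus a trailing copy, leaving only the genuine per-branch difference (the communication SPI) inside the conditional while discovery falls back to shared defaults. A minimal sketch of the same de-duplication, with made-up names:

```java
// Illustrative sketch only (names are hypothetical): only the per-branch
// difference stays in the conditional; the setup that was copy-pasted into
// every branch is applied once, from shared defaults.
public class DedupBranches {
    static String configure(String instanceName) {
        String commSpi;

        if (instanceName.endsWith("_server1"))
            commSpi = "fail-on-first";
        else if (instanceName.endsWith("_server2"))
            commSpi = "fail-on-second";
        else
            commSpi = "normal";

        // Discovery setup previously repeated in every branch now comes
        // from the shared defaults, applied exactly once.
        return commSpi + "+default-discovery";
    }

    public static void main(String[] args) {
        boolean ok = configure("grid_server2").equals("fail-on-second+default-discovery")
            && configure("grid_client").equals("normal+default-discovery");
        System.out.println(ok ? "OK" : "FAIL");
    }
}
```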
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFailoverMvccTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFailoverMvccTxSelfTest.java
    new file mode 100644
    index 0000000000000..0aa9cd1912707
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFailoverMvccTxSelfTest.java
    @@ -0,0 +1,55 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.ignite.internal.processors.cache.query.continuous;
    +
    +import org.apache.ignite.cache.CacheAtomicityMode;
    +import org.apache.ignite.cache.CacheMode;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
    +
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
    +import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    +
    +/**
    + *
    + */
    +@RunWith(JUnit4.class)
     +public class CacheContinuousQueryAsyncFailoverMvccTxSelfTest extends CacheContinuousQueryFailoverAbstractSelfTest {
    +    /** {@inheritDoc} */
    +    @Override protected CacheMode cacheMode() {
    +        return PARTITIONED;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected CacheAtomicityMode atomicityMode() {
    +        return TRANSACTIONAL_SNAPSHOT;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected boolean asyncCallback() {
    +        return true;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311")
    +    @Test
    +    @Override public void testBackupQueueEvict() throws Exception {
    +        // No-op.
    +    }
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFailoverTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFailoverTxSelfTest.java
    index 0417022cdbeb7..8f0bd0e0d4633 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFailoverTxSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFailoverTxSelfTest.java
    @@ -41,9 +41,4 @@ public class CacheContinuousQueryAsyncFailoverTxSelfTest extends CacheContinuous
         @Override protected boolean asyncCallback() {
             return true;
         }
    -
    -    /** {@inheritDoc} */
    -    @Override public void testNoEventLossOnTopologyChange() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-4015");
    -    }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFilterListenerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFilterListenerTest.java
    index e66e7f1fe9a01..5ebf3be525118 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFilterListenerTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFilterListenerTest.java
    @@ -47,16 +47,17 @@
     import org.apache.ignite.lang.IgniteAsyncCallback;
     import org.apache.ignite.lang.IgniteBiInClosure;
     import org.apache.ignite.resources.IgniteInstanceResource;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.SECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    @@ -66,10 +67,8 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryAsyncFilterListenerTest extends GridCommonAbstractTest {
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 5;
     
    @@ -83,8 +82,6 @@ public class CacheContinuousQueryAsyncFilterListenerTest extends GridCommonAbstr
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
    -
             cfg.setClientMode(client);
     
             MemoryEventStorageSpi storeSpi = new MemoryEventStorageSpi();
    @@ -113,6 +110,7 @@ public class CacheContinuousQueryAsyncFilterListenerTest extends GridCommonAbstr
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerTx() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), true, true, false);
         }
    @@ -120,6 +118,7 @@ public void testNonDeadLockInListenerTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerTxJCacheApi() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), true, true, true);
         }
    @@ -127,6 +126,23 @@ public void testNonDeadLockInListenerTxJCacheApi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testNonDeadLockInListenerMvccTx() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), true, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInListenerMvccTxJCacheApi() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), true, true, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testNonDeadLockInListenerAtomic() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, ATOMIC), true, true, false);
         }
    @@ -134,6 +150,7 @@ public void testNonDeadLockInListenerAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerAtomicJCacheApi() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, ATOMIC), true, true, true);
         }
    @@ -141,6 +158,7 @@ public void testNonDeadLockInListenerAtomicJCacheApi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerReplicatedAtomic() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, ATOMIC), true, true, false);
         }
    @@ -148,6 +166,7 @@ public void testNonDeadLockInListenerReplicatedAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerReplicatedAtomicJCacheApi() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, ATOMIC), true, true, true);
         }
    @@ -155,6 +174,7 @@ public void testNonDeadLockInListenerReplicatedAtomicJCacheApi() throws Exceptio
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerReplicatedAtomicOffHeapValues() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, ATOMIC), true, true, false);
         }
    @@ -162,6 +182,7 @@ public void testNonDeadLockInListenerReplicatedAtomicOffHeapValues() throws Exce
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerAtomicWithoutBackup() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 0, ATOMIC), true, true, false);
         }
    @@ -169,6 +190,7 @@ public void testNonDeadLockInListenerAtomicWithoutBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerAtomicWithoutBackupJCacheApi() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 0, ATOMIC), true, true, true);
         }
    @@ -176,6 +198,7 @@ public void testNonDeadLockInListenerAtomicWithoutBackupJCacheApi() throws Excep
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListener() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), true, true, false);
         }
    @@ -183,6 +206,7 @@ public void testNonDeadLockInListener() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerReplicated() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL), true, true, false);
         }
    @@ -190,10 +214,35 @@ public void testNonDeadLockInListenerReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInListenerReplicatedJCacheApi() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL), true, true, true);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInListenerMvcc() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), true, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInListenerReplicatedMvcc() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL_SNAPSHOT), true, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInListenerReplicatedJCacheApiMvcc() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL_SNAPSHOT), true, true, true);
    +    }
    +
         ///
         /// ASYNC FILTER AND LISTENER. TEST FILTER.
         ///
    @@ -201,6 +250,7 @@ public void testNonDeadLockInListenerReplicatedJCacheApi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterTx() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), true, true, false);
         }
    @@ -208,6 +258,7 @@ public void testNonDeadLockInFilterTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterTxJCacheApi() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), true, true, true);
         }
    @@ -215,6 +266,23 @@ public void testNonDeadLockInFilterTxJCacheApi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testNonDeadLockInFilterMvccTx() throws Exception {
    +        testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), true, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInFilterMvccTxJCacheApi() throws Exception {
    +        testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), true, true, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testNonDeadLockInFilterAtomic() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, ATOMIC), true, true, false);
         }
    @@ -222,6 +290,7 @@ public void testNonDeadLockInFilterAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterAtomicJCacheApi() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, ATOMIC), true, true, true);
         }
    @@ -229,6 +298,7 @@ public void testNonDeadLockInFilterAtomicJCacheApi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterReplicatedAtomic() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(REPLICATED, 2, ATOMIC), true, true, false);
         }
    @@ -236,6 +306,7 @@ public void testNonDeadLockInFilterReplicatedAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterAtomicWithoutBackup() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 0, ATOMIC), true, true, false);
         }
    @@ -243,6 +314,7 @@ public void testNonDeadLockInFilterAtomicWithoutBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilter() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), true, true, false);
         }
    @@ -250,6 +322,7 @@ public void testNonDeadLockInFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterReplicated() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL), true, true, false);
         }
    @@ -257,10 +330,35 @@ public void testNonDeadLockInFilterReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterReplicatedJCacheApi() throws Exception {
             testNonDeadLockInFilter(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL), true, true, false);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInFilterMvcc() throws Exception {
    +        testNonDeadLockInFilter(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), true, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInFilterReplicatedMvcc() throws Exception {
    +        testNonDeadLockInFilter(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL_SNAPSHOT), true, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInFilterReplicatedJCacheApiMvcc() throws Exception {
    +        testNonDeadLockInFilter(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL_SNAPSHOT), true, true, false);
    +    }
    +
         ///
         /// ASYNC LISTENER. TEST LISTENER.
         ///
    @@ -268,6 +366,7 @@ public void testNonDeadLockInFilterReplicatedJCacheApi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterTxSyncFilter() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), false, true, false);
         }
    @@ -275,6 +374,15 @@ public void testNonDeadLockInFilterTxSyncFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testNonDeadLockInFilterMvccTxSyncFilter() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), false, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testNonDeadLockInFilterAtomicSyncFilter() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, ATOMIC), false, true, false);
         }
    @@ -282,6 +390,7 @@ public void testNonDeadLockInFilterAtomicSyncFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterReplicatedAtomicSyncFilter() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, ATOMIC), false, true, false);
         }
    @@ -289,6 +398,7 @@ public void testNonDeadLockInFilterReplicatedAtomicSyncFilter() throws Exception
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterAtomicWithoutBackupSyncFilter() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 0, ATOMIC), false, true, false);
         }
    @@ -296,6 +406,7 @@ public void testNonDeadLockInFilterAtomicWithoutBackupSyncFilter() throws Except
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterSyncFilter() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL), false, true, false);
         }
    @@ -303,10 +414,27 @@ public void testNonDeadLockInFilterSyncFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonDeadLockInFilterReplicatedSyncFilter() throws Exception {
             testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL), false, true, false);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInFilterSyncFilterMvcc() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT), false, true, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testNonDeadLockInFilterReplicatedSyncFilterMvcc() throws Exception {
    +        testNonDeadLockInListener(cacheConfiguration(REPLICATED, 2, TRANSACTIONAL_SNAPSHOT), false, true, false);
    +    }
    +
         /**
          * @param ccfg Cache configuration.
          * @param asyncFltr Async filter.
    @@ -371,18 +499,41 @@ else if (val.equals(newVal)) {
                                 else if (!val.equals(val0))
                                     return;
     
    -                            Transaction tx = null;
     +                            // In MVCC mode we need to wait until the updated value becomes visible. Usually this
     +                            // takes several ms: the MVCC coordinator needs some time to register the tx as finished.
    +                            if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT) {
    +                                Object v = null;
     
    -                            try {
    -                                if (cache0.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL)
    -                                    tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ);
    +                                while (v == null && !Thread.currentThread().isInterrupted()) {
    +                                    v = cache0.get(key);
    +
    +                                    if (v == null)
    +                                        doSleep(50);
    +                                }
    +                            }
     
    +                            try {
                                     assertEquals(val, val0);
     
    -                                cache0.put(key, newVal);
    +                                if (atomicityMode(cache0) != ATOMIC) {
    +                                    boolean committed = false;
    +
    +                                    while (!committed && !Thread.currentThread().isInterrupted()) {
    +                                        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
    +                                            cache0.put(key, newVal);
    +
    +                                            tx.commit();
     
    -                                if (tx != null)
    -                                    tx.commit();
     +                                            committed = true;
    +                                        }
    +                                        catch (Exception ex) {
    +                                            assertTrue(ex.toString(),
    +                                                ex.getMessage() != null && ex.getMessage().contains("Cannot serialize transaction due to write conflict"));
    +                                        }
    +                                    }
    +                                }
    +                                else
    +                                    cache0.put(key, newVal);
     
                                     latch.countDown();
                                 }
    @@ -391,10 +542,6 @@ else if (!val.equals(val0))
     
                                     throw new IgniteException(exp);
                                 }
    -                            finally {
    -                                if (tx != null)
    -                                    tx.close();
    -                            }
                             }
                         };
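
     The visibility wait added above polls `cache0.get(key)` every 50 ms until the committed value is seen. The class below is a framework-free sketch of that idea (names like `VisibilityWait` and `waitForValue` are invented for illustration, and unlike the test loop it adds an explicit timeout so a stalled coordinator cannot hang the thread forever):

     ```java
     import java.util.function.Supplier;

     /**
      * Sketch of the MVCC visibility wait used in the filter above: poll a read
      * until the committed value becomes visible. Hypothetical names; the real
      * test polls IgniteCache.get(key) with doSleep(50) and no timeout.
      */
     public class VisibilityWait {
         /** Polls {@code read} until it yields non-null, or until {@code timeoutMs} elapses. */
         static <T> T waitForValue(Supplier<T> read, long intervalMs, long timeoutMs)
             throws InterruptedException {
             long deadline = System.currentTimeMillis() + timeoutMs;

             while (!Thread.currentThread().isInterrupted()) {
                 T v = read.get();

                 if (v != null)
                     return v;          // update became visible

                 if (System.currentTimeMillis() >= deadline)
                     return null;       // timed out: let the caller fail the test

                 Thread.sleep(intervalMs);
             }

             return null;               // interrupted
         }
     }
     ```

     The timeout is the one deliberate departure from the diff: in a test, bounding the wait turns a hung coordinator into a clear assertion failure instead of a stuck thread.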
     
    @@ -521,19 +668,29 @@ else if (val.equals(newVal)) {
                                 else if (!val.equals(val0))
                                     return;
     
    -                            Transaction tx = null;
    -
                                 try {
    -                                if (cache0.getConfiguration(CacheConfiguration.class)
    -                                    .getAtomicityMode() == TRANSACTIONAL)
    -                                    tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ);
    -
                                     assertEquals(val, val0);
     
    -                                cache0.put(key, newVal);
    +                                if (atomicityMode(cache0) != ATOMIC) {
    +                                    boolean committed = false;
    +
    +                                    while (!committed && !Thread.currentThread().isInterrupted()) {
    +                                        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
    +                                            cache0.put(key, newVal);
     
    -                                if (tx != null)
    -                                    tx.commit();
    +                                            tx.commit();
    +
     +                                            committed = true;
    +                                        }
    +                                        catch (Exception ex) {
    +                                            assertTrue(ex.toString(),
    +                                                ex.getMessage() != null && ex.getMessage().contains("Cannot serialize transaction due to write conflict"));
    +                                        }
    +                                    }
    +                                }
    +                                else
    +                                    cache0.put(key, newVal);
     
                                     latch.countDown();
                                 }
    @@ -542,10 +699,6 @@ else if (!val.equals(val0))
     
                                     throw new IgniteException(exp);
                                 }
    -                            finally {
    -                                if (tx != null)
    -                                    tx.close();
    -                            }
                             }
                         };
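
     Both listeners above replace the single `put` with a loop that re-runs the pessimistic transaction until it commits, retrying only on the MVCC write-conflict error. A framework-free sketch of that retry policy (class and method names invented for illustration; the real code uses `ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)`):

     ```java
     /**
      * Sketch of the retry loop applied to TRANSACTIONAL_SNAPSHOT caches above:
      * re-run the transaction body until it commits, treating only the MVCC
      * write-conflict error as retryable and rethrowing anything else.
      */
     public class WriteConflictRetry {
         /** Message Ignite reports when an MVCC transaction loses a write conflict. */
         static final String CONFLICT_MSG = "Cannot serialize transaction due to write conflict";

         /** Runs {@code txBody} until it succeeds; returns the number of attempts. */
         static int runWithRetries(Runnable txBody) {
             int attempts = 0;

             while (true) {
                 attempts++;

                 try {
                     txBody.run(); // stands in for txStart(...); put(...); tx.commit();

                     return attempts;
                 }
                 catch (RuntimeException ex) {
                     String msg = ex.getMessage();

                     // Only the MVCC write conflict is retryable; anything else fails fast,
                     // mirroring the assertTrue(...) check in the tests.
                     if (msg == null || !msg.contains(CONFLICT_MSG))
                         throw ex;
                 }
             }
         }
     }
     ```

     Matching on the exception message is what the diff itself does; production code would prefer a typed exception, but for these tests the message check keeps the retryable case narrow.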
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryConcurrentPartitionUpdateTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryConcurrentPartitionUpdateTest.java
    index 6c74f7901cd8e..0f52b43630afa 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryConcurrentPartitionUpdateTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryConcurrentPartitionUpdateTest.java
    @@ -23,10 +23,12 @@
     import java.util.concurrent.ThreadLocalRandom;
     import java.util.concurrent.atomic.AtomicBoolean;
     import java.util.concurrent.atomic.AtomicInteger;
    +import javax.cache.CacheException;
     import javax.cache.event.CacheEntryEvent;
     import javax.cache.event.CacheEntryUpdatedListener;
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
    +import org.apache.ignite.IgniteTransactions;
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.affinity.Affinity;
     import org.apache.ignite.cache.query.ContinuousQuery;
    @@ -37,23 +39,26 @@
     import org.apache.ignite.internal.util.lang.GridAbsPredicate;
     import org.apache.ignite.internal.util.typedef.T2;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.apache.ignite.transactions.Transaction;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryConcurrentPartitionUpdateTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private boolean client;
     
    @@ -61,8 +66,6 @@ public class CacheContinuousQueryConcurrentPartitionUpdateTest extends GridCommo
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi) cfg.getDiscoverySpi()).setIpFinder(ipFinder);
    -
             cfg.setClientMode(client);
     
             return cfg;
    @@ -78,6 +81,7 @@ public class CacheContinuousQueryConcurrentPartitionUpdateTest extends GridCommo
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentUpdatePartitionAtomic() throws Exception {
             concurrentUpdatePartition(ATOMIC, false);
         }
    @@ -85,6 +89,7 @@ public void testConcurrentUpdatePartitionAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentUpdatePartitionTx() throws Exception {
             concurrentUpdatePartition(TRANSACTIONAL, false);
         }
    @@ -92,6 +97,15 @@ public void testConcurrentUpdatePartitionTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testConcurrentUpdatePartitionMvccTx() throws Exception {
    +        concurrentUpdatePartition(TRANSACTIONAL_SNAPSHOT, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testConcurrentUpdatePartitionAtomicCacheGroup() throws Exception {
             concurrentUpdatePartition(ATOMIC, true);
         }
    @@ -99,10 +113,19 @@ public void testConcurrentUpdatePartitionAtomicCacheGroup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentUpdatePartitionTxCacheGroup() throws Exception {
             concurrentUpdatePartition(TRANSACTIONAL, true);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testConcurrentUpdatePartitionMvccTxCacheGroup() throws Exception {
    +        concurrentUpdatePartition(TRANSACTIONAL_SNAPSHOT, true);
    +    }
    +
         /**
          * @param atomicityMode Cache atomicity mode.
          * @param cacheGrp {@code True} if test cache multiple caches in the same group.
    @@ -179,8 +202,30 @@ private void concurrentUpdatePartition(CacheAtomicityMode atomicityMode, boolean
                         ThreadLocalRandom rnd = ThreadLocalRandom.current();
     
                         for (int i = 0; i < UPDATES; i++) {
    -                        for (int c = 0; c < srvCaches.size(); c++)
    -                            srvCaches.get(c).put(keys.get(rnd.nextInt(KEYS)), i);
    +                        for (int c = 0; c < srvCaches.size(); c++) {
    +                            if (atomicityMode == ATOMIC)
    +                                srvCaches.get(c).put(keys.get(rnd.nextInt(KEYS)), i);
     +                            else {
    +                                IgniteCache cache0 = srvCaches.get(c);
    +                                IgniteTransactions txs = cache0.unwrap(Ignite.class).transactions();
    +
    +                                boolean committed = false;
    +
    +                                while (!committed) {
    +                                    try (Transaction tx = txs.txStart(PESSIMISTIC, REPEATABLE_READ)) {
    +                                        cache0.put(keys.get(rnd.nextInt(KEYS)), i);
    +
    +                                        tx.commit();
    +
    +                                        committed = true;
    +                                    }
    +                                    catch (CacheException e) {
    +                                        assertTrue(e.getMessage() != null &&
    +                                            e.getMessage().contains("Cannot serialize transaction due to write conflict"));
    +                                    }
    +                                }
    +                            }
    +                        }
                         }
     
                         return null;
    @@ -232,6 +277,7 @@ private T2 startListener(IgniteCache
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentUpdatesAndQueryStartAtomic() throws Exception {
             concurrentUpdatesAndQueryStart(ATOMIC, false);
         }
    @@ -239,6 +285,7 @@ public void testConcurrentUpdatesAndQueryStartAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentUpdatesAndQueryStartTx() throws Exception {
             concurrentUpdatesAndQueryStart(TRANSACTIONAL, false);
         }
    @@ -246,17 +293,36 @@ public void testConcurrentUpdatesAndQueryStartTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    -    public void _testConcurrentUpdatesAndQueryStartAtomicCacheGroup() throws Exception {
    +    @Test
    +    public void testConcurrentUpdatesAndQueryStartMvccTx() throws Exception {
    +        concurrentUpdatesAndQueryStart(TRANSACTIONAL_SNAPSHOT, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testConcurrentUpdatesAndQueryStartAtomicCacheGroup() throws Exception {
             concurrentUpdatesAndQueryStart(ATOMIC, true);
         }
     
         /**
          * @throws Exception If failed.
          */
    -    public void _testConcurrentUpdatesAndQueryStartTxCacheGroup() throws Exception {
    +    @Test
    +    public void testConcurrentUpdatesAndQueryStartTxCacheGroup() throws Exception {
             concurrentUpdatesAndQueryStart(TRANSACTIONAL, true);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10755")
    +    @Test
    +    public void testConcurrentUpdatesAndQueryStartMvccTxCacheGroup() throws Exception {
    +        concurrentUpdatesAndQueryStart(TRANSACTIONAL_SNAPSHOT, true);
    +    }
    +
         /**
          * @param atomicityMode Cache atomicity mode.
          * @param cacheGrp {@code True} if test cache multiple caches in the same group.
    @@ -273,24 +339,24 @@ private void concurrentUpdatesAndQueryStart(CacheAtomicityMode atomicityMode, bo
     
             if (cacheGrp) {
                 for (int i = 0; i < 3; i++) {
    -                CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME + i);
     +                CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME + i);
     
                     ccfg.setGroupName("testGroup");
                     ccfg.setWriteSynchronizationMode(FULL_SYNC);
                     ccfg.setAtomicityMode(atomicityMode);
     
    -                IgniteCache cache = client.createCache(ccfg);
     +                IgniteCache<Object, Object> cache = client.createCache(ccfg);
     
                     caches.add(cache.getName());
                 }
             }
             else {
    -            CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
     +            CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME);
     
                 ccfg.setWriteSynchronizationMode(FULL_SYNC);
                 ccfg.setAtomicityMode(atomicityMode);
     
    -            IgniteCache cache = client.createCache(ccfg);
     +            IgniteCache<Object, Object> cache = client.createCache(ccfg);
     
                 caches.add(cache.getName());
             }
    @@ -333,8 +399,29 @@ private void concurrentUpdatesAndQueryStart(CacheAtomicityMode atomicityMode, bo
                             ThreadLocalRandom rnd = ThreadLocalRandom.current();
     
                             while (!stop.get()) {
    -                            for (IgniteCache srvCache : srvCaches)
    -                                srvCache.put(keys.get(rnd.nextInt(KEYS)), rnd.nextInt(100) - 200);
     +                            for (IgniteCache srvCache : srvCaches) {
    +                                if (atomicityMode == ATOMIC)
    +                                    srvCache.put(keys.get(rnd.nextInt(KEYS)), rnd.nextInt(100) - 200);
     +                                else {
    +                                    IgniteTransactions txs = srvCache.unwrap(Ignite.class).transactions();
    +
    +                                    boolean committed = false;
    +
    +                                    while (!committed) {
    +                                        try (Transaction tx = txs.txStart(PESSIMISTIC, REPEATABLE_READ)) {
    +                                            srvCache.put(keys.get(rnd.nextInt(KEYS)), rnd.nextInt(100) - 200);
    +
    +                                            tx.commit();
    +
    +                                            committed = true;
    +                                        }
    +                                        catch (CacheException e) {
    +                                            assertTrue(e.getMessage() != null &&
    +                                                e.getMessage().contains("Cannot serialize transaction due to write conflict"));
    +                                        }
    +                                    }
    +                                }
    +                            }
                             }
     
                             return null;
    @@ -361,8 +448,29 @@ private void concurrentUpdatesAndQueryStart(CacheAtomicityMode atomicityMode, bo
                         ThreadLocalRandom rnd = ThreadLocalRandom.current();
     
                         for (int i = 0; i < UPDATES; i++) {
    -                        for (IgniteCache srvCache : srvCaches)
    -                            srvCache.put(keys.get(rnd.nextInt(KEYS)), i);
    +                        for (IgniteCache srvCache : srvCaches) {
    +                            if (atomicityMode == ATOMIC)
    +                                srvCache.put(keys.get(rnd.nextInt(KEYS)), i);
     +                            else {
    +                                IgniteTransactions txs = srvCache.unwrap(Ignite.class).transactions();
    +
    +                                boolean committed = false;
    +
    +                                while (!committed) {
    +                                    try (Transaction tx = txs.txStart(PESSIMISTIC, REPEATABLE_READ)) {
    +                                        srvCache.put(keys.get(rnd.nextInt(KEYS)), i);
    +
    +                                        tx.commit();
    +
    +                                        committed = true;
    +                                    }
    +                                    catch (CacheException e) {
    +                                        assertTrue(e.getMessage() != null &&
    +                                            e.getMessage().contains("Cannot serialize transaction due to write conflict"));
    +                                    }
    +                                }
    +                            }
    +                        }
                         }
     
                         return null;
    @@ -378,7 +486,7 @@ private void concurrentUpdatesAndQueryStart(CacheAtomicityMode atomicityMode, bo
     
                             return evtCnt.get() >= THREADS * UPDATES;
                         }
    -                }, 5000);
    +                }, 30000);
     
                     assertEquals(THREADS * UPDATES, qry.get1().get());
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryCounterAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryCounterAbstractTest.java
    index 4ec8151b85d92..9ec0b233ced19 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryCounterAbstractTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryCounterAbstractTest.java
    @@ -45,14 +45,14 @@
     import org.apache.ignite.internal.util.typedef.T2;
     import org.apache.ignite.lang.IgniteBiInClosure;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.NotNull;
     import org.jetbrains.annotations.Nullable;
     import java.util.concurrent.ConcurrentHashMap;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    @@ -63,14 +63,12 @@
     /**
      * Continuous queries counter tests.
      */
    +@RunWith(JUnit4.class)
     public abstract class CacheContinuousQueryCounterAbstractTest extends GridCommonAbstractTest
         implements Serializable {
         /** */
         protected static final String CACHE_NAME = "test_cache";
     
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** Latch timeout. */
         protected static final long LATCH_TIMEOUT = 5000;
     
    @@ -87,12 +85,6 @@ public abstract class CacheContinuousQueryCounterAbstractTest extends GridCommon
             if (igniteInstanceName.equals(NO_CACHE_IGNITE_INSTANCE_NAME))
                 cfg.setClientMode(true);
     
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(disco);
    -
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             return cfg;
    @@ -177,6 +169,7 @@ protected CacheAtomicityMode atomicityMode() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAllEntries() throws Exception {
             IgniteCache cache = grid(0).cache(CACHE_NAME);
     
    @@ -234,7 +227,7 @@ public void testAllEntries() throws Exception {
                 assertEquals(2, vals.size());
                 assertEquals(2, (int)vals.get(0).get1());
                 assertEquals(1L, (long)vals.get(0).get2());
    -            assertNull(vals.get(1).get1());
    +            assertEquals(2, (int)vals.get(1).get1());
     
                 vals = map.get(3);
     
    @@ -248,6 +241,7 @@ public void testAllEntries() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTwoQueryListener() throws Exception {
             if (cacheMode() == LOCAL)
                 return;
    @@ -353,8 +347,8 @@ private void checkEvents(Map>> evnts, long iter)
             assertEquals(1L, (long)val.get(0).get1());
     
             // Check remove 1
    +        assertEquals(1L, (long)val.get(1).get1());
             assertEquals(iter * 2 + 2, (long)val.get(1).get2());
    -        assertNull(val.get(1).get1());
     
             val = evnts.get(2);
     
    @@ -365,8 +359,8 @@ private void checkEvents(Map>> evnts, long iter)
             assertEquals(2L, (long)val.get(0).get1());
     
             // Check remove 2
    +        assertEquals(2L, (long)val.get(1).get1());
             assertEquals(iter * 2 + 2, (long)val.get(1).get2());
    -        assertNull(val.get(1).get1());
     
             val = evnts.get(3);
     
    @@ -377,13 +371,14 @@ private void checkEvents(Map>> evnts, long iter)
             assertEquals(3L, (long)val.get(0).get1());
     
             // Check remove 3
    +        assertEquals(3L, (long)val.get(1).get1());
             assertEquals(iter * 2 + 2, (long)val.get(1).get2());
    -        assertNull(val.get(1).get1());
         }
     
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRestartQuery() throws Exception {
             IgniteCache cache = grid(0).cache(CACHE_NAME);
     
    @@ -442,6 +437,7 @@ public void testRestartQuery() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEntriesByFilter() throws Exception {
             IgniteCache cache = grid(0).cache(CACHE_NAME);
     
    @@ -509,7 +505,7 @@ public void testEntriesByFilter() throws Exception {
                 assertEquals((int)vals.get(1).get1(), 4);
                 assertEquals((long)vals.get(1).get1(), (long)vals.get(1).get2());
     
    -            assertNull(vals.get(2).get1());
    +            assertEquals(4, (long)vals.get(2).get1());
                 assertEquals(5, (long)vals.get(2).get2());
     
                 assertEquals((int)vals.get(3).get1(), 10);
    @@ -526,7 +522,7 @@ public void testEntriesByFilter() throws Exception {
                 assertEquals((int)vals.get(1).get1(), 4);
                 assertEquals((long)vals.get(1).get1(), (long)vals.get(1).get2());
     
    -            assertNull(vals.get(2).get1());
    +            assertEquals(4, (long)vals.get(2).get1());
                 assertEquals(5, (long)vals.get(2).get2());
     
                 assertEquals((int)vals.get(3).get1(), 40);
    @@ -537,6 +533,7 @@ public void testEntriesByFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLoadCache() throws Exception {
             IgniteCache cache = grid(0).cache(CACHE_NAME);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEventBufferTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEventBufferTest.java
    index 382f16692175c..c86ac65a81f81 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEventBufferTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryEventBufferTest.java
    @@ -28,15 +28,20 @@
     import javax.cache.event.EventType;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
     @SuppressWarnings("unchecked")
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryEventBufferTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBuffer1() throws Exception {
             testBuffer(1);
         }
    @@ -44,6 +49,7 @@ public void testBuffer1() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBuffer2() throws Exception {
             for (int i = 0; i < 10; i++) {
                 log.info("Iteration: " + i);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryExecuteInPrimaryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryExecuteInPrimaryTest.java
    index ffc7fd9ebd5c2..79aedd198bd9d 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryExecuteInPrimaryTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryExecuteInPrimaryTest.java
    @@ -46,10 +46,14 @@
     import java.util.concurrent.CountDownLatch;
     import java.util.concurrent.atomic.AtomicBoolean;
     import java.util.concurrent.atomic.AtomicInteger;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.LOCAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
    @@ -58,6 +62,7 @@
     /**
      * Continuous queries execute in primary node tests.
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryExecuteInPrimaryTest extends GridCommonAbstractTest
         implements Serializable {
     
    @@ -99,6 +104,7 @@ protected CacheConfiguration cacheConfiguration(
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(ATOMIC, LOCAL);
     
    @@ -109,6 +115,7 @@ public void testLocalCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicatedCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(ATOMIC, REPLICATED);
     
    @@ -119,6 +126,7 @@ public void testReplicatedCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionedCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(ATOMIC, PARTITIONED);
     
    @@ -129,6 +137,7 @@ public void testPartitionedCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTransactionLocalCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL, LOCAL);
     
    @@ -139,6 +148,7 @@ public void testTransactionLocalCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTransactionReplicatedCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL, REPLICATED);
     
    @@ -149,6 +159,7 @@ public void testTransactionReplicatedCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTransactionPartitionedCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL, PARTITIONED);
     
    @@ -156,6 +167,41 @@ public void testTransactionPartitionedCache() throws Exception {
             doTestWithEventsEntries(ccfg);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTransactionLocalCache() throws Exception {
    +        fail("https://issues.apache.org/jira/browse/IGNITE-9530");
    +
    +        CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL_SNAPSHOT, LOCAL);
    +
    +        doTestWithoutEventsEntries(ccfg);
    +        doTestWithEventsEntries(ccfg);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTransactionReplicatedCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL_SNAPSHOT, REPLICATED);
    +
    +        doTestWithoutEventsEntries(ccfg);
    +        doTestWithEventsEntries(ccfg);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTransactionPartitionedCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(TRANSACTIONAL_SNAPSHOT, PARTITIONED);
    +
    +        doTestWithoutEventsEntries(ccfg);
    +        doTestWithEventsEntries(ccfg);
    +    }
    +
         /**
          * @throws Exception If failed.
          */
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFactoryFilterRandomOperationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFactoryFilterRandomOperationTest.java
    index 993ef790a1062..7fadb3b3eb3b4 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFactoryFilterRandomOperationTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFactoryFilterRandomOperationTest.java
    @@ -46,6 +46,7 @@
     import javax.cache.event.CacheEntryUpdatedListener;
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
    +import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
     import org.apache.ignite.cache.affinity.Affinity;
     import org.apache.ignite.cache.query.CacheQueryEntryEvent;
    @@ -57,15 +58,19 @@
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.jetbrains.annotations.NotNull;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static java.util.concurrent.TimeUnit.SECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
    -import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
     import static org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFactoryFilterRandomOperationTest.NonSerializableFilter.isAccepted;
     import static org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTest.ContinuousDeploy.CLIENT;
     import static org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTest.ContinuousDeploy.SERVER;
    +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
     import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;
     import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
     import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;
    @@ -73,6 +78,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryFactoryFilterRandomOperationTest extends CacheContinuousQueryRandomOperationsTest {
         /** */
         private static final int NODES = 5;
    @@ -89,6 +95,7 @@ public class CacheContinuousQueryFactoryFilterRandomOperationTest extends CacheC
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInternalQuery() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 1,
    @@ -274,8 +281,16 @@ private void randomUpdate(
     
             Transaction tx = null;
     
    -        if (cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL && rnd.nextBoolean())
    -            tx = ignite.transactions().txStart(txRandomConcurrency(rnd), txRandomIsolation(rnd));
    +        CacheAtomicityMode atomicityMode = atomicityMode(cache);
    +
    +        boolean mvccEnabled = atomicityMode == TRANSACTIONAL_SNAPSHOT;
    +
    +        if (atomicityMode != ATOMIC && rnd.nextBoolean()) {
    +            TransactionConcurrency concurrency = mvccEnabled ? PESSIMISTIC : txRandomConcurrency(rnd);
    +            TransactionIsolation isolation = mvccEnabled ? REPEATABLE_READ : txRandomIsolation(rnd);
    +
    +            tx = ignite.transactions().txStart(concurrency, isolation);
    +        }
     
             try {
                 // log.info("Random operation [key=" + key + ", op=" + op + ']');
    @@ -287,7 +302,7 @@ private void randomUpdate(
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr);
    +                    updatePartitionCounter(cache, key, partCntr, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, oldVal);
     
    @@ -302,7 +317,7 @@ private void randomUpdate(
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr);
    +                    updatePartitionCounter(cache, key, partCntr, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, oldVal);
     
    @@ -312,12 +327,13 @@ private void randomUpdate(
                     }
     
                     case 2: {
    -                    cache.remove(key);
    +                    boolean res = cache.remove(key);
     
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr);
    +                    // We don't update the partition counter if nothing was removed when MVCC is enabled.
    +                    updatePartitionCounter(cache, key, partCntr, mvccEnabled && !res);
     
                         waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, oldVal, oldVal);
     
    @@ -327,12 +343,13 @@ private void randomUpdate(
                     }
     
                     case 3: {
    -                    cache.getAndRemove(key);
    +                    Object res = cache.getAndRemove(key);
     
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr);
    +                    // We don't update the partition counter if nothing was removed when MVCC is enabled.
    +                    updatePartitionCounter(cache, key, partCntr, mvccEnabled && res == null);
     
                         waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, oldVal, oldVal);
     
    @@ -347,7 +364,7 @@ private void randomUpdate(
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr);
    +                    updatePartitionCounter(cache, key, partCntr, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, oldVal);
     
    @@ -357,12 +374,15 @@ private void randomUpdate(
                     }
     
                     case 5: {
    -                    cache.invoke(key, new EntrySetValueProcessor(null, rnd.nextBoolean()));
    +                    EntrySetValueProcessor proc = new EntrySetValueProcessor(null, rnd.nextBoolean());
    +
    +                    cache.invoke(key, proc);
     
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr);
    +                    // We don't update the partition counter if nothing was removed when MVCC is enabled.
    +                    updatePartitionCounter(cache, key, partCntr, mvccEnabled && proc.getOldVal() == null);
     
                         waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, oldVal, oldVal);
     
    @@ -378,7 +398,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal == null) {
    -                        updatePartitionCounter(cache, key, partCntr);
    +                        updatePartitionCounter(cache, key, partCntr, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, null);
     
    @@ -397,7 +417,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal == null) {
    -                        updatePartitionCounter(cache, key, partCntr);
    +                        updatePartitionCounter(cache, key, partCntr, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, null);
     
    @@ -416,7 +436,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal != null) {
    -                        updatePartitionCounter(cache, key, partCntr);
    +                        updatePartitionCounter(cache, key, partCntr, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, oldVal);
     
    @@ -435,7 +455,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal != null) {
    -                        updatePartitionCounter(cache, key, partCntr);
    +                        updatePartitionCounter(cache, key, partCntr, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, oldVal);
     
    @@ -459,7 +479,7 @@ private void randomUpdate(
                                 if (tx != null)
                                     tx.commit();
     
    -                            updatePartitionCounter(cache, key, partCntr);
    +                            updatePartitionCounter(cache, key, partCntr, false);
     
                                 waitAndCheckEvent(evtsQueues, partCntr, affinity(cache), key, newVal, oldVal);
     
    @@ -523,8 +543,10 @@ private TransactionConcurrency txRandomConcurrency(Random rnd) {
          * @param cache Cache.
          * @param key Key
          * @param cntrs Partition counters.
    +     * @param skipUpdCntr Skip update counter flag.
          */
    -    private void updatePartitionCounter(IgniteCache cache, Object key, Map cntrs) {
    +    private void updatePartitionCounter(IgniteCache cache, Object key, Map cntrs,
    +        boolean skipUpdCntr) {
             Affinity aff = cache.unwrap(Ignite.class).affinity(cache.getName());
     
             int part = aff.partition(key);
    @@ -534,7 +556,10 @@ private void updatePartitionCounter(IgniteCache cache, Object ke
             if (partCntr == null)
                 partCntr = 0L;
     
    -        cntrs.put(part, ++partCntr);
    +        if (!skipUpdCntr)
    +            partCntr++;
    +
    +        cntrs.put(part, partCntr);
         }
     
         /**
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverAbstractSelfTest.java
    index 91c702e1ec3a7..40adcef26fb56 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverAbstractSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverAbstractSelfTest.java
    @@ -74,8 +74,8 @@
     import org.apache.ignite.internal.IgniteKernal;
     import org.apache.ignite.internal.managers.communication.GridIoMessage;
     import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
    -import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology;
     import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.CachePartitionPartialCountersMap;
    +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology;
     import org.apache.ignite.internal.processors.continuous.GridContinuousHandler;
     import org.apache.ignite.internal.processors.continuous.GridContinuousMessage;
     import org.apache.ignite.internal.processors.continuous.GridContinuousProcessor;
    @@ -87,6 +87,7 @@
     import org.apache.ignite.internal.util.typedef.PAX;
     import org.apache.ignite.internal.util.typedef.T2;
     import org.apache.ignite.internal.util.typedef.T3;
    +import org.apache.ignite.internal.util.typedef.X;
     import org.apache.ignite.internal.util.typedef.internal.CU;
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.lang.IgniteAsyncCallback;
    @@ -98,12 +99,14 @@
     import org.apache.ignite.spi.IgniteSpiException;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
     import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.apache.ignite.transactions.TransactionRollbackException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static java.util.concurrent.TimeUnit.MINUTES;
    @@ -111,14 +114,14 @@
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     import static org.apache.ignite.testframework.GridTestUtils.waitForCondition;
    +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public abstract class CacheContinuousQueryFailoverAbstractSelfTest extends GridCommonAbstractTest {
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int BACKUP_ACK_THRESHOLD = 100;
     
    @@ -136,7 +139,6 @@ public abstract class CacheContinuousQueryFailoverAbstractSelfTest extends GridC
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
             ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true);
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
     
             TestCommunicationSpi commSpi = new TestCommunicationSpi();
     
    @@ -211,6 +213,7 @@ protected NearCacheConfiguration nearCacheConfiguration() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFirstFilteredEvent() throws Exception {
             this.backups = 2;
     
    @@ -255,6 +258,7 @@ public void testFirstFilteredEvent() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRebalanceVersion() throws Exception {
             Ignite ignite0 = startGrid(0);
     
    @@ -307,6 +311,7 @@ public void testRebalanceVersion() throws Exception {
          *
          * @throws Exception If fail.
          */
    +    @Test
         public void testRebalance() throws Exception {
             for (int iter = 0; iter < 5; iter++) {
                 log.info("Iteration: " + iter);
    @@ -422,6 +427,7 @@ private void waitRebalanceFinished(Ignite ignite, long topVer, int minorVer) thr
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testOneBackup() throws Exception {
             checkBackupQueue(1, false);
         }
    @@ -429,6 +435,7 @@ public void testOneBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testOneBackupClientUpdate() throws Exception {
             checkBackupQueue(1, true);
         }
    @@ -436,6 +443,7 @@ public void testOneBackupClientUpdate() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testUpdatePartitionCounter() throws Exception {
             this.backups = 2;
     
    @@ -536,6 +544,7 @@ private void checkPartCounter(int nodes, int killedNodeIdx, Map u
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStartStopQuery() throws Exception {
             this.backups = 1;
     
    @@ -640,6 +649,7 @@ public void testStartStopQuery() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLeftPrimaryAndBackupNodes() throws Exception {
             if (cacheMode() == REPLICATED)
                 return;
    @@ -760,6 +770,8 @@ public void testLeftPrimaryAndBackupNodes() throws Exception {
                 }
             }, 5000L);
     
    +        awaitPartitionMapExchange();
    +
             for (; keyIter < keys.size(); keyIter++) {
                 int key = keys.get(keyIter);
     
    @@ -784,7 +796,18 @@ public void testLeftPrimaryAndBackupNodes() throws Exception {
                         expEvts.add(new T3<>((Object)key, (Object)val, (Object)key));
                 }
     
    -            clnCache.put(key, val);
    +            boolean updated = false;
    +
    +            while (!updated) {
    +                try {
    +                    clnCache.put(key, val);
    +
    +                    updated = true;
    +                }
    +                catch (Exception ignore) {
    +                    assertEquals(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, atomicityMode());
    +                }
    +            }
     
                 filtered = !filtered;
             }
    @@ -797,6 +820,7 @@ public void testLeftPrimaryAndBackupNodes() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteFilter() throws Exception {
             this.backups = 2;
     
    @@ -909,6 +933,7 @@ public void testRemoteFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testThreeBackups() throws Exception {
             if (cacheMode() == REPLICATED)
                 return;
    @@ -977,8 +1002,8 @@ private void checkBackupQueue(int backups, boolean updateFromClient) throws Exce
                     T2 t = updates.get(key);
     
                     if (updateFromClient) {
    -                    if (atomicityMode() == CacheAtomicityMode.TRANSACTIONAL) {
    -                        try (Transaction tx = qryClient.transactions().txStart()) {
    +                    if (atomicityMode() != CacheAtomicityMode.ATOMIC) {
    +                        try (Transaction tx = qryClient.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                                 qryClientCache.put(key, key);
     
                                 tx.commit();
    @@ -993,8 +1018,8 @@ private void checkBackupQueue(int backups, boolean updateFromClient) throws Exce
                             qryClientCache.put(key, key);
                     }
                     else {
    -                    if (atomicityMode() == CacheAtomicityMode.TRANSACTIONAL) {
    -                        try (Transaction tx = ignite.transactions().txStart()) {
    +                    if (atomicityMode() != CacheAtomicityMode.ATOMIC) {
    +                        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                                 cache.put(key, key);
     
                                 tx.commit();
    @@ -1335,6 +1360,7 @@ private List testKeys(IgniteCache cache, int parts) thr
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBackupQueueCleanupClientQuery() throws Exception {
             startGridsMultiThreaded(2);
     
    @@ -1408,6 +1434,7 @@ public void testBackupQueueCleanupClientQuery() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBackupQueueEvict() throws Exception {
             startGridsMultiThreaded(2);
     
    @@ -1480,6 +1507,7 @@ public void testBackupQueueEvict() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBackupQueueCleanupServerQuery() throws Exception {
             Ignite qryClient = startGridsMultiThreaded(2);
     
    @@ -1556,6 +1584,7 @@ private Collection backupQueue(Ignite ignite) {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFailoverStartStopBackup() throws Exception {
             failoverStartStopFilter(2);
         }
    @@ -1563,6 +1592,7 @@ public void testFailoverStartStopBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStartStop() throws Exception {
             this.backups = 2;
     
    @@ -1755,18 +1785,30 @@ private void failoverStartStopFilter(int backups) throws Exception {
                     if (filtered)
                         val = -val;
     
    -                if (processorPut && prevVal != null) {
    -                    qryClnCache.invoke(key, new CacheEntryProcessor() {
    -                        @Override public Void process(MutableEntry entry,
    -                            Object... arguments) throws EntryProcessorException {
    -                            entry.setValue(arguments[0]);
    +                boolean updated = false;
     
    -                            return null;
    +                while (!updated) {
    +                    try {
    +                        if (processorPut && prevVal != null) {
    +                            qryClnCache.invoke(key, new CacheEntryProcessor() {
    +                                @Override public Void process(MutableEntry entry,
    +                                    Object... arguments) throws EntryProcessorException {
    +                                    entry.setValue(arguments[0]);
    +
    +                                    return null;
    +                                }
    +                            }, val);
                             }
    -                    }, val);
    +                        else
    +                            qryClnCache.put(key, val);
    +
    +                        updated = true;
    +                    }
    +                    catch (CacheException e) {
    +                        assertTrue(X.hasCause(e, TransactionRollbackException.class));
    +                        assertSame(atomicityMode(), CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
    +                    }
                     }
    -                else
    -                    qryClnCache.put(key, val);
     
                     processorPut = !processorPut;
     
    @@ -1870,6 +1912,7 @@ private void failoverStartStopFilter(int backups) throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiThreadedFailover() throws Exception {
             this.backups = 2;
     
    @@ -2020,7 +2063,20 @@ public void testMultiThreadedFailover() throws Exception {
     
                             Integer val = valCntr.incrementAndGet();
     
    -                        Integer prevVal = (Integer)qryClnCache.getAndPut(key, val);
    +                        Integer prevVal = null;
    +
    +                        boolean updated = false;
    +
    +                        while (!updated) {
    +                            try {
    +                                prevVal = (Integer)qryClnCache.getAndPut(key, val);
    +
    +                                updated = true;
    +                            }
    +                            catch (CacheException e) {
    +                                assertSame(atomicityMode(), CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
    +                            }
    +                        }
     
                             expEvts.get(threadId).add(new T3<>((Object)key, (Object)val, (Object)prevVal));
     
    @@ -2064,6 +2120,7 @@ public void testMultiThreadedFailover() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiThreaded() throws Exception {
             this.backups = 2;
     
    @@ -2114,7 +2171,19 @@ public void testMultiThreaded() throws Exception {
                     @Override public Object call() throws Exception {
                         Integer val0 = val.getAndIncrement();
     
    -                    cache.put(key, val0);
    +                    boolean updated = false;
    +
    +                    while (!updated) {
    +                        try {
    +                            cache.put(key, val0);
    +
    +                            updated = true;
    +                        }
    +                        catch (CacheException e) {
    +                            assertTrue(X.hasCause(e, TransactionRollbackException.class));
    +                            assertSame(atomicityMode(), CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
    +                        }
    +                    }
     
                         return null;
                     }
    @@ -2245,6 +2314,7 @@ private boolean checkEvents(boolean logAll,
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoEventLossOnTopologyChange() throws Exception {
             final int batchLoadSize = 2000;
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverMvccTxReplicatedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverMvccTxReplicatedSelfTest.java
    new file mode 100644
    index 0000000000000..2576d23f6839a
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverMvccTxReplicatedSelfTest.java
    @@ -0,0 +1,31 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.ignite.internal.processors.cache.query.continuous;
    +
    +import org.apache.ignite.cache.CacheMode;
    +
    +import static org.apache.ignite.cache.CacheMode.REPLICATED;
    +
    +/**
    + *
    + */
    +public class CacheContinuousQueryFailoverMvccTxReplicatedSelfTest extends CacheContinuousQueryFailoverMvccTxSelfTest {
    +    /** {@inheritDoc} */
    +    @Override protected CacheMode cacheMode() {
    +        return REPLICATED;
    +    }
    +}
    \ No newline at end of file
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverMvccTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverMvccTxSelfTest.java
    new file mode 100644
    index 0000000000000..4fab9c8dce195
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverMvccTxSelfTest.java
    @@ -0,0 +1,50 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.ignite.internal.processors.cache.query.continuous;
    +
    +import org.apache.ignite.cache.CacheAtomicityMode;
    +import org.apache.ignite.cache.CacheMode;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
    +
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
    +import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    +
    +/**
    + * Continuous query failover test for a partitioned cache with {@code TRANSACTIONAL_SNAPSHOT} (MVCC) atomicity.
    + */
    +@RunWith(JUnit4.class)
    +public class CacheContinuousQueryFailoverMvccTxSelfTest extends CacheContinuousQueryFailoverAbstractSelfTest {
    +    /** {@inheritDoc} */
    +    @Override protected CacheMode cacheMode() {
    +        return PARTITIONED;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected CacheAtomicityMode atomicityMode() {
    +        return TRANSACTIONAL_SNAPSHOT;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311")
    +    @Test
    +    @Override public void testBackupQueueEvict() throws Exception {
    +        // No-op.
    +    }
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverTxSelfTest.java
    index cd916e4217001..789a105e02b05 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverTxSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFailoverTxSelfTest.java
    @@ -36,9 +36,4 @@ public class CacheContinuousQueryFailoverTxSelfTest extends CacheContinuousQuery
         @Override protected CacheAtomicityMode atomicityMode() {
             return TRANSACTIONAL;
         }
    -
    -    /** {@inheritDoc} */
    -    @Override public void testNoEventLossOnTopologyChange() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-4015");
    -    }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryLostPartitionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryLostPartitionTest.java
    index bcbf1e0d33f05..9cfe4d96a54f7 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryLostPartitionTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryLostPartitionTest.java
    @@ -29,15 +29,16 @@
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.util.typedef.PA;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static javax.cache.configuration.FactoryBuilder.factoryOf;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheRebalanceMode.SYNC;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC;
    @@ -45,16 +46,17 @@
     /**
      * Test from https://issues.apache.org/jira/browse/IGNITE-2384.
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryLostPartitionTest extends GridCommonAbstractTest {
    -    /** */
    -    static public TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** Cache name. */
         public static final String CACHE_NAME = "test_cache";
     
         /** Cache name. */
         public static final String TX_CACHE_NAME = "tx_test_cache";
     
    +    /** Mvcc tx cache name. */
    +    public static final String MVCC_TX_CACHE_NAME = "mvcc_tx_test_cache";
    +
         /** {@inheritDoc} */
         @Override protected void beforeTest() throws Exception {
             super.beforeTest();
    @@ -76,6 +78,7 @@ public class CacheContinuousQueryLostPartitionTest extends GridCommonAbstractTes
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxEvent() throws Exception {
             testEvent(TX_CACHE_NAME, false);
         }
    @@ -83,6 +86,15 @@ public void testTxEvent() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testMvccTxEvent() throws Exception {
    +        testEvent(MVCC_TX_CACHE_NAME, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testAtomicEvent() throws Exception {
             testEvent(CACHE_NAME, false);
         }
    @@ -90,6 +102,7 @@ public void testAtomicEvent() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxClientEvent() throws Exception {
             testEvent(TX_CACHE_NAME, true);
         }
    @@ -97,6 +110,15 @@ public void testTxClientEvent() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testMvccTxClientEvent() throws Exception {
    +        testEvent(MVCC_TX_CACHE_NAME, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testAtomicClientEvent() throws Exception {
             testEvent(CACHE_NAME, true);
         }
    @@ -197,8 +219,10 @@ private CacheConfiguration cache(String cacheName) {
     
             if (cacheName.equals(CACHE_NAME))
                 cfg.setAtomicityMode(ATOMIC);
    -        else
    +        else if (cacheName.equals(TX_CACHE_NAME))
                 cfg.setAtomicityMode(TRANSACTIONAL);
    +        else
    +            cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
     
             cfg.setRebalanceMode(SYNC);
             cfg.setWriteSynchronizationMode(PRIMARY_SYNC);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationFromCallbackTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationFromCallbackTest.java
    index 0540b43207a6f..a56efde1a42e2 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationFromCallbackTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationFromCallbackTest.java
    @@ -52,14 +52,14 @@
     import org.apache.ignite.lang.IgniteAsyncCallback;
     import org.apache.ignite.resources.IgniteInstanceResource;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    @@ -71,6 +71,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryOperationFromCallbackTest extends GridCommonAbstractTest {
         /** */
         public static final int KEYS = 10;
    @@ -78,9 +79,6 @@ public class CacheContinuousQueryOperationFromCallbackTest extends GridCommonAbs
         /** */
         public static final int KEYS_FROM_CALLBACK = 20;
     
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 5;
     
    @@ -102,7 +100,6 @@ public class CacheContinuousQueryOperationFromCallbackTest extends GridCommonAbs
     
             cfg.setSystemThreadPoolSize(SYSTEM_POOL_SIZE);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             cfg.setClientMode(client);
    @@ -136,6 +133,47 @@ public class CacheContinuousQueryOperationFromCallbackTest extends GridCommonAbs
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testAtomicOneBackup() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, ATOMIC, FULL_SYNC);
    +
    +        doTest(ccfg, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testTxOneBackupFilter() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL, FULL_SYNC);
    +
    +        doTest(ccfg, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testTxOneBackupFilterPrimary() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL, PRIMARY_SYNC);
    +
    +        doTest(ccfg, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testTxOneBackup() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL, FULL_SYNC);
    +
    +        doTest(ccfg, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testAtomicTwoBackups() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, ATOMIC, FULL_SYNC);
     
    @@ -145,6 +183,7 @@ public void testAtomicTwoBackups() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxTwoBackupsFilter() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL, FULL_SYNC);
     
    @@ -154,6 +193,7 @@ public void testTxTwoBackupsFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxTwoBackupsFilterPrimary() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL, PRIMARY_SYNC);
     
    @@ -163,6 +203,7 @@ public void testTxTwoBackupsFilterPrimary() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxReplicatedFilter() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 0, TRANSACTIONAL, FULL_SYNC);
     
    @@ -172,6 +213,7 @@ public void testTxReplicatedFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxTwoBackup() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL, FULL_SYNC);
     
    @@ -181,6 +223,7 @@ public void testTxTwoBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxReplicated() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 2, TRANSACTIONAL, FULL_SYNC);
     
    @@ -190,6 +233,7 @@ public void testTxReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxReplicatedPrimary() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 2, TRANSACTIONAL, PRIMARY_SYNC);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationP2PTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationP2PTest.java
    index eb53fa3c5c122..52cad3114da25 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationP2PTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOperationP2PTest.java
    @@ -35,13 +35,14 @@
     import org.apache.ignite.cache.query.QueryCursor;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    @@ -49,10 +50,8 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryOperationP2PTest extends GridCommonAbstractTest {
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 5;
     
    @@ -63,8 +62,6 @@ public class CacheContinuousQueryOperationP2PTest extends GridCommonAbstractTest
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
    -
             cfg.setClientMode(client);
             cfg.setPeerClassLoadingEnabled(true);
     
    @@ -92,6 +89,7 @@ public class CacheContinuousQueryOperationP2PTest extends GridCommonAbstractTest
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -104,6 +102,7 @@ public void testAtomicClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomic() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -116,6 +115,7 @@ public void testAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicReplicated() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -128,6 +128,7 @@ public void testAtomicReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicReplicatedClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -140,6 +141,7 @@ public void testAtomicReplicatedClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTx() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -151,6 +153,7 @@ public void testTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -163,6 +166,7 @@ public void testTxClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxReplicated() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -175,6 +179,7 @@ public void testTxReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxReplicatedClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -184,6 +189,58 @@ public void testTxReplicatedClient() throws Exception {
             testContinuousQuery(ccfg, true);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTx() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT
    +        );
    +
    +        testContinuousQuery(ccfg, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxClient() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT
    +        );
    +
    +        testContinuousQuery(ccfg, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxReplicated() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT
    +        );
    +
    +        testContinuousQuery(ccfg, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxReplicatedClient() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT
    +        );
    +
    +        testContinuousQuery(ccfg, true);
    +    }
    +
         /**
          * @param ccfg Cache configuration.
          * @param isClient Client.
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java
    index 7442e24ce98fc..274b1c38fefeb 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java
    @@ -52,24 +52,28 @@
     import org.apache.ignite.lang.IgniteAsyncCallback;
     import org.apache.ignite.resources.IgniteInstanceResource;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC;
    +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryOrderingEventTest extends GridCommonAbstractTest {
         /** */
         public static final int LISTENER_CNT = 3;
    @@ -77,9 +81,6 @@ public class CacheContinuousQueryOrderingEventTest extends GridCommonAbstractTes
         /** */
         public static final int KEYS = 10;
     
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 5;
     
    @@ -96,7 +97,6 @@ public class CacheContinuousQueryOrderingEventTest extends GridCommonAbstractTes
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             cfg.setClientMode(client);
    @@ -130,6 +130,7 @@ public class CacheContinuousQueryOrderingEventTest extends GridCommonAbstractTes
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicOnheapTwoBackup() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, ATOMIC, PRIMARY_SYNC);
     
    @@ -139,6 +140,7 @@ public void testAtomicOnheapTwoBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnheapTwoBackup() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL, FULL_SYNC);
     
    @@ -148,6 +150,7 @@ public void testTxOnheapTwoBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnheapWithoutBackup() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL, PRIMARY_SYNC);
     
    @@ -157,17 +160,49 @@ public void testTxOnheapWithoutBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnheapWithoutBackupFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL, FULL_SYNC);
     
             doOrderingTest(ccfg, false);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxOnheapTwoBackup() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT, FULL_SYNC);
    +
    +        doOrderingTest(ccfg, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxOnheapWithoutBackup() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL_SNAPSHOT, PRIMARY_SYNC);
    +
    +        doOrderingTest(ccfg, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxOnheapWithoutBackupFullSync() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL_SNAPSHOT, FULL_SYNC);
    +
    +        doOrderingTest(ccfg, false);
    +    }
    +
         // ASYNC
     
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicOnheapTwoBackupAsync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, ATOMIC, PRIMARY_SYNC);
     
    @@ -177,6 +212,7 @@ public void testAtomicOnheapTwoBackupAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicOnheapTwoBackupAsyncFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, ATOMIC, FULL_SYNC);
     
    @@ -186,6 +222,7 @@ public void testAtomicOnheapTwoBackupAsyncFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicReplicatedAsync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 0, ATOMIC, PRIMARY_SYNC);
     
    @@ -195,6 +232,7 @@ public void testAtomicReplicatedAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicReplicatedAsyncFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED, 0, ATOMIC, FULL_SYNC);
     
    @@ -204,6 +242,7 @@ public void testAtomicReplicatedAsyncFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicOnheapWithoutBackupAsync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, ATOMIC, PRIMARY_SYNC);
     
    @@ -213,6 +252,7 @@ public void testAtomicOnheapWithoutBackupAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnheapTwoBackupAsync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL, PRIMARY_SYNC);
     
    @@ -222,6 +262,7 @@ public void testTxOnheapTwoBackupAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnheapAsync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL, PRIMARY_SYNC);
     
    @@ -231,12 +272,43 @@ public void testTxOnheapAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnheapAsyncFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL, FULL_SYNC);
     
             doOrderingTest(ccfg, true);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxOnheapTwoBackupAsync() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 2, TRANSACTIONAL_SNAPSHOT, PRIMARY_SYNC);
    +
    +        doOrderingTest(ccfg, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxOnheapAsync() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL_SNAPSHOT, PRIMARY_SYNC);
    +
    +        doOrderingTest(ccfg, true);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxOnheapAsyncFullSync() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 0, TRANSACTIONAL_SNAPSHOT, FULL_SYNC);
    +
    +        doOrderingTest(ccfg, true);
    +    }
    +
         /**
          * @param ccfg Cache configuration.
          * @param async Async filter.
    @@ -298,46 +370,59 @@ protected void doOrderingTest(
     
                             QueryTestKey key = new QueryTestKey(rnd.nextInt(KEYS));
     
    -                        boolean startTx = cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() ==
    -                            TRANSACTIONAL && rnd.nextBoolean();
    +                        boolean startTx = atomicityMode(cache) != ATOMIC && rnd.nextBoolean();
     
                             Transaction tx = null;
     
    -                        if (startTx)
    -                            tx = cache.unwrap(Ignite.class).transactions().txStart();
    -
    -                        try {
    -                            if ((cache.get(key) == null) || rnd.nextBoolean()) {
    -                                cache.invoke(key, new CacheEntryProcessor() {
    -                                    @Override public Object process(
    -                                        MutableEntry entry,
    -                                        Object... arguments)
    -                                        throws EntryProcessorException {
    -                                        if (entry.exists())
    -                                            entry.setValue(new QueryTestValue(entry.getValue().val1 + 1));
    -                                        else
    -                                            entry.setValue(new QueryTestValue(0));
    -
    -                                        return null;
    -                                    }
    -                                });
    -                            }
    -                            else {
    -                                QueryTestValue val;
    -                                QueryTestValue newVal;
    +                        boolean committed = false;
    +
    +                        while (!committed && !Thread.currentThread().isInterrupted()) {
    +                            try {
    +                                if (startTx)
    +                                    tx = cache.unwrap(Ignite.class).transactions().txStart(PESSIMISTIC, REPEATABLE_READ);
    +
    +                                if ((cache.get(key) == null) || rnd.nextBoolean()) {
    +                                    cache.invoke(key, new CacheEntryProcessor() {
    +                                        @Override public Object process(
    +                                            MutableEntry entry,
    +                                            Object... arguments)
    +                                            throws EntryProcessorException {
    +                                            if (entry.exists())
    +                                                entry.setValue(new QueryTestValue(entry.getValue().val1 + 1));
    +                                            else
    +                                                entry.setValue(new QueryTestValue(0));
    +
    +                                            return null;
    +                                        }
    +                                    });
    +                                }
    +                                else {
    +                                    QueryTestValue val;
    +                                    QueryTestValue newVal;
     
    -                                do {
    -                                    val = cache.get(key);
    +                                    do {
    +                                        val = cache.get(key);
     
    -                                    newVal = val == null ?
    -                                        new QueryTestValue(0) : new QueryTestValue(val.val1 + 1);
    +                                        newVal = val == null ?
    +                                            new QueryTestValue(0) : new QueryTestValue(val.val1 + 1);
    +                                    }
    +                                    while (!cache.replace(key, val, newVal));
                                     }
    -                                while (!cache.replace(key, val, newVal));
    +
    +                                if (tx != null)
    +                                    tx.commit();
    +
    +                                committed = true;
    +                            }
    +                            catch (Exception e) {
    +                                assertTrue(e.getMessage(), e.getMessage() != null &&
    +                                    (e.getMessage().contains("Transaction has been rolled back") ||
    +                                        e.getMessage().contains("Cannot serialize transaction due to write conflict")));
    +                            }
    +                            finally {
    +                                if (tx != null)
    +                                    tx.close();
                                 }
    -                        }
    -                        finally {
    -                            if (tx != null)
    -                                tx.commit();
                             }
                         }
                     }
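The retry loop introduced above swallows MVCC rollback/write-conflict errors and re-runs the whole update until the commit succeeds. A standalone sketch of that pattern follows (commentary on the patch, not part of it; `WriteConflictException` and `runWithRetry` are hypothetical stand-ins for Ignite's conflict error and the test's while-loop):

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of the commit-retry pattern used in the test: re-run the whole
 * transactional operation while the (MVCC) commit reports a conflict.
 */
public class RetryLoopSketch {
    /** Hypothetical stand-in for "Cannot serialize transaction due to write conflict". */
    static class WriteConflictException extends RuntimeException { }

    /** The transactional body: cache update plus commit. */
    interface TxBody { void run(); }

    /** Re-runs the body until it completes without a conflict; returns attempt count. */
    static int runWithRetry(TxBody body) {
        int attempts = 0;
        boolean committed = false;

        while (!committed && !Thread.currentThread().isInterrupted()) {
            attempts++;

            try {
                body.run(); // in the test: invoke/replace on the cache, then tx.commit()

                committed = true;
            }
            catch (WriteConflictException e) {
                // Conflict: roll back (tx.close() in the test) and retry.
            }
        }

        return attempts;
    }

    public static void main(String[] args) {
        AtomicInteger conflictsLeft = new AtomicInteger(2);

        int attempts = runWithRetry(() -> {
            if (conflictsLeft.getAndDecrement() > 0)
                throw new WriteConflictException();
        });

        System.out.println(attempts); // 3: two simulated conflicts, then success
    }
}
```

In the actual test the body is the `invoke`/`replace` plus `tx.commit()`, and only exceptions whose message contains "Transaction has been rolled back" or "Cannot serialize transaction due to write conflict" are treated as retryable; anything else fails the assertion.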
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryRandomOperationsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryRandomOperationsTest.java
    index 6d15df2d79700..c6f0fefdfca0c 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryRandomOperationsTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryRandomOperationsTest.java
    @@ -68,14 +68,15 @@
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static java.util.concurrent.TimeUnit.SECONDS;
    @@ -84,12 +85,14 @@
     import static javax.cache.event.EventType.UPDATED;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     import static org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTest.ContinuousDeploy.ALL;
     import static org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTest.ContinuousDeploy.CLIENT;
     import static org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTest.ContinuousDeploy.SERVER;
    +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
     import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;
     import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
     import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;
    @@ -97,10 +100,8 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryRandomOperationsTest extends GridCommonAbstractTest {
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 5;
     
    @@ -120,7 +121,6 @@ public class CacheContinuousQueryRandomOperationsTest extends GridCommonAbstract
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             cfg.setClientMode(client);
    @@ -142,6 +142,7 @@ public class CacheContinuousQueryRandomOperationsTest extends GridCommonAbstract
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFilterAndFactoryProvided() throws Exception {
             final CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -186,6 +187,7 @@ public void testFilterAndFactoryProvided() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -198,6 +200,7 @@ public void testAtomicClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomic() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -210,6 +213,7 @@ public void testAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicAllNodes() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -222,6 +226,7 @@ public void testAtomicAllNodes() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicReplicated() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -234,6 +239,7 @@ public void testAtomicReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicReplicatedAllNodes() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -246,6 +252,7 @@ public void testAtomicReplicatedAllNodes() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicReplicatedClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -258,6 +265,7 @@ public void testAtomicReplicatedClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicNoBackups() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -270,6 +278,7 @@ public void testAtomicNoBackups() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicNoBackupsAllNodes() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -282,6 +291,7 @@ public void testAtomicNoBackupsAllNodes() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicNoBackupsClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -294,6 +304,7 @@ public void testAtomicNoBackupsClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTx() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -306,6 +317,7 @@ public void testTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxAllNodes() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -318,6 +330,7 @@ public void testTxAllNodes() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxExplicit() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -330,6 +343,46 @@ public void testTxExplicit() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testMvccTx() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, SERVER);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxAllNodes() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, ALL);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxExplicit() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, SERVER);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testDoubleRemoveAtomicWithoutBackup() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -342,11 +395,12 @@ public void testDoubleRemoveAtomicWithoutBackup() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveAtomicWithoutBackupWithStore() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
                 ATOMIC,
    -            false);
    +            true);
     
             doTestNotModifyOperation(ccfg);
         }
    @@ -354,6 +408,7 @@ public void testDoubleRemoveAtomicWithoutBackupWithStore() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveAtomic() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -366,6 +421,7 @@ public void testDoubleRemoveAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveAtomicWithStore() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -378,6 +434,7 @@ public void testDoubleRemoveAtomicWithStore() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveTx() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -390,11 +447,12 @@ public void testDoubleRemoveTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveTxWithStore() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
                 TRANSACTIONAL,
    -            false);
    +            true);
     
             doTestNotModifyOperation(ccfg);
         }
    @@ -402,6 +460,7 @@ public void testDoubleRemoveTxWithStore() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveReplicatedTx() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -414,10 +473,51 @@ public void testDoubleRemoveReplicatedTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveReplicatedTxWithStore() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
                 TRANSACTIONAL,
    +            true);
    +
    +        doTestNotModifyOperation(ccfg);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testDoubleRemoveMvccTx() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestNotModifyOperation(ccfg);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582")
    +    @Test
    +    public void testDoubleRemoveMvccTxWithStore() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT,
    +            true);
    +
    +        doTestNotModifyOperation(ccfg);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testDoubleRemoveReplicatedMvccTx() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
                 false);
     
             doTestNotModifyOperation(ccfg);
    @@ -426,6 +526,21 @@ public void testDoubleRemoveReplicatedTxWithStore() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582")
    +    @Test
    +    public void testDoubleRemoveReplicatedMvccTxWithStore() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
    +            true);
    +
    +        doTestNotModifyOperation(ccfg);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testDoubleRemoveReplicatedAtomic() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -438,6 +553,7 @@ public void testDoubleRemoveReplicatedAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDoubleRemoveReplicatedAtomicWithStore() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -733,6 +849,7 @@ private void checkSingleEvent(
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -745,6 +862,7 @@ public void testTxClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxClientExplicit() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 1,
    @@ -757,6 +875,7 @@ public void testTxClientExplicit() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxReplicated() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -769,6 +888,7 @@ public void testTxReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxReplicatedClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
                 0,
    @@ -781,6 +901,7 @@ public void testTxReplicatedClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxNoBackups() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -793,6 +914,7 @@ public void testTxNoBackups() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxNoBackupsAllNodes() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -805,6 +927,7 @@ public void testTxNoBackupsAllNodes() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxNoBackupsExplicit() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -817,6 +940,7 @@ public void testTxNoBackupsExplicit() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxNoBackupsClient() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
                 0,
    @@ -826,6 +950,110 @@ public void testTxNoBackupsClient() throws Exception {
             doTestContinuousQuery(ccfg, CLIENT);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxClient() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, CLIENT);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxClientExplicit() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            1,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, CLIENT);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxReplicated() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, SERVER);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxReplicatedClient() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(REPLICATED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, CLIENT);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxNoBackups() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, SERVER);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxNoBackupsAllNodes() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, ALL);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxNoBackupsExplicit() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, SERVER);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxNoBackupsClient() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,
    +            0,
    +            TRANSACTIONAL_SNAPSHOT,
    +            false);
    +
    +        doTestContinuousQuery(ccfg, CLIENT);
    +    }
    +
         /**
          * @param ccfg Cache configuration.
          * @param deploy The place where continuous query will be started.
    @@ -988,11 +1216,19 @@ private void randomUpdate(
     
             Transaction tx = null;
     
    -        if (cache.getConfiguration(CacheConfiguration.class).getAtomicityMode() == TRANSACTIONAL && rnd.nextBoolean())
    -            tx = ignite.transactions().txStart(txRandomConcurrency(rnd), txRandomIsolation(rnd));
    +        CacheAtomicityMode atomicityMode = atomicityMode(cache);
    +
    +        boolean mvccEnabled = atomicityMode == TRANSACTIONAL_SNAPSHOT;
    +
    +        if (atomicityMode != ATOMIC && rnd.nextBoolean()) {
    +            TransactionConcurrency concurrency = mvccEnabled ? PESSIMISTIC : txRandomConcurrency(rnd);
    +            TransactionIsolation isolation = mvccEnabled ? REPEATABLE_READ : txRandomIsolation(rnd);
    +
    +            tx = ignite.transactions().txStart(concurrency, isolation);
    +        }
     
             try {
    -            // log.info("Random operation [key=" + key + ", op=" + op + ']');
    +            log.info("Random operation [key=" + key + ", op=" + op + ']');
     
                 switch (op) {
                     case 0: {
    @@ -1001,7 +1237,7 @@ private void randomUpdate(
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, oldVal);
     
    @@ -1016,7 +1252,7 @@ private void randomUpdate(
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, oldVal);
     
    @@ -1026,12 +1262,13 @@ private void randomUpdate(
                     }
     
                     case 2: {
    -                    cache.remove(key);
    +                    boolean res = cache.remove(key);
     
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                    // Don't update the partition counter if nothing was removed when MVCC is enabled.
    +                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs, mvccEnabled && !res);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, oldVal, oldVal);
     
    @@ -1041,12 +1278,13 @@ private void randomUpdate(
                     }
     
                     case 3: {
    -                    cache.getAndRemove(key);
    +                    Object res = cache.getAndRemove(key);
     
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                    // Don't update the partition counter if nothing was removed when MVCC is enabled.
    +                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs, mvccEnabled && res == null);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, oldVal, oldVal);
     
    @@ -1061,7 +1299,7 @@ private void randomUpdate(
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, oldVal);
     
    @@ -1071,12 +1309,15 @@ private void randomUpdate(
                     }
     
                     case 5: {
    -                    cache.invoke(key, new EntrySetValueProcessor(null, rnd.nextBoolean()));
    +                    EntrySetValueProcessor proc = new EntrySetValueProcessor(null, rnd.nextBoolean());
    +
    +                    cache.invoke(key, proc);
     
                         if (tx != null)
                             tx.commit();
     
    -                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                    // Don't update the partition counter if nothing was removed when MVCC is enabled.
    +                    updatePartitionCounter(cache, key, partCntr, expEvtCntrs, mvccEnabled && proc.getOldVal() == null);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, oldVal, oldVal);
     
    @@ -1092,7 +1333,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal == null) {
    -                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, null);
     
    @@ -1111,7 +1352,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal == null) {
    -                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, null);
     
    @@ -1130,7 +1371,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal != null) {
    -                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, oldVal);
     
    @@ -1149,7 +1390,7 @@ private void randomUpdate(
                             tx.commit();
     
                         if (oldVal != null) {
    -                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                        updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                             waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, oldVal);
     
    @@ -1173,7 +1414,7 @@ private void randomUpdate(
                                 if (tx != null)
                                     tx.commit();
     
    -                            updatePartitionCounter(cache, key, partCntr, expEvtCntrs);
    +                            updatePartitionCounter(cache, key, partCntr, expEvtCntrs, false);
     
                                 waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), key, newVal, oldVal);
     
    @@ -1212,7 +1453,7 @@ private void randomUpdate(
                             tx.commit();
     
                         for (Map.Entry e : vals.entrySet())
    -                        updatePartitionCounter(cache, e.getKey(), partCntr, expEvtCntrs);
    +                        updatePartitionCounter(cache, e.getKey(), partCntr, expEvtCntrs, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), vals, expData);
     
    @@ -1233,7 +1474,7 @@ private void randomUpdate(
                             tx.commit();
     
                         for (Map.Entry e : vals.entrySet())
    -                        updatePartitionCounter(cache, e.getKey(), partCntr, expEvtCntrs);
    +                        updatePartitionCounter(cache, e.getKey(), partCntr, expEvtCntrs, false);
     
                         waitAndCheckEvent(evtsQueues, partCntr, expEvtCntrs, affinity(cache), vals, expData);
     
    @@ -1330,16 +1571,18 @@ else if (val == 1)
          * @return {@link TransactionConcurrency}.
          */
         private TransactionConcurrency txRandomConcurrency(Random rnd) {
    -        return rnd.nextBoolean() ? TransactionConcurrency.OPTIMISTIC : TransactionConcurrency.PESSIMISTIC;
    +        return rnd.nextBoolean() ? TransactionConcurrency.OPTIMISTIC : PESSIMISTIC;
         }
     
         /**
          * @param cache Cache.
          * @param key Key
          * @param cntrs Partition counters.
    +     * @param evtCntrs Event counters.
    +     * @param skipUpdCntr Skip update counter flag.
          */
         private void updatePartitionCounter(IgniteCache cache, Object key, Map cntrs,
    -        Map evtCntrs) {
    +        Map evtCntrs, boolean skipUpdCntr) {
             Affinity aff = cache.unwrap(Ignite.class).affinity(cache.getName());
     
             int part = aff.partition(key);
    @@ -1349,7 +1592,10 @@ private void updatePartitionCounter(IgniteCache cache, Object ke
             if (partCntr == null)
                 partCntr = 0L;
     
    -        cntrs.put(part, ++partCntr);
    +        if (!skipUpdCntr)
    +            partCntr++;
    +
    +        cntrs.put(part, partCntr);
             evtCntrs.put(key, partCntr);
         }
     
    @@ -1577,6 +1823,12 @@ public QueryTestValue(Integer val) {
          *
          */
         protected static class EntrySetValueProcessor implements EntryProcessor {
    +        /**
     +         * Static field: used to obtain the previous value from another node.
     +         * Assumes single-threaded execution.
    +         */
    +        private static Object oldVal;
    +
             /** */
             private Object val;
     
    @@ -1607,6 +1859,8 @@ public EntrySetValueProcessor(Object val, boolean retOld) {
                 if (skipModify)
                     return null;
     
    +            oldVal = e.getValue();
    +
                 Object old = retOld ? e.getValue() : null;
     
                 if (val != null)
    @@ -1617,6 +1871,17 @@ public EntrySetValueProcessor(Object val, boolean retOld) {
                 return old;
             }
     
    +        /**
    +         * @return Old value.
    +         */
    +        Object getOldVal() {
    +            Object oldVal0 = oldVal;
    +
     +            oldVal = null; // Clear stored value so it is returned only once.
    +
    +            return oldVal0;
    +        }
    +
             /** {@inheritDoc} */
             @Override public String toString() {
                 return S.toString(EntrySetValueProcessor.class, this);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryVariationsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryVariationsTest.java
    index fc86a35c16ed3..44d51d013142e 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryVariationsTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryVariationsTest.java
    @@ -65,6 +65,9 @@
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static java.util.concurrent.TimeUnit.SECONDS;
    @@ -80,6 +83,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousQueryVariationsTest extends IgniteCacheConfigVariationsAbstractTest {
         /** */
         private static final int ITERATION_CNT = 20;
    @@ -105,6 +109,7 @@ public class CacheContinuousQueryVariationsTest extends IgniteCacheConfigVariati
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationJCacheApiKeepBinary() throws Exception {
             testRandomOperation(true, false, false, false, true);
         }
    @@ -112,6 +117,7 @@ public void testRandomOperationJCacheApiKeepBinary() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationJCacheApiAsyncCallback() throws Exception {
             testRandomOperation(true, false, false, true, false);
         }
    @@ -119,6 +125,7 @@ public void testRandomOperationJCacheApiAsyncCallback() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationJCacheApiWithFilter() throws Exception {
             testRandomOperation(true, false, true, false, false);
         }
    @@ -126,6 +133,7 @@ public void testRandomOperationJCacheApiWithFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationJCacheApiWithFilterAsyncCallback() throws Exception {
             testRandomOperation(true, false, true, true, false);
         }
    @@ -133,6 +141,7 @@ public void testRandomOperationJCacheApiWithFilterAsyncCallback() throws Excepti
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationJCacheApiSyncWithFilter() throws Exception {
             testRandomOperation(true, true, true, false, false);
         }
    @@ -140,6 +149,7 @@ public void testRandomOperationJCacheApiSyncWithFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperation() throws Exception {
             testRandomOperation(true, true, false, false, false);
         }
    @@ -147,6 +157,7 @@ public void testRandomOperation() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationWithKeepBinary() throws Exception {
             testRandomOperation(true, true, false, false, true);
         }
    @@ -154,6 +165,7 @@ public void testRandomOperationWithKeepBinary() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationWithAsyncCallback() throws Exception {
             testRandomOperation(true, true, false, true, false);
         }
    @@ -161,6 +173,7 @@ public void testRandomOperationWithAsyncCallback() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationWithFilter() throws Exception {
             testRandomOperation(true, true, true, false, false);
         }
    @@ -168,6 +181,7 @@ public void testRandomOperationWithFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationWithFilterWithKeepBinary() throws Exception {
             testRandomOperation(true, true, true, false, true);
         }
    @@ -175,6 +189,7 @@ public void testRandomOperationWithFilterWithKeepBinary() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomOperationWithFilterAsyncCallback() throws Exception {
             testRandomOperation(true, true, true, true, false);
         }
    @@ -623,6 +638,7 @@ private void checkNoEvent(List>> evtsQueues)
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoveRemoveScenario() throws Exception {
             runInAllDataModes(new TestRunnable() {
                 @Override public void run() throws Exception {
    @@ -717,7 +733,7 @@ public void testRemoveRemoveScenario() throws Exception {
                             while (evts.size() != 10) {
                                 Thread.sleep(100);
                             }
    -                        
    +
                             evts.clear();
     
                             log.info("Finish iteration: " + i);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerFailoverTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerFailoverTest.java
    index 241dc2ac5c166..8360129f49a70 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerFailoverTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerFailoverTest.java
    @@ -36,10 +36,10 @@
     import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.lang.IgniteOutClosure;
     import org.apache.ignite.resources.LoggerResource;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.SECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
    @@ -48,10 +48,8 @@
     
     /**
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousWithTransformerFailoverTest extends GridCommonAbstractTest {
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private boolean client;
     
    @@ -59,8 +57,6 @@ public class CacheContinuousWithTransformerFailoverTest extends GridCommonAbstra
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
    -
             CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
     
             ccfg.setCacheMode(PARTITIONED);
    @@ -83,6 +79,7 @@ public class CacheContinuousWithTransformerFailoverTest extends GridCommonAbstra
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testServerNodeLeft() throws Exception {
             startGrids(3);
     
    @@ -150,6 +147,7 @@ public void testServerNodeLeft() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTransformerException() throws Exception {
             try {
                 startGrids(1);
    @@ -204,6 +202,7 @@ public void testTransformerException() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCrossCallback() throws Exception {
             startGrids(2);
             try {
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerReplicatedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerReplicatedSelfTest.java
    index 7aa91a21ad4fa..5c831267f244e 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerReplicatedSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousWithTransformerReplicatedSelfTest.java
    @@ -32,6 +32,7 @@
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.binary.BinaryObject;
    +import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
     import org.apache.ignite.cache.CacheMode;
     import org.apache.ignite.cache.query.ContinuousQueryWithTransformer;
    @@ -46,11 +47,15 @@
     import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     
     /**
      */
    +@RunWith(JUnit4.class)
     public class CacheContinuousWithTransformerReplicatedSelfTest extends GridCommonAbstractTest {
         /** */
         private static final int DFLT_ENTRY_CNT = 10;
    @@ -100,6 +105,7 @@ protected CacheMode cacheMode() {
                 CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
     
                 ccfg.setCacheMode(cacheMode());
    +            ccfg.setAtomicityMode(atomicityMode());
     
                 cfg.setCacheConfiguration(ccfg);
             }
    @@ -107,6 +113,13 @@ protected CacheMode cacheMode() {
             return cfg;
         }
     
    +    /**
    +     * @return Cache atomicity mode.
    +     */
    +    protected CacheAtomicityMode atomicityMode() {
    +        return CacheAtomicityMode.TRANSACTIONAL;
    +    }
    +
         /** {@inheritDoc} */
         @Override protected void afterTest() throws Exception {
             gridToRunQuery().cache(DEFAULT_CACHE_NAME).removeAll();
    @@ -135,6 +148,7 @@ protected Ignite gridToRunQuery() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformer() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, SKIP_KEEP_BINARY, false);
         }
    @@ -142,6 +156,7 @@ public void testContinuousWithTransformer() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAsync() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, SKIP_KEEP_BINARY, true);
         }
    @@ -149,6 +164,7 @@ public void testContinuousWithTransformerAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListener() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, SKIP_KEEP_BINARY, false);
         }
    @@ -156,6 +172,7 @@ public void testContinuousWithTransformerAndRegularListener() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListenerAsync() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, SKIP_KEEP_BINARY, true);
         }
    @@ -163,6 +180,7 @@ public void testContinuousWithTransformerAndRegularListenerAsync() throws Except
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerWithFilter() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, SKIP_KEEP_BINARY, false);
         }
    @@ -170,6 +188,7 @@ public void testContinuousWithTransformerWithFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerWithFilterAsync() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, SKIP_KEEP_BINARY, true);
         }
    @@ -177,6 +196,7 @@ public void testContinuousWithTransformerWithFilterAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListenerWithFilter() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, KEEP_BINARY, false);
         }
    @@ -184,6 +204,7 @@ public void testContinuousWithTransformerAndRegularListenerWithFilter() throws E
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListenerWithFilterAsync() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, KEEP_BINARY, true);
         }
    @@ -191,6 +212,7 @@ public void testContinuousWithTransformerAndRegularListenerWithFilterAsync() thr
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerKeepBinary() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, KEEP_BINARY, false);
         }
    @@ -198,6 +220,7 @@ public void testContinuousWithTransformerKeepBinary() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerKeepBinaryAsync() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, KEEP_BINARY, true);
         }
    @@ -205,6 +228,7 @@ public void testContinuousWithTransformerKeepBinaryAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListenerKeepBinary() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, KEEP_BINARY, false);
         }
    @@ -212,6 +236,7 @@ public void testContinuousWithTransformerAndRegularListenerKeepBinary() throws E
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListenerKeepBinaryAsync() throws Exception {
             runContinuousQueryWithTransformer(SKIP_EVT_FILTER, DFLT_ENTRY_CNT, KEEP_BINARY, true);
         }
    @@ -219,6 +244,7 @@ public void testContinuousWithTransformerAndRegularListenerKeepBinaryAsync() thr
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerWithFilterKeepBinary() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, KEEP_BINARY, false);
         }
    @@ -226,6 +252,7 @@ public void testContinuousWithTransformerWithFilterKeepBinary() throws Exception
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerWithFilterKeepBinaryAsync() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, KEEP_BINARY, true);
         }
    @@ -233,6 +260,7 @@ public void testContinuousWithTransformerWithFilterKeepBinaryAsync() throws Exce
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListenerWithFilterKeepBinary() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, KEEP_BINARY, false);
         }
    @@ -240,6 +268,7 @@ public void testContinuousWithTransformerAndRegularListenerWithFilterKeepBinary(
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousWithTransformerAndRegularListenerWithFilterKeepBinaryAsync() throws Exception {
             runContinuousQueryWithTransformer(ADD_EVT_FILTER, DFLT_ENTRY_CNT / 2, KEEP_BINARY, true);
         }
    @@ -247,6 +276,7 @@ public void testContinuousWithTransformerAndRegularListenerWithFilterKeepBinaryA
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTransformerReturnNull() throws Exception {
             Ignite ignite = gridToRunQuery();
     
    @@ -296,6 +326,7 @@ public void testTransformerReturnNull() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testExpired() throws Exception {
             Ignite ignite = gridToRunQuery();
     
    @@ -483,10 +514,10 @@ private static class LocalEventListener implements EventListener {
             /** {@inheritDoc} */
             @Override public void onUpdated(Iterable events) throws CacheEntryListenerException {
                 for (String evt : events) {
    +                cnt.incrementAndGet();
    +
                     if (evt.startsWith(SARAH_CONNOR))
                         cntLatch.countDown();
    -
    -                cnt.incrementAndGet();
                 }
             }
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorExternalizableFailedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorExternalizableFailedTest.java
    index 435ccc05eeb31..fda585b4b16c9 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorExternalizableFailedTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorExternalizableFailedTest.java
    @@ -34,17 +34,19 @@
     import org.apache.ignite.configuration.NearCacheConfiguration;
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.jetbrains.annotations.NotNull;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC;
     import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    @@ -56,6 +58,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheEntryProcessorExternalizableFailedTest extends GridCommonAbstractTest {
         /** */
         private static final int EXPECTED_VALUE = 42;
    @@ -63,9 +66,6 @@ public class CacheEntryProcessorExternalizableFailedTest extends GridCommonAbstr
         /** */
         private static final int WRONG_VALUE = -1;
     
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 3;
     
    @@ -73,19 +73,18 @@ public class CacheEntryProcessorExternalizableFailedTest extends GridCommonAbstr
         public static final int ITERATION_CNT = 1;
     
         /** */
    -    public static final int KEYS = 10;
    +    public static final int KEY = 10;
     
         /** */
         private boolean client;
     
         /** */
    -    private boolean failOnWrite = false;
    +    private boolean failOnWrite;
     
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             cfg.setClientMode(client);
    @@ -104,6 +103,13 @@ public class CacheEntryProcessorExternalizableFailedTest extends GridCommonAbstr
             startGrid(getServerNodeCount());
         }
     
    +    /** {@inheritDoc} */
    +    @Override protected void afterTestsStopped() throws Exception {
    +        stopAllGrids();
    +
    +        super.afterTestsStopped();
    +    }
    +
         /** {@inheritDoc} */
         @Override protected void beforeTest() throws Exception {
             super.beforeTest();
    @@ -121,6 +127,7 @@ private int getServerNodeCount() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testOptimisticFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2);
     
    @@ -142,6 +149,165 @@ public void testOptimisticFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testOptimistic() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        failOnWrite = true;
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testOptimisticWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        failOnWrite = true;
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testOptimisticFullSyncWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        failOnWrite = true;
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testOptimisticOnePhaseCommit() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        failOnWrite = true;
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testOptimisticOnePhaseCommitWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        failOnWrite = true;
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testOptimisticOnePhaseCommitFullSync() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        failOnWrite = true;
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testOptimisticOnePhaseCommitFullSyncWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        failOnWrite = true;
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testPessimisticOnePhaseCommit() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1);
     
    @@ -163,9 +329,10 @@ public void testPessimisticOnePhaseCommit() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticOnePhaseCommitWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
             doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
     
    @@ -185,6 +352,7 @@ public void testPessimisticOnePhaseCommitWithNearCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticOnePhaseCommitFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1);
     
    @@ -206,9 +374,10 @@ public void testPessimisticOnePhaseCommitFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticOnePhaseCommitFullSyncWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
             doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
     
    @@ -228,6 +397,7 @@ public void testPessimisticOnePhaseCommitFullSyncWithNearCache() throws Exceptio
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimistic() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2);
     
    @@ -249,9 +419,10 @@ public void testPessimistic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
             doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
     
    @@ -271,6 +442,7 @@ public void testPessimisticWithNearCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2);
     
    @@ -292,63 +464,81 @@ public void testPessimisticFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    -    public void testOptimisticOnePhaseCommit() throws Exception {
    -        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1);
    +    @Test
    +    public void testPessimisticFullSyncWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
             failOnWrite = true;
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
         }
     
         /**
          * @throws Exception If failed.
          */
    -    public void testOptimisticOnePhaseCommitFullSync() throws Exception {
    -        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1);
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommit() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
     
    -        failOnWrite = true;
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommitWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
         }
     
         /**
          * @throws Exception If failed.
          */
    -    public void testOptimisticOnePhaseCommitFullSyncWithNearCache() throws Exception {
    -        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommitFullSync() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
     
    -        failOnWrite = true;
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommitFullSyncWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
    -        doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
             doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
         }
    @@ -356,20 +546,29 @@ public void testOptimisticOnePhaseCommitFullSyncWithNearCache() throws Exception
         /**
          * @throws Exception If failed.
          */
    -    public void testOptimistic() throws Exception {
    -        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2);
    +    @Test
    +    public void testMvccPessimistic() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
     
    -        failOnWrite = true;
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
    -        doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
             doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
         }
    @@ -377,20 +576,29 @@ public void testOptimistic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    -    public void testOptimisticFullSyncWithNearCache() throws Exception {
    -        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2);
    +    @Test
    +    public void testMvccPessimisticFullSync() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
    -        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
     
    -        failOnWrite = true;
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticFullSyncWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
    -        doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
     
    -        doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
    +        failOnWrite = true;
     
             doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
         }
    @@ -399,6 +607,7 @@ public void testOptimisticFullSyncWithNearCache() throws Exception {
          * @param ccfg Cache configuration.
          * @throws Exception If failed.
          */
    +    @SuppressWarnings("unchecked")
         private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency txConcurrency,
             TransactionIsolation txIsolation) throws Exception {
             IgniteEx cln = grid(getServerNodeCount());
    @@ -412,28 +621,19 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
             else
                 clnCache = cln.cache(ccfg.getName());
     
    -        putKeys(clnCache, EXPECTED_VALUE);
    +        clnCache.put(KEY, EXPECTED_VALUE);
     
             try {
                 // Explicit tx.
                 for (int i = 0; i < ITERATION_CNT; i++) {
    -                try (final Transaction tx = cln.transactions().txStart(txConcurrency, txIsolation)) {
    -                    putKeys(clnCache, WRONG_VALUE);
    -
    -                    clnCache.invoke(KEYS, createEntryProcessor());
    -
    -                    GridTestUtils.assertThrowsWithCause(new Callable() {
    -                        @Override public Object call() throws Exception {
    -                            tx.commit();
    -
    -                            return null;
    -                        }
    -                    }, UnsupportedOperationException.class);
    -                }
    +                if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT)
    +                    checkExplicitMvccInvoke(cln, clnCache, txConcurrency, txIsolation);
    +                else
    +                    checkExplicitTxInvoke(cln, clnCache, txConcurrency, txIsolation);
     
                     assertNull(cln.transactions().tx());
     
    -                checkKeys(clnCache, EXPECTED_VALUE);
    +                assertEquals(EXPECTED_VALUE, clnCache.get(KEY));
                 }
     
                 // From affinity node.
    @@ -443,32 +643,24 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
     
                 // Explicit tx.
                 for (int i = 0; i < ITERATION_CNT; i++) {
    -                try (final Transaction tx = grid.transactions().txStart(txConcurrency, txIsolation)) {
    -                    putKeys(cache, WRONG_VALUE);
    -
    -                    cache.invoke(KEYS, createEntryProcessor());
    -
    -                    GridTestUtils.assertThrowsWithCause(new Callable() {
    -                        @Override public Object call() throws Exception {
    -                            tx.commit();
    -
    -                            return null;
    -                        }
    -                    }, UnsupportedOperationException.class);
    -                }
    +                if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT)
    +                    checkExplicitMvccInvoke(grid, cache, txConcurrency, txIsolation);
    +                else
    +                    checkExplicitTxInvoke(grid, cache, txConcurrency, txIsolation);
     
                     assertNull(cln.transactions().tx());
     
    -                checkKeys(cache, EXPECTED_VALUE);
    +                assertEquals(EXPECTED_VALUE, cache.get(KEY));
                 }
     
                 final IgniteCache clnCache0 = clnCache;
     
                 // Implicit tx.
                 for (int i = 0; i < ITERATION_CNT; i++) {
    +                //noinspection ThrowableNotThrown
                     GridTestUtils.assertThrowsWithCause(new Callable() {
                         @Override public Object call() throws Exception {
    -                        clnCache0.invoke(KEYS, createEntryProcessor());
    +                        clnCache0.invoke(KEY, createEntryProcessor());
     
                             return null;
                         }
    @@ -477,7 +669,7 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
                     assertNull(cln.transactions().tx());
                 }
     
    -            checkKeys(clnCache, EXPECTED_VALUE);
    +            assertEquals(EXPECTED_VALUE, clnCache.get(KEY));
             }
             catch (Exception e) {
                 e.printStackTrace();
    @@ -488,33 +680,61 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
         }
     
         /**
    -     * @return Entry processor.
    +     * @param node Ignite node.
    +     * @param cache Node cache.
    +     * @param txConcurrency Transaction concurrency.
    +     * @param txIsolation Transaction isolation.
          */
    -    @NotNull private EntryProcessor createEntryProcessor() {
    -        return failOnWrite ? new ExternalizableFailedWriteEntryProcessor() :
    -            new ExternalizableFailedReadEntryProcessor();
    +    @SuppressWarnings({"unchecked", "ThrowableNotThrown"})
    +    private void checkExplicitTxInvoke(Ignite node, IgniteCache cache, TransactionConcurrency txConcurrency,
    +        TransactionIsolation txIsolation) {
    +        try (final Transaction tx = node.transactions().txStart(txConcurrency, txIsolation)) {
    +            cache.put(KEY, WRONG_VALUE);
    +
    +            cache.invoke(KEY, createEntryProcessor());
    +
    +            GridTestUtils.assertThrowsWithCause(new Callable() {
    +                @Override public Object call() throws Exception {
    +                    tx.commit();
    +
    +                    return null;
    +                }
    +            }, UnsupportedOperationException.class);
    +        }
         }
     
    -    /**
    -     * @param cache Cache.
    -     * @param val Value.
    -     */
    -    private void putKeys(IgniteCache cache, int val) {
    -        cache.put(KEYS, val);
    +    /**
    +     * @param node Ignite node.
    +     * @param cache Node cache.
    +     * @param txConcurrency Transaction concurrency.
    +     * @param txIsolation Transaction isolation.
    +     */
    +    @SuppressWarnings({"unchecked", "ThrowableNotThrown"})
    +    private void checkExplicitMvccInvoke(Ignite node, IgniteCache cache, TransactionConcurrency txConcurrency,
    +        TransactionIsolation txIsolation) {
    +        try (final Transaction tx = node.transactions().txStart(txConcurrency, txIsolation)) {
    +            cache.put(KEY, WRONG_VALUE);
    +
    +            GridTestUtils.assertThrowsWithCause(new Callable() {
    +                @Override public Object call() throws Exception {
    +                    cache.invoke(KEY, createEntryProcessor());
    +
    +                    fail("Should never happen.");
    +
    +                    tx.commit();
    +
    +                    return null;
    +                }
    +            }, UnsupportedOperationException.class);
    +        }
         }
     
         /**
    -     * @param cache Cache.
    -     * @param expVal Expected value.
    +     * @return Entry processor.
          */
    -    private void checkKeys(IgniteCache cache, int expVal) {
    -        assertEquals(expVal, cache.get(KEYS));
    +    private @NotNull EntryProcessor createEntryProcessor() {
    +        return failOnWrite ? new ExternalizableFailedWriteEntryProcessor() :
    +            new ExternalizableFailedReadEntryProcessor();
         }
     
         /**
          * @return Cache configuration.
          */
    -    private CacheConfiguration cacheConfiguration(CacheWriteSynchronizationMode wrMode, int backup) {
    +    private CacheConfiguration cacheConfiguration(CacheWriteSynchronizationMode wrMode, int backup) {
             return new CacheConfiguration("test-cache-" + wrMode + "-" + backup)
                 .setAtomicityMode(TRANSACTIONAL)
                 .setWriteSynchronizationMode(FULL_SYNC)
    @@ -525,7 +745,7 @@ private CacheConfiguration cacheConfiguration(CacheWriteSynchronizationMode wrMo
          *
          */
         private static class ExternalizableFailedWriteEntryProcessor implements EntryProcessor,
    -        Externalizable{
    +        Externalizable {
             /** */
             public ExternalizableFailedWriteEntryProcessor() {
                 // No-op.
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorNonSerializableTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorNonSerializableTest.java
    index 693668556ffb9..2f48cf7ae6a89 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorNonSerializableTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheEntryProcessorNonSerializableTest.java
    @@ -32,16 +32,19 @@
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.marshaller.jdk.JdkMarshaller;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.PRIMARY_SYNC;
     import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    @@ -53,6 +56,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheEntryProcessorNonSerializableTest extends GridCommonAbstractTest {
         /** */
         private static final int EXPECTED_VALUE = 42;
    @@ -60,9 +64,6 @@ public class CacheEntryProcessorNonSerializableTest extends GridCommonAbstractTe
         /** */
         private static final int WRONG_VALUE = -1;
     
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 3;
     
    @@ -70,7 +71,7 @@ public class CacheEntryProcessorNonSerializableTest extends GridCommonAbstractTe
         public static final int ITERATION_CNT = 1;
     
         /** */
    -    public static final int KEYS = 10;
    +    private static final int KEY = 10;
     
         /** */
         private boolean client;
    @@ -79,7 +80,6 @@ public class CacheEntryProcessorNonSerializableTest extends GridCommonAbstractTe
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             cfg.setClientMode(client);
    @@ -100,6 +100,13 @@ public class CacheEntryProcessorNonSerializableTest extends GridCommonAbstractTe
             startGrid(getServerNodeCount());
         }
     
    +    /** {@inheritDoc} */
    +    @Override protected void afterTestsStopped() throws Exception {
    +        stopAllGrids();
    +
    +        super.afterTestsStopped();
    +    }
    +
         /**
          * @return Server nodes.
          */
    @@ -110,6 +117,7 @@ private int getServerNodeCount() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticOnePhaseCommit() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1);
     
    @@ -123,9 +131,10 @@ public void testPessimisticOnePhaseCommit() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticOnePhaseCommitWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
             doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
     
    @@ -137,6 +146,7 @@ public void testPessimisticOnePhaseCommitWithNearCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticOnePhaseCommitFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1);
     
    @@ -150,9 +160,10 @@ public void testPessimisticOnePhaseCommitFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticOnePhaseCommitFullSyncWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
             doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
     
    @@ -164,6 +175,7 @@ public void testPessimisticOnePhaseCommitFullSyncWithNearCache() throws Exceptio
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimistic() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2);
     
    @@ -177,9 +189,10 @@ public void testPessimistic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
             doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
     
    @@ -191,6 +204,7 @@ public void testPessimisticWithNearCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPessimisticFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2);
     
    @@ -204,6 +218,107 @@ public void testPessimisticFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testPessimisticFullSyncWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, SERIALIZABLE);
    +    }
    +
    +    /**
    +     */
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommit() {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommitWithNearCache() {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     */
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommitFullSync() {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticOnePhaseCommitFullSyncWithNearCache() {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     */
    +    @Test
    +    public void testMvccPessimistic() {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticWithNearCache() {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     */
    +    @Test
    +    public void testMvccPessimisticFullSync() {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT);
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    public void testMvccPessimisticFullSyncWithNearCache() {
    +        CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2).setAtomicityMode(TRANSACTIONAL_SNAPSHOT)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, PESSIMISTIC, REPEATABLE_READ);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testOptimisticOnePhaseCommit() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1);
     
    @@ -217,6 +332,22 @@ public void testOptimisticOnePhaseCommit() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testOptimisticOnePhaseCommitWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 1)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testOptimisticOnePhaseCommitFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1);
     
    @@ -230,9 +361,10 @@ public void testOptimisticOnePhaseCommitFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testOptimisticOnePhaseCommitFullSyncWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 1)
    -            .setNearConfiguration(new NearCacheConfiguration());
    +            .setNearConfiguration(new NearCacheConfiguration<>());
     
             doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
     
    @@ -244,6 +376,7 @@ public void testOptimisticOnePhaseCommitFullSyncWithNearCache() throws Exception
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testOptimistic() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2);
     
    @@ -257,6 +390,22 @@ public void testOptimistic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testOptimisticWithNearCache() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PRIMARY_SYNC, 2)
    +            .setNearConfiguration(new NearCacheConfiguration<>());
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, READ_COMMITTED);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, REPEATABLE_READ);
    +
    +        doTestInvokeTest(ccfg, OPTIMISTIC, SERIALIZABLE);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testOptimisticFullSync() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2);
     
    @@ -270,6 +419,7 @@ public void testOptimisticFullSync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testOptimisticFullSyncWithNearCache() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(FULL_SYNC, 2);
     
    @@ -282,10 +432,13 @@ public void testOptimisticFullSyncWithNearCache() throws Exception {
     
         /**
          * @param ccfg Cache configuration.
    -     * @throws Exception If failed.
          */
    +    @SuppressWarnings({"unchecked", "ThrowableNotThrown"})
         private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency txConcurrency,
    -        TransactionIsolation txIsolation) throws Exception {
    +        TransactionIsolation txIsolation) {
    +        if (ccfg.getNearConfiguration() != null)
    +            MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
    +
             IgniteEx cln = grid(getServerNodeCount());
     
             grid(0).createCache(ccfg);
    @@ -297,26 +450,17 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
             else
                 clnCache = cln.cache(ccfg.getName());
     
    -        putKeys(clnCache, EXPECTED_VALUE);
    +        clnCache.put(KEY, EXPECTED_VALUE);
     
             try {
                 // Explicit tx.
                 for (int i = 0; i < ITERATION_CNT; i++) {
    -                try (final Transaction tx = cln.transactions().txStart(txConcurrency, txIsolation)) {
    -                    putKeys(clnCache, WRONG_VALUE);
    -
    -                    clnCache.invoke(KEYS, new NonSerialazibleEntryProcessor());
    -
    -                    GridTestUtils.assertThrowsWithCause(new Callable() {
    -                        @Override public Object call() throws Exception {
    -                            tx.commit();
    +                if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT)
    +                    checkMvccInvoke(cln, clnCache, txConcurrency, txIsolation);
    +                else
    +                    checkTxInvoke(cln, clnCache, txConcurrency, txIsolation);
     
    -                            return null;
    -                        }
    -                    }, NotSerializableException.class);
    -                }
    -
    -                checkKeys(clnCache, EXPECTED_VALUE);
    +                assertEquals(EXPECTED_VALUE, clnCache.get(KEY));
                 }
     
                 // From affinity node.
    @@ -326,21 +470,12 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
     
                 // Explicit tx.
                 for (int i = 0; i < ITERATION_CNT; i++) {
    -                try (final Transaction tx = grid.transactions().txStart(txConcurrency, txIsolation)) {
    -                    putKeys(cache, WRONG_VALUE);
    -
    -                    cache.invoke(KEYS, new NonSerialazibleEntryProcessor());
    +                if (ccfg.getAtomicityMode() == TRANSACTIONAL_SNAPSHOT)
    +                    checkMvccInvoke(grid, cache, txConcurrency, txIsolation);
    +                else
    +                    checkTxInvoke(grid, cache, txConcurrency, txIsolation);
     
    -                    GridTestUtils.assertThrowsWithCause(new Callable() {
    -                        @Override public Object call() throws Exception {
    -                            tx.commit();
    -
    -                            return null;
    -                        }
    -                    }, NotSerializableException.class);
    -                }
    -
    -                checkKeys(cache, EXPECTED_VALUE);
    +                assertEquals(EXPECTED_VALUE, cache.get(KEY));
                 }
     
                 final IgniteCache clnCache0 = clnCache;
    @@ -348,15 +483,15 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
                 // Implicit tx.
                 for (int i = 0; i < ITERATION_CNT; i++) {
                     GridTestUtils.assertThrowsWithCause(new Callable() {
    -                    @Override public Object call() throws Exception {
    -                        clnCache0.invoke(KEYS, new NonSerialazibleEntryProcessor());
    +                    @Override public Object call() {
    +                        clnCache0.invoke(KEY, new NonSerialazibleEntryProcessor());
     
                             return null;
                         }
                     }, NotSerializableException.class);
                 }
     
    -            checkKeys(clnCache, EXPECTED_VALUE);
    +            assertEquals(EXPECTED_VALUE, clnCache.get(KEY));
             }
             finally {
                 grid(0).destroyCache(ccfg.getName());
    @@ -364,25 +499,59 @@ private void doTestInvokeTest(CacheConfiguration ccfg, TransactionConcurrency tx
         }
     
         /**
    -     * @param cache Cache.
    -     * @param val Value.
    +     * @param node Grid node.
    +     * @param cache Node cache.
    +     * @param txConcurrency Transaction concurrency.
    +     * @param txIsolation Transaction isolation.
          */
    -    private void putKeys(IgniteCache cache, int val) {
    -        cache.put(KEYS, val);
    +    @SuppressWarnings({"unchecked", "ThrowableNotThrown"})
    +    private void checkTxInvoke(Ignite node, IgniteCache cache, TransactionConcurrency txConcurrency,
    +        TransactionIsolation txIsolation) {
    +        try (final Transaction tx = node.transactions().txStart(txConcurrency, txIsolation)) {
    +            cache.put(KEY, WRONG_VALUE);
    +
    +            cache.invoke(KEY, new NonSerialazibleEntryProcessor());
    +
    +            GridTestUtils.assertThrowsWithCause(new Callable() {
    +                @Override public Object call() {
    +                    tx.commit();
    +
    +                    return null;
    +                }
    +            }, NotSerializableException.class);
    +        }
         }
     
         /**
    -     * @param cache Cache.
    -     * @param expVal Expected value.
    +     * @param node Grid node.
    +     * @param cache Node cache.
    +     * @param txConcurrency Transaction concurrency.
    +     * @param txIsolation Transaction isolation.
          */
    -    private void checkKeys(IgniteCache cache, int expVal) {
    -        assertEquals(expVal, cache.get(KEYS));
    +    @SuppressWarnings({"unchecked", "ThrowableNotThrown"})
    +    private void checkMvccInvoke(Ignite node, IgniteCache cache, TransactionConcurrency txConcurrency,
    +        TransactionIsolation txIsolation) {
    +        try (final Transaction tx = node.transactions().txStart(txConcurrency, txIsolation)) {
    +            cache.put(KEY, WRONG_VALUE);
    +
    +            GridTestUtils.assertThrowsWithCause(new Callable() {
    +                @Override public Object call() {
    +                    cache.invoke(KEY, new NonSerialazibleEntryProcessor());
    +
     +                    fail("Should never happen.");
    +
    +                    tx.commit();
    +
    +                    return null;
    +                }
    +            }, NotSerializableException.class);
    +        }
         }
     
         /**
          * @return Cache configuration.
          */
    -    private CacheConfiguration cacheConfiguration(CacheWriteSynchronizationMode wrMode, int backup) {
    +    private CacheConfiguration cacheConfiguration(CacheWriteSynchronizationMode wrMode, int backup) {
             return new CacheConfiguration("test-cache-" + wrMode + "-" + backup)
                 .setAtomicityMode(TRANSACTIONAL)
                 .setWriteSynchronizationMode(FULL_SYNC)
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationNearEnabledTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationNearEnabledTest.java
    index 9eb56dcb61551..a45617e9eb9ad 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationNearEnabledTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationNearEnabledTest.java
    @@ -21,10 +21,15 @@
     import org.apache.ignite.cache.CacheMode;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.NearCacheConfiguration;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheKeepBinaryIterationNearEnabledTest extends CacheKeepBinaryIterationTest {
         /** {@inheritDoc} */
         @Override protected CacheConfiguration cacheConfiguration(
    @@ -39,4 +44,17 @@ public class CacheKeepBinaryIterationNearEnabledTest extends CacheKeepBinaryIter
             return ccfg;
         }
     
    +    /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    @Override public void testMvccTxOnHeap() throws Exception {
    +        // No-op.
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7187")
    +    @Test
    +    @Override public void testMvccTxOnHeapLocalEntries() throws Exception {
    +        // No-op.
    +    }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationStoreEnabledTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationStoreEnabledTest.java
    index f98229deeb02d..af1f582afcf21 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationStoreEnabledTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationStoreEnabledTest.java
    @@ -24,10 +24,14 @@
     import org.apache.ignite.cache.CacheWriteSynchronizationMode;
     import org.apache.ignite.cache.store.CacheStoreAdapter;
     import org.apache.ignite.configuration.CacheConfiguration;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheKeepBinaryIterationStoreEnabledTest extends CacheKeepBinaryIterationTest {
         /** Cache store. */
         private static TestStore store = new TestStore();
    @@ -50,6 +54,18 @@ public class CacheKeepBinaryIterationStoreEnabledTest extends CacheKeepBinaryIte
             return ccfg;
         }
     
    +    /** {@inheritDoc} */
    +    @Test
    +    @Override public void testMvccTxOnHeap() throws Exception {
    +        fail("https://issues.apache.org/jira/browse/IGNITE-8582");
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Test
    +    @Override public void testMvccTxOnHeapLocalEntries() throws Exception {
    +        fail("https://issues.apache.org/jira/browse/IGNITE-8582");
    +    }
    +
         /**
          *
          */
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationTest.java
    index ce4557088dbe2..c41467f8c4d06 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheKeepBinaryIterationTest.java
    @@ -32,24 +32,23 @@
     import org.apache.ignite.internal.util.tostring.GridToStringInclude;
     import org.apache.ignite.internal.util.typedef.internal.S;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.config.GridTestProperties;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheKeepBinaryIterationTest extends GridCommonAbstractTest {
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 3;
     
    @@ -64,7 +63,6 @@ public class CacheKeepBinaryIterationTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             return cfg;
    @@ -80,6 +78,7 @@ public class CacheKeepBinaryIterationTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAtomicOnHeap() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, ATOMIC);
     
    @@ -92,6 +91,7 @@ public void testAtomicOnHeap() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnHeap() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED,1, TRANSACTIONAL);
     
    @@ -104,6 +104,20 @@ public void testTxOnHeap() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testMvccTxOnHeap() throws Exception {
     +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL_SNAPSHOT);
    +
    +        doTestScanQuery(ccfg, true, true);
    +        doTestScanQuery(ccfg, true, false);
    +        doTestScanQuery(ccfg, false, true);
    +        doTestScanQuery(ccfg, false, false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testAtomicOnHeapLocalEntries() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, ATOMIC);
     
    @@ -116,6 +130,7 @@ public void testAtomicOnHeapLocalEntries() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxOnHeapLocalEntries() throws Exception {
             CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL);
     
    @@ -125,6 +140,20 @@ public void testTxOnHeapLocalEntries() throws Exception {
             doTestLocalEntries(ccfg, false, false);
         }
     
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testMvccTxOnHeapLocalEntries() throws Exception {
    +        CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, 1, TRANSACTIONAL_SNAPSHOT);
    +
    +        doTestLocalEntries(ccfg, true, true);
    +        doTestLocalEntries(ccfg, true, false);
    +        doTestLocalEntries(ccfg, false, true);
    +        doTestLocalEntries(ccfg, false, false);
    +    }
    +
    +
         /**
          * @param ccfg Cache configuration.
          */
    @@ -368,4 +397,4 @@ public QueryTestValue(Integer val) {
                 return S.toString(QueryTestValue.class, this);
             }
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ClientReconnectContinuousQueryTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ClientReconnectContinuousQueryTest.java
    index 9b531c68277a4..bbdbb1e1f2399 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ClientReconnectContinuousQueryTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ClientReconnectContinuousQueryTest.java
    @@ -23,6 +23,7 @@
     import javax.cache.event.CacheEntryListenerException;
     import javax.cache.event.CacheEntryUpdatedListener;
     import org.apache.ignite.IgniteCache;
    +import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.query.ContinuousQuery;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    @@ -37,10 +38,14 @@
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class ClientReconnectContinuousQueryTest extends GridCommonAbstractTest {
         /** Client index. */
         private static final int CLIENT_IDX = 1;
    @@ -77,17 +82,31 @@ public class ClientReconnectContinuousQueryTest extends GridCommonAbstractTest {
             else {
                 CacheConfiguration ccfg = defaultCacheConfiguration();
     
    +            ccfg.setAtomicityMode(atomicityMode());
    +
    +            // TODO IGNITE-9530 Remove this clause.
    +            if (atomicityMode() == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT)
    +                ccfg.setNearConfiguration(null);
    +
                 cfg.setCacheConfiguration(ccfg);
             }
     
             return cfg;
         }
     
    +    /**
     +     * @return Cache atomicity mode.
    +     */
    +    protected CacheAtomicityMode atomicityMode() {
    +        return CacheAtomicityMode.TRANSACTIONAL;
    +    }
    +
         /**
          * Test client reconnect to alive grid.
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testClientReconnect() throws Exception {
             try {
                 startGrids(2);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryMarshallerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryMarshallerTest.java
    index 44dcc1cd63f4c..4da0074d42cf5 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryMarshallerTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryMarshallerTest.java
    @@ -40,10 +40,14 @@
     import org.apache.ignite.custom.DummyEventFilterFactory;
     import org.apache.ignite.lang.IgniteBiPredicate;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Checks that Optimized Marshaller is not used on any stage of Continuous Query handling.
      */
    +@RunWith(JUnit4.class)
     public class ContinuousQueryMarshallerTest extends GridCommonAbstractTest {
         /** */
         public static final String CACHE_NAME = "test-cache";
    @@ -65,6 +69,7 @@ public class ContinuousQueryMarshallerTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteFilterFactoryClient() throws Exception {
             check("server", "client");
         }
    @@ -72,6 +77,7 @@ public void testRemoteFilterFactoryClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteFilterFactoryServer() throws Exception {
             check("server1", "server2");
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryPeerClassLoadingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryPeerClassLoadingTest.java
    index 73d8d0d99fe76..72fa404d3ea85 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryPeerClassLoadingTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryPeerClassLoadingTest.java
    @@ -29,10 +29,14 @@
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.custom.DummyEventFilterFactory;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Checks if filter factory correctly deployed on all nodes.
      */
    +@RunWith(JUnit4.class)
     public class ContinuousQueryPeerClassLoadingTest extends GridCommonAbstractTest {
         /** */
         public static final String CACHE_NAME = "test-cache";
    @@ -55,6 +59,7 @@ public class ContinuousQueryPeerClassLoadingTest extends GridCommonAbstractTest
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteFilterFactoryClient() throws Exception {
             check("server", "client1", "client2");
         }
    @@ -62,6 +67,7 @@ public void testRemoteFilterFactoryClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteFilterFactoryServer1() throws Exception {
             check("server1", "server2", "client");
         }
    @@ -69,6 +75,7 @@ public void testRemoteFilterFactoryServer1() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteFilterFactoryServer2() throws Exception {
             check("server1", "server2", "server3");
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryReassignmentTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryReassignmentTest.java
    new file mode 100644
    index 0000000000000..15d7125a30fac
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryReassignmentTest.java
    @@ -0,0 +1,161 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.ignite.internal.processors.cache.query.continuous;
    +
    +import java.util.concurrent.atomic.AtomicInteger;
    +import javax.cache.configuration.FactoryBuilder;
    +import javax.cache.event.CacheEntryEvent;
    +import org.apache.ignite.Ignite;
    +import org.apache.ignite.IgniteCache;
    +import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
    +import org.apache.ignite.cache.query.ContinuousQuery;
    +import org.apache.ignite.configuration.CacheConfiguration;
    +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
    +
    +import static org.apache.ignite.testframework.GridTestUtils.waitForCondition;
    +
    +/**
    + *
    + */
    +@RunWith(JUnit4.class)
    +public class ContinuousQueryReassignmentTest extends GridCommonAbstractTest {
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        stopAllGrids();
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override public boolean isDebug() {
    +        return true;
    +    }
    +
    +    /**
    +     *
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testContinuousQueryNotCalledOnReassignment() throws Exception {
    +        testContinuousQueryNotCalledOnReassignment(false);
    +    }
    +
    +    /**
     +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testLocalContinuousQueryNotCalledOnReassignment() throws Exception {
    +        testContinuousQueryNotCalledOnReassignment(true);
    +    }
    +
    +    /**
    +     * @param loc If {@code true}, then local continuous query will be tested.
    +     * @throws Exception If failed.
    +     */
    +    private void testContinuousQueryNotCalledOnReassignment(boolean loc) throws Exception {
    +        Ignite lsnrNode = startGrid(1);
    +        Ignite victim = startGrid(2);
    +
    +        awaitPartitionMapExchange();
    +
    +        CacheConfiguration cacheCfg = new CacheConfiguration<>("cache");
    +        cacheCfg.setBackups(1);
    +        IgniteCache cache = lsnrNode.getOrCreateCache(cacheCfg);
    +
    +        AtomicInteger updCntr = new AtomicInteger();
    +
    +        listenToUpdates(cache, loc, updCntr, null);
    +
    +        // Subscribe on all nodes to receive all updates.
    +        if (loc)
    +            listenToUpdates(victim.cache("cache"), true, updCntr, null);
    +
    +        int updates = 1000;
    +
    +        for (int i = 0; i < updates; i++)
    +            cache.put(i, Integer.toString(i));
    +
    +        assertTrue(
    +            "Failed to wait for continuous query updates. Exp: " + updates + "; actual: " + updCntr.get(),
    +            waitForCondition(() -> updCntr.get() == updates, 10000));
    +
    +        victim.close();
    +
    +        assertFalse("Continuous query is called on reassignment.",
    +            waitForCondition(() -> updCntr.get() > updates, 2000));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testContinuousQueryWithRemoteFilterNotCalledOnReassignment() throws Exception {
    +        Ignite lsnrNode = startGrid(1);
    +        Ignite victim = startGrid(2);
    +
    +        awaitPartitionMapExchange();
    +
    +        CacheConfiguration cacheCfg = new CacheConfiguration<>("cache");
    +        cacheCfg.setBackups(1);
    +        IgniteCache cache = lsnrNode.getOrCreateCache(cacheCfg);
    +
    +        AtomicInteger updCntr = new AtomicInteger();
    +
    +        CacheEntryEventSerializableFilter filter = (e) -> e.getKey() % 2 == 0;
    +
    +        listenToUpdates(cache, false, updCntr, filter);
    +
    +        int updates = 1000;
    +
    +        for (int i = 0; i < updates; i++)
    +            cache.put(i, Integer.toString(i));
    +
    +        assertTrue(
     +            "Failed to wait for continuous query updates. Exp: " + updates / 2 + "; actual: " + updCntr.get(),
    +            waitForCondition(() -> updCntr.get() == updates / 2, 10000));
    +
    +        victim.close();
    +
    +        assertFalse("Continuous query is called on reassignment.",
    +            waitForCondition(() -> updCntr.get() > updates / 2, 2000));
    +    }
    +
    +    /**
    +     * Register a continuous query, that counts updates on the provided cache.
    +     *
    +     * @param cache Cache.
    +     * @param loc If {@code true}, then local continuous query will be registered.
    +     * @param updCntr Update counter.
    +     * @param rmtFilter Remote filter.
    +     */
    +    private void listenToUpdates(IgniteCache cache, boolean loc, AtomicInteger updCntr,
    +        CacheEntryEventSerializableFilter rmtFilter) {
    +
    +        ContinuousQuery cq = new ContinuousQuery<>();
    +        cq.setLocal(loc);
    +        cq.setLocalListener((evts) -> {
    +            for (CacheEntryEvent e : evts)
    +                updCntr.incrementAndGet();
    +        });
    +        if (rmtFilter != null)
    +            cq.setRemoteFilterFactory(FactoryBuilder.factoryOf(rmtFilter));
    +
    +        cache.query(cq);
    +    }
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryRemoteFilterMissingInClassPathSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryRemoteFilterMissingInClassPathSelfTest.java
    index 226302ff9d616..74f924853f521 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryRemoteFilterMissingInClassPathSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/ContinuousQueryRemoteFilterMissingInClassPathSelfTest.java
    @@ -36,10 +36,14 @@
     import java.net.MalformedURLException;
     import java.net.URL;
     import java.net.URLClassLoader;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class ContinuousQueryRemoteFilterMissingInClassPathSelfTest extends GridCommonAbstractTest {
         /** URL of classes. */
         private static final URL[] URLS;
    @@ -97,6 +101,7 @@ public class ContinuousQueryRemoteFilterMissingInClassPathSelfTest extends GridC
         /**
          * @throws Exception If fail.
          */
    +    @Test
         public void testWarningMessageOnClientNode() throws Exception {
             ldr = new URLClassLoader(URLS, getClass().getClassLoader());
     
    @@ -121,6 +126,7 @@ public void testWarningMessageOnClientNode() throws Exception {
         /**
          * @throws Exception If fail.
          */
    +    @Test
         public void testNoWarningMessageOnClientNode() throws Exception {
             ldr = new URLClassLoader(URLS, getClass().getClassLoader());
     
    @@ -143,6 +149,7 @@ public void testNoWarningMessageOnClientNode() throws Exception {
         /**
          * @throws Exception If fail.
          */
    +    @Test
         public void testExceptionOnServerNode() throws Exception {
             ldr = new URLClassLoader(URLS, getClass().getClassLoader());
     
    @@ -168,6 +175,7 @@ public void testExceptionOnServerNode() throws Exception {
         /**
          * @throws Exception If fail.
          */
    +    @Test
         public void testNoExceptionOnServerNode() throws Exception {
             ldr = new URLClassLoader(URLS, getClass().getClassLoader());
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAbstractSelfTest.java
    index 0ace0ba10f856..7e665842cd06b 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAbstractSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAbstractSelfTest.java
    @@ -67,17 +67,18 @@
     import org.apache.ignite.lang.IgniteBiInClosure;
     import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static java.util.concurrent.TimeUnit.SECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.LOCAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
    @@ -90,10 +91,8 @@
     /**
      * Continuous queries tests.
      */
    +@RunWith(JUnit4.class)
     public abstract class GridCacheContinuousQueryAbstractSelfTest extends GridCommonAbstractTest implements Serializable {
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** Latch timeout. */
         protected static final long LATCH_TIMEOUT = 5000;
     
    @@ -112,25 +111,24 @@ public abstract class GridCacheContinuousQueryAbstractSelfTest extends GridCommo
     
                 cacheCfg.setCacheMode(cacheMode());
                 cacheCfg.setAtomicityMode(atomicityMode());
    -            cacheCfg.setNearConfiguration(nearConfiguration());
    +            cacheCfg.setLoadPreviousValue(true);
                 cacheCfg.setRebalanceMode(ASYNC);
                 cacheCfg.setWriteSynchronizationMode(FULL_SYNC);
    -            cacheCfg.setCacheStoreFactory(new StoreFactory());
    -            cacheCfg.setReadThrough(true);
    -            cacheCfg.setWriteThrough(true);
    -            cacheCfg.setLoadPreviousValue(true);
    +            cacheCfg.setNearConfiguration(nearConfiguration());
    +
    +            if (atomicityMode() != TRANSACTIONAL_SNAPSHOT) {
    +                cacheCfg.setCacheStoreFactory(new StoreFactory()); // TODO IGNITE-8582 enable for tx snapshot.
    +                cacheCfg.setReadThrough(true); // TODO IGNITE-8582 enable for tx snapshot.
    +                cacheCfg.setWriteThrough(true); // TODO IGNITE-8582 enable for tx snapshot.
    +            }
    +            else
    +                cacheCfg.setIndexedTypes(Integer.class, Integer.class);
     
                 cfg.setCacheConfiguration(cacheCfg);
             }
             else
                 cfg.setClientMode(true);
     
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(disco);
    -
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             return cfg;
    @@ -239,10 +237,27 @@ protected CacheAtomicityMode atomicityMode() {
          */
         protected abstract int gridCount();
     
    +    /**
    +     * @param cache Cache.
    +     * @param key Key.
    +     * @param val Value.
    +     */
    +    protected void cachePut(IgniteCache cache, Integer key, Integer val) {
    +        cache.put(key, val);
    +    }
    +
    +    /**
    +     * @param cache Cache.
    +     * @param key Key.
    +     */
    +    protected void cacheRemove(IgniteCache cache, Integer key) {
    +        cache.remove(key);
    +    }
    +
         /**
          * @throws Exception If failed.
          */
    -    @SuppressWarnings("ThrowableResultOfMethodCallIgnored")
    +    @Test
         public void testIllegalArguments() throws Exception {
             final ContinuousQuery q = new ContinuousQuery<>();
     
    @@ -285,6 +300,7 @@ public void testIllegalArguments() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAllEntries() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -314,13 +330,13 @@ public void testAllEntries() throws Exception {
             });
     
             try (QueryCursor> ignored = cache.query(qry)) {
    -            cache.put(1, 1);
    -            cache.put(2, 2);
    -            cache.put(3, 3);
    +            cachePut(cache, 1, 1);
    +            cachePut(cache, 2, 2);
    +            cachePut(cache, 3, 3);
     
    -            cache.remove(2);
    +            cacheRemove(cache, 2);
     
    -            cache.put(1, 10);
    +            cachePut(cache, 1, 10);
     
                 assert latch.await(LATCH_TIMEOUT, MILLISECONDS);
     
    @@ -351,6 +367,7 @@ public void testAllEntries() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFilterException() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -370,13 +387,14 @@ public void testFilterException() throws Exception {
     
             try (QueryCursor> ignored = cache.query(qry)) {
                 for (int i = 0; i < 100; i++)
    -                cache.put(i, i);
    +                cachePut(cache, i, i);
             }
         }
     
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTwoQueryListener() throws Exception {
             if (cacheMode() == LOCAL)
                 return;
    @@ -409,13 +427,13 @@ public void testTwoQueryListener() throws Exception {
                 for (int i = 0; i < gridCount(); i++) {
                     IgniteCache cache0 = grid(i).cache(DEFAULT_CACHE_NAME);
     
    -                cache0.put(1, 1);
    -                cache0.put(2, 2);
    -                cache0.put(3, 3);
    +                cachePut(cache0, 1, 1);
    +                cachePut(cache0, 2, 2);
    +                cachePut(cache0, 3, 3);
     
    -                cache0.remove(1);
    -                cache0.remove(2);
    -                cache0.remove(3);
    +                cacheRemove(cache0, 1);
    +                cacheRemove(cache0, 2);
    +                cacheRemove(cache0, 3);
     
                     final int iter = i + 1;
     
    @@ -432,6 +450,7 @@ public void testTwoQueryListener() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBackupCleanerTaskFinalize() throws Exception {
             final String CACHE_NAME = "LOCAL_CACHE";
     
    @@ -456,6 +475,7 @@ public void testBackupCleanerTaskFinalize() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRestartQuery() throws Exception {
             if (cacheMode() == LOCAL)
                 return;
    @@ -467,7 +487,7 @@ public void testRestartQuery() throws Exception {
             final int keyCnt = parts * 2;
     
             for (int i = 0; i < parts / 2; i++)
    -            cache.put(i, i);
    +            cachePut(cache, i, i);
     
             for (int i = 0; i < 10; i++) {
                 if (i % 2 == 0) {
    @@ -486,7 +506,7 @@ public void testRestartQuery() throws Exception {
                     QueryCursor> qryCur = cache.query(qry);
     
                     for (int key = 0; key < keyCnt; key++)
    -                    cache.put(key, key);
    +                    cachePut(cache, key, key);
     
                     try {
                         assert GridTestUtils.waitForCondition(new PA() {
    @@ -501,7 +521,7 @@ public void testRestartQuery() throws Exception {
                 }
                 else {
                     for (int key = 0; key < keyCnt; key++)
    -                    cache.put(key, key);
    +                    cachePut(cache, key, key);
                 }
             }
         }
    @@ -509,6 +529,7 @@ public void testRestartQuery() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEntriesByFilter() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -544,16 +565,16 @@ public void testEntriesByFilter() throws Exception {
             });
     
             try (QueryCursor> ignored = cache.query(qry)) {
    -            cache.put(1, 1);
    -            cache.put(2, 2);
    -            cache.put(3, 3);
    -            cache.put(4, 4);
    +            cachePut(cache, 1, 1);
    +            cachePut(cache, 2, 2);
    +            cachePut(cache, 3, 3);
    +            cachePut(cache, 4, 4);
     
    -            cache.remove(2);
    -            cache.remove(3);
    +            cacheRemove(cache, 2);
    +            cacheRemove(cache, 3);
     
    -            cache.put(1, 10);
    -            cache.put(4, 40);
    +            cachePut(cache, 1, 10);
    +            cachePut(cache, 4, 40);
     
                 assert latch.await(LATCH_TIMEOUT, MILLISECONDS);
     
    @@ -578,6 +599,7 @@ public void testEntriesByFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalNodeOnly() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -631,8 +653,8 @@ public void testLocalNodeOnly() throws Exception {
                         break;
                 }
     
    -            cache.put(locKey, 1);
    -            cache.put(rmtKey, 2);
    +            cachePut(cache, locKey, 1);
    +            cachePut(cache, rmtKey, 2);
     
                 assert latch.await(LATCH_TIMEOUT, MILLISECONDS);
     
    @@ -649,6 +671,7 @@ public void testLocalNodeOnly() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBuffering() throws Exception {
             if (grid(0).cache(DEFAULT_CACHE_NAME).getConfiguration(CacheConfiguration.class).getCacheMode() != PARTITIONED)
                 return;
    @@ -706,12 +729,12 @@ public void testBuffering() throws Exception {
                 Iterator it = keys.iterator();
     
                 for (int i = 0; i < 4; i++)
    -                cache.put(it.next(), 0);
    +                cachePut(cache, it.next(), 0);
     
                 assert !latch.await(2, SECONDS);
     
                 for (int i = 0; i < 2; i++)
    -                cache.put(it.next(), 0);
    +                cachePut(cache, it.next(), 0);
     
                 assert latch.await(LATCH_TIMEOUT, MILLISECONDS);
     
    @@ -734,6 +757,7 @@ public void testBuffering() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTimeInterval() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -790,7 +814,7 @@ public void testTimeInterval() throws Exception {
                 }
     
                 for (Integer k : keys)
    -                cache.put(k, 0);
    +                cachePut(cache, k, 0);
     
                 assert !latch.await(2, SECONDS);
                 assert latch.await(1000 + LATCH_TIMEOUT, MILLISECONDS);
    @@ -814,6 +838,7 @@ public void testTimeInterval() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInitialQuery() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -832,7 +857,7 @@ public void testInitialQuery() throws Exception {
             });
     
             for (int i = 0; i < 10; i++)
    -            cache.put(i, i);
    +            cachePut(cache, i, i);
     
             try (QueryCursor> cur = cache.query(qry)) {
                 List> res = cur.getAll();
    @@ -859,6 +884,7 @@ public void testInitialQuery() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInitialQueryAndUpdates() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -884,7 +910,7 @@ public void testInitialQueryAndUpdates() throws Exception {
             });
     
             for (int i = 0; i < 10; i++)
    -            cache.put(i, i);
    +            cachePut(cache, i, i);
     
             try (QueryCursor> cur = cache.query(qry)) {
                 List> res = cur.getAll();
    @@ -906,8 +932,8 @@ public void testInitialQueryAndUpdates() throws Exception {
                     exp++;
                 }
     
    -            cache.put(10, 10);
    -            cache.put(11, 11);
    +            cachePut(cache, 10, 10);
    +            cachePut(cache, 11, 11);
     
                 assert latch.await(LATCH_TIMEOUT, MILLISECONDS) : latch.getCount();
     
    @@ -921,6 +947,7 @@ public void testInitialQueryAndUpdates() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLoadCache() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -954,6 +981,7 @@ public void testLoadCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInternalKey() throws Exception {
             if (atomicityMode() == ATOMIC)
                 return;
    @@ -978,8 +1006,8 @@ public void testInternalKey() throws Exception {
             try (QueryCursor> ignored = cache.query(qry)) {
                 cache.put(new GridCacheInternalKeyImpl("test", "test"), 1);
     
    -            cache.put(1, 1);
    -            cache.put(2, 2);
    +            cachePut(cache, 1, 1);
    +            cachePut(cache, 2, 2);
     
                 assert latch.await(LATCH_TIMEOUT, MILLISECONDS);
     
    @@ -994,6 +1022,7 @@ public void testInternalKey() throws Exception {
          * @throws Exception If failed.
          */
         @SuppressWarnings("TryFinallyCanBeTryWithResources")
    +    @Test
         public void testNodeJoinWithoutCache() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -1014,7 +1043,7 @@ public void testNodeJoinWithoutCache() throws Exception {
                     log.info("Started node without cache: " + ignite);
                 }
     
    -            cache.put(1, 1);
    +            cachePut(cache, 1, 1);
     
                 assertTrue(latch.await(5000, MILLISECONDS));
             }
    @@ -1026,6 +1055,7 @@ public void testNodeJoinWithoutCache() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEvents() throws Exception {
             final AtomicInteger cnt = new AtomicInteger();
             final CountDownLatch latch = new CountDownLatch(50);
    @@ -1102,7 +1132,7 @@ public void testEvents() throws Exception {
     
                 try (QueryCursor> ignored = cache.query(qry)) {
                     for (int i = 0; i < 100; i++)
    -                    cache.put(i, i);
    +                    cachePut(cache, i, i);
     
                     assert latch.await(LATCH_TIMEOUT, MILLISECONDS);
                     assert execLatch.await(LATCH_TIMEOUT, MILLISECONDS);
    @@ -1121,6 +1151,7 @@ public void testEvents() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testExpired() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME).
                 withExpiryPolicy(new CreatedExpiryPolicy(new Duration(MILLISECONDS, 1000)));
    @@ -1147,8 +1178,8 @@ public void testExpired() throws Exception {
             });
     
             try (QueryCursor> ignored = cache.query(qry)) {
    -            cache.put(1, 1);
    -            cache.put(2, 2);
    +            cachePut(cache, 1, 1);
    +            cachePut(cache, 2, 2);
     
                 // Wait for expiration.
                 Thread.sleep(2000);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAtomicSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAtomicSelfTest.java
    index 36754ec54dca8..bb397b418b559 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAtomicSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryAtomicSelfTest.java
    @@ -19,12 +19,16 @@
     
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.configuration.NearCacheConfiguration;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     
     /**
      * Continuous queries tests for atomic cache.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryAtomicSelfTest extends GridCacheContinuousQueryPartitionedSelfTest {
         /** {@inheritDoc} */
         @Override protected CacheAtomicityMode atomicityMode() {
    @@ -37,7 +41,8 @@ public class GridCacheContinuousQueryAtomicSelfTest extends GridCacheContinuousQ
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testInternalKey() throws Exception {
             // No-op.
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryConcurrentTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryConcurrentTest.java
    index 0241a69b8cf95..d56c52f0efb17 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryConcurrentTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryConcurrentTest.java
    @@ -40,6 +40,7 @@
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.IgniteInternalFuture;
    +import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
     import org.apache.ignite.internal.util.future.GridFutureAdapter;
     import org.apache.ignite.internal.util.future.IgniteFinishedFutureImpl;
     import org.apache.ignite.internal.util.future.IgniteFutureImpl;
    @@ -47,11 +48,11 @@
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.lang.IgniteFuture;
     import org.apache.ignite.lang.IgniteInClosure;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.Executors.newSingleThreadExecutor;
     import static java.util.concurrent.TimeUnit.MINUTES;
    @@ -61,10 +62,8 @@
      *
      */
     @SuppressWarnings("unchecked")
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryConcurrentTest extends GridCommonAbstractTest {
    -    /** */
    -    private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int NODES = 2;
     
    @@ -86,8 +85,6 @@ public class GridCacheContinuousQueryConcurrentTest extends GridCommonAbstractTe
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
    -
             cfg.setPeerClassLoadingEnabled(false);
     
             if (igniteInstanceName.endsWith(String.valueOf(NODES)))
    @@ -99,6 +96,7 @@ public class GridCacheContinuousQueryConcurrentTest extends GridCommonAbstractTe
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicatedTx() throws Exception {
             testRegistration(cacheConfiguration(CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL, 1));
         }
    @@ -106,6 +104,15 @@ public void testReplicatedTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testReplicatedMvccTx() throws Exception {
    +        testRegistration(cacheConfiguration(CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, 1));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testRestartReplicated() throws Exception {
             testRestartRegistration(cacheConfiguration(CacheMode.REPLICATED, CacheAtomicityMode.ATOMIC, 2));
         }
    @@ -113,6 +120,7 @@ public void testRestartReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRestartPartition() throws Exception {
             testRestartRegistration(cacheConfiguration(CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC, 2));
         }
    @@ -120,6 +128,7 @@ public void testRestartPartition() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRestartPartitionTx() throws Exception {
             testRestartRegistration(cacheConfiguration(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL, 2));
         }
    @@ -127,6 +136,15 @@ public void testRestartPartitionTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testRestartPartitionMvccTx() throws Exception {
    +        testRestartRegistration(cacheConfiguration(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, 2));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testReplicatedAtomic() throws Exception {
             testRegistration(cacheConfiguration(CacheMode.REPLICATED, CacheAtomicityMode.ATOMIC, 2));
         }
    @@ -134,6 +152,7 @@ public void testReplicatedAtomic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionTx() throws Exception {
             testRegistration(cacheConfiguration(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL, 2));
         }
    @@ -141,6 +160,15 @@ public void testPartitionTx() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
    +    public void testPartitionMvccTx() throws Exception {
    +        testRegistration(cacheConfiguration(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT, 2));
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
         public void testPartitionAtomic() throws Exception {
             testRegistration(cacheConfiguration(CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC, 2));
         }
    @@ -342,17 +370,44 @@ private IgniteFuture waitForKey(Integer key, final IgniteCache>() {
    -            @Override public void apply(IgniteFuture f) {
    -                String val = f.get();
    +        if (!((IgniteCacheProxy)cache).context().mvccEnabled()) {
    +            cache.getAsync(key).listen(new IgniteInClosure>() {
    +                @Override public void apply(IgniteFuture f) {
    +                    String val = f.get();
     
    -                if (val != null) {
    -                    log.info("Completed by get: " + id);
    +                    if (val != null) {
    +                        log.info("Completed by async get: " + id);
     
    -                    (((GridFutureAdapter)((IgniteFutureImpl)promise).internalFuture())).onDone("by get");
    +                        (((GridFutureAdapter)((IgniteFutureImpl)promise).internalFuture())).onDone("by async get");
    +                    }
                     }
    -            }
    -        });
    +            });
    +        }
    +        else {
    +            // For MVCC caches we need to wait until the updated value becomes visible to subsequent readers.
    +            // When an MVCC transaction completes, its updates are not immediately visible to new transactions.
    +            // This is caused by the lag between the transaction completing on the node and the MVCC
    +            // coordinator removing the transaction from the active list.
    +            GridTestUtils.runAsync(new Runnable() {
    +                @Override public void run() {
    +                    String v;
    +
    +                    while (!Thread.currentThread().isInterrupted()) {
    +                        v = cache.get(key);
    +
    +                        if (v == null)
    +                            doSleep(100);
    +                        else {
    +                            log.info("Completed by async mvcc get: " + id);
    +
    +                            (((GridFutureAdapter)((IgniteFutureImpl)promise).internalFuture())).onDone("by get");
    +
    +                            break;
    +                        }
    +                    }
    +                }
    +            });
    +        }
     
             return promise;
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryMultiNodesFilteringTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryMultiNodesFilteringTest.java
    index b316042decb7f..27734c03ecff2 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryMultiNodesFilteringTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryMultiNodesFilteringTest.java
    @@ -40,6 +40,7 @@
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.IgniteException;
    +import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.query.ContinuousQuery;
     import org.apache.ignite.cache.query.QueryCursor;
     import org.apache.ignite.cluster.ClusterNode;
    @@ -49,11 +50,11 @@
     import org.apache.ignite.internal.util.typedef.PA;
     import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.resources.IgniteInstanceResource;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -62,10 +63,8 @@
     
     /** */
     @SuppressWarnings("unchecked")
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryMultiNodesFilteringTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int SERVER_GRIDS_COUNT = 6;
     
    @@ -88,6 +87,7 @@ public class GridCacheContinuousQueryMultiNodesFilteringTest extends GridCommonA
         }
     
         /** */
    +    @Test
         public void testFiltersAndListeners() throws Exception {
             for (int i = 1; i <= SERVER_GRIDS_COUNT; i++)
                 startGrid(i, false);
    @@ -145,6 +145,7 @@ public void testFiltersAndListeners() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testWithNodeFilter() throws Exception {
             List qryCursors = new ArrayList<>();
     
    @@ -250,8 +251,6 @@ private Ignite startGrid(final int idx, boolean isClientMode) throws Exception {
     
             IgniteConfiguration cfg = optimize(getConfiguration(igniteInstanceName)).setClientMode(isClientMode);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             cfg.setUserAttributes(Collections.singletonMap("idx", idx));
     
             Ignite node = startGrid(igniteInstanceName, cfg);
    @@ -318,11 +317,18 @@ private CacheConfiguration cacheConfiguration(NodeFilterByRegexp filter) {
             return new CacheConfiguration("test-cache-cq")
                 .setBackups(1)
                 .setNodeFilter(filter)
    -            .setAtomicityMode(ATOMIC)
    +            .setAtomicityMode(atomicityMode())
                 .setWriteSynchronizationMode(FULL_SYNC)
                 .setCacheMode(PARTITIONED);
         }
     
    +    /**
    +     * @return Atomicity mode.
    +     */
    +    protected CacheAtomicityMode atomicityMode() {
    +        return ATOMIC;
    +    }
    +
         /** */
         private final static class ListenerConfiguration extends MutableCacheEntryListenerConfiguration {
             /** Operation. */
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryNodesFilteringTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryNodesFilteringTest.java
    index e444a72f4341c..2b3e12f20f003 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryNodesFilteringTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryNodesFilteringTest.java
    @@ -31,18 +31,16 @@
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.lang.IgnitePredicate;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridStringLogger;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /** */
     @SuppressWarnings("unused")
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryNodesFilteringTest extends GridCommonAbstractTest implements Serializable {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final String ENTRY_FILTER_CLS_NAME = "org.apache.ignite.tests.p2p.CacheDeploymentEntryEventFilter";
     
    @@ -51,7 +49,7 @@ public class GridCacheContinuousQueryNodesFilteringTest extends GridCommonAbstra
          *
          * @throws Exception if failed.
          */
    -    @SuppressWarnings("EmptyTryBlock")
    +    @Test
         public void testNodeWithoutAttributeExclusion() throws Exception {
             try (Ignite node1 = startNodeWithCache()) {
                 try (Ignite node2 = startGrid("node2", getConfiguration("node2", false, null))) {
    @@ -65,6 +63,7 @@ public void testNodeWithoutAttributeExclusion() throws Exception {
          *
          * @throws Exception if failed.
          */
    +    @Test
         public void testNodeWithAttributeFailure() throws Exception {
             try (Ignite node1 = startNodeWithCache()) {
                 GridStringLogger log = new GridStringLogger();
    @@ -129,8 +128,6 @@ private Ignite startNodeWithCache() throws Exception {
         private IgniteConfiguration getConfiguration(String name, boolean setAttr, GridStringLogger log) throws Exception {
             IgniteConfiguration cfg = optimize(getConfiguration(name));
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             if (setAttr)
                 cfg.setUserAttributes(Collections.singletonMap("node-type", "data"));
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryPartitionedOnlySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryPartitionedOnlySelfTest.java
    index 8cf26acb669e6..fe282a845aaeb 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryPartitionedOnlySelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryPartitionedOnlySelfTest.java
    @@ -19,12 +19,16 @@
     
     import org.apache.ignite.cache.CacheMode;
     import org.apache.ignite.configuration.NearCacheConfiguration;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     
     /**
      * Continuous queries tests for partitioned cache.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryPartitionedOnlySelfTest extends GridCacheContinuousQueryAbstractSelfTest {
         /** {@inheritDoc} */
         @Override protected NearCacheConfiguration nearConfiguration() {
    @@ -42,7 +46,8 @@ public class GridCacheContinuousQueryPartitionedOnlySelfTest extends GridCacheCo
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testInternalKey() throws Exception {
             // Disabled since data structures are not allowed in partitioned only mode.
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedAtomicSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedAtomicSelfTest.java
    index 4432eae0f3a26..27fb0fdd44dc6 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedAtomicSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedAtomicSelfTest.java
    @@ -18,12 +18,16 @@
     package org.apache.ignite.internal.processors.cache.query.continuous;
     
     import org.apache.ignite.cache.CacheAtomicityMode;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     
     /**
      * Continuous queries tests for replicated atomic cache.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryReplicatedAtomicSelfTest extends GridCacheContinuousQueryReplicatedSelfTest {
         /** {@inheritDoc} */
         @Override protected CacheAtomicityMode atomicityMode() {
    @@ -31,7 +35,8 @@ public class GridCacheContinuousQueryReplicatedAtomicSelfTest extends GridCacheC
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testInternalKey() throws Exception {
             // No-op.
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedSelfTest.java
    index 863bc804c1124..b4043b0b2c3ed 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedSelfTest.java
    @@ -27,6 +27,9 @@
     import org.apache.ignite.cache.CacheMode;
     import org.apache.ignite.cache.query.ContinuousQuery;
     import org.apache.ignite.cache.query.QueryCursor;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
    @@ -34,6 +37,7 @@
     /**
      * Continuous queries tests for replicated cache.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryReplicatedSelfTest extends GridCacheContinuousQueryAbstractSelfTest {
         /** {@inheritDoc} */
         @Override protected CacheMode cacheMode() {
    @@ -48,6 +52,7 @@ public class GridCacheContinuousQueryReplicatedSelfTest extends GridCacheContinu
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteNodeCallback() throws Exception {
             IgniteCache cache1 = grid(0).cache(DEFAULT_CACHE_NAME);
             IgniteCache cache2 = grid(1).cache(DEFAULT_CACHE_NAME);
    @@ -87,6 +92,7 @@ public void testRemoteNodeCallback() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCrossCallback() throws Exception {
             // Prepare.
             IgniteCache cache1 = grid(0).cache(DEFAULT_CACHE_NAME);
    @@ -132,4 +138,4 @@ public void testCrossCallback() throws Exception {
                 assert latch2.await(LATCH_TIMEOUT, MILLISECONDS);
             }
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedTxOneNodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedTxOneNodeTest.java
    index 6474df5544d83..ab6a76ce1fab1 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedTxOneNodeTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryReplicatedTxOneNodeTest.java
    @@ -17,6 +17,7 @@
     
     package org.apache.ignite.internal.processors.cache.query.continuous;
     
    +import java.util.Iterator;
     import java.util.concurrent.CountDownLatch;
     import java.util.concurrent.TimeUnit;
     import java.util.concurrent.atomic.AtomicInteger;
    @@ -31,19 +32,16 @@
     import org.apache.ignite.cache.query.ContinuousQuery;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Test for replicated cache with one node.
      */
    -@SuppressWarnings("Duplicates")
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryReplicatedTxOneNodeTest extends GridCommonAbstractTest {
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
    @@ -55,13 +53,11 @@ public class GridCacheContinuousQueryReplicatedTxOneNodeTest extends GridCommonA
             cacheCfg.setRebalanceMode(CacheRebalanceMode.SYNC);
             cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
     
    -        cfg.setCacheConfiguration(cacheCfg);
    -
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    +        // TODO IGNITE-9530 Remove this clause.
    +        if (atomicMode() == CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT)
    +            cacheCfg.setNearConfiguration(null);
     
    -        disco.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(disco);
    +        cfg.setCacheConfiguration(cacheCfg);
     
             return cfg;
         }
    @@ -83,6 +79,7 @@ protected CacheMode cacheMode() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocal() throws Exception {
             if (cacheMode() == CacheMode.REPLICATED)
                 doTest(true);
    @@ -91,6 +88,7 @@ public void testLocal() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDistributed() throws Exception {
             doTest(false);
         }
    @@ -98,6 +96,7 @@ public void testDistributed() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalOneNode() throws Exception {
             doTestOneNode(true);
         }
    @@ -105,6 +104,7 @@ public void testLocalOneNode() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDistributedOneNode() throws Exception {
             doTestOneNode(false);
         }
    @@ -164,7 +164,14 @@ private void doTestOneNode(boolean loc) throws Exception {
                 for (int i = 0; i < 10; i++)
                     cache.put("key" + i, i);
     
    -            cache.clear();
    +            if (atomicMode() != CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT)
    +                cache.clear();
    +            else { // TODO IGNITE-7952. Remove "else" clause - do cache.clear() instead of iteration.
    +                for (Iterator it = cache.iterator(); it.hasNext();) {
    +                    it.next();
    +                    it.remove();
    +                }
    +            }
     
             qry.setLocalListener(new CacheEntryUpdatedListener<String, Integer>() {
                 @Override public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends Integer>> evts)
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryTxSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryTxSelfTest.java
    index 91b6b9ccdf309..84b006c82ddf2 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryTxSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/GridCacheContinuousQueryTxSelfTest.java
    @@ -20,12 +20,16 @@
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.CacheMode;
     import org.apache.ignite.configuration.NearCacheConfiguration;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
     
     /**
      * Continuous queries tests for atomic cache.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheContinuousQueryTxSelfTest extends GridCacheContinuousQueryPartitionedSelfTest {
         /** {@inheritDoc} */
         @Override protected CacheAtomicityMode atomicityMode() {
    @@ -43,7 +47,8 @@ public class GridCacheContinuousQueryTxSelfTest extends GridCacheContinuousQuery
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testInternalKey() throws Exception {
             // No-op.
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryBackupQueueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryBackupQueueTest.java
    index 5baa3a7f0c1f7..dc9dd576b94b1 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryBackupQueueTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryBackupQueueTest.java
    @@ -30,6 +30,7 @@
     import javax.cache.event.CacheEntryEventFilter;
     import javax.cache.event.CacheEntryUpdatedListener;
     import org.apache.ignite.Ignite;
    +import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.query.ContinuousQuery;
     import org.apache.ignite.cache.query.QueryCursor;
     import org.apache.ignite.configuration.CacheConfiguration;
    @@ -38,11 +39,11 @@
     import org.apache.ignite.internal.IgniteKernal;
     import org.apache.ignite.internal.processors.continuous.GridContinuousHandler;
     import org.apache.ignite.internal.processors.continuous.GridContinuousProcessor;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -51,10 +52,8 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteCacheContinuousQueryBackupQueueTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** Keys count. */
         private static final int KEYS_COUNT = 1024;
     
    @@ -88,8 +87,6 @@ public class IgniteCacheContinuousQueryBackupQueueTest extends GridCommonAbstrac
     
             cfg.setClientMode(client);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);
    -
             DataStorageConfiguration memCfg = new DataStorageConfiguration();
             memCfg.setPageSize(16 * 1024);
     
    @@ -98,6 +95,13 @@ public class IgniteCacheContinuousQueryBackupQueueTest extends GridCommonAbstrac
             return cfg;
         }
     
    +    /**
    +     * @return Atomicity mode.
    +     */
    +    protected CacheAtomicityMode atomicityMode() {
    +        return ATOMIC;
    +    }
    +
         /** {@inheritDoc} */
         @Override protected void beforeTest() throws Exception {
             super.beforeTest();
    @@ -122,6 +126,7 @@ public class IgniteCacheContinuousQueryBackupQueueTest extends GridCommonAbstrac
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBackupQueue() throws Exception {
             final CacheEventListener lsnr = new CacheEventListener();
     
    @@ -145,6 +150,7 @@ public void testBackupQueue() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testManyQueryBackupQueue() throws Exception {
         List<QueryCursor> qryCursors = new ArrayList<>();
     
    @@ -177,6 +183,7 @@ public void testManyQueryBackupQueue() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBackupQueueAutoUnsubscribeFalse() throws Exception {
             try {
                 client = true;
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryClientReconnectTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryClientReconnectTest.java
    index 906cc7d8d190c..f8613964de114 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryClientReconnectTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/IgniteCacheContinuousQueryClientReconnectTest.java
    @@ -30,6 +30,9 @@
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.IgniteClientReconnectAbstractTest;
     import org.apache.ignite.resources.LoggerResource;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.SECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
    @@ -39,6 +42,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteCacheContinuousQueryClientReconnectTest extends IgniteClientReconnectAbstractTest {
         /** {@inheritDoc} */
         @Override protected int serverCount() {
    @@ -75,6 +79,7 @@ protected CacheAtomicityMode atomicMode() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReconnectClient() throws Exception {
             Ignite client = grid(serverCount());
     
    @@ -118,6 +123,7 @@ public void testReconnectClient() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReconnectClientAndLeftRouter() throws Exception {
             if (!tcpDiscovery())
                 return;
    @@ -187,4 +193,4 @@ private static class CacheEventListener implements CacheEntryUpdatedListener
     
    @@ -177,6 +186,7 @@ public void testWriteThrough() throws Exception {
         }
     
         /** @throws Exception If test failed. */
    +    @Test
         public void testReadThrough() throws Exception {
         IgniteCache<Integer, String> cache = jcache();
     
    @@ -269,6 +279,7 @@ public void testReadThrough() throws Exception {
         }
     
         /** @throws Exception If failed. */
    +    @Test
         public void testMultithreaded() throws Exception {
         final ConcurrentMap<String, Set<Integer>> perThread = new ConcurrentHashMap<>();
     
    @@ -277,7 +288,6 @@ public void testMultithreaded() throws Exception {
         final IgniteCache<Integer, String> cache = jcache();
     
             IgniteInternalFuture fut = multithreadedAsync(new Runnable() {
    -            @SuppressWarnings({"NullableProblems"})
                 @Override public void run() {
                     // Initialize key set for this thread.
                     Set<Integer> set = new HashSet<>();
    @@ -354,4 +364,4 @@ private void checkLastMethod(@Nullable String mtd) {
             }
         }
     
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreLocalTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreLocalTest.java
    index 59dd4b41930bf..f35b674156ed0 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreLocalTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreLocalTest.java
    @@ -18,13 +18,21 @@
     package org.apache.ignite.internal.processors.cache.store;
     
     import org.apache.ignite.cache.CacheMode;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     
     /**
      * Tests {@link org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore} in grid configuration.
      */
     public class GridCacheWriteBehindStoreLocalTest extends GridCacheWriteBehindStoreAbstractTest {
    +    /** {@inheritDoc} */
    +    @Override public void setUp() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
    +
    +        super.setUp();
    +    }
    +
         /** {@inheritDoc} */
         @Override protected CacheMode cacheMode() {
             return CacheMode.LOCAL;
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreMultithreadedSelfTest.java
    index 4fce452a327f1..0820c5c8a46a7 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreMultithreadedSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreMultithreadedSelfTest.java
    @@ -23,16 +23,21 @@
     import java.util.Set;
     import org.apache.ignite.internal.IgniteInterruptedCheckedException;
     import org.apache.ignite.internal.util.typedef.internal.U;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Multithreaded tests for {@link org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore}.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheWriteBehindStoreMultithreadedSelfTest extends GridCacheWriteBehindStoreAbstractSelfTest {
         /**
          * This test performs complex set of operations on store with coalescing from multiple threads.
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testPutGetRemoveWithCoalescing() throws Exception {
             testPutGetRemove(true);
         }
    @@ -42,6 +47,7 @@ public void testPutGetRemoveWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testPutGetRemoveWithoutCoalescing() throws Exception {
             testPutGetRemove(false);
         }
    @@ -86,6 +92,7 @@ private void testPutGetRemove(boolean writeCoalescing) throws Exception {
          *
          * @throws Exception if failed.
          */
    +    @Test
         public void testStoreFailureWithCoalescing() throws Exception {
             testStoreFailure(true);
         }
    @@ -95,6 +102,7 @@ public void testStoreFailureWithCoalescing() throws Exception {
          *
          * @throws Exception if failed.
          */
    +    @Test
         public void testStoreFailureWithoutCoalescing() throws Exception {
             testStoreFailure(false);
         }
    @@ -162,6 +170,7 @@ private void testStoreFailure(boolean writeCoalescing) throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testFlushFromTheSameThreadWithCoalescing() throws Exception {
             testFlushFromTheSameThread(true);
         }
    @@ -172,6 +181,7 @@ public void testFlushFromTheSameThreadWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testFlushFromTheSameThreadWithoutCoalescing() throws Exception {
             testFlushFromTheSameThread(false);
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStorePartitionedMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStorePartitionedMultiNodeSelfTest.java
    index de61058f0a4b8..cdcc69d7413b7 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStorePartitionedMultiNodeSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStorePartitionedMultiNodeSelfTest.java
    @@ -32,11 +32,12 @@
     import org.apache.ignite.internal.IgniteInterruptedCheckedException;
     import org.apache.ignite.internal.processors.cache.GridCacheTestStore;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
     import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    @@ -45,13 +46,11 @@
     /**
      * Tests write-behind store with near and dht commit option.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheWriteBehindStorePartitionedMultiNodeSelfTest extends GridCommonAbstractTest {
         /** Grids to start. */
         private static final int GRID_CNT = 5;
     
    -    /** Ip finder. */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** Flush frequency. */
         public static final int WRITE_BEHIND_FLUSH_FREQ = 1000;
     
    @@ -66,12 +65,6 @@ public class GridCacheWriteBehindStorePartitionedMultiNodeSelfTest extends GridC
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
     
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(ipFinder);
    -
    -        c.setDiscoverySpi(disco);
    -
             CacheConfiguration cc = defaultCacheConfiguration();
     
             cc.setCacheMode(CacheMode.PARTITIONED);
    @@ -92,6 +85,11 @@ public class GridCacheWriteBehindStorePartitionedMultiNodeSelfTest extends GridC
             return c;
         }
     
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
    +    }
    +
         /** {@inheritDoc} */
         @Override protected void afterTestsStopped() throws Exception {
             stores = null;
    @@ -114,6 +112,7 @@ private void prepare() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testSingleWritesOnDhtNode() throws Exception {
             checkSingleWrites();
         }
    @@ -121,6 +120,7 @@ public void testSingleWritesOnDhtNode() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBatchWritesOnDhtNode() throws Exception {
             checkBatchWrites();
         }
    @@ -128,6 +128,7 @@ public void testBatchWritesOnDhtNode() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTxWritesOnDhtNode() throws Exception {
             checkTxWrites();
         }
    @@ -213,4 +214,4 @@ private void checkWrites() throws IgniteInterruptedCheckedException {
             for (int i = 0; i < 100; i++)
                 assertTrue(allKeys.contains(i));
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreSelfTest.java
    index 538f13540ca9f..9a03b166b8e89 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStoreSelfTest.java
    @@ -29,16 +29,21 @@
     import org.apache.ignite.internal.util.typedef.F;
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.jsr166.ConcurrentLinkedHashMap;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * This class provides basic tests for {@link org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore}.
      */
    +@RunWith(JUnit4.class)
     public class GridCacheWriteBehindStoreSelfTest extends GridCacheWriteBehindStoreAbstractSelfTest {
         /**
          * Tests correct store (with write coalescing) shutdown when underlying store fails.
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testShutdownWithFailureWithCoalescing() throws Exception {
             testShutdownWithFailure(true);
         }
    @@ -48,6 +53,7 @@ public void testShutdownWithFailureWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testShutdownWithFailureWithoutCoalescing() throws Exception {
             testShutdownWithFailure(false);
         }
    @@ -93,6 +99,7 @@ private void testShutdownWithFailure(final boolean writeCoalescing) throws Excep
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testSimpleStoreWithCoalescing() throws Exception {
             testSimpleStore(true);
         }
    @@ -102,6 +109,7 @@ public void testSimpleStoreWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testSimpleStoreWithoutCoalescing() throws Exception {
             testSimpleStore(false);
         }
    @@ -112,6 +120,7 @@ public void testSimpleStoreWithoutCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testSimpleStoreFlushFrequencyWithoutCoalescing() throws Exception {
             initStore(1, false);
     
    @@ -177,6 +186,7 @@ private void testSimpleStore(boolean writeCoalescing) throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testValuePropagationWithCoalescing() throws Exception {
             testValuePropagation(true);
         }
    @@ -187,6 +197,7 @@ public void testValuePropagationWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testValuePropagationWithoutCoalescing() throws Exception {
             testValuePropagation(false);
         }
    @@ -234,6 +245,7 @@ private void testValuePropagation(boolean writeCoalescing) throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousPutWithCoalescing() throws Exception {
             testContinuousPut(true);
         }
    @@ -243,6 +255,7 @@ public void testContinuousPutWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testContinuousPutWithoutCoalescing() throws Exception {
             testContinuousPut(false);
         }
    @@ -320,6 +333,7 @@ private void testContinuousPut(boolean writeCoalescing) throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testShutdownWithCoalescing() throws Exception {
             testShutdown(true);
         }
    @@ -330,6 +344,7 @@ public void testShutdownWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testShutdownWithoutCoalescing() throws Exception {
             testShutdown(false);
         }
    @@ -390,6 +405,7 @@ private void testShutdown(boolean writeCoalescing) throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testBatchApplyWithCoalescing() throws Exception {
             testBatchApply(true);
         }
    @@ -400,6 +416,7 @@ public void testBatchApplyWithCoalescing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testBatchApplyWithoutCoalescing() throws Exception {
             testBatchApply(false);
         }
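The hunks above are the mechanical part of the JUnit 3 to JUnit 4 migration: a JUnit 4 runner discovers test methods by the @Test annotation rather than the test* naming convention, so each test method gains the annotation and the class is bound to a runner via @RunWith(JUnit4.class). The discovery mechanism can be illustrated with a toy sketch that uses a locally defined annotation (a stand-in, not JUnit itself):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AnnotationDiscoveryDemo {
    /** Toy stand-in for org.junit.Test; must be retained at runtime to be visible via reflection. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {}

    /** Hypothetical suite: two annotated tests and one plain helper. */
    static class SampleSuite {
        @Test public void shutdownWithCoalescing() {}
        @Test public void shutdownWithoutCoalescing() {}
        public void helperNotATest() {}
    }

    /** Collects the names of methods carrying the annotation, the way an annotation-driven runner does. */
    static List<String> discover(Class<?> cls) {
        List<String> names = new ArrayList<>();

        for (Method m : cls.getDeclaredMethods())
            if (m.isAnnotationPresent(Test.class))
                names.add(m.getName());

        names.sort(String::compareTo);
        return names;
    }

    public static void main(String[] args) {
        // Only the two annotated methods are discovered; the helper is ignored.
        System.out.println(discover(SampleSuite.class));
    }
}
```

Because discovery is annotation-based, forgetting a single @Test (easy to do in a bulk migration like this one) silently drops that test from the suite, which is why every public test method in these hunks is touched.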
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgniteCacheWriteBehindNoUpdateSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgniteCacheWriteBehindNoUpdateSelfTest.java
    index ce26c2ce96b5d..49acd92cb04b4 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgniteCacheWriteBehindNoUpdateSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgniteCacheWriteBehindNoUpdateSelfTest.java
    @@ -39,10 +39,14 @@
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteCacheWriteBehindNoUpdateSelfTest extends GridCommonAbstractTest {
         /** */
         private static final String THROTTLES_CACHE_NAME = "test";
    @@ -83,6 +87,7 @@ public class IgniteCacheWriteBehindNoUpdateSelfTest extends GridCommonAbstractTe
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEntryProcessorNoUpdate() throws Exception {
             IgniteCache cache = ignite(0).cache(THROTTLES_CACHE_NAME);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreAbstractTest.java
    index a64104ea56ed6..25be9d26d0bab 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreAbstractTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreAbstractTest.java
    @@ -28,11 +28,28 @@
     import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractTest;
     import org.apache.ignite.internal.util.lang.GridAbsPredicate;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests that write behind store is updated if client does not have store.
      */
    +@RunWith(JUnit4.class)
     public abstract class IgnteCacheClientWriteBehindStoreAbstractTest extends IgniteCacheAbstractTest {
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTestsStarted() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
    +
    +        super.beforeTestsStarted();
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
    +    }
    +
         /** {@inheritDoc} */
         @Override protected int gridCount() {
             return 3;
    @@ -83,6 +100,7 @@ public abstract class IgnteCacheClientWriteBehindStoreAbstractTest extends Ignit
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testClientWithoutStore() throws Exception {
             Ignite client = grid(2);
     
    @@ -103,4 +121,4 @@ public void testClientWithoutStore() throws Exception {
     
             assertEquals(1000, storeMap.size());
         }
    -}
    \ No newline at end of file
    +}
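The new beforeTestsStarted/beforeTest overrides above guard the suite with MvccFeatureChecker.failIfNotSupported: when the cache-store feature is unavailable (forced MVCC mode), the suite aborts early with a clear reason instead of every test failing obscurely. The general shape of such a feature guard can be sketched as follows (hypothetical Feature enum and unsupported set, not Ignite's MvccFeatureChecker):

```java
import java.util.EnumSet;
import java.util.Set;

public class FeatureGuard {
    /** Hypothetical feature set; Ignite's checker has its own. */
    enum Feature { CACHE_STORE, NEAR_CACHE, LOCAL_CACHE }

    /** Features unavailable in the current mode; a real guard would derive this from configuration. */
    private static final Set<Feature> UNSUPPORTED = EnumSet.of(Feature.NEAR_CACHE);

    /** Returns whether the feature is usable in the current mode. */
    static boolean isSupported(Feature f) {
        return !UNSUPPORTED.contains(f);
    }

    /** Fails fast with an explicit message, so a whole suite is skipped for one clear reason. */
    static void failIfNotSupported(Feature f) {
        if (!isSupported(f))
            throw new AssertionError("Feature is not supported in this mode: " + f);
    }
}
```

Calling such a guard from both beforeTestsStarted and beforeTest, as the diff does, covers suites where the per-class hook is overridden as well as the per-test path.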
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreNonCoalescingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreNonCoalescingTest.java
    index 4ffa973ff90e6..a515007b339b7 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreNonCoalescingTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/store/IgnteCacheClientWriteBehindStoreNonCoalescingTest.java
    @@ -40,12 +40,16 @@
     import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
     import org.apache.ignite.lang.IgniteBiInClosure;
     import org.apache.ignite.lang.IgniteFuture;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     
     /**
      * This class provides non write coalescing tests for {@link org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore}.
      */
    +@RunWith(JUnit4.class)
     public class IgnteCacheClientWriteBehindStoreNonCoalescingTest extends IgniteCacheAbstractTest {
         /** {@inheritDoc} */
         @Override protected int gridCount() {
    @@ -83,6 +87,7 @@ public class IgnteCacheClientWriteBehindStoreNonCoalescingTest extends IgniteCac
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonCoalescingIncrementing() throws Exception {
             Ignite ignite = grid(0);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AbstractTransactionIntergrityTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AbstractTransactionIntergrityTest.java
    index fe27e6e119d24..cb05af6b25be4 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AbstractTransactionIntergrityTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AbstractTransactionIntergrityTest.java
    @@ -33,6 +33,7 @@
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.IgniteException;
     import org.apache.ignite.cache.CacheMode;
    +import org.apache.ignite.cache.affinity.Affinity;
     import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
     import org.apache.ignite.cache.query.annotations.QuerySqlField;
     import org.apache.ignite.cluster.ClusterNode;
    @@ -40,15 +41,12 @@
     import org.apache.ignite.configuration.DataRegionConfiguration;
     import org.apache.ignite.configuration.DataStorageConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    +import org.apache.ignite.configuration.WALMode;
     import org.apache.ignite.failure.FailureHandler;
     import org.apache.ignite.failure.StopNodeFailureHandler;
     import org.apache.ignite.internal.IgniteEx;
    -import org.apache.ignite.internal.TestRecordingCommunicationSpi;
     import org.apache.ignite.internal.util.typedef.G;
     import org.apache.ignite.lang.IgniteUuid;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.jetbrains.annotations.NotNull;
    @@ -66,14 +64,11 @@
      * This test can be extended to emulate failover scenarios during transactional operations on the grid.
      */
     public class AbstractTransactionIntergrityTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** Count of accounts in one thread. */
         private static final int DFLT_ACCOUNTS_CNT = 32;
     
         /** Count of threads and caches. */
    -    private static final int DFLT_TX_THREADS_CNT = 20;
    +    private static final int DFLT_TX_THREADS_CNT = Runtime.getRuntime().availableProcessors();
     
         /** Count of nodes to start. */
         private static final int DFLT_NODES_CNT = 3;
    @@ -126,16 +121,6 @@ protected boolean persistent() {
             return true;
         }
     
    -    /**
    -     * @return Flag enables cross-node transactions,
    -     *         when primary partitions participating in transaction spreaded across several cluster nodes.
    -     */
    -    protected boolean crossNodeTransactions() {
    -        // Commit error during cross node transactions breaks transaction integrity
    -        // TODO: https://issues.apache.org/jira/browse/IGNITE-9086
    -        return false;
    -    }
    -
         /** {@inheritDoc} */
         @Override protected FailureHandler getFailureHandler(String igniteInstanceName) {
             return new StopNodeFailureHandler();
    @@ -147,15 +132,14 @@ protected boolean crossNodeTransactions() {
     
             cfg.setConsistentId(name);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -        cfg.setCommunicationSpi(new TestRecordingCommunicationSpi());
    -        cfg.setLocalHost("127.0.0.1");
    -
             cfg.setDataStorageConfiguration(new DataStorageConfiguration()
                 .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
    -                .setMaxSize(256 * 1024 * 1024)
    -                .setPersistenceEnabled(persistent()))
    -        );
    +                    .setPersistenceEnabled(persistent())
    +                    .setMaxSize(50 * 1024 * 1024)
    +            )
    +            .setWalSegmentSize(16 * 1024 * 1024)
    +            .setPageSize(1024)
    +            .setWalMode(WALMode.LOG_ONLY));
     
             CacheConfiguration[] cacheConfigurations = new CacheConfiguration[txThreadsCount()];
     
    @@ -178,6 +162,8 @@ protected boolean crossNodeTransactions() {
     
             cfg.setCacheConfiguration(cacheConfigurations);
     
    +        cfg.setFailureDetectionTimeout(30_000);
    +
             return cfg;
         }
     
    @@ -219,8 +205,11 @@ protected boolean crossNodeTransactions() {
     
         /**
          * Test transfer amount.
    +     *
    +     * @param failoverScenario Scenario.
    +     * @param colocatedAccounts {@code True} to use colocated on same primary node accounts.
          */
    -    public void doTestTransferAmount(FailoverScenario failoverScenario) throws Exception {
    +    public void doTestTransferAmount(FailoverScenario failoverScenario, boolean colocatedAccounts) throws Exception {
             failoverScenario.beforeNodesStarted();
     
             //given: started some nodes with client.
    @@ -230,26 +219,26 @@ public void doTestTransferAmount(FailoverScenario failoverScenario) throws Excep
     
             igniteClient.cluster().active(true);
     
    -        int[] initAmount = new int[txThreadsCount()];
    +        int[] initAmounts = new int[txThreadsCount()];
             completedTxs = new ConcurrentLinkedHashMap[txThreadsCount()];
     
             //and: fill all accounts on all caches and calculate total amount for every cache.
             for (int cachePrefixIdx = 0; cachePrefixIdx < txThreadsCount(); cachePrefixIdx++) {
            IgniteCache<Integer, AccountState> cache = igniteClient.getOrCreateCache(cacheName(cachePrefixIdx));
     
    -            AtomicInteger coinsCounter = new AtomicInteger();
    +            AtomicInteger coinsCntr = new AtomicInteger();
     
                 try (Transaction tx = igniteClient.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                     for (int accountId = 0; accountId < accountsCount(); accountId++) {
    -                    Set<Integer> initialAmount = generateCoins(coinsCounter, 5);
    +                    Set<Integer> initAmount = generateCoins(coinsCntr, 5);
     
    -                    cache.put(accountId, new AccountState(accountId, tx.xid(), initialAmount));
    +                    cache.put(accountId, new AccountState(accountId, tx.xid(), initAmount));
                     }
     
                     tx.commit();
                 }
     
    -            initAmount[cachePrefixIdx] = coinsCounter.get();
    +            initAmounts[cachePrefixIdx] = coinsCntr.get();
                 completedTxs[cachePrefixIdx] = new ConcurrentLinkedHashMap();
             }
     
    @@ -259,7 +248,8 @@ public void doTestTransferAmount(FailoverScenario failoverScenario) throws Excep
         ArrayList<Thread> transferThreads = new ArrayList<>();
     
             for (int i = 0; i < txThreadsCount(); i++) {
    -            transferThreads.add(new TransferAmountTxThread(firstTransactionDone, igniteClient, cacheName(i), i));
    +            transferThreads.add(new TransferAmountTxThread(firstTransactionDone,
    +                igniteClient, cacheName(i), i, colocatedAccounts));
     
                 transferThreads.get(i).start();
             }
    @@ -268,13 +258,12 @@ public void doTestTransferAmount(FailoverScenario failoverScenario) throws Excep
     
             failoverScenario.afterFirstTransaction();
     
    -        for (Thread thread : transferThreads) {
    +        for (Thread thread : transferThreads)
                 thread.join();
    -        }
     
             failoverScenario.afterTransactionsFinished();
     
    -        consistencyCheck(initAmount);
    +        consistencyCheck(initAmounts);
         }
     
         /**
    @@ -385,11 +374,11 @@ public AccountState addCoins(IgniteUuid txId, Set coinsToAdd) {
     
             /**
              * @param txId Transaction id.
    -         * @param coinsToRemove Coins to remove from current account.
    +         * @param coinsToRmv Coins to remove from current account.
              * @return Account state with removed coins.
              */
    -        public AccountState removeCoins(IgniteUuid txId, Set<Integer> coinsToRemove) {
    -            return new AccountState(accId, txId, Sets.difference(coins, coinsToRemove).immutableCopy());
    +        public AccountState removeCoins(IgniteUuid txId, Set<Integer> coinsToRmv) {
    +            return new AccountState(accId, txId, Sets.difference(coins, coinsToRmv).immutableCopy());
             }
     
             /** {@inheritDoc} */
    @@ -418,11 +407,11 @@ public AccountState removeCoins(IgniteUuid txId, Set coinsToRemove) {
         /**
          * @param coinsNum Coins number.
          */
    -    private Set<Integer> generateCoins(AtomicInteger coinsCounter, int coinsNum) {
    +    private Set<Integer> generateCoins(AtomicInteger coinsCntr, int coinsNum) {
             Set<Integer> res = new HashSet<>();
     
             for (int i = 0; i < coinsNum; i++)
    -            res.add(coinsCounter.incrementAndGet());
    +            res.add(coinsCntr.incrementAndGet());
     
             return res;
         }
    @@ -479,23 +468,35 @@ public TxState(AccountState before1, AccountState before2, AccountState after1,
         private class TransferAmountTxThread extends Thread {
             /** */
             private CountDownLatch firstTransactionLatch;
    +
             /** */
             private IgniteEx ignite;
    +
             /** */
             private String cacheName;
    +
             /** */
    -        private int txIndex;
    +        private int workerIdx;
    +
             /** */
             private Random random = new Random();
     
    +        /** */
    +        private final boolean colocatedAccounts;
    +
             /**
              * @param ignite Ignite.
              */
    -        private TransferAmountTxThread(CountDownLatch firstTransactionLatch, final IgniteEx ignite, String cacheName, int txIndex) {
    +        private TransferAmountTxThread(CountDownLatch firstTransactionLatch,
    +            final IgniteEx ignite,
    +            String cacheName,
    +            int workerIdx,
    +            boolean colocatedAccounts) {
                 this.firstTransactionLatch = firstTransactionLatch;
                 this.ignite = ignite;
                 this.cacheName = cacheName;
    -            this.txIndex = txIndex;
    +            this.workerIdx = workerIdx;
    +            this.colocatedAccounts = colocatedAccounts;
             }
     
             /** {@inheritDoc} */
    @@ -514,7 +515,6 @@ private TransferAmountTxThread(CountDownLatch firstTransactionLatch, final Ignit
             /**
              * @throws IgniteException if fails
              */
    -        @SuppressWarnings("unchecked")
         private void updateInTransaction(IgniteCache<Integer, AccountState> cache) throws IgniteException {
                 int accIdFrom;
                 int accIdTo;
    @@ -526,11 +526,16 @@ private void updateInTransaction(IgniteCache cache) throw
                     if (accIdFrom == accIdTo)
                         continue;
     
    -                ClusterNode primaryForAccFrom = ignite.cachex(cacheName).affinity().mapKeyToNode(accIdFrom);
    -                ClusterNode primaryForAccTo = ignite.cachex(cacheName).affinity().mapKeyToNode(accIdTo);
    +                Affinity<Integer> affinity = ignite.affinity(cacheName);
    +
    +                ClusterNode primaryForAccFrom = affinity.mapKeyToNode(accIdFrom);
    +                assertNotNull(primaryForAccFrom);
    +
    +                ClusterNode primaryForAccTo = affinity.mapKeyToNode(accIdTo);
    +                assertNotNull(primaryForAccTo);
     
                     // Allows only transaction between accounts that primary on the same node if corresponding flag is enabled.
    -                if (!crossNodeTransactions() && !primaryForAccFrom.id().equals(primaryForAccTo.id()))
    +                if (colocatedAccounts && !primaryForAccFrom.id().equals(primaryForAccTo.id()))
                         continue;
     
                     break;
    @@ -541,7 +546,10 @@ private void updateInTransaction(IgniteCache cache) throw
     
                 try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                     acctFrom = cache.get(accIdFrom);
    +                assertNotNull(acctFrom);
    +
                     acctTo = cache.get(accIdTo);
    +                assertNotNull(acctTo);
     
                 Set<Integer> coinsToTransfer = acctFrom.coinsToTransfer(random);
     
    @@ -553,24 +561,9 @@ private void updateInTransaction(IgniteCache cache) throw
     
                     tx.commit();
     
    -                completedTxs[txIndex].put(tx.xid(), new TxState(acctFrom, acctTo, nextFrom, nextTo, coinsToTransfer));
    +                completedTxs[workerIdx].put(tx.xid(), new TxState(acctFrom, acctTo, nextFrom, nextTo, coinsToTransfer));
                 }
             }
    -
    -        /**
    -         * @param curr current
    -         * @return random value
    -         */
    -        private long getNextAccountId(long curr) {
    -            long randomVal;
    -
    -            do {
    -                randomVal = random.nextInt(accountsCount());
    -            }
    -            while (curr == randomVal);
    -
    -            return randomVal;
    -        }
         }
     
         /**
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AtomicOperationsInTxTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AtomicOperationsInTxTest.java
    index b58dbe00df7bc..e02ca25349719 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AtomicOperationsInTxTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/AtomicOperationsInTxTest.java
    @@ -32,6 +32,9 @@
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CachePeekMode.ALL;
    @@ -39,6 +42,7 @@
     /**
      * Checks how operations under atomic cache works inside a transaction.
      */
    +@RunWith(JUnit4.class)
     public class AtomicOperationsInTxTest extends GridCommonAbstractTest {
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
    @@ -71,6 +75,7 @@ public class AtomicOperationsInTxTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEnablingAtomicOperationDuringTransaction() throws Exception {
             GridTestUtils.assertThrows(log, (Callable)() -> {
                 try (Transaction tx = grid(0).transactions().txStart()) {
    @@ -85,6 +90,7 @@ public void testEnablingAtomicOperationDuringTransaction() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAllowedAtomicOperations() throws Exception {
             checkOperations(true);
         }
    @@ -92,6 +98,7 @@ public void testAllowedAtomicOperations() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNotAllowedAtomicOperations() throws Exception {
             checkOperations(false);
         }
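The tests above rely on GridTestUtils.assertThrows to verify that an atomic-cache operation inside a transaction fails. The essence of that helper is to invert exception handling: run a callable, treat the expected exception as success, and treat normal completion as failure. A minimal generic version (a sketch, not the Ignite implementation, which also logs and supports message matching):

```java
import java.util.concurrent.Callable;

public class AssertThrowsDemo {
    /** Runs the callable and returns the thrown exception; fails if nothing (or the wrong type) was thrown. */
    static Throwable assertThrows(Class<? extends Throwable> expCls, Callable<Object> c) {
        try {
            c.call();
        }
        catch (Throwable t) {
            if (!expCls.isInstance(t))
                throw new AssertionError("Unexpected exception type: " + t.getClass(), t);

            return t; // Expected failure: return it so the caller can inspect the message.
        }

        throw new AssertionError("Expected " + expCls.getSimpleName() + " was not thrown");
    }

    public static void main(String[] args) {
        Throwable t = assertThrows(IllegalStateException.class, () -> {
            throw new IllegalStateException("atomic op inside tx");
        });

        System.out.println(t.getMessage());
    }
}
```

Returning the caught exception, rather than just asserting, is what lets tests additionally check the error message or cause chain.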
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/DepthFirstSearchTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/DepthFirstSearchTest.java
    index 3d1064cddc695..6f19932695c4f 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/DepthFirstSearchTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/DepthFirstSearchTest.java
    @@ -26,17 +26,20 @@
     import java.util.Map;
     import java.util.Random;
     import java.util.Set;
    -import junit.framework.TestCase;
     import org.apache.ignite.internal.processors.cache.version.GridCacheVersion;
     import org.apache.ignite.internal.util.typedef.F;
     import org.apache.ignite.internal.util.typedef.internal.U;
    +import org.junit.Test;
     
     import static org.apache.ignite.internal.processors.cache.transactions.TxDeadlockDetection.findCycle;
    +import static org.junit.Assert.assertEquals;
    +import static org.junit.Assert.assertNull;
    +import static org.junit.Assert.fail;
     
     /**
      * DFS test for search cycle in wait-for-graph.
      */
    -public class DepthFirstSearchTest extends TestCase {
    +public class DepthFirstSearchTest {
         /** Tx 1. */
         private static final GridCacheVersion T1 = new GridCacheVersion(1, 0, 0);
     
    @@ -62,6 +65,7 @@ public class DepthFirstSearchTest extends TestCase {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoCycle() throws Exception {
         assertNull(findCycle(Collections.<GridCacheVersion, Set<GridCacheVersion>>emptyMap(), T1));
     
    @@ -115,6 +119,7 @@ public void testNoCycle() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFindCycle2() throws Exception {
         Map<GridCacheVersion, Set<GridCacheVersion>> wfg = new HashMap<GridCacheVersion, Set<GridCacheVersion>>() {{
                 put(T1, Collections.singleton(T2));
    @@ -180,6 +185,7 @@ public void testFindCycle2() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFindCycle3() throws Exception {
         Map<GridCacheVersion, Set<GridCacheVersion>> wfg = new HashMap<GridCacheVersion, Set<GridCacheVersion>>() {{
                 put(T1, Collections.singleton(T2));
    @@ -240,6 +246,7 @@ public void testFindCycle3() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFindCycle4() throws Exception {
         Map<GridCacheVersion, Set<GridCacheVersion>> wfg = new HashMap<GridCacheVersion, Set<GridCacheVersion>>() {{
                 put(T1, Collections.singleton(T2));
    @@ -255,6 +262,7 @@ public void testFindCycle4() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRandomNoExceptions() throws Exception {
             int maxNodesCnt = 100;
             int minNodesCnt = 10;
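DepthFirstSearchTest above exercises TxDeadlockDetection.findCycle, which walks the transaction wait-for-graph depth-first looking for a cycle through a given transaction. The idea can be sketched over plain integer ids (a simplified illustration; Ignite's implementation works on GridCacheVersion keys and is iterative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class WaitForGraphDemo {
    /**
     * Searches the wait-for-graph for a cycle containing {@code start}.
     * Returns the cycle as a node list ending back at {@code start}, or null if none exists.
     */
    static List<Integer> findCycle(Map<Integer, Set<Integer>> wfg, Integer start) {
        return dfs(wfg, start, start, new ArrayDeque<>(), new HashSet<>());
    }

    private static List<Integer> dfs(Map<Integer, Set<Integer>> wfg, Integer cur, Integer start,
        Deque<Integer> path, Set<Integer> visited) {
        path.addLast(cur);
        visited.add(cur);

        for (Integer next : wfg.getOrDefault(cur, Collections.emptySet())) {
            if (next.equals(start)) {
                // Closed the loop back to the starting transaction: emit the cycle.
                List<Integer> cycle = new ArrayList<>(path);
                cycle.add(start);
                return cycle;
            }

            // A node that already failed to reach start never will; skip it.
            if (!visited.contains(next)) {
                List<Integer> cycle = dfs(wfg, next, start, path, visited);

                if (cycle != null)
                    return cycle;
            }
        }

        path.removeLast();
        return null;
    }
}
```

For the graph {1→2, 2→3, 3→1} the search from node 1 returns the cycle [1, 2, 3, 1]; for an acyclic graph it returns null, matching the assertNull cases in the test above.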
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithPrimaryIndexCorruptionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithPrimaryIndexCorruptionTest.java
    index 3260607023a92..486ae2262f027 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithPrimaryIndexCorruptionTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithPrimaryIndexCorruptionTest.java
    @@ -17,27 +17,46 @@
     
     package org.apache.ignite.internal.processors.cache.transactions;
     
    +import java.util.Collection;
    +import java.util.Queue;
    +import java.util.concurrent.ConcurrentLinkedQueue;
     import java.util.function.BiFunction;
    +import java.util.function.Consumer;
     import java.util.function.Supplier;
     import org.apache.ignite.IgniteCheckedException;
     import org.apache.ignite.IgniteIllegalStateException;
     import org.apache.ignite.Ignition;
    -import org.apache.ignite.cluster.ClusterNode;
     import org.apache.ignite.internal.IgniteEx;
    -import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
    +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
    +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopology;
     import org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree;
     import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO;
     import org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler;
     import org.apache.ignite.internal.processors.cache.tree.SearchRow;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
    +
    +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING;
     
     /**
      * Test cases that check transaction data integrity after transaction commit failed.
      */
    +@RunWith(JUnit4.class)
     public class TransactionIntegrityWithPrimaryIndexCorruptionTest extends AbstractTransactionIntergrityTest {
         /** Corruption enabled flag. */
         private static volatile boolean corruptionEnabled;
     
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            fail("https://issues.apache.org/jira/browse/IGNITE-10470");
    +
    +        super.beforeTest();
    +    }
    +
         /** {@inheritDoc} */
         @Override protected void afterTest() throws Exception {
             corruptionEnabled = false;
    @@ -45,81 +64,108 @@ public class TransactionIntegrityWithPrimaryIndexCorruptionTest extends Abstract
             super.afterTest();
         }
     
    -    /** {@inheritDoc} */
    -    @Override protected long getTestTimeout() {
    -        return 60 * 1000L;
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitPrimaryColocatedThrowsError() throws Exception {
    +        doTestTransferAmount0(true, true, () -> new AssertionError("Test"));
         }
     
    -    /**
    -     * Throws a test {@link AssertionError} during tx commit from {@link BPlusTree} and checks after that data is consistent.
    -     */
    -    public void testPrimaryIndexCorruptionDuringCommitOnPrimaryNode1() throws Exception {
    -        doTestTransferAmount(new IndexCorruptionFailoverScenario(
    -            true,
    -            (hnd, tree) -> hnd instanceof BPlusTree.Search,
    -            failoverPredicate(true, () -> new AssertionError("Test")))
    -        );
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitPrimaryColocatedThrowsUnchecked() throws Exception {
    +        doTestTransferAmount0(true, true, () -> new RuntimeException("Test"));
         }
     
    -    /**
    -     * Throws a test {@link RuntimeException} during tx commit from {@link BPlusTree} and checks after that data is consistent.
    -     */
    -    public void testPrimaryIndexCorruptionDuringCommitOnPrimaryNode2() throws Exception {
    -        doTestTransferAmount(new IndexCorruptionFailoverScenario(
    -            true,
    -            (hnd, tree) -> hnd instanceof BPlusTree.Search,
    -            failoverPredicate(true, () -> new RuntimeException("Test")))
    -        );
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitPrimaryColocatedThrowsChecked() throws Exception {
    +        doTestTransferAmount0(true, true, () -> new IgniteCheckedException("Test"));
         }
     
    -    /**
    -     * Throws a test {@link AssertionError} during tx commit from {@link BPlusTree} and checks after that data is consistent.
    -     */
    -    public void testPrimaryIndexCorruptionDuringCommitOnBackupNode() throws Exception {
    -        doTestTransferAmount(new IndexCorruptionFailoverScenario(
    -            true,
    -            (hnd, tree) -> hnd instanceof BPlusTree.Search,
    -            failoverPredicate(false, () -> new AssertionError("Test")))
    -        );
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitPrimaryNonColocatedThrowsError() throws Exception {
    +        doTestTransferAmount0(false, true, () -> new AssertionError("Test"));
         }
     
    -    /**
    -     * Throws a test {@link IgniteCheckedException} during tx commit from {@link BPlusTree} and checks after that data is consistent.
    -     */
    -    public void testPrimaryIndexCorruptionDuringCommitOnPrimaryNode3() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-9082");
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitPrimaryNonColocatedThrowsUnchecked() throws Exception {
    +        doTestTransferAmount0(false, true, () -> new RuntimeException("Test"));
    +    }
     
    -        doTestTransferAmount(new IndexCorruptionFailoverScenario(
    -            false,
    -            (hnd, tree) -> hnd instanceof BPlusTree.Search,
    -            failoverPredicate(true, () -> new IgniteCheckedException("Test")))
    -        );
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitPrimaryNonColocatedThrowsChecked() throws Exception {
    +        doTestTransferAmount0(false, true, () -> new IgniteCheckedException("Test"));
    +    }
    +
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitBackupColocatedThrowsError() throws Exception {
    +        doTestTransferAmount0(true, false, () -> new AssertionError("Test"));
    +    }
    +
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitBackupColocatedThrowsUnchecked() throws Exception {
    +        doTestTransferAmount0(true, false, () -> new RuntimeException("Test"));
    +    }
    +
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitBackupColocatedThrowsChecked() throws Exception {
    +        doTestTransferAmount0(true, false, () -> new IgniteCheckedException("Test"));
    +    }
    +
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitBackupNonColocatedThrowsError() throws Exception {
    +        doTestTransferAmount0(false, false, () -> new AssertionError("Test"));
    +    }
    +
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitBackupNonColocatedThrowsUnchecked() throws Exception {
    +        doTestTransferAmount0(false, false, () -> new RuntimeException("Test"));
    +    }
    +
    +    /** */
    +    @Test
    +    public void testPrimaryIndexCorruptionDuringCommitBackupNonColocatedThrowsChecked() throws Exception {
    +        doTestTransferAmount0(false, false, () -> new IgniteCheckedException("Test"));
         }
     
         /**
           * Creates a failover predicate which generates an error during transaction commit.
          *
    -     * @param failOnPrimary If {@code true} index should be failed on transaction primary node.
     +     * @param failOnPrimary If {@code true}, the index fails on the transaction's primary node, otherwise on a backup.
          * @param errorSupplier Supplier to create various errors.
    +     * @param errorConsumer Consumer to track unexpected errors while committing.
          */
          private BiFunction<IgniteEx, SearchRow, Throwable> failoverPredicate(
             boolean failOnPrimary,
     -        Supplier<Throwable> errorSupplier
     +        Supplier<Throwable> errorSupplier,
     +        Consumer<Throwable> errorConsumer
         ) {
             return (ignite, row) -> {
    -            int cacheId = row.cacheId();
    -            int partId = row.key().partition();
    -
    -            final ClusterNode locNode = ignite.localNode();
    -            final AffinityTopologyVersion curTopVer = ignite.context().discovery().topologyVersionEx();
    -
    -            // Throw exception if current node is primary for given row.
    -            return ignite.cachesx(c -> c.context().cacheId() == cacheId)
    -                .stream()
    -                .filter(c -> c.context().affinity().primaryByPartition(locNode, partId, curTopVer) == failOnPrimary)
    -                .map(c -> errorSupplier.get())
    -                .findFirst()
    -                .orElse(null);
    +            try {
    +                int cacheId = row.cacheId();
    +                int partId = row.key().partition();
    +
    +                GridDhtPartitionTopology top = ignite.context().cache().cacheGroup(cacheId).topology();
    +
    +                GridDhtLocalPartition part = top.localPartition(partId);
    +
    +                assertTrue("Illegal partition state for mapped tx: " + part, part != null && part.state() == OWNING);
    +
    +                return part.primary(top.readyTopologyVersion()) == failOnPrimary ? errorSupplier.get() : null;
    +            }
    +            catch (Throwable e) {
    +                errorConsumer.accept(e);
    +
    +                throw e;
    +            }
             };
         }
     
    @@ -130,68 +176,65 @@ class IndexCorruptionFailoverScenario implements FailoverScenario {
             /** Failed node index. */
             static final int failedNodeIdx = 1;
     
    -        /** Is node stopping expected after failover. */
    -        private final boolean nodeStoppingExpected;
    -
    -        /** Predicate that will choose an instance of {@link BPlusTree} and page operation
    -         * to make further failover in this tree using {@link #failoverPredicate}. */
    -        private final BiFunction treeCorruptionPredicate;
    +        /**
    +         * Predicate that will choose an instance of {@link BPlusTree} and page operation to make further failover in
    +         * this tree using {@link #failoverPred}.
    +         */
     +        private final BiFunction<PageHandler, BPlusTree, Boolean> treeCorruptionPred;
     
             /** Function that may return error during row insertion into {@link BPlusTree}. */
    -        private final BiFunction failoverPredicate;
     +        private final BiFunction<IgniteEx, SearchRow, Throwable> failoverPred;
     
             /**
    -         * @param nodeStoppingExpected Node stopping expected.
    -         * @param treeCorruptionPredicate Tree corruption predicate.
    -         * @param failoverPredicate Failover predicate.
    +         * @param treeCorruptionPred Tree corruption predicate.
    +         * @param failoverPred Failover predicate.
              */
             IndexCorruptionFailoverScenario(
    -            boolean nodeStoppingExpected,
    -            BiFunction treeCorruptionPredicate,
    -            BiFunction failoverPredicate
     +            BiFunction<PageHandler, BPlusTree, Boolean> treeCorruptionPred,
     +            BiFunction<IgniteEx, SearchRow, Throwable> failoverPred
             ) {
    -            this.nodeStoppingExpected = nodeStoppingExpected;
    -            this.treeCorruptionPredicate = treeCorruptionPredicate;
    -            this.failoverPredicate = failoverPredicate;
    +            this.treeCorruptionPred = treeCorruptionPred;
    +            this.failoverPred = failoverPred;
             }
     
             /** {@inheritDoc} */
             @Override public void beforeNodesStarted() {
                 BPlusTree.pageHndWrapper = (tree, hnd) -> {
    -                final IgniteEx locIgnite = (IgniteEx) Ignition.localIgnite();
    +                final IgniteEx locIgnite = (IgniteEx)Ignition.localIgnite();
     
    -                if (!locIgnite.name().endsWith(String.valueOf(failedNodeIdx)))
    +                if (getTestIgniteInstanceIndex(locIgnite.name()) != failedNodeIdx)
                         return hnd;
     
    -                if (treeCorruptionPredicate.apply(hnd, tree)) {
    -                    log.info("Created corrupted tree handler for -> " + hnd + " " + tree);
    -
    -                    PageHandler delegate = (PageHandler) hnd;
    +                if (treeCorruptionPred.apply(hnd, tree)) {
     +                    PageHandler<BPlusTree.Get, BPlusTree.Result> delegate = (PageHandler<BPlusTree.Get, BPlusTree.Result>)hnd;
     
                          return new PageHandler<BPlusTree.Get, BPlusTree.Result>() {
    -                        @Override public BPlusTree.Result run(int cacheId, long pageId, long page, long pageAddr, PageIO io, Boolean walPlc, BPlusTree.Get arg, int lvl) throws IgniteCheckedException {
    -                            log.info("Invoked " + " " + cacheId + " " + arg.toString() + " for BTree (" + corruptionEnabled + ") -> " + arg.row() + " / " + arg.row().getClass());
    +                        @Override public BPlusTree.Result run(int cacheId, long pageId, long page, long pageAddr, PageIO io,
    +                            Boolean walPlc, BPlusTree.Get arg, int lvl) throws IgniteCheckedException {
     +                            log.info("Invoked [cacheId=" + cacheId + ", hnd=" + arg.toString() +
    +                                ", corruption=" + corruptionEnabled + ", row=" + arg.row() + ", rowCls=" + arg.row().getClass() + ']');
     
                                 if (corruptionEnabled && (arg.row() instanceof SearchRow)) {
    -                                SearchRow row = (SearchRow) arg.row();
    +                                SearchRow row = (SearchRow)arg.row();
     
                                     // Store cacheId to search row explicitly, as it can be zero if there is one cache in a group.
    -                                Throwable res = failoverPredicate.apply(locIgnite, new SearchRow(cacheId, row.key()));
    +                                Throwable res = failoverPred.apply(locIgnite, new SearchRow(cacheId, row.key()));
     
                                     if (res != null) {
                                         if (res instanceof Error)
    -                                        throw (Error) res;
    +                                        throw (Error)res;
                                         else if (res instanceof RuntimeException)
    -                                        throw (RuntimeException) res;
    +                                        throw (RuntimeException)res;
                                         else if (res instanceof IgniteCheckedException)
    -                                        throw (IgniteCheckedException) res;
    +                                        throw (IgniteCheckedException)res;
                                     }
                                 }
     
                                 return delegate.run(cacheId, pageId, page, pageAddr, io, walPlc, arg, lvl);
                             }
     
    -                        @Override public boolean releaseAfterWrite(int cacheId, long pageId, long page, long pageAddr, BPlusTree.Get g, int lvl) {
    +                        @Override public boolean releaseAfterWrite(int cacheId, long pageId, long page, long pageAddr,
    +                            BPlusTree.Get g, int lvl) {
                                 return g.canRelease(pageId, lvl);
                             }
                         };
    @@ -212,27 +255,68 @@ else if (res instanceof IgniteCheckedException)
                 // Disable index corruption.
                 BPlusTree.pageHndWrapper = (tree, hnd) -> hnd;
     
    -            if (nodeStoppingExpected) {
    -                // Wait until node with corrupted index will left cluster.
    -                GridTestUtils.waitForCondition(() -> {
    -                    try {
    -                        grid(failedNodeIdx);
    -                    }
    -                    catch (IgniteIllegalStateException e) {
    -                        return true;
    -                    }
     +            // Wait until the node with the corrupted index leaves the cluster.
    +            GridTestUtils.waitForCondition(() -> {
    +                try {
    +                    grid(failedNodeIdx);
    +                }
    +                catch (IgniteIllegalStateException e) {
    +                    return true;
    +                }
     
    -                    return false;
    -                }, getTestTimeout());
    +                return false;
    +            }, getTestTimeout());
     
    -                // Failed node should be stopped.
    -                GridTestUtils.assertThrows(log, () -> grid(failedNodeIdx), IgniteIllegalStateException.class, "");
    +            // Failed node should be stopped.
    +            GridTestUtils.assertThrows(log, () -> grid(failedNodeIdx), IgniteIllegalStateException.class, null);
     
    -                // Re-start failed node.
    -                startGrid(failedNodeIdx);
    +            // Re-start failed node.
    +            startGrid(failedNodeIdx);
     
    -                awaitPartitionMapExchange();
    -            }
    +            awaitPartitionMapExchange();
    +        }
    +    }
    +
    +    /**
     +     * Tests transfer amount with extended error recording.
     +     *
     +     * @param colocatedAccount {@code True} if the test accounts are colocated.
     +     * @param failOnPrimary {@code True} to fail on the primary node, otherwise on a backup.
     +     * @param supplier Failure reason supplier.
     +     * @throws Exception If the failover predicate execution fails.
    +     */
     +    private void doTestTransferAmount0(boolean colocatedAccount, boolean failOnPrimary, Supplier<Throwable> supplier) throws Exception {
    +        ErrorTracker errTracker = new ErrorTracker();
    +
    +        doTestTransferAmount(
    +            new IndexCorruptionFailoverScenario(
    +                (hnd, tree) -> hnd instanceof BPlusTree.Search,
    +                failoverPredicate(failOnPrimary, supplier, errTracker)),
    +            colocatedAccount
    +        );
    +
    +        for (Throwable throwable : errTracker.errors())
    +            log.error("Recorded error", throwable);
    +
    +        if (!errTracker.errors().isEmpty())
     +            fail("Test run recorded errors.");
    +    }
    +
    +    /** */
     +    private static class ErrorTracker implements Consumer<Throwable> {
     +        /** Queue. */
     +        private final Queue<Throwable> q = new ConcurrentLinkedQueue<>();
    +
    +        /** {@inheritDoc} */
    +        @Override public void accept(Throwable throwable) {
    +            q.add(throwable);
    +        }
    +
    +        /**
    +         * @return Recorded errors.
    +         */
     +        public Collection<Throwable> errors() {
    +            return q;
             }
         }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithSystemWorkerDeathTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithSystemWorkerDeathTest.java
    index 25aae4b2a1b47..a4dd7c0fbd16f 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithSystemWorkerDeathTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TransactionIntegrityWithSystemWorkerDeathTest.java
    @@ -26,10 +26,14 @@
     import org.apache.ignite.internal.worker.WorkersControlMXBeanImpl;
     import org.apache.ignite.mxbean.WorkersControlMXBean;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TransactionIntegrityWithSystemWorkerDeathTest extends AbstractTransactionIntergrityTest {
         /** {@inheritDoc}. */
         @Override protected long getTestTimeout() {
    @@ -41,9 +45,8 @@ public class TransactionIntegrityWithSystemWorkerDeathTest extends AbstractTrans
             return false;
         }
     
    -    /**
    -     *
    -     */
    +    /** */
    +    @Test
         public void testFailoverWithDiscoWorkerTermination() throws Exception {
             doTestTransferAmount(new FailoverScenario() {
                 static final int failedNodeIdx = 1;
    @@ -83,7 +86,7 @@ public void testFailoverWithDiscoWorkerTermination() throws Exception {
     
                     awaitPartitionMapExchange();
                 }
    -        });
    +        }, true);
         }
     
         /**
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDataConsistencyOnCommitFailureTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDataConsistencyOnCommitFailureTest.java
    new file mode 100644
    index 0000000000000..4c88fad4a2dd3
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDataConsistencyOnCommitFailureTest.java
    @@ -0,0 +1,241 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.ignite.internal.processors.cache.transactions;
    +
    +import java.util.UUID;
    +import java.util.function.Supplier;
    +import org.apache.ignite.Ignite;
    +import org.apache.ignite.IgniteCheckedException;
    +import org.apache.ignite.IgniteTransactions;
    +import org.apache.ignite.cache.CacheAtomicityMode;
    +import org.apache.ignite.cache.CacheMode;
    +import org.apache.ignite.cache.CacheWriteSynchronizationMode;
    +import org.apache.ignite.configuration.CacheConfiguration;
    +import org.apache.ignite.configuration.IgniteConfiguration;
    +import org.apache.ignite.internal.IgniteEx;
    +import org.apache.ignite.internal.managers.communication.GridIoPolicy;
    +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
    +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal;
    +import org.apache.ignite.internal.util.typedef.G;
    +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.apache.ignite.testsuites.IgniteIgnore;
    +import org.apache.ignite.transactions.Transaction;
    +import org.apache.ignite.transactions.TransactionConcurrency;
    +import org.apache.ignite.transactions.TransactionIsolation;
    +import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
    +import org.mockito.Mockito;
    +import org.mockito.invocation.InvocationOnMock;
    +import org.mockito.stubbing.Answer;
    +
    +/**
     + * Tests data consistency when a transaction fails due to a heuristic exception on the originating node.
    + */
    +@RunWith(JUnit4.class)
    +public class TxDataConsistencyOnCommitFailureTest extends GridCommonAbstractTest {
    +    /** */
    +    public static final int KEY = 0;
    +
    +    /** */
    +    public static final String CLIENT = "client";
    +
    +    /** */
    +    private int nodesCnt;
    +
    +    /** */
    +    private int backups;
    +
    +    /** {@inheritDoc} */
    +    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    +        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
    +
    +        cfg.setClientMode(igniteInstanceName.startsWith(CLIENT));
    +
    +        cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME).
    +            setCacheMode(CacheMode.PARTITIONED).
    +            setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL).
    +            setBackups(backups).
    +            setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC));
    +
    +        return cfg;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        super.afterTest();
    +
    +        stopAllGrids();
    +    }
    +
    +    /** */
    +    @Test
    +    public void testCommitErrorOnNearNode2PC() throws Exception {
    +        nodesCnt = 3;
    +
    +        backups = 2;
    +
    +        doTestCommitError(() -> {
    +            try {
    +                return startGrid("client");
    +            }
    +            catch (Exception e) {
    +                throw new RuntimeException(e);
    +            }
    +        });
    +    }
    +
    +    /** */
    +    @Test
    +    public void testCommitErrorOnNearNode1PC() throws Exception {
    +        nodesCnt = 2;
    +
    +        backups = 1;
    +
    +        doTestCommitError(() -> {
    +            try {
    +                return startGrid("client");
    +            }
    +            catch (Exception e) {
    +                throw new RuntimeException(e);
    +            }
    +        });
    +    }
    +
    +    /** */
    +    @IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-9806", forceFailure = false)
    +    @Test
    +    public void testCommitErrorOnColocatedNode2PC() throws Exception {
    +        nodesCnt = 3;
    +
    +        backups = 2;
    +
    +        doTestCommitError(() -> primaryNode(KEY, DEFAULT_CACHE_NAME));
    +    }
    +
    +    /**
     +     * @param factory Ignite node factory.
     +     */
     +    private void doTestCommitError(Supplier<Ignite> factory) throws Exception {
    +        Ignite crd = startGridsMultiThreaded(nodesCnt);
    +
    +        crd.cache(DEFAULT_CACHE_NAME).put(KEY, KEY);
    +
    +        Ignite ignite = factory.get();
    +
    +        if (ignite == null)
    +            ignite = startGrid("client");
    +
    +        assertNotNull(ignite.cache(DEFAULT_CACHE_NAME));
    +
    +        injectMockedTxManager(ignite);
    +
    +        checkKey();
    +
    +        IgniteTransactions transactions = ignite.transactions();
    +
     +        try (Transaction tx = transactions.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, 0, 1)) {
    +            assertNotNull(transactions.tx());
    +
    +            ignite.cache(DEFAULT_CACHE_NAME).put(KEY, KEY + 1);
    +
    +            tx.commit();
    +
    +            fail();
    +        }
    +        catch (Exception t) {
    +            // No-op.
    +        }
    +
    +        checkKey();
    +
    +        checkFutures();
    +    }
    +
    +    /**
    +     * @param ignite Ignite.
    +     */
    +    private void injectMockedTxManager(Ignite ignite) {
    +        IgniteEx igniteEx = (IgniteEx)ignite;
    +
    +        GridCacheSharedContext ctx = igniteEx.context().cache().context();
    +
    +        IgniteTxManager tm = ctx.tm();
    +
    +        IgniteTxManager mockTm = Mockito.spy(tm);
    +
    +        MockGridNearTxLocal locTx = new MockGridNearTxLocal(ctx, false, false, false, GridIoPolicy.SYSTEM_POOL,
    +            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, 0, true, null, 1, null, 0, null);
    +
     +        Mockito.doAnswer(new Answer<GridNearTxLocal>() {
    +            @Override public GridNearTxLocal answer(InvocationOnMock invocation) throws Throwable {
    +                mockTm.onCreated(null, locTx);
    +
    +                return locTx;
    +            }
    +        }).when(mockTm).
    +            newTx(locTx.implicit(), locTx.implicitSingle(), null, locTx.concurrency(),
    +                locTx.isolation(), locTx.timeout(), locTx.storeEnabled(), null, locTx.size(), locTx.label());
    +
    +        ctx.setTxManager(mockTm);
    +    }
    +
    +    /** */
    +    private void checkKey() {
    +        for (Ignite ignite : G.allGrids()) {
    +            if (!ignite.configuration().isClientMode())
    +                assertNotNull(ignite.cache(DEFAULT_CACHE_NAME).localPeek(KEY));
    +        }
    +    }
    +
    +    /** */
    +    private static class MockGridNearTxLocal extends GridNearTxLocal {
    +        /** Empty constructor. */
    +        public MockGridNearTxLocal() {
    +        }
    +
    +        /**
    +         * @param ctx Context.
    +         * @param implicit Implicit.
    +         * @param implicitSingle Implicit single.
    +         * @param sys System.
    +         * @param plc Policy.
    +         * @param concurrency Concurrency.
    +         * @param isolation Isolation.
    +         * @param timeout Timeout.
    +         * @param storeEnabled Store enabled.
    +         * @param mvccOp Mvcc op.
    +         * @param txSize Tx size.
    +         * @param subjId Subj id.
    +         * @param taskNameHash Task name hash.
    +         * @param lb Label.
    +         */
    +        public MockGridNearTxLocal(GridCacheSharedContext ctx, boolean implicit, boolean implicitSingle, boolean sys,
    +            byte plc, TransactionConcurrency concurrency, TransactionIsolation isolation, long timeout,
    +            boolean storeEnabled, Boolean mvccOp, int txSize, @Nullable UUID subjId, int taskNameHash, @Nullable String lb) {
    +            super(ctx, implicit, implicitSingle, sys, plc, concurrency, isolation, timeout, storeEnabled, mvccOp,
    +                txSize, subjId, taskNameHash, lb);
    +        }
    +
    +        /** {@inheritDoc} */
    +        @Override public void userCommit() throws IgniteCheckedException {
    +            throw new IgniteCheckedException("Force failure");
    +        }
    +    }
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockCauseTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockCauseTest.java
    index 8f5ee30c44e67..714154a9794c1 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockCauseTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockCauseTest.java
    @@ -40,10 +40,14 @@
     import java.util.concurrent.TimeUnit;
     import java.util.concurrent.atomic.AtomicBoolean;
     import java.util.concurrent.atomic.AtomicReference;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxDeadlockCauseTest extends GridCommonAbstractTest {
         /** */
         private CacheConfiguration ccfg;
    @@ -84,6 +88,7 @@ public class TxDeadlockCauseTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCause() throws Exception {
             startGrids(1);
     
    @@ -96,6 +101,7 @@ public void testCause() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCauseSeveralNodes() throws Exception {
             startGrids(2);
     
    @@ -108,6 +114,7 @@ public void testCauseSeveralNodes() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCauseNear() throws Exception {
             ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME)
                     .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
    @@ -124,6 +131,7 @@ public void testCauseNear() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCauseSeveralNodesNear() throws Exception {
             ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME)
                     .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
    @@ -146,6 +154,7 @@ public void testCauseSeveralNodesNear() throws Exception {
          *              instead of {@link IgniteCache#get(java.lang.Object)} and {@link IgniteCache#put(java.lang.Object, java.lang.Object)} operations sequence.
          * @throws Exception If failed.
          */
    +    @Test
         public void testCauseObject(int nodes, final int keysCnt, final long timeout, final TransactionIsolation isolation, final boolean oneOp) throws Exception {
             final Ignite ignite = grid(new Random().nextInt(nodes));
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionMessageMarshallingTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionMessageMarshallingTest.java
    index 1a48cec75fb5c..df31a722bfd10 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionMessageMarshallingTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionMessageMarshallingTest.java
    @@ -33,10 +33,14 @@
     import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
     import org.apache.ignite.internal.processors.cache.KeyCacheObject;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxDeadlockDetectionMessageMarshallingTest extends GridCommonAbstractTest {
         /** Topic. */
         private static final String TOPIC = "mytopic";
    @@ -57,6 +61,7 @@ public class TxDeadlockDetectionMessageMarshallingTest extends GridCommonAbstrac
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMessageUnmarshallWithoutCacheContext() throws Exception {
             try {
                 Ignite ignite = startGrid(0);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionNoHangsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionNoHangsTest.java
    index 3f1639a5edf02..bffed6b57cc15 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionNoHangsTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionNoHangsTest.java
    @@ -35,6 +35,9 @@
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.IgniteSystemProperties.IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS;
     import static org.apache.ignite.IgniteSystemProperties.IGNITE_TX_DEADLOCK_DETECTION_TIMEOUT;
    @@ -46,6 +49,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxDeadlockDetectionNoHangsTest extends GridCommonAbstractTest {
         /** Nodes count. */
         private static final int NODES_CNT = 3;
    @@ -105,6 +109,7 @@ public class TxDeadlockDetectionNoHangsTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoHangsPessimistic() throws Exception {
             assertTrue(grid(0).context().cache().context().tm().deadlockDetectionEnabled());
     
    @@ -126,6 +131,7 @@ public void testNoHangsPessimistic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoHangsOptimistic() throws Exception {
             assertTrue(grid(0).context().cache().context().tm().deadlockDetectionEnabled());
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionTest.java
    index 6cf65c8fd4f33..f9bd4879fefd7 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionTest.java
    @@ -48,6 +48,9 @@
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionDeadlockException;
     import org.apache.ignite.transactions.TransactionTimeoutException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.internal.util.typedef.X.hasCause;
     import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    @@ -56,6 +59,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxDeadlockDetectionTest extends GridCommonAbstractTest {
         /** Nodes count. */
         private static final int NODES_CNT = 3;
    @@ -102,6 +106,7 @@ public class TxDeadlockDetectionTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoHangs() throws Exception {
             final AtomicBoolean stop = new AtomicBoolean();
     
    @@ -182,6 +187,7 @@ public void testNoHangs() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoDeadlockSimple() throws Exception {
             final AtomicInteger threadCnt = new AtomicInteger();
     
    @@ -236,6 +242,7 @@ public void testNoDeadlockSimple() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoDeadlock() throws Exception {
             for (int i = 2; i <= 10; i++) {
                 final int threads = i;
    @@ -310,6 +317,7 @@ public void testNoDeadlock() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFailedTxLocksRequest() throws Exception {
             doTestFailedMessage(TxLocksRequest.class);
         }
    @@ -317,6 +325,7 @@ public void testFailedTxLocksRequest() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFailedTxLocksResponse() throws Exception {
             doTestFailedMessage(TxLocksResponse.class);
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionUnmasrhalErrorsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionUnmasrhalErrorsTest.java
    index 0cadafefe31be..5b5dddb61a2b9 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionUnmasrhalErrorsTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxDeadlockDetectionUnmasrhalErrorsTest.java
    @@ -38,6 +38,9 @@
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionDeadlockException;
     import org.apache.ignite.transactions.TransactionTimeoutException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.internal.util.typedef.X.hasCause;
     import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    @@ -47,6 +50,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxDeadlockDetectionUnmasrhalErrorsTest extends GridCommonAbstractTest {
         /** Nodes count. */
         private static final int NODES_CNT = 2;
    @@ -86,6 +90,7 @@ public class TxDeadlockDetectionUnmasrhalErrorsTest extends GridCommonAbstractTe
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlockCacheObjectContext() throws Exception {
             IgniteCache cache0 = null;
             IgniteCache cache1 = null;
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxLabelTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxLabelTest.java
    index d89ba0b6554ec..516c08a0c203a 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxLabelTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxLabelTest.java
    @@ -18,16 +18,35 @@
     package org.apache.ignite.internal.processors.cache.transactions;
     
     import org.apache.ignite.Ignite;
    -import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest;
    +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests transaction labels.
      */
    -public class TxLabelTest extends GridCacheAbstractSelfTest {
    +@RunWith(JUnit4.class)
    +public class TxLabelTest extends GridCommonAbstractTest {
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTestsStarted() throws Exception {
    +        super.beforeTestsStarted();
    +
    +        startGrid(0).getOrCreateCache(defaultCacheConfiguration());
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void afterTestsStopped() throws Exception {
    +        stopAllGrids();
    +
    +        super.afterTestsStopped();
    +    }
    +
         /**
          * Tests transaction labels.
          */
    +    @Test
         public void testLabel() {
             testLabel0(grid(0), "lbl0");
             testLabel0(grid(0), "lbl1");
    @@ -55,9 +74,4 @@ private void testLabel0(Ignite ignite, String lbl) {
                 tx.commit();
             }
         }
    -
    -    /** {@inheritDoc} */
    -    @Override protected int gridCount() {
    -        return 1;
    -    }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxMultiCacheAsyncOpsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxMultiCacheAsyncOpsTest.java
    index 89f20ec52b4fd..cca0bee251c14 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxMultiCacheAsyncOpsTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxMultiCacheAsyncOpsTest.java
    @@ -26,6 +26,9 @@
     import org.apache.ignite.lang.IgniteFuture;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    @@ -33,6 +36,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxMultiCacheAsyncOpsTest extends GridCommonAbstractTest {
         /** Grid count. */
         public static final int GRID_COUNT = 3;
    @@ -77,6 +81,7 @@ private CacheConfiguration cacheConfiguration(int idx) {
         /**
          *
          */
    +    @Test
         public void testCommitAfterAsyncPut() {
             CacheConfiguration[] caches = cacheConfigurations();
     
    @@ -106,6 +111,7 @@ public void testCommitAfterAsyncPut() {
         /**
          *
          */
    +    @Test
         public void testCommitAfterAsyncGet() {
             CacheConfiguration[] caches = cacheConfigurations();
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOnCachesStartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOnCachesStartTest.java
    index 24044040cac8b..5e4d1ac4ac2b6 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOnCachesStartTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOnCachesStartTest.java
    @@ -24,38 +24,26 @@
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.configuration.CacheConfiguration;
    -import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.IgniteInternalFuture;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *  Tests transactions closes correctly while other caches start and stop.
      *  Tests possible {@link NullPointerException} in {@link TransactionProxyImpl#leave} due to race while
      *  {@link org.apache.ignite.internal.processors.cache.GridCacheTtlManager} initializes (IGNITE-7972).
      */
    +@RunWith(JUnit4.class)
     public class TxOnCachesStartTest extends GridCommonAbstractTest {
         /** */
         private static int NUM_CACHES = 100;
     
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
    -    /** {@inheritDoc} */
    -    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    -        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
    -
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
    -        return cfg;
    -    }
    -
         /** {@inheritDoc} */
         @Override protected void beforeTest() throws Exception {
             super.beforeTest();
    @@ -73,6 +61,7 @@ public class TxOnCachesStartTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTransactionCloseOnCachesStartAndStop() throws Exception {
             Ignite srv =  startGrids(5);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOnCachesStopTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOnCachesStopTest.java
    new file mode 100644
    index 0000000000000..02badb91f2d2e
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOnCachesStopTest.java
    @@ -0,0 +1,312 @@
    +/*
    + * Copyright 2019 GridGain Systems, Inc. and Contributors.
    + *
    + * Licensed under the GridGain Community Edition License (the "License");
    + * you may not use this file except in compliance with the License.
    + * You may obtain a copy of the License at
    + *
    + *     https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.ignite.internal.processors.cache.transactions;
    +
    +import java.util.concurrent.CountDownLatch;
    +import org.apache.ignite.Ignite;
    +import org.apache.ignite.IgniteCache;
    +import org.apache.ignite.IgniteException;
    +import org.apache.ignite.Ignition;
    +import org.apache.ignite.cache.CacheAtomicityMode;
    +import org.apache.ignite.cache.CacheWriteSynchronizationMode;
    +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    +import org.apache.ignite.configuration.CacheConfiguration;
    +import org.apache.ignite.configuration.DataRegionConfiguration;
    +import org.apache.ignite.configuration.DataStorageConfiguration;
    +import org.apache.ignite.configuration.IgniteConfiguration;
    +import org.apache.ignite.configuration.WALMode;
    +import org.apache.ignite.internal.IgniteEx;
    +import org.apache.ignite.internal.IgniteInternalFuture;
    +import org.apache.ignite.internal.TestRecordingCommunicationSpi;
    +import org.apache.ignite.internal.processors.cache.CacheInvalidStateException;
    +import org.apache.ignite.internal.processors.cache.CacheStoppedException;
    +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest;
    +import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException;
    +import org.apache.ignite.internal.util.GridRandom;
    +import org.apache.ignite.internal.util.typedef.X;
    +import org.apache.ignite.testframework.GridTestUtils;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
    +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.apache.ignite.transactions.Transaction;
    +import org.apache.ignite.transactions.TransactionConcurrency;
    +import org.apache.ignite.transactions.TransactionIsolation;
    +import org.apache.ignite.transactions.TransactionRollbackException;
    +import org.junit.Test;
    +
     +/**
     + * Tests that transactions spanning two caches complete correctly when one of the caches is concurrently stopped.
     + */
    +public class TxOnCachesStopTest extends GridCommonAbstractTest {
    +    /** Cache1 name. */
    +    private static final String CACHE_1_NAME = "cache1";
    +
    +    /** Cache2 name. */
    +    private static final String CACHE_2_NAME = "cache2";
    +
     +    /** Random number generator. */
    +    private static final GridRandom rnd = new GridRandom();
    +
    +    /** */
    +    private CacheConfiguration destroyCacheCfg;
    +
    +    /** */
    +    private CacheConfiguration surviveCacheCfg;
    +
    +    /** {@inheritDoc} */
    +    @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
    +        IgniteConfiguration cfg = super.getConfiguration(gridName);
    +
    +        TestRecordingCommunicationSpi commSpi = new TestRecordingCommunicationSpi();
    +        cfg.setCommunicationSpi(commSpi);
    +
    +        DataStorageConfiguration memCfg = new DataStorageConfiguration()
    +            .setDefaultDataRegionConfiguration(
    +                new DataRegionConfiguration().setMaxSize(100 * 1024 * 1024).setPersistenceEnabled(true))
    +            .setWalMode(WALMode.LOG_ONLY);
    +
    +        cfg.setDataStorageConfiguration(memCfg);
    +
    +        CacheConfiguration ccfg1 = new CacheConfiguration<>();
    +
    +        ccfg1.setName(CACHE_1_NAME);
    +        ccfg1.setBackups(1);
    +        ccfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    +        ccfg1.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    +        ccfg1.setAffinity(new RendezvousAffinityFunction(false, 32));
    +
    +        destroyCacheCfg = ccfg1;
    +
    +        CacheConfiguration ccfg2 = new CacheConfiguration<>();
    +
    +        ccfg2.setName(CACHE_2_NAME);
    +        ccfg2.setBackups(1);
    +        ccfg2.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    +        ccfg2.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    +        ccfg2.setAffinity(new RendezvousAffinityFunction(false, 32));
    +
    +        surviveCacheCfg = ccfg2;
    +
    +        cfg.setCacheConfiguration(ccfg1, ccfg2);
    +
    +        return cfg;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        super.beforeTest();
    +
    +        stopAllGrids();
    +
    +        cleanPersistenceDir();
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        super.afterTest();
    +
    +        grid(0).destroyCache(destroyCacheCfg.getName());
    +        grid(0).destroyCache(surviveCacheCfg.getName());
    +
    +        stopAllGrids();
    +
    +        cleanPersistenceDir();
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testTxOnCacheStopNoMessageBlock() throws Exception {
    +        testTxOnCacheStop(false);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testTxOnCacheStopWithMessageBlock() throws Exception {
    +        testTxOnCacheStop(true);
    +    }
    +
     +    /**
     +     * @param block {@code True} to block the {@code GridNearTxPrepareRequest} message.
     +     */
     +    private void testTxOnCacheStop(boolean block) throws Exception {
    +        startGridsMultiThreaded(2);
    +
    +        Ignition.setClientMode(true);
    +
    +        IgniteEx ig = startGrid("client");
    +
    +        ig.cluster().active(true);
    +
    +        for (TransactionConcurrency conc : TransactionConcurrency.values())
    +            for (TransactionIsolation iso : TransactionIsolation.values())
    +                runTxOnCacheStop(conc, iso, ig, block);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    @Test
    +    public void testTxOnCacheStopInMid() throws Exception {
    +        startGridsMultiThreaded(2);
    +
    +        Ignition.setClientMode(true);
    +
    +        IgniteEx ig = startGrid("client");
    +
    +        ig.cluster().active(true);
    +
    +        for (TransactionConcurrency conc : TransactionConcurrency.values())
    +            for (TransactionIsolation iso : TransactionIsolation.values())
    +                runCacheStopInMidTx(conc, iso, ig);
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    private void runTxOnCacheStop(TransactionConcurrency conc, TransactionIsolation iso, Ignite ig, boolean runConc)
    +        throws Exception {
    +        if ((conc == TransactionConcurrency.OPTIMISTIC) && (MvccFeatureChecker.forcedMvcc()))
    +            return;
    +
    +        CountDownLatch destroyLatch = new CountDownLatch(1);
    +
    +        final IgniteCache cache = ig.getOrCreateCache(destroyCacheCfg);
    +
    +        final IgniteCache cache2 = ig.getOrCreateCache(surviveCacheCfg);
    +
    +        TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(ig);
    +
    +        IgniteInternalFuture f0 = GridTestUtils.runAsync(() -> {
    +            try {
    +                destroyLatch.await();
    +
    +                IgniteInternalFuture f = GridTestUtils.runAsync(() -> {
    +                    doSleep(rnd.nextInt(500));
    +
    +                    spi.stopBlock();
    +                });
    +
    +                cache.destroy();
    +
    +                f.get();
    +            }
    +            catch (Exception e) {
    +                e.printStackTrace();
    +            }
    +        });
    +
    +        spi.blockMessages((node, msg) -> {
    +            if (msg instanceof GridNearTxPrepareRequest) {
    +                destroyLatch.countDown();
    +
    +                return runConc;
    +            }
    +
    +            return false;
    +        });
    +
    +        IgniteInternalFuture f1 = GridTestUtils.runAsync(() -> {
    +            byte[] val = new byte[1024];
    +
    +            try (Transaction tx = ig.transactions().txStart(conc, iso, 1_000, 2)) {
    +                cache.put(100, val);
    +
    +                cache2.put(100, val);
    +
    +                tx.commit();
    +            }
    +            catch (IgniteException e) {
     +                assertTrue(X.hasCause(e, IgniteTxTimeoutCheckedException.class)
     +                    || X.hasCause(e, CacheInvalidStateException.class));
    +            }
    +        });
    +
    +        f1.get();
    +        f0.get();
    +
    +        try {
    +            assertEquals(cache2.get(100), cache.get(100));
    +        }
    +        catch (IllegalStateException e) {
    +            assertTrue(X.hasCause(e, CacheStoppedException.class));
    +        }
    +
    +        spi.stopBlock();
    +    }
    +
    +    /**
    +     * @throws Exception If failed.
    +     */
    +    private void runCacheStopInMidTx(TransactionConcurrency conc, TransactionIsolation iso, Ignite ig) throws Exception {
    +        if ((conc == TransactionConcurrency.OPTIMISTIC) && (MvccFeatureChecker.forcedMvcc()))
    +            return;
    +
    +        CountDownLatch destroyLatch = new CountDownLatch(1);
    +
    +        CountDownLatch putLatch = new CountDownLatch(1);
    +
    +        final IgniteCache cache = ig.getOrCreateCache(destroyCacheCfg);
    +
    +        final IgniteCache cache2 = ig.getOrCreateCache(surviveCacheCfg);
    +
    +        IgniteInternalFuture f0 = GridTestUtils.runAsync(() -> {
    +            try {
    +                putLatch.await();
    +
    +                cache.destroy();
    +
    +                destroyLatch.countDown();
    +            }
    +            catch (Exception e) {
    +                e.printStackTrace();
    +            }
    +        });
    +
    +        IgniteInternalFuture f1 = GridTestUtils.runAsync(() -> {
    +            byte[] val = new byte[1024];
    +
    +            try (Transaction tx = ig.transactions().txStart(conc, iso, 1_000, 2)) {
    +                cache.put(100, val);
    +
    +                cache2.put(100, val);
    +
    +                putLatch.countDown();
    +
    +                destroyLatch.await();
    +
    +                tx.commit();
    +            }
    +            catch (IgniteException e) {
     +                assertTrue(X.hasCause(e, CacheInvalidStateException.class) ||
     +                    X.hasCause(e, CacheStoppedException.class) ||
     +                    X.hasCause(e, TransactionRollbackException.class));
    +            }
    +            catch (InterruptedException e) {
    +                e.printStackTrace();
    +            }
    +
    +        });
    +
    +        f1.get();
    +        f0.get();
    +
    +        assertNull(cache2.get(100));
    +    }
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionCrossCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionCrossCacheTest.java
    index 056b093f7fba6..680e69b9e294b 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionCrossCacheTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionCrossCacheTest.java
    @@ -39,6 +39,9 @@
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionDeadlockException;
     import org.apache.ignite.transactions.TransactionTimeoutException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.internal.util.typedef.X.hasCause;
     import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    @@ -47,6 +50,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxOptimisticDeadlockDetectionCrossCacheTest extends GridCommonAbstractTest {
         /** {@inheritDoc} */
         @SuppressWarnings("unchecked")
    @@ -83,6 +87,7 @@ public class TxOptimisticDeadlockDetectionCrossCacheTest extends GridCommonAbstr
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlock() throws Exception {
             startGrids(2);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionTest.java
    index 19fb4c9d7cb78..4818d821b10aa 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticDeadlockDetectionTest.java
    @@ -52,11 +52,13 @@
     import org.apache.ignite.spi.IgniteSpiException;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
     import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionDeadlockException;
     import org.apache.ignite.transactions.TransactionTimeoutException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
    @@ -69,10 +71,8 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxOptimisticDeadlockDetectionTest extends AbstractDeadlockDetectionTest {
    -    /** Ip finder. */
    -    private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** Cache name. */
         private static final String CACHE_NAME = "cache";
     
    @@ -93,15 +93,8 @@ public class TxOptimisticDeadlockDetectionTest extends AbstractDeadlockDetection
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    -
    -        discoSpi.setIpFinder(IP_FINDER);
    -
    -        if (isDebug()) {
    -            discoSpi.failureDetectionTimeoutEnabled(false);
    -
    -            cfg.setDiscoverySpi(discoSpi);
    -        }
    +        if (isDebug())
    +            ((TcpDiscoverySpi)cfg.getDiscoverySpi()).failureDetectionTimeoutEnabled(false);
     
             TcpCommunicationSpi commSpi = new TestCommunicationSpi();
     
    @@ -109,8 +102,6 @@ public class TxOptimisticDeadlockDetectionTest extends AbstractDeadlockDetection
     
             cfg.setClientMode(client);
     
    -        cfg.setDiscoverySpi(discoSpi);
    -
             return cfg;
         }
     
    @@ -131,6 +122,7 @@ public class TxOptimisticDeadlockDetectionTest extends AbstractDeadlockDetection
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksPartitioned() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 doTestDeadlocks(createCache(PARTITIONED, syncMode, false), ORDINAL_START_KEY);
    @@ -141,6 +133,7 @@ public void testDeadlocksPartitioned() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksPartitionedNear() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 doTestDeadlocks(createCache(PARTITIONED, syncMode, true), ORDINAL_START_KEY);
    @@ -151,6 +144,7 @@ public void testDeadlocksPartitionedNear() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksReplicated() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 doTestDeadlocks(createCache(REPLICATED, syncMode, false), ORDINAL_START_KEY);
    @@ -161,6 +155,7 @@ public void testDeadlocksReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksPartitionedNearTxOnPrimary() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 doTestDeadlocksTxOnPrimary(createCache(PARTITIONED, syncMode, true),  ORDINAL_START_KEY);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticOnPartitionExchangeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticOnPartitionExchangeTest.java
    index 369e75506b69f..9be8c7d139362 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticOnPartitionExchangeTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticOnPartitionExchangeTest.java
    @@ -50,6 +50,9 @@
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionIsolation;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -65,6 +68,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxOptimisticOnPartitionExchangeTest extends GridCommonAbstractTest {
         /** Nodes count. */
         private static final int NODES_CNT = 3;
    @@ -106,6 +110,7 @@ public class TxOptimisticOnPartitionExchangeTest extends GridCommonAbstractTest
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConsistencyOnPartitionExchange() throws Exception {
             doTest(SERIALIZABLE, true);
             doTest(READ_COMMITTED, true);
    @@ -312,4 +317,4 @@ private void sendMessage(
                 super.sendMessage(node, msg, ackC);
             }
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticPrepareOnUnstableTopologyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticPrepareOnUnstableTopologyTest.java
    index cb371cf2cc583..3fde7deab4589 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticPrepareOnUnstableTopologyTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxOptimisticPrepareOnUnstableTopologyTest.java
    @@ -17,26 +17,27 @@
     
     package org.apache.ignite.internal.processors.cache.transactions;
     
    -import java.util.ArrayList;
    -import java.util.Collection;
     import java.util.TreeMap;
     import java.util.concurrent.ThreadLocalRandom;
     import java.util.concurrent.TimeUnit;
    +import java.util.concurrent.atomic.AtomicBoolean;
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.IgniteEx;
    +import org.apache.ignite.internal.IgniteInternalFuture;
     import org.apache.ignite.internal.IgniteKernal;
    -import org.apache.ignite.internal.util.typedef.G;
    +import org.apache.ignite.internal.util.future.GridCompoundFuture;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
    +import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -45,36 +46,24 @@
     /**
      * Tests optimistic prepare on unstable topology.
      */
    +@RunWith(JUnit4.class)
     public class TxOptimisticPrepareOnUnstableTopologyTest extends GridCommonAbstractTest {
         /** */
         public static final String CACHE_NAME = "part_cache";
     
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    +    /** */
    +    private static final int STARTUP_DELAY = 500;
     
         /** */
    -    private volatile boolean run = true;
    +    private static final int GRID_CNT = 4;
     
         /** */
         private boolean client;
     
    -    /** {@inheritDoc} */
    -    @Override protected void afterTestsStopped() throws Exception {
    -        stopAllGrids();
    -
    -        assertEquals(0, G.allGrids().size());
    -    }
    -
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration c = super.getConfiguration(igniteInstanceName);
     
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(ipFinder);
    -
    -        c.setDiscoverySpi(disco);
    -
             CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
     
             ccfg.setName(CACHE_NAME);
    @@ -93,6 +82,7 @@ public class TxOptimisticPrepareOnUnstableTopologyTest extends GridCommonAbstrac
         /**
          *
          */
    +    @Test
         public void testPrepareOnUnstableTopology() throws Exception {
             for (TransactionIsolation isolation : TransactionIsolation.values()) {
                 doPrepareOnUnstableTopology(4, false, isolation, 0);
    @@ -110,58 +100,42 @@ public void testPrepareOnUnstableTopology() throws Exception {
          */
         private void doPrepareOnUnstableTopology(int keys, boolean testClient, TransactionIsolation isolation,
             long timeout) throws Exception {
    -        Collection threads = new ArrayList<>();
    -
    -        try {
    -            // Start grid 1.
    -            IgniteEx grid1 = startGrid(0);
    -
    -            assertFalse(grid1.configuration().isClientMode());
    -
    -            threads.add(runCacheOperations(grid1, isolation, timeout, keys));
    -
    -            TimeUnit.SECONDS.sleep(3L);
    -
    -            client = testClient; // If test client start on node in client mode.
    -
    -            // Start grid 2.
    -            IgniteEx grid2 = startGrid(1);
    -
    -            assertEquals((Object)testClient, grid2.configuration().isClientMode());
    -
    -            client = false;
    +        GridCompoundFuture compFut = new GridCompoundFuture<>();
     
    -            threads.add(runCacheOperations(grid2, isolation, timeout, keys));
    +        AtomicBoolean stopFlag = new AtomicBoolean();
     
    -            TimeUnit.SECONDS.sleep(3L);
    -
    -            // Start grid 3.
    -            IgniteEx grid3 = startGrid(2);
    +        try {
    +            int clientIdx = testClient ? 1 : -1;
     
    -            assertFalse(grid3.configuration().isClientMode());
    +            try {
    +                for (int i = 0; i < GRID_CNT; i++) {
    +                    client = (clientIdx == i);
     
    -            if (testClient)
    -                log.info("Started client node: " + grid3.name());
    +                    IgniteEx grid = startGrid(i);
     
    -            threads.add(runCacheOperations(grid3, isolation, timeout, keys));
    +                    assertEquals(client, grid.configuration().isClientMode().booleanValue());
     
    -            TimeUnit.SECONDS.sleep(3L);
    +                    client = false;
     
    -            // Start grid 4.
    -            IgniteEx grid4 = startGrid(3);
    +                    IgniteInternalFuture fut = runCacheOperationsAsync(grid, stopFlag, isolation, timeout, keys);
     
    -            assertFalse(grid4.configuration().isClientMode());
    +                    compFut.add(fut);
     
    -            threads.add(runCacheOperations(grid4, isolation, timeout, keys));
    +                    U.sleep(STARTUP_DELAY);
    +                }
    +            }
    +            finally {
    +                stopFlag.set(true);
    +            }
     
    -            TimeUnit.SECONDS.sleep(3L);
    +            compFut.markInitialized();
     
    -            stopThreads(threads);
    +            compFut.get();
     
    -            for (int i = 0; i < 4; i++) {
    +            for (int i = 0; i < GRID_CNT; i++) {
                     IgniteTxManager tm = ((IgniteKernal)grid(i)).internalCache(CACHE_NAME).context().tm();
     
    -                assertEquals("txMap is not empty:" + i, 0, tm.idMapSize());
    +                assertEquals("txMap is not empty: " + i, 0, tm.idMapSize());
                 }
             }
             finally {
    @@ -169,65 +143,51 @@ private void doPrepareOnUnstableTopology(int keys, boolean testClient, Transacti
             }
         }
     
    -    /**
    -     * @param threads Thread which will be stopped.
    -     */
    -    private void stopThreads(Iterable threads) {
    -        try {
    -            run = false;
    -
    -            for (Thread thread : threads)
    -                thread.join();
    -        }
    -        catch (Exception e) {
    -            U.error(log(), "Couldn't stop threads.", e);
    -        }
    -    }
    -
         /**
          * @param node Node.
          * @param isolation Isolation.
          * @param timeout Timeout.
          * @param keys Number of keys.
    -     * @return Running thread.
    +     * @return Future representing pending completion of the operation.
          */
    -    @SuppressWarnings("TypeMayBeWeakened")
    -    private Thread runCacheOperations(Ignite node, TransactionIsolation isolation, long timeout, final int keys) {
    -        Thread t = new Thread() {
    -            @Override public void run() {
    -                while (run) {
    -                    TreeMap vals = generateValues(keys);
    -
    -                    try {
    -                        try (Transaction tx = node.transactions().txStart(TransactionConcurrency.OPTIMISTIC, isolation,
    -                            timeout, keys)){
    -
    -                            IgniteCache cache = node.cache(CACHE_NAME);
    -
    -                            // Put or remove.
    -                            if (ThreadLocalRandom.current().nextDouble(1) < 0.65)
    -                                cache.putAll(vals);
    -                            else
    -                                cache.removeAll(vals.keySet());
    -
    -                            tx.commit();
    -                        }
    -                        catch (Exception e) {
    -                            U.error(log(), "Failed cache operation.", e);
    -                        }
    -
    -                        U.sleep(100);
    +    private IgniteInternalFuture runCacheOperationsAsync(
    +        Ignite node,
    +        AtomicBoolean stopFlag,
    +        TransactionIsolation isolation,
    +        long timeout,
    +        final int keys
    +    ) {
    +        return GridTestUtils.runAsync(() -> {
    +            while (!stopFlag.get()) {
    +                TreeMap vals = generateValues(keys);
    +
    +                try {
    +                    try (Transaction tx = node.transactions().txStart(TransactionConcurrency.OPTIMISTIC, isolation,
    +                        timeout, keys)) {
    +
    +                        IgniteCache cache = node.cache(CACHE_NAME);
    +
    +                        // Put or remove.
    +                        if (ThreadLocalRandom.current().nextDouble(1) < 0.65)
    +                            cache.putAll(vals);
    +                        else
    +                            cache.removeAll(vals.keySet());
    +
    +                        tx.commit();
                         }
    -                    catch (Exception e){
    -                        U.error(log(), "Failed unlock.", e);
    +                    catch (Exception e) {
    +                        U.error(log(), "Failed cache operation.", e);
                         }
    +
    +                    U.sleep(100);
    +                }
    +                catch (Exception e) {
    +                    U.error(log(), "Failed unlock.", e);
                     }
                 }
    -        };
    -
    -        t.start();
     
    -        return t;
    +            return null;
    +        });
         }
     
         /**
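Beyond the JUnit 4 annotations, this file replaces hand-managed `Thread` objects and a `volatile boolean run` flag with futures: each worker is started via `GridTestUtils.runAsync`, collected into a `GridCompoundFuture`, and stopped through a shared `AtomicBoolean`. The payoff is that a failure in any worker now surfaces from `compFut.get()` instead of dying silently in a detached thread. The same pattern can be sketched with JDK classes only, using `CompletableFuture.allOf` and `AtomicBoolean` as stand-ins for the Ignite test-framework types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class StopFlagWorkersDemo {
    /**
     * Starts {@code n} looping workers, lets them run for {@code runMillis},
     * raises the shared stop flag, then joins them all through one compound future.
     * Returns the number of simulated cache operations performed.
     */
    public static int runWorkers(int n, long runMillis) throws Exception {
        AtomicBoolean stopFlag = new AtomicBoolean();
        AtomicInteger ops = new AtomicInteger();

        List<CompletableFuture<Void>> workers = new ArrayList<>();

        for (int i = 0; i < n; i++) {
            workers.add(CompletableFuture.runAsync(() -> {
                while (!stopFlag.get())
                    ops.incrementAndGet(); // stand-in for one cache put/remove
            }));
        }

        Thread.sleep(runMillis); // stand-in for the staggered node-startup phase

        stopFlag.set(true);

        // allOf(...).get() plays the role of GridCompoundFuture.markInitialized()/get():
        // block until every worker finishes, rethrowing any worker failure.
        CompletableFuture.allOf(workers.toArray(new CompletableFuture[0])).get();

        return ops.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("workers did work: " + (runWorkers(4, 100) > 0));
    }
}
```

Note the `finally { stopFlag.set(true); }` in the patch: even if a grid fails to start mid-loop, already-running workers are told to stop before the compound future is awaited.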
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionCrossCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionCrossCacheTest.java
    index b6bd7fbf4f093..5a397556e993b 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionCrossCacheTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionCrossCacheTest.java
    @@ -37,6 +37,9 @@
     import org.apache.ignite.transactions.TransactionDeadlockException;
     import org.apache.ignite.transactions.TransactionTimeoutException;
     import org.jetbrains.annotations.NotNull;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.internal.util.typedef.X.hasCause;
     import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    @@ -45,6 +48,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxPessimisticDeadlockDetectionCrossCacheTest extends GridCommonAbstractTest {
         /** Nodes count. */
         private static final int NODES_CNT = 2;
    @@ -75,6 +79,7 @@ public class TxPessimisticDeadlockDetectionCrossCacheTest extends GridCommonAbst
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlockNoNear() throws Exception {
             doTestDeadlock(false, false);
         }
    @@ -82,6 +87,7 @@ public void testDeadlockNoNear() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlockOneNear() throws Exception {
             doTestDeadlock(false, true);
         }
    @@ -89,6 +95,7 @@ public void testDeadlockOneNear() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlockAnotherNear() throws Exception {
             doTestDeadlock(true, false);
             doTestDeadlock(false, true);
    @@ -97,6 +104,7 @@ public void testDeadlockAnotherNear() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlockBothNear() throws Exception {
             doTestDeadlock(true, true);
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionTest.java
    index 88bebb29fc9de..3d03f8cb51506 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxPessimisticDeadlockDetectionTest.java
    @@ -48,6 +48,9 @@
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionDeadlockException;
     import org.apache.ignite.transactions.TransactionTimeoutException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.LOCAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -61,6 +64,7 @@
     /**
      * Tests deadlock detection for pessimistic transactions.
      */
    +@RunWith(JUnit4.class)
     public class TxPessimisticDeadlockDetectionTest extends AbstractDeadlockDetectionTest {
         /** Cache name. */
         private static final String CACHE_NAME = "cache";
    @@ -119,6 +123,7 @@ public class TxPessimisticDeadlockDetectionTest extends AbstractDeadlockDetectio
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksPartitioned() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 doTestDeadlocks(createCache(PARTITIONED, syncMode, false), ORDINAL_START_KEY);
    @@ -129,6 +134,7 @@ public void testDeadlocksPartitioned() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksPartitionedNear() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 doTestDeadlocks(createCache(PARTITIONED, syncMode, true), ORDINAL_START_KEY);
    @@ -139,6 +145,7 @@ public void testDeadlocksPartitionedNear() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksReplicated() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 doTestDeadlocks(createCache(REPLICATED, syncMode, false), ORDINAL_START_KEY);
    @@ -149,6 +156,7 @@ public void testDeadlocksReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksLocal() throws Exception {
             for (CacheWriteSynchronizationMode syncMode : CacheWriteSynchronizationMode.values()) {
                 IgniteCache cache = null;
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncNearCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncNearCacheTest.java
    index 5caa1a06325b8..dc115e1dc62d6 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncNearCacheTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncNearCacheTest.java
    @@ -17,6 +17,8 @@
     
     package org.apache.ignite.internal.processors.cache.transactions;
     
    +import org.apache.ignite.testframework.MvccFeatureChecker;
    +
     /**
      * Tests an ability to async rollback near transactions.
      */
    @@ -25,4 +27,11 @@ public class TxRollbackAsyncNearCacheTest extends TxRollbackAsyncTest {
         @Override protected boolean nearCacheEnabled() {
             return true;
         }
    +
    +    /** {@inheritDoc} */
    +    @Override public void setUp() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
    +
    +        super.setUp();
    +    }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncTest.java
    index 4ca8ba37c34ba..5d37f1387db4d 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncTest.java
    @@ -51,12 +51,12 @@
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.IgniteFutureCancelledCheckedException;
     import org.apache.ignite.internal.IgniteInternalFuture;
    -import org.apache.ignite.internal.IgniteInterruptedCheckedException;
     import org.apache.ignite.internal.IgniteKernal;
     import org.apache.ignite.internal.TestRecordingCommunicationSpi;
     import org.apache.ignite.internal.processors.cache.GridCacheContext;
     import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
     import org.apache.ignite.internal.processors.cache.distributed.near.GridNearLockRequest;
    +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistRequest;
     import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishRequest;
     import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal;
     import org.apache.ignite.internal.util.future.GridFutureAdapter;
    @@ -79,13 +79,17 @@
     import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.lang.IgniteUuid;
     import org.apache.ignite.plugin.extensions.communication.Message;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
    +import org.apache.ignite.testframework.GridTestUtils.SF;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.apache.ignite.transactions.TransactionRollbackException;
    +import org.junit.Assume;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.lang.Thread.interrupted;
     import static java.lang.Thread.yield;
    @@ -96,7 +100,6 @@
     import static org.apache.ignite.testframework.GridTestUtils.waitForCondition;
     import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
     import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    -import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;
     import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
     import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;
     import static org.apache.ignite.transactions.TransactionState.ROLLED_BACK;
    @@ -104,6 +107,7 @@
     /**
      * Tests an ability to async rollback near transactions.
      */
    +@RunWith(JUnit4.class)
     public class TxRollbackAsyncTest extends GridCommonAbstractTest {
         /** */
         public static final int DURATION = 60_000;
    @@ -111,9 +115,6 @@ public class TxRollbackAsyncTest extends GridCommonAbstractTest {
         /** */
         private static final String CACHE_NAME = "test";
     
    -    /** IP finder. */
    -    private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int GRID_CNT = 3;
     
    @@ -127,8 +128,6 @@ public class TxRollbackAsyncTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             cfg.setCommunicationSpi(new TestRecordingCommunicationSpi());
     
             boolean client = igniteInstanceName.startsWith("client");
    @@ -165,10 +164,11 @@ protected boolean nearCacheEnabled() {
         }
     
         /**
    -     *
          * @return {@code True} if persistence must be enabled for test.
          */
    -    protected boolean persistenceEnabled() { return false; }
    +    protected boolean persistenceEnabled() {
    +        return false;
    +    }
     
         /** {@inheritDoc} */
         @Override protected void beforeTest() throws Exception {
    @@ -210,7 +210,10 @@ private Ignite startClient() throws Exception {
         /**
          *
          */
    +    @Test
         public void testRollbackSimple() throws Exception {
    +        Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-7952", MvccFeatureChecker.forcedMvcc());
    +
             startClient();
     
             for (Ignite ignite : G.allGrids()) {
    @@ -225,7 +228,7 @@ public void testRollbackSimple() throws Exception {
          */
         private void testRollbackSimple0(Ignite near) throws Exception {
             // Normal rollback after put.
    -        Transaction tx = near.transactions().txStart(PESSIMISTIC, READ_COMMITTED);
    +        Transaction tx = near.transactions().txStart(PESSIMISTIC, REPEATABLE_READ);
     
             near.cache(CACHE_NAME).put(0, 0);
     
    @@ -314,6 +317,7 @@ private void testRollbackSimple0(Ignite near) throws Exception {
         /**
          *
          */
    +    @Test
         public void testSynchronousRollback() throws Exception {
             Ignite client = startClient();
     
    @@ -339,22 +343,21 @@ public void testSynchronousRollback() throws Exception {
         }
     
         /**
    -     *
          * @param holdLockNode Node holding the write lock.
          * @param tryLockNode Node trying to acquire lock.
          * @param useTimeout {@code True} if need to start tx with timeout.
    -     *
          * @throws Exception If failed.
          */
    -    private void testSynchronousRollback0(Ignite holdLockNode, final Ignite tryLockNode, final boolean useTimeout) throws Exception {
    -        final CountDownLatch keyLocked = new CountDownLatch(1);
    +    private void testSynchronousRollback0(Ignite holdLockNode, final Ignite tryLockNode,
    +        final boolean useTimeout) throws Exception {
    +        final GridFutureAdapter keyLocked = new GridFutureAdapter<>();
     
             CountDownLatch waitCommit = new CountDownLatch(1);
     
             // Used for passing tx instance to rollback thread.
             IgniteInternalFuture lockFut = lockInTx(holdLockNode, keyLocked, waitCommit, 0);
     
    -        U.awaitQuiet(keyLocked);
    +        keyLocked.get();
     
             final int txCnt = 1000;
     
    @@ -377,8 +380,6 @@ private void testSynchronousRollback0(Ignite holdLockNode, final Ignite tryLockN
     
             IgniteInternalFuture txFut = multithreadedAsync(new Runnable() {
                 @Override public void run() {
    -                U.awaitQuiet(keyLocked);
    -
                     for (int i = 0; i < txCnt; i++) {
                         GridNearTxLocal tx0 = ctx.tm().threadLocalTx(cctx);
     
    @@ -390,7 +391,7 @@ private void testSynchronousRollback0(Ignite holdLockNode, final Ignite tryLockN
                             txReadyFut.onDone(tx);
     
                             // Will block on lock request until rolled back asynchronously.
    -                        Object o = tryLockNode.cache(CACHE_NAME).get(0);
    +                        Object o = tryLockNode.cache(CACHE_NAME).getAndPut(0, 0);
     
                             assertNull(o); // If rolled back by close, previous get will return null.
                         }
    @@ -409,7 +410,7 @@ private void testSynchronousRollback0(Ignite holdLockNode, final Ignite tryLockN
     
                     int proc = 1;
     
    -                while(true) {
    +                while (true) {
                         try {
                             Transaction tx = txReadyFut.get();
     
    @@ -465,6 +466,7 @@ private void testSynchronousRollback0(Ignite holdLockNode, final Ignite tryLockN
         /**
          *
          */
    +    @Test
         public void testEnlistManyRead() throws Exception {
             testEnlistMany(false, REPEATABLE_READ, PESSIMISTIC);
         }
    @@ -472,6 +474,7 @@ public void testEnlistManyRead() throws Exception {
         /**
          *
          */
    +    @Test
         public void testEnlistManyWrite() throws Exception {
             testEnlistMany(true, REPEATABLE_READ, PESSIMISTIC);
         }
    @@ -479,22 +482,30 @@ public void testEnlistManyWrite() throws Exception {
         /**
          *
          */
    +    @Test
         public void testEnlistManyReadOptimistic() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            return; // Optimistic transactions are not supported by MVCC.
    +
             testEnlistMany(false, SERIALIZABLE, OPTIMISTIC);
         }
     
         /**
          *
          */
    +    @Test
         public void testEnlistManyWriteOptimistic() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            return; // Optimistic transactions are not supported by MVCC.
    +
             testEnlistMany(true, SERIALIZABLE, OPTIMISTIC);
         }
     
    -
         /**
          *
          */
    -    private void testEnlistMany(boolean write, TransactionIsolation isolation, TransactionConcurrency conc) throws Exception {
    +    private void testEnlistMany(boolean write, TransactionIsolation isolation,
    +        TransactionConcurrency conc) throws Exception {
             final Ignite client = startClient();
     
             Map entries = new HashMap<>();
    @@ -504,7 +515,7 @@ private void testEnlistMany(boolean write, TransactionIsolation isolation, Trans
     
             IgniteInternalFuture fut = null;
     
    -        try(Transaction tx = client.transactions().txStart(conc, isolation, 0, 0)) {
    +        try (Transaction tx = client.transactions().txStart(conc, isolation, 0, 0)) {
                 fut = rollbackAsync(tx, 200);
     
                 if (write)
    @@ -530,14 +541,21 @@ private void testEnlistMany(boolean write, TransactionIsolation isolation, Trans
         /**
          * Rollback tx while near lock request is delayed.
          */
    +    @Test
         public void testRollbackDelayNearLockRequest() throws Exception {
    +        Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-9470", MvccFeatureChecker.forcedMvcc());
    +
             final Ignite client = startClient();
     
             final Ignite prim = primaryNode(0, CACHE_NAME);
     
             final TestRecordingCommunicationSpi spi = (TestRecordingCommunicationSpi)client.configuration().getCommunicationSpi();
     
    -        spi.blockMessages(GridNearLockRequest.class, prim.name());
    +        boolean mvcc = MvccFeatureChecker.forcedMvcc();
    +
    +        Class msgCls = mvcc ? GridNearTxEnlistRequest.class : GridNearLockRequest.class;
    +
    +        spi.blockMessages(msgCls, prim.name());
     
             final IgniteInternalFuture rollbackFut = runAsync(new Callable() {
                 @Override public Void call() throws Exception {
    @@ -549,13 +567,13 @@ public void testRollbackDelayNearLockRequest() throws Exception {
                 }
             }, "tx-rollback-thread");
     
    -        try(final Transaction tx = client.transactions().txStart()) {
    +        try (final Transaction tx = client.transactions().txStart()) {
                 client.cache(CACHE_NAME).put(0, 0);
     
                 fail();
             }
             catch (CacheException e) {
    -            assertTrue(X.hasCause(e, TransactionRollbackException.class));
    +            assertTrue(X.getFullStackTrace(e), X.hasCause(e, TransactionRollbackException.class));
             }
     
             rollbackFut.get();
    @@ -570,6 +588,7 @@ public void testRollbackDelayNearLockRequest() throws Exception {
         /**
          * Tests rollback with concurrent commit.
          */
    +    @Test
         public void testRollbackDelayFinishRequest() throws Exception {
             final Ignite client = startClient();
     
    @@ -608,7 +627,7 @@ public void testRollbackDelayFinishRequest() throws Exception {
                 }
             }, "tx-rollback-thread");
     
    -        try(final Transaction tx = client.transactions().txStart()) {
    +        try (final Transaction tx = client.transactions().txStart()) {
                 txRef.set(tx);
     
                 client.cache(CACHE_NAME).put(0, 0);
    @@ -629,7 +648,10 @@ public void testRollbackDelayFinishRequest() throws Exception {
         /**
          *
          */
    +    @Test
         public void testMixedAsyncRollbackTypes() throws Exception {
    +        Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10434", MvccFeatureChecker.forcedMvcc());
    +
             final Ignite client = startClient();
     
             final AtomicBoolean stop = new AtomicBoolean();
    @@ -660,6 +682,8 @@ public void testMixedAsyncRollbackTypes() throws Exception {
             for (Ignite ignite : G.allGrids())
                 perNodeTxs.put(ignite, new ArrayBlockingQueue<>(1000));
     
    +        boolean mvcc = MvccFeatureChecker.forcedMvcc();
    +
             IgniteInternalFuture txFut = multithreadedAsync(() -> {
                 while (!stop.get()) {
                     int nodeId = r.nextInt(GRID_CNT + 1);
    @@ -667,8 +691,8 @@ public void testMixedAsyncRollbackTypes() throws Exception {
                     // Choose random node to start tx on.
                     Ignite node = nodeId == GRID_CNT || nearCacheEnabled() ? client : grid(nodeId);
     
    -                TransactionConcurrency conc = TC_VALS[r.nextInt(TC_VALS.length)];
    -                TransactionIsolation isolation = TI_VALS[r.nextInt(TI_VALS.length)];
    +                TransactionConcurrency conc = mvcc ? PESSIMISTIC : TC_VALS[r.nextInt(TC_VALS.length)];
     +                TransactionIsolation isolation = mvcc ? REPEATABLE_READ : TI_VALS[r.nextInt(TI_VALS.length)];
     
                     // Timeout is necessary otherwise deadlock is possible due to randomness of lock acquisition.
                     long timeout = r.nextInt(50) + 50;
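The comment above explains why the test always starts transactions with a timeout: with lock acquisition order randomized across threads, a classic deadlock shape is possible, and a bounded wait guarantees progress. A minimal JDK-only sketch of the same idea (not Ignite code; `ReentrantLock.tryLock` stands in for the transaction timeout):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class LockTimeoutSketch {
    public static void main(String[] args) throws Exception {
        ReentrantLock a = new ReentrantLock(), b = new ReentrantLock();

        // Two threads take the locks in opposite order -- a classic deadlock
        // shape. A bounded tryLock (like the tx timeout in the test) makes
        // both threads eventually give up instead of hanging forever.
        Thread t1 = new Thread(() -> acquireBoth(a, b));
        Thread t2 = new Thread(() -> acquireBoth(b, a));

        t1.start(); t2.start();
        t1.join(5_000); t2.join(5_000);

        if (t1.isAlive() || t2.isAlive())
            throw new AssertionError("threads deadlocked");
    }

    static void acquireBoth(ReentrantLock first, ReentrantLock second) {
        try {
            if (!first.tryLock(100, TimeUnit.MILLISECONDS))
                return; // Timed out: back off instead of deadlocking.

            try {
                if (second.tryLock(100, TimeUnit.MILLISECONDS))
                    second.unlock();
            }
            finally {
                first.unlock();
            }
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```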
    @@ -761,8 +785,8 @@ public void testMixedAsyncRollbackTypes() throws Exception {
                     Transaction tx;
     
                     // Rollback all transaction
    -                while((tx = nodeQ.poll()) != null) {
    -                    rolledBack.add(1);
    +                while ((tx = nodeQ.poll()) != null) {
    +                    rolledBack.increment();
     
                         doSleep(r.nextInt(50)); // Add random sleep to increase completed txs count.
     
    @@ -795,8 +819,8 @@ public void testMixedAsyncRollbackTypes() throws Exception {
             for (BlockingQueue queue : perNodeTxs.values()) {
                 Transaction tx;
     
    -            while((tx = queue.poll()) != null) {
    -                rolledBack.add(1);
    +            while ((tx = queue.poll()) != null) {
    +                rolledBack.increment();
     
                     rollbackClo.apply(tx);
                 }
    @@ -813,8 +837,9 @@ public void testMixedAsyncRollbackTypes() throws Exception {
         /**
          * Tests proxy object returned by {@link IgniteTransactions#localActiveTransactions()}
          */
    +    @Test
         public void testRollbackProxy() throws Exception {
    -        final CountDownLatch keyLocked = new CountDownLatch(1);
    +        final GridFutureAdapter keyLocked = new GridFutureAdapter<>();
     
             CountDownLatch waitCommit = new CountDownLatch(1);
     
    @@ -822,7 +847,7 @@ public void testRollbackProxy() throws Exception {
     
             IgniteInternalFuture lockFut = lockInTx(ig, keyLocked, waitCommit, 0);
     
    -        U.awaitQuiet(keyLocked);
    +        keyLocked.get();
     
             Collection txs = ig.transactions().localActiveTransactions();
     
    @@ -901,6 +926,7 @@ public void testRollbackProxy() throws Exception {
         /**
          *
          */
    +    @Test
         public void testRollbackOnTopologyLockPessimistic() throws Exception {
             final Ignite client = startClient();
     
    @@ -942,12 +968,12 @@ public void testRollbackOnTopologyLockPessimistic() throws Exception {
                 @Override public boolean apply(Event evt) {
                     runAsync(new Runnable() {
                         @Override public void run() {
    -                        try(Transaction tx = crd.transactions().withLabel("testLbl").txStart()) {
    +                        try (Transaction tx = crd.transactions().withLabel("testLbl").txStart()) {
                                 // Wait for node start.
                                 waitForCondition(new GridAbsPredicate() {
                                     @Override public boolean apply() {
                                         return crd.cluster().topologyVersion() != GRID_CNT +
    -                                        /** client node */ 1  + /** stop server node */ 1 + /** start server node */ 1;
    +                                        /** client node */1 + /** stop server node */1 + /** start server node */1;
                                     }
                                 }, 10_000);
     
    @@ -1020,30 +1046,31 @@ public void testRollbackOnTopologyLockPessimistic() throws Exception {
          * Locks entry in tx and delays commit until signalled.
          *
          * @param node Near node.
    -     * @param keyLocked Latch for notifying until key is locked.
    +     * @param keyLocked Future to be done when key is locked.
          * @param waitCommit Latch for waiting until commit is allowed.
          * @param timeout Timeout.
    -     *
          * @return tx completion future.
          */
    -    private IgniteInternalFuture lockInTx(final Ignite node, final CountDownLatch keyLocked,
    +    private IgniteInternalFuture lockInTx(final Ignite node, final GridFutureAdapter keyLocked,
             final CountDownLatch waitCommit, final int timeout) throws Exception {
             return multithreadedAsync(new Runnable() {
                 @Override public void run() {
    -                Transaction tx = node.transactions().withLabel(LABEL).txStart(PESSIMISTIC, REPEATABLE_READ, timeout, 1);
    +                try {
    +                    Transaction tx = node.transactions().withLabel(LABEL).txStart(PESSIMISTIC, REPEATABLE_READ, timeout, 1);
     
    -                node.cache(CACHE_NAME).put(0, 0);
    +                    node.cache(CACHE_NAME).put(0, 0);
     
    -                keyLocked.countDown();
    +                    keyLocked.onDone();
     
    -                try {
                         U.await(waitCommit);
    +
    +                    tx.commit();
                     }
    -                catch (IgniteInterruptedCheckedException e) {
    -                    fail("Lock thread was interrupted while waiting");
    -                }
    +                catch (Throwable e) {
    +                    keyLocked.onDone(e);
     
    -                tx.commit();
    +                    throw new RuntimeException(e);
    +                }
                 }
             }, 1, "tx-lock-thread");
         }
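The `lockInTx` rewrite above replaces a `CountDownLatch` with a `GridFutureAdapter` so that a failure in the locking thread reaches the waiter as an exception instead of leaving it blocked on a latch that will never count down. A sketch of the same pattern using the JDK's `CompletableFuture` as a stand-in for `GridFutureAdapter` (`complete`/`completeExceptionally` play the role of `onDone()`/`onDone(e)`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

class KeyLockedFutureSketch {
    // Stand-in for the test's lockInTx(): completes the "keyLocked" future
    // normally on success, exceptionally on failure.
    static CompletableFuture<Void> lockKey(boolean failLock) {
        CompletableFuture<Void> keyLocked = new CompletableFuture<>();

        new Thread(() -> {
            try {
                if (failLock)
                    throw new IllegalStateException("lock failed"); // stands in for a failed put()

                keyLocked.complete(null);           // like GridFutureAdapter.onDone()
            }
            catch (Throwable e) {
                keyLocked.completeExceptionally(e); // like GridFutureAdapter.onDone(e)
            }
        }).start();

        return keyLocked;
    }

    public static void main(String[] args) throws Exception {
        lockKey(false).get(); // completes normally, like keyLocked.get() in the test

        try {
            lockKey(true).get();
            throw new AssertionError("expected failure to propagate");
        }
        catch (ExecutionException expected) {
            // The waiter observes the cause; with a latch it would hang forever.
        }
    }
}
```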
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncWithPersistenceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncWithPersistenceTest.java
    index a11dca7cf83e6..8d24f05313dd4 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncWithPersistenceTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackAsyncWithPersistenceTest.java
    @@ -17,6 +17,9 @@
     
     package org.apache.ignite.internal.processors.cache.transactions;
     
    +import org.apache.ignite.testframework.MvccFeatureChecker;
    +import org.junit.Test;
    +
     import static org.apache.ignite.IgniteSystemProperties.IGNITE_WAL_LOG_TX_RECORDS;
     
     /**
    @@ -53,5 +56,14 @@ public class TxRollbackAsyncWithPersistenceTest extends TxRollbackAsyncTest {
     
             cleanPersistenceDir();
         }
    +
    +    /** {@inheritDoc} */
    +    @Test
    +    @Override public void testSynchronousRollback() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            fail("https://issues.apache.org/jira/browse/IGNITE-10785");
    +
    +        super.testSynchronousRollback();
    +    }
     }
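The override above mutes `testSynchronousRollback` under forced MVCC by failing fast with a link to the tracking JIRA issue. A minimal sketch of this muting pattern outside the Ignite test framework (the real `MvccFeatureChecker.forcedMvcc()` reads an Ignite system property; the property name used here is purely illustrative):

```java
class MutedTestSketch {
    // Illustrative stand-in for MvccFeatureChecker.forcedMvcc(); the real
    // checker reads an Ignite-specific system property.
    static boolean forcedMvcc() {
        return Boolean.getBoolean("test.force.mvcc");
    }

    // Stand-in for the overridden test method: fail fast with the tracking
    // issue when the feature flag makes the scenario unsupported.
    static String runTest() {
        if (forcedMvcc())
            return "MUTED: see tracking JIRA issue";

        return "RAN";
    }

    public static void main(String[] args) {
        System.clearProperty("test.force.mvcc");
        if (!"RAN".equals(runTest()))
            throw new AssertionError();

        System.setProperty("test.force.mvcc", "true");
        if (!runTest().startsWith("MUTED"))
            throw new AssertionError();
    }
}
```

Failing with the JIRA URL (rather than silently skipping) keeps the muted test visible in CI output until the linked issue is fixed.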
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnIncorrectParamsTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnIncorrectParamsTest.java
    index 8aafa8b091ba9..2d886a46b7a02 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnIncorrectParamsTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnIncorrectParamsTest.java
    @@ -28,20 +28,32 @@
     import org.apache.ignite.lang.IgniteBiPredicate;
     import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.events.EventType.EVT_TX_STARTED;
     
     /**
      * Tests transaction rollback on incorrect tx params.
      */
    +@RunWith(JUnit4.class)
     public class TxRollbackOnIncorrectParamsTest extends GridCommonAbstractTest {
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            fail("https://issues.apache.org/jira/browse/IGNITE-10415");
    +    }
    +
         /**
          *
          */
    +    @Test
         public void testTimeoutSetLocalGuarantee() throws Exception {
             Ignite ignite = startGrid(0);
     
    @@ -61,14 +73,14 @@ public void testTimeoutSetLocalGuarantee() throws Exception {
             IgniteCache cache = ignite.getOrCreateCache(defaultCacheConfiguration());
     
             try (Transaction tx = ignite.transactions().txStart(
    -            TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ, 200, 2)) {
    +            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, 200, 2)) {
                 cache.put(1, 1);
     
                 tx.commit();
             }
     
             try (Transaction tx = ignite.transactions().txStart(
    -            TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ, 100, 2)) {
    +            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, 100, 2)) {
                 cache.put(1, 2);
     
                 tx.commit();
    @@ -94,6 +106,7 @@ public void testTimeoutSetLocalGuarantee() throws Exception {
         /**
          *
          */
    +    @Test
         public void testLabelFilledLocalGuarantee() throws Exception {
             Ignite ignite = startGrid(0);
     
    @@ -133,6 +146,7 @@ public void testLabelFilledLocalGuarantee() throws Exception {
         /**
          *
          */
    +    @Test
         public void testLabelFilledRemoteGuarantee() throws Exception {
             Ignite ignite = startGrid(0);
             Ignite remote = startGrid(1);
    @@ -193,6 +207,7 @@ public void testLabelFilledRemoteGuarantee() throws Exception {
         /**
          *
          */
    +    @Test
         public void testTimeoutSetRemoteGuarantee() throws Exception {
             Ignite ignite = startGrid(0);
             Ignite remote = startGrid(1);
    @@ -216,14 +231,14 @@ public void testTimeoutSetRemoteGuarantee() throws Exception {
                 EVT_TX_STARTED);
     
             try (Transaction tx = ignite.transactions().txStart(
    -            TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ, 100, 2)) {
    +            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, 100, 2)) {
                 cacheLocal.put(1, 1);
     
                 tx.commit();
             }
     
             try (Transaction tx = remote.transactions().txStart(
    -            TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ, 100, 2)) {
    +            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, 100, 2)) {
                 cacheRemote.put(1, 2);
     
                 tx.commit();
    @@ -255,6 +270,7 @@ public void testTimeoutSetRemoteGuarantee() throws Exception {
         /**
          *
          */
    +    @Test
         public void testRollbackInsideLocalListenerAfterRemoteFilter() throws Exception {
             Ignite ignite = startGrid(0);
             Ignite remote = startGrid(1);
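Earlier in this file the patch also switches the rollback counter calls from `rolledBack.add(1)` to `rolledBack.increment()`. Assuming the counter is a `java.util.concurrent.atomic.LongAdder` (both methods exist there and are equivalent for a step of 1), the change is purely about reading intent; a small sketch of concurrent counting with it:

```java
import java.util.concurrent.atomic.LongAdder;

class RolledBackCounterSketch {
    public static void main(String[] args) throws Exception {
        LongAdder rolledBack = new LongAdder();

        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++)
                    rolledBack.increment(); // same effect as add(1)
            });
            ts[i].start();
        }

        for (Thread t : ts)
            t.join();

        // All 4 * 1000 increments are accounted for despite contention.
        if (rolledBack.sum() != 4000)
            throw new AssertionError("lost updates: " + rolledBack.sum());
    }
}
```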
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTimeoutOnePhaseCommitTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTimeoutOnePhaseCommitTest.java
    new file mode 100644
    index 0000000000000..27551d9af3d69
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTimeoutOnePhaseCommitTest.java
    @@ -0,0 +1,215 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.ignite.internal.processors.cache.transactions;
    +
    +import java.util.concurrent.CountDownLatch;
    +import java.util.concurrent.TimeUnit;
    +import org.apache.ignite.Ignite;
    +import org.apache.ignite.IgniteCheckedException;
    +import org.apache.ignite.configuration.CacheConfiguration;
    +import org.apache.ignite.configuration.IgniteConfiguration;
    +import org.apache.ignite.internal.IgniteEx;
    +import org.apache.ignite.internal.IgniteInternalFuture;
    +import org.apache.ignite.internal.IgniteInterruptedCheckedException;
    +import org.apache.ignite.internal.TestRecordingCommunicationSpi;
    +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareResponse;
    +import org.apache.ignite.internal.processors.cache.verify.IdleVerifyResultV2;
    +import org.apache.ignite.internal.util.typedef.X;
    +import org.apache.ignite.internal.util.typedef.internal.U;
    +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.apache.ignite.transactions.Transaction;
    +import org.apache.ignite.transactions.TransactionConcurrency;
    +import org.apache.ignite.transactions.TransactionTimeoutException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
    +
    +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    +import static org.apache.ignite.testframework.GridTestUtils.runAsync;
    +import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
    +
    +/**
    + * Tests rollback on timeout scenarios for one-phase commit protocol.
    + */
    +@RunWith(JUnit4.class)
    +public class TxRollbackOnTimeoutOnePhaseCommitTest extends GridCommonAbstractTest {
    +    /** */
    +    private static final int GRID_CNT = 2;
    +
    +    /** {@inheritDoc} */
    +    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    +        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
    +
    +        cfg.setCommunicationSpi(new TestRecordingCommunicationSpi());
    +
    +        boolean client = igniteInstanceName.startsWith("client");
    +
    +        cfg.setClientMode(client);
    +
    +        if (!client) {
    +            CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
    +
    +            ccfg.setAtomicityMode(TRANSACTIONAL);
    +            ccfg.setBackups(1);
    +            ccfg.setWriteSynchronizationMode(FULL_SYNC);
    +            ccfg.setOnheapCacheEnabled(false);
    +
    +            cfg.setCacheConfiguration(ccfg);
    +        }
    +
    +        return cfg;
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        super.beforeTest();
    +
    +        startGridsMultiThreaded(GRID_CNT);
    +
    +        startGrid("client");
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Override protected void afterTest() throws Exception {
    +        super.afterTest();
    +
    +        stopAllGrids();
    +    }
    +
    +    /** */
    +    @Test
    +    public void testRollbackOnTimeoutPartitionDesyncPessimistic() throws Exception {
    +        doTestRollbackOnTimeoutPartitionDesync(PESSIMISTIC);
    +    }
    +
    +    /** */
    +    @Test
    +    public void testRollbackOnTimeoutPartitionDesyncOptimistic() throws Exception {
    +        doTestRollbackOnTimeoutPartitionDesync(OPTIMISTIC);
    +    }
    +
    +    /** */
    +    @Test
    +    public void testUnlockOptimistic() throws IgniteCheckedException {
    +        IgniteEx client = grid("client");
    +
    +        assertNotNull(client.cache(DEFAULT_CACHE_NAME));
    +
    +        int key = 0;
    +
    +        CountDownLatch lock = new CountDownLatch(1);
    +        CountDownLatch finish = new CountDownLatch(1);
    +
    +        IgniteInternalFuture fut = runAsync(() -> {
    +            try (Transaction tx = client.transactions().txStart(PESSIMISTIC, REPEATABLE_READ, 0, 1)) {
    +                client.cache(DEFAULT_CACHE_NAME).put(key, key + 1);
    +
    +                lock.countDown();
    +
    +                try {
    +                    assertTrue(U.await(finish, 30, TimeUnit.SECONDS));
    +                }
    +                catch (IgniteInterruptedCheckedException e) {
    +                    fail();
    +                }
    +
    +                tx.commit();
    +            }
    +        });
    +
    +        try (Transaction tx = client.transactions().txStart(OPTIMISTIC, REPEATABLE_READ, 200, 1)) {
    +            try {
    +                assertTrue(U.await(lock, 30, TimeUnit.SECONDS));
    +            }
    +            catch (IgniteInterruptedCheckedException e) {
    +                fail();
    +            }
    +
    +            client.cache(DEFAULT_CACHE_NAME).put(key, key);
    +
    +            tx.commit();
    +
    +            // fail(); // TODO IGNITE-10027 throw timeout exception for optimistic timeout.
    +        }
    +        catch (Exception e) {
    +            assertTrue(e.getClass().getName(), X.hasCause(e, TransactionTimeoutException.class));
    +        }
    +
    +        assertNull(client.cache(DEFAULT_CACHE_NAME).get(key));
    +
    +        finish.countDown();
    +
    +        fut.get();
    +
    +        assertEquals(1, client.cache(DEFAULT_CACHE_NAME).get(key));
    +    }
    +
    +    /** */
    +    private void doTestRollbackOnTimeoutPartitionDesync(TransactionConcurrency concurrency) throws Exception {
    +        IgniteEx client = grid("client");
    +
    +        assertNotNull(client.cache(DEFAULT_CACHE_NAME));
    +
    +        int key = 0;
    +
    +        Ignite primary = primaryNode(key, DEFAULT_CACHE_NAME);
    +        Ignite backup = backupNode(key, DEFAULT_CACHE_NAME);
    +
    +        TestRecordingCommunicationSpi backupSpi = TestRecordingCommunicationSpi.spi(backup);
    +        backupSpi.blockMessages(GridDhtTxPrepareResponse.class, primary.name());
    +
    +        IgniteInternalFuture fut = runAsync(() -> {
    +            try {
    +                backupSpi.waitForBlocked(1, 5000);
    +            }
    +            catch (InterruptedException e) {
    +                fail();
    +            }
    +
    +            doSleep(500);
    +
    +            backupSpi.stopBlock();
    +        });
    +
    +        try (Transaction tx = client.transactions().txStart(concurrency, REPEATABLE_READ, 500, 1)) {
    +            client.cache(DEFAULT_CACHE_NAME).put(key, key);
    +
    +            tx.commit();
    +        }
    +        catch (Exception e) {
    +            assertTrue(e.getClass().getName(), X.hasCause(e, TransactionTimeoutException.class));
    +        }
    +
    +        fut.get();
    +
    +        IdleVerifyResultV2 res = idleVerify(client, DEFAULT_CACHE_NAME);
    +
    +        if (res.hasConflicts()) {
    +            StringBuilder b = new StringBuilder();
    +
    +            res.print(b::append);
    +
    +            fail(b.toString());
    +        }
    +
    +        checkFutures();
    +    }
    +}
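The new test above asserts timeouts with `X.hasCause(e, TransactionTimeoutException.class)`, i.e. it checks the whole cause chain rather than the top-level exception type, because the timeout is typically wrapped by the cache/transaction layer. A minimal re-implementation of that idiom (a sketch, not Ignite's actual `X` utility):

```java
class HasCauseSketch {
    // Walk the cause chain and report whether any link is of the given type,
    // guarding against self-referencing causes.
    static boolean hasCause(Throwable t, Class<? extends Throwable> cls) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (cls.isInstance(c))
                return true;

            if (c.getCause() == c)
                break; // Self-referencing cause: stop to avoid looping.
        }

        return false;
    }

    public static void main(String[] args) {
        // A timeout buried two levels deep is still found.
        Exception wrapped = new RuntimeException(
            new IllegalStateException(new java.util.concurrent.TimeoutException()));

        if (!hasCause(wrapped, java.util.concurrent.TimeoutException.class))
            throw new AssertionError();

        if (hasCause(wrapped, InterruptedException.class))
            throw new AssertionError();
    }
}
```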
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTimeoutTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTimeoutTest.java
    index ccf4c8aa88ee6..fb20162e5efa2 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTimeoutTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTimeoutTest.java
    @@ -22,7 +22,6 @@
     import java.util.List;
     import java.util.Map;
     import java.util.Random;
    -import java.util.UUID;
     import java.util.concurrent.CountDownLatch;
     import java.util.concurrent.ThreadLocalRandom;
     import java.util.concurrent.atomic.AtomicBoolean;
    @@ -37,8 +36,6 @@
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.configuration.NearCacheConfiguration;
    -import org.apache.ignite.events.Event;
    -import org.apache.ignite.events.EventType;
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.IgniteFutureTimeoutCheckedException;
     import org.apache.ignite.internal.IgniteInternalFuture;
    @@ -54,32 +51,25 @@
     import org.apache.ignite.internal.util.typedef.G;
     import org.apache.ignite.internal.util.typedef.X;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.internal.visor.VisorTaskArgument;
    -import org.apache.ignite.internal.visor.tx.VisorTxInfo;
    -import org.apache.ignite.internal.visor.tx.VisorTxOperation;
    -import org.apache.ignite.internal.visor.tx.VisorTxTask;
    -import org.apache.ignite.internal.visor.tx.VisorTxTaskArg;
    -import org.apache.ignite.internal.visor.tx.VisorTxTaskResult;
    -import org.apache.ignite.lang.IgniteBiPredicate;
    -import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.lang.IgniteInClosure;
    -import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.plugin.extensions.communication.Message;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     +import org.apache.ignite.testframework.GridTestUtils.SF;
     +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionDeadlockException;
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.apache.ignite.transactions.TransactionTimeoutException;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.lang.Thread.sleep;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     import static org.apache.ignite.testframework.GridTestUtils.runAsync;
    -import static org.apache.ignite.testframework.GridTestUtils.runAsync;
     import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
     import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
     import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;
    @@ -89,9 +79,10 @@
     /**
      * Tests an ability to eagerly rollback timed out transactions.
      */
    +@RunWith(JUnit4.class)
     public class TxRollbackOnTimeoutTest extends GridCommonAbstractTest {
         /** */
    -    private static final long DURATION = 60 * 1000L;
    +    private static final long DURATION = SF.apply(60 * 1000);
     
         /** */
         private static final long TX_MIN_TIMEOUT = 1;
    @@ -99,9 +90,6 @@ public class TxRollbackOnTimeoutTest extends GridCommonAbstractTest {
         /** */
         private static final String CACHE_NAME = "test";
     
    -    /** IP finder. */
    -    private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int GRID_CNT = 3;
     
    @@ -111,8 +99,6 @@ public class TxRollbackOnTimeoutTest extends GridCommonAbstractTest {
     
             cfg.setConsistentId(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             cfg.setCommunicationSpi(new TestRecordingCommunicationSpi());
     
             boolean client = "client".equals(igniteInstanceName);
    @@ -144,6 +130,9 @@ protected boolean nearCacheEnabled() {
     
         /** {@inheritDoc} */
         @Override protected void beforeTest() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            fail("https://issues.apache.org/jira/browse/IGNITE-7388");
    +
             super.beforeTest();
     
             startGridsMultiThreaded(GRID_CNT);
    @@ -184,6 +173,7 @@ protected void validateDeadlockException(Exception e) {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLockAndConcurrentTimeout() throws Exception {
             startClient();
     
    @@ -251,6 +241,7 @@ private void lock(final Ignite node, final boolean retry) throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testWaitingTxUnblockedOnTimeout() throws Exception {
             waitingTxUnblockedOnTimeout(grid(0), grid(0));
     
    @@ -274,6 +265,7 @@ public void testWaitingTxUnblockedOnTimeout() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testWaitingTxUnblockedOnThreadDeath() throws Exception {
             waitingTxUnblockedOnThreadDeath(grid(0), grid(0));
     
    @@ -297,6 +289,7 @@ public void testWaitingTxUnblockedOnThreadDeath() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlockUnblockedOnTimeout() throws Exception {
             deadlockUnblockedOnTimeout(ignite(0), ignite(1));
     
    @@ -375,6 +368,7 @@ private void deadlockUnblockedOnTimeout(final Ignite node1, final Ignite node2)
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testTimeoutRemoval() throws Exception {
             IgniteEx client = (IgniteEx)startClient();
     
    @@ -406,6 +400,7 @@ public void testTimeoutRemoval() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testSimple() throws Exception {
             for (TransactionConcurrency concurrency : TransactionConcurrency.values())
                 for (TransactionIsolation isolation : TransactionIsolation.values()) {
    @@ -417,6 +412,7 @@ public void testSimple() throws Exception {
         /**
          * Test timeouts with random values and different tx configurations.
          */
    +    @Test
         public void testRandomMixedTxConfigurations() throws Exception {
             final Ignite client = startClient();
     
    @@ -521,6 +517,7 @@ public void testRandomMixedTxConfigurations() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testTimeoutOnPrimaryDHTNode() throws Exception {
             final ClusterNode n0 = grid(0).affinity(CACHE_NAME).mapKeyToNode(0);
     
    @@ -535,6 +532,7 @@ public void testTimeoutOnPrimaryDHTNode() throws Exception {
         /**
          *
          */
    +    @Test
         public void testLockRelease() throws Exception {
             final Ignite client = startClient();
     
    @@ -596,6 +594,7 @@ public void testLockRelease() throws Exception {
         /**
          *
          */
    +    @Test
         public void testEnlistManyRead() throws Exception {
             testEnlistMany(false);
         }
    @@ -603,6 +602,7 @@ public void testEnlistManyRead() throws Exception {
         /**
          *
          */
    +    @Test
         public void testEnlistManyWrite() throws Exception {
             testEnlistMany(true);
         }
    @@ -610,6 +610,7 @@ public void testEnlistManyWrite() throws Exception {
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxRemapOptimisticReadCommitted() throws Exception {
             doTestRollbackOnTimeoutTxRemap(OPTIMISTIC, READ_COMMITTED, true);
         }
    @@ -617,6 +618,7 @@ public void testRollbackOnTimeoutTxRemapOptimisticReadCommitted() throws Excepti
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxRemapOptimisticRepeatableRead() throws Exception {
             doTestRollbackOnTimeoutTxRemap(OPTIMISTIC, REPEATABLE_READ, true);
         }
    @@ -624,6 +626,7 @@ public void testRollbackOnTimeoutTxRemapOptimisticRepeatableRead() throws Except
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxRemapOptimisticSerializable() throws Exception {
             doTestRollbackOnTimeoutTxRemap(OPTIMISTIC, SERIALIZABLE, true);
         }
    @@ -631,6 +634,7 @@ public void testRollbackOnTimeoutTxRemapOptimisticSerializable() throws Exceptio
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxRemapPessimisticReadCommitted() throws Exception {
             doTestRollbackOnTimeoutTxRemap(PESSIMISTIC, READ_COMMITTED, true);
         }
    @@ -638,6 +642,7 @@ public void testRollbackOnTimeoutTxRemapPessimisticReadCommitted() throws Except
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxRemapPessimisticRepeatableRead() throws Exception {
             doTestRollbackOnTimeoutTxRemap(PESSIMISTIC, REPEATABLE_READ, true);
         }
    @@ -645,6 +650,7 @@ public void testRollbackOnTimeoutTxRemapPessimisticRepeatableRead() throws Excep
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxRemapPessimisticSerializable() throws Exception {
             doTestRollbackOnTimeoutTxRemap(PESSIMISTIC, SERIALIZABLE, true);
         }
    @@ -652,6 +658,7 @@ public void testRollbackOnTimeoutTxRemapPessimisticSerializable() throws Excepti
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxServerRemapOptimisticReadCommitted() throws Exception {
             doTestRollbackOnTimeoutTxRemap(OPTIMISTIC, READ_COMMITTED, false);
         }
    @@ -659,6 +666,7 @@ public void testRollbackOnTimeoutTxServerRemapOptimisticReadCommitted() throws E
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxServerRemapOptimisticRepeatableRead() throws Exception {
             doTestRollbackOnTimeoutTxRemap(OPTIMISTIC, REPEATABLE_READ, false);
         }
    @@ -666,6 +674,7 @@ public void testRollbackOnTimeoutTxServerRemapOptimisticRepeatableRead() throws
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxServerRemapOptimisticSerializable() throws Exception {
             doTestRollbackOnTimeoutTxRemap(OPTIMISTIC, SERIALIZABLE, false);
         }
    @@ -673,6 +682,7 @@ public void testRollbackOnTimeoutTxServerRemapOptimisticSerializable() throws Ex
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxServerRemapPessimisticReadCommitted() throws Exception {
             doTestRollbackOnTimeoutTxRemap(PESSIMISTIC, READ_COMMITTED, false);
         }
    @@ -680,6 +690,7 @@ public void testRollbackOnTimeoutTxServerRemapPessimisticReadCommitted() throws
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxServerRemapPessimisticRepeatableRead() throws Exception {
             doTestRollbackOnTimeoutTxRemap(PESSIMISTIC, REPEATABLE_READ, false);
         }
    @@ -687,6 +698,7 @@ public void testRollbackOnTimeoutTxServerRemapPessimisticRepeatableRead() throws
         /**
          *
          */
    +    @Test
         public void testRollbackOnTimeoutTxServerRemapPessimisticSerializable() throws Exception {
             doTestRollbackOnTimeoutTxRemap(PESSIMISTIC, SERIALIZABLE, false);
         }
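The hunks above add `@Test` to each test method because JUnit 4 discovers tests by annotation, not by the JUnit 3 `testXxx` naming convention — a correctly named but unannotated method is silently skipped once the class runs under `JUnit4`. The following stand-alone sketch (not part of the patch; the `@Test` annotation and discovery loop below are simplified stand-ins for the real JUnit machinery) shows the difference:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationDiscoverySketch {
    /** Stand-in for JUnit 4's org.junit.Test annotation. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test { }

    static class SampleTests {
        /** Discovered: carries the annotation. */
        @Test public void testCommit() { }

        /** NOT discovered under JUnit 4: the name matches the old JUnit 3
         * convention, but the annotation is missing — the gap this patch fixes. */
        public void testRollback() { }
    }

    /** Counts methods an annotation-based runner would pick up. */
    static int countDiscovered(Class<?> cls) {
        int n = 0;

        for (Method m : cls.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class))
                n++;
        }

        return n;
    }

    public static void main(String[] args) {
        System.out.println(countDiscovered(SampleTests.class)); // prints 1
    }
}
```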
    @@ -819,7 +831,12 @@ private void testEnlistMany(boolean write) throws Exception {
                 tx.commit();
             }
             catch (Throwable t) {
    -            assertTrue(X.hasCause(t, TransactionTimeoutException.class));
    +            boolean timedOut = X.hasCause(t, TransactionTimeoutException.class);
    +
    +            if (!timedOut)
    +                log.error("Got unexpected exception", t);
    +
    +            assertTrue(timedOut);
             }
     
             assertEquals(0, client.cache(CACHE_NAME).size());
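The `testEnlistMany` hunk above replaces a bare `assertTrue(X.hasCause(...))` with a log-then-assert pattern, so that when a non-timeout exception arrives the test output shows what it actually was instead of an opaque assertion failure. A minimal self-contained sketch of that pattern, assuming a hand-rolled cause-chain walk in place of Ignite's `X.hasCause` utility:

```java
import java.util.concurrent.TimeoutException;

public class CauseCheckSketch {
    /** Returns true if any throwable in the cause chain is of the given type
     * (a simplified stand-in for Ignite's X.hasCause). */
    static boolean hasCause(Throwable t, Class<? extends Throwable> cls) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cls.isInstance(cur))
                return true;
        }

        return false;
    }

    public static void main(String[] args) {
        // A timeout wrapped by an outer exception, as a commit failure would be.
        Throwable t = new RuntimeException(new TimeoutException("tx timeout"));

        boolean timedOut = hasCause(t, TimeoutException.class);

        // Log first, assert second: an unexpected cause stays diagnosable.
        if (!timedOut)
            System.err.println("Got unexpected exception: " + t);

        System.out.println(timedOut); // prints true
    }
}
```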
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTopologyChangeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTopologyChangeTest.java
    index 13c5e41c27cb3..797a6397a8cb9 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTopologyChangeTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxRollbackOnTopologyChangeTest.java
    @@ -36,10 +36,13 @@
     import org.apache.ignite.internal.processors.cache.GridCacheFuture;
     import org.apache.ignite.internal.util.typedef.G;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
    +import org.junit.Assume;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.lang.Thread.yield;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    @@ -50,6 +53,7 @@
     /**
      * Tests an ability to rollback transactions on topology change.
      */
    +@RunWith(JUnit4.class)
     public class TxRollbackOnTopologyChangeTest extends GridCommonAbstractTest {
         /** */
         public static final int ROLLBACK_TIMEOUT = 500;
    @@ -57,9 +61,6 @@ public class TxRollbackOnTopologyChangeTest extends GridCommonAbstractTest {
         /** */
         private static final String CACHE_NAME = "test";
     
    -    /** IP finder. */
    -    private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int SRV_CNT = 6;
     
    @@ -79,8 +80,6 @@ public class TxRollbackOnTopologyChangeTest extends GridCommonAbstractTest {
             cfg.setTransactionConfiguration(new TransactionConfiguration().
                 setTxTimeoutOnPartitionMapExchange(ROLLBACK_TIMEOUT));
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             cfg.setCommunicationSpi(new TestRecordingCommunicationSpi());
     
             cfg.setClientMode(getTestIgniteInstanceIndex(igniteInstanceName) >= SRV_CNT);
    @@ -98,6 +97,9 @@ public class TxRollbackOnTopologyChangeTest extends GridCommonAbstractTest {
     
         /** {@inheritDoc} */
         @Override protected void beforeTest() throws Exception {
    +        Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-9322",
    +            MvccFeatureChecker.forcedMvcc()); // Don't start nodes if the only test is muted.
    +
             super.beforeTest();
     
             startGridsMultiThreaded(TOTAL_CNT);
    @@ -113,6 +115,7 @@ public class TxRollbackOnTopologyChangeTest extends GridCommonAbstractTest {
         /**
          * Tests rollbacks on topology change.
          */
    +    @Test
         public void testRollbackOnTopologyChange() throws Exception {
             final AtomicBoolean stop = new AtomicBoolean();
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxStateChangeEventTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxStateChangeEventTest.java
    index 01c87aed7ba5b..c09569b149a60 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxStateChangeEventTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxStateChangeEventTest.java
    @@ -21,15 +21,22 @@
     import org.apache.ignite.Ignite;
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.IgniteEvents;
    +import org.apache.ignite.IgniteTransactions;
    +import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.events.Event;
     import org.apache.ignite.events.TransactionStateChangedEvent;
    +import org.apache.ignite.internal.IgniteInterruptedCheckedException;
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.lang.IgnitePredicate;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
     import org.apache.ignite.transactions.TransactionState;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.events.EventType.EVTS_TX;
     import static org.apache.ignite.events.EventType.EVT_TX_COMMITTED;
    @@ -41,6 +48,7 @@
     /**
      * Tests transaction state change event.
      */
    +@RunWith(JUnit4.class)
     public class TxStateChangeEventTest extends GridCommonAbstractTest {
         /** Label. */
         private final String lb = "testLabel";
    @@ -66,21 +74,23 @@ public class TxStateChangeEventTest extends GridCommonAbstractTest {
         /**
          *
          */
    +    @Test
         public void testLocal() throws Exception {
    -        test(true);
    +        check(true);
         }
     
         /**
          *
          */
    +    @Test
         public void testRemote() throws Exception {
    -        test(false);
    +        check(false);
         }
     
         /**
          *
          */
    -    private void test(boolean loc) throws Exception {
    +    private void check(boolean loc) throws Exception {
             Ignite ignite = startGrids(5);
     
             final IgniteEvents evts = loc ? ignite.events() : grid(3).events();
    @@ -104,27 +114,51 @@ private void test(boolean loc) throws Exception {
                     },
                     EVTS_TX);
     
    -        IgniteCache cache = ignite.getOrCreateCache(defaultCacheConfiguration().setBackups(2));
    +        IgniteTransactions txs = ignite.transactions();
     
    -        // create & commit
    -        try (Transaction tx = ignite.transactions().withLabel(lb).txStart(
    -            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE, timeout, 3)) {
    -            cache.put(1, 1);
    +        IgniteCache cache = ignite.getOrCreateCache(getCacheConfig());
     
    -            tx.commit();
    +        checkCommit(txs, cache);
    +
    +        if (!MvccFeatureChecker.forcedMvcc())
    +            checkSuspendResume(txs, cache);
    +
    +        checkRollback(txs, cache);
    +    }
    +
    +    /** */
    +    @SuppressWarnings("unchecked")
    +    private CacheConfiguration getCacheConfig() {
    +        return defaultCacheConfiguration().setBackups(2);
    +    }
    +
    +    /**
    +     * @param txs Transaction manager.
    +     * @param cache Ignite cache.
    +     */
    +    private void checkRollback(IgniteTransactions txs, IgniteCache cache) {
    +        // create & rollback (pessimistic)
    +        try (Transaction tx = txs.withLabel(lb).txStart(
    +            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, timeout, 3)) {
    +            cache.put(4, 5);
             }
     
             assertTrue(
                 creation.get() &&
    -                commit.get() &&
    -                !rollback.get() &&
    +                !commit.get() &&
    +                rollback.get() &&
                     !suspend.get() &&
                     !resume.get());
    +    }
     
    -        clear();
    -
    +    /**
    +     * @param txs Transaction manager.
    +     * @param cache Ignite cache.
    +     */
    +    private void checkSuspendResume(IgniteTransactions txs,
    +        IgniteCache cache) throws IgniteInterruptedCheckedException {
             // create & suspend & resume & commit
    -        try (Transaction tx = ignite.transactions().withLabel(lb).txStart(
    +        try (Transaction tx = txs.withLabel(lb).txStart(
                 TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE, timeout, 3)) {
                 cache.put(2, 7);
     
    @@ -145,19 +179,29 @@ private void test(boolean loc) throws Exception {
                     resume.get());
     
             clear();
    +    }
     
    -        // create & rollback (pessimistic)
    -        try (Transaction tx = ignite.transactions().withLabel(lb).txStart(
    -            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE, timeout, 3)) {
    -            cache.put(4, 5);
    +    /**
    +     * @param txs Transaction manager.
    +     * @param cache Ignite cache.
    +     */
    +    private void checkCommit(IgniteTransactions txs, IgniteCache cache) {
    +        // create & commit
    +        try (Transaction tx = txs.withLabel(lb).txStart(
    +            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, timeout, 3)) {
    +            cache.put(1, 1);
    +
    +            tx.commit();
             }
     
             assertTrue(
                 creation.get() &&
    -                !commit.get() &&
    -                rollback.get() &&
    +                commit.get() &&
    +                !rollback.get() &&
                     !suspend.get() &&
                     !resume.get());
    +
    +        clear();
         }
     
         /**
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxWithSmallTimeoutAndContentionOneKeyTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxWithSmallTimeoutAndContentionOneKeyTest.java
    index 62f46352391bf..7ce79a556743a 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxWithSmallTimeoutAndContentionOneKeyTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/transactions/TxWithSmallTimeoutAndContentionOneKeyTest.java
    @@ -37,13 +37,15 @@
     import org.apache.ignite.internal.processors.cache.verify.PartitionHashRecordV2;
     import org.apache.ignite.internal.processors.cache.verify.PartitionKeyV2;
     import org.apache.ignite.internal.util.typedef.internal.SB;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.transactions.Transaction;
     import org.apache.ignite.transactions.TransactionConcurrency;
     import org.apache.ignite.transactions.TransactionIsolation;
    +import org.junit.Assume;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.testframework.GridTestUtils.runAsync;
     import static org.apache.ignite.testframework.GridTestUtils.runMultiThreadedAsync;
    @@ -56,10 +58,8 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class TxWithSmallTimeoutAndContentionOneKeyTest extends GridCommonAbstractTest {
    -    /** */
    -    public static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int TIME_TO_EXECUTE = 30 * 1000;
     
    @@ -72,8 +72,6 @@ public class TxWithSmallTimeoutAndContentionOneKeyTest extends GridCommonAbstrac
     
             cfg.setConsistentId("NODE_" + name.substring(name.length() - 1));
     
    -        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER));
    -
             cfg.setDataStorageConfiguration(
                 new DataStorageConfiguration()
                     .setDefaultDataRegionConfiguration(
    @@ -112,6 +110,9 @@ public class TxWithSmallTimeoutAndContentionOneKeyTest extends GridCommonAbstrac
          * @return Random transaction type.
          */
         protected TransactionConcurrency transactionConcurrency() {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            return PESSIMISTIC;
    +
             ThreadLocalRandom random = ThreadLocalRandom.current();
     
             return random.nextBoolean() ? OPTIMISTIC : PESSIMISTIC;
    @@ -121,6 +122,9 @@ protected TransactionConcurrency transactionConcurrency() {
          * @return Random transaction isolation level.
          */
         protected TransactionIsolation transactionIsolation(){
    +        if (MvccFeatureChecker.forcedMvcc())
    +            return REPEATABLE_READ;
    +
             ThreadLocalRandom random = ThreadLocalRandom.current();
     
             switch (random.nextInt(3)) {
    @@ -149,7 +153,10 @@ protected long randomTimeOut() {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void test() throws Exception {
    +        Assume.assumeFalse("https://issues.apache.org/jira/browse/IGNITE-10455", MvccFeatureChecker.forcedMvcc());
    +
             startGrids(4);
     
             client = true;
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryAbstractTest.java
    index 16ea84857840a..731970507ba74 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryAbstractTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryAbstractTest.java
    @@ -28,10 +28,14 @@
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.cache.CacheEntry;
     import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Versioned entry abstract test.
      */
    +@RunWith(JUnit4.class)
     public abstract class CacheVersionedEntryAbstractTest extends GridCacheAbstractSelfTest {
         /** Entries number to store in a cache. */
         private static final int ENTRIES_NUM = 500;
    @@ -54,6 +58,7 @@ public abstract class CacheVersionedEntryAbstractTest extends GridCacheAbstractS
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInvoke() throws Exception {
             Cache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -71,6 +76,7 @@ public void testInvoke() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testInvokeAll() throws Exception {
             Cache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -95,6 +101,7 @@ public void testInvokeAll() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalPeek() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -107,6 +114,7 @@ public void testLocalPeek() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testVersionComparision() throws Exception {
             IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -142,4 +150,4 @@ private void checkVersionedEntry(CacheEntry entry) {
             assertNotNull(entry.getKey());
             assertNotNull(entry.getValue());
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryLocalTransactionalSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryLocalTransactionalSelfTest.java
    index d7fc93809848d..1797bbea40232 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryLocalTransactionalSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/version/CacheVersionedEntryLocalTransactionalSelfTest.java
    @@ -19,6 +19,7 @@
     
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.cache.CacheMode;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     
     /**
      *
    @@ -38,4 +39,12 @@ public class CacheVersionedEntryLocalTransactionalSelfTest extends CacheVersione
         @Override protected CacheAtomicityMode atomicityMode() {
             return CacheAtomicityMode.TRANSACTIONAL;
         }
    -}
    \ No newline at end of file
    +
    +    /** {@inheritDoc} */
    +    @Override public void setUp() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
    +
    +        super.setUp();
    +    }
    +
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorRemoteTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorRemoteTest.java
    index 5fd84165fc477..6513d963f5930 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorRemoteTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorRemoteTest.java
    @@ -31,11 +31,15 @@
     import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.testframework.junits.common.GridCommonTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests execution of anonymous closures on remote nodes.
      */
     @GridCommonTest(group = "Closure Processor")
    +@RunWith(JUnit4.class)
     public class GridClosureProcessorRemoteTest extends GridCommonAbstractTest {
         /** Number of grids started for tests. Should not be less than 2. */
         public static final int NODES_CNT = 2;
    @@ -67,6 +71,7 @@ public class GridClosureProcessorRemoteTest extends GridCommonAbstractTest {
         /**
          * @throws Exception Thrown in case of failure.
          */
    +    @Test
         public void testAnonymousBroadcast() throws Exception {
             Ignite g = grid(0);
     
    @@ -91,6 +96,7 @@ public void testAnonymousBroadcast() throws Exception {
         /**
          * @throws Exception Thrown in case of failure.
          */
    +    @Test
         public void testAnonymousUnicast() throws Exception {
             Ignite g = grid(0);
     
    @@ -118,6 +124,7 @@ public void testAnonymousUnicast() throws Exception {
          *
          * @throws Exception Thrown in case of failure.
          */
    +    @Test
         public void testAnonymousUnicastRequest() throws Exception {
             Ignite g = grid(0);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorSelfTest.java
    index fef74e8f4d145..b851d0663b527 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureProcessorSelfTest.java
    @@ -41,18 +41,19 @@
     import org.apache.ignite.lang.IgniteRunnable;
     import org.apache.ignite.resources.IgniteInstanceResource;
     import org.apache.ignite.resources.LoggerResource;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.testframework.junits.common.GridCommonTest;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests for {@link GridClosureProcessor}.
      */
     @GridCommonTest(group = "Closure Processor")
    +@RunWith(JUnit4.class)
     public class GridClosureProcessorSelfTest extends GridCommonAbstractTest {
         /** Number of grids started for tests. Should not be less than 2. */
         private static final int NODES_CNT = 2;
    @@ -63,19 +64,10 @@ public class GridClosureProcessorSelfTest extends GridCommonAbstractTest {
         /** Timeout used in timed tests. */
         private static final long JOB_TIMEOUT = 100;
     
    -    /** IP finder. */
    -    private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    -
    -        discoSpi.setIpFinder(ipFinder);
    -
    -        cfg.setDiscoverySpi(discoSpi);
    -
             cfg.setCacheConfiguration();
     
             return cfg;
    @@ -314,6 +306,7 @@ private IgnitePredicate singleNodePredicate(final int idx) {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRunAsyncSingle() throws Exception {
             IgniteRunnable job = new ClosureTestRunnable();
     
    @@ -340,6 +333,7 @@ public void testRunAsyncSingle() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRunAsyncMultiple() throws Exception {
             Collection jobs = F.asList(new ClosureTestRunnable(), new ClosureTestRunnable());
     
    @@ -354,6 +348,7 @@ public void testRunAsyncMultiple() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallAsyncSingle() throws Exception {
             IgniteCallable job = new ClosureTestCallable();
     
    @@ -382,6 +377,7 @@ public void testCallAsyncSingle() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallAsyncErrorNoFailover() throws Exception {
             IgniteCompute comp = compute(grid(0).cluster().forPredicate(F.notEqualTo(grid(0).localNode())));
     
    @@ -400,6 +396,7 @@ public void testCallAsyncErrorNoFailover() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testWithName() throws Exception {
             grid(0).compute().withName("TestTaskName").call(new ClosureTestCallable());
         }
    @@ -407,6 +404,7 @@ public void testWithName() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testWithTimeout() throws Exception {
             Collection jobs = F.asList(new TestCallableTimeout());
     
    @@ -438,6 +436,7 @@ public void testWithTimeout() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallAsyncMultiple() throws Exception {
             Collection jobs = F.asList(new ClosureTestCallable(), new ClosureTestCallable());
     
    @@ -457,6 +456,7 @@ public void testCallAsyncMultiple() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReduceAsync() throws Exception {
             Collection jobs = F.asList(new ClosureTestCallable(), new ClosureTestCallable());
     
    @@ -477,6 +477,7 @@ public void testReduceAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReducerError() throws Exception {
             final Ignite g = grid(0);
     
    @@ -508,4 +509,4 @@ public void testReducerError() throws Exception {
                 }
             }, IgniteException.class, null);
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureSerializationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureSerializationTest.java
    index c6d14d81231c5..84970d0fd5ca5 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureSerializationTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/closure/GridClosureSerializationTest.java
    @@ -31,10 +31,14 @@
     import org.apache.ignite.resources.JobContextResource;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests handling of job result serialization error.
      */
    +@RunWith(JUnit4.class)
     public class GridClosureSerializationTest extends GridCommonAbstractTest {
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(final String igniteInstanceName) throws Exception {
    @@ -56,7 +60,8 @@ public class GridClosureSerializationTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    -    @SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "Convert2Lambda"})
    +    @SuppressWarnings({"Convert2Lambda"})
    +    @Test
         public void testSerializationFailure() throws Exception {
             final IgniteEx ignite0 = grid(0);
             final IgniteEx ignite1 = grid(1);
    @@ -77,7 +82,8 @@ public void testSerializationFailure() throws Exception {
         /**
          * @throws Exception If failed.
          */
    -    @SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "Convert2Lambda"})
    +    @SuppressWarnings({"Convert2Lambda"})
    +    @Test
         public void testExceptionSerializationFailure() throws Exception {
             final IgniteEx ignite0 = grid(0);
             final IgniteEx ignite1 = grid(1);
    @@ -98,7 +104,8 @@ public void testExceptionSerializationFailure() throws Exception {
         /**
          * @throws Exception If failed.
          */
    -    @SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "Convert2Lambda"})
    +    @SuppressWarnings({"Convert2Lambda"})
    +    @Test
         public void testAttributesSerializationFailure() throws Exception {
             final IgniteEx ignite0 = grid(0);
             final IgniteEx ignite1 = grid(1);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridAddressResolverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridAddressResolverSelfTest.java
    index 2b706d5c2cf03..d29f062b0be93 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridAddressResolverSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridAddressResolverSelfTest.java
    @@ -30,11 +30,15 @@
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.testframework.junits.common.GridCommonTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Address Resolver test.
      */
     @GridCommonTest(group = "Kernal Self")
    +@RunWith(JUnit4.class)
     public class GridAddressResolverSelfTest extends GridCommonAbstractTest {
         /** */
         private final InetSocketAddress addr0 = new InetSocketAddress("test0.com", 5000);
    @@ -76,6 +80,7 @@ public class GridAddressResolverSelfTest extends GridCommonAbstractTest {
         }
     
         /** */
    +    @Test
         public void test() throws Exception {
             startGrid(0);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridUpdateNotifierSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridUpdateNotifierSelfTest.java
    index 58c41b5165bfb..c73dd27025a18 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridUpdateNotifierSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/cluster/GridUpdateNotifierSelfTest.java
    @@ -24,12 +24,16 @@
     import org.apache.ignite.lang.IgniteProductVersion;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.ignite.testframework.junits.common.GridCommonTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     import org.mockito.Mockito;
     
     /**
      * Update notifier test.
      */
     @GridCommonTest(group = "Kernal Self")
    +@RunWith(JUnit4.class)
     public class GridUpdateNotifierSelfTest extends GridCommonAbstractTest {
         /** */
         private String updateStatusParams;
    @@ -64,6 +68,7 @@ public class GridUpdateNotifierSelfTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNotifier() throws Exception {
             String nodeVer = IgniteProperties.get("ignite.version");
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/GridComputeJobExecutionErrorToLogManualTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/GridComputeJobExecutionErrorToLogManualTest.java
    index dc24156919be1..4bea890a0c802 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/GridComputeJobExecutionErrorToLogManualTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/GridComputeJobExecutionErrorToLogManualTest.java
    @@ -18,38 +18,22 @@
     package org.apache.ignite.internal.processors.compute;
     
     import org.apache.ignite.Ignite;
    -import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.lang.IgniteFuture;
     import org.apache.ignite.lang.IgniteInClosure;
     import org.apache.ignite.lang.IgniteRunnable;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Manual test to reproduce IGNITE-4053
      */
    +@RunWith(JUnit4.class)
     public class GridComputeJobExecutionErrorToLogManualTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int GRID_CNT = 2;
     
    -    /** {@inheritDoc} */
    -    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    -        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
    -
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(ipFinder);
    -
    -        cfg.setDiscoverySpi(disco);
    -
    -        return cfg;
    -    }
    -
         /** {@inheritDoc} */
         @Override protected void beforeTestsStarted() throws Exception {
             startGridsMultiThreaded(GRID_CNT, true);
    @@ -58,6 +42,7 @@ public class GridComputeJobExecutionErrorToLogManualTest extends GridCommonAbstr
         /**
          * @throws Exception If fails.
          */
    +    @Test
         public void testRuntimeException() throws Exception {
             Ignite ignite = grid(0);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeConfigVariationsFullApiTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeConfigVariationsFullApiTest.java
    index d1fbb341373f4..3e6736ed06deb 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeConfigVariationsFullApiTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeConfigVariationsFullApiTest.java
    @@ -50,11 +50,15 @@
     import org.apache.ignite.testframework.junits.IgniteConfigVariationsAbstractTest;
     import org.jetbrains.annotations.Nullable;
     import org.junit.Assert;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Full API compute test.
      */
     @SuppressWarnings("unchecked")
    +@RunWith(JUnit4.class)
     public class IgniteComputeConfigVariationsFullApiTest extends IgniteConfigVariationsAbstractTest {
         /** Max job count. */
         private static final int MAX_JOB_COUNT = 10;
    @@ -177,6 +181,7 @@ protected void runTest(final Factory[] factories, final ComputeTest test) throws
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testExecuteTaskClass() throws Exception {
             runTest(jobFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -201,6 +206,7 @@ public void testExecuteTaskClass() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testExecuteTaskClassAsync() throws Exception {
             runTest(jobFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -225,6 +231,7 @@ public void testExecuteTaskClassAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testExecuteTask() throws Exception {
             runTest(jobFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -248,6 +255,7 @@ public void testExecuteTask() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testExecuteTaskAsync() throws Exception {
             runTest(jobFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -271,6 +279,7 @@ public void testExecuteTaskAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBroadcastClosure() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -294,6 +303,7 @@ public void testBroadcastClosure() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBroadcastClosureAsync() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -317,6 +327,7 @@ public void testBroadcastClosureAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBroadcastCallable() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -346,6 +357,7 @@ public void testBroadcastCallable() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBroadcastCallableAsync() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -375,6 +387,7 @@ public void testBroadcastCallableAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBroadcastRunnable() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -389,6 +402,7 @@ public void testBroadcastRunnable() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testBroadcastRunnableAsync() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -405,6 +419,7 @@ public void testBroadcastRunnableAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRun() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -427,6 +442,7 @@ public void testRun() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRunAsync() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -453,6 +469,7 @@ public void testRunAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApplyAsync() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -480,6 +497,7 @@ public void testApplyAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApply() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -499,6 +517,7 @@ public void testApply() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApplyForCollection() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -524,6 +543,7 @@ public void testApplyForCollection() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApplyForCollectionAsync() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -550,6 +570,7 @@ public void testApplyForCollectionAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApplyForCollectionWithReducer() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -585,6 +606,7 @@ public void testApplyForCollectionWithReducer() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApplyForCollectionWithReducerAsync() throws Exception {
             runTest(closureFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -620,6 +642,7 @@ public void testApplyForCollectionWithReducerAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallAsync() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -648,6 +671,7 @@ public void testCallAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCall() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -668,6 +692,7 @@ public void testCall() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallCollection() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -690,6 +715,7 @@ public void testCallCollection() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallCollectionAsync() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -712,6 +738,7 @@ public void testCallCollectionAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallCollectionWithReducer() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -746,6 +773,7 @@ public void testCallCollectionWithReducer() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCallCollectionWithReducerAsync() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -780,6 +808,7 @@ public void testCallCollectionWithReducerAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAffinityCall() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -806,6 +835,7 @@ public void testAffinityCall() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAffinityCallAsync() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -834,6 +864,7 @@ public void testAffinityCallAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheAffinityCall() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -861,6 +892,7 @@ public void testMultiCacheAffinityCall() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheAffinityCallAsync() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -890,6 +922,7 @@ public void testMultiCacheAffinityCallAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheByPartIdAffinityCall() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -917,6 +950,7 @@ public void testMultiCacheByPartIdAffinityCall() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheByPartIdAffinityCallAsync() throws Exception {
             runTest(callableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -946,6 +980,7 @@ public void testMultiCacheByPartIdAffinityCallAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAffinityRun() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -965,6 +1000,7 @@ public void testAffinityRun() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAffinityRunAsync() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -986,6 +1022,7 @@ public void testAffinityRunAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheAffinityRun() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -1006,6 +1043,7 @@ public void testMultiCacheAffinityRun() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheAffinityRunAsync() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -1028,6 +1066,7 @@ public void testMultiCacheAffinityRunAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheByPartIdAffinityRun() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -1048,6 +1087,7 @@ public void testMultiCacheByPartIdAffinityRun() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultiCacheByPartIdAffinityRunAsync() throws Exception {
             runTest(runnableFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -1070,6 +1110,7 @@ public void testMultiCacheByPartIdAffinityRunAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeployExecuteByName() throws Exception {
             runTest(jobFactories, new ComputeTest() {
                 @Override public void test(Factory factory, Ignite ignite) throws Exception {
    @@ -1127,7 +1168,8 @@ public interface ComputeTest {
              * @param ignite Ignite instance to use.
              * @throws Exception If failed.
              */
    -        public void test(Factory factory, Ignite ignite) throws Exception;
    +        @Test
    +        public void test(Factory factory, Ignite ignite) throws Exception;
         }
     
         /**
    @@ -2469,4 +2511,4 @@ private static void writeJobState(ObjectOutput out, boolean isVal, byte bVal, ch
                 out.writeObject(eVal);
             }
         }
    -}
    \ No newline at end of file
    +}
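
The hunks above all apply the same migration pattern: JUnit 3 ran any `public void test*()` method of a `TestCase` subclass by name convention, while JUnit 4 discovers tests by the `@Test` annotation via reflection, which is why every method gains the annotation even though its name already starts with `test`. A minimal, self-contained sketch of that discovery difference (class and method names are hypothetical, and a local stand-in annotation is used so the sketch runs without JUnit on the classpath):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class MigrationSketch {
    /** Stand-in for org.junit.Test so the sketch compiles without the JUnit jar. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test { }

    /** Hypothetical migrated test class: annotation, not naming, marks the tests. */
    static class SampleSelfTest {
        @Test public void testNotifier() { }

        /** Named like a JUnit 3 test, but NOT annotated, so JUnit 4 ignores it. */
        public void testHelperNotAnnotated() { }
    }

    /** Mimics a JUnit 4-style runner: collect methods carrying the annotation. */
    static List<String> discover(Class<?> cls) {
        List<String> found = new ArrayList<>();

        for (Method m : cls.getDeclaredMethods())
            if (m.isAnnotationPresent(Test.class))
                found.add(m.getName());

        return found;
    }

    public static void main(String[] args) {
        // Only the annotated method is discovered.
        System.out.println(discover(SampleSelfTest.class));
    }
}
```

This is why the diff also replaces the old `fail("https://issues.apache.org/jira/browse/IGNITE-585")` idiom with `@Ignore(...)`: under annotation-based discovery the runner can skip a known-broken test declaratively instead of failing it at runtime.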
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorConfigurationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorConfigurationSelfTest.java
    index 2277100067848..64689105ed0af 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorConfigurationSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorConfigurationSelfTest.java
    @@ -21,34 +21,20 @@
     import org.apache.ignite.Ignition;
     import org.apache.ignite.configuration.ExecutorConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests custom executor configuration.
      */
    +@RunWith(JUnit4.class)
     public class IgniteComputeCustomExecutorConfigurationSelfTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
    -    /** {@inheritDoc} */
    -    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    -        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
    -
    -        TcpDiscoverySpi disco = new TcpDiscoverySpi();
    -
    -        disco.setIpFinder(ipFinder);
    -
    -        cfg.setDiscoverySpi(disco);
    -
    -        return cfg;
    -    }
    -
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConfigurations() throws Exception {
             try {
                 checkStartWithInvalidConfiguration(getConfiguration("node0")
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorSelfTest.java
    index 18c52c0b5d94f..b59fdcfdb2024 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/IgniteComputeCustomExecutorSelfTest.java
    @@ -36,12 +36,16 @@
     import org.apache.ignite.lang.IgniteRunnable;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests custom executor named pools.
      *
      * https://issues.apache.org/jira/browse/IGNITE-4699
      */
    +@RunWith(JUnit4.class)
     public class IgniteComputeCustomExecutorSelfTest extends GridCommonAbstractTest {
         /** */
         private static final int GRID_CNT = 2;
    @@ -96,6 +100,7 @@ private ExecutorConfiguration createExecConfiguration(String name) {
         /**
          * @throws Exception If fails.
          */
    +    @Test
         public void testInvalidCustomExecutor() throws Exception {
             grid(0).compute().withExecutor("invalid").broadcast(new IgniteRunnable() {
                 @Override public void run() {
    @@ -107,6 +112,7 @@ public void testInvalidCustomExecutor() throws Exception {
         /**
          * @throws Exception If fails.
          */
    +    @Test
         public void testAllComputeApiByCustomExecutor() throws Exception {
             IgniteCompute comp = grid(0).compute().withExecutor(EXEC_NAME0);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/PublicThreadpoolStarvationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/PublicThreadpoolStarvationTest.java
    index dd9e0d5090df8..c99b55a1c7b5c 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/PublicThreadpoolStarvationTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/compute/PublicThreadpoolStarvationTest.java
    @@ -25,6 +25,9 @@
     import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest;
     import org.apache.ignite.lang.IgniteRunnable;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -33,6 +36,7 @@
      * Test to validate https://issues.apache.org/jira/browse/IGNITE-4239
      * Jobs hang when a lot of jobs calculate cache.
      */
    +@RunWith(JUnit4.class)
     public class PublicThreadpoolStarvationTest extends GridCacheAbstractSelfTest {
         /** Cache size. */
         private static final int CACHE_SIZE = 10;
    @@ -113,6 +117,7 @@ private void fillCaches() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCacheSizeOnPublicThreadpoolStarvation() throws Exception {
             grid(0).compute().run(new IgniteRunnable() {
                 @Override public void run() {
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridEventConsumeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridEventConsumeSelfTest.java
    index 1a7abd45fa921..01dcb7edf06e6 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridEventConsumeSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridEventConsumeSelfTest.java
    @@ -49,10 +49,11 @@
     import org.apache.ignite.lang.IgnitePredicate;
     import org.apache.ignite.resources.IgniteInstanceResource;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static java.util.concurrent.TimeUnit.SECONDS;
    @@ -67,6 +68,7 @@
     /**
      * Event consume test.
      */
    +@RunWith(JUnit4.class)
     public class GridEventConsumeSelfTest extends GridCommonAbstractTest {
         /** */
         private static final String PRJ_PRED_CLS_NAME = "org.apache.ignite.tests.p2p.GridEventConsumeProjectionPredicate";
    @@ -80,9 +82,6 @@ public class GridEventConsumeSelfTest extends GridCommonAbstractTest {
         /** Number of created consumes per thread in multithreaded test. */
         private static final int CONSUME_CNT = 500;
     
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** Consume latch. */
         private static volatile CountDownLatch consumeLatch;
     
    @@ -99,12 +98,6 @@ public class GridEventConsumeSelfTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        TcpDiscoverySpi disc = new TcpDiscoverySpi();
    -
    -        disc.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(disc);
    -
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
             if (include)
    @@ -174,6 +167,7 @@ private Collection localRoutines(GridContinuousProcessor proc)
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApi() throws Exception {
             try {
                 grid(0).events().stopRemoteListen(null);
    @@ -251,6 +245,7 @@ public void testApi() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApiAsyncOld() throws Exception {
             IgniteEvents evtAsync = grid(0).events().withAsync();
     
    @@ -341,6 +336,7 @@ public void testApiAsyncOld() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testApiAsync() throws Exception {
             IgniteEvents evt = grid(0).events();
     
    @@ -420,6 +416,7 @@ public void testApiAsync() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAllEvents() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -460,6 +457,7 @@ public void testAllEvents() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEventsByType() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -501,6 +499,7 @@ public void testEventsByType() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEventsByFilter() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -545,6 +544,7 @@ public void testEventsByFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testEventsByTypeAndFilter() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -591,6 +591,7 @@ public void testEventsByTypeAndFilter() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteProjection() throws Exception {
             final Collection<UUID> nodeIds = new ConcurrentSkipListSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -632,6 +633,7 @@ public void testRemoteProjection() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testProjectionWithLocalNode() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -673,6 +675,7 @@ public void testProjectionWithLocalNode() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalNodeOnly() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -716,6 +719,7 @@ public void testLocalNodeOnly() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStopByCallback() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -757,6 +761,7 @@ public void testStopByCallback() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStopRemoteListen() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -807,6 +812,7 @@ public void testStopRemoteListen() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStopLocalListenByCallback() throws Exception {
             final AtomicInteger cnt = new AtomicInteger();
             final CountDownLatch latch = new CountDownLatch(1);
    @@ -842,6 +848,7 @@ public void testStopLocalListenByCallback() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNodeJoin() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -893,6 +900,7 @@ public void testNodeJoin() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNodeJoinWithProjection() throws Exception {
             final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -947,9 +955,9 @@ public void testNodeJoinWithProjection() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-585")
    +    @Test
         public void testNodeJoinWithP2P() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-585");
    -
              final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
             final CountDownLatch latch = new CountDownLatch(GRID_CNT + 1);
    @@ -995,6 +1003,7 @@ public void testNodeJoinWithP2P() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testResources() throws Exception {
              final Collection<UUID> nodeIds = new HashSet<>();
             final AtomicInteger cnt = new AtomicInteger();
    @@ -1049,6 +1058,7 @@ public void testResources() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMasterNodeLeave() throws Exception {
             final CountDownLatch latch = new CountDownLatch(GRID_CNT);
     
    @@ -1088,6 +1098,7 @@ public void testMasterNodeLeave() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMasterNodeLeaveNoAutoUnsubscribe() throws Exception {
             Ignite g = startGrid("anotherGrid");
     
    @@ -1145,7 +1156,8 @@ public void testMasterNodeLeaveNoAutoUnsubscribe() throws Exception {
         /**
          * @throws Exception If failed.
          */
    -    public void _testMultithreadedWithNodeRestart() throws Exception {
    +    @Test
    +    public void testMultithreadedWithNodeRestart() throws Exception {
             final AtomicBoolean stop = new AtomicBoolean();
             final BlockingQueue> queue = new LinkedBlockingQueue<>();
             final Collection started = new GridConcurrentHashSet<>();
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridMessageListenSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridMessageListenSelfTest.java
    index f5691e563e38c..ce520b3ae8c8b 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridMessageListenSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/GridMessageListenSelfTest.java
    @@ -39,12 +39,16 @@
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.SECONDS;
     
     /**
      * Message listen test.
      */
    +@RunWith(JUnit4.class)
     public class GridMessageListenSelfTest extends GridCommonAbstractTest {
         /** */
         private static final int GRID_CNT = 3;
    @@ -142,6 +146,7 @@ public class GridMessageListenSelfTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNullTopic() throws Exception {
             latch = new CountDownLatch(MSG_CNT * GRID_CNT);
     
    @@ -161,6 +166,7 @@ public void testNullTopic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonNullTopic() throws Exception {
             latch = new CountDownLatch(MSG_CNT * GRID_CNT);
     
    @@ -180,6 +186,7 @@ public void testNonNullTopic() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStopListen() throws Exception {
             latch = new CountDownLatch(GRID_CNT);
     
    @@ -205,6 +212,7 @@ public void testStopListen() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testProjection() throws Exception {
             latch = new CountDownLatch(MSG_CNT * (GRID_CNT - 1));
     
    @@ -224,6 +232,7 @@ public void testProjection() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNodeJoin() throws Exception {
             latch = new CountDownLatch(MSG_CNT * (GRID_CNT + 1));
     
    @@ -256,6 +265,7 @@ public void testNodeJoin() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNodeJoinWithProjection() throws Exception {
             latch = new CountDownLatch(MSG_CNT * GRID_CNT);
     
    @@ -295,6 +305,7 @@ public void testNodeJoinWithProjection() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNullTopicWithDeployment() throws Exception {
              Class<?> cls = getExternalClassLoader().loadClass(LSNR_CLS_NAME);
     
    @@ -314,6 +325,7 @@ public void testNullTopicWithDeployment() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNonNullTopicWithDeployment() throws Exception {
             ClassLoader ldr = getExternalClassLoader();
     
    @@ -338,6 +350,7 @@ public void testNonNullTopicWithDeployment() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testListenActor() throws Exception {
             latch = new CountDownLatch(MSG_CNT * (GRID_CNT + 1));
     
    @@ -491,4 +504,4 @@ private Actor(UUID sourceNodeId) {
                 latch.countDown();
             }
         }
    -}
    \ No newline at end of file
    +}
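
Note: the hunks above all follow one mechanical JUnit 3 to JUnit 4 migration pattern. Classes get `@RunWith(JUnit4.class)`, every `test*` method gets an explicit `@Test` annotation, and muted tests (a `fail("jira-link")` first statement, or a leading underscore as in `_testMultithreadedWithNodeRestart`) become `@Ignore` carrying the JIRA URL. The key behavioral difference is that JUnit 4 selects tests by annotation, not by the `test` name prefix. A standalone sketch of that discovery rule, using mock `@Test`/`@Ignore` annotations (not the real `org.junit` ones) so it runs without a JUnit dependency:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Junit4DiscoverySketch {
    /** Stand-in for org.junit.Test. */
    @Retention(RetentionPolicy.RUNTIME) @interface Test {}

    /** Stand-in for org.junit.Ignore. */
    @Retention(RetentionPolicy.RUNTIME) @interface Ignore { String value(); }

    /** Mirrors the migrated test classes: annotated methods, one muted via @Ignore. */
    static class SampleSelfTest {
        @Test public void testStopByCallback() {}

        @Ignore("https://issues.apache.org/jira/browse/IGNITE-585")
        @Test public void testNodeJoinWithP2P() {}

        /** Not annotated: a JUnit 4 runner never invokes it, regardless of its name. */
        public void testShapedHelper() {}
    }

    public static void main(String[] args) {
        List<String> runnable = new ArrayList<>();
        List<String> ignored = new ArrayList<>();

        for (Method m : SampleSelfTest.class.getDeclaredMethods()) {
            if (!m.isAnnotationPresent(Test.class))
                continue; // The "test" name prefix alone no longer matters.

            if (m.isAnnotationPresent(Ignore.class))
                ignored.add(m.getName());
            else
                runnable.add(m.getName());
        }

        Collections.sort(runnable);
        Collections.sort(ignored);

        System.out.println("run=" + runnable + " skip=" + ignored);
    }
}
```

An `@Ignore` test is reported as skipped instead of executed-and-failed, which is why the `fail("jira-link")` mute pattern converts to it cleanly.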
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/IgniteNoCustomEventsOnNodeStart.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/IgniteNoCustomEventsOnNodeStart.java
    index dc151a62bd418..63d6f4b76579b 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/IgniteNoCustomEventsOnNodeStart.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/continuous/IgniteNoCustomEventsOnNodeStart.java
    @@ -21,18 +21,17 @@
     import org.apache.ignite.internal.processors.cache.CacheAffinityChangeMessage;
     import org.apache.ignite.spi.discovery.DiscoverySpiCustomMessage;
     import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Sanity test to verify there are no unnecessary messages on node start.
      */
    +@RunWith(JUnit4.class)
     public class IgniteNoCustomEventsOnNodeStart extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private boolean client;
     
    @@ -43,11 +42,6 @@ public class IgniteNoCustomEventsOnNodeStart extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        TestTcpDiscoverySpi discoSpi = new TestTcpDiscoverySpi();
    -        discoSpi.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(discoSpi);
    -
             cfg.setClientMode(client);
     
             return cfg;
    @@ -56,6 +50,7 @@ public class IgniteNoCustomEventsOnNodeStart extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNoCustomEventsOnStart() throws Exception {
             failed = false;
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/BPlusTreeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/BPlusTreeSelfTest.java
    index 487cdbe25bd97..4574dc3f3a0bf 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/BPlusTreeSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/BPlusTreeSelfTest.java
    @@ -77,6 +77,10 @@
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.Nullable;
     import org.jsr166.ConcurrentLinkedHashMap;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.internal.pagemem.PageIdUtils.effectivePageId;
     import static org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.rnd;
    @@ -86,6 +90,7 @@
     
     /**
      */
    +@RunWith(JUnit4.class)
     public class BPlusTreeSelfTest extends GridCommonAbstractTest {
         /** */
         private static final short LONG_INNER_IO = 30000;
    @@ -204,7 +209,7 @@ protected ReuseList createReuseList(int cacheId, PageMemory pageMem, long rootId
             }
             finally {
                 if (pageMem != null)
    -                pageMem.stop();
    +                pageMem.stop(true);
     
                 MAX_PER_PAGE = 0;
                 PUT_INC = 1;
    @@ -216,6 +221,7 @@ protected ReuseList createReuseList(int cacheId, PageMemory pageMem, long rootId
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testFind() throws IgniteCheckedException {
             TestTree tree = createTestTree(true);
              TreeMap<Long, Long> map = new TreeMap<>();
    @@ -234,6 +240,7 @@ public void testFind() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testRetries() throws IgniteCheckedException {
             TestTree tree = createTestTree(true);
     
    @@ -251,9 +258,34 @@ public void testRetries() throws IgniteCheckedException {
             }
         }
     
    +    /**
    +     * @throws Exception if failed.
    +     */
    +    @Test
    +    public void testIsEmpty() throws Exception {
    +        TestTree tree = createTestTree(true);
    +
    +        assertTrue(tree.isEmpty());
    +
    +        for (long i = 1; i <= 500; i++) {
    +            tree.put(i);
    +
    +            assertFalse(tree.isEmpty());
    +        }
    +
    +        for (long i = 1; i <= 500; i++) {
    +            assertFalse(tree.isEmpty());
    +
    +            tree.remove(i);
    +        }
    +
    +        assertTrue(tree.isEmpty());
    +    }
    +
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testFindWithClosure() throws IgniteCheckedException {
             TestTree tree = createTestTree(true);
              TreeMap<Long, Long> map = new TreeMap<>();
    @@ -350,6 +382,7 @@ private void checkCursor(GridCursor cursor, Iterator iterator) throw
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_mm_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -362,6 +395,7 @@ public void testPutRemove_1_20_mm_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_mm_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -374,6 +408,7 @@ public void testPutRemove_1_20_mm_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_pm_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -386,6 +421,7 @@ public void testPutRemove_1_20_pm_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_pm_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -398,6 +434,7 @@ public void testPutRemove_1_20_pm_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_pp_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -410,6 +447,7 @@ public void testPutRemove_1_20_pp_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_pp_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -422,6 +460,7 @@ public void testPutRemove_1_20_pp_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_mp_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -434,6 +473,7 @@ public void testPutRemove_1_20_mp_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_1_20_mp_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 20;
    @@ -447,6 +487,7 @@ public void testPutRemove_1_20_mp_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_mm_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -459,6 +500,7 @@ public void testPutRemove_2_40_mm_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_mm_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -471,6 +513,7 @@ public void testPutRemove_2_40_mm_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_pm_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -483,6 +526,7 @@ public void testPutRemove_2_40_pm_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_pm_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -495,6 +539,7 @@ public void testPutRemove_2_40_pm_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_pp_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -507,6 +552,7 @@ public void testPutRemove_2_40_pp_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_pp_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -519,6 +565,7 @@ public void testPutRemove_2_40_pp_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_mp_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -531,6 +578,7 @@ public void testPutRemove_2_40_mp_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_2_40_mp_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 2;
             CNT = 40;
    @@ -544,6 +592,7 @@ public void testPutRemove_2_40_mp_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_mm_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -556,6 +605,7 @@ public void testPutRemove_3_60_mm_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_mm_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -568,6 +618,7 @@ public void testPutRemove_3_60_mm_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_pm_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -580,6 +631,7 @@ public void testPutRemove_3_60_pm_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_pm_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -592,6 +644,7 @@ public void testPutRemove_3_60_pm_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_pp_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -604,6 +657,7 @@ public void testPutRemove_3_60_pp_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_pp_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -616,6 +670,7 @@ public void testPutRemove_3_60_pp_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_mp_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -628,6 +683,7 @@ public void testPutRemove_3_60_mp_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testPutRemove_3_60_mp_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 3;
             CNT = 60;
    @@ -744,6 +800,7 @@ private void checkIterateC(TestTree tree, long lower, long upper, TestTreeRowClo
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testRandomInvoke_1_30_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 30;
    @@ -754,6 +811,7 @@ public void testRandomInvoke_1_30_1() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testRandomInvoke_1_30_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 30;
    @@ -865,6 +923,7 @@ else if (rnd % 3 == 0) {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testRandomPutRemove_1_30_0() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 30;
    @@ -875,6 +934,7 @@ public void testRandomPutRemove_1_30_0() throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testRandomPutRemove_1_30_1() throws IgniteCheckedException {
             MAX_PER_PAGE = 1;
             CNT = 30;
    @@ -885,6 +945,7 @@ public void testRandomPutRemove_1_30_1() throws IgniteCheckedException {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassiveRemove3_false() throws Exception {
             MAX_PER_PAGE = 3;
     
    @@ -894,6 +955,7 @@ public void testMassiveRemove3_false() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassiveRemove3_true() throws Exception {
             MAX_PER_PAGE = 3;
     
    @@ -903,6 +965,7 @@ public void testMassiveRemove3_true() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassiveRemove2_false() throws Exception {
             MAX_PER_PAGE = 2;
     
    @@ -912,6 +975,7 @@ public void testMassiveRemove2_false() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassiveRemove2_true() throws Exception {
             MAX_PER_PAGE = 2;
     
    @@ -921,6 +985,7 @@ public void testMassiveRemove2_true() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassiveRemove1_false() throws Exception {
             MAX_PER_PAGE = 1;
     
    @@ -930,6 +995,7 @@ public void testMassiveRemove1_false() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassiveRemove1_true() throws Exception {
             MAX_PER_PAGE = 1;
     
    @@ -1004,6 +1070,7 @@ private void doTestMassiveRemove(final boolean canGetRow) throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassivePut1_true() throws Exception {
             MAX_PER_PAGE = 1;
     
    @@ -1013,6 +1080,7 @@ public void testMassivePut1_true() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassivePut1_false() throws Exception {
             MAX_PER_PAGE = 1;
     
    @@ -1022,12 +1090,14 @@ public void testMassivePut1_false() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassivePut2_true() throws Exception {
             MAX_PER_PAGE = 2;
     
             doTestMassivePut(true);
         }
     
    +    @Test
         public void testMassivePut2_false() throws Exception {
             MAX_PER_PAGE = 2;
     
    @@ -1037,12 +1107,14 @@ public void testMassivePut2_false() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMassivePut3_true() throws Exception {
             MAX_PER_PAGE = 3;
     
             doTestMassivePut(true);
         }
     
    +    @Test
         public void testMassivePut3_false() throws Exception {
             MAX_PER_PAGE = 3;
     
    @@ -1175,6 +1247,7 @@ private void assertEqualContents(IgniteTree tree, Map map
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testEmptyCursors() throws IgniteCheckedException {
             MAX_PER_PAGE = 5;
     
    @@ -1205,6 +1278,7 @@ private void doTestCursor(boolean canGetRow) throws IgniteCheckedException {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testCursorConcurrentMerge() throws IgniteCheckedException {
             MAX_PER_PAGE = 5;
     
    @@ -1280,6 +1354,7 @@ public void testCursorConcurrentMerge() throws IgniteCheckedException {
          *
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testSizeForPutRmvSequential() throws IgniteCheckedException {
             MAX_PER_PAGE = 5;
     
    @@ -1369,6 +1444,7 @@ public void testSizeForPutRmvSequential() throws IgniteCheckedException {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testSizeForRandomPutRmvMultithreaded_5_4() throws Exception {
             MAX_PER_PAGE = 5;
             CNT = 10_000;
    @@ -1376,6 +1452,7 @@ public void testSizeForRandomPutRmvMultithreaded_5_4() throws Exception {
             doTestSizeForRandomPutRmvMultithreaded(4);
         }
     
    +    @Test
         public void testSizeForRandomPutRmvMultithreaded_3_256() throws Exception {
             MAX_PER_PAGE = 3;
             CNT = 10_000;
    @@ -1492,6 +1569,7 @@ private void doTestSizeForRandomPutRmvMultithreaded(final int rmvPutSlidingWindo
          *
          * @see #doTestSizeForRandomPutRmvMultithreadedAsync doTestSizeForRandomPutRmvMultithreadedAsync() for details.
          */
    +    @Test
         public void testSizeForRandomPutRmvMultithreadedAsync_16() throws Exception {
             doTestSizeForRandomPutRmvMultithreadedAsync(16);
         }
    @@ -1502,6 +1580,7 @@ public void testSizeForRandomPutRmvMultithreadedAsync_16() throws Exception {
          *
          * @see #doTestSizeForRandomPutRmvMultithreadedAsync doTestSizeForRandomPutRmvMultithreadedAsync() for details.
          */
    +    @Test
         public void testSizeForRandomPutRmvMultithreadedAsync_3() throws Exception {
             doTestSizeForRandomPutRmvMultithreadedAsync(3);
         }
    @@ -1671,6 +1750,7 @@ public void doTestSizeForRandomPutRmvMultithreadedAsync(final int rmvPutSlidingW
          *
          * @throws Exception if test failed
          */
    +    @Test
         public void testPutSizeLivelock() throws Exception {
             MAX_PER_PAGE = 5;
             CNT = 800;
    @@ -1803,6 +1883,7 @@ public void testPutSizeLivelock() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testPutRmvSizeSinglePageContention() throws Exception {
             MAX_PER_PAGE = 10;
             CNT = 20_000;
    @@ -1924,6 +2005,7 @@ public void testPutRmvSizeSinglePageContention() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testPutRmvFindSizeMultithreaded() throws Exception {
             MAX_PER_PAGE = 5;
             CNT = 60_000;
    @@ -2061,6 +2143,7 @@ public void testPutRmvFindSizeMultithreaded() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTestRandomPutRemoveMultithreaded_1_30_0() throws Exception {
             MAX_PER_PAGE = 1;
             CNT = 30;
    @@ -2071,6 +2154,7 @@ public void testTestRandomPutRemoveMultithreaded_1_30_0() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTestRandomPutRemoveMultithreaded_1_30_1() throws Exception {
             MAX_PER_PAGE = 1;
             CNT = 30;
    @@ -2081,6 +2165,7 @@ public void testTestRandomPutRemoveMultithreaded_1_30_1() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTestRandomPutRemoveMultithreaded_2_50_0() throws Exception {
             MAX_PER_PAGE = 2;
             CNT = 50;
    @@ -2091,6 +2176,7 @@ public void testTestRandomPutRemoveMultithreaded_2_50_0() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTestRandomPutRemoveMultithreaded_2_50_1() throws Exception {
             MAX_PER_PAGE = 2;
             CNT = 50;
    @@ -2101,6 +2187,7 @@ public void testTestRandomPutRemoveMultithreaded_2_50_1() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTestRandomPutRemoveMultithreaded_3_70_0() throws Exception {
             MAX_PER_PAGE = 3;
             CNT = 70;
    @@ -2111,6 +2198,7 @@ public void testTestRandomPutRemoveMultithreaded_3_70_0() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTestRandomPutRemoveMultithreaded_3_70_1() throws Exception {
             MAX_PER_PAGE = 3;
             CNT = 70;
    @@ -2121,6 +2209,7 @@ public void testTestRandomPutRemoveMultithreaded_3_70_1() throws Exception {
         /**
          * @throws IgniteCheckedException If failed.
          */
    +    @Test
         public void testFindFirstAndLast() throws IgniteCheckedException {
             MAX_PER_PAGE = 5;
     
    @@ -2147,6 +2236,7 @@ public void testFindFirstAndLast() throws IgniteCheckedException {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testIterate() throws Exception {
             MAX_PER_PAGE = 5;
     
    @@ -2177,6 +2267,7 @@ public void testIterate() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testIterateConcurrentPutRemove() throws Exception {
             iterateConcurrentPutRemove();
         }
    @@ -2184,9 +2275,9 @@ public void testIterateConcurrentPutRemove() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-7265")
    +    @Test
         public void testIterateConcurrentPutRemove_1() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-7265");
    -
             MAX_PER_PAGE = 1;
     
             iterateConcurrentPutRemove();
    @@ -2195,6 +2286,7 @@ public void testIterateConcurrentPutRemove_1() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testIterateConcurrentPutRemove_5() throws Exception {
             MAX_PER_PAGE = 5;
     
    @@ -2204,6 +2296,7 @@ public void testIterateConcurrentPutRemove_5() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testIteratePutRemove_10() throws Exception {
             MAX_PER_PAGE = 10;
     
    @@ -2349,6 +2442,7 @@ private void iterateConcurrentPutRemove() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentGrowDegenerateTreeAndConcurrentRemove() throws Exception {
             //calculate tree size when split happens
             final TestTree t = createTestTree(true);
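
Aside from the annotation migration, this file also gains a new `testIsEmpty` case. It encodes a simple invariant: the tree reports empty exactly before the first `put` and after the last `remove`. The same check can be sketched against `java.util.TreeSet` as a stand-in for `TestTree` (which needs running page memory to construct):

```java
import java.util.TreeSet;

public class IsEmptyInvariantSketch {
    public static void main(String[] args) {
        // TreeSet stands in for the B+ tree: ordered keys with add/remove/isEmpty.
        TreeSet<Long> tree = new TreeSet<>();

        check(tree.isEmpty(), "empty before first insert");

        for (long i = 1; i <= 500; i++) {
            tree.add(i);
            check(!tree.isEmpty(), "non-empty after put " + i);
        }

        for (long i = 1; i <= 500; i++) {
            check(!tree.isEmpty(), "non-empty before remove " + i);
            tree.remove(i);
        }

        check(tree.isEmpty(), "empty after last remove");

        System.out.println("invariant holds");
    }

    /** Fails fast with a message, like assertTrue/assertFalse in the test. */
    private static void check(boolean cond, String msg) {
        if (!cond)
            throw new AssertionError(msg);
    }
}
```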
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/CacheFreeListImplSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/CacheFreeListImplSelfTest.java
    index 74228154c1208..2f78c566d7840 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/CacheFreeListImplSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/CacheFreeListImplSelfTest.java
    @@ -52,10 +52,14 @@
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class CacheFreeListImplSelfTest extends GridCommonAbstractTest {
         /** */
         private static final int CPUS = Runtime.getRuntime().availableProcessors();
    @@ -71,7 +75,7 @@ public class CacheFreeListImplSelfTest extends GridCommonAbstractTest {
             super.afterTest();
     
             if (pageMem != null)
    -            pageMem.stop();
    +            pageMem.stop(true);
     
             pageMem = null;
         }
    @@ -79,6 +83,7 @@ public class CacheFreeListImplSelfTest extends GridCommonAbstractTest {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteSingleThreaded_1024() throws Exception {
             checkInsertDeleteSingleThreaded(1024);
         }
    @@ -86,6 +91,7 @@ public void testInsertDeleteSingleThreaded_1024() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteSingleThreaded_2048() throws Exception {
             checkInsertDeleteSingleThreaded(2048);
         }
    @@ -93,6 +99,7 @@ public void testInsertDeleteSingleThreaded_2048() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteSingleThreaded_4096() throws Exception {
             checkInsertDeleteSingleThreaded(4096);
         }
    @@ -100,6 +107,7 @@ public void testInsertDeleteSingleThreaded_4096() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteSingleThreaded_8192() throws Exception {
             checkInsertDeleteSingleThreaded(8192);
         }
    @@ -107,6 +115,7 @@ public void testInsertDeleteSingleThreaded_8192() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteSingleThreaded_16384() throws Exception {
             checkInsertDeleteSingleThreaded(16384);
         }
    @@ -114,6 +123,7 @@ public void testInsertDeleteSingleThreaded_16384() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteMultiThreaded_1024() throws Exception {
             checkInsertDeleteMultiThreaded(1024);
         }
    @@ -121,6 +131,7 @@ public void testInsertDeleteMultiThreaded_1024() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteMultiThreaded_2048() throws Exception {
             checkInsertDeleteMultiThreaded(2048);
         }
    @@ -128,6 +139,7 @@ public void testInsertDeleteMultiThreaded_2048() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteMultiThreaded_4096() throws Exception {
             checkInsertDeleteMultiThreaded(4096);
         }
    @@ -135,6 +147,7 @@ public void testInsertDeleteMultiThreaded_4096() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteMultiThreaded_8192() throws Exception {
             checkInsertDeleteMultiThreaded(8192);
         }
    @@ -142,6 +155,7 @@ public void testInsertDeleteMultiThreaded_8192() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testInsertDeleteMultiThreaded_16384() throws Exception {
             checkInsertDeleteMultiThreaded(16384);
         }
    @@ -480,11 +494,6 @@ private TestDataRow(int keySize, int valSize) {
             @Override public byte newMvccTxState() {
                 return 0;
             }
    -
    -        /** {@inheritDoc} */
    -        @Override public boolean isKeyAbsentBefore() {
    -            return false;
    -        }
         }
     
         /**
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/DataRegionMetricsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/DataRegionMetricsSelfTest.java
    index 122f50e964629..a62b89123f46c 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/DataRegionMetricsSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/DataRegionMetricsSelfTest.java
    @@ -23,12 +23,16 @@
     import org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl;
     import org.apache.ignite.internal.processors.cache.ratemetrics.HitRateMetrics;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.lang.Thread.sleep;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class DataRegionMetricsSelfTest extends GridCommonAbstractTest {
         /** */
         private DataRegionMetricsImpl memMetrics;
    @@ -61,6 +65,7 @@ public class DataRegionMetricsSelfTest extends GridCommonAbstractTest {
          * Test for allocationRate metric in single-threaded mode.
          * @throws Exception if any happens during test.
          */
    +    @Test
         public void testAllocationRateSingleThreaded() throws Exception {
             threadsCnt = 1;
             memMetrics.rateTimeInterval(RATE_TIME_INTERVAL_2);
    @@ -84,6 +89,7 @@ public void testAllocationRateSingleThreaded() throws Exception {
          * Test for allocationRate metric in multi-threaded mode with short silent period in the middle of the test.
          * @throws Exception if any happens during test.
          */
    +    @Test
         public void testAllocationRateMultiThreaded() throws Exception {
             threadsCnt = 4;
             memMetrics.rateTimeInterval(RATE_TIME_INTERVAL_1);
    @@ -123,6 +129,7 @@ public void testAllocationRateMultiThreaded() throws Exception {
          * Test verifies that allocationRate calculation algorithm survives setting new values to rateTimeInterval parameter.
          * @throws Exception if any happens during test.
          */
    +    @Test
         public void testAllocationRateTimeIntervalConcurrentChange() throws Exception {
             threadsCnt = 5;
             memMetrics.rateTimeInterval(RATE_TIME_INTERVAL_1);
    @@ -153,6 +160,7 @@ public void testAllocationRateTimeIntervalConcurrentChange() throws Exception {
          *
          * @throws Exception if any happens during test.
          */
    +    @Test
         public void testAllocationRateSubintervalsConcurrentChange() throws Exception {
             threadsCnt = 5;
             memMetrics.rateTimeInterval(RATE_TIME_INTERVAL_1);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbAbstractTest.java
    index 9a23502b25725..82a3428983e29 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbAbstractTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbAbstractTest.java
    @@ -29,9 +29,6 @@
     import org.apache.ignite.internal.cluster.IgniteClusterEx;
     import org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree;
     import org.apache.ignite.internal.util.typedef.internal.S;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
    @@ -43,9 +40,6 @@
      *
      */
     public abstract class IgniteDbAbstractTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /**
          * @return Node count.
          */
    @@ -138,11 +132,6 @@ public abstract class IgniteDbAbstractTest extends GridCommonAbstractTest {
             if (!client)
                 cfg.setCacheConfiguration(ccfg, ccfg2, ccfg3, ccfg4, ccfg5);
     
    -        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    -
    -        discoSpi.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(discoSpi);
             cfg.setMarshaller(null);
     
             configure(cfg);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbDynamicCacheSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbDynamicCacheSelfTest.java
    index 126ed5ffbe86a..f1870de2aa508 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbDynamicCacheSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbDynamicCacheSelfTest.java
    @@ -29,11 +29,16 @@
     import org.apache.ignite.configuration.DataStorageConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.util.typedef.internal.U;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteDbDynamicCacheSelfTest extends GridCommonAbstractTest {
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
    @@ -79,6 +84,7 @@ public class IgniteDbDynamicCacheSelfTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreate() throws Exception {
             int iterations = 200;
     
    @@ -112,7 +118,11 @@ public void testCreate() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMultipleDynamicCaches() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            fail("https://issues.apache.org/jira/browse/IGNITE-10421");
    +
             int caches = 10;
     
             int entries = 10;
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbMemoryLeakAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbMemoryLeakAbstractTest.java
    index 81b5515f29044..9047609c3f5c8 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbMemoryLeakAbstractTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbMemoryLeakAbstractTest.java
    @@ -26,12 +26,16 @@
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
     import org.apache.ignite.internal.processors.cache.persistence.DataStructure;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.IgniteSystemProperties.getInteger;
     
     /**
      * Base class for memory leaks tests.
      */
    +@RunWith(JUnit4.class)
     public abstract class IgniteDbMemoryLeakAbstractTest extends IgniteDbAbstractTest {
         /** */
         private static final int CONCURRENCY_LEVEL = 16;
    @@ -173,6 +177,7 @@ protected static int nextInt() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testMemoryLeak() throws Exception {
             final IgniteEx ignite = grid(0);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetAbstractTest.java
    index 84455bab45f83..f9fb312852239 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetAbstractTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetAbstractTest.java
    @@ -36,7 +36,6 @@
     import org.apache.ignite.cache.query.ScanQuery;
     import org.apache.ignite.cache.query.SqlFieldsQuery;
     import org.apache.ignite.cache.query.SqlQuery;
    -import org.apache.ignite.configuration.DataStorageConfiguration;
     import org.apache.ignite.configuration.NearCacheConfiguration;
     import org.apache.ignite.internal.IgniteEx;
     import org.apache.ignite.internal.processors.cache.GridCacheAdapter;
    @@ -45,11 +44,16 @@
     import org.apache.ignite.internal.util.typedef.PA;
     import org.apache.ignite.internal.util.typedef.X;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.junit.Assert;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public abstract class IgniteDbPutGetAbstractTest extends IgniteDbAbstractTest {
         /** */
         private static final int KEYS_COUNT = 20_000;
    @@ -78,6 +82,7 @@ private  IgniteCache cache(String name) throws Exception {
         /**
          *
          */
    +    @Test
         public void testGradualRandomPutAllRemoveAll() throws Exception {
             IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -138,6 +143,7 @@ private void doPutRemoveAll(Random rnd, IgniteCache cache, Map
         /**
          *
          */
    +    @Test
         public void testRandomRemove() throws Exception {
             IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -177,6 +183,7 @@ public void testRandomRemove() throws Exception {
     
         /**
          */
    +    @Test
         public void testRandomPut() throws Exception {
             IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -205,6 +212,7 @@ public void testRandomPut() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutGetSimple() throws Exception {
             IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -229,6 +237,7 @@ public void testPutGetSimple() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutGetLarge() throws Exception {
             IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -271,6 +280,7 @@ public void testPutGetLarge() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPutGetLargeKeys() throws Exception {
             IgniteCache cache = ignite(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -319,6 +329,7 @@ private int[] randomInts(final int size) {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutGetOverwrite() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -347,6 +358,7 @@ public void testPutGetOverwrite() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testOverwriteNormalSizeAfterSmallerSize() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -371,6 +383,7 @@ public void testOverwriteNormalSizeAfterSmallerSize() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutDoesNotTriggerRead() throws Exception {
             IgniteEx ig = grid(0);
     
    @@ -382,6 +395,7 @@ public void testPutDoesNotTriggerRead() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutGetMultipleObjects() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -455,7 +469,11 @@ public void testPutGetMultipleObjects() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testSizeClear() throws Exception {
    +        if (MvccFeatureChecker.forcedMvcc())
    +            fail("https://issues.apache.org/jira/browse/IGNITE-7952");
    +
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
             GridCacheAdapter internalCache = internalCache(cache);
    @@ -491,6 +509,7 @@ public void testSizeClear() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testBounds() throws Exception {
             IgniteEx ig = ig();
     
    @@ -549,6 +568,7 @@ public void testBounds() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testMultithreadedPut() throws Exception {
             IgniteEx ig = ig();
     
    @@ -613,6 +633,7 @@ public void testMultithreadedPut() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutGetRandomUniqueMultipleObjects() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -680,6 +701,7 @@ private static int[] generateUniqueRandomKeys(int cnt, Random rnd) {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPutPrimaryUniqueSecondaryDuplicates() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -724,6 +746,7 @@ public void testPutPrimaryUniqueSecondaryDuplicates() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutGetRandomNonUniqueMultipleObjects() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -770,6 +793,7 @@ public void testPutGetRandomNonUniqueMultipleObjects() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testPutGetRemoveMultipleForward() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -860,6 +884,7 @@ public void _testRandomPutGetRemove() throws Exception {
                 assertEquals(map.get(key), cache.get(key));
         }
     
    +    @Test
         public void testPutGetRemoveMultipleBackward() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -904,6 +929,7 @@ public void testPutGetRemoveMultipleBackward() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testIndexOverwrite() throws Exception {
             final IgniteCache cache = cache(DEFAULT_CACHE_NAME);
     
    @@ -947,6 +973,7 @@ public void testIndexOverwrite() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testObjectKey() throws Exception {
             IgniteEx ig = ig();
     
    @@ -992,6 +1019,7 @@ public void testObjectKey() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testIterators() throws Exception {
             IgniteEx ignite = ig();
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetWithCacheStoreTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetWithCacheStoreTest.java
    index 8cca7f40e6ac1..b6eb70864a4dd 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetWithCacheStoreTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbPutGetWithCacheStoreTest.java
    @@ -31,7 +31,11 @@
     import org.apache.ignite.configuration.IgniteReflectionFactory;
     import org.apache.ignite.configuration.WALMode;
     import org.apache.ignite.internal.IgniteEx;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    @@ -39,6 +43,7 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IgniteDbPutGetWithCacheStoreTest extends GridCommonAbstractTest {
         /** */
         private static Map storeMap = new ConcurrentHashMap<>();
    @@ -76,6 +81,13 @@ public class IgniteDbPutGetWithCacheStoreTest extends GridCommonAbstractTest {
             return cfg;
         }
     
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
    +
    +        super.beforeTest();
    +    }
    +
         /** {@inheritDoc} */
         @Override protected void afterTest() throws Exception {
             cleanPersistenceDir();
    @@ -93,6 +105,7 @@ public class IgniteDbPutGetWithCacheStoreTest extends GridCommonAbstractTest {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testWriteThrough() throws Exception {
             checkWriteThrough(ATOMIC);
             checkWriteThrough(TRANSACTIONAL);
    @@ -101,6 +114,7 @@ public void testWriteThrough() throws Exception {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testReadThrough() throws Exception {
             checkReadThrough(ATOMIC);
             checkReadThrough(TRANSACTIONAL);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbSingleNodeTinyPutGetTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbSingleNodeTinyPutGetTest.java
    index 53d299e1cf9c7..a61f3f11253d4 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbSingleNodeTinyPutGetTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IgniteDbSingleNodeTinyPutGetTest.java
    @@ -19,20 +19,23 @@
     
     import org.apache.ignite.IgniteCache;
     import org.apache.ignite.internal.IgniteEx;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Test for
      */
    +@RunWith(JUnit4.class)
     public class IgniteDbSingleNodeTinyPutGetTest extends IgniteDbSingleNodePutGetTest {
         /** {@inheritDoc} */
         @Override protected boolean isLargePage() {
             return true;
         }
     
    -    /**
    -     * @throws Exception If fail.
    -     */
    -    public void testPutGetTiny() throws Exception {
    +    /** */
    +    @Test
    +    public void testPutGetTiny() {
             IgniteEx ig = grid(0);
     
             IgniteCache cache = ig.cache("tiny");
    @@ -45,82 +48,98 @@ public void testPutGetTiny() throws Exception {
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testGradualRandomPutAllRemoveAll() {
             // No-op
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testRandomRemove() {
             // No-op
         }
     
         /** {@inheritDoc} */
    +    @Test
         @Override public void testRandomPut() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetSimple() throws Exception {
    +    @Test
    +    @Override public void testPutGetSimple() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetLarge() throws Exception {
    +    @Test
    +    @Override public void testPutGetLarge() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetOverwrite() throws Exception {
    +    @Test
    +    @Override public void testPutGetOverwrite() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testOverwriteNormalSizeAfterSmallerSize() throws Exception {
    +    @Test
    +    @Override public void testOverwriteNormalSizeAfterSmallerSize() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutDoesNotTriggerRead() throws Exception {
    +    @Test
    +    @Override public void testPutDoesNotTriggerRead() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetMultipleObjects() throws Exception {
    +    @Test
    +    @Override public void testPutGetMultipleObjects() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testSizeClear() throws Exception {
    +    @Test
    +    @Override public void testSizeClear() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testBounds() throws Exception {
    +    @Test
    +    @Override public void testBounds() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testMultithreadedPut() throws Exception {
    +    @Test
    +    @Override public void testMultithreadedPut() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetRandomUniqueMultipleObjects() throws Exception {
    +    @Test
    +    @Override public void testPutGetRandomUniqueMultipleObjects() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutPrimaryUniqueSecondaryDuplicates() throws Exception {
    +    @Test
    +    @Override public void testPutPrimaryUniqueSecondaryDuplicates() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetRandomNonUniqueMultipleObjects() throws Exception {
    +    @Test
    +    @Override public void testPutGetRandomNonUniqueMultipleObjects() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetRemoveMultipleForward() throws Exception {
    +    @Test
    +    @Override public void testPutGetRemoveMultipleForward() {
             // No-op
         }
     
    @@ -130,22 +149,26 @@ public void testPutGetTiny() throws Exception {
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPutGetRemoveMultipleBackward() throws Exception {
    +    @Test
    +    @Override public void testPutGetRemoveMultipleBackward() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testIndexOverwrite() throws Exception {
    +    @Test
    +    @Override public void testIndexOverwrite() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testObjectKey() throws Exception {
    +    @Test
    +    @Override public void testObjectKey() {
             // No-op
         }
     
         /** {@inheritDoc} */
    -    @Override public void testIterators() throws Exception {
    +    @Test
    +    @Override public void testIterators() {
             // No-op
         }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IndexStorageSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IndexStorageSelfTest.java
    index 69a86b4ea8679..78c83b6abd760 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IndexStorageSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/IndexStorageSelfTest.java
    @@ -35,10 +35,14 @@
     import org.apache.ignite.internal.processors.cache.persistence.RootPage;
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class IndexStorageSelfTest extends GridCommonAbstractTest {
         /** Make sure page is small enough to trigger multiple pages in a linked list. */
         private static final int PAGE_SIZE = 1024;
    @@ -57,6 +61,7 @@ public class IndexStorageSelfTest extends GridCommonAbstractTest {
         /**
          * @throws Exception if failed.
          */
    +    @Test
         public void testMetaIndexAllocation() throws Exception {
             metaAllocation();
         }
    @@ -96,15 +101,24 @@ private void metaAllocation() throws Exception {
                     IndexStorageImpl metaStore = storeMap.get(cacheId);
     
                     if (metaStore == null) {
    -                    metaStore = new IndexStorageImpl(mem, null, new AtomicLong(), cacheId,
    -                        PageIdAllocator.INDEX_PARTITION, PageMemory.FLAG_IDX,
    -                        null, mem.allocatePage(cacheId, PageIdAllocator.INDEX_PARTITION, PageMemory.FLAG_IDX), true,
    -                            null);
    +                    metaStore = new IndexStorageImpl(
    +                        mem,
    +                        null,
    +                        new AtomicLong(),
    +                        cacheId,
    +                        false,
    +                        PageIdAllocator.INDEX_PARTITION,
    +                        PageMemory.FLAG_IDX,
    +                        null,
    +                        mem.allocatePage(cacheId, PageIdAllocator.INDEX_PARTITION, PageMemory.FLAG_IDX),
    +                        true,
    +                        null
    +                    );
     
                         storeMap.put(cacheId, metaStore);
                     }
     
    -                final RootPage rootPage = metaStore.getOrAllocateForTree(idxName);
    +                final RootPage rootPage = metaStore.allocateIndex(idxName);
     
                     assertTrue(rootPage.isAllocated());
     
    @@ -118,7 +132,7 @@ private void metaAllocation() throws Exception {
                         String idxName = entry.getKey();
                         FullPageId rootPageId = entry.getValue().pageId();
     
    -                    final RootPage rootPage = storeMap.get(cacheId).getOrAllocateForTree(idxName);
    +                    final RootPage rootPage = storeMap.get(cacheId).allocateIndex(idxName);
     
                         assertEquals("Invalid root page ID restored [cacheId=" + cacheId + ", idxName=" + idxName + ']',
                             rootPageId, rootPage.pageId());
    @@ -129,7 +143,7 @@ private void metaAllocation() throws Exception {
                 }
             }
             finally {
    -            mem.stop();
    +            mem.stop(true);
             }
         }
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/SwapPathConstructionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/SwapPathConstructionSelfTest.java
    index 5910a35efdc56..5872c3185e0db 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/database/SwapPathConstructionSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/database/SwapPathConstructionSelfTest.java
    @@ -31,11 +31,15 @@
     import org.apache.ignite.internal.processors.cache.persistence.DataRegion;
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Test verifies correct construction of swap file path {@link DataRegionConfiguration#setSwapPath(String)}
      * when absolute or relative paths are provided via configuration.
      */
    +@RunWith(JUnit4.class)
     public class SwapPathConstructionSelfTest extends GridCommonAbstractTest {
         /** */
         private DataStorageConfiguration memCfg;
    @@ -78,6 +82,7 @@ private void cleanUpSwapDir() {
         /**
          * Verifies relative swap file path construction. Directory with swap files is cleaned up during after-test phase.
          */
    +    @Test
         public void testRelativeSwapFilePath() throws Exception {
             memCfg = createMemoryConfiguration(true);
     
    @@ -94,6 +99,7 @@ public void testRelativeSwapFilePath() throws Exception {
          * Verifies absolute swap file path construction. System tmp directory is used to allocate swap files,
          * so no clean up is needed.
          */
    +    @Test
         public void testAbsoluteSwapFilePath() throws Exception {
             memCfg = createMemoryConfiguration(false);
     
    diff --git a/modules/web-console/frontend/app/modules/navbar/navbar.directive.js b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorMvccPersistenceSelfTest.java
    similarity index 75%
    rename from modules/web-console/frontend/app/modules/navbar/navbar.directive.js
    rename to modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorMvccPersistenceSelfTest.java
    index e400cf7d2f54a..9360cab60284e 100644
    --- a/modules/web-console/frontend/app/modules/navbar/navbar.directive.js
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorMvccPersistenceSelfTest.java
    @@ -15,18 +15,14 @@
      * limitations under the License.
      */
     
    -export default function factory(IgniteNavbar) {
    -    function controller() {
    -        const ctrl = this;
    +package org.apache.ignite.internal.processors.datastreamer;
     
    -        ctrl.items = IgniteNavbar;
    +/**
    + * Checks DataStreamer with Mvcc and persistence enabled.
    + */
    +public class DataStreamProcessorMvccPersistenceSelfTest extends DataStreamProcessorMvccSelfTest {
    +    /** {@inheritDoc} */
    +    @Override public boolean persistenceEnabled() {
    +        return true;
         }
    -
    -    return {
    -        restrict: 'A',
    -        controller,
    -        controllerAs: 'navbar'
    -    };
     }
    -
    -factory.$inject = ['IgniteNavbar'];
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorMvccSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorMvccSelfTest.java
    index 381d9a92dd099..6d37b9b22ee2d 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorMvccSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorMvccSelfTest.java
    @@ -20,21 +20,32 @@
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    +import org.junit.Ignore;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     
     /**
      * Check DataStreamer with Mvcc enabled.
      */
    +@RunWith(JUnit4.class)
     public class DataStreamProcessorMvccSelfTest extends DataStreamProcessorSelfTest {
         /** {@inheritDoc} */
    +    @SuppressWarnings("unchecked")
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration igniteConfiguration = super.getConfiguration(igniteInstanceName);
     
    -        CacheConfiguration[] cacheConfigurations = igniteConfiguration.getCacheConfiguration();
    +        CacheConfiguration[] ccfgs = igniteConfiguration.getCacheConfiguration();
     
    -        assert cacheConfigurations == null || cacheConfigurations.length == 0
    -                || (cacheConfigurations.length == 1 && cacheConfigurations[0].getAtomicityMode() == TRANSACTIONAL_SNAPSHOT);
    +        if (ccfgs != null) {
    +            for (CacheConfiguration ccfg : ccfgs)
    +                ccfg.setNearConfiguration(null);
    +        }
    +
    +        assert ccfgs == null || ccfgs.length == 0 ||
    +            (ccfgs.length == 1 && ccfgs[0].getAtomicityMode() == TRANSACTIONAL_SNAPSHOT);
     
             return igniteConfiguration;
         }
    @@ -45,43 +56,30 @@ public class DataStreamProcessorMvccSelfTest extends DataStreamProcessorSelfTest
         }
     
         /** {@inheritDoc} */
    -    @Override public void testPartitioned() throws Exception {
    -        // test uses batchedSorted StreamReceiver which depends on Cache.putAll, Cache.removeAll
    -        fail("https://issues.apache.org/jira/browse/IGNITE-9451");
    -
    -        super.testPartitioned();
    -    }
    -
    -    /** {@inheritDoc} */
    -    @Override public void testColocated() throws Exception {
    -        // test uses batchedSorted StreamReceiver which depends on Cache.putAll, Cache.removeAll
    -        fail("https://issues.apache.org/jira/browse/IGNITE-9451");
    -
    -        super.testColocated();
    -    }
    -
    -    /** {@inheritDoc} */
    -    @Override public void testReplicated() throws Exception {
    -        // test uses batchedSorted StreamReceiver which depends on Cache.putAll, Cache.removeAll
    -        fail("https://issues.apache.org/jira/browse/IGNITE-9451");
    -
    -        super.testReplicated();
    -    }
    -
    -    /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-8582")
    +    @Test
         @Override public void testUpdateStore() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-8582");
    -
             super.testUpdateStore();
         }
     
         /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-9321")
    +    @Test
         @Override public void testFlushTimeout() throws Exception {
    -        fail("https://issues.apache.org/jira/browse/IGNITE-9321");
    +        super.testFlushTimeout();
         }
     
         /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-9530")
    +    @Test
         @Override public void testLocal() throws Exception {
    -        // Do not check local caches with MVCC enabled.
    +        super.testLocal();
    +    }
    +
    +    /** {@inheritDoc} */
    +    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752")
    +    @Test
    +    @Override public void testTryFlush() throws Exception {
    +        super.testTryFlush();
         }
     }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorPersistenceSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorPersistenceSelfTest.java
    new file mode 100644
    index 0000000000000..7ce4fdd28e9d3
    --- /dev/null
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorPersistenceSelfTest.java
    @@ -0,0 +1,28 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.ignite.internal.processors.datastreamer;
    +
    +/**
    + * Checks DataStreamer with persistence enabled.
    + */
    +public class DataStreamProcessorPersistenceSelfTest extends DataStreamProcessorSelfTest {
    +    /** {@inheritDoc} */
    +    @Override public boolean persistenceEnabled() {
    +        return true;
    +    }
    +}
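The new persistence subclasses use a template-method hook: the base test exposes `persistenceEnabled()`, the configuration builder consults it, and a subclass flips it to `true`. A standalone sketch of that pattern, where `Config`, `BaseTest` and `PersistenceTest` are hypothetical stand-ins with no Ignite or JUnit dependency:

```java
/** Hypothetical config holder standing in for IgniteConfiguration. */
class Config {
    boolean persistence;
}

class BaseTest {
    /** Subclasses override to enable persistence, mirroring persistenceEnabled(). */
    public boolean persistenceEnabled() {
        return false;
    }

    /** Builds the config, consulting the hook the way getConfiguration(...) does. */
    public Config getConfiguration() {
        Config cfg = new Config();

        // Only the hook decides whether persistence is switched on.
        if (persistenceEnabled())
            cfg.persistence = true;

        return cfg;
    }
}

/** Mirrors the persistence subclass: the only change is the overridden hook. */
public class PersistenceTest extends BaseTest {
    @Override public boolean persistenceEnabled() {
        return true;
    }

    public static void main(String[] args) {
        System.out.println(new BaseTest().getConfiguration().persistence);        // false
        System.out.println(new PersistenceTest().getConfiguration().persistence); // true
    }
}
```

The benefit of the hook over copy-pasted configuration code is that every inherited test method runs unchanged against the persistent variant.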
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorSelfTest.java
    index 536d73e271d59..297c686da7098 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessorSelfTest.java
    @@ -42,9 +42,12 @@
     import org.apache.ignite.cache.store.CacheStoreAdapter;
     import org.apache.ignite.cluster.ClusterNode;
     import org.apache.ignite.configuration.CacheConfiguration;
    +import org.apache.ignite.configuration.DataRegionConfiguration;
    +import org.apache.ignite.configuration.DataStorageConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.configuration.IgniteReflectionFactory;
     import org.apache.ignite.configuration.NearCacheConfiguration;
    +import org.apache.ignite.configuration.WALMode;
     import org.apache.ignite.events.Event;
     import org.apache.ignite.internal.IgniteInternalFuture;
     import org.apache.ignite.internal.IgniteKernal;
    @@ -52,23 +55,24 @@
     import org.apache.ignite.internal.processors.cache.GridCacheEntryEx;
     import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheAdapter;
     import org.apache.ignite.internal.util.lang.GridAbsPredicate;
    +import org.apache.ignite.internal.util.typedef.G;
     import org.apache.ignite.internal.util.typedef.internal.CU;
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.lang.IgniteClosure;
     import org.apache.ignite.lang.IgniteFuture;
     import org.apache.ignite.lang.IgniteFutureCancelledException;
     import org.apache.ignite.lang.IgnitePredicate;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.stream.StreamReceiver;
     import org.apache.ignite.testframework.GridTestUtils;
    +import org.apache.ignite.testframework.MvccFeatureChecker;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static java.util.concurrent.TimeUnit.MILLISECONDS;
     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
    -import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT;
     import static org.apache.ignite.cache.CacheMode.LOCAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheMode.REPLICATED;
    @@ -78,13 +82,11 @@
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class DataStreamProcessorSelfTest extends GridCommonAbstractTest {
         /** */
         private static ConcurrentHashMap storeMap;
     
    -    /** */
    -    private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private CacheMode mode = PARTITIONED;
     
    @@ -97,6 +99,13 @@ public class DataStreamProcessorSelfTest extends GridCommonAbstractTest {
         /** */
         private TestStore store;
     
    +    /** {@inheritDoc} */
    +    @Override protected void beforeTest() throws Exception {
    +        super.beforeTest();
    +
    +        if (persistenceEnabled())
    +            cleanPersistenceDir();
    +    }
    +
         /** {@inheritDoc} */
         @Override public void afterTest() throws Exception {
             super.afterTest();
    @@ -104,6 +113,13 @@ public class DataStreamProcessorSelfTest extends GridCommonAbstractTest {
             useCache = false;
         }
     
    +    /**
    +     * @return {@code True} if persistent store is enabled for test.
    +     */
    +    public boolean persistenceEnabled() {
    +        return false;
    +    }
    +
         /** {@inheritDoc} */
         @SuppressWarnings({"IfMayBeConditional", "unchecked"})
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
    @@ -111,13 +127,6 @@ public class DataStreamProcessorSelfTest extends GridCommonAbstractTest {
     
             cfg.setPeerClassLoadingEnabled(false);
     
    -        TcpDiscoverySpi spi = new TcpDiscoverySpi();
    -
    -        spi.setForceServerMode(true);
    -        spi.setIpFinder(ipFinder);
    -
    -        cfg.setDiscoverySpi(spi);
    -
             cfg.setIncludeProperties();
     
             if (useCache) {
    @@ -141,6 +150,12 @@ public class DataStreamProcessorSelfTest extends GridCommonAbstractTest {
                 }
     
                 cfg.setCacheConfiguration(cc);
    +
    +            if (persistenceEnabled())
    +                cfg.setDataStorageConfiguration(new DataStorageConfiguration()
    +                    .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
    +                            .setPersistenceEnabled(true))
    +                    .setWalMode(WALMode.LOG_ONLY));
             }
             else {
                 cfg.setCacheConfiguration();
    @@ -168,7 +183,10 @@ protected boolean customKeepBinary() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitioned() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.NEAR_CACHE);
    +
             mode = PARTITIONED;
     
             checkDataStreamer();
    @@ -177,6 +195,7 @@ public void testPartitioned() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testColocated() throws Exception {
             mode = PARTITIONED;
             nearEnabled = false;
    @@ -187,6 +206,7 @@ public void testColocated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicated() throws Exception {
             mode = REPLICATED;
     
    @@ -196,7 +216,10 @@ public void testReplicated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocal() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.LOCAL_CACHE);
    +
             mode = LOCAL;
     
             try {
    @@ -226,6 +249,8 @@ private void checkDataStreamer() throws Exception {
     
                 Ignite igniteWithoutCache = startGrid(1);
     
    +            afterGridStarted();
    +
                 final IgniteDataStreamer ldr = igniteWithoutCache.dataStreamer(DEFAULT_CACHE_NAME);
     
                 ldr.receiver(DataStreamerCacheUpdaters.batchedSorted());
    @@ -312,6 +337,7 @@ private void checkDataStreamer() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionedIsolated() throws Exception {
             mode = PARTITIONED;
     
    @@ -321,6 +347,7 @@ public void testPartitionedIsolated() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicatedIsolated() throws Exception {
             mode = REPLICATED;
     
    @@ -334,11 +361,9 @@ private void checkIsolatedDataStreamer() throws Exception {
             try {
                 useCache = true;
     
    -            Ignite g1 = startGrid(0);
    -            startGrid(1);
    -            startGrid(2);
    +            Ignite g1 = startGridsMultiThreaded(3);
     
    -            awaitPartitionMapExchange();
    +            afterGridStarted();
     
                 IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME);
     
    @@ -415,6 +440,7 @@ private void checkIsolatedDataStreamer() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testPrimitiveArrays() throws Exception {
             try {
                 useCache = true;
    @@ -423,6 +449,8 @@ public void testPrimitiveArrays() throws Exception {
                 Ignite g1 = startGrid(1);
                 startGrid(2); // Reproduced only for several nodes in topology (if marshalling is used).
     
    +            afterGridStarted();
    +
                 List arrays = Arrays.asList(
                     new byte[] {1}, new boolean[] {true, false}, new char[] {2, 3}, new short[] {3, 4},
                     new int[] {4, 5}, new long[] {5, 6}, new float[] {6, 7}, new double[] {7, 8});
    @@ -446,6 +474,7 @@ public void testPrimitiveArrays() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testReplicatedMultiThreaded() throws Exception {
             mode = REPLICATED;
     
    @@ -455,6 +484,7 @@ public void testReplicatedMultiThreaded() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPartitionedMultiThreaded() throws Exception {
             mode = PARTITIONED;
     
    @@ -471,20 +501,19 @@ public void testPartitionedMultiThreaded() throws Exception {
         protected void checkLoaderMultithreaded(int nodesCntNoCache, int nodesCntCache)
             throws Exception {
             try {
    -            // Start all required nodes.
    -            int idx = 1;
    -
                 useCache = true;
     
    -            for (int i = 0; i < nodesCntCache; i++)
    -                startGrid(idx++);
    +            startGridsMultiThreaded(nodesCntCache);
     
                 useCache = false;
     
    -            for (int i = 0; i < nodesCntNoCache; i++)
    -                startGrid(idx++);
    +            startGridsMultiThreaded(nodesCntCache, nodesCntNoCache);
    +
    +            Ignite g1 = grid(nodesCntCache + nodesCntNoCache - 1);
     
    -            Ignite g1 = grid(idx - 1);
    +            afterGridStarted();
     
                 // Get and configure loader.
                 final IgniteDataStreamer ldr = g1.dataStreamer(DEFAULT_CACHE_NAME);
    @@ -584,12 +613,15 @@ protected void checkLoaderMultithreaded(int nodesCntNoCache, int nodesCntCache)
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLoaderApi() throws Exception {
             useCache = true;
     
             try {
                 Ignite g1 = startGrid(1);
     
    +            afterGridStarted();
    +
                 IgniteDataStreamer ldr = g1.dataStreamer(DEFAULT_CACHE_NAME);
     
                 ldr.close(false);
    @@ -738,15 +770,17 @@ private static  IgniteClosure removeClosure(@Nullable final T exp) {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFlush() throws Exception {
    -        // Local caches are not allowed with MVCC enabled.
    -        mode = getCacheAtomicityMode() != TRANSACTIONAL_SNAPSHOT ? LOCAL : PARTITIONED;
    +        mode = PARTITIONED;
     
             useCache = true;
     
             try {
                 Ignite g = startGrid();
     
    +            afterGridStarted();
    +
                 final IgniteCache c = g.cache(DEFAULT_CACHE_NAME);
     
                 final IgniteDataStreamer ldr = g.dataStreamer(DEFAULT_CACHE_NAME);
    @@ -791,15 +825,17 @@ public Void call() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testTryFlush() throws Exception {
    -        // Local caches are not allowed with MVCC enabled.
    -        mode = getCacheAtomicityMode() != TRANSACTIONAL_SNAPSHOT ? LOCAL : PARTITIONED;
    +        mode = PARTITIONED;
     
             useCache = true;
     
             try {
                 Ignite g = startGrid();
     
    +            afterGridStarted();
    +
                 IgniteCache c = g.cache(DEFAULT_CACHE_NAME);
     
                 IgniteDataStreamer ldr = g.dataStreamer(DEFAULT_CACHE_NAME);
    @@ -827,15 +863,19 @@ public void testTryFlush() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testFlushTimeout() throws Exception {
    -        // Local caches are not allowed with MVCC enabled.
    -        mode = getCacheAtomicityMode() != TRANSACTIONAL_SNAPSHOT ? LOCAL : PARTITIONED;
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_EVENTS);
    +
    +        mode = PARTITIONED;
     
             useCache = true;
     
             try {
                 Ignite g = startGrid();
     
    +            afterGridStarted();
    +
                 final CountDownLatch latch = new CountDownLatch(9);
     
                 g.events().localListen(new IgnitePredicate() {
    @@ -879,7 +919,10 @@ public void testFlushTimeout() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testUpdateStore() throws Exception {
    +        MvccFeatureChecker.failIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);
    +
             storeMap = new ConcurrentHashMap<>();
     
             try {
    @@ -887,10 +930,11 @@ public void testUpdateStore() throws Exception {
     
                 useCache = true;
     
    -            Ignite ignite = startGrid(1);
    +            Ignite ignite = startGridsMultiThreaded(3);
     
    -            startGrid(2);
    -            startGrid(3);
    +            afterGridStarted();
     
                 for (int i = 0; i < 1000; i++)
                     storeMap.put(i, i);
    @@ -941,20 +985,24 @@ public void testUpdateStore() throws Exception {
             }
             finally {
                 storeMap = null;
    +
    +            stopAllGrids();
             }
         }
     
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCustomUserUpdater() throws Exception {
             useCache = true;
     
             try {
    -            Ignite ignite = startGrid(1);
    +            Ignite ignite = startGridsMultiThreaded(3);
     
    -            startGrid(2);
    -            startGrid(3);
    +            afterGridStarted();
     
                 try (IgniteDataStreamer ldr = ignite.dataStreamer(DEFAULT_CACHE_NAME)) {
                     ldr.allowOverwrite(true);
    @@ -983,12 +1031,15 @@ public void testCustomUserUpdater() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testLocalDataStreamerDedicatedThreadPool() throws Exception {
             try {
                 useCache = true;
     
                 Ignite ignite = startGrid(1);
     
    +            afterGridStarted();
    +
                 final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
     
                 try (IgniteDataStreamer ldr = ignite.dataStreamer(DEFAULT_CACHE_NAME)) {
    @@ -1025,6 +1076,7 @@ public void testLocalDataStreamerDedicatedThreadPool() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemoteDataStreamerDedicatedThreadPool() throws Exception {
             try {
                 useCache = true;
    @@ -1035,6 +1087,8 @@ public void testRemoteDataStreamerDedicatedThreadPool() throws Exception {
     
                 Ignite client = startGrid(0);
     
    +            afterGridStarted();
    +
                 final IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
     
                 try (IgniteDataStreamer ldr = client.dataStreamer(DEFAULT_CACHE_NAME)) {
    @@ -1100,6 +1154,19 @@ protected StreamReceiver getStreamReceiver() {
             return new TestDataReceiver();
         }
     
    +    /**
    +     * Activates the grid if necessary and waits for partition map exchange.
    +     */
    +    private void afterGridStarted() throws InterruptedException {
    +        G.allGrids().stream()
    +            .filter(g -> !g.cluster().node().isClient())
    +            .findAny()
    +            .filter(g -> !g.cluster().active())
    +            .ifPresent(g -> g.cluster().active(true));
    +
    +        awaitPartitionMapExchange();
    +    }
    +
         /**
          *
          */
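The `afterGridStarted()` helper above is a find-then-conditionally-act chain over `Optional`: take any server node, and only when the cluster is not yet active, activate it, then wait for exchange. A minimal standalone sketch of the same chain, using a hypothetical `FakeGrid` class rather than Ignite's cluster API:

```java
import java.util.List;
import java.util.Optional;

/** Hypothetical stand-in for an Ignite node; not part of Ignite's API. */
class FakeGrid {
    final boolean client; // client nodes never drive activation
    boolean active;       // cluster-wide flag, modeled per node for brevity

    FakeGrid(boolean client, boolean active) {
        this.client = client;
        this.active = active;
    }
}

public class ActivationSketch {
    /**
     * Mirrors the shape of afterGridStarted(): pick any server node and
     * activate it only when the cluster is not yet active.
     *
     * @return True when activation was actually triggered.
     */
    public static boolean activateIfNeeded(List<FakeGrid> grids) {
        Optional<FakeGrid> srv = grids.stream()
            .filter(g -> !g.client)   // skip client nodes
            .findAny()
            .filter(g -> !g.active);  // empty when already active, so no-op

        srv.ifPresent(g -> g.active = true);

        return srv.isPresent();
    }

    public static void main(String[] args) {
        List<FakeGrid> grids = List.of(new FakeGrid(true, false), new FakeGrid(false, false));

        System.out.println(activateIfNeeded(grids)); // true: server node activated
        System.out.println(activateIfNeeded(grids)); // false: already active
    }
}
```

The second `filter` on the `Optional` is what makes repeated calls after every `startGrid(...)` safe: once the cluster is active the chain short-circuits to empty and only the partition map exchange wait remains.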
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerClientReconnectAfterClusterRestartTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerClientReconnectAfterClusterRestartTest.java
    index 239647c58529b..468e8ce8200e6 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerClientReconnectAfterClusterRestartTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerClientReconnectAfterClusterRestartTest.java
    @@ -25,17 +25,16 @@
     import org.apache.ignite.events.Event;
     import org.apache.ignite.events.EventType;
     import org.apache.ignite.lang.IgnitePredicate;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      * Tests DataStreamer reconnect behaviour when client nodes arrives at the same or different topVer than it left.
      */
    +@RunWith(JUnit4.class)
     public class DataStreamerClientReconnectAfterClusterRestartTest extends GridCommonAbstractTest {
    -    /** */
    -    public static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private boolean clientMode;
     
    @@ -43,8 +42,6 @@ public class DataStreamerClientReconnectAfterClusterRestartTest extends GridComm
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             cfg.setCacheConfiguration(new CacheConfiguration<>("test"));
     
             cfg.setClientMode(clientMode);
    @@ -53,21 +50,25 @@ public class DataStreamerClientReconnectAfterClusterRestartTest extends GridComm
         }
     
         /** */
    +    @Test
         public void testOneClient() throws Exception {
             clusterRestart(false, false);
         }
     
         /** */
    +    @Test
         public void testOneClientAllowOverwrite() throws Exception {
             clusterRestart(false, true);
         }
     
         /** */
    +    @Test
         public void testTwoClients() throws Exception {
             clusterRestart(true, false);
         }
     
         /** */
    +    @Test
         public void testTwoClientsAllowOverwrite() throws Exception {
             clusterRestart(true, true);
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImplSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImplSelfTest.java
    index e4c7660386394..1844e06aa4b1b 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImplSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImplSelfTest.java
    @@ -50,15 +50,15 @@
     import org.apache.ignite.lang.IgniteInClosure;
     import org.apache.ignite.plugin.extensions.communication.Message;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.apache.log4j.Appender;
     import org.apache.log4j.Logger;
     import org.apache.log4j.SimpleLayout;
     import org.apache.log4j.WriterAppender;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    @@ -66,10 +66,8 @@
     /**
      * Tests for {@code IgniteDataStreamerImpl}.
      */
    +@RunWith(JUnit4.class)
     public class DataStreamerImplSelfTest extends GridCommonAbstractTest {
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** Number of keys to load via data streamer. */
         private static final int KEYS_COUNT = 1000;
     
    @@ -96,11 +94,6 @@ public class DataStreamerImplSelfTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    -        discoSpi.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(discoSpi);
    -
             cfg.setCommunicationSpi(new StaleTopologyCommunicationSpi());
     
             if (cnt < MAX_CACHE_COUNT)
    @@ -114,6 +107,7 @@ public class DataStreamerImplSelfTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCloseWithCancellation() throws Exception {
             cnt = 0;
     
    @@ -142,6 +136,7 @@ public void testCloseWithCancellation() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testNullPointerExceptionUponDataStreamerClosing() throws Exception {
             cnt = 0;
     
    @@ -189,6 +184,7 @@ public void testNullPointerExceptionUponDataStreamerClosing() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAddDataFromMap() throws Exception {
             cnt = 0;
     
    @@ -225,6 +221,7 @@ public void testAddDataFromMap() throws Exception {
          *
          * @throws Exception If fail.
          */
    +    @Test
         public void testNoDataNodesOnClose() throws Exception {
             boolean failed = false;
     
    @@ -254,6 +251,7 @@ public void testNoDataNodesOnClose() throws Exception {
          *
          * @throws Exception If fail.
          */
    +    @Test
         public void testNoDataNodesOnFlush() throws Exception {
             boolean failed = false;
     
    @@ -296,6 +294,7 @@ public void testNoDataNodesOnFlush() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testAllOperationFinishedBeforeFutureCompletion() throws Exception {
             cnt = 0;
     
    @@ -348,6 +347,7 @@ public void testAllOperationFinishedBeforeFutureCompletion() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testRemapOnTopologyChangeDuringUpdatePreparation() throws Exception {
             cnt = 0;
     
    @@ -430,6 +430,7 @@ public void testRemapOnTopologyChangeDuringUpdatePreparation() throws Exception
          *
          * @throws Exception if failed
          */
    +    @Test
         public void testRetryWhenTopologyMismatch() throws Exception {
             final int KEY = 1;
             final String VAL = "1";
    @@ -465,6 +466,7 @@ public void testRetryWhenTopologyMismatch() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testClientEventsNotCausingRemaps() throws Exception {
             Ignite ignite = startGrids(2);
     
    @@ -493,6 +495,7 @@ public void testClientEventsNotCausingRemaps() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testServerEventsCauseRemaps() throws Exception {
             Ignite ignite = startGrids(2);
     
    @@ -525,6 +528,7 @@ public void testServerEventsCauseRemaps() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testDataStreamerWaitsUntilDynamicCacheStartIsFinished() throws Exception {
             final Ignite ignite0 = startGrids(2);
             final Ignite ignite1 = grid(1);
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultiThreadedSelfTest.java
    index 6d7b367727b9a..8a008f71a8b96 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultiThreadedSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultiThreadedSelfTest.java
    @@ -29,21 +29,19 @@
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.lang.IgniteFuture;
     import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     
     /**
      * Tests for DataStreamer.
      */
    +@RunWith(JUnit4.class)
     public class DataStreamerMultiThreadedSelfTest extends GridCommonAbstractTest {
    -    /** IP finder. */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private boolean dynamicCache;
     
    @@ -53,11 +51,6 @@ public class DataStreamerMultiThreadedSelfTest extends GridCommonAbstractTest {
     
             ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1);
     
    -        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    -        discoSpi.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(discoSpi);
    -
             if (!dynamicCache)
                 cfg.setCacheConfiguration(cacheConfiguration());
     
    @@ -84,6 +77,7 @@ private CacheConfiguration cacheConfiguration() {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStartStopIgnites() throws Exception {
             startStopIgnites();
         }
    @@ -91,6 +85,7 @@ public void testStartStopIgnites() throws Exception {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testStartStopIgnitesDynamicCache() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-1602");
     
    @@ -141,4 +136,4 @@ private void startStopIgnites() throws Exception {
                 stopAllGrids();
             }
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultinodeCreateCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultinodeCreateCacheTest.java
    index 3725288fdd938..c3cc212ff0c86 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultinodeCreateCacheTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerMultinodeCreateCacheTest.java
    @@ -25,24 +25,21 @@
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.IgniteInternalFuture;
     import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class DataStreamerMultinodeCreateCacheTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** {@inheritDoc} */
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setSocketTimeout(50);
             ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setAckTimeout(50);
     
    @@ -57,6 +54,7 @@ public class DataStreamerMultinodeCreateCacheTest extends GridCommonAbstractTest
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateCacheAndStream() throws Exception {
             fail("https://issues.apache.org/jira/browse/IGNITE-1603");
     
    @@ -99,4 +97,4 @@ public void testCreateCacheAndStream() throws Exception {
     
             fut.get(2 * 60_000);
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerTimeoutTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerTimeoutTest.java
    index 6e88adf1d54e1..f11f10db48772 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerTimeoutTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerTimeoutTest.java
    @@ -30,6 +30,9 @@
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.stream.StreamReceiver;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    @@ -37,6 +40,7 @@
     /**
      * Test timeout for Data streamer.
      */
    +@RunWith(JUnit4.class)
     public class DataStreamerTimeoutTest extends GridCommonAbstractTest {
         /** Cache name. */
         public static final String CACHE_NAME = "cacheName";
    @@ -79,6 +83,7 @@ private CacheConfiguration cacheConfiguration() {
          * Test timeout on {@code DataStreamer.addData()} method
          * @throws Exception If fail.
          */
    +    @Test
         public void testTimeoutOnCloseMethod() throws Exception {
             failOn = 1;
     
    @@ -109,6 +114,7 @@ public void testTimeoutOnCloseMethod() throws Exception {
          *
          * @throws Exception If fail.
          */
    +    @Test
         public void testTimeoutOnAddData() throws Exception {
             failOn = 1;
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerUpdateAfterLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerUpdateAfterLoadTest.java
    index bdfc1168f7e1f..d2040013fe8a3 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerUpdateAfterLoadTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerUpdateAfterLoadTest.java
    @@ -25,21 +25,19 @@
     import org.apache.ignite.cache.CacheAtomicityMode;
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
     import org.jetbrains.annotations.NotNull;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
     
     /**
      *
      */
    +@RunWith(JUnit4.class)
     public class DataStreamerUpdateAfterLoadTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private boolean client;
     
    @@ -50,8 +48,6 @@ public class DataStreamerUpdateAfterLoadTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
     
    -        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
    -
             cfg.setClientMode(client);
     
             return cfg;
    @@ -71,6 +67,7 @@ public class DataStreamerUpdateAfterLoadTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testUpdateAfterLoad() throws Exception {
             Ignite ignite0 = ignite(0);
     
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/IgniteDataStreamerPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/IgniteDataStreamerPerformanceTest.java
    index f7c82942270ac..4fb5b9016fc2f 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/IgniteDataStreamerPerformanceTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/datastreamer/IgniteDataStreamerPerformanceTest.java
    @@ -25,10 +25,10 @@
     import org.apache.ignite.configuration.CacheConfiguration;
     import org.apache.ignite.configuration.IgniteConfiguration;
     import org.apache.ignite.internal.util.typedef.internal.U;
    -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
    -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
     import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;
     
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
     import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
    @@ -41,10 +41,8 @@
      * <p>
      * Disable assertions and give at least 2 GB heap to run this test.
      */
    +@RunWith(JUnit4.class)
     public class IgniteDataStreamerPerformanceTest extends GridCommonAbstractTest {
    -    /** */
    -    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
    -
         /** */
         private static final int GRID_CNT = 3;

    @@ -61,12 +59,6 @@ public class IgniteDataStreamerPerformanceTest extends GridCommonAbstractTest {
         @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
             IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

    -        TcpDiscoverySpi spi = new TcpDiscoverySpi();
    -
    -        spi.setIpFinder(IP_FINDER);
    -
    -        cfg.setDiscoverySpi(spi);
    -
             cfg.setIncludeProperties();

             cfg.setIncludeEventTypes(EVT_TASK_FAILED, EVT_TASK_FINISHED, EVT_JOB_MAPPED);
    @@ -115,6 +107,7 @@ public class IgniteDataStreamerPerformanceTest extends GridCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testPerformance() throws Exception {
             doTest();
         }
    @@ -195,4 +188,4 @@ private void doTest() throws Exception {
                 stopAllGrids();
             }
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAbstractSelfTest.java
    index 2e8269cbb164d..bfc259a82c931 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAbstractSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAbstractSelfTest.java
    @@ -59,6 +59,9 @@
     import java.util.concurrent.atomic.AtomicBoolean;
     import java.util.concurrent.atomic.AtomicInteger;
     import java.util.concurrent.atomic.AtomicReference;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;

     import static org.apache.ignite.igfs.IgfsMode.PRIMARY;
     import static org.apache.ignite.igfs.IgfsMode.PROXY;
    @@ -66,7 +69,8 @@
     /**
      * Test fo regular igfs operations.
      */
    -@SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "ConstantConditions"})
    +@SuppressWarnings({"ConstantConditions"})
    +@RunWith(JUnit4.class)
     public abstract class IgfsAbstractSelfTest extends IgfsAbstractBaseSelfTest {
         /**
          * Constructor.
    @@ -82,6 +86,7 @@ protected IgfsAbstractSelfTest(IgfsMode mode) {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testExists() throws Exception {
             create(igfs, paths(DIR), null);

    @@ -93,6 +98,7 @@ public void testExists() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testExistsPathDoesNotExist() throws Exception {
             assert !igfs.exists(DIR);
         }
    @@ -102,6 +108,7 @@ public void testExistsPathDoesNotExist() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testListFiles() throws Exception {
             create(igfs, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE));

    @@ -124,6 +131,7 @@ public void testListFiles() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testListFilesPathDoesNotExist() throws Exception {
             Collection paths = null;

    @@ -142,6 +150,7 @@ public void testListFilesPathDoesNotExist() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testInfo() throws Exception {
             create(igfs, paths(DIR), null);

    @@ -157,6 +166,7 @@ public void testInfo() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testInfoPathDoesNotExist() throws Exception {
             IgfsFile info = null;

    @@ -176,6 +186,7 @@ public void testInfoPathDoesNotExist() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testRenameFile() throws Exception {
             create(igfs, paths(DIR, SUBDIR), paths(FILE));

    @@ -190,6 +201,7 @@ public void testRenameFile() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testRenameFileParentRoot() throws Exception {
             IgfsPath file1 = new IgfsPath("/file1");
             IgfsPath file2 = new IgfsPath("/file2");
    @@ -208,6 +220,7 @@ public void testRenameFileParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testRenameDirectory() throws Exception {
             create(igfs, paths(DIR, SUBDIR), null);

    @@ -222,6 +235,7 @@ public void testRenameDirectory() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testRenameDirectoryParentRoot() throws Exception {
             IgfsPath dir1 = new IgfsPath("/dir1");
             IgfsPath dir2 = new IgfsPath("/dir2");
    @@ -240,6 +254,7 @@ public void testRenameDirectoryParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveFile() throws Exception {
             create(igfs, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE));

    @@ -254,6 +269,7 @@ public void testMoveFile() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveFileDestinationRoot() throws Exception {
             create(igfs, paths(DIR, SUBDIR), paths(FILE));

    @@ -268,6 +284,7 @@ public void testMoveFileDestinationRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveFileSourceParentRoot() throws Exception {
             IgfsPath file = new IgfsPath("/" + FILE.name());

    @@ -285,6 +302,7 @@ public void testMoveFileSourceParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveRenameFile() throws Exception {
             create(igfs, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE));

    @@ -299,6 +317,7 @@ public void testMoveRenameFile() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveRenameFileDestinationRoot() throws Exception {
             IgfsPath file = new IgfsPath("/" + FILE.name());

    @@ -315,6 +334,7 @@ public void testMoveRenameFileDestinationRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveRenameFileSourceParentRoot() throws Exception {
             IgfsPath file = new IgfsPath("/" + FILE_NEW.name());

    @@ -332,6 +352,7 @@ public void testMoveRenameFileSourceParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveDirectory() throws Exception {
             create(igfs, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null);

    @@ -346,6 +367,7 @@ public void testMoveDirectory() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveDirectoryDestinationRoot() throws Exception {
             create(igfs, paths(DIR, SUBDIR, SUBSUBDIR), null);

    @@ -360,6 +382,7 @@ public void testMoveDirectoryDestinationRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveDirectorySourceParentRoot() throws Exception {
             IgfsPath dir = new IgfsPath("/" + SUBSUBDIR.name());

    @@ -377,6 +400,7 @@ public void testMoveDirectorySourceParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveRenameDirectory() throws Exception {
             create(igfs, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null);

    @@ -391,6 +415,7 @@ public void testMoveRenameDirectory() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveRenameDirectoryDestinationRoot() throws Exception {
             IgfsPath dir = new IgfsPath("/" + SUBSUBDIR.name());

    @@ -407,6 +432,7 @@ public void testMoveRenameDirectoryDestinationRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveRenameDirectorySourceParentRoot() throws Exception {
             IgfsPath dir = new IgfsPath("/" + SUBSUBDIR_NEW.name());

    @@ -423,6 +449,7 @@ public void testMoveRenameDirectorySourceParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testMoveRenameSourceDoesNotExist() throws Exception {
             create(igfs, paths(DIR, DIR_NEW), null);

    @@ -443,6 +470,7 @@ public void testMoveRenameSourceDoesNotExist() throws Exception {
          * @throws Exception If failed.
          */
         @SuppressWarnings("ConstantConditions")
    +    @Test
         public void testMkdirs() throws Exception {
             if (!propertiesSupported())
                 return;
    @@ -520,6 +548,7 @@ public void testMkdirs() throws Exception {
          * @throws Exception If failed.
          */
         @SuppressWarnings("ConstantConditions")
    +    @Test
         public void testMkdirsParentRoot() throws Exception {
             Map props = null;

    @@ -546,6 +575,7 @@ public void testMkdirsParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDelete() throws Exception {
             create(igfs, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE));

    @@ -559,6 +589,7 @@ public void testDelete() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeleteParentRoot() throws Exception {
             create(igfs, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE));

    @@ -572,6 +603,7 @@ public void testDeleteParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeleteDirectoryNotEmpty() throws Exception {
             create(igfs, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE));
             checkExist(igfs, igfsSecondary, SUBDIR, SUBSUBDIR, FILE);
    @@ -594,6 +626,7 @@ public void testDeleteDirectoryNotEmpty() throws Exception {
          * @throws Exception If failed.
          */
         @SuppressWarnings("ConstantConditions")
    +    @Test
         public void testUpdate() throws Exception {
             if(!propertiesSupported())
                 return;
    @@ -616,6 +649,7 @@ public void testUpdate() throws Exception {
          * @throws Exception If failed.
          */
         @SuppressWarnings("ConstantConditions")
    +    @Test
         public void testUpdateParentRoot() throws Exception {
             if(!propertiesSupported())
                 return;
    @@ -637,6 +671,7 @@ public void testUpdateParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testUpdatePathDoesNotExist() throws Exception {
             final Map props = properties("owner", "group", "0555");

    @@ -651,6 +686,7 @@ public void testUpdatePathDoesNotExist() throws Exception {
          * @throws Exception If failed.
          */
         @SuppressWarnings("ConstantConditions")
    +    @Test
         public void testFormat() throws Exception {
             if (mode == PROXY)
                 return;
    @@ -721,6 +757,7 @@ public void testFormat() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testRootPropertiesPersistAfterFormat() throws Exception {
             if(!propertiesSupported())
                 return;
    @@ -761,6 +798,7 @@ private void checkRootPropertyUpdate(String prop, String setVal, String expGetVa
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testOpen() throws Exception {
             create(igfs, paths(DIR, SUBDIR), null);

    @@ -777,6 +815,7 @@ public void testOpen() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testOpenDoesNotExist() throws Exception {
             igfsSecondary.delete(FILE.toString(), false);

    @@ -800,6 +839,7 @@ public void testOpenDoesNotExist() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testSetTimes() throws Exception {
             createFile(igfs, FILE, true, chunk);

    @@ -908,7 +948,8 @@ private void checkSetTimes(IgfsPath path) throws Exception {
          *
          * @throws Exception If failed.
          */
    -    @SuppressWarnings({"ConstantConditions", "EmptyTryBlock", "UnusedDeclaration"})
    +    @SuppressWarnings({"ConstantConditions", "EmptyTryBlock"})
    +    @Test
         public void testCreate() throws Exception {
             create(igfs, paths(DIR, SUBDIR), null);

    @@ -993,6 +1034,7 @@ public void testCreate() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateParentRoot() throws Exception {
             IgfsPath file = new IgfsPath("/" + FILE.name());

    @@ -1006,6 +1048,7 @@ public void testCreateParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateNoClose() throws Exception {
             if (mode != PRIMARY)
                 return;
    @@ -1035,6 +1078,7 @@ public void testCreateNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateRenameNoClose() throws Exception {
             if (dual)
                 return;
    @@ -1060,6 +1104,7 @@ public void testCreateRenameNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateRenameParentNoClose() throws Exception {
             if (dual)
                 return;
    @@ -1085,6 +1130,7 @@ public void testCreateRenameParentNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateDeleteNoClose() throws Exception {
             if (mode != PRIMARY)
                 return;
    @@ -1139,6 +1185,7 @@ public void testCreateDeleteNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateDeleteParentNoClose() throws Exception {
             if (mode != PRIMARY)
                 return;
    @@ -1193,6 +1240,7 @@ public void testCreateDeleteParentNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateUpdateNoClose() throws Exception {
             if (dual)
                 return;
    @@ -1223,6 +1271,7 @@ public void testCreateUpdateNoClose() throws Exception {
          *
          * @throws Exception On error.
          */
    +    @Test
         public void testSimpleWrite() throws Exception {
             IgfsPath path = new IgfsPath("/file1");

    @@ -1258,6 +1307,7 @@ public void testSimpleWrite() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateConsistency() throws Exception {
             final AtomicInteger ctr = new AtomicInteger();
             final AtomicReference err = new AtomicReference<>();
    @@ -1300,6 +1350,7 @@ public void testCreateConsistency() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testCreateConsistencyMultithreaded() throws Exception {
             final AtomicBoolean stop = new AtomicBoolean();

    @@ -1378,7 +1429,8 @@ public void testCreateConsistencyMultithreaded() throws Exception {
          *
          * @throws Exception If failed.
          */
    -    @SuppressWarnings({"TryFinallyCanBeTryWithResources", "EmptyTryBlock"})
    +    @SuppressWarnings({"EmptyTryBlock"})
    +    @Test
         public void testAppend() throws Exception {
             if (appendSupported()) {
                 create(igfs, paths(DIR, SUBDIR), null);
    @@ -1519,6 +1571,7 @@ public void testAppend() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendParentRoot() throws Exception {
             if (appendSupported()) {
                 IgfsPath file = new IgfsPath("/" + FILE.name());
    @@ -1536,6 +1589,7 @@ public void testAppendParentRoot() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendNoClose() throws Exception {
             if (mode != PRIMARY)
                 return;
    @@ -1570,6 +1624,7 @@ public Object call() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendRenameNoClose() throws Exception {
             if (dual)
                 return;
    @@ -1598,6 +1653,7 @@ public void testAppendRenameNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendRenameParentNoClose() throws Exception {
             if (dual)
                 return;
    @@ -1626,6 +1682,7 @@ public void testAppendRenameParentNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendDeleteNoClose() throws Exception {
             if (mode != PRIMARY)
                 return;
    @@ -1681,6 +1738,7 @@ public boolean apply() {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendDeleteParentNoClose() throws Exception {
             if (mode != PRIMARY)
                 return;
    @@ -1736,6 +1794,7 @@ public boolean apply() {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendUpdateNoClose() throws Exception {
             if (dual)
                 return;
    @@ -1767,6 +1826,7 @@ public void testAppendUpdateNoClose() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendConsistency() throws Exception {
             if (appendSupported()) {
                 final AtomicInteger ctr = new AtomicInteger();
    @@ -1818,6 +1878,7 @@ public void run() {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testAppendConsistencyMultithreaded() throws Exception {
             if (appendSupported()) {
                 final AtomicBoolean stop = new AtomicBoolean();
    @@ -1891,6 +1952,7 @@ public void run() {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testStop() throws Exception {
             create(igfs, paths(DIR, SUBDIR), null);

    @@ -1911,6 +1973,7 @@ public void testStop() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentMkdirsDelete() throws Exception {
             for (int i = 0; i < REPEAT_CNT; i++) {
                 final CyclicBarrier barrier = new CyclicBarrier(2);
    @@ -1959,6 +2022,7 @@ public void testConcurrentMkdirsDelete() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentRenameDeleteSource() throws Exception {
             for (int i = 0; i < REPEAT_CNT; i++) {
                 final CyclicBarrier barrier = new CyclicBarrier(2);
    @@ -2023,6 +2087,7 @@ public void testConcurrentRenameDeleteSource() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentRenameDeleteDestination() throws Exception {
             for (int i = 0; i < REPEAT_CNT; i++) {
                 final CyclicBarrier barrier = new CyclicBarrier(2);
    @@ -2079,6 +2144,7 @@ public void testConcurrentRenameDeleteDestination() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentRenames() throws Exception {
             for (int i = 0; i < REPEAT_CNT; i++) {
                 final CyclicBarrier barrier = new CyclicBarrier(2);
    @@ -2139,6 +2205,7 @@ public void testConcurrentRenames() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testConcurrentDeletes() throws Exception {
             for (int i = 0; i < REPEAT_CNT; i++) {
                 final CyclicBarrier barrier = new CyclicBarrier(2);
    @@ -2186,6 +2253,7 @@ public void testConcurrentDeletes() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksRename() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, RENAME_CNT, 0, 0, 0, 0);
         }
    @@ -2195,6 +2263,7 @@ public void testDeadlocksRename() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksDelete() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, 0, DELETE_CNT, 0, 0, 0);
         }
    @@ -2204,6 +2273,7 @@ public void testDeadlocksDelete() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksUpdate() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, 0, 0, UPDATE_CNT, 0, 0);
         }
    @@ -2213,6 +2283,7 @@ public void testDeadlocksUpdate() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksMkdirs() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, 0, 0, 0, MKDIRS_CNT, 0);
         }
    @@ -2222,6 +2293,7 @@ public void testDeadlocksMkdirs() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksDeleteRename() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, RENAME_CNT, DELETE_CNT, 0, 0, 0);
         }
    @@ -2231,6 +2303,7 @@ public void testDeadlocksDeleteRename() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksDeleteMkdirsRename() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, RENAME_CNT, DELETE_CNT, 0, MKDIRS_CNT, 0);
         }
    @@ -2240,6 +2313,7 @@ public void testDeadlocksDeleteMkdirsRename() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksDeleteMkdirs() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, 0, DELETE_CNT, 0, MKDIRS_CNT, 0);
         }
    @@ -2249,6 +2323,7 @@ public void testDeadlocksDeleteMkdirs() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocksCreate() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, 0, 0, 0, 0, CREATE_CNT);
         }
    @@ -2258,6 +2333,7 @@ public void testDeadlocksCreate() throws Exception {
          *
          * @throws Exception If failed.
          */
    +    @Test
         public void testDeadlocks() throws Exception {
             checkDeadlocksRepeat(5, 2, 2, 2, RENAME_CNT, DELETE_CNT, UPDATE_CNT, MKDIRS_CNT, CREATE_CNT);
         }
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAttributesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAttributesSelfTest.java
    index b430a8ceca8f8..5ff9b0a6df920 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAttributesSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsAttributesSelfTest.java
    @@ -27,6 +27,9 @@
     import java.util.HashMap;
     import java.util.Map;
     import org.apache.ignite.igfs.IgfsMode;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;

     import static org.apache.ignite.igfs.IgfsMode.DUAL_SYNC;
     import static org.apache.ignite.igfs.IgfsMode.PRIMARY;
    @@ -35,10 +38,12 @@
     /**
      * {@link IgfsAttributes} test case.
      */
    +@RunWith(JUnit4.class)
     public class IgfsAttributesSelfTest extends IgfsCommonAbstractTest {
         /**
          * @throws Exception If failed.
          */
    +    @Test
         public void testSerialization() throws Exception {
             Map pathModes = new HashMap<>();

    @@ -79,4 +84,4 @@ private boolean eq(IgfsAttributes attr1, IgfsAttributes attr2) throws Exception

             return true;
         }
    -}
    \ No newline at end of file
    +}
    diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBackupFailoverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBackupFailoverSelfTest.java
    index ff9c51a56a4e8..1b3c6a21017cb 100644
    --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBackupFailoverSelfTest.java
    +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBackupFailoverSelfTest.java
    @@ -37,6 +37,9 @@
     import org.apache.ignite.internal.util.typedef.internal.U;
     import org.apache.ignite.testframework.GridTestUtils;
     import org.jetbrains.annotations.Nullable;
    +import org.junit.Test;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.JUnit4;

     import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
     import static org.apache.ignite.cache.CacheMode.PARTITIONED;
    @@ -53,6 +56,7 @@
      * Tests IGFS behavioral guarantees if some nodes on the cluster are synchronously or asynchronously stopped.
      * The operations to check are read, write or both.
      */
    +@RunWith(JUnit4.class)
     public class IgfsBackupFailoverSelfTest extends IgfsCommonAbstractTest {
         /** Directory. */
         protected static final IgfsPath DIR = new IgfsPath("/dir");
    @@ -205,6 +209,7 @@ private IgfsPath filePath(int j) {
          *
          * @throws Exception On error.
          */
    +    @Test
         public void testReadFailoverAfterStopMultipleNodes() throws Exception {
             final IgfsImpl igfs0 = nodeDatas[0].igfsImpl;

    @@ -259,6 +264,7 @@ public void testReadFailoverAfterStopMultipleNodes() throws Exception {
          *
          * @throws Exception On error.
*/ + @Test public void testReadFailoverWhileStoppingMultipleNodes() throws Exception { final IgfsImpl igfs0 = nodeDatas[0].igfsImpl; @@ -340,6 +346,7 @@ public void testReadFailoverWhileStoppingMultipleNodes() throws Exception { * * @throws Exception On error. */ + @Test public void testWriteFailoverAfterStopMultipleNodes() throws Exception { final IgfsImpl igfs0 = nodeDatas[0].igfsImpl; @@ -434,6 +441,7 @@ public void testWriteFailoverAfterStopMultipleNodes() throws Exception { * * @throws Exception */ + @Test public void testWriteFailoverWhileStoppingMultipleNodes() throws Exception { final IgfsImpl igfs0 = nodeDatas[0].igfsImpl; @@ -598,4 +606,4 @@ protected static int doWithRetries(int attempts, Callable clo) throws Exce @Override protected long getTestTimeout() { return 20 * 60 * 1000; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBlockMessageSystemPoolStarvationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBlockMessageSystemPoolStarvationSelfTest.java index 9012e0e27f4ad..e3894e0f35716 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBlockMessageSystemPoolStarvationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsBlockMessageSystemPoolStarvationSelfTest.java @@ -41,6 +41,9 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -50,6 +53,7 @@ /** * Test to check for system pool starvation due to {@link IgfsBlocksMessage}. 
*/ +@RunWith(JUnit4.class) public class IgfsBlockMessageSystemPoolStarvationSelfTest extends IgfsCommonAbstractTest { /** First node name. */ private static final String NODE_1_NAME = "node1"; @@ -106,6 +110,7 @@ public class IgfsBlockMessageSystemPoolStarvationSelfTest extends IgfsCommonAbst * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testStarvation() throws Exception { // 1. Create two IGFS file to make all system threads busy. CountDownLatch fileWriteLatch = new CountDownLatch(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCachePerBlockLruEvictionPolicySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCachePerBlockLruEvictionPolicySelfTest.java index 44418cedfee19..f86a9f0111fe6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCachePerBlockLruEvictionPolicySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCachePerBlockLruEvictionPolicySelfTest.java @@ -44,6 +44,9 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -54,7 +57,8 @@ /** * Tests for IGFS per-block LR eviction policy. */ -@SuppressWarnings({"ConstantConditions", "ThrowableResultOfMethodCallIgnored"}) +@SuppressWarnings({"ConstantConditions"}) +@RunWith(JUnit4.class) public class IgfsCachePerBlockLruEvictionPolicySelfTest extends IgfsCommonAbstractTest { /** Primary IGFS name. */ private static final String IGFS_PRIMARY = "igfs-primary"; @@ -243,6 +247,7 @@ private void start() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testFilePrimary() throws Exception { start(); @@ -267,6 +272,7 @@ public void testFilePrimary() throws Exception { * * @throws Exception If failed. */ + @Test public void testFileDual() throws Exception { start(); @@ -297,6 +303,7 @@ public void testFileDual() throws Exception { * * @throws Exception If failed. */ + @Test public void testFileDualExclusion() throws Exception { start(); @@ -324,6 +331,7 @@ public void testFileDualExclusion() throws Exception { * * @throws Exception If failed. */ + @Test public void testRenameDifferentExcludeSettings() throws Exception { start(); @@ -351,6 +359,7 @@ public void testRenameDifferentExcludeSettings() throws Exception { * * @throws Exception If failed. */ + @Test public void testBlockCountEviction() throws Exception { start(); @@ -386,6 +395,7 @@ public void testBlockCountEviction() throws Exception { * * @throws Exception If failed. */ + @Test public void testDataSizeEviction() throws Exception { start(); @@ -493,4 +503,4 @@ private void checkEvictionPolicy(final int curBlocks, final long curBytes) }, 5000) : "Unexpected counts [expectedBlocks=" + curBlocks + ", actualBlocks=" + evictPlc.getCurrentBlocks() + ", expectedBytes=" + curBytes + ", currentBytes=" + curBytes + ']'; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCacheSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCacheSelfTest.java index f8f3a0c421ffd..2bb221630d7cc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCacheSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsCacheSelfTest.java @@ -33,6 +33,9 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -41,6 +44,7 @@ /** * Tests ensuring that IGFS data and meta caches are not "visible" through public API. */ +@RunWith(JUnit4.class) public class IgfsCacheSelfTest extends IgfsCommonAbstractTest { /** Regular cache name. */ private static final String CACHE_NAME = "cache"; @@ -100,6 +104,7 @@ protected CacheConfiguration cacheConfiguration(@NotNull String cacheName) { * * @throws Exception If failed. */ + @Test public void testCache() throws Exception { final Ignite g = grid(); @@ -133,4 +138,4 @@ public void testCache() throws Exception { assert g.cache(CACHE_NAME) != null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDataManagerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDataManagerSelfTest.java index c07f0baf15bf8..d21c8b1ea58e5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDataManagerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDataManagerSelfTest.java @@ -32,9 +32,6 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteUuid; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; @@ -45,6 +42,9 @@ import java.util.Collection; import java.util.Collections; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -54,10 +54,8 @@ /** * {@link IgfsDataManager} test case. */ +@RunWith(JUnit4.class) public class IgfsDataManagerSelfTest extends IgfsCommonAbstractTest { - /** Test IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Groups count for data blocks. */ private static final int DATA_BLOCK_GROUP_CNT = 2; @@ -90,12 +88,6 @@ public class IgfsDataManagerSelfTest extends IgfsCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - FileSystemConfiguration igfsCfg = new FileSystemConfiguration(); igfsCfg.setMetaCacheConfiguration(cacheConfiguration("meta")); @@ -151,6 +143,7 @@ protected CacheConfiguration cacheConfiguration(@NotNull String cacheName) { * @throws Exception If failed. */ @SuppressWarnings("ConstantConditions") + @Test public void testDataStoring() throws Exception { for (int i = 0; i < 10; i++) { IgfsPath path = IgfsPath.ROOT; @@ -235,6 +228,7 @@ public void testDataStoring() throws Exception { * * @throws Exception If failed. */ + @Test public void testDataStoringRemainder() throws Exception { final int blockSize = IGFS_BLOCK_SIZE; @@ -326,6 +320,7 @@ public void testDataStoringRemainder() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDataStoringFlush() throws Exception { final int blockSize = IGFS_BLOCK_SIZE; final int writesCnt = 64; @@ -400,6 +395,7 @@ public void testDataStoringFlush() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testAffinity() throws Exception { final int blockSize = 10; final int grpSize = blockSize * DATA_BLOCK_GROUP_CNT; @@ -453,6 +449,7 @@ public void testAffinity() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAffinity2() throws Exception { int blockSize = BLOCK_SIZE; @@ -488,6 +485,7 @@ public void testAffinity2() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAffinityFileMap() throws Exception { int blockSize = BLOCK_SIZE; @@ -616,4 +614,4 @@ private void expectsAffinityFail(final IgfsEntryInfo info, final long start, fin } }, IgfsException.class, msg); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDualAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDualAbstractSelfTest.java index bc3ef3194dd0d..e0a8cd8a7332e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDualAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsDualAbstractSelfTest.java @@ -32,15 +32,15 @@ import java.util.Map; import java.util.concurrent.Callable; import java.util.concurrent.CyclicBarrier; - -import static org.apache.ignite.igfs.IgfsMode.DUAL_ASYNC; -import static org.apache.ignite.igfs.IgfsMode.DUAL_SYNC; -import static org.apache.ignite.igfs.IgfsMode.PROXY; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for IGFS working in mode when remote file system exists: DUAL_SYNC, DUAL_ASYNC. */ @SuppressWarnings("ConstantConditions") +@RunWith(JUnit4.class) public abstract class IgfsDualAbstractSelfTest extends IgfsAbstractSelfTest { /** * Constructor. @@ -56,6 +56,7 @@ protected IgfsDualAbstractSelfTest(IgfsMode mode) { * * @throws Exception If failed. 
*/ + @Test public void testExistsPathMissing() throws Exception { create(igfsSecondary, paths(DIR), null); @@ -67,6 +68,7 @@ public void testExistsPathMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testListFilesPathMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE)); @@ -89,6 +91,7 @@ public void testListFilesPathMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testInfoPathMissing() throws Exception { create(igfsSecondary, paths(DIR), null); create(igfs, null, null); @@ -105,6 +108,7 @@ public void testInfoPathMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testRenameFileSourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), paths(FILE)); create(igfs, paths(DIR), null); @@ -121,6 +125,7 @@ public void testRenameFileSourceMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testRenameFileSourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), paths(FILE)); create(igfs, null, null); @@ -137,6 +142,7 @@ public void testRenameFileSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testRenameFileParentRootSourceMissing() throws Exception { IgfsPath file1 = new IgfsPath("/file1"); IgfsPath file2 = new IgfsPath("/file2"); @@ -155,6 +161,7 @@ public void testRenameFileParentRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testRenameDirectorySourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), null); create(igfs, paths(DIR), null); @@ -170,6 +177,7 @@ public void testRenameDirectorySourceMissingPartially() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testRenameDirectorySourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), null); create(igfs, null, null); @@ -186,6 +194,7 @@ public void testRenameDirectorySourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testRenameDirectoryParentRootSourceMissing() throws Exception { IgfsPath dir1 = new IgfsPath("/dir1"); IgfsPath dir2 = new IgfsPath("/dir2"); @@ -204,6 +213,7 @@ public void testRenameDirectoryParentRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveFileSourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); @@ -220,6 +230,7 @@ public void testMoveFileSourceMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveFileSourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR_NEW, SUBDIR_NEW), paths(FILE)); @@ -236,6 +247,7 @@ public void testMoveFileSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveFileDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, SUBDIR, DIR_NEW), paths(FILE)); @@ -252,6 +264,7 @@ public void testMoveFileDestinationMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveFileDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, SUBDIR), paths(FILE)); @@ -268,6 +281,7 @@ public void testMoveFileDestinationMissing() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMoveFileSourceAndDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, DIR_NEW), null); @@ -284,6 +298,7 @@ public void testMoveFileSourceAndDestinationMissingPartially() throws Exception * * @throws Exception If failed. */ + @Test public void testMoveFileSourceAndDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, null, null); @@ -300,6 +315,7 @@ public void testMoveFileSourceAndDestinationMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveFileDestinationRootSourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), paths(FILE)); create(igfs, paths(DIR), null); @@ -316,6 +332,7 @@ public void testMoveFileDestinationRootSourceMissingPartially() throws Exception * * @throws Exception If failed. */ + @Test public void testMoveFileDestinationRootSourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), paths(FILE)); create(igfs, null, null); @@ -332,6 +349,7 @@ public void testMoveFileDestinationRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveFileSourceParentRootSourceMissing() throws Exception { IgfsPath file = new IgfsPath("/" + FILE.name()); @@ -349,6 +367,7 @@ public void testMoveFileSourceParentRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveFileSourceParentRootDestinationMissingPartially() throws Exception { IgfsPath file = new IgfsPath("/" + FILE.name()); @@ -367,6 +386,7 @@ public void testMoveFileSourceParentRootDestinationMissingPartially() throws Exc * * @throws Exception If failed. 
*/ + @Test public void testMoveFileSourceParentRootDestinationMissing() throws Exception { IgfsPath file = new IgfsPath("/" + FILE.name()); @@ -385,6 +405,7 @@ public void testMoveFileSourceParentRootDestinationMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileSourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); @@ -401,6 +422,7 @@ public void testMoveRenameFileSourceMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileSourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR_NEW, SUBDIR_NEW), paths(FILE)); @@ -417,6 +439,7 @@ public void testMoveRenameFileSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, SUBDIR, DIR_NEW), paths(FILE)); @@ -433,6 +456,7 @@ public void testMoveRenameFileDestinationMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, SUBDIR), paths(FILE)); @@ -449,6 +473,7 @@ public void testMoveRenameFileDestinationMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileSourceAndDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, paths(DIR, DIR_NEW), null); @@ -465,6 +490,7 @@ public void testMoveRenameFileSourceAndDestinationMissingPartially() throws Exce * * @throws Exception If failed. 
*/ + @Test public void testMoveRenameFileSourceAndDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, DIR_NEW, SUBDIR_NEW), paths(FILE)); create(igfs, null, null); @@ -481,6 +507,7 @@ public void testMoveRenameFileSourceAndDestinationMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileDestinationRootSourceMissingPartially() throws Exception { IgfsPath file = new IgfsPath("/" + FILE.name()); @@ -499,6 +526,7 @@ public void testMoveRenameFileDestinationRootSourceMissingPartially() throws Exc * * @throws Exception If failed. */ + @Test public void testMoveRenameFileDestinationRootSourceMissing() throws Exception { IgfsPath file = new IgfsPath("/" + FILE.name()); @@ -517,6 +545,7 @@ public void testMoveRenameFileDestinationRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileSourceParentRootSourceMissing() throws Exception { IgfsPath file = new IgfsPath("/" + FILE_NEW.name()); @@ -534,6 +563,7 @@ public void testMoveRenameFileSourceParentRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameFileSourceParentRootDestinationMissingPartially() throws Exception { IgfsPath file = new IgfsPath("/" + FILE_NEW.name()); @@ -552,6 +582,7 @@ public void testMoveRenameFileSourceParentRootDestinationMissingPartially() thro * * @throws Exception If failed. */ + @Test public void testMoveRenameFileSourceParentRootDestinationMissing() throws Exception { IgfsPath file = new IgfsPath("/" + FILE_NEW.name()); @@ -570,6 +601,7 @@ public void testMoveRenameFileSourceParentRootDestinationMissing() throws Except * * @throws Exception If failed. 
*/ + @Test public void testMoveDirectorySourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, DIR_NEW, SUBDIR_NEW), null); @@ -586,6 +618,7 @@ public void testMoveDirectorySourceMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveDirectorySourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR_NEW, SUBDIR_NEW), null); @@ -603,6 +636,7 @@ public void testMoveDirectorySourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveDirectoryDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW), null); @@ -619,6 +653,7 @@ public void testMoveDirectoryDestinationMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveDirectoryDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, SUBDIR, SUBSUBDIR), null); @@ -635,6 +670,7 @@ public void testMoveDirectoryDestinationMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveDirectorySourceAndDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, DIR_NEW), null); @@ -651,6 +687,7 @@ public void testMoveDirectorySourceAndDestinationMissingPartially() throws Excep * * @throws Exception If failed. 
*/ + @Test public void testMoveDirectorySourceAndDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, null, null); @@ -667,6 +704,7 @@ public void testMoveDirectorySourceAndDestinationMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveDirectoryDestinationRootSourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR), null); create(igfs, paths(DIR), null); @@ -683,6 +721,7 @@ public void testMoveDirectoryDestinationRootSourceMissingPartially() throws Exce * * @throws Exception If failed. */ + @Test public void testMoveDirectoryDestinationRootSourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR), null); create(igfs, null, null); @@ -699,6 +738,7 @@ public void testMoveDirectoryDestinationRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveDirectorySourceParentRootSourceMissing() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR.name()); @@ -716,6 +756,7 @@ public void testMoveDirectorySourceParentRootSourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveDirectorySourceParentRootDestinationMissingPartially() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR.name()); @@ -734,6 +775,7 @@ public void testMoveDirectorySourceParentRootDestinationMissingPartially() throw * * @throws Exception If failed. */ + @Test public void testMoveDirectorySourceParentRootDestinationMissing() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR.name()); @@ -752,6 +794,7 @@ public void testMoveDirectorySourceParentRootDestinationMissing() throws Excepti * * @throws Exception If failed. 
*/ + @Test public void testMoveRenameDirectorySourceMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, DIR_NEW, SUBDIR_NEW), null); @@ -768,6 +811,7 @@ public void testMoveRenameDirectorySourceMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectorySourceMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR_NEW, SUBDIR_NEW), null); @@ -785,6 +829,7 @@ public void testMoveRenameDirectorySourceMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectoryDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW), null); @@ -801,6 +846,7 @@ public void testMoveRenameDirectoryDestinationMissingPartially() throws Exceptio * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectoryDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, SUBDIR, SUBSUBDIR), null); @@ -817,6 +863,7 @@ public void testMoveRenameDirectoryDestinationMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectorySourceAndDestinationMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, paths(DIR, DIR_NEW), null); @@ -833,6 +880,7 @@ public void testMoveRenameDirectorySourceAndDestinationMissingPartially() throws * * @throws Exception If failed. 
*/ + @Test public void testMoveRenameDirectorySourceAndDestinationMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR, DIR_NEW, SUBDIR_NEW), null); create(igfs, null, null); @@ -849,6 +897,7 @@ public void testMoveRenameDirectorySourceAndDestinationMissing() throws Exceptio * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectoryDestinationRootSourceMissingPartially() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR.name()); @@ -867,6 +916,7 @@ public void testMoveRenameDirectoryDestinationRootSourceMissingPartially() throw * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectoryDestinationRootSourceMissing() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR.name()); @@ -885,6 +935,7 @@ public void testMoveRenameDirectoryDestinationRootSourceMissing() throws Excepti * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectorySourceParentRootSourceMissing() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR_NEW.name()); @@ -902,6 +953,7 @@ public void testMoveRenameDirectorySourceParentRootSourceMissing() throws Except * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectorySourceParentRootDestinationMissingPartially() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR_NEW.name()); @@ -920,6 +972,7 @@ public void testMoveRenameDirectorySourceParentRootDestinationMissingPartially() * * @throws Exception If failed. */ + @Test public void testMoveRenameDirectorySourceParentRootDestinationMissing() throws Exception { IgfsPath dir = new IgfsPath("/" + SUBSUBDIR_NEW.name()); @@ -938,6 +991,7 @@ public void testMoveRenameDirectorySourceParentRootDestinationMissing() throws E * * @throws Exception If failed. 
*/ + @Test public void testMkdirsParentPathMissingPartially() throws Exception { Map props = null; @@ -968,6 +1022,7 @@ public void testMkdirsParentPathMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testMkdrisParentPathMissing() throws Exception { Map props = null; @@ -999,6 +1054,7 @@ public void testMkdrisParentPathMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeletePathMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE)); create(igfs, paths(DIR), null); @@ -1014,6 +1070,7 @@ public void testDeletePathMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeletePathMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE)); create(igfs, null, null); @@ -1029,6 +1086,7 @@ public void testDeletePathMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeleteParentRootPathMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR, SUBSUBDIR), paths(FILE)); create(igfs, null, null); @@ -1043,6 +1101,7 @@ public void testDeleteParentRootPathMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdatePathMissingPartially() throws Exception { if(!propertiesSupported()) return; @@ -1074,6 +1133,7 @@ public void testUpdatePathMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdatePathMissing() throws Exception { if(!propertiesSupported()) return; @@ -1105,6 +1165,7 @@ public void testUpdatePathMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdateParentRootPathMissing() throws Exception { doUpdateParentRootPathMissing(properties("owner", "group", "0555")); } @@ -1135,6 +1196,7 @@ protected void doUpdateParentRootPathMissing(Map props) throws E * * @throws Exception If failed. 
*/ + @Test public void testOpenMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), null); create(igfs, null, null); @@ -1149,6 +1211,7 @@ public void testOpenMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateParentMissingPartially() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), null); create(igfs, paths(DIR), null); @@ -1165,6 +1228,7 @@ public void testCreateParentMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testSetPropertiesOnPartiallyMissingDirectory() throws Exception { if (!propertiesSupported()) return; @@ -1185,6 +1249,7 @@ public void testSetPropertiesOnPartiallyMissingDirectory() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateParentMissing() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), null); create(igfs, null, null); @@ -1200,6 +1265,7 @@ public void testCreateParentMissing() throws Exception { * * @throws Exception If failed. */ + @Test public void testSetPropertiesOnMissingDirectory() throws Exception { if (!propertiesSupported()) return; @@ -1223,6 +1289,7 @@ public void testSetPropertiesOnMissingDirectory() throws Exception { * * @throws Exception If failed. */ + @Test public void testAppendParentMissingPartially() throws Exception { if (!appendSupported()) return; @@ -1244,6 +1311,7 @@ public void testAppendParentMissingPartially() throws Exception { * * @throws Exception If failed. */ + @Test public void testAppendParentMissing() throws Exception { if (!appendSupported()) return; @@ -1265,6 +1333,7 @@ public void testAppendParentMissing() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testConcurrentRenameDeleteSourceRemote() throws Exception { for (int i = 0; i < REPEAT_CNT; i++) { final CyclicBarrier barrier = new CyclicBarrier(2); @@ -1321,6 +1390,7 @@ public void testConcurrentRenameDeleteSourceRemote() throws Exception { * * @throws Exception If failed. */ + @Test public void testConcurrentRenameDeleteDestinationRemote() throws Exception { for (int i = 0; i < REPEAT_CNT; i++) { final CyclicBarrier barrier = new CyclicBarrier(2); @@ -1373,6 +1443,7 @@ public void testConcurrentRenameDeleteDestinationRemote() throws Exception { * * @throws Exception If failed. */ + @Test public void testConcurrentRenamesRemote() throws Exception { for (int i = 0; i < REPEAT_CNT; i++) { final CyclicBarrier barrier = new CyclicBarrier(2); @@ -1433,6 +1504,7 @@ public void testConcurrentRenamesRemote() throws Exception { * * @throws Exception If failed. */ + @Test public void testConcurrentDeletesRemote() throws Exception { for (int i = 0; i < REPEAT_CNT; i++) { final CyclicBarrier barrier = new CyclicBarrier(2); @@ -1526,6 +1598,7 @@ private T2 checkParentListingTime(IgfsSecondaryFileSystem fs, IgfsPa * * @throws Exception On error. */ + @Test public void testAccessAndModificationTimeUpwardsPropagation() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), paths(FILE, FILE2)); @@ -1552,6 +1625,7 @@ public void testAccessAndModificationTimeUpwardsPropagation() throws Exception { * * @throws Exception If failed. */ + @Test public void testSetTimesMissingPartially() throws Exception { if (!timesSupported()) return; @@ -1589,6 +1663,7 @@ public void testSetTimesMissingPartially() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSecondarySize() throws Exception { igfs.mkdirs(SUBDIR); @@ -1614,4 +1689,4 @@ public static boolean propertiesContains(Map allProps, Map(Arrays.asList(new T2<>( new IgfsPath("/a/b/c/d"), PROXY), new T2<>(new IgfsPath("/a/P/"), PRIMARY), new T2<>(new IgfsPath("/a/b/"), DUAL_ASYNC)))); @@ -49,6 +56,7 @@ public class IgfsModeResolverSelfTest extends TestCase { /** * @throws Exception If failed. */ + @Test public void testCanContain() throws Exception { for (IgfsMode m: IgfsMode.values()) { // Each mode can contain itself: @@ -68,6 +76,7 @@ public void testCanContain() throws Exception { /** * @throws Exception If failed. */ + @Test public void testResolve() throws Exception { assertEquals(DUAL_SYNC, reslvr.resolveMode(IgfsPath.ROOT)); assertEquals(DUAL_SYNC, reslvr.resolveMode(new IgfsPath("/a"))); @@ -89,6 +98,7 @@ public void testResolve() throws Exception { /** * @throws Exception If failed. */ + @Test public void testModesValidation() throws Exception { // Another mode inside PRIMARY directory: try { @@ -148,6 +158,7 @@ public void testModesValidation() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDualParentsWithPrimaryChild() throws Exception { Set set = new HashSet<>(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsModesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsModesSelfTest.java index 0720d55792be0..0a43c1d2e4cf0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsModesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsModesSelfTest.java @@ -39,6 +39,9 @@ import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -51,6 +54,7 @@ /** * IGFS modes self test. */ +@RunWith(JUnit4.class) public class IgfsModesSelfTest extends IgfsCommonAbstractTest { /** Grid instance hosting primary IGFS. */ private IgniteEx grid; @@ -220,6 +224,7 @@ final void pathModes(IgniteBiTuple... modes) { * * @throws Exception If failed. */ + @Test public void testModeDefaultIsNotSet() throws Exception { setSecondaryFs = true; @@ -233,6 +238,7 @@ public void testModeDefaultIsNotSet() throws Exception { * * @throws Exception If failed. */ + @Test public void testModeDefaultIsSet() throws Exception { mode = DUAL_SYNC; @@ -248,6 +254,7 @@ public void testModeDefaultIsSet() throws Exception { * * @throws Exception If failed. */ + @Test public void testModeSecondaryNoUri() throws Exception { mode = PROXY; @@ -269,6 +276,7 @@ public void testModeSecondaryNoUri() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testPathMode() throws Exception { pathModes(F.t("/dir1", PROXY), F.t("/dir2", DUAL_SYNC), F.t("/dir3", PRIMARY), F.t("/dir4", PRIMARY)); @@ -297,6 +305,7 @@ public void testPathMode() throws Exception { * * @throws Exception If failed. */ + @Test public void testPathModeSwitchToPrimary() throws Exception { mode = DUAL_SYNC; @@ -314,6 +323,7 @@ public void testPathModeSwitchToPrimary() throws Exception { * * @throws Exception If failed. */ + @Test public void testPathModeSecondaryNoCfg() throws Exception { pathModes(F.t("dir", PROXY)); @@ -335,6 +345,7 @@ public void testPathModeSecondaryNoCfg() throws Exception { * * @throws Exception If failed. */ + @Test public void testPropagationPrimary() throws Exception { mode = PRIMARY; @@ -346,6 +357,7 @@ public void testPropagationPrimary() throws Exception { * * @throws Exception If failed. */ + @Test public void testPropagationDualSync() throws Exception { mode = DUAL_SYNC; @@ -357,6 +369,7 @@ public void testPropagationDualSync() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testPropagationDualAsync() throws Exception { mode = DUAL_ASYNC; @@ -485,4 +498,4 @@ private void checkPropagation() throws Exception { assert !igfsSecondary.exists(dir); assert !igfsSecondary.exists(file); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsOneClientNodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsOneClientNodeTest.java index a55f607883703..dfd70d6ec8273 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsOneClientNodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsOneClientNodeTest.java @@ -30,6 +30,9 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -37,6 +40,7 @@ /** * Test for igfs with one node in client mode. */ +@RunWith(JUnit4.class) public class IgfsOneClientNodeTest extends GridCommonAbstractTest { /** Regular cache name. */ private static final String CACHE_NAME = "cache"; @@ -94,6 +98,7 @@ protected CacheConfiguration cacheConfiguration(@NotNull String cacheName) { /** * @throws Exception If failed. 
*/ + @Test public void testStartIgfs() throws Exception { final IgfsImpl igfs = (IgfsImpl) grid(0).fileSystem("igfs"); @@ -124,4 +129,4 @@ public void testStartIgfs() throws Exception { } }, IgfsException.class, "Failed to execute operation because there are no IGFS metadata nodes."); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryMultiNodeSelfTest.java index f004d4069bbe2..0969cfd4b987a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryMultiNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryMultiNodeSelfTest.java @@ -17,9 +17,14 @@ package org.apache.ignite.internal.processors.igfs; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + /** * Multinode test for PRIMARY mode. */ +@RunWith(JUnit4.class) public class IgfsPrimaryMultiNodeSelfTest extends IgfsPrimarySelfTest { /** {@inheritDoc} */ @Override protected int nodeCount() { @@ -29,7 +34,8 @@ public class IgfsPrimaryMultiNodeSelfTest extends IgfsPrimarySelfTest { /** * @throws Exception If failed. 
*/ + @Test @Override public void testCreateConsistencyMultithreaded() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-8823"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryRelaxedConsistencyMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryRelaxedConsistencyMultiNodeSelfTest.java index d35237cacd0fb..a3f11a28a664f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryRelaxedConsistencyMultiNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsPrimaryRelaxedConsistencyMultiNodeSelfTest.java @@ -17,19 +17,24 @@ package org.apache.ignite.internal.processors.igfs; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + /** * Tests for PRIMARY mode and relaxed consistency model. */ +@RunWith(JUnit4.class) public class IgfsPrimaryRelaxedConsistencyMultiNodeSelfTest extends IgfsPrimaryRelaxedConsistencySelfTest { /** {@inheritDoc} */ @Override protected int nodeCount() { return 4; } - @Override - public void testCreateConsistencyMultithreaded() throws Exception { + @Test + @Override public void testCreateConsistencyMultithreaded() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-8823"); super.testCreateConsistencyMultithreaded(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorSelfTest.java index 0ba3bcb221393..3a3c397c9e5a8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorSelfTest.java @@ -17,6 +17,17 @@ package 
org.apache.ignite.internal.processors.igfs; +import java.security.SecureRandom; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.atomic.AtomicInteger; +import javax.cache.Cache; import org.apache.commons.io.IOUtils; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; @@ -40,24 +51,12 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteUuid; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; - -import javax.cache.Cache; -import java.security.SecureRandom; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collection; -import java.util.Collections; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.concurrent.Callable; -import java.util.concurrent.atomic.AtomicInteger; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.nio.charset.StandardCharsets.UTF_8; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -67,10 +66,8 @@ /** * Tests for {@link IgfsProcessor}. */ +@RunWith(JUnit4.class) public class IgfsProcessorSelfTest extends IgfsCommonAbstractTest { - /** Test IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Random numbers generator. 
*/ protected final SecureRandom rnd = new SecureRandom(); @@ -114,13 +111,6 @@ public class IgfsProcessorSelfTest extends IgfsCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - FileSystemConfiguration igfsCfg = new FileSystemConfiguration(); igfsCfg.setMetaCacheConfiguration(cacheConfiguration("meta")); @@ -165,6 +155,7 @@ public String igfsName() { } /** @throws Exception If failed. */ + @Test public void testigfsEnabled() throws Exception { IgniteFileSystem igfs = grid(0).fileSystem(igfsName()); @@ -176,6 +167,7 @@ public void testigfsEnabled() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdateProperties() throws Exception { IgfsPath p = path("/tmp/my"); @@ -209,6 +201,7 @@ public void testUpdateProperties() throws Exception { } /** @throws Exception If failed. */ + @Test public void testCreate() throws Exception { IgfsPath path = path("/file"); @@ -219,7 +212,7 @@ public void testCreate() throws Exception { for (int i = 0; i < nodesCount(); i++) { IgfsEntryInfo fileInfo = - (IgfsEntryInfo)grid(i).cachex(metaCacheName).localPeek(info.fileId(), null, null); + (IgfsEntryInfo)grid(i).cachex(metaCacheName).localPeek(info.fileId(), null); assertNotNull(fileInfo); assertNotNull(fileInfo.listing()); @@ -235,6 +228,7 @@ public void testCreate() throws Exception { * * @throws Exception In case of any exception. */ + @Test public void testMakeListDeleteDirs() throws Exception { assertListDir("/"); @@ -293,6 +287,7 @@ public void testMakeListDeleteDirs() throws Exception { * @throws Exception In case of any exception. 
*/ @SuppressWarnings("TooBroadScope") + @Test public void testMakeListDeleteDirsMultithreaded() throws Exception { assertListDir("/"); @@ -343,6 +338,7 @@ public void testMakeListDeleteDirsMultithreaded() throws Exception { } /** @throws Exception If failed. */ + @Test public void testBasicOps() throws Exception { // Create directories. igfs.mkdirs(path("/A/B1/C1")); @@ -423,6 +419,7 @@ public void testBasicOps() throws Exception { * * @throws Exception If failed. */ + @Test public void testSize() throws Exception { IgfsPath dir1 = path("/dir1"); IgfsPath subDir1 = path("/dir1/subdir1"); @@ -469,6 +466,7 @@ private > List sorted(Collection col) { } /** @throws Exception If failed. */ + @Test public void testRename() throws Exception { // Create directories. igfs.mkdirs(path("/A/B1/C1")); @@ -616,6 +614,7 @@ private IgfsPath path(long i) { } /** @throws Exception If failed. */ + @Test public void testCreateOpenAppend() throws Exception { // Error - path points to root directory. assertCreateFails("/", false); @@ -676,6 +675,7 @@ public void testCreateOpenAppend() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("BusyWait") + @Test public void testDeleteCacheConsistency() throws Exception { IgfsPath path = new IgfsPath("/someFile"); @@ -735,21 +735,25 @@ public void testDeleteCacheConsistency() throws Exception { } /** @throws Exception If failed. */ + @Test public void testCreateAppendLongData1() throws Exception { checkCreateAppendLongData(123, 1024, 100); } /** @throws Exception If failed. */ + @Test public void testCreateAppendLongData2() throws Exception { checkCreateAppendLongData(123 + 1024, 1024, 100); } /** @throws Exception If failed. */ + @Test public void testCreateAppendLongData3() throws Exception { checkCreateAppendLongData(123, 1024, 1000); } /** @throws Exception If failed. 
*/ + @Test public void testCreateAppendLongData4() throws Exception { checkCreateAppendLongData(123 + 1024, 1024, 1000); } @@ -759,6 +763,7 @@ public void testCreateAppendLongData4() throws Exception { * * @throws Exception If failed. */ + @Test public void testFormatNonEmpty() throws Exception { String dirPath = "/A/B/C"; @@ -793,6 +798,7 @@ public void testFormatNonEmpty() throws Exception { * * @throws Exception If failed. */ + @Test public void testFormatEmpty() throws Exception { igfs.clear(); } @@ -989,4 +995,4 @@ private void assertListDir(String path, String... item) { assertEquals(Arrays.asList(item), names); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorValidationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorValidationSelfTest.java index 7484c59fbb2e6..925c86f98e48a 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorValidationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsProcessorValidationSelfTest.java @@ -27,15 +27,15 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.GridCacheDefaultAffinityKeyMapper; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import java.lang.reflect.Array; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.igfs.IgfsMode.DUAL_ASYNC; @@ -50,10 +50,8 @@ *

    * Tests starting with "testRemote" are checking {@link IgfsProcessor#checkIgfsOnRemoteNode(org.apache.ignite.cluster.ClusterNode)}. */ +@RunWith(JUnit4.class) public class IgfsProcessorValidationSelfTest extends IgfsCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Grid #1 config. */ private IgniteConfiguration g1Cfg; @@ -72,12 +70,6 @@ public class IgfsProcessorValidationSelfTest extends IgfsCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - g1IgfsCfg1.setName("g1IgfsCfg1"); g1IgfsCfg2.setName("g1IgfsCfg2"); @@ -114,6 +106,7 @@ private T[] concat(T[] first, T[] second, Class cls) { /** * @throws Exception If failed. */ + @Test public void testLocalIfAffinityMapperIsWrongClass() throws Exception { for (FileSystemConfiguration igfsCfg : g1Cfg.getFileSystemConfiguration()) { @@ -130,6 +123,7 @@ public void testLocalIfAffinityMapperIsWrongClass() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalIfIgfsConfigsHaveDuplicatedNames() throws Exception { String igfsCfgName = "igfs-cfg"; @@ -142,6 +136,7 @@ public void testLocalIfIgfsConfigsHaveDuplicatedNames() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalIfQueryIndexingEnabledForDataCache() throws Exception { g1IgfsCfg1.setDataCacheConfiguration(dataCache(1024)); g1IgfsCfg1.getDataCacheConfiguration().setIndexedTypes(Integer.class, String.class); @@ -152,6 +147,7 @@ public void testLocalIfQueryIndexingEnabledForDataCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLocalIfQueryIndexingEnabledForMetaCache() throws Exception { g1IgfsCfg1.setMetaCacheConfiguration(metaCache()); @@ -163,7 +159,7 @@ public void testLocalIfQueryIndexingEnabledForMetaCache() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("NullableProblems") + @Test public void testLocalNullIgfsNameIsNotSupported() throws Exception { try { g1IgfsCfg1.setName(null); @@ -186,6 +182,7 @@ public void testLocalNullIgfsNameIsNotSupported() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalIfNonPrimaryModeAndHadoopFileSystemUriIsNull() throws Exception { g1IgfsCfg2.setDefaultMode(PROXY); @@ -195,6 +192,7 @@ public void testLocalIfNonPrimaryModeAndHadoopFileSystemUriIsNull() throws Excep /** * @throws Exception If failed. */ + @Test public void testRemoteIfDataBlockSizeDiffers() throws Exception { IgniteConfiguration g2Cfg = getConfiguration("g2"); @@ -212,6 +210,7 @@ public void testRemoteIfDataBlockSizeDiffers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoteIfAffinityMapperGroupSizeDiffers() throws Exception { IgniteConfiguration g2Cfg = getConfiguration("g2"); @@ -227,6 +226,7 @@ public void testRemoteIfAffinityMapperGroupSizeDiffers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoteIfDefaultModeDiffers() throws Exception { IgniteConfiguration g2Cfg = getConfiguration("g2"); @@ -249,6 +249,7 @@ public void testRemoteIfDefaultModeDiffers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoteIfPathModeDiffers() throws Exception { IgniteConfiguration g2Cfg = getConfiguration("g2"); @@ -268,6 +269,7 @@ public void testRemoteIfPathModeDiffers() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testZeroEndpointTcpPort() throws Exception { checkInvalidPort(0); } @@ -275,6 +277,7 @@ public void testZeroEndpointTcpPort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNegativeEndpointTcpPort() throws Exception { checkInvalidPort(-1); } @@ -282,6 +285,7 @@ public void testNegativeEndpointTcpPort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTooBigEndpointTcpPort() throws Exception { checkInvalidPort(65536); } @@ -289,6 +293,7 @@ public void testTooBigEndpointTcpPort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPreConfiguredCache() throws Exception { FileSystemConfiguration igfsCfg1 = new FileSystemConfiguration(g1IgfsCfg1); igfsCfg1.setName("igfs"); @@ -335,6 +340,7 @@ private void checkInvalidPort(int port) throws Exception { /** * @throws Exception If failed. */ + @Test public void testInvalidEndpointThreadCount() throws Exception { final String failMsg = "IGFS endpoint thread count must be positive"; @@ -404,4 +410,4 @@ private CacheConfiguration metaCache() { return metaCache; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSecondaryFileSystemInjectionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSecondaryFileSystemInjectionSelfTest.java index 1e24610d938ef..fb12e880bedd4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSecondaryFileSystemInjectionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSecondaryFileSystemInjectionSelfTest.java @@ -40,6 +40,9 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -48,6 +51,7 @@ /** * Tests for resource injection to secondary file system. */ +@RunWith(JUnit4.class) public class IgfsSecondaryFileSystemInjectionSelfTest extends GridCommonAbstractTest { /** IGFS name. */ protected static final String IGFS_NAME = "igfs-test"; @@ -93,7 +97,7 @@ public class IgfsSecondaryFileSystemInjectionSelfTest extends GridCommonAbstract /** * @throws Exception If failed. */ - @SuppressWarnings({"UnusedDeclaration"}) + @Test public void testInjectPrimaryByField() throws Exception { secondary = new TestBaseSecondaryFsMock() { @FileSystemResource @@ -120,7 +124,7 @@ public void testInjectPrimaryByField() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings({"UnusedDeclaration"}) + @Test public void testInjectPrimaryByMethods() throws Exception { secondary = new TestBaseSecondaryFsMock() { /** Ignite instance. */ @@ -171,7 +175,7 @@ void setIgniteInst(Ignite ig) { /** * */ - private static abstract class TestBaseSecondaryFsMock implements IgfsSecondaryFileSystem { + private abstract static class TestBaseSecondaryFsMock implements IgfsSecondaryFileSystem { /** {@inheritDoc} */ @Override public boolean exists(IgfsPath path) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest.java index 0d65bb87aa7ec..b987752781d59 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest.java @@ -37,10 +37,10 @@ import org.apache.ignite.internal.util.typedef.G; import 
org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.configuration.FileSystemConfiguration.DFLT_MGMT_PORT; @@ -48,10 +48,8 @@ /** * Base test class for {@link IgfsServer} checking IPC endpoint registrations. */ +@RunWith(JUnit4.class) public abstract class IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest extends IgfsCommonAbstractTest { - /** IP finder. */ - protected static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - private static final AtomicInteger mgmtPort = new AtomicInteger(DFLT_MGMT_PORT); /** {@inheritDoc} */ @@ -62,6 +60,7 @@ public abstract class IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest e /** * @throws Exception If failed. */ + @Test public void testLoopbackEndpointsRegistration() throws Exception { IgniteConfiguration cfg = gridConfiguration(); @@ -81,6 +80,7 @@ public void testLoopbackEndpointsRegistration() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLoopbackEndpointsCustomHostRegistration() throws Exception { IgniteConfiguration cfg = gridConfigurationManyIgfsCaches(2); @@ -130,11 +130,6 @@ else if (record.clazz() == IpcServerTcpEndpoint.class) protected IgniteConfiguration gridConfiguration() throws Exception { IgniteConfiguration cfg = getConfiguration(getTestIgniteInstanceName()); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setName("partitioned"); @@ -165,11 +160,6 @@ protected IgniteConfiguration gridConfiguration() throws Exception { IgniteConfiguration gridConfigurationManyIgfsCaches(int cacheCtn) throws Exception { IgniteConfiguration cfg = getConfiguration(getTestIgniteInstanceName()); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - List cachesCfg = new ArrayList<>(); for (int i = 0; i < cacheCtn; ++i) { @@ -263,4 +253,4 @@ protected FileSystemConfiguration igfsConfiguration(@Nullable IgfsIpcEndpointTyp return igfsConfiguration; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest.java index 030c852e520fd..2ec7a38ef6a66 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest.java @@ -22,16 +22,21 @@ import org.apache.ignite.igfs.IgfsIpcEndpointType; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.T2; +import org.junit.Test; 
+import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link IgfsServer} that checks all IPC endpoint registration types * permitted for Linux and Mac OS. */ +@RunWith(JUnit4.class) public class IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest extends IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testLoopbackAndShmemEndpointsRegistration() throws Exception { IgniteConfiguration cfg = gridConfigurationManyIgfsCaches(3); @@ -51,4 +56,4 @@ public void testLoopbackAndShmemEndpointsRegistration() throws Exception { assertEquals(4, res.get1().intValue()); assertEquals(2, res.get2().intValue()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnWindowsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnWindowsSelfTest.java index b16beb124a5ee..0ac420dde422d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnWindowsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsServerManagerIpcEndpointRegistrationOnWindowsSelfTest.java @@ -25,16 +25,21 @@ import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link IgfsServerManager} that checks shmem IPC endpoint registration * forbidden for Windows. */ +@RunWith(JUnit4.class) public class IgfsServerManagerIpcEndpointRegistrationOnWindowsSelfTest extends IgfsServerManagerIpcEndpointRegistrationAbstractSelfTest { /** * @throws Exception If failed. 
*/ + @Test public void testShmemEndpointsRegistration() throws Exception { Throwable e = GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -50,4 +55,4 @@ public void testShmemEndpointsRegistration() throws Exception { assert e.getCause().getCause().getMessage().contains(" should not be configured on Windows (configure " + IpcServerTcpEndpoint.class.getSimpleName() + ")"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSizeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSizeSelfTest.java index d37cd9c6ec83c..112c12ea218fb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSizeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsSizeSelfTest.java @@ -52,6 +52,9 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -61,6 +64,7 @@ /** * {@link IgfsAttributes} test case. */ +@RunWith(JUnit4.class) public class IgfsSizeSelfTest extends IgfsCommonAbstractTest { /** How many grids to start. */ private static final int GRID_CNT = 3; @@ -77,9 +81,6 @@ public class IgfsSizeSelfTest extends IgfsCommonAbstractTest { /** IGFS name. */ private static final String IGFS_NAME = "test"; - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** IGFS management port */ private static int mgmtPort; @@ -143,11 +144,6 @@ public class IgfsSizeSelfTest extends IgfsCommonAbstractTest { igfsCfg.setMetaCacheConfiguration(metaCfg); igfsCfg.setDataCacheConfiguration(dataCfg); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); cfg.setFileSystemConfiguration(igfsCfg); if (memIgfsdDataPlcSetter != null) @@ -173,6 +169,7 @@ private void startUp() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitioned() throws Exception { cacheMode = PARTITIONED; nearEnabled = true; @@ -185,6 +182,7 @@ public void testPartitioned() throws Exception { * * @throws Exception If failed. */ + @Test public void testColocated() throws Exception { cacheMode = PARTITIONED; nearEnabled = false; @@ -197,6 +195,7 @@ public void testColocated() throws Exception { * * @throws Exception If failed. */ + @Test public void testReplicated() throws Exception { cacheMode = REPLICATED; @@ -208,6 +207,7 @@ public void testReplicated() throws Exception { * * @throws Exception If failed. */ + @Test public void testPartitionedOversize() throws Exception { cacheMode = PARTITIONED; nearEnabled = true; @@ -220,6 +220,7 @@ public void testPartitionedOversize() throws Exception { * * @throws Exception If failed. */ + @Test public void testColocatedOversize() throws Exception { cacheMode = PARTITIONED; nearEnabled = false; @@ -232,6 +233,7 @@ public void testColocatedOversize() throws Exception { * * @throws Exception If failed. */ + @Test public void testReplicatedOversize() throws Exception { cacheMode = REPLICATED; @@ -243,6 +245,7 @@ public void testReplicatedOversize() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testPartitionedPreload() throws Exception { cacheMode = PARTITIONED; nearEnabled = true; @@ -255,6 +258,7 @@ public void testPartitionedPreload() throws Exception { * * @throws Exception If failed. */ + @Test public void testColocatedPreload() throws Exception { cacheMode = PARTITIONED; nearEnabled = false; @@ -709,4 +713,4 @@ private int length() { return len; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStartCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStartCacheTest.java index 46fdae578abca..6b39ff69e59f1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStartCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStartCacheTest.java @@ -28,14 +28,14 @@ import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.GridCacheAdapter; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import java.io.BufferedWriter; import java.io.OutputStreamWriter; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -47,10 +47,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgfsStartCacheTest extends IgfsCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** * @param igfs If {@code true} created IGFS configuration. * @param idx Node index. 
@@ -59,12 +57,6 @@ public class IgfsStartCacheTest extends IgfsCommonAbstractTest { private IgniteConfiguration config(boolean igfs, int idx) { IgniteConfiguration cfg = new IgniteConfiguration(); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - if (igfs) { FileSystemConfiguration igfsCfg = new FileSystemConfiguration(); @@ -106,6 +98,7 @@ private IgniteConfiguration config(boolean igfs, int idx) { /** * @throws Exception If failed. */ + @Test public void testCacheStart() throws Exception { Ignite g0 = G.start(config(true, 0)); @@ -165,4 +158,4 @@ private void checkCache(GridCacheAdapter cache) { assertTrue(cache.context().systemTx()); assertEquals(SYSTEM_POOL, cache.context().ioPolicy()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStreamsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStreamsSelfTest.java index 115cd2ea551f6..20f3fdcd0fdab 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStreamsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsStreamsSelfTest.java @@ -35,9 +35,6 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteUuid; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; @@ -52,6 +49,9 @@ import java.util.concurrent.Callable; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.atomic.AtomicInteger; +import org.junit.Test; +import org.junit.runner.RunWith; 
+import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -62,10 +62,8 @@ /** * Tests for IGFS streams content. */ +@RunWith(JUnit4.class) public class IgfsStreamsSelfTest extends IgfsCommonAbstractTest { - /** Test IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Group size. */ public static final int CFG_GRP_SIZE = 128; @@ -113,12 +111,6 @@ public class IgfsStreamsSelfTest extends IgfsCommonAbstractTest { cfg.setCacheConfiguration(); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - FileSystemConfiguration igfsCfg = new FileSystemConfiguration(); igfsCfg.setMetaCacheConfiguration(cacheConfiguration("meta")); @@ -159,6 +151,7 @@ protected CacheConfiguration cacheConfiguration(@NotNull String cacheName) { * * @throws Exception In case of exception. */ + @Test public void testCreateFile() throws Exception { IgfsPath path = new IgfsPath("/asdf"); @@ -172,6 +165,7 @@ public void testCreateFile() throws Exception { } /** @throws Exception If failed. */ + @Test public void testCreateFileColocated() throws Exception { IgfsPath path = new IgfsPath("/colocated"); @@ -210,6 +204,7 @@ public void testCreateFileColocated() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testCreateFileFragmented() throws Exception { IgfsEx impl = (IgfsEx)grid(0).fileSystem("igfs"); String metaCacheName = grid(0).igfsx("igfs").configuration().getMetaCacheConfiguration().getName(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsTaskSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsTaskSelfTest.java index f95907763b9e1..55e0fd1f9faa0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsTaskSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/IgfsTaskSelfTest.java @@ -40,9 +40,6 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.JobContextResource; import org.apache.ignite.resources.TaskSessionResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import java.io.IOException; import java.io.OutputStreamWriter; @@ -50,6 +47,9 @@ import java.util.Collections; import java.util.List; import java.util.Random; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -60,6 +60,7 @@ /** * Tests for {@link IgfsTask}. */ +@RunWith(JUnit4.class) public class IgfsTaskSelfTest extends IgfsCommonAbstractTest { /** Predefined words dictionary. */ private static final String[] DICTIONARY = new String[] {"word0", "word1", "word2", "word3", "word4", "word5", @@ -68,9 +69,6 @@ public class IgfsTaskSelfTest extends IgfsCommonAbstractTest { /** File path. */ private static final IgfsPath FILE = new IgfsPath("/file"); - /** Shared IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Block size: 64 Kb. */ private static final int BLOCK_SIZE = 64 * 1024; @@ -131,11 +129,6 @@ private IgniteConfiguration config(int idx) { IgniteConfiguration cfg = new IgniteConfiguration(); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); cfg.setFileSystemConfiguration(igfsCfg); cfg.setIgniteInstanceName("node-" + idx); @@ -149,6 +142,7 @@ private IgniteConfiguration config(int idx) { * @throws Exception If failed. */ @SuppressWarnings("ConstantConditions") + @Test public void testTask() throws Exception { String arg = DICTIONARY[new Random(System.currentTimeMillis()).nextInt(DICTIONARY.length)]; @@ -168,6 +162,7 @@ public void testTask() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("ConstantConditions") + @Test public void testTaskAsync() throws Exception { String arg = DICTIONARY[new Random(System.currentTimeMillis()).nextInt(DICTIONARY.length)]; @@ -287,4 +282,4 @@ private static class Job implements IgfsJob, Serializable { // No-op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsAbstractRecordResolverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsAbstractRecordResolverSelfTest.java index 0395e211ae997..750acfed84789 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsAbstractRecordResolverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsAbstractRecordResolverSelfTest.java @@ -30,9 +30,6 @@ import org.apache.ignite.igfs.IgfsPath; import org.apache.ignite.igfs.mapreduce.IgfsFileRange; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -48,9 +45,6 @@ public class IgfsAbstractRecordResolverSelfTest extends GridCommonAbstractTest { /** File path. */ protected static final IgfsPath FILE = new IgfsPath("/file"); - /** Shared IP finder. */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** IGFS. 
*/ protected static IgniteFileSystem igfs; @@ -84,11 +78,6 @@ public class IgfsAbstractRecordResolverSelfTest extends GridCommonAbstractTest { cfg.setIgniteInstanceName("grid"); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); cfg.setFileSystemConfiguration(igfsCfg); Ignite g = G.start(cfg); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsByteDelimiterRecordResolverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsByteDelimiterRecordResolverSelfTest.java index 9564a1899d911..d2e78c9434231 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsByteDelimiterRecordResolverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsByteDelimiterRecordResolverSelfTest.java @@ -21,16 +21,21 @@ import org.apache.ignite.igfs.mapreduce.IgfsFileRange; import org.apache.ignite.igfs.mapreduce.records.IgfsByteDelimiterRecordResolver; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Byte delimiter split resolver self test. */ +@RunWith(JUnit4.class) public class IgfsByteDelimiterRecordResolverSelfTest extends IgfsAbstractRecordResolverSelfTest { /** * Test split resolution when there are no delimiters in the file. * * @throws Exception If failed. */ + @Test public void testNoDelimiters() throws Exception { byte[] delim = wrap(2); byte[] data = array(F.t(wrap(1), 8)); @@ -47,6 +52,7 @@ public void testNoDelimiters() throws Exception { * * @throws Exception If failed. */ + @Test public void testHeadDelimiter() throws Exception { byte[] delim = array(F.t(wrap(2), 8)); byte[] data = array(F.t(delim, 1), F.t(wrap(1), 8)); @@ -73,6 +79,7 @@ public void testHeadDelimiter() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testEndDelimiter() throws Exception { byte[] delim = array(F.t(wrap(2), 8)); byte[] data = array(F.t(wrap(1), 8), F.t(delim, 1)); @@ -99,6 +106,7 @@ public void testEndDelimiter() throws Exception { * * @throws Exception If failed. */ + @Test public void testMiddleDelimiter() throws Exception { byte[] delim = array(F.t(wrap(2), 8)); byte[] data = array(F.t(wrap(1), 8), F.t(delim, 1), F.t(wrap(1), 8)); @@ -139,6 +147,7 @@ public void testMiddleDelimiter() throws Exception { * * @throws Exception If failed. */ + @Test public void testTwoHeadDelimiters() throws Exception { byte[] delim = array(F.t(wrap(2), 8)); byte[] data = array(F.t(delim, 2), F.t(wrap(1), 8)); @@ -179,6 +188,7 @@ public void testTwoHeadDelimiters() throws Exception { * * @throws Exception If failed. */ + @Test public void testTwoTailDelimiters() throws Exception { byte[] delim = array(F.t(wrap(2), 8)); byte[] data = array(F.t(wrap(1), 8), F.t(delim, 2)); @@ -219,6 +229,7 @@ public void testTwoTailDelimiters() throws Exception { * * @throws Exception If failed. */ + @Test public void testHeadAndTailDelimiters() throws Exception { byte[] delim = array(F.t(wrap(2), 8)); byte[] data = array(F.t(delim, 1), F.t(wrap(1), 8), F.t(delim, 1)); @@ -259,6 +270,7 @@ public void testHeadAndTailDelimiters() throws Exception { * * @throws Exception If failed. */ + @Test public void testDelimiterStartsWithTheSameBytesAsLastPreviousDataByte() throws Exception { byte[] delim = array(F.t(wrap(1, 1, 2), 1)); byte[] data = array(F.t(wrap(1), 1), F.t(delim, 1), F.t(wrap(1), 1)); @@ -332,4 +344,4 @@ public void assertSplitNull(long suggestedStart, long suggestedLen, byte[] data, private IgfsByteDelimiterRecordResolver resolver(byte[]... 
delims) { return new IgfsByteDelimiterRecordResolver(delims); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsFixedLengthRecordResolverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsFixedLengthRecordResolverSelfTest.java index d869d5c9cac7e..6e07f85905b84 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsFixedLengthRecordResolverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsFixedLengthRecordResolverSelfTest.java @@ -21,16 +21,21 @@ import org.apache.ignite.igfs.mapreduce.IgfsFileRange; import org.apache.ignite.igfs.mapreduce.records.IgfsFixedLengthRecordResolver; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Fixed length split resolver self test. */ +@RunWith(JUnit4.class) public class IgfsFixedLengthRecordResolverSelfTest extends IgfsAbstractRecordResolverSelfTest { /** * Test split resolver. * * @throws Exception If failed. 
*/ + @Test public void testResolver() throws Exception { byte[] data = array(F.t(wrap(1), 24)); @@ -144,4 +149,4 @@ public void assertSplitNull(long suggestedStart, long suggestedLen, byte[] data, private IgfsFixedLengthRecordResolver resolver(int len) { return new IgfsFixedLengthRecordResolver(len); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsNewLineDelimiterRecordResolverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsNewLineDelimiterRecordResolverSelfTest.java index d0050614945fe..8ea1afa46d3e3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsNewLineDelimiterRecordResolverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsNewLineDelimiterRecordResolverSelfTest.java @@ -21,6 +21,9 @@ import org.apache.ignite.igfs.mapreduce.IgfsFileRange; import org.apache.ignite.igfs.mapreduce.records.IgfsNewLineRecordResolver; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.igfs.mapreduce.records.IgfsNewLineRecordResolver.SYM_CR; import static org.apache.ignite.igfs.mapreduce.records.IgfsNewLineRecordResolver.SYM_LF; @@ -28,12 +31,14 @@ /** * New line split resolver self test. */ +@RunWith(JUnit4.class) public class IgfsNewLineDelimiterRecordResolverSelfTest extends IgfsAbstractRecordResolverSelfTest { /** * Test new line delimiter record resolver. * * @throws Exception If failed. 
*/ + @Test public void test() throws Exception{ byte[] data = array(F.t(wrap(1), 8), F.t(wrap(SYM_LF), 1), F.t(wrap(1), 8), F.t(wrap(SYM_CR, SYM_LF), 1), F.t(wrap(1), 8)); @@ -127,4 +132,4 @@ public void assertSplitNull(long suggestedStart, long suggestedLen, byte[] data) private IgfsNewLineRecordResolver resolver() { return IgfsNewLineRecordResolver.NEW_LINE; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsStringDelimiterRecordResolverSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsStringDelimiterRecordResolverSelfTest.java index cc4c2db6d979b..6539e5b342d35 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsStringDelimiterRecordResolverSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/igfs/split/IgfsStringDelimiterRecordResolverSelfTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.igfs.mapreduce.records.IgfsByteDelimiterRecordResolver; import org.apache.ignite.igfs.mapreduce.records.IgfsStringDelimiterRecordResolver; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * String delimiter split resolver self-test. */ +@RunWith(JUnit4.class) public class IgfsStringDelimiterRecordResolverSelfTest extends IgfsAbstractRecordResolverSelfTest { /** Charset used in tests. */ private static final Charset UTF8 = Charset.forName("UTF-8"); @@ -36,6 +40,7 @@ public class IgfsStringDelimiterRecordResolverSelfTest extends IgfsAbstractRecor * * @throws Exception If failed. */ + @Test public void testResolver() throws Exception { String delim = "aaaaaaaa"; @@ -134,4 +139,4 @@ public void assertSplitNull(long suggestedStart, long suggestedLen, byte[] data, private IgfsStringDelimiterRecordResolver resolver(String... 
delims) { return new IgfsStringDelimiterRecordResolver(UTF8, delims); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/jobmetrics/GridJobMetricsProcessorLoadTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/jobmetrics/GridJobMetricsProcessorLoadTest.java index c75f002340c79..1fab2355a37f2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/jobmetrics/GridJobMetricsProcessorLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/jobmetrics/GridJobMetricsProcessorLoadTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid job metrics processor load test. */ +@RunWith(JUnit4.class) public class GridJobMetricsProcessorLoadTest extends GridCommonAbstractTest { /** */ private static final int THREADS_CNT = 10; @@ -55,6 +59,7 @@ public GridJobMetricsProcessorLoadTest() { /** * @throws Exception if failed. 
*/ + @Test public void testJobMetricsMultiThreaded() throws Exception { GridTestUtils.runMultiThreaded(new Runnable() { @Override public void run() { @@ -86,4 +91,4 @@ public void testJobMetricsMultiThreaded() throws Exception { ctx.jobMetric().getJobMetrics(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/messaging/IgniteMessagingConfigVariationFullApiTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/messaging/IgniteMessagingConfigVariationFullApiTest.java index 33a8a7392cbff..339c9f22eaf96 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/messaging/IgniteMessagingConfigVariationFullApiTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/messaging/IgniteMessagingConfigVariationFullApiTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.cluster.ClusterGroup; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.testframework.junits.IgniteConfigVariationsAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * The test checks process messaging. */ +@RunWith(JUnit4.class) public class IgniteMessagingConfigVariationFullApiTest extends IgniteConfigVariationsAbstractTest { /** * Message topic. @@ -55,6 +59,7 @@ public class IgniteMessagingConfigVariationFullApiTest extends IgniteConfigVaria /** * @throws Exception If failed. */ + @Test public void testLocalServer() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -66,6 +71,7 @@ public void testLocalServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalServerAsync() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -77,6 +83,7 @@ public void testLocalServerAsync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLocalListener() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -88,6 +95,7 @@ public void testLocalListener() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerClientMessage() throws Exception { if (!testsCfg.withClients()) return; @@ -102,6 +110,7 @@ public void testServerClientMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerClientMessageAsync() throws Exception { if (!testsCfg.withClients()) return; @@ -116,6 +125,7 @@ public void testServerClientMessageAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientClientMessage() throws Exception { if (!testsCfg.withClients()) return; @@ -130,6 +140,7 @@ public void testClientClientMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientClientMessageAsync() throws Exception { if (!testsCfg.withClients()) return; @@ -144,6 +155,7 @@ public void testClientClientMessageAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientServerMessage() throws Exception { if (!testsCfg.withClients()) return; @@ -158,6 +170,7 @@ public void testClientServerMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientServerMessageAsync() throws Exception { if (!testsCfg.withClients()) return; @@ -172,6 +185,7 @@ public void testClientServerMessageAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollectionMessage() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -183,6 +197,7 @@ public void testCollectionMessage() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOrderedMessage() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -194,6 +209,7 @@ public void testOrderedMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientServerOrderedMessage() throws Exception { if (!testsCfg.withClients()) return; @@ -208,6 +224,7 @@ public void testClientServerOrderedMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientClientOrderedMessage() throws Exception { if (!testsCfg.withClients()) return; @@ -222,6 +239,7 @@ public void testClientClientOrderedMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerClientOrderedMessage() throws Exception { if (!testsCfg.withClients()) return; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcConfigurationValidationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcConfigurationValidationSelfTest.java index cabeb496857f2..93439d28e8630 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcConfigurationValidationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcConfigurationValidationSelfTest.java @@ -27,11 +27,15 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * ODBC configuration validation tests. */ @SuppressWarnings("deprecation") +@RunWith(JUnit4.class) public class OdbcConfigurationValidationSelfTest extends GridCommonAbstractTest { /** Node index generator. 
*/ private static final AtomicInteger NODE_IDX_GEN = new AtomicInteger(); @@ -46,6 +50,7 @@ public class OdbcConfigurationValidationSelfTest extends GridCommonAbstractTest * * @throws Exception If failed. */ + @Test public void testAddressDefault() throws Exception { check(new OdbcConfiguration(), true); } @@ -55,6 +60,7 @@ public void testAddressDefault() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddressHostOnly() throws Exception { check(new OdbcConfiguration().setEndpointAddress("127.0.0.1"), true); } @@ -64,6 +70,7 @@ public void testAddressHostOnly() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddressHostAndPort() throws Exception { check(new OdbcConfiguration().setEndpointAddress("127.0.0.1:9999"), true); @@ -76,6 +83,7 @@ public void testAddressHostAndPort() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddressHostAndPortRange() throws Exception { check(new OdbcConfiguration().setEndpointAddress("127.0.0.1:9999..10000"), true); check(new OdbcConfiguration().setEndpointAddress("127.0.0.1:9999..10000"), true); @@ -89,6 +97,7 @@ public void testAddressHostAndPortRange() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddressInvalidHost() throws Exception { check(new OdbcConfiguration().setEndpointAddress("126.0.0.1"), false); } @@ -98,6 +107,7 @@ public void testAddressInvalidHost() throws Exception { * * @throws Exception If failed. */ + @Test public void testAddressInvalidFormat() throws Exception { check(new OdbcConfiguration().setEndpointAddress("127.0.0.1:"), false); @@ -121,6 +131,7 @@ public void testAddressInvalidFormat() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testConnectionParams() throws Exception { check(new OdbcConfiguration().setEndpointAddress("127.0.0.1:9998..10000") .setSocketSendBufferSize(4 * 1024), true); @@ -140,6 +151,7 @@ public void testConnectionParams() throws Exception { * * @throws Exception If failed. */ + @Test public void testThreadPoolSize() throws Exception { check(new OdbcConfiguration().setThreadPoolSize(0), false); check(new OdbcConfiguration().setThreadPoolSize(-1), false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcEscapeSequenceSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcEscapeSequenceSelfTest.java index c08c40ca98c01..6df2ad5e715ad 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcEscapeSequenceSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/odbc/OdbcEscapeSequenceSelfTest.java @@ -23,14 +23,19 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Escape sequence parser tests. */ +@RunWith(JUnit4.class) public class OdbcEscapeSequenceSelfTest extends GridCommonAbstractTest { /** * Test simple cases. */ + @Test public void testTrivial() { check( "select * from table;", @@ -41,6 +46,7 @@ public void testTrivial() { /** * Test escape sequence series. */ + @Test public void testSimpleFunction() throws Exception { check( "test()", @@ -76,6 +82,7 @@ public void testSimpleFunction() throws Exception { /** * Test escape sequence for explicit data type conversion */ + @Test public void testConvertFunction() throws Exception { check( "CONVERT ( CURDATE(), CHAR )", @@ -207,6 +214,7 @@ public void testConvertFunction() throws Exception { /** * Test simple nested escape sequences. Depth = 2. 
*/ + @Test public void testNestedFunction() throws Exception { check( "func1(field1, func2(field2))", @@ -227,6 +235,7 @@ public void testNestedFunction() throws Exception { /** * Test nested escape sequences. Depth > 2. */ + @Test public void testDeepNestedFunction() { check( "func1(func2(func3(field1)))", @@ -252,6 +261,7 @@ public void testDeepNestedFunction() { /** * Test series of nested escape sequences. */ + @Test public void testNestedFunctionMixed() { check( "func1(func2(field1), func3(field2))", @@ -272,6 +282,7 @@ public void testNestedFunctionMixed() { /** * Test invalid escape sequence. */ + @Test public void testFailedOnInvalidFunctionSequence() { checkFail("{fnfunc1()}"); @@ -283,6 +294,7 @@ public void testFailedOnInvalidFunctionSequence() { /** * Test escape sequences with additional whitespace characters */ + @Test public void testFunctionEscapeSequenceWithWhitespaces() throws Exception { check("func1()", "{ fn func1()}"); @@ -296,6 +308,7 @@ public void testFunctionEscapeSequenceWithWhitespaces() throws Exception { /** * Test guid escape sequences */ + @Test public void testGuidEscapeSequence() { check( "CAST('12345678-9abc-def0-1234-123456789abc' AS UUID)", @@ -316,6 +329,7 @@ public void testGuidEscapeSequence() { /** * Test invalid escape sequence. 
*/ + @Test public void testFailedOnInvalidGuidSequence() { checkFail("select {guid'12345678-9abc-def0-1234-123456789abc'}"); @@ -341,6 +355,7 @@ public void testFailedOnInvalidGuidSequence() { /** * Test escape sequences with additional whitespace characters */ + @Test public void testGuidEscapeSequenceWithWhitespaces() throws Exception { check( "CAST('12345678-9abc-def0-1234-123456789abc' AS UUID)", @@ -361,6 +376,7 @@ public void testGuidEscapeSequenceWithWhitespaces() throws Exception { /** * Test date escape sequences */ + @Test public void testDateEscapeSequence() throws Exception { check( "'2016-08-26'", @@ -381,6 +397,7 @@ public void testDateEscapeSequence() throws Exception { /** * Test date escape sequences with additional whitespace characters */ + @Test public void testDateEscapeSequenceWithWhitespaces() throws Exception { check("'2016-08-26'", "{ d '2016-08-26'}"); @@ -392,6 +409,7 @@ public void testDateEscapeSequenceWithWhitespaces() throws Exception { /** * Test invalid escape sequence. */ + @Test public void testFailedOnInvalidDateSequence() { checkFail("{d'2016-08-26'}"); @@ -411,6 +429,7 @@ public void testFailedOnInvalidDateSequence() { /** * Test date escape sequences */ + @Test public void testTimeEscapeSequence() throws Exception { check("'13:15:08'", "{t '13:15:08'}"); @@ -423,6 +442,7 @@ public void testTimeEscapeSequence() throws Exception { /** * Test date escape sequences with additional whitespace characters */ + @Test public void testTimeEscapeSequenceWithWhitespaces() throws Exception { check("'13:15:08'", "{ t '13:15:08'}"); @@ -434,6 +454,7 @@ public void testTimeEscapeSequenceWithWhitespaces() throws Exception { /** * Test invalid escape sequence. 
*/ + @Test public void testFailedOnInvalidTimeSequence() { checkFail("{t'13:15:08'}"); @@ -453,6 +474,7 @@ public void testFailedOnInvalidTimeSequence() { /** * Test timestamp escape sequences */ + @Test public void testTimestampEscapeSequence() throws Exception { check( "'2016-08-26 13:15:08'", @@ -478,6 +500,7 @@ public void testTimestampEscapeSequence() throws Exception { /** * Test timestamp escape sequences with additional whitespace characters */ + @Test public void testTimestampEscapeSequenceWithWhitespaces() throws Exception { check("'2016-08-26 13:15:08'", "{ ts '2016-08-26 13:15:08'}" @@ -495,6 +518,7 @@ public void testTimestampEscapeSequenceWithWhitespaces() throws Exception { /** * Test invalid escape sequence. */ + @Test public void testFailedOnInvalidTimestampSequence() { checkFail("{ts '2016-08-26 13:15:08,12345'}"); @@ -523,6 +547,7 @@ public void testFailedOnInvalidTimestampSequence() { /** * Test escape sequence series. */ + @Test public void testOuterJoinFunction() throws Exception { check( "t OUTER JOIN t2 ON t.id=t2.id", @@ -543,6 +568,7 @@ public void testOuterJoinFunction() throws Exception { /** * Test simple nested escape sequences. Depth = 2. */ + @Test public void testNestedOuterJoin() throws Exception { check( "t OUTER JOIN (t2 OUTER JOIN t3 ON t2.id=t3.id) ON t.id=t2.id", @@ -563,6 +589,7 @@ public void testNestedOuterJoin() throws Exception { /** * Test nested escape sequences. Depth > 2. */ + @Test public void testDeepNestedOuterJoin() { check( "t OUTER JOIN (t2 OUTER JOIN (t3 OUTER JOIN t4 ON t3.id=t4.id) ON t2.id=t3.id) ON t.id=t2.id", @@ -588,6 +615,7 @@ public void testDeepNestedOuterJoin() { /** * Test invalid escape sequence. 
*/ + @Test public void testFailedOnInvalidOuterJoinSequence() { checkFail("{ojt OUTER JOIN t2 ON t.id=t2.id}"); @@ -599,6 +627,7 @@ public void testFailedOnInvalidOuterJoinSequence() { /** * Test escape sequences with additional whitespace characters */ + @Test public void testOuterJoinSequenceWithWhitespaces() throws Exception { check( "t OUTER JOIN t2 ON t.id=t2.id", @@ -619,6 +648,7 @@ public void testOuterJoinSequenceWithWhitespaces() throws Exception { /** * Test non-escape sequences. */ + @Test public void testNonEscapeSequence() throws Exception { check("'{fn test()}'", "'{fn test()}'"); @@ -676,6 +706,7 @@ public void testNonEscapeSequence() throws Exception { /** * Test escape sequence series. */ + @Test public void testSimpleCallProc() throws Exception { check( "CALL test()", @@ -711,6 +742,7 @@ public void testSimpleCallProc() throws Exception { /** * Test simple nested escape sequences. Depth = 2. */ + @Test public void testNestedCallProc() throws Exception { check( "CALL func1(field1, CALL func2(field2))", @@ -731,6 +763,7 @@ public void testNestedCallProc() throws Exception { /** * Test nested escape sequences. Depth > 2. */ + @Test public void testDeepNestedCallProc() { check( "CALL func1(CALL func2(CALL func3(field1)))", @@ -756,6 +789,7 @@ public void testDeepNestedCallProc() { /** * Test series of nested escape sequences. */ + @Test public void testNestedCallProcMixed() { check( "CALL func1(CALL func2(field1), CALL func3(field2))", @@ -776,6 +810,7 @@ public void testNestedCallProcMixed() { /** * Test invalid escape sequence. 
*/ + @Test public void testFailedOnInvalidCallProcSequence() { checkFail("{callfunc1()}"); @@ -787,6 +822,7 @@ public void testFailedOnInvalidCallProcSequence() { /** * Test escape sequences with additional whitespace characters */ + @Test public void testCallProcEscapeSequenceWithWhitespaces() throws Exception { check("CALL func1()", "{ call func1()}"); @@ -800,6 +836,7 @@ public void testCallProcEscapeSequenceWithWhitespaces() throws Exception { /** * Test escape sequence series. */ + @Test public void testLikeEscapeSequence() throws Exception { check( "ESCAPE '\\'", @@ -850,6 +887,7 @@ public void testLikeEscapeSequence() throws Exception { /** * Test escape sequences with additional whitespace characters */ + @Test public void testLikeEscapeSequenceWithWhitespaces() throws Exception { check("ESCAPE '\\'", "{ '\\' }"); check("ESCAPE '\\'", "{ escape '\\'}"); @@ -864,6 +902,7 @@ public void testLikeEscapeSequenceWithWhitespaces() throws Exception { /** * Test invalid escape sequence. */ + @Test public void testLikeOnInvalidLikeEscapeSequence() { checkFail("LIKE 'AAA's'"); checkFail("LIKE 'AAA\'s'"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/port/GridPortProcessorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/port/GridPortProcessorSelfTest.java index 9d5d52bd4c29b..5b6a2acb0da02 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/port/GridPortProcessorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/port/GridPortProcessorSelfTest.java @@ -22,6 +22,9 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.spi.IgnitePortProtocol.TCP; import static 
org.apache.ignite.spi.IgnitePortProtocol.UDP; @@ -29,6 +32,7 @@ /** * */ +@RunWith(JUnit4.class) public class GridPortProcessorSelfTest extends GridCommonAbstractTest { /** */ private GridTestKernalContext ctx; @@ -52,6 +56,7 @@ public class GridPortProcessorSelfTest extends GridCommonAbstractTest { /** * @throws Exception If any exception occurs. */ + @Test public void testA() throws Exception { Class cls1 = TcpCommunicationSpi.class; @@ -81,6 +86,7 @@ public void testA() throws Exception { /** * @throws Exception If any exception occurs. */ + @Test public void testB() throws Exception { final AtomicInteger ai = new AtomicInteger(); @@ -126,4 +132,4 @@ public void testB() throws Exception { assertEquals(i, ai.get()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/cache/GridCacheCommandHandlerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/cache/GridCacheCommandHandlerSelfTest.java index c94ebd8003c02..08ec39fe831d8 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/cache/GridCacheCommandHandlerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/cache/GridCacheCommandHandlerSelfTest.java @@ -47,10 +47,14 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import javax.cache.processor.EntryProcessorException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests command handler directly. */ +@RunWith(JUnit4.class) public class GridCacheCommandHandlerSelfTest extends GridCommonAbstractTest { /** * Constructor. @@ -102,6 +106,7 @@ protected CacheAtomicityMode atomicityMode(){ * * @throws Exception If failed. 
*/ + @Test public void testCacheGetFailsSyncNotify() throws Exception { GridRestCommandHandler hnd = new TestableCacheCommandHandler(grid().context(), "getAsync"); @@ -128,7 +133,7 @@ public void testCacheGetFailsSyncNotify() throws Exception { * * @throws Exception In case of any exception. */ - @SuppressWarnings("NullableProblems") + @Test public void testAppendPrepend() throws Exception { assertEquals("as" + "df", testAppend("as", "df", true)); assertEquals("df" + "as", testAppend("as", "df", false)); @@ -217,6 +222,7 @@ private T testAppend(T curVal, T newVal, boolean append) throws IgniteChecke * * @throws Exception If failed. */ + @Test public void testCacheClear() throws Exception { GridRestCommandHandler hnd = new GridCacheCommandHandler(((IgniteKernal)grid()).context()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/log/GridLogCommandHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/log/GridLogCommandHandlerTest.java index 7f0a6de4144d7..bb31ce25c429c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/log/GridLogCommandHandlerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/log/GridLogCommandHandlerTest.java @@ -31,10 +31,14 @@ import org.apache.ignite.internal.processors.rest.request.GridRestLogRequest; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * REST log command handler tests. */ +@RunWith(JUnit4.class) public class GridLogCommandHandlerTest extends GridCommonAbstractTest { /** */ private String igniteHome = System.getProperty("user.dir"); @@ -69,6 +73,7 @@ public class GridLogCommandHandlerTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testSupportedCommands() throws Exception { GridLogCommandHandler cmdHandler = new GridLogCommandHandler(newContext()); @@ -81,6 +86,7 @@ public void testSupportedCommands() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUnSupportedCommands() throws Exception { GridLogCommandHandler cmdHandler = new GridLogCommandHandler(newContext()); @@ -93,6 +99,7 @@ public void testUnSupportedCommands() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHandleAsync() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteHome(igniteHome); @@ -114,6 +121,7 @@ public void testHandleAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHandleAsyncForNonExistingLines() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteHome(igniteHome); @@ -135,6 +143,7 @@ public void testHandleAsyncForNonExistingLines() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHandleAsyncFromAndToNotSet() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteHome(igniteHome); @@ -153,6 +162,7 @@ public void testHandleAsyncFromAndToNotSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHandleAsyncPathNotSet() throws Exception { GridTestKernalContext ctx = newContext(); ctx.config().setIgniteHome(igniteHome); @@ -172,6 +182,7 @@ public void testHandleAsyncPathNotSet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHandleAsyncPathIsOutsideIgniteHome() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteHome(igniteHome); @@ -193,6 +204,7 @@ public void testHandleAsyncPathIsOutsideIgniteHome() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testHandleAsyncFromGreaterThanTo() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteHome(igniteHome); @@ -214,6 +226,7 @@ public void testHandleAsyncFromGreaterThanTo() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHandleAsyncFromEqualTo() throws Exception { IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteHome(igniteHome); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/query/GridQueryCommandHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/query/GridQueryCommandHandlerTest.java index f0ff25a2c72f3..0d8fe5c7a4d51 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/query/GridQueryCommandHandlerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/query/GridQueryCommandHandlerTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.Collection; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * REST query command handler tests. */ +@RunWith(JUnit4.class) public class GridQueryCommandHandlerTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -51,6 +55,7 @@ public class GridQueryCommandHandlerTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSupportedCommands() throws Exception { GridTestKernalContext ctx = newContext(grid().configuration()); @@ -72,6 +77,7 @@ public void testSupportedCommands() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testUnsupportedCommands() throws Exception { GridTestKernalContext ctx = newContext(grid().configuration()); @@ -87,6 +93,7 @@ public void testUnsupportedCommands() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNullCache() throws Exception { QueryCommandHandler cmdHnd = new QueryCommandHandler(grid().context()); @@ -116,6 +123,7 @@ public void testNullCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNullPageSize() throws Exception { grid().getOrCreateCache(getName()); @@ -151,6 +159,7 @@ public void testNullPageSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQuery() throws Exception { grid().getOrCreateCache(getName()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/top/CacheTopologyCommandHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/top/CacheTopologyCommandHandlerTest.java index 5d7dfd709c210..cebd96d1b8706 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/top/CacheTopologyCommandHandlerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/rest/handlers/top/CacheTopologyCommandHandlerTest.java @@ -33,10 +33,14 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheTopologyCommandHandlerTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration() throws Exception { @@ -70,6 +74,7 @@ public class CacheTopologyCommandHandlerTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testTopologyCommandOnDynamicCacheCreateDestroy() throws Exception { GridRestTopologyRequest req = new GridRestTopologyRequest(); req.command(GridRestCommand.TOPOLOGY); @@ -80,6 +85,7 @@ public void testTopologyCommandOnDynamicCacheCreateDestroy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeCommandOnDynamicCacheCreateDestroy1() throws Exception { Ignite node = startGrid(); @@ -93,6 +99,7 @@ public void testNodeCommandOnDynamicCacheCreateDestroy1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeCommandOnDynamicCacheCreateDestroy2() throws Exception { Ignite node = startGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ClosureServiceClientsNodesTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ClosureServiceClientsNodesTest.java index edb01822996e0..9715288f2d983 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ClosureServiceClientsNodesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ClosureServiceClientsNodesTest.java @@ -36,15 +36,16 @@ import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; import org.apache.ignite.services.ServiceDescriptor; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that compute and service run only on server nodes by default. 
*/ +@RunWith(JUnit4.class) public class ClosureServiceClientsNodesTest extends GridCommonAbstractTest { /** Number of grids started for tests. */ private static final int NODES_CNT = 4; @@ -55,17 +56,12 @@ public class ClosureServiceClientsNodesTest extends GridCommonAbstractTest { /** Test singleton service name. */ private static final String SINGLETON_NAME = "testSingleton"; - /** IP finder. */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setMarshaller(new BinaryMarshaller()); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setCacheConfiguration(); if (igniteInstanceName.equals(getTestIgniteInstanceName(CLIENT_IDX))) @@ -83,6 +79,7 @@ public class ClosureServiceClientsNodesTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testDefaultClosure() throws Exception { Set srvNames = new HashSet<>(NODES_CNT - 1); @@ -117,6 +114,7 @@ public void testDefaultClosure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientClosure() throws Exception { for (int i = 0 ; i < NODES_CNT; i++) { log.info("Iteration: " + i); @@ -144,6 +142,7 @@ public void testClientClosure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomClosure() throws Exception { for (int i = 0 ; i < NODES_CNT; i++) { log.info("Iteration: " + i); @@ -167,6 +166,7 @@ public void testCustomClosure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDefaultService() throws Exception { UUID clientNodeId = grid(CLIENT_IDX).cluster().localNode().id(); @@ -208,6 +208,7 @@ public void testDefaultService() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClientService() throws Exception { UUID clientNodeId = grid(CLIENT_IDX).cluster().localNode().id(); @@ -268,4 +269,4 @@ private static class TestService implements Service { log.info("Executing test service."); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceClientNodeTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceClientNodeTest.java index 1d6cbaeac590e..cf56a2d6b2016 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceClientNodeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceClientNodeTest.java @@ -22,18 +22,16 @@ import org.apache.ignite.Ignite; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridServiceClientNodeTest extends GridCommonAbstractTest { - /** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -41,8 +39,6 @@ public class GridServiceClientNodeTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientFailureDetectionTimeout(30000); cfg.setClientMode(client); @@ -61,6 +57,7 @@ public class 
GridServiceClientNodeTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testDeployFromClient() throws Exception { startGrids(3); @@ -74,6 +71,7 @@ public void testDeployFromClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployFromClientAfterRouterStop1() throws Exception { startGrid(0); @@ -102,6 +100,7 @@ public void testDeployFromClientAfterRouterStop1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployFromClientAfterRouterStop2() throws Exception { startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceContinuousQueryRedeployTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceContinuousQueryRedeployTest.java index 2437b47f70e5f..e4157e2290fbf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceContinuousQueryRedeployTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceContinuousQueryRedeployTest.java @@ -39,11 +39,15 @@ import org.apache.ignite.services.ServiceContext; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests whether concurrent service cancel and registering ContinuousQuery doesn't causes * service redeployment. */ +@RunWith(JUnit4.class) public class GridServiceContinuousQueryRedeployTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "TEST_CACHE"; @@ -62,6 +66,7 @@ public class GridServiceContinuousQueryRedeployTest extends GridCommonAbstractTe /** * @throws Exception If failed. 
*/ + @Test public void testServiceRedeploymentAfterCancel() throws Exception { final Ignite ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentCompoundFutureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentCompoundFutureSelfTest.java index ad93c6aabb5d0..f2563d2741717 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentCompoundFutureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentCompoundFutureSelfTest.java @@ -28,12 +28,17 @@ import org.apache.ignite.services.ServiceConfiguration; import org.apache.ignite.services.ServiceDeploymentException; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class GridServiceDeploymentCompoundFutureSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testWaitForCompletionOnFailingFuture() throws Exception { GridServiceDeploymentCompoundFuture compFut = new GridServiceDeploymentCompoundFuture(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentExceptionPropagationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentExceptionPropagationTest.java index d987ce6264f7c..d5df8281325fb 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentExceptionPropagationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceDeploymentExceptionPropagationTest.java @@ -23,11 +23,16 @@ import org.apache.ignite.services.ServiceContext; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class GridServiceDeploymentExceptionPropagationTest extends GridCommonAbstractTest { /** */ @SuppressWarnings("unused") + @Test public void testExceptionPropagation() throws Exception { try (Ignite srv = startGrid("server")) { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServicePackagePrivateSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServicePackagePrivateSelfTest.java index c085192fdb0ca..c87b8de8b1558 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServicePackagePrivateSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServicePackagePrivateSelfTest.java @@ -22,14 +22,19 @@ import org.apache.ignite.internal.processors.service.inner.MyService; import org.apache.ignite.internal.processors.service.inner.MyServiceFactory; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for package-private service implementation. */ +@RunWith(JUnit4.class) public class GridServicePackagePrivateSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPackagePrivateService() throws Exception { try { Ignite server = startGrid("server"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorAbstractSelfTest.java index 8e8a2fee42f23..7f546ea917e21 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorAbstractSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorAbstractSelfTest.java @@ -41,24 +41,22 @@ import org.apache.ignite.services.ServiceConfiguration; import org.apache.ignite.services.ServiceContext; import org.apache.ignite.services.ServiceDescriptor; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link GridAffinityProcessor}. */ @GridCommonTest(group = "Service Processor") +@RunWith(JUnit4.class) public abstract class GridServiceProcessorAbstractSelfTest extends GridCommonAbstractTest { /** Cache name. */ public static final String CACHE_NAME = "testServiceCache"; - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Random generator. */ private static final Random RAND = new Random(); @@ -66,12 +64,6 @@ public abstract class GridServiceProcessorAbstractSelfTest extends GridCommonAbs @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - c.setDiscoverySpi(discoSpi); - ServiceConfiguration[] svcs = services(); if (svcs != null) @@ -155,6 +147,7 @@ protected Ignite randomGrid() { /** * @throws Exception If failed. */ + @Test public void testSameConfigurationOld() throws Exception { String name = "dupServiceOld"; @@ -184,6 +177,7 @@ public void testSameConfigurationOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSameConfiguration() throws Exception { String name = "dupServiceOld"; @@ -209,6 +203,7 @@ public void testSameConfiguration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDifferentConfigurationOld() throws Exception { String name = "dupServiceOld"; @@ -242,6 +237,7 @@ public void testDifferentConfigurationOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDifferentConfiguration() throws Exception { String name = "dupService"; @@ -271,6 +267,7 @@ public void testDifferentConfiguration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetServiceByName() throws Exception { String name = "serviceByName"; @@ -290,6 +287,7 @@ public void testGetServiceByName() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetServicesByName() throws Exception { final String name = "servicesByName"; @@ -316,6 +314,7 @@ public void testGetServicesByName() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDeployOnEachNodeOld() throws Exception { Ignite g = randomGrid(); @@ -348,6 +347,7 @@ public void testDeployOnEachNodeOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOnEachNode() throws Exception { Ignite g = randomGrid(); @@ -376,6 +376,7 @@ public void testDeployOnEachNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeploySingletonOld() throws Exception { Ignite g = randomGrid(); @@ -408,6 +409,7 @@ public void testDeploySingletonOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeploySingleton() throws Exception { Ignite g = randomGrid(); @@ -436,6 +438,7 @@ public void testDeploySingleton() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityDeployOld() throws Exception { Ignite g = randomGrid(); @@ -465,6 +468,7 @@ public void testAffinityDeployOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityDeploy() throws Exception { Ignite g = randomGrid(); @@ -490,6 +494,7 @@ public void testAffinityDeploy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployMultiple1Old() throws Exception { Ignite g = randomGrid(); @@ -522,6 +527,7 @@ public void testDeployMultiple1Old() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployMultiple1() throws Exception { Ignite g = randomGrid(); @@ -550,6 +556,7 @@ public void testDeployMultiple1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployMultiple2Old() throws Exception { Ignite g = randomGrid(); @@ -584,6 +591,7 @@ public void testDeployMultiple2Old() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployMultiple2() throws Exception { Ignite g = randomGrid(); @@ -614,6 +622,7 @@ public void testDeployMultiple2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCancelSingleton() throws Exception { Ignite g = randomGrid(); @@ -649,6 +658,7 @@ public void testCancelSingleton() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCancelSingletonAsync() throws Exception { Ignite g = randomGrid(); @@ -684,6 +694,7 @@ public void testCancelSingletonAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCancelEachNode() throws Exception { Ignite g = randomGrid(); @@ -719,6 +730,7 @@ public void testCancelEachNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCancelAsyncEachNode() throws Exception { Ignite g = randomGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorBatchDeploySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorBatchDeploySelfTest.java index f0e2e71595c5f..b77144cf0e9fd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorBatchDeploySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorBatchDeploySelfTest.java @@ -29,7 +29,6 @@ import java.util.concurrent.atomic.AtomicBoolean; import org.apache.ignite.Ignite; import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteNodeAttributes; import org.apache.ignite.lang.IgniteFuture; @@ -37,16 +36,18 @@ import org.apache.ignite.services.ServiceConfiguration; import org.apache.ignite.services.ServiceDeploymentException; import org.apache.ignite.services.ServiceDescriptor; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.runAsync; /** * Test for deployment of multiple configurations at a time. */ +@RunWith(JUnit4.class) public class GridServiceProcessorBatchDeploySelfTest extends GridCommonAbstractTest { /** Number of services to be deployed. */ private static final int NUM_SERVICES = 100; @@ -57,22 +58,6 @@ public class GridServiceProcessorBatchDeploySelfTest extends GridCommonAbstractT /** Client node name. */ private static final String CLIENT_NODE_NAME = "client"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - c.setDiscoverySpi(discoSpi); - - return c; - } - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { for (int i = 0; i < NUM_NODES; i++) @@ -91,6 +76,7 @@ public class GridServiceProcessorBatchDeploySelfTest extends GridCommonAbstractT /** * @throws Exception If failed. */ + @Test public void testDeployAll() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); @@ -110,6 +96,7 @@ public void testDeployAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployAllAsync() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); @@ -129,11 +116,10 @@ public void testDeployAllAsync() throws Exception { } /** - * TODO: enable when IGNITE-6259 is fixed. - * * @throws Exception If failed. 
*/ - public void _testDeployAllTopologyChange() throws Exception { + @Test + public void testDeployAllTopologyChange() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); final AtomicBoolean finished = new AtomicBoolean(); @@ -141,7 +127,7 @@ public void _testDeployAllTopologyChange() throws Exception { IgniteInternalFuture topChangeFut = runTopChanger(finished); try { - int numServices = 500; + int numServices = 50; int batchSize = 5; CountDownLatch latch = new CountDownLatch(numServices); @@ -171,7 +157,7 @@ public void _testDeployAllTopologyChange() throws Exception { from = to; } - assertTrue(latch.await(30, TimeUnit.SECONDS)); + assertTrue(latch.await(120, TimeUnit.SECONDS)); assertDeployedServices(client, cfgs); } @@ -183,11 +169,10 @@ public void _testDeployAllTopologyChange() throws Exception { } /** - * TODO: enable when IGNITE-6259 is fixed. - * * @throws Exception If failed. */ - public void _testDeployAllTopologyChangeFail() throws Exception { + @Test + public void testDeployAllTopologyChangeFail() throws Exception { final Ignite client = grid(CLIENT_NODE_NAME); final AtomicBoolean finished = new AtomicBoolean(); @@ -195,7 +180,7 @@ public void _testDeployAllTopologyChangeFail() throws Exception { IgniteInternalFuture topChangeFut = runTopChanger(finished); try { - int numServices = 500; + int numServices = 200; int batchSize = 5; CountDownLatch latch = new CountDownLatch(numServices); @@ -248,7 +233,7 @@ public void _testDeployAllTopologyChangeFail() throws Exception { from = to; } - assertTrue(latch.await(30, TimeUnit.SECONDS)); + assertTrue(latch.await(120, TimeUnit.SECONDS)); cfgs.removeAll(failingCfgs); @@ -264,6 +249,7 @@ public void _testDeployAllTopologyChangeFail() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployAllFail() throws Exception { deployAllFail(false); } @@ -271,6 +257,7 @@ public void testDeployAllFail() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDeployAllAsyncFail() throws Exception { deployAllFail(true); } @@ -302,6 +289,7 @@ private void deployAllFail(boolean async) throws Exception { /** * @throws Exception If failed. */ + @Test public void testClashingNames() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); @@ -328,6 +316,7 @@ public void testClashingNames() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClashingNamesFail() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); @@ -360,6 +349,7 @@ public void testClashingNamesFail() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClashingNameDifferentConfig() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); @@ -394,6 +384,7 @@ public void testClashingNameDifferentConfig() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCancelAll() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); @@ -415,6 +406,7 @@ public void testCancelAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCancelAllAsync() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); @@ -436,11 +428,11 @@ public void testCancelAllAsync() throws Exception { } /** - * TODO: enable when IGNITE-6259 is fixed. - * * @throws Exception If failed. */ - public void _testCancelAllTopologyChange() throws Exception { + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10021") + @Test + public void testCancelAllTopologyChange() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); int numServices = 500; @@ -490,6 +482,7 @@ public void _testCancelAllTopologyChange() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCancelAllClashingNames() throws Exception { Ignite client = grid(CLIENT_NODE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeConfigSelfTest.java index e43d02934ddc2..d328db2ba201f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeConfigSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeConfigSelfTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.services.ServiceConfiguration; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Single node services test. */ +@RunWith(JUnit4.class) public class GridServiceProcessorMultiNodeConfigSelfTest extends GridServiceProcessorAbstractSelfTest { /** Cluster singleton name. */ private static final String CLUSTER_SINGLE = "serviceConfigSingleton"; @@ -144,6 +148,7 @@ public class GridServiceProcessorMultiNodeConfigSelfTest extends GridServiceProc /** * @throws Exception If failed. */ + @Test public void testSingletonUpdateTopology() throws Exception { checkSingletonUpdateTopology(CLUSTER_SINGLE); } @@ -151,6 +156,7 @@ public void testSingletonUpdateTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOnEachNodeUpdateTopology() throws Exception { checkDeployOnEachNodeUpdateTopology(NODE_SINGLE); } @@ -158,6 +164,7 @@ public void testDeployOnEachNodeUpdateTopology() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDeployOnEachNodeButClientUpdateTopology() throws Exception { checkDeployOnEachNodeButClientUpdateTopology(NODE_SINGLE_BUT_CLIENT); } @@ -165,6 +172,7 @@ public void testDeployOnEachNodeButClientUpdateTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAll() throws Exception { checkSingletonUpdateTopology(CLUSTER_SINGLE); @@ -182,6 +190,7 @@ public void testAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityUpdateTopology() throws Exception { Ignite g = randomGrid(); @@ -204,6 +213,7 @@ public void testAffinityUpdateTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployLimits() throws Exception { final Ignite g = randomGrid(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeSelfTest.java index 517f061d32323..ed331fa18dea0 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorMultiNodeSelfTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Single node services test. */ +@RunWith(JUnit4.class) public class GridServiceProcessorMultiNodeSelfTest extends GridServiceProcessorAbstractSelfTest { /** {@inheritDoc} */ @Override protected int nodeCount() { @@ -38,6 +42,7 @@ public class GridServiceProcessorMultiNodeSelfTest extends GridServiceProcessorA /** * @throws Exception If failed. 
*/ + @Test public void testSingletonUpdateTopology() throws Exception { String name = "serviceSingletonUpdateTopology"; @@ -82,6 +87,7 @@ public void testSingletonUpdateTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAffinityDeployUpdateTopology() throws Exception { Ignite g = randomGrid(); @@ -120,6 +126,7 @@ public void testAffinityDeployUpdateTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOnEachNodeButClientUpdateTopology() throws Exception { // Prestart client node. Ignite client = startGrid("client", getConfiguration("client").setClientMode(true)); @@ -185,6 +192,7 @@ public void testDeployOnEachNodeButClientUpdateTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOnEachProjectionNodeUpdateTopology() throws Exception { // Prestart client node. Ignite client = startGrid("client", getConfiguration("client").setClientMode(true)); @@ -252,6 +260,7 @@ public void testDeployOnEachProjectionNodeUpdateTopology() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOnEachNodeUpdateTopology() throws Exception { // Prestart client node. Ignite client = startGrid("client", getConfiguration("client").setClientMode(true)); @@ -329,6 +338,7 @@ public void testDeployOnEachNodeUpdateTopology() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("deprecation") + @Test public void testDeployLimits() throws Exception { final String name = "serviceWithLimitsUpdateTopology"; @@ -390,4 +400,4 @@ public void testDeployLimits() throws Exception { stopExtraNodes(extraNodes); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorProxySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorProxySelfTest.java index 97d5f05e9e52b..0a5c7beb57716 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorProxySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorProxySelfTest.java @@ -29,10 +29,14 @@ import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Service proxy test. */ +@RunWith(JUnit4.class) public class GridServiceProcessorProxySelfTest extends GridServiceProcessorAbstractSelfTest { /** {@inheritDoc} */ @Override protected int nodeCount() { @@ -42,6 +46,7 @@ public class GridServiceProcessorProxySelfTest extends GridServiceProcessorAbstr /** * @throws Exception If failed. */ + @Test public void testNodeSingletonProxy() throws Exception { String name = "testNodeSingletonProxy"; @@ -74,6 +79,7 @@ public void testNodeSingletonProxy() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("ThrowableNotThrown") + @Test public void testException() throws Exception { String name = "errorService"; @@ -96,6 +102,7 @@ public void testException() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClusterSingletonProxy() throws Exception { String name = "testClusterSingletonProxy"; @@ -114,6 +121,7 @@ public void testClusterSingletonProxy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultiNodeProxy() throws Exception { Ignite ignite = randomGrid(); @@ -139,6 +147,7 @@ public void testMultiNodeProxy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeSingletonRemoteNotStickyProxy() throws Exception { String name = "testNodeSingletonRemoteNotStickyProxy"; @@ -177,6 +186,7 @@ public void testNodeSingletonRemoteNotStickyProxy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeSingletonRemoteStickyProxy() throws Exception { String name = "testNodeSingletonRemoteStickyProxy"; @@ -212,6 +222,7 @@ public void testNodeSingletonRemoteStickyProxy() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingletonProxyInvocation() throws Exception { final String name = "testProxyInvocationFromSeveralNodes"; @@ -234,6 +245,7 @@ public void testSingletonProxyInvocation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalProxyInvocation() throws Exception { final String name = "testLocalProxyInvocation"; @@ -274,6 +286,7 @@ public void testLocalProxyInvocation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRemoteNotStickProxyInvocation() throws Exception { final String name = "testRemoteNotStickProxyInvocation"; @@ -309,6 +322,7 @@ public void testRemoteNotStickProxyInvocation() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRemoteStickyProxyInvocation() throws Exception { final String name = "testRemoteStickyProxyInvocation"; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorSingleNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorSingleNodeSelfTest.java index 202b4b6316e39..e2e5a5d62f318 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorSingleNodeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorSingleNodeSelfTest.java @@ -19,10 +19,14 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Single node services test. */ +@RunWith(JUnit4.class) public class GridServiceProcessorSingleNodeSelfTest extends GridServiceProcessorAbstractSelfTest { /** {@inheritDoc} */ @Override protected int nodeCount() { @@ -33,6 +37,7 @@ public class GridServiceProcessorSingleNodeSelfTest extends GridServiceProcessor /** * @throws Exception If failed. 
*/ + @Test public void testNodeSingletonNotDeployedProxy() throws Exception { String name = "testNodeSingletonNotDeployedProxy"; @@ -55,4 +60,4 @@ public void testNodeSingletonNotDeployedProxy() throws Exception { info("Got expected exception: " + e.getMessage()); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorStopSelfTest.java index 8eefa20454883..d17b0150e29f9 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProcessorStopSelfTest.java @@ -37,10 +37,14 @@ import org.apache.ignite.services.ServiceContext; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests that {@link GridServiceProcessor} completes deploy/undeploy futures during node stop. */ +@RunWith(JUnit4.class) public class GridServiceProcessorStopSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { @@ -52,6 +56,7 @@ public class GridServiceProcessorStopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStopDuringDeployment() throws Exception { final CountDownLatch depLatch = new CountDownLatch(1); @@ -98,6 +103,7 @@ public void testStopDuringDeployment() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStopDuringHangedDeployment() throws Exception { final CountDownLatch depLatch = new CountDownLatch(1); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyClientReconnectSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyClientReconnectSelfTest.java index 5fdcdc49e5224..2e665faf70286 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyClientReconnectSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyClientReconnectSelfTest.java @@ -26,24 +26,20 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Service proxy test with client reconnect. */ +@RunWith(JUnit4.class) public class GridServiceProxyClientReconnectSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setClientMode(igniteInstanceName.contains("client")); return cfg; @@ -57,6 +53,7 @@ public class GridServiceProxyClientReconnectSelfTest extends GridCommonAbstractT /** * @throws Exception If failed. 
*/ + @Test public void testClientReconnect() throws Exception { startGrid("server"); @@ -82,7 +79,7 @@ public void testClientReconnect() throws Exception { startGrid("server"); - assertTrue(latch.await(10, TimeUnit.SECONDS)); + assertTrue(latch.await(12, TimeUnit.SECONDS)); client.services().deployClusterSingleton("my-service", new MyServiceImpl()); @@ -93,6 +90,7 @@ public void testClientReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectLongServiceInit() throws Exception { startGrid("server"); @@ -118,7 +116,7 @@ public void testClientReconnectLongServiceInit() throws Exception { startGrid("server"); - assertTrue(latch.await(10, TimeUnit.SECONDS)); + assertTrue(latch.await(12, TimeUnit.SECONDS)); client.services().deployClusterSingleton("my-service", new MyLongInitServiceImpl()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyNodeStopSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyNodeStopSelfTest.java index 93d063d5dd35b..014fcb9e826cd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyNodeStopSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceProxyNodeStopSelfTest.java @@ -20,31 +20,19 @@ import java.util.concurrent.Callable; import org.apache.ignite.Ignite; import org.apache.ignite.Ignition; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.service.inner.MyService; import org.apache.ignite.internal.processors.service.inner.MyServiceFactory; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for service proxy after client node stopped. */ +@RunWith(JUnit4.class) public class GridServiceProxyNodeStopSelfTest extends GridCommonAbstractTest { - /** */ - private final static TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -53,6 +41,7 @@ public class GridServiceProxyNodeStopSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testProxyHashCode() throws Exception { Ignite server = startGrid("server"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceReassignmentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceReassignmentSelfTest.java index e44c8eabde0c7..532728b290f1f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceReassignmentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceReassignmentSelfTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests service reassignment. 
*/ +@RunWith(JUnit4.class) public class GridServiceReassignmentSelfTest extends GridServiceProcessorAbstractSelfTest { /** */ private static final String SERVICE_NAME = "testService"; @@ -46,6 +50,7 @@ public class GridServiceReassignmentSelfTest extends GridServiceProcessorAbstrac /** * @throws Exception If failed. */ + @Test public void testClusterSingleton() throws Exception { checkReassigns(1, 1); } @@ -53,6 +58,7 @@ public void testClusterSingleton() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeSingleton() throws Exception { checkReassigns(0, 1); } @@ -60,6 +66,7 @@ public void testNodeSingleton() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLimited1() throws Exception { checkReassigns(5, 2); } @@ -67,6 +74,7 @@ public void testLimited1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLimited2() throws Exception { checkReassigns(7, 3); } @@ -249,4 +257,4 @@ private int nextRandomIdx(Iterable startedGrids, Random rnd) { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceSerializationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceSerializationSelfTest.java index 7c0b03dc8df8b..92a7449d4a85e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceSerializationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/GridServiceSerializationSelfTest.java @@ -26,34 +26,23 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.Ignite; import org.apache.ignite.Ignition; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import 
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.thread.IgniteThread; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Service serialization test. */ +@RunWith(JUnit4.class) public class GridServiceSerializationSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - - return cfg; - } - /** * @throws Exception If failed. */ + @Test public void testServiceSerialization() throws Exception { try { Ignite server = startGridsMultiThreaded(3); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceConfigVariationsFullApiTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceConfigVariationsFullApiTest.java index 160014ce163f8..87af35444409e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceConfigVariationsFullApiTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceConfigVariationsFullApiTest.java @@ -39,10 +39,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.configvariations.Parameters; import org.apache.ignite.testframework.junits.IgniteConfigVariationsAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Full API services test. 
*/ +@RunWith(JUnit4.class) public class IgniteServiceConfigVariationsFullApiTest extends IgniteConfigVariationsAbstractTest { /** Test service name. */ private static final String SERVICE_NAME = "testService"; @@ -84,6 +88,7 @@ public class IgniteServiceConfigVariationsFullApiTest extends IgniteConfigVariat * * @throws Exception If failed. */ + @Test public void testNodeSingletonDeploy() throws Exception { runInAllDataModes(new ServiceTestRunnable(true, new DeployClosure() { @Override public void run(IgniteServices services, String svcName, TestService svc) throws Exception { @@ -100,6 +105,7 @@ public void testNodeSingletonDeploy() throws Exception { * * @throws Exception If failed. */ + @Test public void testClusterSingletonDeploy() throws Exception { runInAllDataModes(new ServiceTestRunnable(false, new DeployClosure() { @Override public void run(IgniteServices services, String svcName, TestService svc) throws Exception { @@ -116,6 +122,7 @@ public void testClusterSingletonDeploy() throws Exception { * * @throws Exception If failed. */ + @Test public void testKeyAffinityDeploy() throws Exception { runInAllDataModes(new ServiceTestRunnable(false, new DeployClosure() { @Override public void run(IgniteServices services, String svcName, TestService svc) { @@ -136,6 +143,7 @@ public void testKeyAffinityDeploy() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultipleDeploy() throws Exception { runInAllDataModes(new ServiceTestRunnable(true, new DeployClosure() { @Override public void run(IgniteServices services, String svcName, TestService svc) { @@ -149,6 +157,7 @@ public void testMultipleDeploy() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDeploy() throws Exception { runInAllDataModes(new ServiceTestRunnable(false, new DeployClosure() { @Override public void run(IgniteServices services, String svcName, TestService svc) throws Exception { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest.java index 000ed20d328c2..f61f5b44a2fa4 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest.java @@ -27,16 +27,17 @@ import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestExternalClassLoader; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests that not all nodes in cluster need user's service definition (only nodes according to filter). 
*/ +@RunWith(JUnit4.class) public class IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest extends GridCommonAbstractTest { /** */ private static final String NOOP_SERVICE_CLS_NAME = "org.apache.ignite.tests.p2p.NoopService"; @@ -44,9 +45,6 @@ public class IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest extends G /** */ private static final String NOOP_SERVICE_2_CLS_NAME = "org.apache.ignite.tests.p2p.NoopService2"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int GRID_CNT = 6; @@ -90,12 +88,6 @@ public class IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest extends G cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setMarshaller(marshaller()); cfg.setUserAttributes(Collections.singletonMap(NODE_NAME_ATTR, igniteInstanceName)); @@ -151,6 +143,7 @@ protected Marshaller marshaller() { /** * @throws Exception If failed. */ + @Test public void testServiceDeployment1() throws Exception { startGrid(0).services().deploy(serviceConfig(true)); @@ -173,6 +166,7 @@ public void testServiceDeployment1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServiceDeployment2() throws Exception { for (int i = 0 ; i < 4; i++) startGrid(i); @@ -190,6 +184,7 @@ public void testServiceDeployment2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testServiceDeployment3() throws Exception { startGrid(0).services().deploy(serviceConfig(true)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeploymentClassLoadingDefaultMarshallerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeploymentClassLoadingDefaultMarshallerTest.java index 7f087210c2655..119cc5fd50a7d 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeploymentClassLoadingDefaultMarshallerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDeploymentClassLoadingDefaultMarshallerTest.java @@ -25,21 +25,19 @@ import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests that not all nodes in cluster need user's service definition (only nodes according to filter). */ +@RunWith(JUnit4.class) public class IgniteServiceDeploymentClassLoadingDefaultMarshallerTest extends GridCommonAbstractTest { /** */ private static final String NOOP_SERVICE_CLS_NAME = "org.apache.ignite.tests.p2p.NoopService"; - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int SERVER_NODE = 0; @@ -67,12 +65,6 @@ public class IgniteServiceDeploymentClassLoadingDefaultMarshallerTest extends Gr cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setMarshaller(marshaller()); cfg.setUserAttributes(Collections.singletonMap(NODE_NAME_ATTR, igniteInstanceName)); @@ -124,6 +116,7 @@ protected Marshaller marshaller() { /** * @throws Exception If failed. */ + @Test public void testServiceDeployment1() throws Exception { startGrid(SERVER_NODE); @@ -141,6 +134,7 @@ public void testServiceDeployment1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServiceDeployment2() throws Exception { startGrid(SERVER_NODE); @@ -154,6 +148,7 @@ public void testServiceDeployment2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testServiceDeployment3() throws Exception { startGrid(SERVER_NODE_WITH_EXT_CLASS_LOADER).services().deploy(serviceConfig()); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDynamicCachesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDynamicCachesSelfTest.java index 7e5a18ddae955..790beb3a9b375 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDynamicCachesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceDynamicCachesSelfTest.java @@ -21,40 +21,24 @@ import org.apache.ignite.IgniteLogger; import org.apache.ignite.IgniteServices; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteServiceDynamicCachesSelfTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 4; - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - 
TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrids(GRID_CNT); @@ -63,6 +47,7 @@ public class IgniteServiceDynamicCachesSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testDeployCalledAfterCacheStart() throws Exception { String cacheName = "cache"; @@ -108,6 +93,7 @@ public void testDeployCalledAfterCacheStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployCalledBeforeCacheStart() throws Exception { String cacheName = "cache"; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceProxyTimeoutInitializedTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceProxyTimeoutInitializedTest.java index 17e583d399278..4e56b5831e2e2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceProxyTimeoutInitializedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceProxyTimeoutInitializedTest.java @@ -42,10 +42,14 @@ import java.util.concurrent.Callable; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests service proxy timeouts. */ +@RunWith(JUnit4.class) public class IgniteServiceProxyTimeoutInitializedTest extends GridCommonAbstractTest { /** */ private static Service srvc; @@ -102,7 +106,8 @@ public class IgniteServiceProxyTimeoutInitializedTest extends GridCommonAbstract * * @throws Exception If fail. 
*/ - @SuppressWarnings({"Convert2Lambda", "ThrowableResultOfMethodCallIgnored"}) + @SuppressWarnings({"Convert2Lambda"}) + @Test public void testUnavailableService() throws Exception { srvc = new TestWaitServiceImpl(); @@ -142,7 +147,8 @@ public void testUnavailableService() throws Exception { * * @throws Exception If fail. */ - @SuppressWarnings({"ThrowableResultOfMethodCallIgnored", "Convert2Lambda"}) + @SuppressWarnings({"Convert2Lambda"}) + @Test public void testServiceException() throws Exception { srvc = new HangServiceImpl(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceReassignmentTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceReassignmentTest.java index e74b27de99caa..c118d6d9595d3 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceReassignmentTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/IgniteServiceReassignmentTest.java @@ -32,20 +32,18 @@ import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceConfiguration; import org.apache.ignite.services.ServiceContext; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteServiceReassignmentTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private ServiceConfiguration srvcCfg; @@ -59,8 +57,6 @@ public class 
IgniteServiceReassignmentTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (srvcCfg != null) cfg.setServiceConfiguration(srvcCfg); @@ -87,6 +83,7 @@ public class IgniteServiceReassignmentTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNodeRestart1() throws Exception { srvcCfg = serviceConfiguration(); @@ -129,6 +126,7 @@ public void testNodeRestart1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeRestart2() throws Exception { startGrids(3); @@ -157,6 +155,7 @@ public void testNodeRestart2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNodeRestartRandom() throws Exception { final int NODES = 5; @@ -190,6 +189,7 @@ public void testNodeRestartRandom() throws Exception { /** * @throws Exception If failed. */ + @Test public void testZombieAssignmentsCleanup() throws Exception { useStrLog = true; @@ -246,6 +246,7 @@ public void testZombieAssignmentsCleanup() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNodeStopWhileThereAreCacheActivitiesInServiceProcessor() throws Exception { final int nodesCnt = 2; final int maxSvc = 1024; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOnActivationTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOnActivationTest.java index 52d706b920dca..cb60f396b6b41 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOnActivationTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOnActivationTest.java @@ -28,16 +28,14 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.services.ServiceConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class ServiceDeploymentOnActivationTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String SERVICE_NAME = "test-service"; @@ -61,10 +59,6 @@ public class ServiceDeploymentOnActivationTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi(); - discoverySpi.setIpFinder(IP_FINDER); - cfg.setDiscoverySpi(discoverySpi); - cfg.setClientMode(client); if (srvcCfg != null) @@ -101,6 +95,7 @@ public class 
ServiceDeploymentOnActivationTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testServersWithPersistence() throws Exception { persistence = true; @@ -110,6 +105,7 @@ public void testServersWithPersistence() throws Exception { /** * @throws Exception if failed. */ + @Test public void testClientsWithPersistence() throws Exception { persistence = true; @@ -119,6 +115,7 @@ public void testClientsWithPersistence() throws Exception { /** * @throws Exception if failed. */ + @Test public void testServersWithoutPersistence() throws Exception { persistence = false; @@ -128,6 +125,7 @@ public void testServersWithoutPersistence() throws Exception { /** * @throws Exception if failed. */ + @Test public void testClientsWithoutPersistence() throws Exception { persistence = false; @@ -137,6 +135,7 @@ public void testClientsWithoutPersistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServersStaticConfigWithPersistence() throws Exception { persistence = true; @@ -146,6 +145,7 @@ public void testServersStaticConfigWithPersistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientsStaticConfigWithPersistence() throws Exception { persistence = true; @@ -155,6 +155,7 @@ public void testClientsStaticConfigWithPersistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServersStaticConfigWithoutPersistence() throws Exception { persistence = false; @@ -164,7 +165,8 @@ public void testServersStaticConfigWithoutPersistence() throws Exception { /** * @throws Exception If failed. 
*/ - public void _testClientsStaticConfigWithoutPersistence() throws Exception { + @Test + public void testClientsStaticConfigWithoutPersistence() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-8279"); persistence = false; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOutsideBaselineTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOutsideBaselineTest.java index 878ec0d43c5db..779835e4f2627 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOutsideBaselineTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServiceDeploymentOutsideBaselineTest.java @@ -32,17 +32,15 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.services.ServiceConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class ServiceDeploymentOutsideBaselineTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String SERVICE_NAME = "test-service"; @@ -56,10 +54,6 @@ public class ServiceDeploymentOutsideBaselineTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoverySpi = new 
TcpDiscoverySpi(); - discoverySpi.setIpFinder(IP_FINDER); - cfg.setDiscoverySpi(discoverySpi); - if (persistence) { cfg.setDataStorageConfiguration( new DataStorageConfiguration() @@ -93,6 +87,7 @@ public class ServiceDeploymentOutsideBaselineTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testDeployOutsideBaseline() throws Exception { checkDeploymentFromOutsideNode(true, false); } @@ -100,6 +95,7 @@ public void testDeployOutsideBaseline() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOutsideBaselineNoPersistence() throws Exception { checkDeploymentFromOutsideNode(false, false); } @@ -107,6 +103,7 @@ public void testDeployOutsideBaselineNoPersistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOutsideBaselineStatic() throws Exception { checkDeploymentFromOutsideNode(true, true); } @@ -114,6 +111,7 @@ public void testDeployOutsideBaselineStatic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployOutsideBaselineStaticNoPersistence() throws Exception { checkDeploymentFromOutsideNode(false, true); } @@ -121,6 +119,7 @@ public void testDeployOutsideBaselineStaticNoPersistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployFromNodeAddedToBlt() throws Exception { checkDeployWithNodeAddedToBlt(true); } @@ -128,6 +127,7 @@ public void testDeployFromNodeAddedToBlt() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployToNodeAddedToBlt() throws Exception { checkDeployWithNodeAddedToBlt(false); } @@ -135,6 +135,7 @@ public void testDeployToNodeAddedToBlt() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDeployFromNodeRemovedFromBlt() throws Exception { checkDeployFromNodeRemovedFromBlt(true, false); } @@ -142,6 +143,7 @@ public void testDeployFromNodeRemovedFromBlt() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployFromNodeRemovedFromBltStatic() throws Exception { checkDeployFromNodeRemovedFromBlt(true, true); } @@ -149,6 +151,7 @@ public void testDeployFromNodeRemovedFromBltStatic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployToNodeRemovedFromBlt() throws Exception { checkDeployFromNodeRemovedFromBlt(false, false); } @@ -156,6 +159,7 @@ public void testDeployToNodeRemovedFromBlt() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStaticDeployFromEachPersistentNodes() throws Exception { checkDeployFromEachNodes(true, true); } @@ -163,6 +167,7 @@ public void testStaticDeployFromEachPersistentNodes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeployFromEachNodes() throws Exception { checkDeployFromEachNodes(false, false); } @@ -170,6 +175,7 @@ public void testDeployFromEachNodes() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testStaticDeployFromEachNodes() throws Exception { checkDeployFromEachNodes(false, true); } diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServicePredicateAccessCacheTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServicePredicateAccessCacheTest.java index 33a1993670bb6..d1b1ff6468bda 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServicePredicateAccessCacheTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/ServicePredicateAccessCacheTest.java @@ -31,11 +31,11 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceContext; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -44,10 +44,8 @@ /** * */ +@RunWith(JUnit4.class) public class ServicePredicateAccessCacheTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static CountDownLatch latch; @@ -55,8 +53,6 @@ public class ServicePredicateAccessCacheTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setMarshaller(new 
BinaryMarshaller()); cfg.setPeerClassLoadingEnabled(false); @@ -72,6 +68,7 @@ public class ServicePredicateAccessCacheTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPredicateAccessCache() throws Exception { final Ignite ignite0 = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/SystemCacheNotConfiguredTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/SystemCacheNotConfiguredTest.java index a76eb22585a64..3883e97512b4b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/service/SystemCacheNotConfiguredTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/service/SystemCacheNotConfiguredTest.java @@ -25,21 +25,19 @@ import org.apache.ignite.services.Service; import org.apache.ignite.services.ServiceConfiguration; import org.apache.ignite.services.ServiceContext; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests if system cache was started before deploying of service. 
*/ +@RunWith(JUnit4.class) public class SystemCacheNotConfiguredTest extends GridCommonAbstractTest { /** */ private final ByteArrayOutputStream errContent = new ByteArrayOutputStream(); - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private final PrintStream originalErr = System.err; @@ -52,11 +50,6 @@ public class SystemCacheNotConfiguredTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi(); - - discoverySpi.setIpFinder(ipFinder); - cfg.setDiscoverySpi(discoverySpi); - if("server".equals(igniteInstanceName)) cfg.setServiceConfiguration(serviceConfiguration()); @@ -66,6 +59,7 @@ public class SystemCacheNotConfiguredTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void test() throws Exception { captureErr(); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessorSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessorSelfTest.java index 606b10252b19a..c31e4cf113b5b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessorSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/processors/timeout/GridTimeoutProcessorSelfTest.java @@ -27,12 +27,16 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; /** * Timeout processor tests. 
*/ +@RunWith(JUnit4.class) public class GridTimeoutProcessorSelfTest extends GridCommonAbstractTest { /** Random number generator. */ private static final Random RAND = new Random(); @@ -66,6 +70,7 @@ public class GridTimeoutProcessorSelfTest extends GridCommonAbstractTest { * * @throws Exception If test failed. */ + @Test public void testTimeouts() throws Exception { int max = 100; @@ -136,6 +141,7 @@ public void testTimeouts() throws Exception { * * @throws Exception If test failed. */ + @Test public void testTimeoutsMultithreaded() throws Exception { final int max = 100; @@ -212,6 +218,7 @@ public void testTimeoutsMultithreaded() throws Exception { * * @throws Exception If test failed. */ + @Test public void testTimeoutObjectAdapterMultithreaded() throws Exception { final int max = 100; @@ -272,6 +279,7 @@ public void testTimeoutObjectAdapterMultithreaded() throws Exception { * * @throws Exception If test failed. */ + @Test public void testTimeoutNeverCalled() throws Exception { int max = 100; @@ -331,6 +339,7 @@ public void testTimeoutNeverCalled() throws Exception { * * @throws Exception If test failed. */ + @Test public void testTimeoutNeverCalledMultithreaded() throws Exception { int threads = 20; @@ -395,6 +404,7 @@ public void testTimeoutNeverCalledMultithreaded() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testAddRemoveInterleaving() throws Exception { final AtomicInteger callCnt = new AtomicInteger(0); @@ -517,6 +527,7 @@ public void testAddRemoveInterleaving() throws Exception { * * @throws Exception If test failed. */ + @Test public void testTimeoutCallOnce() throws Exception { ctx.timeout().addTimeoutObject(new GridTimeoutObject() { /** Timeout ID. */ @@ -558,6 +569,7 @@ public void testTimeoutCallOnce() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testTimeoutSameEndTime() throws Exception { final CountDownLatch latch = new CountDownLatch(2); @@ -631,6 +643,7 @@ public void testTimeoutSameEndTime() throws Exception { * * @throws Exception If test failed. */ + @Test public void testCancelingWithClearedInterruptedFlag() throws Exception { final CountDownLatch onTimeoutCalled = new CountDownLatch(1); @@ -652,4 +665,4 @@ public void testCancelingWithClearedInterruptedFlag() throws Exception { onTimeoutCalled.await(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/product/GridProductVersionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/product/GridProductVersionSelfTest.java index 92990deb62703..58b3f97a39682 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/product/GridProductVersionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/product/GridProductVersionSelfTest.java @@ -19,6 +19,9 @@ import org.apache.ignite.lang.IgniteProductVersion; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteVersionUtils.BUILD_TSTAMP; import static org.apache.ignite.internal.IgniteVersionUtils.REV_HASH_STR; @@ -28,10 +31,12 @@ /** * Versions test. */ +@RunWith(JUnit4.class) public class GridProductVersionSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testFromString() throws Exception { IgniteProductVersion ver = IgniteProductVersion.fromString("1.2.3"); @@ -138,4 +143,4 @@ public void testFromString() throws Exception { IgniteProductVersion.fromString(VER_STR + '-' + BUILD_TSTAMP + '-' + REV_HASH_STR); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserBulkLoadSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserBulkLoadSelfTest.java index b6716ffbf6a8f..60e489ed13d71 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserBulkLoadSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserBulkLoadSelfTest.java @@ -17,11 +17,17 @@ package org.apache.ignite.internal.sql; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + /** * Tests for SQL parser: COPY command. */ +@RunWith(JUnit4.class) public class SqlParserBulkLoadSelfTest extends SqlParserAbstractSelfTest { /** Tests for COPY command. 
*/ + @Test public void testCopy() { assertParseError(null, "copy grom 'any.file' into Person (_key, age, firstName, lastName) format csv", diff --git a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserCreateIndexSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserCreateIndexSelfTest.java index 465e8d15c9204..774a73bf01f98 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserCreateIndexSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserCreateIndexSelfTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.internal.sql.command.SqlCreateIndexCommand; import org.apache.ignite.internal.sql.command.SqlIndexColumn; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.sql.SqlKeyword.INLINE_SIZE; import static org.apache.ignite.internal.sql.SqlKeyword.PARALLEL; @@ -33,7 +36,8 @@ /** * Tests for SQL parser: CREATE INDEX. */ -@SuppressWarnings({"UnusedReturnValue", "ThrowableNotThrown"}) +@SuppressWarnings({"UnusedReturnValue"}) +@RunWith(JUnit4.class) public class SqlParserCreateIndexSelfTest extends SqlParserAbstractSelfTest { /** Default properties */ private static final Map DEFAULT_PROPS = getProps(null, null); @@ -43,6 +47,7 @@ public class SqlParserCreateIndexSelfTest extends SqlParserAbstractSelfTest { * * @throws Exception If failed. */ + @Test public void testCreateIndex() throws Exception { // Base. 
parseValidate(null, "CREATE INDEX idx ON tbl(a)", null, "TBL", "IDX", DEFAULT_PROPS, "A", false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserDropIndexSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserDropIndexSelfTest.java index a0af3a62d9d7b..a6d668e26aad6 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserDropIndexSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserDropIndexSelfTest.java @@ -18,16 +18,21 @@ package org.apache.ignite.internal.sql; import org.apache.ignite.internal.sql.command.SqlDropIndexCommand; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for SQL parser: CREATE INDEX. */ +@RunWith(JUnit4.class) public class SqlParserDropIndexSelfTest extends SqlParserAbstractSelfTest { /** * Tests for DROP INDEX command. * * @throws Exception If failed. */ + @Test public void testDropIndex() throws Exception { // Base. parseValidate(null, "DROP INDEX idx", null, "IDX"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserSetStreamingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserSetStreamingSelfTest.java index 7e699f6adabb7..2b448ebcf8624 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserSetStreamingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserSetStreamingSelfTest.java @@ -19,14 +19,19 @@ import org.apache.ignite.internal.processors.query.QueryUtils; import org.apache.ignite.internal.sql.command.SqlSetStreamingCommand; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for SQL parser: SET STREAMING. 
*/ +@RunWith(JUnit4.class) public class SqlParserSetStreamingSelfTest extends SqlParserAbstractSelfTest { /** * */ + @Test public void testParseSetStreaming() { parseValidate("set streaming on", true, false, 2048, 0, 0, 0, false); parseValidate("set streaming 1", true, false, 2048, 0, 0, 0, false); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserTransactionalKeywordsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserTransactionalKeywordsSelfTest.java index 103bb97925cc6..f2e12cfdaf7dd 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserTransactionalKeywordsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserTransactionalKeywordsSelfTest.java @@ -21,14 +21,19 @@ import org.apache.ignite.internal.sql.command.SqlCommand; import org.apache.ignite.internal.sql.command.SqlCommitTransactionCommand; import org.apache.ignite.internal.sql.command.SqlRollbackTransactionCommand; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for processing of keywords BEGIN, COMMIT, ROLLBACK, START. */ +@RunWith(JUnit4.class) public class SqlParserTransactionalKeywordsSelfTest extends SqlParserAbstractSelfTest { /** * Test parsing of different forms of BEGIN/START. */ + @Test public void testBegin() { assertBegin("begin"); assertBegin("BEGIN"); @@ -44,6 +49,7 @@ public void testBegin() { /** * Test parsing of different forms of COMMIT. */ + @Test public void testCommit() { assertCommit("commit"); assertCommit("COMMIT transaction"); @@ -54,6 +60,7 @@ public void testCommit() { /** * Test parsing of different forms of ROLLBACK. 
*/ + @Test public void testRollback() { assertRollback("rollback"); assertRollback("ROLLBACK transaction"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserUserSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserUserSelfTest.java index d3296f7908282..ec5b37d92c5b5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserUserSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/sql/SqlParserUserSelfTest.java @@ -20,17 +20,22 @@ import org.apache.ignite.internal.sql.command.SqlAlterUserCommand; import org.apache.ignite.internal.sql.command.SqlCreateUserCommand; import org.apache.ignite.internal.sql.command.SqlDropUserCommand; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for SQL parser: CREATE INDEX. */ -@SuppressWarnings({"UnusedReturnValue", "ThrowableNotThrown"}) +@SuppressWarnings({"UnusedReturnValue"}) +@RunWith(JUnit4.class) public class SqlParserUserSelfTest extends SqlParserAbstractSelfTest { /** * Tests for CREATE USER command. * * @throws Exception If failed. */ + @Test public void testCreateUser() throws Exception { // Base. parseValidateCreate("CREATE USER test WITH PASSWORD 'test'", "TEST", "test"); @@ -51,6 +56,7 @@ public void testCreateUser() throws Exception { * * @throws Exception If failed. */ + @Test public void testAlterUser() throws Exception { // Base. parseValidateAlter("ALTER USER test WITH PASSWORD 'test'", "TEST", "test"); @@ -71,6 +77,7 @@ public void testAlterUser() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropUser() throws Exception { // Base. 
parseValidateDrop("DROP USER test", "TEST"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/GridArraysSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/GridArraysSelfTest.java index 1cab527794c87..7e032fb28de60 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/GridArraysSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/GridArraysSelfTest.java @@ -19,6 +19,9 @@ import java.util.Arrays; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.util.GridArrays.clearTail; import static org.apache.ignite.internal.util.GridArrays.remove; @@ -26,12 +29,14 @@ /** */ +@RunWith(JUnit4.class) public class GridArraysSelfTest extends GridCommonAbstractTest { /** */ private static final String[] EMPTY = {}; /** */ + @Test public void testSet() { String[] arr = set(EMPTY, 4, "aa"); @@ -71,6 +76,7 @@ public void testSet() { /** */ + @Test public void testClearTail() { String[] arr = new String[10]; @@ -105,6 +111,7 @@ public void testClearTail() { /** */ + @Test public void testRemoveLong() { long[] arr = {0,1,2,3,4,5,6}; @@ -117,6 +124,7 @@ public void testRemoveLong() { /** */ + @Test public void testRemove() { Integer[] arr = {0,1,2,3,4,5,6}; diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/GridCleanerTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/GridCleanerTest.java index deb87b0a89faa..e024c7c88c8bf 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/GridCleanerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/GridCleanerTest.java @@ -17,15 +17,16 @@ package org.apache.ignite.internal.util; -import junit.framework.TestCase; +import org.junit.Test; /** * Grid cleaner tests. 
*/ -public class GridCleanerTest extends TestCase { +public class GridCleanerTest { /** * @throws Exception If failed. */ + @Test public void testCreate() throws Exception { Object cleaner = GridCleaner.create(this, new Runnable() { @Override public void run() { diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/GridHandleTableSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/GridHandleTableSelfTest.java index bd6105fa58a98..ccbe84fd32c3c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/GridHandleTableSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/GridHandleTableSelfTest.java @@ -18,14 +18,19 @@ package org.apache.ignite.internal.util; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link GridHandleTable}. */ +@RunWith(JUnit4.class) public class GridHandleTableSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
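The `GridCleanerTest` change above drops the JUnit 3 `TestCase` base class entirely; the test itself just creates a cleaner via `GridCleaner.create(obj, runnable)`. Assuming `GridCleaner` delegates to a platform cleaner on modern JDKs, the same pattern can be sketched with the standard `java.lang.ref.Cleaner` API (JDK 9+); this is an illustration, not Ignite's implementation:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanerSketch {
    public static void main(String[] args) {
        Cleaner cleaner = Cleaner.create();
        AtomicBoolean ran = new AtomicBoolean();

        // Register a cleanup action for an object, analogous to
        // GridCleaner.create(target, runnable) in the test above.
        Cleaner.Cleanable cleanable = cleaner.register(new Object(), () -> ran.set(true));

        cleanable.clean(); // Runs the action at most once, synchronously.

        System.out.println(ran.get()); // prints "true"
    }
}
```

Normally the action would run when the registered object becomes phantom reachable; calling `clean()` explicitly, as here, forces it immediately.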
*/ + @Test public void testGrow() throws Exception { GridHandleTable table = new GridHandleTable(8, 2); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/GridStartupWithUndefinedIgniteHomeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/GridStartupWithUndefinedIgniteHomeSelfTest.java index cb0beed3888f0..f5f989f226a45 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/GridStartupWithUndefinedIgniteHomeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/GridStartupWithUndefinedIgniteHomeSelfTest.java @@ -30,6 +30,8 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.After; +import org.junit.Test; import static org.apache.ignite.IgniteSystemProperties.IGNITE_HOME; import static org.apache.ignite.internal.util.IgniteUtils.nullifyHomeDirectory; @@ -42,7 +44,7 @@ * independent from {@link GridCommonAbstractTest} stuff. * 2. Do not replace native Java asserts with JUnit ones - test won't fall on TeamCity. */ -public class GridStartupWithUndefinedIgniteHomeSelfTest extends TestCase { +public class GridStartupWithUndefinedIgniteHomeSelfTest { /** */ private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); @@ -50,7 +52,8 @@ public class GridStartupWithUndefinedIgniteHomeSelfTest extends TestCase { private static final int GRID_COUNT = 2; /** {@inheritDoc} */ - @Override protected void tearDown() throws Exception { + @After + public void tearDown() throws Exception { // Next grid in the same VM shouldn't use cached values produced by these tests. nullifyHomeDirectory(); @@ -60,6 +63,7 @@ public class GridStartupWithUndefinedIgniteHomeSelfTest extends TestCase { /** * @throws Exception If failed. 
*/ + @Test public void testStartStopWithUndefinedIgniteHome() throws Exception { IgniteUtils.nullifyHomeDirectory(); @@ -77,7 +81,7 @@ public void testStartStopWithUndefinedIgniteHome() throws Exception { IgniteLogger log = new JavaLogger(); - log.info(">>> Test started: " + getName()); + log.info(">>> Test started: start-stop"); log.info("Grid start-stop test count: " + GRID_COUNT); for (int i = 0; i < GRID_COUNT; i++) { @@ -103,4 +107,4 @@ public void testStartStopWithUndefinedIgniteHome() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteDevOnlyLogTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteDevOnlyLogTest.java index 9a829473b7395..de3ba71cdd01e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteDevOnlyLogTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteDevOnlyLogTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Testing logging via {@link IgniteUtils#warnDevOnly(IgniteLogger, Object)}. */ +@RunWith(JUnit4.class) public class IgniteDevOnlyLogTest extends GridCommonAbstractTest { /** */ private List additionalArgs; @@ -55,6 +59,7 @@ public class IgniteDevOnlyLogTest extends GridCommonAbstractTest { } /** Check that dev-only messages appear in the log. */ + @Test public void testDevOnlyQuietMessage() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-9328"); @@ -72,6 +77,7 @@ public void testDevOnlyQuietMessage() throws Exception { } /** Check that dev-only messages appear in the log. 
*/ + @Test public void testDevOnlyVerboseMessage() throws Exception { additionalArgs = Collections.singletonList("-D" + IgniteSystemProperties.IGNITE_QUIET + "=false"); @@ -91,6 +97,7 @@ public void testDevOnlyVerboseMessage() throws Exception { * doesn't print anything if {@link org.apache.ignite.IgniteSystemProperties#IGNITE_DEV_ONLY_LOGGING_DISABLED} * is set to {@code true}. */ + @Test public void testDevOnlyDisabledProperty() throws Exception { additionalArgs = Collections.singletonList("-D" + IgniteSystemProperties.IGNITE_DEV_ONLY_LOGGING_DISABLED + "=true"); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteExceptionRegistrySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteExceptionRegistrySelfTest.java index 81b7522014960..18aaded687335 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteExceptionRegistrySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteExceptionRegistrySelfTest.java @@ -24,11 +24,15 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class IgniteExceptionRegistrySelfTest extends GridCommonAbstractTest { /** */ private IgniteExceptionRegistry registry = IgniteExceptionRegistry.get(); @@ -36,6 +40,7 @@ public class IgniteExceptionRegistrySelfTest extends GridCommonAbstractTest { /** * @throws Exception if failed. */ + @Test public void testOnException() throws Exception { awaitPartitionMapExchange(); @@ -90,6 +95,7 @@ public void testOnException() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testMultiThreadedMaxSize() throws Exception { final int maxSize = 10; @@ -108,4 +114,4 @@ public void testMultiThreadedMaxSize() throws Exception { assert maxSize + 1 >= size && maxSize - 1 <= size; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteUtilsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteUtilsSelfTest.java index 61a076ed65f34..cefce1ebd8a2f 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteUtilsSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/IgniteUtilsSelfTest.java @@ -38,20 +38,32 @@ import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Arrays; -import java.util.Calendar; import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Random; import java.util.UUID; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.CyclicBarrier; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Consumer; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteInterruptedException; import org.apache.ignite.cluster.ClusterGroup; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.compute.ComputeJob; import org.apache.ignite.compute.ComputeJobAdapter; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.processors.igfs.IgfsUtils; import org.apache.ignite.internal.util.lang.GridPeerDeployAware; +import org.apache.ignite.internal.util.lang.IgniteThrowableConsumer; import org.apache.ignite.internal.util.typedef.F; import 
org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; @@ -64,13 +76,19 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import static java.util.Arrays.asList; import static org.junit.Assert.assertArrayEquals; /** * Grid utils tests. */ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class IgniteUtilsSelfTest extends GridCommonAbstractTest { /** */ public static final int[] EMPTY = new int[0]; @@ -89,6 +107,7 @@ private String text120() { /** * */ + @Test public void testIsPow2() { assertTrue(U.isPow2(1)); assertTrue(U.isPow2(2)); @@ -113,6 +132,7 @@ public void testIsPow2() { /** * @throws Exception If failed. */ + @Test public void testAllLocalIps() throws Exception { Collection ips = U.allLocalIps(); @@ -122,6 +142,7 @@ public void testAllLocalIps() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAllLocalMACs() throws Exception { Collection macs = U.allLocalMACs(); @@ -133,6 +154,7 @@ public void testAllLocalMACs() throws Exception { * * @throws Exception If failed. */ + @Test public void testAllLocalMACsMultiThreaded() throws Exception { GridTestUtils.runMultiThreaded(new Runnable() { @Override public void run() { @@ -148,6 +170,7 @@ public void testAllLocalMACsMultiThreaded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByteArray2String() throws Exception { assertEquals("{0x0A,0x14,0x1E,0x28,0x32,0x3C,0x46,0x50,0x5A}", U.byteArray2String(new byte[]{10, 20, 30, 40, 50, 60, 70, 80, 90}, "0x%02X", ",0x%02X")); @@ -156,6 +179,7 @@ public void testByteArray2String() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFormatMins() throws Exception { printFormatMins(0); printFormatMins(1); @@ -183,6 +207,7 @@ private void printFormatMins(long mins) { /** * @throws Exception If failed. */ + @Test public void testDownloadUrlFromHttp() throws Exception { GridEmbeddedHttpServer srv = null; try { @@ -206,6 +231,7 @@ public void testDownloadUrlFromHttp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDownloadUrlFromHttps() throws Exception { GridEmbeddedHttpServer srv = null; try { @@ -229,6 +255,7 @@ public void testDownloadUrlFromHttps() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDownloadUrlFromLocalFile() throws Exception { File file = new File(System.getProperty("java.io.tmpdir") + File.separator + "url-http.file"); @@ -242,6 +269,7 @@ public void testDownloadUrlFromLocalFile() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOs() throws Exception { System.out.println("OS string: " + U.osString()); System.out.println("JDK string: " + U.jdkString()); @@ -268,6 +296,7 @@ public void testOs() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJavaSerialization() throws Exception { ByteArrayOutputStream byteOut = new ByteArrayOutputStream(); ObjectOutputStream objOut = new ObjectOutputStream(byteOut); @@ -286,6 +315,7 @@ public void testJavaSerialization() throws Exception { /** * */ + @Test public void testHidePassword() { Collection uriList = new ArrayList<>(); @@ -329,7 +359,7 @@ private SelfReferencedJob(Ignite ignite) throws IgniteCheckedException { arr = new SelfReferencedJob[] {this, this}; - col = Arrays.asList(this, this, this); + col = asList(this, this, this); newContext(); @@ -355,6 +385,7 @@ private SelfReferencedJob(Ignite ignite) throws IgniteCheckedException { /** * @throws Exception If test fails. 
*/ + @Test public void testDetectPeerDeployAwareInfiniteRecursion() throws Exception { Ignite g = startGrid(1); @@ -386,51 +417,10 @@ private static ComputeJob job(final Runnable r) { }; } - /** - * - * @throws Exception If failed. - */ - public void testParseIsoDate() throws Exception { - Calendar cal = U.parseIsoDate("2009-12-08T13:30:44.000Z"); - - assert cal.get(Calendar.YEAR) == 2009; - assert cal.get(Calendar.MONTH) == 11; - assert cal.get(Calendar.DAY_OF_MONTH) == 8; - assert cal.get(Calendar.HOUR_OF_DAY) == 13; - assert cal.get(Calendar.MINUTE) == 30; - assert cal.get(Calendar.SECOND) == 44; - assert cal.get(Calendar.MILLISECOND) == 0; - assert cal.get(Calendar.ZONE_OFFSET) == 0 : - "Unexpected value: " + cal.get(Calendar.ZONE_OFFSET); - - cal = U.parseIsoDate("2009-12-08T13:30:44.000+03:00"); - - assert cal.get(Calendar.YEAR) == 2009; - assert cal.get(Calendar.MONTH) == 11; - assert cal.get(Calendar.DAY_OF_MONTH) == 8; - assert cal.get(Calendar.HOUR_OF_DAY) == 13; - assert cal.get(Calendar.MINUTE) == 30; - assert cal.get(Calendar.SECOND) == 44; - assert cal.get(Calendar.MILLISECOND) == 0; - assert cal.get(Calendar.ZONE_OFFSET) == 3 * 60 * 60 * 1000 : - "Unexpected value: " + cal.get(Calendar.ZONE_OFFSET); - - cal = U.parseIsoDate("2009-12-08T13:30:44.000+0300"); - - assert cal.get(Calendar.YEAR) == 2009; - assert cal.get(Calendar.MONTH) == 11; - assert cal.get(Calendar.DAY_OF_MONTH) == 8; - assert cal.get(Calendar.HOUR_OF_DAY) == 13; - assert cal.get(Calendar.MINUTE) == 30; - assert cal.get(Calendar.SECOND) == 44; - assert cal.get(Calendar.MILLISECOND) == 0; - assert cal.get(Calendar.ZONE_OFFSET) == 3 * 60 * 60 * 1000 : - "Unexpected value: " + cal.get(Calendar.ZONE_OFFSET); - } - /** * @throws Exception If test failed. */ + @Test public void testPeerDeployAware0() throws Exception { Collection col = new ArrayList<>(); @@ -499,6 +489,7 @@ public void testPeerDeployAware0() throws Exception { /** * Test UUID to bytes array conversion. 
*/ + @Test public void testsGetBytes() { for (int i = 0; i < 100; i++) { UUID id = UUID.randomUUID(); @@ -515,6 +506,7 @@ public void testsGetBytes() { * */ @SuppressWarnings("ZeroLengthArrayAllocation") + @Test public void testReadByteArray() { assertTrue(Arrays.equals(new byte[0], U.readByteArray(ByteBuffer.allocate(0)))); assertTrue(Arrays.equals(new byte[0], U.readByteArray(ByteBuffer.allocate(0), ByteBuffer.allocate(0)))); @@ -556,6 +548,7 @@ public void testReadByteArray() { * */ @SuppressWarnings("ZeroLengthArrayAllocation") + @Test public void testHashCodeFromBuffers() { assertEquals(Arrays.hashCode(new byte[0]), U.hashCode(ByteBuffer.allocate(0))); assertEquals(Arrays.hashCode(new byte[0]), U.hashCode(ByteBuffer.allocate(0), ByteBuffer.allocate(0))); @@ -580,6 +573,7 @@ public void testHashCodeFromBuffers() { /** * Test annotation look up. */ + @Test public void testGetAnnotations() { assert U.getAnnotation(A1.class, Ann1.class) != null; assert U.getAnnotation(A2.class, Ann1.class) != null; @@ -594,6 +588,7 @@ public void testGetAnnotations() { /** * */ + @Test public void testUnique() { int[][][] arrays = new int[][][]{ new int[][]{EMPTY, EMPTY, EMPTY}, @@ -620,6 +615,7 @@ public void testUnique() { /** * */ + @Test public void testDifference() { int[][][] arrays = new int[][][]{ new int[][]{EMPTY, EMPTY, EMPTY}, @@ -643,6 +639,7 @@ public void testDifference() { /** * */ + @Test public void testCopyIfExceeded() { int[][] arrays = new int[][]{new int[]{13, 14, 17, 11}, new int[]{13}, EMPTY}; @@ -660,6 +657,7 @@ public void testCopyIfExceeded() { /** * */ + @Test public void testIsIncreasingArray() { assertTrue(U.isIncreasingArray(EMPTY, 0)); assertTrue(U.isIncreasingArray(new int[]{Integer.MIN_VALUE, -10, 1, 13, Integer.MAX_VALUE}, 5)); @@ -678,6 +676,7 @@ public void testIsIncreasingArray() { /** * */ + @Test public void testIsNonDecreasingArray() { assertTrue(U.isNonDecreasingArray(EMPTY, 0)); assertTrue(U.isNonDecreasingArray(new 
int[]{Integer.MIN_VALUE, -10, 1, 13, Integer.MAX_VALUE}, 5)); @@ -696,6 +695,7 @@ public void testIsNonDecreasingArray() { /** * Test InetAddress Comparator. */ + @Test public void testInetAddressesComparator() { List ips = new ArrayList() { { @@ -720,6 +720,7 @@ public void testInetAddressesComparator() { } + @Test public void testMD5Calculation() throws Exception { String md5 = U.calculateMD5(new ByteArrayInputStream("Corrupted information.".getBytes())); @@ -729,6 +730,7 @@ public void testMD5Calculation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testResolveLocalAddresses() throws Exception { InetAddress inetAddress = InetAddress.getByName("0.0.0.0"); @@ -750,6 +752,7 @@ public void testResolveLocalAddresses() throws Exception { /** * */ + @Test public void testToSocketAddressesNoDuplicates() { Collection addrs = new ArrayList<>(); @@ -805,6 +808,7 @@ private static void checkString(String s0) throws Exception { * * @throws Exception If failed. */ + @Test public void testLongStringWriteUTF() throws Exception { checkString(null); checkString(""); @@ -826,6 +830,7 @@ public void testLongStringWriteUTF() throws Exception { /** * */ + @Test public void testCeilPow2() throws Exception { assertEquals(2, U.ceilPow2(2)); assertEquals(4, U.ceilPow2(3)); @@ -852,6 +857,7 @@ public void testCeilPow2() throws Exception { /** * */ + @Test public void testIsOldestNodeVersionAtLeast() { IgniteProductVersion v240 = IgniteProductVersion.fromString("2.4.0"); IgniteProductVersion v241 = IgniteProductVersion.fromString("2.4.1"); @@ -870,10 +876,306 @@ public void testIsOldestNodeVersionAtLeast() { TcpDiscoveryNode node250ts = new TcpDiscoveryNode(); node250ts.version(v250ts); - assertTrue(U.isOldestNodeVersionAtLeast(v240, Arrays.asList(node240, node241, node250, node250ts))); - assertFalse(U.isOldestNodeVersionAtLeast(v241, Arrays.asList(node240, node241, node250, node250ts))); - assertTrue(U.isOldestNodeVersionAtLeast(v250, 
Arrays.asList(node250, node250ts))); - assertTrue(U.isOldestNodeVersionAtLeast(v250ts, Arrays.asList(node250, node250ts))); + assertTrue(U.isOldestNodeVersionAtLeast(v240, asList(node240, node241, node250, node250ts))); + assertFalse(U.isOldestNodeVersionAtLeast(v241, asList(node240, node241, node250, node250ts))); + assertTrue(U.isOldestNodeVersionAtLeast(v250, asList(node250, node250ts))); + assertTrue(U.isOldestNodeVersionAtLeast(v250ts, asList(node250, node250ts))); + } + + /** + * + */ + @Test + public void testDoInParallel() throws Throwable { + CyclicBarrier barrier = new CyclicBarrier(3); + + ExecutorService executorService = Executors.newFixedThreadPool(3); + + try { + IgniteUtils.doInParallel(3, + executorService, + asList(1, 2, 3), + i -> { + try { + barrier.await(1, TimeUnit.SECONDS); + } + catch (Exception e) { + throw new IgniteCheckedException(e); + } + + return null; + } + ); + } finally { + executorService.shutdownNow(); + } + } + + /** + * + */ + @Test + public void testDoInParallelBatch() { + CyclicBarrier barrier = new CyclicBarrier(3); + + ExecutorService executorService = Executors.newFixedThreadPool(3); + + try { + IgniteUtils.doInParallel(2, + executorService, + asList(1, 2, 3), + i -> { + try { + barrier.await(400, TimeUnit.MILLISECONDS); + } + catch (Exception e) { + throw new IgniteCheckedException(e); + } + + return null; + } + ); + + fail("Should throw timeout exception"); + } + catch (Exception e) { + assertTrue(e.toString(), X.hasCause(e, TimeoutException.class)); + } finally { + executorService.shutdownNow(); + } + } + + /** + * Test optimal splitting on batch sizes. 
+ */ + @Test + public void testOptimalBatchSize() { + assertArrayEquals(new int[]{1}, IgniteUtils.calculateOptimalBatchSizes(1, 1)); + + assertArrayEquals(new int[]{2}, IgniteUtils.calculateOptimalBatchSizes(1, 2)); + + assertArrayEquals(new int[]{1, 1, 1, 1}, IgniteUtils.calculateOptimalBatchSizes(6, 4)); + + assertArrayEquals(new int[]{1}, IgniteUtils.calculateOptimalBatchSizes(4, 1)); + + assertArrayEquals(new int[]{1, 1}, IgniteUtils.calculateOptimalBatchSizes(4, 2)); + + assertArrayEquals(new int[]{1, 1, 1}, IgniteUtils.calculateOptimalBatchSizes(4, 3)); + + assertArrayEquals(new int[]{1, 1, 1, 1}, IgniteUtils.calculateOptimalBatchSizes(4, 4)); + + assertArrayEquals(new int[]{2, 1, 1, 1}, IgniteUtils.calculateOptimalBatchSizes(4, 5)); + + assertArrayEquals(new int[]{2, 2, 1, 1}, IgniteUtils.calculateOptimalBatchSizes(4, 6)); + + assertArrayEquals(new int[]{2, 2, 2, 1}, IgniteUtils.calculateOptimalBatchSizes(4, 7)); + + assertArrayEquals(new int[]{2, 2, 2, 2}, IgniteUtils.calculateOptimalBatchSizes(4, 8)); + + assertArrayEquals(new int[]{3, 2, 2, 2}, IgniteUtils.calculateOptimalBatchSizes(4, 9)); + + assertArrayEquals(new int[]{3, 3, 2, 2}, IgniteUtils.calculateOptimalBatchSizes(4, 10)); + } + + /** + * Test parallel execution in order. + */ + @Test + public void testDoInParallelResultsOrder() throws IgniteCheckedException { + ExecutorService executorService = Executors.newFixedThreadPool(4); + + try { + for(int parallelism = 1; parallelism < 16; parallelism++) + for(int size = 0; size < 10_000; size++) + testOrder(executorService, size, parallelism); + } finally { + executorService.shutdownNow(); + } + } + + /** + * Test parallel execution with job stealing. + */ + @Test + public void testDoInParallelWithStealingJob() throws IgniteCheckedException { + // Pool size should be less than the input collection size.
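The `testOptimalBatchSize` assertions above fully determine the splitting rule: use at most `parallelism` batches, give each batch `size / batches` items, and spread the remainder over the leading batches. A stand-alone sketch under that reading (the class name and the zero-size guard are my own; this is not the Ignite source):

```java
import java.util.Arrays;

public class BatchSplit {
    /**
     * Splits `size` items into at most `parallelism` batches: each batch
     * gets size / batches items, and the leading batches absorb the remainder.
     */
    public static int[] calculateOptimalBatchSizes(int parallelism, int size) {
        int batches = Math.min(parallelism, size);

        if (batches == 0)
            return new int[0];

        int[] sizes = new int[batches];
        int base = size / batches;
        int rem = size % batches;

        for (int i = 0; i < batches; i++)
            sizes[i] = i < rem ? base + 1 : base;

        return sizes;
    }

    public static void main(String[] args) {
        // 10 items over 4 batches, matching the last assertion in the test.
        System.out.println(Arrays.toString(calculateOptimalBatchSizes(4, 10))); // [3, 3, 2, 2]
    }
}
```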
+ ExecutorService executorService = Executors.newFixedThreadPool(1); + + CountDownLatch mainThreadLatch = new CountDownLatch(1); + CountDownLatch poolThreadLatch = new CountDownLatch(1); + + // Occupy one thread from the pool. + executorService.submit(new Runnable() { + @Override + public void run() { + try { + poolThreadLatch.await(); + } + catch (InterruptedException e) { + throw new IgniteInterruptedException(e); + } + } + }); + + List<Integer> data = asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9); + + AtomicInteger taskProcessed = new AtomicInteger(); + + long threadId = Thread.currentThread().getId(); + + AtomicInteger curThreadCnt = new AtomicInteger(); + AtomicInteger poolThreadCnt = new AtomicInteger(); + + Collection<Integer> res = U.doInParallel(10, + executorService, + data, + new IgniteThrowableConsumer<Integer, Integer>() { + @Override public Integer accept(Integer cnt) throws IgniteInterruptedCheckedException { + // Release the pool thread in the middle of the range. + if (taskProcessed.getAndIncrement() == (data.size() / 2) - 1) { + poolThreadLatch.countDown(); + + try { + // Wait for the pool thread to complete its task. + mainThreadLatch.await(); + } + catch (InterruptedException e) { + Thread.currentThread().interrupt(); + + throw new IgniteInterruptedCheckedException(e); + } + } + + // Increment if executed in the current thread. + if (Thread.currentThread().getId() == threadId) + curThreadCnt.incrementAndGet(); + else { + poolThreadCnt.incrementAndGet(); + + if (taskProcessed.get() == data.size()) + mainThreadLatch.countDown(); + } + + return -cnt; + } + }); + + Assert.assertEquals(curThreadCnt.get() + poolThreadCnt.get(), data.size()); + Assert.assertEquals(5, curThreadCnt.get()); + Assert.assertEquals(5, poolThreadCnt.get()); + Assert.assertEquals(asList(0, -1, -2, -3, -4, -5, -6, -7, -8, -9), res); + } + + /** + * Test parallel execution with job stealing when the task itself runs in the executor. + */ + @Test + public void testDoInParallelWithStealingJobRunTaskInExecutor() throws Exception { + // Pool size should be less than the input collection size.
+ ExecutorService executorService = Executors.newFixedThreadPool(2); + + Future f1 = executorService.submit(()-> runTask(executorService)); + Future f2 = executorService.submit(()-> runTask(executorService)); + Future f3 = executorService.submit(()-> runTask(executorService)); + + f1.get(); + f2.get(); + f3.get(); + } + + /** + * + * @param executorService Executor service. + */ + private void runTask(ExecutorService executorService) { + List data = asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9); + + long threadId = Thread.currentThread().getId(); + + AtomicInteger curThreadCnt = new AtomicInteger(); + + Collection res; + + try { + res = U.doInParallel(10, + executorService, + data, + new IgniteThrowableConsumer() { + @Override public Integer accept(Integer cnt) { + if (Thread.currentThread().getId() == threadId) + curThreadCnt.incrementAndGet(); + + return -cnt; + } + }); + } + catch (IgniteCheckedException e) { + throw new IgniteException(e); + } + + Assert.assertTrue(curThreadCnt.get() > 0); + Assert.assertEquals(asList(0, -1, -2, -3, -4, -5, -6, -7, -8, -9), res); + } + + /** + * Template method to test parallel execution + * @param executorService ExecutorService. + * @param size Size. + * @param parallelism Parallelism. + * @throws IgniteCheckedException Exception. 
+     */
+    private void testOrder(ExecutorService executorService, int size, int parallelism) throws IgniteCheckedException {
+        List<Integer> list = new ArrayList<>();
+
+        for (int i = 0; i < size; i++)
+            list.add(i);
+
+        Collection<Integer> results = IgniteUtils.doInParallel(
+            parallelism,
+            executorService,
+            list,
+            i -> i * 2
+        );
+
+        assertEquals(list.size(), results.size());
+
+        final int[] i = {0};
+
+        results.forEach(new Consumer<Integer>() {
+            @Override public void accept(Integer integer) {
+                assertEquals(2 * list.get(i[0]), integer.intValue());
+
+                i[0]++;
+            }
+        });
+    }
+
+    /**
+     *
+     */
+    @Test
+    public void testDoInParallelException() {
+        String expectedException = "ExpectedException";
+
+        ExecutorService executorService = Executors.newFixedThreadPool(1);
+
+        try {
+            IgniteUtils.doInParallel(
+                1,
+                executorService,
+                asList(1, 2, 3),
+                i -> {
+                    if (Integer.valueOf(1).equals(i))
+                        throw new IgniteCheckedException(expectedException);
+
+                    return null;
+                }
+            );
+
+            fail("Should throw IgniteCheckedException.");
+        }
+        catch (IgniteCheckedException e) {
+            assertEquals(expectedException, e.getMessage());
+        }
+        finally {
+            executorService.shutdownNow();
+        }
     }
 
     /**
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/StripedExecutorTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/StripedExecutorTest.java
index 9a4bf0619c4df..0b4123a3bf652 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/StripedExecutorTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/StripedExecutorTest.java
@@ -20,10 +20,14 @@
 import org.apache.ignite.lang.IgniteInClosure;
 import org.apache.ignite.logger.java.JavaLogger;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class StripedExecutorTest extends GridCommonAbstractTest {
     /** */
     private StripedExecutor stripedExecSvc;
@@ -44,6 +48,7
@@ public class StripedExecutorTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCompletedTasks() throws Exception { stripedExecSvc.execute(0, new TestRunnable()); stripedExecSvc.execute(1, new TestRunnable()); @@ -56,6 +61,7 @@ public void testCompletedTasks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStripesCompletedTasks() throws Exception { stripedExecSvc.execute(0, new TestRunnable()); stripedExecSvc.execute(1, new TestRunnable()); @@ -72,6 +78,7 @@ public void testStripesCompletedTasks() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStripesActiveStatuses() throws Exception { stripedExecSvc.execute(0, new TestRunnable()); stripedExecSvc.execute(1, new TestRunnable(true)); @@ -88,6 +95,7 @@ public void testStripesActiveStatuses() throws Exception { /** * @throws Exception If failed. */ + @Test public void testActiveStripesCount() throws Exception { stripedExecSvc.execute(0, new TestRunnable()); stripedExecSvc.execute(1, new TestRunnable(true)); @@ -100,6 +108,7 @@ public void testActiveStripesCount() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStripesQueueSizes() throws Exception { stripedExecSvc.execute(0, new TestRunnable()); stripedExecSvc.execute(0, new TestRunnable(true)); @@ -120,6 +129,7 @@ public void testStripesQueueSizes() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueueSize() throws Exception { stripedExecSvc.execute(1, new TestRunnable()); stripedExecSvc.execute(1, new TestRunnable(true)); diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridCompoundFutureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridCompoundFutureSelfTest.java index a6021332e7e9e..4d8756958ca46 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridCompoundFutureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridCompoundFutureSelfTest.java @@ -25,14 +25,19 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests compound future contracts. */ +@RunWith(JUnit4.class) public class GridCompoundFutureSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testMarkInitialized() throws Exception { GridCompoundFuture fut = new GridCompoundFuture<>(); @@ -53,6 +58,7 @@ public void testMarkInitialized() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCompleteOnReducer() throws Exception { GridCompoundFuture fut = new GridCompoundFuture<>(CU.boolReducer()); @@ -85,6 +91,7 @@ public void testCompleteOnReducer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCompleteOnException() throws Exception { GridCompoundFuture fut = new GridCompoundFuture<>(CU.boolReducer()); @@ -117,6 +124,7 @@ public void testCompleteOnException() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConcurrentCompletion() throws Exception { GridCompoundFuture fut = new GridCompoundFuture<>(CU.boolReducer()); @@ -149,6 +157,7 @@ public void testConcurrentCompletion() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentRandomCompletion() throws Exception { GridCompoundFuture fut = new GridCompoundFuture<>(CU.boolReducer()); @@ -187,4 +196,4 @@ else if (op == 8) assertTrue(fut.isDone()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridEmbeddedFutureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridEmbeddedFutureSelfTest.java index 314acceaba3c7..4d3134935612c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridEmbeddedFutureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridEmbeddedFutureSelfTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.lang.IgniteBiClosure; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.configuration.CacheConfiguration.DFLT_MAX_CONCURRENT_ASYNC_OPS; @@ -33,6 +36,7 @@ /** * Tests grid embedded future use cases. */ +@RunWith(JUnit4.class) public class GridEmbeddedFutureSelfTest extends GridCommonAbstractTest { /** * Test kernal context. @@ -47,6 +51,7 @@ public class GridEmbeddedFutureSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testFutureChain() throws Exception { GridFutureAdapter fut = new GridFutureAdapter<>(); @@ -70,6 +75,7 @@ public void testFutureChain() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("ErrorNotRethrown") + @Test public void testFutureCompletesCorrectly() throws Exception { List list = Arrays.asList( null, @@ -137,4 +143,4 @@ public void testFutureCompletesCorrectly() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureAdapterSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureAdapterSelfTest.java index 89f3a0309c3fc..e954afcc655aa 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureAdapterSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureAdapterSelfTest.java @@ -35,14 +35,19 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests grid future adapter use cases. */ +@RunWith(JUnit4.class) public class GridFutureAdapterSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testOnDone() throws Exception { GridFutureAdapter fut = new GridFutureAdapter<>(); @@ -92,6 +97,7 @@ public void testOnDone() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOnCancelled() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -119,6 +125,7 @@ public void testOnCancelled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testListenSyncNotify() throws Exception { GridFutureAdapter fut = new GridFutureAdapter<>(); @@ -170,6 +177,7 @@ public void testListenSyncNotify() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testListenNotify() throws Exception { GridTestKernalContext ctx = new GridTestKernalContext(log); @@ -228,6 +236,7 @@ public void testListenNotify() throws Exception { * * @throws Exception In case of any exception. */ + @Test public void testChaining() throws Exception { checkChaining(null); @@ -326,6 +335,7 @@ private void checkChaining(ExecutorService exec) throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { GridFutureAdapter unfinished = new GridFutureAdapter<>(); GridFutureAdapter finished = new GridFutureAdapter<>(); @@ -371,4 +381,4 @@ public void testGet() throws Exception { } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureQueueTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureQueueTest.java index 4b6d81c2da114..595e0cd3d605b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureQueueTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/future/GridFutureQueueTest.java @@ -181,4 +181,4 @@ public void testQueue(long time, int writers) throws Exception { System.gc(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/future/IgniteFutureImplTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/future/IgniteFutureImplTest.java index 3a06cf4420bc6..7c08863e93161 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/future/IgniteFutureImplTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/future/IgniteFutureImplTest.java @@ -37,10 +37,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** 
* */ +@RunWith(JUnit4.class) public class IgniteFutureImplTest extends GridCommonAbstractTest { /** Context thread name. */ private static final String CTX_THREAD_NAME = "test-async"; @@ -73,6 +77,7 @@ public class IgniteFutureImplTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testFutureGet() throws Exception { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -92,6 +97,7 @@ public void testFutureGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFutureException() throws Exception { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -131,6 +137,7 @@ public void testFutureException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFutureIgniteException() throws Exception { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -154,6 +161,7 @@ public void testFutureIgniteException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testListeners() throws Exception { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -198,6 +206,7 @@ public void testListeners() throws Exception { /** * @throws Exception If failed. */ + @Test public void testListenersOnError() throws Exception { { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -265,6 +274,7 @@ public void testListenersOnError() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAsyncListeners() throws Exception { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -306,6 +316,7 @@ private void checkAsyncListener(IgniteFutureImpl fut) throws Interrupted /** * @throws Exception If failed. 
*/ + @Test public void testAsyncListenersOnError() throws Exception { checkAsyncListenerOnError(new IgniteException("Test exception")); checkAsyncListenerOnError(new IgniteCheckedException("Test checked exception")); @@ -385,6 +396,7 @@ private void checkAsyncListenerOnError(Exception err0, IgniteFutureImpl /** * @throws Exception If failed. */ + @Test public void testChain() throws Exception { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -426,6 +438,7 @@ public void testChain() throws Exception { /** * @throws Exception If failed. */ + @Test public void testChainError() throws Exception { { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -575,6 +588,7 @@ public void testChainError() throws Exception { /** * @throws Exception If failed. */ + @Test public void testChainAsync() throws Exception { GridFutureAdapter fut0 = new GridFutureAdapter<>(); @@ -647,6 +661,7 @@ private TestClosure(CountDownLatch latch) { /** * @throws Exception If failed. */ + @Test public void testChainAsyncOnError() throws Exception { checkChainedOnError(new IgniteException("Test exception")); checkChainedOnError(new IgniteCheckedException("Test checked exception")); @@ -805,4 +820,4 @@ protected IgniteFutureImpl createFuture(IgniteInternalFuture fut) { protected Class expectedException() { return IgniteException.class; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioEmbeddedFutureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioEmbeddedFutureSelfTest.java index b7b69662f3925..0b26d4f7c96f2 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioEmbeddedFutureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioEmbeddedFutureSelfTest.java @@ -20,16 +20,21 @@ import org.apache.ignite.internal.util.nio.GridNioEmbeddedFuture; import 
org.apache.ignite.internal.util.nio.GridNioFutureImpl; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; /** * Test for NIO embedded future. */ +@RunWith(JUnit4.class) public class GridNioEmbeddedFutureSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNioEmbeddedFuture() throws Exception { // Original future. final GridNioFutureImpl origFut = new GridNioFutureImpl<>(null); @@ -57,4 +62,4 @@ public void testNioEmbeddedFuture() throws Exception { // Wait for embedded future completes. assertEquals(new Integer(100), embFut.get(1, SECONDS)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioFutureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioFutureSelfTest.java index 44a1effe89f4c..86f19eb311eb5 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioFutureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/future/nio/GridNioFutureSelfTest.java @@ -30,14 +30,19 @@ import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for NIO future. */ +@RunWith(JUnit4.class) public class GridNioFutureSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testOnDone() throws Exception { GridNioFutureImpl fut = new GridNioFutureImpl<>(null); @@ -87,6 +92,7 @@ public void testOnDone() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testOnCancelled() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -114,6 +120,7 @@ public void testOnCancelled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testListenSyncNotify() throws Exception { GridNioFutureImpl fut = new GridNioFutureImpl<>(null); @@ -165,6 +172,7 @@ public void testListenSyncNotify() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGet() throws Exception { GridNioFutureImpl unfinished = new GridNioFutureImpl<>(null); GridNioFutureImpl finished = new GridNioFutureImpl<>(null); @@ -210,4 +218,4 @@ public void testGet() throws Exception { } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataInputOutputByteOrderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataInputOutputByteOrderSelfTest.java index f3ff7810c0f64..7f3c8b99f3c4b 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataInputOutputByteOrderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataInputOutputByteOrderSelfTest.java @@ -19,7 +19,9 @@ import java.io.ByteArrayInputStream; import java.util.Random; -import junit.framework.TestCase; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; import static org.apache.ignite.GridTestIoUtils.getCharByByteLE; import static org.apache.ignite.GridTestIoUtils.getDoubleByByteLE; @@ -28,11 +30,12 @@ import static org.apache.ignite.GridTestIoUtils.getLongByByteLE; import static org.apache.ignite.GridTestIoUtils.getShortByByteLE; import static org.junit.Assert.assertArrayEquals; +import static org.junit.Assert.assertEquals; /** * Grid unsafe data input/output byte order sanity tests. 
*/ -public class GridUnsafeDataInputOutputByteOrderSelfTest extends TestCase { +public class GridUnsafeDataInputOutputByteOrderSelfTest { /** Array length. */ private static final int ARR_LEN = 16; @@ -48,15 +51,17 @@ public class GridUnsafeDataInputOutputByteOrderSelfTest extends TestCase { /** In. */ private GridUnsafeDataInput in; - /** {@inheritDoc} */ - @Override protected void setUp() throws Exception { + /** */ + @Before + public void setUp() throws Exception { out = new GridUnsafeDataOutput(16 * 8+ LEN_BYTES); in = new GridUnsafeDataInput(); in.inputStream(new ByteArrayInputStream(out.internalArray())); } - /** {@inheritDoc} */ - @Override public void tearDown() throws Exception { + /** */ + @After + public void tearDown() throws Exception { in.close(); out.close(); } @@ -64,6 +69,7 @@ public class GridUnsafeDataInputOutputByteOrderSelfTest extends TestCase { /** * @throws Exception If failed. */ + @Test public void testShort() throws Exception { short val = (short)RND.nextLong(); @@ -76,6 +82,7 @@ public void testShort() throws Exception { /** * @throws Exception If failed. */ + @Test public void testShortArray() throws Exception { short[] arr = new short[ARR_LEN]; @@ -95,6 +102,7 @@ public void testShortArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testChar() throws Exception { char val = (char)RND.nextLong(); @@ -107,6 +115,7 @@ public void testChar() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCharArray() throws Exception { char[] arr = new char[ARR_LEN]; @@ -126,6 +135,7 @@ public void testCharArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInt() throws Exception { int val = RND.nextInt(); @@ -138,6 +148,7 @@ public void testInt() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIntArray() throws Exception { int[] arr = new int[ARR_LEN]; @@ -157,6 +168,7 @@ public void testIntArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLong() throws Exception { long val = RND.nextLong(); @@ -169,6 +181,7 @@ public void testLong() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLongArray() throws Exception { long[] arr = new long[ARR_LEN]; @@ -188,6 +201,7 @@ public void testLongArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloat() throws Exception { float val = RND.nextFloat(); @@ -200,6 +214,7 @@ public void testFloat() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFloatArray() throws Exception { float[] arr = new float[ARR_LEN]; @@ -219,6 +234,7 @@ public void testFloatArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDouble() throws Exception { double val = RND.nextDouble(); @@ -231,6 +247,7 @@ public void testDouble() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDoubleArray() throws Exception { double[] arr = new double[ARR_LEN]; @@ -246,4 +263,4 @@ public void testDoubleArray() throws Exception { assertArrayEquals(arr, in.readDoubleArray(), 0); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataOutputArraySizingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataOutputArraySizingSelfTest.java index 7e59a6a8250ba..42546204865dc 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataOutputArraySizingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/io/GridUnsafeDataOutputArraySizingSelfTest.java @@ -20,12 +20,16 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MARSHAL_BUFFERS_RECHECK; /** * Test for {@link GridUnsafeDataOutput}. */ +@RunWith(JUnit4.class) public class GridUnsafeDataOutputArraySizingSelfTest extends GridCommonAbstractTest { /** Small array. */ private static final byte[] SMALL = new byte[32]; @@ -49,7 +53,7 @@ public class GridUnsafeDataOutputArraySizingSelfTest extends GridCommonAbstractT /** * @throws Exception If failed. */ - @SuppressWarnings("BusyWait") + @Test public void testSmall() throws Exception { final GridUnsafeDataOutput out = new GridUnsafeDataOutput(512); @@ -63,6 +67,7 @@ public void testSmall() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBig() throws Exception { GridUnsafeDataOutput out = new GridUnsafeDataOutput(512); @@ -76,6 +81,7 @@ public void testBig() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("BusyWait") + @Test public void testChanged1() throws Exception { GridUnsafeDataOutput out = new GridUnsafeDataOutput(512); @@ -94,7 +100,7 @@ public void testChanged1() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("BusyWait") + @Test public void testChanged2() throws Exception { final GridUnsafeDataOutput out = new GridUnsafeDataOutput(512); @@ -148,4 +154,4 @@ private static class WriteAndCheckPredicate implements GridAbsPredicate { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryCrashDetectionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryCrashDetectionSelfTest.java index 0c5f5644e89b5..4ed193738c19e 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryCrashDetectionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryCrashDetectionSelfTest.java @@ -36,10 +36,14 @@ import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test shared memory endpoints crash detection. */ +@RunWith(JUnit4.class) public class IpcSharedMemoryCrashDetectionSelfTest extends GridCommonAbstractTest { /** Timeout in ms between read/write attempts in busy-wait loops. */ public static final int RW_SLEEP_TIMEOUT = 50; @@ -70,6 +74,7 @@ public class IpcSharedMemoryCrashDetectionSelfTest extends GridCommonAbstractTes /** * @throws Exception If failed. */ + @Test public void testIgfsServerClientInteractionsUponClientKilling() throws Exception { // Run server endpoint. 
IpcSharedMemoryServerEndpoint srv = new IpcSharedMemoryServerEndpoint(U.defaultWorkDirectory()); @@ -112,6 +117,7 @@ public void testIgfsServerClientInteractionsUponClientKilling() throws Exception /** * @throws Exception If failed. */ + @Test public void testIgfsClientServerInteractionsUponServerKilling() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-1386"); @@ -163,6 +169,7 @@ public void testIgfsClientServerInteractionsUponServerKilling() throws Exception /** * @throws Exception If failed. */ + @Test public void testClientThrowsCorrectExceptionUponServerKilling() throws Exception { info("Shared memory IDs before starting server-client interactions: " + IpcSharedMemoryUtils.sharedMemoryIds()); @@ -520,4 +527,4 @@ public void shmemIds(String shmemIds) { }); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryNativeLoaderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryNativeLoaderSelfTest.java index bca9401cf3a1c..57f5216016360 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryNativeLoaderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryNativeLoaderSelfTest.java @@ -21,21 +21,23 @@ import java.io.IOException; import java.io.InputStreamReader; import java.util.Collections; -import junit.framework.TestCase; import org.apache.ignite.internal.util.GridJavaProcess; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; /** * Test shared memory native loader. */ -public class IpcSharedMemoryNativeLoaderSelfTest extends TestCase { - +public class IpcSharedMemoryNativeLoaderSelfTest { /** * Test {@link IpcSharedMemoryNativeLoader#load()} in case, when native library path was * already loaded, but corrupted. * * @throws Exception If failed. 
*/ + @Test public void testLoadWithCorruptedLibFile() throws Exception { if (U.isWindows()) return; @@ -76,4 +78,4 @@ private void readStreams(Process proc) throws IOException { while ((s = errOut.readLine()) != null) System.out.println("ERR>>>>>> " + s); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemorySpaceSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemorySpaceSelfTest.java index a06d0203736db..d63921e76ce5c 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemorySpaceSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemorySpaceSelfTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IpcSharedMemorySpaceSelfTest extends GridCommonAbstractTest { /** */ public static final int DATA_LEN = 1024 * 1024; @@ -59,6 +63,7 @@ public class IpcSharedMemorySpaceSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testBasicOperations() throws Exception { File tokFile = new File(IgniteSystemProperties.getString("java.io.tmpdir"), UUID.randomUUID().toString()); @@ -155,6 +160,7 @@ public void testBasicOperations() throws Exception { /** * @throws Exception If failed. */ + @Test public void testForceClose() throws Exception { File tokFile = new File(IgniteSystemProperties.getString("java.io.tmpdir"), getTestIgniteInstanceName()); @@ -196,6 +202,7 @@ public void testForceClose() throws Exception { /** * @throws Exception If failed. 
      */
+    @Test
     public void testReadAfterClose() throws Exception {
         File tokFile = new File(IgniteSystemProperties.getString("java.io.tmpdir"), getTestIgniteInstanceName());
@@ -234,6 +241,7 @@ public void testReadAfterClose() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testWriteAfterClose() throws Exception {
         File tokFile = new File(IgniteSystemProperties.getString("java.io.tmpdir"), getTestIgniteInstanceName());
@@ -266,4 +274,4 @@ public void testWriteAfterClose() throws Exception {
 
         assert !tokFile.exists();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryUtilsSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryUtilsSelfTest.java
index 8b2aa413d3b25..1b031e45be35f 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryUtilsSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/ipc/shmem/IpcSharedMemoryUtilsSelfTest.java
@@ -21,10 +21,14 @@ import java.util.Collection;
 import org.apache.ignite.IgniteSystemProperties;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IpcSharedMemoryUtilsSelfTest extends GridCommonAbstractTest {
     /** {@inheritDoc} */
     @Override protected void beforeTestsStarted() throws Exception {
@@ -36,6 +40,7 @@ public class IpcSharedMemoryUtilsSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPid() throws Exception {
         int pid = IpcSharedMemoryUtils.pid();
@@ -50,6 +55,7 @@ public void testPid() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIdsGet() throws Exception {
         File tokFile = new File(IgniteSystemProperties.getString("java.io.tmpdir"), getTestIgniteInstanceName());
@@ -80,4 +86,4 @@ public void testIdsGet() throws Exception {
 
         assertFalse(IpcSharedMemoryUtils.sharedMemoryIds().contains(shmemId));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioDelimitedBufferSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioDelimitedBufferSelfTest.java
index 03b5169ec2fd7..5a57c0a133d26 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioDelimitedBufferSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioDelimitedBufferSelfTest.java
@@ -22,18 +22,21 @@ import java.util.Arrays;
 import java.util.List;
 import java.util.Random;
-import junit.framework.TestCase;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
 
 /**
  * Tests for {@link GridNioDelimitedBuffer}.
  */
-public class GridNioDelimitedBufferSelfTest extends TestCase {
+public class GridNioDelimitedBufferSelfTest {
     /** */
     private static final String ASCII = "ASCII";
 
     /**
      * Tests simple delimiter (excluded from alphabet)
      */
+    @Test
     public void testReadZString() throws Exception {
         Random rnd = new Random();
@@ -81,6 +84,7 @@ public void testReadZString() throws Exception {
     /**
      * Tests compound delimiter (included to alphabet)
      */
+    @Test
     public void testDelim() throws Exception {
         byte[] delim = "aabb".getBytes(ASCII);
@@ -111,4 +115,4 @@ public void testDelim() throws Exception {
 
         assertEquals(strs, res);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSelfTest.java
index e623467319a1c..d4065907db558 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSelfTest.java
@@ -52,12 +52,16 @@ import org.apache.ignite.marshaller.Marshaller;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static java.util.concurrent.TimeUnit.SECONDS;
 
 /**
  * Tests for new NIO server.
  */
+@RunWith(JUnit4.class)
 public class GridNioSelfTest extends GridCommonAbstractTest {
     /** Message count in test without reconnect. */
     private static final int MSG_CNT = 2000;
@@ -103,6 +107,7 @@ public class GridNioSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSimpleMessages() throws Exception {
         final Collection sesSet = new GridConcurrentHashSet<>();
@@ -159,6 +164,7 @@ public void testSimpleMessages() throws Exception {
      *
      * @throws Exception if failed.
      */
+    @Test
     public void testServerShutdown() throws Exception {
         GridNioServerListener lsnr = new GridNioServerListenerAdapter() {
             @Override public void onConnected(GridNioSession ses) {
@@ -214,6 +220,7 @@ public void testServerShutdown() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCorrectSocketClose() throws Exception {
         final AtomicReference err = new AtomicReference<>();
@@ -259,6 +266,7 @@ public void testCorrectSocketClose() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testThroughput() throws Exception {
         GridNioServerListener lsnr = new GridNioServerListenerAdapter() {
             @Override public void onConnected(GridNioSession ses) {
@@ -339,6 +347,7 @@ public void testThroughput() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCloseSession() throws Exception {
         final AtomicReference err = new AtomicReference<>();
@@ -414,6 +423,7 @@ public void testCloseSession() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSendAfterServerStop() throws Exception {
         final AtomicReference sesRef = new AtomicReference<>();
@@ -631,6 +641,7 @@ protected GridNioServer.Builder serverBuilder(int port,
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testSendReceive() throws Exception {
         CountDownLatch latch = new CountDownLatch(10);
@@ -665,6 +676,7 @@ public void testSendReceive() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testAsyncSendReceive() throws Exception {
         CountDownLatch latch = new CountDownLatch(10);
@@ -703,6 +715,7 @@ public void testAsyncSendReceive() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testMultiThreadedSendReceive() throws Exception {
         CountDownLatch latch = new CountDownLatch(MSG_CNT * THREAD_CNT);
@@ -749,6 +762,7 @@ public void testMultiThreadedSendReceive() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testConcurrentConnects() throws Exception {
         final CyclicBarrier barrier = new CyclicBarrier(THREAD_CNT);
@@ -846,6 +860,7 @@ public void testConcurrentConnects() throws Exception {
     /**
      * @throws Exception if test failed.
      */
+    @Test
     public void testDeliveryDuration() throws Exception {
         idProvider.set(1);
@@ -913,6 +928,7 @@ public void testDeliveryDuration() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSessionIdleTimeout() throws Exception {
         final int sesCnt = 20;
@@ -975,6 +991,7 @@ public void testSessionIdleTimeout() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testWriteTimeout() throws Exception {
         final int sesCnt = 20;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSessionMetaKeySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSessionMetaKeySelfTest.java
index 71950eec98a31..d825ae8c725e9 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSessionMetaKeySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSessionMetaKeySelfTest.java
@@ -22,14 +22,19 @@ import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link GridNioSessionMetaKey}.
  */
+@RunWith(JUnit4.class)
 public class GridNioSessionMetaKeySelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNextRandomKey() throws Exception {
         AtomicInteger keyGen = U.staticField(GridNioSessionMetaKey.class, "keyGen");
@@ -54,4 +59,4 @@ public void testNextRandomKey() throws Exception {
 
         keyGen.set(initVal);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSslSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSslSelfTest.java
index 1c4aa27d86281..034beb07d2682 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSslSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridNioSslSelfTest.java
@@ -24,10 +24,14 @@ import org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests for new NIO server with SSL enabled.
  */
+@RunWith(JUnit4.class)
 public class GridNioSslSelfTest extends GridNioSelfTest {
     /** Test SSL context. */
     private static SSLContext sslCtx;
@@ -75,13 +79,15 @@ public class GridNioSslSelfTest extends GridNioSelfTest {
     }
 
     /** {@inheritDoc} */
+    @Test
     @Override public void testWriteTimeout() throws Exception {
         // Skip base test because it enables "skipWrite" mode in the GridNioServer
         // which makes SSL handshake impossible.
     }
 
     /** {@inheritDoc} */
+    @Test
     @Override public void testAsyncSendReceive() throws Exception {
         // No-op, do not want to mess with SSL channel.
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridRoundTripTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridRoundTripTest.java
index d83fde6654e0e..91ae5a2e4de60 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridRoundTripTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/GridRoundTripTest.java
@@ -26,13 +26,13 @@ import java.net.ServerSocket;
 import java.net.Socket;
 import java.util.Random;
-import junit.framework.TestCase;
 import org.apache.ignite.internal.util.typedef.internal.U;
+import org.junit.Test;
 
 /**
  * Tests pure round trip time on network.
  */
-public class GridRoundTripTest extends TestCase {
+public class GridRoundTripTest {
     /** Communication port. */
     public static final int PORT = 47600;
@@ -43,6 +43,7 @@ public class GridRoundTripTest extends TestCase {
      * @throws IOException If error occurs.
      * @throws InterruptedException If interrupted
      */
+    @Test
     public void testRunServer() throws IOException, InterruptedException {
         final ServerSocket sock = new ServerSocket();
@@ -74,6 +75,7 @@ public void testRunServer() throws IOException, InterruptedException {
      * Runs client test
      */
     @SuppressWarnings("InfiniteLoopStatement")
+    @Test
     public void testRunClient() {
         Socket sock = new Socket();
@@ -228,4 +230,4 @@ private static byte[] createMessage(int len) {
 
         return res;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/IgniteExceptionInNioWorkerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/IgniteExceptionInNioWorkerSelfTest.java
index 9af3f8c11467a..731a190d45dfe 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/IgniteExceptionInNioWorkerSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/IgniteExceptionInNioWorkerSelfTest.java
@@ -24,21 +24,19 @@ import org.apache.ignite.internal.IgniteKernal;
 import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
 import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteExceptionInNioWorkerSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int GRID_CNT = 4;
 
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
@@ -49,10 +47,6 @@ public class IgniteExceptionInNioWorkerSelfTest extends GridCommonAbstractTest {
 
         cfg.setCacheConfiguration(ccfg);
 
-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-
-        discoSpi.setIpFinder(IP_FINDER);
-
         TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
 
         commSpi.setSharedMemoryPort(-1);
@@ -65,6 +59,7 @@ public class IgniteExceptionInNioWorkerSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBrokenMessage() throws Exception {
         startGrids(GRID_CNT);
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/impl/GridNioFilterChainSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/impl/GridNioFilterChainSelfTest.java
index 588857583fd79..7563b334ef21f 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/nio/impl/GridNioFilterChainSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/nio/impl/GridNioFilterChainSelfTest.java
@@ -34,10 +34,14 @@ import org.apache.ignite.lang.IgniteInClosure;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests filter chain event processing.
  */
+@RunWith(JUnit4.class)
 public class GridNioFilterChainSelfTest extends GridCommonAbstractTest {
     /** Session opened event meta name. */
     private static final int OPENED_META_NAME = 11;
@@ -66,6 +70,7 @@ public class GridNioFilterChainSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testChainEvents() throws Exception {
         final AtomicReference connectedEvt = new AtomicReference<>();
         final AtomicReference disconnectedEvt = new AtomicReference<>();
@@ -400,4 +405,4 @@ public MockNioSession(InetSocketAddress locAddr, InetSocketAddress rmtAddr) {
             // No-op.
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapAbstractSelfTest.java
index b0c6037ddb0fd..6c9518ff86044 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapAbstractSelfTest.java
@@ -33,10 +33,14 @@ import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import java.util.concurrent.ConcurrentHashMap;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests off-heap map.
  */
+@RunWith(JUnit4.class)
 public abstract class GridOffHeapMapAbstractSelfTest extends GridCommonAbstractTest {
     /** Random. */
     private static final Random RAND = new Random();
@@ -140,6 +144,7 @@ private int hash(int h) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInsert() throws Exception {
         map = newMap();
@@ -160,6 +165,7 @@ public void testInsert() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRehash() throws Exception {
         initCap = 10;
@@ -197,6 +203,7 @@ public void testRehash() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGet() throws Exception {
         map = newMap();
@@ -217,6 +224,7 @@ public void testGet() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPut1() throws Exception {
         map = newMap();
@@ -237,6 +245,7 @@ public void testPut1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPut2() throws Exception {
         map = newMap();
@@ -265,6 +274,7 @@ public void testPut2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRemove() throws Exception {
         map = newMap();
@@ -294,6 +304,7 @@ public void testRemove() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRemovex() throws Exception {
         map = newMap();
@@ -322,6 +333,7 @@ public void testRemovex() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIterator() throws Exception {
         initCap = 10;
@@ -394,6 +406,7 @@ public void testIterator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIteratorMultithreaded() throws Exception {
         initCap = 10;
@@ -468,6 +481,7 @@ public void testIteratorMultithreaded() throws Exception {
     /**
      *
      */
+    @Test
     public void testInsertLoad() {
         map = newMap();
@@ -502,6 +516,7 @@ public void testInsertLoad() {
     /**
      *
      */
+    @Test
     public void testPutLoad() {
         map = newMap();
@@ -537,6 +552,7 @@ public void testPutLoad() {
     /**
      *
      */
+    @Test
     public void testLru1() {
         lruStripes = 1;
         mem = 10;
@@ -579,6 +595,7 @@ public void testLru1() {
     /**
      *
      */
+    @Test
     public void testLru2() {
         mem = 1000 + 64 * 16; // Add segment size.
@@ -616,6 +633,7 @@ public void testLru2() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLruMultithreaded() throws Exception {
         mem = 1000 + 64 * 16; // Add segment size.
@@ -667,6 +685,7 @@ public void testLruMultithreaded() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIteratorAfterRehash() throws Exception {
         mem = 0;
         initCap = 10;
@@ -737,6 +756,7 @@ public void testIteratorAfterRehash() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultithreadedOps() throws Exception {
         mem = 1512; // Small enough for evictions.
@@ -824,4 +844,4 @@ public void testMultithreadedOps() throws Exception {
 
         assertEquals(zeroAllocated, map.allocatedSize());
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapPerformanceAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapPerformanceAbstractTest.java
index f7388e8399fb0..c1bef7185c508 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapPerformanceAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapMapPerformanceAbstractTest.java
@@ -23,10 +23,14 @@ import org.apache.ignite.internal.util.typedef.T3;
 import org.apache.ignite.internal.util.typedef.X;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests off-heap map.
  */
+@RunWith(JUnit4.class)
 public abstract class GridOffHeapMapPerformanceAbstractTest extends GridCommonAbstractTest {
     /** Random. */
     private static final Random RAND = new Random();
@@ -133,6 +137,7 @@ private String string() {
     /**
      * Test plain hash map.
      */
+    @Test
     public void testHashMapPutRemove() {
         Map map = new HashMap<>(LOAD_CNT);
@@ -187,6 +192,7 @@ public void testHashMapPutRemove() {
     /**
      *
      */
+    @Test
     public void testInsertRemoveLoad() {
         info("Starting insert performance test...");
@@ -240,6 +246,7 @@ public void testInsertRemoveLoad() {
     /**
      *
      */
+    @Test
     public void testPutRemoveLoad() {
         info("Starting put performance test...");
@@ -288,4 +295,4 @@ public void testPutRemoveLoad() {
             rmv = cnt % 3 == 0;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapAbstractSelfTest.java
index 9447970c23635..241c6542b5699 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapAbstractSelfTest.java
@@ -33,10 +33,14 @@ import org.apache.ignite.internal.util.lang.GridTuple;
 import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests off-heap map.
  */
+@RunWith(JUnit4.class)
 public abstract class GridOffHeapPartitionedMapAbstractSelfTest extends GridCommonAbstractTest {
     /** Random. */
     private static final Random RAND = new Random();
@@ -143,6 +147,7 @@ private byte[] bytes(int len) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInsert() throws Exception {
         map = newMap();
@@ -165,6 +170,7 @@ public void testInsert() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRehash() throws Exception {
         initCap = 10;
@@ -203,6 +209,7 @@ public void testRehash() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPointerAfterRehash() throws Exception {
         initCap = 10;
@@ -245,7 +252,7 @@ public void testPointerAfterRehash() throws Exception {
     /**
      * @throws Exception If failed.
      */
-    @SuppressWarnings("unchecked")
+    @Test
     public void testPutRandomKeys() throws Exception {
         map = newMap();
@@ -275,6 +282,7 @@ public void testPutRandomKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testGet() throws Exception {
         map = newMap();
@@ -297,6 +305,7 @@ public void testGet() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPut1() throws Exception {
         map = newMap();
@@ -319,6 +328,7 @@ public void testPut1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPut2() throws Exception {
         map = newMap();
@@ -349,6 +359,7 @@ public void testPut2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRemove() throws Exception {
         map = newMap();
@@ -380,6 +391,7 @@ public void testRemove() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRemovex() throws Exception {
         map = newMap();
@@ -410,6 +422,7 @@ public void testRemovex() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIterator() throws Exception {
         initCap = 10;
@@ -483,6 +496,7 @@ public void testIterator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIteratorMultithreaded() throws Exception {
         initCap = 10;
@@ -580,6 +594,7 @@ public void testIteratorMultithreaded() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIteratorRemoveMultithreaded() throws Exception {
         initCap = 10;
@@ -637,6 +652,7 @@ public void testIteratorRemoveMultithreaded() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPartitionIterator() throws Exception {
         initCap = 10;
@@ -711,6 +727,7 @@ public void testPartitionIterator() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPartitionIteratorMultithreaded() throws Exception {
         initCap = 10;
@@ -786,6 +803,7 @@ public void testPartitionIteratorMultithreaded() throws Exception {
     /**
      *
      */
+    @Test
     public void testInsertLoad() {
         mem = 0; // Disable LRU.
@@ -826,6 +844,7 @@ public void testInsertLoad() {
     /**
      *
      */
+    @Test
     public void testPutLoad() {
         mem = 0; // Disable LRU.
@@ -867,6 +886,7 @@ public void testPutLoad() {
     /**
      *
      */
+    @Test
     public void testLru1() {
         lruStripes = 1;
         mem = 10;
@@ -911,6 +931,7 @@ public void testLru1() {
     /**
      *
      */
+    @Test
     public void testLru2() {
         mem = 1000 + 64 * 16 * parts; // Add segment size.
@@ -950,6 +971,7 @@ public void testLru2() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLruMultithreaded() throws Exception {
         mem = 1000 + 64 * 16 * parts; // Add segment size.
@@ -1009,6 +1031,7 @@ public void testLruMultithreaded() throws Exception {
      *
      */
     @SuppressWarnings("TooBroadScope")
+    @Test
     public void testValuePointerEvict() {
         mem = 90;
@@ -1076,6 +1099,7 @@ public void testValuePointerEvict() {
      *
      */
     @SuppressWarnings("TooBroadScope")
+    @Test
     public void testValuePointerEnableEviction() {
         mem = 90;
@@ -1142,6 +1166,7 @@ public void testValuePointerEnableEviction() {
     /**
      *
      */
+    @Test
     public void testValuePointerRemove() {
         map = newMap();
@@ -1155,4 +1180,4 @@ public void testValuePointerRemove() {
 
         assertNull(map.valuePointer(1, k.hashCode(), k.getBytes()));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapPerformanceAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapPerformanceAbstractTest.java
index 86432fc0d4f36..0a504a64916f4 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapPerformanceAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/GridOffHeapPartitionedMapPerformanceAbstractTest.java
@@ -29,11 +29,15 @@ import org.apache.ignite.internal.util.typedef.T3;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Performance test for partitioned offheap hash map.
  */
 @SuppressWarnings({"unchecked", "NonThreadSafeLazyInitialization"})
+@RunWith(JUnit4.class)
 public abstract class GridOffHeapPartitionedMapPerformanceAbstractTest extends GridCommonAbstractTest {
     /** */
     protected static final int LOAD_CNT = 256;
@@ -108,6 +112,7 @@ protected GridOffHeapPartitionedMapPerformanceAbstractTest() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPuts() throws Exception {
         info("Warming up...");
@@ -121,6 +126,7 @@ public void testPuts() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutsConcurrentMap() throws Exception {
         info("Warming up...");
@@ -134,6 +140,7 @@ public void testPutsConcurrentMap() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutRemoves() throws Exception {
         info("Warming up...");
@@ -147,6 +154,7 @@ public void testPutRemoves() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPutRemovesConcurrentMap() throws Exception {
         info("Warming up...");
@@ -427,4 +435,4 @@ private T3 randomKey(Random rnd) {
     private GridByteArrayWrapper randomKeyWrapper(Random rnd) {
         return wrappers[rnd.nextInt(keys.length)];
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeMemorySelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeMemorySelfTest.java
index 47b0684f7ef07..d2be2de0ebe30 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeMemorySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeMemorySelfTest.java
@@ -23,12 +23,17 @@ import java.util.Collection;
 import org.apache.ignite.internal.util.GridUnsafe;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests unsafe memory.
  */
+@RunWith(JUnit4.class)
 public class GridUnsafeMemorySelfTest extends GridCommonAbstractTest {
     /** */
+    @Test
     public void testBuffers() {
         ByteBuffer b1 = GridUnsafe.allocateBuffer(10);
         ByteBuffer b2 = GridUnsafe.allocateBuffer(20);
@@ -64,6 +69,7 @@ public void testBuffers() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBytes() throws Exception {
         GridUnsafeMemory mem = new GridUnsafeMemory(64);
@@ -90,6 +96,7 @@ public void testBytes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testByte() throws Exception {
         GridUnsafeMemory mem = new GridUnsafeMemory(64);
@@ -120,6 +127,7 @@ public void testByte() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testShort() throws Exception {
         GridUnsafeMemory mem = new GridUnsafeMemory(64);
@@ -142,6 +150,7 @@ public void testShort() throws Exception {
     /**
      *
      */
+    @Test
     public void testFloat() {
         GridUnsafeMemory mem = new GridUnsafeMemory(64);
@@ -164,6 +173,7 @@ public void testFloat() {
     /**
      *
      */
+    @Test
     public void testDouble() {
         GridUnsafeMemory mem = new GridUnsafeMemory(64);
@@ -187,6 +197,7 @@ public void testDouble() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInt() throws Exception {
         GridUnsafeMemory mem = new GridUnsafeMemory(64);
@@ -223,6 +234,7 @@ public void testInt() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLong() throws Exception {
         GridUnsafeMemory mem = new GridUnsafeMemory(64);
@@ -258,6 +270,7 @@ public void testLong() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCompare1() throws Exception {
         checkCompare("123");
     }
@@ -265,6 +278,7 @@ public void testCompare1() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCompare2() throws Exception {
         checkCompare("1234567890");
     }
@@ -272,6 +286,7 @@ public void testCompare2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCompare3() throws Exception {
         checkCompare("12345678901234567890");
     }
@@ -310,6 +325,7 @@ public void checkCompare(String s) throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testOutOfMemory() throws Exception {
         int cap = 64;
         int block = 9;
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeStripedLruSefTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeStripedLruSefTest.java
index 76ccceb56915b..57b7778a6e859 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeStripedLruSefTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/offheap/unsafe/GridUnsafeStripedLruSefTest.java
@@ -21,11 +21,15 @@ import java.util.HashSet;
 import java.util.concurrent.atomic.AtomicInteger;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Striped LRU test.
  */
 @SuppressWarnings("FieldCanBeLocal")
+@RunWith(JUnit4.class)
 public class GridUnsafeStripedLruSefTest extends GridCommonAbstractTest {
     /** Number of stripes. */
     private short stripes = 1;
@@ -65,6 +69,7 @@ private void init() {
     /**
      *
      */
+    @Test
     public void testOffer1() {
         checkOffer(1000);
     }
@@ -72,6 +77,7 @@ public void testOffer1() {
     /**
      *
      */
+    @Test
     public void testOffer2() {
         stripes = 11;
@@ -95,6 +101,7 @@ private void checkOffer(int cnt) {
     /**
      *
      */
+    @Test
     public void testRemove1() {
         checkRemove(1000);
     }
@@ -102,6 +109,7 @@ public void testRemove1() {
     /**
      *
      */
+    @Test
     public void testRemove2() {
         stripes = 35;
@@ -130,6 +138,7 @@ private void checkRemove(int cnt) {
     /**
      *
      */
+    @Test
     public void testPoll1() {
         checkPoll(1000);
     }
@@ -137,6 +146,7 @@ public void testPoll1() {
     /**
      *
      */
+    @Test
     public void testPoll2() {
         stripes = 20;
@@ -180,6 +190,7 @@ private void checkPoll(int cnt) {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLruMultithreaded() throws Exception {
         checkLruMultithreaded(1000000);
     }
@@ -226,4 +237,4 @@ private void checkLruMultithreaded(final int cnt) throws Exception {
 
         assertEquals(0, lru.size());
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/CircularStringBuilderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/CircularStringBuilderSelfTest.java
index b927863d66c46..c90cec179e94d 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/CircularStringBuilderSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/CircularStringBuilderSelfTest.java
@@ -19,15 +19,20 @@
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
 @GridCommonTest(group = "Utils")
+@RunWith(JUnit4.class)
 public class CircularStringBuilderSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCSBPrimitive() throws Exception {
         CircularStringBuilder csb = new CircularStringBuilder(1);
         csb.append((String)null);
@@ -43,6 +48,7 @@ public void testCSBPrimitive() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCSBOverflow() throws Exception {
         testSB(3, "1234", 2, "234");
         testSB(4, "1234", 2, "1234");
diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/GridToStringBuilderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/GridToStringBuilderSelfTest.java
index d249914ccb3bc..d73dae4399e69 100644
--- a/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/GridToStringBuilderSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/GridToStringBuilderSelfTest.java
@@ -20,35 +20,34 @@ import java.lang.reflect.Array;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.TreeMap;
 import java.util.UUID;
-import java.util.concurrent.Callable;
-import java.util.concurrent.CyclicBarrier;
 import java.util.concurrent.locks.ReadWriteLock;
 import org.apache.ignite.IgniteLogger;
 import org.apache.ignite.IgniteSystemProperties;
-import org.apache.ignite.internal.IgniteInternalFuture;
-import org.apache.ignite.internal.processors.cache.KeyCacheObjectImpl;
-import org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey;
+import static org.apache.ignite.IgniteSystemProperties.IGNITE_TO_STRING_COLLECTION_LIMIT;
+import static org.apache.ignite.IgniteSystemProperties.IGNITE_TO_STRING_MAX_LENGTH;
 import org.apache.ignite.internal.util.typedef.internal.S;
-import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
-import static org.apache.ignite.IgniteSystemProperties.IGNITE_TO_STRING_COLLECTION_LIMIT;
-import static org.apache.ignite.IgniteSystemProperties.IGNITE_TO_STRING_MAX_LENGTH;
 
 /**
  * Tests for {@link GridToStringBuilder}.
  */
 @GridCommonTest(group = "Utils")
+@RunWith(JUnit4.class)
 public class GridToStringBuilderSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testToString() throws Exception {
         TestClass1 obj = new TestClass1();
@@ -63,6 +62,7 @@ public void testToString() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testToStringWithAdditions() throws Exception {
         TestClass1 obj = new TestClass1();
@@ -80,6 +80,7 @@ public void testToStringWithAdditions() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testToStringCheckSimpleListRecursionPrevention() throws Exception {
         ArrayList list1 = new ArrayList<>();
         ArrayList list2 = new ArrayList<>();
@@ -87,162 +88,32 @@ public void testToStringCheckSimpleListRecursionPrevention() throws Exception {
         list2.add(list1);
         list1.add(list2);
 
-        info(GridToStringBuilder.toString(ArrayList.class, list1));
-        info(GridToStringBuilder.toString(ArrayList.class, list2));
-    }
-
-    /**
-     * @throws Exception If failed.
-     */
-    public void testToStringCheckSimpleMapRecursionPrevention() throws Exception {
-        HashMap map1 = new HashMap<>();
-        HashMap map2 = new HashMap<>();
-
-        map1.put("2", map2);
-        map2.put("1", map1);
-
-        info(GridToStringBuilder.toString(HashMap.class, map1));
-        info(GridToStringBuilder.toString(HashMap.class, map2));
+        GridToStringBuilder.toString(ArrayList.class, list1);
+        GridToStringBuilder.toString(ArrayList.class, list2);
     }
 
     /**
      * @throws Exception If failed.
      */
-    public void testToStringCheckListAdvancedRecursionPrevention() throws Exception {
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-602")
+    @Test
+    public void testToStringCheckAdvancedRecursionPrevention() throws Exception {
+        ArrayList list1 = new ArrayList<>();
         ArrayList list2 = new ArrayList<>();
 
         list2.add(list1);
         list1.add(list2);
 
-        info(GridToStringBuilder.toString(ArrayList.class, list1, "name", list2));
-        info(GridToStringBuilder.toString(ArrayList.class, list2, "name", list1));
-    }
-
-    /**
-     * @throws Exception If failed.
-     */
-    public void testToStringCheckMapAdvancedRecursionPrevention() throws Exception {
-        HashMap map1 = new HashMap<>();
-        HashMap map2 = new HashMap<>();
-
-        map1.put("2", map2);
-        map2.put("1", map1);
-
-        info(GridToStringBuilder.toString(HashMap.class, map1, "name", map2));
-        info(GridToStringBuilder.toString(HashMap.class, map2, "name", map1));
-    }
-
-    /**
-     * @throws Exception If failed.
-     */
-    public void testToStringCheckObjectRecursionPrevention() throws Exception {
-        Node n1 = new Node();
-        Node n2 = new Node();
-        Node n3 = new Node();
-        Node n4 = new Node();
-
-        n1.name = "n1";
-        n2.name = "n2";
-        n3.name = "n3";
-        n4.name = "n4";
-
-        n1.next = n2;
-        n2.next = n3;
-        n3.next = n4;
-        n4.next = n3;
-
-        n1.nodes = new Node[4];
-        n2.nodes = n1.nodes;
-        n3.nodes = n1.nodes;
-        n4.nodes = n1.nodes;
-
-        n1.nodes[0] = n1;
-        n1.nodes[1] = n2;
-        n1.nodes[2] = n3;
-        n1.nodes[3] = n4;
-
-        String expN1 = n1.toString();
-        String expN2 = n2.toString();
-        String expN3 = n3.toString();
-        String expN4 = n4.toString();
-
-        info(expN1);
-        info(expN2);
-        info(expN3);
-        info(expN4);
-
-        info(GridToStringBuilder.toString("Test", "Appended vals", n1));
-
-        CyclicBarrier bar = new CyclicBarrier(4);
-
-        IgniteInternalFuture fut1 = GridTestUtils.runAsync(new BarrierCallable(bar, n1, expN1));
-        IgniteInternalFuture fut2 = GridTestUtils.runAsync(new BarrierCallable(bar, n2, expN2));
-        IgniteInternalFuture fut3 = GridTestUtils.runAsync(new
BarrierCallable(bar, n3, expN3)); - IgniteInternalFuture fut4 = GridTestUtils.runAsync(new BarrierCallable(bar, n4, expN4)); - - fut1.get(3_000); - fut2.get(3_000); - fut3.get(3_000); - fut4.get(3_000); - } - - /** - * Test class. - */ - private static class Node { - /** */ - @GridToStringInclude - String name; - - /** */ - @GridToStringInclude - Node next; - - /** */ - @GridToStringInclude - Node[] nodes; - - /** {@inheritDoc} */ - @Override public String toString() { - return GridToStringBuilder.toString(Node.class, this); - } - } - - /** - * Test class. - */ - private static class BarrierCallable implements Callable { - /** */ - CyclicBarrier bar; - - /** */ - Object obj; - - /** Expected value of {@code toString()} method. */ - String exp; - - /** */ - private BarrierCallable(CyclicBarrier bar, Object obj, String exp) { - this.bar = bar; - this.obj = obj; - this.exp = exp; - } - - /** {@inheritDoc} */ - @Override public String call() throws Exception { - for (int i = 0; i < 10; i++) { - bar.await(); - - assertEquals(exp, obj.toString()); - } - - return null; - } + GridToStringBuilder.toString(ArrayList.class, list1, "name", list2); + GridToStringBuilder.toString(ArrayList.class, list2, "name", list1); } /** * JUnit. */ + @Test public void testToStringPerformance() { TestClass1 obj = new TestClass1(); @@ -277,45 +148,14 @@ private void testArr(V v, int limit) throws Exception { Arrays.fill(arrOf, v); T[] arr = Arrays.copyOf(arrOf, limit); - checkArrayOverflow(arrOf, arr, limit); - } - - /** - * Test array print. - * - * @throws Exception if failed. - */ - public void testArrLimitWithRecursion() throws Exception { - int limit = IgniteSystemProperties.getInteger(IGNITE_TO_STRING_COLLECTION_LIMIT, 100); - - ArrayList[] arrOf = new ArrayList[limit + 1]; - Arrays.fill(arrOf, new ArrayList()); - ArrayList[] arr = Arrays.copyOf(arrOf, limit); - - arrOf[0].add(arrOf); - arr[0].add(arr); - - checkArrayOverflow(arrOf, arr, limit); - } - - /** - * @param arrOf Array. 
- * @param arr Array copy. - * @param limit Array limit. - */ - private void checkArrayOverflow(Object[] arrOf, Object[] arr, int limit) { - String arrStr = GridToStringBuilder.arrayToString(arr); - String arrOfStr = GridToStringBuilder.arrayToString(arrOf); + String arrStr = GridToStringBuilder.arrayToString(arr.getClass(), arr); + String arrOfStr = GridToStringBuilder.arrayToString(arrOf.getClass(), arrOf); // Simulate overflow StringBuilder resultSB = new StringBuilder(arrStr); resultSB.deleteCharAt(resultSB.length()-1); resultSB.append("... and ").append(arrOf.length - limit).append(" more]"); - - arrStr = resultSB.toString(); - - info(arrOfStr); - info(arrStr); + arrStr = resultSB.toString(); assertTrue("Collection limit error in array of type " + arrOf.getClass().getName() + " error, normal arr: <" + arrStr + ">, overflowed arr: <" + arrOfStr + ">", arrStr.equals(arrOfStr)); @@ -324,6 +164,7 @@ private void checkArrayOverflow(Object[] arrOf, Object[] arr, int limit) { /** * @throws Exception If failed. 
*/ + @Test public void testToStringCollectionLimits() throws Exception { int limit = IgniteSystemProperties.getInteger(IGNITE_TO_STRING_COLLECTION_LIMIT, 100); @@ -332,190 +173,113 @@ public void testToStringCollectionLimits() throws Exception { for (Object val : vals) testArr(val, limit); - int[] intArr1 = new int[0]; - - assertEquals("[]", GridToStringBuilder.arrayToString(intArr1)); - assertEquals("null", GridToStringBuilder.arrayToString(null)); - - int[] intArr2 = {1, 2, 3}; - - assertEquals("[1, 2, 3]", GridToStringBuilder.arrayToString(intArr2)); - - Object[] intArr3 = {2, 3, 4}; - - assertEquals("[2, 3, 4]", GridToStringBuilder.arrayToString(intArr3)); - byte[] byteArr = new byte[1]; - byteArr[0] = 1; - assertEquals(Arrays.toString(byteArr), GridToStringBuilder.arrayToString(byteArr)); + assertEquals(Arrays.toString(byteArr), GridToStringBuilder.arrayToString(byteArr.getClass(), byteArr)); byteArr = Arrays.copyOf(byteArr, 101); assertTrue("Can't find \"... and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(byteArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(byteArr.getClass(), byteArr).contains("... and 1 more")); boolean[] boolArr = new boolean[1]; - boolArr[0] = true; - assertEquals(Arrays.toString(boolArr), GridToStringBuilder.arrayToString(boolArr)); + assertEquals(Arrays.toString(boolArr), GridToStringBuilder.arrayToString(boolArr.getClass(), boolArr)); boolArr = Arrays.copyOf(boolArr, 101); assertTrue("Can't find \"... and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(boolArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(boolArr.getClass(), boolArr).contains("... 
and 1 more")); short[] shortArr = new short[1]; - shortArr[0] = 100; - assertEquals(Arrays.toString(shortArr), GridToStringBuilder.arrayToString(shortArr)); + assertEquals(Arrays.toString(shortArr), GridToStringBuilder.arrayToString(shortArr.getClass(), shortArr)); shortArr = Arrays.copyOf(shortArr, 101); assertTrue("Can't find \"... and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(shortArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(shortArr.getClass(), shortArr).contains("... and 1 more")); int[] intArr = new int[1]; - intArr[0] = 10000; - assertEquals(Arrays.toString(intArr), GridToStringBuilder.arrayToString(intArr)); + assertEquals(Arrays.toString(intArr), GridToStringBuilder.arrayToString(intArr.getClass(), intArr)); intArr = Arrays.copyOf(intArr, 101); assertTrue("Can't find \"... and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(intArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(intArr.getClass(), intArr).contains("... and 1 more")); long[] longArr = new long[1]; - longArr[0] = 10000000; - assertEquals(Arrays.toString(longArr), GridToStringBuilder.arrayToString(longArr)); + assertEquals(Arrays.toString(longArr), GridToStringBuilder.arrayToString(longArr.getClass(), longArr)); longArr = Arrays.copyOf(longArr, 101); assertTrue("Can't find \"... and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(longArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(longArr.getClass(), longArr).contains("... and 1 more")); float[] floatArr = new float[1]; - floatArr[0] = 1.f; - assertEquals(Arrays.toString(floatArr), GridToStringBuilder.arrayToString(floatArr)); + assertEquals(Arrays.toString(floatArr), GridToStringBuilder.arrayToString(floatArr.getClass(), floatArr)); floatArr = Arrays.copyOf(floatArr, 101); assertTrue("Can't find \"... 
and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(floatArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(floatArr.getClass(), floatArr).contains("... and 1 more")); double[] doubleArr = new double[1]; - doubleArr[0] = 1.; - assertEquals(Arrays.toString(doubleArr), GridToStringBuilder.arrayToString(doubleArr)); + assertEquals(Arrays.toString(doubleArr), GridToStringBuilder.arrayToString(doubleArr.getClass(), doubleArr)); doubleArr = Arrays.copyOf(doubleArr, 101); assertTrue("Can't find \"... and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(doubleArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(doubleArr.getClass(), doubleArr).contains("... and 1 more")); - char[] cArr = new char[1]; - - cArr[0] = 'a'; - assertEquals(Arrays.toString(cArr), GridToStringBuilder.arrayToString(cArr)); - cArr = Arrays.copyOf(cArr, 101); + char[] charArr = new char[1]; + charArr[0] = 'a'; + assertEquals(Arrays.toString(charArr), GridToStringBuilder.arrayToString(charArr.getClass(), charArr)); + charArr = Arrays.copyOf(charArr, 101); assertTrue("Can't find \"... and 1 more\" in overflowed array string!", - GridToStringBuilder.arrayToString(cArr).contains("... and 1 more")); + GridToStringBuilder.arrayToString(charArr.getClass(), charArr).contains("... and 1 more")); Map strMap = new TreeMap<>(); List strList = new ArrayList<>(limit+1); - TestClass1 testCls = new TestClass1(); - - testCls.strMap = strMap; - testCls.strListIncl = strList; - - for (int i = 0; i < limit; i++) { - strMap.put("k" + i, "v"); - strList.add("e"); - } - - checkColAndMap(testCls); - } - - /** - * @throws Exception If failed. 
- */ - public void testToStringColAndMapLimitWithRecursion() throws Exception { - int limit = IgniteSystemProperties.getInteger(IGNITE_TO_STRING_COLLECTION_LIMIT, 100); - Map strMap = new TreeMap<>(); - List strList = new ArrayList<>(limit+1); - TestClass1 testClass = new TestClass1(); testClass.strMap = strMap; testClass.strListIncl = strList; - Map m = new TreeMap(); - m.put("m", strMap); - - List l = new ArrayList(); - l.add(strList); - - strMap.put("k0", m); - strList.add(l); - - for (int i = 1; i < limit; i++) { + for (int i = 0; i < limit; i++) { strMap.put("k" + i, "v"); strList.add("e"); } + String testClassStr = GridToStringBuilder.toString(TestClass1.class, testClass); - checkColAndMap(testClass); - } - - /** - * @param testCls Class with collection and map included in toString(). - */ - private void checkColAndMap(TestClass1 testCls) { - String testClsStr = GridToStringBuilder.toString(TestClass1.class, testCls); + strMap.put("kz", "v"); // important to add last element in TreeMap here + strList.add("e"); - testCls.strMap.put("kz", "v"); // important to add last element in TreeMap here - testCls.strListIncl.add("e"); + String testClassStrOf = GridToStringBuilder.toString(TestClass1.class, testClass); - String testClsStrOf = GridToStringBuilder.toString(TestClass1.class, testCls); + String testClassStrOfR = testClassStrOf.replaceAll("... and 1 more",""); - String testClsStrOfR = testClsStrOf.replaceAll("... and 1 more",""); + assertTrue("Collection limit error in Map or List, normal: <" + testClassStr + ">, overflowed: <" + testClassStrOf + ">", testClassStr.length() == testClassStrOfR.length()); - info(testClsStr); - info(testClsStrOf); - info(testClsStrOfR); - - assertTrue("Collection limit error in Map or List, normal: <" + testClsStr + ">, overflowed: <" - + testClsStrOf + ">", testClsStr.length() == testClsStrOfR.length()); } /** * @throws Exception If failed. 
*/ + @Test public void testToStringSizeLimits() throws Exception { int limit = IgniteSystemProperties.getInteger(IGNITE_TO_STRING_MAX_LENGTH, 10_000); int tailLen = limit / 10 * 2; - StringBuilder sb = new StringBuilder(limit + 10); - - for (int i = 0; i < limit - 100; i++) + for (int i = 0; i < limit - 100; i++) { sb.append('a'); - + } String actual = GridToStringBuilder.toString(TestClass2.class, new TestClass2(sb.toString())); - String exp = "TestClass2 [str=" + sb + ", nullArr=null]"; - - assertEquals(exp, actual); + String expected = "TestClass2 [str=" + sb.toString() + ", nullArr=null]"; + assertEquals(expected, actual); - for (int i = 0; i < 110; i++) + for (int i = 0; i < 110; i++) { sb.append('b'); - + } actual = GridToStringBuilder.toString(TestClass2.class, new TestClass2(sb.toString())); - exp = "TestClass2 [str=" + sb + ", nullArr=null]"; - - assertEquals(exp.substring(0, limit - tailLen), actual.substring(0, limit - tailLen)); - assertEquals(exp.substring(exp.length() - tailLen), actual.substring(actual.length() - tailLen)); - + expected = "TestClass2 [str=" + sb.toString() + ", nullArr=null]"; + assertEquals(expected.substring(0, limit - tailLen), actual.substring(0, limit - tailLen)); + assertEquals(expected.substring(expected.length() - tailLen), actual.substring(actual.length() - tailLen)); assertTrue(actual.contains("... and")); assertTrue(actual.contains("skipped ...")); } - /** - * - */ - public void testObjectPlusStringToString() { - IgniteTxKey k = new IgniteTxKey(new KeyCacheObjectImpl(1, null, 1), 123); - - info(k.toString()); - - assertTrue("Wrong string: " + k, k.toString().startsWith("IgniteTxKey [")); - } - /** * Test class. */ @@ -647,8 +411,8 @@ private static class TestClass2{ /** * @param str String. 
*/ - TestClass2(String str) { + public TestClass2(String str) { this.str = str; } } -} +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/IncludeSensitiveAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/IncludeSensitiveAbstractTest.java index 3907e85400ad8..adacd772fba20 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/IncludeSensitiveAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/internal/util/tostring/IncludeSensitiveAbstractTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.SensitiveInfoTestLoggerProxy; import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for property {@link IgniteSystemProperties#IGNITE_TO_STRING_INCLUDE_SENSITIVE}. */ +@RunWith(JUnit4.class) public abstract class IncludeSensitiveAbstractTest extends GridCacheAbstractSelfTest { /** Number of test entries */ private static final int ENTRY_CNT = 10; @@ -96,6 +100,7 @@ protected void commitTx() { * * @throws Exception If failed. 
*/ + @Test public void test() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/FileIOTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/FileIOTest.java index eb1f8d5c487ef..0d048023e5df2 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/FileIOTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/FileIOTest.java @@ -21,14 +21,14 @@ import java.io.RandomAccessFile; import java.util.Arrays; import java.util.UUID; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; /** * Java file IO test. */ -public class FileIOTest extends TestCase { +public class FileIOTest { /** File path. */ private static final String FILE_PATH = "/test-java-file.tmp"; @@ -38,6 +38,7 @@ public class FileIOTest extends TestCase { /** * @throws Exception If failed. */ + @Test public void testReadLineFromBinaryFile() throws Exception { File file = new File(FILE_PATH); @@ -79,6 +80,7 @@ public void testReadLineFromBinaryFile() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultipleFilesCreation() throws Exception { File parent = new File(TMP_DIR, "testMultipleFilesCreation"); @@ -127,6 +129,7 @@ public void testMultipleFilesCreation() throws Exception { /** * */ + @Test public void testGetAbsolutePath() { for (int i = 0; i < 1000000; i++) { new File("/" + UUID.randomUUID().toString()).getAbsolutePath(); @@ -136,4 +139,4 @@ public void testGetAbsolutePath() { new File("/Users").getAbsolutePath(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/FileLocksTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/FileLocksTest.java index b376f0bae009a..b70e19868e9da 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/FileLocksTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/FileLocksTest.java @@ -21,18 +21,19 @@ import java.io.RandomAccessFile; import java.nio.channels.FileLock; import javax.swing.JOptionPane; -import junit.framework.TestCase; +import org.junit.Test; /** * Java file locks test. */ -public class FileLocksTest extends TestCase { +public class FileLocksTest { /** File path (on Windows file will be created under the root directory of the current drive). */ private static final String LOCK_FILE_PATH = "/test-java-file-lock-tmp.bin"; /** * @throws Exception If failed. */ + @Test public void testWriteLocks() throws Exception { final File file = new File(LOCK_FILE_PATH); @@ -77,6 +78,7 @@ public void testWriteLocks() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReadLocks() throws Exception { final File file = new File(LOCK_FILE_PATH); @@ -118,4 +120,4 @@ public void testReadLocks() throws Exception { thread.join(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/LinkedHashMapTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/LinkedHashMapTest.java index 536784e341def..f4fd673be1130 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/LinkedHashMapTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/LinkedHashMapTest.java @@ -19,14 +19,15 @@ import java.util.LinkedHashMap; import java.util.Map; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.X; +import org.junit.Test; /** * Test for {@link LinkedHashMap}. */ -public class LinkedHashMapTest extends TestCase { +public class LinkedHashMapTest { /** @throws Exception If failed. */ + @Test public void testAccessOrder1() throws Exception { X.println(">>> testAccessOrder1 <<<"); @@ -52,6 +53,7 @@ public void testAccessOrder1() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAccessOrder2() throws Exception { X.println(">>> testAccessOrder2 <<<"); @@ -69,6 +71,7 @@ public void testAccessOrder2() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testAccessOrder3() throws Exception { X.println(">>> testAccessOrder3 <<<"); @@ -84,4 +87,4 @@ public void testAccessOrder3() throws Exception { X.println("State after get: " + map); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/NetworkFailureTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/NetworkFailureTest.java index c331ce97e781d..3a5e4da39c29f 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/NetworkFailureTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/NetworkFailureTest.java @@ -26,20 +26,21 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; import javax.swing.JOptionPane; -import junit.framework.TestCase; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.Nullable; +import org.junit.Test; /** * */ -public class NetworkFailureTest extends TestCase { +public class NetworkFailureTest { /** * @throws Exception If failed. */ + @Test public void testNetworkFailure() throws Exception { final AtomicBoolean done = new AtomicBoolean(); @@ -141,6 +142,7 @@ public void testNetworkFailure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadTimeout() throws Exception { final InetAddress addr = InetAddress.getByName("192.168.3.10"); @@ -223,6 +225,7 @@ public void testReadTimeout() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSocketCloseOnTimeout() throws Exception { final AtomicBoolean done = new AtomicBoolean(); @@ -323,6 +326,7 @@ public void testSocketCloseOnTimeout() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConnectionTime() throws Exception { X.println("Unexistent host."); checkConnection(InetAddress.getByName("192.168.0.222")); @@ -370,4 +374,4 @@ private Socket openSocket(InetAddress addr, int port) throws IOException { return sock; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/QueueSizeCounterMultiThreadedTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/QueueSizeCounterMultiThreadedTest.java index bf3557f9e0f67..3349e804cccd0 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/QueueSizeCounterMultiThreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/QueueSizeCounterMultiThreadedTest.java @@ -23,20 +23,20 @@ import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; -import junit.framework.TestCase; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.Nullable; +import org.junit.Test; /** * Test to check strange assertion in eviction manager. */ -public class QueueSizeCounterMultiThreadedTest extends TestCase { +public class QueueSizeCounterMultiThreadedTest { /** * @throws Exception If failed. 
*/ - @SuppressWarnings({"LockAcquiredButNotSafelyReleased"}) + @Test public void testQueueSizeCounter() throws Exception { final ConcurrentLinkedQueue q = new ConcurrentLinkedQueue<>(); @@ -50,7 +50,6 @@ public void testQueueSizeCounter() throws Exception { IgniteInternalFuture fut1 = GridTestUtils.runMultiThreadedAsync( new Callable() { - @SuppressWarnings( {"BusyWait"}) @Nullable @Override public Object call() throws Exception { int cleanUps = 0; @@ -101,4 +100,4 @@ public void testQueueSizeCounter() throws Exception { fut1.get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/ReadWriteLockMultiThreadedTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/ReadWriteLockMultiThreadedTest.java index 666a5a1941561..d57dcc860dff5 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/ReadWriteLockMultiThreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/ReadWriteLockMultiThreadedTest.java @@ -20,20 +20,21 @@ import java.util.concurrent.Callable; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; -import junit.framework.TestCase; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.Nullable; +import org.junit.Test; /** * JDK read write lock test. */ -public class ReadWriteLockMultiThreadedTest extends TestCase { +public class ReadWriteLockMultiThreadedTest { /** * @throws Exception If failed. 
*/ @SuppressWarnings({"LockAcquiredButNotSafelyReleased"}) + @Test public void testReadThenWriteLockAcquire() throws Exception { ReadWriteLock lock = new ReentrantReadWriteLock(); @@ -45,6 +46,7 @@ public void testReadThenWriteLockAcquire() throws Exception { /** * */ + @Test public void testNotOwnedLockRelease() { ReadWriteLock lock = new ReentrantReadWriteLock(); @@ -55,6 +57,7 @@ public void testNotOwnedLockRelease() { * @throws Exception If failed. */ @SuppressWarnings({"LockAcquiredButNotSafelyReleased"}) + @Test public void testWriteLockAcquire() throws Exception { final ReadWriteLock lock = new ReentrantReadWriteLock(); @@ -124,6 +127,7 @@ public void testWriteLockAcquire() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"LockAcquiredButNotSafelyReleased"}) + @Test public void testReadLockAcquire() throws Exception { final ReadWriteLock lock = new ReentrantReadWriteLock(); @@ -169,6 +173,7 @@ public void testReadLockAcquire() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"LockAcquiredButNotSafelyReleased"}) + @Test public void testTryWriteLock() throws Exception { final ReadWriteLock lock = new ReentrantReadWriteLock(); @@ -204,4 +209,4 @@ public void testTryWriteLock() throws Exception { fut.get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/RegExpTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/RegExpTest.java index 9b8c4bd11949f..6add401663bc0 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/RegExpTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/RegExpTest.java @@ -19,17 +19,18 @@ import java.util.regex.Matcher; import java.util.regex.Pattern; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; /** * Java reg exp test. 
*/ -public class RegExpTest extends TestCase { +public class RegExpTest { /** * @throws Exception If failed. */ + @Test public void testRegExp() throws Exception { String normal = "swap-spaces/space1/b53b3a3d6ab90ce0268229151c9bde11|b53b3a3d6ab90ce0268229151c9bde11|1315392441288"; @@ -53,4 +54,4 @@ public void testRegExp() throws Exception { assert normal.matches(ptrn); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/jvmtest/ServerSocketMultiThreadedTest.java b/modules/core/src/test/java/org/apache/ignite/jvmtest/ServerSocketMultiThreadedTest.java index f6fe7dbb49d1b..e677ea7f8d15b 100644 --- a/modules/core/src/test/java/org/apache/ignite/jvmtest/ServerSocketMultiThreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/jvmtest/ServerSocketMultiThreadedTest.java @@ -24,11 +24,11 @@ import java.util.concurrent.Callable; import java.util.concurrent.CyclicBarrier; import java.util.concurrent.atomic.AtomicInteger; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.Nullable; +import org.junit.Test; /** * Java server socket test. @@ -37,7 +37,7 @@ * BindException or SocketException may be thrown. Purpose of this test is * to find some explanation to that. */ -public class ServerSocketMultiThreadedTest extends TestCase { +public class ServerSocketMultiThreadedTest { /** */ private static final int THREADS_CNT = 10; @@ -47,6 +47,7 @@ public class ServerSocketMultiThreadedTest extends TestCase { /** * @throws Exception If failed. 
*/ + @Test public void testConcurrentBind() throws Exception { final AtomicInteger bindExCnt = new AtomicInteger(); final AtomicInteger sockExCnt = new AtomicInteger(); @@ -100,4 +101,4 @@ public void testConcurrentBind() throws Exception { X.println("Test stats [bindExCnt=" + bindExCnt.get() + ", sockExCnt=" + sockExCnt.get() + ", okCnt=" + okCnt + ']'); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/lang/GridByteArrayListSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/GridByteArrayListSelfTest.java index aad28ea828a58..f99277ce845cf 100644 --- a/modules/core/src/test/java/org/apache/ignite/lang/GridByteArrayListSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/lang/GridByteArrayListSelfTest.java @@ -25,15 +25,20 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridCommonTest(group = "Lang") +@RunWith(JUnit4.class) public class GridByteArrayListSelfTest extends GridCommonAbstractTest { /** * */ + @Test public void testCapacity() { int cap = 10; @@ -64,6 +69,7 @@ public void testCapacity() { /** * */ + @Test public void testAddSetByte() { GridByteArrayList list = new GridByteArrayList(10); @@ -83,6 +89,7 @@ public void testAddSetByte() { /** * */ + @Test public void testAddSetInteger() { GridByteArrayList list = new GridByteArrayList(10); @@ -118,6 +125,7 @@ public void testAddSetInteger() { /** * */ + @Test public void testAddByteArray() { GridByteArrayList list = new GridByteArrayList(3); @@ -140,6 +148,7 @@ public void testAddByteArray() { /** * */ + @Test public void testAddByteBuffer() { GridByteArrayList list = new GridByteArrayList(3); @@ -167,6 +176,7 @@ public void testAddByteBuffer() { * */ 
@SuppressWarnings({"ErrorNotRethrown"}) + @Test public void testBounds() { GridByteArrayList list = new GridByteArrayList(3); @@ -201,6 +211,7 @@ public void testBounds() { /** * @throws Exception If failed. */ + @Test public void testRead() throws Exception { GridByteArrayList list = new GridByteArrayList(10); @@ -212,4 +223,4 @@ public void testRead() throws Exception { assert Arrays.equals(list.array(), arr); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/lang/GridFuncPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/lang/GridFuncPerformanceTest.java index 5afd75f680baa..9122c8e72bd1c 100644 --- a/modules/core/src/test/java/org/apache/ignite/lang/GridFuncPerformanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/lang/GridFuncPerformanceTest.java @@ -23,11 +23,15 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * GridFunc performance test. */ @GridCommonTest(group = "Lang") +@RunWith(JUnit4.class) public class GridFuncPerformanceTest extends GridCommonAbstractTest { /** * Creates test. @@ -39,6 +43,7 @@ public GridFuncPerformanceTest() { /** * */ + @Test public void testTransformingIteratorPerformance() { // Warmup. 
         testBody();
@@ -99,4 +104,4 @@ private long testBody() {
 
         return duration;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterLoadTest.java b/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterLoadTest.java
index f411f31a37dab..bc4c55e86a1a5 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterLoadTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterLoadTest.java
@@ -22,11 +22,15 @@
 import org.apache.ignite.internal.processors.cache.eviction.GridCacheMockEntry;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Check how much memory and time required to fill 1_000_000 entries with meta.
 * Do not include this test to suits.
 */
+@RunWith(JUnit4.class)
 public class GridMetadataAwareAdapterLoadTest extends GridCommonAbstractTest {
    /** Creates test.
     */
    public GridMetadataAwareAdapterLoadTest() {
@@ -40,6 +44,7 @@ public GridMetadataAwareAdapterLoadTest() {
      *
      * @throws Exception
      */
+    @Test
    public void test() throws Exception {
        String[] dic = new String[1_000_000];
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterSelfTest.java
index f202fa97c8542..b38be9817e30a 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/GridMetadataAwareAdapterSelfTest.java
@@ -21,11 +21,15 @@
 import org.apache.ignite.internal.util.lang.GridMetadataAwareAdapter;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridMetadataAwareAdapterSelfTest extends GridCommonAbstractTest {
     /** Creates test. */
     public GridMetadataAwareAdapterSelfTest() {
@@ -36,6 +40,7 @@ public GridMetadataAwareAdapterSelfTest() {
      * Junit.
      */
     @SuppressWarnings({"AssertWithSideEffects"})
+    @Test
     public void test() {
         GridMetadataAwareAdapter ma = new GridMetadataAwareAdapter();
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/GridSetWrapperSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/GridSetWrapperSelfTest.java
index 7cddf27c056b5..be662dae6bb1b 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/GridSetWrapperSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/GridSetWrapperSelfTest.java
@@ -27,19 +27,25 @@
 import org.apache.ignite.internal.util.IgniteUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Set wrapper test.
  */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridSetWrapperSelfTest extends GridCommonAbstractTest {
     /** @throws Exception If failed. */
+    @Test
     public void testEmptySet() throws Exception {
         checkCollectionEmptiness(new GridSetWrapper<>(new HashMap()));
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testMultipleValuesSet() throws Exception {
         Set set = new GridSetWrapper<>(new HashMap());
@@ -76,6 +82,7 @@ public void testMultipleValuesSet() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testSetRemove() throws Exception {
         Collection set = new GridSetWrapper<>(new HashMap());
@@ -117,6 +124,7 @@ public void testSetRemove() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testSetRemoveAll() throws Exception {
         Collection set = new GridSetWrapper<>(new HashMap());
@@ -138,6 +146,7 @@ public void testSetRemoveAll() throws Exception {
     }
 
     /** @throws Exception If failed.
      */
+    @Test
     public void testSetClear() throws Exception {
         Collection set = new GridSetWrapper<>(new HashMap());
@@ -156,6 +165,7 @@ public void testSetClear() throws Exception {
     }
 
     /** @throws Exception If failed. */
+    @Test
     public void testIterator() throws Exception {
         Set set = new GridSetWrapper<>(new HashMap());
@@ -227,4 +237,4 @@ private void checkCollectionEmptiness(Collection c) throws Exception {
             info("Caught expected exception: " + e);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/GridTupleSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/GridTupleSelfTest.java
index 77be70c229cea..ae67a4db9f01d 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/GridTupleSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/GridTupleSelfTest.java
@@ -27,11 +27,15 @@
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
  */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridTupleSelfTest extends GridCommonAbstractTest {
     /** Creates test. */
     public GridTupleSelfTest() {
@@ -41,6 +45,7 @@ public GridTupleSelfTest() {
     /**
      * JUnit.
      */
+    @Test
     public void testGridTupleAsIterable() {
         String str = "A test string";
@@ -71,6 +76,7 @@ public void testGridTupleAsIterable() {
     /**
      * JUnit.
      */
+    @Test
     public void testGridTuple2AsIterable() {
         String str1 = "A test string 1";
         String str2 = "A test string 2";
@@ -103,6 +109,7 @@ public void testGridTuple2AsIterable() {
     /**
      * JUnit.
      */
+    @Test
     public void testGridTuple2AsMap() {
         String str1 = "A test string 1";
         String str2 = "A test string 2";
@@ -141,6 +148,7 @@ public void testGridTuple2AsMap() {
     /**
      * JUnit.
      */
+    @Test
     public void testGridTuple3AsIterable() {
         String str1 = "A test string 1";
         String str2 = "A test string 2";
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/GridXSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/GridXSelfTest.java
index 8fd6df638668d..952f0b325c806 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/GridXSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/GridXSelfTest.java
@@ -24,15 +24,20 @@
 import org.apache.ignite.internal.util.typedef.X;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests for {@link X}.
  */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridXSelfTest extends GridCommonAbstractTest {
     /**
      *
      */
+    @Test
     public void testHasCause() {
         ConnectException conEx = new ConnectException();
@@ -55,6 +60,7 @@ public void testHasCause() {
     /**
      * Tests string presentation of given time.
      */
+    @Test
     public void testTimeSpan() {
         assertEquals(X.timeSpan2DHMSM(86400001L), "1 day, 00:00:00.001");
@@ -68,6 +74,7 @@ public void testTimeSpan() {
     /**
      *
      */
+    @Test
     public void testShallowClone() {
         // Single not cloneable object
         Object obj = new Object();
@@ -117,6 +124,7 @@ public void testShallowClone() {
      *
      */
     @SuppressWarnings({"StringEquality"})
+    @Test
     public void testDeepCloner() {
         // Single not cloneable object
         Object obj = new Object();
@@ -272,4 +280,4 @@ private static class TestCycledChild extends TestCycled {
         @SuppressWarnings({"unused"})
         private final TestCycled anotherCycle = this;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/IgniteUuidSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/IgniteUuidSelfTest.java
index a3d82cbeb88cd..4091e1b679cfb 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/IgniteUuidSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/IgniteUuidSelfTest.java
@@ -30,11 +30,15 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests for {@link org.apache.ignite.lang.IgniteUuid}.
  */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class IgniteUuidSelfTest extends GridCommonAbstractTest {
     /** Sample size. */
     private static final int NUM = 100000;
@@ -42,6 +46,7 @@ public class IgniteUuidSelfTest extends GridCommonAbstractTest {
     /**
      * JUnit.
      */
+    @Test
     public void testToString() {
         IgniteUuid id1 = IgniteUuid.randomUuid();
@@ -73,6 +78,7 @@ public void testToString() {
     /**
      * JUnit.
      */
+    @Test
     public void testGridUuid() {
         IgniteUuid id1 = IgniteUuid.randomUuid();
         IgniteUuid id2 = IgniteUuid.randomUuid();
@@ -95,6 +101,7 @@ public void testGridUuid() {
     /**
      * JUnit.
      */
+    @Test
     public void testGridUuidPerformance() {
         long start = System.currentTimeMillis();
@@ -121,6 +128,7 @@ public void testGridUuidPerformance() {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSerializationPerformance() throws Exception {
         UuidBean[] uids = new UuidBean[NUM];
@@ -330,4 +338,4 @@ private UuidBean(UUID uid) {
         /** {@inheritDoc} */
         @Override public String toString() { return S.toString(UuidBean.class, this); }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentLinkedHashMapSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentLinkedHashMapSelfTest.java
index 8ce7ae37038f9..acfd481408a55 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentLinkedHashMapSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentLinkedHashMapSelfTest.java
@@ -21,10 +21,14 @@
 import java.util.Map;
 import org.apache.ignite.internal.util.GridBoundedConcurrentLinkedHashMap;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link GridBoundedConcurrentLinkedHashMap}.
 */
+@RunWith(JUnit4.class)
 public class GridBoundedConcurrentLinkedHashMapSelfTest extends GridCommonAbstractTest {
     /** Bound. */
     private static final int MAX = 3;
@@ -32,6 +36,7 @@ public class GridBoundedConcurrentLinkedHashMapSelfTest extends GridCommonAbstra
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBound() throws Exception {
         Map map = new GridBoundedConcurrentLinkedHashMap<>(MAX);
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentOrderedMapSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentOrderedMapSelfTest.java
index 05ba4959caf4e..2c33ccf9f9a19 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentOrderedMapSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedConcurrentOrderedMapSelfTest.java
@@ -23,15 +23,20 @@
 import org.apache.ignite.internal.util.typedef.CI2;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link GridBoundedConcurrentOrderedMap}.
 */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridBoundedConcurrentOrderedMapSelfTest extends GridCommonAbstractTest {
     /**
      *
      */
+    @Test
     public void testEvictionSingleElement() {
         SortedMap m = new GridBoundedConcurrentOrderedMap<>(1);
@@ -52,6 +57,7 @@ public void testEvictionSingleElement() {
     /**
      *
      */
+    @Test
     public void testEvictionListener() {
         GridBoundedConcurrentOrderedMap m = new GridBoundedConcurrentOrderedMap<>(1);
@@ -78,4 +84,4 @@ public void testEvictionListener() {
         assertEquals(10, m.lastKey().intValue());
         assertEquals(10, evicted.get());
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedPriorityQueueSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedPriorityQueueSelfTest.java
index bcaecc4adcd10..489a14fdd94a7 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedPriorityQueueSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridBoundedPriorityQueueSelfTest.java
@@ -26,11 +26,15 @@
 import org.apache.ignite.internal.util.GridBoundedPriorityQueue;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link GridBoundedPriorityQueue}.
 */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridBoundedPriorityQueueSelfTest extends GridCommonAbstractTest {
     /** Queue items comparator. */
     private static final Comparator CMP = new Comparator() {
@@ -45,6 +49,7 @@ public class GridBoundedPriorityQueueSelfTest extends GridCommonAbstractTest {
     /**
      * Test eviction in bounded priority queue.
      */
+    @Test
     public void testEviction() {
         GridBoundedPriorityQueue queue = new GridBoundedPriorityQueue<>(3, CMP);
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferPerformanceTest.java
index c0fc4864472b9..7e510d5397c76 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferPerformanceTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferPerformanceTest.java
@@ -27,14 +27,19 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.util.deque.FastSizeDeque;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridCircularBufferPerformanceTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testThroughput() throws Exception {
         int size = 256 * 1024;
@@ -74,6 +79,7 @@ public void testThroughput() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDequeueThroughput() throws Exception {
         final FastSizeDeque buf = new FastSizeDeque<>(new ConcurrentLinkedDeque<>());
@@ -117,6 +123,7 @@ public void testDequeueThroughput() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testArrayBlockingQueueThroughput() throws Exception {
         final int size = 256 * 1024;
@@ -158,6 +165,7 @@ public void testArrayBlockingQueueThroughput() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAdderThroughput() throws Exception {
         final int size = 256 * 1024;
@@ -194,6 +202,7 @@ public void testAdderThroughput() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAtomicLongThroughput() throws Exception {
         final int size = 256 * 1024;
@@ -226,4 +235,4 @@ public void testAtomicLongThroughput() throws Exception {
 
         info("Buffer: " + buf);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferSelfTest.java
index 50d351b30cd28..93e98481b3bee 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridCircularBufferSelfTest.java
@@ -23,14 +23,19 @@
 import java.util.concurrent.atomic.AtomicInteger;
 import org.apache.ignite.internal.util.GridCircularBuffer;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridCircularBufferSelfTest extends GridCommonAbstractTest {
     /**
      *
      */
+    @Test
     public void testCreation() {
         try {
             GridCircularBuffer buf = new GridCircularBuffer<>(-2);
@@ -73,6 +78,7 @@ public void testCreation() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSingleThreaded() throws Exception {
         int size = 8;
         int iterCnt = size * 10;
@@ -107,6 +113,7 @@ public void testSingleThreaded() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMutliThreaded() throws Exception {
         int size = 32 * 1024;
@@ -135,6 +142,7 @@ public void testMutliThreaded() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMutliThreaded2() throws Exception {
         int size = 256 * 1024;
@@ -173,4 +181,4 @@ public void testMutliThreaded2() throws Exception {
 
         info("Buffer: " + buf);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentLinkedHashMapSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentLinkedHashMapSelfTest.java
index 7bcbd07b3becf..98524dcdf832f 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentLinkedHashMapSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentLinkedHashMapSelfTest.java
@@ -27,6 +27,9 @@
 import java.util.concurrent.ThreadLocalRandom;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jsr166.ConcurrentLinkedHashMap;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.jsr166.ConcurrentLinkedHashMap.QueuePolicy.PER_SEGMENT_Q;
 import static org.jsr166.ConcurrentLinkedHashMap.QueuePolicy.PER_SEGMENT_Q_OPTIMIZED_RMV;
@@ -34,6 +37,7 @@
 /**
  * This class tests basic contracts of {@code ConcurrentLinkedHashMap}.
 */
+@RunWith(JUnit4.class)
 public class GridConcurrentLinkedHashMapSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int KEYS_UPPER_BOUND = 1000;
@@ -47,6 +51,7 @@ public class GridConcurrentLinkedHashMapSelfTest extends GridCommonAbstractTest
     /**
      *
      */
+    @Test
     public void testInsertionOrder() {
         testOrder(false);
     }
@@ -54,6 +59,7 @@ public void testInsertionOrder() {
     /**
      *
      */
+    @Test
     public void testInsertionOrderWithUpdate() {
         testOrder(true);
     }
@@ -61,6 +67,7 @@ public void testInsertionOrderWithUpdate() {
     /**
      *
      */
+    @Test
     public void testEvictionInsert() {
         final int mapSize = 1000;
@@ -159,6 +166,7 @@ private void testOrder(boolean update) {
      * Tests iterator when concurrent modifications remove and add the same keys to the map.
      *
      */
+    @Test
     public void testIteratorDuplicates() {
         Map tst = new ConcurrentLinkedHashMap<>();
@@ -187,6 +195,7 @@ public void testIteratorDuplicates() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRehash() throws Exception {
         Map map = new ConcurrentLinkedHashMap<>(10);
@@ -204,6 +213,7 @@ public void testRehash() throws Exception {
     /**
      *
      */
+    @Test
     public void testDescendingMethods() {
         ConcurrentLinkedHashMap tst = new ConcurrentLinkedHashMap<>();
@@ -273,6 +283,7 @@ public void testDescendingMethods() {
     /**
      *
      */
+    @Test
     public void testIterationInPerSegmentModes() {
         checkIteration(PER_SEGMENT_Q);
         checkIteration(PER_SEGMENT_Q_OPTIMIZED_RMV);
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentWeakHashSetSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentWeakHashSetSelfTest.java
index 9a6d3caa5da4b..cc40347b9e01d 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentWeakHashSetSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConcurrentWeakHashSetSelfTest.java
@@ -28,11 +28,15 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link GridConcurrentWeakHashSet}.
 */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridConcurrentWeakHashSetSelfTest extends GridCommonAbstractTest {
     /** Time to wait after {@link System#gc} method call. */
     private static final long WAIT_TIME = 3000;
@@ -43,6 +47,7 @@ public class GridConcurrentWeakHashSetSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception Thrown if test failed.
      */
+    @Test
     public void testA() throws Exception {
         Collection set = new GridConcurrentWeakHashSet<>();
@@ -129,6 +134,7 @@ public void testA() throws Exception {
      * @throws Exception Thrown if test failed.
      */
     @SuppressWarnings({"UnusedAssignment"})
+    @Test
     public void testB() throws Exception {
         Collection set = new GridConcurrentWeakHashSet<>();
@@ -198,6 +204,7 @@ public void testB() throws Exception {
     /**
      * @throws Exception Thrown if test failed.
      */
+    @Test
     public void testC() throws Exception {
         final Collection set = new GridConcurrentWeakHashSet<>();
@@ -243,6 +250,7 @@ public void testC() throws Exception {
     /**
      * @throws Exception Thrown if test failed.
      */
+    @Test
     public void testD() throws Exception {
         final Collection set = new GridConcurrentWeakHashSet<>();
@@ -396,4 +404,4 @@ private SampleBean(int num) {
             return S.toString(SampleBean.class, this);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConsistentHashSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConsistentHashSelfTest.java
index 98d7b988acae9..765b34921d915 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConsistentHashSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridConsistentHashSelfTest.java
@@ -32,12 +32,16 @@
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Consistent hash test.
 */
 @SuppressWarnings({"AssertWithSideEffects"})
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridConsistentHashSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int NODES = 20;
@@ -103,6 +107,7 @@ private void clean(GridConsistentHash hash) {
      *
      * @throws Exception In case of any exception.
      */
+    @Test
     public void testCollisions() throws Exception {
         Map> map = new HashMap<>();
@@ -154,6 +159,7 @@ public void testCollisions() throws Exception {
      *
      * @throws Exception In case of any exception.
      */
+    @Test
     public void testTreeSetRestrictions() throws Exception {
         // Constructs hash without explicit node's comparator.
         GridConsistentHash hash = new GridConsistentHash<>();
@@ -187,6 +193,7 @@ public void testTreeSetRestrictions() throws Exception {
     /**
      *
      */
+    @Test
     public void testOneNode() {
         GridConsistentHash hash = new GridConsistentHash<>();
@@ -204,6 +211,7 @@ public void testOneNode() {
     /**
      *
      */
+    @Test
     public void testHistory() {
         for (int i = NODES; i-- > 0; ) {
             GridConsistentHash hash = new GridConsistentHash<>();
@@ -338,4 +346,4 @@ private String[] keys(int cnt) {
 
         return keys;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanIdentitySetSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanIdentitySetSelfTest.java
index 0cfe0c8f296c5..b0643e2456c41 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanIdentitySetSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanIdentitySetSelfTest.java
@@ -21,17 +21,22 @@
 import org.apache.ignite.internal.util.GridLeanIdentitySet;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests for {@link org.apache.ignite.internal.util.GridLeanMap}.
 */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridLeanIdentitySetSelfTest extends GridCommonAbstractTest {
     /**
      * JUnit.
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testAddSizeContainsClear() throws Exception {
         Set set = new GridLeanIdentitySet<>();
@@ -59,4 +64,4 @@ public void testAddSizeContainsClear() throws Exception {
 
         assert set.isEmpty();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapPerformanceTest.java
index 3528cb0add75f..fa8a0f6c3a408 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapPerformanceTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapPerformanceTest.java
@@ -20,10 +20,14 @@
 import java.util.Map;
 import org.apache.ignite.internal.util.GridLeanMap;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Performance test for {@link GridLeanMap}.
 */
+@RunWith(JUnit4.class)
 public class GridLeanMapPerformanceTest extends GridCommonAbstractTest {
     /** */
     private static final int RUN_CNT = 5;
@@ -34,6 +38,7 @@ public class GridLeanMapPerformanceTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testPerformance() throws Exception {
         long avgDur = 0;
@@ -80,4 +85,4 @@ private void iterate(Map map) throws Exception {
             for (Integer v : map.values());
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapSelfTest.java
index 2b49858f03f37..4dddf47f3687b 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridLeanMapSelfTest.java
@@ -25,17 +25,22 @@
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests for {@link GridLeanMap}.
 */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridLeanMapSelfTest extends GridCommonAbstractTest {
     /**
      * JUnit.
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDefaultMap() throws Exception {
         Map map = new GridLeanMap<>();
@@ -102,7 +107,7 @@ public void testDefaultMap() throws Exception {
      *
      * @throws Exception If failed.
      */
-    @SuppressWarnings({"MismatchedQueryAndUpdateOfCollection"})
+    @Test
     public void testEmptyMap() throws Exception {
         Map map = new GridLeanMap<>(0);
@@ -169,6 +174,7 @@ public void testEmptyMap() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testOneEntryMap() throws Exception {
         Map map = new GridLeanMap<>(0);
@@ -224,6 +230,7 @@ public void testOneEntryMap() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testMapPutSameKey() throws Exception {
         Map map = new GridLeanMap<>(0);
@@ -242,6 +249,7 @@ public void testMapPutSameKey() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleEntriesMap() throws Exception {
         Map map = new GridLeanMap<>(0);
@@ -333,6 +341,7 @@ public void testMultipleEntriesMap() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testMapRemove() throws Exception {
         Map map = new GridLeanMap<>(0);
@@ -387,6 +396,7 @@ public void testMapRemove() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testMapClear() throws Exception {
         Map map = new GridLeanMap<>();
@@ -408,6 +418,7 @@ public void testMapClear() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testEntrySet() throws Exception {
         Map map = new GridLeanMap<>();
@@ -460,6 +471,7 @@ public void testEntrySet() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testWithInitSize1() throws Exception {
         // Batch mode.
         Map map = new GridLeanMap<>(4);
@@ -528,6 +540,7 @@ public void testWithInitSize1() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testWithInitSize2() throws Exception {
         // Batch mode.
         Map map = new GridLeanMap<>(10);
@@ -657,4 +670,4 @@ else if (map.size() == 5)
         else
             checkImpl(map, "LeanHashMap");
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridListSetSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridListSetSelfTest.java
index 6dbff65569aac..f7a93cc51ba74 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridListSetSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridListSetSelfTest.java
@@ -25,15 +25,20 @@
 import org.apache.ignite.internal.util.typedef.internal.S;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for {@link GridListSet}.
 */
 @GridCommonTest(group = "Lang")
+@RunWith(JUnit4.class)
 public class GridListSetSelfTest extends GridCommonAbstractTest {
     /**
      *
      */
+    @Test
     public void testUnsorted() {
         GridListSet set = new GridListSet<>();
@@ -91,6 +96,7 @@ public void testUnsorted() {
     /**
      *
      */
+    @Test
     public void testSortedNotStrict() {
         GridListSet set = new GridListSet<>(new Comparator() {
             @Override public int compare(V1 o1, V1 o2) {
@@ -153,6 +159,7 @@ public void testSortedNotStrict() {
     /**
      *
      */
+    @Test
     public void testSortedStrict() {
         List vals = new ArrayList<>();
@@ -286,4 +293,4 @@ private V2(int val, int other) {
             return S.toString(V2.class, this, super.toString());
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridStripedLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridStripedLockSelfTest.java
index f6fb545769740..14d546565fa74 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/GridStripedLockSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/GridStripedLockSelfTest.java
@@ -24,10 +24,14 @@
 import org.apache.ignite.internal.util.GridStripedLock;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
+@RunWith(JUnit4.class)
 public class GridStripedLockSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int STRIPE_COUNT = 16;
@@ -54,6 +58,7 @@ public class GridStripedLockSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIntLocking() throws Exception {
         GridTestUtils.runMultiThreaded(new Runnable() {
             @Override public void run() {
@@ -87,6 +92,7 @@ public void testIntLocking() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testLongLocking() throws Exception {
         GridTestUtils.runMultiThreaded(new Runnable() {
             @Override public void run() {
@@ -120,6 +126,7 @@ public void testLongLocking() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testObjectLocking() throws Exception {
         GridTestUtils.runMultiThreaded(new Runnable() {
             @Override public void run() {
@@ -197,4 +204,4 @@ private Iterable testObjects(final int cnt) {
             }
         };
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/lang/utils/IgniteOffheapReadWriteLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/lang/utils/IgniteOffheapReadWriteLockSelfTest.java
index c5ebe6a771bd0..91c353dc24045 100644
--- a/modules/core/src/test/java/org/apache/ignite/lang/utils/IgniteOffheapReadWriteLockSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/lang/utils/IgniteOffheapReadWriteLockSelfTest.java
@@ -29,11 +29,15 @@
 import java.util.concurrent.ThreadLocalRandom;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
 @SuppressWarnings("BusyWait")
+@RunWith(JUnit4.class)
 public class IgniteOffheapReadWriteLockSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int TAG_0 = 1;
@@ -44,6 +48,7 @@ public class IgniteOffheapReadWriteLockSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testConcurrentUpdatesSingleLock() throws Exception {
         final int numPairs = 100;
         final Pair[] data = new Pair[numPairs];
@@ -139,6 +144,7 @@ public void testConcurrentUpdatesSingleLock() throws Exception {
     /**
      * @throws Exception if failed.
*/ + @Test public void testConcurrentUpdatesMultipleLocks() throws Exception { final int numPairs = 100; final Pair[] data = new Pair[numPairs]; @@ -225,6 +231,7 @@ public void testConcurrentUpdatesMultipleLocks() throws Exception { /** * @throws Exception if failed. */ + @Test public void testLockUpgradeMultipleLocks() throws Exception { final int numPairs = 100; final Pair[] data = new Pair[numPairs]; @@ -312,6 +319,7 @@ public void testLockUpgradeMultipleLocks() throws Exception { /** * @throws Exception if failed. */ + @Test public void testTagIdUpdateWait() throws Exception { checkTagIdUpdate(true); } @@ -319,6 +327,7 @@ public void testTagIdUpdateWait() throws Exception { /** * @throws Exception if failed. */ + @Test public void testTagIdUpdateContinuous() throws Exception { checkTagIdUpdate(false); } diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/GridCacheMultiNodeLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/GridCacheMultiNodeLoadTest.java index 35c3405cdd51b..3e648c74495cd 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/GridCacheMultiNodeLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/GridCacheMultiNodeLoadTest.java @@ -21,10 +21,10 @@ import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -33,6 +33,7 @@ /** * Multi-node cache test. 
*/ +@RunWith(JUnit4.class) public class GridCacheMultiNodeLoadTest extends GridCommonAbstractTest { /** Cache name. */ public static final String CACHE_NAME = "partitioned"; @@ -43,20 +44,11 @@ public class GridCacheMultiNodeLoadTest extends GridCommonAbstractTest { /** Grid 1. */ private static Ignite ignite1; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @SuppressWarnings("unchecked") @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setName(CACHE_NAME); @@ -97,7 +89,8 @@ public class GridCacheMultiNodeLoadTest extends GridCommonAbstractTest { /** * @throws Exception If test failed. */ + @Test public void testMany() throws Exception { ignite1.compute().execute(GridCacheLoadPopulationTask.class, null); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/cache/GridCacheWriteBehindStoreLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/cache/GridCacheWriteBehindStoreLoadTest.java index c01d113fb0589..86ef6d304c404 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/cache/GridCacheWriteBehindStoreLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/cache/GridCacheWriteBehindStoreLoadTest.java @@ -31,12 +31,16 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** 
* Basic store test. */ +@RunWith(JUnit4.class) public class GridCacheWriteBehindStoreLoadTest extends GridCommonAbstractTest { /** Flush frequency. */ private static final int WRITE_FROM_BEHIND_FLUSH_FREQUENCY = 1000; @@ -125,6 +129,7 @@ protected CacheMode cacheMode() { /** * @throws Exception If failed. */ + @Test public void testLoadCacheSequentialKeys() throws Exception { rndKeys = false; @@ -136,6 +141,7 @@ public void testLoadCacheSequentialKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheRandomKeys() throws Exception { rndKeys = true; @@ -199,4 +205,4 @@ private void loadCache() throws Exception { @Override protected long getTestTimeout() { return 0; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridIoManagerBenchmark0.java b/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridIoManagerBenchmark0.java index 58fc166aeb602..0197579eea54d 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridIoManagerBenchmark0.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridIoManagerBenchmark0.java @@ -42,17 +42,18 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.communication.CommunicationSpi; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.managers.communication.GridIoPolicy.PUBLIC_POOL; /** * */ +@RunWith(JUnit4.class) public 
class GridIoManagerBenchmark0 extends GridCommonAbstractTest { /** */ public static final int CONCUR_MSGS = 10 * 1024; @@ -63,9 +64,6 @@ public class GridIoManagerBenchmark0 extends GridCommonAbstractTest { /** */ private static final long TEST_TIMEOUT = 3 * 60 * 1000; - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGridsMultiThreaded(2); @@ -75,12 +73,6 @@ public class GridIoManagerBenchmark0 extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - c.setDiscoverySpi(discoSpi); - c.setCommunicationSpi(getCommunication()); return c; @@ -103,7 +95,7 @@ private static String generateTestString(int len) { /** * @throws Exception If failed. */ - @SuppressWarnings("deprecation") + @Test public void testThroughput() throws Exception { final IgniteKernal sndKernal = (IgniteKernal)grid(0); final IgniteKernal rcvKernal = (IgniteKernal)grid(1); @@ -199,7 +191,7 @@ public void testThroughput() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("deprecation") + @Test public void testLatency() throws Exception { final IgniteKernal sndKernal = (IgniteKernal)grid(0); final IgniteKernal rcvKernal = (IgniteKernal)grid(1); @@ -295,7 +287,7 @@ public void testLatency() throws Exception { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("deprecation") + @Test public void testVariableLoad() throws Exception { final IgniteKernal sndKernal = (IgniteKernal)grid(0); final IgniteKernal rcvKernal = (IgniteKernal)grid(1); @@ -469,4 +461,4 @@ private CommunicationSpi getCommunication() { @Override protected long getTestTimeout() { return TEST_TIMEOUT + 60 * 1000; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridTcpCommunicationBenchmark.java b/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridTcpCommunicationBenchmark.java index b62094977c3ce..b94cccad4c61e 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridTcpCommunicationBenchmark.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/communication/GridTcpCommunicationBenchmark.java @@ -583,4 +583,4 @@ // @Override protected long getTestTimeout() { // return TEST_TIMEOUT + 60 * 1000; // } -//} \ No newline at end of file +//} diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/datastructures/GridCachePartitionedAtomicLongLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/datastructures/GridCachePartitionedAtomicLongLoadTest.java index 30172724e7378..3ce01d10df15f 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/datastructures/GridCachePartitionedAtomicLongLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/datastructures/GridCachePartitionedAtomicLongLoadTest.java @@ -29,11 +29,11 @@ import org.apache.ignite.configuration.AtomicConfiguration; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -43,13 +43,11 @@ /** * Load test for atomic long. */ +@RunWith(JUnit4.class) public class GridCachePartitionedAtomicLongLoadTest extends GridCommonAbstractTest { /** Test duration. */ private static final long DURATION = 8 * 60 * 60 * 1000; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final AtomicInteger idx = new AtomicInteger(); @@ -84,18 +82,13 @@ public class GridCachePartitionedAtomicLongLoadTest extends GridCommonAbstractTe c.setCacheConfiguration(cc); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - return c; } /** * @throws Exception If failed. 
*/ + @Test public void testLoad() throws Exception { startGrid(); @@ -139,4 +132,4 @@ private class AtomicCallable implements Callable { return true; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsLoadTest.java index 2d1aaa48d770a..804cd936bc89d 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsLoadTest.java @@ -31,11 +31,15 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multi-splits load test. */ @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public class GridMultiSplitsLoadTest extends GridCommonAbstractTest { /** */ public GridMultiSplitsLoadTest() { @@ -89,6 +93,7 @@ private int getThreadCount() { * * @throws Exception If task execution failed. 
*/ + @Test public void testLoad() throws Exception { final Ignite ignite = G.ignite(getTestIgniteInstanceName()); diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsRedeployLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsRedeployLoadTest.java index 0cb089569829e..db7bf138979f0 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsRedeployLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/multisplit/GridMultiSplitsRedeployLoadTest.java @@ -27,11 +27,15 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multi splits redeploy load test. */ @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public class GridMultiSplitsRedeployLoadTest extends GridCommonAbstractTest { /** Load test task type ID. */ public static final String TASK_TYPE_ID = GridLoadTestTask.class.getName(); @@ -75,6 +79,7 @@ private int getThreadCount() { * * @throws Exception If task execution failed. 
*/ + @Test public void testLoad() throws Exception { final Ignite ignite = G.ignite(getTestIgniteInstanceName()); diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/newnodes/GridSingleSplitsNewNodesAbstractLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/newnodes/GridSingleSplitsNewNodesAbstractLoadTest.java index aac3c301f037f..18343f36fb73c 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/newnodes/GridSingleSplitsNewNodesAbstractLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/newnodes/GridSingleSplitsNewNodesAbstractLoadTest.java @@ -29,11 +29,15 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base class for single split on new nodes tests. */ @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public abstract class GridSingleSplitsNewNodesAbstractLoadTest extends GridCommonAbstractTest { /** * @param cfg Current configuration. @@ -88,6 +92,7 @@ protected int getNodeCount() { * * @throws Exception If task execution failed. 
*/ + @Test public void testLoad() throws Exception { final Ignite ignite = startGrid(getTestIgniteInstanceName()); diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/redeploy/GridSingleSplitsRedeployLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/redeploy/GridSingleSplitsRedeployLoadTest.java index 057a1c82a3601..85b1b2752f4ef 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/redeploy/GridSingleSplitsRedeployLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/redeploy/GridSingleSplitsRedeployLoadTest.java @@ -33,11 +33,15 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Single splits redeploy load test. */ @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public class GridSingleSplitsRedeployLoadTest extends GridCommonAbstractTest { /** Load test task type ID. */ public static final String TASK_NAME = "org.apache.ignite.tests.p2p.SingleSplitTestTask"; @@ -91,6 +95,7 @@ private int getThreadCount() { * * @throws Exception If task execution failed. 
*/ + @Test public void testLoad() throws Exception { final Ignite ignite = G.ignite(getTestIgniteInstanceName()); diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/session/GridSessionLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/session/GridSessionLoadTest.java index d46158b743409..0b1957b060ea0 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/session/GridSessionLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/session/GridSessionLoadTest.java @@ -26,11 +26,15 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Session load test. */ @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public class GridSessionLoadTest extends GridCommonAbstractTest { /** */ public GridSessionLoadTest() { @@ -59,7 +63,7 @@ private int getThreadCount() { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("unchecked") + @Test public void testSessionLoad() throws Exception { final Ignite ignite = G.ignite(getTestIgniteInstanceName()); diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/stealing/GridStealingLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/stealing/GridStealingLoadTest.java index ace423aa3c4d6..6bfdd36105a74 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/direct/stealing/GridStealingLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/direct/stealing/GridStealingLoadTest.java @@ -32,11 +32,15 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridCommonTest(group = "Load Test") +@RunWith(JUnit4.class) public class GridStealingLoadTest extends GridCommonAbstractTest { /** */ public GridStealingLoadTest() { @@ -86,7 +90,7 @@ private int getThreadCount() { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("unchecked") + @Test public void testStealingLoad() throws Exception { final Ignite ignite = grid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridCacheTestContext.java b/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridCacheTestContext.java index 344a1cc0cdbbd..55c4c4574fc5d 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridCacheTestContext.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridCacheTestContext.java @@ -48,6 +48,7 @@ import org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager; import org.apache.ignite.internal.processors.cache.version.GridCacheVersionManager; import org.apache.ignite.internal.processors.plugin.CachePluginManager; +import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.testframework.junits.GridTestKernalContext; import static org.apache.ignite.testframework.junits.GridAbstractTest.defaultCacheConfiguration; @@ -81,14 +82,18 @@ public GridCacheTestContext(GridTestKernalContext ctx) throws Exception { new GridCacheSharedTtlCleanupManager(), new PartitionsEvictManager(), new CacheNoopJtaManager(), + null, null ), defaultCacheConfiguration(), null, CacheType.USER, AffinityTopologyVersion.ZERO, + IgniteUuid.randomUuid(), true, true, + false, + false, new GridCacheEventManager(), new CacheOsStoreManager(null, new CacheConfiguration()), new GridCacheEvictionManager(), diff --git a/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridHashMapLoadTest.java b/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridHashMapLoadTest.java index 7d4f90e3c93e2..7b2ddb6535ec9 100644 --- a/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridHashMapLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/loadtests/hashmap/GridHashMapLoadTest.java @@ -28,15 +28,20 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; 
import org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests hashmap load. */ @SuppressWarnings("InfiniteLoopStatement") +@RunWith(JUnit4.class) public class GridHashMapLoadTest extends GridCommonAbstractTest { /** * */ + @Test public void testHashMapLoad() { Map map = new HashMap<>(5 * 1024 * 1024); @@ -53,6 +58,7 @@ public void testHashMapLoad() { /** * */ + @Test public void testConcurrentHashMapLoad() { Map map = new ConcurrentHashMap<>(5 * 1024 * 1024); @@ -69,6 +75,7 @@ public void testConcurrentHashMapLoad() { /** * @throws Exception If failed. */ + @Test public void testMapEntry() throws Exception { Map map = new HashMap<>(5 * 1024 * 1024); diff --git a/modules/core/src/test/java/org/apache/ignite/logger/java/JavaLoggerTest.java b/modules/core/src/test/java/org/apache/ignite/logger/java/JavaLoggerTest.java index d9ec810d32586..4687ca992c9c2 100644 --- a/modules/core/src/test/java/org/apache/ignite/logger/java/JavaLoggerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/logger/java/JavaLoggerTest.java @@ -18,17 +18,19 @@ package org.apache.ignite.logger.java; import java.util.UUID; -import junit.framework.TestCase; import org.apache.ignite.IgniteLogger; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.logger.LoggerNodeIdAware; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; + +import static org.junit.Assert.assertTrue; /** * Java logger test. */ @GridCommonTest(group = "Logger") -public class JavaLoggerTest extends TestCase { +public class JavaLoggerTest { /** */ @SuppressWarnings({"FieldCanBeLocal"}) private IgniteLogger log; @@ -36,6 +38,7 @@ public class JavaLoggerTest extends TestCase { /** * @throws Exception If failed. 
*/ + @Test public void testLogInitialize() throws Exception { log = new JavaLogger(); @@ -65,4 +68,4 @@ public void testLogInitialize() throws Exception { // Ensure we don't get pattern, only actual file name is allowed here. assert !log.fileName().contains("%"); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/marshaller/DynamicProxySerializationMultiJvmSelfTest.java b/modules/core/src/test/java/org/apache/ignite/marshaller/DynamicProxySerializationMultiJvmSelfTest.java index 51e71960173bf..44da9e4ad7568 100644 --- a/modules/core/src/test/java/org/apache/ignite/marshaller/DynamicProxySerializationMultiJvmSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/marshaller/DynamicProxySerializationMultiJvmSelfTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Multi-JVM test for dynamic proxy serialization. */ +@RunWith(JUnit4.class) public class DynamicProxySerializationMultiJvmSelfTest extends GridCommonAbstractTest { /** */ private static Callable marshFactory; @@ -58,6 +62,7 @@ public class DynamicProxySerializationMultiJvmSelfTest extends GridCommonAbstrac /** * @throws Exception If failed. */ + @Test public void testBinaryMarshaller() throws Exception { marshFactory = new Callable() { @Override public Marshaller call() throws Exception { @@ -71,6 +76,7 @@ public void testBinaryMarshaller() throws Exception { /** * @throws Exception If failed. */ + @Test public void testToBinary() throws Exception { marshFactory = new Callable() { @Override public Marshaller call() throws Exception { @@ -90,6 +96,7 @@ public void testToBinary() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBinaryField() throws Exception { marshFactory = new Callable() { @Override public Marshaller call() throws Exception { diff --git a/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerAbstractTest.java index 05a8924d6186e..10100e215cc1a 100644 --- a/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerAbstractTest.java @@ -74,6 +74,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.thread.IgniteThread; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.events.EventType.EVTS_CACHE; @@ -81,6 +84,7 @@ /** * Common test for marshallers. */ +@RunWith(JUnit4.class) public abstract class GridMarshallerAbstractTest extends GridCommonAbstractTest implements Serializable { /** */ private static final String CACHE_NAME = "namedCache"; @@ -157,6 +161,7 @@ protected GridMarshallerAbstractTest() { /** * @throws Exception If failed. */ + @Test public void testDefaultCache() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -187,6 +192,7 @@ public void testDefaultCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNamedCache() throws Exception { IgniteCache cache = grid().cache(CACHE_NAME); @@ -219,6 +225,7 @@ public void testNamedCache() throws Exception { * * @throws Exception If test failed. */ + @Test public void testMarshalling() throws Exception { GridMarshallerTestBean inBean = newTestBean(new Object()); @@ -246,6 +253,7 @@ public void testMarshalling() throws Exception { * * @throws Exception If test failed. 
*/ + @Test public void testMarshallingAnonymousClassInstance() throws Exception { final Ignite g = grid(); @@ -283,6 +291,7 @@ public void testMarshallingAnonymousClassInstance() throws Exception { * * @throws Exception If test failed. */ + @Test public void testMarshallingLocalClassInstance() throws Exception { /** * Local class. @@ -320,6 +329,7 @@ class LocalRunnable implements Runnable, Serializable { * * @throws Exception If test failed. */ + @Test public void testMarshallingNestedClassInstance() throws Exception { GridMarshallerTestBean inBean = newTestBean(new NestedClass()); @@ -347,6 +357,7 @@ public void testMarshallingNestedClassInstance() throws Exception { * * @throws Exception If test failed. */ + @Test public void testMarshallingStaticNestedClassInstance() throws Exception { GridMarshallerTestBean inBean = newTestBean(new StaticNestedClass()); @@ -374,6 +385,7 @@ public void testMarshallingStaticNestedClassInstance() throws Exception { * * @throws Exception If test failed. */ + @Test public void testMarshallingNullObject() throws Exception { GridMarshallerTestBean inBean = newTestBean(null); @@ -397,6 +409,7 @@ public void testMarshallingNullObject() throws Exception { * @throws IgniteCheckedException If marshalling failed. */ @SuppressWarnings({"ZeroLengthArrayAllocation"}) + @Test public void testMarshallingArrayOfPrimitives() throws IgniteCheckedException { char[] inChars = "vasya".toCharArray(); @@ -429,6 +442,7 @@ public void testMarshallingArrayOfPrimitives() throws IgniteCheckedException { * * @throws Exception If test failed. */ + @Test public void testExternalClassesMarshalling() throws Exception { ClassLoader tstClsLdr = new GridTestClassLoader( Collections.singletonMap("org/apache/ignite/p2p/p2p.properties", "resource=loaded"), @@ -452,6 +466,7 @@ public void testExternalClassesMarshalling() throws Exception { * * @throws Exception If test failed. 
*/ + @Test public void testGridKernalMarshalling() throws Exception { GridMarshallerTestBean inBean = newTestBean(grid()); @@ -477,6 +492,7 @@ public void testGridKernalMarshalling() throws Exception { * * @throws Exception If test failed. */ + @Test public void testSubgridMarshalling() throws Exception { final Ignite ignite = grid(); @@ -507,6 +523,7 @@ public void testSubgridMarshalling() throws Exception { * * @throws Exception If test failed. */ + @Test public void testLoggerMarshalling() throws Exception { GridMarshallerTestBean inBean = newTestBean(grid().log()); @@ -532,6 +549,7 @@ public void testLoggerMarshalling() throws Exception { * @throws Exception If test failed. */ @SuppressWarnings("unchecked") + @Test public void testNodeLocalMarshalling() throws Exception { ConcurrentMap loc = grid().cluster().nodeLocalMap(); @@ -568,6 +586,7 @@ public void testNodeLocalMarshalling() throws Exception { * * @throws Exception If test failed. */ + @Test public void testExecutorServiceMarshalling() throws Exception { ExecutorService inSrvc = grid().executorService(); @@ -599,6 +618,7 @@ public void testExecutorServiceMarshalling() throws Exception { * * @throws Exception If test failed. */ + @Test public void testKernalContext() throws Exception { GridMarshallerTestBean inBean = newTestBean(GridKernalTestUtils.context(grid())); @@ -621,6 +641,7 @@ public void testKernalContext() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScheduler() throws Exception { IgniteScheduler scheduler = grid().scheduler(); @@ -653,6 +674,7 @@ public void testScheduler() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCompute() throws Exception { IgniteConfiguration cfg = optimize(getConfiguration("g1")); @@ -693,6 +715,7 @@ public void testCompute() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testEvents() throws Exception { IgniteConfiguration cfg = optimize(getConfiguration("g1")); @@ -735,6 +758,7 @@ public void testEvents() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMessaging() throws Exception { IgniteConfiguration cfg = optimize(getConfiguration("g1")); @@ -771,6 +795,7 @@ public void testMessaging() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServices() throws Exception { IgniteConfiguration cfg = optimize(getConfiguration("g1")); @@ -878,6 +903,7 @@ private static class StaticNestedClass implements Serializable { /** * @throws Exception If failed. */ + @Test public void testReadArray() throws Exception { byte[] arr = new byte[10]; @@ -894,6 +920,7 @@ public void testReadArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReadFully() throws Exception { byte[] arr = new byte[10]; @@ -952,4 +979,4 @@ private ReadArrayTestClass(byte[] arr, boolean fully) { in.read(arr); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerMappingConsistencyTest.java b/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerMappingConsistencyTest.java index ba39e366e9f85..4350824ac016d 100644 --- a/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerMappingConsistencyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerMappingConsistencyTest.java @@ -27,15 +27,16 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridMarshallerMappingConsistencyTest extends GridCommonAbstractTest { /** Test cache name. */ private static final String CACHE_NAME = "cache"; @@ -45,9 +46,6 @@ public class GridMarshallerMappingConsistencyTest extends GridCommonAbstractTest File.separatorChar + "work" + File.separatorChar + "test"; - /** Ip finder. */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration igniteCfg = super.getConfiguration(igniteInstanceName); @@ -64,11 +62,6 @@ public class GridMarshallerMappingConsistencyTest extends GridCommonAbstractTest igniteCfg.setWorkDirectory(WORK_DIR + File.separator + igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(ipFinder); - - igniteCfg.setDiscoverySpi(discoSpi); - return igniteCfg; } @@ -101,6 +94,7 @@ private void clearWorkDir() throws IOException { * * @throws Exception If failed. */ + @Test public void testMappingsPersistedOnJoin() throws Exception { Ignite g1 = startGrid(1); Ignite g2 = startGrid(2); @@ -140,6 +134,7 @@ public void testMappingsPersistedOnJoin() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testPersistedMappingsSharedOnJoin() throws Exception { Ignite g1 = startGrid(1); startGrid(2); diff --git a/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerPerformanceTest.java b/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerPerformanceTest.java index c4e29c5ddfa9a..3a7df77146118 100644 --- a/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerPerformanceTest.java +++ b/modules/core/src/test/java/org/apache/ignite/marshaller/GridMarshallerPerformanceTest.java @@ -45,10 +45,14 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgniteOutClosure; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Marshallers benchmark. */ +@RunWith(JUnit4.class) public class GridMarshallerPerformanceTest extends GridCommonAbstractTest { /** Number of iterations per test. */ private static final int ITER_CNT = 1 * 1000 * 1000; @@ -64,6 +68,7 @@ public class GridMarshallerPerformanceTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSerialization() throws Exception { final ByteArrayOutputStream out = new ByteArrayOutputStream(); @@ -112,6 +117,7 @@ public void testSerialization() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGridMarshaller() throws Exception { final GridTuple tuple = new GridTuple<>(); @@ -135,6 +141,7 @@ public void testGridMarshaller() throws Exception { /** * @throws Exception If failed. */ + @Test public void testByteBuffer() throws Exception { final ByteBuffer buf = ByteBuffer.allocate(1024); @@ -160,6 +167,7 @@ public void testByteBuffer() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testKryo() throws Exception { final Kryo kryo = new Kryo(); @@ -524,4 +532,4 @@ static TestObject read(ByteBuffer buf) { list.equals(obj.list) && map.equals(obj.map); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerContextSelfTest.java b/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerContextSelfTest.java index da474df82365e..a7661b7b68909 100644 --- a/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerContextSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerContextSelfTest.java @@ -37,6 +37,9 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.nio.file.Files.readAllBytes; import static org.apache.ignite.internal.MarshallerPlatformIds.JAVA_ID; @@ -44,6 +47,7 @@ /** * Test marshaller context. */ +@RunWith(JUnit4.class) public class MarshallerContextSelfTest extends GridCommonAbstractTest { /** */ private GridTestKernalContext ctx; @@ -71,6 +75,7 @@ public class MarshallerContextSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testClassName() throws Exception { MarshallerContextImpl marshCtx = new MarshallerContextImpl(null, null); @@ -96,6 +101,7 @@ public void testClassName() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultiplatformMappingsCollecting() throws Exception { String nonJavaClassName = "random.platform.Mapping"; @@ -132,6 +138,7 @@ public void testMultiplatformMappingsCollecting() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMultiplatformMappingsDistributing() throws Exception { String nonJavaClassName = "random.platform.Mapping"; @@ -155,6 +162,7 @@ public void testMultiplatformMappingsDistributing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOnUpdated() throws Exception { File workDir = U.resolveWorkDirectory(U.defaultWorkDirectory(), "marshaller", false); MarshallerContextImpl ctx = new MarshallerContextImpl(null, null); @@ -187,6 +195,7 @@ public void testOnUpdated() throws Exception { * Tests that there is a null value inserted in allCaches list * if platform ids passed to marshaller cache were not sequential (like 0, 2). */ + @Test public void testCacheStructure0() throws Exception { MarshallerContextImpl ctx = new MarshallerContextImpl(null, null); @@ -220,6 +229,7 @@ public void testCacheStructure0() throws Exception { * Tests that there are no null values in allCaches list * if platform ids passed to marshaller context were sequential. */ + @Test public void testCacheStructure1() throws Exception { MarshallerContextImpl ctx = new MarshallerContextImpl(null, null); diff --git a/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerEnumDeadlockMultiJvmTest.java b/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerEnumDeadlockMultiJvmTest.java index 7042c03221d5f..2ff810601dbba 100644 --- a/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerEnumDeadlockMultiJvmTest.java +++ b/modules/core/src/test/java/org/apache/ignite/marshaller/MarshallerEnumDeadlockMultiJvmTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Contains test of Enum marshalling with various {@link Marshaller}s. See IGNITE-8547 for details. 
*/ +@RunWith(JUnit4.class) public class MarshallerEnumDeadlockMultiJvmTest extends GridCommonAbstractTest { /** */ private Factory marshFactory; @@ -44,6 +48,7 @@ public class MarshallerEnumDeadlockMultiJvmTest extends GridCommonAbstractTest { } /** */ + @Test public void testJdkMarshaller() throws Exception { marshFactory = new JdkMarshallerFactory(); @@ -51,6 +56,7 @@ public void testJdkMarshaller() throws Exception { } /** */ + @Test public void testOptimizedMarshaller() throws Exception { marshFactory = new OptimizedMarshallerFactory(); @@ -58,6 +64,7 @@ public void testOptimizedMarshaller() throws Exception { } /** */ + @Test public void testBinaryMarshaller() throws Exception { marshFactory = new BinaryMarshallerFactory(); diff --git a/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingNoPeerClassLoadingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingNoPeerClassLoadingSelfTest.java index 4338947de5658..e5620007f3f97 100644 --- a/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingNoPeerClassLoadingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingNoPeerClassLoadingSelfTest.java @@ -27,11 +27,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.P2; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for Messaging public API with disabled * peer class loading. */ +@RunWith(JUnit4.class) public class GridMessagingNoPeerClassLoadingSelfTest extends GridMessagingSelfTest { /** */ private static CountDownLatch rcvLatch; @@ -52,6 +56,7 @@ public class GridMessagingNoPeerClassLoadingSelfTest extends GridMessagingSelfTe * * @throws Exception If error occurs. 
*/ + @Test @Override public void testSendMessageWithExternalClassLoader() throws Exception { URL[] urls = new URL[] { new URL(GridTestProperties.getProperty("p2p.uri.cls")) }; @@ -92,4 +97,4 @@ unmarshal it (peer class loading is disabled.) */ assertFalse(rcvLatch.await(3, TimeUnit.SECONDS)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingSelfTest.java index a7c452122d566..867fd7c74980f 100644 --- a/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/messaging/GridMessagingSelfTest.java @@ -38,7 +38,6 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteMessaging; import org.apache.ignite.cluster.ClusterGroup; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.DiscoverySpiTestListener; import org.apache.ignite.internal.managers.discovery.IgniteDiscoverySpi; import org.apache.ignite.internal.processors.continuous.StartRoutineDiscoveryMessage; @@ -50,19 +49,20 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.assertThrows; /** * Various tests for Messaging public API. 
*/ +@RunWith(JUnit4.class) public class GridMessagingSelfTest extends GridCommonAbstractTest implements Serializable { /** */ private static final String MSG_1 = "MSG-1"; @@ -91,9 +91,6 @@ public class GridMessagingSelfTest extends GridCommonAbstractTest implements Ser /** */ public static final String EXT_RESOURCE_CLS_NAME = "org.apache.ignite.tests.p2p.TestUserResource"; - /** Shared IP finder. */ - private final transient TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ protected static CountDownLatch rcvLatch; @@ -199,20 +196,12 @@ public TestMessage() { ignite2 = null; } - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** * Tests simple message sending-receiving. * * @throws Exception If error occurs. */ + @Test public void testSendReceiveMessage() throws Exception { final Collection rcvMsgs = new GridConcurrentHashSet<>(); @@ -262,6 +251,7 @@ public void testSendReceiveMessage() throws Exception { * @throws Exception If error occurs. */ @SuppressWarnings("TooBroadScope") + @Test public void testStopLocalListen() throws Exception { final AtomicInteger msgCnt1 = new AtomicInteger(); @@ -374,6 +364,7 @@ public void testStopLocalListen() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testSendReceiveMessageWithStringTopic() throws Exception { final Collection rcvMsgs = new GridConcurrentHashSet<>(); @@ -497,6 +488,7 @@ public void testSendReceiveMessageWithStringTopic() throws Exception { * * @throws Exception If error occurs. 
*/ + @Test public void testSendReceiveMessageWithEnumTopic() throws Exception { final Collection rcvMsgs = new GridConcurrentHashSet<>(); @@ -621,6 +613,7 @@ public void testSendReceiveMessageWithEnumTopic() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testRemoteListen() throws Exception { final Collection rcvMsgs = new GridConcurrentHashSet<>(); @@ -658,6 +651,7 @@ public void testRemoteListen() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("TooBroadScope") + @Test public void testStopRemoteListen() throws Exception { final AtomicInteger msgCnt1 = new AtomicInteger(); @@ -751,6 +745,7 @@ public void testStopRemoteListen() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testRemoteListenOrderedMessages() throws Exception { List msgs = Arrays.asList( new TestMessage(MSG_1), @@ -795,7 +790,6 @@ public void testRemoteListenOrderedMessages() throws Exception { assertFalse(error.get()); - //noinspection AssertEqualsBetweenInconvertibleTypes assertEquals(msgs, Arrays.asList(rcvMsgs.toArray())); } @@ -805,6 +799,7 @@ public void testRemoteListenOrderedMessages() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testRemoteListenWithIntTopic() throws Exception { final Collection rcvMsgs = new GridConcurrentHashSet<>(); @@ -944,6 +939,7 @@ public void testRemoteListenWithIntTopic() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testSendMessageWithExternalClassLoader() throws Exception { URL[] urls = new URL[] {new URL(GridTestProperties.getProperty("p2p.uri.cls"))}; @@ -988,7 +984,7 @@ public void testSendMessageWithExternalClassLoader() throws Exception { * * @throws Exception If failed. 
*/ - @SuppressWarnings("ConstantConditions") + @Test public void testNullMessages() throws Exception { assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1026,6 +1022,7 @@ public void testNullMessages() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAsyncOld() throws Exception { final AtomicInteger msgCnt = new AtomicInteger(); @@ -1138,6 +1135,7 @@ public void testAsyncOld() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAsync() throws Exception { final AtomicInteger msgCnt = new AtomicInteger(); @@ -1214,6 +1212,7 @@ public void testAsync() throws Exception { * * @throws Exception If an error occurred. */ + @Test public void testRemoteListenForOldest() throws Exception { remoteListenForOldest(ignite1); diff --git a/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingSendAsyncTest.java b/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingSendAsyncTest.java index d51a5dd7c60e4..5222e35e8bdb3 100644 --- a/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingSendAsyncTest.java +++ b/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingSendAsyncTest.java @@ -29,22 +29,19 @@ import java.util.concurrent.atomic.AtomicReference; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteMessaging; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteBiInClosure; import org.apache.ignite.lang.IgniteBiPredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) 
public class IgniteMessagingSendAsyncTest extends GridCommonAbstractTest implements Serializable { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Threads number for multi-thread tests. */ private static final int THREADS = 10; @@ -54,15 +51,6 @@ public class IgniteMessagingSendAsyncTest extends GridCommonAbstractTest impleme /** */ private final String msgStr = "message"; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(gridName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -75,6 +63,7 @@ public class IgniteMessagingSendAsyncTest extends GridCommonAbstractTest impleme * * @throws Exception If failed. */ + @Test public void testSendDefaultMode() throws Exception { Ignite ignite1 = startGrid(1); @@ -91,6 +80,7 @@ public void testSendDefaultMode() throws Exception { * * @throws Exception If failed. */ + @Test public void testSendAsyncMode() throws Exception { Ignite ignite1 = startGrid(1); @@ -107,6 +97,7 @@ public void testSendAsyncMode() throws Exception { * * @throws Exception If failed. */ + @Test public void testSendDefaultMode2Nodes() throws Exception { Ignite ignite1 = startGrid(1); Ignite ignite2 = startGrid(2); @@ -124,6 +115,7 @@ public void testSendDefaultMode2Nodes() throws Exception { * * @throws Exception If failed. */ + @Test public void testSendAsyncMode2Node() throws Exception { Ignite ignite1 = startGrid(1); Ignite ignite2 = startGrid(2); @@ -141,6 +133,7 @@ public void testSendAsyncMode2Node() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSendOrderedDefaultMode() throws Exception { Ignite ignite1 = startGrid(1); @@ -159,6 +152,7 @@ public void testSendOrderedDefaultMode() throws Exception { * * @throws Exception If failed. */ + @Test public void testSendOrderedDefaultMode2Node() throws Exception { Ignite ignite1 = startGrid(1); Ignite ignite2 = startGrid(2); @@ -176,6 +170,7 @@ public void testSendOrderedDefaultMode2Node() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSendOrderedDefaultModeMultiThreads() throws Exception { Ignite ignite = startGrid(1); @@ -185,6 +180,7 @@ public void testSendOrderedDefaultModeMultiThreads() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSendOrderedDefaultModeMultiThreadsWith2Node() throws Exception { Ignite ignite1 = startGrid(1); Ignite ignite2 = startGrid(2); diff --git a/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingWithClientTest.java b/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingWithClientTest.java index b96728fe822a8..c26fab2da6329 100644 --- a/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingWithClientTest.java +++ b/modules/core/src/test/java/org/apache/ignite/messaging/IgniteMessagingWithClientTest.java @@ -32,18 +32,17 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteMessagingWithClientTest extends GridCommonAbstractTest implements Serializable { - 
/** */ - protected static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Message topic. */ private enum TOPIC { /** */ @@ -62,8 +61,6 @@ private enum TOPIC { ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); } - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } @@ -77,6 +74,7 @@ private enum TOPIC { /** * @throws Exception If failed. */ + @Test public void testMessageSendWithClientJoin() throws Exception { startGrid(0); @@ -162,4 +160,4 @@ private static class RemoteListener implements IgniteBiPredicate { return true; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/DeploymentClassLoaderCallableTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/DeploymentClassLoaderCallableTest.java index 9c0e4462f7970..ca75752377815 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/DeploymentClassLoaderCallableTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/DeploymentClassLoaderCallableTest.java @@ -25,9 +25,13 @@ import org.apache.ignite.testframework.GridTestExternalClassLoader; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class DeploymentClassLoaderCallableTest extends GridCommonAbstractTest { /** */ private static final String RUN_CLS = "org.apache.ignite.tests.p2p.compute.ExternalCallable"; @@ -47,6 +51,7 @@ public class DeploymentClassLoaderCallableTest extends GridCommonAbstractTest { /** * @throws Exception if failed. */ + @Test public void testDeploymentFromSecondAndThird() throws Exception { try { startGrid(1); @@ -67,6 +72,7 @@ public void testDeploymentFromSecondAndThird() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testDeploymentFromEach() throws Exception { try { final Ignite ignite1 = startGrid(1); @@ -87,6 +93,7 @@ public void testDeploymentFromEach() throws Exception { /** * @throws Exception if failed. */ + @Test public void testDeploymentFromOne() throws Exception { try { startGrid(1); diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployContinuousModeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployContinuousModeSelfTest.java index 865fff4b2917c..a7b035f753fef 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployContinuousModeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployContinuousModeSelfTest.java @@ -18,6 +18,9 @@ package org.apache.ignite.p2p; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.configuration.DeploymentMode.CONTINUOUS; @@ -25,13 +28,15 @@ * Continuous deployment mode test. */ @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridMultinodeRedeployContinuousModeSelfTest extends GridAbstractMultinodeRedeployTest { /** * Test GridDeploymentMode.CONTINUOUS mode. * * @throws Throwable if error occur. 
*/ + @Test public void testContinuousMode() throws Throwable { processTest(CONTINUOUS); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployIsolatedModeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployIsolatedModeSelfTest.java index 25cb8affad57b..cfcc055197c27 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployIsolatedModeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployIsolatedModeSelfTest.java @@ -18,6 +18,9 @@ package org.apache.ignite.p2p; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.configuration.DeploymentMode.ISOLATED; @@ -25,13 +28,15 @@ * Isolated deployment mode test. */ @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridMultinodeRedeployIsolatedModeSelfTest extends GridAbstractMultinodeRedeployTest { /** * Test GridDeploymentMode.ISOLATED mode. * * @throws Throwable if error occur. 
*/ + @Test public void testIsolatedMode() throws Throwable { processTest(ISOLATED); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployPrivateModeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployPrivateModeSelfTest.java index 708e62d6c71b7..bad6f5bde0fa8 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployPrivateModeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeployPrivateModeSelfTest.java @@ -18,6 +18,9 @@ package org.apache.ignite.p2p; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.configuration.DeploymentMode.PRIVATE; @@ -25,13 +28,15 @@ * Private deployment mode test. */ @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridMultinodeRedeployPrivateModeSelfTest extends GridAbstractMultinodeRedeployTest { /** * Test GridDeploymentMode.PRIVATE mode. * * @throws Throwable if error occur. 
*/ + @Test public void testPrivateMode() throws Throwable { processTest(PRIVATE); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeploySharedModeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeploySharedModeSelfTest.java index 93df50b54aff8..ade643f136011 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeploySharedModeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridMultinodeRedeploySharedModeSelfTest.java @@ -18,6 +18,9 @@ package org.apache.ignite.p2p; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.configuration.DeploymentMode.SHARED; @@ -25,13 +28,15 @@ * Shared deployment mode test. */ @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridMultinodeRedeploySharedModeSelfTest extends GridAbstractMultinodeRedeployTest { /** * Test GridDeploymentMode.SHARED mode. * * @throws Throwable if error occur. 
*/ + @Test public void testSharedMode() throws Throwable { processTest(SHARED); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PClassLoadingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PClassLoadingSelfTest.java index 7b2333fe4eff8..8f2aa7ebb57b6 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PClassLoadingSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PClassLoadingSelfTest.java @@ -35,6 +35,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.lang.IgniteProductVersion.fromString; @@ -42,6 +45,7 @@ * P2P test. */ @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PClassLoadingSelfTest extends GridCommonAbstractTest { /** */ private final ClassLoader tstClsLdr; @@ -60,7 +64,8 @@ public GridP2PClassLoadingSelfTest() { /** * @throws Exception If failed. 
*/ - @SuppressWarnings({"serial", "ConstantConditions"}) + @SuppressWarnings({"ConstantConditions"}) + @Test public void testClassLoading() throws Exception { ComputeTask task = (ComputeTask)tstClsLdr.loadClass(GridP2PTestTask.class.getName()).newInstance(); @@ -110,7 +115,6 @@ private static class TestGridNode extends GridMetadataAwareAdapter implements Cl } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Nullable @Override public T attribute(String name) { return null; } @@ -160,4 +164,4 @@ private static class TestGridNode extends GridMetadataAwareAdapter implements Cl return id().hashCode(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PContinuousDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PContinuousDeploymentSelfTest.java index 28dab3fe5374b..f15e121ea140e 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PContinuousDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PContinuousDeploymentSelfTest.java @@ -25,10 +25,10 @@ import org.apache.ignite.events.EventType; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -38,10 +38,8 @@ /** * Tests for continuous deployment with cache and changing topology. */ +@RunWith(JUnit4.class) public class GridP2PContinuousDeploymentSelfTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of grids cache. */ private static final int GRID_CNT = 2; @@ -71,12 +69,6 @@ public class GridP2PContinuousDeploymentSelfTest extends GridCommonAbstractTest else cfg.setCacheConfiguration(cacheConfiguration()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setPeerClassLoadingEnabled(true); if (clientMode) @@ -114,6 +106,7 @@ protected CacheConfiguration cacheConfiguration() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testDeployment() throws Exception { startGridsMultiThreaded(GRID_CNT); @@ -139,6 +132,7 @@ public void testDeployment() throws Exception { * * @throws Exception If failed. */ + @Test public void testServerJoinWithP2PClassDeployedInCluster() throws Exception { startGrids(GRID_CNT); @@ -176,4 +170,4 @@ public void testServerJoinWithP2PClassDeployedInCluster() throws Exception { awaitPartitionMapExchange(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDifferentClassLoaderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDifferentClassLoaderSelfTest.java index 3af720fd24a65..3551ad9fca0ec 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDifferentClassLoaderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDifferentClassLoaderSelfTest.java @@ -27,12 +27,16 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test P2P deployment tasks which loaded from different class loaders. 
*/ @SuppressWarnings({"ProhibitedExceptionDeclared", "ProhibitedExceptionThrown"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PDifferentClassLoaderSelfTest extends GridCommonAbstractTest { /** * Class Name of task 1. @@ -125,6 +129,7 @@ private void processTest(boolean isSameTask, boolean expectEquals) throws Except * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { depMode = DeploymentMode.PRIVATE; @@ -136,6 +141,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testIsolatedMode() throws Exception { depMode = DeploymentMode.ISOLATED; @@ -147,6 +153,7 @@ public void testIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testContinuousMode() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -158,6 +165,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSharedMode() throws Exception { depMode = DeploymentMode.SHARED; @@ -169,6 +177,7 @@ public void testSharedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testRedeployPrivateMode() throws Exception { depMode = DeploymentMode.PRIVATE; @@ -180,6 +189,7 @@ public void testRedeployPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testRedeployIsolatedMode() throws Exception { depMode = DeploymentMode.ISOLATED; @@ -191,6 +201,7 @@ public void testRedeployIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testRedeployContinuousMode() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -202,6 +213,7 @@ public void testRedeployContinuousMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testRedeploySharedMode() throws Exception { depMode = DeploymentMode.SHARED; @@ -221,4 +233,4 @@ private boolean isNotSame(int[] m1, int[] m2) { return m1[0] != m2[0] && m1[1] != m2[1]; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDoubleDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDoubleDeploymentSelfTest.java index f384090da289a..01ade5823bd3c 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDoubleDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PDoubleDeploymentSelfTest.java @@ -22,25 +22,23 @@ import org.apache.ignite.compute.ComputeTask; import org.apache.ignite.configuration.DeploymentMode; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestClassLoader; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PDoubleDeploymentSelfTest extends GridCommonAbstractTest { /** Deployment mode. */ private DeploymentMode depMode; - /** IP finder. 
*/ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -52,12 +50,6 @@ public class GridP2PDoubleDeploymentSelfTest extends GridCommonAbstractTest { // Test requires SHARED mode to test local deployment priority over p2p. cfg.setDeploymentMode(depMode); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCacheConfiguration(); return cfg; @@ -114,6 +106,7 @@ private void processTestBothNodesDeploy(DeploymentMode depMode) throws Exception /** * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.PRIVATE); } @@ -121,6 +114,7 @@ public void testPrivateMode() throws Exception { /** * @throws Exception if error occur. */ + @Test public void testIsolatedMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.ISOLATED); } @@ -128,6 +122,7 @@ public void testIsolatedMode() throws Exception { /** * @throws Exception if error occur. */ + @Test public void testContinuousMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.CONTINUOUS); } @@ -135,7 +130,8 @@ public void testContinuousMode() throws Exception { /** * @throws Exception if error occur. 
*/ + @Test public void testSharedMode() throws Exception { processTestBothNodesDeploy(DeploymentMode.SHARED); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PHotRedeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PHotRedeploymentSelfTest.java index 620bf24f33341..65c257f7f5730 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PHotRedeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PHotRedeploymentSelfTest.java @@ -24,12 +24,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings({"JUnitTestClassNamingConvention", "ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PHotRedeploymentSelfTest extends GridCommonAbstractTest { /** Task name. */ private static final String TASK_NAME = "org.apache.ignite.tests.p2p.P2PTestTaskExternalPath1"; @@ -164,6 +168,7 @@ private void processTestClassLoaderHotRedeployment(DeploymentMode depMode) throw * * @throws Exception if error occur. */ + @Test public void testSameClassLoaderIsolatedMode() throws Exception { processTestHotRedeployment(DeploymentMode.PRIVATE); } @@ -173,6 +178,7 @@ public void testSameClassLoaderIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSameClassLoaderIsolatedClassLoaderMode() throws Exception { processTestHotRedeployment(DeploymentMode.ISOLATED); } @@ -182,6 +188,7 @@ public void testSameClassLoaderIsolatedClassLoaderMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testSameClassLoaderContinuousMode() throws Exception { processTestHotRedeployment(DeploymentMode.CONTINUOUS); } @@ -191,6 +198,7 @@ public void testSameClassLoaderContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSameClassLoaderSharedMode() throws Exception { processTestHotRedeployment(DeploymentMode.SHARED); } @@ -200,6 +208,7 @@ public void testSameClassLoaderSharedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testNewClassLoaderHotRedeploymentPrivateMode() throws Exception { processTestClassLoaderHotRedeployment(DeploymentMode.PRIVATE); } @@ -209,6 +218,7 @@ public void testNewClassLoaderHotRedeploymentPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testNewClassLoaderHotRedeploymentIsolatedMode() throws Exception { processTestClassLoaderHotRedeployment(DeploymentMode.ISOLATED); } @@ -218,6 +228,7 @@ public void testNewClassLoaderHotRedeploymentIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testNewClassLoaderHotRedeploymentContinuousMode() throws Exception { processTestClassLoaderHotRedeployment(DeploymentMode.CONTINUOUS); } @@ -227,7 +238,8 @@ public void testNewClassLoaderHotRedeploymentContinuousMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testNewClassLoaderHotRedeploymentSharedMode() throws Exception { processTestClassLoaderHotRedeployment(DeploymentMode.SHARED); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PJobClassLoaderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PJobClassLoaderSelfTest.java index 0120a31e066a2..431fb0d8cf3ec 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PJobClassLoaderSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PJobClassLoaderSelfTest.java @@ -30,12 +30,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test to make sure that if job executes on the same node, it reuses the same class loader as task. */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PJobClassLoaderSelfTest extends GridCommonAbstractTest { /** * Current deployment mode. Used in {@link #getConfiguration(String)}. @@ -74,6 +78,7 @@ private void processTest(DeploymentMode depMode) throws Exception { * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { processTest(DeploymentMode.PRIVATE); } @@ -83,6 +88,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testIsolatedMode() throws Exception { processTest(DeploymentMode.ISOLATED); } @@ -92,6 +98,7 @@ public void testIsolatedMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testContinuousMode() throws Exception { processTest(DeploymentMode.CONTINUOUS); } @@ -101,6 +108,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSharedMode() throws Exception { processTest(DeploymentMode.SHARED); } @@ -144,4 +152,4 @@ public Serializable execute() { return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PLocalDeploymentSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PLocalDeploymentSelfTest.java index ca2aeb6df3e30..a33d16f424d9b 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PLocalDeploymentSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PLocalDeploymentSelfTest.java @@ -43,6 +43,9 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_CACHE_REMOVED_ENTRIES_TTL; import static org.apache.ignite.spi.deployment.local.LocalDeploymentSpi.IGNITE_DEPLOYMENT_ADDITIONAL_CHECK; @@ -52,6 +55,7 @@ */ @SuppressWarnings({"ProhibitedExceptionDeclared", "ObjectEquality"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PLocalDeploymentSelfTest extends GridCommonAbstractTest { /** * Current deployment mode. Used in {@link #getConfiguration(String)}. @@ -105,6 +109,7 @@ private void processSharedModeTest(DeploymentMode depMode) throws Exception { * @throws Exception if error occur. 
*/ @SuppressWarnings({"unchecked"}) + @Test public void testLocalDeployment() throws Exception { depMode = DeploymentMode.PRIVATE; @@ -182,6 +187,7 @@ private void processIsolatedModeTest(DeploymentMode depMode) throws Exception { * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { processIsolatedModeTest(DeploymentMode.PRIVATE); } @@ -191,6 +197,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testIsolatedMode() throws Exception { processIsolatedModeTest(DeploymentMode.ISOLATED); } @@ -200,6 +207,7 @@ public void testIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testContinuousMode() throws Exception { processSharedModeTest(DeploymentMode.CONTINUOUS); } @@ -209,6 +217,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSharedMode() throws Exception { processSharedModeTest(DeploymentMode.SHARED); } @@ -216,6 +225,7 @@ public void testSharedMode() throws Exception { /** * Tests concurrent deployment using delegating classloader for the task. 
*/ + @Test public void testConcurrentDeploymentWithDelegatingClassloader() throws Exception { depMode = DeploymentMode.SHARED; @@ -368,4 +378,4 @@ public static class DeployementTestJob extends ComputeJobAdapter { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PMissedResourceCacheSizeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PMissedResourceCacheSizeSelfTest.java index d5ecde90f3e79..65a80d3d6504f 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PMissedResourceCacheSizeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PMissedResourceCacheSizeSelfTest.java @@ -26,18 +26,19 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.Event; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestExternalClassLoader; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PMissedResourceCacheSizeSelfTest extends GridCommonAbstractTest { /** Task name. 
*/ private static final String TASK_NAME1 = "org.apache.ignite.tests.p2p.P2PTestTaskExternalPath1"; @@ -57,9 +58,6 @@ public class GridP2PMissedResourceCacheSizeSelfTest extends GridCommonAbstractTe /** */ private int missedRsrcCacheSize; - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -73,12 +71,6 @@ public class GridP2PMissedResourceCacheSizeSelfTest extends GridCommonAbstractTe cfg.setCacheConfiguration(); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } @@ -227,6 +219,7 @@ private void processSize0Test(DeploymentMode depMode) throws Exception { * * @throws Exception if error occur. */ + @Test public void testSize0PrivateMode() throws Exception { processSize0Test(DeploymentMode.PRIVATE); } @@ -236,6 +229,7 @@ public void testSize0PrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSize0IsolatedMode() throws Exception { processSize0Test(DeploymentMode.ISOLATED); } @@ -245,6 +239,7 @@ public void testSize0IsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSize0ContinuousMode() throws Exception { processSize0Test(DeploymentMode.CONTINUOUS); } @@ -254,6 +249,7 @@ public void testSize0ContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSize0SharedMode() throws Exception { processSize0Test(DeploymentMode.SHARED); } @@ -262,6 +258,7 @@ public void testSize0SharedMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testSize2PrivateMode() throws Exception { // processSize2Test(GridDeploymentMode.PRIVATE); } @@ -271,6 +268,7 @@ public void testSize2PrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSize2IsolatedMode() throws Exception { // processSize2Test(GridDeploymentMode.ISOLATED); } @@ -280,6 +278,7 @@ public void testSize2IsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSize2ContinuousMode() throws Exception { // processSize2Test(GridDeploymentMode.CONTINUOUS); } @@ -289,7 +288,8 @@ public void testSize2ContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSize2SharedMode() throws Exception { // processSize2Test(GridDeploymentMode.SHARED); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PNodeLeftSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PNodeLeftSelfTest.java index bc0d787dfe3c0..3735cc0aa0434 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PNodeLeftSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PNodeLeftSelfTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test P2P class loading in SHARED_CLASSLOADER_UNDEPLOY mode. */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PNodeLeftSelfTest extends GridCommonAbstractTest { /** */ private static final ClassLoader urlClsLdr1; @@ -105,6 +109,7 @@ private void processTest(boolean isExpectUndeploy) throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testContinuousMode() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -116,9 +121,10 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSharedMode() throws Exception { depMode = DeploymentMode.SHARED; processTest(true); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRecursionTaskSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRecursionTaskSelfTest.java index a35409d5db825..879e6fd987186 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRecursionTaskSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRecursionTaskSelfTest.java @@ -32,12 +32,16 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PRecursionTaskSelfTest extends GridCommonAbstractTest { /** * Current deployment mode. Used in {@link #getConfiguration(String)}. @@ -85,6 +89,7 @@ private void processTest(DeploymentMode depMode) throws Exception { * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { processTest(DeploymentMode.PRIVATE); } @@ -94,6 +99,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testIsolatedMode() throws Exception { processTest(DeploymentMode.ISOLATED); } @@ -103,6 +109,7 @@ public void testIsolatedMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testContinuousMode() throws Exception { processTest(DeploymentMode.CONTINUOUS); } @@ -112,6 +119,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSharedMode() throws Exception { processTest(DeploymentMode.SHARED); } diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRemoteClassLoadersSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRemoteClassLoadersSelfTest.java index 623c962dcf2af..f761e25560d37 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRemoteClassLoadersSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PRemoteClassLoadersSelfTest.java @@ -37,12 +37,16 @@ import org.apache.ignite.testframework.GridTestClassLoader; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PRemoteClassLoadersSelfTest extends GridCommonAbstractTest { /** Current deployment mode. Used in {@link #getConfiguration(String)}. */ private DeploymentMode depMode; @@ -178,6 +182,7 @@ Collections.EMPTY_MAP, getClass().getClassLoader(), * * @throws Exception if error occur. */ + @Test public void testSameClassLoaderPrivateMode() throws Exception { processTestSameRemoteClassLoader(DeploymentMode.PRIVATE); } @@ -187,6 +192,7 @@ public void testSameClassLoaderPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSameClassLoaderIsolatedMode() throws Exception { processTestSameRemoteClassLoader(DeploymentMode.ISOLATED); } @@ -196,6 +202,7 @@ public void testSameClassLoaderIsolatedMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testDifferentClassLoaderPrivateMode() throws Exception { processTestDifferentRemoteClassLoader(DeploymentMode.PRIVATE); } @@ -205,6 +212,7 @@ public void testDifferentClassLoaderPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testDifferentClassLoaderIsolatedMode() throws Exception { processTestDifferentRemoteClassLoader(DeploymentMode.ISOLATED); } @@ -289,4 +297,4 @@ public static class GridP2PRemoteTestTask extends ComputeTaskAdapter 0 : "Result of execution is: " + res + " for more information see GridP2PTestJob"; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PTimeoutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PTimeoutSelfTest.java index 56aab0eedeb64..0cd4e7bfca506 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PTimeoutSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PTimeoutSelfTest.java @@ -28,12 +28,16 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PTimeoutSelfTest extends GridCommonAbstractTest { /** Current deployment mode. Used in {@link #getConfiguration(String)}. */ private DeploymentMode depMode; @@ -138,6 +142,7 @@ private void processFilterTest(DeploymentMode depMode) throws Exception { * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { processTest(DeploymentMode.PRIVATE); } @@ -147,6 +152,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testIsolatedMode() throws Exception { processTest(DeploymentMode.ISOLATED); } @@ -156,6 +162,7 @@ public void testIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testContinuousMode() throws Exception { processTest(DeploymentMode.CONTINUOUS); } @@ -165,6 +172,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testSharedMode() throws Exception { processTest(DeploymentMode.SHARED); } @@ -174,6 +182,7 @@ public void testSharedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testFilterPrivateMode() throws Exception { processFilterTest(DeploymentMode.PRIVATE); } @@ -183,6 +192,7 @@ public void testFilterPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testFilterIsolatedMode() throws Exception { processFilterTest(DeploymentMode.ISOLATED); } @@ -192,6 +202,7 @@ public void testFilterIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testFilterContinuousMode() throws Exception { processFilterTest(DeploymentMode.CONTINUOUS); } @@ -201,7 +212,8 @@ public void testFilterContinuousMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testFilterSharedMode() throws Exception { processFilterTest(DeploymentMode.SHARED); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PUndeploySelfTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PUndeploySelfTest.java index 4eae06479d5dd..42eae7d8f3e14 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PUndeploySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/GridP2PUndeploySelfTest.java @@ -31,12 +31,16 @@ import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class GridP2PUndeploySelfTest extends GridCommonAbstractTest { /** Current deployment mode. Used in {@link #getConfiguration(String)}. */ private DeploymentMode depMode; @@ -171,6 +175,7 @@ private void processTestUndeployP2PTasks(DeploymentMode depMode) throws Exceptio * * @throws Exception if error occur. */ + @Test public void testUndeployLocalPrivateMode() throws Exception { processTestUndeployLocalTasks(DeploymentMode.PRIVATE); } @@ -180,6 +185,7 @@ public void testUndeployLocalPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testUndeployLocalIsolatedMode() throws Exception { processTestUndeployLocalTasks(DeploymentMode.ISOLATED); } @@ -189,6 +195,7 @@ public void testUndeployLocalIsolatedMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testUndeployLocalContinuousMode() throws Exception { processTestUndeployLocalTasks(DeploymentMode.CONTINUOUS); } @@ -198,6 +205,7 @@ public void testUndeployLocalContinuousMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testUndeployLocalSharedMode() throws Exception { processTestUndeployLocalTasks(DeploymentMode.SHARED); } @@ -207,6 +215,7 @@ public void testUndeployLocalSharedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testUndeployP2PPrivateMode() throws Exception { processTestUndeployP2PTasks(DeploymentMode.PRIVATE); } @@ -216,6 +225,7 @@ public void testUndeployP2PPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testUndeployP2PIsolatedMode() throws Exception { processTestUndeployP2PTasks(DeploymentMode.ISOLATED); } @@ -225,6 +235,7 @@ public void testUndeployP2PIsolatedMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testUndeployP2PContinuousMode() throws Exception { processTestUndeployP2PTasks(DeploymentMode.CONTINUOUS); } @@ -234,7 +245,8 @@ public void testUndeployP2PContinuousMode() throws Exception { * * @throws Exception if error occur. 
*/ + @Test public void testUndeployP2PSharedMode() throws Exception { processTestUndeployP2PTasks(DeploymentMode.SHARED); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/P2PScanQueryUndeployTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/P2PScanQueryUndeployTest.java index 73ce917d12cd6..b16fa98b1e837 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/P2PScanQueryUndeployTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/P2PScanQueryUndeployTest.java @@ -44,8 +44,12 @@ import org.apache.ignite.testframework.GridTestExternalClassLoader; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class P2PScanQueryUndeployTest extends GridCommonAbstractTest { /** Predicate classname. */ private static final String PREDICATE_CLASSNAME = "org.apache.ignite.tests.p2p.AlwaysTruePredicate"; @@ -108,6 +112,7 @@ public class P2PScanQueryUndeployTest extends GridCommonAbstractTest { * * @throws Exception if test failed. 
*/ + @Test public void testAfterClientDisconnect() throws Exception { ClassLoader extClsLdr = new GridTestExternalClassLoader(new URL[] {new URL(GridTestProperties.getProperty("p2p.uri.cls"))}); diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/P2PStreamingClassLoaderTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/P2PStreamingClassLoaderTest.java index a5418132369d7..cc38f045c28b5 100644 --- a/modules/core/src/test/java/org/apache/ignite/p2p/P2PStreamingClassLoaderTest.java +++ b/modules/core/src/test/java/org/apache/ignite/p2p/P2PStreamingClassLoaderTest.java @@ -27,9 +27,13 @@ import org.apache.ignite.stream.StreamTransformer; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ @GridCommonTest(group = "P2P") +@RunWith(JUnit4.class) public class P2PStreamingClassLoaderTest extends GridCommonAbstractTest { /** */ private static final String ENTRY_PROCESSOR_CLASS_NAME = "org.apache.ignite.tests.p2p.NoopCacheEntryProcessor"; @@ -91,6 +95,7 @@ private void processTest() throws Exception { * * @throws Exception if error occur. */ + @Test public void testPrivateMode() throws Exception { depMode = DeploymentMode.PRIVATE; @@ -102,6 +107,7 @@ public void testPrivateMode() throws Exception { * * @throws Exception if error occur. */ + @Test public void testContinuousMode() throws Exception { depMode = DeploymentMode.CONTINUOUS; @@ -113,6 +119,7 @@ public void testContinuousMode() throws Exception { * * @throws Exception if error occur. 
 */
+    @Test
     public void testSharedMode() throws Exception {
         depMode = DeploymentMode.SHARED;
diff --git a/modules/core/src/test/java/org/apache/ignite/p2p/SharedDeploymentTest.java b/modules/core/src/test/java/org/apache/ignite/p2p/SharedDeploymentTest.java
index cc0340e6c2bb8..e35468ab7eb91 100644
--- a/modules/core/src/test/java/org/apache/ignite/p2p/SharedDeploymentTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/p2p/SharedDeploymentTest.java
@@ -27,9 +27,13 @@
 import java.lang.reflect.Constructor;
 import java.net.URL;
 import java.util.Collection;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /** */
+@RunWith(JUnit4.class)
 public class SharedDeploymentTest extends GridCommonAbstractTest {
     /** */
     private static final String RUN_CLS = "org.apache.ignite.tests.p2p.compute.ExternalCallable";
@@ -50,6 +54,7 @@ public class SharedDeploymentTest extends GridCommonAbstractTest {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testDeploymentFromSecondAndThird() throws Exception {
         try {
             startGrid(1);
diff --git a/modules/core/src/test/java/org/apache/ignite/platform/PlatformDefaultJavaObjectFactorySelfTest.java b/modules/core/src/test/java/org/apache/ignite/platform/PlatformDefaultJavaObjectFactorySelfTest.java
index 45fda4fab70b2..851c4829dabbd 100644
--- a/modules/core/src/test/java/org/apache/ignite/platform/PlatformDefaultJavaObjectFactorySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/platform/PlatformDefaultJavaObjectFactorySelfTest.java
@@ -29,11 +29,14 @@
 import java.util.Map;
 import java.util.UUID;
 import java.util.concurrent.Callable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Dedicated tests for {@link PlatformDefaultJavaObjectFactory}.
  */
-@SuppressWarnings("ThrowableResultOfMethodCallIgnored")
+@RunWith(JUnit4.class)
 public class PlatformDefaultJavaObjectFactorySelfTest extends GridCommonAbstractTest {
     /** Name of the class. */
     private static final String CLS_NAME = TestJavaObject.class.getName();
@@ -44,6 +47,7 @@ public class PlatformDefaultJavaObjectFactorySelfTest extends GridCommonAbstract
     /**
      * Test normal object creation.
      */
+    @Test
     public void testNormal() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
@@ -77,6 +81,7 @@ public void testNormal() {
     /**
      * Test object creation with boxed property.
      */
+    @Test
     public void testBoxedProperty() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
@@ -90,6 +95,7 @@ public void testBoxedProperty() {
     /**
      * Test object creation without properties.
      */
+    @Test
     public void testNoProperties() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
@@ -103,6 +109,7 @@ public void testNoProperties() {
     /**
      * Test object creation with invalid property name.
      */
+    @Test
     public void testInvalidPropertyName() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
@@ -118,6 +125,7 @@ public void testInvalidPropertyName() {
     /**
      * Test object creation with invalid property value.
      */
+    @Test
     public void testInvalidPropertyValue() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
@@ -133,6 +141,7 @@ public void testInvalidPropertyValue() {
     /**
      * Test object creation without default constructor.
      */
+    @Test
     public void testNoDefaultConstructor() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
@@ -148,6 +157,7 @@ public void testNoDefaultConstructor() {
     /**
      * Test object creation with null class name.
      */
+    @Test
     public void testNullClassName() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
@@ -171,6 +181,7 @@ public void testNullClassName() {
     /**
      * Test object creation with invalid class name.
      */
+    @Test
     public void testInvalidClassName() {
         final PlatformDefaultJavaObjectFactory factory = new PlatformDefaultJavaObjectFactory();
diff --git a/modules/core/src/test/java/org/apache/ignite/platform/PlatformEventsWriteEventTask.java b/modules/core/src/test/java/org/apache/ignite/platform/PlatformEventsWriteEventTask.java
index 114414c158c1f..f78c094e3529d 100644
--- a/modules/core/src/test/java/org/apache/ignite/platform/PlatformEventsWriteEventTask.java
+++ b/modules/core/src/test/java/org/apache/ignite/platform/PlatformEventsWriteEventTask.java
@@ -102,7 +102,7 @@ private Job(long ptr, ClusterNode node) {
         IgniteUuid igniteUuid = new IgniteUuid(uuid, 3);
         ctx.writeEvent(writer, new CacheEvent("cacheName", node, node, "msg", evtType, 1, true, 2,
-            igniteUuid, 3, 4, true, 5, true, uuid, "cloClsName", "taskName"));
+            igniteUuid, null, 3, 4, true, 5, true, uuid, "cloClsName", "taskName"));
         //noinspection unchecked
         ctx.writeEvent(writer, new CacheQueryExecutedEvent(node, msg, evtType, "qryType", "cacheName",
diff --git a/modules/core/src/test/java/org/apache/ignite/platform/PlatformJavaObjectFactoryProxySelfTest.java b/modules/core/src/test/java/org/apache/ignite/platform/PlatformJavaObjectFactoryProxySelfTest.java
index 832af7ca219dd..38ad669cd278f 100644
--- a/modules/core/src/test/java/org/apache/ignite/platform/PlatformJavaObjectFactoryProxySelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/platform/PlatformJavaObjectFactoryProxySelfTest.java
@@ -33,11 +33,14 @@
 import java.util.Map;
 import java.util.UUID;
 import java.util.concurrent.Callable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Dedicated tests for {@link PlatformJavaObjectFactoryProxy}.
 */
-@SuppressWarnings("ThrowableResultOfMethodCallIgnored")
+@RunWith(JUnit4.class)
 public class PlatformJavaObjectFactoryProxySelfTest extends GridCommonAbstractTest {
     /** Name of the class. */
     private static final String CLS_NAME = TestJavaObject.class.getName();
@@ -64,6 +67,7 @@ public class PlatformJavaObjectFactoryProxySelfTe
     /**
      * Test normal object creation using default factory.
      */
+    @Test
     public void testDefaultFactoryNormal() {
         Map props = new HashMap<>();
@@ -96,6 +100,7 @@ public void testDefaultFactoryNormal() {
     /**
      * Test normal object creation using custom factory.
      */
+    @Test
     public void testCustomFactoryNormal() {
         Map props = new HashMap<>();
@@ -130,6 +135,7 @@ public void testCustomFactoryNormal() {
     /**
      * Test object creation with boxed property.
      */
+    @Test
     public void testCustomFactoryBoxedProperty() {
         PlatformJavaObjectFactoryProxy proxy = proxyForCustom(NO_DFLT_CTOR_FACTORY_CLS_NAME,
             Collections.singletonMap("fIntBoxed", (Object)1));
@@ -142,6 +148,7 @@ public void testCustomFactoryBoxedProperty() {
     /**
      * Test object creation without properties.
      */
+    @Test
     public void testCustomFactoryNoProperties() {
         PlatformJavaObjectFactoryProxy proxy = proxyForCustom(NO_DFLT_CTOR_FACTORY_CLS_NAME, Collections.emptyMap());
@@ -154,6 +161,7 @@ public void testCustomFactoryNoProperties() {
     /**
      * Test object creation with invalid property name.
      */
+    @Test
     public void testCustomFactoryInvalidPropertyName() {
         final PlatformJavaObjectFactoryProxy proxy = proxyForCustom(NO_DFLT_CTOR_FACTORY_CLS_NAME,
             Collections.singletonMap("invalid", (Object)1));
@@ -168,6 +176,7 @@ public void testCustomFactoryInvalidPropertyName() {
     /**
      * Test object creation with invalid property value.
      */
+    @Test
     public void testCustomFactoryInvalidPropertyValue() {
         final PlatformJavaObjectFactoryProxy proxy = proxyForCustom(NO_DFLT_CTOR_FACTORY_CLS_NAME,
             Collections.singletonMap("fInt", (Object)1L));
@@ -182,6 +191,7 @@ public void testCustomFactoryInvalidPropertyValue() {
     /**
      * Test object creation with null class name.
      */
+    @Test
     public void testCustomFactoryNullClassName() {
         GridTestUtils.assertThrows(null, new Callable() {
             @Override public Object call() throws Exception {
@@ -199,6 +209,7 @@ public void testCustomFactoryNullClassName() {
     /**
      * Test object creation with invalid class name.
      */
+    @Test
     public void testCustomFactoryInvalidClassName() {
         GridTestUtils.assertThrows(null, new Callable() {
             @Override public Object call() throws Exception {
diff --git a/modules/core/src/test/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilderTest.java b/modules/core/src/test/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilderTest.java
index 0ac7bc739f179..f6dd03992857d 100644
--- a/modules/core/src/test/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilderTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/plugin/security/SecurityPermissionSetBuilderTest.java
@@ -25,11 +25,17 @@
 import org.apache.ignite.IgniteException;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.plugin.security.SecurityPermission.ADMIN_VIEW;
+import static org.apache.ignite.plugin.security.SecurityPermission.CACHE_CREATE;
+import static org.apache.ignite.plugin.security.SecurityPermission.CACHE_DESTROY;
 import static org.apache.ignite.plugin.security.SecurityPermission.CACHE_PUT;
 import static org.apache.ignite.plugin.security.SecurityPermission.CACHE_READ;
 import static org.apache.ignite.plugin.security.SecurityPermission.CACHE_REMOVE;
+import static org.apache.ignite.plugin.security.SecurityPermission.JOIN_AS_SERVER;
 import static org.apache.ignite.plugin.security.SecurityPermission.SERVICE_DEPLOY;
 import static org.apache.ignite.plugin.security.SecurityPermission.SERVICE_INVOKE;
 import static org.apache.ignite.plugin.security.SecurityPermission.EVENTS_ENABLE;
@@ -40,10 +46,12 @@
 /**
  * Test for check correct work {@link SecurityPermissionSetBuilder permission builder}
  */
+@RunWith(JUnit4.class)
 public class SecurityPermissionSetBuilderTest extends GridCommonAbstractTest {
     /** */
-    @SuppressWarnings({"ThrowableNotThrown", "ArraysAsListWithZeroOrOneArgument"})
+    @SuppressWarnings({"ThrowableNotThrown"})
+    @Test
     public void testPermissionBuilder() {
         SecurityBasicPermissionSet exp = new SecurityBasicPermissionSet();
@@ -65,7 +73,7 @@ public void testPermissionBuilder() {
         exp.setServicePermissions(permSrvc);
-        exp.setSystemPermissions(permissions(ADMIN_VIEW, EVENTS_ENABLE));
+        exp.setSystemPermissions(permissions(ADMIN_VIEW, EVENTS_ENABLE, JOIN_AS_SERVER, CACHE_CREATE, CACHE_DESTROY));
         final SecurityPermissionSetBuilder permsBuilder = new SecurityPermissionSetBuilder();
@@ -80,7 +88,7 @@ public void testPermissionBuilder() {
         assertThrows(log, new Callable() {
             @Override public Object call() throws Exception {
-                permsBuilder.appendTaskPermissions("task", CACHE_READ);
+                permsBuilder.appendTaskPermissions("task", CACHE_READ, JOIN_AS_SERVER);
                 return null;
             }
         }, IgniteException.class,
@@ -93,7 +101,7 @@ public void testPermissionBuilder() {
                 return null;
             }
         }, IgniteException.class,
-            "you can assign permission only start with [EVENTS_, ADMIN_], but you try TASK_EXECUTE"
+            "you can assign permission only start with [EVENTS_, ADMIN_, CACHE_CREATE, CACHE_DESTROY, JOIN_AS_SERVER], but you try TASK_EXECUTE"
         );
         assertThrows(log, new Callable() {
@@ -102,7 +110,16 @@ public void testPermissionBuilder() {
                 return null;
             }
         }, IgniteException.class,
-            "you can assign permission only start with [EVENTS_, ADMIN_], but you try SERVICE_INVOKE"
+            "you can assign permission only start with [EVENTS_, ADMIN_, CACHE_CREATE, CACHE_DESTROY, JOIN_AS_SERVER], but you try SERVICE_INVOKE"
+        );
+
+        assertThrows(log, new Callable() {
+            @Override public Object call() throws Exception {
+                permsBuilder.appendCachePermissions("cache", CACHE_CREATE);
+                return null;
+            }
+        }, IgniteException.class,
+            "CACHE_CREATE should be assigned as system permission, not cache permission"
         );
         permsBuilder
@@ -116,7 +133,9 @@ public void testPermissionBuilder() {
             .appendServicePermissions("service2", SERVICE_INVOKE)
             .appendServicePermissions("service2", SERVICE_INVOKE)
             .appendSystemPermissions(ADMIN_VIEW)
-            .appendSystemPermissions(ADMIN_VIEW, EVENTS_ENABLE);
+            .appendSystemPermissions(ADMIN_VIEW, EVENTS_ENABLE)
+            .appendSystemPermissions(JOIN_AS_SERVER)
+            .appendSystemPermissions(CACHE_CREATE, CACHE_DESTROY);
         SecurityPermissionSet actual = permsBuilder.build();
diff --git a/modules/core/src/test/java/org/apache/ignite/services/ServiceThreadPoolSelfTest.java b/modules/core/src/test/java/org/apache/ignite/services/ServiceThreadPoolSelfTest.java
index 40efdfaa91d4f..f0ebd69be992e 100644
--- a/modules/core/src/test/java/org/apache/ignite/services/ServiceThreadPoolSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/services/ServiceThreadPoolSelfTest.java
@@ -20,27 +20,16 @@
 import org.apache.ignite.Ignite;
 import org.apache.ignite.Ignition;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Test verifying that services thread pool is properly used.
 */
+@RunWith(JUnit4.class)
 public class ServiceThreadPoolSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
-    /** {@inheritDoc} */
-    @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
-        IgniteConfiguration cfg = super.getConfiguration(gridName);
-
-        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER));
-
-        return cfg;
-    }
-
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
         stopAllGrids();
@@ -49,6 +38,7 @@ public class ServiceThreadPoolSelfTest extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testDefaultPoolSize() throws Exception {
         Ignite ignite = startGrid("grid", new IgniteConfiguration());
@@ -61,6 +51,7 @@ public void testDefaultPoolSize() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInheritedPoolSize() throws Exception {
         Ignite ignite = startGrid("grid", new IgniteConfiguration().setPublicThreadPoolSize(42));
@@ -73,6 +64,7 @@ public void testInheritedPoolSize() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testCustomPoolSize() throws Exception {
         Ignite ignite = startGrid("grid", new IgniteConfiguration().setServiceThreadPoolSize(42));
@@ -85,6 +77,7 @@ public void testCustomPoolSize() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testExecution() throws Exception {
         startGrid(0); // Server.
diff --git a/modules/core/src/test/java/org/apache/ignite/session/GridSessionCancelSiblingsFromFutureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/session/GridSessionCancelSiblingsFromFutureSelfTest.java
index 68a6771e334ff..8074d75efbb1e 100644
--- a/modules/core/src/test/java/org/apache/ignite/session/GridSessionCancelSiblingsFromFutureSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/session/GridSessionCancelSiblingsFromFutureSelfTest.java
@@ -45,12 +45,16 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Test of session siblings cancellation from future.
  */
 @SuppressWarnings({"CatchGenericClass"})
 @GridCommonTest(group = "Task Session")
+@RunWith(JUnit4.class)
 public class GridSessionCancelSiblingsFromFutureSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int WAIT_TIME = 20000;
@@ -95,6 +99,7 @@ public GridSessionCancelSiblingsFromFutureSelfTest() {
     /**
      * @throws Exception if failed
      */
+    @Test
     public void testCancelSiblings() throws Exception {
         refreshInitialData();
@@ -105,6 +110,7 @@ public void testCancelSiblings() throws Exception {
     /**
      * @throws Exception if failed
      */
+    @Test
     public void testMultiThreaded() throws Exception {
         refreshInitialData();
@@ -284,4 +290,4 @@ private static class GridTaskSessionTestTask extends ComputeTaskSplitAdapter {
         return results.get(0).getData();
     }
 }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/session/GridSessionJobWaitTaskAttributeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/session/GridSessionJobWaitTaskAttributeSelfTest.java
index e7490e963104e..4c12eae257bb8 100644
--- a/modules/core/src/test/java/org/apache/ignite/session/GridSessionJobWaitTaskAttributeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/session/GridSessionJobWaitTaskAttributeSelfTest.java
@@ -43,12 +43,16 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
 @SuppressWarnings({"CatchGenericClass"})
 @GridCommonTest(group = "Task Session")
+@RunWith(JUnit4.class)
 public class GridSessionJobWaitTaskAttributeSelfTest extends GridCommonAbstractTest {
     /** */
     public static final int SPLIT_COUNT = 5;
@@ -79,6 +83,7 @@ public GridSessionJobWaitTaskAttributeSelfTest() {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testSetAttribute() throws Exception {
         for (int i = 0; i < EXEC_COUNT; i++)
             checkTask(i);
@@ -87,6 +92,7 @@ public void testSetAttribute() throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testMultiThreaded() throws Exception {
         final GridThreadSerialNumber sNum = new GridThreadSerialNumber();
@@ -221,4 +227,4 @@ private static class GridTaskSessionTestTask extends ComputeTaskSplitAdapter getAttributes() { return attrs; }
 }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/session/GridSessionSetTaskAttributeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/session/GridSessionSetTaskAttributeSelfTest.java
index ec8d5a3bf7579..d4764e959ed36 100644
--- a/modules/core/src/test/java/org/apache/ignite/session/GridSessionSetTaskAttributeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/session/GridSessionSetTaskAttributeSelfTest.java
@@ -39,12 +39,16 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
 @SuppressWarnings({"CatchGenericClass"})
 @GridCommonTest(group = "Task Session")
+@RunWith(JUnit4.class)
 public class GridSessionSetTaskAttributeSelfTest extends GridCommonAbstractTest {
     /** */
     public static final int SPLIT_COUNT = 5;
@@ -60,6 +64,7 @@ public GridSessionSetTaskAttributeSelfTest() {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testSetAttribute() throws Exception {
         Ignite ignite = G.ignite(getTestIgniteInstanceName());
@@ -72,6 +77,7 @@ public void testSetAttribute() throws Exception {
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testMultiThreaded() throws Exception {
         Ignite ignite = G.ignite(getTestIgniteInstanceName());
@@ -200,4 +206,4 @@ private static class GridTaskSessionTestTask extends ComputeTaskSplitAdapter= timeout);
+
+        try {
+            strategy.nextTimeout();
+
+            fail("Should fail with IgniteSpiOperationTimeoutException");
+        } catch (IgniteSpiOperationTimeoutException ignored) {
+            //No-op
+        }
+
+        return;
+    }
+}
+
+
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/GridSpiLocalHostInjectionTest.java b/modules/core/src/test/java/org/apache/ignite/spi/GridSpiLocalHostInjectionTest.java
index 82205ffa3830c..79fc96eabf813 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/GridSpiLocalHostInjectionTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/GridSpiLocalHostInjectionTest.java
@@ -24,10 +24,14 @@
 import org.apache.ignite.testframework.junits.GridTestKernalContext;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * This class tests injection of {@code localHost} property to various SPIs.
  */
+@RunWith(JUnit4.class)
 public class GridSpiLocalHostInjectionTest extends GridCommonAbstractTest {
     /** Value to be set globally in config. */
     public static final String CONFIG_LOCAL_ADDR_VALUE = "127.0.0.3";
@@ -38,6 +42,7 @@ public class GridSpiLocalHostInjectionTest extends GridCommonAbstractTest {
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpDiscoverySpiBothSet() throws IgniteCheckedException {
         processTcpDiscoverySpiTestInjection(true, true, SPI_LOCAL_ADDR_VALUE);
     }
@@ -45,6 +50,7 @@ public void testTcpDiscoverySpiBothSet() throws IgniteCheckedException {
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpDiscoverySpiOnlySet() throws IgniteCheckedException {
         processTcpDiscoverySpiTestInjection(false, true, SPI_LOCAL_ADDR_VALUE);
     }
@@ -52,6 +58,7 @@ public void testTcpDiscoverySpiOnlySet() throws IgniteCheckedException {
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpDiscoverySpiConfigOnlySet() throws IgniteCheckedException {
         processTcpDiscoverySpiTestInjection(true, false, CONFIG_LOCAL_ADDR_VALUE);
     }
@@ -59,6 +66,7 @@ public void testTcpDiscoverySpiConfigOnlySet() throws IgniteCheckedException {
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpDiscoverySpiBothNotSet() throws IgniteCheckedException {
         processTcpDiscoverySpiTestInjection(false, false, null);
     }
@@ -66,6 +74,7 @@ public void testTcpDiscoverySpiBothNotSet() throws IgniteCheckedException {
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpCommunicationSpiBothSet() throws IgniteCheckedException {
         processTcpCommunicationSpiTestInjection(true, true, SPI_LOCAL_ADDR_VALUE);
     }
@@ -73,6 +82,7 @@ public void testTcpCommunicationSpiBothSet() throws IgniteCheckedException {
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpCommunicationSpiOnlySet() throws IgniteCheckedException {
         processTcpCommunicationSpiTestInjection(false, true, SPI_LOCAL_ADDR_VALUE);
     }
@@ -80,6 +90,7 @@ public void testTcpCommunicationSpiOnlySet() throws IgniteCheckedException {
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpCommunicationSpiConfigOnlySet() throws IgniteCheckedException {
         processTcpCommunicationSpiTestInjection(true, false, CONFIG_LOCAL_ADDR_VALUE);
     }
@@ -87,6 +98,7 @@ public void testTcpCommunicationSpiConfigOnlySet() throws IgniteCheckedException
     /**
      * @throws IgniteCheckedException If test fails.
      */
+    @Test
     public void testTcpCommunicationSpiBothNotSet() throws IgniteCheckedException {
         processTcpCommunicationSpiTestInjection(false, false, null);
     }
@@ -153,4 +165,4 @@ private GridResourceProcessor getResourceProcessor(boolean cfgVal) throws Ignite
         return proc;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/GridSpiStartStopAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/spi/GridSpiStartStopAbstractTest.java
index 2d339bb8f0031..0b92b16d7e468 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/GridSpiStartStopAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/GridSpiStartStopAbstractTest.java
@@ -18,11 +18,15 @@
 package org.apache.ignite.spi;

 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Base SPI start-stop test class.
  * @param SPI implementation class.
  */
+@RunWith(JUnit4.class)
 public abstract class GridSpiStartStopAbstractTest extends GridSpiAbstractTest {
     /** */
     public static final int COUNT = 5;
@@ -42,6 +46,7 @@ protected int getCount() {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testStartStop() throws Exception {
         info("Spi start-stop test [count=" + getCount() + ", spi=" + getSpiClass().getSimpleName() + ']');
@@ -67,6 +72,7 @@ public void testStartStop() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testStop() throws Exception {
         IgniteSpi spi = getSpiClass().newInstance();
@@ -76,4 +82,4 @@ public void testStop() throws Exception {
         spi.spiStop();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/GridTcpSpiForwardingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/GridTcpSpiForwardingSelfTest.java
index 5d8e31631b530..f877c23a0c433 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/GridTcpSpiForwardingSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/GridTcpSpiForwardingSelfTest.java
@@ -40,10 +40,14 @@
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Test for {@link TcpDiscoverySpi} and {@link TcpCommunicationSpi}.
  */
+@RunWith(JUnit4.class)
 public class GridTcpSpiForwardingSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int locPort1 = 47500;
@@ -142,6 +146,7 @@ else if (getTestIgniteInstanceName(1).equals(igniteInstanceName)) {
     /**
      * @throws Exception If any error occurs.
      */
+    @Test
     public void testCustomResolver() throws Exception {
         final Map> map = new HashMap<>();
@@ -162,6 +167,7 @@ public void testCustomResolver() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBasicResolverMapPorts() throws Exception {
         Map map = new HashMap<>();
@@ -178,6 +184,7 @@ public void testBasicResolverMapPorts() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBasicResolverMapAddress() throws Exception {
         Map map = new HashMap<>();
@@ -193,6 +200,7 @@ public void testBasicResolverMapAddress() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testBasicResolverErrors() throws Exception {
         GridTestUtils.assertThrows(
             log,
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/GridCheckpointSpiAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/GridCheckpointSpiAbstractTest.java
index 489f53ebab0a1..92cf0b4a9235a 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/GridCheckpointSpiAbstractTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/GridCheckpointSpiAbstractTest.java
@@ -19,12 +19,15 @@
 import org.apache.ignite.GridTestIoUtils;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Grid checkpoint SPI abstract test.
  * @param Concrete SPI class.
  */
-@SuppressWarnings({"CatchGenericClass"})
+@RunWith(JUnit4.class)
 public abstract class GridCheckpointSpiAbstractTest extends GridSpiAbstractTest {
     /** */
@@ -36,6 +39,7 @@ public abstract class GridCheckpointSpiAbstractTest
     /**
      * @throws Exception Thrown in case of any errors.
      */
+    @Test
     public void testSaveLoadRemoveWithoutExpire() throws Exception {
         String dataPrefix = "Test check point data ";
@@ -86,6 +90,7 @@ public void testSaveLoadRemoveWithoutExpire() throws Exception {
     /**
      * @throws Exception Thrown in case of any errors.
      */
+    @Test
     public void testSaveWithExpire() throws Exception {
         // Save states.
         for (int i = 0; i < CHECK_POINT_COUNT; i++) {
@@ -112,6 +117,7 @@ public void testSaveWithExpire() throws Exception {
     /**
      * @throws Exception Thrown in case of any errors.
      */
+    @Test
     public void testDuplicates() throws Exception {
         int idx1 = 1;
         int idx2 = 2;
@@ -138,4 +144,4 @@ public void testDuplicates() throws Exception {
         assert finalSerState == null : "Checkpoint state should not be loaded with key: " + CHECK_POINT_KEY_PREFIX;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiConfigSelfTest.java
index a933333f36196..af0dcdd6a7ed5 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiConfigSelfTest.java
@@ -19,17 +19,22 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Grid cache checkpoint SPI config self test.
  */
 @GridSpiTest(spi = CacheCheckpointSpi.class, group = "Checkpoint SPI")
+@RunWith(JUnit4.class)
 public class CacheCheckpointSpiConfigSelfTest extends GridSpiAbstractConfigTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(new CacheCheckpointSpi(), "cacheName", null);
         checkNegativeSpiProperty(new CacheCheckpointSpi(), "cacheName", "");
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiSecondCacheSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiSecondCacheSelfTest.java
index 43977834a005d..e75f6cebc77fb 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiSecondCacheSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/cache/CacheCheckpointSpiSecondCacheSelfTest.java
@@ -20,10 +20,10 @@
 import org.apache.ignite.IgniteCache;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheMode.REPLICATED;
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
@@ -31,10 +31,8 @@
 /**
  * Test for cache checkpoint SPI with second cache configured.
  */
+@RunWith(JUnit4.class)
 public class CacheCheckpointSpiSecondCacheSelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Checkpoints cache name.
      */
     private static final String CP_CACHE = "checkpoints";
@@ -47,12 +45,6 @@ public CacheCheckpointSpiSecondCacheSelfTest() {
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(disco);
-
         CacheConfiguration cacheCfg1 = defaultCacheConfiguration();
         cacheCfg1.setName(DEFAULT_CACHE_NAME);
@@ -79,6 +71,7 @@ public CacheCheckpointSpiSecondCacheSelfTest() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSecondCachePutRemove() throws Exception {
         IgniteCache data = grid().cache(DEFAULT_CACHE_NAME);
         IgniteCache cp = grid().cache(CP_CACHE);
@@ -101,4 +94,4 @@ public void testSecondCachePutRemove() throws Exception {
         assertEquals("1", cp.get(1));
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/jdbc/JdbcCheckpointSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/jdbc/JdbcCheckpointSpiConfigSelfTest.java
index 1affc9cb64ae3..53d28de207b2e 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/jdbc/JdbcCheckpointSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/jdbc/JdbcCheckpointSpiConfigSelfTest.java
@@ -21,6 +21,9 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
 import org.hsqldb.jdbc.jdbcDataSource;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.spi.checkpoint.jdbc.JdbcCheckpointSpi.DFLT_CHECKPOINT_TABLE_NAME;
 import static org.apache.ignite.spi.checkpoint.jdbc.JdbcCheckpointSpi.DFLT_EXPIRE_DATE_FIELD_NAME;
@@ -33,10 +36,12 @@
  * Grid jdbc checkpoint SPI config self test.
  */
 @GridSpiTest(spi = JdbcCheckpointSpi.class, group = "Checkpoint SPI")
+@RunWith(JUnit4.class)
 public class JdbcCheckpointSpiConfigSelfTest extends GridSpiAbstractConfigTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(new JdbcCheckpointSpi(), "dataSource", null);
@@ -79,4 +84,4 @@ public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(spi, "expireDateFieldType", null);
         checkNegativeSpiProperty(spi, "expireDateFieldType", "");
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiConfigSelfTest.java
index 983a35860fc9d..54757f01096b1 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiConfigSelfTest.java
@@ -20,17 +20,22 @@
 import java.util.LinkedList;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Grid shared file system checkpoint SPI config self test.
  */
 @GridSpiTest(spi = SharedFsCheckpointSpi.class, group = "Checkpoint SPI")
+@RunWith(JUnit4.class)
 public class GridSharedFsCheckpointSpiConfigSelfTest extends GridSpiAbstractConfigTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(new SharedFsCheckpointSpi(), "directoryPaths", null);
         checkNegativeSpiProperty(new SharedFsCheckpointSpi(), "directoryPaths", new LinkedList());
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultiThreadedSelfTest.java
index 85c19bcd6e789..df1e6f42f1f41 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultiThreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultiThreadedSelfTest.java
@@ -31,11 +31,15 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Tests SPI in multi-threaded environment.
  */
 @GridSpiTest(spi = SharedFsCheckpointSpi.class, group = "Checkpoint SPI")
+@RunWith(JUnit4.class)
 public class GridSharedFsCheckpointSpiMultiThreadedSelfTest extends GridSpiAbstractTest {
     /** */
@@ -65,6 +69,7 @@ public Collection getDirectoryPaths() {
     *
     * @throws Exception If failed.
     */
+    @Test
    public void testSpi() throws Exception {
        final AtomicInteger writeFinished = new AtomicInteger();
@@ -198,4 +203,4 @@ void deleteFolder(File f) {
    @Override protected void afterTestsStopped() throws Exception {
        deleteFolder(new File(U.getIgniteHome(), PATH));
    }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest.java
index 2eea7c16d5dd5..22bd4cf920196 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/checkpoint/sharedfs/GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest.java
@@ -27,11 +27,15 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Tests multiple shared directories.
  */
 @GridSpiTest(spi = SharedFsCheckpointSpi.class, group = "Checkpoint SPI")
+@RunWith(JUnit4.class)
 public class GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest extends GridSpiAbstractTest {
     /** */
@@ -59,6 +63,7 @@ public Collection getDirectoryPaths() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleSharedDirectories() throws Exception {
         String data = "Test check point data.";
@@ -117,4 +122,4 @@ public void testMultipleSharedDirectories() throws Exception {
         assert error : "Check point should not be saved.";
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiConfigSelfTest.java
index 5b5cdb201a6ad..fb0a298c07503 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiConfigSelfTest.java
@@ -19,17 +19,22 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Unit tests for {@link FifoQueueCollisionSpi} config.
  */
 @GridSpiTest(spi = FifoQueueCollisionSpi.class, group = "Collision SPI")
+@RunWith(JUnit4.class)
 public class GridFifoQueueCollisionSpiConfigSelfTest extends GridSpiAbstractConfigTest {
     /**
      * @throws Exception If failed.
*/ + @Test public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new FifoQueueCollisionSpi(), "parallelJobsNumber", 0); checkNegativeSpiProperty(new FifoQueueCollisionSpi(), "waitingJobsNumber", -1); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiSelfTest.java index 1195bcb69e9f1..ec0f57161b40f 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/fifoqueue/GridFifoQueueCollisionSpiSelfTest.java @@ -26,15 +26,20 @@ import org.apache.ignite.spi.collision.GridTestCollisionTaskSession; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Unit tests for {@link FifoQueueCollisionSpi}. */ @GridSpiTest(spi = FifoQueueCollisionSpi.class, group = "Collision SPI") +@RunWith(JUnit4.class) public class GridFifoQueueCollisionSpiSelfTest extends GridSpiAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCollision0() throws Exception { int activeCnt = 2; @@ -67,6 +72,7 @@ public void testCollision0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollision1() throws Exception { getSpi().setParallelJobsNumber(32); @@ -95,6 +101,7 @@ public void testCollision1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollision2() throws Exception { getSpi().setParallelJobsNumber(3); @@ -116,6 +123,7 @@ public void testCollision2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCollision3() throws Exception { getSpi().setParallelJobsNumber(15); @@ -175,4 +183,4 @@ private GridCollisionTestContext createContext(int activeNum, int passiveNum) { return new GridCollisionTestContext(activeJobs, passiveJobs); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiAttributesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiAttributesSelfTest.java index 2eea92abd73f0..da1b80d017159 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiAttributesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiAttributesSelfTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_SPI_CLASS; import static org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi.WAIT_JOBS_THRESHOLD_NODE_ATTR; @@ -42,6 +45,7 @@ * Job stealing attributes test. */ @GridSpiTest(spi = JobStealingCollisionSpi.class, group = "Collision SPI") +@RunWith(JUnit4.class) public class GridJobStealingCollisionSpiAttributesSelfTest extends GridSpiAbstractTest { /** */ private static GridTestNode rmtNode; @@ -129,6 +133,7 @@ private void addSpiDependency(GridTestNode node) throws Exception { /** * @throws Exception If test failed. */ + @Test public void testSameAttribute() throws Exception { List waitCtxs = Collections.emptyList(); @@ -162,6 +167,7 @@ public void testSameAttribute() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testEmptyRemoteAttribute() throws Exception { List waitCtxs = Collections.emptyList(); @@ -189,6 +195,7 @@ public void testEmptyRemoteAttribute() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testEmptyLocalAttribute() throws Exception { // Collision SPI does not allow to send more than 1 message in a // certain period of time (see getMessageExpireTime() method). @@ -221,6 +228,7 @@ public void testEmptyLocalAttribute() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDiffAttribute() throws Exception { List waitCtxs = Collections.emptyList(); @@ -254,6 +262,7 @@ public void testDiffAttribute() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testBothEmptyAttribute() throws Exception { // Collision SPI does not allow to send more than 1 message in a // certain period of time (see getMessageExpireTime() method). @@ -281,6 +290,7 @@ public void testBothEmptyAttribute() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testIsStealingOff() throws Exception { // Collision SPI does not allow to send more than 1 message in a // certain period of time (see getMessageExpireTime() method). @@ -309,6 +319,7 @@ public void testIsStealingOff() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testIsStealingOn() throws Exception { // Collision SPI does not allow to send more than 1 message in a // certain period of time (see getMessageExpireTime() method). 
@@ -333,4 +344,4 @@ public void testIsStealingOn() throws Exception { // Message should not be sent to remote node because stealing is on assert msg != null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiConfigSelfTest.java index 177275451b37b..d7db2a77925e8 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiConfigSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiConfigSelfTest.java @@ -19,19 +19,24 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Job stealing collision SPI config test. */ @GridSpiTest(spi = JobStealingCollisionSpi.class, group = "Collision SPI") +@RunWith(JUnit4.class) public class GridJobStealingCollisionSpiConfigSelfTest extends GridSpiAbstractConfigTest { /** * @throws Exception If failed. 
*/ + @Test public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new JobStealingCollisionSpi(), "messageExpireTime", 0); checkNegativeSpiProperty(new JobStealingCollisionSpi(), "waitJobsThreshold", -1); checkNegativeSpiProperty(new JobStealingCollisionSpi(), "activeJobsThreshold", -1); checkNegativeSpiProperty(new JobStealingCollisionSpi(), "maximumStealingAttempts", 0); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiCustomTopologySelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiCustomTopologySelfTest.java index 2b88dbb9db807..f061cbbc467e2 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiCustomTopologySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiCustomTopologySelfTest.java @@ -38,6 +38,9 @@ import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_SPI_CLASS; import static org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi.THIEF_NODE_ATTR; @@ -46,6 +49,7 @@ * Job stealing collision SPI topology test. */ @GridSpiTest(spi = JobStealingCollisionSpi.class, group = "Collision SPI") +@RunWith(JUnit4.class) public class GridJobStealingCollisionSpiCustomTopologySelfTest extends GridSpiAbstractTest { /** */ @@ -125,6 +129,7 @@ private void checkNoAction(GridTestCollisionJobContext ctx) { /** * @throws Exception If test failed. 
*/ + @Test public void testThiefNodeNotInTopology() throws Exception { List waitCtxs = new ArrayList<>(2); @@ -177,4 +182,4 @@ private GridTestTaskSession createTaskSession(final ClusterNode node) { } }; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiSelfTest.java index 05e0cee707687..5e3354873c4ab 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/jobstealing/GridJobStealingCollisionSpiSelfTest.java @@ -40,6 +40,9 @@ import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_SPI_CLASS; import static org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi.STEALING_ATTEMPT_COUNT_ATTR; @@ -50,6 +53,7 @@ * Job stealing SPI test. */ @GridSpiTest(spi = JobStealingCollisionSpi.class, group = "Collision SPI") +@RunWith(JUnit4.class) public class GridJobStealingCollisionSpiSelfTest extends GridSpiAbstractTest { /** */ public GridJobStealingCollisionSpiSelfTest() { @@ -148,6 +152,7 @@ private void checkNoAction(GridTestCollisionJobContext ctx) { /** * @throws Exception If test failed. */ + @Test public void testTwoPassiveJobs() throws Exception { final List waitCtxs = new ArrayList<>(2); final List activeCtxs = new ArrayList<>(1); @@ -184,6 +189,7 @@ public void testTwoPassiveJobs() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testOnePassiveOneActiveJobs() throws Exception { List waitCtxs = new ArrayList<>(1); @@ -217,6 +223,7 @@ public void testOnePassiveOneActiveJobs() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testMultiplePassiveOneActive() throws Exception { List waitCtxs = new ArrayList<>(2); @@ -254,6 +261,7 @@ public void testMultiplePassiveOneActive() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testMultiplePassiveZeroActive() throws Exception { final List waitCtxs = new ArrayList<>(2); final List activeCtxs = new ArrayList<>(2); @@ -309,6 +317,7 @@ public GridTestTaskSession createTaskSession() { /** * @throws Exception If test failed. */ + @Test public void testOnePassiveZeroActive() throws Exception { List waitCtxs = new ArrayList<>(1); @@ -335,6 +344,7 @@ public void testOnePassiveZeroActive() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testZeroPassiveOneActive() throws Exception { Collection empty = Collections.emptyList(); @@ -362,6 +372,7 @@ public void testZeroPassiveOneActive() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testZeroPassiveZeroActive() throws Exception { Collection empty = Collections.emptyList(); @@ -383,6 +394,7 @@ public void testZeroPassiveZeroActive() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testMaxHopsExceeded() throws Exception { Collection waitCtxs = new ArrayList<>(2); @@ -424,4 +436,4 @@ public void testMaxHopsExceeded() throws Exception { assert msg == null; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiConfigSelfTest.java index a303da89dd7d9..53bb96f36689b 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiConfigSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiConfigSelfTest.java @@ -19,16 +19,21 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Priority queue collision SPI config test. */ @GridSpiTest(spi = PriorityQueueCollisionSpi.class, group = "Collision SPI") +@RunWith(JUnit4.class) public class GridPriorityQueueCollisionSpiConfigSelfTest extends GridSpiAbstractConfigTest { /** * @throws Exception If failed. 
*/ + @Test public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new PriorityQueueCollisionSpi(), "parallelJobsNumber", 0); checkNegativeSpiProperty(new PriorityQueueCollisionSpi(), "waitingJobsNumber", -1); @@ -36,4 +41,4 @@ public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new PriorityQueueCollisionSpi(), "priorityAttributeKey", null); checkNegativeSpiProperty(new PriorityQueueCollisionSpi(), "jobPriorityAttributeKey", null); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiSelfTest.java index 9484dec8842ea..4f2a3fe6dbd84 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/collision/priorityqueue/GridPriorityQueueCollisionSpiSelfTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi.DFLT_JOB_PRIORITY_ATTRIBUTE_KEY; import static org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi.DFLT_PARALLEL_JOBS_NUM; @@ -39,6 +42,7 @@ * Priority queue collision SPI test. */ @GridSpiTest(spi = PriorityQueueCollisionSpi.class, group = "Collision SPI") +@RunWith(JUnit4.class) public class GridPriorityQueueCollisionSpiSelfTest extends GridSpiAbstractTest { /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { @@ -49,6 +53,7 @@ public class GridPriorityQueueCollisionSpiSelfTest extends GridSpiAbstractTest

    activeJobs = makeContextList(null); List passiveJobs = makeContextList(null); @@ -115,6 +121,7 @@ public void testCollision() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollision0() throws Exception { List activeJobs = makeContextList(null); List passiveJobs = makeContextList(null); @@ -137,7 +144,7 @@ public void testCollision0() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings({"RedundantTypeArguments"}) + @Test public void testCollision1() throws Exception { List activeJobs = makeContextList(null); List passiveJobs = makeContextList(null); @@ -176,6 +183,7 @@ public void testCollision1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollision2() throws Exception { List activeJobs = makeContextList(null); List passiveJobs = makeContextList(null); @@ -198,6 +206,7 @@ public void testCollision2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollision3() throws Exception { List activeJobs = makeContextList(null); List passiveJobs = makeContextList(null); @@ -220,6 +229,7 @@ public void testCollision3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollisionEmpty() throws Exception { Collection activeJobs = new ArrayList<>(); Collection passiveJobs = new ArrayList<>(); @@ -233,6 +243,7 @@ public void testCollisionEmpty() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCollisionWithoutPriorityAttribute() throws Exception { List activeJobs = makeContextList(null); List passiveJobs = makeContextList(null); @@ -242,7 +253,7 @@ public void testCollisionWithoutPriorityAttribute() throws Exception { ((GridTestCollisionTaskSession)ctx.getTaskSession()).setPriorityAttributeKey("bad-attr-name"); ((GridTestCollisionJobContext)ctx).setJobContext(new GridTestJobContext() { - @SuppressWarnings({"unchecked", "RedundantTypeArguments"}) + @SuppressWarnings({"RedundantTypeArguments"}) @Override public V getAttribute(K key) { if (DFLT_JOB_PRIORITY_ATTRIBUTE_KEY.equals(key)) return null; @@ -279,6 +290,7 @@ public void testCollisionWithoutPriorityAttribute() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollisionWithWrongPriorityAttribute() throws Exception { List activeJobs = makeContextList(null); List passiveJobs = makeContextList(null); @@ -335,6 +347,7 @@ public void testCollisionWithWrongPriorityAttribute() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testCollision4() throws Exception { List activeJobs = makeContextList(null, false); List passiveJobs = makeContextList(null, false); @@ -380,6 +393,7 @@ public void testCollision4() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCollision5() throws Exception { List activeJobs = makeContextList(null, false); List passiveJobs = makeContextList(null, false); @@ -439,4 +453,4 @@ private List makeContextList(@Nullable String attrKey, bool private List makeContextList(@Nullable String attrKey) { return makeContextList(attrKey, true); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/GridAbstractCommunicationSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/GridAbstractCommunicationSelfTest.java index e51dac8655222..26cd67f9687e9 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/GridAbstractCommunicationSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/GridAbstractCommunicationSelfTest.java @@ -44,6 +44,9 @@ import org.apache.ignite.testframework.junits.IgniteMock; import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_MACS; @@ -51,7 +54,7 @@ * Super class for all communication self tests. * @param Type of communication SPI. */ -@SuppressWarnings({"JUnitAbstractTestClassNamingConvention"}) +@RunWith(JUnit4.class) public abstract class GridAbstractCommunicationSelfTest extends GridSpiAbstractTest { /** */ private static long msgId = 1; @@ -89,7 +92,6 @@ public abstract class GridAbstractCommunicationSelfTest { /** */ private final UUID locNodeId; @@ -153,6 +155,7 @@ protected GridAbstractCommunicationSelfTest() { /** * @throws Exception If failed. */ + @Test public void testSendToOneNode() throws Exception { info(">>> Starting send to one node test. <<<"); @@ -195,7 +198,7 @@ public void testSendToOneNode() throws Exception { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("WaitWithoutCorrespondingNotify") + @Test public void testSendToManyNodes() throws Exception { msgDestMap.clear(); @@ -380,4 +383,4 @@ private void startSpis() throws Exception { for (IgniteTestResources rsrcs : spiRsrcs) rsrcs.stopThreads(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/GridCacheMessageSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/GridCacheMessageSelfTest.java index 8ddfd440b7ac7..1a50244c5479e 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/GridCacheMessageSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/GridCacheMessageSelfTest.java @@ -43,10 +43,10 @@ import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType; import org.apache.ignite.plugin.extensions.communication.MessageReader; import org.apache.ignite.plugin.extensions.communication.MessageWriter; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -54,10 +54,8 @@ /** * Messaging test. */ +@RunWith(JUnit4.class) public class GridCacheMessageSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Sample count. 
*/ private static final int SAMPLE_CNT = 1; @@ -106,12 +104,6 @@ public class GridCacheMessageSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - cfg.setIncludeEventTypes((int[])null); cfg.setFailureHandler(new TestFailureHandler()); @@ -137,6 +129,7 @@ public class GridCacheMessageSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSendMessage() throws Exception { try { startGridsMultiThreaded(2); @@ -151,6 +144,7 @@ public void testSendMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSendBadMessage() throws Exception { try { startGrids(2); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridCacheDhtLockBackupSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridCacheDhtLockBackupSelfTest.java index 951971d1bb254..11dd9a9158754 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridCacheDhtLockBackupSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridCacheDhtLockBackupSelfTest.java @@ -35,12 +35,12 @@ import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.communication.CommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestThread; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import 
org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_ASYNC; @@ -48,10 +48,8 @@ /** * Special cases for GG-2329. */ +@RunWith(JUnit4.class) public class GridCacheDhtLockBackupSelfTest extends GridCommonAbstractTest { - /** Ip-finder. */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Communication spi for grid start. */ private CommunicationSpi commSpi; @@ -69,12 +67,6 @@ public GridCacheDhtLockBackupSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration()); cfg.setMarshaller(marsh); @@ -102,7 +94,7 @@ protected CacheConfiguration cacheConfiguration() { /** * @throws Exception If test failed. 
*/ - @SuppressWarnings({"TooBroadScope"}) + @Test public void testLock() throws Exception { final int kv = 1; diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiAbstractTest.java index e89a4c828fb75..eda8d8e8a6ea9 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiAbstractTest.java @@ -36,10 +36,14 @@ import org.apache.ignite.spi.communication.CommunicationSpi; import org.apache.ignite.spi.communication.GridAbstractCommunicationSelfTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link TcpCommunicationSpi} */ +@RunWith(JUnit4.class) abstract class GridTcpCommunicationSpiAbstractTest extends GridAbstractCommunicationSelfTest { /** */ private static final int SPI_COUNT = 3; @@ -82,6 +86,7 @@ protected GridTcpCommunicationSpiAbstractTest(boolean useShmem) { } /** {@inheritDoc} */ + @Test @Override public void testSendToManyNodes() throws Exception { super.testSendToManyNodes(); @@ -98,6 +103,7 @@ protected GridTcpCommunicationSpiAbstractTest(boolean useShmem) { /** * */ + @Test public void testCheckConnection1() { for (int i = 0; i < 100; i++) { for (Map.Entry> entry : spis.entrySet()) { @@ -120,6 +126,7 @@ public void testCheckConnection1() { /** * @throws Exception If failed. 
*/ + @Test public void testCheckConnection2() throws Exception { final int THREADS = spis.size(); @@ -193,4 +200,4 @@ public void testCheckConnection2() throws Exception { fail("Failed to wait when clients are closed."); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConcurrentConnectSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConcurrentConnectSelfTest.java index ce96c5574e292..8683ef8ef1c9c 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConcurrentConnectSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConcurrentConnectSelfTest.java @@ -39,6 +39,7 @@ import org.apache.ignite.internal.IgniteNodeAttributes; import org.apache.ignite.internal.managers.communication.GridIoMessageFactory; import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; +import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.nio.GridCommunicationClient; import org.apache.ignite.internal.util.nio.GridNioServer; @@ -58,12 +59,15 @@ import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; -import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridSpiTest(spi = TcpCommunicationSpi.class, group = "Communication SPI") +@RunWith(JUnit4.class) public class GridTcpCommunicationSpiConcurrentConnectSelfTest extends GridSpiAbstractTest { /** */ @@ -125,7 +129,7 @@ private static class MessageListener implements CommunicationListener { private final AtomicInteger cntr = new 
AtomicInteger(); /** */ - private final ConcurrentHashSet msgIds = new ConcurrentHashSet<>(); + private final GridConcurrentHashSet msgIds = new GridConcurrentHashSet<>(); /** * @param latch Latch. @@ -160,6 +164,7 @@ private static class MessageListener implements CommunicationListener { /** * @throws Exception If failed. */ + @Test public void testTwoThreads() throws Exception { concurrentConnect(2, 10, ITERS, false, false); } @@ -167,6 +172,7 @@ public void testTwoThreads() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreaded() throws Exception { int threads = Runtime.getRuntime().availableProcessors() * 5; @@ -176,6 +182,7 @@ public void testMultithreaded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreaded_10Connections() throws Exception { connectionsPerNode = 10; @@ -185,6 +192,7 @@ public void testMultithreaded_10Connections() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreaded_NoPairedConnections() throws Exception { pairedConnections = false; @@ -194,6 +202,7 @@ public void testMultithreaded_NoPairedConnections() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultithreaded_10ConnectionsNoPaired() throws Exception { pairedConnections = false; connectionsPerNode = 10; @@ -204,6 +213,7 @@ public void testMultithreaded_10ConnectionsNoPaired() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWithLoad() throws Exception { int threads = Runtime.getRuntime().availableProcessors() * 5; @@ -213,6 +223,7 @@ public void testWithLoad() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRandomSleep() throws Exception { concurrentConnect(4, 1, ITERS, true, false); } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConfigSelfTest.java index 7bea716929148..2dcd6ba0bad11 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConfigSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiConfigSelfTest.java @@ -20,6 +20,9 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.getFreeCommPort; @@ -27,10 +30,12 @@ * TCP communication SPI config test. */ @GridSpiTest(spi = TcpCommunicationSpi.class, group = "Communication SPI") +@RunWith(JUnit4.class) public class GridTcpCommunicationSpiConfigSelfTest extends GridSpiAbstractConfigTest { /** * @throws Exception If failed. */ + @Test public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new TcpCommunicationSpi(), "localPort", 1023); checkNegativeSpiProperty(new TcpCommunicationSpi(), "localPort", 65636); @@ -55,6 +60,7 @@ public void testNegativeConfig() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLocalPortRange() throws Exception { IgniteConfiguration cfg = getConfiguration(); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiLanLoadTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiLanLoadTest.java index 93a159755b0d2..21aec783a62e8 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiLanLoadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiLanLoadTest.java @@ -41,11 +41,14 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Class for multithreaded {@link TcpCommunicationSpi} test. */ -@SuppressWarnings({"JUnitAbstractTestClassNamingConvention"}) +@RunWith(JUnit4.class) public class GridTcpCommunicationSpiLanLoadTest extends GridSpiAbstractTest { /** Connection idle timeout */ public static final int IDLE_CONN_TIMEOUT = 2000; @@ -86,7 +89,6 @@ public GridTcpCommunicationSpiLanLoadTest() { /** * Accumulating listener. */ - @SuppressWarnings({"deprecation"}) private class MessageListener implements CommunicationListener { /** Node id of local node. */ private final UUID locNodeId; @@ -140,6 +142,7 @@ public int remoteMessageCount() { /** * @throws Exception If failed. */ + @Test public void testRunReceiver() throws Exception { info(">>> Starting receiving SPI. <<<"); @@ -155,6 +158,7 @@ public void testRunReceiver() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRunSender() throws Exception { reject = true; @@ -265,7 +269,6 @@ private MBeanServer getMBeanServer() throws Exception { } /** {@inheritDoc} */ - @SuppressWarnings({"NullableProblems"}) @Override protected void afterTestsStopped() throws Exception { spi.setListener(null); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiMultithreadedSelfTest.java index d610bc3c04d53..7df40f8852bc2 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiMultithreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiMultithreadedSelfTest.java @@ -61,13 +61,16 @@ import org.apache.ignite.testframework.junits.GridTestKernalContext; import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_MACS; /** * Class for multithreaded {@link TcpCommunicationSpi} test. */ -@SuppressWarnings({"JUnitAbstractTestClassNamingConvention"}) +@RunWith(JUnit4.class) public class GridTcpCommunicationSpiMultithreadedSelfTest extends GridSpiAbstractTest { /** Connection idle timeout */ public static final int IDLE_CONN_TIMEOUT = 2000; @@ -126,7 +129,6 @@ public GridTcpCommunicationSpiMultithreadedSelfTest() { /** * Accumulating listener. */ - @SuppressWarnings({"deprecation"}) private static class MessageListener implements CommunicationListener { /** Node id of local node. */ private final UUID locNodeId; @@ -194,6 +196,7 @@ public int remoteMessageCount() { /** * @throws Exception If failed. 
*/ + @Test public void testSendToRandomNodesMultithreaded() throws Exception { info(">>> Starting send to random nodes multithreaded test. <<<"); @@ -279,6 +282,7 @@ public void testSendToRandomNodesMultithreaded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFlowSend() throws Exception { reject = true; @@ -389,6 +393,7 @@ public void testFlowSend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPassThroughPerformance() throws Exception { reject = true; diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryAckSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryAckSelfTest.java index 1467c29a58b5d..22d1075cd2518 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryAckSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryAckSelfTest.java @@ -29,6 +29,7 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.managers.communication.GridIoMessageFactory; import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; +import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor; import org.apache.ignite.internal.util.nio.GridNioServer; @@ -49,12 +50,15 @@ import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; -import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridSpiTest(spi = TcpCommunicationSpi.class, group = "Communication SPI") +@RunWith(JUnit4.class) 
public class GridTcpCommunicationSpiRecoveryAckSelfTest extends GridSpiAbstractTest { /** */ private static final Collection spiRsrcs = new ArrayList<>(); @@ -92,7 +96,7 @@ public GridTcpCommunicationSpiRecoveryAckSelfTest() { /** */ private class TestListener implements CommunicationListener { /** */ - private ConcurrentHashSet msgIds = new ConcurrentHashSet<>(); + private GridConcurrentHashSet msgIds = new GridConcurrentHashSet<>(); /** */ private AtomicInteger rcvCnt = new AtomicInteger(); @@ -121,6 +125,7 @@ private class TestListener implements CommunicationListener { /** * @throws Exception If failed. */ + @Test public void testAckOnIdle() throws Exception { checkAck(10, 2000, 9); } @@ -128,6 +133,7 @@ public void testAckOnIdle() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAckOnCount() throws Exception { checkAck(10, 60_000, 10); } @@ -238,6 +244,7 @@ private void checkAck(int ackCnt, int idleTimeout, int msgPerIter) throws Except /** * @throws Exception If failed. */ + @Test public void testQueueOverflow() throws Exception { for (int i = 0; i < 3; i++) { try { @@ -338,7 +345,6 @@ private void checkOverflow() throws Exception { * @return Session. * @throws Exception If failed. 
*/ - @SuppressWarnings("unchecked") private GridNioSession communicationSession(TcpCommunicationSpi spi) throws Exception { final GridNioServer srv = U.field(spi, "nioSrvr"); @@ -499,4 +505,4 @@ private void stopSpis() throws Exception { nodes.clear(); spiRsrcs.clear(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest.java index b1aa11902ca10..6dfbbf028a823 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest.java @@ -18,10 +18,14 @@ package org.apache.ignite.spi.communication.tcp; import org.apache.ignite.configuration.IgniteConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest extends GridTcpCommunicationSpiRecoverySelfTest { /** {@inheritDoc} */ @Override protected TcpCommunicationSpi getSpi(int idx) { @@ -46,10 +50,11 @@ public class GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest extends Gri /** * @throws Exception if failed. 
*/ + @Test public void testFailureDetectionEnabled() throws Exception { for (TcpCommunicationSpi spi: spis) { assertTrue(spi.failureDetectionTimeoutEnabled()); assertTrue(spi.failureDetectionTimeout() == IgniteConfiguration.DFLT_FAILURE_DETECTION_TIMEOUT); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoverySelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoverySelfTest.java index d2e18c02c0b19..3232ba3d0e864 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoverySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiRecoverySelfTest.java @@ -34,6 +34,7 @@ import org.apache.ignite.internal.IgniteInterruptedCheckedException; import org.apache.ignite.internal.managers.communication.GridIoMessageFactory; import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; +import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.nio.GridNioServer; import org.apache.ignite.internal.util.nio.GridNioSession; @@ -54,13 +55,16 @@ import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; -import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings("unchecked") @GridSpiTest(spi = TcpCommunicationSpi.class, group = "Communication SPI") +@RunWith(JUnit4.class) public class GridTcpCommunicationSpiRecoverySelfTest extends GridSpiAbstractTest { /** */ private static final Collection spiRsrcs = new ArrayList<>(); @@ -105,7 +109,6 @@ public 
GridTcpCommunicationSpiRecoverySelfTest() { } /** */ - @SuppressWarnings({"deprecation"}) private class TestListener implements CommunicationListener { /** */ private boolean block; @@ -114,7 +117,7 @@ private class TestListener implements CommunicationListener { private CountDownLatch blockLatch; /** */ - private ConcurrentHashSet msgIds = new ConcurrentHashSet<>(); + private GridConcurrentHashSet msgIds = new GridConcurrentHashSet<>(); /** */ private AtomicInteger rcvCnt = new AtomicInteger(); @@ -198,6 +201,7 @@ protected long awaitForSocketWriteTimeout() { /** * @throws Exception If failed. */ + @Test public void testBlockListener() throws Exception { // Test listener throws exception and stops selector thread, so must restart SPI. for (int i = 0; i < ITERS; i++) { @@ -217,7 +221,6 @@ public void testBlockListener() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("BusyWait") private void checkBlockListener() throws Exception { TcpCommunicationSpi spi0 = spis.get(0); TcpCommunicationSpi spi1 = spis.get(1); @@ -287,6 +290,7 @@ private void checkBlockListener() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlockRead1() throws Exception { createSpis(); @@ -405,6 +409,7 @@ public void testBlockRead1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlockRead2() throws Exception { createSpis(); @@ -540,6 +545,7 @@ public void testBlockRead2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlockRead3() throws Exception { createSpis(); @@ -671,7 +677,6 @@ private boolean waitForSessionsCount(TcpCommunicationSpi spi, int cnt) throws Ig * @return Session. * @throws Exception If failed. 
*/ - @SuppressWarnings("unchecked") private GridNioSession communicationSession(TcpCommunicationSpi spi, boolean in) throws Exception { final GridNioServer srv = U.field(spi, "nioSrvr"); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiTcpFailureDetectionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiTcpFailureDetectionSelfTest.java index 88b25cdf7e64a..9e75a2c69bdfc 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiTcpFailureDetectionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/GridTcpCommunicationSpiTcpFailureDetectionSelfTest.java @@ -19,10 +19,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.communication.CommunicationSpi; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridTcpCommunicationSpiTcpFailureDetectionSelfTest extends GridTcpCommunicationSpiTcpSelfTest { /** */ private final static int SPI_COUNT = 4; @@ -63,6 +67,7 @@ public class GridTcpCommunicationSpiTcpFailureDetectionSelfTest extends GridTcpC /** * @throws Exception if failed. 
*/ + @Test public void testFailureDetectionEnabled() throws Exception { assertTrue(spis[0].failureDetectionTimeoutEnabled()); assertTrue(spis[0].failureDetectionTimeout() == IgniteConfiguration.DFLT_FAILURE_DETECTION_TIMEOUT); @@ -72,4 +77,4 @@ public void testFailureDetectionEnabled() throws Exception { assertEquals(0, spis[i].failureDetectionTimeout()); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/IgniteTcpCommunicationHandshakeWaitTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/IgniteTcpCommunicationHandshakeWaitTest.java new file mode 100644 index 0000000000000..e139ae3c6584e --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/IgniteTcpCommunicationHandshakeWaitTest.java @@ -0,0 +1,148 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.spi.communication.tcp; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.spi.IgniteSpiException; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeAddFinishedMessage; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Testing {@link TcpCommunicationSpi} that will send the wait handshake message on received connections until SPI + * context initialized. 
+ */ +@RunWith(JUnit4.class) +public class IgniteTcpCommunicationHandshakeWaitTest extends GridCommonAbstractTest { + /** */ + private static final long COMMUNICATION_TIMEOUT = 1000; + + /** */ + private static final long DISCOVERY_MESSAGE_DELAY = 500; + + /** */ + private final AtomicBoolean slowNet = new AtomicBoolean(); + + /** */ + private final CountDownLatch latch = new CountDownLatch(1); + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setPeerClassLoadingEnabled(false); + + TcpDiscoverySpi discoSpi = new SlowTcpDiscoverySpi(); + + discoSpi.setIpFinder(sharedStaticIpFinder); + + cfg.setDiscoverySpi(discoSpi); + + TcpCommunicationSpi commSpi = new TcpCommunicationSpi(); + + commSpi.setConnectTimeout(COMMUNICATION_TIMEOUT); + commSpi.setMaxConnectTimeout(4 * COMMUNICATION_TIMEOUT); + commSpi.setReconnectCount(3); + + cfg.setCommunicationSpi(commSpi); + + return cfg; + } + + /** + * Test that joining node will send the wait handshake message on received connections until SPI context + * initialized. + * + * @throws Exception If failed. 
+ */ + @Test + public void testHandshakeOnNodeJoining() throws Exception { + System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true"); + + IgniteEx ignite = startGrid("srv1"); + + startGrid("srv2"); + + slowNet.set(true); + + IgniteInternalFuture fut = GridTestUtils.runAsync(() -> { + latch.await(expectedTimeout(), TimeUnit.MILLISECONDS); + + Collection nodes = ignite.context().discovery().aliveServerNodes(); + + assertEquals(3, nodes.size()); + + return ignite.context().io().sendIoTest(new ArrayList<>(nodes), null, true).get(); + }); + + startGrid("srv3"); + + fut.get(); + } + + /** */ + private long expectedTimeout() { + long maxBackoffTimeout = COMMUNICATION_TIMEOUT; + + for (int i = 1; i < 3 && maxBackoffTimeout < 3 * COMMUNICATION_TIMEOUT; i++) + maxBackoffTimeout += Math.min(2 * maxBackoffTimeout, 3 * COMMUNICATION_TIMEOUT); + + return maxBackoffTimeout; + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); + } + + /** */ + private class SlowTcpDiscoverySpi extends TcpDiscoverySpi { + /** {@inheritDoc} */ + @Override protected boolean ensured(TcpDiscoveryAbstractMessage msg) { + if (slowNet.get() && msg instanceof TcpDiscoveryNodeAddFinishedMessage) { + try { + if (igniteInstanceName.contains("srv2") && msg.verified()) + latch.countDown(); + + U.sleep(DISCOVERY_MESSAGE_DELAY); + } + catch (IgniteInterruptedCheckedException e) { + throw new IgniteSpiException("Thread has been interrupted.", e); + } + } + + return super.ensured(msg); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/IgniteTcpCommunicationRecoveryAckClosureSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/IgniteTcpCommunicationRecoveryAckClosureSelfTest.java index 1c2bf04d1a07a..e3045612b5f99 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/IgniteTcpCommunicationRecoveryAckClosureSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/IgniteTcpCommunicationRecoveryAckClosureSelfTest.java @@ -30,6 +30,7 @@ import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.managers.communication.GridIoMessageFactory; import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; +import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor; import org.apache.ignite.internal.util.nio.GridNioServer; @@ -52,12 +53,15 @@ import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; -import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridSpiTest(spi = TcpCommunicationSpi.class, group = "Communication SPI") +@RunWith(JUnit4.class) public class IgniteTcpCommunicationRecoveryAckClosureSelfTest extends GridSpiAbstractTest { /** */ @@ -96,7 +100,7 @@ public IgniteTcpCommunicationRecoveryAckClosureSelfTest() { /** */ private class TestListener implements CommunicationListener { /** */ - private ConcurrentHashSet msgIds = new ConcurrentHashSet<>(); + private GridConcurrentHashSet msgIds = new GridConcurrentHashSet<>(); /** */ private AtomicInteger rcvCnt = new AtomicInteger(); @@ -123,6 +127,7 @@ private class TestListener implements CommunicationListener { /** * @throws Exception If failed. */ + @Test public void testAckOnIdle() throws Exception { checkAck(10, 2000, 9); } @@ -130,6 +135,7 @@ public void testAckOnIdle() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAckOnCount() throws Exception { checkAck(10, 60_000, 10); } @@ -257,6 +263,7 @@ private void checkAck(int ackCnt, int idleTimeout, int msgPerIter) throws Except /** * @throws Exception If failed. */ + @Test public void testQueueOverflow() throws Exception { for (int i = 0; i < 3; i++) { try { @@ -389,7 +396,6 @@ private void checkOverflow() throws Exception { * @return Session. * @throws Exception If failed. */ - @SuppressWarnings("unchecked") private GridNioSession communicationSession(TcpCommunicationSpi spi) throws Exception { final GridNioServer srv = U.field(spi, "nioSrvr"); @@ -550,4 +556,4 @@ private void stopSpis() throws Exception { nodes.clear(); spiRsrcs.clear(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiDropNodesTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiDropNodesTest.java index 08cf1f7305f66..bef32183ca6ac 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiDropNodesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiDropNodesTest.java @@ -21,38 +21,37 @@ import java.util.HashMap; import java.util.Map; import java.util.concurrent.Callable; -import java.util.concurrent.CountDownLatch; import java.util.concurrent.CyclicBarrier; -import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; + import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.Event; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.internal.util.lang.GridAbsPredicate; import 
org.apache.ignite.internal.util.nio.GridCommunicationClient; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.lang.IgniteRunnable; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; /** - * + * Tests grid node kicking on communication failure. */ +@RunWith(JUnit4.class) public class TcpCommunicationSpiDropNodesTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Nodes count. */ private static final int NODES_CNT = 4; @@ -73,11 +72,7 @@ public class TcpCommunicationSpiDropNodesTest extends GridCommonAbstractTest { spi.setIdleConnectionTimeout(100); spi.setSharedMemoryPort(-1); - TcpDiscoverySpi discoSpi = (TcpDiscoverySpi) cfg.getDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - cfg.setCommunicationSpi(spi); - cfg.setDiscoverySpi(discoSpi); return cfg; } @@ -109,8 +104,11 @@ public class TcpCommunicationSpiDropNodesTest extends GridCommonAbstractTest { } /** + * Server node shouldn't be failed by other server node if IGNITE_ENABLE_FORCIBLE_NODE_KILL=true. + * * @throws Exception If failed. 
*/ + @Test public void testOneNode() throws Exception { pred = new IgniteBiPredicate() { @Override public boolean apply(ClusterNode locNode, ClusterNode rmtNode) { @@ -120,12 +118,11 @@ public void testOneNode() throws Exception { startGrids(NODES_CNT); - final CountDownLatch latch = new CountDownLatch(1); + AtomicInteger evts = new AtomicInteger(); grid(0).events().localListen(new IgnitePredicate() { - @Override - public boolean apply(Event event) { - latch.countDown(); + @Override public boolean apply(Event evt) { + evts.incrementAndGet(); return true; } @@ -135,58 +132,29 @@ public boolean apply(Event event) { block = true; - grid(0).compute().broadcast(new IgniteRunnable() { - @Override public void run() { - // No-op. - } - }); - - assertTrue(latch.await(15, TimeUnit.SECONDS)); - - assertTrue(GridTestUtils.waitForCondition(new GridAbsPredicate() { - @Override public boolean apply() { - return grid(3).cluster().topologyVersion() == NODES_CNT + 1; - } - }, 5000)); - - for (int i = 0; i < 10; i++) { - U.sleep(1000); - - assertEquals(NODES_CNT - 1, grid(0).cluster().nodes().size()); - - int liveNodesCnt = 0; - - for (int j = 0; j < NODES_CNT; j++) { - IgniteEx ignite; - - try { - ignite = grid(j); - - log.info("Checking topology for grid(" + j + "): " + ignite.cluster().nodes()); - - ClusterNode locNode = ignite.localNode(); - - if (locNode.order() != 3) { - assertEquals(NODES_CNT - 1, ignite.cluster().nodes().size()); - - for (ClusterNode node : ignite.cluster().nodes()) - assertTrue(node.order() != 3); - - liveNodesCnt++; - } - } - catch (Exception e) { - log.info("Checking topology for grid(" + j + "): no grid in topology."); + try { + grid(0).compute().broadcast(new IgniteRunnable() { + @Override public void run() { + // No-op. 
} - } + }); - assertEquals(NODES_CNT - 1, liveNodesCnt); + fail("Should have exception here."); + } catch (IgniteException e) { + assertTrue(e.getCause() instanceof IgniteSpiException); } + + block = false; + + assertEquals(NODES_CNT, grid(0).cluster().nodes().size()); + assertEquals(0, evts.get()); } /** + * Servers shouldn't fail each other if IGNITE_ENABLE_FORCIBLE_NODE_KILL=true. * @throws Exception If failed. */ + @Test public void testTwoNodesEachOther() throws Exception { pred = new IgniteBiPredicate() { @Override public boolean apply(ClusterNode locNode, ClusterNode rmtNode) { @@ -197,11 +165,11 @@ public void testTwoNodesEachOther() throws Exception { startGrids(NODES_CNT); - final CountDownLatch latch = new CountDownLatch(1); + AtomicInteger evts = new AtomicInteger(); grid(0).events().localListen(new IgnitePredicate() { @Override public boolean apply(Event evt) { - latch.countDown(); + evts.incrementAndGet(); return true; } @@ -241,65 +209,40 @@ public void testTwoNodesEachOther() throws Exception { } }); - assertTrue(latch.await(5, TimeUnit.SECONDS)); - - GridTestUtils.waitForCondition(new GridAbsPredicate() { - @Override public boolean apply() { - return grid(2).cluster().nodes().size() == NODES_CNT - 1; - } - }, 5000); - try { fut1.get(); + + fail("Should fail with SpiException"); } catch (IgniteCheckedException e) { - // No-op. + assertTrue(e.getCause().getCause() instanceof IgniteSpiException); } try { fut2.get(); + + fail("Should fail with SpiException"); } catch (IgniteCheckedException e) { - // No-op. 
+ assertTrue(e.getCause().getCause() instanceof IgniteSpiException); } - long failedNodeOrder = 1 + 2 + 3 + 4; - - for (ClusterNode node : grid(0).cluster().nodes()) - failedNodeOrder -= node.order(); + assertEquals(NODES_CNT , grid(0).cluster().nodes().size()); + assertEquals(0, evts.get()); - for (int i = 0; i < 10; i++) { - U.sleep(1000); + for (int j = 0; j < NODES_CNT; j++) { + IgniteEx ignite; - assertEquals(NODES_CNT - 1, grid(0).cluster().nodes().size()); + try { + ignite = grid(j); - int liveNodesCnt = 0; + log.info("Checking topology for grid(" + j + "): " + ignite.cluster().nodes()); - for (int j = 0; j < NODES_CNT; j++) { - IgniteEx ignite; - - try { - ignite = grid(j); - - log.info("Checking topology for grid(" + j + "): " + ignite.cluster().nodes()); - - ClusterNode locNode = ignite.localNode(); - - if (locNode.order() != failedNodeOrder) { - assertEquals(NODES_CNT - 1, ignite.cluster().nodes().size()); - - for (ClusterNode node : ignite.cluster().nodes()) - assertTrue(node.order() != failedNodeOrder); - - liveNodesCnt++; - } - } - catch (Exception e) { - log.info("Checking topology for grid(" + j + "): no grid in topology."); - } + assertEquals(NODES_CNT, ignite.cluster().nodes().size()); + } + catch (Exception e) { + log.info("Checking topology for grid(" + j + "): no grid in topology."); } - - assertEquals(NODES_CNT - 1, liveNodesCnt); } } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFaultyClientSslTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFaultyClientSslTest.java new file mode 100644 index 0000000000000..6510475c547df --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFaultyClientSslTest.java @@ -0,0 +1,38 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi.communication.tcp; + +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Tests that faulty client will be failed if connection can't be established. 
+ */ +@RunWith(JUnit4.class) +public class TcpCommunicationSpiFaultyClientSslTest extends TcpCommunicationSpiFaultyClientTest { + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + cfg.setSslContextFactory(GridTestUtils.sslFactory()); + + return cfg; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFaultyClientTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFaultyClientTest.java index 00b1d90258cf2..0061efb914444 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFaultyClientTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFaultyClientTest.java @@ -23,6 +23,8 @@ import java.util.Collections; import java.util.HashMap; import java.util.Map; +import java.util.UUID; +import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.ignite.IgniteCheckedException; @@ -38,12 +40,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.lang.IgniteRunnable; +import org.apache.ignite.spi.communication.CommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; @@ 
-51,9 +55,6 @@ * Tests that faulty client will be failed if connection can't be established. */ public class TcpCommunicationSpiFaultyClientTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Predicate. */ private static final IgnitePredicate PRED = new IgnitePredicate() { @Override public boolean apply(ClusterNode node) { @@ -67,25 +68,39 @@ public class TcpCommunicationSpiFaultyClientTest extends GridCommonAbstractTest /** Block. */ private static volatile boolean block; + /** */ + private int failureDetectionTimeout = 3000; + + /** */ + private int connectTimeout = -1; + + /** */ + private int maxConnectTimeout = -1; + + /** */ + private int reconnectCnt = -1; + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - cfg.setFailureDetectionTimeout(1000); + cfg.setFailureDetectionTimeout(failureDetectionTimeout); cfg.setClientMode(clientMode); TestCommunicationSpi spi = new TestCommunicationSpi(); + if (connectTimeout != -1) { + spi.setConnectTimeout(connectTimeout); + spi.setMaxConnectTimeout(maxConnectTimeout); + spi.setReconnectCount(reconnectCnt); + } + spi.setIdleConnectionTimeout(100); spi.setSharedMemoryPort(-1); - TcpDiscoverySpi discoSpi = (TcpDiscoverySpi) cfg.getDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - discoSpi.setClientReconnectDisabled(true); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setClientReconnectDisabled(true); cfg.setCommunicationSpi(spi); - cfg.setDiscoverySpi(discoSpi); return cfg; } @@ -116,25 +131,65 @@ public class TcpCommunicationSpiFaultyClientTest extends GridCommonAbstractTest stopAllGrids(); } + /** */ + private long computeExpectedDelay() { + if (connectTimeout == -1) + return failureDetectionTimeout; + + long expDelay = 0; + + for (int i = 1; i < reconnectCnt && expDelay < maxConnectTimeout; i++) + expDelay += 
Math.min(connectTimeout * 2, maxConnectTimeout); + + return expDelay; + } + /** * @throws Exception If failed. */ + @Test public void testNoServerOnHost() throws Exception { - testFailClient(null); + testFailClient(null, computeExpectedDelay()); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testNoServerOnHostCustomFailureDetection() throws Exception { + connectTimeout = 3000; + maxConnectTimeout = 6000; + reconnectCnt = 3; + + testFailClient(null, computeExpectedDelay()); } /** * @throws Exception If failed. */ + @Test public void testNotAcceptedConnection() throws Exception { - testFailClient(new FakeServer()); + testFailClient(new FakeServer(), computeExpectedDelay()); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testNotAcceptedConnectionCustomFailureDetection() throws Exception { + connectTimeout = 3000; + maxConnectTimeout = 6000; + reconnectCnt = 3; + + testFailClient(new FakeServer(), computeExpectedDelay()); } /** * @param srv Server. + * @param expDelay Expected delay until client is gone while trying to establish connection. * @throws Exception If failed. */ - private void testFailClient(FakeServer srv) throws Exception { + private void testFailClient(FakeServer srv, long expDelay) throws Exception { IgniteInternalFuture fut = null; try { @@ -150,12 +205,30 @@ private void testFailClient(FakeServer srv) throws Exception { startGrid(2); startGrid(3); - U.sleep(1000); // Wait for write timeout and closing idle connections. + // Need to wait for PME to avoid opening new connections during closing idle connections. + awaitPartitionMapExchange(); + + CommunicationSpi commSpi = grid(0).configuration().getCommunicationSpi(); + + ConcurrentMap clients = U.field(commSpi, "clients"); + + // Wait for write timeout and closing idle connections. 
+ assertTrue("Failed to wait for closing idle connections.", + GridTestUtils.waitForCondition(() -> { + for (GridCommunicationClient[] clients0 : clients.values()) { + for (GridCommunicationClient client : clients0) { + if (client != null) + return false; + } + } + + return true; + }, 1000)); final CountDownLatch latch = new CountDownLatch(1); grid(0).events().localListen(new IgnitePredicate() { - @Override public boolean apply(Event event) { + @Override public boolean apply(Event evt) { latch.countDown(); return true; @@ -164,6 +237,8 @@ private void testFailClient(FakeServer srv) throws Exception { block = true; + long t1 = U.currentTimeMillis(); + try { grid(0).compute(grid(0).cluster().forClients()).withNoFailover().broadcast(new IgniteRunnable() { @Override public void run() { @@ -171,11 +246,15 @@ private void testFailClient(FakeServer srv) throws Exception { } }); } - catch (IgniteException e) { + catch (IgniteException ignored) { // No-op. } - assertTrue(latch.await(3, TimeUnit.SECONDS)); + final long time = U.currentTimeMillis() - t1; + + assertTrue("Must try longer than expected delay", time >= expDelay); + + assertTrue(latch.await(expDelay + 1000, TimeUnit.MILLISECONDS)); assertTrue(GridTestUtils.waitForCondition(new GridAbsPredicate() { @Override public boolean apply() { @@ -218,7 +297,7 @@ private static class FakeServer implements Runnable { * Default constructor. */ FakeServer() throws IOException { - this.srv = new ServerSocket(47200, 50, InetAddress.getByName("127.0.0.1")); + srv = new ServerSocket(47200, 50, InetAddress.getByName("127.0.0.1")); } /** @@ -235,7 +314,7 @@ public void stop() { try { U.sleep(10); } - catch (IgniteInterruptedCheckedException e) { + catch (IgniteInterruptedCheckedException ignored) { // No-op. 
} } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFreezingClientTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFreezingClientTest.java new file mode 100644 index 0000000000000..2f2d8d3df38ff --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiFreezingClientTest.java @@ -0,0 +1,188 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.spi.communication.tcp; + +import javax.cache.Cache; +import java.lang.management.ManagementFactory; +import java.util.Iterator; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.query.ScanQuery; +import org.apache.ignite.cluster.ClusterTopologyException; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.lang.IgniteCallable; +import org.apache.ignite.resources.IgniteInstanceResource; +import org.apache.ignite.resources.LoggerResource; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; + +/** + * Tests that a client frozen by a JVM stop-the-world (STW) pause will be failed if a connection can't be established.
+ */ +@RunWith(JUnit4.class) +public class TcpCommunicationSpiFreezingClientTest extends GridCommonAbstractTest { + /** */ + private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + cfg.setFailureDetectionTimeout(120000); + cfg.setClientFailureDetectionTimeout(120000); + cfg.setClientMode("client".equals(gridName)); + + TcpCommunicationSpi spi = new TcpCommunicationSpi(); + + spi.setConnectTimeout(3000); + spi.setMaxConnectTimeout(6000); + spi.setReconnectCount(3); + spi.setIdleConnectionTimeout(100); + spi.setSharedMemoryPort(-1); + + TcpDiscoverySpi discoSpi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); + + discoSpi.setIpFinder(IP_FINDER); + + cfg.setCommunicationSpi(spi); + cfg.setDiscoverySpi(discoSpi); + + cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME).setWriteSynchronizationMode(FULL_SYNC). + setCacheMode(PARTITIONED).setAtomicityMode(ATOMIC)); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + System.setProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL, "true"); + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + super.afterTestsStopped(); + + System.clearProperty(IgniteSystemProperties.IGNITE_ENABLE_FORCIBLE_NODE_KILL); + } + + /** {@inheritDoc} */ + @Override protected boolean isMultiJvm() { + return true; + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testFreezingClient() throws Exception { + try { + final IgniteEx srv = (IgniteEx) startGrids(2); + + final IgniteEx client = startGrid("client"); + + final int keysCnt = 100_000; + + try (IgniteDataStreamer<Integer, byte[]> streamer = srv.dataStreamer(DEFAULT_CACHE_NAME)) { + for (int i = 0; i < keysCnt; i++) + streamer.addData(i, new byte[512]); + } + + // Wait for connections to go idle. + doSleep(1000); + + srv.compute(srv.cluster().forNode(client.localNode())).withNoFailover().call(new ClientClosure()); + + fail("Client node must be kicked from topology"); + } + catch (ClusterTopologyException e) { + // Expected. + } + finally { + stopAllGrids(); + } + } + + /** */ + public static class ClientClosure implements IgniteCallable<Integer> { + /** */ + private static final long serialVersionUID = 0L; + + @IgniteInstanceResource + private transient Ignite ignite; + + @LoggerResource + private IgniteLogger log; + + /** {@inheritDoc} */ + @Override public Integer call() throws Exception { + Thread loadThread = new Thread() { + @Override public void run() { + log.info("result = " + simulateLoad()); + } + }; + + loadThread.setName("load-thread"); + loadThread.start(); + + int cnt = 0; + + final Iterator<Cache.Entry<Integer, byte[]>> it = ignite.cache(DEFAULT_CACHE_NAME). + query(new ScanQuery<Integer, byte[]>().setPageSize(100000)).iterator(); + + while (it.hasNext()) { + Cache.Entry<Integer, byte[]> entry = it.next(); + + // Trigger STW.
+ final long[] tids = ManagementFactory.getThreadMXBean().findDeadlockedThreads(); + + cnt++; + } + + loadThread.join(); + + return cnt; + } + + /** + * + */ + public static double simulateLoad() { + double d = 0; + + for (int i = 0; i < 1000000000; i++) + d += Math.log(Math.PI * i); + + return d; + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiHalfOpenedConnectionTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiHalfOpenedConnectionTest.java index 3e10f942c4459..ccbd9d26fef7a 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiHalfOpenedConnectionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiHalfOpenedConnectionTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests case when connection is closed only for one side, when other is not notified. */ +@RunWith(JUnit4.class) public class TcpCommunicationSpiHalfOpenedConnectionTest extends GridCommonAbstractTest { /** Client spi. */ private TcpCommunicationSpi clientSpi; @@ -66,6 +70,7 @@ public class TcpCommunicationSpiHalfOpenedConnectionTest extends GridCommonAbstr /** * @throws Exception If failed. */ + @Test public void testReconnect() throws Exception { pairedConnections = false; @@ -75,6 +80,7 @@ public void testReconnect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectPaired() throws Exception { pairedConnections = true; diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiSkipMessageSendTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiSkipMessageSendTest.java index 2c17f957e27fb..e3d3b63a729ee 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiSkipMessageSendTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpiSkipMessageSendTest.java @@ -48,10 +48,14 @@ import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests that the client will be segmented in time and won't hang due to canceling compute jobs. */ +@RunWith(JUnit4.class) public class TcpCommunicationSpiSkipMessageSendTest extends GridCommonAbstractTest { /** */ private static final CountDownLatch COMPUTE_JOB_STARTED = new CountDownLatch(1); @@ -105,6 +109,7 @@ public class TcpCommunicationSpiSkipMessageSendTest extends GridCommonAbstractTe /** * @throws Exception If failed. 
*/ + @Test public void testClientSegmented() throws Exception { startGrid("server"); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationStatisticsTest.java b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationStatisticsTest.java index 377d1ebb9b766..4707cf1d80f14 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationStatisticsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationStatisticsTest.java @@ -42,18 +42,16 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.GridTestMessage; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for TcpCommunicationSpi statistics. */ +@RunWith(JUnit4.class) public class TcpCommunicationStatisticsTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Mutex. 
*/ final private Object mux = new Object(); @@ -100,8 +98,6 @@ private class SynchronizedCommunicationSpi extends TcpCommunicationSpi { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER).setForceServerMode(true)); - TcpCommunicationSpi spi = new SynchronizedCommunicationSpi(); cfg.setCommunicationSpi(spi); @@ -134,6 +130,7 @@ private TcpCommunicationSpiMBean mbean(int nodeIdx) throws MalformedObjectNameEx * @throws Exception If failed. */ @SuppressWarnings("ConstantConditions") + @Test public void testStatistics() throws Exception { startGrids(2); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/deployment/local/GridLocalDeploymentSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/deployment/local/GridLocalDeploymentSpiSelfTest.java index 14833640606de..ecca16ab7c3aa 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/deployment/local/GridLocalDeploymentSpiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/deployment/local/GridLocalDeploymentSpiSelfTest.java @@ -35,11 +35,15 @@ import org.apache.ignite.spi.deployment.DeploymentResource; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Local deployment SPI test. */ @GridSpiTest(spi = LocalDeploymentSpi.class, group = "Deployment SPI") +@RunWith(JUnit4.class) public class GridLocalDeploymentSpiSelfTest extends GridSpiAbstractTest { /** */ private static Map>>> tasks = @@ -87,6 +91,7 @@ private void checkUndeployed(Class> taskCls) { * @throws Exception If failed. 
*/ @SuppressWarnings({"TooBroadScope"}) + @Test public void testDeploy() throws Exception { String taskName = "GridDeploymentTestTask"; @@ -114,6 +119,7 @@ public void testDeploy() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testRedeploy() throws Exception { String taskName = "GridDeploymentTestTask"; diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryRandomStartStopTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryRandomStartStopTest.java index 3614987323014..c11377b587b4d 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryRandomStartStopTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryRandomStartStopTest.java @@ -30,11 +30,15 @@ import org.apache.ignite.events.Event; import org.apache.ignite.internal.managers.eventstorage.GridLocalEventListener; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base discovery random start-stop test class. * @param Discovery spi type. */ +@RunWith(JUnit4.class) public abstract class AbstractDiscoveryRandomStartStopTest extends GridSpiAbstractTest { /** */ private static final int DFLT_MAX_INTERVAL = 10; @@ -152,6 +156,7 @@ private class Waiter extends Thread { * @throws Exception If failed. 
*/ @SuppressWarnings({"BusyWait"}) + @Test public void testDiscovery() throws Exception { Random rand = new Random(); @@ -215,4 +220,4 @@ private void toggleState() throws Exception { return attrs; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoverySelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoverySelfTest.java index e59d24a2c5afc..471dc0ba60a95 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoverySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoverySelfTest.java @@ -34,9 +34,9 @@ import mx4j.tools.adaptor.http.HttpAdaptor; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.internal.util.future.GridFinishedFuture; +import org.apache.ignite.internal.util.future.IgniteFinishedFutureImpl; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.spi.IgniteSpi; import org.apache.ignite.spi.IgniteSpiAdapter; @@ -47,6 +47,9 @@ import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED; import static org.apache.ignite.lang.IgniteProductVersion.fromString; @@ -55,7 +58,7 @@ * Base discovery self-test class. * @param SPI implementation class. 
*/ -@SuppressWarnings({"JUnitAbstractTestClassNamingConvention"}) +@RunWith(JUnit4.class) public abstract class AbstractDiscoverySelfTest extends GridSpiAbstractTest { /** */ private static final String HTTP_ADAPTOR_MBEAN_NAME = "mbeanAdaptor:protocol=HTTP"; @@ -91,6 +94,7 @@ protected AbstractDiscoverySelfTest() { * @throws Exception If failed. */ @SuppressWarnings({"UnconditionalWait"}) + @Test public void testDiscovery() throws Exception { assert spis.size() > 1; assert spiStartTime > 0; @@ -162,7 +166,7 @@ public boolean isMetricsUpdated() { } /** {@inheritDoc} */ - @Override public IgniteInternalFuture onDiscovery( + @Override public IgniteFuture onDiscovery( int type, long topVer, ClusterNode node, @@ -172,7 +176,7 @@ public boolean isMetricsUpdated() { if (type == EVT_NODE_METRICS_UPDATED) isMetricsUpdate = true; - return new GridFinishedFuture(); + return new IgniteFinishedFutureImpl<>(); } } @@ -180,6 +184,7 @@ public boolean isMetricsUpdated() { * @throws Exception If failed. */ @SuppressWarnings({"UnconditionalWait"}) + @Test public void testMetrics() throws Exception { Collection listeners = new ArrayList<>(); @@ -232,6 +237,7 @@ public void testMetrics() throws Exception { * * @throws Exception If test failed. */ + @Test public void testLocalMetricsUpdate() throws Exception { AtomicInteger[] locUpdCnts = new AtomicInteger[getSpiCount()]; @@ -246,7 +252,7 @@ public void testLocalMetricsUpdate() throws Exception { // No-op. 
} - @Override public IgniteInternalFuture onDiscovery(int type, long topVer, ClusterNode node, + @Override public IgniteFuture onDiscovery(int type, long topVer, ClusterNode node, Collection topSnapshot, Map> topHist, @Nullable DiscoverySpiCustomMessage data) { // If METRICS_UPDATED came from local node @@ -254,7 +260,7 @@ public void testLocalMetricsUpdate() throws Exception { && node.id().equals(spi.getLocalNode().id())) spiCnt.addAndGet(1); - return new GridFinishedFuture(); + return new IgniteFinishedFutureImpl<>(); } }; @@ -291,6 +297,7 @@ private boolean isContainsNodeId(Iterable nodes, UUID nodeId) { /** * Checks that physical address of local node is equal to local.ip property. */ + @Test public void testLocalNode() { for (DiscoverySpi spi : spis) { ClusterNode loc = spi.getLocalNode(); @@ -304,6 +311,7 @@ public void testLocalNode() { /** * Check that "test.node.prop" is present on all nodes. */ + @Test public void testNodeAttributes() { for (DiscoverySpi spi : spis) { assert !spi.getRemoteNodes().isEmpty() : "No remote nodes found in Spi."; @@ -339,6 +347,7 @@ else if (!"true".equals(attr)) { /** * Checks that each spi can pings all other. */ + @Test public void testPing() { for (DiscoverySpi spi : spis) { for (IgniteTestResources rscrs : spiRsrcs) { @@ -357,6 +366,7 @@ public void testPing() { * * @throws Exception If failed. 
*/ + @Test public void testNodeSerialize() throws Exception { for (DiscoverySpi spi : spis) { ClusterNode node = spi.getLocalNode(); @@ -416,16 +426,21 @@ protected long getMaxMetricsWaitTime() { } @SuppressWarnings({"NakedNotify"}) - @Override public IgniteInternalFuture onDiscovery(int type, long topVer, ClusterNode node, - Collection topSnapshot, Map> topHist, - @Nullable DiscoverySpiCustomMessage data) { + @Override public IgniteFuture onDiscovery( + int type, + long topVer, + ClusterNode node, + Collection topSnapshot, + Map> topHist, + @Nullable DiscoverySpiCustomMessage data + ) { info("Discovery event [type=" + type + ", node=" + node + ']'); synchronized (mux) { mux.notifyAll(); } - return new GridFinishedFuture(); + return new IgniteFinishedFutureImpl<>(); } }); @@ -549,4 +564,4 @@ private static class NullOutputStream extends OutputStream { // No-op } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryTest.java index 2c2d99a1698a6..7876524c0a268 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AbstractDiscoveryTest.java @@ -29,12 +29,15 @@ import org.apache.ignite.events.Event; import org.apache.ignite.internal.managers.eventstorage.GridLocalEventListener; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base discovery test class. * @param SPI implementation class. 
*/ -@SuppressWarnings({"JUnitAbstractTestClassNamingConvention"}) +@RunWith(JUnit4.class) public abstract class AbstractDiscoveryTest extends GridSpiAbstractTest { /** */ @SuppressWarnings({"ClassExplicitlyExtendsThread"}) @@ -47,7 +50,6 @@ private class Pinger extends Thread { private boolean isCanceled; /** {@inheritDoc} */ - @SuppressWarnings({"UnusedCatchParameter"}) @Override public void run() { Random rnd = new Random(); @@ -127,6 +129,7 @@ private class DiscoveryListener implements GridLocalEventListener { /** * @throws Exception If failed. */ + @Test public void testDiscovery() throws Exception { GridLocalEventListener discoLsnr = new DiscoveryListener(); @@ -154,4 +157,4 @@ public void testDiscovery() throws Exception { return attrs; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AuthenticationRestartTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AuthenticationRestartTest.java index cf1836ad0af75..d7fc2a1f783f1 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/AuthenticationRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/AuthenticationRestartTest.java @@ -25,6 +25,9 @@ import org.apache.ignite.spi.discovery.tcp.TestReconnectPluginProvider; import org.apache.ignite.spi.discovery.tcp.TestReconnectProcessor; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.waitForCondition; @@ -32,6 +35,7 @@ * Checks whether client is able to reconnect to restarted cluster with * enabled security. 
*/ +@RunWith(JUnit4.class) public class AuthenticationRestartTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -62,6 +66,7 @@ public class AuthenticationRestartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testClientReconnect() throws Exception { stopGrid("server"); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/ClusterMetricsSnapshotSerializeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/ClusterMetricsSnapshotSerializeSelfTest.java index 25de2c74b1d3b..a6a64d87c01f6 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/ClusterMetricsSnapshotSerializeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/ClusterMetricsSnapshotSerializeSelfTest.java @@ -21,11 +21,15 @@ import org.apache.ignite.internal.ClusterMetricsSnapshot; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid discovery metrics test. 
*/ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class ClusterMetricsSnapshotSerializeSelfTest extends GridCommonAbstractTest { /** Metrics serialized by Ignite 1.0 */ private static final byte[] METRICS_V1 = {0, 0, 0, 22, 0, 0, 0, 8, 64, 0, 0, 0, 0, 0, 0, 27, 0, 0, 0, 15, 64, @@ -47,6 +51,7 @@ public ClusterMetricsSnapshotSerializeSelfTest() { } /** */ + @Test public void testMetricsSize() { byte[] data = new byte[ClusterMetricsSnapshot.METRICS_SIZE]; @@ -62,6 +67,7 @@ public void testMetricsSize() { } /** */ + @Test public void testSerialization() { byte[] data = new byte[ClusterMetricsSnapshot.METRICS_SIZE]; @@ -83,6 +89,7 @@ public void testSerialization() { /** * Checks compatibility with old serialized metrics. */ + @Test public void testMetricsCompatibility() { ClusterMetrics metrics = ClusterMetricsSnapshot.deserialize(METRICS_V1, 0); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/FilterDataForClientNodeDiscoveryTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/FilterDataForClientNodeDiscoveryTest.java index 9a45d2d68d5be..c3e2dd7285541 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/FilterDataForClientNodeDiscoveryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/FilterDataForClientNodeDiscoveryTest.java @@ -28,18 +28,17 @@ import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class FilterDataForClientNodeDiscoveryTest 
extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Join servers count. */ private int joinSrvCnt; @@ -56,6 +55,7 @@ public class FilterDataForClientNodeDiscoveryTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testDataBag() throws Exception { startGrid(configuration(0, false)); startGrid(configuration(1, false)); @@ -73,6 +73,7 @@ public void testDataBag() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDiscoveryServerOnlyCustomMessage() throws Exception { startGrid(configuration(0, false)); startGrid(configuration(1, false)); @@ -120,7 +121,7 @@ private IgniteConfiguration configuration(int nodeIdx, boolean client) throws Ex TcpDiscoverySpi testSpi = new TestDiscoverySpi(); - testSpi.setIpFinder(IP_FINDER); + testSpi.setIpFinder(sharedStaticIpFinder); cfg.setDiscoverySpi(testSpi); @@ -216,4 +217,4 @@ private static class MessageForServer implements DiscoveryServerOnlyCustomMessag return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/IgniteClientReconnectEventHandlingTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/IgniteClientReconnectEventHandlingTest.java new file mode 100644 index 0000000000000..72a73d14179e1 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/IgniteClientReconnectEventHandlingTest.java @@ -0,0 +1,159 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi.discovery; + +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.ConcurrentLinkedQueue; +import java.util.concurrent.CountDownLatch; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.Event; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteFuture; +import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; + +import static java.util.concurrent.TimeUnit.MILLISECONDS; +import static java.util.concurrent.TimeUnit.SECONDS; +import static org.apache.ignite.events.EventType.EVTS_DISCOVERY; +import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_RECONNECTED; +import static org.apache.ignite.events.EventType.EVT_NODE_JOINED; +import static org.apache.ignite.testframework.GridTestUtils.waitForCondition; + +/** + * Checks that client will not process previous cluster events after reconnect. 
+ */ +@Ignore("https://ggsystems.atlassian.net/browse/GG-24771") +public class IgniteClientReconnectEventHandlingTest extends GridCommonAbstractTest { + /** */ + private final CountDownLatch latch = new CountDownLatch(1); + + /** */ + private final CountDownLatch reconnect = new CountDownLatch(1); + + /** */ + private final ConcurrentLinkedQueue<Event> evtQueue = new ConcurrentLinkedQueue<>(); + + /** */ + private static final int RECONNECT_DELAY = 100; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + DiscoverySpi discoSpi = cfg.getDiscoverySpi(); + + // To optimize test duration. + if (discoSpi instanceof TcpDiscoverySpi) + ((TcpDiscoverySpi)discoSpi).setReconnectDelay(RECONNECT_DELAY); + + if (igniteInstanceName.contains("client")) { + cfg.setClientMode(true); + + Map<IgnitePredicate<? extends Event>, int[]> lsnrs = new HashMap<>(); + + lsnrs.put(new IgnitePredicate<Event>() { + @Override public boolean apply(Event evt) { + try { + // Wait until the discovery notifier worker has processed the client disconnection. + latch.await(cfg.getFailureDetectionTimeout(), MILLISECONDS); + } + catch (InterruptedException e) { + log.error("Unexpected exception.", e); + + fail("Unexpected exception: " + e.getMessage()); + } + + return true; + } + }, new int[] {EVT_NODE_JOINED}); + + lsnrs.put(new IgnitePredicate<Event>() { + @Override public boolean apply(Event evt) { + reconnect.countDown(); + + return true; + } + }, new int[] {EVT_CLIENT_NODE_RECONNECTED}); + + lsnrs.put(new IgnitePredicate<Event>() { + @Override public boolean apply(Event evt) { + evtQueue.add(evt); + + return true; + } + }, EVTS_DISCOVERY); + + cfg.setLocalEventListeners(lsnrs); + } + + return cfg; + } + + /** @throws Exception If failed. 
*/ + @Test + public void testClientReconnect() throws Exception { + startGrid(0); + + IgniteEx client = startGrid("client"); + + // Creates the join event and holds it up on the client. + startGrid(1); + + stopGrid(0); + + stopGrid(1); + + // Wait until the discovery notifier worker has processed the client disconnection. + assertTrue("Failed to wait for client disconnected.", + waitForCondition(() -> client.cluster().clientReconnectFuture() != null, 10_000)); + + assertTrue(client.context().clientDisconnected()); + + IgniteFuture<?> fut = client.cluster().clientReconnectFuture(); + + fut.listen(f -> evtQueue.clear()); + + // Starts a new cluster. + startGrid(0); + + // The client shouldn't connect to the new cluster until it has processed the previous cluster's events. + U.sleep(RECONNECT_DELAY * 2); + + assertTrue(client.context().clientDisconnected()); + + // Continue processing events from the previous cluster. + latch.countDown(); + + fut.get(); + + assertFalse(client.context().clientDisconnected()); + + assertTrue("Failed to wait for client reconnect event.", reconnect.await(10, SECONDS)); + + awaitPartitionMapExchange(); + + assertEquals("Only reconnect event should be processed after the client reconnects to cluster.", + 1, evtQueue.size()); + + assertEquals(EVT_CLIENT_NODE_RECONNECTED, evtQueue.poll().type()); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/IgniteDiscoveryCacheReuseSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/IgniteDiscoveryCacheReuseSelfTest.java index 75bcbb1db73ef..3659e69fbd7ae 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/IgniteDiscoveryCacheReuseSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/IgniteDiscoveryCacheReuseSelfTest.java @@ -18,7 +18,6 @@ package org.apache.ignite.spi.discovery; import org.apache.ignite.Ignite; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import 
org.apache.ignite.internal.managers.discovery.DiscoCache; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -26,28 +25,17 @@ import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests discovery cache reuse between topology events. */ +@RunWith(JUnit4.class) public class IgniteDiscoveryCacheReuseSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -60,6 +48,7 @@ public class IgniteDiscoveryCacheReuseSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testDiscoCacheReuseOnNodeJoin() throws Exception { startGridsMultiThreaded(2); @@ -109,4 +98,4 @@ private void assertDiscoCacheReuse(AffinityTopologyVersion v1, AffinityTopologyV assertEquals("Discovery caches are not equal", alives1, alives2); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/LongClientConnectToClusterTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/LongClientConnectToClusterTest.java new file mode 100644 index 0000000000000..3e4873907633a --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/LongClientConnectToClusterTest.java @@ -0,0 +1,178 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.spi.discovery; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.EventType; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.util.future.IgniteFinishedFutureImpl; +import org.apache.ignite.lang.IgniteFuture; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeAddFinishedMessage; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.Nullable; +import java.io.IOException; +import java.io.OutputStream; +import java.net.Socket; +import java.util.Collection; +import java.util.Collections; +import java.util.Map; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Tests that a client connects to a two-node cluster even when the join takes longer than the + * {@link org.apache.ignite.configuration.IgniteConfiguration#clientFailureDetectionTimeout}. + */ +@RunWith(JUnit4.class) +public class LongClientConnectToClusterTest extends GridCommonAbstractTest { + /** Client instance name. */ + public static final String CLIENT_INSTANCE_NAME = "client"; + /** Client metrics update count. */ + private static volatile int clientMetricsUpdateCnt; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + TcpDiscoverySpi discoSpi = getTestIgniteInstanceName(0).equals(igniteInstanceName) + ? new DelayedTcpDiscoverySpi() + : getTestIgniteInstanceName(1).equals(igniteInstanceName) + ? 
new UpdateMetricsInterceptorTcpDiscoverySpi() + : new TcpDiscoverySpi(); + + return super.getConfiguration(igniteInstanceName) + .setClientMode(igniteInstanceName.startsWith(CLIENT_INSTANCE_NAME)) + .setClientFailureDetectionTimeout(1_000) + .setMetricsUpdateFrequency(500) + .setDiscoverySpi(discoSpi + .setReconnectCount(1) + .setLocalAddress("127.0.0.1") + .setIpFinder(new TcpDiscoveryVmIpFinder() + .setAddresses(Collections.singletonList(igniteInstanceName.startsWith(CLIENT_INSTANCE_NAME) + ? "127.0.0.1:47501" + : "127.0.0.1:47500..47502")))); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + startGrids(2); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + } + + /** + * Test method. + * + * @throws Exception If failed. + */ + @Test + public void testClientConnectToCluster() throws Exception { + clientMetricsUpdateCnt = 0; + + IgniteEx client = startGrid(CLIENT_INSTANCE_NAME); + + assertTrue(clientMetricsUpdateCnt > 0); + + assertTrue(client.localNode().isClient()); + + assertEquals(3, client.cluster().nodes().size()); + } + + /** Discovery SPI that intercepts TcpDiscoveryClientMetricsUpdateMessage. */ + private static class UpdateMetricsInterceptorTcpDiscoverySpi extends TcpDiscoverySpi { + /** */ + private class DiscoverySpiListenerWrapper implements DiscoverySpiListener { + /** */ + private DiscoverySpiListener delegate; + + /** + * @param delegate Delegate. 
+ */ + private DiscoverySpiListenerWrapper(DiscoverySpiListener delegate) { + this.delegate = delegate; + } + + /** {@inheritDoc} */ + @Override public IgniteFuture<?> onDiscovery( + int type, + long topVer, + ClusterNode node, + Collection<ClusterNode> topSnapshot, + @Nullable Map<Long, Collection<ClusterNode>> topHist, + @Nullable DiscoverySpiCustomMessage spiCustomMsg + ) { + if (EventType.EVT_NODE_METRICS_UPDATED == type) { + log.info("Metrics update message caught from node " + node); + + assertFalse(locNode.isClient()); + + if (node.isClient()) + clientMetricsUpdateCnt++; + } + + if (delegate != null) + return delegate.onDiscovery(type, topVer, node, topSnapshot, topHist, spiCustomMsg); + + return new IgniteFinishedFutureImpl<>(); + } + + /** {@inheritDoc} */ + @Override public void onLocalNodeInitialized(ClusterNode locNode) { + if (delegate != null) + delegate.onLocalNodeInitialized(locNode); + } + } + + /** {@inheritDoc} */ + @Override public void setListener(@Nullable DiscoverySpiListener lsnr) { + super.setListener(new DiscoverySpiListenerWrapper(lsnr)); + } + } + + /** Discovery SPI that delays TcpDiscoveryNodeAddFinishedMessage. 
*/ + public static final int DELAY_MSG_PERIOD_MILLIS = 2_000; + + /** {@inheritDoc} */ + @Override protected void writeToSocket(ClusterNode node, Socket sock, OutputStream out, + TcpDiscoveryAbstractMessage msg, long timeout) throws IOException, IgniteCheckedException { + if (msg instanceof TcpDiscoveryNodeAddFinishedMessage && msg.topologyVersion() == 3) { + log.info("Caught discovery message: " + msg); + + try { + Thread.sleep(DELAY_MSG_PERIOD_MILLIS); + } + catch (InterruptedException e) { + log.error("Interrupt on DelayedTcpDiscoverySpi.", e); + + Thread.currentThread().interrupt(); + } + } + + super.writeToSocket(node, sock, out, msg, timeout); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/DiscoveryUnmarshalVulnerabilityTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/DiscoveryUnmarshalVulnerabilityTest.java index 448c9af9223cd..69081c1c4f0b3 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/DiscoveryUnmarshalVulnerabilityTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/DiscoveryUnmarshalVulnerabilityTest.java @@ -32,6 +32,9 @@ import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MARSHALLER_BLACKLIST; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MARSHALLER_WHITELIST; @@ -39,6 +42,7 @@ /** * Tests for whitelist and blacklist ot avoiding deserialization vulnerability. */ +@RunWith(JUnit4.class) public class DiscoveryUnmarshalVulnerabilityTest extends GridCommonAbstractTest { /** Marshaller. 
*/ private static final JdkMarshaller MARSH = new JdkMarshaller(); @@ -61,6 +65,7 @@ public class DiscoveryUnmarshalVulnerabilityTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testNoLists() throws Exception { testExploit(true); } @@ -68,6 +73,7 @@ public void testNoLists() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWhiteListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); @@ -79,6 +85,7 @@ public void testWhiteListIncluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWhiteListExcluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_excluded.txt").getPath(); @@ -90,6 +97,7 @@ public void testWhiteListExcluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlackListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); @@ -101,6 +109,7 @@ public void testBlackListIncluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlackListExcluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_excluded.txt").getPath(); @@ -112,6 +121,7 @@ public void testBlackListExcluded() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBothListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientConnectTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientConnectTest.java index 2ed55a1fcc4df..3ae4f547d69ae 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientConnectTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientConnectTest.java @@ -26,20 +26,24 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicBoolean; +import javax.cache.CacheException; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryNodeAddFinishedMessage; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * We emulate that client receive message about joining to topology earlier than some server nodes in topology. 
@@ -47,16 +51,29 @@ * To emulate this we connect client to second node in topology and pause sending message about joining finishing to * third node. */ +@RunWith(JUnit4.class) public class IgniteClientConnectTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Latch to stop message sending. */ private final CountDownLatch latch = new CountDownLatch(1); /** Start client flag. */ private final AtomicBoolean clientJustStarted = new AtomicBoolean(false); + /** Failure detection timeout. */ + private int failureDetectionTimeout = -1; + + /** Node add finished delay. */ + private int nodeAddFinishedDelay = 5_000; + + /** Connection timeout. */ + private long connTimeout = -1; + + /** Maxx connection timeout. */ + private long maxxConnTimeout = -1; + + /** Recon count. */ + private int reconCnt = -1; + /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -69,9 +86,24 @@ public class IgniteClientConnectTest extends GridCommonAbstractTest { ipFinder.registerAddresses(Collections.singleton(new InetSocketAddress(InetAddress.getLoopbackAddress(), 47501))); disco.setIpFinder(ipFinder); + + + if (failureDetectionTimeout != -1) + cfg.setFailureDetectionTimeout(failureDetectionTimeout); + + if (connTimeout != -1) { + TcpCommunicationSpi tcpCommSpi = (TcpCommunicationSpi)cfg.getCommunicationSpi(); + + tcpCommSpi.setConnectTimeout(connTimeout); + tcpCommSpi.setMaxConnectTimeout(maxxConnTimeout); + tcpCommSpi.setReconnectCount(reconCnt); + } + } + else { + disco.setIpFinder(sharedStaticIpFinder); + + cfg.setFailureDetectionTimeout(60_000); } - else - disco.setIpFinder(ipFinder); disco.setJoinTimeout(2 * 60_000); disco.setSocketTimeout(1000); @@ -94,7 +126,56 @@ public class IgniteClientConnectTest extends GridCommonAbstractTest { * * @throws Exception If 
failed. */ + @Test public void testClientConnectToBigTopology() throws Exception { + failureDetectionTimeout = -1; + connTimeout = -1; + + testClientConnectToBigTopology0(); + } + + /** + * + * @throws Exception If failed. + */ + @Test + public void testFailureDetectionTimeoutReached() throws Exception { + failureDetectionTimeout = 1000; + connTimeout = -1; + + try { + testClientConnectToBigTopology0(); + } + catch (CacheException e) { + assertTrue(e.getCause().getMessage().contains("Failed to send message")); + } + } + + /** + * + * @throws Exception If failed. + */ + @Test + public void testCustomTimeoutReached() throws Exception { + failureDetectionTimeout = 1000; + + connTimeout = 1000; + maxxConnTimeout = 3000; + reconCnt = 3; + + try { + testClientConnectToBigTopology0(); + } + catch (CacheException e) { + assertTrue(e.getCause().getMessage().contains("Failed to send message")); + } + } + + /** + * + * @throws Exception In case of error. + */ + public void testClientConnectToBigTopology0() throws Exception { Ignite ignite = startGrids(3); IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -142,9 +223,9 @@ class TestTcpDiscoverySpi extends TcpDiscoverySpi { try { latch.await(); - Thread.sleep(3000); + Thread.sleep(nodeAddFinishedDelay); } catch (InterruptedException e) { - e.printStackTrace(); + fail("Unexpected interrupt on nodeAddFinishedDelay"); } super.writeToSocket(sock, out, msg, timeout); @@ -160,4 +241,4 @@ TcpDiscoveryImpl discovery() { return impl; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientReconnectMassiveShutdownTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientReconnectMassiveShutdownTest.java index 2878110a9cfb3..26dea8398d9f3 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientReconnectMassiveShutdownTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/IgniteClientReconnectMassiveShutdownTest.java @@ -38,10 +38,11 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -52,6 +53,7 @@ /** * Client reconnect test in multi threaded mode while cache operations are in progress. */ +@RunWith(JUnit4.class) public class IgniteClientReconnectMassiveShutdownTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 14; @@ -62,17 +64,12 @@ public class IgniteClientReconnectMassiveShutdownTest extends GridCommonAbstract /** */ private static volatile boolean clientMode; - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setClientMode(clientMode); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder)); - ((TcpCommunicationSpi)cfg.getCommunicationSpi()).setSharedMemoryPort(-1); return cfg; @@ -95,13 +92,15 @@ public class IgniteClientReconnectMassiveShutdownTest extends GridCommonAbstract /** * @throws Exception If any error occurs. 
*/ - public void _testMassiveServersShutdown1() throws Exception { + @Test + public void testMassiveServersShutdown1() throws Exception { massiveServersShutdown(StopType.FAIL_EVENT); } /** * @throws Exception If any error occurs. */ + @Test public void testMassiveServersShutdown2() throws Exception { massiveServersShutdown(StopType.SIMULATE_FAIL); } @@ -109,7 +108,8 @@ public void testMassiveServersShutdown2() throws Exception { /** * @throws Exception If any error occurs. */ - public void _testMassiveServersShutdown3() throws Exception { + @Test + public void testMassiveServersShutdown3() throws Exception { massiveServersShutdown(StopType.CLOSE); } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryMarshallerCheckSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryMarshallerCheckSelfTest.java index f14e0b1cf70ce..e44b0a54994e1 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryMarshallerCheckSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryMarshallerCheckSelfTest.java @@ -23,17 +23,16 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.spi.IgniteSpiException; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi}. 
*/ +@RunWith(JUnit4.class) public class TcpClientDiscoveryMarshallerCheckSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean testFooter; @@ -65,10 +64,6 @@ public class TcpClientDiscoveryMarshallerCheckSelfTest extends GridCommonAbstrac cfg.setClientMode(true); cfg.setMarshaller(new BinaryMarshaller()); } - - TcpDiscoverySpi spi = new TcpDiscoverySpi().setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); } return cfg; @@ -82,6 +77,7 @@ public class TcpClientDiscoveryMarshallerCheckSelfTest extends GridCommonAbstrac /** * @throws Exception If failed. */ + @Test public void testMarshallerInConsistency() throws Exception { startGrid(0); @@ -101,6 +97,7 @@ public void testMarshallerInConsistency() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInconsistentCompactFooterSingle() throws Exception { clientServerInconsistentConfigFail(false, 1, 1); } @@ -108,6 +105,7 @@ public void testInconsistentCompactFooterSingle() throws Exception { /** * @throws Exception If failed. */ + @Test public void testInconsistentCompactFooterMulti() throws Exception { clientServerInconsistentConfigFail(true, 2, 10); } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiCoordinatorChangeTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiCoordinatorChangeTest.java new file mode 100644 index 0000000000000..be76afde2db6c --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiCoordinatorChangeTest.java @@ -0,0 +1,126 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi.discovery.tcp; + +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.ignite.Ignite; +import org.apache.ignite.Ignition; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.Event; +import org.apache.ignite.events.EventType; +import org.apache.ignite.lang.IgnitePredicate; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * This class tests that a client is able to connect to another server node without leaving the cluster. + */ +@RunWith(JUnit4.class) +public class TcpClientDiscoverySpiCoordinatorChangeTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + } + + /** + * Checks that a client node doesn't fail because of coordinator change. + * + * @throws Exception If test fails. + */ + @Test + public void testClientNotFailed() throws Exception { + TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + + // Start server A. 
+ Ignite srvA = startNode("server-a", ipFinder, false); + + // Start the client. + Ignite client = startNode("client", ipFinder, true); + + AtomicBoolean clientReconnectState = getClientReconnectState(client); + + // Start server B. + Ignite srvB = startNode("server-b", ipFinder, false); + + // Stop server A. + srvA.close(); + + // Will throw an exception if the client is disconnected. + client.getOrCreateCache("CACHE-NAME"); + + // Check that the client didn't disconnect/reconnect quickly. + assertFalse("Client node was failed and reconnected to the cluster.", clientReconnectState.get()); + + // Stop the client. + client.close(); + + // Stop server B. + srvB.close(); + } + + /** + * @param instanceName Instance name. + * @param ipFinder IP-finder. + * @param clientMode Client mode flag. + * @return Started node. + * @throws Exception If a node was not started. + */ + private Ignite startNode(String instanceName, TcpDiscoveryIpFinder ipFinder, boolean clientMode) throws Exception { + IgniteConfiguration cfg = getConfiguration(instanceName) + .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder)) + .setClientMode(clientMode); + + return Ignition.start(cfg); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String instanceName) throws Exception { + return super.getConfiguration(instanceName) + .setMetricsUpdateFrequency(Integer.MAX_VALUE) + .setClientFailureDetectionTimeout(Integer.MAX_VALUE) + .setFailureDetectionTimeout(Integer.MAX_VALUE); + } + + /** + * @param ignite Client node. + * @return Client reconnect state. 
+ */
+ private AtomicBoolean getClientReconnectState(Ignite ignite) {
+ final AtomicBoolean reconnectState = new AtomicBoolean(false);
+
+ ignite.events().localListen(
+ new IgnitePredicate<Event>() {
+ @Override public boolean apply(Event evt) {
+ if (evt.type() == EventType.EVT_CLIENT_NODE_RECONNECTED)
+ reconnectState.set(true);
+
+ return true;
+ }
+ },
+ EventType.EVT_CLIENT_NODE_RECONNECTED
+ );
+
+ return reconnectState;
+ }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiFailureTimeoutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiFailureTimeoutSelfTest.java
index c167a902d23dd..01494cea24ec1 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiFailureTimeoutSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiFailureTimeoutSelfTest.java
@@ -43,12 +43,16 @@
import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage;
import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryPingRequest;
import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
import static org.apache.ignite.events.EventType.EVT_NODE_FAILED;
/**
 * Client-based discovery SPI test with failure detection timeout enabled.
 */
+@RunWith(JUnit4.class)
public class TcpClientDiscoverySpiFailureTimeoutSelfTest extends TcpClientDiscoverySpiSelfTest {
/** */
private final static int FAILURE_AWAIT_TIME = 7_000;
@@ -110,6 +114,7 @@ public class TcpClientDiscoverySpiFailureTimeoutSelfTest extends TcpClientDiscov
/**
 * @throws Exception in case of error.
 */
+ @Test
public void testFailureDetectionTimeoutEnabled() throws Exception {
startServerNodes(1);
startClientNodes(1);
@@ -130,6 +135,7 @@ public void testFailureDetectionTimeoutEnabled() throws Exception {
/**
 * @throws Exception in case of error.
*/ + @Test public void testFailureTimeoutWorkabilityAvgTimeout() throws Exception { failureThreshold = 3000; @@ -144,6 +150,7 @@ public void testFailureTimeoutWorkabilityAvgTimeout() throws Exception { /** * @throws Exception in case of error. */ + @Test public void testFailureTimeoutWorkabilitySmallTimeout() throws Exception { failureThreshold = 500; @@ -160,6 +167,7 @@ public void testFailureTimeoutWorkabilitySmallTimeout() throws Exception { * * @throws Exception in case of error. */ + @Test public void testFailureTimeoutServerClient() throws Exception { failureThreshold = 3000; clientFailureDetectionTimeout = 2000; @@ -213,6 +221,7 @@ public void testFailureTimeoutServerClient() throws Exception { * * @throws Exception in case of error. */ + @Test public void testFailureTimeout3Server() throws Exception { failureThreshold = 1000; clientFailureDetectionTimeout = 10000; @@ -319,6 +328,7 @@ private void checkFailureThresholdWorkability() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectOnCoordinatorRouterFail1() throws Exception { clientReconnectOnCoordinatorRouterFail(1); } @@ -326,6 +336,7 @@ public void testClientReconnectOnCoordinatorRouterFail1() throws Exception { /** * @throws Exception If failed. 
*/
+ @Test
public void testClientReconnectOnCoordinatorRouterFail2() throws Exception {
clientReconnectOnCoordinatorRouterFail(2);
}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiMulticastTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiMulticastTest.java
index e19b121aebdb7..ec44dee4cb1ac 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiMulticastTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiMulticastTest.java
@@ -30,6 +30,9 @@
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;
import org.apache.ignite.testframework.GridTestUtils;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_DISCONNECTED;
@@ -38,6 +41,7 @@
/**
 *
 */
+@RunWith(JUnit4.class)
public class TcpClientDiscoverySpiMulticastTest extends GridCommonAbstractTest {
/** */
private boolean forceSrv;
@@ -95,6 +99,7 @@ public class TcpClientDiscoverySpiMulticastTest extends GridCommonAbstractTest {
/**
 * @throws Exception If failed.
 */
+ @Test
public void testClientStartsFirst() throws Exception {
IgniteInternalFuture<Ignite> fut = GridTestUtils.runAsync(new Callable<Ignite>() {
@Override public Ignite call() throws Exception {
@@ -149,6 +154,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) {
/**
 * @throws Exception If failed.
 */
+ @Test
public void testJoinWithMulticast() throws Exception {
joinWithMulticast();
}
@@ -156,6 +162,7 @@ public void testJoinWithMulticast() throws Exception {
/**
 * @throws Exception If failed.
*/ + @Test public void testJoinWithMulticastForceServer() throws Exception { forceSrv = true; @@ -218,4 +225,4 @@ private void assertSpi(Ignite ignite, boolean client) { else assertFalse(addrSnds.isEmpty()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiSelfTest.java index c85e94e6c3889..8c078bb9271be 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoverySpiSelfTest.java @@ -42,6 +42,7 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.DiscoveryEvent; import org.apache.ignite.events.Event; +import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteKernal; @@ -63,6 +64,7 @@ import org.apache.ignite.spi.IgniteSpiOperationTimeoutException; import org.apache.ignite.spi.IgniteSpiOperationTimeoutHelper; import org.apache.ignite.spi.IgniteSpiThread; +import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; @@ -73,6 +75,9 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.MINUTES; @@ -86,6 +91,7 @@ 
/** * Client-based discovery tests. */ +@RunWith(JUnit4.class) public class TcpClientDiscoverySpiSelfTest extends GridCommonAbstractTest { /** */ private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); @@ -162,6 +168,14 @@ public class TcpClientDiscoverySpiSelfTest extends GridCommonAbstractTest { cfg.setClientFailureDetectionTimeout(clientFailureDetectionTimeout()); + // Override default settings to speed up reconnection. + cfg.setCommunicationSpi( + new TcpCommunicationSpi() + .setConnectTimeout(500) + .setMaxConnectTimeout(1000) + .setReconnectCount(2) + ); + TcpDiscoverySpi disco = getDiscoverySpi(); if (igniteInstanceName.startsWith("server")) @@ -299,6 +313,7 @@ protected long failureDetectionTimeout() { /** * @throws Exception If failed. */ + @Test public void testJoinTimeout() throws Exception { clientIpFinder = new TcpDiscoveryVmIpFinder(); joinTimeout = 1000; @@ -320,6 +335,7 @@ public void testJoinTimeout() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientToClientPing() throws Exception { startGrid("server-p1"); Ignite c1 = startGrid("client-p1"); @@ -335,6 +351,7 @@ public void testClientToClientPing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientNodeJoin() throws Exception { startServerNodes(3); startClientNodes(3); @@ -357,6 +374,7 @@ public void testClientNodeJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientNodeLeave() throws Exception { startServerNodes(3); startClientNodes(3); @@ -379,6 +397,7 @@ public void testClientNodeLeave() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientNodeFail() throws Exception { startServerNodes(3); startClientNodes(3); @@ -401,6 +420,7 @@ public void testClientNodeFail() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testServerNodeJoin() throws Exception { startServerNodes(3); startClientNodes(3); @@ -423,6 +443,7 @@ public void testServerNodeJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerNodeLeave() throws Exception { startServerNodes(3); startClientNodes(3); @@ -445,6 +466,7 @@ public void testServerNodeLeave() throws Exception { /** * @throws Exception If failed. */ + @Test public void testServerNodeFail() throws Exception { startServerNodes(3); startClientNodes(3); @@ -469,6 +491,7 @@ public void testServerNodeFail() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPing() throws Exception { startServerNodes(2); startClientNodes(1); @@ -487,6 +510,7 @@ public void testPing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPingFailedNodeFromClient() throws Exception { startServerNodes(2); startClientNodes(1); @@ -518,6 +542,7 @@ public void testPingFailedNodeFromClient() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPingFailedClientNode() throws Exception { startServerNodes(2); startClientNodes(1); @@ -538,15 +563,25 @@ public void testPingFailedClientNode() throws Exception { ((TestTcpDiscoverySpi)client.configuration().getDiscoverySpi()).resumeAll(); - Thread.sleep(2000); + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + try { + boolean ping1 = ((IgniteEx) srv1).context().discovery().pingNode(client.cluster().localNode().id()); - assert ((IgniteEx)srv1).context().discovery().pingNode(client.cluster().localNode().id()); - assert ((IgniteEx)srv0).context().discovery().pingNode(client.cluster().localNode().id()); + boolean ping2 = ((IgniteEx) srv0).context().discovery().pingNode(client.cluster().localNode().id()); + + return ping1 && ping2; + } catch (IgniteClientDisconnectedException | IgniteClientDisconnectedCheckedException e) { + return false; + } + } + }, 5_000); } /** * @throws Exception If failed. */ + @Test public void testClientReconnectOnRouterFail() throws Exception { clientsPerSrv = 1; @@ -575,6 +610,7 @@ public void testClientReconnectOnRouterFail() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnectOnRouterSuspend() throws Exception { reconnectAfterSuspend(false); } @@ -584,6 +620,7 @@ public void testClientReconnectOnRouterSuspend() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnectOnRouterSuspendTopologyChange() throws Exception { clientFailureDetectionTimeout = 20_000; @@ -665,6 +702,7 @@ private void reconnectAfterSuspend(boolean changeTop) throws Exception { /** * @throws Exception if failed. */ + @Test public void testClientReconnectHistoryMissingOnRouter() throws Exception { clientFailureDetectionTimeout = 60000; netTimeout = 60000; @@ -712,6 +750,7 @@ public void testClientReconnectHistoryMissingOnRouter() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectAfterPause() throws Exception { startServerNodes(2); startClientNodes(1); @@ -740,6 +779,7 @@ public void testReconnectAfterPause() throws Exception { /** * @throws Exception if failed. */ + @Test public void testReconnectAfterMassiveTopologyChange() throws Exception { clientIpFinder = IP_FINDER; @@ -790,6 +830,7 @@ public void testReconnectAfterMassiveTopologyChange() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectOnNetworkProblem() throws Exception { clientsPerSrv = 1; @@ -815,6 +856,7 @@ public void testClientReconnectOnNetworkProblem() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectOneServerOneClient() throws Exception { clientsPerSrv = 1; @@ -840,6 +882,7 @@ public void testClientReconnectOneServerOneClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectTopologyChange1() throws Exception { clientFailureDetectionTimeout = 100000; @@ -884,6 +927,7 @@ public void testClientReconnectTopologyChange1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientReconnectTopologyChange2() throws Exception { clientFailureDetectionTimeout = 100000; @@ -928,6 +972,7 @@ public void testClientReconnectTopologyChange2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGetMissedMessagesOnReconnect() throws Exception { clientsPerSrv = 1; @@ -965,6 +1010,7 @@ public void testGetMissedMessagesOnReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientSegmentation() throws Exception { clientsPerSrv = 1; @@ -1014,6 +1060,7 @@ public void testClientSegmentation() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClientNodeJoinOneServer() throws Exception { startServerNodes(1); @@ -1031,6 +1078,7 @@ public void testClientNodeJoinOneServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientNodeLeaveOneServer() throws Exception { startServerNodes(1); startClientNodes(1); @@ -1053,6 +1101,7 @@ public void testClientNodeLeaveOneServer() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientNodeFailOneServer() throws Exception { startServerNodes(1); startClientNodes(1); @@ -1073,6 +1122,7 @@ public void testClientNodeFailOneServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientAndRouterFail() throws Exception { startServerNodes(2); startClientNodes(2); @@ -1105,6 +1155,7 @@ public void testClientAndRouterFail() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMetrics() throws Exception { startServerNodes(3); startClientNodes(3); @@ -1165,6 +1216,7 @@ private boolean checkMetrics(int srvCnt, int clientCnt, int execJobsCnt) { /** * @throws Exception If failed. */ + @Test public void testDataExchangeFromServer() throws Exception { testDataExchange("server-0"); } @@ -1172,6 +1224,7 @@ public void testDataExchangeFromServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDataExchangeFromClient() throws Exception { testDataExchange("client-0"); } @@ -1216,6 +1269,7 @@ private void testDataExchange(String masterName) throws Exception { /** * @throws Exception If failed. */ + @Test public void testDataExchangeFromServer2() throws Exception { startServerNodes(2); @@ -1246,6 +1300,7 @@ public void testDataExchangeFromServer2() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testDuplicateId() throws Exception { startServerNodes(2); @@ -1267,6 +1322,7 @@ public void testDuplicateId() throws Exception { /** * @throws Exception If any error occurs. 
*/ + @Test public void testTimeoutWaitingNodeAddedMessage() throws Exception { longSockTimeouts = true; @@ -1310,6 +1366,7 @@ public void testTimeoutWaitingNodeAddedMessage() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testGridStartTime() throws Exception { startServerNodes(2); @@ -1332,6 +1389,7 @@ public void testGridStartTime() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinError() throws Exception { startServerNodes(1); @@ -1349,6 +1407,7 @@ public void testJoinError() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinError2() throws Exception { startServerNodes(1); @@ -1367,6 +1426,7 @@ public void testJoinError2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinError3() throws Exception { startServerNodes(1); @@ -1384,6 +1444,7 @@ public void testJoinError3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinErrorMissedAddFinishedMessage1() throws Exception { missedAddFinishedMessage(true); } @@ -1391,6 +1452,7 @@ public void testJoinErrorMissedAddFinishedMessage1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinErrorMissedAddFinishedMessage2() throws Exception { missedAddFinishedMessage(false); } @@ -1449,6 +1511,7 @@ private void missedAddFinishedMessage(boolean singleSrv) throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientMessageWorkerStartSingleServer() throws Exception { clientMessageWorkerStart(1, 1); } @@ -1456,6 +1519,7 @@ public void testClientMessageWorkerStartSingleServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClientMessageWorkerStartTwoServers1() throws Exception { clientMessageWorkerStart(2, 1); } @@ -1463,6 +1527,7 @@ public void testClientMessageWorkerStartTwoServers1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testClientMessageWorkerStartTwoServers2() throws Exception { clientMessageWorkerStart(2, 2); } @@ -1527,6 +1592,7 @@ private void clientMessageWorkerStart(int srvs, int connectTo) throws Exception /** * @throws Exception If failed. */ + @Test public void testJoinMutlithreaded() throws Exception { startServerNodes(1); @@ -1550,6 +1616,7 @@ public void testJoinMutlithreaded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectAfterFail() throws Exception { reconnectAfterFail(false); } @@ -1557,6 +1624,7 @@ public void testReconnectAfterFail() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectAfterFailTopologyChanged() throws Exception { reconnectAfterFail(true); } @@ -1678,6 +1746,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { /** * @throws Exception If failed. */ + @Test public void testReconnectAfterFailConcurrentJoin() throws Exception { startServerNodes(1); @@ -1750,6 +1819,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { /** * @throws Exception If failed. */ + @Test public void testClientFailReconnectDisabled() throws Exception { reconnectDisabled = true; @@ -1791,6 +1861,7 @@ public void testClientFailReconnectDisabled() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectSegmentedAfterJoinTimeoutServerFailed() throws Exception { reconnectSegmentedAfterJoinTimeout(true); } @@ -1798,6 +1869,7 @@ public void testReconnectSegmentedAfterJoinTimeoutServerFailed() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testReconnectSegmentedAfterJoinTimeoutNetworkError() throws Exception { reconnectSegmentedAfterJoinTimeout(false); } @@ -1898,6 +1970,7 @@ else if (evt.type() == EVT_NODE_SEGMENTED) { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectClusterRestart() throws Exception { netTimeout = 3000; joinTimeout = 60_000; @@ -1970,6 +2043,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { /** * @throws Exception If failed. */ + @Test public void testDisconnectAfterNetworkTimeout() throws Exception { netTimeout = 5000; joinTimeout = 60_000; @@ -2055,6 +2129,7 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) { /** * @throws Exception If failed. */ + @Test public void testForceClientReconnect() throws Exception { startServerNodes(1); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryUnresolvedHostTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryUnresolvedHostTest.java index 4dc1604bd389f..78be17048fead 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryUnresolvedHostTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpClientDiscoveryUnresolvedHostTest.java @@ -29,10 +29,14 @@ import org.apache.ignite.spi.IgniteSpiOperationTimeoutHelper; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Client-based discovery SPI test with unresolved server hosts. */ +@RunWith(JUnit4.class) public class TcpClientDiscoveryUnresolvedHostTest extends GridCommonAbstractTest { /** */ TestTcpDiscoverySpi spi; @@ -58,6 +62,7 @@ public class TcpClientDiscoveryUnresolvedHostTest extends GridCommonAbstractTest * * @throws Exception in case of error. 
*/ + @Test public void test() throws Exception { try { startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryClientSuspensionSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryClientSuspensionSelfTest.java index a519d25d4114f..4dafa9cd8106b 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryClientSuspensionSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryClientSuspensionSelfTest.java @@ -23,27 +23,20 @@ import org.apache.ignite.Ignition; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for missed client metrics update messages. */ +@RunWith(JUnit4.class) public class TcpDiscoveryClientSuspensionSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - cfg.setMetricsUpdateFrequency(100); cfg.setClientFailureDetectionTimeout(1000); @@ -73,6 +66,7 @@ public class TcpDiscoveryClientSuspensionSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. 
*/ + @Test public void testOneServer() throws Exception { doTestClientSuspension(1); } @@ -80,6 +74,7 @@ public void testOneServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTwoServers() throws Exception { doTestClientSuspension(2); } @@ -87,6 +82,7 @@ public void testTwoServers() throws Exception { /** * @throws Exception If failed. */ + @Test public void testThreeServers() throws Exception { doTestClientSuspension(3); } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryConcurrentStartTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryConcurrentStartTest.java index a66fe71f2aa8c..3538d160e23a8 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryConcurrentStartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryConcurrentStartTest.java @@ -20,22 +20,21 @@ import java.util.concurrent.Callable; import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link TcpDiscoverySpi}. 
*/ +@RunWith(JUnit4.class) public class TcpDiscoveryConcurrentStartTest extends GridCommonAbstractTest { /** */ private static final int TOP_SIZE = 3; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static volatile boolean client; @@ -43,8 +42,6 @@ public class TcpDiscoveryConcurrentStartTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder)); - cfg.setCacheConfiguration(); cfg.setClientMode(client); @@ -65,6 +62,7 @@ public class TcpDiscoveryConcurrentStartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testConcurrentStart() throws Exception { for (int i = 0; i < 10; i++) { try { @@ -79,6 +77,7 @@ public void testConcurrentStart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConcurrentStartClients() throws Exception { for (int i = 0; i < 20; i++) { try { @@ -108,4 +107,4 @@ public void testConcurrentStartClients() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryIpFinderCleanerTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryIpFinderCleanerTest.java new file mode 100644 index 0000000000000..4a48cdd5babcc --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryIpFinderCleanerTest.java @@ -0,0 +1,165 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.spi.discovery.tcp; + +import java.net.InetSocketAddress; +import java.util.Collection; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.apache.ignite.Ignite; +import org.apache.ignite.Ignition; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Tests IP finder cleaner. + */ +public class TcpDiscoveryIpFinderCleanerTest extends GridCommonAbstractTest { + /** */ + private static final long IP_FINDER_CLEAN_FREQ = 1000; + + /** */ + private static final long NODE_STOPPING_TIMEOUT = 20000; + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + } + + /** + * Checks the node stops gracefully even if {@link TcpDiscoveryIpFinder} ignores {@link InterruptedException}. + * + * @throws Exception If failed. 
+ */ + @Test + public void testNodeStops() throws Exception { + CustomIpFinder ipFinder = new CustomIpFinder(true); + + Ignite ignite = Ignition.start(getConfiguration(ipFinder)); + + try { + if (!ipFinder.suspend().await(IP_FINDER_CLEAN_FREQ * 5, TimeUnit.MILLISECONDS)) + fail("Failed to suspend IP finder."); + + if (!stopNodeAsync(ignite).await(NODE_STOPPING_TIMEOUT, TimeUnit.MILLISECONDS)) + fail("Node was not stopped."); + } + finally { + ipFinder.interruptCleanerThread(); + } + } + + /** + * @param ipFinder IP finder. + * @return Grid test configuration. + * @throws Exception If failed. + */ + private IgniteConfiguration getConfiguration(TcpDiscoveryIpFinder ipFinder) throws Exception { + TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi() + .setIpFinder(ipFinder) + .setIpFinderCleanFrequency(IP_FINDER_CLEAN_FREQ); + + return getConfiguration() + .setDiscoverySpi(discoverySpi); + } + + /** + * Stop the node asynchronously. + * + * @param node Ignite instance. + * @return Latch to signal when the node is stopped completely. + */ + private static CountDownLatch stopNodeAsync(final Ignite node) { + final CountDownLatch latch = new CountDownLatch(1); + + GridTestUtils.runAsync(new Runnable() { + @Override public void run() { + try { + node.close(); + } + finally { + latch.countDown(); + } + } + }); + + return latch; + } + + /** + * Custom IP finder. 
+ */
+ private static class CustomIpFinder extends TcpDiscoveryVmIpFinder {
+ /** */
+ private volatile boolean suspendFinderAndResetInterruptedFlag;
+
+ /** */
+ private final CountDownLatch suspended = new CountDownLatch(1);
+
+ /** */
+ private volatile Thread cleanerThread;
+
+ /**
+ * @param shared Shared flag.
+ */
+ public CustomIpFinder(boolean shared) {
+ super(shared);
+ }
+
+ /** {@inheritDoc} */
+ @Override public synchronized Collection<InetSocketAddress> getRegisteredAddresses() {
+ if (suspendFinderAndResetInterruptedFlag) {
+ cleanerThread = Thread.currentThread();
+
+ suspended.countDown();
+
+ try {
+ new CountDownLatch(1).await();
+ }
+ catch (InterruptedException ignore) {
+ suspendFinderAndResetInterruptedFlag = false;
+ }
+ }
+
+ return super.getRegisteredAddresses();
+ }
+
+ /**
+ * Suspend IP finder in {@link CustomIpFinder#getRegisteredAddresses()} method.
+ *
+ * @return Latch to signal when IP finder is suspended.
+ */
+ public CountDownLatch suspend() {
+ suspendFinderAndResetInterruptedFlag = true;
+
+ return suspended;
+ }
+
+ /**
+ * Interrupt IP finder cleaner thread.
+ */ + public void interruptCleanerThread() { + if (cleanerThread != null) + cleanerThread.interrupt(); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMarshallerCheckSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMarshallerCheckSelfTest.java index 696225c495d5c..778b6d0106507 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMarshallerCheckSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMarshallerCheckSelfTest.java @@ -22,13 +22,15 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.spi.IgniteSpiException; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link TcpDiscoverySpi}. 
*/ +@RunWith(JUnit4.class) public class TcpDiscoveryMarshallerCheckSelfTest extends GridCommonAbstractTest { /** */ private static boolean sameMarsh; @@ -36,19 +38,10 @@ public class TcpDiscoveryMarshallerCheckSelfTest extends GridCommonAbstractTest /** */ private static boolean flag; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(discoSpi); - cfg.setLocalHost("127.0.0.1"); if (flag) @@ -72,6 +65,7 @@ public class TcpDiscoveryMarshallerCheckSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testMarshallerInConsistency() throws Exception { sameMarsh = false; @@ -93,10 +87,11 @@ public void testMarshallerInConsistency() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMarshallerConsistency() throws Exception { sameMarsh = true; startGrid(1); startGrid(2); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMultiThreadedTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMultiThreadedTest.java index 70d5078832c30..fcc76f269a325 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMultiThreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryMultiThreadedTest.java @@ -54,10 +54,11 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_JOB_MAPPED; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; @@ -68,6 +69,7 @@ /** * Test for {@link TcpDiscoverySpi}. */ +@RunWith(JUnit4.class) public class TcpDiscoveryMultiThreadedTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 5; @@ -96,9 +98,6 @@ private static boolean client() { return client != null ? client : clientFlagGlobal; } - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * @throws Exception If fails. 
*/ @@ -107,10 +106,11 @@ public TcpDiscoveryMultiThreadedTest() throws Exception { } /** {@inheritDoc} */ - @SuppressWarnings({"IfMayBeConditional"}) @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + cfg.setConsistentId(igniteInstanceName); + UUID id = nodeId.get(); if (id != null) { @@ -122,10 +122,8 @@ public TcpDiscoveryMultiThreadedTest() throws Exception { if (client()) cfg.setClientMode(true); - cfg.setDiscoverySpi(new TcpDiscoverySpi(). - setIpFinder(ipFinder). - setJoinTimeout(60_000). - setNetworkTimeout(10_000)); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setJoinTimeout(60_000); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setNetworkTimeout(10_000); int[] evts = {EVT_NODE_FAILED, EVT_NODE_LEFT}; @@ -171,6 +169,7 @@ public TcpDiscoveryMultiThreadedTest() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testMultiThreadedClientsRestart() throws Exception { final AtomicBoolean done = new AtomicBoolean(); @@ -219,6 +218,7 @@ public void testMultiThreadedClientsRestart() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testMultiThreadedClientsServersRestart() throws Throwable { fail("https://issues.apache.org/jira/browse/IGNITE-1123"); @@ -228,7 +228,10 @@ public void testMultiThreadedClientsServersRestart() throws Throwable { /** * @throws Exception If any error occurs. */ - public void _testMultiThreadedServersRestart() throws Throwable { + @Test + public void testMultiThreadedServersRestart() throws Throwable { + fail("https://issues.apache.org/jira/browse/IGNITE-1123"); + multiThreadedClientsServersRestart(GRID_CNT * 2, 0); } @@ -423,6 +426,7 @@ else if (X.hasCause(e, ClusterTopologyCheckedException.class)) /** * @throws Exception If any error occurs. 
*/ + @Test public void testTopologyVersion() throws Exception { clientFlagGlobal = false; @@ -447,6 +451,7 @@ public void testTopologyVersion() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testMultipleStartOnCoordinatorStop() throws Exception{ for (int k = 0; k < 3; k++) { log.info("Iteration: " + k); @@ -496,7 +501,10 @@ public void testMultipleStartOnCoordinatorStop() throws Exception{ /** * @throws Exception If failed. */ - public void _testCustomEventOnJoinCoordinatorStop() throws Exception { + @Test + public void testCustomEventOnJoinCoordinatorStop() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10198"); + for (int k = 0; k < 10; k++) { log.info("Iteration: " + k); @@ -513,11 +521,13 @@ public void _testCustomEventOnJoinCoordinatorStop() throws Exception { IgniteInternalFuture fut1 = GridTestUtils.runAsync(new Callable() { @Override public Void call() throws Exception { - CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); + String cacheName = DEFAULT_CACHE_NAME + "-tmp"; Ignite ignite = ignite(START_NODES - 1); while (!stop.get()) { + CacheConfiguration ccfg = new CacheConfiguration(cacheName); + ignite.createCache(ccfg); ignite.destroyCache(ccfg.getName()); @@ -590,7 +600,10 @@ public void _testCustomEventOnJoinCoordinatorStop() throws Exception { /** * @throws Exception If failed. */ - public void _testClientContinuousQueryCoordinatorStop() throws Exception { + @Test + public void testClientContinuousQueryCoordinatorStop() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10198"); + for (int k = 0; k < 10; k++) { log.info("Iteration: " + k); @@ -659,7 +672,10 @@ public void _testClientContinuousQueryCoordinatorStop() throws Exception { /** * @throws Exception If failed. 
*/ - public void _testCustomEventNodeRestart() throws Exception { + @Test + public void testCustomEventNodeRestart() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-10249"); + clientFlagGlobal = false; Ignite ignite = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeAttributesUpdateOnReconnectTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeAttributesUpdateOnReconnectTest.java index 56dc4ece5f4fc..25b14a92bfe96 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeAttributesUpdateOnReconnectTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeAttributesUpdateOnReconnectTest.java @@ -32,12 +32,16 @@ import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteClientReconnectAbstractTest.reconnectClientNode; /** * Checks whether on client reconnect node attributes from kernal context are sent. */ +@RunWith(JUnit4.class) public class TcpDiscoveryNodeAttributesUpdateOnReconnectTest extends GridCommonAbstractTest { /** */ private volatile String rejoinAttr; @@ -85,6 +89,7 @@ public class TcpDiscoveryNodeAttributesUpdateOnReconnectTest extends GridCommonA /** * @throws Exception If failed. 
*/ + @Test public void testReconnect() throws Exception { Ignite srv = startGrid("server"); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConfigConsistentIdSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConfigConsistentIdSelfTest.java index 3f80746edac8a..f7c066ddf55e9 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConfigConsistentIdSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConfigConsistentIdSelfTest.java @@ -19,25 +19,22 @@ import java.io.Serializable; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link IgniteConfiguration#consistentId}. */ +@RunWith(JUnit4.class) public class TcpDiscoveryNodeConfigConsistentIdSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setLocalHost("0.0.0.0"); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder)); - cfg.setConsistentId(igniteInstanceName); return cfg; @@ -56,6 +53,7 @@ public class TcpDiscoveryNodeConfigConsistentIdSelfTest extends GridCommonAbstra /** * @throws Exception If failed. 
*/ + @Test public void testConsistentId() throws Exception { Object id0 = grid(0).localNode().consistentId(); Serializable id1 = grid(0).configuration().getConsistentId(); @@ -72,4 +70,4 @@ public void testConsistentId() throws Exception { assertEquals(id0, grid(0).localNode().consistentId()); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConsistentIdSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConsistentIdSelfTest.java index b9d7682df5129..5a3f3109ace2f 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConsistentIdSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryNodeConsistentIdSelfTest.java @@ -20,25 +20,22 @@ import org.apache.ignite.Ignite; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link TcpDiscoveryNode#consistentId()} */ +@RunWith(JUnit4.class) public class TcpDiscoveryNodeConsistentIdSelfTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setLocalHost("0.0.0.0"); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder)); - return cfg; } @@ -55,6 +52,7 @@ public class TcpDiscoveryNodeConsistentIdSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testConsistentId() throws Exception { Object id0 = grid(0).localNode().consistentId(); @@ -77,4 +75,4 @@ public void testConsistentId() throws Exception { private int getDiscoveryPort(Ignite ignite) { return ((TcpDiscoveryNode) ignite.cluster().localNode()).discoveryPort(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryPendingMessageDeliveryTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryPendingMessageDeliveryTest.java index 9b3dfeea2e93d..c7b975703f255 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryPendingMessageDeliveryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryPendingMessageDeliveryTest.java @@ -24,27 +24,28 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.managers.discovery.CustomMessageWrapper; import org.apache.ignite.internal.managers.discovery.DiscoCache; import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManager; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.util.GridConcurrentHashSet; 
import org.apache.ignite.lang.IgniteUuid; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.eclipse.jetty.util.ConcurrentHashSet; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class TcpDiscoveryPendingMessageDeliveryTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private volatile boolean blockMsgs; @@ -72,10 +73,12 @@ public class TcpDiscoveryPendingMessageDeliveryTest extends GridCommonAbstractTe disco = new DyingDiscoverySpi(); else if (igniteInstanceName.startsWith("listener")) disco = new ListeningDiscoverySpi(); + else if (igniteInstanceName.startsWith("receiver")) + disco = new DyingThreadDiscoverySpi(); else disco = new TcpDiscoverySpi(); - disco.setIpFinder(IP_FINDER); + disco.setIpFinder(sharedStaticIpFinder); cfg.setDiscoverySpi(disco); return cfg; @@ -84,6 +87,7 @@ else if (igniteInstanceName.startsWith("listener")) /** * @throws Exception If failed. */ + @Test public void testPendingMessagesOverflow() throws Exception { Ignite coord = startGrid("coordinator"); TcpDiscoverySpi coordDisco = (TcpDiscoverySpi)coord.configuration().getDiscoverySpi(); @@ -139,6 +143,7 @@ public void testPendingMessagesOverflow() throws Exception { /** * @throws Exception If failed. 
     */
+    @Test
     public void testCustomMessageInSingletonCluster() throws Exception {
         Ignite coord = startGrid("coordinator");
         TcpDiscoverySpi coordDisco = (TcpDiscoverySpi)coord.configuration().getDiscoverySpi();
@@ -184,6 +189,48 @@ public void testCustomMessageInSingletonCluster() throws Exception {
         }, 10000));
     }
 
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testDeliveryAllFailedMessagesInCorrectOrder() throws Exception {
+        IgniteEx coord = startGrid("coordinator");
+        TcpDiscoverySpi coordDisco = (TcpDiscoverySpi)coord.configuration().getDiscoverySpi();
+
+        Set<TcpDiscoveryAbstractMessage> sentEnsuredMsgs = new GridConcurrentHashSet<>();
+        coordDisco.addSendMessageListener(msg -> {
+            if (coordDisco.ensured(msg))
+                sentEnsuredMsgs.add(msg);
+        });
+
+        // Node which receives messages but will not send them further around the ring.
+        IgniteEx receiver = startGrid("receiver");
+
+        // Node which will be failed first.
+        IgniteEx dummy = startGrid("dummy");
+
+        // Node which should receive all fail messages in any case.
+        startGrid("listener");
+
+        sentEnsuredMsgs.clear();
+        receivedEnsuredMsgs.clear();
+
+        blockMsgs = true;
+
+        log.info("Sending fail node messages");
+
+        coord.context().discovery().failNode(dummy.localNode().id(), "Dummy node failed");
+        coord.context().discovery().failNode(receiver.localNode().id(), "Receiver node failed");
+
+        boolean delivered = GridTestUtils.waitForCondition(() -> {
+            log.info("Waiting for messages delivery");
+
+            return receivedEnsuredMsgs.equals(sentEnsuredMsgs);
+        }, 5000);
+
+        assertTrue("Sent: " + sentEnsuredMsgs + "; received: " + receivedEnsuredMsgs, delivered);
+    }
+
     /**
      * @param disco Discovery SPI.
      * @param id Message id.
@@ -192,6 +239,17 @@ private void sendDummyCustomMessage(TcpDiscoverySpi disco, IgniteUuid id) {
         disco.sendCustomEvent(new CustomMessageWrapper(new DummyCustomDiscoveryMessage(id)));
     }
 
+    /**
+     * Discovery SPI that makes the message worker thread die when {@code blockMsgs} is set to {@code true}.
+ */ + private class DyingThreadDiscoverySpi extends TcpDiscoverySpi { + /** {@inheritDoc} */ + @Override protected void startMessageProcess(TcpDiscoveryAbstractMessage msg) { + if (blockMsgs) + throw new RuntimeException("Thread is dying"); + } + } + /** * Discovery SPI, that makes a node stop sending messages when {@code blockMsgs} is set to {@code true}. */ diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryReconnectUnstableTopologyTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryReconnectUnstableTopologyTest.java new file mode 100644 index 0000000000000..55f745a3595ad --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryReconnectUnstableTopologyTest.java @@ -0,0 +1,224 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.spi.discovery.tcp; + +import java.io.IOException; +import java.io.OutputStream; +import java.lang.reflect.Field; +import java.net.Socket; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CountDownLatch; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.EventType; +import org.apache.ignite.internal.DiscoverySpiTestListener; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.managers.discovery.CustomMessageWrapper; +import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteBiClosure; +import org.apache.ignite.spi.discovery.DiscoverySpiCustomMessage; +import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Test scenario: + * + * 1. Create topology in specific order: srv1 srv2 client srv3 srv4 + * 2. Delay client reconnect. + * 3. Trigger topology change by restarting srv2 (will trigger reconnect to next node), srv3, srv4 + * 4. Resume reconnect to node with empty EnsuredMessageHistory and wait for completion. + * 5. Add new node to topology. + * + * Pass condition: new node successfully joins topology. 
+ */ +@RunWith(JUnit4.class) +public class TcpDiscoveryReconnectUnstableTopologyTest extends GridCommonAbstractTest { + /** */ + private static final TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + cfg.setAutoActivationEnabled(false); + + BlockTcpDiscoverySpi spi = new BlockTcpDiscoverySpi(); + + // Guarantees client join to srv2. + Field rndAddrsField = U.findField(BlockTcpDiscoverySpi.class, "skipAddrsRandomization"); + + assertNotNull(rndAddrsField); + + rndAddrsField.set(spi, true); + + cfg.setDiscoverySpi(spi.setIpFinder(ipFinder)); + + cfg.setClientMode(igniteInstanceName.startsWith("client")); + + cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME)); + + return cfg; + } + + /** + * @throws Exception If failed. + */ + @Test + public void testReconnectUnstableTopology() throws Exception { + try { + List nodes = new ArrayList<>(); + + nodes.add(startGrid(0)); + + nodes.add(startGrid(1)); + + nodes.add(startGrid("client")); + + nodes.add(startGrid(2)); + + nodes.add(startGrid(3)); + + for (int i = 0; i < nodes.size(); i++) { + IgniteEx ex = nodes.get(i); + + assertEquals(i + 1, ex.localNode().order()); + } + + DiscoverySpiTestListener lsnr = new DiscoverySpiTestListener(); + + spi(grid("client")).setInternalListener(lsnr); + + lsnr.startBlockReconnect(); + + CountDownLatch restartLatch = new CountDownLatch(1); + + IgniteInternalFuture fut = multithreadedAsync(() -> { + stopGrid(1); + stopGrid(2); + stopGrid(3); + try { + startGrid(1); + startGrid(2); + startGrid(3); + } + catch (Exception e) { + fail(); + } + + restartLatch.countDown(); + }, 1, "restarter"); + + U.awaitQuiet(restartLatch); + + lsnr.stopBlockRestart(); + + fut.get(); + + doSleep(1500); // Wait for reconnect. 
+ + startGrid(4); + } + finally { + stopAllGrids(); + } + } + + /** + * @param ig Ignite. + */ + private TcpDiscoverySpi spi(Ignite ig) { + return (TcpDiscoverySpi)ig.configuration().getDiscoverySpi(); + } + + /** + * Discovery SPI with blocking support. + */ + protected class BlockTcpDiscoverySpi extends TcpDiscoverySpi { + /** Closure. */ + private volatile IgniteBiClosure clo; + + /** + * @param clo Closure. + */ + public void setClosure(IgniteBiClosure clo) { + this.clo = clo; + } + + /** + * @param addr Address. + * @param msg Message. + */ + private synchronized void apply(ClusterNode addr, TcpDiscoveryAbstractMessage msg) { + if (!(msg instanceof TcpDiscoveryCustomEventMessage)) + return; + + TcpDiscoveryCustomEventMessage cm = (TcpDiscoveryCustomEventMessage)msg; + + DiscoveryCustomMessage delegate; + + try { + DiscoverySpiCustomMessage custMsg = cm.message(marshaller(), U.resolveClassLoader(ignite().configuration())); + + assertNotNull(custMsg); + + delegate = ((CustomMessageWrapper)custMsg).delegate(); + + } + catch (Throwable throwable) { + throw new RuntimeException(throwable); + } + + if (clo != null) + clo.apply(addr, delegate); + } + + /** {@inheritDoc} */ + @Override protected void writeToSocket( + Socket sock, + TcpDiscoveryAbstractMessage msg, + byte[] data, + long timeout + ) throws IOException { + if (spiCtx != null) + apply(spiCtx.localNode(), msg); + + super.writeToSocket(sock, msg, data, timeout); + } + + /** {@inheritDoc} */ + @Override protected void writeToSocket(Socket sock, + OutputStream out, + TcpDiscoveryAbstractMessage msg, + long timeout) throws IOException, IgniteCheckedException { + if (spiCtx != null) + apply(spiCtx.localNode(), msg); + + super.writeToSocket(sock, out, msg, timeout); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryRestartTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryRestartTest.java index cacefa58c14e0..fd7d9eec7f87f 
100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryRestartTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryRestartTest.java @@ -30,13 +30,15 @@ import org.apache.ignite.events.DiscoveryEvent; import org.apache.ignite.events.Event; import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.util.GridConcurrentHashSet; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import org.eclipse.jetty.util.ConcurrentHashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; import static org.apache.ignite.events.EventType.EVT_NODE_JOINED; @@ -45,10 +47,8 @@ /** * */ +@RunWith(JUnit4.class) public class TcpDiscoveryRestartTest extends GridCommonAbstractTest { - /** */ - private TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static AtomicReference err; @@ -56,12 +56,6 @@ public class TcpDiscoveryRestartTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(spi); - int[] evts = {EVT_NODE_JOINED, EVT_NODE_FAILED, EVT_NODE_LEFT}; cfg.setIncludeEventTypes(evts); @@ -83,6 +77,7 @@ public class TcpDiscoveryRestartTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testRestart() throws Exception { err = new AtomicReference<>(); @@ -90,7 +85,7 @@ public void testRestart() throws Exception { startGrids(NODE_CNT); - final ConcurrentHashSet nodeIds = new ConcurrentHashSet<>(); + final GridConcurrentHashSet nodeIds = new GridConcurrentHashSet<>(); final AtomicInteger id = new AtomicInteger(NODE_CNT); @@ -172,10 +167,10 @@ private void failed(String msg) { */ private class TestEventListener implements IgnitePredicate { /** */ - private final ConcurrentHashSet joinIds = new ConcurrentHashSet<>(); + private final GridConcurrentHashSet joinIds = new GridConcurrentHashSet<>(); /** */ - private final ConcurrentHashSet leftIds = new ConcurrentHashSet<>(); + private final GridConcurrentHashSet leftIds = new GridConcurrentHashSet<>(); /** {@inheritDoc} */ @Override public boolean apply(Event evt) { @@ -211,4 +206,4 @@ void checkEvents(final UUID nodeId) throws Exception { assertTrue("No left event: " + nodeId, leftIds.contains(nodeId)); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySegmentationPolicyTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySegmentationPolicyTest.java index 4c66d2463ab55..5e944f8d42449 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySegmentationPolicyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySegmentationPolicyTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for segmentation policy and failure handling in {@link TcpDiscoverySpi}. 
*/ +@RunWith(JUnit4.class) public class TcpDiscoverySegmentationPolicyTest extends GridCommonAbstractTest { /** Nodes count. */ private static final int NODES_CNT = 3; @@ -59,6 +63,7 @@ public class TcpDiscoverySegmentationPolicyTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStopOnSegmentation() throws Exception { startGrids(NODES_CNT); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySelfTest.java index 1aae8fbc3133a..c57e0fde4e827 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySelfTest.java @@ -92,6 +92,9 @@ import org.eclipse.jetty.util.ConcurrentHashSet; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.events.EventType.EVT_JOB_MAPPED; @@ -108,6 +111,7 @@ /** * Test for {@link TcpDiscoverySpi}. */ +@RunWith(JUnit4.class) public class TcpDiscoverySelfTest extends GridCommonAbstractTest { /** */ private TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); @@ -245,6 +249,7 @@ else if (igniteInstanceName.contains("testNodeShutdownOnRingMessageWorkerFailure /** * @throws Exception If any error occurs. */ + @Test public void testSingleNodeStartStop() throws Exception { try { startGrid(1); @@ -257,6 +262,7 @@ public void testSingleNodeStartStop() throws Exception { /** * @throws Exception If any error occurs. 
*/ + @Test public void testThreeNodesStartStop() throws Exception { try { IgniteEx ignite1 = startGrid(1); @@ -309,6 +315,7 @@ public void testThreeNodesStartStop() throws Exception { /** * @throws Exception If any errors occur. */ + @Test public void testNodeConnectMessageSize() throws Exception { try { Ignite g1 = startGrid(1); @@ -340,6 +347,7 @@ public void testNodeConnectMessageSize() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testPing() throws Exception { try { startGrid(1); @@ -370,6 +378,7 @@ public void testPing() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testFailureDetectionOnNodePing1() throws Exception { try { Ignite g1 = startGrid("testFailureDetectionOnNodePingCoordinator"); @@ -386,6 +395,7 @@ public void testFailureDetectionOnNodePing1() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testFailureDetectionOnNodePing2() throws Exception { try { startGrid("testFailureDetectionOnNodePingCoordinator"); @@ -402,6 +412,7 @@ public void testFailureDetectionOnNodePing2() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testFailureDetectionOnNodePing3() throws Exception { try { Ignite g1 = startGrid("testFailureDetectionOnNodePingCoordinator"); @@ -451,6 +462,7 @@ private void testFailureDetectionOnNodePing(Ignite pingingNode, Ignite failedNod /** * @throws Exception If any error occurs. */ + @Test public void testPingInterruptedOnNodeFailed() throws Exception { try { final Ignite pingingNode = startGrid("testPingInterruptedOnNodeFailedPingingNode"); @@ -528,6 +540,7 @@ public void testPingInterruptedOnNodeFailed() throws Exception { /** * @throws Exception If any error occurs. 
*/ + @Test public void testPingInterruptedOnNodeLeft() throws Exception { try { final Ignite pingingNode = startGrid("testPingInterruptedOnNodeFailedPingingNode"); @@ -583,6 +596,7 @@ public void testPingInterruptedOnNodeLeft() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testNodeAdded() throws Exception { try { final Ignite g1 = startGrid(1); @@ -624,6 +638,7 @@ public void testNodeAdded() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testOrdinaryNodeLeave() throws Exception { try { Ignite g1 = startGrid(1); @@ -659,6 +674,7 @@ public void testOrdinaryNodeLeave() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testCoordinatorNodeLeave() throws Exception { try { startGrid(1); @@ -703,6 +719,7 @@ public void testCoordinatorNodeLeave() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testOrdinaryNodeFailure() throws Exception { try { Ignite g1 = startGrid(1); @@ -737,6 +754,7 @@ public void testOrdinaryNodeFailure() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testCoordinatorNodeFailure() throws Exception { try { Ignite g1 = startGrid(1); @@ -766,6 +784,7 @@ public void testCoordinatorNodeFailure() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testMetricsSending() throws Exception { final AtomicBoolean stopping = new AtomicBoolean(); @@ -855,6 +874,7 @@ else if (id.equals(g2.cluster().localNode().id())) /** * @throws Exception If any error occurs. */ + @Test public void testFailBeforeNodeAddedSent() throws Exception { try { Ignite g1 = startGrid(1); @@ -900,6 +920,7 @@ else if (evt.type() == EVT_NODE_FAILED) /** * @throws Exception If any error occurs. 
*/ + @Test public void testFailBeforeNodeLeftSent() throws Exception { try { startGrid(1); @@ -941,6 +962,7 @@ public void testFailBeforeNodeLeftSent() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testIpFinderCleaning() throws Exception { try { ipFinder.registerAddresses(Arrays.asList(new InetSocketAddress("1.1.1.1", 1024), @@ -988,6 +1010,7 @@ public void testIpFinderCleaning() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testNonSharedIpFinder() throws Exception { try { GridTestUtils.runMultiThreadedAsync(new Callable() { @@ -1011,6 +1034,7 @@ public void testNonSharedIpFinder() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testMulticastIpFinder() throws Exception { try { for (int i = 0; i < 5; i++) { @@ -1043,6 +1067,7 @@ public void testMulticastIpFinder() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testInvalidAddressIpFinder() throws Exception { ipFinder.setShared(false); @@ -1069,6 +1094,7 @@ public void testInvalidAddressIpFinder() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testJoinTimeout() throws Exception { try { // This start will fail as expected. @@ -1090,6 +1116,7 @@ public void testJoinTimeout() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testJoinTimeoutForIpFinder() throws Exception { try { // This start will fail as expected. @@ -1124,6 +1151,7 @@ public void testJoinTimeoutForIpFinder() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDirtyIpFinder() throws Exception { try { // Dirty IP finder @@ -1143,6 +1171,7 @@ public void testDirtyIpFinder() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testDuplicateId() throws Exception { try { // Random ID. 
@@ -1174,6 +1203,7 @@ public void testDuplicateId() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testLoopbackProblemFirstNodeOnLoopback() throws Exception { // On Windows and Mac machines two nodes can reside on the same port // (if one node has localHost="127.0.0.1" and another has localHost="0.0.0.0"). @@ -1206,6 +1236,7 @@ public void testLoopbackProblemFirstNodeOnLoopback() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testLoopbackProblemSecondNodeOnLoopback() throws Exception { if (U.isWindows() || U.isMacOs()) return; @@ -1235,6 +1266,7 @@ public void testLoopbackProblemSecondNodeOnLoopback() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testGridStartTime() throws Exception { try { startGridsMultiThreaded(5); @@ -1281,6 +1313,7 @@ public void testGridStartTime() throws Exception { /** * @throws Exception If failed */ + @Test public void testCustomEventRace1_1() throws Exception { try { customEventRace1(true, false); @@ -1293,6 +1326,7 @@ public void testCustomEventRace1_1() throws Exception { /** * @throws Exception If failed */ + @Test public void testCustomEventRace1_2() throws Exception { try { customEventRace1(false, false); @@ -1305,6 +1339,7 @@ public void testCustomEventRace1_2() throws Exception { /** * @throws Exception If failed */ + @Test public void testCustomEventRace1_3() throws Exception { try { customEventRace1(true, true); @@ -1406,6 +1441,7 @@ private void customEventRace1(final boolean cacheStartFrom1, boolean stopCrd) th /** * @throws Exception If failed */ + @Test public void testCustomEventCoordinatorFailure1() throws Exception { try { customEventCoordinatorFailure(true); @@ -1418,6 +1454,7 @@ public void testCustomEventCoordinatorFailure1() throws Exception { /** * @throws Exception If failed */ + @Test public void testCustomEventCoordinatorFailure2() throws Exception { try { customEventCoordinatorFailure(false); 
@@ -1430,6 +1467,7 @@ public void testCustomEventCoordinatorFailure2() throws Exception { /** * @throws Exception If failed */ + @Test public void testNodeShutdownOnRingMessageWorkerFailure() throws Exception { try { final TestMessageWorkerFailureSpi1 spi0 = new TestMessageWorkerFailureSpi1( @@ -1480,6 +1518,7 @@ public void testNodeShutdownOnRingMessageWorkerFailure() throws Exception { /** * @throws Exception If failed */ + @Test public void testNoRingMessageWorkerAbnormalFailureOnSegmentation() throws Exception { try { TestMessageWorkerFailureSpi1 spi1 = new TestMessageWorkerFailureSpi1( @@ -1563,6 +1602,7 @@ public void testNoRingMessageWorkerAbnormalFailureOnSegmentation() throws Except /** * @throws Exception If failed */ + @Test public void testNodeShutdownOnRingMessageWorkerStartNotFinished() throws Exception { try { Ignite ignite0 = startGrid(0); @@ -1672,6 +1712,7 @@ private void customEventCoordinatorFailure(boolean twoNodes) throws Exception { * * @throws Exception If failed. */ + @Test public void testFailedNodes1() throws Exception { try { final int FAIL_ORDER = 3; @@ -1716,6 +1757,7 @@ public void testFailedNodes1() throws Exception { * * @throws Exception If failed. */ + @Test public void testFailedNodes2() throws Exception { try { final int FAIL_ORDER = 3; @@ -1760,6 +1802,7 @@ public void testFailedNodes2() throws Exception { * * @throws Exception If failed. */ + @Test public void testFailedNodes3() throws Exception { try { nodeSpi.set(createFailedNodeSpi(-1)); @@ -1793,6 +1836,7 @@ public void testFailedNodes3() throws Exception { * * @throws Exception If failed. */ + @Test public void testFailedNodes4() throws Exception { try { final int FAIL_ORDER = 3; @@ -1836,6 +1880,7 @@ public void testFailedNodes4() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testFailedNodes5() throws Exception { try { ThreadLocalRandom rnd = ThreadLocalRandom.current(); @@ -1892,6 +1937,7 @@ public void testFailedNodes5() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCustomEventAckNotSend() throws Exception { try { TestCustomerEventAckSpi spi0 = new TestCustomerEventAckSpi(); @@ -1920,6 +1966,7 @@ public void testCustomEventAckNotSend() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDiscoveryEventsDiscard() throws Exception { try { TestEventDiscardSpi spi = new TestEventDiscardSpi(); @@ -1954,6 +2001,7 @@ public void testDiscoveryEventsDiscard() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoExtraNodeFailedMessage() throws Exception { try { final int NODES = 10; @@ -2002,6 +2050,7 @@ public void testNoExtraNodeFailedMessage() throws Exception { /** * Test verifies Ignite nodes don't exchange system types on discovery phase but only user types. */ + @Test public void testSystemMarshallerTypesFilteredOut() throws Exception { try { nodeSpi.set(new TestTcpDiscoveryMarshallerDataSpi()); @@ -2035,6 +2084,7 @@ public void testSystemMarshallerTypesFilteredOut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDuplicatedDiscoveryDataRemoved() throws Exception { try { TestDiscoveryDataDuplicateSpi.checkNodeAdded = false; @@ -2092,6 +2142,7 @@ public void testDuplicatedDiscoveryDataRemoved() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFailedNodeRestoreConnection() throws Exception { try { TestRestoreConnectedSpi.startTest = false; diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySnapshotHistoryTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySnapshotHistoryTest.java index b55473ca80d80..65303a946195a 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySnapshotHistoryTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySnapshotHistoryTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.Collections; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.DFLT_TOP_HISTORY_SIZE; /** * Tests for topology snapshots history. */ +@RunWith(JUnit4.class) public class TcpDiscoverySnapshotHistoryTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -52,6 +56,7 @@ public class TcpDiscoverySnapshotHistoryTest extends GridCommonAbstractTest { /** * @throws Exception If any error occurs. */ + @Test public void testHistorySupported() throws Exception { try { final Ignite g = startGrid(); @@ -72,6 +77,7 @@ public void testHistorySupported() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testSettingNewTopologyHistorySize() throws Exception { try { final Ignite g = startGrid(); @@ -96,6 +102,7 @@ public void testSettingNewTopologyHistorySize() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testNodeAdded() throws Exception { try { // Add grid #1 @@ -129,6 +136,7 @@ public void testNodeAdded() throws Exception { /** * @throws Exception If any error occurs. 
*/ + @Test public void testNodeAddedAndRemoved() throws Exception { try { // Add grid #1 @@ -174,4 +182,4 @@ private static void assertTopVer(long expTopVer, Ignite... ignites) { for (Ignite g : ignites) assertEquals("Grid has wrong topology version.", expTopVer, g.cluster().topologyVersion()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiConfigSelfTest.java index ea1bb27db088a..850ca1fa11fd2 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiConfigSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiConfigSelfTest.java @@ -21,15 +21,20 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridSpiTest(spi = TcpDiscoverySpi.class, group = "Discovery SPI") +@RunWith(JUnit4.class) public class TcpDiscoverySpiConfigSelfTest extends GridSpiAbstractConfigTest { /** * @throws Exception If failed. */ + @Test public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new TcpDiscoverySpi(), "ipFinder", null); checkNegativeSpiProperty(new TcpDiscoverySpi(), "ipFinderCleanFrequency", 0); @@ -47,6 +52,7 @@ public void testNegativeConfig() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLocalPortRange() throws Exception { try { IgniteConfiguration cfg = getConfiguration(); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiFailureTimeoutSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiFailureTimeoutSelfTest.java index a760e2e06b87d..56324a79b4725 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiFailureTimeoutSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiFailureTimeoutSelfTest.java @@ -34,10 +34,14 @@ import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryConnectionCheckMessage; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryPingRequest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class TcpDiscoverySpiFailureTimeoutSelfTest extends AbstractDiscoverySelfTest { /** */ private static final int SPI_COUNT = 6; @@ -84,6 +88,7 @@ public class TcpDiscoverySpiFailureTimeoutSelfTest extends AbstractDiscoverySelf /** * @throws Exception In case of error. */ + @Test public void testFailureDetectionTimeoutEnabled() throws Exception { assertTrue(firstSpi().failureDetectionTimeoutEnabled()); assertTrue(secondSpi().failureDetectionTimeoutEnabled()); @@ -102,6 +107,7 @@ public void testFailureDetectionTimeoutEnabled() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testFailureDetectionTimeoutDisabled() throws Exception { for (int i = 2; i < spis.size(); i++) { assertFalse(((TcpDiscoverySpi)spis.get(i)).failureDetectionTimeoutEnabled()); @@ -113,6 +119,7 @@ public void testFailureDetectionTimeoutDisabled() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testFailureDetectionOnSocketOpen() throws Exception { try { ClusterNode node = secondSpi().getLocalNode(); @@ -139,6 +146,7 @@ public void testFailureDetectionOnSocketOpen() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testFailureDetectionOnSocketWrite() throws Exception { try { ClusterNode node = secondSpi().getLocalNode(); @@ -161,6 +169,7 @@ public void testFailureDetectionOnSocketWrite() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testConnectionCheckMessage() throws Exception { TestTcpDiscoverySpi nextSpi = null; diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiMBeanTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiMBeanTest.java index 53ddfa015f31b..3a4d7cfad03e2 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiMBeanTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiMBeanTest.java @@ -27,10 +27,14 @@ import javax.management.MBeanServer; import javax.management.ObjectName; import java.lang.management.ManagementFactory; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests TcpDiscoverySpiMBean. */ +@RunWith(JUnit4.class) public class TcpDiscoverySpiMBeanTest extends GridCommonAbstractTest { /** */ private GridStringLogger strLog = new GridStringLogger(); @@ -54,6 +58,7 @@ public class TcpDiscoverySpiMBeanTest extends GridCommonAbstractTest { * * @throws Exception if fails. 
*/ + @Test public void testMBean() throws Exception { startGrids(3); diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiRandomStartStopTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiRandomStartStopTest.java index 7edfa6b4e0bc5..b2570bb86c386 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiRandomStartStopTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiRandomStartStopTest.java @@ -18,8 +18,6 @@ package org.apache.ignite.spi.discovery.tcp; import org.apache.ignite.spi.discovery.AbstractDiscoveryRandomStartStopTest; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.spi.GridSpiTest; /** @@ -28,18 +26,8 @@ @GridSpiTest(spi = TcpDiscoverySpi.class, group = "Discovery SPI") public class TcpDiscoverySpiRandomStartStopTest extends AbstractDiscoveryRandomStartStopTest { - /** */ - private TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected int getMaxInterval() { return 10; } - - /** {@inheritDoc} */ - @Override protected void spiConfigure(TcpDiscoverySpi spi) throws Exception { - super.spiConfigure(spi); - - spi.setIpFinder(ipFinder); - } } \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiReconnectDelayTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiReconnectDelayTest.java index 89df32c98f81e..7d9daf9fb7a18 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiReconnectDelayTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiReconnectDelayTest.java @@ -36,6 +36,9 @@ import 
org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryClientReconnectMessage; import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryJoinRequestMessage; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.events.EventType.EVT_CLIENT_NODE_DISCONNECTED; @@ -47,6 +50,7 @@ /** * Test for {@link TcpDiscoverySpi#setReconnectDelay(int)}. */ +@RunWith(JUnit4.class) public class TcpDiscoverySpiReconnectDelayTest extends GridCommonAbstractTest { /** Time to wait for events. */ private static final int EVT_TIMEOUT = 120000; @@ -61,11 +65,13 @@ public class TcpDiscoverySpiReconnectDelayTest extends GridCommonAbstractTest { //region Client joins after failNode() /** */ + @Test public void testClientJoinAfterFailureShortTimeout() throws Exception { checkClientJoinAfterNodeFailure(5, 500); } /** */ + @Test public void testClientJoinAfterFailureLongTimeout() throws Exception { checkClientJoinAfterNodeFailure(3, 5000); } @@ -157,11 +163,13 @@ else if (evt.type() == EVT_CLIENT_NODE_RECONNECTED) //region Client joins after brakeConnection() /** */ + @Test public void testClientJoinAfterSocketClosedShortTimeout() throws Exception { checkClientJoinAfterSocketClosed(5, 500); } /** */ + @Test public void testClientJoinAfterSocketClosedLongTimeout() throws Exception { checkClientJoinAfterSocketClosed(3, 5000); } @@ -221,21 +229,25 @@ private void checkClientJoinAfterSocketClosed(int numOfFailedRequests, int recon //region Client joins at start /** */ + @Test public void testClientJoinAtStartShortTimeout() throws Exception { checkClientJoinAtStart(5, 500); } /** */ + @Test public void testClientJoinAtStartLongTimeout() throws Exception { checkClientJoinAtStart(3, 5000); } /** */ + @Test public void testServerJoinAtStartShortTimeout() throws Exception { 
checkServerJoinAtStart(5, 500); } /** */ + @Test public void testServerJoinAtStartLongTimeout() throws Exception { checkServerJoinAtStart(3, 5000); } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiWildcardSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiWildcardSelfTest.java index 3d2f2431ff61e..7651d396eb277 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiWildcardSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpiWildcardSelfTest.java @@ -18,17 +18,16 @@ package org.apache.ignite.spi.discovery.tcp; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class TcpDiscoverySpiWildcardSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 5; @@ -36,11 +35,6 @@ public class TcpDiscoverySpiWildcardSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); cfg.setLocalHost(null); return cfg; @@ -49,6 +43,7 @@ public class TcpDiscoverySpiWildcardSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testTopology() throws Exception { try { startGridsMultiThreaded(NODES); @@ -60,4 +55,4 @@ public void testTopology() throws Exception { stopAllGrids(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslParametersTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslParametersTest.java index f2fc2780de9ef..3fd4a1b4204c7 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslParametersTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslParametersTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.ssl.SslContextFactory; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests cases when node connects to cluster with different set of cipher suites. */ +@RunWith(JUnit4.class) public class TcpDiscoverySslParametersTest extends GridCommonAbstractTest { /** */ @@ -59,6 +63,7 @@ public class TcpDiscoverySslParametersTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSameCipherSuite() throws Exception { checkDiscoverySuccess( new String[][] { @@ -80,6 +85,7 @@ public void testSameCipherSuite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOneCommonCipherSuite() throws Exception { checkDiscoverySuccess( new String[][] { @@ -99,6 +105,7 @@ public void testOneCommonCipherSuite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoCommonCipherSuite() throws Exception { checkDiscoveryFailure( new String[][] { @@ -117,6 +124,7 @@ public void testNoCommonCipherSuite() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNonExistentCipherSuite() throws Exception { checkDiscoveryFailure( new String[][] { @@ -131,13 +139,15 @@ public void testNonExistentCipherSuite() throws Exception { }, null, IgniteCheckedException.class, - "Unsupported ciphersuite" + // Java 8 has "Unsupported ciphersuite", Java 11 has "Unsupported CipherSuite" + "Unsupported" ); } /** * @throws Exception If failed. */ + @Test public void testNoCommonProtocols() throws Exception { checkDiscoveryFailure( null, @@ -157,6 +167,7 @@ public void testNoCommonProtocols() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNonExistentProtocol() throws Exception { checkDiscoveryFailure( null, @@ -177,6 +188,7 @@ public void testNonExistentProtocol() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSameProtocols() throws Exception { checkDiscoverySuccess(null, new String[][] { @@ -195,6 +207,7 @@ public void testSameProtocols() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOneCommonProtocol() throws Exception { checkDiscoverySuccess(null, new String[][] { diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslSecuredUnsecuredTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslSecuredUnsecuredTest.java index ca34f779760cf..6803dddd38954 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslSecuredUnsecuredTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslSecuredUnsecuredTest.java @@ -29,11 +29,15 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests cases when node connects to cluster with different SSL configuration. 
* Exception with meaningful message should be thrown. */ +@RunWith(JUnit4.class) public class TcpDiscoverySslSecuredUnsecuredTest extends GridCommonAbstractTest { /** */ private volatile TcpDiscoverySpi spi; @@ -66,6 +70,7 @@ public class TcpDiscoverySslSecuredUnsecuredTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testSecuredUnsecuredServerConnection() throws Exception { checkConnection("plain-server", "ssl-server"); } @@ -73,6 +78,7 @@ public void testSecuredUnsecuredServerConnection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUnsecuredSecuredServerConnection() throws Exception { checkConnection("ssl-server", "plain-server"); } @@ -80,6 +86,7 @@ public void testUnsecuredSecuredServerConnection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSecuredClientUnsecuredServerConnection() throws Exception { checkConnection("plain-server", "ssl-client"); } @@ -87,6 +94,7 @@ public void testSecuredClientUnsecuredServerConnection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUnsecuredClientSecuredServerConnection() throws Exception { checkConnection("ssl-server", "plain-client"); } @@ -94,6 +102,7 @@ public void testUnsecuredClientSecuredServerConnection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPlainServerNodesRestart() throws Exception { checkNodesRestart("plain-server-1", "plain-server-2"); } @@ -101,6 +110,7 @@ public void testPlainServerNodesRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSslServerNodesRestart() throws Exception { checkNodesRestart("ssl-server-1", "ssl-server-2"); } @@ -108,6 +118,7 @@ public void testSslServerNodesRestart() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPlainClientNodesRestart() throws Exception { checkNodesRestart("plain-server", "plain-client"); } @@ -115,6 +126,7 @@ public void testPlainClientNodesRestart() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSslClientNodesRestart() throws Exception { checkNodesRestart("ssl-server", "ssl-client"); } diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslTrustedUntrustedTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslTrustedUntrustedTest.java index e1c6755380d48..1357b1d7be6d6 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslTrustedUntrustedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySslTrustedUntrustedTest.java @@ -22,11 +22,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests cases when node connects to cluster with different SSL configuration. * Exception with meaningful message should be thrown. */ +@RunWith(JUnit4.class) public class TcpDiscoverySslTrustedUntrustedTest extends GridCommonAbstractTest { /** */ private volatile String keyStore; @@ -50,6 +54,7 @@ public class TcpDiscoverySslTrustedUntrustedTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testSameKey() throws Exception { checkDiscoverySuccess("node01", "trustone", "node01", "trustone"); } @@ -57,6 +62,7 @@ public void testSameKey() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDifferentKeys() throws Exception { checkDiscoverySuccess("node02", "trusttwo", "node03", "trusttwo"); } @@ -64,6 +70,7 @@ public void testDifferentKeys() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBothTrusts() throws Exception { checkDiscoverySuccess("node01", "trustboth", "node02", "trustboth", "node03", "trustboth"); } @@ -71,6 +78,7 @@ public void testBothTrusts() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDifferentCa() throws Exception { checkDiscoveryFailure("node01", "trustone", "node02", "trusttwo"); } @@ -78,6 +86,7 @@ public void testDifferentCa() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWrongCa() throws Exception { checkDiscoveryFailure("node02", "trustone", "node03", "trustone"); } @@ -85,6 +94,7 @@ public void testWrongCa() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMismatchingCaSecond() throws Exception { checkDiscoveryFailure("node01", "trustboth", "node03", "trusttwo"); } @@ -92,6 +102,7 @@ public void testMismatchingCaSecond() throws Exception { /** * @throws Exception If failed. 
      */
+    @Test
     public void testMismatchingCaFirst() throws Exception {
         checkDiscoveryFailure("node02", "trusttwo", "node01", "trustboth");
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryWithWrongServerTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryWithWrongServerTest.java
index ffd0d030c643e..50e4cf255aa0a 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryWithWrongServerTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoveryWithWrongServerTest.java
@@ -36,10 +36,14 @@
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestThread;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Client-based discovery SPI test with non-Ignite servers.
  */
+@RunWith(JUnit4.class)
 public class TcpDiscoveryWithWrongServerTest extends GridCommonAbstractTest {
     /** Non-Ignite Server port #1. */
     private final static int SERVER_PORT = 47500;
@@ -123,6 +127,7 @@ private void stopTcpThreads() throws IOException {
      *
      * @throws Exception in case of error.
      */
+    @Test
     public void testWrongHandshakeResponse() throws Exception {
         startTcpThread(new SomeResponseWorker(), SERVER_PORT);
         startTcpThread(new SomeResponseWorker(), LAST_SERVER_PORT);
@@ -135,6 +140,7 @@ public void testWrongHandshakeResponse() throws Exception {
      *
      * @throws Exception in case of error.
      */
+    @Test
     public void testNoHandshakeResponse() throws Exception {
         startTcpThread(new NoResponseWorker(), SERVER_PORT);
         startTcpThread(new NoResponseWorker(), LAST_SERVER_PORT);
@@ -147,6 +153,7 @@ public void testNoHandshakeResponse() throws Exception {
      *
      * @throws Exception in case of error.
      */
+    @Test
     public void testDisconnectOnRequest() throws Exception {
         startTcpThread(new DisconnectOnRequestWorker(), SERVER_PORT);
         startTcpThread(new DisconnectOnRequestWorker(), LAST_SERVER_PORT);
@@ -159,6 +166,7 @@ public void testDisconnectOnRequest() throws Exception {
      *
      * @throws Exception in case of error.
      */
+    @Test
     public void testEarlyDisconnect() throws Exception {
         startTcpThread(new EarlyDisconnectWorker(), SERVER_PORT);
         startTcpThread(new EarlyDisconnectWorker(), LAST_SERVER_PORT);
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/TcpDiscoveryIpFinderAbstractSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/TcpDiscoveryIpFinderAbstractSelfTest.java
index 465b38dbe7859..f57266555f2d0 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/TcpDiscoveryIpFinderAbstractSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/TcpDiscoveryIpFinderAbstractSelfTest.java
@@ -28,10 +28,14 @@
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.resources.LoggerResource;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Abstract test for ip finder.
  */
+@RunWith(JUnit4.class)
 public abstract class TcpDiscoveryIpFinderAbstractSelfTest extends GridCommonAbstractTest {
     /** */
@@ -64,6 +68,7 @@ protected TcpDiscoveryIpFinderAbstractSelfTest() throws Exception {
     /**
      * @throws Exception If any error occurs.
      */
+    @Test
     public void testIpFinder() throws Exception {
         finder.initializeLocalAddresses(Arrays.asList(new InetSocketAddress(InetAddress.getLocalHost(), 1000)));
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/jdbc/TcpDiscoveryJdbcIpFinderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/jdbc/TcpDiscoveryJdbcIpFinderSelfTest.java
index 37352e16e8e2a..9d4e4d347bcfc 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/jdbc/TcpDiscoveryJdbcIpFinderSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/jdbc/TcpDiscoveryJdbcIpFinderSelfTest.java
@@ -20,10 +20,14 @@
 import com.mchange.v2.c3p0.ComboPooledDataSource;
 import org.apache.ignite.spi.IgniteSpiException;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAbstractSelfTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * JDBC IP finder self test.
  */
+@RunWith(JUnit4.class)
 public class TcpDiscoveryJdbcIpFinderSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest {
     /** */
@@ -66,6 +70,7 @@ public TcpDiscoveryJdbcIpFinderSelfTest() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInitSchemaFlag() throws Exception {
         initSchema = false;
@@ -87,4 +92,4 @@ public void testInitSchemaFlag() throws Exception {
 
         dataSrc.close();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/multicast/TcpDiscoveryMulticastIpFinderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/multicast/TcpDiscoveryMulticastIpFinderSelfTest.java
index 29ed595f38189..a7a2f888fda5d 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/multicast/TcpDiscoveryMulticastIpFinderSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/multicast/TcpDiscoveryMulticastIpFinderSelfTest.java
@@ -22,10 +22,14 @@
 import java.util.Collections;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAbstractSelfTest;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * GridTcpDiscoveryMulticastIpFinder test.
  */
+@RunWith(JUnit4.class)
 public class TcpDiscoveryMulticastIpFinderSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest {
     /**
@@ -48,7 +52,8 @@ public TcpDiscoveryMulticastIpFinderSelfTest() throws Exception {
     /**
      * @throws Exception If failed.
      */
-    @SuppressWarnings({"TooBroadScope", "BusyWait"})
+    @SuppressWarnings({"TooBroadScope"})
+    @Test
     public void testExchange() throws Exception {
         String locAddr = null;
@@ -131,4 +136,4 @@ private void checkRequestAddresses(TcpDiscoveryMulticastIpFinder ipFinder, int e
         assertEquals(exp, ipFinder.getRegisteredAddresses().size());
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/sharedfs/TcpDiscoverySharedFsIpFinderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/sharedfs/TcpDiscoverySharedFsIpFinderSelfTest.java
index e065c8f376c33..152a9c055ce21 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/sharedfs/TcpDiscoverySharedFsIpFinderSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/sharedfs/TcpDiscoverySharedFsIpFinderSelfTest.java
@@ -24,10 +24,14 @@
 import java.util.List;
 import java.util.UUID;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAbstractSelfTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * GridTcpDiscoverySharedFsIpFinder test.
  */
+@RunWith(JUnit4.class)
 public class TcpDiscoverySharedFsIpFinderSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest {
     /**
@@ -60,6 +64,7 @@ public TcpDiscoverySharedFsIpFinderSelfTest() throws Exception {
     /**
      * @throws Exception If any error occurs.
      */
+    @Test
     public void testUniqueNames() throws Exception {
         InetSocketAddress node1 = new InetSocketAddress("10.7.7.7", 4343);
         InetAddress ia = InetAddress.getByAddress("localhost", new byte[] {10, 7, 7, 7});
@@ -77,4 +82,4 @@ public void testUniqueNames() throws Exception {
 
         finder.close();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/vm/TcpDiscoveryVmIpFinderSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/vm/TcpDiscoveryVmIpFinderSelfTest.java
index acc12c29142e2..9ca8e218237a4 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/vm/TcpDiscoveryVmIpFinderSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/vm/TcpDiscoveryVmIpFinderSelfTest.java
@@ -27,12 +27,14 @@
 import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAbstractSelfTest;
 import org.apache.ignite.testframework.GridTestUtils;
-
-import static org.apache.ignite.internal.processors.cache.binary.GridCacheBinaryObjectsAbstractSelfTest.IP_FINDER;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * GridTcpDiscoveryVmIpFinder test.
  */
+@RunWith(JUnit4.class)
 public class TcpDiscoveryVmIpFinderSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest {
     /**
@@ -56,6 +58,7 @@ public TcpDiscoveryVmIpFinderSelfTest() throws Exception {
     /**
      * @throws Exception If any error occurs.
      */
+    @Test
     public void testAddressesInitialization() throws Exception {
         TcpDiscoveryVmIpFinder finder = ipFinder();
@@ -125,6 +128,7 @@ public void testAddressesInitialization() throws Exception {
     /**
      * @throws Exception If any error occurs.
      */
+    @Test
     public void testIpV6AddressesInitialization() throws Exception {
         TcpDiscoveryVmIpFinder finder = ipFinder();
@@ -202,30 +206,31 @@ public void testIpV6AddressesInitialization() throws Exception {
     /**
      *
      */
+    @Test
     public void testUnregistration() throws Exception {
         Ignition.start(config("server1", false, false));
 
-        int srvSize = IP_FINDER.getRegisteredAddresses().size();
+        int srvSize = sharedStaticIpFinder.getRegisteredAddresses().size();
 
         Ignition.start(config("server2", false, false));
         Ignition.start(config("client1", true, false));
 
-        assertEquals(2 * srvSize, IP_FINDER.getRegisteredAddresses().size());
+        assertEquals(2 * srvSize, sharedStaticIpFinder.getRegisteredAddresses().size());
 
         Ignition.start(config("client2", true, false));
         Ignition.start(config("client3", true, false));
 
-        assertEquals(2 * srvSize, IP_FINDER.getRegisteredAddresses().size());
+        assertEquals(2 * srvSize, sharedStaticIpFinder.getRegisteredAddresses().size());
 
         Ignition.start(config("client4", true, true));
 
-        assertEquals(3 * srvSize, IP_FINDER.getRegisteredAddresses().size());
+        assertEquals(3 * srvSize, sharedStaticIpFinder.getRegisteredAddresses().size());
 
         Ignition.stop("client1", true);
         Ignition.stop("client2", true);
         Ignition.stop("client3", true);
 
-        assertEquals(3 * srvSize, IP_FINDER.getRegisteredAddresses().size());
+        assertEquals(3 * srvSize, sharedStaticIpFinder.getRegisteredAddresses().size());
 
         Ignition.stop("client4", true);
@@ -246,7 +251,7 @@ public void testUnregistration() throws Exception {
         assertTrue(res);
 
-        assertTrue(3 * srvSize >= IP_FINDER.getRegisteredAddresses().size());
+        assertTrue(3 * srvSize >= sharedStaticIpFinder.getRegisteredAddresses().size());
     }
 
     /**
@@ -262,10 +267,10 @@ private static IgniteConfiguration config(String name, boolean client, boolean f
         TcpDiscoverySpi disco = new TcpDiscoverySpi();
 
         disco.setForceServerMode(forceServerMode);
-        disco.setIpFinder(IP_FINDER);
+        disco.setIpFinder(sharedStaticIpFinder);
 
         cfg.setDiscoverySpi(disco);
 
         return cfg;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/encryption/KeystoreEncryptionSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/encryption/KeystoreEncryptionSpiSelfTest.java
new file mode 100644
index 0000000000000..573c5c27acc05
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/spi/encryption/KeystoreEncryptionSpiSelfTest.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.spi.encryption;
+
+import java.nio.ByteBuffer;
+import org.apache.ignite.IgniteException;
+import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionKey;
+import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi;
+import org.apache.ignite.testframework.GridTestUtils;
+import org.jetbrains.annotations.NotNull;
+import org.junit.Test;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.ignite.internal.encryption.AbstractEncryptionTest.KEYSTORE_PASSWORD;
+import static org.apache.ignite.internal.encryption.AbstractEncryptionTest.KEYSTORE_PATH;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+/** */
+public class KeystoreEncryptionSpiSelfTest {
+    /** @throws Exception If failed. */
+    @Test
+    public void testCantStartWithEmptyParam() throws Exception {
+        GridTestUtils.assertThrowsWithCause(() -> {
+            EncryptionSpi encSpi = new KeystoreEncryptionSpi();
+
+            encSpi.spiStart("default");
+        }, IgniteException.class);
+    }
+
+    /** @throws Exception If failed. */
+    @Test
+    public void testCantStartWithoutPassword() throws Exception {
+        GridTestUtils.assertThrowsWithCause(() -> {
+            KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();
+
+            encSpi.setKeyStorePath("/ignite/is/cool/path/doesnt/exists");
+
+            encSpi.spiStart("default");
+        }, IgniteException.class);
+    }
+
+    /** @throws Exception If failed. */
+    @Test
+    public void testCantStartKeystoreDoesntExists() throws Exception {
+        GridTestUtils.assertThrowsWithCause(() -> {
+            KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();
+
+            encSpi.setKeyStorePath("/ignite/is/cool/path/doesnt/exists");
+            encSpi.setKeyStorePassword(KEYSTORE_PASSWORD.toCharArray());
+
+            encSpi.spiStart("default");
+        }, IgniteException.class);
+    }
+
+    /** @throws Exception If failed.
+     */
+    @Test
+    public void testEncryptDecrypt() throws Exception {
+        EncryptionSpi encSpi = spi();
+
+        KeystoreEncryptionKey k = GridTestUtils.getFieldValue(encSpi, "masterKey");
+
+        assertNotNull(k);
+        assertNotNull(k.key());
+
+        byte[] plainText = "Just a test string to encrypt!".getBytes(UTF_8);
+        byte[] cipherText = new byte[spi().encryptedSize(plainText.length)];
+
+        encSpi.encrypt(ByteBuffer.wrap(plainText), k, ByteBuffer.wrap(cipherText));
+
+        assertNotNull(cipherText);
+        assertEquals(encSpi.encryptedSize(plainText.length), cipherText.length);
+
+        byte[] decryptedText = encSpi.decrypt(cipherText, k);
+
+        assertNotNull(decryptedText);
+        assertEquals(plainText.length, decryptedText.length);
+
+        assertEquals(new String(plainText, UTF_8), new String(decryptedText, UTF_8));
+    }
+
+    /** @throws Exception If failed. */
+    @Test
+    public void testKeyEncryptDecrypt() throws Exception {
+        EncryptionSpi encSpi = spi();
+
+        KeystoreEncryptionKey k = (KeystoreEncryptionKey)encSpi.create();
+
+        assertNotNull(k);
+        assertNotNull(k.key());
+
+        byte[] encGrpKey = encSpi.encryptKey(k);
+
+        assertNotNull(encGrpKey);
+        assertTrue(encGrpKey.length > 0);
+
+        KeystoreEncryptionKey k2 = (KeystoreEncryptionKey)encSpi.decryptKey(encGrpKey);
+
+        assertEquals(k.key(), k2.key());
+    }
+
+    /** */
+    @NotNull private EncryptionSpi spi() throws Exception {
+        KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();
+
+        encSpi.setKeyStorePath(KEYSTORE_PATH);
+        encSpi.setKeyStorePassword(KEYSTORE_PASSWORD.toCharArray());
+
+        GridTestUtils.invoke(encSpi, "onBeforeStart");
+
+        encSpi.spiStart("default");
+
+        return encSpi;
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageMultiThreadedSelfTest.java
index f4395e8cccbb9..af99a0a5fecd4 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageMultiThreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageMultiThreadedSelfTest.java
@@ -25,15 +25,20 @@
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Memory event storage load test.
  */
 @GridSpiTest(spi = MemoryEventStorageSpi.class, group = "EventStorage SPI")
+@RunWith(JUnit4.class)
 public class GridMemoryEventStorageMultiThreadedSelfTest extends GridSpiAbstractTest<MemoryEventStorageSpi> {
     /**
      * @throws Exception If test failed
      */
+    @Test
     public void testMultiThreaded() throws Exception {
         GridTestUtils.runMultiThreaded(new Callable<Object>() {
             @Override public Object call() throws Exception {
@@ -50,4 +55,4 @@ public void testMultiThreaded() throws Exception {
 
         assert evts.size() <= 10000 : "Incorrect number of events: " + evts.size();
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiConfigSelfTest.java
index 3c82ffdb0c317..44b4c0af3fcad 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiConfigSelfTest.java
@@ -19,17 +19,22 @@
 
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Memory event storage SPI config test.
  */
 @GridSpiTest(spi = MemoryEventStorageSpi.class, group = "Event Storage SPI")
+@RunWith(JUnit4.class)
 public class GridMemoryEventStorageSpiConfigSelfTest extends GridSpiAbstractConfigTest<MemoryEventStorageSpi> {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(new MemoryEventStorageSpi(), "expireCount", 0);
         checkNegativeSpiProperty(new MemoryEventStorageSpi(), "expireAgeMs", 0);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiSelfTest.java
index 0eb2df8be3c53..14cadb2b2f5ca 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/eventstorage/memory/GridMemoryEventStorageSpiSelfTest.java
@@ -25,6 +25,9 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED;
 
@@ -32,6 +35,7 @@
  * Tests for {@link MemoryEventStorageSpi}.
  */
 @GridSpiTest(spi = MemoryEventStorageSpi.class, group = "Event Storage SPI")
+@RunWith(JUnit4.class)
 public class GridMemoryEventStorageSpiSelfTest extends GridSpiAbstractTest<MemoryEventStorageSpi> {
     /** */
     private static final int EXPIRE_CNT = 100;
@@ -55,6 +59,7 @@ public long getExpireAgeMs() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMemoryEventStorage() throws Exception {
         MemoryEventStorageSpi spi = getSpi();
@@ -101,7 +106,7 @@ public void testMemoryEventStorage() throws Exception {
     /**
      * @throws Exception If failed.
      */
-    @SuppressWarnings({"NullableProblems"})
+    @Test
     public void testFilter() throws Exception {
         MemoryEventStorageSpi spi = getSpi();
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiConfigSelfTest.java
index 20d2fda21fb1b..8be996b54b481 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiConfigSelfTest.java
@@ -19,16 +19,21 @@
 
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Always-failover SPI config test.
  */
 @GridSpiTest(spi = AlwaysFailoverSpi.class, group = "Collision SPI")
+@RunWith(JUnit4.class)
 public class GridAlwaysFailoverSpiConfigSelfTest extends GridSpiAbstractConfigTest<AlwaysFailoverSpi> {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(new AlwaysFailoverSpi(), "maximumFailoverAttempts", -1);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiSelfTest.java
index 98f2b077e7344..c22a277539fb8 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/failover/always/GridAlwaysFailoverSpiSelfTest.java
@@ -29,6 +29,9 @@
 import org.apache.ignite.testframework.GridTestNode;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.spi.failover.always.AlwaysFailoverSpi.FAILED_NODE_LIST_ATTR;
 
@@ -36,10 +39,12 @@
  * Always-failover SPI test.
  */
 @GridSpiTest(spi = AlwaysFailoverSpi.class, group = "Failover SPI")
+@RunWith(JUnit4.class)
 public class GridAlwaysFailoverSpiSelfTest extends GridSpiAbstractTest<AlwaysFailoverSpi> {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSingleNode() throws Exception {
         AlwaysFailoverSpi spi = getSpi();
@@ -57,7 +62,7 @@ public void testSingleNode() throws Exception {
     /**
      * @throws Exception If test failed.
      */
-    @SuppressWarnings("unchecked")
+    @Test
     public void testTwoNodes() throws Exception {
         AlwaysFailoverSpi spi = getSpi();
@@ -79,6 +84,7 @@ public void testTwoNodes() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testMaxAttempts() throws Exception {
         AlwaysFailoverSpi spi = getSpi();
@@ -119,4 +125,4 @@ private void checkFailedNodes(ComputeJobResult res, int cnt) {
         assert failedNodes != null;
         assert failedNodes.size() == cnt;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiConfigSelfTest.java
index 2e91bba4c8917..f6ab3a5ff25a6 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiConfigSelfTest.java
@@ -19,16 +19,21 @@
 
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Job stealing failover SPI config test.
  */
 @GridSpiTest(spi = JobStealingFailoverSpi.class, group = "Collision SPI")
+@RunWith(JUnit4.class)
 public class GridJobStealingFailoverSpiConfigSelfTest extends GridSpiAbstractConfigTest<JobStealingFailoverSpi> {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(new JobStealingFailoverSpi(), "maximumFailoverAttempts", -1);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiOneNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiOneNodeSelfTest.java
index abee6c52ab3b7..6ce79bfe885b2 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiOneNodeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiOneNodeSelfTest.java
@@ -30,11 +30,15 @@
 import org.apache.ignite.testframework.GridTestNode;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Job stealing failover SPI test for one node.
  */
 @GridSpiTest(spi = JobStealingFailoverSpi.class, group = "Failover SPI")
+@RunWith(JUnit4.class)
 public class GridJobStealingFailoverSpiOneNodeSelfTest extends GridSpiAbstractTest<JobStealingFailoverSpi> {
     /** {@inheritDoc} */
     @Override protected GridSpiTestContext initSpiContext() throws Exception {
@@ -69,6 +73,7 @@ private ClusterNode addSpiDependency(GridTestNode node) throws Exception {
     /**
      * @throws Exception If test failed.
     */
+    @Test
     public void testFailover() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -86,6 +91,7 @@ public void testFailover() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testNoFailover() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -96,4 +102,4 @@ public void testNoFailover() throws Exception {
 
         assert other == null;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiSelfTest.java
index 9a91375c8f129..4f7bc31ab0af6 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/failover/jobstealing/GridJobStealingFailoverSpiSelfTest.java
@@ -31,6 +31,9 @@
 import org.apache.ignite.testframework.GridTestNode;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_SPI_CLASS;
 import static org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi.THIEF_NODE_ATTR;
@@ -41,6 +44,7 @@
  * Self test for {@link JobStealingFailoverSpi} SPI.
  */
 @GridSpiTest(spi = JobStealingFailoverSpi.class, group = "Failover SPI")
+@RunWith(JUnit4.class)
 public class GridJobStealingFailoverSpiSelfTest extends GridSpiAbstractTest<JobStealingFailoverSpi> {
     /** {@inheritDoc} */
     @Override protected GridSpiTestContext initSpiContext() throws Exception {
@@ -76,6 +80,7 @@ private void addSpiDependency(GridTestNode node) throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testFailover() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -96,6 +101,7 @@ public void testFailover() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testMaxHopsExceeded() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -115,6 +121,7 @@ public void testMaxHopsExceeded() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testMaxHopsExceededThiefNotSet() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -132,6 +139,7 @@ public void testMaxHopsExceededThiefNotSet() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testNonZeroFailoverCount() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -154,6 +162,7 @@ public void testNonZeroFailoverCount() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testThiefNotInTopology() throws Exception {
         ClusterNode rmt = new GridTestNode(UUID.randomUUID());
@@ -175,6 +184,7 @@ public void testThiefNotInTopology() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testThiefEqualsVictim() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -196,6 +206,7 @@ public void testThiefEqualsVictim() throws Exception {
     /**
      * @throws Exception If test failed.
      */
+    @Test
     public void testThiefIdNotSet() throws Exception {
         ClusterNode rmt = getSpiContext().remoteNodes().iterator().next();
@@ -227,4 +238,4 @@ private void checkAttributes(ComputeJobContext ctx, ClusterNode failed, int fail
             assert failedSet.contains(failed.id());
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/failover/never/GridNeverFailoverSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/failover/never/GridNeverFailoverSpiSelfTest.java
index d1767cb6864c5..d2baefea9b538 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/failover/never/GridNeverFailoverSpiSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/failover/never/GridNeverFailoverSpiSelfTest.java
@@ -27,15 +27,20 @@
 import org.apache.ignite.testframework.GridTestNode;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Never failover SPI test.
  */
 @GridSpiTest(spi = NeverFailoverSpi.class, group = "Failover SPI")
+@RunWith(JUnit4.class)
 public class GridNeverFailoverSpiSelfTest extends GridSpiAbstractTest<NeverFailoverSpi> {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAlwaysNull() throws Exception {
         List<ClusterNode> nodes = new ArrayList<>();
@@ -46,4 +51,4 @@ public void testAlwaysNull() throws Exception {
         assert getSpi().failover(new GridFailoverTestContext(new GridTestTaskSession(), new GridTestJobResult(node)),
             nodes) == null;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiConfigSelfTest.java
index 7bbd2cb55abd1..9eb96ad65203a 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiConfigSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiConfigSelfTest.java
@@ -19,17 +19,22 @@
 
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  *
 */
 @GridSpiTest(spi = AdaptiveLoadBalancingSpi.class, group = "LoadBalancing SPI")
+@RunWith(JUnit4.class)
 public class GridAdaptiveLoadBalancingSpiConfigSelfTest extends GridSpiAbstractConfigTest<AdaptiveLoadBalancingSpi> {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testNegativeConfig() throws Exception {
         checkNegativeSpiProperty(new AdaptiveLoadBalancingSpi(), "loadProbe", null);
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiMultipleNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiMultipleNodeSelfTest.java
index 77dd26aac880c..58c4f4a47c76a 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiMultipleNodeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiMultipleNodeSelfTest.java
@@ -30,11 +30,15 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests adaptive load balancing SPI.
  */
 @GridSpiTest(spi = AdaptiveLoadBalancingSpi.class, group = "Load Balancing SPI")
+@RunWith(JUnit4.class)
 public class GridAdaptiveLoadBalancingSpiMultipleNodeSelfTest extends GridSpiAbstractTest<AdaptiveLoadBalancingSpi> {
     /** */
     private static final int RMT_NODE_CNT = 10;
@@ -72,6 +76,7 @@ public AdaptiveLoadProbe getLoadProbe() {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testWeights() throws Exception {
         // Seal it.
         List<ClusterNode> nodes = new ArrayList<>(getSpiContext().remoteNodes());
@@ -99,4 +104,4 @@ public void testWeights() throws Exception {
                 ", cnts[i+1]=" + cnts[i + 1] + ']';
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiSelfTest.java
index 3a377497c82d6..3675bb041950b 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/adaptive/GridAdaptiveLoadBalancingSpiSelfTest.java
@@ -30,11 +30,15 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests adaptive load balancing SPI.
  */
 @GridSpiTest(spi = AdaptiveLoadBalancingSpi.class, group = "Load Balancing SPI")
+@RunWith(JUnit4.class)
 public class GridAdaptiveLoadBalancingSpiSelfTest extends GridSpiAbstractTest<AdaptiveLoadBalancingSpi> {
     /** {@inheritDoc} */
     @Override protected GridSpiTestContext initSpiContext() throws Exception {
@@ -65,6 +69,7 @@ public AdaptiveLoadProbe getLoadProbe() {
      * @throws Exception If failed.
      */
     @SuppressWarnings({"ObjectEquality"})
+    @Test
     public void testSingleNodeZeroWeight() throws Exception {
         GridTestNode node = (GridTestNode)getSpiContext().nodes().iterator().next();
@@ -90,6 +95,7 @@ public void testSingleNodeZeroWeight() throws Exception {
     /**
      * @throws Exception If failed.
      */
     @SuppressWarnings({"ObjectEquality"})
+    @Test
     public void testSingleNodeSameSession() throws Exception {
         GridTestNode node = (GridTestNode)getSpiContext().nodes().iterator().next();
@@ -115,6 +121,7 @@ public void testSingleNodeSameSession() throws Exception {
     /**
      * @throws Exception If failed.
      */
     @SuppressWarnings({"ObjectEquality"})
+    @Test
     public void testSingleNodeDifferentSession() throws Exception {
         GridTestNode node = (GridTestNode)getSpiContext().nodes().iterator().next();
@@ -135,4 +142,4 @@ public void testSingleNodeDifferentSession() throws Exception {
 
         assert pick1 == pick2;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/internal/GridInternalTasksLoadBalancingSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/internal/GridInternalTasksLoadBalancingSelfTest.java
index 26f4a3b98c828..59c1df6036ef8 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/internal/GridInternalTasksLoadBalancingSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/internal/GridInternalTasksLoadBalancingSelfTest.java
@@ -42,10 +42,14 @@
 import org.apache.ignite.spi.loadbalancing.LoadBalancingSpi;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test that will start two nodes with custom load balancing SPI and execute {@link GridInternal} task on it.
 */
+@RunWith(JUnit4.class)
 public class GridInternalTasksLoadBalancingSelfTest extends GridCommonAbstractTest {
     /** Grid count. */
     private static final int GRID_CNT = 2;
@@ -82,6 +86,7 @@ public class GridInternalTasksLoadBalancingSelfTest extends GridCommonAbstractTe
      *
      * @throws Exception In case of error.
      */
+    @Test
     public void testInternalTaskBalancing() throws Exception {
         customLoadBalancer = true;
@@ -113,6 +118,7 @@ public void testInternalTaskBalancing() throws Exception {
      *
      * @throws Exception In case of error.
      */
+    @Test
     public void testInternalTaskDefaultBalancing() throws Exception {
         customLoadBalancer = false;
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingNotPerTaskMultithreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingNotPerTaskMultithreadedSelfTest.java
index 79b1db46ae28a..f786d4e1e2f77 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingNotPerTaskMultithreadedSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingNotPerTaskMultithreadedSelfTest.java
@@ -33,11 +33,15 @@
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Multithreaded tests for global load balancer.
  */
 @GridSpiTest(spi = RoundRobinLoadBalancingSpi.class, group = "Load Balancing SPI")
+@RunWith(JUnit4.class)
 public class GridRoundRobinLoadBalancingNotPerTaskMultithreadedSelfTest
     extends GridSpiAbstractTest<RoundRobinLoadBalancingSpi> {
     /** Thread count. */
@@ -73,6 +77,7 @@ public boolean getPerTask() {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testMultipleTaskSessionsMultithreaded() throws Exception {
         final RoundRobinLoadBalancingSpi spi = getSpi();
@@ -126,4 +131,4 @@ public void testMultipleTaskSessionsMultithreaded() throws Exception {
             }
         }, THREAD_CNT, "balancer-test-worker");
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiLocalNodeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiLocalNodeSelfTest.java
index 9fd0e6ea9672a..5bbe8b2f939f0 100644
--- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiLocalNodeSelfTest.java
+++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiLocalNodeSelfTest.java
@@ -24,17 +24,22 @@
 import org.apache.ignite.lang.IgniteUuid;
 import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest;
 import org.apache.ignite.testframework.junits.spi.GridSpiTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests Round Robin load balancing for single node.
  */
 @GridSpiTest(spi = RoundRobinLoadBalancingSpi.class, group = "Load Balancing SPI", triggerDiscovery = true)
+@RunWith(JUnit4.class)
 public class GridRoundRobinLoadBalancingSpiLocalNodeSelfTest extends GridSpiAbstractTest<RoundRobinLoadBalancingSpi> {
     /**
      * @throws Exception If failed.
*/ @SuppressWarnings({"ObjectEquality"}) + @Test public void testLocalNode() throws Exception { assert getDiscoverySpi().getRemoteNodes().isEmpty(); @@ -51,4 +56,4 @@ public void testLocalNode() throws Exception { assert node == locNode; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest.java index 4eaba9b46ee99..fe18aec547786 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.testframework.GridSpiTestContext; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_TASK_FAILED; import static org.apache.ignite.events.EventType.EVT_TASK_FINISHED; @@ -37,6 +40,7 @@ * Tests round robin load balancing SPI. */ @GridSpiTest(spi = RoundRobinLoadBalancingSpi.class, group = "Load Balancing SPI") +@RunWith(JUnit4.class) public class GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest extends GridSpiAbstractTest { /** {@inheritDoc} */ @@ -60,6 +64,7 @@ public class GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest * @throws Exception If test failed. */ @SuppressWarnings({"ObjectEquality"}) + @Test public void testMultipleNodes() throws Exception { List allNodes = (List)getSpiContext().nodes(); @@ -89,6 +94,7 @@ public void testMultipleNodes() throws Exception { * @throws Exception If test failed. 
*/ @SuppressWarnings({"ObjectEquality"}) + @Test public void testMultipleTasks() throws Exception { ComputeTaskSession ses1 = new GridTestTaskSession(IgniteUuid.randomUuid()); ComputeTaskSession ses2 = new GridTestTaskSession(IgniteUuid.randomUuid()); @@ -135,4 +141,4 @@ public void testMultipleTasks() throws Exception { getSpiContext().triggerEvent(new TaskEvent( null, null, EVT_TASK_FAILED, ses2.getId(), null, null, false, null)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiNotPerTaskSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiNotPerTaskSelfTest.java index 90f04bdacd140..788536de00b08 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiNotPerTaskSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiNotPerTaskSelfTest.java @@ -33,6 +33,9 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_TASK_FAILED; import static org.apache.ignite.events.EventType.EVT_TASK_FINISHED; @@ -42,6 +45,7 @@ * Tests round robin load balancing. */ @GridSpiTest(spi = RoundRobinLoadBalancingSpi.class, group = "Load Balancing SPI") +@RunWith(JUnit4.class) public class GridRoundRobinLoadBalancingSpiNotPerTaskSelfTest extends GridSpiAbstractTest { /** @@ -70,6 +74,7 @@ public boolean getPerTask() { /** * @throws Exception If test failed. 
*/ + @Test public void testMultipleNodes() throws Exception { List allNodes = (List)getSpiContext().nodes(); @@ -85,6 +90,7 @@ public void testMultipleNodes() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testMultipleTaskSessions() throws Exception { ComputeTaskSession ses1 = new GridTestTaskSession(IgniteUuid.randomUuid()); ComputeTaskSession ses2 = new GridTestTaskSession(IgniteUuid.randomUuid()); @@ -106,6 +112,7 @@ public void testMultipleTaskSessions() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testBalancingOneNode() throws Exception { ComputeTaskSession ses = new GridTestTaskSession(); @@ -120,6 +127,7 @@ public void testBalancingOneNode() throws Exception { } /** */ + @Test public void testNodeNotInTopology() throws Exception { ComputeTaskSession ses = new GridTestTaskSession(); @@ -134,4 +142,4 @@ public void testNodeNotInTopology() throws Exception { assertTrue(e.getMessage().contains("Task topology does not have alive nodes")); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest.java index 3159e04c68093..b8a87a79148d7 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/roundrobin/GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.spi.loadbalancing.roundrobin.GridRoundRobinTestUtils.checkCyclicBalancing; @@ -36,6 +39,7 @@ * Tests round robin load balancing with topology changes. */ @GridSpiTest(spi = RoundRobinLoadBalancingSpi.class, group = "Load Balancing SPI") +@RunWith(JUnit4.class) public class GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest extends GridSpiAbstractTest { /** @@ -57,6 +61,7 @@ public class GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest /** * @throws Exception If failed. */ + @Test public void testTopologyChange() throws Exception { ComputeTaskSession ses = new GridTestTaskSession(IgniteUuid.randomUuid()); @@ -107,4 +112,4 @@ public void testTopologyChange() throws Exception { checkCyclicBalancing(getSpi(), allNodes, orderedNodes, ses); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiConfigSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiConfigSelfTest.java index 2ed7813b81511..36e1e5a442d81 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiConfigSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiConfigSelfTest.java @@ -19,17 +19,22 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractConfigTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @GridSpiTest(spi = WeightedRandomLoadBalancingSpi.class, group = "Load Balancing SPI") +@RunWith(JUnit4.class) public class GridWeightedRandomLoadBalancingSpiConfigSelfTest extends GridSpiAbstractConfigTest { /** * @throws Exception If failed. 
*/ + @Test public void testNegativeConfig() throws Exception { checkNegativeSpiProperty(new WeightedRandomLoadBalancingSpi(), "nodeWeight", 0); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiSelfTest.java index 3c1e95756208f..90f28077b48a5 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiSelfTest.java @@ -27,17 +27,22 @@ import org.apache.ignite.testframework.GridTestNode; import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Weighted random load balancing SPI. */ @GridSpiTest(spi = WeightedRandomLoadBalancingSpi.class, group = "Load Balancing SPI") +@RunWith(JUnit4.class) public class GridWeightedRandomLoadBalancingSpiSelfTest extends GridSpiAbstractTest { /** * @throws Exception If failed. */ @SuppressWarnings({"ObjectEquality"}) + @Test public void testSingleNode() throws Exception { List nodes = Collections.singletonList((ClusterNode)new GridTestNode(UUID.randomUUID())); @@ -54,6 +59,7 @@ public void testSingleNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultipleNodes() throws Exception { List nodes = new ArrayList<>(); @@ -68,4 +74,4 @@ public void testMultipleNodes() throws Exception { assert node != null; assert nodes.contains(node); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiWeightedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiWeightedSelfTest.java index 2a83d6fdd7801..e751be0de45c5 100644 --- a/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiWeightedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/spi/loadbalancing/weightedrandom/GridWeightedRandomLoadBalancingSpiWeightedSelfTest.java @@ -31,6 +31,9 @@ import org.apache.ignite.testframework.junits.spi.GridSpiAbstractTest; import org.apache.ignite.testframework.junits.spi.GridSpiTest; import org.apache.ignite.testframework.junits.spi.GridSpiTestConfig; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.spi.loadbalancing.weightedrandom.WeightedRandomLoadBalancingSpi.NODE_WEIGHT_ATTR_NAME; @@ -38,6 +41,7 @@ * {@link WeightedRandomLoadBalancingSpi} self test. */ @GridSpiTest(spi = WeightedRandomLoadBalancingSpi.class, group = "Load Balancing SPI") +@RunWith(JUnit4.class) public class GridWeightedRandomLoadBalancingSpiWeightedSelfTest extends GridSpiAbstractTest { /** @@ -51,6 +55,7 @@ public boolean getUseWeights() { /** * @throws Exception If test failed. 
*/ + @Test public void testWeights() throws Exception { List nodes = new ArrayList<>(); @@ -85,4 +90,4 @@ public void testWeights() throws Exception { info("Node counts: " + Arrays.toString(cnts)); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/startup/GridRandomCommandLineLoader.java b/modules/core/src/test/java/org/apache/ignite/startup/GridRandomCommandLineLoader.java index 9284b8a327c2a..146a939b35143 100644 --- a/modules/core/src/test/java/org/apache/ignite/startup/GridRandomCommandLineLoader.java +++ b/modules/core/src/test/java/org/apache/ignite/startup/GridRandomCommandLineLoader.java @@ -35,6 +35,7 @@ import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.IgnitionListener; import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteVersionUtils; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; @@ -61,7 +62,7 @@ public final class GridRandomCommandLineLoader { private static final String IGNITE_PROG_NAME = "IGNITE_PROG_NAME"; /** Copyright text. Ant processed. */ - private static final String COPYRIGHT = "2018 Copyright(C) Apache Software Foundation."; + private static final String COPYRIGHT = IgniteVersionUtils.COPYRIGHT; /** Version. Ant processed. 
*/ private static final String VER = "x.x.x"; diff --git a/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineLoaderTest.java b/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineLoaderTest.java index 612479bb94642..3cb3d4597cc97 100644 --- a/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineLoaderTest.java +++ b/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineLoaderTest.java @@ -30,6 +30,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.apache.ignite.testframework.junits.multijvm.IgniteProcessProxy; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_RESTART_CODE; @@ -37,6 +40,7 @@ * Command line loader test. */ @GridCommonTest(group = "Loaders") +@RunWith(JUnit4.class) public class GridCommandLineLoaderTest extends GridCommonAbstractTest { /** */ private static final String GRID_CFG_PATH = "/modules/core/src/test/config/loaders/grid-cfg.xml"; @@ -44,6 +48,7 @@ public class GridCommandLineLoaderTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testLoader() throws Exception { String path = U.getIgniteHome() + GRID_CFG_PATH; diff --git a/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineTransformerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineTransformerSelfTest.java index 6fa0b0929e7d0..d08ed6bdb236d 100644 --- a/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineTransformerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/startup/cmdline/GridCommandLineTransformerSelfTest.java @@ -20,14 +20,19 @@ import java.util.concurrent.Callable; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * GridCommandLineTransformer test. */ +@RunWith(JUnit4.class) public class GridCommandLineTransformerSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTransformIfNoArguments() throws Exception { assertEquals( "\"INTERACTIVE=0\" \"QUIET=-DIGNITE_QUIET=true\" \"NO_PAUSE=0\" " + @@ -38,6 +43,7 @@ public void testTransformIfNoArguments() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformIfArgumentIsnull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @SuppressWarnings("NullArgumentToVariableArgMethod") @@ -53,6 +59,7 @@ public void testTransformIfArgumentIsnull() throws Exception { * * @throws Exception If failed. */ + @Test public void testTransformIfUnsupportedOptions() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -64,6 +71,7 @@ public void testTransformIfUnsupportedOptions() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTransformIfUnsupportedJvmOptions() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -87,6 +95,7 @@ public void testTransformIfUnsupportedJvmOptions() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformIfSeveralArgumentsWithoutDashPrefix() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -98,6 +107,7 @@ public void testTransformIfSeveralArgumentsWithoutDashPrefix() throws Exception /** * @throws Exception If failed. */ + @Test public void testTransformIfOnlyPathToConfigSpecified() throws Exception { assertEquals( "\"INTERACTIVE=0\" \"QUIET=-DIGNITE_QUIET=true\" \"NO_PAUSE=0\" \"NO_JMX=0\" " + @@ -108,6 +118,7 @@ public void testTransformIfOnlyPathToConfigSpecified() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransformIfAllSupportedArguments() throws Exception { assertEquals( "\"INTERACTIVE=1\" \"QUIET=-DIGNITE_QUIET=false\" \"NO_PAUSE=1\" \"NO_JMX=1\" " + @@ -116,4 +127,4 @@ public void testTransformIfAllSupportedArguments() throws Exception { CommandLineTransformer.transform("-i", "-np", "-v", "-J-Xmx1g", "-J-Xms1m", "-nojmx", "\"c:\\path to\\русский каталог\"")); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/startup/properties/NotStringSystemPropertyTest.java b/modules/core/src/test/java/org/apache/ignite/startup/properties/NotStringSystemPropertyTest.java index 5e108a6e02f70..f2cbf98cd2bd0 100644 --- a/modules/core/src/test/java/org/apache/ignite/startup/properties/NotStringSystemPropertyTest.java +++ b/modules/core/src/test/java/org/apache/ignite/startup/properties/NotStringSystemPropertyTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.configuration.IgniteConfiguration; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * The test checks start of Ignite with non-string properties. */ +@RunWith(JUnit4.class) public class NotStringSystemPropertyTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration optimize(IgniteConfiguration cfg) throws IgniteCheckedException { @@ -38,6 +42,7 @@ public class NotStringSystemPropertyTest extends GridCommonAbstractTest { /** * @throws Exception If fail. */ + @Test public void testGridStart() throws Exception { Some some = new Some(0, "prop"); diff --git a/modules/core/src/test/java/org/apache/ignite/startup/servlet/GridServletLoaderTest.java b/modules/core/src/test/java/org/apache/ignite/startup/servlet/GridServletLoaderTest.java index 3345aed830af1..05941d6bb9e14 100644 --- a/modules/core/src/test/java/org/apache/ignite/startup/servlet/GridServletLoaderTest.java +++ b/modules/core/src/test/java/org/apache/ignite/startup/servlet/GridServletLoaderTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Servlet loader test. @@ -58,6 +61,7 @@ * {@code JAVA_OPTS="${JAVA_OPTS} "-Dcom.sun.management.jmxremote.port=1097" "-Dcom.sun.management.jmxremote.ssl=false" "-Dcom.sun.management.jmxremote.authenticate=false" "} */ @GridCommonTest(group = "Loaders") +@RunWith(JUnit4.class) public class GridServletLoaderTest extends GridCommonAbstractTest { /** */ public static final int JMX_RMI_CONNECTOR_PORT = 1097; @@ -68,7 +72,7 @@ public class GridServletLoaderTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ - @SuppressWarnings({"unchecked"}) + @Test public void testLoader() throws Exception { JMXConnector jmx = null; @@ -174,4 +178,4 @@ private static JMXConnector getJMXConnector(String host, int port) throws IOExce // Wait for 5 minutes. return 5 * 60 * 1000; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerSelfTest.java b/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerSelfTest.java index 90ef8d7bd70dc..0b8f5d66b910e 100644 --- a/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerSelfTest.java @@ -43,24 +43,22 @@ import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.marshaller.jdk.JdkMarshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.stream.StreamSingleTupleExtractor; import org.apache.ignite.stream.StreamMultipleTupleExtractor; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; /** * Tests {@link SocketStreamer}. */ +@RunWith(JUnit4.class) public class SocketStreamerSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Grid count. 
*/ private final static int GRID_CNT = 3; @@ -81,12 +79,6 @@ public class SocketStreamerSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(ccfg); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - return cfg; } @@ -103,6 +95,7 @@ public class SocketStreamerSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSizeBasedDefaultConverter() throws Exception { test(null, null, new Runnable() { @Override public void run() { @@ -131,6 +124,7 @@ public void testSizeBasedDefaultConverter() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleEntriesFromOneMessage() throws Exception { test(null, null, new Runnable() { @Override public void run() { @@ -162,6 +156,7 @@ public void testMultipleEntriesFromOneMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSizeBasedCustomConverter() throws Exception { SocketMessageConverter converter = new SocketMessageConverter() { @Override public Message convert(byte[] msg) { @@ -201,6 +196,7 @@ public void testSizeBasedCustomConverter() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDelimiterBasedDefaultConverter() throws Exception { test(null, DELIM, new Runnable() { @Override public void run() { @@ -226,6 +222,7 @@ public void testDelimiterBasedDefaultConverter() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDelimiterBasedCustomConverter() throws Exception { SocketMessageConverter converter = new SocketMessageConverter() { @Override public Message convert(byte[] msg) { @@ -263,8 +260,8 @@ public void testDelimiterBasedCustomConverter() throws Exception { * @param converter Converter. * @param r Runnable.. 
*/ - private void test(@Nullable SocketMessageConverter converter, - @Nullable byte[] delim, + private void test(@Nullable SocketMessageConverter converter, + @Nullable byte[] delim, Runnable r, boolean oneMessagePerTuple) throws Exception { SocketStreamer sockStmr = null; @@ -389,4 +386,4 @@ private static class Message implements Serializable { this.values = values; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerUnmarshalVulnerabilityTest.java b/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerUnmarshalVulnerabilityTest.java index dadc5b6f31c19..4044ddd7f9e15 100644 --- a/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerUnmarshalVulnerabilityTest.java +++ b/modules/core/src/test/java/org/apache/ignite/stream/socket/SocketStreamerUnmarshalVulnerabilityTest.java @@ -41,6 +41,9 @@ import org.apache.ignite.stream.StreamSingleTupleExtractor; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MARSHALLER_BLACKLIST; import static org.apache.ignite.IgniteSystemProperties.IGNITE_MARSHALLER_WHITELIST; @@ -48,6 +51,7 @@ /** * Tests for whitelist and blacklist ot avoiding deserialization vulnerability. */ +@RunWith(JUnit4.class) public class SocketStreamerUnmarshalVulnerabilityTest extends GridCommonAbstractTest { /** Shared value. */ private static final AtomicBoolean SHARED = new AtomicBoolean(); @@ -83,6 +87,7 @@ public class SocketStreamerUnmarshalVulnerabilityTest extends GridCommonAbstract /** * @throws Exception If failed. */ + @Test public void testNoLists() throws Exception { testExploit(true); } @@ -90,6 +95,7 @@ public void testNoLists() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testWhiteListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); @@ -101,6 +107,7 @@ public void testWhiteListIncluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWhiteListExcluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_excluded.txt").getPath(); @@ -112,6 +119,7 @@ public void testWhiteListExcluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlackListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); @@ -123,6 +131,7 @@ public void testBlackListIncluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBlackListExcluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_excluded.txt").getPath(); @@ -134,6 +143,7 @@ public void testBlackListExcluded() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBothListIncluded() throws Exception { String path = U.resolveIgnitePath("modules/core/src/test/config/class_list_exploit_included.txt").getPath(); diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/GridStringLogger.java b/modules/core/src/test/java/org/apache/ignite/testframework/GridStringLogger.java index 9056dd65c1349..0b09d3e3c41ad 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/GridStringLogger.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/GridStringLogger.java @@ -26,7 +26,10 @@ /** * Logger which logs to string buffer. + * + * @deprecated Use {@link ListeningTestLogger} instead. 
*/ +@Deprecated public class GridStringLogger implements IgniteLogger { /** Initial string builder capacity in bytes */ private static final int INITIAL = 1024 * 33; diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/GridTestUtils.java b/modules/core/src/test/java/org/apache/ignite/testframework/GridTestUtils.java index 4195551e51238..a5e92843e7223 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/GridTestUtils.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/GridTestUtils.java @@ -64,8 +64,7 @@ import javax.net.ssl.KeyManagerFactory; import javax.net.ssl.SSLContext; import javax.net.ssl.TrustManager; -import junit.framework.Test; -import junit.framework.TestCase; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; @@ -97,6 +96,7 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.plugin.extensions.communication.Message; @@ -104,6 +104,7 @@ import org.apache.ignite.spi.discovery.DiscoverySpiListener; import org.apache.ignite.ssl.SslContextFactory; import org.apache.ignite.testframework.config.GridTestProperties; +import org.apache.ignite.testframework.junits.GridAbstractTest; import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; @@ -160,7 +161,7 @@ private DiscoverySpiListenerWrapper(DiscoverySpiListener delegate, DiscoveryHook } /** {@inheritDoc} */ - @Override public IgniteInternalFuture onDiscovery(int type, long topVer, ClusterNode node, Collection topSnapshot, @Nullable Map> topHist, @Nullable DiscoverySpiCustomMessage spiCustomMsg) { + @Override public IgniteFuture onDiscovery(int type, long 
topVer, ClusterNode node, Collection<ClusterNode> topSnapshot, @Nullable Map<Long, Collection<ClusterNode>> topHist, @Nullable DiscoverySpiCustomMessage spiCustomMsg) { hook.handleDiscoveryMessage(spiCustomMsg); return delegate.onDiscovery(type, topVer, node, topSnapshot, topHist, spiCustomMsg); @@ -180,17 +181,22 @@ public static DiscoverySpiListener wrap(DiscoverySpiListener delegate, Discovery } } + /** Test parameters scale factor util. */ + public static final class SF extends ScaleFactorUtil { + + } + /** */ private static final Map<Class<?>, String> addrs = new HashMap<>(); /** */ - private static final Map<Class<?>, Integer> mcastPorts = new HashMap<>(); + private static final Map<Class<? extends GridAbstractTest>, Integer> mcastPorts = new HashMap<>(); /** */ - private static final Map<Class<?>, Integer> discoPorts = new HashMap<>(); + private static final Map<Class<? extends GridAbstractTest>, Integer> discoPorts = new HashMap<>(); /** */ - private static final Map<Class<?>, Integer> commPorts = new HashMap<>(); + private static final Map<Class<? extends GridAbstractTest>, Integer> commPorts = new HashMap<>(); /** */ private static int[] addr; @@ -597,7 +603,7 @@ public static void assertOneToOne(Iterable it, IgnitePredicate... ps) * @param cls Class. * @return Next multicast port. */ - public static synchronized int getNextMulticastPort(Class<?> cls) { + public static synchronized int getNextMulticastPort(Class<? extends GridAbstractTest> cls) { Integer portRet = mcastPorts.get(cls); if (portRet != null) @@ -644,7 +650,7 @@ public static synchronized int getNextMulticastPort(Class cls) { * @param cls Class. * @return Next communication port. */ - public static synchronized int getNextCommPort(Class<?> cls) { + public static synchronized int getNextCommPort(Class<? extends GridAbstractTest> cls) { Integer portRet = commPorts.get(cls); if (portRet != null) @@ -671,7 +677,7 @@ public static synchronized int getNextCommPort(Class cls) { * @param cls Class. * @return Next discovery port. 
*/ - public static synchronized int getNextDiscoPort(Class<?> cls) { + public static synchronized int getNextDiscoPort(Class<? extends GridAbstractTest> cls) { Integer portRet = discoPorts.get(cls); if (portRet != null) @@ -1554,51 +1560,58 @@ public static void setFieldValue(Object obj, Class cls, String fieldName, Object */ @SuppressWarnings("SynchronizationOnLocalVariableOrMethodParameter") @Nullable public static <T> T invoke(Object obj, String mtd, Object... params) throws Exception { - // We cannot resolve method by parameter classes due to some of parameters can be null. - // Search correct method among all methods collection. - for (Method m : obj.getClass().getDeclaredMethods()) { - // Filter methods by name. - if (!m.getName().equals(mtd)) - continue; + Class<?> cls = obj.getClass(); - if (!areCompatible(params, m.getParameterTypes())) - continue; - - try { - synchronized (m) { - // Backup accessible field state. - boolean accessible = m.isAccessible(); + do { + // We cannot resolve method by parameter classes due to some of parameters can be null. + // Search correct method among all methods collection. + for (Method m : cls.getDeclaredMethods()) { + // Filter methods by name. + if (!m.getName().equals(mtd)) + continue; - try { - if (!accessible) - m.setAccessible(true); + if (!areCompatible(params, m.getParameterTypes())) + continue; - return (T)m.invoke(obj, params); - } - finally { - // Recover accessible field state. 
+ if (!accessible) + m.setAccessible(false); + } } } - } - catch (IllegalAccessException e) { - throw new RuntimeException("Failed to access method" + - " [obj=" + obj + ", mtd=" + mtd + ", params=" + Arrays.toString(params) + ']', e); - } - catch (InvocationTargetException e) { - Throwable cause = e.getCause(); + catch (IllegalAccessException e) { + throw new RuntimeException("Failed to access method" + + " [obj=" + obj + ", mtd=" + mtd + ", params=" + Arrays.toString(params) + ']', e); + } + catch (InvocationTargetException e) { + Throwable cause = e.getCause(); - if (cause instanceof Error) - throw (Error) cause; + if (cause instanceof Error) + throw (Error) cause; - if (cause instanceof Exception) - throw (Exception) cause; + if (cause instanceof Exception) + throw (Exception) cause; - throw new RuntimeException("Failed to invoke method)" + - " [obj=" + obj + ", mtd=" + mtd + ", params=" + Arrays.toString(params) + ']', e); + throw new RuntimeException("Failed to invoke method)" + + " [obj=" + obj + ", mtd=" + mtd + ", params=" + Arrays.toString(params) + ']', e); + } } - } + + cls = cls.getSuperclass(); + } while (cls != Object.class); + throw new RuntimeException("Failed to find method" + " [obj=" + obj + ", mtd=" + mtd + ", params=" + Arrays.toString(params) + ']'); @@ -1996,12 +2009,12 @@ public static String fullSimpleName(@NotNull Class cls) { * @param test Test. * @param ignoredTests Tests to ignore. 
If test contained in the collection it is not included in suite */ - public static void addTestIfNeeded(@NotNull final TestSuite suite, @NotNull final Class<? extends TestCase> test, + public static void addTestIfNeeded(@NotNull final TestSuite suite, @NotNull final Class<?> test, @Nullable final Collection<Class> ignoredTests) { if (ignoredTests != null && ignoredTests.contains(test)) return; - suite.addTestSuite(test); + suite.addTest(new JUnit4TestAdapter(test)); } /** @@ -2039,4 +2052,49 @@ public static void mergeExchangeWaitVersion(Ignite node, long topVer, List merge ((IgniteEx)node).context().cache().context().exchange().mergeExchangesTestWaitVersion( new AffinityTopologyVersion(topVer, 0), mergedEvts); } + + /** Test parameters scale factor util. */ + private static class ScaleFactorUtil { + /** Test speed scale factor property name. */ + private static final String TEST_SCALE_FACTOR_PROPERTY = "TEST_SCALE_FACTOR"; + + /** Min test scale factor value. */ + private static final double MIN_TEST_SCALE_FACTOR_VALUE = 0.1; + + /** Max test scale factor value. */ + private static final double MAX_TEST_SCALE_FACTOR_VALUE = 1.0; + + /** Test speed scale factor. 
*/ + private static final double TEST_SCALE_FACTOR_VALUE = readScaleFactor(); + + /** */ + private static double readScaleFactor() { + double scaleFactor = Double.parseDouble(System.getProperty(TEST_SCALE_FACTOR_PROPERTY, "1.0")); + + scaleFactor = Math.max(scaleFactor, MIN_TEST_SCALE_FACTOR_VALUE); + scaleFactor = Math.min(scaleFactor, MAX_TEST_SCALE_FACTOR_VALUE); + + return scaleFactor; + } + + /** */ + public static int apply(int val) { + return (int) Math.round(TEST_SCALE_FACTOR_VALUE * val); + } + + /** */ + public static int apply(int val, int lowerBound, int upperBound) { + return applyUB(applyLB(val, lowerBound), upperBound); + } + + /** Apply scale factor with lower bound */ + public static int applyLB(int val, int lowerBound) { + return Math.max(apply(val), lowerBound); + } + + /** Apply scale factor with upper bound */ + public static int applyUB(int val, int upperBound) { + return Math.min(apply(val), upperBound); + } + } } diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/ListeningTestLogger.java b/modules/core/src/test/java/org/apache/ignite/testframework/ListeningTestLogger.java new file mode 100644 index 0000000000000..1b05f4cb642f1 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testframework/ListeningTestLogger.java @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
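The ScaleFactorUtil hunk above reads TEST_SCALE_FACTOR once and clamps it into [0.1, 1.0] before scaling test parameters such as iteration counts. The clamping can be sketched standalone like this (class and method names here are illustrative, not part of the patch):

```java
/** Illustrative sketch of the TEST_SCALE_FACTOR clamping from the hunk above. */
class ScaleFactorSketch {
    /** Bounds mirroring MIN_TEST_SCALE_FACTOR_VALUE / MAX_TEST_SCALE_FACTOR_VALUE. */
    static final double MIN = 0.1;
    static final double MAX = 1.0;

    /** Clamps a raw factor into [MIN, MAX], as readScaleFactor() does. */
    static double clamp(double raw) {
        return Math.min(Math.max(raw, MIN), MAX);
    }

    /** Scales an int test parameter (e.g. an iteration count) by the clamped factor. */
    static int apply(int val, double rawFactor) {
        return (int)Math.round(clamp(rawFactor) * val);
    }
}
```

Out-of-range values supplied via the property are silently pulled back into the valid range instead of failing the run, so a misconfigured factor can only slow tests down, never zero them out.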
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testframework; + +import java.util.Collection; +import java.util.concurrent.CopyOnWriteArraySet; +import java.util.function.Consumer; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.internal.util.typedef.X; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** + * Implementation of {@link org.apache.ignite.IgniteLogger} that performs any actions when certain message is logged. + * It can be useful in tests to ensure that a specific message was (or was not) printed to the log. + */ +public class ListeningTestLogger implements IgniteLogger { + /** + * If set to {@code true}, enables debug and trace log messages processing. + */ + private final boolean dbg; + + /** + * Logger to echo all messages, limited by {@code dbg} flag. + */ + private final IgniteLogger echo; + + /** + * Registered log messages listeners. + */ + private final Collection> lsnrs = new CopyOnWriteArraySet<>(); + + /** + * Default constructor. + */ + public ListeningTestLogger() { + this(false); + } + + /** + * @param dbg If set to {@code true}, enables debug and trace log messages processing. + */ + public ListeningTestLogger(boolean dbg) { + this(dbg, null); + } + + /** + * @param dbg If set to {@code true}, enables debug and trace log messages processing. + * @param echo Logger to echo all messages, limited by {@code dbg} flag. + */ + public ListeningTestLogger(boolean dbg, @Nullable IgniteLogger echo) { + this.dbg = dbg; + this.echo = echo; + } + + /** + * Registers message listener. 
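The core mechanism of ListeningTestLogger introduced above is a CopyOnWriteArraySet of Consumer<String> listeners that is iterated synchronously from the logging thread. A minimal self-contained sketch of that dispatch pattern (the class name is hypothetical):

```java
import java.util.Collection;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.function.Consumer;

/** Hypothetical minimal version of the listener dispatch used by ListeningTestLogger. */
class ListenerDispatchSketch {
    /** Copy-on-write set: registration is rare, iteration on every log call is cheap and thread-safe. */
    private final Collection<Consumer<String>> lsnrs = new CopyOnWriteArraySet<>();

    public void register(Consumer<String> lsnr) {
        lsnrs.add(lsnr);
    }

    /** Called for each logged message; listeners run in the logging thread, null messages are skipped. */
    public void log(String msg) {
        if (msg == null)
            return;

        for (Consumer<String> lsnr : lsnrs)
            lsnr.accept(msg);
    }
}
```

Because listeners execute on the logging thread, a slow or throwing listener directly affects the code under test, which is why the patch steers users toward LogListener predicates instead of raw consumers.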
+ * + * @param lsnr Message listener. + */ + public void registerListener(@NotNull LogListener lsnr) { + lsnr.reset(); + + lsnrs.add(lsnr); + } + + /** + * Registers message listener. + * <p>
    + * NOTE listener is executed in the thread causing the logging, so it is not recommended to throw any exceptions + * from it. Use {@link LogListener} to create message predicates with assertions. + * + * @param lsnr Message listener. + * @see LogListener + */ + public void registerListener(@NotNull Consumer lsnr) { + lsnrs.add(lsnr); + } + + /** + * Unregisters message listener. + * + * @param lsnr Message listener. + */ + public void unregisterListener(@NotNull Consumer lsnr) { + lsnrs.remove(lsnr); + } + + /** + * Clears all listeners. + */ + public void clearListeners() { + lsnrs.clear(); + } + + /** {@inheritDoc} */ + @Override public ListeningTestLogger getLogger(Object ctgr) { + return this; + } + + /** {@inheritDoc} */ + @Override public void trace(String msg) { + if (!dbg) + return; + + if (echo != null) + echo.trace(msg); + + applyListeners(msg); + } + + /** {@inheritDoc} */ + @Override public void debug(String msg) { + if (!dbg) + return; + + if (echo != null) + echo.debug(msg); + + applyListeners(msg); + } + + /** {@inheritDoc} */ + @Override public void info(String msg) { + if (echo != null) + echo.info(msg); + + applyListeners(msg); + } + + /** {@inheritDoc} */ + @Override public void warning(String msg, @Nullable Throwable t) { + if (echo != null) + echo.warning(msg, t); + + applyListeners(msg); + + if (t != null) + applyListeners(X.getFullStackTrace(t)); + } + + /** {@inheritDoc} */ + @Override public void error(String msg, @Nullable Throwable t) { + if (echo != null) + echo.error(msg, t); + + applyListeners(msg); + + if (t != null) + applyListeners(X.getFullStackTrace(t)); + } + + /** {@inheritDoc} */ + @Override public boolean isTraceEnabled() { + return dbg; + } + + /** {@inheritDoc} */ + @Override public boolean isDebugEnabled() { + return dbg; + } + + /** {@inheritDoc} */ + @Override public boolean isInfoEnabled() { + return true; + } + + /** {@inheritDoc} */ + @Override public boolean isQuiet() { + return false; + } + + /** {@inheritDoc} 
*/ + @Override public String fileName() { + return null; + } + + /** + * Applies listeners whose pattern is found in the message. + * + * @param msg Message to check. + */ + private void applyListeners(String msg) { + if (msg == null) + return; + + for (Consumer lsnr : lsnrs) + lsnr.accept(msg); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/LogListener.java b/modules/core/src/test/java/org/apache/ignite/testframework/LogListener.java new file mode 100644 index 0000000000000..485e3fae98b35 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testframework/LogListener.java @@ -0,0 +1,389 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.testframework; + +import java.time.temporal.ValueRange; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.regex.Matcher; +import java.util.regex.Pattern; +import org.jetbrains.annotations.NotNull; + +/** + * The basic listener for custom log contents checking in {@link ListeningTestLogger}.
+ * <p>
+ * Supports {@link #matches(String) substring}, {@link #matches(Pattern) regular expression} or
+ * {@link #matches(Predicate) predicate} listeners and the following optional modifiers:
+ * <ul>
+ *  <li>{@link Builder#times times()} sets the exact number of occurrences</li>
+ *  <li>{@link Builder#atLeast atLeast()} sets the minimum number of occurrences</li>
+ *  <li>{@link Builder#atMost atMost()} sets the maximum number of occurrences</li>
+ * </ul>
+ * {@link Builder#atLeast atLeast()} and {@link Builder#atMost atMost()} can be used together.
+ * <p>
+ * If the expected number of occurrences is not specified for the listener,
+ * then at least one occurrence is expected by default. In other words:
+ * <pre>{@code LogListener.matches(msg).build();}</pre>
+ * is equivalent to
+ * <pre>{@code LogListener.matches(msg).atLeast(1).build();}</pre>
+ * <p>
+ * If only the expected maximum number of occurrences is specified, then
+ * the minimum number of entries for successful validation is zero. In other words:
+ * <pre>{@code LogListener.matches(msg).atMost(10).build();}</pre>
+ * is equivalent to
+ * <pre>{@code LogListener.matches(msg).atLeast(0).atMost(10).build();}</pre>
    + */ +public abstract class LogListener implements Consumer { + /** + * Checks that all conditions are met. + * + * @return {@code True} if all conditions are met. + */ + public abstract boolean check(); + + /** + * Reset listener state. + */ + abstract void reset(); + + /** + * Creates new listener builder. + * + * @param substr Substring to search for in a log message. + * @return Log message listener builder. + */ + public static Builder matches(String substr) { + return new Builder().andMatches(substr); + } + + /** + * Creates new listener builder. + * + * @param regexp Regular expression to search for in a log message. + * @return Log message listener builder. + */ + public static Builder matches(Pattern regexp) { + return new Builder().andMatches(regexp); + } + + /** + * Creates new listener builder. + * + * @param pred Log message predicate. + * @return Log message listener builder. + */ + public static Builder matches(Predicate pred) { + return new Builder().andMatches(pred); + } + + /** + * Log listener builder. + */ + public static class Builder { + /** */ + private final CompositeMessageListener lsnr = new CompositeMessageListener(); + + /** */ + private Node prev; + + /** + * Add new substring predicate. + * + * @param substr Substring. + * @return current builder instance. + */ + public Builder andMatches(String substr) { + addLast(new Node(msg -> { + if (substr.isEmpty()) + return msg.isEmpty() ? 1 : 0; + + int cnt = 0; + + for (int idx = 0; (idx = msg.indexOf(substr, idx)) != -1; idx++) + ++cnt; + + return cnt; + })); + + return this; + } + + /** + * Add new regular expression predicate. + * + * @param regexp Regular expression. + * @return current builder instance. + */ + public Builder andMatches(Pattern regexp) { + addLast(new Node(msg -> { + int cnt = 0; + + Matcher matcher = regexp.matcher(msg); + + while (matcher.find()) + ++cnt; + + return cnt; + })); + + return this; + } + + /** + * Add new log message predicate. 
+ * + * @param pred Log message predicate. + * @return current builder instance. + */ + public Builder andMatches(Predicate<String> pred) { + addLast(new Node(msg -> pred.test(msg) ? 1 : 0)); + + return this; + } + + /** + * Set expected number of matches.<br> + * Each log message may contain several matches that will be counted, + * except {@code Predicate} which can have only one match for message. + * + * @param n Expected number of matches. + * @return current builder instance. + */ + public Builder times(int n) { + if (prev != null) + prev.cnt = n; + + return this; + } + + /** + * Set expected minimum number of matches.<br> + * Each log message may contain several matches that will be counted, + * except {@code Predicate} which can have only one match for message. + * + * @param n Expected number of matches. + * @return current builder instance. + */ + public Builder atLeast(int n) { + if (prev != null) { + prev.min = n; + + prev.cnt = null; + } + + return this; + } + + /** + * Set expected maximum number of matches.<br>
    + * Each log message may contain several matches that will be counted, + * except {@code Predicate} which can have only one match for message. + * + * @param n Expected number of matches. + * @return current builder instance. + */ + public Builder atMost(int n) { + if (prev != null) { + prev.max = n; + + prev.cnt = null; + } + + return this; + } + + /** + * Constructs message listener. + * + * @return Log message listener. + */ + public LogListener build() { + addLast(null); + + return lsnr.lsnrs.size() == 1 ? lsnr.lsnrs.get(0) : lsnr; + } + + /** + * @param node Log listener attributes. + */ + private void addLast(Node node) { + if (prev != null) + lsnr.add(prev.listener()); + + prev = node; + } + + /** */ + private Builder() {} + + /** + * Mutable attributes for log listener. + */ + static final class Node { + /** */ + final Function func; + + /** */ + Integer min; + + /** */ + Integer max; + + /** */ + Integer cnt; + + /** */ + Node(Function func) { + this.func = func; + } + + /** */ + LogMessageListener listener() { + ValueRange range; + + if (cnt != null) + range = ValueRange.of(cnt, cnt); + else if (min == null && max == null) + range = ValueRange.of(1, Integer.MAX_VALUE); + else + range = ValueRange.of(min == null ? 0 : min, max == null ? Integer.MAX_VALUE : max); + + return new LogMessageListener(func, range); + } + } + } + + /** */ + private static class LogMessageListener extends LogListener { + /** */ + private final Function func; + + /** */ + private final AtomicReference err = new AtomicReference<>(); + + /** */ + private final AtomicInteger matches = new AtomicInteger(); + + /** */ + private final ValueRange exp; + + /** + * @param exp Expected occurrences. + * @param func Function of counting matches in the message. 
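The Builder above counts matches differently per predicate type: substrings via an indexOf loop whose index advances one character at a time (so overlapping hits count), regular expressions via Matcher.find() (non-overlapping). Both counters can be sketched standalone (the class name is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative sketch of the occurrence counting used by LogListener.Builder. */
class MatchCountSketch {
    /** Counts occurrences of substr in msg; overlapping hits count because idx advances by 1. */
    public static int countSubstring(String msg, String substr) {
        if (substr.isEmpty())
            return msg.isEmpty() ? 1 : 0;

        int cnt = 0;

        for (int idx = 0; (idx = msg.indexOf(substr, idx)) != -1; idx++)
            ++cnt;

        return cnt;
    }

    /** Counts non-overlapping regex matches via Matcher.find(). */
    public static int countRegex(String msg, Pattern regexp) {
        int cnt = 0;

        Matcher matcher = regexp.matcher(msg);

        while (matcher.find())
            ++cnt;

        return cnt;
    }
}
```

The asymmetry matters when pairing times(n) with a pattern: for "aaaa", the substring counter reports three hits of "aa" while the regex counter reports two.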
+ */ + private LogMessageListener(@NotNull Function func, @NotNull ValueRange exp) { + this.func = func; + this.exp = exp; + } + + /** {@inheritDoc} */ + @Override public void accept(String msg) { + if (err.get() != null) + return; + + try { + int cnt = func.apply(msg); + + if (cnt > 0) + matches.addAndGet(cnt); + } + catch (Throwable t) { + err.compareAndSet(null, t); + + if (t instanceof VirtualMachineError) + throw t; + } + } + + /** {@inheritDoc} */ + @Override public boolean check() { + errCheck(); + + int matchesCnt = matches.get(); + + return exp.isValidIntValue(matchesCnt); + } + + /** {@inheritDoc} */ + @Override void reset() { + matches.set(0); + } + + /** + * Check that there were no runtime errors. + */ + private void errCheck() { + Throwable t = err.get(); + + if (t instanceof Error) + throw (Error)t; + + if (t instanceof RuntimeException) + throw (RuntimeException)t; + + assert t == null : t; + } + } + + /** */ + private static class CompositeMessageListener extends LogListener { + /** */ + private final List lsnrs = new ArrayList<>(); + + /** {@inheritDoc} */ + @Override public boolean check() { + for (LogMessageListener lsnr : lsnrs) + if (!lsnr.check()) + return false; + + return true; + } + + /** {@inheritDoc} */ + @Override void reset() { + for (LogMessageListener lsnr : lsnrs) + lsnr.reset(); + } + + /** {@inheritDoc} */ + @Override public void accept(String msg) { + for (LogMessageListener lsnr : lsnrs) + lsnr.accept(msg); + } + + /** + * @param lsnr Listener. 
+ */ + private void add(LogMessageListener lsnr) { + lsnrs.add(lsnr); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/MvccFeatureChecker.java b/modules/core/src/test/java/org/apache/ignite/testframework/MvccFeatureChecker.java new file mode 100644 index 0000000000000..b3b64f9b286ea --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testframework/MvccFeatureChecker.java @@ -0,0 +1,162 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testframework; + +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.internal.processors.query.IgniteSQLException; +import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.transactions.TransactionConcurrency; +import org.apache.ignite.transactions.TransactionIsolation; + +import static org.apache.ignite.IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS; +import static org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.TRANSACTION_SERIALIZATION_ERROR; +import static org.junit.Assert.fail; + +/** + * Provides checks for features supported when FORCE_MVCC mode is on. 
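MvccFeatureChecker, introduced in the next file, combines a system-property flag read once at class load with a validate method that fails pointing at a tracking issue; isSupported() simply converts that failure into a boolean. A self-contained sketch of the pattern (flag name and feature set here are hypothetical):

```java
/** Hypothetical sketch of the feature-gating pattern used by MvccFeatureChecker. */
class FeatureGateSketch {
    enum Feature { NEAR_CACHE, EVICTION }

    /** Read once at class load, like FORCE_MVCC. */
    private static final boolean FORCED =
        Boolean.parseBoolean(System.getProperty("FORCE_MODE_IN_TESTS", "false"));

    /** No-op unless the forced mode is on; then delegates to validate(). */
    public static void failIfNotSupported(Feature f) {
        if (!FORCED)
            return;

        validate(f);
    }

    /** Turns the fail-fast check into a boolean, regardless of the flag. */
    public static boolean isSupported(Feature f) {
        try {
            validate(f);

            return true;
        }
        catch (AssertionError ignore) {
            return false;
        }
    }

    /** Fails for unsupported features; the real class names the JIRA issue here. */
    private static void validate(Feature f) {
        switch (f) {
            case NEAR_CACHE:
                throw new AssertionError("unsupported, see tracking issue");

            default:
                break; // Supported.
        }
    }
}
```

Keeping validate() ungated lets isSupported() answer the capability question even when the forced mode is off, while failIfNotSupported() stays a cheap no-op for normal runs.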
+ */ +public class MvccFeatureChecker { + /** */ + private static final boolean FORCE_MVCC = + IgniteSystemProperties.getBoolean(IGNITE_FORCE_MVCC_MODE_IN_TESTS, false); + + /** */ + public enum Feature { + CACHE_STORE, + NEAR_CACHE, + LOCAL_CACHE, + ENTRY_LOCK, + CACHE_EVENTS, + EVICTION, + EXPIRATION, + METRICS, + INTERCEPTOR + } + + /** + * Fails if feature is not supported. + * + * @param f feature. + * @throws AssertionError If failed. + */ + public static void failIfNotSupported(Feature f) { + if (!forcedMvcc()) + return; + + validateFeature(f); + } + + /** + * @return {@code True} if Mvcc mode is forced. + */ + public static boolean forcedMvcc() { + return FORCE_MVCC; + } + + /** + * Check if feature is supported. + * + * @param f Feature. + * @return {@code True} if feature is supported, {@code False} otherwise. + */ + public static boolean isSupported(Feature f) { + try { + validateFeature(f); + + return true; + } + catch (AssertionError ignore) { + return false; + } + } + + /** + * Check if Tx mode is supported. + * + * @param conc Transaction concurrency. + * @param iso Transaction isolation. + * @return {@code True} if feature is supported, {@code False} otherwise. + */ + public static boolean isSupported(TransactionConcurrency conc, TransactionIsolation iso) { + return conc == TransactionConcurrency.PESSIMISTIC && + iso == TransactionIsolation.REPEATABLE_READ; + } + + + /** + * Check if Cache mode is supported. + * + * @param mode Cache mode. + * @return {@code True} if feature is supported, {@code False} otherwise. + */ + public static boolean isSupported(CacheMode mode) { + return mode != CacheMode.LOCAL || isSupported(Feature.LOCAL_CACHE); + } + + /** + * TODO proper exception handling after https://issues.apache.org/jira/browse/IGNITE-9470 + * Checks if given exception was caused by MVCC write conflict. + * + * @param e Exception. 
+ */ + public static void assertMvccWriteConflict(Exception e) { + IgniteSQLException sqlEx = X.cause(e, IgniteSQLException.class); + + if (sqlEx == null || sqlEx.statusCode() != TRANSACTION_SERIALIZATION_ERROR) + fail("Unexpected exception: " + X.getFullStackTrace(e)); + } + + /** + * Fails if feature is not supported in Mvcc mode. + * + * @param feature Mvcc feature. + * @throws AssertionError If failed. + */ + @SuppressWarnings("fallthrough") + private static void validateFeature(Feature feature) { + switch (feature) { + case NEAR_CACHE: + fail("https://issues.apache.org/jira/browse/IGNITE-7187"); + + case LOCAL_CACHE: + fail("https://issues.apache.org/jira/browse/IGNITE-9530"); + + case CACHE_STORE: + fail("https://issues.apache.org/jira/browse/IGNITE-8582"); + + case ENTRY_LOCK: + fail("https://issues.apache.org/jira/browse/IGNITE-9324"); + + case CACHE_EVENTS: + fail("https://issues.apache.org/jira/browse/IGNITE-9321"); + + case EVICTION: + fail("https://issues.apache.org/jira/browse/IGNITE-7956"); + + case EXPIRATION: + fail("https://issues.apache.org/jira/browse/IGNITE-7311"); + + case METRICS: + fail("https://issues.apache.org/jira/browse/IGNITE-9224"); + + case INTERCEPTOR: + fail("https://issues.apache.org/jira/browse/IGNITE-9323"); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/config/GridTestProperties.java b/modules/core/src/test/java/org/apache/ignite/testframework/config/GridTestProperties.java index e2594ca04b29a..9f6feabc94fa1 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/config/GridTestProperties.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/config/GridTestProperties.java @@ -30,6 +30,7 @@ import java.util.regex.Pattern; import org.apache.ignite.binary.BinaryBasicNameMapper; import org.apache.ignite.binary.BinaryTypeConfiguration; +import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.log4j.xml.DOMConfigurator; import org.jetbrains.annotations.Nullable; @@ -95,14 +96,7 @@ public final class GridTestProperties { /** */ static { // Initialize IGNITE_HOME system property. - String igniteHome = System.getProperty("IGNITE_HOME"); - - if (igniteHome == null || igniteHome.isEmpty()) { - igniteHome = System.getenv("IGNITE_HOME"); - - if (igniteHome != null && !igniteHome.isEmpty()) - System.setProperty("IGNITE_HOME", igniteHome); - } + U.getIgniteHome(); // Load default properties. File cfgFile = getTestConfigurationFile(null, TESTS_PROP_FILE); diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/ConfigVariationsTestSuiteBuilder.java b/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/ConfigVariationsTestSuiteBuilder.java index 4a60671fee594..e4159c6ae6834 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/ConfigVariationsTestSuiteBuilder.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/ConfigVariationsTestSuiteBuilder.java @@ -18,6 +18,8 @@ package org.apache.ignite.testframework.configvariations; import java.util.Arrays; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestResult; import junit.framework.TestSuite; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; @@ -200,7 +202,7 @@ private TestSuite build(int[] igniteCfgVariation, @Nullable int[] cacheCfgVariat addedSuite = createMultiNodeTestSuite((Class)cls, testCfg, testedNodeCnt, withClients, skipWaitPartMapExchange); else - addedSuite = new IgniteConfigVariationsTestSuite(cls, testCfg); + addedSuite = makeTestSuite(cls, testCfg); return addedSuite; } @@ -227,12 +229,28 @@ private static TestSuite createMultiNodeTestSuite(Class cls, + VariationsTestsConfig cfg) { + TestSuite res = new TestSuite(cls.getSimpleName()); + + res.addTest(new JUnit4TestAdapter(cls) { + 
@Override public void run(TestResult tr) { + IgniteConfigVariationsAbstractTest.injectTestsConfiguration(cfg); + + super.run(tr); + } + }); + + return res; + } + /** * @return {@code this} for chaining. */ diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/IgniteConfigVariationsTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/IgniteConfigVariationsTestSuite.java deleted file mode 100644 index d953c271eb309..0000000000000 --- a/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/IgniteConfigVariationsTestSuite.java +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.testframework.configvariations; - -import junit.framework.Test; -import junit.framework.TestResult; -import junit.framework.TestSuite; -import org.apache.ignite.testframework.junits.IgniteConfigVariationsAbstractTest; - -/** - * Configuration variations test suite. - */ -public class IgniteConfigVariationsTestSuite extends TestSuite { - /** */ - protected final VariationsTestsConfig cfg; - - /** - * @param cls Test class. - * @param cfg Configuration. 
- */ - public IgniteConfigVariationsTestSuite(Class cls, - VariationsTestsConfig cfg) { - super(cls); - - this.cfg = cfg; - } - - /** {@inheritDoc} */ - @Override public void runTest(Test test, TestResult res) { - if (test instanceof IgniteConfigVariationsAbstractTest) - ((IgniteConfigVariationsAbstractTest)test).setTestsConfiguration(cfg); - - super.runTest(test, res); - } -} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/VariationsTestsConfig.java b/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/VariationsTestsConfig.java index 91c6c93741a2b..633333eb5e387 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/VariationsTestsConfig.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/configvariations/VariationsTestsConfig.java @@ -48,7 +48,7 @@ public class VariationsTestsConfig { private boolean stopCache; /** */ - private boolean awaitPartMapExchange = true; + private boolean awaitPartMapExchange; /** */ private boolean withClients; @@ -88,7 +88,7 @@ public VariationsTestsConfig( boolean withClients, boolean awaitPartMapExchange ) { - A.ensure(gridCnt >= 1, "Grids count cannot be less then 1."); + A.ensure(gridCnt >= 1, "Grids count cannot be less than 1."); this.factory = factory; this.desc = desc; diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/GridAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/GridAbstractTest.java index ee0dfa48fd330..5491a4f6ab953 100755 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/GridAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/GridAbstractTest.java @@ -17,7 +17,34 @@ package org.apache.ignite.testframework.junits; -import junit.framework.TestCase; +import java.io.ObjectStreamException; +import java.io.Serializable; +import java.lang.annotation.Annotation; +import 
java.lang.reflect.Field; +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.lang.reflect.Proxy; +import java.net.MalformedURLException; +import java.net.URL; +import java.net.URLClassLoader; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.UUID; +import java.util.concurrent.Callable; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; +import java.util.concurrent.locks.Lock; +import java.util.concurrent.locks.ReentrantLock; +import javax.cache.configuration.Factory; +import javax.cache.configuration.FactoryBuilder; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; @@ -76,10 +103,10 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.TestTcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.MvccFeatureChecker; import org.apache.ignite.testframework.config.GridTestProperties; import org.apache.ignite.testframework.configvariations.VariationsTestsConfig; import org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger; @@ -93,40 +120,22 @@ import org.apache.log4j.PatternLayout; import org.apache.log4j.Priority; import org.apache.log4j.RollingFileAppender; +import 
org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.Nullable; +import org.junit.Ignore; +import org.junit.Rule; +import org.junit.Test; +import org.junit.rules.TestRule; +import org.junit.runners.model.Statement; +import org.junit.runners.model.TestClass; import org.springframework.beans.BeansException; import org.springframework.context.ApplicationContext; import org.springframework.context.support.FileSystemXmlApplicationContext; -import javax.cache.configuration.Factory; -import javax.cache.configuration.FactoryBuilder; -import java.io.ObjectStreamException; -import java.io.Serializable; -import java.lang.reflect.Field; -import java.lang.reflect.InvocationHandler; -import java.lang.reflect.Method; -import java.lang.reflect.Modifier; -import java.lang.reflect.Proxy; -import java.net.MalformedURLException; -import java.net.URL; -import java.net.URLClassLoader; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collection; -import java.util.Collections; -import java.util.List; -import java.util.Map; -import java.util.UUID; -import java.util.concurrent.Callable; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.ConcurrentMap; -import java.util.concurrent.TimeoutException; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicReference; - import static org.apache.ignite.IgniteSystemProperties.IGNITE_CLIENT_CACHE_CHANGE_MESSAGE_TIMEOUT; import static org.apache.ignite.IgniteSystemProperties.IGNITE_DISCO_FAILED_CLIENT_RECONNECT_DELAY; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.GridKernalState.DISCONNECTED; import static org.apache.ignite.testframework.config.GridTestProperties.BINARY_MARSHALLER_USE_SIMPLE_NAME_MAPPER; @@ -140,10 +149,7 @@ 
"ProhibitedExceptionDeclared", "JUnitTestCaseWithNonTrivialConstructors" }) -public abstract class GridAbstractTest extends TestCase { - /** Persistence in tests allowed property. */ - public static final String PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY = "PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY"; - +public abstract class GridAbstractTest extends JUnit3TestLegacySupport { /************************************************************** * DO NOT REMOVE TRANSIENT - THIS OBJECT MIGHT BE TRANSFERRED * * TO ANOTHER NODE. * @@ -159,6 +165,9 @@ public abstract class GridAbstractTest extends TestCase { setAddresses(Collections.singleton("127.0.0.1:47500..47509")); }}; + /** Shared static IP finder which is used in configuration at nodes startup for all test methods in class. */ + protected static TcpDiscoveryIpFinder sharedStaticIpFinder; + /** */ private static final int DFLT_TOP_WAIT_TIMEOUT = 2000; @@ -168,6 +177,23 @@ public abstract class GridAbstractTest extends TestCase { /** */ protected static final String DEFAULT_CACHE_NAME = "default"; + /** Lock to maintain integrity of {@link TestCounters} and of {@link IgniteConfigVariationsAbstractTest}. */ + private final Lock runSerializer = new ReentrantLock(); + + /** Manages test execution and reporting. */ + @Rule public transient TestRule runRule = (base, description) -> new Statement() { + @Override public void evaluate() throws Throwable { + runSerializer.lock(); + try { + assert getName() != null : "getName returned null"; + + runTestCase(base); + } finally { + runSerializer.unlock(); + } + } + }; + /** */ private transient boolean startGrid; @@ -183,9 +209,6 @@ public abstract class GridAbstractTest extends TestCase { /** Timestamp for tests. */ private static long ts = System.currentTimeMillis(); - /** Starting Ignite instance name. */ - protected static final ThreadLocal startingIgniteInstanceName = new ThreadLocal<>(); - /** Force failure flag. 
*/ private boolean forceFailure; @@ -198,11 +221,8 @@ public abstract class GridAbstractTest extends TestCase { /** Number of tests. */ private int testCnt; - /** - * - */ - private static final boolean PERSISTENCE_ALLOWED = - IgniteSystemProperties.getBoolean(PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, true); + /** Lazily initialized current test method. */ + private volatile Method currTestMtd; /** * @@ -399,8 +419,8 @@ protected void time(String name, Runnable r) { } /** - * Runs given code in multiple threads and synchronously waits for all threads to complete. - * If any thread failed, exception will be thrown out of this method. + * Runs given code in multiple threads and synchronously waits for all threads to complete. If any thread failed, + * exception will be thrown out of this method. * * @param r Runnable. * @param threadNum Thread number. @@ -501,7 +521,8 @@ protected IgniteInternalFuture multithreadedAsync(Callable c, int threadNu * @throws Exception If failed. * @return Future. */ - protected IgniteInternalFuture multithreadedAsync(Callable c, int threadNum, String threadName) throws Exception { + protected IgniteInternalFuture multithreadedAsync(Callable c, int threadNum, + String threadName) throws Exception { return GridTestUtils.runMultiThreadedAsync(c, threadNum, threadName); } @@ -531,46 +552,21 @@ protected GridTestKernalContext newContext(IgniteConfiguration cfg) throws Ignit } /** - * Called before execution of every test method in class. - * - * @throws Exception If failed. {@link #afterTest()} will be called in this case. - */ - protected void beforeTest() throws Exception { - // No-op. - } - - /** - * Called after execution of every test method in class or - * if {@link #beforeTest()} failed without test method execution. - * - * @throws Exception If failed. - */ - protected void afterTest() throws Exception { - // No-op. - } - - /** - * Called before execution of all test methods in class. - * - * @throws Exception If failed. 
{@link #afterTestsStopped()} will be called in this case. + * Will clean and re-create marshaller directory from scratch. */ - protected void beforeTestsStarted() throws Exception { - // Will clean and re-create marshaller directory from scratch. + private void resolveWorkDirectory() throws Exception { U.resolveWorkDirectory(U.defaultWorkDirectory(), "marshaller", true); U.resolveWorkDirectory(U.defaultWorkDirectory(), "binary_meta", true); } /** - * Called after execution of all test methods in class or - * if {@link #beforeTestsStarted()} failed without execution of any test methods. - * - * @throws Exception If failed. + * {@inheritDoc} + *

    + * Do not annotate with {@code @Before} in overriding methods.

    + * @deprecated This method is deprecated. Instead of invoking or overriding it, it is recommended to make your own + * method with {@code @Before} annotation. */ - protected void afterTestsStopped() throws Exception { - // No-op. - } - - /** {@inheritDoc} */ + @Deprecated @Override protected void setUp() throws Exception { stopGridErr = false; @@ -601,10 +597,12 @@ protected void afterTestsStopped() throws Exception { } if (isFirstTest()) { + sharedStaticIpFinder = new TcpDiscoveryVmIpFinder(true); + info(">>> Starting test class: " + testClassDescription() + " <<<"); - if(isSafeTopology()) - assert G.allGrids().isEmpty() : "Not all Ignite instances stopped before tests execution"; + if (isSafeTopology()) + assert G.allGrids().isEmpty() : "Not all Ignite instances stopped before tests execution:" + G.allGrids(); if (startGrid) { IgniteConfiguration cfg = optimize(getConfiguration()); @@ -618,6 +616,8 @@ protected void afterTestsStopped() throws Exception { if (!jvmIds.isEmpty()) log.info("Next processes of IgniteNodeRunner were killed: " + jvmIds); + resolveWorkDirectory(); + beforeTestsStarted(); } catch (Exception | Error t) { @@ -670,6 +670,34 @@ protected String testClassDescription() { return GridTestUtils.fullSimpleName(getClass()); } + /** + * @return Current test method. + * @throws NoSuchMethodError If method wasn't found for some reason. + */ + @NotNull protected Method currentTestMethod() { + if (currTestMtd == null) { + try { + currTestMtd = getClass().getMethod(getName()); + } + catch (NoSuchMethodException e) { + throw new NoSuchMethodError("Current test method is not found: " + getName()); + } + } + + return currTestMtd; + } + + /** + * Search for the annotation of the given type in current test method. + * + * @param annotationCls Type of annotation to look for. + * @param Annotation type. + * @return Instance of annotation if it is present in test method. 
+ */ + @Nullable protected A currentTestAnnotation(Class annotationCls) { + return currentTestMethod().getAnnotation(annotationCls); + } + /** * @return Started grid. * @throws Exception If anything failed. @@ -736,9 +764,6 @@ protected Ignite startGridsMultiThreaded(int cnt) throws Exception { * @throws Exception If failed. */ protected final Ignite startGridsMultiThreaded(int init, int cnt) throws Exception { - if (isMultiJvm()) - fail("https://issues.apache.org/jira/browse/IGNITE-648"); - assert init >= 0; assert cnt > 0; @@ -839,8 +864,8 @@ protected Ignite startGrid(int idx, GridSpringResourceContext ctx) throws Except * @return Started grid. * @throws Exception If failed. */ - protected Ignite startGrid(String igniteInstanceName) throws Exception { - return startGrid(igniteInstanceName, (GridSpringResourceContext)null); + protected IgniteEx startGrid(String igniteInstanceName) throws Exception { + return (IgniteEx)startGrid(igniteInstanceName, (GridSpringResourceContext)null); } /** @@ -887,11 +912,8 @@ private void validateConfiguration(IgniteConfiguration cfg) { */ protected Ignite startGrid(String igniteInstanceName, IgniteConfiguration cfg, GridSpringResourceContext ctx) throws Exception { - - checkConfiguration(cfg); - if (!isRemoteJvm(igniteInstanceName)) { - startingIgniteInstanceName.set(igniteInstanceName); + IgniteUtils.setCurrentIgniteName(igniteInstanceName); try { String cfgProcClsName = System.getProperty(IGNITE_CFG_PREPROCESSOR_CLS); @@ -927,37 +949,13 @@ protected Ignite startGrid(String igniteInstanceName, IgniteConfiguration cfg, G return node; } finally { - startingIgniteInstanceName.set(null); + IgniteUtils.setCurrentIgniteName(null); } } else return startRemoteGrid(igniteInstanceName, null, ctx); } - /** - * @param cfg Config. 
- */ - protected void checkConfiguration(IgniteConfiguration cfg) { - if (cfg == null) - return; - - if (!PERSISTENCE_ALLOWED) { - String errorMsg = "PERSISTENCE IS NOT ALLOWED IN THIS SUITE, PUT YOUR TEST TO ANOTHER ONE!"; - - DataStorageConfiguration dsCfg = cfg.getDataStorageConfiguration(); - - if (dsCfg != null) { - assertFalse(errorMsg, dsCfg.getDefaultDataRegionConfiguration().isPersistenceEnabled()); - - DataRegionConfiguration[] dataRegionConfigurations = dsCfg.getDataRegionConfigurations(); - - if (dataRegionConfigurations != null) - for (DataRegionConfiguration dataRegionConfiguration : dataRegionConfigurations) - assertFalse(errorMsg, dataRegionConfiguration.isPersistenceEnabled()); - } - } - } - /** * Starts new grid at another JVM with given name. * @@ -983,7 +981,7 @@ protected Ignite startRemoteGrid(String igniteInstanceName, IgniteConfiguration */ protected Ignite startGridWithSpringCtx(String gridName, boolean client, String cfgUrl) throws Exception { IgniteBiTuple, ? extends GridSpringResourceContext> cfgMap = - IgnitionEx.loadConfigurations(cfgUrl); + IgnitionEx.loadConfigurations(cfgUrl); IgniteConfiguration cfg = F.first(cfgMap.get1()); @@ -1054,8 +1052,6 @@ protected Ignite startRemoteGrid(String igniteInstanceName, IgniteConfiguration if (cfg == null) cfg = optimize(getConfiguration(igniteInstanceName)); - checkConfiguration(cfg); - if (locNode != null) { DiscoverySpi discoverySpi = locNode.configuration().getDiscoverySpi(); @@ -1066,7 +1062,7 @@ protected Ignite startRemoteGrid(String igniteInstanceName, IgniteConfiguration m.setAccessible(true); - cfg.setDiscoverySpi((DiscoverySpi) m.invoke(discoverySpi)); + cfg.setDiscoverySpi((DiscoverySpi)m.invoke(discoverySpi)); resetDiscovery = false; } @@ -1094,8 +1090,6 @@ protected List additionalRemoteJvmArgs() { * @throws IgniteCheckedException On error. 
*/ protected IgniteConfiguration optimize(IgniteConfiguration cfg) throws IgniteCheckedException { - checkConfiguration(cfg); - if (cfg.getLocalHost() == null) { if (cfg.getDiscoverySpi() instanceof TcpDiscoverySpi) { cfg.setLocalHost("127.0.0.1"); @@ -1528,15 +1522,13 @@ protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws if (cfg.getDiscoverySpi() instanceof TcpDiscoverySpi) ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setJoinTimeout(getTestTimeout()); - if (isMultiJvm()) - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(LOCAL_IP_FINDER); - return cfg; } /** * Create instance of {@link BinaryMarshaller} suitable for use * without starting a grid upon an empty {@link IgniteConfiguration}. + * * @return Binary marshaller. * @throws IgniteCheckedException if failed. */ @@ -1547,6 +1539,7 @@ protected BinaryMarshaller createStandaloneBinaryMarshaller() throws IgniteCheck /** * Create instance of {@link BinaryMarshaller} suitable for use * without starting a grid upon given {@link IgniteConfiguration}. + * * @return Binary marshaller. * @throws IgniteCheckedException if failed. */ @@ -1634,10 +1627,10 @@ protected String home() throws IgniteCheckedException { /** * This method should be overridden by subclasses to change configuration parameters. * - * @return Grid configuration used for starting of grid. * @param igniteInstanceName Ignite instance name. * @param rsrcs Resources. * @throws Exception If failed. + * @return Grid configuration used for starting of grid. */ @SuppressWarnings("deprecation") protected IgniteConfiguration getConfiguration(String igniteInstanceName, IgniteTestResources rsrcs) @@ -1679,18 +1672,13 @@ protected IgniteConfiguration getConfiguration(String igniteInstanceName, Ignite // Set metrics update interval to 1 second to speed up tests. 
cfg.setMetricsUpdateFrequency(1000); - String mcastAddr = GridTestUtils.getNextMulticastGroup(getClass()); + if (!isMultiJvm()) { + assert sharedStaticIpFinder != null : "Shared static IP finder should be initialized at this point."; - TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder(); - - ipFinder.setAddresses(Collections.singleton("127.0.0.1:" + TcpDiscoverySpi.DFLT_PORT)); - - if (!F.isEmpty(mcastAddr)) { - ipFinder.setMulticastGroup(mcastAddr); - ipFinder.setMulticastPort(GridTestUtils.getNextMulticastPort(getClass())); + discoSpi.setIpFinder(sharedStaticIpFinder); } - - discoSpi.setIpFinder(ipFinder); + else + discoSpi.setIpFinder(LOCAL_IP_FINDER); cfg.setDiscoverySpi(discoSpi); @@ -1710,8 +1698,6 @@ protected IgniteConfiguration getConfiguration(String igniteInstanceName, Ignite cfg.setFailureHandler(getFailureHandler(igniteInstanceName)); - checkConfiguration(cfg); - return cfg; } @@ -1728,11 +1714,14 @@ protected FailureHandler getFailureHandler(String igniteInstanceName) { /** * @return New cache configuration with modified defaults. */ + @SuppressWarnings("unchecked") public static CacheConfiguration defaultCacheConfiguration() { CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME); - cfg.setAtomicityMode(TRANSACTIONAL); - cfg.setNearConfiguration(new NearCacheConfiguration()); + if (MvccFeatureChecker.forcedMvcc()) + cfg.setAtomicityMode(TRANSACTIONAL_SNAPSHOT); + else + cfg.setAtomicityMode(TRANSACTIONAL).setNearConfiguration(new NearCacheConfiguration<>()); cfg.setWriteSynchronizationMode(FULL_SYNC); cfg.setEvictionPolicy(null); @@ -1755,7 +1744,14 @@ protected static ClassLoader getExternalClassLoader() { } } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + *

    + * Do not annotate with {@code @After} in overriding methods.

    + * @deprecated This method is deprecated. Instead of invoking or overriding it, it is recommended to make your own + * method with {@code @After} annotation. + */ + @Deprecated @Override protected void tearDown() throws Exception { long dur = System.currentTimeMillis() - ts; @@ -1994,7 +1990,7 @@ public static R executeOnLocalOrRemoteJvm(Ignite ignite, final TestIgniteCal * @param cache Cache. * @param job Job. */ - public static R executeOnLocalOrRemoteJvm(IgniteCache cache, TestCacheCallable job) { + public static R executeOnLocalOrRemoteJvm(IgniteCache cache, TestCacheCallable job) { Ignite ignite = cache.unwrap(Ignite.class); if (!isMultiJvmObject(ignite)) @@ -2053,7 +2049,7 @@ public static R executeRemotely(IgniteCacheProcessProxy cache, @Override public R call() throws Exception { Ignite ignite = Ignition.ignite(id); - IgniteCache cache = ignite.cache(cacheName); + IgniteCache cache = ignite.cache(cacheName); return job.call(ignite, cache); } @@ -2082,14 +2078,16 @@ protected synchronized TestCounters getTestCounters(IgniteConfiguration cfg) thr } /** {@inheritDoc} */ - @SuppressWarnings({"ProhibitedExceptionDeclared"}) - @Override protected void runTest() throws Throwable { + @Override void runTest(Statement testRoutine) throws Throwable { final AtomicReference ex = new AtomicReference<>(); Thread runner = new IgniteThread(getTestIgniteInstanceName(), "test-runner", new Runnable() { @Override public void run() { try { - runTestInternal(); + if (forceFailure) + fail("Forced failure: " + forceFailureMsg); + + testRoutine.evaluate(); } catch (Throwable e) { IgniteClosure hnd = errorHandler(); @@ -2106,7 +2104,7 @@ protected synchronized TestCounters getTestCounters(IgniteConfiguration cfg) thr if (runner.isAlive()) { U.error(log, "Test has been timed out and will be interrupted (threads dump will be taken before interruption) [" + - "test=" + getName() + ", timeout=" + getTestTimeout() + ']'); + "test=" + getName() + ", timeout=" + getTestTimeout() + 
']'); List nodes = IgnitionEx.allGridsx(); @@ -2124,7 +2122,7 @@ protected synchronized TestCounters getTestCounters(IgniteConfiguration cfg) thr U.join(runner, log); throw new TimeoutException("Test has been timed out [test=" + getName() + ", timeout=" + - getTestTimeout() + ']' ); + getTestTimeout() + ']'); } Throwable t = ex.get(); @@ -2139,8 +2137,7 @@ protected synchronized TestCounters getTestCounters(IgniteConfiguration cfg) thr } /** - * @return Error handler to process all uncaught exceptions of the test run - * ({@code null} by default). + * @return Error handler to process all uncaught exceptions of the test run ({@code null} by default). */ protected IgniteClosure errorHandler() { return null; @@ -2166,17 +2163,6 @@ public void forceTestCount(int cnt) { forceTestCnt = true; } - /** - * @throws Throwable If failed. - */ - @SuppressWarnings({"ProhibitedExceptionDeclared"}) - private void runTestInternal() throws Throwable { - if (forceFailure) - fail("Forced failure: " + forceFailureMsg); - else - super.runTest(); - } - /** * @return Test case timeout. */ @@ -2199,7 +2185,7 @@ private long getDefaultTestTimeout() { /** * @param store Store. */ - protected Factory singletonFactory(T store) { + protected static Factory singletonFactory(T store) { return notSerializableProxy(new FactoryBuilder.SingletonFactory<>(store), Factory.class); } @@ -2207,7 +2193,7 @@ protected Factory singletonFactory(T store) { * @param obj Object that should be wrap proxy * @return Created proxy. */ - protected T notSerializableProxy(final T obj) { + protected static T notSerializableProxy(final T obj) { Class cls = (Class)obj.getClass(); Class[] interfaces = (Class[])cls.getInterfaces(); @@ -2225,7 +2211,7 @@ protected T notSerializableProxy(final T obj) { * @param itfClses Interfaces that should be implemented by proxy (vararg parameter) * @return Created proxy. */ - protected T notSerializableProxy(final T obj, Class itfCls, Class ... 
itfClses) { + protected static T notSerializableProxy(final T obj, Class itfCls, Class... itfClses) { Class[] itfs = Arrays.copyOf(itfClses, itfClses.length + 3); itfs[itfClses.length] = itfCls; @@ -2248,7 +2234,7 @@ protected T notSerializableProxy(final T obj, Class itfCls, Class * @param obj Object that must not be changed after serialization/deserialization. * @return An object to return from writeReplace() */ - private Object supressSerialization(Object obj) { + private static Object supressSerialization(Object obj) { SerializableProxy res = new SerializableProxy(UUID.randomUUID()); serializedObj.put(res.uuid, obj); @@ -2286,7 +2272,7 @@ private void awaitTopologyChange() throws IgniteInterruptedCheckedException { AffinityTopologyVersion exchVer = ctx.cache().context().exchange().readyAffinityVersion(); if (!topVer.equals(exchVer)) { - info("Topology version mismatch [node=" + g.name() + + info("Topology version mismatch [node=" + g.name() + ", exchVer=" + exchVer + ", topVer=" + topVer + ']'); @@ -2301,6 +2287,7 @@ private void awaitTopologyChange() throws IgniteInterruptedCheckedException { } } } + /** * @param expSize Expected nodes number. * @throws Exception If failed. @@ -2316,7 +2303,7 @@ protected void waitForTopology(final int expSize) throws Exception { return false; } - for (Ignite node: nodes) { + for (Ignite node : nodes) { try { IgniteFuture reconnectFut = node.cluster().clientReconnectFuture(); @@ -2354,7 +2341,7 @@ public static void doSleep(long millis) { U.sleep(millis); } catch (Exception e) { - throw new IgniteException(); + throw new IgniteException(e); } } @@ -2363,7 +2350,7 @@ public static void doSleep(long millis) { * @param cacheName Cache name. * @return Cache group ID for given cache name. 
*/ - protected final int groupIdForCache(Ignite node, String cacheName) { + protected static final int groupIdForCache(Ignite node, String cacheName) { for (CacheGroupContext grp : ((IgniteKernal)node).context().cache().cacheGroups()) { if (grp.hasCache(cacheName)) return grp.groupId(); @@ -2484,6 +2471,12 @@ public ExecuteRemotelyTask(TestIgniteIdxCallable job, int idx) { /** * Test counters. + * + * TODO IGNITE-10179 Try to make this class go away since its primary (possibly even only) purpose appears to + * support methods isFirstTest() and isLastTest() which in turn look like JUnit 3-specific workaround for + * functionality that is natively available in JUnit 4 via BeforeClass and AfterClass annotations. Along the way, + * find out if this will allow to get rid of runSerializer lock which is introduced with sole purpose to + * maintain integrity of TestCounters. */ protected class TestCounters { /** */ @@ -2600,19 +2593,16 @@ public int getNumberOfTests() { if (this0.forceTestCnt) cnt = this0.testCnt; - else { - cnt = 0; - - for (Method m : this0.getClass().getMethods()) - if (m.getName().startsWith("test") && Modifier.isPublic(m.getModifiers()) && m.getParameterCount() == 0) - cnt++; - } + else + cnt = (int)new TestClass(this0.getClass()) + .getAnnotatedMethods(Test.class) + .stream() + .filter(method -> method.getAnnotation(Ignore.class) == null) + .count(); numOfTests = cnt; } - countTestCases(); - return numOfTests; } } diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteConfigVariationsAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteConfigVariationsAbstractTest.java index 0e454a366473d..2bbead2c2cca0 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteConfigVariationsAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteConfigVariationsAbstractTest.java @@ -38,6 +38,7 @@ import 
org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.testframework.configvariations.VariationsTestsConfig; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.runners.model.Statement; /** * Common abstract test for Ignite tests based on configurations variations. @@ -55,25 +56,38 @@ public abstract class IgniteConfigVariationsAbstractTest extends GridCommonAbstr /** */ private static final File workDir = new File(U.getIgniteHome() + File.separator + "workOfConfigVariationsTests"); - /** */ - protected VariationsTestsConfig testsCfg; + /** Dummy initial stub to just let people launch test classes not from suite. */ + protected VariationsTestsConfig testsCfg = new VariationsTestsConfig(null, "Dummy config", false, null, 1, false); /** */ protected volatile DataMode dataMode = DataMode.PLANE_OBJECT; + /** See {@link IgniteConfigVariationsAbstractTest#injectTestsConfiguration} */ + private static VariationsTestsConfig testsCfgInjected; + + /** + * @param testsCfgInjected Tests configuration. + */ + public static void injectTestsConfiguration(VariationsTestsConfig testsCfgInjected) { + IgniteConfigVariationsAbstractTest.testsCfgInjected = testsCfgInjected; + } + /** - * @param testsCfg Tests configuration. + * {@inheritDoc} + *

    + * IMPL NOTE when this override was introduced, the alternative was to replace the multiple usages of the + * instance member {@code testsCfg} splattered all over the project with usages of the static member + * {@code testsCfgInjected} - a cumbersome, risky and potentially redundant change, given the chance of a later + * migration to JUnit 5 and further rework to use dynamic test parameters, which would likely cause removal of + * the static member.

    */ - public void setTestsConfiguration(VariationsTestsConfig testsCfg) { - assert this.testsCfg == null : "Test config must be set only once [oldTestCfg=" + this.testsCfg - + ", newTestCfg=" + testsCfg + "]"; + @Override protected void runTestCase(Statement testRoutine) throws Throwable { + testsCfg = testsCfgInjected; - this.testsCfg = testsCfg; + super.runTestCase(testRoutine); } /** {@inheritDoc} */ - @Override - protected boolean isSafeTopology() { + @Override protected boolean isSafeTopology() { return false; } @@ -158,6 +172,8 @@ private void memoryUsage() { /** {@inheritDoc} */ @Override protected String testDescription() { + assert testsCfg != null: "Tests should be run using test suite."; + return super.testDescription() + '-' + testsCfg.description() + '-' + testsCfg.gridCount() + "-node(s)"; } diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteTestResources.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteTestResources.java index 5fef8bc52c3ee..59e29941331d7 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteTestResources.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteTestResources.java @@ -78,6 +78,13 @@ public class IgniteTestResources { /** */ private GridResourceProcessor rsrcProc; + /** + * @return Default MBean server or {@code null} if {@code IGNITE_MBEANS_DISABLED} is configured. + */ + @Nullable private static MBeanServer prepareMBeanServer() { + return U.IGNITE_MBEANS_DISABLED ? null : ManagementFactory.getPlatformMBeanServer(); + } + /** * @throws IgniteCheckedException If failed. 
*/ @@ -87,7 +94,8 @@ public IgniteTestResources() throws IgniteCheckedException { else log = rootLog.getLogger(getClass()); - this.jmx = ManagementFactory.getPlatformMBeanServer(); + this.jmx = prepareMBeanServer(); + this.rsrcProc = new GridResourceProcessor(new GridTestKernalContext(this.log)); } @@ -97,7 +105,7 @@ public IgniteTestResources() throws IgniteCheckedException { public IgniteTestResources(IgniteConfiguration cfg) throws IgniteCheckedException { this.cfg = cfg; this.log = rootLog.getLogger(getClass()); - this.jmx = ManagementFactory.getPlatformMBeanServer(); + this.jmx = prepareMBeanServer(); this.rsrcProc = new GridResourceProcessor(new GridTestKernalContext(this.log, this.cfg)); } @@ -119,7 +127,7 @@ public IgniteTestResources(IgniteLogger log) throws IgniteCheckedException { assert log != null; this.log = log.getLogger(getClass()); - this.jmx = ManagementFactory.getPlatformMBeanServer(); + this.jmx = prepareMBeanServer(); this.rsrcProc = new GridResourceProcessor(new GridTestKernalContext(this.log)); } diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/JUnit3TestLegacySupport.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/JUnit3TestLegacySupport.java new file mode 100644 index 0000000000000..709b39c6d6e2d --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/JUnit3TestLegacySupport.java @@ -0,0 +1,134 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testframework.junits; + +import junit.framework.Assert; // IMPL NOTE some old tests expect inherited deprecated assertions. +import org.junit.Rule; +import org.junit.rules.TestName; +import org.junit.runners.model.Statement; + +/** + * Supports compatibility with old tests that expect specific threading behavior of JUnit 3 TestCase class, + * inherited deprecated assertions and specific old interface for GridTestUtils. + */ +@SuppressWarnings({"TransientFieldInNonSerializableClass", "ExtendsUtilityClass", "deprecation"}) +public abstract class JUnit3TestLegacySupport extends Assert { + /** + * Supports obtaining test name for JUnit4 framework in a way that makes it available for legacy methods invoked + * from {@code runTest(Statement)}. + */ + @Rule public transient TestName nameRule = new TestName(); + + /** + * Gets the name of the currently executed test case. + * + * @return Name of the currently executed test case. + */ + public String getName() { + return nameRule.getMethodName(); + } + + /** This method is called before a test is executed. */ + abstract void setUp() throws Exception; + + /** Runs test code in between {@code setUp} and {@code tearDown}. */ + abstract void runTest(Statement testRoutine) throws Throwable; + + /** This method is called after a test is executed. */ + abstract void tearDown() throws Exception; + + /** + * Runs the bare test sequence like in JUnit 3 class TestCase. 
+ * + * @throws Throwable if any exception is thrown + */ + protected void runTestCase(Statement testRoutine) throws Throwable { + Throwable e = null; + setUp(); + try { + runTest(testRoutine); + } catch (Throwable running) { + e = running; + } finally { + try { + tearDown(); + } catch (Throwable tearingDown) { + if (e == null) e = tearingDown; + } + } + if (e != null) throw e; + } + + /** + * Called before execution of every test method in class. + *
<p> + * Do not annotate with Before in overriding methods.</p>
    + * + * @throws Exception If failed. {@link #afterTest()} will be called in this case. + * @deprecated This method is deprecated. Instead of invoking or overriding it, it is recommended to make your own + * method with {@code @Before} annotation. + */ + @Deprecated + protected void beforeTest() throws Exception { + // No-op. + } + + /** + * Called after execution of every test method in class or if {@link #beforeTest()} failed without test method + * execution. + *
<p> + * Do not annotate with After in overriding methods.</p>
    + * + * @throws Exception If failed. + * @deprecated This method is deprecated. Instead of invoking or overriding it, it is recommended to make your own + * method with {@code @After} annotation. + */ + @Deprecated + protected void afterTest() throws Exception { + // No-op. + } + + /** + * Called before execution of all test methods in class. + *
<p> + * Do not annotate with BeforeClass in overriding methods.</p>
    + * + * @throws Exception If failed. {@link #afterTestsStopped()} will be called in this case. + * @deprecated This method is deprecated. Instead of invoking or overriding it, it is recommended to make your own + * method with {@code @BeforeClass} annotation. + */ + @Deprecated + protected void beforeTestsStarted() throws Exception { + // No-op. + } + + /** + * Called after execution of all test methods in class or + * if {@link #beforeTestsStarted()} failed without execution of any test methods. + *
<p> + * Do not annotate with AfterClass in overriding methods.</p>
    + * + * @throws Exception If failed. + * @deprecated This method is deprecated. Instead of invoking or overriding it, it is recommended to make your own + * method with {@code @AfterClass} annotation. + */ + @Deprecated + protected void afterTestsStopped() throws Exception { + // No-op. + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/cache/GridAbstractCacheStoreSelfTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/cache/GridAbstractCacheStoreSelfTest.java index 22d93763abf8f..fd8ad188f5b91 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/cache/GridAbstractCacheStoreSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/cache/GridAbstractCacheStoreSelfTest.java @@ -45,10 +45,14 @@ import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionState; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Abstract cache store test. */ +@RunWith(JUnit4.class) public abstract class GridAbstractCacheStoreSelfTest> extends GridCommonAbstractTest { /** */ @@ -72,6 +76,7 @@ protected GridAbstractCacheStoreSelfTest() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStore() throws Exception { // Create dummy transaction Transaction tx = new DummyTx(); @@ -105,6 +110,7 @@ public void testStore() throws Exception { /** * @throws IgniteCheckedException if failed. */ + @Test public void testRollback() throws IgniteCheckedException { Transaction tx = new DummyTx(); @@ -190,6 +196,7 @@ public void testRollback() throws IgniteCheckedException { /** * @throws IgniteCheckedException if failed. 
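The `runTestCase(Statement)` method added in `JUnit3TestLegacySupport` above reproduces JUnit 3's bare test sequence: `setUp` always runs first, `tearDown` always runs even when the test body throws, and the body's exception takes precedence over any exception thrown during teardown. A minimal standalone sketch of that ordering and exception-priority logic (class and method names here are illustrative, not Ignite API):

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative stand-in for the JUnit 3 style bare test sequence. */
class LegacySequence {
    /** Records lifecycle calls in order, for inspection. */
    final List<String> calls = new ArrayList<>();

    void setUp() { calls.add("setUp"); }

    void tearDown() { calls.add("tearDown"); }

    /**
     * Runs the body between setUp and tearDown. tearDown runs even when the
     * body throws, and the body's exception wins over tearDown's.
     */
    void run(Runnable body) throws Throwable {
        Throwable first = null;

        setUp();
        try {
            body.run();
        }
        catch (Throwable running) {
            first = running;
        }
        finally {
            try {
                tearDown();
            }
            catch (Throwable tearingDown) {
                if (first == null)
                    first = tearingDown;
            }
        }

        if (first != null)
            throw first;
    }
}
```

This is why tests migrated to JUnit 4 must not also annotate the legacy methods with `@Before`/`@After`: the legacy runner already invokes them in this fixed order, and double registration would run them twice.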
*/ + @Test public void testAllOpsWithTXNoCommit() throws IgniteCheckedException { doTestAllOps(new DummyTx(), false); } @@ -197,6 +204,7 @@ public void testAllOpsWithTXNoCommit() throws IgniteCheckedException { /** * @throws IgniteCheckedException if failed. */ + @Test public void testAllOpsWithTXCommit() throws IgniteCheckedException { doTestAllOps(new DummyTx(), true); } @@ -204,6 +212,7 @@ public void testAllOpsWithTXCommit() throws IgniteCheckedException { /** * @throws IgniteCheckedException if failed. */ + @Test public void testAllOpsWithoutTX() throws IgniteCheckedException { doTestAllOps(null, false); } @@ -306,6 +315,7 @@ private void doTestAllOps(@Nullable Transaction tx, boolean commit) { /** * @throws Exception If failed. */ + @Test public void testSimpleMultithreading() throws Exception { final Random rnd = new Random(); diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractExamplesTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractExamplesTest.java index 1e6c84ee1f27a..3c305fac291ab 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractExamplesTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractExamplesTest.java @@ -75,4 +75,4 @@ protected final void startRemoteNodes() throws Exception { protected String defaultConfig() { return DFLT_CFG; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractLifecycleAwareSelfTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractLifecycleAwareSelfTest.java index 35c0b069b1d14..703d59bec051d 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractLifecycleAwareSelfTest.java +++ 
b/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridAbstractLifecycleAwareSelfTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.Ignite; import org.apache.ignite.lifecycle.LifecycleAware; import org.apache.ignite.resources.CacheNameResource; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base class for tests against {@link LifecycleAware} support. */ +@RunWith(JUnit4.class) public abstract class GridAbstractLifecycleAwareSelfTest extends GridCommonAbstractTest { /** */ protected Collection lifecycleAwares = new ArrayList<>(); @@ -107,6 +111,7 @@ protected void afterGridStart(Ignite ignite) { /** * @throws Exception If failed. */ + @Test public void testLifecycleAware() throws Exception { Ignite ignite = startGrid(); @@ -131,4 +136,4 @@ public void testLifecycleAware() throws Exception { lifecycleAwares.clear(); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridCommonAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridCommonAbstractTest.java index f3e2b27ceab28..62434953fc943 100755 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridCommonAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/common/GridCommonAbstractTest.java @@ -47,6 +47,7 @@ import org.apache.ignite.IgniteLogger; import org.apache.ignite.IgniteMessaging; import org.apache.ignite.Ignition; +import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CachePeekMode; import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.cache.affinity.AffinityFunction; @@ -316,7 +317,9 @@ protected static GridDhtCacheAdapter dht(IgniteCache cache) { * @return DHT cache. 
*/ protected GridDhtCacheAdapter dht() { - return this.near().dht(); + GridCacheAdapter internalCache = ((IgniteKernal)grid()).internalCache(DEFAULT_CACHE_NAME); + + return internalCache.isNear() ? internalCache.context().near().dht() : internalCache.context().dht(); } /** @@ -324,7 +327,9 @@ protected GridDhtCacheAdapter dht() { * @return DHT cache. */ protected GridDhtCacheAdapter dht(int idx) { - return this.near(idx).dht(); + GridCacheAdapter internalCache = ((IgniteKernal)grid(idx)).internalCache(DEFAULT_CACHE_NAME); + + return internalCache.isNear() ? internalCache.context().near().dht() : internalCache.context().dht(); } /** @@ -333,7 +338,9 @@ protected GridDhtCacheAdapter dht(int idx) { * @return DHT cache. */ protected GridDhtCacheAdapter dht(int idx, String cache) { - return this.near(idx, cache).dht(); + GridCacheAdapter internalCache = ((IgniteKernal)grid(idx)).internalCache(cache); + + return internalCache.isNear() ? internalCache.context().near().dht() : internalCache.context().dht(); } /** @@ -394,6 +401,14 @@ protected static GridDhtColocatedCache colocated(IgniteCache return ((IgniteKernal)cache.unwrap(Ignite.class)).internalCache(cache.getName()).context().colocated(); } + /** + * @param cache Ignite cache. + * @return CacheAtomicityMode for given cache. + */ + public static CacheAtomicityMode atomicityMode(IgniteCache cache) { + return ((CacheConfiguration)cache.getConfiguration(CacheConfiguration.class)).getAtomicityMode(); + } + /** * @param cache Cache. * @param keys Keys. @@ -481,7 +496,7 @@ protected GridNearCacheAdapter near(int idx, String cache) { } /** {@inheritDoc} */ - @Override protected final void setUp() throws Exception { + @Override public void setUp() throws Exception { // Disable SSL hostname verifier. 
HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() { @Override public boolean verify(String s, SSLSession sslSes) { @@ -495,7 +510,7 @@ protected GridNearCacheAdapter near(int idx, String cache) { } /** {@inheritDoc} */ - @Override protected final void tearDown() throws Exception { + @Override public void tearDown() throws Exception { getTestCounters().incrementStopped(); super.tearDown(); @@ -670,7 +685,7 @@ protected void awaitPartitionMapExchange( if (readyVer.topologyVersion() > 0 && c.context().started()) { // Must map on updated version of topology. - Collection affNodes = + List affNodes = dht.context().affinity().assignment(readyVer).idealAssignment().get(p); int affNodesCnt = affNodes.size(); @@ -684,8 +699,13 @@ protected void awaitPartitionMapExchange( GridDhtLocalPartition loc = top.localPartition(p, readyVer, false); + boolean notPrimary = !affNodes.isEmpty() && + !exchMgr.rebalanceTopologyVersion().equals(AffinityTopologyVersion.NONE) && + !affNodes.get(0).equals(dht.context().affinity().primaryByPartition(p, readyVer)); + if (affNodesCnt != ownerNodesCnt || !affNodes.containsAll(owners) || - (waitEvicts && loc != null && loc.state() != GridDhtPartitionState.OWNING)) { + (waitEvicts && loc != null && loc.state() != GridDhtPartitionState.OWNING) || + notPrimary) { if (i % 50 == 0) LT.warn(log(), "Waiting for topology map update [" + "igniteInstanceName=" + g.name() + @@ -1384,7 +1404,7 @@ protected static V dhtPeek(IgniteCache cache, K key) throws IgniteC * @param key Key. */ protected static V localPeek(GridCacheAdapter cache, K key) throws IgniteCheckedException { - return cache.localPeek(key, null, null); + return cache.localPeek(key, null); } /** @@ -1392,7 +1412,7 @@ protected static V localPeek(GridCacheAdapter cache, K key) throws * @param key Key. 
*/ protected static V localPeekOnHeap(GridCacheAdapter cache, K key) throws IgniteCheckedException { - return cache.localPeek(key, new CachePeekMode[] {CachePeekMode.ONHEAP}, null); + return cache.localPeek(key, new CachePeekMode[] {CachePeekMode.ONHEAP}); } /** @@ -1996,17 +2016,26 @@ protected void checkFutures() { final Collection> futs = ig.context().cache().context().mvcc().activeFutures(); - for (GridCacheFuture fut : futs) - log.info("Waiting for future: " + fut); + boolean hasFutures = false; + + for (GridCacheFuture fut : futs) { + if (!fut.isDone()) { + log.error("Expecting no active future [node=" + ig.localNode().id() + ", fut=" + fut + ']'); + + hasFutures = true; + } + } - assertTrue("Expecting no active futures: node=" + ig.localNode().id(), futs.isEmpty()); + if (hasFutures) + fail("Some mvcc futures are not finished"); Collection txs = ig.context().cache().context().tm().activeTransactions(); for (IgniteInternalTx tx : txs) - log.info("Waiting for tx: " + tx); + log.error("Expecting no active transaction [node=" + ig.localNode().id() + ", tx=" + tx + ']'); - assertTrue("Expecting no active transactions: node=" + ig.localNode().id(), txs.isEmpty()); + if (!txs.isEmpty()) + fail("Some transaction are not finished"); } } } diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteCacheProcessProxy.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteCacheProcessProxy.java index f81d103712c2e..ecf119b098b5f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteCacheProcessProxy.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteCacheProcessProxy.java @@ -687,6 +687,21 @@ private IgniteCacheProcessProxy(String name, boolean async, ExpiryPolicy plc, Ig throw new UnsupportedOperationException("Method should be supported."); } + /** {@inheritDoc} */ + @Override public void preloadPartition(int partId) { + 
throw new UnsupportedOperationException("Method should be supported."); + } + + /** {@inheritDoc} */ + @Override public IgniteFuture preloadPartitionAsync(int partId) { + throw new UnsupportedOperationException("Method should be supported."); + } + + /** {@inheritDoc} */ + @Override public boolean localPreloadPartition(int partition) { + throw new UnsupportedOperationException("Method should be supported."); + } + /** {@inheritDoc} */ @Override public IgniteCache withAllowAtomicOpsInTx() { return this; diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteProcessProxy.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteProcessProxy.java index b6f7f8945adc4..ccd4e9eaa4c58 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteProcessProxy.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteProcessProxy.java @@ -31,6 +31,7 @@ import javax.cache.CacheException; import org.apache.ignite.DataRegionMetrics; import org.apache.ignite.DataRegionMetricsAdapter; +import org.apache.ignite.DataStorageMetrics; import org.apache.ignite.DataStorageMetricsAdapter; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteAtomicLong; @@ -54,8 +55,8 @@ import org.apache.ignite.IgniteSemaphore; import org.apache.ignite.IgniteServices; import org.apache.ignite.IgniteSet; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.IgniteTransactions; -import org.apache.ignite.DataStorageMetrics; import org.apache.ignite.Ignition; import org.apache.ignite.MemoryMetrics; import org.apache.ignite.PersistenceMetrics; @@ -265,7 +266,8 @@ protected Collection filteredJvmArgs() throws Exception { (marsh != null && arg.startsWith("-D" + IgniteTestResources.MARSH_CLASS_NAME)) || arg.startsWith("--add-opens") || arg.startsWith("--add-exports") || arg.startsWith("--add-modules") || arg.startsWith("--patch-module") || 
arg.startsWith("--add-reads") || - arg.startsWith("-XX:+IgnoreUnrecognizedVMOptions")) + arg.startsWith("-XX:+IgnoreUnrecognizedVMOptions") || + arg.startsWith(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS)) filteredJvmArgs.add(arg); } diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/junits/spi/GridSpiAbstractTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/junits/spi/GridSpiAbstractTest.java index 101d0163d469f..426c5816e7af6 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/junits/spi/GridSpiAbstractTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/junits/spi/GridSpiAbstractTest.java @@ -126,7 +126,7 @@ private void resetTestData() throws Exception { /** * @throws Exception If failed. */ - @Override protected final void setUp() throws Exception { + @Override public final void setUp() throws Exception { // Need to change classloader here, although it also handled in the parent class // the current test initialisation procedure doesn't allow us to setUp the parent first. cl = Thread.currentThread().getContextClassLoader(); @@ -490,7 +490,7 @@ protected UUID getNodeId() throws Exception { /** * @throws Exception If failed. 
*/ - @Override protected final void tearDown() throws Exception { + @Override public final void tearDown() throws Exception { getTestCounters().incrementStopped(); boolean wasLast = isLastTest(); @@ -730,4 +730,4 @@ private static class SecurityPermissionSetImpl implements SecurityPermissionSet return null; } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/test/ConfigVariationsTestSuiteBuilderTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/test/ConfigVariationsTestSuiteBuilderTest.java index 93f0168bd3573..631f0337dcd2f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/test/ConfigVariationsTestSuiteBuilderTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/test/ConfigVariationsTestSuiteBuilderTest.java @@ -18,22 +18,23 @@ package org.apache.ignite.testframework.test; import java.util.concurrent.atomic.AtomicInteger; -import junit.framework.TestCase; import junit.framework.TestSuite; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; import org.apache.ignite.testframework.junits.IgniteConfigVariationsAbstractTest; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; /** * */ -public class ConfigVariationsTestSuiteBuilderTest extends TestCase { - /** - * @throws Exception If failed. 
- */ - public void testDefaults() throws Exception { +public class ConfigVariationsTestSuiteBuilderTest { + /** */ + @Test + public void testDefaults() { TestSuite dfltSuite = new ConfigVariationsTestSuiteBuilder("testSuite", NoopTest.class).build(); assertEquals(4, dfltSuite.countTestCases()); @@ -55,11 +56,10 @@ public void testDefaults() throws Exception { assertEquals(4 * 4 * 2 * 3, dfltCacheSuite.countTestCases()); } - /** - * @throws Exception If failed. - */ + /** */ @SuppressWarnings("serial") - public void testIgniteConfigFilter() throws Exception { + @Test + public void testIgniteConfigFilter() { TestSuite dfltSuite = new ConfigVariationsTestSuiteBuilder("testSuite", NoopTest.class).build(); final AtomicInteger cnt = new AtomicInteger(); @@ -75,11 +75,10 @@ public void testIgniteConfigFilter() throws Exception { assertEquals(dfltSuite.countTestCases() / 2, filteredSuite.countTestCases()); } - /** - * @throws Exception If failed. - */ + /** */ @SuppressWarnings("serial") - public void testCacheConfigFilter() throws Exception { + @Test + public void testCacheConfigFilter() { TestSuite dfltSuite = new ConfigVariationsTestSuiteBuilder("testSuite", NoopTest.class) .withBasicCacheParams() .build(); @@ -98,14 +97,11 @@ public void testCacheConfigFilter() throws Exception { assertEquals(dfltSuite.countTestCases() / 2, filteredSuite.countTestCases()); } - /** - * - */ - private static class NoopTest extends IgniteConfigVariationsAbstractTest { - /** - * @throws Exception If failed. - */ - public void test1() throws Exception { + /** */ + public static class NoopTest extends IgniteConfigVariationsAbstractTest { + /** */ + @Test + public void test1() { // No-op. 
} } diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/test/ListeningTestLoggerTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/test/ListeningTestLoggerTest.java new file mode 100644 index 0000000000000..0be0a00398442 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testframework/test/ListeningTestLoggerTest.java @@ -0,0 +1,438 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testframework.test; + +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.regex.Pattern; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteVersionUtils; +import org.apache.ignite.logger.NullLogger; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.ListeningTestLogger; +import org.apache.ignite.testframework.LogListener; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +import static org.apache.ignite.testframework.GridTestUtils.assertThrowsWithCause; + +/** + * Test. 
+ */ +@SuppressWarnings("ThrowableNotThrown") +public class ListeningTestLoggerTest extends GridCommonAbstractTest { + /** */ + private final ListeningTestLogger log = new ListeningTestLogger(false, super.log); + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setGridLogger(log); + + return cfg; + } + + /** + * Basic example of using listening logger - checks that all running instances of Ignite print product version. + * + * @throws Exception If failed. + */ + @Test + public void testIgniteVersionLogging() throws Exception { + int gridCnt = 4; + + LogListener lsnr = LogListener.matches(IgniteVersionUtils.VER_STR).atLeast(gridCnt).build(); + + log.registerListener(lsnr); + + try { + startGridsMultiThreaded(gridCnt); + + assertTrue(lsnr.check()); + } finally { + stopAllGrids(); + } + } + + /** + * Checks that re-register works fine. + */ + @Test + public void testUnregister() { + String msg = "catch me"; + + LogListener lsnr1 = LogListener.matches(msg).times(1).build(); + LogListener lsnr2 = LogListener.matches(msg).times(2).build(); + + log.registerListener(lsnr1); + log.registerListener(lsnr2); + + log.info(msg); + + log.unregisterListener(lsnr1); + + log.info(msg); + + assertTrue(lsnr1.check()); + assertTrue(lsnr2.check()); + + // Repeat these steps to ensure that the state is cleared during registration. + log.registerListener(lsnr1); + log.registerListener(lsnr2); + + log.info(msg); + + log.unregisterListener(lsnr1); + + log.info(msg); + + assertTrue(lsnr1.check()); + assertTrue(lsnr2.check()); + } + + /** + * Ensures that listener will be re-registered only once. 
+ */ + @Test + public void testRegister() { + AtomicInteger cntr = new AtomicInteger(); + + LogListener lsnr3 = LogListener.matches(m -> cntr.incrementAndGet() > 0).build(); + + log.registerListener(lsnr3); + log.registerListener(lsnr3); + + log.info("1"); + + assertEquals(1, cntr.get()); + } + + /** + * Checks basic API. + */ + @Test + public void testBasicApi() { + LogListener lsnr = LogListener.matches(Pattern.compile("a[a-z]+")) + .andMatches("Exception message.").andMatches(".java:").build(); + + log.registerListener(lsnr); + + log.info("Something new."); + + assertFalse(lsnr.check()); + + log.error("There was an error.", new RuntimeException("Exception message.")); + + assertTrue(lsnr.check()); + } + + /** + * Checks blank lines matching. + */ + @Test + public void testEmptyLine() { + LogListener emptyLineLsnr = LogListener.matches("").build(); + + log.registerListener(emptyLineLsnr); + + log.info(""); + + assertTrue(emptyLineLsnr.check()); + } + + /** */ + @Test + public void testPredicateExceptions() { + LogListener lsnr = LogListener.matches(msg -> { + assertFalse(msg.contains("Target")); + + return true; + }).build(); + + log.registerListener(lsnr); + + log.info("Ignored message."); + log.info("Target message."); + + assertThrowsWithCause(lsnr::check, AssertionError.class); + + // Check custom exception. + LogListener lsnr2 = LogListener.matches(msg -> { + throw new IllegalStateException("Illegal state"); + }).build(); + + log.registerListener(lsnr2); + + log.info("1"); + log.info("2"); + + assertThrowsWithCause(lsnr2::check, IllegalStateException.class); + } + + /** + * Validates listener range definition. 
+ */ + @Test + public void testRange() { + String msg = "range"; + + LogListener lsnr2 = LogListener.matches(msg).times(2).build(); + LogListener lsnr2_3 = LogListener.matches(msg).atLeast(2).atMost(3).build(); + + log.registerListener(lsnr2); + log.registerListener(lsnr2_3); + + log.info(msg); + log.info(msg); + + assertTrue(lsnr2.check()); + assertTrue(lsnr2_3.check()); + + log.info(msg); + + assertFalse(lsnr2.check()); + + assertTrue(lsnr2_3.check()); + + log.info(msg); + + assertFalse(lsnr2_3.check()); + } + + /** + * Checks that substring was not found in the log messages. + */ + @Test + public void testNotPresent() { + String msg = "vacuum"; + + LogListener notPresent = LogListener.matches(msg).times(0).build(); + + log.registerListener(notPresent); + + log.info("1"); + + assertTrue(notPresent.check()); + + log.info(msg); + + assertFalse(notPresent.check()); + } + + /** + * Checks that the substring is found at least twice. + */ + @Test + public void testAtLeast() { + String msg = "at least"; + + LogListener atLeast2 = LogListener.matches(msg).atLeast(2).build(); + + log.registerListener(atLeast2); + + log.info(msg); + + assertFalse(atLeast2.check()); + + log.info(msg); + + assertTrue(atLeast2.check()); + } + + /** + * Checks that the substring is found no more than twice. + */ + @Test + public void testAtMost() { + String msg = "at most"; + + LogListener atMost2 = LogListener.matches(msg).atMost(2).build(); + + log.registerListener(atMost2); + + assertTrue(atMost2.check()); + + log.info(msg); + log.info(msg); + + assertTrue(atMost2.check()); + + log.info(msg); + + assertFalse(atMost2.check()); + } + + /** + * Checks that only last value is taken into account. 
+ */ + @Test + public void testMultiRange() { + String msg = "multi range"; + + LogListener atMost3 = LogListener.matches(msg).times(1).times(2).atMost(3).build(); + + log.registerListener(atMost3); + + for (int i = 0; i < 6; i++) { + if (i < 4) + assertTrue(atMost3.check()); + else + assertFalse(atMost3.check()); + + log.info(msg); + } + + LogListener lsnr4 = LogListener.matches(msg).atLeast(2).atMost(3).times(4).build(); + + log.registerListener(lsnr4); + + for (int i = 1; i < 6; i++) { + log.info(msg); + + if (i == 4) + assertTrue(lsnr4.check()); + else + assertFalse(lsnr4.check()); + } + } + + /** + * Checks that matches are counted for each message. + */ + @Test + public void testMatchesPerMessage() { + LogListener lsnr = LogListener.matches("aa").times(4).build(); + + log.registerListener(lsnr); + + log.info("aabaab"); + log.info("abaaab"); + + assertTrue(lsnr.check()); + + LogListener newLineLsnr = LogListener.matches("\n").times(5).build(); + + log.registerListener(newLineLsnr); + + log.info("\n1\n2\n\n3\n"); + + assertTrue(newLineLsnr.check()); + + LogListener regexpLsnr = LogListener.matches(Pattern.compile("(?i)hi|hello")).times(3).build(); + + log.registerListener(regexpLsnr); + + log.info("Hi! Hello!"); + log.info("Hi folks"); + + assertTrue(regexpLsnr.check()); + } + + /** + * Check thread safety. + * + * @throws Exception If failed. 
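The `testMatchesPerMessage` case above depends on the listener counting every occurrence of a substring across all log messages, with `check()` validating the accumulated total against the configured expectation. A simplified standalone stand-in for that counting behavior (this is not Ignite's `ListeningTestLogger`/`LogListener` API; unlike the real listener it counts non-overlapping occurrences only):

```java
/** Illustrative substring-counting listener, not the Ignite LogListener API. */
class CountingLogListener {
    /** Substring to look for. */
    private final String substr;

    /** Expected total number of occurrences. */
    private final int expected;

    /** Occurrences seen so far. */
    private int seen;

    CountingLogListener(String substr, int expected) {
        this.substr = substr;
        this.expected = expected;
    }

    /** Feeds one log message, counting every non-overlapping occurrence in it. */
    void accept(String msg) {
        int idx = 0;

        while ((idx = msg.indexOf(substr, idx)) >= 0) {
            seen++;
            idx += substr.length();
        }
    }

    /** @return {@code true} if the substring occurred exactly the expected number of times. */
    boolean check() {
        return seen == expected;
    }
}
```

The point mirrored from the test above: matches are tallied per occurrence within each message, not per message, so a single `info()` call can advance the counter several times.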
+ */ + @Test + public void testMultithreaded() throws Exception { + int iterCnt = 50_000; + int threadCnt = 6; + int total = threadCnt * iterCnt; + int rndNum = ThreadLocalRandom.current().nextInt(iterCnt); + + ListeningTestLogger log = new ListeningTestLogger(); + + LogListener lsnr = LogListener.matches("abba").times(total) + .andMatches(Pattern.compile("(?i)abba")).times(total * 2) + .andMatches("ab").times(total) + .andMatches("ba").times(total) + .build(); + + LogListener mtLsnr = LogListener.matches("abba").build(); + + log.registerListener(lsnr); + + GridTestUtils.runMultiThreaded(() -> { + for (int i = 0; i < iterCnt; i++) { + if (rndNum == i) + log.registerListener(mtLsnr); + + log.info("It is the abba(ABBA) message."); + } + }, threadCnt, "test-listening-log"); + + assertTrue(lsnr.check()); + assertTrue(mtLsnr.check()); + } + + /** + * Check "echo" logger. + */ + @Test + public void testEchoLogger() { + IgniteLogger echo = new StringLogger(); + + ListeningTestLogger log = new ListeningTestLogger(true, echo); + + log.error("1"); + log.warning("2"); + log.info("3"); + log.debug("4"); + log.trace("5"); + + assertEquals("12345", echo.toString()); + } + + /** */ + private static class StringLogger extends NullLogger { + /** */ + private final StringBuilder buf = new StringBuilder(); + + /** {@inheritDoc} */ + @Override public void trace(String msg) { + buf.append(msg); + } + + /** {@inheritDoc} */ + @Override public void debug(String msg) { + buf.append(msg); + } + + /** {@inheritDoc} */ + @Override public void info(String msg) { + buf.append(msg); + } + + /** {@inheritDoc} */ + @Override public void warning(String msg, Throwable t) { + buf.append(msg); + } + + /** {@inheritDoc} */ + @Override public void error(String msg, Throwable t) { + buf.append(msg); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return buf.toString(); + } + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/test/ParametersTest.java 
b/modules/core/src/test/java/org/apache/ignite/testframework/test/ParametersTest.java index 736db1283915a..fd145b500b4c4 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/test/ParametersTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/test/ParametersTest.java @@ -19,22 +19,25 @@ import java.util.HashSet; import java.util.Set; -import junit.framework.TestCase; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.configvariations.ConfigParameter; import org.apache.ignite.testframework.configvariations.Parameters; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; /** * Test. */ -public class ParametersTest extends TestCase { +public class ParametersTest { /** */ private static final String DEFAULT_CACHE_NAME = "default"; /** * @throws Exception If failed. */ + @Test public void testEnumVariations() throws Exception { ConfigParameter[] modes = Parameters.enumParameters("setCacheMode", CacheMode.class); @@ -60,7 +63,7 @@ public void testEnumVariations() throws Exception { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("unchecked") + @Test public void testEnumVariationsWithNull() throws Exception { ConfigParameter[] cfgParam = Parameters.enumParameters(true, "setCacheMode", CacheMode.class); diff --git a/modules/core/src/test/java/org/apache/ignite/testframework/test/VariationsIteratorTest.java b/modules/core/src/test/java/org/apache/ignite/testframework/test/VariationsIteratorTest.java index d8ac2b39d2b4d..a21cb6c87ee39 100644 --- a/modules/core/src/test/java/org/apache/ignite/testframework/test/VariationsIteratorTest.java +++ b/modules/core/src/test/java/org/apache/ignite/testframework/test/VariationsIteratorTest.java @@ -20,16 +20,20 @@ import java.util.Arrays; import java.util.HashSet; import java.util.Set; -import junit.framework.TestCase; import org.apache.ignite.testframework.configvariations.VariationsIterator; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; /** * Test start iterator. */ -public class VariationsIteratorTest extends TestCase { +public class VariationsIteratorTest { /** * @throws Exception If failed. */ + @Test public void test1() throws Exception { Object[][] arr = new Object[][] { {0, 1}, @@ -43,7 +47,7 @@ public void test1() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("PointlessArithmeticExpression") + @Test public void test2() throws Exception { Object[][] arr = new Object[][] { {0}, @@ -58,7 +62,7 @@ public void test2() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("PointlessArithmeticExpression") + @Test public void test3() throws Exception { Object[][] arr = new Object[][] { {0, 1, 2, 3, 4, 5}, @@ -73,7 +77,7 @@ public void test3() throws Exception { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("PointlessArithmeticExpression") + @Test public void test4() throws Exception { Object[][] arr = new Object[][]{ {0,1,2}, @@ -91,6 +95,7 @@ public void test4() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimple() throws Exception { Object[][] arr = new Object[][] { {0}, @@ -102,6 +107,7 @@ public void testSimple() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimple2() throws Exception { Object[][] arr = new Object[][] { {0}, diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicTestSuite.java index ac2bed36d3a58..b5e841892d778 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicTestSuite.java @@ -18,6 +18,7 @@ package org.apache.ignite.testsuites; import java.util.Set; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.GridSuppressedExceptionSelfTest; import org.apache.ignite.failure.FailureHandlerTriggeredTest; @@ -42,6 +43,7 @@ import org.apache.ignite.internal.IgniteSlowClientDetectionSelfTest; import org.apache.ignite.internal.MarshallerContextLockingSelfTest; import org.apache.ignite.internal.TransactionsMXBeanImplTest; +import org.apache.ignite.internal.managers.IgniteDiagnosticMessagesMultipleConnectionsTest; import org.apache.ignite.internal.managers.IgniteDiagnosticMessagesTest; import org.apache.ignite.internal.processors.affinity.GridAffinityProcessorMemoryLeakTest; import org.apache.ignite.internal.processors.affinity.GridAffinityProcessorRendezvousSelfTest; @@ -85,33 +87,32 @@ import org.apache.ignite.spi.GridSpiLocalHostInjectionTest; import org.apache.ignite.startup.properties.NotStringSystemPropertyTest; import org.apache.ignite.testframework.GridTestUtils; -import 
org.apache.ignite.testframework.junits.GridAbstractTest; import org.apache.ignite.testframework.test.ConfigVariationsTestSuiteBuilderTest; +import org.apache.ignite.testframework.test.ListeningTestLoggerTest; import org.apache.ignite.testframework.test.ParametersTest; import org.apache.ignite.testframework.test.VariationsIteratorTest; import org.apache.ignite.util.AttributeNodeFilterSelfTest; import org.jetbrains.annotations.Nullable; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Basic test suite. */ -public class IgniteBasicTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteBasicTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Tests don't include in the execution. Providing null means nothing to exclude. * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite(@Nullable final Set ignoredTests) throws Exception { - System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false"); - + public static TestSuite suite(@Nullable final Set ignoredTests) { TestSuite suite = new TestSuite("Ignite Basic Test Suite"); suite.addTest(IgniteMarshallerSelfTestSuite.suite(ignoredTests)); @@ -127,95 +128,98 @@ public static TestSuite suite(@Nullable final Set ignoredTests) throws Ex suite.addTest(IgnitePlatformsTestSuite.suite()); - suite.addTest(new TestSuite(GridSelfTest.class)); - suite.addTest(new TestSuite(ClusterGroupHostsSelfTest.class)); - suite.addTest(new TestSuite(IgniteMessagingWithClientTest.class)); - suite.addTest(new TestSuite(IgniteMessagingSendAsyncTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ClusterGroupHostsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteMessagingWithClientTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteMessagingSendAsyncTest.class)); GridTestUtils.addTestIfNeeded(suite, ClusterGroupSelfTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, GridMessagingSelfTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, GridMessagingNoPeerClassLoadingSelfTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, GridReleaseTypeSelfTest.class, ignoredTests); - suite.addTestSuite(GridProductVersionSelfTest.class); - suite.addTestSuite(GridAffinityProcessorRendezvousSelfTest.class); - suite.addTestSuite(GridAffinityProcessorMemoryLeakTest.class); - suite.addTestSuite(GridClosureProcessorSelfTest.class); - suite.addTestSuite(GridClosureProcessorRemoteTest.class); - suite.addTestSuite(GridClosureSerializationTest.class); - suite.addTestSuite(ClosureServiceClientsNodesTest.class); - suite.addTestSuite(GridStartStopSelfTest.class); - suite.addTestSuite(GridProjectionForCachesSelfTest.class); - 
suite.addTestSuite(GridProjectionForCachesOnDaemonNodeSelfTest.class); - suite.addTestSuite(GridSpiLocalHostInjectionTest.class); - suite.addTestSuite(GridLifecycleBeanSelfTest.class); - suite.addTestSuite(GridStopWithCancelSelfTest.class); - suite.addTestSuite(GridReduceSelfTest.class); - suite.addTestSuite(GridEventConsumeSelfTest.class); - suite.addTestSuite(GridSuppressedExceptionSelfTest.class); - suite.addTestSuite(GridLifecycleAwareSelfTest.class); - suite.addTestSuite(GridMessageListenSelfTest.class); - suite.addTestSuite(GridFailFastNodeFailureDetectionSelfTest.class); - suite.addTestSuite(IgniteSlowClientDetectionSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridProductVersionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAffinityProcessorRendezvousSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAffinityProcessorMemoryLeakTest.class)); + suite.addTest(new JUnit4TestAdapter(GridClosureProcessorSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridClosureProcessorRemoteTest.class)); + suite.addTest(new JUnit4TestAdapter(GridClosureSerializationTest.class)); + suite.addTest(new JUnit4TestAdapter(ClosureServiceClientsNodesTest.class)); + suite.addTest(new JUnit4TestAdapter(GridStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridProjectionForCachesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridProjectionForCachesOnDaemonNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSpiLocalHostInjectionTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLifecycleBeanSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridStopWithCancelSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridReduceSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridEventConsumeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSuppressedExceptionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLifecycleAwareSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridMessageListenSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFailFastNodeFailureDetectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSlowClientDetectionSelfTest.class)); GridTestUtils.addTestIfNeeded(suite, IgniteDaemonNodeMarshallerCacheTest.class, ignoredTests); - suite.addTestSuite(IgniteMarshallerCacheConcurrentReadWriteTest.class); - suite.addTestSuite(GridNodeMetricsLogSelfTest.class); - suite.addTestSuite(GridLocalIgniteSerializationTest.class); - suite.addTestSuite(GridMBeansTest.class); - suite.addTestSuite(TransactionsMXBeanImplTest.class); - suite.addTestSuite(SetTxTimeoutOnPartitionMapExchangeTest.class); - - suite.addTestSuite(IgniteExceptionInNioWorkerSelfTest.class); - suite.addTestSuite(IgniteLocalNodeMapBeforeStartTest.class); - suite.addTestSuite(OdbcConfigurationValidationSelfTest.class); - suite.addTestSuite(OdbcEscapeSequenceSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteMarshallerCacheConcurrentReadWriteTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNodeMetricsLogSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLocalIgniteSerializationTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMBeansTest.class)); + suite.addTest(new JUnit4TestAdapter(TransactionsMXBeanImplTest.class)); + suite.addTest(new JUnit4TestAdapter(SetTxTimeoutOnPartitionMapExchangeTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteExceptionInNioWorkerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteLocalNodeMapBeforeStartTest.class)); + suite.addTest(new JUnit4TestAdapter(OdbcConfigurationValidationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(OdbcEscapeSequenceSelfTest.class)); GridTestUtils.addTestIfNeeded(suite, DynamicProxySerializationMultiJvmSelfTest.class, ignoredTests); // Tests against configuration variations framework. 
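[Editor's aside, between hunks: the `GridTestUtils.addTestIfNeeded(suite, X.class, ignoredTests)` calls above add a test class to the suite only when the caller has not asked to exclude it. The following self-contained sketch illustrates that filtering contract under stated assumptions — it models the suite as a plain list and is not Ignite's actual `GridTestUtils` implementation; class names `AddIfNeededSketch`, `ClassA`, `ClassB` are invented for illustration.]

```java
// Minimal sketch of the ignored-tests filtering that the patch's
// GridTestUtils.addTestIfNeeded calls perform: add the class to the
// suite unless it appears in the ignore set. Illustration only; not
// Ignite's real implementation.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

public class AddIfNeededSketch {
    public static class ClassA { }
    public static class ClassB { }

    /** Add cls to the suite unless the caller asked to ignore it. */
    public static void addTestIfNeeded(List<Class<?>> suite, Class<?> cls,
        Collection<Class<?>> ignoredTests) {
        // A null collection means "exclude nothing", matching the
        // @Nullable ignoredTests parameter in the suites above.
        if (ignoredTests == null || !ignoredTests.contains(cls))
            suite.add(cls);
    }

    public static void main(String[] args) {
        List<Class<?>> suite = new ArrayList<>();
        Collection<Class<?>> ignored = new HashSet<>(Arrays.asList(ClassB.class));

        addTestIfNeeded(suite, ClassA.class, ignored);
        addTestIfNeeded(suite, ClassB.class, ignored); // skipped: ignored
        addTestIfNeeded(suite, ClassA.class, null);    // null = ignore nothing

        System.out.println(suite.size()); // prints 2
    }
}
```

This is why several hunks in the patch rewrite unconditional `suite.addTestSuite(X.class)` calls into `addTestIfNeeded` form: it lets derived suites (such as `IgniteBinaryCacheTestSuite`) reuse a base suite while excluding tests that have a specialized binary-marshaller variant.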
- suite.addTestSuite(ParametersTest.class); - suite.addTestSuite(VariationsIteratorTest.class); - suite.addTestSuite(ConfigVariationsTestSuiteBuilderTest.class); - suite.addTestSuite(NotStringSystemPropertyTest.class); + suite.addTest(new JUnit4TestAdapter(ParametersTest.class)); + suite.addTest(new JUnit4TestAdapter(VariationsIteratorTest.class)); + suite.addTest(new JUnit4TestAdapter(ConfigVariationsTestSuiteBuilderTest.class)); + suite.addTest(new JUnit4TestAdapter(NotStringSystemPropertyTest.class)); - suite.addTestSuite(MarshallerContextLockingSelfTest.class); - suite.addTestSuite(MarshallerContextSelfTest.class); + suite.addTest(new JUnit4TestAdapter(MarshallerContextLockingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(MarshallerContextSelfTest.class)); - suite.addTestSuite(SecurityPermissionSetBuilderTest.class); + suite.addTest(new JUnit4TestAdapter(SecurityPermissionSetBuilderTest.class)); - suite.addTestSuite(AttributeNodeFilterSelfTest.class); + suite.addTest(new JUnit4TestAdapter(AttributeNodeFilterSelfTest.class)); // Basic DB data structures. 
- suite.addTestSuite(BPlusTreeSelfTest.class); - suite.addTestSuite(BPlusTreeFakeReuseSelfTest.class); - suite.addTestSuite(BPlusTreeReuseSelfTest.class); - suite.addTestSuite(IndexStorageSelfTest.class); - suite.addTestSuite(CacheFreeListImplSelfTest.class); - suite.addTestSuite(DataRegionMetricsSelfTest.class); - suite.addTestSuite(SwapPathConstructionSelfTest.class); + suite.addTest(new JUnit4TestAdapter(BPlusTreeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BPlusTreeFakeReuseSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BPlusTreeReuseSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IndexStorageSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheFreeListImplSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DataRegionMetricsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SwapPathConstructionSelfTest.class)); - suite.addTestSuite(IgniteMarshallerCacheFSRestoreTest.class); - suite.addTestSuite(IgniteMarshallerCacheClassNameConflictTest.class); - suite.addTestSuite(IgniteMarshallerCacheClientRequestsMappingOnMissTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteMarshallerCacheFSRestoreTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteMarshallerCacheClassNameConflictTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteMarshallerCacheClientRequestsMappingOnMissTest.class)); - suite.addTestSuite(IgniteDiagnosticMessagesTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteDiagnosticMessagesTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDiagnosticMessagesMultipleConnectionsTest.class)); - suite.addTestSuite(IgniteRejectConnectOnNodeStopTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteRejectConnectOnNodeStopTest.class)); - suite.addTestSuite(GridCleanerTest.class); + suite.addTest(new JUnit4TestAdapter(GridCleanerTest.class)); - suite.addTestSuite(ClassSetTest.class); + suite.addTest(new JUnit4TestAdapter(ClassSetTest.class)); // Basic failure handlers. 
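[Editor's aside, between hunks: the mechanical `addTestSuite(X.class)` → `addTest(new JUnit4TestAdapter(X.class))` rewrite running through the hunks above exists because a JUnit 3 `TestSuite` can only hold objects implementing its `Test` interface, while the migrated classes are plain JUnit 4 classes with `@Test` methods and no `TestCase` base class. The toy below sketches that adapter idea under stated assumptions — `JUnit4Adapter`, `@TestMethod`, and `SampleTest` are invented stand-ins, not the real JUnit classes.]

```java
// Toy re-implementation showing why JUnit4TestAdapter is needed: the
// JUnit 3 suite only accepts Test instances, so a JUnit 4 style class
// (plain class, annotated methods) must be wrapped by an adapter that
// implements Test on its behalf. Not the real JUnit internals.
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AdapterSketch {
    /** JUnit 3 style: everything held by a suite implements this. */
    public interface Test { int countTestCases(); }

    /** JUnit 3 style suite: a flat list of Test instances. */
    public static class TestSuite implements Test {
        private final List<Test> tests = new ArrayList<>();
        public void addTest(Test t) { tests.add(t); }
        @Override public int countTestCases() {
            int n = 0;
            for (Test t : tests) n += t.countTestCases();
            return n;
        }
    }

    /** Marker standing in for JUnit 4's @Test annotation. */
    @java.lang.annotation.Retention(java.lang.annotation.RetentionPolicy.RUNTIME)
    public @interface TestMethod { }

    /** Adapter: exposes a JUnit 4 style class as a JUnit 3 Test. */
    public static class JUnit4Adapter implements Test {
        private final Class<?> cls;
        public JUnit4Adapter(Class<?> cls) { this.cls = cls; }
        @Override public int countTestCases() {
            // Discover test methods by annotation, as JUnit 4 does.
            int n = 0;
            for (Method m : cls.getDeclaredMethods())
                if (m.isAnnotationPresent(TestMethod.class)) n++;
            return n;
        }
    }

    /** A "JUnit 4" test class: no base class, annotated methods. */
    public static class SampleTest {
        @TestMethod public void testOne() { }
        @TestMethod public void testTwo() { }
    }

    public static void main(String[] args) {
        TestSuite suite = new TestSuite();
        // suite.addTest(new SampleTest()) would not compile:
        // SampleTest does not implement Test. Hence the adapter:
        suite.addTest(new JUnit4Adapter(SampleTest.class));
        System.out.println(suite.countTestCases()); // prints 2
    }
}
```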
- suite.addTestSuite(FailureHandlerTriggeredTest.class); - suite.addTestSuite(StopNodeFailureHandlerTest.class); - suite.addTestSuite(StopNodeOrHaltFailureHandlerTest.class); - suite.addTestSuite(OomFailureHandlerTest.class); - suite.addTestSuite(TransactionIntegrityWithSystemWorkerDeathTest.class); + suite.addTest(new JUnit4TestAdapter(FailureHandlerTriggeredTest.class)); + suite.addTest(new JUnit4TestAdapter(StopNodeFailureHandlerTest.class)); + suite.addTest(new JUnit4TestAdapter(StopNodeOrHaltFailureHandlerTest.class)); + suite.addTest(new JUnit4TestAdapter(OomFailureHandlerTest.class)); + suite.addTest(new JUnit4TestAdapter(TransactionIntegrityWithSystemWorkerDeathTest.class)); + + suite.addTest(new JUnit4TestAdapter(AtomicOperationsInTxTest.class)); - suite.addTestSuite(AtomicOperationsInTxTest.class); + suite.addTest(new JUnit4TestAdapter(CacheRebalanceConfigValidationTest.class)); - suite.addTestSuite(CacheRebalanceConfigValidationTest.class); + suite.addTest(new JUnit4TestAdapter(ListeningTestLoggerTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicWithPersistenceTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicWithPersistenceTestSuite.java index 7ce620956ba7c..aa52de167ea43 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicWithPersistenceTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBasicWithPersistenceTestSuite.java @@ -18,49 +18,78 @@ package org.apache.ignite.testsuites; import java.util.Set; + +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; +import org.apache.ignite.failure.FailureHandlingConfigurationTest; import org.apache.ignite.failure.IoomFailureHandlerTest; +import org.apache.ignite.failure.SystemWorkersBlockingTest; import org.apache.ignite.failure.SystemWorkersTerminationTest; import org.apache.ignite.internal.ClusterBaselineNodesMetricsSelfTest; import 
org.apache.ignite.internal.GridNodeMetricsLogPdsSelfTest; +import org.apache.ignite.internal.encryption.EncryptedCacheBigEntryTest; +import org.apache.ignite.internal.encryption.EncryptedCacheCreateTest; +import org.apache.ignite.internal.encryption.EncryptedCacheDestroyTest; +import org.apache.ignite.internal.encryption.EncryptedCacheGroupCreateTest; +import org.apache.ignite.internal.encryption.EncryptedCacheNodeJoinTest; +import org.apache.ignite.internal.encryption.EncryptedCachePreconfiguredRestartTest; +import org.apache.ignite.internal.encryption.EncryptedCacheRestartTest; +import org.apache.ignite.internal.processors.cache.persistence.CheckpointReadLockFailureTest; +import org.apache.ignite.internal.processors.cache.persistence.SingleNodePersistenceSslTest; import org.apache.ignite.internal.processors.service.ServiceDeploymentOnActivationTest; import org.apache.ignite.internal.processors.service.ServiceDeploymentOutsideBaselineTest; import org.apache.ignite.marshaller.GridMarshallerMappingConsistencyTest; +import org.apache.ignite.util.GridCommandHandlerSslTest; import org.apache.ignite.util.GridCommandHandlerTest; import org.apache.ignite.util.GridInternalTaskUnusedWalSegmentsTest; import org.jetbrains.annotations.Nullable; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Basic test suite. */ +@RunWith(AllTests.class) public class IgniteBasicWithPersistenceTestSuite extends TestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Tests don't include in the execution. Providing null means nothing to exclude. * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite(@Nullable final Set ignoredTests) throws Exception { + public static TestSuite suite(@Nullable final Set ignoredTests) { TestSuite suite = new TestSuite("Ignite Basic With Persistence Test Suite"); - suite.addTestSuite(IoomFailureHandlerTest.class); - suite.addTestSuite(ClusterBaselineNodesMetricsSelfTest.class); - suite.addTestSuite(ServiceDeploymentOnActivationTest.class); - suite.addTestSuite(ServiceDeploymentOutsideBaselineTest.class); - suite.addTestSuite(GridMarshallerMappingConsistencyTest.class); - suite.addTestSuite(SystemWorkersTerminationTest.class); + suite.addTest(new JUnit4TestAdapter(IoomFailureHandlerTest.class)); + suite.addTest(new JUnit4TestAdapter(ClusterBaselineNodesMetricsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ServiceDeploymentOnActivationTest.class)); + suite.addTest(new JUnit4TestAdapter(ServiceDeploymentOutsideBaselineTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMarshallerMappingConsistencyTest.class)); + suite.addTest(new JUnit4TestAdapter(SystemWorkersTerminationTest.class)); + suite.addTest(new JUnit4TestAdapter(FailureHandlingConfigurationTest.class)); + suite.addTest(new JUnit4TestAdapter(SystemWorkersBlockingTest.class)); + suite.addTest(new JUnit4TestAdapter(CheckpointReadLockFailureTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCommandHandlerTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCommandHandlerSslTest.class)); + suite.addTest(new JUnit4TestAdapter(GridInternalTaskUnusedWalSegmentsTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridNodeMetricsLogPdsSelfTest.class)); - suite.addTestSuite(GridCommandHandlerTest.class); - suite.addTestSuite(GridInternalTaskUnusedWalSegmentsTest.class); + suite.addTest(new JUnit4TestAdapter(EncryptedCacheBigEntryTest.class)); + suite.addTest(new JUnit4TestAdapter(EncryptedCacheCreateTest.class)); + suite.addTest(new JUnit4TestAdapter(EncryptedCacheDestroyTest.class)); + suite.addTest(new 
JUnit4TestAdapter(EncryptedCacheGroupCreateTest.class)); + suite.addTest(new JUnit4TestAdapter(EncryptedCacheNodeJoinTest.class)); + suite.addTest(new JUnit4TestAdapter(EncryptedCacheRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(EncryptedCachePreconfiguredRestartTest.class)); - suite.addTestSuite(GridNodeMetricsLogPdsSelfTest.class); + suite.addTest(new JUnit4TestAdapter(SingleNodePersistenceSslTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheTestSuite.java index 170bb33c9ba41..64b640528030f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheTestSuite.java @@ -17,15 +17,16 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import java.util.HashSet; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.GridCacheAffinityRoutingSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheEntryMemorySizeSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheMvccSelfTest; import org.apache.ignite.internal.processors.cache.binary.CacheKeepBinaryWithInterceptorTest; -import org.apache.ignite.internal.processors.cache.expiry.IgniteCacheAtomicLocalExpiryPolicyTest; import org.apache.ignite.internal.processors.cache.binary.GridBinaryCacheEntryMemorySizeSelfTest; import org.apache.ignite.internal.processors.cache.binary.datastreaming.DataStreamProcessorBinarySelfTest; +import org.apache.ignite.internal.processors.cache.binary.datastreaming.DataStreamProcessorPersistenceBinarySelfTest; import org.apache.ignite.internal.processors.cache.binary.datastreaming.GridDataStreamerImplSelfTest; import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAffinityRoutingBinarySelfTest; import 
org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultiNodeSelfTest; @@ -34,23 +35,35 @@ import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAtomicPartitionedOnlyBinaryMultithreadedSelfTest; import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheBinariesNearPartitionedByteArrayValuesSelfTest; import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheBinariesPartitionedOnlyByteArrayValuesSelfTest; +import org.apache.ignite.internal.processors.cache.expiry.IgniteCacheAtomicLocalExpiryPolicyTest; +import org.apache.ignite.internal.processors.datastreamer.DataStreamProcessorPersistenceSelfTest; import org.apache.ignite.internal.processors.datastreamer.DataStreamProcessorSelfTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Cache suite with binary marshaller. */ -public class IgniteBinaryCacheTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteBinaryCacheTestSuite { /** * @return Suite. - * @throws Exception In case of error. */ - public static TestSuite suite() throws Exception { - TestSuite suite = new TestSuite("Binary Cache Test Suite"); + public static TestSuite suite() { + return suite(new HashSet<>()); + } - HashSet ignoredTests = new HashSet<>(); + /** + * @param ignoredTests Tests to ignore. + * @return Test suite. 
+ */ + public static TestSuite suite(Collection ignoredTests) { + TestSuite suite = new TestSuite("Binary Cache Test Suite"); // Tests below have a special version for Binary Marshaller ignoredTests.add(DataStreamProcessorSelfTest.class); + ignoredTests.add(DataStreamProcessorPersistenceSelfTest.class); ignoredTests.add(GridCacheAffinityRoutingSelfTest.class); ignoredTests.add(IgniteCacheAtomicLocalExpiryPolicyTest.class); ignoredTests.add(GridCacheEntryMemorySizeSelfTest.class); @@ -60,23 +73,21 @@ public static TestSuite suite() throws Exception { suite.addTest(IgniteCacheTestSuite.suite(ignoredTests)); - // TODO GG-11148 - // suite.addTestSuite(GridCacheMemoryModeBinarySelfTest.class); - - suite.addTestSuite(GridCacheBinariesPartitionedOnlyByteArrayValuesSelfTest.class); - suite.addTestSuite(GridCacheBinariesNearPartitionedByteArrayValuesSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheBinariesPartitionedOnlyByteArrayValuesSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheBinariesNearPartitionedByteArrayValuesSelfTest.class, ignoredTests); - suite.addTestSuite(GridDataStreamerImplSelfTest.class); - suite.addTestSuite(DataStreamProcessorBinarySelfTest.class); - suite.addTestSuite(GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultiNodeSelfTest.class); - suite.addTestSuite(GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultithreadedSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, GridDataStreamerImplSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, DataStreamProcessorBinarySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, DataStreamProcessorPersistenceBinarySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultiNodeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultithreadedSelfTest.class, ignoredTests); - 
suite.addTestSuite(GridCacheAtomicPartitionedOnlyBinaryMultiNodeSelfTest.class); - suite.addTestSuite(GridCacheAtomicPartitionedOnlyBinaryMultithreadedSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicPartitionedOnlyBinaryMultiNodeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicPartitionedOnlyBinaryMultithreadedSelfTest.class, ignoredTests); - suite.addTestSuite(GridCacheAffinityRoutingBinarySelfTest.class); - suite.addTestSuite(GridBinaryCacheEntryMemorySizeSelfTest.class); - suite.addTestSuite(CacheKeepBinaryWithInterceptorTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheAffinityRoutingBinarySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridBinaryCacheEntryMemorySizeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheKeepBinaryWithInterceptorTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsCacheTestSuite3.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsCacheTestSuite3.java index 0007813ac92c3..1b77812d828e9 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsCacheTestSuite3.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsCacheTestSuite3.java @@ -17,10 +17,14 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.binary.GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest; import org.apache.ignite.internal.processors.cache.binary.GridCacheBinaryTransactionalEntryProcessorDeploymentSelfTest; +import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * IgniteBinaryObjectsCacheTestSuite3 is kept together with {@link 
IgniteCacheTestSuite3} @@ -38,19 +42,27 @@ * In future this suite may be merged with {@link IgniteCacheTestSuite3} * */ +@RunWith(AllTests.class) public class IgniteBinaryObjectsCacheTestSuite3 { /** * @return Test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { + return suite(null); + } + + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. + */ + public static TestSuite suite(Collection ignoredTests) { GridTestProperties.setProperty(GridTestProperties.ENTRY_PROCESSOR_CLASS_NAME, "org.apache.ignite.tests.p2p.CacheDeploymentBinaryEntryProcessor"); - TestSuite suite = IgniteCacheTestSuite3.suite(); + TestSuite suite = IgniteCacheTestSuite3.suite(ignoredTests); - suite.addTestSuite(GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest.class); - suite.addTestSuite(GridCacheBinaryTransactionalEntryProcessorDeploymentSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheBinaryTransactionalEntryProcessorDeploymentSelfTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsComputeGridTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsComputeGridTestSuite.java index 4820d455683f5..84d0461f3d840 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsComputeGridTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsComputeGridTestSuite.java @@ -17,9 +17,9 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.GridComputationBinarylizableClosuresSelfTest; -import org.apache.ignite.testframework.junits.GridAbstractTest; /** * @@ -30,11 +30,9 @@ public class
IgniteBinaryObjectsComputeGridTestSuite { * @throws Exception If failed. */ public static TestSuite suite() throws Exception { - System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false"); - TestSuite suite = IgniteComputeGridTestSuite.suite(); - suite.addTestSuite(GridComputationBinarylizableClosuresSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridComputationBinarylizableClosuresSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsTestSuite.java index c7d87444f5317..761c1f8670379 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinaryObjectsTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.binary.BinaryArrayIdentityResolverSelfTest; import org.apache.ignite.internal.binary.BinaryBasicIdMapperSelfTest; @@ -78,91 +79,93 @@ import org.apache.ignite.internal.processors.cache.binary.local.GridCacheBinaryObjectsLocalOnheapSelfTest; import org.apache.ignite.internal.processors.cache.binary.local.GridCacheBinaryObjectsLocalSelfTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteBinaryMetadataUpdateChangingTopologySelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test for binary objects stored in cache. */ -public class IgniteBinaryObjectsTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteBinaryObjectsTestSuite { /** * @return Suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Binary Objects Test Suite"); - suite.addTestSuite(BinarySimpleNameTestPropertySelfTest.class); - - suite.addTestSuite(BinaryBasicIdMapperSelfTest.class); - suite.addTestSuite(BinaryBasicNameMapperSelfTest.class); - - suite.addTestSuite(BinaryTreeSelfTest.class); - suite.addTestSuite(BinaryMarshallerSelfTest.class); - suite.addTestSuite(BinaryObjectExceptionSelfTest.class); - - suite.addTestSuite(BinarySerialiedFieldComparatorSelfTest.class); - suite.addTestSuite(BinaryArrayIdentityResolverSelfTest.class); - - suite.addTestSuite(BinaryConfigurationConsistencySelfTest.class); - suite.addTestSuite(BinaryConfigurationCustomSerializerSelfTest.class); - suite.addTestSuite(GridBinaryMarshallerCtxDisabledSelfTest.class); - suite.addTestSuite(BinaryObjectBuilderDefaultMappersSelfTest.class); - suite.addTestSuite(BinaryObjectBuilderSimpleNameLowerCaseMappersSelfTest.class); - suite.addTestSuite(BinaryObjectBuilderAdditionalSelfTest.class); - //suite.addTestSuite(BinaryFieldExtractionSelfTest.class); - suite.addTestSuite(BinaryFieldsHeapSelfTest.class); - suite.addTestSuite(BinaryFieldsOffheapSelfTest.class); - suite.addTestSuite(BinaryFooterOffsetsHeapSelfTest.class); - suite.addTestSuite(BinaryFooterOffsetsOffheapSelfTest.class); - suite.addTestSuite(BinaryEnumsSelfTest.class); - suite.addTestSuite(GridDefaultBinaryMappersBinaryMetaDataSelfTest.class); - suite.addTestSuite(GridSimpleLowerCaseBinaryMappersBinaryMetaDataSelfTest.class); - suite.addTestSuite(GridBinaryAffinityKeySelfTest.class); - suite.addTestSuite(GridBinaryWildcardsSelfTest.class); - suite.addTestSuite(BinaryObjectToStringSelfTest.class); - suite.addTestSuite(BinaryObjectTypeCompatibilityTest.class); + suite.addTest(new JUnit4TestAdapter(BinarySimpleNameTestPropertySelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(BinaryBasicIdMapperSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(BinaryBasicNameMapperSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(BinaryTreeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryMarshallerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectExceptionSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(BinarySerialiedFieldComparatorSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryArrayIdentityResolverSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(BinaryConfigurationConsistencySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryConfigurationCustomSerializerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridBinaryMarshallerCtxDisabledSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectBuilderDefaultMappersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectBuilderSimpleNameLowerCaseMappersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectBuilderAdditionalSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(BinaryFieldExtractionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFieldsHeapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFieldsOffheapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFooterOffsetsHeapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFooterOffsetsOffheapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryEnumsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridDefaultBinaryMappersBinaryMetaDataSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSimpleLowerCaseBinaryMappersBinaryMetaDataSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridBinaryAffinityKeySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridBinaryWildcardsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectToStringSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectTypeCompatibilityTest.class)); // Tests for objects with non-compact footers. 
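[Editor's aside, between hunks: the `@RunWith(AllTests.class)` annotation added to each suite class above is the bridge in the other direction — it lets a JUnit 4 runner execute a class that still builds a JUnit 3 suite via a public static `suite()` method. The sketch below models what such a runner does reflectively, under stated assumptions; `AllTestsSketch`, `BasicSuite`, and `buildFrom` are illustrative names, not the real `org.junit.runners.AllTests` internals.]

```java
// Toy sketch of the @RunWith(AllTests.class) contract: the runner
// reflectively invokes the suite class's public static suite() method
// and runs whatever test object it returns. Illustration only.
import java.lang.reflect.Method;

public class AllTestsSketch {
    /** Stand-in for junit.framework.Test. */
    public interface Test { void run(StringBuilder log); }

    /** A suite class in the style the patch migrates to. */
    public static class BasicSuite {
        // Must be public static with no arguments, or the runner
        // cannot find it.
        public static Test suite() {
            return log -> log.append("ran basic suite;");
        }
    }

    /** What an AllTests-style runner does with such a class. */
    public static Test buildFrom(Class<?> suiteCls) {
        try {
            Method m = suiteCls.getMethod("suite"); // public static suite()
            return (Test)m.invoke(null);            // null: static method
        }
        catch (ReflectiveOperationException e) {
            throw new RuntimeException("no usable suite() method", e);
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        buildFrom(BasicSuite.class).run(log);
        System.out.println(log); // prints "ran basic suite;"
    }
}
```

This also explains why the patch drops `throws Exception` from the `suite()` signatures: the reflective runner contract is simpler when `suite()` declares no checked exceptions.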
- suite.addTestSuite(BinaryMarshallerNonCompactSelfTest.class); - suite.addTestSuite(BinaryObjectBuilderNonCompactDefaultMappersSelfTest.class); - suite.addTestSuite(BinaryObjectBuilderNonCompactSimpleNameLowerCaseMappersSelfTest.class); - suite.addTestSuite(BinaryObjectBuilderAdditionalNonCompactSelfTest.class); - suite.addTestSuite(BinaryFieldsHeapNonCompactSelfTest.class); - suite.addTestSuite(BinaryFieldsOffheapNonCompactSelfTest.class); - suite.addTestSuite(BinaryFooterOffsetsHeapNonCompactSelfTest.class); - suite.addTestSuite(BinaryFooterOffsetsOffheapNonCompactSelfTest.class); - - suite.addTestSuite(GridCacheBinaryObjectsLocalSelfTest.class); - //suite.addTestSuite(GridCacheBinaryObjectsLocalOnheapSelfTest.class); - suite.addTestSuite(GridCacheBinaryObjectsAtomicLocalSelfTest.class); - suite.addTestSuite(GridCacheBinaryObjectsReplicatedSelfTest.class); - suite.addTestSuite(GridCacheBinaryObjectsPartitionedSelfTest.class); - suite.addTestSuite(GridCacheBinaryObjectsPartitionedNearDisabledSelfTest.class); - //suite.addTestSuite(GridCacheBinaryObjectsPartitionedNearDisabledOnheapSelfTest.class); - //suite.addTestSuite(GridCacheBinaryObjectsPartitionedOnheapSelfTest.class); - suite.addTestSuite(GridCacheBinaryObjectsAtomicSelfTest.class); - //suite.addTestSuite(GridCacheBinaryObjectsAtomicOnheapSelfTest.class); - suite.addTestSuite(GridCacheBinaryObjectsAtomicNearDisabledSelfTest.class); - //suite.addTestSuite(GridCacheBinaryObjectsAtomicNearDisabledOnheapSelfTest.class); - - suite.addTestSuite(GridCacheBinaryStoreObjectsSelfTest.class); - suite.addTestSuite(GridCacheBinaryStoreBinariesDefaultMappersSelfTest.class); - suite.addTestSuite(GridCacheBinaryStoreBinariesSimpleNameMappersSelfTest.class); - - suite.addTestSuite(GridCacheClientNodeBinaryObjectMetadataTest.class); - suite.addTestSuite(GridCacheBinaryObjectMetadataExchangeMultinodeTest.class); - suite.addTestSuite(BinaryMetadataUpdatesFlowTest.class); - 
suite.addTestSuite(GridCacheClientNodeBinaryObjectMetadataMultinodeTest.class); - suite.addTestSuite(IgniteBinaryMetadataUpdateChangingTopologySelfTest.class); - - suite.addTestSuite(BinaryTxCacheLocalEntriesSelfTest.class); - suite.addTestSuite(BinaryAtomicCacheLocalEntriesSelfTest.class); + suite.addTest(new JUnit4TestAdapter(BinaryMarshallerNonCompactSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectBuilderNonCompactDefaultMappersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectBuilderNonCompactSimpleNameLowerCaseMappersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryObjectBuilderAdditionalNonCompactSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFieldsHeapNonCompactSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFieldsOffheapNonCompactSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFooterOffsetsHeapNonCompactSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryFooterOffsetsOffheapNonCompactSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsLocalSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsLocalOnheapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsAtomicLocalSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsPartitionedNearDisabledSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsPartitionedNearDisabledOnheapSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsPartitionedOnheapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsAtomicSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsAtomicOnheapSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridCacheBinaryObjectsAtomicNearDisabledSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectsAtomicNearDisabledOnheapSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryStoreObjectsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryStoreBinariesDefaultMappersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryStoreBinariesSimpleNameMappersSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCacheClientNodeBinaryObjectMetadataTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectMetadataExchangeMultinodeTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryMetadataUpdatesFlowTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheClientNodeBinaryObjectMetadataMultinodeTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteBinaryMetadataUpdateChangingTopologySelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(BinaryTxCacheLocalEntriesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryAtomicCacheLocalEntriesSelfTest.class)); // Byte order - suite.addTestSuite(BinaryHeapStreamByteOrderSelfTest.class); - suite.addTestSuite(BinaryAbstractOutputStreamTest.class); - suite.addTestSuite(BinaryOffheapStreamByteOrderSelfTest.class); + suite.addTest(new JUnit4TestAdapter(BinaryHeapStreamByteOrderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryAbstractOutputStreamTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryOffheapStreamByteOrderSelfTest.class)); - suite.addTestSuite(GridCacheBinaryObjectUserClassloaderSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridCacheBinaryObjectUserClassloaderSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperBasicTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperBasicTestSuite.java index 318f87ec07d28..69d3b60e33881 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperBasicTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperBasicTestSuite.java @@ -21,11 +21,14 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.cache.IgniteMarshallerCacheClassNameConflictTest; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Basic test suite. */ -public class IgniteBinarySimpleNameMapperBasicTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteBinarySimpleNameMapperBasicTestSuite { /** * @return Test suite. * @throws Exception Thrown in case of the failure. diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheFullApiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheFullApiTestSuite.java index bbf4297af8b46..dc362b8726701 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheFullApiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheFullApiTestSuite.java @@ -20,16 +20,18 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Cache full API suite with binary marshaller and simple name mapper. */ -public class IgniteBinarySimpleNameMapperCacheFullApiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteBinarySimpleNameMapperCacheFullApiTestSuite { /** * @return Suite. - * @throws Exception In case of error. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { GridTestProperties.setProperty(GridTestProperties.MARSH_CLASS_NAME, BinaryMarshaller.class.getName()); GridTestProperties.setProperty(GridTestProperties.BINARY_MARSHALLER_USE_SIMPLE_NAME_MAPPER, "true"); diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheBasicConfigVariationsFullApiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheBasicConfigVariationsFullApiTestSuite.java index 85a8f59ad5184..af7b42612642f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheBasicConfigVariationsFullApiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheBasicConfigVariationsFullApiTestSuite.java @@ -20,16 +20,18 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.IgniteCacheConfigVariationsFullApiTest; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache API. */ -public class IgniteCacheBasicConfigVariationsFullApiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheBasicConfigVariationsFullApiTestSuite { /** * @return Cache API test suite. - * @throws Exception If failed. 
     */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         return new ConfigVariationsTestSuiteBuilder(
             "Cache New Full API Test Suite",
             IgniteCacheConfigVariationsFullApiTest.class)
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheBlockExchangeOnReadOperationsTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheBlockExchangeOnReadOperationsTestSuite.java
new file mode 100644
index 0000000000000..a7c839c7859da
--- /dev/null
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheBlockExchangeOnReadOperationsTestSuite.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.testsuites;
+
+import java.util.Set;
+import junit.framework.JUnit4TestAdapter;
+import junit.framework.TestSuite;
+import org.apache.ignite.internal.processors.cache.distributed.CacheBlockOnGetAllTest;
+import org.apache.ignite.internal.processors.cache.distributed.CacheBlockOnScanTest;
+import org.apache.ignite.internal.processors.cache.distributed.CacheBlockOnSingleGetTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
+
+/**
+ * Test suite.
+ */
+@RunWith(AllTests.class)
+public class IgniteCacheBlockExchangeOnReadOperationsTestSuite {
+    /**
+     * @return IgniteCache test suite.
+     */
+    public static TestSuite suite() {
+        return suite(null);
+    }
+
+    /**
+     * @param ignoredTests Tests to ignore.
+     * @return Test suite.
+     */
+    public static TestSuite suite(Set ignoredTests) {
+        TestSuite suite = new TestSuite("Do Not Block Read Operations Test Suite");
+
+        suite.addTest(new JUnit4TestAdapter(CacheBlockOnSingleGetTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheBlockOnGetAllTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheBlockOnScanTest.class));
+
+        return suite;
+    }
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheDataStructuresSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheDataStructuresSelfTestSuite.java
index 6bbf0fe652baf..184e85dd18ef4 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheDataStructuresSelfTestSuite.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheDataStructuresSelfTestSuite.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.AtomicCacheAffinityConfigurationTest;
 import org.apache.ignite.internal.processors.cache.datastructures.GridCacheQueueCleanupSelfTest;
@@ -26,7 +27,6 @@
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteClientDiscoveryDataStructuresTest;
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest;
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureWithJobTest;
-import org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructuresNoClassOnServerTest;
 import org.apache.ignite.internal.processors.cache.datastructures.IgniteSequenceInternalCleanupTest;
 import
org.apache.ignite.internal.processors.cache.datastructures.SemaphoreFailoverNoWaitingAcquirerTest; import org.apache.ignite.internal.processors.cache.datastructures.SemaphoreFailoverSafeReleasePermitsTest; @@ -45,7 +45,6 @@ import org.apache.ignite.internal.processors.cache.datastructures.partitioned.GridCachePartitionedAtomicQueueMultiNodeSelfTest; import org.apache.ignite.internal.processors.cache.datastructures.partitioned.GridCachePartitionedAtomicQueueRotativeMultiNodeTest; import org.apache.ignite.internal.processors.cache.datastructures.partitioned.GridCachePartitionedAtomicReferenceApiSelfTest; -import org.apache.ignite.internal.processors.cache.datastructures.partitioned.GridCachePartitionedAtomicReferenceMultiNodeTest; import org.apache.ignite.internal.processors.cache.datastructures.partitioned.GridCachePartitionedAtomicSequenceMultiThreadedTest; import org.apache.ignite.internal.processors.cache.datastructures.partitioned.GridCachePartitionedAtomicSequenceTxSelfTest; import org.apache.ignite.internal.processors.cache.datastructures.partitioned.GridCachePartitionedAtomicSetFailoverSelfTest; @@ -70,7 +69,6 @@ import org.apache.ignite.internal.processors.cache.datastructures.partitioned.IgnitePartitionedSemaphoreSelfTest; import org.apache.ignite.internal.processors.cache.datastructures.partitioned.IgnitePartitionedSetNoBackupsSelfTest; import org.apache.ignite.internal.processors.cache.datastructures.replicated.GridCacheReplicatedAtomicReferenceApiSelfTest; -import org.apache.ignite.internal.processors.cache.datastructures.replicated.GridCacheReplicatedAtomicReferenceMultiNodeTest; import org.apache.ignite.internal.processors.cache.datastructures.replicated.GridCacheReplicatedAtomicStampedApiSelfTest; import org.apache.ignite.internal.processors.cache.datastructures.replicated.GridCacheReplicatedDataStructuresFailoverSelfTest; import org.apache.ignite.internal.processors.cache.datastructures.replicated.GridCacheReplicatedQueueApiSelfTest; @@ -84,106 
+82,108 @@ import org.apache.ignite.internal.processors.cache.datastructures.replicated.IgniteReplicatedLockSelfTest; import org.apache.ignite.internal.processors.cache.datastructures.replicated.IgniteReplicatedSemaphoreSelfTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheAtomicReplicatedNodeRestartSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache data structures. */ -public class IgniteCacheDataStructuresSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheDataStructuresSelfTestSuite { /** * @return Cache test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Cache Data Structures Test Suite"); // Data structures. - suite.addTest(new TestSuite(GridCachePartitionedQueueFailoverDataConsistencySelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicQueueFailoverDataConsistencySelfTest.class)); - - suite.addTest(new TestSuite(GridCacheLocalSequenceApiSelfTest.class)); - suite.addTest(new TestSuite(GridCacheLocalSetSelfTest.class)); - suite.addTest(new TestSuite(GridCacheLocalAtomicSetSelfTest.class)); - suite.addTest(new TestSuite(GridCacheLocalQueueApiSelfTest.class)); - suite.addTest(new TestSuite(GridCacheLocalAtomicQueueApiSelfTest.class)); - suite.addTest(new TestSuite(IgniteLocalCountDownLatchSelfTest.class)); - suite.addTest(new TestSuite(IgniteLocalSemaphoreSelfTest.class)); - suite.addTest(new TestSuite(IgniteLocalLockSelfTest.class)); - - suite.addTest(new TestSuite(GridCacheReplicatedSequenceApiSelfTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedSequenceMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedQueueApiSelfTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedQueueMultiNodeSelfTest.class)); - suite.addTest(new 
TestSuite(GridCacheReplicatedQueueRotativeMultiNodeTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedSetSelfTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedDataStructuresFailoverSelfTest.class)); - suite.addTest(new TestSuite(IgniteReplicatedCountDownLatchSelfTest.class)); - suite.addTest(new TestSuite(IgniteReplicatedSemaphoreSelfTest.class)); - suite.addTest(new TestSuite(IgniteReplicatedLockSelfTest.class)); - suite.addTest(new TestSuite(IgniteCacheAtomicReplicatedNodeRestartSelfTest.class)); - - suite.addTest(new TestSuite(GridCachePartitionedSequenceApiSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedSequenceMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedQueueApiSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicQueueApiSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedQueueMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicQueueMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(GridCacheQueueClientDisconnectTest.class)); - - suite.addTest(new TestSuite(GridCachePartitionedQueueCreateMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicQueueCreateMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedSetSelfTest.class)); - suite.addTest(new TestSuite(IgnitePartitionedSetNoBackupsSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicSetSelfTest.class)); - suite.addTest(new TestSuite(IgnitePartitionedCountDownLatchSelfTest.class)); - suite.addTest(new TestSuite(IgniteDataStructureWithJobTest.class)); - suite.addTest(new TestSuite(IgnitePartitionedSemaphoreSelfTest.class)); - suite.addTest(new TestSuite(SemaphoreFailoverSafeReleasePermitsTest.class)); - suite.addTest(new TestSuite(SemaphoreFailoverNoWaitingAcquirerTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedQueueFailoverDataConsistencySelfTest.class)); + 
suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicQueueFailoverDataConsistencySelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCacheLocalSequenceApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheLocalSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheLocalAtomicSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheLocalQueueApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheLocalAtomicQueueApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteLocalCountDownLatchSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteLocalSemaphoreSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteLocalLockSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedSequenceApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedSequenceMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedQueueApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedQueueMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedQueueRotativeMultiNodeTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedDataStructuresFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteReplicatedCountDownLatchSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteReplicatedSemaphoreSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteReplicatedLockSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicReplicatedNodeRestartSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedSequenceApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedSequenceMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedQueueApiSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridCachePartitionedAtomicQueueApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedQueueMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicQueueMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheQueueClientDisconnectTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedQueueCreateMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicQueueCreateMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgnitePartitionedSetNoBackupsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgnitePartitionedCountDownLatchSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDataStructureWithJobTest.class)); + suite.addTest(new JUnit4TestAdapter(IgnitePartitionedSemaphoreSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SemaphoreFailoverSafeReleasePermitsTest.class)); + suite.addTest(new JUnit4TestAdapter(SemaphoreFailoverNoWaitingAcquirerTest.class)); // TODO IGNITE-3141, enabled when fixed. 
- // suite.addTest(new TestSuite(IgnitePartitionedLockSelfTest.class)); + // suite.addTest(new JUnit4TestAdapter(IgnitePartitionedLockSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedSetFailoverSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicSetFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedSetFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicSetFailoverSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedQueueRotativeMultiNodeTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicQueueRotativeMultiNodeTest.class)); - suite.addTest(new TestSuite(GridCacheQueueCleanupSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedQueueRotativeMultiNodeTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicQueueRotativeMultiNodeTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheQueueCleanupSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedQueueEntryMoveSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedQueueEntryMoveSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedDataStructuresFailoverSelfTest.class)); - suite.addTest(new TestSuite(GridCacheQueueMultiNodeConsistencySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedDataStructuresFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheQueueMultiNodeConsistencySelfTest.class)); - suite.addTest(new TestSuite(IgniteLocalAtomicLongApiSelfTest.class)); - suite.addTest(new TestSuite(IgnitePartitionedAtomicLongApiSelfTest.class)); - suite.addTest(new TestSuite(IgniteReplicatedAtomicLongApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteLocalAtomicLongApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgnitePartitionedAtomicLongApiSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(IgniteReplicatedAtomicLongApiSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicSequenceMultiThreadedTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicSequenceTxSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicSequenceMultiThreadedTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicSequenceTxSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicStampedApiSelfTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedAtomicStampedApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicStampedApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedAtomicStampedApiSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedAtomicReferenceApiSelfTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedAtomicReferenceApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicReferenceApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedAtomicReferenceApiSelfTest.class)); - //suite.addTest(new TestSuite(GridCachePartitionedAtomicReferenceMultiNodeTest.class)); - //suite.addTest(new TestSuite(GridCacheReplicatedAtomicReferenceMultiNodeTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicReferenceMultiNodeTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedAtomicReferenceMultiNodeTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedNodeRestartTxSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedQueueJoinedNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNodeRestartTxSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedQueueJoinedNodeSelfTest.class)); - suite.addTest(new TestSuite(IgniteDataStructureUniqueNameTest.class)); - //suite.addTest(new 
TestSuite(IgniteDataStructuresNoClassOnServerTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDataStructureUniqueNameTest.class)); + //suite.addTest(new JUnit4TestAdapter(IgniteDataStructuresNoClassOnServerTest.class)); - suite.addTest(new TestSuite(IgniteClientDataStructuresTest.class)); - suite.addTest(new TestSuite(IgniteClientDiscoveryDataStructuresTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientDataStructuresTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientDiscoveryDataStructuresTest.class)); - suite.addTest(new TestSuite(IgnitePartitionedQueueNoBackupsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgnitePartitionedQueueNoBackupsTest.class)); - suite.addTest(new TestSuite(IgniteSequenceInternalCleanupTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSequenceInternalCleanupTest.class)); - suite.addTestSuite(AtomicCacheAffinityConfigurationTest.class); + suite.addTest(new JUnit4TestAdapter(AtomicCacheAffinityConfigurationTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheEvictionSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheEvictionSelfTestSuite.java index ad9658d3593ca..66808900581da 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheEvictionSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheEvictionSelfTestSuite.java @@ -17,10 +17,12 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.GridCachePreloadingEvictionsSelfTest; import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearEvictionSelfTest; import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearEvictionSelfTest; +import org.apache.ignite.internal.processors.cache.eviction.DhtAndNearEvictionTest; import 
org.apache.ignite.internal.processors.cache.eviction.GridCacheConcurrentEvictionConsistencySelfTest; import org.apache.ignite.internal.processors.cache.eviction.GridCacheConcurrentEvictionsSelfTest; import org.apache.ignite.internal.processors.cache.eviction.GridCacheEmptyEntriesLocalSelfTest; @@ -29,7 +31,6 @@ import org.apache.ignite.internal.processors.cache.eviction.GridCacheEvictionFilterSelfTest; import org.apache.ignite.internal.processors.cache.eviction.GridCacheEvictionLockUnlockSelfTest; import org.apache.ignite.internal.processors.cache.eviction.GridCacheEvictionTouchSelfTest; -import org.apache.ignite.internal.processors.cache.eviction.DhtAndNearEvictionTest; import org.apache.ignite.internal.processors.cache.eviction.fifo.FifoEvictionPolicyFactorySelfTest; import org.apache.ignite.internal.processors.cache.eviction.fifo.FifoEvictionPolicySelfTest; import org.apache.ignite.internal.processors.cache.eviction.lru.LruEvictionPolicyFactorySelfTest; @@ -49,53 +50,57 @@ import org.apache.ignite.internal.processors.cache.eviction.paged.RandomLruPageEvictionWithRebalanceTest; import org.apache.ignite.internal.processors.cache.eviction.sorted.SortedEvictionPolicyFactorySelfTest; import org.apache.ignite.internal.processors.cache.eviction.sorted.SortedEvictionPolicySelfTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache eviction. */ -public class IgniteCacheEvictionSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheEvictionSelfTestSuite { /** + * @param ignoredTests Ignored tests. * @return Cache eviction test suite. - * @throws Exception If failed. 
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite(Collection ignoredTests) {
         TestSuite suite = new TestSuite("Ignite Cache Eviction Test Suite");
 
-        suite.addTest(new TestSuite(FifoEvictionPolicySelfTest.class));
-        suite.addTest(new TestSuite(SortedEvictionPolicySelfTest.class));
-        suite.addTest(new TestSuite(LruEvictionPolicySelfTest.class));
-        suite.addTest(new TestSuite(FifoEvictionPolicyFactorySelfTest.class));
-        suite.addTest(new TestSuite(SortedEvictionPolicyFactorySelfTest.class));
-        suite.addTest(new TestSuite(LruEvictionPolicyFactorySelfTest.class));
-        suite.addTest(new TestSuite(LruNearEvictionPolicySelfTest.class));
-        suite.addTest(new TestSuite(LruNearOnlyNearEvictionPolicySelfTest.class));
-        suite.addTest(new TestSuite(GridCacheNearEvictionSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheAtomicNearEvictionSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheEvictionFilterSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheConcurrentEvictionsSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheConcurrentEvictionConsistencySelfTest.class));
-        suite.addTest(new TestSuite(GridCacheEvictionTouchSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheEvictionLockUnlockSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePreloadingEvictionsSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheEmptyEntriesPartitionedSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheEmptyEntriesLocalSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheEvictableEntryEqualsSelfTest.class));
+        GridTestUtils.addTestIfNeeded(suite, FifoEvictionPolicySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, SortedEvictionPolicySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, LruEvictionPolicySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, FifoEvictionPolicyFactorySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, SortedEvictionPolicyFactorySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, LruEvictionPolicyFactorySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, LruNearEvictionPolicySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, LruNearOnlyNearEvictionPolicySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearEvictionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicNearEvictionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheEvictionFilterSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheConcurrentEvictionsSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheConcurrentEvictionConsistencySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheEvictionTouchSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheEvictionLockUnlockSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePreloadingEvictionsSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheEmptyEntriesPartitionedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheEmptyEntriesLocalSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheEvictableEntryEqualsSelfTest.class, ignoredTests);
 
-        suite.addTest(new TestSuite(RandomLruPageEvictionMultinodeTest.class));
-        suite.addTest(new TestSuite(RandomLruNearEnabledPageEvictionMultinodeTest.class));
-        suite.addTest(new TestSuite(Random2LruPageEvictionMultinodeTest.class));
-        suite.addTest(new TestSuite(Random2LruNearEnabledPageEvictionMultinodeTest.class));
-        suite.addTest(new TestSuite(RandomLruPageEvictionWithRebalanceTest.class));
-        suite.addTest(new TestSuite(Random2LruPageEvictionWithRebalanceTest.class));
-        suite.addTest(new TestSuite(PageEvictionTouchOrderTest.class));
-        suite.addTest(new TestSuite(PageEvictionReadThroughTest.class));
-        suite.addTest(new TestSuite(PageEvictionDataStreamerTest.class));
+        GridTestUtils.addTestIfNeeded(suite, RandomLruPageEvictionMultinodeTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, RandomLruNearEnabledPageEvictionMultinodeTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, Random2LruPageEvictionMultinodeTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, Random2LruNearEnabledPageEvictionMultinodeTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, RandomLruPageEvictionWithRebalanceTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, Random2LruPageEvictionWithRebalanceTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, PageEvictionTouchOrderTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, PageEvictionReadThroughTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, PageEvictionDataStreamerTest.class, ignoredTests);
 
-        suite.addTest(new TestSuite(PageEvictionMetricTest.class));
+        GridTestUtils.addTestIfNeeded(suite, PageEvictionMetricTest.class, ignoredTests);
 
-        suite.addTest(new TestSuite(PageEvictionPagesRecyclingAndReusingTest.class));
+        GridTestUtils.addTestIfNeeded(suite, PageEvictionPagesRecyclingAndReusingTest.class, ignoredTests);
 
-        suite.addTest(new TestSuite(DhtAndNearEvictionTest.class));
+        GridTestUtils.addTestIfNeeded(suite, DhtAndNearEvictionTest.class, ignoredTests);
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite.java
index 99dd828246a0c..8baa8fa323aef 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite.java
@@ -18,6 +18,7 @@
 package org.apache.ignite.testsuites;
 
 import java.util.Set;
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.GridCacheIncrementTransformTest;
 import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheAtomicNodeJoinTest;
@@ -40,11 +41,14 @@
 import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteChangingBaselineDownCacheRemoveFailoverTest;
 import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteChangingBaselineUpCacheRemoveFailoverTest;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Test suite.
  */
-public class IgniteCacheFailoverTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheFailoverTestSuite {
     /**
      * @return Ignite Cache Failover test suite.
      * @throws Exception Thrown in case of the failure.
@@ -61,36 +65,36 @@ public static TestSuite suite() throws Exception {
     public static TestSuite suite(Set ignoredTests) throws Exception {
         TestSuite suite = new TestSuite("Cache Failover Test Suite");
 
-        suite.addTestSuite(GridCacheAtomicInvalidPartitionHandlingSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicClientInvalidPartitionHandlingSelfTest.class);
-        suite.addTestSuite(GridCacheRebalancingPartitionDistributionTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicInvalidPartitionHandlingSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicClientInvalidPartitionHandlingSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheRebalancingPartitionDistributionTest.class));
 
         GridTestUtils.addTestIfNeeded(suite, GridCacheIncrementTransformTest.class, ignoredTests);
 
         // Failure consistency tests.
-        suite.addTestSuite(GridCacheAtomicRemoveFailureTest.class);
-        suite.addTestSuite(GridCacheAtomicClientRemoveFailureTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicRemoveFailureTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicClientRemoveFailureTest.class));
 
-        suite.addTestSuite(GridCacheDhtAtomicRemoveFailureTest.class);
-        suite.addTestSuite(GridCacheDhtRemoveFailureTest.class);
-        suite.addTestSuite(GridCacheDhtClientRemoveFailureTest.class);
-        suite.addTestSuite(GridCacheNearRemoveFailureTest.class);
-        suite.addTestSuite(GridCacheAtomicNearRemoveFailureTest.class);
-        suite.addTestSuite(IgniteChangingBaselineUpCacheRemoveFailoverTest.class);
-        suite.addTestSuite(IgniteChangingBaselineDownCacheRemoveFailoverTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheDhtAtomicRemoveFailureTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheDhtRemoveFailureTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheDhtClientRemoveFailureTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheNearRemoveFailureTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearRemoveFailureTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteChangingBaselineUpCacheRemoveFailoverTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteChangingBaselineDownCacheRemoveFailoverTest.class));
 
-        suite.addTestSuite(IgniteCacheAtomicNodeJoinTest.class);
-        suite.addTestSuite(IgniteCacheTxNodeJoinTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicNodeJoinTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheTxNodeJoinTest.class));
 
-        suite.addTestSuite(IgniteCacheTxNearDisabledPutGetRestartTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheTxNearDisabledPutGetRestartTest.class));
 
-        suite.addTestSuite(IgniteCacheSizeFailoverTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheSizeFailoverTest.class));
 
-        suite.addTestSuite(IgniteAtomicLongChangingTopologySelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteAtomicLongChangingTopologySelfTest.class));
 
-        suite.addTestSuite(GridCacheTxNodeFailureSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheTxNodeFailureSelfTest.class));
 
-        suite.addTestSuite(AtomicPutAllChangingTopologyTest.class);
+        suite.addTest(new JUnit4TestAdapter(AtomicPutAllChangingTopologyTest.class));
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite2.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite2.java
index 9cb374134e379..ecc6c2590e2a3 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite2.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite2.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;
 
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.CacheGetFromJobTest;
 import org.apache.ignite.internal.processors.cache.distributed.CacheAsyncOperationsFailoverAtomicTest;
@@ -32,41 +33,41 @@
 import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedFailoverSelfTest;
 import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteChangingBaselineDownCachePutAllFailoverTest;
 import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteChangingBaselineUpCachePutAllFailoverTest;
-import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteStableBaselineCachePutAllFailoverTest;
-import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteStableBaselineCacheRemoveFailoverTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  *
  */
+@RunWith(AllTests.class)
 public class IgniteCacheFailoverTestSuite2 {
     /**
      * @return Suite.
-     * @throws Exception If failed.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Cache Failover Test Suite2");
 
-        suite.addTestSuite(GridCachePartitionedTxSalvageSelfTest.class);
-        suite.addTestSuite(CacheGetFromJobTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedTxSalvageSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheGetFromJobTest.class));
 
-        suite.addTestSuite(GridCacheAtomicFailoverSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicReplicatedFailoverSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicFailoverSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicReplicatedFailoverSelfTest.class));
 
-        suite.addTestSuite(GridCachePartitionedFailoverSelfTest.class);
-        suite.addTestSuite(GridCacheColocatedFailoverSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedFailoverSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedFailoverSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheColocatedFailoverSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedFailoverSelfTest.class));
 
-        suite.addTestSuite(IgniteCacheCrossCacheTxFailoverTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheCrossCacheTxFailoverTest.class));
 
-        suite.addTestSuite(CacheAsyncOperationsFailoverAtomicTest.class);
-        suite.addTestSuite(CacheAsyncOperationsFailoverTxTest.class);
+        suite.addTest(new JUnit4TestAdapter(CacheAsyncOperationsFailoverAtomicTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheAsyncOperationsFailoverTxTest.class));
 
-        suite.addTestSuite(CachePutAllFailoverAtomicTest.class);
-        suite.addTestSuite(CachePutAllFailoverTxTest.class);
-        //suite.addTestSuite(IgniteStableBaselineCachePutAllFailoverTest.class);
-        //suite.addTestSuite(IgniteStableBaselineCacheRemoveFailoverTest.class);
-        suite.addTestSuite(IgniteChangingBaselineDownCachePutAllFailoverTest.class);
-        suite.addTestSuite(IgniteChangingBaselineUpCachePutAllFailoverTest.class);
+        suite.addTest(new JUnit4TestAdapter(CachePutAllFailoverAtomicTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePutAllFailoverTxTest.class));
+        //suite.addTest(new JUnit4TestAdapter(IgniteStableBaselineCachePutAllFailoverTest.class));
+        //suite.addTest(new JUnit4TestAdapter(IgniteStableBaselineCacheRemoveFailoverTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteChangingBaselineDownCachePutAllFailoverTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteChangingBaselineUpCachePutAllFailoverTest.class));
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite3.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite3.java
index f1cf2676b7531..6c53c8c6ecea4 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite3.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuite3.java
@@ -17,25 +17,28 @@
 package org.apache.ignite.testsuites;
 
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.distributed.CacheGetInsideLockChangingTopologyTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCachePutRetryAtomicSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCachePutRetryTransactionalSelfTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Test suite.
  */
-public class IgniteCacheFailoverTestSuite3 extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheFailoverTestSuite3 {
     /**
      * @return Ignite Cache Failover test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Cache Failover Test Suite3");
 
-        suite.addTestSuite(IgniteCachePutRetryAtomicSelfTest.class);
-        suite.addTestSuite(IgniteCachePutRetryTransactionalSelfTest.class);
-        suite.addTestSuite(CacheGetInsideLockChangingTopologyTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePutRetryAtomicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePutRetryTransactionalSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheGetInsideLockChangingTopologyTest.class));
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuiteSsl.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuiteSsl.java
index 99a1463deca5b..fafefb5f0bb3e 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuiteSsl.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFailoverTestSuiteSsl.java
@@ -17,24 +17,27 @@
 package org.apache.ignite.testsuites;
 
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.util.IgniteUtils;
 import org.apache.ignite.spi.communication.tcp.IgniteCacheSslStartStopSelfTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Test suite.
  */
-public class IgniteCacheFailoverTestSuiteSsl extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheFailoverTestSuiteSsl {
     /**
      * @return Ignite Cache Failover test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Cache Failover Test Suite SSL");
 
         // Disable SSL test with old JDK because of https://bugs.openjdk.java.net/browse/JDK-8013809.
         if (!IgniteUtils.isHotSpot() || IgniteUtils.isJavaVersionAtLeast("1.7.0_65"))
-            suite.addTestSuite(IgniteCacheSslStartStopSelfTest.class);
+            suite.addTest(new JUnit4TestAdapter(IgniteCacheSslStartStopSelfTest.class));
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiMultiJvmSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiMultiJvmSelfTestSuite.java
index daab8c6f2196e..b62e9ba5a341d 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiMultiJvmSelfTestSuite.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiMultiJvmSelfTestSuite.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;
 
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.multijvm.GridCacheAtomicClientOnlyMultiJvmFullApiSelfTest;
 import org.apache.ignite.internal.processors.cache.multijvm.GridCacheAtomicClientOnlyMultiJvmP2PDisabledFullApiSelfTest;
@@ -42,52 +43,54 @@
 import org.apache.ignite.internal.processors.cache.multijvm.GridCacheReplicatedMultiJvmP2PDisabledFullApiSelfTest;
 import org.apache.ignite.internal.processors.cache.multijvm.GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest;
 import org.apache.ignite.internal.processors.cache.multijvm.GridCacheReplicatedOnheapMultiJvmFullApiSelfTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Multi-JVM test suite.
  */
-public class IgniteCacheFullApiMultiJvmSelfTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheFullApiMultiJvmSelfTestSuite {
     /**
      * @return Multi-JVM tests suite.
-     * @throws Exception If failed.
     */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Cache Full API Multi Jvm Test Suite");
 
         System.setProperty("H2_JDBC_CONNECTIONS", "500");
 
         // Multi-node.
-        suite.addTestSuite(GridCacheReplicatedMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedMultiJvmP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedAtomicMultiJvmFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedMultiJvmP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedAtomicMultiJvmFullApiSelfTest.class));
 
-        suite.addTestSuite(GridCachePartitionedMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedCopyOnReadDisabledMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicCopyOnReadDisabledMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedMultiJvmP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicMultiJvmP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedCopyOnReadDisabledMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicCopyOnReadDisabledMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMultiJvmP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicMultiJvmP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearEnabledMultiJvmFullApiSelfTest.class));
 
-        suite.addTestSuite(GridCachePartitionedNearDisabledMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledMultiJvmP2PDisabledFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledMultiJvmP2PDisabledFullApiSelfTest.class));
 
-        suite.addTestSuite(GridCacheNearOnlyMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheNearOnlyMultiJvmP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheNearOnlyMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheNearOnlyMultiJvmP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedNearOnlyMultiJvmFullApiSelfTest.class));
 
-        suite.addTestSuite(GridCacheAtomicClientOnlyMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicClientOnlyMultiJvmP2PDisabledFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicClientOnlyMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicClientOnlyMultiJvmP2PDisabledFullApiSelfTest.class));
 
-        suite.addTestSuite(GridCacheAtomicNearOnlyMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicNearOnlyMultiJvmP2PDisabledFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearOnlyMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearOnlyMultiJvmP2PDisabledFullApiSelfTest.class));
 
-        suite.addTestSuite(GridCacheAtomicOnheapMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledAtomicOnheapMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledOnheapMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedOnheapMultiJvmFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedOnheapMultiJvmFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicOnheapMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledAtomicOnheapMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledOnheapMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedOnheapMultiJvmFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedOnheapMultiJvmFullApiSelfTest.class));
 
         return suite;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiSelfTestSuite.java
index 7d40a6ad4243a..3875475e612f0 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiSelfTestSuite.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheFullApiSelfTestSuite.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;
 
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.GridCacheClearSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheAtomicFullApiSelfTest;
@@ -78,103 +79,101 @@
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalFullApiMultithreadedSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalFullApiSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalWithGroupFullApiSelfTest;
-import org.apache.ignite.internal.processors.cache.persistence.standbycluster.extended.GridActivateExtensionTest;
-import org.apache.ignite.testframework.junits.GridAbstractTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Test suite for cache API.
  */
-public class IgniteCacheFullApiSelfTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheFullApiSelfTestSuite {
     /**
      * @return Cache API test suite.
-     * @throws Exception If failed.
      */
-    public static TestSuite suite() throws Exception {
-        System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false");
-
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Cache Full API Test Suite");
 
         // One node.
-        suite.addTestSuite(GridCacheLocalFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheLocalAtomicFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedFilteredPutSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedAtomicFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicNearEnabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicOnheapFullApiSelfTest.class);
-
-        suite.addTestSuite(GridCachePartitionedOnheapFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedAtomicOnheapFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledOnheapFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledAtomicOnheapFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheLocalFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheLocalAtomicFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedFilteredPutSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedAtomicFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearEnabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicOnheapFullApiSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedOnheapFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicOnheapFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledOnheapFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledAtomicOnheapFullApiSelfTest.class));
 
         // No primary.
-        suite.addTestSuite(GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearOnlyNoPrimaryFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedClientOnlyNoPrimaryFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearOnlyNoPrimaryFullApiSelfTest.class));
 
         // Multi-node.
-        suite.addTestSuite(GridCacheReplicatedMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedMultiNodeP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedAtomicMultiNodeFullApiSelfTest.class);
-
-        suite.addTestSuite(GridCachePartitionedMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedCopyOnReadDisabledMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicCopyOnReadDisabledMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedMultiNodeP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicMultiNodeP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicNearEnabledMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(CachePartitionedMultiNodeLongTxTimeoutFullApiTest.class);
-        suite.addTestSuite(CachePartitionedMultiNodeLongTxTimeout2FullApiTest.class);
-        suite.addTestSuite(CachePartitionedNearEnabledMultiNodeLongTxTimeoutFullApiTest.class);
-
-        suite.addTestSuite(GridCachePartitionedNearDisabledMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledMultiNodeP2PDisabledFullApiSelfTest.class);
-
-        suite.addTestSuite(GridCacheNearOnlyMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheNearOnlyMultiNodeP2PDisabledFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedNearOnlyMultiNodeFullApiSelfTest.class);
-
-        suite.addTestSuite(GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicClientOnlyMultiNodeP2PDisabledFullApiSelfTest.class);
-
-        suite.addTestSuite(GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicNearOnlyMultiNodeP2PDisabledFullApiSelfTest.class);
-
-        suite.addTestSuite(CacheReplicatedRendezvousAffinityExcludeNeighborsMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(CacheReplicatedRendezvousAffinityMultiNodeFullApiSelfTest.class);
-
-        suite.addTestSuite(GridCacheNearReloadAllSelfTest.class);
-        suite.addTestSuite(GridCacheColocatedReloadAllSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicReloadAllSelfTest.class);
-        suite.addTestSuite(GridCacheNearTxMultiNodeSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedMultiNodeCounterSelfTest.class);
-
-        suite.addTestSuite(GridCachePartitionedOnheapMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedAtomicOnheapMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledOnheapMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedNearDisabledAtomicOnheapMultiNodeFullApiSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicOnheapMultiNodeFullApiSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedMultiNodeP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedAtomicMultiNodeFullApiSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedCopyOnReadDisabledMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicCopyOnReadDisabledMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMultiNodeP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicMultiNodeP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearEnabledMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePartitionedMultiNodeLongTxTimeoutFullApiTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePartitionedMultiNodeLongTxTimeout2FullApiTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePartitionedNearEnabledMultiNodeLongTxTimeoutFullApiTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledMultiNodeP2PDisabledFullApiSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCacheNearOnlyMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheNearOnlyMultiNodeP2PDisabledFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedNearOnlyMultiNodeFullApiSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicClientOnlyMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicClientOnlyMultiNodeP2PDisabledFullApiSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearOnlyMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearOnlyMultiNodeP2PDisabledFullApiSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(CacheReplicatedRendezvousAffinityExcludeNeighborsMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheReplicatedRendezvousAffinityMultiNodeFullApiSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCacheNearReloadAllSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheColocatedReloadAllSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicReloadAllSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheNearTxMultiNodeSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMultiNodeCounterSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedOnheapMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicOnheapMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledOnheapMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledAtomicOnheapMultiNodeFullApiSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheAtomicOnheapMultiNodeFullApiSelfTest.class));
 
         // Multithreaded.
-        suite.addTestSuite(GridCacheLocalFullApiMultithreadedSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedFullApiMultithreadedSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedFullApiMultithreadedSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheLocalFullApiMultithreadedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedFullApiMultithreadedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCachePartitionedFullApiMultithreadedSelfTest.class));
 
         // Other.
- suite.addTestSuite(GridCacheClearSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridCacheClearSelfTest.class)); - suite.addTestSuite(GridCacheLocalWithGroupFullApiSelfTest.class); - suite.addTestSuite(GridCacheLocalAtomicWithGroupFullApiSelfTest.class); - suite.addTestSuite(GridCacheAtomicMultiNodeWithGroupFullApiSelfTest.class); - suite.addTestSuite(GridCacheAtomicNearEnabledMultiNodeWithGroupFullApiSelfTest.class); - suite.addTestSuite(GridCachePartitionedMultiNodeWithGroupFullApiSelfTest.class); - suite.addTestSuite(GridCachePartitionedNearDisabledMultiNodeWithGroupFullApiSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridCacheLocalWithGroupFullApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheLocalAtomicWithGroupFullApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheAtomicMultiNodeWithGroupFullApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheAtomicNearEnabledMultiNodeWithGroupFullApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMultiNodeWithGroupFullApiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledMultiNodeWithGroupFullApiSelfTest.class)); - //suite.addTestSuite(GridActivateExtensionTest.class); + //suite.addTest(new JUnit4TestAdapter(GridActivateExtensionTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheIteratorsSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheIteratorsSelfTestSuite.java index 6fb3b486cb1a5..ee2feeec1cccb 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheIteratorsSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheIteratorsSelfTestSuite.java @@ -17,26 +17,31 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import 
org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedIteratorsSelfTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedIteratorsSelfTest; import org.apache.ignite.internal.processors.cache.local.GridCacheLocalIteratorsSelfTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Cache iterators test suite. */ -public class IgniteCacheIteratorsSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheIteratorsSelfTestSuite { /** + * @param ignoredTests Ignored tests. * @return Cache iterators test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite(Collection<Class> ignoredTests) { TestSuite suite = new TestSuite("Cache Iterators Test Suite"); - suite.addTest(new TestSuite(GridCacheLocalIteratorsSelfTest.class)); - suite.addTest(new TestSuite(GridCacheReplicatedIteratorsSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedIteratorsSelfTest.class)); + GridTestUtils.addTestIfNeeded(suite, GridCacheLocalIteratorsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheReplicatedIteratorsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedIteratorsSelfTest.class, ignoredTests); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheLoadConsistencyTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheLoadConsistencyTestSuite.java index cd0be9ce23b4c..ef4423995cea5 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheLoadConsistencyTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheLoadConsistencyTestSuite.java @@ -17,25 +17,28 @@ package org.apache.ignite.testsuites; +import
junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.distributed.CacheNearDisabledAtomicInvokeRestartSelfTest; import org.apache.ignite.internal.processors.cache.distributed.CacheNearDisabledTransactionalInvokeRestartSelfTest; import org.apache.ignite.internal.processors.cache.distributed.CacheNearDisabledTransactionalWriteReadRestartSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite. */ -public class IgniteCacheLoadConsistencyTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheLoadConsistencyTestSuite { /** * @return Ignite Cache Failover test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Cache Load Consistency Test Suite"); - suite.addTestSuite(CacheNearDisabledAtomicInvokeRestartSelfTest.class); - suite.addTestSuite(CacheNearDisabledTransactionalInvokeRestartSelfTest.class); - suite.addTestSuite(CacheNearDisabledTransactionalWriteReadRestartSelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheNearDisabledAtomicInvokeRestartSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheNearDisabledTransactionalInvokeRestartSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheNearDisabledTransactionalWriteReadRestartSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMetricsSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMetricsSelfTestSuite.java index b6dcb21af8595..88ad48029db8a 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMetricsSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMetricsSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import 
junit.framework.TestSuite; import org.apache.ignite.internal.TransactionMetricsMxBeanImplTest; import org.apache.ignite.internal.processors.cache.CacheGroupsMetricsRebalanceTest; @@ -38,45 +39,49 @@ import org.apache.ignite.internal.processors.cache.local.GridCacheAtomicLocalTckMetricsSelfTestImpl; import org.apache.ignite.internal.processors.cache.local.GridCacheLocalAtomicMetricsNoReadThroughSelfTest; import org.apache.ignite.internal.processors.cache.local.GridCacheLocalMetricsSelfTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache metrics. */ -public class IgniteCacheMetricsSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheMetricsSelfTestSuite { /** + * @param ignoredTests Ignored tests. * @return Cache metrics test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite(Collection<Class> ignoredTests) { TestSuite suite = new TestSuite("Cache Metrics Test Suite"); - suite.addTestSuite(GridCacheLocalMetricsSelfTest.class); - suite.addTestSuite(GridCacheLocalAtomicMetricsNoReadThroughSelfTest.class); - suite.addTestSuite(GridCacheNearMetricsSelfTest.class); - suite.addTestSuite(GridCacheNearAtomicMetricsSelfTest.class); - suite.addTestSuite(GridCacheReplicatedMetricsSelfTest.class); - suite.addTestSuite(GridCachePartitionedMetricsSelfTest.class); - suite.addTestSuite(GridCachePartitionedHitsAndMissesSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheLocalMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheLocalAtomicMetricsNoReadThroughSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearAtomicMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite,
GridCacheReplicatedMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedHitsAndMissesSelfTest.class, ignoredTests); // Atomic cache. - suite.addTestSuite(GridCacheAtomicLocalMetricsSelfTest.class); - suite.addTestSuite(GridCacheAtomicLocalMetricsNoStoreSelfTest.class); - suite.addTestSuite(GridCacheAtomicReplicatedMetricsSelfTest.class); - suite.addTestSuite(GridCacheAtomicPartitionedMetricsSelfTest.class); - suite.addTestSuite(GridCacheAtomicPartitionedTckMetricsSelfTestImpl.class); - suite.addTestSuite(GridCacheAtomicLocalTckMetricsSelfTestImpl.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicLocalMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicLocalMetricsNoStoreSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicReplicatedMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicPartitionedMetricsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicPartitionedTckMetricsSelfTestImpl.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicLocalTckMetricsSelfTestImpl.class, ignoredTests); - suite.addTestSuite(CacheGroupsMetricsRebalanceTest.class); - suite.addTestSuite(CacheValidatorMetricsTest.class); - suite.addTestSuite(CacheMetricsEntitiesCountTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheGroupsMetricsRebalanceTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheValidatorMetricsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheMetricsEntitiesCountTest.class, ignoredTests); // Cluster wide metrics. 
- suite.addTestSuite(CacheMetricsForClusterGroupSelfTest.class); - suite.addTestSuite(OffheapCacheMetricsForClusterGroupSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheMetricsForClusterGroupSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, OffheapCacheMetricsForClusterGroupSelfTest.class, ignoredTests); - suite.addTestSuite(TransactionMetricsMxBeanImplTest.class); + GridTestUtils.addTestIfNeeded(suite, TransactionMetricsMxBeanImplTest.class, ignoredTests); - suite.addTestSuite(GridEvictionPolicyMBeansTest.class); + GridTestUtils.addTestIfNeeded(suite, GridEvictionPolicyMBeansTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite.java index 8585ebe66ba09..b426e7cbdfa31 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccClusterRestartTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccConfigurationValidationTest; @@ -26,16 +27,24 @@ import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccPartitionedCoordinatorFailoverTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccProcessorLazyStartTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccProcessorTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccRemoteTxOnNearNodeStartTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccReplicatedCoordinatorFailoverTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccScanQueryWithConcurrentTransactionTest; import 
org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSizeWithConcurrentTransactionTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTransactionsTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTxFailoverTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccVacuumTest; +import org.apache.ignite.internal.processors.cache.mvcc.MvccCachePeekTest; +import org.apache.ignite.internal.processors.cache.mvcc.MvccUnsupportedTxModesTest; +import org.apache.ignite.internal.processors.datastreamer.DataStreamProcessorMvccPersistenceSelfTest; import org.apache.ignite.internal.processors.datastreamer.DataStreamProcessorMvccSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ +@RunWith(AllTests.class) public class IgniteCacheMvccTestSuite extends TestSuite { /** * @return Test suite. @@ -44,25 +53,33 @@ public static TestSuite suite() { TestSuite suite = new TestSuite("IgniteCache MVCC Test Suite"); // Basic tests. 
- suite.addTestSuite(CacheMvccTransactionsTest.class); - suite.addTestSuite(CacheMvccProcessorTest.class); - suite.addTestSuite(CacheMvccVacuumTest.class); - suite.addTestSuite(CacheMvccConfigurationValidationTest.class); + suite.addTest(new JUnit4TestAdapter(CacheMvccTransactionsTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccProcessorTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccVacuumTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccConfigurationValidationTest.class)); - suite.addTestSuite(DataStreamProcessorMvccSelfTest.class); - suite.addTestSuite(CacheMvccOperationChecksTest.class); + suite.addTest(new JUnit4TestAdapter(DataStreamProcessorMvccSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DataStreamProcessorMvccPersistenceSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccOperationChecksTest.class)); + + suite.addTest(new JUnit4TestAdapter(CacheMvccRemoteTxOnNearNodeStartTest.class)); + + suite.addTest(new JUnit4TestAdapter(MvccUnsupportedTxModesTest.class)); + + suite.addTest(new JUnit4TestAdapter(MvccCachePeekTest.class)); // Concurrent ops tests. - suite.addTestSuite(CacheMvccIteratorWithConcurrentTransactionTest.class); - suite.addTestSuite(CacheMvccLocalEntriesWithConcurrentTransactionTest.class); - suite.addTestSuite(CacheMvccScanQueryWithConcurrentTransactionTest.class); - suite.addTestSuite(CacheMvccSizeWithConcurrentTransactionTest.class); + suite.addTest(new JUnit4TestAdapter(CacheMvccIteratorWithConcurrentTransactionTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccLocalEntriesWithConcurrentTransactionTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccScanQueryWithConcurrentTransactionTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccSizeWithConcurrentTransactionTest.class)); // Failover tests. 
- suite.addTestSuite(CacheMvccClusterRestartTest.class); - suite.addTestSuite(CacheMvccPartitionedCoordinatorFailoverTest.class); - suite.addTestSuite(CacheMvccReplicatedCoordinatorFailoverTest.class); - suite.addTestSuite(CacheMvccProcessorLazyStartTest.class); + suite.addTest(new JUnit4TestAdapter(CacheMvccTxFailoverTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccClusterRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccPartitionedCoordinatorFailoverTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccReplicatedCoordinatorFailoverTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccProcessorLazyStartTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite1.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite1.java new file mode 100755 index 0000000000000..ae3680d29136d --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite1.java @@ -0,0 +1,249 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import java.util.Set; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.IgniteCacheEntryProcessorSequentialCallTest; +import org.apache.ignite.cache.IgniteWarmupClosureSelfTest; +import org.apache.ignite.cache.store.CacheStoreReadFromBackupTest; +import org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest; +import org.apache.ignite.cache.store.GridStoreLoadCacheTest; +import org.apache.ignite.cache.store.StoreResourceInjectionSelfTest; +import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreTest; +import org.apache.ignite.cache.store.jdbc.GridCacheJdbcBlobStoreSelfTest; +import org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformerTest; +import org.apache.ignite.internal.managers.communication.IgniteCommunicationBalanceMultipleConnectionsTest; +import org.apache.ignite.internal.managers.communication.IgniteCommunicationBalancePairedConnectionsTest; +import org.apache.ignite.internal.managers.communication.IgniteCommunicationBalanceTest; +import org.apache.ignite.internal.managers.communication.IgniteCommunicationSslBalanceTest; +import org.apache.ignite.internal.managers.communication.IgniteIoTestMessagesTest; +import org.apache.ignite.internal.managers.communication.IgniteVariousConnectionNumberTest; +import org.apache.ignite.internal.processors.cache.CacheAffinityCallSelfTest; +import org.apache.ignite.internal.processors.cache.CacheDeferredDeleteQueueTest; +import org.apache.ignite.internal.processors.cache.CacheDeferredDeleteSanitySelfTest; +import org.apache.ignite.internal.processors.cache.CacheMvccTxFastFinishTest; +import org.apache.ignite.internal.processors.cache.CacheNamesSelfTest; +import org.apache.ignite.internal.processors.cache.CacheNamesWithSpecialCharactersTest; +import org.apache.ignite.internal.processors.cache.CacheTxFastFinishTest; +import 
org.apache.ignite.internal.processors.cache.DataStorageConfigurationValidationTest; +import org.apache.ignite.internal.processors.cache.GridCacheAffinityApiSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheAffinityMapperSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheAffinityRoutingSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheAsyncOperationsLimitSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheConfigurationConsistencySelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheConfigurationValidationSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheLifecycleAwareSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheMissingCommitVersionSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheMvccManagerSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheMvccMultiThreadedUpdateSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheMvccPartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheMvccSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheOffHeapAtomicMultiThreadedUpdateSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheOffHeapMultiThreadedUpdateSelfTest; +import org.apache.ignite.internal.processors.cache.GridCachePartitionedLocalStoreSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheReplicatedLocalStoreSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheStopSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheTcpClientDiscoveryMultiThreadedTest; +import org.apache.ignite.internal.processors.cache.GridDataStorageConfigurationConsistencySelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicInvokeTest; +import 
org.apache.ignite.internal.processors.cache.IgniteCacheAtomicLocalInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicLocalWithStoreInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicNearEnabledInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicStopBusySelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicWithStoreInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheEntryListenerAtomicLocalTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheEntryListenerAtomicReplicatedTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheEntryListenerAtomicTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheEntryProcessorCallTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheManyAsyncOperationsTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheMvccTxInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheMvccTxNearEnabledInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheTxInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheTxNearEnabledInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteClientAffinityAssignmentSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteIncompleteCacheObjectSelfTest; +import org.apache.ignite.internal.processors.cache.binary.CacheKeepBinaryWithInterceptorTest; +import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAffinityRoutingBinarySelfTest; +import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultiNodeSelfTest; +import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultithreadedSelfTest; +import 
org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAtomicPartitionedOnlyBinaryMultiNodeSelfTest; +import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheAtomicPartitionedOnlyBinaryMultithreadedSelfTest; +import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheBinariesNearPartitionedByteArrayValuesSelfTest; +import org.apache.ignite.internal.processors.cache.binary.distributed.dht.GridCacheBinariesPartitionedOnlyByteArrayValuesSelfTest; +import org.apache.ignite.internal.processors.cache.context.IgniteCacheAtomicExecutionContextTest; +import org.apache.ignite.internal.processors.cache.context.IgniteCacheContinuousExecutionContextTest; +import org.apache.ignite.internal.processors.cache.context.IgniteCacheIsolatedExecutionContextTest; +import org.apache.ignite.internal.processors.cache.context.IgniteCacheP2PDisableExecutionContextTest; +import org.apache.ignite.internal.processors.cache.context.IgniteCachePrivateExecutionContextTest; +import org.apache.ignite.internal.processors.cache.context.IgniteCacheReplicatedExecutionContextTest; +import org.apache.ignite.internal.processors.cache.context.IgniteCacheSharedExecutionContextTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheAtomicNearUpdateTopologyChangeTest; +import org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheAtomicMessageRecovery10ConnectionsTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheAtomicMessageRecoveryPairedConnectionsTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheAtomicMessageRecoveryTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheConnectionRecovery10ConnectionsTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheConnectionRecoveryTest; 
+import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheMessageRecoveryIdleConnectionTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheMessageWriteTimeoutTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheAtomicNearCacheSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionsStateValidatorSelfTest; +import org.apache.ignite.internal.processors.cache.expiry.IgniteCacheAtomicLocalExpiryPolicyTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheEntryProcessorExternalizableFailedTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheEntryProcessorNonSerializableTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * Test suite. + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite1 { + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 1"); + + Set<Class> ignoredTests = new HashSet<>(); + + // Skip classes that already contain Mvcc tests + ignoredTests.add(CacheKeepBinaryWithInterceptorTest.class); + ignoredTests.add(CacheEntryProcessorNonSerializableTest.class); + ignoredTests.add(CacheEntryProcessorExternalizableFailedTest.class); + ignoredTests.add(IgniteCacheEntryProcessorSequentialCallTest.class); + ignoredTests.add(IgniteCacheEntryProcessorCallTest.class); + ignoredTests.add(GridCacheConfigurationConsistencySelfTest.class); + ignoredTests.add(IgniteCacheMessageRecoveryIdleConnectionTest.class); + ignoredTests.add(IgniteCacheConnectionRecoveryTest.class); + ignoredTests.add(IgniteCacheConnectionRecovery10ConnectionsTest.class); + ignoredTests.add(CacheDeferredDeleteSanitySelfTest.class); + ignoredTests.add(CacheDeferredDeleteQueueTest.class); +
ignoredTests.add(GridCacheStopSelfTest.class); + ignoredTests.add(GridCacheBinariesNearPartitionedByteArrayValuesSelfTest.class); + ignoredTests.add(GridCacheBinariesPartitionedOnlyByteArrayValuesSelfTest.class); + + // Atomic caches. + ignoredTests.add(IgniteCacheEntryListenerAtomicTest.class); + ignoredTests.add(IgniteCacheEntryListenerAtomicReplicatedTest.class); + ignoredTests.add(IgniteCacheEntryListenerAtomicLocalTest.class); + ignoredTests.add(IgniteCacheAtomicLocalExpiryPolicyTest.class); + ignoredTests.add(IgniteCacheAtomicInvokeTest.class); + ignoredTests.add(IgniteCacheAtomicNearEnabledInvokeTest.class); + ignoredTests.add(IgniteCacheAtomicWithStoreInvokeTest.class); + ignoredTests.add(IgniteCacheAtomicLocalInvokeTest.class); + ignoredTests.add(IgniteCacheAtomicLocalWithStoreInvokeTest.class); + ignoredTests.add(GridCachePartitionedLocalStoreSelfTest.class); + ignoredTests.add(GridCacheReplicatedLocalStoreSelfTest.class); + ignoredTests.add(CacheStoreReadFromBackupTest.class); + + ignoredTests.add(IgniteCacheAtomicExecutionContextTest.class); + ignoredTests.add(IgniteCacheReplicatedExecutionContextTest.class); + ignoredTests.add(IgniteCacheContinuousExecutionContextTest.class); + ignoredTests.add(IgniteCacheIsolatedExecutionContextTest.class); + ignoredTests.add(IgniteCacheP2PDisableExecutionContextTest.class); + ignoredTests.add(IgniteCachePrivateExecutionContextTest.class); + ignoredTests.add(IgniteCacheSharedExecutionContextTest.class); + + ignoredTests.add(IgniteCacheAtomicStopBusySelfTest.class); + ignoredTests.add(GridCacheAtomicNearCacheSelfTest.class); + ignoredTests.add(CacheAtomicNearUpdateTopologyChangeTest.class); + ignoredTests.add(GridCacheOffHeapAtomicMultiThreadedUpdateSelfTest.class); + ignoredTests.add(IgniteCacheAtomicMessageRecoveryTest.class); + ignoredTests.add(IgniteCacheAtomicMessageRecoveryPairedConnectionsTest.class); + ignoredTests.add(IgniteCacheAtomicMessageRecovery10ConnectionsTest.class); + + 
ignoredTests.add(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearPartitionedAtomic.class); + ignoredTests.add(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearReplicatedAtomic.class); + ignoredTests.add(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientPartitionedAtomic.class); + ignoredTests.add(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientReplicatedAtomic.class); + + ignoredTests.add(GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultiNodeSelfTest.class); + ignoredTests.add(GridCacheAtomicPartitionedOnlyBinaryDataStreamerMultithreadedSelfTest.class); + ignoredTests.add(GridCacheAtomicPartitionedOnlyBinaryMultiNodeSelfTest.class); + ignoredTests.add(GridCacheAtomicPartitionedOnlyBinaryMultithreadedSelfTest.class); + + // Irrelevant tests. + ignoredTests.add(GridCacheMvccSelfTest.class); // This is about MvccCandidate, but not TxSnapshot. + ignoredTests.add(GridCacheMvccPartitionedSelfTest.class); // This is about MvccCandidate, but not TxSnapshot. + ignoredTests.add(GridCacheMvccManagerSelfTest.class); // This is about MvccCandidate, but not TxSnapshot. + ignoredTests.add(GridCacheMissingCommitVersionSelfTest.class); // Mvcc tx states reside in TxLog. + + // Other non-Tx tests.
+ ignoredTests.add(GridCacheAffinityRoutingSelfTest.class); + ignoredTests.add(GridCacheAffinityRoutingBinarySelfTest.class); + ignoredTests.add(IgniteClientAffinityAssignmentSelfTest.class); + ignoredTests.add(GridCacheConcurrentMapSelfTest.class); + ignoredTests.add(CacheAffinityCallSelfTest.class); + ignoredTests.add(GridCacheAffinityMapperSelfTest.class); + ignoredTests.add(GridCacheAffinityApiSelfTest.class); + + ignoredTests.add(CacheNamesSelfTest.class); + ignoredTests.add(CacheNamesWithSpecialCharactersTest.class); + ignoredTests.add(GridCacheConfigurationValidationSelfTest.class); + + ignoredTests.add(GridDataStorageConfigurationConsistencySelfTest.class); + ignoredTests.add(DataStorageConfigurationValidationTest.class); + ignoredTests.add(JdbcTypesDefaultTransformerTest.class); + ignoredTests.add(GridCacheJdbcBlobStoreSelfTest.class); + ignoredTests.add(CacheJdbcPojoStoreTest.class); + ignoredTests.add(GridCacheBalancingStoreSelfTest.class); + ignoredTests.add(GridStoreLoadCacheTest.class); + + ignoredTests.add(IgniteWarmupClosureSelfTest.class); + ignoredTests.add(StoreResourceInjectionSelfTest.class); + ignoredTests.add(GridCacheAsyncOperationsLimitSelfTest.class); + ignoredTests.add(IgniteCacheManyAsyncOperationsTest.class); + ignoredTests.add(GridCacheLifecycleAwareSelfTest.class); + ignoredTests.add(IgniteCacheMessageWriteTimeoutTest.class); + ignoredTests.add(GridCachePartitionsStateValidatorSelfTest.class); + ignoredTests.add(IgniteVariousConnectionNumberTest.class); + ignoredTests.add(IgniteIncompleteCacheObjectSelfTest.class); + + ignoredTests.add(IgniteCommunicationBalanceTest.class); + ignoredTests.add(IgniteCommunicationBalancePairedConnectionsTest.class); + ignoredTests.add(IgniteCommunicationBalanceMultipleConnectionsTest.class); + ignoredTests.add(IgniteCommunicationSslBalanceTest.class); + ignoredTests.add(IgniteIoTestMessagesTest.class); + + ignoredTests.add(GridCacheTcpClientDiscoveryMultiThreadedTest.class); + + // Skip classes whose
Mvcc implementations are added in this method below. + ignoredTests.add(GridCacheOffHeapMultiThreadedUpdateSelfTest.class); // See GridCacheMvccMultiThreadedUpdateSelfTest. + ignoredTests.add(CacheTxFastFinishTest.class); // See CacheMvccTxFastFinishTest. + ignoredTests.add(IgniteCacheTxInvokeTest.class); // See IgniteCacheMvccTxInvokeTest. + ignoredTests.add(IgniteCacheTxNearEnabledInvokeTest.class); // See IgniteCacheMvccTxNearEnabledInvokeTest. + + suite.addTest(IgniteBinaryCacheTestSuite.suite(ignoredTests)); + + // Add Mvcc clones. + suite.addTest(new JUnit4TestAdapter(GridCacheMvccMultiThreadedUpdateSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheMvccTxFastFinishTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheMvccTxInvokeTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheMvccTxNearEnabledInvokeTest.class)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite2.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite2.java new file mode 100644 index 0000000000000..67d717dcfbc5b --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite2.java @@ -0,0 +1,201 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilterSelfTest; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionBackupFilterSelfTest; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionExcludeNeighborsSelfTest; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionFastPowerOfTwoHashSelfTest; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionSelfTest; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionStandardHashSelfTest; +import org.apache.ignite.internal.IgniteReflectionFactorySelfTest; +import org.apache.ignite.internal.processors.cache.CacheComparatorTest; +import org.apache.ignite.internal.processors.cache.CacheConfigurationLeakTest; +import org.apache.ignite.internal.processors.cache.CacheEnumOperationsSingleNodeTest; +import org.apache.ignite.internal.processors.cache.CacheEnumOperationsTest; +import org.apache.ignite.internal.processors.cache.CacheExchangeMessageDuplicatedStateTest; +import org.apache.ignite.internal.processors.cache.CacheGroupLocalConfigurationSelfTest; +import org.apache.ignite.internal.processors.cache.CacheOptimisticTransactionsWithFilterSingleServerTest; +import org.apache.ignite.internal.processors.cache.CacheOptimisticTransactionsWithFilterTest; +import org.apache.ignite.internal.processors.cache.GridCacheAtomicMessageCountSelfTest; +import org.apache.ignite.internal.processors.cache.GridCachePartitionedProjectionAffinitySelfTest; +import org.apache.ignite.internal.processors.cache.IgniteAtomicCacheEntryProcessorNodeJoinTest; +import 
org.apache.ignite.internal.processors.cache.IgniteCacheNoSyncForGetTest; +import org.apache.ignite.internal.processors.cache.IgniteCachePartitionMapUpdateTest; +import org.apache.ignite.internal.processors.cache.IgniteClientCacheStartFailoverTest; +import org.apache.ignite.internal.processors.cache.IgniteDynamicCacheAndNodeStop; +import org.apache.ignite.internal.processors.cache.IgniteNearClientCacheCloseTest; +import org.apache.ignite.internal.processors.cache.IgniteOnePhaseCommitInvokeTest; +import org.apache.ignite.internal.processors.cache.IgniteOnePhaseCommitNearReadersTest; +import org.apache.ignite.internal.processors.cache.MemoryPolicyConfigValidationTest; +import org.apache.ignite.internal.processors.cache.NonAffinityCoordinatorDynamicStartStopTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTestAllowOverwrite; +import org.apache.ignite.internal.processors.cache.distributed.CachePartitionStateTest; +import org.apache.ignite.internal.processors.cache.distributed.GridCachePartitionedNearDisabledMvccTxMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.GridCachePartitionedNearDisabledTxMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.GridCacheTransformEventSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheClientNodePartitionsExchangeTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheServerNodeConcurrentStart; +import org.apache.ignite.internal.processors.cache.distributed.dht.CachePartitionPartialCountersMapSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedMvccTxSingleThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedOptimisticTransactionSelfTest; +import 
org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedTxSingleThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtAtomicEvictionNearReadersSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadOnheapSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedUnloadEventsSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCachePartitionedBackupNodeFailureRecoveryTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearEvictionEventSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearMultiNodeSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearReadersSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearClientHitTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearJobExecutionSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearMultiGetSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearTxForceKeyTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedAffinitySelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedAtomicGetAndTransformStoreSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedMultiThreadedPutGetSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedMvccTxMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedMvccTxSingleThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedMvccTxTimeoutSelfTest; +import 
org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxSingleThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxTimeoutSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheRendezvousAffinityClientSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.NearCacheSyncUpdateTest; +import org.apache.ignite.internal.processors.cache.distributed.near.NoneRebalanceModeSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedJobExecutionTest; +import org.apache.ignite.internal.processors.cache.local.GridCacheLocalAtomicBasicStoreSelfTest; +import org.apache.ignite.internal.processors.cache.local.GridCacheLocalAtomicGetAndTransformStoreSelfTest; +import org.apache.ignite.internal.processors.cache.persistence.MemoryPolicyInitializationTest; +import org.apache.ignite.internal.processors.continuous.IgniteNoCustomEventsOnNodeStart; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * Test suite. + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite2 { + /** + * @return IgniteCache test suite. 
+ */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + HashSet<Class> ignoredTests = new HashSet<>(128); + + // Skip classes that already contain Mvcc tests + ignoredTests.add(GridCacheTransformEventSelfTest.class); + ignoredTests.add(IgniteClientCacheStartFailoverTest.class); + ignoredTests.add(IgniteNearClientCacheCloseTest.class); + ignoredTests.add(IgniteCacheNoSyncForGetTest.class); + ignoredTests.add(CacheEnumOperationsSingleNodeTest.class); + ignoredTests.add(CacheEnumOperationsTest.class); + ignoredTests.add(NearCacheSyncUpdateTest.class); + ignoredTests.add(GridCacheNearMultiGetSelfTest.class); + + // Irrelevant Tx tests. + ignoredTests.add(GridCacheColocatedOptimisticTransactionSelfTest.class); + ignoredTests.add(CacheOptimisticTransactionsWithFilterSingleServerTest.class); + ignoredTests.add(CacheOptimisticTransactionsWithFilterTest.class); + + // Irrelevant Tx tests. + ignoredTests.add(IgniteOnePhaseCommitInvokeTest.class); + ignoredTests.add(IgniteOnePhaseCommitNearReadersTest.class); + ignoredTests.add(GridCacheDhtPreloadOnheapSelfTest.class); + ignoredTests.add(GridCachePartitionedMultiThreadedPutGetSelfTest.class); // On-heap test. + + // Atomic cache tests.
+ ignoredTests.add(GridCacheLocalAtomicBasicStoreSelfTest.class); + ignoredTests.add(GridCacheLocalAtomicGetAndTransformStoreSelfTest.class); + ignoredTests.add(GridCacheAtomicNearMultiNodeSelfTest.class); + ignoredTests.add(GridCacheAtomicNearReadersSelfTest.class); + ignoredTests.add(GridCachePartitionedAtomicGetAndTransformStoreSelfTest.class); + ignoredTests.add(GridCacheAtomicNearEvictionEventSelfTest.class); + ignoredTests.add(GridCacheAtomicMessageCountSelfTest.class); + ignoredTests.add(IgniteAtomicCacheEntryProcessorNodeJoinTest.class); + ignoredTests.add(GridCacheDhtAtomicEvictionNearReadersSelfTest.class); + ignoredTests.add(GridCacheNearClientHitTest.class); + ignoredTests.add(GridCacheNearTxForceKeyTest.class); + ignoredTests.add(CacheLoadingConcurrentGridStartSelfTest.class); + ignoredTests.add(CacheLoadingConcurrentGridStartSelfTestAllowOverwrite.class); + ignoredTests.add(IgniteCachePartitionedBackupNodeFailureRecoveryTest.class); + + // Other non-tx tests. + ignoredTests.add(RendezvousAffinityFunctionSelfTest.class); + ignoredTests.add(RendezvousAffinityFunctionExcludeNeighborsSelfTest.class); + ignoredTests.add(RendezvousAffinityFunctionFastPowerOfTwoHashSelfTest.class); + ignoredTests.add(RendezvousAffinityFunctionStandardHashSelfTest.class); + ignoredTests.add(GridCachePartitionedAffinitySelfTest.class); + ignoredTests.add(GridCacheRendezvousAffinityClientSelfTest.class); + ignoredTests.add(GridCachePartitionedProjectionAffinitySelfTest.class); + ignoredTests.add(RendezvousAffinityFunctionBackupFilterSelfTest.class); + ignoredTests.add(ClusterNodeAttributeAffinityBackupFilterSelfTest.class); + ignoredTests.add(NonAffinityCoordinatorDynamicStartStopTest.class); + + ignoredTests.add(NoneRebalanceModeSelfTest.class); + ignoredTests.add(IgniteCachePartitionMapUpdateTest.class); + ignoredTests.add(IgniteCacheClientNodePartitionsExchangeTest.class); + ignoredTests.add(IgniteCacheServerNodeConcurrentStart.class); + + 
ignoredTests.add(GridCachePartitionedUnloadEventsSelfTest.class); + + ignoredTests.add(IgniteNoCustomEventsOnNodeStart.class); + ignoredTests.add(CacheExchangeMessageDuplicatedStateTest.class); + ignoredTests.add(IgniteDynamicCacheAndNodeStop.class); + + ignoredTests.add(GridCacheReplicatedJobExecutionTest.class); + ignoredTests.add(GridCacheNearJobExecutionSelfTest.class); + + ignoredTests.add(CacheConfigurationLeakTest.class); + ignoredTests.add(MemoryPolicyConfigValidationTest.class); + ignoredTests.add(MemoryPolicyInitializationTest.class); + ignoredTests.add(CacheGroupLocalConfigurationSelfTest.class); + + ignoredTests.add(CachePartitionStateTest.class); + ignoredTests.add(CacheComparatorTest.class); + ignoredTests.add(CachePartitionPartialCountersMapSelfTest.class); + ignoredTests.add(IgniteReflectionFactorySelfTest.class); + + // Skip classes whose Mvcc implementations are added in this method below. + // TODO IGNITE-10175: refactor these tests (use assume) to support both mvcc and non-mvcc modes after moving to JUnit4/5. + ignoredTests.add(GridCachePartitionedTxSingleThreadedSelfTest.class); // See GridCachePartitionedMvccTxSingleThreadedSelfTest + ignoredTests.add(GridCacheColocatedTxSingleThreadedSelfTest.class); // See GridCacheColocatedMvccTxSingleThreadedSelfTest + ignoredTests.add(GridCachePartitionedTxMultiThreadedSelfTest.class); // See GridCachePartitionedMvccTxMultiThreadedSelfTest + ignoredTests.add(GridCachePartitionedNearDisabledTxMultiThreadedSelfTest.class); // See GridCachePartitionedNearDisabledMvccTxMultiThreadedSelfTest + ignoredTests.add(GridCachePartitionedTxTimeoutSelfTest.class); // See GridCachePartitionedMvccTxTimeoutSelfTest + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 2"); + + suite.addTest(IgniteCacheTestSuite2.suite(ignoredTests)); + + // Add Mvcc clones.
+ suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMvccTxSingleThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheColocatedMvccTxSingleThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMvccTxMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledMvccTxMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedMvccTxTimeoutSelfTest.class)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite3.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite3.java new file mode 100644 index 0000000000000..da9db3c384b39 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite3.java @@ -0,0 +1,137 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.cache.CacheInterceptorPartitionCounterLocalSanityTest; +import org.apache.ignite.internal.processors.cache.CacheInterceptorPartitionCounterRandomOperationsTest; +import org.apache.ignite.internal.processors.cache.GridCacheAtomicEntryProcessorDeploymentSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheEntryVersionSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorAtomicNearEnabledSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorAtomicRebalanceTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorAtomicReplicatedSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorAtomicSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorAtomicWithStoreReplicatedSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorAtomicWithStoreSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorLocalAtomicSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheInterceptorLocalAtomicWithStoreSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheValueBytesPreloadingSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheVersionSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheVersionTopologyChangeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheGroupsTest; +import org.apache.ignite.internal.processors.cache.binary.GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheAsyncOperationsTest; +import org.apache.ignite.internal.processors.cache.distributed.GridCacheMixedModeSelfTest; 
+import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheClientOnlySelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteTxReentryColocatedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridCacheValueConsistencyAtomicNearEnabledSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridCacheValueConsistencyAtomicSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearOnlySelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteTxReentryNearSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedAtomicGetAndTransformStoreSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedMvccTxMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedMvccTxSingleThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedMvccTxTimeoutSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxSingleThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxTimeoutSelfTest; +import org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStoreMultithreadedSelfTest; +import org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStoreSelfTest; +import org.apache.ignite.internal.processors.cache.store.IgnteCacheClientWriteBehindStoreAtomicTest; +import org.apache.ignite.internal.processors.cache.store.IgnteCacheClientWriteBehindStoreNonCoalescingTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * Test suite. 
+ */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite3 { + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + HashSet<Class> ignoredTests = new HashSet<>(); + + // Skip classes that already contain Mvcc tests + ignoredTests.add(GridCacheEntryVersionSelfTest.class); + ignoredTests.add(GridCacheVersionTopologyChangeTest.class); + ignoredTests.add(CacheAsyncOperationsTest.class); + ignoredTests.add(CacheInterceptorPartitionCounterLocalSanityTest.class); + ignoredTests.add(CacheInterceptorPartitionCounterRandomOperationsTest.class); + ignoredTests.add(IgniteCacheGroupsTest.class); + + // Atomic caches + ignoredTests.add(GridCacheValueConsistencyAtomicSelfTest.class); + ignoredTests.add(GridCacheValueConsistencyAtomicNearEnabledSelfTest.class); + ignoredTests.add(GridCacheReplicatedAtomicGetAndTransformStoreSelfTest.class); + ignoredTests.add(GridCacheAtomicEntryProcessorDeploymentSelfTest.class); + ignoredTests.add(GridCacheValueBytesPreloadingSelfTest.class); + ignoredTests.add(GridCacheBinaryAtomicEntryProcessorDeploymentSelfTest.class); + + ignoredTests.add(GridCacheClientOnlySelfTest.CasePartitionedAtomic.class); + ignoredTests.add(GridCacheClientOnlySelfTest.CaseReplicatedAtomic.class); + ignoredTests.add(GridCacheNearOnlySelfTest.CasePartitionedAtomic.class); + ignoredTests.add(GridCacheNearOnlySelfTest.CaseReplicatedAtomic.class); + + ignoredTests.add(IgnteCacheClientWriteBehindStoreAtomicTest.class); + ignoredTests.add(IgnteCacheClientWriteBehindStoreNonCoalescingTest.class); + + ignoredTests.add(GridCacheInterceptorLocalAtomicSelfTest.class); + ignoredTests.add(GridCacheInterceptorLocalAtomicWithStoreSelfTest.class); + ignoredTests.add(GridCacheInterceptorAtomicSelfTest.class); + ignoredTests.add(GridCacheInterceptorAtomicNearEnabledSelfTest.class); + ignoredTests.add(GridCacheInterceptorAtomicWithStoreSelfTest.class); +
ignoredTests.add(GridCacheInterceptorAtomicReplicatedSelfTest.class); + ignoredTests.add(GridCacheInterceptorAtomicWithStoreReplicatedSelfTest.class); + ignoredTests.add(GridCacheInterceptorAtomicRebalanceTest.class); + + // Irrelevant tx tests + ignoredTests.add(IgniteTxReentryNearSelfTest.class); + ignoredTests.add(IgniteTxReentryColocatedSelfTest.class); + + // Other non-tx tests + ignoredTests.add(GridCacheWriteBehindStoreSelfTest.class); + ignoredTests.add(GridCacheWriteBehindStoreMultithreadedSelfTest.class); + + ignoredTests.add(GridCacheVersionSelfTest.class); + ignoredTests.add(GridCacheMixedModeSelfTest.class); + + // Skip classes whose Mvcc implementations are added in this method below. + // TODO IGNITE-10175: refactor these tests (use assume) to support both mvcc and non-mvcc modes after moving to JUnit4/5. + ignoredTests.add(GridCacheReplicatedTxSingleThreadedSelfTest.class); // See GridCacheReplicatedMvccTxSingleThreadedSelfTest + ignoredTests.add(GridCacheReplicatedTxMultiThreadedSelfTest.class); // See GridCacheReplicatedMvccTxMultiThreadedSelfTest + ignoredTests.add(GridCacheReplicatedTxTimeoutSelfTest.class); // See GridCacheReplicatedMvccTxTimeoutSelfTest + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 3"); + + suite.addTest(IgniteBinaryObjectsCacheTestSuite3.suite(ignoredTests)); + + // Add Mvcc clones.
+ suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedMvccTxSingleThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedMvccTxMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedMvccTxTimeoutSelfTest.class)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite4.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite4.java new file mode 100644 index 0000000000000..22cacc4cb56a1 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite4.java @@ -0,0 +1,201 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.store.CacheStoreListenerRWThroughDisabledAtomicCacheTest; +import org.apache.ignite.internal.processors.cache.CacheConnectionLeakStoreTxTest; +import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticReadCommittedSelfTest; +import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticRepeatableReadSelfTest; +import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticSerializableSelfTest; +import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticReadCommittedSelfTest; +import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticRepeatableReadSelfTest; +import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticSerializableSelfTest; +import org.apache.ignite.internal.processors.cache.CacheOffheapMapEntrySelfTest; +import org.apache.ignite.internal.processors.cache.CacheReadThroughAtomicRestartSelfTest; +import org.apache.ignite.internal.processors.cache.CacheReadThroughLocalAtomicRestartSelfTest; +import org.apache.ignite.internal.processors.cache.CacheReadThroughReplicatedAtomicRestartSelfTest; +import org.apache.ignite.internal.processors.cache.CacheStoreUsageMultinodeDynamicStartAtomicTest; +import org.apache.ignite.internal.processors.cache.CacheStoreUsageMultinodeStaticStartAtomicTest; +import org.apache.ignite.internal.processors.cache.CacheTxNotAllowReadFromBackupTest; +import org.apache.ignite.internal.processors.cache.GridCacheMultinodeUpdateAtomicNearEnabledSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheMultinodeUpdateAtomicSelfTest; +import org.apache.ignite.internal.processors.cache.GridCacheVersionMultinodeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicCopyOnReadDisabledTest; 
+import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicLocalPeekModesTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicLocalStoreValueTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicNearEnabledStoreValueTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicNearPeekModesTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicPeekModesTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicReplicatedPeekModesTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicStoreValueTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheConfigurationDefaultTemplateTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheContainsKeyAtomicTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheInvokeReadThroughSingleNodeTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheInvokeReadThroughTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheStartTest; +import org.apache.ignite.internal.processors.cache.IgniteClientCacheInitializationFailTest; +import org.apache.ignite.internal.processors.cache.IgniteDynamicCacheStartNoExchangeTimeoutTest; +import org.apache.ignite.internal.processors.cache.IgniteDynamicCacheStartSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteDynamicCacheStartStopConcurrentTest; +import org.apache.ignite.internal.processors.cache.IgniteDynamicClientCacheStartSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteExchangeFutureHistoryTest; +import org.apache.ignite.internal.processors.cache.IgniteInternalCacheTypesTest; +import org.apache.ignite.internal.processors.cache.IgniteStartCacheInTransactionAtomicSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteSystemCacheOnClientTest; +import org.apache.ignite.internal.processors.cache.MarshallerCacheJobRunNodeRestartTest; +import 
org.apache.ignite.internal.processors.cache.distributed.CacheDiscoveryDataConcurrentJoinTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheGetFutureHangsSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheGroupsPreloadTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheNoValueClassOnServerNodeTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheResultIsNotNullOnPartitionLossTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheCreatePutTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheFailedUpdateResponseTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheReadFromBackupTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheSingleGetMessageTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCrossCacheMvccTxSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCrossCacheTxSelfTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicLoadAllTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicLoaderWriterTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicLocalLoadAllTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicLocalNoLoadPreviousValueTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicLocalNoReadThroughTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicLocalNoWriteThroughTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicNearEnabledNoLoadPreviousValueTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicNearEnabledNoReadThroughTest; +import 
org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicNearEnabledNoWriteThroughTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicNoLoadPreviousValueTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicNoReadThroughTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicNoWriteThroughTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicStoreSessionTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheAtomicStoreSessionWriteBehindTest; +import org.apache.ignite.internal.processors.cache.integration.IgniteCacheJdbcBlobStoreNodeRestartTest; +import org.apache.ignite.internal.processors.cache.version.CacheVersionedEntryLocalAtomicSwapDisabledSelfTest; +import org.apache.ignite.internal.processors.cache.version.CacheVersionedEntryPartitionedAtomicSelfTest; +import org.apache.ignite.internal.processors.cache.version.CacheVersionedEntryReplicatedAtomicSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite4 { + /** + * @return IgniteCache test suite. 
+ */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + HashSet<Class> ignoredTests = new HashSet<>(128); + + // Skip classes that already contain Mvcc tests + ignoredTests.add(GridCacheVersionMultinodeTest.class); + ignoredTests.add(IgniteCacheCreatePutTest.class); + ignoredTests.add(IgniteClientCacheInitializationFailTest.class); + ignoredTests.add(IgniteCacheFailedUpdateResponseTest.class); + ignoredTests.add(CacheGetEntryPessimisticRepeatableReadSelfTest.class); + ignoredTests.add(CacheTxNotAllowReadFromBackupTest.class); + ignoredTests.add(CacheOffheapMapEntrySelfTest.class); + ignoredTests.add(CacheGroupsPreloadTest.class); + ignoredTests.add(CacheConnectionLeakStoreTxTest.class); + ignoredTests.add(IgniteCacheInvokeReadThroughTest.class); + ignoredTests.add(IgniteCacheInvokeReadThroughSingleNodeTest.class); + ignoredTests.add(IgniteDynamicCacheStartSelfTest.class); + ignoredTests.add(IgniteDynamicClientCacheStartSelfTest.class); + ignoredTests.add(IgniteDynamicCacheStartNoExchangeTimeoutTest.class); + ignoredTests.add(IgniteCacheSingleGetMessageTest.class); + ignoredTests.add(IgniteCacheReadFromBackupTest.class); + + // Optimistic tx tests. + ignoredTests.add(CacheGetEntryOptimisticReadCommittedSelfTest.class); + ignoredTests.add(CacheGetEntryOptimisticRepeatableReadSelfTest.class); + ignoredTests.add(CacheGetEntryOptimisticSerializableSelfTest.class); + + // Irrelevant Tx tests. + ignoredTests.add(CacheGetEntryPessimisticReadCommittedSelfTest.class); + ignoredTests.add(CacheGetEntryPessimisticSerializableSelfTest.class); + + // Atomic cache tests.
+ ignoredTests.add(GridCacheMultinodeUpdateAtomicSelfTest.class); + ignoredTests.add(GridCacheMultinodeUpdateAtomicNearEnabledSelfTest.class); + ignoredTests.add(IgniteCacheAtomicLoadAllTest.class); + ignoredTests.add(IgniteCacheAtomicLocalLoadAllTest.class); + ignoredTests.add(IgniteCacheAtomicLoaderWriterTest.class); + ignoredTests.add(IgniteCacheAtomicStoreSessionTest.class); + ignoredTests.add(IgniteCacheAtomicStoreSessionWriteBehindTest.class); + ignoredTests.add(IgniteCacheAtomicNoReadThroughTest.class); + ignoredTests.add(IgniteCacheAtomicNearEnabledNoReadThroughTest.class); + ignoredTests.add(IgniteCacheAtomicLocalNoReadThroughTest.class); + ignoredTests.add(IgniteCacheAtomicNoLoadPreviousValueTest.class); + ignoredTests.add(IgniteCacheAtomicNearEnabledNoLoadPreviousValueTest.class); + ignoredTests.add(IgniteCacheAtomicLocalNoLoadPreviousValueTest.class); + ignoredTests.add(IgniteCacheAtomicNoWriteThroughTest.class); + ignoredTests.add(IgniteCacheAtomicNearEnabledNoWriteThroughTest.class); + ignoredTests.add(IgniteCacheAtomicLocalNoWriteThroughTest.class); + ignoredTests.add(IgniteCacheAtomicPeekModesTest.class); + ignoredTests.add(IgniteCacheAtomicNearPeekModesTest.class); + ignoredTests.add(IgniteCacheAtomicReplicatedPeekModesTest.class); + ignoredTests.add(IgniteCacheAtomicLocalPeekModesTest.class); + ignoredTests.add(IgniteCacheAtomicCopyOnReadDisabledTest.class); + ignoredTests.add(IgniteCacheAtomicLocalStoreValueTest.class); + ignoredTests.add(IgniteCacheAtomicStoreValueTest.class); + ignoredTests.add(IgniteCacheAtomicNearEnabledStoreValueTest.class); + ignoredTests.add(CacheStoreListenerRWThroughDisabledAtomicCacheTest.class); + ignoredTests.add(CacheStoreUsageMultinodeStaticStartAtomicTest.class); + ignoredTests.add(CacheStoreUsageMultinodeDynamicStartAtomicTest.class); + ignoredTests.add(IgniteStartCacheInTransactionAtomicSelfTest.class); + ignoredTests.add(CacheReadThroughReplicatedAtomicRestartSelfTest.class); + 
ignoredTests.add(CacheReadThroughLocalAtomicRestartSelfTest.class); + ignoredTests.add(CacheReadThroughAtomicRestartSelfTest.class); + ignoredTests.add(CacheVersionedEntryLocalAtomicSwapDisabledSelfTest.class); + ignoredTests.add(CacheVersionedEntryPartitionedAtomicSelfTest.class); + ignoredTests.add(CacheGetFutureHangsSelfTest.class); + ignoredTests.add(IgniteCacheContainsKeyAtomicTest.class); + ignoredTests.add(CacheVersionedEntryReplicatedAtomicSelfTest.class); + ignoredTests.add(CacheResultIsNotNullOnPartitionLossTest.class); + + // Other non-tx tests. + ignoredTests.add(IgniteDynamicCacheStartStopConcurrentTest.class); + ignoredTests.add(IgniteCacheConfigurationDefaultTemplateTest.class); + ignoredTests.add(IgniteCacheStartTest.class); + ignoredTests.add(CacheDiscoveryDataConcurrentJoinTest.class); + ignoredTests.add(IgniteCacheJdbcBlobStoreNodeRestartTest.class); + ignoredTests.add(IgniteInternalCacheTypesTest.class); + ignoredTests.add(IgniteExchangeFutureHistoryTest.class); + ignoredTests.add(CacheNoValueClassOnServerNodeTest.class); + ignoredTests.add(IgniteSystemCacheOnClientTest.class); + ignoredTests.add(MarshallerCacheJobRunNodeRestartTest.class); + + // Skip classes whose Mvcc implementations are added in this method below. + // TODO IGNITE-10175: refactor these tests (use assume) to support both mvcc and non-mvcc modes after moving to JUnit4/5. + ignoredTests.add(IgniteCrossCacheTxSelfTest.class); + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 4"); + + suite.addTest(IgniteCacheTestSuite4.suite(ignoredTests)); + + // Add Mvcc clones.
+ suite.addTest(new JUnit4TestAdapter(IgniteCrossCacheMvccTxSelfTest.class)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite5.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite5.java new file mode 100644 index 0000000000000..91f0b71fdeda0 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite5.java @@ -0,0 +1,97 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import junit.framework.TestSuite; +import org.apache.ignite.GridCacheAffinityBackupsSelfTest; +import org.apache.ignite.IgniteCacheAffinitySelfTest; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.affinity.AffinityClientNodeSelfTest; +import org.apache.ignite.cache.affinity.AffinityDistributionLoggingTest; +import org.apache.ignite.cache.affinity.AffinityHistoryCleanupTest; +import org.apache.ignite.cache.affinity.local.LocalAffinityFunctionTest; +import org.apache.ignite.internal.GridCachePartitionExchangeManagerHistSizeTest; +import org.apache.ignite.internal.processors.cache.CacheSerializableTransactionsTest; +import org.apache.ignite.internal.processors.cache.ClusterReadOnlyModeTest; +import org.apache.ignite.internal.processors.cache.ClusterStatePartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.ClusterStateReplicatedSelfTest; +import org.apache.ignite.internal.processors.cache.ConcurrentCacheStartTest; +import org.apache.ignite.internal.processors.cache.EntryVersionConsistencyReadThroughTest; +import org.apache.ignite.internal.processors.cache.IgniteCachePutStackOverflowSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheStoreCollectionTest; +import org.apache.ignite.internal.processors.cache.PartitionsExchangeOnDiscoveryHistoryOverflowTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheLateAffinityAssignmentNodeJoinValidationTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheTxIteratorSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.NotMappedPartitionInTxTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest; +import org.apache.ignite.internal.processors.cache.distributed.rebalancing.CacheManualRebalancingTest; +import 
org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheSyncRebalanceModeSelfTest; +import org.apache.ignite.internal.processors.cache.store.IgniteCacheWriteBehindNoUpdateSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * Test suite. + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite5 { + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + HashSet<Class> ignoredTests = new HashSet<>(128); + + // Skip classes that already contain Mvcc tests. + ignoredTests.add(IgniteCacheStoreCollectionTest.class); + ignoredTests.add(EntryVersionConsistencyReadThroughTest.class); + ignoredTests.add(ClusterReadOnlyModeTest.class); + ignoredTests.add(NotMappedPartitionInTxTest.class); + ignoredTests.add(IgniteCacheTxIteratorSelfTest.class); + + // Irrelevant Tx tests. + ignoredTests.add(CacheSerializableTransactionsTest.class); + ignoredTests.add(IgniteCachePutStackOverflowSelfTest.class); + ignoredTests.add(IgniteCacheAtomicProtocolTest.class); + + // Other non-tx tests. 
+ ignoredTests.add(CacheLateAffinityAssignmentNodeJoinValidationTest.class); + ignoredTests.add(IgniteCacheWriteBehindNoUpdateSelfTest.class); + ignoredTests.add(IgniteCacheSyncRebalanceModeSelfTest.class); + ignoredTests.add(ClusterStatePartitionedSelfTest.class); + ignoredTests.add(ClusterStateReplicatedSelfTest.class); + ignoredTests.add(CacheManualRebalancingTest.class); + ignoredTests.add(GridCacheAffinityBackupsSelfTest.class); + ignoredTests.add(IgniteCacheAffinitySelfTest.class); + ignoredTests.add(AffinityClientNodeSelfTest.class); + ignoredTests.add(LocalAffinityFunctionTest.class); + ignoredTests.add(AffinityHistoryCleanupTest.class); + ignoredTests.add(AffinityDistributionLoggingTest.class); + ignoredTests.add(PartitionsExchangeOnDiscoveryHistoryOverflowTest.class); + ignoredTests.add(GridCachePartitionExchangeManagerHistSizeTest.class); + ignoredTests.add(ConcurrentCacheStartTest.class); + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 5"); + + suite.addTest(IgniteCacheTestSuite5.suite(ignoredTests)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite6.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite6.java new file mode 100644 index 0000000000000..f5bf0e3dc6cea --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite6.java @@ -0,0 +1,96 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import java.util.Set; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.cache.PartitionedAtomicCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.PartitionedMvccTxPessimisticCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.PartitionedTransactionalOptimisticCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.PartitionedTransactionalPessimisticCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.PartitionsExchangeCoordinatorFailoverTest; +import org.apache.ignite.internal.processors.cache.ReplicatedAtomicCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.ReplicatedMvccTxPessimisticCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.ReplicatedTransactionalOptimisticCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.ReplicatedTransactionalPessimisticCacheGetsDistributionTest; +import org.apache.ignite.internal.processors.cache.datastructures.IgniteExchangeLatchManagerCoordinatorFailTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheExchangeMergeTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheParallelStartTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteCache150ClientsTest; +import 
org.apache.ignite.internal.processors.cache.distributed.IgniteOptimisticTxSuspendResumeTest; +import org.apache.ignite.internal.processors.cache.transactions.TxOptimisticOnPartitionExchangeTest; +import org.apache.ignite.internal.processors.cache.transactions.TxOptimisticPrepareOnUnstableTopologyTest; +import org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnTimeoutOnePhaseCommitTest; +import org.apache.ignite.internal.processors.cache.transactions.TxStateChangeEventTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * Test suite. + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite6 { + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + Set<Class> ignoredTests = new HashSet<>(); + + // Skip classes that already contain Mvcc tests. + ignoredTests.add(TxStateChangeEventTest.class); + + // Atomic cache tests. + ignoredTests.add(ReplicatedAtomicCacheGetsDistributionTest.class); + ignoredTests.add(PartitionedAtomicCacheGetsDistributionTest.class); + + // Irrelevant Tx tests. + ignoredTests.add(IgniteOptimisticTxSuspendResumeTest.class); + ignoredTests.add(TxOptimisticPrepareOnUnstableTopologyTest.class); + ignoredTests.add(ReplicatedTransactionalOptimisticCacheGetsDistributionTest.class); + ignoredTests.add(PartitionedTransactionalOptimisticCacheGetsDistributionTest.class); + ignoredTests.add(TxOptimisticOnPartitionExchangeTest.class); + + ignoredTests.add(TxRollbackOnTimeoutOnePhaseCommitTest.class); + + // Other non-tx tests. + ignoredTests.add(CacheExchangeMergeTest.class); + ignoredTests.add(IgniteExchangeLatchManagerCoordinatorFailTest.class); + ignoredTests.add(PartitionsExchangeCoordinatorFailoverTest.class); + ignoredTests.add(CacheParallelStartTest.class); + ignoredTests.add(IgniteCache150ClientsTest.class); + + // Skip tests that have Mvcc clones. 
+ ignoredTests.add(PartitionedTransactionalPessimisticCacheGetsDistributionTest.class); // See PartitionedMvccTxPessimisticCacheGetsDistributionTest. + ignoredTests.add(ReplicatedTransactionalPessimisticCacheGetsDistributionTest.class); // See ReplicatedMvccTxPessimisticCacheGetsDistributionTest. + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 6"); + + suite.addTest(IgniteCacheTestSuite6.suite(ignoredTests)); + + // Add mvcc versions for skipped tests. + suite.addTest(new JUnit4TestAdapter(PartitionedMvccTxPessimisticCacheGetsDistributionTest.class)); + suite.addTest(new JUnit4TestAdapter(ReplicatedMvccTxPessimisticCacheGetsDistributionTest.class)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite7.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite7.java new file mode 100644 index 0000000000000..83d5cd32a089a --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite7.java @@ -0,0 +1,86 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.authentication.Authentication1kUsersNodeRestartTest; +import org.apache.ignite.internal.processors.authentication.AuthenticationConfigurationClusterTest; +import org.apache.ignite.internal.processors.authentication.AuthenticationOnNotActiveClusterTest; +import org.apache.ignite.internal.processors.authentication.AuthenticationProcessorNPEOnStartTest; +import org.apache.ignite.internal.processors.authentication.AuthenticationProcessorNodeRestartTest; +import org.apache.ignite.internal.processors.authentication.AuthenticationProcessorSelfTest; +import org.apache.ignite.internal.processors.cache.CacheDataRegionConfigurationTest; +import org.apache.ignite.internal.processors.cache.CacheGroupMetricsMBeanTest; +import org.apache.ignite.internal.processors.cache.MvccCacheGroupMetricsMBeanTest; +import org.apache.ignite.internal.processors.cache.distributed.Cache64kPartitionsTest; +import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingPartitionCountersMvccTest; +import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingPartitionCountersTest; +import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingWithAsyncClearingMvccTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.PageEvictionMultinodeMixedRegionsTest; +import org.apache.ignite.internal.processors.cache.persistence.db.CheckpointBufferDeadlockTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite7 { + /** + * @return IgniteCache test suite. 
+ */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + HashSet<Class> ignoredTests = new HashSet<>(128); + + // Skip classes that already contain Mvcc tests. + ignoredTests.add(PageEvictionMultinodeMixedRegionsTest.class); + + // Other non-tx tests. + ignoredTests.add(CheckpointBufferDeadlockTest.class); + ignoredTests.add(AuthenticationConfigurationClusterTest.class); + ignoredTests.add(AuthenticationProcessorSelfTest.class); + ignoredTests.add(AuthenticationOnNotActiveClusterTest.class); + ignoredTests.add(AuthenticationProcessorNodeRestartTest.class); + ignoredTests.add(AuthenticationProcessorNPEOnStartTest.class); + ignoredTests.add(Authentication1kUsersNodeRestartTest.class); + ignoredTests.add(CacheDataRegionConfigurationTest.class); + ignoredTests.add(Cache64kPartitionsTest.class); + + // Skip classes whose Mvcc implementations are added in this method below. + // TODO IGNITE-10175: refactor these tests (use assume) to support both mvcc and non-mvcc modes after moving to JUnit4/5. + ignoredTests.add(CacheGroupMetricsMBeanTest.class); // See MvccCacheGroupMetricsMBeanTest. + ignoredTests.add(GridCacheRebalancingPartitionCountersTest.class); // See GridCacheRebalancingPartitionCountersMvccTest. + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 7"); + + suite.addTest(IgniteCacheTestSuite7.suite(ignoredTests)); + + // Add Mvcc clones. 
+ suite.addTest(new JUnit4TestAdapter(MvccCacheGroupMetricsMBeanTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheRebalancingPartitionCountersMvccTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheRebalancingWithAsyncClearingMvccTest.class)); + + + return suite; + } + +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite8.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite8.java new file mode 100644 index 0000000000000..28732af2e640a --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite8.java @@ -0,0 +1,135 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorNearPartitionedAtomicCacheGroupsTest; +import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorNearPartitionedAtomicCacheTest; +import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorPartitionedAtomicCacheGroupsTest; +import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorPartitionedAtomicCacheTest; +import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorReplicatedAtomicCacheGroupsTest; +import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorReplicatedAtomicCacheTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearEvictionSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicPartitionedMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicPartitionedTckMetricsSelfTestImpl; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheMvccNearEvictionSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearAtomicMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearEvictionSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRabalancingDelayedPartitionMapExchangeSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheAtomicReplicatedMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.eviction.GridCacheEvictableEntryEqualsSelfTest; +import org.apache.ignite.internal.processors.cache.eviction.fifo.FifoEvictionPolicyFactorySelfTest; +import 
org.apache.ignite.internal.processors.cache.eviction.fifo.FifoEvictionPolicySelfTest; +import org.apache.ignite.internal.processors.cache.eviction.lru.LruEvictionPolicyFactorySelfTest; +import org.apache.ignite.internal.processors.cache.eviction.lru.LruEvictionPolicySelfTest; +import org.apache.ignite.internal.processors.cache.eviction.lru.LruNearEvictionPolicySelfTest; +import org.apache.ignite.internal.processors.cache.eviction.lru.LruNearOnlyNearEvictionPolicySelfTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.PageEvictionDataStreamerTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.PageEvictionMetricTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.PageEvictionPagesRecyclingAndReusingTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.PageEvictionReadThroughTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.PageEvictionTouchOrderTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.Random2LruNearEnabledPageEvictionMultinodeTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.Random2LruPageEvictionMultinodeTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.Random2LruPageEvictionWithRebalanceTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.RandomLruNearEnabledPageEvictionMultinodeTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.RandomLruPageEvictionMultinodeTest; +import org.apache.ignite.internal.processors.cache.eviction.paged.RandomLruPageEvictionWithRebalanceTest; +import org.apache.ignite.internal.processors.cache.eviction.sorted.SortedEvictionPolicyFactorySelfTest; +import org.apache.ignite.internal.processors.cache.eviction.sorted.SortedEvictionPolicySelfTest; +import org.apache.ignite.internal.processors.cache.local.GridCacheAtomicLocalMetricsNoStoreSelfTest; +import 
org.apache.ignite.internal.processors.cache.local.GridCacheAtomicLocalMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.local.GridCacheAtomicLocalTckMetricsSelfTestImpl; +import org.apache.ignite.internal.processors.cache.local.GridCacheLocalAtomicMetricsNoReadThroughSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite8 { + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + HashSet<Class> ignoredTests = new HashSet<>(128); + + // Skip classes that already contain Mvcc tests. + ignoredTests.add(LruNearEvictionPolicySelfTest.class); + ignoredTests.add(LruNearOnlyNearEvictionPolicySelfTest.class); + ignoredTests.add(RandomLruPageEvictionMultinodeTest.class); + ignoredTests.add(RandomLruNearEnabledPageEvictionMultinodeTest.class); + ignoredTests.add(PageEvictionDataStreamerTest.class); + ignoredTests.add(Random2LruPageEvictionMultinodeTest.class); + ignoredTests.add(Random2LruNearEnabledPageEvictionMultinodeTest.class); + ignoredTests.add(RandomLruPageEvictionWithRebalanceTest.class); + ignoredTests.add(Random2LruPageEvictionWithRebalanceTest.class); + ignoredTests.add(PageEvictionTouchOrderTest.class); + ignoredTests.add(PageEvictionReadThroughTest.class); + ignoredTests.add(PageEvictionMetricTest.class); + ignoredTests.add(PageEvictionPagesRecyclingAndReusingTest.class); + + // Irrelevant Tx tests. + ignoredTests.add(GridCacheEvictableEntryEqualsSelfTest.class); + + // Atomic cache tests. 
+ ignoredTests.add(GridCacheLocalAtomicMetricsNoReadThroughSelfTest.class); + ignoredTests.add(GridCacheNearAtomicMetricsSelfTest.class); + ignoredTests.add(GridCacheAtomicLocalMetricsSelfTest.class); + ignoredTests.add(GridCacheAtomicLocalMetricsNoStoreSelfTest.class); + ignoredTests.add(GridCacheAtomicReplicatedMetricsSelfTest.class); + ignoredTests.add(GridCacheAtomicPartitionedMetricsSelfTest.class); + ignoredTests.add(GridCacheAtomicPartitionedTckMetricsSelfTestImpl.class); + ignoredTests.add(GridCacheAtomicLocalTckMetricsSelfTestImpl.class); + ignoredTests.add(IgniteTopologyValidatorPartitionedAtomicCacheTest.class); + ignoredTests.add(IgniteTopologyValidatorNearPartitionedAtomicCacheTest.class); + ignoredTests.add(IgniteTopologyValidatorReplicatedAtomicCacheTest.class); + ignoredTests.add(IgniteTopologyValidatorNearPartitionedAtomicCacheGroupsTest.class); + ignoredTests.add(IgniteTopologyValidatorPartitionedAtomicCacheGroupsTest.class); + ignoredTests.add(IgniteTopologyValidatorReplicatedAtomicCacheGroupsTest.class); + + // Other non-tx tests. + ignoredTests.add(FifoEvictionPolicySelfTest.class); + ignoredTests.add(SortedEvictionPolicySelfTest.class); + ignoredTests.add(LruEvictionPolicySelfTest.class); + ignoredTests.add(FifoEvictionPolicyFactorySelfTest.class); + ignoredTests.add(SortedEvictionPolicyFactorySelfTest.class); + ignoredTests.add(LruEvictionPolicyFactorySelfTest.class); + ignoredTests.add(GridCacheAtomicNearEvictionSelfTest.class); + ignoredTests.add(GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.class); + + // Skip classes which Mvcc implementations are added in this method below. + // TODO IGNITE-10175: refactor these tests (use assume) to support both mvcc and non-mvcc modes after moving to JUnit4/5. 
+ ignoredTests.add(GridCacheNearEvictionSelfTest.class); // See GridCacheMvccNearEvictionSelfTest + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 8"); + + suite.addTest(IgniteCacheTestSuite8.suite(ignoredTests)); + + // Add Mvcc clones. + suite.addTest(new JUnit4TestAdapter(GridCacheMvccNearEvictionSelfTest.class)); + + return suite; + } + +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite9.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite9.java new file mode 100644 index 0000000000000..ce9533fd90577 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccTestSuite9.java @@ -0,0 +1,60 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.testsuites; + +import java.util.Collection; +import java.util.HashSet; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.cache.IgniteCacheGetCustomCollectionsSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheLoadRebalanceEvictionSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheAtomicPrimarySyncBackPressureTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteTxConcurrentRemoveObjectsTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * Test suite. + */ +@RunWith(AllTests.class) +public class IgniteCacheMvccTestSuite9 { + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + Collection<Class> ignoredTests = new HashSet<>(); + + // Skip classes that already contain Mvcc tests. + ignoredTests.add(IgniteTxConcurrentRemoveObjectsTest.class); + + // Atomic caches. + ignoredTests.add(CacheAtomicPrimarySyncBackPressureTest.class); + + // Other non-tx tests. 
+ ignoredTests.add(IgniteCacheGetCustomCollectionsSelfTest.class); + ignoredTests.add(IgniteCacheLoadRebalanceEvictionSelfTest.class); + + TestSuite suite = new TestSuite("IgniteCache Mvcc Test Suite part 9"); + + suite.addTest(IgniteCacheTestSuite9.suite(ignoredTests)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheNearOnlySelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheNearOnlySelfTestSuite.java index 087007e9fb120..eb163a2d18615 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheNearOnlySelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheNearOnlySelfTestSuite.java @@ -17,34 +17,46 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheClientOnlySelfTest; import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearOnlySelfTest; import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearOnlyTopologySelfTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for near-only cache. */ +@RunWith(AllTests.class) public class IgniteCacheNearOnlySelfTestSuite { /** * @return Suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { + return suite(null); + } + + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. 
+ */ + public static TestSuite suite(Collection<Class> ignoredTests) { TestSuite suite = new TestSuite("Near-only cache test suite."); - suite.addTest(new TestSuite(GridCacheClientOnlySelfTest.CasePartitionedAtomic.class)); - suite.addTest(new TestSuite(GridCacheClientOnlySelfTest.CasePartitionedTransactional.class)); - suite.addTest(new TestSuite(GridCacheClientOnlySelfTest.CaseReplicatedAtomic.class)); - suite.addTest(new TestSuite(GridCacheClientOnlySelfTest.CaseReplicatedTransactional.class)); + GridTestUtils.addTestIfNeeded(suite, GridCacheClientOnlySelfTest.CasePartitionedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheClientOnlySelfTest.CasePartitionedTransactional.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheClientOnlySelfTest.CaseReplicatedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheClientOnlySelfTest.CaseReplicatedTransactional.class, ignoredTests); - suite.addTest(new TestSuite(GridCacheNearOnlyTopologySelfTest.class)); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearOnlyTopologySelfTest.class, ignoredTests); - suite.addTest(new TestSuite(GridCacheNearOnlySelfTest.CasePartitionedAtomic.class)); - suite.addTest(new TestSuite(GridCacheNearOnlySelfTest.CasePartitionedTransactional.class)); - suite.addTest(new TestSuite(GridCacheNearOnlySelfTest.CaseReplicatedAtomic.class)); - suite.addTest(new TestSuite(GridCacheNearOnlySelfTest.CaseReplicatedTransactional.class)); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearOnlySelfTest.CasePartitionedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearOnlySelfTest.CasePartitionedTransactional.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearOnlySelfTest.CaseReplicatedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearOnlySelfTest.CaseReplicatedTransactional.class, ignoredTests); return suite; } -} \ No newline at end of file +} diff --git 
a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheP2pUnmarshallingErrorTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheP2pUnmarshallingErrorTestSuite.java index dfc96dca72337..00effea08c76f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheP2pUnmarshallingErrorTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheP2pUnmarshallingErrorTestSuite.java @@ -24,25 +24,26 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheP2pUnmarshallingRebalanceErrorTest; import org.apache.ignite.internal.processors.cache.IgniteCacheP2pUnmarshallingTxErrorTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Checks behavior on exception while unmarshalling key. */ -public class IgniteCacheP2pUnmarshallingErrorTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheP2pUnmarshallingErrorTestSuite { /** * @return Suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Tests don't include in the execution. * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite(Set ignoredTests) throws Exception { + public static TestSuite suite(Set ignoredTests) { TestSuite suite = new TestSuite("P2p Unmarshalling Test Suite"); GridTestUtils.addTestIfNeeded(suite, IgniteCacheP2pUnmarshallingErrorTest.class, ignoredTests); @@ -52,4 +53,4 @@ public static TestSuite suite(Set ignoredTests) throws Exception { return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite.java index 9040ea575f1d5..6c0b3a2395681 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.IgniteCacheCreateRestartSelfTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheNearRestartRollbackSelfTest; @@ -24,26 +25,28 @@ import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedNodeRestartTest; import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedOptimisticTxNodeRestartTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedNodeRestartSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Cache stability test suite on changing topology. */ -public class IgniteCacheRestartTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheRestartTestSuite { /** * @return Suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Cache Restart Test Suite"); - suite.addTestSuite(GridCachePartitionedNodeRestartTest.class); - suite.addTestSuite(GridCachePartitionedOptimisticTxNodeRestartTest.class); - suite.addTestSuite(GridCacheReplicatedNodeRestartSelfTest.class); - suite.addTestSuite(GridCachePartitionedNearDisabledOptimisticTxNodeRestartTest.class); - suite.addTestSuite(IgniteCacheNearRestartRollbackSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNodeRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedOptimisticTxNodeRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedNodeRestartSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledOptimisticTxNodeRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheNearRestartRollbackSelfTest.class)); - suite.addTestSuite(IgniteCacheCreateRestartSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheCreateRestartSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite2.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite2.java index 6a4c4878b17a7..e6d0f5bdd3cc1 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite2.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheRestartTestSuite2.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.internal.IgniteProperties; @@ -27,29 +28,31 @@ import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheAtomicNodeRestartTest; import 
org.apache.ignite.internal.processors.cache.distributed.IgniteCacheGetRestartTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheAtomicReplicatedNodeRestartSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Cache stability test suite on changing topology. */ -public class IgniteCacheRestartTestSuite2 extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheRestartTestSuite2 { /** * @return Suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Cache Restart Test Suite2"); - suite.addTestSuite(IgniteCacheAtomicNodeRestartTest.class); - suite.addTestSuite(IgniteCacheAtomicReplicatedNodeRestartSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicNodeRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicReplicatedNodeRestartSelfTest.class)); - suite.addTestSuite(IgniteCacheAtomicPutAllFailoverSelfTest.class); - suite.addTestSuite(IgniteCachePutAllRestartTest.class); - suite.addTestSuite(GridCachePutAllFailoverSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicPutAllFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCachePutAllRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePutAllFailoverSelfTest.class)); // TODO IGNITE-4768. 
- //suite.addTestSuite(IgniteBinaryMetadataUpdateNodeRestartTest.class); + //suite.addTest(new JUnit4TestAdapter(IgniteBinaryMetadataUpdateNodeRestartTest.class)); - suite.addTestSuite(IgniteCacheGetRestartTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheGetRestartTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTcpClientDiscoveryTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTcpClientDiscoveryTestSuite.java index e0efda43e4a8c..db3e0ec30e72b 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTcpClientDiscoveryTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTcpClientDiscoveryTestSuite.java @@ -17,31 +17,51 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.GridCacheTcpClientDiscoveryMultiThreadedTest; -import org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +import static org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientPartitionedAtomic; +import static org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientPartitionedTransactional; +import static org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientReplicatedAtomic; +import static org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientReplicatedTransactional; +import static 
org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearPartitionedAtomic; +import static org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearPartitionedTransactional; +import static org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearReplicatedAtomic; +import static org.apache.ignite.internal.processors.cache.distributed.GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearReplicatedTransactional; /** * Tests a cache with TcpClientDiscovery SPI being enabled. */ +@RunWith(AllTests.class) public class IgniteCacheTcpClientDiscoveryTestSuite { /** * @return Suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { + return suite(null); + } + + /** + * @param ignoredTests Tests to ignore. + * @return Test suite. + */ + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("Cache + TcpClientDiscovery SPI test suite."); - suite.addTest(new TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearPartitionedAtomic.class)); - suite.addTest(new TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearPartitionedTransactional.class)); - suite.addTest(new TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearReplicatedAtomic.class)); - suite.addTest(new TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseNearReplicatedTransactional.class)); - suite.addTest(new TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientPartitionedAtomic.class)); - suite.addTest(new TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientPartitionedTransactional.class)); - suite.addTest(new TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientReplicatedAtomic.class)); - suite.addTest(new 
TestSuite(GridCacheClientModesTcpClientDiscoveryAbstractTest.CaseClientReplicatedTransactional.class)); - suite.addTest(new TestSuite(GridCacheTcpClientDiscoveryMultiThreadedTest.class)); + GridTestUtils.addTestIfNeeded(suite, CaseNearPartitionedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CaseNearPartitionedTransactional.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CaseNearReplicatedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CaseNearReplicatedTransactional.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CaseClientPartitionedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CaseClientPartitionedTransactional.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CaseClientReplicatedAtomic.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CaseClientReplicatedTransactional.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheTcpClientDiscoveryMultiThreadedTest.class, ignoredTests); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite.java index d6c1f96a56b97..d1b581a2b7b84 100755 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite.java @@ -17,10 +17,12 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.cache.IgniteCacheEntryProcessorSequentialCallTest; import org.apache.ignite.cache.IgniteWarmupClosureSelfTest; import org.apache.ignite.cache.store.CacheStoreReadFromBackupTest; +import org.apache.ignite.cache.store.CacheStoreWriteErrorTest; import org.apache.ignite.cache.store.CacheTransactionalStoreReadFromBackupTest; import 
org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest; import org.apache.ignite.cache.store.GridCacheLoadOnlyStoreAdapterSelfTest; @@ -35,10 +37,6 @@ import org.apache.ignite.cache.store.jdbc.GridCacheJdbcBlobStoreMultithreadedSelfTest; import org.apache.ignite.cache.store.jdbc.GridCacheJdbcBlobStoreSelfTest; import org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformerTest; -import org.apache.ignite.internal.IgniteInternalCacheRemoveTest; -import org.apache.ignite.internal.managers.IgniteDiagnosticMessagesMultipleConnectionsTest; -import org.apache.ignite.internal.managers.IgniteDiagnosticMessagesTest; -import org.apache.ignite.internal.managers.communication.GridIoManagerSelfTest; import org.apache.ignite.internal.managers.communication.IgniteCommunicationBalanceMultipleConnectionsTest; import org.apache.ignite.internal.managers.communication.IgniteCommunicationBalancePairedConnectionsTest; import org.apache.ignite.internal.managers.communication.IgniteCommunicationBalanceTest; @@ -47,7 +45,6 @@ import org.apache.ignite.internal.managers.communication.IgniteVariousConnectionNumberTest; import org.apache.ignite.internal.processors.cache.BinaryMetadataRegistrationInsideEntryProcessorTest; import org.apache.ignite.internal.processors.cache.CacheAffinityCallSelfTest; -import org.apache.ignite.internal.processors.cache.CacheAtomicSingleMessageCountSelfTest; import org.apache.ignite.internal.processors.cache.CacheDeferredDeleteQueueTest; import org.apache.ignite.internal.processors.cache.CacheDeferredDeleteSanitySelfTest; import org.apache.ignite.internal.processors.cache.CacheFutureExceptionSelfTest; @@ -60,25 +57,16 @@ import org.apache.ignite.internal.processors.cache.GridCacheAffinityMapperSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheAffinityRoutingSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheAsyncOperationsLimitSelfTest; -import 
org.apache.ignite.internal.processors.cache.GridCacheAtomicUsersAffinityMapperSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheClearAllSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheClearLocallySelfTest; import org.apache.ignite.internal.processors.cache.GridCacheColocatedTxStoreExceptionSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheConcurrentGetCacheOnClientTest; import org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheConfigurationConsistencySelfTest; import org.apache.ignite.internal.processors.cache.GridCacheConfigurationValidationSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheEntryMemorySizeSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheFullTextQueryMultithreadedSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheKeyCheckNearEnabledSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheKeyCheckSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheLeakTest; import org.apache.ignite.internal.processors.cache.GridCacheLifecycleAwareSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheLocalTxStoreExceptionSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheMissingCommitVersionSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheMixedPartitionExchangeSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheMultiUpdateLockSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheMvccFlagsTest; import org.apache.ignite.internal.processors.cache.GridCacheMvccManagerSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheMvccPartitionedSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheMvccSelfTest; @@ -89,17 +77,10 @@ import org.apache.ignite.internal.processors.cache.GridCachePartitionedLocalStoreSelfTest; 
import org.apache.ignite.internal.processors.cache.GridCacheReplicatedLocalStoreSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheReplicatedTxStoreExceptionSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheReplicatedUsersAffinityMapperSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheReturnValueTransferSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheSlowTxWarnTest; import org.apache.ignite.internal.processors.cache.GridCacheStopSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheStorePutxSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheStoreValueBytesSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheSwapPreloadSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheTtlManagerLoadTest; import org.apache.ignite.internal.processors.cache.GridCacheTtlManagerSelfTest; import org.apache.ignite.internal.processors.cache.GridCacheTxPartitionedLocalStoreSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheTxUsersAffinityMapperSelfTest; import org.apache.ignite.internal.processors.cache.GridDataStorageConfigurationConsistencySelfTest; import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicInvokeTest; import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicLocalInvokeTest; @@ -107,7 +88,6 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicNearEnabledInvokeTest; import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicStopBusySelfTest; import org.apache.ignite.internal.processors.cache.IgniteCacheAtomicWithStoreInvokeTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheBinaryEntryProcessorSelfTest; import org.apache.ignite.internal.processors.cache.IgniteCacheEntryListenerAtomicLocalTest; import org.apache.ignite.internal.processors.cache.IgniteCacheEntryListenerAtomicReplicatedTest; import 
org.apache.ignite.internal.processors.cache.IgniteCacheEntryListenerAtomicTest; @@ -118,27 +98,19 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheEntryProcessorCallTest; import org.apache.ignite.internal.processors.cache.IgniteCacheManyAsyncOperationsTest; import org.apache.ignite.internal.processors.cache.IgniteCacheNearLockValueSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheObjectPutSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheSerializationSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheStartStopLoadTest; import org.apache.ignite.internal.processors.cache.IgniteCacheTransactionalStopBusySelfTest; import org.apache.ignite.internal.processors.cache.IgniteCacheTxInvokeTest; import org.apache.ignite.internal.processors.cache.IgniteCacheTxLocalInvokeTest; import org.apache.ignite.internal.processors.cache.IgniteCacheTxNearEnabledInvokeTest; -import org.apache.ignite.internal.processors.cache.IgniteCachingProviderSelfTest; import org.apache.ignite.internal.processors.cache.IgniteClientAffinityAssignmentSelfTest; import org.apache.ignite.internal.processors.cache.IgniteIncompleteCacheObjectSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteOnePhaseCommitNearSelfTest; import org.apache.ignite.internal.processors.cache.IgnitePutAllLargeBatchSelfTest; import org.apache.ignite.internal.processors.cache.IgnitePutAllUpdateNonPreloadedPartitionSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteStaticCacheStartSelfTest; import org.apache.ignite.internal.processors.cache.IgniteTxConfigCacheSelfTest; -import org.apache.ignite.internal.processors.cache.InterceptorWithKeepBinaryCacheFullApiTest; import org.apache.ignite.internal.processors.cache.context.IgniteCacheAtomicExecutionContextTest; import org.apache.ignite.internal.processors.cache.context.IgniteCacheContinuousExecutionContextTest; import 
org.apache.ignite.internal.processors.cache.context.IgniteCacheIsolatedExecutionContextTest; import org.apache.ignite.internal.processors.cache.context.IgniteCacheP2PDisableExecutionContextTest; -import org.apache.ignite.internal.processors.cache.context.IgniteCachePartitionedExecutionContextTest; import org.apache.ignite.internal.processors.cache.context.IgniteCachePrivateExecutionContextTest; import org.apache.ignite.internal.processors.cache.context.IgniteCacheReplicatedExecutionContextTest; import org.apache.ignite.internal.processors.cache.context.IgniteCacheSharedExecutionContextTest; @@ -163,10 +135,12 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionsStateValidatorSelfTest; import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheGetStoreErrorSelfTest; import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearTxExceptionSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedStorePutSelfTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxExceptionSelfTest; import org.apache.ignite.internal.processors.cache.local.GridCacheLocalTxExceptionSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.CacheEntryProcessorExternalizableFailedTest; import org.apache.ignite.internal.processors.cache.query.continuous.CacheEntryProcessorNonSerializableTest; +import org.apache.ignite.internal.processors.datastreamer.DataStreamProcessorPersistenceSelfTest; import org.apache.ignite.internal.processors.datastreamer.DataStreamProcessorSelfTest; import org.apache.ignite.internal.processors.datastreamer.DataStreamerClientReconnectAfterClusterRestartTest; import org.apache.ignite.internal.processors.datastreamer.DataStreamerImplSelfTest; @@ -175,67 +149,62 @@ import org.apache.ignite.internal.processors.datastreamer.DataStreamerTimeoutTest; import 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateAfterLoadTest; import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testframework.junits.GridAbstractTest; - -import java.util.Set; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite. */ -public class IgniteCacheTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheTestSuite { /** * @return IgniteCache test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Tests to ignore. * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite(Set ignoredTests) throws Exception { - System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false"); - + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("IgniteCache Test Suite"); - suite.addTestSuite(IgniteCacheEntryListenerAtomicTest.class); - suite.addTestSuite(IgniteCacheEntryListenerAtomicReplicatedTest.class); - suite.addTestSuite(IgniteCacheEntryListenerAtomicLocalTest.class); - suite.addTestSuite(IgniteCacheEntryListenerTxTest.class); - suite.addTestSuite(IgniteCacheEntryListenerTxReplicatedTest.class); - suite.addTestSuite(IgniteCacheEntryListenerTxLocalTest.class); - suite.addTestSuite(IgniteCacheEntryListenerEagerTtlDisabledTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryListenerAtomicTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryListenerAtomicReplicatedTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryListenerAtomicLocalTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryListenerTxTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryListenerTxReplicatedTest.class, 
ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryListenerTxLocalTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryListenerEagerTtlDisabledTest.class, ignoredTests); - suite.addTestSuite(IgniteClientAffinityAssignmentSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteClientAffinityAssignmentSelfTest.class, ignoredTests); - suite.addTestSuite(IgniteCacheAtomicInvokeTest.class); - suite.addTestSuite(IgniteCacheAtomicNearEnabledInvokeTest.class); - suite.addTestSuite(IgniteCacheAtomicWithStoreInvokeTest.class); - suite.addTestSuite(IgniteCacheAtomicLocalInvokeTest.class); - suite.addTestSuite(IgniteCacheAtomicLocalWithStoreInvokeTest.class); - suite.addTestSuite(IgniteCacheTxInvokeTest.class); - suite.addTestSuite(CacheEntryProcessorNonSerializableTest.class); - suite.addTestSuite(CacheEntryProcessorExternalizableFailedTest.class); - suite.addTestSuite(IgniteCacheEntryProcessorCallTest.class); - suite.addTestSuite(IgniteCacheTxNearEnabledInvokeTest.class); - suite.addTestSuite(IgniteCacheTxLocalInvokeTest.class); - suite.addTestSuite(IgniteCrossCacheTxStoreSelfTest.class); - suite.addTestSuite(IgniteCacheEntryProcessorSequentialCallTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicNearEnabledInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicWithStoreInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicLocalInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicLocalWithStoreInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheTxInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheEntryProcessorNonSerializableTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheEntryProcessorExternalizableFailedTest.class, ignoredTests); + 
GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryProcessorCallTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheTxNearEnabledInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheTxLocalInvokeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCrossCacheTxStoreSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryProcessorSequentialCallTest.class, ignoredTests); // TODO GG-11148: include test when implemented. // Test fails due to incorrect handling of CacheConfiguration#getCopyOnRead() and // CacheObjectContext#storeValue() properties. Heap storage should be redesigned in this ticket. //GridTestUtils.addTestIfNeeded(suite, CacheEntryProcessorCopySelfTest.class, ignoredTests); - suite.addTestSuite(IgnitePutAllLargeBatchSelfTest.class); - suite.addTestSuite(IgnitePutAllUpdateNonPreloadedPartitionSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePutAllLargeBatchSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePutAllUpdateNonPreloadedPartitionSelfTest.class, ignoredTests); // User's class loader tests. GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicExecutionContextTest.class, ignoredTests); - GridTestUtils.addTestIfNeeded(suite, IgniteCachePartitionedExecutionContextTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, IgniteCacheReplicatedExecutionContextTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, IgniteCacheTxExecutionContextTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, IgniteCacheContinuousExecutionContextTest.class, ignoredTests); @@ -245,151 +214,147 @@ public static TestSuite suite(Set ignoredTests) throws Exception { GridTestUtils.addTestIfNeeded(suite, IgniteCacheSharedExecutionContextTest.class, ignoredTests); // Warmup closure tests. - suite.addTestSuite(IgniteWarmupClosureSelfTest.class); - - // Swap tests. 
- suite.addTestSuite(GridCacheSwapPreloadSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWarmupClosureSelfTest.class, ignoredTests); // Common tests. - suite.addTestSuite(CacheNamesSelfTest.class); - suite.addTestSuite(CacheNamesWithSpecialCharactersTest.class); - suite.addTestSuite(GridCacheConcurrentMapSelfTest.class); - suite.addTestSuite(GridCacheAffinityMapperSelfTest.class); - suite.addTestSuite(CacheAffinityCallSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheNamesSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheNamesWithSpecialCharactersTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheConcurrentMapSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAffinityMapperSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheAffinityCallSelfTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, GridCacheAffinityRoutingSelfTest.class, ignoredTests); GridTestUtils.addTestIfNeeded(suite, GridCacheMvccSelfTest.class, ignoredTests); - suite.addTestSuite(GridCacheMvccPartitionedSelfTest.class); - suite.addTestSuite(GridCacheMvccManagerSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheMvccPartitionedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheMvccManagerSelfTest.class, ignoredTests); // TODO GG-11141. 
- // suite.addTestSuite(GridCacheP2PUndeploySelfTest.class); - suite.addTestSuite(GridCacheConfigurationValidationSelfTest.class); - suite.addTestSuite(GridCacheConfigurationConsistencySelfTest.class); - suite.addTestSuite(GridDataStorageConfigurationConsistencySelfTest.class); - suite.addTestSuite(DataStorageConfigurationValidationTest.class); - suite.addTestSuite(GridCacheJdbcBlobStoreSelfTest.class); - suite.addTestSuite(GridCacheJdbcBlobStoreMultithreadedSelfTest.class); - suite.addTestSuite(JdbcTypesDefaultTransformerTest.class); - suite.addTestSuite(CacheJdbcPojoStoreTest.class); - suite.addTestSuite(CacheJdbcPojoStoreBinaryMarshallerSelfTest.class); - suite.addTestSuite(CacheJdbcPojoStoreBinaryMarshallerStoreKeepBinarySelfTest.class); - suite.addTestSuite(CacheJdbcPojoStoreBinaryMarshallerWithSqlEscapeSelfTest.class); - suite.addTestSuite(CacheJdbcPojoStoreBinaryMarshallerStoreKeepBinaryWithSqlEscapeSelfTest.class); - suite.addTestSuite(CacheJdbcPojoStoreMultitreadedSelfTest.class); - suite.addTestSuite(GridCacheBalancingStoreSelfTest.class); - suite.addTestSuite(GridCacheAffinityApiSelfTest.class); - suite.addTestSuite(GridCacheStoreValueBytesSelfTest.class); + // GridTestUtils.addTestIfNeeded(suite, GridCacheP2PUndeploySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheConfigurationValidationSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheConfigurationConsistencySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridDataStorageConfigurationConsistencySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, DataStorageConfigurationValidationTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheJdbcBlobStoreSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheJdbcBlobStoreMultithreadedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, JdbcTypesDefaultTransformerTest.class, ignoredTests); +
         GridTestUtils.addTestIfNeeded(suite, CacheJdbcPojoStoreTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheJdbcPojoStoreBinaryMarshallerSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheJdbcPojoStoreBinaryMarshallerStoreKeepBinarySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheJdbcPojoStoreBinaryMarshallerWithSqlEscapeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheJdbcPojoStoreBinaryMarshallerStoreKeepBinaryWithSqlEscapeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheJdbcPojoStoreMultitreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheBalancingStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheAffinityApiSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheStoreValueBytesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, DataStreamProcessorPersistenceSelfTest.class, ignoredTests);
         GridTestUtils.addTestIfNeeded(suite, DataStreamProcessorSelfTest.class, ignoredTests);
         GridTestUtils.addTestIfNeeded(suite, DataStreamerUpdateAfterLoadTest.class, ignoredTests);
-        suite.addTestSuite(DataStreamerMultiThreadedSelfTest.class);
-        suite.addTestSuite(DataStreamerMultinodeCreateCacheTest.class);
-        suite.addTestSuite(DataStreamerImplSelfTest.class);
-        suite.addTestSuite(DataStreamerTimeoutTest.class);
-        suite.addTestSuite(DataStreamerClientReconnectAfterClusterRestartTest.class);
+        GridTestUtils.addTestIfNeeded(suite, DataStreamerMultiThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, DataStreamerMultinodeCreateCacheTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, DataStreamerImplSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, DataStreamerTimeoutTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, DataStreamerClientReconnectAfterClusterRestartTest.class, ignoredTests);
         GridTestUtils.addTestIfNeeded(suite, GridCacheEntryMemorySizeSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(GridCacheClearAllSelfTest.class);
-        suite.addTestSuite(GridCacheObjectToStringSelfTest.class);
-        suite.addTestSuite(GridCacheLoadOnlyStoreAdapterSelfTest.class);
-        suite.addTestSuite(GridCacheGetStoreErrorSelfTest.class);
-        suite.addTestSuite(StoreResourceInjectionSelfTest.class);
-        suite.addTestSuite(CacheFutureExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheAsyncOperationsLimitSelfTest.class);
-        suite.addTestSuite(IgniteCacheManyAsyncOperationsTest.class);
-        suite.addTestSuite(GridCacheTtlManagerSelfTest.class);
-        // TODO: ignite-4534
-//        suite.addTestSuite(GridCacheTtlManagerEvictionSelfTest.class);
-        suite.addTestSuite(GridCacheLifecycleAwareSelfTest.class);
-        suite.addTestSuite(IgniteCacheAtomicStopBusySelfTest.class);
-        suite.addTestSuite(IgniteCacheTransactionalStopBusySelfTest.class);
-        suite.addTestSuite(GridCacheAtomicNearCacheSelfTest.class);
-        suite.addTestSuite(CacheAtomicNearUpdateTopologyChangeTest.class);
-        suite.addTestSuite(CacheTxNearUpdateTopologyChangeTest.class);
-        suite.addTestSuite(GridCacheStorePutxSelfTest.class);
-        suite.addTestSuite(GridCacheOffHeapMultiThreadedUpdateSelfTest.class);
-        suite.addTestSuite(GridCacheOffHeapAtomicMultiThreadedUpdateSelfTest.class);
-        suite.addTestSuite(GridCacheColocatedTxStoreExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedTxStoreExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheLocalTxStoreExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheNearTxStoreExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheMissingCommitVersionSelfTest.class);
-        suite.addTestSuite(GridCacheEntrySetIterationPreloadingSelfTest.class);
-        suite.addTestSuite(GridCacheMixedPartitionExchangeSelfTest.class);
-        suite.addTestSuite(IgniteCacheAtomicMessageRecoveryTest.class);
-        suite.addTestSuite(IgniteCacheAtomicMessageRecoveryPairedConnectionsTest.class);
-        suite.addTestSuite(IgniteCacheAtomicMessageRecovery10ConnectionsTest.class);
-        suite.addTestSuite(IgniteCacheTxMessageRecoveryTest.class);
-        suite.addTestSuite(IgniteCacheMessageWriteTimeoutTest.class);
-        suite.addTestSuite(IgniteCacheMessageRecoveryIdleConnectionTest.class);
-        suite.addTestSuite(IgniteCacheConnectionRecoveryTest.class);
-        suite.addTestSuite(IgniteCacheConnectionRecovery10ConnectionsTest.class);
-        suite.addTestSuite(GridCacheGlobalLoadTest.class);
-        suite.addTestSuite(GridCachePartitionedLocalStoreSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedLocalStoreSelfTest.class);
-        suite.addTestSuite(GridCacheTxPartitionedLocalStoreSelfTest.class);
-        suite.addTestSuite(IgniteCacheSystemTransactionsSelfTest.class);
-        suite.addTestSuite(CacheDeferredDeleteSanitySelfTest.class);
-        suite.addTestSuite(CacheDeferredDeleteQueueTest.class);
-        suite.addTestSuite(GridCachePartitionsStateValidatorSelfTest.class);
-        suite.addTestSuite(GridCachePartitionsStateValidationTest.class);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheClearAllSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheObjectToStringSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLoadOnlyStoreAdapterSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheGetStoreErrorSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, StoreResourceInjectionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheFutureExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheAsyncOperationsLimitSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheManyAsyncOperationsTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheTtlManagerSelfTest.class, ignoredTests);
+//        GridTestUtils.addTestIfNeeded(suite, GridCacheTtlManagerEvictionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLifecycleAwareSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicStopBusySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheTransactionalStopBusySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicNearCacheSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheAtomicNearUpdateTopologyChangeTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheTxNearUpdateTopologyChangeTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedStorePutSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheOffHeapMultiThreadedUpdateSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheOffHeapAtomicMultiThreadedUpdateSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheColocatedTxStoreExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheReplicatedTxStoreExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalTxStoreExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearTxStoreExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheMissingCommitVersionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheEntrySetIterationPreloadingSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheMixedPartitionExchangeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicMessageRecoveryTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicMessageRecoveryPairedConnectionsTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicMessageRecovery10ConnectionsTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheTxMessageRecoveryTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheMessageWriteTimeoutTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheMessageRecoveryIdleConnectionTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheConnectionRecoveryTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheConnectionRecovery10ConnectionsTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheGlobalLoadTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedLocalStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheReplicatedLocalStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheTxPartitionedLocalStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheSystemTransactionsSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheDeferredDeleteSanitySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheDeferredDeleteQueueTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionsStateValidatorSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionsStateValidationTest.class, ignoredTests);
 
-        suite.addTest(IgniteCacheTcpClientDiscoveryTestSuite.suite());
+        suite.addTest(IgniteCacheTcpClientDiscoveryTestSuite.suite(ignoredTests));
 
         // Heuristic exception handling.
-        suite.addTestSuite(GridCacheColocatedTxExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedTxExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheLocalTxExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheNearTxExceptionSelfTest.class);
-        suite.addTestSuite(GridCacheStopSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheColocatedTxExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheReplicatedTxExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalTxExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearTxExceptionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheStopSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(IgniteCacheNearLockValueSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCacheNearLockValueSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(CachePutEventListenerErrorSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite, CachePutEventListenerErrorSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(IgniteTxConfigCacheSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite, IgniteTxConfigCacheSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(CacheTxFastFinishTest.class);
+        GridTestUtils.addTestIfNeeded(suite, CacheTxFastFinishTest.class, ignoredTests);
 
-        //suite.addTestSuite(GridIoManagerSelfTest.class);
-        suite.addTestSuite(IgniteVariousConnectionNumberTest.class);
-        suite.addTestSuite(IgniteCommunicationBalanceTest.class);
-        suite.addTestSuite(IgniteCommunicationBalancePairedConnectionsTest.class);
-        suite.addTestSuite(IgniteCommunicationBalanceMultipleConnectionsTest.class);
-        suite.addTestSuite(IgniteCommunicationSslBalanceTest.class);
-        suite.addTestSuite(IgniteIoTestMessagesTest.class);
-        suite.addTestSuite(IgniteDiagnosticMessagesTest.class);
-        suite.addTestSuite(IgniteDiagnosticMessagesMultipleConnectionsTest.class);
+        //GridTestUtils.addTestIfNeeded(suite,GridIoManagerSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteVariousConnectionNumberTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCommunicationBalanceTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCommunicationBalancePairedConnectionsTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCommunicationBalanceMultipleConnectionsTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteCommunicationSslBalanceTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteIoTestMessagesTest.class, ignoredTests);
 
-        suite.addTestSuite(IgniteIncompleteCacheObjectSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite, IgniteIncompleteCacheObjectSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(GridStoreLoadCacheTest.class);
-        suite.addTestSuite(CacheStoreReadFromBackupTest.class);
-        suite.addTestSuite(CacheTransactionalStoreReadFromBackupTest.class);
+        GridTestUtils.addTestIfNeeded(suite, GridStoreLoadCacheTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheStoreReadFromBackupTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheStoreWriteErrorTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheTransactionalStoreReadFromBackupTest.class, ignoredTests);
 
-        //suite.addTestSuite(CacheAtomicSingleMessageCountSelfTest.class);
-        //suite.addTestSuite(GridCacheAtomicUsersAffinityMapperSelfTest.class);
-        //suite.addTestSuite(GridCacheClearLocallySelfTest.class);
-        //suite.addTestSuite(GridCacheConcurrentGetCacheOnClientTest.class);
-        //suite.addTestSuite(GridCacheFullTextQueryMultithreadedSelfTest.class);
-        //suite.addTestSuite(GridCacheKeyCheckNearEnabledSelfTest.class);
-        //suite.addTestSuite(GridCacheKeyCheckSelfTest.class);
-        //suite.addTestSuite(GridCacheLeakTest.class);
-        //suite.addTestSuite(GridCacheMultiUpdateLockSelfTest.class);
-        //suite.addTestSuite(GridCacheMvccFlagsTest.class);
-        //suite.addTestSuite(GridCacheReplicatedUsersAffinityMapperSelfTest.class);
-        //suite.addTestSuite(GridCacheReturnValueTransferSelfTest.class);
-        //suite.addTestSuite(GridCacheSlowTxWarnTest.class);
-        //suite.addTestSuite(GridCacheTtlManagerLoadTest.class);
-        //suite.addTestSuite(GridCacheTxUsersAffinityMapperSelfTest.class);
-        //suite.addTestSuite(IgniteInternalCacheRemoveTest.class);
-        //suite.addTestSuite(IgniteCacheBinaryEntryProcessorSelfTest.class);
-        //suite.addTestSuite(IgniteCacheObjectPutSelfTest.class);
-        //suite.addTestSuite(IgniteCacheSerializationSelfTest.class);
-        //suite.addTestSuite(IgniteCacheStartStopLoadTest.class);
-        //suite.addTestSuite(IgniteCachingProviderSelfTest.class);
-        //suite.addTestSuite(IgniteOnePhaseCommitNearSelfTest.class);
-        //suite.addTestSuite(IgniteStaticCacheStartSelfTest.class);
-        //suite.addTestSuite(InterceptorWithKeepBinaryCacheFullApiTest.class);
+        //GridTestUtils.addTestIfNeeded(suite,CacheAtomicSingleMessageCountSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheAtomicUsersAffinityMapperSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheClearLocallySelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheConcurrentGetCacheOnClientTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheFullTextQueryMultithreadedSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheKeyCheckNearEnabledSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheKeyCheckSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheLeakTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheMultiUpdateLockSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheMvccFlagsTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedUsersAffinityMapperSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReturnValueTransferSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheSlowTxWarnTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheTtlManagerLoadTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheTxUsersAffinityMapperSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteInternalCacheRemoveTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteCacheBinaryEntryProcessorSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteCacheObjectPutSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteCacheSerializationSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteCacheStartStopLoadTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteCachingProviderSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteOnePhaseCommitNearSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,IgniteStaticCacheStartSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,InterceptorWithKeepBinaryCacheFullApiTest.class, ignoredTests);
 
-        suite.addTestSuite(BinaryMetadataRegistrationInsideEntryProcessorTest.class);
+        GridTestUtils.addTestIfNeeded(suite, BinaryMetadataRegistrationInsideEntryProcessorTest.class, ignoredTests);
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite2.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite2.java
index b8eb276af589c..fc86e479bf9d2 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite2.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite2.java
@@ -17,13 +17,12 @@
 
 package org.apache.ignite.testsuites;
 
+import java.util.Collection;
 import junit.framework.TestSuite;
-
 import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilterSelfTest;
 import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionBackupFilterSelfTest;
 import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionExcludeNeighborsSelfTest;
 import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionFastPowerOfTwoHashSelfTest;
-import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionSelfTest;
 import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunctionStandardHashSelfTest;
 import org.apache.ignite.internal.IgniteReflectionFactorySelfTest;
 import org.apache.ignite.internal.processors.cache.CacheComparatorTest;
@@ -67,23 +66,18 @@
 import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheClientNodeChangingTopologyTest;
 import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheClientNodePartitionsExchangeTest;
 import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheServerNodeConcurrentStart;
+import org.apache.ignite.internal.processors.cache.distributed.dht.CacheGetReadFromBackupFailoverTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.CachePartitionPartialCountersMapSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedDebugTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedOptimisticTransactionSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedPreloadRestartSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedPrimarySyncSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedTxSingleThreadedSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtAtomicEvictionNearReadersSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtEntrySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtEntrySetSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtEvictionNearReadersSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtEvictionsDisabledSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtMappingSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtMultiBackupTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadBigDataSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadDelayedSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadDisabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadMessageCountTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadMultiThreadedSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadOnheapSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadPutGetSelfTest;
@@ -91,14 +85,10 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadStartStopSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheDhtPreloadUnloadSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedNearDisabledLockSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedNearDisabledMetricsSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedTopologyChangeSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedUnloadEventsSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCacheClearDuringRebalanceTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCacheContainsKeyColocatedSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCachePartitionedBackupNodeFailureRecoveryTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCrossCacheTxNearEnabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteTxConsistencyColocatedRestartSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearEvictionEventSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearMultiNodeSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheAtomicNearReadersSelfTest;
@@ -120,9 +110,7 @@
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedBasicOpSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedBasicStoreMultiNodeSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedBasicStoreSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedEntryLockSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedEventSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedEvictionSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedExplicitLockNodeFailureSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedGetAndTransformStoreSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedLoadCacheSelfTest;
@@ -131,30 +119,20 @@
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedMultiNodeSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedMultiThreadedPutGetSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedNearDisabledBasicStoreMultiNodeSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedNestedTxTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedNodeFailureSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedPreloadLifecycleSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedStorePutSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxConcurrentGetTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxMultiNodeSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxMultiThreadedSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxReadTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxSingleThreadedSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCachePartitionedTxTimeoutSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheRendezvousAffinityClientSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearCacheStoreUpdateTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.GridPartitionedBackupLoadSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheContainsKeyNearSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheNearTxRollbackTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.NearCacheMultithreadedUpdateTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.NearCachePutAllMultinodeTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.NearCacheSyncUpdateTest;
 import org.apache.ignite.internal.processors.cache.distributed.near.NoneRebalanceModeSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedJobExecutionTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalAtomicBasicStoreSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalAtomicGetAndTransformStoreSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalBasicApiSelfTest;
-import org.apache.ignite.internal.processors.cache.local.GridCacheLocalBasicStoreMultithreadedSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalBasicStoreSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalEventSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalEvictionEventSelfTest;
@@ -164,197 +142,212 @@
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalLockSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalMultithreadedSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalTxMultiThreadedSelfTest;
-import org.apache.ignite.internal.processors.cache.local.GridCacheLocalTxReadTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalTxSingleThreadedSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalTxTimeoutSelfTest;
 import org.apache.ignite.internal.processors.cache.persistence.MemoryPolicyInitializationTest;
 import org.apache.ignite.internal.processors.continuous.IgniteNoCustomEventsOnNodeStart;
-import org.apache.ignite.testframework.junits.GridAbstractTest;
+import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Test suite.
  */
-public class IgniteCacheTestSuite2 extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheTestSuite2 {
     /**
      * @return IgniteCache test suite.
-     * @throws Exception Thrown in case of the failure.
     */
-    public static TestSuite suite() throws Exception {
-        System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false");
+    public static TestSuite suite() {
+        return suite(null);
+    }
 
+    /**
+     * @param ignoredTests Ignored tests.
+     * @return IgniteCache test suite.
+     */
+    public static TestSuite suite(Collection ignoredTests) {
         TestSuite suite = new TestSuite("IgniteCache Test Suite part 2");
 
         // Local cache.
-        suite.addTestSuite(GridCacheLocalBasicApiSelfTest.class);
-        suite.addTestSuite(GridCacheLocalBasicStoreSelfTest.class);
-        //suite.addTestSuite(GridCacheLocalBasicStoreMultithreadedSelfTest.class);
-        suite.addTestSuite(GridCacheLocalAtomicBasicStoreSelfTest.class);
-        suite.addTestSuite(GridCacheLocalGetAndTransformStoreSelfTest.class);
-        suite.addTestSuite(GridCacheLocalAtomicGetAndTransformStoreSelfTest.class);
-        suite.addTestSuite(GridCacheLocalLoadAllSelfTest.class);
-        suite.addTestSuite(GridCacheLocalLockSelfTest.class);
-        suite.addTestSuite(GridCacheLocalMultithreadedSelfTest.class);
-        suite.addTestSuite(GridCacheLocalTxSingleThreadedSelfTest.class);
-        //suite.addTestSuite(GridCacheLocalTxReadTest.class);
-        suite.addTestSuite(GridCacheLocalTxTimeoutSelfTest.class);
-        suite.addTestSuite(GridCacheLocalEventSelfTest.class);
-        suite.addTestSuite(GridCacheLocalEvictionEventSelfTest.class);
-        suite.addTestSuite(GridCacheVariableTopologySelfTest.class);
-        suite.addTestSuite(GridCacheLocalTxMultiThreadedSelfTest.class);
-        suite.addTestSuite(GridCacheTransformEventSelfTest.class);
-        suite.addTestSuite(GridCacheLocalIsolatedNodesSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalBasicApiSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalBasicStoreSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheLocalBasicStoreMultithreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalAtomicBasicStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalGetAndTransformStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalAtomicGetAndTransformStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalLoadAllSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalLockSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalMultithreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalTxSingleThreadedSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheLocalTxReadTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalTxTimeoutSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalEventSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalEvictionEventSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalTxMultiThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheLocalIsolatedNodesSelfTest.class, ignoredTests);
+
+        GridTestUtils.addTestIfNeeded(suite, GridCacheTransformEventSelfTest.class, ignoredTests);
 
         // Partitioned cache.
-        suite.addTestSuite(GridCachePartitionedGetSelfTest.class);
-        suite.addTest(new TestSuite(GridCachePartitionedBasicApiTest.class));
-        suite.addTest(new TestSuite(GridCacheNearMultiGetSelfTest.class));
-        suite.addTest(new TestSuite(NoneRebalanceModeSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheNearJobExecutionSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheReplicatedJobExecutionTest.class));
-        suite.addTest(new TestSuite(GridCacheNearOneNodeSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheNearMultiNodeSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheAtomicNearMultiNodeSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheNearReadersSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheNearReaderPreloadSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheAtomicNearReadersSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedAffinitySelfTest.class));
-        //suite.addTest(new TestSuite(RendezvousAffinityFunctionSelfTest.class));
-        suite.addTest(new TestSuite(RendezvousAffinityFunctionExcludeNeighborsSelfTest.class));
-        suite.addTest(new TestSuite(RendezvousAffinityFunctionFastPowerOfTwoHashSelfTest.class));
-        suite.addTest(new TestSuite(RendezvousAffinityFunctionStandardHashSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheRendezvousAffinityClientSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedProjectionAffinitySelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedBasicOpSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedBasicStoreSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedGetAndTransformStoreSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedAtomicGetAndTransformStoreSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedBasicStoreMultiNodeSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedNearDisabledBasicStoreMultiNodeSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedEventSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedLockSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedNearDisabledLockSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedMultiNodeLockSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedMultiNodeSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedMultiThreadedPutGetSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedNodeFailureSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedExplicitLockNodeFailureSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedTxSingleThreadedSelfTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedEntryLockSelfTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedEvictionSelfTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedNestedTxTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedStorePutSelfTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedTxConcurrentGetTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedTxMultiNodeSelfTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedTxReadTest.class));
-        suite.addTest(new TestSuite(GridCacheColocatedTxSingleThreadedSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedTxTimeoutSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheFinishPartitionsSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtEntrySelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtMappingSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedTxMultiThreadedSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedNearDisabledTxMultiThreadedSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadOnheapSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadBigDataSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadPutGetSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadDisabledSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadMultiThreadedSelfTest.class));
-        suite.addTest(new TestSuite(CacheDhtLocalPartitionAfterRemoveSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheColocatedPreloadRestartSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheNearPreloadRestartSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadStartStopSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadUnloadSelfTest.class));
-        suite.addTest(new TestSuite(RendezvousAffinityFunctionBackupFilterSelfTest.class));
-        suite.addTest(new TestSuite(ClusterNodeAttributeAffinityBackupFilterSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedPreloadLifecycleSelfTest.class));
-        suite.addTest(new TestSuite(CacheLoadingConcurrentGridStartSelfTest.class));
-        suite.addTest(new TestSuite(CacheLoadingConcurrentGridStartSelfTestAllowOverwrite.class));
-        suite.addTest(new TestSuite(CacheTxLoadingConcurrentGridStartSelfTestAllowOverwrite.class));
-        suite.addTest(new TestSuite(GridCacheDhtPreloadDelayedSelfTest.class));
-        suite.addTest(new TestSuite(GridPartitionedBackupLoadSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionedLoadCacheSelfTest.class));
-        suite.addTest(new TestSuite(GridCachePartitionNotLoadedEventSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheDhtEvictionsDisabledSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheNearEvictionEventSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheAtomicNearEvictionEventSelfTest.class));
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedGetSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedBasicApiTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedBasicOpSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearMultiGetSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, NoneRebalanceModeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearOneNodeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearMultiNodeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicNearMultiNodeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearReadersSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearReaderPreloadSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicNearReadersSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedGetAndTransformStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedAtomicGetAndTransformStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedBasicStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridNearCacheStoreUpdateTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedBasicStoreMultiNodeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedNearDisabledBasicStoreMultiNodeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheConcurrentReadThroughTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedLockSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedNearDisabledLockSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedMultiNodeLockSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedMultiNodeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedMultiThreadedPutGetSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedNodeFailureSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedExplicitLockNodeFailureSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheLockReleaseNodeLeaveTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedEntryLockSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedNestedTxTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedTxConcurrentGetTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedTxMultiNodeSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedTxReadTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedTxSingleThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheColocatedTxSingleThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedTxTimeoutSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheFinishPartitionsSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedTxMultiThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedNearDisabledTxMultiThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheDhtEntrySelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheDhtMappingSelfTest.class, ignoredTests);
+
+        // Preload
+        GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadOnheapSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadBigDataSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadPutGetSelfTest.class,
ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadDisabledSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadMultiThreadedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheColocatedPreloadRestartSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearPreloadRestartSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadStartStopSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadUnloadSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedPreloadLifecycleSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheDhtPreloadDelayedSelfTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, CacheDhtLocalPartitionAfterRemoveSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheLoadingConcurrentGridStartSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheLoadingConcurrentGridStartSelfTestAllowOverwrite.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheTxLoadingConcurrentGridStartSelfTestAllowOverwrite.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridPartitionedBackupLoadSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheGetReadFromBackupFailoverTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedLoadCacheSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedEventSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionNotLoadedEventSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheDhtEvictionsDisabledSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearEvictionEventSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, 
GridCacheAtomicNearEvictionEventSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedEvictionSelfTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedTopologyChangeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedUnloadEventsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheColocatedOptimisticTransactionSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicMessageCountSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearPartitionedClearSelfTest.class, ignoredTests); - suite.addTest(new TestSuite(GridCachePartitionedTopologyChangeSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedUnloadEventsSelfTest.class)); - suite.addTest(new TestSuite(GridCacheColocatedOptimisticTransactionSelfTest.class)); - suite.addTestSuite(GridCacheAtomicMessageCountSelfTest.class); - suite.addTest(new TestSuite(GridCacheNearPartitionedClearSelfTest.class)); + GridTestUtils.addTestIfNeeded(suite, GridCacheOffheapUpdateSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearClientHitTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearPrimarySyncSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheColocatedPrimarySyncSelfTest.class, ignoredTests); - suite.addTest(new TestSuite(GridCacheOffheapUpdateSelfTest.class)); - suite.addTest(new TestSuite(GridCacheNearClientHitTest.class)); - suite.addTest(new TestSuite(GridCacheNearPrimarySyncSelfTest.class)); - suite.addTest(new TestSuite(GridCacheColocatedPrimarySyncSelfTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteCachePartitionMapUpdateTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheClientNodePartitionsExchangeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, 
IgniteCacheClientNodeChangingTopologyTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheServerNodeConcurrentStart.class, ignoredTests); - suite.addTest(new TestSuite(IgniteCachePartitionMapUpdateTest.class)); - suite.addTest(new TestSuite(IgniteCacheClientNodePartitionsExchangeTest.class)); - suite.addTest(new TestSuite(IgniteCacheClientNodeChangingTopologyTest.class)); - suite.addTest(new TestSuite(IgniteCacheServerNodeConcurrentStart.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheEntryProcessorNodeJoinTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteAtomicCacheEntryProcessorNodeJoinTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheNearTxForceKeyTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CrossCacheTxRandomOperationsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CrossCacheTxNearEnabledRandomOperationsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteDynamicCacheAndNodeStop.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, NearCacheSyncUpdateTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheEnumOperationsSingleNodeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheEnumOperationsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheIncrementTxTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCachePartitionedBackupNodeFailureRecoveryTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheVariableTopologySelfTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteCacheEntryProcessorNodeJoinTest.class)); - suite.addTest(new TestSuite(IgniteAtomicCacheEntryProcessorNodeJoinTest.class)); - suite.addTest(new TestSuite(GridCacheNearTxForceKeyTest.class)); - suite.addTest(new TestSuite(CrossCacheTxRandomOperationsTest.class)); - suite.addTest(new 
TestSuite(CrossCacheTxNearEnabledRandomOperationsTest.class)); - suite.addTest(new TestSuite(IgniteDynamicCacheAndNodeStop.class)); - suite.addTest(new TestSuite(CacheLockReleaseNodeLeaveTest.class)); - suite.addTest(new TestSuite(NearCacheSyncUpdateTest.class)); - suite.addTest(new TestSuite(CacheConfigurationLeakTest.class)); - suite.addTest(new TestSuite(MemoryPolicyConfigValidationTest.class)); - suite.addTest(new TestSuite(MemoryPolicyInitializationTest.class)); - suite.addTest(new TestSuite(CacheGroupLocalConfigurationSelfTest.class)); - suite.addTest(new TestSuite(CacheEnumOperationsSingleNodeTest.class)); - suite.addTest(new TestSuite(CacheEnumOperationsTest.class)); - suite.addTest(new TestSuite(IgniteCacheIncrementTxTest.class)); - suite.addTest(new TestSuite(IgniteCachePartitionedBackupNodeFailureRecoveryTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteNoCustomEventsOnNodeStart.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheExchangeMessageDuplicatedStateTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteNoCustomEventsOnNodeStart.class)); + //GridTestUtils.addTestIfNeeded(suite,NearCacheMultithreadedUpdateTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,NearCachePutAllMultinodeTest.class, ignoredTests); - suite.addTest(new TestSuite(CacheExchangeMessageDuplicatedStateTest.class)); - suite.addTest(new TestSuite(CacheConcurrentReadThroughTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteOnePhaseCommitInvokeTest.class, ignoredTests); - suite.addTest(new TestSuite(GridNearCacheStoreUpdateTest.class)); - //suite.addTest(new TestSuite(NearCacheMultithreadedUpdateTest.class)); - //suite.addTest(new TestSuite(NearCachePutAllMultinodeTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheNoSyncForGetTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,IgniteCacheContainsKeyNearSelfTest.class, ignoredTests); + 
//GridTestUtils.addTestIfNeeded(suite,IgniteCacheNearTxRollbackTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteOnePhaseCommitInvokeTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteOnePhaseCommitNearReadersTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteNearClientCacheCloseTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteClientCacheStartFailoverTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteCacheNoSyncForGetTest.class)); - //suite.addTest(new TestSuite(IgniteCacheContainsKeyNearSelfTest.class)); - //suite.addTest(new TestSuite(IgniteCacheNearTxRollbackTest.class)); + GridTestUtils.addTestIfNeeded(suite, CacheOptimisticTransactionsWithFilterSingleServerTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheOptimisticTransactionsWithFilterTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteOnePhaseCommitNearReadersTest.class)); - suite.addTest(new TestSuite(IgniteNearClientCacheCloseTest.class)); - suite.addTest(new TestSuite(IgniteClientCacheStartFailoverTest.class)); + GridTestUtils.addTestIfNeeded(suite, NonAffinityCoordinatorDynamicStartStopTest.class, ignoredTests); - suite.addTest(new TestSuite(CacheOptimisticTransactionsWithFilterSingleServerTest.class)); - suite.addTest(new TestSuite(CacheOptimisticTransactionsWithFilterTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheClearDuringRebalanceTest.class, ignoredTests); - suite.addTest(new TestSuite(NonAffinityCoordinatorDynamicStartStopTest.class)); + //GridTestUtils.addTestIfNeeded(suite,GridCacheColocatedDebugTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, GridCacheDhtAtomicEvictionNearReadersSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,GridCacheDhtEntrySetSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,GridCacheDhtEvictionNearReadersSelfTest.class, ignoredTests); + 
//GridTestUtils.addTestIfNeeded(suite,GridCacheDhtMultiBackupTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,GridCacheDhtPreloadMessageCountTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedNearDisabledMetricsSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,IgniteCacheContainsKeyColocatedSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,IgniteCrossCacheTxNearEnabledSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite,IgniteTxConsistencyColocatedRestartSelfTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteCacheClearDuringRebalanceTest.class)); + // Configuration validation + GridTestUtils.addTestIfNeeded(suite, CacheConfigurationLeakTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, MemoryPolicyConfigValidationTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, MemoryPolicyInitializationTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheGroupLocalConfigurationSelfTest.class, ignoredTests); - suite.addTest(new TestSuite(CachePartitionStateTest.class)); + // Affinity and collocation + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedAffinitySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedProjectionAffinitySelfTest.class, ignoredTests); - suite.addTest(new TestSuite(CacheComparatorTest.class)); + // Other tests. 
+        GridTestUtils.addTestIfNeeded(suite, GridCacheNearJobExecutionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheReplicatedJobExecutionTest.class, ignoredTests);
-        suite.addTest(new TestSuite(CachePartitionPartialCountersMapSelfTest.class));
+        //GridTestUtils.addTestIfNeeded(suite,RendezvousAffinityFunctionSelfTest.class), ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, RendezvousAffinityFunctionExcludeNeighborsSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, RendezvousAffinityFunctionFastPowerOfTwoHashSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, RendezvousAffinityFunctionStandardHashSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, GridCacheRendezvousAffinityClientSelfTest.class, ignoredTests);
-        suite.addTest(new TestSuite(IgniteReflectionFactorySelfTest.class));
+        GridTestUtils.addTestIfNeeded(suite, RendezvousAffinityFunctionBackupFilterSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, ClusterNodeAttributeAffinityBackupFilterSelfTest.class, ignoredTests);
-        //suite.addTest(new TestSuite(GridCacheColocatedDebugTest.class));
-        //suite.addTest(new TestSuite(GridCacheDhtAtomicEvictionNearReadersSelfTest.class));
-        //suite.addTest(new TestSuite(GridCacheDhtEntrySetSelfTest.class));
-        //suite.addTest(new TestSuite(GridCacheDhtEvictionNearReadersSelfTest.class));
-        //suite.addTest(new TestSuite(GridCacheDhtMultiBackupTest.class));
-        //suite.addTest(new TestSuite(GridCacheDhtPreloadMessageCountTest.class));
-        //suite.addTest(new TestSuite(GridCachePartitionedNearDisabledMetricsSelfTest.class));
-        //suite.addTest(new TestSuite(IgniteCacheContainsKeyColocatedSelfTest.class));
-        //suite.addTest(new TestSuite(IgniteCrossCacheTxNearEnabledSelfTest.class));
-        //suite.addTest(new TestSuite(IgniteTxConsistencyColocatedRestartSelfTest.class));
+        GridTestUtils.addTestIfNeeded(suite, CachePartitionStateTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CacheComparatorTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, CachePartitionPartialCountersMapSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite, IgniteReflectionFactorySelfTest.class, ignoredTests);
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite3.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite3.java
index 188a035646342..c97e638e4c847 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite3.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite3.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;
 
+import java.util.Collection;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.CacheStartupInDeploymentModesTest;
 import org.apache.ignite.internal.processors.cache.GridCacheAtomicEntryProcessorDeploymentSelfTest;
@@ -77,126 +78,132 @@
 import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxSingleThreadedSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxTimeoutSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheSyncReplicatedPreloadSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.GridReplicatedTxPreloadTest;
 import org.apache.ignite.internal.processors.cache.distributed.replicated.preloader.GridCacheReplicatedPreloadLifecycleSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.replicated.preloader.GridCacheReplicatedPreloadSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.replicated.preloader.GridCacheReplicatedPreloadStartStopEventsSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheDaemonNodeLocalSelfTest;
 import org.apache.ignite.internal.processors.cache.local.GridCacheLocalByteArrayValuesSelfTest;
-import org.apache.ignite.testframework.junits.GridAbstractTest;
+import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
 
 /**
  * Test suite.
  */
-public class IgniteCacheTestSuite3 extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheTestSuite3 {
     /**
      * @return IgniteCache test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
-        System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false");
+    public static TestSuite suite() {
+        return suite(null);
+    }
 
+    /**
+     * @param ignoredTests Ignored tests.
+     * @return IgniteCache test suite.
+     */
+    public static TestSuite suite(Collection ignoredTests) {
         TestSuite suite = new TestSuite("IgniteCache Test Suite part 3");
 
-        suite.addTestSuite(IgniteCacheGroupsTest.class);
+        GridTestUtils.addTestIfNeeded(suite,IgniteCacheGroupsTest.class, ignoredTests);
 
         // Value consistency tests.
-        suite.addTestSuite(GridCacheValueConsistencyAtomicSelfTest.class);
-        suite.addTestSuite(GridCacheValueConsistencyAtomicNearEnabledSelfTest.class);
-        suite.addTestSuite(GridCacheValueConsistencyTransactionalSelfTest.class);
-        suite.addTestSuite(GridCacheValueConsistencyTransactionalNearEnabledSelfTest.class);
-        suite.addTestSuite(GridCacheValueBytesPreloadingSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheValueConsistencyAtomicSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheValueConsistencyAtomicNearEnabledSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheValueConsistencyTransactionalSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheValueConsistencyTransactionalNearEnabledSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheValueBytesPreloadingSelfTest.class, ignoredTests);
 
         // Replicated cache.
-        suite.addTestSuite(GridCacheReplicatedBasicApiTest.class);
-        suite.addTestSuite(GridCacheReplicatedBasicOpSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedBasicStoreSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedGetAndTransformStoreSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedAtomicGetAndTransformStoreSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedEventSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedEventDisabledSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedSynchronousCommitTest.class);
-
-        suite.addTestSuite(GridCacheReplicatedLockSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedMultiNodeLockSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedMultiNodeSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedNodeFailureSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedTxSingleThreadedSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedTxTimeoutSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedPreloadSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedPreloadLifecycleSelfTest.class);
-        suite.addTestSuite(GridCacheSyncReplicatedPreloadSelfTest.class);
-
-        //suite.addTestSuite(GridCacheReplicatedEntrySetSelfTest.class);
-        //suite.addTestSuite(GridCacheReplicatedMarshallerTxTest.class);
-        //suite.addTestSuite(GridCacheReplicatedOnheapFullApiSelfTest.class);
-        //suite.addTestSuite(GridCacheReplicatedOnheapMultiNodeFullApiSelfTest.class);
-        //suite.addTestSuite(GridCacheReplicatedTxConcurrentGetTest.class);
-        //suite.addTestSuite(GridCacheReplicatedTxMultiNodeBasicTest.class);
-        //suite.addTestSuite(GridCacheReplicatedTxReadTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedBasicApiTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedBasicOpSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedBasicStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedGetAndTransformStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedAtomicGetAndTransformStoreSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedEventSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedEventDisabledSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedSynchronousCommitTest.class, ignoredTests);
+
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedLockSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedMultiNodeLockSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedMultiNodeSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedNodeFailureSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedTxSingleThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedTxTimeoutSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedPreloadSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedPreloadLifecycleSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheSyncReplicatedPreloadSelfTest.class, ignoredTests);
+
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedEntrySetSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedMarshallerTxTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedOnheapFullApiSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedOnheapMultiNodeFullApiSelfTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedTxConcurrentGetTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedTxMultiNodeBasicTest.class, ignoredTests);
+        //GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedTxReadTest.class, ignoredTests);
 
         // TODO GG-11141.
-//        suite.addTestSuite(GridCacheDeploymentSelfTest.class);
-//        suite.addTestSuite(GridCacheDeploymentOffHeapSelfTest.class);
-//        suite.addTestSuite(GridCacheDeploymentOffHeapValuesSelfTest.class);
-        suite.addTestSuite(CacheStartupInDeploymentModesTest.class);
-        suite.addTestSuite(GridCacheConditionalDeploymentSelfTest.class);
-        suite.addTestSuite(GridCacheAtomicEntryProcessorDeploymentSelfTest.class);
-        suite.addTestSuite(GridCacheTransactionalEntryProcessorDeploymentSelfTest.class);
-        suite.addTestSuite(IgniteCacheScanPredicateDeploymentSelfTest.class);
-
-        suite.addTestSuite(GridCachePutArrayValueSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedEvictionEventSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedTxMultiThreadedSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedPreloadEventsSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedPreloadStartStopEventsSelfTest.class);
-        suite.addTestSuite(GridReplicatedTxPreloadTest.class);
-
-        suite.addTestSuite(IgniteTxReentryNearSelfTest.class);
-        suite.addTestSuite(IgniteTxReentryColocatedSelfTest.class);
+//        GridTestUtils.addTestIfNeeded(suite,GridCacheDeploymentSelfTest.class, ignoredTests);
+//        GridTestUtils.addTestIfNeeded(suite,GridCacheDeploymentOffHeapSelfTest.class, ignoredTests);
+//        GridTestUtils.addTestIfNeeded(suite,GridCacheDeploymentOffHeapValuesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,CacheStartupInDeploymentModesTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheConditionalDeploymentSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheAtomicEntryProcessorDeploymentSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheTransactionalEntryProcessorDeploymentSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,IgniteCacheScanPredicateDeploymentSelfTest.class, ignoredTests);
+
+        GridTestUtils.addTestIfNeeded(suite,GridCachePutArrayValueSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedEvictionEventSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedTxMultiThreadedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedPreloadEventsSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedPreloadStartStopEventsSelfTest.class, ignoredTests);
+
+        GridTestUtils.addTestIfNeeded(suite,IgniteTxReentryNearSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,IgniteTxReentryColocatedSelfTest.class, ignoredTests);
 
         // Test for byte array value special case.
-        suite.addTestSuite(GridCacheLocalByteArrayValuesSelfTest.class);
-        suite.addTestSuite(GridCacheNearPartitionedP2PEnabledByteArrayValuesSelfTest.class);
-        suite.addTestSuite(GridCacheNearPartitionedP2PDisabledByteArrayValuesSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedOnlyP2PEnabledByteArrayValuesSelfTest.class);
-        suite.addTestSuite(GridCachePartitionedOnlyP2PDisabledByteArrayValuesSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedP2PEnabledByteArrayValuesSelfTest.class);
-        suite.addTestSuite(GridCacheReplicatedP2PDisabledByteArrayValuesSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheLocalByteArrayValuesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheNearPartitionedP2PEnabledByteArrayValuesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheNearPartitionedP2PDisabledByteArrayValuesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedOnlyP2PEnabledByteArrayValuesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedOnlyP2PDisabledByteArrayValuesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedP2PEnabledByteArrayValuesSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReplicatedP2PDisabledByteArrayValuesSelfTest.class, ignoredTests);
 
         // Near-only cache.
-        suite.addTest(IgniteCacheNearOnlySelfTestSuite.suite());
+        suite.addTest(IgniteCacheNearOnlySelfTestSuite.suite(ignoredTests));
 
         // Test cache with daemon nodes.
-        suite.addTestSuite(GridCacheDaemonNodeLocalSelfTest.class);
-        suite.addTestSuite(GridCacheDaemonNodePartitionedSelfTest.class);
-        suite.addTestSuite(GridCacheDaemonNodeReplicatedSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheDaemonNodeLocalSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheDaemonNodePartitionedSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheDaemonNodeReplicatedSelfTest.class, ignoredTests);
 
         // Write-behind.
-        suite.addTest(IgniteCacheWriteBehindTestSuite.suite());
+        suite.addTest(IgniteCacheWriteBehindTestSuite.suite(ignoredTests));
 
         // Transform.
-        suite.addTestSuite(GridCachePartitionedTransformWriteThroughBatchUpdateSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCachePartitionedTransformWriteThroughBatchUpdateSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(GridCacheEntryVersionSelfTest.class);
-        suite.addTestSuite(GridCacheVersionSelfTest.class);
-        suite.addTestSuite(GridCacheVersionTopologyChangeTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheEntryVersionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheVersionSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheVersionTopologyChangeTest.class, ignoredTests);
 
         // Memory leak tests.
-        suite.addTestSuite(GridCacheReferenceCleanupSelfTest.class);
-        suite.addTestSuite(GridCacheReloadSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReferenceCleanupSelfTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheReloadSelfTest.class, ignoredTests);
 
-        suite.addTestSuite(GridCacheMixedModeSelfTest.class);
+        GridTestUtils.addTestIfNeeded(suite,GridCacheMixedModeSelfTest.class, ignoredTests);
 
         // Cache interceptor tests.
-        suite.addTest(IgniteCacheInterceptorSelfTestSuite.suite());
+        suite.addTest(IgniteCacheInterceptorSelfTestSuite.suite(ignoredTests));
 
-        suite.addTestSuite(IgniteTxGetAfterStopTest.class);
+        GridTestUtils.addTestIfNeeded(suite,IgniteTxGetAfterStopTest.class, ignoredTests);
 
-        suite.addTestSuite(CacheAsyncOperationsTest.class);
+        GridTestUtils.addTestIfNeeded(suite,CacheAsyncOperationsTest.class, ignoredTests);
 
-        suite.addTestSuite(IgniteTxRemoveTimeoutObjectsTest.class);
-        suite.addTestSuite(IgniteTxRemoveTimeoutObjectsNearTest.class);
+        GridTestUtils.addTestIfNeeded(suite,IgniteTxRemoveTimeoutObjectsTest.class, ignoredTests);
+        GridTestUtils.addTestIfNeeded(suite,IgniteTxRemoveTimeoutObjectsNearTest.class, ignoredTests);
 
         return suite;
     }
diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite4.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite4.java
index 370fa4939250f..06a84bbc5627d 100644
--- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite4.java
+++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite4.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;
 
+import java.util.HashSet;
 import junit.framework.TestSuite;
 import org.apache.ignite.cache.store.CacheStoreListenerRWThroughDisabledAtomicCacheTest;
 import org.apache.ignite.cache.store.CacheStoreListenerRWThroughDisabledTransactionalCacheTest;
@@ -26,12 +27,13 @@
 import org.apache.ignite.internal.processors.GridCacheTxLoadFromStoreOnLockSelfTest;
 import org.apache.ignite.internal.processors.cache.CacheClientStoreSelfTest;
 import org.apache.ignite.internal.processors.cache.CacheConnectionLeakStoreTxTest;
-import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticReadCommittedSeltTest;
-import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticRepeatableReadSeltTest;
-import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticSerializableSeltTest;
-import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticReadCommittedSeltTest;
-import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticRepeatableReadSeltTest;
-import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticSerializableSeltTest;
+import org.apache.ignite.internal.processors.cache.CacheEventWithTxLabelTest;
+import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticReadCommittedSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticRepeatableReadSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheGetEntryOptimisticSerializableSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticReadCommittedSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticRepeatableReadSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheGetEntryPessimisticSerializableSelfTest;
 import org.apache.ignite.internal.processors.cache.CacheOffheapMapEntrySelfTest;
 import org.apache.ignite.internal.processors.cache.CacheReadThroughAtomicRestartSelfTest;
 import org.apache.ignite.internal.processors.cache.CacheReadThroughLocalAtomicRestartSelfTest;
@@ -102,6 +104,7 @@
 import org.apache.ignite.internal.processors.cache.distributed.CacheGetFutureHangsSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.CacheGroupsPreloadTest;
 import org.apache.ignite.internal.processors.cache.distributed.CacheNoValueClassOnServerNodeTest;
+import org.apache.ignite.internal.processors.cache.distributed.CacheResultIsNotNullOnPartitionLossTest;
 import org.apache.ignite.internal.processors.cache.distributed.CacheStartOnJoinTest;
 import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheCreatePutMultiNodeSelfTest;
 import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheCreatePutTest;
@@ -152,193 +155,206 @@
 import org.apache.ignite.internal.processors.cache.version.CacheVersionedEntryPartitionedTransactionalSelfTest;
 import org.apache.ignite.internal.processors.cache.version.CacheVersionedEntryReplicatedAtomicSelfTest;
 import org.apache.ignite.internal.processors.cache.version.CacheVersionedEntryReplicatedTransactionalSelfTest;
-import org.apache.ignite.testframework.junits.GridAbstractTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
+
+import static org.apache.ignite.testframework.GridTestUtils.addTestIfNeeded;
 
 /**
  * Test suite.
  */
-public class IgniteCacheTestSuite4 extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheTestSuite4 {
    /**
     * @return IgniteCache test suite.
-     * @throws Exception Thrown in case of the failure.
    */
-    public static TestSuite suite() throws Exception {
-        System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false");
+    public static TestSuite suite() {
+        return suite(null);
+    }
 
+    /**
+     * @param ignoredTests Ignored tests.
+     * @return IgniteCache test suite.
+     */
+    public static TestSuite suite(HashSet ignoredTests) {
        TestSuite suite = new TestSuite("IgniteCache Test Suite part 4");
 
        // Multi node update.
- suite.addTestSuite(GridCacheMultinodeUpdateSelfTest.class); - suite.addTestSuite(GridCacheMultinodeUpdateNearEnabledSelfTest.class); - suite.addTestSuite(GridCacheMultinodeUpdateNearEnabledNoBackupsSelfTest.class); - suite.addTestSuite(GridCacheMultinodeUpdateAtomicSelfTest.class); - suite.addTestSuite(GridCacheMultinodeUpdateAtomicNearEnabledSelfTest.class); - - suite.addTestSuite(IgniteCacheAtomicLoadAllTest.class); - suite.addTestSuite(IgniteCacheAtomicLocalLoadAllTest.class); - suite.addTestSuite(IgniteCacheTxLoadAllTest.class); - suite.addTestSuite(IgniteCacheTxLocalLoadAllTest.class); - - suite.addTestSuite(IgniteCacheAtomicLoaderWriterTest.class); - suite.addTestSuite(IgniteCacheTxLoaderWriterTest.class); - - suite.addTestSuite(IgniteCacheAtomicStoreSessionTest.class); - suite.addTestSuite(IgniteCacheTxStoreSessionTest.class); - suite.addTestSuite(IgniteCacheAtomicStoreSessionWriteBehindTest.class); - suite.addTestSuite(IgniteCacheTxStoreSessionWriteBehindTest.class); - suite.addTestSuite(IgniteCacheTxStoreSessionWriteBehindCoalescingTest.class); - - suite.addTestSuite(IgniteCacheAtomicNoReadThroughTest.class); - suite.addTestSuite(IgniteCacheAtomicNearEnabledNoReadThroughTest.class); - suite.addTestSuite(IgniteCacheAtomicLocalNoReadThroughTest.class); - suite.addTestSuite(IgniteCacheTxNoReadThroughTest.class); - suite.addTestSuite(IgniteCacheTxNearEnabledNoReadThroughTest.class); - suite.addTestSuite(IgniteCacheTxLocalNoReadThroughTest.class); - - suite.addTestSuite(IgniteCacheAtomicNoLoadPreviousValueTest.class); - suite.addTestSuite(IgniteCacheAtomicNearEnabledNoLoadPreviousValueTest.class); - suite.addTestSuite(IgniteCacheAtomicLocalNoLoadPreviousValueTest.class); - suite.addTestSuite(IgniteCacheTxNoLoadPreviousValueTest.class); - suite.addTestSuite(IgniteCacheTxNearEnabledNoLoadPreviousValueTest.class); - suite.addTestSuite(IgniteCacheTxLocalNoLoadPreviousValueTest.class); - - suite.addTestSuite(IgniteCacheAtomicNoWriteThroughTest.class); - 
suite.addTestSuite(IgniteCacheAtomicNearEnabledNoWriteThroughTest.class); - suite.addTestSuite(IgniteCacheAtomicLocalNoWriteThroughTest.class); - suite.addTestSuite(IgniteCacheTxNoWriteThroughTest.class); - suite.addTestSuite(IgniteCacheTxNearEnabledNoWriteThroughTest.class); - suite.addTestSuite(IgniteCacheTxLocalNoWriteThroughTest.class); - - suite.addTestSuite(IgniteCacheAtomicPeekModesTest.class); - suite.addTestSuite(IgniteCacheAtomicNearPeekModesTest.class); - suite.addTestSuite(IgniteCacheAtomicReplicatedPeekModesTest.class); - suite.addTestSuite(IgniteCacheAtomicLocalPeekModesTest.class); - suite.addTestSuite(IgniteCacheTxPeekModesTest.class); - suite.addTestSuite(IgniteCacheTxNearPeekModesTest.class); - suite.addTestSuite(IgniteCacheTxLocalPeekModesTest.class); - suite.addTestSuite(IgniteCacheTxReplicatedPeekModesTest.class); - - suite.addTestSuite(IgniteCacheInvokeReadThroughSingleNodeTest.class); - suite.addTestSuite(IgniteCacheInvokeReadThroughTest.class); - suite.addTestSuite(IgniteCacheReadThroughStoreCallTest.class); - suite.addTestSuite(GridCacheVersionMultinodeTest.class); - - suite.addTestSuite(IgniteCacheNearReadCommittedTest.class); - suite.addTestSuite(IgniteCacheAtomicCopyOnReadDisabledTest.class); - suite.addTestSuite(IgniteCacheTxCopyOnReadDisabledTest.class); - - suite.addTestSuite(IgniteCacheTxPreloadNoWriteTest.class); - - suite.addTestSuite(IgniteDynamicCacheStartSelfTest.class); - suite.addTestSuite(IgniteDynamicCacheMultinodeTest.class); - suite.addTestSuite(IgniteDynamicCacheStartFailTest.class); - suite.addTestSuite(IgniteDynamicCacheStartCoordinatorFailoverTest.class); - suite.addTestSuite(IgniteDynamicCacheWithConfigStartSelfTest.class); - suite.addTestSuite(IgniteCacheDynamicStopSelfTest.class); - suite.addTestSuite(IgniteDynamicCacheStartStopConcurrentTest.class); - suite.addTestSuite(IgniteCacheConfigurationTemplateTest.class); - suite.addTestSuite(IgniteCacheConfigurationDefaultTemplateTest.class); - 
suite.addTestSuite(IgniteDynamicClientCacheStartSelfTest.class); - suite.addTestSuite(IgniteDynamicCacheStartNoExchangeTimeoutTest.class); - suite.addTestSuite(CacheAffinityEarlyTest.class); - suite.addTestSuite(IgniteCacheCreatePutMultiNodeSelfTest.class); - suite.addTestSuite(IgniteCacheCreatePutTest.class); - suite.addTestSuite(CacheStartOnJoinTest.class); - suite.addTestSuite(IgniteCacheStartTest.class); - suite.addTestSuite(CacheDiscoveryDataConcurrentJoinTest.class); - suite.addTestSuite(IgniteClientCacheInitializationFailTest.class); - suite.addTestSuite(IgniteCacheFailedUpdateResponseTest.class); - - suite.addTestSuite(GridCacheTxLoadFromStoreOnLockSelfTest.class); - - suite.addTestSuite(GridCacheMarshallingNodeJoinSelfTest.class); - - suite.addTestSuite(IgniteCacheJdbcBlobStoreNodeRestartTest.class); - - suite.addTestSuite(IgniteCacheAtomicLocalStoreValueTest.class); - suite.addTestSuite(IgniteCacheAtomicStoreValueTest.class); - suite.addTestSuite(IgniteCacheAtomicNearEnabledStoreValueTest.class); - suite.addTestSuite(IgniteCacheTxLocalStoreValueTest.class); - suite.addTestSuite(IgniteCacheTxStoreValueTest.class); - suite.addTestSuite(IgniteCacheTxNearEnabledStoreValueTest.class); - - suite.addTestSuite(IgniteCacheLockFailoverSelfTest.class); - suite.addTestSuite(IgniteCacheMultiTxLockSelfTest.class); - - suite.addTestSuite(IgniteInternalCacheTypesTest.class); - - suite.addTestSuite(IgniteExchangeFutureHistoryTest.class); - - suite.addTestSuite(CacheNoValueClassOnServerNodeTest.class); - suite.addTestSuite(IgniteSystemCacheOnClientTest.class); - - suite.addTestSuite(CacheRemoveAllSelfTest.class); - suite.addTestSuite(CacheGetEntryOptimisticReadCommittedSeltTest.class); - suite.addTestSuite(CacheGetEntryOptimisticRepeatableReadSeltTest.class); - suite.addTestSuite(CacheGetEntryOptimisticSerializableSeltTest.class); - suite.addTestSuite(CacheGetEntryPessimisticReadCommittedSeltTest.class); - 
suite.addTestSuite(CacheGetEntryPessimisticRepeatableReadSeltTest.class); - suite.addTestSuite(CacheGetEntryPessimisticSerializableSeltTest.class); - suite.addTestSuite(CacheTxNotAllowReadFromBackupTest.class); - - suite.addTestSuite(CacheStopAndDestroySelfTest.class); - - suite.addTestSuite(CacheOffheapMapEntrySelfTest.class); - - suite.addTestSuite(CacheJdbcStoreSessionListenerSelfTest.class); - suite.addTestSuite(CacheStoreSessionListenerLifecycleSelfTest.class); - suite.addTestSuite(CacheStoreListenerRWThroughDisabledAtomicCacheTest.class); - suite.addTestSuite(CacheStoreListenerRWThroughDisabledTransactionalCacheTest.class); - suite.addTestSuite(CacheStoreSessionListenerWriteBehindEnabledTest.class); - - suite.addTestSuite(CacheClientStoreSelfTest.class); - suite.addTestSuite(CacheStoreUsageMultinodeStaticStartAtomicTest.class); - suite.addTestSuite(CacheStoreUsageMultinodeStaticStartTxTest.class); - suite.addTestSuite(CacheStoreUsageMultinodeDynamicStartAtomicTest.class); - suite.addTestSuite(CacheStoreUsageMultinodeDynamicStartTxTest.class); - suite.addTestSuite(CacheConnectionLeakStoreTxTest.class); - - suite.addTestSuite(GridCacheStoreManagerDeserializationTest.class); - suite.addTestSuite(GridLocalCacheStoreManagerDeserializationTest.class); - - suite.addTestSuite(IgniteStartCacheInTransactionSelfTest.class); - suite.addTestSuite(IgniteStartCacheInTransactionAtomicSelfTest.class); - - suite.addTestSuite(CacheReadThroughRestartSelfTest.class); - suite.addTestSuite(CacheReadThroughReplicatedRestartSelfTest.class); - suite.addTestSuite(CacheReadThroughReplicatedAtomicRestartSelfTest.class); - suite.addTestSuite(CacheReadThroughLocalRestartSelfTest.class); - suite.addTestSuite(CacheReadThroughLocalAtomicRestartSelfTest.class); - suite.addTestSuite(CacheReadThroughAtomicRestartSelfTest.class); + addTestIfNeeded(suite, GridCacheMultinodeUpdateSelfTest.class, ignoredTests); + addTestIfNeeded(suite, GridCacheMultinodeUpdateNearEnabledSelfTest.class, 
ignoredTests); + addTestIfNeeded(suite, GridCacheMultinodeUpdateNearEnabledNoBackupsSelfTest.class, ignoredTests); + addTestIfNeeded(suite, GridCacheMultinodeUpdateAtomicSelfTest.class, ignoredTests); + addTestIfNeeded(suite, GridCacheMultinodeUpdateAtomicNearEnabledSelfTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheAtomicLoadAllTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicLocalLoadAllTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLoadAllTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLocalLoadAllTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheAtomicLoaderWriterTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLoaderWriterTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheAtomicStoreSessionTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxStoreSessionTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicStoreSessionWriteBehindTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxStoreSessionWriteBehindTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxStoreSessionWriteBehindCoalescingTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheAtomicNoReadThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicNearEnabledNoReadThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicLocalNoReadThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxNoReadThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxNearEnabledNoReadThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLocalNoReadThroughTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheAtomicNoLoadPreviousValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicNearEnabledNoLoadPreviousValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicLocalNoLoadPreviousValueTest.class, ignoredTests); 
+ addTestIfNeeded(suite, IgniteCacheTxNoLoadPreviousValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxNearEnabledNoLoadPreviousValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLocalNoLoadPreviousValueTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheAtomicNoWriteThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicNearEnabledNoWriteThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicLocalNoWriteThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxNoWriteThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxNearEnabledNoWriteThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLocalNoWriteThroughTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheAtomicPeekModesTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicNearPeekModesTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicReplicatedPeekModesTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicLocalPeekModesTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxPeekModesTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxNearPeekModesTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLocalPeekModesTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxReplicatedPeekModesTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheInvokeReadThroughSingleNodeTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheInvokeReadThroughTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheReadThroughStoreCallTest.class, ignoredTests); + addTestIfNeeded(suite, GridCacheVersionMultinodeTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheNearReadCommittedTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicCopyOnReadDisabledTest.class, ignoredTests); + addTestIfNeeded(suite, 
IgniteCacheTxCopyOnReadDisabledTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheTxPreloadNoWriteTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteDynamicCacheStartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteDynamicCacheMultinodeTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteDynamicCacheStartFailTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteDynamicCacheStartCoordinatorFailoverTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteDynamicCacheWithConfigStartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheDynamicStopSelfTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteDynamicCacheStartStopConcurrentTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheConfigurationTemplateTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheConfigurationDefaultTemplateTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteDynamicClientCacheStartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteDynamicCacheStartNoExchangeTimeoutTest.class, ignoredTests); + addTestIfNeeded(suite, CacheAffinityEarlyTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheCreatePutMultiNodeSelfTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheCreatePutTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStartOnJoinTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheStartTest.class, ignoredTests); + addTestIfNeeded(suite, CacheDiscoveryDataConcurrentJoinTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteClientCacheInitializationFailTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheFailedUpdateResponseTest.class, ignoredTests); + + addTestIfNeeded(suite, GridCacheTxLoadFromStoreOnLockSelfTest.class, ignoredTests); + + addTestIfNeeded(suite, GridCacheMarshallingNodeJoinSelfTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheJdbcBlobStoreNodeRestartTest.class, ignoredTests); + + addTestIfNeeded(suite, 
IgniteCacheAtomicLocalStoreValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicStoreValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheAtomicNearEnabledStoreValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxLocalStoreValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxStoreValueTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheTxNearEnabledStoreValueTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteCacheLockFailoverSelfTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheMultiTxLockSelfTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteInternalCacheTypesTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteExchangeFutureHistoryTest.class, ignoredTests); + + addTestIfNeeded(suite, CacheNoValueClassOnServerNodeTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteSystemCacheOnClientTest.class, ignoredTests); + + addTestIfNeeded(suite, CacheRemoveAllSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheGetEntryOptimisticReadCommittedSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheGetEntryOptimisticRepeatableReadSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheGetEntryOptimisticSerializableSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheGetEntryPessimisticReadCommittedSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheGetEntryPessimisticRepeatableReadSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheGetEntryPessimisticSerializableSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheTxNotAllowReadFromBackupTest.class, ignoredTests); + + addTestIfNeeded(suite, CacheStopAndDestroySelfTest.class, ignoredTests); + + addTestIfNeeded(suite, CacheOffheapMapEntrySelfTest.class, ignoredTests); + + addTestIfNeeded(suite, CacheJdbcStoreSessionListenerSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStoreSessionListenerLifecycleSelfTest.class, ignoredTests); + 
addTestIfNeeded(suite, CacheStoreListenerRWThroughDisabledAtomicCacheTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStoreListenerRWThroughDisabledTransactionalCacheTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStoreSessionListenerWriteBehindEnabledTest.class, ignoredTests); + + addTestIfNeeded(suite, CacheClientStoreSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStoreUsageMultinodeStaticStartAtomicTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStoreUsageMultinodeStaticStartTxTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStoreUsageMultinodeDynamicStartAtomicTest.class, ignoredTests); + addTestIfNeeded(suite, CacheStoreUsageMultinodeDynamicStartTxTest.class, ignoredTests); + addTestIfNeeded(suite, CacheConnectionLeakStoreTxTest.class, ignoredTests); + + addTestIfNeeded(suite, GridCacheStoreManagerDeserializationTest.class, ignoredTests); + addTestIfNeeded(suite, GridLocalCacheStoreManagerDeserializationTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteStartCacheInTransactionSelfTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteStartCacheInTransactionAtomicSelfTest.class, ignoredTests); + + addTestIfNeeded(suite, CacheReadThroughRestartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheReadThroughReplicatedRestartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheReadThroughReplicatedAtomicRestartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheReadThroughLocalRestartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheReadThroughLocalAtomicRestartSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheReadThroughAtomicRestartSelfTest.class, ignoredTests); // Versioned entry tests - suite.addTestSuite(CacheVersionedEntryLocalAtomicSwapDisabledSelfTest.class); - suite.addTestSuite(CacheVersionedEntryLocalTransactionalSelfTest.class); - suite.addTestSuite(CacheVersionedEntryPartitionedAtomicSelfTest.class); - 
suite.addTestSuite(CacheVersionedEntryPartitionedTransactionalSelfTest.class); - suite.addTestSuite(CacheVersionedEntryReplicatedAtomicSelfTest.class); - suite.addTestSuite(CacheVersionedEntryReplicatedTransactionalSelfTest.class); + addTestIfNeeded(suite, CacheVersionedEntryLocalAtomicSwapDisabledSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheVersionedEntryLocalTransactionalSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheVersionedEntryPartitionedAtomicSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheVersionedEntryPartitionedTransactionalSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheVersionedEntryReplicatedAtomicSelfTest.class, ignoredTests); + addTestIfNeeded(suite, CacheVersionedEntryReplicatedTransactionalSelfTest.class, ignoredTests); + + addTestIfNeeded(suite, GridCacheDhtTxPreloadSelfTest.class, ignoredTests); + addTestIfNeeded(suite, GridCacheNearTxPreloadSelfTest.class, ignoredTests); + addTestIfNeeded(suite, GridReplicatedTxPreloadTest.class, ignoredTests); + addTestIfNeeded(suite, CacheGroupsPreloadTest.class, ignoredTests); + + addTestIfNeeded(suite, IgniteDynamicCacheFilterTest.class, ignoredTests); - suite.addTestSuite(GridCacheDhtTxPreloadSelfTest.class); - suite.addTestSuite(GridCacheNearTxPreloadSelfTest.class); - suite.addTestSuite(GridReplicatedTxPreloadTest.class); - suite.addTestSuite(CacheGroupsPreloadTest.class); + addTestIfNeeded(suite, CrossCacheLockTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCrossCacheTxSelfTest.class, ignoredTests); - suite.addTestSuite(IgniteDynamicCacheFilterTest.class); + addTestIfNeeded(suite, CacheGetFutureHangsSelfTest.class, ignoredTests); - suite.addTestSuite(CrossCacheLockTest.class); - suite.addTestSuite(IgniteCrossCacheTxSelfTest.class); + addTestIfNeeded(suite, IgniteCacheSingleGetMessageTest.class, ignoredTests); + addTestIfNeeded(suite, IgniteCacheReadFromBackupTest.class, ignoredTests); - 
suite.addTestSuite(CacheGetFutureHangsSelfTest.class); + addTestIfNeeded(suite, MarshallerCacheJobRunNodeRestartTest.class, ignoredTests); - suite.addTestSuite(IgniteCacheSingleGetMessageTest.class); - suite.addTestSuite(IgniteCacheReadFromBackupTest.class); + addTestIfNeeded(suite, IgniteCacheNearOnlyTxTest.class, ignoredTests); - suite.addTestSuite(MarshallerCacheJobRunNodeRestartTest.class); + addTestIfNeeded(suite, IgniteCacheContainsKeyAtomicTest.class, ignoredTests); - suite.addTestSuite(IgniteCacheNearOnlyTxTest.class); + addTestIfNeeded(suite, CacheResultIsNotNullOnPartitionLossTest.class, ignoredTests); - suite.addTestSuite(IgniteCacheContainsKeyAtomicTest.class); + addTestIfNeeded(suite, CacheEventWithTxLabelTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite5.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite5.java index dafc44f306081..16b84850a784c 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite5.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite5.java @@ -17,6 +17,8 @@ package org.apache.ignite.testsuites; +import java.util.HashSet; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.GridCacheAffinityBackupsSelfTest; import org.apache.ignite.IgniteCacheAffinitySelfTest; @@ -29,6 +31,7 @@ import org.apache.ignite.internal.processors.cache.CacheNearReaderUpdateTest; import org.apache.ignite.internal.processors.cache.CacheRebalancingSelfTest; import org.apache.ignite.internal.processors.cache.CacheSerializableTransactionsTest; +import org.apache.ignite.internal.processors.cache.ClusterReadOnlyModeTest; import org.apache.ignite.internal.processors.cache.ClusterStatePartitionedSelfTest; import org.apache.ignite.internal.processors.cache.ClusterStateReplicatedSelfTest; import 
org.apache.ignite.internal.processors.cache.ConcurrentCacheStartTest; @@ -43,74 +46,81 @@ import org.apache.ignite.internal.processors.cache.distributed.IgniteCachePartitionLossPolicySelfTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheTxIteratorSelfTest; import org.apache.ignite.internal.processors.cache.distributed.dht.NotMappedPartitionInTxTest; -import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridCacheAtomicPreloadSelfTest; import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest; -import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheContainsKeyColocatedAtomicSelfTest; -import org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheContainsKeyNearAtomicSelfTest; import org.apache.ignite.internal.processors.cache.distributed.rebalancing.CacheManualRebalancingTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheSyncRebalanceModeSelfTest; import org.apache.ignite.internal.processors.cache.store.IgniteCacheWriteBehindNoUpdateSelfTest; -import org.apache.ignite.testframework.junits.GridAbstractTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite. */ -public class IgniteCacheTestSuite5 extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheTestSuite5 { /** * @return IgniteCache test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { - System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false"); + public static TestSuite suite() { + return suite(null); + } + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. 
+ */ + public static TestSuite suite(HashSet ignoredTests) { TestSuite suite = new TestSuite("IgniteCache Test Suite part 5"); - suite.addTestSuite(CacheSerializableTransactionsTest.class); - suite.addTestSuite(CacheNearReaderUpdateTest.class); - suite.addTestSuite(IgniteCacheStoreCollectionTest.class); - suite.addTestSuite(IgniteCacheWriteBehindNoUpdateSelfTest.class); - suite.addTestSuite(IgniteCachePutStackOverflowSelfTest.class); - suite.addTestSuite(CacheKeepBinaryTransactionTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheSerializableTransactionsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheNearReaderUpdateTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheStoreCollectionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheWriteBehindNoUpdateSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCachePutStackOverflowSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheKeepBinaryTransactionTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, CacheLateAffinityAssignmentTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheLateAffinityAssignmentNodeJoinValidationTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, EntryVersionConsistencyReadThroughTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheSyncRebalanceModeSelfTest.class, ignoredTests); - suite.addTestSuite(CacheLateAffinityAssignmentTest.class); - suite.addTestSuite(CacheLateAffinityAssignmentNodeJoinValidationTest.class); - suite.addTestSuite(EntryVersionConsistencyReadThroughTest.class); - suite.addTestSuite(IgniteCacheSyncRebalanceModeSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheReadThroughEvictionsVariationsSuite.class)); - suite.addTest(IgniteCacheReadThroughEvictionsVariationsSuite.suite()); - suite.addTestSuite(IgniteCacheTxIteratorSelfTest.class); + 
GridTestUtils.addTestIfNeeded(suite, IgniteCacheTxIteratorSelfTest.class, ignoredTests); - suite.addTestSuite(ClusterStatePartitionedSelfTest.class); - suite.addTestSuite(ClusterStateReplicatedSelfTest.class); - suite.addTestSuite(IgniteCachePartitionLossPolicySelfTest.class); - suite.addTestSuite(IgniteCacheGroupsPartitionLossPolicySelfTest.class); + GridTestUtils.addTestIfNeeded(suite, ClusterStatePartitionedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, ClusterStateReplicatedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, ClusterReadOnlyModeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCachePartitionLossPolicySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheGroupsPartitionLossPolicySelfTest.class, ignoredTests); - suite.addTestSuite(CacheRebalancingSelfTest.class); - suite.addTestSuite(CacheManualRebalancingTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheRebalancingSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheManualRebalancingTest.class, ignoredTests); // Affinity tests.
- suite.addTestSuite(GridCacheAffinityBackupsSelfTest.class); - suite.addTestSuite(IgniteCacheAffinitySelfTest.class); - suite.addTestSuite(AffinityClientNodeSelfTest.class); - suite.addTestSuite(LocalAffinityFunctionTest.class); - suite.addTestSuite(AffinityHistoryCleanupTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheAffinityBackupsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheAffinitySelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, AffinityClientNodeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, LocalAffinityFunctionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, AffinityHistoryCleanupTest.class, ignoredTests); - suite.addTestSuite(AffinityDistributionLoggingTest.class); + GridTestUtils.addTestIfNeeded(suite, AffinityDistributionLoggingTest.class, ignoredTests); - suite.addTestSuite(IgniteCacheAtomicProtocolTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheAtomicProtocolTest.class, ignoredTests); - suite.addTestSuite(PartitionsExchangeOnDiscoveryHistoryOverflowTest.class); + GridTestUtils.addTestIfNeeded(suite, PartitionsExchangeOnDiscoveryHistoryOverflowTest.class, ignoredTests); - suite.addTestSuite(GridCachePartitionExchangeManagerHistSizeTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionExchangeManagerHistSizeTest.class, ignoredTests); - suite.addTestSuite(NotMappedPartitionInTxTest.class); + GridTestUtils.addTestIfNeeded(suite, NotMappedPartitionInTxTest.class, ignoredTests); - suite.addTestSuite(ConcurrentCacheStartTest.class); + GridTestUtils.addTestIfNeeded(suite, ConcurrentCacheStartTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, GridCacheAtomicPreloadSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, IgniteCacheContainsKeyColocatedAtomicSelfTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, IgniteCacheContainsKeyNearAtomicSelfTest.class, ignoredTests); - 
//suite.addTestSuite(GridCacheAtomicPreloadSelfTest.class); - //suite.addTestSuite(IgniteCacheContainsKeyColocatedAtomicSelfTest.class); - //suite.addTestSuite(IgniteCacheContainsKeyNearAtomicSelfTest.class); return suite; } } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite6.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite6.java index 1269d0d1bbb40..0981bd244b157 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite6.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite6.java @@ -17,7 +17,9 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; +import org.apache.ignite.internal.processors.cache.CacheNoAffinityExchangeTest; import org.apache.ignite.internal.processors.cache.PartitionedAtomicCacheGetsDistributionTest; import org.apache.ignite.internal.processors.cache.PartitionedTransactionalOptimisticCacheGetsDistributionTest; import org.apache.ignite.internal.processors.cache.PartitionedTransactionalPessimisticCacheGetsDistributionTest; @@ -27,18 +29,17 @@ import org.apache.ignite.internal.processors.cache.ReplicatedTransactionalPessimisticCacheGetsDistributionTest; import org.apache.ignite.internal.processors.cache.datastructures.IgniteExchangeLatchManagerCoordinatorFailTest; import org.apache.ignite.internal.processors.cache.distributed.CacheExchangeMergeTest; -import org.apache.ignite.internal.processors.cache.distributed.CachePartitionStateTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheParallelStartTest; import org.apache.ignite.internal.processors.cache.distributed.CacheTryLockMultithreadedTest; import org.apache.ignite.internal.processors.cache.distributed.GridCachePartitionEvictionDuringReadThroughSelfTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteCache150ClientsTest; import 
org.apache.ignite.internal.processors.cache.distributed.IgniteCacheThreadLocalTxTest; -import org.apache.ignite.internal.processors.cache.distributed.IgniteOptimisticTxSuspendResumeMultiServerTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteOptimisticTxSuspendResumeTest; import org.apache.ignite.internal.processors.cache.distributed.IgnitePessimisticTxSuspendResumeTest; -import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingOrderingTest; import org.apache.ignite.internal.processors.cache.transactions.TxLabelTest; import org.apache.ignite.internal.processors.cache.transactions.TxMultiCacheAsyncOpsTest; import org.apache.ignite.internal.processors.cache.transactions.TxOnCachesStartTest; +import org.apache.ignite.internal.processors.cache.transactions.TxOnCachesStopTest; import org.apache.ignite.internal.processors.cache.transactions.TxOptimisticOnPartitionExchangeTest; import org.apache.ignite.internal.processors.cache.transactions.TxOptimisticPrepareOnUnstableTopologyTest; import org.apache.ignite.internal.processors.cache.transactions.TxRollbackAsyncNearCacheTest; @@ -46,74 +47,86 @@ import org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnIncorrectParamsTest; import org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnTimeoutNearCacheTest; import org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnTimeoutNoDeadlockDetectionTest; +import org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnTimeoutOnePhaseCommitTest; import org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnTimeoutTest; import org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnTopologyChangeTest; import org.apache.ignite.internal.processors.cache.transactions.TxStateChangeEventTest; -import org.apache.ignite.testframework.junits.GridAbstractTest; +import org.apache.ignite.testframework.GridTestUtils; +import 
org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite. */ -public class IgniteCacheTestSuite6 extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheTestSuite6 { /** * @return IgniteCache test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { - System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false"); + public static TestSuite suite() { + return suite(null); + } + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. + */ + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("IgniteCache Test Suite part 6"); - suite.addTestSuite(CachePartitionStateTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionEvictionDuringReadThroughSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteOptimisticTxSuspendResumeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePessimisticTxSuspendResumeTest.class, ignoredTests); - suite.addTestSuite(GridCachePartitionEvictionDuringReadThroughSelfTest.class); - suite.addTestSuite(IgniteOptimisticTxSuspendResumeTest.class); - suite.addTestSuite(IgniteOptimisticTxSuspendResumeMultiServerTest.class); - suite.addTestSuite(IgnitePessimisticTxSuspendResumeTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheExchangeMergeTest.class, ignoredTests); - suite.addTestSuite(CacheExchangeMergeTest.class); + GridTestUtils.addTestIfNeeded(suite, TxRollbackOnTimeoutTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxRollbackOnTimeoutNoDeadlockDetectionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxRollbackOnTimeoutNearCacheTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheThreadLocalTxTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxRollbackAsyncTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, 
TxRollbackAsyncNearCacheTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxRollbackOnTopologyChangeTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxRollbackOnTimeoutOnePhaseCommitTest.class, ignoredTests); - suite.addTestSuite(TxRollbackOnTimeoutTest.class); - suite.addTestSuite(TxRollbackOnTimeoutNoDeadlockDetectionTest.class); - suite.addTestSuite(TxRollbackOnTimeoutNearCacheTest.class); - suite.addTestSuite(IgniteCacheThreadLocalTxTest.class); - suite.addTestSuite(TxRollbackAsyncTest.class); - suite.addTestSuite(TxRollbackAsyncNearCacheTest.class); - suite.addTestSuite(TxRollbackOnTopologyChangeTest.class); + GridTestUtils.addTestIfNeeded(suite, TxOptimisticPrepareOnUnstableTopologyTest.class, ignoredTests); - suite.addTestSuite(TxOptimisticPrepareOnUnstableTopologyTest.class); + GridTestUtils.addTestIfNeeded(suite, TxLabelTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxRollbackOnIncorrectParamsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxStateChangeEventTest.class, ignoredTests); - suite.addTestSuite(TxLabelTest.class); - suite.addTestSuite(TxRollbackOnIncorrectParamsTest.class); - suite.addTestSuite(TxStateChangeEventTest.class); + GridTestUtils.addTestIfNeeded(suite, TxMultiCacheAsyncOpsTest.class, ignoredTests); - suite.addTestSuite(TxMultiCacheAsyncOpsTest.class); + GridTestUtils.addTestIfNeeded(suite, TxOnCachesStartTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxOnCachesStopTest.class, ignoredTests); - suite.addTestSuite(TxOnCachesStartTest.class); - - suite.addTestSuite(IgniteCache150ClientsTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteCache150ClientsTest.class, ignoredTests); // TODO enable this test after IGNITE-6753, now it takes too long -// suite.addTestSuite(IgniteOutOfMemoryPropagationTest.class); +// GridTestUtils.addTestIfNeeded(suite, IgniteOutOfMemoryPropagationTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, 
ReplicatedAtomicCacheGetsDistributionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, ReplicatedTransactionalOptimisticCacheGetsDistributionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, ReplicatedTransactionalPessimisticCacheGetsDistributionTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, PartitionedAtomicCacheGetsDistributionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, PartitionedTransactionalOptimisticCacheGetsDistributionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, PartitionedTransactionalPessimisticCacheGetsDistributionTest.class, ignoredTests); - suite.addTestSuite(ReplicatedAtomicCacheGetsDistributionTest.class); - suite.addTestSuite(ReplicatedTransactionalOptimisticCacheGetsDistributionTest.class); - suite.addTestSuite(ReplicatedTransactionalPessimisticCacheGetsDistributionTest.class); + GridTestUtils.addTestIfNeeded(suite, TxOptimisticOnPartitionExchangeTest.class, ignoredTests); - suite.addTestSuite(PartitionedAtomicCacheGetsDistributionTest.class); - suite.addTestSuite(PartitionedTransactionalOptimisticCacheGetsDistributionTest.class); - suite.addTestSuite(PartitionedTransactionalPessimisticCacheGetsDistributionTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteExchangeLatchManagerCoordinatorFailTest.class, ignoredTests); - suite.addTestSuite(TxOptimisticOnPartitionExchangeTest.class); + GridTestUtils.addTestIfNeeded(suite, PartitionsExchangeCoordinatorFailoverTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheTryLockMultithreadedTest.class, ignoredTests); - suite.addTestSuite(IgniteExchangeLatchManagerCoordinatorFailTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheParallelStartTest.class, ignoredTests); - suite.addTestSuite(PartitionsExchangeCoordinatorFailoverTest.class); - suite.addTestSuite(CacheTryLockMultithreadedTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheNoAffinityExchangeTest.class, ignoredTests); - 
//suite.addTestSuite(CacheClientsConcurrentStartTest.class); - //suite.addTestSuite(GridCacheRebalancingOrderingTest.class); - //suite.addTestSuite(IgniteCacheClientMultiNodeUpdateTopologyLockTest.class); + //GridTestUtils.addTestIfNeeded(suite, CacheClientsConcurrentStartTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingOrderingTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, IgniteCacheClientMultiNodeUpdateTopologyLockTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite7.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite7.java index 6c48ecc2da6f7..f540afdcad21b 100755 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite7.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite7.java @@ -33,9 +33,8 @@ import org.apache.ignite.internal.processors.cache.WalModeChangeCoordinatorNotAffinityNodeSelfTest; import org.apache.ignite.internal.processors.cache.WalModeChangeSelfTest; import org.apache.ignite.internal.processors.cache.distributed.Cache64kPartitionsTest; -import org.apache.ignite.internal.processors.cache.distributed.CachePageWriteLockUnlockTest; -import org.apache.ignite.internal.processors.cache.distributed.CacheRentingStateRepairTest; import org.apache.ignite.internal.processors.cache.distributed.CacheDataLossOnPartitionMoveTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheRentingStateRepairTest; import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCacheStartWithLoadTest; import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingPartitionCountersTest; import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingWithAsyncClearingTest; @@ -45,64 +44,64 @@ import 
org.apache.ignite.internal.processors.cache.transactions.TransactionIntegrityWithPrimaryIndexCorruptionTest; import org.apache.ignite.internal.processors.cache.transactions.TxRollbackAsyncWithPersistenceTest; import org.apache.ignite.internal.processors.cache.transactions.TxWithSmallTimeoutAndContentionOneKeyTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite. */ -public class IgniteCacheTestSuite7 extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheTestSuite7 { /** * @return IgniteCache test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Tests to ignore. * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite(Set ignoredTests) throws Exception { + public static TestSuite suite(Set ignoredTests) { TestSuite suite = new TestSuite("IgniteCache With Persistence Test Suite"); - suite.addTestSuite(CheckpointBufferDeadlockTest.class); - suite.addTestSuite(IgniteCacheStartWithLoadTest.class); - - suite.addTestSuite(AuthenticationConfigurationClusterTest.class); - suite.addTestSuite(AuthenticationProcessorSelfTest.class); - suite.addTestSuite(AuthenticationOnNotActiveClusterTest.class); - suite.addTestSuite(AuthenticationProcessorNodeRestartTest.class); - suite.addTestSuite(AuthenticationProcessorNPEOnStartTest.class); - suite.addTestSuite(Authentication1kUsersNodeRestartTest.class); + GridTestUtils.addTestIfNeeded(suite, CheckpointBufferDeadlockTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheStartWithLoadTest.class, ignoredTests); - suite.addTestSuite(CacheDataRegionConfigurationTest.class); + GridTestUtils.addTestIfNeeded(suite, AuthenticationConfigurationClusterTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, 
AuthenticationProcessorSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, AuthenticationOnNotActiveClusterTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, AuthenticationProcessorNodeRestartTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, AuthenticationProcessorNPEOnStartTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, Authentication1kUsersNodeRestartTest.class, ignoredTests); - suite.addTestSuite(WalModeChangeAdvancedSelfTest.class); - suite.addTestSuite(WalModeChangeSelfTest.class); - suite.addTestSuite(WalModeChangeCoordinatorNotAffinityNodeSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheDataRegionConfigurationTest.class, ignoredTests); - suite.addTestSuite(Cache64kPartitionsTest.class); - suite.addTestSuite(GridCacheRebalancingPartitionCountersTest.class); - suite.addTestSuite(GridCacheRebalancingWithAsyncClearingTest.class); + GridTestUtils.addTestIfNeeded(suite, WalModeChangeAdvancedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, WalModeChangeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, WalModeChangeCoordinatorNotAffinityNodeSelfTest.class, ignoredTests); - suite.addTestSuite(IgnitePdsCacheAssignmentNodeRestartsTest.class); - suite.addTestSuite(TxRollbackAsyncWithPersistenceTest.class); + GridTestUtils.addTestIfNeeded(suite, Cache64kPartitionsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingPartitionCountersTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingWithAsyncClearingTest.class, ignoredTests); - suite.addTestSuite(CacheGroupMetricsMBeanTest.class); - suite.addTestSuite(CacheMetricsManageTest.class); - suite.addTestSuite(PageEvictionMultinodeMixedRegionsTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCacheAssignmentNodeRestartsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, TxRollbackAsyncWithPersistenceTest.class, 
ignoredTests); - suite.addTestSuite(IgniteDynamicCacheStartFailWithPersistenceTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheGroupMetricsMBeanTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheMetricsManageTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, PageEvictionMultinodeMixedRegionsTest.class, ignoredTests); - suite.addTestSuite(TxWithSmallTimeoutAndContentionOneKeyTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteDynamicCacheStartFailWithPersistenceTest.class, ignoredTests); - suite.addTestSuite(CacheRentingStateRepairTest.class); + GridTestUtils.addTestIfNeeded(suite, TxWithSmallTimeoutAndContentionOneKeyTest.class, ignoredTests); - suite.addTestSuite(TransactionIntegrityWithPrimaryIndexCorruptionTest.class); - suite.addTestSuite(CacheDataLossOnPartitionMoveTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheRentingStateRepairTest.class, ignoredTests); - suite.addTestSuite(CachePageWriteLockUnlockTest.class); + GridTestUtils.addTestIfNeeded(suite, TransactionIntegrityWithPrimaryIndexCorruptionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CacheDataLossOnPartitionMoveTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite8.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite8.java index 086c567dac0ea..e32658d23bd44 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite8.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite8.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.GridCacheOrderedPreloadingSelfTest; import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRabalancingDelayedPartitionMapExchangeSelfTest; @@ -25,41 +26,48 @@ import 
org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingSyncCheckDataTest; import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingSyncSelfTest; import org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRebalancingUnmarshallingFailedSelfTest; -import org.apache.ignite.testframework.junits.GridAbstractTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite. */ -public class IgniteCacheTestSuite8 extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheTestSuite8 { /** * @return IgniteCache test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { - System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false"); + public static TestSuite suite() { + return suite(null); + } + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("IgniteCache Test Suite part 8"); // Cache metrics. - suite.addTest(IgniteCacheMetricsSelfTestSuite.suite()); + suite.addTest(IgniteCacheMetricsSelfTestSuite.suite(ignoredTests)); // Topology validator. - suite.addTest(IgniteTopologyValidatorTestSuite.suite()); + suite.addTest(IgniteTopologyValidatorTestSuite.suite(ignoredTests)); // Eviction. - suite.addTest(IgniteCacheEvictionSelfTestSuite.suite()); + suite.addTest(IgniteCacheEvictionSelfTestSuite.suite(ignoredTests)); // Iterators. - suite.addTest(IgniteCacheIteratorsSelfTestSuite.suite()); + suite.addTest(IgniteCacheIteratorsSelfTestSuite.suite(ignoredTests)); // Rebalancing. 
- suite.addTestSuite(GridCacheOrderedPreloadingSelfTest.class); - suite.addTestSuite(GridCacheRebalancingSyncSelfTest.class); - suite.addTestSuite(GridCacheRebalancingSyncCheckDataTest.class); - suite.addTestSuite(GridCacheRebalancingUnmarshallingFailedSelfTest.class); - suite.addTestSuite(GridCacheRebalancingAsyncSelfTest.class); - suite.addTestSuite(GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.class); - suite.addTestSuite(GridCacheRebalancingCancelTest.class); + GridTestUtils.addTestIfNeeded(suite, GridCacheOrderedPreloadingSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingSyncSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingSyncCheckDataTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingUnmarshallingFailedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingAsyncSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheRebalancingCancelTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite9.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite9.java index 386b17bacae2e..b9a2dc6736004 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite9.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTestSuite9.java @@ -17,37 +17,55 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.CachePutIfAbsentTest; import org.apache.ignite.internal.processors.cache.IgniteCacheGetCustomCollectionsSelfTest; import org.apache.ignite.internal.processors.cache.IgniteCacheLoadRebalanceEvictionSelfTest; import 
org.apache.ignite.internal.processors.cache.distributed.CacheAtomicPrimarySyncBackPressureTest; +import org.apache.ignite.internal.processors.cache.distributed.CacheOperationsInterruptTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteCachePrimarySyncTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteTxCachePrimarySyncTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteTxCacheWriteSynchronizationModesMultithreadedTest; -import org.apache.ignite.testframework.junits.GridAbstractTest; +import org.apache.ignite.internal.processors.cache.distributed.IgniteTxConcurrentRemoveObjectsTest; +import org.apache.ignite.internal.processors.cache.transactions.TxDataConsistencyOnCommitFailureTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite. */ -public class IgniteCacheTestSuite9 extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheTestSuite9 { /** * @return IgniteCache test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { - System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false"); + public static TestSuite suite() { + return suite(null); + } + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. 
+ */ + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("IgniteCache Test Suite part 9"); - suite.addTestSuite(IgniteCacheGetCustomCollectionsSelfTest.class); - suite.addTestSuite(IgniteCacheLoadRebalanceEvictionSelfTest.class); - suite.addTestSuite(IgniteCachePrimarySyncTest.class); - suite.addTestSuite(IgniteTxCachePrimarySyncTest.class); - suite.addTestSuite(IgniteTxCacheWriteSynchronizationModesMultithreadedTest.class); - suite.addTestSuite(CachePutIfAbsentTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheGetCustomCollectionsSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCacheLoadRebalanceEvictionSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteCachePrimarySyncTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTxCachePrimarySyncTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTxCacheWriteSynchronizationModesMultithreadedTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CachePutIfAbsentTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, CacheAtomicPrimarySyncBackPressureTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, IgniteTxConcurrentRemoveObjectsTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, TxDataConsistencyOnCommitFailureTest.class, ignoredTests); - suite.addTestSuite(CacheAtomicPrimarySyncBackPressureTest.class); + GridTestUtils.addTestIfNeeded(suite, CacheOperationsInterruptTest.class, ignoredTests); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTxRecoverySelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTxRecoverySelfTestSuite.java index a2c6c83f3e193..2cceda3cb2ba6 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTxRecoverySelfTestSuite.java +++ 
b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheTxRecoverySelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedTxPessimisticOriginatingNodeFailureSelfTest; import org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedNearDisabledTxOriginatingNodeFailureSelfTest; @@ -30,35 +31,37 @@ import org.apache.ignite.internal.processors.cache.distributed.near.GridCacheNearTxPessimisticOriginatingNodeFailureSelfTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxOriginatingNodeFailureSelfTest; import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxPessimisticOriginatingNodeFailureSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Tx recovery self test suite. */ -public class IgniteCacheTxRecoverySelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheTxRecoverySelfTestSuite { /** * @return Cache API test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Cache tx recovery test suite"); - suite.addTestSuite(IgniteCacheCommitDelayTxRecoveryTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheCommitDelayTxRecoveryTest.class)); - suite.addTestSuite(IgniteCachePartitionedPrimaryNodeFailureRecoveryTest.class); - suite.addTestSuite(IgniteCachePartitionedNearDisabledPrimaryNodeFailureRecoveryTest.class); - suite.addTestSuite(IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedPrimaryNodeFailureRecoveryTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedNearDisabledPrimaryNodeFailureRecoveryTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest.class)); - suite.addTestSuite(GridCachePartitionedTxOriginatingNodeFailureSelfTest.class); - suite.addTestSuite(GridCachePartitionedNearDisabledTxOriginatingNodeFailureSelfTest.class); - suite.addTestSuite(GridCacheReplicatedTxOriginatingNodeFailureSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedTxOriginatingNodeFailureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedNearDisabledTxOriginatingNodeFailureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheReplicatedTxOriginatingNodeFailureSelfTest.class)); - suite.addTestSuite(GridCacheColocatedTxPessimisticOriginatingNodeFailureSelfTest.class); - suite.addTestSuite(GridCacheNearTxPessimisticOriginatingNodeFailureSelfTest.class); - suite.addTestSuite(GridCacheReplicatedTxPessimisticOriginatingNodeFailureSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridCacheColocatedTxPessimisticOriginatingNodeFailureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheNearTxPessimisticOriginatingNodeFailureSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridCacheReplicatedTxPessimisticOriginatingNodeFailureSelfTest.class)); - suite.addTestSuite(IgniteCacheTxRecoveryRollbackTest.class); - suite.addTestSuite(TxRecoveryStoreEnabledTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheTxRecoveryRollbackTest.class)); + suite.addTest(new JUnit4TestAdapter(TxRecoveryStoreEnabledTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheWriteBehindTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheWriteBehindTestSuite.java index dff93ffa5ecf0..e7e4050176c8a 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheWriteBehindTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteCacheWriteBehindTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.GridCachePartitionedWritesTest; import org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStoreLocalTest; @@ -28,30 +29,41 @@ import org.apache.ignite.internal.processors.cache.store.IgnteCacheClientWriteBehindStoreAtomicTest; import org.apache.ignite.internal.processors.cache.store.IgnteCacheClientWriteBehindStoreNonCoalescingTest; import org.apache.ignite.internal.processors.cache.store.IgnteCacheClientWriteBehindStoreTxTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite that contains all tests for {@link org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore}. */ -public class IgniteCacheWriteBehindTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheWriteBehindTestSuite { /** * @return Ignite Bamboo in-memory data grid test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { + return suite(null); + } + + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. + */ + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("Write-Behind Store Test Suite"); // Write-behind tests. - suite.addTest(new TestSuite(GridCacheWriteBehindStoreSelfTest.class)); - suite.addTest(new TestSuite(GridCacheWriteBehindStoreMultithreadedSelfTest.class)); - suite.addTest(new TestSuite(GridCacheWriteBehindStoreLocalTest.class)); - suite.addTest(new TestSuite(GridCacheWriteBehindStoreReplicatedTest.class)); - suite.addTest(new TestSuite(GridCacheWriteBehindStorePartitionedTest.class)); - suite.addTest(new TestSuite(GridCacheWriteBehindStorePartitionedMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(GridCachePartitionedWritesTest.class)); - suite.addTest(new TestSuite(IgnteCacheClientWriteBehindStoreAtomicTest.class)); - suite.addTest(new TestSuite(IgnteCacheClientWriteBehindStoreTxTest.class)); - suite.addTest(new TestSuite(IgnteCacheClientWriteBehindStoreNonCoalescingTest.class)); + GridTestUtils.addTestIfNeeded(suite, GridCacheWriteBehindStoreSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheWriteBehindStoreMultithreadedSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheWriteBehindStoreLocalTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheWriteBehindStoreReplicatedTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheWriteBehindStorePartitionedTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCacheWriteBehindStorePartitionedMultiNodeSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, GridCachePartitionedWritesTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnteCacheClientWriteBehindStoreAtomicTest.class, ignoredTests); + 
GridTestUtils.addTestIfNeeded(suite, IgnteCacheClientWriteBehindStoreTxTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnteCacheClientWriteBehindStoreNonCoalescingTest.class, ignoredTests); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientNodesTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientNodesTestSuite.java index c9405fa00152d..58e485d2daec0 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientNodesTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientNodesTestSuite.java @@ -17,28 +17,31 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheClientNodeConcurrentStart; import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheClientReconnectTest; import org.apache.ignite.internal.processors.cache.distributed.IgniteCacheManyClientsTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteClientNodesTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteClientNodesTestSuite { /** * @return Test suite. - * @throws Exception In case of error. 
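The hunk above adds a `suite(Collection ignoredTests)` overload that delegates every registration to `GridTestUtils.addTestIfNeeded`, with `suite()` passing `null` to mean "run everything". The real helper takes a `TestSuite` and wraps the class in an adapter; the filtering idea itself can be sketched with plain collections. All names below (`AddIfNeededSketch`, the `List<Class<?>>` stand-in for a suite) are illustrative, not Ignite API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class AddIfNeededSketch {
    /**
     * Stand-in for GridTestUtils.addTestIfNeeded: add the test class to the
     * suite unless it appears in ignoredTests. A null ignore list means
     * "nothing is ignored", matching the suite(null) delegation in the patch.
     */
    static void addTestIfNeeded(List<Class<?>> suite, Class<?> testCls, Collection<Class<?>> ignoredTests) {
        if (ignoredTests == null || !ignoredTests.contains(testCls))
            suite.add(testCls);
    }

    public static void main(String[] args) {
        List<Class<?>> suite = new ArrayList<>();
        Collection<Class<?>> ignored = Arrays.asList(String.class);

        addTestIfNeeded(suite, Integer.class, ignored); // kept
        addTestIfNeeded(suite, String.class, ignored);  // skipped: on the ignore list
        addTestIfNeeded(suite, Double.class, null);     // kept: no ignore list at all

        System.out.println(suite.size()); // prints 2
    }
}
```

This keeps each suite's registration list complete in source while letting a CI configuration drop known-flaky classes without editing the suite.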
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Client Nodes Reconnect Test Suite"); suite.addTest(IgniteClientReconnectTestSuite.suite()); - suite.addTestSuite(IgniteCacheManyClientsTest.class); - suite.addTestSuite(IgniteCacheClientNodeConcurrentStart.class); - suite.addTestSuite(IgniteCacheClientReconnectTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheManyClientsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheClientNodeConcurrentStart.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheClientReconnectTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientReconnectTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientReconnectTestSuite.java index d5ebd15a3d417..908aa36f615a3 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientReconnectTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteClientReconnectTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.IgniteClientConnectAfterCommunicationFailureTest; import org.apache.ignite.internal.IgniteClientReconnectApiExceptionTest; @@ -33,33 +34,35 @@ import org.apache.ignite.internal.IgniteClientReconnectStopTest; import org.apache.ignite.internal.IgniteClientReconnectStreamerTest; import org.apache.ignite.internal.IgniteClientRejoinTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteClientReconnectTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteClientReconnectTestSuite { /** * @return Test suite. - * @throws Exception In case of error. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Client Reconnect Test Suite"); - suite.addTestSuite(IgniteClientConnectAfterCommunicationFailureTest.class); - suite.addTestSuite(IgniteClientReconnectStopTest.class); - suite.addTestSuite(IgniteClientReconnectApiExceptionTest.class); - suite.addTestSuite(IgniteClientReconnectDiscoveryStateTest.class); - suite.addTestSuite(IgniteClientReconnectCacheTest.class); - suite.addTestSuite(IgniteClientReconnectDelayedSpiTest.class); - suite.addTestSuite(IgniteClientReconnectBinaryContexTest.class); - suite.addTestSuite(IgniteClientReconnectContinuousProcessorTest.class); - suite.addTestSuite(IgniteClientReconnectComputeTest.class); - suite.addTestSuite(IgniteClientReconnectAtomicsTest.class); - suite.addTestSuite(IgniteClientReconnectCollectionsTest.class); - suite.addTestSuite(IgniteClientReconnectServicesTest.class); - suite.addTestSuite(IgniteClientReconnectStreamerTest.class); - suite.addTestSuite(IgniteClientReconnectFailoverTest.class); - suite.addTestSuite(IgniteClientRejoinTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteClientConnectAfterCommunicationFailureTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectStopTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectApiExceptionTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectDiscoveryStateTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectDelayedSpiTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectBinaryContexTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectContinuousProcessorTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectComputeTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectAtomicsTest.class)); + suite.addTest(new 
JUnit4TestAdapter(IgniteClientReconnectCollectionsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectServicesTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectStreamerTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectFailoverTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientRejoinTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeBasicConfigVariationsFullApiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeBasicConfigVariationsFullApiTestSuite.java index 41cc8a10c49f6..83d923c66edf9 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeBasicConfigVariationsFullApiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeBasicConfigVariationsFullApiTestSuite.java @@ -26,11 +26,14 @@ import org.apache.ignite.testframework.configvariations.ConfigVariations; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; import org.apache.ignite.testframework.configvariations.Parameters; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Full API compute test. */ -public class IgniteComputeBasicConfigVariationsFullApiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteComputeBasicConfigVariationsFullApiTestSuite { /** */ @SuppressWarnings("unchecked") private static final ConfigParameter[][] BASIC_COMPUTE_SET = new ConfigParameter[][] { @@ -45,9 +48,8 @@ public class IgniteComputeBasicConfigVariationsFullApiTestSuite extends TestSuit /** * @return Compute API test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Compute New Full API Test Suite"); suite.addTest(new ConfigVariationsTestSuiteBuilder( diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeGridTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeGridTestSuite.java index 2e5708d5743ea..b5d0f4ff241ff 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeGridTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteComputeGridTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.ClusterNodeMetricsSelfTest; import org.apache.ignite.internal.ClusterNodeMetricsUpdateTest; @@ -77,6 +78,7 @@ import org.apache.ignite.internal.IgniteExplicitImplicitDeploymentSelfTest; import org.apache.ignite.internal.IgniteRoundRobinErrorAfterClientReconnectTest; import org.apache.ignite.internal.TaskNodeRestartTest; +import org.apache.ignite.internal.VisorManagementEventSelfTest; import org.apache.ignite.internal.managers.checkpoint.GridCheckpointManagerSelfTest; import org.apache.ignite.internal.managers.checkpoint.GridCheckpointTaskSelfTest; import org.apache.ignite.internal.managers.communication.GridCommunicationManagerListenersSelfTest; @@ -88,16 +90,18 @@ import org.apache.ignite.p2p.GridMultinodeRedeployIsolatedModeSelfTest; import org.apache.ignite.p2p.GridMultinodeRedeployPrivateModeSelfTest; import org.apache.ignite.p2p.GridMultinodeRedeploySharedModeSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Compute grid test suite. */ +@RunWith(AllTests.class) public class IgniteComputeGridTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Compute Grid Test Suite"); suite.addTest(IgniteTaskSessionSelfTestSuite.suite()); @@ -105,78 +109,80 @@ public static TestSuite suite() throws Exception { suite.addTest(IgniteJobMetricsSelfTestSuite.suite()); suite.addTest(IgniteContinuousTaskSelfTestSuite.suite()); - suite.addTestSuite(GridTaskCancelSingleNodeSelfTest.class); - suite.addTestSuite(GridTaskFailoverSelfTest.class); - suite.addTestSuite(GridJobCollisionCancelSelfTest.class); - suite.addTestSuite(GridTaskTimeoutSelfTest.class); - suite.addTestSuite(GridCancelUnusedJobSelfTest.class); - suite.addTestSuite(GridTaskJobRejectSelfTest.class); - suite.addTestSuite(GridTaskExecutionSelfTest.class); - //suite.addTestSuite(GridTaskExecutionContextSelfTest.class); - suite.addTestSuite(GridTaskExecutionWithoutPeerClassLoadingSelfTest.class); - suite.addTestSuite(GridFailoverSelfTest.class); - suite.addTestSuite(GridTaskListenerSelfTest.class); - suite.addTestSuite(GridFailoverTopologySelfTest.class); - suite.addTestSuite(GridTaskResultCacheSelfTest.class); - suite.addTestSuite(GridTaskMapAsyncSelfTest.class); - suite.addTestSuite(GridJobContextSelfTest.class); - suite.addTestSuite(GridJobMasterLeaveAwareSelfTest.class); - suite.addTestSuite(GridJobStealingSelfTest.class); - suite.addTestSuite(GridJobSubjectIdSelfTest.class); - suite.addTestSuite(GridMultithreadedJobStealingSelfTest.class); - suite.addTestSuite(GridAlwaysFailoverSpiFailSelfTest.class); - suite.addTestSuite(GridTaskInstanceExecutionSelfTest.class); - suite.addTestSuite(ClusterNodeMetricsSelfTest.class); - suite.addTestSuite(ClusterNodeMetricsUpdateTest.class); - suite.addTestSuite(GridNonHistoryMetricsSelfTest.class); - suite.addTestSuite(GridCancelledJobsMetricsSelfTest.class); - suite.addTestSuite(GridCollisionJobsContextSelfTest.class); - suite.addTestSuite(GridJobStealingZeroActiveJobsSelfTest.class); - 
suite.addTestSuite(GridTaskFutureImplStopGridSelfTest.class); - suite.addTestSuite(GridFailoverCustomTopologySelfTest.class); - suite.addTestSuite(GridMultipleSpisSelfTest.class); - suite.addTestSuite(GridStopWithWaitSelfTest.class); - suite.addTestSuite(GridCancelOnGridStopSelfTest.class); - suite.addTestSuite(GridDeploymentSelfTest.class); - suite.addTestSuite(GridDeploymentMultiThreadedSelfTest.class); - suite.addTestSuite(GridMultipleVersionsDeploymentSelfTest.class); - suite.addTestSuite(IgniteExplicitImplicitDeploymentSelfTest.class); - suite.addTestSuite(GridEventStorageCheckAllEventsSelfTest.class); - suite.addTestSuite(GridCommunicationManagerListenersSelfTest.class); - suite.addTestSuite(IgniteExecutorServiceTest.class); - suite.addTestSuite(GridTaskInstantiationSelfTest.class); - suite.addTestSuite(GridMultipleJobsSelfTest.class); - suite.addTestSuite(GridCheckpointManagerSelfTest.class); - suite.addTestSuite(GridCheckpointTaskSelfTest.class); - suite.addTestSuite(GridTaskNameAnnotationSelfTest.class); - suite.addTestSuite(GridJobCheckpointCleanupSelfTest.class); - suite.addTestSuite(GridEventStorageSelfTest.class); - suite.addTestSuite(GridEventStorageDefaultExceptionTest.class); - suite.addTestSuite(GridFailoverTaskWithPredicateSelfTest.class); - suite.addTestSuite(GridProjectionLocalJobMultipleArgumentsSelfTest.class); - suite.addTestSuite(GridAffinitySelfTest.class); - suite.addTestSuite(GridAffinityNoCacheSelfTest.class); - //suite.addTestSuite(GridAffinityMappedTest.class); - //suite.addTestSuite(GridAffinityP2PSelfTest.class); - suite.addTestSuite(GridEventStorageRuntimeConfigurationSelfTest.class); - suite.addTestSuite(GridMultinodeRedeployContinuousModeSelfTest.class); - suite.addTestSuite(GridMultinodeRedeploySharedModeSelfTest.class); - suite.addTestSuite(GridMultinodeRedeployPrivateModeSelfTest.class); - suite.addTestSuite(GridMultinodeRedeployIsolatedModeSelfTest.class); - suite.addTestSuite(IgniteComputeEmptyClusterGroupTest.class); - 
suite.addTestSuite(IgniteComputeTopologyExceptionTest.class); - suite.addTestSuite(IgniteComputeResultExceptionTest.class); - suite.addTestSuite(GridTaskFailoverAffinityRunTest.class); - suite.addTestSuite(TaskNodeRestartTest.class); - suite.addTestSuite(IgniteRoundRobinErrorAfterClientReconnectTest.class); - suite.addTestSuite(PublicThreadpoolStarvationTest.class); - suite.addTestSuite(StripedExecutorTest.class); - suite.addTestSuite(GridJobServicesAddNodeTest.class); + suite.addTest(new JUnit4TestAdapter(GridTaskCancelSingleNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobCollisionCancelSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskTimeoutSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCancelUnusedJobSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskJobRejectSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskExecutionSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridTaskExecutionContextSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskExecutionWithoutPeerClassLoadingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskListenerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFailoverTopologySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskResultCacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskMapAsyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobContextSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobMasterLeaveAwareSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobSubjectIdSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultithreadedJobStealingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAlwaysFailoverSpiFailSelfTest.class)); + 
suite.addTest(new JUnit4TestAdapter(GridTaskInstanceExecutionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ClusterNodeMetricsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ClusterNodeMetricsUpdateTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNonHistoryMetricsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCancelledJobsMetricsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCollisionJobsContextSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingZeroActiveJobsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskFutureImplStopGridSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFailoverCustomTopologySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultipleSpisSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridStopWithWaitSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCancelOnGridStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridDeploymentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridDeploymentMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultipleVersionsDeploymentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteExplicitImplicitDeploymentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridEventStorageCheckAllEventsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCommunicationManagerListenersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteExecutorServiceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskInstantiationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultipleJobsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCheckpointManagerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCheckpointTaskSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskNameAnnotationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobCheckpointCleanupSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridEventStorageSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridEventStorageDefaultExceptionTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFailoverTaskWithPredicateSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridProjectionLocalJobMultipleArgumentsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAffinitySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAffinityNoCacheSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridAffinityMappedTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridAffinityP2PSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridEventStorageRuntimeConfigurationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultinodeRedeployContinuousModeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultinodeRedeploySharedModeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultinodeRedeployPrivateModeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultinodeRedeployIsolatedModeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteComputeEmptyClusterGroupTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteComputeTopologyExceptionTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteComputeResultExceptionTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskFailoverAffinityRunTest.class)); + suite.addTest(new JUnit4TestAdapter(TaskNodeRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteRoundRobinErrorAfterClientReconnectTest.class)); + suite.addTest(new JUnit4TestAdapter(PublicThreadpoolStarvationTest.class)); + suite.addTest(new JUnit4TestAdapter(StripedExecutorTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobServicesAddNodeTest.class)); - suite.addTestSuite(IgniteComputeCustomExecutorConfigurationSelfTest.class); - suite.addTestSuite(IgniteComputeCustomExecutorSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteComputeCustomExecutorConfigurationSelfTest.class)); + 
suite.addTest(new JUnit4TestAdapter(IgniteComputeCustomExecutorSelfTest.class)); - suite.addTestSuite(IgniteComputeJobOneThreadTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteComputeJobOneThreadTest.class)); + + suite.addTest(new JUnit4TestAdapter(VisorManagementEventSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousQueryConfigVariationsSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousQueryConfigVariationsSuite.java index 221dcaef52879..24b5d520c3714 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousQueryConfigVariationsSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousQueryConfigVariationsSuite.java @@ -20,18 +20,20 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryVariationsTest; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import static org.apache.ignite.IgniteSystemProperties.IGNITE_DISCOVERY_HISTORY_SIZE; /** * Test suite for cache queries. */ -public class IgniteContinuousQueryConfigVariationsSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteContinuousQueryConfigVariationsSuite { /** * @return Test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { System.setProperty(IGNITE_DISCOVERY_HISTORY_SIZE, "100"); TestSuite suite = new TestSuite("Ignite Continuous Query Config Variations Suite"); diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousTaskSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousTaskSelfTestSuite.java index a99826ecae600..32297dfe9bb4b 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousTaskSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteContinuousTaskSelfTestSuite.java @@ -17,29 +17,32 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.GridContinuousJobAnnotationSelfTest; import org.apache.ignite.internal.GridContinuousJobSiblingsSelfTest; import org.apache.ignite.internal.GridContinuousTaskSelfTest; import org.apache.ignite.internal.GridTaskContinuousMapperSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Continuous task self-test suite. */ -public class IgniteContinuousTaskSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteContinuousTaskSelfTestSuite { /** * @return Test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Kernal Test Suite"); - suite.addTest(new TestSuite(GridContinuousJobAnnotationSelfTest.class)); - suite.addTest(new TestSuite(GridContinuousJobSiblingsSelfTest.class)); - suite.addTest(new TestSuite(GridContinuousTaskSelfTest.class)); - suite.addTest(new TestSuite(GridTaskContinuousMapperSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridContinuousJobAnnotationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridContinuousJobSiblingsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridContinuousTaskSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTaskContinuousMapperSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDatabaseTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDatabaseTestSuite.java index 246b23559d6b4..015eb28347405 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDatabaseTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDatabaseTestSuite.java @@ -17,23 +17,26 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.database.IgniteDbMultiNodePutGetTest; import org.apache.ignite.internal.processors.database.IgniteDbSingleNodePutGetTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteDatabaseTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteDatabaseTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Database Tests"); - suite.addTestSuite(IgniteDbSingleNodePutGetTest.class); - suite.addTestSuite(IgniteDbMultiNodePutGetTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteDbSingleNodePutGetTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDbMultiNodePutGetTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakTestSuite.java index f271bd88c4b7d..7a144ae6e63e2 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.cache.LargeEntryUpdateTest; import org.apache.ignite.internal.processors.database.IgniteDbMemoryLeakLargeObjectsTest; @@ -24,25 +25,27 @@ import org.apache.ignite.internal.processors.database.IgniteDbMemoryLeakNonTransactionalTest; import org.apache.ignite.internal.processors.database.IgniteDbMemoryLeakTest; import org.apache.ignite.internal.processors.database.IgniteDbMemoryLeakWithExpirationTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Page memory leaks tests. */ -public class IgniteDbMemoryLeakTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteDbMemoryLeakTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Db Memory Leaks Test Suite"); - suite.addTestSuite(IgniteDbMemoryLeakTest.class); - suite.addTestSuite(IgniteDbMemoryLeakWithExpirationTest.class); - suite.addTestSuite(IgniteDbMemoryLeakLargePagesTest.class); - suite.addTestSuite(IgniteDbMemoryLeakLargeObjectsTest.class); - suite.addTestSuite(IgniteDbMemoryLeakNonTransactionalTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteDbMemoryLeakTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDbMemoryLeakWithExpirationTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDbMemoryLeakLargePagesTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDbMemoryLeakLargeObjectsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDbMemoryLeakNonTransactionalTest.class)); - suite.addTestSuite(LargeEntryUpdateTest.class); + suite.addTest(new JUnit4TestAdapter(LargeEntryUpdateTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteExternalizableSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteExternalizableSelfTestSuite.java index 7ac1284a813e5..7f5b7b689116f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteExternalizableSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteExternalizableSelfTestSuite.java @@ -17,21 +17,25 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.GridTopicExternalizableSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Externalizable self-test suite. */ -public class IgniteExternalizableSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteExternalizableSelfTestSuite { /** * @return Test suite. 
*/ public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Externalizable Test Suite"); - suite.addTest(new TestSuite(GridTopicExternalizableSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTopicExternalizableSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIgfsTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIgfsTestSuite.java index 8f026f79a6a23..d6c2833d0098b 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIgfsTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIgfsTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.igfs.IgfsFragmentizerSelfTest; import org.apache.ignite.igfs.IgfsFragmentizerTopologySelfTest; @@ -69,93 +70,95 @@ import org.apache.ignite.internal.processors.igfs.split.IgfsNewLineDelimiterRecordResolverSelfTest; import org.apache.ignite.internal.processors.igfs.split.IgfsStringDelimiterRecordResolverSelfTest; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for Hadoop file system over Ignite cache. * Contains platform independent tests only. */ -public class IgniteIgfsTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteIgfsTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite FS Test Suite For Platform Independent Tests"); - suite.addTest(new TestSuite(IgfsPrimarySelfTest.class)); - suite.addTest(new TestSuite(IgfsPrimaryMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsPrimarySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsPrimaryMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(IgfsPrimaryRelaxedConsistencySelfTest.class)); - suite.addTest(new TestSuite(IgfsPrimaryRelaxedConsistencyMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsPrimaryRelaxedConsistencySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsPrimaryRelaxedConsistencyMultiNodeSelfTest.class)); - suite.addTest(new TestSuite(IgfsDualSyncSelfTest.class)); - suite.addTest(new TestSuite(IgfsDualAsyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsDualSyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsDualAsyncSelfTest.class)); - suite.addTest(new TestSuite(IgfsLocalSecondaryFileSystemDualSyncSelfTest.class)); - suite.addTest(new TestSuite(IgfsLocalSecondaryFileSystemDualAsyncSelfTest.class)); - suite.addTest(new TestSuite(IgfsLocalSecondaryFileSystemDualSyncClientSelfTest.class)); - suite.addTest(new TestSuite(IgfsLocalSecondaryFileSystemDualAsyncClientSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsLocalSecondaryFileSystemDualSyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsLocalSecondaryFileSystemDualAsyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsLocalSecondaryFileSystemDualSyncClientSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsLocalSecondaryFileSystemDualAsyncClientSelfTest.class)); - //suite.addTest(new TestSuite(IgfsSizeSelfTest.class)); - suite.addTest(new TestSuite(IgfsAttributesSelfTest.class)); - suite.addTest(new TestSuite(IgfsFileInfoSelfTest.class)); - suite.addTest(new 
TestSuite(IgfsMetaManagerSelfTest.class)); - suite.addTest(new TestSuite(IgfsDataManagerSelfTest.class)); - suite.addTest(new TestSuite(IgfsProcessorSelfTest.class)); - suite.addTest(new TestSuite(IgfsProcessorValidationSelfTest.class)); - suite.addTest(new TestSuite(IgfsCacheSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(IgfsSizeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsAttributesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsFileInfoSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsMetaManagerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsDataManagerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsProcessorSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsProcessorValidationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsCacheSelfTest.class)); if (U.isWindows()) - suite.addTest(new TestSuite(IgfsServerManagerIpcEndpointRegistrationOnWindowsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsServerManagerIpcEndpointRegistrationOnWindowsSelfTest.class)); - suite.addTest(new TestSuite(IgfsCachePerBlockLruEvictionPolicySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsCachePerBlockLruEvictionPolicySelfTest.class)); - suite.addTest(new TestSuite(IgfsStreamsSelfTest.class)); - suite.addTest(new TestSuite(IgfsModesSelfTest.class)); - suite.addTest(new TestSuite(IgfsMetricsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsStreamsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsModesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsMetricsSelfTest.class)); - suite.addTest(new TestSuite(IgfsPrimaryClientSelfTest.class)); - suite.addTest(new TestSuite(IgfsPrimaryRelaxedConsistencyClientSelfTest.class)); - suite.addTest(new TestSuite(IgfsDualSyncClientSelfTest.class)); - suite.addTest(new TestSuite(IgfsDualAsyncClientSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsPrimaryClientSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(IgfsPrimaryRelaxedConsistencyClientSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsDualSyncClientSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsDualAsyncClientSelfTest.class)); - suite.addTest(new TestSuite(IgfsOneClientNodeTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsOneClientNodeTest.class)); - suite.addTest(new TestSuite(IgfsModeResolverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsModeResolverSelfTest.class)); - //suite.addTestSuite(IgfsPathSelfTest.class); - suite.addTestSuite(IgfsFragmentizerSelfTest.class); - suite.addTestSuite(IgfsFragmentizerTopologySelfTest.class); - suite.addTestSuite(IgfsFileMapSelfTest.class); + //suite.addTest(new JUnit4TestAdapter(IgfsPathSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsFragmentizerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsFragmentizerTopologySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsFileMapSelfTest.class)); - suite.addTestSuite(IgfsByteDelimiterRecordResolverSelfTest.class); - suite.addTestSuite(IgfsStringDelimiterRecordResolverSelfTest.class); - suite.addTestSuite(IgfsFixedLengthRecordResolverSelfTest.class); - suite.addTestSuite(IgfsNewLineDelimiterRecordResolverSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsByteDelimiterRecordResolverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsStringDelimiterRecordResolverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsFixedLengthRecordResolverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsNewLineDelimiterRecordResolverSelfTest.class)); - suite.addTestSuite(IgfsTaskSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsTaskSelfTest.class)); - suite.addTestSuite(IgfsGroupDataBlockKeyMapperHashSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsGroupDataBlockKeyMapperHashSelfTest.class)); - suite.addTestSuite(IgfsStartCacheTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsStartCacheTest.class)); - 
suite.addTestSuite(IgfsBackupsPrimarySelfTest.class); - suite.addTestSuite(IgfsBackupsDualSyncSelfTest.class); - suite.addTestSuite(IgfsBackupsDualAsyncSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsBackupsPrimarySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsBackupsDualSyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsBackupsDualAsyncSelfTest.class)); - suite.addTestSuite(IgfsBlockMessageSystemPoolStarvationSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsBlockMessageSystemPoolStarvationSelfTest.class)); // TODO: Enable when IGFS failover is fixed. - //suite.addTestSuite(IgfsBackupFailoverSelfTest.class); + //suite.addTest(new JUnit4TestAdapter(IgfsBackupFailoverSelfTest.class)); - suite.addTestSuite(IgfsProxySelfTest.class); - suite.addTestSuite(IgfsLocalSecondaryFileSystemProxySelfTest.class); - suite.addTestSuite(IgfsLocalSecondaryFileSystemProxyClientSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsProxySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsLocalSecondaryFileSystemProxySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsLocalSecondaryFileSystemProxyClientSelfTest.class)); - suite.addTestSuite(IgfsAtomicPrimarySelfTest.class); - suite.addTestSuite(IgfsAtomicPrimaryMultiNodeSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsAtomicPrimarySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgfsAtomicPrimaryMultiNodeSelfTest.class)); - suite.addTestSuite(IgfsSecondaryFileSystemInjectionSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgfsSecondaryFileSystemInjectionSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcSharedMemorySelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcSharedMemorySelfTestSuite.java index 0b8d580a1253a..c3ce92106ee34 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcSharedMemorySelfTestSuite.java +++ 
b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcSharedMemorySelfTestSuite.java @@ -17,28 +17,31 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryCrashDetectionSelfTest; import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryNativeLoaderSelfTest; import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemorySpaceSelfTest; import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryUtilsSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Shared memory test suite. */ -public class IgniteIpcSharedMemorySelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteIpcSharedMemorySelfTestSuite { /** * @return Test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite IPC Shared Memory Test Suite."); - suite.addTest(new TestSuite(IpcSharedMemorySpaceSelfTest.class)); - suite.addTest(new TestSuite(IpcSharedMemoryUtilsSelfTest.class)); - suite.addTest(new TestSuite(IpcSharedMemoryCrashDetectionSelfTest.class)); - suite.addTest(new TestSuite(IpcSharedMemoryNativeLoaderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IpcSharedMemorySpaceSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IpcSharedMemoryUtilsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IpcSharedMemoryCrashDetectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IpcSharedMemoryNativeLoaderSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcTestSuite.java index b2f2b8d8feddc..df5974ef88db6 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteIpcTestSuite.java @@ -18,11 +18,14 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Suite for shared memory mode. */ -public class IgniteIpcTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteIpcTestSuite { /** * @return IgniteCache test suite. * @throws Exception Thrown in case of the failure. diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteJobMetricsSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteJobMetricsSelfTestSuite.java index 2255bda1b1984..d2e9c18e28d77 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteJobMetricsSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteJobMetricsSelfTestSuite.java @@ -17,22 +17,25 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.jobmetrics.GridJobMetricsProcessorLoadTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Job metrics self test suite. */ -public class IgniteJobMetricsSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteJobMetricsSelfTestSuite { /** * @return Job metrics test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Job metrics Test Suite"); - suite.addTest(new TestSuite(GridJobMetricsProcessorLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobMetricsProcessorLoadTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteKernalSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteKernalSelfTestSuite.java index b8ea85093bf6d..a6d4dd7b49c23 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteKernalSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteKernalSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.ClusterMetricsSelfTest; import org.apache.ignite.internal.ComputeJobCancelWithServiceSelfTest; @@ -42,6 +43,7 @@ import org.apache.ignite.internal.LongJVMPauseDetectorTest; import org.apache.ignite.internal.managers.GridManagerStopSelfTest; import org.apache.ignite.internal.managers.communication.GridCommunicationSendMessageSelfTest; +import org.apache.ignite.internal.managers.deployment.DeploymentRequestOfUnknownClassProcessingTest; import org.apache.ignite.internal.managers.deployment.GridDeploymentManagerStopSelfTest; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManagerAliveCacheSelfTest; import org.apache.ignite.internal.managers.discovery.GridDiscoveryManagerAttributesSelfTest; @@ -53,7 +55,6 @@ import org.apache.ignite.internal.processors.service.GridServiceClientNodeTest; import org.apache.ignite.internal.processors.service.GridServiceContinuousQueryRedeployTest; import org.apache.ignite.internal.processors.service.GridServiceDeploymentCompoundFutureSelfTest; -import 
org.apache.ignite.internal.processors.service.GridServiceDeploymentExceptionPropagationTest; import org.apache.ignite.internal.processors.service.GridServicePackagePrivateSelfTest; import org.apache.ignite.internal.processors.service.GridServiceProcessorBatchDeploySelfTest; import org.apache.ignite.internal.processors.service.GridServiceProcessorMultiNodeConfigSelfTest; @@ -82,93 +83,95 @@ import org.apache.ignite.testframework.GridTestUtils; import java.util.Set; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Kernal self test suite. */ +@RunWith(AllTests.class) public class IgniteKernalSelfTestSuite extends TestSuite { /** * @return Kernal test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Tests don't include in the execution. * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite(Set ignoredTests) throws Exception { + public static TestSuite suite(Set ignoredTests) { TestSuite suite = new TestSuite("Ignite Kernal Test Suite"); - suite.addTestSuite(GridGetOrStartSelfTest.class); - suite.addTestSuite(GridSameVmStartupSelfTest.class); - suite.addTestSuite(GridSpiExceptionSelfTest.class); - suite.addTestSuite(GridRuntimeExceptionSelfTest.class); - suite.addTestSuite(GridFailedInputParametersSelfTest.class); - suite.addTestSuite(GridNodeFilterSelfTest.class); - suite.addTestSuite(GridNodeVisorAttributesSelfTest.class); - suite.addTestSuite(GridDiscoverySelfTest.class); - suite.addTestSuite(GridCommunicationSelfTest.class); - suite.addTestSuite(GridEventStorageManagerSelfTest.class); - suite.addTestSuite(GridCommunicationSendMessageSelfTest.class); - suite.addTestSuite(GridCacheMessageSelfTest.class); - suite.addTestSuite(GridDeploymentManagerStopSelfTest.class); - suite.addTestSuite(GridManagerStopSelfTest.class); - 
suite.addTestSuite(GridDiscoveryManagerAttributesSelfTest.RegularDiscovery.class); - suite.addTestSuite(GridDiscoveryManagerAttributesSelfTest.ClientDiscovery.class); - suite.addTestSuite(GridDiscoveryManagerAliveCacheSelfTest.class); - suite.addTestSuite(GridDiscoveryEventSelfTest.class); - suite.addTestSuite(GridPortProcessorSelfTest.class); - suite.addTestSuite(GridHomePathSelfTest.class); - suite.addTestSuite(GridStartupWithUndefinedIgniteHomeSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridGetOrStartSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSameVmStartupSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSpiExceptionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRuntimeExceptionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFailedInputParametersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNodeFilterSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNodeVisorAttributesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridDiscoverySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCommunicationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridEventStorageManagerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCommunicationSendMessageSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheMessageSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridDeploymentManagerStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridManagerStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridDiscoveryManagerAttributesSelfTest.RegularDiscovery.class)); + suite.addTest(new JUnit4TestAdapter(GridDiscoveryManagerAttributesSelfTest.ClientDiscovery.class)); + suite.addTest(new JUnit4TestAdapter(GridDiscoveryManagerAliveCacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridDiscoveryEventSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridPortProcessorSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridHomePathSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridStartupWithUndefinedIgniteHomeSelfTest.class)); GridTestUtils.addTestIfNeeded(suite, GridVersionSelfTest.class, ignoredTests); - suite.addTestSuite(GridListenActorSelfTest.class); - suite.addTestSuite(GridNodeLocalSelfTest.class); - suite.addTestSuite(GridKernalConcurrentAccessStopSelfTest.class); - suite.addTestSuite(IgniteConcurrentEntryProcessorAccessStopTest.class); - suite.addTestSuite(GridUpdateNotifierSelfTest.class); - suite.addTestSuite(GridAddressResolverSelfTest.class); - suite.addTestSuite(IgniteUpdateNotifierPerClusterSettingSelfTest.class); - suite.addTestSuite(GridLocalEventListenerSelfTest.class); - suite.addTestSuite(IgniteTopologyPrintFormatSelfTest.class); - suite.addTestSuite(ComputeJobCancelWithServiceSelfTest.class); - suite.addTestSuite(IgniteConnectionConcurrentReserveAndRemoveTest.class); - suite.addTestSuite(LongJVMPauseDetectorTest.class); - suite.addTestSuite(ClusterMetricsSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridListenActorSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNodeLocalSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridKernalConcurrentAccessStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteConcurrentEntryProcessorAccessStopTest.class)); + suite.addTest(new JUnit4TestAdapter(GridUpdateNotifierSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAddressResolverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteUpdateNotifierPerClusterSettingSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLocalEventListenerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteTopologyPrintFormatSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ComputeJobCancelWithServiceSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteConnectionConcurrentReserveAndRemoveTest.class)); + suite.addTest(new JUnit4TestAdapter(LongJVMPauseDetectorTest.class)); + 
suite.addTest(new JUnit4TestAdapter(ClusterMetricsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DeploymentRequestOfUnknownClassProcessingTest.class)); // Managed Services. - suite.addTestSuite(GridServiceProcessorSingleNodeSelfTest.class); - suite.addTestSuite(GridServiceProcessorMultiNodeSelfTest.class); - suite.addTestSuite(GridServiceProcessorMultiNodeConfigSelfTest.class); - suite.addTestSuite(GridServiceProcessorProxySelfTest.class); - suite.addTestSuite(GridServiceReassignmentSelfTest.class); - suite.addTestSuite(GridServiceClientNodeTest.class); - suite.addTestSuite(GridServiceProcessorStopSelfTest.class); - suite.addTestSuite(ServicePredicateAccessCacheTest.class); - suite.addTestSuite(GridServicePackagePrivateSelfTest.class); - suite.addTestSuite(GridServiceSerializationSelfTest.class); - suite.addTestSuite(GridServiceProxyNodeStopSelfTest.class); - suite.addTestSuite(GridServiceProxyClientReconnectSelfTest.class); - suite.addTestSuite(IgniteServiceReassignmentTest.class); - suite.addTestSuite(IgniteServiceProxyTimeoutInitializedTest.class); - suite.addTestSuite(IgniteServiceDynamicCachesSelfTest.class); - suite.addTestSuite(GridServiceContinuousQueryRedeployTest.class); - suite.addTestSuite(ServiceThreadPoolSelfTest.class); - suite.addTestSuite(GridServiceProcessorBatchDeploySelfTest.class); - suite.addTestSuite(GridServiceDeploymentCompoundFutureSelfTest.class); - suite.addTestSuite(SystemCacheNotConfiguredTest.class); + suite.addTest(new JUnit4TestAdapter(GridServiceProcessorSingleNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceProcessorMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceProcessorMultiNodeConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceProcessorProxySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceReassignmentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceClientNodeTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridServiceProcessorStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ServicePredicateAccessCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServicePackagePrivateSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceSerializationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceProxyNodeStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceProxyClientReconnectSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteServiceReassignmentTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteServiceProxyTimeoutInitializedTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteServiceDynamicCachesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceContinuousQueryRedeployTest.class)); + suite.addTest(new JUnit4TestAdapter(ServiceThreadPoolSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceProcessorBatchDeploySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServiceDeploymentCompoundFutureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SystemCacheNotConfiguredTest.class)); // IGNITE-3392 - //suite.addTestSuite(GridServiceDeploymentExceptionPropagationTest.class); + //suite.addTest(new JUnit4TestAdapter(GridServiceDeploymentExceptionPropagationTest.class)); - suite.addTestSuite(IgniteServiceDeploymentClassLoadingDefaultMarshallerTest.class); - suite.addTestSuite(IgniteServiceDeploymentClassLoadingJdkMarshallerTest.class); - suite.addTestSuite(IgniteServiceDeploymentClassLoadingOptimizedMarshallerTest.class); - suite.addTestSuite(IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest.class); - suite.addTestSuite(IgniteServiceDeployment2ClassLoadersJdkMarshallerTest.class); - suite.addTestSuite(IgniteServiceDeployment2ClassLoadersOptimizedMarshallerTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteServiceDeploymentClassLoadingDefaultMarshallerTest.class)); + suite.addTest(new 
JUnit4TestAdapter(IgniteServiceDeploymentClassLoadingJdkMarshallerTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteServiceDeploymentClassLoadingOptimizedMarshallerTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteServiceDeployment2ClassLoadersDefaultMarshallerTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteServiceDeployment2ClassLoadersJdkMarshallerTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteServiceDeployment2ClassLoadersOptimizedMarshallerTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLangSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLangSelfTestSuite.java index f40b03e534715..9bfdff017f7ce 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLangSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLangSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.util.future.GridCompoundFutureSelfTest; import org.apache.ignite.internal.util.future.GridEmbeddedFutureSelfTest; @@ -45,51 +46,53 @@ import org.apache.ignite.lang.utils.IgniteOffheapReadWriteLockSelfTest; import org.apache.ignite.util.GridConcurrentLinkedDequeSelfTest; import org.apache.ignite.util.GridConcurrentLinkedHashMapMultiThreadedSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Ignite language test suite. */ -public class IgniteLangSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteLangSelfTestSuite { /** * @return Kernal test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Lang Test Suite"); - suite.addTest(new TestSuite(GridTupleSelfTest.class)); - suite.addTest(new TestSuite(GridBoundedPriorityQueueSelfTest.class)); - suite.addTest(new TestSuite(GridByteArrayListSelfTest.class)); - suite.addTest(new TestSuite(GridLeanMapSelfTest.class)); - suite.addTest(new TestSuite(GridLeanIdentitySetSelfTest.class)); - suite.addTest(new TestSuite(GridListSetSelfTest.class)); - suite.addTest(new TestSuite(GridSetWrapperSelfTest.class)); - suite.addTest(new TestSuite(GridConcurrentWeakHashSetSelfTest.class)); - suite.addTest(new TestSuite(GridMetadataAwareAdapterSelfTest.class)); - suite.addTest(new TestSuite(GridSetWrapperSelfTest.class)); - suite.addTest(new TestSuite(IgniteUuidSelfTest.class)); - suite.addTest(new TestSuite(GridXSelfTest.class)); - suite.addTest(new TestSuite(GridBoundedConcurrentOrderedMapSelfTest.class)); - suite.addTest(new TestSuite(GridBoundedConcurrentLinkedHashMapSelfTest.class)); - suite.addTest(new TestSuite(GridConcurrentLinkedDequeSelfTest.class)); - suite.addTest(new TestSuite(GridCircularBufferSelfTest.class)); - suite.addTest(new TestSuite(GridConcurrentLinkedHashMapSelfTest.class)); - suite.addTest(new TestSuite(GridConcurrentLinkedHashMapMultiThreadedSelfTest.class)); - suite.addTest(new TestSuite(GridStripedLockSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTupleSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridBoundedPriorityQueueSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridByteArrayListSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLeanMapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLeanIdentitySetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridListSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSetWrapperSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridConcurrentWeakHashSetSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMetadataAwareAdapterSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSetWrapperSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteUuidSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridXSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridBoundedConcurrentOrderedMapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridBoundedConcurrentLinkedHashMapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridConcurrentLinkedDequeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCircularBufferSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridConcurrentLinkedHashMapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridConcurrentLinkedHashMapMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridStripedLockSelfTest.class)); - suite.addTest(new TestSuite(GridFutureAdapterSelfTest.class)); - suite.addTest(new TestSuite(GridCompoundFutureSelfTest.class)); - suite.addTest(new TestSuite(GridEmbeddedFutureSelfTest.class)); - suite.addTest(new TestSuite(GridNioFutureSelfTest.class)); - suite.addTest(new TestSuite(GridNioEmbeddedFutureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFutureAdapterSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCompoundFutureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridEmbeddedFutureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNioFutureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNioEmbeddedFutureSelfTest.class)); - suite.addTest(new TestSuite(IgniteFutureImplTest.class)); - suite.addTest(new TestSuite(IgniteCacheFutureImplTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteFutureImplTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheFutureImplTest.class)); - suite.addTest(new TestSuite(IgniteOffheapReadWriteLockSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(IgniteOffheapReadWriteLockSelfTest.class)); // Consistent hash tests. - suite.addTest(new TestSuite(GridConsistentHashSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridConsistentHashSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLoggingSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLoggingSelfTestSuite.java index 43a56fd664078..19038ea501161 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLoggingSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLoggingSelfTestSuite.java @@ -17,20 +17,24 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.logger.java.JavaLoggerTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Logging self-test suite. */ -public class IgniteLoggingSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteLoggingSelfTestSuite { /** * @return P2P tests suite. 
*/ public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Logging Test Suite"); - suite.addTest(new TestSuite(JavaLoggerTest.class)); + suite.addTest(new JUnit4TestAdapter(JavaLoggerTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLostAndFoundTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLostAndFoundTestSuite.java index 37bb2d6f32257..7abd939619032 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLostAndFoundTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteLostAndFoundTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.GridFactoryVmShutdownTest; import org.apache.ignite.internal.managers.GridManagerMxBeanIllegalArgumentHandleTest; @@ -40,31 +41,34 @@ import org.apache.ignite.lang.GridSystemCurrentTimeMillisTest; import org.apache.ignite.lang.GridThreadPriorityTest; import org.apache.ignite.startup.servlet.GridServletLoaderTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Tests suite for orphaned tests. */ -public class IgniteLostAndFoundTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteLostAndFoundTestSuite { /** * @return Tests suite for orphaned tests (not in any test suite previously). 
*/ public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Lost And Found Test Suite"); - suite.addTestSuite(FileIOTest.class); - suite.addTestSuite(FileLocksTest.class); - suite.addTestSuite(GridComputeJobExecutionErrorToLogManualTest.class); - suite.addTestSuite(GridManagerMxBeanIllegalArgumentHandleTest.class); - suite.addTestSuite(GridRoundTripTest.class); - suite.addTestSuite(GridServletLoaderTest.class); + suite.addTest(new JUnit4TestAdapter(FileIOTest.class)); + suite.addTest(new JUnit4TestAdapter(FileLocksTest.class)); + suite.addTest(new JUnit4TestAdapter(GridComputeJobExecutionErrorToLogManualTest.class)); + suite.addTest(new JUnit4TestAdapter(GridManagerMxBeanIllegalArgumentHandleTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRoundTripTest.class)); + suite.addTest(new JUnit4TestAdapter(GridServletLoaderTest.class)); - suite.addTestSuite(LinkedHashMapTest.class); - suite.addTestSuite(NetworkFailureTest.class); - suite.addTestSuite(PagesWriteThrottleSandboxTest.class); - suite.addTestSuite(QueueSizeCounterMultiThreadedTest.class); - suite.addTestSuite(ReadWriteLockMultiThreadedTest.class); - suite.addTestSuite(RegExpTest.class); - suite.addTestSuite(ServerSocketMultiThreadedTest.class); + suite.addTest(new JUnit4TestAdapter(LinkedHashMapTest.class)); + suite.addTest(new JUnit4TestAdapter(NetworkFailureTest.class)); + suite.addTest(new JUnit4TestAdapter(PagesWriteThrottleSandboxTest.class)); + suite.addTest(new JUnit4TestAdapter(QueueSizeCounterMultiThreadedTest.class)); + suite.addTest(new JUnit4TestAdapter(ReadWriteLockMultiThreadedTest.class)); + suite.addTest(new JUnit4TestAdapter(RegExpTest.class)); + suite.addTest(new JUnit4TestAdapter(ServerSocketMultiThreadedTest.class)); // Non-JUnit classes with Test in name, which should be either converted to JUnit or removed in the future diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMarshallerSelfTestSuite.java 
b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMarshallerSelfTestSuite.java index f5e3809760c9a..c8d1756519d09 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMarshallerSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMarshallerSelfTestSuite.java @@ -34,25 +34,26 @@ import org.apache.ignite.testframework.GridTestUtils; import java.util.Set; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for all marshallers. */ -public class IgniteMarshallerSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteMarshallerSelfTestSuite { /** * @return Kernal test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Ignored tests. * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite(Set ignoredTests) throws Exception { + public static TestSuite suite(Set ignoredTests) { TestSuite suite = new TestSuite("Ignite Marshaller Test Suite"); GridTestUtils.addTestIfNeeded(suite, GridUnsafeDataOutputArraySizingSelfTest.class, ignoredTests); diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMessagingConfigVariationFullApiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMessagingConfigVariationFullApiTestSuite.java index 0490a921a9952..788ca7a4ee168 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMessagingConfigVariationFullApiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteMessagingConfigVariationFullApiTestSuite.java @@ -26,11 +26,14 @@ import org.apache.ignite.testframework.configvariations.ConfigVariations; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; import 
org.apache.ignite.testframework.configvariations.Parameters; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for Messaging process. */ -public class IgniteMessagingConfigVariationFullApiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteMessagingConfigVariationFullApiTestSuite { /** */ @SuppressWarnings("unchecked") private static final ConfigParameter[][] GRID_PARAMETER_VARIATION = new ConfigParameter[][] { @@ -44,9 +47,8 @@ public class IgniteMessagingConfigVariationFullApiTestSuite extends TestSuite { /** * @return Messaging test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Compute New Full API Test Suite"); suite.addTest(new ConfigVariationsTestSuiteBuilder( diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteP2PSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteP2PSelfTestSuite.java index 76b0aa061eb0a..ec4ce1bf5f46f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteP2PSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteP2PSelfTestSuite.java @@ -18,6 +18,7 @@ package org.apache.ignite.testsuites; import java.util.Set; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.managers.deployment.GridDeploymentMessageCountSelfTest; import org.apache.ignite.p2p.DeploymentClassLoaderCallableTest; @@ -40,46 +41,47 @@ import org.apache.ignite.p2p.P2PStreamingClassLoaderTest; import org.apache.ignite.p2p.SharedDeploymentTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * P2P test suite. */ -public class IgniteP2PSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteP2PSelfTestSuite { /** * @return Suite. 
- * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @return P2P tests suite. - * @throws Exception If failed. */ @SuppressWarnings({"ProhibitedExceptionDeclared"}) - public static TestSuite suite(Set ignoredTests) throws Exception { + public static TestSuite suite(Set ignoredTests) { TestSuite suite = new TestSuite("Ignite P2P Test Suite"); - suite.addTest(new TestSuite(GridP2PDoubleDeploymentSelfTest.class)); - suite.addTest(new TestSuite(GridP2PHotRedeploymentSelfTest.class)); - suite.addTest(new TestSuite(GridP2PClassLoadingSelfTest.class)); - suite.addTest(new TestSuite(GridP2PUndeploySelfTest.class)); - suite.addTest(new TestSuite(GridP2PRemoteClassLoadersSelfTest.class)); - suite.addTest(new TestSuite(GridP2PNodeLeftSelfTest.class)); - suite.addTest(new TestSuite(GridP2PDifferentClassLoaderSelfTest.class)); - suite.addTest(new TestSuite(GridP2PSameClassLoaderSelfTest.class)); - suite.addTest(new TestSuite(GridP2PJobClassLoaderSelfTest.class)); - suite.addTest(new TestSuite(GridP2PRecursionTaskSelfTest.class)); - suite.addTest(new TestSuite(GridP2PLocalDeploymentSelfTest.class)); - //suite.addTest(new TestSuite(GridP2PTestTaskExecutionTest.class)); - suite.addTest(new TestSuite(GridP2PTimeoutSelfTest.class)); - suite.addTest(new TestSuite(GridP2PMissedResourceCacheSizeSelfTest.class)); - suite.addTest(new TestSuite(GridP2PContinuousDeploymentSelfTest.class)); - suite.addTest(new TestSuite(DeploymentClassLoaderCallableTest.class)); - suite.addTest(new TestSuite(P2PStreamingClassLoaderTest.class)); - suite.addTest(new TestSuite(SharedDeploymentTest.class)); - suite.addTest(new TestSuite(P2PScanQueryUndeployTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PDoubleDeploymentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PHotRedeploymentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PClassLoadingSelfTest.class)); + 
suite.addTest(new JUnit4TestAdapter(GridP2PUndeploySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PRemoteClassLoadersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PNodeLeftSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PDifferentClassLoaderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PSameClassLoaderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PJobClassLoaderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PRecursionTaskSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PLocalDeploymentSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridP2PTestTaskExecutionTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PTimeoutSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PMissedResourceCacheSizeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridP2PContinuousDeploymentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DeploymentClassLoaderCallableTest.class)); + suite.addTest(new JUnit4TestAdapter(P2PStreamingClassLoaderTest.class)); + suite.addTest(new JUnit4TestAdapter(SharedDeploymentTest.class)); + suite.addTest(new JUnit4TestAdapter(P2PScanQueryUndeployTest.class)); GridTestUtils.addTestIfNeeded(suite, GridDeploymentMessageCountSelfTest.class, ignoredTests); return suite; diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite.java new file mode 100644 index 0000000000000..396f53bad893c --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite.java @@ -0,0 +1,90 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import java.util.Set; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsCacheConfigurationFileConsistencyCheckTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsDestroyCacheTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsDestroyCacheWithoutCheckpointsTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsDataRegionMetricsTest; +import org.apache.ignite.internal.processors.cache.persistence.db.file.DefaultPageSizeBackwardsCompatibilityTest; +import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsCheckpointSimulationWithRealCpDisabledTest; +import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsPageReplacementTest; +import org.apache.ignite.internal.processors.cache.persistence.metastorage.IgniteMetaStorageBasicTest; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.BPlusTreePageMemoryImplTest; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.BPlusTreeReuseListPageMemoryImplTest; +import 
org.apache.ignite.internal.processors.cache.persistence.pagemem.FillFactorMetricTest; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.IndexStoragePageMemoryImplTest; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImplNoLoadTest; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImplTest; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryNoStoreLeakTest; +import org.apache.ignite.internal.processors.cache.persistence.pagemem.PagesWriteThrottleSmokeTest; +import org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBufferTest; +import org.apache.ignite.internal.processors.cache.persistence.wal.aware.SegmentAwareTest; + +/** + * + */ +public class IgnitePdsMvccTestSuite extends TestSuite { + /** + * @return Suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + TestSuite suite = new TestSuite("Ignite Persistent Store Mvcc Test Suite"); + + Set ignoredTests = new HashSet<>(); + + // Skip classes that already contain Mvcc tests. + ignoredTests.add(IgnitePdsCheckpointSimulationWithRealCpDisabledTest.class); + + // Atomic tests. + ignoredTests.add(IgnitePdsDataRegionMetricsTest.class); + + // Non-relevant tests.
+ ignoredTests.add(IgnitePdsCacheConfigurationFileConsistencyCheckTest.class); + ignoredTests.add(DefaultPageSizeBackwardsCompatibilityTest.class); + ignoredTests.add(IgniteMetaStorageBasicTest.class); + + ignoredTests.add(IgnitePdsPageReplacementTest.class); + + ignoredTests.add(PageMemoryImplNoLoadTest.class); + ignoredTests.add(PageMemoryNoStoreLeakTest.class); + ignoredTests.add(IndexStoragePageMemoryImplTest.class); + ignoredTests.add(PageMemoryImplTest.class); + ignoredTests.add(BPlusTreePageMemoryImplTest.class); + ignoredTests.add(BPlusTreeReuseListPageMemoryImplTest.class); + ignoredTests.add(SegmentedRingByteBufferTest.class); + ignoredTests.add(PagesWriteThrottleSmokeTest.class); + ignoredTests.add(FillFactorMetricTest.class); + ignoredTests.add(IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest.class); + ignoredTests.add(SegmentAwareTest.class); + + ignoredTests.add(IgnitePdsDestroyCacheTest.class); + ignoredTests.add(IgnitePdsDestroyCacheWithoutCheckpointsTest.class); + + suite.addTest(IgnitePdsTestSuite.suite(ignoredTests)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite2.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite2.java new file mode 100644 index 0000000000000..35928ce338e84 --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite2.java @@ -0,0 +1,97 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import java.util.Collection; +import java.util.HashSet; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.cache.persistence.IgniteDataStorageMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsExchangeDuringCheckpointTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsPageSizesTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePersistentStoreDataStructuresTest; +import org.apache.ignite.internal.processors.cache.persistence.LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest; +import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteAbsentEvictionNodeOutOfBaselineTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsReserveWalSegmentsTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsReserveWalSegmentsWithCompactionTest; +import org.apache.ignite.internal.processors.cache.persistence.db.filename.IgniteUidAsConsistentIdMigrationTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.FsyncWalRolloverDoesNotBlockTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWALTailIsReachedDuringIterationOverArchiveTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalFormatFileFailoverTest; +import 
org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalIteratorExceptionDuringReadTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalIteratorSwitchSegmentTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalSerializerVersionTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalCompactionTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalRolloverTypesTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteDataIntegrityTests; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteFsyncReplayWalIteratorInvalidCrcTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgnitePureJavaCrcCompatibility; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteReplayWalIteratorInvalidCrcTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteStandaloneWalIteratorInvalidCrcTest; +import org.apache.ignite.internal.processors.cache.persistence.wal.reader.StandaloneWalRecordsIteratorTest; + +/** + * + */ +public class IgnitePdsMvccTestSuite2 extends TestSuite { + /** + * @return Suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + TestSuite suite = new TestSuite("Ignite Persistent Store Mvcc Test Suite 2"); + + Collection ignoredTests = new HashSet<>(); + + // Classes that already contain Mvcc tests.
+ ignoredTests.add(LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest.class); + + // Atomic caches + ignoredTests.add(IgnitePersistentStoreDataStructuresTest.class); + + // Skip irrelevant test + ignoredTests.add(IgniteDataIntegrityTests.class); + ignoredTests.add(IgniteStandaloneWalIteratorInvalidCrcTest.class); + ignoredTests.add(IgniteReplayWalIteratorInvalidCrcTest.class); + ignoredTests.add(IgniteFsyncReplayWalIteratorInvalidCrcTest.class); + ignoredTests.add(IgnitePureJavaCrcCompatibility.class); + ignoredTests.add(IgniteAbsentEvictionNodeOutOfBaselineTest.class); + + ignoredTests.add(IgnitePdsPageSizesTest.class); + ignoredTests.add(IgniteDataStorageMetricsSelfTest.class); + ignoredTests.add(IgniteWalFormatFileFailoverTest.class); + ignoredTests.add(IgnitePdsExchangeDuringCheckpointTest.class); + ignoredTests.add(IgnitePdsReserveWalSegmentsTest.class); + ignoredTests.add(IgnitePdsReserveWalSegmentsWithCompactionTest.class); + + ignoredTests.add(IgniteUidAsConsistentIdMigrationTest.class); + ignoredTests.add(IgniteWalSerializerVersionTest.class); + ignoredTests.add(WalCompactionTest.class); + ignoredTests.add(IgniteWalIteratorSwitchSegmentTest.class); + ignoredTests.add(IgniteWalIteratorExceptionDuringReadTest.class); + ignoredTests.add(StandaloneWalRecordsIteratorTest.class); + ignoredTests.add(IgniteWALTailIsReachedDuringIterationOverArchiveTest.class); + ignoredTests.add(WalRolloverTypesTest.class); + ignoredTests.add(FsyncWalRolloverDoesNotBlockTest.class); + + suite.addTest(IgnitePdsTestSuite2.suite(ignoredTests)); + + return suite; + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite3.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite3.java new file mode 100644 index 0000000000000..03ba32312111f --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite3.java @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software Foundation 
(ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; + +/** + * Mvcc version of {@link IgnitePdsTestSuite3}. + */ +public class IgnitePdsMvccTestSuite3 extends TestSuite { + /** + * @return Suite. + */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + TestSuite suite = new TestSuite("Ignite Persistent Store Mvcc Test Suite 3"); + + HashSet ignoredTests = new HashSet<>(); + + // No ignored tests yet. + + suite.addTest(IgnitePdsTestSuite3.suite(ignoredTests)); + + return suite; + } +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite4.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite4.java new file mode 100644 index 0000000000000..bb93385a34eff --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsMvccTestSuite4.java @@ -0,0 +1,53 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.testsuites; + +import java.util.HashSet; +import java.util.Set; +import junit.framework.TestSuite; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsTaskCancelingTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsPartitionPreloadTest; +import org.apache.ignite.internal.processors.cache.persistence.file.FileDownloaderTest; + +/** + * Mvcc variant of {@link IgnitePdsTestSuite4}. + */ +public class IgnitePdsMvccTestSuite4 extends TestSuite { + /** + * @return Suite. 
+ */ + public static TestSuite suite() { + System.setProperty(IgniteSystemProperties.IGNITE_FORCE_MVCC_MODE_IN_TESTS, "true"); + + TestSuite suite = new TestSuite("Ignite Persistent Store Mvcc Test Suite 4"); + + Set ignoredTests = new HashSet<>(); + + // Skip classes that already contain Mvcc tests + ignoredTests.add(IgnitePdsPartitionPreloadTest.class); + + // Skip irrelevant tests + ignoredTests.add(FileDownloaderTest.class); + ignoredTests.add(IgnitePdsTaskCancelingTest.class); + + suite.addTest(IgnitePdsTestSuite4.suite(ignoredTests)); + + return suite; + } + +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite.java index 2b57223b517d6..1284306941c84 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite.java @@ -17,22 +17,21 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; -import org.apache.ignite.internal.pagemem.impl.PageMemoryNoLoadSelfTest; import org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTestWithPersistence; +import org.apache.ignite.internal.processors.cache.IgnitePdsDataRegionMetricsTxTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsCacheConfigurationFileConsistencyCheckTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsDestroyCacheTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsDestroyCacheWithoutCheckpointsTest; -import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsCacheConfigurationFileConsistencyCheckTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsDynamicCacheTest; -import
org.apache.ignite.internal.processors.cache.persistence.IgnitePdsRemoveDuringRebalancingTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsSingleNodePutGetPersistenceTest; -import org.apache.ignite.internal.processors.cache.persistence.IgnitePersistenceSequentialCheckpointTest; import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsCacheRestoreTest; import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsDataRegionMetricsTest; import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsWithTtlTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsWithTtlTest2; import org.apache.ignite.internal.processors.cache.persistence.db.file.DefaultPageSizeBackwardsCompatibilityTest; -import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsCheckpointSimpleTest; import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsCheckpointSimulationWithRealCpDisabledTest; import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsPageReplacementTest; import org.apache.ignite.internal.processors.cache.persistence.metastorage.IgniteMetaStorageBasicTest; @@ -40,68 +39,76 @@ import org.apache.ignite.internal.processors.cache.persistence.pagemem.BPlusTreeReuseListPageMemoryImplTest; import org.apache.ignite.internal.processors.cache.persistence.pagemem.FillFactorMetricTest; import org.apache.ignite.internal.processors.cache.persistence.pagemem.IndexStoragePageMemoryImplTest; -import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageIdDistributionTest; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImplNoLoadTest; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImplTest; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryNoStoreLeakTest; import 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PagesWriteThrottleSmokeTest; -import org.apache.ignite.internal.processors.cache.persistence.tree.io.TrackingPageIOTest; import org.apache.ignite.internal.processors.cache.persistence.wal.CpTriggeredWalDeltaConsistencyTest; import org.apache.ignite.internal.processors.cache.persistence.wal.ExplicitWalDeltaConsistencyTest; import org.apache.ignite.internal.processors.cache.persistence.wal.SegmentedRingByteBufferTest; import org.apache.ignite.internal.processors.cache.persistence.wal.SysPropWalDeltaConsistencyTest; -import org.apache.ignite.internal.processors.cache.persistence.wal.aware.SegmentAware; import org.apache.ignite.internal.processors.cache.persistence.wal.aware.SegmentAwareTest; import org.apache.ignite.internal.processors.database.IgniteDbDynamicCacheSelfTest; import org.apache.ignite.internal.processors.database.IgniteDbMultiNodePutGetTest; import org.apache.ignite.internal.processors.database.IgniteDbPutGetWithCacheStoreTest; import org.apache.ignite.internal.processors.database.IgniteDbSingleNodePutGetTest; import org.apache.ignite.internal.processors.database.IgniteDbSingleNodeTinyPutGetTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgnitePdsTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgnitePdsTestSuite { + /** + * @return IgniteCache test suite. + */ + public static TestSuite suite() { + return suite(null); + } + /** - * @return Suite. - * @throws Exception If failed. + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("Ignite Persistent Store Test Suite"); - addRealPageStoreTests(suite); - addRealPageStoreTestsLongRunning(suite); + addRealPageStoreTests(suite, ignoredTests); + addRealPageStoreTestsLongRunning(suite, ignoredTests); // Basic PageMemory tests. - //suite.addTestSuite(PageMemoryNoLoadSelfTest.class); - suite.addTestSuite(PageMemoryImplNoLoadTest.class); - suite.addTestSuite(PageMemoryNoStoreLeakTest.class); - suite.addTestSuite(IndexStoragePageMemoryImplTest.class); - suite.addTestSuite(PageMemoryImplTest.class); - //suite.addTestSuite(PageIdDistributionTest.class); - //suite.addTestSuite(TrackingPageIOTest.class); + //GridTestUtils.addTestIfNeeded(suite, PageMemoryNoLoadSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, PageMemoryImplNoLoadTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, PageMemoryNoStoreLeakTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IndexStoragePageMemoryImplTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, PageMemoryImplTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, PageIdDistributionTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, TrackingPageIOTest.class, ignoredTests); // BTree tests with store page memory. 
- suite.addTestSuite(BPlusTreePageMemoryImplTest.class); - suite.addTestSuite(BPlusTreeReuseListPageMemoryImplTest.class); + GridTestUtils.addTestIfNeeded(suite, BPlusTreePageMemoryImplTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, BPlusTreeReuseListPageMemoryImplTest.class, ignoredTests); - suite.addTestSuite(SegmentedRingByteBufferTest.class); + GridTestUtils.addTestIfNeeded(suite, SegmentedRingByteBufferTest.class, ignoredTests); // Write throttling - suite.addTestSuite(PagesWriteThrottleSmokeTest.class); + GridTestUtils.addTestIfNeeded(suite, PagesWriteThrottleSmokeTest.class, ignoredTests); // Metrics - suite.addTestSuite(FillFactorMetricTest.class); + GridTestUtils.addTestIfNeeded(suite, FillFactorMetricTest.class, ignoredTests); // WAL delta consistency - suite.addTestSuite(CpTriggeredWalDeltaConsistencyTest.class); - suite.addTestSuite(ExplicitWalDeltaConsistencyTest.class); - suite.addTestSuite(SysPropWalDeltaConsistencyTest.class); + GridTestUtils.addTestIfNeeded(suite, CpTriggeredWalDeltaConsistencyTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, ExplicitWalDeltaConsistencyTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, SysPropWalDeltaConsistencyTest.class, ignoredTests); // Binary meta tests. - suite.addTestSuite(IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCacheObjectBinaryProcessorOnDiscoveryTest.class, ignoredTests); - suite.addTestSuite(SegmentAwareTest.class); + GridTestUtils.addTestIfNeeded(suite, SegmentAwareTest.class, ignoredTests); return suite; } @@ -111,10 +118,11 @@ public static TestSuite suite() throws Exception { * execute. * * @param suite suite to add tests into. + * @param ignoredTests Ignored tests. */ - private static void addRealPageStoreTestsLongRunning(TestSuite suite) { + private static void addRealPageStoreTestsLongRunning(TestSuite suite, Collection ignoredTests) { // Basic PageMemory tests. 
- suite.addTestSuite(IgnitePdsPageReplacementTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsPageReplacementTest.class, ignoredTests); } /** @@ -123,41 +131,44 @@ private static void addRealPageStoreTestsLongRunning(TestSuite suite) { * NOTE: These tests are also executed using I/O plugins. * * @param suite suite to add tests into. + * @param ignoredTests Ignored tests. */ - public static void addRealPageStoreTests(TestSuite suite) { + public static void addRealPageStoreTests(TestSuite suite, Collection ignoredTests) { // Checkpointing smoke-test. - suite.addTestSuite(IgnitePdsCheckpointSimulationWithRealCpDisabledTest.class); - //suite.addTestSuite(IgnitePdsCheckpointSimpleTest.class); - //suite.addTestSuite(IgnitePersistenceSequentialCheckpointTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCheckpointSimulationWithRealCpDisabledTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, IgnitePdsCheckpointSimpleTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, IgnitePersistenceSequentialCheckpointTest.class, ignoredTests); // Basic API tests. - suite.addTestSuite(IgniteDbSingleNodePutGetTest.class); - suite.addTestSuite(IgniteDbMultiNodePutGetTest.class); - suite.addTestSuite(IgniteDbSingleNodeTinyPutGetTest.class); - suite.addTestSuite(IgniteDbDynamicCacheSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteDbSingleNodePutGetTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteDbMultiNodePutGetTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteDbSingleNodeTinyPutGetTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteDbDynamicCacheSelfTest.class, ignoredTests); // Persistence-enabled. 
- suite.addTestSuite(IgnitePdsSingleNodePutGetPersistenceTest.class); - suite.addTestSuite(IgnitePdsDynamicCacheTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsSingleNodePutGetPersistenceTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsDynamicCacheTest.class, ignoredTests); // TODO uncomment when https://issues.apache.org/jira/browse/IGNITE-7510 is fixed - // suite.addTestSuite(IgnitePdsClientNearCachePutGetTest.class); - suite.addTestSuite(IgniteDbPutGetWithCacheStoreTest.class); - suite.addTestSuite(IgnitePdsWithTtlTest.class); + // GridTestUtils.addTestIfNeeded(suite, IgnitePdsClientNearCachePutGetTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteDbPutGetWithCacheStoreTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsWithTtlTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsWithTtlTest2.class, ignoredTests); - suite.addTestSuite(IgniteClusterActivateDeactivateTestWithPersistence.class); + GridTestUtils.addTestIfNeeded(suite, IgniteClusterActivateDeactivateTestWithPersistence.class, ignoredTests); - suite.addTestSuite(IgnitePdsCacheRestoreTest.class); - suite.addTestSuite(IgnitePdsDataRegionMetricsTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCacheRestoreTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsDataRegionMetricsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsDataRegionMetricsTxTest.class, ignoredTests); - suite.addTestSuite(IgnitePdsDestroyCacheTest.class); - //suite.addTestSuite(IgnitePdsRemoveDuringRebalancingTest.class); - suite.addTestSuite(IgnitePdsDestroyCacheWithoutCheckpointsTest.class); - suite.addTestSuite(IgnitePdsCacheConfigurationFileConsistencyCheckTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsDestroyCacheTest.class, ignoredTests); + //GridTestUtils.addTestIfNeeded(suite, IgnitePdsRemoveDuringRebalancingTest.class, ignoredTests); + 
GridTestUtils.addTestIfNeeded(suite, IgnitePdsDestroyCacheWithoutCheckpointsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCacheConfigurationFileConsistencyCheckTest.class, ignoredTests); - suite.addTestSuite(DefaultPageSizeBackwardsCompatibilityTest.class); + GridTestUtils.addTestIfNeeded(suite, DefaultPageSizeBackwardsCompatibilityTest.class, ignoredTests); //MetaStorage - suite.addTestSuite(IgniteMetaStorageBasicTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteMetaStorageBasicTest.class, ignoredTests); } } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite2.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite2.java index a9f2601656aea..9a5c3d741a0c7 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite2.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite2.java @@ -17,16 +17,19 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.persistence.IgniteDataStorageMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsCacheStartStopWithFreqCheckpointTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsCorruptedStoreTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsExchangeDuringCheckpointTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsPageSizesTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsPartitionFilesDestroyTest; +import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsPartitionsStateRecoveryTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePersistentStoreDataStructuresTest; import org.apache.ignite.internal.processors.cache.persistence.IgniteRebalanceScheduleResendPartitionsTest; -import 
org.apache.ignite.internal.processors.cache.persistence.LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest; import org.apache.ignite.internal.processors.cache.persistence.LocalWalModeChangeDuringRebalancingSelfTest; +import org.apache.ignite.internal.processors.cache.persistence.LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest; import org.apache.ignite.internal.processors.cache.persistence.baseline.ClientAffinityAssignmentWithBaselineTest; import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteAbsentEvictionNodeOutOfBaselineTest; import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteAllBaselineNodesOnlineFullApiSelfTest; @@ -34,10 +37,12 @@ import org.apache.ignite.internal.processors.cache.persistence.baseline.IgniteOnlineNodeOutOfBaselineFullApiSelfTest; import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsRebalancingOnNotStableTopologyTest; import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsReserveWalSegmentsTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsReserveWalSegmentsWithCompactionTest; import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsWholeClusterRestartTest; import org.apache.ignite.internal.processors.cache.persistence.db.SlowHistoricalRebalanceSmallHistoryTest; import org.apache.ignite.internal.processors.cache.persistence.db.checkpoint.IgniteCheckpointDirtyPagesForLowLoadTest; import org.apache.ignite.internal.processors.cache.persistence.db.filename.IgniteUidAsConsistentIdMigrationTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.FsyncWalRolloverDoesNotBlockTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteNodeStoppedDuringDisableWALTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWALTailIsReachedDuringIterationOverArchiveTest; import 
org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalFlushBackgroundSelfTest; @@ -53,42 +58,58 @@ import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalIteratorExceptionDuringReadTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalIteratorSwitchSegmentTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalSerializerVersionTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalCompactionSwitchOnTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalCompactionTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalDeletionArchiveFsyncTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalDeletionArchiveLogOnlyTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalRolloverTypesTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteDataIntegrityTests; import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteFsyncReplayWalIteratorInvalidCrcTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgnitePureJavaCrcCompatibility; import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteReplayWalIteratorInvalidCrcTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.crc.IgniteStandaloneWalIteratorInvalidCrcTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.reader.IgniteWalReaderTest; import org.apache.ignite.internal.processors.cache.persistence.wal.reader.StandaloneWalRecordsIteratorTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ +@RunWith(AllTests.class) public class IgnitePdsTestSuite2 extends TestSuite { /** * @return Suite. 
*/ public static TestSuite suite() { + return suite(null); + } + + /** + * @param ignoredTests Ignored tests. + * @return Suite. + */ + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("Ignite persistent Store Test Suite 2"); // Integrity test. - suite.addTestSuite(IgniteDataIntegrityTests.class); - suite.addTestSuite(IgniteStandaloneWalIteratorInvalidCrcTest.class); - suite.addTestSuite(IgniteReplayWalIteratorInvalidCrcTest.class); - suite.addTestSuite(IgniteFsyncReplayWalIteratorInvalidCrcTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteDataIntegrityTests.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteStandaloneWalIteratorInvalidCrcTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteReplayWalIteratorInvalidCrcTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteFsyncReplayWalIteratorInvalidCrcTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePureJavaCrcCompatibility.class, ignoredTests); - addRealPageStoreTests(suite); + addRealPageStoreTests(suite, ignoredTests); - addRealPageStoreTestsNotForDirectIo(suite); + addRealPageStoreTestsNotForDirectIo(suite, ignoredTests); // BaselineTopology tests - suite.addTestSuite(IgniteAllBaselineNodesOnlineFullApiSelfTest.class); - suite.addTestSuite(IgniteOfflineBaselineNodeFullApiSelfTest.class); - suite.addTestSuite(IgniteOnlineNodeOutOfBaselineFullApiSelfTest.class); - suite.addTestSuite(ClientAffinityAssignmentWithBaselineTest.class); - suite.addTestSuite(IgniteAbsentEvictionNodeOutOfBaselineTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteAllBaselineNodesOnlineFullApiSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteOfflineBaselineNodeFullApiSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteOnlineNodeOutOfBaselineFullApiSelfTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, 
ClientAffinityAssignmentWithBaselineTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteAbsentEvictionNodeOutOfBaselineTest.class, ignoredTests); return suite; } @@ -98,19 +119,22 @@ public static TestSuite suite() { * execute. * * @param suite suite to add tests into. + * @param ignoredTests Ignored tests. */ - private static void addRealPageStoreTestsNotForDirectIo(TestSuite suite) { - suite.addTestSuite(IgnitePdsPartitionFilesDestroyTest.class); + private static void addRealPageStoreTestsNotForDirectIo(TestSuite suite, Collection ignoredTests) { + GridTestUtils.addTestIfNeeded(suite, IgnitePdsPartitionFilesDestroyTest.class, ignoredTests); - suite.addTestSuite(LocalWalModeChangeDuringRebalancingSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, LocalWalModeChangeDuringRebalancingSelfTest.class, ignoredTests); - suite.addTestSuite(LocalWacModeNoChangeDuringRebalanceOnNonNodeAssignTest.class); + GridTestUtils.addTestIfNeeded(suite, LocalWalModeNoChangeDuringRebalanceOnNonNodeAssignTest.class, ignoredTests); - suite.addTestSuite(IgniteWalFlushFsyncSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushFsyncSelfTest.class, ignoredTests); - suite.addTestSuite(IgniteWalFlushFsyncWithDedicatedWorkerSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushFsyncWithDedicatedWorkerSelfTest.class, ignoredTests); - suite.addTestSuite(IgniteWalFlushFsyncWithMmapBufferSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushFsyncWithMmapBufferSelfTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCacheStartStopWithFreqCheckpointTest.class, ignoredTests); } /** @@ -119,70 +143,79 @@ private static void addRealPageStoreTestsNotForDirectIo(TestSuite suite) { * NOTE: These tests are also executed using I/O plugins. * * @param suite suite to add tests into. + * @param ignoredTests Ignored tests. 
*/ - public static void addRealPageStoreTests(TestSuite suite) { - suite.addTestSuite(IgnitePdsPageSizesTest.class); + public static void addRealPageStoreTests(TestSuite suite, Collection ignoredTests) { + GridTestUtils.addTestIfNeeded(suite, IgnitePdsPageSizesTest.class, ignoredTests); // Metrics test. - suite.addTestSuite(IgniteDataStorageMetricsSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteDataStorageMetricsSelfTest.class, ignoredTests); - suite.addTestSuite(IgnitePdsRebalancingOnNotStableTopologyTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsRebalancingOnNotStableTopologyTest.class, ignoredTests); - suite.addTestSuite(IgnitePdsWholeClusterRestartTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsWholeClusterRestartTest.class, ignoredTests); // Rebalancing test - suite.addTestSuite(IgniteWalHistoryReservationsTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalHistoryReservationsTest.class, ignoredTests); - suite.addTestSuite(SlowHistoricalRebalanceSmallHistoryTest.class); + GridTestUtils.addTestIfNeeded(suite, SlowHistoricalRebalanceSmallHistoryTest.class, ignoredTests); - suite.addTestSuite(IgnitePersistentStoreDataStructuresTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePersistentStoreDataStructuresTest.class, ignoredTests); // Failover test - suite.addTestSuite(IgniteWalFlushFailoverTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushFailoverTest.class, ignoredTests); - suite.addTestSuite(IgniteWalFlushBackgroundSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushBackgroundSelfTest.class, ignoredTests); - suite.addTestSuite(IgniteWalFlushBackgroundWithMmapBufferSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushBackgroundWithMmapBufferSelfTest.class, ignoredTests); - suite.addTestSuite(IgniteWalFlushLogOnlySelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushLogOnlySelfTest.class, ignoredTests); - 
suite.addTestSuite(IgniteWalFlushLogOnlyWithMmapBufferSelfTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFlushLogOnlyWithMmapBufferSelfTest.class, ignoredTests); - suite.addTestSuite(IgniteWalFormatFileFailoverTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalFormatFileFailoverTest.class, ignoredTests); // Test suite uses Standalone WAL iterator to verify PDS content. - suite.addTestSuite(IgniteWalReaderTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalReaderTest.class, ignoredTests); - suite.addTestSuite(IgnitePdsExchangeDuringCheckpointTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsExchangeDuringCheckpointTest.class, ignoredTests); - suite.addTestSuite(IgnitePdsReserveWalSegmentsTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsReserveWalSegmentsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsReserveWalSegmentsWithCompactionTest.class, ignoredTests); // new style folders with generated consistent ID test - suite.addTestSuite(IgniteUidAsConsistentIdMigrationTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteUidAsConsistentIdMigrationTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, IgniteWalSerializerVersionTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, WalCompactionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, WalCompactionSwitchOnTest.class, ignoredTests); + + GridTestUtils.addTestIfNeeded(suite, WalDeletionArchiveFsyncTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, WalDeletionArchiveLogOnlyTest.class, ignoredTests); - suite.addTestSuite(IgniteWalSerializerVersionTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteCheckpointDirtyPagesForLowLoadTest.class, ignoredTests); - suite.addTestSuite(WalCompactionTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCorruptedStoreTest.class, ignoredTests); - suite.addTestSuite(WalDeletionArchiveFsyncTest.class); - 
suite.addTestSuite(WalDeletionArchiveLogOnlyTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalIteratorSwitchSegmentTest.class, ignoredTests); - suite.addTestSuite(IgniteCheckpointDirtyPagesForLowLoadTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWalIteratorExceptionDuringReadTest.class, ignoredTests); - suite.addTestSuite(IgnitePdsCorruptedStoreTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteNodeStoppedDuringDisableWALTest.class, ignoredTests); - suite.addTestSuite(IgniteWalIteratorSwitchSegmentTest.class); + GridTestUtils.addTestIfNeeded(suite, StandaloneWalRecordsIteratorTest.class, ignoredTests); - suite.addTestSuite(IgniteWalIteratorExceptionDuringReadTest.class); + //GridTestUtils.addTestIfNeeded(suite, IgniteWalRecoverySeveralRestartsTest.class, ignoredTests); - suite.addTestSuite(IgniteNodeStoppedDuringDisableWALTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteRebalanceScheduleResendPartitionsTest.class, ignoredTests); - suite.addTestSuite(StandaloneWalRecordsIteratorTest.class); + GridTestUtils.addTestIfNeeded(suite, IgniteWALTailIsReachedDuringIterationOverArchiveTest.class, ignoredTests); - //suite.addTestSuite(IgniteWalRecoverySeveralRestartsTest.class); + GridTestUtils.addTestIfNeeded(suite, WalRolloverTypesTest.class, ignoredTests); - suite.addTestSuite(IgniteRebalanceScheduleResendPartitionsTest.class); + GridTestUtils.addTestIfNeeded(suite, FsyncWalRolloverDoesNotBlockTest.class, ignoredTests); - suite.addTestSuite(IgniteWALTailIsReachedDuringIterationOverArchiveTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsPartitionsStateRecoveryTest.class, ignoredTests); } } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite3.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite3.java index 06ba9c0611851..82c482e3840c8 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite3.java +++ 
b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite3.java @@ -17,21 +17,34 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsContinuousRestartTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsContinuousRestartTestWithExpiryPolicy; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ +@RunWith(AllTests.class) public class IgnitePdsTestSuite3 extends TestSuite { /** - * @return Suite. + * @return IgniteCache test suite. */ public static TestSuite suite() { - TestSuite suite = new TestSuite("Ignite Persistent Store Test Suite 3"); + return suite(null); + } + + /** + * @param ignoredTests Ignored tests. + * @return IgniteCache test suite. + */ + public static TestSuite suite(Collection ignoredTests) { + TestSuite suite = new TestSuite("Ignite Persistent Store Mvcc Test Suite 3"); - addRealPageStoreTestsNotForDirectIo(suite); + addRealPageStoreTestsNotForDirectIo(suite, ignoredTests); return suite; } @@ -40,10 +53,11 @@ public static TestSuite suite() { * Fills {@code suite} with PDS test subset, which operates with real page store, but requires long time to execute. * * @param suite suite to add tests into. 
+ * @param ignoredTests Ignored tests list. */ - private static void addRealPageStoreTestsNotForDirectIo(TestSuite suite) { + private static void addRealPageStoreTestsNotForDirectIo(TestSuite suite, Collection ignoredTests) { // Rebalancing test - suite.addTestSuite(IgnitePdsContinuousRestartTest.class); - suite.addTestSuite(IgnitePdsContinuousRestartTestWithExpiryPolicy.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsContinuousRestartTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsContinuousRestartTestWithExpiryPolicy.class, ignoredTests); } } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite4.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite4.java index 2e6a4391b87c3..d38c34a34ed96 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite4.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsTestSuite4.java @@ -17,29 +17,53 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; +import org.apache.ignite.cache.ResetLostPartitionTest; +import org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse; +import org.apache.ignite.internal.processors.cache.distributed.CachePageWriteLockUnlockTest; +import org.apache.ignite.internal.processors.cache.distributed.rebalancing.IgniteRebalanceOnCachesStoppingOrDestroyingTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsRecoveryAfterFileCorruptionTest; import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsTaskCancelingTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsCacheWalDisabledOnRebalancingTest; import
org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsPageEvictionDuringPartitionClearTest; +import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsPartitionPreloadTest; import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsTransactionsHangTest; import org.apache.ignite.internal.processors.cache.persistence.file.FileDownloaderTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgnitePdsTestSuite4 extends TestSuite { +@RunWith(AllTests.class) +public class IgnitePdsTestSuite4 { /** * @return Suite. */ public static TestSuite suite() { - TestSuite suite = new TestSuite("Ignite Persistent Store Test Suite 4"); + return suite(null); + } - addRealPageStoreTestsNotForDirectIo(suite); + /** + * @param ignoredTests Ignored tests. + * @return Suite. + */ + public static TestSuite suite(Collection ignoredTests) { + TestSuite suite = new TestSuite("Ignite Persistent Store Test Suite 4"); - suite.addTestSuite(FileDownloaderTest.class); + addRealPageStoreTestsNotForDirectIo(suite, ignoredTests); - suite.addTestSuite(IgnitePdsTaskCancelingTest.class); + GridTestUtils.addTestIfNeeded(suite, FileDownloaderTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsTaskCancelingTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsPartitionPreloadTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, ResetLostPartitionTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteRebalanceOnCachesStoppingOrDestroyingTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, CachePageWriteLockUnlockTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsCacheWalDisabledOnRebalancingTest.class, ignoredTests); return suite; } 
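The hunks above all apply one mechanical migration: each direct `suite.addTestSuite(X.class)` call becomes `GridTestUtils.addTestIfNeeded(suite, X.class, ignoredTests)`, so a caller can pass a collection of test classes to skip, and the old no-argument `suite()` simply delegates with `null`. The real helper lives in Ignite's `GridTestUtils`; the sketch below only models that assumed contract with plain strings so it runs standalone, without Ignite or JUnit on the classpath:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class AddTestIfNeededSketch {
    /**
     * Models the assumed contract of GridTestUtils.addTestIfNeeded:
     * add the test unless it appears in ignoredTests; a null
     * collection means "nothing is ignored" (the old behavior).
     */
    static void addTestIfNeeded(List<String> suite, String testCls, Collection<String> ignoredTests) {
        if (ignoredTests == null || !ignoredTests.contains(testCls))
            suite.add(testCls);
    }

    public static void main(String[] args) {
        List<String> suite = new ArrayList<>();

        // No ignore list: everything is added, matching suite(null).
        addTestIfNeeded(suite, "IgnitePdsPageSizesTest", null);

        // With an ignore list: listed classes are silently skipped.
        addTestIfNeeded(suite, "WalCompactionTest", List.of("WalCompactionTest"));

        System.out.println(suite); // prints [IgnitePdsPageSizesTest]
    }
}
```

This keeps the suite classes backward compatible: existing callers of `suite()` see no change, while CI configurations that need to exclude flaky or environment-specific tests can pass an explicit ignore collection.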
@@ -48,16 +72,16 @@ public static TestSuite suite() { * Fills {@code suite} with PDS test subset, which operates with real page store, but requires long time to execute. * * @param suite suite to add tests into. + * @param ignoredTests Ignored tests. */ - private static void addRealPageStoreTestsNotForDirectIo(TestSuite suite) { - suite.addTestSuite(IgnitePdsTransactionsHangTest.class); - - suite.addTestSuite(IgnitePdsPageEvictionDuringPartitionClearTest.class); + private static void addRealPageStoreTestsNotForDirectIo(TestSuite suite, Collection ignoredTests) { + GridTestUtils.addTestIfNeeded(suite, IgnitePdsTransactionsHangTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsPageEvictionDuringPartitionClearTest.class, ignoredTests); // Rebalancing test - suite.addTestSuite(IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsContinuousRestartTestWithSharedGroupAndIndexes.class, ignoredTests); // Integrity test. 
- suite.addTestSuite(IgnitePdsRecoveryAfterFileCorruptionTest.class); + GridTestUtils.addTestIfNeeded(suite, IgnitePdsRecoveryAfterFileCorruptionTest.class, ignoredTests); } } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePerformanceTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePerformanceTestSuite.java index 2e829579df9fc..2b4f18f0c661c 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePerformanceTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePerformanceTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.GridCacheConcurrentTxMultiNodeLoadTest; import org.apache.ignite.internal.processors.cache.GridCacheIteratorPerformanceTest; @@ -57,42 +58,45 @@ import org.apache.ignite.loadtests.nio.GridNioBenchmarkTest; import org.apache.ignite.marshaller.GridMarshallerPerformanceTest; import org.apache.ignite.spi.communication.tcp.GridTcpCommunicationSpiLanLoadTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for performance tests. * Note: Most of these are resource-consuming or non-terminating. */ -public class IgnitePerformanceTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgnitePerformanceTestSuite { /** * @return Test suite for orphaned tests (not in any test suite previously).
*/ public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Load-Test Suite"); - suite.addTestSuite(GridCacheDhtPreloadPerformanceTest.class); - suite.addTestSuite(GridCacheIteratorPerformanceTest.class); - suite.addTestSuite(GridCacheMultiNodeLoadTest.class); - suite.addTestSuite(GridCacheConcurrentTxMultiNodeLoadTest.class); - suite.addTestSuite(GridCachePartitionedAffinityExcludeNeighborsPerformanceTest.class); - suite.addTestSuite(GridCachePartitionedAtomicLongLoadTest.class); - suite.addTestSuite(GridCacheWriteBehindStoreLoadTest.class); - suite.addTestSuite(GridCircularBufferPerformanceTest.class); - suite.addTestSuite(GridFuncPerformanceTest.class); - suite.addTestSuite(GridHashMapLoadTest.class); - suite.addTestSuite(GridLeanMapPerformanceTest.class); - suite.addTestSuite(GridMarshallerPerformanceTest.class); - suite.addTestSuite(GridMetadataAwareAdapterLoadTest.class); - suite.addTestSuite(GridMultiSplitsLoadTest.class); - suite.addTestSuite(GridMultiSplitsRedeployLoadTest.class); - suite.addTestSuite(GridSessionLoadTest.class); - suite.addTestSuite(GridSingleSplitsNewNodesMulticastLoadTest.class); - suite.addTestSuite(GridSingleSplitsRedeployLoadTest.class); - suite.addTestSuite(GridStealingLoadTest.class); - suite.addTestSuite(GridTcpCommunicationSpiLanLoadTest.class); - suite.addTestSuite(GridUnsafeMapPerformanceTest.class); - suite.addTestSuite(GridUnsafePartitionedMapPerformanceTest.class); - suite.addTestSuite(IgniteDataStreamerPerformanceTest.class); - suite.addTestSuite(SortedEvictionPolicyPerformanceTest.class); + suite.addTest(new JUnit4TestAdapter(GridCacheDhtPreloadPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheIteratorPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheMultiNodeLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheConcurrentTxMultiNodeLoadTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridCachePartitionedAffinityExcludeNeighborsPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCachePartitionedAtomicLongLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheWriteBehindStoreLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCircularBufferPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFuncPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridHashMapLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLeanMapPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMarshallerPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMetadataAwareAdapterLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultiSplitsLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMultiSplitsRedeployLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSingleSplitsNewNodesMulticastLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSingleSplitsRedeployLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridStealingLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiLanLoadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridUnsafeMapPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(GridUnsafePartitionedMapPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDataStreamerPerformanceTest.class)); + suite.addTest(new JUnit4TestAdapter(SortedEvictionPolicyPerformanceTest.class)); // Non-JUnit classes with Test in name, which should be either converted to JUnit or removed in the future // Main classes: diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePlatformsTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePlatformsTestSuite.java index f7021d88ce4c0..75a8587ccc0c8 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePlatformsTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePlatformsTestSuite.java @@ -17,24 +17,27 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.platform.PlatformDefaultJavaObjectFactorySelfTest; import org.apache.ignite.platform.PlatformJavaObjectFactoryProxySelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Suite for platform tests. */ -public class IgnitePlatformsTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgnitePlatformsTestSuite { /** * @return Test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Deployment SPI Test Suite"); // LocalDeploymentSpi tests - suite.addTest(new TestSuite(PlatformDefaultJavaObjectFactorySelfTest.class)); - suite.addTest(new TestSuite(PlatformJavaObjectFactoryProxySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(PlatformDefaultJavaObjectFactorySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(PlatformJavaObjectFactoryProxySelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteReproducingSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteReproducingSuite.java index 05464590231fb..1c0cf61dcf494 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteReproducingSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteReproducingSuite.java @@ -18,6 +18,8 @@ package org.apache.ignite.testsuites; import junit.framework.TestSuite; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cycled run tests on PR code.
    @@ -29,17 +31,17 @@ * * This suite is not included into main build */ -public class IgniteReproducingSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteReproducingSuite { /** * @return suite with test(s) for reproduction some problem. - * @throws Exception if failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Issue Reproducing Test Suite"); //uncomment to add some test //for (int i = 0; i < 100; i++) - // suite.addTestSuite(IgniteCheckpointDirtyPagesForLowLoadTest.class); + // suite.addTest(new JUnit4TestAdapter(IgniteCheckpointDirtyPagesForLowLoadTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteRestHandlerTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteRestHandlerTestSuite.java index f3e5828de031e..e5ea8d50154e2 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteRestHandlerTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteRestHandlerTestSuite.java @@ -17,29 +17,32 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheAtomicCommandHandlerSelfTest; import org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandlerSelfTest; import org.apache.ignite.internal.processors.rest.handlers.log.GridLogCommandHandlerTest; import org.apache.ignite.internal.processors.rest.handlers.query.GridQueryCommandHandlerTest; import org.apache.ignite.internal.processors.rest.handlers.top.CacheTopologyCommandHandlerTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * REST support tests. */ -public class IgniteRestHandlerTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteRestHandlerTestSuite { /** * @return Test suite. 
- * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("REST Support Test Suite"); - suite.addTestSuite(GridCacheCommandHandlerSelfTest.class); - suite.addTestSuite(GridCacheAtomicCommandHandlerSelfTest.class); - suite.addTestSuite(GridLogCommandHandlerTest.class); - suite.addTestSuite(GridQueryCommandHandlerTest.class); - suite.addTestSuite(CacheTopologyCommandHandlerTest.class); + suite.addTest(new JUnit4TestAdapter(GridCacheCommandHandlerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheAtomicCommandHandlerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLogCommandHandlerTest.class)); + suite.addTest(new JUnit4TestAdapter(GridQueryCommandHandlerTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheTopologyCommandHandlerTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteServiceConfigVariationsFullApiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteServiceConfigVariationsFullApiTestSuite.java index 84af386537052..7fa59ae3e004d 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteServiceConfigVariationsFullApiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteServiceConfigVariationsFullApiTestSuite.java @@ -26,11 +26,14 @@ import org.apache.ignite.testframework.configvariations.ConfigVariations; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; import org.apache.ignite.testframework.configvariations.Parameters; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Full API service test suite.
*/ -public class IgniteServiceConfigVariationsFullApiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteServiceConfigVariationsFullApiTestSuite { /** */ @SuppressWarnings("unchecked") private static final ConfigParameter[][] PARAMS = new ConfigParameter[][] { @@ -45,9 +48,8 @@ public class IgniteServiceConfigVariationsFullApiTestSuite extends TestSuite { /** * @return Compute API test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Service Deployment New Full API Test Suite"); suite.addTest(new ConfigVariationsTestSuiteBuilder( @@ -89,4 +91,4 @@ public static TestSuite suite() throws Exception { return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCheckpointSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCheckpointSelfTestSuite.java index a3ba8ff4693cc..cb831ef92b454 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCheckpointSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCheckpointSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.checkpoint.cache.CacheCheckpointSpiConfigSelfTest; import org.apache.ignite.spi.checkpoint.cache.CacheCheckpointSpiSecondCacheSelfTest; @@ -31,36 +32,38 @@ import org.apache.ignite.spi.checkpoint.sharedfs.GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest; import org.apache.ignite.spi.checkpoint.sharedfs.GridSharedFsCheckpointSpiSelfTest; import org.apache.ignite.spi.checkpoint.sharedfs.GridSharedFsCheckpointSpiStartStopSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Grid SPI checkpoint self test suite. 
*/ -public class IgniteSpiCheckpointSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSpiCheckpointSelfTestSuite { /** * @return Checkpoint test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Checkpoint Test Suite"); // Cache. - suite.addTest(new TestSuite(CacheCheckpointSpiConfigSelfTest.class)); - suite.addTest(new TestSuite(CacheCheckpointSpiSelfTest.class)); - suite.addTest(new TestSuite(CacheCheckpointSpiStartStopSelfTest.class)); - suite.addTest(new TestSuite(CacheCheckpointSpiSecondCacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheCheckpointSpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheCheckpointSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheCheckpointSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheCheckpointSpiSecondCacheSelfTest.class)); // JDBC. - suite.addTest(new TestSuite(JdbcCheckpointSpiConfigSelfTest.class)); - suite.addTest(new TestSuite(JdbcCheckpointSpiCustomConfigSelfTest.class)); - suite.addTest(new TestSuite(JdbcCheckpointSpiDefaultConfigSelfTest.class)); - suite.addTest(new TestSuite(JdbcCheckpointSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcCheckpointSpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcCheckpointSpiCustomConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcCheckpointSpiDefaultConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(JdbcCheckpointSpiStartStopSelfTest.class)); // Shared FS. 
- suite.addTest(new TestSuite(GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest.class)); - suite.addTest(new TestSuite(GridSharedFsCheckpointSpiSelfTest.class)); - suite.addTest(new TestSuite(GridSharedFsCheckpointSpiStartStopSelfTest.class)); - suite.addTest(new TestSuite(GridSharedFsCheckpointSpiConfigSelfTest.class)); - //suite.addTest(new TestSuite(GridSharedFsCheckpointSpiMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSharedFsCheckpointSpiMultipleDirectoriesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSharedFsCheckpointSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSharedFsCheckpointSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSharedFsCheckpointSpiConfigSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(GridSharedFsCheckpointSpiMultiThreadedSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCollisionSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCollisionSelfTestSuite.java index d5d6ab8a7b526..8df78eeeebe07 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCollisionSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCollisionSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.collision.fifoqueue.GridFifoQueueCollisionSpiConfigSelfTest; import org.apache.ignite.spi.collision.fifoqueue.GridFifoQueueCollisionSpiSelfTest; @@ -29,35 +30,37 @@ import org.apache.ignite.spi.collision.priorityqueue.GridPriorityQueueCollisionSpiConfigSelfTest; import org.apache.ignite.spi.collision.priorityqueue.GridPriorityQueueCollisionSpiSelfTest; import org.apache.ignite.spi.collision.priorityqueue.GridPriorityQueueCollisionSpiStartStopSelfTest; +import org.junit.runner.RunWith; +import 
org.junit.runners.AllTests; /** * Collision SPI self-test suite. */ -public class IgniteSpiCollisionSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSpiCollisionSelfTestSuite { /** * @return Collision SPI tests suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Collision SPI Test Suite"); // Priority. - suite.addTestSuite(GridPriorityQueueCollisionSpiSelfTest.class); - suite.addTestSuite(GridPriorityQueueCollisionSpiStartStopSelfTest.class); - suite.addTestSuite(GridPriorityQueueCollisionSpiConfigSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridPriorityQueueCollisionSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridPriorityQueueCollisionSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridPriorityQueueCollisionSpiConfigSelfTest.class)); // FIFO. - suite.addTestSuite(GridFifoQueueCollisionSpiSelfTest.class); - suite.addTestSuite(GridFifoQueueCollisionSpiStartStopSelfTest.class); - suite.addTestSuite(GridFifoQueueCollisionSpiConfigSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridFifoQueueCollisionSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFifoQueueCollisionSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridFifoQueueCollisionSpiConfigSelfTest.class)); // Job Stealing.
- suite.addTestSuite(GridJobStealingCollisionSpiSelfTest.class); - suite.addTestSuite(GridJobStealingCollisionSpiAttributesSelfTest.class); - suite.addTestSuite(GridJobStealingCollisionSpiCustomTopologySelfTest.class); - suite.addTestSuite(GridJobStealingCollisionSpiStartStopSelfTest.class); - suite.addTestSuite(GridJobStealingCollisionSpiConfigSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridJobStealingCollisionSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingCollisionSpiAttributesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingCollisionSpiCustomTopologySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingCollisionSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingCollisionSpiConfigSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCommunicationSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCommunicationSelfTestSuite.java index ef55d36134a9b..84b11d7891381 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCommunicationSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiCommunicationSelfTestSuite.java @@ -17,8 +17,8 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; -import org.apache.ignite.spi.communication.tcp.GridCacheDhtLockBackupSelfTest; import org.apache.ignite.spi.communication.tcp.GridTcpCommunicationSpiConcurrentConnectSelfTest; import org.apache.ignite.spi.communication.tcp.GridTcpCommunicationSpiConcurrentConnectSslSelfTest; import org.apache.ignite.spi.communication.tcp.GridTcpCommunicationSpiConfigSelfTest; @@ -38,55 +38,65 @@ import org.apache.ignite.spi.communication.tcp.GridTcpCommunicationSpiTcpSelfTest; import 
org.apache.ignite.spi.communication.tcp.IgniteTcpCommunicationRecoveryAckClosureSelfTest; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpiDropNodesTest; +import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpiFaultyClientSslTest; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpiFaultyClientTest; +import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpiFreezingClientTest; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpiHalfOpenedConnectionTest; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpiSkipMessageSendTest; import org.apache.ignite.spi.communication.tcp.TcpCommunicationStatisticsTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for all communication SPIs. */ -public class IgniteSpiCommunicationSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSpiCommunicationSelfTestSuite { /** * @return Communication SPI tests suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Communication SPI Test Suite"); - suite.addTest(new TestSuite(GridTcpCommunicationSpiRecoveryAckSelfTest.class)); - suite.addTest(new TestSuite(IgniteTcpCommunicationRecoveryAckClosureSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiRecoverySelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiRecoveryNoPairedConnectionsTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiRecoverySslSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiRecoveryAckSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteTcpCommunicationRecoveryAckClosureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiRecoverySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiRecoveryNoPairedConnectionsTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiRecoverySslSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiConcurrentConnectSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiConcurrentConnectSslSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiConcurrentConnectSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiConcurrentConnectSslSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiSslSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiSslSmallBuffersSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiSslSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiSslSmallBuffersSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiTcpSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiTcpNoDelayOffSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiShmemSelfTest.class)); + 
suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiTcpSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiTcpNoDelayOffSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiShmemSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiStartStopSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiMultithreadedSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiMultithreadedShmemTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiMultithreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiMultithreadedShmemTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiTcpFailureDetectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiRecoveryFailureDetectionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiTcpFailureDetectionSelfTest.class)); - suite.addTest(new TestSuite(GridTcpCommunicationSpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTcpCommunicationSpiConfigSelfTest.class)); - suite.addTest(new TestSuite(TcpCommunicationSpiSkipMessageSendTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpCommunicationSpiSkipMessageSendTest.class)); + + suite.addTest(new JUnit4TestAdapter(TcpCommunicationSpiFaultyClientTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpCommunicationSpiDropNodesTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpCommunicationSpiHalfOpenedConnectionTest.class)); + + suite.addTest(new JUnit4TestAdapter(TcpCommunicationStatisticsTest.class)); + + suite.addTest(new JUnit4TestAdapter(TcpCommunicationSpiFreezingClientTest.class)); + 
suite.addTest(new JUnit4TestAdapter(TcpCommunicationSpiFaultyClientSslTest.class)); - suite.addTest(new TestSuite(TcpCommunicationSpiFaultyClientTest.class)); - suite.addTest(new TestSuite(TcpCommunicationSpiDropNodesTest.class)); - suite.addTest(new TestSuite(TcpCommunicationSpiHalfOpenedConnectionTest.class)); - suite.addTest(new TestSuite(TcpCommunicationStatisticsTest.class)); //suite.addTest(new TestSuite(GridCacheDhtLockBackupSelfTest.class)); diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDeploymentSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDeploymentSelfTestSuite.java index f3b1e5665d298..6fce539a8b27a 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDeploymentSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDeploymentSelfTestSuite.java @@ -17,25 +17,28 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.deployment.local.GridLocalDeploymentSpiSelfTest; import org.apache.ignite.spi.deployment.local.GridLocalDeploymentSpiStartStopSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for deployment SPIs. */ -public class IgniteSpiDeploymentSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSpiDeploymentSelfTestSuite { /** * @return Deployment SPI tests suite. - * @throws Exception If failed.
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Deployment SPI Test Suite"); // LocalDeploymentSpi tests - suite.addTest(new TestSuite(GridLocalDeploymentSpiSelfTest.class)); - suite.addTest(new TestSuite(GridLocalDeploymentSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLocalDeploymentSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLocalDeploymentSpiStartStopSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDiscoverySelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDiscoverySelfTestSuite.java index 04869f9aa91d8..bc7cf933d3f6b 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDiscoverySelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiDiscoverySelfTestSuite.java @@ -1,12 +1,11 @@ /* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at + * Copyright 2020 GridGain Systems, Inc. and Contributors. * - * http://www.apache.org/licenses/LICENSE-2.0 + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -17,27 +16,34 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.IgniteDiscoveryMassiveNodeFailTest; +import org.apache.ignite.spi.ExponentialBackoffTimeoutStrategyTest; import org.apache.ignite.spi.GridTcpSpiForwardingSelfTest; import org.apache.ignite.spi.discovery.AuthenticationRestartTest; import org.apache.ignite.spi.discovery.FilterDataForClientNodeDiscoveryTest; import org.apache.ignite.spi.discovery.IgniteDiscoveryCacheReuseSelfTest; +import org.apache.ignite.spi.discovery.IgniteClientReconnectEventHandlingTest; +import org.apache.ignite.spi.discovery.LongClientConnectToClusterTest; import org.apache.ignite.spi.discovery.tcp.DiscoveryUnmarshalVulnerabilityTest; import org.apache.ignite.spi.discovery.tcp.IgniteClientConnectTest; import org.apache.ignite.spi.discovery.tcp.IgniteClientReconnectMassiveShutdownTest; import org.apache.ignite.spi.discovery.tcp.TcpClientDiscoveryMarshallerCheckSelfTest; +import org.apache.ignite.spi.discovery.tcp.TcpClientDiscoverySpiCoordinatorChangeTest; import org.apache.ignite.spi.discovery.tcp.TcpClientDiscoverySpiFailureTimeoutSelfTest; import org.apache.ignite.spi.discovery.tcp.TcpClientDiscoverySpiMulticastTest; import org.apache.ignite.spi.discovery.tcp.TcpClientDiscoverySpiSelfTest; import org.apache.ignite.spi.discovery.tcp.TcpClientDiscoveryUnresolvedHostTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryClientSuspensionSelfTest; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryIpFinderCleanerTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryMarshallerCheckSelfTest; import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoveryMultiThreadedTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryNodeAttributesUpdateOnReconnectTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryNodeConfigConsistentIdSelfTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryNodeConsistentIdSelfTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryPendingMessageDeliveryTest; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryReconnectUnstableTopologyTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoveryRestartTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySegmentationPolicyTest; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySelfTest; @@ -68,78 +74,84 @@ public class IgniteSpiDiscoverySelfTestSuite extends TestSuite { /** * @return Discovery SPI tests suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, GridTestUtils.getNextMulticastGroup(IgniteSpiDiscoverySelfTestSuite.class)); TestSuite suite = new TestSuite("Ignite Discovery SPI Test Suite"); // Tcp. 
- suite.addTest(new TestSuite(TcpDiscoveryVmIpFinderSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySharedFsIpFinderSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryJdbcIpFinderSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryMulticastIpFinderSelfTest.class)); - - suite.addTest(new TestSuite(TcpDiscoverySelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySpiSelfTest.class)); - //suite.addTest(new TestSuite(TcpDiscoverySpiRandomStartStopTest.class)); - //suite.addTest(new TestSuite(TcpDiscoverySpiSslSelfTest.class)); - //suite.addTest(new TestSuite(TcpDiscoverySpiWildcardSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySpiFailureTimeoutSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySpiMBeanTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySpiStartStopSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySpiConfigSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryMarshallerCheckSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySnapshotHistoryTest.class)); - - suite.addTest(new TestSuite(GridTcpSpiForwardingSelfTest.class)); - - suite.addTest(new TestSuite(TcpClientDiscoverySpiSelfTest.class)); - suite.addTest(new TestSuite(TcpClientDiscoveryMarshallerCheckSelfTest.class)); - suite.addTest(new TestSuite(TcpClientDiscoverySpiMulticastTest.class)); - suite.addTest(new TestSuite(TcpClientDiscoverySpiFailureTimeoutSelfTest.class)); - suite.addTest(new TestSuite(TcpClientDiscoveryUnresolvedHostTest.class)); - - suite.addTest(new TestSuite(TcpDiscoveryNodeConsistentIdSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryNodeConfigConsistentIdSelfTest.class)); - - suite.addTest(new TestSuite(TcpDiscoveryRestartTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryMultiThreadedTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryVmIpFinderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySharedFsIpFinderSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(TcpDiscoveryJdbcIpFinderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryMulticastIpFinderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryIpFinderCleanerTest.class)); + + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiRandomStartStopTest.class)); + //suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiSslSelfTest.class)); + //suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiWildcardSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiFailureTimeoutSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiMBeanTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryMarshallerCheckSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySnapshotHistoryTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridTcpSpiForwardingSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(ExponentialBackoffTimeoutStrategyTest.class)); + + suite.addTest(new JUnit4TestAdapter(TcpClientDiscoverySpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(LongClientConnectToClusterTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpClientDiscoveryMarshallerCheckSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpClientDiscoverySpiCoordinatorChangeTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpClientDiscoverySpiMulticastTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpClientDiscoverySpiFailureTimeoutSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpClientDiscoveryUnresolvedHostTest.class)); + + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryNodeConsistentIdSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(TcpDiscoveryNodeConfigConsistentIdSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryMultiThreadedTest.class)); //suite.addTest(new TestSuite(TcpDiscoveryConcurrentStartTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySegmentationPolicyTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySegmentationPolicyTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryNodeAttributesUpdateOnReconnectTest.class)); - suite.addTest(new TestSuite(AuthenticationRestartTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryNodeAttributesUpdateOnReconnectTest.class)); + suite.addTest(new JUnit4TestAdapter(AuthenticationRestartTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryWithWrongServerTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryWithWrongServerTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySpiReconnectDelayTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySpiReconnectDelayTest.class)); - suite.addTest(new TestSuite(IgniteDiscoveryMassiveNodeFailTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDiscoveryMassiveNodeFailTest.class)); // Client connect. - suite.addTest(new TestSuite(IgniteClientConnectTest.class)); - suite.addTest(new TestSuite(IgniteClientReconnectMassiveShutdownTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryClientSuspensionSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientConnectTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectMassiveShutdownTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryClientSuspensionSelfTest.class)); // SSL. 
- suite.addTest(new TestSuite(TcpDiscoverySslSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySslTrustedSelfTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySslSecuredUnsecuredTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySslTrustedUntrustedTest.class)); - suite.addTest(new TestSuite(TcpDiscoverySslParametersTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySslSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySslTrustedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySslSecuredUnsecuredTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySslTrustedUntrustedTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoverySslParametersTest.class)); // Disco cache reuse. - suite.addTest(new TestSuite(IgniteDiscoveryCacheReuseSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDiscoveryCacheReuseSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(DiscoveryUnmarshalVulnerabilityTest.class)); - suite.addTest(new TestSuite(DiscoveryUnmarshalVulnerabilityTest.class)); + suite.addTest(new JUnit4TestAdapter(FilterDataForClientNodeDiscoveryTest.class)); - suite.addTest(new TestSuite(FilterDataForClientNodeDiscoveryTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryPendingMessageDeliveryTest.class)); - suite.addTest(new TestSuite(TcpDiscoveryPendingMessageDeliveryTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryReconnectUnstableTopologyTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiEventStorageSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiEventStorageSelfTestSuite.java index 9f295a28e883e..b9917d6246a95 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiEventStorageSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiEventStorageSelfTestSuite.java @@ -17,28 +17,31 @@ package 
org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.eventstorage.memory.GridMemoryEventStorageMultiThreadedSelfTest; import org.apache.ignite.spi.eventstorage.memory.GridMemoryEventStorageSpiConfigSelfTest; import org.apache.ignite.spi.eventstorage.memory.GridMemoryEventStorageSpiSelfTest; import org.apache.ignite.spi.eventstorage.memory.GridMemoryEventStorageSpiStartStopSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Event storage test suite. */ -public class IgniteSpiEventStorageSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSpiEventStorageSelfTestSuite { /** * @return Event storage test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Event Storage Test Suite"); - suite.addTest(new TestSuite(GridMemoryEventStorageSpiSelfTest.class)); - suite.addTest(new TestSuite(GridMemoryEventStorageSpiStartStopSelfTest.class)); - suite.addTest(new TestSuite(GridMemoryEventStorageMultiThreadedSelfTest.class)); - suite.addTest(new TestSuite(GridMemoryEventStorageSpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMemoryEventStorageSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMemoryEventStorageSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMemoryEventStorageMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMemoryEventStorageSpiConfigSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiFailoverSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiFailoverSelfTestSuite.java index 2e8f0e9965326..933a2bcc0b5d1 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiFailoverSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiFailoverSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.failover.always.GridAlwaysFailoverSpiConfigSelfTest; import org.apache.ignite.spi.failover.always.GridAlwaysFailoverSpiSelfTest; @@ -27,33 +28,35 @@ import org.apache.ignite.spi.failover.jobstealing.GridJobStealingFailoverSpiStartStopSelfTest; import org.apache.ignite.spi.failover.never.GridNeverFailoverSpiSelfTest; import org.apache.ignite.spi.failover.never.GridNeverFailoverSpiStartStopSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Failover SPI self-test suite. */ -public class IgniteSpiFailoverSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSpiFailoverSelfTestSuite { /** * @return Failover SPI tests suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Failover SPI Test Suite"); // Always failover. - suite.addTest(new TestSuite(GridAlwaysFailoverSpiSelfTest.class)); - suite.addTest(new TestSuite(GridAlwaysFailoverSpiStartStopSelfTest.class)); - suite.addTest(new TestSuite(GridAlwaysFailoverSpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAlwaysFailoverSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAlwaysFailoverSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAlwaysFailoverSpiConfigSelfTest.class)); // Never failover. 
- suite.addTest(new TestSuite(GridNeverFailoverSpiSelfTest.class)); - suite.addTest(new TestSuite(GridNeverFailoverSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNeverFailoverSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridNeverFailoverSpiStartStopSelfTest.class)); // Job stealing failover. - suite.addTest(new TestSuite(GridJobStealingFailoverSpiSelfTest.class)); - suite.addTest(new TestSuite(GridJobStealingFailoverSpiOneNodeSelfTest.class)); - suite.addTest(new TestSuite(GridJobStealingFailoverSpiStartStopSelfTest.class)); - suite.addTest(new TestSuite(GridJobStealingFailoverSpiConfigSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingFailoverSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingFailoverSpiOneNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingFailoverSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridJobStealingFailoverSpiConfigSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiLoadBalancingSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiLoadBalancingSelfTestSuite.java index 52d4fddabef24..e5bdfeff08f29 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiLoadBalancingSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiLoadBalancingSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.Test; import junit.framework.TestSuite; import org.apache.ignite.spi.loadbalancing.adaptive.GridAdaptiveLoadBalancingSpiConfigSelfTest; @@ -34,10 +35,13 @@ import org.apache.ignite.spi.loadbalancing.weightedrandom.GridWeightedRandomLoadBalancingSpiSelfTest; import org.apache.ignite.spi.loadbalancing.weightedrandom.GridWeightedRandomLoadBalancingSpiStartStopSelfTest; 
import org.apache.ignite.spi.loadbalancing.weightedrandom.GridWeightedRandomLoadBalancingSpiWeightedSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Load balancing SPI self-test suite. */ +@RunWith(AllTests.class) public final class IgniteSpiLoadBalancingSelfTestSuite { /** * Enforces singleton. @@ -53,27 +57,27 @@ public static Test suite() { TestSuite suite = new TestSuite("Ignite Load Balancing Test Suite"); // Random. - suite.addTestSuite(GridWeightedRandomLoadBalancingSpiSelfTest.class); - suite.addTestSuite(GridWeightedRandomLoadBalancingSpiWeightedSelfTest.class); - suite.addTestSuite(GridWeightedRandomLoadBalancingSpiStartStopSelfTest.class); - suite.addTestSuite(GridWeightedRandomLoadBalancingSpiConfigSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridWeightedRandomLoadBalancingSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridWeightedRandomLoadBalancingSpiWeightedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridWeightedRandomLoadBalancingSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridWeightedRandomLoadBalancingSpiConfigSelfTest.class)); // Round-robin. 
- suite.addTestSuite(GridRoundRobinLoadBalancingSpiLocalNodeSelfTest.class); - suite.addTestSuite(GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest.class); - suite.addTestSuite(GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest.class); - suite.addTestSuite(GridRoundRobinLoadBalancingSpiNotPerTaskSelfTest.class); - suite.addTestSuite(GridRoundRobinLoadBalancingSpiStartStopSelfTest.class); - suite.addTestSuite(GridRoundRobinLoadBalancingNotPerTaskMultithreadedSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridRoundRobinLoadBalancingSpiLocalNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRoundRobinLoadBalancingSpiMultipleNodesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRoundRobinLoadBalancingSpiTopologyChangeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRoundRobinLoadBalancingSpiNotPerTaskSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRoundRobinLoadBalancingSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRoundRobinLoadBalancingNotPerTaskMultithreadedSelfTest.class)); // Adaptive. - suite.addTestSuite(GridAdaptiveLoadBalancingSpiSelfTest.class); - suite.addTestSuite(GridAdaptiveLoadBalancingSpiMultipleNodeSelfTest.class); - suite.addTestSuite(GridAdaptiveLoadBalancingSpiStartStopSelfTest.class); - suite.addTestSuite(GridAdaptiveLoadBalancingSpiConfigSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridAdaptiveLoadBalancingSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAdaptiveLoadBalancingSpiMultipleNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAdaptiveLoadBalancingSpiStartStopSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridAdaptiveLoadBalancingSpiConfigSelfTest.class)); // Load balancing for internal tasks. 
- suite.addTestSuite(GridInternalTasksLoadBalancingSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridInternalTasksLoadBalancingSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiTestSuite.java index 5de61aecf186c..c98cbf11c0250 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteSpiTestSuite.java @@ -17,14 +17,19 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.managers.GridManagerLocalMessageListenerSelfTest; import org.apache.ignite.internal.managers.GridNoopManagerSelfTest; +import org.apache.ignite.spi.encryption.KeystoreEncryptionSpiSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Grid SPI test suite. */ -public class IgniteSpiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSpiTestSuite { /** * @return All SPI tests suite. * @throws Exception If failed. @@ -57,10 +62,12 @@ public static TestSuite suite() throws Exception { suite.addTest(IgniteSpiCommunicationSelfTestSuite.suite()); // All other tests. - suite.addTestSuite(GridNoopManagerSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridNoopManagerSelfTest.class)); // Local Message Listener tests. 
- suite.addTestSuite(GridManagerLocalMessageListenerSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridManagerLocalMessageListenerSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(KeystoreEncryptionSpiSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStandByClusterSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStandByClusterSuite.java index f5244207c89af..8cdc6df93297c 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStandByClusterSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStandByClusterSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest; import org.apache.ignite.internal.processors.cache.distributed.CacheBaselineTopologyTest; @@ -35,45 +36,48 @@ import org.apache.ignite.internal.processors.cache.persistence.standbycluster.join.persistence.JoinInActiveNodeToInActiveClusterWithPersistence; import org.apache.ignite.internal.processors.cache.persistence.standbycluster.reconnect.IgniteStandByClientReconnectTest; import org.apache.ignite.internal.processors.cache.persistence.standbycluster.reconnect.IgniteStandByClientReconnectToNewClusterTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteStandByClusterSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteStandByClusterSuite { /** * @return Test suite. 
*/ public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Activate/DeActivate Cluster Test Suite"); - suite.addTestSuite(IgniteClusterActivateDeactivateTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteClusterActivateDeactivateTest.class)); - suite.addTestSuite(IgniteStandByClusterTest.class); - suite.addTestSuite(IgniteStandByClientReconnectTest.class); - suite.addTestSuite(IgniteStandByClientReconnectToNewClusterTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteStandByClusterTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteStandByClientReconnectTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteStandByClientReconnectToNewClusterTest.class)); - suite.addTestSuite(JoinActiveNodeToActiveCluster.class); - suite.addTestSuite(JoinActiveNodeToInActiveCluster.class); - suite.addTestSuite(JoinInActiveNodeToActiveCluster.class); - suite.addTestSuite(JoinInActiveNodeToInActiveCluster.class); + suite.addTest(new JUnit4TestAdapter(JoinActiveNodeToActiveCluster.class)); + suite.addTest(new JUnit4TestAdapter(JoinActiveNodeToInActiveCluster.class)); + suite.addTest(new JUnit4TestAdapter(JoinInActiveNodeToActiveCluster.class)); + suite.addTest(new JUnit4TestAdapter(JoinInActiveNodeToInActiveCluster.class)); - suite.addTestSuite(JoinActiveNodeToActiveClusterWithPersistence.class); - suite.addTestSuite(JoinActiveNodeToInActiveClusterWithPersistence.class); - suite.addTestSuite(JoinInActiveNodeToActiveClusterWithPersistence.class); - suite.addTestSuite(JoinInActiveNodeToInActiveClusterWithPersistence.class); + suite.addTest(new JUnit4TestAdapter(JoinActiveNodeToActiveClusterWithPersistence.class)); + suite.addTest(new JUnit4TestAdapter(JoinActiveNodeToInActiveClusterWithPersistence.class)); + suite.addTest(new JUnit4TestAdapter(JoinInActiveNodeToActiveClusterWithPersistence.class)); + suite.addTest(new JUnit4TestAdapter(JoinInActiveNodeToInActiveClusterWithPersistence.class)); -//TODO https://issues.apache.org/jira/browse/IGNITE-9081 
suite.addTestSuite(IgniteChangeGlobalStateTest.class); -//TODO https://issues.apache.org/jira/browse/IGNITE-9081 suite.addTestSuite(IgniteChangeGlobalStateCacheTest.class); -//TODO https://issues.apache.org/jira/browse/IGNITE-9081 suite.addTestSuite(IgniteChangeGlobalStateDataStructureTest.class); -//TODO https://issues.apache.org/jira/browse/IGNITE-9081 suite.addTestSuite(IgniteChangeGlobalStateServiceTest.class); +//TODO https://issues.apache.org/jira/browse/IGNITE-9081 suite.addTest(new JUnit4TestAdapter(IgniteChangeGlobalStateTest.class)); +//TODO https://issues.apache.org/jira/browse/IGNITE-9081 suite.addTest(new JUnit4TestAdapter(IgniteChangeGlobalStateCacheTest.class)); +//TODO https://issues.apache.org/jira/browse/IGNITE-9081 suite.addTest(new JUnit4TestAdapter(IgniteChangeGlobalStateDataStructureTest.class)); +//TODO https://issues.apache.org/jira/browse/IGNITE-9081 suite.addTest(new JUnit4TestAdapter(IgniteChangeGlobalStateServiceTest.class)); - suite.addTestSuite(IgniteChangeGlobalStateDataStreamerTest.class); - suite.addTestSuite(IgniteChangeGlobalStateFailOverTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteChangeGlobalStateDataStreamerTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteChangeGlobalStateFailOverTest.class)); - suite.addTestSuite(IgniteNoParrallelClusterIsAllowedTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteNoParrallelClusterIsAllowedTest.class)); - suite.addTestSuite(CacheBaselineTopologyTest.class); - suite.addTestSuite(IgniteBaselineAffinityTopologyActivationTest.class); + suite.addTest(new JUnit4TestAdapter(CacheBaselineTopologyTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteBaselineAffinityTopologyActivationTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStartUpTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStartUpTestSuite.java index b29f5c730dd2e..815c658e465c7 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStartUpTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStartUpTestSuite.java @@ -17,21 +17,24 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.startup.cmdline.GridCommandLineTransformerSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Loaders self-test suite. */ -public class IgniteStartUpTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteStartUpTestSuite { /** * @return Loaders tests suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite StartUp Test Suite"); - suite.addTest(new TestSuite(GridCommandLineTransformerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCommandLineTransformerSelfTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStreamSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStreamSelfTestSuite.java index 9eac2773166f7..ab4650db5cffa 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStreamSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteStreamSelfTestSuite.java @@ -17,24 +17,27 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.stream.socket.SocketStreamerSelfTest; import org.apache.ignite.stream.socket.SocketStreamerUnmarshalVulnerabilityTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Stream test suite. */ -public class IgniteStreamSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteStreamSelfTestSuite { /** * @return Stream tests suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Stream Test Suite"); - suite.addTest(new TestSuite(SocketStreamerSelfTest.class)); - suite.addTest(new TestSuite(SocketStreamerUnmarshalVulnerabilityTest.class)); + suite.addTest(new JUnit4TestAdapter(SocketStreamerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SocketStreamerUnmarshalVulnerabilityTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTaskSessionSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTaskSessionSelfTestSuite.java index 9a837db184ac4..8ab1daee13ef9 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTaskSessionSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTaskSessionSelfTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.session.GridSessionCancelSiblingsFromFutureSelfTest; import org.apache.ignite.session.GridSessionCancelSiblingsFromJobSelfTest; @@ -37,38 +38,40 @@ import org.apache.ignite.session.GridSessionSetTaskAttributeSelfTest; import org.apache.ignite.session.GridSessionTaskWaitJobAttributeSelfTest; import org.apache.ignite.session.GridSessionWaitAttributeSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Task session test suite. */ -public class IgniteTaskSessionSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteTaskSessionSelfTestSuite { /** * @return TaskSession test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite TaskSession Test Suite"); - suite.addTest(new TestSuite(GridSessionCancelSiblingsFromFutureSelfTest.class)); - suite.addTest(new TestSuite(GridSessionCancelSiblingsFromJobSelfTest.class)); - suite.addTest(new TestSuite(GridSessionCancelSiblingsFromTaskSelfTest.class)); - suite.addTest(new TestSuite(GridSessionSetFutureAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionSetFutureAttributeWaitListenerSelfTest.class)); - suite.addTest(new TestSuite(GridSessionSetJobAttributeWaitListenerSelfTest.class)); - suite.addTest(new TestSuite(GridSessionSetJobAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionSetJobAttribute2SelfTest.class)); - suite.addTest(new TestSuite(GridSessionJobWaitTaskAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionSetTaskAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionFutureWaitTaskAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionFutureWaitJobAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionTaskWaitJobAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionSetJobAttributeOrderSelfTest.class)); - suite.addTest(new TestSuite(GridSessionWaitAttributeSelfTest.class)); - suite.addTest(new TestSuite(GridSessionJobFailoverSelfTest.class)); - suite.addTest(new TestSuite(GridSessionLoadSelfTest.class)); - suite.addTest(new TestSuite(GridSessionCollisionSpiSelfTest.class)); - suite.addTest(new TestSuite(GridSessionCheckpointSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionCancelSiblingsFromFutureSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionCancelSiblingsFromJobSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionCancelSiblingsFromTaskSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionSetFutureAttributeSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridSessionSetFutureAttributeWaitListenerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionSetJobAttributeWaitListenerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionSetJobAttributeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionSetJobAttribute2SelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionJobWaitTaskAttributeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionSetTaskAttributeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionFutureWaitTaskAttributeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionFutureWaitJobAttributeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionTaskWaitJobAttributeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionSetJobAttributeOrderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionWaitAttributeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionJobFailoverSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionLoadSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionCollisionSpiSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSessionCheckpointSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTimeoutProcessorSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTimeoutProcessorSelfTestSuite.java index 713dce0a30b8a..50bdbdcf8650f 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTimeoutProcessorSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTimeoutProcessorSelfTestSuite.java @@ -17,23 +17,26 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor; import 
org.apache.ignite.internal.processors.timeout.GridTimeoutProcessorSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Suite for {@link GridTimeoutProcessor} tests. */ -public class IgniteTimeoutProcessorSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteTimeoutProcessorSelfTestSuite { /** * @return Job metrics test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Timeout Processor Test Suite"); - suite.addTest(new TestSuite(GridTimeoutProcessorSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTimeoutProcessorSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTopologyValidatorTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTopologyValidatorTestSuite.java index 1c9b8525c8be0..1466b856910c9 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTopologyValidatorTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteTopologyValidatorTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import java.util.Collection; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorGridSplitCacheTest; import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorNearPartitionedAtomicCacheGroupsTest; @@ -31,34 +32,38 @@ import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorReplicatedAtomicCacheTest; import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorReplicatedTxCacheGroupsTest; import org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorReplicatedTxCacheTest; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** 
* Topology validator test suite. */ -public class IgniteTopologyValidatorTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteTopologyValidatorTestSuite { /** + * @param ignoredTests Ignored tests. * @return Topology validator tests suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite(Collection ignoredTests) { TestSuite suite = new TestSuite("Topology validator Test Suite"); - suite.addTest(new TestSuite(IgniteTopologyValidatorNearPartitionedAtomicCacheTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorNearPartitionedTxCacheTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorPartitionedAtomicCacheTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorPartitionedTxCacheTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorReplicatedAtomicCacheTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorReplicatedTxCacheTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorNearPartitionedAtomicCacheTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorNearPartitionedTxCacheTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorPartitionedAtomicCacheTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorPartitionedTxCacheTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorReplicatedAtomicCacheTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorReplicatedTxCacheTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteTopologyValidatorNearPartitionedAtomicCacheGroupsTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorNearPartitionedTxCacheGroupsTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorPartitionedAtomicCacheGroupsTest.class)); - suite.addTest(new 
TestSuite(IgniteTopologyValidatorPartitionedTxCacheGroupsTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorReplicatedAtomicCacheGroupsTest.class)); - suite.addTest(new TestSuite(IgniteTopologyValidatorReplicatedTxCacheGroupsTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorNearPartitionedAtomicCacheGroupsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorNearPartitionedTxCacheGroupsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorPartitionedAtomicCacheGroupsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorPartitionedTxCacheGroupsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorReplicatedAtomicCacheGroupsTest.class, ignoredTests); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorReplicatedTxCacheGroupsTest.class, ignoredTests); - suite.addTest(new TestSuite(IgniteTopologyValidatorGridSplitCacheTest.class)); + GridTestUtils.addTestIfNeeded(suite, IgniteTopologyValidatorGridSplitCacheTest.class, ignoredTests); return suite; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteUtilSelfTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteUtilSelfTestSuite.java index 673269bb4f037..2fdf11bfae142 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteUtilSelfTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/IgniteUtilSelfTestSuite.java @@ -17,7 +17,10 @@ package org.apache.ignite.testsuites; +import java.util.Set; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; +import org.apache.ignite.internal.IgniteVersionUtilsSelfTest; import org.apache.ignite.internal.commandline.CommandHandlerParsingTest; import org.apache.ignite.internal.pagemem.impl.PageIdUtilsSelfTest; import 
org.apache.ignite.internal.processors.cache.GridCacheUtilsSelfTest; @@ -57,92 +60,88 @@ import org.apache.ignite.util.GridStringBuilderFactorySelfTest; import org.apache.ignite.util.GridTopologyHeapSizeSelfTest; import org.apache.ignite.util.GridTransientTest; -import org.apache.ignite.util.IgniteTaskTrackingThreadPoolExecutorTest; import org.apache.ignite.util.mbeans.GridMBeanDisableSelfTest; import org.apache.ignite.util.mbeans.GridMBeanExoticNamesSelfTest; import org.apache.ignite.util.mbeans.GridMBeanSelfTest; import org.apache.ignite.util.mbeans.WorkersControlMXBeanTest; - -import java.util.Set; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for Ignite utility classes. */ -public class IgniteUtilSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteUtilSelfTestSuite { /** * @return Grid utility methods tests suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return suite(null); } /** * @param ignoredTests Tests don't include in the execution. * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite(Set ignoredTests) throws Exception { + public static TestSuite suite(Set ignoredTests) { TestSuite suite = new TestSuite("Ignite Util Test Suite"); - suite.addTestSuite(GridThreadPoolExecutorServiceSelfTest.class); - suite.addTestSuite(IgniteThreadPoolSizeTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteVersionUtilsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridThreadPoolExecutorServiceSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteThreadPoolSizeTest.class)); GridTestUtils.addTestIfNeeded(suite, IgniteUtilsSelfTest.class, ignoredTests); - suite.addTestSuite(GridSpinReadWriteLockSelfTest.class); - suite.addTestSuite(GridQueueSelfTest.class); - suite.addTestSuite(GridStringBuilderFactorySelfTest.class); - suite.addTestSuite(GridToStringBuilderSelfTest.class); - suite.addTestSuite(CircularStringBuilderSelfTest.class); - suite.addTestSuite(GridByteArrayListSelfTest.class); - suite.addTestSuite(GridMBeanSelfTest.class); - suite.addTestSuite(GridMBeanDisableSelfTest.class); - suite.addTestSuite(GridMBeanExoticNamesSelfTest.class); - suite.addTestSuite(GridLongListSelfTest.class); - suite.addTestSuite(GridThreadTest.class); - suite.addTestSuite(GridIntListSelfTest.class); - suite.addTestSuite(GridArraysSelfTest.class); - suite.addTestSuite(GridCacheUtilsSelfTest.class); - suite.addTestSuite(IgniteExceptionRegistrySelfTest.class); - suite.addTestSuite(GridMessageCollectionTest.class); - suite.addTestSuite(WorkersControlMXBeanTest.class); - suite.addTestSuite(GridConcurrentLinkedDequeMultiThreadedTest.class); - suite.addTestSuite(GridLogThrottleTest.class); - suite.addTestSuite(GridRandomSelfTest.class); - suite.addTestSuite(GridSnapshotLockSelfTest.class); - suite.addTestSuite(GridTopologyHeapSizeSelfTest.class); - suite.addTestSuite(GridTransientTest.class); - suite.addTestSuite(IgniteDevOnlyLogTest.class); + suite.addTest(new JUnit4TestAdapter(GridSpinReadWriteLockSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(GridQueueSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridStringBuilderFactorySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridToStringBuilderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CircularStringBuilderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridByteArrayListSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMBeanSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMBeanDisableSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMBeanExoticNamesSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLongListSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridThreadTest.class)); + suite.addTest(new JUnit4TestAdapter(GridIntListSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridArraysSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheUtilsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteExceptionRegistrySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridMessageCollectionTest.class)); + suite.addTest(new JUnit4TestAdapter(WorkersControlMXBeanTest.class)); + suite.addTest(new JUnit4TestAdapter(GridConcurrentLinkedDequeMultiThreadedTest.class)); + suite.addTest(new JUnit4TestAdapter(GridLogThrottleTest.class)); + suite.addTest(new JUnit4TestAdapter(GridRandomSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridSnapshotLockSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTopologyHeapSizeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridTransientTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDevOnlyLogTest.class)); // Sensitive toString. - suite.addTestSuite(IncludeSensitiveAtomicTest.class); - suite.addTestSuite(IncludeSensitiveTransactionalTest.class); + suite.addTest(new JUnit4TestAdapter(IncludeSensitiveAtomicTest.class)); + suite.addTest(new JUnit4TestAdapter(IncludeSensitiveTransactionalTest.class)); // Metrics. 
- suite.addTestSuite(ClusterMetricsSnapshotSerializeSelfTest.class); + suite.addTest(new JUnit4TestAdapter(ClusterMetricsSnapshotSerializeSelfTest.class)); // Unsafe. - suite.addTestSuite(GridUnsafeMemorySelfTest.class); - suite.addTestSuite(GridUnsafeStripedLruSefTest.class); - suite.addTestSuite(GridUnsafeMapSelfTest.class); - suite.addTestSuite(GridUnsafePartitionedMapSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridUnsafeMemorySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridUnsafeStripedLruSefTest.class)); + suite.addTest(new JUnit4TestAdapter(GridUnsafeMapSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridUnsafePartitionedMapSelfTest.class)); // NIO. - suite.addTestSuite(GridNioSessionMetaKeySelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridNioSessionMetaKeySelfTest.class)); GridTestUtils.addTestIfNeeded(suite, GridNioSelfTest.class, ignoredTests); - suite.addTestSuite(GridNioFilterChainSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridNioFilterChainSelfTest.class)); GridTestUtils.addTestIfNeeded(suite, GridNioSslSelfTest.class, ignoredTests); - suite.addTestSuite(GridNioDelimitedBufferSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridNioDelimitedBufferSelfTest.class)); - suite.addTestSuite(GridPartitionMapSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridPartitionMapSelfTest.class)); //dbx - suite.addTestSuite(PageIdUtilsSelfTest.class); + suite.addTest(new JUnit4TestAdapter(PageIdUtilsSelfTest.class)); // control.sh - suite.addTestSuite(CommandHandlerParsingTest.class); - - // Thread pool. 
- suite.addTestSuite(IgniteTaskTrackingThreadPoolExecutorTest.class); + suite.addTest(new JUnit4TestAdapter(CommandHandlerParsingTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/InterceptorCacheConfigVariationsFullApiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/InterceptorCacheConfigVariationsFullApiTestSuite.java index 31fd365fe6392..f25225c191c53 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/InterceptorCacheConfigVariationsFullApiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/InterceptorCacheConfigVariationsFullApiTestSuite.java @@ -20,16 +20,18 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.InterceptorCacheConfigVariationsFullApiTest; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache API. */ -public class InterceptorCacheConfigVariationsFullApiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class InterceptorCacheConfigVariationsFullApiTestSuite { /** * @return Cache API test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return new ConfigVariationsTestSuiteBuilder( "Cache New Full API Test Suite with Interceptor", InterceptorCacheConfigVariationsFullApiTest.class) diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/TxDeadlockDetectionTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/TxDeadlockDetectionTestSuite.java index 0bd9584fee0b5..4e94a3193ab43 100644 --- a/modules/core/src/test/java/org/apache/ignite/testsuites/TxDeadlockDetectionTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/TxDeadlockDetectionTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.transactions.DepthFirstSearchTest; import org.apache.ignite.internal.processors.cache.transactions.TxDeadlockCauseTest; @@ -28,28 +29,30 @@ import org.apache.ignite.internal.processors.cache.transactions.TxOptimisticDeadlockDetectionTest; import org.apache.ignite.internal.processors.cache.transactions.TxPessimisticDeadlockDetectionCrossCacheTest; import org.apache.ignite.internal.processors.cache.transactions.TxPessimisticDeadlockDetectionTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Deadlock detection related tests. */ -public class TxDeadlockDetectionTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class TxDeadlockDetectionTestSuite { /** * @return Test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Deadlock Detection Test Suite"); - suite.addTestSuite(DepthFirstSearchTest.class); - suite.addTestSuite(TxOptimisticDeadlockDetectionTest.class); - suite.addTestSuite(TxOptimisticDeadlockDetectionCrossCacheTest.class); - suite.addTestSuite(TxPessimisticDeadlockDetectionTest.class); - suite.addTestSuite(TxPessimisticDeadlockDetectionCrossCacheTest.class); - //suite.addTestSuite(TxDeadlockCauseTest.class); - suite.addTestSuite(TxDeadlockDetectionTest.class); - suite.addTestSuite(TxDeadlockDetectionNoHangsTest.class); - suite.addTestSuite(TxDeadlockDetectionUnmasrhalErrorsTest.class); - suite.addTestSuite(TxDeadlockDetectionMessageMarshallingTest.class); + suite.addTest(new JUnit4TestAdapter(DepthFirstSearchTest.class)); + suite.addTest(new JUnit4TestAdapter(TxOptimisticDeadlockDetectionTest.class)); + suite.addTest(new JUnit4TestAdapter(TxOptimisticDeadlockDetectionCrossCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(TxPessimisticDeadlockDetectionTest.class)); + suite.addTest(new JUnit4TestAdapter(TxPessimisticDeadlockDetectionCrossCacheTest.class)); + //suite.addTest(new JUnit4TestAdapter(TxDeadlockCauseTest.class)); + suite.addTest(new JUnit4TestAdapter(TxDeadlockDetectionTest.class)); + suite.addTest(new JUnit4TestAdapter(TxDeadlockDetectionNoHangsTest.class)); + suite.addTest(new JUnit4TestAdapter(TxDeadlockDetectionUnmasrhalErrorsTest.class)); + suite.addTest(new JUnit4TestAdapter(TxDeadlockDetectionMessageMarshallingTest.class)); return suite; } diff --git a/modules/core/src/test/java/org/apache/ignite/testsuites/WithKeepBinaryCacheConfigVariationsFullApiTestSuite.java b/modules/core/src/test/java/org/apache/ignite/testsuites/WithKeepBinaryCacheConfigVariationsFullApiTestSuite.java index e557d77d93034..069274c25809b 100644 --- 
a/modules/core/src/test/java/org/apache/ignite/testsuites/WithKeepBinaryCacheConfigVariationsFullApiTestSuite.java +++ b/modules/core/src/test/java/org/apache/ignite/testsuites/WithKeepBinaryCacheConfigVariationsFullApiTestSuite.java @@ -23,17 +23,19 @@ import org.apache.ignite.internal.processors.cache.WithKeepBinaryCacheFullApiTest; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache API. */ -public class WithKeepBinaryCacheConfigVariationsFullApiTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class WithKeepBinaryCacheConfigVariationsFullApiTestSuite { /** * @return Cache API test suite. - * @throws Exception If failed. */ @SuppressWarnings("serial") - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("With Keep Binary Cache Config Variations Full API Test Suite"); suite.addTest(new ConfigVariationsTestSuiteBuilder( diff --git a/modules/core/src/test/java/org/apache/ignite/thread/GridThreadPoolExecutorServiceSelfTest.java b/modules/core/src/test/java/org/apache/ignite/thread/GridThreadPoolExecutorServiceSelfTest.java index dce6328775ef8..f6d00815ed501 100644 --- a/modules/core/src/test/java/org/apache/ignite/thread/GridThreadPoolExecutorServiceSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/thread/GridThreadPoolExecutorServiceSelfTest.java @@ -28,6 +28,9 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; @@ -35,6 +38,7 @@ * Test for {@link IgniteThreadPoolExecutor}. 
*/ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class GridThreadPoolExecutorServiceSelfTest extends GridCommonAbstractTest { /** Thread count. */ private static final int THREAD_CNT = 40; @@ -42,6 +46,7 @@ public class GridThreadPoolExecutorServiceSelfTest extends GridCommonAbstractTes /** * @throws Exception If failed. */ + @Test public void testSingleThreadExecutor() throws Exception { ExecutorService exec = Executors.newSingleThreadExecutor(); @@ -63,6 +68,7 @@ public void testSingleThreadExecutor() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingleGridThreadExecutor() throws Exception { ExecutorService exec = Executors.newSingleThreadExecutor(new IgniteThreadFactory("gridName", "testThread")); @@ -84,6 +90,7 @@ public void testSingleGridThreadExecutor() throws Exception { /** * @throws ExecutionException If failed. */ + @Test public void testGridThreadPoolExecutor() throws Exception { IgniteThreadPoolExecutor exec = new IgniteThreadPoolExecutor("", "", 1, 1, 0, new LinkedBlockingQueue()); @@ -100,6 +107,7 @@ public void testGridThreadPoolExecutor() throws Exception { /** * @throws ExecutionException If failed. */ + @Test public void testGridThreadPoolExecutorRejection() throws Exception { IgniteThreadPoolExecutor exec = new IgniteThreadPoolExecutor("", "", 1, 1, 0, new LinkedBlockingQueue()); @@ -113,6 +121,7 @@ public void testGridThreadPoolExecutorRejection() throws Exception { /** * @throws ExecutionException If failed. 
*/ + @Test public void testGridThreadPoolExecutorPrestartCoreThreads() throws Exception { final AtomicInteger curPoolSize = new AtomicInteger(); @@ -190,4 +199,4 @@ private final class TestRunnable implements Runnable { } } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/thread/GridThreadTest.java b/modules/core/src/test/java/org/apache/ignite/thread/GridThreadTest.java index 56741f6d65b9e..2a3e35214e679 100644 --- a/modules/core/src/test/java/org/apache/ignite/thread/GridThreadTest.java +++ b/modules/core/src/test/java/org/apache/ignite/thread/GridThreadTest.java @@ -21,11 +21,15 @@ import java.util.Collection; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link org.apache.ignite.thread.IgniteThread}. */ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class GridThreadTest extends GridCommonAbstractTest { /** Thread count. */ private static final int THREAD_CNT = 3; @@ -33,6 +37,7 @@ public class GridThreadTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testAssertion() throws Exception { Collection ts = new ArrayList<>(); @@ -50,4 +55,4 @@ public void testAssertion() throws Exception { for (IgniteThread t : ts) t.join(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/thread/IgniteThreadPoolSizeTest.java b/modules/core/src/test/java/org/apache/ignite/thread/IgniteThreadPoolSizeTest.java index f8ea38a2e6a57..6ef4cb65cacf5 100644 --- a/modules/core/src/test/java/org/apache/ignite/thread/IgniteThreadPoolSizeTest.java +++ b/modules/core/src/test/java/org/apache/ignite/thread/IgniteThreadPoolSizeTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.Ignition; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteThreadPoolSizeTest extends GridCommonAbstractTest { /** Wrong thread pool size value for testing */ private static final int WRONG_VALUE = 0; @@ -39,6 +43,7 @@ private IgniteConfiguration configuration() { /** * @throws Exception If failed. */ + @Test public void testAsyncCallbackPoolSize() throws Exception { testWrongPoolSize(configuration().setAsyncCallbackPoolSize(WRONG_VALUE)); } @@ -46,6 +51,7 @@ public void testAsyncCallbackPoolSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgfsThreadPoolSize() throws Exception { testWrongPoolSize(configuration().setIgfsThreadPoolSize(WRONG_VALUE)); } @@ -53,6 +59,7 @@ public void testIgfsThreadPoolSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testManagementThreadPoolSize() throws Exception { testWrongPoolSize(configuration().setManagementThreadPoolSize(WRONG_VALUE)); } @@ -60,6 +67,7 @@ public void testManagementThreadPoolSize() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPeerClassLoadingThreadPoolSize() throws Exception { testWrongPoolSize(configuration().setPeerClassLoadingThreadPoolSize(WRONG_VALUE)); } @@ -67,6 +75,7 @@ public void testPeerClassLoadingThreadPoolSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPublicThreadPoolSize() throws Exception { testWrongPoolSize(configuration().setPublicThreadPoolSize(WRONG_VALUE)); } @@ -74,6 +83,7 @@ public void testPublicThreadPoolSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalanceThreadPoolSize() throws Exception { testWrongPoolSize(configuration().setRebalanceThreadPoolSize(WRONG_VALUE)); } @@ -81,6 +91,7 @@ public void testRebalanceThreadPoolSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSystemThreadPoolSize() throws Exception { testWrongPoolSize(configuration().setSystemThreadPoolSize(WRONG_VALUE)); } @@ -88,6 +99,7 @@ public void testSystemThreadPoolSize() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUtilityCachePoolSize() throws Exception { testWrongPoolSize(configuration().setUtilityCachePoolSize(WRONG_VALUE)); } @@ -95,6 +107,7 @@ public void testUtilityCachePoolSize() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConnectorThreadPoolSize() throws Exception { final IgniteConfiguration cfg = configuration(); diff --git a/modules/core/src/test/java/org/apache/ignite/util/AttributeNodeFilterSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/AttributeNodeFilterSelfTest.java index 2463d9a2d2c79..21c0fdbd3a1d4 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/AttributeNodeFilterSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/AttributeNodeFilterSelfTest.java @@ -28,18 +28,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link AttributeNodeFilter}. */ +@RunWith(JUnit4.class) public class AttributeNodeFilterSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private Map attrs; @@ -47,8 +45,6 @@ public class AttributeNodeFilterSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - if (attrs != null) cfg.setUserAttributes(attrs); @@ -68,6 +64,7 @@ public class AttributeNodeFilterSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testSingleAttribute() throws Exception { IgnitePredicate filter = new AttributeNodeFilter("attr", "value"); @@ -82,6 +79,7 @@ public void testSingleAttribute() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingleAttributeNullValue() throws Exception { IgnitePredicate filter = new AttributeNodeFilter("attr", null); @@ -95,6 +93,7 @@ public void testSingleAttributeNullValue() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleAttributes() throws Exception { IgnitePredicate filter = new AttributeNodeFilter(F.asMap("attr1", "value1", "attr2", "value2")); @@ -111,6 +110,7 @@ public void testMultipleAttributes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleAttributesNullValues() throws Exception { IgnitePredicate filter = new AttributeNodeFilter(F.asMap("attr1", null, "attr2", null)); @@ -126,6 +126,7 @@ public void testMultipleAttributesNullValues() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClusterGroup() throws Exception { Ignite group1 = startGridsMultiThreaded(3); @@ -146,6 +147,7 @@ public void testClusterGroup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheFilter() throws Exception { Ignite group1 = startGridsMultiThreaded(3); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridCommandHandlerSslTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridCommandHandlerSslTest.java new file mode 100644 index 0000000000000..9616d51f4b87e --- /dev/null +++ b/modules/core/src/test/java/org/apache/ignite/util/GridCommandHandlerSslTest.java @@ -0,0 +1,173 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.util; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import org.apache.ignite.Ignite; +import org.apache.ignite.configuration.ConnectorConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.commandline.CommandHandler; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.ssl.SslContextFactory; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.internal.commandline.CommandHandler.EXIT_CODE_CONNECTION_FAILED; +import static org.apache.ignite.internal.commandline.CommandHandler.EXIT_CODE_OK; + +/** + * Command line handler test with SSL. 
+ */ +@RunWith(JUnit4.class) +public class GridCommandHandlerSslTest extends GridCommonAbstractTest { + /** */ + private volatile String[] cipherSuites; + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + cleanPersistenceDir(); + + stopAllGrids(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** + * @return SSL factory. + */ + @NotNull private SslContextFactory createSslFactory() { + SslContextFactory factory = (SslContextFactory)GridTestUtils.sslFactory(); + + factory.setCipherSuites(cipherSuites); + + return factory; + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration()); + cfg.getDataStorageConfiguration().getDefaultDataRegionConfiguration().setMaxSize(100 * 1024 * 1024); + cfg.getDataStorageConfiguration().getDefaultDataRegionConfiguration().setPersistenceEnabled(true); + + cfg.setConnectorConfiguration(new ConnectorConfiguration()); + cfg.getConnectorConfiguration().setSslEnabled(true); + cfg.setSslContextFactory(createSslFactory()); + + return cfg; + } + + /** + * @param nodeCipherSuite Ciphers suites to set on node. + * @param utilityCipherSuite Ciphers suites to set on utility. + * @param expRes Expected result. + * @throws Exception If failed. + */ + private void activate(String nodeCipherSuite, String utilityCipherSuite, int expRes) throws Exception { + cipherSuites = F.isEmpty(nodeCipherSuite) ? 
null : nodeCipherSuite.split(","); + + Ignite ignite = startGrids(1); + + assertFalse(ignite.cluster().active()); + + final CommandHandler cmd = new CommandHandler(); + + List params = new ArrayList<>(); + params.add("--activate"); + params.add("--keystore"); + params.add(GridTestUtils.keyStorePath("node01")); + params.add("--keystore-password"); + params.add(GridTestUtils.keyStorePassword()); + + if (!F.isEmpty(utilityCipherSuite)) { + params.add("--ssl-cipher-suites"); + params.add(utilityCipherSuite); + } + + assertEquals(expRes, cmd.execute(params)); + + if (expRes == EXIT_CODE_OK) + assertTrue(ignite.cluster().active()); + else + assertFalse(ignite.cluster().active()); + + assertEquals(EXIT_CODE_CONNECTION_FAILED, cmd.execute(Arrays.asList("--deactivate", "--yes"))); + } + + /** + * @throws Exception If test failed. + */ + @Test + public void testDefaultCipherSuite() throws Exception { + cipherSuites = null; + + activate(null, null, EXIT_CODE_OK); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testSameCipherSuite() throws Exception { + String ciphers = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256," + + "TLS_RSA_WITH_AES_128_GCM_SHA256," + + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"; + + activate(ciphers, ciphers, EXIT_CODE_OK); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testOneCommonCipherSuite() throws Exception { + String nodeCipherSuites = "TLS_RSA_WITH_AES_128_GCM_SHA256," + + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"; + + String utilityCipherSuites = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256," + + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"; + + activate(nodeCipherSuites, utilityCipherSuites, EXIT_CODE_OK); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testNoCommonCipherSuite() throws Exception { + String nodeCipherSuites = "TLS_RSA_WITH_AES_128_GCM_SHA256"; + + String utilityCipherSuites = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256," + + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"; + + activate(nodeCipherSuites, utilityCipherSuites, EXIT_CODE_CONNECTION_FAILED); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridCommandHandlerTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridCommandHandlerTest.java index cf2bd2873e72a..2021fc2853779 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridCommandHandlerTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridCommandHandlerTest.java @@ -17,11 +17,9 @@ package org.apache.ignite.util; -import javax.cache.processor.EntryProcessor; -import javax.cache.processor.EntryProcessorException; -import javax.cache.processor.MutableEntry; import java.io.ByteArrayOutputStream; import java.io.File; +import java.io.IOException; import java.io.PrintStream; import java.nio.file.DirectoryStream; import java.nio.file.Files; @@ -34,6 +32,7 @@ import java.util.List; import java.util.Map; import java.util.TreeMap; +import java.util.UUID; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; @@ -42,25 +41,33 @@ import java.util.concurrent.atomic.LongAdder; import java.util.regex.Matcher; import java.util.regex.Pattern; +import javax.cache.processor.EntryProcessor; +import javax.cache.processor.EntryProcessorException; +import javax.cache.processor.MutableEntry; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteAtomicSequence; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.cluster.ClusterNode; +import 
org.apache.ignite.configuration.AtomicConfiguration; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.ConnectorConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.GridJobExecuteResponse; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.commandline.Command; import org.apache.ignite.internal.commandline.CommandHandler; import org.apache.ignite.internal.commandline.cache.CacheCommand; import org.apache.ignite.internal.managers.communication.GridIoMessage; import org.apache.ignite.internal.pagemem.wal.record.DataEntry; +import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.CacheObjectImpl; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheEntryEx; @@ -76,11 +83,13 @@ import org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry; import org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.processors.datastructures.GridCacheInternalKeyImpl; import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.X; +import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.U; import 
org.apache.ignite.internal.visor.tx.VisorTxInfo; import org.apache.ignite.internal.visor.tx.VisorTxTaskResult; @@ -94,6 +103,7 @@ import org.apache.ignite.transactions.TransactionRollbackException; import org.apache.ignite.transactions.TransactionTimeoutException; import org.jetbrains.annotations.NotNull; +import org.junit.Test; import static java.nio.file.Files.delete; import static java.nio.file.Files.newDirectoryStream; @@ -102,7 +112,10 @@ import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.commandline.CommandHandler.EXIT_CODE_OK; import static org.apache.ignite.internal.commandline.CommandHandler.EXIT_CODE_UNEXPECTED_ERROR; -import static org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsDumpTask.IDLE_DUMP_FILE_PREMIX; +import static org.apache.ignite.internal.commandline.OutputFormat.MULTI_LINE; +import static org.apache.ignite.internal.commandline.OutputFormat.SINGLE_LINE; +import static org.apache.ignite.internal.commandline.cache.CacheCommand.HELP; +import static org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsDumpTask.IDLE_DUMP_FILE_PREFIX; import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC; import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED; @@ -120,6 +133,12 @@ public class GridCommandHandlerTest extends GridCommonAbstractTest { /** Option is used for auto confirmation. */ private static final String CMD_AUTO_CONFIRMATION = "--yes"; + /** Atomic configuration. */ + private AtomicConfiguration atomicConfiguration; + + /** Additional data region configuration. */ + private DataRegionConfiguration dataRegionConfiguration; + /** * @return Folder in work directory. * @throws IgniteCheckedException If failed to resolve folder name. 
@@ -147,11 +166,12 @@ protected File folder(String folder) throws IgniteCheckedException { cleanPersistenceDir(); - //delete idle-verify dump files. + // Delete idle-verify dump files. try (DirectoryStream files = newDirectoryStream( - Paths.get(U.defaultWorkDirectory()), - entry -> entry.toFile().getName().startsWith(IDLE_DUMP_FILE_PREMIX) - )) { + Paths.get(U.defaultWorkDirectory()), + entry -> entry.toFile().getName().startsWith(IDLE_DUMP_FILE_PREFIX) + ) + ) { for (Path path : files) delete(path); } @@ -180,12 +200,19 @@ protected void injectTestSystemOut() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + if (atomicConfiguration != null) + cfg.setAtomicConfiguration(atomicConfiguration); + cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); cfg.setConnectorConfiguration(new ConnectorConfiguration()); - DataStorageConfiguration memCfg = new DataStorageConfiguration().setDefaultDataRegionConfiguration( - new DataRegionConfiguration().setMaxSize(50L * 1024 * 1024)); + DataStorageConfiguration memCfg = new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration() + .setMaxSize(50L * 1024 * 1024)); + + if (dataRegionConfiguration != null) + memCfg.setDataRegionConfigurations(dataRegionConfiguration); cfg.setDataStorageConfiguration(memCfg); @@ -205,6 +232,7 @@ protected void injectTestSystemOut() { * * @throws Exception If failed. */ + @Test public void testActivate() throws Exception { Ignite ignite = startGrids(1); @@ -227,9 +255,11 @@ protected int execute(String... args) { * @param args Arguments. 
* @return Result of execution */ - protected int execute(ArrayList args) { - // Add force to avoid interactive confirmation - args.add(CMD_AUTO_CONFIRMATION); + protected int execute(List args) { + if(!F.isEmpty(args) && !"--help".equalsIgnoreCase(args.get(0))) { + // Add force to avoid interactive confirmation. + args.add(CMD_AUTO_CONFIRMATION); + } return new CommandHandler().execute(args); } @@ -265,6 +295,7 @@ protected int execute(CommandHandler hnd, String... args) { * * @throws Exception If failed. */ + @Test public void testDeactivate() throws Exception { Ignite ignite = startGrids(1); @@ -284,6 +315,7 @@ public void testDeactivate() throws Exception { * * @throws Exception If failed. */ + @Test public void testState() throws Exception { Ignite ignite = startGrids(1); @@ -301,6 +333,7 @@ public void testState() throws Exception { * * @throws Exception If failed. */ + @Test public void testBaselineCollect() throws Exception { Ignite ignite = startGrids(1); @@ -337,6 +370,7 @@ private String consistentIds(Ignite... ignites) { * * @throws Exception If failed. */ + @Test public void testBaselineAdd() throws Exception { Ignite ignite = startGrids(1); @@ -357,6 +391,7 @@ public void testBaselineAdd() throws Exception { * * @throws Exception If failed. */ + @Test public void testBaselineRemove() throws Exception { Ignite ignite = startGrids(1); Ignite other = startGrid("nodeToStop"); @@ -380,6 +415,7 @@ public void testBaselineRemove() throws Exception { * * @throws Exception If failed. */ + @Test public void testBaselineSet() throws Exception { Ignite ignite = startGrids(1); @@ -401,6 +437,7 @@ public void testBaselineSet() throws Exception { * * @throws Exception If failed. */ + @Test public void testBaselineVersion() throws Exception { Ignite ignite = startGrids(1); @@ -422,6 +459,7 @@ public void testBaselineVersion() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testActiveTransactions() throws Exception { Ignite ignite = startGridsMultiThreaded(2); @@ -450,9 +488,7 @@ public void testActiveTransactions() throws Exception { // Basic test. validate(h, map -> { - ClusterNode node = grid(0).cluster().localNode(); - - VisorTxTaskResult res = map.get(node); + VisorTxTaskResult res = map.get(grid(0).cluster().localNode()); for (VisorTxInfo info : res.getInfos()) { if (info.getSize() == 100) { @@ -473,7 +509,7 @@ public void testActiveTransactions() throws Exception { for (Map.Entry entry : map.entrySet()) assertEquals(entry.getKey().equals(node) ? 1 : 0, entry.getValue().getInfos().size()); - }, "--tx", "label", "label1"); + }, "--tx", "--label", "label1"); // Test filter by label regex. validate(h, map -> { @@ -495,7 +531,7 @@ else if (entry.getKey().equals(node2)) { assertTrue(entry.getValue().getInfos().isEmpty()); } - }, "--tx", "label", "^label[0-9]"); + }, "--tx", "--label", "^label[0-9]"); // Test filter by empty label. validate(h, map -> { @@ -504,7 +540,7 @@ else if (entry.getKey().equals(node2)) { for (VisorTxInfo info : res.getInfos()) assertNull(info.getLabel()); - }, "--tx", "label", "null"); + }, "--tx", "--label", "null"); // test check minSize int minSize = 10; @@ -516,21 +552,21 @@ else if (entry.getKey().equals(node2)) { for (VisorTxInfo txInfo : res.getInfos()) assertTrue(txInfo.getSize() >= minSize); - }, "--tx", "minSize", Integer.toString(minSize)); + }, "--tx", "--min-size", Integer.toString(minSize)); // test order by size. validate(h, map -> { VisorTxTaskResult res = map.get(grid(0).localNode()); assertTrue(res.getInfos().get(0).getSize() >= res.getInfos().get(1).getSize()); - }, "--tx", "order", "SIZE"); + }, "--tx", "--order", "SIZE"); // test order by duration. 
validate(h, map -> { VisorTxTaskResult res = map.get(grid(0).localNode()); assertTrue(res.getInfos().get(0).getDuration() >= res.getInfos().get(1).getDuration()); - }, "--tx", "order", "DURATION"); + }, "--tx", "--order", "DURATION"); // test order by start_time. validate(h, map -> { @@ -538,7 +574,7 @@ else if (entry.getKey().equals(node2)) { for (int i = res.getInfos().size() - 1; i > 1; i--) assertTrue(res.getInfos().get(i - 1).getStartTime() >= res.getInfos().get(i).getStartTime()); - }, "--tx", "order", CommandHandler.CMD_TX_ORDER_START_TIME); + }, "--tx", "--order", "START_TIME"); // Trigger topology change and test connection. IgniteInternalFuture startFut = multithreadedAsync(() -> { @@ -563,9 +599,9 @@ else if (entry.getKey().equals(node2)) { VisorTxInfo info = killedEntry.getValue().getInfos().get(0); assertEquals(toKill[0].getXid(), info.getXid()); - }, "--tx", "kill", - "xid", toKill[0].getXid().toString(), // Use saved on first run value. - "nodes", grid(0).localNode().consistentId().toString()); + }, "--tx", "--kill", + "--xid", toKill[0].getXid().toString(), // Use saved on first run value. + "--nodes", grid(0).localNode().consistentId().toString()); unlockLatch.countDown(); @@ -581,6 +617,7 @@ else if (entry.getKey().equals(node2)) { /** * */ + @Test public void testKillHangingLocalTransactions() throws Exception { Ignite ignite = startGridsMultiThreaded(2); @@ -632,7 +669,7 @@ public void testKillHangingLocalTransactions() throws Exception { assertEquals(tx0.xid(), info.getXid()); assertEquals(1, map.size()); - }, "--tx", "kill"); + }, "--tx", "--kill"); tx0.finishFuture().get(); @@ -650,6 +687,7 @@ public void testKillHangingLocalTransactions() throws Exception { /** * Simulate uncommitted backup transactions and test rolling back using utility. */ + @Test public void testKillHangingRemoteTransactions() throws Exception { final int cnt = 3; @@ -797,7 +835,7 @@ public void testKillHangingRemoteTransactions() throws Exception { // Check kill. 
validate(h, map -> { // No-op. - }, "--tx", "kill"); + }, "--tx", "--kill"); // Wait for all remote txs to finish. for (Ignite ignite : G.allGrids()) { @@ -828,6 +866,7 @@ public void testKillHangingRemoteTransactions() throws Exception { * * @throws Exception If failed. */ + @Test public void testBaselineAddOnNotActiveCluster() throws Exception { Ignite ignite = startGrid(1); @@ -854,36 +893,41 @@ public void testBaselineAddOnNotActiveCluster() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheHelp() throws Exception { - Ignite ignite = startGrids(1); - - ignite.cluster().active(true); - injectTestSystemOut(); assertEquals(EXIT_CODE_OK, execute("--cache", "help")); for (CacheCommand cmd : CacheCommand.values()) { - if (cmd != CacheCommand.HELP) - assertTrue(cmd.text(), testOut.toString().contains(cmd.text())); + if (cmd != HELP) + assertTrue(cmd.text(), testOut.toString().contains(cmd.toString())); } } /** * @throws Exception If failed. */ + @Test + public void testHelp() throws Exception { + injectTestSystemOut(); + + assertEquals(EXIT_CODE_OK, execute("--help")); + + for (Command cmd : Command.values()) + assertTrue(cmd.text(), testOut.toString().contains(cmd.toString())); + } + + /** + * @throws Exception If failed. 
+ */ + @Test public void testCacheIdleVerify() throws Exception { - Ignite ignite = startGrids(2); + IgniteEx ignite = (IgniteEx)startGrids(2); ignite.cluster().active(true); - IgniteCache cache = ignite.createCache(new CacheConfiguration<>() - .setAffinity(new RendezvousAffinityFunction(false, 32)) - .setBackups(1) - .setName(DEFAULT_CACHE_NAME)); - - for (int i = 0; i < 100; i++) - cache.put(i, i); + createCacheAndPreload(ignite, 100); injectTestSystemOut(); @@ -893,7 +937,7 @@ public void testCacheIdleVerify() throws Exception { HashSet clearKeys = new HashSet<>(Arrays.asList(1, 2, 3, 4, 5, 6)); - ((IgniteEx)ignite).context().cache().cache(DEFAULT_CACHE_NAME).clearLocallyAll(clearKeys, true, true, true); + ignite.context().cache().cache(DEFAULT_CACHE_NAME).clearLocallyAll(clearKeys, true, true, true); assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify")); @@ -905,20 +949,13 @@ public void testCacheIdleVerify() throws Exception { * * @throws Exception If failed. */ + @Test public void testCacheIdleVerifyTwoConflictTypes() throws Exception { IgniteEx ignite = (IgniteEx)startGrids(2); ignite.cluster().active(true); - int parts = 32; - - IgniteCache cache = ignite.createCache(new CacheConfiguration<>() - .setAffinity(new RendezvousAffinityFunction(false, parts)) - .setBackups(1) - .setName(DEFAULT_CACHE_NAME)); - - for (int i = 0; i < 100; i++) - cache.put(i, i); + createCacheAndPreload(ignite, 100); injectTestSystemOut(); @@ -930,7 +967,7 @@ public void testCacheIdleVerifyTwoConflictTypes() throws Exception { corruptDataEntry(cacheCtx, 1, true, false); - corruptDataEntry(cacheCtx, 1 + parts / 2, false, true); + corruptDataEntry(cacheCtx, 1 + cacheCtx.config().getAffinity().partitions() / 2, false, true); assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify")); @@ -942,17 +979,17 @@ public void testCacheIdleVerifyTwoConflictTypes() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testCacheIdleVerifyDump() throws Exception { IgniteEx ignite = (IgniteEx)startGrids(3); ignite.cluster().active(true); - int parts = 32; + int keysCount = 20;//less than parts number for ability to check skipZeros flag. - IgniteCache cache = ignite.createCache(new CacheConfiguration<>() - .setAffinity(new RendezvousAffinityFunction(false, parts)) - .setBackups(1) - .setName(DEFAULT_CACHE_NAME)); + createCacheAndPreload(ignite, keysCount); + + int parts = ignite.affinity(DEFAULT_CACHE_NAME).partitions(); ignite.createCache(new CacheConfiguration<>() .setAffinity(new RendezvousAffinityFunction(false, parts)) @@ -961,14 +998,9 @@ public void testCacheIdleVerifyDump() throws Exception { injectTestSystemOut(); - int keysCount = 20;//less than parts number for ability to check skipZeros flag. - - for (int i = 0; i < keysCount; i++) - cache.put(i, i); - assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump", DEFAULT_CACHE_NAME)); - assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump", "--skipZeros", DEFAULT_CACHE_NAME)); + assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump", "--skip-zeros", DEFAULT_CACHE_NAME)); Matcher fileNameMatcher = dumpFileNameMatcher(); @@ -1028,37 +1060,382 @@ private void assertSort(int expectedPartsCount, String output) { * * @throws Exception If failed. */ + @Test public void testCacheIdleVerifyDumpForCorruptedData() throws Exception { IgniteEx ignite = (IgniteEx)startGrids(3); ignite.cluster().active(true); + createCacheAndPreload(ignite, 100); + + injectTestSystemOut(); + + corruptingAndCheckDefaultCache(ignite, "ALL"); + } + + /** + * @param ignite Ignite. + * @param cacheFilter cacheFilter. 
+ */ + private void corruptingAndCheckDefaultCache(IgniteEx ignite, String cacheFilter) throws IOException { + injectTestSystemOut(); + + GridCacheContext cacheCtx = ignite.cachex(DEFAULT_CACHE_NAME).context(); + + corruptDataEntry(cacheCtx, 0, true, false); + + corruptDataEntry(cacheCtx, cacheCtx.config().getAffinity().partitions() / 2, false, true); + + assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump", "--cache-filter", cacheFilter)); + + Matcher fileNameMatcher = dumpFileNameMatcher(); + + if (fileNameMatcher.find()) { + String dumpWithConflicts = new String(Files.readAllBytes(Paths.get(fileNameMatcher.group(1)))); + + assertTrue(dumpWithConflicts.contains("found 2 conflict partitions: [counterConflicts=1, hashConflicts=1]")); + } + else + fail("Should be found dump with conflicts"); + } + + /** + * Tests that idle verify prints partitions info when a node fails. + * + * @throws Exception If failed. + */ + @Test + public void testCacheIdleVerifyDumpWhenNodeFailing() throws Exception { + Ignite ignite = startGrids(3); + + Ignite unstable = startGrid("unstable"); + + ignite.cluster().active(true); + + createCacheAndPreload(ignite, 100); + + for (int i = 0; i < 3; i++) { + TestRecordingCommunicationSpi.spi(unstable).blockMessages(GridJobExecuteResponse.class, + getTestIgniteInstanceName(i)); + } + + injectTestSystemOut(); + + IgniteInternalFuture fut = GridTestUtils.runAsync(() -> { + assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump")); + }); + + TestRecordingCommunicationSpi.spi(unstable).waitForBlocked(); + + UUID unstableNodeId = unstable.cluster().localNode().id(); + + unstable.close(); + + fut.get(); + + checkExceptionMessageOnReport(unstableNodeId); + } + + /** + * Tests that idle verify prints partitions info when several nodes fail at the same time. + * + * @throws Exception If failed. 
+ */ + @Test + public void testCacheIdleVerifyDumpWhenSeveralNodesFailing() throws Exception { + int nodes = 6; + + Ignite ignite = startGrids(nodes); + + List unstableNodes = new ArrayList<>(nodes / 2); + + for (int i = 0; i < nodes; i++) { + if (i % 2 == 1) + unstableNodes.add(ignite(i)); + } + + ignite.cluster().active(true); + + createCacheAndPreload(ignite, 100); + + for (Ignite unstable : unstableNodes) { + for (int i = 0; i < nodes; i++) { + TestRecordingCommunicationSpi.spi(unstable).blockMessages(GridJobExecuteResponse.class, + getTestIgniteInstanceName(i)); + } + } + + injectTestSystemOut(); + + IgniteInternalFuture fut = GridTestUtils.runAsync( + () -> assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump")) + ); + + List unstableNodeIds = new ArrayList<>(nodes / 2); + + for (Ignite unstable : unstableNodes) { + TestRecordingCommunicationSpi.spi(unstable).waitForBlocked(); + + unstableNodeIds.add(unstable.cluster().localNode().id()); + + unstable.close(); + } + + fut.get(); + + for (UUID unstableId : unstableNodeIds) + checkExceptionMessageOnReport(unstableId); + } + + /** + * Creates default cache and preloads some data entries. + * + * @param ignite Ignite. + * @param countEntries Count of entries. + */ + private void createCacheAndPreload(Ignite ignite, int countEntries) { + ignite.createCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME) + .setAffinity(new RendezvousAffinityFunction(false, 32)) + .setBackups(1)); + + try (IgniteDataStreamer streamer = ignite.dataStreamer(DEFAULT_CACHE_NAME)) { + for (int i = 0; i < countEntries; i++) + streamer.addData(i, i); + } + } + + /** + * Tries to find the node failure exception message in the output report. + * + * @param unstableNodeId Unstable node id. 
+ */ + private void checkExceptionMessageOnReport(UUID unstableNodeId) throws IOException { + Matcher fileNameMatcher = dumpFileNameMatcher(); + + if (fileNameMatcher.find()) { + String dumpWithConflicts = new String(Files.readAllBytes(Paths.get(fileNameMatcher.group(1)))); + + assertTrue(dumpWithConflicts.contains("Idle verify failed on nodes:")); + + assertTrue(dumpWithConflicts.contains("Node ID: " + unstableNodeId + "\n" + + "Exception message:\n" + + "Node has left grid: " + unstableNodeId)); + } + else + fail("Should be found dump with conflicts"); + } + + /** + * Tests that idle verify prints partitions info over system caches. + * + * @throws Exception If failed. + */ + @Test + public void testCacheIdleVerifyDumpForCorruptedDataOnSystemCache() throws Exception { int parts = 32; - IgniteCache cache = ignite.createCache(new CacheConfiguration<>() + atomicConfiguration = new AtomicConfiguration() .setAffinity(new RendezvousAffinityFunction(false, parts)) - .setBackups(1) - .setName(DEFAULT_CACHE_NAME)); + .setBackups(2); + + IgniteEx ignite = (IgniteEx)startGrids(3); + + ignite.cluster().active(true); injectTestSystemOut(); + // Adding some assignments without deployments. 
+ for (int i = 0; i < 100; i++) { + ignite.semaphore("s" + i, i, false, true); + + ignite.atomicSequence("sq" + i, 0, true) + .incrementAndGet(); + } + + CacheGroupContext storedSysCacheCtx = ignite.context().cache().cacheGroup(CU.cacheId("default-ds-group")); + + assertNotNull(storedSysCacheCtx); + + corruptDataEntry(storedSysCacheCtx.caches().get(0), new GridCacheInternalKeyImpl("sq0", + "default-ds-group"), true, false); + + corruptDataEntry(storedSysCacheCtx.caches().get(0), new GridCacheInternalKeyImpl("sq" + parts / 2, + "default-ds-group"), false, true); + + CacheGroupContext memorySysCacheCtx = ignite.context().cache().cacheGroup(CU.cacheId("default-volatile-ds-group")); + + assertNotNull(memorySysCacheCtx); + + corruptDataEntry(memorySysCacheCtx.caches().get(0), new GridCacheInternalKeyImpl("s0", + "default-volatile-ds-group"), true, false); + + corruptDataEntry(memorySysCacheCtx.caches().get(0), new GridCacheInternalKeyImpl("s" + parts / 2, + "default-volatile-ds-group"), false, true); + + assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump", "--cache-filter", "SYSTEM")); + + Matcher fileNameMatcher = dumpFileNameMatcher(); + + if (fileNameMatcher.find()) { + String dumpWithConflicts = new String(Files.readAllBytes(Paths.get(fileNameMatcher.group(1)))); + + assertTrue(dumpWithConflicts.contains("found 4 conflict partitions: [counterConflicts=2, " + + "hashConflicts=2]")); + } + else + fail("Should be found dump with conflicts"); + } + + /** + * Tests that idle verify print partitions info over persistence client caches. + * + * @throws Exception If failed. + */ + @Test + public void testCacheIdleVerifyDumpForCorruptedDataOnPersistenceClientCache() throws Exception { + IgniteEx ignite = (IgniteEx)startGrids(3); + + ignite.cluster().active(true); + + createCacheAndPreload(ignite, 100); + + corruptingAndCheckDefaultCache(ignite, "PERSISTENT"); + } + + /** + * Tests that idle verify print partitions info over none-persistence client caches. 
+ * + * @throws Exception If failed. + */ + @Test + public void testCacheIdleVerifyDumpForCorruptedDataOnNonePersistenceClientCache() throws Exception { + int parts = 32; + + dataRegionConfiguration = new DataRegionConfiguration() + .setName("none-persistence-region"); + + IgniteEx ignite = (IgniteEx)startGrids(3); + + ignite.cluster().active(true); + + IgniteCache cache = ignite.createCache(new CacheConfiguration<>() + .setAffinity(new RendezvousAffinityFunction(false, parts)) + .setBackups(2) + .setName(DEFAULT_CACHE_NAME) + .setDataRegionName("none-persistence-region")); + + // Adding some assignments without deployments. for (int i = 0; i < 100; i++) cache.put(i, i); + injectTestSystemOut(); + GridCacheContext cacheCtx = ignite.cachex(DEFAULT_CACHE_NAME).context(); corruptDataEntry(cacheCtx, 0, true, false); - corruptDataEntry(cacheCtx, 0 + parts / 2, false, true); + corruptDataEntry(cacheCtx, parts / 2, false, true); - assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump")); + assertEquals( + EXIT_CODE_OK, + execute("--cache", "idle_verify", "--dump", "--cache-filter", "NOT_PERSISTENT") + ); Matcher fileNameMatcher = dumpFileNameMatcher(); if (fileNameMatcher.find()) { String dumpWithConflicts = new String(Files.readAllBytes(Paths.get(fileNameMatcher.group(1)))); - assertTrue(dumpWithConflicts.contains("found 2 conflict partitions: [counterConflicts=1, hashConflicts=1]")); + assertTrue(dumpWithConflicts.contains("found 1 conflict partitions: [counterConflicts=0, " + + "hashConflicts=1]")); + } + else + fail("Should be found dump with conflicts"); + } + + /** + * Tests that idle verify prints partitions info with an excluded cache group. + * + * @throws Exception If failed. 
+ */ + @Test + public void testCacheIdleVerifyDumpExcludedCacheGrp() throws Exception { + IgniteEx ignite = (IgniteEx)startGrids(3); + + ignite.cluster().active(true); + + int parts = 32; + + IgniteCache cache = ignite.createCache(new CacheConfiguration<>() + .setAffinity(new RendezvousAffinityFunction(false, parts)) + .setGroupName("shared_grp") + .setBackups(1) + .setName(DEFAULT_CACHE_NAME)); + + IgniteCache secondCache = ignite.createCache(new CacheConfiguration<>() + .setAffinity(new RendezvousAffinityFunction(false, parts)) + .setGroupName("shared_grp") + .setBackups(1) + .setName(DEFAULT_CACHE_NAME + "_second")); + + injectTestSystemOut(); + + assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump", "--excludeCaches", "shared_grp")); + + Matcher fileNameMatcher = dumpFileNameMatcher(); + + if (fileNameMatcher.find()) { + String dumpWithConflicts = new String(Files.readAllBytes(Paths.get(fileNameMatcher.group(1)))); + + assertTrue(dumpWithConflicts.contains("idle_verify check has finished, found 0 partitions")); + } + else + fail("Should be found dump with conflicts"); + } + + /** + * Tests that idle verify prints partitions info with excluded caches. + * + * @throws Exception If failed. 
+ */ + @Test + public void testCacheIdleVerifyDumpExcludedCaches() throws Exception { + IgniteEx ignite = (IgniteEx)startGrids(3); + + ignite.cluster().active(true); + + int parts = 32; + + ignite.createCache(new CacheConfiguration<>() + .setAffinity(new RendezvousAffinityFunction(false, parts)) + .setGroupName("shared_grp") + .setBackups(1) + .setName(DEFAULT_CACHE_NAME)); + + ignite.createCache(new CacheConfiguration<>() + .setAffinity(new RendezvousAffinityFunction(false, parts)) + .setGroupName("shared_grp") + .setBackups(1) + .setName(DEFAULT_CACHE_NAME + "_second")); + + ignite.createCache(new CacheConfiguration<>() + .setAffinity(new RendezvousAffinityFunction(false, parts)) + .setBackups(1) + .setName(DEFAULT_CACHE_NAME + "_third")); + + injectTestSystemOut(); + + assertEquals(EXIT_CODE_OK, execute("--cache", "idle_verify", "--dump", "--excludeCaches", DEFAULT_CACHE_NAME + + "," + DEFAULT_CACHE_NAME + "_second")); + + Matcher fileNameMatcher = dumpFileNameMatcher(); + + if (fileNameMatcher.find()) { + String dumpWithConflicts = new String(Files.readAllBytes(Paths.get(fileNameMatcher.group(1)))); + + assertTrue(dumpWithConflicts.contains("idle_verify check has finished, found 32 partitions")); + assertTrue(dumpWithConflicts.contains("default_third")); + assertTrue(!dumpWithConflicts.contains("shared_grp")); } else fail("Should be found dump with conflicts"); @@ -1076,6 +1453,7 @@ public void testCacheIdleVerifyDumpForCorruptedData() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCacheIdleVerifyMovingParts() throws Exception { IgniteEx ignite = (IgniteEx)startGrids(2); @@ -1110,6 +1488,7 @@ public void testCacheIdleVerifyMovingParts() throws Exception { /** * */ + @Test public void testCacheContention() throws Exception { int cnt = 10; @@ -1183,6 +1562,7 @@ public void testCacheContention() throws Exception { /** * */ + @Test public void testCacheSequence() throws Exception { Ignite ignite = startGrid(); @@ -1207,6 +1587,7 @@ public void testCacheSequence() throws Exception { /** * */ + @Test public void testCacheGroups() throws Exception { Ignite ignite = startGrid(); @@ -1231,6 +1612,7 @@ public void testCacheGroups() throws Exception { /** * */ + @Test public void testCacheAffinity() throws Exception { Ignite ignite = startGrid(); @@ -1254,21 +1636,125 @@ public void testCacheAffinity() throws Exception { assertTrue(testOut.toString().contains("affCls=RendezvousAffinityFunction")); } + /** */ + @Test + public void testCacheConfigNoOutputFormat() throws Exception { + testCacheConfig(null, 1, 1); + } + + /** */ + @Test + public void testCacheConfigSingleLineOutputFormatSingleNodeSignleCache() throws Exception { + testCacheConfigSingleLineOutputFormat(1, 1); + } + + /** */ + @Test + public void testCacheConfigSingleLineOutputFormatTwoNodeSignleCache() throws Exception { + testCacheConfigSingleLineOutputFormat(2, 1); + } + + /** */ + @Test + public void testCacheConfigSingleLineOutputFormatTwoNodeManyCaches() throws Exception { + testCacheConfigSingleLineOutputFormat(2, 100); + } + + /** */ + @Test + public void testCacheConfigMultiLineOutputFormatSingleNodeSingleCache() throws Exception { + testCacheConfigMultiLineOutputFormat(1, 1); + } + + /** */ + @Test + public void testCacheConfigMultiLineOutputFormatTwoNodeSingleCache() throws Exception { + testCacheConfigMultiLineOutputFormat(2, 1); + } + + /** */ + @Test + public void testCacheConfigMultiLineOutputFormatTwoNodeManyCaches() throws Exception { + 
testCacheConfigMultiLineOutputFormat(2, 100); + } + + /** */ + private void testCacheConfigSingleLineOutputFormat(int nodesCnt, int cachesCnt) throws Exception { + testCacheConfig("single-line", nodesCnt, cachesCnt); + } + + /** */ + private void testCacheConfigMultiLineOutputFormat(int nodesCnt, int cachesCnt) throws Exception { + testCacheConfig("multi-line", nodesCnt, cachesCnt); + } + + /** */ + private void testCacheConfig(String outputFormat, int nodesCnt, int cachesCnt) throws Exception { + assertTrue("Invalid number of nodes or caches", nodesCnt > 0 && cachesCnt > 0); + + Ignite ignite = startGrid(nodesCnt); + + ignite.cluster().active(true); + + List ccfgs = new ArrayList<>(cachesCnt); + + for (int i = 0; i < cachesCnt; i++) { + ccfgs.add( + new CacheConfiguration<>() + .setAffinity(new RendezvousAffinityFunction(false, 32)) + .setBackups(1) + .setName(DEFAULT_CACHE_NAME + i) + ); + } + + ignite.createCaches(ccfgs); + + IgniteCache cache1 = ignite.cache(DEFAULT_CACHE_NAME + 0); + + for (int i = 0; i < 100; i++) + cache1.put(i, i); + + injectTestSystemOut(); + + int exitCode; + + if (outputFormat == null) + exitCode = execute("--cache", "list", ".*", "--config"); + else + exitCode = execute("--cache", "list", ".*", "--config", "--output-format", outputFormat); + + assertEquals(EXIT_CODE_OK, exitCode); + + String outStr = testOut.toString(); + + if (outputFormat == null || SINGLE_LINE.text().equals(outputFormat)) { + for (int i = 0; i < cachesCnt; i++) + assertTrue(outStr.contains("name=" + DEFAULT_CACHE_NAME + i)); + + assertTrue(outStr.contains("partitions=32")); + assertTrue(outStr.contains("function=o.a.i.cache.affinity.rendezvous.RendezvousAffinityFunction")); + } + else if (MULTI_LINE.text().equals(outputFormat)) { + for (int i = 0; i < cachesCnt; i++) + assertTrue(outStr.contains("[cache = '" + DEFAULT_CACHE_NAME + i + "']")); + + assertTrue(outStr.contains("Affinity Partitions: 32")); + assertTrue(outStr.contains("Affinity Function: 
o.a.i.cache.affinity.rendezvous.RendezvousAffinityFunction")); + } + else + fail("Unknown output format: " + outputFormat); + } + /** * */ + @Test public void testCacheDistribution() throws Exception { Ignite ignite = startGrids(2); ignite.cluster().active(true); - IgniteCache cache = ignite.createCache(new CacheConfiguration<>() - .setAffinity(new RendezvousAffinityFunction(false, 32)) - .setBackups(1) - .setName(DEFAULT_CACHE_NAME)); - - for (int i = 0; i < 100; i++) - cache.put(i, i); + createCacheAndPreload(ignite, 100); injectTestSystemOut(); @@ -1294,7 +1780,7 @@ public void testCacheDistribution() throws Exception { assertTrue(lastRowIndex > 0); // Last row is empty, but the previous line contains data - lastRowIndex = log.lastIndexOf('\n', lastRowIndex-1); + lastRowIndex = log.lastIndexOf('\n', lastRowIndex - 1); assertTrue(lastRowIndex > 0); @@ -1312,13 +1798,7 @@ public void testCacheResetLostPartitions() throws Exception { ignite.cluster().active(true); - IgniteCache cache = ignite.createCache(new CacheConfiguration<>() - .setAffinity(new RendezvousAffinityFunction(false, 32)) - .setBackups(1) - .setName(DEFAULT_CACHE_NAME)); - - for (int i = 0; i < 100; i++) - cache.put(i, i); + createCacheAndPreload(ignite, 100); injectTestSystemOut(); @@ -1362,6 +1842,7 @@ private Map generate(int from, int cnt) { * * @throws Exception if failed. */ + @Test public void testUnusedWalPrint() throws Exception { Ignite ignite = startGrids(2); @@ -1395,6 +1876,7 @@ public void testUnusedWalPrint() throws Exception { * * @throws Exception if failed. 
*/ + @Test public void testUnusedWalDelete() throws Exception { Ignite ignite = startGrids(2); @@ -1514,7 +1996,7 @@ private static class IncrementClosure implements EntryProcessor ctx, - int key, + Object key, boolean breakCntr, boolean breakData ) { diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeMultiThreadedTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeMultiThreadedTest.java index 99064f0a8f4d3..455b28e11fe2f 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeMultiThreadedTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeMultiThreadedTest.java @@ -25,10 +25,14 @@ import org.jetbrains.annotations.Nullable; import org.jsr166.ConcurrentLinkedDeque8; import org.jsr166.ConcurrentLinkedDeque8.Node; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link org.jsr166.ConcurrentLinkedDeque8}. */ +@RunWith(JUnit4.class) public class GridConcurrentLinkedDequeMultiThreadedTest extends GridCommonAbstractTest { /** */ private static final Random RND = new Random(); @@ -36,7 +40,7 @@ public class GridConcurrentLinkedDequeMultiThreadedTest extends GridCommonAbstra /** * @throws Exception If failed. 
*/ - @SuppressWarnings({"BusyWait"}) + @Test public void testQueueMultiThreaded() throws Exception { final AtomicBoolean done = new AtomicBoolean(); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeSelfTest.java index f695f75f90dce..1e55beb751b68 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedDequeSelfTest.java @@ -20,16 +20,21 @@ import java.util.Iterator; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jsr166.ConcurrentLinkedDeque8; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.jsr166.ConcurrentLinkedDeque8.Node; /** * Tests for {@link org.jsr166.ConcurrentLinkedDeque8}. */ +@RunWith(JUnit4.class) public class GridConcurrentLinkedDequeSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testPoll() throws Exception { ConcurrentLinkedDeque8 deque = new ConcurrentLinkedDeque8<>(); @@ -66,6 +71,7 @@ public void testPoll() throws Exception { /** * */ + @Test public void testUnlink() { ConcurrentLinkedDeque8 deque = new ConcurrentLinkedDeque8<>(); @@ -116,6 +122,7 @@ public void testUnlink() { /** * */ + @Test public void testEmptyDeque() { ConcurrentLinkedDeque8 deque = new ConcurrentLinkedDeque8<>(); @@ -175,6 +182,7 @@ private void checkSize(ConcurrentLinkedDeque8 q, int expSize) { /** * */ + @Test public void testUnlinkWithIterator() { ConcurrentLinkedDeque8 q = new ConcurrentLinkedDeque8<>(); @@ -211,6 +219,7 @@ public void testUnlinkWithIterator() { /** * */ + @Test public void testUnlinkLastWithIterator() { ConcurrentLinkedDeque8 q = new ConcurrentLinkedDeque8<>(); @@ -230,4 +239,4 @@ public void testUnlinkLastWithIterator() { assertFalse(it.hasNext()); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedHashMapMultiThreadedSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedHashMapMultiThreadedSelfTest.java index 9fe2690d1d1f1..0716488eddb4c 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedHashMapMultiThreadedSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridConcurrentLinkedHashMapMultiThreadedSelfTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; import org.jsr166.ConcurrentLinkedHashMap; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.jsr166.ConcurrentLinkedHashMap.QueuePolicy.PER_SEGMENT_Q; import static org.jsr166.ConcurrentLinkedHashMap.QueuePolicy.PER_SEGMENT_Q_OPTIMIZED_RMV; @@ -42,10 +45,12 @@ /** * */ +@RunWith(JUnit4.class) public class GridConcurrentLinkedHashMapMultiThreadedSelfTest extends 
GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPut() throws Exception { info(">>> Test grid concurrent linked hash map..."); @@ -64,6 +69,7 @@ public void testPut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutPerSegment() throws Exception { info(">>> Test grid concurrent linked hash map..."); @@ -81,6 +87,7 @@ public void testPutPerSegment() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEvict() throws Exception { info(">>> Test grid concurrent linked hash map..."); @@ -143,6 +150,7 @@ public void testEvict() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEvictPerSegment() throws Exception { info(">>> Test grid concurrent linked hash map..."); @@ -205,6 +213,7 @@ public void testEvictPerSegment() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEvictPerSegmentOptimizedRemoves() throws Exception { info(">>> Test grid concurrent linked hash map..."); @@ -323,6 +332,7 @@ private Map> putMultiThreaded(final ConcurrentMap>> Test grid concurrent linked hash map iterator..."); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridIntListSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridIntListSelfTest.java index cc48fa8610942..16daa9a7f4665 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridIntListSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridIntListSelfTest.java @@ -17,19 +17,23 @@ package org.apache.ignite.util; -import junit.framework.TestCase; import org.apache.ignite.internal.util.GridIntList; +import org.junit.Test; import static org.apache.ignite.internal.util.GridIntList.asList; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; /** * */ -public class GridIntListSelfTest extends TestCase { +public class GridIntListSelfTest { /** 
* @throws Exception If failed. */ @SuppressWarnings("ZeroLengthArrayAllocation") + @Test public void testCopyWithout() throws Exception { assertCopy( new GridIntList(new int[] {}), @@ -67,6 +71,7 @@ public void testCopyWithout() throws Exception { /** * */ + @Test public void testTruncate() { GridIntList list = asList(1, 2, 3, 4, 5, 6, 7, 8); @@ -108,6 +113,7 @@ private void assertCopy(GridIntList lst, GridIntList rmv) { /** * */ + @Test public void testRemove() { GridIntList list = asList(1, 2, 3, 4, 5, 6); @@ -130,6 +136,7 @@ public void testRemove() { /** * */ + @Test public void testSort() { assertEquals(new GridIntList(), new GridIntList().sort()); assertEquals(asList(1), asList(1).sort()); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridInternalTaskUnusedWalSegmentsTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridInternalTaskUnusedWalSegmentsTest.java index a39b0c26fa723..f41057241b636 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridInternalTaskUnusedWalSegmentsTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridInternalTaskUnusedWalSegmentsTest.java @@ -30,9 +30,6 @@ import org.apache.ignite.internal.visor.misc.VisorWalTaskArg; import org.apache.ignite.internal.visor.misc.VisorWalTaskOperation; import org.apache.ignite.internal.visor.misc.VisorWalTaskResult; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.io.File; @@ -40,24 +37,23 @@ import java.util.ArrayList; import java.util.Collection; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE; /** * Test correctness of 
VisorWalTask. */ +@RunWith(JUnit4.class) public class GridInternalTaskUnusedWalSegmentsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { System.setProperty(IGNITE_PDS_MAX_CHECKPOINT_MEMORY_HISTORY_SIZE, "2"); IgniteConfiguration cfg = super.getConfiguration(gridName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setAffinity(new RendezvousAffinityFunction(false, 32)); @@ -96,6 +92,7 @@ public class GridInternalTaskUnusedWalSegmentsTest extends GridCommonAbstractTes * * @throws Exception if failed. */ + @Test public void testCorrectnessOfDeletionTaskSegments() throws Exception { try { IgniteEx ig0 = (IgniteEx)startGrids(4); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridLogThrottleTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridLogThrottleTest.java index 37fa558a65add..5b7a2a443033d 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridLogThrottleTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridLogThrottleTest.java @@ -21,6 +21,9 @@ import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid log throttle test. To verify correctness, you need to run this test @@ -28,6 +31,7 @@ * all messages that should be omitted are indeed omitted. 
*/ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class GridLogThrottleTest extends GridCommonAbstractTest { /** */ private final GridStringLogger log0 = new GridStringLogger(false, this.log); @@ -42,6 +46,7 @@ public GridLogThrottleTest() { * * @throws Exception If any error occurs. */ + @Test public void testThrottle() throws Exception { LT.throttleTimeout(1000); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridLongListSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridLongListSelfTest.java index 3b62e32f01f06..bf69de7823d18 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridLongListSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridLongListSelfTest.java @@ -17,19 +17,24 @@ package org.apache.ignite.util; -import junit.framework.TestCase; import org.apache.ignite.internal.util.GridLongList; +import org.junit.Test; import static org.apache.ignite.internal.util.GridLongList.asList; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; /** * */ -public class GridLongListSelfTest extends TestCase { +public class GridLongListSelfTest { /** * @throws Exception If failed. 
*/ @SuppressWarnings("ZeroLengthArrayAllocation") + @Test public void testCopyWithout() throws Exception { assertCopy( new GridLongList(new long[] {}), @@ -67,6 +72,7 @@ public void testCopyWithout() throws Exception { /** * */ + @Test public void testTruncate() { GridLongList list = asList(1, 2, 3, 4, 5, 6, 7, 8); @@ -108,6 +114,7 @@ private void assertCopy(GridLongList lst, GridLongList rmv) { /** * */ + @Test public void testRemove() { GridLongList list = asList(1,2,3,4,5,6); @@ -130,6 +137,7 @@ public void testRemove() { /** * */ + @Test public void testSort() { assertEquals(new GridLongList(), new GridLongList().sort()); assertEquals(asList(1), asList(1).sort()); @@ -150,4 +158,28 @@ public void testSort() { assertEquals(asList(1, 3, 4, 5, 0), list); assertEquals(asList(0, 1, 3, 4, 5), list.sort()); } -} \ No newline at end of file + + /** + * + */ + @Test + public void testArray() { + GridLongList list = new GridLongList(); + + long[] array = list.array(); + + assertNotNull(array); + + assertEquals(0, array.length); + + list.add(1L); + + array = list.array(); + + assertNotNull(array); + + assertEquals(1, array.length); + + assertEquals(1L, array[0]); + } +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridMessageCollectionTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridMessageCollectionTest.java index 13538817ee797..ce048393e97d4 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridMessageCollectionTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridMessageCollectionTest.java @@ -18,7 +18,6 @@ package org.apache.ignite.util; import java.nio.ByteBuffer; -import junit.framework.TestCase; import org.apache.ignite.internal.direct.DirectMessageReader; import org.apache.ignite.internal.direct.DirectMessageWriter; import org.apache.ignite.internal.managers.communication.GridIoMessageFactory; @@ -27,14 +26,17 @@ import org.apache.ignite.plugin.extensions.communication.MessageFactory; import 
org.apache.ignite.plugin.extensions.communication.MessageReader; import org.apache.ignite.plugin.extensions.communication.MessageWriter; +import org.junit.Test; import static java.util.UUID.randomUUID; import static org.apache.ignite.internal.util.GridMessageCollection.of; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNull; /** * */ -public class GridMessageCollectionTest extends TestCase { +public class GridMessageCollectionTest { /** */ private byte proto; @@ -58,6 +60,7 @@ protected MessageReader reader(MessageFactory msgFactory, byte proto) { /** * */ + @Test public void testMarshal() { UUIDCollectionMessage um0 = UUIDCollectionMessage.of(); UUIDCollectionMessage um1 = UUIDCollectionMessage.of(randomUUID()); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridPartitionMapSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridPartitionMapSelfTest.java index 9068e829072c3..1e98de94908cd 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridPartitionMapSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridPartitionMapSelfTest.java @@ -28,13 +28,18 @@ import org.apache.ignite.internal.util.GridPartitionStateMap; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid utils tests. 
*/ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class GridPartitionMapSelfTest extends GridCommonAbstractTest { /** */ + @Test public void testPartitionStateMap() { GridPartitionStateMap map = initMap(new GridPartitionStateMap()); @@ -92,6 +97,7 @@ public void testPartitionStateMap() { } /** */ + @Test public void testEqualsAndHashCode() { GridPartitionStateMap map1 = initMap(new GridPartitionStateMap()); @@ -109,6 +115,7 @@ public void testEqualsAndHashCode() { /** * */ + @Test public void testCopy() { GridPartitionStateMap map1 = initMap(new GridPartitionStateMap()); @@ -134,6 +141,7 @@ public void testCopy() { /** * */ + @Test public void testCopyNoActive() { GridPartitionStateMap map2 = new GridPartitionStateMap(); @@ -150,6 +158,7 @@ public void testCopyNoActive() { /** * Tests that entries from {@link Iterator#next()} remain unaltered. */ + @Test public void testIteratorNext() { GridPartitionStateMap map = new GridPartitionStateMap(); @@ -182,6 +191,7 @@ public void testIteratorNext() { /** * Tests {@link GridDhtPartitionState} compatibility with {@link TreeMap} on random operations. */ + @Test public void testOnRandomOperations() { ThreadLocalRandom rnd = ThreadLocalRandom.current(); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridQueueSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridQueueSelfTest.java index 6a323776393e1..c4ac47eba7333 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridQueueSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridQueueSelfTest.java @@ -21,15 +21,20 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Grid utils tests. 
*/ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class GridQueueSelfTest extends GridCommonAbstractTest { /** * */ + @Test public void testQueue() { GridQueue q = new GridQueue<>(); for (char c = 'a'; c <= 'z'; c++) @@ -69,4 +74,4 @@ public void testQueue() { assert q.isEmpty(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridRandomSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridRandomSelfTest.java index 005da6cf7bdce..3a06a8b00b7b8 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridRandomSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridRandomSelfTest.java @@ -21,13 +21,18 @@ import java.util.concurrent.ThreadLocalRandom; import org.apache.ignite.internal.util.GridRandom; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for {@link GridRandom}. 
*/ +@RunWith(JUnit4.class) public class GridRandomSelfTest extends GridCommonAbstractTest { /** */ + @Test public void testRandom() { for (int i = 0; i < 100; i++) { long seed = ThreadLocalRandom.current().nextLong(); diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridSnapshotLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridSnapshotLockSelfTest.java index 97fa060aca4f3..7b092d2eb7e9f 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridSnapshotLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridSnapshotLockSelfTest.java @@ -26,14 +26,19 @@ import org.apache.ignite.internal.util.GridSnapshotLock; import org.apache.ignite.internal.util.typedef.T3; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridSnapshotLockSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testSyncConsistent() throws Exception { final AtomicBoolean stop = new AtomicBoolean(); @@ -110,4 +115,4 @@ public void testSyncConsistent() throws Exception { fut1.get(); fut2.get(); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridSpinReadWriteLockSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridSpinReadWriteLockSelfTest.java index a1040401f8764..e711f478b73d9 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridSpinReadWriteLockSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridSpinReadWriteLockSelfTest.java @@ -23,10 +23,14 @@ import org.apache.ignite.internal.util.GridSpinReadWriteLock; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridSpinReadWriteLockSelfTest extends GridCommonAbstractTest { /** Constructor. */ public GridSpinReadWriteLockSelfTest() { @@ -36,6 +40,7 @@ public GridSpinReadWriteLockSelfTest() { /** * @throws Exception If any error occurs. */ + @Test public void testWriteLockReentry() throws Exception { GridSpinReadWriteLock lock = new GridSpinReadWriteLock(); @@ -51,6 +56,7 @@ public void testWriteLockReentry() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testReadLockReentry() throws Exception { final GridSpinReadWriteLock lock = new GridSpinReadWriteLock(); @@ -93,6 +99,7 @@ public void testReadLockReentry() throws Exception { /** * @throws Exception If any error occurs. */ + @Test public void testLockDowngrade() throws Exception { GridSpinReadWriteLock lock = new GridSpinReadWriteLock(); @@ -124,6 +131,7 @@ public void testLockDowngrade() throws Exception { /** * @throws Exception If any error occurs. 
*/ + @Test public void testMonitorState() throws Exception { GridSpinReadWriteLock lock = new GridSpinReadWriteLock(); @@ -141,4 +149,4 @@ public void testMonitorState() throws Exception { info("Caught expected exception: " + e); } } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridStringBuilderFactorySelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridStringBuilderFactorySelfTest.java index e3639cad7907b..3bce1c1f9af3e 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridStringBuilderFactorySelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridStringBuilderFactorySelfTest.java @@ -21,11 +21,15 @@ import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * String builder factory test. */ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class GridStringBuilderFactorySelfTest extends GridCommonAbstractTest { /** */ public GridStringBuilderFactorySelfTest() { @@ -35,6 +39,7 @@ public GridStringBuilderFactorySelfTest() { /** * Tests string builder factory. 
*/ + @Test public void testStringBuilderFactory() { SB b1 = GridStringBuilderFactory.acquire(); @@ -70,4 +75,4 @@ public void testStringBuilderFactory() { assert b3.length() == 0; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridTopologyHeapSizeSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridTopologyHeapSizeSelfTest.java index c8e4619e6dc40..a83bf3359354b 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridTopologyHeapSizeSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridTopologyHeapSizeSelfTest.java @@ -19,15 +19,14 @@ import java.util.UUID; import org.apache.ignite.cluster.ClusterNode; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.ClusterMetricsSnapshot; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestNode; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_JVM_PID; import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_MACS; @@ -35,26 +34,12 @@ /** * Tests for calculation logic for topology heap size. */ +@RunWith(JUnit4.class) public class GridTopologyHeapSizeSelfTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - - return cfg; - } - /** * @throws Exception If failed. */ + @Test public void testTopologyHeapSizeInOneJvm() throws Exception { try { ClusterNode node1 = startGrid(1).cluster().node(); @@ -72,6 +57,7 @@ public void testTopologyHeapSizeInOneJvm() throws Exception { } /** */ + @Test public void testTopologyHeapSizeForNodesWithDifferentPids() { GridTestNode node1 = getNode("123456789ABC", 1000); GridTestNode node2 = getNode("123456789ABC", 1001); @@ -85,6 +71,7 @@ public void testTopologyHeapSizeForNodesWithDifferentPids() { } /** */ + @Test public void testTopologyHeapSizeForNodesWithDifferentMacs() { GridTestNode node1 = getNode("123456789ABC", 1000); GridTestNode node2 = getNode("CBA987654321", 1000); @@ -117,4 +104,4 @@ private GridTestNode getNode(String mac, int pid) { return node; } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/GridTransientTest.java b/modules/core/src/test/java/org/apache/ignite/util/GridTransientTest.java index 9614d20a7680e..95f8dacf72dd4 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/GridTransientTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/GridTransientTest.java @@ -24,11 +24,15 @@ import java.io.Serializable; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Transient value serialization test. 
*/ @GridCommonTest(group = "Utils") +@RunWith(JUnit4.class) public class GridTransientTest extends GridCommonAbstractTest implements Serializable { /** */ private static final String VALUE = "value"; @@ -49,6 +53,7 @@ public GridTransientTest() { /** * @throws Exception If failed. */ + @Test public void testTransientSerialization() throws Exception { GridTransientTest objSrc = new GridTransientTest(); @@ -70,4 +75,4 @@ public void testTransientSerialization() throws Exception { assertEquals(objDest.data1, null); assertEquals(objDest.data2, VALUE); } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/IgniteTaskTrackingThreadPoolExecutorTest.java b/modules/core/src/test/java/org/apache/ignite/util/IgniteTaskTrackingThreadPoolExecutorTest.java deleted file mode 100644 index 3db02b0409a98..0000000000000 --- a/modules/core/src/test/java/org/apache/ignite/util/IgniteTaskTrackingThreadPoolExecutorTest.java +++ /dev/null @@ -1,140 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.util; - -import java.util.List; -import java.util.concurrent.LinkedBlockingQueue; -import java.util.concurrent.atomic.AtomicReference; -import java.util.concurrent.atomic.LongAdder; -import junit.framework.TestCase; -import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.internal.managers.communication.GridIoPolicy; -import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.thread.IgniteTaskTrackingThreadPoolExecutor; -import org.jetbrains.annotations.Nullable; - -/** - * Tests for tracking thread pool executor. - */ -public class IgniteTaskTrackingThreadPoolExecutorTest extends TestCase { - /** */ - private IgniteTaskTrackingThreadPoolExecutor executor; - - /** {@inheritDoc} */ - @Override protected void setUp() throws Exception { - int procs = Runtime.getRuntime().availableProcessors(); - - executor = new IgniteTaskTrackingThreadPoolExecutor("test", "default", - procs * 2, procs * 2, 30_000, new LinkedBlockingQueue<>(), GridIoPolicy.UNDEFINED, (t, e) -> { - // No-op. 
- }); - } - - /** {@inheritDoc} */ - @Override protected void tearDown() throws Exception { - List runnables = executor.shutdownNow(); - - assertEquals("Some tasks are not completed", 0, runnables.size()); - } - - /** */ - public void testSimple() throws IgniteCheckedException { - doTest(null); - } - - /** */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public void testWithException() throws IgniteCheckedException { - int fail = 5555; - - try { - doTest(fail); - - fail(); - } - catch (Throwable t) { - TestException cause = (TestException)X.getCause(t); - - assertEquals(fail, cause.idx); - } - - AtomicReference err = U.field(executor, "err"); - err.set(null); - - executor.awaitDone(); - } - - /** */ - public void testReuse() throws IgniteCheckedException { - long avg = 0; - - long warmUp = 30; - - int iters = 150; - - for (int i = 0; i < iters; i++) { - long t1 = System.nanoTime(); - - doTest(null); - - if (i >= warmUp) - avg += System.nanoTime() - t1; - - executor.reset(); - } - - X.print("Average time per iteration: " + (avg / (iters - warmUp)) / 1000 / 1000. + " ms"); - } - - /** */ - private void doTest(@Nullable Integer fail) throws IgniteCheckedException { - LongAdder cnt = new LongAdder(); - - int exp = 100_000; - - for (int i = 0; i < exp; i++) { - final int finalI = i; - executor.execute(() -> { - if (fail != null && fail == finalI) - throw new TestException(finalI); - else - cnt.add(1); - }); - } - - executor.markInitialized(); - - executor.awaitDone(); - - assertEquals("Counter is not as expected", exp, cnt.sum()); - } - - /** */ - private static class TestException extends RuntimeException { - /** */ - final int idx; - - /** - * @param idx Index. 
- */ - public TestException(int idx) { - this.idx = idx; - } - } -} diff --git a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanBaselineTest.java b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanBaselineTest.java index 31c012c8b2a49..61edffcac12ad 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanBaselineTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanBaselineTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.Set; import java.util.stream.Collectors; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class GridMBeanBaselineTest extends GridCommonAbstractTest { /** Client index. */ private static final int CLIENT_IDX = 33; @@ -71,6 +75,7 @@ public class GridMBeanBaselineTest extends GridCommonAbstractTest { * * @throws Exception Thrown if test fails. */ + @Test public void testIgniteKernalNodeInBaselineTest() throws Exception { try { IgniteEx ignite0 = (IgniteEx)startGrids(NODES); diff --git a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanDisableSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanDisableSelfTest.java index ac8f011748c24..6758fdaf7a709 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanDisableSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanDisableSelfTest.java @@ -27,21 +27,27 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Disabling MBeans test. 
*/ +@RunWith(JUnit4.class) public class GridMBeanDisableSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { + @Override public void setUp() throws Exception { IgniteUtils.IGNITE_MBEANS_DISABLED = true; - super.beforeTestsStarted(); + super.setUp(); } /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { + @Override public void tearDown() throws Exception { IgniteUtils.IGNITE_MBEANS_DISABLED = false; + + super.tearDown(); } /** @@ -49,15 +55,16 @@ public class GridMBeanDisableSelfTest extends GridCommonAbstractTest { * * @throws Exception Thrown if test fails. */ + @Test public void testCorrectMBeanInfo() throws Exception { // Node should start and stopped with no errors. try (final Ignite ignite = startGrid(0)) { - final MBeanServer server = ignite.configuration().getMBeanServer(); + final MBeanServer srv = ignite.configuration().getMBeanServer(); GridTestUtils.assertThrowsWithCause( new Callable() { @Override public Void call() throws Exception { - U.registerMBean(server, ignite.name(), "dummy", "DummyMbean1", new DummyMBeanImpl(), DummyMBean.class); + U.registerMBean(srv, ignite.name(), "dummy", "DummyMbean1", new DummyMBeanImpl(), DummyMBean.class); return null; } @@ -72,7 +79,7 @@ public void testCorrectMBeanInfo() throws Exception { "DummyMbean2" ); - U.registerMBean(server, objName, new DummyMBeanImpl(), DummyMBean.class); + U.registerMBean(srv, objName, new DummyMBeanImpl(), DummyMBean.class); return null; @@ -82,6 +89,7 @@ public void testCorrectMBeanInfo() throws Exception { } /** Check that a cache can be started when MBeans are disabled. */ + @Test public void testCacheStart() throws Exception { try ( Ignite ignite = startGrid(0); @@ -109,4 +117,4 @@ static class DummyMBeanImpl implements DummyMBean { // No op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanExoticNamesSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanExoticNamesSelfTest.java index 3b79f24b1a2b0..d50bf6bf1456e 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanExoticNamesSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanExoticNamesSelfTest.java @@ -22,17 +22,23 @@ import org.apache.ignite.Ignite; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Testing registration of MBeans with special characters in group name or bean name. */ +@RunWith(JUnit4.class) public class GridMBeanExoticNamesSelfTest extends GridCommonAbstractTest { /** Test registration of a bean with special characters in group name. */ + @Test public void testGroupWithSpecialSymbols() throws Exception { checkMBeanRegistration("dummy!@#$^&*()?\\grp", "dummy"); } /** Test registration of a bean with special characters in name. */ + @Test public void testNameWithSpecialSymbols() throws Exception { checkMBeanRegistration("dummygrp", "dum!@#$^&*()?\\my"); } @@ -67,4 +73,4 @@ private static class DummyMBeanImpl implements DummyMBean { // No op. 
} } -} \ No newline at end of file +} diff --git a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanSelfTest.java b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanSelfTest.java index 4e329f9fc9437..8ee1e59760390 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanSelfTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/mbeans/GridMBeanSelfTest.java @@ -29,16 +29,21 @@ import org.apache.ignite.mxbean.MXBeanParametersDescriptions; import org.apache.ignite.mxbean.MXBeanParametersNames; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * MBean test. */ +@RunWith(JUnit4.class) public class GridMBeanSelfTest extends GridCommonAbstractTest { /** * Tests correct MBean interface. * * @throws Exception Thrown if test fails. */ + @Test public void testCorrectMBeanInfo() throws Exception { StandardMBean mbean = new IgniteStandardMXBean(new GridMBeanImplementation(), GridMBeanInterface.class); @@ -85,6 +90,7 @@ public void testCorrectMBeanInfo() throws Exception { * * @throws Exception Thrown if test fails. */ + @Test public void testMissedNameMBeanInfo() throws Exception { try { StandardMBean mbean = new IgniteStandardMXBean(new GridMBeanImplementation(), GridMBeanInterfaceBad.class); @@ -103,6 +109,7 @@ public void testMissedNameMBeanInfo() throws Exception { * * @throws Exception Thrown if test fails. */ + @Test public void testMissedDescriptionMBeanInfo() throws Exception { try { StandardMBean mbean = new IgniteStandardMXBean(new GridMBeanImplementation(), @@ -122,6 +129,7 @@ public void testMissedDescriptionMBeanInfo() throws Exception { * * @throws Exception Thrown if test fails. 
*/ + @Test public void testEmptyDescriptionMBeanInfo() throws Exception { try { StandardMBean mbean = new IgniteStandardMXBean(new GridMBeanImplementation(), @@ -141,6 +149,7 @@ public void testEmptyDescriptionMBeanInfo() throws Exception { * * @throws Exception Thrown if test fails. */ + @Test public void testEmptyNameMBeanInfo() throws Exception { try { StandardMBean mbean = new IgniteStandardMXBean(new GridMBeanImplementation(), @@ -160,6 +169,7 @@ public void testEmptyNameMBeanInfo() throws Exception { * * @throws Exception Thrown if test fails. */ + @Test public void testIgniteKernalReturnsValidMBeanInfo() throws Exception { try { IgniteEx igniteCrd = startGrid(0); diff --git a/modules/core/src/test/java/org/apache/ignite/util/mbeans/WorkersControlMXBeanTest.java b/modules/core/src/test/java/org/apache/ignite/util/mbeans/WorkersControlMXBeanTest.java index cb30906371f81..1062267b76e90 100644 --- a/modules/core/src/test/java/org/apache/ignite/util/mbeans/WorkersControlMXBeanTest.java +++ b/modules/core/src/test/java/org/apache/ignite/util/mbeans/WorkersControlMXBeanTest.java @@ -1,98 +1,104 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.util.mbeans; - -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; -import org.apache.ignite.internal.worker.WorkersControlMXBeanImpl; -import org.apache.ignite.mxbean.WorkersControlMXBean; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -/** - * {@link WorkersControlMXBean} test. - */ -public class WorkersControlMXBeanTest extends GridCommonAbstractTest { - /** Test thread name. */ - private static final String TEST_THREAD_NAME = "test-thread"; - - /** - * @throws Exception Thrown if test fails. - */ - public void testStopThreadByUniqueName() throws Exception { - WorkersControlMXBean workersCtrlMXBean = new WorkersControlMXBeanImpl(null); - - Thread t = startTestThread(); - - assertTrue(workersCtrlMXBean.stopThreadByUniqueName(TEST_THREAD_NAME)); - - t.join(500); - - assertFalse(workersCtrlMXBean.stopThreadByUniqueName(TEST_THREAD_NAME)); - - Thread t1 = startTestThread(); - Thread t2 = startTestThread(); - - assertFalse(workersCtrlMXBean.stopThreadByUniqueName(TEST_THREAD_NAME)); - - t1.stop(); - t2.stop(); - } - - /** - * @throws Exception Thrown if test fails. - */ - public void testStopThreadById() throws Exception { - WorkersControlMXBean workersCtrlMXBean = new WorkersControlMXBeanImpl(null); - - Thread t1 = startTestThread(); - Thread t2 = startTestThread(); - - assertTrue(workersCtrlMXBean.stopThreadById(t1.getId())); - assertTrue(workersCtrlMXBean.stopThreadById(t2.getId())); - - t1.join(500); - t2.join(500); - - assertFalse(workersCtrlMXBean.stopThreadById(t1.getId())); - assertFalse(workersCtrlMXBean.stopThreadById(t2.getId())); - } - - /** - * @return Started thread. 
- */ - private static Thread startTestThread() throws InterruptedException { - final CountDownLatch latch = new CountDownLatch(1); - - Thread t = new Thread(TEST_THREAD_NAME) { - @Override public void run() { - latch.countDown(); - - for (;;) - ; - } - }; - - t.start(); - - assertTrue(latch.await(500, TimeUnit.MILLISECONDS)); - - assertTrue(t.isAlive()); - - return t; - } -} +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.util.mbeans; + +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.apache.ignite.internal.worker.WorkersControlMXBeanImpl; +import org.apache.ignite.mxbean.WorkersControlMXBean; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * {@link WorkersControlMXBean} test. + */ +@RunWith(JUnit4.class) +public class WorkersControlMXBeanTest extends GridCommonAbstractTest { + /** Test thread name. */ + private static final String TEST_THREAD_NAME = "test-thread"; + + /** + * @throws Exception Thrown if test fails. 
+ */ + @Test + public void testStopThreadByUniqueName() throws Exception { + WorkersControlMXBean workersCtrlMXBean = new WorkersControlMXBeanImpl(null); + + Thread t = startTestThread(); + + assertTrue(workersCtrlMXBean.stopThreadByUniqueName(TEST_THREAD_NAME)); + + t.join(500); + + assertFalse(workersCtrlMXBean.stopThreadByUniqueName(TEST_THREAD_NAME)); + + Thread t1 = startTestThread(); + Thread t2 = startTestThread(); + + assertFalse(workersCtrlMXBean.stopThreadByUniqueName(TEST_THREAD_NAME)); + + t1.stop(); + t2.stop(); + } + + /** + * @throws Exception Thrown if test fails. + */ + @Test + public void testStopThreadById() throws Exception { + WorkersControlMXBean workersCtrlMXBean = new WorkersControlMXBeanImpl(null); + + Thread t1 = startTestThread(); + Thread t2 = startTestThread(); + + assertTrue(workersCtrlMXBean.stopThreadById(t1.getId())); + assertTrue(workersCtrlMXBean.stopThreadById(t2.getId())); + + t1.join(500); + t2.join(500); + + assertFalse(workersCtrlMXBean.stopThreadById(t1.getId())); + assertFalse(workersCtrlMXBean.stopThreadById(t2.getId())); + } + + /** + * @return Started thread. 
+ */ + private static Thread startTestThread() throws InterruptedException { + final CountDownLatch latch = new CountDownLatch(1); + + Thread t = new Thread(TEST_THREAD_NAME) { + @Override public void run() { + latch.countDown(); + + for (;;) + ; + } + }; + + t.start(); + + assertTrue(latch.await(500, TimeUnit.MILLISECONDS)); + + assertTrue(t.isAlive()); + + return t; + } +} diff --git a/modules/core/src/test/resources/other_tde_keystore.jks b/modules/core/src/test/resources/other_tde_keystore.jks new file mode 100644 index 0000000000000..6b1f51b7e8d02 Binary files /dev/null and b/modules/core/src/test/resources/other_tde_keystore.jks differ diff --git a/modules/core/src/test/resources/tde.jks b/modules/core/src/test/resources/tde.jks new file mode 100644 index 0000000000000..1bf532c292dec Binary files /dev/null and b/modules/core/src/test/resources/tde.jks differ diff --git a/modules/dev-utils/src/main/java/org/apache/ignite/development/utils/WalStat.java b/modules/dev-utils/src/main/java/org/apache/ignite/development/utils/WalStat.java index a09c9a5b4f986..ae6d3187ffef4 100644 --- a/modules/dev-utils/src/main/java/org/apache/ignite/development/utils/WalStat.java +++ b/modules/dev-utils/src/main/java/org/apache/ignite/development/utils/WalStat.java @@ -40,7 +40,6 @@ import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.transactions.TransactionState; -import sun.nio.ch.DirectBuffer; /** * Statistic for overall WAL file @@ -129,9 +128,9 @@ void registerRecord(WALRecord record, WALPointer walPointer, boolean workDir) { if (type == WALRecord.RecordType.PAGE_RECORD) registerPageSnapshot((PageSnapshot)record); - else if (type == WALRecord.RecordType.DATA_RECORD) + else if (type == WALRecord.RecordType.DATA_RECORD || type == WALRecord.RecordType.MVCC_DATA_RECORD) registerDataRecord((DataRecord)record); - else if (type == WALRecord.RecordType.TX_RECORD) + else if (type == 
WALRecord.RecordType.TX_RECORD || type == WALRecord.RecordType.MVCC_TX_RECORD) registerTxRecord((TxRecord)record); incrementStat(type.toString(), record, recTypeSizes); diff --git a/modules/direct-io/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AlignedBuffersDirectFileIOFactory.java b/modules/direct-io/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AlignedBuffersDirectFileIOFactory.java index 8a28e9eac1c47..ea555a0c583f8 100644 --- a/modules/direct-io/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AlignedBuffersDirectFileIOFactory.java +++ b/modules/direct-io/src/main/java/org/apache/ignite/internal/processors/cache/persistence/file/AlignedBuffersDirectFileIOFactory.java @@ -144,11 +144,6 @@ public AlignedBuffersDirectFileIOFactory( return allocate; } - /** {@inheritDoc} */ - @Override public FileIO create(File file) throws IOException { - return create(file, CREATE, READ, WRITE); - } - /** {@inheritDoc} */ @Override public FileIO create(File file, OpenOption... 
modes) throws IOException { if (useBackupFactory) diff --git a/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest.java b/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest.java index e2578aee5ca26..f429c50652274 100644 --- a/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest.java +++ b/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest.java @@ -16,10 +16,16 @@ */ package org.apache.ignite.internal.processors.cache.persistence; +import org.apache.ignite.testframework.MvccFeatureChecker; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + /** * Version of test to be executed in Direct IO suite. * Contains reduced number of records, because Direct IO does not support tmpfs. 
*/ +@RunWith(JUnit4.class) public class IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest extends LocalWalModeChangeDuringRebalancingSelfTest { /** {@inheritDoc} */ @Override protected int getKeysCount() { @@ -27,7 +33,11 @@ public class IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest extends L } /** {@inheritDoc} */ + @Test @Override public void testWithExchangesMerge() throws Exception { + if (MvccFeatureChecker.forcedMvcc()) + fail("https://issues.apache.org/jira/browse/IGNITE-10752"); + super.testWithExchangesMerge(); } } diff --git a/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteFileIOTest.java b/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteFileIOTest.java index 9620eb0fca683..6dac3ce560652 100644 --- a/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteFileIOTest.java +++ b/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteFileIOTest.java @@ -21,13 +21,13 @@ import java.nio.ByteBuffer; import java.nio.MappedByteBuffer; import java.util.concurrent.ThreadLocalRandom; -import junit.framework.TestCase; import org.jetbrains.annotations.NotNull; +import org.junit.Test; /** * File IO tests. */ -public class IgniteFileIOTest extends TestCase { +public class IgniteFileIOTest { /** Test data size. */ private static final int TEST_DATA_SIZE = 16 * 1024 * 1024; @@ -171,6 +171,7 @@ private void checkPosition(long position) throws IOException { /** * test for 'full read' functionality. */ + @Test public void testReadFully() throws Exception { byte[] arr = new byte[TEST_DATA_SIZE]; @@ -206,6 +207,7 @@ public void testReadFully() throws Exception { /** * test for 'full read' functionality. 
*/ + @Test public void testReadFullyArray() throws Exception { byte[] arr = new byte[TEST_DATA_SIZE]; @@ -227,6 +229,7 @@ public void testReadFullyArray() throws Exception { /** * test for 'full write' functionality. */ + @Test public void testWriteFully() throws Exception { byte[] arr = new byte[TEST_DATA_SIZE]; @@ -262,6 +265,7 @@ public void testWriteFully() throws Exception { /** * test for 'full write' functionality. */ + @Test public void testWriteFullyArray() throws Exception { byte[] arr = new byte[TEST_DATA_SIZE]; diff --git a/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteNativeIoWithNoPersistenceTest.java b/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteNativeIoWithNoPersistenceTest.java index 981e0d59a633b..adef4daf4442c 100644 --- a/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteNativeIoWithNoPersistenceTest.java +++ b/modules/direct-io/src/test/java/org/apache/ignite/internal/processors/cache/persistence/file/IgniteNativeIoWithNoPersistenceTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks if Direct IO can be set up if no persistent store is configured */ +@RunWith(JUnit4.class) public class IgniteNativeIoWithNoPersistenceTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @@ -52,6 +56,7 @@ public class IgniteNativeIoWithNoPersistenceTest extends GridCommonAbstractTest * Checks simple launch with native IO. 
* @throws Exception if failed */ + @Test public void testDirectIoHandlesNoPersistentGrid() throws Exception { IgniteEx ignite = startGrid(0); diff --git a/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite.java b/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite.java index 787a755c36a6f..312c976386e89 100644 --- a/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite.java +++ b/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite.java @@ -16,26 +16,30 @@ */ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.persistence.file.IgniteNativeIoWithNoPersistenceTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Subset of {@link IgnitePdsTestSuite} suite test, started with direct-io jar in classpath. */ -public class IgnitePdsNativeIoTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgnitePdsNativeIoTestSuite { /** * @return Suite.
*/ public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Persistent Store Test Suite (with Direct IO)"); - IgnitePdsTestSuite.addRealPageStoreTests(suite); + IgnitePdsTestSuite.addRealPageStoreTests(suite, null); //long running test by design with light parameters - suite.addTestSuite(IgnitePdsReplacementNativeIoTest.class); + suite.addTest(new JUnit4TestAdapter(IgnitePdsReplacementNativeIoTest.class)); - suite.addTestSuite(IgniteNativeIoWithNoPersistenceTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteNativeIoWithNoPersistenceTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite2.java b/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite2.java index 2ed7450d79d59..1c6e88c1d514f 100644 --- a/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite2.java +++ b/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsNativeIoTestSuite2.java @@ -14,11 +14,14 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest; import org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoPdsRecoveryAfterFileCorruptionTest; +import org.apache.ignite.internal.processors.cache.persistence.db.wal.FsyncWalRolloverDoesNotBlockTest; import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteNativeIoWalFlushFsyncSelfTest; /** @@ -27,19 +30,20 @@ public class IgnitePdsNativeIoTestSuite2 extends TestSuite { /** * @return Suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Persistent Store Test Suite 2 (Native IO)"); - IgnitePdsTestSuite2.addRealPageStoreTests(suite); + IgnitePdsTestSuite2.addRealPageStoreTests(suite, null); //Integrity test with reduced count of pages. - suite.addTestSuite(IgniteNativeIoPdsRecoveryAfterFileCorruptionTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteNativeIoPdsRecoveryAfterFileCorruptionTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest.class)); - suite.addTestSuite(IgniteNativeIoLocalWalModeChangeDuringRebalancingSelfTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteNativeIoWalFlushFsyncSelfTest.class)); - suite.addTestSuite(IgniteNativeIoWalFlushFsyncSelfTest.class); + suite.addTest(new JUnit4TestAdapter(FsyncWalRolloverDoesNotBlockTest.class)); return suite; } diff --git a/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsReplacementNativeIoTest.java b/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsReplacementNativeIoTest.java index f9bda769b2f42..1d3f91e1c9a75 100644 --- a/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsReplacementNativeIoTest.java +++ b/modules/direct-io/src/test/java/org/apache/ignite/testsuites/IgnitePdsReplacementNativeIoTest.java @@ -18,10 +18,14 @@ import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsPageReplacementTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Page replacement light variant of test for native direct IO (wastes real IOPs on agents) */ +@RunWith(JUnit4.class) public class IgnitePdsReplacementNativeIoTest extends IgnitePdsPageReplacementTest { /** {@inheritDoc} */ @@ -36,6 +40,7 @@ public class IgnitePdsReplacementNativeIoTest extends IgnitePdsPageReplacementTe 
} /** {@inheritDoc} */ + @Test @Override public void testPageReplacement() throws Exception { System.setProperty(IgniteSystemProperties.IGNITE_USE_ASYNC_FILE_IO_FACTORY, "false"); diff --git a/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTest.java b/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTest.java index eb5937925caca..3c9e40856ab6b 100644 --- a/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTest.java +++ b/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTest.java @@ -23,10 +23,14 @@ import org.apache.flink.streaming.api.datastream.DataStream; import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link IgniteSink}. */ +@RunWith(JUnit4.class) public class FlinkIgniteSinkSelfTest extends GridCommonAbstractTest { /** Cache name. */ private static final String TEST_CACHE = "testCache"; @@ -34,6 +38,7 @@ public class FlinkIgniteSinkSelfTest extends GridCommonAbstractTest { /** Ignite test configuration file. 
*/ private static final String GRID_CONF_FILE = "modules/flink/src/test/resources/example-ignite.xml"; + @Test public void testIgniteSink() throws Exception { Configuration configuration = new Configuration(); @@ -56,6 +61,7 @@ public void testIgniteSink() throws Exception { assertEquals("testValue", igniteSink.getIgnite().getOrCreateCache(TEST_CACHE).get("testData")); } + @Test public void testIgniteSinkStreamExecution() throws Exception { StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); @@ -78,4 +84,4 @@ public void testIgniteSinkStreamExecution() throws Exception { fail("Stream execution process failed."); } } -} \ No newline at end of file +} diff --git a/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTestSuite.java b/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTestSuite.java index b6934e256beb5..3fee82ed95133 100644 --- a/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTestSuite.java +++ b/modules/flink/src/test/java/org/apache/ignite/sink/flink/FlinkIgniteSinkSelfTestSuite.java @@ -17,21 +17,23 @@ package org.apache.ignite.sink.flink; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Apache Flink sink tests. */ -public class FlinkIgniteSinkSelfTestSuite extends TestSuite { - +@RunWith(AllTests.class) +public class FlinkIgniteSinkSelfTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Apache Flink sink Test Suite"); - suite.addTest(new TestSuite(FlinkIgniteSinkSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(FlinkIgniteSinkSelfTest.class)); return suite; } diff --git a/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTest.java b/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTest.java index 031eb5d4d7577..a5fa3940b4176 100644 --- a/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTest.java +++ b/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTest.java @@ -33,12 +33,16 @@ import org.apache.ignite.events.Event; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; /** * {@link IgniteSink} test. */ +@RunWith(JUnit4.class) public class IgniteSinkTest extends GridCommonAbstractTest { /** Number of events to be sent to memory channel. */ private static final int EVENT_CNT = 10000; @@ -49,6 +53,7 @@ public class IgniteSinkTest extends GridCommonAbstractTest { /** * @throws Exception {@link Exception}. 
*/ + @Test public void testSink() throws Exception { IgniteConfiguration cfg = loadConfiguration("modules/flume/src/test/resources/example-ignite.xml"); diff --git a/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTestSuite.java b/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTestSuite.java index ad6d162b83597..0cd52a7dcf769 100644 --- a/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTestSuite.java +++ b/modules/flume/src/test/java/org/apache/ignite/stream/flume/IgniteSinkTestSuite.java @@ -17,20 +17,23 @@ package org.apache.ignite.stream.flume; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Tests for a Flume sink for Ignite. */ -public class IgniteSinkTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteSinkTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Apache Flume NG Sink Test Suite"); - suite.addTest(new TestSuite(IgniteSinkTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSinkTest.class)); return suite; } diff --git a/modules/gce/src/test/java/org/apache/ignite/testsuites/IgniteGCETestSuite.java b/modules/gce/src/test/java/org/apache/ignite/testsuites/IgniteGCETestSuite.java index 147abad9e3786..d95748d36f42f 100644 --- a/modules/gce/src/test/java/org/apache/ignite/testsuites/IgniteGCETestSuite.java +++ b/modules/gce/src/test/java/org/apache/ignite/testsuites/IgniteGCETestSuite.java @@ -17,21 +17,24 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.discovery.tcp.ipfinder.gce.TcpDiscoveryGoogleStorageIpFinderSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Google Compute Engine integration tests. */ -public class IgniteGCETestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteGCETestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Google Compute Engine Integration Test Suite"); - suite.addTest(new TestSuite(TcpDiscoveryGoogleStorageIpFinderSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryGoogleStorageIpFinderSelfTest.class)); return suite; } @@ -68,4 +71,4 @@ public static String getProjectName() { return name; } -} \ No newline at end of file +} diff --git a/modules/geospatial/src/test/java/org/apache/ignite/internal/processors/query/h2/H2IndexingAbstractGeoSelfTest.java b/modules/geospatial/src/test/java/org/apache/ignite/internal/processors/query/h2/H2IndexingAbstractGeoSelfTest.java index 9c5038ec37851..e0ebc90f3fc55 100644 --- a/modules/geospatial/src/test/java/org/apache/ignite/internal/processors/query/h2/H2IndexingAbstractGeoSelfTest.java +++ b/modules/geospatial/src/test/java/org/apache/ignite/internal/processors/query/h2/H2IndexingAbstractGeoSelfTest.java @@ -17,6 +17,19 @@ package org.apache.ignite.internal.processors.query.h2; +import java.io.Serializable; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.Callable; +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; +import javax.cache.Cache; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; @@ -36,27 +49,17 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.locationtech.jts.geom.Geometry; import org.locationtech.jts.io.ParseException; import 
org.locationtech.jts.io.WKTReader; -import javax.cache.Cache; -import java.io.Serializable; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collection; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.concurrent.Callable; -import java.util.concurrent.ThreadLocalRandom; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicReference; - /** * Geo-indexing test. */ +@RunWith(JUnit4.class) public abstract class H2IndexingAbstractGeoSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int CNT = 100; @@ -227,12 +230,13 @@ private static void destroy(IgniteCache cache, IgniteEx grid, boolean dynamic) { if (!dynamic) cache.destroy(); else - grid.context().cache().dynamicDestroyCache(cache.getName(), true, true, false); + grid.context().cache().dynamicDestroyCache(cache.getName(), true, true, false, null); } /** * @throws Exception If failed. */ + @Test public void testPrimitiveGeometry() throws Exception { IgniteCache cache = createCache("geom", true, Long.class, Geometry.class); @@ -258,6 +262,7 @@ public void testPrimitiveGeometry() throws Exception { * * @throws Exception If failed. */ + @Test public void testGeo() throws Exception { checkGeo(false); } @@ -267,6 +272,7 @@ public void testGeo() throws Exception { * * @throws Exception If failed. */ + @Test public void testGeoDynamic() throws Exception { checkGeo(true); } @@ -349,6 +355,7 @@ private void checkGeo(boolean dynamic) throws Exception { * * @throws Exception If failed. */ + @Test public void testGeoMultithreaded() throws Exception { checkGeoMultithreaded(false); } @@ -358,6 +365,7 @@ public void testGeoMultithreaded() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testGeoMultithreadedDynamic() throws Exception { checkGeoMultithreaded(true); } @@ -493,6 +501,7 @@ private void checkPoints(Collection> res, String * * @throws Exception if fails. */ + @Test public void testSegmentedGeoIndexJoinPartitioned() throws Exception { checkSegmentedGeoIndexJoin(true, false); } @@ -502,6 +511,7 @@ public void testSegmentedGeoIndexJoinPartitioned() throws Exception { * * @throws Exception if fails. */ + @Test public void testSegmentedGeoIndexJoinPartitionedDynamic() throws Exception { checkSegmentedGeoIndexJoin(true, true); } @@ -511,6 +521,7 @@ public void testSegmentedGeoIndexJoinPartitionedDynamic() throws Exception { * * @throws Exception if fails. */ + @Test public void testSegmentedGeoIndexJoinReplicated() throws Exception { checkSegmentedGeoIndexJoin(false, false); } @@ -520,6 +531,7 @@ public void testSegmentedGeoIndexJoinReplicated() throws Exception { * * @throws Exception if fails. */ + @Test public void testSegmentedGeoIndexJoinReplicatedDynamic() throws Exception { checkSegmentedGeoIndexJoin(false, true); } @@ -674,4 +686,4 @@ protected static class EnemyCamp implements Serializable { this.name = name; } } -} \ No newline at end of file +} diff --git a/modules/geospatial/src/test/java/org/apache/ignite/testsuites/GeoSpatialIndexingTestSuite.java b/modules/geospatial/src/test/java/org/apache/ignite/testsuites/GeoSpatialIndexingTestSuite.java index 22109dea2d907..67c06c265e6d9 100644 --- a/modules/geospatial/src/test/java/org/apache/ignite/testsuites/GeoSpatialIndexingTestSuite.java +++ b/modules/geospatial/src/test/java/org/apache/ignite/testsuites/GeoSpatialIndexingTestSuite.java @@ -17,24 +17,27 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.query.h2.H2IndexingGeoSelfTest; import org.apache.ignite.internal.processors.query.h2.H2IndexingSegmentedGeoSelfTest; +import org.junit.runner.RunWith; 
+import org.junit.runners.AllTests; /** * Geospatial indexing tests. */ -public class GeoSpatialIndexingTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class GeoSpatialIndexingTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("H2 Geospatial Indexing Test Suite"); - suite.addTestSuite(H2IndexingGeoSelfTest.class); - suite.addTestSuite(H2IndexingSegmentedGeoSelfTest.class); + suite.addTest(new JUnit4TestAdapter(H2IndexingGeoSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(H2IndexingSegmentedGeoSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/h2/licenses/EPL-1.0.txt b/modules/h2/licenses/EPL-1.0.txt new file mode 100644 index 0000000000000..3fa00836fa410 --- /dev/null +++ b/modules/h2/licenses/EPL-1.0.txt @@ -0,0 +1,86 @@ +Eclipse Public License - v 1.0 +THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. + +1. DEFINITIONS + +"Contribution" means: + +a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and + +b) in the case of each subsequent Contributor: + +i) changes to the Program, and + +ii) additions to the Program; + +where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program. 
+ +"Contributor" means any person or entity that distributes the Program. + +"Licensed Patents" mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. + +"Program" means the Contributions distributed in accordance with this Agreement. + +"Recipient" means anyone who receives the Program under this Agreement, including all Contributors. + +2. GRANT OF RIGHTS + +a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form. + +b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. + +c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. 
As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. + +d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. + +3. REQUIREMENTS + +A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that: + +a) it complies with the terms and conditions of this Agreement; and + +b) its license agreement: + +i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; + +ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; + +iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and + +iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange. + +When the Program is made available in source code form: + +a) it must be made available under this Agreement; and + +b) a copy of this Agreement must be included with each copy of the Program. + +Contributors may not remove or alter any copyright notices contained within the Program. 
+ +Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution. + +4. COMMERCIAL DISTRIBUTION + +Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. + +For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. 
If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. + +5. NO WARRANTY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. + +6. DISCLAIMER OF LIABILITY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +7. 
GENERAL + +If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. + +If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. + +All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. + +Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. 
The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. + +This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation. \ No newline at end of file diff --git a/modules/h2/licenses/MPL-2.0.txt b/modules/h2/licenses/MPL-2.0.txt new file mode 100644 index 0000000000000..4fb81368f85a9 --- /dev/null +++ b/modules/h2/licenses/MPL-2.0.txt @@ -0,0 +1,151 @@ +Mozilla Public License Version 2.0 + +1. Definitions +1.1. “Contributor” +means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. + +1.2. “Contributor Version” +means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor’s Contribution. + +1.3. “Contribution” +means Covered Software of a particular Contributor. + +1.4. “Covered Software” +means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. + +1.5. 
“Incompatible With Secondary Licenses” +means + +that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or + +that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. + +1.6. “Executable Form” +means any form of the work other than Source Code Form. + +1.7. “Larger Work” +means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. + +1.8. “License” +means this document. + +1.9. “Licensable” +means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. + +1.10. “Modifications” +means any of the following: + +any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or + +any new file in Source Code Form that contains any Covered Software. + +1.11. “Patent Claims” of a Contributor +means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. + +1.12. “Secondary License” +means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. + +1.13. “Source Code Form” +means the form of the work preferred for making modifications. + +1.14. “You” (or “Your”) +means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. 
For purposes of this definition, “control” means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. + +2. License Grants and Conditions +2.1. Grants +Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: + +under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and + +under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. + +2.2. Effective Date +The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. + +2.3. Limitations on Grant Scope +The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: + +for any code that a Contributor has removed from Covered Software; or + +for infringements caused by: (i) Your and any other third party’s modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or + +under Patent Claims infringed by Covered Software in the absence of its Contributions. + +This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 
+ +2.4. Subsequent Licenses +No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). + +2.5. Representation +Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. + +2.6. Fair Use +This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. + +2.7. Conditions +Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. + +3. Responsibilities +3.1. Distribution of Source Form +All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients’ rights in the Source Code Form. + +3.2. Distribution of Executable Form +If You distribute Covered Software in Executable Form then: + +such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and + +You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients’ rights in the Source Code Form under this License. + +3.3. 
Distribution of a Larger Work +You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). + +3.4. Notices +You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms +You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. + +4. 
Inability to Comply Due to Statute or Regulation +If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. + +5. Termination +5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. + +5.3. 
In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. + +6. Disclaimer of Warranty +Covered Software is provided under this License on an “as is” basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. + +7. Limitation of Liability +Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party’s negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. + +8. 
Litigation +Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party’s ability to bring cross-claims or counter-claims. + +9. Miscellaneous +This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. + +10. Versions of the License +10.1. New Versions +Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. + +10.2. Effect of New Versions +You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. + +10.3. Modified Versions +If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). + +10.4. 
Distributing Source Code Form that is Incompatible With Secondary Licenses +If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. + +Exhibit A - Source Code Form License Notice +This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/. + +If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. + +You may add additional accurate notices of copyright ownership. + +Exhibit B - “Incompatible With Secondary Licenses” Notice +This Source Code Form is “Incompatible With Secondary Licenses”, as defined by the Mozilla Public License, v. 2.0. \ No newline at end of file diff --git a/modules/h2/licenses/gg-community.txt b/modules/h2/licenses/gg-community.txt new file mode 100644 index 0000000000000..bdece5ff0e558 --- /dev/null +++ b/modules/h2/licenses/gg-community.txt @@ -0,0 +1,13 @@ +Copyright 2020 GridGain Systems, Inc. and Contributors. + +Licensed under the GridGain Community Edition License (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
\ No newline at end of file diff --git a/modules/h2/pom.xml b/modules/h2/pom.xml new file mode 100644 index 0000000000000..2bc3405879a2b --- /dev/null +++ b/modules/h2/pom.xml @@ -0,0 +1,190 @@ + + + + + + + + 4.0.0 + + org.apache.ignite + ignite-parent + 1 + ../../parent + + + ignite-h2 + 2.7.0-SNAPSHOT + H2 Database Engine Fork + http://h2database.com + + + + MPL 2.0 + https://www.mozilla.org/en-US/MPL/2.0/ + repo + + + EPL 1.0 + https://opensource.org/licenses/eclipse-1.0.php + repo + + + + + 1.7 + UTF-8 + + + + + + javax.servlet + javax.servlet-api + 3.1.0 + + + org.slf4j + slf4j-api + ${slf4j16.version} + + + org.locationtech.jts + jts-core + 1.15.0 + + + + + + + org.slf4j + slf4j-simple + ${slf4j16.version} + test + + + + + + + + java9+ + + [1.9,) + + + + + + + src/main/java + src/test/java + + + + src/main/java + + **/*.prop + **/*.png + **/*.jsp + **/*.ico + **/*.gif + **/*.css + **/*.js + org/h2/res/help.csv + org/h2/res/javadoc.properties + META-INF/** + + + + src/main/resources/precompiled + META-INF/versions/9 + + + + + src/test/java + + org/h2/test/scripts/**/*.sql + org/h2/test/scripts/*.txt + org/h2/samples/newsfeed.sql + org/h2/samples/optimizations.sql + + + + + + + org.codehaus.mojo + build-helper-maven-plugin + 3.0.0 + + + generate-test-sources + + add-test-source + + + + src/test/tools + src/test/java/META-INF/** + + + + + + + + org.apache.maven.plugins + maven-javadoc-plugin + + true + + + + + org.apache.maven.plugins + maven-surefire-plugin + + + 100 + 0 + 0 + true + target/trace.db/ + false + + + ${project.build.outputDirectory} + ${project.build.testOutputDirectory} + + + TestAllJunit.java + H2TestCase.java + + + + + + + \ No newline at end of file diff --git a/modules/h2/src/main/java/META-INF/services/java.sql.Driver b/modules/h2/src/main/java/META-INF/services/java.sql.Driver new file mode 100644 index 0000000000000..679185a2fa8bb --- /dev/null +++ b/modules/h2/src/main/java/META-INF/services/java.sql.Driver @@ -0,0 +1 @@ +org.h2.Driver \ 
No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/Driver.java b/modules/h2/src/main/java/org/h2/Driver.java new file mode 100644 index 0000000000000..e02016702b9b4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/Driver.java @@ -0,0 +1,207 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.DriverPropertyInfo; +import java.sql.SQLException; +import java.util.Properties; +import java.util.logging.Logger; +import org.h2.engine.Constants; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.upgrade.DbUpgrade; + +/** + * The database driver. An application should not use this class directly. The + * only thing the application needs to do is load the driver. This can be done + * using Class.forName. To load the driver and open a database connection, use + * the following code: + * + *
    + * Class.forName("org.h2.Driver");
    + * Connection conn = DriverManager.getConnection(
    + *      "jdbc:h2:~/test", "sa", "sa");
    + * 
    + */ +public class Driver implements java.sql.Driver, JdbcDriverBackwardsCompat { + + private static final Driver INSTANCE = new Driver(); + private static final String DEFAULT_URL = "jdbc:default:connection"; + private static final ThreadLocal DEFAULT_CONNECTION = + new ThreadLocal<>(); + + private static volatile boolean registered; + + static { + load(); + } + + /** + * Open a database connection. + * This method should not be called by an application. + * Instead, the method DriverManager.getConnection should be used. + * + * @param url the database URL + * @param info the connection properties + * @return the new connection or null if the URL is not supported + */ + @Override + public Connection connect(String url, Properties info) throws SQLException { + try { + if (info == null) { + info = new Properties(); + } + if (!acceptsURL(url)) { + return null; + } + if (url.equals(DEFAULT_URL)) { + return DEFAULT_CONNECTION.get(); + } + Connection c = DbUpgrade.connectOrUpgrade(url, info); + if (c != null) { + return c; + } + return new JdbcConnection(url, info); + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } + + /** + * Check if the driver understands this URL. + * This method should not be called by an application. + * + * @param url the database URL + * @return if the driver understands the URL + */ + @Override + public boolean acceptsURL(String url) { + if (url != null) { + if (url.startsWith(Constants.START_URL)) { + return true; + } else if (url.equals(DEFAULT_URL)) { + return DEFAULT_CONNECTION.get() != null; + } + } + return false; + } + + /** + * Get the major version number of the driver. + * This method should not be called by an application. + * + * @return the major version number + */ + @Override + public int getMajorVersion() { + return Constants.VERSION_MAJOR; + } + + /** + * Get the minor version number of the driver. + * This method should not be called by an application. 
+ * + * @return the minor version number + */ + @Override + public int getMinorVersion() { + return Constants.VERSION_MINOR; + } + + /** + * Get the list of supported properties. + * This method should not be called by an application. + * + * @param url the database URL + * @param info the connection properties + * @return a zero length array + */ + @Override + public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) { + return new DriverPropertyInfo[0]; + } + + /** + * Check if this driver is compliant to the JDBC specification. + * This method should not be called by an application. + * + * @return true + */ + @Override + public boolean jdbcCompliant() { + return true; + } + + /** + * [Not supported] + */ + @Override + public Logger getParentLogger() { + return null; + } + + /** + * INTERNAL + */ + public static synchronized Driver load() { + try { + if (!registered) { + registered = true; + DriverManager.registerDriver(INSTANCE); + } + } catch (SQLException e) { + DbException.traceThrowable(e); + } + return INSTANCE; + } + + /** + * INTERNAL + */ + public static synchronized void unload() { + try { + if (registered) { + registered = false; + DriverManager.deregisterDriver(INSTANCE); + } + } catch (SQLException e) { + DbException.traceThrowable(e); + } + } + + /** + * INTERNAL + * Sets, on a per-thread basis, the default-connection for + * user-defined functions. + */ + public static void setDefaultConnection(Connection c) { + if (c == null) { + DEFAULT_CONNECTION.remove(); + } else { + DEFAULT_CONNECTION.set(c); + } + } + + /** + * INTERNAL + */ + public static void setThreadContextClassLoader(Thread thread) { + // Apache Tomcat: use the classloader of the driver to avoid the + // following log message: + // org.apache.catalina.loader.WebappClassLoader clearReferencesThreads + // SEVERE: The web application appears to have started a thread named + // ... but has failed to stop it. + // This is very likely to create a memory leak. 
+ try { + thread.setContextClassLoader(Driver.class.getClassLoader()); + } catch (Throwable t) { + // ignore + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/JdbcDriverBackwardsCompat.java b/modules/h2/src/main/java/org/h2/JdbcDriverBackwardsCompat.java new file mode 100644 index 0000000000000..15082332abc4e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/JdbcDriverBackwardsCompat.java @@ -0,0 +1,16 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2; + +/** + * Allows us to compile on older platforms, while still implementing the methods + * from the newer JDBC API. + */ +public interface JdbcDriverBackwardsCompat { + + // compatibility interface + +} diff --git a/modules/h2/src/main/java/org/h2/api/Aggregate.java b/modules/h2/src/main/java/org/h2/api/Aggregate.java new file mode 100644 index 0000000000000..2131bf7659831 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/Aggregate.java @@ -0,0 +1,53 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +import java.sql.Connection; +import java.sql.SQLException; + +/** + * A user-defined aggregate function needs to implement this interface. + * The class must be public and must have a public non-argument constructor. + */ +public interface Aggregate { + + /** + * This method is called when the aggregate function is used. + * A new object is created for each invocation. + * + * @param conn a connection to the database + */ + void init(Connection conn) throws SQLException; + + /** + * This method must return the H2 data type, {@link org.h2.value.Value}, + * of the aggregate function, given the H2 data type of the input data. 
+ * The method should check here if the number of parameters + * passed is correct, and if not it should throw an exception. + * + * @param inputTypes the H2 data type of the parameters, + * @return the H2 data type of the result + * @throws SQLException if the number/type of parameters passed is incorrect + */ + int getInternalType(int[] inputTypes) throws SQLException; + + /** + * This method is called once for each row. + * If the aggregate function is called with multiple parameters, + * those are passed as array. + * + * @param value the value(s) for this row + */ + void add(Object value) throws SQLException; + + /** + * This method returns the computed aggregate value. + * + * @return the aggregated value + */ + Object getResult() throws SQLException; + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/api/AggregateFunction.java b/modules/h2/src/main/java/org/h2/api/AggregateFunction.java new file mode 100644 index 0000000000000..07734e4edc909 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/AggregateFunction.java @@ -0,0 +1,56 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +import java.sql.Connection; +import java.sql.SQLException; + +/** + * A user-defined aggregate function needs to implement this interface. + * The class must be public and must have a public non-argument constructor. + *

    + * Please note this interface only has limited support for data types. + * If you need data types that don't have a corresponding SQL type + * (for example GEOMETRY), then use the {@link Aggregate} interface. + *

    + */ +public interface AggregateFunction { + + /** + * This method is called when the aggregate function is used. + * A new object is created for each invocation. + * + * @param conn a connection to the database + */ + void init(Connection conn) throws SQLException; + + /** + * This method must return the SQL type of the method, given the SQL type of + * the input data. The method should check here if the number of parameters + * passed is correct, and if not it should throw an exception. + * + * @param inputTypes the SQL type of the parameters, {@link java.sql.Types} + * @return the SQL type of the result + */ + int getType(int[] inputTypes) throws SQLException; + + /** + * This method is called once for each row. + * If the aggregate function is called with multiple parameters, + * those are passed as array. + * + * @param value the value(s) for this row + */ + void add(Object value) throws SQLException; + + /** + * This method returns the computed aggregate value. + * + * @return the aggregated value + */ + Object getResult() throws SQLException; + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/api/CustomDataTypesHandler.java b/modules/h2/src/main/java/org/h2/api/CustomDataTypesHandler.java new file mode 100644 index 0000000000000..fc8006e4a56a2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/CustomDataTypesHandler.java @@ -0,0 +1,109 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +import org.h2.store.DataHandler; +import org.h2.value.DataType; +import org.h2.value.Value; + +/** + * Custom data type handler + * Provides means to plug-in custom data types support + * + * Please keep in mind that this feature may not possibly + * provide the same ABI stability level as other features + * as it exposes many of the H2 internals. 
You may be + * required to update your code occasionally due to internal + * changes in H2 if you are going to use this feature + */ +public interface CustomDataTypesHandler { + /** + * Get custom data type given its name + * + * @param name data type name + * @return custom data type + */ + DataType getDataTypeByName(String name); + + /** + * Get custom data type given its integer id + * + * @param type identifier of a data type + * @return custom data type + */ + DataType getDataTypeById(int type); + + /** + * Get order for custom data type given its integer id + * + * @param type identifier of a data type + * @return order associated with custom data type + */ + int getDataTypeOrder(int type); + + /** + * Convert the provided source value into value of given target data type + * Shall implement conversions to and from custom data types. + * + * @param source source value + * @param targetType identifier of target data type + * @return converted value + */ + Value convert(Value source, int targetType); + + /** + * Get custom data type class name given its integer id + * + * @param type identifier of a data type + * @return class name + */ + String getDataTypeClassName(int type); + + /** + * Get custom data type identifier given corresponding Java class + * @param cls Java class object + * @return type identifier + */ + int getTypeIdFromClass(Class cls); + + /** + * Get {@link org.h2.value.Value} object + * corresponding to given data type identifier and data. + * + * @param type custom data type identifier + * @param data underlying data type value + * @param dataHandler data handler object + * @return Value object + */ + Value getValue(int type, Object data, DataHandler dataHandler); + + /** + * Converts {@link org.h2.value.Value} object + * to the specified class. 
+ * + * @param value the value to convert + * @param cls the target class + * @return result + */ + Object getObject(Value value, Class cls); + + /** + * Checks if the type supports the add operation + * + * @param type custom data type identifier + * @return true if the custom data type supports the add operation + */ + boolean supportsAdd(int type); + + /** + * Get a compatible type identifier that would not overflow + * after many add operations. + * + * @param type identifier of a type + * @return resulting type identifier + */ + int getAddProofType(int type); +} diff --git a/modules/h2/src/main/java/org/h2/api/DatabaseEventListener.java b/modules/h2/src/main/java/org/h2/api/DatabaseEventListener.java new file mode 100644 index 0000000000000..ce74c892b9498 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/DatabaseEventListener.java @@ -0,0 +1,107 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +import java.sql.SQLException; +import java.util.EventListener; + +/** + * A class that implements this interface can get notified about exceptions + * and other events. A database event listener can be registered when + * connecting to a database. Example database URL: + * jdbc:h2:test;DATABASE_EVENT_LISTENER='com.acme.DbListener' + */ +public interface DatabaseEventListener extends EventListener { + + /** + * This state is used when scanning the database file. + */ + int STATE_SCAN_FILE = 0; + + /** + * This state is used when re-creating an index. + */ + int STATE_CREATE_INDEX = 1; + + /** + * This state is used when re-applying the transaction log or rolling back + * uncommitted transactions. + */ + int STATE_RECOVER = 2; + + /** + * This state is used during the BACKUP command. + */ + int STATE_BACKUP_FILE = 3; + + /** + * This state is used after re-connecting to a database (if auto-reconnect + * is enabled).
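A minimal sketch of a listener class of the kind that could be named in the DATABASE_EVENT_LISTENER URL setting shown above. RecordingListener and its event strings are invented for illustration, and the interface (with its state constants) is restated locally so the snippet compiles without H2 on the classpath.

```java
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.EventListener;
import java.util.List;

// Hypothetical listener that records lifecycle callbacks for inspection.
public class RecordingListener implements DatabaseEventListener {

    final List<String> events = new ArrayList<>();

    @Override
    public void init(String url) {
        events.add("init " + url);
    }

    @Override
    public void opened() {
        events.add("opened");
    }

    @Override
    public void exceptionThrown(SQLException e, String sql) {
        events.add("error in: " + sql);
    }

    @Override
    public void setProgress(int state, String name, int x, int max) {
        // Silently ignore states we don't know; more may be added later.
        if (state == STATE_CREATE_INDEX) {
            events.add("indexing " + name + " " + x + "/" + max);
        }
    }

    @Override
    public void closingDatabase() {
        events.add("closing");
    }

    public static void main(String[] args) {
        RecordingListener l = new RecordingListener();
        l.init("jdbc:h2:~/test");
        l.opened();
        l.setProgress(STATE_CREATE_INDEX, "IDX_ID", 50, 100);
        l.closingDatabase();
        System.out.println(l.events);
    }
}

// Local stand-in mirroring org.h2.api.DatabaseEventListener from the patch.
interface DatabaseEventListener extends EventListener {
    int STATE_SCAN_FILE = 0;
    int STATE_CREATE_INDEX = 1;
    int STATE_RECOVER = 2;
    int STATE_BACKUP_FILE = 3;
    int STATE_RECONNECTED = 4;
    int STATE_STATEMENT_START = 5;
    int STATE_STATEMENT_END = 6;
    int STATE_STATEMENT_PROGRESS = 7;
    void init(String url);
    void opened();
    void exceptionThrown(SQLException e, String sql);
    void setProgress(int state, String name, int x, int max);
    void closingDatabase();
}
```

With H2 itself, the class would be registered via the URL, for example jdbc:h2:~/test;DATABASE_EVENT_LISTENER='com.example.RecordingListener' (package name hypothetical).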
+ */ + int STATE_RECONNECTED = 4; + + /** + * This state is used when a query starts. + */ + int STATE_STATEMENT_START = 5; + + /** + * This state is used when a query ends. + */ + int STATE_STATEMENT_END = 6; + + /** + * This state is used for periodic notification during long-running queries. + */ + int STATE_STATEMENT_PROGRESS = 7; + + /** + * This method is called just after creating the object. + * This is done when opening the database if the listener is specified + * in the database URL, but may be later if the listener is set at + * runtime with the SET SQL statement. + * + * @param url the database URL + */ + void init(String url); + + /** + * This method is called after the database has been opened. It is safe to + * connect to the database and execute statements at this point. + */ + void opened(); + + /** + * This method is called if an exception occurred. + * + * @param e the exception + * @param sql the SQL statement + */ + void exceptionThrown(SQLException e, String sql); + + /** + * This method is called for long-running events, such as recovering, + * scanning a file or building an index. + *

+ * More states might be added in future versions; therefore, implementations + * should silently ignore states that they don't understand. + *

+ * + * @param state the state + * @param name the object name + * @param x the current position + * @param max the highest possible value (might be 0) + */ + void setProgress(int state, String name, int x, int max); + + /** + * This method is called before the database is closed normally. It is safe + * to connect to the database and execute statements at this point; however, + * the connection must be closed before the method returns. + */ + void closingDatabase(); + +} diff --git a/modules/h2/src/main/java/org/h2/api/ErrorCode.java b/modules/h2/src/main/java/org/h2/api/ErrorCode.java new file mode 100644 index 0000000000000..b1a97bfd709d7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/ErrorCode.java @@ -0,0 +1,2058 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +/** + * This class defines the error codes used for SQL exceptions. + * Error messages are formatted as follows: + *
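The codes defined in the ErrorCode class are what SQLException.getErrorCode() returns for H2 exceptions. A hedged sketch of dispatching on them; ErrorCodeDemo and classify are hypothetical names, and the numeric literals simply mirror constants defined below (real code would reference org.h2.api.ErrorCode instead of repeating the numbers).

```java
import java.sql.SQLException;

// Hypothetical helper mapping vendor error codes to application-level outcomes.
public class ErrorCodeDemo {

    // Mirrors of org.h2.api.ErrorCode constants, restated for a standalone sketch.
    static final int DUPLICATE_KEY_1 = 23505;
    static final int TABLE_OR_VIEW_NOT_FOUND_1 = 42102;

    static String classify(SQLException e) {
        switch (e.getErrorCode()) {
            case DUPLICATE_KEY_1:
                return "duplicate";     // unique index or primary key violated
            case TABLE_OR_VIEW_NOT_FOUND_1:
                return "missing-table"; // often a sign the wrong database was opened
            default:
                return "unexpected";
        }
    }

    public static void main(String[] args) {
        // SQLException carries the vendor code as its third constructor argument.
        SQLException dup = new SQLException("duplicate key", "23505", DUPLICATE_KEY_1);
        System.out.println(classify(dup));
    }
}
```

The SQLState string (second constructor argument) usually matches the leading digits of the code, as in the two-part numbering visible in the constants below.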
    + * { error message (possibly translated; may include quoted data) }
    + * { error message in English if different }
    + * { SQL statement if applicable }
    + * { [ error code - build number ] }
    + * 
    + * Example: + *
    + * Syntax error in SQL statement "SELECT * FORM[*] TEST ";
    + * SQL statement: select * form test [42000-125]
    + * 
    + * The [*] marks the position of the syntax error + * (FORM instead of FROM in this case). + * The error code is 42000, and the build number is 125, + * meaning version 1.2.125. + */ +public class ErrorCode { + + // 02: no data + + /** + * The error with code 2000 is thrown when + * the result set is positioned before the first or after the last row, or + * not on a valid row for the given operation. + * Example of wrong usage: + *
    +     * ResultSet rs = stat.executeQuery("SELECT * FROM DUAL");
    +     * rs.getString(1);
    +     * 
    + * Correct: + *
    +     * ResultSet rs = stat.executeQuery("SELECT * FROM DUAL");
    +     * rs.next();
    +     * rs.getString(1);
    +     * 
    + */ + public static final int NO_DATA_AVAILABLE = 2000; + + // 07: dynamic SQL error + + /** + * The error with code 7001 is thrown when + * trying to call a function with the wrong number of parameters. + * Example: + *
    +     * CALL ABS(1, 2)
    +     * 
    + */ + public static final int INVALID_PARAMETER_COUNT_2 = 7001; + + // 08: connection exception + + /** + * The error with code 8000 is thrown when + * there was a problem trying to create a database lock. + * See the message and cause for details. + */ + public static final int ERROR_OPENING_DATABASE_1 = 8000; + + // 21: cardinality violation + + /** + * The error with code 21002 is thrown when the number of + * columns does not match. Possible reasons are: for an INSERT or MERGE + * statement, the column count does not match the table or the column list + * specified. For a SELECT UNION statement, both queries return a different + * number of columns. For a constraint, the number of referenced and + * referencing columns does not match. Example: + *
    +     * CREATE TABLE TEST(ID INT, NAME VARCHAR);
    +     * INSERT INTO TEST VALUES('Hello');
    +     * 
    + */ + public static final int COLUMN_COUNT_DOES_NOT_MATCH = 21002; + + // 22: data exception + + /** + * The error with code 22001 is thrown when + * trying to insert a value that is too long for the column. + * Example: + *
    +     * CREATE TABLE TEST(ID INT, NAME VARCHAR(2));
    +     * INSERT INTO TEST VALUES(1, 'Hello');
    +     * 
    + */ + public static final int VALUE_TOO_LONG_2 = 22001; + + /** + * The error with code 22003 is thrown when a value is out of + * range when converting to another data type. Example: + *
    +     * CALL CAST(1000000 AS TINYINT);
    +     * SELECT CAST(124.34 AS DECIMAL(2, 2));
    +     * 
    + */ + public static final int NUMERIC_VALUE_OUT_OF_RANGE_1 = 22003; + + /** + * The error with code 22004 is thrown when a value is out of + * range when converting to another column's data type. + */ + public static final int NUMERIC_VALUE_OUT_OF_RANGE_2 = 22004; + + /** + * The error with code 22007 is thrown when + * a text can not be converted to a date, time, or timestamp constant. + * Examples: + *
    +     * CALL DATE '2007-January-01';
    +     * CALL TIME '14:61:00';
    +     * CALL TIMESTAMP '2001-02-30 12:00:00';
    +     * 
    + */ + public static final int INVALID_DATETIME_CONSTANT_2 = 22007; + + /** + * The error with code 22012 is thrown when trying to divide + * a value by zero. Example: + *
    +     * CALL 1/0;
    +     * 
    + */ + public static final int DIVISION_BY_ZERO_1 = 22012; + + /** + * The error with code 22018 is thrown when + * trying to convert a value to a data type where the conversion is + * undefined, or when an error occurred trying to convert. Example: + *
    +     * CALL CAST(DATE '2001-01-01' AS BOOLEAN);
    +     * CALL CAST('CHF 99.95' AS INT);
    +     * 
    + */ + public static final int DATA_CONVERSION_ERROR_1 = 22018; + + /** + * The error with code 22025 is thrown when using an invalid + * escape character sequence for LIKE or REGEXP. The default escape + * character is '\'. The escape character is required when searching for + * the characters '%', '_' and the escape character itself. That means if + * you want to search for the text '10%', you need to use LIKE '10\%'. If + * you want to search for 'C:\temp' you need to use 'C:\\temp'. The escape + * character can be changed using the ESCAPE clause as in LIKE '10+%' ESCAPE + * '+'. Example of wrong usage: + *
    +     * CALL 'C:\temp' LIKE 'C:\temp';
    +     * CALL '1+1' LIKE '1+1' ESCAPE '+';
    +     * 
    + * Correct: + *
    +     * CALL 'C:\temp' LIKE 'C:\\temp';
    +     * CALL '1+1' LIKE '1++1' ESCAPE '+';
    +     * 
    + */ + public static final int LIKE_ESCAPE_ERROR_1 = 22025; + + /** + * The error with code 22030 is thrown when + * an attempt is made to INSERT or UPDATE an ENUM-typed cell, + * but the value is not one of the values enumerated by the + * type. + * + * Example: + *
    +     * CREATE TABLE TEST(CASE ENUM('sensitive','insensitive'));
    +     * INSERT INTO TEST VALUES('snake');
    +     * 
    + */ + public static final int ENUM_VALUE_NOT_PERMITTED = 22030; + + /** + * The error with code 22032 is thrown when an + * attempt is made to add or modify an ENUM-typed column so + * that one or more of its enumerators would be empty. + * + * Example: + *
    +     * CREATE TABLE TEST(CASE ENUM(' '));
    +     * 
    + */ + public static final int ENUM_EMPTY = 22032; + + /** + * The error with code 22033 is thrown when an + * attempt is made to add or modify an ENUM-typed column so + * that it would have duplicate values. + * + * Example: + *
    +     * CREATE TABLE TEST(CASE ENUM('sensitive', 'sensitive'));
    +     * 
    + */ + public static final int ENUM_DUPLICATE = 22033; + + // 23: constraint violation + + /** + * The error with code 23502 is thrown when + * trying to insert NULL into a column that does not allow NULL. + * Example: + *
    +     * CREATE TABLE TEST(ID INT, NAME VARCHAR NOT NULL);
    +     * INSERT INTO TEST(ID) VALUES(1);
    +     * 
    + */ + public static final int NULL_NOT_ALLOWED = 23502; + + /** + * The error with code 23503 is thrown when trying to delete + * or update a row when this would violate a referential constraint, because + * there is a child row that would become an orphan. Example: + *
    +     * CREATE TABLE TEST(ID INT PRIMARY KEY, PARENT INT);
    +     * INSERT INTO TEST VALUES(1, 1), (2, 1);
    +     * ALTER TABLE TEST ADD CONSTRAINT TEST_ID_PARENT
    +     *       FOREIGN KEY(PARENT) REFERENCES TEST(ID) ON DELETE RESTRICT;
    +     * DELETE FROM TEST WHERE ID = 1;
    +     * 
    + */ + public static final int REFERENTIAL_INTEGRITY_VIOLATED_CHILD_EXISTS_1 = 23503; + + /** + * The error with code 23505 is thrown when trying to insert + * a row that would violate a unique index or primary key. Example: + *
    +     * CREATE TABLE TEST(ID INT PRIMARY KEY);
    +     * INSERT INTO TEST VALUES(1);
    +     * INSERT INTO TEST VALUES(1);
    +     * 
    + */ + public static final int DUPLICATE_KEY_1 = 23505; + + /** + * The error with code 23506 is thrown when trying to insert + * or update a row that would violate a referential constraint, because the + * referenced row does not exist. Example: + *
    +     * CREATE TABLE PARENT(ID INT PRIMARY KEY);
    +     * CREATE TABLE CHILD(P_ID INT REFERENCES PARENT(ID));
    +     * INSERT INTO CHILD VALUES(1);
    +     * 
    + */ + public static final int REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1 = 23506; + + /** + * The error with code 23507 is thrown when + * updating or deleting from a table with a foreign key constraint + * that should set the default value, but there is no default value defined. + * Example: + *
    +     * CREATE TABLE TEST(ID INT PRIMARY KEY, PARENT INT);
    +     * INSERT INTO TEST VALUES(1, 1), (2, 1);
    +     * ALTER TABLE TEST ADD CONSTRAINT TEST_ID_PARENT
    +     *   FOREIGN KEY(PARENT) REFERENCES TEST(ID) ON DELETE SET DEFAULT;
    +     * DELETE FROM TEST WHERE ID = 1;
    +     * 
    + */ + public static final int NO_DEFAULT_SET_1 = 23507; + + /** + * The error with code 23513 is thrown when + * a check constraint is violated. Example: + *
    +     * CREATE TABLE TEST(ID INT CHECK ID>0);
    +     * INSERT INTO TEST VALUES(0);
    +     * 
+ */ + public static final int CHECK_CONSTRAINT_VIOLATED_1 = 23513; + + /** + * The error with code 23514 is thrown when + * evaluation of a check constraint resulted in an error. + */ + public static final int CHECK_CONSTRAINT_INVALID = 23514; + + // 28: invalid authorization specification + + /** + * The error with code 28000 is thrown when + * there is no such user registered in the database, when the user password + * does not match, or when the database encryption password does not match + * (if database encryption is used). + */ + public static final int WRONG_USER_OR_PASSWORD = 28000; + + // 3B: savepoint exception + + /** + * The error with code 40001 is thrown when + * the database engine has detected a deadlock. The transaction of this + * session has been rolled back to solve the problem. A deadlock occurs when + * a session tries to lock a table another session has locked, while the + * other session wants to lock a table the first session has locked. As an + * example, session 1 has locked table A, while session 2 has locked table + * B. If session 1 now tries to lock table B and session 2 tries to lock + * table A, a deadlock has occurred. Deadlocks that involve more than two + * sessions are also possible. To avoid deadlocks, an application + * should always lock tables in the same order, for example always locking + * table A before table B. For details, see
the Wikipedia article on deadlocks. + */ + public static final int DEADLOCK_1 = 40001; + + // 42: syntax error or access rule violation + + /** + * The error with code 42000 is thrown when + * trying to execute an invalid SQL statement. + * Example: + *
    +     * CREATE ALIAS REMAINDER FOR "IEEEremainder";
    +     * 
    + */ + public static final int SYNTAX_ERROR_1 = 42000; + + /** + * The error with code 42001 is thrown when + * trying to execute an invalid SQL statement. + * Example: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * INSERT INTO TEST(1);
    +     * 
    + */ + public static final int SYNTAX_ERROR_2 = 42001; + + /** + * The error with code 42101 is thrown when + * trying to create a table or view if an object with this name already + * exists. Example: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * CREATE TABLE TEST(ID INT PRIMARY KEY);
    +     * 
+ */ + public static final int TABLE_OR_VIEW_ALREADY_EXISTS_1 = 42101; + + /** + * The error with code 42102 is thrown when + * trying to query, modify or drop a table or view that does not exist + * in this schema and database. A common cause is that the wrong + * database was opened. + * Example: + *
    +     * SELECT * FROM ABC;
    +     * 
    + */ + public static final int TABLE_OR_VIEW_NOT_FOUND_1 = 42102; + + /** + * The error with code 42111 is thrown when + * trying to create an index if an index with the same name already exists. + * Example: + *
    +     * CREATE TABLE TEST(ID INT, NAME VARCHAR);
    +     * CREATE INDEX IDX_ID ON TEST(ID);
    +     * CREATE TABLE ADDRESS(ID INT);
    +     * CREATE INDEX IDX_ID ON ADDRESS(ID);
    +     * 
    + */ + public static final int INDEX_ALREADY_EXISTS_1 = 42111; + + /** + * The error with code 42112 is thrown when + * trying to drop or reference an index that does not exist. + * Example: + *
    +     * DROP INDEX ABC;
    +     * 
    + */ + public static final int INDEX_NOT_FOUND_1 = 42112; + + /** + * The error with code 42121 is thrown when trying to create + * a table or insert into a table and use the same column name twice. + * Example: + *
    +     * CREATE TABLE TEST(ID INT, ID INT);
    +     * 
+ */ + public static final int DUPLICATE_COLUMN_NAME_1 = 42121; + + /** + * The error with code 42122 is thrown when + * referencing a non-existent column. + * Example: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * SELECT NAME FROM TEST;
    +     * 
    + */ + public static final int COLUMN_NOT_FOUND_1 = 42122; + + // 0A: feature not supported + + // HZ: remote database access + + // + + /** + * The error with code 50000 is thrown when + * something unexpected occurs, for example an internal stack + * overflow. For details about the problem, see the cause of the + * exception in the stack trace. + */ + public static final int GENERAL_ERROR_1 = 50000; + + /** + * The error with code 50004 is thrown when + * creating a table with an unsupported data type, or + * when the data type is unknown because parameters are used. + * Example: + *
    +     * CREATE TABLE TEST(ID VERYSMALLINT);
    +     * 
    + */ + public static final int UNKNOWN_DATA_TYPE_1 = 50004; + + /** + * The error with code 50100 is thrown when calling an + * unsupported JDBC method or database feature. See the stack trace for + * details. + */ + public static final int FEATURE_NOT_SUPPORTED_1 = 50100; + + /** + * The error with code 50200 is thrown when + * another connection locked an object longer than the lock timeout + * set for this connection, or when a deadlock occurred. + * Example: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * -- connection 1:
    +     * SET AUTOCOMMIT FALSE;
    +     * INSERT INTO TEST VALUES(1);
    +     * -- connection 2:
    +     * SET AUTOCOMMIT FALSE;
    +     * INSERT INTO TEST VALUES(1);
    +     * 
    + */ + public static final int LOCK_TIMEOUT_1 = 50200; + + /** + * The error with code 57014 is thrown when + * a statement was canceled using Statement.cancel() or + * when the query timeout has been reached. + * Examples: + *
    +     * stat.setQueryTimeout(1);
    +     * stat.cancel();
    +     * 
    + */ + public static final int STATEMENT_WAS_CANCELED = 57014; + + /** + * The error with code 90000 is thrown when + * a function that does not return a result set was used in the FROM clause. + * Example: + *
    +     * SELECT * FROM SIN(1);
    +     * 
    + */ + public static final int FUNCTION_MUST_RETURN_RESULT_SET_1 = 90000; + + /** + * The error with code 90001 is thrown when + * Statement.executeUpdate() was called for a SELECT statement. + * This is not allowed according to the JDBC specs. + */ + public static final int METHOD_NOT_ALLOWED_FOR_QUERY = 90001; + + /** + * The error with code 90002 is thrown when + * Statement.executeQuery() was called for a statement that does + * not return a result set (for example, an UPDATE statement). + * This is not allowed according to the JDBC specs. + */ + public static final int METHOD_ONLY_ALLOWED_FOR_QUERY = 90002; + + /** + * The error with code 90003 is thrown when + * trying to convert a String to a binary value. Two hex digits + * per byte are required. Example of wrong usage: + *
    +     * CALL X'00023';
    +     * Hexadecimal string with odd number of characters: 00023
    +     * 
    + * Correct: + *
    +     * CALL X'000023';
    +     * 
    + */ + public static final int HEX_STRING_ODD_1 = 90003; + + /** + * The error with code 90004 is thrown when + * trying to convert a text to binary, but the expression contains + * a non-hexadecimal character. + * Example: + *
    +     * CALL X'ABCDEFGH';
    +     * CALL CAST('ABCDEFGH' AS BINARY);
    +     * 
    + * Conversion from text to binary is supported, but the text must + * represent the hexadecimal encoded bytes. + */ + public static final int HEX_STRING_WRONG_1 = 90004; + + /** + * The error with code 90005 is thrown when + * trying to create a trigger and using the combination of SELECT + * and FOR EACH ROW, which we do not support. + */ + public static final int TRIGGER_SELECT_AND_ROW_BASED_NOT_SUPPORTED = 90005; + + /** + * The error with code 90006 is thrown when + * trying to get a value from a sequence that has run out of numbers + * and does not have cycling enabled. + */ + public static final int SEQUENCE_EXHAUSTED = 90006; + + /** + * The error with code 90007 is thrown when + * trying to call a JDBC method on an object that has been closed. + */ + public static final int OBJECT_CLOSED = 90007; + + /** + * The error with code 90008 is thrown when + * trying to use a value that is not valid for the given operation. + * Example: + *
    +     * CREATE SEQUENCE TEST INCREMENT 0;
    +     * 
    + */ + public static final int INVALID_VALUE_2 = 90008; + + /** + * The error with code 90009 is thrown when + * trying to create a sequence with an invalid combination + * of attributes (min value, max value, start value, etc). + */ + public static final int SEQUENCE_ATTRIBUTES_INVALID = 90009; + + /** + * The error with code 90010 is thrown when + * trying to format a timestamp or number using TO_CHAR + * with an invalid format. + */ + public static final int INVALID_TO_CHAR_FORMAT = 90010; + + /** + * The error with code 90011 is thrown when + * trying to open a connection to a database using an implicit relative + * path, such as "jdbc:h2:test" (in which case the database file would be + * stored in the current working directory of the application). This is not + * allowed because it can lead to confusion where the database file is, and + * can result in multiple databases because different working directories + * are used. Instead, use "jdbc:h2:~/name" (relative to the current user + * home directory), use an absolute path, set the base directory (baseDir), + * use "jdbc:h2:./name" (explicit relative path), or set the system property + * "h2.implicitRelativePath" to "true" (to prevent this check). For Windows, + * an absolute path also needs to include the drive ("C:/..."). Please see + * the documentation on the supported URL format. Example: + *
    +     * jdbc:h2:test
    +     * 
+ */ + public static final int URL_RELATIVE_TO_CWD = 90011; + + /** + * The error with code 90012 is thrown when + * trying to execute a statement with an unset parameter. + * Example: + *
    +     * CALL SIN(?);
    +     * 
    + */ + public static final int PARAMETER_NOT_SET_1 = 90012; + + /** + * The error with code 90013 is thrown when + * trying to open a database that does not exist using the flag + * IFEXISTS=TRUE, or when trying to access a database object with a catalog + * name that does not match the database name. Example: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * SELECT XYZ.PUBLIC.TEST.ID FROM TEST;
    +     * 
    + */ + public static final int DATABASE_NOT_FOUND_1 = 90013; + + /** + * The error with code 90014 is thrown when + * trying to parse a date with an unsupported format string, or + * when the date can not be parsed. + * Example: + *
    +     * CALL PARSEDATETIME('2001 January', 'yyyy mm');
    +     * 
    + */ + public static final int PARSE_ERROR_1 = 90014; + + /** + * The error with code 90015 is thrown when + * using an aggregate function with a data type that is not supported. + * Example: + *
    +     * SELECT SUM('Hello') FROM DUAL;
    +     * 
    + */ + public static final int SUM_OR_AVG_ON_WRONG_DATATYPE_1 = 90015; + + /** + * The error with code 90016 is thrown when + * a column was used in the expression list or the order by clause of a + * group or aggregate query, and that column is not in the GROUP BY clause. + * Example of wrong usage: + *
    +     * CREATE TABLE TEST(ID INT, NAME VARCHAR);
    +     * INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World');
    +     * SELECT ID, MAX(NAME) FROM TEST;
    +     * Column ID must be in the GROUP BY list.
    +     * 
    + * Correct: + *
    +     * SELECT ID, MAX(NAME) FROM TEST GROUP BY ID;
    +     * 
    + */ + public static final int MUST_GROUP_BY_COLUMN_1 = 90016; + + /** + * The error with code 90017 is thrown when + * trying to define a second primary key constraint for this table. + * Example: + *
    +     * CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR);
    +     * ALTER TABLE TEST ADD CONSTRAINT PK PRIMARY KEY(NAME);
    +     * 
    + */ + public static final int SECOND_PRIMARY_KEY = 90017; + + /** + * The error with code 90018 is thrown when + * the connection was opened, but never closed. In the finalizer of the + * connection, this forgotten close was detected and the connection was + * closed automatically, but relying on the finalizer is not good practice + * as it is not guaranteed and behavior is virtual machine dependent. The + * application should close the connection. This exception only appears in + * the .trace.db file. Example of wrong usage: + *
    +     * Connection conn;
+     * conn = DriverManager.getConnection("jdbc:h2:~/test");
    +     * conn = null;
    +     * The connection was not closed by the application and is
    +     * garbage collected
    +     * 
    + * Correct: + *
    +     * conn.close();
    +     * 
    + */ + public static final int TRACE_CONNECTION_NOT_CLOSED = 90018; + + /** + * The error with code 90019 is thrown when + * trying to drop the current user, if there are no other admin users. + * Example: + *
    +     * DROP USER SA;
    +     * 
    + */ + public static final int CANNOT_DROP_CURRENT_USER = 90019; + + /** + * The error with code 90020 is thrown when trying to open a + * database in embedded mode if this database is already in use in another + * process (or in a different class loader). Multiple connections to the + * same database are supported in the following cases: + *
    • In embedded mode (URL of the form jdbc:h2:~/test) if all + * connections are opened within the same process and class loader. + *
    • In server and cluster mode (URL of the form + * jdbc:h2:tcp://localhost/test) using remote connections. + *
+ * The mixed mode is also supported. This mode requires starting a server + * in the same process where the database is open in embedded mode. + */ + public static final int DATABASE_ALREADY_OPEN_1 = 90020; + + /** + * The error with code 90021 is thrown when + * trying to change a specific database property that conflicts with other + * database properties. + */ + public static final int UNSUPPORTED_SETTING_COMBINATION = 90021; + + /** + * The error with code 90022 is thrown when + * trying to call an unknown function. + * Example: + *
    +     * CALL SPECIAL_SIN(10);
    +     * 
    + */ + public static final int FUNCTION_NOT_FOUND_1 = 90022; + + /** + * The error with code 90023 is thrown when + * trying to set a primary key on a nullable column. + * Example: + *
    +     * CREATE TABLE TEST(ID INT, NAME VARCHAR);
    +     * ALTER TABLE TEST ADD CONSTRAINT PK PRIMARY KEY(ID);
    +     * 
    + */ + public static final int COLUMN_MUST_NOT_BE_NULLABLE_1 = 90023; + + /** + * The error with code 90024 is thrown when + * a file could not be renamed. + */ + public static final int FILE_RENAME_FAILED_2 = 90024; + + /** + * The error with code 90025 is thrown when + * a file could not be deleted, because it is still in use + * (only in Windows), or because an error occurred when deleting. + */ + public static final int FILE_DELETE_FAILED_1 = 90025; + + /** + * The error with code 90026 is thrown when + * an object could not be serialized. + */ + public static final int SERIALIZATION_FAILED_1 = 90026; + + /** + * The error with code 90027 is thrown when + * an object could not be de-serialized. + */ + public static final int DESERIALIZATION_FAILED_1 = 90027; + + /** + * The error with code 90028 is thrown when + * an input / output error occurred. For more information, see the root + * cause of the exception. + */ + public static final int IO_EXCEPTION_1 = 90028; + + /** + * The error with code 90029 is thrown when + * calling ResultSet.deleteRow(), insertRow(), or updateRow() + * when the current row is not updatable. + * Example: + *
    +     * ResultSet rs = stat.executeQuery("SELECT * FROM TEST");
    +     * rs.next();
    +     * rs.insertRow();
    +     * 
    + */ + public static final int NOT_ON_UPDATABLE_ROW = 90029; + + /** + * The error with code 90030 is thrown when + * the database engine has detected a checksum mismatch in the data + * or index. To solve this problem, restore a backup or use the + * Recovery tool (org.h2.tools.Recover). + */ + public static final int FILE_CORRUPTED_1 = 90030; + + /** + * The error with code 90031 is thrown when + * an input / output error occurred. For more information, see the root + * cause of the exception. + */ + public static final int IO_EXCEPTION_2 = 90031; + + /** + * The error with code 90032 is thrown when + * trying to drop or alter a user that does not exist. + * Example: + *
    +     * DROP USER TEST_USER;
    +     * 
    + */ + public static final int USER_NOT_FOUND_1 = 90032; + + /** + * The error with code 90033 is thrown when + * trying to create a user or role if a user with this name already exists. + * Example: + *
    +     * CREATE USER TEST_USER;
    +     * CREATE USER TEST_USER;
    +     * 
+ */ + public static final int USER_ALREADY_EXISTS_1 = 90033; + + /** + * The error with code 90034 is thrown when + * writing to the trace file failed, for example because there + * is an I/O exception. This message is printed to System.out, + * but only once. + */ + public static final int TRACE_FILE_ERROR_2 = 90034; + + /** + * The error with code 90035 is thrown when + * trying to create a sequence if a sequence with this name already + * exists. + * Example: + *
    +     * CREATE SEQUENCE TEST_SEQ;
    +     * CREATE SEQUENCE TEST_SEQ;
    +     * 
    + */ + public static final int SEQUENCE_ALREADY_EXISTS_1 = 90035; + + /** + * The error with code 90036 is thrown when + * trying to access a sequence that does not exist. + * Example: + *
    +     * SELECT NEXT VALUE FOR SEQUENCE XYZ;
    +     * 
    + */ + public static final int SEQUENCE_NOT_FOUND_1 = 90036; + + /** + * The error with code 90037 is thrown when + * trying to drop or alter a view that does not exist. + * Example: + *
    +     * DROP VIEW XYZ;
    +     * 
    + */ + public static final int VIEW_NOT_FOUND_1 = 90037; + + /** + * The error with code 90038 is thrown when + * trying to create a view if a view with this name already + * exists. + * Example: + *
    +     * CREATE VIEW DUMMY AS SELECT * FROM DUAL;
    +     * CREATE VIEW DUMMY AS SELECT * FROM DUAL;
    +     * 
    + */ + public static final int VIEW_ALREADY_EXISTS_1 = 90038; + + /** + * The error with code 90039 is thrown when + * trying to access a CLOB or BLOB object that timed out. + * See the database setting LOB_TIMEOUT. + */ + public static final int LOB_CLOSED_ON_TIMEOUT_1 = 90039; + + /** + * The error with code 90040 is thrown when + * a user that is not administrator tries to execute a statement + * that requires admin privileges. + */ + public static final int ADMIN_RIGHTS_REQUIRED = 90040; + + /** + * The error with code 90041 is thrown when + * trying to create a trigger and there is already a trigger with that name. + *
    +     * CREATE TABLE TEST(ID INT);
    +     * CREATE TRIGGER TRIGGER_A AFTER INSERT ON TEST
    +     *      CALL "org.h2.samples.TriggerSample$MyTrigger";
    +     * CREATE TRIGGER TRIGGER_A AFTER INSERT ON TEST
    +     *      CALL "org.h2.samples.TriggerSample$MyTrigger";
    +     * 
    + */ + public static final int TRIGGER_ALREADY_EXISTS_1 = 90041; + + /** + * The error with code 90042 is thrown when + * trying to drop a trigger that does not exist. + * Example: + *
    +     * DROP TRIGGER TRIGGER_XYZ;
    +     * 
+     */
+    public static final int TRIGGER_NOT_FOUND_1 = 90042;
+
+    /**
+     * The error with code <code>90043</code> is thrown when
+     * there is an error initializing the trigger, for example because the
+     * class does not implement the Trigger interface.
+     * See the root cause for details.
+     * Example:
+     * <pre>
+     * CREATE TABLE TEST(ID INT);
+     * CREATE TRIGGER TRIGGER_A AFTER INSERT ON TEST
+     *      CALL "java.lang.String";
+     * </pre>
+     */
+    public static final int ERROR_CREATING_TRIGGER_OBJECT_3 = 90043;
+
+    /**
+     * The error with code <code>90044</code> is thrown when
+     * an exception or error occurred while calling the trigger's fire method.
+     * See the root cause for details.
+     */
+    public static final int ERROR_EXECUTING_TRIGGER_3 = 90044;
+
+    /**
+     * The error with code <code>90045</code> is thrown when trying to create a
+     * constraint if an object with this name already exists. Example:
+     * <pre>
+     * CREATE TABLE TEST(ID INT NOT NULL);
+     * ALTER TABLE TEST ADD CONSTRAINT PK PRIMARY KEY(ID);
+     * ALTER TABLE TEST ADD CONSTRAINT PK PRIMARY KEY(ID);
+     * </pre>
+     */
+    public static final int CONSTRAINT_ALREADY_EXISTS_1 = 90045;
+
+    /**
+     * The error with code <code>90046</code> is thrown when
+     * trying to open a connection to a database using an unsupported URL
+     * format. Please see the documentation on the supported URL format and
+     * examples. Example:
+     * <pre>
+     * jdbc:h2:;;
+     * </pre>
+     */
+    public static final int URL_FORMAT_ERROR_2 = 90046;
+
+    /**
+     * The error with code <code>90047</code> is thrown when
+     * trying to connect to a TCP server with an incompatible client.
+     */
+    public static final int DRIVER_VERSION_ERROR_2 = 90047;
+
+    /**
+     * The error with code <code>90048</code> is thrown when
+     * the file header of a database file (*.db) does not match the
+     * expected version, or if it is corrupted.
+     */
+    public static final int FILE_VERSION_ERROR_1 = 90048;
+
+    /**
+     * The error with code <code>90049</code> is thrown when
+     * trying to open an encrypted database with the wrong file encryption
+     * password or algorithm.
+     */
+    public static final int FILE_ENCRYPTION_ERROR_1 = 90049;
+
+    /**
+     * The error with code <code>90050</code> is thrown when trying to open an
+     * encrypted database, but not separating the file password from the user
+     * password. The file password is specified in the password field, before
+     * the user password. A single space needs to be added between the file
+     * password and the user password; the file password itself may not contain
+     * spaces. File passwords (as well as user passwords) are case sensitive.
+     * Example of wrong usage:
+     * <pre>
+     * String url = "jdbc:h2:~/test;CIPHER=AES";
+     * String passwords = "filePasswordUserPassword";
+     * DriverManager.getConnection(url, "sa", passwords);
+     * </pre>
+     * Correct:
+     * <pre>
+     * String url = "jdbc:h2:~/test;CIPHER=AES";
+     * String passwords = "filePassword userPassword";
+     * DriverManager.getConnection(url, "sa", passwords);
+     * </pre>
+     */
+    public static final int WRONG_PASSWORD_FORMAT = 90050;
+
+    /**
+     * The error with code <code>90051</code> is thrown when
+     * trying to use a scale that is > precision.
+     * Example:
+     * <pre>
+     * CREATE TABLE TABLE1 ( FAIL NUMBER(6,24) );
+     * </pre>
    + */ + public static final int INVALID_VALUE_SCALE_PRECISION = 90051; + + /** + * The error with code 90052 is thrown when + * a subquery that is used as a value contains more than one column. + * Example of wrong usage: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * INSERT INTO TEST VALUES(1), (2);
    +     * SELECT * FROM TEST WHERE ID IN (SELECT 1, 2 FROM DUAL);
    +     * 
+     * Correct:
+     * <pre>
+     * CREATE TABLE TEST(ID INT);
+     * INSERT INTO TEST VALUES(1), (2);
+     * SELECT * FROM TEST WHERE ID IN (1, 2);
+     * </pre>
+     */
+    public static final int SUBQUERY_IS_NOT_SINGLE_COLUMN = 90052;
+
+    /**
+     * The error with code <code>90053</code> is thrown when
+     * a subquery that is used as a value contains more than one row.
+     * Example:
+     * <pre>
+     * CREATE TABLE TEST(ID INT, NAME VARCHAR);
+     * INSERT INTO TEST VALUES(1, 'Hello'), (1, 'World');
+     * SELECT X, (SELECT NAME FROM TEST WHERE ID=X) FROM DUAL;
+     * </pre>
+     */
+    public static final int SCALAR_SUBQUERY_CONTAINS_MORE_THAN_ONE_ROW = 90053;
+
+    /**
+     * The error with code <code>90054</code> is thrown when
+     * an aggregate function is used where it is not allowed.
+     * Example:
+     * <pre>
+     * CREATE TABLE TEST(ID INT);
+     * INSERT INTO TEST VALUES(1), (2);
+     * SELECT MAX(ID) FROM TEST WHERE ID = MAX(ID) GROUP BY ID;
+     * </pre>
+     */
+    public static final int INVALID_USE_OF_AGGREGATE_FUNCTION_1 = 90054;
+
+    /**
+     * The error with code <code>90055</code> is thrown when
+     * trying to open a database with an unsupported cipher algorithm.
+     * Only AES is supported.
+     * Example:
+     * <pre>
+     * jdbc:h2:~/test;CIPHER=DES
+     * </pre>
+     */
+    public static final int UNSUPPORTED_CIPHER = 90055;
+
+    /**
+     * The error with code <code>90056</code> is thrown when trying to format a
+     * timestamp using TO_DATE and TO_TIMESTAMP with an invalid format.
+     */
+    public static final int INVALID_TO_DATE_FORMAT = 90056;
+
+    /**
+     * The error with code <code>90057</code> is thrown when
+     * trying to drop a constraint that does not exist.
+     * Example:
+     * <pre>
+     * CREATE TABLE TEST(ID INT);
+     * ALTER TABLE TEST DROP CONSTRAINT CID;
+     * </pre>
+     */
+    public static final int CONSTRAINT_NOT_FOUND_1 = 90057;
+
+    /**
+     * The error with code <code>90058</code> is thrown when trying to call
+     * commit or rollback inside a trigger, or when trying to call a method
+     * inside a trigger that implicitly commits the current transaction, if an
+     * object is locked. This is not allowed because it would release the lock
+     * too early.
+     */
+    public static final int COMMIT_ROLLBACK_NOT_ALLOWED = 90058;
+
+    /**
+     * The error with code <code>90059</code> is thrown when
+     * a query contains a column that could belong to multiple tables.
+     * Example:
+     * <pre>
+     * CREATE TABLE PARENT(ID INT, NAME VARCHAR);
+     * CREATE TABLE CHILD(PID INT, NAME VARCHAR);
+     * SELECT ID, NAME FROM PARENT P, CHILD C WHERE P.ID = C.PID;
+     * </pre>
+     */
+    public static final int AMBIGUOUS_COLUMN_NAME_1 = 90059;
+
+    /**
+     * The error with code <code>90060</code> is thrown when
+     * trying to use a file locking mechanism that is not supported.
+     * Currently only FILE (the default) and SOCKET are supported.
+     * Example:
+     * <pre>
+     * jdbc:h2:~/test;FILE_LOCK=LDAP
+     * </pre>
+     */
+    public static final int UNSUPPORTED_LOCK_METHOD_1 = 90060;
+
+    /**
+     * The error with code <code>90061</code> is thrown when
+     * trying to start a server if a server is already running at the same port.
+     * It could also be a firewall problem. To find out if another server is
+     * already running, run the following command on Windows:
+     * <pre>
+     * netstat -ano
+     * </pre>
+     * The column PID is the process id as listed in the Task Manager.
+     * For Linux, use:
+     * <pre>
+     * netstat -npl
+     * </pre>
+     */
+    public static final int EXCEPTION_OPENING_PORT_2 = 90061;
+
+    /**
+     * The error with code <code>90062</code> is thrown when
+     * a directory or file could not be created. This can occur when
+     * trying to create a directory if a file with the same name already
+     * exists, or vice versa.
+     */
+    public static final int FILE_CREATION_FAILED_1 = 90062;
+
+    /**
+     * The error with code <code>90063</code> is thrown when
+     * trying to rollback to a savepoint that is not defined.
+     * Example:
+     * <pre>
+     * ROLLBACK TO SAVEPOINT S_UNKNOWN;
+     * </pre>
    + */ + public static final int SAVEPOINT_IS_INVALID_1 = 90063; + + /** + * The error with code 90064 is thrown when + * Savepoint.getSavepointName() is called on an unnamed savepoint. + * Example: + *
    +     * Savepoint sp = conn.setSavepoint();
    +     * sp.getSavepointName();
    +     * 
    + */ + public static final int SAVEPOINT_IS_UNNAMED = 90064; + + /** + * The error with code 90065 is thrown when + * Savepoint.getSavepointId() is called on a named savepoint. + * Example: + *
    +     * Savepoint sp = conn.setSavepoint("Joe");
    +     * sp.getSavepointId();
    +     * 
    + */ + public static final int SAVEPOINT_IS_NAMED = 90065; + + /** + * The error with code 90066 is thrown when + * the same property appears twice in the database URL or in + * the connection properties. + * Example: + *
    +     * jdbc:h2:~/test;LOCK_TIMEOUT=0;LOCK_TIMEOUT=1
    +     * 
    + */ + public static final int DUPLICATE_PROPERTY_1 = 90066; + + /** + * The error with code 90067 is thrown when the client could + * not connect to the database, or if the connection was lost. Possible + * reasons are: the database server is not running at the given port, the + * connection was closed due to a shutdown, or the server was stopped. Other + * possible causes are: the server is not an H2 server, or the network + * connection is broken. + */ + public static final int CONNECTION_BROKEN_1 = 90067; + + /** + * The error with code 90068 is thrown when the given + * expression that is used in the ORDER BY is not in the result list. This + * is required for distinct queries, otherwise the result would be + * ambiguous. + * Example of wrong usage: + *
    +     * CREATE TABLE TEST(ID INT, NAME VARCHAR);
    +     * INSERT INTO TEST VALUES(2, 'Hello'), (1, 'Hello');
    +     * SELECT DISTINCT NAME FROM TEST ORDER BY ID;
    +     * Order by expression ID must be in the result list in this case
    +     * 
    + * Correct: + *
    +     * SELECT DISTINCT ID, NAME FROM TEST ORDER BY ID;
    +     * 
    + */ + public static final int ORDER_BY_NOT_IN_RESULT = 90068; + + /** + * The error with code 90069 is thrown when + * trying to create a role if an object with this name already exists. + * Example: + *
    +     * CREATE ROLE TEST_ROLE;
    +     * CREATE ROLE TEST_ROLE;
    +     * 
+     */
+    public static final int ROLE_ALREADY_EXISTS_1 = 90069;
+
+    /**
+     * The error with code <code>90070</code> is thrown when
+     * trying to drop or grant a role that does not exist.
+     * Example:
+     * <pre>
+     * DROP ROLE TEST_ROLE_2;
+     * </pre>
    + */ + public static final int ROLE_NOT_FOUND_1 = 90070; + + /** + * The error with code 90071 is thrown when + * trying to grant or revoke if no role or user with that name exists. + * Example: + *
    +     * GRANT SELECT ON TEST TO UNKNOWN;
    +     * 
    + */ + public static final int USER_OR_ROLE_NOT_FOUND_1 = 90071; + + /** + * The error with code 90072 is thrown when + * trying to grant or revoke both roles and rights at the same time. + * Example: + *
    +     * GRANT SELECT, TEST_ROLE ON TEST TO SA;
    +     * 
    + */ + public static final int ROLES_AND_RIGHT_CANNOT_BE_MIXED = 90072; + + /** + * The error with code 90073 is thrown when trying to create + * an alias for a Java method, if two methods exists in this class that have + * this name and the same number of parameters. + * Example of wrong usage: + *
    +     * CREATE ALIAS GET_LONG FOR
    +     *      "java.lang.Long.getLong";
    +     * 
    + * Correct: + *
    +     * CREATE ALIAS GET_LONG FOR
    +     *      "java.lang.Long.getLong(java.lang.String, java.lang.Long)";
    +     * 
    + */ + public static final int METHODS_MUST_HAVE_DIFFERENT_PARAMETER_COUNTS_2 = 90073; + + /** + * The error with code 90074 is thrown when + * trying to grant a role that has already been granted. + * Example: + *
    +     * CREATE ROLE TEST_A;
    +     * CREATE ROLE TEST_B;
    +     * GRANT TEST_A TO TEST_B;
    +     * GRANT TEST_B TO TEST_A;
    +     * 
    + */ + public static final int ROLE_ALREADY_GRANTED_1 = 90074; + + /** + * The error with code 90075 is thrown when + * trying to alter a table and allow null for a column that is part of a + * primary key or hash index. + * Example: + *
    +     * CREATE TABLE TEST(ID INT PRIMARY KEY);
    +     * ALTER TABLE TEST ALTER COLUMN ID NULL;
    +     * 
    + */ + public static final int COLUMN_IS_PART_OF_INDEX_1 = 90075; + + /** + * The error with code 90076 is thrown when + * trying to create a function alias for a system function or for a function + * that is already defined. + * Example: + *
    +     * CREATE ALIAS SQRT FOR "java.lang.Math.sqrt"
    +     * 
    + */ + public static final int FUNCTION_ALIAS_ALREADY_EXISTS_1 = 90076; + + /** + * The error with code 90077 is thrown when + * trying to drop a system function or a function alias that does not exist. + * Example: + *
    +     * DROP ALIAS SQRT;
    +     * 
    + */ + public static final int FUNCTION_ALIAS_NOT_FOUND_1 = 90077; + + /** + * The error with code 90078 is thrown when + * trying to create a schema if an object with this name already exists. + * Example: + *
    +     * CREATE SCHEMA TEST_SCHEMA;
    +     * CREATE SCHEMA TEST_SCHEMA;
    +     * 
    + */ + public static final int SCHEMA_ALREADY_EXISTS_1 = 90078; + + /** + * The error with code 90079 is thrown when + * trying to drop a schema that does not exist. + * Example: + *
    +     * DROP SCHEMA UNKNOWN;
    +     * 
+     */
+    public static final int SCHEMA_NOT_FOUND_1 = 90079;
+
+    /**
+     * The error with code <code>90080</code> is thrown when
+     * trying to rename an object to a different schema, or when trying to
+     * create a related object in another schema.
+     * For CREATE LINKED TABLE, it is thrown when multiple tables with that
+     * name exist in different schemas.
+     * Example:
+     * <pre>
+     * CREATE SCHEMA TEST_SCHEMA;
+     * CREATE TABLE TEST(ID INT);
+     * CREATE INDEX TEST_ID ON TEST(ID);
+     * ALTER INDEX TEST_ID RENAME TO TEST_SCHEMA.IDX_TEST_ID;
+     * </pre>
    + */ + public static final int SCHEMA_NAME_MUST_MATCH = 90080; + + /** + * The error with code 90081 is thrown when + * trying to alter a column to not allow NULL, if there + * is already data in the table where this column is NULL. + * Example: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * INSERT INTO TEST VALUES(NULL);
    +     * ALTER TABLE TEST ALTER COLUMN ID VARCHAR NOT NULL;
    +     * 
    + */ + public static final int COLUMN_CONTAINS_NULL_VALUES_1 = 90081; + + /** + * The error with code 90082 is thrown when + * trying to drop a system generated sequence. + */ + public static final int SEQUENCE_BELONGS_TO_A_TABLE_1 = 90082; + + /** + * The error with code 90083 is thrown when + * trying to drop a column that is part of a constraint. + * Example: + *
    +     * CREATE TABLE TEST(ID INT, PID INT REFERENCES(ID));
    +     * ALTER TABLE TEST DROP COLUMN PID;
    +     * 
    + */ + public static final int COLUMN_IS_REFERENCED_1 = 90083; + + /** + * The error with code 90084 is thrown when + * trying to drop the last column of a table. + * Example: + *
    +     * CREATE TABLE TEST(ID INT);
    +     * ALTER TABLE TEST DROP COLUMN ID;
    +     * 
    + */ + public static final int CANNOT_DROP_LAST_COLUMN = 90084; + + /** + * The error with code 90085 is thrown when + * trying to manually drop an index that was generated by the system + * because of a unique or referential constraint. To find out what + * constraint causes the problem, run: + *
    +     * SELECT * FROM INFORMATION_SCHEMA.CONSTRAINTS
    +     * WHERE UNIQUE_INDEX_NAME = '<index name>';
    +     * 
    + * Example of wrong usage: + *
    +     * CREATE TABLE TEST(ID INT, CONSTRAINT UID UNIQUE(ID));
    +     * DROP INDEX UID_INDEX_0;
    +     * Index UID_INDEX_0 belongs to constraint UID
    +     * 
    + * Correct: + *
    +     * ALTER TABLE TEST DROP CONSTRAINT UID;
    +     * 
    + */ + public static final int INDEX_BELONGS_TO_CONSTRAINT_2 = 90085; + + /** + * The error with code 90086 is thrown when + * a class can not be loaded because it is not in the classpath + * or because a related class is not in the classpath. + * Example: + *
    +     * CREATE ALIAS TEST FOR "java.lang.invalid.Math.sqrt";
    +     * 
    + */ + public static final int CLASS_NOT_FOUND_1 = 90086; + + /** + * The error with code 90087 is thrown when + * a method with matching number of arguments was not found in the class. + * Example: + *
    +     * CREATE ALIAS TO_BINARY FOR "java.lang.Long.toBinaryString(long)";
    +     * CALL TO_BINARY(10, 2);
    +     * 
    + */ + public static final int METHOD_NOT_FOUND_1 = 90087; + + /** + * The error with code 90088 is thrown when + * trying to switch to an unknown mode. + * Example: + *
    +     * SET MODE UNKNOWN;
    +     * 
    + */ + public static final int UNKNOWN_MODE_1 = 90088; + + /** + * The error with code 90089 is thrown when + * trying to change the collation while there was already data in + * the database. The collation of the database must be set when the + * database is empty. + * Example of wrong usage: + *
    +     * CREATE TABLE TEST(NAME VARCHAR PRIMARY KEY);
    +     * INSERT INTO TEST VALUES('Hello', 'World');
    +     * SET COLLATION DE;
    +     * Collation cannot be changed because there is a data table: PUBLIC.TEST
    +     * 
    + * Correct: + *
    +     * SET COLLATION DE;
    +     * CREATE TABLE TEST(NAME VARCHAR PRIMARY KEY);
    +     * INSERT INTO TEST VALUES('Hello', 'World');
    +     * 
    + */ + public static final int COLLATION_CHANGE_WITH_DATA_TABLE_1 = 90089; + + /** + * The error with code 90090 is thrown when + * trying to drop a schema that may not be dropped (the schema PUBLIC + * and the schema INFORMATION_SCHEMA). + * Example: + *
    +     * DROP SCHEMA PUBLIC;
    +     * 
    + */ + public static final int SCHEMA_CAN_NOT_BE_DROPPED_1 = 90090; + + /** + * The error with code 90091 is thrown when + * trying to drop the role PUBLIC. + * Example: + *
    +     * DROP ROLE PUBLIC;
    +     * 
    + */ + public static final int ROLE_CAN_NOT_BE_DROPPED_1 = 90091; + + /** + * The error with code 90093 is thrown when + * trying to connect to a clustered database that runs in standalone + * mode. This can happen if clustering is not enabled on the database, + * or if one of the clients disabled clustering because it can not see + * the other cluster node. + */ + public static final int CLUSTER_ERROR_DATABASE_RUNS_ALONE = 90093; + + /** + * The error with code 90094 is thrown when + * trying to connect to a clustered database that runs together with a + * different cluster node setting than what is used when trying to connect. + */ + public static final int CLUSTER_ERROR_DATABASE_RUNS_CLUSTERED_1 = 90094; + + /** + * The error with code 90095 is thrown when + * calling the method STRINGDECODE with an invalid escape sequence. + * Only Java style escape sequences and Java properties file escape + * sequences are supported. + * Example: + *
    +     * CALL STRINGDECODE('\i');
    +     * 
    + */ + public static final int STRING_FORMAT_ERROR_1 = 90095; + + /** + * The error with code 90096 is thrown when + * trying to perform an operation with a non-admin user if the + * user does not have enough rights. + */ + public static final int NOT_ENOUGH_RIGHTS_FOR_1 = 90096; + + /** + * The error with code 90097 is thrown when + * trying to delete or update a database if it is open in read-only mode. + * Example: + *
    +     * jdbc:h2:~/test;ACCESS_MODE_DATA=R
    +     * CREATE TABLE TEST(ID INT);
    +     * 
+     */
+    public static final int DATABASE_IS_READ_ONLY = 90097;
+
+    /**
+     * The error with code <code>90098</code> is thrown when the database has
+     * been closed, for example because the system ran out of memory or because
+     * the self-destruction counter has reached zero. This counter is only used
+     * for recovery testing, and not set in normal operation.
+     */
+    public static final int DATABASE_IS_CLOSED = 90098;
+
+    /**
+     * The error with code <code>90099</code> is thrown when an error occurred
+     * trying to initialize the database event listener. Example:
+     * <pre>
+     * jdbc:h2:~/test;DATABASE_EVENT_LISTENER='java.lang.String'
+     * </pre>
    + */ + public static final int ERROR_SETTING_DATABASE_EVENT_LISTENER_2 = 90099; + + /** + * The error with code 90101 is thrown when + * the XA API detected unsupported transaction names. This can happen + * when mixing application generated transaction names and transaction names + * generated by this databases XAConnection API. + */ + public static final int WRONG_XID_FORMAT_1 = 90101; + + /** + * The error with code 90102 is thrown when + * trying to use unsupported options for the given compression algorithm. + * Example of wrong usage: + *
    +     * CALL COMPRESS(STRINGTOUTF8(SPACE(100)), 'DEFLATE l 10');
    +     * 
    + * Correct: + *
    +     * CALL COMPRESS(STRINGTOUTF8(SPACE(100)), 'DEFLATE l 9');
    +     * 
    + */ + public static final int UNSUPPORTED_COMPRESSION_OPTIONS_1 = 90102; + + /** + * The error with code 90103 is thrown when + * trying to use an unsupported compression algorithm. + * Example: + *
    +     * CALL COMPRESS(STRINGTOUTF8(SPACE(100)), 'BZIP');
    +     * 
    + */ + public static final int UNSUPPORTED_COMPRESSION_ALGORITHM_1 = 90103; + + /** + * The error with code 90104 is thrown when + * the data can not be de-compressed. + * Example: + *
    +     * CALL EXPAND(X'00FF');
    +     * 
    + */ + public static final int COMPRESSION_ERROR = 90104; + + /** + * The error with code 90105 is thrown when + * an exception occurred in a user-defined method. + * Example: + *
    +     * CREATE ALIAS SYS_PROP FOR "java.lang.System.getProperty";
    +     * CALL SYS_PROP(NULL);
    +     * 
    + */ + public static final int EXCEPTION_IN_FUNCTION_1 = 90105; + + /** + * The error with code 90106 is thrown when + * trying to truncate a table that can not be truncated. + * Tables with referential integrity constraints can not be truncated. + * Also, system tables and view can not be truncated. + * Example: + *
    +     * TRUNCATE TABLE INFORMATION_SCHEMA.SETTINGS;
    +     * 
    + */ + public static final int CANNOT_TRUNCATE_1 = 90106; + + /** + * The error with code 90107 is thrown when + * trying to drop an object because another object would become invalid. + * Example: + *
    +     * CREATE TABLE COUNT(X INT);
    +     * CREATE TABLE ITEMS(ID INT DEFAULT SELECT MAX(X)+1 FROM COUNT);
    +     * DROP TABLE COUNT;
    +     * 
    + */ + public static final int CANNOT_DROP_2 = 90107; + + /** + * The error with code 90108 is thrown when not enough heap + * memory was available. A possible solutions is to increase the memory size + * using java -Xmx128m .... Another solution is to reduce + * the cache size. + */ + public static final int OUT_OF_MEMORY = 90108; + + /** + * The error with code 90109 is thrown when + * trying to run a query against an invalid view. + * Example: + *
    +     * CREATE FORCE VIEW TEST_VIEW AS SELECT * FROM TEST;
    +     * SELECT * FROM TEST_VIEW;
    +     * 
+     */
+    public static final int VIEW_IS_INVALID_2 = 90109;
+
+    /**
+     * The error with code <code>90111</code> is thrown when
+     * an exception occurred while accessing a linked table.
+     */
+    public static final int ERROR_ACCESSING_LINKED_TABLE_2 = 90111;
+
+    /**
+     * The error with code <code>90112</code> is thrown when a row was deleted
+     * twice while locking was disabled. This is an internal exception that
+     * should never be thrown to the application, because such a deletion
+     * should be detected and the resulting exception ignored inside the
+     * database engine.
+     * <pre>
+     * Row not found when trying to delete from index UID_INDEX_0
+     * </pre>
    + */ + public static final int ROW_NOT_FOUND_WHEN_DELETING_1 = 90112; + + /** + * The error with code 90113 is thrown when + * the database URL contains unsupported settings. + * Example: + *
    +     * jdbc:h2:~/test;UNKNOWN=TRUE
    +     * 
    + */ + public static final int UNSUPPORTED_SETTING_1 = 90113; + + /** + * The error with code 90114 is thrown when + * trying to create a constant if a constant with this name already exists. + * Example: + *
    +     * CREATE CONSTANT TEST VALUE 1;
    +     * CREATE CONSTANT TEST VALUE 1;
    +     * 
+     */
+    public static final int CONSTANT_ALREADY_EXISTS_1 = 90114;
+
+    /**
+     * The error with code <code>90115</code> is thrown when
+     * trying to drop a constant that does not exist.
+     * Example:
+     * <pre>
+     * DROP CONSTANT UNKNOWN;
+     * </pre>
+     */
+    public static final int CONSTANT_NOT_FOUND_1 = 90115;
+
+    /**
+     * The error with code <code>90116</code> is thrown when
+     * trying to use a literal in a SQL statement if literals are disabled.
+     * If literals are disabled, use PreparedStatement and parameters instead
+     * of literals in the SQL statement.
+     * Example:
+     * <pre>
+     * SET ALLOW_LITERALS NONE;
+     * CALL 1+1;
+     * </pre>
+     */
+    public static final int LITERALS_ARE_NOT_ALLOWED = 90116;
+
+    /**
+     * The error with code <code>90117</code> is thrown when
+     * trying to connect to a TCP server from another machine, if remote
+     * connections are not allowed. To allow remote connections,
+     * start the TCP server using the option -tcpAllowOthers as in:
+     * <pre>
+     * java org.h2.tools.Server -tcp -tcpAllowOthers
+     * </pre>
+     * Or, when starting the server from an application, use:
+     * <pre>
+     * Server server = Server.createTcpServer("-tcpAllowOthers");
+     * server.start();
+     * </pre>
+     */
+    public static final int REMOTE_CONNECTION_NOT_ALLOWED = 90117;
+
+    /**
+     * The error with code <code>90118</code> is thrown when
+     * trying to drop a table that can not be dropped.
+     * Example:
+     * <pre>
+     * DROP TABLE INFORMATION_SCHEMA.SETTINGS;
+     * </pre>
    + */ + public static final int CANNOT_DROP_TABLE_1 = 90118; + + /** + * The error with code 90119 is thrown when + * trying to create a domain if an object with this name already exists, + * or when trying to overload a built-in data type. + * Example: + *
    +     * CREATE DOMAIN INTEGER AS VARCHAR;
    +     * CREATE DOMAIN EMAIL AS VARCHAR CHECK LOCATE('@', VALUE) > 0;
    +     * CREATE DOMAIN EMAIL AS VARCHAR CHECK LOCATE('@', VALUE) > 0;
    +     * 
    + */ + public static final int USER_DATA_TYPE_ALREADY_EXISTS_1 = 90119; + + /** + * The error with code 90120 is thrown when + * trying to drop a domain that doesn't exist. + * Example: + *
    +     * DROP DOMAIN UNKNOWN;
    +     * 
+     */
+    public static final int USER_DATA_TYPE_NOT_FOUND_1 = 90120;
+
+    /**
+     * The error with code <code>90121</code> is thrown when
+     * a database operation is started while the virtual machine exits
+     * (for example in a shutdown hook), or when the session is closed.
+     */
+    public static final int DATABASE_CALLED_AT_SHUTDOWN = 90121;
+
+    /**
+     * The error with code <code>90123</code> is thrown when
+     * trying to mix regular parameters and indexed parameters in the same
+     * statement. Example:
+     * <pre>
+     * SELECT ?, ?1 FROM DUAL;
+     * </pre>
+     */
+    public static final int CANNOT_MIX_INDEXED_AND_UNINDEXED_PARAMS = 90123;
+
+    /**
+     * The error with code <code>90124</code> is thrown when
+     * trying to access a file that doesn't exist. This can occur when trying to
+     * read a lob if the lob file has been deleted by another application.
+     */
+    public static final int FILE_NOT_FOUND_1 = 90124;
+
+    /**
+     * The error with code <code>90125</code> is thrown when
+     * PreparedStatement.setBigDecimal is called with an object that extends the
+     * class BigDecimal, and the system property h2.allowBigDecimalExtensions is
+     * not set. Using extensions of BigDecimal is dangerous because the database
+     * relies on the behavior of BigDecimal. Example of wrong usage:
+     * <pre>
+     * BigDecimal bd = new MyDecimal("$10.3");
+     * prep.setBigDecimal(1, bd);
+     * Invalid class, expected java.math.BigDecimal but got MyDecimal
+     * </pre>
+     * Correct:
+     * <pre>
+     * BigDecimal bd = new BigDecimal("10.3");
+     * prep.setBigDecimal(1, bd);
+     * </pre>
    + */ + public static final int INVALID_CLASS_2 = 90125; + + /** + * The error with code 90126 is thrown when + * trying to call the BACKUP statement for an in-memory database. + * Example: + *
    +     * jdbc:h2:mem:
    +     * BACKUP TO 'test.zip';
    +     * 
    + */ + public static final int DATABASE_IS_NOT_PERSISTENT = 90126; + + /** + * The error with code 90127 is thrown when + * trying to update or delete a row in a result set if the result set is + * not updatable. Result sets are only updatable if: + * the statement was created with updatable concurrency; + * all columns of the result set are from the same table; + * the table is a data table (not a system table or view); + * all columns of the primary key or any unique index are included; + * all columns of the result set are columns of that table. + */ + public static final int RESULT_SET_NOT_UPDATABLE = 90127; + + /** + * The error with code 90128 is thrown when + * trying to call a method of the ResultSet that is only supported + * for scrollable result sets, and the result set is not scrollable. + * Example: + *
    +     * rs.first();
    +     * 
    + */ + public static final int RESULT_SET_NOT_SCROLLABLE = 90128; + + /** + * The error with code 90129 is thrown when + * trying to commit a transaction that doesn't exist. + * Example: + *
    +     * PREPARE COMMIT ABC;
    +     * COMMIT TRANSACTION TEST;
    +     * 
    + */ + public static final int TRANSACTION_NOT_FOUND_1 = 90129; + + /** + * The error with code 90130 is thrown when + * an execute method of PreparedStatement was called with a SQL statement. + * This is not allowed according to the JDBC specification. Instead, use + * an execute method of Statement. + * Example of wrong usage: + *
    +     * PreparedStatement prep = conn.prepareStatement("SELECT * FROM TEST");
    +     * prep.execute("DELETE FROM TEST");
    +     * 
    + * Correct: + *
    +     * Statement stat = conn.createStatement();
    +     * stat.execute("DELETE FROM TEST");
    +     * 
    + */ + public static final int METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT = 90130; + + /** + * The error with code 90131 is thrown when using multi version + * concurrency control, and trying to update the same row from within two + * connections at the same time, or trying to insert two rows with the same + * key from two connections. Example: + *
    +     * jdbc:h2:~/test;MVCC=TRUE
    +     * Session 1:
    +     * CREATE TABLE TEST(ID INT);
    +     * INSERT INTO TEST VALUES(1);
    +     * SET AUTOCOMMIT FALSE;
    +     * UPDATE TEST SET ID = 2;
    +     * Session 2:
    +     * SET AUTOCOMMIT FALSE;
    +     * UPDATE TEST SET ID = 3;
    +     * 
    + */ + public static final int CONCURRENT_UPDATE_1 = 90131; + + /** + * The error with code 90132 is thrown when + * trying to drop a user-defined aggregate function that doesn't exist. + * Example: + *
    +     * DROP AGGREGATE UNKNOWN;
    +     * 
    + */ + public static final int AGGREGATE_NOT_FOUND_1 = 90132; + + /** + * The error with code 90133 is thrown when + * trying to change a specific database property while the database is + * already open. The MVCC property needs to be set in the first connection + * (in the connection opening the database) and can not be changed later on. + */ + public static final int CANNOT_CHANGE_SETTING_WHEN_OPEN_1 = 90133; + + /** + * The error with code 90134 is thrown when + * trying to load a Java class that is not part of the allowed classes. By + * default, all classes are allowed, but this can be changed using the + * system property h2.allowedClasses. + */ + public static final int ACCESS_DENIED_TO_CLASS_1 = 90134; + + /** + * The error with code 90135 is thrown when + * trying to open a connection to a database that is currently open + * in exclusive mode. The exclusive mode is set using: + *
    +     * SET EXCLUSIVE TRUE;
    +     * 
    + */ + public static final int DATABASE_IS_IN_EXCLUSIVE_MODE = 90135; + + /** + * The error with code 90136 is thrown when + * executing a query that used an unsupported outer join condition. + * Example: + *
    +     * SELECT * FROM DUAL A LEFT JOIN DUAL B ON B.X=(SELECT MAX(X) FROM DUAL);
    +     * 
    + */ + public static final int UNSUPPORTED_OUTER_JOIN_CONDITION_1 = 90136; + + /** + * The error with code 90137 is thrown when + * trying to assign a value to something that is not a variable. + *
    +     * SELECT AMOUNT, SET(@V, IFNULL(@V, 0)+AMOUNT) FROM TEST;
    +     * 
    + */ + public static final int CAN_ONLY_ASSIGN_TO_VARIABLE_1 = 90137; + + /** + * The error with code 90138 is thrown when + * + * trying to open a persistent database using an incorrect database name. + * The name of a persistent database contains the path and file name prefix + * where the data is stored. The file name part of a database name must be + * at least two characters. + * + * Example of wrong usage: + *
    +     * <pre>
    +     * DriverManager.getConnection("jdbc:h2:~/t");
    +     * DriverManager.getConnection("jdbc:h2:~/test/");
    +     * </pre>
    + * Correct: + *
    +     * <pre>
    +     * DriverManager.getConnection("jdbc:h2:~/te");
    +     * DriverManager.getConnection("jdbc:h2:~/test/te");
    +     * </pre>
    + */ + public static final int INVALID_DATABASE_NAME_1 = 90138; + + /** + * The error with code 90139 is thrown when + * the specified public static Java method was not found in the class. + * Example: + *
    +     * <pre>
    +     * CREATE ALIAS TEST FOR "java.lang.Math.test";
    +     * </pre>
    + */ + public static final int PUBLIC_STATIC_JAVA_METHOD_NOT_FOUND_1 = 90139; + + /** + * The error with code 90140 is thrown when trying to update or + * delete a row in a result set if the statement was not created with + * updatable concurrency. Result sets are only updatable if the statement + * was created with updatable concurrency, and if the result set contains + * all columns of the primary key or of a unique index of a table. + */ + public static final int RESULT_SET_READONLY = 90140; + + /** + * The error with code 90141 is thrown when + * trying to change the java object serializer while there was already data + * in the database. The serializer of the database must be set when the + * database is empty. + */ + public static final int JAVA_OBJECT_SERIALIZER_CHANGE_WITH_DATA_TABLE = 90141; + + /** + * The error with code 90142 is thrown when + * trying to set zero for step size. + */ + public static final int STEP_SIZE_MUST_NOT_BE_ZERO = 90142; + + /** + * The error with code 90143 is thrown when + * trying to fetch a row from the primary index and the row is not there. + * Can happen in MULTI_THREADED=1 case. 
+ */ + public static final int ROW_NOT_FOUND_IN_PRIMARY_INDEX = 90143; + + // next are 90110, 90122, 90144 + + private ErrorCode() { + // utility class + } + + /** + * INTERNAL + */ + public static boolean isCommon(int errorCode) { + // this list is sorted alphabetically + switch (errorCode) { + case DATA_CONVERSION_ERROR_1: + case DUPLICATE_KEY_1: + case FUNCTION_ALIAS_ALREADY_EXISTS_1: + case LOCK_TIMEOUT_1: + case NULL_NOT_ALLOWED: + case NO_DATA_AVAILABLE: + case NUMERIC_VALUE_OUT_OF_RANGE_1: + case OBJECT_CLOSED: + case REFERENTIAL_INTEGRITY_VIOLATED_CHILD_EXISTS_1: + case REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1: + case SYNTAX_ERROR_1: + case SYNTAX_ERROR_2: + case TABLE_OR_VIEW_ALREADY_EXISTS_1: + case TABLE_OR_VIEW_NOT_FOUND_1: + case VALUE_TOO_LONG_2: + return true; + } + return false; + } + + /** + * INTERNAL + */ + public static String getState(int errorCode) { + // To convert SQLState to error code, replace + // 21S: 210, 42S: 421, HY: 50, C: 1, T: 2 + + switch (errorCode) { + + // 02: no data + case NO_DATA_AVAILABLE: return "02000"; + + // 07: dynamic SQL error + case INVALID_PARAMETER_COUNT_2: return "07001"; + + // 08: connection exception + case ERROR_OPENING_DATABASE_1: return "08000"; + + // 21: cardinality violation + case COLUMN_COUNT_DOES_NOT_MATCH: return "21S02"; + + // 42: syntax error or access rule violation + case TABLE_OR_VIEW_ALREADY_EXISTS_1: return "42S01"; + case TABLE_OR_VIEW_NOT_FOUND_1: return "42S02"; + case INDEX_ALREADY_EXISTS_1: return "42S11"; + case INDEX_NOT_FOUND_1: return "42S12"; + case DUPLICATE_COLUMN_NAME_1: return "42S21"; + case COLUMN_NOT_FOUND_1: return "42S22"; + + // 0A: feature not supported + + // HZ: remote database access + + // HY + case GENERAL_ERROR_1: return "HY000"; + case UNKNOWN_DATA_TYPE_1: return "HY004"; + + case FEATURE_NOT_SUPPORTED_1: return "HYC00"; + case LOCK_TIMEOUT_1: return "HYT00"; + default: + return "" + errorCode; + } + } + +} diff --git 
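The comment inside getState documents a compact convention for turning SQLState strings back into numeric codes (21S -> 210, 42S -> 421, HY -> 50, C -> 1, T -> 2). A self-contained sketch of that convention follows; the class and method names are mine, not part of the H2 API:

```java
// Sketch of the SQLState -> numeric-code convention documented in
// ErrorCode.getState(). Prefixes are substituted (21S -> 210, 42S -> 421,
// HY -> 50) and any remaining C or T letters become 1 or 2.
// Hypothetical helper, not part of H2 itself.
public class SqlStateSketch {

    public static String toNumeric(String sqlState) {
        String s = sqlState;
        if (s.startsWith("21S")) {
            s = "210" + s.substring(3);
        } else if (s.startsWith("42S")) {
            s = "421" + s.substring(3);
        } else if (s.startsWith("HY")) {
            s = "50" + s.substring(2);
        }
        // C: 1, T: 2 (e.g. HYT00 -> 50T00 -> 50200, the lock timeout code)
        return s.replace("C", "1").replace("T", "2");
    }

    public static void main(String[] args) {
        // "42S02" (table or view not found) maps to code 42102
        System.out.println(toNumeric("42S02"));
    }
}
```

Purely numeric states such as "90132" pass through unchanged, matching the `default` branch of getState above.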
a/modules/h2/src/main/java/org/h2/api/JavaObjectSerializer.java b/modules/h2/src/main/java/org/h2/api/JavaObjectSerializer.java new file mode 100644 index 0000000000000..65413f342add4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/JavaObjectSerializer.java @@ -0,0 +1,32 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +/** + * Custom serialization mechanism for java objects being stored in column of + * type OTHER. + * + * @author Sergi Vladykin + */ +public interface JavaObjectSerializer { + + /** + * Serialize object to byte array. + * + * @param obj the object to serialize + * @return the byte array of the serialized object + */ + byte[] serialize(Object obj) throws Exception; + + /** + * Deserialize object from byte array. + * + * @param bytes the byte array of the serialized object + * @return the object + */ + Object deserialize(byte[] bytes) throws Exception; + +} diff --git a/modules/h2/src/main/java/org/h2/api/TableEngine.java b/modules/h2/src/main/java/org/h2/api/TableEngine.java new file mode 100644 index 0000000000000..72c54857ef700 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/TableEngine.java @@ -0,0 +1,27 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +import org.h2.table.Table; +import org.h2.command.ddl.CreateTableData; + +/** + * A class that implements this interface can create custom table + * implementations. + * + * @author Sergi Vladykin + */ +public interface TableEngine { + + /** + * Create new table. 
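A minimal JavaObjectSerializer can delegate to plain JDK serialization. The sketch below is illustrative only: it reproduces an equivalent interface so the snippet compiles on its own, whereas real code would implement org.h2.api.JavaObjectSerializer and register it in H2 (the engine's default behavior is essentially this):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Stand-in for org.h2.api.JavaObjectSerializer so the sketch is
// self-contained; the real interface declares the same two methods.
interface ObjectSerializer {
    byte[] serialize(Object obj) throws Exception;
    Object deserialize(byte[] bytes) throws Exception;
}

// Serializer backed by standard JDK object serialization.
public class JdkObjectSerializer implements ObjectSerializer {

    @Override
    public byte[] serialize(Object obj) throws Exception {
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bout)) {
            out.writeObject(obj);
        }
        return bout.toByteArray();
    }

    @Override
    public Object deserialize(byte[] bytes) throws Exception {
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        JdkObjectSerializer ser = new JdkObjectSerializer();
        // A serialize/deserialize round trip restores the original value.
        System.out.println(ser.deserialize(ser.serialize("hello")));
    }
}
```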
+ * + * @param data the data to construct the table + * @return the created table + */ + Table createTable(CreateTableData data); + +} diff --git a/modules/h2/src/main/java/org/h2/api/TimestampWithTimeZone.java b/modules/h2/src/main/java/org/h2/api/TimestampWithTimeZone.java new file mode 100644 index 0000000000000..42d6fbe1b2f8a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/TimestampWithTimeZone.java @@ -0,0 +1,149 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +import java.io.Serializable; +import org.h2.util.DateTimeUtils; + +/** + * How we expose "TIMESTAMP WITH TIME ZONE" in our ResultSets. + */ +public class TimestampWithTimeZone implements Serializable, Cloneable { + + /** + * The serial version UID. + */ + private static final long serialVersionUID = 4413229090646777107L; + + /** + * A bit field with bits for the year, month, and day (see DateTimeUtils for + * encoding) + */ + private final long dateValue; + /** + * The nanoseconds since midnight. + */ + private final long timeNanos; + /** + * Time zone offset from UTC in minutes, range of -12hours to +12hours + */ + private final short timeZoneOffsetMins; + + public TimestampWithTimeZone(long dateValue, long timeNanos, short timeZoneOffsetMins) { + this.dateValue = dateValue; + this.timeNanos = timeNanos; + this.timeZoneOffsetMins = timeZoneOffsetMins; + } + + /** + * @return the year-month-day bit field + */ + public long getYMD() { + return dateValue; + } + + /** + * Gets the year. + * + *

    +     * <p>The year is in the specified time zone and not UTC. So for
    +     * {@code 2015-12-31 19:00:00.00-10:00} the value returned
    +     * will be {@code 2015} even though in UTC the year is {@code 2016}.</p>

    + * + * @return the year + */ + public int getYear() { + return DateTimeUtils.yearFromDateValue(dateValue); + } + + /** + * Gets the month 1-based. + * + *

    +     * <p>The month is in the specified time zone and not UTC. So for
    +     * {@code 2015-12-31 19:00:00.00-10:00} the value returned
    +     * is {@code 12} even though in UTC the month is {@code 1}.</p>

    + * + * @return the month + */ + public int getMonth() { + return DateTimeUtils.monthFromDateValue(dateValue); + } + + /** + * Gets the day of month 1-based. + * + *

    +     * <p>The day of month is in the specified time zone and not UTC. So for
    +     * {@code 2015-12-31 19:00:00.00-10:00} the value returned
    +     * is {@code 31} even though in UTC the day of month is {@code 1}.</p>

    + * + * @return the day of month + */ + public int getDay() { + return DateTimeUtils.dayFromDateValue(dateValue); + } + + /** + * Gets the nanoseconds since midnight. + * + *

    +     * <p>The nanoseconds are relative to midnight in the specified
    +     * time zone. So for {@code 2016-09-24 00:00:00.000000001-00:01} the
    +     * value returned is {@code 1} even though {@code 60000000001}
    +     * nanoseconds have passed since midnight in UTC.</p>

    + * + * @return the nanoseconds since midnight + */ + public long getNanosSinceMidnight() { + return timeNanos; + } + + /** + * The time zone offset in minutes. + * + * @return the offset + */ + public short getTimeZoneOffsetMins() { + return timeZoneOffsetMins; + } + + @Override + public String toString() { + return DateTimeUtils.timestampTimeZoneToString(dateValue, timeNanos, timeZoneOffsetMins); + } + + @Override + public int hashCode() { + final int prime = 31; + int result = 1; + result = prime * result + (int) (dateValue ^ (dateValue >>> 32)); + result = prime * result + (int) (timeNanos ^ (timeNanos >>> 32)); + result = prime * result + timeZoneOffsetMins; + return result; + } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null) { + return false; + } + if (getClass() != obj.getClass()) { + return false; + } + TimestampWithTimeZone other = (TimestampWithTimeZone) obj; + if (dateValue != other.dateValue) { + return false; + } + if (timeNanos != other.timeNanos) { + return false; + } + if (timeZoneOffsetMins != other.timeZoneOffsetMins) { + return false; + } + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/api/Trigger.java b/modules/h2/src/main/java/org/h2/api/Trigger.java new file mode 100644 index 0000000000000..4b5bf504048a2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/Trigger.java @@ -0,0 +1,93 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.api; + +import java.sql.Connection; +import java.sql.SQLException; + +/** + * A class that implements this interface can be used as a trigger. + */ +public interface Trigger { + + /** + * The trigger is called for INSERT statements. + */ + int INSERT = 1; + + /** + * The trigger is called for UPDATE statements. 
+ */ + int UPDATE = 2; + + /** + * The trigger is called for DELETE statements. + */ + int DELETE = 4; + + /** + * The trigger is called for SELECT statements. + */ + int SELECT = 8; + + /** + * This method is called by the database engine once when initializing the + * trigger. It is called when the trigger is created, as well as when the + * database is opened. The type of operation is a bit field with the + * appropriate flags set. As an example, if the trigger is of type INSERT + * and UPDATE, then the parameter type is set to (INSERT | UPDATE). + * + * @param conn a connection to the database (a system connection) + * @param schemaName the name of the schema + * @param triggerName the name of the trigger used in the CREATE TRIGGER + * statement + * @param tableName the name of the table + * @param before whether the fire method is called before or after the + * operation is performed + * @param type the operation type: INSERT, UPDATE, DELETE, SELECT, or a + * combination (this parameter is a bit field) + */ + void init(Connection conn, String schemaName, String triggerName, + String tableName, boolean before, int type) throws SQLException; + + /** + * This method is called for each triggered action. The method is called + * immediately when the operation occurred (before it is committed). A + * transaction rollback will also rollback the operations that were done + * within the trigger, if the operations occurred within the same database. + * If the trigger changes state outside the database, a rollback trigger + * should be used. + *

    +     * <p>The row arrays contain all columns of the table, in the same order
    +     * as defined in the table.</p>
    +     *

    + *

    +     * <p>The trigger itself may change the data in the newRow array.</p>
    +     *

    + * + * @param conn a connection to the database + * @param oldRow the old row, or null if no old row is available (for + * INSERT) + * @param newRow the new row, or null if no new row is available (for + * DELETE) + * @throws SQLException if the operation must be undone + */ + void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException; + + /** + * This method is called when the database is closed. + * If the method throws an exception, it will be logged, but + * closing the database will continue. + */ + void close() throws SQLException; + + /** + * This method is called when the trigger is dropped. + */ + void remove() throws SQLException; + +} diff --git a/modules/h2/src/main/java/org/h2/api/package.html b/modules/h2/src/main/java/org/h2/api/package.html new file mode 100644 index 0000000000000..bee9552ad9a3a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/api/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    +
    +Contains interfaces for user-defined extensions, such as triggers and user-defined aggregate functions.
    +
    +
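The Trigger contract documented above (init remembers where the trigger is attached; fire receives oldRow == null for INSERT and newRow == null for DELETE) can be sketched as follows. This is a structural sketch only: it does not implement the real org.h2.api.Trigger interface (so it stays compilable on its own), and fire here returns a description instead of void purely for demonstration:

```java
import java.sql.Connection;
import java.sql.SQLException;

// Illustrative audit-style trigger following the org.h2.api.Trigger
// lifecycle described above. Names and the String return type of fire()
// are assumptions for this sketch, not part of the H2 API.
public class AuditTriggerSketch {

    private String table;

    // Mirrors Trigger.init: called once when the trigger is created
    // and again whenever the database is opened.
    public void init(Connection conn, String schemaName, String triggerName,
            String tableName, boolean before, int type) throws SQLException {
        this.table = tableName;
    }

    // Mirrors Trigger.fire: the null-ness of the row arrays tells the
    // trigger which operation occurred.
    public String fire(Connection conn, Object[] oldRow, Object[] newRow)
            throws SQLException {
        if (oldRow == null) {
            return "INSERT into " + table;
        }
        if (newRow == null) {
            return "DELETE from " + table;
        }
        return "UPDATE of " + table;
    }

    public static void main(String[] args) throws SQLException {
        AuditTriggerSketch t = new AuditTriggerSketch();
        t.init(null, "PUBLIC", "AUDIT", "TEST", false, 1);
        System.out.println(t.fire(null, null, new Object[] {1}));
    }
}
```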

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/bnf/Bnf.java b/modules/h2/src/main/java/org/h2/bnf/Bnf.java new file mode 100644 index 0000000000000..0084b2841ecf2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/Bnf.java @@ -0,0 +1,368 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStreamReader; +import java.io.Reader; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.StringTokenizer; + +import org.h2.bnf.context.DbContextRule; +import org.h2.tools.Csv; +import org.h2.util.New; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * This class can read a file that is similar to BNF (Backus-Naur form). + * It is made specially to support SQL grammar. + */ +public class Bnf { + + /** + * The rule map. The key is lowercase, and all spaces + * are replaces with underscore. + */ + private final HashMap ruleMap = new HashMap<>(); + private String syntax; + private String currentToken; + private String[] tokens; + private char firstChar; + private int index; + private Rule lastRepeat; + private ArrayList statements; + private String currentTopic; + + /** + * Create an instance using the grammar specified in the CSV file. + * + * @param csv if not specified, the help.csv is used + * @return a new instance + */ + public static Bnf getInstance(Reader csv) throws SQLException, IOException { + Bnf bnf = new Bnf(); + if (csv == null) { + byte[] data = Utils.getResource("/org/h2/res/help.csv"); + csv = new InputStreamReader(new ByteArrayInputStream(data)); + } + bnf.parse(csv); + return bnf; + } + + /** + * Add an alias for a rule. 
+ * + * @param name for example "procedure" + * @param replacement for example "@func@" + */ + public void addAlias(String name, String replacement) { + RuleHead head = ruleMap.get(replacement); + ruleMap.put(name, head); + } + + private void addFixedRule(String name, int fixedType) { + Rule rule = new RuleFixed(fixedType); + addRule(name, "Fixed", rule); + } + + private RuleHead addRule(String topic, String section, Rule rule) { + RuleHead head = new RuleHead(section, topic, rule); + String key = StringUtils.toLowerEnglish(topic.trim().replace(' ', '_')); + if (ruleMap.get(key) != null) { + throw new AssertionError("already exists: " + topic); + } + ruleMap.put(key, head); + return head; + } + + private void parse(Reader reader) throws SQLException, IOException { + Rule functions = null; + statements = New.arrayList(); + Csv csv = new Csv(); + csv.setLineCommentCharacter('#'); + ResultSet rs = csv.read(reader, null); + while (rs.next()) { + String section = rs.getString("SECTION").trim(); + if (section.startsWith("System")) { + continue; + } + String topic = rs.getString("TOPIC"); + syntax = rs.getString("SYNTAX").trim(); + currentTopic = section; + tokens = tokenize(); + index = 0; + Rule rule = parseRule(); + if (section.startsWith("Command")) { + rule = new RuleList(rule, new RuleElement(";\n\n", currentTopic), false); + } + RuleHead head = addRule(topic, section, rule); + if (section.startsWith("Function")) { + if (functions == null) { + functions = rule; + } else { + functions = new RuleList(rule, functions, true); + } + } else if (section.startsWith("Commands")) { + statements.add(head); + } + } + addRule("@func@", "Function", functions); + addFixedRule("@ymd@", RuleFixed.YMD); + addFixedRule("@hms@", RuleFixed.HMS); + addFixedRule("@nanos@", RuleFixed.NANOS); + addFixedRule("anything_except_single_quote", RuleFixed.ANY_EXCEPT_SINGLE_QUOTE); + addFixedRule("anything_except_double_quote", RuleFixed.ANY_EXCEPT_DOUBLE_QUOTE); + 
addFixedRule("anything_until_end_of_line", RuleFixed.ANY_UNTIL_EOL); + addFixedRule("anything_until_end_comment", RuleFixed.ANY_UNTIL_END); + addFixedRule("anything_except_two_dollar_signs", RuleFixed.ANY_EXCEPT_2_DOLLAR); + addFixedRule("anything", RuleFixed.ANY_WORD); + addFixedRule("@hex_start@", RuleFixed.HEX_START); + addFixedRule("@concat@", RuleFixed.CONCAT); + addFixedRule("@az_@", RuleFixed.AZ_UNDERSCORE); + addFixedRule("@af@", RuleFixed.AF); + addFixedRule("@digit@", RuleFixed.DIGIT); + addFixedRule("@open_bracket@", RuleFixed.OPEN_BRACKET); + addFixedRule("@close_bracket@", RuleFixed.CLOSE_BRACKET); + } + + /** + * Parse the syntax and let the rule call the visitor. + * + * @param visitor the visitor + * @param s the syntax to parse + */ + public void visit(BnfVisitor visitor, String s) { + this.syntax = s; + tokens = tokenize(); + index = 0; + Rule rule = parseRule(); + rule.setLinks(ruleMap); + rule.accept(visitor); + } + + /** + * Check whether the statement starts with a whitespace. + * + * @param s the statement + * @return if the statement is not empty and starts with a whitespace + */ + public static boolean startWithSpace(String s) { + return s.length() > 0 && Character.isWhitespace(s.charAt(0)); + } + + /** + * Convert convert ruleLink to rule_link. + * + * @param token the token + * @return the rule map key + */ + public static String getRuleMapKey(String token) { + StringBuilder buff = new StringBuilder(); + for (char ch : token.toCharArray()) { + if (Character.isUpperCase(ch)) { + buff.append('_').append(Character.toLowerCase(ch)); + } else { + buff.append(ch); + } + } + return buff.toString(); + } + + /** + * Get the rule head for the given title. 
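The getRuleMapKey conversion above is what lets a camelCase link in the grammar (such as ruleLink) find its lowercase, underscore-separated entry in ruleMap (rule_link). A standalone copy for illustration, outside the Bnf class:

```java
// Standalone copy of Bnf.getRuleMapKey for illustration: each uppercase
// letter becomes '_' followed by its lowercase form, so camelCase rule
// links line up with the underscore keys stored in the rule map.
public class RuleMapKeySketch {

    public static String getRuleMapKey(String token) {
        StringBuilder buff = new StringBuilder();
        for (char ch : token.toCharArray()) {
            if (Character.isUpperCase(ch)) {
                buff.append('_').append(Character.toLowerCase(ch));
            } else {
                buff.append(ch);
            }
        }
        return buff.toString();
    }

    public static void main(String[] args) {
        System.out.println(getRuleMapKey("ruleLink")); // rule_link
    }
}
```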
+ * + * @param title the title + * @return the rule head, or null + */ + public RuleHead getRuleHead(String title) { + return ruleMap.get(title); + } + + private Rule parseRule() { + read(); + return parseOr(); + } + + private Rule parseOr() { + Rule r = parseList(); + if (firstChar == '|') { + read(); + r = new RuleList(r, parseOr(), true); + } + lastRepeat = r; + return r; + } + + private Rule parseList() { + Rule r = parseToken(); + if (firstChar != '|' && firstChar != ']' && firstChar != '}' + && firstChar != 0) { + r = new RuleList(r, parseList(), false); + } + lastRepeat = r; + return r; + } + + private Rule parseToken() { + Rule r; + if ((firstChar >= 'A' && firstChar <= 'Z') + || (firstChar >= 'a' && firstChar <= 'z')) { + // r = new RuleElement(currentToken+ " syntax:" + syntax); + r = new RuleElement(currentToken, currentTopic); + } else if (firstChar == '[') { + read(); + Rule r2 = parseOr(); + r = new RuleOptional(r2); + if (firstChar != ']') { + throw new AssertionError("expected ], got " + currentToken + + " syntax:" + syntax); + } + } else if (firstChar == '{') { + read(); + r = parseOr(); + if (firstChar != '}') { + throw new AssertionError("expected }, got " + currentToken + + " syntax:" + syntax); + } + } else if ("@commaDots@".equals(currentToken)) { + r = new RuleList(new RuleElement(",", currentTopic), lastRepeat, false); + r = new RuleRepeat(r, true); + } else if ("@dots@".equals(currentToken)) { + r = new RuleRepeat(lastRepeat, false); + } else { + r = new RuleElement(currentToken, currentTopic); + } + lastRepeat = r; + read(); + return r; + } + + private void read() { + if (index < tokens.length) { + currentToken = tokens[index++]; + firstChar = currentToken.charAt(0); + } else { + currentToken = ""; + firstChar = 0; + } + } + + private String[] tokenize() { + ArrayList list = New.arrayList(); + syntax = StringUtils.replaceAll(syntax, "yyyy-MM-dd", "@ymd@"); + syntax = StringUtils.replaceAll(syntax, "hh:mm:ss", "@hms@"); + syntax = 
StringUtils.replaceAll(syntax, "nnnnnnnnn", "@nanos@"); + syntax = StringUtils.replaceAll(syntax, "function", "@func@"); + syntax = StringUtils.replaceAll(syntax, "0x", "@hexStart@"); + syntax = StringUtils.replaceAll(syntax, ",...", "@commaDots@"); + syntax = StringUtils.replaceAll(syntax, "...", "@dots@"); + syntax = StringUtils.replaceAll(syntax, "||", "@concat@"); + syntax = StringUtils.replaceAll(syntax, "a-z|_", "@az_@"); + syntax = StringUtils.replaceAll(syntax, "A-Z|_", "@az_@"); + syntax = StringUtils.replaceAll(syntax, "A-F", "@af@"); + syntax = StringUtils.replaceAll(syntax, "0-9", "@digit@"); + syntax = StringUtils.replaceAll(syntax, "'['", "@openBracket@"); + syntax = StringUtils.replaceAll(syntax, "']'", "@closeBracket@"); + StringTokenizer tokenizer = getTokenizer(syntax); + while (tokenizer.hasMoreTokens()) { + String s = tokenizer.nextToken(); + // avoid duplicate strings + s = StringUtils.cache(s); + if (s.length() == 1) { + if (" \r\n".indexOf(s.charAt(0)) >= 0) { + continue; + } + } + list.add(s); + } + return list.toArray(new String[0]); + } + + /** + * Get the list of tokens that can follow. + * This is the main autocomplete method. + * The returned map for the query 'S' may look like this: + *
    +     * <pre>
    +     * key: 1#SELECT, value: ELECT
    +     * key: 1#SET, value: ET
    +     * </pre>
    + * + * @param query the start of the statement + * @return the map of possible token types / tokens + */ + public HashMap getNextTokenList(String query) { + Sentence sentence = new Sentence(); + sentence.setQuery(query); + try { + for (RuleHead head : statements) { + if (!head.getSection().startsWith("Commands")) { + continue; + } + sentence.start(); + if (head.getRule().autoComplete(sentence)) { + break; + } + } + } catch (IllegalStateException e) { + // ignore + } + return sentence.getNext(); + } + + /** + * Cross-link all statements with each other. + * This method is called after updating the topics. + */ + public void linkStatements() { + for (RuleHead r : ruleMap.values()) { + r.getRule().setLinks(ruleMap); + } + } + + /** + * Update a topic with a context specific rule. + * This is used for autocomplete support. + * + * @param topic the topic + * @param rule the database context rule + */ + public void updateTopic(String topic, DbContextRule rule) { + topic = StringUtils.toLowerEnglish(topic); + RuleHead head = ruleMap.get(topic); + if (head == null) { + head = new RuleHead("db", topic, rule); + ruleMap.put(topic, head); + statements.add(head); + } else { + head.setRule(rule); + } + } + + /** + * Get the list of possible statements. + * + * @return the list of statements + */ + public ArrayList getStatements() { + return statements; + } + + /** + * Get the tokenizer for the given syntax. + * + * @param s the syntax + * @return the tokenizer + */ + public static StringTokenizer getTokenizer(String s) { + return new StringTokenizer(s, " [](){}|.,\r\n<>:-+*/=\"!'$", true); + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/BnfVisitor.java b/modules/h2/src/main/java/org/h2/bnf/BnfVisitor.java new file mode 100644 index 0000000000000..9029aacd2dfcb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/BnfVisitor.java @@ -0,0 +1,54 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.ArrayList; + +/** + * The visitor interface for BNF rules. + */ +public interface BnfVisitor { + + /** + * Visit a rule element. + * + * @param keyword whether this is a keyword + * @param name the element name + * @param link the linked rule if it's not a keyword + */ + void visitRuleElement(boolean keyword, String name, Rule link); + + /** + * Visit a repeat rule. + * + * @param comma whether the comma is repeated as well + * @param rule the element to repeat + */ + void visitRuleRepeat(boolean comma, Rule rule); + + /** + * Visit a fixed rule. + * + * @param type the type + */ + void visitRuleFixed(int type); + + /** + * Visit a rule list. + * + * @param or true for OR, false for AND + * @param list the rules + */ + void visitRuleList(boolean or, ArrayList list); + + /** + * Visit an optional rule. + * + * @param rule the rule + */ + void visitRuleOptional(Rule rule); + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/Rule.java b/modules/h2/src/main/java/org/h2/bnf/Rule.java new file mode 100644 index 0000000000000..3b13d915cf972 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/Rule.java @@ -0,0 +1,38 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.HashMap; + +/** + * Represents a BNF rule. + */ +public interface Rule { + + /** + * Update cross references. + * + * @param ruleMap the reference map + */ + void setLinks(HashMap ruleMap); + + /** + * Add the next possible token(s). If there was a match, the query in the + * sentence is updated (the matched token is removed). 
+ * + * @param sentence the sentence context + * @return true if a full match + */ + boolean autoComplete(Sentence sentence); + + /** + * Call the visit method in the given visitor. + * + * @param visitor the visitor + */ + void accept(BnfVisitor visitor); + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/RuleElement.java b/modules/h2/src/main/java/org/h2/bnf/RuleElement.java new file mode 100644 index 0000000000000..fb340014b205c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/RuleElement.java @@ -0,0 +1,80 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.HashMap; + +import org.h2.util.StringUtils; + +/** + * A single terminal rule in a BNF object. + */ +public class RuleElement implements Rule { + + private final boolean keyword; + private final String name; + private Rule link; + private final int type; + + public RuleElement(String name, String topic) { + this.name = name; + this.keyword = name.length() == 1 || + name.equals(StringUtils.toUpperEnglish(name)); + topic = StringUtils.toLowerEnglish(topic); + this.type = topic.startsWith("function") ? 
+ Sentence.FUNCTION : Sentence.KEYWORD; + } + + @Override + public void accept(BnfVisitor visitor) { + visitor.visitRuleElement(keyword, name, link); + } + + @Override + public void setLinks(HashMap ruleMap) { + if (link != null) { + link.setLinks(ruleMap); + } + if (keyword) { + return; + } + String test = Bnf.getRuleMapKey(name); + for (int i = 0; i < test.length(); i++) { + String t = test.substring(i); + RuleHead r = ruleMap.get(t); + if (r != null) { + link = r.getRule(); + return; + } + } + throw new AssertionError("Unknown " + name + "/" + test); + } + + @Override + public boolean autoComplete(Sentence sentence) { + sentence.stopIfRequired(); + if (keyword) { + String query = sentence.getQuery(); + String q = query.trim(); + String up = sentence.getQueryUpper().trim(); + if (up.startsWith(name)) { + query = query.substring(name.length()); + while (!"_".equals(name) && Bnf.startWithSpace(query)) { + query = query.substring(1); + } + sentence.setQuery(query); + return true; + } else if (q.length() == 0 || name.startsWith(up)) { + if (q.length() < name.length()) { + sentence.add(name, name.substring(q.length()), type); + } + } + return false; + } + return link.autoComplete(sentence); + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/RuleFixed.java b/modules/h2/src/main/java/org/h2/bnf/RuleFixed.java new file mode 100644 index 0000000000000..12b5112505fad --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/RuleFixed.java @@ -0,0 +1,211 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.HashMap; + +/** + * Represents a hard coded terminal rule in a BNF object. 
+ */ +public class RuleFixed implements Rule { + + public static final int YMD = 0, HMS = 1, NANOS = 2; + public static final int ANY_EXCEPT_SINGLE_QUOTE = 3; + public static final int ANY_EXCEPT_DOUBLE_QUOTE = 4; + public static final int ANY_UNTIL_EOL = 5; + public static final int ANY_UNTIL_END = 6; + public static final int ANY_WORD = 7; + public static final int ANY_EXCEPT_2_DOLLAR = 8; + public static final int HEX_START = 10, CONCAT = 11; + public static final int AZ_UNDERSCORE = 12, AF = 13, DIGIT = 14; + public static final int OPEN_BRACKET = 15, CLOSE_BRACKET = 16; + + private final int type; + + RuleFixed(int type) { + this.type = type; + } + + @Override + public void accept(BnfVisitor visitor) { + visitor.visitRuleFixed(type); + } + + @Override + public void setLinks(HashMap ruleMap) { + // nothing to do + } + + @Override + public boolean autoComplete(Sentence sentence) { + sentence.stopIfRequired(); + String query = sentence.getQuery(); + String s = query; + boolean removeTrailingSpaces = false; + switch (type) { + case YMD: + while (s.length() > 0 && "0123456789-".indexOf(s.charAt(0)) >= 0) { + s = s.substring(1); + } + if (s.length() == 0) { + sentence.add("2006-01-01", "1", Sentence.KEYWORD); + } + // needed for timestamps + removeTrailingSpaces = true; + break; + case HMS: + while (s.length() > 0 && "0123456789:".indexOf(s.charAt(0)) >= 0) { + s = s.substring(1); + } + if (s.length() == 0) { + sentence.add("12:00:00", "1", Sentence.KEYWORD); + } + break; + case NANOS: + while (s.length() > 0 && Character.isDigit(s.charAt(0))) { + s = s.substring(1); + } + if (s.length() == 0) { + sentence.add("nanoseconds", "0", Sentence.KEYWORD); + } + removeTrailingSpaces = true; + break; + case ANY_EXCEPT_SINGLE_QUOTE: + while (true) { + while (s.length() > 0 && s.charAt(0) != '\'') { + s = s.substring(1); + } + if (s.startsWith("''")) { + s = s.substring(2); + } else { + break; + } + } + if (s.length() == 0) { + sentence.add("anything", "Hello World", 
Sentence.KEYWORD); + sentence.add("'", "'", Sentence.KEYWORD); + } + break; + case ANY_EXCEPT_2_DOLLAR: + while (s.length() > 0 && !s.startsWith("$$")) { + s = s.substring(1); + } + if (s.length() == 0) { + sentence.add("anything", "Hello World", Sentence.KEYWORD); + sentence.add("$$", "$$", Sentence.KEYWORD); + } + break; + case ANY_EXCEPT_DOUBLE_QUOTE: + while (true) { + while (s.length() > 0 && s.charAt(0) != '\"') { + s = s.substring(1); + } + if (s.startsWith("\"\"")) { + s = s.substring(2); + } else { + break; + } + } + if (s.length() == 0) { + sentence.add("anything", "identifier", Sentence.KEYWORD); + sentence.add("\"", "\"", Sentence.KEYWORD); + } + break; + case ANY_WORD: + while (s.length() > 0 && !Bnf.startWithSpace(s)) { + s = s.substring(1); + } + if (s.length() == 0) { + sentence.add("anything", "anything", Sentence.KEYWORD); + } + break; + case HEX_START: + if (s.startsWith("0X") || s.startsWith("0x")) { + s = s.substring(2); + } else if ("0".equals(s)) { + sentence.add("0x", "x", Sentence.KEYWORD); + } else if (s.length() == 0) { + sentence.add("0x", "0x", Sentence.KEYWORD); + } + break; + case CONCAT: + if (s.equals("|")) { + sentence.add("||", "|", Sentence.KEYWORD); + } else if (s.startsWith("||")) { + s = s.substring(2); + } else if (s.length() == 0) { + sentence.add("||", "||", Sentence.KEYWORD); + } + removeTrailingSpaces = true; + break; + case AZ_UNDERSCORE: + if (s.length() > 0 && + (Character.isLetter(s.charAt(0)) || s.charAt(0) == '_')) { + s = s.substring(1); + } + if (s.length() == 0) { + sentence.add("character", "A", Sentence.KEYWORD); + } + break; + case AF: + if (s.length() > 0) { + char ch = Character.toUpperCase(s.charAt(0)); + if (ch >= 'A' && ch <= 'F') { + s = s.substring(1); + } + } + if (s.length() == 0) { + sentence.add("hex character", "0A", Sentence.KEYWORD); + } + break; + case DIGIT: + if (s.length() > 0 && Character.isDigit(s.charAt(0))) { + s = s.substring(1); + } + if (s.length() == 0) { + sentence.add("digit", "1", 
Sentence.KEYWORD); + } + break; + case OPEN_BRACKET: + if (s.length() == 0) { + sentence.add("[", "[", Sentence.KEYWORD); + } else if (s.charAt(0) == '[') { + s = s.substring(1); + } + removeTrailingSpaces = true; + break; + case CLOSE_BRACKET: + if (s.length() == 0) { + sentence.add("]", "]", Sentence.KEYWORD); + } else if (s.charAt(0) == ']') { + s = s.substring(1); + } + removeTrailingSpaces = true; + break; + // no autocomplete support for comments + // (comments are not reachable in the bnf tree) + case ANY_UNTIL_EOL: + case ANY_UNTIL_END: + default: + throw new AssertionError("type="+type); + } + if (!s.equals(query)) { + // can not always remove spaces here, because a repeat + // rule for a-z would remove multiple words + // but we have to remove spaces after '||' + // and after ']' + if (removeTrailingSpaces) { + while (Bnf.startWithSpace(s)) { + s = s.substring(1); + } + } + sentence.setQuery(s); + return true; + } + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/RuleHead.java b/modules/h2/src/main/java/org/h2/bnf/RuleHead.java new file mode 100644 index 0000000000000..5c96929f77e4a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/RuleHead.java @@ -0,0 +1,38 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +/** + * Represents the head of a BNF rule. 
+ */ +public class RuleHead { + private final String section; + private final String topic; + private Rule rule; + + RuleHead(String section, String topic, Rule rule) { + this.section = section; + this.topic = topic; + this.rule = rule; + } + + public String getTopic() { + return topic; + } + + public Rule getRule() { + return rule; + } + + void setRule(Rule rule) { + this.rule = rule; + } + + public String getSection() { + return section; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/RuleList.java b/modules/h2/src/main/java/org/h2/bnf/RuleList.java new file mode 100644 index 0000000000000..28826f95b6b5f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/RuleList.java @@ -0,0 +1,73 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.ArrayList; +import java.util.HashMap; +import org.h2.util.New; + +/** + * Represents a sequence of BNF rules, or a list of alternative rules. 
+ */ +public class RuleList implements Rule { + + private final boolean or; + private final ArrayList<Rule> list; + private boolean mapSet; + + public RuleList(Rule first, Rule next, boolean or) { + list = New.arrayList(); + if (first instanceof RuleList && ((RuleList) first).or == or) { + list.addAll(((RuleList) first).list); + } else { + list.add(first); + } + if (next instanceof RuleList && ((RuleList) next).or == or) { + list.addAll(((RuleList) next).list); + } else { + list.add(next); + } + this.or = or; + } + + @Override + public void accept(BnfVisitor visitor) { + visitor.visitRuleList(or, list); + } + + @Override + public void setLinks(HashMap<String, RuleHead> ruleMap) { + if (!mapSet) { + for (Rule r : list) { + r.setLinks(ruleMap); + } + mapSet = true; + } + } + + @Override + public boolean autoComplete(Sentence sentence) { + sentence.stopIfRequired(); + String old = sentence.getQuery(); + if (or) { + for (Rule r : list) { + sentence.setQuery(old); + if (r.autoComplete(sentence)) { + return true; + } + } + return false; + } + for (Rule r : list) { + if (!r.autoComplete(sentence)) { + sentence.setQuery(old); + return false; + } + } + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/RuleOptional.java b/modules/h2/src/main/java/org/h2/bnf/RuleOptional.java new file mode 100644 index 0000000000000..6aca7b5dfedbd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/RuleOptional.java @@ -0,0 +1,40 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.HashMap; + +/** + * Represents an optional BNF rule.
+ */ +public class RuleOptional implements Rule { + private final Rule rule; + private boolean mapSet; + + public RuleOptional(Rule rule) { + this.rule = rule; + } + + @Override + public void accept(BnfVisitor visitor) { + visitor.visitRuleOptional(rule); + } + + @Override + public void setLinks(HashMap<String, RuleHead> ruleMap) { + if (!mapSet) { + rule.setLinks(ruleMap); + mapSet = true; + } + } + + @Override + public boolean autoComplete(Sentence sentence) { + sentence.stopIfRequired(); + rule.autoComplete(sentence); + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/RuleRepeat.java b/modules/h2/src/main/java/org/h2/bnf/RuleRepeat.java new file mode 100644 index 0000000000000..323f04024538e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/RuleRepeat.java @@ -0,0 +1,47 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.HashMap; + +/** + * Represents a loop in a BNF object.
+ */ +public class RuleRepeat implements Rule { + + private final Rule rule; + private final boolean comma; + + public RuleRepeat(Rule rule, boolean comma) { + this.rule = rule; + this.comma = comma; + } + + @Override + public void accept(BnfVisitor visitor) { + visitor.visitRuleRepeat(comma, rule); + } + + @Override + public void setLinks(HashMap<String, RuleHead> ruleMap) { + // not required, because it's already linked + } + + @Override + public boolean autoComplete(Sentence sentence) { + sentence.stopIfRequired(); + while (rule.autoComplete(sentence)) { + // nothing to do + } + String s = sentence.getQuery(); + while (Bnf.startWithSpace(s)) { + s = s.substring(1); + } + sentence.setQuery(s); + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/Sentence.java b/modules/h2/src/main/java/org/h2/bnf/Sentence.java new file mode 100644 index 0000000000000..4fbabef6d593e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/Sentence.java @@ -0,0 +1,222 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.Objects; +import java.util.concurrent.TimeUnit; + +import org.h2.bnf.context.DbSchema; +import org.h2.bnf.context.DbTableOrView; +import org.h2.util.StringUtils; + +/** + * A query context object. It contains the list of table and alias objects. + * Used for autocomplete. + */ +public class Sentence { + + /** + * This token type means the possible choices of the item depend on the + * context. For example the item represents a table name of the current + * database. + */ + public static final int CONTEXT = 0; + + /** + * The token type for a keyword. + */ + public static final int KEYWORD = 1; + + /** + * The token type for a function name.
+ */ + public static final int FUNCTION = 2; + + private static final long MAX_PROCESSING_TIME = 100; + + /** + * The map of next tokens, mapping type#tokenName to an example token. + */ + private final HashMap<String, String> next = new HashMap<>(); + + /** + * The complete query string. + */ + private String query; + + /** + * The uppercase version of the query string. + */ + private String queryUpper; + + private long stopAtNs; + private DbSchema lastMatchedSchema; + private DbTableOrView lastMatchedTable; + private DbTableOrView lastTable; + private HashSet<DbTableOrView> tables; + private HashMap<String, DbTableOrView> aliases; + + /** + * Start the timer to make sure processing doesn't take too long. + */ + public void start() { + stopAtNs = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(MAX_PROCESSING_TIME); + } + + /** + * Check if it's time to stop processing. + * Processing auto-complete shouldn't take more than a few milliseconds. + * If processing should be stopped, this method throws an IllegalStateException. + */ + public void stopIfRequired() { + if (System.nanoTime() > stopAtNs) { + throw new IllegalStateException(); + } + } + + /** + * Add a word to the set of next tokens. + * + * @param n the token name + * @param string an example text + * @param type the token type + */ + public void add(String n, String string, int type) { + next.put(type + "#" + n, string); + } + + /** + * Add an alias name and object. + * + * @param alias the alias name + * @param table the alias table + */ + public void addAlias(String alias, DbTableOrView table) { + if (aliases == null) { + aliases = new HashMap<>(); + } + aliases.put(alias, table); + } + + /** + * Add a table. + * + * @param table the table + */ + public void addTable(DbTableOrView table) { + lastTable = table; + if (tables == null) { + tables = new HashSet<>(); + } + tables.add(table); + } + + /** + * Get the set of tables. + * + * @return the set of tables + */ + public HashSet<DbTableOrView> getTables() { + return tables; + } + + /** + * Get the alias map.
+ * + * @return the alias map + */ + public HashMap<String, DbTableOrView> getAliases() { + return aliases; + } + + /** + * Get the last added table. + * + * @return the last table + */ + public DbTableOrView getLastTable() { + return lastTable; + } + + /** + * Get the last matched schema if the last match was a schema. + * + * @return the last schema or null + */ + public DbSchema getLastMatchedSchema() { + return lastMatchedSchema; + } + + /** + * Set the last matched schema if the last match was a schema, + * or null if it was not. + * + * @param schema the last matched schema or null + */ + public void setLastMatchedSchema(DbSchema schema) { + this.lastMatchedSchema = schema; + } + + /** + * Set the last matched table if the last match was a table. + * + * @param table the last matched table or null + */ + public void setLastMatchedTable(DbTableOrView table) { + this.lastMatchedTable = table; + } + + /** + * Get the last matched table if the last match was a table. + * + * @return the last table or null + */ + public DbTableOrView getLastMatchedTable() { + return lastMatchedTable; + } + + /** + * Set the query string. + * + * @param query the query string + */ + public void setQuery(String query) { + if (!Objects.equals(this.query, query)) { + this.query = query; + this.queryUpper = StringUtils.toUpperEnglish(query); + } + } + + /** + * Get the query string. + * + * @return the query + */ + public String getQuery() { + return query; + } + + /** + * Get the uppercase version of the query string. + * + * @return the uppercase query + */ + public String getQueryUpper() { + return queryUpper; + } + + /** + * Get the map of next tokens.
+ * + * @return the next token map + */ + public HashMap<String, String> getNext() { + return next; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/context/DbColumn.java b/modules/h2/src/main/java/org/h2/bnf/context/DbColumn.java new file mode 100644 index 0000000000000..22687650e571a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/context/DbColumn.java @@ -0,0 +1,115 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf.context; + +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; + +/** + * Keeps the meta data information of a column. + * This class is used by the H2 Console. + */ +public class DbColumn { + + private final String name; + + private final String quotedName; + + private final String dataType; + + private final int position; + + private DbColumn(DbContents contents, ResultSet rs, boolean procedureColumn) + throws SQLException { + name = rs.getString("COLUMN_NAME"); + quotedName = contents.quoteIdentifier(name); + String type = rs.getString("TYPE_NAME"); + // a procedure column's size is identified by PRECISION; for tables + // it is COLUMN_SIZE + String precisionColumnName; + if (procedureColumn) { + precisionColumnName = "PRECISION"; + } else { + precisionColumnName = "COLUMN_SIZE"; + } + int precision = rs.getInt(precisionColumnName); + position = rs.getInt("ORDINAL_POSITION"); + boolean isSQLite = contents.isSQLite(); + if (precision > 0 && !isSQLite) { + type += "(" + precision; + String scaleColumnName; + if (procedureColumn) { + scaleColumnName = "SCALE"; + } else { + scaleColumnName = "DECIMAL_DIGITS"; + } + int prec = rs.getInt(scaleColumnName); + if (prec > 0) { + type += ", " + prec; + } + type += ")"; + } + if (rs.getInt("NULLABLE") == DatabaseMetaData.columnNoNulls) { + type += " NOT NULL"; + } + dataType = type; + } + + /** + * Create a column from a
DatabaseMetaData.getProcedureColumns row. + * + * @param contents the database contents + * @param rs the result set + * @return the column + */ + public static DbColumn getProcedureColumn(DbContents contents, ResultSet rs) + throws SQLException { + return new DbColumn(contents, rs, true); + } + + /** + * Create a column from a DatabaseMetaData.getColumns row. + * + * @param contents the database contents + * @param rs the result set + * @return the column + */ + public static DbColumn getColumn(DbContents contents, ResultSet rs) + throws SQLException { + return new DbColumn(contents, rs, false); + } + + /** + * @return The data type name (including precision and the NOT NULL flag if + * applicable). + */ + public String getDataType() { + return dataType; + } + + /** + * @return The column name. + */ + public String getName() { + return name; + } + + /** + * @return The quoted column name. + */ + public String getQuotedName() { + return quotedName; + } + + /** + * @return The column position (ORDINAL_POSITION) + */ + public int getPosition() { + return position; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/context/DbContents.java b/modules/h2/src/main/java/org/h2/bnf/context/DbContents.java new file mode 100644 index 0000000000000..868ff69cbdf4a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/context/DbContents.java @@ -0,0 +1,284 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf.context; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; + +import org.h2.command.Parser; +import org.h2.util.New; +import org.h2.util.StringUtils; + +/** + * Keeps meta data information about a database. + * This class is used by the H2 Console.
+ */ +public class DbContents { + + private DbSchema[] schemas; + private DbSchema defaultSchema; + private boolean isOracle; + private boolean isH2; + private boolean isPostgreSQL; + private boolean isDerby; + private boolean isSQLite; + private boolean isH2ModeMySQL; + private boolean isMySQL; + private boolean isFirebird; + private boolean isMSSQLServer; + private boolean isDB2; + + /** + * @return The default schema. + */ + public DbSchema getDefaultSchema() { + return defaultSchema; + } + + /** + * @return True if this is an Apache Derby database. + */ + public boolean isDerby() { + return isDerby; + } + + /** + * @return True if this is a Firebird database. + */ + public boolean isFirebird() { + return isFirebird; + } + + /** + * @return True if this is a H2 database. + */ + public boolean isH2() { + return isH2; + } + + /** + * @return True if this is a H2 database in MySQL mode. + */ + public boolean isH2ModeMySQL() { + return isH2ModeMySQL; + } + + /** + * @return True if this is a MS SQL Server database. + */ + public boolean isMSSQLServer() { + return isMSSQLServer; + } + + /** + * @return True if this is a MySQL database. + */ + public boolean isMySQL() { + return isMySQL; + } + + /** + * @return True if this is an Oracle database. + */ + public boolean isOracle() { + return isOracle; + } + + /** + * @return True if this is a PostgreSQL database. + */ + public boolean isPostgreSQL() { + return isPostgreSQL; + } + + /** + * @return True if this is an SQLite database. + */ + public boolean isSQLite() { + return isSQLite; + } + + /** + * @return True if this is an IBM DB2 database. + */ + public boolean isDB2() { + return isDB2; + } + + /** + * @return The list of schemas. + */ + public DbSchema[] getSchemas() { + return schemas; + } + + /** + * Read the contents of this database from the database meta data. 
+ * + * @param url the database URL + * @param conn the connection + */ + public synchronized void readContents(String url, Connection conn) + throws SQLException { + isH2 = url.startsWith("jdbc:h2:"); + if (isH2) { + PreparedStatement prep = conn.prepareStatement( + "SELECT UPPER(VALUE) FROM INFORMATION_SCHEMA.SETTINGS " + + "WHERE NAME=?"); + prep.setString(1, "MODE"); + ResultSet rs = prep.executeQuery(); + rs.next(); + if ("MYSQL".equals(rs.getString(1))) { + isH2ModeMySQL = true; + } + rs.close(); + prep.close(); + } + isDB2 = url.startsWith("jdbc:db2:"); + isSQLite = url.startsWith("jdbc:sqlite:"); + isOracle = url.startsWith("jdbc:oracle:"); + // the Vertica engine is based on PostgreSQL + isPostgreSQL = url.startsWith("jdbc:postgresql:") || url.startsWith("jdbc:vertica:"); + // isHSQLDB = url.startsWith("jdbc:hsqldb:"); + isMySQL = url.startsWith("jdbc:mysql:"); + isDerby = url.startsWith("jdbc:derby:"); + isFirebird = url.startsWith("jdbc:firebirdsql:"); + isMSSQLServer = url.startsWith("jdbc:sqlserver:"); + DatabaseMetaData meta = conn.getMetaData(); + String defaultSchemaName = getDefaultSchemaName(meta); + String[] schemaNames = getSchemaNames(meta); + schemas = new DbSchema[schemaNames.length]; + for (int i = 0; i < schemaNames.length; i++) { + String schemaName = schemaNames[i]; + boolean isDefault = defaultSchemaName == null || + defaultSchemaName.equals(schemaName); + DbSchema schema = new DbSchema(this, schemaName, isDefault); + if (isDefault) { + defaultSchema = schema; + } + schemas[i] = schema; + String[] tableTypes = { "TABLE", "SYSTEM TABLE", "VIEW", + "SYSTEM VIEW", "TABLE LINK", "SYNONYM", "EXTERNAL" }; + schema.readTables(meta, tableTypes); + if (!isPostgreSQL && !isDB2) { + schema.readProcedures(meta); + } + } + if (defaultSchema == null) { + String best = null; + for (DbSchema schema : schemas) { + if ("dbo".equals(schema.name)) { + // MS SQL Server + defaultSchema = schema; + break; + } + if (defaultSchema == null || + best == null || + 
schema.name.length() < best.length()) { + best = schema.name; + defaultSchema = schema; + } + } + } + } + + private String[] getSchemaNames(DatabaseMetaData meta) throws SQLException { + if (isMySQL || isSQLite) { + return new String[] { "" }; + } else if (isFirebird) { + return new String[] { null }; + } + ResultSet rs = meta.getSchemas(); + ArrayList schemaList = New.arrayList(); + while (rs.next()) { + String schema = rs.getString("TABLE_SCHEM"); + String[] ignoreNames = null; + if (isOracle) { + ignoreNames = new String[] { "CTXSYS", "DIP", "DBSNMP", + "DMSYS", "EXFSYS", "FLOWS_020100", "FLOWS_FILES", + "MDDATA", "MDSYS", "MGMT_VIEW", "OLAPSYS", "ORDSYS", + "ORDPLUGINS", "OUTLN", "SI_INFORMTN_SCHEMA", "SYS", + "SYSMAN", "SYSTEM", "TSMSYS", "WMSYS", "XDB" }; + } else if (isMSSQLServer) { + ignoreNames = new String[] { "sys", "db_accessadmin", + "db_backupoperator", "db_datareader", "db_datawriter", + "db_ddladmin", "db_denydatareader", + "db_denydatawriter", "db_owner", "db_securityadmin" }; + } else if (isDB2) { + ignoreNames = new String[] { "NULLID", "SYSFUN", + "SYSIBMINTERNAL", "SYSIBMTS", "SYSPROC", "SYSPUBLIC", + // not empty, but not sure what they contain + "SYSCAT", "SYSIBM", "SYSIBMADM", + "SYSSTAT", "SYSTOOLS", + }; + + } + if (ignoreNames != null) { + for (String ignore : ignoreNames) { + if (ignore.equals(schema)) { + schema = null; + break; + } + } + } + if (schema == null) { + continue; + } + schemaList.add(schema); + } + rs.close(); + return schemaList.toArray(new String[0]); + } + + private String getDefaultSchemaName(DatabaseMetaData meta) { + String defaultSchemaName = ""; + try { + if (isOracle) { + return meta.getUserName(); + } else if (isPostgreSQL) { + return "public"; + } else if (isMySQL) { + return ""; + } else if (isDerby) { + return StringUtils.toUpperEnglish(meta.getUserName()); + } else if (isFirebird) { + return null; + } + ResultSet rs = meta.getSchemas(); + int index = rs.findColumn("IS_DEFAULT"); + while (rs.next()) { + if 
(rs.getBoolean(index)) { + defaultSchemaName = rs.getString("TABLE_SCHEM"); + } + } + } catch (SQLException e) { + // IS_DEFAULT not found + } + return defaultSchemaName; + } + + /** + * Add double quotes around an identifier if required. + * For the H2 database, all identifiers are quoted. + * + * @param identifier the identifier + * @return the quoted identifier + */ + public String quoteIdentifier(String identifier) { + if (identifier == null) { + return null; + } + if (isH2 && !isH2ModeMySQL) { + return Parser.quoteIdentifier(identifier); + } + return StringUtils.toUpperEnglish(identifier); + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/context/DbContextRule.java b/modules/h2/src/main/java/org/h2/bnf/context/DbContextRule.java new file mode 100644 index 0000000000000..48b32b67129b8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/context/DbContextRule.java @@ -0,0 +1,349 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf.context; + +import java.util.HashMap; +import java.util.HashSet; + +import org.h2.bnf.Bnf; +import org.h2.bnf.BnfVisitor; +import org.h2.bnf.Rule; +import org.h2.bnf.RuleElement; +import org.h2.bnf.RuleHead; +import org.h2.bnf.RuleList; +import org.h2.bnf.Sentence; +import org.h2.message.DbException; +import org.h2.util.ParserUtil; +import org.h2.util.StringUtils; + +/** + * A BNF terminal rule that is linked to the database context information. + * This class is used by the H2 Console, to support auto-complete. 
+ */ +public class DbContextRule implements Rule { + + public static final int COLUMN = 0, TABLE = 1, TABLE_ALIAS = 2; + public static final int NEW_TABLE_ALIAS = 3; + public static final int COLUMN_ALIAS = 4, SCHEMA = 5, PROCEDURE = 6; + + private final DbContents contents; + private final int type; + + private String columnType; + + /** + * BNF terminal rule Constructor + * @param contents Extract rule from this component + * @param type Rule type, one of + * {@link DbContextRule#COLUMN}, + * {@link DbContextRule#TABLE}, + * {@link DbContextRule#TABLE_ALIAS}, + * {@link DbContextRule#NEW_TABLE_ALIAS}, + * {@link DbContextRule#COLUMN_ALIAS}, + * {@link DbContextRule#SCHEMA} + */ + public DbContextRule(DbContents contents, int type) { + this.contents = contents; + this.type = type; + } + + /** + * @param columnType COLUMN Auto completion can be filtered by column type + */ + public void setColumnType(String columnType) { + this.columnType = columnType; + } + + @Override + public void setLinks(HashMap ruleMap) { + // nothing to do + } + + @Override + public void accept(BnfVisitor visitor) { + // nothing to do + } + + @Override + public boolean autoComplete(Sentence sentence) { + String query = sentence.getQuery(), s = query; + String up = sentence.getQueryUpper(); + switch (type) { + case SCHEMA: { + DbSchema[] schemas = contents.getSchemas(); + String best = null; + DbSchema bestSchema = null; + for (DbSchema schema: schemas) { + String name = StringUtils.toUpperEnglish(schema.name); + if (up.startsWith(name)) { + if (best == null || name.length() > best.length()) { + best = name; + bestSchema = schema; + } + } else if (s.length() == 0 || name.startsWith(up)) { + if (s.length() < name.length()) { + sentence.add(name, name.substring(s.length()), type); + sentence.add(schema.quotedName + ".", + schema.quotedName.substring(s.length()) + ".", + Sentence.CONTEXT); + } + } + } + if (best != null) { + sentence.setLastMatchedSchema(bestSchema); + s = 
s.substring(best.length()); + } + break; + } + case TABLE: { + DbSchema schema = sentence.getLastMatchedSchema(); + if (schema == null) { + schema = contents.getDefaultSchema(); + } + DbTableOrView[] tables = schema.getTables(); + String best = null; + DbTableOrView bestTable = null; + for (DbTableOrView table : tables) { + String compare = up; + String name = StringUtils.toUpperEnglish(table.getName()); + if (table.getQuotedName().length() > name.length()) { + name = table.getQuotedName(); + compare = query; + } + if (compare.startsWith(name)) { + if (best == null || name.length() > best.length()) { + best = name; + bestTable = table; + } + } else if (s.length() == 0 || name.startsWith(compare)) { + if (s.length() < name.length()) { + sentence.add(table.getQuotedName(), + table.getQuotedName().substring(s.length()), + Sentence.CONTEXT); + } + } + } + if (best != null) { + sentence.setLastMatchedTable(bestTable); + sentence.addTable(bestTable); + s = s.substring(best.length()); + } + break; + } + case NEW_TABLE_ALIAS: + s = autoCompleteTableAlias(sentence, true); + break; + case TABLE_ALIAS: + s = autoCompleteTableAlias(sentence, false); + break; + case COLUMN_ALIAS: { + int i = 0; + if (query.indexOf(' ') < 0) { + break; + } + for (; i < up.length(); i++) { + char ch = up.charAt(i); + if (ch != '_' && !Character.isLetterOrDigit(ch)) { + break; + } + } + if (i == 0) { + break; + } + String alias = up.substring(0, i); + if (ParserUtil.isKeyword(alias)) { + break; + } + s = s.substring(alias.length()); + break; + } + case COLUMN: { + HashSet set = sentence.getTables(); + String best = null; + DbTableOrView last = sentence.getLastMatchedTable(); + if (last != null && last.getColumns() != null) { + for (DbColumn column : last.getColumns()) { + String compare = up; + String name = StringUtils.toUpperEnglish(column.getName()); + if (column.getQuotedName().length() > name.length()) { + name = column.getQuotedName(); + compare = query; + } + if (compare.startsWith(name) && 
+ (columnType == null || + column.getDataType().contains(columnType))) { + String b = s.substring(name.length()); + if (best == null || b.length() < best.length()) { + best = b; + } else if (s.length() == 0 || name.startsWith(compare)) { + if (s.length() < name.length()) { + sentence.add(column.getName(), + column.getName().substring(s.length()), + Sentence.CONTEXT); + } + } + } + } + } + for (DbSchema schema : contents.getSchemas()) { + for (DbTableOrView table : schema.getTables()) { + if (table != last && set != null && !set.contains(table)) { + continue; + } + if (table == null || table.getColumns() == null) { + continue; + } + for (DbColumn column : table.getColumns()) { + String name = StringUtils.toUpperEnglish(column + .getName()); + if (columnType == null + || column.getDataType().contains(columnType)) { + if (up.startsWith(name)) { + String b = s.substring(name.length()); + if (best == null || b.length() < best.length()) { + best = b; + } + } else if (s.length() == 0 || name.startsWith(up)) { + if (s.length() < name.length()) { + sentence.add(column.getName(), + column.getName().substring(s.length()), + Sentence.CONTEXT); + } + } + } + } + } + } + if (best != null) { + s = best; + } + break; + } + case PROCEDURE: + autoCompleteProcedure(sentence); + break; + default: + throw DbException.throwInternalError("type=" + type); + } + if (!s.equals(query)) { + while (Bnf.startWithSpace(s)) { + s = s.substring(1); + } + sentence.setQuery(s); + return true; + } + return false; + } + private void autoCompleteProcedure(Sentence sentence) { + DbSchema schema = sentence.getLastMatchedSchema(); + if (schema == null) { + schema = contents.getDefaultSchema(); + } + String incompleteSentence = sentence.getQueryUpper(); + String incompleteFunctionName = incompleteSentence; + if (incompleteSentence.contains("(")) { + incompleteFunctionName = incompleteSentence.substring(0, + incompleteSentence.indexOf('(')).trim(); + } + + // Common elements + RuleElement openBracket = new 
RuleElement("(", "Function"); + RuleElement closeBracket = new RuleElement(")", "Function"); + RuleElement comma = new RuleElement(",", "Function"); + + // Fetch all elements + for (DbProcedure procedure : schema.getProcedures()) { + final String procName = procedure.getName(); + if (procName.startsWith(incompleteFunctionName)) { + // That's it, build a RuleList from this function + RuleElement procedureElement = new RuleElement(procName, + "Function"); + RuleList rl = new RuleList(procedureElement, openBracket, false); + // Go further only if the user typed an open bracket + if (incompleteSentence.contains("(")) { + for (DbColumn parameter : procedure.getParameters()) { + if (parameter.getPosition() > 1) { + rl = new RuleList(rl, comma, false); + } + DbContextRule columnRule = new DbContextRule(contents, + COLUMN); + String parameterType = parameter.getDataType(); + // Remove precision + if (parameterType.contains("(")) { + parameterType = parameterType.substring(0, + parameterType.indexOf('(')); + } + columnRule.setColumnType(parameterType); + rl = new RuleList(rl, columnRule, false); + } + rl = new RuleList(rl, closeBracket, false); + } + rl.autoComplete(sentence); + } + } + } + + private static String autoCompleteTableAlias(Sentence sentence, + boolean newAlias) { + String s = sentence.getQuery(); + String up = sentence.getQueryUpper(); + int i = 0; + for (; i < up.length(); i++) { + char ch = up.charAt(i); + if (ch != '_' && !Character.isLetterOrDigit(ch)) { + break; + } + } + if (i == 0) { + return s; + } + String alias = up.substring(0, i); + if ("SET".equals(alias) || ParserUtil.isKeyword(alias)) { + return s; + } + if (newAlias) { + sentence.addAlias(alias, sentence.getLastTable()); + } + HashMap<String, DbTableOrView> map = sentence.getAliases(); + if ((map != null && map.containsKey(alias)) || + (sentence.getLastTable() == null)) { + if (newAlias && s.length() == alias.length()) { + return s; + } + s = s.substring(alias.length()); + if (s.length() == 0) { + sentence.add(alias +
".", ".", Sentence.CONTEXT); + } + return s; + } + HashSet tables = sentence.getTables(); + if (tables != null) { + String best = null; + for (DbTableOrView table : tables) { + String tableName = + StringUtils.toUpperEnglish(table.getName()); + if (alias.startsWith(tableName) && + (best == null || tableName.length() > best.length())) { + sentence.setLastMatchedTable(table); + best = tableName; + } else if (s.length() == 0 || tableName.startsWith(alias)) { + sentence.add(tableName + ".", + tableName.substring(s.length()) + ".", + Sentence.CONTEXT); + } + } + if (best != null) { + s = s.substring(best.length()); + if (s.length() == 0) { + sentence.add(alias + ".", ".", Sentence.CONTEXT); + } + return s; + } + } + return s; + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/context/DbProcedure.java b/modules/h2/src/main/java/org/h2/bnf/context/DbProcedure.java new file mode 100644 index 0000000000000..2f5bf37e7a253 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/context/DbProcedure.java @@ -0,0 +1,96 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf.context; + +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import org.h2.util.New; + +/** + * Contains meta data information about a procedure. + * This class is used by the H2 Console. 
+ */ +public class DbProcedure { + + private final DbSchema schema; + private final String name; + private final String quotedName; + private final boolean returnsResult; + private DbColumn[] parameters; + + public DbProcedure(DbSchema schema, ResultSet rs) throws SQLException { + this.schema = schema; + name = rs.getString("PROCEDURE_NAME"); + returnsResult = rs.getShort("PROCEDURE_TYPE") == + DatabaseMetaData.procedureReturnsResult; + quotedName = schema.getContents().quoteIdentifier(name); + } + + /** + * @return The schema this procedure belongs to. + */ + public DbSchema getSchema() { + return schema; + } + + /** + * @return The parameter list. + */ + public DbColumn[] getParameters() { + return parameters; + } + + /** + * @return The procedure name. + */ + public String getName() { + return name; + } + + /** + * @return The quoted procedure name. + */ + public String getQuotedName() { + return quotedName; + } + + /** + * @return True if this procedure returns a result + */ + public boolean isReturnsResult() { + return returnsResult; + } + + /** + * Read the parameters of this procedure from the database meta data.
+ * + * @param meta the database meta data + */ + void readParameters(DatabaseMetaData meta) throws SQLException { + ResultSet rs = meta.getProcedureColumns(null, schema.name, name, null); + ArrayList<DbColumn> list = New.arrayList(); + while (rs.next()) { + DbColumn column = DbColumn.getProcedureColumn(schema.getContents(), rs); + if (column.getPosition() > 0) { + // Not the return type + list.add(column); + } + } + rs.close(); + parameters = new DbColumn[list.size()]; + // Store each parameter at its declared position [1..n] + for (int i = 0; i < parameters.length; i++) { + DbColumn column = list.get(i); + if (column.getPosition() > 0 + && column.getPosition() <= parameters.length) { + parameters[column.getPosition() - 1] = column; + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/context/DbSchema.java b/modules/h2/src/main/java/org/h2/bnf/context/DbSchema.java new file mode 100644 index 0000000000000..287af62c2aa4e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/context/DbSchema.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf.context; + +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; + +import org.h2.engine.SysProperties; +import org.h2.util.New; +import org.h2.util.StringUtils; + +/** + * Contains meta data information about a database schema. + * This class is used by the H2 Console. + */ +public class DbSchema { + + /** + * The schema name. + */ + public final String name; + + /** + * True if this is the default schema for this database. + */ + public final boolean isDefault; + + /** + * True if this is a system schema (for example the INFORMATION_SCHEMA). + */ + public final boolean isSystem; + + /** + * The quoted schema name. + */ + public final String quotedName; + + /** + * The database content container.
+ */ + private final DbContents contents; + + /** + * The table list. + */ + private DbTableOrView[] tables; + + /** + * The procedures list. + */ + private DbProcedure[] procedures; + + DbSchema(DbContents contents, String name, boolean isDefault) { + this.contents = contents; + this.name = name; + this.quotedName = contents.quoteIdentifier(name); + this.isDefault = isDefault; + if (name == null) { + // firebird + isSystem = true; + } else if ("INFORMATION_SCHEMA".equals(name)) { + isSystem = true; + } else if (!contents.isH2() && + StringUtils.toUpperEnglish(name).startsWith("INFO")) { + isSystem = true; + } else if (contents.isPostgreSQL() && + StringUtils.toUpperEnglish(name).startsWith("PG_")) { + isSystem = true; + } else if (contents.isDerby() && name.startsWith("SYS")) { + isSystem = true; + } else { + isSystem = false; + } + } + + /** + * @return The database content container. + */ + public DbContents getContents() { + return contents; + } + + /** + * @return The table list. + */ + public DbTableOrView[] getTables() { + return tables; + } + + /** + * @return The procedure list. + */ + public DbProcedure[] getProcedures() { + return procedures; + } + + /** + * Read all tables for this schema from the database meta data. + * + * @param meta the database meta data + * @param tableTypes the table types to read + */ + public void readTables(DatabaseMetaData meta, String[] tableTypes) + throws SQLException { + ResultSet rs = meta.getTables(null, name, null, tableTypes); + ArrayList list = New.arrayList(); + while (rs.next()) { + DbTableOrView table = new DbTableOrView(this, rs); + if (contents.isOracle() && table.getName().indexOf('$') > 0) { + continue; + } + list.add(table); + } + rs.close(); + tables = list.toArray(new DbTableOrView[0]); + if (tables.length < SysProperties.CONSOLE_MAX_TABLES_LIST_COLUMNS) { + for (DbTableOrView tab : tables) { + try { + tab.readColumns(meta); + } catch (SQLException e) { + // MySQL: + // View '...' 
references invalid table(s) or column(s) + // or function(s) or definer/invoker of view + // lack rights to use them HY000/1356 + // ignore + } + } + } + } + + /** + * Read all procedures in the database. + * @param meta the database meta data + * @throws SQLException Error while fetching procedures + */ + public void readProcedures(DatabaseMetaData meta) throws SQLException { + ResultSet rs = meta.getProcedures(null, name, null); + ArrayList<DbProcedure> list = New.arrayList(); + while (rs.next()) { + list.add(new DbProcedure(this, rs)); + } + rs.close(); + procedures = list.toArray(new DbProcedure[0]); + if (procedures.length < SysProperties.CONSOLE_MAX_PROCEDURES_LIST_COLUMNS) { + for (DbProcedure procedure : procedures) { + procedure.readParameters(meta); + } + } + } +} diff --git a/modules/h2/src/main/java/org/h2/bnf/context/DbTableOrView.java b/modules/h2/src/main/java/org/h2/bnf/context/DbTableOrView.java new file mode 100644 index 0000000000000..1c50c5104e7d7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/context/DbTableOrView.java @@ -0,0 +1,104 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.bnf.context; + +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import org.h2.util.New; + +/** + * Contains meta data information about a table or a view. + * This class is used by the H2 Console. + */ +public class DbTableOrView { + + /** + * The schema this table belongs to. + */ + private final DbSchema schema; + + /** + * The table name. + */ + private final String name; + + /** + * The quoted table name. + */ + private final String quotedName; + + /** + * True if this represents a view. + */ + private final boolean isView; + + /** + * The column list.
+ */ + private DbColumn[] columns; + + public DbTableOrView(DbSchema schema, ResultSet rs) throws SQLException { + this.schema = schema; + name = rs.getString("TABLE_NAME"); + String type = rs.getString("TABLE_TYPE"); + isView = "VIEW".equals(type); + quotedName = schema.getContents().quoteIdentifier(name); + } + + /** + * @return The schema this table belongs to. + */ + public DbSchema getSchema() { + return schema; + } + + /** + * @return The column list. + */ + public DbColumn[] getColumns() { + return columns; + } + + /** + * @return The table name. + */ + public String getName() { + return name; + } + + /** + * @return True if this represents a view. + */ + public boolean isView() { + return isView; + } + + /** + * @return The quoted table name. + */ + public String getQuotedName() { + return quotedName; + } + + /** + * Read the columns of this table from the database meta data. + * + * @param meta the database meta data + */ + public void readColumns(DatabaseMetaData meta) throws SQLException { + ResultSet rs = meta.getColumns(null, schema.name, name, null); + ArrayList<DbColumn> list = New.arrayList(); + while (rs.next()) { + DbColumn column = DbColumn.getColumn(schema.getContents(), rs); + list.add(column); + } + rs.close(); + columns = list.toArray(new DbColumn[0]); + } + +} diff --git a/modules/h2/src/main/java/org/h2/bnf/context/package.html b/modules/h2/src/main/java/org/h2/bnf/context/package.html new file mode 100644 index 0000000000000..197ef82ea85d1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/context/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Classes that provide context for the BNF tool, in order to provide BNF-based auto-complete. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/bnf/package.html b/modules/h2/src/main/java/org/h2/bnf/package.html new file mode 100644 index 0000000000000..eae02c4fe6c16 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/bnf/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +The implementation of the BNF (Backus-Naur form) parser and tool. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/command/Command.java b/modules/h2/src/main/java/org/h2/command/Command.java new file mode 100644 index 0000000000000..95b88c0a8edc2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/Command.java @@ -0,0 +1,381 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command; + +import java.sql.SQLException; +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.ParameterInterface; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.ResultInterface; +import org.h2.result.ResultWithGeneratedKeys; +import org.h2.util.MathUtils; + +/** + * Represents a SQL statement. This object is only used on the server side. + */ +public abstract class Command implements CommandInterface { + /** + * The session. + */ + protected final Session session; + + /** + * The last start time. + */ + protected long startTimeNanos; + + /** + * The trace module. + */ + private final Trace trace; + + /** + * If this query was canceled. + */ + private volatile boolean cancel; + + private final String sql; + + private boolean canReuse; + + Command(Parser parser, String sql) { + this.session = parser.getSession(); + this.sql = sql; + trace = session.getDatabase().getTrace(Trace.COMMAND); + } + + /** + * Check if this command is transactional. + * If it is not, then it forces the current transaction to commit. + * + * @return true if it is + */ + public abstract boolean isTransactional(); + + /** + * Check if this command is a query. + * + * @return true if it is + */ + @Override + public abstract boolean isQuery(); + + /** + * Prepare join batching. 
+ */ + public abstract void prepareJoinBatch(); + + /** + * Get the list of parameters. + * + * @return the list of parameters + */ + @Override + public abstract ArrayList getParameters(); + + /** + * Check if this command is read only. + * + * @return true if it is + */ + public abstract boolean isReadOnly(); + + /** + * Get an empty result set containing the meta data. + * + * @return an empty result set + */ + public abstract ResultInterface queryMeta(); + + /** + * Execute an updating statement (for example insert, delete, or update), if + * this is possible. + * + * @return the update count + * @throws DbException if the command is not an updating statement + */ + public int update() { + throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_QUERY); + } + + /** + * Execute a query statement, if this is possible. + * + * @param maxrows the maximum number of rows returned + * @return the local result set + * @throws DbException if the command is not a query + */ + public ResultInterface query(@SuppressWarnings("unused") int maxrows) { + throw DbException.get(ErrorCode.METHOD_ONLY_ALLOWED_FOR_QUERY); + } + + @Override + public final ResultInterface getMetaData() { + return queryMeta(); + } + + /** + * Start the stopwatch. + */ + void start() { + if (trace.isInfoEnabled() || session.getDatabase().getQueryStatistics()) { + startTimeNanos = System.nanoTime(); + } + } + + void setProgress(int state) { + session.getDatabase().setProgress(state, sql, 0, 0); + } + + /** + * Check if this command has been canceled, and throw an exception if yes. 
+ * + * @throws DbException if the statement has been canceled + */ + protected void checkCanceled() { + if (cancel) { + cancel = false; + throw DbException.get(ErrorCode.STATEMENT_WAS_CANCELED); + } + } + + @Override + public void stop() { + session.endStatement(); + session.setCurrentCommand(null, false); + if (!isTransactional()) { + session.commit(true); + } else if (session.getAutoCommit()) { + session.commit(false); + } else if (session.getDatabase().isMultiThreaded()) { + Database db = session.getDatabase(); + if (db != null) { + if (db.getLockMode() == Constants.LOCK_MODE_READ_COMMITTED) { + session.unlockReadLocks(); + } + } + } + if (trace.isInfoEnabled() && startTimeNanos > 0) { + long timeMillis = (System.nanoTime() - startTimeNanos) / 1000 / 1000; + if (timeMillis > Constants.SLOW_QUERY_LIMIT_MS) { + trace.info("slow query: {0} ms", timeMillis); + } + } + } + + /** + * Execute a query and return the result. + * This method prepares everything and calls {@link #query(int)} finally. + * + * @param maxrows the maximum number of rows to return + * @param scrollable if the result set must be scrollable (ignored) + * @return the result set + */ + @Override + public ResultInterface executeQuery(int maxrows, boolean scrollable) { + startTimeNanos = 0; + long start = 0; + Database database = session.getDatabase(); + Object sync = database.isMultiThreaded() ? 
(Object) session : (Object) database; + session.waitIfExclusiveModeEnabled(); + boolean callStop = true; + boolean writing = !isReadOnly(); + if (writing) { + while (!database.beforeWriting()) { + // wait + } + } + synchronized (sync) { + session.setCurrentCommand(this, false); + try { + while (true) { + database.checkPowerOff(); + try { + ResultInterface result = query(maxrows); + callStop = !result.isLazy(); + return result; + } catch (DbException e) { + start = filterConcurrentUpdate(e, start); + } catch (OutOfMemoryError e) { + callStop = false; + // there is a serious problem: + // the transaction may be applied partially + // in this case we need to panic: + // close the database + database.shutdownImmediately(); + throw DbException.convert(e); + } catch (Throwable e) { + throw DbException.convert(e); + } + } + } catch (DbException e) { + e = e.addSQL(sql); + SQLException s = e.getSQLException(); + database.exceptionThrown(s, sql); + if (s.getErrorCode() == ErrorCode.OUT_OF_MEMORY) { + callStop = false; + database.shutdownImmediately(); + throw e; + } + database.checkPowerOff(); + throw e; + } finally { + if (callStop) { + stop(); + } + if (writing) { + database.afterWriting(); + } + } + } + } + + @Override + public ResultWithGeneratedKeys executeUpdate(Object generatedKeysRequest) { + long start = 0; + Database database = session.getDatabase(); + Object sync = database.isMultiThreaded() ? 
(Object) session : (Object) database; + session.waitIfExclusiveModeEnabled(); + boolean callStop = true; + boolean writing = !isReadOnly(); + if (writing) { + while (!database.beforeWriting()) { + // wait + } + } + synchronized (sync) { + Session.Savepoint rollback = session.setSavepoint(); + session.setCurrentCommand(this, generatedKeysRequest); + try { + while (true) { + database.checkPowerOff(); + try { + int updateCount = update(); + if (!Boolean.FALSE.equals(generatedKeysRequest)) { + return new ResultWithGeneratedKeys.WithKeys(updateCount, + session.getGeneratedKeys().getKeys(session)); + } + return ResultWithGeneratedKeys.of(updateCount); + } catch (DbException e) { + start = filterConcurrentUpdate(e, start); + } catch (OutOfMemoryError e) { + callStop = false; + database.shutdownImmediately(); + throw DbException.convert(e); + } catch (Throwable e) { + throw DbException.convert(e); + } + } + } catch (DbException e) { + e = e.addSQL(sql); + SQLException s = e.getSQLException(); + database.exceptionThrown(s, sql); + if (s.getErrorCode() == ErrorCode.OUT_OF_MEMORY) { + callStop = false; + database.shutdownImmediately(); + throw e; + } + database.checkPowerOff(); + if (s.getErrorCode() == ErrorCode.DEADLOCK_1) { + session.rollback(); + } else { + session.rollbackTo(rollback, false); + } + throw e; + } finally { + try { + if (callStop) { + stop(); + } + } finally { + if (writing) { + database.afterWriting(); + } + } + } + } + } + + private long filterConcurrentUpdate(DbException e, long start) { + int errorCode = e.getErrorCode(); + if (errorCode != ErrorCode.CONCURRENT_UPDATE_1 && + errorCode != ErrorCode.ROW_NOT_FOUND_IN_PRIMARY_INDEX && + errorCode != ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1) { + throw e; + } + long now = System.nanoTime() / 1_000_000; + if (start != 0 && now - start > session.getLockTimeout()) { + throw DbException.get(ErrorCode.LOCK_TIMEOUT_1, e.getCause(), ""); + } + Database database = session.getDatabase(); + int sleep = 1 + 
MathUtils.randomInt(10); + while (true) { + try { + if (database.isMultiThreaded()) { + Thread.sleep(sleep); + } else { + database.wait(sleep); + } + } catch (InterruptedException e1) { + // ignore + } + long slept = System.nanoTime() / 1_000_000 - now; + if (slept >= sleep) { + break; + } + } + return start == 0 ? now : start; + } + + @Override + public void close() { + canReuse = true; + } + + @Override + public void cancel() { + this.cancel = true; + } + + @Override + public String toString() { + return sql + Trace.formatParams(getParameters()); + } + + public boolean isCacheable() { + return false; + } + + /** + * Whether the command is already closed (in which case it can be re-used). + * + * @return true if it can be re-used + */ + public boolean canReuse() { + return canReuse; + } + + /** + * The command is now re-used, therefore reset the canReuse flag, and the + * parameter values. + */ + public void reuse() { + canReuse = false; + ArrayList parameters = getParameters(); + for (ParameterInterface param : parameters) { + param.setValue(null, true); + } + } + + public void setCanReuse(boolean canReuse) { + this.canReuse = canReuse; + } +} diff --git a/modules/h2/src/main/java/org/h2/command/CommandContainer.java b/modules/h2/src/main/java/org/h2/command/CommandContainer.java new file mode 100644 index 0000000000000..4d261155dcf55 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/CommandContainer.java @@ -0,0 +1,165 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command; + +import java.util.ArrayList; +import org.h2.api.DatabaseEventListener; +import org.h2.command.dml.Explain; +import org.h2.command.dml.Query; +import org.h2.expression.Parameter; +import org.h2.expression.ParameterInterface; +import org.h2.result.ResultInterface; +import org.h2.table.TableView; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * Represents a single SQL statements. + * It wraps a prepared statement. + */ +public class CommandContainer extends Command { + + private Prepared prepared; + private boolean readOnlyKnown; + private boolean readOnly; + + CommandContainer(Parser parser, String sql, Prepared prepared) { + super(parser, sql); + prepared.setCommand(this); + this.prepared = prepared; + } + + @Override + public ArrayList getParameters() { + return prepared.getParameters(); + } + + @Override + public boolean isTransactional() { + return prepared.isTransactional(); + } + + @Override + public boolean isQuery() { + return prepared.isQuery(); + } + + @Override + public void prepareJoinBatch() { + if (session.isJoinBatchEnabled()) { + prepareJoinBatch(prepared); + } + } + + private static void prepareJoinBatch(Prepared prepared) { + if (prepared.isQuery()) { + int type = prepared.getType(); + + if (type == CommandInterface.SELECT) { + ((Query) prepared).prepareJoinBatch(); + } else if (type == CommandInterface.EXPLAIN || + type == CommandInterface.EXPLAIN_ANALYZE) { + prepareJoinBatch(((Explain) prepared).getCommand()); + } + } + } + + private void recompileIfRequired() { + if (prepared.needRecompile()) { + // TODO test with 'always recompile' + prepared.setModificationMetaId(0); + String sql = prepared.getSQL(); + ArrayList oldParams = prepared.getParameters(); + Parser parser = new Parser(session); + prepared = parser.parse(sql); + long mod = prepared.getModificationMetaId(); + prepared.setModificationMetaId(0); + ArrayList newParams = prepared.getParameters(); + for 
(int i = 0, size = newParams.size(); i < size; i++) { + Parameter old = oldParams.get(i); + if (old.isValueSet()) { + Value v = old.getValue(session); + Parameter p = newParams.get(i); + p.setValue(v); + } + } + prepared.prepare(); + prepared.setModificationMetaId(mod); + prepareJoinBatch(); + } + } + + @Override + public int update() { + recompileIfRequired(); + setProgress(DatabaseEventListener.STATE_STATEMENT_START); + start(); + session.setLastScopeIdentity(ValueNull.INSTANCE); + prepared.checkParameters(); + int updateCount = prepared.update(); + prepared.trace(startTimeNanos, updateCount); + setProgress(DatabaseEventListener.STATE_STATEMENT_END); + return updateCount; + } + + @Override + public ResultInterface query(int maxrows) { + recompileIfRequired(); + setProgress(DatabaseEventListener.STATE_STATEMENT_START); + start(); + prepared.checkParameters(); + ResultInterface result = prepared.query(maxrows); + prepared.trace(startTimeNanos, result.isLazy() ? 0 : result.getRowCount()); + setProgress(DatabaseEventListener.STATE_STATEMENT_END); + return result; + } + + @Override + public void stop() { + super.stop(); + // Clean up after the command was run in the session. + // Must restart query (and dependency construction) to reuse. 
+ if (prepared.getCteCleanups() != null) { + for (TableView view : prepared.getCteCleanups()) { + // check if view was previously deleted as their name is set to + // null + if (view.getName() != null) { + session.removeLocalTempTable(view); + } + } + } + } + + @Override + public boolean canReuse() { + return super.canReuse() && prepared.getCteCleanups() == null; + } + + @Override + public boolean isReadOnly() { + if (!readOnlyKnown) { + readOnly = prepared.isReadOnly(); + readOnlyKnown = true; + } + return readOnly; + } + + @Override + public ResultInterface queryMeta() { + return prepared.queryMeta(); + } + + @Override + public boolean isCacheable() { + return prepared.isCacheable(); + } + + @Override + public int getCommandType() { + return prepared.getType(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/CommandInterface.java b/modules/h2/src/main/java/org/h2/command/CommandInterface.java new file mode 100644 index 0000000000000..f7bdc9833c6c6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/CommandInterface.java @@ -0,0 +1,551 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command; + +import java.util.ArrayList; +import org.h2.expression.ParameterInterface; +import org.h2.result.ResultInterface; +import org.h2.result.ResultWithGeneratedKeys; + +/** + * Represents a SQL statement. + */ +public interface CommandInterface { + + /** + * The type for unknown statement. + */ + int UNKNOWN = 0; + + // ddl operations + + /** + * The type of a ALTER INDEX RENAME statement. + */ + int ALTER_INDEX_RENAME = 1; + + /** + * The type of a ALTER SCHEMA RENAME statement. + */ + int ALTER_SCHEMA_RENAME = 2; + + /** + * The type of a ALTER TABLE ADD CHECK statement. + */ + int ALTER_TABLE_ADD_CONSTRAINT_CHECK = 3; + + /** + * The type of a ALTER TABLE ADD UNIQUE statement. 
+ */ + int ALTER_TABLE_ADD_CONSTRAINT_UNIQUE = 4; + + /** + * The type of a ALTER TABLE ADD FOREIGN KEY statement. + */ + int ALTER_TABLE_ADD_CONSTRAINT_REFERENTIAL = 5; + + /** + * The type of a ALTER TABLE ADD PRIMARY KEY statement. + */ + int ALTER_TABLE_ADD_CONSTRAINT_PRIMARY_KEY = 6; + + /** + * The type of a ALTER TABLE ADD statement. + */ + int ALTER_TABLE_ADD_COLUMN = 7; + + /** + * The type of a ALTER TABLE ALTER COLUMN SET NOT NULL statement. + */ + int ALTER_TABLE_ALTER_COLUMN_NOT_NULL = 8; + + /** + * The type of a ALTER TABLE ALTER COLUMN SET NULL statement. + */ + int ALTER_TABLE_ALTER_COLUMN_NULL = 9; + + /** + * The type of a ALTER TABLE ALTER COLUMN SET DEFAULT statement. + */ + int ALTER_TABLE_ALTER_COLUMN_DEFAULT = 10; + + /** + * The type of an ALTER TABLE ALTER COLUMN statement that changes the column + * data type. + */ + int ALTER_TABLE_ALTER_COLUMN_CHANGE_TYPE = 11; + + /** + * The type of a ALTER TABLE DROP COLUMN statement. + */ + int ALTER_TABLE_DROP_COLUMN = 12; + + /** + * The type of a ALTER TABLE ALTER COLUMN SELECTIVITY statement. + */ + int ALTER_TABLE_ALTER_COLUMN_SELECTIVITY = 13; + + /** + * The type of a ALTER TABLE DROP CONSTRAINT statement. + */ + int ALTER_TABLE_DROP_CONSTRAINT = 14; + + /** + * The type of a ALTER TABLE RENAME statement. + */ + int ALTER_TABLE_RENAME = 15; + + /** + * The type of a ALTER TABLE ALTER COLUMN RENAME statement. + */ + int ALTER_TABLE_ALTER_COLUMN_RENAME = 16; + + /** + * The type of a ALTER USER ADMIN statement. + */ + int ALTER_USER_ADMIN = 17; + + /** + * The type of a ALTER USER RENAME statement. + */ + int ALTER_USER_RENAME = 18; + + /** + * The type of a ALTER USER SET PASSWORD statement. + */ + int ALTER_USER_SET_PASSWORD = 19; + + /** + * The type of a ALTER VIEW statement. + */ + int ALTER_VIEW = 20; + + /** + * The type of a ANALYZE statement. + */ + int ANALYZE = 21; + + /** + * The type of a CREATE AGGREGATE statement. 
+ */ + int CREATE_AGGREGATE = 22; + + /** + * The type of a CREATE CONSTANT statement. + */ + int CREATE_CONSTANT = 23; + + /** + * The type of a CREATE ALIAS statement. + */ + int CREATE_ALIAS = 24; + + /** + * The type of a CREATE INDEX statement. + */ + int CREATE_INDEX = 25; + + /** + * The type of a CREATE LINKED TABLE statement. + */ + int CREATE_LINKED_TABLE = 26; + + /** + * The type of a CREATE ROLE statement. + */ + int CREATE_ROLE = 27; + + /** + * The type of a CREATE SCHEMA statement. + */ + int CREATE_SCHEMA = 28; + + /** + * The type of a CREATE SEQUENCE statement. + */ + int CREATE_SEQUENCE = 29; + + /** + * The type of a CREATE TABLE statement. + */ + int CREATE_TABLE = 30; + + /** + * The type of a CREATE TRIGGER statement. + */ + int CREATE_TRIGGER = 31; + + /** + * The type of a CREATE USER statement. + */ + int CREATE_USER = 32; + + /** + * The type of a CREATE DOMAIN statement. + */ + int CREATE_DOMAIN = 33; + + /** + * The type of a CREATE VIEW statement. + */ + int CREATE_VIEW = 34; + + /** + * The type of a DEALLOCATE statement. + */ + int DEALLOCATE = 35; + + /** + * The type of a DROP AGGREGATE statement. + */ + int DROP_AGGREGATE = 36; + + /** + * The type of a DROP CONSTANT statement. + */ + int DROP_CONSTANT = 37; + + /** + * The type of a DROP ALL OBJECTS statement. + */ + int DROP_ALL_OBJECTS = 38; + + /** + * The type of a DROP ALIAS statement. + */ + int DROP_ALIAS = 39; + + /** + * The type of a DROP INDEX statement. + */ + int DROP_INDEX = 40; + + /** + * The type of a DROP ROLE statement. + */ + int DROP_ROLE = 41; + + /** + * The type of a DROP SCHEMA statement. + */ + int DROP_SCHEMA = 42; + + /** + * The type of a DROP SEQUENCE statement. + */ + int DROP_SEQUENCE = 43; + + /** + * The type of a DROP TABLE statement. + */ + int DROP_TABLE = 44; + + /** + * The type of a DROP TRIGGER statement. + */ + int DROP_TRIGGER = 45; + + /** + * The type of a DROP USER statement. 
+ */ + int DROP_USER = 46; + + /** + * The type of a DROP DOMAIN statement. + */ + int DROP_DOMAIN = 47; + + /** + * The type of a DROP VIEW statement. + */ + int DROP_VIEW = 48; + + /** + * The type of a GRANT statement. + */ + int GRANT = 49; + + /** + * The type of a REVOKE statement. + */ + int REVOKE = 50; + + /** + * The type of a PREPARE statement. + */ + int PREPARE = 51; + + /** + * The type of a COMMENT statement. + */ + int COMMENT = 52; + + /** + * The type of a TRUNCATE TABLE statement. + */ + int TRUNCATE_TABLE = 53; + + // dml operations + + /** + * The type of a ALTER SEQUENCE statement. + */ + int ALTER_SEQUENCE = 54; + + /** + * The type of a ALTER TABLE SET REFERENTIAL_INTEGRITY statement. + */ + int ALTER_TABLE_SET_REFERENTIAL_INTEGRITY = 55; + + /** + * The type of a BACKUP statement. + */ + int BACKUP = 56; + + /** + * The type of a CALL statement. + */ + int CALL = 57; + + /** + * The type of a DELETE statement. + */ + int DELETE = 58; + + /** + * The type of a EXECUTE statement. + */ + int EXECUTE = 59; + + /** + * The type of a EXPLAIN statement. + */ + int EXPLAIN = 60; + + /** + * The type of a INSERT statement. + */ + int INSERT = 61; + + /** + * The type of a MERGE statement. + */ + int MERGE = 62; + + /** + * The type of a REPLACE statement. + */ + int REPLACE = 63; + + /** + * The type of a no operation statement. + */ + int NO_OPERATION = 63; + + /** + * The type of a RUNSCRIPT statement. + */ + int RUNSCRIPT = 64; + + /** + * The type of a SCRIPT statement. + */ + int SCRIPT = 65; + + /** + * The type of a SELECT statement. + */ + int SELECT = 66; + + /** + * The type of a SET statement. + */ + int SET = 67; + + /** + * The type of a UPDATE statement. + */ + int UPDATE = 68; + + // transaction commands + + /** + * The type of a SET AUTOCOMMIT statement. + */ + int SET_AUTOCOMMIT_TRUE = 69; + + /** + * The type of a SET AUTOCOMMIT statement. + */ + int SET_AUTOCOMMIT_FALSE = 70; + + /** + * The type of a COMMIT statement. 
+ */ + int COMMIT = 71; + + /** + * The type of a ROLLBACK statement. + */ + int ROLLBACK = 72; + + /** + * The type of a CHECKPOINT statement. + */ + int CHECKPOINT = 73; + + /** + * The type of a SAVEPOINT statement. + */ + int SAVEPOINT = 74; + + /** + * The type of a ROLLBACK TO SAVEPOINT statement. + */ + int ROLLBACK_TO_SAVEPOINT = 75; + + /** + * The type of a CHECKPOINT SYNC statement. + */ + int CHECKPOINT_SYNC = 76; + + /** + * The type of a PREPARE COMMIT statement. + */ + int PREPARE_COMMIT = 77; + + /** + * The type of a COMMIT TRANSACTION statement. + */ + int COMMIT_TRANSACTION = 78; + + /** + * The type of a ROLLBACK TRANSACTION statement. + */ + int ROLLBACK_TRANSACTION = 79; + + /** + * The type of a SHUTDOWN statement. + */ + int SHUTDOWN = 80; + + /** + * The type of a SHUTDOWN IMMEDIATELY statement. + */ + int SHUTDOWN_IMMEDIATELY = 81; + + /** + * The type of a SHUTDOWN COMPACT statement. + */ + int SHUTDOWN_COMPACT = 82; + + /** + * The type of a BEGIN {WORK|TRANSACTION} statement. + */ + int BEGIN = 83; + + /** + * The type of a SHUTDOWN DEFRAG statement. + */ + int SHUTDOWN_DEFRAG = 84; + + /** + * The type of a ALTER TABLE RENAME CONSTRAINT statement. + */ + int ALTER_TABLE_RENAME_CONSTRAINT = 85; + + + /** + * The type of a EXPLAIN ANALYZE statement. + */ + int EXPLAIN_ANALYZE = 86; + + /** + * The type of a ALTER TABLE ALTER COLUMN SET INVISIBLE statement. + */ + int ALTER_TABLE_ALTER_COLUMN_VISIBILITY = 87; + + /** + * The type of a CREATE SYNONYM statement. + */ + int CREATE_SYNONYM = 88; + + /** + * The type of a DROP SYNONYM statement. + */ + int DROP_SYNONYM = 89; + + /** + * The type of a ALTER TABLE ALTER COLUMN SET ON UPDATE statement. + */ + int ALTER_TABLE_ALTER_COLUMN_ON_UPDATE = 90; + + /** + * Get command type. + * + * @return one of the constants above + */ + int getCommandType(); + + /** + * Check if this is a query. + * + * @return true if it is a query + */ + boolean isQuery(); + + /** + * Get the parameters (if any). 
+ * + * @return the parameters + */ + ArrayList getParameters(); + + /** + * Execute the query. + * + * @param maxRows the maximum number of rows returned + * @param scrollable if the result set must be scrollable + * @return the result + */ + ResultInterface executeQuery(int maxRows, boolean scrollable); + + /** + * Execute the statement + * + * @param generatedKeysRequest + * {@code false} if generated keys are not needed, {@code true} if + * generated keys should be configured automatically, {@code int[]} + * to specify column indices to return generated keys from, or + * {@code String[]} to specify column names to return generated keys + * from + * + * @return the update count + */ + ResultWithGeneratedKeys executeUpdate(Object generatedKeysRequest); + + /** + * Stop the command execution, release all locks and resources + */ + void stop(); + + /** + * Close the statement. + */ + void close(); + + /** + * Cancel the statement if it is still processing. + */ + void cancel(); + + /** + * Get an empty result set containing the meta data of the result. + * + * @return the empty result + */ + ResultInterface getMetaData(); +} diff --git a/modules/h2/src/main/java/org/h2/command/CommandList.java b/modules/h2/src/main/java/org/h2/command/CommandList.java new file mode 100644 index 0000000000000..bbcf258a8241d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/CommandList.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command; + +import java.util.ArrayList; +import org.h2.expression.ParameterInterface; +import org.h2.result.ResultInterface; + +/** + * Represents a list of SQL statements. 
+ */ +class CommandList extends Command { + + private final Command command; + private final String remaining; + + CommandList(Parser parser, String sql, Command c, String remaining) { + super(parser, sql); + this.command = c; + this.remaining = remaining; + } + + @Override + public ArrayList<? extends ParameterInterface> getParameters() { + return command.getParameters(); + } + + private void executeRemaining() { + Command remainingCommand = session.prepareLocal(remaining); + if (remainingCommand.isQuery()) { + remainingCommand.query(0); + } else { + remainingCommand.update(); + } + } + + @Override + public int update() { + int updateCount = command.executeUpdate(false).getUpdateCount(); + executeRemaining(); + return updateCount; + } + + @Override + public void prepareJoinBatch() { + command.prepareJoinBatch(); + } + + @Override + public ResultInterface query(int maxrows) { + ResultInterface result = command.query(maxrows); + executeRemaining(); + return result; + } + + @Override + public boolean isQuery() { + return command.isQuery(); + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public boolean isReadOnly() { + return false; + } + + @Override + public ResultInterface queryMeta() { + return command.queryMeta(); + } + + @Override + public int getCommandType() { + return command.getCommandType(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/CommandRemote.java b/modules/h2/src/main/java/org/h2/command/CommandRemote.java new file mode 100644 index 0000000000000..4911c81940a96 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/CommandRemote.java @@ -0,0 +1,331 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
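The CommandList class above implements multi-statement execution by running the first prepared statement and then preparing and executing whatever SQL text remains after the semicolon. A minimal standalone sketch of that split-and-recurse control flow (the `StatementChain` class and its `Executor` callback are illustrative names, not H2's API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the CommandList pattern: execute the first statement,
// then keep executing whatever SQL text remains after each semicolon.
// Note: H2's real parser splits on the tokenizer's parse index, so semicolons
// inside string literals do not break statements; this naive split does not.
public class StatementChain {

    /** Callback standing in for Session.prepareLocal(...).update()/query(). */
    public interface Executor {
        void execute(String singleStatement);
    }

    /** Runs each non-empty statement in order and returns what was executed. */
    public static List<String> run(String sql, Executor executor) {
        List<String> executed = new ArrayList<>();
        String remaining = sql;
        while (remaining != null && !remaining.trim().isEmpty()) {
            int semi = remaining.indexOf(';');
            String head = semi < 0 ? remaining : remaining.substring(0, semi);
            remaining = semi < 0 ? null : remaining.substring(semi + 1);
            if (!head.trim().isEmpty()) {
                executor.execute(head.trim());
                executed.add(head.trim());
            }
        }
        return executed;
    }

    public static void main(String[] args) {
        List<String> log = run("CREATE TABLE t(id INT); INSERT INTO t VALUES(1); ", s -> { });
        System.out.println(log);
    }
}
```

This mirrors why CommandList.update() reports only the first statement's update count: the remainder is executed for its side effects.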
+ * Initial Developer: H2 Group + */ +package org.h2.command; + +import java.io.IOException; +import java.util.ArrayList; + +import org.h2.engine.Constants; +import org.h2.engine.GeneratedKeysMode; +import org.h2.engine.SessionRemote; +import org.h2.engine.SysProperties; +import org.h2.expression.ParameterInterface; +import org.h2.expression.ParameterRemote; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.ResultInterface; +import org.h2.result.ResultRemote; +import org.h2.result.ResultWithGeneratedKeys; +import org.h2.util.New; +import org.h2.value.Transfer; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * Represents the client-side part of a SQL statement. + * This class is not used in embedded mode. + */ +public class CommandRemote implements CommandInterface { + + private final ArrayList<Transfer> transferList; + private final ArrayList<ParameterInterface> parameters; + private final Trace trace; + private final String sql; + private final int fetchSize; + private SessionRemote session; + private int id; + private boolean isQuery; + private int cmdType = UNKNOWN; + private boolean readonly; + private final int created; + + public CommandRemote(SessionRemote session, + ArrayList<Transfer> transferList, String sql, int fetchSize) { + this.transferList = transferList; + trace = session.getTrace(); + this.sql = sql; + parameters = New.arrayList(); + prepare(session, true); + // set session late because prepare might fail - in this case we don't + // need to close the object + this.session = session; + this.fetchSize = fetchSize; + created = session.getLastReconnect(); + } + + @Override + public void stop() { + // Must never be called, because remote result is not lazy.
+ throw DbException.throwInternalError(); + } + + private void prepare(SessionRemote s, boolean createParams) { + id = s.getNextId(); + for (int i = 0, count = 0; i < transferList.size(); i++) { + try { + Transfer transfer = transferList.get(i); + + boolean v16 = s.getClientVersion() >= Constants.TCP_PROTOCOL_VERSION_16; + + if (createParams) { + s.traceOperation(v16 ? "SESSION_PREPARE_READ_PARAMS2" + : "SESSION_PREPARE_READ_PARAMS", id); + transfer.writeInt( + v16 ? SessionRemote.SESSION_PREPARE_READ_PARAMS2 + : SessionRemote.SESSION_PREPARE_READ_PARAMS) + .writeInt(id).writeString(sql); + } else { + s.traceOperation("SESSION_PREPARE", id); + transfer.writeInt(SessionRemote.SESSION_PREPARE). + writeInt(id).writeString(sql); + } + s.done(transfer); + isQuery = transfer.readBoolean(); + readonly = transfer.readBoolean(); + + cmdType = v16 && createParams ? transfer.readInt() : UNKNOWN; + + int paramCount = transfer.readInt(); + if (createParams) { + parameters.clear(); + for (int j = 0; j < paramCount; j++) { + ParameterRemote p = new ParameterRemote(j); + p.readMetaData(transfer); + parameters.add(p); + } + } + } catch (IOException e) { + s.removeServer(e, i--, ++count); + } + } + } + + @Override + public boolean isQuery() { + return isQuery; + } + + @Override + public ArrayList getParameters() { + return parameters; + } + + private void prepareIfRequired() { + if (session.getLastReconnect() != created) { + // in this case we need to prepare again in every case + id = Integer.MIN_VALUE; + } + session.checkClosed(); + if (id <= session.getCurrentId() - SysProperties.SERVER_CACHED_OBJECTS) { + // object is too old - we need to prepare again + prepare(session, false); + } + } + + @Override + public ResultInterface getMetaData() { + synchronized (session) { + if (!isQuery) { + return null; + } + int objectId = session.getNextId(); + ResultRemote result = null; + for (int i = 0, count = 0; i < transferList.size(); i++) { + prepareIfRequired(); + Transfer transfer = 
transferList.get(i); + try { + session.traceOperation("COMMAND_GET_META_DATA", id); + transfer.writeInt(SessionRemote.COMMAND_GET_META_DATA). + writeInt(id).writeInt(objectId); + session.done(transfer); + int columnCount = transfer.readInt(); + result = new ResultRemote(session, transfer, objectId, + columnCount, Integer.MAX_VALUE); + break; + } catch (IOException e) { + session.removeServer(e, i--, ++count); + } + } + session.autoCommitIfCluster(); + return result; + } + } + + @Override + public ResultInterface executeQuery(int maxRows, boolean scrollable) { + checkParameters(); + synchronized (session) { + int objectId = session.getNextId(); + ResultRemote result = null; + for (int i = 0, count = 0; i < transferList.size(); i++) { + prepareIfRequired(); + Transfer transfer = transferList.get(i); + try { + session.traceOperation("COMMAND_EXECUTE_QUERY", id); + transfer.writeInt(SessionRemote.COMMAND_EXECUTE_QUERY). + writeInt(id).writeInt(objectId).writeInt(maxRows); + int fetch; + if (session.isClustered() || scrollable) { + fetch = Integer.MAX_VALUE; + } else { + fetch = fetchSize; + } + transfer.writeInt(fetch); + sendParameters(transfer); + session.done(transfer); + int columnCount = transfer.readInt(); + if (result != null) { + result.close(); + result = null; + } + result = new ResultRemote(session, transfer, objectId, columnCount, fetch); + if (readonly) { + break; + } + } catch (IOException e) { + session.removeServer(e, i--, ++count); + } + } + session.autoCommitIfCluster(); + session.readSessionState(); + return result; + } + } + + @Override + public ResultWithGeneratedKeys executeUpdate(Object generatedKeysRequest) { + checkParameters(); + boolean supportsGeneratedKeys = session.isSupportsGeneratedKeys(); + boolean readGeneratedKeys = supportsGeneratedKeys && !Boolean.FALSE.equals(generatedKeysRequest); + int objectId = readGeneratedKeys ? 
session.getNextId() : 0; + synchronized (session) { + int updateCount = 0; + ResultRemote generatedKeys = null; + boolean autoCommit = false; + for (int i = 0, count = 0; i < transferList.size(); i++) { + prepareIfRequired(); + Transfer transfer = transferList.get(i); + try { + session.traceOperation("COMMAND_EXECUTE_UPDATE", id); + transfer.writeInt(SessionRemote.COMMAND_EXECUTE_UPDATE).writeInt(id); + sendParameters(transfer); + if (supportsGeneratedKeys) { + int mode = GeneratedKeysMode.valueOf(generatedKeysRequest); + transfer.writeInt(mode); + switch (mode) { + case GeneratedKeysMode.COLUMN_NUMBERS: { + int[] keys = (int[]) generatedKeysRequest; + transfer.writeInt(keys.length); + for (int key : keys) { + transfer.writeInt(key); + } + break; + } + case GeneratedKeysMode.COLUMN_NAMES: { + String[] keys = (String[]) generatedKeysRequest; + transfer.writeInt(keys.length); + for (String key : keys) { + transfer.writeString(key); + } + break; + } + } + } + session.done(transfer); + updateCount = transfer.readInt(); + autoCommit = transfer.readBoolean(); + if (readGeneratedKeys) { + int columnCount = transfer.readInt(); + if (generatedKeys != null) { + generatedKeys.close(); + generatedKeys = null; + } + generatedKeys = new ResultRemote(session, transfer, objectId, columnCount, Integer.MAX_VALUE); + } + } catch (IOException e) { + session.removeServer(e, i--, ++count); + } + } + session.setAutoCommitFromServer(autoCommit); + session.autoCommitIfCluster(); + session.readSessionState(); + if (generatedKeys != null) { + return new ResultWithGeneratedKeys.WithKeys(updateCount, generatedKeys); + } + return ResultWithGeneratedKeys.of(updateCount); + } + } + + private void checkParameters() { + if (cmdType != EXPLAIN) { + for (ParameterInterface p : parameters) { + p.checkSet(); + } + } + } + + private void sendParameters(Transfer transfer) throws IOException { + int len = parameters.size(); + transfer.writeInt(len); + for (ParameterInterface p : parameters) { + Value pVal 
= p.getParamValue(); + + if (pVal == null && cmdType == EXPLAIN) { + pVal = ValueNull.INSTANCE; + } + + transfer.writeValue(pVal); + } + } + + @Override + public void close() { + if (session == null || session.isClosed()) { + return; + } + synchronized (session) { + session.traceOperation("COMMAND_CLOSE", id); + for (Transfer transfer : transferList) { + try { + transfer.writeInt(SessionRemote.COMMAND_CLOSE).writeInt(id); + } catch (IOException e) { + trace.error(e, "close"); + } + } + } + session = null; + try { + for (ParameterInterface p : parameters) { + Value v = p.getParamValue(); + if (v != null) { + v.remove(); + } + } + } catch (DbException e) { + trace.error(e, "close"); + } + parameters.clear(); + } + + /** + * Cancel this current statement. + */ + @Override + public void cancel() { + session.cancelStatement(id); + } + + @Override + public String toString() { + return sql + Trace.formatParams(getParameters()); + } + + @Override + public int getCommandType() { + return cmdType; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/Parser.java b/modules/h2/src/main/java/org/h2/command/Parser.java new file mode 100644 index 0000000000000..0ce12314b5849 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/Parser.java @@ -0,0 +1,6885 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + * + * Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + * Support for the operator "&&" as an alias for SPATIAL_INTERSECTS + */ +package org.h2.command; + +import java.math.BigDecimal; +import java.math.BigInteger; +import java.nio.charset.Charset; +import java.text.Collator; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashSet; +import java.util.LinkedHashSet; +import java.util.List; +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.command.ddl.AlterIndexRename; +import org.h2.command.ddl.AlterSchemaRename; +import org.h2.command.ddl.AlterTableAddConstraint; +import org.h2.command.ddl.AlterTableAlterColumn; +import org.h2.command.ddl.AlterTableDropConstraint; +import org.h2.command.ddl.AlterTableRename; +import org.h2.command.ddl.AlterTableRenameColumn; +import org.h2.command.ddl.AlterTableRenameConstraint; +import org.h2.command.ddl.AlterUser; +import org.h2.command.ddl.AlterView; +import org.h2.command.ddl.Analyze; +import org.h2.command.ddl.CommandWithColumns; +import org.h2.command.ddl.CreateAggregate; +import org.h2.command.ddl.CreateConstant; +import org.h2.command.ddl.CreateFunctionAlias; +import org.h2.command.ddl.CreateIndex; +import org.h2.command.ddl.CreateLinkedTable; +import org.h2.command.ddl.CreateRole; +import org.h2.command.ddl.CreateSchema; +import org.h2.command.ddl.CreateSequence; +import org.h2.command.ddl.CreateSynonym; +import org.h2.command.ddl.CreateTable; +import org.h2.command.ddl.CreateTrigger; +import org.h2.command.ddl.CreateUser; +import org.h2.command.ddl.CreateUserDataType; +import org.h2.command.ddl.CreateView; +import org.h2.command.ddl.DeallocateProcedure; +import org.h2.command.ddl.DefineCommand; +import org.h2.command.ddl.DropAggregate; +import org.h2.command.ddl.DropConstant; +import org.h2.command.ddl.DropDatabase; +import 
org.h2.command.ddl.DropFunctionAlias; +import org.h2.command.ddl.DropIndex; +import org.h2.command.ddl.DropRole; +import org.h2.command.ddl.DropSchema; +import org.h2.command.ddl.DropSequence; +import org.h2.command.ddl.DropSynonym; +import org.h2.command.ddl.DropTable; +import org.h2.command.ddl.DropTrigger; +import org.h2.command.ddl.DropUser; +import org.h2.command.ddl.DropUserDataType; +import org.h2.command.ddl.DropView; +import org.h2.command.ddl.GrantRevoke; +import org.h2.command.ddl.PrepareProcedure; +import org.h2.command.ddl.SchemaCommand; +import org.h2.command.ddl.SetComment; +import org.h2.command.ddl.TruncateTable; +import org.h2.command.dml.AlterSequence; +import org.h2.command.dml.AlterTableSet; +import org.h2.command.dml.BackupCommand; +import org.h2.command.dml.Call; +import org.h2.command.dml.Delete; +import org.h2.command.dml.ExecuteProcedure; +import org.h2.command.dml.Explain; +import org.h2.command.dml.Insert; +import org.h2.command.dml.Merge; +import org.h2.command.dml.MergeUsing; +import org.h2.command.dml.NoOperation; +import org.h2.command.dml.Query; +import org.h2.command.dml.Replace; +import org.h2.command.dml.RunScriptCommand; +import org.h2.command.dml.ScriptCommand; +import org.h2.command.dml.Select; +import org.h2.command.dml.SelectOrderBy; +import org.h2.command.dml.SelectUnion; +import org.h2.command.dml.Set; +import org.h2.command.dml.SetTypes; +import org.h2.command.dml.TransactionCommand; +import org.h2.command.dml.Update; +import org.h2.constraint.ConstraintActionType; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.FunctionAlias; +import org.h2.engine.Mode; +import org.h2.engine.Mode.ModeEnum; +import org.h2.engine.Procedure; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.engine.User; +import org.h2.engine.UserAggregate; +import org.h2.engine.UserDataType; +import 
org.h2.expression.Aggregate; +import org.h2.expression.Aggregate.AggregateType; +import org.h2.expression.Alias; +import org.h2.expression.CompareLike; +import org.h2.expression.Comparison; +import org.h2.expression.ConditionAndOr; +import org.h2.expression.ConditionExists; +import org.h2.expression.ConditionIn; +import org.h2.expression.ConditionInParameter; +import org.h2.expression.ConditionInSelect; +import org.h2.expression.ConditionNot; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.expression.ExpressionList; +import org.h2.expression.Function; +import org.h2.expression.FunctionCall; +import org.h2.expression.JavaAggregate; +import org.h2.expression.JavaFunction; +import org.h2.expression.Operation; +import org.h2.expression.Operation.OpType; +import org.h2.expression.Parameter; +import org.h2.expression.Rownum; +import org.h2.expression.SequenceValue; +import org.h2.expression.Subquery; +import org.h2.expression.TableFunction; +import org.h2.expression.ValueExpression; +import org.h2.expression.Variable; +import org.h2.expression.Wildcard; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.result.SortOrder; +import org.h2.schema.Schema; +import org.h2.schema.Sequence; +import org.h2.table.Column; +import org.h2.table.FunctionTable; +import org.h2.table.IndexColumn; +import org.h2.table.IndexHints; +import org.h2.table.RangeTable; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.table.TableFilter.TableFilterVisitor; +import org.h2.table.TableView; +import org.h2.util.DateTimeFunctions; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.util.ParserUtil; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.CompareMode; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueBytes; +import org.h2.value.ValueDate; +import 
org.h2.value.ValueDecimal; +import org.h2.value.ValueEnum; +import org.h2.value.ValueInt; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + * The parser is used to convert a SQL statement string to a command object. + * + * @author Thomas Mueller + * @author Noel Grandin + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public class Parser { + + private static final String WITH_STATEMENT_SUPPORTS_LIMITED_SUB_STATEMENTS = + "WITH statement supports only SELECT, CREATE TABLE, INSERT, UPDATE, MERGE or DELETE statements"; + + // used during the tokenizer phase + private static final int CHAR_END = 1, CHAR_VALUE = 2, CHAR_QUOTED = 3; + private static final int CHAR_NAME = 4, CHAR_SPECIAL_1 = 5, + CHAR_SPECIAL_2 = 6; + private static final int CHAR_STRING = 7, CHAR_DOT = 8, + CHAR_DOLLAR_QUOTED_STRING = 9; + + // these are the token types + private static final int KEYWORD = ParserUtil.KEYWORD; + private static final int IDENTIFIER = ParserUtil.IDENTIFIER; + private static final int NULL = ParserUtil.NULL; + private static final int TRUE = ParserUtil.TRUE; + private static final int FALSE = ParserUtil.FALSE; + private static final int ROWNUM = ParserUtil.ROWNUM; + private static final int PARAMETER = 10, END = 11, VALUE = 12; + private static final int EQUAL = 13, BIGGER_EQUAL = 14, BIGGER = 15; + private static final int SMALLER = 16, SMALLER_EQUAL = 17, NOT_EQUAL = 18; + private static final int AT = 19; + private static final int MINUS = 20, PLUS = 21, STRING_CONCAT = 22; + private static final int OPEN = 23, CLOSE = 24; + private static final int SPATIAL_INTERSECTS = 25; + + private static final Comparator<TableFilter> TABLE_FILTER_COMPARATOR = + new Comparator<TableFilter>() { + @Override + public int compare(TableFilter o1, TableFilter o2) { + return o1 == o2 ?
0 : compareTableFilters(o1, o2); + } + }; + + private final Database database; + private final Session session; + /** + * @see org.h2.engine.DbSettings#databaseToUpper + */ + private final boolean identifiersToUpper; + + /** indicates character-type for each char in sqlCommand */ + private int[] characterTypes; + private int currentTokenType; + private String currentToken; + private boolean currentTokenQuoted; + private Value currentValue; + private String originalSQL; + /** copy of originalSQL, with comments blanked out */ + private String sqlCommand; + /** cached array of chars from sqlCommand */ + private char[] sqlCommandChars; + /** index into sqlCommand of previous token */ + private int lastParseIndex; + /** index into sqlCommand of current token */ + private int parseIndex; + private CreateView createView; + private Prepared currentPrepared; + private Select currentSelect; + private ArrayList<Parameter> parameters; + private String schemaName; + private ArrayList<String> expectedList; + private boolean rightsChecked; + private boolean recompileAlways; + private boolean literalsChecked; + private ArrayList<Parameter> indexedParameterList; + private int orderInFrom; + private ArrayList<Parameter> suppliedParameterList; + + public Parser(Session session) { + this.database = session.getDatabase(); + this.identifiersToUpper = database.getSettings().databaseToUpper; + this.session = session; + } + + /** + * Parse the statement and prepare it for execution. + * + * @param sql the SQL statement to parse + * @return the prepared object + */ + public Prepared prepare(String sql) { + Prepared p = parse(sql); + p.prepare(); + if (currentTokenType != END) { + throw getSyntaxError(); + } + return p; + } + + /** + * Parse a statement or a list of statements, and prepare it for execution.
+ * + * @param sql the SQL statement to parse + * @return the command object + */ + public Command prepareCommand(String sql) { + try { + Prepared p = parse(sql); + boolean hasMore = isToken(";"); + if (!hasMore && currentTokenType != END) { + throw getSyntaxError(); + } + p.prepare(); + Command c = new CommandContainer(this, sql, p); + if (hasMore) { + String remaining = originalSQL.substring(parseIndex); + if (remaining.trim().length() != 0) { + c = new CommandList(this, sql, c, remaining); + } + } + return c; + } catch (DbException e) { + throw e.addSQL(originalSQL); + } + } + + /** + * Parse the statement, but don't prepare it for execution. + * + * @param sql the SQL statement to parse + * @return the prepared object + */ + Prepared parse(String sql) { + Prepared p; + try { + // first, try the fast variant + p = parse(sql, false); + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.SYNTAX_ERROR_1) { + // now, get the detailed exception + p = parse(sql, true); + } else { + throw e.addSQL(sql); + } + } + p.setPrepareAlways(recompileAlways); + p.setParameterList(parameters); + return p; + } + + private Prepared parse(String sql, boolean withExpectedList) { + initialize(sql); + if (withExpectedList) { + expectedList = New.arrayList(); + } else { + expectedList = null; + } + parameters = New.arrayList(); + currentSelect = null; + currentPrepared = null; + createView = null; + recompileAlways = false; + indexedParameterList = suppliedParameterList; + read(); + return parsePrepared(); + } + + private Prepared parsePrepared() { + int start = lastParseIndex; + Prepared c = null; + String token = currentToken; + if (token.length() == 0) { + c = new NoOperation(session); + } else { + char first = token.charAt(0); + switch (first) { + case '?': + // read the ? 
as a parameter + readTerm(); + // this is an 'out' parameter - set a dummy value + parameters.get(0).setValue(ValueNull.INSTANCE); + read("="); + read("CALL"); + c = parseCall(); + break; + case '(': + c = parseSelect(); + break; + case 'a': + case 'A': + if (readIf("ALTER")) { + c = parseAlter(); + } else if (readIf("ANALYZE")) { + c = parseAnalyze(); + } + break; + case 'b': + case 'B': + if (readIf("BACKUP")) { + c = parseBackup(); + } else if (readIf("BEGIN")) { + c = parseBegin(); + } + break; + case 'c': + case 'C': + if (readIf("COMMIT")) { + c = parseCommit(); + } else if (readIf("CREATE")) { + c = parseCreate(); + } else if (readIf("CALL")) { + c = parseCall(); + } else if (readIf("CHECKPOINT")) { + c = parseCheckpoint(); + } else if (readIf("COMMENT")) { + c = parseComment(); + } + break; + case 'd': + case 'D': + if (readIf("DELETE")) { + c = parseDelete(); + } else if (readIf("DROP")) { + c = parseDrop(); + } else if (readIf("DECLARE")) { + // support for DECLARE GLOBAL TEMPORARY TABLE... 
+ c = parseCreate(); + } else if (readIf("DEALLOCATE")) { + c = parseDeallocate(); + } + break; + case 'e': + case 'E': + if (readIf("EXPLAIN")) { + c = parseExplain(); + } else if (readIf("EXECUTE")) { + c = parseExecute(); + } + break; + case 'f': + case 'F': + if (isToken("FROM")) { + c = parseSelect(); + } + break; + case 'g': + case 'G': + if (readIf("GRANT")) { + c = parseGrantRevoke(CommandInterface.GRANT); + } + break; + case 'h': + case 'H': + if (readIf("HELP")) { + c = parseHelp(); + } + break; + case 'i': + case 'I': + if (readIf("INSERT")) { + c = parseInsert(); + } + break; + case 'm': + case 'M': + if (readIf("MERGE")) { + c = parseMerge(); + } + break; + case 'p': + case 'P': + if (readIf("PREPARE")) { + c = parsePrepare(); + } + break; + case 'r': + case 'R': + if (readIf("ROLLBACK")) { + c = parseRollback(); + } else if (readIf("REVOKE")) { + c = parseGrantRevoke(CommandInterface.REVOKE); + } else if (readIf("RUNSCRIPT")) { + c = parseRunScript(); + } else if (readIf("RELEASE")) { + c = parseReleaseSavepoint(); + } else if (readIf("REPLACE")) { + c = parseReplace(); + } + break; + case 's': + case 'S': + if (isToken("SELECT")) { + c = parseSelect(); + } else if (readIf("SET")) { + c = parseSet(); + } else if (readIf("SAVEPOINT")) { + c = parseSavepoint(); + } else if (readIf("SCRIPT")) { + c = parseScript(); + } else if (readIf("SHUTDOWN")) { + c = parseShutdown(); + } else if (readIf("SHOW")) { + c = parseShow(); + } + break; + case 't': + case 'T': + if (readIf("TRUNCATE")) { + c = parseTruncate(); + } + break; + case 'u': + case 'U': + if (readIf("UPDATE")) { + c = parseUpdate(); + } else if (readIf("USE")) { + c = parseUse(); + } + break; + case 'v': + case 'V': + if (readIf("VALUES")) { + c = parseValues(); + } + break; + case 'w': + case 'W': + if (readIf("WITH")) { + c = parseWithStatementOrQuery(); + } + break; + case ';': + c = new NoOperation(session); + break; + default: + throw getSyntaxError(); + } + if (indexedParameterList != null) 
{ + for (int i = 0, size = indexedParameterList.size(); + i < size; i++) { + if (indexedParameterList.get(i) == null) { + indexedParameterList.set(i, new Parameter(i)); + } + } + parameters = indexedParameterList; + } + if (readIf("{")) { + do { + int index = (int) readLong() - 1; + if (index < 0 || index >= parameters.size()) { + throw getSyntaxError(); + } + Parameter p = parameters.get(index); + if (p == null) { + throw getSyntaxError(); + } + read(":"); + Expression expr = readExpression(); + expr = expr.optimize(session); + p.setValue(expr.getValue(session)); + } while (readIf(",")); + read("}"); + for (Parameter p : parameters) { + p.checkSet(); + } + parameters.clear(); + } + } + if (c == null) { + throw getSyntaxError(); + } + setSQL(c, null, start); + return c; + } + + private DbException getSyntaxError() { + if (expectedList == null || expectedList.isEmpty()) { + return DbException.getSyntaxError(sqlCommand, parseIndex); + } + StatementBuilder buff = new StatementBuilder(); + for (String e : expectedList) { + buff.appendExceptFirst(", "); + buff.append(e); + } + return DbException.getSyntaxError(sqlCommand, parseIndex, + buff.toString()); + } + + private Prepared parseBackup() { + BackupCommand command = new BackupCommand(session); + read("TO"); + command.setFileName(readExpression()); + return command; + } + + private Prepared parseAnalyze() { + Analyze command = new Analyze(session); + if (readIf("TABLE")) { + Table table = readTableOrView(); + command.setTable(table); + } + if (readIf("SAMPLE_SIZE")) { + command.setTop(readPositiveInt()); + } + return command; + } + + private TransactionCommand parseBegin() { + TransactionCommand command; + if (!readIf("WORK")) { + readIf("TRANSACTION"); + } + command = new TransactionCommand(session, CommandInterface.BEGIN); + return command; + } + + private TransactionCommand parseCommit() { + TransactionCommand command; + if (readIf("TRANSACTION")) { + command = new TransactionCommand(session, + 
CommandInterface.COMMIT_TRANSACTION); + command.setTransactionName(readUniqueIdentifier()); + return command; + } + command = new TransactionCommand(session, + CommandInterface.COMMIT); + readIf("WORK"); + return command; + } + + private TransactionCommand parseShutdown() { + int type = CommandInterface.SHUTDOWN; + if (readIf("IMMEDIATELY")) { + type = CommandInterface.SHUTDOWN_IMMEDIATELY; + } else if (readIf("COMPACT")) { + type = CommandInterface.SHUTDOWN_COMPACT; + } else if (readIf("DEFRAG")) { + type = CommandInterface.SHUTDOWN_DEFRAG; + } else { + readIf("SCRIPT"); + } + return new TransactionCommand(session, type); + } + + private TransactionCommand parseRollback() { + TransactionCommand command; + if (readIf("TRANSACTION")) { + command = new TransactionCommand(session, + CommandInterface.ROLLBACK_TRANSACTION); + command.setTransactionName(readUniqueIdentifier()); + return command; + } + if (readIf("TO")) { + read("SAVEPOINT"); + command = new TransactionCommand(session, + CommandInterface.ROLLBACK_TO_SAVEPOINT); + command.setSavepointName(readUniqueIdentifier()); + } else { + readIf("WORK"); + command = new TransactionCommand(session, + CommandInterface.ROLLBACK); + } + return command; + } + + private Prepared parsePrepare() { + if (readIf("COMMIT")) { + TransactionCommand command = new TransactionCommand(session, + CommandInterface.PREPARE_COMMIT); + command.setTransactionName(readUniqueIdentifier()); + return command; + } + String procedureName = readAliasIdentifier(); + if (readIf("(")) { + ArrayList list = New.arrayList(); + for (int i = 0;; i++) { + Column column = parseColumnForTable("C" + i, true); + list.add(column); + if (readIf(")")) { + break; + } + read(","); + } + } + read("AS"); + Prepared prep = parsePrepared(); + PrepareProcedure command = new PrepareProcedure(session); + command.setProcedureName(procedureName); + command.setPrepared(prep); + return command; + } + + private TransactionCommand parseSavepoint() { + TransactionCommand command 
= new TransactionCommand(session, + CommandInterface.SAVEPOINT); + command.setSavepointName(readUniqueIdentifier()); + return command; + } + + private Prepared parseReleaseSavepoint() { + Prepared command = new NoOperation(session); + readIf("SAVEPOINT"); + readUniqueIdentifier(); + return command; + } + + private Schema findSchema(String schemaName) { + if (schemaName == null) { + return null; + } + Schema schema = database.findSchema(schemaName); + if (schema == null) { + if (equalsToken("SESSION", schemaName)) { + // for local temporary tables + schema = database.getSchema(session.getCurrentSchemaName()); + } else if (database.getMode().sysDummy1 && + "SYSIBM".equals(schemaName)) { + // IBM DB2 and Apache Derby compatibility: SYSIBM.SYSDUMMY1 + schema = database.getSchema(session.getCurrentSchemaName()); + } + } + return schema; + } + + private Schema getSchema(String schemaName) { + if (schemaName == null) { + return null; + } + Schema schema = findSchema(schemaName); + if (schema == null) { + throw DbException.get(ErrorCode.SCHEMA_NOT_FOUND_1, schemaName); + } + return schema; + } + + private Schema getSchema() { + return getSchema(schemaName); + } + /* + * Gets the current schema for scenarios that need a guaranteed, non-null schema object. + * + * This routine is solely here + * because of the function readIdentifierWithSchema(String defaultSchemaName) - which + * is often called with a null parameter (defaultSchemaName) - then 6 lines into the function + * that routine nullifies the state field schemaName - which I believe is a bug. + * + * There are about 7 places where "readIdentifierWithSchema(null)" is called in this file. + * + * In other words when is it legal to not have an active schema defined by schemaName ? + * I don't think it's ever a valid case. I don't understand when that would be allowed. + * I spent a long time trying to figure this out. + * As another proof of this point, the command "SET SCHEMA=NULL" is not a valid command. 
+ * + * I did try to fix this in readIdentifierWithSchema(String defaultSchemaName) + * - but every fix I tried cascaded so many unit test errors - so + * I gave up. I think this needs a bigger effort to fix, as part of a bigger, dedicated story. + * + */ + private Schema getSchemaWithDefault() { + if (schemaName == null) { + schemaName = session.getCurrentSchemaName(); + } + return getSchema(schemaName); + } + + private Column readTableColumn(TableFilter filter) { + String tableAlias = null; + String columnName = readColumnIdentifier(); + if (readIf(".")) { + tableAlias = columnName; + columnName = readColumnIdentifier(); + if (readIf(".")) { + String schema = tableAlias; + tableAlias = columnName; + columnName = readColumnIdentifier(); + if (readIf(".")) { + String catalogName = schema; + schema = tableAlias; + tableAlias = columnName; + columnName = readColumnIdentifier(); + if (!equalsToken(catalogName, database.getShortName())) { + throw DbException.get(ErrorCode.DATABASE_NOT_FOUND_1, + catalogName); + } + } + if (!equalsToken(schema, filter.getTable().getSchema() + .getName())) { + throw DbException.get(ErrorCode.SCHEMA_NOT_FOUND_1, schema); + } + } + if (!equalsToken(tableAlias, filter.getTableAlias())) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, + tableAlias); + } + } + if (database.getSettings().rowId) { + if (Column.ROWID.equals(columnName)) { + return filter.getRowIdColumn(); + } + } + return filter.getTable().getColumn(columnName); + } + + private Update parseUpdate() { + Update command = new Update(session); + currentPrepared = command; + int start = lastParseIndex; + TableFilter filter = readSimpleTableFilter(0, null); + command.setTableFilter(filter); + parseUpdateSetClause(command, filter, start); + return command; + } + + private void parseUpdateSetClause(Update command, TableFilter filter, int start) { + read("SET"); + if (readIf("(")) { + ArrayList<Column> columns = New.arrayList(); + do { + Column column = readTableColumn(filter); +
columns.add(column); + } while (readIfMore(true)); + read("="); + Expression expression = readExpression(); + if (columns.size() == 1) { + // the expression is parsed as a simple value + command.setAssignment(columns.get(0), expression); + } else { + for (int i = 0, size = columns.size(); i < size; i++) { + Column column = columns.get(i); + Function f = Function.getFunction(database, "ARRAY_GET"); + f.setParameter(0, expression); + f.setParameter(1, ValueExpression.get(ValueInt.get(i + 1))); + f.doneWithParameters(); + command.setAssignment(column, f); + } + } + } else { + do { + Column column = readTableColumn(filter); + read("="); + Expression expression; + if (readIf("DEFAULT")) { + expression = ValueExpression.getDefault(); + } else { + expression = readExpression(); + } + command.setAssignment(column, expression); + } while (readIf(",")); + } + if (readIf("WHERE")) { + Expression condition = readExpression(); + command.setCondition(condition); + } + if (readIf("ORDER")) { + // for MySQL compatibility + // (this syntax is supported, but ignored) + read("BY"); + parseSimpleOrderList(); + } + if (readIf("LIMIT")) { + Expression limit = readTerm().optimize(session); + command.setLimit(limit); + } + setSQL(command, "UPDATE", start); + } + + private TableFilter readSimpleTableFilter(int orderInFrom, Collection excludeTokens) { + Table table = readTableOrView(); + String alias = null; + if (readIf("AS")) { + alias = readAliasIdentifier(); + } else if (currentTokenType == IDENTIFIER) { + if (!equalsTokenIgnoreCase(currentToken, "SET") + && (excludeTokens == null || !isTokenInList(excludeTokens))) { + // SET is not a keyword (PostgreSQL supports it as a table name) + alias = readAliasIdentifier(); + } + } + return new TableFilter(session, table, alias, rightsChecked, + currentSelect, orderInFrom, null); + } + + private Delete parseDelete() { + Delete command = new Delete(session); + Expression limit = null; + if (readIf("TOP")) { + limit = readTerm().optimize(session); 
+ } + currentPrepared = command; + int start = lastParseIndex; + if (!readIf("FROM") && database.getMode().getEnum() == ModeEnum.MySQL) { + readIdentifierWithSchema(); + read("FROM"); + } + TableFilter filter = readSimpleTableFilter(0, null); + command.setTableFilter(filter); + parseDeleteGivenTable(command, limit, start); + return command; + } + + private void parseDeleteGivenTable(Delete command, Expression limit, int start) { + if (readIf("WHERE")) { + Expression condition = readExpression(); + command.setCondition(condition); + } + if (readIf("LIMIT") && limit == null) { + limit = readTerm().optimize(session); + } + command.setLimit(limit); + setSQL(command, "DELETE", start); + } + + private IndexColumn[] parseIndexColumnList() { + ArrayList columns = New.arrayList(); + do { + IndexColumn column = new IndexColumn(); + column.columnName = readColumnIdentifier(); + columns.add(column); + if (readIf("ASC")) { + // ignore + } else if (readIf("DESC")) { + column.sortType = SortOrder.DESCENDING; + } + if (readIf("NULLS")) { + if (readIf("FIRST")) { + column.sortType |= SortOrder.NULLS_FIRST; + } else { + read("LAST"); + column.sortType |= SortOrder.NULLS_LAST; + } + } + } while (readIfMore(true)); + return columns.toArray(new IndexColumn[0]); + } + + private String[] parseColumnList() { + ArrayList columns = New.arrayList(); + do { + String columnName = readColumnIdentifier(); + columns.add(columnName); + } while (readIfMore(false)); + return columns.toArray(new String[0]); + } + + private Column[] parseColumnList(Table table) { + ArrayList columns = New.arrayList(); + HashSet set = new HashSet<>(); + if (!readIf(")")) { + do { + Column column = parseColumn(table); + if (!set.add(column)) { + throw DbException.get(ErrorCode.DUPLICATE_COLUMN_NAME_1, + column.getSQL()); + } + columns.add(column); + } while (readIfMore(false)); + } + return columns.toArray(new Column[0]); + } + + private Column parseColumn(Table table) { + String id = readColumnIdentifier(); + if 
(database.getSettings().rowId && Column.ROWID.equals(id)) { + return table.getRowIdColumn(); + } + return table.getColumn(id); + } + + /** + * Read comma or closing brace. + * + * @param strict + * if {@code false} additional comma before brace is allowed + * @return {@code true} if comma is read, {@code false} if brace is read + */ + private boolean readIfMore(boolean strict) { + if (readIf(",")) { + return strict || !readIf(")"); + } + read(")"); + return false; + } + + private Prepared parseHelp() { + StringBuilder buff = new StringBuilder( + "SELECT * FROM INFORMATION_SCHEMA.HELP"); + int i = 0; + ArrayList paramValues = New.arrayList(); + while (currentTokenType != END) { + String s = currentToken; + read(); + if (i == 0) { + buff.append(" WHERE "); + } else { + buff.append(" AND "); + } + i++; + buff.append("UPPER(TOPIC) LIKE ?"); + paramValues.add(ValueString.get("%" + s + "%")); + } + return prepare(session, buff.toString(), paramValues); + } + + private Prepared parseShow() { + ArrayList paramValues = New.arrayList(); + StringBuilder buff = new StringBuilder("SELECT "); + if (readIf("CLIENT_ENCODING")) { + // for PostgreSQL compatibility + buff.append("'UNICODE' AS CLIENT_ENCODING FROM DUAL"); + } else if (readIf("DEFAULT_TRANSACTION_ISOLATION")) { + // for PostgreSQL compatibility + buff.append("'read committed' AS DEFAULT_TRANSACTION_ISOLATION " + + "FROM DUAL"); + } else if (readIf("TRANSACTION")) { + // for PostgreSQL compatibility + read("ISOLATION"); + read("LEVEL"); + buff.append("'read committed' AS TRANSACTION_ISOLATION " + + "FROM DUAL"); + } else if (readIf("DATESTYLE")) { + // for PostgreSQL compatibility + buff.append("'ISO' AS DATESTYLE FROM DUAL"); + } else if (readIf("SERVER_VERSION")) { + // for PostgreSQL compatibility + buff.append("'" + Constants.PG_VERSION + "' AS SERVER_VERSION FROM DUAL"); + } else if (readIf("SERVER_ENCODING")) { + // for PostgreSQL compatibility + buff.append("'UTF8' AS SERVER_ENCODING FROM DUAL"); + } else if 
(readIf("TABLES")) { + // for MySQL compatibility + String schema = Constants.SCHEMA_MAIN; + if (readIf("FROM")) { + schema = readUniqueIdentifier(); + } + buff.append("TABLE_NAME, TABLE_SCHEMA FROM " + + "INFORMATION_SCHEMA.TABLES " + + "WHERE TABLE_SCHEMA=? ORDER BY TABLE_NAME"); + paramValues.add(ValueString.get(schema)); + } else if (readIf("COLUMNS")) { + // for MySQL compatibility + read("FROM"); + String tableName = readIdentifierWithSchema(); + String schemaName = getSchema().getName(); + paramValues.add(ValueString.get(tableName)); + if (readIf("FROM")) { + schemaName = readUniqueIdentifier(); + } + buff.append("C.COLUMN_NAME FIELD, " + + "C.TYPE_NAME || '(' || C.NUMERIC_PRECISION || ')' TYPE, " + + "C.IS_NULLABLE \"NULL\", " + + "CASE (SELECT MAX(I.INDEX_TYPE_NAME) FROM " + + "INFORMATION_SCHEMA.INDEXES I " + + "WHERE I.TABLE_SCHEMA=C.TABLE_SCHEMA " + + "AND I.TABLE_NAME=C.TABLE_NAME " + + "AND I.COLUMN_NAME=C.COLUMN_NAME)" + + "WHEN 'PRIMARY KEY' THEN 'PRI' " + + "WHEN 'UNIQUE INDEX' THEN 'UNI' ELSE '' END KEY, " + + "IFNULL(COLUMN_DEFAULT, 'NULL') DEFAULT " + + "FROM INFORMATION_SCHEMA.COLUMNS C " + + "WHERE C.TABLE_NAME=? AND C.TABLE_SCHEMA=? 
" + + "ORDER BY C.ORDINAL_POSITION"); + paramValues.add(ValueString.get(schemaName)); + } else if (readIf("DATABASES") || readIf("SCHEMAS")) { + // for MySQL compatibility + buff.append("SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA"); + } + boolean b = session.getAllowLiterals(); + try { + // need to temporarily enable it, in case we are in + // ALLOW_LITERALS_NUMBERS mode + session.setAllowLiterals(true); + return prepare(session, buff.toString(), paramValues); + } finally { + session.setAllowLiterals(b); + } + } + + private static Prepared prepare(Session s, String sql, + ArrayList paramValues) { + Prepared prep = s.prepare(sql); + ArrayList params = prep.getParameters(); + if (params != null) { + for (int i = 0, size = params.size(); i < size; i++) { + Parameter p = params.get(i); + p.setValue(paramValues.get(i)); + } + } + return prep; + } + + private boolean isSelect() { + int start = lastParseIndex; + while (readIf("(")) { + // need to read ahead, it could be a nested union: + // ((select 1) union (select 1)) + } + boolean select = isToken("SELECT") || isToken("FROM") || isToken("WITH"); + parseIndex = start; + read(); + return select; + } + + + private Prepared parseMerge() { + Merge command = new Merge(session); + currentPrepared = command; + int start = lastParseIndex; + read("INTO"); + List excludeIdentifiers = Arrays.asList("USING", "KEY", "VALUES"); + TableFilter targetTableFilter = readSimpleTableFilter(0, excludeIdentifiers); + command.setTargetTableFilter(targetTableFilter); + Table table = command.getTargetTable(); + + if (readIf("USING")) { + return parseMergeUsing(command, start); + } + if (readIf("(")) { + if (isSelect()) { + command.setQuery(parseSelect()); + read(")"); + return command; + } + Column[] columns = parseColumnList(table); + command.setColumns(columns); + } + if (readIf("KEY")) { + read("("); + Column[] keys = parseColumnList(table); + command.setKeys(keys); + } + if (readIf("VALUES")) { + do { + ArrayList values = New.arrayList(); 
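The isSelect() helper above shows the parser's lookahead idiom: consume any number of opening parentheses, peek at the next keyword, then restore the saved parse index so nothing is actually consumed. A simplified standalone sketch of that save-and-rewind pattern (the class is hypothetical; only the idiom mirrors the parser):

```java
// Save/rewind lookahead as used by isSelect(): skip "(" tokens, peek at the
// keyword, then restore the index so normal parsing proceeds untouched.
final class TokenReader {
    private final String[] tokens;
    private int index;

    TokenReader(String... tokens) { this.tokens = tokens; }

    boolean isToken(String t) {
        return index < tokens.length && tokens[index].equalsIgnoreCase(t);
    }

    boolean readIf(String t) {
        if (isToken(t)) { index++; return true; }
        return false;
    }

    /** Peek past any number of '(' to see if a query follows; rewinds. */
    boolean isSelect() {
        int start = index;
        while (readIf("(")) {
            // skip nested parentheses, e.g. ((SELECT 1) UNION (SELECT 1))
        }
        boolean select = isToken("SELECT") || isToken("WITH");
        index = start; // rewind: lookahead must not consume tokens
        return select;
    }
}
```

The rewind is what lets parseMerge and readTableFilter call isSelect() speculatively and then re-parse the same tokens as either a subquery or a column list.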
+ read("("); + if (!readIf(")")) { + do { + if (readIf("DEFAULT")) { + values.add(null); + } else { + values.add(readExpression()); + } + } while (readIfMore(false)); + } + command.addRow(values.toArray(new Expression[0])); + } while (readIf(",")); + } else { + command.setQuery(parseSelect()); + } + return command; + } + + private MergeUsing parseMergeUsing(Merge oldCommand, int start) { + MergeUsing command = new MergeUsing(oldCommand); + currentPrepared = command; + + if (readIf("(")) { + /* a select query is supplied */ + if (isSelect()) { + command.setQuery(parseSelect()); + read(")"); + } + command.setQueryAlias(readFromAlias(null, Collections.singletonList("ON"))); + + String[] querySQLOutput = {null}; + List columnTemplateList = TableView.createQueryColumnTemplateList(null, command.getQuery(), + querySQLOutput); + TableView temporarySourceTableView = createCTEView( + command.getQueryAlias(), querySQLOutput[0], + columnTemplateList, false/* no recursion */, + false/* do not add to session */, + false /* isPersistent */, + session); + TableFilter sourceTableFilter = new TableFilter(session, + temporarySourceTableView, command.getQueryAlias(), + rightsChecked, (Select) command.getQuery(), 0, null); + command.setSourceTableFilter(sourceTableFilter); + } else { + /* It's a table name, simulate a query by building a select query for the table */ + List excludeIdentifiers = Collections.singletonList("ON"); + TableFilter sourceTableFilter = readSimpleTableFilter(0, excludeIdentifiers); + command.setSourceTableFilter(sourceTableFilter); + + StringBuilder buff = new StringBuilder("SELECT * FROM ") + .append(sourceTableFilter.getTable().getName()); + if (sourceTableFilter.getTableAlias() != null) { + buff.append(" AS ").append(sourceTableFilter.getTableAlias()); + } + Prepared preparedQuery = prepare(session, buff.toString(), null/*paramValues*/); + command.setQuery((Select) preparedQuery); + + } + read("ON"); + read("("); + Expression condition = readExpression(); + 
command.setOnCondition(condition); + read(")"); + + if (readIfAll("WHEN", "MATCHED", "THEN")) { + int startMatched = lastParseIndex; + if (readIf("UPDATE")) { + Update updateCommand = new Update(session); + //currentPrepared = updateCommand; + TableFilter filter = command.getTargetTableFilter(); + updateCommand.setTableFilter(filter); + parseUpdateSetClause(updateCommand, filter, startMatched); + command.setUpdateCommand(updateCommand); + } + startMatched = lastParseIndex; + if (readIf("DELETE")) { + Delete deleteCommand = new Delete(session); + TableFilter filter = command.getTargetTableFilter(); + deleteCommand.setTableFilter(filter); + parseDeleteGivenTable(deleteCommand, null, startMatched); + command.setDeleteCommand(deleteCommand); + } + } + if (readIfAll("WHEN", "NOT", "MATCHED", "THEN")) { + if (readIf("INSERT")) { + Insert insertCommand = new Insert(session); + insertCommand.setTable(command.getTargetTable()); + parseInsertGivenTable(insertCommand, command.getTargetTable()); + command.setInsertCommand(insertCommand); + } + } + + setSQL(command, "MERGE", start); + + // build and prepare the targetMatchQuery ready to test each row's + // existence in the target table (using source row to match) + StringBuilder targetMatchQuerySQL = new StringBuilder( + "SELECT _ROWID_ FROM " + command.getTargetTable().getName()); + if (command.getTargetTableFilter().getTableAlias() != null) { + targetMatchQuerySQL.append(" AS ").append(command.getTargetTableFilter().getTableAlias()); + } + targetMatchQuerySQL + .append(" WHERE ").append(command.getOnCondition().getSQL()); + command.setTargetMatchQuery( + (Select) parse(targetMatchQuerySQL.toString())); + + return command; + } + + private Insert parseInsert() { + Insert command = new Insert(session); + currentPrepared = command; + if (database.getMode().onDuplicateKeyUpdate && readIf("IGNORE")) { + command.setIgnore(true); + } + read("INTO"); + Table table = readTableOrView(); + command.setTable(table); + Insert 
returnedCommand = parseInsertGivenTable(command, table); + if (returnedCommand != null) { + return returnedCommand; + } + if (database.getMode().onDuplicateKeyUpdate) { + if (readIf("ON")) { + read("DUPLICATE"); + read("KEY"); + read("UPDATE"); + do { + Column column = parseColumn(table); + read("="); + Expression expression; + if (readIf("DEFAULT")) { + expression = ValueExpression.getDefault(); + } else { + expression = readExpression(); + } + command.addAssignmentForDuplicate(column, expression); + } while (readIf(",")); + } + } + if (database.getMode().isolationLevelInSelectOrInsertStatement) { + parseIsolationClause(); + } + return command; + } + + private Insert parseInsertGivenTable(Insert command, Table table) { + Column[] columns = null; + if (readIf("(")) { + if (isSelect()) { + command.setQuery(parseSelect()); + read(")"); + return command; + } + columns = parseColumnList(table); + command.setColumns(columns); + } + if (readIf("DIRECT")) { + command.setInsertFromSelect(true); + } + if (readIf("SORTED")) { + command.setSortedInsertMode(true); + } + if (readIf("DEFAULT")) { + read("VALUES"); + Expression[] expr = {}; + command.addRow(expr); + } else if (readIf("VALUES")) { + read("("); + do { + ArrayList values = New.arrayList(); + if (!readIf(")")) { + do { + if (readIf("DEFAULT")) { + values.add(null); + } else { + values.add(readExpression()); + } + } while (readIfMore(false)); + } + command.addRow(values.toArray(new Expression[0])); + // the following condition will allow (..),; and (..); + } while (readIf(",") && readIf("(")); + } else if (readIf("SET")) { + if (columns != null) { + throw getSyntaxError(); + } + ArrayList columnList = New.arrayList(); + ArrayList values = New.arrayList(); + do { + columnList.add(parseColumn(table)); + read("="); + Expression expression; + if (readIf("DEFAULT")) { + expression = ValueExpression.getDefault(); + } else { + expression = readExpression(); + } + values.add(expression); + } while (readIf(",")); + 
command.setColumns(columnList.toArray(new Column[0])); + command.addRow(values.toArray(new Expression[0])); + } else { + command.setQuery(parseSelect()); + } + return null; + } + + /** + * MySQL compatibility. REPLACE is similar to MERGE. + */ + private Replace parseReplace() { + Replace command = new Replace(session); + currentPrepared = command; + read("INTO"); + Table table = readTableOrView(); + command.setTable(table); + if (readIf("(")) { + if (isSelect()) { + command.setQuery(parseSelect()); + read(")"); + return command; + } + Column[] columns = parseColumnList(table); + command.setColumns(columns); + } + if (readIf("VALUES")) { + do { + ArrayList values = New.arrayList(); + read("("); + if (!readIf(")")) { + do { + if (readIf("DEFAULT")) { + values.add(null); + } else { + values.add(readExpression()); + } + } while (readIfMore(false)); + } + command.addRow(values.toArray(new Expression[0])); + } while (readIf(",")); + } else { + command.setQuery(parseSelect()); + } + return command; + } + + private TableFilter readTableFilter() { + Table table; + String alias = null; + if (readIf("(")) { + if (isSelect()) { + Query query = parseSelectUnion(); + read(")"); + query.setParameterList(new ArrayList<>(parameters)); + query.init(); + Session s; + if (createView != null) { + s = database.getSystemSession(); + } else { + s = session; + } + alias = session.getNextSystemIdentifier(sqlCommand); + table = TableView.createTempView(s, session.getUser(), alias, + query, currentSelect); + } else { + TableFilter top; + top = readTableFilter(); + top = readJoin(top); + read(")"); + alias = readFromAlias(null); + if (alias != null) { + top.setAlias(alias); + ArrayList derivedColumnNames = readDerivedColumnNames(); + if (derivedColumnNames != null) { + top.setDerivedColumns(derivedColumnNames); + } + } + return top; + } + } else if (readIf("VALUES")) { + table = parseValuesTable(0).getTable(); + } else { + String tableName = readIdentifierWithSchema(null); + Schema schema = 
getSchema(); + boolean foundLeftBracket = readIf("("); + if (foundLeftBracket && readIf("INDEX")) { + // Sybase compatibility with + // "select * from test (index table1_index)" + readIdentifierWithSchema(null); + read(")"); + foundLeftBracket = false; + } + if (foundLeftBracket) { + Schema mainSchema = database.getSchema(Constants.SCHEMA_MAIN); + if (equalsToken(tableName, RangeTable.NAME) + || equalsToken(tableName, RangeTable.ALIAS)) { + Expression min = readExpression(); + read(","); + Expression max = readExpression(); + if (readIf(",")) { + Expression step = readExpression(); + read(")"); + table = new RangeTable(mainSchema, min, max, step, + false); + } else { + read(")"); + table = new RangeTable(mainSchema, min, max, false); + } + } else { + Expression expr = readFunction(schema, tableName); + if (!(expr instanceof FunctionCall)) { + throw getSyntaxError(); + } + FunctionCall call = (FunctionCall) expr; + if (!call.isDeterministic()) { + recompileAlways = true; + } + table = new FunctionTable(mainSchema, session, expr, call); + } + } else if (equalsToken("DUAL", tableName)) { + table = getDualTable(false); + } else if (database.getMode().sysDummy1 && + equalsToken("SYSDUMMY1", tableName)) { + table = getDualTable(false); + } else { + table = readTableOrView(tableName); + } + } + ArrayList derivedColumnNames = null; + IndexHints indexHints = null; + // for backward compatibility, handle case where USE is a table alias + if (readIf("USE")) { + if (readIf("INDEX")) { + indexHints = parseIndexHints(table); + } else { + alias = "USE"; + derivedColumnNames = readDerivedColumnNames(); + } + } else { + alias = readFromAlias(alias); + if (alias != null) { + derivedColumnNames = readDerivedColumnNames(); + // if alias present, a second chance to parse index hints + if (readIf("USE")) { + read("INDEX"); + indexHints = parseIndexHints(table); + } + } + } + // inherit alias for CTE as views from table name + if (table.isView() && table.isTableExpression() && alias == 
null) { + alias = table.getName(); + } + TableFilter filter = new TableFilter(session, table, alias, rightsChecked, + currentSelect, orderInFrom++, indexHints); + if (derivedColumnNames != null) { + filter.setDerivedColumns(derivedColumnNames); + } + return filter; + } + + private IndexHints parseIndexHints(Table table) { + if (table == null) { + throw getSyntaxError(); + } + read("("); + LinkedHashSet indexNames = new LinkedHashSet<>(); + if (!readIf(")")) { + do { + String indexName = readIdentifierWithSchema(); + Index index = table.getIndex(indexName); + indexNames.add(index.getName()); + } while (readIfMore(true)); + } + return IndexHints.createUseIndexHints(indexNames); + } + + private String readFromAlias(String alias, List excludeIdentifiers) { + if (readIf("AS")) { + alias = readAliasIdentifier(); + } else if (currentTokenType == IDENTIFIER && !isTokenInList(excludeIdentifiers)) { + alias = readAliasIdentifier(); + } + return alias; + } + + private String readFromAlias(String alias) { + // left and right are not keywords (because they are functions as + // well) + List excludeIdentifiers = Arrays.asList("LEFT", "RIGHT", "FULL"); + return readFromAlias(alias, excludeIdentifiers); + } + + private ArrayList readDerivedColumnNames() { + if (readIf("(")) { + ArrayList derivedColumnNames = New.arrayList(); + do { + derivedColumnNames.add(readAliasIdentifier()); + } while (readIfMore(true)); + return derivedColumnNames; + } + return null; + } + + private Prepared parseTruncate() { + read("TABLE"); + Table table = readTableOrView(); + TruncateTable command = new TruncateTable(session); + command.setTable(table); + return command; + } + + private boolean readIfExists(boolean ifExists) { + if (readIf("IF")) { + read("EXISTS"); + ifExists = true; + } + return ifExists; + } + + private Prepared parseComment() { + int type = 0; + read("ON"); + boolean column = false; + if (readIf("TABLE") || readIf("VIEW")) { + type = DbObject.TABLE_OR_VIEW; + } else if 
(readIf("COLUMN")) { + column = true; + type = DbObject.TABLE_OR_VIEW; + } else if (readIf("CONSTANT")) { + type = DbObject.CONSTANT; + } else if (readIf("CONSTRAINT")) { + type = DbObject.CONSTRAINT; + } else if (readIf("ALIAS")) { + type = DbObject.FUNCTION_ALIAS; + } else if (readIf("INDEX")) { + type = DbObject.INDEX; + } else if (readIf("ROLE")) { + type = DbObject.ROLE; + } else if (readIf("SCHEMA")) { + type = DbObject.SCHEMA; + } else if (readIf("SEQUENCE")) { + type = DbObject.SEQUENCE; + } else if (readIf("TRIGGER")) { + type = DbObject.TRIGGER; + } else if (readIf("USER")) { + type = DbObject.USER; + } else if (readIf("DOMAIN")) { + type = DbObject.USER_DATATYPE; + } else { + throw getSyntaxError(); + } + SetComment command = new SetComment(session); + String objectName; + if (column) { + // can't use readIdentifierWithSchema() because + // it would not read schema.table.column correctly + // if the db name is equal to the schema name + ArrayList list = New.arrayList(); + do { + list.add(readUniqueIdentifier()); + } while (readIf(".")); + schemaName = session.getCurrentSchemaName(); + if (list.size() == 4) { + if (!equalsToken(database.getShortName(), list.remove(0))) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, + "database name"); + } + } + if (list.size() == 3) { + schemaName = list.remove(0); + } + if (list.size() != 2) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, + "table.column"); + } + objectName = list.get(0); + command.setColumn(true); + command.setColumnName(list.get(1)); + } else { + objectName = readIdentifierWithSchema(); + } + command.setSchemaName(schemaName); + command.setObjectName(objectName); + command.setObjectType(type); + read("IS"); + command.setCommentExpression(readExpression()); + return command; + } + + private Prepared parseDrop() { + if (readIf("TABLE")) { + boolean ifExists = readIfExists(false); + String tableName = readIdentifierWithSchema(); + DropTable command = new DropTable(session, 
getSchema()); + command.setTableName(tableName); + while (readIf(",")) { + tableName = readIdentifierWithSchema(); + DropTable next = new DropTable(session, getSchema()); + next.setTableName(tableName); + command.addNextDropTable(next); + } + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + if (readIf("CASCADE")) { + command.setDropAction(ConstraintActionType.CASCADE); + readIf("CONSTRAINTS"); + } else if (readIf("RESTRICT")) { + command.setDropAction(ConstraintActionType.RESTRICT); + } else if (readIf("IGNORE")) { + command.setDropAction(ConstraintActionType.SET_DEFAULT); + } + return command; + } else if (readIf("INDEX")) { + boolean ifExists = readIfExists(false); + String indexName = readIdentifierWithSchema(); + DropIndex command = new DropIndex(session, getSchema()); + command.setIndexName(indexName); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + //Support for MySQL: DROP INDEX index_name ON tbl_name + if (readIf("ON")) { + readIdentifierWithSchema(); + } + return command; + } else if (readIf("USER")) { + boolean ifExists = readIfExists(false); + DropUser command = new DropUser(session); + command.setUserName(readUniqueIdentifier()); + ifExists = readIfExists(ifExists); + readIf("CASCADE"); + command.setIfExists(ifExists); + return command; + } else if (readIf("SEQUENCE")) { + boolean ifExists = readIfExists(false); + String sequenceName = readIdentifierWithSchema(); + DropSequence command = new DropSequence(session, getSchema()); + command.setSequenceName(sequenceName); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } else if (readIf("CONSTANT")) { + boolean ifExists = readIfExists(false); + String constantName = readIdentifierWithSchema(); + DropConstant command = new DropConstant(session, getSchema()); + command.setConstantName(constantName); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } else if (readIf("TRIGGER")) { + 
boolean ifExists = readIfExists(false); + String triggerName = readIdentifierWithSchema(); + DropTrigger command = new DropTrigger(session, getSchema()); + command.setTriggerName(triggerName); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } else if (readIf("VIEW")) { + boolean ifExists = readIfExists(false); + String viewName = readIdentifierWithSchema(); + DropView command = new DropView(session, getSchema()); + command.setViewName(viewName); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + ConstraintActionType dropAction = parseCascadeOrRestrict(); + if (dropAction != null) { + command.setDropAction(dropAction); + } + return command; + } else if (readIf("ROLE")) { + boolean ifExists = readIfExists(false); + DropRole command = new DropRole(session); + command.setRoleName(readUniqueIdentifier()); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } else if (readIf("ALIAS")) { + boolean ifExists = readIfExists(false); + String aliasName = readIdentifierWithSchema(); + DropFunctionAlias command = new DropFunctionAlias(session, + getSchema()); + command.setAliasName(aliasName); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } else if (readIf("SCHEMA")) { + boolean ifExists = readIfExists(false); + DropSchema command = new DropSchema(session); + command.setSchemaName(readUniqueIdentifier()); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + if (readIf("CASCADE")) { + command.setDropAction(ConstraintActionType.CASCADE); + } else if (readIf("RESTRICT")) { + command.setDropAction(ConstraintActionType.RESTRICT); + } + return command; + } else if (readIf("ALL")) { + read("OBJECTS"); + DropDatabase command = new DropDatabase(session); + command.setDropAllObjects(true); + if (readIf("DELETE")) { + read("FILES"); + command.setDeleteFiles(true); + } + return command; + } else if (readIf("DOMAIN")) { + return 
parseDropUserDataType(); + } else if (readIf("TYPE")) { + return parseDropUserDataType(); + } else if (readIf("DATATYPE")) { + return parseDropUserDataType(); + } else if (readIf("AGGREGATE")) { + return parseDropAggregate(); + } else if (readIf("SYNONYM")) { + boolean ifExists = readIfExists(false); + String synonymName = readIdentifierWithSchema(); + DropSynonym command = new DropSynonym(session, getSchema()); + command.setSynonymName(synonymName); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } + throw getSyntaxError(); + } + + private DropUserDataType parseDropUserDataType() { + boolean ifExists = readIfExists(false); + DropUserDataType command = new DropUserDataType(session); + command.setTypeName(readUniqueIdentifier()); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } + + private DropAggregate parseDropAggregate() { + boolean ifExists = readIfExists(false); + DropAggregate command = new DropAggregate(session); + command.setName(readUniqueIdentifier()); + ifExists = readIfExists(ifExists); + command.setIfExists(ifExists); + return command; + } + + private TableFilter readJoin(TableFilter top) { + TableFilter last = top; + while (true) { + TableFilter join; + if (readIf("RIGHT")) { + readIf("OUTER"); + read("JOIN"); + // the right hand side is the 'inner' table usually + join = readTableFilter(); + join = readJoin(join); + Expression on = null; + if (readIf("ON")) { + on = readExpression(); + } + addJoin(join, top, true, on); + top = join; + } else if (readIf("LEFT")) { + readIf("OUTER"); + read("JOIN"); + join = readTableFilter(); + join = readJoin(join); + Expression on = null; + if (readIf("ON")) { + on = readExpression(); + } + addJoin(top, join, true, on); + } else if (readIf("FULL")) { + throw getSyntaxError(); + } else if (readIf("INNER")) { + read("JOIN"); + join = readTableFilter(); + top = readJoin(top); + Expression on = null; + if (readIf("ON")) { + on = 
readExpression(); + } + addJoin(top, join, false, on); + } else if (readIf("JOIN")) { + join = readTableFilter(); + top = readJoin(top); + Expression on = null; + if (readIf("ON")) { + on = readExpression(); + } + addJoin(top, join, false, on); + } else if (readIf("CROSS")) { + read("JOIN"); + join = readTableFilter(); + addJoin(top, join, false, null); + } else if (readIf("NATURAL")) { + read("JOIN"); + join = readTableFilter(); + Column[] tableCols = last.getTable().getColumns(); + Column[] joinCols = join.getTable().getColumns(); + String tableSchema = last.getTable().getSchema().getName(); + String joinSchema = join.getTable().getSchema().getName(); + Expression on = null; + for (Column tc : tableCols) { + String tableColumnName = tc.getName(); + for (Column c : joinCols) { + String joinColumnName = c.getName(); + if (equalsToken(tableColumnName, joinColumnName)) { + join.addNaturalJoinColumn(c); + Expression tableExpr = new ExpressionColumn( + database, tableSchema, + last.getTableAlias(), tableColumnName); + Expression joinExpr = new ExpressionColumn( + database, joinSchema, join.getTableAlias(), + joinColumnName); + Expression equal = new Comparison(session, + Comparison.EQUAL, tableExpr, joinExpr); + if (on == null) { + on = equal; + } else { + on = new ConditionAndOr(ConditionAndOr.AND, on, + equal); + } + } + } + } + addJoin(top, join, false, on); + } else { + break; + } + last = join; + } + return top; + } + + /** + * Add one join to another. This method creates nested join between them if + * required. 
+ * + * @param top parent join + * @param join child join + * @param outer if child join is an outer join + * @param on the join condition + * @see TableFilter#addJoin(TableFilter, boolean, Expression) + */ + private void addJoin(TableFilter top, TableFilter join, boolean outer, Expression on) { + if (join.getJoin() != null) { + String joinTable = Constants.PREFIX_JOIN + parseIndex; + TableFilter n = new TableFilter(session, getDualTable(true), + joinTable, rightsChecked, currentSelect, join.getOrderInFrom(), + null); + n.setNestedJoin(join); + join = n; + } + top.addJoin(join, outer, on); + } + + private Prepared parseExecute() { + ExecuteProcedure command = new ExecuteProcedure(session); + String procedureName = readAliasIdentifier(); + Procedure p = session.getProcedure(procedureName); + if (p == null) { + throw DbException.get(ErrorCode.FUNCTION_ALIAS_NOT_FOUND_1, + procedureName); + } + command.setProcedure(p); + if (readIf("(")) { + for (int i = 0;; i++) { + command.setExpression(i, readExpression()); + if (readIf(")")) { + break; + } + read(","); + } + } + return command; + } + + private DeallocateProcedure parseDeallocate() { + readIf("PLAN"); + String procedureName = readAliasIdentifier(); + DeallocateProcedure command = new DeallocateProcedure(session); + command.setProcedureName(procedureName); + return command; + } + + private Explain parseExplain() { + Explain command = new Explain(session); + if (readIf("ANALYZE")) { + command.setExecuteCommand(true); + } else { + if (readIf("PLAN")) { + readIf("FOR"); + } + } + if (isToken("SELECT") || isToken("FROM") || isToken("(") || isToken("WITH")) { + Query query = parseSelect(); + query.setNeverLazy(true); + command.setCommand(query); + } else if (readIf("DELETE")) { + command.setCommand(parseDelete()); + } else if (readIf("UPDATE")) { + command.setCommand(parseUpdate()); + } else if (readIf("INSERT")) { + command.setCommand(parseInsert()); + } else if (readIf("MERGE")) { + command.setCommand(parseMerge()); + 
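The NATURAL JOIN branch of readJoin above builds the join condition implicitly: every column name the two tables share yields an equality comparison, and the comparisons are AND-ed together. A minimal standalone sketch of that matching, with tables reduced to arrays of column names (the class and method are hypothetical):

```java
// Builds a NATURAL JOIN condition the way readJoin's NATURAL branch does:
// for each column name present in both tables, emit left.col = right.col
// and AND the equalities together. Returns null when nothing matches.
final class NaturalJoin {
    static String onCondition(String leftAlias, String[] leftCols,
            String rightAlias, String[] rightCols) {
        StringBuilder on = new StringBuilder();
        for (String lc : leftCols) {
            for (String rc : rightCols) {
                if (lc.equalsIgnoreCase(rc)) {
                    if (on.length() > 0) {
                        on.append(" AND ");
                    }
                    on.append(leftAlias).append('.').append(lc)
                      .append(" = ").append(rightAlias).append('.').append(rc);
                }
            }
        }
        return on.length() == 0 ? null : on.toString();
    }
}
```

A null result corresponds to the degenerate case where a NATURAL JOIN has no shared columns and behaves like a cross join, matching the `on == null` path passed to addJoin.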
} else {
+            throw getSyntaxError();
+        }
+        return command;
+    }
+
+    private Query parseSelect() {
+        Query command = null;
+        int paramIndex = parameters.size();
+        command = parseSelectUnion();
+        ArrayList<Parameter> params = New.arrayList();
+        for (int i = paramIndex, size = parameters.size(); i < size; i++) {
+            params.add(parameters.get(i));
+        }
+        command.setParameterList(params);
+        command.init();
+        return command;
+    }
+
+    private Prepared parseWithStatementOrQuery() {
+        int paramIndex = parameters.size();
+        Prepared command = parseWith();
+        ArrayList<Parameter> params = New.arrayList();
+        for (int i = paramIndex, size = parameters.size(); i < size; i++) {
+            params.add(parameters.get(i));
+        }
+        command.setParameterList(params);
+        if (command instanceof Query) {
+            Query query = (Query) command;
+            query.init();
+        }
+        return command;
+    }
+
+    private Query parseSelectUnion() {
+        int start = lastParseIndex;
+        Query command = parseSelectSub();
+        return parseSelectUnionExtension(command, start, false);
+    }
+
+    private Query parseSelectUnionExtension(Query command, int start,
+            boolean unionOnly) {
+        while (true) {
+            if (readIf("UNION")) {
+                SelectUnion union = new SelectUnion(session, command);
+                if (readIf("ALL")) {
+                    union.setUnionType(SelectUnion.UnionType.UNION_ALL);
+                } else {
+                    readIf("DISTINCT");
+                    union.setUnionType(SelectUnion.UnionType.UNION);
+                }
+                union.setRight(parseSelectSub());
+                command = union;
+            } else if (readIf("MINUS") || readIf("EXCEPT")) {
+                SelectUnion union = new SelectUnion(session, command);
+                union.setUnionType(SelectUnion.UnionType.EXCEPT);
+                union.setRight(parseSelectSub());
+                command = union;
+            } else if (readIf("INTERSECT")) {
+                SelectUnion union = new SelectUnion(session, command);
+                union.setUnionType(SelectUnion.UnionType.INTERSECT);
+                union.setRight(parseSelectSub());
+                command = union;
+            } else {
+                break;
+            }
+        }
+        if (!unionOnly) {
+            parseEndOfQuery(command);
+        }
+        setSQL(command, null, start);
+        return command;
+    }
+
+    private void parseEndOfQuery(Query
command) {
+        if (readIf("ORDER")) {
+            read("BY");
+            Select oldSelect = currentSelect;
+            if (command instanceof Select) {
+                currentSelect = (Select) command;
+            }
+            ArrayList<SelectOrderBy> orderList = New.arrayList();
+            do {
+                boolean canBeNumber = true;
+                if (readIf("=")) {
+                    canBeNumber = false;
+                }
+                SelectOrderBy order = new SelectOrderBy();
+                Expression expr = readExpression();
+                if (canBeNumber && expr instanceof ValueExpression &&
+                        expr.getType() == Value.INT) {
+                    order.columnIndexExpr = expr;
+                } else if (expr instanceof Parameter) {
+                    recompileAlways = true;
+                    order.columnIndexExpr = expr;
+                } else {
+                    order.expression = expr;
+                }
+                if (readIf("DESC")) {
+                    order.descending = true;
+                } else {
+                    readIf("ASC");
+                }
+                if (readIf("NULLS")) {
+                    if (readIf("FIRST")) {
+                        order.nullsFirst = true;
+                    } else {
+                        read("LAST");
+                        order.nullsLast = true;
+                    }
+                }
+                orderList.add(order);
+            } while (readIf(","));
+            command.setOrder(orderList);
+            currentSelect = oldSelect;
+        }
+        // make sure aggregate functions will not work here
+        Select temp = currentSelect;
+        currentSelect = null;
+        // http://sqlpro.developpez.com/SQL2008/
+        if (readIf("OFFSET")) {
+            command.setOffset(readExpression().optimize(session));
+            if (!readIf("ROW")) {
+                readIf("ROWS");
+            }
+        }
+        if (readIf("FETCH")) {
+            if (!readIf("FIRST")) {
+                read("NEXT");
+            }
+            if (readIf("ROW")) {
+                command.setLimit(ValueExpression.get(ValueInt.get(1)));
+            } else {
+                Expression limit = readExpression().optimize(session);
+                command.setLimit(limit);
+                if (!readIf("ROW")) {
+                    read("ROWS");
+                }
+            }
+            read("ONLY");
+        }
+        currentSelect = temp;
+        if (readIf("LIMIT")) {
+            temp = currentSelect;
+            // make sure aggregate functions will not work here
+            currentSelect = null;
+            Expression limit = readExpression().optimize(session);
+            command.setLimit(limit);
+            if (readIf("OFFSET")) {
+                Expression offset = readExpression().optimize(session);
+                command.setOffset(offset);
+            } else if (readIf(",")) {
+                // MySQL: [offset, ] rowcount
+                Expression offset =
limit; + limit = readExpression().optimize(session); + command.setOffset(offset); + command.setLimit(limit); + } + if (readIf("SAMPLE_SIZE")) { + Expression sampleSize = readExpression().optimize(session); + command.setSampleSize(sampleSize); + } + currentSelect = temp; + } + if (readIf("FOR")) { + if (readIf("UPDATE")) { + if (readIf("OF")) { + do { + readIdentifierWithSchema(); + } while (readIf(",")); + } else if (readIf("NOWAIT")) { + // TODO parser: select for update nowait: should not wait + } + command.setForUpdate(true); + } else if (readIf("READ") || readIf("FETCH")) { + read("ONLY"); + } + } + if (database.getMode().isolationLevelInSelectOrInsertStatement) { + parseIsolationClause(); + } + } + + /** + * DB2 isolation clause + */ + private void parseIsolationClause() { + if (readIf("WITH")) { + if (readIf("RR") || readIf("RS")) { + // concurrent-access-resolution clause + if (readIf("USE")) { + read("AND"); + read("KEEP"); + if (readIf("SHARE") || readIf("UPDATE") || + readIf("EXCLUSIVE")) { + // ignore + } + read("LOCKS"); + } + } else if (readIf("CS") || readIf("UR")) { + // ignore + } + } + } + + private Query parseSelectSub() { + if (readIf("(")) { + Query command = parseSelectUnion(); + read(")"); + return command; + } + if (readIf("WITH")) { + Query query = null; + try { + query = (Query) parseWith(); + } catch (ClassCastException e) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, + "WITH statement supports only SELECT (query) in this context"); + } + // recursive can not be lazy + query.setNeverLazy(true); + return query; + } + return parseSelectSimple(); + } + + private void parseSelectSimpleFromPart(Select command) { + do { + TableFilter filter = readTableFilter(); + parseJoinTableFilter(filter, command); + } while (readIf(",")); + + // Parser can reorder joined table filters, need to explicitly sort them + // to get the order as it was in the original query. 
+ if (session.isForceJoinOrder()) { + sortTableFilters(command.getTopFilters()); + } + } + + private static void sortTableFilters(ArrayList filters) { + if (filters.size() < 2) { + return; + } + // Most probably we are already sorted correctly. + boolean sorted = true; + TableFilter prev = filters.get(0); + for (int i = 1; i < filters.size(); i++) { + TableFilter next = filters.get(i); + if (compareTableFilters(prev, next) > 0) { + sorted = false; + break; + } + prev = next; + } + // If not, then sort manually. + if (!sorted) { + Collections.sort(filters, TABLE_FILTER_COMPARATOR); + } + } + + /** + * Find out which of the table filters appears first in the "from" clause. + * + * @param o1 the first table filter + * @param o2 the second table filter + * @return -1 if o1 appears first, and 1 if o2 appears first + */ + static int compareTableFilters(TableFilter o1, TableFilter o2) { + assert o1.getOrderInFrom() != o2.getOrderInFrom(); + return o1.getOrderInFrom() > o2.getOrderInFrom() ? 1 : -1; + } + + private void parseJoinTableFilter(TableFilter top, final Select command) { + top = readJoin(top); + command.addTableFilter(top, true); + boolean isOuter = false; + while (true) { + TableFilter n = top.getNestedJoin(); + if (n != null) { + n.visit(new TableFilterVisitor() { + @Override + public void accept(TableFilter f) { + command.addTableFilter(f, false); + } + }); + } + TableFilter join = top.getJoin(); + if (join == null) { + break; + } + isOuter = isOuter | join.isJoinOuter(); + if (isOuter) { + command.addTableFilter(join, false); + } else { + // make flat so the optimizer can work better + Expression on = join.getJoinCondition(); + if (on != null) { + command.addCondition(on); + } + join.removeJoinCondition(); + top.removeJoin(); + command.addTableFilter(join, true); + } + top = join; + } + } + + private void parseSelectSimpleSelectPart(Select command) { + Select temp = currentSelect; + // make sure aggregate functions will not work in TOP and LIMIT + 
currentSelect = null; + if (readIf("TOP")) { + // can't read more complex expressions here because + // SELECT TOP 1 +? A FROM TEST could mean + // SELECT TOP (1+?) A FROM TEST or + // SELECT TOP 1 (+?) AS A FROM TEST + Expression limit = readTerm().optimize(session); + command.setLimit(limit); + } else if (readIf("LIMIT")) { + Expression offset = readTerm().optimize(session); + command.setOffset(offset); + Expression limit = readTerm().optimize(session); + command.setLimit(limit); + } + currentSelect = temp; + if (readIf("DISTINCT")) { + command.setDistinct(true); + } else { + readIf("ALL"); + } + ArrayList expressions = New.arrayList(); + do { + if (readIf("*")) { + expressions.add(new Wildcard(null, null)); + } else { + Expression expr = readExpression(); + if (readIf("AS") || currentTokenType == IDENTIFIER) { + String alias = readAliasIdentifier(); + boolean aliasColumnName = database.getSettings().aliasColumnName; + aliasColumnName |= database.getMode().aliasColumnName; + expr = new Alias(expr, alias, aliasColumnName); + } + expressions.add(expr); + } + } while (readIf(",")); + command.setExpressions(expressions); + } + + private Select parseSelectSimple() { + boolean fromFirst; + if (readIf("SELECT")) { + fromFirst = false; + } else if (readIf("FROM")) { + fromFirst = true; + } else { + throw getSyntaxError(); + } + Select command = new Select(session); + int start = lastParseIndex; + Select oldSelect = currentSelect; + currentSelect = command; + currentPrepared = command; + if (fromFirst) { + parseSelectSimpleFromPart(command); + read("SELECT"); + parseSelectSimpleSelectPart(command); + } else { + parseSelectSimpleSelectPart(command); + if (!readIf("FROM")) { + // select without FROM: convert to SELECT ... 
FROM + // SYSTEM_RANGE(1,1) + Table dual = getDualTable(false); + TableFilter filter = new TableFilter(session, dual, null, + rightsChecked, currentSelect, 0, + null); + command.addTableFilter(filter, true); + } else { + parseSelectSimpleFromPart(command); + } + } + if (readIf("WHERE")) { + Expression condition = readExpression(); + command.addCondition(condition); + } + // the group by is read for the outer select (or not a select) + // so that columns that are not grouped can be used + currentSelect = oldSelect; + if (readIf("GROUP")) { + read("BY"); + command.setGroupQuery(); + ArrayList list = New.arrayList(); + do { + Expression expr = readExpression(); + list.add(expr); + } while (readIf(",")); + command.setGroupBy(list); + } + currentSelect = command; + if (readIf("HAVING")) { + command.setGroupQuery(); + Expression condition = readExpression(); + command.setHaving(condition); + } + command.setParameterList(parameters); + currentSelect = oldSelect; + setSQL(command, "SELECT", start); + return command; + } + + private Table getDualTable(boolean noColumns) { + Schema main = database.findSchema(Constants.SCHEMA_MAIN); + Expression one = ValueExpression.get(ValueLong.get(1)); + return new RangeTable(main, one, one, noColumns); + } + + private void setSQL(Prepared command, String start, int startIndex) { + String sql = originalSQL.substring(startIndex, lastParseIndex).trim(); + if (start != null) { + sql = start + " " + sql; + } + command.setSQL(sql); + } + + private Expression readExpression() { + Expression r = readAnd(); + while (readIf("OR")) { + r = new ConditionAndOr(ConditionAndOr.OR, r, readAnd()); + } + return r; + } + + private Expression readAnd() { + Expression r = readCondition(); + while (readIf("AND")) { + r = new ConditionAndOr(ConditionAndOr.AND, r, readCondition()); + } + return r; + } + + private Expression readCondition() { + if (readIf("NOT")) { + return new ConditionNot(readCondition()); + } + if (readIf("EXISTS")) { + read("("); + Query 
query = parseSelect(); + // can not reduce expression because it might be a union except + // query with distinct + read(")"); + return new ConditionExists(query); + } + if (readIf("INTERSECTS")) { + read("("); + Expression r1 = readConcat(); + read(","); + Expression r2 = readConcat(); + read(")"); + return new Comparison(session, Comparison.SPATIAL_INTERSECTS, r1, + r2); + } + Expression r = readConcat(); + while (true) { + // special case: NOT NULL is not part of an expression (as in CREATE + // TABLE TEST(ID INT DEFAULT 0 NOT NULL)) + int backup = parseIndex; + boolean not = false; + if (readIf("NOT")) { + not = true; + if (isToken("NULL")) { + // this really only works for NOT NULL! + parseIndex = backup; + currentToken = "NOT"; + break; + } + } + if (readIf("LIKE")) { + Expression b = readConcat(); + Expression esc = null; + if (readIf("ESCAPE")) { + esc = readConcat(); + } + recompileAlways = true; + r = new CompareLike(database, r, b, esc, false); + } else if (readIf("ILIKE")) { + Function function = Function.getFunction(database, "CAST"); + function.setDataType(new Column("X", Value.STRING_IGNORECASE)); + function.setParameter(0, r); + r = function; + Expression b = readConcat(); + Expression esc = null; + if (readIf("ESCAPE")) { + esc = readConcat(); + } + recompileAlways = true; + r = new CompareLike(database, r, b, esc, false); + } else if (readIf("REGEXP")) { + Expression b = readConcat(); + recompileAlways = true; + r = new CompareLike(database, r, b, null, true); + } else if (readIf("IS")) { + if (readIf("NOT")) { + if (readIf("NULL")) { + r = new Comparison(session, Comparison.IS_NOT_NULL, r, + null); + } else if (readIf("DISTINCT")) { + read("FROM"); + r = new Comparison(session, Comparison.EQUAL_NULL_SAFE, + r, readConcat()); + } else { + r = new Comparison(session, + Comparison.NOT_EQUAL_NULL_SAFE, r, readConcat()); + } + } else if (readIf("NULL")) { + r = new Comparison(session, Comparison.IS_NULL, r, null); + } else if (readIf("DISTINCT")) { + 
read("FROM"); + r = new Comparison(session, Comparison.NOT_EQUAL_NULL_SAFE, + r, readConcat()); + } else { + r = new Comparison(session, Comparison.EQUAL_NULL_SAFE, r, + readConcat()); + } + } else if (readIf("IN")) { + read("("); + if (readIf(")")) { + if (database.getMode().prohibitEmptyInPredicate) { + throw getSyntaxError(); + } + r = ValueExpression.get(ValueBoolean.FALSE); + } else { + if (isSelect()) { + Query query = parseSelect(); + // can not be lazy because we have to call + // method ResultInterface.containsDistinct + // which is not supported for lazy execution + query.setNeverLazy(true); + r = new ConditionInSelect(database, r, query, false, + Comparison.EQUAL); + } else { + ArrayList v = New.arrayList(); + Expression last; + do { + last = readExpression(); + v.add(last); + } while (readIf(",")); + if (v.size() == 1 && (last instanceof Subquery)) { + Subquery s = (Subquery) last; + Query q = s.getQuery(); + r = new ConditionInSelect(database, r, q, false, + Comparison.EQUAL); + } else { + r = new ConditionIn(database, r, v); + } + } + read(")"); + } + } else if (readIf("BETWEEN")) { + Expression low = readConcat(); + read("AND"); + Expression high = readConcat(); + Expression condLow = new Comparison(session, + Comparison.SMALLER_EQUAL, low, r); + Expression condHigh = new Comparison(session, + Comparison.BIGGER_EQUAL, high, r); + r = new ConditionAndOr(ConditionAndOr.AND, condLow, condHigh); + } else { + int compareType = getCompareType(currentTokenType); + if (compareType < 0) { + break; + } + read(); + if (readIf("ALL")) { + read("("); + Query query = parseSelect(); + r = new ConditionInSelect(database, r, query, true, + compareType); + read(")"); + } else if (readIf("ANY") || readIf("SOME")) { + read("("); + if (currentTokenType == PARAMETER && compareType == 0) { + Parameter p = readParameter(); + r = new ConditionInParameter(database, r, p); + } else { + Query query = parseSelect(); + r = new ConditionInSelect(database, r, query, false, + 
compareType); + } + read(")"); + } else { + Expression right = readConcat(); + if (SysProperties.OLD_STYLE_OUTER_JOIN && + readIf("(") && readIf("+") && readIf(")")) { + // support for a subset of old-fashioned Oracle outer + // join with (+) + if (r instanceof ExpressionColumn && + right instanceof ExpressionColumn) { + ExpressionColumn leftCol = (ExpressionColumn) r; + ExpressionColumn rightCol = (ExpressionColumn) right; + ArrayList filters = currentSelect + .getTopFilters(); + for (TableFilter f : filters) { + while (f != null) { + leftCol.mapColumns(f, 0); + rightCol.mapColumns(f, 0); + f = f.getJoin(); + } + } + TableFilter leftFilter = leftCol.getTableFilter(); + TableFilter rightFilter = rightCol.getTableFilter(); + r = new Comparison(session, compareType, r, right); + if (leftFilter != null && rightFilter != null) { + int idx = filters.indexOf(rightFilter); + if (idx >= 0) { + filters.remove(idx); + leftFilter.addJoin(rightFilter, true, r); + } else { + rightFilter.mapAndAddFilter(r); + } + r = ValueExpression.get(ValueBoolean.TRUE); + } + } + } else { + r = new Comparison(session, compareType, r, right); + } + } + } + if (not) { + r = new ConditionNot(r); + } + } + return r; + } + + private Expression readConcat() { + Expression r = readSum(); + while (true) { + if (readIf("||")) { + r = new Operation(OpType.CONCAT, r, readSum()); + } else if (readIf("~")) { + if (readIf("*")) { + Function function = Function.getFunction(database, "CAST"); + function.setDataType(new Column("X", + Value.STRING_IGNORECASE)); + function.setParameter(0, r); + r = function; + } + r = new CompareLike(database, r, readSum(), null, true); + } else if (readIf("!~")) { + if (readIf("*")) { + Function function = Function.getFunction(database, "CAST"); + function.setDataType(new Column("X", + Value.STRING_IGNORECASE)); + function.setParameter(0, r); + r = function; + } + r = new ConditionNot(new CompareLike(database, r, readSum(), + null, true)); + } else { + return r; + } + } + } + 
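The expression methods above (readExpression → readAnd → readCondition → readConcat → readSum → readFactor → readTerm) form a classic recursive-descent parser with one method per precedence level: each level parses the next-tighter level, then loops consuming its own operators, which yields both precedence and left-associativity from the call structure alone. A minimal sketch of the same pattern, reduced to integer arithmetic — illustrative only, not H2 code; the class and all names here are invented:

```java
// Illustrative sketch (not H2 code): one method per precedence level,
// mirroring the readSum()/readFactor()/readTerm() chain above.
public class PrecedenceSketch {
    private final String src;
    private int pos;

    private PrecedenceSketch(String src) {
        this.src = src;
    }

    /** Parse and evaluate an arithmetic expression such as "1+2*3". */
    public static long eval(String expr) {
        return new PrecedenceSketch(expr.replace(" ", "")).readSum();
    }

    // Lowest precedence: additive operators (analogue of readSum()).
    private long readSum() {
        long r = readFactor();
        while (true) {
            if (readIf('+')) {
                r = r + readFactor();
            } else if (readIf('-')) {
                r = r - readFactor();
            } else {
                return r;
            }
        }
    }

    // Tighter precedence: multiplicative operators (analogue of readFactor()).
    private long readFactor() {
        long r = readTerm();
        while (true) {
            if (readIf('*')) {
                r = r * readTerm();
            } else if (readIf('/')) {
                r = r / readTerm();
            } else {
                return r;
            }
        }
    }

    // Tightest level: literals and parentheses (analogue of readTerm()).
    private long readTerm() {
        if (readIf('(')) {
            long r = readSum(); // recurse back down to the lowest level
            readIf(')');
            return r;
        }
        int start = pos;
        while (pos < src.length() && Character.isDigit(src.charAt(pos))) {
            pos++;
        }
        return Long.parseLong(src.substring(start, pos));
    }

    // Consume the next character if it matches (analogue of readIf(String)).
    private boolean readIf(char c) {
        if (pos < src.length() && src.charAt(pos) == c) {
            pos++;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(PrecedenceSketch.eval("1+2*3"));   // prints 7
        System.out.println(PrecedenceSketch.eval("(1+2)*3")); // prints 9
    }
}
```

The real parser simply extends the same chain upward: readConcat handles ||, readCondition handles comparisons and LIKE/IN/BETWEEN, readAnd handles AND, and readExpression handles OR.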
+ private Expression readSum() { + Expression r = readFactor(); + while (true) { + if (readIf("+")) { + r = new Operation(OpType.PLUS, r, readFactor()); + } else if (readIf("-")) { + r = new Operation(OpType.MINUS, r, readFactor()); + } else { + return r; + } + } + } + + private Expression readFactor() { + Expression r = readTerm(); + while (true) { + if (readIf("*")) { + r = new Operation(OpType.MULTIPLY, r, readTerm()); + } else if (readIf("/")) { + r = new Operation(OpType.DIVIDE, r, readTerm()); + } else if (readIf("%")) { + r = new Operation(OpType.MODULUS, r, readTerm()); + } else { + return r; + } + } + } + + private Expression readAggregate(AggregateType aggregateType, String aggregateName) { + if (currentSelect == null) { + throw getSyntaxError(); + } + currentSelect.setGroupQuery(); + Aggregate r; + if (aggregateType == AggregateType.COUNT) { + if (readIf("*")) { + r = new Aggregate(AggregateType.COUNT_ALL, null, currentSelect, + false); + } else { + boolean distinct = readIf("DISTINCT"); + Expression on = readExpression(); + if (on instanceof Wildcard && !distinct) { + // PostgreSQL compatibility: count(t.*) + r = new Aggregate(AggregateType.COUNT_ALL, null, currentSelect, + false); + } else { + r = new Aggregate(AggregateType.COUNT, on, currentSelect, + distinct); + } + } + } else if (aggregateType == AggregateType.GROUP_CONCAT) { + boolean distinct = readIf("DISTINCT"); + + if (equalsToken("GROUP_CONCAT", aggregateName)) { + r = new Aggregate(AggregateType.GROUP_CONCAT, + readExpression(), currentSelect, distinct); + if (readIf("ORDER")) { + read("BY"); + r.setGroupConcatOrder(parseSimpleOrderList()); + } + + if (readIf("SEPARATOR")) { + r.setGroupConcatSeparator(readExpression()); + } + } else if (equalsToken("STRING_AGG", aggregateName)) { + // PostgreSQL compatibility: string_agg(expression, delimiter) + r = new Aggregate(AggregateType.GROUP_CONCAT, + readExpression(), currentSelect, distinct); + read(","); + 
r.setGroupConcatSeparator(readExpression());
+                if (readIf("ORDER")) {
+                    read("BY");
+                    r.setGroupConcatOrder(parseSimpleOrderList());
+                }
+            } else {
+                r = null;
+            }
+        } else if (aggregateType == AggregateType.ARRAY_AGG) {
+            boolean distinct = readIf("DISTINCT");
+
+            r = new Aggregate(AggregateType.ARRAY_AGG,
+                    readExpression(), currentSelect, distinct);
+            if (readIf("ORDER")) {
+                read("BY");
+                r.setArrayAggOrder(parseSimpleOrderList());
+            }
+        } else {
+            boolean distinct = readIf("DISTINCT");
+            r = new Aggregate(aggregateType, readExpression(), currentSelect,
+                    distinct);
+        }
+        read(")");
+        if (r != null && readIf("FILTER")) {
+            read("(");
+            read("WHERE");
+            Expression condition = readExpression();
+            read(")");
+            r.setFilterCondition(condition);
+        }
+        return r;
+    }
+
+    private ArrayList<SelectOrderBy> parseSimpleOrderList() {
+        ArrayList<SelectOrderBy> orderList = New.arrayList();
+        do {
+            SelectOrderBy order = new SelectOrderBy();
+            order.expression = readExpression();
+            if (readIf("DESC")) {
+                order.descending = true;
+            } else {
+                readIf("ASC");
+            }
+            orderList.add(order);
+        } while (readIf(","));
+        return orderList;
+    }
+
+    private JavaFunction readJavaFunction(Schema schema, String functionName) {
+        FunctionAlias functionAlias = null;
+        if (schema != null) {
+            functionAlias = schema.findFunction(functionName);
+        } else {
+            functionAlias = findFunctionAlias(session.getCurrentSchemaName(),
+                    functionName);
+        }
+        if (functionAlias == null) {
+            throw DbException.get(ErrorCode.FUNCTION_NOT_FOUND_1, functionName);
+        }
+        Expression[] args;
+        ArrayList<Expression> argList = New.arrayList();
+        int numArgs = 0;
+        while (!readIf(")")) {
+            if (numArgs++ > 0) {
+                read(",");
+            }
+            argList.add(readExpression());
+        }
+        args = argList.toArray(new Expression[0]);
+        return new JavaFunction(functionAlias, args);
+    }
+
+    private JavaAggregate readJavaAggregate(UserAggregate aggregate) {
+        ArrayList<Expression> params = New.arrayList();
+        do {
+            params.add(readExpression());
+        } while (readIfMore(true));
+        Expression filterCondition;
+
if (readIf("FILTER")) { + read("("); + read("WHERE"); + filterCondition = readExpression(); + read(")"); + } else { + filterCondition = null; + } + Expression[] list = params.toArray(new Expression[0]); + JavaAggregate agg = new JavaAggregate(aggregate, list, currentSelect, filterCondition); + currentSelect.setGroupQuery(); + return agg; + } + + private AggregateType getAggregateType(String name) { + if (!identifiersToUpper) { + // if not yet converted to uppercase, do it now + name = StringUtils.toUpperEnglish(name); + } + return Aggregate.getAggregateType(name); + } + + private Expression readFunction(Schema schema, String name) { + if (schema != null) { + return readJavaFunction(schema, name); + } + AggregateType agg = getAggregateType(name); + if (agg != null) { + return readAggregate(agg, name); + } + Function function = Function.getFunction(database, name); + if (function == null) { + UserAggregate aggregate = database.findAggregate(name); + if (aggregate != null) { + return readJavaAggregate(aggregate); + } + return readJavaFunction(null, name); + } + switch (function.getFunctionType()) { + case Function.CAST: { + function.setParameter(0, readExpression()); + read("AS"); + Column type = parseColumnWithType(null); + function.setDataType(type); + read(")"); + break; + } + case Function.CONVERT: { + if (database.getMode().swapConvertFunctionParameters) { + Column type = parseColumnWithType(null); + function.setDataType(type); + read(","); + function.setParameter(0, readExpression()); + read(")"); + } else { + function.setParameter(0, readExpression()); + read(","); + Column type = parseColumnWithType(null); + function.setDataType(type); + read(")"); + } + break; + } + case Function.EXTRACT: { + function.setParameter(0, + ValueExpression.get(ValueString.get(currentToken))); + read(); + read("FROM"); + function.setParameter(1, readExpression()); + read(")"); + break; + } + case Function.DATE_ADD: + case Function.DATE_DIFF: { + if 
(DateTimeFunctions.isDatePart(currentToken)) { + function.setParameter(0, + ValueExpression.get(ValueString.get(currentToken))); + read(); + } else { + function.setParameter(0, readExpression()); + } + read(","); + function.setParameter(1, readExpression()); + read(","); + function.setParameter(2, readExpression()); + read(")"); + break; + } + case Function.SUBSTRING: { + // Different variants include: + // SUBSTRING(X,1) + // SUBSTRING(X,1,1) + // SUBSTRING(X FROM 1 FOR 1) -- Postgres + // SUBSTRING(X FROM 1) -- Postgres + // SUBSTRING(X FOR 1) -- Postgres + function.setParameter(0, readExpression()); + if (readIf("FROM")) { + function.setParameter(1, readExpression()); + if (readIf("FOR")) { + function.setParameter(2, readExpression()); + } + } else if (readIf("FOR")) { + function.setParameter(1, ValueExpression.get(ValueInt.get(0))); + function.setParameter(2, readExpression()); + } else { + read(","); + function.setParameter(1, readExpression()); + if (readIf(",")) { + function.setParameter(2, readExpression()); + } + } + read(")"); + break; + } + case Function.POSITION: { + // can't read expression because IN would be read too early + function.setParameter(0, readConcat()); + if (!readIf(",")) { + read("IN"); + } + function.setParameter(1, readExpression()); + read(")"); + break; + } + case Function.TRIM: { + Expression space = null; + if (readIf("LEADING")) { + function = Function.getFunction(database, "LTRIM"); + if (!readIf("FROM")) { + space = readExpression(); + read("FROM"); + } + } else if (readIf("TRAILING")) { + function = Function.getFunction(database, "RTRIM"); + if (!readIf("FROM")) { + space = readExpression(); + read("FROM"); + } + } else if (readIf("BOTH")) { + if (!readIf("FROM")) { + space = readExpression(); + read("FROM"); + } + } + Expression p0 = readExpression(); + if (readIf(",")) { + space = readExpression(); + } else if (readIf("FROM")) { + space = p0; + p0 = readExpression(); + } + function.setParameter(0, p0); + if (space != null) { 
+ function.setParameter(1, space); + } + read(")"); + break; + } + case Function.TABLE: + case Function.TABLE_DISTINCT: { + int i = 0; + ArrayList columns = New.arrayList(); + do { + String columnName = readAliasIdentifier(); + Column column = parseColumnWithType(columnName); + columns.add(column); + read("="); + function.setParameter(i, readExpression()); + i++; + } while (readIfMore(true)); + TableFunction tf = (TableFunction) function; + tf.setColumns(columns); + break; + } + case Function.ROW_NUMBER: + read(")"); + read("OVER"); + read("("); + read(")"); + if (currentSelect == null && currentPrepared == null) { + throw getSyntaxError(); + } + return new Rownum(currentSelect == null ? currentPrepared + : currentSelect); + default: + if (!readIf(")")) { + int i = 0; + do { + function.setParameter(i++, readExpression()); + } while (readIfMore(true)); + } + } + function.doneWithParameters(); + return function; + } + + private Expression readFunctionWithoutParameters(String name) { + if (readIf("(")) { + read(")"); + } + if (database.isAllowBuiltinAliasOverride()) { + FunctionAlias functionAlias = database.getSchema(session.getCurrentSchemaName()).findFunction(name); + if (functionAlias != null) { + return new JavaFunction(functionAlias, new Expression[0]); + } + } + Function function = Function.getFunction(database, name); + function.doneWithParameters(); + return function; + } + + private Expression readWildcardOrSequenceValue(String schema, + String objectName) { + if (readIf("*")) { + return new Wildcard(schema, objectName); + } + if (schema == null) { + schema = session.getCurrentSchemaName(); + } + if (readIf("NEXTVAL")) { + Sequence sequence = findSequence(schema, objectName); + if (sequence != null) { + return new SequenceValue(sequence); + } + } else if (readIf("CURRVAL")) { + Sequence sequence = findSequence(schema, objectName); + if (sequence != null) { + Function function = Function.getFunction(database, "CURRVAL"); + function.setParameter(0, 
ValueExpression.get(ValueString + .get(sequence.getSchema().getName()))); + function.setParameter(1, ValueExpression.get(ValueString + .get(sequence.getName()))); + function.doneWithParameters(); + return function; + } + } + return null; + } + + private Expression readTermObjectDot(String objectName) { + Expression expr = readWildcardOrSequenceValue(null, objectName); + if (expr != null) { + return expr; + } + String name = readColumnIdentifier(); + Schema s = database.findSchema(objectName); + if ((!SysProperties.OLD_STYLE_OUTER_JOIN || s != null) && readIf("(")) { + // only if the token before the dot is a valid schema name, + // otherwise the old style Oracle outer join doesn't work: + // t.x = t2.x(+) + // this additional check is not required + // if the old style outer joins are not supported + return readFunction(s, name); + } else if (readIf(".")) { + String schema = objectName; + objectName = name; + expr = readWildcardOrSequenceValue(schema, objectName); + if (expr != null) { + return expr; + } + name = readColumnIdentifier(); + if (readIf("(")) { + String databaseName = schema; + if (!equalsToken(database.getShortName(), databaseName)) { + throw DbException.get(ErrorCode.DATABASE_NOT_FOUND_1, + databaseName); + } + schema = objectName; + return readFunction(database.getSchema(schema), name); + } else if (readIf(".")) { + String databaseName = schema; + if (!equalsToken(database.getShortName(), databaseName)) { + throw DbException.get(ErrorCode.DATABASE_NOT_FOUND_1, + databaseName); + } + schema = objectName; + objectName = name; + expr = readWildcardOrSequenceValue(schema, objectName); + if (expr != null) { + return expr; + } + name = readColumnIdentifier(); + return new ExpressionColumn(database, schema, objectName, name); + } + return new ExpressionColumn(database, schema, objectName, name); + } + return new ExpressionColumn(database, null, objectName, name); + } + + private Parameter readParameter() { + // there must be no space between ? 
and the number + boolean indexed = Character.isDigit(sqlCommandChars[parseIndex]); + + Parameter p; + if (indexed) { + readParameterIndex(); + if (indexedParameterList == null) { + if (parameters == null) { + // this can occur when parsing expressions only (for + // example check constraints) + throw getSyntaxError(); + } else if (!parameters.isEmpty()) { + throw DbException + .get(ErrorCode.CANNOT_MIX_INDEXED_AND_UNINDEXED_PARAMS); + } + indexedParameterList = New.arrayList(); + } + int index = currentValue.getInt() - 1; + if (index < 0 || index >= Constants.MAX_PARAMETER_INDEX) { + throw DbException.getInvalidValueException( + "parameter index", index); + } + if (indexedParameterList.size() <= index) { + indexedParameterList.ensureCapacity(index + 1); + while (indexedParameterList.size() <= index) { + indexedParameterList.add(null); + } + } + p = indexedParameterList.get(index); + if (p == null) { + p = new Parameter(index); + indexedParameterList.set(index, p); + } + read(); + } else { + read(); + if (indexedParameterList != null) { + throw DbException + .get(ErrorCode.CANNOT_MIX_INDEXED_AND_UNINDEXED_PARAMS); + } + p = new Parameter(parameters.size()); + } + parameters.add(p); + return p; + } + + private Expression readTerm() { + Expression r; + switch (currentTokenType) { + case AT: + read(); + r = new Variable(session, readAliasIdentifier()); + if (readIf(":=")) { + Expression value = readExpression(); + Function function = Function.getFunction(database, "SET"); + function.setParameter(0, r); + function.setParameter(1, value); + r = function; + } + break; + case PARAMETER: + r = readParameter(); + break; + case KEYWORD: + if (isToken("SELECT") || isToken("FROM") || isToken("WITH")) { + Query query = parseSelect(); + r = new Subquery(query); + } else { + throw getSyntaxError(); + } + break; + case IDENTIFIER: + String name = currentToken; + if (currentTokenQuoted) { + read(); + if (readIf("(")) { + r = readFunction(null, name); + } else if (readIf(".")) { + r 
= readTermObjectDot(name); + } else { + r = new ExpressionColumn(database, null, null, name); + } + } else { + read(); + if (readIf(".")) { + r = readTermObjectDot(name); + } else if (equalsToken("CASE", name)) { + // CASE must be processed before (, + // otherwise CASE(3) would be a function call, which it is + // not + r = readCase(); + } else if (readIf("(")) { + r = readFunction(null, name); + } else if (equalsToken("CURRENT_USER", name)) { + r = readFunctionWithoutParameters("USER"); + } else if (equalsToken("CURRENT_TIMESTAMP", name)) { + r = readFunctionWithoutParameters("CURRENT_TIMESTAMP"); + } else if (equalsToken("SYSDATE", name)) { + r = readFunctionWithoutParameters("CURRENT_TIMESTAMP"); + } else if (equalsToken("SYSTIMESTAMP", name)) { + r = readFunctionWithoutParameters("CURRENT_TIMESTAMP"); + } else if (equalsToken("CURRENT_DATE", name)) { + r = readFunctionWithoutParameters("CURRENT_DATE"); + } else if (equalsToken("TODAY", name)) { + r = readFunctionWithoutParameters("CURRENT_DATE"); + } else if (equalsToken("CURRENT_TIME", name)) { + r = readFunctionWithoutParameters("CURRENT_TIME"); + } else if (equalsToken("SYSTIME", name)) { + r = readFunctionWithoutParameters("CURRENT_TIME"); + } else if (equalsToken("CURRENT", name)) { + if (readIf("TIMESTAMP")) { + r = readFunctionWithoutParameters("CURRENT_TIMESTAMP"); + } else if (readIf("TIME")) { + r = readFunctionWithoutParameters("CURRENT_TIME"); + } else if (readIf("DATE")) { + r = readFunctionWithoutParameters("CURRENT_DATE"); + } else { + r = new ExpressionColumn(database, null, null, name); + } + } else if (equalsToken("NEXT", name) && readIf("VALUE")) { + read("FOR"); + Sequence sequence = readSequence(); + r = new SequenceValue(sequence); + } else if (equalsToken("TIME", name)) { + boolean without = readIf("WITHOUT"); + if (without) { + read("TIME"); + read("ZONE"); + } + if (currentTokenType != VALUE + || currentValue.getType() != Value.STRING) { + if (without) { + throw getSyntaxError(); + } + 
r = new ExpressionColumn(database, null, null, name); + } else { + String time = currentValue.getString(); + read(); + r = ValueExpression.get(ValueTime.parse(time)); + } + } else if (equalsToken("TIMESTAMP", name)) { + if (readIf("WITH")) { + read("TIME"); + read("ZONE"); + if (currentTokenType != VALUE + || currentValue.getType() != Value.STRING) { + throw getSyntaxError(); + } + String timestamp = currentValue.getString(); + read(); + r = ValueExpression.get(ValueTimestampTimeZone.parse(timestamp)); + } else { + boolean without = readIf("WITHOUT"); + if (without) { + read("TIME"); + read("ZONE"); + } + if (currentTokenType != VALUE + || currentValue.getType() != Value.STRING) { + if (without) { + throw getSyntaxError(); + } + r = new ExpressionColumn(database, null, null, name); + } else { + String timestamp = currentValue.getString(); + read(); + r = ValueExpression.get(ValueTimestamp.parse(timestamp, database.getMode())); + } + } + } else if (currentTokenType == VALUE && + currentValue.getType() == Value.STRING) { + if (equalsToken("DATE", name) || + equalsToken("D", name)) { + String date = currentValue.getString(); + read(); + r = ValueExpression.get(ValueDate.parse(date)); + } else if (equalsToken("T", name)) { + String time = currentValue.getString(); + read(); + r = ValueExpression.get(ValueTime.parse(time)); + } else if (equalsToken("TS", name)) { + String timestamp = currentValue.getString(); + read(); + r = ValueExpression + .get(ValueTimestamp.parse(timestamp, database.getMode())); + } else if (equalsToken("X", name)) { + read(); + byte[] buffer = StringUtils + .convertHexToBytes(currentValue.getString()); + r = ValueExpression.get(ValueBytes.getNoCopy(buffer)); + } else if (equalsToken("E", name)) { + String text = currentValue.getString(); + // the PostgreSQL ODBC driver uses + // LIKE E'PROJECT\\_DATA' instead of LIKE + // 'PROJECT\_DATA' + // N: SQL-92 "National Language" strings + text = StringUtils.replaceAll(text, "\\\\", "\\"); + read(); + r = 
ValueExpression.get(ValueString.get(text)); + } else if (equalsToken("N", name)) { + // SQL-92 "National Language" strings + String text = currentValue.getString(); + read(); + r = ValueExpression.get(ValueString.get(text)); + } else { + r = new ExpressionColumn(database, null, null, name); + } + } else { + r = new ExpressionColumn(database, null, null, name); + } + } + break; + case MINUS: + read(); + if (currentTokenType == VALUE) { + r = ValueExpression.get(currentValue.negate()); + if (r.getType() == Value.LONG && + r.getValue(session).getLong() == Integer.MIN_VALUE) { + // convert Integer.MIN_VALUE to type 'int' + // (Integer.MAX_VALUE+1 is of type 'long') + r = ValueExpression.get(ValueInt.get(Integer.MIN_VALUE)); + } else if (r.getType() == Value.DECIMAL && + r.getValue(session).getBigDecimal() + .compareTo(ValueLong.MIN_BD) == 0) { + // convert Long.MIN_VALUE to type 'long' + // (Long.MAX_VALUE+1 is of type 'decimal') + r = ValueExpression.get(ValueLong.MIN); + } + read(); + } else { + r = new Operation(OpType.NEGATE, readTerm(), null); + } + break; + case PLUS: + read(); + r = readTerm(); + break; + case OPEN: + read(); + if (readIf(")")) { + r = new ExpressionList(new Expression[0]); + } else { + r = readExpression(); + if (readIf(",")) { + ArrayList list = New.arrayList(); + list.add(r); + while (!readIf(")")) { + r = readExpression(); + list.add(r); + if (!readIf(",")) { + read(")"); + break; + } + } + r = new ExpressionList(list.toArray(new Expression[0])); + } else { + read(")"); + } + } + break; + case TRUE: + read(); + r = ValueExpression.get(ValueBoolean.TRUE); + break; + case FALSE: + read(); + r = ValueExpression.get(ValueBoolean.FALSE); + break; + case ROWNUM: + read(); + if (readIf("(")) { + read(")"); + } + if (currentSelect == null && currentPrepared == null) { + throw getSyntaxError(); + } + r = new Rownum(currentSelect == null ? 
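The MINUS branch above negates a numeric literal immediately and then demotes the result: `2147483648` can only be scanned as a long, and `9223372036854775808` only as a decimal, but after negation both values fit the narrower type again. A minimal sketch of that demotion, using plain Java boxed types in place of the parser's `Value` classes (class and method names here are illustrative, not the parser's API):

```java
// Sketch (assumption: simplified stand-in for the parser's MINUS fix-up,
// with BigDecimal standing in for ValueDecimal). After negating, a value
// equal to Integer.MIN_VALUE is demoted back to int, and one equal to
// Long.MIN_VALUE back to long, mirroring the two special cases above.
import java.math.BigDecimal;

public class NegateDemo {
    static final BigDecimal INT_MIN_BD = BigDecimal.valueOf(Integer.MIN_VALUE);
    static final BigDecimal LONG_MIN_BD = BigDecimal.valueOf(Long.MIN_VALUE);

    /** Parse a non-negative literal, negate it, then demote if it now fits. */
    static Object negateLiteral(String digits) {
        BigDecimal negated = new BigDecimal(digits).negate();
        if (negated.compareTo(INT_MIN_BD) == 0) {
            return Integer.MIN_VALUE;   // int again (MAX_VALUE+1 was a long)
        }
        if (negated.compareTo(LONG_MIN_BD) == 0) {
            return Long.MIN_VALUE;      // long again (was only decimal-sized)
        }
        return negated;
    }
}
```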
currentPrepared + : currentSelect); + break; + case NULL: + read(); + r = ValueExpression.getNull(); + break; + case VALUE: + r = ValueExpression.get(currentValue); + read(); + break; + default: + throw getSyntaxError(); + } + if (readIf("[")) { + Function function = Function.getFunction(database, "ARRAY_GET"); + function.setParameter(0, r); + r = readExpression(); + r = new Operation(OpType.PLUS, r, ValueExpression.get(ValueInt + .get(1))); + function.setParameter(1, r); + r = function; + read("]"); + } + if (readIf("::")) { + // PostgreSQL compatibility + if (isToken("PG_CATALOG")) { + read("PG_CATALOG"); + read("."); + } + if (readIf("REGCLASS")) { + FunctionAlias f = findFunctionAlias(Constants.SCHEMA_MAIN, + "PG_GET_OID"); + if (f == null) { + throw getSyntaxError(); + } + Expression[] args = { r }; + r = new JavaFunction(f, args); + } else { + Column col = parseColumnWithType(null); + Function function = Function.getFunction(database, "CAST"); + function.setDataType(col); + function.setParameter(0, r); + r = function; + } + } + return r; + } + + private Expression readCase() { + if (readIf("END")) { + readIf("CASE"); + return ValueExpression.getNull(); + } + if (readIf("ELSE")) { + Expression elsePart = readExpression().optimize(session); + read("END"); + readIf("CASE"); + return elsePart; + } + int i; + Function function; + if (readIf("WHEN")) { + function = Function.getFunction(database, "CASE"); + function.setParameter(0, null); + i = 1; + do { + function.setParameter(i++, readExpression()); + read("THEN"); + function.setParameter(i++, readExpression()); + } while (readIf("WHEN")); + } else { + Expression expr = readExpression(); + if (readIf("END")) { + readIf("CASE"); + return ValueExpression.getNull(); + } + if (readIf("ELSE")) { + Expression elsePart = readExpression().optimize(session); + read("END"); + readIf("CASE"); + return elsePart; + } + function = Function.getFunction(database, "CASE"); + function.setParameter(0, expr); + i = 1; + read("WHEN"); 
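After a term has been read, the `[` branch above rewrites `expr[i]` into `ARRAY_GET(expr, i + 1)`, so the bracket index and the function's index differ by one. A toy illustration of the two conventions using plain Java arrays (the helper names are hypothetical, not the parser's API):

```java
// Sketch (assumption: plain Java arrays standing in for H2 array values,
// and a local helper standing in for the real ARRAY_GET function).
public class ArrayGetDemo {
    /** 1-based lookup, like the ARRAY_GET function the parser targets. */
    static Object arrayGet(Object[] array, int oneBasedIndex) {
        return array[oneBasedIndex - 1];
    }

    /** What the parser builds for expr[i]: ARRAY_GET(expr, i + 1). */
    static Object bracketAccess(Object[] array, int i) {
        return arrayGet(array, i + 1);
    }
}
```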
+ do { + function.setParameter(i++, readExpression()); + read("THEN"); + function.setParameter(i++, readExpression()); + } while (readIf("WHEN")); + } + if (readIf("ELSE")) { + function.setParameter(i, readExpression()); + } + read("END"); + readIf("CASE"); + function.doneWithParameters(); + return function; + } + + private int readPositiveInt() { + int v = readInt(); + if (v < 0) { + throw DbException.getInvalidValueException("positive integer", v); + } + return v; + } + + private int readInt() { + boolean minus = false; + if (currentTokenType == MINUS) { + minus = true; + read(); + } else if (currentTokenType == PLUS) { + read(); + } + if (currentTokenType != VALUE) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, "integer"); + } + if (minus) { + // must do that now, otherwise Integer.MIN_VALUE would not work + currentValue = currentValue.negate(); + } + int i = currentValue.getInt(); + read(); + return i; + } + + private long readLong() { + boolean minus = false; + if (currentTokenType == MINUS) { + minus = true; + read(); + } else if (currentTokenType == PLUS) { + read(); + } + if (currentTokenType != VALUE) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, "long"); + } + if (minus) { + // must do that now, otherwise Long.MIN_VALUE would not work + currentValue = currentValue.negate(); + } + long i = currentValue.getLong(); + read(); + return i; + } + + private boolean readBooleanSetting() { + if (currentTokenType == VALUE) { + boolean result = currentValue.getBoolean(); + read(); + return result; + } + if (readIf("TRUE") || readIf("ON")) { + return true; + } else if (readIf("FALSE") || readIf("OFF")) { + return false; + } else { + throw getSyntaxError(); + } + } + + private String readString() { + Expression expr = readExpression().optimize(session); + if (!(expr instanceof ValueExpression)) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, "string"); + } + return expr.getValue(session).getString(); + } + + // TODO: why does 
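`readInt()` and `readLong()` negate `currentValue` *before* narrowing, as their comments note: the positive magnitude of `Integer.MIN_VALUE` (2147483648) does not fit in an `int`, so the sign has to be applied while the value is still held at wider precision. A simplified stand-in (illustrative names, not the parser's code):

```java
// Sketch (assumption: a toy replacement for readInt(), holding the
// magnitude in a long so that negation happens before the narrowing
// cast -- exactly the ordering the parser's comment insists on).
public class MinValueDemo {
    static int readInt(boolean minus, String digits) {
        long magnitude = Long.parseLong(digits);
        long value = minus ? -magnitude : magnitude;   // negate first
        if (value < Integer.MIN_VALUE || value > Integer.MAX_VALUE) {
            throw new NumberFormatException("integer out of range: " + value);
        }
        return (int) value;
    }
}
```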
this function allow defaultSchemaName=null - which resets + // the parser schemaName for everyone ? + private String readIdentifierWithSchema(String defaultSchemaName) { + if (currentTokenType != IDENTIFIER) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, + "identifier"); + } + String s = currentToken; + read(); + schemaName = defaultSchemaName; + if (readIf(".")) { + schemaName = s; + if (currentTokenType != IDENTIFIER) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, + "identifier"); + } + s = currentToken; + read(); + } + if (equalsToken(".", currentToken)) { + if (equalsToken(schemaName, database.getShortName())) { + read("."); + schemaName = s; + if (currentTokenType != IDENTIFIER) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, + "identifier"); + } + s = currentToken; + read(); + } + } + return s; + } + + private String readIdentifierWithSchema() { + return readIdentifierWithSchema(session.getCurrentSchemaName()); + } + + private String readAliasIdentifier() { + return readColumnIdentifier(); + } + + private String readUniqueIdentifier() { + return readColumnIdentifier(); + } + + private String readColumnIdentifier() { + if (currentTokenType != IDENTIFIER) { + throw DbException.getSyntaxError(sqlCommand, parseIndex, + "identifier"); + } + String s = currentToken; + read(); + return s; + } + + private void read(String expected) { + if (currentTokenQuoted || !equalsToken(expected, currentToken)) { + addExpected(expected); + throw getSyntaxError(); + } + read(); + } + + private boolean readIf(String token) { + if (!currentTokenQuoted && equalsToken(token, currentToken)) { + read(); + return true; + } + addExpected(token); + return false; + } + + /* + * Reads every token in list, in order - returns true if all are found. + * If any are not found, returns false - AND resets parsing back to state when called. + */ + private boolean readIfAll(String... 
tokens) { + // save parse location in case we have to fail this test + int start = lastParseIndex; + for (String token: tokens) { + if (!currentTokenQuoted && equalsToken(token, currentToken)) { + read(); + } else { + // read failed - revert parse location to before when called + parseIndex = start; + read(); + return false; + } + } + return true; + } + + private boolean isToken(String token) { + boolean result = equalsToken(token, currentToken) && + !currentTokenQuoted; + if (result) { + return true; + } + addExpected(token); + return false; + } + + private boolean equalsToken(String a, String b) { + if (a == null) { + return b == null; + } else + return a.equals(b) || !identifiersToUpper && a.equalsIgnoreCase(b); + } + + private static boolean equalsTokenIgnoreCase(String a, String b) { + if (a == null) { + return b == null; + } else + return a.equals(b) || a.equalsIgnoreCase(b); + } + + private boolean isTokenInList(Collection upperCaseTokenList) { + String upperCaseCurrentToken = currentToken.toUpperCase(); + return upperCaseTokenList.contains(upperCaseCurrentToken); + } + + private void addExpected(String token) { + if (expectedList != null) { + expectedList.add(token); + } + } + + private void read() { + currentTokenQuoted = false; + if (expectedList != null) { + expectedList.clear(); + } + int[] types = characterTypes; + lastParseIndex = parseIndex; + int i = parseIndex; + int type = types[i]; + while (type == 0) { + type = types[++i]; + } + int start = i; + char[] chars = sqlCommandChars; + char c = chars[i++]; + currentToken = ""; + switch (type) { + case CHAR_NAME: + while (true) { + type = types[i]; + if (type != CHAR_NAME && type != CHAR_VALUE) { + break; + } + i++; + } + currentToken = StringUtils.cache(sqlCommand.substring( + start, i)); + currentTokenType = getTokenType(currentToken); + parseIndex = i; + return; + case CHAR_QUOTED: { + String result = null; + while (true) { + for (int begin = i;; i++) { + if (chars[i] == '\"') { + if (result == null) 
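`readIfAll()` above layers multi-token lookahead on top of a single-token scanner: it saves the parse position, consumes tokens greedily, and rewinds to the saved position the moment one token fails to match. The same pattern on a toy tokenizer (whitespace splitting is an assumption; the real scanner is character-typed):

```java
// Sketch (assumption: a toy whitespace tokenizer, not the real scanner).
// Demonstrates the readIf / readIfAll pattern: consume on match, and on
// any later mismatch restore the saved position so nothing is consumed.
import java.util.Arrays;
import java.util.List;

public class LookaheadDemo {
    private final List<String> tokens;
    private int pos;

    LookaheadDemo(String sql) {
        this.tokens = Arrays.asList(sql.trim().split("\\s+"));
    }

    boolean readIf(String token) {
        if (pos < tokens.size() && tokens.get(pos).equals(token)) {
            pos++;                       // consume on match
            return true;
        }
        return false;
    }

    boolean readIfAll(String... expected) {
        int saved = pos;                 // save position for backtracking
        for (String token : expected) {
            if (!readIf(token)) {
                pos = saved;             // rewind: no tokens consumed
                return false;
            }
        }
        return true;
    }

    String current() {
        return pos < tokens.size() ? tokens.get(pos) : "";
    }
}
```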
{ + result = sqlCommand.substring(begin, i); + } else { + result += sqlCommand.substring(begin - 1, i); + } + break; + } + } + if (chars[++i] != '\"') { + break; + } + i++; + } + currentToken = StringUtils.cache(result); + parseIndex = i; + currentTokenQuoted = true; + currentTokenType = IDENTIFIER; + return; + } + case CHAR_SPECIAL_2: + if (types[i] == CHAR_SPECIAL_2) { + i++; + } + currentToken = sqlCommand.substring(start, i); + currentTokenType = getSpecialType(currentToken); + parseIndex = i; + return; + case CHAR_SPECIAL_1: + currentToken = sqlCommand.substring(start, i); + currentTokenType = getSpecialType(currentToken); + parseIndex = i; + return; + case CHAR_VALUE: + if (c == '0' && chars[i] == 'X') { + // hex number + long number = 0; + start += 2; + i++; + while (true) { + c = chars[i]; + if ((c < '0' || c > '9') && (c < 'A' || c > 'F')) { + checkLiterals(false); + currentValue = ValueInt.get((int) number); + currentTokenType = VALUE; + currentToken = "0"; + parseIndex = i; + return; + } + number = (number << 4) + c - + (c >= 'A' ? ('A' - 0xa) : ('0')); + if (number > Integer.MAX_VALUE) { + readHexDecimal(start, i); + return; + } + i++; + } + } + long number = c - '0'; + while (true) { + c = chars[i]; + if (c < '0' || c > '9') { + if (c == '.' 
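The `CHAR_QUOTED` case assembles an identifier across doubled quotes: inside a double-quoted SQL identifier, `""` stands for one literal quote, which is why the loop appends `substring(begin - 1, i)` (keeping one of the pair) on each continuation. A standalone equivalent of the unescaping rule:

```java
// Sketch (assumption: a standalone helper; the scanner does this in
// place while reading). The input is the text between the outer quotes,
// e.g. the my""name part of the identifier "my""name".
public class QuotedIdentDemo {
    static String unescape(String betweenQuotes) {
        return betweenQuotes.replace("\"\"", "\"");
    }
}
```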
|| c == 'E' || c == 'L') { + readDecimal(start, i); + break; + } + checkLiterals(false); + currentValue = ValueInt.get((int) number); + currentTokenType = VALUE; + currentToken = "0"; + parseIndex = i; + break; + } + number = number * 10 + (c - '0'); + if (number > Integer.MAX_VALUE) { + readDecimal(start, i); + break; + } + i++; + } + return; + case CHAR_DOT: + if (types[i] != CHAR_VALUE) { + currentTokenType = KEYWORD; + currentToken = "."; + parseIndex = i; + return; + } + readDecimal(i - 1, i); + return; + case CHAR_STRING: { + String result = null; + while (true) { + for (int begin = i;; i++) { + if (chars[i] == '\'') { + if (result == null) { + result = sqlCommand.substring(begin, i); + } else { + result += sqlCommand.substring(begin - 1, i); + } + break; + } + } + if (chars[++i] != '\'') { + break; + } + i++; + } + currentToken = "'"; + checkLiterals(true); + currentValue = ValueString.get(StringUtils.cache(result), + database.getMode().treatEmptyStringsAsNull); + parseIndex = i; + currentTokenType = VALUE; + return; + } + case CHAR_DOLLAR_QUOTED_STRING: { + String result = null; + int begin = i - 1; + while (types[i] == CHAR_DOLLAR_QUOTED_STRING) { + i++; + } + result = sqlCommand.substring(begin, i); + currentToken = "'"; + checkLiterals(true); + currentValue = ValueString.get(StringUtils.cache(result), + database.getMode().treatEmptyStringsAsNull); + parseIndex = i; + currentTokenType = VALUE; + return; + } + case CHAR_END: + currentToken = ""; + currentTokenType = END; + parseIndex = i; + return; + default: + throw getSyntaxError(); + } + } + + private void readParameterIndex() { + int i = parseIndex; + + char[] chars = sqlCommandChars; + char c = chars[i++]; + long number = c - '0'; + while (true) { + c = chars[i]; + if (c < '0' || c > '9') { + currentValue = ValueInt.get((int) number); + currentTokenType = VALUE; + currentToken = "0"; + parseIndex = i; + break; + } + number = number * 10 + (c - '0'); + if (number > Integer.MAX_VALUE) { + throw 
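Both number loops in `read()` accumulate digits into a `long` and hand off to `readDecimal()`/`readHexDecimal()` the moment the running value exceeds `Integer.MAX_VALUE`, so an int-sized literal never touches BigDecimal. The promotion test in isolation (illustrative class, not the scanner):

```java
// Sketch (assumption: simplified stand-in for the scanner's digit loop).
// Accumulating into a long is safe here because the loop bails out as
// soon as the value first exceeds Integer.MAX_VALUE, long before the
// long itself could overflow.
public class IntOverflowDemo {
    /** Returns "int" if the decimal literal fits an int, else "wide". */
    static String classify(String digits) {
        long number = 0;
        for (int i = 0; i < digits.length(); i++) {
            number = number * 10 + (digits.charAt(i) - '0');
            if (number > Integer.MAX_VALUE) {
                return "wide";           // promote, stop accumulating
            }
        }
        return "int";
    }
}
```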
DbException.getInvalidValueException( + "parameter index", number); + } + i++; + } + } + + private void checkLiterals(boolean text) { + if (!literalsChecked && !session.getAllowLiterals()) { + int allowed = database.getAllowLiterals(); + if (allowed == Constants.ALLOW_LITERALS_NONE || + (text && allowed != Constants.ALLOW_LITERALS_ALL)) { + throw DbException.get(ErrorCode.LITERALS_ARE_NOT_ALLOWED); + } + } + } + + private void readHexDecimal(int start, int i) { + char[] chars = sqlCommandChars; + char c; + do { + c = chars[++i]; + } while ((c >= '0' && c <= '9') || (c >= 'A' && c <= 'F')); + parseIndex = i; + String sub = sqlCommand.substring(start, i); + BigDecimal bd = new BigDecimal(new BigInteger(sub, 16)); + checkLiterals(false); + currentValue = ValueDecimal.get(bd); + currentTokenType = VALUE; + } + + private void readDecimal(int start, int i) { + char[] chars = sqlCommandChars; + int[] types = characterTypes; + // go until the first non-number + while (true) { + int t = types[i]; + if (t != CHAR_DOT && t != CHAR_VALUE) { + break; + } + i++; + } + boolean containsE = false; + if (chars[i] == 'E' || chars[i] == 'e') { + containsE = true; + i++; + if (chars[i] == '+' || chars[i] == '-') { + i++; + } + if (types[i] != CHAR_VALUE) { + throw getSyntaxError(); + } + while (types[++i] == CHAR_VALUE) { + // go until the first non-number + } + } + parseIndex = i; + String sub = sqlCommand.substring(start, i); + checkLiterals(false); + if (!containsE && sub.indexOf('.') < 0) { + BigInteger bi = new BigInteger(sub); + if (bi.compareTo(ValueLong.MAX_BI) <= 0) { + // parse constants like "10000000L" + if (chars[i] == 'L') { + parseIndex++; + } + currentValue = ValueLong.get(bi.longValue()); + currentTokenType = VALUE; + return; + } + } + BigDecimal bd; + try { + bd = new BigDecimal(sub); + } catch (NumberFormatException e) { + throw DbException.get(ErrorCode.DATA_CONVERSION_ERROR_1, e, sub); + } + currentValue = ValueDecimal.get(bd); + currentTokenType = VALUE; + } + + 
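`readDecimal()` keeps a literal as a long when it has no decimal point and no exponent and fits `Long.MAX_VALUE` (accepting constants like `10000000L`), and only falls back to BigDecimal otherwise. A compressed version of that classification (the method shape is an assumption; only the decision rule is taken from the code above):

```java
// Sketch (assumption: a simplified readDecimal over a complete literal
// string, using BigInteger/BigDecimal as the parser does). No '.' and
// no exponent and within long range => stays a long; otherwise decimal.
import java.math.BigDecimal;
import java.math.BigInteger;

public class DecimalDemo {
    static final BigInteger LONG_MAX_BI = BigInteger.valueOf(Long.MAX_VALUE);

    static Object readDecimal(String sub) {
        boolean longSuffix = sub.endsWith("L");     // e.g. "10000000L"
        String digits = longSuffix ? sub.substring(0, sub.length() - 1) : sub;
        boolean plain = digits.indexOf('.') < 0 && digits.indexOf('E') < 0
                && digits.indexOf('e') < 0;
        if (plain) {
            BigInteger bi = new BigInteger(digits);
            if (bi.compareTo(LONG_MAX_BI) <= 0) {
                return bi.longValue();   // stays a long
            }
        }
        return new BigDecimal(digits);   // falls back to decimal
    }
}
```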
public Session getSession() { + return session; + } + + private void initialize(String sql) { + if (sql == null) { + sql = ""; + } + originalSQL = sql; + sqlCommand = sql; + int len = sql.length() + 1; + char[] command = new char[len]; + int[] types = new int[len]; + len--; + sql.getChars(0, len, command, 0); + boolean changed = false; + command[len] = ' '; + int startLoop = 0; + int lastType = 0; + for (int i = 0; i < len; i++) { + char c = command[i]; + int type = 0; + switch (c) { + case '/': + if (command[i + 1] == '*') { + // block comment + changed = true; + command[i] = ' '; + command[i + 1] = ' '; + startLoop = i; + i += 2; + checkRunOver(i, len, startLoop); + while (command[i] != '*' || command[i + 1] != '/') { + command[i++] = ' '; + checkRunOver(i, len, startLoop); + } + command[i] = ' '; + command[i + 1] = ' '; + i++; + } else if (command[i + 1] == '/') { + // single line comment + changed = true; + startLoop = i; + while (true) { + c = command[i]; + if (c == '\n' || c == '\r' || i >= len - 1) { + break; + } + command[i++] = ' '; + checkRunOver(i, len, startLoop); + } + } else { + type = CHAR_SPECIAL_1; + } + break; + case '-': + if (command[i + 1] == '-') { + // single line comment + changed = true; + startLoop = i; + while (true) { + c = command[i]; + if (c == '\n' || c == '\r' || i >= len - 1) { + break; + } + command[i++] = ' '; + checkRunOver(i, len, startLoop); + } + } else { + type = CHAR_SPECIAL_1; + } + break; + case '$': + if (command[i + 1] == '$' && (i == 0 || command[i - 1] <= ' ')) { + // dollar quoted string + changed = true; + command[i] = ' '; + command[i + 1] = ' '; + startLoop = i; + i += 2; + checkRunOver(i, len, startLoop); + while (command[i] != '$' || command[i + 1] != '$') { + types[i++] = CHAR_DOLLAR_QUOTED_STRING; + checkRunOver(i, len, startLoop); + } + command[i] = ' '; + command[i + 1] = ' '; + i++; + } else { + if (lastType == CHAR_NAME || lastType == CHAR_VALUE) { + // $ inside an identifier is supported + type = 
CHAR_NAME; + } else { + // but not at the start, to support PostgreSQL $1 + type = CHAR_SPECIAL_1; + } + } + break; + case '(': + case ')': + case '{': + case '}': + case '*': + case ',': + case ';': + case '+': + case '%': + case '?': + case '@': + case ']': + type = CHAR_SPECIAL_1; + break; + case '!': + case '<': + case '>': + case '|': + case '=': + case ':': + case '&': + case '~': + type = CHAR_SPECIAL_2; + break; + case '.': + type = CHAR_DOT; + break; + case '\'': + type = types[i] = CHAR_STRING; + startLoop = i; + while (command[++i] != '\'') { + checkRunOver(i, len, startLoop); + } + break; + case '[': + if (database.getMode().squareBracketQuotedNames) { + // SQL Server alias for " + command[i] = '"'; + changed = true; + type = types[i] = CHAR_QUOTED; + startLoop = i; + while (command[++i] != ']') { + checkRunOver(i, len, startLoop); + } + command[i] = '"'; + } else { + type = CHAR_SPECIAL_1; + } + break; + case '`': + // MySQL alias for ", but not case sensitive + command[i] = '"'; + changed = true; + type = types[i] = CHAR_QUOTED; + startLoop = i; + while (command[++i] != '`') { + checkRunOver(i, len, startLoop); + c = command[i]; + command[i] = Character.toUpperCase(c); + } + command[i] = '"'; + break; + case '\"': + type = types[i] = CHAR_QUOTED; + startLoop = i; + while (command[++i] != '\"') { + checkRunOver(i, len, startLoop); + } + break; + case '_': + type = CHAR_NAME; + break; + case '#': + if (database.getMode().supportPoundSymbolForColumnNames) { + type = CHAR_NAME; + } else { + type = CHAR_SPECIAL_1; + } + break; + default: + if (c >= 'a' && c <= 'z') { + if (identifiersToUpper) { + command[i] = (char) (c - ('a' - 'A')); + changed = true; + } + type = CHAR_NAME; + } else if (c >= 'A' && c <= 'Z') { + type = CHAR_NAME; + } else if (c >= '0' && c <= '9') { + type = CHAR_VALUE; + } else { + if (c <= ' ' || Character.isSpaceChar(c)) { + // whitespace + } else if (Character.isJavaIdentifierPart(c)) { + type = CHAR_NAME; + if (identifiersToUpper) { 
+ char u = Character.toUpperCase(c); + if (u != c) { + command[i] = u; + changed = true; + } + } + } else { + type = CHAR_SPECIAL_1; + } + } + } + types[i] = type; + lastType = type; + } + sqlCommandChars = command; + types[len] = CHAR_END; + characterTypes = types; + if (changed) { + sqlCommand = new String(command); + } + parseIndex = 0; + } + + private void checkRunOver(int i, int len, int startLoop) { + if (i >= len) { + parseIndex = startLoop; + throw getSyntaxError(); + } + } + + private int getSpecialType(String s) { + char c0 = s.charAt(0); + if (s.length() == 1) { + switch (c0) { + case '?': + case '$': + return PARAMETER; + case '@': + return AT; + case '+': + return PLUS; + case '-': + return MINUS; + case '{': + case '}': + case '*': + case '/': + case '%': + case ';': + case ',': + case ':': + case '[': + case ']': + case '~': + return KEYWORD; + case '(': + return OPEN; + case ')': + return CLOSE; + case '<': + return SMALLER; + case '>': + return BIGGER; + case '=': + return EQUAL; + default: + break; + } + } else if (s.length() == 2) { + char c1 = s.charAt(1); + switch (c0) { + case ':': + if (c1 == ':' || c1 == '=') { + return KEYWORD; + } + break; + case '>': + if (c1 == '=') { + return BIGGER_EQUAL; + } + break; + case '<': + if (c1 == '=') { + return SMALLER_EQUAL; + } else if (c1 == '>') { + return NOT_EQUAL; + } + break; + case '!': + if (c1 == '=') { + return NOT_EQUAL; + } else if (c1 == '~') { + return KEYWORD; + } + break; + case '|': + if (c1 == '|') { + return STRING_CONCAT; + } + break; + case '&': + if (c1 == '&') { + return SPATIAL_INTERSECTS; + } + break; + } + } + throw getSyntaxError(); + } + + private int getTokenType(String s) { + int len = s.length(); + if (len == 0) { + throw getSyntaxError(); + } + if (!identifiersToUpper) { + // if not yet converted to uppercase, do it now + s = StringUtils.toUpperEnglish(s); + } + return getSaveTokenType(s, false); + } + + private boolean isKeyword(String s) { + if (!identifiersToUpper) { + 
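`initialize()` strips `--` and `/* */` comments by overwriting them with spaces rather than deleting them, so every surviving character keeps its original offset and syntax-error positions still point into the user's SQL. The trick in isolation, including the trailing sentinel character the method also appends:

```java
// Sketch (assumption: only "--" line comments and well-terminated
// "/* */" block comments, no string literals -- the real initialize()
// guards unterminated comments with checkRunOver()). Overwriting with
// spaces keeps all remaining characters at their original offsets.
public class CommentStripDemo {
    static String strip(String sql) {
        char[] c = (sql + " ").toCharArray();   // sentinel, as in the parser
        for (int i = 0; i < sql.length(); i++) {
            if (c[i] == '/' && c[i + 1] == '*') {
                while (!(c[i] == '*' && c[i + 1] == '/')) {
                    c[i++] = ' ';               // blank out block comment
                }
                c[i] = ' ';
                c[i + 1] = ' ';
            } else if (c[i] == '-' && c[i + 1] == '-') {
                while (i < sql.length() && c[i] != '\n' && c[i] != '\r') {
                    c[i++] = ' ';               // blank out to end of line
                }
            }
        }
        return new String(c, 0, sql.length());
    }
}
```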
// if not yet converted to uppercase, do it now + s = StringUtils.toUpperEnglish(s); + } + return ParserUtil.isKeyword(s); + } + + private static int getSaveTokenType(String s, boolean functionsAsKeywords) { + return ParserUtil.getSaveTokenType(s, functionsAsKeywords); + } + + private Column parseColumnForTable(String columnName, + boolean defaultNullable) { + Column column; + boolean isIdentity = readIf("IDENTITY"); + if (isIdentity || readIf("BIGSERIAL")) { + // Check if any of them are disallowed in the current Mode + if (isIdentity && database.getMode(). + disallowedTypes.contains("IDENTITY")) { + throw DbException.get(ErrorCode.UNKNOWN_DATA_TYPE_1, + currentToken); + } + column = new Column(columnName, Value.LONG); + column.setOriginalSQL("IDENTITY"); + parseAutoIncrement(column); + // PostgreSQL compatibility + if (!database.getMode().serialColumnIsNotPK) { + column.setPrimaryKey(true); + } + } else if (readIf("SERIAL")) { + column = new Column(columnName, Value.INT); + column.setOriginalSQL("SERIAL"); + parseAutoIncrement(column); + // PostgreSQL compatibility + if (!database.getMode().serialColumnIsNotPK) { + column.setPrimaryKey(true); + } + } else { + column = parseColumnWithType(columnName); + } + if (readIf("INVISIBLE")) { + column.setVisible(false); + } else if (readIf("VISIBLE")) { + column.setVisible(true); + } + NullConstraintType nullConstraint = parseNotNullConstraint(); + switch (nullConstraint) { + case NULL_IS_ALLOWED: + column.setNullable(true); + break; + case NULL_IS_NOT_ALLOWED: + column.setNullable(false); + break; + case NO_NULL_CONSTRAINT_FOUND: + // domains may be defined as not nullable + column.setNullable(defaultNullable & column.isNullable()); + break; + default: + throw DbException.get(ErrorCode.UNKNOWN_MODE_1, + "Internal Error - unhandled case: " + nullConstraint.name()); + } + if (readIf("AS")) { + if (isIdentity) { + getSyntaxError(); + } + Expression expr = readExpression(); + column.setComputedExpression(expr); + } else if 
(readIf("DEFAULT")) { + Expression defaultExpression = readExpression(); + column.setDefaultExpression(session, defaultExpression); + } else if (readIf("GENERATED")) { + if (!readIf("ALWAYS")) { + read("BY"); + read("DEFAULT"); + } + read("AS"); + read("IDENTITY"); + long start = 1, increment = 1; + if (readIf("(")) { + read("START"); + readIf("WITH"); + start = readLong(); + readIf(","); + if (readIf("INCREMENT")) { + readIf("BY"); + increment = readLong(); + } + read(")"); + } + column.setPrimaryKey(true); + column.setAutoIncrement(true, start, increment); + } + if (readIf("ON")) { + read("UPDATE"); + Expression onUpdateExpression = readExpression(); + column.setOnUpdateExpression(session, onUpdateExpression); + } + if (NullConstraintType.NULL_IS_NOT_ALLOWED == parseNotNullConstraint()) { + column.setNullable(false); + } + if (readIf("AUTO_INCREMENT") || readIf("BIGSERIAL") || readIf("SERIAL")) { + parseAutoIncrement(column); + parseNotNullConstraint(); + } else if (readIf("IDENTITY")) { + parseAutoIncrement(column); + column.setPrimaryKey(true); + parseNotNullConstraint(); + } + if (readIf("NULL_TO_DEFAULT")) { + column.setConvertNullToDefault(true); + } + if (readIf("SEQUENCE")) { + Sequence sequence = readSequence(); + column.setSequence(sequence); + } + if (readIf("SELECTIVITY")) { + int value = readPositiveInt(); + column.setSelectivity(value); + } + String comment = readCommentIf(); + if (comment != null) { + column.setComment(comment); + } + return column; + } + + private void parseAutoIncrement(Column column) { + long start = 1, increment = 1; + if (readIf("(")) { + start = readLong(); + if (readIf(",")) { + increment = readLong(); + } + read(")"); + } + column.setAutoIncrement(true, start, increment); + } + + private String readCommentIf() { + if (readIf("COMMENT")) { + readIf("IS"); + return readString(); + } + return null; + } + + private Column parseColumnWithType(String columnName) { + String original = currentToken; + boolean regular = false; + int 
originalScale = -1; + if (readIf("LONG")) { + if (readIf("RAW")) { + original += " RAW"; + } + } else if (readIf("DOUBLE")) { + if (readIf("PRECISION")) { + original += " PRECISION"; + } + } else if (readIf("CHARACTER")) { + if (readIf("VARYING")) { + original += " VARYING"; + } + } else if (readIf("TIME")) { + if (readIf("(")) { + originalScale = readPositiveInt(); + if (originalScale > ValueTime.MAXIMUM_SCALE) { + throw DbException.get(ErrorCode.INVALID_VALUE_SCALE_PRECISION, Integer.toString(originalScale)); + } + read(")"); + } + if (readIf("WITHOUT")) { + read("TIME"); + read("ZONE"); + original += " WITHOUT TIME ZONE"; + } + } else if (readIf("TIMESTAMP")) { + if (readIf("(")) { + originalScale = readPositiveInt(); + // Allow non-standard TIMESTAMP(..., ...) syntax + if (readIf(",")) { + originalScale = readPositiveInt(); + } + if (originalScale > ValueTimestamp.MAXIMUM_SCALE) { + throw DbException.get(ErrorCode.INVALID_VALUE_SCALE_PRECISION, Integer.toString(originalScale)); + } + read(")"); + } + if (readIf("WITH")) { + read("TIME"); + read("ZONE"); + original += " WITH TIME ZONE"; + } else if (readIf("WITHOUT")) { + read("TIME"); + read("ZONE"); + original += " WITHOUT TIME ZONE"; + } + } else { + regular = true; + } + long precision = -1; + int displaySize = -1; + String[] enumerators = null; + int scale = -1; + String comment = null; + Column templateColumn = null; + DataType dataType; + if (!identifiersToUpper) { + original = StringUtils.toUpperEnglish(original); + } + UserDataType userDataType = database.findUserDataType(original); + if (userDataType != null) { + templateColumn = userDataType.getColumn(); + dataType = DataType.getDataType(templateColumn.getType()); + comment = templateColumn.getComment(); + original = templateColumn.getOriginalSQL(); + precision = templateColumn.getPrecision(); + displaySize = templateColumn.getDisplaySize(); + scale = templateColumn.getScale(); + enumerators = templateColumn.getEnumerators(); + } else { + Mode mode = 
database.getMode(); + dataType = DataType.getTypeByName(original, mode); + if (dataType == null || mode.disallowedTypes.contains(original)) { + throw DbException.get(ErrorCode.UNKNOWN_DATA_TYPE_1, + currentToken); + } + } + if (database.getIgnoreCase() && dataType.type == Value.STRING && + !equalsToken("VARCHAR_CASESENSITIVE", original)) { + original = "VARCHAR_IGNORECASE"; + dataType = DataType.getTypeByName(original, database.getMode()); + } + if (regular) { + read(); + } + precision = precision == -1 ? dataType.defaultPrecision : precision; + displaySize = displaySize == -1 ? dataType.defaultDisplaySize + : displaySize; + scale = scale == -1 ? dataType.defaultScale : scale; + if (dataType.supportsPrecision || dataType.supportsScale) { + int t = dataType.type; + if (t == Value.TIME || t == Value.TIMESTAMP || t == Value.TIMESTAMP_TZ) { + if (originalScale >= 0) { + scale = originalScale; + switch (t) { + case Value.TIME: + if (original.equals("TIME WITHOUT TIME ZONE")) { + original = "TIME(" + originalScale + ") WITHOUT TIME ZONE"; + } else { + original = original + '(' + originalScale + ')'; + } + precision = displaySize = ValueTime.getDisplaySize(originalScale); + break; + case Value.TIMESTAMP: + if (original.equals("TIMESTAMP WITHOUT TIME ZONE")) { + original = "TIMESTAMP(" + originalScale + ") WITHOUT TIME ZONE"; + } else { + original = original + '(' + originalScale + ')'; + } + precision = displaySize = ValueTimestamp.getDisplaySize(originalScale); + break; + case Value.TIMESTAMP_TZ: + original = "TIMESTAMP(" + originalScale + ") WITH TIME ZONE"; + precision = displaySize = ValueTimestampTimeZone.getDisplaySize(originalScale); + break; + } + } + } else if (readIf("(")) { + if (!readIf("MAX")) { + long p = readLong(); + if (readIf("K")) { + p *= 1024; + } else if (readIf("M")) { + p *= 1024 * 1024; + } else if (readIf("G")) { + p *= 1024 * 1024 * 1024; + } + if (p > Long.MAX_VALUE) { + p = Long.MAX_VALUE; + } + original += "(" + p; + // Oracle syntax + if 
(!readIf("CHAR")) { + readIf("BYTE"); + } + if (dataType.supportsScale) { + if (readIf(",")) { + scale = readInt(); + original += ", " + scale; + } else { + scale = 0; + } + } + precision = p; + displaySize = MathUtils.convertLongToInt(precision); + original += ")"; + } + read(")"); + } + } else if (dataType.type == Value.DOUBLE && original.equals("FLOAT")) { + if (readIf("(")) { + int p = readPositiveInt(); + read(")"); + if (p > 53) { + throw DbException.get(ErrorCode.INVALID_VALUE_SCALE_PRECISION, Integer.toString(p)); + } + if (p <= 24) { + dataType = DataType.getDataType(Value.FLOAT); + } + original = original + '(' + p + ')'; + } + } else if (dataType.type == Value.ENUM) { + if (readIf("(")) { + java.util.List enumeratorList = new ArrayList<>(); + original += '('; + String enumerator0 = readString(); + enumeratorList.add(enumerator0); + original += "'" + enumerator0 + "'"; + while (readIfMore(true)) { + original += ','; + String enumeratorN = readString(); + original += "'" + enumeratorN + "'"; + enumeratorList.add(enumeratorN); + } + original += ')'; + enumerators = enumeratorList.toArray(new String[0]); + } + try { + ValueEnum.check(enumerators); + } catch (DbException e) { + throw e.addSQL(original); + } + } else if (readIf("(")) { + // Support for MySQL: INT(11), MEDIUMINT(8) and so on. + // Just ignore the precision. 
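The precision parser above accepts `K`, `M`, and `G` suffixes (so `VARCHAR(10K)` means 10240). Note that its overflow guard `if (p > Long.MAX_VALUE)` can never be true for a `long`, so an overflowing multiplication simply wraps; the sketch below (a hypothetical helper, not part of the parser) saturates explicitly instead:

```java
// Sketch (assumption: hypothetical helper isolating the K/M/G suffix
// logic). Math.multiplyExact detects overflow that a "p > Long.MAX_VALUE"
// comparison on a long cannot, and we saturate rather than wrap.
public class PrecisionDemo {
    static long applySuffix(long p, String suffix) {
        long factor;
        switch (suffix) {
            case "K": factor = 1024L; break;
            case "M": factor = 1024L * 1024; break;
            case "G": factor = 1024L * 1024 * 1024; break;
            default:  factor = 1L; break;
        }
        try {
            return Math.multiplyExact(p, factor);
        } catch (ArithmeticException overflow) {
            return Long.MAX_VALUE;       // saturate instead of wrapping
        }
    }
}
```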
+ readPositiveInt(); + read(")"); + } + if (readIf("FOR")) { + read("BIT"); + read("DATA"); + if (dataType.type == Value.STRING) { + dataType = DataType.getTypeByName("BINARY", database.getMode()); + } + } + // MySQL compatibility + readIf("UNSIGNED"); + int type = dataType.type; + if (scale > precision) { + throw DbException.get(ErrorCode.INVALID_VALUE_SCALE_PRECISION, + Integer.toString(scale), Long.toString(precision)); + } + + + Column column = new Column(columnName, type, precision, scale, + displaySize, enumerators); + if (templateColumn != null) { + column.setNullable(templateColumn.isNullable()); + column.setDefaultExpression(session, + templateColumn.getDefaultExpression()); + int selectivity = templateColumn.getSelectivity(); + if (selectivity != Constants.SELECTIVITY_DEFAULT) { + column.setSelectivity(selectivity); + } + Expression checkConstraint = templateColumn.getCheckConstraint( + session, columnName); + column.addCheckConstraint(session, checkConstraint); + } + column.setComment(comment); + column.setOriginalSQL(original); + return column; + } + + private Prepared parseCreate() { + boolean orReplace = false; + if (readIf("OR")) { + read("REPLACE"); + orReplace = true; + } + boolean force = readIf("FORCE"); + if (readIf("VIEW")) { + return parseCreateView(force, orReplace); + } else if (readIf("ALIAS")) { + return parseCreateFunctionAlias(force); + } else if (readIf("SEQUENCE")) { + return parseCreateSequence(); + } else if (readIf("USER")) { + return parseCreateUser(); + } else if (readIf("TRIGGER")) { + return parseCreateTrigger(force); + } else if (readIf("ROLE")) { + return parseCreateRole(); + } else if (readIf("SCHEMA")) { + return parseCreateSchema(); + } else if (readIf("CONSTANT")) { + return parseCreateConstant(); + } else if (readIf("DOMAIN")) { + return parseCreateUserDataType(); + } else if (readIf("TYPE")) { + return parseCreateUserDataType(); + } else if (readIf("DATATYPE")) { + return parseCreateUserDataType(); + } else if 
(readIf("AGGREGATE")) { + return parseCreateAggregate(force); + } else if (readIf("LINKED")) { + return parseCreateLinkedTable(false, false, force); + } + // tables or linked tables + boolean memory = false, cached = false; + if (readIf("MEMORY")) { + memory = true; + } else if (readIf("CACHED")) { + cached = true; + } + if (readIf("LOCAL")) { + read("TEMPORARY"); + if (readIf("LINKED")) { + return parseCreateLinkedTable(true, false, force); + } + read("TABLE"); + return parseCreateTable(true, false, cached); + } else if (readIf("GLOBAL")) { + read("TEMPORARY"); + if (readIf("LINKED")) { + return parseCreateLinkedTable(true, true, force); + } + read("TABLE"); + return parseCreateTable(true, true, cached); + } else if (readIf("TEMP") || readIf("TEMPORARY")) { + if (readIf("LINKED")) { + return parseCreateLinkedTable(true, true, force); + } + read("TABLE"); + return parseCreateTable(true, true, cached); + } else if (readIf("TABLE")) { + if (!cached && !memory) { + cached = database.getDefaultTableType() == Table.TYPE_CACHED; + } + return parseCreateTable(false, false, cached); + } else if (readIf("SYNONYM")) { + return parseCreateSynonym(orReplace); + } else { + boolean hash = false, primaryKey = false; + boolean unique = false, spatial = false; + String indexName = null; + Schema oldSchema = null; + boolean ifNotExists = false; + if (readIf("PRIMARY")) { + read("KEY"); + if (readIf("HASH")) { + hash = true; + } + primaryKey = true; + if (!isToken("ON")) { + ifNotExists = readIfNotExists(); + indexName = readIdentifierWithSchema(null); + oldSchema = getSchema(); + } + } else { + if (readIf("UNIQUE")) { + unique = true; + } + if (readIf("HASH")) { + hash = true; + } + if (readIf("SPATIAL")) { + spatial = true; + } + if (readIf("INDEX")) { + if (!isToken("ON")) { + ifNotExists = readIfNotExists(); + indexName = readIdentifierWithSchema(null); + oldSchema = getSchema(); + } + } else { + throw getSyntaxError(); + } + } + read("ON"); + String tableName = 
readIdentifierWithSchema(); + checkSchema(oldSchema); + CreateIndex command = new CreateIndex(session, getSchema()); + command.setIfNotExists(ifNotExists); + command.setPrimaryKey(primaryKey); + command.setTableName(tableName); + command.setUnique(unique); + command.setIndexName(indexName); + command.setComment(readCommentIf()); + read("("); + command.setIndexColumns(parseIndexColumnList()); + + if (readIf("USING")) { + if (hash) { + throw getSyntaxError(); + } + if (spatial) { + throw getSyntaxError(); + } + if (readIf("BTREE")) { + // default + } else if (readIf("RTREE")) { + spatial = true; + } else if (readIf("HASH")) { + hash = true; + } else { + throw getSyntaxError(); + } + + } + command.setHash(hash); + command.setSpatial(spatial); + return command; + } + } + + /** + * @return true if we expect to see a TABLE clause + */ + private boolean addRoleOrRight(GrantRevoke command) { + if (readIf("SELECT")) { + command.addRight(Right.SELECT); + return true; + } else if (readIf("DELETE")) { + command.addRight(Right.DELETE); + return true; + } else if (readIf("INSERT")) { + command.addRight(Right.INSERT); + return true; + } else if (readIf("UPDATE")) { + command.addRight(Right.UPDATE); + return true; + } else if (readIf("ALL")) { + command.addRight(Right.ALL); + return true; + } else if (readIf("ALTER")) { + read("ANY"); + read("SCHEMA"); + command.addRight(Right.ALTER_ANY_SCHEMA); + command.addTable(null); + return false; + } else if (readIf("CONNECT")) { + // ignore this right + return true; + } else if (readIf("RESOURCE")) { + // ignore this right + return true; + } else { + command.addRoleName(readUniqueIdentifier()); + return false; + } + } + + private GrantRevoke parseGrantRevoke(int operationType) { + GrantRevoke command = new GrantRevoke(session); + command.setOperationType(operationType); + boolean tableClauseExpected = addRoleOrRight(command); + while (readIf(",")) { + addRoleOrRight(command); + if (command.isRightMode() && command.isRoleMode()) { + throw 
DbException + .get(ErrorCode.ROLES_AND_RIGHT_CANNOT_BE_MIXED); + } + } + if (tableClauseExpected) { + if (readIf("ON")) { + if (readIf("SCHEMA")) { + Schema schema = database.getSchema(readAliasIdentifier()); + command.setSchema(schema); + } else { + do { + Table table = readTableOrView(); + command.addTable(table); + } while (readIf(",")); + } + } + } + if (operationType == CommandInterface.GRANT) { + read("TO"); + } else { + read("FROM"); + } + command.setGranteeName(readUniqueIdentifier()); + return command; + } + + private Select parseValues() { + Select command = new Select(session); + currentSelect = command; + TableFilter filter = parseValuesTable(0); + ArrayList list = New.arrayList(); + list.add(new Wildcard(null, null)); + command.setExpressions(list); + command.addTableFilter(filter, true); + command.init(); + return command; + } + + private TableFilter parseValuesTable(int orderInFrom) { + Schema mainSchema = database.getSchema(Constants.SCHEMA_MAIN); + TableFunction tf = (TableFunction) Function.getFunction(database, + "TABLE"); + ArrayList columns = New.arrayList(); + ArrayList> rows = New.arrayList(); + do { + int i = 0; + ArrayList row = New.arrayList(); + boolean multiColumn = readIf("("); + do { + Expression expr = readExpression(); + expr = expr.optimize(session); + int type = expr.getType(); + long prec; + int scale, displaySize; + Column column; + String columnName = "C" + (i + 1); + if (rows.isEmpty()) { + if (type == Value.UNKNOWN) { + type = Value.STRING; + } + DataType dt = DataType.getDataType(type); + prec = dt.defaultPrecision; + scale = dt.defaultScale; + displaySize = dt.defaultDisplaySize; + column = new Column(columnName, type, prec, scale, + displaySize); + columns.add(column); + } + prec = expr.getPrecision(); + scale = expr.getScale(); + displaySize = expr.getDisplaySize(); + if (i >= columns.size()) { + throw DbException + .get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + Column c = columns.get(i); + type = 
Value.getHigherOrder(c.getType(), type); + prec = Math.max(c.getPrecision(), prec); + scale = Math.max(c.getScale(), scale); + displaySize = Math.max(c.getDisplaySize(), displaySize); + column = new Column(columnName, type, prec, scale, displaySize); + columns.set(i, column); + row.add(expr); + i++; + } while (multiColumn && readIfMore(true)); + rows.add(row); + } while (readIf(",")); + int columnCount = columns.size(); + int rowCount = rows.size(); + for (ArrayList row : rows) { + if (row.size() != columnCount) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + } + for (int i = 0; i < columnCount; i++) { + Column c = columns.get(i); + if (c.getType() == Value.UNKNOWN) { + c = new Column(c.getName(), Value.STRING, 0, 0, 0); + columns.set(i, c); + } + Expression[] array = new Expression[rowCount]; + for (int j = 0; j < rowCount; j++) { + array[j] = rows.get(j).get(i); + } + ExpressionList list = new ExpressionList(array); + tf.setParameter(i, list); + } + tf.setColumns(columns); + tf.doneWithParameters(); + Table table = new FunctionTable(mainSchema, session, tf, tf); + return new TableFilter(session, table, null, + rightsChecked, currentSelect, orderInFrom, + null); + } + + private Call parseCall() { + Call command = new Call(session); + currentPrepared = command; + command.setExpression(readExpression()); + return command; + } + + private CreateRole parseCreateRole() { + CreateRole command = new CreateRole(session); + command.setIfNotExists(readIfNotExists()); + command.setRoleName(readUniqueIdentifier()); + return command; + } + + private CreateSchema parseCreateSchema() { + CreateSchema command = new CreateSchema(session); + command.setIfNotExists(readIfNotExists()); + command.setSchemaName(readUniqueIdentifier()); + if (readIf("AUTHORIZATION")) { + command.setAuthorization(readUniqueIdentifier()); + } else { + command.setAuthorization(session.getUser().getName()); + } + if (readIf("WITH")) { + 
command.setTableEngineParams(readTableEngineParams()); + } + return command; + } + + private ArrayList readTableEngineParams() { + ArrayList tableEngineParams = New.arrayList(); + do { + tableEngineParams.add(readUniqueIdentifier()); + } while (readIf(",")); + return tableEngineParams; + } + + private CreateSequence parseCreateSequence() { + boolean ifNotExists = readIfNotExists(); + String sequenceName = readIdentifierWithSchema(); + CreateSequence command = new CreateSequence(session, getSchema()); + command.setIfNotExists(ifNotExists); + command.setSequenceName(sequenceName); + while (true) { + if (readIf("START")) { + readIf("WITH"); + command.setStartWith(readExpression()); + } else if (readIf("INCREMENT")) { + readIf("BY"); + command.setIncrement(readExpression()); + } else if (readIf("MINVALUE")) { + command.setMinValue(readExpression()); + } else if (readIf("NOMINVALUE")) { + command.setMinValue(null); + } else if (readIf("MAXVALUE")) { + command.setMaxValue(readExpression()); + } else if (readIf("NOMAXVALUE")) { + command.setMaxValue(null); + } else if (readIf("CYCLE")) { + command.setCycle(true); + } else if (readIf("NOCYCLE")) { + command.setCycle(false); + } else if (readIf("NO")) { + if (readIf("MINVALUE")) { + command.setMinValue(null); + } else if (readIf("MAXVALUE")) { + command.setMaxValue(null); + } else if (readIf("CYCLE")) { + command.setCycle(false); + } else if (readIf("CACHE")) { + command.setCacheSize(ValueExpression.get(ValueLong.get(1))); + } else { + break; + } + } else if (readIf("CACHE")) { + command.setCacheSize(readExpression()); + } else if (readIf("NOCACHE")) { + command.setCacheSize(ValueExpression.get(ValueLong.get(1))); + } else if (readIf("BELONGS_TO_TABLE")) { + command.setBelongsToTable(true); + } else if (readIf("ORDER")) { + // Oracle compatibility + } else { + break; + } + } + return command; + } + + private boolean readIfNotExists() { + if (readIf("IF")) { + read("NOT"); + read("EXISTS"); + return true; + } + return false; 
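The CREATE SEQUENCE parser above is a single option loop that keeps calling `readIf` until nothing matches, accepting both the compact spellings (NOMINVALUE, NOCYCLE) and the SQL-standard two-word NO MINVALUE / NO CYCLE forms. A rough standalone model of that loop, assuming a trivial whitespace tokenizer (`SequenceOptionsSketch` and its map of settings are illustrative, not H2's API):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the CREATE SEQUENCE option loop; not H2's real classes.
public class SequenceOptionsSketch {
    private final Deque<String> toks;

    SequenceOptionsSketch(String options) {
        toks = new ArrayDeque<>(Arrays.asList(options.trim().toUpperCase().split("\\s+")));
    }

    private boolean readIf(String s) {
        if (s.equals(toks.peek())) {
            toks.poll();
            return true;
        }
        return false;
    }

    /** Null values model setMinValue(null) etc. for the NO ... forms. */
    public Map<String, Object> parse() {
        Map<String, Object> opts = new HashMap<>();
        while (true) {
            if (readIf("START")) {
                readIf("WITH");                  // WITH is optional
                opts.put("start", Long.parseLong(toks.poll()));
            } else if (readIf("INCREMENT")) {
                readIf("BY");                    // BY is optional
                opts.put("increment", Long.parseLong(toks.poll()));
            } else if (readIf("MINVALUE")) {
                opts.put("min", Long.parseLong(toks.poll()));
            } else if (readIf("NOMINVALUE")) {
                opts.put("min", null);
            } else if (readIf("CYCLE")) {
                opts.put("cycle", true);
            } else if (readIf("NOCYCLE")) {
                opts.put("cycle", false);
            } else if (readIf("NO")) {           // SQL-standard two-word spellings
                if (readIf("MINVALUE")) {
                    opts.put("min", null);
                } else if (readIf("CYCLE")) {
                    opts.put("cycle", false);
                } else {
                    break;
                }
            } else {
                break;
            }
        }
        return opts;
    }
}
```

The loop terminates on the first unrecognized token, exactly like the `break` arms in `parseCreateSequence`, which lets the caller continue parsing whatever follows the option list.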
+    }
+
+    private boolean readIfAffinity() {
+        return readIf("AFFINITY") || readIf("SHARD");
+    }
+
+    private CreateConstant parseCreateConstant() {
+        boolean ifNotExists = readIfNotExists();
+        String constantName = readIdentifierWithSchema();
+        Schema schema = getSchema();
+        if (isKeyword(constantName)) {
+            throw DbException.get(ErrorCode.CONSTANT_ALREADY_EXISTS_1,
+                    constantName);
+        }
+        read("VALUE");
+        Expression expr = readExpression();
+        CreateConstant command = new CreateConstant(session, schema);
+        command.setConstantName(constantName);
+        command.setExpression(expr);
+        command.setIfNotExists(ifNotExists);
+        return command;
+    }
+
+    private CreateAggregate parseCreateAggregate(boolean force) {
+        boolean ifNotExists = readIfNotExists();
+        CreateAggregate command = new CreateAggregate(session);
+        command.setForce(force);
+        String name = readIdentifierWithSchema();
+        if (isKeyword(name) || Function.getFunction(database, name) != null ||
+                getAggregateType(name) != null) {
+            throw DbException.get(ErrorCode.FUNCTION_ALIAS_ALREADY_EXISTS_1,
+                    name);
+        }
+        command.setName(name);
+        command.setSchema(getSchema());
+        command.setIfNotExists(ifNotExists);
+        read("FOR");
+        command.setJavaClassMethod(readUniqueIdentifier());
+        return command;
+    }
+
+    private CreateUserDataType parseCreateUserDataType() {
+        boolean ifNotExists = readIfNotExists();
+        CreateUserDataType command = new CreateUserDataType(session);
+        command.setTypeName(readUniqueIdentifier());
+        read("AS");
+        Column col = parseColumnForTable("VALUE", true);
+        if (readIf("CHECK")) {
+            Expression expr = readExpression();
+            col.addCheckConstraint(session, expr);
+        }
+        col.rename(null);
+        command.setColumn(col);
+        command.setIfNotExists(ifNotExists);
+        return command;
+    }
+
+    private CreateTrigger parseCreateTrigger(boolean force) {
+        boolean ifNotExists = readIfNotExists();
+        String triggerName = readIdentifierWithSchema(null);
+        Schema schema = getSchema();
+        boolean insteadOf, isBefore;
+        if (readIf("INSTEAD")) {
read("OF"); + isBefore = true; + insteadOf = true; + } else if (readIf("BEFORE")) { + insteadOf = false; + isBefore = true; + } else { + read("AFTER"); + insteadOf = false; + isBefore = false; + } + int typeMask = 0; + boolean onRollback = false; + do { + if (readIf("INSERT")) { + typeMask |= Trigger.INSERT; + } else if (readIf("UPDATE")) { + typeMask |= Trigger.UPDATE; + } else if (readIf("DELETE")) { + typeMask |= Trigger.DELETE; + } else if (readIf("SELECT")) { + typeMask |= Trigger.SELECT; + } else if (readIf("ROLLBACK")) { + onRollback = true; + } else { + throw getSyntaxError(); + } + } while (readIf(",")); + read("ON"); + String tableName = readIdentifierWithSchema(); + checkSchema(schema); + CreateTrigger command = new CreateTrigger(session, getSchema()); + command.setForce(force); + command.setTriggerName(triggerName); + command.setIfNotExists(ifNotExists); + command.setInsteadOf(insteadOf); + command.setBefore(isBefore); + command.setOnRollback(onRollback); + command.setTypeMask(typeMask); + command.setTableName(tableName); + if (readIf("FOR")) { + read("EACH"); + read("ROW"); + command.setRowBased(true); + } else { + command.setRowBased(false); + } + if (readIf("QUEUE")) { + command.setQueueSize(readPositiveInt()); + } + command.setNoWait(readIf("NOWAIT")); + if (readIf("AS")) { + command.setTriggerSource(readString()); + } else { + read("CALL"); + command.setTriggerClassName(readUniqueIdentifier()); + } + return command; + } + + private CreateUser parseCreateUser() { + CreateUser command = new CreateUser(session); + command.setIfNotExists(readIfNotExists()); + command.setUserName(readUniqueIdentifier()); + command.setComment(readCommentIf()); + if (readIf("PASSWORD")) { + command.setPassword(readExpression()); + } else if (readIf("SALT")) { + command.setSalt(readExpression()); + read("HASH"); + command.setHash(readExpression()); + } else if (readIf("IDENTIFIED")) { + read("BY"); + // uppercase if not quoted + 
command.setPassword(ValueExpression.get(ValueString + .get(readColumnIdentifier()))); + } else { + throw getSyntaxError(); + } + if (readIf("ADMIN")) { + command.setAdmin(true); + } + return command; + } + + private CreateFunctionAlias parseCreateFunctionAlias(boolean force) { + boolean ifNotExists = readIfNotExists(); + String aliasName; + if (currentTokenType != IDENTIFIER) { + aliasName = currentToken; + read(); + schemaName = session.getCurrentSchemaName(); + } else { + aliasName = readIdentifierWithSchema(); + } + final boolean newAliasSameNameAsBuiltin = Function.getFunction(database, aliasName) != null; + if (database.isAllowBuiltinAliasOverride() && newAliasSameNameAsBuiltin) { + // fine + } else if (isKeyword(aliasName) || + newAliasSameNameAsBuiltin || + getAggregateType(aliasName) != null) { + throw DbException.get(ErrorCode.FUNCTION_ALIAS_ALREADY_EXISTS_1, + aliasName); + } + CreateFunctionAlias command = new CreateFunctionAlias(session, + getSchema()); + command.setForce(force); + command.setAliasName(aliasName); + command.setIfNotExists(ifNotExists); + command.setDeterministic(readIf("DETERMINISTIC")); + command.setBufferResultSetToLocalTemp(!readIf("NOBUFFER")); + if (readIf("AS")) { + command.setSource(readString()); + } else { + read("FOR"); + command.setJavaClassMethod(readUniqueIdentifier()); + } + return command; + } + + private Prepared parseWith() { + List viewsCreated = new ArrayList<>(); + readIf("RECURSIVE"); + + // this WITH statement might not be a temporary view - allow optional keyword to + // tell us that this keyword. This feature will not be documented - H2 internal use only. 
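`parseWith` (begun above) collects every CTE view it creates and, as its in-method comment notes, registers them for cleanup in reverse creation order, since a later view may depend on an earlier one. A small self-contained model of that LIFO destruction rule (the registry class is hypothetical, not H2 code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical registry modelling the "destroy CTE views in reverse order" rule.
public class CteCleanupSketch {
    // view name -> name of the earlier view it depends on (null if none)
    private final Map<String, String> live = new LinkedHashMap<>();

    public void create(String name, String dependsOn) {
        live.put(name, dependsOn);
    }

    // refuses to drop a view that another live view still depends on
    public void drop(String name) {
        for (Map.Entry<String, String> e : live.entrySet()) {
            if (name.equals(e.getValue())) {
                throw new IllegalStateException(e.getKey() + " still depends on " + name);
            }
        }
        live.remove(name);
    }

    // mirrors Collections.reverse(viewsCreated) in parseWith
    public void dropAllInReverse() {
        List<String> names = new ArrayList<>(live.keySet());
        Collections.reverse(names);
        for (String n : names) {
            drop(n);
        }
    }

    public int liveCount() {
        return live.size();
    }
}
```

Dropping in creation order would fail as soon as a dependent view is still alive; reversing the list sidesteps that, which is why `parseWith` reverses `viewsCreated` before handing it to `setCteCleanups`.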
+ boolean isPersistent = readIf("PERSISTENT"); + + // this WITH statement is not a temporary view - it is part of a persistent view + // as in CREATE VIEW abc AS WITH my_cte - this auto detects that condition + if (session.isParsingCreateView()) { + isPersistent = true; + } + + do { + viewsCreated.add(parseSingleCommonTableExpression(isPersistent)); + } while (readIf(",")); + + Prepared p = null; + // reverse the order of constructed CTE views - as the destruction order + // (since later created view may depend on previously created views - + // we preserve that dependency order in the destruction sequence ) + // used in setCteCleanups + Collections.reverse(viewsCreated); + + if (isToken("SELECT")) { + Query query = parseSelectUnion(); + query.setPrepareAlways(true); + query.setNeverLazy(true); + p = query; + } else if (readIf("INSERT")) { + p = parseInsert(); + p.setPrepareAlways(true); + } else if (readIf("UPDATE")) { + p = parseUpdate(); + p.setPrepareAlways(true); + } else if (readIf("MERGE")) { + p = parseMerge(); + p.setPrepareAlways(true); + } else if (readIf("DELETE")) { + p = parseDelete(); + p.setPrepareAlways(true); + } else if (readIf("CREATE")) { + if (!isToken("TABLE")) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, + WITH_STATEMENT_SUPPORTS_LIMITED_SUB_STATEMENTS); + + } + p = parseCreate(); + p.setPrepareAlways(true); + } else { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, + WITH_STATEMENT_SUPPORTS_LIMITED_SUB_STATEMENTS); + } + + // clean up temporary views starting with last to first (in case of + // dependencies) - but only if they are not persistent + if (!isPersistent) { + p.setCteCleanups(viewsCreated); + } + return p; + } + + private TableView parseSingleCommonTableExpression(boolean isPersistent) { + String cteViewName = readIdentifierWithSchema(); + Schema schema = getSchema(); + Table recursiveTable = null; + ArrayList columns = New.arrayList(); + String[] cols = null; + + // column names are now optional - they can be inferred 
from the named + // query, if not supplied by user + if (readIf("(")) { + cols = parseColumnList(); + for (String c : cols) { + // we don't really know the type of the column, so STRING will + // have to do, UNKNOWN does not work here + columns.add(new Column(c, Value.STRING)); + } + } + + Table oldViewFound = null; + if (isPersistent) { + oldViewFound = getSchema().findTableOrView(session, cteViewName); + } else { + oldViewFound = session.findLocalTempTable(cteViewName); + } + // this persistent check conflicts with check 10 lines down + if (oldViewFound != null) { + if (!(oldViewFound instanceof TableView)) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, + cteViewName); + } + TableView tv = (TableView) oldViewFound; + if (!tv.isTableExpression()) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, + cteViewName); + } + if (isPersistent) { + oldViewFound.lock(session, true, true); + database.removeSchemaObject(session, oldViewFound); + + } else { + session.removeLocalTempTable(oldViewFound); + } + oldViewFound = null; + } + /* + * This table is created as a workaround because recursive table + * expressions need to reference something that look like themselves to + * work (its removed after creation in this method). Only create table + * data and table if we don't have a working CTE already. 
+ */ + recursiveTable = TableView.createShadowTableForRecursiveTableExpression( + isPersistent, session, cteViewName, schema, columns, database); + List columnTemplateList; + String[] querySQLOutput = {null}; + try { + read("AS"); + read("("); + Query withQuery = parseSelect(); + if (isPersistent) { + withQuery.session = session; + } + read(")"); + columnTemplateList = TableView.createQueryColumnTemplateList(cols, withQuery, querySQLOutput); + + } finally { + TableView.destroyShadowTableForRecursiveExpression(isPersistent, session, recursiveTable); + } + + return createCTEView(cteViewName, + querySQLOutput[0], columnTemplateList, + true/* allowRecursiveQueryDetection */, + true/* add to session */, + isPersistent, session); + } + + private TableView createCTEView(String cteViewName, String querySQL, + List columnTemplateList, boolean allowRecursiveQueryDetection, + boolean addViewToSession, boolean isPersistent, Session targetSession) { + Database db = targetSession.getDatabase(); + Schema schema = getSchemaWithDefault(); + int id = db.allocateObjectId(); + Column[] columnTemplateArray = columnTemplateList.toArray(new Column[0]); + + // No easy way to determine if this is a recursive query up front, so we just compile + // it twice - once without the flag set, and if we didn't see a recursive term, + // then we just compile it again. 
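The comment above explains that there is no cheap way to know up front whether a CTE query is recursive, so the view is built optimistically with recursion detection enabled and rebuilt once with the flag off if no recursive term was seen. A simplified standalone model of that two-pass strategy (`compile` is a stand-in for the TableView constructor, and its self-reference check is deliberately naive):

```java
// Simplified model of the compile-twice recursion detection described above.
// compile() is a stand-in for the TableView constructor, not H2's API.
public class TwoPassCompileSketch {

    public static final class Compiled {
        public final boolean recursive;
        public final String plan;

        Compiled(boolean recursive, String plan) {
            this.recursive = recursive;
            this.plan = plan;
        }
    }

    // detection only runs when the caller allows it (naive self-reference check)
    static Compiled compile(String viewName, String sql, boolean allowRecursive) {
        boolean selfRef = allowRecursive && sql.contains(viewName);
        return new Compiled(selfRef, (selfRef ? "recursive:" : "plain:") + sql);
    }

    /** First pass assumes the query may be recursive; recompile if it was not. */
    public static Compiled compileTwice(String viewName, String sql) {
        Compiled first = compile(viewName, sql, true);
        if (!first.recursive) {
            // no recursive term seen: rebuild without the recursive machinery
            return compile(viewName, sql, false);
        }
        return first;
    }
}
```

The real method additionally has to unregister the first TableView (and release meta locks) before constructing the second, which is what the addSchemaObject/removeSchemaObject sequence inside the synchronized block does.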
+        TableView view;
+        synchronized (targetSession) {
+            view = new TableView(schema, id, cteViewName, querySQL,
+                    parameters, columnTemplateArray, targetSession,
+                    allowRecursiveQueryDetection, false /* literalsChecked */, true /* isTableExpression */,
+                    isPersistent);
+            if (!view.isRecursiveQueryDetected() && allowRecursiveQueryDetection) {
+                if (isPersistent) {
+                    db.addSchemaObject(targetSession, view);
+                    view.lock(targetSession, true, true);
+                    targetSession.getDatabase().removeSchemaObject(targetSession, view);
+                } else {
+                    session.removeLocalTempTable(view);
+                }
+                view = new TableView(schema, id, cteViewName, querySQL, parameters,
+                        columnTemplateArray, targetSession,
+                        false/* assume recursive */, false /* literalsChecked */, true /* isTableExpression */,
+                        isPersistent);
+            }
+            // both removeSchemaObject and removeLocalTempTable hold meta locks
+            targetSession.getDatabase().unlockMeta(targetSession);
+        }
+        view.setTableExpression(true);
+        view.setTemporary(!isPersistent);
+        view.setHidden(true);
+        view.setOnCommitDrop(false);
+        if (addViewToSession) {
+            if (isPersistent) {
+                db.addSchemaObject(targetSession, view);
+                view.unlock(targetSession);
+                db.unlockMeta(targetSession);
+            } else {
+                targetSession.addLocalTempTable(view);
+            }
+        }
+        return view;
+    }
+
+    private CreateView parseCreateView(boolean force, boolean orReplace) {
+        boolean ifNotExists = readIfNotExists();
+        boolean isTableExpression = readIf("TABLE_EXPRESSION");
+        String viewName = readIdentifierWithSchema();
+        CreateView command = new CreateView(session, getSchema());
+        this.createView = command;
+        command.setViewName(viewName);
+        command.setIfNotExists(ifNotExists);
+        command.setComment(readCommentIf());
+        command.setOrReplace(orReplace);
+        command.setForce(force);
+        command.setTableExpression(isTableExpression);
+        if (readIf("(")) {
+            String[] cols = parseColumnList();
+            command.setColumnNames(cols);
+        }
+        String select = StringUtils.cache(sqlCommand
+                .substring(parseIndex));
+        read("AS");
+        try
{ + Query query; + session.setParsingCreateView(true, viewName); + try { + query = parseSelect(); + query.prepare(); + } finally { + session.setParsingCreateView(false, viewName); + } + command.setSelect(query); + } catch (DbException e) { + if (force) { + command.setSelectSQL(select); + while (currentTokenType != END) { + read(); + } + } else { + throw e; + } + } + return command; + } + + private TransactionCommand parseCheckpoint() { + TransactionCommand command; + if (readIf("SYNC")) { + command = new TransactionCommand(session, + CommandInterface.CHECKPOINT_SYNC); + } else { + command = new TransactionCommand(session, + CommandInterface.CHECKPOINT); + } + return command; + } + + private Prepared parseAlter() { + if (readIf("TABLE")) { + return parseAlterTable(); + } else if (readIf("USER")) { + return parseAlterUser(); + } else if (readIf("INDEX")) { + return parseAlterIndex(); + } else if (readIf("SCHEMA")) { + return parseAlterSchema(); + } else if (readIf("SEQUENCE")) { + return parseAlterSequence(); + } else if (readIf("VIEW")) { + return parseAlterView(); + } + throw getSyntaxError(); + } + + private void checkSchema(Schema old) { + if (old != null && getSchema() != old) { + throw DbException.get(ErrorCode.SCHEMA_NAME_MUST_MATCH); + } + } + + private AlterIndexRename parseAlterIndex() { + boolean ifExists = readIfExists(false); + String indexName = readIdentifierWithSchema(); + Schema old = getSchema(); + AlterIndexRename command = new AlterIndexRename(session); + command.setOldSchema(old); + command.setOldName(indexName); + command.setIfExists(ifExists); + read("RENAME"); + read("TO"); + String newName = readIdentifierWithSchema(old.getName()); + checkSchema(old); + command.setNewName(newName); + return command; + } + + private AlterView parseAlterView() { + AlterView command = new AlterView(session); + boolean ifExists = readIfExists(false); + command.setIfExists(ifExists); + String viewName = readIdentifierWithSchema(); + Table tableView = 
getSchema().findTableOrView(session, viewName); + if (!(tableView instanceof TableView) && !ifExists) { + throw DbException.get(ErrorCode.VIEW_NOT_FOUND_1, viewName); + } + TableView view = (TableView) tableView; + command.setView(view); + read("RECOMPILE"); + return command; + } + + private Prepared parseAlterSchema() { + boolean ifExists = readIfExists(false); + String schemaName = readIdentifierWithSchema(); + Schema old = getSchema(); + read("RENAME"); + read("TO"); + String newName = readIdentifierWithSchema(old.getName()); + Schema schema = findSchema(schemaName); + if (schema == null) { + if (ifExists) { + return new NoOperation(session); + } + throw DbException.get(ErrorCode.SCHEMA_NOT_FOUND_1, schemaName); + } + AlterSchemaRename command = new AlterSchemaRename(session); + command.setOldSchema(schema); + checkSchema(old); + command.setNewName(newName); + return command; + } + + private AlterSequence parseAlterSequence() { + boolean ifExists = readIfExists(false); + String sequenceName = readIdentifierWithSchema(); + AlterSequence command = new AlterSequence(session, getSchema()); + command.setSequenceName(sequenceName); + command.setIfExists(ifExists); + while (true) { + if (readIf("RESTART")) { + read("WITH"); + command.setStartWith(readExpression()); + } else if (readIf("INCREMENT")) { + read("BY"); + command.setIncrement(readExpression()); + } else if (readIf("MINVALUE")) { + command.setMinValue(readExpression()); + } else if (readIf("NOMINVALUE")) { + command.setMinValue(null); + } else if (readIf("MAXVALUE")) { + command.setMaxValue(readExpression()); + } else if (readIf("NOMAXVALUE")) { + command.setMaxValue(null); + } else if (readIf("CYCLE")) { + command.setCycle(true); + } else if (readIf("NOCYCLE")) { + command.setCycle(false); + } else if (readIf("NO")) { + if (readIf("MINVALUE")) { + command.setMinValue(null); + } else if (readIf("MAXVALUE")) { + command.setMaxValue(null); + } else if (readIf("CYCLE")) { + command.setCycle(false); + } else if 
+                (readIf("CACHE")) {
+                    command.setCacheSize(ValueExpression.get(ValueLong.get(1)));
+                } else {
+                    break;
+                }
+            } else if (readIf("CACHE")) {
+                command.setCacheSize(readExpression());
+            } else if (readIf("NOCACHE")) {
+                command.setCacheSize(ValueExpression.get(ValueLong.get(1)));
+            } else {
+                break;
+            }
+        }
+        return command;
+    }
+
+    private AlterUser parseAlterUser() {
+        String userName = readUniqueIdentifier();
+        if (readIf("SET")) {
+            AlterUser command = new AlterUser(session);
+            command.setType(CommandInterface.ALTER_USER_SET_PASSWORD);
+            command.setUser(database.getUser(userName));
+            if (readIf("PASSWORD")) {
+                command.setPassword(readExpression());
+            } else if (readIf("SALT")) {
+                command.setSalt(readExpression());
+                read("HASH");
+                command.setHash(readExpression());
+            } else {
+                throw getSyntaxError();
+            }
+            return command;
+        } else if (readIf("RENAME")) {
+            read("TO");
+            AlterUser command = new AlterUser(session);
+            command.setType(CommandInterface.ALTER_USER_RENAME);
+            command.setUser(database.getUser(userName));
+            String newName = readUniqueIdentifier();
+            command.setNewName(newName);
+            return command;
+        } else if (readIf("ADMIN")) {
+            AlterUser command = new AlterUser(session);
+            command.setType(CommandInterface.ALTER_USER_ADMIN);
+            User user = database.getUser(userName);
+            command.setUser(user);
+            if (readIf("TRUE")) {
+                command.setAdmin(true);
+            } else if (readIf("FALSE")) {
+                command.setAdmin(false);
+            } else {
+                throw getSyntaxError();
+            }
+            return command;
+        }
+        throw getSyntaxError();
+    }
+
+    private void readIfEqualOrTo() {
+        if (!readIf("=")) {
+            readIf("TO");
+        }
+    }
+
+    private Prepared parseSet() {
+        if (readIf("@")) {
+            Set command = new Set(session, SetTypes.VARIABLE);
+            command.setString(readAliasIdentifier());
+            readIfEqualOrTo();
+            command.setExpression(readExpression());
+            return command;
+        } else if (readIf("AUTOCOMMIT")) {
+            readIfEqualOrTo();
+            boolean value = readBooleanSetting();
+            int setting = value ? CommandInterface.SET_AUTOCOMMIT_TRUE
+                    : CommandInterface.SET_AUTOCOMMIT_FALSE;
+            return new TransactionCommand(session, setting);
+        } else if (readIf("MVCC")) {
+            readIfEqualOrTo();
+            boolean value = readBooleanSetting();
+            Set command = new Set(session, SetTypes.MVCC);
+            command.setInt(value ? 1 : 0);
+            return command;
+        } else if (readIf("EXCLUSIVE")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.EXCLUSIVE);
+            command.setExpression(readExpression());
+            return command;
+        } else if (readIf("IGNORECASE")) {
+            readIfEqualOrTo();
+            boolean value = readBooleanSetting();
+            Set command = new Set(session, SetTypes.IGNORECASE);
+            command.setInt(value ? 1 : 0);
+            return command;
+        } else if (readIf("PASSWORD")) {
+            readIfEqualOrTo();
+            AlterUser command = new AlterUser(session);
+            command.setType(CommandInterface.ALTER_USER_SET_PASSWORD);
+            command.setUser(session.getUser());
+            command.setPassword(readExpression());
+            return command;
+        } else if (readIf("SALT")) {
+            readIfEqualOrTo();
+            AlterUser command = new AlterUser(session);
+            command.setType(CommandInterface.ALTER_USER_SET_PASSWORD);
+            command.setUser(session.getUser());
+            command.setSalt(readExpression());
+            read("HASH");
+            command.setHash(readExpression());
+            return command;
+        } else if (readIf("MODE")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.MODE);
+            command.setString(readAliasIdentifier());
+            return command;
+        } else if (readIf("COMPRESS_LOB")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.COMPRESS_LOB);
+            if (currentTokenType == VALUE) {
+                command.setString(readString());
+            } else {
+                command.setString(readUniqueIdentifier());
+            }
+            return command;
+        } else if (readIf("DATABASE")) {
+            readIfEqualOrTo();
+            read("COLLATION");
+            return parseSetCollation();
+        } else if (readIf("COLLATION")) {
+            readIfEqualOrTo();
+            return parseSetCollation();
+        } else if (readIf("BINARY_COLLATION")) {
+            readIfEqualOrTo();
+            return parseSetBinaryCollation();
+        } else if (readIf("CLUSTER")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.CLUSTER);
+            command.setString(readString());
+            return command;
+        } else if (readIf("DATABASE_EVENT_LISTENER")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.DATABASE_EVENT_LISTENER);
+            command.setString(readString());
+            return command;
+        } else if (readIf("ALLOW_LITERALS")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.ALLOW_LITERALS);
+            if (readIf("NONE")) {
+                command.setInt(Constants.ALLOW_LITERALS_NONE);
+            } else if (readIf("ALL")) {
+                command.setInt(Constants.ALLOW_LITERALS_ALL);
+            } else if (readIf("NUMBERS")) {
+                command.setInt(Constants.ALLOW_LITERALS_NUMBERS);
+            } else {
+                command.setInt(readPositiveInt());
+            }
+            return command;
+        } else if (readIf("DEFAULT_TABLE_TYPE")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.DEFAULT_TABLE_TYPE);
+            if (readIf("MEMORY")) {
+                command.setInt(Table.TYPE_MEMORY);
+            } else if (readIf("CACHED")) {
+                command.setInt(Table.TYPE_CACHED);
+            } else {
+                command.setInt(readPositiveInt());
+            }
+            return command;
+        } else if (readIf("CREATE")) {
+            readIfEqualOrTo();
+            // Derby compatibility (CREATE=TRUE in the database URL)
+            read();
+            return new NoOperation(session);
+        } else if (readIf("HSQLDB.DEFAULT_TABLE_TYPE")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("PAGE_STORE")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("CACHE_TYPE")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("FILE_LOCK")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("DB_CLOSE_ON_EXIT")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("AUTO_SERVER")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("AUTO_SERVER_PORT")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("AUTO_RECONNECT")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("ASSERT")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("ACCESS_MODE_DATA")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("OPEN_NEW")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("JMX")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("PAGE_SIZE")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("RECOVER")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("NAMES")) {
+            // Quercus PHP MySQL driver compatibility
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("SCOPE_GENERATED_KEYS")) {
+            readIfEqualOrTo();
+            read();
+            return new NoOperation(session);
+        } else if (readIf("SCHEMA")) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.SCHEMA);
+            command.setString(readAliasIdentifier());
+            return command;
+        } else if (readIf("DATESTYLE")) {
+            // PostgreSQL compatibility
+            readIfEqualOrTo();
+            if (!readIf("ISO")) {
+                String s = readString();
+                if (!equalsToken(s, "ISO")) {
+                    throw getSyntaxError();
+                }
+            }
+            return new NoOperation(session);
+        } else if (readIf("SEARCH_PATH") ||
+                readIf(SetTypes.getTypeName(SetTypes.SCHEMA_SEARCH_PATH))) {
+            readIfEqualOrTo();
+            Set command = new Set(session, SetTypes.SCHEMA_SEARCH_PATH);
+            ArrayList<String> list = New.arrayList();
+            list.add(readAliasIdentifier());
+            while (readIf(",")) {
+                list.add(readAliasIdentifier());
+            }
+            command.setStringArray(list.toArray(new String[0]));
+            return command;
+        } else if (readIf("JAVA_OBJECT_SERIALIZER")) {
+            readIfEqualOrTo();
+            return parseSetJavaObjectSerializer();
+        } else {
+            if (isToken("LOGSIZE")) {
+                // HSQLDB compatibility
+                currentToken = SetTypes.getTypeName(SetTypes.MAX_LOG_SIZE);
+            }
+            if (isToken("FOREIGN_KEY_CHECKS")) {
+                // MySQL compatibility
+                currentToken = SetTypes
+                        .getTypeName(SetTypes.REFERENTIAL_INTEGRITY);
+            }
+            int type = SetTypes.getType(currentToken);
+            if (type < 0) {
+                throw getSyntaxError();
+            }
+            read();
+            readIfEqualOrTo();
+            Set command = new Set(session, type);
+            command.setExpression(readExpression());
+            return command;
+        }
+    }
+
+    private Prepared parseUse() {
+        readIfEqualOrTo();
+        Set command = new Set(session, SetTypes.SCHEMA);
+        command.setString(readAliasIdentifier());
+        return command;
+    }
+
+    private Set parseSetCollation() {
+        Set command = new Set(session, SetTypes.COLLATION);
+        String name = readAliasIdentifier();
+        command.setString(name);
+        if (equalsToken(name, CompareMode.OFF)) {
+            return command;
+        }
+        Collator coll = CompareMode.getCollator(name);
+        if (coll == null) {
+            throw DbException.getInvalidValueException("collation", name);
+        }
+        if (readIf("STRENGTH")) {
+            if (readIf("PRIMARY")) {
+                command.setInt(Collator.PRIMARY);
+            } else if (readIf("SECONDARY")) {
+                command.setInt(Collator.SECONDARY);
+            } else if (readIf("TERTIARY")) {
+                command.setInt(Collator.TERTIARY);
+            } else if (readIf("IDENTICAL")) {
+                command.setInt(Collator.IDENTICAL);
+            }
+        } else {
+            command.setInt(coll.getStrength());
+        }
+        return command;
+    }
+
+    private Set parseSetBinaryCollation() {
+        Set command = new Set(session, SetTypes.BINARY_COLLATION);
+        String name = readAliasIdentifier();
+        command.setString(name);
+        if (equalsToken(name, CompareMode.UNSIGNED) ||
+                equalsToken(name, CompareMode.SIGNED)) {
+            return command;
+        }
+        throw DbException.getInvalidValueException("BINARY_COLLATION", name);
+    }
+
+    private Set parseSetJavaObjectSerializer() {
+        Set command = new Set(session, SetTypes.JAVA_OBJECT_SERIALIZER);
+        String name = readString();
+        command.setString(name);
+        return command;
+    }
+
+    private RunScriptCommand parseRunScript() {
+        RunScriptCommand command = new RunScriptCommand(session);
+        read("FROM");
+        command.setFileNameExpr(readExpression());
+        if (readIf("COMPRESSION")) {
+            command.setCompressionAlgorithm(readUniqueIdentifier());
+        }
+        if (readIf("CIPHER")) {
+            command.setCipher(readUniqueIdentifier());
+            if (readIf("PASSWORD")) {
+                command.setPassword(readExpression());
+            }
+        }
+        if (readIf("CHARSET")) {
+            command.setCharset(Charset.forName(readString()));
+        }
+        return command;
+    }
+
+    private ScriptCommand parseScript() {
+        ScriptCommand command = new ScriptCommand(session);
+        boolean data = true, passwords = true, settings = true;
+        boolean dropTables = false, simple = false;
+        if (readIf("SIMPLE")) {
+            simple = true;
+        }
+        if (readIf("NODATA")) {
+            data = false;
+        }
+        if (readIf("NOPASSWORDS")) {
+            passwords = false;
+        }
+        if (readIf("NOSETTINGS")) {
+            settings = false;
+        }
+        if (readIf("DROP")) {
+            dropTables = true;
+        }
+        if (readIf("BLOCKSIZE")) {
+            long blockSize = readLong();
+            command.setLobBlockSize(blockSize);
+        }
+        command.setData(data);
+        command.setPasswords(passwords);
+        command.setSettings(settings);
+        command.setDrop(dropTables);
+        command.setSimple(simple);
+        if (readIf("TO")) {
+            command.setFileNameExpr(readExpression());
+            if (readIf("COMPRESSION")) {
+                command.setCompressionAlgorithm(readUniqueIdentifier());
+            }
+            if (readIf("CIPHER")) {
+                command.setCipher(readUniqueIdentifier());
+                if (readIf("PASSWORD")) {
+                    command.setPassword(readExpression());
+                }
+            }
+            if (readIf("CHARSET")) {
+                command.setCharset(Charset.forName(readString()));
+            }
+        }
+        if (readIf("SCHEMA")) {
+            HashSet<String> schemaNames = new HashSet<>();
+            do {
+                schemaNames.add(readUniqueIdentifier());
+            } while (readIf(","));
+            command.setSchemaNames(schemaNames);
+        } else if (readIf("TABLE")) {
+            ArrayList<Table> tables = New.arrayList();
+            do {
+                tables.add(readTableOrView());
+            } while (readIf(","));
+            command.setTables(tables);
+        }
+        return command;
+    }
+
+    private Table readTableOrView() {
+        return readTableOrView(readIdentifierWithSchema(null));
+    }
+
+    private Table readTableOrView(String tableName) {
+        // same algorithm as readSequence
+        if (schemaName != null) {
+            return getSchema().getTableOrView(session, tableName);
+        }
+        Table table = database.getSchema(session.getCurrentSchemaName())
+                .resolveTableOrView(session, tableName);
+        if (table != null) {
+            return table;
+        }
+        String[] schemaNames = session.getSchemaSearchPath();
+        if (schemaNames != null) {
+            for (String name : schemaNames) {
+                Schema s = database.getSchema(name);
+                table = s.resolveTableOrView(session, tableName);
+                if (table != null) {
+                    return table;
+                }
+            }
+        }
+        throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName);
+    }
+
+    private FunctionAlias findFunctionAlias(String schema, String aliasName) {
+        FunctionAlias functionAlias = database.getSchema(schema).findFunction(
+                aliasName);
+        if (functionAlias != null) {
+            return functionAlias;
+        }
+        String[] schemaNames = session.getSchemaSearchPath();
+        if (schemaNames != null) {
+            for (String n : schemaNames) {
+                functionAlias = database.getSchema(n).findFunction(aliasName);
+                if (functionAlias != null) {
+                    return functionAlias;
+                }
+            }
+        }
+        return null;
+    }
+
+    private Sequence findSequence(String schema, String sequenceName) {
+        Sequence sequence = database.getSchema(schema).findSequence(
+                sequenceName);
+        if (sequence != null) {
+            return sequence;
+        }
+        String[] schemaNames = session.getSchemaSearchPath();
+        if (schemaNames != null) {
+            for (String n : schemaNames) {
+                sequence = database.getSchema(n).findSequence(sequenceName);
+                if (sequence != null) {
+                    return sequence;
+                }
+            }
+        }
+        return null;
+    }
+
+    private Sequence readSequence() {
+        // same algorithm as readTableOrView
+        String sequenceName = readIdentifierWithSchema(null);
+        if (schemaName != null) {
+            return getSchema().getSequence(sequenceName);
+        }
+        Sequence sequence = findSequence(session.getCurrentSchemaName(),
+                sequenceName);
+        if (sequence != null) {
+            return sequence;
+        }
+        throw DbException.get(ErrorCode.SEQUENCE_NOT_FOUND_1, sequenceName);
+    }
+
+    private Prepared parseAlterTable() {
+        boolean ifTableExists = readIfExists(false);
+        String tableName = readIdentifierWithSchema();
+        Schema schema = getSchema();
+        if (readIf("ADD")) {
+            Prepared command = parseAlterTableAddConstraintIf(tableName,
+                    schema, ifTableExists);
+            if (command != null) {
+                return command;
+            }
+            return parseAlterTableAddColumn(tableName, schema, ifTableExists);
+        } else if (readIf("SET")) {
+            read("REFERENTIAL_INTEGRITY");
+            int type = CommandInterface.ALTER_TABLE_SET_REFERENTIAL_INTEGRITY;
+            boolean value = readBooleanSetting();
+            AlterTableSet command = new AlterTableSet(session,
+                    schema, type, value);
+            command.setTableName(tableName);
+            command.setIfTableExists(ifTableExists);
+            if (readIf("CHECK")) {
+                command.setCheckExisting(true);
+            } else if (readIf("NOCHECK")) {
+                command.setCheckExisting(false);
+            }
+            return command;
+        } else if (readIf("RENAME")) {
+            if (readIf("COLUMN")) {
+                // PostgreSQL syntax
+                String columnName = readColumnIdentifier();
+                read("TO");
+                AlterTableRenameColumn command = new AlterTableRenameColumn(
+                        session, schema);
+                command.setTableName(tableName);
+                command.setIfTableExists(ifTableExists);
+                command.setOldColumnName(columnName);
+                String newName = readColumnIdentifier();
+                command.setNewColumnName(newName);
+                return command;
+            } else if (readIf("CONSTRAINT")) {
+                String constraintName = readIdentifierWithSchema(schema.getName());
+                checkSchema(schema);
+                read("TO");
+                AlterTableRenameConstraint command = new AlterTableRenameConstraint(
+                        session, schema);
+                command.setConstraintName(constraintName);
+                String newName = readColumnIdentifier();
+                command.setNewConstraintName(newName);
+                return commandIfTableExists(schema, tableName, ifTableExists, command);
+            } else {
+                read("TO");
+                String newName = readIdentifierWithSchema(schema.getName());
+                checkSchema(schema);
+                AlterTableRename command = new AlterTableRename(session,
+                        getSchema());
+                command.setOldTableName(tableName);
+                command.setNewTableName(newName);
+                command.setIfTableExists(ifTableExists);
+                command.setHidden(readIf("HIDDEN"));
+                return command;
+            }
+        } else if (readIf("DROP")) {
+            if (readIf("CONSTRAINT")) {
+                boolean ifExists = readIfExists(false);
+                String constraintName = readIdentifierWithSchema(schema.getName());
+                ifExists = readIfExists(ifExists);
+                checkSchema(schema);
+                AlterTableDropConstraint command = new AlterTableDropConstraint(
+                        session, getSchema(), ifExists);
+                command.setConstraintName(constraintName);
+                return commandIfTableExists(schema, tableName, ifTableExists, command);
+            } else if (readIf("FOREIGN")) {
+                // MySQL compatibility
+                read("KEY");
+                String constraintName = readIdentifierWithSchema(schema.getName());
+                checkSchema(schema);
+                AlterTableDropConstraint command = new AlterTableDropConstraint(
+                        session, getSchema(), false);
+                command.setConstraintName(constraintName);
+                return commandIfTableExists(schema, tableName, ifTableExists, command);
+            } else if (readIf("INDEX")) {
+                // MySQL compatibility
+                String indexOrConstraintName = readIdentifierWithSchema();
+                final SchemaCommand command;
+                if (schema.findIndex(session, indexOrConstraintName) != null) {
+                    DropIndex dropIndexCommand = new DropIndex(session, getSchema());
+                    dropIndexCommand.setIndexName(indexOrConstraintName);
+                    command = dropIndexCommand;
+                } else {
+                    AlterTableDropConstraint dropCommand = new AlterTableDropConstraint(
+                            session, getSchema(), false/*ifExists*/);
+                    dropCommand.setConstraintName(indexOrConstraintName);
+                    command = dropCommand;
+                }
+                return commandIfTableExists(schema, tableName, ifTableExists, command);
+            } else if (readIf("PRIMARY")) {
+                read("KEY");
+                Table table = tableIfTableExists(schema, tableName, ifTableExists);
+                if (table == null) {
+                    return new NoOperation(session);
+                }
+                Index idx = table.getPrimaryKey();
+                DropIndex command = new DropIndex(session, schema);
+                command.setIndexName(idx.getName());
+                return command;
+            } else {
+                readIf("COLUMN");
+                boolean ifExists = readIfExists(false);
+                ArrayList<Column> columnsToRemove = New.arrayList();
+                Table table = tableIfTableExists(schema, tableName, ifTableExists);
+                // For Oracle compatibility - open bracket required
+                boolean openingBracketDetected = readIf("(");
+                do {
+                    String columnName = readColumnIdentifier();
+                    if (table != null) {
+                        if (!ifExists || table.doesColumnExist(columnName)) {
+                            Column column = table.getColumn(columnName);
+                            columnsToRemove.add(column);
+                        }
+                    }
+                } while (readIf(","));
+                if (openingBracketDetected) {
+                    // For Oracle compatibility - close bracket
+                    read(")");
+                }
+                if (table == null || columnsToRemove.isEmpty()) {
+                    return new NoOperation(session);
+                }
+                AlterTableAlterColumn command = new AlterTableAlterColumn(session, schema);
+                command.setType(CommandInterface.ALTER_TABLE_DROP_COLUMN);
+                command.setTableName(tableName);
+                command.setIfTableExists(ifTableExists);
+                command.setColumnsToRemove(columnsToRemove);
+                return command;
+            }
+        } else if (readIf("CHANGE")) {
+            // MySQL compatibility
+            readIf("COLUMN");
+            String columnName = readColumnIdentifier();
+            String newColumnName = readColumnIdentifier();
+            Column column = columnIfTableExists(schema, tableName, columnName, ifTableExists);
+            boolean nullable = column == null ? true : column.isNullable();
+            // new column type ignored. RENAME and MODIFY are
+            // a single command in MySQL but two different commands in H2.
+            parseColumnForTable(newColumnName, nullable);
+            AlterTableRenameColumn command = new AlterTableRenameColumn(session, schema);
+            command.setTableName(tableName);
+            command.setIfTableExists(ifTableExists);
+            command.setOldColumnName(columnName);
+            command.setNewColumnName(newColumnName);
+            return command;
+        } else if (readIf("MODIFY")) {
+            // MySQL compatibility (optional)
+            readIf("COLUMN");
+            // Oracle specifies (but will not require) an opening parenthesis
+            boolean hasOpeningBracket = readIf("(");
+            String columnName = readColumnIdentifier();
+            AlterTableAlterColumn command = null;
+            NullConstraintType nullConstraint = parseNotNullConstraint();
+            switch (nullConstraint) {
+            case NULL_IS_ALLOWED:
+            case NULL_IS_NOT_ALLOWED:
+                command = new AlterTableAlterColumn(session, schema);
+                command.setTableName(tableName);
+                command.setIfTableExists(ifTableExists);
+                Column column = columnIfTableExists(schema, tableName, columnName, ifTableExists);
+                command.setOldColumn(column);
+                if (nullConstraint == NullConstraintType.NULL_IS_ALLOWED) {
+                    command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_NULL);
+                } else {
+                    command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_NOT_NULL);
+                }
+                break;
+            case NO_NULL_CONSTRAINT_FOUND:
+                command = parseAlterTableAlterColumnType(schema, tableName, columnName, ifTableExists);
+                break;
+            default:
+                throw DbException.get(ErrorCode.UNKNOWN_MODE_1,
+                        "Internal Error - unhandled case: " + nullConstraint.name());
+            }
+            if (hasOpeningBracket) {
+                read(")");
+            }
+            return command;
+        } else if (readIf("ALTER")) {
+            readIf("COLUMN");
+            String columnName = readColumnIdentifier();
+            Column column = columnIfTableExists(schema, tableName, columnName, ifTableExists);
+            if (readIf("RENAME")) {
+                read("TO");
+                AlterTableRenameColumn command = new AlterTableRenameColumn(
+                        session, schema);
+                command.setTableName(tableName);
+                command.setIfTableExists(ifTableExists);
+                command.setOldColumnName(columnName);
+                String newName = readColumnIdentifier();
+                command.setNewColumnName(newName);
+                return command;
+            } else if (readIf("DROP")) {
+                // PostgreSQL compatibility
+                if (readIf("DEFAULT")) {
+                    AlterTableAlterColumn command = new AlterTableAlterColumn(
+                            session, schema);
+                    command.setTableName(tableName);
+                    command.setIfTableExists(ifTableExists);
+                    command.setOldColumn(column);
+                    command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_DEFAULT);
+                    command.setDefaultExpression(null);
+                    return command;
+                }
+                if (readIf("ON")) {
+                    read("UPDATE");
+                    AlterTableAlterColumn command = new AlterTableAlterColumn(session, schema);
+                    command.setTableName(tableName);
+                    command.setIfTableExists(ifTableExists);
+                    command.setOldColumn(column);
+                    command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_ON_UPDATE);
+                    command.setDefaultExpression(null);
+                    return command;
+                }
+                read("NOT");
+                read("NULL");
+                AlterTableAlterColumn command = new AlterTableAlterColumn(
+                        session, schema);
+                command.setTableName(tableName);
+                command.setIfTableExists(ifTableExists);
+                command.setOldColumn(column);
+                command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_NULL);
+                return command;
+            } else if (readIf("TYPE")) {
+                // PostgreSQL compatibility
+                return parseAlterTableAlterColumnType(schema, tableName,
+                        columnName, ifTableExists);
+            } else if (readIf("SET")) {
+                if (readIf("DATA")) {
+                    // Derby compatibility
+                    read("TYPE");
+                    return parseAlterTableAlterColumnType(schema, tableName, columnName,
+                            ifTableExists);
+                }
+                AlterTableAlterColumn command = new AlterTableAlterColumn(
+                        session, schema);
+                command.setTableName(tableName);
+                command.setIfTableExists(ifTableExists);
+                command.setOldColumn(column);
+                NullConstraintType nullConstraint = parseNotNullConstraint();
+                switch (nullConstraint) {
+                case NULL_IS_ALLOWED:
+                    command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_NULL);
+                    break;
+                case NULL_IS_NOT_ALLOWED:
+                    command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_NOT_NULL);
+                    break;
+                case NO_NULL_CONSTRAINT_FOUND:
+                    if (readIf("DEFAULT")) {
+                        Expression defaultExpression = readExpression();
+                        command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_DEFAULT);
+                        command.setDefaultExpression(defaultExpression);
+                    } else if (readIf("ON")) {
+                        read("UPDATE");
+                        Expression onUpdateExpression = readExpression();
+                        command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_ON_UPDATE);
+                        command.setDefaultExpression(onUpdateExpression);
+                    } else if (readIf("INVISIBLE")) {
+                        command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_VISIBILITY);
+                        command.setVisible(false);
+                    } else if (readIf("VISIBLE")) {
+                        command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_VISIBILITY);
+                        command.setVisible(true);
+                    }
+                    break;
+                default:
+                    throw DbException.get(ErrorCode.UNKNOWN_MODE_1,
+                            "Internal Error - unhandled case: " + nullConstraint.name());
+                }
+                return command;
+            } else if (readIf("RESTART")) {
+                readIf("WITH");
+                Expression start = readExpression();
+                AlterSequence command = new AlterSequence(session, schema);
+                command.setColumn(column);
+                command.setStartWith(start);
+                return commandIfTableExists(schema, tableName, ifTableExists, command);
+            } else if (readIf("SELECTIVITY")) {
+                AlterTableAlterColumn command = new AlterTableAlterColumn(
+                        session, schema);
+                command.setTableName(tableName);
+                command.setIfTableExists(ifTableExists);
+                command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_SELECTIVITY);
+                command.setOldColumn(column);
+                command.setSelectivity(readExpression());
+                return command;
+            } else {
+                return parseAlterTableAlterColumnType(schema, tableName,
+                        columnName, ifTableExists);
+            }
+        }
+        throw getSyntaxError();
+    }
+
+    private Table tableIfTableExists(Schema schema, String tableName, boolean ifTableExists) {
+        Table table = schema.resolveTableOrView(session, tableName);
+        if (table == null && !ifTableExists) {
+            throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName);
+        }
+        return table;
+    }
+
+    private Column columnIfTableExists(Schema schema, String tableName,
+            String columnName, boolean ifTableExists) {
+        Table table = tableIfTableExists(schema, tableName, ifTableExists);
+        return table == null ? null : table.getColumn(columnName);
+    }
+
+    private Prepared commandIfTableExists(Schema schema, String tableName,
+            boolean ifTableExists, Prepared commandIfTableExists) {
+        return tableIfTableExists(schema, tableName, ifTableExists) == null
+                ? new NoOperation(session)
+                : commandIfTableExists;
+    }
+
+    private AlterTableAlterColumn parseAlterTableAlterColumnType(Schema schema,
+            String tableName, String columnName, boolean ifTableExists) {
+        Column oldColumn = columnIfTableExists(schema, tableName, columnName, ifTableExists);
+        Column newColumn = parseColumnForTable(columnName,
+                oldColumn == null ? true : oldColumn.isNullable());
+        AlterTableAlterColumn command = new AlterTableAlterColumn(session,
+                schema);
+        command.setTableName(tableName);
+        command.setIfTableExists(ifTableExists);
+        command.setType(CommandInterface.ALTER_TABLE_ALTER_COLUMN_CHANGE_TYPE);
+        command.setOldColumn(oldColumn);
+        command.setNewColumn(newColumn);
+        return command;
+    }
+
+    private AlterTableAlterColumn parseAlterTableAddColumn(String tableName,
+            Schema schema, boolean ifTableExists) {
+        readIf("COLUMN");
+        AlterTableAlterColumn command = new AlterTableAlterColumn(session,
+                schema);
+        command.setType(CommandInterface.ALTER_TABLE_ADD_COLUMN);
+        command.setTableName(tableName);
+        command.setIfTableExists(ifTableExists);
+        if (readIf("(")) {
+            command.setIfNotExists(false);
+            do {
+                parseTableColumnDefinition(command, schema, tableName);
+            } while (readIfMore(true));
+        } else {
+            boolean ifNotExists = readIfNotExists();
+            command.setIfNotExists(ifNotExists);
+            parseTableColumnDefinition(command, schema, tableName);
+        }
+        if (readIf("BEFORE")) {
+            command.setAddBefore(readColumnIdentifier());
+        } else if (readIf("AFTER")) {
+            command.setAddAfter(readColumnIdentifier());
+        } else if (readIf("FIRST")) {
+            command.setAddFirst();
+        }
+        return command;
+    }
+
+    private ConstraintActionType parseAction() {
+        ConstraintActionType result = parseCascadeOrRestrict();
+        if (result != null) {
+            return result;
+        }
+        if (readIf("NO")) {
+            read("ACTION");
+            return ConstraintActionType.RESTRICT;
+        }
+        read("SET");
+        if (readIf("NULL")) {
+            return ConstraintActionType.SET_NULL;
+        }
+        read("DEFAULT");
+        return ConstraintActionType.SET_DEFAULT;
+    }
+
+    private ConstraintActionType parseCascadeOrRestrict() {
+        if (readIf("CASCADE")) {
+            return ConstraintActionType.CASCADE;
+        } else if (readIf("RESTRICT")) {
+            return ConstraintActionType.RESTRICT;
+        } else {
+            return null;
+        }
+    }
+
+    private DefineCommand parseAlterTableAddConstraintIf(String tableName,
+            Schema schema, boolean ifTableExists) {
+        String constraintName = null, comment = null;
+        boolean ifNotExists = false;
+        boolean allowIndexDefinition = database.getMode().indexDefinitionInCreateTable;
+        boolean allowAffinityKey = database.getMode().allowAffinityKey;
+        if (readIf("CONSTRAINT")) {
+            ifNotExists = readIfNotExists();
+            constraintName = readIdentifierWithSchema(schema.getName());
+            checkSchema(schema);
+            comment = readCommentIf();
+            allowIndexDefinition = true;
+        }
+        if (readIf("PRIMARY")) {
+            read("KEY");
+            AlterTableAddConstraint command = new AlterTableAddConstraint(
+                    session, schema, ifNotExists);
+            command.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_PRIMARY_KEY);
+            command.setComment(comment);
+            command.setConstraintName(constraintName);
+            command.setTableName(tableName);
+            command.setIfTableExists(ifTableExists);
+            if (readIf("HASH")) {
+                command.setPrimaryKeyHash(true);
+            }
+            read("(");
+            command.setIndexColumns(parseIndexColumnList());
+            if (readIf("INDEX")) {
+                String indexName = readIdentifierWithSchema();
+                command.setIndex(getSchema().findIndex(session, indexName));
+            }
+            return command;
+        } else if (allowIndexDefinition && (isToken("INDEX") || isToken("KEY"))) {
+            // MySQL
+            // need to read ahead, as it could be a column name
+            int start = lastParseIndex;
+            read();
+            if (DataType.getTypeByName(currentToken, database.getMode()) != null) {
+                // known data type
+                parseIndex = start;
+                read();
+                return null;
+            }
+            CreateIndex command = new CreateIndex(session, schema);
+            command.setComment(comment);
+            command.setTableName(tableName);
+            command.setIfTableExists(ifTableExists);
+            if (!readIf("(")) {
+                command.setIndexName(readUniqueIdentifier());
+                read("(");
+            }
+            command.setIndexColumns(parseIndexColumnList());
+            // MySQL compatibility
+            if (readIf("USING")) {
+                read("BTREE");
+            }
+            return command;
+        } else if (allowAffinityKey && readIfAffinity()) {
+            read("KEY");
+            read("(");
+            CreateIndex command = createAffinityIndex(schema, tableName, parseIndexColumnList());
+            command.setIfTableExists(ifTableExists);
+            return command;
+        }
+        AlterTableAddConstraint command;
+        if (readIf("CHECK")) {
+            command = new AlterTableAddConstraint(session, schema, ifNotExists);
+            command.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_CHECK);
+            command.setCheckExpression(readExpression());
+        } else if (readIf("UNIQUE")) {
+            readIf("KEY");
+            readIf("INDEX");
+            command = new AlterTableAddConstraint(session, schema, ifNotExists);
+            command.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_UNIQUE);
+            if (!readIf("(")) {
+                constraintName = readUniqueIdentifier();
+                read("(");
+            }
+            command.setIndexColumns(parseIndexColumnList());
+            if (readIf("INDEX")) {
+                String indexName = readIdentifierWithSchema();
+                command.setIndex(getSchema().findIndex(session, indexName));
+            }
+            // MySQL compatibility
+            if (readIf("USING")) {
+                read("BTREE");
+            }
+        } else if (readIf("FOREIGN")) {
+            command = new AlterTableAddConstraint(session, schema, ifNotExists);
+            command.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_REFERENTIAL);
+            read("KEY");
+            read("(");
+            command.setIndexColumns(parseIndexColumnList());
+            if (readIf("INDEX")) {
+                String indexName = readIdentifierWithSchema();
+                command.setIndex(schema.findIndex(session, indexName));
+            }
+            read("REFERENCES");
+            parseReferences(command, schema, tableName);
+        } else {
+            if (constraintName != null) {
+                throw getSyntaxError();
+            }
+            return null;
+        }
+        if (readIf("NOCHECK")) {
+            command.setCheckExisting(false);
+        } else {
+            readIf("CHECK");
+            command.setCheckExisting(true);
+        }
+        command.setTableName(tableName);
+        command.setIfTableExists(ifTableExists);
+        command.setConstraintName(constraintName);
+        command.setComment(comment);
+        return command;
+    }
+
+    private void parseReferences(AlterTableAddConstraint command,
+            Schema schema, String tableName) {
+        if (readIf("(")) {
+            command.setRefTableName(schema, tableName);
+            command.setRefIndexColumns(parseIndexColumnList());
+        } else {
+            String refTableName = readIdentifierWithSchema(schema.getName());
+            command.setRefTableName(getSchema(), refTableName);
+            if (readIf("(")) {
+                command.setRefIndexColumns(parseIndexColumnList());
+            }
+        }
+        if (readIf("INDEX")) {
+            String indexName = readIdentifierWithSchema();
+            command.setRefIndex(getSchema().findIndex(session, indexName));
+        }
+        while (readIf("ON")) {
+            if (readIf("DELETE")) {
+                command.setDeleteAction(parseAction());
+            } else {
+                read("UPDATE");
+                command.setUpdateAction(parseAction());
+            }
+        }
+        if (readIf("NOT")) {
+            read("DEFERRABLE");
+        } else {
+            readIf("DEFERRABLE");
+        }
+    }
+
+    private CreateLinkedTable parseCreateLinkedTable(boolean temp,
+            boolean globalTemp, boolean force) {
+        read("TABLE");
+        boolean ifNotExists = readIfNotExists();
+        String tableName = readIdentifierWithSchema();
+        CreateLinkedTable command = new CreateLinkedTable(session, getSchema());
+        command.setTemporary(temp);
+        command.setGlobalTemporary(globalTemp);
+        command.setForce(force);
+        command.setIfNotExists(ifNotExists);
+        command.setTableName(tableName);
+        command.setComment(readCommentIf());
+        read("(");
+        command.setDriver(readString());
+        read(",");
+        command.setUrl(readString());
+        read(",");
+        command.setUser(readString());
+        read(",");
+        command.setPassword(readString());
+        read(",");
+        String originalTable = readString();
+        if (readIf(",")) {
+            command.setOriginalSchema(originalTable);
+            originalTable = readString();
+        }
+        command.setOriginalTable(originalTable);
+        read(")");
+        if (readIf("EMIT")) {
+            read("UPDATES");
+            command.setEmitUpdates(true);
+        } else if (readIf("READONLY")) {
+            command.setReadOnly(true);
+        }
+        return command;
+    }
+
+    private CreateTable parseCreateTable(boolean temp, boolean globalTemp,
+            boolean persistIndexes) {
+        boolean ifNotExists = readIfNotExists();
+        String tableName = readIdentifierWithSchema();
+        if (temp && globalTemp && equalsToken("SESSION", schemaName)) {
+            // support weird syntax: declare global temporary table session.xy
+            // (...) not logged
+            schemaName = session.getCurrentSchemaName();
+            globalTemp = false;
+        }
+        Schema schema = getSchema();
+        CreateTable command = new CreateTable(session, schema);
+        command.setPersistIndexes(persistIndexes);
+        command.setTemporary(temp);
+        command.setGlobalTemporary(globalTemp);
+        command.setIfNotExists(ifNotExists);
+        command.setTableName(tableName);
+        command.setComment(readCommentIf());
+        if (readIf("(")) {
+            if (!readIf(")")) {
+                do {
+                    parseTableColumnDefinition(command, schema, tableName);
+                } while (readIfMore(false));
+            }
+        }
+        // Allows "COMMENT='comment'" in DDL statements (MySQL syntax)
+        if (readIf("COMMENT")) {
+            if (readIf("=")) {
+                // read the complete string comment, but do nothing with it for now
+                readString();
+            }
+        }
+        if (readIf("ENGINE")) {
+            if (readIf("=")) {
+                // map MySQL engine types onto H2 behavior
+                String tableEngine = readUniqueIdentifier();
+                if ("InnoDb".equalsIgnoreCase(tableEngine)) {
+                    // ok
+                } else if (!"MyISAM".equalsIgnoreCase(tableEngine)) {
+                    throw DbException.getUnsupportedException(tableEngine);
+                }
+            } else {
+                command.setTableEngine(readUniqueIdentifier());
+            }
+        }
+        if (readIf("WITH")) {
+            command.setTableEngineParams(readTableEngineParams());
+        }
+        // MySQL compatibility
+        if (readIf("AUTO_INCREMENT")) {
+            read("=");
+            if (currentTokenType != VALUE ||
+                    currentValue.getType() != Value.INT) {
+                throw DbException.getSyntaxError(sqlCommand, parseIndex,
+                        "integer");
+            }
+            read();
+        }
+        readIf("DEFAULT");
+        if (readIf("CHARSET")) {
+            read("=");
+            if (!readIf("UTF8")) {
+                read("UTF8MB4");
+            }
+        }
+        if (temp) {
+            if (readIf("ON")) {
+                read("COMMIT");
+                if (readIf("DROP")) {
+                    command.setOnCommitDrop();
+                } else if (readIf("DELETE")) {
+                    read("ROWS");
+                    command.setOnCommitTruncate();
+                }
+            } else if (readIf("NOT")) {
+                if (readIf("PERSISTENT")) {
+                    command.setPersistData(false);
+                } else {
+                    read("LOGGED");
+                }
+            }
+            if (readIf("TRANSACTIONAL")) {
+                command.setTransactional(true);
+            }
+        } else if (!persistIndexes && readIf("NOT")) {
+            read("PERSISTENT");
+            command.setPersistData(false);
+        }
+        if (readIf("HIDDEN")) {
+            command.setHidden(true);
+        }
+        if (readIf("AS")) {
+            if (readIf("SORTED")) {
+                command.setSortedInsertMode(true);
+            }
+            command.setQuery(parseSelect());
+        }
+        // for MySQL compatibility
+        if (readIf("ROW_FORMAT")) {
+            if (readIf("=")) {
+                readColumnIdentifier();
+            }
+        }
+        return command;
+    }
+
+    private void parseTableColumnDefinition(CommandWithColumns command, Schema schema, String tableName) {
+        DefineCommand c = parseAlterTableAddConstraintIf(tableName,
+                schema, false);
+        if (c != null) {
+            command.addConstraintCommand(c);
+        } else {
+            String columnName = readColumnIdentifier();
+            Column column = parseColumnForTable(columnName, true);
+            if (column.isAutoIncrement() && column.isPrimaryKey()) {
+                column.setPrimaryKey(false);
+                IndexColumn[] cols = { new IndexColumn() };
+                cols[0].columnName = column.getName();
+                AlterTableAddConstraint pk = new AlterTableAddConstraint(
+                        session, schema, false);
+                pk.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_PRIMARY_KEY);
+                pk.setTableName(tableName);
+                pk.setIndexColumns(cols);
+                command.addConstraintCommand(pk);
+            }
+            command.addColumn(column);
+            String constraintName = null;
+            if (readIf("CONSTRAINT")) {
+                constraintName = readColumnIdentifier();
+            }
+            // For compatibility with Apache Ignite.
+            boolean allowAffinityKey = database.getMode().allowAffinityKey;
+            boolean affinity = allowAffinityKey && readIfAffinity();
+            if (readIf("PRIMARY")) {
+                read("KEY");
+                boolean hash = readIf("HASH");
+                IndexColumn[] cols = { new IndexColumn() };
+                cols[0].columnName = column.getName();
+                AlterTableAddConstraint pk = new AlterTableAddConstraint(
+                        session, schema, false);
+                pk.setConstraintName(constraintName);
+                pk.setPrimaryKeyHash(hash);
+                pk.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_PRIMARY_KEY);
+                pk.setTableName(tableName);
+                pk.setIndexColumns(cols);
+                command.addConstraintCommand(pk);
+                if (readIf("AUTO_INCREMENT")) {
+                    parseAutoIncrement(column);
+                }
+                if (affinity) {
+                    CreateIndex idx = createAffinityIndex(schema, tableName, cols);
+                    command.addConstraintCommand(idx);
+                }
+            } else if (affinity) {
+                read("KEY");
+                IndexColumn[] cols = { new IndexColumn() };
+                cols[0].columnName = column.getName();
+                CreateIndex idx = createAffinityIndex(schema, tableName, cols);
+                command.addConstraintCommand(idx);
+            } else if (readIf("UNIQUE")) {
+                AlterTableAddConstraint unique = new AlterTableAddConstraint(
+                        session, schema, false);
+                unique.setConstraintName(constraintName);
+                unique.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_UNIQUE);
+                IndexColumn[] cols = { new IndexColumn() };
+                cols[0].columnName = columnName;
+                unique.setIndexColumns(cols);
+                unique.setTableName(tableName);
+                command.addConstraintCommand(unique);
+            }
+            if (NullConstraintType.NULL_IS_NOT_ALLOWED == parseNotNullConstraint()) {
+                column.setNullable(false);
+            }
+            if (readIf("CHECK")) {
+                Expression expr = readExpression();
+                column.addCheckConstraint(session, expr);
+            }
+            if (readIf("REFERENCES")) {
+                AlterTableAddConstraint ref = new AlterTableAddConstraint(
+                        session, schema, false);
+                ref.setConstraintName(constraintName);
+                ref.setType(CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_REFERENTIAL);
+                IndexColumn[] cols = { new IndexColumn() };
+                cols[0].columnName = columnName;
+                ref.setIndexColumns(cols);
+                ref.setTableName(tableName);
+                parseReferences(ref, schema, tableName);
+                command.addConstraintCommand(ref);
+            }
+        }
+    }
+
+    /**
+     * Enumeration describing null constraints
+     */
+    private enum NullConstraintType {
+        NULL_IS_ALLOWED, NULL_IS_NOT_ALLOWED, NO_NULL_CONSTRAINT_FOUND
+    }
+
+    private NullConstraintType parseNotNullConstraint() {
+        NullConstraintType nullConstraint = NullConstraintType.NO_NULL_CONSTRAINT_FOUND;
+        if (isToken("NOT") || isToken("NULL")) {
+            if (readIf("NOT")) {
+                read("NULL");
+                nullConstraint = NullConstraintType.NULL_IS_NOT_ALLOWED;
+            } else {
+                read("NULL");
+                nullConstraint = NullConstraintType.NULL_IS_ALLOWED;
+            }
+            if (database.getMode().getEnum() == ModeEnum.Oracle) {
+                if (readIf("ENABLE")) {
+                    // Leave constraint 'as is'
+                    readIf("VALIDATE");
+                    // Turn off constraint, allow NULLs
+                    if (readIf("NOVALIDATE")) {
+                        nullConstraint = NullConstraintType.NULL_IS_ALLOWED;
+                    }
+                }
+                // Turn off constraint, allow NULLs
+                if (readIf("DISABLE")) {
+                    nullConstraint = NullConstraintType.NULL_IS_ALLOWED;
+                    // ignore validate
+                    readIf("VALIDATE");
+                    // ignore novalidate
+                    readIf("NOVALIDATE");
+                }
+            }
+        }
+        return nullConstraint;
+    }
+
+    private CreateSynonym parseCreateSynonym(boolean orReplace) {
+        boolean ifNotExists = readIfNotExists();
+        String name = readIdentifierWithSchema();
+        Schema synonymSchema = getSchema();
+        read("FOR");
+        String tableName = readIdentifierWithSchema();
+
+        Schema targetSchema = getSchema();
+        CreateSynonym command = new CreateSynonym(session, synonymSchema);
+        command.setName(name);
+        command.setSynonymFor(tableName);
+        command.setSynonymForSchema(targetSchema);
+        command.setComment(readCommentIf());
+        command.setIfNotExists(ifNotExists);
+        command.setOrReplace(orReplace);
+ return command; + } + + private CreateIndex createAffinityIndex(Schema schema, String tableName, IndexColumn[] indexColumns) { + CreateIndex idx = new CreateIndex(session, schema); + idx.setTableName(tableName); + idx.setIndexColumns(indexColumns); + idx.setAffinity(true); + return idx; + } + + private static int getCompareType(int tokenType) { + switch (tokenType) { + case EQUAL: + return Comparison.EQUAL; + case BIGGER_EQUAL: + return Comparison.BIGGER_EQUAL; + case BIGGER: + return Comparison.BIGGER; + case SMALLER: + return Comparison.SMALLER; + case SMALLER_EQUAL: + return Comparison.SMALLER_EQUAL; + case NOT_EQUAL: + return Comparison.NOT_EQUAL; + case SPATIAL_INTERSECTS: + return Comparison.SPATIAL_INTERSECTS; + default: + return -1; + } + } + + /** + * Add double quotes around an identifier if required. + * + * @param s the identifier + * @return the quoted identifier + */ + public static String quoteIdentifier(String s) { + if (s == null) { + return "\"\""; + } + if (ParserUtil.isSimpleIdentifier(s, false)) { + return s; + } + return StringUtils.quoteIdentifier(s); + } + + public void setLiteralsChecked(boolean literalsChecked) { + this.literalsChecked = literalsChecked; + } + + public void setRightsChecked(boolean rightsChecked) { + this.rightsChecked = rightsChecked; + } + + public void setSuppliedParameterList(ArrayList<Parameter> suppliedParameterList) { + this.suppliedParameterList = suppliedParameterList; + } + + /** + * Parse a SQL code snippet that represents an expression. + * + * @param sql the code snippet + * @return the expression object + */ + public Expression parseExpression(String sql) { + parameters = New.arrayList(); + initialize(sql); + read(); + return readExpression(); + } + + /** + * Parse a SQL code snippet that represents a table name.
+ * + * @param sql the code snippet + * @return the table object + */ + public Table parseTableName(String sql) { + parameters = New.arrayList(); + initialize(sql); + read(); + return readTableOrView(); + } + + @Override + public String toString() { + return StringUtils.addAsterisk(sqlCommand, parseIndex); + } +} diff --git a/modules/h2/src/main/java/org/h2/command/Prepared.java b/modules/h2/src/main/java/org/h2/command/Prepared.java new file mode 100644 index 0000000000000..74e87629fa2a2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/Prepared.java @@ -0,0 +1,462 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command; + +import java.util.ArrayList; +import java.util.List; +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.Parameter; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.ResultInterface; +import org.h2.table.TableView; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; + +/** + * A prepared statement. + */ +public abstract class Prepared { + + /** + * The session. + */ + protected Session session; + + /** + * The SQL string. + */ + protected String sqlStatement; + + /** + * Whether to create a new object (for indexes). + */ + protected boolean create = true; + + /** + * The list of parameters. + */ + protected ArrayList<Parameter> parameters; + + /** + * If the query should be prepared before each execution. This is set for + * queries with LIKE ?, because the query plan depends on the parameter + * value.
+ */ + protected boolean prepareAlways; + + private long modificationMetaId; + private Command command; + private int objectId; + private int currentRowNumber; + private int rowScanCount; + /** + * Common table expressions (CTE) in queries require us to create temporary views, + * which need to be cleaned up once a command is done executing. + */ + private List<TableView> cteCleanups; + + /** + * Create a new object. + * + * @param session the session + */ + public Prepared(Session session) { + this.session = session; + modificationMetaId = session.getDatabase().getModificationMetaId(); + } + + /** + * Check if this command is transactional. + * If it is not, then it forces the current transaction to commit. + * + * @return true if it is + */ + public abstract boolean isTransactional(); + + /** + * Get an empty result set containing the meta data. + * + * @return the result set + */ + public abstract ResultInterface queryMeta(); + + + /** + * Get the command type as defined in CommandInterface + * + * @return the statement type + */ + public abstract int getType(); + + /** + * Check if this command is read only. + * + * @return true if it is + */ + public boolean isReadOnly() { + return false; + } + + /** + * Check if the statement needs to be re-compiled. + * + * @return true if it must + */ + public boolean needRecompile() { + Database db = session.getDatabase(); + if (db == null) { + throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, "database closed"); + } + // parser: currently, compiling every create/drop/... twice + // because needRecompile returns true even for the first execution + return prepareAlways || + modificationMetaId < db.getModificationMetaId() || + db.getSettings().recompileAlways; + } + + /** + * Get the meta data modification id of the database when this statement was + * compiled.
+ * + * @return the meta data modification id + */ + long getModificationMetaId() { + return modificationMetaId; + } + + /** + * Set the meta data modification id of this statement. + * + * @param id the new id + */ + void setModificationMetaId(long id) { + this.modificationMetaId = id; + } + + /** + * Set the parameter list of this statement. + * + * @param parameters the parameter list + */ + public void setParameterList(ArrayList<Parameter> parameters) { + this.parameters = parameters; + } + + /** + * Get the parameter list. + * + * @return the parameter list + */ + public ArrayList<Parameter> getParameters() { + return parameters; + } + + /** + * Check if all parameters have been set. + * + * @throws DbException if any parameter has not been set + */ + protected void checkParameters() { + if (parameters != null) { + for (Parameter param : parameters) { + param.checkSet(); + } + } + } + + /** + * Set the command. + * + * @param command the new command + */ + public void setCommand(Command command) { + this.command = command; + } + + /** + * Check if this object is a query. + * + * @return true if it is + */ + public boolean isQuery() { + return false; + } + + /** + * Prepare this statement. + */ + public void prepare() { + // nothing to do + } + + /** + * Execute the statement. + * + * @return the update count + * @throws DbException if it is a query + */ + public int update() { + throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_QUERY); + } + + /** + * Execute the query. + * + * @param maxrows the maximum number of rows to return + * @return the result set + * @throws DbException if it is not a query + */ + @SuppressWarnings("unused") + public ResultInterface query(int maxrows) { + throw DbException.get(ErrorCode.METHOD_ONLY_ALLOWED_FOR_QUERY); + } + + /** + * Set the SQL statement. + * + * @param sql the SQL statement + */ + public void setSQL(String sql) { + this.sqlStatement = sql; + } + + /** + * Get the SQL statement.
+ * + * @return the SQL statement + */ + public String getSQL() { + return sqlStatement; + } + + /** + * Get the object id to use for the database object that is created in this + * statement. This id is only set when the object is persistent. + * If not set, this method returns 0. + * + * @return the object id or 0 if not set + */ + protected int getCurrentObjectId() { + return objectId; + } + + /** + * Get the current object id, or get a new id from the database. The object + * id is used when creating new database object (CREATE statement). + * + * @return the object id + */ + protected int getObjectId() { + int id = objectId; + if (id == 0) { + id = session.getDatabase().allocateObjectId(); + } else { + objectId = 0; + } + return id; + } + + /** + * Get the SQL statement with the execution plan. + * + * @return the execution plan + */ + public String getPlanSQL() { + return null; + } + + /** + * Check if this statement was canceled. + * + * @throws DbException if it was canceled + */ + public void checkCanceled() { + session.checkCanceled(); + Command c = command != null ? command : session.getCurrentCommand(); + if (c != null) { + c.checkCanceled(); + } + } + + /** + * Set the object id for this statement. + * + * @param i the object id + */ + public void setObjectId(int i) { + this.objectId = i; + this.create = false; + } + + /** + * Set the session for this statement. + * + * @param currentSession the new session + */ + public void setSession(Session currentSession) { + this.session = currentSession; + } + + /** + * Print information about the statement executed if info trace level is + * enabled. 
+ * + * @param startTimeNanos when the statement was started + * @param rowCount the query or update row count + */ + void trace(long startTimeNanos, int rowCount) { + if (session.getTrace().isInfoEnabled() && startTimeNanos > 0) { + long deltaTimeNanos = System.nanoTime() - startTimeNanos; + String params = Trace.formatParams(parameters); + session.getTrace().infoSQL(sqlStatement, params, rowCount, + deltaTimeNanos / 1000 / 1000); + } + // startTimeNanos can be zero for the command that actually turns on + // statistics + if (session.getDatabase().getQueryStatistics() && startTimeNanos != 0) { + long deltaTimeNanos = System.nanoTime() - startTimeNanos; + session.getDatabase().getQueryStatisticsData(). + update(toString(), deltaTimeNanos, rowCount); + } + } + + /** + * Set the prepare always flag. + * If set, the statement is re-compiled whenever it is executed. + * + * @param prepareAlways the new value + */ + public void setPrepareAlways(boolean prepareAlways) { + this.prepareAlways = prepareAlways; + } + + /** + * Set the current row number. + * + * @param rowNumber the row number + */ + protected void setCurrentRowNumber(int rowNumber) { + if ((++rowScanCount & 127) == 0) { + checkCanceled(); + } + this.currentRowNumber = rowNumber; + setProgress(); + } + + /** + * Get the current row number. + * + * @return the row number + */ + public int getCurrentRowNumber() { + return currentRowNumber; + } + + /** + * Notifies query progress via the DatabaseEventListener + */ + private void setProgress() { + if ((currentRowNumber & 127) == 0) { + session.getDatabase().setProgress( + DatabaseEventListener.STATE_STATEMENT_PROGRESS, + sqlStatement, currentRowNumber, 0); + } + } + + /** + * Convert the statement to a String. + * + * @return the SQL statement + */ + @Override + public String toString() { + return sqlStatement; + } + + /** + * Get the SQL snippet of the value list.
+ * + * @param values the value list + * @return the SQL snippet + */ + protected static String getSQL(Value[] values) { + StatementBuilder buff = new StatementBuilder(); + for (Value v : values) { + buff.appendExceptFirst(", "); + if (v != null) { + buff.append(v.getSQL()); + } + } + return buff.toString(); + } + + /** + * Get the SQL snippet of the expression list. + * + * @param list the expression list + * @return the SQL snippet + */ + protected static String getSQL(Expression[] list) { + StatementBuilder buff = new StatementBuilder(); + for (Expression e : list) { + buff.appendExceptFirst(", "); + if (e != null) { + buff.append(e.getSQL()); + } + } + return buff.toString(); + } + + /** + * Set the SQL statement of the exception to the given row. + * + * @param e the exception + * @param rowId the row number + * @param values the values of the row + * @return the exception + */ + protected DbException setRow(DbException e, int rowId, String values) { + StringBuilder buff = new StringBuilder(); + if (sqlStatement != null) { + buff.append(sqlStatement); + } + buff.append(" -- "); + if (rowId > 0) { + buff.append("row #").append(rowId + 1).append(' '); + } + buff.append('(').append(values).append(')'); + return e.addSQL(buff.toString()); + } + + public boolean isCacheable() { + return false; + } + + /** + * @return the temporary views created for CTE's. + */ + public List<TableView> getCteCleanups() { + return cteCleanups; + } + + /** + * Set the temporary views created for CTE's.
+ * + * @param cteCleanups the temporary views + */ + public void setCteCleanups(List<TableView> cteCleanups) { + this.cteCleanups = cteCleanups; + } + + public Session getSession() { + return session; + } +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterIndexRename.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterIndexRename.java new file mode 100644 index 0000000000000..ce264de805668 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterIndexRename.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.schema.Schema; + +/** + * This class represents the statement + * ALTER INDEX RENAME + */ +public class AlterIndexRename extends DefineCommand { + + private boolean ifExists; + private Schema oldSchema; + private String oldIndexName; + private Index oldIndex; + private String newIndexName; + + public AlterIndexRename(Session session) { + super(session); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setOldSchema(Schema old) { + oldSchema = old; + } + + public void setOldName(String name) { + oldIndexName = name; + } + + public void setNewName(String name) { + newIndexName = name; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + oldIndex = oldSchema.findIndex(session, oldIndexName); + if (oldIndex == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.INDEX_NOT_FOUND_1, + oldIndexName); + } + return 0; + } + if (oldSchema.findIndex(session, newIndexName) != null || + newIndexName.equals(oldIndexName)) { + throw
DbException.get(ErrorCode.INDEX_ALREADY_EXISTS_1, + newIndexName); + } + session.getUser().checkRight(oldIndex.getTable(), Right.ALL); + db.renameSchemaObject(session, oldIndex, newIndexName); + return 0; + } + + @Override + public int getType() { + return CommandInterface.ALTER_INDEX_RENAME; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterSchemaRename.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterSchemaRename.java new file mode 100644 index 0000000000000..9b7a90d5ae450 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterSchemaRename.java @@ -0,0 +1,66 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObject; + +import java.util.ArrayList; + +/** + * This class represents the statement + * ALTER SCHEMA RENAME + */ +public class AlterSchemaRename extends DefineCommand { + + private Schema oldSchema; + private String newSchemaName; + + public AlterSchemaRename(Session session) { + super(session); + } + + public void setOldSchema(Schema schema) { + oldSchema = schema; + } + + public void setNewName(String name) { + newSchemaName = name; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + if (!oldSchema.canDrop()) { + throw DbException.get(ErrorCode.SCHEMA_CAN_NOT_BE_DROPPED_1, + oldSchema.getName()); + } + if (db.findSchema(newSchemaName) != null || + newSchemaName.equals(oldSchema.getName())) { + throw DbException.get(ErrorCode.SCHEMA_ALREADY_EXISTS_1, + newSchemaName); + } + session.getUser().checkSchemaAdmin(); + db.renameDatabaseObject(session, oldSchema, newSchemaName); + 
ArrayList<SchemaObject> all = db.getAllSchemaObjects(); + for (SchemaObject schemaObject : all) { + db.updateMeta(session, schemaObject); + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.ALTER_SCHEMA_RENAME; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterTableAddConstraint.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableAddConstraint.java new file mode 100644 index 0000000000000..8fb16ff9ea120 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableAddConstraint.java @@ -0,0 +1,453 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.constraint.Constraint; +import org.h2.constraint.ConstraintActionType; +import org.h2.constraint.ConstraintCheck; +import org.h2.constraint.ConstraintReferential; +import org.h2.constraint.ConstraintUnique; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.New; + +/** + * This class represents the statement + * ALTER TABLE ADD CONSTRAINT + */ +public class AlterTableAddConstraint extends SchemaCommand { + + private int type; + private String constraintName; + private String tableName; + private IndexColumn[] indexColumns; + private ConstraintActionType deleteAction = ConstraintActionType.RESTRICT; + private ConstraintActionType updateAction =
ConstraintActionType.RESTRICT; + private Schema refSchema; + private String refTableName; + private IndexColumn[] refIndexColumns; + private Expression checkExpression; + private Index index, refIndex; + private String comment; + private boolean checkExisting; + private boolean primaryKeyHash; + private boolean ifTableExists; + private final boolean ifNotExists; + private final ArrayList<Index> createdIndexes = New.arrayList(); + + public AlterTableAddConstraint(Session session, Schema schema, + boolean ifNotExists) { + super(session, schema); + this.ifNotExists = ifNotExists; + } + + public void setIfTableExists(boolean b) { + ifTableExists = b; + } + + private String generateConstraintName(Table table) { + if (constraintName == null) { + constraintName = getSchema().getUniqueConstraintName( + session, table); + } + return constraintName; + } + + @Override + public int update() { + try { + return tryUpdate(); + } catch (DbException e) { + for (Index index : createdIndexes) { + session.getDatabase().removeSchemaObject(session, index); + } + throw e; + } finally { + getSchema().freeUniqueName(constraintName); + } + } + + /** + * Try to execute the statement.
+ * + * @return the update count + */ + private int tryUpdate() { + if (!transactional) { + session.commit(true); + } + Database db = session.getDatabase(); + Table table = getSchema().findTableOrView(session, tableName); + if (table == null) { + if (ifTableExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName); + } + if (getSchema().findConstraint(session, constraintName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.CONSTRAINT_ALREADY_EXISTS_1, + constraintName); + } + session.getUser().checkRight(table, Right.ALL); + db.lockMeta(session); + table.lock(session, true, true); + Constraint constraint; + switch (type) { + case CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_PRIMARY_KEY: { + IndexColumn.mapColumns(indexColumns, table); + index = table.findPrimaryKey(); + ArrayList<Constraint> constraints = table.getConstraints(); + for (int i = 0; constraints != null && i < constraints.size(); i++) { + Constraint c = constraints.get(i); + if (Constraint.Type.PRIMARY_KEY == c.getConstraintType()) { + throw DbException.get(ErrorCode.SECOND_PRIMARY_KEY); + } + } + if (index != null) { + // if there is an index, it must match with the one declared + // we don't test ascending / descending + IndexColumn[] pkCols = index.getIndexColumns(); + if (pkCols.length != indexColumns.length) { + throw DbException.get(ErrorCode.SECOND_PRIMARY_KEY); + } + for (int i = 0; i < pkCols.length; i++) { + if (pkCols[i].column != indexColumns[i].column) { + throw DbException.get(ErrorCode.SECOND_PRIMARY_KEY); + } + } + } + if (index == null) { + IndexType indexType = IndexType.createPrimaryKey( + table.isPersistIndexes(), primaryKeyHash); + String indexName = table.getSchema().getUniqueIndexName( + session, table, Constants.PREFIX_PRIMARY_KEY); + int id = getObjectId(); + try { + index = table.addIndex(session, indexName, id, + indexColumns, indexType, true, null); + } finally { + getSchema().freeUniqueName(indexName); + } + }
index.getIndexType().setBelongsToConstraint(true); + int constraintId = getObjectId(); + String name = generateConstraintName(table); + ConstraintUnique pk = new ConstraintUnique(getSchema(), + constraintId, name, table, true); + pk.setColumns(indexColumns); + pk.setIndex(index, true); + constraint = pk; + break; + } + case CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_UNIQUE: { + IndexColumn.mapColumns(indexColumns, table); + boolean isOwner = false; + if (index != null && canUseUniqueIndex(index, table, indexColumns)) { + isOwner = true; + index.getIndexType().setBelongsToConstraint(true); + } else { + index = getUniqueIndex(table, indexColumns); + if (index == null) { + index = createIndex(table, indexColumns, true); + isOwner = true; + } + } + int id = getObjectId(); + String name = generateConstraintName(table); + ConstraintUnique unique = new ConstraintUnique(getSchema(), id, + name, table, false); + unique.setColumns(indexColumns); + unique.setIndex(index, isOwner); + constraint = unique; + break; + } + case CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_CHECK: { + int id = getObjectId(); + String name = generateConstraintName(table); + ConstraintCheck check = new ConstraintCheck(getSchema(), id, name, table); + TableFilter filter = new TableFilter(session, table, null, false, null, 0, null); + checkExpression.mapColumns(filter, 0); + checkExpression = checkExpression.optimize(session); + check.setExpression(checkExpression); + check.setTableFilter(filter); + constraint = check; + if (checkExisting) { + check.checkExistingData(session); + } + break; + } + case CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_REFERENTIAL: { + Table refTable = refSchema.resolveTableOrView(session, refTableName); + if (refTable == null) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, refTableName); + } + session.getUser().checkRight(refTable, Right.ALL); + if (!refTable.canReference()) { + throw DbException.getUnsupportedException("Reference " + + refTable.getSQL()); + } + 
boolean isOwner = false; + IndexColumn.mapColumns(indexColumns, table); + if (index != null && canUseIndex(index, table, indexColumns, false)) { + isOwner = true; + index.getIndexType().setBelongsToConstraint(true); + } else { + index = getIndex(table, indexColumns, false); + if (index == null) { + index = createIndex(table, indexColumns, false); + isOwner = true; + } + } + if (refIndexColumns == null) { + Index refIdx = refTable.getPrimaryKey(); + refIndexColumns = refIdx.getIndexColumns(); + } else { + IndexColumn.mapColumns(refIndexColumns, refTable); + } + if (refIndexColumns.length != indexColumns.length) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + boolean isRefOwner = false; + if (refIndex != null && refIndex.getTable() == refTable && + canUseIndex(refIndex, refTable, refIndexColumns, false)) { + isRefOwner = true; + refIndex.getIndexType().setBelongsToConstraint(true); + } else { + refIndex = null; + } + if (refIndex == null) { + refIndex = getIndex(refTable, refIndexColumns, false); + if (refIndex == null) { + refIndex = createIndex(refTable, refIndexColumns, true); + isRefOwner = true; + } + } + int id = getObjectId(); + String name = generateConstraintName(table); + ConstraintReferential refConstraint = new ConstraintReferential(getSchema(), + id, name, table); + refConstraint.setColumns(indexColumns); + refConstraint.setIndex(index, isOwner); + refConstraint.setRefTable(refTable); + refConstraint.setRefColumns(refIndexColumns); + refConstraint.setRefIndex(refIndex, isRefOwner); + if (checkExisting) { + refConstraint.checkExistingData(session); + } + refTable.addConstraint(refConstraint); + refConstraint.setDeleteAction(deleteAction); + refConstraint.setUpdateAction(updateAction); + constraint = refConstraint; + break; + } + default: + throw DbException.throwInternalError("type=" + type); + } + // parent relationship is already set with addConstraint + constraint.setComment(comment); + if (table.isTemporary() && 
!table.isGlobalTemporary()) { + session.addLocalTempTableConstraint(constraint); + } else { + db.addSchemaObject(session, constraint); + } + table.addConstraint(constraint); + return 0; + } + + private Index createIndex(Table t, IndexColumn[] cols, boolean unique) { + int indexId = getObjectId(); + IndexType indexType; + if (unique) { + // for unique constraints + indexType = IndexType.createUnique(t.isPersistIndexes(), false); + } else { + // constraints + indexType = IndexType.createNonUnique(t.isPersistIndexes()); + } + indexType.setBelongsToConstraint(true); + String prefix = constraintName == null ? "CONSTRAINT" : constraintName; + String indexName = t.getSchema().getUniqueIndexName(session, t, + prefix + "_INDEX_"); + try { + Index index = t.addIndex(session, indexName, indexId, cols, + indexType, true, null); + createdIndexes.add(index); + return index; + } finally { + getSchema().freeUniqueName(indexName); + } + } + + public void setDeleteAction(ConstraintActionType action) { + this.deleteAction = action; + } + + public void setUpdateAction(ConstraintActionType action) { + this.updateAction = action; + } + + private static Index getUniqueIndex(Table t, IndexColumn[] cols) { + if (t.getIndexes() == null) { + return null; + } + for (Index idx : t.getIndexes()) { + if (canUseUniqueIndex(idx, t, cols)) { + return idx; + } + } + return null; + } + + private static Index getIndex(Table t, IndexColumn[] cols, boolean moreColumnOk) { + if (t.getIndexes() == null) { + return null; + } + for (Index idx : t.getIndexes()) { + if (canUseIndex(idx, t, cols, moreColumnOk)) { + return idx; + } + } + return null; + } + + + // all cols must be in the index key, the order doesn't matter and there + // must be no other fields in the index key + private static boolean canUseUniqueIndex(Index idx, Table table, IndexColumn[] cols) { + if (idx.getTable() != table || !idx.getIndexType().isUnique()) { + return false; + } + Column[] indexCols = idx.getColumns(); + HashSet<Column>
indexColsSet = new HashSet<>(); + Collections.addAll(indexColsSet, indexCols); + HashSet<Column> colsSet = new HashSet<>(); + for (IndexColumn c : cols) { + colsSet.add(c.column); + } + return colsSet.equals(indexColsSet); + } + + private static boolean canUseIndex(Index existingIndex, Table table, + IndexColumn[] cols, boolean moreColumnsOk) { + if (existingIndex.getTable() != table || existingIndex.getCreateSQL() == null) { + // can't use the scan index or index of another table + return false; + } + Column[] indexCols = existingIndex.getColumns(); + + if (moreColumnsOk) { + if (indexCols.length < cols.length) { + return false; + } + for (IndexColumn col : cols) { + // all columns of the list must be part of the index, + // but not all columns of the index need to be part of the list + // holes are not allowed (index=a,b,c & list=a,b is ok; + // but list=a,c is not) + int idx = existingIndex.getColumnIndex(col.column); + if (idx < 0 || idx >= cols.length) { + return false; + } + } + } else { + if (indexCols.length != cols.length) { + return false; + } + for (IndexColumn col : cols) { + // all columns of the list must be part of the index + int idx = existingIndex.getColumnIndex(col.column); + if (idx < 0) { + return false; + } + } + } + return true; + } + + public void setConstraintName(String constraintName) { + this.constraintName = constraintName; + } + + public void setType(int type) { + this.type = type; + } + + @Override + public int getType() { + return type; + } + + public void setCheckExpression(Expression expression) { + this.checkExpression = expression; + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + public void setIndexColumns(IndexColumn[] indexColumns) { + this.indexColumns = indexColumns; + } + + public IndexColumn[] getIndexColumns() { + return indexColumns; + } + + /** + * Set the referenced table.
+ * + * @param refSchema the schema + * @param ref the table name + */ + public void setRefTableName(Schema refSchema, String ref) { + this.refSchema = refSchema; + this.refTableName = ref; + } + + public void setRefIndexColumns(IndexColumn[] indexColumns) { + this.refIndexColumns = indexColumns; + } + + public void setIndex(Index index) { + this.index = index; + } + + public void setRefIndex(Index refIndex) { + this.refIndex = refIndex; + } + + public void setComment(String comment) { + this.comment = comment; + } + + public void setCheckExisting(boolean b) { + this.checkExisting = b; + } + + public void setPrimaryKeyHash(boolean b) { + this.primaryKeyHash = b; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterTableAlterColumn.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableAlterColumn.java new file mode 100644 index 0000000000000..3d249fcc03931 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableAlterColumn.java @@ -0,0 +1,601 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.Parser; +import org.h2.command.Prepared; +import org.h2.constraint.Constraint; +import org.h2.constraint.ConstraintReferential; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObject; +import org.h2.schema.Sequence; +import org.h2.schema.TriggerObject; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.table.TableView; +import org.h2.util.New; + +/** + * This class represents the statements + * ALTER TABLE ADD, + * ALTER TABLE ADD IF NOT EXISTS, + * ALTER TABLE ALTER COLUMN, + * ALTER TABLE ALTER COLUMN RESTART, + * ALTER TABLE ALTER COLUMN SELECTIVITY, + * ALTER TABLE ALTER COLUMN SET DEFAULT, + * ALTER TABLE ALTER COLUMN SET NOT NULL, + * ALTER TABLE ALTER COLUMN SET NULL, + * ALTER TABLE ALTER COLUMN SET VISIBLE, + * ALTER TABLE ALTER COLUMN SET INVISIBLE, + * ALTER TABLE DROP COLUMN + */ +public class AlterTableAlterColumn extends CommandWithColumns { + + private String tableName; + private Column oldColumn; + private Column newColumn; + private int type; + /** + * Default or on update expression. 
+ */ + private Expression defaultExpression; + private Expression newSelectivity; + private boolean addFirst; + private String addBefore; + private String addAfter; + private boolean ifTableExists; + private boolean ifNotExists; + private ArrayList<Column> columnsToAdd; + private ArrayList<Column> columnsToRemove; + private boolean newVisibility; + + public AlterTableAlterColumn(Session session, Schema schema) { + super(session, schema); + } + + public void setIfTableExists(boolean b) { + ifTableExists = b; + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + public void setOldColumn(Column oldColumn) { + this.oldColumn = oldColumn; + } + + /** + * Add the column as the first column of the table. + */ + public void setAddFirst() { + addFirst = true; + } + + public void setAddBefore(String before) { + this.addBefore = before; + } + + public void setAddAfter(String after) { + this.addAfter = after; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + Table table = getSchema().resolveTableOrView(session, tableName); + if (table == null) { + if (ifTableExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName); + } + session.getUser().checkRight(table, Right.ALL); + table.checkSupportAlter(); + table.lock(session, true, true); + if (newColumn != null) { + checkDefaultReferencesTable(table, newColumn.getDefaultExpression()); + checkClustering(newColumn); + } + if (columnsToAdd != null) { + for (Column column : columnsToAdd) { + checkDefaultReferencesTable(table, column.getDefaultExpression()); + checkClustering(column); + } + } + switch (type) { + case CommandInterface.ALTER_TABLE_ALTER_COLUMN_NOT_NULL: { + if (!oldColumn.isNullable()) { + // no change + break; + } + checkNoNullValues(table); + oldColumn.setNullable(false); + db.updateMeta(session, table); + break; + } + case CommandInterface.ALTER_TABLE_ALTER_COLUMN_NULL: { + if (oldColumn.isNullable()) {
+ // no change + break; + } + checkNullable(table); + oldColumn.setNullable(true); + db.updateMeta(session, table); + break; + } + case CommandInterface.ALTER_TABLE_ALTER_COLUMN_DEFAULT: { + Sequence sequence = oldColumn == null ? null : oldColumn.getSequence(); + checkDefaultReferencesTable(table, defaultExpression); + oldColumn.setSequence(null); + oldColumn.setDefaultExpression(session, defaultExpression); + removeSequence(table, sequence); + db.updateMeta(session, table); + break; + } + case CommandInterface.ALTER_TABLE_ALTER_COLUMN_ON_UPDATE: { + checkDefaultReferencesTable(table, defaultExpression); + oldColumn.setOnUpdateExpression(session, defaultExpression); + db.updateMeta(session, table); + break; + } + case CommandInterface.ALTER_TABLE_ALTER_COLUMN_CHANGE_TYPE: { + // if the change is only increasing the precision, then we don't + // need to copy the table because the length is only a constraint, + // and does not affect the storage structure. + if (oldColumn.isWideningConversion(newColumn)) { + convertAutoIncrementColumn(table, newColumn); + oldColumn.copy(newColumn); + db.updateMeta(session, table); + } else { + oldColumn.setSequence(null); + oldColumn.setDefaultExpression(session, null); + oldColumn.setConvertNullToDefault(false); + if (oldColumn.isNullable() && !newColumn.isNullable()) { + checkNoNullValues(table); + } else if (!oldColumn.isNullable() && newColumn.isNullable()) { + checkNullable(table); + } + if (oldColumn.getVisible() ^ newColumn.getVisible()) { + oldColumn.setVisible(newColumn.getVisible()); + } + convertAutoIncrementColumn(table, newColumn); + copyData(table); + } + table.setModified(); + break; + } + case CommandInterface.ALTER_TABLE_ADD_COLUMN: { + // ifNotExists only supported for single column add + if (ifNotExists && columnsToAdd != null && columnsToAdd.size() == 1 && + table.doesColumnExist(columnsToAdd.get(0).getName())) { + break; + } + ArrayList<Sequence> sequences = generateSequences(columnsToAdd, false); + if (columnsToAdd !=
null) { + changePrimaryKeysToNotNull(columnsToAdd); + } + copyData(table, sequences, true); + break; + } + case CommandInterface.ALTER_TABLE_DROP_COLUMN: { + if (table.getColumns().length - columnsToRemove.size() < 1) { + throw DbException.get(ErrorCode.CANNOT_DROP_LAST_COLUMN, + columnsToRemove.get(0).getSQL()); + } + table.dropMultipleColumnsConstraintsAndIndexes(session, columnsToRemove); + copyData(table); + break; + } + case CommandInterface.ALTER_TABLE_ALTER_COLUMN_SELECTIVITY: { + int value = newSelectivity.optimize(session).getValue(session).getInt(); + oldColumn.setSelectivity(value); + db.updateMeta(session, table); + break; + } + case CommandInterface.ALTER_TABLE_ALTER_COLUMN_VISIBILITY: { + oldColumn.setVisible(newVisibility); + table.setModified(); + db.updateMeta(session, table); + break; + } + default: + DbException.throwInternalError("type=" + type); + } + return 0; + } + + private static void checkDefaultReferencesTable(Table table, Expression defaultExpression) { + if (defaultExpression == null) { + return; + } + HashSet<DbObject> dependencies = new HashSet<>(); + ExpressionVisitor visitor = ExpressionVisitor + .getDependenciesVisitor(dependencies); + defaultExpression.isEverything(visitor); + if (dependencies.contains(table)) { + throw DbException.get(ErrorCode.COLUMN_IS_REFERENCED_1, + defaultExpression.getSQL()); + } + } + + private void checkClustering(Column c) { + if (!Constants.CLUSTERING_DISABLED + .equals(session.getDatabase().getCluster()) + && c.isAutoIncrement()) { + throw DbException.getUnsupportedException( + "CLUSTERING && auto-increment columns"); + } + } + + private void convertAutoIncrementColumn(Table table, Column c) { + if (c.isAutoIncrement()) { + if (c.isPrimaryKey()) { + c.setOriginalSQL("IDENTITY"); + } else { + int objId = getObjectId(); + c.convertAutoIncrementToSequence(session, getSchema(), objId, + table.isTemporary()); + } + } + } + + private void removeSequence(Table table, Sequence sequence) { + if (sequence != null) { +
table.removeSequence(sequence); + sequence.setBelongsToTable(false); + Database db = session.getDatabase(); + db.removeSchemaObject(session, sequence); + } + } + + private void copyData(Table table) { + copyData(table, null, false); + } + + private void copyData(Table table, ArrayList<Sequence> sequences, boolean createConstraints) { + if (table.isTemporary()) { + throw DbException.getUnsupportedException("TEMP TABLE"); + } + Database db = session.getDatabase(); + String baseName = table.getName(); + String tempName = db.getTempTableName(baseName, session); + Column[] columns = table.getColumns(); + ArrayList<Column> newColumns = New.arrayList(); + Table newTable = cloneTableStructure(table, columns, db, tempName, newColumns); + if (sequences != null) { + for (Sequence sequence : sequences) { + table.addSequence(sequence); + } + } + try { + // check if a view would become invalid + // (because the column to drop is referenced or so) + checkViews(table, newTable); + } catch (DbException e) { + execute("DROP TABLE " + newTable.getName(), true); + throw DbException.get(ErrorCode.VIEW_IS_INVALID_2, e, getSQL(), e.getMessage()); + } + String tableName = table.getName(); + ArrayList<TableView> dependentViews = new ArrayList<>(table.getDependentViews()); + for (TableView view : dependentViews) { + table.removeDependentView(view); + } + execute("DROP TABLE " + table.getSQL() + " IGNORE", true); + db.renameSchemaObject(session, newTable, tableName); + for (DbObject child : newTable.getChildren()) { + if (child instanceof Sequence) { + continue; + } + String name = child.getName(); + if (name == null || child.getCreateSQL() == null) { + continue; + } + if (name.startsWith(tempName + "_")) { + name = name.substring(tempName.length() + 1); + SchemaObject so = (SchemaObject) child; + if (so instanceof Constraint) { + if (so.getSchema().findConstraint(session, name) != null) { + name = so.getSchema().getUniqueConstraintName(session, newTable); + } + } else if (so instanceof Index) { + if
(so.getSchema().findIndex(session, name) != null) { + name = so.getSchema().getUniqueIndexName(session, newTable, name); + } + } + db.renameSchemaObject(session, so, name); + } + } + if (createConstraints) { + createConstraints(); + } + for (TableView view : dependentViews) { + String sql = view.getCreateSQL(true, true); + execute(sql, true); + } + } + + private Table cloneTableStructure(Table table, Column[] columns, Database db, + String tempName, ArrayList<Column> newColumns) { + for (Column col : columns) { + newColumns.add(col.getClone()); + } + if (type == CommandInterface.ALTER_TABLE_DROP_COLUMN) { + for (Column removeCol : columnsToRemove) { + Column foundCol = null; + for (Column newCol : newColumns) { + if (newCol.getName().equals(removeCol.getName())) { + foundCol = newCol; + break; + } + } + if (foundCol == null) { + throw DbException.throwInternalError(removeCol.getCreateSQL()); + } + newColumns.remove(foundCol); + } + } else if (type == CommandInterface.ALTER_TABLE_ADD_COLUMN) { + int position; + if (addFirst) { + position = 0; + } else if (addBefore != null) { + position = table.getColumn(addBefore).getColumnId(); + } else if (addAfter != null) { + position = table.getColumn(addAfter).getColumnId() + 1; + } else { + position = columns.length; + } + if (columnsToAdd != null) { + for (Column column : columnsToAdd) { + newColumns.add(position++, column); + } + } + } else if (type == CommandInterface.ALTER_TABLE_ALTER_COLUMN_CHANGE_TYPE) { + int position = oldColumn.getColumnId(); + newColumns.set(position, newColumn); + } + + // create a table object in order to get the SQL statement + // can't just use this table, because most column objects are 'shared' + // with the old table + // still need a new id because using 0 would mean: the new table tries + // to use the rows of the table 0 (the meta table) + int id = db.allocateObjectId(); + CreateTableData data = new CreateTableData(); + data.tableName = tempName; + data.id = id; + data.columns = newColumns; +
data.temporary = table.isTemporary(); + data.persistData = table.isPersistData(); + data.persistIndexes = table.isPersistIndexes(); + data.isHidden = table.isHidden(); + data.create = true; + data.session = session; + Table newTable = getSchema().createTable(data); + newTable.setComment(table.getComment()); + StringBuilder buff = new StringBuilder(); + buff.append(newTable.getCreateSQL()); + StringBuilder columnList = new StringBuilder(); + for (Column nc : newColumns) { + if (columnList.length() > 0) { + columnList.append(", "); + } + if (type == CommandInterface.ALTER_TABLE_ADD_COLUMN && + columnsToAdd != null && columnsToAdd.contains(nc)) { + Expression def = nc.getDefaultExpression(); + columnList.append(def == null ? "NULL" : def.getSQL()); + } else { + columnList.append(nc.getSQL()); + } + } + buff.append(" AS SELECT "); + if (columnList.length() == 0) { + // special case: insert into test select * from + buff.append('*'); + } else { + buff.append(columnList); + } + buff.append(" FROM ").append(table.getSQL()); + String newTableSQL = buff.toString(); + String newTableName = newTable.getName(); + Schema newTableSchema = newTable.getSchema(); + newTable.removeChildrenAndResources(session); + + execute(newTableSQL, true); + newTable = newTableSchema.getTableOrView(session, newTableName); + ArrayList<String> triggers = New.arrayList(); + for (DbObject child : table.getChildren()) { + if (child instanceof Sequence) { + continue; + } else if (child instanceof Index) { + Index idx = (Index) child; + if (idx.getIndexType().getBelongsToConstraint()) { + continue; + } + } + String createSQL = child.getCreateSQL(); + if (createSQL == null) { + continue; + } + if (child instanceof TableView) { + continue; + } else if (child.getType() == DbObject.TABLE_OR_VIEW) { + DbException.throwInternalError(); + } + String quotedName = Parser.quoteIdentifier(tempName + "_" + child.getName()); + String sql = null; + if (child instanceof ConstraintReferential) { + ConstraintReferential r =
(ConstraintReferential) child; + if (r.getTable() != table) { + sql = r.getCreateSQLForCopy(r.getTable(), newTable, quotedName, false); + } + } + if (sql == null) { + sql = child.getCreateSQLForCopy(newTable, quotedName); + } + if (sql != null) { + if (child instanceof TriggerObject) { + triggers.add(sql); + } else { + execute(sql, true); + } + } + } + table.setModified(); + // remove the sequences from the columns (except dropped columns) + // otherwise the sequence is dropped if the table is dropped + for (Column col : newColumns) { + Sequence seq = col.getSequence(); + if (seq != null) { + table.removeSequence(seq); + col.setSequence(null); + } + } + for (String sql : triggers) { + execute(sql, true); + } + return newTable; + } + + /** + * Check that all views and other dependent objects are still valid. + */ + private void checkViews(SchemaObject sourceTable, SchemaObject newTable) { + String sourceTableName = sourceTable.getName(); + String newTableName = newTable.getName(); + Database db = sourceTable.getDatabase(); + // save the real table under a temporary name + String temp = db.getTempTableName(sourceTableName, session); + db.renameSchemaObject(session, sourceTable, temp); + try { + // have our new table impersonate the target table + db.renameSchemaObject(session, newTable, sourceTableName); + checkViewsAreValid(sourceTable); + } finally { + // always put the source tables back with their proper names + try { + db.renameSchemaObject(session, newTable, newTableName); + } finally { + db.renameSchemaObject(session, sourceTable, sourceTableName); + } + } + } + + /** + * Check that a table or view is still valid.
+ * + * @param tableOrView the table or view to check + */ + private void checkViewsAreValid(DbObject tableOrView) { + for (DbObject view : tableOrView.getChildren()) { + if (view instanceof TableView) { + String sql = ((TableView) view).getQuery(); + // check if the query is still valid + // do not execute, not even with limit 1, because that could + // have side effects or take a very long time + session.prepare(sql); + checkViewsAreValid(view); + } + } + } + + private void execute(String sql, boolean ddl) { + Prepared command = session.prepare(sql); + command.update(); + if (ddl) { + session.commit(true); + } + } + + private void checkNullable(Table table) { + for (Index index : table.getIndexes()) { + if (index.getColumnIndex(oldColumn) < 0) { + continue; + } + IndexType indexType = index.getIndexType(); + if (indexType.isPrimaryKey() || indexType.isHash()) { + throw DbException.get( + ErrorCode.COLUMN_IS_PART_OF_INDEX_1, index.getSQL()); + } + } + } + + private void checkNoNullValues(Table table) { + String sql = "SELECT COUNT(*) FROM " + + table.getSQL() + " WHERE " + + oldColumn.getSQL() + " IS NULL"; + Prepared command = session.prepare(sql); + ResultInterface result = command.query(0); + result.next(); + if (result.currentRow()[0].getInt() > 0) { + throw DbException.get( + ErrorCode.COLUMN_CONTAINS_NULL_VALUES_1, + oldColumn.getSQL()); + } + } + + public void setType(int type) { + this.type = type; + } + + public void setSelectivity(Expression selectivity) { + newSelectivity = selectivity; + } + + /** + * Set default or on update expression. 
+ * + * @param defaultExpression default or on update expression + */ + public void setDefaultExpression(Expression defaultExpression) { + this.defaultExpression = defaultExpression; + } + + public void setNewColumn(Column newColumn) { + this.newColumn = newColumn; + } + + @Override + public int getType() { + return type; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + @Override + public void addColumn(Column column) { + if (columnsToAdd == null) { + columnsToAdd = New.arrayList(); + } + columnsToAdd.add(column); + } + + public void setColumnsToRemove(ArrayList columnsToRemove) { + this.columnsToRemove = columnsToRemove; + } + + public void setVisible(boolean visible) { + this.newVisibility = visible; + } +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterTableDropConstraint.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableDropConstraint.java new file mode 100644 index 0000000000000..49957219e0bb2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableDropConstraint.java @@ -0,0 +1,56 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.constraint.Constraint; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; + +/** + * This class represents the statement + * ALTER TABLE DROP CONSTRAINT + */ +public class AlterTableDropConstraint extends SchemaCommand { + + private String constraintName; + private final boolean ifExists; + + public AlterTableDropConstraint(Session session, Schema schema, + boolean ifExists) { + super(session, schema); + this.ifExists = ifExists; + } + + public void setConstraintName(String string) { + constraintName = string; + } + + @Override + public int update() { + session.commit(true); + Constraint constraint = getSchema().findConstraint(session, constraintName); + if (constraint == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.CONSTRAINT_NOT_FOUND_1, constraintName); + } + } else { + session.getUser().checkRight(constraint.getTable(), Right.ALL); + session.getUser().checkRight(constraint.getRefTable(), Right.ALL); + session.getDatabase().removeSchemaObject(session, constraint); + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.ALTER_TABLE_DROP_CONSTRAINT; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRename.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRename.java new file mode 100644 index 0000000000000..0f102478be9be --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRename.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Table; + +/** + * This class represents the statement + * ALTER TABLE RENAME + */ +public class AlterTableRename extends SchemaCommand { + + private boolean ifTableExists; + private String oldTableName; + private String newTableName; + private boolean hidden; + + public AlterTableRename(Session session, Schema schema) { + super(session, schema); + } + + public void setIfTableExists(boolean b) { + ifTableExists = b; + } + + public void setOldTableName(String name) { + oldTableName = name; + } + + public void setNewTableName(String name) { + newTableName = name; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + Table oldTable = getSchema().findTableOrView(session, oldTableName); + if (oldTable == null) { + if (ifTableExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, oldTableName); + } + session.getUser().checkRight(oldTable, Right.ALL); + Table t = getSchema().findTableOrView(session, newTableName); + if (t != null && hidden && newTableName.equals(oldTable.getName())) { + if (!t.isHidden()) { + t.setHidden(hidden); + oldTable.setHidden(true); + db.updateMeta(session, oldTable); + } + return 0; + } + if (t != null || newTableName.equals(oldTable.getName())) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, newTableName); + } + if (oldTable.isTemporary()) { + throw DbException.getUnsupportedException("temp table"); + } + db.renameSchemaObject(session, oldTable, newTableName); + return 0; + } + + @Override + public int getType() { + return CommandInterface.ALTER_TABLE_RENAME; + } + + public void setHidden(boolean hidden) { + this.hidden = hidden; + 
} + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRenameColumn.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRenameColumn.java new file mode 100644 index 0000000000000..efb58e61a1624 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRenameColumn.java @@ -0,0 +1,86 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.table.Table; + +/** + * This class represents the statement + * ALTER TABLE ALTER COLUMN RENAME + */ +public class AlterTableRenameColumn extends SchemaCommand { + + private boolean ifTableExists; + private String tableName; + private String oldName; + private String newName; + + public AlterTableRenameColumn(Session session, Schema schema) { + super(session, schema); + } + + public void setIfTableExists(boolean b) { + this.ifTableExists = b; + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + public void setOldColumnName(String oldName) { + this.oldName = oldName; + } + + public void setNewColumnName(String newName) { + this.newName = newName; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + Table table = getSchema().findTableOrView(session, tableName); + if (table == null) { + if (ifTableExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName); + } + Column column = table.getColumn(oldName); + session.getUser().checkRight(table, Right.ALL); + 
table.checkSupportAlter(); + // we need to update CHECK constraint + // since it might reference the name of the column + Expression newCheckExpr = column.getCheckConstraint(session, newName); + table.renameColumn(column, newName); + column.removeCheckConstraint(); + column.addCheckConstraint(session, newCheckExpr); + table.setModified(); + db.updateMeta(session, table); + for (DbObject child : table.getChildren()) { + if (child.getCreateSQL() != null) { + db.updateMeta(session, child); + } + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.ALTER_TABLE_ALTER_COLUMN_RENAME; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRenameConstraint.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRenameConstraint.java new file mode 100644 index 0000000000000..f9feb78513937 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterTableRenameConstraint.java @@ -0,0 +1,59 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.constraint.Constraint; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; + +/** + * This class represents the statement + * ALTER TABLE RENAME CONSTRAINT + */ +public class AlterTableRenameConstraint extends SchemaCommand { + + private String constraintName; + private String newConstraintName; + + public AlterTableRenameConstraint(Session session, Schema schema) { + super(session, schema); + } + + public void setConstraintName(String string) { + constraintName = string; + } + public void setNewConstraintName(String newName) { + this.newConstraintName = newName; + } + + @Override + public int update() { + session.commit(true); + Constraint constraint = getSchema().findConstraint(session, constraintName); + if (constraint == null) { + throw DbException.get(ErrorCode.CONSTRAINT_NOT_FOUND_1, constraintName); + } + if (getSchema().findConstraint(session, newConstraintName) != null || + newConstraintName.equals(constraintName)) { + throw DbException.get(ErrorCode.CONSTRAINT_ALREADY_EXISTS_1, + newConstraintName); + } + session.getUser().checkRight(constraint.getTable(), Right.ALL); + session.getUser().checkRight(constraint.getRefTable(), Right.ALL); + session.getDatabase().renameSchemaObject(session, constraint, newConstraintName); + return 0; + } + + @Override + public int getType() { + return CommandInterface.ALTER_TABLE_RENAME_CONSTRAINT; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterUser.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterUser.java new file mode 100644 index 0000000000000..f7e54dc489e3e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterUser.java @@ -0,0 +1,105 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.User; +import org.h2.expression.Expression; +import org.h2.message.DbException; + +/** + * This class represents the statements + * ALTER USER ADMIN, + * ALTER USER RENAME, + * ALTER USER SET PASSWORD + */ +public class AlterUser extends DefineCommand { + + private int type; + private User user; + private String newName; + private Expression password; + private Expression salt; + private Expression hash; + private boolean admin; + + public AlterUser(Session session) { + super(session); + } + + public void setType(int type) { + this.type = type; + } + + public void setNewName(String newName) { + this.newName = newName; + } + + public void setUser(User user) { + this.user = user; + } + + public void setAdmin(boolean admin) { + this.admin = admin; + } + + public void setSalt(Expression e) { + salt = e; + } + + public void setHash(Expression e) { + hash = e; + } + + public void setPassword(Expression password) { + this.password = password; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + switch (type) { + case CommandInterface.ALTER_USER_SET_PASSWORD: + if (user != session.getUser()) { + session.getUser().checkAdmin(); + } + if (hash != null && salt != null) { + CreateUser.setSaltAndHash(user, session, salt, hash); + } else { + CreateUser.setPassword(user, session, password); + } + break; + case CommandInterface.ALTER_USER_RENAME: + session.getUser().checkAdmin(); + if (db.findUser(newName) != null || newName.equals(user.getName())) { + throw DbException.get(ErrorCode.USER_ALREADY_EXISTS_1, newName); + } + db.renameDatabaseObject(session, user, newName); + break; + case 
CommandInterface.ALTER_USER_ADMIN: + session.getUser().checkAdmin(); + if (!admin) { + user.checkOwnsNoSchemas(); + } + user.setAdmin(admin); + break; + default: + DbException.throwInternalError("type=" + type); + } + db.updateMeta(session, user); + return 0; + } + + @Override + public int getType() { + return type; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/AlterView.java b/modules/h2/src/main/java/org/h2/command/ddl/AlterView.java new file mode 100644 index 0000000000000..87f523560734a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/AlterView.java @@ -0,0 +1,54 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.command.CommandInterface; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.table.TableView; + +/** + * This class represents the statement + * ALTER VIEW + */ +public class AlterView extends DefineCommand { + + private boolean ifExists; + private TableView view; + + public AlterView(Session session) { + super(session); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setView(TableView view) { + this.view = view; + } + + @Override + public int update() { + session.commit(true); + if (view == null && ifExists) { + return 0; + } + session.getUser().checkRight(view, Right.ALL); + DbException e = view.recompile(session, false, true); + if (e != null) { + throw e; + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.ALTER_VIEW; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/Analyze.java b/modules/h2/src/main/java/org/h2/command/ddl/Analyze.java new file mode 100644 index 0000000000000..ff0c3e15a05a7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/Analyze.java @@ -0,0 +1,148 @@ +/* + * Copyright 
2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.expression.Parameter; +import org.h2.result.ResultInterface; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.table.TableType; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; +import org.h2.value.ValueInt; +import org.h2.value.ValueNull; + +/** + * This class represents the statements + * ANALYZE and ANALYZE TABLE + */ +public class Analyze extends DefineCommand { + + /** + * The sample size. + */ + private int sampleRows; + /** + * used in ANALYZE TABLE... + */ + private Table table; + + public Analyze(Session session) { + super(session); + sampleRows = session.getDatabase().getSettings().analyzeSample; + } + + public void setTable(Table table) { + this.table = table; + } + + @Override + public int update() { + session.commit(true); + session.getUser().checkAdmin(); + Database db = session.getDatabase(); + if (table != null) { + analyzeTable(session, table, sampleRows, true); + } else { + for (Table table : db.getAllTablesAndViews(false)) { + analyzeTable(session, table, sampleRows, true); + } + } + return 0; + } + + /** + * Analyze this table. 
+ * + * @param session the session + * @param table the table + * @param sample the number of sample rows + * @param manual whether the command was called by the user + */ + public static void analyzeTable(Session session, Table table, int sample, + boolean manual) { + if (table.getTableType() != TableType.TABLE || + table.isHidden() || session == null) { + return; + } + if (!manual) { + if (session.getDatabase().isSysTableLocked()) { + return; + } + if (table.hasSelectTrigger()) { + return; + } + } + if (table.isTemporary() && !table.isGlobalTemporary() + && session.findLocalTempTable(table.getName()) == null) { + return; + } + if (table.isLockedExclusively() && !table.isLockedExclusivelyBy(session)) { + return; + } + if (!session.getUser().hasRight(table, Right.SELECT)) { + return; + } + if (session.getCancel() != 0) { + // if the connection is closed and there is something to undo + return; + } + Column[] columns = table.getColumns(); + if (columns.length == 0) { + return; + } + Database db = session.getDatabase(); + StatementBuilder buff = new StatementBuilder("SELECT "); + for (Column col : columns) { + buff.appendExceptFirst(", "); + int type = col.getType(); + if (type == Value.BLOB || type == Value.CLOB) { + // can not index LOB columns, so calculating + // the selectivity is not required + buff.append("MAX(NULL)"); + } else { + buff.append("SELECTIVITY(").append(col.getSQL()).append(')'); + } + } + buff.append(" FROM ").append(table.getSQL()); + if (sample > 0) { + buff.append(" LIMIT ? SAMPLE_SIZE ? 
"); + String sql = buff.toString(); + Prepared command = session.prepare(sql); + if (sample > 0) { + ArrayList<Parameter> params = command.getParameters(); + params.get(0).setValue(ValueInt.get(1)); + params.get(1).setValue(ValueInt.get(sample)); + } + ResultInterface result = command.query(0); + result.next(); + for (int j = 0; j < columns.length; j++) { + Value v = result.currentRow()[j]; + if (v != ValueNull.INSTANCE) { + int selectivity = v.getInt(); + columns[j].setSelectivity(selectivity); + } + } + db.updateMeta(session, table); + } + + public void setTop(int top) { + this.sampleRows = top; + } + + @Override + public int getType() { + return CommandInterface.ANALYZE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CommandWithColumns.java b/modules/h2/src/main/java/org/h2/command/ddl/CommandWithColumns.java new file mode 100644 index 0000000000000..afb2b9987aaa6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CommandWithColumns.java @@ -0,0 +1,154 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.Sequence; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.util.New; + +public abstract class CommandWithColumns extends SchemaCommand { + + private ArrayList<DefineCommand> constraintCommands; + + private IndexColumn[] pkColumns; + + protected CommandWithColumns(Session session, Schema schema) { + super(session, schema); + } + + /** + * Add a column to this table. + * + * @param column + * the column to add + */ + public abstract void addColumn(Column column); + + /** + * Add a constraint statement to this statement.
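The Analyze command above estimates each column's selectivity from a bounded sample via the generated `SELECT SELECTIVITY(...) ... LIMIT ? SAMPLE_SIZE ?` query. As a rough standalone illustration of what that statistic means (a minimal sketch with a hypothetical helper, not H2's implementation): the percentage of distinct values in the sample, clamped to at least 1, with 50 as a neutral default when no data is available.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;

public class SelectivitySketch {

    /**
     * Rough stand-in for the SELECTIVITY statistic: the percentage of
     * distinct values in a sample, clamped to the range 1..100
     * (50 serves as a neutral default for an empty sample).
     */
    static int selectivity(List<String> sample) {
        if (sample.isEmpty()) {
            return 50;
        }
        int distinct = new HashSet<>(sample).size();
        return Math.max(1, (int) (100L * distinct / sample.size()));
    }

    public static void main(String[] args) {
        // An all-distinct column scores high; a near-constant column scores low.
        System.out.println(selectivity(Arrays.asList("a", "b", "c", "d")));
        System.out.println(selectivity(Collections.nCopies(200, "x")));
    }
}
```

A high selectivity suggests a column is a good index candidate, which is why Analyze stores the value on each Column.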
The primary key definition is + * one possible constraint statement. + * + * @param command + * the statement to add + */ + public void addConstraintCommand(DefineCommand command) { + if (command instanceof CreateIndex) { + getConstraintCommands().add(command); + } else { + AlterTableAddConstraint con = (AlterTableAddConstraint) command; + boolean alreadySet; + if (con.getType() == CommandInterface.ALTER_TABLE_ADD_CONSTRAINT_PRIMARY_KEY) { + alreadySet = setPrimaryKeyColumns(con.getIndexColumns()); + } else { + alreadySet = false; + } + if (!alreadySet) { + getConstraintCommands().add(command); + } + } + } + + /** + * For the given list of columns, disable "nullable" for those columns that + * are primary key columns. + * + * @param columns the list of columns + */ + protected void changePrimaryKeysToNotNull(ArrayList<Column> columns) { + if (pkColumns != null) { + for (Column c : columns) { + for (IndexColumn idxCol : pkColumns) { + if (c.getName().equals(idxCol.columnName)) { + c.setNullable(false); + } + } + } + } + } + + /** + * Create the constraints. + */ + protected void createConstraints() { + if (constraintCommands != null) { + for (DefineCommand command : constraintCommands) { + command.setTransactional(transactional); + command.update(); + } + } + } + + /** + * For the given list of columns, create sequences for auto-increment + * columns (if needed), and then get the list of all sequences of the + * columns.
+ * + * @param columns the columns + * @param temporary whether generated sequences should be temporary + * @return the list of sequences (may be empty) + */ + protected ArrayList<Sequence> generateSequences(ArrayList<Column> columns, boolean temporary) { + ArrayList<Sequence> sequences = New.arrayList(); + if (columns != null) { + for (Column c : columns) { + if (c.isAutoIncrement()) { + int objId = getObjectId(); + c.convertAutoIncrementToSequence(session, getSchema(), objId, temporary); + if (!Constants.CLUSTERING_DISABLED.equals(session.getDatabase().getCluster())) { + throw DbException.getUnsupportedException("CLUSTERING && auto-increment columns"); + } + } + Sequence seq = c.getSequence(); + if (seq != null) { + sequences.add(seq); + } + } + } + return sequences; + } + + private ArrayList<DefineCommand> getConstraintCommands() { + if (constraintCommands == null) { + constraintCommands = New.arrayList(); + } + return constraintCommands; + } + + /** + * Sets the primary key columns, but also checks whether a primary key with different + * columns is already defined. + * + * @param columns + * the primary key columns + * @return true if the same primary key columns were already set + */ + private boolean setPrimaryKeyColumns(IndexColumn[] columns) { + if (pkColumns != null) { + int len = columns.length; + if (len != pkColumns.length) { + throw DbException.get(ErrorCode.SECOND_PRIMARY_KEY); + } + for (int i = 0; i < len; i++) { + if (!columns[i].columnName.equals(pkColumns[i].columnName)) { + throw DbException.get(ErrorCode.SECOND_PRIMARY_KEY); + } + } + return true; + } + this.pkColumns = columns; + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateAggregate.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateAggregate.java new file mode 100644 index 0000000000000..b26c3f4fb8f38 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateAggregate.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group.
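setPrimaryKeyColumns above tolerates a repeated PRIMARY KEY definition only when it names exactly the same columns in the same order, and rejects any second, different primary key. A standalone sketch of that comparison (class and exception choice are hypothetical, simplified from the IndexColumn-based original):

```java
import java.util.Arrays;

public class PrimaryKeyCheck {

    private String[] pkColumns;

    /**
     * Returns true if the same primary key columns were already set;
     * throws if a different primary key was defined before.
     */
    boolean setPrimaryKeyColumns(String[] columns) {
        if (pkColumns != null) {
            // A second PRIMARY KEY is only allowed if it is identical
            // (same column names, same order) to the first one.
            if (!Arrays.equals(pkColumns, columns)) {
                throw new IllegalStateException("second primary key");
            }
            return true;
        }
        this.pkColumns = columns;
        return false;
    }
}
```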
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.UserAggregate; +import org.h2.message.DbException; +import org.h2.schema.Schema; + +/** + * This class represents the statement + * CREATE AGGREGATE + */ +public class CreateAggregate extends DefineCommand { + + private Schema schema; + private String name; + private String javaClassMethod; + private boolean ifNotExists; + private boolean force; + + public CreateAggregate(Session session) { + super(session); + } + + @Override + public int update() { + session.commit(true); + session.getUser().checkAdmin(); + Database db = session.getDatabase(); + if (db.findAggregate(name) != null || schema.findFunction(name) != null) { + if (!ifNotExists) { + throw DbException.get( + ErrorCode.FUNCTION_ALIAS_ALREADY_EXISTS_1, name); + } + } else { + int id = getObjectId(); + UserAggregate aggregate = new UserAggregate( + db, id, name, javaClassMethod, force); + db.addDatabaseObject(session, aggregate); + } + return 0; + } + + public void setSchema(Schema schema) { + this.schema = schema; + } + + public void setName(String name) { + this.name = name; + } + + public void setJavaClassMethod(String string) { + this.javaClassMethod = string; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setForce(boolean force) { + this.force = force; + } + + @Override + public int getType() { + return CommandInterface.CREATE_AGGREGATE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateConstant.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateConstant.java new file mode 100644 index 0000000000000..05dcc4b3b65fe --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/command/ddl/CreateConstant.java @@ -0,0 +1,69 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.schema.Constant; +import org.h2.schema.Schema; +import org.h2.value.Value; + +/** + * This class represents the statement + * CREATE CONSTANT + */ +public class CreateConstant extends SchemaCommand { + + private String constantName; + private Expression expression; + private boolean ifNotExists; + + public CreateConstant(Session session, Schema schema) { + super(session, schema); + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + @Override + public int update() { + session.commit(true); + session.getUser().checkAdmin(); + Database db = session.getDatabase(); + if (getSchema().findConstant(constantName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.CONSTANT_ALREADY_EXISTS_1, constantName); + } + int id = getObjectId(); + Constant constant = new Constant(getSchema(), id, constantName); + expression = expression.optimize(session); + Value value = expression.getValue(session); + constant.setValue(value); + db.addSchemaObject(session, constant); + return 0; + } + + public void setConstantName(String constantName) { + this.constantName = constantName; + } + + public void setExpression(Expression expr) { + this.expression = expr; + } + + @Override + public int getType() { + return CommandInterface.CREATE_CONSTANT; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateFunctionAlias.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateFunctionAlias.java new file mode 
100644 index 0000000000000..ad7a19f8baa32 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateFunctionAlias.java @@ -0,0 +1,106 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.FunctionAlias; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.util.StringUtils; + +/** + * This class represents the statement + * CREATE ALIAS + */ +public class CreateFunctionAlias extends SchemaCommand { + + private String aliasName; + private String javaClassMethod; + private boolean deterministic; + private boolean ifNotExists; + private boolean force; + private String source; + private boolean bufferResultSetToLocalTemp = true; + + public CreateFunctionAlias(Session session, Schema schema) { + super(session, schema); + } + + @Override + public int update() { + session.commit(true); + session.getUser().checkAdmin(); + Database db = session.getDatabase(); + if (getSchema().findFunction(aliasName) != null) { + if (!ifNotExists) { + throw DbException.get( + ErrorCode.FUNCTION_ALIAS_ALREADY_EXISTS_1, aliasName); + } + } else { + int id = getObjectId(); + FunctionAlias functionAlias; + if (javaClassMethod != null) { + functionAlias = FunctionAlias.newInstance(getSchema(), id, + aliasName, javaClassMethod, force, + bufferResultSetToLocalTemp); + } else { + functionAlias = FunctionAlias.newInstanceFromSource( + getSchema(), id, aliasName, source, force, + bufferResultSetToLocalTemp); + } + functionAlias.setDeterministic(deterministic); + db.addSchemaObject(session, functionAlias); + } + return 0; + } + + public void setAliasName(String name) { + this.aliasName = name; + } + + /** + * Set the qualified method name after removing 
whitespace. + * + * @param method the qualified method name + */ + public void setJavaClassMethod(String method) { + this.javaClassMethod = StringUtils.replaceAll(method, " ", ""); + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setForce(boolean force) { + this.force = force; + } + + public void setDeterministic(boolean deterministic) { + this.deterministic = deterministic; + } + + /** + * Should the return value ResultSet be buffered in a local temporary file? + * + * @param b the new value + */ + public void setBufferResultSetToLocalTemp(boolean b) { + this.bufferResultSetToLocalTemp = b; + } + + public void setSource(String source) { + this.source = source; + } + + @Override + public int getType() { + return CommandInterface.CREATE_ALIAS; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateIndex.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateIndex.java new file mode 100644 index 0000000000000..1d28cbe82fe96 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateIndex.java @@ -0,0 +1,141 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.IndexColumn; +import org.h2.table.Table; + +/** + * This class represents the statement + * CREATE INDEX + */ +public class CreateIndex extends SchemaCommand { + + private String tableName; + private String indexName; + private IndexColumn[] indexColumns; + private boolean primaryKey, unique, hash, spatial, affinity; + private boolean ifTableExists; + private boolean ifNotExists; + private String comment; + + public CreateIndex(Session session, Schema schema) { + super(session, schema); + } + + public void setIfTableExists(boolean b) { + this.ifTableExists = b; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + public void setIndexName(String indexName) { + this.indexName = indexName; + } + + public void setIndexColumns(IndexColumn[] columns) { + this.indexColumns = columns; + } + + @Override + public int update() { + if (!transactional) { + session.commit(true); + } + Database db = session.getDatabase(); + boolean persistent = db.isPersistent(); + Table table = getSchema().findTableOrView(session, tableName); + if (table == null) { + if (ifTableExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName); + } + if (getSchema().findIndex(session, indexName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.INDEX_ALREADY_EXISTS_1, indexName); + } + session.getUser().checkRight(table, Right.ALL); + table.lock(session, true, true); + if (!table.isPersistIndexes()) { + persistent = false; + } + 
int id = getObjectId(); + if (indexName == null) { + if (primaryKey) { + indexName = table.getSchema().getUniqueIndexName(session, + table, Constants.PREFIX_PRIMARY_KEY); + } else { + indexName = table.getSchema().getUniqueIndexName(session, + table, Constants.PREFIX_INDEX); + } + } + IndexType indexType; + if (primaryKey) { + if (table.findPrimaryKey() != null) { + throw DbException.get(ErrorCode.SECOND_PRIMARY_KEY); + } + indexType = IndexType.createPrimaryKey(persistent, hash); + } else if (unique) { + indexType = IndexType.createUnique(persistent, hash); + } else if (affinity) { + indexType = IndexType.createAffinity(); + } else { + indexType = IndexType.createNonUnique(persistent, hash, spatial); + } + IndexColumn.mapColumns(indexColumns, table); + table.addIndex(session, indexName, id, indexColumns, indexType, create, + comment); + return 0; + } + + public void setPrimaryKey(boolean b) { + this.primaryKey = b; + } + + public void setUnique(boolean b) { + this.unique = b; + } + + public void setHash(boolean b) { + this.hash = b; + } + + public void setSpatial(boolean b) { + this.spatial = b; + } + + public void setAffinity(boolean b) { + this.affinity = b; + } + + public void setComment(String comment) { + this.comment = comment; + } + + @Override + public int getType() { + return CommandInterface.CREATE_INDEX; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateLinkedTable.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateLinkedTable.java new file mode 100644 index 0000000000000..ce788d858ec51 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateLinkedTable.java @@ -0,0 +1,124 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
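In CreateIndex.update() above, the mutually exclusive primaryKey, unique, affinity, and spatial flags select the index type through a fixed priority order. A standalone sketch of that selection (enum and method names are hypothetical, not H2 API; spatial only refines the non-unique case and is omitted here):

```java
public class IndexTypeChoice {

    enum Kind { PRIMARY_KEY, UNIQUE, AFFINITY, NON_UNIQUE }

    /** Mirrors the if/else chain in CreateIndex.update(): the first set flag wins. */
    static Kind choose(boolean primaryKey, boolean unique, boolean affinity) {
        if (primaryKey) {
            return Kind.PRIMARY_KEY;
        } else if (unique) {
            return Kind.UNIQUE;
        } else if (affinity) {
            return Kind.AFFINITY;
        }
        return Kind.NON_UNIQUE;
    }
}
```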
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.TableLink; + +/** + * This class represents the statement + * CREATE LINKED TABLE + */ +public class CreateLinkedTable extends SchemaCommand { + + private String tableName; + private String driver, url, user, password, originalSchema, originalTable; + private boolean ifNotExists; + private String comment; + private boolean emitUpdates; + private boolean force; + private boolean temporary; + private boolean globalTemporary; + private boolean readOnly; + + public CreateLinkedTable(Session session, Schema schema) { + super(session, schema); + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + public void setDriver(String driver) { + this.driver = driver; + } + + public void setOriginalTable(String originalTable) { + this.originalTable = originalTable; + } + + public void setPassword(String password) { + this.password = password; + } + + public void setUrl(String url) { + this.url = url; + } + + public void setUser(String user) { + this.user = user; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + session.getUser().checkAdmin(); + if (getSchema().resolveTableOrView(session, tableName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, + tableName); + } + int id = getObjectId(); + TableLink table = getSchema().createTableLink(id, tableName, driver, url, + user, password, originalSchema, originalTable, emitUpdates, force); + table.setTemporary(temporary); + table.setGlobalTemporary(globalTemporary); + table.setComment(comment); + 
table.setReadOnly(readOnly); + if (temporary && !globalTemporary) { + session.addLocalTempTable(table); + } else { + db.addSchemaObject(session, table); + } + return 0; + } + + public void setEmitUpdates(boolean emitUpdates) { + this.emitUpdates = emitUpdates; + } + + public void setComment(String comment) { + this.comment = comment; + } + + public void setForce(boolean force) { + this.force = force; + } + + public void setTemporary(boolean temp) { + this.temporary = temp; + } + + public void setGlobalTemporary(boolean globalTemp) { + this.globalTemporary = globalTemp; + } + + public void setReadOnly(boolean readOnly) { + this.readOnly = readOnly; + } + + public void setOriginalSchema(String originalSchema) { + this.originalSchema = originalSchema; + } + + @Override + public int getType() { + return CommandInterface.CREATE_LINKED_TABLE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateRole.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateRole.java new file mode 100644 index 0000000000000..4fd04eff1346b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateRole.java @@ -0,0 +1,61 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Role; +import org.h2.engine.Session; +import org.h2.message.DbException; + +/** + * This class represents the statement + * CREATE ROLE + */ +public class CreateRole extends DefineCommand { + + private String roleName; + private boolean ifNotExists; + + public CreateRole(Session session) { + super(session); + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setRoleName(String name) { + this.roleName = name; + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + if (db.findUser(roleName) != null) { + throw DbException.get(ErrorCode.USER_ALREADY_EXISTS_1, roleName); + } + if (db.findRole(roleName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.ROLE_ALREADY_EXISTS_1, roleName); + } + int id = getObjectId(); + Role role = new Role(db, id, roleName, false); + db.addDatabaseObject(session, role); + return 0; + } + + @Override + public int getType() { + return CommandInterface.CREATE_ROLE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateSchema.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateSchema.java new file mode 100644 index 0000000000000..ac51b98d15ba3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateSchema.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.User; +import org.h2.message.DbException; +import org.h2.schema.Schema; + +/** + * This class represents the statement + * CREATE SCHEMA + */ +public class CreateSchema extends DefineCommand { + + private String schemaName; + private String authorization; + private boolean ifNotExists; + private ArrayList<String> tableEngineParams; + + public CreateSchema(Session session) { + super(session); + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + @Override + public int update() { + session.getUser().checkSchemaAdmin(); + session.commit(true); + Database db = session.getDatabase(); + User user = db.getUser(authorization); + // during DB startup, the Right/Role records have not yet been loaded + if (!db.isStarting()) { + user.checkSchemaAdmin(); + } + if (db.findSchema(schemaName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.SCHEMA_ALREADY_EXISTS_1, schemaName); + } + int id = getObjectId(); + Schema schema = new Schema(db, id, schemaName, user, false); + schema.setTableEngineParams(tableEngineParams); + db.addDatabaseObject(session, schema); + return 0; + } + + public void setSchemaName(String name) { + this.schemaName = name; + } + + public void setAuthorization(String userName) { + this.authorization = userName; + } + + public void setTableEngineParams(ArrayList<String> tableEngineParams) { + this.tableEngineParams = tableEngineParams; + } + + @Override + public int getType() { + return CommandInterface.CREATE_SCHEMA; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateSequence.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateSequence.java new file mode 100644 index 0000000000000..dbc5147ad7e07 --- /dev/null +++
b/modules/h2/src/main/java/org/h2/command/ddl/CreateSequence.java @@ -0,0 +1,107 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.Sequence; + +/** + * This class represents the statement + * CREATE SEQUENCE + */ +public class CreateSequence extends SchemaCommand { + + private String sequenceName; + private boolean ifNotExists; + private boolean cycle; + private Expression minValue; + private Expression maxValue; + private Expression start; + private Expression increment; + private Expression cacheSize; + private boolean belongsToTable; + + public CreateSequence(Session session, Schema schema) { + super(session, schema); + } + + public void setSequenceName(String sequenceName) { + this.sequenceName = sequenceName; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setCycle(boolean cycle) { + this.cycle = cycle; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + if (getSchema().findSequence(sequenceName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.SEQUENCE_ALREADY_EXISTS_1, sequenceName); + } + int id = getObjectId(); + Long startValue = getLong(start); + Long inc = getLong(increment); + Long cache = getLong(cacheSize); + Long min = getLong(minValue); + Long max = getLong(maxValue); + Sequence sequence = new Sequence(getSchema(), id, sequenceName, startValue, inc, + cache, min, max, cycle, belongsToTable); + db.addSchemaObject(session, sequence); + return 0; + } + + private Long getLong(Expression expr) 
{ + if (expr == null) { + return null; + } + return expr.optimize(session).getValue(session).getLong(); + } + + public void setStartWith(Expression start) { + this.start = start; + } + + public void setIncrement(Expression increment) { + this.increment = increment; + } + + public void setMinValue(Expression minValue) { + this.minValue = minValue; + } + + public void setMaxValue(Expression maxValue) { + this.maxValue = maxValue; + } + + public void setBelongsToTable(boolean belongsToTable) { + this.belongsToTable = belongsToTable; + } + + public void setCacheSize(Expression cacheSize) { + this.cacheSize = cacheSize; + } + + @Override + public int getType() { + return CommandInterface.CREATE_SEQUENCE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateSynonym.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateSynonym.java new file mode 100644 index 0000000000000..b87dae540cbeb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateSynonym.java @@ -0,0 +1,114 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.TableSynonym; + +/** + * This class represents the statement + * CREATE SYNONYM + */ +public class CreateSynonym extends SchemaCommand { + + private final CreateSynonymData data = new CreateSynonymData(); + private boolean ifNotExists; + private boolean orReplace; + private String comment; + + public CreateSynonym(Session session, Schema schema) { + super(session, schema); + } + + public void setName(String name) { + data.synonymName = name; + } + + public void setSynonymFor(String tableName) { + data.synonymFor = tableName; + } + + public void setSynonymForSchema(Schema synonymForSchema) { + data.synonymForSchema = synonymForSchema; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setOrReplace(boolean orReplace) { this.orReplace = orReplace; } + + @Override + public int update() { + if (!transactional) { + session.commit(true); + } + session.getUser().checkAdmin(); + Database db = session.getDatabase(); + data.session = session; + db.lockMeta(session); + + if (data.synonymForSchema.findTableOrView(session, data.synonymName) != null) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, data.synonymName); + } + + if (data.synonymForSchema.findTableOrView(session, data.synonymFor) != null) { + return createTableSynonym(db); + } + + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, + data.synonymForSchema.getName() + "." 
+ data.synonymFor); + + } + + private int createTableSynonym(Database db) { + + TableSynonym old = getSchema().getSynonym(data.synonymName); + if (old != null) { + if (orReplace) { + // ok, we're replacing the existing synonym + } else if (ifNotExists) { + return 0; + } else { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, data.synonymName); + } + } + + TableSynonym table; + if (old != null) { + table = old; + data.schema = table.getSchema(); + table.updateData(data); + table.setComment(comment); + table.setModified(); + db.updateMeta(session, table); + } else { + data.id = getObjectId(); + table = getSchema().createSynonym(data); + table.setComment(comment); + db.addSchemaObject(session, table); + } + + table.updateSynonymFor(); + return 0; + } + + public void setComment(String comment) { + this.comment = comment; + } + + @Override + public int getType() { + return CommandInterface.CREATE_SYNONYM; + } + + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateSynonymData.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateSynonymData.java new file mode 100644 index 0000000000000..e6aab014cf86f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateSynonymData.java @@ -0,0 +1,44 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.engine.Session; +import org.h2.schema.Schema; + +/** + * The data required to create a synonym. + */ +public class CreateSynonymData { + + /** + * The schema. + */ + public Schema schema; + + /** + * The synonym's name. + */ + public String synonymName; + + /** + * The name of the table the synonym is created for. + */ + public String synonymFor; + + /** Schema synonymFor is located in. */ + public Schema synonymForSchema; + + /** + * The object id. + */ + public int id; + + /** + * The session.
+ */ + public Session session; + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateTable.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateTable.java new file mode 100644 index 0000000000000..6ae4bdc33eb52 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateTable.java @@ -0,0 +1,266 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.dml.Insert; +import org.h2.command.dml.Query; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.Sequence; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.util.ColumnNamer; +import org.h2.value.DataType; +import org.h2.value.Value; + +/** + * This class represents the statement + * CREATE TABLE + */ +public class CreateTable extends CommandWithColumns { + + private final CreateTableData data = new CreateTableData(); + private boolean ifNotExists; + private boolean onCommitDrop; + private boolean onCommitTruncate; + private Query asQuery; + private String comment; + private boolean sortedInsertMode; + + public CreateTable(Session session, Schema schema) { + super(session, schema); + data.persistIndexes = true; + data.persistData = true; + } + + public void setQuery(Query query) { + this.asQuery = query; + } + + public void setTemporary(boolean temporary) { + data.temporary = temporary; + } + + public void setTableName(String tableName) { + data.tableName = tableName; + } + + @Override + public void addColumn(Column column) { + data.columns.add(column); + } + + 
public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + @Override + public int update() { + if (!transactional) { + session.commit(true); + } + Database db = session.getDatabase(); + if (!db.isPersistent()) { + data.persistIndexes = false; + } + boolean isSessionTemporary = data.temporary && !data.globalTemporary; + if (!isSessionTemporary) { + db.lockMeta(session); + } + if (getSchema().resolveTableOrView(session, data.tableName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, data.tableName); + } + if (asQuery != null) { + asQuery.prepare(); + if (data.columns.isEmpty()) { + generateColumnsFromQuery(); + } else if (data.columns.size() != asQuery.getColumnCount()) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + } + changePrimaryKeysToNotNull(data.columns); + data.id = getObjectId(); + data.create = create; + data.session = session; + Table table = getSchema().createTable(data); + ArrayList<Sequence> sequences = generateSequences(data.columns, data.temporary); + table.setComment(comment); + if (isSessionTemporary) { + if (onCommitDrop) { + table.setOnCommitDrop(true); + } + if (onCommitTruncate) { + table.setOnCommitTruncate(true); + } + session.addLocalTempTable(table); + } else { + db.lockMeta(session); + db.addSchemaObject(session, table); + } + try { + for (Column c : data.columns) { + c.prepareExpression(session); + } + for (Sequence sequence : sequences) { + table.addSequence(sequence); + } + createConstraints(); + if (asQuery != null) { + boolean old = session.isUndoLogEnabled(); + try { + session.setUndoLogEnabled(false); + session.startStatementWithinTransaction(); + Insert insert = new Insert(session); + insert.setSortedInsertMode(sortedInsertMode); + insert.setQuery(asQuery); + insert.setTable(table); + insert.setInsertFromSelect(true); + insert.prepare(); + insert.update(); + } finally { + session.setUndoLogEnabled(old); + } + } + HashSet<DbObject>
set = new HashSet<>(); + table.addDependencies(set); + for (DbObject obj : set) { + if (obj == table) { + continue; + } + if (obj.getType() == DbObject.TABLE_OR_VIEW) { + if (obj instanceof Table) { + Table t = (Table) obj; + if (t.getId() > table.getId()) { + throw DbException.get( + ErrorCode.FEATURE_NOT_SUPPORTED_1, + "Table depends on another table " + + "with a higher ID: " + t + + ", this is currently not supported, " + + "as it would prevent the database from " + + "being re-opened"); + } + } + } + } + } catch (DbException e) { + db.checkPowerOff(); + db.removeSchemaObject(session, table); + if (!transactional) { + session.commit(true); + } + throw e; + } + return 0; + } + + private void generateColumnsFromQuery() { + int columnCount = asQuery.getColumnCount(); + ArrayList<Expression> expressions = asQuery.getExpressions(); + ColumnNamer columnNamer = new ColumnNamer(session); + for (int i = 0; i < columnCount; i++) { + Expression expr = expressions.get(i); + int type = expr.getType(); + String name = columnNamer.getColumnName(expr, i, expr.getAlias()); + long precision = expr.getPrecision(); + int displaySize = expr.getDisplaySize(); + DataType dt = DataType.getDataType(type); + if (precision > 0 && (dt.defaultPrecision == 0 || + (dt.defaultPrecision > precision && dt.defaultPrecision < Byte.MAX_VALUE))) { + // don't set precision to MAX_VALUE if this is the default + precision = dt.defaultPrecision; + } + int scale = expr.getScale(); + if (scale > 0 && (dt.defaultScale == 0 || + (dt.defaultScale > scale && dt.defaultScale < precision))) { + scale = dt.defaultScale; + } + if (scale > precision) { + precision = scale; + } + String[] enumerators = null; + if (dt.type == Value.ENUM) { + /** + * Only columns of tables may be enumerated.
+ */ + if (!(expr instanceof ExpressionColumn)) { + throw DbException.get(ErrorCode.GENERAL_ERROR_1, + "Unable to resolve enumerators of expression"); + } + enumerators = ((ExpressionColumn) expr).getColumn().getEnumerators(); + } + Column col = new Column(name, type, precision, scale, displaySize, enumerators); + addColumn(col); + } + } + + public void setPersistIndexes(boolean persistIndexes) { + data.persistIndexes = persistIndexes; + } + + public void setGlobalTemporary(boolean globalTemporary) { + data.globalTemporary = globalTemporary; + } + + /** + * This temporary table is dropped on commit. + */ + public void setOnCommitDrop() { + this.onCommitDrop = true; + } + + /** + * This temporary table is truncated on commit. + */ + public void setOnCommitTruncate() { + this.onCommitTruncate = true; + } + + public void setComment(String comment) { + this.comment = comment; + } + + public void setPersistData(boolean persistData) { + data.persistData = persistData; + if (!persistData) { + data.persistIndexes = false; + } + } + + public void setSortedInsertMode(boolean sortedInsertMode) { + this.sortedInsertMode = sortedInsertMode; + } + + public void setTableEngine(String tableEngine) { + data.tableEngine = tableEngine; + } + + public void setTableEngineParams(ArrayList<String> tableEngineParams) { + data.tableEngineParams = tableEngineParams; + } + + public void setHidden(boolean isHidden) { + data.isHidden = isHidden; + } + + @Override + public int getType() { + return CommandInterface.CREATE_TABLE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateTableData.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateTableData.java new file mode 100644 index 0000000000000..cf6f8ddfb5fb1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateTableData.java @@ -0,0 +1,83 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import org.h2.engine.Session; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.util.New; + +/** + * The data required to create a table. + */ +public class CreateTableData { + + /** + * The schema. + */ + public Schema schema; + + /** + * The table name. + */ + public String tableName; + + /** + * The object id. + */ + public int id; + + /** + * The column list. + */ + public ArrayList<Column> columns = New.arrayList(); + + /** + * Whether this is a temporary table. + */ + public boolean temporary; + + /** + * Whether the table is global temporary. + */ + public boolean globalTemporary; + + /** + * Whether the indexes should be persisted. + */ + public boolean persistIndexes; + + /** + * Whether the data should be persisted. + */ + public boolean persistData; + + /** + * Whether to create a new table. + */ + public boolean create; + + /** + * The session. + */ + public Session session; + + /** + * The table engine to use for creating the table. + */ + public String tableEngine; + + /** + * The table engine params to use for creating the table. + */ + public ArrayList<String> tableEngineParams; + + /** + * The table is hidden. + */ + public boolean isHidden; +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateTrigger.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateTrigger.java new file mode 100644 index 0000000000000..d24140c4cdb1b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateTrigger.java @@ -0,0 +1,137 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.TriggerObject; +import org.h2.table.Table; + +/** + * This class represents the statement + * CREATE TRIGGER + */ +public class CreateTrigger extends SchemaCommand { + + private String triggerName; + private boolean ifNotExists; + + private boolean insteadOf; + private boolean before; + private int typeMask; + private boolean rowBased; + private int queueSize = TriggerObject.DEFAULT_QUEUE_SIZE; + private boolean noWait; + private String tableName; + private String triggerClassName; + private String triggerSource; + private boolean force; + private boolean onRollback; + + public CreateTrigger(Session session, Schema schema) { + super(session, schema); + } + + public void setInsteadOf(boolean insteadOf) { + this.insteadOf = insteadOf; + } + + public void setBefore(boolean before) { + this.before = before; + } + + public void setTriggerClassName(String triggerClassName) { + this.triggerClassName = triggerClassName; + } + + public void setTriggerSource(String triggerSource) { + this.triggerSource = triggerSource; + } + + public void setTypeMask(int typeMask) { + this.typeMask = typeMask; + } + + public void setRowBased(boolean rowBased) { + this.rowBased = rowBased; + } + + public void setQueueSize(int size) { + this.queueSize = size; + } + + public void setNoWait(boolean noWait) { + this.noWait = noWait; + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + public void setTriggerName(String name) { + this.triggerName = name; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + 
if (getSchema().findTrigger(triggerName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get( + ErrorCode.TRIGGER_ALREADY_EXISTS_1, + triggerName); + } + if ((typeMask & Trigger.SELECT) == Trigger.SELECT && rowBased) { + throw DbException.get( + ErrorCode.TRIGGER_SELECT_AND_ROW_BASED_NOT_SUPPORTED, + triggerName); + } + int id = getObjectId(); + Table table = getSchema().getTableOrView(session, tableName); + TriggerObject trigger = new TriggerObject(getSchema(), id, triggerName, table); + trigger.setInsteadOf(insteadOf); + trigger.setBefore(before); + trigger.setNoWait(noWait); + trigger.setQueueSize(queueSize); + trigger.setRowBased(rowBased); + trigger.setTypeMask(typeMask); + trigger.setOnRollback(onRollback); + if (this.triggerClassName != null) { + trigger.setTriggerClassName(triggerClassName, force); + } else { + trigger.setTriggerSource(triggerSource, force); + } + db.addSchemaObject(session, trigger); + table.addTrigger(trigger); + return 0; + } + + public void setForce(boolean force) { + this.force = force; + } + + public void setOnRollback(boolean onRollback) { + this.onRollback = onRollback; + } + + @Override + public int getType() { + return CommandInterface.CREATE_TRIGGER; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateUser.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateUser.java new file mode 100644 index 0000000000000..fffb5829453d1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateUser.java @@ -0,0 +1,135 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.User; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.security.SHA256; +import org.h2.util.StringUtils; + +/** + * This class represents the statement + * CREATE USER + */ +public class CreateUser extends DefineCommand { + + private String userName; + private boolean admin; + private Expression password; + private Expression salt; + private Expression hash; + private boolean ifNotExists; + private String comment; + + public CreateUser(Session session) { + super(session); + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setUserName(String userName) { + this.userName = userName; + } + + public void setPassword(Expression password) { + this.password = password; + } + + /** + * Set the salt and hash for the given user. + * + * @param user the user + * @param session the session + * @param salt the salt + * @param hash the hash + */ + static void setSaltAndHash(User user, Session session, Expression salt, Expression hash) { + user.setSaltAndHash(getByteArray(session, salt), getByteArray(session, hash)); + } + + private static byte[] getByteArray(Session session, Expression e) { + String s = e.optimize(session).getValue(session).getString(); + return s == null ? new byte[0] : StringUtils.convertHexToBytes(s); + } + + /** + * Set the password for the given user. + * + * @param user the user + * @param session the session + * @param password the password + */ + static void setPassword(User user, Session session, Expression password) { + String pwd = password.optimize(session).getValue(session).getString(); + char[] passwordChars = pwd == null ? 
new char[0] : pwd.toCharArray(); + byte[] userPasswordHash; + String userName = user.getName(); + if (userName.length() == 0 && passwordChars.length == 0) { + userPasswordHash = new byte[0]; + } else { + userPasswordHash = SHA256.getKeyPasswordHash(userName, passwordChars); + } + user.setUserPasswordHash(userPasswordHash); + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + if (db.findRole(userName) != null) { + throw DbException.get(ErrorCode.ROLE_ALREADY_EXISTS_1, userName); + } + if (db.findUser(userName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get(ErrorCode.USER_ALREADY_EXISTS_1, userName); + } + int id = getObjectId(); + User user = new User(db, id, userName, false); + user.setAdmin(admin); + user.setComment(comment); + if (hash != null && salt != null) { + setSaltAndHash(user, session, salt, hash); + } else if (password != null) { + setPassword(user, session, password); + } else { + throw DbException.throwInternalError(); + } + db.addDatabaseObject(session, user); + return 0; + } + + public void setSalt(Expression e) { + salt = e; + } + + public void setHash(Expression e) { + hash = e; + } + + public void setAdmin(boolean b) { + admin = b; + } + + public void setComment(String comment) { + this.comment = comment; + } + + @Override + public int getType() { + return CommandInterface.CREATE_USER; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateUserDataType.java b/modules/h2/src/main/java/org/h2/command/ddl/CreateUserDataType.java new file mode 100644 index 0000000000000..529b98e92f161 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateUserDataType.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.UserDataType; +import org.h2.message.DbException; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.value.DataType; + +/** + * This class represents the statement + * CREATE DOMAIN + */ +public class CreateUserDataType extends DefineCommand { + + private String typeName; + private Column column; + private boolean ifNotExists; + + public CreateUserDataType(Session session) { + super(session); + } + + public void setTypeName(String name) { + this.typeName = name; + } + + public void setColumn(Column column) { + this.column = column; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + if (db.findUserDataType(typeName) != null) { + if (ifNotExists) { + return 0; + } + throw DbException.get( + ErrorCode.USER_DATA_TYPE_ALREADY_EXISTS_1, + typeName); + } + DataType builtIn = DataType.getTypeByName(typeName, db.getMode()); + if (builtIn != null) { + if (!builtIn.hidden) { + throw DbException.get( + ErrorCode.USER_DATA_TYPE_ALREADY_EXISTS_1, + typeName); + } + Table table = db.getFirstUserTable(); + if (table != null) { + throw DbException.get( + ErrorCode.USER_DATA_TYPE_ALREADY_EXISTS_1, + typeName + " (" + table.getSQL() + ")"); + } + } + int id = getObjectId(); + UserDataType type = new UserDataType(db, id, typeName); + type.setColumn(column); + db.addDatabaseObject(session, type); + return 0; + } + + @Override + public int getType() { + return CommandInterface.CREATE_DOMAIN; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/CreateView.java
b/modules/h2/src/main/java/org/h2/command/ddl/CreateView.java new file mode 100644 index 0000000000000..7e8f0190626e8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/CreateView.java @@ -0,0 +1,153 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.dml.Query; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.Parameter; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.table.TableType; +import org.h2.table.TableView; +import org.h2.value.Value; + +/** + * This class represents the statement + * CREATE VIEW + */ +public class CreateView extends SchemaCommand { + + private Query select; + private String viewName; + private boolean ifNotExists; + private String selectSQL; + private String[] columnNames; + private String comment; + private boolean orReplace; + private boolean force; + private boolean isTableExpression; + + public CreateView(Session session, Schema schema) { + super(session, schema); + } + + public void setViewName(String name) { + viewName = name; + } + + public void setSelect(Query select) { + this.select = select; + } + + public void setIfNotExists(boolean ifNotExists) { + this.ifNotExists = ifNotExists; + } + + public void setSelectSQL(String selectSQL) { + this.selectSQL = selectSQL; + } + + public void setColumnNames(String[] cols) { + this.columnNames = cols; + } + + public void setComment(String comment) { + this.comment = comment; + } + + public void setOrReplace(boolean orReplace) { + this.orReplace = orReplace; + } + + public void setForce(boolean force) { + this.force = force; + } + + public void setTableExpression(boolean 
isTableExpression) { + this.isTableExpression = isTableExpression; + } + + @Override + public int update() { + session.commit(true); + session.getUser().checkAdmin(); + Database db = session.getDatabase(); + TableView view = null; + Table old = getSchema().findTableOrView(session, viewName); + if (old != null) { + if (ifNotExists) { + return 0; + } + if (!orReplace || TableType.VIEW != old.getTableType()) { + throw DbException.get(ErrorCode.VIEW_ALREADY_EXISTS_1, viewName); + } + view = (TableView) old; + } + int id = getObjectId(); + String querySQL; + if (select == null) { + querySQL = selectSQL; + } else { + ArrayList<Parameter> params = select.getParameters(); + if (params != null && !params.isEmpty()) { + throw DbException.getUnsupportedException("parameters in views"); + } + querySQL = select.getPlanSQL(); + } + Column[] columnTemplatesAsUnknowns = null; + Column[] columnTemplatesAsStrings = null; + if (columnNames != null) { + columnTemplatesAsUnknowns = new Column[columnNames.length]; + columnTemplatesAsStrings = new Column[columnNames.length]; + for (int i = 0; i < columnNames.length; ++i) { + // non-table expressions are fine to use unknown column type + columnTemplatesAsUnknowns[i] = new Column(columnNames[i], Value.UNKNOWN); + // table expressions can't have unknown types - so we use string instead + columnTemplatesAsStrings[i] = new Column(columnNames[i], Value.STRING); + } + } + if (view == null) { + if (isTableExpression) { + view = TableView.createTableViewMaybeRecursive(getSchema(), id, viewName, querySQL, null, + columnTemplatesAsStrings, session, false /* literalsChecked */, isTableExpression, + true /* isPersistent */, db); + } else { + view = new TableView(getSchema(), id, viewName, querySQL, null, columnTemplatesAsUnknowns, session, + false /* allow recursive */, false /* literalsChecked */, isTableExpression, true); + } + } else { + // TODO support isTableExpression in replace function...
+ view.replace(querySQL, columnTemplatesAsUnknowns, session, false, force, false); + view.setModified(); + } + if (comment != null) { + view.setComment(comment); + } + if (old == null) { + db.addSchemaObject(session, view); + db.unlockMeta(session); + } else { + db.updateMeta(session, view); + } + + // TODO: if we added any table expressions that aren't used by this view, detect them + // and drop them - otherwise they will leak and never get cleaned up. + + return 0; + } + + @Override + public int getType() { + return CommandInterface.CREATE_VIEW; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DeallocateProcedure.java b/modules/h2/src/main/java/org/h2/command/ddl/DeallocateProcedure.java new file mode 100644 index 0000000000000..4fd7b58f7f1eb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DeallocateProcedure.java @@ -0,0 +1,38 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.command.CommandInterface; +import org.h2.engine.Session; + +/** + * This class represents the statement + * DEALLOCATE + */ +public class DeallocateProcedure extends DefineCommand { + + private String procedureName; + + public DeallocateProcedure(Session session) { + super(session); + } + + @Override + public int update() { + session.removeProcedure(procedureName); + return 0; + } + + public void setProcedureName(String name) { + this.procedureName = name; + } + + @Override + public int getType() { + return CommandInterface.DEALLOCATE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DefineCommand.java b/modules/h2/src/main/java/org/h2/command/ddl/DefineCommand.java new file mode 100644 index 0000000000000..aa8a1035faded --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DefineCommand.java @@ -0,0 +1,52 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.command.Prepared; +import org.h2.engine.Session; +import org.h2.result.ResultInterface; + +/** + * This class represents a non-transaction statement, for example a CREATE or + * DROP. + */ +public abstract class DefineCommand extends Prepared { + + /** + * The transactional behavior. The default is disabled, meaning the command + * commits an open transaction. + */ + protected boolean transactional; + + /** + * Create a new command for the given session. + * + * @param session the session + */ + DefineCommand(Session session) { + super(session); + } + + @Override + public boolean isReadOnly() { + return false; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + public void setTransactional(boolean transactional) { + this.transactional = transactional; + } + + @Override + public boolean isTransactional() { + return transactional; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropAggregate.java b/modules/h2/src/main/java/org/h2/command/ddl/DropAggregate.java new file mode 100644 index 0000000000000..cd4d55a318966 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropAggregate.java @@ -0,0 +1,57 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.UserAggregate; +import org.h2.message.DbException; + +/** + * This class represents the statement + * DROP AGGREGATE + */ +public class DropAggregate extends DefineCommand { + + private String name; + private boolean ifExists; + + public DropAggregate(Session session) { + super(session); + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + UserAggregate aggregate = db.findAggregate(name); + if (aggregate == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.AGGREGATE_NOT_FOUND_1, name); + } + } else { + db.removeDatabaseObject(session, aggregate); + } + return 0; + } + + public void setName(String name) { + this.name = name; + } + + public void setIfExists(boolean ifExists) { + this.ifExists = ifExists; + } + + @Override + public int getType() { + return CommandInterface.DROP_AGGREGATE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropConstant.java b/modules/h2/src/main/java/org/h2/command/ddl/DropConstant.java new file mode 100644 index 0000000000000..01cb51d7e046e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropConstant.java @@ -0,0 +1,58 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Constant; +import org.h2.schema.Schema; + +/** + * This class represents the statement + * DROP CONSTANT + */ +public class DropConstant extends SchemaCommand { + + private String constantName; + private boolean ifExists; + + public DropConstant(Session session, Schema schema) { + super(session, schema); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setConstantName(String constantName) { + this.constantName = constantName; + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + Constant constant = getSchema().findConstant(constantName); + if (constant == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.CONSTANT_NOT_FOUND_1, constantName); + } + } else { + db.removeSchemaObject(session, constant); + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.DROP_CONSTANT; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropDatabase.java b/modules/h2/src/main/java/org/h2/command/ddl/DropDatabase.java new file mode 100644 index 0000000000000..07985e077d68e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropDatabase.java @@ -0,0 +1,161 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Role; +import org.h2.engine.Session; +import org.h2.engine.User; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObject; +import org.h2.schema.Sequence; +import org.h2.table.Table; +import org.h2.table.TableType; +import org.h2.util.New; + +/** + * This class represents the statement + * DROP ALL OBJECTS + */ +public class DropDatabase extends DefineCommand { + + private boolean dropAllObjects; + private boolean deleteFiles; + + public DropDatabase(Session session) { + super(session); + } + + @Override + public int update() { + if (dropAllObjects) { + dropAllObjects(); + } + if (deleteFiles) { + session.getDatabase().setDeleteFilesOnDisconnect(true); + } + return 0; + } + + private void dropAllObjects() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + db.lockMeta(session); + + // There can be dependencies between tables e.g. using computed columns, + // so we might need to loop over them multiple times. + boolean runLoopAgain; + do { + ArrayList<Table> tables = db.getAllTablesAndViews(false); + ArrayList<Table> toRemove = New.arrayList(); + for (Table t : tables) { + if (t.getName() != null && + TableType.VIEW == t.getTableType()) { + toRemove.add(t); + } + } + for (Table t : tables) { + if (t.getName() != null && + TableType.TABLE_LINK == t.getTableType()) { + toRemove.add(t); + } + } + for (Table t : tables) { + if (t.getName() != null && + TableType.TABLE == t.getTableType() && + !t.isHidden()) { + toRemove.add(t); + } + } + for (Table t : tables) { + if (t.getName() != null && + TableType.EXTERNAL_TABLE_ENGINE == t.getTableType() && + !t.isHidden()) { + toRemove.add(t); + } + } + runLoopAgain = false; + for (Table t : toRemove) { + if (t.getName() == null) { + // ignore, already dead + } else if (db.getDependentTable(t, t) == null) { + db.removeSchemaObject(session, t); + } else { + runLoopAgain = true; + } + } + } while (runLoopAgain); + + // TODO session-local temp tables are not removed + for (Schema schema : db.getAllSchemas()) { + if (schema.canDrop()) { + db.removeDatabaseObject(session, schema); + } + } + ArrayList<SchemaObject> list = New.arrayList(); + for (SchemaObject obj : db.getAllSchemaObjects(DbObject.SEQUENCE)) { + // ignore these. the ones we want to drop will get dropped when we + // drop their associated tables, and we will ignore the problematic + // ones that belong to session-local temp tables.
+ if (!((Sequence) obj).getBelongsToTable()) { + list.add(obj); + } + } + // maybe constraints and triggers on system tables will be allowed in + // the future + list.addAll(db.getAllSchemaObjects(DbObject.CONSTRAINT)); + list.addAll(db.getAllSchemaObjects(DbObject.TRIGGER)); + list.addAll(db.getAllSchemaObjects(DbObject.CONSTANT)); + list.addAll(db.getAllSchemaObjects(DbObject.FUNCTION_ALIAS)); + for (SchemaObject obj : list) { + if (obj.isHidden()) { + continue; + } + db.removeSchemaObject(session, obj); + } + for (User user : db.getAllUsers()) { + if (user != session.getUser()) { + db.removeDatabaseObject(session, user); + } + } + for (Role role : db.getAllRoles()) { + String sql = role.getCreateSQL(); + // the role PUBLIC must not be dropped + if (sql != null) { + db.removeDatabaseObject(session, role); + } + } + ArrayList<DbObject> dbObjects = New.arrayList(); + dbObjects.addAll(db.getAllRights()); + dbObjects.addAll(db.getAllAggregates()); + dbObjects.addAll(db.getAllUserDataTypes()); + for (DbObject obj : dbObjects) { + String sql = obj.getCreateSQL(); + // built-in objects without a CREATE statement must not be dropped + if (sql != null) { + db.removeDatabaseObject(session, obj); + } + } + } + + public void setDropAllObjects(boolean b) { + this.dropAllObjects = b; + } + + public void setDeleteFiles(boolean b) { + this.deleteFiles = b; + } + + @Override + public int getType() { + return CommandInterface.DROP_ALL_OBJECTS; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropFunctionAlias.java b/modules/h2/src/main/java/org/h2/command/ddl/DropFunctionAlias.java new file mode 100644 index 0000000000000..3e0948828793d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropFunctionAlias.java @@ -0,0 +1,58 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.FunctionAlias; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; + +/** + * This class represents the statement + * DROP ALIAS + */ +public class DropFunctionAlias extends SchemaCommand { + + private String aliasName; + private boolean ifExists; + + public DropFunctionAlias(Session session, Schema schema) { + super(session, schema); + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + FunctionAlias functionAlias = getSchema().findFunction(aliasName); + if (functionAlias == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.FUNCTION_ALIAS_NOT_FOUND_1, aliasName); + } + } else { + db.removeSchemaObject(session, functionAlias); + } + return 0; + } + + public void setAliasName(String name) { + this.aliasName = name; + } + + public void setIfExists(boolean ifExists) { + this.ifExists = ifExists; + } + + @Override + public int getType() { + return CommandInterface.DROP_ALIAS; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropIndex.java b/modules/h2/src/main/java/org/h2/command/ddl/DropIndex.java new file mode 100644 index 0000000000000..9344a13269611 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropIndex.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
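`DropFunctionAlias.update()` above shows the DROP ... IF EXISTS flow shared by almost all of these commands: look the object up, and on a miss either stay silent (IF EXISTS) or throw. Reduced to a sketch over a plain map (not the H2 API):

```java
import java.util.Map;

public class IfExistsDrop {
    /** Returns true if the object was found and removed. */
    public static boolean drop(Map<String, Object> schema, String name, boolean ifExists) {
        Object obj = schema.remove(name);
        if (obj == null) {
            if (!ifExists) {
                throw new IllegalStateException("object not found: " + name);
            }
            return false;   // IF EXISTS turns the miss into a no-op
        }
        return true;        // found and removed
    }
}
```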
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.constraint.Constraint; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Table; + +/** + * This class represents the statement + * DROP INDEX + */ +public class DropIndex extends SchemaCommand { + + private String indexName; + private boolean ifExists; + + public DropIndex(Session session, Schema schema) { + super(session, schema); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setIndexName(String indexName) { + this.indexName = indexName; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + Index index = getSchema().findIndex(session, indexName); + if (index == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.INDEX_NOT_FOUND_1, indexName); + } + } else { + Table table = index.getTable(); + session.getUser().checkRight(index.getTable(), Right.ALL); + Constraint pkConstraint = null; + ArrayList constraints = table.getConstraints(); + for (int i = 0; constraints != null && i < constraints.size(); i++) { + Constraint cons = constraints.get(i); + if (cons.usesIndex(index)) { + // can drop primary key index (for compatibility) + if (Constraint.Type.PRIMARY_KEY == cons.getConstraintType()) { + pkConstraint = cons; + } else { + throw DbException.get( + ErrorCode.INDEX_BELONGS_TO_CONSTRAINT_2, + indexName, cons.getName()); + } + } + } + index.getTable().setModified(); + if (pkConstraint != null) { + db.removeSchemaObject(session, pkConstraint); + } else { + db.removeSchemaObject(session, index); + } + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.DROP_INDEX; + } + +} diff --git 
a/modules/h2/src/main/java/org/h2/command/ddl/DropRole.java b/modules/h2/src/main/java/org/h2/command/ddl/DropRole.java new file mode 100644 index 0000000000000..04bf88cfcd83d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropRole.java @@ -0,0 +1,61 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Role; +import org.h2.engine.Session; +import org.h2.message.DbException; + +/** + * This class represents the statement + * DROP ROLE + */ +public class DropRole extends DefineCommand { + + private String roleName; + private boolean ifExists; + + public DropRole(Session session) { + super(session); + } + + public void setRoleName(String roleName) { + this.roleName = roleName; + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + if (roleName.equals(Constants.PUBLIC_ROLE_NAME)) { + throw DbException.get(ErrorCode.ROLE_CAN_NOT_BE_DROPPED_1, roleName); + } + Role role = db.findRole(roleName); + if (role == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.ROLE_NOT_FOUND_1, roleName); + } + } else { + db.removeDatabaseObject(session, role); + } + return 0; + } + + public void setIfExists(boolean ifExists) { + this.ifExists = ifExists; + } + + @Override + public int getType() { + return CommandInterface.DROP_ROLE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropSchema.java b/modules/h2/src/main/java/org/h2/command/ddl/DropSchema.java new file mode 100644 index 0000000000000..12ca354ba9174 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropSchema.java @@ -0,0 +1,80 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.constraint.ConstraintActionType; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObject; +import org.h2.util.StatementBuilder; + +/** + * This class represents the statement + * DROP SCHEMA + */ +public class DropSchema extends DefineCommand { + + private String schemaName; + private boolean ifExists; + private ConstraintActionType dropAction; + + public DropSchema(Session session) { + super(session); + dropAction = session.getDatabase().getSettings().dropRestrict ? + ConstraintActionType.RESTRICT : ConstraintActionType.CASCADE; + } + + public void setSchemaName(String name) { + this.schemaName = name; + } + + @Override + public int update() { + session.getUser().checkSchemaAdmin(); + session.commit(true); + Database db = session.getDatabase(); + Schema schema = db.findSchema(schemaName); + if (schema == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.SCHEMA_NOT_FOUND_1, schemaName); + } + } else { + if (!schema.canDrop()) { + throw DbException.get(ErrorCode.SCHEMA_CAN_NOT_BE_DROPPED_1, schemaName); + } + if (dropAction == ConstraintActionType.RESTRICT && !schema.isEmpty()) { + StatementBuilder buff = new StatementBuilder(); + for (SchemaObject object : schema.getAll()) { + buff.appendExceptFirst(", "); + buff.append(object.getName()); + } + if (buff.length() > 0) { + throw DbException.get(ErrorCode.CANNOT_DROP_2, schemaName, buff.toString()); + } + } + db.removeDatabaseObject(session, schema); + } + return 0; + } + + public void setIfExists(boolean ifExists) { + this.ifExists = ifExists; + } + + public void setDropAction(ConstraintActionType dropAction) { + this.dropAction = dropAction; + } + + 
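The RESTRICT branch in `DropSchema.update()` above refuses to drop a non-empty schema and joins the names of the contained objects into the CANNOT_DROP_2 message; H2's `StatementBuilder.appendExceptFirst(", ")` is equivalent to a comma join. A sketch of that check (hypothetical helper, not the H2 API):

```java
import java.util.List;

public class RestrictMessage {
    /** Returns the blocking-objects message, or null when RESTRICT allows the drop. */
    public static String blockedBy(List<String> contained) {
        if (contained.isEmpty()) {
            return null;                      // empty schema: drop proceeds
        }
        return String.join(", ", contained);  // e.g. "T1, SEQ1" for the error message
    }
}
```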
@Override + public int getType() { + return CommandInterface.DROP_SCHEMA; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropSequence.java b/modules/h2/src/main/java/org/h2/command/ddl/DropSequence.java new file mode 100644 index 0000000000000..486c42ad24e84 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropSequence.java @@ -0,0 +1,61 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.Sequence; + +/** + * This class represents the statement + * DROP SEQUENCE + */ +public class DropSequence extends SchemaCommand { + + private String sequenceName; + private boolean ifExists; + + public DropSequence(Session session, Schema schema) { + super(session, schema); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setSequenceName(String sequenceName) { + this.sequenceName = sequenceName; + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + Sequence sequence = getSchema().findSequence(sequenceName); + if (sequence == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.SEQUENCE_NOT_FOUND_1, sequenceName); + } + } else { + if (sequence.getBelongsToTable()) { + throw DbException.get(ErrorCode.SEQUENCE_BELONGS_TO_A_TABLE_1, sequenceName); + } + db.removeSchemaObject(session, sequence); + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.DROP_SEQUENCE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropSynonym.java b/modules/h2/src/main/java/org/h2/command/ddl/DropSynonym.java new file mode 
100644 index 0000000000000..debcf94e0d56b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropSynonym.java @@ -0,0 +1,57 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.TableSynonym; + +/** + * This class represents the statement + * DROP SYNONYM + */ +public class DropSynonym extends SchemaCommand { + + private String synonymName; + private boolean ifExists; + + public DropSynonym(Session session, Schema schema) { + super(session, schema); + } + + public void setSynonymName(String name) { + this.synonymName = name; + } + + @Override + public int update() { + session.commit(true); + session.getUser().checkAdmin(); + + TableSynonym synonym = getSchema().getSynonym(synonymName); + if (synonym == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, synonymName); + } + } else { + session.getDatabase().removeSchemaObject(session, synonym); + } + return 0; + } + + public void setIfExists(boolean ifExists) { + this.ifExists = ifExists; + } + + @Override + public int getType() { + return CommandInterface.DROP_SYNONYM; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropTable.java b/modules/h2/src/main/java/org/h2/command/ddl/DropTable.java new file mode 100644 index 0000000000000..a3e6f3e959d70 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropTable.java @@ -0,0 +1,146 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.List; +import java.util.concurrent.CopyOnWriteArrayList; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.constraint.Constraint; +import org.h2.constraint.ConstraintActionType; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Table; +import org.h2.table.TableView; +import org.h2.util.StatementBuilder; + +/** + * This class represents the statement + * DROP TABLE + */ +public class DropTable extends SchemaCommand { + + private boolean ifExists; + private String tableName; + private Table table; + private DropTable next; + private ConstraintActionType dropAction; + + public DropTable(Session session, Schema schema) { + super(session, schema); + dropAction = session.getDatabase().getSettings().dropRestrict ? + ConstraintActionType.RESTRICT : + ConstraintActionType.CASCADE; + } + + /** + * Chain another drop table statement to this statement. 
+ * + * @param drop the statement to add + */ + public void addNextDropTable(DropTable drop) { + if (next == null) { + next = drop; + } else { + next.addNextDropTable(drop); + } + } + + public void setIfExists(boolean b) { + ifExists = b; + if (next != null) { + next.setIfExists(b); + } + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + private void prepareDrop() { + table = getSchema().findTableOrView(session, tableName); + if (table == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName); + } + } else { + session.getUser().checkRight(table, Right.ALL); + if (!table.canDrop()) { + throw DbException.get(ErrorCode.CANNOT_DROP_TABLE_1, tableName); + } + if (dropAction == ConstraintActionType.RESTRICT) { + StatementBuilder buff = new StatementBuilder(); + CopyOnWriteArrayList dependentViews = table.getDependentViews(); + if (dependentViews != null && !dependentViews.isEmpty()) { + for (TableView v : dependentViews) { + buff.appendExceptFirst(", "); + buff.append(v.getName()); + } + } + if (session.getDatabase() + .getSettings().standardDropTableRestrict) { + final List constraints = table.getConstraints(); + if (constraints != null && !constraints.isEmpty()) { + for (Constraint c : constraints) { + if (c.getTable() != table) { + buff.appendExceptFirst(", "); + buff.append(c.getName()); + } + } + } + } + if (buff.length() > 0) { + throw DbException.get(ErrorCode.CANNOT_DROP_2, tableName, buff.toString()); + } + + } + table.lock(session, true, true); + } + if (next != null) { + next.prepareDrop(); + } + } + + private void executeDrop() { + // need to get the table again, because it may be dropped already + // meanwhile (dependent object, or same object) + table = getSchema().findTableOrView(session, tableName); + + if (table != null) { + table.setModified(); + Database db = session.getDatabase(); + db.lockMeta(session); + db.removeSchemaObject(session, table); + } + if (next != null) { 
+ next.executeDrop(); + } + } + + @Override + public int update() { + session.commit(true); + prepareDrop(); + executeDrop(); + return 0; + } + + public void setDropAction(ConstraintActionType dropAction) { + this.dropAction = dropAction; + if (next != null) { + next.setDropAction(dropAction); + } + } + + @Override + public int getType() { + return CommandInterface.DROP_TABLE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropTrigger.java b/modules/h2/src/main/java/org/h2/command/ddl/DropTrigger.java new file mode 100644 index 0000000000000..8e816bd710b7b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropTrigger.java @@ -0,0 +1,61 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.TriggerObject; +import org.h2.table.Table; + +/** + * This class represents the statement + * DROP TRIGGER + */ +public class DropTrigger extends SchemaCommand { + + private String triggerName; + private boolean ifExists; + + public DropTrigger(Session session, Schema schema) { + super(session, schema); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setTriggerName(String triggerName) { + this.triggerName = triggerName; + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + TriggerObject trigger = getSchema().findTrigger(triggerName); + if (trigger == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.TRIGGER_NOT_FOUND_1, triggerName); + } + } else { + Table table = trigger.getTable(); + session.getUser().checkRight(table, Right.ALL); + 
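`DropTable` above handles `DROP TABLE a, b, c` as a singly linked chain of statements: `addNextDropTable` walks to the tail, and settings such as IF EXISTS or the drop action are pushed down the chain recursively before the two-phase prepare/execute runs. A standalone sketch of the chaining (illustrative class, not the H2 API):

```java
public class DropChain {
    private final String tableName;
    private DropChain next;
    private boolean ifExists;

    public DropChain(String tableName) { this.tableName = tableName; }

    /** Append at the tail, as addNextDropTable does. */
    public void addNext(DropChain drop) {
        if (next == null) {
            next = drop;
        } else {
            next.addNext(drop);
        }
    }

    /** Propagate the flag to every chained statement. */
    public void setIfExists(boolean b) {
        ifExists = b;
        if (next != null) {
            next.setIfExists(b);
        }
    }

    /** Render the chain; "?" marks IF EXISTS, for inspection only. */
    public String describe() {
        String s = tableName + (ifExists ? "?" : "");
        return next == null ? s : s + "," + next.describe();
    }
}
```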
db.removeSchemaObject(session, trigger); + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.DROP_TRIGGER; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropUser.java b/modules/h2/src/main/java/org/h2/command/ddl/DropUser.java new file mode 100644 index 0000000000000..56723891174ca --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropUser.java @@ -0,0 +1,74 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.User; +import org.h2.message.DbException; + +/** + * This class represents the statement + * DROP USER + */ +public class DropUser extends DefineCommand { + + private boolean ifExists; + private String userName; + + public DropUser(Session session) { + super(session); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setUserName(String userName) { + this.userName = userName; + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + User user = db.findUser(userName); + if (user == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.USER_NOT_FOUND_1, userName); + } + } else { + if (user == session.getUser()) { + int adminUserCount = 0; + for (User u : db.getAllUsers()) { + if (u.isAdmin()) { + adminUserCount++; + } + } + if (adminUserCount == 1) { + throw DbException.get(ErrorCode.CANNOT_DROP_CURRENT_USER); + } + } + user.checkOwnsNoSchemas(); + db.removeDatabaseObject(session, user); + } + return 0; + } + + @Override + public boolean isTransactional() { + return false; + } + + @Override + public int getType() { + return CommandInterface.DROP_USER; + } + +} diff --git 
a/modules/h2/src/main/java/org/h2/command/ddl/DropUserDataType.java b/modules/h2/src/main/java/org/h2/command/ddl/DropUserDataType.java new file mode 100644 index 0000000000000..051d501a1ffda --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropUserDataType.java @@ -0,0 +1,57 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.UserDataType; +import org.h2.message.DbException; + +/** + * This class represents the statement + * DROP DOMAIN + */ +public class DropUserDataType extends DefineCommand { + + private String typeName; + private boolean ifExists; + + public DropUserDataType(Session session) { + super(session); + } + + public void setIfExists(boolean ifExists) { + this.ifExists = ifExists; + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + UserDataType type = db.findUserDataType(typeName); + if (type == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.USER_DATA_TYPE_NOT_FOUND_1, typeName); + } + } else { + db.removeDatabaseObject(session, type); + } + return 0; + } + + public void setTypeName(String name) { + this.typeName = name; + } + + @Override + public int getType() { + return CommandInterface.DROP_DOMAIN; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/DropView.java b/modules/h2/src/main/java/org/h2/command/ddl/DropView.java new file mode 100644 index 0000000000000..e7374bc7bdf2f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/DropView.java @@ -0,0 +1,101 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
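`DropUser.update()` above guards against locking everyone out: a user may drop itself only while at least one other admin remains (it counts admins and throws CANNOT_DROP_CURRENT_USER when the count is 1). The check as a sketch (hypothetical helper):

```java
import java.util.List;

public class LastAdminGuard {
    /** True when the current (admin) user may drop itself. */
    public static boolean canDropSelf(List<Boolean> adminFlags) {
        int adminCount = 0;
        for (boolean isAdmin : adminFlags) {
            if (isAdmin) {
                adminCount++;
            }
        }
        return adminCount > 1;   // dropping yourself requires a spare admin
    }
}
```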
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.constraint.ConstraintActionType; +import org.h2.engine.DbObject; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Table; +import org.h2.table.TableType; +import org.h2.table.TableView; + +/** + * This class represents the statement + * DROP VIEW + */ +public class DropView extends SchemaCommand { + + private String viewName; + private boolean ifExists; + private ConstraintActionType dropAction; + + public DropView(Session session, Schema schema) { + super(session, schema); + dropAction = session.getDatabase().getSettings().dropRestrict ? + ConstraintActionType.RESTRICT : + ConstraintActionType.CASCADE; + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setDropAction(ConstraintActionType dropAction) { + this.dropAction = dropAction; + } + + public void setViewName(String viewName) { + this.viewName = viewName; + } + + @Override + public int update() { + session.commit(true); + Table view = getSchema().findTableOrView(session, viewName); + if (view == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.VIEW_NOT_FOUND_1, viewName); + } + } else { + if (TableType.VIEW != view.getTableType()) { + throw DbException.get(ErrorCode.VIEW_NOT_FOUND_1, viewName); + } + session.getUser().checkRight(view, Right.ALL); + + if (dropAction == ConstraintActionType.RESTRICT) { + for (DbObject child : view.getChildren()) { + if (child instanceof TableView) { + throw DbException.get(ErrorCode.CANNOT_DROP_2, viewName, child.getName()); + } + } + } + + // TODO: Where is the ConstraintReferential.CASCADE style drop processing ? It's + // supported from imported keys - but not for dependent db objects + + TableView tableView = (TableView) view; + ArrayList
    copyOfDependencies = new ArrayList<>(tableView.getTables()); + + view.lock(session, true, true); + session.getDatabase().removeSchemaObject(session, view); + + // remove dependent table expressions + for (Table childTable: copyOfDependencies) { + if (TableType.VIEW == childTable.getTableType()) { + TableView childTableView = (TableView) childTable; + if (childTableView.isTableExpression() && childTableView.getName() != null) { + session.getDatabase().removeSchemaObject(session, childTableView); + } + } + } + // make sure its all unlocked + session.getDatabase().unlockMeta(session); + } + return 0; + } + + @Override + public int getType() { + return CommandInterface.DROP_VIEW; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/GrantRevoke.java b/modules/h2/src/main/java/org/h2/command/ddl/GrantRevoke.java new file mode 100644 index 0000000000000..7bbcceed73834 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/GrantRevoke.java @@ -0,0 +1,227 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import java.util.ArrayList; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Right; +import org.h2.engine.RightOwner; +import org.h2.engine.Role; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Table; +import org.h2.util.New; + +/** + * This class represents the statements + * GRANT RIGHT, + * GRANT ROLE, + * REVOKE RIGHT, + * REVOKE ROLE + */ +public class GrantRevoke extends DefineCommand { + + private ArrayList roleNames; + private int operationType; + private int rightMask; + private final ArrayList
    tables = New.arrayList(); + private Schema schema; + private RightOwner grantee; + + public GrantRevoke(Session session) { + super(session); + } + + public void setOperationType(int operationType) { + this.operationType = operationType; + } + + /** + * Add the specified right bit to the rights bitmap. + * + * @param right the right bit + */ + public void addRight(int right) { + this.rightMask |= right; + } + + /** + * Add the specified role to the list of roles. + * + * @param roleName the role + */ + public void addRoleName(String roleName) { + if (roleNames == null) { + roleNames = New.arrayList(); + } + roleNames.add(roleName); + } + + public void setGranteeName(String granteeName) { + Database db = session.getDatabase(); + grantee = db.findUser(granteeName); + if (grantee == null) { + grantee = db.findRole(granteeName); + if (grantee == null) { + throw DbException.get(ErrorCode.USER_OR_ROLE_NOT_FOUND_1, granteeName); + } + } + } + + @Override + public int update() { + session.getUser().checkAdmin(); + session.commit(true); + Database db = session.getDatabase(); + if (roleNames != null) { + for (String name : roleNames) { + Role grantedRole = db.findRole(name); + if (grantedRole == null) { + throw DbException.get(ErrorCode.ROLE_NOT_FOUND_1, name); + } + if (operationType == CommandInterface.GRANT) { + grantRole(grantedRole); + } else if (operationType == CommandInterface.REVOKE) { + revokeRole(grantedRole); + } else { + DbException.throwInternalError("type=" + operationType); + } + } + } else { + if (operationType == CommandInterface.GRANT) { + grantRight(); + } else if (operationType == CommandInterface.REVOKE) { + revokeRight(); + } else { + DbException.throwInternalError("type=" + operationType); + } + } + return 0; + } + + private void grantRight() { + if (schema != null) { + grantRight(schema); + } + for (Table table : tables) { + grantRight(table); + } + } + + private void grantRight(DbObject object) { + Database db = session.getDatabase(); + Right 
right = grantee.getRightForObject(object); + if (right == null) { + int id = getObjectId(); + right = new Right(db, id, grantee, rightMask, object); + grantee.grantRight(object, right); + db.addDatabaseObject(session, right); + } else { + right.setRightMask(right.getRightMask() | rightMask); + db.updateMeta(session, right); + } + } + + private void grantRole(Role grantedRole) { + if (grantedRole != grantee && grantee.isRoleGranted(grantedRole)) { + return; + } + if (grantee instanceof Role) { + Role granteeRole = (Role) grantee; + if (grantedRole.isRoleGranted(granteeRole)) { + // cyclic role grants are not allowed + throw DbException.get(ErrorCode.ROLE_ALREADY_GRANTED_1, grantedRole.getSQL()); + } + } + Database db = session.getDatabase(); + int id = getObjectId(); + Right right = new Right(db, id, grantee, grantedRole); + db.addDatabaseObject(session, right); + grantee.grantRole(grantedRole, right); + } + + private void revokeRight() { + if (schema != null) { + revokeRight(schema); + } + for (Table table : tables) { + revokeRight(table); + } + } + + private void revokeRight(DbObject object) { + Right right = grantee.getRightForObject(object); + if (right == null) { + return; + } + int mask = right.getRightMask(); + int newRight = mask & ~rightMask; + Database db = session.getDatabase(); + if (newRight == 0) { + db.removeDatabaseObject(session, right); + } else { + right.setRightMask(newRight); + db.updateMeta(session, right); + } + } + + + private void revokeRole(Role grantedRole) { + Right right = grantee.getRightForRole(grantedRole); + if (right == null) { + return; + } + Database db = session.getDatabase(); + db.removeDatabaseObject(session, right); + } + + @Override + public boolean isTransactional() { + return false; + } + + /** + * Add the specified table to the list of tables. 
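`grantRight` and `revokeRight` above treat rights as a bitmask on a single `Right` object: GRANT ORs new bits into the existing mask, REVOKE clears them with `mask & ~rightMask`, and the `Right` is deleted only when no bits remain. The arithmetic in isolation (flag values here are illustrative, not H2's actual `Right` constants):

```java
public class RightMask {
    public static final int SELECT = 1, INSERT = 2, UPDATE = 4, DELETE = 8;

    public static int grant(int existing, int granted) {
        return existing | granted;    // as right.setRightMask(mask | rightMask)
    }

    public static int revoke(int existing, int revoked) {
        return existing & ~revoked;   // as mask & ~rightMask; 0 means drop the Right
    }
}
```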
+ * + * @param table the table + */ + public void addTable(Table table) { + tables.add(table); + } + + /** + * Set the specified schema + * + * @param schema the schema + */ + public void setSchema(Schema schema) { + this.schema = schema; + } + + @Override + public int getType() { + return operationType; + } + + /** + * @return true if this command is using Roles + */ + public boolean isRoleMode() { + return roleNames != null; + } + + /** + * @return true if this command is using Rights + */ + public boolean isRightMode() { + return rightMask != 0; + } +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/PrepareProcedure.java b/modules/h2/src/main/java/org/h2/command/ddl/PrepareProcedure.java new file mode 100644 index 0000000000000..9f683fb474f8e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/PrepareProcedure.java @@ -0,0 +1,62 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.command.ddl;
+
+import java.util.ArrayList;
+import org.h2.command.CommandInterface;
+import org.h2.command.Prepared;
+import org.h2.engine.Procedure;
+import org.h2.engine.Session;
+import org.h2.expression.Parameter;
+import org.h2.util.New;
+
+/**
+ * This class represents the statement
+ * PREPARE
+ */
+public class PrepareProcedure extends DefineCommand {
+
+    private String procedureName;
+    private Prepared prepared;
+
+    public PrepareProcedure(Session session) {
+        super(session);
+    }
+
+    @Override
+    public void checkParameters() {
+        // do not check parameters
+    }
+
+    @Override
+    public int update() {
+        Procedure proc = new Procedure(procedureName, prepared);
+        prepared.setParameterList(parameters);
+        prepared.setPrepareAlways(prepareAlways);
+        prepared.prepare();
+        session.addProcedure(proc);
+        return 0;
+    }
+
+    public void setProcedureName(String name) {
+        this.procedureName = name;
+    }
+
+    public void setPrepared(Prepared prep) {
+        this.prepared = prep;
+    }
+
+    @Override
+    public ArrayList<Parameter> getParameters() {
+        return New.arrayList();
+    }
+
+    @Override
+    public int getType() {
+        return CommandInterface.PREPARE;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/command/ddl/SchemaCommand.java b/modules/h2/src/main/java/org/h2/command/ddl/SchemaCommand.java
new file mode 100644
index 0000000000000..3b580892ada13
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/command/ddl/SchemaCommand.java
@@ -0,0 +1,38 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.command.ddl;
+
+import org.h2.engine.Session;
+import org.h2.schema.Schema;
+
+/**
+ * This class represents a non-transaction statement that involves a schema.
+ */
+public abstract class SchemaCommand extends DefineCommand {
+
+    private final Schema schema;
+
+    /**
+     * Create a new command.
+ * + * @param session the session + * @param schema the schema + */ + public SchemaCommand(Session session, Schema schema) { + super(session); + this.schema = schema; + } + + /** + * Get the schema + * + * @return the schema + */ + protected Schema getSchema() { + return schema; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/SetComment.java b/modules/h2/src/main/java/org/h2/command/ddl/SetComment.java new file mode 100644 index 0000000000000..c5b4b3757dce4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/SetComment.java @@ -0,0 +1,157 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Comment; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.table.Table; + +/** + * This class represents the statement + * COMMENT + */ +public class SetComment extends DefineCommand { + + private String schemaName; + private String objectName; + private boolean column; + private String columnName; + private int objectType; + private Expression expr; + + public SetComment(Session session) { + super(session); + } + + @Override + public int update() { + session.commit(true); + Database db = session.getDatabase(); + session.getUser().checkAdmin(); + DbObject object = null; + int errorCode = ErrorCode.GENERAL_ERROR_1; + if (schemaName == null) { + schemaName = session.getCurrentSchemaName(); + } + switch (objectType) { + case DbObject.CONSTANT: + object = db.getSchema(schemaName).getConstant(objectName); + break; + case DbObject.CONSTRAINT: + object = db.getSchema(schemaName).getConstraint(objectName); + break; + case DbObject.FUNCTION_ALIAS: + object = 
db.getSchema(schemaName).findFunction(objectName); + errorCode = ErrorCode.FUNCTION_ALIAS_NOT_FOUND_1; + break; + case DbObject.INDEX: + object = db.getSchema(schemaName).getIndex(objectName); + break; + case DbObject.ROLE: + schemaName = null; + object = db.findRole(objectName); + errorCode = ErrorCode.ROLE_NOT_FOUND_1; + break; + case DbObject.SCHEMA: + schemaName = null; + object = db.findSchema(objectName); + errorCode = ErrorCode.SCHEMA_NOT_FOUND_1; + break; + case DbObject.SEQUENCE: + object = db.getSchema(schemaName).getSequence(objectName); + break; + case DbObject.TABLE_OR_VIEW: + object = db.getSchema(schemaName).getTableOrView(session, objectName); + break; + case DbObject.TRIGGER: + object = db.getSchema(schemaName).findTrigger(objectName); + errorCode = ErrorCode.TRIGGER_NOT_FOUND_1; + break; + case DbObject.USER: + schemaName = null; + object = db.getUser(objectName); + break; + case DbObject.USER_DATATYPE: + schemaName = null; + object = db.findUserDataType(objectName); + errorCode = ErrorCode.USER_DATA_TYPE_ALREADY_EXISTS_1; + break; + default: + } + if (object == null) { + throw DbException.get(errorCode, objectName); + } + String text = expr.optimize(session).getValue(session).getString(); + if (column) { + Table table = (Table) object; + table.getColumn(columnName).setComment(text); + } else { + object.setComment(text); + } + if (column || objectType == DbObject.TABLE_OR_VIEW || + objectType == DbObject.USER || + objectType == DbObject.INDEX || + objectType == DbObject.CONSTRAINT) { + db.updateMeta(session, object); + } else { + Comment comment = db.findComment(object); + if (comment == null) { + if (text == null) { + // reset a non-existing comment - nothing to do + } else { + int id = getObjectId(); + comment = new Comment(db, id, object); + comment.setCommentText(text); + db.addDatabaseObject(session, comment); + } + } else { + if (text == null) { + db.removeDatabaseObject(session, comment); + } else { + comment.setCommentText(text); + 
db.updateMeta(session, comment); + } + } + } + return 0; + } + + public void setCommentExpression(Expression expr) { + this.expr = expr; + } + + public void setObjectName(String objectName) { + this.objectName = objectName; + } + + public void setObjectType(int objectType) { + this.objectType = objectType; + } + + public void setColumnName(String columnName) { + this.columnName = columnName; + } + + public void setSchemaName(String schemaName) { + this.schemaName = schemaName; + } + + public void setColumn(boolean column) { + this.column = column; + } + + @Override + public int getType() { + return CommandInterface.COMMENT; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/TruncateTable.java b/modules/h2/src/main/java/org/h2/command/ddl/TruncateTable.java new file mode 100644 index 0000000000000..996e8fecd9a5b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/TruncateTable.java @@ -0,0 +1,48 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.ddl; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.table.Table; + +/** + * This class represents the statement + * TRUNCATE TABLE + */ +public class TruncateTable extends DefineCommand { + + private Table table; + + public TruncateTable(Session session) { + super(session); + } + + public void setTable(Table table) { + this.table = table; + } + + @Override + public int update() { + session.commit(true); + if (!table.canTruncate()) { + throw DbException.get(ErrorCode.CANNOT_TRUNCATE_1, table.getSQL()); + } + session.getUser().checkRight(table, Right.DELETE); + table.lock(session, true, true); + table.truncate(session); + return 0; + } + + @Override + public int getType() { + return CommandInterface.TRUNCATE_TABLE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/ddl/package.html b/modules/h2/src/main/java/org/h2/command/ddl/package.html new file mode 100644 index 0000000000000..0fe20f3466d7a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/ddl/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Contains DDL (data definition language) and related SQL statements. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/command/dml/AlterSequence.java b/modules/h2/src/main/java/org/h2/command/dml/AlterSequence.java new file mode 100644 index 0000000000000..caa3dc297d353 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/AlterSequence.java @@ -0,0 +1,133 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.ddl.SchemaCommand; +import org.h2.engine.Database; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.schema.Sequence; +import org.h2.table.Column; +import org.h2.table.Table; + +/** + * This class represents the statement + * ALTER SEQUENCE + */ +public class AlterSequence extends SchemaCommand { + + private boolean ifExists; + private Table table; + private String sequenceName; + private Sequence sequence; + private Expression start; + private Expression increment; + private Boolean cycle; + private Expression minValue; + private Expression maxValue; + private Expression cacheSize; + + public AlterSequence(Session session, Schema schema) { + super(session, schema); + } + + public void setIfExists(boolean b) { + ifExists = b; + } + + public void setSequenceName(String sequenceName) { + this.sequenceName = sequenceName; + } + + @Override + public boolean isTransactional() { + return true; + } + + public void setColumn(Column column) { + table = column.getTable(); + sequence = column.getSequence(); + if (sequence == null && !ifExists) { + throw DbException.get(ErrorCode.SEQUENCE_NOT_FOUND_1, column.getSQL()); + } + } + + public void setStartWith(Expression start) { + this.start = start; + } + + public void 
setIncrement(Expression increment) { + this.increment = increment; + } + + public void setCycle(Boolean cycle) { + this.cycle = cycle; + } + + public void setMinValue(Expression minValue) { + this.minValue = minValue; + } + + public void setMaxValue(Expression maxValue) { + this.maxValue = maxValue; + } + + public void setCacheSize(Expression cacheSize) { + this.cacheSize = cacheSize; + } + + @Override + public int update() { + Database db = session.getDatabase(); + if (sequence == null) { + sequence = getSchema().findSequence(sequenceName); + if (sequence == null) { + if (!ifExists) { + throw DbException.get(ErrorCode.SEQUENCE_NOT_FOUND_1, sequenceName); + } + return 0; + } + } + if (table != null) { + session.getUser().checkRight(table, Right.ALL); + } + if (cycle != null) { + sequence.setCycle(cycle); + } + if (cacheSize != null) { + long size = cacheSize.optimize(session).getValue(session).getLong(); + sequence.setCacheSize(size); + } + if (start != null || minValue != null || + maxValue != null || increment != null) { + Long startValue = getLong(start); + Long min = getLong(minValue); + Long max = getLong(maxValue); + Long inc = getLong(increment); + sequence.modify(startValue, min, max, inc); + } + db.updateMeta(session, sequence); + return 0; + } + + private Long getLong(Expression expr) { + if (expr == null) { + return null; + } + return expr.optimize(session).getValue(session).getLong(); + } + + @Override + public int getType() { + return CommandInterface.ALTER_SEQUENCE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/AlterTableSet.java b/modules/h2/src/main/java/org/h2/command/dml/AlterTableSet.java new file mode 100644 index 0000000000000..d66e4774de648 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/AlterTableSet.java @@ -0,0 +1,80 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.ddl.SchemaCommand; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Schema; +import org.h2.table.Table; + +/** + * This class represents the statement + * ALTER TABLE SET + */ +public class AlterTableSet extends SchemaCommand { + + private boolean ifTableExists; + private String tableName; + private final int type; + + private final boolean value; + private boolean checkExisting; + + public AlterTableSet(Session session, Schema schema, int type, boolean value) { + super(session, schema); + this.type = type; + this.value = value; + } + + public void setCheckExisting(boolean b) { + this.checkExisting = b; + } + + @Override + public boolean isTransactional() { + return true; + } + + public void setIfTableExists(boolean b) { + this.ifTableExists = b; + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + @Override + public int update() { + Table table = getSchema().resolveTableOrView(session, tableName); + if (table == null) { + if (ifTableExists) { + return 0; + } + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, tableName); + } + session.getUser().checkRight(table, Right.ALL); + table.lock(session, true, true); + switch (type) { + case CommandInterface.ALTER_TABLE_SET_REFERENTIAL_INTEGRITY: + table.setCheckForeignKeyConstraints(session, value, value ? 
+ checkExisting : false); + break; + default: + DbException.throwInternalError("type="+type); + } + return 0; + } + + @Override + public int getType() { + return type; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/BackupCommand.java b/modules/h2/src/main/java/org/h2/command/dml/BackupCommand.java new file mode 100644 index 0000000000000..e7c3a7a7cddb1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/BackupCommand.java @@ -0,0 +1,182 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.ArrayList; +import java.util.zip.ZipEntry; +import java.util.zip.ZipOutputStream; +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.db.MVTableEngine.Store; +import org.h2.result.ResultInterface; +import org.h2.store.FileLister; +import org.h2.store.PageStore; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; + +/** + * This class represents the statement + * BACKUP + */ +public class BackupCommand extends Prepared { + + private Expression fileNameExpr; + + public BackupCommand(Session session) { + super(session); + } + + public void setFileName(Expression fileName) { + this.fileNameExpr = fileName; + } + + @Override + public int update() { + String name = fileNameExpr.getValue(session).getString(); + session.getUser().checkAdmin(); + backupTo(name); + return 0; + } + + private void backupTo(String fileName) { + Database db = session.getDatabase(); + if 
(!db.isPersistent()) { + throw DbException.get(ErrorCode.DATABASE_IS_NOT_PERSISTENT); + } + try { + Store mvStore = db.getMvStore(); + if (mvStore != null) { + mvStore.flush(); + } + String name = db.getName(); + name = FileUtils.getName(name); + try (OutputStream zip = FileUtils.newOutputStream(fileName, false)) { + ZipOutputStream out = new ZipOutputStream(zip); + db.flush(); + if (db.getPageStore() != null) { + String fn = db.getName() + Constants.SUFFIX_PAGE_FILE; + backupPageStore(out, fn, db.getPageStore()); + } + // synchronize on the database, to avoid concurrent temp file + // creation / deletion / backup + String base = FileUtils.getParent(db.getName()); + synchronized (db.getLobSyncObject()) { + String prefix = db.getDatabasePath(); + String dir = FileUtils.getParent(prefix); + dir = FileLister.getDir(dir); + ArrayList fileList = FileLister.getDatabaseFiles(dir, name, true); + for (String n : fileList) { + if (n.endsWith(Constants.SUFFIX_LOB_FILE)) { + backupFile(out, base, n); + } + if (n.endsWith(Constants.SUFFIX_MV_FILE) && mvStore != null) { + MVStore s = mvStore.getStore(); + boolean before = s.getReuseSpace(); + s.setReuseSpace(false); + try { + InputStream in = mvStore.getInputStream(); + backupFile(out, base, n, in); + } finally { + s.setReuseSpace(before); + } + } + } + } + out.close(); + } + } catch (IOException e) { + throw DbException.convertIOException(e, fileName); + } + } + + private void backupPageStore(ZipOutputStream out, String fileName, + PageStore store) throws IOException { + Database db = session.getDatabase(); + fileName = FileUtils.getName(fileName); + out.putNextEntry(new ZipEntry(fileName)); + int pos = 0; + try { + store.setBackup(true); + while (true) { + pos = store.copyDirect(pos, out); + if (pos < 0) { + break; + } + int max = store.getPageCount(); + db.setProgress(DatabaseEventListener.STATE_BACKUP_FILE, fileName, pos, max); + } + } finally { + store.setBackup(false); + } + out.closeEntry(); + } + + private static void 
backupFile(ZipOutputStream out, String base, String fn) + throws IOException { + InputStream in = FileUtils.newInputStream(fn); + backupFile(out, base, fn, in); + } + + private static void backupFile(ZipOutputStream out, String base, String fn, + InputStream in) throws IOException { + String f = FileUtils.toRealPath(fn); + base = FileUtils.toRealPath(base); + if (!f.startsWith(base)) { + DbException.throwInternalError(f + " does not start with " + base); + } + f = f.substring(base.length()); + f = correctFileName(f); + out.putNextEntry(new ZipEntry(f)); + IOUtils.copyAndCloseInput(in, out); + out.closeEntry(); + } + + @Override + public boolean isTransactional() { + return true; + } + + /** + * Fix the file name, replacing backslash with slash. + * + * @param f the file name + * @return the corrected file name + */ + public static String correctFileName(String f) { + f = f.replace('\\', '/'); + if (f.startsWith("/")) { + f = f.substring(1); + } + return f; + } + + @Override + public boolean needRecompile() { + return false; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.BACKUP; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Call.java b/modules/h2/src/main/java/org/h2/command/dml/Call.java new file mode 100644 index 0000000000000..cb7c9b118c3b9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Call.java @@ -0,0 +1,118 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.sql.ResultSet; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.result.LocalResult; +import org.h2.result.ResultInterface; +import org.h2.value.Value; + +/** + * This class represents the statement + * CALL. + */ +public class Call extends Prepared { + + private boolean isResultSet; + private Expression expression; + private Expression[] expressions; + + public Call(Session session) { + super(session); + } + + @Override + public ResultInterface queryMeta() { + LocalResult result; + if (isResultSet) { + Expression[] expr = expression.getExpressionColumns(session); + result = new LocalResult(session, expr, expr.length); + } else { + result = new LocalResult(session, expressions, 1); + } + result.done(); + return result; + } + + @Override + public int update() { + Value v = expression.getValue(session); + int type = v.getType(); + switch (type) { + case Value.RESULT_SET: + // this will throw an exception + // methods returning a result set may not be called like this. 
+ return super.update(); + case Value.UNKNOWN: + case Value.NULL: + return 0; + default: + return v.getInt(); + } + } + + @Override + public ResultInterface query(int maxrows) { + setCurrentRowNumber(1); + Value v = expression.getValue(session); + if (isResultSet) { + v = v.convertTo(Value.RESULT_SET); + ResultSet rs = v.getResultSet(); + return LocalResult.read(session, rs, maxrows); + } + LocalResult result = new LocalResult(session, expressions, 1); + Value[] row = { v }; + result.addRow(row); + result.done(); + return result; + } + + @Override + public void prepare() { + expression = expression.optimize(session); + expressions = new Expression[] { expression }; + isResultSet = expression.getType() == Value.RESULT_SET; + if (isResultSet) { + prepareAlways = true; + } + } + + public void setExpression(Expression expression) { + this.expression = expression; + } + + @Override + public boolean isQuery() { + return true; + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public boolean isReadOnly() { + return expression.isEverything(ExpressionVisitor.READONLY_VISITOR); + + } + + @Override + public int getType() { + return CommandInterface.CALL; + } + + @Override + public boolean isCacheable() { + return !isResultSet; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Delete.java b/modules/h2/src/main/java/org/h2/command/dml/Delete.java new file mode 100644 index 0000000000000..77318b38f7e7b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Delete.java @@ -0,0 +1,192 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.api.Trigger; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.engine.UndoLogRecord; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.RowList; +import org.h2.table.PlanItem; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * This class represents the statement + * DELETE + */ +public class Delete extends Prepared { + + private Expression condition; + private TableFilter targetTableFilter; + + /** + * The limit expression as specified in the LIMIT or TOP clause. + */ + private Expression limitExpr; + /** + * This table filter is for MERGE..USING support - not used in stand-alone DML + */ + private TableFilter sourceTableFilter; + + public Delete(Session session) { + super(session); + } + + public void setTableFilter(TableFilter tableFilter) { + this.targetTableFilter = tableFilter; + } + + public void setCondition(Expression condition) { + this.condition = condition; + } + + public Expression getCondition() { + return this.condition; + } + + @Override + public int update() { + targetTableFilter.startQuery(session); + targetTableFilter.reset(); + Table table = targetTableFilter.getTable(); + session.getUser().checkRight(table, Right.DELETE); + table.fire(session, Trigger.DELETE, true); + table.lock(session, true, false); + RowList rows = new RowList(session); + int limitRows = -1; + if (limitExpr != null) { + Value v = limitExpr.getValue(session); + if (v != ValueNull.INSTANCE) { + limitRows = v.getInt(); + } + } + try { + setCurrentRowNumber(0); + int count = 0; + while (limitRows != 0 && targetTableFilter.next()) { + setCurrentRowNumber(rows.size() + 1); 
+ if (condition == null || condition.getBooleanValue(session)) { + Row row = targetTableFilter.get(); + boolean done = false; + if (table.fireRow()) { + done = table.fireBeforeRow(session, row, null); + } + if (!done) { + rows.add(row); + } + count++; + if (limitRows >= 0 && count >= limitRows) { + break; + } + } + } + int rowScanCount = 0; + for (rows.reset(); rows.hasNext();) { + if ((++rowScanCount & 127) == 0) { + checkCanceled(); + } + Row row = rows.next(); + table.removeRow(session, row); + session.log(table, UndoLogRecord.DELETE, row); + } + if (table.fireRow()) { + for (rows.reset(); rows.hasNext();) { + Row row = rows.next(); + table.fireAfterRow(session, row, null, false); + } + } + table.fire(session, Trigger.DELETE, false); + return count; + } finally { + rows.close(); + } + } + + @Override + public String getPlanSQL() { + StringBuilder buff = new StringBuilder(); + buff.append("DELETE "); + buff.append("FROM ").append(targetTableFilter.getPlanSQL(false)); + if (condition != null) { + buff.append("\nWHERE ").append(StringUtils.unEnclose( + condition.getSQL())); + } + if (limitExpr != null) { + buff.append("\nLIMIT (").append(StringUtils.unEnclose( + limitExpr.getSQL())).append(')'); + } + return buff.toString(); + } + + @Override + public void prepare() { + if (condition != null) { + condition.mapColumns(targetTableFilter, 0); + if (sourceTableFilter != null) { + condition.mapColumns(sourceTableFilter, 0); + } + condition = condition.optimize(session); + condition.createIndexConditions(session, targetTableFilter); + } + TableFilter[] filters; + if (sourceTableFilter == null) { + filters = new TableFilter[] { targetTableFilter }; + } else { + filters = new TableFilter[] { targetTableFilter, sourceTableFilter }; + } + PlanItem item = targetTableFilter.getBestPlanItem(session, filters, 0, + ExpressionVisitor.allColumnsForTableFilters(filters)); + targetTableFilter.setPlanItem(item); + targetTableFilter.prepare(); + } + + @Override + public boolean 
isTransactional() { + return true; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.DELETE; + } + + public void setLimit(Expression limit) { + this.limitExpr = limit; + } + + @Override + public boolean isCacheable() { + return true; + } + + public void setSourceTableFilter(TableFilter sourceTableFilter) { + this.sourceTableFilter = sourceTableFilter; + } + + public TableFilter getTableFilter() { + return targetTableFilter; + } + + public TableFilter getSourceTableFilter() { + return sourceTableFilter; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/ExecuteProcedure.java b/modules/h2/src/main/java/org/h2/command/dml/ExecuteProcedure.java new file mode 100644 index 0000000000000..95fd606ecdcee --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/ExecuteProcedure.java @@ -0,0 +1,92 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.ArrayList; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Procedure; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.Parameter; +import org.h2.result.ResultInterface; +import org.h2.util.New; + +/** + * This class represents the statement + * EXECUTE + */ +public class ExecuteProcedure extends Prepared { + + private final ArrayList expressions = New.arrayList(); + private Procedure procedure; + + public ExecuteProcedure(Session session) { + super(session); + } + + public void setProcedure(Procedure procedure) { + this.procedure = procedure; + } + + /** + * Set the expression at the given index. 
+ * + * @param index the index (0 based) + * @param expr the expression + */ + public void setExpression(int index, Expression expr) { + expressions.add(index, expr); + } + + private void setParameters() { + Prepared prepared = procedure.getPrepared(); + ArrayList<Parameter> params = prepared.getParameters(); + for (int i = 0; params != null && i < params.size() && + i < expressions.size(); i++) { + Expression expr = expressions.get(i); + Parameter p = params.get(i); + p.setValue(expr.getValue(session)); + } + } + + @Override + public boolean isQuery() { + Prepared prepared = procedure.getPrepared(); + return prepared.isQuery(); + } + + @Override + public int update() { + setParameters(); + Prepared prepared = procedure.getPrepared(); + return prepared.update(); + } + + @Override + public ResultInterface query(int limit) { + setParameters(); + Prepared prepared = procedure.getPrepared(); + return prepared.query(limit); + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public ResultInterface queryMeta() { + Prepared prepared = procedure.getPrepared(); + return prepared.queryMeta(); + } + + @Override + public int getType() { + return CommandInterface.EXECUTE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Explain.java b/modules/h2/src/main/java/org/h2/command/dml/Explain.java new file mode 100644 index 0000000000000..23e934dc523b7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Explain.java @@ -0,0 +1,159 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.Map; +import java.util.TreeMap; +import java.util.Map.Entry; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.mvstore.db.MVTableEngine.Store; +import org.h2.result.LocalResult; +import org.h2.result.ResultInterface; +import org.h2.store.PageStore; +import org.h2.table.Column; +import org.h2.value.Value; +import org.h2.value.ValueString; + +/** + * This class represents the statement + * EXPLAIN + */ +public class Explain extends Prepared { + + private Prepared command; + private LocalResult result; + private boolean executeCommand; + + public Explain(Session session) { + super(session); + } + + public void setCommand(Prepared command) { + this.command = command; + } + + public Prepared getCommand() { + return command; + } + + @Override + public void prepare() { + command.prepare(); + } + + public void setExecuteCommand(boolean executeCommand) { + this.executeCommand = executeCommand; + } + + @Override + public ResultInterface queryMeta() { + return query(-1); + } + + @Override + protected void checkParameters() { + // Check params only in case of EXPLAIN ANALYZE + if (executeCommand) { + super.checkParameters(); + } + } + + @Override + public ResultInterface query(int maxrows) { + Column column = new Column("PLAN", Value.STRING); + Database db = session.getDatabase(); + ExpressionColumn expr = new ExpressionColumn(db, column); + Expression[] expressions = { expr }; + result = new LocalResult(session, expressions, 1); + if (maxrows >= 0) { + String plan; + if (executeCommand) { + PageStore store = null; + Store mvStore = null; + if (db.isPersistent()) { + store = db.getPageStore(); + if (store != null) { + store.statisticsStart(); + } + mvStore = db.getMvStore(); + if (mvStore != null) { + 
mvStore.statisticsStart(); + } + } + if (command.isQuery()) { + command.query(maxrows); + } else { + command.update(); + } + plan = command.getPlanSQL(); + Map<String, Integer> statistics = null; + if (store != null) { + statistics = store.statisticsEnd(); + } else if (mvStore != null) { + statistics = mvStore.statisticsEnd(); + } + if (statistics != null) { + int total = 0; + for (Entry<String, Integer> e : statistics.entrySet()) { + total += e.getValue(); + } + if (total > 0) { + statistics = new TreeMap<>(statistics); + StringBuilder buff = new StringBuilder(); + if (statistics.size() > 1) { + buff.append("total: ").append(total).append('\n'); + } + for (Entry<String, Integer> e : statistics.entrySet()) { + int value = e.getValue(); + int percent = (int) (100L * value / total); + buff.append(e.getKey()).append(": ").append(value); + if (statistics.size() > 1) { + buff.append(" (").append(percent).append("%)"); + } + buff.append('\n'); + } + plan += "\n/*\n" + buff.toString() + "*/"; + } + } + } else { + plan = command.getPlanSQL(); + } + add(plan); + } + result.done(); + return result; + } + + private void add(String text) { + Value[] row = { ValueString.get(text) }; + result.addRow(row); + } + + @Override + public boolean isQuery() { + return true; + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public boolean isReadOnly() { + return command.isReadOnly(); + } + + @Override + public int getType() { + return executeCommand ? CommandInterface.EXPLAIN_ANALYZE : CommandInterface.EXPLAIN; + } +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Insert.java b/modules/h2/src/main/java/org/h2/command/dml/Insert.java new file mode 100644 index 0000000000000..fc6ee74811ddf --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Insert.java @@ -0,0 +1,454 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.ArrayList; +import java.util.HashMap; +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.command.Command; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.GeneratedKeys; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.engine.UndoLogRecord; +import org.h2.expression.Comparison; +import org.h2.expression.ConditionAndOr; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.expression.Parameter; +import org.h2.expression.SequenceValue; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.mvstore.db.MVPrimaryIndex; +import org.h2.result.ResultInterface; +import org.h2.result.ResultTarget; +import org.h2.result.Row; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * This class represents the statement + * INSERT + */ +public class Insert extends Prepared implements ResultTarget { + + private Table table; + private Column[] columns; + private final ArrayList list = New.arrayList(); + private Query query; + private boolean sortedInsertMode; + private int rowNumber; + private boolean insertFromSelect; + /** + * This table filter is for MERGE..USING support - not used in stand-alone DML + */ + private TableFilter sourceTableFilter; + + /** + * For MySQL-style INSERT ... ON DUPLICATE KEY UPDATE .... 
+ */ + private HashMap<Column, Expression> duplicateKeyAssignmentMap; + + /** + * For MySQL-style INSERT IGNORE + */ + private boolean ignore; + + public Insert(Session session) { + super(session); + } + + @Override + public void setCommand(Command command) { + super.setCommand(command); + if (query != null) { + query.setCommand(command); + } + } + + public void setTable(Table table) { + this.table = table; + } + + public void setColumns(Column[] columns) { + this.columns = columns; + } + + /** + * Sets MySQL-style INSERT IGNORE mode + * @param ignore ignore errors + */ + public void setIgnore(boolean ignore) { + this.ignore = ignore; + } + + public void setQuery(Query query) { + this.query = query; + } + + /** + * Keep a collection of the columns to pass to update if a duplicate key + * happens, for MySQL-style INSERT ... ON DUPLICATE KEY UPDATE .... + * + * @param column the column + * @param expression the expression + */ + public void addAssignmentForDuplicate(Column column, Expression expression) { + if (duplicateKeyAssignmentMap == null) { + duplicateKeyAssignmentMap = new HashMap<>(); + } + if (duplicateKeyAssignmentMap.containsKey(column)) { + throw DbException.get(ErrorCode.DUPLICATE_COLUMN_NAME_1, + column.getName()); + } + duplicateKeyAssignmentMap.put(column, expression); + } + + /** + * Add a row to this merge statement.
+ * + * @param expr the list of values + */ + public void addRow(Expression[] expr) { + list.add(expr); + } + + @Override + public int update() { + Index index = null; + if (sortedInsertMode) { + index = table.getScanIndex(session); + index.setSortedInsertMode(true); + } + try { + return insertRows(); + } finally { + if (index != null) { + index.setSortedInsertMode(false); + } + } + } + + private int insertRows() { + session.getUser().checkRight(table, Right.INSERT); + setCurrentRowNumber(0); + table.fire(session, Trigger.INSERT, true); + rowNumber = 0; + GeneratedKeys generatedKeys = session.getGeneratedKeys(); + generatedKeys.initialize(table); + int listSize = list.size(); + if (listSize > 0) { + int columnLen = columns.length; + for (int x = 0; x < listSize; x++) { + session.startStatementWithinTransaction(); + generatedKeys.nextRow(); + Row newRow = table.getTemplateRow(); + Expression[] expr = list.get(x); + setCurrentRowNumber(x + 1); + for (int i = 0; i < columnLen; i++) { + Column c = columns[i]; + int index = c.getColumnId(); + Expression e = expr[i]; + if (e != null) { + // e can be null (DEFAULT) + e = e.optimize(session); + try { + Value v = c.convert(e.getValue(session), session.getDatabase().getMode()); + newRow.setValue(index, v); + if (e instanceof SequenceValue) { + generatedKeys.add(c); + } + } catch (DbException ex) { + throw setRow(ex, x, getSQL(expr)); + } + } + } + rowNumber++; + table.validateConvertUpdateSequence(session, newRow); + boolean done = table.fireBeforeRow(session, null, newRow); + if (!done) { + table.lock(session, true, false); + try { + table.addRow(session, newRow); + } catch (DbException de) { + if (!handleOnDuplicate(de)) { + // INSERT IGNORE case + rowNumber--; + continue; + } + } + generatedKeys.confirmRow(newRow); + session.log(table, UndoLogRecord.INSERT, newRow); + table.fireAfterRow(session, null, newRow, false); + } + } + } else { + table.lock(session, true, false); + if (insertFromSelect) { + query.query(0, this); + 
} else { + ResultInterface rows = query.query(0); + while (rows.next()) { + generatedKeys.nextRow(); + Value[] r = rows.currentRow(); + Row newRow = addRowImpl(r); + if (newRow != null) { + generatedKeys.confirmRow(newRow); + } + } + rows.close(); + } + } + table.fire(session, Trigger.INSERT, false); + return rowNumber; + } + + @Override + public void addRow(Value[] values) { + addRowImpl(values); + } + + private Row addRowImpl(Value[] values) { + Row newRow = table.getTemplateRow(); + setCurrentRowNumber(++rowNumber); + for (int j = 0, len = columns.length; j < len; j++) { + Column c = columns[j]; + int index = c.getColumnId(); + try { + Value v = c.convert(values[j], session.getDatabase().getMode()); + newRow.setValue(index, v); + } catch (DbException ex) { + throw setRow(ex, rowNumber, getSQL(values)); + } + } + table.validateConvertUpdateSequence(session, newRow); + boolean done = table.fireBeforeRow(session, null, newRow); + if (!done) { + table.addRow(session, newRow); + session.log(table, UndoLogRecord.INSERT, newRow); + table.fireAfterRow(session, null, newRow, false); + return newRow; + } + return null; + } + + @Override + public int getRowCount() { + return rowNumber; + } + + @Override + public String getPlanSQL() { + StatementBuilder buff = new StatementBuilder("INSERT INTO "); + buff.append(table.getSQL()).append('('); + for (Column c : columns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + buff.append(")\n"); + if (insertFromSelect) { + buff.append("DIRECT "); + } + if (sortedInsertMode) { + buff.append("SORTED "); + } + if (!list.isEmpty()) { + buff.append("VALUES "); + int row = 0; + if (list.size() > 1) { + buff.append('\n'); + } + for (Expression[] expr : list) { + if (row++ > 0) { + buff.append(",\n"); + } + buff.append('('); + buff.resetCount(); + for (Expression e : expr) { + buff.appendExceptFirst(", "); + if (e == null) { + buff.append("DEFAULT"); + } else { + buff.append(e.getSQL()); + } + } + buff.append(')'); + } + } 
else {
+ buff.append(query.getPlanSQL());
+ }
+ return buff.toString();
+ }
+
+ @Override
+ public void prepare() {
+ if (columns == null) {
+ if (!list.isEmpty() && list.get(0).length == 0) {
+ // special case where table is used as a sequence
+ columns = new Column[0];
+ } else {
+ columns = table.getColumns();
+ }
+ }
+ if (!list.isEmpty()) {
+ for (Expression[] expr : list) {
+ if (expr.length != columns.length) {
+ throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH);
+ }
+ for (int i = 0, len = expr.length; i < len; i++) {
+ Expression e = expr[i];
+ if (e != null) {
+ if (sourceTableFilter != null) {
+ e.mapColumns(sourceTableFilter, 0);
+ }
+ e = e.optimize(session);
+ if (e instanceof Parameter) {
+ Parameter p = (Parameter) e;
+ p.setColumn(columns[i]);
+ }
+ expr[i] = e;
+ }
+ }
+ }
+ } else {
+ query.prepare();
+ if (query.getColumnCount() != columns.length) {
+ throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH);
+ }
+ }
+ }
+
+ @Override
+ public boolean isTransactional() {
+ return true;
+ }
+
+ @Override
+ public ResultInterface queryMeta() {
+ return null;
+ }
+
+ public void setSortedInsertMode(boolean sortedInsertMode) {
+ this.sortedInsertMode = sortedInsertMode;
+ }
+
+ @Override
+ public int getType() {
+ return CommandInterface.INSERT;
+ }
+
+ public void setInsertFromSelect(boolean value) {
+ this.insertFromSelect = value;
+ }
+
+ @Override
+ public boolean isCacheable() {
+ return duplicateKeyAssignmentMap == null ||
+ duplicateKeyAssignmentMap.isEmpty();
+ }
+
+ /**
+ * @param de duplicate key exception
+ * @return {@code true} if row was updated, {@code false} if row was ignored
+ */
+ private boolean handleOnDuplicate(DbException de) {
+ if (de.getErrorCode() != ErrorCode.DUPLICATE_KEY_1) {
+ throw de;
+ }
+ if (duplicateKeyAssignmentMap == null ||
+ duplicateKeyAssignmentMap.isEmpty()) {
+ if (ignore) {
+ return false;
+ }
+ throw de;
+ }
+
+ ArrayList<String> variableNames = new ArrayList<>(
+ duplicateKeyAssignmentMap.size());
+
Expression[] row = list.get(getCurrentRowNumber() - 1);
+ for (int i = 0; i < columns.length; i++) {
+ String key = table.getSchema().getName() + "." +
+ table.getName() + "." + columns[i].getName();
+ variableNames.add(key);
+ session.setVariable(key,
+ row[i].getValue(session));
+ }
+
+ StatementBuilder buff = new StatementBuilder("UPDATE ");
+ buff.append(table.getSQL()).append(" SET ");
+ for (Column column : duplicateKeyAssignmentMap.keySet()) {
+ buff.appendExceptFirst(", ");
+ Expression ex = duplicateKeyAssignmentMap.get(column);
+ buff.append(column.getSQL()).append("=").append(ex.getSQL());
+ }
+ buff.append(" WHERE ");
+ Index foundIndex = (Index) de.getSource();
+ if (foundIndex == null) {
+ throw DbException.getUnsupportedException(
+ "Unable to apply ON DUPLICATE KEY UPDATE, no index found!");
+ }
+ buff.append(prepareUpdateCondition(foundIndex).getSQL());
+ String sql = buff.toString();
+ Prepared command = session.prepare(sql);
+ for (Parameter param : command.getParameters()) {
+ Parameter insertParam = parameters.get(param.getIndex());
+ param.setValue(insertParam.getValue(session));
+ }
+ command.update();
+ for (String variableName : variableNames) {
+ session.setVariable(variableName, ValueNull.INSTANCE);
+ }
+ return true;
+ }
+
+ private Expression prepareUpdateCondition(Index foundIndex) {
+ // MVPrimaryIndex is playing fast and loose with its implementation of
+ // the Index interface.
+ // It returns all of the columns in the table when we call
+ // getIndexColumns() or getColumns().
+ // Don't have time right now to fix that, so just special-case it.
+ final Column[] indexedColumns; + if (foundIndex instanceof MVPrimaryIndex) { + MVPrimaryIndex foundMV = (MVPrimaryIndex) foundIndex; + indexedColumns = new Column[] { foundMV.getIndexColumns()[foundMV + .getMainIndexColumn()].column }; + } else { + indexedColumns = foundIndex.getColumns(); + } + + Expression[] row = list.get(getCurrentRowNumber() - 1); + Expression condition = null; + for (Column column : indexedColumns) { + ExpressionColumn expr = new ExpressionColumn(session.getDatabase(), + table.getSchema().getName(), table.getName(), + column.getName()); + for (int i = 0; i < columns.length; i++) { + if (expr.getColumnName().equals(columns[i].getName())) { + if (condition == null) { + condition = new Comparison(session, Comparison.EQUAL, expr, row[i]); + } else { + condition = new ConditionAndOr(ConditionAndOr.AND, condition, + new Comparison(session, Comparison.EQUAL, expr, row[i])); + } + break; + } + } + } + return condition; + } + + public void setSourceTableFilter(TableFilter sourceTableFilter) { + this.sourceTableFilter = sourceTableFilter; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Merge.java b/modules/h2/src/main/java/org/h2/command/dml/Merge.java new file mode 100644 index 0000000000000..6befe74e6f004 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Merge.java @@ -0,0 +1,346 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.command.dml;
+
+import java.util.ArrayList;
+import org.h2.api.ErrorCode;
+import org.h2.api.Trigger;
+import org.h2.command.Command;
+import org.h2.command.CommandInterface;
+import org.h2.command.Prepared;
+import org.h2.engine.GeneratedKeys;
+import org.h2.engine.Right;
+import org.h2.engine.Session;
+import org.h2.engine.UndoLogRecord;
+import org.h2.expression.Expression;
+import org.h2.expression.Parameter;
+import org.h2.expression.SequenceValue;
+import org.h2.index.Index;
+import org.h2.message.DbException;
+import org.h2.result.ResultInterface;
+import org.h2.result.Row;
+import org.h2.table.Column;
+import org.h2.table.Table;
+import org.h2.table.TableFilter;
+import org.h2.util.New;
+import org.h2.util.StatementBuilder;
+import org.h2.value.Value;
+
+/**
+ * This class represents the statement
+ * MERGE
+ */
+public class Merge extends Prepared {
+
+ private Table targetTable;
+ private TableFilter targetTableFilter;
+ private Column[] columns;
+ private Column[] keys;
+ private final ArrayList<Expression[]> valuesExpressionList = New.arrayList();
+ private Query query;
+ private Prepared update;
+
+ public Merge(Session session) {
+ super(session);
+ }
+
+ @Override
+ public void setCommand(Command command) {
+ super.setCommand(command);
+ if (query != null) {
+ query.setCommand(command);
+ }
+ }
+
+ public void setTargetTable(Table targetTable) {
+ this.targetTable = targetTable;
+ }
+
+ public void setColumns(Column[] columns) {
+ this.columns = columns;
+ }
+
+ public void setKeys(Column[] keys) {
+ this.keys = keys;
+ }
+
+ public void setQuery(Query query) {
+ this.query = query;
+ }
+
+ /**
+ * Add a row to this merge statement.
+ * + * @param expr the list of values + */ + public void addRow(Expression[] expr) { + valuesExpressionList.add(expr); + } + + @Override + public int update() { + int count; + session.getUser().checkRight(targetTable, Right.INSERT); + session.getUser().checkRight(targetTable, Right.UPDATE); + setCurrentRowNumber(0); + GeneratedKeys generatedKeys = session.getGeneratedKeys(); + if (!valuesExpressionList.isEmpty()) { + // process values in list + count = 0; + generatedKeys.initialize(targetTable); + for (int x = 0, size = valuesExpressionList.size(); x < size; x++) { + setCurrentRowNumber(x + 1); + generatedKeys.nextRow(); + Expression[] expr = valuesExpressionList.get(x); + Row newRow = targetTable.getTemplateRow(); + for (int i = 0, len = columns.length; i < len; i++) { + Column c = columns[i]; + int index = c.getColumnId(); + Expression e = expr[i]; + if (e != null) { + // e can be null (DEFAULT) + try { + Value v = c.convert(e.getValue(session)); + newRow.setValue(index, v); + if (e instanceof SequenceValue) { + generatedKeys.add(c); + } + } catch (DbException ex) { + throw setRow(ex, count, getSQL(expr)); + } + } + } + merge(newRow); + count++; + } + } else { + // process select data for list + query.setNeverLazy(true); + ResultInterface rows = query.query(0); + count = 0; + targetTable.fire(session, Trigger.UPDATE | Trigger.INSERT, true); + targetTable.lock(session, true, false); + while (rows.next()) { + count++; + generatedKeys.nextRow(); + Value[] r = rows.currentRow(); + Row newRow = targetTable.getTemplateRow(); + setCurrentRowNumber(count); + for (int j = 0; j < columns.length; j++) { + Column c = columns[j]; + int index = c.getColumnId(); + try { + Value v = c.convert(r[j]); + newRow.setValue(index, v); + } catch (DbException ex) { + throw setRow(ex, count, getSQL(r)); + } + } + merge(newRow); + } + rows.close(); + targetTable.fire(session, Trigger.UPDATE | Trigger.INSERT, false); + } + return count; + } + + /** + * Merge the given row. 
+ *
+ * @param row the row
+ */
+ protected void merge(Row row) {
+ ArrayList<Parameter> k = update.getParameters();
+ for (int i = 0; i < columns.length; i++) {
+ Column col = columns[i];
+ Value v = row.getValue(col.getColumnId());
+ Parameter p = k.get(i);
+ p.setValue(v);
+ }
+ for (int i = 0; i < keys.length; i++) {
+ Column col = keys[i];
+ Value v = row.getValue(col.getColumnId());
+ if (v == null) {
+ throw DbException.get(ErrorCode.COLUMN_CONTAINS_NULL_VALUES_1, col.getSQL());
+ }
+ Parameter p = k.get(columns.length + i);
+ p.setValue(v);
+ }
+
+ // try and update
+ int count = update.update();
+
+ // if update fails try an insert
+ if (count == 0) {
+ try {
+ targetTable.validateConvertUpdateSequence(session, row);
+ boolean done = targetTable.fireBeforeRow(session, null, row);
+ if (!done) {
+ targetTable.lock(session, true, false);
+ targetTable.addRow(session, row);
+ session.getGeneratedKeys().confirmRow(row);
+ session.log(targetTable, UndoLogRecord.INSERT, row);
+ targetTable.fireAfterRow(session, null, row, false);
+ }
+ } catch (DbException e) {
+ if (e.getErrorCode() == ErrorCode.DUPLICATE_KEY_1) {
+ // possibly a concurrent merge or insert
+ Index index = (Index) e.getSource();
+ if (index != null) {
+ // verify the index columns match the key
+ Column[] indexColumns = index.getColumns();
+ boolean indexMatchesKeys = true;
+ if (indexColumns.length <= keys.length) {
+ for (int i = 0; i < indexColumns.length; i++) {
+ if (indexColumns[i] != keys[i]) {
+ indexMatchesKeys = false;
+ break;
+ }
+ }
+ }
+ if (indexMatchesKeys) {
+ throw DbException.get(ErrorCode.CONCURRENT_UPDATE_1, targetTable.getName());
+ }
+ }
+ }
+ throw e;
+ }
+ } else if (count != 1) {
+ throw DbException.get(ErrorCode.DUPLICATE_KEY_1, targetTable.getSQL());
+ }
+ }
+
+ @Override
+ public String getPlanSQL() {
+ StatementBuilder buff = new StatementBuilder("MERGE INTO ");
+ buff.append(targetTable.getSQL()).append('(');
+ for (Column c : columns) {
+ buff.appendExceptFirst(", ");
+
buff.append(c.getSQL()); + } + buff.append(')'); + if (keys != null) { + buff.append(" KEY("); + buff.resetCount(); + for (Column c : keys) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + buff.append(')'); + } + buff.append('\n'); + if (!valuesExpressionList.isEmpty()) { + buff.append("VALUES "); + int row = 0; + for (Expression[] expr : valuesExpressionList) { + if (row++ > 0) { + buff.append(", "); + } + buff.append('('); + buff.resetCount(); + for (Expression e : expr) { + buff.appendExceptFirst(", "); + if (e == null) { + buff.append("DEFAULT"); + } else { + buff.append(e.getSQL()); + } + } + buff.append(')'); + } + } else { + buff.append(query.getPlanSQL()); + } + return buff.toString(); + } + + @Override + public void prepare() { + if (columns == null) { + if (!valuesExpressionList.isEmpty() && valuesExpressionList.get(0).length == 0) { + // special case where table is used as a sequence + columns = new Column[0]; + } else { + columns = targetTable.getColumns(); + } + } + if (!valuesExpressionList.isEmpty()) { + for (Expression[] expr : valuesExpressionList) { + if (expr.length != columns.length) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + for (int i = 0; i < expr.length; i++) { + Expression e = expr[i]; + if (e != null) { + expr[i] = e.optimize(session); + } + } + } + } else { + query.prepare(); + if (query.getColumnCount() != columns.length) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + } + if (keys == null) { + Index idx = targetTable.getPrimaryKey(); + if (idx == null) { + throw DbException.get(ErrorCode.CONSTRAINT_NOT_FOUND_1, "PRIMARY KEY"); + } + keys = idx.getColumns(); + } + StatementBuilder buff = new StatementBuilder("UPDATE "); + buff.append(targetTable.getSQL()).append(" SET "); + for (Column c : columns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()).append("=?"); + } + buff.append(" WHERE "); + buff.resetCount(); + for (Column c : keys) { + 
buff.appendExceptFirst(" AND "); + buff.append(c.getSQL()).append("=?"); + } + String sql = buff.toString(); + update = session.prepare(sql); + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.MERGE; + } + + @Override + public boolean isCacheable() { + return true; + } + + public Table getTargetTable() { + return targetTable; + } + + public TableFilter getTargetTableFilter() { + return targetTableFilter; + } + + public void setTargetTableFilter(TableFilter targetTableFilter) { + this.targetTableFilter = targetTableFilter; + setTargetTable(targetTableFilter.getTable()); + } + + + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/MergeUsing.java b/modules/h2/src/main/java/org/h2/command/dml/MergeUsing.java new file mode 100644 index 0000000000000..174b5f4ce74c1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/MergeUsing.java @@ -0,0 +1,573 @@ +/* + * Copyright 2004-2017 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.command.dml;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import org.h2.api.ErrorCode;
+import org.h2.api.Trigger;
+import org.h2.command.CommandInterface;
+import org.h2.command.Prepared;
+import org.h2.engine.Right;
+import org.h2.expression.ConditionAndOr;
+import org.h2.expression.Expression;
+import org.h2.expression.ExpressionVisitor;
+import org.h2.message.DbException;
+import org.h2.result.ResultInterface;
+import org.h2.result.Row;
+import org.h2.result.RowImpl;
+import org.h2.table.Column;
+import org.h2.table.Table;
+import org.h2.table.TableFilter;
+import org.h2.util.New;
+import org.h2.util.StatementBuilder;
+import org.h2.value.Value;
+
+/**
+ * This class represents the statement syntax
+ * MERGE table alias USING...
+ *
+ * It does not replace the existing MERGE INTO... KEYS... form.
+ *
+ * It supports the SQL 2003/2008 standard MERGE statement:
+ * http://en.wikipedia.org/wiki/Merge_%28SQL%29
+ *
+ * Database management systems Oracle Database, DB2, Teradata, EXASOL, Firebird, CUBRID, HSQLDB,
+ * MS SQL, Vectorwise and Apache Derby & Postgres support the standard syntax of the
+ * SQL 2003/2008 MERGE command:
+ *
+ * MERGE INTO targetTable AS T USING sourceTable AS S ON (T.ID = S.ID)
+ * WHEN MATCHED THEN
+ * UPDATE SET column1 = value1 [, column2 = value2 ...] WHERE column1=valueUpdate
+ * DELETE WHERE column1=valueDelete
+ * WHEN NOT MATCHED THEN
+ * INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...]);
+ *
+ * Only Oracle supports the additional optional DELETE clause.
+ *
+ * Implementation notes:
+ *
+ * 1) The ON clause must specify 1 or more columns from the TARGET table because they are
+ * used in the plan SQL WHERE statement. Otherwise an exception is raised.
+ *
+ * 2) The ON clause must specify 1 or more columns from the SOURCE table/query because they
+ * are used to track the join key values for every source table row - to prevent any
+ * TARGET rows from being updated twice per MERGE USING statement.
+ *
+ * This implements the MERGE INTO specification's requirement that no target row
+ * is updated more than once per MERGE USING statement.
+ * The source columns are used to gather the effective "key" values which have been
+ * updated, in order to implement this requirement.
+ * If no SOURCE table/query columns are found in the ON clause, then an exception is
+ * raised.
+ *
+ * The update row counts of the embedded UPDATE and DELETE statements are also tracked to
+ * ensure no more than 1 row is ever updated. (Note: one special case of this is that
+ * the DELETE is allowed to affect the same row which was updated by UPDATE - this is an
+ * Oracle-only extension.)
+ *
+ * 3) UPDATE and DELETE statements are allowed to specify extra conditional criteria
+ * (in the WHERE clause) to allow fine-grained control of actions when a record is found.
+ * The ON clause conditions are always prepended to the WHERE clause of these embedded
+ * statements, so they will never update more than the ON join condition.
+ *
+ * 4) Previously, if neither an UPDATE nor a DELETE clause was supplied, but an INSERT was,
+ * the INSERT action was always triggered. This is because the embedded UPDATE and DELETE
+ * statements' returned update row counts were used to detect a matching join.
+ * If neither of the two statements was provided, a matching join was never detected.
+ *
+ * A fix for this is now implemented as described below:
+ * We now generate a "matchSelect" query and use that to always detect
+ * a match join - rather than relying on UPDATE or DELETE statements.
+ *
+ * This is an improvement, especially in the case where either of the
+ * UPDATE or DELETE statements has its own fine-grained WHERE conditions, making
+ * them completely different conditions than the plain ON condition clause which
+ * the SQL author would be specifying/expecting.
+ *
+ * An additional benefit of this solution is that this "matchSelect" query
+ * is used to return the ROWID of the found (or inserted) row - for more accurate
+ * enforcing of the only-update-each-target-row-once rule.
+ */
+public class MergeUsing extends Prepared {
+
+ // Merge fields
+ private Table targetTable;
+ private TableFilter targetTableFilter;
+ private Column[] columns;
+ private Column[] keys;
+ private final ArrayList<Expression[]> valuesExpressionList = New
+ .arrayList();
+ private Query query;
+
+ // MergeUsing fields
+ private TableFilter sourceTableFilter;
+ private Expression onCondition;
+ private Update updateCommand;
+ private Delete deleteCommand;
+ private Insert insertCommand;
+ private String queryAlias;
+ private int countUpdatedRows;
+ private Column[] sourceKeys;
+ private Select targetMatchQuery;
+ private final HashMap<Value, Integer> targetRowidsRemembered = new HashMap<>();
+ private int sourceQueryRowNumber;
+
+
+ public MergeUsing(Merge merge) {
+ super(merge.getSession());
+
+ // bring across only the already parsed data from Merge...
+ this.targetTable = merge.getTargetTable(); + this.targetTableFilter = merge.getTargetTableFilter(); + } + + @Override + public int update() { + + // clear list of source table keys & rowids we have processed already + targetRowidsRemembered.clear(); + + if (targetTableFilter != null) { + targetTableFilter.startQuery(session); + targetTableFilter.reset(); + } + + if (sourceTableFilter != null) { + sourceTableFilter.startQuery(session); + sourceTableFilter.reset(); + } + + sourceQueryRowNumber = 0; + checkRights(); + setCurrentRowNumber(0); + + // process source select query data for row creation + ResultInterface rows = query.query(0); + targetTable.fire(session, evaluateTriggerMasks(), true); + targetTable.lock(session, true, false); + while (rows.next()) { + sourceQueryRowNumber++; + Value[] sourceRowValues = rows.currentRow(); + Row sourceRow = new RowImpl(sourceRowValues, 0); + setCurrentRowNumber(sourceQueryRowNumber); + + merge(sourceRow); + } + rows.close(); + targetTable.fire(session, evaluateTriggerMasks(), false); + return countUpdatedRows; + } + + private int evaluateTriggerMasks() { + int masks = 0; + if (insertCommand != null) { + masks |= Trigger.INSERT; + } + if (updateCommand != null) { + masks |= Trigger.UPDATE; + } + if (deleteCommand != null) { + masks |= Trigger.DELETE; + } + return masks; + } + + private void checkRights() { + if (insertCommand != null) { + session.getUser().checkRight(targetTable, Right.INSERT); + } + if (updateCommand != null) { + session.getUser().checkRight(targetTable, Right.UPDATE); + } + if (deleteCommand != null) { + session.getUser().checkRight(targetTable, Right.DELETE); + } + + // check the underlying tables + session.getUser().checkRight(targetTable, Right.SELECT); + session.getUser().checkRight(sourceTableFilter.getTable(), + Right.SELECT); + } + + /** + * Merge the given row. 
+ *
+ * @param sourceRow the row
+ */
+ protected void merge(Row sourceRow) {
+ // put the column values into the table filter
+ sourceTableFilter.set(sourceRow);
+
+ // Is the target row there already?
+ boolean rowFound = isTargetRowFound();
+
+ // try and perform an update
+ int rowUpdateCount = 0;
+
+ if (rowFound) {
+ if (updateCommand != null) {
+ rowUpdateCount += updateCommand.update();
+ }
+ if (deleteCommand != null) {
+ int deleteRowUpdateCount = deleteCommand.update();
+ // under Oracle rules these update & delete combinations are
+ // allowed together
+ if (rowUpdateCount == 1 && deleteRowUpdateCount == 1) {
+ countUpdatedRows += deleteRowUpdateCount;
+ deleteRowUpdateCount = 0;
+ } else {
+ rowUpdateCount += deleteRowUpdateCount;
+ }
+ }
+ } else {
+ // if either updates do nothing, try an insert
+ if (rowUpdateCount == 0) {
+ rowUpdateCount += addRowByCommandInsert(sourceRow);
+ } else if (rowUpdateCount != 1) {
+ throw DbException.get(ErrorCode.DUPLICATE_KEY_1,
+ "Duplicate key inserted " + rowUpdateCount
+ + " rows at once, only 1 expected:"
+ + targetTable.getSQL());
+ }
+
+ }
+ countUpdatedRows += rowUpdateCount;
+ }
+
+ private boolean isTargetRowFound() {
+ ResultInterface rows = targetMatchQuery.query(0);
+ int countTargetRowsFound = 0;
+ Value[] targetRowIdValue = null;
+
+ while (rows.next()) {
+ countTargetRowsFound++;
+ targetRowIdValue = rows.currentRow();
+
+ // throw an exception if we have processed this _ROWID_ before...
+ if (targetRowidsRemembered.containsKey(targetRowIdValue[0])) { + throw DbException.get(ErrorCode.DUPLICATE_KEY_1, + "Merge using ON column expression, " + + "duplicate _ROWID_ target record already updated, deleted or inserted:_ROWID_=" + + targetRowIdValue[0].toString() + ":in:" + + targetTableFilter.getTable() + + ":conflicting source row number:" + + targetRowidsRemembered + .get(targetRowIdValue[0])); + } else { + // remember the source column values we have used before (they + // are the effective ON clause keys + // and should not be repeated + targetRowidsRemembered.put(targetRowIdValue[0], + sourceQueryRowNumber); + } + } + rows.close(); + if (countTargetRowsFound > 1) { + throw DbException.get(ErrorCode.DUPLICATE_KEY_1, + "Duplicate key updated " + countTargetRowsFound + + " rows at once, only 1 expected:_ROWID_=" + + targetRowIdValue[0].toString() + ":in:" + + targetTableFilter.getTable() + + ":conflicting source row number:" + + targetRowidsRemembered.get(targetRowIdValue[0])); + + } + return countTargetRowsFound > 0; + } + + private int addRowByCommandInsert(Row sourceRow) { + int localCount = 0; + if (insertCommand != null) { + localCount += insertCommand.update(); + if (!isTargetRowFound()) { + throw DbException.get(ErrorCode.GENERAL_ERROR_1, + "Expected to find key after row inserted, but none found. 
Insert does not match ON condition.:" +
+ targetTable.getSQL() + ":source row=" +
+ Arrays.asList(sourceRow.getValueList()));
+ }
+ }
+ return localCount;
+ }
+
+ // Use the regular merge syntax as our plan SQL
+ @Override
+ public String getPlanSQL() {
+ StatementBuilder buff = new StatementBuilder("MERGE INTO ");
+ buff.append(targetTable.getSQL()).append('(');
+ for (Column c : columns) {
+ buff.appendExceptFirst(", ");
+ buff.append(c.getSQL());
+ }
+ buff.append(')');
+ if (keys != null) {
+ buff.append(" KEY(");
+ buff.resetCount();
+ for (Column c : keys) {
+ buff.appendExceptFirst(", ");
+ buff.append(c.getSQL());
+ }
+ buff.append(')');
+ }
+ buff.append('\n');
+ if (!valuesExpressionList.isEmpty()) {
+ buff.append("VALUES ");
+ int row = 0;
+ for (Expression[] expr : valuesExpressionList) {
+ if (row++ > 0) {
+ buff.append(", ");
+ }
+ buff.append('(');
+ buff.resetCount();
+ for (Expression e : expr) {
+ buff.appendExceptFirst(", ");
+ if (e == null) {
+ buff.append("DEFAULT");
+ } else {
+ buff.append(e.getSQL());
+ }
+ }
+ buff.append(')');
+ }
+ } else {
+ buff.append(query.getPlanSQL());
+ }
+ return buff.toString();
+ }
+
+ @Override
+ public void prepare() {
+ onCondition.addFilterConditions(sourceTableFilter, true);
+ onCondition.addFilterConditions(targetTableFilter, true);
+
+ onCondition.mapColumns(sourceTableFilter, 2);
+ onCondition.mapColumns(targetTableFilter, 1);
+
+ if (keys == null) {
+ HashSet<Column> targetColumns = buildColumnListFromOnCondition(
+ targetTableFilter);
+ keys = targetColumns.toArray(new Column[0]);
+ }
+ if (keys.length == 0) {
+ throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1,
+ "No references to target columns found in ON clause:" +
+ targetTableFilter.toString());
+ }
+ if (sourceKeys == null) {
+ HashSet<Column> sourceColumns = buildColumnListFromOnCondition(
+ sourceTableFilter);
+ sourceKeys = sourceColumns.toArray(new Column[0]);
+ }
+ if (sourceKeys.length == 0) {
+ throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1,
+ "No
references to source columns found in ON clause:" + + sourceTableFilter.toString()); + } + + // only do the optimize now - before we have already gathered the + // unoptimized column data + onCondition = onCondition.optimize(session); + onCondition.createIndexConditions(session, sourceTableFilter); + onCondition.createIndexConditions(session, targetTableFilter); + + if (columns == null) { + if (!valuesExpressionList.isEmpty() + && valuesExpressionList.get(0).length == 0) { + // special case where table is used as a sequence + columns = new Column[0]; + } else { + columns = targetTable.getColumns(); + } + } + if (!valuesExpressionList.isEmpty()) { + for (Expression[] expr : valuesExpressionList) { + if (expr.length != columns.length) { + throw DbException + .get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + for (int i = 0; i < expr.length; i++) { + Expression e = expr[i]; + if (e != null) { + expr[i] = e.optimize(session); + } + } + } + } else { + query.prepare(); + if (query.getColumnCount() != columns.length) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + } + + int embeddedStatementsCount = 0; + + // Prepare each of the sub-commands ready to aid in the MERGE + // collaboration + if (updateCommand != null) { + updateCommand.setSourceTableFilter(sourceTableFilter); + updateCommand.setCondition(appendOnCondition(updateCommand)); + updateCommand.prepare(); + embeddedStatementsCount++; + } + if (deleteCommand != null) { + deleteCommand.setSourceTableFilter(sourceTableFilter); + deleteCommand.setCondition(appendOnCondition(deleteCommand)); + deleteCommand.prepare(); + embeddedStatementsCount++; + } + if (insertCommand != null) { + insertCommand.setSourceTableFilter(sourceTableFilter); + insertCommand.prepare(); + embeddedStatementsCount++; + } + + if (embeddedStatementsCount == 0) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, + "At least UPDATE, DELETE or INSERT embedded statement must be supplied."); + } + + // setup the targetMatchQuery 
- for detecting if the target row exists
+ Expression targetMatchCondition = targetMatchQuery.getCondition();
+ targetMatchCondition.addFilterConditions(sourceTableFilter, true);
+ targetMatchCondition.mapColumns(sourceTableFilter, 2);
+ targetMatchCondition = targetMatchCondition.optimize(session);
+ targetMatchCondition.createIndexConditions(session, sourceTableFilter);
+ targetMatchQuery.prepare();
+ }
+
+ private HashSet<Column> buildColumnListFromOnCondition(
+ TableFilter anyTableFilter) {
+ HashSet<Column> filteredColumns = new HashSet<>();
+ HashSet<Column> columns = new HashSet<>();
+ ExpressionVisitor visitor = ExpressionVisitor
+ .getColumnsVisitor(columns);
+ onCondition.isEverything(visitor);
+ for (Column c : columns) {
+ if (c != null && c.getTable() == anyTableFilter.getTable()) {
+ filteredColumns.add(c);
+ }
+ }
+ return filteredColumns;
+ }
+
+ private Expression appendOnCondition(Update updateCommand) {
+ if (updateCommand.getCondition() == null) {
+ return onCondition;
+ }
+ return new ConditionAndOr(ConditionAndOr.AND,
+ updateCommand.getCondition(), onCondition);
+ }
+
+ private Expression appendOnCondition(Delete deleteCommand) {
+ if (deleteCommand.getCondition() == null) {
+ return onCondition;
+ }
+ return new ConditionAndOr(ConditionAndOr.AND,
+ deleteCommand.getCondition(), onCondition);
+ }
+
+ public void setSourceTableFilter(TableFilter sourceTableFilter) {
+ this.sourceTableFilter = sourceTableFilter;
+ }
+
+ public TableFilter getSourceTableFilter() {
+ return sourceTableFilter;
+ }
+
+ public void setOnCondition(Expression condition) {
+ this.onCondition = condition;
+ }
+
+ public Expression getOnCondition() {
+ return onCondition;
+ }
+
+ public Prepared getUpdateCommand() {
+ return updateCommand;
+ }
+
+ public void setUpdateCommand(Update updateCommand) {
+ this.updateCommand = updateCommand;
+ }
+
+ public Prepared getDeleteCommand() {
+ return deleteCommand;
+ }
+
+ public void setDeleteCommand(Delete deleteCommand) {
+ this.deleteCommand =
deleteCommand; + } + + public Insert getInsertCommand() { + return insertCommand; + } + + public void setInsertCommand(Insert insertCommand) { + this.insertCommand = insertCommand; + } + + public void setQueryAlias(String alias) { + this.queryAlias = alias; + + } + + public String getQueryAlias() { + return this.queryAlias; + + } + + public Query getQuery() { + return query; + } + + public void setQuery(Query query) { + this.query = query; + } + + public void setTargetTableFilter(TableFilter targetTableFilter) { + this.targetTableFilter = targetTableFilter; + } + + public TableFilter getTargetTableFilter() { + return targetTableFilter; + } + + public Table getTargetTable() { + return targetTable; + } + + public void setTargetTable(Table targetTable) { + this.targetTable = targetTable; + } + + public Select getTargetMatchQuery() { + return targetMatchQuery; + } + + public void setTargetMatchQuery(Select targetMatchQuery) { + this.targetMatchQuery = targetMatchQuery; + } + + // Prepared interface implementations + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.MERGE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/NoOperation.java b/modules/h2/src/main/java/org/h2/command/dml/NoOperation.java new file mode 100644 index 0000000000000..c2595869682ca --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/NoOperation.java @@ -0,0 +1,52 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Session; +import org.h2.result.ResultInterface; + +/** + * Represents an empty statement or a statement that has no effect. 
+ */ +public class NoOperation extends Prepared { + + public NoOperation(Session session) { + super(session); + } + + @Override + public int update() { + return 0; + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public boolean needRecompile() { + return false; + } + + @Override + public boolean isReadOnly() { + return true; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.NO_OPERATION; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Optimizer.java b/modules/h2/src/main/java/org/h2/command/dml/Optimizer.java new file mode 100644 index 0000000000000..fa17a4454fb77 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Optimizer.java @@ -0,0 +1,264 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.table.Plan; +import org.h2.table.PlanItem; +import org.h2.table.TableFilter; +import org.h2.util.BitField; +import org.h2.util.Permutations; + +/** + * The optimizer is responsible for finding the best execution plan + * for a given query.
+ */ +class Optimizer { + + private static final int MAX_BRUTE_FORCE_FILTERS = 7; + private static final int MAX_BRUTE_FORCE = 2000; + private static final int MAX_GENETIC = 500; + private long startNs; + private BitField switched; + + // possible plans for filters, if using brute force: + // 1 filter 1 plan + // 2 filters 2 plans + // 3 filters 6 plans + // 4 filters 24 plans + // 5 filters 120 plans + // 6 filters 720 plans + // 7 filters 5040 plans + // 8 filters 40320 plans + // 9 filters 362880 plans + // 10 filters 3628800 plans + + private final TableFilter[] filters; + private final Expression condition; + private final Session session; + + private Plan bestPlan; + private TableFilter topFilter; + private double cost; + private Random random; + + Optimizer(TableFilter[] filters, Expression condition, Session session) { + this.filters = filters; + this.condition = condition; + this.session = session; + } + + /** + * How many filters to calculate using brute force. The remaining filters are + * selected using a greedy algorithm which has a runtime of (1 + 2 + ... + + * (n-1)) = (n * (n-1) / 2) for n filters. The brute force algorithm has a + * runtime of n * (n-1) * ... * (n-m) when calculating m of the n filters + * by brute force. The combined runtime is (brute force) * (greedy).
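The factorial growth in the plan-count table above is why MAX_BRUTE_FORCE_FILTERS stops at 7; a quick standalone sketch (hypothetical helper, not part of H2) reproduces the figures:

```java
public class PlanCount {
    // Number of join orders a brute-force search must test for n filters: n!
    static long plans(int n) {
        long p = 1;
        for (int i = 2; i <= n; i++) {
            p *= i;
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(plans(7));  // 5040: still feasible to enumerate fully
        System.out.println(plans(10)); // 3628800: why only a prefix of the filters is brute-forced
    }
}
```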
+ * + * @param filterCount the number of filters total + * @return the number of filters to calculate using brute force + */ + private static int getMaxBruteForceFilters(int filterCount) { + int i = 0, j = filterCount, total = filterCount; + while (j > 0 && total * (j * (j - 1) / 2) < MAX_BRUTE_FORCE) { + j--; + total *= j; + i++; + } + return i; + } + + private void calculateBestPlan() { + cost = -1; + if (filters.length == 1 || session.isForceJoinOrder()) { + testPlan(filters); + } else { + startNs = System.nanoTime(); + if (filters.length <= MAX_BRUTE_FORCE_FILTERS) { + calculateBruteForceAll(); + } else { + calculateBruteForceSome(); + random = new Random(0); + calculateGenetic(); + } + } + } + + private void calculateFakePlan() { + cost = -1; + bestPlan = new Plan(filters, filters.length, condition); + } + + private boolean canStop(int x) { + return (x & 127) == 0 + && cost >= 0 // don't calculate for simple queries (no rows or so) + && 10 * (System.nanoTime() - startNs) > cost * TimeUnit.MILLISECONDS.toNanos(1); + } + + private void calculateBruteForceAll() { + TableFilter[] list = new TableFilter[filters.length]; + Permutations p = Permutations.create(filters, list); + for (int x = 0; !canStop(x) && p.next(); x++) { + testPlan(list); + } + } + + private void calculateBruteForceSome() { + int bruteForce = getMaxBruteForceFilters(filters.length); + TableFilter[] list = new TableFilter[filters.length]; + Permutations p = Permutations.create(filters, list, bruteForce); + for (int x = 0; !canStop(x) && p.next(); x++) { + // find out what filters are not used yet + for (TableFilter f : filters) { + f.setUsed(false); + } + for (int i = 0; i < bruteForce; i++) { + list[i].setUsed(true); + } + // fill the remaining elements with the unused elements (greedy) + for (int i = bruteForce; i < filters.length; i++) { + double costPart = -1.0; + int bestPart = -1; + for (int j = 0; j < filters.length; j++) { + if (!filters[j].isUsed()) { + if (i == filters.length - 1) { + 
bestPart = j; + break; + } + list[i] = filters[j]; + Plan part = new Plan(list, i+1, condition); + double costNow = part.calculateCost(session); + if (costPart < 0 || costNow < costPart) { + costPart = costNow; + bestPart = j; + } + } + } + filters[bestPart].setUsed(true); + list[i] = filters[bestPart]; + } + testPlan(list); + } + } + + private void calculateGenetic() { + TableFilter[] best = new TableFilter[filters.length]; + TableFilter[] list = new TableFilter[filters.length]; + for (int x = 0; x < MAX_GENETIC; x++) { + if (canStop(x)) { + break; + } + boolean generateRandom = (x & 127) == 0; + if (!generateRandom) { + System.arraycopy(best, 0, list, 0, filters.length); + if (!shuffleTwo(list)) { + generateRandom = true; + } + } + if (generateRandom) { + switched = new BitField(); + System.arraycopy(filters, 0, best, 0, filters.length); + shuffleAll(best); + System.arraycopy(best, 0, list, 0, filters.length); + } + if (testPlan(list)) { + switched = new BitField(); + System.arraycopy(list, 0, best, 0, filters.length); + } + } + } + + private boolean testPlan(TableFilter[] list) { + Plan p = new Plan(list, list.length, condition); + double costNow = p.calculateCost(session); + if (cost < 0 || costNow < cost) { + cost = costNow; + bestPlan = p; + return true; + } + return false; + } + + private void shuffleAll(TableFilter[] f) { + for (int i = 0; i < f.length - 1; i++) { + int j = i + random.nextInt(f.length - i); + if (j != i) { + TableFilter temp = f[i]; + f[i] = f[j]; + f[j] = temp; + } + } + } + + private boolean shuffleTwo(TableFilter[] f) { + int a = 0, b = 0, i = 0; + for (; i < 20; i++) { + a = random.nextInt(f.length); + b = random.nextInt(f.length); + if (a == b) { + continue; + } + if (a < b) { + int temp = a; + a = b; + b = temp; + } + int s = a * f.length + b; + if (switched.get(s)) { + continue; + } + switched.set(s); + break; + } + if (i == 20) { + return false; + } + TableFilter temp = f[a]; + f[a] = f[b]; + f[b] = temp; + return true; + } + + /** 
+ * Calculate the best query plan to use. + * + * @param parse If we do not need to really get the best plan because it is + * a view parsing stage. + */ + void optimize(boolean parse) { + if (parse) { + calculateFakePlan(); + } else { + calculateBestPlan(); + bestPlan.removeUnusableIndexConditions(); + } + TableFilter[] f2 = bestPlan.getFilters(); + topFilter = f2[0]; + for (int i = 0; i < f2.length - 1; i++) { + f2[i].addJoin(f2[i + 1], false, null); + } + if (parse) { + return; + } + for (TableFilter f : f2) { + PlanItem item = bestPlan.getItem(f); + f.setPlanItem(item); + } + } + + public TableFilter getTopFilter() { + return topFilter; + } + + double getCost() { + return cost; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Query.java b/modules/h2/src/main/java/org/h2/command/dml/Query.java new file mode 100644 index 0000000000000..e66e361d90055 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Query.java @@ -0,0 +1,596 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.ArrayList; +import java.util.HashSet; + +import org.h2.api.ErrorCode; +import org.h2.command.Prepared; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.expression.Alias; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.expression.ExpressionVisitor; +import org.h2.expression.Parameter; +import org.h2.expression.ValueExpression; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.ResultTarget; +import org.h2.result.SortOrder; +import org.h2.table.ColumnResolver; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.New; +import org.h2.value.Value; +import org.h2.value.ValueInt; +import org.h2.value.ValueNull; + +/** + * Represents a SELECT statement (simple, or union). + */ +public abstract class Query extends Prepared { + + /** + * The limit expression as specified in the LIMIT or TOP clause. + */ + protected Expression limitExpr; + + /** + * The offset expression as specified in the LIMIT .. OFFSET clause. + */ + protected Expression offsetExpr; + + /** + * The sample size expression as specified in the SAMPLE_SIZE clause. + */ + protected Expression sampleSizeExpr; + + /** + * Whether the result must only contain distinct rows. + */ + protected boolean distinct; + + /** + * Whether the result needs to support random access. + */ + protected boolean randomAccessResult; + + private boolean noCache; + private int lastLimit; + private long lastEvaluated; + private ResultInterface lastResult; + private Value[] lastParameters; + private boolean cacheableChecked; + private boolean neverLazy; + + Query(Session session) { + super(session); + } + + public void setNeverLazy(boolean b) { + this.neverLazy = b; + } + + public boolean isNeverLazy() { + return neverLazy; + } + + /** + * Check if this is a UNION query. 
+ * + * @return {@code true} if this is a UNION query + */ + public abstract boolean isUnion(); + + /** + * Prepare join batching. + */ + public abstract void prepareJoinBatch(); + + /** + * Execute the query without checking the cache. If a target is specified, + * the results are written to it, and the method returns null. If no target + * is specified, a new LocalResult is created and returned. + * + * @param limit the limit as specified in the JDBC method call + * @param target the target to write results to + * @return the result + */ + protected abstract ResultInterface queryWithoutCache(int limit, + ResultTarget target); + + private ResultInterface queryWithoutCacheLazyCheck(int limit, + ResultTarget target) { + boolean disableLazy = neverLazy && session.isLazyQueryExecution(); + if (disableLazy) { + session.setLazyQueryExecution(false); + } + try { + return queryWithoutCache(limit, target); + } finally { + if (disableLazy) { + session.setLazyQueryExecution(true); + } + } + } + + /** + * Initialize the query. + */ + public abstract void init(); + + /** + * Get the list of select expressions. + * This may include invisible expressions such as order by expressions. + * + * @return the list of expressions + */ + public abstract ArrayList getExpressions(); + + /** + * Calculate the cost to execute this query. + * + * @return the cost + */ + public abstract double getCost(); + + /** + * Calculate the cost when used as a subquery. + * This method returns a value between 10 and 1000000, + * to ensure adding other values can't result in an integer overflow. + * + * @return the estimated cost as an integer + */ + public int getCostAsExpression() { + // ensure the cost is not larger than 1 million, + // so that adding other values can't overflow + return (int) Math.min(1_000_000d, 10d + 10d * getCost()); + } + + /** + * Get all tables that are involved in this query. + * + * @return the set of tables + */ + public abstract HashSet
    getTables(); + + /** + * Set the order by list. + * + * @param order the order by list + */ + public abstract void setOrder(ArrayList order); + + /** + * Whether the query has an order. + * + * @return true if it has + */ + public abstract boolean hasOrder(); + + /** + * Set the 'for update' flag. + * + * @param forUpdate the new setting + */ + public abstract void setForUpdate(boolean forUpdate); + + /** + * Get the column count of this query. + * + * @return the column count + */ + public abstract int getColumnCount(); + + /** + * Map the columns to the given column resolver. + * + * @param resolver + * the resolver + * @param level + * the subquery level (0 is the top level query, 1 is the first + * subquery level) + */ + public abstract void mapColumns(ColumnResolver resolver, int level); + + /** + * Change the evaluatable flag. This is used when building the execution + * plan. + * + * @param tableFilter the table filter + * @param b the new value + */ + public abstract void setEvaluatable(TableFilter tableFilter, boolean b); + + /** + * Add a condition to the query. This is used for views. + * + * @param param the parameter + * @param columnId the column index (0 meaning the first column) + * @param comparisonType the comparison type + */ + public abstract void addGlobalCondition(Parameter param, int columnId, + int comparisonType); + + /** + * Check whether adding condition to the query is allowed. This is not + * allowed for views that have an order by and a limit, as it would affect + * the returned results. + * + * @return true if adding global conditions is allowed + */ + public abstract boolean allowGlobalConditions(); + + /** + * Check if this expression and all sub-expressions can fulfill a criteria. + * If any part returns false, the result is false. 
+ * + * @param visitor the visitor + * @return if the criteria can be fulfilled + */ + public abstract boolean isEverything(ExpressionVisitor visitor); + + /** + * Update all aggregate function values. + * + * @param s the session + */ + public abstract void updateAggregate(Session s); + + /** + * Call the before triggers on all tables. + */ + public abstract void fireBeforeSelectTriggers(); + + /** + * Set the distinct flag. + * + * @param b the new value + */ + public void setDistinct(boolean b) { + distinct = b; + } + + public boolean isDistinct() { + return distinct; + } + + /** + * Whether results need to support random access. + * + * @param b the new value + */ + public void setRandomAccessResult(boolean b) { + randomAccessResult = b; + } + + @Override + public boolean isQuery() { + return true; + } + + @Override + public boolean isTransactional() { + return true; + } + + /** + * Disable caching of result sets. + */ + public void disableCache() { + this.noCache = true; + } + + private boolean sameResultAsLast(Session s, Value[] params, + Value[] lastParams, long lastEval) { + if (!cacheableChecked) { + long max = getMaxDataModificationId(); + noCache = max == Long.MAX_VALUE; + cacheableChecked = true; + } + if (noCache) { + return false; + } + Database db = s.getDatabase(); + for (int i = 0; i < params.length; i++) { + Value a = lastParams[i], b = params[i]; + if (a.getType() != b.getType() || !db.areEqual(a, b)) { + return false; + } + } + if (!isEverything(ExpressionVisitor.DETERMINISTIC_VISITOR) || + !isEverything(ExpressionVisitor.INDEPENDENT_VISITOR)) { + return false; + } + if (db.getModificationDataId() > lastEval && + getMaxDataModificationId() > lastEval) { + return false; + } + return true; + } + + public final Value[] getParameterValues() { + ArrayList list = getParameters(); + if (list == null) { + list = New.arrayList(); + } + int size = list.size(); + Value[] params = new Value[size]; + for (int i = 0; i < size; i++) { + Value v = 
list.get(i).getParamValue(); + params[i] = v; + } + return params; + } + + @Override + public final ResultInterface query(int maxrows) { + return query(maxrows, null); + } + + /** + * Execute the query, writing the result to the target result. + * + * @param limit the maximum number of rows to return + * @param target the target result (null will return the result) + * @return the result set (if the target is not set). + */ + public final ResultInterface query(int limit, ResultTarget target) { + if (isUnion()) { + // union doesn't always know the parameter list of the left and + // right queries + return queryWithoutCacheLazyCheck(limit, target); + } + fireBeforeSelectTriggers(); + if (noCache || !session.getDatabase().getOptimizeReuseResults() || + session.isLazyQueryExecution()) { + return queryWithoutCacheLazyCheck(limit, target); + } + Value[] params = getParameterValues(); + long now = session.getDatabase().getModificationDataId(); + if (isEverything(ExpressionVisitor.DETERMINISTIC_VISITOR)) { + if (lastResult != null && !lastResult.isClosed() && + limit == lastLimit) { + if (sameResultAsLast(session, params, lastParameters, + lastEvaluated)) { + lastResult = lastResult.createShallowCopy(session); + if (lastResult != null) { + lastResult.reset(); + return lastResult; + } + } + } + } + lastParameters = params; + closeLastResult(); + ResultInterface r = queryWithoutCacheLazyCheck(limit, target); + lastResult = r; + this.lastEvaluated = now; + lastLimit = limit; + return r; + } + + private void closeLastResult() { + if (lastResult != null) { + lastResult.close(); + } + } + + /** + * Initialize the order by list. This call may extend the expressions list. 
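The initOrder logic that follows resolves each ORDER BY item either to an existing select expression (by alias or column match) or appends it as an invisible trailing expression. A much-simplified sketch of that resolution (assumed names, alias matching only; the real code also checks table aliases and SQL text):

```java
import java.util.ArrayList;
import java.util.List;

public class OrderResolve {
    // Resolve an ORDER BY item to a 0-based select-list index, appending a
    // new (invisible) expression when nothing in the select list matches.
    static int resolve(List<String> selectAliases, String orderItem) {
        for (int i = 0; i < selectAliases.size(); i++) {
            if (selectAliases.get(i).equalsIgnoreCase(orderItem)) {
                return i; // matched an existing select expression
            }
        }
        selectAliases.add(orderItem); // extend the list, as initOrder may do
        return selectAliases.size() - 1;
    }

    public static void main(String[] args) {
        List<String> select = new ArrayList<>(List.of("ID", "NAME"));
        System.out.println(resolve(select, "name")); // 1: alias match
        System.out.println(resolve(select, "AGE"));  // 2: appended to the tail
    }
}
```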
+ * + * @param session the session + * @param expressions the select list expressions + * @param expressionSQL the select list SQL snippets + * @param orderList the order by list + * @param visible the number of visible columns in the select list + * @param mustBeInResult all order by expressions must be in the select list + * @param filters the table filters + */ + static void initOrder(Session session, + ArrayList expressions, + ArrayList expressionSQL, + ArrayList orderList, + int visible, + boolean mustBeInResult, + ArrayList filters) { + Database db = session.getDatabase(); + for (SelectOrderBy o : orderList) { + Expression e = o.expression; + if (e == null) { + continue; + } + // special case: SELECT 1 AS A FROM DUAL ORDER BY A + // (oracle supports it, but only in order by, not in group by and + // not in having): + // SELECT 1 AS A FROM DUAL ORDER BY -A + boolean isAlias = false; + int idx = expressions.size(); + if (e instanceof ExpressionColumn) { + // order by expression + ExpressionColumn exprCol = (ExpressionColumn) e; + String tableAlias = exprCol.getOriginalTableAliasName(); + String col = exprCol.getOriginalColumnName(); + for (int j = 0; j < visible; j++) { + boolean found = false; + Expression ec = expressions.get(j); + if (ec instanceof ExpressionColumn) { + // select expression + ExpressionColumn c = (ExpressionColumn) ec; + found = db.equalsIdentifiers(col, c.getColumnName()); + if (found && tableAlias != null) { + String ca = c.getOriginalTableAliasName(); + if (ca == null) { + found = false; + if (filters != null) { + // select id from test order by test.id + for (TableFilter f : filters) { + if (db.equalsIdentifiers(f.getTableAlias(), tableAlias)) { + found = true; + break; + } + } + } + } else { + found = db.equalsIdentifiers(ca, tableAlias); + } + } + } else if (!(ec instanceof Alias)) { + continue; + } else if (tableAlias == null && db.equalsIdentifiers(col, ec.getAlias())) { + found = true; + } else { + Expression ec2 = 
ec.getNonAliasExpression(); + if (ec2 instanceof ExpressionColumn) { + ExpressionColumn c2 = (ExpressionColumn) ec2; + String ta = exprCol.getSQL(); + String tb = c2.getSQL(); + String s2 = c2.getColumnName(); + found = db.equalsIdentifiers(col, s2); + if (!db.equalsIdentifiers(ta, tb)) { + found = false; + } + } + } + if (found) { + idx = j; + isAlias = true; + break; + } + } + } else { + String s = e.getSQL(); + if (expressionSQL != null) { + for (int j = 0, size = expressionSQL.size(); j < size; j++) { + String s2 = expressionSQL.get(j); + if (db.equalsIdentifiers(s2, s)) { + idx = j; + isAlias = true; + break; + } + } + } + } + if (!isAlias) { + if (mustBeInResult) { + throw DbException.get(ErrorCode.ORDER_BY_NOT_IN_RESULT, + e.getSQL()); + } + expressions.add(e); + String sql = e.getSQL(); + expressionSQL.add(sql); + } + o.columnIndexExpr = ValueExpression.get(ValueInt.get(idx + 1)); + o.expression = expressions.get(idx).getNonAliasExpression(); + } + } + + /** + * Create a {@link SortOrder} object given the list of {@link SelectOrderBy} + * objects. The expression list is extended if necessary. 
+ * + * @param orderList a list of {@link SelectOrderBy} elements + * @param expressionCount the number of columns in the query + * @return the {@link SortOrder} object + */ + public SortOrder prepareOrder(ArrayList orderList, + int expressionCount) { + int size = orderList.size(); + int[] index = new int[size]; + int[] sortType = new int[size]; + for (int i = 0; i < size; i++) { + SelectOrderBy o = orderList.get(i); + int idx; + boolean reverse = false; + Expression expr = o.columnIndexExpr; + Value v = expr.getValue(null); + if (v == ValueNull.INSTANCE) { + // parameter not yet set - order by first column + idx = 0; + } else { + idx = v.getInt(); + if (idx < 0) { + reverse = true; + idx = -idx; + } + idx -= 1; + if (idx < 0 || idx >= expressionCount) { + throw DbException.get(ErrorCode.ORDER_BY_NOT_IN_RESULT, "" + (idx + 1)); + } + } + index[i] = idx; + boolean desc = o.descending; + if (reverse) { + desc = !desc; + } + int type = desc ? SortOrder.DESCENDING : SortOrder.ASCENDING; + if (o.nullsFirst) { + type += SortOrder.NULLS_FIRST; + } else if (o.nullsLast) { + type += SortOrder.NULLS_LAST; + } + sortType[i] = type; + } + return new SortOrder(session.getDatabase(), index, sortType, orderList); + } + + public void setOffset(Expression offset) { + this.offsetExpr = offset; + } + + public Expression getOffset() { + return offsetExpr; + } + + public void setLimit(Expression limit) { + this.limitExpr = limit; + } + + public Expression getLimit() { + return limitExpr; + } + + /** + * Add a parameter to the parameter list. + * + * @param param the parameter to add + */ + void addParameter(Parameter param) { + if (parameters == null) { + parameters = New.arrayList(); + } + parameters.add(param); + } + + public void setSampleSize(Expression sampleSize) { + this.sampleSizeExpr = sampleSize; + } + + /** + * Get the sample size, if set. 
+ * + * @param session the session + * @return the sample size + */ + int getSampleSizeValue(Session session) { + if (sampleSizeExpr == null) { + return 0; + } + Value v = sampleSizeExpr.optimize(session).getValue(session); + if (v == ValueNull.INSTANCE) { + return 0; + } + return v.getInt(); + } + + public final long getMaxDataModificationId() { + ExpressionVisitor visitor = ExpressionVisitor.getMaxModificationIdVisitor(); + isEverything(visitor); + return visitor.getMaxDataModificationId(); + } +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Replace.java b/modules/h2/src/main/java/org/h2/command/dml/Replace.java new file mode 100644 index 0000000000000..bfbd89069be04 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Replace.java @@ -0,0 +1,321 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.command.Command; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.engine.UndoLogRecord; +import org.h2.expression.Expression; +import org.h2.expression.Parameter; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; + +import java.util.ArrayList; + +/** + * This class represents the MySQL-compatibility REPLACE statement + */ +public class Replace extends Prepared { + + private Table table; + private Column[] columns; + private Column[] keys; + private final ArrayList list = New.arrayList(); + private Query query; + private Prepared update; + + public Replace(Session session) { + super(session); + } + + 
@Override + public void setCommand(Command command) { + super.setCommand(command); + if (query != null) { + query.setCommand(command); + } + } + + public void setTable(Table table) { + this.table = table; + } + + public void setColumns(Column[] columns) { + this.columns = columns; + } + + public void setKeys(Column[] keys) { + this.keys = keys; + } + + public void setQuery(Query query) { + this.query = query; + } + + /** + * Add a row to this replace statement. + * + * @param expr the list of values + */ + public void addRow(Expression[] expr) { + list.add(expr); + } + + @Override + public int update() { + int count; + session.getUser().checkRight(table, Right.INSERT); + session.getUser().checkRight(table, Right.UPDATE); + setCurrentRowNumber(0); + if (!list.isEmpty()) { + count = 0; + for (int x = 0, size = list.size(); x < size; x++) { + setCurrentRowNumber(x + 1); + Expression[] expr = list.get(x); + Row newRow = table.getTemplateRow(); + for (int i = 0, len = columns.length; i < len; i++) { + Column c = columns[i]; + int index = c.getColumnId(); + Expression e = expr[i]; + if (e != null) { + // e can be null (DEFAULT) + try { + Value v = c.convert(e.getValue(session)); + newRow.setValue(index, v); + } catch (DbException ex) { + throw setRow(ex, count, getSQL(expr)); + } + } + } + replace(newRow); + count++; + } + } else { + ResultInterface rows = query.query(0); + count = 0; + table.fire(session, Trigger.UPDATE | Trigger.INSERT, true); + table.lock(session, true, false); + while (rows.next()) { + count++; + Value[] r = rows.currentRow(); + Row newRow = table.getTemplateRow(); + setCurrentRowNumber(count); + for (int j = 0; j < columns.length; j++) { + Column c = columns[j]; + int index = c.getColumnId(); + try { + Value v = c.convert(r[j]); + newRow.setValue(index, v); + } catch (DbException ex) { + throw setRow(ex, count, getSQL(r)); + } + } + replace(newRow); + } + rows.close(); + table.fire(session, Trigger.UPDATE | Trigger.INSERT, false); + } + return 
count; + } + + private void replace(Row row) { + int count = update(row); + if (count == 0) { + try { + table.validateConvertUpdateSequence(session, row); + boolean done = table.fireBeforeRow(session, null, row); + if (!done) { + table.lock(session, true, false); + table.addRow(session, row); + session.log(table, UndoLogRecord.INSERT, row); + table.fireAfterRow(session, null, row, false); + } + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.DUPLICATE_KEY_1) { + // possibly a concurrent replace or insert + Index index = (Index) e.getSource(); + if (index != null) { + // verify the index columns match the key + Column[] indexColumns = index.getColumns(); + boolean indexMatchesKeys = false; + if (indexColumns.length <= keys.length) { + // assume a match until a differing column is found + indexMatchesKeys = true; + for (int i = 0; i < indexColumns.length; i++) { + if (indexColumns[i] != keys[i]) { + indexMatchesKeys = false; + break; + } + } + } + if (indexMatchesKeys) { + throw DbException.get(ErrorCode.CONCURRENT_UPDATE_1, table.getName()); + } + } + } + throw e; + } + } else if (count != 1) { + throw DbException.get(ErrorCode.DUPLICATE_KEY_1, table.getSQL()); + } + } + + private int update(Row row) { + // if there is no valid primary key, + // the statement degenerates to an INSERT + if (update == null) { + return 0; + } + ArrayList k = update.getParameters(); + for (int i = 0; i < columns.length; i++) { + Column col = columns[i]; + Value v = row.getValue(col.getColumnId()); + Parameter p = k.get(i); + p.setValue(v); + } + for (int i = 0; i < keys.length; i++) { + Column col = keys[i]; + Value v = row.getValue(col.getColumnId()); + if (v == null) { + throw DbException.get(ErrorCode.COLUMN_CONTAINS_NULL_VALUES_1, col.getSQL()); + } + Parameter p = k.get(columns.length + i); + p.setValue(v); + } + return update.update(); + } + + @Override + public String getPlanSQL() { + StatementBuilder buff = new StatementBuilder("REPLACE INTO "); + buff.append(table.getSQL()).append('('); + for (Column c : columns) { + buff.appendExceptFirst(", 
"); + buff.append(c.getSQL()); + } + buff.append(')'); + buff.append('\n'); + if (!list.isEmpty()) { + buff.append("VALUES "); + int row = 0; + for (Expression[] expr : list) { + if (row++ > 0) { + buff.append(", "); + } + buff.append('('); + buff.resetCount(); + for (Expression e : expr) { + buff.appendExceptFirst(", "); + if (e == null) { + buff.append("DEFAULT"); + } else { + buff.append(e.getSQL()); + } + } + buff.append(')'); + } + } else { + buff.append(query.getPlanSQL()); + } + return buff.toString(); + } + + @Override + public void prepare() { + if (columns == null) { + if (!list.isEmpty() && list.get(0).length == 0) { + // special case where table is used as a sequence + columns = new Column[0]; + } else { + columns = table.getColumns(); + } + } + if (!list.isEmpty()) { + for (Expression[] expr : list) { + if (expr.length != columns.length) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + for (int i = 0; i < expr.length; i++) { + Expression e = expr[i]; + if (e != null) { + expr[i] = e.optimize(session); + } + } + } + } else { + query.prepare(); + if (query.getColumnCount() != columns.length) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + } + if (keys == null) { + Index idx = table.getPrimaryKey(); + if (idx == null) { + throw DbException.get(ErrorCode.CONSTRAINT_NOT_FOUND_1, "PRIMARY KEY"); + } + keys = idx.getColumns(); + } + // if there is no valid primary key, the statement degenerates to an + // INSERT + for (Column key : keys) { + boolean found = false; + for (Column column : columns) { + if (column.getColumnId() == key.getColumnId()) { + found = true; + break; + } + } + if (!found) { + return; + } + } + StatementBuilder buff = new StatementBuilder("UPDATE "); + buff.append(table.getSQL()).append(" SET "); + for (Column c : columns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()).append("=?"); + } + buff.append(" WHERE "); + buff.resetCount(); + for (Column c : keys) { + 
buff.appendExceptFirst(" AND "); + buff.append(c.getSQL()).append("=?"); + } + String sql = buff.toString(); + update = session.prepare(sql); + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.REPLACE; + } + + @Override + public boolean isCacheable() { + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/RunScriptCommand.java b/modules/h2/src/main/java/org/h2/command/dml/RunScriptCommand.java new file mode 100644 index 0000000000000..2c5b02fc07472 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/RunScriptCommand.java @@ -0,0 +1,103 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStreamReader; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; + +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.util.ScriptReader; + +/** + * This class represents the statement + * RUNSCRIPT + */ +public class RunScriptCommand extends ScriptBase { + + /** + * The byte order mark. + * 0xfeff because this is the Unicode char + * represented by the UTF-8 byte order mark (EF BB BF). 
+ */ + private static final char UTF8_BOM = '\uFEFF'; + + private Charset charset = StandardCharsets.UTF_8; + + public RunScriptCommand(Session session) { + super(session); + } + + @Override + public int update() { + session.getUser().checkAdmin(); + int count = 0; + try { + openInput(); + BufferedReader reader = new BufferedReader(new InputStreamReader(in, charset)); + // if necessary, strip the BOM from the front of the file + reader.mark(1); + if (reader.read() != UTF8_BOM) { + reader.reset(); + } + ScriptReader r = new ScriptReader(reader); + while (true) { + String sql = r.readStatement(); + if (sql == null) { + break; + } + execute(sql); + count++; + if ((count & 127) == 0) { + checkCanceled(); + } + } + r.close(); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } finally { + closeIO(); + } + return count; + } + + private void execute(String sql) { + try { + Prepared command = session.prepare(sql); + if (command.isQuery()) { + command.query(0); + } else { + command.update(); + } + if (session.getAutoCommit()) { + session.commit(false); + } + } catch (DbException e) { + throw e.addSQL(sql); + } + } + + public void setCharset(Charset charset) { + this.charset = charset; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.RUNSCRIPT; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/ScriptBase.java b/modules/h2/src/main/java/org/h2/command/dml/ScriptBase.java new file mode 100644 index 0000000000000..c8d8651f98872 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/ScriptBase.java @@ -0,0 +1,267 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import org.h2.api.ErrorCode; +import org.h2.api.JavaObjectSerializer; +import org.h2.command.Prepared; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.security.SHA256; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.store.FileStoreInputStream; +import org.h2.store.FileStoreOutputStream; +import org.h2.store.LobStorageBackend; +import org.h2.store.fs.FileUtils; +import org.h2.tools.CompressTool; +import org.h2.util.IOUtils; +import org.h2.util.SmallLRUCache; +import org.h2.util.TempFileDeleter; +import org.h2.value.CompareMode; + +/** + * This class is the base for RunScriptCommand and ScriptCommand. + */ +abstract class ScriptBase extends Prepared implements DataHandler { + + /** + * The default name of the script file if .zip compression is used. + */ + private static final String SCRIPT_SQL = "script.sql"; + + /** + * The output stream. + */ + protected OutputStream out; + + /** + * The input stream. + */ + protected InputStream in; + + /** + * The file name (if set). 
+ */ + private Expression fileNameExpr; + + private Expression password; + + private String fileName; + + private String cipher; + private FileStore store; + private String compressionAlgorithm; + + ScriptBase(Session session) { + super(session); + } + + public void setCipher(String c) { + cipher = c; + } + + private boolean isEncrypted() { + return cipher != null; + } + + public void setPassword(Expression password) { + this.password = password; + } + + public void setFileNameExpr(Expression file) { + this.fileNameExpr = file; + } + + protected String getFileName() { + if (fileNameExpr != null && fileName == null) { + fileName = fileNameExpr.optimize(session).getValue(session).getString(); + if (fileName == null || fileName.trim().length() == 0) { + fileName = "script.sql"; + } + fileName = SysProperties.getScriptDirectory() + fileName; + } + return fileName; + } + + @Override + public boolean isTransactional() { + return false; + } + + /** + * Delete the target file. + */ + void deleteStore() { + String file = getFileName(); + if (file != null) { + FileUtils.delete(file); + } + } + + private void initStore() { + Database db = session.getDatabase(); + byte[] key = null; + if (cipher != null && password != null) { + char[] pass = password.optimize(session). + getValue(session).getString().toCharArray(); + key = SHA256.getKeyPasswordHash("script", pass); + } + String file = getFileName(); + store = FileStore.open(db, file, "rw", cipher, key); + store.setCheckedWriting(false); + store.init(); + } + + /** + * Open the output stream. 
+ */ + void openOutput() { + String file = getFileName(); + if (file == null) { + return; + } + if (isEncrypted()) { + initStore(); + out = new FileStoreOutputStream(store, this, compressionAlgorithm); + // always use a big buffer, otherwise end-of-block is written a lot + out = new BufferedOutputStream(out, Constants.IO_BUFFER_SIZE_COMPRESS); + } else { + OutputStream o; + try { + o = FileUtils.newOutputStream(file, false); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + out = new BufferedOutputStream(o, Constants.IO_BUFFER_SIZE); + out = CompressTool.wrapOutputStream(out, compressionAlgorithm, SCRIPT_SQL); + } + } + + /** + * Open the input stream. + */ + void openInput() { + String file = getFileName(); + if (file == null) { + return; + } + if (isEncrypted()) { + initStore(); + in = new FileStoreInputStream(store, this, compressionAlgorithm != null, false); + } else { + InputStream inStream; + try { + inStream = FileUtils.newInputStream(file); + } catch (IOException e) { + throw DbException.convertIOException(e, file); + } + in = new BufferedInputStream(inStream, Constants.IO_BUFFER_SIZE); + in = CompressTool.wrapInputStream(in, compressionAlgorithm, SCRIPT_SQL); + if (in == null) { + throw DbException.get(ErrorCode.FILE_NOT_FOUND_1, SCRIPT_SQL + " in " + file); + } + } + } + + /** + * Close input and output streams. 
+ */ + void closeIO() { + IOUtils.closeSilently(out); + out = null; + IOUtils.closeSilently(in); + in = null; + if (store != null) { + store.closeSilently(); + store = null; + } + } + + @Override + public boolean needRecompile() { + return false; + } + + @Override + public String getDatabasePath() { + return null; + } + + @Override + public FileStore openFile(String name, String mode, boolean mustExist) { + return null; + } + + @Override + public void checkPowerOff() { + session.getDatabase().checkPowerOff(); + } + + @Override + public void checkWritingAllowed() { + session.getDatabase().checkWritingAllowed(); + } + + @Override + public int getMaxLengthInplaceLob() { + return session.getDatabase().getMaxLengthInplaceLob(); + } + + @Override + public TempFileDeleter getTempFileDeleter() { + return session.getDatabase().getTempFileDeleter(); + } + + @Override + public String getLobCompressionAlgorithm(int type) { + return session.getDatabase().getLobCompressionAlgorithm(type); + } + + public void setCompressionAlgorithm(String algorithm) { + this.compressionAlgorithm = algorithm; + } + + @Override + public Object getLobSyncObject() { + return this; + } + + @Override + public SmallLRUCache getLobFileListCache() { + return null; + } + + @Override + public LobStorageBackend getLobStorage() { + return null; + } + + @Override + public int readLob(long lobId, byte[] hmac, long offset, byte[] buff, + int off, int length) { + throw DbException.throwInternalError(); + } + + @Override + public JavaObjectSerializer getJavaObjectSerializer() { + return session.getDataHandler().getJavaObjectSerializer(); + } + + @Override + public CompareMode getCompareMode() { + return session.getDataHandler().getCompareMode(); + } +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/ScriptCommand.java b/modules/h2/src/main/java/org/h2/command/dml/ScriptCommand.java new file mode 100644 index 0000000000000..9c92fb7d605f7 --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/command/dml/ScriptCommand.java @@ -0,0 +1,729 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.io.BufferedInputStream; +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.Set; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.Parser; +import org.h2.constraint.Constraint; +import org.h2.engine.Comment; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Right; +import org.h2.engine.Role; +import org.h2.engine.Session; +import org.h2.engine.Setting; +import org.h2.engine.SysProperties; +import org.h2.engine.User; +import org.h2.engine.UserAggregate; +import org.h2.engine.UserDataType; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.index.Cursor; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.result.LocalResult; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.schema.Constant; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObject; +import org.h2.schema.Sequence; +import org.h2.schema.TriggerObject; +import org.h2.table.Column; +import org.h2.table.PlanItem; +import org.h2.table.Table; +import org.h2.table.TableType; +import org.h2.util.IOUtils; +import org.h2.util.MathUtils; +import org.h2.util.StatementBuilder; +import 
org.h2.util.StringUtils; +import org.h2.util.Utils; +import org.h2.value.Value; +import org.h2.value.ValueString; + +/** + * This class represents the statement + * SCRIPT + */ +public class ScriptCommand extends ScriptBase { + + private Charset charset = StandardCharsets.UTF_8; + private Set<String> schemaNames; + private Collection<Table> tables; + private boolean passwords; + + // true if we're generating the INSERT..VALUES statements for row values + private boolean data; + private boolean settings; + + // true if we're generating the DROP statements + private boolean drop; + private boolean simple; + private LocalResult result; + private String lineSeparatorString; + private byte[] lineSeparator; + private byte[] buffer; + private boolean tempLobTableCreated; + private int nextLobId; + private int lobBlockSize = Constants.IO_BUFFER_SIZE; + + public ScriptCommand(Session session) { + super(session); + } + + @Override + public boolean isQuery() { + return true; + } + + // TODO lock all tables for 'script' command + + public void setSchemaNames(Set<String> schemaNames) { + this.schemaNames = schemaNames; + } + + public void setTables(Collection<Table>
    tables) { + this.tables = tables; + } + + public void setData(boolean data) { + this.data = data; + } + + public void setPasswords(boolean passwords) { + this.passwords = passwords; + } + + public void setSettings(boolean settings) { + this.settings = settings; + } + + public void setLobBlockSize(long blockSize) { + this.lobBlockSize = MathUtils.convertLongToInt(blockSize); + } + + public void setDrop(boolean drop) { + this.drop = drop; + } + + @Override + public ResultInterface queryMeta() { + LocalResult r = createResult(); + r.done(); + return r; + } + + private LocalResult createResult() { + Expression[] expressions = { new ExpressionColumn( + session.getDatabase(), new Column("SCRIPT", Value.STRING)) }; + return new LocalResult(session, expressions, 1); + } + + @Override + public ResultInterface query(int maxrows) { + session.getUser().checkAdmin(); + reset(); + Database db = session.getDatabase(); + if (schemaNames != null) { + for (String schemaName : schemaNames) { + Schema schema = db.findSchema(schemaName); + if (schema == null) { + throw DbException.get(ErrorCode.SCHEMA_NOT_FOUND_1, + schemaName); + } + } + } + try { + result = createResult(); + deleteStore(); + openOutput(); + if (out != null) { + buffer = new byte[Constants.IO_BUFFER_SIZE]; + } + if (settings) { + for (Setting setting : db.getAllSettings()) { + if (setting.getName().equals(SetTypes.getTypeName( + SetTypes.CREATE_BUILD))) { + // don't add CREATE_BUILD to the script + // (it is only set when creating the database) + continue; + } + add(setting.getCreateSQL(), false); + } + } + if (out != null) { + add("", true); + } + for (User user : db.getAllUsers()) { + add(user.getCreateSQL(passwords), false); + } + for (Role role : db.getAllRoles()) { + add(role.getCreateSQL(true), false); + } + for (Schema schema : db.getAllSchemas()) { + if (excludeSchema(schema)) { + continue; + } + add(schema.getCreateSQL(), false); + } + for (UserDataType datatype : db.getAllUserDataTypes()) { + if (drop) { 
+ add(datatype.getDropSQL(), false); + } + add(datatype.getCreateSQL(), false); + } + for (SchemaObject obj : db.getAllSchemaObjects( + DbObject.CONSTANT)) { + if (excludeSchema(obj.getSchema())) { + continue; + } + Constant constant = (Constant) obj; + add(constant.getCreateSQL(), false); + } + + final ArrayList
<Table> tables = db.getAllTablesAndViews(false); + // sort by id, so that views are after tables and views on views + // after the base views + Collections.sort(tables, new Comparator<Table>
    () { + @Override + public int compare(Table t1, Table t2) { + return t1.getId() - t2.getId(); + } + }); + + // Generate the DROP XXX ... IF EXISTS + for (Table table : tables) { + if (excludeSchema(table.getSchema())) { + continue; + } + if (excludeTable(table)) { + continue; + } + if (table.isHidden()) { + continue; + } + table.lock(session, false, false); + String sql = table.getCreateSQL(); + if (sql == null) { + // null for metadata tables + continue; + } + if (drop) { + add(table.getDropSQL(), false); + } + } + for (SchemaObject obj : db.getAllSchemaObjects( + DbObject.FUNCTION_ALIAS)) { + if (excludeSchema(obj.getSchema())) { + continue; + } + if (drop) { + add(obj.getDropSQL(), false); + } + add(obj.getCreateSQL(), false); + } + for (UserAggregate agg : db.getAllAggregates()) { + if (drop) { + add(agg.getDropSQL(), false); + } + add(agg.getCreateSQL(), false); + } + for (SchemaObject obj : db.getAllSchemaObjects( + DbObject.SEQUENCE)) { + if (excludeSchema(obj.getSchema())) { + continue; + } + Sequence sequence = (Sequence) obj; + if (drop) { + add(sequence.getDropSQL(), false); + } + add(sequence.getCreateSQL(), false); + } + + // Generate CREATE TABLE and INSERT...VALUES + int count = 0; + for (Table table : tables) { + if (excludeSchema(table.getSchema())) { + continue; + } + if (excludeTable(table)) { + continue; + } + if (table.isHidden()) { + continue; + } + table.lock(session, false, false); + String createTableSql = table.getCreateSQL(); + if (createTableSql == null) { + // null for metadata tables + continue; + } + final TableType tableType = table.getTableType(); + add(createTableSql, false); + final ArrayList constraints = table.getConstraints(); + if (constraints != null) { + for (Constraint constraint : constraints) { + if (Constraint.Type.PRIMARY_KEY == constraint.getConstraintType()) { + add(constraint.getCreateSQLWithoutIndexes(), false); + } + } + } + if (TableType.TABLE == tableType) { + if (table.canGetRowCount()) { + String rowcount = 
"-- " + + table.getRowCountApproximation() + + " +/- SELECT COUNT(*) FROM " + table.getSQL(); + add(rowcount, false); + } + if (data) { + count = generateInsertValues(count, table); + } + } + final ArrayList indexes = table.getIndexes(); + for (int j = 0; indexes != null && j < indexes.size(); j++) { + Index index = indexes.get(j); + if (!index.getIndexType().getBelongsToConstraint()) { + add(index.getCreateSQL(), false); + } + } + } + if (tempLobTableCreated) { + add("DROP TABLE IF EXISTS SYSTEM_LOB_STREAM", true); + add("CALL SYSTEM_COMBINE_BLOB(-1)", true); + add("DROP ALIAS IF EXISTS SYSTEM_COMBINE_CLOB", true); + add("DROP ALIAS IF EXISTS SYSTEM_COMBINE_BLOB", true); + tempLobTableCreated = false; + } + // Generate CREATE CONSTRAINT ... + final ArrayList constraints = db.getAllSchemaObjects( + DbObject.CONSTRAINT); + Collections.sort(constraints, new Comparator() { + @Override + public int compare(SchemaObject c1, SchemaObject c2) { + return ((Constraint) c1).compareTo((Constraint) c2); + } + }); + for (SchemaObject obj : constraints) { + if (excludeSchema(obj.getSchema())) { + continue; + } + Constraint constraint = (Constraint) obj; + if (excludeTable(constraint.getTable())) { + continue; + } + if (constraint.getTable().isHidden()) { + continue; + } + if (Constraint.Type.PRIMARY_KEY != constraint.getConstraintType()) { + add(constraint.getCreateSQLWithoutIndexes(), false); + } + } + // Generate CREATE TRIGGER ... + for (SchemaObject obj : db.getAllSchemaObjects(DbObject.TRIGGER)) { + if (excludeSchema(obj.getSchema())) { + continue; + } + TriggerObject trigger = (TriggerObject) obj; + if (excludeTable(trigger.getTable())) { + continue; + } + add(trigger.getCreateSQL(), false); + } + // Generate GRANT ... 
+ for (Right right : db.getAllRights()) { + DbObject object = right.getGrantedObject(); + if (object != null) { + if (object instanceof Schema) { + if (excludeSchema((Schema) object)) { + continue; + } + } else if (object instanceof Table) { + Table table = (Table) object; + if (excludeSchema(table.getSchema())) { + continue; + } + if (excludeTable(table)) { + continue; + } + } + } + add(right.getCreateSQL(), false); + } + // Generate COMMENT ON ... + for (Comment comment : db.getAllComments()) { + add(comment.getCreateSQL(), false); + } + if (out != null) { + out.close(); + } + } catch (IOException e) { + throw DbException.convertIOException(e, getFileName()); + } finally { + closeIO(); + } + result.done(); + LocalResult r = result; + reset(); + return r; + } + + private int generateInsertValues(int count, Table table) throws IOException { + PlanItem plan = table.getBestPlanItem(session, null, null, -1, null, null); + Index index = plan.getIndex(); + Cursor cursor = index.find(session, null, null); + Column[] columns = table.getColumns(); + StatementBuilder buff = new StatementBuilder("INSERT INTO "); + buff.append(table.getSQL()).append('('); + for (Column col : columns) { + buff.appendExceptFirst(", "); + buff.append(Parser.quoteIdentifier(col.getName())); + } + buff.append(") VALUES"); + if (!simple) { + buff.append('\n'); + } + buff.append('('); + String ins = buff.toString(); + buff = null; + while (cursor.next()) { + Row row = cursor.get(); + if (buff == null) { + buff = new StatementBuilder(ins); + } else { + buff.append(",\n("); + } + for (int j = 0; j < row.getColumnCount(); j++) { + if (j > 0) { + buff.append(", "); + } + Value v = row.getValue(j); + if (v.getPrecision() > lobBlockSize) { + int id; + if (v.getType() == Value.CLOB) { + id = writeLobStream(v); + buff.append("SYSTEM_COMBINE_CLOB(").append(id).append(')'); + } else if (v.getType() == Value.BLOB) { + id = writeLobStream(v); + buff.append("SYSTEM_COMBINE_BLOB(").append(id).append(')'); + } 
else { + buff.append(v.getSQL()); + } + } else { + buff.append(v.getSQL()); + } + } + buff.append(')'); + count++; + if ((count & 127) == 0) { + checkCanceled(); + } + if (simple || buff.length() > Constants.IO_BUFFER_SIZE) { + add(buff.toString(), true); + buff = null; + } + } + if (buff != null) { + add(buff.toString(), true); + } + return count; + } + + private int writeLobStream(Value v) throws IOException { + if (!tempLobTableCreated) { + add("CREATE TABLE IF NOT EXISTS SYSTEM_LOB_STREAM" + + "(ID INT NOT NULL, PART INT NOT NULL, " + + "CDATA VARCHAR, BDATA BINARY)", + true); + add("CREATE PRIMARY KEY SYSTEM_LOB_STREAM_PRIMARY_KEY " + + "ON SYSTEM_LOB_STREAM(ID, PART)", true); + add("CREATE ALIAS IF NOT EXISTS " + "SYSTEM_COMBINE_CLOB FOR \"" + + this.getClass().getName() + ".combineClob\"", true); + add("CREATE ALIAS IF NOT EXISTS " + "SYSTEM_COMBINE_BLOB FOR \"" + + this.getClass().getName() + ".combineBlob\"", true); + tempLobTableCreated = true; + } + int id = nextLobId++; + switch (v.getType()) { + case Value.BLOB: { + byte[] bytes = new byte[lobBlockSize]; + try (InputStream input = v.getInputStream()) { + for (int i = 0;; i++) { + StringBuilder buff = new StringBuilder(lobBlockSize * 2); + buff.append("INSERT INTO SYSTEM_LOB_STREAM VALUES(").append(id) + .append(", ").append(i).append(", NULL, '"); + int len = IOUtils.readFully(input, bytes, lobBlockSize); + if (len <= 0) { + break; + } + buff.append(StringUtils.convertBytesToHex(bytes, len)).append("')"); + String sql = buff.toString(); + add(sql, true); + } + } + break; + } + case Value.CLOB: { + char[] chars = new char[lobBlockSize]; + + try (Reader reader = v.getReader()) { + for (int i = 0;; i++) { + StringBuilder buff = new StringBuilder(lobBlockSize * 2); + buff.append("INSERT INTO SYSTEM_LOB_STREAM VALUES(").append(id).append(", ").append(i) + .append(", "); + int len = IOUtils.readFully(reader, chars, lobBlockSize); + if (len == 0) { + break; + } + buff.append(StringUtils.quoteStringSQL(new 
String(chars, 0, len))). + append(", NULL)"); + String sql = buff.toString(); + add(sql, true); + } + } + break; + } + default: + DbException.throwInternalError("type:" + v.getType()); + } + return id; + } + + /** + * Combine a BLOB. + * This method is called from the script. + * When calling with id -1, the file is deleted. + * + * @param conn a connection + * @param id the lob id + * @return a stream for the combined data + */ + public static InputStream combineBlob(Connection conn, int id) + throws SQLException { + if (id < 0) { + return null; + } + final ResultSet rs = getLobStream(conn, "BDATA", id); + return new InputStream() { + private InputStream current; + private boolean closed; + @Override + public int read() throws IOException { + while (true) { + try { + if (current == null) { + if (closed) { + return -1; + } + if (!rs.next()) { + close(); + return -1; + } + current = rs.getBinaryStream(1); + current = new BufferedInputStream(current); + } + int x = current.read(); + if (x >= 0) { + return x; + } + current = null; + } catch (SQLException e) { + throw DbException.convertToIOException(e); + } + } + } + @Override + public void close() throws IOException { + if (closed) { + return; + } + closed = true; + try { + rs.close(); + } catch (SQLException e) { + throw DbException.convertToIOException(e); + } + } + }; + } + + /** + * Combine a CLOB. + * This method is called from the script. 
+ * + * @param conn a connection + * @param id the lob id + * @return a reader for the combined data + */ + public static Reader combineClob(Connection conn, int id) throws SQLException { + if (id < 0) { + return null; + } + final ResultSet rs = getLobStream(conn, "CDATA", id); + return new Reader() { + private Reader current; + private boolean closed; + @Override + public int read() throws IOException { + while (true) { + try { + if (current == null) { + if (closed) { + return -1; + } + if (!rs.next()) { + close(); + return -1; + } + current = rs.getCharacterStream(1); + current = new BufferedReader(current); + } + int x = current.read(); + if (x >= 0) { + return x; + } + current = null; + } catch (SQLException e) { + throw DbException.convertToIOException(e); + } + } + } + @Override + public void close() throws IOException { + if (closed) { + return; + } + closed = true; + try { + rs.close(); + } catch (SQLException e) { + throw DbException.convertToIOException(e); + } + } + @Override + public int read(char[] buffer, int off, int len) throws IOException { + if (len == 0) { + return 0; + } + int c = read(); + if (c == -1) { + return -1; + } + buffer[off] = (char) c; + int i = 1; + for (; i < len; i++) { + c = read(); + if (c == -1) { + break; + } + buffer[off + i] = (char) c; + } + return i; + } + }; + } + + private static ResultSet getLobStream(Connection conn, String column, int id) + throws SQLException { + PreparedStatement prep = conn.prepareStatement("SELECT " + column + + " FROM SYSTEM_LOB_STREAM WHERE ID=? 
ORDER BY PART"); + prep.setInt(1, id); + return prep.executeQuery(); + } + + private void reset() { + result = null; + buffer = null; + lineSeparatorString = SysProperties.LINE_SEPARATOR; + lineSeparator = lineSeparatorString.getBytes(charset); + } + + private boolean excludeSchema(Schema schema) { + if (schemaNames != null && !schemaNames.contains(schema.getName())) { + return true; + } + if (tables != null) { + // if filtering on specific tables, only include those schemas + for (Table table : schema.getAllTablesAndViews()) { + if (tables.contains(table)) { + return false; + } + } + return true; + } + return false; + } + + private boolean excludeTable(Table table) { + return tables != null && !tables.contains(table); + } + + private void add(String s, boolean insert) throws IOException { + if (s == null) { + return; + } + if (lineSeparator.length > 1 || lineSeparator[0] != '\n') { + s = StringUtils.replaceAll(s, "\n", lineSeparatorString); + } + s += ";"; + if (out != null) { + byte[] buff = s.getBytes(charset); + int len = MathUtils.roundUpInt(buff.length + + lineSeparator.length, Constants.FILE_BLOCK_SIZE); + buffer = Utils.copy(buff, buffer); + + if (len > buffer.length) { + buffer = new byte[len]; + } + System.arraycopy(buff, 0, buffer, 0, buff.length); + for (int i = buff.length; i < len - lineSeparator.length; i++) { + buffer[i] = ' '; + } + for (int j = 0, i = len - lineSeparator.length; i < len; i++, j++) { + buffer[i] = lineSeparator[j]; + } + out.write(buffer, 0, len); + if (!insert) { + Value[] row = { ValueString.get(s) }; + result.addRow(row); + } + } else { + Value[] row = { ValueString.get(s) }; + result.addRow(row); + } + } + + public void setSimple(boolean simple) { + this.simple = simple; + } + + public void setCharset(Charset charset) { + this.charset = charset; + } + + @Override + public int getType() { + return CommandInterface.SCRIPT; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Select.java 
b/modules/h2/src/main/java/org/h2/command/dml/Select.java new file mode 100644 index 0000000000000..fad82be0dbf5b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Select.java @@ -0,0 +1,1531 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.command.CommandInterface; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.expression.*; +import org.h2.index.Cursor; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.result.*; +import org.h2.table.*; +import org.h2.util.*; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueNull; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; + +/** + * This class represents a simple SELECT statement. + * + * For each select statement, + * visibleColumnCount <= distinctColumnCount <= expressionCount. + * The expression list count could include ORDER BY and GROUP BY expressions + * that are not in the select list. + * + * The call sequence is init(), mapColumns() if it's a subquery, prepare(). + * + * @author Thomas Mueller + * @author Joel Turkel (Group sorted query) + */ +public class Select extends Query { + + /** + * The main (top) table filter. + */ + TableFilter topTableFilter; + + private final ArrayList filters = New.arrayList(); + private final ArrayList topFilters = New.arrayList(); + + /** + * The column list, including synthetic columns (columns not shown in the + * result). 
+ */ + ArrayList<Expression> expressions; + private Expression[] expressionArray; + private Expression having; + private Expression condition; + + /** + * The visible columns (the ones required in the result). + */ + int visibleColumnCount; + + private int distinctColumnCount; + private ArrayList<SelectOrderBy> orderList; + private ArrayList<Expression> group; + + /** + * The indexes of the group-by columns. + */ + int[] groupIndex; + + /** + * Whether a column in the expression list is part of a group-by. + */ + boolean[] groupByExpression; + + /** + * The current group-by values. + */ + HashMap<Expression, Object> currentGroup; + + private int havingIndex; + private boolean isGroupQuery, isGroupSortedQuery; + private boolean isForUpdate, isForUpdateMvcc; + private double cost; + private boolean isQuickAggregateQuery, isDistinctQuery; + private boolean isPrepared, checkInit; + private boolean sortUsingIndex; + private SortOrder sort; + + /** + * The id of the current group. + */ + int currentGroupRowId; + + public Select(Session session) { + super(session); + } + + @Override + public boolean isUnion() { + return false; + } + + /** + * Add a table to the query. + * + * @param filter the table to add + * @param isTop if the table can be the first table in the query plan + */ + public void addTableFilter(TableFilter filter, boolean isTop) { + // Oracle doesn't check on duplicate aliases + // String alias = filter.getAlias(); + // if (filterNames.contains(alias)) { + // throw Message.getSQLException( + // ErrorCode.DUPLICATE_TABLE_ALIAS, alias); + // } + // filterNames.add(alias); + filters.add(filter); + if (isTop) { + topFilters.add(filter); + } + } + + public ArrayList<TableFilter> getTopFilters() { + return topFilters; + } + + public void setExpressions(ArrayList<Expression> expressions) { + this.expressions = expressions; + } + + /** + * Called if this query contains aggregate functions.
+ */ + public void setGroupQuery() { + isGroupQuery = true; + } + + public void setGroupBy(ArrayList<Expression> group) { + this.group = group; + } + + public ArrayList<Expression> getGroupBy() { + return group; + } + + public HashMap<Expression, Object> getCurrentGroup() { + return currentGroup; + } + + public int getCurrentGroupRowId() { + return currentGroupRowId; + } + + @Override + public void setOrder(ArrayList<SelectOrderBy> order) { + orderList = order; + } + + @Override + public boolean hasOrder() { + return orderList != null || sort != null; + } + + /** + * Add a condition to the list of conditions. + * + * @param cond the condition to add + */ + public void addCondition(Expression cond) { + if (condition == null) { + condition = cond; + } else { + condition = new ConditionAndOr(ConditionAndOr.AND, cond, condition); + } + } + + public Expression getCondition() { + return condition; + } + + private LazyResult queryGroupSorted(int columnCount, ResultTarget result) { + LazyResultGroupSorted lazyResult = new LazyResultGroupSorted(expressionArray, columnCount); + if (result == null) { + return lazyResult; + } + while (lazyResult.next()) { + result.addRow(lazyResult.currentRow()); + } + return null; + } + + /** + * Create a row with the current values, for queries with group-sort.
+     *
+     * @param keyValues the key values
+     * @param columnCount the number of columns
+     * @return the row
+     */
+    Value[] createGroupSortedRow(Value[] keyValues, int columnCount) {
+        Value[] row = new Value[columnCount];
+        for (int j = 0; groupIndex != null && j < groupIndex.length; j++) {
+            row[groupIndex[j]] = keyValues[j];
+        }
+        for (int j = 0; j < columnCount; j++) {
+            if (groupByExpression != null && groupByExpression[j]) {
+                continue;
+            }
+            Expression expr = expressions.get(j);
+            row[j] = expr.getValue(session);
+        }
+        if (isHavingNullOrFalse(row)) {
+            return null;
+        }
+        row = keepOnlyDistinct(row, columnCount);
+        return row;
+    }
+
+    private Value[] keepOnlyDistinct(Value[] row, int columnCount) {
+        if (columnCount == distinctColumnCount) {
+            return row;
+        }
+        // remove columns so that 'distinct' can filter duplicate rows
+        return Arrays.copyOf(row, distinctColumnCount);
+    }
+
+    private boolean isHavingNullOrFalse(Value[] row) {
+        return havingIndex >= 0 && !row[havingIndex].getBoolean();
+    }
+
+    private Index getGroupSortedIndex() {
+        if (groupIndex == null || groupByExpression == null) {
+            return null;
+        }
+        ArrayList<Index> indexes = topTableFilter.getTable().getIndexes();
+        if (indexes != null) {
+            for (Index index : indexes) {
+                if (index.getIndexType().isScan()) {
+                    continue;
+                }
+                if (index.getIndexType().isHash()) {
+                    // does not allow scanning entries
+                    continue;
+                }
+                if (isGroupSortedIndex(topTableFilter, index)) {
+                    return index;
+                }
+            }
+        }
+        return null;
+    }
+
+    private boolean isGroupSortedIndex(TableFilter tableFilter, Index index) {
+        // check that all the GROUP BY expressions are part of the index
+        Column[] indexColumns = index.getColumns();
+        // also check that the first columns in the index are grouped
+        boolean[] grouped = new boolean[indexColumns.length];
+        outerLoop:
+        for (int i = 0, size = expressions.size(); i < size; i++) {
+            if (!groupByExpression[i]) {
+                continue;
+            }
+            Expression expr = expressions.get(i).getNonAliasExpression();
+            if
(!(expr instanceof ExpressionColumn)) {
+                return false;
+            }
+            ExpressionColumn exprCol = (ExpressionColumn) expr;
+            for (int j = 0; j < indexColumns.length; ++j) {
+                if (tableFilter == exprCol.getTableFilter()) {
+                    if (indexColumns[j].equals(exprCol.getColumn())) {
+                        grouped[j] = true;
+                        continue outerLoop;
+                    }
+                }
+            }
+            // We didn't find a matching index column
+            // for one group by expression
+            return false;
+        }
+        // check that the first columns in the index are grouped
+        // good: index(a, b, c); group by b, a
+        // bad: index(a, b, c); group by a, c
+        for (int i = 1; i < grouped.length; i++) {
+            if (!grouped[i - 1] && grouped[i]) {
+                return false;
+            }
+        }
+        return true;
+    }
+
+    private int getGroupByExpressionCount() {
+        if (groupByExpression == null) {
+            return 0;
+        }
+        int count = 0;
+        for (boolean b : groupByExpression) {
+            if (b) {
+                ++count;
+            }
+        }
+        return count;
+    }
+
+    boolean isConditionMet() {
+        return condition == null || condition.getBooleanValue(session);
+    }
+
+    private void queryGroup(int columnCount, LocalResult result) {
+        ValueHashMap<HashMap<Expression, Object>> groups =
+                ValueHashMap.newInstance();
+        int rowNumber = 0;
+        setCurrentRowNumber(0);
+        currentGroup = null;
+        ValueArray defaultGroup = ValueArray.get(new Value[0]);
+        int sampleSize = getSampleSizeValue(session);
+        while (topTableFilter.next()) {
+            setCurrentRowNumber(rowNumber + 1);
+            if (isConditionMet()) {
+                Value key;
+                rowNumber++;
+                if (groupIndex == null) {
+                    key = defaultGroup;
+                } else {
+                    Value[] keyValues = new Value[groupIndex.length];
+                    // update group
+                    for (int i = 0; i < groupIndex.length; i++) {
+                        int idx = groupIndex[i];
+                        Expression expr = expressions.get(idx);
+                        keyValues[i] = expr.getValue(session);
+                    }
+                    key = ValueArray.get(keyValues);
+                }
+                HashMap<Expression, Object> values = groups.get(key);
+                if (values == null) {
+                    values = new HashMap<>();
+                    groups.put(key, values);
+                }
+                currentGroup = values;
+                currentGroupRowId++;
+                for (int i = 0; i < columnCount; i++) {
+                    if (groupByExpression == null ||
!groupByExpression[i]) {
+                        Expression expr = expressions.get(i);
+                        expr.updateAggregate(session);
+                    }
+                }
+                if (sampleSize > 0 && rowNumber >= sampleSize) {
+                    break;
+                }
+            }
+        }
+        if (groupIndex == null && groups.size() == 0) {
+            groups.put(defaultGroup, new HashMap<Expression, Object>());
+        }
+        ArrayList<Value> keys = groups.keys();
+        for (Value v : keys) {
+            ValueArray key = (ValueArray) v;
+            currentGroup = groups.get(key);
+            Value[] keyValues = key.getList();
+            Value[] row = new Value[columnCount];
+            for (int j = 0; groupIndex != null && j < groupIndex.length; j++) {
+                row[groupIndex[j]] = keyValues[j];
+            }
+            for (int j = 0; j < columnCount; j++) {
+                if (groupByExpression != null && groupByExpression[j]) {
+                    continue;
+                }
+                Expression expr = expressions.get(j);
+                row[j] = expr.getValue(session);
+            }
+            if (isHavingNullOrFalse(row)) {
+                continue;
+            }
+            row = keepOnlyDistinct(row, columnCount);
+            result.addRow(row);
+        }
+    }
+
+    /**
+     * Get the index that matches the ORDER BY list, if one exists. This is to
+     * avoid running a separate ORDER BY if an index can be used.
This is
+     * especially important for large result sets, if only the first few rows are
+     * important (LIMIT is used).
+     *
+     * @return the index if one is found
+     */
+    private Index getSortIndex() {
+        if (sort == null) {
+            return null;
+        }
+        ArrayList<Column> sortColumns = New.arrayList();
+        for (int idx : sort.getQueryColumnIndexes()) {
+            if (idx < 0 || idx >= expressions.size()) {
+                throw DbException.getInvalidValueException("ORDER BY", idx + 1);
+            }
+            Expression expr = expressions.get(idx);
+            expr = expr.getNonAliasExpression();
+            if (expr.isConstant()) {
+                continue;
+            }
+            if (!(expr instanceof ExpressionColumn)) {
+                return null;
+            }
+            ExpressionColumn exprCol = (ExpressionColumn) expr;
+            if (exprCol.getTableFilter() != topTableFilter) {
+                return null;
+            }
+            sortColumns.add(exprCol.getColumn());
+        }
+        Column[] sortCols = sortColumns.toArray(new Column[0]);
+        if (sortCols.length == 0) {
+            // sort just on constants - can use scan index
+            return topTableFilter.getTable().getScanIndex(session);
+        }
+        ArrayList<Index> list = topTableFilter.getTable().getIndexes();
+        if (list != null) {
+            int[] sortTypes = sort.getSortTypesWithNullPosition();
+            for (Index index : list) {
+                if (index.getCreateSQL() == null) {
+                    // can't use the scan index
+                    continue;
+                }
+                if (index.getIndexType().isHash()) {
+                    continue;
+                }
+                IndexColumn[] indexCols = index.getIndexColumns();
+                if (indexCols.length < sortCols.length) {
+                    continue;
+                }
+                boolean ok = true;
+                for (int j = 0; j < sortCols.length; j++) {
+                    // the index and the sort order must start
+                    // with the exact same columns
+                    IndexColumn idxCol = indexCols[j];
+                    Column sortCol = sortCols[j];
+                    if (idxCol.column != sortCol) {
+                        ok = false;
+                        break;
+                    }
+                    if (SortOrder.addExplicitNullPosition(idxCol.sortType) != sortTypes[j]) {
+                        ok = false;
+                        break;
+                    }
+                }
+                if (ok) {
+                    return index;
+                }
+            }
+        }
+        if (sortCols.length == 1 && sortCols[0].getColumnId() == -1) {
+            // special case: order by _ROWID_
+            Index index =
topTableFilter.getTable().getScanIndex(session);
+            if (index.isRowIdIndex()) {
+                return index;
+            }
+        }
+        return null;
+    }
+
+    private void queryDistinct(ResultTarget result, long limitRows) {
+        // limitRows must be long, otherwise we get an int overflow
+        // if limitRows is at or near Integer.MAX_VALUE
+        // limitRows is never 0 here
+        if (limitRows > 0 && offsetExpr != null) {
+            int offset = offsetExpr.getValue(session).getInt();
+            if (offset > 0) {
+                limitRows += offset;
+            }
+        }
+        int rowNumber = 0;
+        setCurrentRowNumber(0);
+        Index index = topTableFilter.getIndex();
+        SearchRow first = null;
+        int columnIndex = index.getColumns()[0].getColumnId();
+        int sampleSize = getSampleSizeValue(session);
+        while (true) {
+            setCurrentRowNumber(rowNumber + 1);
+            Cursor cursor = index.findNext(session, first, null);
+            if (!cursor.next()) {
+                break;
+            }
+            SearchRow found = cursor.getSearchRow();
+            Value value = found.getValue(columnIndex);
+            if (first == null) {
+                first = topTableFilter.getTable().getTemplateSimpleRow(true);
+            }
+            first.setValue(columnIndex, value);
+            Value[] row = { value };
+            result.addRow(row);
+            rowNumber++;
+            if ((sort == null || sortUsingIndex) && limitRows > 0 &&
+                    rowNumber >= limitRows) {
+                break;
+            }
+            if (sampleSize > 0 && rowNumber >= sampleSize) {
+                break;
+            }
+        }
+    }
+
+    private LazyResult queryFlat(int columnCount, ResultTarget result, long limitRows) {
+        // limitRows must be long, otherwise we get an int overflow
+        // if limitRows is at or near Integer.MAX_VALUE
+        // limitRows is never 0 here
+        if (limitRows > 0 && offsetExpr != null) {
+            int offset = offsetExpr.getValue(session).getInt();
+            if (offset > 0) {
+                limitRows += offset;
+            }
+        }
+        ArrayList<Row> forUpdateRows = null;
+        if (isForUpdateMvcc) {
+            forUpdateRows = New.arrayList();
+        }
+        int sampleSize = getSampleSizeValue(session);
+        LazyResultQueryFlat lazyResult = new LazyResultQueryFlat(expressionArray,
+                sampleSize, columnCount);
+        if (result == null) {
+            return lazyResult;
+        }
+        while
(lazyResult.next()) { + if (isForUpdateMvcc) { + topTableFilter.lockRowAdd(forUpdateRows); + } + result.addRow(lazyResult.currentRow()); + if ((sort == null || sortUsingIndex) && limitRows > 0 && + result.getRowCount() >= limitRows) { + break; + } + } + if (isForUpdateMvcc) { + topTableFilter.lockRows(forUpdateRows); + } + return null; + } + + private void queryQuick(int columnCount, ResultTarget result) { + Value[] row = new Value[columnCount]; + for (int i = 0; i < columnCount; i++) { + Expression expr = expressions.get(i); + row[i] = expr.getValue(session); + } + result.addRow(row); + } + + @Override + public ResultInterface queryMeta() { + LocalResult result = new LocalResult(session, expressionArray, + visibleColumnCount); + result.done(); + return result; + } + + @Override + protected ResultInterface queryWithoutCache(int maxRows, ResultTarget target) { + int limitRows = maxRows == 0 ? -1 : maxRows; + if (limitExpr != null) { + Value v = limitExpr.getValue(session); + int l = v == ValueNull.INSTANCE ? 
-1 : v.getInt(); + if (limitRows < 0) { + limitRows = l; + } else if (l >= 0) { + limitRows = Math.min(l, limitRows); + } + } + boolean lazy = session.isLazyQueryExecution() && + target == null && !isForUpdate && !isQuickAggregateQuery && + limitRows != 0 && offsetExpr == null && isReadOnly(); + int columnCount = expressions.size(); + LocalResult result = null; + if (!lazy && (target == null || + !session.getDatabase().getSettings().optimizeInsertFromSelect)) { + result = createLocalResult(result); + } + if (sort != null && (!sortUsingIndex || distinct)) { + result = createLocalResult(result); + result.setSortOrder(sort); + } + if (distinct && !isDistinctQuery) { + result = createLocalResult(result); + result.setDistinct(); + } + if (randomAccessResult) { + result = createLocalResult(result); + } + if (isGroupQuery && !isGroupSortedQuery) { + result = createLocalResult(result); + } + if (!lazy && (limitRows >= 0 || offsetExpr != null)) { + result = createLocalResult(result); + } + topTableFilter.startQuery(session); + topTableFilter.reset(); + boolean exclusive = isForUpdate && !isForUpdateMvcc; + if (isForUpdateMvcc) { + if (isGroupQuery) { + throw DbException.getUnsupportedException( + "MVCC=TRUE && FOR UPDATE && GROUP"); + } else if (distinct) { + throw DbException.getUnsupportedException( + "MVCC=TRUE && FOR UPDATE && DISTINCT"); + } else if (isQuickAggregateQuery) { + throw DbException.getUnsupportedException( + "MVCC=TRUE && FOR UPDATE && AGGREGATE"); + } else if (topTableFilter.getJoin() != null) { + throw DbException.getUnsupportedException( + "MVCC=TRUE && FOR UPDATE && JOIN"); + } + } + topTableFilter.lock(session, exclusive, exclusive); + ResultTarget to = result != null ? 
result : target; + lazy &= to == null; + LazyResult lazyResult = null; + if (limitRows != 0) { + try { + if (isQuickAggregateQuery) { + queryQuick(columnCount, to); + } else if (isGroupQuery) { + if (isGroupSortedQuery) { + lazyResult = queryGroupSorted(columnCount, to); + } else { + queryGroup(columnCount, result); + } + } else if (isDistinctQuery) { + queryDistinct(to, limitRows); + } else { + lazyResult = queryFlat(columnCount, to, limitRows); + } + } finally { + if (!lazy) { + resetJoinBatchAfterQuery(); + } + } + } + assert lazy == (lazyResult != null): lazy; + if (lazyResult != null) { + if (limitRows > 0) { + lazyResult.setLimit(limitRows); + } + return lazyResult; + } + if (offsetExpr != null) { + result.setOffset(offsetExpr.getValue(session).getInt()); + } + if (limitRows >= 0) { + result.setLimit(limitRows); + } + if (result != null) { + result.done(); + if (target != null) { + while (result.next()) { + target.addRow(result.currentRow()); + } + result.close(); + return null; + } + return result; + } + return null; + } + + /** + * Reset the batch-join after the query result is closed. + */ + void resetJoinBatchAfterQuery() { + JoinBatch jb = getJoinBatch(); + if (jb != null) { + jb.reset(false); + } + } + + private LocalResult createLocalResult(LocalResult old) { + return old != null ? 
old : new LocalResult(session, expressionArray, + visibleColumnCount); + } + + private void expandColumnList() { + Database db = session.getDatabase(); + + // the expressions may change within the loop + for (int i = 0; i < expressions.size(); i++) { + Expression expr = expressions.get(i); + if (!expr.isWildcard()) { + continue; + } + String schemaName = expr.getSchemaName(); + String tableAlias = expr.getTableAlias(); + if (tableAlias == null) { + expressions.remove(i); + for (TableFilter filter : filters) { + i = expandColumnList(filter, i); + } + i--; + } else { + TableFilter filter = null; + for (TableFilter f : filters) { + if (db.equalsIdentifiers(tableAlias, f.getTableAlias())) { + if (schemaName == null || + db.equalsIdentifiers(schemaName, + f.getSchemaName())) { + filter = f; + break; + } + } + } + if (filter == null) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, + tableAlias); + } + expressions.remove(i); + i = expandColumnList(filter, i); + i--; + } + } + } + + private int expandColumnList(TableFilter filter, int index) { + Table t = filter.getTable(); + String alias = filter.getTableAlias(); + Column[] columns = t.getColumns(); + for (Column c : columns) { + if (!c.getVisible()) { + continue; + } + if (filter.isNaturalJoinColumn(c)) { + continue; + } + String name = filter.getDerivedColumnName(c); + ExpressionColumn ec = new ExpressionColumn( + session.getDatabase(), null, alias, name != null ? 
name : c.getName());
+            expressions.add(index++, ec);
+        }
+        return index;
+    }
+
+    @Override
+    public void init() {
+        if (SysProperties.CHECK && checkInit) {
+            DbException.throwInternalError();
+        }
+        expandColumnList();
+        visibleColumnCount = expressions.size();
+        ArrayList<String> expressionSQL;
+        if (orderList != null || group != null) {
+            expressionSQL = New.arrayList();
+            for (int i = 0; i < visibleColumnCount; i++) {
+                Expression expr = expressions.get(i);
+                expr = expr.getNonAliasExpression();
+                String sql = expr.getSQL();
+                expressionSQL.add(sql);
+            }
+        } else {
+            expressionSQL = null;
+        }
+        if (orderList != null) {
+            initOrder(session, expressions, expressionSQL, orderList,
+                    visibleColumnCount, distinct, filters);
+        }
+        distinctColumnCount = expressions.size();
+        if (having != null) {
+            expressions.add(having);
+            havingIndex = expressions.size() - 1;
+            having = null;
+        } else {
+            havingIndex = -1;
+        }
+
+        Database db = session.getDatabase();
+
+        // first the select list (visible columns),
+        // then 'ORDER BY' expressions,
+        // then 'HAVING' expressions,
+        // and 'GROUP BY' expressions at the end
+        if (group != null) {
+            int size = group.size();
+            int expSize = expressionSQL.size();
+            groupIndex = new int[size];
+            for (int i = 0; i < size; i++) {
+                Expression expr = group.get(i);
+                String sql = expr.getSQL();
+                int found = -1;
+                for (int j = 0; j < expSize; j++) {
+                    String s2 = expressionSQL.get(j);
+                    if (db.equalsIdentifiers(s2, sql)) {
+                        found = j;
+                        break;
+                    }
+                }
+                if (found < 0) {
+                    // special case: GROUP BY a column alias
+                    for (int j = 0; j < expSize; j++) {
+                        Expression e = expressions.get(j);
+                        if (db.equalsIdentifiers(sql, e.getAlias())) {
+                            found = j;
+                            break;
+                        }
+                        sql = expr.getAlias();
+                        if (db.equalsIdentifiers(sql, e.getAlias())) {
+                            found = j;
+                            break;
+                        }
+                    }
+                }
+                if (found < 0) {
+                    int index = expressions.size();
+                    groupIndex[i] = index;
+                    expressions.add(expr);
+                } else {
+                    groupIndex[i] = found;
+                }
+            }
+            groupByExpression = new
boolean[expressions.size()]; + for (int gi : groupIndex) { + groupByExpression[gi] = true; + } + group = null; + } + // map columns in select list and condition + for (TableFilter f : filters) { + mapColumns(f, 0); + } + if (havingIndex >= 0) { + Expression expr = expressions.get(havingIndex); + SelectListColumnResolver res = new SelectListColumnResolver(this); + expr.mapColumns(res, 0); + } + checkInit = true; + } + + @Override + public void prepare() { + if (isPrepared) { + // sometimes a subquery is prepared twice (CREATE TABLE AS SELECT) + return; + } + if (SysProperties.CHECK && !checkInit) { + DbException.throwInternalError("not initialized"); + } + if (orderList != null) { + sort = prepareOrder(orderList, expressions.size()); + orderList = null; + } + ColumnNamer columnNamer = new ColumnNamer(session); + for (int i = 0; i < expressions.size(); i++) { + Expression e = expressions.get(i); + String proposedColumnName = e.getAlias(); + String columnName = columnNamer.getColumnName(e, i, proposedColumnName); + // if the name changed, create an alias + if (!columnName.equals(proposedColumnName)) { + e = new Alias(e, columnName, true); + } + expressions.set(i, e.optimize(session)); + } + if (condition != null) { + condition = condition.optimize(session); + for (TableFilter f : filters) { + // outer joins: must not add index conditions such as + // "c is null" - example: + // create table parent(p int primary key) as select 1; + // create table child(c int primary key, pc int); + // insert into child values(2, 1); + // select p, c from parent + // left outer join child on p = pc where c is null; + if (!f.isJoinOuter() && !f.isJoinOuterIndirect()) { + condition.createIndexConditions(session, f); + } + } + } + if (isGroupQuery && groupIndex == null && + havingIndex < 0 && filters.size() == 1) { + if (condition == null) { + Table t = filters.get(0).getTable(); + ExpressionVisitor optimizable = ExpressionVisitor. 
+ getOptimizableVisitor(t); + isQuickAggregateQuery = isEverything(optimizable); + } + } + cost = preparePlan(session.isParsingCreateView()); + if (distinct && session.getDatabase().getSettings().optimizeDistinct && + !isGroupQuery && filters.size() == 1 && + expressions.size() == 1 && condition == null) { + Expression expr = expressions.get(0); + expr = expr.getNonAliasExpression(); + if (expr instanceof ExpressionColumn) { + Column column = ((ExpressionColumn) expr).getColumn(); + int selectivity = column.getSelectivity(); + Index columnIndex = topTableFilter.getTable(). + getIndexForColumn(column, false, true); + if (columnIndex != null && + selectivity != Constants.SELECTIVITY_DEFAULT && + selectivity < 20) { + // the first column must be ascending + boolean ascending = columnIndex. + getIndexColumns()[0].sortType == SortOrder.ASCENDING; + Index current = topTableFilter.getIndex(); + // if another index is faster + if (columnIndex.canFindNext() && ascending && + (current == null || + current.getIndexType().isScan() || + columnIndex == current)) { + IndexType type = columnIndex.getIndexType(); + // hash indexes don't work, and unique single column + // indexes don't work + if (!type.isHash() && (!type.isUnique() || + columnIndex.getColumns().length > 1)) { + topTableFilter.setIndex(columnIndex); + isDistinctQuery = true; + } + } + } + } + } + if (sort != null && !isQuickAggregateQuery && !isGroupQuery) { + Index index = getSortIndex(); + Index current = topTableFilter.getIndex(); + if (index != null && current != null) { + if (current.getIndexType().isScan() || current == index) { + topTableFilter.setIndex(index); + if (!topTableFilter.hasInComparisons()) { + // in(select ...) 
and in(1,2,3) may return the key in
+                        // another order
+                        sortUsingIndex = true;
+                    }
+                } else if (index.getIndexColumns() != null
+                        && index.getIndexColumns().length >= current
+                                .getIndexColumns().length) {
+                    IndexColumn[] sortColumns = index.getIndexColumns();
+                    IndexColumn[] currentColumns = current.getIndexColumns();
+                    boolean swapIndex = false;
+                    for (int i = 0; i < currentColumns.length; i++) {
+                        if (sortColumns[i].column != currentColumns[i].column) {
+                            swapIndex = false;
+                            break;
+                        }
+                        if (sortColumns[i].sortType != currentColumns[i].sortType) {
+                            swapIndex = true;
+                        }
+                    }
+                    if (swapIndex) {
+                        topTableFilter.setIndex(index);
+                        sortUsingIndex = true;
+                    }
+                }
+            }
+        }
+        if (!isQuickAggregateQuery && isGroupQuery &&
+                getGroupByExpressionCount() > 0) {
+            Index index = getGroupSortedIndex();
+            Index current = topTableFilter.getIndex();
+            if (index != null && current != null && (current.getIndexType().isScan() ||
+                    current == index)) {
+                topTableFilter.setIndex(index);
+                isGroupSortedQuery = true;
+            }
+        }
+        expressionArray = expressions.toArray(new Expression[0]);
+        isPrepared = true;
+    }
+
+    @Override
+    public void prepareJoinBatch() {
+        ArrayList<TableFilter> list = New.arrayList();
+        TableFilter f = getTopTableFilter();
+        do {
+            if (f.getNestedJoin() != null) {
+                // we do not support batching with nested joins
+                return;
+            }
+            list.add(f);
+            f = f.getJoin();
+        } while (f != null);
+        TableFilter[] fs = list.toArray(new TableFilter[0]);
+        // prepare join batch
+        JoinBatch jb = null;
+        for (int i = fs.length - 1; i >= 0; i--) {
+            jb = fs[i].prepareJoinBatch(jb, fs, i);
+        }
+    }
+
+    public JoinBatch getJoinBatch() {
+        return getTopTableFilter().getJoinBatch();
+    }
+
+    @Override
+    public double getCost() {
+        return cost;
+    }
+
+    @Override
+    public HashSet
<Table> getTables() {
+        HashSet<Table>
    set = new HashSet<>(); + for (TableFilter filter : filters) { + set.add(filter.getTable()); + } + return set; + } + + @Override + public void fireBeforeSelectTriggers() { + for (TableFilter filter : filters) { + filter.getTable().fire(session, Trigger.SELECT, true); + } + } + + private double preparePlan(boolean parse) { + TableFilter[] topArray = topFilters.toArray(new TableFilter[0]); + for (TableFilter t : topArray) { + t.setFullCondition(condition); + } + + Optimizer optimizer = new Optimizer(topArray, condition, session); + optimizer.optimize(parse); + topTableFilter = optimizer.getTopFilter(); + double planCost = optimizer.getCost(); + + setEvaluatableRecursive(topTableFilter); + + if (!parse) { + topTableFilter.prepare(); + } + return planCost; + } + + private void setEvaluatableRecursive(TableFilter f) { + for (; f != null; f = f.getJoin()) { + f.setEvaluatable(f, true); + if (condition != null) { + condition.setEvaluatable(f, true); + } + TableFilter n = f.getNestedJoin(); + if (n != null) { + setEvaluatableRecursive(n); + } + Expression on = f.getJoinCondition(); + if (on != null) { + if (!on.isEverything(ExpressionVisitor.EVALUATABLE_VISITOR)) { + // need to check that all added are bound to a table + on = on.optimize(session); + if (!f.isJoinOuter() && !f.isJoinOuterIndirect()) { + f.removeJoinCondition(); + addCondition(on); + } + } + } + on = f.getFilterCondition(); + if (on != null) { + if (!on.isEverything(ExpressionVisitor.EVALUATABLE_VISITOR)) { + f.removeFilterCondition(); + addCondition(on); + } + } + // this is only important for subqueries, so they know + // the result columns are evaluatable + for (Expression e : expressions) { + e.setEvaluatable(f, true); + } + } + } + + @Override + public String getPlanSQL() { + // can not use the field sqlStatement because the parameter + // indexes may be incorrect: ? 
may be in fact ?2 for a subquery + // but indexes may be set manually as well + Expression[] exprList = expressions.toArray(new Expression[0]); + StatementBuilder buff = new StatementBuilder(); + for (TableFilter f : topFilters) { + Table t = f.getTable(); + TableView tableView = t.isView() ? (TableView) t : null; + if (tableView != null && tableView.isRecursive() && tableView.isTableExpression()) { + + if (tableView.isPersistent()) { + // skip the generation of plan SQL for this already recursive persistent CTEs, + // since using a with statement will re-create the common table expression + // views. + } else { + buff.append("WITH RECURSIVE ").append(t.getName()).append('('); + buff.resetCount(); + for (Column c : t.getColumns()) { + buff.appendExceptFirst(","); + buff.append(c.getName()); + } + buff.append(") AS ").append(t.getSQL()).append("\n"); + } + } + } + buff.resetCount(); + buff.append("SELECT"); + if (distinct) { + buff.append(" DISTINCT"); + } + for (int i = 0; i < visibleColumnCount; i++) { + buff.appendExceptFirst(","); + buff.append('\n'); + buff.append(StringUtils.indent(exprList[i].getSQL(), 4, false)); + } + buff.append("\nFROM "); + TableFilter filter = topTableFilter; + if (filter != null) { + buff.resetCount(); + int i = 0; + do { + buff.appendExceptFirst("\n"); + buff.append(filter.getPlanSQL(i++ > 0)); + filter = filter.getJoin(); + } while (filter != null); + } else { + buff.resetCount(); + int i = 0; + for (TableFilter f : topFilters) { + do { + buff.appendExceptFirst("\n"); + buff.append(f.getPlanSQL(i++ > 0)); + f = f.getJoin(); + } while (f != null); + } + } + if (condition != null) { + buff.append("\nWHERE ").append( + StringUtils.unEnclose(condition.getSQL())); + } + if (groupIndex != null) { + buff.append("\nGROUP BY "); + buff.resetCount(); + for (int gi : groupIndex) { + Expression g = exprList[gi]; + g = g.getNonAliasExpression(); + buff.appendExceptFirst(", "); + buff.append(StringUtils.unEnclose(g.getSQL())); + } + } + if (group 
!= null) { + buff.append("\nGROUP BY "); + buff.resetCount(); + for (Expression g : group) { + buff.appendExceptFirst(", "); + buff.append(StringUtils.unEnclose(g.getSQL())); + } + } + if (having != null) { + // could be set in addGlobalCondition + // in this case the query is not run directly, just getPlanSQL is + // called + Expression h = having; + buff.append("\nHAVING ").append( + StringUtils.unEnclose(h.getSQL())); + } else if (havingIndex >= 0) { + Expression h = exprList[havingIndex]; + buff.append("\nHAVING ").append( + StringUtils.unEnclose(h.getSQL())); + } + if (sort != null) { + buff.append("\nORDER BY ").append( + sort.getSQL(exprList, visibleColumnCount)); + } + if (orderList != null) { + buff.append("\nORDER BY "); + buff.resetCount(); + for (SelectOrderBy o : orderList) { + buff.appendExceptFirst(", "); + buff.append(StringUtils.unEnclose(o.getSQL())); + } + } + if (limitExpr != null) { + buff.append("\nLIMIT ").append( + StringUtils.unEnclose(limitExpr.getSQL())); + if (offsetExpr != null) { + buff.append(" OFFSET ").append( + StringUtils.unEnclose(offsetExpr.getSQL())); + } + } + if (sampleSizeExpr != null) { + buff.append("\nSAMPLE_SIZE ").append( + StringUtils.unEnclose(sampleSizeExpr.getSQL())); + } + if (isForUpdate) { + buff.append("\nFOR UPDATE"); + } + if (isQuickAggregateQuery) { + buff.append("\n/* direct lookup */"); + } + if (isDistinctQuery) { + buff.append("\n/* distinct */"); + } + if (sortUsingIndex) { + buff.append("\n/* index sorted */"); + } + if (isGroupQuery) { + if (isGroupSortedQuery) { + buff.append("\n/* group sorted */"); + } + } + // buff.append("\n/* cost: " + cost + " */"); + return buff.toString(); + } + + public void setHaving(Expression having) { + this.having = having; + } + + public Expression getHaving() { + return having; + } + + @Override + public int getColumnCount() { + return visibleColumnCount; + } + + public TableFilter getTopTableFilter() { + return topTableFilter; + } + + @Override + public ArrayList 
getExpressions() { + return expressions; + } + + @Override + public void setForUpdate(boolean b) { + this.isForUpdate = b; + if (session.getDatabase().getSettings().selectForUpdateMvcc && + session.getDatabase().isMultiVersion()) { + isForUpdateMvcc = b; + } + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + for (Expression e : expressions) { + e.mapColumns(resolver, level); + } + if (condition != null) { + condition.mapColumns(resolver, level); + } + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + for (Expression e : expressions) { + e.setEvaluatable(tableFilter, b); + } + if (condition != null) { + condition.setEvaluatable(tableFilter, b); + } + } + + /** + * Check if this is an aggregate query with direct lookup, for example a + * query of the type SELECT COUNT(*) FROM TEST or + * SELECT MAX(ID) FROM TEST. + * + * @return true if a direct lookup is possible + */ + public boolean isQuickAggregateQuery() { + return isQuickAggregateQuery; + } + + @Override + public void addGlobalCondition(Parameter param, int columnId, + int comparisonType) { + addParameter(param); + Expression comp; + Expression col = expressions.get(columnId); + col = col.getNonAliasExpression(); + if (col.isEverything(ExpressionVisitor.QUERY_COMPARABLE_VISITOR)) { + comp = new Comparison(session, comparisonType, col, param); + } else { + // this condition will always evaluate to true, but need to + // add the parameter, so it can be set later + comp = new Comparison(session, Comparison.EQUAL_NULL_SAFE, param, param); + } + comp = comp.optimize(session); + boolean addToCondition = true; + if (isGroupQuery) { + addToCondition = false; + for (int i = 0; groupIndex != null && i < groupIndex.length; i++) { + if (groupIndex[i] == columnId) { + addToCondition = true; + break; + } + } + if (!addToCondition) { + if (havingIndex >= 0) { + having = expressions.get(havingIndex); + } + if (having == null) { + having = comp; + } else { + 
having = new ConditionAndOr(ConditionAndOr.AND, having, comp); + } + } + } + if (addToCondition) { + if (condition == null) { + condition = comp; + } else { + condition = new ConditionAndOr(ConditionAndOr.AND, condition, comp); + } + } + } + + @Override + public void updateAggregate(Session s) { + for (Expression e : expressions) { + e.updateAggregate(s); + } + if (condition != null) { + condition.updateAggregate(s); + } + if (having != null) { + having.updateAggregate(s); + } + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.DETERMINISTIC: { + if (isForUpdate) { + return false; + } + for (TableFilter f : filters) { + if (!f.getTable().isDeterministic()) { + return false; + } + } + break; + } + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: { + for (TableFilter f : filters) { + long m = f.getTable().getMaxDataModificationId(); + visitor.addDataModificationId(m); + } + break; + } + case ExpressionVisitor.EVALUATABLE: { + if (!session.getDatabase().getSettings().optimizeEvaluatableSubqueries) { + return false; + } + break; + } + case ExpressionVisitor.GET_DEPENDENCIES: { + for (TableFilter f : filters) { + Table table = f.getTable(); + visitor.addDependency(table); + table.addDependencies(visitor.getDependencies()); + } + break; + } + default: + } + ExpressionVisitor v2 = visitor.incrementQueryLevel(1); + boolean result = true; + for (Expression e : expressions) { + if (!e.isEverything(v2)) { + result = false; + break; + } + } + if (result && condition != null && !condition.isEverything(v2)) { + result = false; + } + if (result && having != null && !having.isEverything(v2)) { + result = false; + } + return result; + } + + @Override + public boolean isReadOnly() { + return isEverything(ExpressionVisitor.READONLY_VISITOR); + } + + + @Override + public boolean isCacheable() { + return !isForUpdate; + } + + @Override + public int getType() { + return CommandInterface.SELECT; + } + + 
@Override + public boolean allowGlobalConditions() { + return offsetExpr == null && (limitExpr == null || sort == null); + } + + public SortOrder getSortOrder() { + return sort; + } + + /** + * Lazy execution for this select. + */ + private abstract class LazyResultSelect extends LazyResult { + + int rowNumber; + int columnCount; + + LazyResultSelect(Expression[] expressions, int columnCount) { + super(expressions); + this.columnCount = columnCount; + setCurrentRowNumber(0); + } + + @Override + public final int getVisibleColumnCount() { + return visibleColumnCount; + } + + @Override + public void close() { + if (!isClosed()) { + super.close(); + resetJoinBatchAfterQuery(); + } + } + + @Override + public void reset() { + super.reset(); + resetJoinBatchAfterQuery(); + topTableFilter.reset(); + setCurrentRowNumber(0); + rowNumber = 0; + } + } + + /** + * Lazy execution for a flat query. + */ + private final class LazyResultQueryFlat extends LazyResultSelect { + + int sampleSize; + + LazyResultQueryFlat(Expression[] expressions, int sampleSize, int columnCount) { + super(expressions, columnCount); + this.sampleSize = sampleSize; + } + + @Override + protected Value[] fetchNextRow() { + while ((sampleSize <= 0 || rowNumber < sampleSize) && + topTableFilter.next()) { + setCurrentRowNumber(rowNumber + 1); + if (isConditionMet()) { + ++rowNumber; + Value[] row = new Value[columnCount]; + for (int i = 0; i < columnCount; i++) { + Expression expr = expressions.get(i); + row[i] = expr.getValue(getSession()); + } + return row; + } + } + return null; + } + } + + /** + * Lazy execution for a group sorted query. 
+ */ + private final class LazyResultGroupSorted extends LazyResultSelect { + + Value[] previousKeyValues; + + LazyResultGroupSorted(Expression[] expressions, int columnCount) { + super(expressions, columnCount); + currentGroup = null; + } + + @Override + public void reset() { + super.reset(); + currentGroup = null; + } + + @Override + protected Value[] fetchNextRow() { + while (topTableFilter.next()) { + setCurrentRowNumber(rowNumber + 1); + if (isConditionMet()) { + rowNumber++; + Value[] keyValues = new Value[groupIndex.length]; + // update group + for (int i = 0; i < groupIndex.length; i++) { + int idx = groupIndex[i]; + Expression expr = expressions.get(idx); + keyValues[i] = expr.getValue(getSession()); + } + + Value[] row = null; + if (previousKeyValues == null) { + previousKeyValues = keyValues; + currentGroup = new HashMap<>(); + } else if (!Arrays.equals(previousKeyValues, keyValues)) { + row = createGroupSortedRow(previousKeyValues, columnCount); + previousKeyValues = keyValues; + currentGroup = new HashMap<>(); + } + currentGroupRowId++; + + for (int i = 0; i < columnCount; i++) { + if (groupByExpression == null || !groupByExpression[i]) { + Expression expr = expressions.get(i); + expr.updateAggregate(getSession()); + } + } + if (row != null) { + return row; + } + } + } + Value[] row = null; + if (previousKeyValues != null) { + row = createGroupSortedRow(previousKeyValues, columnCount); + previousKeyValues = null; + } + return row; + } + } +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/SelectListColumnResolver.java b/modules/h2/src/main/java/org/h2/command/dml/SelectListColumnResolver.java new file mode 100644 index 0000000000000..22ca9fc473a42 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/SelectListColumnResolver.java @@ -0,0 +1,100 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.ArrayList; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.table.Column; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.ColumnNamer; +import org.h2.value.Value; + +/** + * This class represents a column resolver for the column list of a SELECT + * statement. It is used to resolve select column aliases in the HAVING clause. + * Example: + *

<pre> + * SELECT X/3 AS A, COUNT(*) FROM SYSTEM_RANGE(1, 10) GROUP BY A HAVING A>2; + * </pre>

    + * + * @author Thomas Mueller + */ +public class SelectListColumnResolver implements ColumnResolver { + + private final Select select; + private final Expression[] expressions; + private final Column[] columns; + + SelectListColumnResolver(Select select) { + this.select = select; + int columnCount = select.getColumnCount(); + columns = new Column[columnCount]; + expressions = new Expression[columnCount]; + ArrayList columnList = select.getExpressions(); + ColumnNamer columnNamer= new ColumnNamer(select.getSession()); + for (int i = 0; i < columnCount; i++) { + Expression expr = columnList.get(i); + String columnName = columnNamer.getColumnName(expr, i, expr.getAlias()); + Column column = new Column(columnName, Value.NULL); + column.setTable(null, i); + columns[i] = column; + expressions[i] = expr.getNonAliasExpression(); + } + } + + @Override + public Column[] getColumns() { + return columns; + } + + @Override + public String getDerivedColumnName(Column column) { + return null; + } + + @Override + public String getSchemaName() { + return null; + } + + @Override + public Select getSelect() { + return select; + } + + @Override + public Column[] getSystemColumns() { + return null; + } + + @Override + public Column getRowIdColumn() { + return null; + } + + @Override + public String getTableAlias() { + return null; + } + + @Override + public TableFilter getTableFilter() { + return null; + } + + @Override + public Value getValue(Column column) { + return null; + } + + @Override + public Expression optimize(ExpressionColumn expressionColumn, Column column) { + return expressions[column.getColumnId()]; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/SelectOrderBy.java b/modules/h2/src/main/java/org/h2/command/dml/SelectOrderBy.java new file mode 100644 index 0000000000000..7e0c8d0b8141a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/SelectOrderBy.java @@ -0,0 +1,60 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.expression.Expression; + +/** + * Describes one element of the ORDER BY clause of a query. + */ +public class SelectOrderBy { + + /** + * The order by expression. + */ + public Expression expression; + + /** + * The column index expression. This can be a column index number (1 meaning + * the first column of the select list) or a parameter (the parameter is a + * number representing the column index number). + */ + public Expression columnIndexExpr; + + /** + * If the column should be sorted descending. + */ + public boolean descending; + + /** + * If NULL should appear first. + */ + public boolean nullsFirst; + + /** + * If NULL should appear at the end. + */ + public boolean nullsLast; + + public String getSQL() { + StringBuilder buff = new StringBuilder(); + if (expression != null) { + buff.append('=').append(expression.getSQL()); + } else { + buff.append(columnIndexExpr.getSQL()); + } + if (descending) { + buff.append(" DESC"); + } + if (nullsFirst) { + buff.append(" NULLS FIRST"); + } else if (nullsLast) { + buff.append(" NULLS LAST"); + } + return buff.toString(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/SelectUnion.java b/modules/h2/src/main/java/org/h2/command/dml/SelectUnion.java new file mode 100644 index 0000000000000..f58cb2c5026de --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/SelectUnion.java @@ -0,0 +1,573 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.expression.ExpressionVisitor; +import org.h2.expression.Parameter; +import org.h2.expression.ValueExpression; +import org.h2.message.DbException; +import org.h2.result.LazyResult; +import org.h2.result.LocalResult; +import org.h2.result.ResultInterface; +import org.h2.result.ResultTarget; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.ColumnResolver; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.ColumnNamer; +import org.h2.util.New; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueInt; +import org.h2.value.ValueNull; + +/** + * Represents a union SELECT statement. + */ +public class SelectUnion extends Query { + + public enum UnionType { + /** + * The type of a UNION statement. + */ + UNION, + + /** + * The type of a UNION ALL statement. + */ + UNION_ALL, + + /** + * The type of an EXCEPT statement. + */ + EXCEPT, + + /** + * The type of an INTERSECT statement. + */ + INTERSECT + } + + private UnionType unionType; + + /** + * The left hand side of the union (the first subquery). + */ + final Query left; + + /** + * The right hand side of the union (the second subquery). 
+ */ + Query right; + + private ArrayList expressions; + private Expression[] expressionArray; + private ArrayList orderList; + private SortOrder sort; + private boolean isPrepared, checkInit; + private boolean isForUpdate; + + public SelectUnion(Session session, Query query) { + super(session); + this.left = query; + } + + @Override + public boolean isUnion() { + return true; + } + + @Override + public void prepareJoinBatch() { + left.prepareJoinBatch(); + right.prepareJoinBatch(); + } + + public void setUnionType(UnionType type) { + this.unionType = type; + } + + public UnionType getUnionType() { + return unionType; + } + + public void setRight(Query select) { + right = select; + } + + public Query getLeft() { + return left; + } + + public Query getRight() { + return right; + } + + @Override + public void setSQL(String sql) { + this.sqlStatement = sql; + } + + @Override + public void setOrder(ArrayList order) { + orderList = order; + } + + @Override + public boolean hasOrder() { + return orderList != null || sort != null; + } + + private Value[] convert(Value[] values, int columnCount) { + Value[] newValues; + if (columnCount == values.length) { + // re-use the array if possible + newValues = values; + } else { + // create a new array if needed, + // for the value hash set + newValues = new Value[columnCount]; + } + for (int i = 0; i < columnCount; i++) { + Expression e = expressions.get(i); + newValues[i] = values[i].convertTo(e.getType()); + } + return newValues; + } + + @Override + public ResultInterface queryMeta() { + int columnCount = left.getColumnCount(); + LocalResult result = new LocalResult(session, expressionArray, columnCount); + result.done(); + return result; + } + + public LocalResult getEmptyResult() { + int columnCount = left.getColumnCount(); + return new LocalResult(session, expressionArray, columnCount); + } + + @Override + protected ResultInterface queryWithoutCache(int maxRows, ResultTarget target) { + if (maxRows != 0) { + // maxRows is 
set (maxRows 0 means no limit) + int l; + if (limitExpr == null) { + l = -1; + } else { + Value v = limitExpr.getValue(session); + l = v == ValueNull.INSTANCE ? -1 : v.getInt(); + } + if (l < 0) { + // for limitExpr, 0 means no rows, and -1 means no limit + l = maxRows; + } else { + l = Math.min(l, maxRows); + } + limitExpr = ValueExpression.get(ValueInt.get(l)); + } + if (session.getDatabase().getSettings().optimizeInsertFromSelect) { + if (unionType == UnionType.UNION_ALL && target != null) { + if (sort == null && !distinct && maxRows == 0 && + offsetExpr == null && limitExpr == null) { + left.query(0, target); + right.query(0, target); + return null; + } + } + } + int columnCount = left.getColumnCount(); + if (session.isLazyQueryExecution() && unionType == UnionType.UNION_ALL && !distinct && + sort == null && !randomAccessResult && !isForUpdate && + offsetExpr == null && isReadOnly()) { + int limit = -1; + if (limitExpr != null) { + Value v = limitExpr.getValue(session); + if (v != ValueNull.INSTANCE) { + limit = v.getInt(); + } + } + // limit 0 means no rows + if (limit != 0) { + LazyResultUnion lazyResult = new LazyResultUnion(expressionArray, columnCount); + if (limit > 0) { + lazyResult.setLimit(limit); + } + return lazyResult; + } + } + LocalResult result = new LocalResult(session, expressionArray, columnCount); + if (sort != null) { + result.setSortOrder(sort); + } + if (distinct) { + left.setDistinct(true); + right.setDistinct(true); + result.setDistinct(); + } + if (randomAccessResult) { + result.setRandomAccess(); + } + switch (unionType) { + case UNION: + case EXCEPT: + left.setDistinct(true); + right.setDistinct(true); + result.setDistinct(); + break; + case UNION_ALL: + break; + case INTERSECT: + left.setDistinct(true); + right.setDistinct(true); + break; + default: + DbException.throwInternalError("type=" + unionType); + } + ResultInterface l = left.query(0); + ResultInterface r = right.query(0); + l.reset(); + r.reset(); + switch (unionType) { + 
case UNION_ALL: + case UNION: { + while (l.next()) { + result.addRow(convert(l.currentRow(), columnCount)); + } + while (r.next()) { + result.addRow(convert(r.currentRow(), columnCount)); + } + break; + } + case EXCEPT: { + while (l.next()) { + result.addRow(convert(l.currentRow(), columnCount)); + } + while (r.next()) { + result.removeDistinct(convert(r.currentRow(), columnCount)); + } + break; + } + case INTERSECT: { + LocalResult temp = new LocalResult(session, expressionArray, columnCount); + temp.setDistinct(); + temp.setRandomAccess(); + while (l.next()) { + temp.addRow(convert(l.currentRow(), columnCount)); + } + while (r.next()) { + Value[] values = convert(r.currentRow(), columnCount); + if (temp.containsDistinct(values)) { + result.addRow(values); + } + } + temp.close(); + break; + } + default: + DbException.throwInternalError("type=" + unionType); + } + if (offsetExpr != null) { + result.setOffset(offsetExpr.getValue(session).getInt()); + } + if (limitExpr != null) { + Value v = limitExpr.getValue(session); + if (v != ValueNull.INSTANCE) { + result.setLimit(v.getInt()); + } + } + l.close(); + r.close(); + result.done(); + if (target != null) { + while (result.next()) { + target.addRow(result.currentRow()); + } + result.close(); + return null; + } + return result; + } + + @Override + public void init() { + if (SysProperties.CHECK && checkInit) { + DbException.throwInternalError(); + } + checkInit = true; + left.init(); + right.init(); + int len = left.getColumnCount(); + if (len != right.getColumnCount()) { + throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH); + } + ArrayList le = left.getExpressions(); + // set the expressions to get the right column count and names, + // but can't validate at this time + expressions = New.arrayList(); + for (int i = 0; i < len; i++) { + Expression l = le.get(i); + expressions.add(l); + } + } + + @Override + public void prepare() { + if (isPrepared) { + // sometimes a subquery is prepared twice (CREATE TABLE AS 
SELECT) + return; + } + if (SysProperties.CHECK && !checkInit) { + DbException.throwInternalError("not initialized"); + } + isPrepared = true; + left.prepare(); + right.prepare(); + int len = left.getColumnCount(); + // set the correct expressions now + expressions = New.arrayList(); + ArrayList le = left.getExpressions(); + ArrayList re = right.getExpressions(); + ColumnNamer columnNamer= new ColumnNamer(session); + for (int i = 0; i < len; i++) { + Expression l = le.get(i); + Expression r = re.get(i); + int type = Value.getHigherOrder(l.getType(), r.getType()); + long prec = Math.max(l.getPrecision(), r.getPrecision()); + int scale = Math.max(l.getScale(), r.getScale()); + int displaySize = Math.max(l.getDisplaySize(), r.getDisplaySize()); + String columnName = columnNamer.getColumnName(l,i,l.getAlias()); + Column col = new Column(columnName, type, prec, scale, displaySize); + Expression e = new ExpressionColumn(session.getDatabase(), col); + expressions.add(e); + } + if (orderList != null) { + initOrder(session, expressions, null, orderList, getColumnCount(), true, null); + sort = prepareOrder(orderList, expressions.size()); + orderList = null; + } + expressionArray = expressions.toArray(new Expression[0]); + } + + @Override + public double getCost() { + return left.getCost() + right.getCost(); + } + + @Override + public HashSet
<Table> getTables() { + HashSet<Table>
    set = left.getTables(); + set.addAll(right.getTables()); + return set; + } + + @Override + public ArrayList getExpressions() { + return expressions; + } + + @Override + public void setForUpdate(boolean forUpdate) { + left.setForUpdate(forUpdate); + right.setForUpdate(forUpdate); + isForUpdate = forUpdate; + } + + @Override + public int getColumnCount() { + return left.getColumnCount(); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + right.mapColumns(resolver, level); + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + right.setEvaluatable(tableFilter, b); + } + + @Override + public void addGlobalCondition(Parameter param, int columnId, + int comparisonType) { + addParameter(param); + switch (unionType) { + case UNION_ALL: + case UNION: + case INTERSECT: { + left.addGlobalCondition(param, columnId, comparisonType); + right.addGlobalCondition(param, columnId, comparisonType); + break; + } + case EXCEPT: { + left.addGlobalCondition(param, columnId, comparisonType); + break; + } + default: + DbException.throwInternalError("type=" + unionType); + } + } + + @Override + public String getPlanSQL() { + StringBuilder buff = new StringBuilder(); + buff.append('(').append(left.getPlanSQL()).append(')'); + switch (unionType) { + case UNION_ALL: + buff.append("\nUNION ALL\n"); + break; + case UNION: + buff.append("\nUNION\n"); + break; + case INTERSECT: + buff.append("\nINTERSECT\n"); + break; + case EXCEPT: + buff.append("\nEXCEPT\n"); + break; + default: + DbException.throwInternalError("type=" + unionType); + } + buff.append('(').append(right.getPlanSQL()).append(')'); + Expression[] exprList = expressions.toArray(new Expression[0]); + if (sort != null) { + buff.append("\nORDER BY ").append(sort.getSQL(exprList, exprList.length)); + } + if (limitExpr != null) { + buff.append("\nLIMIT ").append( + 
StringUtils.unEnclose(limitExpr.getSQL())); + if (offsetExpr != null) { + buff.append("\nOFFSET ").append( + StringUtils.unEnclose(offsetExpr.getSQL())); + } + } + if (sampleSizeExpr != null) { + buff.append("\nSAMPLE_SIZE ").append( + StringUtils.unEnclose(sampleSizeExpr.getSQL())); + } + if (isForUpdate) { + buff.append("\nFOR UPDATE"); + } + return buff.toString(); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return left.isEverything(visitor) && right.isEverything(visitor); + } + + @Override + public boolean isReadOnly() { + return left.isReadOnly() && right.isReadOnly(); + } + + @Override + public void updateAggregate(Session s) { + left.updateAggregate(s); + right.updateAggregate(s); + } + + @Override + public void fireBeforeSelectTriggers() { + left.fireBeforeSelectTriggers(); + right.fireBeforeSelectTriggers(); + } + + @Override + public int getType() { + return CommandInterface.SELECT; + } + + @Override + public boolean allowGlobalConditions() { + return left.allowGlobalConditions() && right.allowGlobalConditions(); + } + + /** + * Lazy execution for this union. 
+ */ + private final class LazyResultUnion extends LazyResult { + + int columnCount; + ResultInterface l; + ResultInterface r; + boolean leftDone; + boolean rightDone; + + LazyResultUnion(Expression[] expressions, int columnCount) { + super(expressions); + this.columnCount = columnCount; + } + + @Override + public int getVisibleColumnCount() { + return columnCount; + } + + @Override + protected Value[] fetchNextRow() { + if (rightDone) { + return null; + } + if (!leftDone) { + if (l == null) { + l = left.query(0); + l.reset(); + } + if (l.next()) { + return l.currentRow(); + } + leftDone = true; + } + if (r == null) { + r = right.query(0); + r.reset(); + } + if (r.next()) { + return r.currentRow(); + } + rightDone = true; + return null; + } + + @Override + public void close() { + super.close(); + if (l != null) { + l.close(); + } + if (r != null) { + r.close(); + } + } + + @Override + public void reset() { + super.reset(); + if (l != null) { + l.reset(); + } + if (r != null) { + r.reset(); + } + leftDone = false; + rightDone = false; + } + } +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Set.java b/modules/h2/src/main/java/org/h2/command/dml/Set.java new file mode 100644 index 0000000000000..192d02eea5062 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Set.java @@ -0,0 +1,621 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.text.Collator; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.compress.Compressor; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Mode; +import org.h2.engine.Session; +import org.h2.engine.Setting; +import org.h2.expression.Expression; +import org.h2.expression.ValueExpression; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.RowFactory; +import org.h2.schema.Schema; +import org.h2.table.Table; +import org.h2.tools.CompressTool; +import org.h2.util.JdbcUtils; +import org.h2.util.StringUtils; +import org.h2.value.CompareMode; +import org.h2.value.ValueInt; + +/** + * This class represents the statement + * SET + */ +public class Set extends Prepared { + + private final int type; + private Expression expression; + private String stringValue; + private String[] stringValueList; + + public Set(Session session, int type) { + super(session); + this.type = type; + } + + public void setString(String v) { + this.stringValue = v; + } + + @Override + public boolean isTransactional() { + switch (type) { + case SetTypes.CLUSTER: + case SetTypes.VARIABLE: + case SetTypes.QUERY_TIMEOUT: + case SetTypes.LOCK_TIMEOUT: + case SetTypes.TRACE_LEVEL_SYSTEM_OUT: + case SetTypes.TRACE_LEVEL_FILE: + case SetTypes.THROTTLE: + case SetTypes.SCHEMA: + case SetTypes.SCHEMA_SEARCH_PATH: + case SetTypes.RETENTION_TIME: + return true; + default: + } + return false; + } + + @Override + public int update() { + Database database = session.getDatabase(); + String name = SetTypes.getTypeName(type); + switch (type) { + case SetTypes.ALLOW_LITERALS: { + session.getUser().checkAdmin(); + int value = getIntValue(); + if (value < 0 || value > 2) { + throw DbException.getInvalidValueException("ALLOW_LITERALS", + getIntValue()); + } + database.setAllowLiterals(value); + 
addOrUpdateSetting(name, null, value); + break; + } + case SetTypes.CACHE_SIZE: + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("CACHE_SIZE", + getIntValue()); + } + session.getUser().checkAdmin(); + database.setCacheSize(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + case SetTypes.CLUSTER: { + if (Constants.CLUSTERING_ENABLED.equals(stringValue)) { + // this value is used when connecting + // ignore, as the cluster setting is checked later + break; + } + String value = StringUtils.quoteStringSQL(stringValue); + if (!value.equals(database.getCluster())) { + if (!value.equals(Constants.CLUSTERING_DISABLED)) { + // anybody can disable the cluster + // (if he can't access a cluster node) + session.getUser().checkAdmin(); + } + database.setCluster(value); + // use the system session so that the current transaction + // (if any) is not committed + Session sysSession = database.getSystemSession(); + synchronized (sysSession) { + synchronized (database) { + addOrUpdateSetting(sysSession, name, value, 0); + sysSession.commit(true); + } + } + } + break; + } + case SetTypes.COLLATION: { + session.getUser().checkAdmin(); + final boolean binaryUnsigned = database. 
+ getCompareMode().isBinaryUnsigned(); + CompareMode compareMode; + StringBuilder buff = new StringBuilder(stringValue); + if (stringValue.equals(CompareMode.OFF)) { + compareMode = CompareMode.getInstance(null, 0, binaryUnsigned); + } else { + int strength = getIntValue(); + buff.append(" STRENGTH "); + if (strength == Collator.IDENTICAL) { + buff.append("IDENTICAL"); + } else if (strength == Collator.PRIMARY) { + buff.append("PRIMARY"); + } else if (strength == Collator.SECONDARY) { + buff.append("SECONDARY"); + } else if (strength == Collator.TERTIARY) { + buff.append("TERTIARY"); + } + compareMode = CompareMode.getInstance(stringValue, strength, + binaryUnsigned); + } + CompareMode old = database.getCompareMode(); + if (old.equals(compareMode)) { + break; + } + Table table = database.getFirstUserTable(); + if (table != null) { + throw DbException.get( + ErrorCode.COLLATION_CHANGE_WITH_DATA_TABLE_1, + table.getSQL()); + } + addOrUpdateSetting(name, buff.toString(), 0); + database.setCompareMode(compareMode); + break; + } + case SetTypes.BINARY_COLLATION: { + session.getUser().checkAdmin(); + Table table = database.getFirstUserTable(); + if (table != null) { + throw DbException.get( + ErrorCode.COLLATION_CHANGE_WITH_DATA_TABLE_1, + table.getSQL()); + } + CompareMode currentMode = database.getCompareMode(); + CompareMode newMode; + if (stringValue.equals(CompareMode.SIGNED)) { + newMode = CompareMode.getInstance(currentMode.getName(), + currentMode.getStrength(), false); + } else if (stringValue.equals(CompareMode.UNSIGNED)) { + newMode = CompareMode.getInstance(currentMode.getName(), + currentMode.getStrength(), true); + } else { + throw DbException.getInvalidValueException("BINARY_COLLATION", + stringValue); + } + addOrUpdateSetting(name, stringValue, 0); + database.setCompareMode(newMode); + break; + } + case SetTypes.COMPRESS_LOB: { + session.getUser().checkAdmin(); + int algo = CompressTool.getCompressAlgorithm(stringValue); + 
database.setLobCompressionAlgorithm(algo == Compressor.NO ? + null : stringValue); + addOrUpdateSetting(name, stringValue, 0); + break; + } + case SetTypes.CREATE_BUILD: { + session.getUser().checkAdmin(); + if (database.isStarting()) { + // just ignore the command if not starting + // this avoids problems when running recovery scripts + int value = getIntValue(); + addOrUpdateSetting(name, null, value); + } + break; + } + case SetTypes.DATABASE_EVENT_LISTENER: { + session.getUser().checkAdmin(); + database.setEventListenerClass(stringValue); + break; + } + case SetTypes.DB_CLOSE_DELAY: { + int x = getIntValue(); + if (x == -1) { + // -1 is a special value for in-memory databases, + // which means "keep the DB alive and use the same + // DB for all connections" + } else if (x < 0) { + throw DbException.getInvalidValueException("DB_CLOSE_DELAY", x); + } + session.getUser().checkAdmin(); + database.setCloseDelay(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + } + case SetTypes.DEFAULT_LOCK_TIMEOUT: + if (getIntValue() < 0) { + throw DbException.getInvalidValueException( + "DEFAULT_LOCK_TIMEOUT", getIntValue()); + } + session.getUser().checkAdmin(); + addOrUpdateSetting(name, null, getIntValue()); + break; + case SetTypes.DEFAULT_TABLE_TYPE: + session.getUser().checkAdmin(); + database.setDefaultTableType(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + case SetTypes.EXCLUSIVE: { + session.getUser().checkAdmin(); + int value = getIntValue(); + switch (value) { + case 0: + database.setExclusiveSession(null, false); + break; + case 1: + database.setExclusiveSession(session, false); + break; + case 2: + database.setExclusiveSession(session, true); + break; + default: + throw DbException.getInvalidValueException("EXCLUSIVE", value); + } + break; + } + case SetTypes.JAVA_OBJECT_SERIALIZER: { + session.getUser().checkAdmin(); + Table table = database.getFirstUserTable(); + if (table != null) { + throw 
DbException.get(ErrorCode. + JAVA_OBJECT_SERIALIZER_CHANGE_WITH_DATA_TABLE, + table.getSQL()); + } + database.setJavaObjectSerializerName(stringValue); + addOrUpdateSetting(name, stringValue, 0); + break; + } + case SetTypes.IGNORECASE: + session.getUser().checkAdmin(); + database.setIgnoreCase(getIntValue() == 1); + addOrUpdateSetting(name, null, getIntValue()); + break; + case SetTypes.LOCK_MODE: + session.getUser().checkAdmin(); + database.setLockMode(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + case SetTypes.LOCK_TIMEOUT: + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("LOCK_TIMEOUT", + getIntValue()); + } + session.setLockTimeout(getIntValue()); + break; + case SetTypes.LOG: { + int value = getIntValue(); + if (database.isPersistent() && value != database.getLogMode()) { + session.getUser().checkAdmin(); + database.setLogMode(value); + } + break; + } + case SetTypes.MAX_LENGTH_INPLACE_LOB: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException( + "MAX_LENGTH_INPLACE_LOB", getIntValue()); + } + session.getUser().checkAdmin(); + database.setMaxLengthInplaceLob(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + } + case SetTypes.MAX_LOG_SIZE: + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("MAX_LOG_SIZE", + getIntValue()); + } + session.getUser().checkAdmin(); + database.setMaxLogSize((long) getIntValue() * 1024 * 1024); + addOrUpdateSetting(name, null, getIntValue()); + break; + case SetTypes.MAX_MEMORY_ROWS: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("MAX_MEMORY_ROWS", + getIntValue()); + } + session.getUser().checkAdmin(); + database.setMaxMemoryRows(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + } + case SetTypes.MAX_MEMORY_UNDO: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("MAX_MEMORY_UNDO", + getIntValue()); + } + session.getUser().checkAdmin(); + 
database.setMaxMemoryUndo(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + } + case SetTypes.MAX_OPERATION_MEMORY: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException( + "MAX_OPERATION_MEMORY", getIntValue()); + } + session.getUser().checkAdmin(); + int value = getIntValue(); + database.setMaxOperationMemory(value); + break; + } + case SetTypes.MODE: + Mode mode = Mode.getInstance(stringValue); + if (mode == null) { + throw DbException.get(ErrorCode.UNKNOWN_MODE_1, stringValue); + } + if (database.getMode() != mode) { + session.getUser().checkAdmin(); + database.setMode(mode); + session.getColumnNamerConfiguration().configure(mode.getEnum()); + } + break; + case SetTypes.MULTI_THREADED: { + session.getUser().checkAdmin(); + database.setMultiThreaded(getIntValue() == 1); + break; + } + case SetTypes.MVCC: { + if (database.isMultiVersion() != (getIntValue() == 1)) { + throw DbException.get( + ErrorCode.CANNOT_CHANGE_SETTING_WHEN_OPEN_1, "MVCC"); + } + break; + } + case SetTypes.OPTIMIZE_REUSE_RESULTS: { + session.getUser().checkAdmin(); + database.setOptimizeReuseResults(getIntValue() != 0); + break; + } + case SetTypes.QUERY_TIMEOUT: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("QUERY_TIMEOUT", + getIntValue()); + } + int value = getIntValue(); + session.setQueryTimeout(value); + break; + } + case SetTypes.REDO_LOG_BINARY: { + int value = getIntValue(); + session.setRedoLogBinary(value == 1); + break; + } + case SetTypes.REFERENTIAL_INTEGRITY: { + session.getUser().checkAdmin(); + int value = getIntValue(); + if (value < 0 || value > 1) { + throw DbException.getInvalidValueException( + "REFERENTIAL_INTEGRITY", getIntValue()); + } + database.setReferentialIntegrity(value == 1); + break; + } + case SetTypes.QUERY_STATISTICS: { + session.getUser().checkAdmin(); + int value = getIntValue(); + if (value < 0 || value > 1) { + throw DbException.getInvalidValueException("QUERY_STATISTICS", + 
getIntValue()); + } + database.setQueryStatistics(value == 1); + break; + } + case SetTypes.QUERY_STATISTICS_MAX_ENTRIES: { + session.getUser().checkAdmin(); + int value = getIntValue(); + if (value < 1) { + throw DbException.getInvalidValueException("QUERY_STATISTICS_MAX_ENTRIES", + getIntValue()); + } + database.setQueryStatisticsMaxEntries(value); + break; + } + case SetTypes.SCHEMA: { + Schema schema = database.getSchema(stringValue); + session.setCurrentSchema(schema); + break; + } + case SetTypes.SCHEMA_SEARCH_PATH: { + session.setSchemaSearchPath(stringValueList); + break; + } + case SetTypes.TRACE_LEVEL_FILE: + session.getUser().checkAdmin(); + if (getCurrentObjectId() == 0) { + // don't set the property when opening the database + // this is for compatibility with older versions, because + // this setting was persistent + database.getTraceSystem().setLevelFile(getIntValue()); + } + break; + case SetTypes.TRACE_LEVEL_SYSTEM_OUT: + session.getUser().checkAdmin(); + if (getCurrentObjectId() == 0) { + // don't set the property when opening the database + // this is for compatibility with older versions, because + // this setting was persistent + database.getTraceSystem().setLevelSystemOut(getIntValue()); + } + break; + case SetTypes.TRACE_MAX_FILE_SIZE: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException( + "TRACE_MAX_FILE_SIZE", getIntValue()); + } + session.getUser().checkAdmin(); + int size = getIntValue() * 1024 * 1024; + database.getTraceSystem().setMaxFileSize(size); + addOrUpdateSetting(name, null, getIntValue()); + break; + } + case SetTypes.THROTTLE: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("THROTTLE", + getIntValue()); + } + session.setThrottle(getIntValue()); + break; + } + case SetTypes.UNDO_LOG: { + int value = getIntValue(); + if (value < 0 || value > 1) { + throw DbException.getInvalidValueException("UNDO_LOG", + getIntValue()); + } + session.setUndoLogEnabled(value == 1); + break; + } + 
case SetTypes.VARIABLE: { + Expression expr = expression.optimize(session); + session.setVariable(stringValue, expr.getValue(session)); + break; + } + case SetTypes.WRITE_DELAY: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("WRITE_DELAY", + getIntValue()); + } + session.getUser().checkAdmin(); + database.setWriteDelay(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + } + case SetTypes.RETENTION_TIME: { + if (getIntValue() < 0) { + throw DbException.getInvalidValueException("RETENTION_TIME", + getIntValue()); + } + session.getUser().checkAdmin(); + database.setRetentionTime(getIntValue()); + addOrUpdateSetting(name, null, getIntValue()); + break; + } + case SetTypes.ROW_FACTORY: { + session.getUser().checkAdmin(); + String rowFactoryName = expression.getColumnName(); + Class<RowFactory> rowFactoryClass = JdbcUtils.loadUserClass(rowFactoryName); + RowFactory rowFactory; + try { + rowFactory = rowFactoryClass.newInstance(); + } catch (Exception e) { + throw DbException.convert(e); + } + database.setRowFactory(rowFactory); + break; + } + case SetTypes.BATCH_JOINS: { + int value = getIntValue(); + if (value != 0 && value != 1) { + throw DbException.getInvalidValueException("BATCH_JOINS", + getIntValue()); + } + session.setJoinBatchEnabled(value == 1); + break; + } + case SetTypes.FORCE_JOIN_ORDER: { + int value = getIntValue(); + if (value != 0 && value != 1) { + throw DbException.getInvalidValueException("FORCE_JOIN_ORDER", + value); + } + session.setForceJoinOrder(value == 1); + break; + } + case SetTypes.LAZY_QUERY_EXECUTION: { + int value = getIntValue(); + if (value != 0 && value != 1) { + throw DbException.getInvalidValueException("LAZY_QUERY_EXECUTION", + value); + } + session.setLazyQueryExecution(value == 1); + break; + } + case SetTypes.BUILTIN_ALIAS_OVERRIDE: { + session.getUser().checkAdmin(); + int value = getIntValue(); + if (value != 0 && value != 1) { + throw 
DbException.getInvalidValueException("BUILTIN_ALIAS_OVERRIDE", + value); + } + database.setAllowBuiltinAliasOverride(value == 1); + break; + } + case SetTypes.COLUMN_NAME_RULES: { + session.getUser().checkAdmin(); + session.getColumnNamerConfiguration().configure(expression.getColumnName()); + break; + } + default: + DbException.throwInternalError("type="+type); + } + // the meta data information has changed + database.getNextModificationDataId(); + // query caches might be affected as well, for example + // when changing the compatibility mode + database.getNextModificationMetaId(); + return 0; + } + + private int getIntValue() { + expression = expression.optimize(session); + return expression.getValue(session).getInt(); + } + + public void setInt(int value) { + this.expression = ValueExpression.get(ValueInt.get(value)); + } + + public void setExpression(Expression expression) { + this.expression = expression; + } + + private void addOrUpdateSetting(String name, String s, int v) { + addOrUpdateSetting(session, name, s, v); + } + + private void addOrUpdateSetting(Session session, String name, String s, + int v) { + Database database = session.getDatabase(); + if (database.isReadOnly()) { + return; + } + Setting setting = database.findSetting(name); + boolean addNew = false; + if (setting == null) { + addNew = true; + int id = getObjectId(); + setting = new Setting(database, id, name); + } + if (s != null) { + if (!addNew && setting.getStringValue().equals(s)) { + return; + } + setting.setStringValue(s); + } else { + if (!addNew && setting.getIntValue() == v) { + return; + } + setting.setIntValue(v); + } + if (addNew) { + database.addDatabaseObject(session, setting); + } else { + database.updateMeta(session, setting); + } + } + + @Override + public boolean needRecompile() { + return false; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + public void setStringArray(String[] list) { + this.stringValueList = list; + } + + @Override + 
public int getType() { + return CommandInterface.SET; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/SetTypes.java b/modules/h2/src/main/java/org/h2/command/dml/SetTypes.java new file mode 100644 index 0000000000000..51f43fa91c437 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/SetTypes.java @@ -0,0 +1,341 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.ArrayList; + +/** + * The list of setting for a SET statement. + */ +public class SetTypes { + + /** + * The type of a SET IGNORECASE statement. + */ + public static final int IGNORECASE = 1; + + /** + * The type of a SET MAX_LOG_SIZE statement. + */ + public static final int MAX_LOG_SIZE = 2; + + /** + * The type of a SET MODE statement. + */ + public static final int MODE = 3; + + /** + * The type of a SET READONLY statement. + */ + public static final int READONLY = 4; + + /** + * The type of a SET LOCK_TIMEOUT statement. + */ + public static final int LOCK_TIMEOUT = 5; + + /** + * The type of a SET DEFAULT_LOCK_TIMEOUT statement. + */ + public static final int DEFAULT_LOCK_TIMEOUT = 6; + + /** + * The type of a SET DEFAULT_TABLE_TYPE statement. + */ + public static final int DEFAULT_TABLE_TYPE = 7; + + /** + * The type of a SET CACHE_SIZE statement. + */ + public static final int CACHE_SIZE = 8; + + /** + * The type of a SET TRACE_LEVEL_SYSTEM_OUT statement. + */ + public static final int TRACE_LEVEL_SYSTEM_OUT = 9; + + /** + * The type of a SET TRACE_LEVEL_FILE statement. + */ + public static final int TRACE_LEVEL_FILE = 10; + + /** + * The type of a SET TRACE_MAX_FILE_SIZE statement. + */ + public static final int TRACE_MAX_FILE_SIZE = 11; + + /** + * The type of a SET COLLATION statement. + */ + public static final int COLLATION = 12; + + /** + * The type of a SET CLUSTER statement. 
+ */ + public static final int CLUSTER = 13; + + /** + * The type of a SET WRITE_DELAY statement. + */ + public static final int WRITE_DELAY = 14; + + /** + * The type of a SET DATABASE_EVENT_LISTENER statement. + */ + public static final int DATABASE_EVENT_LISTENER = 15; + + /** + * The type of a SET MAX_MEMORY_ROWS statement. + */ + public static final int MAX_MEMORY_ROWS = 16; + + /** + * The type of a SET LOCK_MODE statement. + */ + public static final int LOCK_MODE = 17; + + /** + * The type of a SET DB_CLOSE_DELAY statement. + */ + public static final int DB_CLOSE_DELAY = 18; + + /** + * The type of a SET LOG statement. + */ + public static final int LOG = 19; + + /** + * The type of a SET THROTTLE statement. + */ + public static final int THROTTLE = 20; + + /** + * The type of a SET MAX_MEMORY_UNDO statement. + */ + public static final int MAX_MEMORY_UNDO = 21; + + /** + * The type of a SET MAX_LENGTH_INPLACE_LOB statement. + */ + public static final int MAX_LENGTH_INPLACE_LOB = 22; + + /** + * The type of a SET COMPRESS_LOB statement. + */ + public static final int COMPRESS_LOB = 23; + + /** + * The type of a SET ALLOW_LITERALS statement. + */ + public static final int ALLOW_LITERALS = 24; + + /** + * The type of a SET MULTI_THREADED statement. + */ + public static final int MULTI_THREADED = 25; + + /** + * The type of a SET SCHEMA statement. + */ + public static final int SCHEMA = 26; + + /** + * The type of a SET OPTIMIZE_REUSE_RESULTS statement. + */ + public static final int OPTIMIZE_REUSE_RESULTS = 27; + + /** + * The type of a SET SCHEMA_SEARCH_PATH statement. + */ + public static final int SCHEMA_SEARCH_PATH = 28; + + /** + * The type of a SET UNDO_LOG statement. + */ + public static final int UNDO_LOG = 29; + + /** + * The type of a SET REFERENTIAL_INTEGRITY statement. + */ + public static final int REFERENTIAL_INTEGRITY = 30; + + /** + * The type of a SET MVCC statement. 
+ */ + public static final int MVCC = 31; + + /** + * The type of a SET MAX_OPERATION_MEMORY statement. + */ + public static final int MAX_OPERATION_MEMORY = 32; + + /** + * The type of a SET EXCLUSIVE statement. + */ + public static final int EXCLUSIVE = 33; + + /** + * The type of a SET CREATE_BUILD statement. + */ + public static final int CREATE_BUILD = 34; + + /** + * The type of a SET \@VARIABLE statement. + */ + public static final int VARIABLE = 35; + + /** + * The type of a SET QUERY_TIMEOUT statement. + */ + public static final int QUERY_TIMEOUT = 36; + + /** + * The type of a SET REDO_LOG_BINARY statement. + */ + public static final int REDO_LOG_BINARY = 37; + + /** + * The type of a SET BINARY_COLLATION statement. + */ + public static final int BINARY_COLLATION = 38; + + /** + * The type of a SET JAVA_OBJECT_SERIALIZER statement. + */ + public static final int JAVA_OBJECT_SERIALIZER = 39; + + /** + * The type of a SET RETENTION_TIME statement. + */ + public static final int RETENTION_TIME = 40; + + /** + * The type of a SET QUERY_STATISTICS statement. + */ + public static final int QUERY_STATISTICS = 41; + + /** + * The type of a SET QUERY_STATISTICS_MAX_ENTRIES statement. + */ + public static final int QUERY_STATISTICS_MAX_ENTRIES = 42; + + /** + * The type of a SET ROW_FACTORY statement. + */ + public static final int ROW_FACTORY = 43; + + /** + * The type of SET BATCH_JOINS statement. + */ + public static final int BATCH_JOINS = 44; + + /** + * The type of SET FORCE_JOIN_ORDER statement. + */ + public static final int FORCE_JOIN_ORDER = 45; + + /** + * The type of SET LAZY_QUERY_EXECUTION statement. + */ + public static final int LAZY_QUERY_EXECUTION = 46; + + /** + * The type of SET BUILTIN_ALIAS_OVERRIDE statement. + */ + public static final int BUILTIN_ALIAS_OVERRIDE = 47; + + /** + * The type of a SET COLUMN_NAME_RULES statement. 
+ */ + public static final int COLUMN_NAME_RULES = 48; + + private static final int COUNT = COLUMN_NAME_RULES + 1; + + private static final ArrayList<String> TYPES; + + private SetTypes() { + // utility class + } + + static { + ArrayList<String> list = new ArrayList<>(COUNT); + list.add(null); + list.add(IGNORECASE, "IGNORECASE"); + list.add(MAX_LOG_SIZE, "MAX_LOG_SIZE"); + list.add(MODE, "MODE"); + list.add(READONLY, "READONLY"); + list.add(LOCK_TIMEOUT, "LOCK_TIMEOUT"); + list.add(DEFAULT_LOCK_TIMEOUT, "DEFAULT_LOCK_TIMEOUT"); + list.add(DEFAULT_TABLE_TYPE, "DEFAULT_TABLE_TYPE"); + list.add(CACHE_SIZE, "CACHE_SIZE"); + list.add(TRACE_LEVEL_SYSTEM_OUT, "TRACE_LEVEL_SYSTEM_OUT"); + list.add(TRACE_LEVEL_FILE, "TRACE_LEVEL_FILE"); + list.add(TRACE_MAX_FILE_SIZE, "TRACE_MAX_FILE_SIZE"); + list.add(COLLATION, "COLLATION"); + list.add(CLUSTER, "CLUSTER"); + list.add(WRITE_DELAY, "WRITE_DELAY"); + list.add(DATABASE_EVENT_LISTENER, "DATABASE_EVENT_LISTENER"); + list.add(MAX_MEMORY_ROWS, "MAX_MEMORY_ROWS"); + list.add(LOCK_MODE, "LOCK_MODE"); + list.add(DB_CLOSE_DELAY, "DB_CLOSE_DELAY"); + list.add(LOG, "LOG"); + list.add(THROTTLE, "THROTTLE"); + list.add(MAX_MEMORY_UNDO, "MAX_MEMORY_UNDO"); + list.add(MAX_LENGTH_INPLACE_LOB, "MAX_LENGTH_INPLACE_LOB"); + list.add(COMPRESS_LOB, "COMPRESS_LOB"); + list.add(ALLOW_LITERALS, "ALLOW_LITERALS"); + list.add(MULTI_THREADED, "MULTI_THREADED"); + list.add(SCHEMA, "SCHEMA"); + list.add(OPTIMIZE_REUSE_RESULTS, "OPTIMIZE_REUSE_RESULTS"); + list.add(SCHEMA_SEARCH_PATH, "SCHEMA_SEARCH_PATH"); + list.add(UNDO_LOG, "UNDO_LOG"); + list.add(REFERENTIAL_INTEGRITY, "REFERENTIAL_INTEGRITY"); + list.add(MVCC, "MVCC"); + list.add(MAX_OPERATION_MEMORY, "MAX_OPERATION_MEMORY"); + list.add(EXCLUSIVE, "EXCLUSIVE"); + list.add(CREATE_BUILD, "CREATE_BUILD"); + list.add(VARIABLE, "@"); + list.add(QUERY_TIMEOUT, "QUERY_TIMEOUT"); + list.add(REDO_LOG_BINARY, "REDO_LOG_BINARY"); + list.add(BINARY_COLLATION, "BINARY_COLLATION"); + list.add(JAVA_OBJECT_SERIALIZER, 
"JAVA_OBJECT_SERIALIZER"); + list.add(RETENTION_TIME, "RETENTION_TIME"); + list.add(QUERY_STATISTICS, "QUERY_STATISTICS"); + list.add(QUERY_STATISTICS_MAX_ENTRIES, "QUERY_STATISTICS_MAX_ENTRIES"); + list.add(ROW_FACTORY, "ROW_FACTORY"); + list.add(BATCH_JOINS, "BATCH_JOINS"); + list.add(FORCE_JOIN_ORDER, "FORCE_JOIN_ORDER"); + list.add(LAZY_QUERY_EXECUTION, "LAZY_QUERY_EXECUTION"); + list.add(BUILTIN_ALIAS_OVERRIDE, "BUILTIN_ALIAS_OVERRIDE"); + list.add(COLUMN_NAME_RULES, "COLUMN_NAME_RULES"); + TYPES = list; + } + + /** + * Get the set type number. + * + * @param name the set type name + * @return the number + */ + public static int getType(String name) { + return TYPES.indexOf(name); + } + + public static ArrayList<String> getTypes() { + return TYPES; + } + + /** + * Get the set type name. + * + * @param type the type number + * @return the name + */ + public static String getTypeName(int type) { + return TYPES.get(type); + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/TransactionCommand.java b/modules/h2/src/main/java/org/h2/command/dml/TransactionCommand.java new file mode 100644 index 0000000000000..55ecdf04599c9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/TransactionCommand.java @@ -0,0 +1,149 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; + +/** + * Represents a transactional statement. 
+ */ +public class TransactionCommand extends Prepared { + + private final int type; + private String savepointName; + private String transactionName; + + public TransactionCommand(Session session, int type) { + super(session); + this.type = type; + } + + public void setSavepointName(String name) { + this.savepointName = name; + } + + @Override + public int update() { + switch (type) { + case CommandInterface.SET_AUTOCOMMIT_TRUE: + session.setAutoCommit(true); + break; + case CommandInterface.SET_AUTOCOMMIT_FALSE: + session.setAutoCommit(false); + break; + case CommandInterface.BEGIN: + session.begin(); + break; + case CommandInterface.COMMIT: + session.commit(false); + break; + case CommandInterface.ROLLBACK: + session.rollback(); + break; + case CommandInterface.CHECKPOINT: + session.getUser().checkAdmin(); + session.getDatabase().checkpoint(); + break; + case CommandInterface.SAVEPOINT: + session.addSavepoint(savepointName); + break; + case CommandInterface.ROLLBACK_TO_SAVEPOINT: + session.rollbackToSavepoint(savepointName); + break; + case CommandInterface.CHECKPOINT_SYNC: + session.getUser().checkAdmin(); + session.getDatabase().sync(); + break; + case CommandInterface.PREPARE_COMMIT: + session.prepareCommit(transactionName); + break; + case CommandInterface.COMMIT_TRANSACTION: + session.getUser().checkAdmin(); + session.setPreparedTransaction(transactionName, true); + break; + case CommandInterface.ROLLBACK_TRANSACTION: + session.getUser().checkAdmin(); + session.setPreparedTransaction(transactionName, false); + break; + case CommandInterface.SHUTDOWN_IMMEDIATELY: + session.getUser().checkAdmin(); + session.getDatabase().shutdownImmediately(); + break; + case CommandInterface.SHUTDOWN: + case CommandInterface.SHUTDOWN_COMPACT: + case CommandInterface.SHUTDOWN_DEFRAG: { + session.getUser().checkAdmin(); + session.commit(false); + if (type == CommandInterface.SHUTDOWN_COMPACT || + type == CommandInterface.SHUTDOWN_DEFRAG) { + 
session.getDatabase().setCompactMode(type); + } + // close the database, but don't update the persistent setting + session.getDatabase().setCloseDelay(0); + Database db = session.getDatabase(); + // throttle, to allow testing concurrent + // execution of shutdown and query + session.throttle(); + for (Session s : db.getSessions(false)) { + if (db.isMultiThreaded()) { + synchronized (s) { + s.rollback(); + } + } else { + // if not multi-threaded, the session could already own + // the lock, which would result in a deadlock + // the other session can not concurrently do anything + // because the current session has locked the database + s.rollback(); + } + if (s != session) { + s.close(); + } + } + session.close(); + break; + } + default: + DbException.throwInternalError("type=" + type); + } + return 0; + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public boolean needRecompile() { + return false; + } + + public void setTransactionName(String string) { + this.transactionName = string; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return type; + } + + @Override + public boolean isCacheable() { + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/Update.java b/modules/h2/src/main/java/org/h2/command/dml/Update.java new file mode 100644 index 0000000000000..e8272b2aa6759 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/Update.java @@ -0,0 +1,271 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.command.dml; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Objects; + +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.command.CommandInterface; +import org.h2.command.Prepared; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.expression.Parameter; +import org.h2.expression.ValueExpression; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.RowList; +import org.h2.table.Column; +import org.h2.table.PlanItem; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * This class represents the statement + * UPDATE + */ +public class Update extends Prepared { + + private Expression condition; + private TableFilter targetTableFilter; // target of update + /** + * This table filter is for MERGE..USING support - not used in stand-alone DML + */ + private TableFilter sourceTableFilter; + + /** The limit expression as specified in the LIMIT clause. */ + private Expression limitExpr; + + private final ArrayList<Column> columns = New.arrayList(); + private final HashMap<Column, Expression> expressionMap = new HashMap<>(); + + public Update(Session session) { + super(session); + } + + public void setTableFilter(TableFilter tableFilter) { + this.targetTableFilter = tableFilter; + } + + public void setCondition(Expression condition) { + this.condition = condition; + } + + public Expression getCondition() { + return this.condition; + } + + /** + * Add an assignment of the form column = expression. 
+ * + * @param column the column + * @param expression the expression + */ + public void setAssignment(Column column, Expression expression) { + if (expressionMap.containsKey(column)) { + throw DbException.get(ErrorCode.DUPLICATE_COLUMN_NAME_1, column + .getName()); + } + columns.add(column); + expressionMap.put(column, expression); + if (expression instanceof Parameter) { + Parameter p = (Parameter) expression; + p.setColumn(column); + } + } + + @Override + public int update() { + targetTableFilter.startQuery(session); + targetTableFilter.reset(); + RowList rows = new RowList(session); + try { + Table table = targetTableFilter.getTable(); + session.getUser().checkRight(table, Right.UPDATE); + table.fire(session, Trigger.UPDATE, true); + table.lock(session, true, false); + int columnCount = table.getColumns().length; + // get the old rows, compute the new rows + setCurrentRowNumber(0); + int count = 0; + Column[] columns = table.getColumns(); + int limitRows = -1; + if (limitExpr != null) { + Value v = limitExpr.getValue(session); + if (v != ValueNull.INSTANCE) { + limitRows = v.getInt(); + } + } + while (targetTableFilter.next()) { + setCurrentRowNumber(count+1); + if (limitRows >= 0 && count >= limitRows) { + break; + } + if (condition == null || condition.getBooleanValue(session)) { + Row oldRow = targetTableFilter.get(); + Row newRow = table.getTemplateRow(); + boolean setOnUpdate = false; + for (int i = 0; i < columnCount; i++) { + Expression newExpr = expressionMap.get(columns[i]); + Column column = table.getColumn(i); + Value newValue; + if (newExpr == null) { + if (column.getOnUpdateExpression() != null) { + setOnUpdate = true; + } + newValue = oldRow.getValue(i); + } else if (newExpr == ValueExpression.getDefault()) { + newValue = table.getDefaultValue(session, column); + } else { + newValue = column.convert(newExpr.getValue(session)); + } + newRow.setValue(i, newValue); + } + if (setOnUpdate) { + setOnUpdate = false; + for (int i = 0; i < columnCount; 
i++) { + // Use equals here to detect changes from numeric 0 to 0.0 and similar + if (!Objects.equals(oldRow.getValue(i), newRow.getValue(i))) { + setOnUpdate = true; + break; + } + } + if (setOnUpdate) { + for (int i = 0; i < columnCount; i++) { + if (expressionMap.get(columns[i]) == null) { + Column column = table.getColumn(i); + if (column.getOnUpdateExpression() != null) { + newRow.setValue(i, table.getOnUpdateValue(session, column)); + } + } + } + } + } + table.validateConvertUpdateSequence(session, newRow); + boolean done = false; + if (table.fireRow()) { + done = table.fireBeforeRow(session, oldRow, newRow); + } + if (!done) { + rows.add(oldRow); + rows.add(newRow); + } + count++; + } + } + // TODO self referencing referential integrity constraints + // don't work if update is multi-row and 'inversed' the condition! + // probably need multi-row triggers with 'deleted' and 'inserted' + // at the same time. anyway good for sql compatibility + // TODO update in-place (but if the key changes, + // we need to update all indexes) before row triggers + + // the cached row is already updated - we need the old values + table.updateRows(this, session, rows); + if (table.fireRow()) { + rows.invalidateCache(); + for (rows.reset(); rows.hasNext();) { + Row o = rows.next(); + Row n = rows.next(); + table.fireAfterRow(session, o, n, false); + } + } + table.fire(session, Trigger.UPDATE, false); + return count; + } finally { + rows.close(); + } + } + + @Override + public String getPlanSQL() { + StatementBuilder buff = new StatementBuilder("UPDATE "); + buff.append(targetTableFilter.getPlanSQL(false)).append("\nSET\n "); + for (Column c : columns) { + Expression e = expressionMap.get(c); + buff.appendExceptFirst(",\n "); + buff.append(c.getName()).append(" = ").append(e.getSQL()); + } + if (condition != null) { + buff.append("\nWHERE ").append(StringUtils.unEnclose(condition.getSQL())); + } + if (limitExpr != null) { + buff.append("\nLIMIT ").append( + 
StringUtils.unEnclose(limitExpr.getSQL())); + } + return buff.toString(); + } + + @Override + public void prepare() { + if (condition != null) { + condition.mapColumns(targetTableFilter, 0); + condition = condition.optimize(session); + condition.createIndexConditions(session, targetTableFilter); + } + for (Column c : columns) { + Expression e = expressionMap.get(c); + e.mapColumns(targetTableFilter, 0); + if (sourceTableFilter!=null){ + e.mapColumns(sourceTableFilter, 0); + } + expressionMap.put(c, e.optimize(session)); + } + TableFilter[] filters; + if(sourceTableFilter==null){ + filters = new TableFilter[] { targetTableFilter }; + } + else{ + filters = new TableFilter[] { targetTableFilter, sourceTableFilter }; + } + PlanItem item = targetTableFilter.getBestPlanItem(session, filters, 0, + ExpressionVisitor.allColumnsForTableFilters(filters)); + targetTableFilter.setPlanItem(item); + targetTableFilter.prepare(); + } + + @Override + public boolean isTransactional() { + return true; + } + + @Override + public ResultInterface queryMeta() { + return null; + } + + @Override + public int getType() { + return CommandInterface.UPDATE; + } + + public void setLimit(Expression limit) { + this.limitExpr = limit; + } + + @Override + public boolean isCacheable() { + return true; + } + + public TableFilter getSourceTableFilter() { + return sourceTableFilter; + } + + public void setSourceTableFilter(TableFilter sourceTableFilter) { + this.sourceTableFilter = sourceTableFilter; + } +} diff --git a/modules/h2/src/main/java/org/h2/command/dml/package.html b/modules/h2/src/main/java/org/h2/command/dml/package.html new file mode 100644 index 0000000000000..66d81b0a1cd83 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/dml/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Contains DML (data manipulation language) and related SQL statements. + +

\ No newline at end of file
diff --git a/modules/h2/src/main/java/org/h2/command/package.html b/modules/h2/src/main/java/org/h2/command/package.html new file mode 100644 index 0000000000000..502ac3f67d289 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/command/package.html @@ -0,0 +1,14 @@
+<!--
+Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+and the EPL 1.0 (http://h2database.com/html/license.html).
+Initial Developer: H2 Group
+-->
+<html><head><meta http-equiv="Content-Type" content="text/html;charset=utf-8" /><title>
+Javadoc package documentation
+</title></head><body style="font: 9pt/130% Tahoma, Arial, Helvetica, sans-serif; margin: 10px 10px 10px 10px;">
+This package contains the parser and the base classes for prepared SQL statements.
+</body></html>

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/compress/CompressDeflate.java b/modules/h2/src/main/java/org/h2/compress/CompressDeflate.java new file mode 100644 index 0000000000000..39ac22c50df5b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/compress/CompressDeflate.java @@ -0,0 +1,95 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.compress; + +import java.util.StringTokenizer; +import java.util.zip.DataFormatException; +import java.util.zip.Deflater; +import java.util.zip.Inflater; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * This is a wrapper class for the Deflater class. + * This algorithm supports the following options: + *
+ * <ul>
+ * <li>l or level: -1 (default), 0 (no compression),
+ *      1 (best speed), ..., 9 (best compression)</li>
+ * <li>s or strategy: 0 (default),
+ *      1 (filtered), 2 (huffman only)</li>
+ * </ul>
    + * See also java.util.zip.Deflater for details. + */ +public class CompressDeflate implements Compressor { + + private int level = Deflater.DEFAULT_COMPRESSION; + private int strategy = Deflater.DEFAULT_STRATEGY; + + @Override + public void setOptions(String options) { + if (options == null) { + return; + } + try { + StringTokenizer tokenizer = new StringTokenizer(options); + while (tokenizer.hasMoreElements()) { + String option = tokenizer.nextToken(); + if ("level".equals(option) || "l".equals(option)) { + level = Integer.parseInt(tokenizer.nextToken()); + } else if ("strategy".equals(option) || "s".equals(option)) { + strategy = Integer.parseInt(tokenizer.nextToken()); + } + Deflater deflater = new Deflater(level); + deflater.setStrategy(strategy); + } + } catch (Exception e) { + throw DbException.get(ErrorCode.UNSUPPORTED_COMPRESSION_OPTIONS_1, options); + } + } + + @Override + public int compress(byte[] in, int inLen, byte[] out, int outPos) { + Deflater deflater = new Deflater(level); + deflater.setStrategy(strategy); + deflater.setInput(in, 0, inLen); + deflater.finish(); + int compressed = deflater.deflate(out, outPos, out.length - outPos); + while (compressed == 0) { + // the compressed length is 0, meaning compression didn't work + // (sounds like a JDK bug) + // try again, using the default strategy and compression level + strategy = Deflater.DEFAULT_STRATEGY; + level = Deflater.DEFAULT_COMPRESSION; + return compress(in, inLen, out, outPos); + } + deflater.end(); + return outPos + compressed; + } + + @Override + public int getAlgorithm() { + return Compressor.DEFLATE; + } + + @Override + public void expand(byte[] in, int inPos, int inLen, byte[] out, int outPos, + int outLen) { + Inflater decompresser = new Inflater(); + decompresser.setInput(in, inPos, inLen); + decompresser.finished(); + try { + int len = decompresser.inflate(out, outPos, outLen); + if (len != outLen) { + throw new DataFormatException(len + " " + outLen); + } + } catch 
(DataFormatException e) { + throw DbException.get(ErrorCode.COMPRESSION_ERROR, e); + } + decompresser.end(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/compress/CompressLZF.java b/modules/h2/src/main/java/org/h2/compress/CompressLZF.java new file mode 100644 index 0000000000000..50f8d4020a70c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/compress/CompressLZF.java @@ -0,0 +1,471 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * + * This code is based on the LZF algorithm from Marc Lehmann. It is a + * re-implementation of the C code: + * http://cvs.schmorp.de/liblzf/lzf_c.c?view=markup + * http://cvs.schmorp.de/liblzf/lzf_d.c?view=markup + * + * According to a mail from Marc Lehmann, it's OK to use his algorithm: + * Date: 2010-07-15 15:57 + * Subject: Re: Question about LZF licensing + * ... + * The algorithm is not copyrighted (and cannot be copyrighted afaik) - as long + * as you wrote everything yourself, without copying my code, that's just fine + * (looking is of course fine too). + * ... + * + * Still I would like to keep his copyright info: + * + * Copyright (c) 2000-2005 Marc Alexander Lehmann + * Copyright (c) 2005 Oren J. Maurice + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are met: + * + * 1. Redistributions of source code must retain the above copyright notice, + * this list of conditions and the following disclaimer. + * + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * 3. The name of the author may not be used to endorse or promote products + * derived from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ''AS IS'' AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO + * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; + * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, + * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR + * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF + * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +package org.h2.compress; + +import java.nio.ByteBuffer; + +/** + *

+ * <p>
+ * This class implements the LZF lossless data compression algorithm. LZF is a
+ * Lempel-Ziv variant with byte-aligned output, and optimized for speed.
+ * </p>
+ * <p>
+ * Safety/Use Notes:
+ * </p>
+ * <ul>
+ * <li>Each instance should be used by a single thread only.</li>
+ * <li>The data buffers should be smaller than 1 GB.</li>
+ * <li>For performance reasons, safety checks on expansion are omitted.</li>
+ * <li>Invalid compressed data can cause an ArrayIndexOutOfBoundsException.</li>
+ * </ul>
+ * <p>
+ * The LZF compressed format knows literal runs and back-references:
+ * </p>
+ * <ul>
+ * <li>Literal run: directly copy bytes from input to output.</li>
+ * <li>Back-reference: copy previous data to output stream, with specified
+ * offset from location and length. The length is at least 3 bytes.</li>
+ * </ul>
+ * <p>
+ * The first byte of the compressed stream is the control byte. For literal
+ * runs, the highest three bits of the control byte are not set, the lower
+ * bits are the literal run length, and the next bytes are data to copy directly
+ * into the output. For back-references, the highest three bits of the control
+ * byte are the back-reference length. If all three bits are set, then the
+ * back-reference length is stored in the next byte. The lower bits of the
+ * control byte combined with the next byte form the offset for the
+ * back-reference.
+ * </p>

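The control-byte layout described in the Javadoc above can be sketched in plain Java. This is an illustrative encoder/decoder of the two chunk kinds only; the class and method names (`LzfControlByteDemo`, `literalControl`, and so on) are ours, not part of the H2 API, and the offset parameter is assumed to be the already-adjusted value the compressor stores (`inPos - ref - 1`):

```java
// Illustrative sketch of the LZF control-byte layout; not H2 API.
public class LzfControlByteDemo {

    /** Control byte for a literal run: high three bits clear, low five bits = length - 1. */
    static int literalControl(int runLength) {
        return runLength - 1; // valid for runLength in 1..32 (MAX_LITERAL)
    }

    /** True if a control byte starts a literal run (highest three bits not set). */
    static boolean isLiteralRun(int ctrl) {
        return (ctrl >> 5) == 0;
    }

    /**
     * First control byte of a back-reference: the high three bits hold
     * min(matchLength - 2, 7); the low five bits are the high bits of the
     * stored offset. When the three bits are all set (7), the compressor
     * emits matchLength - 2 - 7 in an extra byte.
     */
    static int backRefControl(int matchLength, int offset) {
        int len = Math.min(matchLength - 2, 7);
        return (len << 5) | (offset >> 8);
    }
}
```

Storing `matchLength - 2` means the three control bits cover matches of 3 to 8 bytes directly; longer matches (up to MAX_REF = 264 bytes) spill into the extension byte, exactly as the `len -= 2; if (len < 7) ...` branch in `compress` shows.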
    + */ +public final class CompressLZF implements Compressor { + + /** + * The number of entries in the hash table. The size is a trade-off between + * hash collisions (reduced compression) and speed (amount that fits in CPU + * cache). + */ + private static final int HASH_SIZE = 1 << 14; + + /** + * The maximum number of literals in a chunk (32). + */ + private static final int MAX_LITERAL = 1 << 5; + + /** + * The maximum offset allowed for a back-reference (8192). + */ + private static final int MAX_OFF = 1 << 13; + + /** + * The maximum back-reference length (264). + */ + private static final int MAX_REF = (1 << 8) + (1 << 3); + + /** + * Hash table for matching byte sequences (reused for performance). + */ + private int[] cachedHashTable; + + @Override + public void setOptions(String options) { + // nothing to do + } + + /** + * Return the integer with the first two bytes 0, then the bytes at the + * index, then at index+1. + */ + private static int first(byte[] in, int inPos) { + return (in[inPos] << 8) | (in[inPos + 1] & 255); + } + + /** + * Return the integer with the first two bytes 0, then the bytes at the + * index, then at index+1. + */ + private static int first(ByteBuffer in, int inPos) { + return (in.get(inPos) << 8) | (in.get(inPos + 1) & 255); + } + + /** + * Shift the value 1 byte left, and add the byte at index inPos+2. + */ + private static int next(int v, byte[] in, int inPos) { + return (v << 8) | (in[inPos + 2] & 255); + } + + /** + * Shift the value 1 byte left, and add the byte at index inPos+2. + */ + private static int next(int v, ByteBuffer in, int inPos) { + return (v << 8) | (in.get(inPos + 2) & 255); + } + + /** + * Compute the address in the hash table. 
+ */ + private static int hash(int h) { + return ((h * 2777) >> 9) & (HASH_SIZE - 1); + } + + @Override + public int compress(byte[] in, int inLen, byte[] out, int outPos) { + int inPos = 0; + if (cachedHashTable == null) { + cachedHashTable = new int[HASH_SIZE]; + } + int[] hashTab = cachedHashTable; + int literals = 0; + outPos++; + int future = first(in, 0); + while (inPos < inLen - 4) { + byte p2 = in[inPos + 2]; + // next + future = (future << 8) + (p2 & 255); + int off = hash(future); + int ref = hashTab[off]; + hashTab[off] = inPos; + // if (ref < inPos + // && ref > 0 + // && (off = inPos - ref - 1) < MAX_OFF + // && in[ref + 2] == p2 + // && (((in[ref] & 255) << 8) | (in[ref + 1] & 255)) == + // ((future >> 8) & 0xffff)) { + if (ref < inPos + && ref > 0 + && (off = inPos - ref - 1) < MAX_OFF + && in[ref + 2] == p2 + && in[ref + 1] == (byte) (future >> 8) + && in[ref] == (byte) (future >> 16)) { + // match + int maxLen = inLen - inPos - 2; + if (maxLen > MAX_REF) { + maxLen = MAX_REF; + } + if (literals == 0) { + // multiple back-references, + // so there is no literal run control byte + outPos--; + } else { + // set the control byte at the start of the literal run + // to store the number of literals + out[outPos - literals - 1] = (byte) (literals - 1); + literals = 0; + } + int len = 3; + while (len < maxLen && in[ref + len] == in[inPos + len]) { + len++; + } + len -= 2; + if (len < 7) { + out[outPos++] = (byte) ((off >> 8) + (len << 5)); + } else { + out[outPos++] = (byte) ((off >> 8) + (7 << 5)); + out[outPos++] = (byte) (len - 7); + } + out[outPos++] = (byte) off; + // move one byte forward to allow for a literal run control byte + outPos++; + inPos += len; + // rebuild the future, and store the last bytes to the + // hashtable. Storing hashes of the last bytes in back-reference + // improves the compression ratio and only reduces speed + // slightly. 
+ future = first(in, inPos); + future = next(future, in, inPos); + hashTab[hash(future)] = inPos++; + future = next(future, in, inPos); + hashTab[hash(future)] = inPos++; + } else { + // copy one byte from input to output as part of literal + out[outPos++] = in[inPos++]; + literals++; + // at the end of this literal chunk, write the length + // to the control byte and start a new chunk + if (literals == MAX_LITERAL) { + out[outPos - literals - 1] = (byte) (literals - 1); + literals = 0; + // move ahead one byte to allow for the + // literal run control byte + outPos++; + } + } + } + // write the remaining few bytes as literals + while (inPos < inLen) { + out[outPos++] = in[inPos++]; + literals++; + if (literals == MAX_LITERAL) { + out[outPos - literals - 1] = (byte) (literals - 1); + literals = 0; + outPos++; + } + } + // writes the final literal run length to the control byte + out[outPos - literals - 1] = (byte) (literals - 1); + if (literals == 0) { + outPos--; + } + return outPos; + } + + /** + * Compress a number of bytes. 
+ * + * @param in the input data + * @param inPos the offset at the input buffer + * @param out the output area + * @param outPos the offset at the output array + * @return the end position + */ + public int compress(ByteBuffer in, int inPos, byte[] out, int outPos) { + int inLen = in.capacity() - inPos; + if (cachedHashTable == null) { + cachedHashTable = new int[HASH_SIZE]; + } + int[] hashTab = cachedHashTable; + int literals = 0; + outPos++; + int future = first(in, 0); + while (inPos < inLen - 4) { + byte p2 = in.get(inPos + 2); + // next + future = (future << 8) + (p2 & 255); + int off = hash(future); + int ref = hashTab[off]; + hashTab[off] = inPos; + // if (ref < inPos + // && ref > 0 + // && (off = inPos - ref - 1) < MAX_OFF + // && in[ref + 2] == p2 + // && (((in[ref] & 255) << 8) | (in[ref + 1] & 255)) == + // ((future >> 8) & 0xffff)) { + if (ref < inPos + && ref > 0 + && (off = inPos - ref - 1) < MAX_OFF + && in.get(ref + 2) == p2 + && in.get(ref + 1) == (byte) (future >> 8) + && in.get(ref) == (byte) (future >> 16)) { + // match + int maxLen = inLen - inPos - 2; + if (maxLen > MAX_REF) { + maxLen = MAX_REF; + } + if (literals == 0) { + // multiple back-references, + // so there is no literal run control byte + outPos--; + } else { + // set the control byte at the start of the literal run + // to store the number of literals + out[outPos - literals - 1] = (byte) (literals - 1); + literals = 0; + } + int len = 3; + while (len < maxLen && in.get(ref + len) == in.get(inPos + len)) { + len++; + } + len -= 2; + if (len < 7) { + out[outPos++] = (byte) ((off >> 8) + (len << 5)); + } else { + out[outPos++] = (byte) ((off >> 8) + (7 << 5)); + out[outPos++] = (byte) (len - 7); + } + out[outPos++] = (byte) off; + // move one byte forward to allow for a literal run control byte + outPos++; + inPos += len; + // rebuild the future, and store the last bytes to the + // hashtable. 
Storing hashes of the last bytes in back-reference + // improves the compression ratio and only reduces speed + // slightly. + future = first(in, inPos); + future = next(future, in, inPos); + hashTab[hash(future)] = inPos++; + future = next(future, in, inPos); + hashTab[hash(future)] = inPos++; + } else { + // copy one byte from input to output as part of literal + out[outPos++] = in.get(inPos++); + literals++; + // at the end of this literal chunk, write the length + // to the control byte and start a new chunk + if (literals == MAX_LITERAL) { + out[outPos - literals - 1] = (byte) (literals - 1); + literals = 0; + // move ahead one byte to allow for the + // literal run control byte + outPos++; + } + } + } + // write the remaining few bytes as literals + while (inPos < inLen) { + out[outPos++] = in.get(inPos++); + literals++; + if (literals == MAX_LITERAL) { + out[outPos - literals - 1] = (byte) (literals - 1); + literals = 0; + outPos++; + } + } + // writes the final literal run length to the control byte + out[outPos - literals - 1] = (byte) (literals - 1); + if (literals == 0) { + outPos--; + } + return outPos; + } + + @Override + public void expand(byte[] in, int inPos, int inLen, byte[] out, int outPos, + int outLen) { + // if ((inPos | outPos | outLen) < 0) { + if (inPos < 0 || outPos < 0 || outLen < 0) { + throw new IllegalArgumentException(); + } + do { + int ctrl = in[inPos++] & 255; + if (ctrl < MAX_LITERAL) { + // literal run of length = ctrl + 1, + ctrl++; + // copy to output and move forward this many bytes + // while (ctrl-- > 0) { + // out[outPos++] = in[inPos++]; + // } + System.arraycopy(in, inPos, out, outPos, ctrl); + outPos += ctrl; + inPos += ctrl; + } else { + // back reference + // the highest 3 bits are the match length + int len = ctrl >> 5; + // if the length is maxed, add the next byte to the length + if (len == 7) { + len += in[inPos++] & 255; + } + // minimum back-reference is 3 bytes, + // so 2 was subtracted before storing size + len 
+= 2; + + // ctrl is now the offset for a back-reference... + // the logical AND operation removes the length bits + ctrl = -((ctrl & 0x1f) << 8) - 1; + + // the next byte augments/increases the offset + ctrl -= in[inPos++] & 255; + + // copy the back-reference bytes from the given + // location in output to current position + ctrl += outPos; + if (outPos + len >= out.length) { + // reduce array bounds checking + throw new ArrayIndexOutOfBoundsException(); + } + for (int i = 0; i < len; i++) { + out[outPos++] = out[ctrl++]; + } + } + } while (outPos < outLen); + } + + /** + * Expand a number of compressed bytes. + * + * @param in the compressed data + * @param out the output area + */ + public static void expand(ByteBuffer in, ByteBuffer out) { + do { + int ctrl = in.get() & 255; + if (ctrl < MAX_LITERAL) { + // literal run of length = ctrl + 1, + ctrl++; + // copy to output and move forward this many bytes + // (maybe slice would be faster) + for (int i = 0; i < ctrl; i++) { + out.put(in.get()); + } + } else { + // back reference + // the highest 3 bits are the match length + int len = ctrl >> 5; + // if the length is maxed, add the next byte to the length + if (len == 7) { + len += in.get() & 255; + } + // minimum back-reference is 3 bytes, + // so 2 was subtracted before storing size + len += 2; + + // ctrl is now the offset for a back-reference... 
+ // the logical AND operation removes the length bits + ctrl = -((ctrl & 0x1f) << 8) - 1; + + // the next byte augments/increases the offset + ctrl -= in.get() & 255; + + // copy the back-reference bytes from the given + // location in output to current position + // (maybe slice would be faster) + ctrl += out.position(); + for (int i = 0; i < len; i++) { + out.put(out.get(ctrl++)); + } + } + } while (out.position() < out.capacity()); + } + + @Override + public int getAlgorithm() { + return Compressor.LZF; + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/compress/CompressNo.java b/modules/h2/src/main/java/org/h2/compress/CompressNo.java new file mode 100644 index 0000000000000..c17c47b09232b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/compress/CompressNo.java @@ -0,0 +1,37 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.compress; + +/** + * This class implements a data compression algorithm that does in fact not + * compress. This is useful if the data can not be compressed because it is + * encrypted, already compressed, or random. 
+ */ +public class CompressNo implements Compressor { + + @Override + public int getAlgorithm() { + return Compressor.NO; + } + + @Override + public void setOptions(String options) { + // nothing to do + } + + @Override + public int compress(byte[] in, int inLen, byte[] out, int outPos) { + System.arraycopy(in, 0, out, outPos, inLen); + return outPos + inLen; + } + + @Override + public void expand(byte[] in, int inPos, int inLen, byte[] out, int outPos, + int outLen) { + System.arraycopy(in, inPos, out, outPos, outLen); + } + +} diff --git a/modules/h2/src/main/java/org/h2/compress/Compressor.java b/modules/h2/src/main/java/org/h2/compress/Compressor.java new file mode 100644 index 0000000000000..711fd33340b75 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/compress/Compressor.java @@ -0,0 +1,67 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.compress; + + +/** + * Each data compression algorithm must implement this interface. + */ +public interface Compressor { + + /** + * No compression is used. + */ + int NO = 0; + + /** + * The LZF compression algorithm is used + */ + int LZF = 1; + + /** + * The DEFLATE compression algorithm is used. + */ + int DEFLATE = 2; + + /** + * Get the compression algorithm type. + * + * @return the type + */ + int getAlgorithm(); + + /** + * Compress a number of bytes. + * + * @param in the input data + * @param inLen the number of bytes to compress + * @param out the output area + * @param outPos the offset at the output array + * @return the end position + */ + int compress(byte[] in, int inLen, byte[] out, int outPos); + + /** + * Expand a number of compressed bytes. 
+ * + * @param in the compressed data + * @param inPos the offset at the input array + * @param inLen the number of bytes to read + * @param out the output area + * @param outPos the offset at the output array + * @param outLen the size of the uncompressed data + */ + void expand(byte[] in, int inPos, int inLen, byte[] out, int outPos, + int outLen); + + /** + * Set the compression options. This may include settings for + * higher performance but less compression. + * + * @param options the options + */ + void setOptions(String options); +} diff --git a/modules/h2/src/main/java/org/h2/compress/LZFInputStream.java b/modules/h2/src/main/java/org/h2/compress/LZFInputStream.java new file mode 100644 index 0000000000000..33e9468e74515 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/compress/LZFInputStream.java @@ -0,0 +1,133 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.compress; + +import java.io.IOException; +import java.io.InputStream; +import org.h2.message.DbException; +import org.h2.util.Utils; + +/** + * An input stream to read from an LZF stream. + * The data is automatically expanded. + */ +public class LZFInputStream extends InputStream { + + private final InputStream in; + private CompressLZF decompress = new CompressLZF(); + private int pos; + private int bufferLength; + private byte[] inBuffer; + private byte[] buffer; + + public LZFInputStream(InputStream in) throws IOException { + this.in = in; + if (readInt() != LZFOutputStream.MAGIC) { + throw new IOException("Not an LZFInputStream"); + } + } + + private static byte[] ensureSize(byte[] buff, int len) { + return buff == null || buff.length < len ? 
Utils.newBytes(len) : buff; + } + + private void fillBuffer() throws IOException { + if (buffer != null && pos < bufferLength) { + return; + } + int len = readInt(); + if (decompress == null) { + // EOF + this.bufferLength = 0; + } else if (len < 0) { + len = -len; + buffer = ensureSize(buffer, len); + readFully(buffer, len); + this.bufferLength = len; + } else { + inBuffer = ensureSize(inBuffer, len); + int size = readInt(); + readFully(inBuffer, len); + buffer = ensureSize(buffer, size); + try { + decompress.expand(inBuffer, 0, len, buffer, 0, size); + } catch (ArrayIndexOutOfBoundsException e) { + DbException.convertToIOException(e); + } + this.bufferLength = size; + } + pos = 0; + } + + private void readFully(byte[] buff, int len) throws IOException { + int off = 0; + while (len > 0) { + int l = in.read(buff, off, len); + len -= l; + off += l; + } + } + + private int readInt() throws IOException { + int x = in.read(); + if (x < 0) { + decompress = null; + return 0; + } + x = (x << 24) + (in.read() << 16) + (in.read() << 8) + in.read(); + return x; + } + + @Override + public int read() throws IOException { + fillBuffer(); + if (pos >= bufferLength) { + return -1; + } + return buffer[pos++] & 255; + } + + @Override + public int read(byte[] b) throws IOException { + return read(b, 0, b.length); + } + + @Override + public int read(byte[] b, int off, int len) throws IOException { + if (len == 0) { + return 0; + } + int read = 0; + while (len > 0) { + int r = readBlock(b, off, len); + if (r < 0) { + break; + } + read += r; + off += r; + len -= r; + } + return read == 0 ? 
-1 : read; + } + + private int readBlock(byte[] b, int off, int len) throws IOException { + fillBuffer(); + if (pos >= bufferLength) { + return -1; + } + int max = Math.min(len, bufferLength - pos); + max = Math.min(max, b.length - off); + System.arraycopy(buffer, pos, b, off, max); + pos += max; + return max; + } + + @Override + public void close() throws IOException { + in.close(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/compress/LZFOutputStream.java b/modules/h2/src/main/java/org/h2/compress/LZFOutputStream.java new file mode 100644 index 0000000000000..36634a23837b1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/compress/LZFOutputStream.java @@ -0,0 +1,102 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.compress; + +import java.io.IOException; +import java.io.OutputStream; +import org.h2.engine.Constants; + +/** + * An output stream to write an LZF stream. + * The data is automatically compressed. + */ +public class LZFOutputStream extends OutputStream { + + /** + * The file header of a LZF file. + */ + static final int MAGIC = ('H' << 24) | ('2' << 16) | ('I' << 8) | 'S'; + + private final OutputStream out; + private final CompressLZF compress = new CompressLZF(); + private final byte[] buffer; + private int pos; + private byte[] outBuffer; + + public LZFOutputStream(OutputStream out) throws IOException { + this.out = out; + int len = Constants.IO_BUFFER_SIZE_COMPRESS; + buffer = new byte[len]; + ensureOutput(len); + writeInt(MAGIC); + } + + private void ensureOutput(int len) { + // TODO calculate the maximum overhead (worst case) for the output + // buffer + int outputLen = (len < 100 ? 
len + 100 : len) * 2; + if (outBuffer == null || outBuffer.length < outputLen) { + outBuffer = new byte[outputLen]; + } + } + + @Override + public void write(int b) throws IOException { + if (pos >= buffer.length) { + flush(); + } + buffer[pos++] = (byte) b; + } + + private void compressAndWrite(byte[] buff, int len) throws IOException { + if (len > 0) { + ensureOutput(len); + int compressed = compress.compress(buff, len, outBuffer, 0); + if (compressed > len) { + writeInt(-len); + out.write(buff, 0, len); + } else { + writeInt(compressed); + writeInt(len); + out.write(outBuffer, 0, compressed); + } + } + } + + private void writeInt(int x) throws IOException { + out.write((byte) (x >> 24)); + out.write((byte) (x >> 16)); + out.write((byte) (x >> 8)); + out.write((byte) x); + } + + @Override + public void write(byte[] buff, int off, int len) throws IOException { + while (len > 0) { + int copy = Math.min(buffer.length - pos, len); + System.arraycopy(buff, off, buffer, pos, copy); + pos += copy; + if (pos >= buffer.length) { + flush(); + } + off += copy; + len -= copy; + } + } + + @Override + public void flush() throws IOException { + compressAndWrite(buffer, pos); + pos = 0; + } + + @Override + public void close() throws IOException { + flush(); + out.close(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/compress/package.html b/modules/h2/src/main/java/org/h2/compress/package.html new file mode 100644 index 0000000000000..88571f47fe3cc --- /dev/null +++ b/modules/h2/src/main/java/org/h2/compress/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Lossless data compression classes. + +

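The stream framing implemented by LZFOutputStream/LZFInputStream above is simple: a four-byte magic header packing 'H','2','I','S' big-endian, then, per block, either a negative length followed by the bytes stored uncompressed (when compression did not help), or the compressed length, the expanded length, and the compressed bytes. A minimal sketch of the stored-block path using only the JDK (class and method names here are illustrative, not H2 API):

```java
import java.io.ByteArrayOutputStream;

// Illustrative sketch of the LZF stream framing; not H2 API.
public class LzfFramingDemo {

    /** The magic header: 'H' '2' 'I' 'S' packed big-endian, as in LZFOutputStream.MAGIC. */
    static final int MAGIC = ('H' << 24) | ('2' << 16) | ('I' << 8) | 'S';

    /** Big-endian int, mirroring writeInt() in LZFOutputStream. */
    static void writeInt(ByteArrayOutputStream out, int x) {
        out.write((byte) (x >> 24));
        out.write((byte) (x >> 16));
        out.write((byte) (x >> 8));
        out.write((byte) x);
    }

    /** Frame one block that is stored uncompressed (negative length marker). */
    static byte[] frameStoredBlock(byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeInt(out, MAGIC);        // stream header
        writeInt(out, -data.length); // negative length: block stored as-is
        out.write(data, 0, data.length);
        return out.toByteArray();
    }
}
```

The negative-length marker is what lets `fillBuffer()` in LZFInputStream skip decompression for incompressible blocks and copy them straight into its buffer.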
    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/constraint/Constraint.java b/modules/h2/src/main/java/org/h2/constraint/Constraint.java new file mode 100644 index 0000000000000..003d1e9085bd0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/constraint/Constraint.java @@ -0,0 +1,196 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.constraint; + +import java.util.HashSet; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.expression.ExpressionVisitor; +import org.h2.index.Index; +import org.h2.message.Trace; +import org.h2.result.Row; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObjectBase; +import org.h2.table.Column; +import org.h2.table.Table; + +/** + * The base class for constraint checking. + */ +public abstract class Constraint extends SchemaObjectBase implements + Comparable { + + public enum Type { + /** + * The constraint type for check constraints. + */ + CHECK, + /** + * The constraint type for primary key constraints. + */ + PRIMARY_KEY, + /** + * The constraint type for unique constraints. + */ + UNIQUE, + /** + * The constraint type for referential constraints. + */ + REFERENTIAL; + + /** + * Get standard SQL type name. + * + * @return standard SQL type name + */ + public String getSqlName() { + if (this == Constraint.Type.PRIMARY_KEY) { + return "PRIMARY KEY"; + } + if (this == Constraint.Type.REFERENTIAL) { + return "FOREIGN KEY"; + } + return name(); + } + + } + + /** + * The table for which this constraint is defined. 
+ */ + protected Table table; + + Constraint(Schema schema, int id, String name, Table table) { + initSchemaObjectBase(schema, id, name, Trace.CONSTRAINT); + this.table = table; + this.setTemporary(table.isTemporary()); + } + + /** + * The constraint type name + * + * @return the name + */ + public abstract Type getConstraintType(); + + /** + * Check if this row fulfils the constraint. + * This method throws an exception if not. + * + * @param session the session + * @param t the table + * @param oldRow the old row + * @param newRow the new row + */ + public abstract void checkRow(Session session, Table t, Row oldRow, Row newRow); + + /** + * Check if this constraint needs the specified index. + * + * @param index the index + * @return true if the index is used + */ + public abstract boolean usesIndex(Index index); + + /** + * This index is now the owner of the specified index. + * + * @param index the index + */ + public abstract void setIndexOwner(Index index); + + /** + * Get all referenced columns. + * + * @param table the table + * @return the set of referenced columns + */ + public abstract HashSet getReferencedColumns(Table table); + + /** + * Get the SQL statement to create this constraint. + * + * @return the SQL statement + */ + public abstract String getCreateSQLWithoutIndexes(); + + /** + * Check if this constraint needs to be checked before updating the data. + * + * @return true if it must be checked before updating + */ + public abstract boolean isBefore(); + + /** + * Check the existing data. This method is called if the constraint is added + * after data has been inserted into the table. + * + * @param session the session + */ + public abstract void checkExistingData(Session session); + + /** + * This method is called after a related table has changed + * (the table was renamed, or columns have been renamed). + */ + public abstract void rebuild(); + + /** + * Get the unique index used to enforce this constraint, or null if no index + * is used. 
+ * + * @return the index + */ + public abstract Index getUniqueIndex(); + + @Override + public void checkRename() { + // ok + } + + @Override + public int getType() { + return DbObject.CONSTRAINT; + } + + public Table getTable() { + return table; + } + + public Table getRefTable() { + return table; + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public int compareTo(Constraint other) { + if (this == other) { + return 0; + } + return Integer.compare(getConstraintType().ordinal(), other.getConstraintType().ordinal()); + } + + @Override + public boolean isHidden() { + return table.isHidden(); + } + + /** + * Visit all elements in the constraint. + * + * @param visitor the visitor + * @return true if every visited expression returned true, or if there are + * no expressions + */ + public boolean isEverything(@SuppressWarnings("unused") ExpressionVisitor visitor) { + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/constraint/ConstraintActionType.java b/modules/h2/src/main/java/org/h2/constraint/ConstraintActionType.java new file mode 100644 index 0000000000000..4ac9292ee9628 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/constraint/ConstraintActionType.java @@ -0,0 +1,44 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.constraint; + +public enum ConstraintActionType { + /** + * The action is to restrict the operation. + */ + RESTRICT, + + /** + * The action is to cascade the operation. + */ + CASCADE, + + /** + * The action is to set the value to the default value. + */ + SET_DEFAULT, + + /** + * The action is to set the value to NULL. + */ + SET_NULL; + + /** + * Get standard SQL type name. 
+ * + * @return standard SQL type name + */ + public String getSqlName() { + if (this == ConstraintActionType.SET_DEFAULT) { + return "SET DEFAULT"; + } + if (this == SET_NULL) { + return "SET NULL"; + } + return name(); + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/constraint/ConstraintCheck.java b/modules/h2/src/main/java/org/h2/constraint/ConstraintCheck.java new file mode 100644 index 0000000000000..67e1f32466835 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/constraint/ConstraintCheck.java @@ -0,0 +1,172 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.constraint; + +import java.util.HashSet; +import java.util.Iterator; +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * A check constraint. 
+ */ +public class ConstraintCheck extends Constraint { + + private TableFilter filter; + private Expression expr; + + public ConstraintCheck(Schema schema, int id, String name, Table table) { + super(schema, id, name, table); + } + + @Override + public Type getConstraintType() { + return Constraint.Type.CHECK; + } + + public void setTableFilter(TableFilter filter) { + this.filter = filter; + } + + public void setExpression(Expression expr) { + this.expr = expr; + } + + @Override + public String getCreateSQLForCopy(Table forTable, String quotedName) { + StringBuilder buff = new StringBuilder("ALTER TABLE "); + buff.append(forTable.getSQL()).append(" ADD CONSTRAINT "); + if (forTable.isHidden()) { + buff.append("IF NOT EXISTS "); + } + buff.append(quotedName); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + buff.append(" CHECK").append(StringUtils.enclose(expr.getSQL())) + .append(" NOCHECK"); + return buff.toString(); + } + + private String getShortDescription() { + return getName() + ": " + expr.getSQL(); + } + + @Override + public String getCreateSQLWithoutIndexes() { + return getCreateSQL(); + } + + @Override + public String getCreateSQL() { + return getCreateSQLForCopy(table, getSQL()); + } + + @Override + public void removeChildrenAndResources(Session session) { + table.removeConstraint(this); + database.removeMeta(session, getId()); + filter = null; + expr = null; + table = null; + invalidate(); + } + + @Override + public void checkRow(Session session, Table t, Row oldRow, Row newRow) { + if (newRow == null) { + return; + } + filter.set(newRow); + boolean b; + try { + Value v = expr.getValue(session); + // Both TRUE and NULL are ok + b = v == ValueNull.INSTANCE || v.getBoolean(); + } catch (DbException ex) { + throw DbException.get(ErrorCode.CHECK_CONSTRAINT_INVALID, ex, + getShortDescription()); + } + if (!b) { + throw DbException.get(ErrorCode.CHECK_CONSTRAINT_VIOLATED_1, + getShortDescription()); + } + 
} + + @Override + public boolean usesIndex(Index index) { + return false; + } + + @Override + public void setIndexOwner(Index index) { + DbException.throwInternalError(toString()); + } + + @Override + public HashSet getReferencedColumns(Table table) { + HashSet columns = new HashSet<>(); + expr.isEverything(ExpressionVisitor.getColumnsVisitor(columns)); + for (Iterator it = columns.iterator(); it.hasNext();) { + if (it.next().getTable() != table) { + it.remove(); + } + } + return columns; + } + + public Expression getExpression() { + return expr; + } + + @Override + public boolean isBefore() { + return true; + } + + @Override + public void checkExistingData(Session session) { + if (session.getDatabase().isStarting()) { + // don't check at startup + return; + } + String sql = "SELECT 1 FROM " + filter.getTable().getSQL() + + " WHERE NOT(" + expr.getSQL() + ")"; + ResultInterface r = session.prepare(sql).query(1); + if (r.next()) { + throw DbException.get(ErrorCode.CHECK_CONSTRAINT_VIOLATED_1, getName()); + } + } + + @Override + public Index getUniqueIndex() { + return null; + } + + @Override + public void rebuild() { + // nothing to do + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return expr.isEverything(visitor); + } + +} diff --git a/modules/h2/src/main/java/org/h2/constraint/ConstraintReferential.java b/modules/h2/src/main/java/org/h2/constraint/ConstraintReferential.java new file mode 100644 index 0000000000000..a5be48c31f819 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/constraint/ConstraintReferential.java @@ -0,0 +1,645 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.constraint; + +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.command.Parser; +import org.h2.command.Prepared; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.Parameter; +import org.h2.index.Cursor; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.Table; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * A referential constraint. + */ +public class ConstraintReferential extends Constraint { + + private IndexColumn[] columns; + private IndexColumn[] refColumns; + private ConstraintActionType deleteAction = ConstraintActionType.RESTRICT; + private ConstraintActionType updateAction = ConstraintActionType.RESTRICT; + private Table refTable; + private Index index; + private Index refIndex; + private boolean indexOwner; + private boolean refIndexOwner; + private String deleteSQL, updateSQL; + private boolean skipOwnTable; + + public ConstraintReferential(Schema schema, int id, String name, Table table) { + super(schema, id, name, table); + } + + @Override + public Type getConstraintType() { + return Constraint.Type.REFERENTIAL; + } + + /** + * Create the SQL statement of this object so a copy of the table can be + * made. 
+ * + * @param forTable the table to create the object for + * @param quotedName the name of this object (quoted if necessary) + * @return the SQL statement + */ + @Override + public String getCreateSQLForCopy(Table forTable, String quotedName) { + return getCreateSQLForCopy(forTable, refTable, quotedName, true); + } + + /** + * Create the SQL statement of this object so a copy of the table can be + * made. + * + * @param forTable the table to create the object for + * @param forRefTable the referenced table + * @param quotedName the name of this object (quoted if necessary) + * @param internalIndex add the index name to the statement + * @return the SQL statement + */ + public String getCreateSQLForCopy(Table forTable, Table forRefTable, + String quotedName, boolean internalIndex) { + StatementBuilder buff = new StatementBuilder("ALTER TABLE "); + String mainTable = forTable.getSQL(); + buff.append(mainTable).append(" ADD CONSTRAINT "); + if (forTable.isHidden()) { + buff.append("IF NOT EXISTS "); + } + buff.append(quotedName); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + IndexColumn[] cols = columns; + IndexColumn[] refCols = refColumns; + buff.append(" FOREIGN KEY("); + for (IndexColumn c : cols) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + buff.append(')'); + if (internalIndex && indexOwner && forTable == this.table) { + buff.append(" INDEX ").append(index.getSQL()); + } + buff.append(" REFERENCES "); + String quotedRefTable; + if (this.table == this.refTable) { + // self-referencing constraints: need to use new table + quotedRefTable = forTable.getSQL(); + } else { + quotedRefTable = forRefTable.getSQL(); + } + buff.append(quotedRefTable).append('('); + buff.resetCount(); + for (IndexColumn r : refCols) { + buff.appendExceptFirst(", "); + buff.append(r.getSQL()); + } + buff.append(')'); + if (internalIndex && refIndexOwner && forTable == this.table) { + buff.append(" INDEX 
").append(refIndex.getSQL()); + } + if (deleteAction != ConstraintActionType.RESTRICT) { + buff.append(" ON DELETE ").append(deleteAction.getSqlName()); + } + if (updateAction != ConstraintActionType.RESTRICT) { + buff.append(" ON UPDATE ").append(updateAction.getSqlName()); + } + return buff.append(" NOCHECK").toString(); + } + + + /** + * Get a short description of the constraint. This includes the constraint + * name (if set), and the constraint expression. + * + * @param searchIndex the index, or null + * @param check the row, or null + * @return the description + */ + private String getShortDescription(Index searchIndex, SearchRow check) { + StatementBuilder buff = new StatementBuilder(getName()); + buff.append(": ").append(table.getSQL()).append(" FOREIGN KEY("); + for (IndexColumn c : columns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + buff.append(") REFERENCES ").append(refTable.getSQL()).append('('); + buff.resetCount(); + for (IndexColumn r : refColumns) { + buff.appendExceptFirst(", "); + buff.append(r.getSQL()); + } + buff.append(')'); + if (searchIndex != null && check != null) { + buff.append(" ("); + buff.resetCount(); + Column[] cols = searchIndex.getColumns(); + int len = Math.min(columns.length, cols.length); + for (int i = 0; i < len; i++) { + int idx = cols[i].getColumnId(); + Value c = check.getValue(idx); + buff.appendExceptFirst(", "); + buff.append(c == null ? 
"" : c.toString()); + } + buff.append(')'); + } + return buff.toString(); + } + + @Override + public String getCreateSQLWithoutIndexes() { + return getCreateSQLForCopy(table, refTable, getSQL(), false); + } + + @Override + public String getCreateSQL() { + return getCreateSQLForCopy(table, getSQL()); + } + + public void setColumns(IndexColumn[] cols) { + columns = cols; + } + + public IndexColumn[] getColumns() { + return columns; + } + + @Override + public HashSet<Column> getReferencedColumns(Table table) { + HashSet<Column> result = new HashSet<>(); + if (table == this.table) { + for (IndexColumn c : columns) { + result.add(c.column); + } + } else if (table == this.refTable) { + for (IndexColumn c : refColumns) { + result.add(c.column); + } + } + return result; + } + + public void setRefColumns(IndexColumn[] refCols) { + refColumns = refCols; + } + + public IndexColumn[] getRefColumns() { + return refColumns; + } + + public void setRefTable(Table refTable) { + this.refTable = refTable; + if (refTable.isTemporary()) { + setTemporary(true); + } + } + + /** + * Set the index to use for this constraint. + * + * @param index the index + * @param isOwner true if the index is generated by the system and belongs + * to this constraint + */ + public void setIndex(Index index, boolean isOwner) { + this.index = index; + this.indexOwner = isOwner; + } + + /** + * Set the index of the referenced table to use for this constraint. 
+ * + * @param refIndex the index + * @param isRefOwner true if the index is generated by the system and + * belongs to this constraint + */ + public void setRefIndex(Index refIndex, boolean isRefOwner) { + this.refIndex = refIndex; + this.refIndexOwner = isRefOwner; + } + + @Override + public void removeChildrenAndResources(Session session) { + table.removeConstraint(this); + refTable.removeConstraint(this); + if (indexOwner) { + table.removeIndexOrTransferOwnership(session, index); + } + if (refIndexOwner) { + refTable.removeIndexOrTransferOwnership(session, refIndex); + } + database.removeMeta(session, getId()); + refTable = null; + index = null; + refIndex = null; + columns = null; + refColumns = null; + deleteSQL = null; + updateSQL = null; + table = null; + invalidate(); + } + + @Override + public void checkRow(Session session, Table t, Row oldRow, Row newRow) { + if (!database.getReferentialIntegrity()) { + return; + } + if (!table.getCheckForeignKeyConstraints() || + !refTable.getCheckForeignKeyConstraints()) { + return; + } + if (t == table) { + if (!skipOwnTable) { + checkRowOwnTable(session, oldRow, newRow); + } + } + if (t == refTable) { + checkRowRefTable(session, oldRow, newRow); + } + } + + private void checkRowOwnTable(Session session, Row oldRow, Row newRow) { + if (newRow == null) { + return; + } + boolean constraintColumnsEqual = oldRow != null; + for (IndexColumn col : columns) { + int idx = col.column.getColumnId(); + Value v = newRow.getValue(idx); + if (v == ValueNull.INSTANCE) { + // return early if one of the columns is NULL + return; + } + if (constraintColumnsEqual) { + if (!database.areEqual(v, oldRow.getValue(idx))) { + constraintColumnsEqual = false; + } + } + } + if (constraintColumnsEqual) { + // return early if the key columns didn't change + return; + } + if (refTable == table) { + // special case self referencing constraints: + // check the inserted row first + boolean self = true; + for (int i = 0, len = columns.length; i < len; 
i++) { + int idx = columns[i].column.getColumnId(); + Value v = newRow.getValue(idx); + Column refCol = refColumns[i].column; + int refIdx = refCol.getColumnId(); + Value r = newRow.getValue(refIdx); + if (!database.areEqual(r, v)) { + self = false; + break; + } + } + if (self) { + return; + } + } + Row check = refTable.getTemplateRow(); + for (int i = 0, len = columns.length; i < len; i++) { + int idx = columns[i].column.getColumnId(); + Value v = newRow.getValue(idx); + Column refCol = refColumns[i].column; + int refIdx = refCol.getColumnId(); + check.setValue(refIdx, refCol.convert(v)); + } + if (!existsRow(session, refIndex, check, null)) { + throw DbException.get(ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, + getShortDescription(refIndex, check)); + } + } + + private boolean existsRow(Session session, Index searchIndex, + SearchRow check, Row excluding) { + Table searchTable = searchIndex.getTable(); + searchTable.lock(session, false, false); + Cursor cursor = searchIndex.find(session, check, check); + while (cursor.next()) { + SearchRow found; + found = cursor.getSearchRow(); + if (excluding != null && found.getKey() == excluding.getKey()) { + continue; + } + Column[] cols = searchIndex.getColumns(); + boolean allEqual = true; + int len = Math.min(columns.length, cols.length); + for (int i = 0; i < len; i++) { + int idx = cols[i].getColumnId(); + Value c = check.getValue(idx); + Value f = found.getValue(idx); + if (searchTable.compareTypeSafe(c, f) != 0) { + allEqual = false; + break; + } + } + if (allEqual) { + return true; + } + } + return false; + } + + private boolean isEqual(Row oldRow, Row newRow) { + return refIndex.compareRows(oldRow, newRow) == 0; + } + + private void checkRow(Session session, Row oldRow) { + SearchRow check = table.getTemplateSimpleRow(false); + for (int i = 0, len = columns.length; i < len; i++) { + Column refCol = refColumns[i].column; + int refIdx = refCol.getColumnId(); + Column col = columns[i].column; + Value v = 
col.convert(oldRow.getValue(refIdx)); + if (v == ValueNull.INSTANCE) { + return; + } + check.setValue(col.getColumnId(), v); + } + // exclude the row only for self-referencing constraints + Row excluding = (refTable == table) ? oldRow : null; + if (existsRow(session, index, check, excluding)) { + throw DbException.get(ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_CHILD_EXISTS_1, + getShortDescription(index, check)); + } + } + + private void checkRowRefTable(Session session, Row oldRow, Row newRow) { + if (oldRow == null) { + // this is an insert + return; + } + if (newRow != null && isEqual(oldRow, newRow)) { + // on an update, if both old and new are the same, don't do anything + return; + } + if (newRow == null) { + // this is a delete + if (deleteAction == ConstraintActionType.RESTRICT) { + checkRow(session, oldRow); + } else { + int i = deleteAction == ConstraintActionType.CASCADE ? 0 : columns.length; + Prepared deleteCommand = getDelete(session); + setWhere(deleteCommand, i, oldRow); + updateWithSkipCheck(deleteCommand); + } + } else { + // this is an update + if (updateAction == ConstraintActionType.RESTRICT) { + checkRow(session, oldRow); + } else { + Prepared updateCommand = getUpdate(session); + if (updateAction == ConstraintActionType.CASCADE) { + ArrayList<Parameter> params = updateCommand.getParameters(); + for (int i = 0, len = columns.length; i < len; i++) { + Parameter param = params.get(i); + Column refCol = refColumns[i].column; + param.setValue(newRow.getValue(refCol.getColumnId())); + } + } + setWhere(updateCommand, columns.length, oldRow); + updateWithSkipCheck(updateCommand); + } + } + } + + private void updateWithSkipCheck(Prepared prep) { + // TODO constraints: maybe delay the update or support delayed checks + // (until commit) + try { + // TODO multithreaded kernel: this works only if nobody else updates + // this or the ref table at the same time + skipOwnTable = true; + prep.update(); + } finally { + skipOwnTable = false; + } + } + + private void 
setWhere(Prepared command, int pos, Row row) { + for (int i = 0, len = refColumns.length; i < len; i++) { + int idx = refColumns[i].column.getColumnId(); + Value v = row.getValue(idx); + ArrayList<Parameter> params = command.getParameters(); + Parameter param = params.get(pos + i); + param.setValue(v); + } + } + + public ConstraintActionType getDeleteAction() { + return deleteAction; + } + + /** + * Set the action to apply (restrict, cascade,...) on a delete. + * + * @param action the action + */ + public void setDeleteAction(ConstraintActionType action) { + if (action == deleteAction && deleteSQL == null) { + return; + } + if (deleteAction != ConstraintActionType.RESTRICT) { + throw DbException.get(ErrorCode.CONSTRAINT_ALREADY_EXISTS_1, "ON DELETE"); + } + this.deleteAction = action; + buildDeleteSQL(); + } + + private void buildDeleteSQL() { + if (deleteAction == ConstraintActionType.RESTRICT) { + return; + } + StatementBuilder buff = new StatementBuilder(); + if (deleteAction == ConstraintActionType.CASCADE) { + buff.append("DELETE FROM ").append(table.getSQL()); + } else { + appendUpdate(buff); + } + appendWhere(buff); + deleteSQL = buff.toString(); + } + + private Prepared getUpdate(Session session) { + return prepare(session, updateSQL, updateAction); + } + + private Prepared getDelete(Session session) { + return prepare(session, deleteSQL, deleteAction); + } + + public ConstraintActionType getUpdateAction() { + return updateAction; + } + + /** + * Set the action to apply (restrict, cascade,...) on an update. 
+ * + * @param action the action + */ + public void setUpdateAction(ConstraintActionType action) { + if (action == updateAction && updateSQL == null) { + return; + } + if (updateAction != ConstraintActionType.RESTRICT) { + throw DbException.get(ErrorCode.CONSTRAINT_ALREADY_EXISTS_1, "ON UPDATE"); + } + this.updateAction = action; + buildUpdateSQL(); + } + + private void buildUpdateSQL() { + if (updateAction == ConstraintActionType.RESTRICT) { + return; + } + StatementBuilder buff = new StatementBuilder(); + appendUpdate(buff); + appendWhere(buff); + updateSQL = buff.toString(); + } + + @Override + public void rebuild() { + buildUpdateSQL(); + buildDeleteSQL(); + } + + private Prepared prepare(Session session, String sql, ConstraintActionType action) { + Prepared command = session.prepare(sql); + if (action != ConstraintActionType.CASCADE) { + ArrayList<Parameter> params = command.getParameters(); + for (int i = 0, len = columns.length; i < len; i++) { + Column column = columns[i].column; + Parameter param = params.get(i); + Value value; + if (action == ConstraintActionType.SET_NULL) { + value = ValueNull.INSTANCE; + } else { + Expression expr = column.getDefaultExpression(); + if (expr == null) { + throw DbException.get(ErrorCode.NO_DEFAULT_SET_1, column.getName()); + } + value = expr.getValue(session); + } + param.setValue(value); + } + } + return command; + } + + private void appendUpdate(StatementBuilder buff) { + buff.append("UPDATE ").append(table.getSQL()).append(" SET "); + buff.resetCount(); + for (IndexColumn c : columns) { + buff.appendExceptFirst(" , "); + buff.append(Parser.quoteIdentifier(c.column.getName())).append("=?"); + } + } + + private void appendWhere(StatementBuilder buff) { + buff.append(" WHERE "); + buff.resetCount(); + for (IndexColumn c : columns) { + buff.appendExceptFirst(" AND "); + buff.append(Parser.quoteIdentifier(c.column.getName())).append("=?"); + } + } + + @Override + public Table getRefTable() { + return refTable; + } + + @Override + 
public boolean usesIndex(Index idx) { + return idx == index || idx == refIndex; + } + + @Override + public void setIndexOwner(Index index) { + if (this.index == index) { + indexOwner = true; + } else if (this.refIndex == index) { + refIndexOwner = true; + } else { + DbException.throwInternalError(index + " " + toString()); + } + } + + @Override + public boolean isBefore() { + return false; + } + + @Override + public void checkExistingData(Session session) { + if (session.getDatabase().isStarting()) { + // don't check at startup + return; + } + session.startStatementWithinTransaction(); + StatementBuilder buff = new StatementBuilder("SELECT 1 FROM (SELECT "); + for (IndexColumn c : columns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + buff.append(" FROM ").append(table.getSQL()).append(" WHERE "); + buff.resetCount(); + for (IndexColumn c : columns) { + buff.appendExceptFirst(" AND "); + buff.append(c.getSQL()).append(" IS NOT NULL "); + } + buff.append(" ORDER BY "); + buff.resetCount(); + for (IndexColumn c : columns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + buff.append(") C WHERE NOT EXISTS(SELECT 1 FROM "). + append(refTable.getSQL()).append(" P WHERE "); + buff.resetCount(); + int i = 0; + for (IndexColumn c : columns) { + buff.appendExceptFirst(" AND "); + buff.append("C.").append(c.getSQL()).append('='). 
+ append("P.").append(refColumns[i++].getSQL()); + } + buff.append(')'); + String sql = buff.toString(); + ResultInterface r = session.prepare(sql).query(1); + if (r.next()) { + throw DbException.get(ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, + getShortDescription(null, null)); + } + } + + @Override + public Index getUniqueIndex() { + return refIndex; + } + +} diff --git a/modules/h2/src/main/java/org/h2/constraint/ConstraintUnique.java b/modules/h2/src/main/java/org/h2/constraint/ConstraintUnique.java new file mode 100644 index 0000000000000..824de5315e089 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/constraint/ConstraintUnique.java @@ -0,0 +1,157 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.constraint; + +import java.util.HashSet; +import org.h2.command.Parser; +import org.h2.engine.Session; +import org.h2.index.Index; +import org.h2.result.Row; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.Table; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; + +/** + * A unique constraint. This object is always backed by a unique index. + */ +public class ConstraintUnique extends Constraint { + + private Index index; + private boolean indexOwner; + private IndexColumn[] columns; + private final boolean primaryKey; + + public ConstraintUnique(Schema schema, int id, String name, Table table, + boolean primaryKey) { + super(schema, id, name, table); + this.primaryKey = primaryKey; + } + + @Override + public Type getConstraintType() { + return primaryKey ? 
Constraint.Type.PRIMARY_KEY : Constraint.Type.UNIQUE; + } + + @Override + public String getCreateSQLForCopy(Table forTable, String quotedName) { + return getCreateSQLForCopy(forTable, quotedName, true); + } + + private String getCreateSQLForCopy(Table forTable, String quotedName, + boolean internalIndex) { + StatementBuilder buff = new StatementBuilder("ALTER TABLE "); + buff.append(forTable.getSQL()).append(" ADD CONSTRAINT "); + if (forTable.isHidden()) { + buff.append("IF NOT EXISTS "); + } + buff.append(quotedName); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + buff.append(' ').append(getConstraintType().getSqlName()).append('('); + for (IndexColumn c : columns) { + buff.appendExceptFirst(", "); + buff.append(Parser.quoteIdentifier(c.column.getName())); + } + buff.append(')'); + if (internalIndex && indexOwner && forTable == this.table) { + buff.append(" INDEX ").append(index.getSQL()); + } + return buff.toString(); + } + + @Override + public String getCreateSQLWithoutIndexes() { + return getCreateSQLForCopy(table, getSQL(), false); + } + + @Override + public String getCreateSQL() { + return getCreateSQLForCopy(table, getSQL()); + } + + public void setColumns(IndexColumn[] columns) { + this.columns = columns; + } + + public IndexColumn[] getColumns() { + return columns; + } + + /** + * Set the index to use for this unique constraint. 
+ * + * @param index the index + * @param isOwner true if the index is generated by the system and belongs + * to this constraint + */ + public void setIndex(Index index, boolean isOwner) { + this.index = index; + this.indexOwner = isOwner; + } + + @Override + public void removeChildrenAndResources(Session session) { + table.removeConstraint(this); + if (indexOwner) { + table.removeIndexOrTransferOwnership(session, index); + } + database.removeMeta(session, getId()); + index = null; + columns = null; + table = null; + invalidate(); + } + + @Override + public void checkRow(Session session, Table t, Row oldRow, Row newRow) { + // unique index check is enough + } + + @Override + public boolean usesIndex(Index idx) { + return idx == index; + } + + @Override + public void setIndexOwner(Index index) { + indexOwner = true; + } + + @Override + public HashSet<Column> getReferencedColumns(Table table) { + HashSet<Column> result = new HashSet<>(); + for (IndexColumn c : columns) { + result.add(c.column); + } + return result; + } + + @Override + public boolean isBefore() { + return true; + } + + @Override + public void checkExistingData(Session session) { + // no need to check: when creating the unique index any problems are + // found + } + + @Override + public Index getUniqueIndex() { + return index; + } + + @Override + public void rebuild() { + // nothing to do + } + +} diff --git a/modules/h2/src/main/java/org/h2/constraint/package.html b/modules/h2/src/main/java/org/h2/constraint/package.html new file mode 100644 index 0000000000000..1f20ef6134865 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/constraint/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Database constraints such as check constraints, unique constraints, and referential constraints. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/engine/Comment.java b/modules/h2/src/main/java/org/h2/engine/Comment.java new file mode 100644 index 0000000000000..f650cb8a783b3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Comment.java @@ -0,0 +1,117 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.table.Table; +import org.h2.util.StringUtils; + +/** + * Represents a database object comment. + */ +public class Comment extends DbObjectBase { + + private final int objectType; + private final String objectName; + private String commentText; + + public Comment(Database database, int id, DbObject obj) { + initDbObjectBase(database, id, getKey(obj), Trace.DATABASE); + this.objectType = obj.getType(); + this.objectName = obj.getSQL(); + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + private static String getTypeName(int type) { + switch (type) { + case DbObject.CONSTANT: + return "CONSTANT"; + case DbObject.CONSTRAINT: + return "CONSTRAINT"; + case DbObject.FUNCTION_ALIAS: + return "ALIAS"; + case DbObject.INDEX: + return "INDEX"; + case DbObject.ROLE: + return "ROLE"; + case DbObject.SCHEMA: + return "SCHEMA"; + case DbObject.SEQUENCE: + return "SEQUENCE"; + case DbObject.TABLE_OR_VIEW: + return "TABLE"; + case DbObject.TRIGGER: + return "TRIGGER"; + case DbObject.USER: + return "USER"; + case DbObject.USER_DATATYPE: + return "DOMAIN"; + default: + // not supported by parser, but required when trying to find a + // comment + return "type" + type; + } + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public String getCreateSQL() { + StringBuilder buff = new 
StringBuilder("COMMENT ON "); + buff.append(getTypeName(objectType)).append(' '). + append(objectName).append(" IS "); + if (commentText == null) { + buff.append("NULL"); + } else { + buff.append(StringUtils.quoteStringSQL(commentText)); + } + return buff.toString(); + } + + @Override + public int getType() { + return DbObject.COMMENT; + } + + @Override + public void removeChildrenAndResources(Session session) { + database.removeMeta(session, getId()); + } + + @Override + public void checkRename() { + DbException.throwInternalError(); + } + + /** + * Get the comment key name for the given database object. This key name is + * used internally to associate the comment to the object. + * + * @param obj the object + * @return the key name + */ + static String getKey(DbObject obj) { + return getTypeName(obj.getType()) + " " + obj.getSQL(); + } + + /** + * Set the comment text. + * + * @param comment the text + */ + public void setCommentText(String comment) { + this.commentText = comment; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/ConnectionInfo.java b/modules/h2/src/main/java/org/h2/engine/ConnectionInfo.java new file mode 100644 index 0000000000000..87563bab1509c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/ConnectionInfo.java @@ -0,0 +1,662 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Properties; + +import org.h2.api.ErrorCode; +import org.h2.command.dml.SetTypes; +import org.h2.message.DbException; +import org.h2.security.SHA256; +import org.h2.store.fs.FilePathEncrypt; +import org.h2.store.fs.FilePathRec; +import org.h2.store.fs.FileUtils; +import org.h2.util.SortedProperties; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * Encapsulates the connection settings, including user name and password. + */ +public class ConnectionInfo implements Cloneable { + private static final HashSet<String> KNOWN_SETTINGS; + + private Properties prop = new Properties(); + private String originalURL; + private String url; + private String user; + private byte[] filePasswordHash; + private byte[] fileEncryptionKey; + private byte[] userPasswordHash; + + /** + * The database name + */ + private String name; + private String nameNormalized; + private boolean remote; + private boolean ssl; + private boolean persistent; + private boolean unnamed; + + /** + * Create a connection info object. + * + * @param name the database name (including tags), but without the + * "jdbc:h2:" prefix + */ + public ConnectionInfo(String name) { + this.name = name; + this.url = Constants.START_URL + name; + parseName(); + } + + /** + * Create a connection info object. 
+ * + * @param u the database URL (must start with jdbc:h2:) + * @param info the connection properties + */ + public ConnectionInfo(String u, Properties info) { + u = remapURL(u); + this.originalURL = u; + if (!u.startsWith(Constants.START_URL)) { + throw DbException.getInvalidValueException("url", u); + } + this.url = u; + readProperties(info); + readSettingsFromURL(); + setUserName(removeProperty("USER", "")); + convertPasswords(); + name = url.substring(Constants.START_URL.length()); + parseName(); + String recoverTest = removeProperty("RECOVER_TEST", null); + if (recoverTest != null) { + FilePathRec.register(); + try { + Utils.callStaticMethod("org.h2.store.RecoverTester.init", recoverTest); + } catch (Exception e) { + throw DbException.convert(e); + } + name = "rec:" + name; + } + } + + static { + ArrayList<String> list = SetTypes.getTypes(); + String[] connectionTime = { "ACCESS_MODE_DATA", "AUTOCOMMIT", "CIPHER", + "CREATE", "CACHE_TYPE", "FILE_LOCK", "IGNORE_UNKNOWN_SETTINGS", + "IFEXISTS", "INIT", "PASSWORD", "RECOVER", "RECOVER_TEST", + "USER", "AUTO_SERVER", "AUTO_SERVER_PORT", "NO_UPGRADE", + "AUTO_RECONNECT", "OPEN_NEW", "PAGE_SIZE", "PASSWORD_HASH", "JMX", + "SCOPE_GENERATED_KEYS" }; + HashSet<String> set = new HashSet<>(list.size() + connectionTime.length); + set.addAll(list); + for (String key : connectionTime) { + if (!set.add(key) && SysProperties.CHECK) { + DbException.throwInternalError(key); + } + } + KNOWN_SETTINGS = set; + } + + private static boolean isKnownSetting(String s) { + return KNOWN_SETTINGS.contains(s); + } + + @Override + public ConnectionInfo clone() throws CloneNotSupportedException { + ConnectionInfo clone = (ConnectionInfo) super.clone(); + clone.prop = (Properties) prop.clone(); + clone.filePasswordHash = Utils.cloneByteArray(filePasswordHash); + clone.fileEncryptionKey = Utils.cloneByteArray(fileEncryptionKey); + clone.userPasswordHash = Utils.cloneByteArray(userPasswordHash); + return clone; + } + + private void parseName() { + if 
(".".equals(name)) { + name = "mem:"; + } + if (name.startsWith("tcp:")) { + remote = true; + name = name.substring("tcp:".length()); + } else if (name.startsWith("ssl:")) { + remote = true; + ssl = true; + name = name.substring("ssl:".length()); + } else if (name.startsWith("mem:")) { + persistent = false; + if ("mem:".equals(name)) { + unnamed = true; + } + } else if (name.startsWith("file:")) { + name = name.substring("file:".length()); + persistent = true; + } else { + persistent = true; + } + if (persistent && !remote) { + if ("/".equals(SysProperties.FILE_SEPARATOR)) { + name = name.replace('\\', '/'); + } else { + name = name.replace('/', '\\'); + } + } + } + + /** + * Set the base directory of persistent databases, unless the database is in + * the user home folder (~). + * + * @param dir the new base directory + */ + public void setBaseDir(String dir) { + if (persistent) { + String absDir = FileUtils.unwrap(FileUtils.toRealPath(dir)); + boolean absolute = FileUtils.isAbsolute(name); + String n; + String prefix = null; + if (dir.endsWith(SysProperties.FILE_SEPARATOR)) { + dir = dir.substring(0, dir.length() - 1); + } + if (absolute) { + n = name; + } else { + n = FileUtils.unwrap(name); + prefix = name.substring(0, name.length() - n.length()); + n = dir + SysProperties.FILE_SEPARATOR + n; + } + String normalizedName = FileUtils.unwrap(FileUtils.toRealPath(n)); + if (normalizedName.equals(absDir) || !normalizedName.startsWith(absDir)) { + // database name matches the baseDir or + // database name is clearly outside of the baseDir + throw DbException.get(ErrorCode.IO_EXCEPTION_1, normalizedName + " outside " + + absDir); + } + if (absDir.endsWith("/") || absDir.endsWith("\\")) { + // no further checks are needed for C:/ and similar + } else if (normalizedName.charAt(absDir.length()) != '/') { + // database must be within the directory + // (with baseDir=/test, the database name must not be + // /test2/x and not /test2) + throw 
DbException.get(ErrorCode.IO_EXCEPTION_1, normalizedName + " outside " + + absDir); + } + if (!absolute) { + name = prefix + dir + SysProperties.FILE_SEPARATOR + FileUtils.unwrap(name); + } + } + } + + /** + * Check if this is a remote connection. + * + * @return true if it is + */ + public boolean isRemote() { + return remote; + } + + /** + * Check if the referenced database is persistent. + * + * @return true if it is + */ + public boolean isPersistent() { + return persistent; + } + + /** + * Check if the referenced database is an unnamed in-memory database. + * + * @return true if it is + */ + boolean isUnnamedInMemory() { + return unnamed; + } + + private void readProperties(Properties info) { + Object[] list = info.keySet().toArray(); + DbSettings s = null; + for (Object k : list) { + String key = StringUtils.toUpperEnglish(k.toString()); + if (prop.containsKey(key)) { + throw DbException.get(ErrorCode.DUPLICATE_PROPERTY_1, key); + } + Object value = info.get(k); + if (isKnownSetting(key)) { + prop.put(key, value); + } else { + if (s == null) { + s = getDbSettings(); + } + if (s.containsKey(key)) { + prop.put(key, value); + } + } + } + } + + private void readSettingsFromURL() { + DbSettings defaultSettings = DbSettings.getDefaultSettings(); + int idx = url.indexOf(';'); + if (idx >= 0) { + String settings = url.substring(idx + 1); + url = url.substring(0, idx); + String[] list = StringUtils.arraySplit(settings, ';', false); + for (String setting : list) { + if (setting.length() == 0) { + continue; + } + int equal = setting.indexOf('='); + if (equal < 0) { + throw getFormatException(); + } + String value = setting.substring(equal + 1); + String key = setting.substring(0, equal); + key = StringUtils.toUpperEnglish(key); + if (!isKnownSetting(key) && !defaultSettings.containsKey(key)) { + throw DbException.get(ErrorCode.UNSUPPORTED_SETTING_1, key); + } + String old = prop.getProperty(key); + if (old != null && !old.equals(value)) { + throw 
DbException.get(ErrorCode.DUPLICATE_PROPERTY_1, key); + } + prop.setProperty(key, value); + } + } + } + + private char[] removePassword() { + Object p = prop.remove("PASSWORD"); + if (p == null) { + return new char[0]; + } else if (p instanceof char[]) { + return (char[]) p; + } else { + return p.toString().toCharArray(); + } + } + + /** + * Split the password property into file password and user password if + * necessary, and convert them to the internal hash format. + */ + private void convertPasswords() { + char[] password = removePassword(); + boolean passwordHash = removeProperty("PASSWORD_HASH", false); + if (getProperty("CIPHER", null) != null) { + // split password into (filePassword+' '+userPassword) + int space = -1; + for (int i = 0, len = password.length; i < len; i++) { + if (password[i] == ' ') { + space = i; + break; + } + } + if (space < 0) { + throw DbException.get(ErrorCode.WRONG_PASSWORD_FORMAT); + } + char[] np = Arrays.copyOfRange(password, space + 1, password.length); + char[] filePassword = Arrays.copyOf(password, space); + Arrays.fill(password, (char) 0); + password = np; + fileEncryptionKey = FilePathEncrypt.getPasswordBytes(filePassword); + filePasswordHash = hashPassword(passwordHash, "file", filePassword); + } + userPasswordHash = hashPassword(passwordHash, user, password); + } + + private static byte[] hashPassword(boolean passwordHash, String userName, + char[] password) { + if (passwordHash) { + return StringUtils.convertHexToBytes(new String(password)); + } + if (userName.length() == 0 && password.length == 0) { + return new byte[0]; + } + return SHA256.getKeyPasswordHash(userName, password); + } + + /** + * Get a boolean property if it is set and return the value. 
+ * + * @param key the property name + * @param defaultValue the default value + * @return the value + */ + public boolean getProperty(String key, boolean defaultValue) { + return Utils.parseBoolean(getProperty(key, null), defaultValue, false); + } + + /** + * Remove a boolean property if it is set and return the value. + * + * @param key the property name + * @param defaultValue the default value + * @return the value + */ + public boolean removeProperty(String key, boolean defaultValue) { + return Utils.parseBoolean(removeProperty(key, null), defaultValue, false); + } + + /** + * Remove a String property if it is set and return the value. + * + * @param key the property name + * @param defaultValue the default value + * @return the value + */ + String removeProperty(String key, String defaultValue) { + if (SysProperties.CHECK && !isKnownSetting(key)) { + DbException.throwInternalError(key); + } + Object x = prop.remove(key); + return x == null ? defaultValue : x.toString(); + } + + /** + * Get the unique and normalized database name (excluding settings). 
+ * + * @return the database name + */ + public String getName() { + if (!persistent) { + return name; + } + if (nameNormalized == null) { + if (!SysProperties.IMPLICIT_RELATIVE_PATH) { + if (!FileUtils.isAbsolute(name)) { + if (!name.contains("./") && + !name.contains(".\\") && + !name.contains(":/") && + !name.contains(":\\")) { + // the name could start with "./", or + // it could start with a prefix such as "nio:./" + // for Windows, the path "\test" is not considered + // absolute as the drive letter is missing, + // but we consider it absolute + throw DbException.get( + ErrorCode.URL_RELATIVE_TO_CWD, + originalURL); + } + } + } + String suffix = Constants.SUFFIX_PAGE_FILE; + String n; + if (FileUtils.exists(name + suffix)) { + n = FileUtils.toRealPath(name + suffix); + } else { + suffix = Constants.SUFFIX_MV_FILE; + n = FileUtils.toRealPath(name + suffix); + } + String fileName = FileUtils.getName(n); + if (fileName.length() < suffix.length() + 1) { + throw DbException.get(ErrorCode.INVALID_DATABASE_NAME_1, name); + } + nameNormalized = n.substring(0, n.length() - suffix.length()); + } + return nameNormalized; + } + + /** + * Get the file password hash if it is set. + * + * @return the password hash or null + */ + public byte[] getFilePasswordHash() { + return filePasswordHash; + } + + byte[] getFileEncryptionKey() { + return fileEncryptionKey; + } + + /** + * Get the name of the user. + * + * @return the user name + */ + public String getUserName() { + return user; + } + + /** + * Get the user password hash. + * + * @return the password hash + */ + byte[] getUserPasswordHash() { + return userPasswordHash; + } + + /** + * Get the property keys. + * + * @return the property keys + */ + String[] getKeys() { + return prop.keySet().toArray(new String[prop.size()]); + } + + /** + * Get the value of the given property. 
+ * + * @param key the property key + * @return the value as a String + */ + String getProperty(String key) { + Object value = prop.get(key); + if (!(value instanceof String)) { + return null; + } + return value.toString(); + } + + /** + * Get the value of the given property. + * + * @param key the property key + * @param defaultValue the default value + * @return the value as a String + */ + int getProperty(String key, int defaultValue) { + if (SysProperties.CHECK && !isKnownSetting(key)) { + DbException.throwInternalError(key); + } + String s = getProperty(key); + return s == null ? defaultValue : Integer.parseInt(s); + } + + /** + * Get the value of the given property. + * + * @param key the property key + * @param defaultValue the default value + * @return the value as a String + */ + public String getProperty(String key, String defaultValue) { + if (SysProperties.CHECK && !isKnownSetting(key)) { + DbException.throwInternalError(key); + } + String s = getProperty(key); + return s == null ? defaultValue : s; + } + + /** + * Get the value of the given property. + * + * @param setting the setting id + * @param defaultValue the default value + * @return the value as a String + */ + String getProperty(int setting, String defaultValue) { + String key = SetTypes.getTypeName(setting); + String s = getProperty(key); + return s == null ? defaultValue : s; + } + + /** + * Get the value of the given property. + * + * @param setting the setting id + * @param defaultValue the default value + * @return the value as an integer + */ + int getIntProperty(int setting, int defaultValue) { + String key = SetTypes.getTypeName(setting); + String s = getProperty(key, null); + try { + return s == null ? defaultValue : Integer.decode(s); + } catch (NumberFormatException e) { + return defaultValue; + } + } + + /** + * Check if this is a remote connection with SSL enabled. + * + * @return true if it is + */ + boolean isSSL() { + return ssl; + } + + /** + * Overwrite the user name. 
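Note the asymmetry in the property getters above: getProperty(String, int) parses with Integer.parseInt and lets a NumberFormatException propagate, while getIntProperty parses with Integer.decode (which also accepts 0x/0-prefixed values) and silently falls back to the default on a parse error. The difference between the two JDK parsers:

```java
public class DecodeVsParse {
    public static void main(String[] args) {
        // Integer.decode understands hex (0x) and octal (leading 0) prefixes...
        System.out.println(Integer.decode("0x10")); // 16
        System.out.println(Integer.decode("16"));   // 16
        // ...while Integer.parseInt accepts plain decimal only.
        try {
            Integer.parseInt("0x10");
        } catch (NumberFormatException e) {
            System.out.println("parseInt rejects 0x10");
        }
    }
}
```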
The user name is case-insensitive and stored in + * uppercase. English conversion is used. + * + * @param name the user name + */ + public void setUserName(String name) { + this.user = StringUtils.toUpperEnglish(name); + } + + /** + * Set the user password hash. + * + * @param hash the new hash value + */ + public void setUserPasswordHash(byte[] hash) { + this.userPasswordHash = hash; + } + + /** + * Set the file password hash. + * + * @param hash the new hash value + */ + public void setFilePasswordHash(byte[] hash) { + this.filePasswordHash = hash; + } + + public void setFileEncryptionKey(byte[] key) { + this.fileEncryptionKey = key; + } + + /** + * Overwrite a property. + * + * @param key the property name + * @param value the value + */ + public void setProperty(String key, String value) { + // value is null if the value is an object + if (value != null) { + prop.setProperty(key, value); + } + } + + /** + * Get the database URL. + * + * @return the URL + */ + public String getURL() { + return url; + } + + /** + * Get the complete original database URL. + * + * @return the database URL + */ + public String getOriginalURL() { + return originalURL; + } + + /** + * Set the original database URL. + * + * @param url the database url + */ + public void setOriginalURL(String url) { + originalURL = url; + } + + /** + * Generate an URL format exception. + * + * @return the exception + */ + DbException getFormatException() { + String format = Constants.URL_FORMAT; + return DbException.get(ErrorCode.URL_FORMAT_ERROR_2, format, url); + } + + /** + * Switch to server mode, and set the server name and database key. 
+ * + * @param serverKey the server name, '/', and the security key + */ + public void setServerKey(String serverKey) { + remote = true; + persistent = false; + this.name = serverKey; + } + + public DbSettings getDbSettings() { + DbSettings defaultSettings = DbSettings.getDefaultSettings(); + HashMap s = new HashMap<>(); + for (Object k : prop.keySet()) { + String key = k.toString(); + if (!isKnownSetting(key) && defaultSettings.containsKey(key)) { + s.put(key, prop.getProperty(key)); + } + } + return DbSettings.getInstance(s); + } + + private static String remapURL(String url) { + String urlMap = SysProperties.URL_MAP; + if (urlMap != null && urlMap.length() > 0) { + try { + SortedProperties prop; + prop = SortedProperties.loadProperties(urlMap); + String url2 = prop.getProperty(url); + if (url2 == null) { + prop.put(url, ""); + prop.store(urlMap); + } else { + url2 = url2.trim(); + if (url2.length() > 0) { + return url2; + } + } + } catch (IOException e) { + throw DbException.convert(e); + } + } + return url; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Constants.java b/modules/h2/src/main/java/org/h2/engine/Constants.java new file mode 100644 index 0000000000000..b432cbbdf5b05 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Constants.java @@ -0,0 +1,573 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.sql.ResultSet; + +/** + * Constants are fixed values that are used in the whole database code. + */ +public class Constants { + + /** + * The build date is updated for each public release. + */ + public static final String BUILD_DATE = "2018-03-18"; + + /** + * The build date of the last stable release. + */ + public static final String BUILD_DATE_STABLE = "2017-06-10"; + + /** + * The build id is incremented for each public release. 
+ */ + public static final int BUILD_ID = 197; + + /** + * The build id of the last stable release. + */ + public static final int BUILD_ID_STABLE = 196; + + /** + * Whether this is a snapshot version. + */ + public static final boolean BUILD_SNAPSHOT = false; + + /** + * If H2 is compiled to be included in a product, this should be set to + * a unique vendor id (to distinguish from official releases). + * Additionally, a version number should be set to distinguish releases. + * Example: ACME_SVN1651_BUILD3 + */ + public static final String BUILD_VENDOR_AND_VERSION = null; + + /** + * The TCP protocol version number 8. + * @since 1.2.143 (2010-09-18) + */ + public static final int TCP_PROTOCOL_VERSION_8 = 8; + + /** + * The TCP protocol version number 9. + * @since 1.3.158 (2011-07-17) + */ + public static final int TCP_PROTOCOL_VERSION_9 = 9; + + /** + * The TCP protocol version number 10. + * @since 1.3.162 (2011-11-26) + */ + public static final int TCP_PROTOCOL_VERSION_10 = 10; + + /** + * The TCP protocol version number 11. + * @since 1.3.163 (2011-12-30) + */ + public static final int TCP_PROTOCOL_VERSION_11 = 11; + + /** + * The TCP protocol version number 12. + * @since 1.3.168 (2012-07-13) + */ + public static final int TCP_PROTOCOL_VERSION_12 = 12; + + /** + * The TCP protocol version number 13. + * @since 1.3.174 (2013-10-19) + */ + public static final int TCP_PROTOCOL_VERSION_13 = 13; + + /** + * The TCP protocol version number 14. + * @since 1.3.176 (2014-04-05) + */ + public static final int TCP_PROTOCOL_VERSION_14 = 14; + + /** + * The TCP protocol version number 15. + * @since 1.4.178 Beta (2014-05-02) + */ + public static final int TCP_PROTOCOL_VERSION_15 = 15; + + /** + * The TCP protocol version number 16. + * @since 1.4.194 (2017-03-10) + */ + public static final int TCP_PROTOCOL_VERSION_16 = 16; + + /** + * The TCP protocol version number 17. 
+ * @since 1.4.197 (TODO) + */ + public static final int TCP_PROTOCOL_VERSION_17 = 17; + + /** + * Minimum supported version of TCP protocol. + */ + public static final int TCP_PROTOCOL_VERSION_MIN_SUPPORTED = TCP_PROTOCOL_VERSION_8; + + /** + * Maximum supported version of TCP protocol. + */ + public static final int TCP_PROTOCOL_VERSION_MAX_SUPPORTED = TCP_PROTOCOL_VERSION_17; + + /** + * The major version of this database. + */ + public static final int VERSION_MAJOR = 1; + + /** + * The minor version of this database. + */ + public static final int VERSION_MINOR = 4; + + /** + * The lock mode that means no locking is used at all. + */ + public static final int LOCK_MODE_OFF = 0; + + /** + * The lock mode that means read locks are acquired, but they are released + * immediately after the statement is executed. + */ + public static final int LOCK_MODE_READ_COMMITTED = 3; + + /** + * The lock mode that means table level locking is used for reads and + * writes. + */ + public static final int LOCK_MODE_TABLE = 1; + + /** + * The lock mode that means table level locking is used for reads and + * writes. If a table is locked, System.gc is called to close forgotten + * connections. + */ + public static final int LOCK_MODE_TABLE_GC = 2; + + /** + * Constant meaning both numbers and text is allowed in SQL statements. + */ + public static final int ALLOW_LITERALS_ALL = 2; + + /** + * Constant meaning no literals are allowed in SQL statements. + */ + public static final int ALLOW_LITERALS_NONE = 0; + + /** + * Constant meaning only numbers are allowed in SQL statements (but no + * texts). + */ + public static final int ALLOW_LITERALS_NUMBERS = 1; + + /** + * Whether searching in Blob values should be supported. + */ + public static final boolean BLOB_SEARCH = false; + + /** + * The minimum number of entries to keep in the cache. + */ + public static final int CACHE_MIN_RECORDS = 16; + + /** + * The default cache size in KB for each GB of RAM. 
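TCP_PROTOCOL_VERSION_MIN_SUPPORTED and TCP_PROTOCOL_VERSION_MAX_SUPPORTED above bound the range a server will speak. A sketch of how the two sides might settle on a common protocol version; the negotiate helper is hypothetical (the actual handshake lives in the TCP server/transfer code, not in Constants), but the clamping logic follows directly from the min/max constants:

```java
public class ProtocolNegotiation {
    static final int MIN_SUPPORTED = 8;   // TCP_PROTOCOL_VERSION_MIN_SUPPORTED
    static final int MAX_SUPPORTED = 17;  // TCP_PROTOCOL_VERSION_MAX_SUPPORTED

    /**
     * Pick the highest version both sides support, or -1 if the client's
     * range [clientMin, clientMax] does not overlap the server's.
     */
    static int negotiate(int clientMin, int clientMax) {
        int version = Math.min(clientMax, MAX_SUPPORTED);
        if (version < Math.max(clientMin, MIN_SUPPORTED)) {
            return -1; // no overlap: client too old or too new
        }
        return version;
    }

    public static void main(String[] args) {
        System.out.println(negotiate(9, 16));  // 16
        System.out.println(negotiate(18, 20)); // -1
    }
}
```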
+ */ + public static final int CACHE_SIZE_DEFAULT = 64 * 1024; + + /** + * The default cache type. + */ + public static final String CACHE_TYPE_DEFAULT = "LRU"; + + /** + * The value of the cluster setting if clustering is disabled. + */ + public static final String CLUSTERING_DISABLED = "''"; + + /** + * The value of the cluster setting if clustering is enabled (the actual + * value is checked later). + */ + public static final String CLUSTERING_ENABLED = "TRUE"; + + /** + * The database URL used when calling a function if only the column list + * should be returned. + */ + public static final String CONN_URL_COLUMNLIST = "jdbc:columnlist:connection"; + + /** + * The database URL used when calling a function if the data should be + * returned. + */ + public static final String CONN_URL_INTERNAL = "jdbc:default:connection"; + + /** + * The cost is calculated on rowcount + this offset, + * to avoid using the wrong or no index if the table + * contains no rows _currently_ (when preparing the statement) + */ + public static final int COST_ROW_OFFSET = 1000; + + /** + * The number of milliseconds after which to check for a deadlock if locking + * is not successful. + */ + public static final int DEADLOCK_CHECK = 100; + + /** + * The default port number of the HTTP server (for the H2 Console). + * This value is also in the documentation and in the Server javadoc. + */ + public static final int DEFAULT_HTTP_PORT = 8082; + + /** + * The default value for the LOCK_MODE setting. + */ + public static final int DEFAULT_LOCK_MODE = LOCK_MODE_READ_COMMITTED; + + /** + * The default maximum length of an LOB that is stored with the record + * itself, and not in a separate place. + */ + public static final int DEFAULT_MAX_LENGTH_INPLACE_LOB = 256; + + /** + * The default value for the maximum transaction log size. + */ + public static final long DEFAULT_MAX_LOG_SIZE = 16 * 1024 * 1024; + + /** + * The default value for the MAX_MEMORY_UNDO setting. 
+ */ + public static final int DEFAULT_MAX_MEMORY_UNDO = 50_000; + + /** + * The default for the setting MAX_OPERATION_MEMORY. + */ + public static final int DEFAULT_MAX_OPERATION_MEMORY = 100_000; + + /** + * The default page size to use for new databases. + */ + public static final int DEFAULT_PAGE_SIZE = 4096; + + /** + * The default result set concurrency for statements created with + * Connection.createStatement() or prepareStatement(String sql). + */ + public static final int DEFAULT_RESULT_SET_CONCURRENCY = + ResultSet.CONCUR_READ_ONLY; + + /** + * The default port of the TCP server. + * This port is also used in the documentation and in the Server javadoc. + */ + public static final int DEFAULT_TCP_PORT = 9092; + + /** + * The default delay in milliseconds before the transaction log is written. + */ + public static final int DEFAULT_WRITE_DELAY = 500; + + /** + * The password is hashed this many times + * to slow down dictionary attacks. + */ + public static final int ENCRYPTION_KEY_HASH_ITERATIONS = 1024; + + /** + * The block of a file. It is also the encryption block size. + */ + public static final int FILE_BLOCK_SIZE = 16; + + /** + * For testing, the lock timeout is smaller than for interactive use cases. + * This value could be increased to about 5 or 10 seconds. + */ + public static final int INITIAL_LOCK_TIMEOUT = 2000; + + /** + * The block size for I/O operations. + */ + public static final int IO_BUFFER_SIZE = 4 * 1024; + + /** + * The block size used to compress data in the LZFOutputStream. + */ + public static final int IO_BUFFER_SIZE_COMPRESS = 128 * 1024; + + /** + * The number of milliseconds to wait between checking the .lock.db file + * still exists once a database is locked. + */ + public static final int LOCK_SLEEP = 1000; + + /** + * The highest possible parameter index. 
+ */ + public static final int MAX_PARAMETER_INDEX = 100_000; + + /** + * The memory needed by a object of class Data + */ + public static final int MEMORY_DATA = 24; + + /** + * This value is used to calculate the average memory usage. + */ + public static final int MEMORY_FACTOR = 64; + + /** + * The memory needed by a regular object with at least one field. + */ + // Java 6, 64 bit: 24 + // Java 6, 32 bit: 12 + public static final int MEMORY_OBJECT = 24; + + /** + * The memory needed by an object of class PageBtree. + */ + public static final int MEMORY_PAGE_BTREE = + 112 + MEMORY_DATA + 2 * MEMORY_OBJECT; + + /** + * The memory needed by an object of class PageData. + */ + public static final int MEMORY_PAGE_DATA = + 144 + MEMORY_DATA + 3 * MEMORY_OBJECT; + + /** + * The memory needed by an object of class PageDataOverflow. + */ + public static final int MEMORY_PAGE_DATA_OVERFLOW = 96 + MEMORY_DATA; + + /** + * The memory needed by a pointer. + */ + // Java 6, 64 bit: 8 + // Java 6, 32 bit: 4 + public static final int MEMORY_POINTER = 8; + + /** + * The memory needed by a Row. + */ + public static final int MEMORY_ROW = 40; + + /** + * The minimum write delay that causes commits to be delayed. + */ + public static final int MIN_WRITE_DELAY = 5; + + /** + * The name prefix used for indexes that are not explicitly named. + */ + public static final String PREFIX_INDEX = "INDEX_"; + + /** + * The name prefix used for synthetic nested join tables. + */ + public static final String PREFIX_JOIN = "SYSTEM_JOIN_"; + + /** + * The name prefix used for primary key constraints that are not explicitly + * named. + */ + public static final String PREFIX_PRIMARY_KEY = "PRIMARY_KEY_"; + + /** + * Every user belongs to this role. + */ + public static final String PUBLIC_ROLE_NAME = "PUBLIC"; + + /** + * The number of bytes in random salt that is used to hash passwords. + */ + public static final int SALT_LEN = 8; + + /** + * The name of the default schema. 
+ */ + public static final String SCHEMA_MAIN = "PUBLIC"; + + /** + * The default selectivity (used if the selectivity is not calculated). + */ + public static final int SELECTIVITY_DEFAULT = 50; + + /** + * The number of distinct values to keep in memory when running ANALYZE. + */ + public static final int SELECTIVITY_DISTINCT_COUNT = 10_000; + + /** + * The default directory name of the server properties file for the H2 + * Console. + */ + public static final String SERVER_PROPERTIES_DIR = "~"; + + /** + * The name of the server properties file for the H2 Console. + */ + public static final String SERVER_PROPERTIES_NAME = ".h2.server.properties"; + + /** + * Queries that take longer than this number of milliseconds are written to + * the trace file with the level info. + */ + public static final long SLOW_QUERY_LIMIT_MS = 100; + + /** + * The database URL prefix of this database. + */ + public static final String START_URL = "jdbc:h2:"; + + /** + * The file name suffix of all database files. + */ + public static final String SUFFIX_DB_FILE = ".db"; + + /** + * The file name suffix of large object files. + */ + public static final String SUFFIX_LOB_FILE = ".lob.db"; + + /** + * The suffix of the directory name used if LOB objects are stored in a + * directory. + */ + public static final String SUFFIX_LOBS_DIRECTORY = ".lobs.db"; + + /** + * The file name suffix of file lock files that are used to make sure a + * database is open by only one process at any time. + */ + public static final String SUFFIX_LOCK_FILE = ".lock.db"; + + /** + * The file name suffix of a H2 version 1.1 database file. + */ + public static final String SUFFIX_OLD_DATABASE_FILE = ".data.db"; + + /** + * The file name suffix of page files. + */ + public static final String SUFFIX_PAGE_FILE = ".h2.db"; + /** + * The file name suffix of a MVStore file. 
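The SUFFIX_* constants above are what ConnectionInfo.getName() (earlier in this patch) appends and strips when normalizing a database name to its on-disk file. A small sketch of that suffix handling, with only the two current store suffixes:

```java
public class DbFileNames {
    static final String SUFFIX_PAGE_FILE = ".h2.db"; // PageStore file
    static final String SUFFIX_MV_FILE = ".mv.db";   // MVStore file

    /** Strip a known store suffix from a database file name, as getName() does. */
    static String stripSuffix(String fileName) {
        if (fileName.endsWith(SUFFIX_PAGE_FILE)) {
            return fileName.substring(0, fileName.length() - SUFFIX_PAGE_FILE.length());
        }
        if (fileName.endsWith(SUFFIX_MV_FILE)) {
            return fileName.substring(0, fileName.length() - SUFFIX_MV_FILE.length());
        }
        return fileName;
    }

    public static void main(String[] args) {
        System.out.println(stripSuffix("sample.mv.db")); // sample
        System.out.println(stripSuffix("legacy.h2.db")); // legacy
    }
}
```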
+ */ + public static final String SUFFIX_MV_FILE = ".mv.db"; + + /** + * The file name suffix of a new MVStore file, used when compacting a store. + */ + public static final String SUFFIX_MV_STORE_NEW_FILE = ".newFile"; + + /** + * The file name suffix of a temporary MVStore file, used when compacting a + * store. + */ + public static final String SUFFIX_MV_STORE_TEMP_FILE = ".tempFile"; + + /** + * The file name suffix of temporary files. + */ + public static final String SUFFIX_TEMP_FILE = ".temp.db"; + + /** + * The file name suffix of trace files. + */ + public static final String SUFFIX_TRACE_FILE = ".trace.db"; + + /** + * How often we check to see if we need to apply a throttling delay if SET + * THROTTLE has been used. + */ + public static final int THROTTLE_DELAY = 50; + + /** + * The maximum size of an undo log block. + */ + public static final int UNDO_BLOCK_SIZE = 1024 * 1024; + + /** + * The database URL format in simplified Backus-Naur form. + */ + public static final String URL_FORMAT = START_URL + + "{ {.|mem:}[name] | [file:]fileName | " + + "{tcp|ssl}:[//]server[:port][,server2[:port]]/name }[;key=value...]"; + + /** + * The package name of user defined classes. + */ + public static final String USER_PACKAGE = "org.h2.dynamic"; + + /** + * The maximum time in milliseconds to keep the cost of a view. + * 10000 means 10 seconds. + */ + public static final int VIEW_COST_CACHE_MAX_AGE = 10_000; + + /** + * The name of the index cache that is used for temporary view (subqueries + * used as tables). + */ + public static final int VIEW_INDEX_CACHE_SIZE = 64; + + /** + * The maximum number of entries in query statistics. + */ + public static final int QUERY_STATISTICS_MAX_ENTRIES = 100; + + /** + * Announced version for PgServer. + */ + public static final String PG_VERSION = "8.2.23"; + + private Constants() { + // utility class + } + + /** + * Get the version of this product, consisting of major version, minor + * version, and build id. 
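URL_FORMAT above describes accepted jdbc:h2: URLs in simplified Backus-Naur form. Some concrete URLs matching that grammar (the database names and host are illustrative), together with the trivial START_URL prefix check:

```java
public class UrlExamples {
    static final String START_URL = "jdbc:h2:"; // Constants.START_URL

    /** Minimal sanity check: every H2 URL begins with the START_URL prefix. */
    static boolean looksLikeH2Url(String url) {
        return url.startsWith(START_URL);
    }

    public static void main(String[] args) {
        // Illustrative URLs covering the alternatives in URL_FORMAT:
        String[] urls = {
            "jdbc:h2:mem:testdb",                    // in-memory, named
            "jdbc:h2:~/sample",                      // file in the user home
            "jdbc:h2:file:/data/sample;CIPHER=AES",  // explicit file: plus a key=value setting
            "jdbc:h2:tcp://localhost:9092/~/sample", // remote over TCP, DEFAULT_TCP_PORT
        };
        for (String url : urls) {
            System.out.println(looksLikeH2Url(url) + " " + url);
        }
    }
}
```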
+ * + * @return the version number + */ + public static String getVersion() { + String version = VERSION_MAJOR + "." + VERSION_MINOR + "." + BUILD_ID; + if (BUILD_VENDOR_AND_VERSION != null) { + version += "_" + BUILD_VENDOR_AND_VERSION; + } + if (BUILD_SNAPSHOT) { + version += "-SNAPSHOT"; + } + return version; + } + + /** + * Get the last stable version name. + * + * @return the version number + */ + public static Object getVersionStable() { + return "1.4." + BUILD_ID_STABLE; + } + + /** + * Get the complete version number of this database, consisting of + * the major version, the minor version, the build id, and the build date. + * + * @return the complete version + */ + public static String getFullVersion() { + return getVersion() + " (" + BUILD_DATE + ")"; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Database.java b/modules/h2/src/main/java/org/h2/engine/Database.java new file mode 100644 index 0000000000000..9d4e2dacfe653 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Database.java @@ -0,0 +1,2949 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
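With the values defined in this Constants file (VERSION_MAJOR = 1, VERSION_MINOR = 4, BUILD_ID = 197, no vendor suffix, not a snapshot), getVersion() and getFullVersion() above assemble the version strings shown below. A standalone re-creation of that assembly for the non-vendor, non-snapshot case:

```java
public class VersionString {
    static final int VERSION_MAJOR = 1;
    static final int VERSION_MINOR = 4;
    static final int BUILD_ID = 197;
    static final String BUILD_DATE = "2018-03-18";

    /** Mirrors Constants.getVersion() when vendor id and snapshot flag are unset. */
    static String getVersion() {
        return VERSION_MAJOR + "." + VERSION_MINOR + "." + BUILD_ID;
    }

    /** Mirrors Constants.getFullVersion(): version plus build date. */
    static String getFullVersion() {
        return getVersion() + " (" + BUILD_DATE + ")";
    }

    public static void main(String[] args) {
        System.out.println(getVersion());     // 1.4.197
        System.out.println(getFullVersion()); // 1.4.197 (2018-03-18)
    }
}
```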
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.io.IOException; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Objects; +import java.util.Properties; +import java.util.Set; +import java.util.StringTokenizer; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.atomic.AtomicReference; +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.api.JavaObjectSerializer; +import org.h2.api.TableEngine; +import org.h2.command.CommandInterface; +import org.h2.command.ddl.CreateTableData; +import org.h2.command.dml.SetTypes; +import org.h2.constraint.Constraint; +import org.h2.index.Cursor; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.message.TraceSystem; +import org.h2.mvstore.db.MVTableEngine; +import org.h2.result.Row; +import org.h2.result.RowFactory; +import org.h2.result.SearchRow; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObject; +import org.h2.schema.Sequence; +import org.h2.schema.TriggerObject; +import org.h2.store.DataHandler; +import org.h2.store.FileLock; +import org.h2.store.FileLockMethod; +import org.h2.store.FileStore; +import org.h2.store.InDoubtTransaction; +import org.h2.store.LobStorageBackend; +import org.h2.store.LobStorageFrontend; +import org.h2.store.LobStorageInterface; +import org.h2.store.LobStorageMap; +import org.h2.store.PageStore; +import org.h2.store.WriterThread; +import org.h2.store.fs.FileUtils; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.MetaTable; +import org.h2.table.Table; +import org.h2.table.TableLinkConnection; +import org.h2.table.TableSynonym; +import 
org.h2.table.TableType; +import org.h2.table.TableView; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Server; +import org.h2.util.BitField; +import org.h2.util.JdbcUtils; +import org.h2.util.MathUtils; +import org.h2.util.NetUtils; +import org.h2.util.New; +import org.h2.util.SmallLRUCache; +import org.h2.util.SourceCompiler; +import org.h2.util.StringUtils; +import org.h2.util.TempFileDeleter; +import org.h2.util.Utils; +import org.h2.value.CaseInsensitiveConcurrentMap; +import org.h2.value.CaseInsensitiveMap; +import org.h2.value.CompareMode; +import org.h2.value.NullableKeyConcurrentMap; +import org.h2.value.Value; +import org.h2.value.ValueInt; + +/** + * There is one database object per open database. + * + * The format of the meta data table is: + * id int, 0, objectType int, sql varchar + * + * @since 2004-04-15 22:49 + */ +public class Database implements DataHandler { + + private static int initialPowerOffCount; + + private static final ThreadLocal META_LOCK_DEBUGGING = new ThreadLocal<>(); + private static final ThreadLocal META_LOCK_DEBUGGING_STACK = new ThreadLocal<>(); + + /** + * The default name of the system user. This name is only used as long as + * there is no administrator user registered. 
+ */ + private static final String SYSTEM_USER_NAME = "DBA"; + + private final boolean persistent; + private final String databaseName; + private final String databaseShortName; + private final String databaseURL; + private final String cipher; + private final byte[] filePasswordHash; + private final byte[] fileEncryptionKey; + + private final HashMap roles = new HashMap<>(); + private final HashMap users = new HashMap<>(); + private final HashMap settings = new HashMap<>(); + private final HashMap schemas = new HashMap<>(); + private final HashMap rights = new HashMap<>(); + private final HashMap userDataTypes = new HashMap<>(); + private final HashMap aggregates = new HashMap<>(); + private final HashMap comments = new HashMap<>(); + private final HashMap tableEngines = new HashMap<>(); + + private final Set userSessions = + Collections.synchronizedSet(new HashSet()); + private final AtomicReference exclusiveSession = new AtomicReference<>(); + private final BitField objectIds = new BitField(); + private final Object lobSyncObject = new Object(); + + private Schema mainSchema; + private Schema infoSchema; + private int nextSessionId; + private int nextTempTableId; + private User systemUser; + private Session systemSession; + private Session lobSession; + private Table meta; + private Index metaIdIndex; + private FileLock lock; + private WriterThread writer; + private boolean starting; + private TraceSystem traceSystem; + private Trace trace; + private final FileLockMethod fileLockMethod; + private Role publicRole; + private final AtomicLong modificationDataId = new AtomicLong(); + private final AtomicLong modificationMetaId = new AtomicLong(); + private CompareMode compareMode; + private String cluster = Constants.CLUSTERING_DISABLED; + private boolean readOnly; + private int writeDelay = Constants.DEFAULT_WRITE_DELAY; + private DatabaseEventListener eventListener; + private int maxMemoryRows = SysProperties.MAX_MEMORY_ROWS; + private int maxMemoryUndo = 
Constants.DEFAULT_MAX_MEMORY_UNDO; + private int lockMode = Constants.DEFAULT_LOCK_MODE; + private int maxLengthInplaceLob; + private int allowLiterals = Constants.ALLOW_LITERALS_ALL; + + private int powerOffCount = initialPowerOffCount; + private int closeDelay; + private DatabaseCloser delayedCloser; + private volatile boolean closing; + private boolean ignoreCase; + private boolean deleteFilesOnDisconnect; + private String lobCompressionAlgorithm; + private boolean optimizeReuseResults = true; + private final String cacheType; + private final String accessModeData; + private boolean referentialIntegrity = true; + private boolean multiVersion; + private DatabaseCloser closeOnExit; + private Mode mode = Mode.getRegular(); + private boolean multiThreaded; + private int maxOperationMemory = + Constants.DEFAULT_MAX_OPERATION_MEMORY; + private SmallLRUCache lobFileListCache; + private final boolean autoServerMode; + private final int autoServerPort; + private Server server; + private HashMap linkConnections; + private final TempFileDeleter tempFileDeleter = TempFileDeleter.getInstance(); + private PageStore pageStore; + private Properties reconnectLastLock; + private volatile long reconnectCheckNext; + private volatile boolean reconnectChangePending; + private volatile int checkpointAllowed; + private volatile boolean checkpointRunning; + private final Object reconnectSync = new Object(); + private int cacheSize; + private int compactMode; + private SourceCompiler compiler; + private volatile boolean metaTablesInitialized; + private boolean flushOnEachCommit; + private LobStorageInterface lobStorage; + private final int pageSize; + private int defaultTableType = Table.TYPE_CACHED; + private final DbSettings dbSettings; + private final long reconnectCheckDelayNs; + private int logMode; + private MVTableEngine.Store mvStore; + private int retentionTime; + private boolean allowBuiltinAliasOverride; + private DbException backgroundException; + private JavaObjectSerializer 
javaObjectSerializer; + private String javaObjectSerializerName; + private volatile boolean javaObjectSerializerInitialized; + private boolean queryStatistics; + private int queryStatisticsMaxEntries = Constants.QUERY_STATISTICS_MAX_ENTRIES; + private QueryStatisticsData queryStatisticsData; + private RowFactory rowFactory = RowFactory.DEFAULT; + + public Database(ConnectionInfo ci, String cipher) { + META_LOCK_DEBUGGING.set(null); + META_LOCK_DEBUGGING_STACK.set(null); + String name = ci.getName(); + this.dbSettings = ci.getDbSettings(); + this.reconnectCheckDelayNs = TimeUnit.MILLISECONDS.toNanos(dbSettings.reconnectCheckDelay); + this.compareMode = CompareMode.getInstance(null, 0); + this.persistent = ci.isPersistent(); + this.filePasswordHash = ci.getFilePasswordHash(); + this.fileEncryptionKey = ci.getFileEncryptionKey(); + this.databaseName = name; + this.databaseShortName = parseDatabaseShortName(); + this.maxLengthInplaceLob = Constants.DEFAULT_MAX_LENGTH_INPLACE_LOB; + this.cipher = cipher; + String lockMethodName = ci.getProperty("FILE_LOCK", null); + this.accessModeData = StringUtils.toLowerEnglish( + ci.getProperty("ACCESS_MODE_DATA", "rw")); + this.autoServerMode = ci.getProperty("AUTO_SERVER", false); + this.autoServerPort = ci.getProperty("AUTO_SERVER_PORT", 0); + int defaultCacheSize = Utils.scaleForAvailableMemory( + Constants.CACHE_SIZE_DEFAULT); + this.cacheSize = + ci.getProperty("CACHE_SIZE", defaultCacheSize); + this.pageSize = ci.getProperty("PAGE_SIZE", + Constants.DEFAULT_PAGE_SIZE); + if ("r".equals(accessModeData)) { + readOnly = true; + } + if (dbSettings.mvStore && lockMethodName == null) { + if (autoServerMode) { + fileLockMethod = FileLockMethod.FILE; + } else { + fileLockMethod = FileLockMethod.FS; + } + } else { + fileLockMethod = FileLock.getFileLockMethod(lockMethodName); + } + if (dbSettings.mvStore && fileLockMethod == FileLockMethod.SERIALIZED) { + throw DbException.getUnsupportedException( + "MV_STORE combined with 
FILE_LOCK=SERIALIZED"); + } + this.databaseURL = ci.getURL(); + String listener = ci.removeProperty("DATABASE_EVENT_LISTENER", null); + if (listener != null) { + listener = StringUtils.trim(listener, true, true, "'"); + setEventListenerClass(listener); + } + String modeName = ci.removeProperty("MODE", null); + if (modeName != null) { + this.mode = Mode.getInstance(modeName); + } + this.multiVersion = + ci.getProperty("MVCC", dbSettings.mvStore); + this.logMode = + ci.getProperty("LOG", PageStore.LOG_MODE_SYNC); + this.javaObjectSerializerName = + ci.getProperty("JAVA_OBJECT_SERIALIZER", null); + this.multiThreaded = + ci.getProperty("MULTI_THREADED", false); + boolean closeAtVmShutdown = + dbSettings.dbCloseOnExit; + int traceLevelFile = + ci.getIntProperty(SetTypes.TRACE_LEVEL_FILE, + TraceSystem.DEFAULT_TRACE_LEVEL_FILE); + int traceLevelSystemOut = + ci.getIntProperty(SetTypes.TRACE_LEVEL_SYSTEM_OUT, + TraceSystem.DEFAULT_TRACE_LEVEL_SYSTEM_OUT); + this.cacheType = StringUtils.toUpperEnglish( + ci.removeProperty("CACHE_TYPE", Constants.CACHE_TYPE_DEFAULT)); + openDatabase(traceLevelFile, traceLevelSystemOut, closeAtVmShutdown); + } + + private void openDatabase(int traceLevelFile, int traceLevelSystemOut, + boolean closeAtVmShutdown) { + try { + open(traceLevelFile, traceLevelSystemOut); + if (closeAtVmShutdown) { + try { + closeOnExit = new DatabaseCloser(this, 0, true); + Runtime.getRuntime().addShutdownHook(closeOnExit); + } catch (IllegalStateException e) { + // shutdown in progress - just don't register the handler + // (maybe an application wants to write something into a + // database at shutdown time) + } catch (SecurityException e) { + // applets may not do that - ignore + // Google App Engine doesn't allow + // to instantiate classes that extend Thread + } + } + } catch (Throwable e) { + if (e instanceof OutOfMemoryError) { + e.fillInStackTrace(); + } + boolean alreadyOpen = e instanceof DbException + && ((DbException) e).getErrorCode() == 
ErrorCode.DATABASE_ALREADY_OPEN_1; + if (alreadyOpen) { + stopServer(); + } + + if (traceSystem != null) { + if (e instanceof DbException && !alreadyOpen) { + // only write if the database is not already in use + trace.error(e, "opening {0}", databaseName); + } + traceSystem.close(); + } + closeOpenFilesAndUnlock(false); + throw DbException.convert(e); + } + } + + /** + * Create a new row for a table. + * + * @param data the values + * @param memory whether the row is in memory + * @return the created row + */ + public Row createRow(Value[] data, int memory) { + return rowFactory.createRow(data, memory); + } + + public RowFactory getRowFactory() { + return rowFactory; + } + + public void setRowFactory(RowFactory rowFactory) { + this.rowFactory = rowFactory; + } + + public static void setInitialPowerOffCount(int count) { + initialPowerOffCount = count; + } + + public void setPowerOffCount(int count) { + if (powerOffCount == -1) { + return; + } + powerOffCount = count; + } + + public MVTableEngine.Store getMvStore() { + return mvStore; + } + + public void setMvStore(MVTableEngine.Store mvStore) { + this.mvStore = mvStore; + this.retentionTime = mvStore.getStore().getRetentionTime(); + } + + /** + * Check if two values are equal with the current comparison mode. + * + * @param a the first value + * @param b the second value + * @return true if both objects are equal + */ + public boolean areEqual(Value a, Value b) { + // can not use equals because ValueDecimal 0.0 is not equal to 0.00. + return a.compareTo(b, compareMode) == 0; + } + + /** + * Compare two values with the current comparison mode. The values may not + * be of the same type. + * + * @param a the first value + * @param b the second value + * @return 0 if both values are equal, -1 if the first value is smaller, and + * 1 otherwise + */ + public int compare(Value a, Value b) { + return a.compareTo(b, compareMode); + } + + /** + * Compare two values with the current comparison mode. 
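The comment in `areEqual()` above ("can not use equals because ValueDecimal 0.0 is not equal to 0.00") mirrors standard `java.math.BigDecimal` semantics: `equals()` compares both the unscaled value and the scale, while `compareTo()` compares numeric value only. A minimal standalone sketch of that distinction:

```java
import java.math.BigDecimal;

public class DecimalCompareDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("0.0");
        BigDecimal b = new BigDecimal("0.00");
        // equals() compares value AND scale, so 0.0 is not equal to 0.00
        System.out.println(a.equals(b));         // false
        // compareTo() compares numeric value only, ignoring scale
        System.out.println(a.compareTo(b) == 0); // true
    }
}
```

This is why the database routes equality checks through `compareTo` with a `CompareMode` rather than relying on `Value.equals`.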
The values must be + * of the same type. + * + * @param a the first value + * @param b the second value + * @return 0 if both values are equal, -1 if the first value is smaller, and + * 1 otherwise + */ + public int compareTypeSafe(Value a, Value b) { + return a.compareTypeSafe(b, compareMode); + } + + public long getModificationDataId() { + return modificationDataId.get(); + } + + /** + * Set or reset the pending change flag in the .lock.db file. + * + * @param pending the new value of the flag + * @return true if the call was successful, + * false if another connection was faster + */ + private synchronized boolean reconnectModified(boolean pending) { + if (readOnly || lock == null || + fileLockMethod != FileLockMethod.SERIALIZED) { + return true; + } + try { + if (pending == reconnectChangePending) { + long now = System.nanoTime(); + if (now > reconnectCheckNext) { + if (pending) { + String pos = pageStore == null ? + null : "" + pageStore.getWriteCountTotal(); + lock.setProperty("logPos", pos); + lock.save(); + } + reconnectCheckNext = now + reconnectCheckDelayNs; + } + return true; + } + Properties old = lock.load(); + if (pending) { + if (old.getProperty("changePending") != null) { + return false; + } + trace.debug("wait before writing"); + Thread.sleep(TimeUnit.NANOSECONDS.toMillis((long) (reconnectCheckDelayNs * 1.1))); + Properties now = lock.load(); + if (!now.equals(old)) { + // somebody else was faster + return false; + } + } + String pos = pageStore == null ? 
+ null : "" + pageStore.getWriteCountTotal(); + lock.setProperty("logPos", pos); + if (pending) { + lock.setProperty("changePending", "true-" + Math.random()); + } else { + lock.setProperty("changePending", null); + } + // ensure that the writer thread will + // not reset the flag before we are done + reconnectCheckNext = System.nanoTime() + + 2 * reconnectCheckDelayNs; + old = lock.save(); + if (pending) { + trace.debug("wait before writing again"); + Thread.sleep(TimeUnit.NANOSECONDS.toMillis((long) (reconnectCheckDelayNs * 1.1))); + Properties now = lock.load(); + if (!now.equals(old)) { + // somebody else was faster + return false; + } + } else { + Thread.sleep(1); + } + reconnectLastLock = old; + reconnectChangePending = pending; + reconnectCheckNext = System.nanoTime() + reconnectCheckDelayNs; + return true; + } catch (Exception e) { + trace.error(e, "pending {0}", pending); + return false; + } + } + + public long getNextModificationDataId() { + return modificationDataId.incrementAndGet(); + } + + public long getModificationMetaId() { + return modificationMetaId.get(); + } + + public long getNextModificationMetaId() { + // if the meta data has been modified, the data is modified as well + // (because MetaTable returns modificationDataId) + modificationDataId.incrementAndGet(); + return modificationMetaId.incrementAndGet() - 1; + } + + public int getPowerOffCount() { + return powerOffCount; + } + + @Override + public void checkPowerOff() { + if (powerOffCount == 0) { + return; + } + if (powerOffCount > 1) { + powerOffCount--; + return; + } + if (powerOffCount != -1) { + try { + powerOffCount = -1; + stopWriter(); + if (mvStore != null) { + mvStore.closeImmediately(); + } + if (pageStore != null) { + try { + pageStore.close(); + } catch (DbException e) { + // ignore + } + pageStore = null; + } + if (lock != null) { + stopServer(); + if (fileLockMethod != FileLockMethod.SERIALIZED) { + // allow testing shutdown + lock.unlock(); + } + lock = null; + } + if 
(traceSystem != null) { + traceSystem.close(); + } + } catch (DbException e) { + DbException.traceThrowable(e); + } + } + Engine.getInstance().close(databaseName); + throw DbException.get(ErrorCode.DATABASE_IS_CLOSED); + } + + /** + * Check if a database with the given name exists. + * + * @param name the name of the database (including path) + * @return true if one exists + */ + static boolean exists(String name) { + if (FileUtils.exists(name + Constants.SUFFIX_PAGE_FILE)) { + return true; + } + return FileUtils.exists(name + Constants.SUFFIX_MV_FILE); + } + + /** + * Get the trace object for the given module id. + * + * @param moduleId the module id + * @return the trace object + */ + public Trace getTrace(int moduleId) { + return traceSystem.getTrace(moduleId); + } + + @Override + public FileStore openFile(String name, String openMode, boolean mustExist) { + if (mustExist && !FileUtils.exists(name)) { + throw DbException.get(ErrorCode.FILE_NOT_FOUND_1, name); + } + FileStore store = FileStore.open(this, name, openMode, cipher, + filePasswordHash); + try { + store.init(); + } catch (DbException e) { + store.closeSilently(); + throw e; + } + return store; + } + + /** + * Check if the file password hash is correct. + * + * @param testCipher the cipher algorithm + * @param testHash the hash code + * @return true if the cipher algorithm and the password match + */ + boolean validateFilePasswordHash(String testCipher, byte[] testHash) { + if (!Objects.equals(testCipher, this.cipher)) { + return false; + } + return Utils.compareSecure(testHash, filePasswordHash); + } + + private String parseDatabaseShortName() { + String n = databaseName; + if (n.endsWith(":")) { + n = null; + } + if (n != null) { + StringTokenizer tokenizer = new StringTokenizer(n, "/\\:,;"); + while (tokenizer.hasMoreTokens()) { + n = tokenizer.nextToken(); + } + } + if (n == null || n.length() == 0) { + n = "unnamed"; + } + return dbSettings.databaseToUpper ? 
StringUtils.toUpperEnglish(n) : n; + } + + private synchronized void open(int traceLevelFile, int traceLevelSystemOut) { + if (persistent) { + String dataFileName = databaseName + Constants.SUFFIX_OLD_DATABASE_FILE; + boolean existsData = FileUtils.exists(dataFileName); + String pageFileName = databaseName + Constants.SUFFIX_PAGE_FILE; + String mvFileName = databaseName + Constants.SUFFIX_MV_FILE; + boolean existsPage = FileUtils.exists(pageFileName); + boolean existsMv = FileUtils.exists(mvFileName); + if (existsData && (!existsPage && !existsMv)) { + throw DbException.get( + ErrorCode.FILE_VERSION_ERROR_1, "Old database: " + + dataFileName + + " - please convert the database " + + "to a SQL script and re-create it."); + } + if (existsPage && !FileUtils.canWrite(pageFileName)) { + readOnly = true; + } + if (existsMv && !FileUtils.canWrite(mvFileName)) { + readOnly = true; + } + if (existsPage && !existsMv) { + dbSettings.mvStore = false; + } + if (readOnly) { + if (traceLevelFile >= TraceSystem.DEBUG) { + String traceFile = Utils.getProperty("java.io.tmpdir", ".") + + "/" + "h2_" + System.currentTimeMillis(); + traceSystem = new TraceSystem(traceFile + + Constants.SUFFIX_TRACE_FILE); + } else { + traceSystem = new TraceSystem(null); + } + } else { + traceSystem = new TraceSystem(databaseName + + Constants.SUFFIX_TRACE_FILE); + } + traceSystem.setLevelFile(traceLevelFile); + traceSystem.setLevelSystemOut(traceLevelSystemOut); + trace = traceSystem.getTrace(Trace.DATABASE); + trace.info("opening {0} (build {1})", databaseName, Constants.BUILD_ID); + if (autoServerMode) { + if (readOnly || + fileLockMethod == FileLockMethod.NO || + fileLockMethod == FileLockMethod.SERIALIZED || + fileLockMethod == FileLockMethod.FS || + !persistent) { + throw DbException.getUnsupportedException( + "autoServerMode && (readOnly || " + + "fileLockMethod == NO || " + + "fileLockMethod == SERIALIZED || " + + "fileLockMethod == FS || " + + "inMemory)"); + } + } + String lockFileName = 
databaseName + Constants.SUFFIX_LOCK_FILE; + if (readOnly) { + if (FileUtils.exists(lockFileName)) { + throw DbException.get(ErrorCode.DATABASE_ALREADY_OPEN_1, + "Lock file exists: " + lockFileName); + } + } + if (!readOnly && fileLockMethod != FileLockMethod.NO) { + if (fileLockMethod != FileLockMethod.FS) { + lock = new FileLock(traceSystem, lockFileName, Constants.LOCK_SLEEP); + lock.lock(fileLockMethod); + if (autoServerMode) { + startServer(lock.getUniqueId()); + } + } + } + if (SysProperties.MODIFY_ON_WRITE) { + while (isReconnectNeeded()) { + // wait until others stopped writing + } + } else { + while (isReconnectNeeded() && !beforeWriting()) { + // wait until others stopped writing and + // until we can write (the file is not yet open - + // no need to re-connect) + } + } + deleteOldTempFiles(); + starting = true; + if (SysProperties.MODIFY_ON_WRITE) { + try { + getPageStore(); + } catch (DbException e) { + if (e.getErrorCode() != ErrorCode.DATABASE_IS_READ_ONLY) { + throw e; + } + pageStore = null; + while (!beforeWriting()) { + // wait until others stopped writing and + // until we can write (the file is not yet open - + // no need to re-connect) + } + getPageStore(); + } + } else { + getPageStore(); + } + starting = false; + if (mvStore == null) { + writer = WriterThread.create(this, writeDelay); + } else { + setWriteDelay(writeDelay); + } + } else { + if (autoServerMode) { + throw DbException.getUnsupportedException( + "autoServerMode && inMemory"); + } + traceSystem = new TraceSystem(null); + trace = traceSystem.getTrace(Trace.DATABASE); + if (dbSettings.mvStore) { + getPageStore(); + } + } + systemUser = new User(this, 0, SYSTEM_USER_NAME, true); + mainSchema = new Schema(this, 0, Constants.SCHEMA_MAIN, systemUser, true); + infoSchema = new Schema(this, -1, "INFORMATION_SCHEMA", systemUser, true); + schemas.put(mainSchema.getName(), mainSchema); + schemas.put(infoSchema.getName(), infoSchema); + publicRole = new Role(this, 0, 
Constants.PUBLIC_ROLE_NAME, true); + roles.put(Constants.PUBLIC_ROLE_NAME, publicRole); + systemUser.setAdmin(true); + systemSession = new Session(this, systemUser, ++nextSessionId); + lobSession = new Session(this, systemUser, ++nextSessionId); + CreateTableData data = new CreateTableData(); + ArrayList cols = data.columns; + Column columnId = new Column("ID", Value.INT); + columnId.setNullable(false); + cols.add(columnId); + cols.add(new Column("HEAD", Value.INT)); + cols.add(new Column("TYPE", Value.INT)); + cols.add(new Column("SQL", Value.STRING)); + boolean create = true; + if (pageStore != null) { + create = pageStore.isNew(); + } + data.tableName = "SYS"; + data.id = 0; + data.temporary = false; + data.persistData = persistent; + data.persistIndexes = persistent; + data.create = create; + data.isHidden = true; + data.session = systemSession; + meta = mainSchema.createTable(data); + IndexColumn[] pkCols = IndexColumn.wrap(new Column[] { columnId }); + metaIdIndex = meta.addIndex(systemSession, "SYS_ID", + 0, pkCols, IndexType.createPrimaryKey( + false, false), true, null); + objectIds.set(0); + starting = true; + Cursor cursor = metaIdIndex.find(systemSession, null, null); + ArrayList records = New.arrayList(); + while (cursor.next()) { + MetaRecord rec = new MetaRecord(cursor.get()); + objectIds.set(rec.getId()); + records.add(rec); + } + Collections.sort(records); + synchronized (systemSession) { + for (MetaRecord rec : records) { + rec.execute(this, systemSession, eventListener); + } + } + if (mvStore != null) { + mvStore.initTransactions(); + mvStore.removeTemporaryMaps(objectIds); + } + recompileInvalidViews(systemSession); + starting = false; + if (!readOnly) { + // set CREATE_BUILD in a new database + String name = SetTypes.getTypeName(SetTypes.CREATE_BUILD); + if (settings.get(name) == null) { + Setting setting = new Setting(this, allocateObjectId(), name); + setting.setIntValue(Constants.BUILD_ID); + lockMeta(systemSession); + 
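The startup code above rebuilds the catalog by scanning `MetaRecord` rows out of the SYS table, sorting them, and re-executing each record's stored DDL. A simplified, hypothetical sketch of that replay pattern (`CatalogRecord` and its ordering rule are illustrative only, not H2's actual `MetaRecord` comparison logic):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Each catalog object is persisted as (id, type, sql) and re-executed in a
// deterministic order when the database is reopened, so that objects are
// created before anything that depends on them.
class CatalogRecord implements Comparable<CatalogRecord> {
    final int id;
    final int type;   // lower type values replay first (e.g. tables before indexes)
    final String sql;
    CatalogRecord(int id, int type, String sql) {
        this.id = id; this.type = type; this.sql = sql;
    }
    @Override
    public int compareTo(CatalogRecord o) {
        return type != o.type ? Integer.compare(type, o.type)
                              : Integer.compare(id, o.id);
    }
}

public class ReplayDemo {
    public static void main(String[] args) {
        List<CatalogRecord> records = new ArrayList<>(Arrays.asList(
                new CatalogRecord(3, 1, "CREATE INDEX IDX1 ON T1(A)"),
                new CatalogRecord(1, 0, "CREATE TABLE T1(A INT)"),
                new CatalogRecord(2, 0, "CREATE TABLE T2(B INT)")));
        Collections.sort(records);
        for (CatalogRecord r : records) {
            // a real engine would execute r.sql against the system session here
            System.out.println(r.id + ": " + r.sql);
        }
    }
}
```

The `Collections.sort(records)` call in the original serves the same purpose: dependencies must be re-created in a stable order before views are recompiled.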
addDatabaseObject(systemSession, setting); + } + // mark all ids used in the page store + if (pageStore != null) { + BitField f = pageStore.getObjectIds(); + for (int i = 0, len = f.length(); i < len; i++) { + if (f.get(i) && !objectIds.get(i)) { + trace.info("unused object id: " + i); + objectIds.set(i); + } + } + } + } + getLobStorage().init(); + systemSession.commit(true); + + trace.info("opened {0}", databaseName); + if (checkpointAllowed > 0) { + afterWriting(); + } + } + + private void startServer(String key) { + try { + server = Server.createTcpServer( + "-tcpPort", Integer.toString(autoServerPort), + "-tcpAllowOthers", + "-tcpDaemon", + "-key", key, databaseName); + server.start(); + } catch (SQLException e) { + throw DbException.convert(e); + } + String localAddress = NetUtils.getLocalAddress(); + String address = localAddress + ":" + server.getPort(); + lock.setProperty("server", address); + String hostName = NetUtils.getHostName(localAddress); + lock.setProperty("hostName", hostName); + lock.save(); + } + + private void stopServer() { + if (server != null) { + Server s = server; + // avoid calling stop recursively + // because stopping the server will + // try to close the database as well + server = null; + s.stop(); + } + } + + private void recompileInvalidViews(Session session) { + boolean atLeastOneRecompiledSuccessfully; + do { + atLeastOneRecompiledSuccessfully = false; + for (Table obj : getAllTablesAndViews(false)) { + if (obj instanceof TableView) { + TableView view = (TableView) obj; + if (view.isInvalid()) { + view.recompile(session, true, false); + if (!view.isInvalid()) { + atLeastOneRecompiledSuccessfully = true; + } + } + } + } + } while (atLeastOneRecompiledSuccessfully); + TableView.clearIndexCaches(session.getDatabase()); + } + + private void initMetaTables() { + if (metaTablesInitialized) { + return; + } + synchronized (infoSchema) { + if (!metaTablesInitialized) { + for (int type = 0, count = MetaTable.getMetaTableTypeCount(); + type 
< count; type++) { + MetaTable m = new MetaTable(infoSchema, -1 - type, type); + infoSchema.add(m); + } + metaTablesInitialized = true; + } + } + } + + private synchronized void addMeta(Session session, DbObject obj) { + int id = obj.getId(); + if (id > 0 && !starting && !obj.isTemporary()) { + Row r = meta.getTemplateRow(); + MetaRecord rec = new MetaRecord(obj); + rec.setRecord(r); + objectIds.set(id); + if (SysProperties.CHECK) { + verifyMetaLocked(session); + } + meta.addRow(session, r); + if (isMultiVersion()) { + // TODO this should work without MVCC, but avoid risks at the + // moment + session.log(meta, UndoLogRecord.INSERT, r); + } + } + } + + /** + * Verify the meta table is locked. + * + * @param session the session + */ + public void verifyMetaLocked(Session session) { + if (meta != null && !meta.isLockedExclusivelyBy(session) + && lockMode != Constants.LOCK_MODE_OFF) { + throw DbException.throwInternalError(); + } + } + + /** + * Lock the metadata table for updates. + * + * @param session the session + * @return whether it was already locked before by this session + */ + public boolean lockMeta(Session session) { + // this method can not be synchronized on the database object, + // as unlocking is also synchronized on the database object - + // so if locking starts just before unlocking, locking could + // never be successful + if (meta == null) { + return true; + } + if (SysProperties.CHECK2) { + final Session prev = META_LOCK_DEBUGGING.get(); + if (prev == null) { + META_LOCK_DEBUGGING.set(session); + META_LOCK_DEBUGGING_STACK.set(new Throwable("Last meta lock granted in this stack trace, "+ + "this is debug information for following IllegalStateException")); + } else if (prev != session) { + META_LOCK_DEBUGGING_STACK.get().printStackTrace(); + throw new IllegalStateException("meta currently locked by " + + prev +", sessionid="+ prev.getId() + + " and trying to be locked by different session, " + + session +", sessionid="+ session.getId() + " on same 
thread"); + } + } + return meta.lock(session, true, true); + } + + /** + * Unlock the metadata table. + * + * @param session the session + */ + public void unlockMeta(Session session) { + unlockMetaDebug(session); + meta.unlock(session); + session.unlock(meta); + } + + /** + * This method doesn't actually unlock the metadata table, all it does it + * reset the debugging flags. + * + * @param session the session + */ + public void unlockMetaDebug(Session session) { + if (SysProperties.CHECK2) { + if (META_LOCK_DEBUGGING.get() == session) { + META_LOCK_DEBUGGING.set(null); + META_LOCK_DEBUGGING_STACK.set(null); + } + } + } + + /** + * Remove the given object from the meta data. + * + * @param session the session + * @param id the id of the object to remove + */ + public synchronized void removeMeta(Session session, int id) { + if (id > 0 && !starting) { + SearchRow r = meta.getTemplateSimpleRow(false); + r.setValue(0, ValueInt.get(id)); + boolean wasLocked = lockMeta(session); + Cursor cursor = metaIdIndex.find(session, r, r); + if (cursor.next()) { + if (SysProperties.CHECK) { + if (lockMode != Constants.LOCK_MODE_OFF && !wasLocked) { + throw DbException.throwInternalError(); + } + } + Row found = cursor.get(); + meta.removeRow(session, found); + if (isMultiVersion()) { + // TODO this should work without MVCC, but avoid risks at + // the moment + session.log(meta, UndoLogRecord.DELETE, found); + } + if (SysProperties.CHECK) { + checkMetaFree(session, id); + } + } else if (!wasLocked) { + unlockMetaDebug(session); + // must not keep the lock if it was not locked + // otherwise updating sequences may cause a deadlock + meta.unlock(session); + session.unlock(meta); + } + objectIds.clear(id); + } + } + + @SuppressWarnings("unchecked") + private HashMap getMap(int type) { + HashMap result; + switch (type) { + case DbObject.USER: + result = users; + break; + case DbObject.SETTING: + result = settings; + break; + case DbObject.ROLE: + result = roles; + break; + case 
DbObject.RIGHT: + result = rights; + break; + case DbObject.SCHEMA: + result = schemas; + break; + case DbObject.USER_DATATYPE: + result = userDataTypes; + break; + case DbObject.COMMENT: + result = comments; + break; + case DbObject.AGGREGATE: + result = aggregates; + break; + default: + throw DbException.throwInternalError("type=" + type); + } + return (HashMap) result; + } + + /** + * Add a schema object to the database. + * + * @param session the session + * @param obj the object to add + */ + public void addSchemaObject(Session session, SchemaObject obj) { + int id = obj.getId(); + if (id > 0 && !starting) { + checkWritingAllowed(); + } + lockMeta(session); + synchronized (this) { + obj.getSchema().add(obj); + addMeta(session, obj); + } + } + + /** + * Add an object to the database. + * + * @param session the session + * @param obj the object to add + */ + public synchronized void addDatabaseObject(Session session, DbObject obj) { + int id = obj.getId(); + if (id > 0 && !starting) { + checkWritingAllowed(); + } + HashMap map = getMap(obj.getType()); + if (obj.getType() == DbObject.USER) { + User user = (User) obj; + if (user.isAdmin() && systemUser.getName().equals(SYSTEM_USER_NAME)) { + systemUser.rename(user.getName()); + } + } + String name = obj.getName(); + if (SysProperties.CHECK && map.get(name) != null) { + DbException.throwInternalError("object already exists"); + } + lockMeta(session); + addMeta(session, obj); + map.put(name, obj); + } + + /** + * Get the user defined aggregate function if it exists, or null if not. + * + * @param name the name of the user defined aggregate function + * @return the aggregate function or null + */ + public UserAggregate findAggregate(String name) { + return aggregates.get(name); + } + + /** + * Get the comment for the given database object if one exists, or null if + * not. 
+ * + * @param object the database object + * @return the comment or null + */ + public Comment findComment(DbObject object) { + if (object.getType() == DbObject.COMMENT) { + return null; + } + String key = Comment.getKey(object); + return comments.get(key); + } + + /** + * Get the role if it exists, or null if not. + * + * @param roleName the name of the role + * @return the role or null + */ + public Role findRole(String roleName) { + return roles.get(roleName); + } + + /** + * Get the schema if it exists, or null if not. + * + * @param schemaName the name of the schema + * @return the schema or null + */ + public Schema findSchema(String schemaName) { + Schema schema = schemas.get(schemaName); + if (schema == infoSchema) { + initMetaTables(); + } + return schema; + } + + /** + * Get the setting if it exists, or null if not. + * + * @param name the name of the setting + * @return the setting or null + */ + public Setting findSetting(String name) { + return settings.get(name); + } + + /** + * Get the user if it exists, or null if not. + * + * @param name the name of the user + * @return the user or null + */ + public User findUser(String name) { + return users.get(name); + } + + /** + * Get the user defined data type if it exists, or null if not. + * + * @param name the name of the user defined data type + * @return the user defined data type or null + */ + public UserDataType findUserDataType(String name) { + return userDataTypes.get(name); + } + + /** + * Get user with the given name. This method throws an exception if the user + * does not exist. + * + * @param name the user name + * @return the user + * @throws DbException if the user does not exist + */ + public User getUser(String name) { + User user = findUser(name); + if (user == null) { + throw DbException.get(ErrorCode.USER_NOT_FOUND_1, name); + } + return user; + } + + /** + * Create a session for the given user. 
+ * + * @param user the user + * @return the session, or null if the database is currently closing + * @throws DbException if the database is in exclusive mode + */ + synchronized Session createSession(User user) { + if (closing) { + return null; + } + if (exclusiveSession.get() != null) { + throw DbException.get(ErrorCode.DATABASE_IS_IN_EXCLUSIVE_MODE); + } + Session session = new Session(this, user, ++nextSessionId); + userSessions.add(session); + trace.info("connecting session #{0} to {1}", session.getId(), databaseName); + if (delayedCloser != null) { + delayedCloser.reset(); + delayedCloser = null; + } + return session; + } + + /** + * Remove a session. This method is called after the user has disconnected. + * + * @param session the session + */ + public synchronized void removeSession(Session session) { + if (session != null) { + exclusiveSession.compareAndSet(session, null); + userSessions.remove(session); + if (session != systemSession && session != lobSession) { + trace.info("disconnecting session #{0}", session.getId()); + } + } + if (userSessions.isEmpty() && + session != systemSession && session != lobSession) { + if (closeDelay == 0) { + close(false); + } else if (closeDelay < 0) { + return; + } else { + delayedCloser = new DatabaseCloser(this, closeDelay * 1000, false); + delayedCloser.setName("H2 Close Delay " + getShortName()); + delayedCloser.setDaemon(true); + delayedCloser.start(); + } + } + if (session != systemSession && + session != lobSession && session != null) { + trace.info("disconnected session #{0}", session.getId()); + } + } + + private synchronized void closeAllSessionsException(Session except) { + Session[] all = userSessions.toArray(new Session[userSessions.size()]); + for (Session s : all) { + if (s != except) { + try { + // must roll back, otherwise the session is removed and + // the transaction log that contains its uncommitted + // operations as well + s.rollback(); + s.close(); + } catch (DbException e) { + trace.error(e, 
"disconnecting session #{0}", s.getId()); + } + } + } + } + + /** + * Close the database. + * + * @param fromShutdownHook true if this method is called from the shutdown + * hook + */ + void close(boolean fromShutdownHook) { + try { + synchronized (this) { + if (closing) { + return; + } + throwLastBackgroundException(); + if (fileLockMethod == FileLockMethod.SERIALIZED && + !reconnectChangePending) { + // another connection may have written something - don't write + try { + closeOpenFilesAndUnlock(false); + } catch (DbException e) { + // ignore + } + traceSystem.close(); + return; + } + closing = true; + stopServer(); + if (!userSessions.isEmpty()) { + if (!fromShutdownHook) { + return; + } + trace.info("closing {0} from shutdown hook", databaseName); + closeAllSessionsException(null); + } + trace.info("closing {0}", databaseName); + if (eventListener != null) { + // allow the event listener to connect to the database + closing = false; + DatabaseEventListener e = eventListener; + // set it to null, to make sure it's called only once + eventListener = null; + e.closingDatabase(); + if (!userSessions.isEmpty()) { + // if a connection was opened, we can't close the database + return; + } + closing = true; + } + } + removeOrphanedLobs(); + try { + if (systemSession != null) { + if (powerOffCount != -1) { + for (Table table : getAllTablesAndViews(false)) { + if (table.isGlobalTemporary()) { + table.removeChildrenAndResources(systemSession); + } else { + table.close(systemSession); + } + } + for (SchemaObject obj : getAllSchemaObjects( + DbObject.SEQUENCE)) { + Sequence sequence = (Sequence) obj; + sequence.close(); + } + } + for (SchemaObject obj : getAllSchemaObjects( + DbObject.TRIGGER)) { + TriggerObject trigger = (TriggerObject) obj; + try { + trigger.close(); + } catch (SQLException e) { + trace.error(e, "close"); + } + } + if (powerOffCount != -1) { + meta.close(systemSession); + systemSession.commit(true); + } + } + } catch (DbException e) { + trace.error(e, 
"close"); + } + tempFileDeleter.deleteAll(); + try { + closeOpenFilesAndUnlock(true); + } catch (DbException e) { + trace.error(e, "close"); + } + trace.info("closed"); + traceSystem.close(); + if (closeOnExit != null) { + closeOnExit.reset(); + try { + Runtime.getRuntime().removeShutdownHook(closeOnExit); + } catch (IllegalStateException e) { + // ignore + } catch (SecurityException e) { + // applets may not do that - ignore + } + closeOnExit = null; + } + if (deleteFilesOnDisconnect && persistent) { + deleteFilesOnDisconnect = false; + try { + String directory = FileUtils.getParent(databaseName); + String name = FileUtils.getName(databaseName); + DeleteDbFiles.execute(directory, name, true); + } catch (Exception e) { + // ignore (the trace is closed already) + } + } + } finally { + Engine.getInstance().close(databaseName); + } + } + + private void removeOrphanedLobs() { + // remove all session variables and temporary lobs + if (!persistent) { + return; + } + boolean lobStorageIsUsed = infoSchema.findTableOrView( + systemSession, LobStorageBackend.LOB_DATA_TABLE) != null; + lobStorageIsUsed |= mvStore != null; + if (!lobStorageIsUsed) { + return; + } + try { + getLobStorage(); + lobStorage.removeAllForTable( + LobStorageFrontend.TABLE_ID_SESSION_VARIABLE); + } catch (DbException e) { + trace.error(e, "close"); + } + } + + private void stopWriter() { + if (writer != null) { + writer.stopThread(); + writer = null; + } + } + + /** + * Close all open files and unlock the database. 
+ * + * @param flush whether writing is allowed + */ + private synchronized void closeOpenFilesAndUnlock(boolean flush) { + stopWriter(); + if (pageStore != null) { + if (flush) { + try { + pageStore.checkpoint(); + if (!readOnly) { + lockMeta(pageStore.getPageStoreSession()); + pageStore.compact(compactMode); + unlockMeta(pageStore.getPageStoreSession()); + } + } catch (DbException e) { + if (SysProperties.CHECK2) { + int code = e.getErrorCode(); + if (code != ErrorCode.DATABASE_IS_CLOSED && + code != ErrorCode.LOCK_TIMEOUT_1 && + code != ErrorCode.IO_EXCEPTION_2) { + e.printStackTrace(); + } + } + trace.error(e, "close"); + } catch (Throwable t) { + if (SysProperties.CHECK2) { + t.printStackTrace(); + } + trace.error(t, "close"); + } + } + } + reconnectModified(false); + if (mvStore != null) { + long maxCompactTime = dbSettings.maxCompactTime; + if (compactMode == CommandInterface.SHUTDOWN_COMPACT) { + mvStore.compactFile(dbSettings.maxCompactTime); + } else if (compactMode == CommandInterface.SHUTDOWN_DEFRAG) { + maxCompactTime = Long.MAX_VALUE; + } else if (getSettings().defragAlways) { + maxCompactTime = Long.MAX_VALUE; + } + mvStore.close(maxCompactTime); + } + closeFiles(); + if (persistent && lock == null && + fileLockMethod != FileLockMethod.NO && + fileLockMethod != FileLockMethod.FS) { + // everything already closed (maybe in checkPowerOff) + // don't delete temp files in this case because + // the database could be open now (even from within another process) + return; + } + if (persistent) { + deleteOldTempFiles(); + } + if (systemSession != null) { + systemSession.close(); + systemSession = null; + } + if (lobSession != null) { + lobSession.close(); + lobSession = null; + } + if (lock != null) { + if (fileLockMethod == FileLockMethod.SERIALIZED) { + // wait before deleting the .lock file, + // otherwise other connections can not detect that + if (lock.load().containsKey("changePending")) { + try { + Thread.sleep(TimeUnit.NANOSECONDS + .toMillis((long) 
(reconnectCheckDelayNs * 1.1))); + } catch (InterruptedException e) { + trace.error(e, "close"); + } + } + } + lock.unlock(); + lock = null; + } + } + + private synchronized void closeFiles() { + try { + if (mvStore != null) { + mvStore.closeImmediately(); + } + if (pageStore != null) { + pageStore.close(); + pageStore = null; + } + } catch (DbException e) { + trace.error(e, "close"); + } + } + + private void checkMetaFree(Session session, int id) { + SearchRow r = meta.getTemplateSimpleRow(false); + r.setValue(0, ValueInt.get(id)); + Cursor cursor = metaIdIndex.find(session, r, r); + if (cursor.next()) { + DbException.throwInternalError(); + } + } + + /** + * Allocate a new object id. + * + * @return the id + */ + public synchronized int allocateObjectId() { + int i = objectIds.nextClearBit(0); + objectIds.set(i); + return i; + } + + public ArrayList getAllAggregates() { + return new ArrayList<>(aggregates.values()); + } + + public ArrayList getAllComments() { + return new ArrayList<>(comments.values()); + } + + public int getAllowLiterals() { + if (starting) { + return Constants.ALLOW_LITERALS_ALL; + } + return allowLiterals; + } + + public ArrayList getAllRights() { + return new ArrayList<>(rights.values()); + } + + public ArrayList getAllRoles() { + return new ArrayList<>(roles.values()); + } + + /** + * Get all schema objects. + * + * @return all objects of all types + */ + public ArrayList getAllSchemaObjects() { + initMetaTables(); + ArrayList list = New.arrayList(); + for (Schema schema : schemas.values()) { + list.addAll(schema.getAll()); + } + return list; + } + + /** + * Get all schema objects of the given type. 
+     *
+     * @param type the object type
+     * @return all objects of that type
+     */
+    public ArrayList&lt;SchemaObject&gt; getAllSchemaObjects(int type) {
+        if (type == DbObject.TABLE_OR_VIEW) {
+            initMetaTables();
+        }
+        ArrayList<SchemaObject> list = New.arrayList();
+        for (Schema schema : schemas.values()) {
+            list.addAll(schema.getAll(type));
+        }
+        return list;
+    }
+
+    /**
+     * Get all tables and views.
+     *
+     * @param includeMeta whether to force including the meta data tables (if
+     *            true, metadata tables are always included; if false, metadata
+     *            tables are only included if they are already initialized)
+     * @return all objects of that type
+     */
+    public ArrayList<Table> getAllTablesAndViews(boolean includeMeta) {
+        if (includeMeta) {
+            initMetaTables();
+        }
+        ArrayList<Table> list = New.arrayList();
+        for (Schema schema : schemas.values()) {
+            list.addAll(schema.getAllTablesAndViews());
+        }
+        return list;
+    }
+
+    /**
+     * Get all synonyms.
+     *
+     * @return all objects of that type
+     */
+    public ArrayList<TableSynonym> getAllSynonyms() {
+        ArrayList<TableSynonym> list = New.arrayList();
+        for (Schema schema : schemas.values()) {
+            list.addAll(schema.getAllSynonyms());
+        }
+        return list;
+    }
+
+    /**
+     * Get the tables with the given name, if any.
+     *
+     * @param name the table name
+     * @return the list
+     */
+    public ArrayList<Table> getTableOrViewByName(String name) {
+        ArrayList<Table> list = New.arrayList();
+        for (Schema schema : schemas.values()) {
+            Table table = schema.getTableOrViewByName(name);
+            if (table != null) {
+                list.add(table);
+            }
+        }
+        return list;
+    }
+
+    public ArrayList<Schema> getAllSchemas() {
+        initMetaTables();
+        return new ArrayList<>(schemas.values());
+    }
+
+    public ArrayList<Setting> getAllSettings() {
+        return new ArrayList<>(settings.values());
+    }
+
+    public ArrayList<UserDataType> getAllUserDataTypes() {
+        return new ArrayList<>(userDataTypes.values());
+    }
+
+    public ArrayList<User> getAllUsers() {
+        return new ArrayList<>(users.values());
+    }
+
+    public String getCacheType() {
+        return cacheType;
+    }
+
+    public String getCluster() {
+        return cluster;
+    }
+
+    @Override
+    public CompareMode getCompareMode() {
+        return compareMode;
+    }
+
+    @Override
+    public String getDatabasePath() {
+        if (persistent) {
+            return FileUtils.toRealPath(databaseName);
+        }
+        return null;
+    }
+
+    public String getShortName() {
+        return databaseShortName;
+    }
+
+    public String getName() {
+        return databaseName;
+    }
+
+    /**
+     * Get all sessions that are currently connected to the database.
+     *
+     * @param includingSystemSession if the system session should also be
+     *            included
+     * @return the list of sessions
+     */
+    public Session[] getSessions(boolean includingSystemSession) {
+        ArrayList<Session> list;
+        // need to synchronize on userSessions, otherwise the list
+        // may contain null elements
+        synchronized (userSessions) {
+            list = new ArrayList<>(userSessions);
+        }
+        // copy, to ensure the reference is stable
+        Session sys = systemSession;
+        Session lob = lobSession;
+        if (includingSystemSession && sys != null) {
+            list.add(sys);
+        }
+        if (includingSystemSession && lob != null) {
+            list.add(lob);
+        }
+        return list.toArray(new Session[0]);
+    }
+
+    /**
+     * Update an object in the system table.
+ * + * @param session the session + * @param obj the database object + */ + public void updateMeta(Session session, DbObject obj) { + lockMeta(session); + synchronized (this) { + int id = obj.getId(); + removeMeta(session, id); + addMeta(session, obj); + // for temporary objects + if (id > 0) { + objectIds.set(id); + } + } + } + + /** + * Rename a schema object. + * + * @param session the session + * @param obj the object + * @param newName the new name + */ + public synchronized void renameSchemaObject(Session session, + SchemaObject obj, String newName) { + checkWritingAllowed(); + obj.getSchema().rename(obj, newName); + updateMetaAndFirstLevelChildren(session, obj); + } + + private synchronized void updateMetaAndFirstLevelChildren(Session session, DbObject obj) { + ArrayList list = obj.getChildren(); + Comment comment = findComment(obj); + if (comment != null) { + DbException.throwInternalError(comment.toString()); + } + updateMeta(session, obj); + // remember that this scans only one level deep! + if (list != null) { + for (DbObject o : list) { + if (o.getCreateSQL() != null) { + updateMeta(session, o); + } + } + } + } + + /** + * Rename a database object. 
+ * + * @param session the session + * @param obj the object + * @param newName the new name + */ + public synchronized void renameDatabaseObject(Session session, + DbObject obj, String newName) { + checkWritingAllowed(); + int type = obj.getType(); + HashMap map = getMap(type); + if (SysProperties.CHECK) { + if (!map.containsKey(obj.getName())) { + DbException.throwInternalError("not found: " + obj.getName()); + } + if (obj.getName().equals(newName) || map.containsKey(newName)) { + DbException.throwInternalError("object already exists: " + newName); + } + } + obj.checkRename(); + int id = obj.getId(); + lockMeta(session); + removeMeta(session, id); + map.remove(obj.getName()); + obj.rename(newName); + map.put(newName, obj); + updateMetaAndFirstLevelChildren(session, obj); + } + + /** + * Create a temporary file in the database folder. + * + * @return the file name + */ + public String createTempFile() { + try { + boolean inTempDir = readOnly; + String name = databaseName; + if (!persistent) { + name = "memFS:" + name; + } + return FileUtils.createTempFile(name, + Constants.SUFFIX_TEMP_FILE, true, inTempDir); + } catch (IOException e) { + throw DbException.convertIOException(e, databaseName); + } + } + + private void deleteOldTempFiles() { + String path = FileUtils.getParent(databaseName); + for (String name : FileUtils.newDirectoryStream(path)) { + if (name.endsWith(Constants.SUFFIX_TEMP_FILE) && + name.startsWith(databaseName)) { + // can't always delete the files, they may still be open + FileUtils.tryDelete(name); + } + } + } + + /** + * Get the schema. If the schema does not exist, an exception is thrown. 
+ * + * @param schemaName the name of the schema + * @return the schema + * @throws DbException no schema with that name exists + */ + public Schema getSchema(String schemaName) { + Schema schema = findSchema(schemaName); + if (schema == null) { + throw DbException.get(ErrorCode.SCHEMA_NOT_FOUND_1, schemaName); + } + return schema; + } + + /** + * Remove the object from the database. + * + * @param session the session + * @param obj the object to remove + */ + public synchronized void removeDatabaseObject(Session session, DbObject obj) { + checkWritingAllowed(); + String objName = obj.getName(); + int type = obj.getType(); + HashMap map = getMap(type); + if (SysProperties.CHECK && !map.containsKey(objName)) { + DbException.throwInternalError("not found: " + objName); + } + Comment comment = findComment(obj); + lockMeta(session); + if (comment != null) { + removeDatabaseObject(session, comment); + } + int id = obj.getId(); + obj.removeChildrenAndResources(session); + map.remove(objName); + removeMeta(session, id); + } + + /** + * Get the first table that depends on this object. + * + * @param obj the object to find + * @param except the table to exclude (or null) + * @return the first dependent table, or null + */ + public Table getDependentTable(SchemaObject obj, Table except) { + switch (obj.getType()) { + case DbObject.COMMENT: + case DbObject.CONSTRAINT: + case DbObject.INDEX: + case DbObject.RIGHT: + case DbObject.TRIGGER: + case DbObject.USER: + return null; + default: + } + HashSet set = new HashSet<>(); + for (Table t : getAllTablesAndViews(false)) { + if (except == t) { + continue; + } else if (TableType.VIEW == t.getTableType()) { + continue; + } + set.clear(); + t.addDependencies(set); + if (set.contains(obj)) { + return t; + } + } + return null; + } + + /** + * Remove an object from the system table. 
+     *
+     * @param session the session
+     * @param obj the object to be removed
+     */
+    public void removeSchemaObject(Session session,
+            SchemaObject obj) {
+        int type = obj.getType();
+        if (type == DbObject.TABLE_OR_VIEW) {
+            Table table = (Table) obj;
+            if (table.isTemporary() && !table.isGlobalTemporary()) {
+                session.removeLocalTempTable(table);
+                return;
+            }
+        } else if (type == DbObject.INDEX) {
+            Index index = (Index) obj;
+            Table table = index.getTable();
+            if (table.isTemporary() && !table.isGlobalTemporary()) {
+                session.removeLocalTempTableIndex(index);
+                return;
+            }
+        } else if (type == DbObject.CONSTRAINT) {
+            Constraint constraint = (Constraint) obj;
+            Table table = constraint.getTable();
+            if (table.isTemporary() && !table.isGlobalTemporary()) {
+                session.removeLocalTempTableConstraint(constraint);
+                return;
+            }
+        }
+        checkWritingAllowed();
+        lockMeta(session);
+        synchronized (this) {
+            Comment comment = findComment(obj);
+            if (comment != null) {
+                removeDatabaseObject(session, comment);
+            }
+            obj.getSchema().remove(obj);
+            int id = obj.getId();
+            if (!starting) {
+                Table t = getDependentTable(obj, null);
+                if (t != null) {
+                    obj.getSchema().add(obj);
+                    throw DbException.get(ErrorCode.CANNOT_DROP_2, obj.getSQL(),
+                            t.getSQL());
+                }
+                obj.removeChildrenAndResources(session);
+            }
+            removeMeta(session, id);
+        }
+    }
+
+    /**
+     * Check if this database is disk-based.
+     *
+     * @return true if it is disk-based, false if it is in-memory only.
+ */ + public boolean isPersistent() { + return persistent; + } + + public TraceSystem getTraceSystem() { + return traceSystem; + } + + public synchronized void setCacheSize(int kb) { + if (starting) { + int max = MathUtils.convertLongToInt(Utils.getMemoryMax()) / 2; + kb = Math.min(kb, max); + } + cacheSize = kb; + if (pageStore != null) { + pageStore.getCache().setMaxMemory(kb); + } + if (mvStore != null) { + mvStore.setCacheSize(Math.max(1, kb)); + } + } + + public synchronized void setMasterUser(User user) { + lockMeta(systemSession); + addDatabaseObject(systemSession, user); + systemSession.commit(true); + } + + public Role getPublicRole() { + return publicRole; + } + + /** + * Get a unique temporary table name. + * + * @param baseName the prefix of the returned name + * @param session the session + * @return a unique name + */ + public synchronized String getTempTableName(String baseName, Session session) { + String tempName; + do { + tempName = baseName + "_COPY_" + session.getId() + + "_" + nextTempTableId++; + } while (mainSchema.findTableOrView(session, tempName) != null); + return tempName; + } + + public void setCompareMode(CompareMode compareMode) { + this.compareMode = compareMode; + } + + public void setCluster(String cluster) { + this.cluster = cluster; + } + + @Override + public void checkWritingAllowed() { + if (readOnly) { + throw DbException.get(ErrorCode.DATABASE_IS_READ_ONLY); + } + if (fileLockMethod == FileLockMethod.SERIALIZED) { + if (!reconnectChangePending) { + throw DbException.get(ErrorCode.DATABASE_IS_READ_ONLY); + } + } + } + + public boolean isReadOnly() { + return readOnly; + } + + public void setWriteDelay(int value) { + writeDelay = value; + if (writer != null) { + writer.setWriteDelay(value); + // TODO check if MIN_WRITE_DELAY is a good value + flushOnEachCommit = writeDelay < Constants.MIN_WRITE_DELAY; + } + if (mvStore != null) { + int millis = value < 0 ? 
0 : value; + mvStore.getStore().setAutoCommitDelay(millis); + } + } + + public int getRetentionTime() { + return retentionTime; + } + + public void setRetentionTime(int value) { + retentionTime = value; + if (mvStore != null) { + mvStore.getStore().setRetentionTime(value); + } + } + + public void setAllowBuiltinAliasOverride(boolean b) { + allowBuiltinAliasOverride = b; + } + + public boolean isAllowBuiltinAliasOverride() { + return allowBuiltinAliasOverride; + } + + /** + * Check if flush-on-each-commit is enabled. + * + * @return true if it is + */ + public boolean getFlushOnEachCommit() { + return flushOnEachCommit; + } + + /** + * Get the list of in-doubt transactions. + * + * @return the list + */ + public ArrayList getInDoubtTransactions() { + if (mvStore != null) { + return mvStore.getInDoubtTransactions(); + } + return pageStore == null ? null : pageStore.getInDoubtTransactions(); + } + + /** + * Prepare a transaction. + * + * @param session the session + * @param transaction the name of the transaction + */ + synchronized void prepareCommit(Session session, String transaction) { + if (readOnly) { + return; + } + if (mvStore != null) { + mvStore.prepareCommit(session, transaction); + return; + } + if (pageStore != null) { + pageStore.flushLog(); + pageStore.prepareCommit(session, transaction); + } + } + + /** + * Commit the current transaction of the given session. 
+ * + * @param session the session + */ + synchronized void commit(Session session) { + throwLastBackgroundException(); + if (readOnly) { + return; + } + if (pageStore != null) { + pageStore.commit(session); + } + session.setAllCommitted(); + } + + private void throwLastBackgroundException() { + if (backgroundException != null) { + // we don't care too much about concurrency here, + // we just want to make sure the exception is _normally_ + // not just logged to the .trace.db file + DbException b = backgroundException; + backgroundException = null; + if (b != null) { + // wrap the exception, so we see it was thrown here + throw DbException.get(b.getErrorCode(), b, b.getMessage()); + } + } + } + + public void setBackgroundException(DbException e) { + if (backgroundException == null) { + backgroundException = e; + TraceSystem t = getTraceSystem(); + if (t != null) { + t.getTrace(Trace.DATABASE).error(e, "flush"); + } + } + } + + /** + * Flush all pending changes to the transaction log. + */ + public synchronized void flush() { + if (readOnly) { + return; + } + if (pageStore != null) { + pageStore.flushLog(); + } + if (mvStore != null) { + try { + mvStore.flush(); + } catch (RuntimeException e) { + backgroundException = DbException.convert(e); + throw e; + } + } + } + + public void setEventListener(DatabaseEventListener eventListener) { + this.eventListener = eventListener; + } + + public void setEventListenerClass(String className) { + if (className == null || className.length() == 0) { + eventListener = null; + } else { + try { + eventListener = (DatabaseEventListener) + JdbcUtils.loadUserClass(className).newInstance(); + String url = databaseURL; + if (cipher != null) { + url += ";CIPHER=" + cipher; + } + eventListener.init(url); + } catch (Throwable e) { + throw DbException.get( + ErrorCode.ERROR_SETTING_DATABASE_EVENT_LISTENER_2, e, + className, e.toString()); + } + } + } + + /** + * Set the progress of a long running operation. 
+ * This method calls the {@link DatabaseEventListener} if one is registered. + * + * @param state the {@link DatabaseEventListener} state + * @param name the object name + * @param x the current position + * @param max the highest value + */ + public void setProgress(int state, String name, int x, int max) { + if (eventListener != null) { + try { + eventListener.setProgress(state, name, x, max); + } catch (Exception e2) { + // ignore this (user made) exception + } + } + } + + /** + * This method is called after an exception occurred, to inform the database + * event listener (if one is set). + * + * @param e the exception + * @param sql the SQL statement + */ + public void exceptionThrown(SQLException e, String sql) { + if (eventListener != null) { + try { + eventListener.exceptionThrown(e, sql); + } catch (Exception e2) { + // ignore this (user made) exception + } + } + } + + /** + * Synchronize the files with the file system. This method is called when + * executing the SQL statement CHECKPOINT SYNC. + */ + public synchronized void sync() { + if (readOnly) { + return; + } + if (mvStore != null) { + mvStore.sync(); + } + if (pageStore != null) { + pageStore.sync(); + } + } + + public int getMaxMemoryRows() { + return maxMemoryRows; + } + + public void setMaxMemoryRows(int value) { + this.maxMemoryRows = value; + } + + public void setMaxMemoryUndo(int value) { + this.maxMemoryUndo = value; + } + + public int getMaxMemoryUndo() { + return maxMemoryUndo; + } + + public void setLockMode(int lockMode) { + switch (lockMode) { + case Constants.LOCK_MODE_OFF: + if (multiThreaded) { + // currently the combination of LOCK_MODE=0 and MULTI_THREADED + // is not supported. 
also see code in + // JdbcDatabaseMetaData#supportsTransactionIsolationLevel(int) + throw DbException.get( + ErrorCode.UNSUPPORTED_SETTING_COMBINATION, + "LOCK_MODE=0 & MULTI_THREADED"); + } + break; + case Constants.LOCK_MODE_READ_COMMITTED: + case Constants.LOCK_MODE_TABLE: + case Constants.LOCK_MODE_TABLE_GC: + break; + default: + throw DbException.getInvalidValueException("lock mode", lockMode); + } + this.lockMode = lockMode; + } + + public int getLockMode() { + return lockMode; + } + + public synchronized void setCloseDelay(int value) { + this.closeDelay = value; + } + + public Session getSystemSession() { + return systemSession; + } + + /** + * Check if the database is in the process of closing. + * + * @return true if the database is closing + */ + public boolean isClosing() { + return closing; + } + + public void setMaxLengthInplaceLob(int value) { + this.maxLengthInplaceLob = value; + } + + @Override + public int getMaxLengthInplaceLob() { + return maxLengthInplaceLob; + } + + public void setIgnoreCase(boolean b) { + ignoreCase = b; + } + + public boolean getIgnoreCase() { + if (starting) { + // tables created at startup must not be converted to ignorecase + return false; + } + return ignoreCase; + } + + public synchronized void setDeleteFilesOnDisconnect(boolean b) { + this.deleteFilesOnDisconnect = b; + } + + @Override + public String getLobCompressionAlgorithm(int type) { + return lobCompressionAlgorithm; + } + + public void setLobCompressionAlgorithm(String stringValue) { + this.lobCompressionAlgorithm = stringValue; + } + + public synchronized void setMaxLogSize(long value) { + if (pageStore != null) { + pageStore.setMaxLogSize(value); + } + } + + public void setAllowLiterals(int value) { + this.allowLiterals = value; + } + + public boolean getOptimizeReuseResults() { + return optimizeReuseResults; + } + + public void setOptimizeReuseResults(boolean b) { + optimizeReuseResults = b; + } + + @Override + public Object getLobSyncObject() { + return 
lobSyncObject; + } + + public int getSessionCount() { + return userSessions.size(); + } + + public void setReferentialIntegrity(boolean b) { + referentialIntegrity = b; + } + + public boolean getReferentialIntegrity() { + return referentialIntegrity; + } + + public void setQueryStatistics(boolean b) { + queryStatistics = b; + synchronized (this) { + if (!b) { + queryStatisticsData = null; + } + } + } + + public boolean getQueryStatistics() { + return queryStatistics; + } + + public void setQueryStatisticsMaxEntries(int n) { + queryStatisticsMaxEntries = n; + if (queryStatisticsData != null) { + synchronized (this) { + if (queryStatisticsData != null) { + queryStatisticsData.setMaxQueryEntries(queryStatisticsMaxEntries); + } + } + } + } + + public QueryStatisticsData getQueryStatisticsData() { + if (!queryStatistics) { + return null; + } + if (queryStatisticsData == null) { + synchronized (this) { + if (queryStatisticsData == null) { + queryStatisticsData = new QueryStatisticsData(queryStatisticsMaxEntries); + } + } + } + return queryStatisticsData; + } + + /** + * Check if the database is currently opening. This is true until all stored + * SQL statements have been executed. + * + * @return true if the database is still starting + */ + public boolean isStarting() { + return starting; + } + + /** + * Check if multi version concurrency is enabled for this database. + * + * @return true if it is enabled + */ + public boolean isMultiVersion() { + return multiVersion; + } + + /** + * Called after the database has been opened and initialized. This method + * notifies the event listener if one has been set. 
+ */ + void opened() { + if (eventListener != null) { + eventListener.opened(); + } + if (writer != null) { + writer.startThread(); + } + } + + public void setMode(Mode mode) { + this.mode = mode; + } + + public Mode getMode() { + return mode; + } + + public boolean isMultiThreaded() { + return multiThreaded; + } + + public void setMultiThreaded(boolean multiThreaded) { + if (multiThreaded && this.multiThreaded != multiThreaded) { + if (multiVersion && mvStore == null) { + // currently the combination of MVCC and MULTI_THREADED is not + // supported + throw DbException.get( + ErrorCode.UNSUPPORTED_SETTING_COMBINATION, + "MVCC & MULTI_THREADED"); + } + if (lockMode == 0) { + // currently the combination of LOCK_MODE=0 and MULTI_THREADED + // is not supported + throw DbException.get( + ErrorCode.UNSUPPORTED_SETTING_COMBINATION, + "LOCK_MODE=0 & MULTI_THREADED"); + } + } + this.multiThreaded = multiThreaded; + } + + public void setMaxOperationMemory(int maxOperationMemory) { + this.maxOperationMemory = maxOperationMemory; + } + + public int getMaxOperationMemory() { + return maxOperationMemory; + } + + public Session getExclusiveSession() { + return exclusiveSession.get(); + } + + /** + * Set the session that can exclusively access the database. + * + * @param session the session + * @param closeOthers whether other sessions are closed + */ + public void setExclusiveSession(Session session, boolean closeOthers) { + this.exclusiveSession.set(session); + if (closeOthers) { + closeAllSessionsException(session); + } + } + + @Override + public SmallLRUCache getLobFileListCache() { + if (lobFileListCache == null) { + lobFileListCache = SmallLRUCache.newInstance(128); + } + return lobFileListCache; + } + + /** + * Checks if the system table (containing the catalog) is locked. 
+ * + * @return true if it is currently locked + */ + public boolean isSysTableLocked() { + return meta == null || meta.isLockedExclusively(); + } + + /** + * Checks if the system table (containing the catalog) is locked by the + * given session. + * + * @param session the session + * @return true if it is currently locked + */ + public boolean isSysTableLockedBy(Session session) { + return meta == null || meta.isLockedExclusivelyBy(session); + } + + /** + * Open a new connection or get an existing connection to another database. + * + * @param driver the database driver or null + * @param url the database URL + * @param user the user name + * @param password the password + * @return the connection + */ + public TableLinkConnection getLinkConnection(String driver, String url, + String user, String password) { + if (linkConnections == null) { + linkConnections = new HashMap<>(); + } + return TableLinkConnection.open(linkConnections, driver, url, user, + password, dbSettings.shareLinkedConnections); + } + + @Override + public String toString() { + return databaseShortName + ":" + super.toString(); + } + + /** + * Immediately close the database. 
+ */ + public void shutdownImmediately() { + setPowerOffCount(1); + try { + checkPowerOff(); + } catch (DbException e) { + // ignore + } + closeFiles(); + } + + @Override + public TempFileDeleter getTempFileDeleter() { + return tempFileDeleter; + } + + public PageStore getPageStore() { + if (dbSettings.mvStore) { + if (mvStore == null) { + mvStore = MVTableEngine.init(this); + } + return null; + } + if (pageStore == null) { + pageStore = new PageStore(this, databaseName + + Constants.SUFFIX_PAGE_FILE, accessModeData, cacheSize); + if (pageSize != Constants.DEFAULT_PAGE_SIZE) { + pageStore.setPageSize(pageSize); + } + if (!readOnly && fileLockMethod == FileLockMethod.FS) { + pageStore.setLockFile(true); + } + pageStore.setLogMode(logMode); + pageStore.open(); + } + return pageStore; + } + + /** + * Get the first user defined table. + * + * @return the table or null if no table is defined + */ + public Table getFirstUserTable() { + for (Table table : getAllTablesAndViews(false)) { + if (table.getCreateSQL() != null) { + if (table.isHidden()) { + // LOB tables + continue; + } + return table; + } + } + return null; + } + + /** + * Check if the contents of the database was changed and therefore it is + * required to re-connect. This method waits until pending changes are + * completed. If a pending change takes too long (more than 2 seconds), the + * pending change is broken (removed from the properties file). 
+ * + * @return true if reconnecting is required + */ + public boolean isReconnectNeeded() { + if (fileLockMethod != FileLockMethod.SERIALIZED) { + return false; + } + if (reconnectChangePending) { + return false; + } + long now = System.nanoTime(); + if (now < reconnectCheckNext) { + return false; + } + reconnectCheckNext = now + reconnectCheckDelayNs; + if (lock == null) { + lock = new FileLock(traceSystem, databaseName + + Constants.SUFFIX_LOCK_FILE, Constants.LOCK_SLEEP); + } + try { + Properties prop = lock.load(), first = prop; + while (true) { + if (prop.equals(reconnectLastLock)) { + return false; + } + if (prop.getProperty("changePending", null) == null) { + break; + } + if (System.nanoTime() > + now + reconnectCheckDelayNs * 10) { + if (first.equals(prop)) { + // the writing process didn't update the file - + // it may have terminated + lock.setProperty("changePending", null); + lock.save(); + break; + } + } + trace.debug("delay (change pending)"); + Thread.sleep(TimeUnit.NANOSECONDS.toMillis(reconnectCheckDelayNs)); + prop = lock.load(); + } + reconnectLastLock = prop; + } catch (Exception e) { + // DbException, InterruptedException + trace.error(e, "readOnly {0}", readOnly); + // ignore + } + return true; + } + + /** + * Flush all changes when using the serialized mode, and if there are + * pending changes, and some time has passed. This switches to a new + * transaction log and resets the change pending flag in + * the .lock.db file. 
+ */ + public void checkpointIfRequired() { + if (fileLockMethod != FileLockMethod.SERIALIZED || + readOnly || !reconnectChangePending || closing) { + return; + } + long now = System.nanoTime(); + if (now > reconnectCheckNext + reconnectCheckDelayNs) { + if (SysProperties.CHECK && checkpointAllowed < 0) { + DbException.throwInternalError("" + checkpointAllowed); + } + synchronized (reconnectSync) { + if (checkpointAllowed > 0) { + return; + } + checkpointRunning = true; + } + synchronized (this) { + trace.debug("checkpoint start"); + flushSequences(); + checkpoint(); + reconnectModified(false); + trace.debug("checkpoint end"); + } + synchronized (reconnectSync) { + checkpointRunning = false; + } + } + } + + public boolean isFileLockSerialized() { + return fileLockMethod == FileLockMethod.SERIALIZED; + } + + private void flushSequences() { + for (SchemaObject obj : getAllSchemaObjects(DbObject.SEQUENCE)) { + Sequence sequence = (Sequence) obj; + sequence.flushWithoutMargin(); + } + } + + /** + * Flush all changes and open a new transaction log. + */ + public void checkpoint() { + if (persistent) { + synchronized (this) { + if (pageStore != null) { + pageStore.checkpoint(); + } + } + if (mvStore != null) { + mvStore.flush(); + } + } + getTempFileDeleter().deleteUnused(); + } + + /** + * This method is called before writing to the transaction log. 
+ * + * @return true if the call was successful and writing is allowed, + * false if another connection was faster + */ + public boolean beforeWriting() { + if (fileLockMethod != FileLockMethod.SERIALIZED) { + return true; + } + while (checkpointRunning) { + try { + Thread.sleep(10 + (int) (Math.random() * 10)); + } catch (Exception e) { + // ignore InterruptedException + } + } + synchronized (reconnectSync) { + if (reconnectModified(true)) { + checkpointAllowed++; + if (SysProperties.CHECK && checkpointAllowed > 20) { + throw DbException.throwInternalError("" + checkpointAllowed); + } + return true; + } + } + // make sure the next call to isReconnectNeeded() returns true + reconnectCheckNext = System.nanoTime() - 1; + reconnectLastLock = null; + return false; + } + + /** + * This method is called after updates are finished. + */ + public void afterWriting() { + if (fileLockMethod != FileLockMethod.SERIALIZED) { + return; + } + synchronized (reconnectSync) { + checkpointAllowed--; + } + if (SysProperties.CHECK && checkpointAllowed < 0) { + throw DbException.throwInternalError("" + checkpointAllowed); + } + } + + /** + * Switch the database to read-only mode. 
+ * + * @param readOnly the new value + */ + public void setReadOnly(boolean readOnly) { + this.readOnly = readOnly; + } + + public void setCompactMode(int compactMode) { + this.compactMode = compactMode; + } + + public SourceCompiler getCompiler() { + if (compiler == null) { + compiler = new SourceCompiler(); + } + return compiler; + } + + @Override + public LobStorageInterface getLobStorage() { + if (lobStorage == null) { + if (dbSettings.mvStore) { + lobStorage = new LobStorageMap(this); + } else { + lobStorage = new LobStorageBackend(this); + } + } + return lobStorage; + } + + public JdbcConnection getLobConnectionForInit() { + String url = Constants.CONN_URL_INTERNAL; + JdbcConnection conn = new JdbcConnection( + systemSession, systemUser.getName(), url); + conn.setTraceLevel(TraceSystem.OFF); + return conn; + } + + public JdbcConnection getLobConnectionForRegularUse() { + String url = Constants.CONN_URL_INTERNAL; + JdbcConnection conn = new JdbcConnection( + lobSession, systemUser.getName(), url); + conn.setTraceLevel(TraceSystem.OFF); + return conn; + } + + public Session getLobSession() { + return lobSession; + } + + public void setLogMode(int log) { + if (log < 0 || log > 2) { + throw DbException.getInvalidValueException("LOG", log); + } + if (pageStore != null) { + if (log != PageStore.LOG_MODE_SYNC || + pageStore.getLogMode() != PageStore.LOG_MODE_SYNC) { + // write the log mode in the trace file when enabling or + // disabling a dangerous mode + trace.error(null, "log {0}", log); + } + this.logMode = log; + pageStore.setLogMode(log); + } + if (mvStore != null) { + this.logMode = log; + } + } + + public int getLogMode() { + if (pageStore != null) { + return pageStore.getLogMode(); + } + if (mvStore != null) { + return logMode; + } + return PageStore.LOG_MODE_OFF; + } + + public int getDefaultTableType() { + return defaultTableType; + } + + public void setDefaultTableType(int defaultTableType) { + this.defaultTableType = defaultTableType; + } + + public 
void setMultiVersion(boolean multiVersion) {
+        this.multiVersion = multiVersion;
+    }
+
+    public DbSettings getSettings() {
+        return dbSettings;
+    }
+
+    /**
+     * Create a new hash map. Depending on the configuration, the key is case
+     * sensitive or case insensitive.
+     *
+     * @param <V> the value type
+     * @return the hash map
+     */
+    public <V> HashMap<String, V> newStringMap() {
+        return dbSettings.databaseToUpper ?
+                new HashMap<String, V>() :
+                new CaseInsensitiveMap<V>();
+    }
+
+    /**
+     * Create a new hash map. Depending on the configuration, the key is case
+     * sensitive or case insensitive.
+     *
+     * @param <V> the value type
+     * @return the hash map
+     */
+    public <V> ConcurrentHashMap<String, V> newConcurrentStringMap() {
+        return dbSettings.databaseToUpper ?
+                new NullableKeyConcurrentMap<V>() :
+                new CaseInsensitiveConcurrentMap<V>();
+    }
+
+    /**
+     * Compare two identifiers (table names, column names,...) and verify they
+     * are equal. Case sensitivity depends on the configuration.
+     *
+     * @param a the first identifier
+     * @param b the second identifier
+     * @return true if they match
+     */
+    public boolean equalsIdentifiers(String a, String b) {
+        return a.equals(b) || (!dbSettings.databaseToUpper && a.equalsIgnoreCase(b));
+    }
+
+    @Override
+    public int readLob(long lobId, byte[] hmac, long offset, byte[] buff,
+            int off, int length) {
+        throw DbException.throwInternalError();
+    }
+
+    public byte[] getFileEncryptionKey() {
+        return fileEncryptionKey;
+    }
+
+    public int getPageSize() {
+        return pageSize;
+    }
+
+    @Override
+    public JavaObjectSerializer getJavaObjectSerializer() {
+        initJavaObjectSerializer();
+        return javaObjectSerializer;
+    }
+
+    private void initJavaObjectSerializer() {
+        if (javaObjectSerializerInitialized) {
+            return;
+        }
+        synchronized (this) {
+            if (javaObjectSerializerInitialized) {
+                return;
+            }
+            String serializerName = javaObjectSerializerName;
+            if (serializerName != null) {
+                serializerName = serializerName.trim();
+                if (!serializerName.isEmpty() &&
+                        !serializerName.equals("null")) {
+                    try {
+ javaObjectSerializer = (JavaObjectSerializer) + JdbcUtils.loadUserClass(serializerName).newInstance(); + } catch (Exception e) { + throw DbException.convert(e); + } + } + } + javaObjectSerializerInitialized = true; + } + } + + public void setJavaObjectSerializerName(String serializerName) { + synchronized (this) { + javaObjectSerializerInitialized = false; + javaObjectSerializerName = serializerName; + } + } + + /** + * Get the table engine class, loading it if needed. + * + * @param tableEngine the table engine name + * @return the class + */ + public TableEngine getTableEngine(String tableEngine) { + assert Thread.holdsLock(this); + + TableEngine engine = tableEngines.get(tableEngine); + if (engine == null) { + try { + engine = (TableEngine) JdbcUtils.loadUserClass(tableEngine).newInstance(); + } catch (Exception e) { + throw DbException.convert(e); + } + tableEngines.put(tableEngine, engine); + } + return engine; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/DatabaseCloser.java b/modules/h2/src/main/java/org/h2/engine/DatabaseCloser.java new file mode 100644 index 0000000000000..a582bc28b1089 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/DatabaseCloser.java @@ -0,0 +1,80 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.lang.ref.WeakReference; + +import org.h2.message.Trace; + +/** + * This class is responsible for closing a database if the application did not + * close a connection. A database closer object only exists if there is no user + * connected to the database.
+ */ +class DatabaseCloser extends Thread { + + private final boolean shutdownHook; + private final Trace trace; + private volatile WeakReference<Database> databaseRef; + private int delayInMillis; + + DatabaseCloser(Database db, int delayInMillis, boolean shutdownHook) { + this.databaseRef = new WeakReference<>(db); + this.delayInMillis = delayInMillis; + this.shutdownHook = shutdownHook; + trace = db.getTrace(Trace.DATABASE); + } + + /** + * Stop and disable the database closer. This method is called after the + * database has been closed, or after a session has been created. + */ + void reset() { + synchronized (this) { + databaseRef = null; + } + } + + @Override + public void run() { + while (delayInMillis > 0) { + try { + int step = 100; + Thread.sleep(step); + delayInMillis -= step; + } catch (Exception e) { + // ignore InterruptedException + } + if (databaseRef == null) { + return; + } + } + Database database = null; + synchronized (this) { + if (databaseRef != null) { + database = databaseRef.get(); + } + } + if (database != null) { + try { + database.close(shutdownHook); + } catch (RuntimeException e) { + // this can happen when stopping a web application, + // if loading classes is no longer allowed + // it would throw an IllegalStateException + try { + trace.error(e, "could not close the database"); + // if this was successful, we ignore the exception + // otherwise not + } catch (Throwable e2) { + e.addSuppressed(e2); + throw e; + } + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/DbObject.java b/modules/h2/src/main/java/org/h2/engine/DbObject.java new file mode 100644 index 0000000000000..f08d9c1c9fa63 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/DbObject.java @@ -0,0 +1,211 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.ArrayList; +import org.h2.table.Table; + +/** + * A database object such as a table, an index, or a user. + */ +public interface DbObject { + + /** + * The object is of the type table or view. + */ + int TABLE_OR_VIEW = 0; + + /** + * This object is an index. + */ + int INDEX = 1; + + /** + * This object is a user. + */ + int USER = 2; + + /** + * This object is a sequence. + */ + int SEQUENCE = 3; + + /** + * This object is a trigger. + */ + int TRIGGER = 4; + + /** + * This object is a constraint (check constraint, unique constraint, or + * referential constraint). + */ + int CONSTRAINT = 5; + + /** + * This object is a setting. + */ + int SETTING = 6; + + /** + * This object is a role. + */ + int ROLE = 7; + + /** + * This object is a right. + */ + int RIGHT = 8; + + /** + * This object is an alias for a Java function. + */ + int FUNCTION_ALIAS = 9; + + /** + * This object is a schema. + */ + int SCHEMA = 10; + + /** + * This object is a constant. + */ + int CONSTANT = 11; + + /** + * This object is a user data type (domain). + */ + int USER_DATATYPE = 12; + + /** + * This object is a comment. + */ + int COMMENT = 13; + + /** + * This object is a user-defined aggregate function. + */ + int AGGREGATE = 14; + + /** + * This object is a synonym. + */ + int SYNONYM = 15; + + /** + * Get the SQL name of this object (may be quoted). + * + * @return the SQL name + */ + String getSQL(); + + /** + * Get the list of dependent children (for tables, this includes indexes and + * so on). + * + * @return the list of children + */ + ArrayList<DbObject> getChildren(); + + /** + * Get the database. + * + * @return the database + */ + Database getDatabase(); + + /** + * Get the unique object id. + * + * @return the object id + */ + int getId(); + + /** + * Get the name.
+ * + * @return the name + */ + String getName(); + + /** + * Build a SQL statement to re-create the object, or to create a copy of the + * object with a different name or referencing a different table + * + * @param table the new table + * @param quotedName the quoted name + * @return the SQL statement + */ + String getCreateSQLForCopy(Table table, String quotedName); + + /** + * Construct the original CREATE ... SQL statement for this object. + * + * @return the SQL statement + */ + String getCreateSQL(); + + /** + * Construct a DROP ... SQL statement for this object. + * + * @return the SQL statement + */ + String getDropSQL(); + + /** + * Get the object type. + * + * @return the object type + */ + int getType(); + + /** + * Delete all dependent children objects and resources of this object. + * + * @param session the session + */ + void removeChildrenAndResources(Session session); + + /** + * Check if renaming is allowed. Does nothing when allowed. + */ + void checkRename(); + + /** + * Rename the object. + * + * @param newName the new name + */ + void rename(String newName); + + /** + * Check if this object is temporary (for example, a temporary table). + * + * @return true if is temporary + */ + boolean isTemporary(); + + /** + * Tell this object that it is temporary or not. + * + * @param temporary the new value + */ + void setTemporary(boolean temporary); + + /** + * Change the comment of this object. + * + * @param comment the new comment, or null for no comment + */ + void setComment(String comment); + + /** + * Get the current comment of this object. + * + * @return the comment, or null if not set + */ + String getComment(); + +} diff --git a/modules/h2/src/main/java/org/h2/engine/DbObjectBase.java b/modules/h2/src/main/java/org/h2/engine/DbObjectBase.java new file mode 100644 index 0000000000000..d97c04d5289cb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/DbObjectBase.java @@ -0,0 +1,178 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.ArrayList; +import org.h2.command.Parser; +import org.h2.message.DbException; +import org.h2.message.Trace; + +/** + * The base class for all database objects. + */ +public abstract class DbObjectBase implements DbObject { + + /** + * The database. + */ + protected Database database; + + /** + * The trace module. + */ + protected Trace trace; + + /** + * The comment (if set). + */ + protected String comment; + + private int id; + private String objectName; + private long modificationId; + private boolean temporary; + + /** + * Initialize some attributes of this object. + * + * @param db the database + * @param objectId the object id + * @param name the name + * @param traceModuleId the trace module id + */ + protected void initDbObjectBase(Database db, int objectId, String name, + int traceModuleId) { + this.database = db; + this.trace = db.getTrace(traceModuleId); + this.id = objectId; + this.objectName = name; + this.modificationId = db.getModificationMetaId(); + } + + /** + * Build a SQL statement to re-create this object. + * + * @return the SQL statement + */ + @Override + public abstract String getCreateSQL(); + + /** + * Build a SQL statement to drop this object. + * + * @return the SQL statement + */ + @Override + public abstract String getDropSQL(); + + /** + * Remove all dependent objects and free all resources (files, blocks in + * files) of this object. + * + * @param session the session + */ + @Override + public abstract void removeChildrenAndResources(Session session); + + /** + * Check if this object can be renamed. System objects may not be renamed. + */ + @Override + public abstract void checkRename(); + + /** + * Tell the object that it was modified. + */ + public void setModified() { + this.modificationId = database == null ?
+ -1 : database.getNextModificationMetaId(); + } + + public long getModificationId() { + return modificationId; + } + + protected void setObjectName(String name) { + objectName = name; + } + + @Override + public String getSQL() { + return Parser.quoteIdentifier(objectName); + } + + @Override + public ArrayList<DbObject> getChildren() { + return null; + } + + @Override + public Database getDatabase() { + return database; + } + + @Override + public int getId() { + return id; + } + + @Override + public String getName() { + return objectName; + } + + /** + * Set the main attributes to null to make sure the object is no longer + * used. + */ + protected void invalidate() { + if (SysProperties.CHECK && id == -1) { + throw DbException.throwInternalError(); + } + setModified(); + id = -1; + database = null; + trace = null; + objectName = null; + } + + public final boolean isValid() { + return id != -1; + } + + @Override + public void rename(String newName) { + checkRename(); + objectName = newName; + setModified(); + } + + @Override + public boolean isTemporary() { + return temporary; + } + + @Override + public void setTemporary(boolean temporary) { + this.temporary = temporary; + } + + @Override + public void setComment(String comment) { + this.comment = comment; + } + + @Override + public String getComment() { + return comment; + } + + @Override + public String toString() { + return objectName + ":" + id + ":" + super.toString(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/DbSettings.java b/modules/h2/src/main/java/org/h2/engine/DbSettings.java new file mode 100644 index 0000000000000..08af2674ecd4c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/DbSettings.java @@ -0,0 +1,393 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.HashMap; + +import org.h2.message.DbException; +import org.h2.util.Utils; + +/** + * This class contains various database-level settings. To override the + * documented default value for a database, append the setting in the database + * URL: "jdbc:h2:test;ALIAS_COLUMN_NAME=TRUE" when opening the first connection + * to the database. The settings can not be changed once the database is open. + *
    + * Some settings are a last resort and temporary solution to work around a + * problem in the application or database engine. Also, there are system + * properties to enable features that are not yet fully tested or that are not + * backward compatible. + *
    + */ +public class DbSettings extends SettingsBase { + + private static DbSettings defaultSettings; + + /** + * Database setting ALIAS_COLUMN_NAME (default: false).
    + * When enabled, aliased columns (as in SELECT ID AS I FROM TEST) return the + * alias (I in this case) in ResultSetMetaData.getColumnName() and 'null' in + * getTableName(). If disabled, the real column name (ID in this case) and + * table name is returned. + *
    + * This setting only affects the default and the MySQL mode. When using + * any other mode, this feature is enabled for compatibility, even if this + * database setting is not enabled explicitly. + */ + public final boolean aliasColumnName = get("ALIAS_COLUMN_NAME", false); + + /** + * Database setting ANALYZE_AUTO (default: 2000).
    + * After changing this many rows, ANALYZE is automatically run for a table. + * Automatically running ANALYZE is disabled if set to 0. If set to 1000, + * then ANALYZE will run against each user table after about 1000 changes to + * that table. The time between running ANALYZE doubles each time since + * starting the database. It is not run on local temporary tables, and + * tables that have a trigger on SELECT. + */ + public final int analyzeAuto = get("ANALYZE_AUTO", 2000); + + /** + * Database setting ANALYZE_SAMPLE (default: 10000).
    + * The default sample size when analyzing a table. + */ + public final int analyzeSample = get("ANALYZE_SAMPLE", 10_000); + + /** + * Database setting DATABASE_TO_UPPER (default: true).
    + * Database short names are converted to uppercase for the DATABASE() + * function, and in the CATALOG column of all database meta data methods. + * Setting this to "false" is experimental. When set to false, all + * identifier names (table names, column names) are case sensitive (except + * aggregate, built-in functions, data types, and keywords). + */ + public final boolean databaseToUpper = get("DATABASE_TO_UPPER", true); + + /** + * Database setting DB_CLOSE_ON_EXIT (default: true).
    + * Close the database when the virtual machine exits normally, using a + * shutdown hook. + */ + public final boolean dbCloseOnExit = get("DB_CLOSE_ON_EXIT", true); + + /** + * Database setting DEFAULT_CONNECTION (default: false).
    + * Whether Java functions can use + * DriverManager.getConnection("jdbc:default:connection") to + * get a database connection. This feature is disabled by default for + * performance reasons. Please note the Oracle JDBC driver will try to + * resolve this database URL if it is loaded before the H2 driver. + */ + public final boolean defaultConnection = get("DEFAULT_CONNECTION", false); + + /** + * Database setting DEFAULT_ESCAPE (default: \).
    + * The default escape character for LIKE comparisons. To select no escape + * character, use an empty string. + */ + public final String defaultEscape = get("DEFAULT_ESCAPE", "\\"); + + /** + * Database setting DEFRAG_ALWAYS (default: false).
    + * Each time the database is closed normally, it is fully defragmented (the + * same as SHUTDOWN DEFRAG). If you execute SHUTDOWN COMPACT, then this + * setting is ignored. + */ + public final boolean defragAlways = get("DEFRAG_ALWAYS", false); + + /** + * Database setting DROP_RESTRICT (default: true).
    + * Whether the default action for DROP TABLE, DROP VIEW, and DROP SCHEMA + * is RESTRICT. + */ + public final boolean dropRestrict = get("DROP_RESTRICT", true); + + /** + * Database setting EARLY_FILTER (default: false).
    + * This setting allows table implementations to apply filter conditions + * early on. + */ + public final boolean earlyFilter = get("EARLY_FILTER", false); + + /** + * Database setting ESTIMATED_FUNCTION_TABLE_ROWS (default: + * 1000).
    + * The estimated number of rows in a function table (for example, CSVREAD or + * FTL_SEARCH). This value is used by the optimizer. + */ + public final int estimatedFunctionTableRows = get( + "ESTIMATED_FUNCTION_TABLE_ROWS", 1000); + + /** + * Database setting FUNCTIONS_IN_SCHEMA + * (default: true).
    + * If set, all functions are stored in a schema. Specially, the SCRIPT + * statement will always include the schema name in the CREATE ALIAS + * statement. This is not backward compatible with H2 versions 1.2.134 and + * older. + */ + public final boolean functionsInSchema = get("FUNCTIONS_IN_SCHEMA", true); + + /** + * Database setting LARGE_TRANSACTIONS (default: true).
    + * Support very large transactions + */ + public final boolean largeTransactions = get("LARGE_TRANSACTIONS", true); + + /** + * Database setting LOB_TIMEOUT (default: 300000, + * which means 5 minutes).
    + * The number of milliseconds a temporary LOB reference is kept until it + * times out. After the timeout, the LOB is no longer accessible using this + * reference. + */ + public final int lobTimeout = get("LOB_TIMEOUT", 300_000); + + /** + * Database setting MAX_COMPACT_COUNT + * (default: Integer.MAX_VALUE).
    + * The maximum number of pages to move when closing a database. + */ + public final int maxCompactCount = get("MAX_COMPACT_COUNT", + Integer.MAX_VALUE); + + /** + * Database setting MAX_COMPACT_TIME (default: 200).
    + * The maximum time in milliseconds used to compact a database when closing. + */ + public final int maxCompactTime = get("MAX_COMPACT_TIME", 200); + + /** + * Database setting MAX_QUERY_TIMEOUT (default: 0).
    + * The maximum timeout of a query in milliseconds. The default is 0, meaning + * no limit. Please note the actual query timeout may be set to a lower + * value. + */ + public final int maxQueryTimeout = get("MAX_QUERY_TIMEOUT", 0); + + /** + * Database setting OPTIMIZE_DISTINCT (default: true).
    + * Improve the performance of simple DISTINCT queries if an index is + * available for the given column. The optimization is used if: + *
      + *
+ * - The select is a single column query without condition + * - The query contains only one table, and no group by + * - There is only one table involved + * - There is an ascending index on the column + * - The selectivity of the column is below 20
    + */ + public final boolean optimizeDistinct = get("OPTIMIZE_DISTINCT", true); + + /** + * Database setting OPTIMIZE_EVALUATABLE_SUBQUERIES (default: + * true).
    + * Optimize subqueries that are not dependent on the outer query. + */ + public final boolean optimizeEvaluatableSubqueries = get( + "OPTIMIZE_EVALUATABLE_SUBQUERIES", true); + + /** + * Database setting OPTIMIZE_INSERT_FROM_SELECT + * (default: true).
    + * Insert into table from query directly bypassing temporary disk storage. + * This also applies to create table as select. + */ + public final boolean optimizeInsertFromSelect = get( + "OPTIMIZE_INSERT_FROM_SELECT", true); + + /** + * Database setting OPTIMIZE_IN_LIST (default: true).
    + * Optimize IN(...) and IN(SELECT ...) comparisons. This includes + * optimization for SELECT, DELETE, and UPDATE. + */ + public final boolean optimizeInList = get("OPTIMIZE_IN_LIST", true); + + /** + * Database setting OPTIMIZE_IN_SELECT (default: true).
+ * Optimize IN(SELECT ...) comparisons. This includes + * optimization for SELECT, DELETE, and UPDATE. + */ + public final boolean optimizeInSelect = get("OPTIMIZE_IN_SELECT", true); + + /** + * Database setting OPTIMIZE_IS_NULL (default: true).
    + * Use an index for condition of the form columnName IS NULL. + */ + public final boolean optimizeIsNull = get("OPTIMIZE_IS_NULL", true); + + /** + * Database setting OPTIMIZE_OR (default: true).
    + * Convert (C=? OR C=?) to (C IN(?, ?)). + */ + public final boolean optimizeOr = get("OPTIMIZE_OR", true); + + /** + * Database setting OPTIMIZE_TWO_EQUALS (default: true).
    + * Optimize expressions of the form A=B AND B=1. In this case, AND A=1 is + * added so an index on A can be used. + */ + public final boolean optimizeTwoEquals = get("OPTIMIZE_TWO_EQUALS", true); + + /** + * Database setting OPTIMIZE_UPDATE (default: true).
    + * Speed up inserts, updates, and deletes by not reading all rows from a + * page unless necessary. + */ + public final boolean optimizeUpdate = get("OPTIMIZE_UPDATE", true); + + /** + * Database setting PAGE_STORE_MAX_GROWTH + * (default: 128 * 1024).
    + * The maximum number of pages the file grows at any time. + */ + public final int pageStoreMaxGrowth = get("PAGE_STORE_MAX_GROWTH", + 128 * 1024); + + /** + * Database setting PAGE_STORE_INTERNAL_COUNT + * (default: false).
    + * Update the row counts on a node level. + */ + public final boolean pageStoreInternalCount = get( + "PAGE_STORE_INTERNAL_COUNT", false); + + /** + * Database setting PAGE_STORE_TRIM (default: true).
    + * Trim the database size when closing. + */ + public final boolean pageStoreTrim = get("PAGE_STORE_TRIM", true); + + /** + * Database setting QUERY_CACHE_SIZE (default: 8).
+ * The size of the query cache, in number of cached statements. Each session + * has its own cache with the given size. The cache is only used if the SQL + * statement and all parameters match. Only the last returned result per + * query is cached. The following statement types are cached: SELECT + * statements are cached (excluding UNION and FOR UPDATE statements), CALL + * if it returns a single value, DELETE, INSERT, MERGE, UPDATE, and + * transactional statements such as COMMIT. This works for both statements + * and prepared statements. + */ + public final int queryCacheSize = get("QUERY_CACHE_SIZE", 8); + + /** + * Database setting RECOMPILE_ALWAYS (default: false).
    + * Always recompile prepared statements. + */ + public final boolean recompileAlways = get("RECOMPILE_ALWAYS", false); + + /** + * Database setting RECONNECT_CHECK_DELAY (default: 200).
    + * Check the .lock.db file every this many milliseconds to detect that the + * database was changed. The process writing to the database must first + * notify a change in the .lock.db file, then wait twice this many + * milliseconds before updating the database. + */ + public final int reconnectCheckDelay = get("RECONNECT_CHECK_DELAY", 200); + + /** + * Database setting REUSE_SPACE (default: true).
    + * If disabled, all changes are appended to the database file, and existing + * content is never overwritten. This setting has no effect if the database + * is already open. + */ + public final boolean reuseSpace = get("REUSE_SPACE", true); + + /** + * Database setting ROWID (default: true).
    + * If set, each table has a pseudo-column _ROWID_. + */ + public final boolean rowId = get("ROWID", true); + + /** + * Database setting SELECT_FOR_UPDATE_MVCC + * (default: true).
    + * If set, SELECT .. FOR UPDATE queries lock only the selected rows when + * using MVCC. + */ + public final boolean selectForUpdateMvcc = get("SELECT_FOR_UPDATE_MVCC", true); + + /** + * Database setting SHARE_LINKED_CONNECTIONS + * (default: true).
    + * Linked connections should be shared, that means connections to the same + * database should be used for all linked tables that connect to the same + * database. + */ + public final boolean shareLinkedConnections = get( + "SHARE_LINKED_CONNECTIONS", true); + + /** + * Database setting DEFAULT_TABLE_ENGINE + * (default: null).
    + * The default table engine to use for new tables. + */ + public final String defaultTableEngine = get("DEFAULT_TABLE_ENGINE", null); + + /** + * Database setting MV_STORE + * (default: false for version 1.3, true for version 1.4).
    + * Use the MVStore storage engine. + */ + public boolean mvStore = get("MV_STORE", Constants.VERSION_MINOR >= 4); + + /** + * Database setting COMPRESS + * (default: false).
    + * Compress data when storing. + */ + public final boolean compressData = get("COMPRESS", false); + + /** + * Database setting MULTI_THREADED + * (default: false).
    + */ + public final boolean multiThreaded = get("MULTI_THREADED", false); + + /** + * Database setting STANDARD_DROP_TABLE_RESTRICT (default: + * false).
+ * true if DROP TABLE RESTRICT should fail if there's any + * foreign key referencing the table to be dropped. false if + * foreign keys referencing the table to be dropped should be silently + * dropped as well. + */ + public final boolean standardDropTableRestrict = get( + "STANDARD_DROP_TABLE_RESTRICT", false); + + private DbSettings(HashMap<String, String> s) { + super(s); + if (s.get("NESTED_JOINS") != null || Utils.getProperty("h2.nestedJoins", null) != null) { + throw DbException.getUnsupportedException("NESTED_JOINS setting is not available since 1.4.197"); + } + } + + /** + * INTERNAL. + * Get the settings for the given properties (may not be null). + * + * @param s the settings + * @return the settings + */ + public static DbSettings getInstance(HashMap<String, String> s) { + return new DbSettings(s); + } + + /** + * INTERNAL. + * Get the default settings. Those must not be modified. + * + * @return the settings + */ + public static DbSettings getDefaultSettings() { + if (defaultSettings == null) { + defaultSettings = new DbSettings(new HashMap<String, String>()); + } + return defaultSettings; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Engine.java b/modules/h2/src/main/java/org/h2/engine/Engine.java new file mode 100644 index 0000000000000..1300073667c7b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Engine.java @@ -0,0 +1,346 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.Parser; +import org.h2.command.dml.SetTypes; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.store.FileLock; +import org.h2.store.FileLockMethod; +import org.h2.util.MathUtils; +import org.h2.util.ThreadDeadlockDetector; +import org.h2.util.Utils; + +/** + * The engine contains a map of all open databases. + * It is also responsible for opening and creating new databases. + * This is a singleton class. + */ +public class Engine implements SessionFactory { + + private static final Engine INSTANCE = new Engine(); + private static final Map<String, Database> DATABASES = new HashMap<>(); + + private volatile long wrongPasswordDelay = + SysProperties.DELAY_WRONG_PASSWORD_MIN; + private boolean jmx; + + private Engine() { + // use getInstance() + if (SysProperties.THREAD_DEADLOCK_DETECTOR) { + ThreadDeadlockDetector.init(); + } + } + + public static Engine getInstance() { + return INSTANCE; + } + + private Session openSession(ConnectionInfo ci, boolean ifExists, + String cipher) { + String name = ci.getName(); + Database database; + ci.removeProperty("NO_UPGRADE", false); + boolean openNew = ci.getProperty("OPEN_NEW", false); + boolean opened = false; + User user = null; + synchronized (DATABASES) { + if (openNew || ci.isUnnamedInMemory()) { + database = null; + } else { + database = DATABASES.get(name); + } + if (database == null) { + if (ifExists && !Database.exists(name)) { + throw DbException.get(ErrorCode.DATABASE_NOT_FOUND_1, name); + } + database = new Database(ci, cipher); + opened = true; + if (database.getAllUsers().isEmpty()) { + // users is the last thing we add, so if no user is around, + // the database is new (or not initialized correctly) + user = new User(database, database.allocateObjectId(), +
ci.getUserName(), false); + user.setAdmin(true); + user.setUserPasswordHash(ci.getUserPasswordHash()); + database.setMasterUser(user); + } + if (!ci.isUnnamedInMemory()) { + DATABASES.put(name, database); + } + } + } + if (opened) { + // start the thread when already synchronizing on the database + // otherwise a deadlock can occur when the writer thread + // opens a new database (as in recovery testing) + database.opened(); + } + if (database.isClosing()) { + return null; + } + if (user == null) { + if (database.validateFilePasswordHash(cipher, ci.getFilePasswordHash())) { + user = database.findUser(ci.getUserName()); + if (user != null) { + if (!user.validateUserPasswordHash(ci.getUserPasswordHash())) { + user = null; + } + } + } + if (opened && (user == null || !user.isAdmin())) { + // reset - because the user is not an admin, and has no + // right to listen to exceptions + database.setEventListener(null); + } + } + if (user == null) { + DbException er = DbException.get(ErrorCode.WRONG_USER_OR_PASSWORD); + database.getTrace(Trace.DATABASE).error(er, "wrong user or password; user: \"" + + ci.getUserName() + "\""); + database.removeSession(null); + throw er; + } + checkClustering(ci, database); + Session session = database.createSession(user); + if (session == null) { + // concurrently closing + return null; + } + if (ci.getProperty("JMX", false)) { + try { + Utils.callStaticMethod( + "org.h2.jmx.DatabaseInfo.registerMBean", ci, database); + } catch (Exception e) { + database.removeSession(session); + throw DbException.get(ErrorCode.FEATURE_NOT_SUPPORTED_1, e, "JMX"); + } + jmx = true; + } + return session; + } + + /** + * Open a database connection with the given connection information. 
+ * + * @param ci the connection information + * @return the session + */ + @Override + public Session createSession(ConnectionInfo ci) { + return INSTANCE.createSessionAndValidate(ci); + } + + private Session createSessionAndValidate(ConnectionInfo ci) { + try { + ConnectionInfo backup = null; + String lockMethodName = ci.getProperty("FILE_LOCK", null); + FileLockMethod fileLockMethod = FileLock.getFileLockMethod(lockMethodName); + if (fileLockMethod == FileLockMethod.SERIALIZED) { + // In serialized mode, database instance sharing is not possible + ci.setProperty("OPEN_NEW", "TRUE"); + try { + backup = ci.clone(); + } catch (CloneNotSupportedException e) { + throw DbException.convert(e); + } + } + Session session = openSession(ci); + validateUserAndPassword(true); + if (backup != null) { + session.setConnectionInfo(backup); + } + return session; + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.WRONG_USER_OR_PASSWORD) { + validateUserAndPassword(false); + } + throw e; + } + } + + private synchronized Session openSession(ConnectionInfo ci) { + boolean ifExists = ci.removeProperty("IFEXISTS", false); + boolean ignoreUnknownSetting = ci.removeProperty( + "IGNORE_UNKNOWN_SETTINGS", false); + String cipher = ci.removeProperty("CIPHER", null); + String init = ci.removeProperty("INIT", null); + Session session; + for (int i = 0;; i++) { + session = openSession(ci, ifExists, cipher); + if (session != null) { + break; + } + // we found a database that is currently closing + // wait a bit to avoid a busy loop (the method is synchronized) + if (i > 60 * 1000) { + // retry at most 1 minute + throw DbException.get(ErrorCode.DATABASE_ALREADY_OPEN_1, + "Waited for database closing longer than 1 minute"); + } + try { + Thread.sleep(1); + } catch (InterruptedException e) { + // ignore + } + } + synchronized (session) { + session.setAllowLiterals(true); + DbSettings defaultSettings = DbSettings.getDefaultSettings(); + for (String setting : ci.getKeys()) { + if 
(defaultSettings.containsKey(setting)) { + // database settings are only used when opening the database + continue; + } + String value = ci.getProperty(setting); + try { + CommandInterface command = session.prepareCommand( + "SET " + Parser.quoteIdentifier(setting) + " " + value, + Integer.MAX_VALUE); + command.executeUpdate(false); + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.ADMIN_RIGHTS_REQUIRED) { + session.getTrace().error(e, "admin rights required; user: \"" + + ci.getUserName() + "\""); + } else { + session.getTrace().error(e, ""); + } + if (!ignoreUnknownSetting) { + session.close(); + throw e; + } + } + } + if (init != null) { + try { + CommandInterface command = session.prepareCommand(init, + Integer.MAX_VALUE); + command.executeUpdate(false); + } catch (DbException e) { + if (!ignoreUnknownSetting) { + session.close(); + throw e; + } + } + } + session.setAllowLiterals(false); + session.commit(true); + } + return session; + } + + private static void checkClustering(ConnectionInfo ci, Database database) { + String clusterSession = ci.getProperty(SetTypes.CLUSTER, null); + if (Constants.CLUSTERING_DISABLED.equals(clusterSession)) { + // in this case, no checking is made + // (so that a connection can be made to disable/change clustering) + return; + } + String clusterDb = database.getCluster(); + if (!Constants.CLUSTERING_DISABLED.equals(clusterDb)) { + if (!Constants.CLUSTERING_ENABLED.equals(clusterSession)) { + if (!Objects.equals(clusterSession, clusterDb)) { + if (clusterDb.equals(Constants.CLUSTERING_DISABLED)) { + throw DbException.get( + ErrorCode.CLUSTER_ERROR_DATABASE_RUNS_ALONE); + } + throw DbException.get( + ErrorCode.CLUSTER_ERROR_DATABASE_RUNS_CLUSTERED_1, + clusterDb); + } + } + } + } + + /** + * Called after a database has been closed, to remove the object from the + * list of open databases.
+ * + * @param name the database name + */ + void close(String name) { + if (jmx) { + try { + Utils.callStaticMethod("org.h2.jmx.DatabaseInfo.unregisterMBean", name); + } catch (Exception e) { + throw DbException.get(ErrorCode.FEATURE_NOT_SUPPORTED_1, e, "JMX"); + } + } + synchronized (DATABASES) { + DATABASES.remove(name); + } + } + + /** + * This method is called after validating user name and password. If user + * name and password were correct, the sleep time is reset, otherwise this + * method waits some time (to make brute force / rainbow table attacks + * harder) and then throws a 'wrong user or password' exception. The delay + * is a bit randomized to protect against timing attacks. Also the delay + * doubles after each unsuccessful logins, to make brute force attacks + * harder. + * + * There is only one exception message both for wrong user and for + * wrong password, to make it harder to get the list of user names. This + * method must only be called from one place, so it is not possible from the + * stack trace to see if the user name was wrong or the password. 
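The doubling-and-capping delay scheme this comment describes can be sketched as a standalone policy object. All names below are illustrative only (this is not the engine's actual state, which lives in `wrongPasswordDelay` and the `SysProperties` bounds):

```java
import java.security.SecureRandom;

// Minimal sketch of the wrong-password delay policy described above:
// double the delay after each failed login, add random jitter to blunt
// timing attacks, and cap at a maximum. Hypothetical names throughout.
class LoginDelayPolicy {
    private final SecureRandom random = new SecureRandom();
    private final long min;
    private final long max;
    private long delay;

    LoginDelayPolicy(long min, long max) {
        this.min = min;
        this.max = max;
        this.delay = min;
    }

    /** Delay to sleep on the next failed attempt; doubles up to max. */
    long nextFailureDelay() {
        delay += delay;                     // double after each failure
        if (delay > max || delay < 0) {     // cap, and guard against overflow
            delay = max;
        }
        return delay + random.nextInt(100); // jitter against timing attacks
    }

    /** After a correct password, reset back to the minimum. */
    void reset() {
        delay = min;
    }

    long current() {
        return delay;
    }
}
```

The overflow guard (`delay < 0`) matters because repeated doubling of a `long` eventually wraps negative.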
+ * + * @param correct if the user name or the password was correct + * @throws DbException the exception 'wrong user or password' + */ + private void validateUserAndPassword(boolean correct) { + int min = SysProperties.DELAY_WRONG_PASSWORD_MIN; + if (correct) { + long delay = wrongPasswordDelay; + if (delay > min && delay > 0) { + // the first correct password must be blocked, + // otherwise parallel attacks are possible + synchronized (INSTANCE) { + // delay up to the last delay + // an attacker can't know how long it will be + delay = MathUtils.secureRandomInt((int) delay); + try { + Thread.sleep(delay); + } catch (InterruptedException e) { + // ignore + } + wrongPasswordDelay = min; + } + } + } else { + // this method is not synchronized on the Engine, so that + // regular successful attempts are not blocked + synchronized (INSTANCE) { + long delay = wrongPasswordDelay; + int max = SysProperties.DELAY_WRONG_PASSWORD_MAX; + if (max <= 0) { + max = Integer.MAX_VALUE; + } + wrongPasswordDelay += wrongPasswordDelay; + if (wrongPasswordDelay > max || wrongPasswordDelay < 0) { + wrongPasswordDelay = max; + } + if (min > 0) { + // a bit more to protect against timing attacks + delay += Math.abs(MathUtils.secureRandomLong() % 100); + try { + Thread.sleep(delay); + } catch (InterruptedException e) { + // ignore + } + } + throw DbException.get(ErrorCode.WRONG_USER_OR_PASSWORD); + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/FunctionAlias.java b/modules/h2/src/main/java/org/h2/engine/FunctionAlias.java new file mode 100644 index 0000000000000..d42e5f02aaa1f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/FunctionAlias.java @@ -0,0 +1,518 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.lang.reflect.Array; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.sql.Connection; +import java.util.ArrayList; +import java.util.Arrays; +import org.h2.Driver; +import org.h2.api.ErrorCode; +import org.h2.command.Parser; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObjectBase; +import org.h2.table.Table; +import org.h2.util.JdbcUtils; +import org.h2.util.New; +import org.h2.util.SourceCompiler; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueNull; + +/** + * Represents a user-defined function, or alias. + * + * @author Thomas Mueller + * @author Gary Tong + */ +public class FunctionAlias extends SchemaObjectBase { + + private String className; + private String methodName; + private String source; + private JavaMethod[] javaMethods; + private boolean deterministic; + private boolean bufferResultSetToLocalTemp = true; + + private FunctionAlias(Schema schema, int id, String name) { + initSchemaObjectBase(schema, id, name, Trace.FUNCTION); + } + + /** + * Create a new alias based on a method name. 
+ * + * @param schema the schema + * @param id the id + * @param name the name + * @param javaClassMethod the class and method name + * @param force create the object even if the class or method does not exist + * @param bufferResultSetToLocalTemp whether the result should be buffered + * @return the database object + */ + public static FunctionAlias newInstance( + Schema schema, int id, String name, String javaClassMethod, + boolean force, boolean bufferResultSetToLocalTemp) { + FunctionAlias alias = new FunctionAlias(schema, id, name); + int paren = javaClassMethod.indexOf('('); + int lastDot = javaClassMethod.lastIndexOf('.', paren < 0 ? + javaClassMethod.length() : paren); + if (lastDot < 0) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, javaClassMethod); + } + alias.className = javaClassMethod.substring(0, lastDot); + alias.methodName = javaClassMethod.substring(lastDot + 1); + alias.bufferResultSetToLocalTemp = bufferResultSetToLocalTemp; + alias.init(force); + return alias; + } + + /** + * Create a new alias based on source code. 
+ * + * @param schema the schema + * @param id the id + * @param name the name + * @param source the source code + * @param force create the object even if the class or method does not exist + * @param bufferResultSetToLocalTemp whether the result should be buffered + * @return the database object + */ + public static FunctionAlias newInstanceFromSource( + Schema schema, int id, String name, String source, boolean force, + boolean bufferResultSetToLocalTemp) { + FunctionAlias alias = new FunctionAlias(schema, id, name); + alias.source = source; + alias.bufferResultSetToLocalTemp = bufferResultSetToLocalTemp; + alias.init(force); + return alias; + } + + private void init(boolean force) { + try { + // at least try to compile the class, otherwise the data type is not + // initialized if it could be + load(); + } catch (DbException e) { + if (!force) { + throw e; + } + } + } + + private synchronized void load() { + if (javaMethods != null) { + return; + } + if (source != null) { + loadFromSource(); + } else { + loadClass(); + } + } + + private void loadFromSource() { + SourceCompiler compiler = database.getCompiler(); + synchronized (compiler) { + String fullClassName = Constants.USER_PACKAGE + "." 
+                getName();
+            compiler.setSource(fullClassName, source);
+            try {
+                Method m = compiler.getMethod(fullClassName);
+                JavaMethod method = new JavaMethod(m, 0);
+                javaMethods = new JavaMethod[] {
+                        method
+                };
+            } catch (DbException e) {
+                throw e;
+            } catch (Exception e) {
+                throw DbException.get(ErrorCode.SYNTAX_ERROR_1, e, source);
+            }
+        }
+    }
+
+    private void loadClass() {
+        Class<?> javaClass = JdbcUtils.loadUserClass(className);
+        Method[] methods = javaClass.getMethods();
+        ArrayList<JavaMethod> list = New.arrayList();
+        for (int i = 0, len = methods.length; i < len; i++) {
+            Method m = methods[i];
+            if (!Modifier.isStatic(m.getModifiers())) {
+                continue;
+            }
+            if (m.getName().equals(methodName) ||
+                    getMethodSignature(m).equals(methodName)) {
+                JavaMethod javaMethod = new JavaMethod(m, i);
+                for (JavaMethod old : list) {
+                    if (old.getParameterCount() == javaMethod.getParameterCount()) {
+                        throw DbException.get(ErrorCode.
+                                METHODS_MUST_HAVE_DIFFERENT_PARAMETER_COUNTS_2,
+                                old.toString(), javaMethod.toString());
+                    }
+                }
+                list.add(javaMethod);
+            }
+        }
+        if (list.isEmpty()) {
+            throw DbException.get(
+                    ErrorCode.PUBLIC_STATIC_JAVA_METHOD_NOT_FOUND_1,
+                    methodName + " (" + className + ")");
+        }
+        javaMethods = list.toArray(new JavaMethod[0]);
+        // Sort elements. Methods with a variable number of arguments must be at
+        // the end. Reason: there could be one method without parameters and one
+        // with a variable number. The one without parameters needs to be used
+        // if no parameters are given.
+ Arrays.sort(javaMethods); + } + + private static String getMethodSignature(Method m) { + StatementBuilder buff = new StatementBuilder(m.getName()); + buff.append('('); + for (Class p : m.getParameterTypes()) { + // do not use a space here, because spaces are removed + // in CreateFunctionAlias.setJavaClassMethod() + buff.appendExceptFirst(","); + if (p.isArray()) { + buff.append(p.getComponentType().getName()).append("[]"); + } else { + buff.append(p.getName()); + } + } + return buff.append(')').toString(); + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getDropSQL() { + return "DROP ALIAS IF EXISTS " + getSQL(); + } + + @Override + public String getSQL() { + // TODO can remove this method once FUNCTIONS_IN_SCHEMA is enabled + if (database.getSettings().functionsInSchema || + !getSchema().getName().equals(Constants.SCHEMA_MAIN)) { + return super.getSQL(); + } + return Parser.quoteIdentifier(getName()); + } + + @Override + public String getCreateSQL() { + StringBuilder buff = new StringBuilder("CREATE FORCE ALIAS "); + buff.append(getSQL()); + if (deterministic) { + buff.append(" DETERMINISTIC"); + } + if (!bufferResultSetToLocalTemp) { + buff.append(" NOBUFFER"); + } + if (source != null) { + buff.append(" AS ").append(StringUtils.quoteStringSQL(source)); + } else { + buff.append(" FOR ").append(Parser.quoteIdentifier( + className + "." + methodName)); + } + return buff.toString(); + } + + @Override + public int getType() { + return DbObject.FUNCTION_ALIAS; + } + + @Override + public synchronized void removeChildrenAndResources(Session session) { + database.removeMeta(session, getId()); + className = null; + methodName = null; + javaMethods = null; + invalidate(); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("RENAME"); + } + + /** + * Find the Java method that matches the arguments. 
+ * + * @param args the argument list + * @return the Java method + * @throws DbException if no matching method could be found + */ + public JavaMethod findJavaMethod(Expression[] args) { + load(); + int parameterCount = args.length; + for (JavaMethod m : javaMethods) { + int count = m.getParameterCount(); + if (count == parameterCount || (m.isVarArgs() && + count <= parameterCount + 1)) { + return m; + } + } + throw DbException.get(ErrorCode.METHOD_NOT_FOUND_1, getName() + " (" + + className + ", parameter count: " + parameterCount + ")"); + } + + public String getJavaClassName() { + return this.className; + } + + public String getJavaMethodName() { + return this.methodName; + } + + /** + * Get the Java methods mapped by this function. + * + * @return the Java methods. + */ + public JavaMethod[] getJavaMethods() { + load(); + return javaMethods; + } + + public void setDeterministic(boolean deterministic) { + this.deterministic = deterministic; + } + + public boolean isDeterministic() { + return deterministic; + } + + public String getSource() { + return source; + } + + /** + * Should the return value ResultSet be buffered in a local temporary file? + * + * @return true if yes + */ + public boolean isBufferResultSetToLocalTemp() { + return bufferResultSetToLocalTemp; + } + + /** + * There may be multiple Java methods that match a function name. + * Each method must have a different number of parameters however. + * This helper class represents one such method. 
+     */
+    public static class JavaMethod implements Comparable<JavaMethod> {
+        private final int id;
+        private final Method method;
+        private final int dataType;
+        private boolean hasConnectionParam;
+        private boolean varArgs;
+        private Class<?> varArgClass;
+        private int paramCount;
+
+        JavaMethod(Method method, int id) {
+            this.method = method;
+            this.id = id;
+            Class<?>[] paramClasses = method.getParameterTypes();
+            paramCount = paramClasses.length;
+            if (paramCount > 0) {
+                Class<?> paramClass = paramClasses[0];
+                if (Connection.class.isAssignableFrom(paramClass)) {
+                    hasConnectionParam = true;
+                    paramCount--;
+                }
+            }
+            if (paramCount > 0) {
+                Class<?> lastArg = paramClasses[paramClasses.length - 1];
+                if (lastArg.isArray() && method.isVarArgs()) {
+                    varArgs = true;
+                    varArgClass = lastArg.getComponentType();
+                }
+            }
+            Class<?> returnClass = method.getReturnType();
+            dataType = DataType.getTypeFromClass(returnClass);
+        }
+
+        @Override
+        public String toString() {
+            return method.toString();
+        }
+
+        /**
+         * Check if this function requires a database connection.
+         *
+         * @return if the function requires a connection
+         */
+        public boolean hasConnectionParam() {
+            return this.hasConnectionParam;
+        }
+
+        /**
+         * Call the user-defined function and return the value.
+         *
+         * @param session the session
+         * @param args the argument list
+         * @param columnList true if the function should only return the column
+         *            list
+         * @return the value
+         */
+        public Value getValue(Session session, Expression[] args,
+                boolean columnList) {
+            Class<?>[] paramClasses = method.getParameterTypes();
+            Object[] params = new Object[paramClasses.length];
+            int p = 0;
+            if (hasConnectionParam && params.length > 0) {
+                params[p++] = session.createConnection(columnList);
+            }
+
+            // allocate array for varArgs parameters
+            Object varArg = null;
+            if (varArgs) {
+                int len = args.length - params.length + 1 +
+                        (hasConnectionParam ?
1 : 0); + varArg = Array.newInstance(varArgClass, len); + params[params.length - 1] = varArg; + } + + for (int a = 0, len = args.length; a < len; a++, p++) { + boolean currentIsVarArg = varArgs && + p >= paramClasses.length - 1; + Class paramClass; + if (currentIsVarArg) { + paramClass = varArgClass; + } else { + paramClass = paramClasses[p]; + } + int type = DataType.getTypeFromClass(paramClass); + Value v = args[a].getValue(session); + Object o; + if (Value.class.isAssignableFrom(paramClass)) { + o = v; + } else if (v.getType() == Value.ARRAY && + paramClass.isArray() && + paramClass.getComponentType() != Object.class) { + Value[] array = ((ValueArray) v).getList(); + Object[] objArray = (Object[]) Array.newInstance( + paramClass.getComponentType(), array.length); + int componentType = DataType.getTypeFromClass( + paramClass.getComponentType()); + for (int i = 0; i < objArray.length; i++) { + objArray[i] = array[i].convertTo(componentType).getObject(); + } + o = objArray; + } else { + v = v.convertTo(type, -1, session.getDatabase().getMode()); + o = v.getObject(); + } + if (o == null) { + if (paramClass.isPrimitive()) { + if (columnList) { + // If the column list is requested, the parameters + // may be null. Need to set to default value, + // otherwise the function can't be called at all. + o = DataType.getDefaultForPrimitiveType(paramClass); + } else { + // NULL for a java primitive: return NULL + return ValueNull.INSTANCE; + } + } + } else { + if (!paramClass.isAssignableFrom(o.getClass()) && !paramClass.isPrimitive()) { + o = DataType.convertTo(session.createConnection(false), v, paramClass); + } + } + if (currentIsVarArg) { + Array.set(varArg, p - params.length + 1, o); + } else { + params[p] = o; + } + } + boolean old = session.getAutoCommit(); + Value identity = session.getLastScopeIdentity(); + boolean defaultConnection = session.getDatabase(). 
+ getSettings().defaultConnection; + try { + session.setAutoCommit(false); + Object returnValue; + try { + if (defaultConnection) { + Driver.setDefaultConnection( + session.createConnection(columnList)); + } + returnValue = method.invoke(null, params); + if (returnValue == null) { + return ValueNull.INSTANCE; + } + } catch (InvocationTargetException e) { + StatementBuilder buff = new StatementBuilder(method.getName()); + buff.append('('); + for (Object o : params) { + buff.appendExceptFirst(", "); + buff.append(o == null ? "null" : o.toString()); + } + buff.append(')'); + throw DbException.convertInvocation(e, buff.toString()); + } catch (Exception e) { + throw DbException.convert(e); + } + if (Value.class.isAssignableFrom(method.getReturnType())) { + return (Value) returnValue; + } + Value ret = DataType.convertToValue(session, returnValue, dataType); + return ret.convertTo(dataType); + } finally { + session.setLastScopeIdentity(identity); + session.setAutoCommit(old); + if (defaultConnection) { + Driver.setDefaultConnection(null); + } + } + } + + public Class[] getColumnClasses() { + return method.getParameterTypes(); + } + + public int getDataType() { + return dataType; + } + + public int getParameterCount() { + return paramCount; + } + + public boolean isVarArgs() { + return varArgs; + } + + @Override + public int compareTo(JavaMethod m) { + if (varArgs != m.varArgs) { + return varArgs ? 1 : -1; + } + if (paramCount != m.paramCount) { + return paramCount - m.paramCount; + } + if (hasConnectionParam != m.hasConnectionParam) { + return hasConnectionParam ? 1 : -1; + } + return id - m.id; + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/GeneratedKeys.java b/modules/h2/src/main/java/org/h2/engine/GeneratedKeys.java new file mode 100644 index 0000000000000..a3e3b2f962c90 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/GeneratedKeys.java @@ -0,0 +1,241 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.engine;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.h2.expression.Expression;
+import org.h2.expression.ExpressionColumn;
+import org.h2.result.LocalResult;
+import org.h2.result.Row;
+import org.h2.table.Column;
+import org.h2.table.Table;
+import org.h2.util.New;
+import org.h2.util.StringUtils;
+import org.h2.value.Value;
+import org.h2.value.ValueNull;
+
+/**
+ * Class for gathering and processing of generated keys.
+ */
+public final class GeneratedKeys {
+    /**
+     * Data for result set with generated keys.
+     */
+    private final ArrayList<Map<Column, Value>> data = New.arrayList();
+
+    /**
+     * Columns with generated keys in the current row.
+     */
+    private final ArrayList<Column> row = New.arrayList();
+
+    /**
+     * All columns with generated keys.
+     */
+    private final ArrayList<Column> allColumns = New.arrayList();
+
+    /**
+     * Request for keys gathering. {@code false} if generated keys are not needed,
+     * {@code true} if generated keys should be configured automatically,
+     * {@code int[]} to specify column indices to return generated keys from, or
+     * {@code String[]} to specify column names to return generated keys from.
+     */
+    private Object generatedKeysRequest;
+
+    /**
+     * Processed table.
+     */
+    private Table table;
+
+    /**
+     * Remembers columns with generated keys.
+     *
+     * @param column
+     *            table column
+     */
+    public void add(Column column) {
+        if (Boolean.FALSE.equals(generatedKeysRequest)) {
+            return;
+        }
+        row.add(column);
+    }
+
+    /**
+     * Clears all information from previous runs and sets a new request for
+     * gathering of generated keys.
+ * + * @param generatedKeysRequest + * {@code false} if generated keys are not needed, {@code true} if + * generated keys should be configured automatically, {@code int[]} + * to specify column indices to return generated keys from, or + * {@code String[]} to specify column names to return generated keys + * from + */ + public void clear(Object generatedKeysRequest) { + this.generatedKeysRequest = generatedKeysRequest; + data.clear(); + row.clear(); + allColumns.clear(); + table = null; + } + + /** + * Saves row with generated keys if any. + * + * @param tableRow + * table row that was inserted + */ + public void confirmRow(Row tableRow) { + if (Boolean.FALSE.equals(generatedKeysRequest)) { + return; + } + int size = row.size(); + if (size > 0) { + if (size == 1) { + Column column = row.get(0); + data.add(Collections.singletonMap(column, tableRow.getValue(column.getColumnId()))); + if (!allColumns.contains(column)) { + allColumns.add(column); + } + } else { + HashMap map = new HashMap<>(); + for (Column column : row) { + map.put(column, tableRow.getValue(column.getColumnId())); + if (!allColumns.contains(column)) { + allColumns.add(column); + } + } + data.add(map); + } + row.clear(); + } + } + + /** + * Returns generated keys. + * + * @param session + * session + * @return local result with generated keys + */ + public LocalResult getKeys(Session session) { + Database db = session == null ? 
null : session.getDatabase(); + if (Boolean.FALSE.equals(generatedKeysRequest)) { + clear(null); + return new LocalResult(); + } + ArrayList expressionColumns; + if (Boolean.TRUE.equals(generatedKeysRequest)) { + expressionColumns = new ArrayList<>(allColumns.size()); + for (Column column : allColumns) { + expressionColumns.add(new ExpressionColumn(db, column)); + } + } else if (generatedKeysRequest instanceof int[]) { + if (table != null) { + int[] indices = (int[]) generatedKeysRequest; + Column[] columns = table.getColumns(); + int cnt = columns.length; + allColumns.clear(); + expressionColumns = new ArrayList<>(indices.length); + for (int idx : indices) { + if (idx >= 1 && idx <= cnt) { + Column column = columns[idx - 1]; + expressionColumns.add(new ExpressionColumn(db, column)); + allColumns.add(column); + } + } + } else { + clear(null); + return new LocalResult(); + } + } else if (generatedKeysRequest instanceof String[]) { + if (table != null) { + String[] names = (String[]) generatedKeysRequest; + allColumns.clear(); + expressionColumns = new ArrayList<>(names.length); + for (String name : names) { + Column column; + search: if (table.doesColumnExist(name)) { + column = table.getColumn(name); + } else { + name = StringUtils.toUpperEnglish(name); + if (table.doesColumnExist(name)) { + column = table.getColumn(name); + } else { + for (Column c : table.getColumns()) { + if (c.getName().equalsIgnoreCase(name)) { + column = c; + break search; + } + } + continue; + } + } + expressionColumns.add(new ExpressionColumn(db, column)); + allColumns.add(column); + } + } else { + clear(null); + return new LocalResult(); + } + } else { + clear(null); + return new LocalResult(); + } + int columnCount = expressionColumns.size(); + if (columnCount == 0) { + clear(null); + return new LocalResult(); + } + LocalResult result = new LocalResult(session, expressionColumns.toArray(new Expression[0]), columnCount); + for (Map map : data) { + Value[] row = new Value[columnCount]; + 
for (Map.Entry<Column, Value> entry : map.entrySet()) {
+                int idx = allColumns.indexOf(entry.getKey());
+                if (idx >= 0) {
+                    row[idx] = entry.getValue();
+                }
+            }
+            for (int i = 0; i < columnCount; i++) {
+                if (row[i] == null) {
+                    row[i] = ValueNull.INSTANCE;
+                }
+            }
+            result.addRow(row);
+        }
+        clear(null);
+        return result;
+    }
+
+    /**
+     * Initializes processing of the specified table. Should be called after
+     * {@code clear()}, but before other methods.
+     *
+     * @param table
+     *            table
+     */
+    public void initialize(Table table) {
+        this.table = table;
+    }
+
+    /**
+     * Clears unsaved information about previous row, if any. Should be called
+     * before processing of a new row if previous row was not confirmed or simply
+     * always before each row.
+     */
+    public void nextRow() {
+        row.clear();
+    }
+
+    @Override
+    public String toString() {
+        return allColumns + ": " + data.size();
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/engine/GeneratedKeysMode.java b/modules/h2/src/main/java/org/h2/engine/GeneratedKeysMode.java
new file mode 100644
index 0000000000000..ae2853ac4220e
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/engine/GeneratedKeysMode.java
@@ -0,0 +1,67 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.engine;
+
+import org.h2.api.ErrorCode;
+import org.h2.message.DbException;
+
+/**
+ * Modes of generated keys' gathering.
+ */
+public final class GeneratedKeysMode {
+
+    /**
+     * Generated keys are not needed.
+     */
+    public static final int NONE = 0;
+
+    /**
+     * Generated keys should be configured automatically.
+     */
+    public static final int AUTO = 1;
+
+    /**
+     * Use specified column indices to return generated keys from.
+     */
+    public static final int COLUMN_NUMBERS = 2;
+
+    /**
+     * Use specified column names to return generated keys from.
+ */ + public static final int COLUMN_NAMES = 3; + + /** + * Determines mode of generated keys' gathering. + * + * @param generatedKeysRequest + * {@code false} if generated keys are not needed, {@code true} if + * generated keys should be configured automatically, {@code int[]} + * to specify column indices to return generated keys from, or + * {@code String[]} to specify column names to return generated keys + * from + * @return mode for the specified generated keys request + */ + public static int valueOf(Object generatedKeysRequest) { + if (Boolean.FALSE.equals(generatedKeysRequest)) { + return NONE; + } + if (Boolean.TRUE.equals(generatedKeysRequest)) { + return AUTO; + } + if (generatedKeysRequest instanceof int[]) { + return COLUMN_NUMBERS; + } + if (generatedKeysRequest instanceof String[]) { + return COLUMN_NAMES; + } + throw DbException.get(ErrorCode.INVALID_VALUE_2, + generatedKeysRequest == null ? "null" : generatedKeysRequest.toString()); + } + + private GeneratedKeysMode() { + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/MetaRecord.java b/modules/h2/src/main/java/org/h2/engine/MetaRecord.java new file mode 100644 index 0000000000000..fce6995aa8326 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/MetaRecord.java @@ -0,0 +1,151 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.sql.SQLException; +import org.h2.api.DatabaseEventListener; +import org.h2.command.Prepared; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.SearchRow; +import org.h2.value.ValueInt; +import org.h2.value.ValueString; + +/** + * A record in the system table of the database. + * It contains the SQL statement to create the database object. 
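The request objects dispatched on here correspond to the standard JDBC overloads: `Statement.NO_GENERATED_KEYS`/`RETURN_GENERATED_KEYS` map to `Boolean`, and the `prepareStatement(sql, int[])` / `prepareStatement(sql, String[])` overloads pass through the arrays. A self-contained sketch of the same dispatch, with the constants inlined and a plain `IllegalArgumentException` standing in for `DbException` (names here are illustrative, not H2 API):

```java
// Standalone sketch of the generated-keys request dispatch:
// Boolean.FALSE -> NONE, Boolean.TRUE -> AUTO,
// int[] -> COLUMN_NUMBERS, String[] -> COLUMN_NAMES.
final class KeysModeSketch {
    static final int NONE = 0, AUTO = 1, COLUMN_NUMBERS = 2, COLUMN_NAMES = 3;

    static int modeOf(Object request) {
        if (Boolean.FALSE.equals(request)) {
            return NONE;
        }
        if (Boolean.TRUE.equals(request)) {
            return AUTO;
        }
        if (request instanceof int[]) {
            return COLUMN_NUMBERS;
        }
        if (request instanceof String[]) {
            return COLUMN_NAMES;
        }
        throw new IllegalArgumentException(String.valueOf(request));
    }
}
```

So a client calling `connection.prepareStatement(sql, new String[] {"ID"})` ultimately produces a `String[]` request, i.e. the COLUMN_NAMES mode.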
+ */ +public class MetaRecord implements Comparable { + + private final int id; + private final int objectType; + private final String sql; + + public MetaRecord(SearchRow r) { + id = r.getValue(0).getInt(); + objectType = r.getValue(2).getInt(); + sql = r.getValue(3).getString(); + } + + MetaRecord(DbObject obj) { + id = obj.getId(); + objectType = obj.getType(); + sql = obj.getCreateSQL(); + } + + void setRecord(SearchRow r) { + r.setValue(0, ValueInt.get(id)); + r.setValue(1, ValueInt.get(0)); + r.setValue(2, ValueInt.get(objectType)); + r.setValue(3, ValueString.get(sql)); + } + + /** + * Execute the meta data statement. + * + * @param db the database + * @param systemSession the system session + * @param listener the database event listener + */ + void execute(Database db, Session systemSession, + DatabaseEventListener listener) { + try { + Prepared command = systemSession.prepare(sql); + command.setObjectId(id); + command.update(); + } catch (DbException e) { + e = e.addSQL(sql); + SQLException s = e.getSQLException(); + db.getTrace(Trace.DATABASE).error(s, sql); + if (listener != null) { + listener.exceptionThrown(s, sql); + // continue startup in this case + } else { + throw e; + } + } + } + + public int getId() { + return id; + } + + public int getObjectType() { + return objectType; + } + + public String getSQL() { + return sql; + } + + /** + * Sort the list of meta records by 'create order'. + * + * @param other the other record + * @return -1, 0, or 1 + */ + @Override + public int compareTo(MetaRecord other) { + int c1 = getCreateOrder(); + int c2 = other.getCreateOrder(); + if (c1 != c2) { + return c1 - c2; + } + return getId() - other.getId(); + } + + /** + * Get the sort order id for this object type. Objects are created in this + * order when opening a database. 
+ * + * @return the sort index + */ + private int getCreateOrder() { + switch (objectType) { + case DbObject.SETTING: + return 0; + case DbObject.USER: + return 1; + case DbObject.SCHEMA: + return 2; + case DbObject.FUNCTION_ALIAS: + return 3; + case DbObject.USER_DATATYPE: + return 4; + case DbObject.SEQUENCE: + return 5; + case DbObject.CONSTANT: + return 6; + case DbObject.TABLE_OR_VIEW: + return 7; + case DbObject.INDEX: + return 8; + case DbObject.CONSTRAINT: + return 9; + case DbObject.TRIGGER: + return 10; + case DbObject.SYNONYM: + return 11; + case DbObject.ROLE: + return 12; + case DbObject.RIGHT: + return 13; + case DbObject.AGGREGATE: + return 14; + case DbObject.COMMENT: + return 15; + default: + throw DbException.throwInternalError("type="+objectType); + } + } + + @Override + public String toString() { + return "MetaRecord [id=" + id + ", objectType=" + objectType + + ", sql=" + sql + "]"; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Mode.java b/modules/h2/src/main/java/org/h2/engine/Mode.java new file mode 100644 index 0000000000000..e20d3ff574e36 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Mode.java @@ -0,0 +1,350 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Set; +import java.util.regex.Pattern; +import org.h2.util.StringUtils; +import org.h2.value.DataType; +import org.h2.value.Value; + +/** + * The compatibility modes. There is a fixed set of modes (for example + * PostgreSQL, MySQL). Each mode has different settings. + */ +public class Mode { + + public enum ModeEnum { + REGULAR, DB2, Derby, MSSQLServer, HSQLDB, MySQL, Oracle, PostgreSQL, Ignite, + } + + /** + * Determines how rows with {@code NULL} values in indexed columns are handled + * in unique indexes. 
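The `compareTo`/`getCreateOrder` pair above guarantees that when a database is reopened, settings are replayed before users, users before schemas, and so on, with the object id breaking ties within a type. A minimal sketch of that ordering contract, using made-up type codes (not the real `DbObject` constants) and `{id, type}` pairs in place of `MetaRecord`:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the meta-record replay order: sort by a per-type "create
// order", then by object id. Type codes below are illustrative only.
final class MetaOrderSketch {
    static final int SETTING = 100, USER = 101, SCHEMA = 102, TABLE = 107;

    static int createOrder(int type) {
        if (type == SETTING) return 0;
        if (type == USER)    return 1;
        if (type == SCHEMA)  return 2;
        if (type == TABLE)   return 7;
        throw new IllegalStateException("type=" + type);
    }

    /** Sorts {id, type} pairs the way the engine replays meta records. */
    static void sort(int[][] records) {
        Arrays.sort(records, Comparator
                .comparingInt((int[] r) -> createOrder(r[1]))
                .thenComparingInt(r -> r[0]));
    }
}
```

With this ordering, a table record can never be replayed before the schema it belongs to, regardless of the order the rows were read from the system table.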
+ */ + public enum UniqueIndexNullsHandling { + /** + * Multiple rows with identical values in indexed columns with at least one + * indexed {@code NULL} value are allowed in unique index. + */ + ALLOW_DUPLICATES_WITH_ANY_NULL, + + /** + * Multiple rows with identical values in indexed columns with all indexed + * {@code NULL} values are allowed in unique index. + */ + ALLOW_DUPLICATES_WITH_ALL_NULLS, + + /** + * Multiple rows with identical values in indexed columns are not allowed in + * unique index. + */ + FORBID_ANY_DUPLICATES + } + + private static final HashMap MODES = new HashMap<>(); + + // Modes are also documented in the features section + + /** + * When enabled, aliased columns (as in SELECT ID AS I FROM TEST) return the + * alias (I in this case) in ResultSetMetaData.getColumnName() and 'null' in + * getTableName(). If disabled, the real column name (ID in this case) and + * table name is returned. + */ + public boolean aliasColumnName; + + /** + * When inserting data, if a column is defined to be NOT NULL and NULL is + * inserted, then a 0 (or empty string, or the current timestamp for + * timestamp columns) value is used. Usually, this operation is not allowed + * and an exception is thrown. + */ + public boolean convertInsertNullToZero; + + /** + * When converting the scale of decimal data, the number is only converted + * if the new scale is smaller than the current scale. Usually, the scale is + * converted and 0s are added if required. + */ + public boolean convertOnlyToSmallerScale; + + /** + * Creating indexes in the CREATE TABLE statement is allowed using + * INDEX(..) or KEY(..). + * Example: create table test(id int primary key, name varchar(255), + * key idx_name(name)); + */ + public boolean indexDefinitionInCreateTable; + + /** + * Meta data calls return identifiers in lower case. + */ + public boolean lowerCaseIdentifiers; + + /** + * Concatenation with NULL results in NULL. 
Usually, NULL is treated as an + * empty string if only one of the operands is NULL, and NULL is only + * returned if both operands are NULL. + */ + public boolean nullConcatIsNull; + + /** + * Identifiers may be quoted using square brackets as in [Test]. + */ + public boolean squareBracketQuotedNames; + + /** + * The system columns 'CTID' and 'OID' are supported. + */ + public boolean systemColumns; + + /** + * Determines how rows with {@code NULL} values in indexed columns are handled + * in unique indexes. + */ + public UniqueIndexNullsHandling uniqueIndexNullsHandling = UniqueIndexNullsHandling.ALLOW_DUPLICATES_WITH_ANY_NULL; + + /** + * Empty strings are treated like NULL values. Useful for Oracle emulation. + */ + public boolean treatEmptyStringsAsNull; + + /** + * Support the pseudo-table SYSIBM.SYSDUMMY1. + */ + public boolean sysDummy1; + + /** + * Text can be concatenated using '+'. + */ + public boolean allowPlusForStringConcat; + + /** + * The function LOG() uses base 10 instead of E. + */ + public boolean logIsLogBase10; + + /** + * The function REGEXP_REPLACE() uses \ for back-references. + */ + public boolean regexpReplaceBackslashReferences; + + /** + * SERIAL and BIGSERIAL columns are not automatically primary keys. + */ + public boolean serialColumnIsNotPK; + + /** + * Swap the parameters of the CONVERT function. + */ + public boolean swapConvertFunctionParameters; + + /** + * can set the isolation level using WITH {RR|RS|CS|UR} + */ + public boolean isolationLevelInSelectOrInsertStatement; + + /** + * MySQL style INSERT ... ON DUPLICATE KEY UPDATE ... and INSERT IGNORE + */ + public boolean onDuplicateKeyUpdate; + + /** + * Pattern describing the keys the java.sql.Connection.setClientInfo() + * method accepts. + */ + public Pattern supportedClientInfoPropertiesRegEx; + + /** + * Support the # for column names + */ + public boolean supportPoundSymbolForColumnNames; + + /** + * Whether an empty list as in "NAME IN()" results in a syntax error. 
+ */ + public boolean prohibitEmptyInPredicate; + + /** + * Whether AFFINITY KEY keywords are supported. + */ + public boolean allowAffinityKey; + + /** + * Whether to right-pad fixed strings with spaces. + */ + public boolean padFixedLengthStrings; + + /** + * Whether DB2 TIMESTAMP formats are allowed. + */ + public boolean allowDB2TimestampFormat; + + /** + * An optional Set of hidden/disallowed column types. + * Certain DBMSs don't support all column types provided by H2, such as + * "NUMBER" when using PostgreSQL mode. + */ + public Set<String> disallowedTypes = Collections.emptySet(); + + /** + * Custom mappings from type names to data types. + */ + public HashMap<String, DataType> typeByNameMap = new HashMap<>(); + + private final String name; + + private ModeEnum modeEnum; + + static { + Mode mode = new Mode(ModeEnum.REGULAR.name()); + mode.nullConcatIsNull = true; + add(mode); + + mode = new Mode(ModeEnum.DB2.name()); + mode.aliasColumnName = true; + mode.sysDummy1 = true; + mode.isolationLevelInSelectOrInsertStatement = true; + // See + // https://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/ + // com.ibm.db2z11.doc.java/src/tpc/imjcc_r0052001.dita + mode.supportedClientInfoPropertiesRegEx = + Pattern.compile("ApplicationName|ClientAccountingInformation|" + + "ClientUser|ClientCorrelationToken"); + mode.prohibitEmptyInPredicate = true; + mode.allowDB2TimestampFormat = true; + add(mode); + + mode = new Mode(ModeEnum.Derby.name()); + mode.aliasColumnName = true; + mode.uniqueIndexNullsHandling = UniqueIndexNullsHandling.FORBID_ANY_DUPLICATES; + mode.sysDummy1 = true; + mode.isolationLevelInSelectOrInsertStatement = true; + // Derby does not support client info properties as of version 10.12.1.1 + mode.supportedClientInfoPropertiesRegEx = null; + add(mode); + + mode = new Mode(ModeEnum.HSQLDB.name()); + mode.aliasColumnName = true; + mode.convertOnlyToSmallerScale = true; + mode.nullConcatIsNull = true; + mode.uniqueIndexNullsHandling =
UniqueIndexNullsHandling.FORBID_ANY_DUPLICATES; + mode.allowPlusForStringConcat = true; + // HSQLDB does not support client info properties. See + // http://hsqldb.org/doc/apidocs/ + // org/hsqldb/jdbc/JDBCConnection.html# + // setClientInfo%28java.lang.String,%20java.lang.String%29 + mode.supportedClientInfoPropertiesRegEx = null; + add(mode); + + mode = new Mode(ModeEnum.MSSQLServer.name()); + mode.aliasColumnName = true; + mode.squareBracketQuotedNames = true; + mode.uniqueIndexNullsHandling = UniqueIndexNullsHandling.FORBID_ANY_DUPLICATES; + mode.allowPlusForStringConcat = true; + mode.swapConvertFunctionParameters = true; + mode.supportPoundSymbolForColumnNames = true; + // MS SQL Server does not support client info properties. See + // https://msdn.microsoft.com/en-Us/library/dd571296%28v=sql.110%29.aspx + mode.supportedClientInfoPropertiesRegEx = null; + add(mode); + + mode = new Mode(ModeEnum.MySQL.name()); + mode.convertInsertNullToZero = true; + mode.indexDefinitionInCreateTable = true; + mode.lowerCaseIdentifiers = true; + // Next one is for MariaDB + mode.regexpReplaceBackslashReferences = true; + mode.onDuplicateKeyUpdate = true; + // MySQL allows to use any key for client info entries. See + // http://grepcode.com/file/repo1.maven.org/maven2/mysql/ + // mysql-connector-java/5.1.24/com/mysql/jdbc/ + // JDBC4CommentClientInfoProvider.java + mode.supportedClientInfoPropertiesRegEx = + Pattern.compile(".*"); + mode.prohibitEmptyInPredicate = true; + add(mode); + + mode = new Mode(ModeEnum.Oracle.name()); + mode.aliasColumnName = true; + mode.convertOnlyToSmallerScale = true; + mode.uniqueIndexNullsHandling = UniqueIndexNullsHandling.ALLOW_DUPLICATES_WITH_ALL_NULLS; + mode.treatEmptyStringsAsNull = true; + mode.regexpReplaceBackslashReferences = true; + mode.supportPoundSymbolForColumnNames = true; + // Oracle accepts keys of the form .*. 
See + // https://docs.oracle.com/database/121/JJDBC/jdbcvers.htm#JJDBC29006 + mode.supportedClientInfoPropertiesRegEx = + Pattern.compile(".*\\..*"); + mode.prohibitEmptyInPredicate = true; + mode.typeByNameMap.put("DATE", DataType.getDataType(Value.TIMESTAMP)); + add(mode); + + mode = new Mode(ModeEnum.PostgreSQL.name()); + mode.aliasColumnName = true; + mode.nullConcatIsNull = true; + mode.systemColumns = true; + mode.logIsLogBase10 = true; + mode.regexpReplaceBackslashReferences = true; + mode.serialColumnIsNotPK = true; + // PostgreSQL only supports the ApplicationName property. See + // https://github.com/hhru/postgres-jdbc/blob/master/postgresql-jdbc-9.2-1002.src/ + // org/postgresql/jdbc4/AbstractJdbc4Connection.java + mode.supportedClientInfoPropertiesRegEx = + Pattern.compile("ApplicationName"); + mode.prohibitEmptyInPredicate = true; + mode.padFixedLengthStrings = true; + // Enumerate all H2 types NOT supported by PostgreSQL: + Set<String> disallowedTypes = new java.util.HashSet<>(); + disallowedTypes.add("NUMBER"); + disallowedTypes.add("IDENTITY"); + disallowedTypes.add("TINYINT"); + disallowedTypes.add("BLOB"); + mode.disallowedTypes = disallowedTypes; + add(mode); + + mode = new Mode(ModeEnum.Ignite.name()); + mode.nullConcatIsNull = true; + mode.allowAffinityKey = true; + mode.indexDefinitionInCreateTable = true; + add(mode); + } + + private Mode(String name) { + this.name = name; + this.modeEnum = ModeEnum.valueOf(name); + } + + private static void add(Mode mode) { + MODES.put(StringUtils.toUpperEnglish(mode.name), mode); + } + + /** + * Get the mode with the given name.
+ * + * @param name the name of the mode + * @return the mode object + */ + public static Mode getInstance(String name) { + return MODES.get(StringUtils.toUpperEnglish(name)); + } + + public static Mode getRegular() { + return getInstance(ModeEnum.REGULAR.name()); + } + + public String getName() { + return name; + } + + public ModeEnum getEnum() { + return this.modeEnum; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Procedure.java b/modules/h2/src/main/java/org/h2/engine/Procedure.java new file mode 100644 index 0000000000000..af2d3648caa08 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Procedure.java @@ -0,0 +1,32 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.command.Prepared; + +/** + * Represents a procedure. Procedures are implemented for PostgreSQL + * compatibility. + */ +public class Procedure { + + private final String name; + private final Prepared prepared; + + public Procedure(String name, Prepared prepared) { + this.name = name; + this.prepared = prepared; + } + + public String getName() { + return name; + } + + public Prepared getPrepared() { + return prepared; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/QueryStatisticsData.java b/modules/h2/src/main/java/org/h2/engine/QueryStatisticsData.java new file mode 100644 index 0000000000000..56d5de829b68a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/QueryStatisticsData.java @@ -0,0 +1,200 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map.Entry; + +/** + * Maintains query statistics. + */ +public class QueryStatisticsData { + + private static final Comparator<QueryEntry> QUERY_ENTRY_COMPARATOR = + new Comparator<QueryEntry>() { + @Override + public int compare(QueryEntry o1, QueryEntry o2) { + return (int) Math.signum(o1.lastUpdateTime - o2.lastUpdateTime); + } + }; + + private final HashMap<String, QueryEntry> map = + new HashMap<>(); + + private int maxQueryEntries; + + public QueryStatisticsData(int maxQueryEntries) { + this.maxQueryEntries = maxQueryEntries; + } + + public synchronized void setMaxQueryEntries(int maxQueryEntries) { + this.maxQueryEntries = maxQueryEntries; + } + + public synchronized List<QueryEntry> getQueries() { + // return a copy of the map so we don't have to + // worry about external synchronization + ArrayList<QueryEntry> list = new ArrayList<>(map.values()); + // only return the newest 100 entries + Collections.sort(list, QUERY_ENTRY_COMPARATOR); + return list.subList(0, Math.min(list.size(), maxQueryEntries)); + } + + /** + * Update query statistics. + * + * @param sqlStatement the statement being executed + * @param executionTimeNanos the time in nanoseconds the query/update took + * to execute + * @param rowCount the query or update row count + */ + public synchronized void update(String sqlStatement, long executionTimeNanos, + int rowCount) { + QueryEntry entry = map.get(sqlStatement); + if (entry == null) { + entry = new QueryEntry(sqlStatement); + map.put(sqlStatement, entry); + } + entry.update(executionTimeNanos, rowCount); + + // Age-out the oldest entries if the map gets too big.
+ // Test against 1.5 x max-size so we don't do this too often + if (map.size() > maxQueryEntries * 1.5f) { + // Sort the entries by age + ArrayList<QueryEntry> list = new ArrayList<>(map.values()); + Collections.sort(list, QUERY_ENTRY_COMPARATOR); + // Create a set of the oldest 1/3 of the entries + HashSet<QueryEntry> oldestSet = + new HashSet<>(list.subList(0, list.size() / 3)); + // Loop over the map using the set and remove + // the oldest 1/3 of the entries. + for (Iterator<Entry<String, QueryEntry>> it = + map.entrySet().iterator(); it.hasNext();) { + Entry<String, QueryEntry> mapEntry = it.next(); + if (oldestSet.contains(mapEntry.getValue())) { + it.remove(); + } + } + } + } + + /** + * The collected statistics for one query. + */ + public static final class QueryEntry { + + /** + * The SQL statement. + */ + public final String sqlStatement; + + /** + * The number of times the statement was executed. + */ + public int count; + + /** + * The last time the statistics for this entry were updated, + * in milliseconds since 1970. + */ + public long lastUpdateTime; + + /** + * The minimum execution time, in nanoseconds. + */ + public long executionTimeMinNanos; + + /** + * The maximum execution time, in nanoseconds. + */ + public long executionTimeMaxNanos; + + /** + * The total execution time. + */ + public long executionTimeCumulativeNanos; + + /** + * The minimum number of rows. + */ + public int rowCountMin; + + /** + * The maximum number of rows. + */ + public int rowCountMax; + + /** + * The total number of rows. + */ + public long rowCountCumulative; + + /** + * The mean execution time. + */ + public double executionTimeMeanNanos; + + /** + * The mean number of rows.
+ */ + public double rowCountMean; + + // Using Welford's method, see also + // http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance + // http://www.johndcook.com/standard_deviation.html + + private double executionTimeM2Nanos; + private double rowCountM2; + + public QueryEntry(String sql) { + this.sqlStatement = sql; + } + + /** + * Update the statistics entry. + * + * @param timeNanos the execution time in nanos + * @param rows the number of rows + */ + void update(long timeNanos, int rows) { + count++; + executionTimeMinNanos = Math.min(timeNanos, executionTimeMinNanos); + executionTimeMaxNanos = Math.max(timeNanos, executionTimeMaxNanos); + rowCountMin = Math.min(rows, rowCountMin); + rowCountMax = Math.max(rows, rowCountMax); + + double rowDelta = rows - rowCountMean; + rowCountMean += rowDelta / count; + rowCountM2 += rowDelta * (rows - rowCountMean); + + double timeDelta = timeNanos - executionTimeMeanNanos; + executionTimeMeanNanos += timeDelta / count; + executionTimeM2Nanos += timeDelta * (timeNanos - executionTimeMeanNanos); + + executionTimeCumulativeNanos += timeNanos; + rowCountCumulative += rows; + lastUpdateTime = System.currentTimeMillis(); + } + + public double getExecutionTimeStandardDeviation() { + // population standard deviation + return Math.sqrt(executionTimeM2Nanos / count); + } + + public double getRowCountStandardDeviation() { + // population standard deviation + return Math.sqrt(rowCountM2 / count); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Right.java b/modules/h2/src/main/java/org/h2/engine/Right.java new file mode 100644 index 0000000000000..95a2769c35a91 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Right.java @@ -0,0 +1,190 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.schema.Schema; +import org.h2.table.Table; + +/** + * An access right. Rights are regular database objects, but have generated + * names. + */ +public class Right extends DbObjectBase { + + /** + * The right bit mask that means: selecting from a table is allowed. + */ + public static final int SELECT = 1; + + /** + * The right bit mask that means: deleting rows from a table is allowed. + */ + public static final int DELETE = 2; + + /** + * The right bit mask that means: inserting rows into a table is allowed. + */ + public static final int INSERT = 4; + + /** + * The right bit mask that means: updating data is allowed. + */ + public static final int UPDATE = 8; + + /** + * The right bit mask that means: create/alter/drop schema is allowed. + */ + public static final int ALTER_ANY_SCHEMA = 16; + + /** + * The right bit mask that means: select, delete, insert, and update for + * this object are allowed. + */ + public static final int ALL = SELECT | DELETE | INSERT | UPDATE; + + /** + * To whom the right is granted. + */ + private RightOwner grantee; + + /** + * The granted role, or null if a right was granted. + */ + private Role grantedRole; + + /** + * The granted right. + */ + private int grantedRight; + + /** + * The object. If the right is global, this is null.
+ */ + private DbObject grantedObject; + + public Right(Database db, int id, RightOwner grantee, Role grantedRole) { + initDbObjectBase(db, id, "RIGHT_" + id, Trace.USER); + this.grantee = grantee; + this.grantedRole = grantedRole; + } + + public Right(Database db, int id, RightOwner grantee, int grantedRight, + DbObject grantedObject) { + initDbObjectBase(db, id, "" + id, Trace.USER); + this.grantee = grantee; + this.grantedRight = grantedRight; + this.grantedObject = grantedObject; + } + + private static boolean appendRight(StringBuilder buff, int right, int mask, + String name, boolean comma) { + if ((right & mask) != 0) { + if (comma) { + buff.append(", "); + } + buff.append(name); + return true; + } + return comma; + } + + public String getRights() { + StringBuilder buff = new StringBuilder(); + if (grantedRight == ALL) { + buff.append("ALL"); + } else { + boolean comma = false; + comma = appendRight(buff, grantedRight, SELECT, "SELECT", comma); + comma = appendRight(buff, grantedRight, DELETE, "DELETE", comma); + comma = appendRight(buff, grantedRight, INSERT, "INSERT", comma); + comma = appendRight(buff, grantedRight, ALTER_ANY_SCHEMA, + "ALTER ANY SCHEMA", comma); + appendRight(buff, grantedRight, UPDATE, "UPDATE", comma); + } + return buff.toString(); + } + + public Role getGrantedRole() { + return grantedRole; + } + + public DbObject getGrantedObject() { + return grantedObject; + } + + public DbObject getGrantee() { + return grantee; + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + return getCreateSQLForCopy(table); + } + + private String getCreateSQLForCopy(DbObject object) { + StringBuilder buff = new StringBuilder(); + buff.append("GRANT "); + if (grantedRole != null) { + buff.append(grantedRole.getSQL()); + } else { + buff.append(getRights()); + if (object != null) { + if (object instanceof Schema) { + buff.append(" ON SCHEMA 
").append(object.getSQL()); + } else if (object instanceof Table) { + buff.append(" ON ").append(object.getSQL()); + } + } + } + buff.append(" TO ").append(grantee.getSQL()); + return buff.toString(); + } + + @Override + public String getCreateSQL() { + return getCreateSQLForCopy(grantedObject); + } + + @Override + public int getType() { + return DbObject.RIGHT; + } + + @Override + public void removeChildrenAndResources(Session session) { + if (grantedRole != null) { + grantee.revokeRole(grantedRole); + } else { + grantee.revokeRight(grantedObject); + } + database.removeMeta(session, getId()); + grantedRole = null; + grantedObject = null; + grantee = null; + invalidate(); + } + + @Override + public void checkRename() { + DbException.throwInternalError(); + } + + public void setRightMask(int rightMask) { + grantedRight = rightMask; + } + + public int getRightMask() { + return grantedRight; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/RightOwner.java b/modules/h2/src/main/java/org/h2/engine/RightOwner.java new file mode 100644 index 0000000000000..9608a709b8a59 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/RightOwner.java @@ -0,0 +1,180 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.HashMap; + +import org.h2.table.Table; + +/** + * A right owner (sometimes called principal). + */ +public abstract class RightOwner extends DbObjectBase { + + /** + * The map of granted roles. + */ + private HashMap<Role, Right> grantedRoles; + + /** + * The map of granted rights. + */ + private HashMap<DbObject, Right> grantedRights; + + protected RightOwner(Database database, int id, String name, + int traceModuleId) { + initDbObjectBase(database, id, name, traceModuleId); + } + + /** + * Check if a role has been granted for this right owner.
+ * + * @param grantedRole the role + * @return true if the role has been granted + */ + public boolean isRoleGranted(Role grantedRole) { + if (grantedRole == this) { + return true; + } + if (grantedRoles != null) { + for (Role role : grantedRoles.keySet()) { + if (role == grantedRole) { + return true; + } + if (role.isRoleGranted(grantedRole)) { + return true; + } + } + } + return false; + } + + /** + * Check if a right is already granted to this object or to objects that + * were granted to this object. The rights for schemas take + * precedence over the rights of tables; in other words, the rights of a + * schema apply to every table in the related schema. + * + * @param table the table to check + * @param rightMask the right mask to check + * @return true if the right was already granted + */ + boolean isRightGrantedRecursive(Table table, int rightMask) { + Right right; + if (grantedRights != null) { + if (table != null) { + right = grantedRights.get(table.getSchema()); + if (right != null) { + if ((right.getRightMask() & rightMask) == rightMask) { + return true; + } + } + } + right = grantedRights.get(table); + if (right != null) { + if ((right.getRightMask() & rightMask) == rightMask) { + return true; + } + } + } + if (grantedRoles != null) { + for (RightOwner role : grantedRoles.keySet()) { + if (role.isRightGrantedRecursive(table, rightMask)) { + return true; + } + } + } + return false; + } + + /** + * Grant a right for the given table. Only one right object per table is + * supported. + * + * @param object the object (table or schema) + * @param right the right + */ + public void grantRight(DbObject object, Right right) { + if (grantedRights == null) { + grantedRights = new HashMap<>(); + } + grantedRights.put(object, right); + } + + /** + * Revoke the right for the given object (table or schema).
+ * + * @param object the object + */ + void revokeRight(DbObject object) { + if (grantedRights == null) { + return; + } + grantedRights.remove(object); + if (grantedRights.size() == 0) { + grantedRights = null; + } + } + + /** + * Grant a role to this object. + * + * @param role the role + * @param right the right to grant + */ + public void grantRole(Role role, Right right) { + if (grantedRoles == null) { + grantedRoles = new HashMap<>(); + } + grantedRoles.put(role, right); + } + + /** + * Remove the right for the given role. + * + * @param role the role to revoke + */ + void revokeRole(Role role) { + if (grantedRoles == null) { + return; + } + Right right = grantedRoles.get(role); + if (right == null) { + return; + } + grantedRoles.remove(role); + if (grantedRoles.size() == 0) { + grantedRoles = null; + } + } + + /** + * Get the 'grant schema' right of this object. + * + * @param object the granted object (table or schema) + * @return the right or null if the right has not been granted + */ + public Right getRightForObject(DbObject object) { + if (grantedRights == null) { + return null; + } + return grantedRights.get(object); + } + + /** + * Get the 'grant role' right of this object. + * + * @param role the granted role + * @return the right or null if the right has not been granted + */ + public Right getRightForRole(Role role) { + if (grantedRoles == null) { + return null; + } + return grantedRoles.get(role); + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Role.java b/modules/h2/src/main/java/org/h2/engine/Role.java new file mode 100644 index 0000000000000..465e22cb46c84 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Role.java @@ -0,0 +1,90 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.table.Table; + +/** + * Represents a role. Roles can be granted to users, and to other roles. + */ +public class Role extends RightOwner { + + private final boolean system; + + public Role(Database database, int id, String roleName, boolean system) { + super(database, id, roleName, Trace.USER); + this.system = system; + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getDropSQL() { + return null; + } + + /** + * Get the CREATE SQL statement for this object. + * + * @param ifNotExists true if IF NOT EXISTS should be used + * @return the SQL statement + */ + public String getCreateSQL(boolean ifNotExists) { + if (system) { + return null; + } + StringBuilder buff = new StringBuilder("CREATE ROLE "); + if (ifNotExists) { + buff.append("IF NOT EXISTS "); + } + buff.append(getSQL()); + return buff.toString(); + } + + @Override + public String getCreateSQL() { + return getCreateSQL(false); + } + + @Override + public int getType() { + return DbObject.ROLE; + } + + @Override + public void removeChildrenAndResources(Session session) { + for (User user : database.getAllUsers()) { + Right right = user.getRightForRole(this); + if (right != null) { + database.removeDatabaseObject(session, right); + } + } + for (Role r2 : database.getAllRoles()) { + Right right = r2.getRightForRole(this); + if (right != null) { + database.removeDatabaseObject(session, right); + } + } + for (Right right : database.getAllRights()) { + if (right.getGrantee() == this) { + database.removeDatabaseObject(session, right); + } + } + database.removeMeta(session, getId()); + invalidate(); + } + + @Override + public void checkRename() { + // ok + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Session.java 
b/modules/h2/src/main/java/org/h2/engine/Session.java new file mode 100644 index 0000000000000..c3a8f254ba67e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Session.java @@ -0,0 +1,1811 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.ArrayDeque; +import java.util.ArrayList; +import java.util.Deque; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.LinkedList; +import java.util.Map; +import java.util.Random; +import java.util.concurrent.TimeUnit; +import org.h2.api.ErrorCode; +import org.h2.command.Command; +import org.h2.command.CommandInterface; +import org.h2.command.Parser; +import org.h2.command.Prepared; +import org.h2.command.ddl.Analyze; +import org.h2.command.dml.Query; +import org.h2.command.dml.SetTypes; +import org.h2.constraint.Constraint; +import org.h2.index.Index; +import org.h2.index.ViewIndex; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.message.TraceSystem; +import org.h2.mvstore.db.MVTable; +import org.h2.mvstore.db.TransactionStore.Change; +import org.h2.mvstore.db.TransactionStore.Transaction; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.SortOrder; +import org.h2.schema.Schema; +import org.h2.store.DataHandler; +import org.h2.store.InDoubtTransaction; +import org.h2.store.LobStorageFrontend; +import org.h2.table.SubQueryInfo; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.table.TableType; +import org.h2.util.ColumnNamerConfiguration; +import org.h2.util.New; +import org.h2.util.SmallLRUCache; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; + +/** + * A session 
represents an embedded database connection. When using the server + * mode, this object resides on the server side and communicates with a + * SessionRemote object on the client side. + */ +public class Session extends SessionWithState { + + /** + * This special log position means that the log entry has been written. + */ + public static final int LOG_WRITTEN = -1; + + /** + * The prefix of generated identifiers. It may not have letters, because + * they are case sensitive. + */ + private static final String SYSTEM_IDENTIFIER_PREFIX = "_"; + private static int nextSerialId; + + private final int serialId = nextSerialId++; + private final Database database; + private ConnectionInfo connectionInfo; + private final User user; + private final int id; + private final ArrayList<Table> locks = New.arrayList(); + private final UndoLog undoLog; + private boolean autoCommit = true; + private Random random; + private int lockTimeout; + private Value lastIdentity = ValueLong.get(0); + private Value lastScopeIdentity = ValueLong.get(0); + private Value lastTriggerIdentity; + private GeneratedKeys generatedKeys; + private int firstUncommittedLog = Session.LOG_WRITTEN; + private int firstUncommittedPos = Session.LOG_WRITTEN; + private HashMap<String, Savepoint> savepoints; + private HashMap<String, Table> localTempTables; + private HashMap<String, Index> localTempTableIndexes; + private HashMap<String, Constraint> localTempTableConstraints; + private long throttleNs; + private long lastThrottle; + private Command currentCommand; + private boolean allowLiterals; + private String currentSchemaName; + private String[] schemaSearchPath; + private Trace trace; + private HashMap<String, Value> removeLobMap; + private int systemIdentifier; + private HashMap<String, Procedure> procedures; + private boolean undoLogEnabled = true; + private boolean redoLogBinary = true; + private boolean autoCommitAtTransactionEnd; + private String currentTransactionName; + private volatile long cancelAtNs; + private boolean closed; + private final long sessionStart = System.currentTimeMillis(); + private long transactionStart; + private long currentCommandStart; + private HashMap<String, Value> variables; + private HashSet<ResultInterface> temporaryResults; + private int queryTimeout; + private boolean commitOrRollbackDisabled; + private Table waitForLock; + private Thread waitForLockThread; + private int modificationId; + private int objectId; + private final int queryCacheSize; + private SmallLRUCache<String, Command> queryCache; + private long modificationMetaID = -1; + private SubQueryInfo subQueryInfo; + private int parsingView; + private final Deque<String> viewNameStack = new ArrayDeque<>(); + private int preparingQueryExpression; + private volatile SmallLRUCache<Object, ViewIndex> viewIndexCache; + private HashMap<Object, ViewIndex> subQueryIndexCache; + private boolean joinBatchEnabled; + private boolean forceJoinOrder; + private boolean lazyQueryExecution; +
private ColumnNamerConfiguration columnNamerConfiguration; + /** + * Tables marked for ANALYZE after the current transaction is committed. + * Prevents us calling ANALYZE repeatedly in large transactions. + */ + private HashSet<Table> tablesToAnalyze; + + /** + * Temporary LOBs from result sets. Those are kept for some time. The + * problem is that transactions are committed before the result is returned, + * and in some cases the next transaction is already started before the + * result is read (for example when using the server mode, when accessing + * metadata methods). We can't simply free those values up when starting the + * next transaction, because they would be removed too early. + */ + private LinkedList<TimeoutValue> temporaryResultLobs; + + /** + * The temporary LOBs that need to be removed on commit. + */ + private ArrayList<Value> temporaryLobs; + + private Transaction transaction; + private long startStatement = -1; + + public Session(Database database, User user, int id) { + this.database = database; + this.queryTimeout = database.getSettings().maxQueryTimeout; + this.queryCacheSize = database.getSettings().queryCacheSize; + this.undoLog = new UndoLog(this); + this.user = user; + this.id = id; + Setting setting = database.findSetting( + SetTypes.getTypeName(SetTypes.DEFAULT_LOCK_TIMEOUT)); + this.lockTimeout = setting == null ? + Constants.INITIAL_LOCK_TIMEOUT : setting.getIntValue(); + this.currentSchemaName = Constants.SCHEMA_MAIN; + this.columnNamerConfiguration = ColumnNamerConfiguration.getDefault(); + } + + public void setLazyQueryExecution(boolean lazyQueryExecution) { + this.lazyQueryExecution = lazyQueryExecution; + } + + public boolean isLazyQueryExecution() { + return lazyQueryExecution; + } + + public void setForceJoinOrder(boolean forceJoinOrder) { + this.forceJoinOrder = forceJoinOrder; + } + + public boolean isForceJoinOrder() { + return forceJoinOrder; + } + + public void setJoinBatchEnabled(boolean joinBatchEnabled) { + this.joinBatchEnabled = joinBatchEnabled; + } + + public boolean isJoinBatchEnabled() { + return joinBatchEnabled; + } + + /** + * Create a new row for a table.
+ * + * @param data the values + * @param memory whether the row is in memory + * @return the created row + */ + public Row createRow(Value[] data, int memory) { + return database.createRow(data, memory); + } + + /** + * Add a subquery info on top of the subquery info stack. + * + * @param masks the mask + * @param filters the filters + * @param filter the filter index + * @param sortOrder the sort order + */ + public void pushSubQueryInfo(int[] masks, TableFilter[] filters, int filter, + SortOrder sortOrder) { + subQueryInfo = new SubQueryInfo(subQueryInfo, masks, filters, filter, sortOrder); + } + + /** + * Remove the current subquery info from the stack. + */ + public void popSubQueryInfo() { + subQueryInfo = subQueryInfo.getUpper(); + } + + public SubQueryInfo getSubQueryInfo() { + return subQueryInfo; + } + + /** + * Stores name of currently parsed view in a stack so it can be determined + * during {@code prepare()}. + * + * @param parsingView + * {@code true} to store one more name, {@code false} to remove it + * from stack + * @param viewName + * name of the view + */ + public void setParsingCreateView(boolean parsingView, String viewName) { + // It can be recursive, thus implemented as counter. + this.parsingView += parsingView ? 1 : -1; + assert this.parsingView >= 0; + if (parsingView) { + viewNameStack.push(viewName); + } else { + assert viewName.equals(viewNameStack.peek()); + viewNameStack.pop(); + } + } + public String getParsingCreateViewName() { + if (viewNameStack.isEmpty()) { + return null; + } + return viewNameStack.peek(); + } + + public boolean isParsingCreateView() { + assert parsingView >= 0; + return parsingView != 0; + } + + /** + * Optimize a query. This will remember the subquery info, clear it, prepare + * the query, and reset the subquery info. 
+ * + * @param query the query to prepare + */ + public void optimizeQueryExpression(Query query) { + // we have to hide current subQueryInfo if we are going to optimize + // query expression + SubQueryInfo tmp = subQueryInfo; + subQueryInfo = null; + preparingQueryExpression++; + try { + query.prepare(); + } finally { + subQueryInfo = tmp; + preparingQueryExpression--; + } + } + + public boolean isPreparingQueryExpression() { + assert preparingQueryExpression >= 0; + return preparingQueryExpression != 0; + } + + @Override + public ArrayList<String> getClusterServers() { + return new ArrayList<>(); + } + + public boolean setCommitOrRollbackDisabled(boolean x) { + boolean old = commitOrRollbackDisabled; + commitOrRollbackDisabled = x; + return old; + } + + private void initVariables() { + if (variables == null) { + variables = database.newStringMap(); + } + } + + /** + * Set the value of the given variable for this session. + * + * @param name the name of the variable (may not be null) + * @param value the new value (may not be null) + */ + public void setVariable(String name, Value value) { + initVariables(); + modificationId++; + Value old; + if (value == ValueNull.INSTANCE) { + old = variables.remove(name); + } else { + // link LOB values, to make sure we have our own object + value = value.copy(database, + LobStorageFrontend.TABLE_ID_SESSION_VARIABLE); + old = variables.put(name, value); + } + if (old != null) { + // remove the old value (in case it is a lob) + old.remove(); + } + } + + /** + * Get the value of the specified user defined variable. This method always + * returns a value; it returns ValueNull.INSTANCE if the variable doesn't + * exist. + * + * @param name the variable name + * @return the value, or NULL + */ + public Value getVariable(String name) { + initVariables(); + Value v = variables.get(name); + return v == null ? ValueNull.INSTANCE : v; + } + + /** + * Get the list of variable names that are set for this session.
+ * + * @return the list of names + */ + public String[] getVariableNames() { + if (variables == null) { + return new String[0]; + } + return variables.keySet().toArray(new String[variables.size()]); + } + + /** + * Get the local temporary table if one exists with that name, or null if + * not. + * + * @param name the table name + * @return the table, or null + */ + public Table findLocalTempTable(String name) { + if (localTempTables == null) { + return null; + } + return localTempTables.get(name); + } + + public ArrayList<Table> getLocalTempTables() {
+ if (localTempTables == null) { + return New.arrayList(); + } + return new ArrayList<>(localTempTables.values()); + } + + /** + * Add a local temporary table to this session. + * + * @param table the table to add + * @throws DbException if a table with this name already exists + */ + public void addLocalTempTable(Table table) { + if (localTempTables == null) { + localTempTables = database.newStringMap(); + } + if (localTempTables.get(table.getName()) != null) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, + table.getSQL()+" AS "+table.getName()); + } + modificationId++; + localTempTables.put(table.getName(), table); + } + + /** + * Drop and remove the given local temporary table from this session. + * + * @param table the table + */ + public void removeLocalTempTable(Table table) { + // Exception thrown in org.h2.engine.Database.removeMeta if line below + // is missing with TestGeneralCommonTableQueries + database.lockMeta(this); + modificationId++; + localTempTables.remove(table.getName()); + synchronized (database) { + table.removeChildrenAndResources(this); + } + } + + /** + * Get the local temporary index if one exists with that name, or null if + * not. + * + * @param name the table name + * @return the table, or null + */ + public Index findLocalTempTableIndex(String name) { + if (localTempTableIndexes == null) { + return null; + } + return localTempTableIndexes.get(name); + } + + public HashMap<String, Index> getLocalTempTableIndexes() { + if (localTempTableIndexes == null) { + return new HashMap<>(); + } + return localTempTableIndexes; + } + + /** + * Add a local temporary index to this session.
+ * + * @param index the index to add + * @throws DbException if an index with this name already exists + */ + public void addLocalTempTableIndex(Index index) { + if (localTempTableIndexes == null) { + localTempTableIndexes = database.newStringMap(); + } + if (localTempTableIndexes.get(index.getName()) != null) { + throw DbException.get(ErrorCode.INDEX_ALREADY_EXISTS_1, + index.getSQL()); + } + localTempTableIndexes.put(index.getName(), index); + } + + /** + * Drop and remove the given local temporary index from this session. + * + * @param index the index + */ + public void removeLocalTempTableIndex(Index index) { + if (localTempTableIndexes != null) { + localTempTableIndexes.remove(index.getName()); + synchronized (database) { + index.removeChildrenAndResources(this); + } + } + } + + /** + * Get the local temporary constraint if one exists with that name, or + * null if not. + * + * @param name the constraint name + * @return the constraint, or null + */ + public Constraint findLocalTempTableConstraint(String name) { + if (localTempTableConstraints == null) { + return null; + } + return localTempTableConstraints.get(name); + } + + /** + * Get the map of constraints for all constraints on local, temporary + * tables, if any. The map's keys are the constraints' names. + * + * @return the map of constraints, or null + */ + public HashMap<String, Constraint> getLocalTempTableConstraints() { + if (localTempTableConstraints == null) { + return new HashMap<>(); + } + return localTempTableConstraints; + } + + /** + * Add a local temporary constraint to this session.
+ * + * @param constraint the constraint to add + * @throws DbException if a constraint with the same name already exists + */ + public void addLocalTempTableConstraint(Constraint constraint) { + if (localTempTableConstraints == null) { + localTempTableConstraints = database.newStringMap(); + } + String name = constraint.getName(); + if (localTempTableConstraints.get(name) != null) { + throw DbException.get(ErrorCode.CONSTRAINT_ALREADY_EXISTS_1, + constraint.getSQL()); + } + localTempTableConstraints.put(name, constraint); + } + + /** + * Drop and remove the given local temporary constraint from this session. + * + * @param constraint the constraint + */ + void removeLocalTempTableConstraint(Constraint constraint) { + if (localTempTableConstraints != null) { + localTempTableConstraints.remove(constraint.getName()); + synchronized (database) { + constraint.removeChildrenAndResources(this); + } + } + } + + @Override + public boolean getAutoCommit() { + return autoCommit; + } + + public User getUser() { + return user; + } + + @Override + public void setAutoCommit(boolean b) { + autoCommit = b; + } + + public int getLockTimeout() { + return lockTimeout; + } + + public void setLockTimeout(int lockTimeout) { + this.lockTimeout = lockTimeout; + } + + @Override + public synchronized CommandInterface prepareCommand(String sql, + int fetchSize) { + return prepareLocal(sql); + } + + /** + * Parse and prepare the given SQL statement. This method also checks the + * rights. + * + * @param sql the SQL statement + * @return the prepared statement + */ + public Prepared prepare(String sql) { + return prepare(sql, false, false); + } + + /** + * Parse and prepare the given SQL statement. + * + * @param sql the SQL statement + * @param rightsChecked true if the rights have already been checked + * @param literalsChecked true if the sql string has already been checked + * for literals (only used if ALLOW_LITERALS NONE is set). 
+ * @return the prepared statement + */ + public Prepared prepare(String sql, boolean rightsChecked, boolean literalsChecked) { + Parser parser = new Parser(this); + parser.setRightsChecked(rightsChecked); + parser.setLiteralsChecked(literalsChecked); + return parser.prepare(sql); + } + + /** + * Parse and prepare the given SQL statement. + * This method also checks if the connection has been closed. + * + * @param sql the SQL statement + * @return the prepared statement + */ + public Command prepareLocal(String sql) { + if (closed) { + throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, + "session closed"); + } + Command command; + if (queryCacheSize > 0) { + if (queryCache == null) { + queryCache = SmallLRUCache.newInstance(queryCacheSize); + modificationMetaID = database.getModificationMetaId(); + } else { + long newModificationMetaID = database.getModificationMetaId(); + if (newModificationMetaID != modificationMetaID) { + queryCache.clear(); + modificationMetaID = newModificationMetaID; + } + command = queryCache.get(sql); + if (command != null && command.canReuse()) { + command.reuse(); + return command; + } + } + } + Parser parser = new Parser(this); + try { + command = parser.prepareCommand(sql); + } finally { + // we can't reuse sub-query indexes, so just drop the whole cache + subQueryIndexCache = null; + } + command.prepareJoinBatch(); + if (queryCache != null) { + if (command.isCacheable()) { + queryCache.put(sql, command); + } + } + return command; + } + + public Database getDatabase() { + return database; + } + + @Override + public int getPowerOffCount() { + return database.getPowerOffCount(); + } + + @Override + public void setPowerOffCount(int count) { + database.setPowerOffCount(count); + } + + /** + * Commit the current transaction. If the statement was not a data + * definition statement, and if there are temporary tables that should be + * dropped or truncated at commit, this is done as well. 
+ * + * @param ddl if the statement was a data definition statement + */ + public void commit(boolean ddl) { + checkCommitRollback(); + currentTransactionName = null; + transactionStart = 0; + if (transaction != null) { + // increment the data mod count, so that other sessions + // see the changes + // TODO should not rely on locking + if (!locks.isEmpty()) { + for (Table t : locks) { + if (t instanceof MVTable) { + ((MVTable) t).commit(); + } + } + } + transaction.commit(); + transaction = null; + } + if (containsUncommitted()) { + // need to commit even if rollback is not possible + // (create/drop table and so on) + database.commit(this); + } + removeTemporaryLobs(true); + if (undoLog.size() > 0) { + // commit the rows when using MVCC + if (database.isMultiVersion()) { + ArrayList<Row> rows = New.arrayList(); + synchronized (database) { + while (undoLog.size() > 0) { + UndoLogRecord entry = undoLog.getLast(); + entry.commit(); + rows.add(entry.getRow()); + undoLog.removeLast(false); + } + for (Row r : rows) { + r.commit(); + } + } + } + undoLog.clear(); + } + if (!ddl) { + // do not clean the temp tables if the last command was a + // create/drop + cleanTempTables(false); + if (autoCommitAtTransactionEnd) { + autoCommit = true; + autoCommitAtTransactionEnd = false; + } + } + + int rows = getDatabase().getSettings().analyzeSample / 10; + if (tablesToAnalyze != null) { + for (Table table : tablesToAnalyze) { + Analyze.analyzeTable(this, table, rows, false); + } + // analyze can lock the meta + database.unlockMeta(this); + } + tablesToAnalyze = null; + + endTransaction(); + } + + private void removeTemporaryLobs(boolean onTimeout) { + if (SysProperties.CHECK2) { + if (this == getDatabase().getLobSession() + && !Thread.holdsLock(this) && !Thread.holdsLock(getDatabase())) { + throw DbException.throwInternalError(); + } + } + if (temporaryLobs != null) { + for (Value v : temporaryLobs) { + if (!v.isLinkedToTable()) { + v.remove(); + } + } + temporaryLobs.clear(); + } + if (temporaryResultLobs != null && !temporaryResultLobs.isEmpty()) {
+ long keepYoungerThan = System.nanoTime() - + TimeUnit.MILLISECONDS.toNanos(database.getSettings().lobTimeout); + while (!temporaryResultLobs.isEmpty()) { + TimeoutValue tv = temporaryResultLobs.getFirst(); + if (onTimeout && tv.created >= keepYoungerThan) { + break; + } + Value v = temporaryResultLobs.removeFirst().value; + if (!v.isLinkedToTable()) { + v.remove(); + } + } + } + } + + private void checkCommitRollback() { + if (commitOrRollbackDisabled && !locks.isEmpty()) { + throw DbException.get(ErrorCode.COMMIT_ROLLBACK_NOT_ALLOWED); + } + } + + private void endTransaction() { + if (removeLobMap != null && removeLobMap.size() > 0) { + if (database.getMvStore() == null) { + // need to flush the transaction log, because we can't unlink + // lobs if the commit record is not written + database.flush(); + } + for (Value v : removeLobMap.values()) { + v.remove(); + } + removeLobMap = null; + } + unlockAll(); + } + + /** + * Fully roll back the current transaction. + */ + public void rollback() { + checkCommitRollback(); + currentTransactionName = null; + transactionStart = 0; + boolean needCommit = false; + if (undoLog.size() > 0) { + rollbackTo(null, false); + needCommit = true; + } + if (transaction != null) { + rollbackTo(null, false); + needCommit = true; + // rollback stored the undo operations in the transaction + // committing will end the transaction + transaction.commit(); + transaction = null; + } + if (!locks.isEmpty() || needCommit) { + database.commit(this); + } + cleanTempTables(false); + if (autoCommitAtTransactionEnd) { + autoCommit = true; + autoCommitAtTransactionEnd = false; + } + endTransaction(); + } + + /** + * Partially roll back the current transaction. + * + * @param savepoint the savepoint to which should be rolled back + * @param trimToSize if the list should be trimmed + */ + public void rollbackTo(Savepoint savepoint, boolean trimToSize) { + int index = savepoint == null ? 0 : savepoint.logIndex;
+ while (undoLog.size() > index) { + UndoLogRecord entry = undoLog.getLast(); + entry.undo(this); + undoLog.removeLast(trimToSize); + } + if (transaction != null) { + long savepointId = savepoint == null ? 0 : savepoint.transactionSavepoint; + HashMap<String, MVTable> tableMap = + database.getMvStore().getTables(); + Iterator<Change> it = transaction.getChanges(savepointId); + while (it.hasNext()) { + Change c = it.next(); + MVTable t = tableMap.get(c.mapName); + if (t != null) { + long key = ((ValueLong) c.key).getLong(); + ValueArray value = (ValueArray) c.value; + short op; + Row row; + if (value == null) { + op = UndoLogRecord.INSERT; + row = t.getRow(this, key); + } else { + op = UndoLogRecord.DELETE; + row = createRow(value.getList(), Row.MEMORY_CALCULATE); + } + row.setKey(key); + UndoLogRecord log = new UndoLogRecord(t, op, row); + log.undo(this); + } + } + } + if (savepoints != null) { + String[] names = savepoints.keySet().toArray(new String[savepoints.size()]); + for (String name : names) { + Savepoint sp = savepoints.get(name); + int savepointIndex = sp.logIndex; + if (savepointIndex > index) { + savepoints.remove(name); + } + } + } + } + + @Override + public boolean hasPendingTransaction() { + return undoLog.size() > 0; + } + + /** + * Create a savepoint to allow rolling back to this state.
+ * + * @return the savepoint + */ + public Savepoint setSavepoint() { + Savepoint sp = new Savepoint(); + sp.logIndex = undoLog.size(); + if (database.getMvStore() != null) { + sp.transactionSavepoint = getStatementSavepoint(); + } + return sp; + } + + public int getId() { + return id; + } + + @Override + public void cancel() { + cancelAtNs = System.nanoTime(); + } + + @Override + public void close() { + if (!closed) { + try { + database.checkPowerOff(); + + // release any open table locks + rollback(); + + removeTemporaryLobs(false); + cleanTempTables(true); + undoLog.clear(); + // Table#removeChildrenAndResources can take the meta lock, + // and we need to unlock before we call removeSession(), which might + // want to take the meta lock using the system session. + database.unlockMeta(this); + database.removeSession(this); + } finally { + closed = true; + } + } + } + + /** + * Add a lock for the given table. The object is unlocked on commit or + * rollback. + * + * @param table the table that is locked + */ + public void addLock(Table table) { + if (SysProperties.CHECK) { + if (locks.contains(table)) { + DbException.throwInternalError(table.toString()); + } + } + locks.add(table); + } + + /** + * Add an undo log entry to this session. 
+ * + * @param table the table + * @param operation the operation type (see {@link UndoLogRecord}) + * @param row the row + */ + public void log(Table table, short operation, Row row) { + if (table.isMVStore()) { + return; + } + if (undoLogEnabled) { + UndoLogRecord log = new UndoLogRecord(table, operation, row); + // called _after_ the row was inserted successfully into the table, + // otherwise rollback will try to rollback a not-inserted row + if (SysProperties.CHECK) { + int lockMode = database.getLockMode(); + if (lockMode != Constants.LOCK_MODE_OFF && + !database.isMultiVersion()) { + TableType tableType = log.getTable().getTableType(); + if (!locks.contains(log.getTable()) + && TableType.TABLE_LINK != tableType + && TableType.EXTERNAL_TABLE_ENGINE != tableType) { + DbException.throwInternalError("" + tableType); + } + } + } + undoLog.add(log); + } else { + if (database.isMultiVersion()) { + // see also UndoLogRecord.commit + ArrayList<Index> indexes = table.getIndexes(); + for (Index index : indexes) { + index.commit(operation, row); + } + row.commit(); + } + } + } + + /** + * Unlock all read locks. This is done if the transaction isolation mode is + * READ_COMMITTED. + */ + public void unlockReadLocks() { + if (database.isMultiVersion()) { + // MVCC: keep shared locks (insert / update / delete) + return; + } + // locks is modified in the loop + for (int i = 0; i < locks.size(); i++) { + Table t = locks.get(i); + if (!t.isLockedExclusively()) { + synchronized (database) { + t.unlock(this); + locks.remove(i); + } + i--; + } + } + } + + /** + * Unlock just this table.
+ * + * @param t the table to unlock + */ + void unlock(Table t) { + locks.remove(t); + } + + private void unlockAll() { + if (SysProperties.CHECK) { + if (undoLog.size() > 0) { + DbException.throwInternalError(); + } + } + if (!locks.isEmpty()) { + // don't use the enhanced for loop to save memory + for (Table t : locks) { + t.unlock(this); + } + locks.clear(); + } + database.unlockMetaDebug(this); + savepoints = null; + sessionStateChanged = true; + } + + private void cleanTempTables(boolean closeSession) { + if (localTempTables != null && localTempTables.size() > 0) { + synchronized (database) { + Iterator<Table> it = localTempTables.values().iterator();
+ while (it.hasNext()) { + Table table = it.next(); + if (closeSession || table.getOnCommitDrop()) { + modificationId++; + table.setModified(); + it.remove(); + // Exception thrown in org.h2.engine.Database.removeMeta + // if line below is missing with TestDeadlock + database.lockMeta(this); + table.removeChildrenAndResources(this); + if (closeSession) { + // need to commit, otherwise recovery might + // ignore the table removal + database.commit(this); + } + } else if (table.getOnCommitTruncate()) { + table.truncate(this); + } + } + } + } + } + + public Random getRandom() { + if (random == null) { + random = new Random(); + } + return random; + } + + @Override + public Trace getTrace() { + if (trace != null && !closed) { + return trace; + } + String traceModuleName = "jdbc[" + id + "]"; + if (closed) { + return new TraceSystem(null).getTrace(traceModuleName); + } + trace = database.getTraceSystem().getTrace(traceModuleName); + return trace; + } + + public void setLastIdentity(Value last) { + this.lastIdentity = last; + this.lastScopeIdentity = last; + } + + public Value getLastIdentity() { + return lastIdentity; + } + + public void setLastScopeIdentity(Value last) { + this.lastScopeIdentity = last; + } + + public Value getLastScopeIdentity() { + return lastScopeIdentity; + } + + public void setLastTriggerIdentity(Value last) { + this.lastTriggerIdentity = last; + } + + public Value getLastTriggerIdentity() { + return lastTriggerIdentity; + } + + public GeneratedKeys getGeneratedKeys() { + if (generatedKeys == null) { + generatedKeys = new GeneratedKeys(); + } + return generatedKeys; + } + + /** + * Called when a log entry for this session is added. The session keeps + * track of the first entry in the transaction log that is not yet + * committed.
+ * + * @param logId the transaction log id + * @param pos the position of the log entry in the transaction log + */ + public void addLogPos(int logId, int pos) { + if (firstUncommittedLog == Session.LOG_WRITTEN) { + firstUncommittedLog = logId; + firstUncommittedPos = pos; + } + } + + public int getFirstUncommittedLog() { + return firstUncommittedLog; + } + + /** + * This method is called after the transaction log has written the commit + * entry for this session. + */ + void setAllCommitted() { + firstUncommittedLog = Session.LOG_WRITTEN; + firstUncommittedPos = Session.LOG_WRITTEN; + } + + /** + * Whether the session contains any uncommitted changes. + * + * @return true if yes + */ + public boolean containsUncommitted() { + if (database.getMvStore() != null) { + return transaction != null; + } + return firstUncommittedLog != Session.LOG_WRITTEN; + } + + /** + * Create a savepoint that is linked to the current log position. + * + * @param name the savepoint name + */ + public void addSavepoint(String name) { + if (savepoints == null) { + savepoints = database.newStringMap(); + } + Savepoint sp = new Savepoint(); + sp.logIndex = undoLog.size(); + if (database.getMvStore() != null) { + sp.transactionSavepoint = getStatementSavepoint(); + } + savepoints.put(name, sp); + } + + /** + * Undo all operations back to the log position of the given savepoint. + * + * @param name the savepoint name + */ + public void rollbackToSavepoint(String name) { + checkCommitRollback(); + currentTransactionName = null; + transactionStart = 0; + if (savepoints == null) { + throw DbException.get(ErrorCode.SAVEPOINT_IS_INVALID_1, name); + } + Savepoint savepoint = savepoints.get(name); + if (savepoint == null) { + throw DbException.get(ErrorCode.SAVEPOINT_IS_INVALID_1, name); + } + rollbackTo(savepoint, false); + } + + /** + * Prepare the given transaction. 
+ * + * @param transactionName the name of the transaction + */ + public void prepareCommit(String transactionName) { + if (containsUncommitted()) { + // need to commit even if rollback is not possible (create/drop + // table and so on) + database.prepareCommit(this, transactionName); + } + currentTransactionName = transactionName; + } + + /** + * Commit or roll back the given transaction. + * + * @param transactionName the name of the transaction + * @param commit true for commit, false for rollback + */ + public void setPreparedTransaction(String transactionName, boolean commit) { + if (currentTransactionName != null && + currentTransactionName.equals(transactionName)) { + if (commit) { + commit(false); + } else { + rollback(); + } + } else { + ArrayList<InDoubtTransaction> list = database + .getInDoubtTransactions(); + int state = commit ? InDoubtTransaction.COMMIT + : InDoubtTransaction.ROLLBACK; + boolean found = false; + if (list != null) { + for (InDoubtTransaction p: list) { + if (p.getTransactionName().equals(transactionName)) { + p.setState(state); + found = true; + break; + } + } + } + if (!found) { + throw DbException.get(ErrorCode.TRANSACTION_NOT_FOUND_1, + transactionName); + } + } + } + + @Override + public boolean isClosed() { + return closed; + } + + public void setThrottle(int throttle) { + this.throttleNs = TimeUnit.MILLISECONDS.toNanos(throttle); + } + + /** + * Wait for some time if this session is throttled (slowed down). + */ + public void throttle() { + if (currentCommandStart == 0) { + currentCommandStart = System.currentTimeMillis(); + } + if (throttleNs == 0) { + return; + } + long time = System.nanoTime(); + if (lastThrottle + TimeUnit.MILLISECONDS.toNanos(Constants.THROTTLE_DELAY) > time) { + return; + } + lastThrottle = time + throttleNs; + try { + Thread.sleep(TimeUnit.NANOSECONDS.toMillis(throttleNs)); + } catch (Exception e) { + // ignore InterruptedException + } + } + + /** + * Set the current command of this session.
This is done just before + * executing the statement. + * + * @param command the command + * @param generatedKeysRequest + * {@code false} if generated keys are not needed, {@code true} if + * generated keys should be configured automatically, {@code int[]} + * to specify column indices to return generated keys from, or + * {@code String[]} to specify column names to return generated keys + * from + */ + public void setCurrentCommand(Command command, Object generatedKeysRequest) { + this.currentCommand = command; + // Preserve generated keys in case of a new query due to possible nested + // queries in update + if (command != null && !command.isQuery()) { + getGeneratedKeys().clear(generatedKeysRequest); + } + if (queryTimeout > 0 && command != null) { + currentCommandStart = System.currentTimeMillis(); + long now = System.nanoTime(); + cancelAtNs = now + TimeUnit.MILLISECONDS.toNanos(queryTimeout); + } + } + + /** + * Check if the current transaction is canceled by calling + * Statement.cancel() or because a session timeout was set and expired. + * + * @throws DbException if the transaction is canceled + */ + public void checkCanceled() { + throttle(); + if (cancelAtNs == 0) { + return; + } + long time = System.nanoTime(); + if (time >= cancelAtNs) { + cancelAtNs = 0; + throw DbException.get(ErrorCode.STATEMENT_WAS_CANCELED); + } + } + + /** + * Get the cancel time. 
+ * + * @return the time or 0 if not set + */ + public long getCancel() { + return cancelAtNs; + } + + public Command getCurrentCommand() { + return currentCommand; + } + + public long getCurrentCommandStart() { + return currentCommandStart; + } + + public boolean getAllowLiterals() { + return allowLiterals; + } + + public void setAllowLiterals(boolean b) { + this.allowLiterals = b; + } + + public void setCurrentSchema(Schema schema) { + modificationId++; + this.currentSchemaName = schema.getName(); + } + + @Override + public String getCurrentSchemaName() { + return currentSchemaName; + } + + @Override + public void setCurrentSchemaName(String schemaName) { + Schema schema = database.getSchema(schemaName); + setCurrentSchema(schema); + } + + /** + * Create an internal connection. This connection is used when initializing + * triggers, and when calling user defined functions. + * + * @param columnList if the url should be 'jdbc:columnlist:connection' + * @return the internal connection + */ + public JdbcConnection createConnection(boolean columnList) { + String url; + if (columnList) { + url = Constants.CONN_URL_COLUMNLIST; + } else { + url = Constants.CONN_URL_INTERNAL; + } + return new JdbcConnection(this, getUser().getName(), url); + } + + @Override + public DataHandler getDataHandler() { + return database; + } + + /** + * Remember that the given LOB value must be removed at commit. + * + * @param v the value + */ + public void removeAtCommit(Value v) { + if (SysProperties.CHECK && !v.isLinkedToTable()) { + DbException.throwInternalError(v.toString()); + } + if (removeLobMap == null) { + removeLobMap = new HashMap<>(); + } + removeLobMap.put(v.toString(), v); + } + + /** + * Do not remove this LOB value at commit any longer. + * + * @param v the value + */ + public void removeAtCommitStop(Value v) { + if (removeLobMap != null) { + removeLobMap.remove(v.toString()); + } + } + + /** + * Get the next system generated identifiers. 
The identifier returned does + * not occur within the given SQL statement. + * + * @param sql the SQL statement + * @return the new identifier + */ + public String getNextSystemIdentifier(String sql) { + String identifier; + do { + identifier = SYSTEM_IDENTIFIER_PREFIX + systemIdentifier++; + } while (sql.contains(identifier)); + return identifier; + } + + /** + * Add a procedure to this session. + * + * @param procedure the procedure to add + */ + public void addProcedure(Procedure procedure) { + if (procedures == null) { + procedures = database.newStringMap(); + } + procedures.put(procedure.getName(), procedure); + } + + /** + * Remove a procedure from this session. + * + * @param name the name of the procedure to remove + */ + public void removeProcedure(String name) { + if (procedures != null) { + procedures.remove(name); + } + } + + /** + * Get the procedure with the given name, or null + * if none exists. + * + * @param name the procedure name + * @return the procedure or null + */ + public Procedure getProcedure(String name) { + if (procedures == null) { + return null; + } + return procedures.get(name); + } + + public void setSchemaSearchPath(String[] schemas) { + modificationId++; + this.schemaSearchPath = schemas; + } + + public String[] getSchemaSearchPath() { + return schemaSearchPath; + } + + @Override + public int hashCode() { + return serialId; + } + + @Override + public String toString() { + return "#" + serialId + " (user: " + user.getName() + ")"; + } + + public void setUndoLogEnabled(boolean b) { + this.undoLogEnabled = b; + } + + public void setRedoLogBinary(boolean b) { + this.redoLogBinary = b; + } + + public boolean isUndoLogEnabled() { + return undoLogEnabled; + } + + /** + * Begin a transaction. 
+ */ + public void begin() { + autoCommitAtTransactionEnd = true; + autoCommit = false; + } + + public long getSessionStart() { + return sessionStart; + } + + public long getTransactionStart() { + if (transactionStart == 0) { + transactionStart = System.currentTimeMillis(); + } + return transactionStart; + } + + public Table[] getLocks() { + // copy the data without synchronizing + ArrayList<Table> copy = New.arrayList();
+ for (Table lock : locks) { + try { + copy.add(lock); + } catch (Exception e) { + // ignore + break; + } + } + return copy.toArray(new Table[0]); + } + + /** + * Wait if the exclusive mode has been enabled for another session. This + * method returns as soon as the exclusive mode has been disabled. + */ + public void waitIfExclusiveModeEnabled() { + // Even in exclusive mode, we have to let the LOB session proceed, or we + // will get deadlocks. + if (database.getLobSession() == this) { + return; + } + while (true) { + Session exclusive = database.getExclusiveSession(); + if (exclusive == null || exclusive == this) { + break; + } + if (Thread.holdsLock(exclusive)) { + // if another connection is used within the connection + break; + } + try { + Thread.sleep(100); + } catch (InterruptedException e) { + // ignore + } + } + } + + /** + * Get the view cache for this session. There are two caches: the subquery + * cache (which is only used for a single query, has no bounds, and is + * cleared after use), and the cache for regular views. + * + * @param subQuery true to get the subquery cache + * @return the view cache + */ + public Map<Object, ViewIndex> getViewIndexCache(boolean subQuery) { + if (subQuery) { + // for sub-queries we don't need to use LRU because the cache should + // not grow too large for a single query (we drop the whole cache in + // the end of prepareLocal) + if (subQueryIndexCache == null) { + subQueryIndexCache = new HashMap<>(); + } + return subQueryIndexCache; + } + SmallLRUCache<Object, ViewIndex> cache = viewIndexCache; + if (cache == null) { + viewIndexCache = cache = SmallLRUCache.newInstance(Constants.VIEW_INDEX_CACHE_SIZE); + } + return cache; + } + + /** + * Remember the result set and close it as soon as the transaction is + * committed (if it needs to be closed). This is done to delete temporary + * files as soon as possible, and free object ids of temporary tables.
+     *
+     * @param result the temporary result set
+     */
+    public void addTemporaryResult(ResultInterface result) {
+        if (!result.needToClose()) {
+            return;
+        }
+        if (temporaryResults == null) {
+            temporaryResults = new HashSet<>();
+        }
+        if (temporaryResults.size() < 100) {
+            // reference at most 100 result sets to avoid memory problems
+            temporaryResults.add(result);
+        }
+    }
+
+    private void closeTemporaryResults() {
+        if (temporaryResults != null) {
+            for (ResultInterface result : temporaryResults) {
+                result.close();
+            }
+            temporaryResults = null;
+        }
+    }
+
+    public void setQueryTimeout(int queryTimeout) {
+        int max = database.getSettings().maxQueryTimeout;
+        if (max != 0 && (max < queryTimeout || queryTimeout == 0)) {
+            // the value must be at most max
+            queryTimeout = max;
+        }
+        this.queryTimeout = queryTimeout;
+        // must reset the cancel time here,
+        // otherwise the old value is still used
+        this.cancelAtNs = 0;
+    }
+
+    public int getQueryTimeout() {
+        return queryTimeout;
+    }
+
+    /**
+     * Set the table this session is waiting for, and the thread that is
+     * waiting.
+ * + * @param waitForLock the table + * @param waitForLockThread the current thread (the one that is waiting) + */ + public void setWaitForLock(Table waitForLock, Thread waitForLockThread) { + this.waitForLock = waitForLock; + this.waitForLockThread = waitForLockThread; + } + + public Table getWaitForLock() { + return waitForLock; + } + + public Thread getWaitForLockThread() { + return waitForLockThread; + } + + public int getModificationId() { + return modificationId; + } + + @Override + public boolean isReconnectNeeded(boolean write) { + while (true) { + boolean reconnect = database.isReconnectNeeded(); + if (reconnect) { + return true; + } + if (write) { + if (database.beforeWriting()) { + return false; + } + } else { + return false; + } + } + } + + @Override + public void afterWriting() { + database.afterWriting(); + } + + @Override + public SessionInterface reconnect(boolean write) { + readSessionState(); + close(); + Session newSession = Engine.getInstance().createSession(connectionInfo); + newSession.sessionState = sessionState; + newSession.recreateSessionState(); + if (write) { + while (!newSession.database.beforeWriting()) { + // wait until we are allowed to write + } + } + return newSession; + } + + public void setConnectionInfo(ConnectionInfo ci) { + connectionInfo = ci; + } + + public Value getTransactionId() { + if (database.getMvStore() != null) { + if (transaction == null) { + return ValueNull.INSTANCE; + } + return ValueString.get(Long.toString(getTransaction().getId())); + } + if (!database.isPersistent()) { + return ValueNull.INSTANCE; + } + if (undoLog.size() == 0) { + return ValueNull.INSTANCE; + } + return ValueString.get(firstUncommittedLog + "-" + firstUncommittedPos + + "-" + id); + } + + /** + * Get the next object id. 
+     *
+     * @return the next object id
+     */
+    public int nextObjectId() {
+        return objectId++;
+    }
+
+    public boolean isRedoLogBinaryEnabled() {
+        return redoLogBinary;
+    }
+
+    /**
+     * Get the transaction to use for this session.
+     *
+     * @return the transaction
+     */
+    public Transaction getTransaction() {
+        if (transaction == null) {
+            if (database.getMvStore().getStore().isClosed()) {
+                database.shutdownImmediately();
+                throw DbException.get(ErrorCode.DATABASE_IS_CLOSED);
+            }
+            transaction = database.getMvStore().getTransactionStore().begin();
+            startStatement = -1;
+        }
+        return transaction;
+    }
+
+    public long getStatementSavepoint() {
+        if (startStatement == -1) {
+            startStatement = getTransaction().setSavepoint();
+        }
+        return startStatement;
+    }
+
+    /**
+     * Start a new statement within a transaction.
+     */
+    public void startStatementWithinTransaction() {
+        startStatement = -1;
+    }
+
+    /**
+     * Mark the statement as completed. This also closes all temporary result
+     * sets, and deletes all temporary files held by the result sets.
+     */
+    public void endStatement() {
+        startStatement = -1;
+        closeTemporaryResults();
+    }
+
+    /**
+     * Clear the view cache for this session.
+     */
+    public void clearViewIndexCache() {
+        viewIndexCache = null;
+    }
+
+    @Override
+    public void addTemporaryLob(Value v) {
+        if (v.getType() != Value.CLOB && v.getType() != Value.BLOB) {
+            return;
+        }
+        if (v.getTableId() == LobStorageFrontend.TABLE_RESULT ||
+                v.getTableId() == LobStorageFrontend.TABLE_TEMP) {
+            if (temporaryResultLobs == null) {
+                temporaryResultLobs = new LinkedList<>();
+            }
+            temporaryResultLobs.add(new TimeoutValue(v));
+        } else {
+            if (temporaryLobs == null) {
+                temporaryLobs = new ArrayList<>();
+            }
+            temporaryLobs.add(v);
+        }
+    }
+
+    @Override
+    public boolean isRemote() {
+        return false;
+    }
+
+    /**
+     * Mark that the given table needs to be analyzed on commit.
+     *
+     * @param table the table
+     */
+    public void markTableForAnalyze(Table table) {
+        if (tablesToAnalyze == null) {
+            tablesToAnalyze = new HashSet<>();
+        }
+        tablesToAnalyze.add(table);
+    }
+
+    /**
+     * Represents a savepoint (a position in a transaction to which one can
+     * roll back).
+     */
+    public static class Savepoint {
+
+        /**
+         * The undo log index.
+         */
+        int logIndex;
+
+        /**
+         * The transaction savepoint id.
+         */
+        long transactionSavepoint;
+    }
+
+    /**
+     * An object with a timeout.
+     */
+    public static class TimeoutValue {
+
+        /**
+         * The time when this object was created.
+         */
+        final long created = System.nanoTime();
+
+        /**
+         * The value.
+         */
+        final Value value;
+
+        TimeoutValue(Value v) {
+            this.value = v;
+        }
+
+    }
+
+    public ColumnNamerConfiguration getColumnNamerConfiguration() {
+        return columnNamerConfiguration;
+    }
+
+    public void setColumnNamerConfiguration(ColumnNamerConfiguration columnNamerConfiguration) {
+        this.columnNamerConfiguration = columnNamerConfiguration;
+    }
+
+    @Override
+    public boolean isSupportsGeneratedKeys() {
+        return true;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/engine/SessionFactory.java b/modules/h2/src/main/java/org/h2/engine/SessionFactory.java
new file mode 100644
index 0000000000000..fb18c764ee42f
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/engine/SessionFactory.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.engine;
+
+import java.sql.SQLException;
+
+/**
+ * A class that implements this interface can create new database sessions. This
+ * exists so that the JDBC layer (the client) can be compiled without a
+ * dependency on the core database engine.
+ */
+interface SessionFactory {
+
+    /**
+     * Create a new session.
+     *
+     * @param ci the connection parameters
+     * @return the new session
+     */
+    SessionInterface createSession(ConnectionInfo ci) throws SQLException;
+
+}
diff --git a/modules/h2/src/main/java/org/h2/engine/SessionInterface.java b/modules/h2/src/main/java/org/h2/engine/SessionInterface.java
new file mode 100644
index 0000000000000..5547c8264ed72
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/engine/SessionInterface.java
@@ -0,0 +1,165 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.engine;
+
+import java.io.Closeable;
+import java.util.ArrayList;
+import org.h2.command.CommandInterface;
+import org.h2.message.Trace;
+import org.h2.store.DataHandler;
+import org.h2.value.Value;
+
+/**
+ * A local or remote session. A session represents a database connection.
+ */
+public interface SessionInterface extends Closeable {
+
+    /**
+     * Get the list of the cluster servers for this session.
+     *
+     * @return A list of "ip:port" strings for the cluster servers in this
+     *         session.
+     */
+    ArrayList<String> getClusterServers();
+
+    /**
+     * Parse a command and prepare it for execution.
+     *
+     * @param sql the SQL statement
+     * @param fetchSize the number of rows to fetch in one step
+     * @return the prepared command
+     */
+    CommandInterface prepareCommand(String sql, int fetchSize);
+
+    /**
+     * Roll back pending transactions and close the session.
+     */
+    @Override
+    void close();
+
+    /**
+     * Get the trace object.
+     *
+     * @return the trace object
+     */
+    Trace getTrace();
+
+    /**
+     * Check if close was called.
+     *
+     * @return if the session has been closed
+     */
+    boolean isClosed();
+
+    /**
+     * Get the number of disk operations before power failure is simulated.
+     * This is used for testing.
If not set, 0 is returned + * + * @return the number of operations, or 0 + */ + int getPowerOffCount(); + + /** + * Set the number of disk operations before power failure is simulated. + * To disable the countdown, use 0. + * + * @param i the number of operations + */ + void setPowerOffCount(int i); + + /** + * Get the data handler object. + * + * @return the data handler + */ + DataHandler getDataHandler(); + + /** + * Check whether this session has a pending transaction. + * + * @return true if it has + */ + boolean hasPendingTransaction(); + + /** + * Cancel the current or next command (called when closing a connection). + */ + void cancel(); + + /** + * Check if the database changed and therefore reconnecting is required. + * + * @param write if the next operation may be writing + * @return true if reconnecting is required + */ + boolean isReconnectNeeded(boolean write); + + /** + * Close the connection and open a new connection. + * + * @param write if the next operation may be writing + * @return the new connection + */ + SessionInterface reconnect(boolean write); + + /** + * Called after writing has ended. It needs to be called after + * isReconnectNeeded(true) returned false. + */ + void afterWriting(); + + /** + * Check if this session is in auto-commit mode. + * + * @return true if the session is in auto-commit mode + */ + boolean getAutoCommit(); + + /** + * Set the auto-commit mode. This call doesn't commit the current + * transaction. + * + * @param autoCommit the new value + */ + void setAutoCommit(boolean autoCommit); + + /** + * Add a temporary LOB, which is closed when the session commits. + * + * @param v the value + */ + void addTemporaryLob(Value v); + + /** + * Check if this session is remote or embedded. + * + * @return true if this session is remote + */ + boolean isRemote(); + + /** + * Set current schema. + * + * @param schema the schema name + */ + void setCurrentSchemaName(String schema); + + /** + * Get current schema. 
+     *
+     * @return the current schema name
+     */
+    String getCurrentSchemaName();
+
+    /**
+     * Returns whether this session supports generated keys.
+     *
+     * @return {@code true} if generated keys are supported, {@code false} if only
+     *         {@code SCOPE_IDENTITY()} is supported
+     */
+    boolean isSupportsGeneratedKeys();
+
+}
diff --git a/modules/h2/src/main/java/org/h2/engine/SessionRemote.java b/modules/h2/src/main/java/org/h2/engine/SessionRemote.java
new file mode 100644
index 0000000000000..e155e8e17d02a
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/engine/SessionRemote.java
@@ -0,0 +1,875 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.engine;
+
+import java.io.IOException;
+import java.net.Socket;
+import java.util.ArrayList;
+import org.h2.api.DatabaseEventListener;
+import org.h2.api.ErrorCode;
+import org.h2.api.JavaObjectSerializer;
+import org.h2.command.CommandInterface;
+import org.h2.command.CommandRemote;
+import org.h2.command.dml.SetTypes;
+import org.h2.jdbc.JdbcSQLException;
+import org.h2.message.DbException;
+import org.h2.message.Trace;
+import org.h2.message.TraceSystem;
+import org.h2.result.ResultInterface;
+import org.h2.store.DataHandler;
+import org.h2.store.FileStore;
+import org.h2.store.LobStorageFrontend;
+import org.h2.store.LobStorageInterface;
+import org.h2.store.fs.FileUtils;
+import org.h2.util.JdbcUtils;
+import org.h2.util.MathUtils;
+import org.h2.util.NetUtils;
+import org.h2.util.New;
+import org.h2.util.SmallLRUCache;
+import org.h2.util.StringUtils;
+import org.h2.util.TempFileDeleter;
+import org.h2.value.CompareMode;
+import org.h2.value.Transfer;
+import org.h2.value.Value;
+
+/**
+ * The client side part of a session when using the server mode. This object
+ * communicates with a Session on the server side.
+ */
+public class SessionRemote extends SessionWithState implements DataHandler {
+
+    public static final int SESSION_PREPARE = 0;
+    public static final int SESSION_CLOSE = 1;
+    public static final int COMMAND_EXECUTE_QUERY = 2;
+    public static final int COMMAND_EXECUTE_UPDATE = 3;
+    public static final int COMMAND_CLOSE = 4;
+    public static final int RESULT_FETCH_ROWS = 5;
+    public static final int RESULT_RESET = 6;
+    public static final int RESULT_CLOSE = 7;
+    public static final int COMMAND_COMMIT = 8;
+    public static final int CHANGE_ID = 9;
+    public static final int COMMAND_GET_META_DATA = 10;
+    public static final int SESSION_PREPARE_READ_PARAMS = 11;
+    public static final int SESSION_SET_ID = 12;
+    public static final int SESSION_CANCEL_STATEMENT = 13;
+    public static final int SESSION_CHECK_KEY = 14;
+    public static final int SESSION_SET_AUTOCOMMIT = 15;
+    public static final int SESSION_HAS_PENDING_TRANSACTION = 16;
+    public static final int LOB_READ = 17;
+    public static final int SESSION_PREPARE_READ_PARAMS2 = 18;
+
+    public static final int STATUS_ERROR = 0;
+    public static final int STATUS_OK = 1;
+    public static final int STATUS_CLOSED = 2;
+    public static final int STATUS_OK_STATE_CHANGED = 3;
+
+    private static SessionFactory sessionFactory;
+
+    private TraceSystem traceSystem;
+    private Trace trace;
+    private ArrayList<Transfer> transferList = New.arrayList();
+    private int nextId;
+    private boolean autoCommit = true;
+    private CommandInterface autoCommitFalse, autoCommitTrue;
+    private ConnectionInfo connectionInfo;
+    private String databaseName;
+    private String cipher;
+    private byte[] fileEncryptionKey;
+    private final Object lobSyncObject = new Object();
+    private String sessionId;
+    private int clientVersion;
+    private boolean autoReconnect;
+    private int lastReconnect;
+    private SessionInterface embedded;
+    private DatabaseEventListener eventListener;
+    private LobStorageFrontend lobStorage;
+    private boolean cluster;
+    private TempFileDeleter
tempFileDeleter;
+
+    private JavaObjectSerializer javaObjectSerializer;
+    private volatile boolean javaObjectSerializerInitialized;
+
+    private final CompareMode compareMode = CompareMode.getInstance(null, 0);
+
+    public SessionRemote(ConnectionInfo ci) {
+        this.connectionInfo = ci;
+    }
+
+    @Override
+    public ArrayList<String> getClusterServers() {
+        ArrayList<String> serverList = new ArrayList<>();
+        for (Transfer transfer : transferList) {
+            serverList.add(transfer.getSocket().getInetAddress().
+                    getHostAddress() + ":" +
+                    transfer.getSocket().getPort());
+        }
+        return serverList;
+    }
+
+    private Transfer initTransfer(ConnectionInfo ci, String db, String server)
+            throws IOException {
+        Socket socket = NetUtils.createSocket(server,
+                Constants.DEFAULT_TCP_PORT, ci.isSSL());
+        Transfer trans = new Transfer(this, socket);
+        trans.setSSL(ci.isSSL());
+        trans.init();
+        trans.writeInt(Constants.TCP_PROTOCOL_VERSION_MIN_SUPPORTED);
+        trans.writeInt(Constants.TCP_PROTOCOL_VERSION_MAX_SUPPORTED);
+        trans.writeString(db);
+        trans.writeString(ci.getOriginalURL());
+        trans.writeString(ci.getUserName());
+        trans.writeBytes(ci.getUserPasswordHash());
+        trans.writeBytes(ci.getFilePasswordHash());
+        String[] keys = ci.getKeys();
+        trans.writeInt(keys.length);
+        for (String key : keys) {
+            trans.writeString(key).writeString(ci.getProperty(key));
+        }
+        try {
+            done(trans);
+            clientVersion = trans.readInt();
+            trans.setVersion(clientVersion);
+            if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_14) {
+                if (ci.getFileEncryptionKey() != null) {
+                    trans.writeBytes(ci.getFileEncryptionKey());
+                }
+            }
+            trans.writeInt(SessionRemote.SESSION_SET_ID);
+            trans.writeString(sessionId);
+            done(trans);
+            if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_15) {
+                autoCommit = trans.readBoolean();
+            } else {
+                autoCommit = true;
+            }
+            return trans;
+        } catch (DbException e) {
+            trans.close();
+            throw e;
+        }
+    }
+
+    @Override
+    public boolean hasPendingTransaction() {
+        if (clientVersion <
Constants.TCP_PROTOCOL_VERSION_10) { + return true; + } + for (int i = 0, count = 0; i < transferList.size(); i++) { + Transfer transfer = transferList.get(i); + try { + traceOperation("SESSION_HAS_PENDING_TRANSACTION", 0); + transfer.writeInt( + SessionRemote.SESSION_HAS_PENDING_TRANSACTION); + done(transfer); + return transfer.readInt() != 0; + } catch (IOException e) { + removeServer(e, i--, ++count); + } + } + return true; + } + + @Override + public void cancel() { + // this method is called when closing the connection + // the statement that is currently running is not canceled in this case + // however Statement.cancel is supported + } + + /** + * Cancel the statement with the given id. + * + * @param id the statement id + */ + public void cancelStatement(int id) { + for (Transfer transfer : transferList) { + try { + Transfer trans = transfer.openNewConnection(); + trans.init(); + trans.writeInt(clientVersion); + trans.writeInt(clientVersion); + trans.writeString(null); + trans.writeString(null); + trans.writeString(sessionId); + trans.writeInt(SessionRemote.SESSION_CANCEL_STATEMENT); + trans.writeInt(id); + trans.close(); + } catch (IOException e) { + trace.debug(e, "could not cancel statement"); + } + } + } + + private void checkClusterDisableAutoCommit(String serverList) { + if (autoCommit && transferList.size() > 1) { + setAutoCommitSend(false); + CommandInterface c = prepareCommand( + "SET CLUSTER " + serverList, Integer.MAX_VALUE); + // this will set autoCommit to false + c.executeUpdate(false); + // so we need to switch it on + autoCommit = true; + cluster = true; + } + } + + public int getClientVersion() { + return clientVersion; + } + + @Override + public boolean getAutoCommit() { + return autoCommit; + } + + @Override + public void setAutoCommit(boolean autoCommit) { + if (!cluster) { + setAutoCommitSend(autoCommit); + } + this.autoCommit = autoCommit; + } + + public void setAutoCommitFromServer(boolean autoCommit) { + if (cluster) { + if 
(autoCommit) { + // the user executed SET AUTOCOMMIT TRUE + setAutoCommitSend(false); + this.autoCommit = true; + } + } else { + this.autoCommit = autoCommit; + } + } + + private synchronized void setAutoCommitSend(boolean autoCommit) { + if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_8) { + for (int i = 0, count = 0; i < transferList.size(); i++) { + Transfer transfer = transferList.get(i); + try { + traceOperation("SESSION_SET_AUTOCOMMIT", autoCommit ? 1 : 0); + transfer.writeInt(SessionRemote.SESSION_SET_AUTOCOMMIT). + writeBoolean(autoCommit); + done(transfer); + } catch (IOException e) { + removeServer(e, i--, ++count); + } + } + } else { + if (autoCommit) { + if (autoCommitTrue == null) { + autoCommitTrue = prepareCommand( + "SET AUTOCOMMIT TRUE", Integer.MAX_VALUE); + } + autoCommitTrue.executeUpdate(false); + } else { + if (autoCommitFalse == null) { + autoCommitFalse = prepareCommand( + "SET AUTOCOMMIT FALSE", Integer.MAX_VALUE); + } + autoCommitFalse.executeUpdate(false); + } + } + } + + /** + * Calls COMMIT if the session is in cluster mode. 
+ */ + public void autoCommitIfCluster() { + if (autoCommit && cluster) { + // server side auto commit is off because of race conditions + // (update set id=1 where id=0, but update set id=2 where id=0 is + // faster) + for (int i = 0, count = 0; i < transferList.size(); i++) { + Transfer transfer = transferList.get(i); + try { + traceOperation("COMMAND_COMMIT", 0); + transfer.writeInt(SessionRemote.COMMAND_COMMIT); + done(transfer); + } catch (IOException e) { + removeServer(e, i--, ++count); + } + } + } + } + + private String getFilePrefix(String dir) { + StringBuilder buff = new StringBuilder(dir); + buff.append('/'); + for (int i = 0; i < databaseName.length(); i++) { + char ch = databaseName.charAt(i); + if (Character.isLetterOrDigit(ch)) { + buff.append(ch); + } else { + buff.append('_'); + } + } + return buff.toString(); + } + + @Override + public int getPowerOffCount() { + return 0; + } + + @Override + public void setPowerOffCount(int count) { + throw DbException.getUnsupportedException("remote"); + } + + /** + * Open a new (remote or embedded) session. 
+ * + * @param openNew whether to open a new session in any case + * @return the session + */ + public SessionInterface connectEmbeddedOrServer(boolean openNew) { + ConnectionInfo ci = connectionInfo; + if (ci.isRemote()) { + connectServer(ci); + return this; + } + // create the session using reflection, + // so that the JDBC layer can be compiled without it + boolean autoServerMode = ci.getProperty("AUTO_SERVER", false); + ConnectionInfo backup = null; + try { + if (autoServerMode) { + backup = ci.clone(); + connectionInfo = ci.clone(); + } + if (openNew) { + ci.setProperty("OPEN_NEW", "true"); + } + if (sessionFactory == null) { + sessionFactory = (SessionFactory) Class.forName( + "org.h2.engine.Engine").getMethod("getInstance").invoke(null); + } + return sessionFactory.createSession(ci); + } catch (Exception re) { + DbException e = DbException.convert(re); + if (e.getErrorCode() == ErrorCode.DATABASE_ALREADY_OPEN_1) { + if (autoServerMode) { + String serverKey = ((JdbcSQLException) e.getSQLException()). 
+ getSQL(); + if (serverKey != null) { + backup.setServerKey(serverKey); + // OPEN_NEW must be removed now, otherwise + // opening a session with AUTO_SERVER fails + // if another connection is already open + backup.removeProperty("OPEN_NEW", null); + connectServer(backup); + return this; + } + } + } + throw e; + } + } + + private void connectServer(ConnectionInfo ci) { + String name = ci.getName(); + if (name.startsWith("//")) { + name = name.substring("//".length()); + } + int idx = name.indexOf('/'); + if (idx < 0) { + throw ci.getFormatException(); + } + databaseName = name.substring(idx + 1); + String server = name.substring(0, idx); + traceSystem = new TraceSystem(null); + String traceLevelFile = ci.getProperty( + SetTypes.TRACE_LEVEL_FILE, null); + if (traceLevelFile != null) { + int level = Integer.parseInt(traceLevelFile); + String prefix = getFilePrefix( + SysProperties.CLIENT_TRACE_DIRECTORY); + try { + traceSystem.setLevelFile(level); + if (level > 0 && level < 4) { + String file = FileUtils.createTempFile(prefix, + Constants.SUFFIX_TRACE_FILE, false, false); + traceSystem.setFileName(file); + } + } catch (IOException e) { + throw DbException.convertIOException(e, prefix); + } + } + String traceLevelSystemOut = ci.getProperty( + SetTypes.TRACE_LEVEL_SYSTEM_OUT, null); + if (traceLevelSystemOut != null) { + int level = Integer.parseInt(traceLevelSystemOut); + traceSystem.setLevelSystemOut(level); + } + trace = traceSystem.getTrace(Trace.JDBC); + String serverList = null; + if (server.indexOf(',') >= 0) { + serverList = StringUtils.quoteStringSQL(server); + ci.setProperty("CLUSTER", Constants.CLUSTERING_ENABLED); + } + autoReconnect = ci.getProperty("AUTO_RECONNECT", false); + // AUTO_SERVER implies AUTO_RECONNECT + boolean autoServer = ci.getProperty("AUTO_SERVER", false); + if (autoServer && serverList != null) { + throw DbException + .getUnsupportedException("autoServer && serverList != null"); + } + autoReconnect |= autoServer; + if (autoReconnect) { 
+            String className = ci.getProperty("DATABASE_EVENT_LISTENER");
+            if (className != null) {
+                className = StringUtils.trim(className, true, true, "'");
+                try {
+                    eventListener = (DatabaseEventListener) JdbcUtils
+                            .loadUserClass(className).newInstance();
+                } catch (Throwable e) {
+                    throw DbException.convert(e);
+                }
+            }
+        }
+        cipher = ci.getProperty("CIPHER");
+        if (cipher != null) {
+            fileEncryptionKey = MathUtils.secureRandomBytes(32);
+        }
+        String[] servers = StringUtils.arraySplit(server, ',', true);
+        int len = servers.length;
+        transferList.clear();
+        sessionId = StringUtils.convertBytesToHex(MathUtils.secureRandomBytes(32));
+        // TODO cluster: support more than 2 connections
+        boolean switchOffCluster = false;
+        try {
+            for (String s : servers) {
+                try {
+                    Transfer trans = initTransfer(ci, databaseName, s);
+                    transferList.add(trans);
+                } catch (IOException e) {
+                    if (len == 1) {
+                        throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, e, e + ": " + s);
+                    }
+                    switchOffCluster = true;
+                }
+            }
+            checkClosed();
+            if (switchOffCluster) {
+                switchOffCluster();
+            }
+            checkClusterDisableAutoCommit(serverList);
+        } catch (DbException e) {
+            traceSystem.close();
+            throw e;
+        }
+    }
+
+    private void switchOffCluster() {
+        CommandInterface ci = prepareCommand("SET CLUSTER ''", Integer.MAX_VALUE);
+        ci.executeUpdate(false);
+    }
+
+    /**
+     * Remove a server from the list of cluster nodes and disable the cluster
+     * mode.
+     *
+     * @param e the exception (used for debugging)
+     * @param i the index of the server to remove
+     * @param count the retry count index
+     */
+    public void removeServer(IOException e, int i, int count) {
+        trace.debug(e, "removing server because of exception");
+        transferList.remove(i);
+        if (transferList.isEmpty() && autoReconnect(count)) {
+            return;
+        }
+        checkClosed();
+        switchOffCluster();
+    }
+
+    @Override
+    public synchronized CommandInterface prepareCommand(String sql, int fetchSize) {
+        checkClosed();
+        return new CommandRemote(this, transferList, sql, fetchSize);
+    }
+
+    /**
+     * Automatically re-connect if necessary and if configured to do so.
+     *
+     * @param count the retry count index
+     * @return true if reconnected
+     */
+    private boolean autoReconnect(int count) {
+        if (!isClosed()) {
+            return false;
+        }
+        if (!autoReconnect) {
+            return false;
+        }
+        if (!cluster && !autoCommit) {
+            return false;
+        }
+        if (count > SysProperties.MAX_RECONNECT) {
+            return false;
+        }
+        lastReconnect++;
+        while (true) {
+            try {
+                embedded = connectEmbeddedOrServer(false);
+                break;
+            } catch (DbException e) {
+                if (e.getErrorCode() != ErrorCode.DATABASE_IS_IN_EXCLUSIVE_MODE) {
+                    throw e;
+                }
+                // exclusive mode: re-try endlessly
+                try {
+                    Thread.sleep(500);
+                } catch (Exception e2) {
+                    // ignore
+                }
+            }
+        }
+        if (embedded == this) {
+            // connected to a server somewhere else
+            embedded = null;
+        } else {
+            // opened an embedded connection now -
+            // must connect to this database in server mode
+            // unfortunately
+            connectEmbeddedOrServer(true);
+        }
+        recreateSessionState();
+        if (eventListener != null) {
+            eventListener.setProgress(DatabaseEventListener.STATE_RECONNECTED,
+                    databaseName, count, SysProperties.MAX_RECONNECT);
+        }
+        return true;
+    }
+
+    /**
+     * Check if this session is closed and throw an exception if so.
+ * + * @throws DbException if the session is closed + */ + public void checkClosed() { + if (isClosed()) { + throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, "session closed"); + } + } + + @Override + public void close() { + RuntimeException closeError = null; + if (transferList != null) { + synchronized (this) { + for (Transfer transfer : transferList) { + try { + traceOperation("SESSION_CLOSE", 0); + transfer.writeInt(SessionRemote.SESSION_CLOSE); + done(transfer); + transfer.close(); + } catch (RuntimeException e) { + trace.error(e, "close"); + closeError = e; + } catch (Exception e) { + trace.error(e, "close"); + } + } + } + transferList = null; + } + traceSystem.close(); + if (embedded != null) { + embedded.close(); + embedded = null; + } + if (closeError != null) { + throw closeError; + } + } + + @Override + public Trace getTrace() { + return traceSystem.getTrace(Trace.JDBC); + } + + public int getNextId() { + return nextId++; + } + + public int getCurrentId() { + return nextId; + } + + /** + * Called to flush the output after data has been sent to the server and + * just before receiving data. This method also reads the status code from + * the server and throws any exception the server sent. 
+ * + * @param transfer the transfer object + * @throws DbException if the server sent an exception + * @throws IOException if there is a communication problem between client + * and server + */ + public void done(Transfer transfer) throws IOException { + transfer.flush(); + int status = transfer.readInt(); + if (status == STATUS_ERROR) { + String sqlstate = transfer.readString(); + String message = transfer.readString(); + String sql = transfer.readString(); + int errorCode = transfer.readInt(); + String stackTrace = transfer.readString(); + JdbcSQLException s = new JdbcSQLException(message, sql, sqlstate, + errorCode, null, stackTrace); + if (errorCode == ErrorCode.CONNECTION_BROKEN_1) { + // allow re-connect + throw new IOException(s.toString(), s); + } + throw DbException.convert(s); + } else if (status == STATUS_CLOSED) { + transferList = null; + } else if (status == STATUS_OK_STATE_CHANGED) { + sessionStateChanged = true; + } else if (status == STATUS_OK) { + // ok + } else { + throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, + "unexpected status " + status); + } + } + + /** + * Returns true if the connection was opened in cluster mode. + * + * @return true if it is + */ + public boolean isClustered() { + return cluster; + } + + @Override + public boolean isClosed() { + return transferList == null || transferList.isEmpty(); + } + + /** + * Write the operation to the trace system if debug trace is enabled. 
+     *
+     * @param operation the operation performed
+     * @param id the id of the operation
+     */
+    public void traceOperation(String operation, int id) {
+        if (trace.isDebugEnabled()) {
+            trace.debug("{0} {1}", operation, id);
+        }
+    }
+
+    @Override
+    public void checkPowerOff() {
+        // ok
+    }
+
+    @Override
+    public void checkWritingAllowed() {
+        // ok
+    }
+
+    @Override
+    public String getDatabasePath() {
+        return "";
+    }
+
+    @Override
+    public String getLobCompressionAlgorithm(int type) {
+        return null;
+    }
+
+    @Override
+    public int getMaxLengthInplaceLob() {
+        return SysProperties.LOB_CLIENT_MAX_SIZE_MEMORY;
+    }
+
+    @Override
+    public FileStore openFile(String name, String mode, boolean mustExist) {
+        if (mustExist && !FileUtils.exists(name)) {
+            throw DbException.get(ErrorCode.FILE_NOT_FOUND_1, name);
+        }
+        FileStore store;
+        if (cipher == null) {
+            store = FileStore.open(this, name, mode);
+        } else {
+            store = FileStore.open(this, name, mode, cipher, fileEncryptionKey, 0);
+        }
+        store.setCheckedWriting(false);
+        try {
+            store.init();
+        } catch (DbException e) {
+            store.closeSilently();
+            throw e;
+        }
+        return store;
+    }
+
+    @Override
+    public DataHandler getDataHandler() {
+        return this;
+    }
+
+    @Override
+    public Object getLobSyncObject() {
+        return lobSyncObject;
+    }
+
+    @Override
+    public SmallLRUCache<String, String[]> getLobFileListCache() {
+        return null;
+    }
+
+    public int getLastReconnect() {
+        return lastReconnect;
+    }
+
+    @Override
+    public TempFileDeleter getTempFileDeleter() {
+        if (tempFileDeleter == null) {
+            tempFileDeleter = TempFileDeleter.getInstance();
+        }
+        return tempFileDeleter;
+    }
+
+    @Override
+    public boolean isReconnectNeeded(boolean write) {
+        return false;
+    }
+
+    @Override
+    public SessionInterface reconnect(boolean write) {
+        return this;
+    }
+
+    @Override
+    public void afterWriting() {
+        // nothing to do
+    }
+
+    @Override
+    public LobStorageInterface getLobStorage() {
+        if (lobStorage == null) {
+            lobStorage = new LobStorageFrontend(this);
+        }
+        return
lobStorage; + } + + @Override + public synchronized int readLob(long lobId, byte[] hmac, long offset, + byte[] buff, int off, int length) { + checkClosed(); + for (int i = 0, count = 0; i < transferList.size(); i++) { + Transfer transfer = transferList.get(i); + try { + traceOperation("LOB_READ", (int) lobId); + transfer.writeInt(SessionRemote.LOB_READ); + transfer.writeLong(lobId); + if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_12) { + transfer.writeBytes(hmac); + } + transfer.writeLong(offset); + transfer.writeInt(length); + done(transfer); + length = transfer.readInt(); + if (length <= 0) { + return length; + } + transfer.readBytes(buff, off, length); + return length; + } catch (IOException e) { + removeServer(e, i--, ++count); + } + } + return 1; + } + + @Override + public JavaObjectSerializer getJavaObjectSerializer() { + initJavaObjectSerializer(); + return javaObjectSerializer; + } + + private void initJavaObjectSerializer() { + if (javaObjectSerializerInitialized) { + return; + } + synchronized (this) { + if (javaObjectSerializerInitialized) { + return; + } + String serializerFQN = readSerializationSettings(); + if (serializerFQN != null) { + serializerFQN = serializerFQN.trim(); + if (!serializerFQN.isEmpty() && !serializerFQN.equals("null")) { + try { + javaObjectSerializer = (JavaObjectSerializer) JdbcUtils + .loadUserClass(serializerFQN).newInstance(); + } catch (Exception e) { + throw DbException.convert(e); + } + } + } + javaObjectSerializerInitialized = true; + } + } + + /** + * Read the serializer name from the persistent database settings. 
+ * + * @return the serializer + */ + private String readSerializationSettings() { + String javaObjectSerializerFQN = null; + CommandInterface ci = prepareCommand( + "SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS "+ + " WHERE NAME='JAVA_OBJECT_SERIALIZER'", Integer.MAX_VALUE); + try { + ResultInterface result = ci.executeQuery(0, false); + if (result.next()) { + Value[] row = result.currentRow(); + javaObjectSerializerFQN = row[0].getString(); + } + } finally { + ci.close(); + } + return javaObjectSerializerFQN; + } + + @Override + public void addTemporaryLob(Value v) { + // do nothing + } + + @Override + public CompareMode getCompareMode() { + return compareMode; + } + + @Override + public boolean isRemote() { + return true; + } + + @Override + public String getCurrentSchemaName() { + throw DbException.getUnsupportedException("getSchema && remote session"); + } + + @Override + public void setCurrentSchemaName(String schema) { + throw DbException.getUnsupportedException("setSchema && remote session"); + } + + @Override + public boolean isSupportsGeneratedKeys() { + return getClientVersion() >= Constants.TCP_PROTOCOL_VERSION_17; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/SessionWithState.java b/modules/h2/src/main/java/org/h2/engine/SessionWithState.java new file mode 100644 index 0000000000000..94377577d2942 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/SessionWithState.java @@ -0,0 +1,60 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.ArrayList; +import org.h2.command.CommandInterface; +import org.h2.result.ResultInterface; +import org.h2.util.New; +import org.h2.value.Value; + +/** + * The base class for both remote and embedded sessions. 
+ */ +abstract class SessionWithState implements SessionInterface { + + protected ArrayList<String> sessionState; + protected boolean sessionStateChanged; + private boolean sessionStateUpdating; + + /** + * Re-create the session state using the stored sessionState list. + */ + protected void recreateSessionState() { + if (sessionState != null && !sessionState.isEmpty()) { + sessionStateUpdating = true; + try { + for (String sql : sessionState) { + CommandInterface ci = prepareCommand(sql, Integer.MAX_VALUE); + ci.executeUpdate(false); + } + } finally { + sessionStateUpdating = false; + sessionStateChanged = false; + } + } + } + + /** + * Read the session state if necessary. + */ + public void readSessionState() { + if (!sessionStateChanged || sessionStateUpdating) { + return; + } + sessionStateChanged = false; + sessionState = New.arrayList(); + CommandInterface ci = prepareCommand( + "SELECT * FROM INFORMATION_SCHEMA.SESSION_STATE", + Integer.MAX_VALUE); + ResultInterface result = ci.executeQuery(0, false); + while (result.next()) { + Value[] row = result.currentRow(); + sessionState.add(row[1].getString()); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/Setting.java b/modules/h2/src/main/java/org/h2/engine/Setting.java new file mode 100644 index 0000000000000..686dedd95d94c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/Setting.java @@ -0,0 +1,78 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.table.Table; + +/** + * A persistent database setting. 
+ */ +public class Setting extends DbObjectBase { + + private int intValue; + private String stringValue; + + public Setting(Database database, int id, String settingName) { + initDbObjectBase(database, id, settingName, Trace.SETTING); + } + + public void setIntValue(int value) { + intValue = value; + } + + public int getIntValue() { + return intValue; + } + + public void setStringValue(String value) { + stringValue = value; + } + + public String getStringValue() { + return stringValue; + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public String getCreateSQL() { + StringBuilder buff = new StringBuilder("SET "); + buff.append(getSQL()).append(' '); + if (stringValue != null) { + buff.append(stringValue); + } else { + buff.append(intValue); + } + return buff.toString(); + } + + @Override + public int getType() { + return DbObject.SETTING; + } + + @Override + public void removeChildrenAndResources(Session session) { + database.removeMeta(session, getId()); + invalidate(); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("RENAME"); + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/SettingsBase.java b/modules/h2/src/main/java/org/h2/engine/SettingsBase.java new file mode 100644 index 0000000000000..59b7509dc7b2c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/SettingsBase.java @@ -0,0 +1,107 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.HashMap; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.util.Utils; + +/** + * The base class for settings. 
+ */ +public class SettingsBase { + + private final HashMap settings; + + protected SettingsBase(HashMap s) { + this.settings = s; + } + + /** + * Get the setting for the given key. + * + * @param key the key + * @param defaultValue the default value + * @return the setting + */ + protected boolean get(String key, boolean defaultValue) { + String s = get(key, Boolean.toString(defaultValue)); + try { + return Utils.parseBoolean(s, defaultValue, true); + } catch (IllegalArgumentException e) { + throw DbException.get(ErrorCode.DATA_CONVERSION_ERROR_1, + e, "key:" + key + " value:" + s); + } + } + + /** + * Get the setting for the given key. + * + * @param key the key + * @param defaultValue the default value + * @return the setting + */ + protected int get(String key, int defaultValue) { + String s = get(key, "" + defaultValue); + try { + return Integer.decode(s); + } catch (NumberFormatException e) { + throw DbException.get(ErrorCode.DATA_CONVERSION_ERROR_1, + e, "key:" + key + " value:" + s); + } + } + + /** + * Get the setting for the given key. + * + * @param key the key + * @param defaultValue the default value + * @return the setting + */ + protected String get(String key, String defaultValue) { + String v = settings.get(key); + if (v != null) { + return v; + } + StringBuilder buff = new StringBuilder("h2."); + boolean nextUpper = false; + for (char c : key.toCharArray()) { + if (c == '_') { + nextUpper = true; + } else { + // Character.toUpperCase / toLowerCase ignores the locale + buff.append(nextUpper ? Character.toUpperCase(c) : Character.toLowerCase(c)); + nextUpper = false; + } + } + String sysProperty = buff.toString(); + v = Utils.getProperty(sysProperty, defaultValue); + settings.put(key, v); + return v; + } + + /** + * Check if the settings contains the given key. + * + * @param k the key + * @return true if they do + */ + protected boolean containsKey(String k) { + return settings.containsKey(k); + } + + /** + * Get all settings. 
+ * + * @return the settings + */ + public HashMap getSettings() { + return settings; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/SysProperties.java b/modules/h2/src/main/java/org/h2/engine/SysProperties.java new file mode 100644 index 0000000000000..df9e575e432be --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/SysProperties.java @@ -0,0 +1,628 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.util.MathUtils; +import org.h2.util.Utils; + +/** + * The constants defined in this class are initialized from system properties. + * Some system properties are per machine settings, and others are as a last + * resort and temporary solution to work around a problem in the application or + * database engine. Also, there are system properties to enable features that + * are not yet fully tested or that are not backward compatible. + *
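The underscore-to-camel-case fallback in SettingsBase.get(String, String) above can be sketched as a stand-alone helper. The class name here is made up for illustration; the conversion loop itself mirrors the one in the diff:

```java
// Illustrative sketch of SettingsBase.get(String, String)'s fallback:
// a key such as "MAX_MEMORY_ROWS" is looked up as the camel-case
// system property "h2.maxMemoryRows" when absent from the settings map.
class SettingKeySketch {
    static String toSystemPropertyName(String key) {
        StringBuilder buff = new StringBuilder("h2.");
        boolean nextUpper = false;
        for (char c : key.toCharArray()) {
            if (c == '_') {
                // an underscore upper-cases the next character
                nextUpper = true;
            } else {
                // Character.toUpperCase / toLowerCase ignores the locale,
                // matching the original loop
                buff.append(nextUpper ? Character.toUpperCase(c) : Character.toLowerCase(c));
                nextUpper = false;
            }
        }
        return buff.toString();
    }
}
```

So a database setting key and its "h2." system-property override always stay in lockstep without a separate mapping table.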

    + * System properties can be set when starting the virtual machine:
    + *
    + * java -Dh2.baseDir=/temp
    + *
    + * They can be set within the application, but this must be done before loading
    + * any classes of this database (before loading the JDBC driver):
    + *
    + * System.setProperty("h2.baseDir", "/temp");
    + */ +public class SysProperties { + + /** + * INTERNAL + */ + public static final String H2_SCRIPT_DIRECTORY = "h2.scriptDirectory"; + + /** + * INTERNAL + */ + public static final String H2_BROWSER = "h2.browser"; + + /** + * System property file.encoding (default: Cp1252).
    + * It is usually set by the system and is the default encoding used for the + * RunScript and CSV tool. + */ + public static final String FILE_ENCODING = + Utils.getProperty("file.encoding", "Cp1252"); + + /** + * System property file.separator (default: /).
    + * It is usually set by the system, and used to build absolute file names. + */ + public static final String FILE_SEPARATOR = + Utils.getProperty("file.separator", "/"); + + /** + * System property line.separator (default: \n).
    + * It is usually set by the system, and used by the script and trace tools. + */ + public static final String LINE_SEPARATOR = + Utils.getProperty("line.separator", "\n"); + + /** + * System property user.home (empty string if not set).
    + * It is usually set by the system, and used as a replacement for ~ in file + * names. + */ + public static final String USER_HOME = + Utils.getProperty("user.home", ""); + + /** + * System property h2.allowedClasses (default: *).
    + * Comma separated list of class names or prefixes. + */ + public static final String ALLOWED_CLASSES = + Utils.getProperty("h2.allowedClasses", "*"); + + /** + * System property h2.enableAnonymousTLS (default: true).
    + * When using TLS connection, the anonymous cipher suites should be enabled. + */ + public static final boolean ENABLE_ANONYMOUS_TLS = + Utils.getProperty("h2.enableAnonymousTLS", true); + + /** + * System property h2.bindAddress (default: null).
    + * The bind address to use. + */ + public static final String BIND_ADDRESS = + Utils.getProperty("h2.bindAddress", null); + + /** + * System property h2.check (default: true).
    + * Assertions in the database engine. + */ + //## CHECK ## + public static final boolean CHECK = + Utils.getProperty("h2.check", true); + /*/ + public static final boolean CHECK = false; + //*/ + + /** + * System property h2.check2 (default: false).
    + * Additional assertions in the database engine. + */ + //## CHECK ## + public static final boolean CHECK2 = + Utils.getProperty("h2.check2", false); + /*/ + public static final boolean CHECK2 = false; + //*/ + + /** + * System property h2.clientTraceDirectory (default: + * trace.db/).
    + * Directory where the trace files of the JDBC client are stored (only for + * client / server). + */ + public static final String CLIENT_TRACE_DIRECTORY = + Utils.getProperty("h2.clientTraceDirectory", "trace.db/"); + + /** + * System property h2.collatorCacheSize (default: 32000).
    + * The cache size for collation keys (in elements). Used when a collator has + * been set for the database. + */ + public static final int COLLATOR_CACHE_SIZE = + Utils.getProperty("h2.collatorCacheSize", 32_000); + + /** + * System property h2.consoleTableIndexes + * (default: 100).
    + * Up to this many tables, the column type and indexes are listed. + */ + public static final int CONSOLE_MAX_TABLES_LIST_INDEXES = + Utils.getProperty("h2.consoleTableIndexes", 100); + + /** + * System property h2.consoleTableColumns + * (default: 500).
    + * Up to this many tables, the column names are listed. + */ + public static final int CONSOLE_MAX_TABLES_LIST_COLUMNS = + Utils.getProperty("h2.consoleTableColumns", 500); + + /** + * System property h2.consoleProcedureColumns + * (default: 300).
    + * Up to this many procedures, the column names are listed. + */ + public static final int CONSOLE_MAX_PROCEDURES_LIST_COLUMNS = + Utils.getProperty("h2.consoleProcedureColumns", 300); + + /** + * System property h2.consoleStream (default: true).
    + * H2 Console: stream query results. + */ + public static final boolean CONSOLE_STREAM = + Utils.getProperty("h2.consoleStream", true); + + /** + * System property h2.consoleTimeout (default: 1800000).
    + * H2 Console: session timeout in milliseconds. The default is 30 minutes. + */ + public static final int CONSOLE_TIMEOUT = + Utils.getProperty("h2.consoleTimeout", 30 * 60 * 1000); + + /** + * System property h2.dataSourceTraceLevel (default: 1).
    + * The trace level of the data source implementation. Default is 1 for + * error. + */ + public static final int DATASOURCE_TRACE_LEVEL = + Utils.getProperty("h2.dataSourceTraceLevel", 1); + + /** + * System property h2.delayWrongPasswordMin + * (default: 250).
    + * The minimum delay in milliseconds before an exception is thrown for using + * the wrong user name or password. This slows down brute force attacks. The + * delay is reset to this value after a successful login. Unsuccessful + * logins will double the time until DELAY_WRONG_PASSWORD_MAX. + * To disable the delay, set this system property to 0. + */ + public static final int DELAY_WRONG_PASSWORD_MIN = + Utils.getProperty("h2.delayWrongPasswordMin", 250); + + /** + * System property h2.delayWrongPasswordMax + * (default: 4000).
    + * The maximum delay in milliseconds before an exception is thrown for using + * the wrong user name or password. This slows down brute force attacks. The + * delay is reset after a successful login. The value 0 means there is no + * maximum delay. + */ + public static final int DELAY_WRONG_PASSWORD_MAX = + Utils.getProperty("h2.delayWrongPasswordMax", 4000); + + /** + * System property h2.javaSystemCompiler (default: true).
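The doubling policy described by the two delay properties above can be illustrated with a small helper. This is a sketch of the documented behavior, not H2's actual login code, and the class and method names are invented:

```java
// Sketch of the documented wrong-password delay policy: start at the
// minimum, double after each failed login, and cap at the maximum
// (a maximum of 0 means uncapped; a minimum of 0 disables the delay).
class LoginDelaySketch {
    static int nextDelay(int currentDelay, int min, int max) {
        if (min == 0) {
            return 0; // delay disabled
        }
        int next = currentDelay == 0 ? min : currentDelay * 2;
        return max == 0 ? next : Math.min(next, max);
    }
}
```

With the defaults (min 250 ms, max 4000 ms), repeated failures produce 250, 500, 1000, 2000, 4000, 4000, ... milliseconds, which is what makes brute-force attacks slow while keeping a single typo cheap.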
    + * Whether to use the Java system compiler + * (ToolProvider.getSystemJavaCompiler()) if it is available to compile user + * defined functions. If disabled or if the system compiler is not + * available, the com.sun.tools.javac compiler is used if available, and + * "javac" (as an external process) is used if not. + */ + public static final boolean JAVA_SYSTEM_COMPILER = + Utils.getProperty("h2.javaSystemCompiler", true); + + /** + * System property h2.lobCloseBetweenReads + * (default: false).
    + * Close LOB files between read operations. + */ + public static boolean lobCloseBetweenReads = + Utils.getProperty("h2.lobCloseBetweenReads", false); + + /** + * System property h2.lobFilesPerDirectory + * (default: 256).
    + * Maximum number of LOB files per directory. + */ + public static final int LOB_FILES_PER_DIRECTORY = + Utils.getProperty("h2.lobFilesPerDirectory", 256); + + /** + * System property h2.lobClientMaxSizeMemory (default: + * 1048576).
    + * The maximum size of a LOB object to keep in memory on the client side + * when using the server mode. + */ + public static final int LOB_CLIENT_MAX_SIZE_MEMORY = + Utils.getProperty("h2.lobClientMaxSizeMemory", 1024 * 1024); + + /** + * System property h2.maxFileRetry (default: 16).
    + * Number of times to retry file delete and rename. On Windows, files can't + * be deleted while they are open. Waiting a bit can help (sometimes + * Windows Explorer opens the files for a short time). Sometimes, + * running garbage collection may close files if the user forgot to call + * Connection.close() or InputStream.close(). + */ + public static final int MAX_FILE_RETRY = + Math.max(1, Utils.getProperty("h2.maxFileRetry", 16)); + + /** + * System property h2.maxReconnect (default: 3).
    + * The maximum number of tries to reconnect in a row. + */ + public static final int MAX_RECONNECT = + Utils.getProperty("h2.maxReconnect", 3); + + /** + * System property h2.maxMemoryRows + * (default: 40000 per GB of available RAM).
    + * The default maximum number of rows to be kept in memory in a result set. + */ + public static final int MAX_MEMORY_ROWS = + getAutoScaledForMemoryProperty("h2.maxMemoryRows", 40_000); + + /** + * System property h2.maxTraceDataLength + * (default: 65535).
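The "40000 per GB of available RAM" default above comes from getAutoScaledForMemoryProperty, which delegates to Utils.scaleForAvailableMemory. A rough linear-scaling sketch under the 1 GB baseline stated in that method's comment might look like this (the real helper's logic may differ; the class name is invented):

```java
// Rough sketch of memory-based auto-scaling: defaults are assumed to be
// tuned for about 1 GB of memory and are scaled up linearly when more
// is available. Utils.scaleForAvailableMemory's actual logic may differ.
class MemoryScaleSketch {
    static final long ONE_GB = 1024L * 1024 * 1024;

    static int scaleForMemory(int defaultValue, long availableBytes) {
        if (availableBytes <= ONE_GB) {
            return defaultValue; // at or below the baseline: keep the default
        }
        long scaled = (long) defaultValue * availableBytes / ONE_GB;
        return (int) Math.min(scaled, Integer.MAX_VALUE); // clamp on overflow
    }
}
```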
    + * The maximum size of a LOB value that is written as data to the trace + * system. + */ + public static final long MAX_TRACE_DATA_LENGTH = + Utils.getProperty("h2.maxTraceDataLength", 65535); + + /** + * System property h2.modifyOnWrite (default: false).
    + * Only modify the database file when recovery is necessary, or when writing + * to the database. If disabled, opening the database always writes to the + * file (except if the database is read-only). When enabled, the serialized + * file lock is faster. + */ + public static final boolean MODIFY_ON_WRITE = + Utils.getProperty("h2.modifyOnWrite", false); + + /** + * System property h2.nioLoadMapped (default: false).
    + * If the mapped buffer should be loaded when the file is opened. + * This can improve performance. + */ + public static final boolean NIO_LOAD_MAPPED = + Utils.getProperty("h2.nioLoadMapped", false); + + /** + * System property h2.nioCleanerHack (default: false).
    + * If enabled, use the reflection hack to un-map the mapped file if + * possible. If disabled, System.gc() is called in a loop until the object + * is garbage collected. See also + * http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038 + */ + public static final boolean NIO_CLEANER_HACK = + Utils.getProperty("h2.nioCleanerHack", false); + + /** + * System property h2.objectCache (default: true).
    + * Cache commonly used values (numbers, strings). There is a shared cache + * for all values. + */ + public static final boolean OBJECT_CACHE = + Utils.getProperty("h2.objectCache", true); + + /** + * System property h2.objectCacheMaxPerElementSize (default: + * 4096).
    + * The maximum size (precision) of an object in the cache. + */ + public static final int OBJECT_CACHE_MAX_PER_ELEMENT_SIZE = + Utils.getProperty("h2.objectCacheMaxPerElementSize", 4096); + + /** + * System property h2.objectCacheSize (default: 1024).
    + * The maximum number of objects in the cache. + * This value must be a power of 2. + */ + public static final int OBJECT_CACHE_SIZE; + static { + try { + OBJECT_CACHE_SIZE = MathUtils.nextPowerOf2( + Utils.getProperty("h2.objectCacheSize", 1024)); + } catch (IllegalArgumentException e) { + throw new IllegalStateException("Invalid h2.objectCacheSize", e); + } + } + + /** + * System property h2.oldStyleOuterJoin + * (default: true for version 1.3, false for version 1.4).
    + * Limited support for the old-style Oracle outer join with "(+)". + */ + public static final boolean OLD_STYLE_OUTER_JOIN = + Utils.getProperty("h2.oldStyleOuterJoin", + Constants.VERSION_MINOR < 4); + + /** + * System property {@code h2.oldResultSetGetObject}, {@code true} by default. + * Return {@code Byte} and {@code Short} instead of {@code Integer} from + * {@code ResultSet#getObject(...)} for {@code TINYINT} and {@code SMALLINT} + * values. + */ + public static final boolean OLD_RESULT_SET_GET_OBJECT = + Utils.getProperty("h2.oldResultSetGetObject", true); + + /** + * System property {@code h2.bigDecimalIsDecimal}, {@code true} by default. If + * {@code true} map {@code BigDecimal} to {@code DECIMAL} type, if {@code false} + * map it to {@code NUMERIC} as specified in JDBC specification (see Mapping + * from Java Object Types to JDBC Types). + */ + public static final boolean BIG_DECIMAL_IS_DECIMAL = + Utils.getProperty("h2.bigDecimalIsDecimal", true); + + + /** + * System property {@code h2.unlimitedTimeRange}, {@code false} by default. + * + *

    + * Controls the limits of the TIME data type.
    + *
    + * h2.unlimitedTimeRange | Minimum TIME value        | Maximum TIME value
    + * false                 | 00:00:00.000000000        | 23:59:59.999999999
    + * true                  | -2562047:47:16.854775808  | 2562047:47:16.854775807
    + */ + public static final boolean UNLIMITED_TIME_RANGE = + Utils.getProperty("h2.unlimitedTimeRange", false); + + /** + * System property h2.pgClientEncoding (default: UTF-8).
    + * Default client encoding for PG server. It is used if the client does not + * send its encoding. + */ + public static final String PG_DEFAULT_CLIENT_ENCODING = + Utils.getProperty("h2.pgClientEncoding", "UTF-8"); + + /** + * System property h2.prefixTempFile (default: h2.temp).
    + * The prefix for temporary files in the temp directory. + */ + public static final String PREFIX_TEMP_FILE = + Utils.getProperty("h2.prefixTempFile", "h2.temp"); + + /** + * System property h2.serverCachedObjects (default: 64).
    + * TCP Server: number of cached objects per session. + */ + public static final int SERVER_CACHED_OBJECTS = + Utils.getProperty("h2.serverCachedObjects", 64); + + /** + * System property h2.serverResultSetFetchSize + * (default: 100).
    + * The default result set fetch size when using the server mode. + */ + public static final int SERVER_RESULT_SET_FETCH_SIZE = + Utils.getProperty("h2.serverResultSetFetchSize", 100); + + /** + * System property h2.socketConnectRetry (default: 16).
    + * The number of times to retry opening a socket. Windows sometimes fails + * to open a socket, see bug + * http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6213296 + */ + public static final int SOCKET_CONNECT_RETRY = + Utils.getProperty("h2.socketConnectRetry", 16); + + /** + * System property h2.socketConnectTimeout + * (default: 2000).
    + * The timeout in milliseconds to connect to a server. + */ + public static final int SOCKET_CONNECT_TIMEOUT = + Utils.getProperty("h2.socketConnectTimeout", 2000); + + /** + * System property h2.sortBinaryUnsigned + * (default: false with version 1.3, true with version 1.4).
    + * Whether binary data should be sorted in unsigned mode + * (0xff is larger than 0x00). + */ + public static final boolean SORT_BINARY_UNSIGNED = + Utils.getProperty("h2.sortBinaryUnsigned", + Constants.VERSION_MINOR >= 4); + + /** + * System property h2.sortNullsHigh (default: false).
    + * Invert the default sorting behavior for NULL, such that NULL + * is at the end of a result set in an ascending sort and at + * the beginning of a result set in a descending sort. + */ + public static final boolean SORT_NULLS_HIGH = + Utils.getProperty("h2.sortNullsHigh", false); + + /** + * System property h2.splitFileSizeShift (default: 30).
    + * The maximum file size of a split file is 1L << x. + */ + public static final long SPLIT_FILE_SIZE_SHIFT = + Utils.getProperty("h2.splitFileSizeShift", 30); + + /** + * System property h2.syncMethod (default: sync).
    + * What method to call when closing the database, on checkpoint, and on + * CHECKPOINT SYNC. The following options are supported: + * "sync" (default): RandomAccessFile.getFD().sync(); + * "force": RandomAccessFile.getChannel().force(true); + * "forceFalse": RandomAccessFile.getChannel().force(false); + * "": do not call a method (fast but there is a risk of data loss + * on power failure). + */ + public static final String SYNC_METHOD = + Utils.getProperty("h2.syncMethod", "sync"); + + /** + * System property h2.traceIO (default: false).
    + * Trace all I/O operations. + */ + public static final boolean TRACE_IO = + Utils.getProperty("h2.traceIO", false); + + /** + * System property h2.threadDeadlockDetector + * (default: false).
    + * Detect thread deadlocks in a background thread. + */ + public static final boolean THREAD_DEADLOCK_DETECTOR = + Utils.getProperty("h2.threadDeadlockDetector", false); + + /** + * System property h2.implicitRelativePath + * (default: true for version 1.3, false for version 1.4).
    + * If disabled, relative paths in database URLs need to be written as + * jdbc:h2:./test instead of jdbc:h2:test. + */ + public static final boolean IMPLICIT_RELATIVE_PATH = + Utils.getProperty("h2.implicitRelativePath", + Constants.VERSION_MINOR < 4); + + /** + * System property h2.urlMap (default: null).
    + * A properties file that contains a mapping between database URLs. New + * connections are written into the file. An empty value in the map means no + * redirection is used for the given URL. + */ + public static final String URL_MAP = + Utils.getProperty("h2.urlMap", null); + + /** + * System property h2.useThreadContextClassLoader + * (default: false).
    + * Instead of using the default class loader when deserializing objects, the + * current thread-context class loader will be used. + */ + public static final boolean USE_THREAD_CONTEXT_CLASS_LOADER = + Utils.getProperty("h2.useThreadContextClassLoader", false); + + /** + * System property h2.serializeJavaObject + * (default: true).
    + * If true, values of type OTHER will be stored in serialized form + * and have the semantics of binary data for all operations (such as sorting + * and conversion to string). + *
    + * If false, the objects will be serialized only for I/O operations + * and a few other special cases (for example when someone tries to get the + * value in binary form or when comparing objects that are not comparable + * otherwise). + *
    + * If the object implements the Comparable interface, the method compareTo + * will be used for sorting (but only if objects being compared have a + * common comparable super type). Otherwise the objects will be compared by + * type, and if they are the same by hashCode, and if the hash codes are + * equal, but objects are not, the serialized forms (the byte arrays) are + * compared. + *
    + * The string representation of the values use the toString method of + * object. + *
    + * In client-server mode, the server must have all required classes in the + * class path. On the client side, this setting is required to be disabled + * as well, to have correct string representation and display size. + *
    + * In embedded mode, no data copying occurs, so the user has to make a + * defensive copy before storing, or ensure that the value object is + * immutable. + */ + public static boolean serializeJavaObject = + Utils.getProperty("h2.serializeJavaObject", true); + + /** + * System property h2.javaObjectSerializer + * (default: null).
    + * The JavaObjectSerializer class name for java objects being stored in + * column of type OTHER. It must be the same on client and server to work + * correctly. + */ + public static final String JAVA_OBJECT_SERIALIZER = + Utils.getProperty("h2.javaObjectSerializer", null); + + /** + * System property h2.customDataTypesHandler + * (default: null).
    + * The CustomDataTypesHandler class name that is used + * to provide support for user defined custom data types. + * It must be the same on client and server to work correctly. + */ + public static final String CUSTOM_DATA_TYPES_HANDLER = + Utils.getProperty("h2.customDataTypesHandler", null); + + private static final String H2_BASE_DIR = "h2.baseDir"; + + private SysProperties() { + // utility class + } + + /** + * INTERNAL + */ + public static void setBaseDir(String dir) { + if (!dir.endsWith("/")) { + dir += "/"; + } + System.setProperty(H2_BASE_DIR, dir); + } + + /** + * INTERNAL + */ + public static String getBaseDir() { + return Utils.getProperty(H2_BASE_DIR, null); + } + + /** + * System property h2.scriptDirectory (default: empty + * string).
    + * Relative or absolute directory where the script files are stored to or + * read from. + * + * @return the current value + */ + public static String getScriptDirectory() { + return Utils.getProperty(H2_SCRIPT_DIRECTORY, ""); + } + + /** + * This method attempts to auto-scale some of our properties to take + * advantage of more powerful machines out of the box. We assume that our + * default properties are set correctly for approx. 1G of memory, and scale + * them up if we have more. + */ + private static int getAutoScaledForMemoryProperty(String key, int defaultValue) { + String s = Utils.getProperty(key, null); + if (s != null) { + try { + return Integer.decode(s).intValue(); + } catch (NumberFormatException e) { + // ignore + } + } + return Utils.scaleForAvailableMemory(defaultValue); + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/UndoLog.java b/modules/h2/src/main/java/org/h2/engine/UndoLog.java new file mode 100644 index 0000000000000..fd8df08636f79 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/UndoLog.java @@ -0,0 +1,242 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.ArrayList; +import java.util.HashMap; +import org.h2.message.DbException; +import org.h2.store.Data; +import org.h2.store.FileStore; +import org.h2.table.Table; +import org.h2.util.New; + +/** + * Each session keeps an undo log if rollback is required. + */ +public class UndoLog { + + private final Database database; + private final ArrayList<Long> storedEntriesPos = New.arrayList(); + private final ArrayList<UndoLogRecord> records = New.arrayList(); + private FileStore file; + private Data rowBuff; + private int memoryUndo; + private int storedEntries; + private HashMap<Integer, Table> tables; + private final boolean largeTransactions; + + /** + * Create a new undo log for the given session. 
+ * + * @param session the session + */ + UndoLog(Session session) { + this.database = session.getDatabase(); + largeTransactions = database.getSettings().largeTransactions; + } + + /** + * Get the number of active rows in this undo log. + * + * @return the number of rows + */ + int size() { + if (largeTransactions) { + return storedEntries + records.size(); + } + if (SysProperties.CHECK && memoryUndo > records.size()) { + DbException.throwInternalError(); + } + return records.size(); + } + + /** + * Clear the undo log. This method is called after the transaction is + * committed. + */ + void clear() { + records.clear(); + storedEntries = 0; + storedEntriesPos.clear(); + memoryUndo = 0; + if (file != null) { + file.closeAndDeleteSilently(); + file = null; + rowBuff = null; + } + } + + /** + * Get the last record and remove it from the list of operations. + * + * @return the last record + */ + public UndoLogRecord getLast() { + int i = records.size() - 1; + if (largeTransactions) { + if (i < 0 && storedEntries > 0) { + int last = storedEntriesPos.size() - 1; + long pos = storedEntriesPos.remove(last); + long end = file.length(); + int bufferLength = (int) (end - pos); + Data buff = Data.create(database, bufferLength); + file.seek(pos); + file.readFully(buff.getBytes(), 0, bufferLength); + while (buff.length() < bufferLength) { + UndoLogRecord e = UndoLogRecord.loadFromBuffer(buff, this); + records.add(e); + memoryUndo++; + } + storedEntries -= records.size(); + file.setLength(pos); + file.seek(pos); + } + i = records.size() - 1; + } + UndoLogRecord entry = records.get(i); + if (entry.isStored()) { + int start = Math.max(0, i - database.getMaxMemoryUndo() / 2); + UndoLogRecord first = null; + for (int j = start; j <= i; j++) { + UndoLogRecord e = records.get(j); + if (e.isStored()) { + e.load(rowBuff, file, this); + memoryUndo++; + if (first == null) { + first = e; + } + } + } + for (int k = 0; k < i; k++) { + UndoLogRecord e = records.get(k); + e.invalidatePos(); + 
} + seek(first.getFilePos()); + } + return entry; + } + + /** + * Go to the right position in the file. + * + * @param filePos the position in the file + */ + void seek(long filePos) { + file.seek(filePos * Constants.FILE_BLOCK_SIZE); + } + + /** + * Remove the last record from the list of operations. + * + * @param trimToSize if the undo array should shrink to conserve memory + */ + void removeLast(boolean trimToSize) { + int i = records.size() - 1; + UndoLogRecord r = records.remove(i); + if (!r.isStored()) { + memoryUndo--; + } + if (trimToSize && i > 1024 && (i & 1023) == 0) { + records.trimToSize(); + } + } + + /** + * Append an undo log entry to the log. + * + * @param entry the entry + */ + void add(UndoLogRecord entry) { + records.add(entry); + if (largeTransactions) { + memoryUndo++; + if (memoryUndo > database.getMaxMemoryUndo() && + database.isPersistent() && + !database.isMultiVersion()) { + if (file == null) { + String fileName = database.createTempFile(); + file = database.openFile(fileName, "rw", false); + file.setCheckedWriting(false); + file.setLength(FileStore.HEADER_LENGTH); + } + Data buff = Data.create(database, Constants.DEFAULT_PAGE_SIZE); + for (int i = 0; i < records.size(); i++) { + UndoLogRecord r = records.get(i); + buff.checkCapacity(Constants.DEFAULT_PAGE_SIZE); + r.append(buff, this); + if (i == records.size() - 1 || buff.length() > Constants.UNDO_BLOCK_SIZE) { + storedEntriesPos.add(file.getFilePointer()); + file.write(buff.getBytes(), 0, buff.length()); + buff.reset(); + } + } + storedEntries += records.size(); + memoryUndo = 0; + records.clear(); + file.autoDelete(); + } + } else { + if (!entry.isStored()) { + memoryUndo++; + } + if (memoryUndo > database.getMaxMemoryUndo() && + database.isPersistent() && + !database.isMultiVersion()) { + if (file == null) { + String fileName = database.createTempFile(); + file = database.openFile(fileName, "rw", false); + file.setCheckedWriting(false); + file.seek(FileStore.HEADER_LENGTH); + 
rowBuff = Data.create(database, Constants.DEFAULT_PAGE_SIZE); + Data buff = rowBuff; + for (UndoLogRecord r : records) { + saveIfPossible(r, buff); + } + } else { + saveIfPossible(entry, rowBuff); + } + file.autoDelete(); + } + } + } + + private void saveIfPossible(UndoLogRecord r, Data buff) { + if (!r.isStored() && r.canStore()) { + r.save(buff, file, this); + memoryUndo--; + } + } + + /** + * Get the table id for this undo log. If the table is not registered yet, + * this is done as well. + * + * @param table the table + * @return the id + */ + int getTableId(Table table) { + int id = table.getId(); + if (tables == null) { + tables = new HashMap<>(); + } + // need to overwrite the old entry, because the old object + // might be deleted in the meantime + tables.put(id, table); + return id; + } + + /** + * Get the table for this id. The table must be registered for this undo log + * first by calling getTableId. + * + * @param id the table id + * @return the table object + */ + Table getTable(int id) { + return tables.get(id); + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/UndoLogRecord.java b/modules/h2/src/main/java/org/h2/engine/UndoLogRecord.java new file mode 100644 index 0000000000000..b22f1a690344f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/UndoLogRecord.java @@ -0,0 +1,273 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.store.Data; +import org.h2.store.FileStore; +import org.h2.table.Table; +import org.h2.value.Value; + +/** + * An entry in a undo log. + */ +public class UndoLogRecord { + + /** + * Operation type meaning the row was inserted. + */ + public static final short INSERT = 0; + + /** + * Operation type meaning the row was deleted. 
+     */
+    public static final short DELETE = 1;
+
+    private static final int IN_MEMORY = 0, STORED = 1, IN_MEMORY_INVALID = 2;
+    private Table table;
+    private Row row;
+    private short operation;
+    private short state;
+    private int filePos;
+
+    /**
+     * Create a new undo log record.
+     *
+     * @param table the table
+     * @param op the operation type
+     * @param row the row that was deleted or inserted
+     */
+    UndoLogRecord(Table table, short op, Row row) {
+        this.table = table;
+        this.row = row;
+        this.operation = op;
+        this.state = IN_MEMORY;
+    }
+
+    /**
+     * Check if the log record is stored in the file.
+     *
+     * @return true if it is
+     */
+    boolean isStored() {
+        return state == STORED;
+    }
+
+    /**
+     * Check if this undo log record can be stored. A record can only be
+     * stored if the table has a unique index.
+     *
+     * @return true if it can be stored
+     */
+    boolean canStore() {
+        // if large transactions are enabled, this method is not called
+        return table.getUniqueIndex() != null;
+    }
+
+    /**
+     * Undo the operation. If the row was inserted before, it is deleted now,
+     * and vice versa.
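The undo operation is symmetric: an INSERT record is undone by deleting the row, a DELETE record by re-inserting it. A minimal sketch of this inversion (hypothetical names, not the H2 API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class UndoSketch {
    static final short INSERT = 0, DELETE = 1;

    static class Entry {
        final short op;
        final String row;
        Entry(short op, String row) { this.op = op; this.row = row; }
    }

    // Roll back by replaying the log newest-first, applying the inverse op.
    static void undo(List<String> table, Deque<Entry> log) {
        while (!log.isEmpty()) {
            Entry e = log.pop();
            if (e.op == INSERT) {
                table.remove(e.row);   // undo an insert by deleting the row
            } else {
                table.add(e.row);      // undo a delete by re-inserting it
            }
        }
    }

    public static void main(String[] args) {
        List<String> table = new ArrayList<>();
        table.add("a");
        Deque<Entry> log = new ArrayDeque<>();
        table.add("b");
        log.push(new Entry(INSERT, "b"));
        table.remove("a");
        log.push(new Entry(DELETE, "a"));
        undo(table, log);
        System.out.println(table); // back to the original contents
    }
}
```

Note that the log must be replayed in reverse order of execution, which is why getLast() pops from the tail.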
+ * + * @param session the session + */ + void undo(Session session) { + Database db = session.getDatabase(); + switch (operation) { + case INSERT: + if (state == IN_MEMORY_INVALID) { + state = IN_MEMORY; + } + if (db.getLockMode() == Constants.LOCK_MODE_OFF) { + if (row.isDeleted()) { + // it might have been deleted by another thread + return; + } + } + try { + row.setDeleted(false); + table.removeRow(session, row); + table.fireAfterRow(session, row, null, true); + } catch (DbException e) { + if (session.getDatabase().getLockMode() == Constants.LOCK_MODE_OFF + && e.getErrorCode() == ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1) { + // it might have been deleted by another thread + // ignore + } else { + throw e; + } + } + break; + case DELETE: + try { + table.addRow(session, row); + table.fireAfterRow(session, null, row, true); + // reset session id, otherwise other sessions think + // that this row was inserted by this session + row.commit(); + } catch (DbException e) { + if (session.getDatabase().getLockMode() == Constants.LOCK_MODE_OFF + && e.getSQLException().getErrorCode() == ErrorCode.DUPLICATE_KEY_1) { + // it might have been added by another thread + // ignore + } else { + throw e; + } + } + break; + default: + DbException.throwInternalError("op=" + operation); + } + } + + /** + * Append the row to the buffer. + * + * @param buff the buffer + * @param log the undo log + */ + void append(Data buff, UndoLog log) { + int p = buff.length(); + buff.writeInt(0); + buff.writeInt(operation); + buff.writeByte(row.isDeleted() ? 
(byte) 1 : (byte) 0); + buff.writeInt(log.getTableId(table)); + buff.writeLong(row.getKey()); + buff.writeInt(row.getSessionId()); + int count = row.getColumnCount(); + buff.writeInt(count); + for (int i = 0; i < count; i++) { + Value v = row.getValue(i); + buff.checkCapacity(buff.getValueLen(v)); + buff.writeValue(v); + } + buff.fillAligned(); + buff.setInt(p, (buff.length() - p) / Constants.FILE_BLOCK_SIZE); + } + + /** + * Save the row in the file using a buffer. + * + * @param buff the buffer + * @param file the file + * @param log the undo log + */ + void save(Data buff, FileStore file, UndoLog log) { + buff.reset(); + append(buff, log); + filePos = (int) (file.getFilePointer() / Constants.FILE_BLOCK_SIZE); + file.write(buff.getBytes(), 0, buff.length()); + row = null; + state = STORED; + } + + /** + * Load an undo log record row using a buffer. + * + * @param buff the buffer + * @param log the log + * @return the undo log record + */ + static UndoLogRecord loadFromBuffer(Data buff, UndoLog log) { + UndoLogRecord rec = new UndoLogRecord(null, (short) 0, null); + int pos = buff.length(); + int len = buff.readInt() * Constants.FILE_BLOCK_SIZE; + rec.load(buff, log); + buff.setPos(pos + len); + return rec; + } + + /** + * Load an undo log record row using a buffer. 
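append() reserves an int at the start of the record, writes the fields, pads with fillAligned(), and finally patches the reserved slot with the total length in FILE_BLOCK_SIZE units. A simplified sketch of that framing (assumed block size; plain ByteBuffer instead of H2's Data class):

```java
import java.nio.ByteBuffer;

public class BlockRecord {
    static final int FILE_BLOCK_SIZE = 16; // assumption for the sketch

    static byte[] encode(byte[] payload) {
        int raw = 4 + payload.length; // 4-byte length prefix + payload
        // round up to the next multiple of the block size
        int aligned = (raw + FILE_BLOCK_SIZE - 1) / FILE_BLOCK_SIZE * FILE_BLOCK_SIZE;
        ByteBuffer buff = ByteBuffer.allocate(aligned);
        buff.putInt(aligned / FILE_BLOCK_SIZE); // length stored in blocks
        buff.put(payload);                      // remainder stays zero-padded
        return buff.array();
    }

    public static void main(String[] args) {
        byte[] rec = encode("hello".getBytes());
        System.out.println(rec.length);                    // one full block
        System.out.println(ByteBuffer.wrap(rec).getInt()); // block count
    }
}
```

Storing lengths and positions in block units is what lets filePos fit in an int even for large temporary files.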
+ * + * @param buff the buffer + * @param file the source file + * @param log the log + */ + void load(Data buff, FileStore file, UndoLog log) { + int min = Constants.FILE_BLOCK_SIZE; + log.seek(filePos); + buff.reset(); + file.readFully(buff.getBytes(), 0, min); + int len = buff.readInt() * Constants.FILE_BLOCK_SIZE; + buff.checkCapacity(len); + if (len - min > 0) { + file.readFully(buff.getBytes(), min, len - min); + } + int oldOp = operation; + load(buff, log); + if (SysProperties.CHECK) { + if (operation != oldOp) { + DbException.throwInternalError("operation=" + operation + " op=" + oldOp); + } + } + } + + private void load(Data buff, UndoLog log) { + operation = (short) buff.readInt(); + boolean deleted = buff.readByte() == 1; + table = log.getTable(buff.readInt()); + long key = buff.readLong(); + int sessionId = buff.readInt(); + int columnCount = buff.readInt(); + Value[] values = new Value[columnCount]; + for (int i = 0; i < columnCount; i++) { + values[i] = buff.readValue(); + } + row = getTable().getDatabase().createRow(values, Row.MEMORY_CALCULATE); + row.setKey(key); + row.setDeleted(deleted); + row.setSessionId(sessionId); + state = IN_MEMORY_INVALID; + } + + /** + * Get the table. + * + * @return the table + */ + public Table getTable() { + return table; + } + + /** + * Get the position in the file. + * + * @return the file position + */ + public long getFilePos() { + return filePos; + } + + /** + * This method is called after the operation was committed. + * It commits the change to the indexes. + */ + void commit() { + table.commit(operation, row); + } + + /** + * Get the row that was deleted or inserted. + * + * @return the row + */ + public Row getRow() { + return row; + } + + /** + * Change the state from IN_MEMORY to IN_MEMORY_INVALID. This method is + * called if a later record was read from the temporary file, and therefore + * the position could have changed. 
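The append()/load() pair must stay symmetric: load() reads fields back in exactly the order append() wrote them, which is why the SysProperties.CHECK block verifies the operation code survived the round trip. A self-contained round-trip sketch (plain java.io streams standing in for H2's Data buffer):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTrip {
    static byte[] append(int op, long key, String value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(op);      // operation first
        out.writeLong(key);    // then the row key
        out.writeUTF(value);   // then the payload
        return bytes.toByteArray();
    }

    static String load(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int op = in.readInt();     // read back in exactly the order written
        long key = in.readLong();
        return op + ":" + key + ":" + in.readUTF();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(load(append(1, 42L, "row")));
    }
}
```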
+ */ + void invalidatePos() { + if (this.state == IN_MEMORY) { + state = IN_MEMORY_INVALID; + } + } +} diff --git a/modules/h2/src/main/java/org/h2/engine/User.java b/modules/h2/src/main/java/org/h2/engine/User.java new file mode 100644 index 0000000000000..c9d65b895917a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/User.java @@ -0,0 +1,274 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.util.ArrayList; +import java.util.Arrays; +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.schema.Schema; +import org.h2.security.SHA256; +import org.h2.table.MetaTable; +import org.h2.table.RangeTable; +import org.h2.table.Table; +import org.h2.table.TableType; +import org.h2.table.TableView; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * Represents a user object. + */ +public class User extends RightOwner { + + private final boolean systemUser; + private byte[] salt; + private byte[] passwordHash; + private boolean admin; + + public User(Database database, int id, String userName, boolean systemUser) { + super(database, id, userName, Trace.USER); + this.systemUser = systemUser; + } + + public void setAdmin(boolean admin) { + this.admin = admin; + } + + public boolean isAdmin() { + return admin; + } + + /** + * Set the salt and hash of the password for this user. + * + * @param salt the salt + * @param hash the password hash + */ + public void setSaltAndHash(byte[] salt, byte[] hash) { + this.salt = salt; + this.passwordHash = hash; + } + + /** + * Set the user name password hash. A random salt is generated as well. + * The parameter is filled with zeros after use. 
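The salt-then-hash scheme setUserPasswordHash() applies can be sketched with plain JDK classes. The helper below is a hypothetical stand-in; H2's SHA256.getHashWithSalt may combine the inputs differently:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class SaltedHash {
    // Hash the password together with a per-user random salt, so equal
    // passwords do not produce equal stored hashes.
    static byte[] hashWithSalt(byte[] password, byte[] salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        md.update(password);
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt); // fresh salt per user
        byte[] stored = hashWithSalt("secret".getBytes(), salt);
        // verification recomputes the hash from the candidate password
        byte[] candidate = hashWithSalt("secret".getBytes(), salt);
        System.out.println(MessageDigest.isEqual(stored, candidate));
    }
}
```

Only the salt and the resulting hash need to be persisted; the password itself is never stored.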
+ * + * @param userPasswordHash the user name password hash + */ + public void setUserPasswordHash(byte[] userPasswordHash) { + if (userPasswordHash != null) { + if (userPasswordHash.length == 0) { + salt = passwordHash = userPasswordHash; + } else { + salt = new byte[Constants.SALT_LEN]; + MathUtils.randomBytes(salt); + passwordHash = SHA256.getHashWithSalt(userPasswordHash, salt); + } + } + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getCreateSQL() { + return getCreateSQL(true); + } + + @Override + public String getDropSQL() { + return null; + } + + /** + * Checks that this user has the given rights for this database object. + * + * @param table the database object + * @param rightMask the rights required + * @throws DbException if this user does not have the required rights + */ + public void checkRight(Table table, int rightMask) { + if (!hasRight(table, rightMask)) { + throw DbException.get(ErrorCode.NOT_ENOUGH_RIGHTS_FOR_1, table.getSQL()); + } + } + + /** + * See if this user has the given rights for this database object. 
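hasRight() layers several checks: admin status short-circuits everything, then direct and role-inherited grants are resolved recursively via isRightGrantedRecursive. A simplified model of that recursive resolution (hypothetical types, not H2's Right/Role classes):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RightsSketch {
    static final int SELECT = 1, INSERT = 2; // bit-mask rights

    static class Grantee {
        boolean admin;
        Map<String, Integer> grants = new HashMap<>();
        List<Grantee> roles = List.of();

        boolean hasRight(String table, int mask) {
            if (admin) {
                return true; // admins pass every check
            }
            if ((grants.getOrDefault(table, 0) & mask) == mask) {
                return true; // directly granted
            }
            for (Grantee role : roles) {
                if (role.hasRight(table, mask)) {
                    return true; // role grants apply to members, recursively
                }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        Grantee readers = new Grantee();
        readers.grants.put("T", SELECT);
        Grantee alice = new Grantee();
        alice.roles = List.of(readers);
        System.out.println(alice.hasRight("T", SELECT)); // via the role
        System.out.println(alice.hasRight("T", INSERT)); // never granted
    }
}
```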
+ * + * @param table the database object, or null for schema-only check + * @param rightMask the rights required + * @return true if the user has the rights + */ + public boolean hasRight(Table table, int rightMask) { + if (rightMask != Right.SELECT && !systemUser && table != null) { + table.checkWritingAllowed(); + } + if (admin) { + return true; + } + Role publicRole = database.getPublicRole(); + if (publicRole.isRightGrantedRecursive(table, rightMask)) { + return true; + } + if (table instanceof MetaTable || table instanceof RangeTable) { + // everybody has access to the metadata information + return true; + } + if (table != null) { + if (hasRight(null, Right.ALTER_ANY_SCHEMA)) { + return true; + } + TableType tableType = table.getTableType(); + if (TableType.VIEW == tableType) { + TableView v = (TableView) table; + if (v.getOwner() == this) { + // the owner of a view has access: + // SELECT * FROM (SELECT * FROM ...) + return true; + } + } else if (tableType == null) { + // function table + return true; + } + if (table.isTemporary() && !table.isGlobalTemporary()) { + // the owner has all rights on local temporary tables + return true; + } + } + return isRightGrantedRecursive(table, rightMask); + } + + /** + * Get the CREATE SQL statement for this object. + * + * @param password true if the password (actually the salt and hash) should + * be returned + * @return the SQL statement + */ + public String getCreateSQL(boolean password) { + StringBuilder buff = new StringBuilder("CREATE USER IF NOT EXISTS "); + buff.append(getSQL()); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + if (password) { + buff.append(" SALT '"). + append(StringUtils.convertBytesToHex(salt)). + append("' HASH '"). + append(StringUtils.convertBytesToHex(passwordHash)). 
+ append('\''); + } else { + buff.append(" PASSWORD ''"); + } + if (admin) { + buff.append(" ADMIN"); + } + return buff.toString(); + } + + /** + * Check the password of this user. + * + * @param userPasswordHash the password data (the user password hash) + * @return true if the user password hash is correct + */ + boolean validateUserPasswordHash(byte[] userPasswordHash) { + if (userPasswordHash.length == 0 && passwordHash.length == 0) { + return true; + } + if (userPasswordHash.length == 0) { + userPasswordHash = SHA256.getKeyPasswordHash(getName(), new char[0]); + } + byte[] hash = SHA256.getHashWithSalt(userPasswordHash, salt); + return Utils.compareSecure(hash, passwordHash); + } + + /** + * Check if this user has admin rights. An exception is thrown if he does + * not have them. + * + * @throws DbException if this user is not an admin + */ + public void checkAdmin() { + if (!admin) { + throw DbException.get(ErrorCode.ADMIN_RIGHTS_REQUIRED); + } + } + + /** + * Check if this user has schema admin rights. An exception is thrown if he + * does not have them. 
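validateUserPasswordHash() compares hashes with Utils.compareSecure rather than Arrays.equals: an early-exit comparison leaks, through timing, how many leading bytes matched. A sketch of a constant-time comparison:

```java
public class CompareSecure {
    static boolean compareSecure(byte[] a, byte[] b) {
        if (a.length != b.length) {
            return false;
        }
        int diff = 0;
        for (int i = 0; i < a.length; i++) {
            diff |= a[i] ^ b[i]; // accumulate differences, never branch out
        }
        return diff == 0;
    }

    public static void main(String[] args) {
        System.out.println(compareSecure(new byte[] {1, 2}, new byte[] {1, 2}));
        System.out.println(compareSecure(new byte[] {1, 2}, new byte[] {1, 3}));
    }
}
```

The JDK's MessageDigest.isEqual provides the same guarantee if you prefer a standard-library call.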
+ * + * @throws DbException if this user is not a schema admin + */ + public void checkSchemaAdmin() { + if (!hasRight(null, Right.ALTER_ANY_SCHEMA)) { + throw DbException.get(ErrorCode.ADMIN_RIGHTS_REQUIRED); + } + } + + @Override + public int getType() { + return DbObject.USER; + } + + @Override + public ArrayList getChildren() { + ArrayList children = New.arrayList(); + for (Right right : database.getAllRights()) { + if (right.getGrantee() == this) { + children.add(right); + } + } + for (Schema schema : database.getAllSchemas()) { + if (schema.getOwner() == this) { + children.add(schema); + } + } + return children; + } + + @Override + public void removeChildrenAndResources(Session session) { + for (Right right : database.getAllRights()) { + if (right.getGrantee() == this) { + database.removeDatabaseObject(session, right); + } + } + database.removeMeta(session, getId()); + salt = null; + Arrays.fill(passwordHash, (byte) 0); + passwordHash = null; + invalidate(); + } + + @Override + public void checkRename() { + // ok + } + + /** + * Check that this user does not own any schema. An exception is thrown if + * he owns one or more schemas. + * + * @throws DbException if this user owns a schema + */ + public void checkOwnsNoSchemas() { + for (Schema s : database.getAllSchemas()) { + if (this == s.getOwner()) { + throw DbException.get(ErrorCode.CANNOT_DROP_2, getName(), s.getName()); + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/UserAggregate.java b/modules/h2/src/main/java/org/h2/engine/UserAggregate.java new file mode 100644 index 0000000000000..0056f64bf1165 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/UserAggregate.java @@ -0,0 +1,129 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.engine; + +import java.sql.Connection; +import java.sql.SQLException; +import org.h2.api.Aggregate; +import org.h2.api.AggregateFunction; +import org.h2.command.Parser; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.table.Table; +import org.h2.util.JdbcUtils; +import org.h2.value.DataType; + +/** + * Represents a user-defined aggregate function. + */ +public class UserAggregate extends DbObjectBase { + + private String className; + private Class javaClass; + + public UserAggregate(Database db, int id, String name, String className, + boolean force) { + initDbObjectBase(db, id, name, Trace.FUNCTION); + this.className = className; + if (!force) { + getInstance(); + } + } + + public Aggregate getInstance() { + if (javaClass == null) { + javaClass = JdbcUtils.loadUserClass(className); + } + Object obj; + try { + obj = javaClass.newInstance(); + Aggregate agg; + if (obj instanceof Aggregate) { + agg = (Aggregate) obj; + } else { + agg = new AggregateWrapper((AggregateFunction) obj); + } + return agg; + } catch (Exception e) { + throw DbException.convert(e); + } + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getDropSQL() { + return "DROP AGGREGATE IF EXISTS " + getSQL(); + } + + @Override + public String getCreateSQL() { + return "CREATE FORCE AGGREGATE " + getSQL() + + " FOR " + Parser.quoteIdentifier(className); + } + + @Override + public int getType() { + return DbObject.AGGREGATE; + } + + @Override + public synchronized void removeChildrenAndResources(Session session) { + database.removeMeta(session, getId()); + className = null; + javaClass = null; + invalidate(); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("AGGREGATE"); + } + + public String getJavaClassName() { + return this.className; + } + + /** + * Wrap 
{@link AggregateFunction} in order to behave as + * {@link org.h2.api.Aggregate} + **/ + private static class AggregateWrapper implements Aggregate { + private final AggregateFunction aggregateFunction; + + AggregateWrapper(AggregateFunction aggregateFunction) { + this.aggregateFunction = aggregateFunction; + } + + @Override + public void init(Connection conn) throws SQLException { + aggregateFunction.init(conn); + } + + @Override + public int getInternalType(int[] inputTypes) throws SQLException { + int[] sqlTypes = new int[inputTypes.length]; + for (int i = 0; i < inputTypes.length; i++) { + sqlTypes[i] = DataType.convertTypeToSQLType(inputTypes[i]); + } + return DataType.convertSQLTypeToValueType(aggregateFunction.getType(sqlTypes)); + } + + @Override + public void add(Object value) throws SQLException { + aggregateFunction.add(value); + } + + @Override + public Object getResult() throws SQLException { + return aggregateFunction.getResult(); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/UserDataType.java b/modules/h2/src/main/java/org/h2/engine/UserDataType.java new file mode 100644 index 0000000000000..552d6bb678058 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/UserDataType.java @@ -0,0 +1,62 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.engine; + +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.table.Column; +import org.h2.table.Table; + +/** + * Represents a domain (user-defined data type). 
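AggregateWrapper above is a classic adapter: the legacy AggregateFunction is wrapped to satisfy the Aggregate interface, converting type codes at the boundary. A reduced sketch of the pattern, with simplified interfaces and made-up conversion tables standing in for DataType.convertTypeToSQLType and DataType.convertSQLTypeToValueType:

```java
public class AdapterSketch {
    interface LegacyFn { int type(int[] sqlTypes); }      // old contract
    interface ModernFn { int internalType(int[] valueTypes); } // new contract

    // hypothetical conversions between internal and SQL type codes
    static int toSqlType(int v) { return v + 100; }
    static int toValueType(int s) { return s - 100; }

    static ModernFn wrap(LegacyFn legacy) {
        return valueTypes -> {
            int[] sqlTypes = new int[valueTypes.length];
            for (int i = 0; i < valueTypes.length; i++) {
                sqlTypes[i] = toSqlType(valueTypes[i]); // adapt the inputs
            }
            return toValueType(legacy.type(sqlTypes));  // adapt the result
        };
    }

    public static void main(String[] args) {
        LegacyFn legacy = sqlTypes -> sqlTypes[0]; // echoes first SQL type
        System.out.println(wrap(legacy).internalType(new int[] {5}));
    }
}
```

Callers only ever see the modern interface; the legacy class needs no changes.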
+ */ +public class UserDataType extends DbObjectBase { + + private Column column; + + public UserDataType(Database database, int id, String name) { + initDbObjectBase(database, id, name, Trace.DATABASE); + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getDropSQL() { + return "DROP DOMAIN IF EXISTS " + getSQL(); + } + + @Override + public String getCreateSQL() { + return "CREATE DOMAIN " + getSQL() + " AS " + column.getCreateSQL(); + } + + public Column getColumn() { + return column; + } + + @Override + public int getType() { + return DbObject.USER_DATATYPE; + } + + @Override + public void removeChildrenAndResources(Session session) { + database.removeMeta(session, getId()); + } + + @Override + public void checkRename() { + // ok + } + + public void setColumn(Column column) { + this.column = column; + } + +} diff --git a/modules/h2/src/main/java/org/h2/engine/package.html b/modules/h2/src/main/java/org/h2/engine/package.html new file mode 100644 index 0000000000000..f10f25cfa619a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/engine/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Contains high level classes of the database and classes that don't fit in another sub-package. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/expression/Aggregate.java b/modules/h2/src/main/java/org/h2/expression/Aggregate.java new file mode 100644 index 0000000000000..115aa6becf335 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Aggregate.java @@ -0,0 +1,797 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import org.h2.api.ErrorCode; +import org.h2.command.dml.Select; +import org.h2.command.dml.SelectOrderBy; +import org.h2.engine.Session; +import org.h2.index.Cursor; +import org.h2.index.Index; +import org.h2.message.DbException; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.ColumnResolver; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueDouble; +import org.h2.value.ValueInt; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; + +/** + * Implements the integrated aggregate functions, such as COUNT, MAX, SUM. + */ +public class Aggregate extends Expression { + + public enum AggregateType { + /** + * The aggregate type for COUNT(*). + */ + COUNT_ALL, + + /** + * The aggregate type for COUNT(expression). + */ + COUNT, + + /** + * The aggregate type for GROUP_CONCAT(...). + */ + GROUP_CONCAT, + + /** + * The aggregate type for SUM(expression). + */ + SUM, + + /** + * The aggregate type for MIN(expression). + */ + MIN, + + /** + * The aggregate type for MAX(expression). 
+     */
+    MAX,
+
+    /**
+     * The aggregate type for AVG(expression).
+     */
+    AVG,
+
+    /**
+     * The aggregate type for STDDEV_POP(expression).
+     */
+    STDDEV_POP,
+
+    /**
+     * The aggregate type for STDDEV_SAMP(expression).
+     */
+    STDDEV_SAMP,
+
+    /**
+     * The aggregate type for VAR_POP(expression).
+     */
+    VAR_POP,
+
+    /**
+     * The aggregate type for VAR_SAMP(expression).
+     */
+    VAR_SAMP,
+
+    /**
+     * The aggregate type for BOOL_OR(expression).
+     */
+    BOOL_OR,
+
+    /**
+     * The aggregate type for BOOL_AND(expression).
+     */
+    BOOL_AND,
+
+    /**
+     * The aggregate type for BIT_OR(expression).
+     */
+    BIT_OR,
+
+    /**
+     * The aggregate type for BIT_AND(expression).
+     */
+    BIT_AND,
+
+    /**
+     * The aggregate type for SELECTIVITY(expression).
+     */
+    SELECTIVITY,
+
+    /**
+     * The aggregate type for HISTOGRAM(expression).
+     */
+    HISTOGRAM,
+
+    /**
+     * The aggregate type for MEDIAN(expression).
+     */
+    MEDIAN,
+    /**
+     * The aggregate type for ARRAY_AGG(expression).
+     */
+    ARRAY_AGG
+    }
+
+    private static final HashMap<String, AggregateType> AGGREGATES = new HashMap<>(26);
+
+    private final AggregateType type;
+    private final Select select;
+    private final boolean distinct;
+
+    private Expression on;
+    private Expression groupConcatSeparator;
+    private ArrayList<SelectOrderBy> groupConcatOrderList;
+    private ArrayList<SelectOrderBy> arrayAggOrderList;
+    private SortOrder groupConcatSort;
+    private SortOrder arrayOrderSort;
+    private int dataType, scale;
+    private long precision;
+    private int displaySize;
+    private int lastGroupRowId;
+
+    private Expression filterCondition;
+
+    /**
+     * Create a new aggregate object.
+     *
+     * @param type the aggregate type
+     * @param on the aggregated expression
+     * @param select the select statement
+     * @param distinct if distinct is used
+     */
+    public Aggregate(AggregateType type, Expression on, Select select, boolean distinct) {
+        this.type = type;
+        this.on = on;
+        this.select = select;
+        this.distinct = distinct;
+    }
+
+    static {
+        /*
+         * Update initial size of AGGREGATES after editing the following list.
+         */
+        addAggregate("COUNT", AggregateType.COUNT);
+        addAggregate("SUM", AggregateType.SUM);
+        addAggregate("MIN", AggregateType.MIN);
+        addAggregate("MAX", AggregateType.MAX);
+        addAggregate("AVG", AggregateType.AVG);
+        addAggregate("GROUP_CONCAT", AggregateType.GROUP_CONCAT);
+        // PostgreSQL compatibility: string_agg(expression, delimiter)
+        addAggregate("STRING_AGG", AggregateType.GROUP_CONCAT);
+        addAggregate("STDDEV_SAMP", AggregateType.STDDEV_SAMP);
+        addAggregate("STDDEV", AggregateType.STDDEV_SAMP);
+        addAggregate("STDDEV_POP", AggregateType.STDDEV_POP);
+        addAggregate("STDDEVP", AggregateType.STDDEV_POP);
+        addAggregate("VAR_POP", AggregateType.VAR_POP);
+        addAggregate("VARP", AggregateType.VAR_POP);
+        addAggregate("VAR_SAMP", AggregateType.VAR_SAMP);
+        addAggregate("VAR", AggregateType.VAR_SAMP);
+        addAggregate("VARIANCE", AggregateType.VAR_SAMP);
+        addAggregate("BOOL_OR", AggregateType.BOOL_OR);
+        // HSQLDB compatibility, but conflicts with x > SOME(...)
+        addAggregate("SOME", AggregateType.BOOL_OR);
+        addAggregate("BOOL_AND", AggregateType.BOOL_AND);
+        // HSQLDB compatibility, but conflicts with x > EVERY(...)
+        addAggregate("EVERY", AggregateType.BOOL_AND);
+        addAggregate("SELECTIVITY", AggregateType.SELECTIVITY);
+        addAggregate("HISTOGRAM", AggregateType.HISTOGRAM);
+        addAggregate("BIT_OR", AggregateType.BIT_OR);
+        addAggregate("BIT_AND", AggregateType.BIT_AND);
+        addAggregate("MEDIAN", AggregateType.MEDIAN);
+        addAggregate("ARRAY_AGG", AggregateType.ARRAY_AGG);
+    }
+
+    private static void addAggregate(String name, AggregateType type) {
+        AGGREGATES.put(name, type);
+    }
+
+    /**
+     * Get the aggregate type for this name, or null if no aggregate has been
+     * found.
+     *
+     * @param name the aggregate function name
+     * @return null if no aggregate function has been found, or the aggregate type
+     */
+    public static AggregateType getAggregateType(String name) {
+        return AGGREGATES.get(name);
+    }
+
+    /**
+     * Set the order for GROUP_CONCAT() aggregate.
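The static initializer registers several SQL spellings for one aggregate type, so lookup is a plain map probe that returns null for non-aggregate names. In miniature:

```java
import java.util.HashMap;
import java.util.Map;

public class AggregateRegistry {
    enum Type { GROUP_CONCAT, STDDEV_SAMP }

    private static final Map<String, Type> AGGREGATES = new HashMap<>();

    static {
        AGGREGATES.put("GROUP_CONCAT", Type.GROUP_CONCAT);
        AGGREGATES.put("STRING_AGG", Type.GROUP_CONCAT); // PostgreSQL alias
        AGGREGATES.put("STDDEV_SAMP", Type.STDDEV_SAMP);
        AGGREGATES.put("STDDEV", Type.STDDEV_SAMP);      // short alias
    }

    static Type getAggregateType(String name) {
        return AGGREGATES.get(name); // null means: not an aggregate
    }

    public static void main(String[] args) {
        System.out.println(getAggregateType("STRING_AGG"));
        System.out.println(getAggregateType("NOT_AN_AGG"));
    }
}
```

The parser uses the null result to fall back to ordinary function resolution.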
+ * + * @param orderBy the order by list + */ + public void setGroupConcatOrder(ArrayList orderBy) { + this.groupConcatOrderList = orderBy; + } + + /** + * Set the order for ARRAY_AGG() aggregate. + * + * @param orderBy the order by list + */ + public void setArrayAggOrder(ArrayList orderBy) { + this.arrayAggOrderList = orderBy; + } + + /** + * Set the separator for the GROUP_CONCAT() aggregate. + * + * @param separator the separator expression + */ + public void setGroupConcatSeparator(Expression separator) { + this.groupConcatSeparator = separator; + } + + /** + * Sets the FILTER condition. + * + * @param filterCondition condition + */ + public void setFilterCondition(Expression filterCondition) { + this.filterCondition = filterCondition; + } + + private SortOrder initOrder(ArrayList orderList, Session session) { + int size = orderList.size(); + int[] index = new int[size]; + int[] sortType = new int[size]; + for (int i = 0; i < size; i++) { + SelectOrderBy o = orderList.get(i); + index[i] = i + 1; + int order = o.descending ? SortOrder.DESCENDING : SortOrder.ASCENDING; + sortType[i] = order; + } + return new SortOrder(session.getDatabase(), index, sortType, null); + } + + @Override + public void updateAggregate(Session session) { + // TODO aggregates: check nested MIN(MAX(ID)) and so on + // if (on != null) { + // on.updateAggregate(); + // } + HashMap group = select.getCurrentGroup(); + if (group == null) { + // this is a different level (the enclosing query) + return; + } + + int groupRowId = select.getCurrentGroupRowId(); + if (lastGroupRowId == groupRowId) { + // already visited + return; + } + lastGroupRowId = groupRowId; + + AggregateData data = (AggregateData) group.get(this); + if (data == null) { + data = AggregateData.create(type); + group.put(this, data); + } + Value v = on == null ? 
null : on.getValue(session); + if (type == AggregateType.GROUP_CONCAT) { + if (v != ValueNull.INSTANCE) { + v = v.convertTo(Value.STRING); + if (groupConcatOrderList != null) { + int size = groupConcatOrderList.size(); + Value[] array = new Value[1 + size]; + array[0] = v; + for (int i = 0; i < size; i++) { + SelectOrderBy o = groupConcatOrderList.get(i); + array[i + 1] = o.expression.getValue(session); + } + v = ValueArray.get(array); + } + } + } + if (type == AggregateType.ARRAY_AGG) { + if (v != ValueNull.INSTANCE) { + if (arrayAggOrderList != null) { + int size = arrayAggOrderList.size(); + Value[] array = new Value[1 + size]; + array[0] = v; + for (int i = 0; i < size; i++) { + SelectOrderBy o = arrayAggOrderList.get(i); + array[i + 1] = o.expression.getValue(session); + } + v = ValueArray.get(array); + } + } + } + if (filterCondition != null) { + if (!filterCondition.getBooleanValue(session)) { + return; + } + } + data.add(session.getDatabase(), dataType, distinct, v); + } + + @Override + public Value getValue(Session session) { + if (select.isQuickAggregateQuery()) { + switch (type) { + case COUNT: + case COUNT_ALL: + Table table = select.getTopTableFilter().getTable(); + return ValueLong.get(table.getRowCount(session)); + case MIN: + case MAX: { + boolean first = type == AggregateType.MIN; + Index index = getMinMaxColumnIndex(); + int sortType = index.getIndexColumns()[0].sortType; + if ((sortType & SortOrder.DESCENDING) != 0) { + first = !first; + } + Cursor cursor = index.findFirstOrLast(session, first); + SearchRow row = cursor.getSearchRow(); + Value v; + if (row == null) { + v = ValueNull.INSTANCE; + } else { + v = row.getValue(index.getColumns()[0].getColumnId()); + } + return v; + } + case MEDIAN: { + return AggregateDataMedian.getResultFromIndex(session, on, dataType); + } + default: + DbException.throwInternalError("type=" + type); + } + } + HashMap group = select.getCurrentGroup(); + if (group == null) { + throw 
DbException.get(ErrorCode.INVALID_USE_OF_AGGREGATE_FUNCTION_1, getSQL()); + } + AggregateData data = (AggregateData) group.get(this); + if (data == null) { + data = AggregateData.create(type); + } + Value v = data.getValue(session.getDatabase(), dataType, distinct); + if (type == AggregateType.GROUP_CONCAT) { + ArrayList list = ((AggregateDataArrayCollecting) data).getList(); + if (list == null || list.isEmpty()) { + return ValueNull.INSTANCE; + } + if (groupConcatOrderList != null) { + final SortOrder sortOrder = groupConcatSort; + Collections.sort(list, new Comparator() { + @Override + public int compare(Value v1, Value v2) { + Value[] a1 = ((ValueArray) v1).getList(); + Value[] a2 = ((ValueArray) v2).getList(); + return sortOrder.compare(a1, a2); + } + }); + } + StatementBuilder buff = new StatementBuilder(); + String sep = groupConcatSeparator == null ? + "," : groupConcatSeparator.getValue(session).getString(); + for (Value val : list) { + String s; + if (val.getType() == Value.ARRAY) { + s = ((ValueArray) val).getList()[0].getString(); + } else { + s = val.getString(); + } + if (s == null) { + continue; + } + if (sep != null) { + buff.appendExceptFirst(sep); + } + buff.append(s); + } + v = ValueString.get(buff.toString()); + } else if (type == AggregateType.ARRAY_AGG) { + ArrayList list = ((AggregateDataArrayCollecting) data).getList(); + if (list == null || list.isEmpty()) { + return ValueNull.INSTANCE; + } + if (arrayAggOrderList != null) { + final SortOrder sortOrder = arrayOrderSort; + Collections.sort(list, new Comparator() { + @Override + public int compare(Value v1, Value v2) { + Value[] a1 = ((ValueArray) v1).getList(); + Value[] a2 = ((ValueArray) v2).getList(); + return sortOrder.compare(a1, a2); + } + }); + } + v = ValueArray.get(list.toArray(new Value[list.size()])); + } + return v; + } + + @Override + public int getType() { + return dataType; + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + if (on != null) { + 
on.mapColumns(resolver, level); + } + if (groupConcatOrderList != null) { + for (SelectOrderBy o : groupConcatOrderList) { + o.expression.mapColumns(resolver, level); + } + } + if (arrayAggOrderList != null) { + for (SelectOrderBy o : arrayAggOrderList) { + o.expression.mapColumns(resolver, level); + } + } + if (groupConcatSeparator != null) { + groupConcatSeparator.mapColumns(resolver, level); + } + if (filterCondition != null) { + filterCondition.mapColumns(resolver, level); + } + } + + @Override + public Expression optimize(Session session) { + if (on != null) { + on = on.optimize(session); + dataType = on.getType(); + scale = on.getScale(); + precision = on.getPrecision(); + displaySize = on.getDisplaySize(); + } + if (groupConcatOrderList != null) { + for (SelectOrderBy o : groupConcatOrderList) { + o.expression = o.expression.optimize(session); + } + groupConcatSort = initOrder(groupConcatOrderList, session); + } + if (arrayAggOrderList != null) { + for (SelectOrderBy o : arrayAggOrderList) { + o.expression = o.expression.optimize(session); + } + arrayOrderSort = initOrder(arrayAggOrderList, session); + } + if (groupConcatSeparator != null) { + groupConcatSeparator = groupConcatSeparator.optimize(session); + } + if (filterCondition != null) { + filterCondition = filterCondition.optimize(session); + } + switch (type) { + case GROUP_CONCAT: + dataType = Value.STRING; + scale = 0; + precision = displaySize = Integer.MAX_VALUE; + break; + case COUNT_ALL: + case COUNT: + dataType = Value.LONG; + scale = 0; + precision = ValueLong.PRECISION; + displaySize = ValueLong.DISPLAY_SIZE; + break; + case SELECTIVITY: + dataType = Value.INT; + scale = 0; + precision = ValueInt.PRECISION; + displaySize = ValueInt.DISPLAY_SIZE; + break; + case HISTOGRAM: + dataType = Value.ARRAY; + scale = 0; + precision = displaySize = Integer.MAX_VALUE; + break; + case SUM: + if (dataType == Value.BOOLEAN) { + // example: sum(id > 3) (count the rows) + dataType = Value.LONG; + } else if 
(!DataType.supportsAdd(dataType)) { + throw DbException.get(ErrorCode.SUM_OR_AVG_ON_WRONG_DATATYPE_1, getSQL()); + } else { + dataType = DataType.getAddProofType(dataType); + } + break; + case AVG: + if (!DataType.supportsAdd(dataType)) { + throw DbException.get(ErrorCode.SUM_OR_AVG_ON_WRONG_DATATYPE_1, getSQL()); + } + break; + case MIN: + case MAX: + case MEDIAN: + break; + case STDDEV_POP: + case STDDEV_SAMP: + case VAR_POP: + case VAR_SAMP: + dataType = Value.DOUBLE; + precision = ValueDouble.PRECISION; + displaySize = ValueDouble.DISPLAY_SIZE; + scale = 0; + break; + case BOOL_AND: + case BOOL_OR: + dataType = Value.BOOLEAN; + precision = ValueBoolean.PRECISION; + displaySize = ValueBoolean.DISPLAY_SIZE; + scale = 0; + break; + case BIT_AND: + case BIT_OR: + if (!DataType.supportsAdd(dataType)) { + throw DbException.get(ErrorCode.SUM_OR_AVG_ON_WRONG_DATATYPE_1, getSQL()); + } + break; + case ARRAY_AGG: + dataType = Value.ARRAY; + scale = 0; + precision = displaySize = Integer.MAX_VALUE; + break; + default: + DbException.throwInternalError("type=" + type); + } + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + if (on != null) { + on.setEvaluatable(tableFilter, b); + } + if (groupConcatOrderList != null) { + for (SelectOrderBy o : groupConcatOrderList) { + o.expression.setEvaluatable(tableFilter, b); + } + } + if (arrayAggOrderList != null) { + for (SelectOrderBy o : arrayAggOrderList) { + o.expression.setEvaluatable(tableFilter, b); + } + } + if (groupConcatSeparator != null) { + groupConcatSeparator.setEvaluatable(tableFilter, b); + } + if (filterCondition != null) { + filterCondition.setEvaluatable(tableFilter, b); + } + } + + @Override + public int getScale() { + return scale; + } + + @Override + public long getPrecision() { + return precision; + } + + @Override + public int getDisplaySize() { + return displaySize; + } + + private String getSQLGroupConcat() { + StatementBuilder buff = new 
StatementBuilder("GROUP_CONCAT("); + if (distinct) { + buff.append("DISTINCT "); + } + buff.append(on.getSQL()); + if (groupConcatOrderList != null) { + buff.append(" ORDER BY "); + for (SelectOrderBy o : groupConcatOrderList) { + buff.appendExceptFirst(", "); + buff.append(o.expression.getSQL()); + if (o.descending) { + buff.append(" DESC"); + } + } + } + if (groupConcatSeparator != null) { + buff.append(" SEPARATOR ").append(groupConcatSeparator.getSQL()); + } + buff.append(')'); + if (filterCondition != null) { + buff.append(" FILTER (WHERE ").append(filterCondition.getSQL()).append(')'); + } + return buff.toString(); + } + + private String getSQLArrayAggregate() { + StatementBuilder buff = new StatementBuilder("ARRAY_AGG("); + if (distinct) { + buff.append("DISTINCT "); + } + buff.append(on.getSQL()); + if (arrayAggOrderList != null) { + buff.append(" ORDER BY "); + for (SelectOrderBy o : arrayAggOrderList) { + buff.appendExceptFirst(", "); + buff.append(o.expression.getSQL()); + if (o.descending) { + buff.append(" DESC"); + } + } + } + buff.append(')'); + if (filterCondition != null) { + buff.append(" FILTER (WHERE ").append(filterCondition.getSQL()).append(')'); + } + return buff.toString(); + } + + @Override + public String getSQL() { + String text; + switch (type) { + case GROUP_CONCAT: + return getSQLGroupConcat(); + case COUNT_ALL: + return "COUNT(*)"; + case COUNT: + text = "COUNT"; + break; + case SELECTIVITY: + text = "SELECTIVITY"; + break; + case HISTOGRAM: + text = "HISTOGRAM"; + break; + case SUM: + text = "SUM"; + break; + case MIN: + text = "MIN"; + break; + case MAX: + text = "MAX"; + break; + case AVG: + text = "AVG"; + break; + case STDDEV_POP: + text = "STDDEV_POP"; + break; + case STDDEV_SAMP: + text = "STDDEV_SAMP"; + break; + case VAR_POP: + text = "VAR_POP"; + break; + case VAR_SAMP: + text = "VAR_SAMP"; + break; + case BOOL_AND: + text = "BOOL_AND"; + break; + case BOOL_OR: + text = "BOOL_OR"; + break; + case BIT_AND: + text = "BIT_AND"; 
+ break; + case BIT_OR: + text = "BIT_OR"; + break; + case MEDIAN: + text = "MEDIAN"; + break; + case ARRAY_AGG: + return getSQLArrayAggregate(); + default: + throw DbException.throwInternalError("type=" + type); + } + if (distinct) { + text += "(DISTINCT " + on.getSQL() + ')'; + } else { + text += StringUtils.enclose(on.getSQL()); + } + if (filterCondition != null) { + text += " FILTER (WHERE " + filterCondition.getSQL() + ')'; + } + return text; + } + + private Index getMinMaxColumnIndex() { + if (on instanceof ExpressionColumn) { + ExpressionColumn col = (ExpressionColumn) on; + Column column = col.getColumn(); + TableFilter filter = col.getTableFilter(); + if (filter != null) { + Table table = filter.getTable(); + return table.getIndexForColumn(column, true, false); + } + } + return null; + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + if (filterCondition != null && !filterCondition.isEverything(visitor)) { + return false; + } + if (visitor.getType() == ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL) { + switch (type) { + case COUNT: + if (!distinct && on.getNullable() == Column.NOT_NULLABLE) { + return visitor.getTable().canGetRowCount(); + } + return false; + case COUNT_ALL: + return visitor.getTable().canGetRowCount(); + case MIN: + case MAX: + Index index = getMinMaxColumnIndex(); + return index != null; + case MEDIAN: + if (distinct) { + return false; + } + return AggregateDataMedian.getMedianColumnIndex(on) != null; + default: + return false; + } + } + if (on != null && !on.isEverything(visitor)) { + return false; + } + if (groupConcatSeparator != null && + !groupConcatSeparator.isEverything(visitor)) { + return false; + } + if (groupConcatOrderList != null) { + for (SelectOrderBy o : groupConcatOrderList) { + if (!o.expression.isEverything(visitor)) { + return false; + } + } + } + if (arrayAggOrderList != null) { + for (SelectOrderBy o : arrayAggOrderList) { + if (!o.expression.isEverything(visitor)) { + return false; + } 
+ } + } + return true; + } + + @Override + public int getCost() { + int cost = 1; + if (on != null) { + cost += on.getCost(); + } + if (filterCondition != null) { + cost += filterCondition.getCost(); + } + return cost; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateData.java b/modules/h2/src/main/java/org/h2/expression/AggregateData.java new file mode 100644 index 0000000000000..37a8c429c0d24 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateData.java @@ -0,0 +1,62 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Database; +import org.h2.expression.Aggregate.AggregateType; +import org.h2.value.Value; + +/** + * Abstract class for the computation of an aggregate. + */ +abstract class AggregateData { + + /** + * Create an AggregateData object of the correct sub-type. + * + * @param aggregateType the type of the aggregate operation + * @return the aggregate data object of the specified type + */ + static AggregateData create(AggregateType aggregateType) { + if (aggregateType == AggregateType.SELECTIVITY) { + return new AggregateDataSelectivity(); + } else if (aggregateType == AggregateType.GROUP_CONCAT) { + return new AggregateDataArrayCollecting(); + } else if (aggregateType == AggregateType.ARRAY_AGG) { + return new AggregateDataArrayCollecting(); + } else if (aggregateType == AggregateType.COUNT_ALL) { + return new AggregateDataCountAll(); + } else if (aggregateType == AggregateType.COUNT) { + return new AggregateDataCount(); + } else if (aggregateType == AggregateType.HISTOGRAM) { + return new AggregateDataHistogram(); + } else if (aggregateType == AggregateType.MEDIAN) { + return new AggregateDataMedian(); + } else { + return new AggregateDataDefault(aggregateType); + } + } + + /** + * Add a value to this aggregate. 
+ * + * @param database the database + * @param dataType the datatype of the computed result + * @param distinct if the calculation should be distinct + * @param v the value + */ + abstract void add(Database database, int dataType, boolean distinct, Value v); + + /** + * Get the aggregate result. + * + * @param database the database + * @param dataType the datatype of the computed result + * @param distinct if distinct is used + * @return the value + */ + abstract Value getValue(Database database, int dataType, boolean distinct); +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateDataArrayCollecting.java b/modules/h2/src/main/java/org/h2/expression/AggregateDataArrayCollecting.java new file mode 100644 index 0000000000000..a67fa2b81ed3f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateDataArrayCollecting.java @@ -0,0 +1,60 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.ArrayList; +import org.h2.engine.Database; +import org.h2.util.New; +import org.h2.util.ValueHashMap; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * Data stored while calculating a GROUP_CONCAT/ARRAY_AGG aggregate. 
+ */ +class AggregateDataArrayCollecting extends AggregateData { + private ArrayList<Value> list; + private ValueHashMap<AggregateDataArrayCollecting> distinctValues; + + @Override + void add(Database database, int dataType, boolean distinct, Value v) { + if (v == ValueNull.INSTANCE) { + return; + } + if (distinct) { + if (distinctValues == null) { + distinctValues = ValueHashMap.newInstance(); + } + distinctValues.put(v, this); + return; + } + if (list == null) { + list = New.arrayList(); + } + list.add(v); + } + + @Override + Value getValue(Database database, int dataType, boolean distinct) { + if (distinct) { + distinct(database, dataType); + } + return null; + } + + ArrayList<Value> getList() { + return list; + } + + private void distinct(Database database, int dataType) { + if (distinctValues == null) { + return; + } + for (Value v : distinctValues.keys()) { + add(database, dataType, false, v); + } + } +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateDataCount.java b/modules/h2/src/main/java/org/h2/expression/AggregateDataCount.java new file mode 100644 index 0000000000000..395e0b58cfecd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateDataCount.java @@ -0,0 +1,48 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Database; +import org.h2.util.ValueHashMap; +import org.h2.value.Value; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; + +/** + * Data stored while calculating an aggregate.
+ */ +class AggregateDataCount extends AggregateData { + private long count; + private ValueHashMap distinctValues; + + @Override + void add(Database database, int dataType, boolean distinct, Value v) { + if (v == ValueNull.INSTANCE) { + return; + } + count++; + if (distinct) { + if (distinctValues == null) { + distinctValues = ValueHashMap.newInstance(); + } + distinctValues.put(v, this); + } + } + + @Override + Value getValue(Database database, int dataType, boolean distinct) { + if (distinct) { + if (distinctValues != null) { + count = distinctValues.size(); + } else { + count = 0; + } + } + Value v = ValueLong.get(count); + return v.convertTo(dataType); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateDataCountAll.java b/modules/h2/src/main/java/org/h2/expression/AggregateDataCountAll.java new file mode 100644 index 0000000000000..5ed43ff2311f4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateDataCountAll.java @@ -0,0 +1,37 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Database; +import org.h2.message.DbException; +import org.h2.value.Value; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; + +/** + * Data stored while calculating a COUNT(*) aggregate. + */ +class AggregateDataCountAll extends AggregateData { + private long count; + + @Override + void add(Database database, int dataType, boolean distinct, Value v) { + if (distinct) { + throw DbException.throwInternalError(); + } + count++; + } + + @Override + Value getValue(Database database, int dataType, boolean distinct) { + if (distinct) { + throw DbException.throwInternalError(); + } + Value v = ValueLong.get(count); + return v == null ? 
ValueNull.INSTANCE : v.convertTo(dataType); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateDataDefault.java b/modules/h2/src/main/java/org/h2/expression/AggregateDataDefault.java new file mode 100644 index 0000000000000..49b6fcb3deb3f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateDataDefault.java @@ -0,0 +1,205 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Database; +import org.h2.expression.Aggregate.AggregateType; +import org.h2.message.DbException; +import org.h2.util.ValueHashMap; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueDouble; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; + +/** + * Data stored while calculating an aggregate. + */ +class AggregateDataDefault extends AggregateData { + private final AggregateType aggregateType; + private long count; + private ValueHashMap distinctValues; + private Value value; + private double m2, mean; + + /** + * @param aggregateType the type of the aggregate operation + */ + AggregateDataDefault(AggregateType aggregateType) { + this.aggregateType = aggregateType; + } + + @Override + void add(Database database, int dataType, boolean distinct, Value v) { + if (v == ValueNull.INSTANCE) { + return; + } + count++; + if (distinct) { + if (distinctValues == null) { + distinctValues = ValueHashMap.newInstance(); + } + distinctValues.put(v, this); + return; + } + switch (aggregateType) { + case SUM: + if (value == null) { + value = v.convertTo(dataType); + } else { + v = v.convertTo(value.getType()); + value = value.add(v); + } + break; + case AVG: + if (value == null) { + value = v.convertTo(DataType.getAddProofType(dataType)); + } else { + v = v.convertTo(value.getType()); + value = value.add(v); + } + 
break; + case MIN: + if (value == null || database.compare(v, value) < 0) { + value = v; + } + break; + case MAX: + if (value == null || database.compare(v, value) > 0) { + value = v; + } + break; + case STDDEV_POP: + case STDDEV_SAMP: + case VAR_POP: + case VAR_SAMP: { + // Using Welford's method, see also + // http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance + // http://www.johndcook.com/standard_deviation.html + double x = v.getDouble(); + if (count == 1) { + mean = x; + m2 = 0; + } else { + double delta = x - mean; + mean += delta / count; + m2 += delta * (x - mean); + } + break; + } + case BOOL_AND: + v = v.convertTo(Value.BOOLEAN); + if (value == null) { + value = v; + } else { + value = ValueBoolean.get(value.getBoolean() && v.getBoolean()); + } + break; + case BOOL_OR: + v = v.convertTo(Value.BOOLEAN); + if (value == null) { + value = v; + } else { + value = ValueBoolean.get(value.getBoolean() || v.getBoolean()); + } + break; + case BIT_AND: + if (value == null) { + value = v.convertTo(dataType); + } else { + value = ValueLong.get(value.getLong() & v.getLong()).convertTo(dataType); + } + break; + case BIT_OR: + if (value == null) { + value = v.convertTo(dataType); + } else { + value = ValueLong.get(value.getLong() | v.getLong()).convertTo(dataType); + } + break; + default: + DbException.throwInternalError("type=" + aggregateType); + } + } + + @Override + Value getValue(Database database, int dataType, boolean distinct) { + if (distinct) { + count = 0; + groupDistinct(database, dataType); + } + Value v = null; + switch (aggregateType) { + case SUM: + case MIN: + case MAX: + case BIT_OR: + case BIT_AND: + case BOOL_OR: + case BOOL_AND: + v = value; + break; + case AVG: + if (value != null) { + v = divide(value, count); + } + break; + case STDDEV_POP: { + if (count < 1) { + return ValueNull.INSTANCE; + } + v = ValueDouble.get(Math.sqrt(m2 / count)); + break; + } + case STDDEV_SAMP: { + if (count < 2) { + return ValueNull.INSTANCE; + } + v = 
ValueDouble.get(Math.sqrt(m2 / (count - 1))); + break; + } + case VAR_POP: { + if (count < 1) { + return ValueNull.INSTANCE; + } + v = ValueDouble.get(m2 / count); + break; + } + case VAR_SAMP: { + if (count < 2) { + return ValueNull.INSTANCE; + } + v = ValueDouble.get(m2 / (count - 1)); + break; + } + default: + DbException.throwInternalError("type=" + aggregateType); + } + return v == null ? ValueNull.INSTANCE : v.convertTo(dataType); + } + + private static Value divide(Value a, long by) { + if (by == 0) { + return ValueNull.INSTANCE; + } + int type = Value.getHigherOrder(a.getType(), Value.LONG); + Value b = ValueLong.get(by).convertTo(type); + a = a.convertTo(type).divide(b); + return a; + } + + private void groupDistinct(Database database, int dataType) { + if (distinctValues == null) { + return; + } + count = 0; + for (Value v : distinctValues.keys()) { + add(database, dataType, false, v); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateDataHistogram.java b/modules/h2/src/main/java/org/h2/expression/AggregateDataHistogram.java new file mode 100644 index 0000000000000..1629270a67912 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateDataHistogram.java @@ -0,0 +1,78 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.Arrays; +import java.util.Comparator; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.util.ValueHashMap; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueLong; + +/** + * Data stored while calculating a HISTOGRAM aggregate. 
+ */ +class AggregateDataHistogram extends AggregateData { + private long count; + private ValueHashMap distinctValues; + + @Override + void add(Database database, int dataType, boolean distinct, Value v) { + if (distinctValues == null) { + distinctValues = ValueHashMap.newInstance(); + } + AggregateDataHistogram a = distinctValues.get(v); + if (a == null) { + if (distinctValues.size() < Constants.SELECTIVITY_DISTINCT_COUNT) { + a = new AggregateDataHistogram(); + distinctValues.put(v, a); + } + } + if (a != null) { + a.count++; + } + } + + @Override + Value getValue(Database database, int dataType, boolean distinct) { + if (distinct) { + count = 0; + groupDistinct(database, dataType); + } + ValueArray[] values = new ValueArray[distinctValues.size()]; + int i = 0; + for (Value dv : distinctValues.keys()) { + AggregateDataHistogram d = distinctValues.get(dv); + values[i] = ValueArray.get(new Value[] { dv, ValueLong.get(d.count) }); + i++; + } + final CompareMode compareMode = database.getCompareMode(); + Arrays.sort(values, new Comparator() { + @Override + public int compare(ValueArray v1, ValueArray v2) { + Value a1 = v1.getList()[0]; + Value a2 = v2.getList()[0]; + return a1.compareTo(a2, compareMode); + } + }); + Value v = ValueArray.get(values); + return v.convertTo(dataType); + } + + private void groupDistinct(Database database, int dataType) { + if (distinctValues == null) { + return; + } + count = 0; + for (Value v : distinctValues.keys()) { + add(database, dataType, false, v); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateDataMedian.java b/modules/h2/src/main/java/org/h2/expression/AggregateDataMedian.java new file mode 100644 index 0000000000000..9210070f1aa6e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateDataMedian.java @@ -0,0 +1,278 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.math.BigDecimal; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Comparator; +import java.util.HashSet; + +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.Cursor; +import org.h2.index.Index; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.DateTimeUtils; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueDecimal; +import org.h2.value.ValueDouble; +import org.h2.value.ValueFloat; +import org.h2.value.ValueInt; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + * Data stored while calculating a MEDIAN aggregate. + */ +class AggregateDataMedian extends AggregateData { + private Collection values; + + private static boolean isNullsLast(Index index) { + IndexColumn ic = index.getIndexColumns()[0]; + int sortType = ic.sortType; + return (sortType & SortOrder.NULLS_LAST) != 0 + || (sortType & SortOrder.DESCENDING) != 0 && (sortType & SortOrder.NULLS_FIRST) == 0; + } + + /** + * Get the index (if any) for the column specified in the median aggregate. 
+ * + * @param on the expression (usually a column expression) + * @return the index, or null + */ + static Index getMedianColumnIndex(Expression on) { + if (on instanceof ExpressionColumn) { + ExpressionColumn col = (ExpressionColumn) on; + Column column = col.getColumn(); + TableFilter filter = col.getTableFilter(); + if (filter != null) { + Table table = filter.getTable(); + ArrayList indexes = table.getIndexes(); + Index result = null; + if (indexes != null) { + boolean nullable = column.isNullable(); + for (int i = 1, size = indexes.size(); i < size; i++) { + Index index = indexes.get(i); + if (!index.canFindNext()) { + continue; + } + if (!index.isFirstColumn(column)) { + continue; + } + // Prefer index without nulls last for nullable columns + if (result == null || result.getColumns().length > index.getColumns().length + || nullable && isNullsLast(result) && !isNullsLast(index)) { + result = index; + } + } + } + return result; + } + } + return null; + } + + /** + * Get the result from the index. + * + * @param session the session + * @param on the expression + * @param dataType the data type + * @return the result + */ + static Value getResultFromIndex(Session session, Expression on, int dataType) { + Index index = getMedianColumnIndex(on); + long count = index.getRowCount(session); + if (count == 0) { + return ValueNull.INSTANCE; + } + Cursor cursor = index.find(session, null, null); + cursor.next(); + int columnId = index.getColumns()[0].getColumnId(); + ExpressionColumn expr = (ExpressionColumn) on; + if (expr.getColumn().isNullable()) { + boolean hasNulls = false; + SearchRow row; + // Try to skip nulls from the start first with the same cursor that + // will be used to read values. 
+ while (count > 0) { + row = cursor.getSearchRow(); + if (row == null) { + return ValueNull.INSTANCE; + } + if (row.getValue(columnId) == ValueNull.INSTANCE) { + count--; + cursor.next(); + hasNulls = true; + } else { + break; + } + } + if (count == 0) { + return ValueNull.INSTANCE; + } + // If no nulls found and if index orders nulls last create a second + // cursor to count nulls at the end. + if (!hasNulls && isNullsLast(index)) { + TableFilter tableFilter = expr.getTableFilter(); + SearchRow check = tableFilter.getTable().getTemplateSimpleRow(true); + check.setValue(columnId, ValueNull.INSTANCE); + Cursor nullsCursor = index.find(session, check, check); + while (nullsCursor.next()) { + count--; + } + if (count <= 0) { + return ValueNull.INSTANCE; + } + } + } + long skip = (count - 1) / 2; + for (int i = 0; i < skip; i++) { + cursor.next(); + } + SearchRow row = cursor.getSearchRow(); + if (row == null) { + return ValueNull.INSTANCE; + } + Value v = row.getValue(columnId); + if (v == ValueNull.INSTANCE) { + return v; + } + if ((count & 1) == 0) { + cursor.next(); + row = cursor.getSearchRow(); + if (row == null) { + return v; + } + Value v2 = row.getValue(columnId); + if (v2 == ValueNull.INSTANCE) { + return v; + } + return getMedian(v, v2, dataType, session.getDatabase().getCompareMode()); + } + return v; + } + + @Override + void add(Database database, int dataType, boolean distinct, Value v) { + if (v == ValueNull.INSTANCE) { + return; + } + Collection c = values; + if (c == null) { + values = c = distinct ? 
new HashSet<Value>() : new ArrayList<Value>(); + } + c.add(v); + } + + @Override + Value getValue(Database database, int dataType, boolean distinct) { + Collection<Value> c = values; + // Non-null collection cannot be empty here + if (c == null) { + return ValueNull.INSTANCE; + } + if (distinct && c instanceof ArrayList) { + c = new HashSet<>(c); + } + Value[] a = c.toArray(new Value[0]); + final CompareMode mode = database.getCompareMode(); + Arrays.sort(a, new Comparator<Value>() { + @Override + public int compare(Value o1, Value o2) { + return o1.compareTo(o2, mode); + } + }); + int len = a.length; + int idx = len / 2; + Value v1 = a[idx]; + if ((len & 1) == 1) { + return v1.convertTo(dataType); + } + return getMedian(a[idx - 1], v1, dataType, mode); + } + + private static Value getMedian(Value v0, Value v1, int dataType, CompareMode mode) { + if (v0.compareTo(v1, mode) == 0) { + return v0.convertTo(dataType); + } + switch (dataType) { + case Value.BYTE: + case Value.SHORT: + case Value.INT: + return ValueInt.get((v0.getInt() + v1.getInt()) / 2).convertTo(dataType); + case Value.LONG: + return ValueLong.get((v0.getLong() + v1.getLong()) / 2); + case Value.DECIMAL: + return ValueDecimal.get(v0.getBigDecimal().add(v1.getBigDecimal()).divide(BigDecimal.valueOf(2))); + case Value.FLOAT: + return ValueFloat.get((v0.getFloat() + v1.getFloat()) / 2); + case Value.DOUBLE: + return ValueDouble.get((v0.getDouble() + v1.getDouble()) / 2); + case Value.TIME: { + ValueTime t0 = (ValueTime) v0.convertTo(Value.TIME), t1 = (ValueTime) v1.convertTo(Value.TIME); + return ValueTime.fromNanos((t0.getNanos() + t1.getNanos()) / 2); + } + case Value.DATE: { + ValueDate d0 = (ValueDate) v0.convertTo(Value.DATE), d1 = (ValueDate) v1.convertTo(Value.DATE); + return ValueDate.fromDateValue( + DateTimeUtils.dateValueFromAbsoluteDay((DateTimeUtils.absoluteDayFromDateValue(d0.getDateValue()) + + DateTimeUtils.absoluteDayFromDateValue(d1.getDateValue())) / 2)); + } + case Value.TIMESTAMP: { + ValueTimestamp ts0 =
(ValueTimestamp) v0.convertTo(Value.TIMESTAMP), + ts1 = (ValueTimestamp) v1.convertTo(Value.TIMESTAMP); + long dateSum = DateTimeUtils.absoluteDayFromDateValue(ts0.getDateValue()) + + DateTimeUtils.absoluteDayFromDateValue(ts1.getDateValue()); + long nanos = (ts0.getTimeNanos() + ts1.getTimeNanos()) / 2; + if ((dateSum & 1) != 0) { + nanos += DateTimeUtils.NANOS_PER_DAY / 2; + if (nanos >= DateTimeUtils.NANOS_PER_DAY) { + nanos -= DateTimeUtils.NANOS_PER_DAY; + dateSum++; + } + } + return ValueTimestamp.fromDateValueAndNanos(DateTimeUtils.dateValueFromAbsoluteDay(dateSum / 2), nanos); + } + case Value.TIMESTAMP_TZ: { + ValueTimestampTimeZone ts0 = (ValueTimestampTimeZone) v0.convertTo(Value.TIMESTAMP_TZ), + ts1 = (ValueTimestampTimeZone) v1.convertTo(Value.TIMESTAMP_TZ); + long dateSum = DateTimeUtils.absoluteDayFromDateValue(ts0.getDateValue()) + + DateTimeUtils.absoluteDayFromDateValue(ts1.getDateValue()); + long nanos = (ts0.getTimeNanos() + ts1.getTimeNanos()) / 2; + int offset = ts0.getTimeZoneOffsetMins() + ts1.getTimeZoneOffsetMins(); + if ((dateSum & 1) != 0) { + nanos += DateTimeUtils.NANOS_PER_DAY / 2; + } + if ((offset & 1) != 0) { + nanos += 30_000_000_000L; + } + if (nanos >= DateTimeUtils.NANOS_PER_DAY) { + nanos -= DateTimeUtils.NANOS_PER_DAY; + dateSum++; + } + return ValueTimestampTimeZone.fromDateValueAndNanos(DateTimeUtils.dateValueFromAbsoluteDay(dateSum / 2), + nanos, (short) (offset / 2)); + } + default: + // Just return first + return v0.convertTo(dataType); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/AggregateDataSelectivity.java b/modules/h2/src/main/java/org/h2/expression/AggregateDataSelectivity.java new file mode 100644 index 0000000000000..480562cf281a3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/AggregateDataSelectivity.java @@ -0,0 +1,56 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.util.IntIntHashMap; +import org.h2.value.Value; +import org.h2.value.ValueInt; + +/** + * Data stored while calculating a SELECTIVITY aggregate. + */ +class AggregateDataSelectivity extends AggregateData { + private long count; + private IntIntHashMap distinctHashes; + private double m2; + + @Override + void add(Database database, int dataType, boolean distinct, Value v) { + count++; + if (distinctHashes == null) { + distinctHashes = new IntIntHashMap(); + } + int size = distinctHashes.size(); + if (size > Constants.SELECTIVITY_DISTINCT_COUNT) { + distinctHashes = new IntIntHashMap(); + m2 += size; + } + int hash = v.hashCode(); + // the value -1 is not supported + distinctHashes.put(hash, 1); + } + + @Override + Value getValue(Database database, int dataType, boolean distinct) { + if (distinct) { + count = 0; + } + Value v = null; + int s = 0; + if (count == 0) { + s = 0; + } else { + m2 += distinctHashes.size(); + m2 = 100 * m2 / count; + s = (int) m2; + s = s <= 0 ? 1 : s > 100 ? 100 : s; + } + v = ValueInt.get(s); + return v.convertTo(dataType); + } +} diff --git a/modules/h2/src/main/java/org/h2/expression/Alias.java b/modules/h2/src/main/java/org/h2/expression/Alias.java new file mode 100644 index 0000000000000..cae84fe5539b5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Alias.java @@ -0,0 +1,126 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.command.Parser; +import org.h2.engine.Session; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; + +/** + * A column alias as in SELECT 'Hello' AS NAME ... 
+ */ +public class Alias extends Expression { + + private final String alias; + private Expression expr; + private final boolean aliasColumnName; + + public Alias(Expression expression, String alias, boolean aliasColumnName) { + this.expr = expression; + this.alias = alias; + this.aliasColumnName = aliasColumnName; + } + + @Override + public Expression getNonAliasExpression() { + return expr; + } + + @Override + public Value getValue(Session session) { + return expr.getValue(session); + } + + @Override + public int getType() { + return expr.getType(); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + expr.mapColumns(resolver, level); + } + + @Override + public Expression optimize(Session session) { + expr = expr.optimize(session); + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + expr.setEvaluatable(tableFilter, b); + } + + @Override + public int getScale() { + return expr.getScale(); + } + + @Override + public long getPrecision() { + return expr.getPrecision(); + } + + @Override + public int getDisplaySize() { + return expr.getDisplaySize(); + } + + @Override + public boolean isAutoIncrement() { + return expr.isAutoIncrement(); + } + + @Override + public String getSQL() { + return expr.getSQL() + " AS " + Parser.quoteIdentifier(alias); + } + + @Override + public void updateAggregate(Session session) { + expr.updateAggregate(session); + } + + @Override + public String getAlias() { + return alias; + } + + @Override + public int getNullable() { + return expr.getNullable(); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return expr.isEverything(visitor); + } + + @Override + public int getCost() { + return expr.getCost(); + } + + @Override + public String getTableName() { + if (aliasColumnName) { + return super.getTableName(); + } + return expr.getTableName(); + } + + @Override + public String getColumnName() { + if (!(expr instanceof ExpressionColumn) 
|| aliasColumnName) { + return super.getColumnName(); + } + return expr.getColumnName(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/CompareLike.java b/modules/h2/src/main/java/org/h2/expression/CompareLike.java new file mode 100644 index 0000000000000..63214fa040b63 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/CompareLike.java @@ -0,0 +1,518 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.regex.Pattern; +import java.util.regex.PatternSyntaxException; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.IndexCondition; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; + +/** + * Pattern matching comparison expression: WHERE NAME LIKE ? 
+ */ +public class CompareLike extends Condition { + + private static final int MATCH = 0, ONE = 1, ANY = 2; + + private final CompareMode compareMode; + private final String defaultEscape; + private Expression left; + private Expression right; + private Expression escape; + + private boolean isInit; + + private char[] patternChars; + private String patternString; + /** one of MATCH / ONE / ANY */ + private int[] patternTypes; + private int patternLength; + + private final boolean regexp; + private Pattern patternRegexp; + + private boolean ignoreCase; + private boolean fastCompare; + private boolean invalidPattern; + /** indicates that we can shortcut the comparison and use startsWith */ + private boolean shortcutToStartsWith; + /** indicates that we can shortcut the comparison and use endsWith */ + private boolean shortcutToEndsWith; + /** indicates that we can shortcut the comparison and use contains */ + private boolean shortcutToContains; + + public CompareLike(Database db, Expression left, Expression right, + Expression escape, boolean regexp) { + this(db.getCompareMode(), db.getSettings().defaultEscape, left, right, + escape, regexp); + } + + public CompareLike(CompareMode compareMode, String defaultEscape, + Expression left, Expression right, Expression escape, boolean regexp) { + this.compareMode = compareMode; + this.defaultEscape = defaultEscape; + this.regexp = regexp; + this.left = left; + this.right = right; + this.escape = escape; + } + + private static Character getEscapeChar(String s) { + return s == null || s.length() == 0 ? 
null : s.charAt(0); + } + + @Override + public String getSQL() { + String sql; + if (regexp) { + sql = left.getSQL() + " REGEXP " + right.getSQL(); + } else { + sql = left.getSQL() + " LIKE " + right.getSQL(); + if (escape != null) { + sql += " ESCAPE " + escape.getSQL(); + } + } + return "(" + sql + ")"; + } + + @Override + public Expression optimize(Session session) { + left = left.optimize(session); + right = right.optimize(session); + if (left.getType() == Value.STRING_IGNORECASE) { + ignoreCase = true; + } + if (left.isValueSet()) { + Value l = left.getValue(session); + if (l == ValueNull.INSTANCE) { + // NULL LIKE something > NULL + return ValueExpression.getNull(); + } + } + if (escape != null) { + escape = escape.optimize(session); + } + if (right.isValueSet() && (escape == null || escape.isValueSet())) { + if (left.isValueSet()) { + return ValueExpression.get(getValue(session)); + } + Value r = right.getValue(session); + if (r == ValueNull.INSTANCE) { + // something LIKE NULL > NULL + return ValueExpression.getNull(); + } + Value e = escape == null ? 
null : escape.getValue(session); + if (e == ValueNull.INSTANCE) { + return ValueExpression.getNull(); + } + String p = r.getString(); + initPattern(p, getEscapeChar(e)); + if (invalidPattern) { + return ValueExpression.getNull(); + } + if ("%".equals(p)) { + // optimization for X LIKE '%': convert to X IS NOT NULL + return new Comparison(session, + Comparison.IS_NOT_NULL, left, null).optimize(session); + } + if (isFullMatch()) { + // optimization for X LIKE 'Hello': convert to X = 'Hello' + Value value = ValueString.get(patternString); + Expression expr = ValueExpression.get(value); + return new Comparison(session, + Comparison.EQUAL, left, expr).optimize(session); + } + isInit = true; + } + return this; + } + + private Character getEscapeChar(Value e) { + if (e == null) { + return getEscapeChar(defaultEscape); + } + String es = e.getString(); + Character esc; + if (es == null) { + esc = getEscapeChar(defaultEscape); + } else if (es.length() == 0) { + esc = null; + } else if (es.length() > 1) { + throw DbException.get(ErrorCode.LIKE_ESCAPE_ERROR_1, es); + } else { + esc = es.charAt(0); + } + return esc; + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (regexp) { + return; + } + if (!(left instanceof ExpressionColumn)) { + return; + } + ExpressionColumn l = (ExpressionColumn) left; + if (filter != l.getTableFilter()) { + return; + } + // parameters are always evaluatable, but + // we need to check if the value is set + // (at prepare time) + // otherwise we would need to prepare at execute time, + // which may be slower (possibly not in this case) + if (!right.isEverything(ExpressionVisitor.INDEPENDENT_VISITOR)) { + return; + } + if (escape != null && + !escape.isEverything(ExpressionVisitor.INDEPENDENT_VISITOR)) { + return; + } + String p = right.getValue(session).getString(); + if (!isInit) { + Value e = escape == null ? 
null : escape.getValue(session); + if (e == ValueNull.INSTANCE) { + // should already be optimized + DbException.throwInternalError(); + } + initPattern(p, getEscapeChar(e)); + } + if (invalidPattern) { + return; + } + if (patternLength <= 0 || patternTypes[0] != MATCH) { + // can't use an index + return; + } + int dataType = l.getColumn().getType(); + if (dataType != Value.STRING && dataType != Value.STRING_IGNORECASE && + dataType != Value.STRING_FIXED) { + // column is not a varchar - can't use the index + return; + } + // Get the MATCH prefix and see if we can create an index condition from + // that. + int maxMatch = 0; + StringBuilder buff = new StringBuilder(); + while (maxMatch < patternLength && patternTypes[maxMatch] == MATCH) { + buff.append(patternChars[maxMatch++]); + } + String begin = buff.toString(); + if (maxMatch == patternLength) { + filter.addIndexCondition(IndexCondition.get(Comparison.EQUAL, l, + ValueExpression.get(ValueString.get(begin)))); + } else { + // TODO check if this is correct according to Unicode rules + // (code points) + String end; + if (begin.length() > 0) { + filter.addIndexCondition(IndexCondition.get( + Comparison.BIGGER_EQUAL, l, + ValueExpression.get(ValueString.get(begin)))); + char next = begin.charAt(begin.length() - 1); + // search the 'next' unicode character (or at least a character + // that is higher) + for (int i = 1; i < 2000; i++) { + end = begin.substring(0, begin.length() - 1) + (char) (next + i); + if (compareMode.compareString(begin, end, ignoreCase) == -1) { + filter.addIndexCondition(IndexCondition.get( + Comparison.SMALLER, l, + ValueExpression.get(ValueString.get(end)))); + break; + } + } + } + } + } + + @Override + public Value getValue(Session session) { + Value l = left.getValue(session); + if (l == ValueNull.INSTANCE) { + return l; + } + if (!isInit) { + Value r = right.getValue(session); + if (r == ValueNull.INSTANCE) { + return r; + } + String p = r.getString(); + Value e = escape == null ? 
null : escape.getValue(session); + if (e == ValueNull.INSTANCE) { + return ValueNull.INSTANCE; + } + initPattern(p, getEscapeChar(e)); + } + if (invalidPattern) { + return ValueNull.INSTANCE; + } + String value = l.getString(); + boolean result; + if (regexp) { + result = patternRegexp.matcher(value).find(); + } else if (shortcutToStartsWith) { + result = value.regionMatches(ignoreCase, 0, patternString, 0, patternLength - 1); + } else if (shortcutToEndsWith) { + result = value.regionMatches(ignoreCase, value.length() - + patternLength + 1, patternString, 1, patternLength - 1); + } else if (shortcutToContains) { + String p = patternString.substring(1, patternString.length() - 1); + if (ignoreCase) { + result = containsIgnoreCase(value, p); + } else { + result = value.contains(p); + } + } else { + result = compareAt(value, 0, 0, value.length(), patternChars, patternTypes); + } + return ValueBoolean.get(result); + } + + private static boolean containsIgnoreCase(String src, String what) { + final int length = what.length(); + if (length == 0) { + // Empty string is contained + return true; + } + + final char firstLo = Character.toLowerCase(what.charAt(0)); + final char firstUp = Character.toUpperCase(what.charAt(0)); + + for (int i = src.length() - length; i >= 0; i--) { + // Quick check before calling the more expensive regionMatches() + final char ch = src.charAt(i); + if (ch != firstLo && ch != firstUp) { + continue; + } + if (src.regionMatches(true, i, what, 0, length)) { + return true; + } + } + + return false; + } + + private boolean compareAt(String s, int pi, int si, int sLen, + char[] pattern, int[] types) { + for (; pi < patternLength; pi++) { + switch (types[pi]) { + case MATCH: + if ((si >= sLen) || !compare(pattern, s, pi, si++)) { + return false; + } + break; + case ONE: + if (si++ >= sLen) { + return false; + } + break; + case ANY: + if (++pi >= patternLength) { + return true; + } + while (si < sLen) { + if (compare(pattern, s, pi, si) && + compareAt(s, 
pi, si, sLen, pattern, types)) { + return true; + } + si++; + } + return false; + default: + DbException.throwInternalError("" + types[pi]); + } + } + return si == sLen; + } + + private boolean compare(char[] pattern, String s, int pi, int si) { + return pattern[pi] == s.charAt(si) || + (!fastCompare && compareMode.equalsChars(patternString, pi, s, + si, ignoreCase)); + } + + /** + * Test if the value matches the pattern. + * + * @param testPattern the pattern + * @param value the value + * @param escapeChar the escape character + * @return true if the value matches + */ + public boolean test(String testPattern, String value, char escapeChar) { + initPattern(testPattern, escapeChar); + if (invalidPattern) { + return false; + } + return compareAt(value, 0, 0, value.length(), patternChars, patternTypes); + } + + private void initPattern(String p, Character escapeChar) { + if (compareMode.getName().equals(CompareMode.OFF) && !ignoreCase) { + fastCompare = true; + } + if (regexp) { + patternString = p; + try { + if (ignoreCase) { + patternRegexp = Pattern.compile(p, Pattern.CASE_INSENSITIVE); + } else { + patternRegexp = Pattern.compile(p); + } + } catch (PatternSyntaxException e) { + throw DbException.get(ErrorCode.LIKE_ESCAPE_ERROR_1, e, p); + } + return; + } + patternLength = 0; + if (p == null) { + patternTypes = null; + patternChars = null; + return; + } + int len = p.length(); + patternChars = new char[len]; + patternTypes = new int[len]; + boolean lastAny = false; + for (int i = 0; i < len; i++) { + char c = p.charAt(i); + int type; + if (escapeChar != null && escapeChar == c) { + if (i >= len - 1) { + invalidPattern = true; + return; + } + c = p.charAt(++i); + type = MATCH; + lastAny = false; + } else if (c == '%') { + if (lastAny) { + continue; + } + type = ANY; + lastAny = true; + } else if (c == '_') { + type = ONE; + } else { + type = MATCH; + lastAny = false; + } + patternTypes[patternLength] = type; + patternChars[patternLength++] = c; + } + for (int i = 
0; i < patternLength - 1; i++) { + if ((patternTypes[i] == ANY) && (patternTypes[i + 1] == ONE)) { + patternTypes[i] = ONE; + patternTypes[i + 1] = ANY; + } + } + patternString = new String(patternChars, 0, patternLength); + + // Clear optimizations + shortcutToStartsWith = false; + shortcutToEndsWith = false; + shortcutToContains = false; + + // optimizes the common case of LIKE 'foo%' + if (compareMode.getName().equals(CompareMode.OFF) && patternLength > 1) { + int maxMatch = 0; + while (maxMatch < patternLength && patternTypes[maxMatch] == MATCH) { + maxMatch++; + } + if (maxMatch == patternLength - 1 && patternTypes[patternLength - 1] == ANY) { + shortcutToStartsWith = true; + return; + } + } + // optimizes the common case of LIKE '%foo' + if (compareMode.getName().equals(CompareMode.OFF) && patternLength > 1) { + if (patternTypes[0] == ANY) { + int maxMatch = 1; + while (maxMatch < patternLength && patternTypes[maxMatch] == MATCH) { + maxMatch++; + } + if (maxMatch == patternLength) { + shortcutToEndsWith = true; + return; + } + } + } + // optimizes the common case of LIKE '%foo%' + if (compareMode.getName().equals(CompareMode.OFF) && patternLength > 2) { + if (patternTypes[0] == ANY) { + int maxMatch = 1; + while (maxMatch < patternLength && patternTypes[maxMatch] == MATCH) { + maxMatch++; + } + if (maxMatch == patternLength - 1 && patternTypes[patternLength - 1] == ANY) { + shortcutToContains = true; + } + } + } + } + + private boolean isFullMatch() { + if (patternTypes == null) { + return false; + } + for (int type : patternTypes) { + if (type != MATCH) { + return false; + } + } + return true; + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + right.mapColumns(resolver, level); + if (escape != null) { + escape.mapColumns(resolver, level); + } + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + 
right.setEvaluatable(tableFilter, b); + if (escape != null) { + escape.setEvaluatable(tableFilter, b); + } + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + right.updateAggregate(session); + if (escape != null) { + escape.updateAggregate(session); + } + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return left.isEverything(visitor) && right.isEverything(visitor) + && (escape == null || escape.isEverything(visitor)); + } + + @Override + public int getCost() { + return left.getCost() + right.getCost() + 3; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/Comparison.java b/modules/h2/src/main/java/org/h2/expression/Comparison.java new file mode 100644 index 0000000000000..bb02f9202a24d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Comparison.java @@ -0,0 +1,610 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.ArrayList; +import java.util.Arrays; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.index.IndexCondition; +import org.h2.message.DbException; +import org.h2.table.Column; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.MathUtils; +import org.h2.value.*; + +/** + * Example comparison expressions are ID=1, NAME=NAME, NAME IS NULL. + * + * @author Thomas Mueller + * @author Noel Grandin + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public class Comparison extends Condition { + + /** + * This is a flag meaning the comparison is null safe (meaning never returns + * NULL even if one operand is NULL). Only EQUAL and NOT_EQUAL are supported + * currently. + */ + public static final int NULL_SAFE = 16; + + /** + * The comparison type meaning = as in ID=1. 
+ */ + public static final int EQUAL = 0; + + /** + * The comparison type meaning ID IS 1 (ID IS NOT DISTINCT FROM 1). + */ + public static final int EQUAL_NULL_SAFE = EQUAL | NULL_SAFE; + + /** + * The comparison type meaning >= as in ID>=1. + */ + public static final int BIGGER_EQUAL = 1; + + /** + * The comparison type meaning > as in ID>1. + */ + public static final int BIGGER = 2; + + /** + * The comparison type meaning <= as in ID<=1. + */ + public static final int SMALLER_EQUAL = 3; + + /** + * The comparison type meaning < as in ID<1. + */ + public static final int SMALLER = 4; + + /** + * The comparison type meaning <> as in ID<>1. + */ + public static final int NOT_EQUAL = 5; + + /** + * The comparison type meaning ID IS NOT 1 (ID IS DISTINCT FROM 1). + */ + public static final int NOT_EQUAL_NULL_SAFE = NOT_EQUAL | NULL_SAFE; + + /** + * The comparison type meaning IS NULL as in NAME IS NULL. + */ + public static final int IS_NULL = 6; + + /** + * The comparison type meaning IS NOT NULL as in NAME IS NOT NULL. + */ + public static final int IS_NOT_NULL = 7; + + /** + * This is a pseudo comparison type that is only used for index conditions. + * It means the comparison will always yield FALSE. Example: 1=0. + */ + public static final int FALSE = 8; + + /** + * This is a pseudo comparison type that is only used for index conditions. + * It means equals any value of a list. Example: IN(1, 2, 3). + */ + public static final int IN_LIST = 9; + + /** + * This is a pseudo comparison type that is only used for index conditions. + * It means equals any value of a list. Example: IN(SELECT ...). + */ + public static final int IN_QUERY = 10; + + /** + * This is a comparison type that is only used for spatial index + * conditions (operator "&&"). 
+ */ + public static final int SPATIAL_INTERSECTS = 11; + + private final Database database; + private int compareType; + private Expression left; + private Expression right; + + public Comparison(Session session, int compareType, Expression left, + Expression right) { + this.database = session.getDatabase(); + this.left = left; + this.right = right; + this.compareType = compareType; + } + + @Override + public String getSQL() { + String sql; + switch (compareType) { + case IS_NULL: + sql = left.getSQL() + " IS NULL"; + break; + case IS_NOT_NULL: + sql = left.getSQL() + " IS NOT NULL"; + break; + case SPATIAL_INTERSECTS: + sql = "INTERSECTS(" + left.getSQL() + ", " + right.getSQL() + ")"; + break; + default: + sql = left.getSQL() + " " + getCompareOperator(compareType) + + " " + right.getSQL(); + } + return "(" + sql + ")"; + } + + /** + * Get the comparison operator string ("=", ">",...). + * + * @param compareType the compare type + * @return the string + */ + static String getCompareOperator(int compareType) { + switch (compareType) { + case EQUAL: + return "="; + case EQUAL_NULL_SAFE: + return "IS"; + case BIGGER_EQUAL: + return ">="; + case BIGGER: + return ">"; + case SMALLER_EQUAL: + return "<="; + case SMALLER: + return "<"; + case NOT_EQUAL: + return "<>"; + case NOT_EQUAL_NULL_SAFE: + return "IS NOT"; + case SPATIAL_INTERSECTS: + return "&&"; + default: + throw DbException.throwInternalError("compareType=" + compareType); + } + } + + @Override + public Expression optimize(Session session) { + left = left.optimize(session); + if (right != null) { + right = right.optimize(session); + if (right instanceof ExpressionColumn) { + if (left.isConstant() || left instanceof Parameter) { + Expression temp = left; + left = right; + right = temp; + compareType = getReversedCompareType(compareType); + } + } + if (left instanceof ExpressionColumn) { + if (right.isConstant()) { + Value r = right.getValue(session); + if (r == ValueNull.INSTANCE) { + if ((compareType & 
NULL_SAFE) == 0) { + return ValueExpression.getNull(); + } + } + int colType = left.getType(); + int constType = r.getType(); + int resType = Value.getHigherOrder(colType, constType); + // If not, the column values will need to be promoted + // to constant type, but vise versa, then let's do this here + // once. + if (constType != resType) { + Column column = ((ExpressionColumn) left).getColumn(); + right = ValueExpression.get(r.convertTo(resType, + MathUtils.convertLongToInt(left.getPrecision()), + session.getDatabase().getMode(), column, column.getEnumerators())); + } + } else if (right instanceof Parameter) { + ((Parameter) right).setColumn( + ((ExpressionColumn) left).getColumn()); + } + } + } + if (compareType == IS_NULL || compareType == IS_NOT_NULL) { + if (left.isConstant()) { + return ValueExpression.get(getValue(session)); + } + } else { + if (SysProperties.CHECK && (left == null || right == null)) { + DbException.throwInternalError(left + " " + right); + } + if (left == ValueExpression.getNull() || + right == ValueExpression.getNull()) { + // TODO NULL handling: maybe issue a warning when comparing with + // a NULL constants + if ((compareType & NULL_SAFE) == 0) { + return ValueExpression.getNull(); + } + } + if (left.isConstant() && right.isConstant()) { + return ValueExpression.get(getValue(session)); + } + } + return this; + } + + @Override + public Value getValue(Session session) { + Value l = left.getValue(session); + if (right == null) { + boolean result; + switch (compareType) { + case IS_NULL: + result = l == ValueNull.INSTANCE; + break; + case IS_NOT_NULL: + result = !(l == ValueNull.INSTANCE); + break; + default: + throw DbException.throwInternalError("type=" + compareType); + } + return ValueBoolean.get(result); + } + if (l == ValueNull.INSTANCE) { + if ((compareType & NULL_SAFE) == 0) { + return ValueNull.INSTANCE; + } + } + Value r = right.getValue(session); + if (r == ValueNull.INSTANCE) { + if ((compareType & NULL_SAFE) == 0) { + return 
ValueNull.INSTANCE; + } + } + int dataType = Value.getHigherOrder(left.getType(), right.getType()); + if (dataType == Value.ENUM) { + String[] enumerators = getEnumerators(l, r); + l = l.convertToEnum(enumerators); + r = r.convertToEnum(enumerators); + } else { + l = l.convertTo(dataType); + r = r.convertTo(dataType); + } + boolean result = compareNotNull(database, l, r, compareType); + return ValueBoolean.get(result); + } + + private String[] getEnumerators(Value left, Value right) { + if (left.getType() == Value.ENUM) { + return ((ValueEnum) left).getEnumerators(); + } else if (right.getType() == Value.ENUM) { + return ((ValueEnum) right).getEnumerators(); + } else { + return new String[0]; + } + } + + /** + * Compare two values, given the values are not NULL. + * + * @param database the database + * @param l the first value + * @param r the second value + * @param compareType the compare type + * @return true if the comparison indicated by the comparison type evaluates + * to true + */ + static boolean compareNotNull(Database database, Value l, Value r, + int compareType) { + boolean result; + switch (compareType) { + case EQUAL: + case EQUAL_NULL_SAFE: + result = database.areEqual(l, r); + break; + case NOT_EQUAL: + case NOT_EQUAL_NULL_SAFE: + result = !database.areEqual(l, r); + break; + case BIGGER_EQUAL: + result = database.compare(l, r) >= 0; + break; + case BIGGER: + result = database.compare(l, r) > 0; + break; + case SMALLER_EQUAL: + result = database.compare(l, r) <= 0; + break; + case SMALLER: + result = database.compare(l, r) < 0; + break; + case SPATIAL_INTERSECTS: { + ValueGeometry lg = (ValueGeometry) l.convertTo(Value.GEOMETRY); + ValueGeometry rg = (ValueGeometry) r.convertTo(Value.GEOMETRY); + result = lg.intersectsBoundingBox(rg); + break; + } + default: + throw DbException.throwInternalError("type=" + compareType); + } + return result; + } + + private int getReversedCompareType(int type) { + switch (compareType) { + case EQUAL: + case 
EQUAL_NULL_SAFE: + case NOT_EQUAL: + case NOT_EQUAL_NULL_SAFE: + case SPATIAL_INTERSECTS: + return type; + case BIGGER_EQUAL: + return SMALLER_EQUAL; + case BIGGER: + return SMALLER; + case SMALLER_EQUAL: + return BIGGER_EQUAL; + case SMALLER: + return BIGGER; + default: + throw DbException.throwInternalError("type=" + compareType); + } + } + + @Override + public Expression getNotIfPossible(Session session) { + if (compareType == SPATIAL_INTERSECTS) { + return null; + } + int type = getNotCompareType(); + return new Comparison(session, type, left, right); + } + + private int getNotCompareType() { + switch (compareType) { + case EQUAL: + return NOT_EQUAL; + case EQUAL_NULL_SAFE: + return NOT_EQUAL_NULL_SAFE; + case NOT_EQUAL: + return EQUAL; + case NOT_EQUAL_NULL_SAFE: + return EQUAL_NULL_SAFE; + case BIGGER_EQUAL: + return SMALLER; + case BIGGER: + return SMALLER_EQUAL; + case SMALLER_EQUAL: + return BIGGER; + case SMALLER: + return BIGGER_EQUAL; + case IS_NULL: + return IS_NOT_NULL; + case IS_NOT_NULL: + return IS_NULL; + default: + throw DbException.throwInternalError("type=" + compareType); + } + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (!filter.getTable().isQueryComparable()) { + return; + } + ExpressionColumn l = null; + if (left instanceof ExpressionColumn) { + l = (ExpressionColumn) left; + if (filter != l.getTableFilter()) { + l = null; + } + } + if (right == null) { + if (l != null) { + switch (compareType) { + case IS_NULL: + if (session.getDatabase().getSettings().optimizeIsNull) { + filter.addIndexCondition( + IndexCondition.get( + Comparison.EQUAL_NULL_SAFE, l, + ValueExpression.getNull())); + } + } + } + return; + } + ExpressionColumn r = null; + if (right instanceof ExpressionColumn) { + r = (ExpressionColumn) right; + if (filter != r.getTableFilter()) { + r = null; + } + } + // one side must be from the current filter + if (l == null && r == null) { + return; + } + if (l != null && r != null) { 
+ return; + } + if (l == null) { + ExpressionVisitor visitor = + ExpressionVisitor.getNotFromResolverVisitor(filter); + if (!left.isEverything(visitor)) { + return; + } + } else if (r == null) { + ExpressionVisitor visitor = + ExpressionVisitor.getNotFromResolverVisitor(filter); + if (!right.isEverything(visitor)) { + return; + } + } else { + // if both sides are part of the same filter, it can't be used for + // index lookup + return; + } + boolean addIndex; + switch (compareType) { + case NOT_EQUAL: + case NOT_EQUAL_NULL_SAFE: + addIndex = false; + break; + case EQUAL: + case EQUAL_NULL_SAFE: + case BIGGER: + case BIGGER_EQUAL: + case SMALLER_EQUAL: + case SMALLER: + case SPATIAL_INTERSECTS: + addIndex = true; + break; + default: + throw DbException.throwInternalError("type=" + compareType); + } + if (addIndex) { + if (l != null) { + filter.addIndexCondition( + IndexCondition.get(compareType, l, right)); + } else if (r != null) { + int compareRev = getReversedCompareType(compareType); + filter.addIndexCondition( + IndexCondition.get(compareRev, r, left)); + } + } + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + if (right != null) { + right.setEvaluatable(tableFilter, b); + } + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + if (right != null) { + right.updateAggregate(session); + } + } + + @Override + public void addFilterConditions(TableFilter filter, boolean outerJoin) { + if (compareType == IS_NULL && outerJoin) { + // can not optimize: + // select * from test t1 left join test t2 on t1.id = t2.id + // where t2.id is null + // to + // select * from test t1 left join test t2 + // on t1.id = t2.id and t2.id is null + return; + } + super.addFilterConditions(filter, outerJoin); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + if (right != null) { + 
right.mapColumns(resolver, level); + } + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return left.isEverything(visitor) && + (right == null || right.isEverything(visitor)); + } + + @Override + public int getCost() { + return left.getCost() + (right == null ? 0 : right.getCost()) + 1; + } + + /** + * Get the other expression if this is an equals comparison and the other + * expression matches. + * + * @param match the expression that should match + * @return null if no match, the other expression if there is a match + */ + Expression getIfEquals(Expression match) { + if (compareType == EQUAL) { + String sql = match.getSQL(); + if (left.getSQL().equals(sql)) { + return right; + } else if (right.getSQL().equals(sql)) { + return left; + } + } + return null; + } + + /** + * Get an additional condition if possible. Example: given two conditions + * A=B AND B=C, the new condition A=C is returned. Given the two conditions + * A=1 OR A=2, the new condition A IN(1, 2) is returned. + * + * @param session the session + * @param other the second condition + * @param and true for AND, false for OR + * @return null or the third condition + */ + Expression getAdditional(Session session, Comparison other, boolean and) { + if (compareType == other.compareType && compareType == EQUAL) { + boolean lc = left.isConstant(); + boolean rc = right.isConstant(); + boolean l2c = other.left.isConstant(); + boolean r2c = other.right.isConstant(); + String l = left.getSQL(); + String l2 = other.left.getSQL(); + String r = right.getSQL(); + String r2 = other.right.getSQL(); + if (and) { + // a=b AND a=c + // must not compare constants. 
example: NOT(B=2 AND B=3) + if (!(rc && r2c) && l.equals(l2)) { + return new Comparison(session, EQUAL, right, other.right); + } else if (!(rc && l2c) && l.equals(r2)) { + return new Comparison(session, EQUAL, right, other.left); + } else if (!(lc && r2c) && r.equals(l2)) { + return new Comparison(session, EQUAL, left, other.right); + } else if (!(lc && l2c) && r.equals(r2)) { + return new Comparison(session, EQUAL, left, other.left); + } + } else { + // a=b OR a=c + Database db = session.getDatabase(); + if (rc && r2c && l.equals(l2)) { + return new ConditionIn(db, left, + new ArrayList<>(Arrays.asList(right, other.right))); + } else if (rc && l2c && l.equals(r2)) { + return new ConditionIn(db, left, + new ArrayList<>(Arrays.asList(right, other.left))); + } else if (lc && r2c && r.equals(l2)) { + return new ConditionIn(db, right, + new ArrayList<>(Arrays.asList(left, other.right))); + } else if (lc && l2c && r.equals(r2)) { + return new ConditionIn(db, right, + new ArrayList<>(Arrays.asList(left, other.left))); + } + } + } + return null; + } + + /** + * Get the left or the right sub-expression of this condition. + * + * @param getLeft true to get the left sub-expression, false to get the + * right sub-expression. + * @return the sub-expression + */ + public Expression getExpression(boolean getLeft) { + return getLeft ? this.left : right; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/Condition.java b/modules/h2/src/main/java/org/h2/expression/Condition.java new file mode 100644 index 0000000000000..6acdfaa856b5d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Condition.java @@ -0,0 +1,36 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.value.Value; +import org.h2.value.ValueBoolean; + +/** + * Represents a condition returning a boolean value, or NULL. 
+ */ +abstract class Condition extends Expression { + + @Override + public int getType() { + return Value.BOOLEAN; + } + + @Override + public int getScale() { + return 0; + } + + @Override + public long getPrecision() { + return ValueBoolean.PRECISION; + } + + @Override + public int getDisplaySize() { + return ValueBoolean.DISPLAY_SIZE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ConditionAndOr.java b/modules/h2/src/main/java/org/h2/expression/ConditionAndOr.java new file mode 100644 index 0000000000000..b9327ffe3621c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ConditionAndOr.java @@ -0,0 +1,297 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; + +/** + * An 'and' or 'or' condition as in WHERE ID=1 AND NAME=? + */ +public class ConditionAndOr extends Condition { + + /** + * The AND condition type as in ID=1 AND NAME='Hello'. + */ + public static final int AND = 0; + + /** + * The OR condition type as in ID=1 OR NAME='Hello'. 
+ */ + public static final int OR = 1; + + private final int andOrType; + private Expression left, right; + + public ConditionAndOr(int andOrType, Expression left, Expression right) { + this.andOrType = andOrType; + this.left = left; + this.right = right; + if (SysProperties.CHECK && (left == null || right == null)) { + DbException.throwInternalError(left + " " + right); + } + } + + @Override + public String getSQL() { + String sql; + switch (andOrType) { + case AND: + sql = left.getSQL() + "\n AND " + right.getSQL(); + break; + case OR: + sql = left.getSQL() + "\n OR " + right.getSQL(); + break; + default: + throw DbException.throwInternalError("andOrType=" + andOrType); + } + return "(" + sql + ")"; + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (andOrType == AND) { + left.createIndexConditions(session, filter); + right.createIndexConditions(session, filter); + } + } + + @Override + public Expression getNotIfPossible(Session session) { + // (NOT (A OR B)): (NOT(A) AND NOT(B)) + // (NOT (A AND B)): (NOT(A) OR NOT(B)) + Expression l = left.getNotIfPossible(session); + if (l == null) { + l = new ConditionNot(left); + } + Expression r = right.getNotIfPossible(session); + if (r == null) { + r = new ConditionNot(right); + } + int reversed = andOrType == AND ? 
OR : AND; + return new ConditionAndOr(reversed, l, r); + } + + @Override + public Value getValue(Session session) { + Value l = left.getValue(session); + Value r; + switch (andOrType) { + case AND: { + if (l != ValueNull.INSTANCE && !l.getBoolean()) { + return l; + } + r = right.getValue(session); + if (r != ValueNull.INSTANCE && !r.getBoolean()) { + return r; + } + if (l == ValueNull.INSTANCE) { + return l; + } + if (r == ValueNull.INSTANCE) { + return r; + } + return ValueBoolean.TRUE; + } + case OR: { + if (l.getBoolean()) { + return l; + } + r = right.getValue(session); + if (r.getBoolean()) { + return r; + } + if (l == ValueNull.INSTANCE) { + return l; + } + if (r == ValueNull.INSTANCE) { + return r; + } + return ValueBoolean.FALSE; + } + default: + throw DbException.throwInternalError("type=" + andOrType); + } + } + + @Override + public Expression optimize(Session session) { + // NULL handling: see wikipedia, + // http://www-cs-students.stanford.edu/~wlam/compsci/sqlnulls + left = left.optimize(session); + right = right.optimize(session); + int lc = left.getCost(), rc = right.getCost(); + if (rc < lc) { + Expression t = left; + left = right; + right = t; + } + // this optimization does not work in the following case, + // but NOT is optimized before: + // CREATE TABLE TEST(A INT, B INT); + // INSERT INTO TEST VALUES(1, NULL); + // SELECT * FROM TEST WHERE NOT (B=A AND B=0); // no rows + // SELECT * FROM TEST WHERE NOT (B=A AND B=0 AND A=0); // 1, NULL + if (session.getDatabase().getSettings().optimizeTwoEquals && + andOrType == AND) { + // try to add conditions (A=B AND B=1: add A=1) + if (left instanceof Comparison && right instanceof Comparison) { + Comparison compLeft = (Comparison) left; + Comparison compRight = (Comparison) right; + Expression added = compLeft.getAdditional( + session, compRight, true); + if (added != null) { + added = added.optimize(session); + return new ConditionAndOr(AND, this, added); + } + } + } + // TODO optimization: convert 
((A=1 AND B=2) OR (A=1 AND B=3)) to + // (A=1 AND (B=2 OR B=3)) + if (andOrType == OR && + session.getDatabase().getSettings().optimizeOr) { + // try to add conditions (A=B AND B=1: add A=1) + if (left instanceof Comparison && + right instanceof Comparison) { + Comparison compLeft = (Comparison) left; + Comparison compRight = (Comparison) right; + Expression added = compLeft.getAdditional( + session, compRight, false); + if (added != null) { + return added.optimize(session); + } + } else if (left instanceof ConditionIn && + right instanceof Comparison) { + Expression added = ((ConditionIn) left). + getAdditional((Comparison) right); + if (added != null) { + return added.optimize(session); + } + } else if (right instanceof ConditionIn && + left instanceof Comparison) { + Expression added = ((ConditionIn) right). + getAdditional((Comparison) left); + if (added != null) { + return added.optimize(session); + } + } else if (left instanceof ConditionInConstantSet && + right instanceof Comparison) { + Expression added = ((ConditionInConstantSet) left). + getAdditional(session, (Comparison) right); + if (added != null) { + return added.optimize(session); + } + } else if (right instanceof ConditionInConstantSet && + left instanceof Comparison) { + Expression added = ((ConditionInConstantSet) right). + getAdditional(session, (Comparison) left); + if (added != null) { + return added.optimize(session); + } + } + } + // TODO optimization: convert .. OR .. to UNION if the cost is lower + Value l = left.isConstant() ? left.getValue(session) : null; + Value r = right.isConstant() ? 
right.getValue(session) : null; + if (l == null && r == null) { + return this; + } + if (l != null && r != null) { + return ValueExpression.get(getValue(session)); + } + switch (andOrType) { + case AND: + if (l != null) { + if (l != ValueNull.INSTANCE && !l.getBoolean()) { + return ValueExpression.get(l); + } else if (l.getBoolean()) { + return right; + } + } else if (r != null) { + if (r != ValueNull.INSTANCE && !r.getBoolean()) { + return ValueExpression.get(r); + } else if (r.getBoolean()) { + return left; + } + } + break; + case OR: + if (l != null) { + if (l.getBoolean()) { + return ValueExpression.get(l); + } else if (l != ValueNull.INSTANCE) { + return right; + } + } else if (r != null) { + if (r.getBoolean()) { + return ValueExpression.get(r); + } else if (r != ValueNull.INSTANCE) { + return left; + } + } + break; + default: + DbException.throwInternalError("type=" + andOrType); + } + return this; + } + + @Override + public void addFilterConditions(TableFilter filter, boolean outerJoin) { + if (andOrType == AND) { + left.addFilterConditions(filter, outerJoin); + right.addFilterConditions(filter, outerJoin); + } else { + super.addFilterConditions(filter, outerJoin); + } + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + right.mapColumns(resolver, level); + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + right.setEvaluatable(tableFilter, b); + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + right.updateAggregate(session); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return left.isEverything(visitor) && right.isEverything(visitor); + } + + @Override + public int getCost() { + return left.getCost() + right.getCost(); + } + + /** + * Get the left or the right sub-expression of this condition. 
+ * + * @param getLeft true to get the left sub-expression, false to get the + * right sub-expression. + * @return the sub-expression + */ + public Expression getExpression(boolean getLeft) { + return getLeft ? this.left : right; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ConditionExists.java b/modules/h2/src/main/java/org/h2/expression/ConditionExists.java new file mode 100644 index 0000000000000..c1150a40eb415 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ConditionExists.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.command.dml.Query; +import org.h2.engine.Session; +import org.h2.result.ResultInterface; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; + +/** + * An 'exists' condition as in WHERE EXISTS(SELECT ...) + */ +public class ConditionExists extends Condition { + + private final Query query; + + public ConditionExists(Query query) { + this.query = query; + } + + @Override + public Value getValue(Session session) { + query.setSession(session); + ResultInterface result = query.query(1); + session.addTemporaryResult(result); + boolean r = result.hasNext(); + return ValueBoolean.get(r); + } + + @Override + public Expression optimize(Session session) { + session.optimizeQueryExpression(query); + return this; + } + + @Override + public String getSQL() { + return "EXISTS(\n" + StringUtils.indent(query.getPlanSQL(), 4, false) + ")"; + } + + @Override + public void updateAggregate(Session session) { + // TODO exists: is it allowed that the subquery contains aggregates? 
+ // probably not + // select id from test group by id having exists (select * from test2 + // where id=count(test.id)) + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + query.mapColumns(resolver, level + 1); + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + query.setEvaluatable(tableFilter, b); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return query.isEverything(visitor); + } + + @Override + public int getCost() { + return query.getCostAsExpression(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ConditionIn.java b/modules/h2/src/main/java/org/h2/expression/ConditionIn.java new file mode 100644 index 0000000000000..e97747573f033 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ConditionIn.java @@ -0,0 +1,212 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.ArrayList; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.IndexCondition; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; + +/** + * An 'in' condition with a list of values, as in WHERE NAME IN(...) + */ +public class ConditionIn extends Condition { + + private final Database database; + private Expression left; + private final ArrayList<Expression> valueList; + private int queryLevel; + + /** + * Create a new IN(..) condition.
+ * + * @param database the database + * @param left the expression before IN + * @param values the value list (at least one element) + */ + public ConditionIn(Database database, Expression left, + ArrayList<Expression> values) { + this.database = database; + this.left = left; + this.valueList = values; + } + + @Override + public Value getValue(Session session) { + Value l = left.getValue(session); + if (l == ValueNull.INSTANCE) { + return l; + } + boolean result = false; + boolean hasNull = false; + for (Expression e : valueList) { + Value r = e.getValue(session); + if (r == ValueNull.INSTANCE) { + hasNull = true; + } else { + r = r.convertTo(l.getType()); + result = Comparison.compareNotNull(database, l, r, Comparison.EQUAL); + if (result) { + break; + } + } + } + if (!result && hasNull) { + return ValueNull.INSTANCE; + } + return ValueBoolean.get(result); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + for (Expression e : valueList) { + e.mapColumns(resolver, level); + } + this.queryLevel = Math.max(level, this.queryLevel); + } + + @Override + public Expression optimize(Session session) { + left = left.optimize(session); + boolean constant = left.isConstant(); + if (constant && left == ValueExpression.getNull()) { + return left; + } + boolean allValuesConstant = true; + boolean allValuesNull = true; + int size = valueList.size(); + for (int i = 0; i < size; i++) { + Expression e = valueList.get(i); + e = e.optimize(session); + if (e.isConstant() && e.getValue(session) != ValueNull.INSTANCE) { + allValuesNull = false; + } + if (allValuesConstant && !e.isConstant()) { + allValuesConstant = false; + } + if (left instanceof ExpressionColumn && e instanceof Parameter) { + ((Parameter) e) + .setColumn(((ExpressionColumn) left).getColumn()); + } + valueList.set(i, e); + } + if (constant && allValuesConstant) { + return ValueExpression.get(getValue(session)); + } + if (size == 1) { + Expression right =
valueList.get(0); + Expression expr = new Comparison(session, Comparison.EQUAL, left, right); + expr = expr.optimize(session); + return expr; + } + if (allValuesConstant && !allValuesNull) { + int leftType = left.getType(); + if (leftType == Value.UNKNOWN) { + return this; + } + Expression expr = new ConditionInConstantSet(session, left, valueList); + expr = expr.optimize(session); + return expr; + } + return this; + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (!(left instanceof ExpressionColumn)) { + return; + } + ExpressionColumn l = (ExpressionColumn) left; + if (filter != l.getTableFilter()) { + return; + } + if (session.getDatabase().getSettings().optimizeInList) { + ExpressionVisitor visitor = ExpressionVisitor.getNotFromResolverVisitor(filter); + for (Expression e : valueList) { + if (!e.isEverything(visitor)) { + return; + } + } + filter.addIndexCondition(IndexCondition.getInList(l, valueList)); + } + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + for (Expression e : valueList) { + e.setEvaluatable(tableFilter, b); + } + } + + @Override + public String getSQL() { + StatementBuilder buff = new StatementBuilder("("); + buff.append(left.getSQL()).append(" IN("); + for (Expression e : valueList) { + buff.appendExceptFirst(", "); + buff.append(e.getSQL()); + } + return buff.append("))").toString(); + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + for (Expression e : valueList) { + e.updateAggregate(session); + } + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + if (!left.isEverything(visitor)) { + return false; + } + return areAllValues(visitor); + } + + private boolean areAllValues(ExpressionVisitor visitor) { + for (Expression e : valueList) { + if (!e.isEverything(visitor)) { + return false; + } + } + return true; + } + + @Override + public int 
getCost() { + int cost = left.getCost(); + for (Expression e : valueList) { + cost += e.getCost(); + } + return cost; + } + + /** + * Add an additional element if possible. Example: given two conditions + * A IN(1, 2) OR A=3, the constant 3 is added: A IN(1, 2, 3). + * + * @param other the second condition + * @return null if the condition was not added, or the new condition + */ + Expression getAdditional(Comparison other) { + Expression add = other.getIfEquals(left); + if (add != null) { + valueList.add(add); + return this; + } + return null; + } +} diff --git a/modules/h2/src/main/java/org/h2/expression/ConditionInConstantSet.java b/modules/h2/src/main/java/org/h2/expression/ConditionInConstantSet.java new file mode 100644 index 0000000000000..a9b87c93b35f2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ConditionInConstantSet.java @@ -0,0 +1,167 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.TreeSet; +import org.h2.engine.Session; +import org.h2.index.IndexCondition; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; + +/** + * Used for optimised IN(...) queries where the contents of the IN list are all + * constant and of the same type. + *

+ * Checking using a sorted TreeSet has time complexity O(log n), instead of + * O(n) for checking using a list. + */ +public class ConditionInConstantSet extends Condition { + + private Expression left; + private int queryLevel; + private final ArrayList<Expression> valueList; + private final TreeSet<Value> valueSet; + + /** + * Create a new IN(..) condition. + * + * @param session the session + * @param left the expression before IN + * @param valueList the value list (at least two elements) + */ + public ConditionInConstantSet(final Session session, Expression left, + ArrayList<Expression> valueList) { + this.left = left; + this.valueList = valueList; + this.valueSet = new TreeSet<>(new Comparator<Value>() { + @Override + public int compare(Value o1, Value o2) { + return session.getDatabase().compare(o1, o2); + } + }); + int type = left.getType(); + for (Expression expression : valueList) { + valueSet.add(expression.getValue(session).convertTo(type)); + } + } + + @Override + public Value getValue(Session session) { + Value x = left.getValue(session); + if (x == ValueNull.INSTANCE) { + return x; + } + boolean result = valueSet.contains(x); + if (!result) { + boolean setHasNull = valueSet.contains(ValueNull.INSTANCE); + if (setHasNull) { + return ValueNull.INSTANCE; + } + } + return ValueBoolean.get(result); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + this.queryLevel = Math.max(level, this.queryLevel); + } + + @Override + public Expression optimize(Session session) { + left = left.optimize(session); + return this; + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (!(left instanceof ExpressionColumn)) { + return; + } + ExpressionColumn l = (ExpressionColumn) left; + if (filter != l.getTableFilter()) { + return; + } + if (session.getDatabase().getSettings().optimizeInList) { + filter.addIndexCondition(IndexCondition.getInList(l, valueList)); + } + } + + @Override + public void
setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + } + + @Override + public String getSQL() { + StatementBuilder buff = new StatementBuilder("("); + buff.append(left.getSQL()).append(" IN("); + for (Expression e : valueList) { + buff.appendExceptFirst(", "); + buff.append(e.getSQL()); + } + return buff.append("))").toString(); + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + if (!left.isEverything(visitor)) { + return false; + } + switch (visitor.getType()) { + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + case ExpressionVisitor.DETERMINISTIC: + case ExpressionVisitor.READONLY: + case ExpressionVisitor.INDEPENDENT: + case ExpressionVisitor.EVALUATABLE: + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: + case ExpressionVisitor.NOT_FROM_RESOLVER: + case ExpressionVisitor.GET_DEPENDENCIES: + case ExpressionVisitor.QUERY_COMPARABLE: + case ExpressionVisitor.GET_COLUMNS: + return true; + default: + throw DbException.throwInternalError("type=" + visitor.getType()); + } + } + + @Override + public int getCost() { + return left.getCost(); + } + + /** + * Add an additional element if possible. Example: given two conditions + * A IN(1, 2) OR A=3, the constant 3 is added: A IN(1, 2, 3). 
+ * + * @param session the session + * @param other the second condition + * @return null if the condition was not added, or the new condition + */ + Expression getAdditional(Session session, Comparison other) { + Expression add = other.getIfEquals(left); + if (add != null) { + if (add.isConstant()) { + valueList.add(add); + valueSet.add(add.getValue(session).convertTo(left.getType())); + return this; + } + } + return null; + } +} diff --git a/modules/h2/src/main/java/org/h2/expression/ConditionInParameter.java b/modules/h2/src/main/java/org/h2/expression/ConditionInParameter.java new file mode 100644 index 0000000000000..a57303d24ffe0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ConditionInParameter.java @@ -0,0 +1,164 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.AbstractList; + +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.IndexCondition; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; + +/** + * A condition with parameter as {@code = ANY(?)}. 
+ */ +public class ConditionInParameter extends Condition { + private static final class ParameterList extends AbstractList<Expression> { + private final Parameter parameter; + + ParameterList(Parameter parameter) { + this.parameter = parameter; + } + + @Override + public Expression get(int index) { + Value value = parameter.getParamValue(); + if (value instanceof ValueArray) { + return ValueExpression.get(((ValueArray) value).getList()[index]); + } + if (index != 0) { + throw new IndexOutOfBoundsException(); + } + return ValueExpression.get(value); + } + + @Override + public int size() { + if (!parameter.isValueSet()) { + return 0; + } + Value value = parameter.getParamValue(); + if (value instanceof ValueArray) { + return ((ValueArray) value).getList().length; + } + return 1; + } + } + + private final Database database; + + private Expression left; + + private final Parameter parameter; + + /** + * Create a new {@code = ANY(?)} condition. + * + * @param database + * the database + * @param left + * the expression before {@code = ANY(?)} + * @param parameter + * parameter + */ + public ConditionInParameter(Database database, Expression left, Parameter parameter) { + this.database = database; + this.left = left; + this.parameter = parameter; + } + + @Override + public Value getValue(Session session) { + Value l = left.getValue(session); + if (l == ValueNull.INSTANCE) { + return l; + } + boolean result = false; + boolean hasNull = false; + Value value = parameter.getValue(session); + if (value instanceof ValueArray) { + for (Value r : ((ValueArray) value).getList()) { + if (r == ValueNull.INSTANCE) { + hasNull = true; + } else { + r = r.convertTo(l.getType()); + result = Comparison.compareNotNull(database, l, r, Comparison.EQUAL); + if (result) { + break; + } + } + } + } else { + if (value == ValueNull.INSTANCE) { + hasNull = true; + } else { + value = value.convertTo(l.getType()); + result = Comparison.compareNotNull(database, l, value, Comparison.EQUAL); + } + } + if (!result
&& hasNull) { + return ValueNull.INSTANCE; + } + return ValueBoolean.get(result); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + } + + @Override + public Expression optimize(Session session) { + left = left.optimize(session); + if (left.isConstant() && left == ValueExpression.getNull()) { + return left; + } + return this; + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (!(left instanceof ExpressionColumn)) { + return; + } + ExpressionColumn l = (ExpressionColumn) left; + if (filter != l.getTableFilter()) { + return; + } + filter.addIndexCondition(IndexCondition.getInList(l, new ParameterList(parameter))); + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + } + + @Override + public String getSQL() { + return '(' + left.getSQL() + " = ANY(" + parameter.getSQL() + "))"; + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return left.isEverything(visitor) && parameter.isEverything(visitor); + } + + @Override + public int getCost() { + return left.getCost(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ConditionInSelect.java b/modules/h2/src/main/java/org/h2/expression/ConditionInSelect.java new file mode 100644 index 0000000000000..3639444359f32 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ConditionInSelect.java @@ -0,0 +1,185 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.api.ErrorCode; +import org.h2.command.dml.Query; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.IndexCondition; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; + +/** + * An 'in' condition with a subquery, as in WHERE ID IN(SELECT ...) + */ +public class ConditionInSelect extends Condition { + + private final Database database; + private Expression left; + private final Query query; + private final boolean all; + private final int compareType; + private int queryLevel; + + public ConditionInSelect(Database database, Expression left, Query query, + boolean all, int compareType) { + this.database = database; + this.left = left; + this.query = query; + this.all = all; + this.compareType = compareType; + } + + @Override + public Value getValue(Session session) { + query.setSession(session); + if (!query.hasOrder()) { + query.setDistinct(true); + } + ResultInterface rows = query.query(0); + Value l = left.getValue(session); + if (!rows.hasNext()) { + return ValueBoolean.get(all); + } else if (l == ValueNull.INSTANCE) { + return l; + } + if (!session.getDatabase().getSettings().optimizeInSelect) { + return getValueSlow(rows, l); + } + if (all || (compareType != Comparison.EQUAL && + compareType != Comparison.EQUAL_NULL_SAFE)) { + return getValueSlow(rows, l); + } + int dataType = rows.getColumnType(0); + if (dataType == Value.NULL) { + return ValueBoolean.FALSE; + } + l = l.convertTo(dataType); + if (rows.containsDistinct(new Value[] { l })) { + return ValueBoolean.TRUE; + } + if (rows.containsDistinct(new Value[] { ValueNull.INSTANCE })) { + return ValueNull.INSTANCE; + } + return ValueBoolean.FALSE; + } + + private Value 
getValueSlow(ResultInterface rows, Value l) { + // this only returns the correct result if the result has at least one + // row, and if l is not null + boolean hasNull = false; + boolean result = all; + while (rows.next()) { + boolean value; + Value r = rows.currentRow()[0]; + if (r == ValueNull.INSTANCE) { + value = false; + hasNull = true; + } else { + value = Comparison.compareNotNull(database, l, r, compareType); + } + if (!value && all) { + result = false; + break; + } else if (value && !all) { + result = true; + break; + } + } + if (!result && hasNull) { + return ValueNull.INSTANCE; + } + return ValueBoolean.get(result); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + query.mapColumns(resolver, level + 1); + this.queryLevel = Math.max(level, this.queryLevel); + } + + @Override + public Expression optimize(Session session) { + left = left.optimize(session); + query.setRandomAccessResult(true); + session.optimizeQueryExpression(query); + if (query.getColumnCount() != 1) { + throw DbException.get(ErrorCode.SUBQUERY_IS_NOT_SINGLE_COLUMN); + } + // Can not optimize: the data may change + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + query.setEvaluatable(tableFilter, b); + } + + @Override + public String getSQL() { + StringBuilder buff = new StringBuilder(); + buff.append('(').append(left.getSQL()).append(' '); + if (all) { + buff.append(Comparison.getCompareOperator(compareType)). + append(" ALL"); + } else { + if (compareType == Comparison.EQUAL) { + buff.append("IN"); + } else { + buff.append(Comparison.getCompareOperator(compareType)). + append(" ANY"); + } + } + buff.append("(\n").append(StringUtils.indent(query.getPlanSQL(), 4, false)). 
+ append("))"); + return buff.toString(); + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + query.updateAggregate(session); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return left.isEverything(visitor) && query.isEverything(visitor); + } + + @Override + public int getCost() { + return left.getCost() + query.getCostAsExpression(); + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (!session.getDatabase().getSettings().optimizeInList) { + return; + } + if (!(left instanceof ExpressionColumn)) { + return; + } + ExpressionColumn l = (ExpressionColumn) left; + if (filter != l.getTableFilter()) { + return; + } + ExpressionVisitor visitor = ExpressionVisitor.getNotFromResolverVisitor(filter); + if (!query.isEverything(visitor)) { + return; + } + filter.addIndexCondition(IndexCondition.getInQuery(l, query)); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ConditionNot.java b/modules/h2/src/main/java/org/h2/expression/ConditionNot.java new file mode 100644 index 0000000000000..ff53f7393271d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ConditionNot.java @@ -0,0 +1,101 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Session; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * A NOT condition. 
+ */
+public class ConditionNot extends Condition {
+
+    private Expression condition;
+
+    public ConditionNot(Expression condition) {
+        this.condition = condition;
+    }
+
+    @Override
+    public Expression getNotIfPossible(Session session) {
+        return condition;
+    }
+
+    @Override
+    public Value getValue(Session session) {
+        Value v = condition.getValue(session);
+        if (v == ValueNull.INSTANCE) {
+            return v;
+        }
+        return v.convertTo(Value.BOOLEAN).negate();
+    }
+
+    @Override
+    public void mapColumns(ColumnResolver resolver, int level) {
+        condition.mapColumns(resolver, level);
+    }
+
+    @Override
+    public Expression optimize(Session session) {
+        Expression e2 = condition.getNotIfPossible(session);
+        if (e2 != null) {
+            return e2.optimize(session);
+        }
+        Expression expr = condition.optimize(session);
+        if (expr.isConstant()) {
+            Value v = expr.getValue(session);
+            if (v == ValueNull.INSTANCE) {
+                return ValueExpression.getNull();
+            }
+            return ValueExpression.get(v.convertTo(Value.BOOLEAN).negate());
+        }
+        condition = expr;
+        return this;
+    }
+
+    @Override
+    public void setEvaluatable(TableFilter tableFilter, boolean b) {
+        condition.setEvaluatable(tableFilter, b);
+    }
+
+    @Override
+    public String getSQL() {
+        return "(NOT " + condition.getSQL() + ")";
+    }
+
+    @Override
+    public void updateAggregate(Session session) {
+        condition.updateAggregate(session);
+    }
+
+    @Override
+    public void addFilterConditions(TableFilter filter, boolean outerJoin) {
+        if (outerJoin) {
+            // can not optimize:
+            // select * from test t1 left join test t2 on t1.id = t2.id where
+            // not t2.id is not null
+            // to
+            // select * from test t1 left join test t2 on t1.id = t2.id and
+            // t2.id is not null
+            return;
+        }
+        super.addFilterConditions(filter, outerJoin);
+    }
+
+    @Override
+    public boolean isEverything(ExpressionVisitor visitor) {
+        return condition.isEverything(visitor);
+    }
+
+    @Override
+    public int getCost() {
+        return condition.getCost();
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/expression/Expression.java b/modules/h2/src/main/java/org/h2/expression/Expression.java
new file mode 100644
index 0000000000000..a83637c2c2f1b
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/expression/Expression.java
@@ -0,0 +1,350 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.expression;
+
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import org.h2.engine.Database;
+import org.h2.engine.Session;
+import org.h2.message.DbException;
+import org.h2.table.Column;
+import org.h2.table.ColumnResolver;
+import org.h2.table.TableFilter;
+import org.h2.util.StringUtils;
+import org.h2.value.DataType;
+import org.h2.value.Value;
+import org.h2.value.ValueArray;
+
+/**
+ * An expression is an operation, a value, or a function in a query.
+ */
+public abstract class Expression {
+
+    private boolean addedToFilter;
+
+    /**
+     * Return the resulting value for the current row.
+     *
+     * @param session the session
+     * @return the result
+     */
+    public abstract Value getValue(Session session);
+
+    /**
+     * Return the data type. The data type may not be known before the
+     * optimization phase.
+     *
+     * @return the type
+     */
+    public abstract int getType();
+
+    /**
+     * Map the columns of the resolver to expression columns.
+     *
+     * @param resolver the column resolver
+     * @param level the subquery nesting level
+     */
+    public abstract void mapColumns(ColumnResolver resolver, int level);
+
+    /**
+     * Try to optimize the expression.
+     *
+     * @param session the session
+     * @return the optimized expression
+     */
+    public abstract Expression optimize(Session session);
+
+    /**
+     * Tell the expression columns whether the table filter can return values
+     * now. This is used when optimizing the query.
+     *
+     * @param tableFilter the table filter
+     * @param value true if the table filter can return values
+     */
+    public abstract void setEvaluatable(TableFilter tableFilter, boolean value);
+
+    /**
+     * Get the scale of this expression.
+     *
+     * @return the scale
+     */
+    public abstract int getScale();
+
+    /**
+     * Get the precision of this expression.
+     *
+     * @return the precision
+     */
+    public abstract long getPrecision();
+
+    /**
+     * Get the display size of this expression.
+     *
+     * @return the display size
+     */
+    public abstract int getDisplaySize();
+
+    /**
+     * Get the SQL statement of this expression.
+     * This may not always be the original SQL statement,
+     * especially after optimization.
+     *
+     * @return the SQL statement
+     */
+    public abstract String getSQL();
+
+    /**
+     * Update an aggregate value. This method is called at statement execution
+     * time. It is usually called once for each row, but if the expression is
+     * used multiple times (for example in the column list, and as part of the
+     * HAVING expression) it is called multiple times - the row counter needs to
+     * be used to make sure the internal state is only updated once.
+     *
+     * @param session the session
+     */
+    public abstract void updateAggregate(Session session);
+
+    /**
+     * Check if this expression and all sub-expressions can fulfill a criteria.
+     * If any part returns false, the result is false.
+     *
+     * @param visitor the visitor
+     * @return if the criteria can be fulfilled
+     */
+    public abstract boolean isEverything(ExpressionVisitor visitor);
+
+    /**
+     * Estimate the cost to process the expression.
+     * Used when optimizing the query, to calculate the query plan
+     * with the lowest estimated cost.
+     *
+     * @return the estimated cost
+     */
+    public abstract int getCost();
+
+    /**
+     * If it is possible, return the negated expression. This is used
+     * to optimize NOT expressions: NOT ID>10 can be converted to
+     * ID<=10. Returns null if negating is not possible.
+     *
+     * @param session the session
+     * @return the negated expression, or null
+     */
+    public Expression getNotIfPossible(@SuppressWarnings("unused") Session session) {
+        // by default it is not possible
+        return null;
+    }
+
+    /**
+     * Check if this expression will always return the same value.
+     *
+     * @return if the expression is constant
+     */
+    public boolean isConstant() {
+        return false;
+    }
+
+    /**
+     * Is the value of a parameter set.
+     *
+     * @return true if set
+     */
+    public boolean isValueSet() {
+        return false;
+    }
+
+    /**
+     * Check if this is an auto-increment column.
+     *
+     * @return true if it is an auto-increment column
+     */
+    public boolean isAutoIncrement() {
+        return false;
+    }
+
+    /**
+     * Get the value in form of a boolean expression.
+     * Returns true or false.
+     * In this database, everything can be a condition.
+     *
+     * @param session the session
+     * @return the result
+     */
+    public boolean getBooleanValue(Session session) {
+        return getValue(session).getBoolean();
+    }
+
+    /**
+     * Create index conditions if possible and attach them to the table filter.
+     *
+     * @param session the session
+     * @param filter the table filter
+     */
+    @SuppressWarnings("unused")
+    public void createIndexConditions(Session session, TableFilter filter) {
+        // default is do nothing
+    }
+
+    /**
+     * Get the column name or alias name of this expression.
+     *
+     * @return the column name
+     */
+    public String getColumnName() {
+        return getAlias();
+    }
+
+    /**
+     * Get the schema name, or null
+     *
+     * @return the schema name
+     */
+    public String getSchemaName() {
+        return null;
+    }
+
+    /**
+     * Get the table name, or null
+     *
+     * @return the table name
+     */
+    public String getTableName() {
+        return null;
+    }
+
+    /**
+     * Check whether this expression is a column and can store NULL.
+     *
+     * @return whether NULL is allowed
+     */
+    public int getNullable() {
+        return Column.NULLABLE_UNKNOWN;
+    }
+
+    /**
+     * Get the table alias name or null
+     * if this expression does not represent a column.
+     *
+     * @return the table alias name
+     */
+    public String getTableAlias() {
+        return null;
+    }
+
+    /**
+     * Get the alias name of a column or SQL expression
+     * if it is not an aliased expression.
+     *
+     * @return the alias name
+     */
+    public String getAlias() {
+        return StringUtils.unEnclose(getSQL());
+    }
+
+    /**
+     * Only returns true if the expression is a wildcard.
+     *
+     * @return if this expression is a wildcard
+     */
+    public boolean isWildcard() {
+        return false;
+    }
+
+    /**
+     * Returns the main expression, skipping aliases.
+     *
+     * @return the expression
+     */
+    public Expression getNonAliasExpression() {
+        return this;
+    }
+
+    /**
+     * Add conditions to a table filter if they can be evaluated.
+     *
+     * @param filter the table filter
+     * @param outerJoin if the expression is part of an outer join
+     */
+    public void addFilterConditions(TableFilter filter, boolean outerJoin) {
+        if (!addedToFilter && !outerJoin &&
+                isEverything(ExpressionVisitor.EVALUATABLE_VISITOR)) {
+            filter.addFilterCondition(this, false);
+            addedToFilter = true;
+        }
+    }
+
+    /**
+     * Convert this expression to a String.
+     *
+     * @return the string representation
+     */
+    @Override
+    public String toString() {
+        return getSQL();
+    }
+
+    /**
+     * If this expression consists of column expressions it should return them.
+     *
+     * @param session the session
+     * @return array of expression columns if applicable, null otherwise
+     */
+    @SuppressWarnings("unused")
+    public Expression[] getExpressionColumns(Session session) {
+        return null;
+    }
+
+    /**
+     * Extracts expression columns from a ValueArray.
+     *
+     * @param session the current session
+     * @param value the value to extract columns from
+     * @return array of expression columns
+     */
+    static Expression[] getExpressionColumns(Session session, ValueArray value) {
+        Value[] list = value.getList();
+        ExpressionColumn[] expr = new ExpressionColumn[list.length];
+        for (int i = 0, len = list.length; i < len; i++) {
+            Value v = list[i];
+            Column col = new Column("C" + (i + 1), v.getType(),
+                    v.getPrecision(), v.getScale(),
+                    v.getDisplaySize());
+            expr[i] = new ExpressionColumn(session.getDatabase(), col);
+        }
+        return expr;
+    }
+
+    /**
+     * Extracts expression columns from the given result set.
+     *
+     * @param session the session
+     * @param rs the result set
+     * @return an array of expression columns
+     */
+    public static Expression[] getExpressionColumns(Session session, ResultSet rs) {
+        try {
+            ResultSetMetaData meta = rs.getMetaData();
+            int columnCount = meta.getColumnCount();
+            Expression[] expressions = new Expression[columnCount];
+            Database db = session == null ? null : session.getDatabase();
+            for (int i = 0; i < columnCount; i++) {
+                String name = meta.getColumnLabel(i + 1);
+                int type = DataType.getValueTypeFromResultSet(meta, i + 1);
+                int precision = meta.getPrecision(i + 1);
+                int scale = meta.getScale(i + 1);
+                int displaySize = meta.getColumnDisplaySize(i + 1);
+                Column col = new Column(name, type, precision, scale, displaySize);
+                Expression expr = new ExpressionColumn(db, col);
+                expressions[i] = expr;
+            }
+            return expressions;
+        } catch (SQLException e) {
+            throw DbException.convert(e);
+        }
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/expression/ExpressionColumn.java b/modules/h2/src/main/java/org/h2/expression/ExpressionColumn.java
new file mode 100644
index 0000000000000..61cfd72ed5d07
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/expression/ExpressionColumn.java
@@ -0,0 +1,344 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.expression;
+
+import java.util.HashMap;
+import org.h2.api.ErrorCode;
+import org.h2.command.Parser;
+import org.h2.command.dml.Select;
+import org.h2.command.dml.SelectListColumnResolver;
+import org.h2.engine.Database;
+import org.h2.engine.Session;
+import org.h2.index.IndexCondition;
+import org.h2.message.DbException;
+import org.h2.schema.Constant;
+import org.h2.schema.Schema;
+import org.h2.table.Column;
+import org.h2.table.ColumnResolver;
+import org.h2.table.Table;
+import org.h2.table.TableFilter;
+import org.h2.value.Value;
+import org.h2.value.ValueBoolean;
+import org.h2.value.ValueEnum;
+import org.h2.value.ValueNull;
+
+/**
+ * An expression that represents a column of a table or view.
+ */
+public class ExpressionColumn extends Expression {
+
+    private final Database database;
+    private final String schemaName;
+    private final String tableAlias;
+    private String columnName;
+    private ColumnResolver columnResolver;
+    private int queryLevel;
+    private Column column;
+
+    public ExpressionColumn(Database database, Column column) {
+        this.database = database;
+        this.column = column;
+        this.schemaName = null;
+        this.tableAlias = null;
+        this.columnName = null;
+    }
+
+    public ExpressionColumn(Database database, String schemaName,
+            String tableAlias, String columnName) {
+        this.database = database;
+        this.schemaName = schemaName;
+        this.tableAlias = tableAlias;
+        this.columnName = columnName;
+    }
+
+    @Override
+    public String getSQL() {
+        String sql;
+        boolean quote = database.getSettings().databaseToUpper;
+        if (column != null) {
+            sql = column.getSQL();
+        } else {
+            sql = quote ? Parser.quoteIdentifier(columnName) : columnName;
+        }
+        if (tableAlias != null) {
+            String a = quote ? Parser.quoteIdentifier(tableAlias) : tableAlias;
+            sql = a + "." + sql;
+        }
+        if (schemaName != null) {
+            String s = quote ? Parser.quoteIdentifier(schemaName) : schemaName;
+            sql = s + "." + sql;
+        }
+        return sql;
+    }
+
+    public TableFilter getTableFilter() {
+        return columnResolver == null ? null : columnResolver.getTableFilter();
+    }
+
+    @Override
+    public void mapColumns(ColumnResolver resolver, int level) {
+        if (tableAlias != null && !database.equalsIdentifiers(
+                tableAlias, resolver.getTableAlias())) {
+            return;
+        }
+        if (schemaName != null && !database.equalsIdentifiers(
+                schemaName, resolver.getSchemaName())) {
+            return;
+        }
+        for (Column col : resolver.getColumns()) {
+            String n = resolver.getDerivedColumnName(col);
+            if (n == null) {
+                n = col.getName();
+            }
+            if (database.equalsIdentifiers(columnName, n)) {
+                mapColumn(resolver, col, level);
+                return;
+            }
+        }
+        if (database.equalsIdentifiers(Column.ROWID, columnName)) {
+            Column col = resolver.getRowIdColumn();
+            if (col != null) {
+                mapColumn(resolver, col, level);
+                return;
+            }
+        }
+        Column[] columns = resolver.getSystemColumns();
+        for (int i = 0; columns != null && i < columns.length; i++) {
+            Column col = columns[i];
+            if (database.equalsIdentifiers(columnName, col.getName())) {
+                mapColumn(resolver, col, level);
+                return;
+            }
+        }
+    }
+
+    private void mapColumn(ColumnResolver resolver, Column col, int level) {
+        if (this.columnResolver == null) {
+            queryLevel = level;
+            column = col;
+            this.columnResolver = resolver;
+        } else if (queryLevel == level && this.columnResolver != resolver) {
+            if (resolver instanceof SelectListColumnResolver) {
+                // ignore - already mapped, that's ok
+            } else {
+                throw DbException.get(ErrorCode.AMBIGUOUS_COLUMN_NAME_1, columnName);
+            }
+        }
+    }
+
+    @Override
+    public Expression optimize(Session session) {
+        if (columnResolver == null) {
+            Schema schema = session.getDatabase().findSchema(
+                    tableAlias == null ? session.getCurrentSchemaName() : tableAlias);
+            if (schema != null) {
+                Constant constant = schema.findConstant(columnName);
+                if (constant != null) {
+                    return constant.getValue();
+                }
+            }
+            String name = columnName;
+            if (tableAlias != null) {
+                name = tableAlias + "." + name;
+                if (schemaName != null) {
+                    name = schemaName + "." + name;
+                }
+            }
+            throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1, name);
+        }
+        return columnResolver.optimize(this, column);
+    }
+
+    @Override
+    public void updateAggregate(Session session) {
+        Value now = columnResolver.getValue(column);
+        Select select = columnResolver.getSelect();
+        if (select == null) {
+            throw DbException.get(ErrorCode.MUST_GROUP_BY_COLUMN_1, getSQL());
+        }
+        HashMap<Expression, Object> values = select.getCurrentGroup();
+        if (values == null) {
+            // this is a different level (the enclosing query)
+            return;
+        }
+        Value v = (Value) values.get(this);
+        if (v == null) {
+            values.put(this, now);
+        } else {
+            if (!database.areEqual(now, v)) {
+                throw DbException.get(ErrorCode.MUST_GROUP_BY_COLUMN_1, getSQL());
+            }
+        }
+    }
+
+    @Override
+    public Value getValue(Session session) {
+        Select select = columnResolver.getSelect();
+        if (select != null) {
+            HashMap<Expression, Object> values = select.getCurrentGroup();
+            if (values != null) {
+                Value v = (Value) values.get(this);
+                if (v != null) {
+                    return v;
+                }
+            }
+        }
+        Value value = columnResolver.getValue(column);
+        if (value == null) {
+            if (select == null) {
+                throw DbException.get(ErrorCode.NULL_NOT_ALLOWED, getSQL());
+            } else {
+                throw DbException.get(ErrorCode.MUST_GROUP_BY_COLUMN_1, getSQL());
+            }
+        }
+        if (column.getEnumerators() != null && value != ValueNull.INSTANCE) {
+            return ValueEnum.get(column.getEnumerators(), value.getInt());
+        }
+        return value;
+    }
+
+    @Override
+    public int getType() {
+        return column.getType();
+    }
+
+    @Override
+    public void setEvaluatable(TableFilter tableFilter, boolean b) {
+    }
+
+    public Column getColumn() {
+        return column;
+    }
+
+    @Override
+    public int getScale() {
+        return column.getScale();
+    }
+
+    @Override
+    public long getPrecision() {
+        return column.getPrecision();
+    }
+
+    @Override
+    public int getDisplaySize() {
+        return column.getDisplaySize();
+    }
+
+    public String getOriginalColumnName() {
+        return columnName;
+    }
+
+    public String getOriginalTableAliasName() {
+        return tableAlias;
+    }
+
+    @Override
+    public String getColumnName() {
+        return columnName != null ? columnName : column.getName();
+    }
+
+    @Override
+    public String getSchemaName() {
+        Table table = column.getTable();
+        return table == null ? null : table.getSchema().getName();
+    }
+
+    @Override
+    public String getTableName() {
+        Table table = column.getTable();
+        return table == null ? null : table.getName();
+    }
+
+    @Override
+    public String getAlias() {
+        if (column != null) {
+            if (columnResolver != null) {
+                String name = columnResolver.getDerivedColumnName(column);
+                if (name != null) {
+                    return name;
+                }
+            }
+            return column.getName();
+        }
+        if (tableAlias != null) {
+            return tableAlias + "." + columnName;
+        }
+        return columnName;
+    }
+
+    @Override
+    public boolean isAutoIncrement() {
+        return column.getSequence() != null;
+    }
+
+    @Override
+    public int getNullable() {
+        return column.isNullable() ? Column.NULLABLE : Column.NOT_NULLABLE;
+    }
+
+    @Override
+    public boolean isEverything(ExpressionVisitor visitor) {
+        switch (visitor.getType()) {
+        case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL:
+            return false;
+        case ExpressionVisitor.READONLY:
+        case ExpressionVisitor.DETERMINISTIC:
+        case ExpressionVisitor.QUERY_COMPARABLE:
+            return true;
+        case ExpressionVisitor.INDEPENDENT:
+            return this.queryLevel < visitor.getQueryLevel();
+        case ExpressionVisitor.EVALUATABLE:
+            // if this column belongs to a 'higher level' query and is
+            // therefore just a parameter
+            if (visitor.getQueryLevel() < this.queryLevel) {
+                return true;
+            }
+            if (getTableFilter() == null) {
+                return false;
+            }
+            return getTableFilter().isEvaluatable();
+        case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID:
+            visitor.addDataModificationId(column.getTable().getMaxDataModificationId());
+            return true;
+        case ExpressionVisitor.NOT_FROM_RESOLVER:
+            return columnResolver != visitor.getResolver();
+        case ExpressionVisitor.GET_DEPENDENCIES:
+            if (column != null) {
+                visitor.addDependency(column.getTable());
+            }
+            return true;
+        case ExpressionVisitor.GET_COLUMNS:
+            visitor.addColumn(column);
+            return true;
+        default:
+            throw DbException.throwInternalError("type=" + visitor.getType());
+        }
+    }
+
+    @Override
+    public int getCost() {
+        return 2;
+    }
+
+    @Override
+    public void createIndexConditions(Session session, TableFilter filter) {
+        TableFilter tf = getTableFilter();
+        if (filter == tf && column.getType() == Value.BOOLEAN) {
+            IndexCondition cond = IndexCondition.get(
+                    Comparison.EQUAL, this, ValueExpression.get(
+                            ValueBoolean.TRUE));
+            filter.addIndexCondition(cond);
+        }
+    }
+
+    @Override
+    public Expression getNotIfPossible(Session session) {
+        return new Comparison(session, Comparison.EQUAL, this,
+                ValueExpression.get(ValueBoolean.FALSE));
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/expression/ExpressionList.java b/modules/h2/src/main/java/org/h2/expression/ExpressionList.java
new file mode 100644
index 0000000000000..eeef02d8932a8
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/expression/ExpressionList.java
@@ -0,0 +1,139 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.expression;
+
+import org.h2.engine.Session;
+import org.h2.table.Column;
+import org.h2.table.ColumnResolver;
+import org.h2.table.TableFilter;
+import org.h2.util.StatementBuilder;
+import org.h2.value.Value;
+import org.h2.value.ValueArray;
+
+/**
+ * A list of expressions, as in (ID, NAME).
+ * The result of this expression is an array.
+ */
+public class ExpressionList extends Expression {
+
+    private final Expression[] list;
+
+    public ExpressionList(Expression[] list) {
+        this.list = list;
+    }
+
+    @Override
+    public Value getValue(Session session) {
+        Value[] v = new Value[list.length];
+        for (int i = 0; i < list.length; i++) {
+            v[i] = list[i].getValue(session);
+        }
+        return ValueArray.get(v);
+    }
+
+    @Override
+    public int getType() {
+        return Value.ARRAY;
+    }
+
+    @Override
+    public void mapColumns(ColumnResolver resolver, int level) {
+        for (Expression e : list) {
+            e.mapColumns(resolver, level);
+        }
+    }
+
+    @Override
+    public Expression optimize(Session session) {
+        boolean allConst = true;
+        for (int i = 0; i < list.length; i++) {
+            Expression e = list[i].optimize(session);
+            if (!e.isConstant()) {
+                allConst = false;
+            }
+            list[i] = e;
+        }
+        if (allConst) {
+            return ValueExpression.get(getValue(session));
+        }
+        return this;
+    }
+
+    @Override
+    public void setEvaluatable(TableFilter tableFilter, boolean b) {
+        for (Expression e : list) {
+            e.setEvaluatable(tableFilter, b);
+        }
+    }
+
+    @Override
+    public int getScale() {
+        return 0;
+    }
+
+    @Override
+    public long getPrecision() {
+        return Integer.MAX_VALUE;
+    }
+
+    @Override
+    public int getDisplaySize() {
+        return Integer.MAX_VALUE;
+    }
+
+    @Override
+    public String getSQL() {
+        StatementBuilder buff = new StatementBuilder("(");
+        for (Expression e: list) {
+            buff.appendExceptFirst(", ");
+            buff.append(e.getSQL());
+        }
+        if (list.length == 1) {
+            buff.append(',');
+        }
+        return buff.append(')').toString();
+    }
+
+    @Override
+    public void updateAggregate(Session session) {
+        for (Expression e : list) {
+            e.updateAggregate(session);
+        }
+    }
+
+    @Override
+    public boolean isEverything(ExpressionVisitor visitor) {
+        for (Expression e : list) {
+            if (!e.isEverything(visitor)) {
+                return false;
+            }
+        }
+        return true;
+    }
+
+    @Override
+    public int getCost() {
+        int cost = 1;
+        for (Expression e : list) {
+            cost += e.getCost();
+        }
+        return cost;
+    }
+
+    @Override
+    public Expression[] getExpressionColumns(Session session) {
+        ExpressionColumn[] expr = new ExpressionColumn[list.length];
+        for (int i = 0; i < list.length; i++) {
+            Expression e = list[i];
+            Column col = new Column("C" + (i + 1),
+                    e.getType(), e.getPrecision(), e.getScale(),
+                    e.getDisplaySize());
+            expr[i] = new ExpressionColumn(session.getDatabase(), col);
+        }
+        return expr;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/expression/ExpressionVisitor.java b/modules/h2/src/main/java/org/h2/expression/ExpressionVisitor.java
new file mode 100644
index 0000000000000..8efb4ff7991e7
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/expression/ExpressionVisitor.java
@@ -0,0 +1,305 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.expression;
+
+import java.util.HashSet;
+import org.h2.engine.DbObject;
+import org.h2.table.Column;
+import org.h2.table.ColumnResolver;
+import org.h2.table.Table;
+import org.h2.table.TableFilter;
+
+/**
+ * The visitor pattern is used to iterate through all expressions of a query
+ * to optimize a statement.
+ */
+public class ExpressionVisitor {
+
+    /**
+     * Is the value independent on unset parameters or on columns of a higher
+     * level query, or sequence values (that means can it be evaluated right
+     * now)?
+     */
+    public static final int INDEPENDENT = 0;
+
+    /**
+     * The visitor singleton for the type INDEPENDENT.
+     */
+    public static final ExpressionVisitor INDEPENDENT_VISITOR =
+            new ExpressionVisitor(INDEPENDENT);
+
+    /**
+     * Are all aggregates MIN(column), MAX(column), or COUNT(*) for the given
+     * table (getTable)?
+     */
+    public static final int OPTIMIZABLE_MIN_MAX_COUNT_ALL = 1;
+
+    /**
+     * Does the expression return the same results for the same parameters?
+     */
+    public static final int DETERMINISTIC = 2;
+
+    /**
+     * The visitor singleton for the type DETERMINISTIC.
+     */
+    public static final ExpressionVisitor DETERMINISTIC_VISITOR =
+            new ExpressionVisitor(DETERMINISTIC);
+
+    /**
+     * Can the expression be evaluated, that means are all columns set to
+     * 'evaluatable'?
+     */
+    public static final int EVALUATABLE = 3;
+
+    /**
+     * The visitor singleton for the type EVALUATABLE.
+     */
+    public static final ExpressionVisitor EVALUATABLE_VISITOR =
+            new ExpressionVisitor(EVALUATABLE);
+
+    /**
+     * Request to set the latest modification id (addDataModificationId).
+     */
+    public static final int SET_MAX_DATA_MODIFICATION_ID = 4;
+
+    /**
+     * Does the expression have no side effects (change the data)?
+     */
+    public static final int READONLY = 5;
+
+    /**
+     * The visitor singleton for the type READONLY.
+     */
+    public static final ExpressionVisitor READONLY_VISITOR =
+            new ExpressionVisitor(READONLY);
+
+    /**
+     * Does an expression have no relation to the given table filter
+     * (getResolver)?
+     */
+    public static final int NOT_FROM_RESOLVER = 6;
+
+    /**
+     * Request to get the set of dependencies (addDependency).
+     */
+    public static final int GET_DEPENDENCIES = 7;
+
+    /**
+     * Can the expression be added to a condition of an outer query. Example:
+     * ROWNUM() can't be added as a condition to the inner query of select id
+     * from (select t.*, rownum as r from test t) where r between 2 and 3; Also
+     * a sequence expression must not be used.
+     */
+    public static final int QUERY_COMPARABLE = 8;
+
+    /**
+     * Get all referenced columns.
+     */
+    public static final int GET_COLUMNS = 9;
+
+    /**
+     * The visitor singleton for the type QUERY_COMPARABLE.
+     */
+    public static final ExpressionVisitor QUERY_COMPARABLE_VISITOR =
+            new ExpressionVisitor(QUERY_COMPARABLE);
+
+    private final int type;
+    private final int queryLevel;
+    private final HashSet<DbObject> dependencies;
+    private final HashSet<Column> columns;
+    private final Table table;
+    private final long[] maxDataModificationId;
+    private final ColumnResolver resolver;
+
+    private ExpressionVisitor(int type,
+            int queryLevel,
+            HashSet<DbObject> dependencies,
+            HashSet<Column> columns, Table table, ColumnResolver resolver,
+            long[] maxDataModificationId) {
+        this.type = type;
+        this.queryLevel = queryLevel;
+        this.dependencies = dependencies;
+        this.columns = columns;
+        this.table = table;
+        this.resolver = resolver;
+        this.maxDataModificationId = maxDataModificationId;
+    }
+
+    private ExpressionVisitor(int type) {
+        this.type = type;
+        this.queryLevel = 0;
+        this.dependencies = null;
+        this.columns = null;
+        this.table = null;
+        this.resolver = null;
+        this.maxDataModificationId = null;
+    }
+
+    /**
+     * Create a new visitor object to collect dependencies.
+     *
+     * @param dependencies the dependencies set
+     * @return the new visitor
+     */
+    public static ExpressionVisitor getDependenciesVisitor(
+            HashSet<DbObject> dependencies) {
+        return new ExpressionVisitor(GET_DEPENDENCIES, 0, dependencies, null,
+                null, null, null);
+    }
+
+    /**
+     * Create a new visitor to check if all aggregates are for the given table.
+     *
+     * @param table the table
+     * @return the new visitor
+     */
+    public static ExpressionVisitor getOptimizableVisitor(Table table) {
+        return new ExpressionVisitor(OPTIMIZABLE_MIN_MAX_COUNT_ALL, 0, null,
+                null, table, null, null);
+    }
+
+    /**
+     * Create a new visitor to check if no expression depends on the given
+     * resolver.
+     *
+     * @param resolver the resolver
+     * @return the new visitor
+     */
+    static ExpressionVisitor getNotFromResolverVisitor(ColumnResolver resolver) {
+        return new ExpressionVisitor(NOT_FROM_RESOLVER, 0, null, null, null,
+                resolver, null);
+    }
+
+    /**
+     * Create a new visitor to get all referenced columns.
+     *
+     * @param columns the columns map
+     * @return the new visitor
+     */
+    public static ExpressionVisitor getColumnsVisitor(HashSet<Column> columns) {
+        return new ExpressionVisitor(GET_COLUMNS, 0, null, columns, null, null, null);
+    }
+
+    public static ExpressionVisitor getMaxModificationIdVisitor() {
+        return new ExpressionVisitor(SET_MAX_DATA_MODIFICATION_ID, 0, null,
+                null, null, null, new long[1]);
+    }
+
+    /**
+     * Add a new dependency to the set of dependencies.
+     * This is used for GET_DEPENDENCIES visitors.
+     *
+     * @param obj the additional dependency.
+     */
+    public void addDependency(DbObject obj) {
+        dependencies.add(obj);
+    }
+
+    /**
+     * Add a new column to the set of columns.
+     * This is used for GET_COLUMNS visitors.
+     *
+     * @param column the additional column.
+     */
+    void addColumn(Column column) {
+        columns.add(column);
+    }
+
+    /**
+     * Get the dependency set.
+     * This is used for GET_DEPENDENCIES visitors.
+     *
+     * @return the set
+     */
+    public HashSet<DbObject> getDependencies() {
+        return dependencies;
+    }
+
+    /**
+     * Increment or decrement the query level.
+     *
+     * @param offset 1 to increment, -1 to decrement
+     * @return a clone of this expression visitor, with the changed query level
+     */
+    public ExpressionVisitor incrementQueryLevel(int offset) {
+        return new ExpressionVisitor(type, queryLevel + offset, dependencies,
+                columns, table, resolver, maxDataModificationId);
+    }
+
+    /**
+     * Get the column resolver.
+     * This is used for NOT_FROM_RESOLVER visitors.
+     *
+     * @return the column resolver
+     */
+    public ColumnResolver getResolver() {
+        return resolver;
+    }
+
+    /**
+     * Update the field maxDataModificationId if this value is higher
+     * than the current value.
+     * This is used for SET_MAX_DATA_MODIFICATION_ID visitors.
+     *
+     * @param value the data modification id
+     */
+    public void addDataModificationId(long value) {
+        long m = maxDataModificationId[0];
+        if (value > m) {
+            maxDataModificationId[0] = value;
+        }
+    }
+
+    /**
+     * Get the last data modification.
+     * This is used for SET_MAX_DATA_MODIFICATION_ID visitors.
+     *
+     * @return the maximum modification id
+     */
+    public long getMaxDataModificationId() {
+        return maxDataModificationId[0];
+    }
+
+    int getQueryLevel() {
+        return queryLevel;
+    }
+
+    /**
+     * Get the table.
+     * This is used for OPTIMIZABLE_MIN_MAX_COUNT_ALL visitors.
+     *
+     * @return the table
+     */
+    public Table getTable() {
+        return table;
+    }
+
+    /**
+     * Get the visitor type.
+     *
+     * @return the type
+     */
+    public int getType() {
+        return type;
+    }
+
+    /**
+     * Get the set of columns of all tables.
+     *
+     * @param filters the filters
+     * @return the set of columns
+     */
+    public static HashSet<Column> allColumnsForTableFilters(TableFilter[] filters) {
+        HashSet<Column> allColumnsSet = new HashSet<>();
+        for (TableFilter filter : filters) {
+            if (filter.getSelect() != null) {
+                filter.getSelect().isEverything(ExpressionVisitor.getColumnsVisitor(allColumnsSet));
+            }
+        }
+        return allColumnsSet;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/expression/Function.java b/modules/h2/src/main/java/org/h2/expression/Function.java
new file mode 100644
index 0000000000000..fe764b994da1a
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/expression/Function.java
@@ -0,0 +1,2623 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.expression;
+
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.Reader;
+import java.nio.charset.StandardCharsets;
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.regex.Pattern;
+import java.util.regex.PatternSyntaxException;
+import org.h2.api.ErrorCode;
+import org.h2.command.Command;
+import org.h2.command.Parser;
+import org.h2.engine.Constants;
+import org.h2.engine.Database;
+import org.h2.engine.Mode;
+import org.h2.engine.Session;
+import org.h2.message.DbException;
+import org.h2.schema.Schema;
+import org.h2.schema.Sequence;
+import org.h2.security.BlockCipher;
+import org.h2.security.CipherFactory;
+import org.h2.security.SHA256;
+import org.h2.store.fs.FileUtils;
+import org.h2.table.Column;
+import org.h2.table.ColumnResolver;
+import org.h2.table.LinkSchema;
+import org.h2.table.Table;
+import org.h2.table.TableFilter;
+import org.h2.tools.CompressTool;
+import org.h2.tools.Csv;
+import org.h2.util.DateTimeFunctions;
+import org.h2.util.DateTimeUtils;
+import org.h2.util.IOUtils;
+import org.h2.util.JdbcUtils;
+import org.h2.util.MathUtils;
+import org.h2.util.New;
+import org.h2.util.StatementBuilder;
+import org.h2.util.StringUtils;
+import org.h2.util.ToChar;
+import org.h2.util.ToDateParser;
+import org.h2.util.Utils;
+import org.h2.value.DataType;
+import org.h2.value.Value;
+import org.h2.value.ValueArray;
+import org.h2.value.ValueBoolean;
+import org.h2.value.ValueBytes;
+import org.h2.value.ValueDate;
+import org.h2.value.ValueDouble;
+import org.h2.value.ValueInt;
+import org.h2.value.ValueLong;
+import org.h2.value.ValueNull;
+import org.h2.value.ValueResultSet;
+import org.h2.value.ValueString;
+import org.h2.value.ValueTime;
+import org.h2.value.ValueTimestamp;
+import org.h2.value.ValueTimestampTimeZone;
+import org.h2.value.ValueUuid;
+
+/**
+ * This class implements most built-in functions of this database.
+ */
+public class Function extends Expression implements FunctionCall {
+    public static final int ABS = 0, ACOS = 1, ASIN = 2, ATAN = 3, ATAN2 = 4,
+            BITAND = 5, BITOR = 6, BITXOR = 7, CEILING = 8, COS = 9, COT = 10,
+            DEGREES = 11, EXP = 12, FLOOR = 13, LOG = 14, LOG10 = 15, MOD = 16,
+            PI = 17, POWER = 18, RADIANS = 19, RAND = 20, ROUND = 21,
+            ROUNDMAGIC = 22, SIGN = 23, SIN = 24, SQRT = 25, TAN = 26,
+            TRUNCATE = 27, SECURE_RAND = 28, HASH = 29, ENCRYPT = 30,
+            DECRYPT = 31, COMPRESS = 32, EXPAND = 33, ZERO = 34,
+            RANDOM_UUID = 35, COSH = 36, SINH = 37, TANH = 38, LN = 39,
+            BITGET = 40;
+
+    public static final int ASCII = 50, BIT_LENGTH = 51, CHAR = 52,
+            CHAR_LENGTH = 53, CONCAT = 54, DIFFERENCE = 55, HEXTORAW = 56,
+            INSERT = 57, INSTR = 58, LCASE = 59, LEFT = 60, LENGTH = 61,
+            LOCATE = 62, LTRIM = 63, OCTET_LENGTH = 64, RAWTOHEX = 65,
+            REPEAT = 66, REPLACE = 67, RIGHT = 68, RTRIM = 69, SOUNDEX = 70,
+            SPACE = 71, SUBSTR = 72, SUBSTRING = 73, UCASE = 74, LOWER = 75,
+            UPPER = 76, POSITION = 77, TRIM = 78, STRINGENCODE = 79,
+            STRINGDECODE = 80, STRINGTOUTF8 = 81, UTF8TOSTRING = 82,
+            XMLATTR = 83, XMLNODE = 84, XMLCOMMENT = 85, XMLCDATA = 86,
+            XMLSTARTDOC = 87, XMLTEXT = 88, REGEXP_REPLACE = 89, RPAD = 90,
+            LPAD = 91, CONCAT_WS = 92, TO_CHAR = 93, TRANSLATE = 94, ORA_HASH = 95,
+            TO_DATE = 96, TO_TIMESTAMP = 97, ADD_MONTHS = 98, TO_TIMESTAMP_TZ = 99;
+
+    public static final int CURDATE = 100, CURTIME = 101, DATE_ADD = 102,
+            DATE_DIFF = 103, DAY_NAME = 104, DAY_OF_MONTH = 105,
+            DAY_OF_WEEK = 106, DAY_OF_YEAR = 107, HOUR = 108, MINUTE = 109,
+            MONTH = 110, MONTH_NAME = 111, NOW = 112, QUARTER = 113,
+            SECOND = 114, WEEK = 115, YEAR = 116, CURRENT_DATE = 117,
+            CURRENT_TIME = 118, CURRENT_TIMESTAMP = 119, EXTRACT = 120,
+            FORMATDATETIME = 121, PARSEDATETIME = 122, ISO_YEAR = 123,
+            ISO_WEEK = 124, ISO_DAY_OF_WEEK = 125, DATE_TRUNC = 132;
+
+    /**
+     * Pseudo functions for DATEADD, DATEDIFF, and EXTRACT.
+     */
+    public static final int MILLISECOND = 126, EPOCH = 127, MICROSECOND = 128, NANOSECOND = 129,
+            TIMEZONE_HOUR = 130, TIMEZONE_MINUTE = 131, DECADE = 132, CENTURY = 133,
+            MILLENNIUM = 134;
+
+    public static final int DATABASE = 150, USER = 151, CURRENT_USER = 152,
+            IDENTITY = 153, SCOPE_IDENTITY = 154, AUTOCOMMIT = 155,
+            READONLY = 156, DATABASE_PATH = 157, LOCK_TIMEOUT = 158,
+            DISK_SPACE_USED = 159, SIGNAL = 160;
+
+    private static final Pattern SIGNAL_PATTERN = Pattern.compile("[0-9A-Z]{5}");
+
+    public static final int IFNULL = 200, CASEWHEN = 201, CONVERT = 202,
+            CAST = 203, COALESCE = 204, NULLIF = 205, CASE = 206,
+            NEXTVAL = 207, CURRVAL = 208, ARRAY_GET = 209, CSVREAD = 210,
+            CSVWRITE = 211, MEMORY_FREE = 212, MEMORY_USED = 213,
+            LOCK_MODE = 214, SCHEMA = 215, SESSION_ID = 216,
+            ARRAY_LENGTH = 217, LINK_SCHEMA = 218, GREATEST = 219, LEAST = 220,
+            CANCEL_SESSION = 221, SET = 222, TABLE = 223, TABLE_DISTINCT = 224,
+            FILE_READ = 225, TRANSACTION_ID = 226, TRUNCATE_VALUE = 227,
+            NVL2 = 228, DECODE = 229, ARRAY_CONTAINS = 230, FILE_WRITE = 232;
+
+    public static final int REGEXP_LIKE = 240;
+
+    /**
+     * Used in MySQL-style INSERT ... ON DUPLICATE KEY UPDATE ... VALUES
+     */
+    public static final int VALUES = 250;
+
+    /**
+     * This is called H2VERSION() and not VERSION(), because we return a fake
+     * value for VERSION() when running under the PostgreSQL ODBC driver.
+ */
+    public static final int H2VERSION = 231;
+
+    public static final int ROW_NUMBER = 300;
+
+    private static final int VAR_ARGS = -1;
+    private static final long PRECISION_UNKNOWN = -1;
+
+    private static final HashMap<String, FunctionInfo> FUNCTIONS = new HashMap<>();
+    private static final char[] SOUNDEX_INDEX = new char[128];
+
+    protected Expression[] args;
+
+    private final FunctionInfo info;
+    private ArrayList<Expression> varArgs;
+    private int dataType, scale;
+    private long precision = PRECISION_UNKNOWN;
+    private int displaySize;
+    private final Database database;
+
+    static {
+        // SOUNDEX_INDEX
+        String index = "7AEIOUY8HW1BFPV2CGJKQSXZ3DT4L5MN6R";
+        char number = 0;
+        for (int i = 0, length = index.length(); i < length; i++) {
+            char c = index.charAt(i);
+            if (c < '9') {
+                number = c;
+            } else {
+                SOUNDEX_INDEX[c] = number;
+                SOUNDEX_INDEX[Character.toLowerCase(c)] = number;
+            }
+        }
+
+        // FUNCTIONS
+        addFunction("ABS", ABS, 1, Value.NULL);
+        addFunction("ACOS", ACOS, 1, Value.DOUBLE);
+        addFunction("ASIN", ASIN, 1, Value.DOUBLE);
+        addFunction("ATAN", ATAN, 1, Value.DOUBLE);
+        addFunction("ATAN2", ATAN2, 2, Value.DOUBLE);
+        addFunction("BITAND", BITAND, 2, Value.LONG);
+        addFunction("BITGET", BITGET, 2, Value.BOOLEAN);
+        addFunction("BITOR", BITOR, 2, Value.LONG);
+        addFunction("BITXOR", BITXOR, 2, Value.LONG);
+        addFunction("CEILING", CEILING, 1, Value.DOUBLE);
+        addFunction("CEIL", CEILING, 1, Value.DOUBLE);
+        addFunction("COS", COS, 1, Value.DOUBLE);
+        addFunction("COSH", COSH, 1, Value.DOUBLE);
+        addFunction("COT", COT, 1, Value.DOUBLE);
+        addFunction("DEGREES", DEGREES, 1, Value.DOUBLE);
+        addFunction("EXP", EXP, 1, Value.DOUBLE);
+        addFunction("FLOOR", FLOOR, 1, Value.DOUBLE);
+        addFunction("LOG", LOG, 1, Value.DOUBLE);
+        addFunction("LN", LN, 1, Value.DOUBLE);
+        addFunction("LOG10", LOG10, 1, Value.DOUBLE);
+        addFunction("MOD", MOD, 2, Value.LONG);
+        addFunction("PI", PI, 0, Value.DOUBLE);
+        addFunction("POWER", POWER, 2, Value.DOUBLE);
+        addFunction("RADIANS", RADIANS, 1,
Value.DOUBLE); + // RAND without argument: get the next value + // RAND with one argument: seed the random generator + addFunctionNotDeterministic("RAND", RAND, VAR_ARGS, Value.DOUBLE); + addFunctionNotDeterministic("RANDOM", RAND, VAR_ARGS, Value.DOUBLE); + addFunction("ROUND", ROUND, VAR_ARGS, Value.DOUBLE); + addFunction("ROUNDMAGIC", ROUNDMAGIC, 1, Value.DOUBLE); + addFunction("SIGN", SIGN, 1, Value.INT); + addFunction("SIN", SIN, 1, Value.DOUBLE); + addFunction("SINH", SINH, 1, Value.DOUBLE); + addFunction("SQRT", SQRT, 1, Value.DOUBLE); + addFunction("TAN", TAN, 1, Value.DOUBLE); + addFunction("TANH", TANH, 1, Value.DOUBLE); + addFunction("TRUNCATE", TRUNCATE, VAR_ARGS, Value.NULL); + // same as TRUNCATE + addFunction("TRUNC", TRUNCATE, VAR_ARGS, Value.NULL); + addFunction("HASH", HASH, 3, Value.BYTES); + addFunction("ENCRYPT", ENCRYPT, 3, Value.BYTES); + addFunction("DECRYPT", DECRYPT, 3, Value.BYTES); + addFunctionNotDeterministic("SECURE_RAND", SECURE_RAND, 1, Value.BYTES); + addFunction("COMPRESS", COMPRESS, VAR_ARGS, Value.BYTES); + addFunction("EXPAND", EXPAND, 1, Value.BYTES); + addFunction("ZERO", ZERO, 0, Value.INT); + addFunctionNotDeterministic("RANDOM_UUID", RANDOM_UUID, 0, Value.UUID); + addFunctionNotDeterministic("SYS_GUID", RANDOM_UUID, 0, Value.UUID); + addFunctionNotDeterministic("UUID", RANDOM_UUID, 0, Value.UUID); + // string + addFunction("ASCII", ASCII, 1, Value.INT); + addFunction("BIT_LENGTH", BIT_LENGTH, 1, Value.LONG); + addFunction("CHAR", CHAR, 1, Value.STRING); + addFunction("CHR", CHAR, 1, Value.STRING); + addFunction("CHAR_LENGTH", CHAR_LENGTH, 1, Value.INT); + // same as CHAR_LENGTH + addFunction("CHARACTER_LENGTH", CHAR_LENGTH, 1, Value.INT); + addFunctionWithNull("CONCAT", CONCAT, VAR_ARGS, Value.STRING); + addFunctionWithNull("CONCAT_WS", CONCAT_WS, VAR_ARGS, Value.STRING); + addFunction("DIFFERENCE", DIFFERENCE, 2, Value.INT); + addFunction("HEXTORAW", HEXTORAW, 1, Value.STRING); + addFunctionWithNull("INSERT", INSERT, 4, 
Value.STRING); + addFunction("LCASE", LCASE, 1, Value.STRING); + addFunction("LEFT", LEFT, 2, Value.STRING); + addFunction("LENGTH", LENGTH, 1, Value.LONG); + // 2 or 3 arguments + addFunction("LOCATE", LOCATE, VAR_ARGS, Value.INT); + // alias for MSSQLServer + addFunction("CHARINDEX", LOCATE, VAR_ARGS, Value.INT); + // same as LOCATE with 2 arguments + addFunction("POSITION", LOCATE, 2, Value.INT); + addFunction("INSTR", INSTR, VAR_ARGS, Value.INT); + addFunction("LTRIM", LTRIM, VAR_ARGS, Value.STRING); + addFunction("OCTET_LENGTH", OCTET_LENGTH, 1, Value.LONG); + addFunction("RAWTOHEX", RAWTOHEX, 1, Value.STRING); + addFunction("REPEAT", REPEAT, 2, Value.STRING); + addFunction("REPLACE", REPLACE, VAR_ARGS, Value.STRING, false, true,true); + addFunction("RIGHT", RIGHT, 2, Value.STRING); + addFunction("RTRIM", RTRIM, VAR_ARGS, Value.STRING); + addFunction("SOUNDEX", SOUNDEX, 1, Value.STRING); + addFunction("SPACE", SPACE, 1, Value.STRING); + addFunction("SUBSTR", SUBSTR, VAR_ARGS, Value.STRING); + addFunction("SUBSTRING", SUBSTRING, VAR_ARGS, Value.STRING); + addFunction("UCASE", UCASE, 1, Value.STRING); + addFunction("LOWER", LOWER, 1, Value.STRING); + addFunction("UPPER", UPPER, 1, Value.STRING); + addFunction("POSITION", POSITION, 2, Value.INT); + addFunction("TRIM", TRIM, VAR_ARGS, Value.STRING); + addFunction("STRINGENCODE", STRINGENCODE, 1, Value.STRING); + addFunction("STRINGDECODE", STRINGDECODE, 1, Value.STRING); + addFunction("STRINGTOUTF8", STRINGTOUTF8, 1, Value.BYTES); + addFunction("UTF8TOSTRING", UTF8TOSTRING, 1, Value.STRING); + addFunction("XMLATTR", XMLATTR, 2, Value.STRING); + addFunctionWithNull("XMLNODE", XMLNODE, VAR_ARGS, Value.STRING); + addFunction("XMLCOMMENT", XMLCOMMENT, 1, Value.STRING); + addFunction("XMLCDATA", XMLCDATA, 1, Value.STRING); + addFunction("XMLSTARTDOC", XMLSTARTDOC, 0, Value.STRING); + addFunction("XMLTEXT", XMLTEXT, VAR_ARGS, Value.STRING); + addFunction("REGEXP_REPLACE", REGEXP_REPLACE, VAR_ARGS, Value.STRING); + 
addFunction("RPAD", RPAD, VAR_ARGS, Value.STRING); + addFunction("LPAD", LPAD, VAR_ARGS, Value.STRING); + addFunction("TO_CHAR", TO_CHAR, VAR_ARGS, Value.STRING); + addFunction("ORA_HASH", ORA_HASH, VAR_ARGS, Value.INT); + addFunction("TRANSLATE", TRANSLATE, 3, Value.STRING); + addFunction("REGEXP_LIKE", REGEXP_LIKE, VAR_ARGS, Value.BOOLEAN); + + // date + addFunctionNotDeterministic("CURRENT_DATE", CURRENT_DATE, + 0, Value.DATE); + addFunctionNotDeterministic("CURDATE", CURDATE, + 0, Value.DATE); + addFunctionNotDeterministic("TODAY", CURRENT_DATE, + 0, Value.DATE); + addFunction("TO_DATE", TO_DATE, VAR_ARGS, Value.TIMESTAMP); + addFunction("TO_TIMESTAMP", TO_TIMESTAMP, VAR_ARGS, Value.TIMESTAMP); + addFunction("ADD_MONTHS", ADD_MONTHS, 2, Value.TIMESTAMP); + addFunction("TO_TIMESTAMP_TZ", TO_TIMESTAMP_TZ, VAR_ARGS, Value.TIMESTAMP_TZ); + // alias for MSSQLServer + addFunctionNotDeterministic("GETDATE", CURDATE, + 0, Value.DATE); + addFunctionNotDeterministic("CURRENT_TIME", CURRENT_TIME, + 0, Value.TIME); + addFunctionNotDeterministic("SYSTIME", CURRENT_TIME, + 0, Value.TIME); + addFunctionNotDeterministic("CURTIME", CURTIME, + 0, Value.TIME); + addFunctionNotDeterministic("CURRENT_TIMESTAMP", CURRENT_TIMESTAMP, + VAR_ARGS, Value.TIMESTAMP); + addFunctionNotDeterministic("SYSDATE", CURRENT_TIMESTAMP, + VAR_ARGS, Value.TIMESTAMP); + addFunctionNotDeterministic("SYSTIMESTAMP", CURRENT_TIMESTAMP, + VAR_ARGS, Value.TIMESTAMP); + addFunctionNotDeterministic("NOW", NOW, + VAR_ARGS, Value.TIMESTAMP); + addFunction("DATEADD", DATE_ADD, + 3, Value.TIMESTAMP); + addFunction("TIMESTAMPADD", DATE_ADD, + 3, Value.TIMESTAMP); + addFunction("DATEDIFF", DATE_DIFF, + 3, Value.LONG); + addFunction("TIMESTAMPDIFF", DATE_DIFF, + 3, Value.LONG); + addFunction("DAYNAME", DAY_NAME, + 1, Value.STRING); + addFunction("DAYNAME", DAY_NAME, + 1, Value.STRING); + addFunction("DAY", DAY_OF_MONTH, + 1, Value.INT); + addFunction("DAY_OF_MONTH", DAY_OF_MONTH, + 1, Value.INT); + 
addFunction("DAY_OF_WEEK", DAY_OF_WEEK, + 1, Value.INT); + addFunction("DAY_OF_YEAR", DAY_OF_YEAR, + 1, Value.INT); + addFunction("DAYOFMONTH", DAY_OF_MONTH, + 1, Value.INT); + addFunction("DAYOFWEEK", DAY_OF_WEEK, + 1, Value.INT); + addFunction("DAYOFYEAR", DAY_OF_YEAR, + 1, Value.INT); + addFunction("HOUR", HOUR, + 1, Value.INT); + addFunction("MINUTE", MINUTE, + 1, Value.INT); + addFunction("MONTH", MONTH, + 1, Value.INT); + addFunction("MONTHNAME", MONTH_NAME, + 1, Value.STRING); + addFunction("QUARTER", QUARTER, + 1, Value.INT); + addFunction("SECOND", SECOND, + 1, Value.INT); + addFunction("WEEK", WEEK, + 1, Value.INT); + addFunction("YEAR", YEAR, + 1, Value.INT); + addFunction("EXTRACT", EXTRACT, + 2, Value.INT); + addFunctionWithNull("FORMATDATETIME", FORMATDATETIME, + VAR_ARGS, Value.STRING); + addFunctionWithNull("PARSEDATETIME", PARSEDATETIME, + VAR_ARGS, Value.TIMESTAMP); + addFunction("ISO_YEAR", ISO_YEAR, + 1, Value.INT); + addFunction("ISO_WEEK", ISO_WEEK, + 1, Value.INT); + addFunction("ISO_DAY_OF_WEEK", ISO_DAY_OF_WEEK, + 1, Value.INT); + addFunction("DATE_TRUNC", DATE_TRUNC, 2, Value.NULL); + // system + addFunctionNotDeterministic("DATABASE", DATABASE, + 0, Value.STRING); + addFunctionNotDeterministic("USER", USER, + 0, Value.STRING); + addFunctionNotDeterministic("CURRENT_USER", CURRENT_USER, + 0, Value.STRING); + addFunctionNotDeterministic("IDENTITY", IDENTITY, + 0, Value.LONG); + addFunctionNotDeterministic("SCOPE_IDENTITY", SCOPE_IDENTITY, + 0, Value.LONG); + addFunctionNotDeterministic("IDENTITY_VAL_LOCAL", IDENTITY, + 0, Value.LONG); + addFunctionNotDeterministic("LAST_INSERT_ID", IDENTITY, + 0, Value.LONG); + addFunctionNotDeterministic("LASTVAL", IDENTITY, + 0, Value.LONG); + addFunctionNotDeterministic("AUTOCOMMIT", AUTOCOMMIT, + 0, Value.BOOLEAN); + addFunctionNotDeterministic("READONLY", READONLY, + 0, Value.BOOLEAN); + addFunction("DATABASE_PATH", DATABASE_PATH, + 0, Value.STRING); + addFunctionNotDeterministic("LOCK_TIMEOUT", 
LOCK_TIMEOUT, + 0, Value.INT); + addFunctionWithNull("IFNULL", IFNULL, + 2, Value.NULL); + addFunctionWithNull("ISNULL", IFNULL, + 2, Value.NULL); + addFunctionWithNull("CASEWHEN", CASEWHEN, + 3, Value.NULL); + addFunctionWithNull("CONVERT", CONVERT, + 1, Value.NULL); + addFunctionWithNull("CAST", CAST, + 1, Value.NULL); + addFunctionWithNull("TRUNCATE_VALUE", TRUNCATE_VALUE, + 3, Value.NULL); + addFunctionWithNull("COALESCE", COALESCE, + VAR_ARGS, Value.NULL); + addFunctionWithNull("NVL", COALESCE, + VAR_ARGS, Value.NULL); + addFunctionWithNull("NVL2", NVL2, + 3, Value.NULL); + addFunctionWithNull("NULLIF", NULLIF, + 2, Value.NULL); + addFunctionWithNull("CASE", CASE, + VAR_ARGS, Value.NULL); + addFunctionNotDeterministic("NEXTVAL", NEXTVAL, + VAR_ARGS, Value.LONG); + addFunctionNotDeterministic("CURRVAL", CURRVAL, + VAR_ARGS, Value.LONG); + addFunction("ARRAY_GET", ARRAY_GET, + 2, Value.STRING); + addFunction("ARRAY_CONTAINS", ARRAY_CONTAINS, + 2, Value.BOOLEAN, false, true, true); + addFunction("CSVREAD", CSVREAD, + VAR_ARGS, Value.RESULT_SET, false, false, false); + addFunction("CSVWRITE", CSVWRITE, + VAR_ARGS, Value.INT, false, false, true); + addFunctionNotDeterministic("MEMORY_FREE", MEMORY_FREE, + 0, Value.INT); + addFunctionNotDeterministic("MEMORY_USED", MEMORY_USED, + 0, Value.INT); + addFunctionNotDeterministic("LOCK_MODE", LOCK_MODE, + 0, Value.INT); + addFunctionNotDeterministic("SCHEMA", SCHEMA, + 0, Value.STRING); + addFunctionNotDeterministic("SESSION_ID", SESSION_ID, + 0, Value.INT); + addFunction("ARRAY_LENGTH", ARRAY_LENGTH, + 1, Value.INT); + addFunctionNotDeterministic("LINK_SCHEMA", LINK_SCHEMA, + 6, Value.RESULT_SET); + addFunctionWithNull("LEAST", LEAST, + VAR_ARGS, Value.NULL); + addFunctionWithNull("GREATEST", GREATEST, + VAR_ARGS, Value.NULL); + addFunctionNotDeterministic("CANCEL_SESSION", CANCEL_SESSION, + 1, Value.BOOLEAN); + addFunction("SET", SET, + 2, Value.NULL, false, false, true); + addFunction("FILE_READ", FILE_READ, + 
VAR_ARGS, Value.NULL, false, false, true); + addFunction("FILE_WRITE", FILE_WRITE, + 2, Value.LONG, false, false, true); + addFunctionNotDeterministic("TRANSACTION_ID", TRANSACTION_ID, + 0, Value.STRING); + addFunctionWithNull("DECODE", DECODE, + VAR_ARGS, Value.NULL); + addFunctionNotDeterministic("DISK_SPACE_USED", DISK_SPACE_USED, + 1, Value.LONG); + addFunctionWithNull("SIGNAL", SIGNAL, 2, Value.NULL); + addFunction("H2VERSION", H2VERSION, 0, Value.STRING); + + // TableFunction + addFunctionWithNull("TABLE", TABLE, + VAR_ARGS, Value.RESULT_SET); + addFunctionWithNull("TABLE_DISTINCT", TABLE_DISTINCT, + VAR_ARGS, Value.RESULT_SET); + + // pseudo function + addFunctionWithNull("ROW_NUMBER", ROW_NUMBER, 0, Value.LONG); + + // ON DUPLICATE KEY VALUES function + addFunction("VALUES", VALUES, 1, Value.NULL, false, true, false); + } + + protected Function(Database database, FunctionInfo info) { + this.database = database; + this.info = info; + if (info.parameterCount == VAR_ARGS) { + varArgs = New.arrayList(); + } else { + args = new Expression[info.parameterCount]; + } + } + + private static void addFunction(String name, int type, int parameterCount, + int returnDataType, boolean nullIfParameterIsNull, boolean deterministic, + boolean bufferResultSetToLocalTemp) { + FunctionInfo info = new FunctionInfo(); + info.name = name; + info.type = type; + info.parameterCount = parameterCount; + info.returnDataType = returnDataType; + info.nullIfParameterIsNull = nullIfParameterIsNull; + info.deterministic = deterministic; + info.bufferResultSetToLocalTemp = bufferResultSetToLocalTemp; + FUNCTIONS.put(name, info); + } + + private static void addFunctionNotDeterministic(String name, int type, + int parameterCount, int returnDataType) { + addFunction(name, type, parameterCount, returnDataType, true, false, true); + } + + private static void addFunction(String name, int type, int parameterCount, + int returnDataType) { + addFunction(name, type, parameterCount, returnDataType, 
true, true, true); + } + + private static void addFunctionWithNull(String name, int type, + int parameterCount, int returnDataType) { + addFunction(name, type, parameterCount, returnDataType, false, true, true); + } + + /** + * Get the function info object for this function, or null if there is no + * such function. + * + * @param name the function name + * @return the function info + */ + private static FunctionInfo getFunctionInfo(String name) { + return FUNCTIONS.get(name); + } + + /** + * Get an instance of the given function for this database. + * If no function with this name is found, null is returned. + * + * @param database the database + * @param name the function name + * @return the function object or null + */ + public static Function getFunction(Database database, String name) { + if (!database.getSettings().databaseToUpper) { + // if not yet converted to uppercase, do it now + name = StringUtils.toUpperEnglish(name); + } + FunctionInfo info = getFunctionInfo(name); + if (info == null) { + return null; + } + switch (info.type) { + case TABLE: + case TABLE_DISTINCT: + return new TableFunction(database, info, Long.MAX_VALUE); + default: + return new Function(database, info); + } + } + + /** + * Set the parameter expression at the given index. + * + * @param index the index (0, 1,...) + * @param param the expression + */ + public void setParameter(int index, Expression param) { + if (varArgs != null) { + varArgs.add(param); + } else { + if (index >= args.length) { + throw DbException.get(ErrorCode.INVALID_PARAMETER_COUNT_2, + info.name, "" + args.length); + } + args[index] = param; + } + } + + @Override + public Value getValue(Session session) { + return getValueWithArgs(session, args); + } + + private Value getSimpleValue(Session session, Value v0, Expression[] args, + Value[] values) { + Value result; + switch (info.type) { + case ABS: + result = v0.getSignum() >= 0 ? 
v0 : v0.negate(); + break; + case ACOS: + result = ValueDouble.get(Math.acos(v0.getDouble())); + break; + case ASIN: + result = ValueDouble.get(Math.asin(v0.getDouble())); + break; + case ATAN: + result = ValueDouble.get(Math.atan(v0.getDouble())); + break; + case CEILING: + result = ValueDouble.get(Math.ceil(v0.getDouble())); + break; + case COS: + result = ValueDouble.get(Math.cos(v0.getDouble())); + break; + case COSH: + result = ValueDouble.get(Math.cosh(v0.getDouble())); + break; + case COT: { + double d = Math.tan(v0.getDouble()); + if (d == 0.0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + result = ValueDouble.get(1. / d); + break; + } + case DEGREES: + result = ValueDouble.get(Math.toDegrees(v0.getDouble())); + break; + case EXP: + result = ValueDouble.get(Math.exp(v0.getDouble())); + break; + case FLOOR: + result = ValueDouble.get(Math.floor(v0.getDouble())); + break; + case LN: + result = ValueDouble.get(Math.log(v0.getDouble())); + break; + case LOG: + if (database.getMode().logIsLogBase10) { + result = ValueDouble.get(Math.log10(v0.getDouble())); + } else { + result = ValueDouble.get(Math.log(v0.getDouble())); + } + break; + case LOG10: + result = ValueDouble.get(Math.log10(v0.getDouble())); + break; + case PI: + result = ValueDouble.get(Math.PI); + break; + case RADIANS: + result = ValueDouble.get(Math.toRadians(v0.getDouble())); + break; + case RAND: { + if (v0 != null) { + session.getRandom().setSeed(v0.getInt()); + } + result = ValueDouble.get(session.getRandom().nextDouble()); + break; + } + case ROUNDMAGIC: + result = ValueDouble.get(roundMagic(v0.getDouble())); + break; + case SIGN: + result = ValueInt.get(v0.getSignum()); + break; + case SIN: + result = ValueDouble.get(Math.sin(v0.getDouble())); + break; + case SINH: + result = ValueDouble.get(Math.sinh(v0.getDouble())); + break; + case SQRT: + result = ValueDouble.get(Math.sqrt(v0.getDouble())); + break; + case TAN: + result = 
ValueDouble.get(Math.tan(v0.getDouble())); + break; + case TANH: + result = ValueDouble.get(Math.tanh(v0.getDouble())); + break; + case SECURE_RAND: + result = ValueBytes.getNoCopy( + MathUtils.secureRandomBytes(v0.getInt())); + break; + case EXPAND: + result = ValueBytes.getNoCopy( + CompressTool.getInstance().expand(v0.getBytesNoCopy())); + break; + case ZERO: + result = ValueInt.get(0); + break; + case RANDOM_UUID: + result = ValueUuid.getNewRandom(); + break; + // string + case ASCII: { + String s = v0.getString(); + if (s.length() == 0) { + result = ValueNull.INSTANCE; + } else { + result = ValueInt.get(s.charAt(0)); + } + break; + } + case BIT_LENGTH: + result = ValueLong.get(16 * length(v0)); + break; + case CHAR: + result = ValueString.get(String.valueOf((char) v0.getInt()), + database.getMode().treatEmptyStringsAsNull); + break; + case CHAR_LENGTH: + case LENGTH: + result = ValueLong.get(length(v0)); + break; + case OCTET_LENGTH: + result = ValueLong.get(2 * length(v0)); + break; + case CONCAT_WS: + case CONCAT: { + result = ValueNull.INSTANCE; + int start = 0; + String separator = ""; + if (info.type == CONCAT_WS) { + start = 1; + separator = getNullOrValue(session, args, values, 0).getString(); + } + for (int i = start; i < args.length; i++) { + Value v = getNullOrValue(session, args, values, i); + if (v == ValueNull.INSTANCE) { + continue; + } + if (result == ValueNull.INSTANCE) { + result = v; + } else { + String tmp = v.getString(); + if (!StringUtils.isNullOrEmpty(separator) + && !StringUtils.isNullOrEmpty(tmp)) { + tmp = separator + tmp; + } + result = ValueString.get(result.getString() + tmp, + database.getMode().treatEmptyStringsAsNull); + } + } + if (info.type == CONCAT_WS) { + if (separator != null && result == ValueNull.INSTANCE) { + result = ValueString.get("", + database.getMode().treatEmptyStringsAsNull); + } + } + break; + } + case HEXTORAW: + result = ValueString.get(hexToRaw(v0.getString()), + database.getMode().treatEmptyStringsAsNull); 
+ break; + case LOWER: + case LCASE: + // TODO this is locale specific, need to document or provide a way + // to set the locale + result = ValueString.get(v0.getString().toLowerCase(), + database.getMode().treatEmptyStringsAsNull); + break; + case RAWTOHEX: + result = ValueString.get(rawToHex(v0.getString()), + database.getMode().treatEmptyStringsAsNull); + break; + case SOUNDEX: + result = ValueString.get(getSoundex(v0.getString()), + database.getMode().treatEmptyStringsAsNull); + break; + case SPACE: { + int len = Math.max(0, v0.getInt()); + char[] chars = new char[len]; + for (int i = len - 1; i >= 0; i--) { + chars[i] = ' '; + } + result = ValueString.get(new String(chars), + database.getMode().treatEmptyStringsAsNull); + break; + } + case UPPER: + case UCASE: + // TODO this is locale specific, need to document or provide a way + // to set the locale + result = ValueString.get(v0.getString().toUpperCase(), + database.getMode().treatEmptyStringsAsNull); + break; + case STRINGENCODE: + result = ValueString.get(StringUtils.javaEncode(v0.getString()), + database.getMode().treatEmptyStringsAsNull); + break; + case STRINGDECODE: + result = ValueString.get(StringUtils.javaDecode(v0.getString()), + database.getMode().treatEmptyStringsAsNull); + break; + case STRINGTOUTF8: + result = ValueBytes.getNoCopy(v0.getString(). 
+ getBytes(StandardCharsets.UTF_8)); + break; + case UTF8TOSTRING: + result = ValueString.get(new String(v0.getBytesNoCopy(), + StandardCharsets.UTF_8), + database.getMode().treatEmptyStringsAsNull); + break; + case XMLCOMMENT: + result = ValueString.get(StringUtils.xmlComment(v0.getString()), + database.getMode().treatEmptyStringsAsNull); + break; + case XMLCDATA: + result = ValueString.get(StringUtils.xmlCData(v0.getString()), + database.getMode().treatEmptyStringsAsNull); + break; + case XMLSTARTDOC: + result = ValueString.get(StringUtils.xmlStartDoc(), + database.getMode().treatEmptyStringsAsNull); + break; + case DAY_NAME: { + int dayOfWeek = DateTimeUtils.getSundayDayOfWeek(DateTimeUtils.dateAndTimeFromValue(v0)[0]); + result = ValueString.get(DateTimeFunctions.getMonthsAndWeeks(1)[dayOfWeek], + database.getMode().treatEmptyStringsAsNull); + break; + } + case DAY_OF_MONTH: + case DAY_OF_WEEK: + case DAY_OF_YEAR: + case HOUR: + case MINUTE: + case MONTH: + case QUARTER: + case ISO_YEAR: + case ISO_WEEK: + case ISO_DAY_OF_WEEK: + case SECOND: + case WEEK: + case YEAR: + result = ValueInt.get(DateTimeFunctions.getIntDatePart(v0, info.type)); + break; + case MONTH_NAME: { + int month = DateTimeUtils.monthFromDateValue(DateTimeUtils.dateAndTimeFromValue(v0)[0]); + result = ValueString.get(DateTimeFunctions.getMonthsAndWeeks(0)[month - 1], + database.getMode().treatEmptyStringsAsNull); + break; + } + case CURDATE: + case CURRENT_DATE: { + long now = session.getTransactionStart(); + // need to normalize + result = ValueDate.fromMillis(now); + break; + } + case CURTIME: + case CURRENT_TIME: { + long now = session.getTransactionStart(); + // need to normalize + result = ValueTime.fromMillis(now); + break; + } + case NOW: + case CURRENT_TIMESTAMP: { + long now = session.getTransactionStart(); + ValueTimestamp vt = ValueTimestamp.fromMillis(now); + if (v0 != null) { + Mode mode = database.getMode(); + vt = (ValueTimestamp) vt.convertScale( + 
mode.convertOnlyToSmallerScale, v0.getInt()); + } + result = vt; + break; + } + case DATABASE: + result = ValueString.get(database.getShortName(), + database.getMode().treatEmptyStringsAsNull); + break; + case USER: + case CURRENT_USER: + result = ValueString.get(session.getUser().getName(), + database.getMode().treatEmptyStringsAsNull); + break; + case IDENTITY: + result = session.getLastIdentity(); + break; + case SCOPE_IDENTITY: + result = session.getLastScopeIdentity(); + break; + case AUTOCOMMIT: + result = ValueBoolean.get(session.getAutoCommit()); + break; + case READONLY: + result = ValueBoolean.get(database.isReadOnly()); + break; + case DATABASE_PATH: { + String path = database.getDatabasePath(); + result = path == null ? + (Value) ValueNull.INSTANCE : ValueString.get(path, + database.getMode().treatEmptyStringsAsNull); + break; + } + case LOCK_TIMEOUT: + result = ValueInt.get(session.getLockTimeout()); + break; + case DISK_SPACE_USED: + result = ValueLong.get(getDiskSpaceUsed(session, v0)); + break; + case CAST: + case CONVERT: { + v0 = v0.convertTo(dataType); + Mode mode = database.getMode(); + v0 = v0.convertScale(mode.convertOnlyToSmallerScale, scale); + v0 = v0.convertPrecision(getPrecision(), false); + result = v0; + break; + } + case MEMORY_FREE: + session.getUser().checkAdmin(); + result = ValueInt.get(Utils.getMemoryFree()); + break; + case MEMORY_USED: + session.getUser().checkAdmin(); + result = ValueInt.get(Utils.getMemoryUsed()); + break; + case LOCK_MODE: + result = ValueInt.get(database.getLockMode()); + break; + case SCHEMA: + result = ValueString.get(session.getCurrentSchemaName(), + database.getMode().treatEmptyStringsAsNull); + break; + case SESSION_ID: + result = ValueInt.get(session.getId()); + break; + case IFNULL: { + result = v0; + if (v0 == ValueNull.INSTANCE) { + result = getNullOrValue(session, args, values, 1); + } + result = convertResult(result); + break; + } + case CASEWHEN: { + Value v; + if (!v0.getBoolean()) { + v = 
getNullOrValue(session, args, values, 2);
+            } else {
+                v = getNullOrValue(session, args, values, 1);
+            }
+            result = v.convertTo(dataType);
+            break;
+        }
+        case DECODE: {
+            int index = -1;
+            for (int i = 1, len = args.length - 1; i < len; i += 2) {
+                if (database.areEqual(v0,
+                        getNullOrValue(session, args, values, i))) {
+                    index = i + 1;
+                    break;
+                }
+            }
+            if (index < 0 && args.length % 2 == 0) {
+                index = args.length - 1;
+            }
+            Value v = index < 0 ? ValueNull.INSTANCE :
+                    getNullOrValue(session, args, values, index);
+            result = v.convertTo(dataType);
+            break;
+        }
+        case NVL2: {
+            Value v;
+            if (v0 == ValueNull.INSTANCE) {
+                v = getNullOrValue(session, args, values, 2);
+            } else {
+                v = getNullOrValue(session, args, values, 1);
+            }
+            result = v.convertTo(dataType);
+            break;
+        }
+        case COALESCE: {
+            result = v0;
+            for (int i = 0; i < args.length; i++) {
+                Value v = getNullOrValue(session, args, values, i);
+                if (!(v == ValueNull.INSTANCE)) {
+                    result = v.convertTo(dataType);
+                    break;
+                }
+            }
+            break;
+        }
+        case GREATEST:
+        case LEAST: {
+            result = ValueNull.INSTANCE;
+            for (int i = 0; i < args.length; i++) {
+                Value v = getNullOrValue(session, args, values, i);
+                if (!(v == ValueNull.INSTANCE)) {
+                    v = v.convertTo(dataType);
+                    if (result == ValueNull.INSTANCE) {
+                        result = v;
+                    } else {
+                        int comp = database.compareTypeSafe(result, v);
+                        if (info.type == GREATEST && comp < 0) {
+                            result = v;
+                        } else if (info.type == LEAST && comp > 0) {
+                            result = v;
+                        }
+                    }
+                }
+            }
+            break;
+        }
+        case CASE: {
+            Expression then = null;
+            if (v0 == null) {
+                // Searched CASE expression
+                // (null, when, then)
+                // (null, when, then, else)
+                // (null, when, then, when, then)
+                // (null, when, then, when, then, else)
+                for (int i = 1, len = args.length - 1; i < len; i += 2) {
+                    Value when = args[i].getValue(session);
+                    if (when.getBoolean()) {
+                        then = args[i + 1];
+                        break;
+                    }
+                }
+            } else {
+                // Simple CASE expression
+                // (expr, when, then)
+                // (expr, when, then, else)
+                // (expr, when, then, when, then)
+                // (expr, when, then, when, then, else)
+                if (!(v0 == ValueNull.INSTANCE)) {
+                    for (int i = 1, len = args.length - 1; i < len; i += 2) {
+                        Value when = args[i].getValue(session);
+                        if (database.areEqual(v0, when)) {
+                            then = args[i + 1];
+                            break;
+                        }
+                    }
+                }
+            }
+            if (then == null && args.length % 2 == 0) {
+                // then = elsePart
+                then = args[args.length - 1];
+            }
+            Value v = then == null ? ValueNull.INSTANCE : then.getValue(session);
+            result = v.convertTo(dataType);
+            break;
+        }
+        case ARRAY_GET: {
+            if (v0.getType() == Value.ARRAY) {
+                Value v1 = getNullOrValue(session, args, values, 1);
+                int element = v1.getInt();
+                Value[] list = ((ValueArray) v0).getList();
+                if (element < 1 || element > list.length) {
+                    result = ValueNull.INSTANCE;
+                } else {
+                    result = list[element - 1];
+                }
+            } else {
+                result = ValueNull.INSTANCE;
+            }
+            break;
+        }
+        case ARRAY_LENGTH: {
+            if (v0.getType() == Value.ARRAY) {
+                Value[] list = ((ValueArray) v0).getList();
+                result = ValueInt.get(list.length);
+            } else {
+                result = ValueNull.INSTANCE;
+            }
+            break;
+        }
+        case ARRAY_CONTAINS: {
+            result = ValueBoolean.FALSE;
+            if (v0.getType() == Value.ARRAY) {
+                Value v1 = getNullOrValue(session, args, values, 1);
+                Value[] list = ((ValueArray) v0).getList();
+                for (Value v : list) {
+                    if (v.equals(v1)) {
+                        result = ValueBoolean.TRUE;
+                        break;
+                    }
+                }
+            }
+            break;
+        }
+        case CANCEL_SESSION: {
+            result = ValueBoolean.get(cancelStatement(session, v0.getInt()));
+            break;
+        }
+        case TRANSACTION_ID: {
+            result = session.getTransactionId();
+            break;
+        }
+        default:
+            result = null;
+        }
+        return result;
+    }
+
+    private Value convertResult(Value v) {
+        return v.convertTo(dataType);
+    }
+
+    private static boolean cancelStatement(Session session, int targetSessionId) {
+        session.getUser().checkAdmin();
+        Session[] sessions = session.getDatabase().getSessions(false);
+        for (Session s : sessions) {
+            if (s.getId() == targetSessionId) {
+                Command c = s.getCurrentCommand();
+                if (c == null) {
+                    return false;
+                }
+                c.cancel();
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private static long getDiskSpaceUsed(Session session, Value v0) {
+        Parser p = new Parser(session);
+        String sql = v0.getString();
+        Table table = p.parseTableName(sql);
+        return table.getDiskSpaceUsed();
+    }
+
+    private static Value getNullOrValue(Session session, Expression[] args,
+            Value[] values, int i) {
+        if (i >= args.length) {
+            return null;
+        }
+        Value v = values[i];
+        if (v == null) {
+            Expression e = args[i];
+            if (e == null) {
+                return null;
+            }
+            v = values[i] = e.getValue(session);
+        }
+        return v;
+    }
+
+    private Value getValueWithArgs(Session session, Expression[] args) {
+        Value[] values = new Value[args.length];
+        if (info.nullIfParameterIsNull) {
+            for (int i = 0; i < args.length; i++) {
+                Expression e = args[i];
+                Value v = e.getValue(session);
+                if (v == ValueNull.INSTANCE) {
+                    return ValueNull.INSTANCE;
+                }
+                values[i] = v;
+            }
+        }
+        Value v0 = getNullOrValue(session, args, values, 0);
+        Value resultSimple = getSimpleValue(session, v0, args, values);
+        if (resultSimple != null) {
+            return resultSimple;
+        }
+        Value v1 = getNullOrValue(session, args, values, 1);
+        Value v2 = getNullOrValue(session, args, values, 2);
+        Value v3 = getNullOrValue(session, args, values, 3);
+        Value v4 = getNullOrValue(session, args, values, 4);
+        Value v5 = getNullOrValue(session, args, values, 5);
+        Value result;
+        switch (info.type) {
+        case ATAN2:
+            result = ValueDouble.get(
+                    Math.atan2(v0.getDouble(), v1.getDouble()));
+            break;
+        case BITAND:
+            result = ValueLong.get(v0.getLong() & v1.getLong());
+            break;
+        case BITGET:
+            result = ValueBoolean.get((v0.getLong() & (1L << v1.getInt())) != 0);
+            break;
+        case BITOR:
+            result = ValueLong.get(v0.getLong() | v1.getLong());
+            break;
+        case BITXOR:
+            result = ValueLong.get(v0.getLong() ^ v1.getLong());
+            break;
+        case MOD: {
+            long x = v1.getLong();
+            if (x == 0) {
+                throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL());
+            }
+            result = ValueLong.get(v0.getLong() % x);
+            break;
+        }
+        case POWER:
+            result = ValueDouble.get(Math.pow(
+                    v0.getDouble(), v1.getDouble()));
+            break;
+        case ROUND: {
+            double f = v1 == null ? 1. : Math.pow(10., v1.getDouble());
+
+            double middleResult = v0.getDouble() * f;
+
+            int oneWithSymbol = middleResult > 0 ? 1 : -1;
+            result = ValueDouble.get(Math.round(Math.abs(middleResult)) / f * oneWithSymbol);
+            break;
+        }
+        case TRUNCATE: {
+            if (v0.getType() == Value.TIMESTAMP) {
+                result = ValueTimestamp.fromDateValueAndNanos(((ValueTimestamp) v0).getDateValue(), 0);
+            } else if (v0.getType() == Value.DATE) {
+                result = ValueTimestamp.fromDateValueAndNanos(((ValueDate) v0).getDateValue(), 0);
+            } else if (v0.getType() == Value.TIMESTAMP_TZ) {
+                ValueTimestampTimeZone ts = (ValueTimestampTimeZone) v0;
+                result = ValueTimestampTimeZone.fromDateValueAndNanos(ts.getDateValue(), 0,
+                        ts.getTimeZoneOffsetMins());
+            } else if (v0.getType() == Value.STRING) {
+                ValueTimestamp ts = ValueTimestamp.parse(v0.getString(), session.getDatabase().getMode());
+                result = ValueTimestamp.fromDateValueAndNanos(ts.getDateValue(), 0);
+            } else {
+                double d = v0.getDouble();
+                int p = v1 == null ? 0 : v1.getInt();
+                double f = Math.pow(10., p);
+                double g = d * f;
+                result = ValueDouble.get(((d < 0) ? Math.ceil(g) : Math.floor(g)) / f);
+            }
+            break;
+        }
+        case HASH:
+            result = ValueBytes.getNoCopy(getHash(v0.getString(),
+                    v1.getBytesNoCopy(), v2.getInt()));
+            break;
+        case ENCRYPT:
+            result = ValueBytes.getNoCopy(encrypt(v0.getString(),
+                    v1.getBytesNoCopy(), v2.getBytesNoCopy()));
+            break;
+        case DECRYPT:
+            result = ValueBytes.getNoCopy(decrypt(v0.getString(),
+                    v1.getBytesNoCopy(), v2.getBytesNoCopy()));
+            break;
+        case COMPRESS: {
+            String algorithm = null;
+            if (v1 != null) {
+                algorithm = v1.getString();
+            }
+            result = ValueBytes.getNoCopy(CompressTool.getInstance().
+                    compress(v0.getBytesNoCopy(), algorithm));
+            break;
+        }
+        case DIFFERENCE:
+            result = ValueInt.get(getDifference(
+                    v0.getString(), v1.getString()));
+            break;
+        case INSERT: {
+            if (v1 == ValueNull.INSTANCE || v2 == ValueNull.INSTANCE) {
+                result = v1;
+            } else {
+                result = ValueString.get(insert(v0.getString(),
+                        v1.getInt(), v2.getInt(), v3.getString()),
+                        database.getMode().treatEmptyStringsAsNull);
+            }
+            break;
+        }
+        case LEFT:
+            result = ValueString.get(left(v0.getString(), v1.getInt()),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case LOCATE: {
+            int start = v2 == null ? 0 : v2.getInt();
+            result = ValueInt.get(locate(v0.getString(), v1.getString(), start));
+            break;
+        }
+        case INSTR: {
+            int start = v2 == null ? 0 : v2.getInt();
+            result = ValueInt.get(locate(v1.getString(), v0.getString(), start));
+            break;
+        }
+        case REPEAT: {
+            int count = Math.max(0, v1.getInt());
+            result = ValueString.get(repeat(v0.getString(), count),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        }
+        case REPLACE: {
+            if (v0 == ValueNull.INSTANCE || v1 == ValueNull.INSTANCE
+                    || v2 == ValueNull.INSTANCE && database.getMode().getEnum() != Mode.ModeEnum.Oracle) {
+                result = ValueNull.INSTANCE;
+            } else {
+                String s0 = v0.getString();
+                String s1 = v1.getString();
+                String s2 = (v2 == null) ? "" : v2.getString();
+                if (s2 == null) {
+                    s2 = "";
+                }
+                result = ValueString.get(StringUtils.replaceAll(s0, s1, s2),
+                        database.getMode().treatEmptyStringsAsNull);
+            }
+            break;
+        }
+        case RIGHT:
+            result = ValueString.get(right(v0.getString(), v1.getInt()),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case LTRIM:
+            result = ValueString.get(StringUtils.trim(v0.getString(),
+                    true, false, v1 == null ? " " : v1.getString()),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case TRIM:
+            result = ValueString.get(StringUtils.trim(v0.getString(),
+                    true, true, v1 == null ? " " : v1.getString()),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case RTRIM:
+            result = ValueString.get(StringUtils.trim(v0.getString(),
+                    false, true, v1 == null ? " " : v1.getString()),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case SUBSTR:
+        case SUBSTRING: {
+            String s = v0.getString();
+            int offset = v1.getInt();
+            if (offset < 0) {
+                offset = s.length() + offset + 1;
+            }
+            int length = v2 == null ? s.length() : v2.getInt();
+            result = ValueString.get(substring(s, offset, length),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        }
+        case POSITION:
+            result = ValueInt.get(locate(v0.getString(), v1.getString(), 0));
+            break;
+        case XMLATTR:
+            result = ValueString.get(
+                    StringUtils.xmlAttr(v0.getString(), v1.getString()),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case XMLNODE: {
+            String attr = v1 == null ?
+                    null : v1 == ValueNull.INSTANCE ? null : v1.getString();
+            String content = v2 == null ?
+                    null : v2 == ValueNull.INSTANCE ? null : v2.getString();
+            boolean indent = v3 == null ?
+                    true : v3.getBoolean();
+            result = ValueString.get(StringUtils.xmlNode(
+                    v0.getString(), attr, content, indent),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        }
+        case REGEXP_REPLACE: {
+            String regexp = v1.getString();
+            String replacement = v2.getString();
+            if (database.getMode().regexpReplaceBackslashReferences) {
+                if ((replacement.indexOf('\\') >= 0) || (replacement.indexOf('$') >= 0)) {
+                    StringBuilder sb = new StringBuilder();
+                    for (int i = 0; i < replacement.length(); i++) {
+                        char c = replacement.charAt(i);
+                        if (c == '$') {
+                            sb.append('\\');
+                        } else if (c == '\\' && ++i < replacement.length()) {
+                            c = replacement.charAt(i);
+                            sb.append(c >= '0' && c <= '9' ? '$' : '\\');
+                        }
+                        sb.append(c);
+                    }
+                    replacement = sb.toString();
+                }
+            }
+            String regexpMode = v3 == null || v3.getString() == null ? "" :
+                    v3.getString();
+            int flags = makeRegexpFlags(regexpMode);
+            try {
+                result = ValueString.get(
+                        Pattern.compile(regexp, flags).matcher(v0.getString())
+                                .replaceAll(replacement),
+                        database.getMode().treatEmptyStringsAsNull);
+            } catch (StringIndexOutOfBoundsException e) {
+                throw DbException.get(
+                        ErrorCode.LIKE_ESCAPE_ERROR_1, e, replacement);
+            } catch (PatternSyntaxException e) {
+                throw DbException.get(
+                        ErrorCode.LIKE_ESCAPE_ERROR_1, e, regexp);
+            } catch (IllegalArgumentException e) {
+                throw DbException.get(
+                        ErrorCode.LIKE_ESCAPE_ERROR_1, e, replacement);
+            }
+            break;
+        }
+        case RPAD:
+            result = ValueString.get(StringUtils.pad(v0.getString(),
+                    v1.getInt(), v2 == null ? null : v2.getString(), true),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case LPAD:
+            result = ValueString.get(StringUtils.pad(v0.getString(),
+                    v1.getInt(), v2 == null ? null : v2.getString(), false),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case ORA_HASH:
+            result = ValueLong.get(oraHash(v0.getString(),
+                    v1 == null ? null : v1.getInt(),
+                    v2 == null ? null : v2.getInt()));
+            break;
+        case TO_CHAR:
+            switch (v0.getType()) {
+            case Value.TIME:
+            case Value.DATE:
+            case Value.TIMESTAMP:
+            case Value.TIMESTAMP_TZ:
+                result = ValueString.get(ToChar.toCharDateTime(v0,
+                        v1 == null ? null : v1.getString(),
+                        v2 == null ? null : v2.getString()),
+                        database.getMode().treatEmptyStringsAsNull);
+                break;
+            case Value.SHORT:
+            case Value.INT:
+            case Value.LONG:
+            case Value.DECIMAL:
+            case Value.DOUBLE:
+            case Value.FLOAT:
+                result = ValueString.get(ToChar.toChar(v0.getBigDecimal(),
+                        v1 == null ? null : v1.getString(),
+                        v2 == null ? null : v2.getString()),
+                        database.getMode().treatEmptyStringsAsNull);
+                break;
+            default:
+                result = ValueString.get(v0.getString(),
+                        database.getMode().treatEmptyStringsAsNull);
+            }
+            break;
+        case TO_DATE:
+            result = ToDateParser.toDate(v0.getString(),
+                    v1 == null ? null : v1.getString());
+            break;
+        case TO_TIMESTAMP:
+            result = ToDateParser.toTimestamp(v0.getString(),
+                    v1 == null ? null : v1.getString());
+            break;
+        case ADD_MONTHS:
+            result = DateTimeFunctions.dateadd("MONTH", v1.getInt(), v0);
+            break;
+        case TO_TIMESTAMP_TZ:
+            result = ToDateParser.toTimestampTz(v0.getString(),
+                    v1 == null ? null : v1.getString());
+            break;
+        case TRANSLATE: {
+            String matching = v1.getString();
+            String replacement = v2.getString();
+            result = ValueString.get(
+                    translate(v0.getString(), matching, replacement),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        }
+        case H2VERSION:
+            result = ValueString.get(Constants.getVersion(),
+                    database.getMode().treatEmptyStringsAsNull);
+            break;
+        case DATE_ADD:
+            result = DateTimeFunctions.dateadd(v0.getString(), v1.getLong(), v2);
+            break;
+        case DATE_DIFF:
+            result = ValueLong.get(DateTimeFunctions.datediff(v0.getString(), v1, v2));
+            break;
+        case DATE_TRUNC:
+            result = DateTimeFunctions.truncateDate(v0.getString(), v1);
+            break;
+        case EXTRACT:
+            result = DateTimeFunctions.extract(v0.getString(), v1);
+            break;
+        case FORMATDATETIME: {
+            if (v0 == ValueNull.INSTANCE || v1 == ValueNull.INSTANCE) {
+                result = ValueNull.INSTANCE;
+            } else {
+                String locale = v2 == null ?
+                        null : v2 == ValueNull.INSTANCE ? null : v2.getString();
+                String tz = v3 == null ?
+                        null : v3 == ValueNull.INSTANCE ? null : v3.getString();
+                if (v0 instanceof ValueTimestampTimeZone) {
+                    tz = DateTimeUtils.timeZoneNameFromOffsetMins(
+                            ((ValueTimestampTimeZone) v0).getTimeZoneOffsetMins());
+                }
+                result = ValueString.get(DateTimeFunctions.formatDateTime(
+                        v0.getTimestamp(), v1.getString(), locale, tz),
+                        database.getMode().treatEmptyStringsAsNull);
+            }
+            break;
+        }
+        case PARSEDATETIME: {
+            if (v0 == ValueNull.INSTANCE || v1 == ValueNull.INSTANCE) {
+                result = ValueNull.INSTANCE;
+            } else {
+                String locale = v2 == null ?
+                        null : v2 == ValueNull.INSTANCE ? null : v2.getString();
+                String tz = v3 == null ?
+                        null : v3 == ValueNull.INSTANCE ? null : v3.getString();
+                java.util.Date d = DateTimeFunctions.parseDateTime(
+                        v0.getString(), v1.getString(), locale, tz);
+                result = ValueTimestamp.fromMillis(d.getTime());
+            }
+            break;
+        }
+        case NULLIF:
+            result = database.areEqual(v0, v1) ? ValueNull.INSTANCE : v0;
+            break;
+        // system
+        case NEXTVAL: {
+            Sequence sequence = getSequence(session, v0, v1);
+            SequenceValue value = new SequenceValue(sequence);
+            result = value.getValue(session);
+            break;
+        }
+        case CURRVAL: {
+            Sequence sequence = getSequence(session, v0, v1);
+            result = ValueLong.get(sequence.getCurrentValue());
+            break;
+        }
+        case CSVREAD: {
+            String fileName = v0.getString();
+            String columnList = v1 == null ? null : v1.getString();
+            Csv csv = new Csv();
+            String options = v2 == null ? null : v2.getString();
+            String charset = null;
+            if (options != null && options.indexOf('=') >= 0) {
+                charset = csv.setOptions(options);
+            } else {
+                charset = options;
+                String fieldSeparatorRead = v3 == null ? null : v3.getString();
+                String fieldDelimiter = v4 == null ? null : v4.getString();
+                String escapeCharacter = v5 == null ? null : v5.getString();
+                Value v6 = getNullOrValue(session, args, values, 6);
+                String nullString = v6 == null ? null : v6.getString();
+                setCsvDelimiterEscape(csv, fieldSeparatorRead, fieldDelimiter,
+                        escapeCharacter);
+                csv.setNullString(nullString);
+            }
+            char fieldSeparator = csv.getFieldSeparatorRead();
+            String[] columns = StringUtils.arraySplit(columnList,
+                    fieldSeparator, true);
+            try {
+                result = ValueResultSet.get(csv.read(fileName,
+                        columns, charset));
+            } catch (SQLException e) {
+                throw DbException.convert(e);
+            }
+            break;
+        }
+        case LINK_SCHEMA: {
+            session.getUser().checkAdmin();
+            Connection conn = session.createConnection(false);
+            ResultSet rs = LinkSchema.linkSchema(conn, v0.getString(),
+                    v1.getString(), v2.getString(), v3.getString(),
+                    v4.getString(), v5.getString());
+            result = ValueResultSet.get(rs);
+            break;
+        }
+        case CSVWRITE: {
+            session.getUser().checkAdmin();
+            Connection conn = session.createConnection(false);
+            Csv csv = new Csv();
+            String options = v2 == null ? null : v2.getString();
+            String charset = null;
+            if (options != null && options.indexOf('=') >= 0) {
+                charset = csv.setOptions(options);
+            } else {
+                charset = options;
+                String fieldSeparatorWrite = v3 == null ? null : v3.getString();
+                String fieldDelimiter = v4 == null ? null : v4.getString();
+                String escapeCharacter = v5 == null ? null : v5.getString();
+                Value v6 = getNullOrValue(session, args, values, 6);
+                String nullString = v6 == null ? null : v6.getString();
+                Value v7 = getNullOrValue(session, args, values, 7);
+                String lineSeparator = v7 == null ? null : v7.getString();
+                setCsvDelimiterEscape(csv, fieldSeparatorWrite, fieldDelimiter,
+                        escapeCharacter);
+                csv.setNullString(nullString);
+                if (lineSeparator != null) {
+                    csv.setLineSeparator(lineSeparator);
+                }
+            }
+            try {
+                int rows = csv.write(conn, v0.getString(), v1.getString(),
+                        charset);
+                result = ValueInt.get(rows);
+            } catch (SQLException e) {
+                throw DbException.convert(e);
+            }
+            break;
+        }
+        case SET: {
+            Variable var = (Variable) args[0];
+            session.setVariable(var.getName(), v1);
+            result = v1;
+            break;
+        }
+        case FILE_READ: {
+            session.getUser().checkAdmin();
+            String fileName = v0.getString();
+            boolean blob = args.length == 1;
+            try {
+                long fileLength = FileUtils.size(fileName);
+                final InputStream in = FileUtils.newInputStream(fileName);
+                try {
+                    if (blob) {
+                        result = database.getLobStorage().createBlob(in, fileLength);
+                    } else {
+                        Reader reader;
+                        if (v1 == ValueNull.INSTANCE) {
+                            reader = new InputStreamReader(in);
+                        } else {
+                            reader = new InputStreamReader(in, v1.getString());
+                        }
+                        result = database.getLobStorage().createClob(reader, fileLength);
+                    }
+                } finally {
+                    IOUtils.closeSilently(in);
+                }
+                session.addTemporaryLob(result);
+            } catch (IOException e) {
+                throw DbException.convertIOException(e, fileName);
+            }
+            break;
+        }
+        case FILE_WRITE: {
+            session.getUser().checkAdmin();
+            result = ValueNull.INSTANCE;
+            String fileName = v1.getString();
+            try {
+                FileOutputStream fileOutputStream = new FileOutputStream(fileName);
+                try (InputStream in = v0.getInputStream()) {
+                    result = ValueLong.get(IOUtils.copyAndClose(in,
+                            fileOutputStream));
+                }
+            } catch (IOException e) {
+                throw DbException.convertIOException(e, fileName);
+            }
+            break;
+        }
+        case TRUNCATE_VALUE: {
+            result = v0.convertPrecision(v1.getLong(), v2.getBoolean());
+            break;
+        }
+        case XMLTEXT:
+            if (v1 == null) {
+                result = ValueString.get(StringUtils.xmlText(
+                        v0.getString()),
+                        database.getMode().treatEmptyStringsAsNull);
+            } else {
+                result = ValueString.get(StringUtils.xmlText(
+                        v0.getString(), v1.getBoolean()),
+                        database.getMode().treatEmptyStringsAsNull);
+            }
+            break;
+        case REGEXP_LIKE: {
+            String regexp = v1.getString();
+            String regexpMode = v2 == null || v2.getString() == null ? "" :
+                    v2.getString();
+            int flags = makeRegexpFlags(regexpMode);
+            try {
+                result = ValueBoolean.get(Pattern.compile(regexp, flags)
+                        .matcher(v0.getString()).find());
+            } catch (PatternSyntaxException e) {
+                throw DbException.get(ErrorCode.LIKE_ESCAPE_ERROR_1, e, regexp);
+            }
+            break;
+        }
+        case VALUES:
+            result = session.getVariable(args[0].getSchemaName() + "." +
+                    args[0].getTableName() + "." + args[0].getColumnName());
+            break;
+        case SIGNAL: {
+            String sqlState = v0.getString();
+            if (sqlState.startsWith("00") || !SIGNAL_PATTERN.matcher(sqlState).matches()) {
+                throw DbException.getInvalidValueException("SQLSTATE", sqlState);
+            }
+            String msgText = v1.getString();
+            throw DbException.fromUser(sqlState, msgText);
+        }
+        default:
+            throw DbException.throwInternalError("type=" + info.type);
+        }
+        return result;
+    }
+
+    private Sequence getSequence(Session session, Value v0, Value v1) {
+        String schemaName, sequenceName;
+        if (v1 == null) {
+            Parser p = new Parser(session);
+            String sql = v0.getString();
+            Expression expr = p.parseExpression(sql);
+            if (expr instanceof ExpressionColumn) {
+                ExpressionColumn seq = (ExpressionColumn) expr;
+                schemaName = seq.getOriginalTableAliasName();
+                if (schemaName == null) {
+                    schemaName = session.getCurrentSchemaName();
+                    sequenceName = sql;
+                } else {
+                    sequenceName = seq.getColumnName();
+                }
+            } else {
+                throw DbException.getSyntaxError(sql, 1);
+            }
+        } else {
+            schemaName = v0.getString();
+            sequenceName = v1.getString();
+        }
+        Schema s = database.findSchema(schemaName);
+        if (s == null) {
+            schemaName = StringUtils.toUpperEnglish(schemaName);
+            s = database.getSchema(schemaName);
+        }
+        Sequence seq = s.findSequence(sequenceName);
+        if (seq == null) {
+            sequenceName = StringUtils.toUpperEnglish(sequenceName);
+            seq = s.getSequence(sequenceName);
+        }
+        return seq;
+    }
+
+    private static long length(Value v) {
+        switch (v.getType()) {
+        case Value.BLOB:
+        case Value.CLOB:
+        case Value.BYTES:
+        case Value.JAVA_OBJECT:
+            return v.getPrecision();
+        default:
+            return v.getString().length();
+        }
+    }
+
+    private static byte[] getPaddedArrayCopy(byte[] data, int blockSize) {
+        int size = MathUtils.roundUpInt(data.length, blockSize);
+        return Utils.copyBytes(data, size);
+    }
+
+    private static byte[] decrypt(String algorithm, byte[] key, byte[] data) {
+        BlockCipher cipher = CipherFactory.getBlockCipher(algorithm);
+        byte[] newKey = getPaddedArrayCopy(key, cipher.getKeyLength());
+        cipher.setKey(newKey);
+        byte[] newData = getPaddedArrayCopy(data, BlockCipher.ALIGN);
+        cipher.decrypt(newData, 0, newData.length);
+        return newData;
+    }
+
+    private static byte[] encrypt(String algorithm, byte[] key, byte[] data) {
+        BlockCipher cipher = CipherFactory.getBlockCipher(algorithm);
+        byte[] newKey = getPaddedArrayCopy(key, cipher.getKeyLength());
+        cipher.setKey(newKey);
+        byte[] newData = getPaddedArrayCopy(data, BlockCipher.ALIGN);
+        cipher.encrypt(newData, 0, newData.length);
+        return newData;
+    }
+
+    private static byte[] getHash(String algorithm, byte[] bytes, int iterations) {
+        if (!"SHA256".equalsIgnoreCase(algorithm)) {
+            throw DbException.getInvalidValueException("algorithm", algorithm);
+        }
+        for (int i = 0; i < iterations; i++) {
+            bytes = SHA256.getHash(bytes, false);
+        }
+        return bytes;
+    }
+
+    private static String substring(String s, int start, int length) {
+        int len = s.length();
+        start--;
+        if (start < 0) {
+            start = 0;
+        }
+        if (length < 0) {
+            length = 0;
+        }
+        start = (start > len) ? len : start;
+        if (start + length > len) {
+            length = len - start;
+        }
+        return s.substring(start, start + length);
+    }
+
+    private static String repeat(String s, int count) {
+        StringBuilder buff = new StringBuilder(s.length() * count);
+        while (count-- > 0) {
+            buff.append(s);
+        }
+        return buff.toString();
+    }
+
+    private static String rawToHex(String s) {
+        int length = s.length();
+        StringBuilder buff = new StringBuilder(4 * length);
+        for (int i = 0; i < length; i++) {
+            String hex = Integer.toHexString(s.charAt(i) & 0xffff);
+            for (int j = hex.length(); j < 4; j++) {
+                buff.append('0');
+            }
+            buff.append(hex);
+        }
+        return buff.toString();
+    }
+
+    private static int locate(String search, String s, int start) {
+        if (start < 0) {
+            int i = s.length() + start;
+            return s.lastIndexOf(search, i) + 1;
+        }
+        int i = (start == 0) ? 0 : start - 1;
+        return s.indexOf(search, i) + 1;
+    }
+
+    private static String right(String s, int count) {
+        if (count < 0) {
+            count = 0;
+        } else if (count > s.length()) {
+            count = s.length();
+        }
+        return s.substring(s.length() - count);
+    }
+
+    private static String left(String s, int count) {
+        if (count < 0) {
+            count = 0;
+        } else if (count > s.length()) {
+            count = s.length();
+        }
+        return s.substring(0, count);
+    }
+
+    private static String insert(String s1, int start, int length, String s2) {
+        if (s1 == null) {
+            return s2;
+        }
+        if (s2 == null) {
+            return s1;
+        }
+        int len1 = s1.length();
+        int len2 = s2.length();
+        start--;
+        if (start < 0 || length <= 0 || len2 == 0 || start > len1) {
+            return s1;
+        }
+        if (start + length > len1) {
+            length = len1 - start;
+        }
+        return s1.substring(0, start) + s2 + s1.substring(start + length);
+    }
+
+    private static String hexToRaw(String s) {
+        // TODO function hextoraw compatibility with oracle
+        int len = s.length();
+        if (len % 4 != 0) {
+            throw DbException.get(ErrorCode.DATA_CONVERSION_ERROR_1, s);
+        }
+        StringBuilder buff = new StringBuilder(len / 4);
+        for (int i = 0; i < len; i += 4) {
+            try {
+                char raw = (char) Integer.parseInt(s.substring(i, i + 4), 16);
+                buff.append(raw);
+            } catch (NumberFormatException e) {
+                throw DbException.get(ErrorCode.DATA_CONVERSION_ERROR_1, s);
+            }
+        }
+        return buff.toString();
+    }
+
+    private static int getDifference(String s1, String s2) {
+        // TODO function difference: compatibility with SQL Server and HSQLDB
+        s1 = getSoundex(s1);
+        s2 = getSoundex(s2);
+        int e = 0;
+        for (int i = 0; i < 4; i++) {
+            if (s1.charAt(i) == s2.charAt(i)) {
+                e++;
+            }
+        }
+        return e;
+    }
+
+    private static String translate(String original, String findChars,
+            String replaceChars) {
+        if (StringUtils.isNullOrEmpty(original) ||
+                StringUtils.isNullOrEmpty(findChars)) {
+            return original;
+        }
+        // if it stays null, then no replacements have been made
+        StringBuilder buff = null;
+        // if shorter than findChars, then characters are removed
+        // (if null, we don't access replaceChars at all)
+        int replaceSize = replaceChars == null ? 0 : replaceChars.length();
+        for (int i = 0, size = original.length(); i < size; i++) {
+            char ch = original.charAt(i);
+            int index = findChars.indexOf(ch);
+            if (index >= 0) {
+                if (buff == null) {
+                    buff = new StringBuilder(size);
+                    if (i > 0) {
+                        buff.append(original, 0, i);
+                    }
+                }
+                if (index < replaceSize) {
+                    ch = replaceChars.charAt(index);
+                }
+            }
+            if (buff != null) {
+                buff.append(ch);
+            }
+        }
+        return buff == null ? original : buff.toString();
+    }
+
+    private static double roundMagic(double d) {
+        if ((d < 0.000_000_000_000_1) && (d > -0.000_000_000_000_1)) {
+            return 0.0;
+        }
+        if ((d > 1_000_000_000_000d) || (d < -1_000_000_000_000d)) {
+            return d;
+        }
+        StringBuilder s = new StringBuilder();
+        s.append(d);
+        if (s.toString().indexOf('E') >= 0) {
+            return d;
+        }
+        int len = s.length();
+        if (len < 16) {
+            return d;
+        }
+        if (s.toString().indexOf('.') > len - 3) {
+            return d;
+        }
+        s.delete(len - 2, len);
+        len -= 2;
+        char c1 = s.charAt(len - 2);
+        char c2 = s.charAt(len - 3);
+        char c3 = s.charAt(len - 4);
+        if ((c1 == '0') && (c2 == '0') && (c3 == '0')) {
+            s.setCharAt(len - 1, '0');
+        } else if ((c1 == '9') && (c2 == '9') && (c3 == '9')) {
+            s.setCharAt(len - 1, '9');
+            s.append('9');
+            s.append('9');
+            s.append('9');
+        }
+        return Double.parseDouble(s.toString());
+    }
+
+    private static String getSoundex(String s) {
+        int len = s.length();
+        char[] chars = { '0', '0', '0', '0' };
+        char lastDigit = '0';
+        for (int i = 0, j = 0; i < len && j < 4; i++) {
+            char c = s.charAt(i);
+            char newDigit = c > SOUNDEX_INDEX.length ?
+                    0 : SOUNDEX_INDEX[c];
+            if (newDigit != 0) {
+                if (j == 0) {
+                    chars[j++] = c;
+                    lastDigit = newDigit;
+                } else if (newDigit <= '6') {
+                    if (newDigit != lastDigit) {
+                        chars[j++] = newDigit;
+                        lastDigit = newDigit;
+                    }
+                } else if (newDigit == '7') {
+                    lastDigit = newDigit;
+                }
+            }
+        }
+        return new String(chars);
+    }
+
+    private static Integer oraHash(String s, Integer bucket, Integer seed) {
+        int hc = s.hashCode();
+        if (seed != null && seed.intValue() != 0) {
+            hc *= seed.intValue() * 17;
+        }
+        if (bucket == null || bucket.intValue() <= 0) {
+            // do nothing
+        } else {
+            hc %= bucket.intValue();
+        }
+        return hc;
+    }
+
+    private static int makeRegexpFlags(String stringFlags) {
+        int flags = Pattern.UNICODE_CASE;
+        if (stringFlags != null) {
+            for (int i = 0; i < stringFlags.length(); ++i) {
+                switch (stringFlags.charAt(i)) {
+                case 'i':
+                    flags |= Pattern.CASE_INSENSITIVE;
+                    break;
+                case 'c':
+                    flags &= ~Pattern.CASE_INSENSITIVE;
+                    break;
+                case 'n':
+                    flags |= Pattern.DOTALL;
+                    break;
+                case 'm':
+                    flags |= Pattern.MULTILINE;
+                    break;
+                default:
+                    throw DbException.get(ErrorCode.INVALID_VALUE_2, stringFlags);
+                }
+            }
+        }
+        return flags;
+    }
+
+    @Override
+    public int getType() {
+        return dataType;
+    }
+
+    @Override
+    public void mapColumns(ColumnResolver resolver, int level) {
+        for (Expression e : args) {
+            if (e != null) {
+                e.mapColumns(resolver, level);
+            }
+        }
+    }
+
+    /**
+     * Check if the parameter count is correct.
+     *
+     * @param len the number of parameters set
+     * @throws DbException if the parameter count is incorrect
+     */
+    protected void checkParameterCount(int len) {
+        int min = 0, max = Integer.MAX_VALUE;
+        switch (info.type) {
+        case COALESCE:
+        case CSVREAD:
+        case LEAST:
+        case GREATEST:
+            min = 1;
+            break;
+        case NOW:
+        case CURRENT_TIMESTAMP:
+        case RAND:
+            max = 1;
+            break;
+        case COMPRESS:
+        case LTRIM:
+        case RTRIM:
+        case TRIM:
+        case FILE_READ:
+        case ROUND:
+        case XMLTEXT:
+        case TRUNCATE:
+        case TO_TIMESTAMP:
+        case TO_TIMESTAMP_TZ:
+            min = 1;
+            max = 2;
+            break;
+        case DATE_TRUNC:
+            min = 2;
+            max = 2;
+            break;
+        case TO_CHAR:
+        case TO_DATE:
+            min = 1;
+            max = 3;
+            break;
+        case ORA_HASH:
+            min = 1;
+            max = 3;
+            break;
+        case REPLACE:
+        case LOCATE:
+        case INSTR:
+        case SUBSTR:
+        case SUBSTRING:
+        case LPAD:
+        case RPAD:
+            min = 2;
+            max = 3;
+            break;
+        case CONCAT:
+        case CONCAT_WS:
+        case CSVWRITE:
+            min = 2;
+            break;
+        case XMLNODE:
+            min = 1;
+            max = 4;
+            break;
+        case FORMATDATETIME:
+        case PARSEDATETIME:
+            min = 2;
+            max = 4;
+            break;
+        case CURRVAL:
+        case NEXTVAL:
+            min = 1;
+            max = 2;
+            break;
+        case DECODE:
+        case CASE:
+            min = 3;
+            break;
+        case REGEXP_REPLACE:
+            min = 3;
+            max = 4;
+            break;
+        case REGEXP_LIKE:
+            min = 2;
+            max = 3;
+            break;
+        default:
+            DbException.throwInternalError("type=" + info.type);
+        }
+        boolean ok = (len >= min) && (len <= max);
+        if (!ok) {
+            throw DbException.get(
+                    ErrorCode.INVALID_PARAMETER_COUNT_2,
+                    info.name, min + ".." + max);
+        }
+    }
+
+    /**
+     * This method is called after all the parameters have been set.
+     * It checks if the parameter count is correct.
+     *
+     * @throws DbException if the parameter count is incorrect.
+     */
+    public void doneWithParameters() {
+        if (info.parameterCount == VAR_ARGS) {
+            checkParameterCount(varArgs.size());
+            args = varArgs.toArray(new Expression[0]);
+            varArgs = null;
+        } else {
+            int len = args.length;
+            if (len > 0 && args[len - 1] == null) {
+                throw DbException.get(
+                        ErrorCode.INVALID_PARAMETER_COUNT_2,
+                        info.name, "" + len);
+            }
+        }
+    }
+
+    public void setDataType(Column col) {
+        dataType = col.getType();
+        precision = col.getPrecision();
+        displaySize = col.getDisplaySize();
+        scale = col.getScale();
+    }
+
+    @Override
+    public Expression optimize(Session session) {
+        boolean allConst = info.deterministic;
+        for (int i = 0; i < args.length; i++) {
+            Expression e = args[i];
+            if (e == null) {
+                continue;
+            }
+            e = e.optimize(session);
+            args[i] = e;
+            if (!e.isConstant()) {
+                allConst = false;
+            }
+        }
+        int t, s, d;
+        long p;
+        Expression p0 = args.length < 1 ? null : args[0];
+        switch (info.type) {
+        case IFNULL:
+        case NULLIF:
+        case COALESCE:
+        case LEAST:
+        case GREATEST: {
+            t = Value.UNKNOWN;
+            s = 0;
+            p = 0;
+            d = 0;
+            for (Expression e : args) {
+                if (e != ValueExpression.getNull()) {
+                    int type = e.getType();
+                    if (type != Value.UNKNOWN && type != Value.NULL) {
+                        t = Value.getHigherOrder(t, type);
+                        s = Math.max(s, e.getScale());
+                        p = Math.max(p, e.getPrecision());
+                        d = Math.max(d, e.getDisplaySize());
+                    }
+                }
+            }
+            if (t == Value.UNKNOWN) {
+                t = Value.STRING;
+                s = 0;
+                p = Integer.MAX_VALUE;
+                d = Integer.MAX_VALUE;
+            }
+            break;
+        }
+        case CASE:
+        case DECODE: {
+            t = Value.UNKNOWN;
+            s = 0;
+            p = 0;
+            d = 0;
+            // (expr, when, then)
+            // (expr, when, then, else)
+            // (expr, when, then, when, then)
+            // (expr, when, then, when, then, else)
+            for (int i = 2, len = args.length; i < len; i += 2) {
+                Expression then = args[i];
+                if (then != ValueExpression.getNull()) {
+                    int type = then.getType();
+                    if (type != Value.UNKNOWN && type != Value.NULL) {
+                        t = Value.getHigherOrder(t, type);
+                        s = Math.max(s, then.getScale());
+                        p = Math.max(p, then.getPrecision());
+                        d = Math.max(d, then.getDisplaySize());
+                    }
+                }
+            }
+            if (args.length % 2 == 0) {
+                Expression elsePart = args[args.length - 1];
+                if (elsePart != ValueExpression.getNull()) {
+                    int type = elsePart.getType();
+                    if (type != Value.UNKNOWN && type != Value.NULL) {
+                        t = Value.getHigherOrder(t, type);
+                        s = Math.max(s, elsePart.getScale());
+                        p = Math.max(p, elsePart.getPrecision());
+                        d = Math.max(d, elsePart.getDisplaySize());
+                    }
+                }
+            }
+            if (t == Value.UNKNOWN) {
+                t = Value.STRING;
+                s = 0;
+                p = Integer.MAX_VALUE;
+                d = Integer.MAX_VALUE;
+            }
+            break;
+        }
+        case CASEWHEN:
+            t = Value.getHigherOrder(args[1].getType(), args[2].getType());
+            p = Math.max(args[1].getPrecision(), args[2].getPrecision());
+            d = Math.max(args[1].getDisplaySize(), args[2].getDisplaySize());
+            s = Math.max(args[1].getScale(), args[2].getScale());
+            break;
+        case NVL2:
+            switch (args[1].getType()) {
+            case Value.STRING:
+            case Value.CLOB:
+            case Value.STRING_FIXED:
+            case Value.STRING_IGNORECASE:
+                t = args[1].getType();
+                break;
+            default:
+                t = Value.getHigherOrder(args[1].getType(), args[2].getType());
+                break;
+            }
+            p = Math.max(args[1].getPrecision(), args[2].getPrecision());
+            d = Math.max(args[1].getDisplaySize(), args[2].getDisplaySize());
+            s = Math.max(args[1].getScale(), args[2].getScale());
+            break;
+        case CAST:
+        case CONVERT:
+        case TRUNCATE_VALUE:
+            // data type, precision and scale is already set
+            t = dataType;
+            p = precision;
+            s = scale;
+            d = displaySize;
+            break;
+        case TRUNCATE:
+            t = p0.getType();
+            s = p0.getScale();
+            p = p0.getPrecision();
+            d = p0.getDisplaySize();
+            if (t == Value.NULL) {
+                t = Value.INT;
+                p = ValueInt.PRECISION;
+                d = ValueInt.DISPLAY_SIZE;
+                s = 0;
+            } else if (t == Value.TIMESTAMP) {
+                t = Value.DATE;
+                p = ValueDate.PRECISION;
+                s = 0;
+                d = ValueDate.PRECISION;
+            }
+            break;
+        case ABS:
+        case FLOOR:
+        case ROUND:
+            t = p0.getType();
+            s = p0.getScale();
+            p = p0.getPrecision();
+            d = p0.getDisplaySize();
+            if (t == Value.NULL) {
+                t = Value.INT;
+                p = ValueInt.PRECISION;
+                d = ValueInt.DISPLAY_SIZE;
+                s = 0;
+            }
+            break;
+        case SET: {
+            Expression p1 = args[1];
+            t = p1.getType();
+            p = p1.getPrecision();
+            s = p1.getScale();
+            d = p1.getDisplaySize();
+            if (!(p0 instanceof Variable)) {
+                throw DbException.get(
+                        ErrorCode.CAN_ONLY_ASSIGN_TO_VARIABLE_1, p0.getSQL());
+            }
+            break;
+        }
+        case FILE_READ: {
+            if (args.length == 1) {
+                t = Value.BLOB;
+            } else {
+                t = Value.CLOB;
+            }
+            p = Integer.MAX_VALUE;
+            s = 0;
+            d = Integer.MAX_VALUE;
+            break;
+        }
+        case SUBSTRING:
+        case SUBSTR: {
+            t = info.returnDataType;
+            p = args[0].getPrecision();
+            s = 0;
+            if (args[1].isConstant()) {
+                // if only two arguments are used,
+                // subtract offset from first argument length
+                p -= args[1].getValue(session).getLong() - 1;
+            }
+            if (args.length == 3 && args[2].isConstant()) {
+                // if the third argument is constant it is at most this value
+                p = Math.min(p, args[2].getValue(session).getLong());
+            }
+            p = Math.max(0, p);
+            d = MathUtils.convertLongToInt(p);
+            break;
+        }
+        default:
+            t = info.returnDataType;
+            DataType type = DataType.getDataType(t);
+            p = PRECISION_UNKNOWN;
+            d = 0;
+            s = type.defaultScale;
+        }
+        dataType = t;
+        precision = p;
+        scale = s;
+        displaySize = d;
+        if (allConst) {
+            Value v = getValue(session);
+            if (v == ValueNull.INSTANCE) {
+                if (info.type == CAST || info.type == CONVERT) {
+                    return this;
+                }
+            }
+            return ValueExpression.get(v);
+        }
+        return this;
+    }
+
+    @Override
+    public void setEvaluatable(TableFilter tableFilter, boolean b) {
+        for (Expression e : args) {
+            if (e != null) {
+                e.setEvaluatable(tableFilter, b);
+            }
+        }
+    }
+
+    @Override
+    public int getScale() {
+        return scale;
+    }
+
+    @Override
+    public long getPrecision() {
+        if (precision == PRECISION_UNKNOWN) {
+            calculatePrecisionAndDisplaySize();
+        }
+        return precision;
+    }
+
+    @Override
+    public int getDisplaySize() {
+        if (precision == PRECISION_UNKNOWN) {
+            calculatePrecisionAndDisplaySize();
+        }
+        return displaySize;
+    }
+
+    private void calculatePrecisionAndDisplaySize() {
+        switch (info.type) {
+        case ENCRYPT:
+        case DECRYPT:
+            precision = args[2].getPrecision();
+            displaySize = args[2].getDisplaySize();
+            break;
+        case COMPRESS:
+            precision = args[0].getPrecision();
+            displaySize = args[0].getDisplaySize();
+            break;
+        case CHAR:
+            precision = 1;
+            displaySize = 1;
+            break;
+        case CONCAT:
+            precision = 0;
+            displaySize = 0;
+            for (Expression e : args) {
+                precision += e.getPrecision();
+                displaySize = MathUtils.convertLongToInt(
+                        (long) displaySize + e.getDisplaySize());
+                if (precision < 0) {
+                    precision = Long.MAX_VALUE;
+                }
+            }
+            break;
+        case HEXTORAW:
+            precision = (args[0].getPrecision() + 3) / 4;
+            displaySize = MathUtils.convertLongToInt(precision);
+            break;
+        case LCASE:
+        case LTRIM:
+        case RIGHT:
+        case RTRIM:
+        case UCASE:
+        case LOWER:
+        case UPPER:
+        case TRIM:
+        case STRINGDECODE:
+        case UTF8TOSTRING:
+        case TRUNCATE:
+            precision = args[0].getPrecision();
+            displaySize = args[0].getDisplaySize();
+            break;
+        case RAWTOHEX:
+            precision = args[0].getPrecision() * 4;
+            displaySize = MathUtils.convertLongToInt(precision);
+            break;
+        case SOUNDEX:
+            precision = 4;
+            displaySize = (int) precision;
+            break;
+        case DAY_NAME:
+        case MONTH_NAME:
+            // day and month names may be long in some languages
+            precision = 20;
+            displaySize = (int) precision;
+            break;
+        default:
+            DataType type = DataType.getDataType(dataType);
+            precision = type.defaultPrecision;
+            displaySize = type.defaultDisplaySize;
+        }
+    }
+
+    @Override
+    public String getSQL() {
+        StatementBuilder buff = new StatementBuilder(info.name);
+        if (info.type == CASE) {
+            if (args[0] != null) {
+                buff.append(" ").append(args[0].getSQL());
+            }
+            for (int i = 1, len = args.length - 1; i < len; i += 2) {
+                buff.append(" WHEN ").append(args[i].getSQL());
+                buff.append(" THEN ").append(args[i + 1].getSQL());
+            }
+            if (args.length % 2 == 0) {
+                buff.append(" 
ELSE ").append(args[args.length - 1].getSQL()); + } + return buff.append(" END").toString(); + } + buff.append('('); + switch (info.type) { + case CAST: { + buff.append(args[0].getSQL()).append(" AS "). + append(new Column(null, dataType, precision, + scale, displaySize).getCreateSQL()); + break; + } + case CONVERT: { + if (database.getMode().swapConvertFunctionParameters) { + buff.append(new Column(null, dataType, precision, + scale, displaySize).getCreateSQL()). + append(',').append(args[0].getSQL()); + } else { + buff.append(args[0].getSQL()).append(','). + append(new Column(null, dataType, precision, + scale, displaySize).getCreateSQL()); + } + break; + } + case EXTRACT: { + ValueString v = (ValueString) ((ValueExpression) args[0]).getValue(null); + buff.append(v.getString()).append(" FROM ").append(args[1].getSQL()); + break; + } + default: { + for (Expression e : args) { + buff.appendExceptFirst(", "); + buff.append(e.getSQL()); + } + } + } + return buff.append(')').toString(); + } + + @Override + public void updateAggregate(Session session) { + for (Expression e : args) { + if (e != null) { + e.updateAggregate(session); + } + } + } + + public int getFunctionType() { + return info.type; + } + + @Override + public String getName() { + return info.name; + } + + @Override + public ValueResultSet getValueForColumnList(Session session, + Expression[] argList) { + switch (info.type) { + case CSVREAD: { + String fileName = argList[0].getValue(session).getString(); + if (fileName == null) { + throw DbException.get(ErrorCode.PARAMETER_NOT_SET_1, "fileName"); + } + String columnList = argList.length < 2 ? + null : argList[1].getValue(session).getString(); + Csv csv = new Csv(); + String options = argList.length < 3 ? + null : argList[2].getValue(session).getString(); + String charset = null; + if (options != null && options.indexOf('=') >= 0) { + charset = csv.setOptions(options); + } else { + charset = options; + String fieldSeparatorRead = argList.length < 4 ? 
+ null : argList[3].getValue(session).getString(); + String fieldDelimiter = argList.length < 5 ? + null : argList[4].getValue(session).getString(); + String escapeCharacter = argList.length < 6 ? + null : argList[5].getValue(session).getString(); + setCsvDelimiterEscape(csv, fieldSeparatorRead, fieldDelimiter, + escapeCharacter); + } + char fieldSeparator = csv.getFieldSeparatorRead(); + String[] columns = StringUtils.arraySplit(columnList, fieldSeparator, true); + ResultSet rs = null; + ValueResultSet x; + try { + rs = csv.read(fileName, columns, charset); + x = ValueResultSet.getCopy(rs, 0); + } catch (SQLException e) { + throw DbException.convert(e); + } finally { + csv.close(); + JdbcUtils.closeSilently(rs); + } + return x; + } + default: + break; + } + return (ValueResultSet) getValueWithArgs(session, argList); + } + + private static void setCsvDelimiterEscape(Csv csv, String fieldSeparator, + String fieldDelimiter, String escapeCharacter) { + if (fieldSeparator != null) { + csv.setFieldSeparatorWrite(fieldSeparator); + if (fieldSeparator.length() > 0) { + char fs = fieldSeparator.charAt(0); + csv.setFieldSeparatorRead(fs); + } + } + if (fieldDelimiter != null) { + char fd = fieldDelimiter.length() == 0 ? + 0 : fieldDelimiter.charAt(0); + csv.setFieldDelimiter(fd); + } + if (escapeCharacter != null) { + char ec = escapeCharacter.length() == 0 ? 
+ 0 : escapeCharacter.charAt(0); + csv.setEscapeCharacter(ec); + } + } + + @Override + public Expression[] getArgs() { + return args; + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + for (Expression e : args) { + if (e != null && !e.isEverything(visitor)) { + return false; + } + } + switch (visitor.getType()) { + case ExpressionVisitor.DETERMINISTIC: + case ExpressionVisitor.QUERY_COMPARABLE: + case ExpressionVisitor.READONLY: + return info.deterministic; + case ExpressionVisitor.EVALUATABLE: + case ExpressionVisitor.GET_DEPENDENCIES: + case ExpressionVisitor.INDEPENDENT: + case ExpressionVisitor.NOT_FROM_RESOLVER: + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: + case ExpressionVisitor.GET_COLUMNS: + return true; + default: + throw DbException.throwInternalError("type=" + visitor.getType()); + } + } + + @Override + public int getCost() { + int cost = 3; + for (Expression e : args) { + if (e != null) { + cost += e.getCost(); + } + } + return cost; + } + + @Override + public boolean isDeterministic() { + return info.deterministic; + } + + @Override + public boolean isBufferResultSetToLocalTemp() { + return info.bufferResultSetToLocalTemp; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/FunctionCall.java b/modules/h2/src/main/java/org/h2/expression/FunctionCall.java new file mode 100644 index 0000000000000..b25de0fcad2d1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/FunctionCall.java @@ -0,0 +1,77 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Session; +import org.h2.value.ValueResultSet; + +/** + * This interface is used by the built-in functions, + * as well as the user-defined functions. 
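The setCsvDelimiterEscape logic above follows a small convention: a null option leaves the default untouched, an empty string disables the character (mapped to '\0'), and otherwise the first character of the option wins. A standalone sketch of that rule, assuming a hypothetical CsvOptions holder class in place of org.h2.tools.Csv (and simplifying the separate read/write separator setters into one field):

```java
// Hypothetical stand-in for org.h2.tools.Csv, illustrating the
// null / empty-string / first-character convention used by
// setCsvDelimiterEscape above.
public class CsvOptions {
    char fieldSeparator = ',';
    char fieldDelimiter = '"';
    char escapeCharacter = '"';

    // empty string -> '\0', which disables the character
    static char firstCharOrZero(String s) {
        return s.isEmpty() ? 0 : s.charAt(0);
    }

    void apply(String separator, String delimiter, String escape) {
        if (separator != null && separator.length() > 0) {
            fieldSeparator = separator.charAt(0);
        }
        if (delimiter != null) {
            fieldDelimiter = firstCharOrZero(delimiter);
        }
        if (escape != null) {
            escapeCharacter = firstCharOrZero(escape);
        }
    }
}
```

So CSVREAD('data.csv', NULL, NULL, ';', '') would end up with ';' as separator and no field delimiter, while leaving the escape character at its default.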
+ */ +public interface FunctionCall { + + /** + * Get the name of the function. + * + * @return the name + */ + String getName(); + + /** + * Get an empty result set with the column names set. + * + * @param session the session + * @param nullArgs the argument list (some arguments may be null) + * @return the empty result set + */ + ValueResultSet getValueForColumnList(Session session, Expression[] nullArgs); + + /** + * Get the data type. + * + * @return the data type + */ + int getType(); + + /** + * Optimize the function if possible. + * + * @param session the session + * @return the optimized expression + */ + Expression optimize(Session session); + + /** + * Get the function arguments. + * + * @return argument list + */ + Expression[] getArgs(); + + /** + * Get the SQL snippet of the function (including arguments). + * + * @return the SQL snippet. + */ + String getSQL(); + + /** + * Whether the function always returns the same result for the same + * parameters. + * + * @return true if it does + */ + boolean isDeterministic(); + + /** + * Should the return value ResultSet be buffered in a local temporary file? + * + * @return true if it should be. + */ + boolean isBufferResultSetToLocalTemp(); + +} diff --git a/modules/h2/src/main/java/org/h2/expression/FunctionInfo.java b/modules/h2/src/main/java/org/h2/expression/FunctionInfo.java new file mode 100644 index 0000000000000..20359392d300c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/FunctionInfo.java @@ -0,0 +1,48 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +/** + * This class contains information about a built-in function. + */ +class FunctionInfo { + + /** + * The name of the function. + */ + String name; + + /** + * The function type. + */ + int type; + + /** + * The data type of the return value. 
+ */ + int returnDataType; + + /** + * The number of parameters. + */ + int parameterCount; + + /** + * Whether the result of the function is NULL if any of the parameters is NULL. + */ + boolean nullIfParameterIsNull; + + /** + * Whether this function always returns the same value for the same parameters. + */ + boolean deterministic; + + /** + * Should the return value ResultSet be buffered in a local temporary file? + */ + boolean bufferResultSetToLocalTemp = true; + +} diff --git a/modules/h2/src/main/java/org/h2/expression/JavaAggregate.java b/modules/h2/src/main/java/org/h2/expression/JavaAggregate.java new file mode 100644 index 0000000000000..e39ff1973b5fc --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/JavaAggregate.java @@ -0,0 +1,231 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.sql.Connection; +import java.sql.SQLException; +import java.util.HashMap; +import org.h2.api.Aggregate; +import org.h2.api.ErrorCode; +import org.h2.command.Parser; +import org.h2.command.dml.Select; +import org.h2.engine.Session; +import org.h2.engine.UserAggregate; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.StatementBuilder; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * This class wraps a user-defined aggregate. 
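The nullIfParameterIsNull flag in FunctionInfo drives a short-circuit: a built-in with this flag set returns NULL as soon as any argument is NULL, without ever invoking the implementation. A minimal standalone sketch of that rule, using java.util.function.Function as a stand-in for the built-in's body (the class and method names here are hypothetical):

```java
import java.util.function.Function;

// Sketch of a nullIfParameterIsNull-style check, consulted before a
// built-in function body runs.
public class NullShortCircuit {
    static Object invoke(boolean nullIfParameterIsNull,
                         Function<Object[], Object> impl, Object... args) {
        if (nullIfParameterIsNull) {
            for (Object a : args) {
                if (a == null) {
                    return null; // implementation is never reached
                }
            }
        }
        return impl.apply(args);
    }

    public static void main(String[] unused) {
        Function<Object[], Object> upper = a -> ((String) a[0]).toUpperCase();
        System.out.println(invoke(true, upper, (Object) null)); // null
        System.out.println(invoke(true, upper, "abc"));         // ABC
    }
}
```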
+ */ +public class JavaAggregate extends Expression { + + private final UserAggregate userAggregate; + private final Select select; + private final Expression[] args; + private int[] argTypes; + private Expression filterCondition; + private int dataType; + private Connection userConnection; + private int lastGroupRowId; + + public JavaAggregate(UserAggregate userAggregate, Expression[] args, + Select select, Expression filterCondition) { + this.userAggregate = userAggregate; + this.args = args; + this.select = select; + this.filterCondition = filterCondition; + } + + @Override + public int getCost() { + int cost = 5; + for (Expression e : args) { + cost += e.getCost(); + } + if (filterCondition != null) { + cost += filterCondition.getCost(); + } + return cost; + } + + @Override + public long getPrecision() { + return Integer.MAX_VALUE; + } + + @Override + public int getDisplaySize() { + return Integer.MAX_VALUE; + } + + @Override + public int getScale() { + return DataType.getDataType(dataType).defaultScale; + } + + @Override + public String getSQL() { + StatementBuilder buff = new StatementBuilder(); + buff.append(Parser.quoteIdentifier(userAggregate.getName())).append('('); + for (Expression e : args) { + buff.appendExceptFirst(", "); + buff.append(e.getSQL()); + } + buff.append(')'); + if (filterCondition != null) { + buff.append(" FILTER (WHERE ").append(filterCondition.getSQL()).append(')'); + } + return buff.toString(); + } + + @Override + public int getType() { + return dataType; + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.DETERMINISTIC: + // TODO optimization: some functions are deterministic, but we don't + // know (no setting for that) + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + // user defined aggregate functions can not be optimized + return false; + case ExpressionVisitor.GET_DEPENDENCIES: + visitor.addDependency(userAggregate); + break; + default: + 
} + for (Expression e : args) { + if (e != null && !e.isEverything(visitor)) { + return false; + } + } + return filterCondition == null || filterCondition.isEverything(visitor); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + for (Expression arg : args) { + arg.mapColumns(resolver, level); + } + if (filterCondition != null) { + filterCondition.mapColumns(resolver, level); + } + } + + @Override + public Expression optimize(Session session) { + userConnection = session.createConnection(false); + int len = args.length; + argTypes = new int[len]; + for (int i = 0; i < len; i++) { + Expression expr = args[i]; + args[i] = expr.optimize(session); + int type = expr.getType(); + argTypes[i] = type; + } + try { + Aggregate aggregate = getInstance(); + dataType = aggregate.getInternalType(argTypes); + } catch (SQLException e) { + throw DbException.convert(e); + } + if (filterCondition != null) { + filterCondition = filterCondition.optimize(session); + } + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + for (Expression e : args) { + e.setEvaluatable(tableFilter, b); + } + if (filterCondition != null) { + filterCondition.setEvaluatable(tableFilter, b); + } + } + + private Aggregate getInstance() throws SQLException { + Aggregate agg = userAggregate.getInstance(); + agg.init(userConnection); + return agg; + } + + @Override + public Value getValue(Session session) { + HashMap group = select.getCurrentGroup(); + if (group == null) { + throw DbException.get(ErrorCode.INVALID_USE_OF_AGGREGATE_FUNCTION_1, getSQL()); + } + try { + Aggregate agg = (Aggregate) group.get(this); + if (agg == null) { + agg = getInstance(); + } + Object obj = agg.getResult(); + if (obj == null) { + return ValueNull.INSTANCE; + } + return DataType.convertToValue(session, obj, dataType); + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + public void updateAggregate(Session session) { + 
HashMap group = select.getCurrentGroup(); + if (group == null) { + // this is a different level (the enclosing query) + return; + } + + int groupRowId = select.getCurrentGroupRowId(); + if (lastGroupRowId == groupRowId) { + // already visited + return; + } + lastGroupRowId = groupRowId; + + if (filterCondition != null) { + if (!filterCondition.getBooleanValue(session)) { + return; + } + } + + Aggregate agg = (Aggregate) group.get(this); + try { + if (agg == null) { + agg = getInstance(); + group.put(this, agg); + } + Object[] argValues = new Object[args.length]; + Object arg = null; + for (int i = 0, len = args.length; i < len; i++) { + Value v = args[i].getValue(session); + v = v.convertTo(argTypes[i]); + arg = v.getObject(); + argValues[i] = arg; + } + if (args.length == 1) { + agg.add(arg); + } else { + agg.add(argValues); + } + } catch (SQLException e) { + throw DbException.convert(e); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/JavaFunction.java b/modules/h2/src/main/java/org/h2/expression/JavaFunction.java new file mode 100644 index 0000000000000..9b32432e3e8d4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/JavaFunction.java @@ -0,0 +1,188 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.command.Parser; +import org.h2.engine.Constants; +import org.h2.engine.FunctionAlias; +import org.h2.engine.Session; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.StatementBuilder; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueNull; +import org.h2.value.ValueResultSet; + +/** + * This class wraps a user-defined function. 
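JavaAggregate.updateAggregate above drives a simple per-group lifecycle: one aggregate instance per group, looked up in the group map, fed one converted row at a time via add(), then asked for getResult() in getValue(). A self-contained sketch of that lifecycle, where the Agg interface is a hypothetical stand-in for org.h2.api.Aggregate (init and getInternalType omitted for brevity):

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of the per-group aggregate lifecycle.
public class AggregateLifecycle {
    interface Agg {
        void add(Object value);
        Object getResult();
    }

    // A trivial user-defined aggregate: sums numeric values, skipping nulls.
    static class LongSum implements Agg {
        private long total;
        public void add(Object value) {
            if (value != null) {
                total += ((Number) value).longValue();
            }
        }
        public Object getResult() { return total; }
    }

    public static void main(String[] args) {
        // group key -> aggregate instance, like select.getCurrentGroup()
        Map<String, Agg> groups = new HashMap<>();
        Object[][] rows = { {"a", 1}, {"b", 2}, {"a", 3} };
        for (Object[] row : rows) {
            Agg agg = groups.computeIfAbsent((String) row[0], k -> new LongSum());
            agg.add(row[1]); // one call per visited row, as in updateAggregate
        }
        System.out.println(groups.get("a").getResult()); // 4
        System.out.println(groups.get("b").getResult()); // 2
    }
}
```

Note how the FILTER condition in updateAggregate simply skips the add() call for rows where it evaluates to false, which is all it takes to implement FILTER (WHERE ...).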
+ */ +public class JavaFunction extends Expression implements FunctionCall { + + private final FunctionAlias functionAlias; + private final FunctionAlias.JavaMethod javaMethod; + private final Expression[] args; + + public JavaFunction(FunctionAlias functionAlias, Expression[] args) { + this.functionAlias = functionAlias; + this.javaMethod = functionAlias.findJavaMethod(args); + this.args = args; + } + + @Override + public Value getValue(Session session) { + return javaMethod.getValue(session, args, false); + } + + @Override + public int getType() { + return javaMethod.getDataType(); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + for (Expression e : args) { + e.mapColumns(resolver, level); + } + } + + @Override + public Expression optimize(Session session) { + boolean allConst = isDeterministic(); + for (int i = 0, len = args.length; i < len; i++) { + Expression e = args[i].optimize(session); + args[i] = e; + allConst &= e.isConstant(); + } + if (allConst) { + return ValueExpression.get(getValue(session)); + } + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + for (Expression e : args) { + if (e != null) { + e.setEvaluatable(tableFilter, b); + } + } + } + + @Override + public int getScale() { + return DataType.getDataType(getType()).defaultScale; + } + + @Override + public long getPrecision() { + return Integer.MAX_VALUE; + } + + @Override + public int getDisplaySize() { + return Integer.MAX_VALUE; + } + + @Override + public String getSQL() { + StatementBuilder buff = new StatementBuilder(); + // TODO always append the schema once FUNCTIONS_IN_SCHEMA is enabled + if (functionAlias.getDatabase().getSettings().functionsInSchema || + !functionAlias.getSchema().getName().equals(Constants.SCHEMA_MAIN)) { + buff.append( + Parser.quoteIdentifier(functionAlias.getSchema().getName())) + .append('.'); + } + buff.append(Parser.quoteIdentifier(functionAlias.getName())).append('('); + for 
(Expression e : args) { + buff.appendExceptFirst(", "); + buff.append(e.getSQL()); + } + return buff.append(')').toString(); + } + + @Override + public void updateAggregate(Session session) { + for (Expression e : args) { + if (e != null) { + e.updateAggregate(session); + } + } + } + + @Override + public String getName() { + return functionAlias.getName(); + } + + @Override + public ValueResultSet getValueForColumnList(Session session, + Expression[] argList) { + Value v = javaMethod.getValue(session, argList, true); + return v == ValueNull.INSTANCE ? null : (ValueResultSet) v; + } + + @Override + public Expression[] getArgs() { + return args; + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.DETERMINISTIC: + if (!isDeterministic()) { + return false; + } + // only if all parameters are deterministic as well + break; + case ExpressionVisitor.GET_DEPENDENCIES: + visitor.addDependency(functionAlias); + break; + default: + } + for (Expression e : args) { + if (e != null && !e.isEverything(visitor)) { + return false; + } + } + return true; + } + + @Override + public int getCost() { + int cost = javaMethod.hasConnectionParam() ? 
25 : 5; + for (Expression e : args) { + cost += e.getCost(); + } + return cost; + } + + @Override + public boolean isDeterministic() { + return functionAlias.isDeterministic(); + } + + @Override + public Expression[] getExpressionColumns(Session session) { + switch (getType()) { + case Value.RESULT_SET: + ValueResultSet rs = getValueForColumnList(session, getArgs()); + return getExpressionColumns(session, rs.getResultSet()); + case Value.ARRAY: + return getExpressionColumns(session, (ValueArray) getValue(session)); + } + return super.getExpressionColumns(session); + } + + @Override + public boolean isBufferResultSetToLocalTemp() { + return functionAlias.isBufferResultSetToLocalTemp(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/Operation.java b/modules/h2/src/main/java/org/h2/expression/Operation.java new file mode 100644 index 0000000000000..ac11aa0679295 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Operation.java @@ -0,0 +1,406 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Mode; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.MathUtils; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueInt; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; + +/** + * A mathematical expression, or string concatenation. + */ +public class Operation extends Expression { + + public enum OpType { + /** + * This operation represents a string concatenation as in + * 'Hello' || 'World'. + */ + CONCAT, + + /** + * This operation represents an addition as in 1 + 2. + */ + PLUS, + + /** + * This operation represents a subtraction as in 2 - 1. 
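JavaFunction.optimize above folds a call to a constant only when two conditions hold: the alias is deterministic and every argument optimizes to a constant. A standalone sketch of that rule with a minimal hypothetical Expr type (java.util.function.Function stands in for the aliased Java method):

```java
import java.util.function.Function;

// Sketch of the constant-folding rule in JavaFunction.optimize:
// fold only when the function is deterministic AND all args are constant.
public class ConstantFolding {
    interface Expr { boolean isConstant(); Object eval(); }

    static Expr constant(Object v) {
        return new Expr() {
            public boolean isConstant() { return true; }
            public Object eval() { return v; }
        };
    }

    static Expr call(boolean deterministic, Function<Object[], Object> fn, Expr... args) {
        Expr self = new Expr() {
            public boolean isConstant() { return false; }
            public Object eval() {
                Object[] vals = new Object[args.length];
                for (int i = 0; i < args.length; i++) {
                    vals[i] = args[i].eval();
                }
                return fn.apply(vals);
            }
        };
        boolean allConst = deterministic;
        for (Expr a : args) {
            allConst &= a.isConstant();
        }
        // mirrors: if (allConst) return ValueExpression.get(getValue(session));
        return allConst ? constant(self.eval()) : self;
    }

    public static void main(String[] unused) {
        Expr folded = call(true, v -> (Integer) v[0] + (Integer) v[1],
                constant(2), constant(3));
        System.out.println(folded.isConstant() + " " + folded.eval()); // true 5
    }
}
```

A non-deterministic call (first argument false) is left unfolded and re-evaluated on every eval(), which is exactly why RANDOM()-style aliases must not be declared deterministic.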
+ */ + MINUS, + + /** + * This operation represents a multiplication as in 2 * 3. + */ + MULTIPLY, + + /** + * This operation represents a division as in 4 / 2. + */ + DIVIDE, + + /** + * This operation represents a negation as in - ID. + */ + NEGATE, + + /** + * This operation represents a modulus as in 5 % 2. + */ + MODULUS + } + + private OpType opType; + private Expression left, right; + private int dataType; + private boolean convertRight = true; + + public Operation(OpType opType, Expression left, Expression right) { + this.opType = opType; + this.left = left; + this.right = right; + } + + @Override + public String getSQL() { + String sql; + if (opType == OpType.NEGATE) { + // don't remove the space, otherwise it might end up something like + // --1 which is a line remark + sql = "- " + left.getSQL(); + } else { + // don't remove the space, otherwise it might end up something like + // --1 which is a line remark + sql = left.getSQL() + " " + getOperationToken() + " " + right.getSQL(); + } + return "(" + sql + ")"; + } + + private String getOperationToken() { + switch (opType) { + case NEGATE: + return "-"; + case CONCAT: + return "||"; + case PLUS: + return "+"; + case MINUS: + return "-"; + case MULTIPLY: + return "*"; + case DIVIDE: + return "/"; + case MODULUS: + return "%"; + default: + throw DbException.throwInternalError("opType=" + opType); + } + } + + @Override + public Value getValue(Session session) { + Value l = left.getValue(session).convertTo(dataType); + Value r; + if (right == null) { + r = null; + } else { + r = right.getValue(session); + if (convertRight) { + r = r.convertTo(dataType); + } + } + switch (opType) { + case NEGATE: + return l == ValueNull.INSTANCE ? 
l : l.negate(); + case CONCAT: { + Mode mode = session.getDatabase().getMode(); + if (l == ValueNull.INSTANCE) { + if (mode.nullConcatIsNull) { + return ValueNull.INSTANCE; + } + return r; + } else if (r == ValueNull.INSTANCE) { + if (mode.nullConcatIsNull) { + return ValueNull.INSTANCE; + } + return l; + } + String s1 = l.getString(), s2 = r.getString(); + StringBuilder buff = new StringBuilder(s1.length() + s2.length()); + buff.append(s1).append(s2); + return ValueString.get(buff.toString()); + } + case PLUS: + if (l == ValueNull.INSTANCE || r == ValueNull.INSTANCE) { + return ValueNull.INSTANCE; + } + return l.add(r); + case MINUS: + if (l == ValueNull.INSTANCE || r == ValueNull.INSTANCE) { + return ValueNull.INSTANCE; + } + return l.subtract(r); + case MULTIPLY: + if (l == ValueNull.INSTANCE || r == ValueNull.INSTANCE) { + return ValueNull.INSTANCE; + } + return l.multiply(r); + case DIVIDE: + if (l == ValueNull.INSTANCE || r == ValueNull.INSTANCE) { + return ValueNull.INSTANCE; + } + return l.divide(r); + case MODULUS: + if (l == ValueNull.INSTANCE || r == ValueNull.INSTANCE) { + return ValueNull.INSTANCE; + } + return l.modulus(r); + default: + throw DbException.throwInternalError("type=" + opType); + } + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + left.mapColumns(resolver, level); + if (right != null) { + right.mapColumns(resolver, level); + } + } + + @Override + public Expression optimize(Session session) { + left = left.optimize(session); + switch (opType) { + case NEGATE: + dataType = left.getType(); + if (dataType == Value.UNKNOWN) { + dataType = Value.DECIMAL; + } + break; + case CONCAT: + right = right.optimize(session); + dataType = Value.STRING; + if (left.isConstant() && right.isConstant()) { + return ValueExpression.get(getValue(session)); + } + break; + case PLUS: + case MINUS: + case MULTIPLY: + case DIVIDE: + case MODULUS: + right = right.optimize(session); + int l = left.getType(); + int r = right.getType(); 
+ if ((l == Value.NULL && r == Value.NULL) || + (l == Value.UNKNOWN && r == Value.UNKNOWN)) { + // (? + ?) - use decimal by default (the safest data type) or + // string when text concatenation with + is enabled + if (opType == OpType.PLUS && session.getDatabase(). + getMode().allowPlusForStringConcat) { + dataType = Value.STRING; + opType = OpType.CONCAT; + } else { + dataType = Value.DECIMAL; + } + } else if (l == Value.DATE || l == Value.TIMESTAMP || + l == Value.TIME || r == Value.DATE || + r == Value.TIMESTAMP || r == Value.TIME) { + if (opType == OpType.PLUS) { + if (r != Value.getHigherOrder(l, r)) { + // order left and right: INT < TIME < DATE < TIMESTAMP + swap(); + int t = l; + l = r; + r = t; + } + if (l == Value.INT) { + // Oracle date add + Function f = Function.getFunction(session.getDatabase(), "DATEADD"); + f.setParameter(0, ValueExpression.get(ValueString.get("DAY"))); + f.setParameter(1, left); + f.setParameter(2, right); + f.doneWithParameters(); + return f.optimize(session); + } else if (l == Value.DECIMAL || l == Value.FLOAT || l == Value.DOUBLE) { + // Oracle date add + Function f = Function.getFunction(session.getDatabase(), "DATEADD"); + f.setParameter(0, ValueExpression.get(ValueString.get("SECOND"))); + left = new Operation(OpType.MULTIPLY, ValueExpression.get(ValueInt + .get(60 * 60 * 24)), left); + f.setParameter(1, left); + f.setParameter(2, right); + f.doneWithParameters(); + return f.optimize(session); + } else if (l == Value.TIME && r == Value.TIME) { + dataType = Value.TIME; + return this; + } else if (l == Value.TIME) { + dataType = Value.TIMESTAMP; + return this; + } + } else if (opType == OpType.MINUS) { + if ((l == Value.DATE || l == Value.TIMESTAMP) && r == Value.INT) { + // Oracle date subtract + Function f = Function.getFunction(session.getDatabase(), "DATEADD"); + f.setParameter(0, ValueExpression.get(ValueString.get("DAY"))); + right = new Operation(OpType.NEGATE, right, null); + right = right.optimize(session); + 
f.setParameter(1, right); + f.setParameter(2, left); + f.doneWithParameters(); + return f.optimize(session); + } else if ((l == Value.DATE || l == Value.TIMESTAMP) && + (r == Value.DECIMAL || r == Value.FLOAT || r == Value.DOUBLE)) { + // Oracle date subtract + Function f = Function.getFunction(session.getDatabase(), "DATEADD"); + f.setParameter(0, ValueExpression.get(ValueString.get("SECOND"))); + right = new Operation(OpType.MULTIPLY, ValueExpression.get(ValueInt + .get(60 * 60 * 24)), right); + right = new Operation(OpType.NEGATE, right, null); + right = right.optimize(session); + f.setParameter(1, right); + f.setParameter(2, left); + f.doneWithParameters(); + return f.optimize(session); + } else if (l == Value.DATE || l == Value.TIMESTAMP) { + if (r == Value.TIME) { + dataType = Value.TIMESTAMP; + return this; + } else if (r == Value.DATE || r == Value.TIMESTAMP) { + // Oracle date subtract + Function f = Function.getFunction(session.getDatabase(), "DATEDIFF"); + f.setParameter(0, ValueExpression.get(ValueString.get("DAY"))); + f.setParameter(1, right); + f.setParameter(2, left); + f.doneWithParameters(); + return f.optimize(session); + } + } else if (l == Value.TIME && r == Value.TIME) { + dataType = Value.TIME; + return this; + } + } else if (opType == OpType.MULTIPLY) { + if (l == Value.TIME) { + dataType = Value.TIME; + convertRight = false; + return this; + } else if (r == Value.TIME) { + swap(); + dataType = Value.TIME; + convertRight = false; + return this; + } + } else if (opType == OpType.DIVIDE) { + if (l == Value.TIME) { + dataType = Value.TIME; + convertRight = false; + return this; + } + } + throw DbException.getUnsupportedException( + DataType.getDataType(l).name + " " + + getOperationToken() + " " + + DataType.getDataType(r).name); + } else { + dataType = Value.getHigherOrder(l, r); + if (DataType.isStringType(dataType) && + session.getDatabase().getMode().allowPlusForStringConcat) { + opType = OpType.CONCAT; + } + } + break; + default: + 
DbException.throwInternalError("type=" + opType); + } + if (left.isConstant() && (right == null || right.isConstant())) { + return ValueExpression.get(getValue(session)); + } + return this; + } + + private void swap() { + Expression temp = left; + left = right; + right = temp; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + left.setEvaluatable(tableFilter, b); + if (right != null) { + right.setEvaluatable(tableFilter, b); + } + } + + @Override + public int getType() { + return dataType; + } + + @Override + public long getPrecision() { + if (right != null) { + switch (opType) { + case CONCAT: + return left.getPrecision() + right.getPrecision(); + default: + return Math.max(left.getPrecision(), right.getPrecision()); + } + } + return left.getPrecision(); + } + + @Override + public int getDisplaySize() { + if (right != null) { + switch (opType) { + case CONCAT: + return MathUtils.convertLongToInt((long) left.getDisplaySize() + + (long) right.getDisplaySize()); + default: + return Math.max(left.getDisplaySize(), right.getDisplaySize()); + } + } + return left.getDisplaySize(); + } + + @Override + public int getScale() { + if (right != null) { + return Math.max(left.getScale(), right.getScale()); + } + return left.getScale(); + } + + @Override + public void updateAggregate(Session session) { + left.updateAggregate(session); + if (right != null) { + right.updateAggregate(session); + } + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return left.isEverything(visitor) && + (right == null || right.isEverything(visitor)); + } + + @Override + public int getCost() { + return left.getCost() + 1 + (right == null ? 
0 : right.getCost()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/Parameter.java b/modules/h2/src/main/java/org/h2/expression/Parameter.java new file mode 100644 index 0000000000000..01c2f7d4a6c5b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Parameter.java @@ -0,0 +1,190 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.table.Column; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; + +/** + * A parameter of a prepared statement. + */ +public class Parameter extends Expression implements ParameterInterface { + + private Value value; + private Column column; + private final int index; + + public Parameter(int index) { + this.index = index; + } + + @Override + public String getSQL() { + return "?" 
+ (index + 1); + } + + @Override + public void setValue(Value v, boolean closeOld) { + // don't need to close the old value as temporary files are anyway + // removed + this.value = v; + } + + public void setValue(Value v) { + this.value = v; + } + + @Override + public Value getParamValue() { + if (value == null) { + // to allow parameters in function tables + return ValueNull.INSTANCE; + } + return value; + } + + @Override + public Value getValue(Session session) { + return getParamValue(); + } + + @Override + public int getType() { + if (value != null) { + return value.getType(); + } + if (column != null) { + return column.getType(); + } + return Value.UNKNOWN; + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + // can't map + } + + @Override + public void checkSet() { + if (value == null) { + throw DbException.get(ErrorCode.PARAMETER_NOT_SET_1, "#" + (index + 1)); + } + } + + @Override + public Expression optimize(Session session) { + if (session.getDatabase().getMode().treatEmptyStringsAsNull) { + if (value instanceof ValueString) { + value = ValueString.get(value.getString(), true); + } + } + return this; + } + + @Override + public boolean isConstant() { + return false; + } + + @Override + public boolean isValueSet() { + return value != null; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + // not bound + } + + @Override + public int getScale() { + if (value != null) { + return value.getScale(); + } + if (column != null) { + return column.getScale(); + } + return 0; + } + + @Override + public long getPrecision() { + if (value != null) { + return value.getPrecision(); + } + if (column != null) { + return column.getPrecision(); + } + return 0; + } + + @Override + public int getDisplaySize() { + if (value != null) { + return value.getDisplaySize(); + } + if (column != null) { + return column.getDisplaySize(); + } + return 0; + } + + @Override + public void updateAggregate(Session session) { + // 
nothing to do + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.EVALUATABLE: + // the parameter _will_be_ evaluatable at execute time + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: + // it is checked independently if the value is the same as the last + // time + case ExpressionVisitor.NOT_FROM_RESOLVER: + case ExpressionVisitor.QUERY_COMPARABLE: + case ExpressionVisitor.GET_DEPENDENCIES: + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + case ExpressionVisitor.DETERMINISTIC: + case ExpressionVisitor.READONLY: + case ExpressionVisitor.GET_COLUMNS: + return true; + case ExpressionVisitor.INDEPENDENT: + return value != null; + default: + throw DbException.throwInternalError("type="+visitor.getType()); + } + } + + @Override + public int getCost() { + return 0; + } + + @Override + public Expression getNotIfPossible(Session session) { + return new Comparison(session, Comparison.EQUAL, this, + ValueExpression.get(ValueBoolean.FALSE)); + } + + public void setColumn(Column column) { + this.column = column; + } + + public int getIndex() { + return index; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ParameterInterface.java b/modules/h2/src/main/java/org/h2/expression/ParameterInterface.java new file mode 100644 index 0000000000000..df4bcc21f8c63 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ParameterInterface.java @@ -0,0 +1,74 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.message.DbException; +import org.h2.value.Value; + +/** + * The interface for client side (remote) and server side parameters. + */ +public interface ParameterInterface { + + /** + * Set the value of the parameter. 
+ * + * @param value the new value + * @param closeOld if the old value (if one is set) should be closed + */ + void setValue(Value value, boolean closeOld); + + /** + * Get the value of the parameter if set. + * + * @return the value or null + */ + Value getParamValue(); + + /** + * Check if the value is set. + * + * @throws DbException if not set. + */ + void checkSet() throws DbException; + + /** + * Is the value of a parameter set. + * + * @return true if set + */ + boolean isValueSet(); + + /** + * Get the expected data type of the parameter if no value is set, or the + * data type of the value if one is set. + * + * @return the data type + */ + int getType(); + + /** + * Get the expected precision of this parameter. + * + * @return the expected precision + */ + long getPrecision(); + + /** + * Get the expected scale of this parameter. + * + * @return the expected scale + */ + int getScale(); + + /** + * Check if this column is nullable. + * + * @return Column.NULLABLE_* + */ + int getNullable(); + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ParameterRemote.java b/modules/h2/src/main/java/org/h2/expression/ParameterRemote.java new file mode 100644 index 0000000000000..ffae2988dd85f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ParameterRemote.java @@ -0,0 +1,103 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.io.IOException; +import java.sql.ResultSetMetaData; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.value.Transfer; +import org.h2.value.Value; + +/** + * A client side (remote) parameter. 
+ */ +public class ParameterRemote implements ParameterInterface { + + private Value value; + private final int index; + private int dataType = Value.UNKNOWN; + private long precision; + private int scale; + private int nullable = ResultSetMetaData.columnNullableUnknown; + + public ParameterRemote(int index) { + this.index = index; + } + + @Override + public void setValue(Value newValue, boolean closeOld) { + if (closeOld && value != null) { + value.remove(); + } + value = newValue; + } + + @Override + public Value getParamValue() { + return value; + } + + @Override + public void checkSet() { + if (value == null) { + throw DbException.get(ErrorCode.PARAMETER_NOT_SET_1, "#" + (index + 1)); + } + } + + @Override + public boolean isValueSet() { + return value != null; + } + + @Override + public int getType() { + return value == null ? dataType : value.getType(); + } + + @Override + public long getPrecision() { + return value == null ? precision : value.getPrecision(); + } + + @Override + public int getScale() { + return value == null ? scale : value.getScale(); + } + + @Override + public int getNullable() { + return nullable; + } + + /** + * Read the parameter meta data from the transfer object. + * + * @param transfer the transfer object + */ + public void readMetaData(Transfer transfer) throws IOException { + dataType = transfer.readInt(); + precision = transfer.readLong(); + scale = transfer.readInt(); + nullable = transfer.readInt(); + } + + /** + * Write the parameter meta data to the transfer object. 
+ * + * @param transfer the transfer object + * @param p the parameter + */ + public static void writeMetaData(Transfer transfer, ParameterInterface p) + throws IOException { + transfer.writeInt(p.getType()); + transfer.writeLong(p.getPrecision()); + transfer.writeInt(p.getScale()); + transfer.writeInt(p.getNullable()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/Rownum.java b/modules/h2/src/main/java/org/h2/expression/Rownum.java new file mode 100644 index 0000000000000..d0233e9fb70ad --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Rownum.java @@ -0,0 +1,106 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.command.Prepared; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueInt; + +/** + * Represents the ROWNUM function. 
+ */ +public class Rownum extends Expression { + + private final Prepared prepared; + + public Rownum(Prepared prepared) { + if (prepared == null) { + throw DbException.throwInternalError(); + } + this.prepared = prepared; + } + + @Override + public Value getValue(Session session) { + return ValueInt.get(prepared.getCurrentRowNumber()); + } + + @Override + public int getType() { + return Value.INT; + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + // nothing to do + } + + @Override + public Expression optimize(Session session) { + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + // nothing to do + } + + @Override + public int getScale() { + return 0; + } + + @Override + public long getPrecision() { + return ValueInt.PRECISION; + } + + @Override + public int getDisplaySize() { + return ValueInt.DISPLAY_SIZE; + } + + @Override + public String getSQL() { + return "ROWNUM()"; + } + + @Override + public void updateAggregate(Session session) { + // nothing to do + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.QUERY_COMPARABLE: + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + case ExpressionVisitor.DETERMINISTIC: + case ExpressionVisitor.INDEPENDENT: + return false; + case ExpressionVisitor.EVALUATABLE: + case ExpressionVisitor.READONLY: + case ExpressionVisitor.NOT_FROM_RESOLVER: + case ExpressionVisitor.GET_DEPENDENCIES: + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: + case ExpressionVisitor.GET_COLUMNS: + // if everything else is the same, the rownum is the same + return true; + default: + throw DbException.throwInternalError("type="+visitor.getType()); + } + } + + @Override + public int getCost() { + return 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/SequenceValue.java b/modules/h2/src/main/java/org/h2/expression/SequenceValue.java new file mode 100644 index 
0000000000000..eaa497d080628 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/SequenceValue.java @@ -0,0 +1,109 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.schema.Sequence; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueInt; +import org.h2.value.ValueLong; + +/** + * Wraps a sequence when used in a statement. + */ +public class SequenceValue extends Expression { + + private final Sequence sequence; + + public SequenceValue(Sequence sequence) { + this.sequence = sequence; + } + + @Override + public Value getValue(Session session) { + long value = sequence.getNext(session); + session.setLastIdentity(ValueLong.get(value)); + return ValueLong.get(value); + } + + @Override + public int getType() { + return Value.LONG; + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + // nothing to do + } + + @Override + public Expression optimize(Session session) { + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + // nothing to do + } + + @Override + public int getScale() { + return 0; + } + + @Override + public long getPrecision() { + return ValueInt.PRECISION; + } + + @Override + public int getDisplaySize() { + return ValueInt.DISPLAY_SIZE; + } + + @Override + public String getSQL() { + return "(NEXT VALUE FOR " + sequence.getSQL() +")"; + } + + @Override + public void updateAggregate(Session session) { + // nothing to do + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.EVALUATABLE: + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + case ExpressionVisitor.NOT_FROM_RESOLVER: + case 
ExpressionVisitor.GET_COLUMNS: + return true; + case ExpressionVisitor.DETERMINISTIC: + case ExpressionVisitor.READONLY: + case ExpressionVisitor.INDEPENDENT: + case ExpressionVisitor.QUERY_COMPARABLE: + return false; + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: + visitor.addDataModificationId(sequence.getModificationId()); + return true; + case ExpressionVisitor.GET_DEPENDENCIES: + visitor.addDependency(sequence); + return true; + default: + throw DbException.throwInternalError("type="+visitor.getType()); + } + } + + @Override + public int getCost() { + return 1; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/Subquery.java b/modules/h2/src/main/java/org/h2/expression/Subquery.java new file mode 100644 index 0000000000000..b823325b60ab5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Subquery.java @@ -0,0 +1,136 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.command.dml.Query; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueNull; + +/** + * A query returning a single value. + * Subqueries are used inside other statements. 
+ */ +public class Subquery extends Expression { + + private final Query query; + private Expression expression; + + public Subquery(Query query) { + this.query = query; + } + + @Override + public Value getValue(Session session) { + query.setSession(session); + try (ResultInterface result = query.query(2)) { + Value v; + if (!result.next()) { + v = ValueNull.INSTANCE; + } else { + Value[] values = result.currentRow(); + if (result.getVisibleColumnCount() == 1) { + v = values[0]; + } else { + v = ValueArray.get(values); + } + if (result.hasNext()) { + throw DbException.get(ErrorCode.SCALAR_SUBQUERY_CONTAINS_MORE_THAN_ONE_ROW); + } + } + return v; + } + } + + @Override + public int getType() { + return getExpression().getType(); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + query.mapColumns(resolver, level + 1); + } + + @Override + public Expression optimize(Session session) { + session.optimizeQueryExpression(query); + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + query.setEvaluatable(tableFilter, b); + } + + @Override + public int getScale() { + return getExpression().getScale(); + } + + @Override + public long getPrecision() { + return getExpression().getPrecision(); + } + + @Override + public int getDisplaySize() { + return getExpression().getDisplaySize(); + } + + @Override + public String getSQL() { + return "(" + query.getPlanSQL() + ")"; + } + + @Override + public void updateAggregate(Session session) { + query.updateAggregate(session); + } + + private Expression getExpression() { + if (expression == null) { + ArrayList<Expression> expressions = query.getExpressions(); + int columnCount = query.getColumnCount(); + if (columnCount == 1) { + expression = expressions.get(0); + } else { + Expression[] list = new Expression[columnCount]; + for (int i = 0; i < columnCount; i++) { + list[i] = expressions.get(i); + } + expression = new ExpressionList(list); + } + } + return expression; + } 
+ + @Override + public boolean isEverything(ExpressionVisitor visitor) { + return query.isEverything(visitor); + } + + public Query getQuery() { + return query; + } + + @Override + public int getCost() { + return query.getCostAsExpression(); + } + + @Override + public Expression[] getExpressionColumns(Session session) { + return getExpression().getExpressionColumns(session); + } +} diff --git a/modules/h2/src/main/java/org/h2/expression/TableFunction.java b/modules/h2/src/main/java/org/h2/expression/TableFunction.java new file mode 100644 index 0000000000000..ce1d6131b7189 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/TableFunction.java @@ -0,0 +1,168 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import java.util.ArrayList; + +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.LocalResult; +import org.h2.table.Column; +import org.h2.tools.SimpleResultSet; +import org.h2.util.MathUtils; +import org.h2.util.StatementBuilder; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueNull; +import org.h2.value.ValueResultSet; + +/** + * Implementation of the functions TABLE(..) and TABLE_DISTINCT(..). 
+ */ +public class TableFunction extends Function { + private final boolean distinct; + private final long rowCount; + private Column[] columnList; + + TableFunction(Database database, FunctionInfo info, long rowCount) { + super(database, info); + distinct = info.type == Function.TABLE_DISTINCT; + this.rowCount = rowCount; + } + + @Override + public Value getValue(Session session) { + return getTable(session, args, false, distinct); + } + + @Override + protected void checkParameterCount(int len) { + if (len < 1) { + throw DbException.get(ErrorCode.INVALID_PARAMETER_COUNT_2, getName(), ">0"); + } + } + + @Override + public String getSQL() { + StatementBuilder buff = new StatementBuilder(getName()); + buff.append('('); + int i = 0; + for (Expression e : args) { + buff.appendExceptFirst(", "); + buff.append(columnList[i++].getCreateSQL()).append('=').append(e.getSQL()); + } + return buff.append(')').toString(); + } + + + @Override + public String getName() { + return distinct ? "TABLE_DISTINCT" : "TABLE"; + } + + @Override + public ValueResultSet getValueForColumnList(Session session, + Expression[] nullArgs) { + return getTable(session, args, true, false); + } + + public void setColumns(ArrayList<Column> columns) { + this.columnList = columns.toArray(new Column[0]); + } + + private ValueResultSet getTable(Session session, Expression[] argList, + boolean onlyColumnList, boolean distinctRows) { + int len = columnList.length; + Expression[] header = new Expression[len]; + Database db = session.getDatabase(); + for (int i = 0; i < len; i++) { + Column c = columnList[i]; + ExpressionColumn col = new ExpressionColumn(db, c); + header[i] = col; + } + LocalResult result = new LocalResult(session, header, len); + if (distinctRows) { + result.setDistinct(); + } + if (!onlyColumnList) { + Value[][] list = new Value[len][]; + int rows = 0; + for (int i = 0; i < len; i++) { + Value v = argList[i].getValue(session); + if (v == ValueNull.INSTANCE) { + list[i] = new Value[0]; + } else { 
ValueArray array = (ValueArray) v.convertTo(Value.ARRAY); + Value[] l = array.getList(); + list[i] = l; + rows = Math.max(rows, l.length); + } + } + for (int row = 0; row < rows; row++) { + Value[] r = new Value[len]; + for (int j = 0; j < len; j++) { + Value[] l = list[j]; + Value v; + if (l.length <= row) { + v = ValueNull.INSTANCE; + } else { + Column c = columnList[j]; + v = l[row]; + v = c.convert(v); + v = v.convertPrecision(c.getPrecision(), false); + v = v.convertScale(true, c.getScale()); + } + r[j] = v; + } + result.addRow(r); + } + } + result.done(); + return ValueResultSet.get(getSimpleResultSet(result, + Integer.MAX_VALUE)); + } + + private static SimpleResultSet getSimpleResultSet(LocalResult rs, + int maxrows) { + int columnCount = rs.getVisibleColumnCount(); + SimpleResultSet simple = new SimpleResultSet(); + simple.setAutoClose(false); + for (int i = 0; i < columnCount; i++) { + String name = rs.getColumnName(i); + /* + * TODO Some types, such as Value.BYTES and Value.UUID are mapped to the same + * SQL type and we can lose real type here. 
+ */ + int sqlType = DataType.convertTypeToSQLType(rs.getColumnType(i)); + int precision = MathUtils.convertLongToInt(rs.getColumnPrecision(i)); + int scale = rs.getColumnScale(i); + simple.addColumn(name, sqlType, precision, scale); + } + rs.reset(); + for (int i = 0; i < maxrows && rs.next(); i++) { + Object[] list = new Object[columnCount]; + for (int j = 0; j < columnCount; j++) { + list[j] = rs.currentRow()[j].getObject(); + } + simple.addRow(list); + } + return simple; + } + + public long getRowCount() { + return rowCount; + } + + @Override + public Expression[] getExpressionColumns(Session session) { + return getExpressionColumns(session, + getTable(session, getArgs(), true, false).getResultSet()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/ValueExpression.java b/modules/h2/src/main/java/org/h2/expression/ValueExpression.java new file mode 100644 index 0000000000000..f1899de276052 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/ValueExpression.java @@ -0,0 +1,181 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.engine.Session; +import org.h2.index.IndexCondition; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueNull; + +/** + * An expression representing a constant value. + */ +public class ValueExpression extends Expression { + /** + * The expression represents ValueNull.INSTANCE. + */ + private static final Object NULL = new ValueExpression(ValueNull.INSTANCE); + + /** + * This special expression represents the default value. It is used for + * UPDATE statements of the form SET COLUMN = DEFAULT. The value is + * ValueNull.INSTANCE, but should never be accessed. 
+ */ + private static final Object DEFAULT = new ValueExpression(ValueNull.INSTANCE); + + private final Value value; + + private ValueExpression(Value value) { + this.value = value; + } + + /** + * Get the NULL expression. + * + * @return the NULL expression + */ + public static ValueExpression getNull() { + return (ValueExpression) NULL; + } + + /** + * Get the DEFAULT expression. + * + * @return the DEFAULT expression + */ + public static ValueExpression getDefault() { + return (ValueExpression) DEFAULT; + } + + /** + * Create a new expression with the given value. + * + * @param value the value + * @return the expression + */ + public static ValueExpression get(Value value) { + if (value == ValueNull.INSTANCE) { + return getNull(); + } + return new ValueExpression(value); + } + + @Override + public Value getValue(Session session) { + return value; + } + + @Override + public int getType() { + return value.getType(); + } + + @Override + public void createIndexConditions(Session session, TableFilter filter) { + if (value.getType() == Value.BOOLEAN) { + boolean v = ((ValueBoolean) value).getBoolean(); + if (!v) { + filter.addIndexCondition(IndexCondition.get(Comparison.FALSE, null, this)); + } + } + } + + @Override + public Expression getNotIfPossible(Session session) { + return new Comparison(session, Comparison.EQUAL, this, + ValueExpression.get(ValueBoolean.FALSE)); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + // nothing to do + } + + @Override + public Expression optimize(Session session) { + return this; + } + + @Override + public boolean isConstant() { + return true; + } + + @Override + public boolean isValueSet() { + return true; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + // nothing to do + } + + @Override + public int getScale() { + return value.getScale(); + } + + @Override + public long getPrecision() { + return value.getPrecision(); + } + + @Override + public int 
getDisplaySize() { + return value.getDisplaySize(); + } + + @Override + public String getSQL() { + if (this == DEFAULT) { + return "DEFAULT"; + } + return value.getSQL(); + } + + @Override + public void updateAggregate(Session session) { + // nothing to do + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + case ExpressionVisitor.DETERMINISTIC: + case ExpressionVisitor.READONLY: + case ExpressionVisitor.INDEPENDENT: + case ExpressionVisitor.EVALUATABLE: + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: + case ExpressionVisitor.NOT_FROM_RESOLVER: + case ExpressionVisitor.GET_DEPENDENCIES: + case ExpressionVisitor.QUERY_COMPARABLE: + case ExpressionVisitor.GET_COLUMNS: + return true; + default: + throw DbException.throwInternalError("type=" + visitor.getType()); + } + } + + @Override + public int getCost() { + return 0; + } + + @Override + public Expression[] getExpressionColumns(Session session) { + if (getType() == Value.ARRAY) { + return getExpressionColumns(session, (ValueArray) getValue(session)); + } + return super.getExpressionColumns(session); + } +} diff --git a/modules/h2/src/main/java/org/h2/expression/Variable.java b/modules/h2/src/main/java/org/h2/expression/Variable.java new file mode 100644 index 0000000000000..22e83f3fbf391 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Variable.java @@ -0,0 +1,111 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.command.Parser; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.value.Value; + +/** + * A user-defined variable, for example: @ID. 
+ */ +public class Variable extends Expression { + + private final String name; + private Value lastValue; + + public Variable(Session session, String name) { + this.name = name; + lastValue = session.getVariable(name); + } + + @Override + public int getCost() { + return 0; + } + + @Override + public int getDisplaySize() { + return lastValue.getDisplaySize(); + } + + @Override + public long getPrecision() { + return lastValue.getPrecision(); + } + + @Override + public String getSQL() { + return "@" + Parser.quoteIdentifier(name); + } + + @Override + public int getScale() { + return lastValue.getScale(); + } + + @Override + public int getType() { + return lastValue.getType(); + } + + @Override + public Value getValue(Session session) { + lastValue = session.getVariable(name); + return lastValue; + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + switch (visitor.getType()) { + case ExpressionVisitor.EVALUATABLE: + // the value will be evaluated at execute time + case ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID: + // it is checked independently if the value is the same as the last + // time + case ExpressionVisitor.OPTIMIZABLE_MIN_MAX_COUNT_ALL: + case ExpressionVisitor.READONLY: + case ExpressionVisitor.INDEPENDENT: + case ExpressionVisitor.NOT_FROM_RESOLVER: + case ExpressionVisitor.QUERY_COMPARABLE: + case ExpressionVisitor.GET_DEPENDENCIES: + case ExpressionVisitor.GET_COLUMNS: + return true; + case ExpressionVisitor.DETERMINISTIC: + return false; + default: + throw DbException.throwInternalError("type="+visitor.getType()); + } + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + // nothing to do + } + + @Override + public Expression optimize(Session session) { + return this; + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean value) { + // nothing to do + } + + @Override + public void updateAggregate(Session session) { + // nothing to do + } + + public String getName() { + 
return name; + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/Wildcard.java b/modules/h2/src/main/java/org/h2/expression/Wildcard.java new file mode 100644 index 0000000000000..345c08706e5dd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/Wildcard.java @@ -0,0 +1,111 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.expression; + +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.table.ColumnResolver; +import org.h2.table.TableFilter; +import org.h2.util.StringUtils; +import org.h2.value.Value; + +/** + * A wildcard expression as in SELECT * FROM TEST. + * This object is only used temporarily during the parsing phase, and later + * replaced by column expressions. + */ +public class Wildcard extends Expression { + private final String schema; + private final String table; + + public Wildcard(String schema, String table) { + this.schema = schema; + this.table = table; + } + + @Override + public boolean isWildcard() { + return true; + } + + @Override + public Value getValue(Session session) { + throw DbException.throwInternalError(toString()); + } + + @Override + public int getType() { + throw DbException.throwInternalError(toString()); + } + + @Override + public void mapColumns(ColumnResolver resolver, int level) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, table); + } + + @Override + public Expression optimize(Session session) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, table); + } + + @Override + public void setEvaluatable(TableFilter tableFilter, boolean b) { + DbException.throwInternalError(toString()); + } + + @Override + public int getScale() { + throw DbException.throwInternalError(toString()); + } + + @Override + public long getPrecision() { + throw DbException.throwInternalError(toString()); + } + + @Override + 
public int getDisplaySize() { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getTableAlias() { + return table; + } + + @Override + public String getSchemaName() { + return schema; + } + + @Override + public String getSQL() { + if (table == null) { + return "*"; + } + return StringUtils.quoteIdentifier(table) + ".*"; + } + + @Override + public void updateAggregate(Session session) { + DbException.throwInternalError(toString()); + } + + @Override + public boolean isEverything(ExpressionVisitor visitor) { + if (visitor.getType() == ExpressionVisitor.QUERY_COMPARABLE) { + return true; + } + throw DbException.throwInternalError("" + visitor.getType()); + } + + @Override + public int getCost() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/expression/package.html b/modules/h2/src/main/java/org/h2/expression/package.html new file mode 100644 index 0000000000000..74e952ca1d3ea --- /dev/null +++ b/modules/h2/src/main/java/org/h2/expression/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Expressions include mathematical operations, conditions, simple values, and functions. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/index/AbstractFunctionCursor.java b/modules/h2/src/main/java/org/h2/index/AbstractFunctionCursor.java new file mode 100644 index 0000000000000..d8768e7ede0d5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/AbstractFunctionCursor.java @@ -0,0 +1,102 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.value.Value; + +/** + * Abstract function cursor. This implementation filters the rows (only returns + * entries that are larger or equal to "first", and smaller than last or equal + * to "last"). + */ +abstract class AbstractFunctionCursor implements Cursor { + private final FunctionIndex index; + + private final SearchRow first; + + private final SearchRow last; + + final Session session; + + Value[] values; + + Row row; + + /** + * @param index + * index + * @param first + * first row + * @param last + * last row + * @param session + * session + */ + AbstractFunctionCursor(FunctionIndex index, SearchRow first, SearchRow last, Session session) { + this.index = index; + this.first = first; + this.last = last; + this.session = session; + } + + @Override + public Row get() { + if (values == null) { + return null; + } + if (row == null) { + row = session.createRow(values, 1); + } + return row; + } + + @Override + public SearchRow getSearchRow() { + return get(); + } + + @Override + public boolean next() { + final SearchRow first = this.first, last = this.last; + if (first == null && last == null) { + return nextImpl(); + } + while (nextImpl()) { + Row current = get(); + if (first != null) { + int comp = index.compareRows(current, first); + if (comp < 0) { + continue; + } + } + if (last != null) { + 
int comp = index.compareRows(current, last); + if (comp > 0) { + continue; + } + } + return true; + } + return false; + } + + /** + * Skip to the next row if one is available. This method does not filter. + * + * @return true if another row is available + */ + abstract boolean nextImpl(); + + @Override + public boolean previous() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/BaseIndex.java b/modules/h2/src/main/java/org/h2/index/BaseIndex.java new file mode 100644 index 0000000000000..0a4e21371c8a7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/BaseIndex.java @@ -0,0 +1,486 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.schema.SchemaObjectBase; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * Most index implementations extend the base index. + */ +public abstract class BaseIndex extends SchemaObjectBase implements Index { + + protected IndexColumn[] indexColumns; + protected Column[] columns; + protected int[] columnIds; + protected Table table; + protected IndexType indexType; + protected boolean isMultiVersion; + + /** + * Initialize the base index. 
+ * + * @param newTable the table + * @param id the object id + * @param name the index name + * @param newIndexColumns the columns that are indexed or null if this is + * not yet known + * @param newIndexType the index type + */ + protected void initBaseIndex(Table newTable, int id, String name, + IndexColumn[] newIndexColumns, IndexType newIndexType) { + initSchemaObjectBase(newTable.getSchema(), id, name, Trace.INDEX); + this.indexType = newIndexType; + this.table = newTable; + if (newIndexColumns != null) { + this.indexColumns = newIndexColumns; + columns = new Column[newIndexColumns.length]; + int len = columns.length; + columnIds = new int[len]; + for (int i = 0; i < len; i++) { + Column col = newIndexColumns[i].column; + columns[i] = col; + columnIds[i] = col.getColumnId(); + } + } + } + + /** + * Check that the index columns are not CLOB or BLOB. + * + * @param columns the columns + */ + protected static void checkIndexColumnTypes(IndexColumn[] columns) { + for (IndexColumn c : columns) { + int type = c.column.getType(); + if (type == Value.CLOB || type == Value.BLOB) { + throw DbException.getUnsupportedException( + "Index on BLOB or CLOB column: " + c.column.getCreateSQL()); + } + } + } + + @Override + public String getDropSQL() { + return null; + } + + /** + * Create a duplicate key exception with a message that contains the index + * name. 
+ * + * @param key the key values + * @return the exception + */ + protected DbException getDuplicateKeyException(String key) { + String sql = getName() + " ON " + table.getSQL() + + "(" + getColumnListSQL() + ")"; + if (key != null) { + sql += " VALUES " + key; + } + DbException e = DbException.get(ErrorCode.DUPLICATE_KEY_1, sql); + e.setSource(this); + return e; + } + + @Override + public String getPlanSQL() { + return getSQL(); + } + + @Override + public void removeChildrenAndResources(Session session) { + table.removeIndex(this); + remove(session); + database.removeMeta(session, getId()); + } + + @Override + public boolean canFindNext() { + return false; + } + + @Override + public boolean isFindUsingFullTableScan() { + return false; + } + + @Override + public Cursor find(TableFilter filter, SearchRow first, SearchRow last) { + return find(filter.getSession(), first, last); + } + + /** + * Find a row or a list of rows that is larger and create a cursor to + * iterate over the result. The base implementation doesn't support this + * feature. + * + * @param session the session + * @param higherThan the lower limit (excluding) + * @param last the last row, or null for no limit + * @return the cursor + * @throws DbException always + */ + @Override + public Cursor findNext(Session session, SearchRow higherThan, SearchRow last) { + throw DbException.throwInternalError(toString()); + } + + /** + * Calculate the cost for the given mask as if this index was a typical + * b-tree range index. This is the estimated cost required to search one + * row, and then iterate over the given number of rows. 
+ * + * @param masks the IndexCondition search masks, one for each column in the + * table + * @param rowCount the number of rows in the index + * @param filters all joined table filters + * @param filter the current table filter index + * @param sortOrder the sort order + * @param isScanIndex whether this is a "table scan" index + * @param allColumnsSet the set of all columns + * @return the estimated cost + */ + protected final long getCostRangeIndex(int[] masks, long rowCount, + TableFilter[] filters, int filter, SortOrder sortOrder, + boolean isScanIndex, HashSet allColumnsSet) { + rowCount += Constants.COST_ROW_OFFSET; + int totalSelectivity = 0; + long rowsCost = rowCount; + if (masks != null) { + for (int i = 0, len = columns.length; i < len; i++) { + Column column = columns[i]; + int index = column.getColumnId(); + int mask = masks[index]; + if ((mask & IndexCondition.EQUALITY) == IndexCondition.EQUALITY) { + if (i == columns.length - 1 && getIndexType().isUnique()) { + rowsCost = 3; + break; + } + totalSelectivity = 100 - ((100 - totalSelectivity) * + (100 - column.getSelectivity()) / 100); + long distinctRows = rowCount * totalSelectivity / 100; + if (distinctRows <= 0) { + distinctRows = 1; + } + rowsCost = 2 + Math.max(rowCount / distinctRows, 1); + } else if ((mask & IndexCondition.RANGE) == IndexCondition.RANGE) { + rowsCost = 2 + rowCount / 4; + break; + } else if ((mask & IndexCondition.START) == IndexCondition.START) { + rowsCost = 2 + rowCount / 3; + break; + } else if ((mask & IndexCondition.END) == IndexCondition.END) { + rowsCost = rowCount / 3; + break; + } else { + break; + } + } + } + // If the ORDER BY clause matches the ordering of this index, + // it will be cheaper than another index, so adjust the cost + // accordingly. 
+ long sortingCost = 0; + if (sortOrder != null) { + sortingCost = 100 + rowCount / 10; + } + if (sortOrder != null && !isScanIndex) { + boolean sortOrderMatches = true; + int coveringCount = 0; + int[] sortTypes = sortOrder.getSortTypes(); + TableFilter tableFilter = filters == null ? null : filters[filter]; + for (int i = 0, len = sortTypes.length; i < len; i++) { + if (i >= indexColumns.length) { + // We can still use this index if we are sorting by more + // than its columns; it's just that the coveringCount + // is lower than with an index that contains + // more of the order by columns. + break; + } + Column col = sortOrder.getColumn(i, tableFilter); + if (col == null) { + sortOrderMatches = false; + break; + } + IndexColumn indexCol = indexColumns[i]; + if (!col.equals(indexCol.column)) { + sortOrderMatches = false; + break; + } + int sortType = sortTypes[i]; + if (sortType != indexCol.sortType) { + sortOrderMatches = false; + break; + } + coveringCount++; + } + if (sortOrderMatches) { + // "coveringCount" makes sure that when we have two + // or more covering indexes, we choose the one + // that covers more. + sortingCost = 100 - coveringCount; + } + } + // If we have two indexes with the same cost, and one of the indexes can + // satisfy the query without needing to read from the primary table + // (scan index), make that one slightly lower cost. 
+ boolean needsToReadFromScanIndex = true; + if (!isScanIndex && allColumnsSet != null && !allColumnsSet.isEmpty()) { + boolean foundAllColumnsWeNeed = true; + for (Column c : allColumnsSet) { + if (c.getTable() == getTable()) { + boolean found = false; + for (Column c2 : columns) { + if (c == c2) { + found = true; + break; + } + } + if (!found) { + foundAllColumnsWeNeed = false; + break; + } + } + } + if (foundAllColumnsWeNeed) { + needsToReadFromScanIndex = false; + } + } + long rc; + if (isScanIndex) { + rc = rowsCost + sortingCost + 20; + } else if (needsToReadFromScanIndex) { + rc = rowsCost + rowsCost + sortingCost + 20; + } else { + // The (20-x) calculation makes sure that when we pick a covering + // index, we pick the covering index that has the smallest number of + // columns (the more columns we have in index - the higher cost). + // This is faster because a smaller index will fit into fewer data + // blocks. + rc = rowsCost + sortingCost + columns.length; + } + return rc; + } + + @Override + public int compareRows(SearchRow rowData, SearchRow compare) { + if (rowData == compare) { + return 0; + } + for (int i = 0, len = indexColumns.length; i < len; i++) { + int index = columnIds[i]; + Value v1 = rowData.getValue(index); + Value v2 = compare.getValue(index); + if (v1 == null || v2 == null) { + // can't compare further + return 0; + } + int c = compareValues(v1, v2, indexColumns[i].sortType); + if (c != 0) { + return c; + } + } + return 0; + } + + /** + * Check if this row may have duplicates with the same indexed values in the + * current compatibility mode. Duplicates with {@code NULL} values are + * allowed in some modes. 
+ * + * @param searchRow + * the row to check + * @return {@code true} if the specified row may have duplicates, + * {@code false} otherwise + */ + protected boolean mayHaveNullDuplicates(SearchRow searchRow) { + switch (database.getMode().uniqueIndexNullsHandling) { + case ALLOW_DUPLICATES_WITH_ANY_NULL: + for (int index : columnIds) { + if (searchRow.getValue(index) == ValueNull.INSTANCE) { + return true; + } + } + return false; + case ALLOW_DUPLICATES_WITH_ALL_NULLS: + for (int index : columnIds) { + if (searchRow.getValue(index) != ValueNull.INSTANCE) { + return false; + } + } + return true; + default: + return false; + } + } + + /** + * Compare the positions of two rows. + * + * @param rowData the first row + * @param compare the second row + * @return 0 if both rows are equal, -1 if the first row is smaller, + * otherwise 1 + */ + int compareKeys(SearchRow rowData, SearchRow compare) { + long k1 = rowData.getKey(); + long k2 = compare.getKey(); + if (k1 == k2) { + if (isMultiVersion) { + int v1 = rowData.getVersion(); + int v2 = compare.getVersion(); + return Integer.compare(v2, v1); + } + return 0; + } + return k1 > k2 ? 1 : -1; + } + + private int compareValues(Value a, Value b, int sortType) { + if (a == b) { + return 0; + } + int comp = table.compareTypeSafe(a, b); + if ((sortType & SortOrder.DESCENDING) != 0) { + comp = -comp; + } + return comp; + } + + @Override + public int getColumnIndex(Column col) { + for (int i = 0, len = columns.length; i < len; i++) { + if (columns[i].equals(col)) { + return i; + } + } + return -1; + } + + @Override + public boolean isFirstColumn(Column column) { + return column.equals(columns[0]); + } + + /** + * Get the list of columns as a string. 
+ * + * @return the list of columns + */ + private String getColumnListSQL() { + StatementBuilder buff = new StatementBuilder(); + for (IndexColumn c : indexColumns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + return buff.toString(); + } + + @Override + public String getCreateSQLForCopy(Table targetTable, String quotedName) { + StringBuilder buff = new StringBuilder("CREATE "); + buff.append(indexType.getSQL()); + buff.append(' '); + if (table.isHidden()) { + buff.append("IF NOT EXISTS "); + } + buff.append(quotedName); + buff.append(" ON ").append(targetTable.getSQL()); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + buff.append('(').append(getColumnListSQL()).append(')'); + return buff.toString(); + } + + @Override + public String getCreateSQL() { + return getCreateSQLForCopy(table, getSQL()); + } + + @Override + public IndexColumn[] getIndexColumns() { + return indexColumns; + } + + @Override + public Column[] getColumns() { + return columns; + } + + @Override + public IndexType getIndexType() { + return indexType; + } + + @Override + public int getType() { + return DbObject.INDEX; + } + + @Override + public Table getTable() { + return table; + } + + @Override + public void commit(int operation, Row row) { + // nothing to do + } + + void setMultiVersion(boolean multiVersion) { + this.isMultiVersion = multiVersion; + } + + @Override + public Row getRow(Session session, long key) { + throw DbException.getUnsupportedException(toString()); + } + + @Override + public boolean isHidden() { + return table.isHidden(); + } + + @Override + public boolean isRowIdIndex() { + return false; + } + + @Override + public boolean canScan() { + return true; + } + + @Override + public void setSortedInsertMode(boolean sortedInsertMode) { + // ignore + } + + @Override + public IndexLookupBatch createLookupBatch(TableFilter[] filters, int filter) { + // Lookup batching is not supported. 
    return null; + } +} diff --git a/modules/h2/src/main/java/org/h2/index/Cursor.java b/modules/h2/src/main/java/org/h2/index/Cursor.java new file mode 100644 index 0000000000000..f1183a2c65b86 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/Cursor.java @@ -0,0 +1,53 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * A cursor is a helper object to iterate through an index. + * For indexes that are sorted (such as the b-tree index), it can iterate + * to the very end of the index. For other indexes that don't support + * that (such as a hash index), only one row is returned. + * The cursor is initially positioned before the first row, which means + * next() must be called before accessing data. + * + */ +public interface Cursor { + + /** + * Get the complete current row. + * All columns are available. + * + * @return the complete row + */ + Row get(); + + /** + * Get the current row. + * Only the data for indexed columns is available in this row. + * + * @return the search row + */ + SearchRow getSearchRow(); + + /** + * Skip to the next row if one is available. + * + * @return true if another row is available + */ + boolean next(); + + /** + * Skip to the previous row if one is available. + * No filtering is done here. + * + * @return true if another row is available + */ + boolean previous(); + +} diff --git a/modules/h2/src/main/java/org/h2/index/FunctionCursor.java b/modules/h2/src/main/java/org/h2/index/FunctionCursor.java new file mode 100644 index 0000000000000..8ab320368fcf3 --- --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/FunctionCursor.java @@ -0,0 +1,35 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.engine.Session; +import org.h2.result.ResultInterface; +import org.h2.result.SearchRow; + +/** + * A cursor for a function that returns a result. + */ +public class FunctionCursor extends AbstractFunctionCursor { + + private final ResultInterface result; + + FunctionCursor(FunctionIndex index, SearchRow first, SearchRow last, Session session, ResultInterface result) { + super(index, first, last, session); + this.result = result; + } + + @Override + boolean nextImpl() { + row = null; + if (result != null && result.next()) { + values = result.currentRow(); + } else { + values = null; + } + return values != null; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/FunctionCursorResultSet.java b/modules/h2/src/main/java/org/h2/index/FunctionCursorResultSet.java new file mode 100644 index 0000000000000..3c27b986b640f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/FunctionCursorResultSet.java @@ -0,0 +1,55 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.SearchRow; +import org.h2.value.DataType; +import org.h2.value.Value; + +/** + * A cursor for a function that returns a JDBC result set. 
+ */ +public class FunctionCursorResultSet extends AbstractFunctionCursor { + + private final ResultSet result; + private final ResultSetMetaData meta; + + FunctionCursorResultSet(FunctionIndex index, SearchRow first, SearchRow last, Session session, ResultSet result) { + super(index, first, last, session); + this.result = result; + try { + this.meta = result.getMetaData(); + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + boolean nextImpl() { + row = null; + try { + if (result != null && result.next()) { + int columnCount = meta.getColumnCount(); + values = new Value[columnCount]; + for (int i = 0; i < columnCount; i++) { + int type = DataType.getValueTypeFromResultSet(meta, i + 1); + values[i] = DataType.readValue(session, result, i + 1, type); + } + } else { + values = null; + } + } catch (SQLException e) { + throw DbException.convert(e); + } + return values != null; + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/index/FunctionIndex.java b/modules/h2/src/main/java/org/h2/index/FunctionIndex.java new file mode 100644 index 0000000000000..9c21c9600327a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/FunctionIndex.java @@ -0,0 +1,132 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.FunctionTable; +import org.h2.table.IndexColumn; +import org.h2.table.TableFilter; + +/** + * An index for a function that returns a result set. Search in this index + * performs scan over all rows and should be avoided. 
+ */ +public class FunctionIndex extends BaseIndex { + + private final FunctionTable functionTable; + + public FunctionIndex(FunctionTable functionTable, IndexColumn[] columns) { + initBaseIndex(functionTable, 0, null, columns, IndexType.createNonUnique(true)); + this.functionTable = functionTable; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void add(Session session, Row row) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public void remove(Session session, Row row) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public boolean isFindUsingFullTableScan() { + return true; + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + if (functionTable.isBufferResultSetToLocalTemp()) { + return new FunctionCursor(this, first, last, session, functionTable.getResult(session)); + } + return new FunctionCursorResultSet(this, first, last, session, + functionTable.getResultSet(session)); + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + if (masks != null) { + throw DbException.getUnsupportedException("ALIAS"); + } + long expectedRows; + if (functionTable.canGetRowCount()) { + expectedRows = functionTable.getRowCountApproximation(); + } else { + expectedRows = database.getSettings().estimatedFunctionTableRows; + } + return expectedRows * 10; + } + + @Override + public void remove(Session session) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + @Override + 
public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public long getRowCount(Session session) { + return functionTable.getRowCount(session); + } + + @Override + public long getRowCountApproximation() { + return functionTable.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public String getPlanSQL() { + return "function"; + } + + @Override + public boolean canScan() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/HashIndex.java b/modules/h2/src/main/java/org/h2/index/HashIndex.java new file mode 100644 index 0000000000000..55130de364d16 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/HashIndex.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.TableFilter; +import org.h2.util.ValueHashMap; +import org.h2.value.Value; + +/** + * A unique index based on an in-memory hash map. + */ +public class HashIndex extends BaseIndex { + + /** + * The index of the indexed column. 
+ */ + private final int indexColumn; + + private final RegularTable tableData; + private ValueHashMap rows; + + public HashIndex(RegularTable table, int id, String indexName, + IndexColumn[] columns, IndexType indexType) { + initBaseIndex(table, id, indexName, columns, indexType); + this.indexColumn = columns[0].column.getColumnId(); + this.tableData = table; + reset(); + } + + private void reset() { + rows = ValueHashMap.newInstance(); + } + + @Override + public void truncate(Session session) { + reset(); + } + + @Override + public void add(Session session, Row row) { + Value key = row.getValue(indexColumn); + Object old = rows.get(key); + if (old != null) { + // TODO index duplicate key for hash indexes: is this allowed? + throw getDuplicateKeyException(key.toString()); + } + rows.put(key, row.getKey()); + } + + @Override + public void remove(Session session, Row row) { + rows.remove(row.getValue(indexColumn)); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + if (first == null || last == null) { + // TODO hash index: should additionally check if values are the same + throw DbException.throwInternalError(first + " " + last); + } + Value v = first.getValue(indexColumn); + /* + * Sometimes the incoming search is a similar, but not the same type + * e.g. the search value is INT, but the index column is LONG. In which + * case we need to convert, otherwise the ValueHashMap will not find the + * result. 
+ */ + v = v.convertTo(tableData.getColumn(indexColumn).getType()); + Row result; + Long pos = rows.get(v); + if (pos == null) { + result = null; + } else { + result = tableData.getRow(session, pos.intValue()); + } + return new SingleRowCursor(result); + } + + @Override + public long getRowCount(Session session) { + return getRowCountApproximation(); + } + + @Override + public long getRowCountApproximation() { + return rows.size(); + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void remove(Session session) { + // nothing to do + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + for (Column column : columns) { + int index = column.getColumnId(); + int mask = masks[index]; + if ((mask & IndexCondition.EQUALITY) != IndexCondition.EQUALITY) { + return Long.MAX_VALUE; + } + } + return 2; + } + + @Override + public void checkRename() { + // ok + } + + @Override + public boolean needRebuild() { + return true; + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.getUnsupportedException("HASH"); + } + + @Override + public boolean canScan() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/Index.java b/modules/h2/src/main/java/org/h2/index/Index.java new file mode 100644 index 0000000000000..48008de2c4f54 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/Index.java @@ -0,0 +1,289 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.schema.SchemaObject; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.Table; +import org.h2.table.TableFilter; + +/** + * An index. Indexes are used to speed up searching data. + */ +public interface Index extends SchemaObject { + + /** + * Get the message to show in an EXPLAIN statement. + * + * @return the plan + */ + String getPlanSQL(); + + /** + * Close this index. + * + * @param session the session used to write data + */ + void close(Session session); + + /** + * Add a row to the index. + * + * @param session the session to use + * @param row the row to add + */ + void add(Session session, Row row); + + /** + * Remove a row from the index. + * + * @param session the session + * @param row the row + */ + void remove(Session session, Row row); + + /** + * Returns {@code true} if the {@code find()} implementation performs a scan + * over the whole index, {@code false} if {@code find()} performs a fast lookup. + * + * @return {@code true} if the {@code find()} implementation performs a scan + * over the whole index, {@code false} if {@code find()} performs a fast lookup + */ + boolean isFindUsingFullTableScan(); + + /** + * Find a row or a list of rows and create a cursor to iterate over the + * result. + * + * @param session the session + * @param first the first row, or null for no limit + * @param last the last row, or null for no limit + * @return the cursor to iterate over the results + */ + Cursor find(Session session, SearchRow first, SearchRow last); + + /** + * Find a row or a list of rows and create a cursor to iterate over the + * result. 
+ * + * @param filter the table filter (which possibly knows about additional + * conditions) + * @param first the first row, or null for no limit + * @param last the last row, or null for no limit + * @return the cursor to iterate over the results + */ + Cursor find(TableFilter filter, SearchRow first, SearchRow last); + + /** + * Estimate the cost to search for rows given the search mask. + * There is one element per column in the search mask. + * For possible search masks, see IndexCondition. + * + * @param session the session + * @param masks per-column comparison bit masks, null means 'always false', + * see constants in IndexCondition + * @param filters all joined table filters + * @param filter the current table filter index + * @param sortOrder the sort order + * @param allColumnsSet the set of all columns + * @return the estimated cost + */ + double getCost(Session session, int[] masks, TableFilter[] filters, int filter, + SortOrder sortOrder, HashSet allColumnsSet); + + /** + * Remove the index. + * + * @param session the session + */ + void remove(Session session); + + /** + * Remove all rows from the index. + * + * @param session the session + */ + void truncate(Session session); + + /** + * Check if the index can directly look up the lowest or highest value of a + * column. + * + * @return true if it can + */ + boolean canGetFirstOrLast(); + + /** + * Check if the index can get the next higher value. + * + * @return true if it can + */ + boolean canFindNext(); + + /** + * Find a row or a list of rows that is larger and create a cursor to + * iterate over the result. + * + * @param session the session + * @param higherThan the lower limit (excluding) + * @param last the last row, or null for no limit + * @return the cursor + */ + Cursor findNext(Session session, SearchRow higherThan, SearchRow last); + + /** + * Find the first (or last) value of this index. The cursor returned is + * positioned on the correct row, or on null if no row has been found. 
+ * + * @param session the session + * @param first true if the first (lowest for ascending indexes) or last + * value should be returned + * @return a cursor (never null) + */ + Cursor findFirstOrLast(Session session, boolean first); + + /** + * Check if the index needs to be rebuilt. + * This method is called after opening an index. + * + * @return true if a rebuild is required. + */ + boolean needRebuild(); + + /** + * Get the row count of this table, for the given session. + * + * @param session the session + * @return the row count + */ + long getRowCount(Session session); + + /** + * Get the approximated row count for this table. + * + * @return the approximated row count + */ + long getRowCountApproximation(); + + /** + * Get the used disk space for this index. + * + * @return the estimated number of bytes + */ + long getDiskSpaceUsed(); + + /** + * Compare two rows. + * + * @param rowData the first row + * @param compare the second row + * @return 0 if both rows are equal, -1 if the first row is smaller, + * otherwise 1 + */ + int compareRows(SearchRow rowData, SearchRow compare); + + /** + * Get the index of a column in the list of index columns + * + * @param col the column + * @return the index (0 meaning first column) + */ + int getColumnIndex(Column col); + + /** + * Check if the given column is the first for this index + * + * @param column the column + * @return true if the given columns is the first + */ + boolean isFirstColumn(Column column); + + /** + * Get the indexed columns as index columns (with ordering information). + * + * @return the index columns + */ + IndexColumn[] getIndexColumns(); + + /** + * Get the indexed columns. + * + * @return the columns + */ + Column[] getColumns(); + + /** + * Get the index type. + * + * @return the index type + */ + IndexType getIndexType(); + + /** + * Get the table on which this index is based. + * + * @return the table + */ + Table getTable(); + + /** + * Commit the operation for a row. 
This is only important for multi-version + * indexes. The method is only called if multi-version is enabled. + * + * @param operation the operation type + * @param row the row + */ + void commit(int operation, Row row); + + /** + * Get the row with the given key. + * + * @param session the session + * @param key the unique key + * @return the row + */ + Row getRow(Session session, long key); + + /** + * Does this index support lookup by row id? + * + * @return true if it does + */ + boolean isRowIdIndex(); + + /** + * Can this index iterate over all rows? + * + * @return true if it can + */ + boolean canScan(); + + /** + * Enable or disable the 'sorted insert' optimizations (rows are inserted in + * ascending or descending order) if applicable for this index + * implementation. + * + * @param sortedInsertMode the new value + */ + void setSortedInsertMode(boolean sortedInsertMode); + + /** + * Creates new lookup batch. Note that returned {@link IndexLookupBatch} + * instance can be used multiple times. + * + * @param filters the table filters + * @param filter the filter index (0, 1,...) + * @return created batch or {@code null} if batched lookup is not supported + * by this index. + */ + IndexLookupBatch createLookupBatch(TableFilter[] filters, int filter); +} diff --git a/modules/h2/src/main/java/org/h2/index/IndexCondition.java b/modules/h2/src/main/java/org/h2/index/IndexCondition.java new file mode 100644 index 0000000000000..4060474b905de --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/IndexCondition.java @@ -0,0 +1,432 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.index;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import org.h2.command.dml.Query;
+import org.h2.engine.Session;
+import org.h2.expression.Comparison;
+import org.h2.expression.Expression;
+import org.h2.expression.ExpressionColumn;
+import org.h2.expression.ExpressionVisitor;
+import org.h2.message.DbException;
+import org.h2.result.ResultInterface;
+import org.h2.table.Column;
+import org.h2.table.TableType;
+import org.h2.util.StatementBuilder;
+import org.h2.value.CompareMode;
+import org.h2.value.Value;
+
+/**
+ * An index condition object is made for each condition that can potentially use
+ * an index. This class does not extend expression, but in general there is one
+ * expression that maps to each index condition.
+ *
+ * @author Thomas Mueller
+ * @author Noel Grandin
+ * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888
+ */
+public class IndexCondition {
+
+    /**
+     * A bit of a search mask meaning 'equal'.
+     */
+    public static final int EQUALITY = 1;
+
+    /**
+     * A bit of a search mask meaning 'larger or equal'.
+     */
+    public static final int START = 2;
+
+    /**
+     * A bit of a search mask meaning 'smaller or equal'.
+     */
+    public static final int END = 4;
+
+    /**
+     * A search mask meaning 'between'.
+     */
+    public static final int RANGE = START | END;
+
+    /**
+     * A bit of a search mask meaning 'the condition is always false'.
+     */
+    public static final int ALWAYS_FALSE = 8;
+
+    /**
+     * A bit of a search mask meaning 'spatial intersection'.
+     */
+    public static final int SPATIAL_INTERSECTS = 16;
+
+    private final Column column;
+    /**
+     * see constants in {@link Comparison}
+     */
+    private final int compareType;
+
+    private final Expression expression;
+    private List<Expression> expressionList;
+    private Query expressionQuery;
+
+    /**
+     * @param compareType the comparison type, see constants in
+     *            {@link Comparison}
+     */
+    private IndexCondition(int compareType, ExpressionColumn column,
+            Expression expression) {
+        this.compareType = compareType;
+        this.column = column == null ? null : column.getColumn();
+        this.expression = expression;
+    }
+
+    /**
+     * Create an index condition with the given parameters.
+     *
+     * @param compareType the comparison type, see constants in
+     *            {@link Comparison}
+     * @param column the column
+     * @param expression the expression
+     * @return the index condition
+     */
+    public static IndexCondition get(int compareType, ExpressionColumn column,
+            Expression expression) {
+        return new IndexCondition(compareType, column, expression);
+    }
+
+    /**
+     * Create an index condition with the compare type IN_LIST and with the
+     * given parameters.
+     *
+     * @param column the column
+     * @param list the expression list
+     * @return the index condition
+     */
+    public static IndexCondition getInList(ExpressionColumn column,
+            List<Expression> list) {
+        IndexCondition cond = new IndexCondition(Comparison.IN_LIST, column,
+                null);
+        cond.expressionList = list;
+        return cond;
+    }
+
+    /**
+     * Create an index condition with the compare type IN_QUERY and with the
+     * given parameters.
+     *
+     * @param column the column
+     * @param query the select statement
+     * @return the index condition
+     */
+    public static IndexCondition getInQuery(ExpressionColumn column, Query query) {
+        IndexCondition cond = new IndexCondition(Comparison.IN_QUERY, column,
+                null);
+        cond.expressionQuery = query;
+        return cond;
+    }
+
+    /**
+     * Get the current value of the expression.
+     *
+     * @param session the session
+     * @return the value
+     */
+    public Value getCurrentValue(Session session) {
+        return expression.getValue(session);
+    }
+
+    /**
+     * Get the current value list of the expression. The value list is of the
+     * same type as the column, distinct, and sorted.
+     *
+     * @param session the session
+     * @return the value list
+     */
+    public Value[] getCurrentValueList(Session session) {
+        HashSet<Value> valueSet = new HashSet<>();
+        for (Expression e : expressionList) {
+            Value v = e.getValue(session);
+            v = column.convert(v);
+            valueSet.add(v);
+        }
+        Value[] array = valueSet.toArray(new Value[valueSet.size()]);
+        final CompareMode mode = session.getDatabase().getCompareMode();
+        Arrays.sort(array, new Comparator<Value>() {
+            @Override
+            public int compare(Value o1, Value o2) {
+                return o1.compareTo(o2, mode);
+            }
+        });
+        return array;
+    }
+
+    /**
+     * Get the current result of the expression. The rows may not be of the same
+     * type, therefore the rows may not be unique.
+     *
+     * @return the result
+     */
+    public ResultInterface getCurrentResult() {
+        return expressionQuery.query(0);
+    }
+
+    /**
+     * Get the SQL snippet of this comparison.
+     *
+     * @return the SQL snippet
+     */
+    public String getSQL() {
+        if (compareType == Comparison.FALSE) {
+            return "FALSE";
+        }
+        StatementBuilder buff = new StatementBuilder();
+        buff.append(column.getSQL());
+        switch (compareType) {
+        case Comparison.EQUAL:
+            buff.append(" = ");
+            break;
+        case Comparison.EQUAL_NULL_SAFE:
+            buff.append(" IS ");
+            break;
+        case Comparison.BIGGER_EQUAL:
+            buff.append(" >= ");
+            break;
+        case Comparison.BIGGER:
+            buff.append(" > ");
+            break;
+        case Comparison.SMALLER_EQUAL:
+            buff.append(" <= ");
+            break;
+        case Comparison.SMALLER:
+            buff.append(" < ");
+            break;
+        case Comparison.IN_LIST:
+            buff.append(" IN(");
+            for (Expression e : expressionList) {
+                buff.appendExceptFirst(", ");
+                buff.append(e.getSQL());
+            }
+            buff.append(')');
+            break;
+        case Comparison.IN_QUERY:
+            buff.append(" IN(");
+            buff.append(expressionQuery.getPlanSQL());
+            buff.append(')');
+            break;
+        case Comparison.SPATIAL_INTERSECTS:
+            buff.append(" && ");
+            break;
+        default:
+            DbException.throwInternalError("type=" + compareType);
+        }
+        if (expression != null) {
+            buff.append(expression.getSQL());
+        }
+        return buff.toString();
+    }
+
+    /**
+     * Get the comparison bit mask.
+     *
+     * @param indexConditions all index conditions
+     * @return the mask
+     */
+    public int getMask(ArrayList<IndexCondition> indexConditions) {
+        switch (compareType) {
+        case Comparison.FALSE:
+            return ALWAYS_FALSE;
+        case Comparison.EQUAL:
+        case Comparison.EQUAL_NULL_SAFE:
+            return EQUALITY;
+        case Comparison.IN_LIST:
+        case Comparison.IN_QUERY:
+            if (indexConditions.size() > 1) {
+                if (TableType.TABLE != column.getTable().getTableType()) {
+                    // if combined with other conditions,
+                    // IN(..) can only be used for regular tables
+                    // test case:
+                    // create table test(a int, b int, primary key(id, name));
+                    // create unique index c on test(b, a);
+                    // insert into test values(1, 10), (2, 20);
+                    // select * from (select * from test)
+                    // where a=1 and b in(10, 20);
+                    return 0;
+                }
+            }
+            return EQUALITY;
+        case Comparison.BIGGER_EQUAL:
+        case Comparison.BIGGER:
+            return START;
+        case Comparison.SMALLER_EQUAL:
+        case Comparison.SMALLER:
+            return END;
+        case Comparison.SPATIAL_INTERSECTS:
+            return SPATIAL_INTERSECTS;
+        default:
+            throw DbException.throwInternalError("type=" + compareType);
+        }
+    }
+
+    /**
+     * Check if the result is always false.
+     *
+     * @return true if the result will always be false
+     */
+    public boolean isAlwaysFalse() {
+        return compareType == Comparison.FALSE;
+    }
+
+    /**
+     * Check if this index condition is of the type column larger or equal to
+     * value.
+     *
+     * @return true if this is a start condition
+     */
+    public boolean isStart() {
+        switch (compareType) {
+        case Comparison.EQUAL:
+        case Comparison.EQUAL_NULL_SAFE:
+        case Comparison.BIGGER_EQUAL:
+        case Comparison.BIGGER:
+            return true;
+        default:
+            return false;
+        }
+    }
+
+    /**
+     * Check if this index condition is of the type column smaller or equal to
+     * value.
+     *
+     * @return true if this is an end condition
+     */
+    public boolean isEnd() {
+        switch (compareType) {
+        case Comparison.EQUAL:
+        case Comparison.EQUAL_NULL_SAFE:
+        case Comparison.SMALLER_EQUAL:
+        case Comparison.SMALLER:
+            return true;
+        default:
+            return false;
+        }
+    }
+
+    /**
+     * Check if this index condition is of the type spatial column intersects
+     * value.
+     *
+     * @return true if this is a spatial intersects condition
+     */
+    public boolean isSpatialIntersects() {
+        switch (compareType) {
+        case Comparison.SPATIAL_INTERSECTS:
+            return true;
+        default:
+            return false;
+        }
+    }
+
+    public int getCompareType() {
+        return compareType;
+    }
+
+    /**
+     * Get the referenced column.
+     *
+     * @return the column
+     */
+    public Column getColumn() {
+        return column;
+    }
+
+    /**
+     * Get expression.
+     *
+     * @return Expression.
+     */
+    public Expression getExpression() {
+        return expression;
+    }
+
+    /**
+     * Get expression list.
+     *
+     * @return Expression list.
+     */
+    public List<Expression> getExpressionList() {
+        return expressionList;
+    }
+
+    /**
+     * Get expression query.
+     *
+     * @return Expression query.
+     */
+    public Query getExpressionQuery() {
+        return expressionQuery;
+    }
+
+    /**
+     * Check if the expression can be evaluated.
+     *
+     * @return true if it can be evaluated
+     */
+    public boolean isEvaluatable() {
+        if (expression != null) {
+            return expression
+                    .isEverything(ExpressionVisitor.EVALUATABLE_VISITOR);
+        }
+        if (expressionList != null) {
+            for (Expression e : expressionList) {
+                if (!e.isEverything(ExpressionVisitor.EVALUATABLE_VISITOR)) {
+                    return false;
+                }
+            }
+            return true;
+        }
+        return expressionQuery
+                .isEverything(ExpressionVisitor.EVALUATABLE_VISITOR);
+    }
+
+    @Override
+    public String toString() {
+        return "column=" + column +
+                ", compareType=" + compareTypeToString(compareType) +
+                ", expression=" + expression +
+                ", expressionList=" + expressionList +
+                ", expressionQuery=" + expressionQuery;
+    }
+
+    private static String compareTypeToString(int i) {
+        StatementBuilder s = new StatementBuilder();
+        if ((i & EQUALITY) == EQUALITY) {
+            s.appendExceptFirst("&");
+            s.append("EQUALITY");
+        }
+        if ((i & START) == START) {
+            s.appendExceptFirst("&");
+            s.append("START");
+        }
+        if ((i & END) == END) {
+            s.appendExceptFirst("&");
+            s.append("END");
+        }
+        if ((i & ALWAYS_FALSE) == ALWAYS_FALSE) {
+            s.appendExceptFirst("&");
+            s.append("ALWAYS_FALSE");
+        }
+        if ((i & SPATIAL_INTERSECTS) == SPATIAL_INTERSECTS) {
+            s.appendExceptFirst("&");
+            s.append("SPATIAL_INTERSECTS");
+        }
+        return s.toString();
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/index/IndexCursor.java b/modules/h2/src/main/java/org/h2/index/IndexCursor.java
new file mode 100644
index 0000000000000..07226cf1f6145
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/index/IndexCursor.java
@@ -0,0 +1,360 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.index;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import org.h2.engine.Session;
+import org.h2.expression.Comparison;
+import org.h2.message.DbException;
+import org.h2.result.ResultInterface;
+import org.h2.result.Row;
+import org.h2.result.SearchRow;
+import org.h2.result.SortOrder;
+import org.h2.table.Column;
+import org.h2.table.IndexColumn;
+import org.h2.table.Table;
+import org.h2.table.TableFilter;
+import org.h2.value.Value;
+import org.h2.value.ValueGeometry;
+import org.h2.value.ValueNull;
+
+/**
+ * The filter used to walk through an index. This class supports IN(..)
+ * and IN(SELECT ...) optimizations.
+ *
+ * @author Thomas Mueller
+ * @author Noel Grandin
+ * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888
+ */
+public class IndexCursor implements Cursor {
+
+    private Session session;
+    private final TableFilter tableFilter;
+    private Index index;
+    private Table table;
+    private IndexColumn[] indexColumns;
+    private boolean alwaysFalse;
+
+    private SearchRow start, end, intersects;
+    private Cursor cursor;
+    private Column inColumn;
+    private int inListIndex;
+    private Value[] inList;
+    private ResultInterface inResult;
+    private HashSet<Value> inResultTested;
+
+    public IndexCursor(TableFilter filter) {
+        this.tableFilter = filter;
+    }
+
+    public void setIndex(Index index) {
+        this.index = index;
+        this.table = index.getTable();
+        Column[] columns = table.getColumns();
+        indexColumns = new IndexColumn[columns.length];
+        IndexColumn[] idxCols = index.getIndexColumns();
+        if (idxCols != null) {
+            for (int i = 0, len = columns.length; i < len; i++) {
+                int idx = index.getColumnIndex(columns[i]);
+                if (idx >= 0) {
+                    indexColumns[i] = idxCols[idx];
+                }
+            }
+        }
+    }
+
+    /**
+     * Prepare this index cursor to make a lookup in index.
+     *
+     * @param s Session.
+     * @param indexConditions Index conditions.
+     */
+    public void prepare(Session s, ArrayList<IndexCondition> indexConditions) {
+        this.session = s;
+        alwaysFalse = false;
+        start = end = null;
+        inList = null;
+        inColumn = null;
+        inResult = null;
+        inResultTested = null;
+        intersects = null;
+        // don't use enhanced for loop to avoid creating objects
+        for (IndexCondition condition : indexConditions) {
+            if (condition.isAlwaysFalse()) {
+                alwaysFalse = true;
+                break;
+            }
+            // If index can perform only full table scan do not try to use it for regular
+            // lookups, each such lookup will perform its own table scan.
+            if (index.isFindUsingFullTableScan()) {
+                continue;
+            }
+            Column column = condition.getColumn();
+            if (condition.getCompareType() == Comparison.IN_LIST) {
+                if (start == null && end == null) {
+                    if (canUseIndexForIn(column)) {
+                        this.inColumn = column;
+                        inList = condition.getCurrentValueList(s);
+                        inListIndex = 0;
+                    }
+                }
+            } else if (condition.getCompareType() == Comparison.IN_QUERY) {
+                if (start == null && end == null) {
+                    if (canUseIndexForIn(column)) {
+                        this.inColumn = column;
+                        inResult = condition.getCurrentResult();
+                    }
+                }
+            } else {
+                Value v = condition.getCurrentValue(s);
+                boolean isStart = condition.isStart();
+                boolean isEnd = condition.isEnd();
+                boolean isIntersects = condition.isSpatialIntersects();
+                int columnId = column.getColumnId();
+                if (columnId >= 0) {
+                    IndexColumn idxCol = indexColumns[columnId];
+                    if (idxCol != null && (idxCol.sortType & SortOrder.DESCENDING) != 0) {
+                        // if the index column is sorted the other way, we swap
+                        // end and start NULLS_FIRST / NULLS_LAST is not a
+                        // problem, as nulls never match anyway
+                        boolean temp = isStart;
+                        isStart = isEnd;
+                        isEnd = temp;
+                    }
+                }
+                if (isStart) {
+                    start = getSearchRow(start, columnId, v, true);
+                }
+                if (isEnd) {
+                    end = getSearchRow(end, columnId, v, false);
+                }
+                if (isIntersects) {
+                    intersects = getSpatialSearchRow(intersects, columnId, v);
+                }
+                // An X=? condition will produce fewer rows than
+                // an X IN(..) condition, unless the X IN condition can use the index.
+                if ((isStart || isEnd) && !canUseIndexFor(inColumn)) {
+                    inColumn = null;
+                    inList = null;
+                    inResult = null;
+                }
+                if (!session.getDatabase().getSettings().optimizeIsNull) {
+                    if (isStart && isEnd) {
+                        if (v == ValueNull.INSTANCE) {
+                            // join on a column=NULL is always false
+                            alwaysFalse = true;
+                        }
+                    }
+                }
+            }
+        }
+        if (inColumn != null) {
+            start = table.getTemplateRow();
+        }
+    }
+
+    /**
+     * Re-evaluate the start and end values of the index search for rows.
+     *
+     * @param s the session
+     * @param indexConditions the index conditions
+     */
+    public void find(Session s, ArrayList<IndexCondition> indexConditions) {
+        prepare(s, indexConditions);
+        if (inColumn != null) {
+            return;
+        }
+        if (!alwaysFalse) {
+            if (intersects != null && index instanceof SpatialIndex) {
+                cursor = ((SpatialIndex) index).findByGeometry(tableFilter,
+                        start, end, intersects);
+            } else {
+                cursor = index.find(tableFilter, start, end);
+            }
+        }
+    }
+
+    private boolean canUseIndexForIn(Column column) {
+        if (inColumn != null) {
+            // only one IN(..) condition can be used at the same time
+            return false;
+        }
+        return canUseIndexFor(column);
+    }
+
+    private boolean canUseIndexFor(Column column) {
+        // The first column of the index must match this column,
+        // or it must be a VIEW index (where the column is null).
+        // Multiple IN conditions with views are not supported, see
+        // IndexCondition.getMask.
+        IndexColumn[] cols = index.getIndexColumns();
+        if (cols == null) {
+            return true;
+        }
+        IndexColumn idxCol = cols[0];
+        return idxCol == null || idxCol.column == column;
+    }
+
+    private SearchRow getSpatialSearchRow(SearchRow row, int columnId, Value v) {
+        if (row == null) {
+            row = table.getTemplateRow();
+        } else if (row.getValue(columnId) != null) {
+            // if an object needs to overlap with both a and b,
+            // then it needs to overlap with the union of a and b
+            // (not the intersection)
+            ValueGeometry vg = (ValueGeometry) row.getValue(columnId).
+                    convertTo(Value.GEOMETRY);
+            v = ((ValueGeometry) v.convertTo(Value.GEOMETRY)).
+                    getEnvelopeUnion(vg);
+        }
+        if (columnId < 0) {
+            row.setKey(v.getLong());
+        } else {
+            row.setValue(columnId, v);
+        }
+        return row;
+    }
+
+    private SearchRow getSearchRow(SearchRow row, int columnId, Value v,
+            boolean max) {
+        if (row == null) {
+            row = table.getTemplateRow();
+        } else {
+            v = getMax(row.getValue(columnId), v, max);
+        }
+        if (columnId < 0) {
+            row.setKey(v.getLong());
+        } else {
+            row.setValue(columnId, v);
+        }
+        return row;
+    }
+
+    private Value getMax(Value a, Value b, boolean bigger) {
+        if (a == null) {
+            return b;
+        } else if (b == null) {
+            return a;
+        }
+        if (session.getDatabase().getSettings().optimizeIsNull) {
+            // IS NULL must be checked later
+            if (a == ValueNull.INSTANCE) {
+                return b;
+            } else if (b == ValueNull.INSTANCE) {
+                return a;
+            }
+        }
+        int comp = a.compareTo(b, table.getDatabase().getCompareMode());
+        if (comp == 0) {
+            return a;
+        }
+        if (a == ValueNull.INSTANCE || b == ValueNull.INSTANCE) {
+            if (session.getDatabase().getSettings().optimizeIsNull) {
+                // column IS NULL AND column <op> <value> is always false
+                return null;
+            }
+        }
+        if (!bigger) {
+            comp = -comp;
+        }
+        return comp > 0 ? a : b;
+    }
+
+    /**
+     * Check if the result is empty for sure.
+     *
+     * @return true if it is
+     */
+    public boolean isAlwaysFalse() {
+        return alwaysFalse;
+    }
+
+    /**
+     * Get start search row.
+     *
+     * @return search row
+     */
+    public SearchRow getStart() {
+        return start;
+    }
+
+    /**
+     * Get end search row.
+     *
+     * @return search row
+     */
+    public SearchRow getEnd() {
+        return end;
+    }
+
+    @Override
+    public Row get() {
+        if (cursor == null) {
+            return null;
+        }
+        return cursor.get();
+    }
+
+    @Override
+    public SearchRow getSearchRow() {
+        return cursor.getSearchRow();
+    }
+
+    @Override
+    public boolean next() {
+        while (true) {
+            if (cursor == null) {
+                nextCursor();
+                if (cursor == null) {
+                    return false;
+                }
+            }
+            if (cursor.next()) {
+                return true;
+            }
+            cursor = null;
+        }
+    }
+
+    private void nextCursor() {
+        if (inList != null) {
+            while (inListIndex < inList.length) {
+                Value v = inList[inListIndex++];
+                if (v != ValueNull.INSTANCE) {
+                    find(v);
+                    break;
+                }
+            }
+        } else if (inResult != null) {
+            while (inResult.next()) {
+                Value v = inResult.currentRow()[0];
+                if (v != ValueNull.INSTANCE) {
+                    if (inResultTested == null) {
+                        inResultTested = new HashSet<>();
+                    }
+                    if (inResultTested.add(v)) {
+                        find(v);
+                        break;
+                    }
+                }
+            }
+        }
+    }
+
+    private void find(Value v) {
+        v = inColumn.convert(v);
+        int id = inColumn.getColumnId();
+        start.setValue(id, v);
+        cursor = index.find(tableFilter, start, start);
+    }
+
+    @Override
+    public boolean previous() {
+        throw DbException.throwInternalError(toString());
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/index/IndexLookupBatch.java b/modules/h2/src/main/java/org/h2/index/IndexLookupBatch.java
new file mode 100644
index 0000000000000..11e683b528c26
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/index/IndexLookupBatch.java
@@ -0,0 +1,70 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.index;
+
+import java.util.List;
+import java.util.concurrent.Future;
+import org.h2.result.SearchRow;
+
+/**
+ * Support for asynchronous batched lookups in indexes. The flow is the
+ * following: H2 engine will be calling
+ * {@link #addSearchRows(SearchRow, SearchRow)} until method
+ * {@link #isBatchFull()} will return {@code true} or there are no more search
+ * rows to add. Then method {@link #find()} will be called to execute batched
+ * lookup. Note that a single instance of {@link IndexLookupBatch} can be reused
+ * for multiple sequential batched lookups, moreover it can be reused for
+ * multiple queries for the same prepared statement.
+ *
+ * @see Index#createLookupBatch(org.h2.table.TableFilter[], int)
+ * @author Sergi Vladykin
+ */
+public interface IndexLookupBatch {
+    /**
+     * Add search row pair to the batch.
+     *
+     * @param first the first row, or null for no limit
+     * @param last the last row, or null for no limit
+     * @return {@code false} if this search row pair is known to produce no
+     *         results and thus the given row pair was not added
+     * @see Index#find(org.h2.table.TableFilter, SearchRow, SearchRow)
+     */
+    boolean addSearchRows(SearchRow first, SearchRow last);
+
+    /**
+     * Check if this batch is full.
+     *
+     * @return {@code true} If batch is full, will not accept any
+     *         more rows and {@link #find()} can be executed.
+     */
+    boolean isBatchFull();
+
+    /**
+     * Execute batched lookup and return future cursor for each provided search
+     * row pair. Note that this method must return exactly the same number of
+     * future cursors in result list as number of
+     * {@link #addSearchRows(SearchRow, SearchRow)} calls has been done before
+     * {@link #find()} call exactly in the same order.
+     *
+     * @return List of future cursors for collected search rows.
+     */
+    List<Future<Cursor>> find();
+
+    /**
+     * Get plan for EXPLAIN.
+     *
+     * @return plan
+     */
+    String getPlanSQL();
+
+    /**
+     * Reset this batch to clear state. This method will be called before and
+     * after each query execution.
+     *
+     * @param beforeQuery if it is being called before query execution
+     */
+    void reset(boolean beforeQuery);
+}
diff --git a/modules/h2/src/main/java/org/h2/index/IndexType.java b/modules/h2/src/main/java/org/h2/index/IndexType.java
new file mode 100644
index 0000000000000..ce760c31b5da1
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/index/IndexType.java
@@ -0,0 +1,207 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.index;
+
+/**
+ * Represents information about the properties of an index
+ */
+public class IndexType {
+
+    private boolean primaryKey, persistent, unique, hash, scan, spatial, affinity;
+    private boolean belongsToConstraint;
+
+    /**
+     * Create a primary key index.
+     *
+     * @param persistent if the index is persistent
+     * @param hash if a hash index should be used
+     * @return the index type
+     */
+    public static IndexType createPrimaryKey(boolean persistent, boolean hash) {
+        IndexType type = new IndexType();
+        type.primaryKey = true;
+        type.persistent = persistent;
+        type.hash = hash;
+        type.unique = true;
+        return type;
+    }
+
+    /**
+     * Create a unique index.
+     *
+     * @param persistent if the index is persistent
+     * @param hash if a hash index should be used
+     * @return the index type
+     */
+    public static IndexType createUnique(boolean persistent, boolean hash) {
+        IndexType type = new IndexType();
+        type.unique = true;
+        type.persistent = persistent;
+        type.hash = hash;
+        return type;
+    }
+
+    /**
+     * Create a non-unique index.
+     *
+     * @param persistent if the index is persistent
+     * @return the index type
+     */
+    public static IndexType createNonUnique(boolean persistent) {
+        return createNonUnique(persistent, false, false);
+    }
+
+    /**
+     * Create a non-unique index.
+     *
+     * @param persistent if the index is persistent
+     * @param hash if a hash index should be used
+     * @param spatial if a spatial index should be used
+     * @return the index type
+     */
+    public static IndexType createNonUnique(boolean persistent, boolean hash,
+            boolean spatial) {
+        IndexType type = new IndexType();
+        type.persistent = persistent;
+        type.hash = hash;
+        type.spatial = spatial;
+        return type;
+    }
+
+    /**
+     * Create an affinity index.
+     *
+     * @return the index type
+     */
+    public static IndexType createAffinity() {
+        IndexType type = new IndexType();
+        type.affinity = true;
+        return type;
+    }
+
+    /**
+     * Create a scan pseudo-index.
+     *
+     * @param persistent if the index is persistent
+     * @return the index type
+     */
+    public static IndexType createScan(boolean persistent) {
+        IndexType type = new IndexType();
+        type.persistent = persistent;
+        type.scan = true;
+        return type;
+    }
+
+    /**
+     * Sets if this index belongs to a constraint.
+     *
+     * @param belongsToConstraint if the index belongs to a constraint
+     */
+    public void setBelongsToConstraint(boolean belongsToConstraint) {
+        this.belongsToConstraint = belongsToConstraint;
+    }
+
+    /**
+     * If the index is created because of a constraint. Such indexes are to be
+     * dropped once the constraint is dropped.
+     *
+     * @return if the index belongs to a constraint
+     */
+    public boolean getBelongsToConstraint() {
+        return belongsToConstraint;
+    }
+
+    /**
+     * Is this a hash index?
+     *
+     * @return true if it is a hash index
+     */
+    public boolean isHash() {
+        return hash;
+    }
+
+    /**
+     * Is this a spatial index?
+     *
+     * @return true if it is a spatial index
+     */
+    public boolean isSpatial() {
+        return spatial;
+    }
+
+    /**
+     * Is this index persistent?
+     *
+     * @return true if it is persistent
+     */
+    public boolean isPersistent() {
+        return persistent;
+    }
+
+    /**
+     * Does this index belong to a primary key constraint?
+     *
+     * @return true if it references a primary key constraint
+     */
+    public boolean isPrimaryKey() {
+        return primaryKey;
+    }
+
+    /**
+     * Is this a unique index?
+     *
+     * @return true if it is
+     */
+    public boolean isUnique() {
+        return unique;
+    }
+
+    /**
+     * Does this index represent an affinity key?
+     *
+     * @return true if it does
+     */
+    public boolean isAffinity() {
+        return affinity;
+    }
+
+    /**
+     * Get the SQL snippet to create such an index.
+     *
+     * @return the SQL snippet
+     */
+    public String getSQL() {
+        StringBuilder buff = new StringBuilder();
+        if (primaryKey) {
+            buff.append("PRIMARY KEY");
+            if (hash) {
+                buff.append(" HASH");
+            }
+        } else {
+            if (unique) {
+                buff.append("UNIQUE ");
+            }
+            if (hash) {
+                buff.append("HASH ");
+            }
+            if (spatial) {
+                buff.append("SPATIAL ");
+            }
+            buff.append("INDEX");
+        }
+        return buff.toString();
+    }
+
+    /**
+     * Is this a table scan pseudo-index?
+     *
+     * @return true if it is
+     */
+    public boolean isScan() {
+        return scan;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/index/LinkedCursor.java b/modules/h2/src/main/java/org/h2/index/LinkedCursor.java
new file mode 100644
index 0000000000000..7c930e704ba3b
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/index/LinkedCursor.java
@@ -0,0 +1,78 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.index;
+
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import org.h2.engine.Session;
+import org.h2.message.DbException;
+import org.h2.result.Row;
+import org.h2.result.SearchRow;
+import org.h2.table.Column;
+import org.h2.table.TableLink;
+import org.h2.value.DataType;
+import org.h2.value.Value;
+
+/**
+ * The cursor implementation for the linked index.
+ */
+public class LinkedCursor implements Cursor {
+
+    private final TableLink tableLink;
+    private final PreparedStatement prep;
+    private final String sql;
+    private final Session session;
+    private final ResultSet rs;
+    private Row current;
+
+    LinkedCursor(TableLink tableLink, ResultSet rs, Session session,
+            String sql, PreparedStatement prep) {
+        this.session = session;
+        this.tableLink = tableLink;
+        this.rs = rs;
+        this.sql = sql;
+        this.prep = prep;
+    }
+
+    @Override
+    public Row get() {
+        return current;
+    }
+
+    @Override
+    public SearchRow getSearchRow() {
+        return current;
+    }
+
+    @Override
+    public boolean next() {
+        try {
+            boolean result = rs.next();
+            if (!result) {
+                rs.close();
+                tableLink.reusePreparedStatement(prep, sql);
+                current = null;
+                return false;
+            }
+        } catch (SQLException e) {
+            throw DbException.convert(e);
+        }
+        current = tableLink.getTemplateRow();
+        for (int i = 0; i < current.getColumnCount(); i++) {
+            Column col = tableLink.getColumn(i);
+            Value v = DataType.readValue(session, rs, i + 1, col.getType());
+            current.setValue(i, v);
+        }
+        return true;
+    }
+
+    @Override
+    public boolean previous() {
+        throw DbException.throwInternalError(toString());
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/index/LinkedIndex.java b/modules/h2/src/main/java/org/h2/index/LinkedIndex.java
new file mode 100644
index 0000000000000..0354785a6807c
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/index/LinkedIndex.java
@@ -0,0 +1,273 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.TableFilter; +import org.h2.table.TableLink; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * A linked index is a index for a linked (remote) table. + * It is backed by an index on the remote table which is accessed over JDBC. + */ +public class LinkedIndex extends BaseIndex { + + private final TableLink link; + private final String targetTableName; + private long rowCount; + + public LinkedIndex(TableLink table, int id, IndexColumn[] columns, + IndexType indexType) { + initBaseIndex(table, id, null, columns, indexType); + link = table; + targetTableName = link.getQualifiedTable(); + } + + @Override + public String getCreateSQL() { + return null; + } + + @Override + public void close(Session session) { + // nothing to do + } + + private static boolean isNull(Value v) { + return v == null || v == ValueNull.INSTANCE; + } + + @Override + public void add(Session session, Row row) { + ArrayList params = New.arrayList(); + StatementBuilder buff = new StatementBuilder("INSERT INTO "); + buff.append(targetTableName).append(" VALUES("); + for (int i = 0; i < row.getColumnCount(); i++) { + Value v = row.getValue(i); + buff.appendExceptFirst(", "); + if (v == null) { + buff.append("DEFAULT"); + } else if (isNull(v)) { + buff.append("NULL"); + } else { + buff.append('?'); + params.add(v); + } + } + buff.append(')'); + String sql = buff.toString(); + try { + link.execute(sql, params, true); + rowCount++; + } catch (Exception e) { + throw 
TableLink.wrapException(sql, e); + } + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + ArrayList params = New.arrayList(); + StatementBuilder buff = new StatementBuilder("SELECT * FROM "); + buff.append(targetTableName).append(" T"); + for (int i = 0; first != null && i < first.getColumnCount(); i++) { + Value v = first.getValue(i); + if (v != null) { + buff.appendOnlyFirst(" WHERE "); + buff.appendExceptFirst(" AND "); + Column col = table.getColumn(i); + buff.append(col.getSQL()); + if (v == ValueNull.INSTANCE) { + buff.append(" IS NULL"); + } else { + buff.append(">="); + addParameter(buff, col); + params.add(v); + } + } + } + for (int i = 0; last != null && i < last.getColumnCount(); i++) { + Value v = last.getValue(i); + if (v != null) { + buff.appendOnlyFirst(" WHERE "); + buff.appendExceptFirst(" AND "); + Column col = table.getColumn(i); + buff.append(col.getSQL()); + if (v == ValueNull.INSTANCE) { + buff.append(" IS NULL"); + } else { + buff.append("<="); + addParameter(buff, col); + params.add(v); + } + } + } + String sql = buff.toString(); + try { + PreparedStatement prep = link.execute(sql, params, false); + ResultSet rs = prep.getResultSet(); + return new LinkedCursor(link, rs, session, sql, prep); + } catch (Exception e) { + throw TableLink.wrapException(sql, e); + } + } + + private void addParameter(StatementBuilder buff, Column col) { + if (col.getType() == Value.STRING_FIXED && link.isOracle()) { + // workaround for Oracle + // create table test(id int primary key, name char(15)); + // insert into test values(1, 'Hello') + // select * from test where name = ? -- where ? = "Hello" > no rows + buff.append("CAST(? 
AS CHAR(").append(col.getPrecision()).append("))"); + } else { + buff.append('?'); + } + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return 100 + getCostRangeIndex(masks, rowCount + + Constants.COST_ROW_OFFSET, filters, filter, sortOrder, false, allColumnsSet); + } + + @Override + public void remove(Session session) { + // nothing to do + } + + @Override + public void truncate(Session session) { + // nothing to do + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("LINKED"); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + // TODO optimization: could get the first or last value (in any case; + // maybe not optimized) + throw DbException.getUnsupportedException("LINKED"); + } + + @Override + public void remove(Session session, Row row) { + ArrayList params = New.arrayList(); + StatementBuilder buff = new StatementBuilder("DELETE FROM "); + buff.append(targetTableName).append(" WHERE "); + for (int i = 0; i < row.getColumnCount(); i++) { + buff.appendExceptFirst("AND "); + Column col = table.getColumn(i); + buff.append(col.getSQL()); + Value v = row.getValue(i); + if (isNull(v)) { + buff.append(" IS NULL "); + } else { + buff.append('='); + addParameter(buff, col); + params.add(v); + buff.append(' '); + } + } + String sql = buff.toString(); + try { + PreparedStatement prep = link.execute(sql, params, false); + int count = prep.executeUpdate(); + link.reusePreparedStatement(prep, sql); + rowCount -= count; + } catch (Exception e) { + throw TableLink.wrapException(sql, e); + } + } + + /** + * Update a row using an UPDATE statement. This method is to be called if the + * emit updates option is enabled. 
+ * + * @param oldRow the old data + * @param newRow the new data + */ + public void update(Row oldRow, Row newRow) { + ArrayList params = New.arrayList(); + StatementBuilder buff = new StatementBuilder("UPDATE "); + buff.append(targetTableName).append(" SET "); + for (int i = 0; i < newRow.getColumnCount(); i++) { + buff.appendExceptFirst(", "); + buff.append(table.getColumn(i).getSQL()).append('='); + Value v = newRow.getValue(i); + if (v == null) { + buff.append("DEFAULT"); + } else { + buff.append('?'); + params.add(v); + } + } + buff.append(" WHERE "); + buff.resetCount(); + for (int i = 0; i < oldRow.getColumnCount(); i++) { + Column col = table.getColumn(i); + buff.appendExceptFirst(" AND "); + buff.append(col.getSQL()); + Value v = oldRow.getValue(i); + if (isNull(v)) { + buff.append(" IS NULL"); + } else { + buff.append('='); + params.add(v); + addParameter(buff, col); + } + } + String sql = buff.toString(); + try { + link.execute(sql, params, true); + } catch (Exception e) { + throw TableLink.wrapException(sql, e); + } + } + + @Override + public long getRowCount(Session session) { + return rowCount; + } + + @Override + public long getRowCountApproximation() { + return rowCount; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } +} diff --git a/modules/h2/src/main/java/org/h2/index/MetaCursor.java b/modules/h2/src/main/java/org/h2/index/MetaCursor.java new file mode 100644 index 0000000000000..b5a29a5b9dde3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/MetaCursor.java @@ -0,0 +1,48 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.ArrayList; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * An index for a meta data table. + * This index can only scan through all rows, search is not supported. 
+ */ +public class MetaCursor implements Cursor { + + private Row current; + private final ArrayList rows; + private int index; + + MetaCursor(ArrayList rows) { + this.rows = rows; + } + + @Override + public Row get() { + return current; + } + + @Override + public SearchRow getSearchRow() { + return current; + } + + @Override + public boolean next() { + current = index >= rows.size() ? null : rows.get(index++); + return current != null; + } + + @Override + public boolean previous() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/MetaIndex.java b/modules/h2/src/main/java/org/h2/index/MetaIndex.java new file mode 100644 index 0000000000000..8c84ee0ccfaff --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/MetaIndex.java @@ -0,0 +1,138 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.MetaTable; +import org.h2.table.TableFilter; + +/** + * The index implementation for meta data tables. 
+ */ +public class MetaIndex extends BaseIndex { + + private final MetaTable meta; + private final boolean scan; + + public MetaIndex(MetaTable meta, IndexColumn[] columns, boolean scan) { + initBaseIndex(meta, 0, null, columns, IndexType.createNonUnique(true)); + this.meta = meta; + this.scan = scan; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void add(Session session, Row row) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public void remove(Session session, Row row) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + ArrayList rows = meta.generateRows(session, first, last); + return new MetaCursor(rows); + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + if (scan) { + return 10 * MetaTable.ROW_COUNT_APPROXIMATION; + } + return getCostRangeIndex(masks, MetaTable.ROW_COUNT_APPROXIMATION, + filters, filter, sortOrder, false, allColumnsSet); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public void remove(Session session) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public int getColumnIndex(Column col) { + if (scan) { + // the scan index cannot use any columns + return -1; + } + return super.getColumnIndex(col); + } + + @Override + public boolean isFirstColumn(Column column) { + if (scan) { + return false; + } + return super.isFirstColumn(column); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("META"); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public String getCreateSQL() { + return null; + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + 
@Override + public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public long getRowCount(Session session) { + return MetaTable.ROW_COUNT_APPROXIMATION; + } + + @Override + public long getRowCountApproximation() { + return MetaTable.ROW_COUNT_APPROXIMATION; + } + + @Override + public long getDiskSpaceUsed() { + return meta.getDiskSpaceUsed(); + } + + @Override + public String getPlanSQL() { + return "meta"; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/MultiVersionCursor.java b/modules/h2/src/main/java/org/h2/index/MultiVersionCursor.java new file mode 100644 index 0000000000000..52fe1a84fecd3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/MultiVersionCursor.java @@ -0,0 +1,188 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * The cursor implementation for the multi-version index. + */ +public class MultiVersionCursor implements Cursor { + + private final MultiVersionIndex index; + private final Session session; + private final Cursor baseCursor, deltaCursor; + private final Object sync; + private SearchRow baseRow; + private Row deltaRow; + private boolean onBase; + private boolean end; + private boolean needNewDelta, needNewBase; + private boolean reverse; + + MultiVersionCursor(Session session, MultiVersionIndex index, Cursor base, + Cursor delta, Object sync) { + this.session = session; + this.index = index; + this.baseCursor = base; + this.deltaCursor = delta; + this.sync = sync; + needNewDelta = true; + needNewBase = true; + } + + /** + * Load the current row. 
+ */ + void loadCurrent() { + synchronized (sync) { + baseRow = baseCursor.getSearchRow(); + deltaRow = deltaCursor.get(); + needNewDelta = false; + needNewBase = false; + } + } + + private void loadNext(boolean base) { + synchronized (sync) { + if (base) { + if (step(baseCursor)) { + baseRow = baseCursor.getSearchRow(); + } else { + baseRow = null; + } + } else { + if (step(deltaCursor)) { + deltaRow = deltaCursor.get(); + } else { + deltaRow = null; + } + } + } + } + + private boolean step(Cursor cursor) { + return reverse ? cursor.previous() : cursor.next(); + } + + @Override + public Row get() { + synchronized (sync) { + if (end) { + return null; + } + return onBase ? baseCursor.get() : deltaCursor.get(); + } + } + + @Override + public SearchRow getSearchRow() { + synchronized (sync) { + if (end) { + return null; + } + return onBase ? baseCursor.getSearchRow() : deltaCursor.getSearchRow(); + } + } + + @Override + public boolean next() { + synchronized (sync) { + if (SysProperties.CHECK && end) { + DbException.throwInternalError(); + } + while (true) { + if (needNewDelta) { + loadNext(false); + needNewDelta = false; + } + if (needNewBase) { + loadNext(true); + needNewBase = false; + } + if (deltaRow == null) { + if (baseRow == null) { + end = true; + return false; + } + onBase = true; + needNewBase = true; + return true; + } + int sessionId = deltaRow.getSessionId(); + boolean isThisSession = sessionId == session.getId(); + boolean isDeleted = deltaRow.isDeleted(); + if (isThisSession && isDeleted) { + needNewDelta = true; + continue; + } + if (baseRow == null) { + if (isDeleted) { + if (isThisSession) { + end = true; + return false; + } + // the row was deleted by another session: return it + onBase = false; + needNewDelta = true; + return true; + } + DbException.throwInternalError(); + } + int compare = index.compareRows(deltaRow, baseRow); + if (compare == 0) { + // can't use compareKeys because the + // version would be compared as well + long k1 = 
deltaRow.getKey(); + long k2 = baseRow.getKey(); + compare = Long.compare(k1, k2); + } + if (compare == 0) { + if (isDeleted) { + if (isThisSession) { + DbException.throwInternalError(); + } + // another session updated the row + } else { + if (isThisSession) { + onBase = false; + needNewBase = true; + needNewDelta = true; + return true; + } + // another session inserted the row: ignore + needNewBase = true; + needNewDelta = true; + continue; + } + } + if (compare > 0) { + onBase = true; + needNewBase = true; + return true; + } + onBase = false; + needNewDelta = true; + return true; + } + } + } + + @Override + public boolean previous() { + reverse = true; + try { + return next(); + } finally { + reverse = false; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/MultiVersionIndex.java b/modules/h2/src/main/java/org/h2/index/MultiVersionIndex.java new file mode 100644 index 0000000000000..c76e2c199dc85 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/MultiVersionIndex.java @@ -0,0 +1,406 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.schema.Schema; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * A multi-version index is a combination of a regular index, + * and an in-memory tree index that contains uncommitted changes. + * Uncommitted changes can include new rows, and deleted rows. 
+ */ +public class MultiVersionIndex implements Index { + + private final Index base; + private final TreeIndex delta; + private final RegularTable table; + private final Object sync; + private final Column firstColumn; + + public MultiVersionIndex(Index base, RegularTable table) { + this.base = base; + this.table = table; + IndexType deltaIndexType = IndexType.createNonUnique(false); + if (base instanceof SpatialIndex) { + throw DbException.get(ErrorCode.FEATURE_NOT_SUPPORTED_1, + "MVCC & spatial index"); + } + this.delta = new TreeIndex(table, -1, "DELTA", base.getIndexColumns(), + deltaIndexType); + delta.setMultiVersion(true); + this.sync = base.getDatabase(); + this.firstColumn = base.getColumns()[0]; + } + + @Override + public void add(Session session, Row row) { + synchronized (sync) { + base.add(session, row); + if (removeIfExists(session, row)) { + // for example rolling back a delete operation + } else if (row.getSessionId() != 0) { + // don't insert rows that are added when creating an index + delta.add(session, row); + } + } + } + + @Override + public void close(Session session) { + synchronized (sync) { + base.close(session); + } + } + + @Override + public boolean isFindUsingFullTableScan() { + return base.isFindUsingFullTableScan(); + } + + @Override + public Cursor find(TableFilter filter, SearchRow first, SearchRow last) { + synchronized (sync) { + Cursor baseCursor = base.find(filter, first, last); + Cursor deltaCursor = delta.find(filter, first, last); + return new MultiVersionCursor(filter.getSession(), this, + baseCursor, deltaCursor, sync); + } + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + synchronized (sync) { + Cursor baseCursor = base.find(session, first, last); + Cursor deltaCursor = delta.find(session, first, last); + return new MultiVersionCursor(session, this, baseCursor, deltaCursor, sync); + } + } + + @Override + public Cursor findNext(Session session, SearchRow first, SearchRow last) { + 
throw DbException.throwInternalError(toString()); + } + + @Override + public boolean canFindNext() { + // TODO possible, but more complicated + return false; + } + + @Override + public boolean canGetFirstOrLast() { + return base.canGetFirstOrLast() && delta.canGetFirstOrLast(); + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + if (first) { + // TODO optimization: this loops through NULL elements + Cursor cursor = find(session, null, null); + while (cursor.next()) { + SearchRow row = cursor.getSearchRow(); + Value v = row.getValue(firstColumn.getColumnId()); + if (v != ValueNull.INSTANCE) { + return cursor; + } + } + return cursor; + } + Cursor baseCursor = base.findFirstOrLast(session, false); + Cursor deltaCursor = delta.findFirstOrLast(session, false); + MultiVersionCursor cursor = new MultiVersionCursor(session, this, + baseCursor, deltaCursor, sync); + cursor.loadCurrent(); + // TODO optimization: this loops through NULL elements + while (cursor.previous()) { + SearchRow row = cursor.getSearchRow(); + if (row == null) { + break; + } + Value v = row.getValue(firstColumn.getColumnId()); + if (v != ValueNull.INSTANCE) { + return cursor; + } + } + return cursor; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return base.getCost(session, masks, filters, filter, sortOrder, allColumnsSet); + } + + @Override + public boolean needRebuild() { + return base.needRebuild(); + } + + /** + * Check if there is an uncommitted row with the given key + * within a different session. 
+ * + * @param session the original session + * @param row the row (only the key is checked) + * @return true if there is an uncommitted row + */ + public boolean isUncommittedFromOtherSession(Session session, Row row) { + Cursor c = delta.find(session, row, row); + while (c.next()) { + Row r = c.get(); + return r.getSessionId() != session.getId(); + } + return false; + } + + private boolean removeIfExists(Session session, Row row) { + // maybe it was inserted by the same session just before + Cursor c = delta.find(session, row, row); + while (c.next()) { + Row r = c.get(); + if (r.getKey() == row.getKey() && r.getVersion() == row.getVersion()) { + if (r != row && table.getScanIndex(session).compareRows(r, row) != 0) { + row.setVersion(r.getVersion() + 1); + } else { + delta.remove(session, r); + return true; + } + } + } + return false; + } + + @Override + public void remove(Session session, Row row) { + synchronized (sync) { + base.remove(session, row); + if (removeIfExists(session, row)) { + // added and deleted in the same transaction: no change + } else { + delta.add(session, row); + } + } + } + + @Override + public void remove(Session session) { + synchronized (sync) { + base.remove(session); + } + } + + @Override + public void truncate(Session session) { + synchronized (sync) { + delta.truncate(session); + base.truncate(session); + } + } + + @Override + public void commit(int operation, Row row) { + synchronized (sync) { + removeIfExists(null, row); + } + } + + @Override + public int compareRows(SearchRow rowData, SearchRow compare) { + return base.compareRows(rowData, compare); + } + + @Override + public int getColumnIndex(Column col) { + return base.getColumnIndex(col); + } + + @Override + public boolean isFirstColumn(Column column) { + return base.isFirstColumn(column); + } + + @Override + public Column[] getColumns() { + return base.getColumns(); + } + + @Override + public IndexColumn[] getIndexColumns() { + return base.getIndexColumns(); + } + + 
@Override + public String getCreateSQL() { + return base.getCreateSQL(); + } + + @Override + public String getCreateSQLForCopy(Table forTable, String quotedName) { + return base.getCreateSQLForCopy(forTable, quotedName); + } + + @Override + public String getDropSQL() { + return base.getDropSQL(); + } + + @Override + public IndexType getIndexType() { + return base.getIndexType(); + } + + @Override + public String getPlanSQL() { + return base.getPlanSQL(); + } + + @Override + public long getRowCount(Session session) { + return base.getRowCount(session); + } + + @Override + public Table getTable() { + return base.getTable(); + } + + @Override + public int getType() { + return base.getType(); + } + + @Override + public void removeChildrenAndResources(Session session) { + synchronized (sync) { + table.removeIndex(this); + remove(session); + } + } + + @Override + public String getSQL() { + return base.getSQL(); + } + + @Override + public Schema getSchema() { + return base.getSchema(); + } + + @Override + public void checkRename() { + base.checkRename(); + } + + @Override + public ArrayList getChildren() { + return base.getChildren(); + } + + @Override + public String getComment() { + return base.getComment(); + } + + @Override + public Database getDatabase() { + return base.getDatabase(); + } + + @Override + public int getId() { + return base.getId(); + } + + @Override + public String getName() { + return base.getName(); + } + + @Override + public boolean isTemporary() { + return base.isTemporary(); + } + + @Override + public void rename(String newName) { + base.rename(newName); + } + + @Override + public void setComment(String comment) { + base.setComment(comment); + } + + @Override + public void setTemporary(boolean temporary) { + base.setTemporary(temporary); + } + + @Override + public long getRowCountApproximation() { + return base.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return base.getDiskSpaceUsed(); + } + + public Index 
getBaseIndex() { + return base; + } + + @Override + public Row getRow(Session session, long key) { + return base.getRow(session, key); + } + + @Override + public boolean isHidden() { + return base.isHidden(); + } + + @Override + public boolean isRowIdIndex() { + return base.isRowIdIndex() && delta.isRowIdIndex(); + } + + @Override + public boolean canScan() { + return base.canScan(); + } + + @Override + public void setSortedInsertMode(boolean sortedInsertMode) { + base.setSortedInsertMode(sortedInsertMode); + delta.setSortedInsertMode(sortedInsertMode); + } + + @Override + public IndexLookupBatch createLookupBatch(TableFilter[] filters, int filter) { + // Lookup batching is not supported. + return null; + } +} diff --git a/modules/h2/src/main/java/org/h2/index/NonUniqueHashCursor.java b/modules/h2/src/main/java/org/h2/index/NonUniqueHashCursor.java new file mode 100644 index 0000000000000..b06c7e743b341 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/NonUniqueHashCursor.java @@ -0,0 +1,57 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.ArrayList; +import org.h2.engine.Session; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.table.RegularTable; + +/** + * Cursor implementation for non-unique hash index + * + * @author Sergi Vladykin + */ +public class NonUniqueHashCursor implements Cursor { + + private final Session session; + private final ArrayList positions; + private final RegularTable tableData; + + private int index = -1; + + public NonUniqueHashCursor(Session session, RegularTable tableData, + ArrayList positions) { + this.session = session; + this.tableData = tableData; + this.positions = positions; + } + + @Override + public Row get() { + if (index < 0 || index >= positions.size()) { + return null; + } + return tableData.getRow(session, positions.get(index)); + } + + @Override + public SearchRow getSearchRow() { + return get(); + } + + @Override + public boolean next() { + return positions != null && ++index < positions.size(); + } + + @Override + public boolean previous() { + return positions != null && --index >= 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/NonUniqueHashIndex.java b/modules/h2/src/main/java/org/h2/index/NonUniqueHashIndex.java new file mode 100644 index 0000000000000..89268b7b5b8f2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/NonUniqueHashIndex.java @@ -0,0 +1,172 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.ArrayList; +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.TableFilter; +import org.h2.util.New; +import org.h2.util.ValueHashMap; +import org.h2.value.Value; + +/** + * A non-unique index based on an in-memory hash map. + * + * @author Sergi Vladykin + */ +public class NonUniqueHashIndex extends BaseIndex { + + /** + * The index of the indexed column. + */ + private final int indexColumn; + private ValueHashMap<ArrayList<Long>> rows; + private final RegularTable tableData; + private long rowCount; + + public NonUniqueHashIndex(RegularTable table, int id, String indexName, + IndexColumn[] columns, IndexType indexType) { + initBaseIndex(table, id, indexName, columns, indexType); + this.indexColumn = columns[0].column.getColumnId(); + this.tableData = table; + reset(); + } + + private void reset() { + rows = ValueHashMap.newInstance(); + rowCount = 0; + } + + @Override + public void truncate(Session session) { + reset(); + } + + @Override + public void add(Session session, Row row) { + Value key = row.getValue(indexColumn); + ArrayList<Long> positions = rows.get(key); + if (positions == null) { + positions = New.arrayList(); + rows.put(key, positions); + } + positions.add(row.getKey()); + rowCount++; + } + + @Override + public void remove(Session session, Row row) { + if (rowCount == 1) { + // last row in table + reset(); + } else { + Value key = row.getValue(indexColumn); + ArrayList<Long> positions = rows.get(key); + if (positions.size() == 1) { + // last row with such key + rows.remove(key); + } else { + positions.remove(row.getKey()); + } + rowCount--; + } + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + if (first 
== null || last == null) { + throw DbException.throwInternalError(first + " " + last); + } + if (first != last) { + if (compareKeys(first, last) != 0) { + throw DbException.throwInternalError(); + } + } + Value v = first.getValue(indexColumn); + /* + * Sometimes the incoming search is a similar, but not the same type + * e.g. the search value is INT, but the index column is LONG. In which + * case we need to convert, otherwise the ValueHashMap will not find the + * result. + */ + v = v.convertTo(tableData.getColumn(indexColumn).getType()); + ArrayList positions = rows.get(v); + return new NonUniqueHashCursor(session, tableData, positions); + } + + @Override + public long getRowCount(Session session) { + return rowCount; + } + + @Override + public long getRowCountApproximation() { + return rowCount; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void remove(Session session) { + // nothing to do + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + for (Column column : columns) { + int index = column.getColumnId(); + int mask = masks[index]; + if ((mask & IndexCondition.EQUALITY) != IndexCondition.EQUALITY) { + return Long.MAX_VALUE; + } + } + return 2; + } + + @Override + public void checkRename() { + // ok + } + + @Override + public boolean needRebuild() { + return true; + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.getUnsupportedException("HASH"); + } + + @Override + public boolean canScan() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageBtree.java b/modules/h2/src/main/java/org/h2/index/PageBtree.java new file mode 100644 index 0000000000000..d7712deb8b86c --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/index/PageBtree.java @@ -0,0 +1,292 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.result.SearchRow; +import org.h2.store.Data; +import org.h2.store.Page; + +/** + * A page that contains index data. + */ +public abstract class PageBtree extends Page { + + /** + * This is a root page. + */ + static final int ROOT = 0; + + /** + * Indicator that the row count is not known. + */ + static final int UNKNOWN_ROWCOUNT = -1; + + /** + * The index. + */ + protected final PageBtreeIndex index; + + /** + * The page number of the parent. + */ + protected int parentPageId; + + /** + * The data page. + */ + protected final Data data; + + /** + * The row offsets. + */ + protected int[] offsets; + + /** + * The number of entries. + */ + protected int entryCount; + + /** + * The index data + */ + protected SearchRow[] rows; + + /** + * The start of the data area. + */ + protected int start; + + /** + * If only the position of the row is stored in the page + */ + protected boolean onlyPosition; + + /** + * Whether the data page is up-to-date. + */ + protected boolean written; + + /** + * The estimated memory used by this object. + */ + private final int memoryEstimated; + + PageBtree(PageBtreeIndex index, int pageId, Data data) { + this.index = index; + this.data = data; + setPos(pageId); + memoryEstimated = index.getMemoryPerPage(); + } + + /** + * Get the real row count. If required, this will read all child pages. + * + * @return the row count + */ + abstract int getRowCount(); + + /** + * Set the stored row count. This will write the page. + * + * @param rowCount the stored row count + */ + abstract void setRowCountStored(int rowCount); + + /** + * Find an entry. 
+ * + * @param compare the row + * @param bigger if looking for a larger row + * @param add if the row should be added (check for duplicate keys) + * @param compareKeys compare the row keys as well + * @return the index of the found row + */ + int find(SearchRow compare, boolean bigger, boolean add, boolean compareKeys) { + if (compare == null) { + return 0; + } + int l = 0, r = entryCount; + int comp = 1; + while (l < r) { + int i = (l + r) >>> 1; + SearchRow row = getRow(i); + comp = index.compareRows(row, compare); + if (comp == 0) { + if (add && index.indexType.isUnique()) { + if (!index.mayHaveNullDuplicates(compare)) { + throw index.getDuplicateKeyException(compare.toString()); + } + } + if (compareKeys) { + comp = index.compareKeys(row, compare); + if (comp == 0) { + return i; + } + } + } + if (comp > 0 || (!bigger && comp == 0)) { + r = i; + } else { + l = i + 1; + } + } + return l; + } + + /** + * Add a row if possible. If it is possible this method returns -1, + * otherwise the split point. It is always possible to add one row. + * + * @param row the row to add + * @return the split point of this page, or -1 if no split is required + */ + abstract int addRowTry(SearchRow row); + + /** + * Find the first row. + * + * @param cursor the cursor + * @param first the row to find + * @param bigger if the row should be bigger + */ + abstract void find(PageBtreeCursor cursor, SearchRow first, boolean bigger); + + /** + * Find the last row. + * + * @param cursor the cursor + */ + abstract void last(PageBtreeCursor cursor); + + /** + * Get the row at this position. 
+ * + * @param at the index + * @return the row + */ + SearchRow getRow(int at) { + SearchRow row = rows[at]; + if (row == null) { + row = index.readRow(data, offsets[at], onlyPosition, true); + memoryChange(); + rows[at] = row; + } else if (!index.hasData(row)) { + row = index.readRow(row.getKey()); + memoryChange(); + rows[at] = row; + } + return row; + } + + /** + * The memory usage of this page was changed. Propagate the change if + * needed. + */ + protected void memoryChange() { + // nothing to do + } + + /** + * Split the index page at the given point. + * + * @param splitPoint the index where to split + * @return the new page that contains about half the entries + */ + abstract PageBtree split(int splitPoint); + + /** + * Change the page id. + * + * @param id the new page id + */ + void setPageId(int id) { + changeCount = index.getPageStore().getChangeCount(); + written = false; + index.getPageStore().removeFromCache(getPos()); + setPos(id); + index.getPageStore().logUndo(this, null); + remapChildren(); + } + + /** + * Get the first child leaf page of a page. + * + * @return the page + */ + abstract PageBtreeLeaf getFirstLeaf(); + + /** + * Get the first child leaf page of a page. + * + * @return the page + */ + abstract PageBtreeLeaf getLastLeaf(); + + /** + * Change the parent page id. + * + * @param id the new parent page id + */ + void setParentPageId(int id) { + index.getPageStore().logUndo(this, data); + changeCount = index.getPageStore().getChangeCount(); + written = false; + parentPageId = id; + } + + /** + * Update the parent id of all children. + */ + abstract void remapChildren(); + + /** + * Remove a row. + * + * @param row the row to remove + * @return null if the last row didn't change, + * the deleted row if the page is now empty, + * otherwise the new last row of this page + */ + abstract SearchRow remove(SearchRow row); + + /** + * Free this page and all child pages. 
+ */ + abstract void freeRecursive(); + + /** + * Ensure all rows are read in memory. + */ + protected void readAllRows() { + for (int i = 0; i < entryCount; i++) { + SearchRow row = rows[i]; + if (row == null) { + row = index.readRow(data, offsets[i], onlyPosition, false); + rows[i] = row; + } + } + } + + /** + * Get the estimated memory size. + * + * @return number of double words (4 bytes) + */ + @Override + public int getMemory() { + // need to always return the same value for the same object (otherwise + // the cache size would change after adding and then removing the same + // page from the cache) but index.getMemoryPerPage() can adopt according + // to how much memory a row needs on average + return memoryEstimated; + } + + @Override + public boolean canRemove() { + return changeCount < index.getPageStore().getChangeCount(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageBtreeCursor.java b/modules/h2/src/main/java/org/h2/index/PageBtreeCursor.java new file mode 100644 index 0000000000000..61bb8b83c0c59 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageBtreeCursor.java @@ -0,0 +1,93 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.engine.Session; +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * The cursor implementation for the page b-tree index. + */ +public class PageBtreeCursor implements Cursor { + + private final Session session; + private final PageBtreeIndex index; + private final SearchRow last; + private PageBtreeLeaf current; + private int i; + private SearchRow currentSearchRow; + private Row currentRow; + + PageBtreeCursor(Session session, PageBtreeIndex index, SearchRow last) { + this.session = session; + this.index = index; + this.last = last; + } + + /** + * Set the position of the current row. 
+ * + * @param current the leaf page + * @param i the index within the page + */ + void setCurrent(PageBtreeLeaf current, int i) { + this.current = current; + this.i = i; + } + + @Override + public Row get() { + if (currentRow == null && currentSearchRow != null) { + currentRow = index.getRow(session, currentSearchRow.getKey()); + } + return currentRow; + } + + @Override + public SearchRow getSearchRow() { + return currentSearchRow; + } + + @Override + public boolean next() { + if (current == null) { + return false; + } + if (i >= current.getEntryCount()) { + current.nextPage(this); + if (current == null) { + return false; + } + } + currentSearchRow = current.getRow(i); + currentRow = null; + if (last != null && index.compareRows(currentSearchRow, last) > 0) { + currentSearchRow = null; + return false; + } + i++; + return true; + } + + @Override + public boolean previous() { + if (current == null) { + return false; + } + if (i < 0) { + current.previousPage(this); + if (current == null) { + return false; + } + } + currentSearchRow = current.getRow(i); + currentRow = null; + i--; + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageBtreeIndex.java b/modules/h2/src/main/java/org/h2/index/PageBtreeIndex.java new file mode 100644 index 0000000000000..4f0b493ce0fe9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageBtreeIndex.java @@ -0,0 +1,494 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.store.Data; +import org.h2.store.Page; +import org.h2.store.PageStore; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.TableFilter; +import org.h2.util.MathUtils; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * This is the most common type of index, a b tree index. + * Only the data of the indexed columns are stored in the index. + */ +public class PageBtreeIndex extends PageIndex { + + private static int memoryChangeRequired; + + private final PageStore store; + private final RegularTable tableData; + private final boolean needRebuild; + private long rowCount; + private int memoryPerPage; + private int memoryCount; + + public PageBtreeIndex(RegularTable table, int id, String indexName, + IndexColumn[] columns, + IndexType indexType, boolean create, Session session) { + initBaseIndex(table, id, indexName, columns, indexType); + if (!database.isStarting() && create) { + checkIndexColumnTypes(columns); + } + // int test; + // trace.setLevel(TraceSystem.DEBUG); + tableData = table; + if (!database.isPersistent() || id < 0) { + throw DbException.throwInternalError("" + indexName); + } + this.store = database.getPageStore(); + store.addIndex(this); + if (create) { + // new index + rootPageId = store.allocatePage(); + // TODO currently the head position is stored in the log + // it should not for new tables, otherwise redo of other operations + // must ensure this page is not used for other things + store.addMeta(this, session); + PageBtreeLeaf root = PageBtreeLeaf.create(this, rootPageId, PageBtree.ROOT); + 
store.logUndo(root, null); + store.update(root); + } else { + rootPageId = store.getRootPageId(id); + PageBtree root = getPage(rootPageId); + rowCount = root.getRowCount(); + } + this.needRebuild = create || (rowCount == 0 && store.isRecoveryRunning()); + if (trace.isDebugEnabled()) { + trace.debug("opened {0} rows: {1}", getName() , rowCount); + } + memoryPerPage = (Constants.MEMORY_PAGE_BTREE + store.getPageSize()) >> 2; + } + + @Override + public void add(Session session, Row row) { + if (trace.isDebugEnabled()) { + trace.debug("{0} add {1}", getName(), row); + } + // safe memory + SearchRow newRow = getSearchRow(row); + try { + addRow(newRow); + } finally { + store.incrementChangeCount(); + } + } + + private void addRow(SearchRow newRow) { + while (true) { + PageBtree root = getPage(rootPageId); + int splitPoint = root.addRowTry(newRow); + if (splitPoint == -1) { + break; + } + if (trace.isDebugEnabled()) { + trace.debug("split {0}", splitPoint); + } + SearchRow pivot = root.getRow(splitPoint - 1); + store.logUndo(root, root.data); + PageBtree page1 = root; + PageBtree page2 = root.split(splitPoint); + store.logUndo(page2, null); + int id = store.allocatePage(); + page1.setPageId(id); + page1.setParentPageId(rootPageId); + page2.setParentPageId(rootPageId); + PageBtreeNode newRoot = PageBtreeNode.create( + this, rootPageId, PageBtree.ROOT); + store.logUndo(newRoot, null); + newRoot.init(page1, pivot, page2); + store.update(page1); + store.update(page2); + store.update(newRoot); + root = newRoot; + } + invalidateRowCount(); + rowCount++; + } + + /** + * Create a search row for this row. + * + * @param row the row + * @return the search row + */ + private SearchRow getSearchRow(Row row) { + SearchRow r = table.getTemplateSimpleRow(columns.length == 1); + r.setKeyAndVersion(row); + for (Column c : columns) { + int idx = c.getColumnId(); + r.setValue(idx, row.getValue(idx)); + } + return r; + } + + /** + * Read the given page. 
+ * + * @param id the page id + * @return the page + */ + PageBtree getPage(int id) { + Page p = store.getPage(id); + if (p == null) { + PageBtreeLeaf empty = PageBtreeLeaf.create(this, id, PageBtree.ROOT); + // could have been created before, but never committed + store.logUndo(empty, null); + store.update(empty); + return empty; + } else if (!(p instanceof PageBtree)) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, "" + p); + } + return (PageBtree) p; + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findNext(Session session, SearchRow first, SearchRow last) { + return find(session, first, true, last); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return find(session, first, false, last); + } + + private Cursor find(Session session, SearchRow first, boolean bigger, + SearchRow last) { + if (SysProperties.CHECK && store == null) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + PageBtree root = getPage(rootPageId); + PageBtreeCursor cursor = new PageBtreeCursor(session, this, last); + root.find(cursor, first, bigger); + return cursor; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + if (first) { + // TODO optimization: this loops through NULL elements + Cursor cursor = find(session, null, false, null); + while (cursor.next()) { + SearchRow row = cursor.getSearchRow(); + Value v = row.getValue(columnIds[0]); + if (v != ValueNull.INSTANCE) { + return cursor; + } + } + return cursor; + } + PageBtree root = getPage(rootPageId); + PageBtreeCursor cursor = new PageBtreeCursor(session, this, null); + root.last(cursor); + cursor.previous(); + // TODO optimization: this loops through NULL elements + do { + SearchRow row = cursor.getSearchRow(); + if (row == null) { + break; + } + Value v = row.getValue(columnIds[0]); + if (v != ValueNull.INSTANCE) { + return cursor; + } + } while (cursor.previous()); + return cursor; + } 
+ + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return 10 * getCostRangeIndex(masks, tableData.getRowCount(session), + filters, filter, sortOrder, false, allColumnsSet); + } + + @Override + public boolean needRebuild() { + return needRebuild; + } + + @Override + public void remove(Session session, Row row) { + if (trace.isDebugEnabled()) { + trace.debug("{0} remove {1}", getName(), row); + } + // TODO invalidate row count + // setChanged(session); + if (rowCount == 1) { + removeAllRows(); + } else { + try { + PageBtree root = getPage(rootPageId); + root.remove(row); + invalidateRowCount(); + rowCount--; + } finally { + store.incrementChangeCount(); + } + } + } + + @Override + public void remove(Session session) { + if (trace.isDebugEnabled()) { + trace.debug("remove"); + } + removeAllRows(); + store.free(rootPageId); + store.removeMeta(this, session); + } + + @Override + public void truncate(Session session) { + if (trace.isDebugEnabled()) { + trace.debug("truncate"); + } + removeAllRows(); + if (tableData.getContainsLargeObject()) { + database.getLobStorage().removeAllForTable(table.getId()); + } + tableData.setRowCount(0); + } + + private void removeAllRows() { + try { + PageBtree root = getPage(rootPageId); + root.freeRecursive(); + root = PageBtreeLeaf.create(this, rootPageId, PageBtree.ROOT); + store.removeFromCache(rootPageId); + store.update(root); + rowCount = 0; + } finally { + store.incrementChangeCount(); + } + } + + @Override + public void checkRename() { + // ok + } + + /** + * Get a row from the main index. 
+ * + * @param session the session + * @param key the row key + * @return the row + */ + @Override + public Row getRow(Session session, long key) { + return tableData.getRow(session, key); + } + + PageStore getPageStore() { + return store; + } + + @Override + public long getRowCountApproximation() { + return tableData.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return tableData.getDiskSpaceUsed(); + } + + @Override + public long getRowCount(Session session) { + return rowCount; + } + + @Override + public void close(Session session) { + if (trace.isDebugEnabled()) { + trace.debug("close"); + } + // can not close the index because it might get used afterwards, + // for example after running recovery + try { + writeRowCount(); + } finally { + store.incrementChangeCount(); + } + } + + /** + * Read a row from the data page at the given offset. + * + * @param data the data + * @param offset the offset + * @param onlyPosition whether only the position of the row is stored + * @param needData whether the row data is required + * @return the row + */ + SearchRow readRow(Data data, int offset, boolean onlyPosition, + boolean needData) { + synchronized (data) { + data.setPos(offset); + long key = data.readVarLong(); + if (onlyPosition) { + if (needData) { + return tableData.getRow(null, key); + } + SearchRow row = table.getTemplateSimpleRow(true); + row.setKey(key); + return row; + } + SearchRow row = table.getTemplateSimpleRow(columns.length == 1); + row.setKey(key); + for (Column col : columns) { + int idx = col.getColumnId(); + row.setValue(idx, data.readValue()); + } + return row; + } + } + + /** + * Get the complete row from the data index. + * + * @param key the key + * @return the row + */ + SearchRow readRow(long key) { + return tableData.getRow(null, key); + } + + /** + * Write a row to the data page at the given offset. 
+ * + * @param data the data + * @param offset the offset + * @param onlyPosition whether only the position of the row is stored + * @param row the row to write + */ + void writeRow(Data data, int offset, SearchRow row, boolean onlyPosition) { + data.setPos(offset); + data.writeVarLong(row.getKey()); + if (!onlyPosition) { + for (Column col : columns) { + int idx = col.getColumnId(); + data.writeValue(row.getValue(idx)); + } + } + } + + /** + * Get the size of a row (only the part that is stored in the index). + * + * @param dummy a dummy data page to calculate the size + * @param row the row + * @param onlyPosition whether only the position of the row is stored + * @return the number of bytes + */ + int getRowSize(Data dummy, SearchRow row, boolean onlyPosition) { + int rowsize = Data.getVarLongLen(row.getKey()); + if (!onlyPosition) { + for (Column col : columns) { + Value v = row.getValue(col.getColumnId()); + rowsize += dummy.getValueLen(v); + } + } + return rowsize; + } + + @Override + public boolean canFindNext() { + return true; + } + + /** + * The root page has changed. + * + * @param session the session + * @param newPos the new position + */ + void setRootPageId(Session session, int newPos) { + store.removeMeta(this, session); + this.rootPageId = newPos; + store.addMeta(this, session); + store.addIndex(this); + } + + private void invalidateRowCount() { + PageBtree root = getPage(rootPageId); + root.setRowCountStored(PageData.UNKNOWN_ROWCOUNT); + } + + @Override + public void writeRowCount() { + if (SysProperties.MODIFY_ON_WRITE && rootPageId == 0) { + // currently creating the index + return; + } + PageBtree root = getPage(rootPageId); + root.setRowCountStored(MathUtils.convertLongToInt(rowCount)); + } + + /** + * Check whether the given row contains data. 
+ * + * @param row the row + * @return true if it contains data + */ + boolean hasData(SearchRow row) { + return row.getValue(columns[0].getColumnId()) != null; + } + + int getMemoryPerPage() { + return memoryPerPage; + } + + /** + * The memory usage of a page was changed. The new value is used to adopt + * the average estimated memory size of a page. + * + * @param x the new memory size + */ + void memoryChange(int x) { + if (memoryCount < Constants.MEMORY_FACTOR) { + memoryPerPage += (x - memoryPerPage) / ++memoryCount; + } else { + memoryPerPage += (x > memoryPerPage ? 1 : -1) + + ((x - memoryPerPage) / Constants.MEMORY_FACTOR); + } + } + + /** + * Check if calculating the memory is required. + * + * @return true if it is + */ + static boolean isMemoryChangeRequired() { + if (memoryChangeRequired-- <= 0) { + memoryChangeRequired = 10; + return true; + } + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageBtreeLeaf.java b/modules/h2/src/main/java/org/h2/index/PageBtreeLeaf.java new file mode 100644 index 0000000000000..2344c74f86ac7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageBtreeLeaf.java @@ -0,0 +1,403 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.Arrays; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.result.SearchRow; +import org.h2.store.Data; +import org.h2.store.Page; +import org.h2.store.PageStore; + +/** + * A b-tree leaf page that contains index data. Format: + *
      + *
+ * <ul><li>page type: byte
+ * </li><li>checksum: short
+ * </li><li>parent page id (0 for root): int
+ * </li><li>index id: varInt
+ * </li><li>entry count: short
+ * </li><li>list of offsets: short
+ * </li><li>data (key: varLong, value,...)
+ * </li></ul>
    + */ +public class PageBtreeLeaf extends PageBtree { + + private static final int OFFSET_LENGTH = 2; + + private final boolean optimizeUpdate; + private boolean writtenData; + + private PageBtreeLeaf(PageBtreeIndex index, int pageId, Data data) { + super(index, pageId, data); + this.optimizeUpdate = index.getDatabase().getSettings().optimizeUpdate; + } + + /** + * Read a b-tree leaf page. + * + * @param index the index + * @param data the data + * @param pageId the page id + * @return the page + */ + public static Page read(PageBtreeIndex index, Data data, int pageId) { + PageBtreeLeaf p = new PageBtreeLeaf(index, pageId, data); + p.read(); + return p; + } + + /** + * Create a new page. + * + * @param index the index + * @param pageId the page id + * @param parentPageId the parent + * @return the page + */ + static PageBtreeLeaf create(PageBtreeIndex index, int pageId, + int parentPageId) { + PageBtreeLeaf p = new PageBtreeLeaf(index, pageId, index.getPageStore() + .createData()); + index.getPageStore().logUndo(p, null); + p.rows = SearchRow.EMPTY_ARRAY; + p.parentPageId = parentPageId; + p.writeHead(); + p.start = p.data.length(); + return p; + } + + private void read() { + data.reset(); + int type = data.readByte(); + data.readShortInt(); + this.parentPageId = data.readInt(); + onlyPosition = (type & Page.FLAG_LAST) == 0; + int indexId = data.readVarInt(); + if (indexId != index.getId()) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "page:" + getPos() + " expected index:" + index.getId() + + "got:" + indexId); + } + entryCount = data.readShortInt(); + offsets = new int[entryCount]; + rows = new SearchRow[entryCount]; + for (int i = 0; i < entryCount; i++) { + offsets[i] = data.readShortInt(); + } + start = data.length(); + written = true; + writtenData = true; + } + + @Override + int addRowTry(SearchRow row) { + int x = addRow(row, true); + memoryChange(); + return x; + } + + private int addRow(SearchRow row, boolean tryOnly) { + int rowLength = 
index.getRowSize(data, row, onlyPosition); + int pageSize = index.getPageStore().getPageSize(); + int last = entryCount == 0 ? pageSize : offsets[entryCount - 1]; + if (last - rowLength < start + OFFSET_LENGTH) { + if (tryOnly && entryCount > 1) { + int x = find(row, false, true, true); + if (entryCount < 5) { + // required, otherwise the index doesn't work correctly + return entryCount / 2; + } + // split near the insertion point to better fill pages + // split in half would be: + // return entryCount / 2; + int third = entryCount / 3; + return x < third ? third : x >= 2 * third ? 2 * third : x; + } + readAllRows(); + writtenData = false; + onlyPosition = true; + // change the offsets (now storing only positions) + int o = pageSize; + for (int i = 0; i < entryCount; i++) { + o -= index.getRowSize(data, getRow(i), true); + offsets[i] = o; + } + last = entryCount == 0 ? pageSize : offsets[entryCount - 1]; + rowLength = index.getRowSize(data, row, true); + if (SysProperties.CHECK && last - rowLength < start + OFFSET_LENGTH) { + throw DbException.throwInternalError(); + } + } + index.getPageStore().logUndo(this, data); + if (!optimizeUpdate) { + readAllRows(); + } + changeCount = index.getPageStore().getChangeCount(); + written = false; + int x; + if (entryCount == 0) { + x = 0; + } else { + x = find(row, false, true, true); + } + start += OFFSET_LENGTH; + int offset = (x == 0 ? 
pageSize : offsets[x - 1]) - rowLength; + if (optimizeUpdate && writtenData) { + if (entryCount > 0) { + byte[] d = data.getBytes(); + int dataStart = offsets[entryCount - 1]; + System.arraycopy(d, dataStart, d, dataStart - rowLength, + offset - dataStart + rowLength); + } + index.writeRow(data, offset, row, onlyPosition); + } + offsets = insert(offsets, entryCount, x, offset); + add(offsets, x + 1, entryCount + 1, -rowLength); + rows = insert(rows, entryCount, x, row); + entryCount++; + index.getPageStore().update(this); + return -1; + } + + private void removeRow(int at) { + if (!optimizeUpdate) { + readAllRows(); + } + index.getPageStore().logUndo(this, data); + entryCount--; + written = false; + changeCount = index.getPageStore().getChangeCount(); + if (entryCount <= 0) { + DbException.throwInternalError("" + entryCount); + } + int startNext = at > 0 ? offsets[at - 1] : index.getPageStore().getPageSize(); + int rowLength = startNext - offsets[at]; + start -= OFFSET_LENGTH; + + if (optimizeUpdate) { + if (writtenData) { + byte[] d = data.getBytes(); + int dataStart = offsets[entryCount]; + System.arraycopy(d, dataStart, d, + dataStart + rowLength, offsets[at] - dataStart); + Arrays.fill(d, dataStart, dataStart + rowLength, (byte) 0); + } + } + + offsets = remove(offsets, entryCount + 1, at); + add(offsets, at, entryCount, rowLength); + rows = remove(rows, entryCount + 1, at); + } + + int getEntryCount() { + return entryCount; + } + + @Override + PageBtree split(int splitPoint) { + int newPageId = index.getPageStore().allocatePage(); + PageBtreeLeaf p2 = PageBtreeLeaf.create(index, newPageId, parentPageId); + while (splitPoint < entryCount) { + p2.addRow(getRow(splitPoint), false); + removeRow(splitPoint); + } + memoryChange(); + p2.memoryChange(); + return p2; + } + + @Override + PageBtreeLeaf getFirstLeaf() { + return this; + } + + @Override + PageBtreeLeaf getLastLeaf() { + return this; + } + + @Override + SearchRow remove(SearchRow row) { + int at = find(row, 
false, false, true); + SearchRow delete = getRow(at); + if (index.compareRows(row, delete) != 0 || delete.getKey() != row.getKey()) { + throw DbException.get(ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1, + index.getSQL() + ": " + row); + } + index.getPageStore().logUndo(this, data); + if (entryCount == 1) { + // the page is now empty + return row; + } + removeRow(at); + memoryChange(); + index.getPageStore().update(this); + if (at == entryCount) { + // the last row changed + return getRow(at - 1); + } + // the last row didn't change + return null; + } + + @Override + void freeRecursive() { + index.getPageStore().logUndo(this, data); + index.getPageStore().free(getPos()); + } + + @Override + int getRowCount() { + return entryCount; + } + + @Override + void setRowCountStored(int rowCount) { + // ignore + } + + @Override + public void write() { + writeData(); + index.getPageStore().writePage(getPos(), data); + } + + private void writeHead() { + data.reset(); + data.writeByte((byte) (Page.TYPE_BTREE_LEAF | + (onlyPosition ? 
0 : Page.FLAG_LAST))); + data.writeShortInt(0); + data.writeInt(parentPageId); + data.writeVarInt(index.getId()); + data.writeShortInt(entryCount); + } + + private void writeData() { + if (written) { + return; + } + if (!optimizeUpdate) { + readAllRows(); + } + writeHead(); + for (int i = 0; i < entryCount; i++) { + data.writeShortInt(offsets[i]); + } + if (!writtenData || !optimizeUpdate) { + for (int i = 0; i < entryCount; i++) { + index.writeRow(data, offsets[i], rows[i], onlyPosition); + } + writtenData = true; + } + written = true; + memoryChange(); + } + + @Override + void find(PageBtreeCursor cursor, SearchRow first, boolean bigger) { + int i = find(first, bigger, false, false); + if (i > entryCount) { + if (parentPageId == PageBtree.ROOT) { + return; + } + PageBtreeNode next = (PageBtreeNode) index.getPage(parentPageId); + next.find(cursor, first, bigger); + return; + } + cursor.setCurrent(this, i); + } + + @Override + void last(PageBtreeCursor cursor) { + cursor.setCurrent(this, entryCount - 1); + } + + @Override + void remapChildren() { + // nothing to do + } + + /** + * Set the cursor to the first row of the next page. + * + * @param cursor the cursor + */ + void nextPage(PageBtreeCursor cursor) { + if (parentPageId == PageBtree.ROOT) { + cursor.setCurrent(null, 0); + return; + } + PageBtreeNode next = (PageBtreeNode) index.getPage(parentPageId); + next.nextPage(cursor, getPos()); + } + + /** + * Set the cursor to the last row of the previous page. 
+ * + * @param cursor the cursor + */ + void previousPage(PageBtreeCursor cursor) { + if (parentPageId == PageBtree.ROOT) { + cursor.setCurrent(null, 0); + return; + } + PageBtreeNode next = (PageBtreeNode) index.getPage(parentPageId); + next.previousPage(cursor, getPos()); + } + + @Override + public String toString() { + return "page[" + getPos() + "] b-tree leaf table:" + + index.getId() + " entries:" + entryCount; + } + + @Override + public void moveTo(Session session, int newPos) { + PageStore store = index.getPageStore(); + readAllRows(); + PageBtreeLeaf p2 = PageBtreeLeaf.create(index, newPos, parentPageId); + store.logUndo(this, data); + store.logUndo(p2, null); + p2.rows = rows; + p2.entryCount = entryCount; + p2.offsets = offsets; + p2.onlyPosition = onlyPosition; + p2.parentPageId = parentPageId; + p2.start = start; + store.update(p2); + if (parentPageId == ROOT) { + index.setRootPageId(session, newPos); + } else { + PageBtreeNode p = (PageBtreeNode) store.getPage(parentPageId); + p.moveChild(getPos(), newPos); + } + store.free(getPos()); + } + + @Override + protected void memoryChange() { + if (!PageBtreeIndex.isMemoryChangeRequired()) { + return; + } + int memory = Constants.MEMORY_PAGE_BTREE + index.getPageStore().getPageSize(); + if (rows != null) { + memory += getEntryCount() * (4 + Constants.MEMORY_POINTER); + for (int i = 0; i < entryCount; i++) { + SearchRow r = rows[i]; + if (r != null) { + memory += r.getMemory(); + } + } + } + index.memoryChange(memory >> 2); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageBtreeNode.java b/modules/h2/src/main/java/org/h2/index/PageBtreeNode.java new file mode 100644 index 0000000000000..ee519b905ecb2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageBtreeNode.java @@ -0,0 +1,610 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.result.SearchRow; +import org.h2.store.Data; +import org.h2.store.Page; +import org.h2.store.PageStore; +import org.h2.util.Utils; + +/** + * A b-tree node page that contains index data. Format: + *
      + *
+ * <ul><li>page type: byte
+ * </li><li>checksum: short
+ * </li><li>parent page id (0 for root): int
+ * </li><li>index id: varInt
+ * </li><li>count of all children (-1 if not known): int
+ * </li><li>entry count: short
+ * </li><li>rightmost child page id: int
+ * </li><li>entries (child page id: int, offset: short)
+ * </li></ul>
    + * The row contains the largest key of the respective child, + * meaning row[0] contains the largest key of child[0]. + */ +public class PageBtreeNode extends PageBtree { + + private static final int CHILD_OFFSET_PAIR_LENGTH = 6; + private static final int MAX_KEY_LENGTH = 10; + + private final boolean pageStoreInternalCount; + + /** + * The page ids of the children. + */ + private int[] childPageIds; + + private int rowCountStored = UNKNOWN_ROWCOUNT; + + private int rowCount = UNKNOWN_ROWCOUNT; + + private PageBtreeNode(PageBtreeIndex index, int pageId, Data data) { + super(index, pageId, data); + this.pageStoreInternalCount = index.getDatabase(). + getSettings().pageStoreInternalCount; + } + + /** + * Read a b-tree node page. + * + * @param index the index + * @param data the data + * @param pageId the page id + * @return the page + */ + public static Page read(PageBtreeIndex index, Data data, int pageId) { + PageBtreeNode p = new PageBtreeNode(index, pageId, data); + p.read(); + return p; + } + + /** + * Create a new b-tree node page. 
+ * + * @param index the index + * @param pageId the page id + * @param parentPageId the parent page id + * @return the page + */ + static PageBtreeNode create(PageBtreeIndex index, int pageId, + int parentPageId) { + PageBtreeNode p = new PageBtreeNode(index, pageId, index.getPageStore() + .createData()); + index.getPageStore().logUndo(p, null); + p.parentPageId = parentPageId; + p.writeHead(); + // 4 bytes for the rightmost child page id + p.start = p.data.length() + 4; + p.rows = SearchRow.EMPTY_ARRAY; + if (p.pageStoreInternalCount) { + p.rowCount = 0; + } + return p; + } + + private void read() { + data.reset(); + int type = data.readByte(); + data.readShortInt(); + this.parentPageId = data.readInt(); + onlyPosition = (type & Page.FLAG_LAST) == 0; + int indexId = data.readVarInt(); + if (indexId != index.getId()) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "page:" + getPos() + " expected index:" + index.getId() + + "got:" + indexId); + } + rowCount = rowCountStored = data.readInt(); + entryCount = data.readShortInt(); + childPageIds = new int[entryCount + 1]; + childPageIds[entryCount] = data.readInt(); + rows = entryCount == 0 ? SearchRow.EMPTY_ARRAY : new SearchRow[entryCount]; + offsets = Utils.newIntArray(entryCount); + for (int i = 0; i < entryCount; i++) { + childPageIds[i] = data.readInt(); + offsets[i] = data.readShortInt(); + } + check(); + start = data.length(); + written = true; + } + + /** + * Add a row. If it is possible this method returns -1, otherwise + * the split point. It is always possible to add two rows. 
+ * + * @param row the row to add + * @return the split point of this page, or -1 if no split is required + */ + private int addChildTry(SearchRow row) { + if (entryCount < 4) { + return -1; + } + int startData; + if (onlyPosition) { + // if we only store the position, we may at most store as many + // entries as there is space for keys, because the current data area + // might get larger when _removing_ a child (if the new key needs + // more space) - and removing a child can't split this page + startData = entryCount + 1 * MAX_KEY_LENGTH; + } else { + int rowLength = index.getRowSize(data, row, onlyPosition); + int pageSize = index.getPageStore().getPageSize(); + int last = entryCount == 0 ? pageSize : offsets[entryCount - 1]; + startData = last - rowLength; + } + if (startData < start + CHILD_OFFSET_PAIR_LENGTH) { + return entryCount / 2; + } + return -1; + } + + /** + * Add a child at the given position. + * + * @param x the position + * @param childPageId the child + * @param row the row smaller than the first row of the child and its + * children + */ + private void addChild(int x, int childPageId, SearchRow row) { + int rowLength = index.getRowSize(data, row, onlyPosition); + int pageSize = index.getPageStore().getPageSize(); + int last = entryCount == 0 ? pageSize : offsets[entryCount - 1]; + if (last - rowLength < start + CHILD_OFFSET_PAIR_LENGTH) { + readAllRows(); + onlyPosition = true; + // change the offsets (now storing only positions) + int o = pageSize; + for (int i = 0; i < entryCount; i++) { + o -= index.getRowSize(data, getRow(i), true); + offsets[i] = o; + } + last = entryCount == 0 ? pageSize : offsets[entryCount - 1]; + rowLength = index.getRowSize(data, row, true); + if (SysProperties.CHECK && last - rowLength < + start + CHILD_OFFSET_PAIR_LENGTH) { + throw DbException.throwInternalError(); + } + } + int offset = last - rowLength; + if (entryCount > 0) { + if (x < entryCount) { + offset = (x == 0 ?
pageSize : offsets[x - 1]) - rowLength; + } + } + rows = insert(rows, entryCount, x, row); + offsets = insert(offsets, entryCount, x, offset); + add(offsets, x + 1, entryCount + 1, -rowLength); + childPageIds = insert(childPageIds, entryCount + 1, x + 1, childPageId); + start += CHILD_OFFSET_PAIR_LENGTH; + if (pageStoreInternalCount) { + if (rowCount != UNKNOWN_ROWCOUNT) { + rowCount += offset; + } + } + entryCount++; + written = false; + changeCount = index.getPageStore().getChangeCount(); + } + + @Override + int addRowTry(SearchRow row) { + while (true) { + int x = find(row, false, true, true); + PageBtree page = index.getPage(childPageIds[x]); + int splitPoint = page.addRowTry(row); + if (splitPoint == -1) { + break; + } + SearchRow pivot = page.getRow(splitPoint - 1); + index.getPageStore().logUndo(this, data); + int splitPoint2 = addChildTry(pivot); + if (splitPoint2 != -1) { + return splitPoint2; + } + PageBtree page2 = page.split(splitPoint); + readAllRows(); + addChild(x, page2.getPos(), pivot); + index.getPageStore().update(page); + index.getPageStore().update(page2); + index.getPageStore().update(this); + } + updateRowCount(1); + written = false; + changeCount = index.getPageStore().getChangeCount(); + return -1; + } + + private void updateRowCount(int offset) { + if (rowCount != UNKNOWN_ROWCOUNT) { + rowCount += offset; + } + if (rowCountStored != UNKNOWN_ROWCOUNT) { + rowCountStored = UNKNOWN_ROWCOUNT; + index.getPageStore().logUndo(this, data); + if (written) { + writeHead(); + } + index.getPageStore().update(this); + } + } + + @Override + PageBtree split(int splitPoint) { + int newPageId = index.getPageStore().allocatePage(); + PageBtreeNode p2 = PageBtreeNode.create(index, newPageId, parentPageId); + index.getPageStore().logUndo(this, data); + if (onlyPosition) { + // TODO optimize: maybe not required + p2.onlyPosition = true; + } + int firstChild = childPageIds[splitPoint]; + readAllRows(); + while (splitPoint < entryCount) { + 
p2.addChild(p2.entryCount, childPageIds[splitPoint + 1], getRow(splitPoint)); + removeChild(splitPoint); + } + int lastChild = childPageIds[splitPoint - 1]; + removeChild(splitPoint - 1); + childPageIds[splitPoint - 1] = lastChild; + if (p2.childPageIds == null) { + p2.childPageIds = new int[1]; + } + p2.childPageIds[0] = firstChild; + p2.remapChildren(); + return p2; + } + + @Override + protected void remapChildren() { + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + PageBtree p = index.getPage(child); + p.setParentPageId(getPos()); + index.getPageStore().update(p); + } + } + + /** + * Initialize the page. + * + * @param page1 the first child page + * @param pivot the pivot key + * @param page2 the last child page + */ + void init(PageBtree page1, SearchRow pivot, PageBtree page2) { + entryCount = 0; + childPageIds = new int[] { page1.getPos() }; + rows = SearchRow.EMPTY_ARRAY; + offsets = Utils.EMPTY_INT_ARRAY; + addChild(0, page2.getPos(), pivot); + if (pageStoreInternalCount) { + rowCount = page1.getRowCount() + page2.getRowCount(); + } + check(); + } + + @Override + void find(PageBtreeCursor cursor, SearchRow first, boolean bigger) { + int i = find(first, bigger, false, false); + if (i > entryCount) { + if (parentPageId == PageBtree.ROOT) { + return; + } + PageBtreeNode next = (PageBtreeNode) index.getPage(parentPageId); + next.find(cursor, first, bigger); + return; + } + PageBtree page = index.getPage(childPageIds[i]); + page.find(cursor, first, bigger); + } + + @Override + void last(PageBtreeCursor cursor) { + int child = childPageIds[entryCount]; + index.getPage(child).last(cursor); + } + + @Override + PageBtreeLeaf getFirstLeaf() { + int child = childPageIds[0]; + return index.getPage(child).getFirstLeaf(); + } + + @Override + PageBtreeLeaf getLastLeaf() { + int child = childPageIds[entryCount]; + return index.getPage(child).getLastLeaf(); + } + + @Override + SearchRow remove(SearchRow row) { + int at = find(row, false, false, 
true); + // merge is not implemented to allow concurrent usage + // TODO maybe implement merge + PageBtree page = index.getPage(childPageIds[at]); + SearchRow last = page.remove(row); + index.getPageStore().logUndo(this, data); + updateRowCount(-1); + written = false; + changeCount = index.getPageStore().getChangeCount(); + if (last == null) { + // the last row didn't change - nothing to do + return null; + } else if (last == row) { + // this child is now empty + index.getPageStore().free(page.getPos()); + if (entryCount < 1) { + // no more children - this page is empty as well + return row; + } + if (at == entryCount) { + // removing the last child + last = getRow(at - 1); + } else { + last = null; + } + removeChild(at); + index.getPageStore().update(this); + return last; + } + // the last row is in the last child + if (at == entryCount) { + return last; + } + int child = childPageIds[at]; + removeChild(at); + // TODO this can mean only the position is now stored + // should split at the next possible moment + addChild(at, child, last); + // remove and add swapped two children, fix that + int temp = childPageIds[at]; + childPageIds[at] = childPageIds[at + 1]; + childPageIds[at + 1] = temp; + index.getPageStore().update(this); + return null; + } + + @Override + int getRowCount() { + if (rowCount == UNKNOWN_ROWCOUNT) { + int count = 0; + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + PageBtree page = index.getPage(child); + count += page.getRowCount(); + index.getDatabase().setProgress( + DatabaseEventListener.STATE_SCAN_FILE, + index.getName(), count, Integer.MAX_VALUE); + } + rowCount = count; + } + return rowCount; + } + + @Override + void setRowCountStored(int rowCount) { + if (rowCount < 0 && pageStoreInternalCount) { + return; + } + this.rowCount = rowCount; + if (rowCountStored != rowCount) { + rowCountStored = rowCount; + index.getPageStore().logUndo(this, data); + if (written) { + changeCount = 
index.getPageStore().getChangeCount(); + writeHead(); + } + index.getPageStore().update(this); + } + } + + private void check() { + if (SysProperties.CHECK) { + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + if (child == 0) { + DbException.throwInternalError(); + } + } + } + } + + @Override + public void write() { + check(); + writeData(); + index.getPageStore().writePage(getPos(), data); + } + + private void writeHead() { + data.reset(); + data.writeByte((byte) (Page.TYPE_BTREE_NODE | + (onlyPosition ? 0 : Page.FLAG_LAST))); + data.writeShortInt(0); + data.writeInt(parentPageId); + data.writeVarInt(index.getId()); + data.writeInt(rowCountStored); + data.writeShortInt(entryCount); + } + + private void writeData() { + if (written) { + return; + } + readAllRows(); + writeHead(); + data.writeInt(childPageIds[entryCount]); + for (int i = 0; i < entryCount; i++) { + data.writeInt(childPageIds[i]); + data.writeShortInt(offsets[i]); + } + for (int i = 0; i < entryCount; i++) { + index.writeRow(data, offsets[i], rows[i], onlyPosition); + } + written = true; + } + + @Override + void freeRecursive() { + index.getPageStore().logUndo(this, data); + index.getPageStore().free(getPos()); + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + index.getPage(child).freeRecursive(); + } + } + + private void removeChild(int i) { + readAllRows(); + entryCount--; + if (pageStoreInternalCount) { + updateRowCount(-index.getPage(childPageIds[i]).getRowCount()); + } + written = false; + changeCount = index.getPageStore().getChangeCount(); + if (entryCount < 0) { + DbException.throwInternalError("" + entryCount); + } + if (entryCount > i) { + int startNext = i > 0 ? 
offsets[i - 1] : index.getPageStore().getPageSize(); + int rowLength = startNext - offsets[i]; + add(offsets, i, entryCount + 1, rowLength); + } + rows = remove(rows, entryCount + 1, i); + offsets = remove(offsets, entryCount + 1, i); + childPageIds = remove(childPageIds, entryCount + 2, i); + start -= CHILD_OFFSET_PAIR_LENGTH; + } + + /** + * Set the cursor to the first row of the next page. + * + * @param cursor the cursor + * @param pageId id of the next page + */ + void nextPage(PageBtreeCursor cursor, int pageId) { + int i; + // TODO maybe keep the index in the child page (transiently) + for (i = 0; i < entryCount + 1; i++) { + if (childPageIds[i] == pageId) { + i++; + break; + } + } + if (i > entryCount) { + if (parentPageId == PageBtree.ROOT) { + cursor.setCurrent(null, 0); + return; + } + PageBtreeNode next = (PageBtreeNode) index.getPage(parentPageId); + next.nextPage(cursor, getPos()); + return; + } + PageBtree page = index.getPage(childPageIds[i]); + PageBtreeLeaf leaf = page.getFirstLeaf(); + cursor.setCurrent(leaf, 0); + } + + /** + * Set the cursor to the last row of the previous page. 
+ * + * @param cursor the cursor + * @param pageId id of the previous page + */ + void previousPage(PageBtreeCursor cursor, int pageId) { + int i; + // TODO maybe keep the index in the child page (transiently) + for (i = entryCount; i >= 0; i--) { + if (childPageIds[i] == pageId) { + i--; + break; + } + } + if (i < 0) { + if (parentPageId == PageBtree.ROOT) { + cursor.setCurrent(null, 0); + return; + } + PageBtreeNode previous = (PageBtreeNode) index.getPage(parentPageId); + previous.previousPage(cursor, getPos()); + return; + } + PageBtree page = index.getPage(childPageIds[i]); + PageBtreeLeaf leaf = page.getLastLeaf(); + cursor.setCurrent(leaf, leaf.entryCount - 1); + } + + + @Override + public String toString() { + return "page[" + getPos() + "] b-tree node table:" + + index.getId() + " entries:" + entryCount; + } + + @Override + public void moveTo(Session session, int newPos) { + PageStore store = index.getPageStore(); + store.logUndo(this, data); + PageBtreeNode p2 = PageBtreeNode.create(index, newPos, parentPageId); + readAllRows(); + p2.rowCountStored = rowCountStored; + p2.rowCount = rowCount; + p2.childPageIds = childPageIds; + p2.rows = rows; + p2.entryCount = entryCount; + p2.offsets = offsets; + p2.onlyPosition = onlyPosition; + p2.parentPageId = parentPageId; + p2.start = start; + store.update(p2); + if (parentPageId == ROOT) { + index.setRootPageId(session, newPos); + } else { + Page p = store.getPage(parentPageId); + if (!(p instanceof PageBtreeNode)) { + throw DbException.throwInternalError(); + } + PageBtreeNode n = (PageBtreeNode) p; + n.moveChild(getPos(), newPos); + } + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + PageBtree p = index.getPage(child); + p.setParentPageId(newPos); + store.update(p); + } + store.free(getPos()); + } + + /** + * One of the children has moved to a new page. 
+ * + * @param oldPos the old position + * @param newPos the new position + */ + void moveChild(int oldPos, int newPos) { + for (int i = 0; i < entryCount + 1; i++) { + if (childPageIds[i] == oldPos) { + index.getPageStore().logUndo(this, data); + written = false; + changeCount = index.getPageStore().getChangeCount(); + childPageIds[i] = newPos; + index.getPageStore().update(this); + return; + } + } + throw DbException.throwInternalError(oldPos + " " + newPos); + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/index/PageData.java b/modules/h2/src/main/java/org/h2/index/PageData.java new file mode 100644 index 0000000000000..794a80411aaec --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageData.java @@ -0,0 +1,250 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.engine.Session; +import org.h2.result.Row; +import org.h2.store.Data; +import org.h2.store.Page; + +/** + * A page that contains data rows. + */ +abstract class PageData extends Page { + + /** + * The position of the parent page id. + */ + static final int START_PARENT = 3; + + /** + * This is a root page. + */ + static final int ROOT = 0; + + /** + * Indicator that the row count is not known. + */ + static final int UNKNOWN_ROWCOUNT = -1; + + /** + * The index. + */ + protected final PageDataIndex index; + + /** + * The page number of the parent. + */ + protected int parentPageId; + + /** + * The data page. + */ + protected final Data data; + + /** + * The number of entries. + */ + protected int entryCount; + + /** + * The row keys. + */ + protected long[] keys; + + /** + * Whether the data page is up-to-date. + */ + protected boolean written; + + /** + * The estimated heap memory used by this object, in number of double words + * (4 bytes each). 
+ */ + private final int memoryEstimated; + + PageData(PageDataIndex index, int pageId, Data data) { + this.index = index; + this.data = data; + setPos(pageId); + memoryEstimated = index.getMemoryPerPage(); + } + + /** + * Get the real row count. If required, this will read all child pages. + * + * @return the row count + */ + abstract int getRowCount(); + + /** + * Set the stored row count. This will write the page. + * + * @param rowCount the stored row count + */ + abstract void setRowCountStored(int rowCount); + + /** + * Get the used disk space for this index. + * + * @return the estimated number of bytes + */ + abstract long getDiskSpaceUsed(); + + /** + * Find an entry by key. + * + * @param key the key (may not exist) + * @return the matching or next index + */ + int find(long key) { + int l = 0, r = entryCount; + while (l < r) { + int i = (l + r) >>> 1; + long k = keys[i]; + if (k == key) { + return i; + } else if (k > key) { + r = i; + } else { + l = i + 1; + } + } + return l; + } + + /** + * Add a row if possible. If it is possible this method returns -1, + * otherwise the split point. It is always possible to add one row. + * + * @param row the row to add + * @return the split point of this page, or -1 if no split is required + */ + abstract int addRowTry(Row row); + + /** + * Get a cursor. + * + * @param session the session + * @param minKey the smallest key + * @param maxKey the largest key + * @param multiVersion if the delta should be used + * @return the cursor + */ + abstract Cursor find(Session session, long minKey, long maxKey, + boolean multiVersion); + + /** + * Get the key at this position. + * + * @param at the index + * @return the key + */ + long getKey(int at) { + return keys[at]; + } + + /** + * Split the index page at the given point. + * + * @param splitPoint the index where to split + * @return the new page that contains about half the entries + */ + abstract PageData split(int splitPoint); + + /** + * Change the page id.
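PageData.find(long) above is a lower-bound binary search: it returns the index of a matching key, or the insertion point when the key is absent. A minimal standalone sketch, with an invented class name (not part of this patch):

```java
// Illustrative sketch of the lower-bound binary search in PageData.find(long);
// the class name is invented for this example.
public class LowerBoundSearch {

    /** Returns the index of the matching key, or the insertion point if absent. */
    static int find(long[] keys, int entryCount, long key) {
        int l = 0, r = entryCount;
        while (l < r) {
            int i = (l + r) >>> 1; // unsigned shift: no overflow for large l + r
            long k = keys[i];
            if (k == key) {
                return i;
            } else if (k > key) {
                r = i;      // continue in the left half
            } else {
                l = i + 1;  // continue in the right half
            }
        }
        return l; // first index whose key is greater than the search key
    }
}
```

The unsigned shift `>>> 1` instead of `/ 2` keeps the midpoint correct even if `l + r` overflows into the sign bit.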
+ * + * @param id the new page id + */ + void setPageId(int id) { + int old = getPos(); + index.getPageStore().removeFromCache(getPos()); + setPos(id); + index.getPageStore().logUndo(this, null); + remapChildren(old); + } + + /** + * Get the last key of a page. + * + * @return the last key + */ + abstract long getLastKey(); + + /** + * Get the first child leaf page of a page. + * + * @return the page + */ + abstract PageDataLeaf getFirstLeaf(); + + /** + * Change the parent page id. + * + * @param id the new parent page id + */ + void setParentPageId(int id) { + index.getPageStore().logUndo(this, data); + parentPageId = id; + if (written) { + changeCount = index.getPageStore().getChangeCount(); + data.setInt(START_PARENT, parentPageId); + } + } + + /** + * Update the parent id of all children. + * + * @param old the previous position + */ + abstract void remapChildren(int old); + + /** + * Remove a row. + * + * @param key the key of the row to remove + * @return true if this page is now empty + */ + abstract boolean remove(long key); + + /** + * Free this page and all child pages. + */ + abstract void freeRecursive(); + + /** + * Get the row for the given key. + * + * @param key the key + * @return the row + */ + abstract Row getRowWithKey(long key); + + /** + * Get the estimated heap memory size. 
+ * + * @return number of double words (4 bytes each) + */ + @Override + public int getMemory() { + // need to always return the same value for the same object (otherwise + // the cache size would change after adding and then removing the same + // page from the cache) but index.getMemoryPerPage() can adapt according + // to how much memory a row needs on average + return memoryEstimated; + } + + int getParentPageId() { + return parentPageId; + } + + @Override + public boolean canRemove() { + return changeCount < index.getPageStore().getChangeCount(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageDataCursor.java b/modules/h2/src/main/java/org/h2/index/PageDataCursor.java new file mode 100644 index 0000000000000..fe68c1b81398e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageDataCursor.java @@ -0,0 +1,110 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.Iterator; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * The cursor implementation for the page scan index.
+ */ +class PageDataCursor implements Cursor { + + private PageDataLeaf current; + private int idx; + private final long maxKey; + private Row row; + private final boolean multiVersion; + private final Session session; + private Iterator<Row> delta; + + PageDataCursor(Session session, PageDataLeaf current, int idx, long maxKey, + boolean multiVersion) { + this.current = current; + this.idx = idx; + this.maxKey = maxKey; + this.multiVersion = multiVersion; + this.session = session; + if (multiVersion) { + delta = current.index.getDelta(); + } + } + + @Override + public Row get() { + return row; + } + + @Override + public SearchRow getSearchRow() { + return get(); + } + + @Override + public boolean next() { + if (!multiVersion) { + nextRow(); + return checkMax(); + } + while (true) { + if (delta != null) { + if (!delta.hasNext()) { + delta = null; + row = null; + continue; + } + row = delta.next(); + if (!row.isDeleted() || row.getSessionId() == session.getId()) { + continue; + } + } else { + nextRow(); + if (row != null && row.getSessionId() != 0 && + row.getSessionId() != session.getId()) { + continue; + } + } + break; + } + return checkMax(); + } + + private boolean checkMax() { + if (row != null) { + if (maxKey != Long.MAX_VALUE) { + long x = current.index.getKey(row, Long.MAX_VALUE, Long.MAX_VALUE); + if (x > maxKey) { + row = null; + return false; + } + } + return true; + } + return false; + } + + private void nextRow() { + if (idx >= current.getEntryCount()) { + current = current.getNextPage(); + idx = 0; + if (current == null) { + row = null; + return; + } + } + row = current.getRowAt(idx); + idx++; + } + + @Override + public boolean previous() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageDataIndex.java b/modules/h2/src/main/java/org/h2/index/PageDataIndex.java new file mode 100644 index 0000000000000..43942f2cb6d7a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageDataIndex.java
@@ -0,0 +1,590 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.engine.UndoLogRecord; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.store.Page; +import org.h2.store.PageStore; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.TableFilter; +import org.h2.util.MathUtils; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * The scan index allows accessing a row by key. It can be used to iterate over + * all rows of a table. Each regular table has one such object, even if no + * primary key or indexes are defined. + */ +public class PageDataIndex extends PageIndex { + + private final PageStore store; + private final RegularTable tableData; + private long lastKey; + private long rowCount; + private HashSet<Row> delta; + private int rowCountDiff; + private final HashMap<Integer, Integer> sessionRowCount; + private int mainIndexColumn = -1; + private DbException fastDuplicateKeyException; + + /** + * The estimated heap memory per page, in number of double words (4 bytes + * each).
+ */ + private int memoryPerPage; + private int memoryCount; + + private final boolean multiVersion; + + public PageDataIndex(RegularTable table, int id, IndexColumn[] columns, + IndexType indexType, boolean create, Session session) { + initBaseIndex(table, id, table.getName() + "_DATA", columns, indexType); + this.multiVersion = database.isMultiVersion(); + + // trace = database.getTrace(Trace.PAGE_STORE + "_di"); + // trace.setLevel(TraceSystem.DEBUG); + if (multiVersion) { + sessionRowCount = new HashMap<>(); + isMultiVersion = true; + } else { + sessionRowCount = null; + } + tableData = table; + this.store = database.getPageStore(); + store.addIndex(this); + if (!database.isPersistent()) { + throw DbException.throwInternalError(table.getName()); + } + if (create) { + rootPageId = store.allocatePage(); + store.addMeta(this, session); + PageDataLeaf root = PageDataLeaf.create(this, rootPageId, PageData.ROOT); + store.update(root); + } else { + rootPageId = store.getRootPageId(id); + PageData root = getPage(rootPageId, 0); + lastKey = root.getLastKey(); + rowCount = root.getRowCount(); + } + if (trace.isDebugEnabled()) { + trace.debug("{0} opened rows: {1}", this, rowCount); + } + table.setRowCount(rowCount); + memoryPerPage = (Constants.MEMORY_PAGE_DATA + store.getPageSize()) >> 2; + } + + @Override + public DbException getDuplicateKeyException(String key) { + if (fastDuplicateKeyException == null) { + fastDuplicateKeyException = super.getDuplicateKeyException(null); + } + return fastDuplicateKeyException; + } + + @Override + public void add(Session session, Row row) { + boolean retry = false; + if (mainIndexColumn != -1) { + row.setKey(row.getValue(mainIndexColumn).getLong()); + } else { + if (row.getKey() == 0) { + row.setKey((int) ++lastKey); + retry = true; + } + } + if (tableData.getContainsLargeObject()) { + for (int i = 0, len = row.getColumnCount(); i < len; i++) { + Value v = row.getValue(i); + Value v2 = v.copy(database, getId()); + if 
(v2.isLinkedToTable()) { + session.removeAtCommitStop(v2); + } + if (v != v2) { + row.setValue(i, v2); + } + } + } + // when using auto-generated values, it's possible that multiple + // tries are required (especially if there was originally a primary key) + if (trace.isDebugEnabled()) { + trace.debug("{0} add {1}", getName(), row); + } + long add = 0; + while (true) { + try { + addTry(session, row); + break; + } catch (DbException e) { + if (e != fastDuplicateKeyException) { + throw e; + } + if (!retry) { + throw getNewDuplicateKeyException(); + } + if (add == 0) { + // in the first re-try add a small random number, + // to avoid collisions after a re-start + row.setKey((long) (row.getKey() + Math.random() * 10_000)); + } else { + row.setKey(row.getKey() + add); + } + add++; + } finally { + store.incrementChangeCount(); + } + } + lastKey = Math.max(lastKey, row.getKey()); + } + + public DbException getNewDuplicateKeyException() { + String sql = "PRIMARY KEY ON " + table.getSQL(); + if (mainIndexColumn >= 0 && mainIndexColumn < indexColumns.length) { + sql += "(" + indexColumns[mainIndexColumn].getSQL() + ")"; + } + DbException e = DbException.get(ErrorCode.DUPLICATE_KEY_1, sql); + e.setSource(this); + return e; + } + + private void addTry(Session session, Row row) { + while (true) { + PageData root = getPage(rootPageId, 0); + int splitPoint = root.addRowTry(row); + if (splitPoint == -1) { + break; + } + if (trace.isDebugEnabled()) { + trace.debug("{0} split", this); + } + long pivot = splitPoint == 0 ?
row.getKey() : root.getKey(splitPoint - 1); + PageData page1 = root; + PageData page2 = root.split(splitPoint); + int id = store.allocatePage(); + page1.setPageId(id); + page1.setParentPageId(rootPageId); + page2.setParentPageId(rootPageId); + PageDataNode newRoot = PageDataNode.create(this, rootPageId, PageData.ROOT); + newRoot.init(page1, pivot, page2); + store.update(page1); + store.update(page2); + store.update(newRoot); + root = newRoot; + } + row.setDeleted(false); + if (multiVersion) { + if (delta == null) { + delta = new HashSet<>(); + } + boolean wasDeleted = delta.remove(row); + if (!wasDeleted) { + delta.add(row); + } + incrementRowCount(session.getId(), 1); + } + invalidateRowCount(); + rowCount++; + store.logAddOrRemoveRow(session, tableData.getId(), row, true); + } + + /** + * Read an overflow page. + * + * @param id the page id + * @return the page + */ + PageDataOverflow getPageOverflow(int id) { + Page p = store.getPage(id); + if (p instanceof PageDataOverflow) { + return (PageDataOverflow) p; + } + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + p == null ? "null" : p.toString()); + } + + /** + * Read the given page. + * + * @param id the page id + * @param parent the parent, or -1 if unknown + * @return the page + */ + PageData getPage(int id, int parent) { + Page pd = store.getPage(id); + if (pd == null) { + PageDataLeaf empty = PageDataLeaf.create(this, id, parent); + // could have been created before, but never committed + store.logUndo(empty, null); + store.update(empty); + return empty; + } else if (!(pd instanceof PageData)) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, "" + pd); + } + PageData p = (PageData) pd; + if (parent != -1) { + if (p.getParentPageId() != parent) { + throw DbException.throwInternalError(p + + " parent " + p.getParentPageId() + " expected " + parent); + } + } + return p; + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + /** + * Get the key from the row.
+ * + * @param row the row + * @param ifEmpty the value to use if the row is empty + * @param ifNull the value to use if the column is NULL + * @return the key + */ + long getKey(SearchRow row, long ifEmpty, long ifNull) { + if (row == null) { + return ifEmpty; + } + Value v = row.getValue(mainIndexColumn); + if (v == null) { + throw DbException.throwInternalError(row.toString()); + } else if (v == ValueNull.INSTANCE) { + return ifNull; + } + return v.getLong(); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + long from = first == null ? Long.MIN_VALUE : first.getKey(); + long to = last == null ? Long.MAX_VALUE : last.getKey(); + PageData root = getPage(rootPageId, 0); + return root.find(session, from, to, isMultiVersion); + + } + + /** + * Search for a specific row or a set of rows. + * + * @param session the session + * @param first the key of the first row + * @param last the key of the last row + * @param multiVersion if mvcc should be used + * @return the cursor + */ + Cursor find(Session session, long first, long last, boolean multiVersion) { + PageData root = getPage(rootPageId, 0); + return root.find(session, first, last, multiVersion); + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.throwInternalError(toString()); + } + + long getLastKey() { + PageData root = getPage(rootPageId, 0); + return root.getLastKey(); + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet<Column> allColumnsSet) { + return 10 * (tableData.getRowCountApproximation() + + Constants.COST_ROW_OFFSET); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public void remove(Session session, Row row) { + if (tableData.getContainsLargeObject()) { + for (int i = 0, len = row.getColumnCount(); i < len; i++) { + Value v = row.getValue(i); + if (v.isLinkedToTable()) { +
session.removeAtCommitStop(v); + } + } + } + if (trace.isDebugEnabled()) { + trace.debug("{0} remove {1}", getName(), row); + } + if (rowCount == 1) { + removeAllRows(); + } else { + try { + long key = row.getKey(); + PageData root = getPage(rootPageId, 0); + root.remove(key); + invalidateRowCount(); + rowCount--; + } finally { + store.incrementChangeCount(); + } + } + if (multiVersion) { + // if storage is null, the delete flag is not yet set + row.setDeleted(true); + if (delta == null) { + delta = new HashSet<>(); + } + boolean wasAdded = delta.remove(row); + if (!wasAdded) { + delta.add(row); + } + incrementRowCount(session.getId(), -1); + } + store.logAddOrRemoveRow(session, tableData.getId(), row, false); + } + + @Override + public void remove(Session session) { + if (trace.isDebugEnabled()) { + trace.debug("{0} remove", this); + } + removeAllRows(); + store.free(rootPageId); + store.removeMeta(this, session); + } + + @Override + public void truncate(Session session) { + if (trace.isDebugEnabled()) { + trace.debug("{0} truncate", this); + } + store.logTruncate(session, tableData.getId()); + removeAllRows(); + if (tableData.getContainsLargeObject() && tableData.isPersistData()) { + // unfortunately, the data is gone on rollback + session.commit(false); + database.getLobStorage().removeAllForTable(table.getId()); + } + if (multiVersion) { + sessionRowCount.clear(); + } + tableData.setRowCount(0); + } + + private void removeAllRows() { + try { + PageData root = getPage(rootPageId, 0); + root.freeRecursive(); + root = PageDataLeaf.create(this, rootPageId, PageData.ROOT); + store.removeFromCache(rootPageId); + store.update(root); + rowCount = 0; + lastKey = 0; + } finally { + store.incrementChangeCount(); + } + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("PAGE"); + } + + @Override + public Row getRow(Session session, long key) { + return getRowWithKey(key); + } + + /** + * Get the row with the given key. 
+ * + * @param key the key + * @return the row + */ + public Row getRowWithKey(long key) { + PageData root = getPage(rootPageId, 0); + return root.getRowWithKey(key); + } + + PageStore getPageStore() { + return store; + } + + @Override + public long getRowCountApproximation() { + return rowCount; + } + + @Override + public long getRowCount(Session session) { + if (multiVersion) { + Integer i = sessionRowCount.get(session.getId()); + long count = i == null ? 0 : i.intValue(); + count += rowCount; + count -= rowCountDiff; + return count; + } + return rowCount; + } + + @Override + public long getDiskSpaceUsed() { + PageData root = getPage(rootPageId, 0); + return root.getDiskSpaceUsed(); + } + + @Override + public String getCreateSQL() { + return null; + } + + @Override + public int getColumnIndex(Column col) { + // can not use this index - use the PageDelegateIndex instead + return -1; + } + + @Override + public boolean isFirstColumn(Column column) { + return false; + } + + @Override + public void close(Session session) { + if (trace.isDebugEnabled()) { + trace.debug("{0} close", this); + } + if (delta != null) { + delta.clear(); + } + rowCountDiff = 0; + if (sessionRowCount != null) { + sessionRowCount.clear(); + } + // can not close the index because it might get used afterwards, + // for example after running recovery + writeRowCount(); + } + + Iterator<Row> getDelta() { + if (delta == null) { + List<Row> e = Collections.emptyList(); + return e.iterator(); + } + return delta.iterator(); + } + + private void incrementRowCount(int sessionId, int count) { + if (multiVersion) { + Integer id = sessionId; + Integer c = sessionRowCount.get(id); + int current = c == null ? 0 : c.intValue(); + sessionRowCount.put(id, current + count); + rowCountDiff += count; + } + } + + @Override + public void commit(int operation, Row row) { + if (multiVersion) { + if (delta != null) { + delta.remove(row); + } + incrementRowCount(row.getSessionId(), + operation == UndoLogRecord.DELETE ?
1 : -1); + } + } + + /** + * The root page has changed. + * + * @param session the session + * @param newPos the new position + */ + void setRootPageId(Session session, int newPos) { + store.removeMeta(this, session); + this.rootPageId = newPos; + store.addMeta(this, session); + store.addIndex(this); + } + + public void setMainIndexColumn(int mainIndexColumn) { + this.mainIndexColumn = mainIndexColumn; + } + + public int getMainIndexColumn() { + return mainIndexColumn; + } + + @Override + public String toString() { + return getName(); + } + + private void invalidateRowCount() { + PageData root = getPage(rootPageId, 0); + root.setRowCountStored(PageData.UNKNOWN_ROWCOUNT); + } + + @Override + public void writeRowCount() { + if (SysProperties.MODIFY_ON_WRITE && rootPageId == 0) { + // currently creating the index + return; + } + try { + PageData root = getPage(rootPageId, 0); + root.setRowCountStored(MathUtils.convertLongToInt(rowCount)); + } finally { + store.incrementChangeCount(); + } + } + + @Override + public String getPlanSQL() { + return table.getSQL() + ".tableScan"; + } + + int getMemoryPerPage() { + return memoryPerPage; + } + + /** + * The memory usage of a page was changed. The new value is used to adopt + * the average estimated memory size of a page. + * + * @param x the new memory size + */ + void memoryChange(int x) { + if (memoryCount < Constants.MEMORY_FACTOR) { + memoryPerPage += (x - memoryPerPage) / ++memoryCount; + } else { + memoryPerPage += (x > memoryPerPage ? 1 : -1) + + ((x - memoryPerPage) / Constants.MEMORY_FACTOR); + } + } + + @Override + public boolean isRowIdIndex() { + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageDataLeaf.java b/modules/h2/src/main/java/org/h2/index/PageDataLeaf.java new file mode 100644 index 0000000000000..f473e5cee79c1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageDataLeaf.java @@ -0,0 +1,629 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.lang.ref.SoftReference; +import java.util.Arrays; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.store.Data; +import org.h2.store.Page; +import org.h2.store.PageStore; +import org.h2.value.Value; + +/** + * A leaf page that contains data of one or multiple rows. Format: + *
+ * <ul>
+ * <li>page type: byte (0)</li>
+ * <li>checksum: short (1-2)</li>
+ * <li>parent page id (0 for root): int (3-6)</li>
+ * <li>table id: varInt</li>
+ * <li>column count: varInt</li>
+ * <li>entry count: short</li>
+ * <li>with overflow: the first overflow page id: int</li>
+ * <li>list of key / offset pairs (key: varLong, offset: shortInt)</li>
+ * <li>data</li>
+ * </ul>
    + */ +public class PageDataLeaf extends PageData { + + private final boolean optimizeUpdate; + + /** + * The row offsets. + */ + private int[] offsets; + + /** + * The rows. + */ + private Row[] rows; + + /** + * For pages with overflow: the soft reference to the row + */ + private SoftReference rowRef; + + /** + * The page id of the first overflow page (0 if no overflow). + */ + private int firstOverflowPageId; + + /** + * The start of the data area. + */ + private int start; + + /** + * The size of the row in bytes for large rows. + */ + private int overflowRowSize; + + private int columnCount; + + private int memoryData; + + private boolean writtenData; + + private PageDataLeaf(PageDataIndex index, int pageId, Data data) { + super(index, pageId, data); + this.optimizeUpdate = index.getDatabase().getSettings().optimizeUpdate; + } + + /** + * Create a new page. + * + * @param index the index + * @param pageId the page id + * @param parentPageId the parent + * @return the page + */ + static PageDataLeaf create(PageDataIndex index, int pageId, int parentPageId) { + PageDataLeaf p = new PageDataLeaf(index, pageId, index.getPageStore() + .createData()); + index.getPageStore().logUndo(p, null); + p.rows = Row.EMPTY_ARRAY; + p.parentPageId = parentPageId; + p.columnCount = index.getTable().getColumns().length; + p.writeHead(); + p.start = p.data.length(); + return p; + } + + /** + * Read a data leaf page. 
+ * + * @param index the index + * @param data the data + * @param pageId the page id + * @return the page + */ + public static Page read(PageDataIndex index, Data data, int pageId) { + PageDataLeaf p = new PageDataLeaf(index, pageId, data); + p.read(); + return p; + } + + private void read() { + data.reset(); + int type = data.readByte(); + data.readShortInt(); + this.parentPageId = data.readInt(); + int tableId = data.readVarInt(); + if (tableId != index.getId()) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "page:" + getPos() + " expected table:" + index.getId() + + " got:" + tableId + " type:" + type); + } + columnCount = data.readVarInt(); + entryCount = data.readShortInt(); + offsets = new int[entryCount]; + keys = new long[entryCount]; + rows = new Row[entryCount]; + if (type == Page.TYPE_DATA_LEAF) { + if (entryCount != 1) { + DbException.throwInternalError("entries: " + entryCount); + } + firstOverflowPageId = data.readInt(); + } + for (int i = 0; i < entryCount; i++) { + keys[i] = data.readVarLong(); + offsets[i] = data.readShortInt(); + } + start = data.length(); + written = true; + writtenData = true; + } + + private int getRowLength(Row row) { + int size = 0; + for (int i = 0; i < columnCount; i++) { + size += data.getValueLen(row.getValue(i)); + } + return size; + } + + private int findInsertionPoint(long key) { + int x = find(key); + if (x < entryCount && keys[x] == key) { + throw index.getDuplicateKeyException(""+key); + } + return x; + } + + @Override + int addRowTry(Row row) { + index.getPageStore().logUndo(this, data); + int rowLength = getRowLength(row); + int pageSize = index.getPageStore().getPageSize(); + int last = entryCount == 0 ? 
pageSize : offsets[entryCount - 1]; + int keyOffsetPairLen = 2 + Data.getVarLongLen(row.getKey()); + if (entryCount > 0 && last - rowLength < start + keyOffsetPairLen) { + int x = findInsertionPoint(row.getKey()); + if (entryCount > 1) { + if (entryCount < 5) { + // required, otherwise the index doesn't work correctly + return entryCount / 2; + } + if (index.isSortedInsertMode()) { + return x < 2 ? 1 : x > entryCount - 1 ? entryCount - 1 : x; + } + // split near the insertion point to better fill pages + // split in half would be: + // return entryCount / 2; + int third = entryCount / 3; + return x < third ? third : x >= 2 * third ? 2 * third : x; + } + return x; + } + index.getPageStore().logUndo(this, data); + int x; + if (entryCount == 0) { + x = 0; + } else { + if (!optimizeUpdate) { + readAllRows(); + } + x = findInsertionPoint(row.getKey()); + } + written = false; + changeCount = index.getPageStore().getChangeCount(); + last = x == 0 ? pageSize : offsets[x - 1]; + int offset = last - rowLength; + start += keyOffsetPairLen; + offsets = insert(offsets, entryCount, x, offset); + add(offsets, x + 1, entryCount + 1, -rowLength); + keys = insert(keys, entryCount, x, row.getKey()); + rows = insert(rows, entryCount, x, row); + entryCount++; + index.getPageStore().update(this); + if (optimizeUpdate) { + if (writtenData && offset >= start) { + byte[] d = data.getBytes(); + int dataStart = offsets[entryCount - 1] + rowLength; + int dataEnd = offsets[x]; + System.arraycopy(d, dataStart, d, dataStart - rowLength, + dataEnd - dataStart + rowLength); + data.setPos(dataEnd); + for (int j = 0; j < columnCount; j++) { + data.writeValue(row.getValue(j)); + } + } + } + if (offset < start) { + writtenData = false; + if (entryCount > 1) { + DbException.throwInternalError("" + entryCount); + } + // need to write the overflow page id + start += 4; + int remaining = rowLength - (pageSize - start); + // fix offset + offset = start; + offsets[x] = offset; + int previous = getPos(); + 
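The do/while that follows chops the bytes of a large row into a chain of overflow pages: every non-last page carries `pageSize - START_MORE` payload bytes, and the final page carries whatever remains after its smaller `START_LAST` header. A standalone sketch of just that chunking arithmetic (the header sizes mirror the `START_LAST = 9` / `START_MORE = 11` constants defined in `PageDataOverflow` below; the class and method names here are illustrative, not H2's):

```java
import java.util.ArrayList;
import java.util.List;

public class OverflowChunking {
    static final int START_LAST = 9;   // header bytes of the last overflow page
    static final int START_MORE = 11;  // header bytes of a non-last overflow page

    /** Payload size carried by each overflow page in the chain, in order. */
    static List<Integer> chunkSizes(int remaining, int pageSize) {
        List<Integer> sizes = new ArrayList<>();
        do {
            int size;
            if (remaining <= pageSize - START_LAST) {
                size = remaining;             // fits on a final (FLAG_LAST) page
            } else {
                size = pageSize - START_MORE; // fill a full intermediate page
            }
            sizes.add(size);
            remaining -= size;
        } while (remaining > 0);
        return sizes;
    }

    public static void main(String[] args) {
        // 5000 overflow bytes with 2 KiB pages -> two full pages plus a remainder
        System.out.println(chunkSizes(5000, 2048)); // [2037, 2037, 926]
    }
}
```

Note that the chunk sizes always sum back to the original `remaining`, which is what lets `readInto` reassemble the row by walking the `nextPage` links.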
int dataOffset = pageSize; + int page = index.getPageStore().allocatePage(); + firstOverflowPageId = page; + this.overflowRowSize = pageSize + rowLength; + writeData(); + // free up the space used by the row + Row r = rows[0]; + rowRef = new SoftReference<>(r); + rows[0] = null; + Data all = index.getPageStore().createData(); + all.checkCapacity(data.length()); + all.write(data.getBytes(), 0, data.length()); + data.truncate(index.getPageStore().getPageSize()); + do { + int type, size, next; + if (remaining <= pageSize - PageDataOverflow.START_LAST) { + type = Page.TYPE_DATA_OVERFLOW | Page.FLAG_LAST; + size = remaining; + next = 0; + } else { + type = Page.TYPE_DATA_OVERFLOW; + size = pageSize - PageDataOverflow.START_MORE; + next = index.getPageStore().allocatePage(); + } + PageDataOverflow overflow = PageDataOverflow.create(index.getPageStore(), + page, type, previous, next, all, dataOffset, size); + index.getPageStore().update(overflow); + dataOffset += size; + remaining -= size; + previous = page; + page = next; + } while (remaining > 0); + } + if (rowRef == null) { + memoryChange(true, row); + } else { + memoryChange(true, null); + } + return -1; + } + + private void removeRow(int i) { + index.getPageStore().logUndo(this, data); + written = false; + changeCount = index.getPageStore().getChangeCount(); + if (!optimizeUpdate) { + readAllRows(); + } + Row r = getRowAt(i); + if (r != null) { + memoryChange(false, r); + } + entryCount--; + if (entryCount < 0) { + DbException.throwInternalError("" + entryCount); + } + if (firstOverflowPageId != 0) { + start -= 4; + freeOverflow(); + firstOverflowPageId = 0; + overflowRowSize = 0; + rowRef = null; + } + int keyOffsetPairLen = 2 + Data.getVarLongLen(keys[i]); + int startNext = i > 0 ? 
offsets[i - 1] : index.getPageStore().getPageSize(); + int rowLength = startNext - offsets[i]; + if (optimizeUpdate) { + if (writtenData) { + byte[] d = data.getBytes(); + int dataStart = offsets[entryCount]; + System.arraycopy(d, dataStart, d, dataStart + rowLength, + offsets[i] - dataStart); + Arrays.fill(d, dataStart, dataStart + rowLength, (byte) 0); + } + } else { + int clearStart = offsets[entryCount]; + Arrays.fill(data.getBytes(), clearStart, clearStart + rowLength, (byte) 0); + } + start -= keyOffsetPairLen; + offsets = remove(offsets, entryCount + 1, i); + add(offsets, i, entryCount, rowLength); + keys = remove(keys, entryCount + 1, i); + rows = remove(rows, entryCount + 1, i); + } + + @Override + Cursor find(Session session, long minKey, long maxKey, boolean multiVersion) { + int x = find(minKey); + return new PageDataCursor(session, this, x, maxKey, multiVersion); + } + + /** + * Get the row at the given index. + * + * @param at the index + * @return the row + */ + Row getRowAt(int at) { + Row r = rows[at]; + if (r == null) { + if (firstOverflowPageId == 0) { + r = readRow(data, offsets[at], columnCount); + } else { + if (rowRef != null) { + r = rowRef.get(); + if (r != null) { + return r; + } + } + PageStore store = index.getPageStore(); + Data buff = store.createData(); + int pageSize = store.getPageSize(); + int offset = offsets[at]; + buff.write(data.getBytes(), offset, pageSize - offset); + int next = firstOverflowPageId; + do { + PageDataOverflow page = index.getPageOverflow(next); + next = page.readInto(buff); + } while (next != 0); + overflowRowSize = pageSize + buff.length(); + r = readRow(buff, 0, columnCount); + } + r.setKey(keys[at]); + if (firstOverflowPageId != 0) { + rowRef = new SoftReference<>(r); + } else { + rows[at] = r; + memoryChange(true, r); + } + } + return r; + } + + int getEntryCount() { + return entryCount; + } + + @Override + PageData split(int splitPoint) { + int newPageId = index.getPageStore().allocatePage(); + 
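The `splitPoint` handed to `split` is chosen in `addRowTry` above by clamping the insertion index into the middle third of the entries ("split near the insertion point to better fill pages"), rather than always splitting in half. The clamping rule in isolation, as a sketch (class name is illustrative):

```java
public class SplitPointHeuristic {
    /**
     * Clamp insertion index x into [entryCount/3, 2*entryCount/3),
     * mirroring the "split near the insertion point" rule in addRowTry.
     */
    static int splitPoint(int x, int entryCount) {
        int third = entryCount / 3;
        return x < third ? third : x >= 2 * third ? 2 * third : x;
    }

    public static void main(String[] args) {
        System.out.println(splitPoint(0, 30));  // clamped up to 10
        System.out.println(splitPoint(15, 30)); // inside the middle third: 15
        System.out.println(splitPoint(29, 30)); // clamped down to 20
    }
}
```

Splitting at the insertion point favors append-heavy workloads (the left page stays full), while the clamp keeps a pathological insertion pattern from producing nearly empty pages.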
PageDataLeaf p2 = PageDataLeaf.create(index, newPageId, parentPageId); + while (splitPoint < entryCount) { + int split = p2.addRowTry(getRowAt(splitPoint)); + if (split != -1) { + DbException.throwInternalError("split " + split); + } + removeRow(splitPoint); + } + return p2; + } + + @Override + long getLastKey() { + // TODO re-use keys, but remove this mechanism + if (entryCount == 0) { + return 0; + } + return getRowAt(entryCount - 1).getKey(); + } + + PageDataLeaf getNextPage() { + if (parentPageId == PageData.ROOT) { + return null; + } + PageDataNode next = (PageDataNode) index.getPage(parentPageId, -1); + return next.getNextPage(keys[entryCount - 1]); + } + + @Override + PageDataLeaf getFirstLeaf() { + return this; + } + + @Override + protected void remapChildren(int old) { + if (firstOverflowPageId == 0) { + return; + } + PageDataOverflow overflow = index.getPageOverflow(firstOverflowPageId); + overflow.setParentPageId(getPos()); + index.getPageStore().update(overflow); + } + + @Override + boolean remove(long key) { + int i = find(key); + if (keys == null || keys[i] != key) { + throw DbException.get(ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1, + index.getSQL() + ": " + key + " " + (keys == null ? 
-1 : keys[i])); + } + index.getPageStore().logUndo(this, data); + if (entryCount == 1) { + freeRecursive(); + return true; + } + removeRow(i); + index.getPageStore().update(this); + return false; + } + + @Override + void freeRecursive() { + index.getPageStore().logUndo(this, data); + index.getPageStore().free(getPos()); + freeOverflow(); + } + + private void freeOverflow() { + if (firstOverflowPageId != 0) { + int next = firstOverflowPageId; + do { + PageDataOverflow page = index.getPageOverflow(next); + page.free(); + next = page.getNextOverflow(); + } while (next != 0); + } + } + + @Override + Row getRowWithKey(long key) { + int at = find(key); + return getRowAt(at); + } + + @Override + int getRowCount() { + return entryCount; + } + + @Override + void setRowCountStored(int rowCount) { + // ignore + } + + @Override + long getDiskSpaceUsed() { + return index.getPageStore().getPageSize(); + } + + @Override + public void write() { + writeData(); + index.getPageStore().writePage(getPos(), data); + data.truncate(index.getPageStore().getPageSize()); + } + + private void readAllRows() { + for (int i = 0; i < entryCount; i++) { + getRowAt(i); + } + } + + private void writeHead() { + data.reset(); + int type; + if (firstOverflowPageId == 0) { + type = Page.TYPE_DATA_LEAF | Page.FLAG_LAST; + } else { + type = Page.TYPE_DATA_LEAF; + } + data.writeByte((byte) type); + data.writeShortInt(0); + if (SysProperties.CHECK2) { + if (data.length() != START_PARENT) { + DbException.throwInternalError(); + } + } + data.writeInt(parentPageId); + data.writeVarInt(index.getId()); + data.writeVarInt(columnCount); + data.writeShortInt(entryCount); + } + + private void writeData() { + if (written) { + return; + } + if (!optimizeUpdate) { + readAllRows(); + } + writeHead(); + if (firstOverflowPageId != 0) { + data.writeInt(firstOverflowPageId); + data.checkCapacity(overflowRowSize); + } + for (int i = 0; i < entryCount; i++) { + data.writeVarLong(keys[i]); + data.writeShortInt(offsets[i]); + } 
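The loop above writes each directory entry as a var-long key plus a two-byte offset, which is why the page-space computations elsewhere use `2 + Data.getVarLongLen(key)` for the pair length. A sketch of the usual 7-bits-per-byte var-long length rule for non-negative values (an assumption standing in for H2's `Data.getVarLongLen`, which also handles negative values):

```java
public class VarLongLen {
    /** Bytes occupied by a non-negative value in 7-bit var-long encoding. */
    static int getVarLongLen(long x) {
        int len = 1;
        while ((x >>>= 7) != 0) {
            len++;
        }
        return len;
    }

    public static void main(String[] args) {
        System.out.println(getVarLongLen(0));        // 1
        System.out.println(getVarLongLen(127));      // 1
        System.out.println(getVarLongLen(128));      // 2
        System.out.println(getVarLongLen(1L << 35)); // 6
    }
}
```

Small keys (the common case for the row-id index) therefore cost only 3 bytes of directory space per row: one var-long byte plus the two-byte offset.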
+ if (!writtenData || !optimizeUpdate) { + for (int i = 0; i < entryCount; i++) { + data.setPos(offsets[i]); + Row r = getRowAt(i); + for (int j = 0; j < columnCount; j++) { + data.writeValue(r.getValue(j)); + } + } + writtenData = true; + } + written = true; + } + + @Override + public String toString() { + return "page[" + getPos() + "] data leaf table:" + + index.getId() + " " + index.getTable().getName() + + " entries:" + entryCount + " parent:" + parentPageId + + (firstOverflowPageId == 0 ? "" : " overflow:" + firstOverflowPageId) + + " keys:" + Arrays.toString(keys) + " offsets:" + Arrays.toString(offsets); + } + + @Override + public void moveTo(Session session, int newPos) { + PageStore store = index.getPageStore(); + // load the pages into the cache, to ensure old pages + // are written + if (parentPageId != ROOT) { + store.getPage(parentPageId); + } + store.logUndo(this, data); + PageDataLeaf p2 = PageDataLeaf.create(index, newPos, parentPageId); + readAllRows(); + p2.keys = keys; + p2.overflowRowSize = overflowRowSize; + p2.firstOverflowPageId = firstOverflowPageId; + p2.rowRef = rowRef; + p2.rows = rows; + if (firstOverflowPageId != 0) { + p2.rows[0] = getRowAt(0); + } + p2.entryCount = entryCount; + p2.offsets = offsets; + p2.start = start; + p2.remapChildren(getPos()); + p2.writeData(); + p2.data.truncate(index.getPageStore().getPageSize()); + store.update(p2); + if (parentPageId == ROOT) { + index.setRootPageId(session, newPos); + } else { + PageDataNode p = (PageDataNode) store.getPage(parentPageId); + p.moveChild(getPos(), newPos); + } + store.free(getPos()); + } + + /** + * Set the overflow page id. 
+ * + * @param old the old overflow page id + * @param overflow the new overflow page id + */ + void setOverflow(int old, int overflow) { + if (SysProperties.CHECK && old != firstOverflowPageId) { + DbException.throwInternalError("move " + this + " " + firstOverflowPageId); + } + index.getPageStore().logUndo(this, data); + firstOverflowPageId = overflow; + if (written) { + changeCount = index.getPageStore().getChangeCount(); + writeHead(); + data.writeInt(firstOverflowPageId); + } + index.getPageStore().update(this); + } + + private void memoryChange(boolean add, Row r) { + int diff = r == null ? 0 : 4 + 8 + Constants.MEMORY_POINTER + r.getMemory(); + memoryData += add ? diff : -diff; + index.memoryChange((Constants.MEMORY_PAGE_DATA + + memoryData + index.getPageStore().getPageSize()) >> 2); + } + + @Override + public boolean isStream() { + return firstOverflowPageId > 0; + } + + /** + * Read a row from the data page at the given position. + * + * @param data the data page + * @param pos the position to read from + * @param columnCount the number of columns + * @return the row + */ + private Row readRow(Data data, int pos, int columnCount) { + Value[] values = new Value[columnCount]; + synchronized (data) { + data.setPos(pos); + for (int i = 0; i < columnCount; i++) { + values[i] = data.readValue(); + } + } + return index.getDatabase().createRow(values, Row.MEMORY_CALCULATE); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageDataNode.java b/modules/h2/src/main/java/org/h2/index/PageDataNode.java new file mode 100644 index 0000000000000..ec6e70cc266ae --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageDataNode.java @@ -0,0 +1,459 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.Arrays; +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.store.Data; +import org.h2.store.Page; +import org.h2.store.PageStore; +import org.h2.util.Utils; + +/** + * A leaf page that contains data of one or multiple rows. Format: + *
+ * <ul>
+ * <li>page type: byte (0)</li>
+ * <li>checksum: short (1-2)</li>
+ * <li>parent page id (0 for root): int (3-6)</li>
+ * <li>table id: varInt</li>
+ * <li>count of all children (-1 if not known): int</li>
+ * <li>entry count: short</li>
+ * <li>rightmost child page id: int</li>
+ * <li>entries (child page id: int, key: varLong)</li>
+ * </ul>
    + * The key is the largest key of the respective child, meaning key[0] is the + * largest key of child[0]. + */ +public class PageDataNode extends PageData { + + /** + * The page ids of the children. + */ + private int[] childPageIds; + + private int rowCountStored = UNKNOWN_ROWCOUNT; + + private int rowCount = UNKNOWN_ROWCOUNT; + + /** + * The number of bytes used in the page + */ + private int length; + + private PageDataNode(PageDataIndex index, int pageId, Data data) { + super(index, pageId, data); + } + + /** + * Create a new page. + * + * @param index the index + * @param pageId the page id + * @param parentPageId the parent + * @return the page + */ + static PageDataNode create(PageDataIndex index, int pageId, int parentPageId) { + PageDataNode p = new PageDataNode(index, pageId, + index.getPageStore().createData()); + index.getPageStore().logUndo(p, null); + p.parentPageId = parentPageId; + p.writeHead(); + // 4 bytes for the rightmost child page id + p.length = p.data.length() + 4; + return p; + } + + /** + * Read a data node page. 
+ * + * @param index the index + * @param data the data + * @param pageId the page id + * @return the page + */ + public static Page read(PageDataIndex index, Data data, int pageId) { + PageDataNode p = new PageDataNode(index, pageId, data); + p.read(); + return p; + } + + private void read() { + data.reset(); + data.readByte(); + data.readShortInt(); + this.parentPageId = data.readInt(); + int indexId = data.readVarInt(); + if (indexId != index.getId()) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "page:" + getPos() + " expected index:" + index.getId() + + "got:" + indexId); + } + rowCount = rowCountStored = data.readInt(); + entryCount = data.readShortInt(); + childPageIds = new int[entryCount + 1]; + childPageIds[entryCount] = data.readInt(); + keys = Utils.newLongArray(entryCount); + for (int i = 0; i < entryCount; i++) { + childPageIds[i] = data.readInt(); + keys[i] = data.readVarLong(); + } + length = data.length(); + check(); + written = true; + } + + private void addChild(int x, int childPageId, long key) { + index.getPageStore().logUndo(this, data); + written = false; + changeCount = index.getPageStore().getChangeCount(); + childPageIds = insert(childPageIds, entryCount + 1, x + 1, childPageId); + keys = insert(keys, entryCount, x, key); + entryCount++; + length += 4 + Data.getVarLongLen(key); + } + + @Override + int addRowTry(Row row) { + index.getPageStore().logUndo(this, data); + int keyOffsetPairLen = 4 + Data.getVarLongLen(row.getKey()); + while (true) { + int x = find(row.getKey()); + PageData page = index.getPage(childPageIds[x], getPos()); + int splitPoint = page.addRowTry(row); + if (splitPoint == -1) { + break; + } + if (length + keyOffsetPairLen > index.getPageStore().getPageSize()) { + return entryCount / 2; + } + long pivot = splitPoint == 0 ? 
row.getKey() : page.getKey(splitPoint - 1); + PageData page2 = page.split(splitPoint); + index.getPageStore().update(page); + index.getPageStore().update(page2); + addChild(x, page2.getPos(), pivot); + index.getPageStore().update(this); + } + updateRowCount(1); + return -1; + } + + private void updateRowCount(int offset) { + if (rowCount != UNKNOWN_ROWCOUNT) { + rowCount += offset; + } + if (rowCountStored != UNKNOWN_ROWCOUNT) { + rowCountStored = UNKNOWN_ROWCOUNT; + index.getPageStore().logUndo(this, data); + if (written) { + writeHead(); + } + index.getPageStore().update(this); + } + } + + @Override + Cursor find(Session session, long minKey, long maxKey, boolean multiVersion) { + int x = find(minKey); + int child = childPageIds[x]; + return index.getPage(child, getPos()).find(session, minKey, maxKey, + multiVersion); + } + + @Override + PageData split(int splitPoint) { + int newPageId = index.getPageStore().allocatePage(); + PageDataNode p2 = PageDataNode.create(index, newPageId, parentPageId); + int firstChild = childPageIds[splitPoint]; + while (splitPoint < entryCount) { + p2.addChild(p2.entryCount, childPageIds[splitPoint + 1], keys[splitPoint]); + removeChild(splitPoint); + } + int lastChild = childPageIds[splitPoint - 1]; + removeChild(splitPoint - 1); + childPageIds[splitPoint - 1] = lastChild; + p2.childPageIds[0] = firstChild; + p2.remapChildren(getPos()); + return p2; + } + + @Override + protected void remapChildren(int old) { + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + PageData p = index.getPage(child, old); + p.setParentPageId(getPos()); + index.getPageStore().update(p); + } + } + + /** + * Initialize the page. 
+ * + * @param page1 the first child page + * @param pivot the pivot key + * @param page2 the last child page + */ + void init(PageData page1, long pivot, PageData page2) { + entryCount = 1; + childPageIds = new int[] { page1.getPos(), page2.getPos() }; + keys = new long[] { pivot }; + length += 4 + Data.getVarLongLen(pivot); + check(); + } + + @Override + long getLastKey() { + return index.getPage(childPageIds[entryCount], getPos()).getLastKey(); + } + + /** + * Get the next leaf page. + * + * @param key the last key of the current page + * @return the next leaf page + */ + PageDataLeaf getNextPage(long key) { + int i = find(key) + 1; + if (i > entryCount) { + if (parentPageId == PageData.ROOT) { + return null; + } + PageDataNode next = (PageDataNode) index.getPage(parentPageId, -1); + return next.getNextPage(key); + } + PageData page = index.getPage(childPageIds[i], getPos()); + return page.getFirstLeaf(); + } + + @Override + PageDataLeaf getFirstLeaf() { + int child = childPageIds[0]; + return index.getPage(child, getPos()).getFirstLeaf(); + } + + @Override + boolean remove(long key) { + int at = find(key); + // merge is not implemented to allow concurrent usage + // TODO maybe implement merge + PageData page = index.getPage(childPageIds[at], getPos()); + boolean empty = page.remove(key); + index.getPageStore().logUndo(this, data); + updateRowCount(-1); + if (!empty) { + // the first row didn't change - nothing to do + return false; + } + // this child is now empty + index.getPageStore().free(page.getPos()); + if (entryCount < 1) { + // no more children - this page is empty as well + return true; + } + removeChild(at); + index.getPageStore().update(this); + return false; + } + + @Override + void freeRecursive() { + index.getPageStore().logUndo(this, data); + index.getPageStore().free(getPos()); + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + index.getPage(child, getPos()).freeRecursive(); + } + } + + @Override + Row 
getRowWithKey(long key) { + int at = find(key); + PageData page = index.getPage(childPageIds[at], getPos()); + return page.getRowWithKey(key); + } + + @Override + int getRowCount() { + if (rowCount == UNKNOWN_ROWCOUNT) { + int count = 0; + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + PageData page = index.getPage(child, getPos()); + if (getPos() == page.getPos()) { + throw DbException.throwInternalError("Page is its own child: " + getPos()); + } + count += page.getRowCount(); + index.getDatabase().setProgress(DatabaseEventListener.STATE_SCAN_FILE, + index.getTable() + "." + index.getName(), count, Integer.MAX_VALUE); + } + rowCount = count; + } + return rowCount; + } + + @Override + long getDiskSpaceUsed() { + long count = 0; + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + PageData page = index.getPage(child, getPos()); + if (getPos() == page.getPos()) { + throw DbException.throwInternalError("Page is its own child: " + getPos()); + } + count += page.getDiskSpaceUsed(); + index.getDatabase().setProgress(DatabaseEventListener.STATE_SCAN_FILE, + index.getTable() + "." 
+ index.getName(), + (int) (count >> 16), Integer.MAX_VALUE); + } + return count; + } + + @Override + void setRowCountStored(int rowCount) { + this.rowCount = rowCount; + if (rowCountStored != rowCount) { + rowCountStored = rowCount; + index.getPageStore().logUndo(this, data); + if (written) { + changeCount = index.getPageStore().getChangeCount(); + writeHead(); + } + index.getPageStore().update(this); + } + } + + private void check() { + if (SysProperties.CHECK) { + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + if (child == 0) { + DbException.throwInternalError(); + } + } + } + } + + @Override + public void write() { + writeData(); + index.getPageStore().writePage(getPos(), data); + } + + private void writeHead() { + data.reset(); + data.writeByte((byte) Page.TYPE_DATA_NODE); + data.writeShortInt(0); + if (SysProperties.CHECK2) { + if (data.length() != START_PARENT) { + DbException.throwInternalError(); + } + } + data.writeInt(parentPageId); + data.writeVarInt(index.getId()); + data.writeInt(rowCountStored); + data.writeShortInt(entryCount); + } + + private void writeData() { + if (written) { + return; + } + check(); + writeHead(); + data.writeInt(childPageIds[entryCount]); + for (int i = 0; i < entryCount; i++) { + data.writeInt(childPageIds[i]); + data.writeVarLong(keys[i]); + } + if (length != data.length()) { + DbException.throwInternalError("expected pos: " + length + + " got: " + data.length()); + } + written = true; + } + + private void removeChild(int i) { + index.getPageStore().logUndo(this, data); + written = false; + changeCount = index.getPageStore().getChangeCount(); + int removedKeyIndex = i < entryCount ? 
i : i - 1; + entryCount--; + length -= 4 + Data.getVarLongLen(keys[removedKeyIndex]); + if (entryCount < 0) { + DbException.throwInternalError("" + entryCount); + } + keys = remove(keys, entryCount + 1, removedKeyIndex); + childPageIds = remove(childPageIds, entryCount + 2, i); + } + + @Override + public String toString() { + return "page[" + getPos() + "] data node table:" + index.getId() + + " entries:" + entryCount + " " + Arrays.toString(childPageIds); + } + + @Override + public void moveTo(Session session, int newPos) { + PageStore store = index.getPageStore(); + // load the pages into the cache, to ensure old pages + // are written + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + store.getPage(child); + } + if (parentPageId != ROOT) { + store.getPage(parentPageId); + } + store.logUndo(this, data); + PageDataNode p2 = PageDataNode.create(index, newPos, parentPageId); + p2.rowCountStored = rowCountStored; + p2.rowCount = rowCount; + p2.childPageIds = childPageIds; + p2.keys = keys; + p2.entryCount = entryCount; + p2.length = length; + store.update(p2); + if (parentPageId == ROOT) { + index.setRootPageId(session, newPos); + } else { + PageDataNode p = (PageDataNode) store.getPage(parentPageId); + p.moveChild(getPos(), newPos); + } + for (int i = 0; i < entryCount + 1; i++) { + int child = childPageIds[i]; + PageData p = (PageData) store.getPage(child); + p.setParentPageId(newPos); + store.update(p); + } + store.free(getPos()); + } + + /** + * One of the children has moved to another page. 
+ * + * @param oldPos the old position + * @param newPos the new position + */ + void moveChild(int oldPos, int newPos) { + for (int i = 0; i < entryCount + 1; i++) { + if (childPageIds[i] == oldPos) { + index.getPageStore().logUndo(this, data); + written = false; + changeCount = index.getPageStore().getChangeCount(); + childPageIds[i] = newPos; + index.getPageStore().update(this); + return; + } + } + throw DbException.throwInternalError(oldPos + " " + newPos); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageDataOverflow.java b/modules/h2/src/main/java/org/h2/index/PageDataOverflow.java new file mode 100644 index 0000000000000..e04d1442b613e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageDataOverflow.java @@ -0,0 +1,274 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.Data; +import org.h2.store.Page; +import org.h2.store.PageStore; + +/** + * Overflow data for a leaf page. Format: + *
+ * <ul>
+ * <li>page type: byte (0)</li>
+ * <li>checksum: short (1-2)</li>
+ * <li>parent page id (0 for root): int (3-6)</li>
+ * <li>more data: next overflow page id: int (7-10)</li>
+ * <li>last remaining size: short (7-8)</li>
+ * <li>data (11-/9-)</li>
+ * </ul>
    + */ +public class PageDataOverflow extends Page { + + /** + * The start of the data in the last overflow page. + */ + static final int START_LAST = 9; + + /** + * The start of the data in a overflow page that is not the last one. + */ + static final int START_MORE = 11; + + private static final int START_NEXT_OVERFLOW = 7; + + /** + * The page store. + */ + private final PageStore store; + + /** + * The page type. + */ + private int type; + + /** + * The parent page (overflow or leaf). + */ + private int parentPageId; + + /** + * The next overflow page, or 0. + */ + private int nextPage; + + private final Data data; + + private int start; + private int size; + + /** + * Create an object from the given data page. + * + * @param store the page store + * @param pageId the page id + * @param data the data page + */ + private PageDataOverflow(PageStore store, int pageId, Data data) { + this.store = store; + setPos(pageId); + this.data = data; + } + + /** + * Read an overflow page. + * + * @param store the page store + * @param data the data + * @param pageId the page id + * @return the page + */ + public static Page read(PageStore store, Data data, int pageId) { + PageDataOverflow p = new PageDataOverflow(store, pageId, data); + p.read(); + return p; + } + + /** + * Create a new overflow page. 
+ * + * @param store the page store + * @param page the page id + * @param type the page type + * @param parentPageId the parent page id + * @param next the next page or 0 + * @param all the data + * @param offset the offset within the data + * @param size the number of bytes + * @return the page + */ + static PageDataOverflow create(PageStore store, int page, + int type, int parentPageId, int next, + Data all, int offset, int size) { + Data data = store.createData(); + PageDataOverflow p = new PageDataOverflow(store, page, data); + store.logUndo(p, null); + data.writeByte((byte) type); + data.writeShortInt(0); + data.writeInt(parentPageId); + if (type == Page.TYPE_DATA_OVERFLOW) { + data.writeInt(next); + } else { + data.writeShortInt(size); + } + p.start = data.length(); + data.write(all.getBytes(), offset, size); + p.type = type; + p.parentPageId = parentPageId; + p.nextPage = next; + p.size = size; + return p; + } + + /** + * Read the page. + */ + private void read() { + data.reset(); + type = data.readByte(); + data.readShortInt(); + parentPageId = data.readInt(); + if (type == (Page.TYPE_DATA_OVERFLOW | Page.FLAG_LAST)) { + size = data.readShortInt(); + nextPage = 0; + } else if (type == Page.TYPE_DATA_OVERFLOW) { + nextPage = data.readInt(); + size = store.getPageSize() - data.length(); + } else { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, "page:" + + getPos() + " type:" + type); + } + start = data.length(); + } + + /** + * Read the data into a target buffer. 
+ * + * @param target the target data page + * @return the next page, or 0 if no next page + */ + int readInto(Data target) { + target.checkCapacity(size); + if (type == (Page.TYPE_DATA_OVERFLOW | Page.FLAG_LAST)) { + target.write(data.getBytes(), START_LAST, size); + return 0; + } + target.write(data.getBytes(), START_MORE, size); + return nextPage; + } + + int getNextOverflow() { + return nextPage; + } + + private void writeHead() { + data.writeByte((byte) type); + data.writeShortInt(0); + data.writeInt(parentPageId); + } + + @Override + public void write() { + writeData(); + store.writePage(getPos(), data); + } + + + private void writeData() { + data.reset(); + writeHead(); + if (type == Page.TYPE_DATA_OVERFLOW) { + data.writeInt(nextPage); + } else { + data.writeShortInt(size); + } + } + + + @Override + public String toString() { + return "page[" + getPos() + "] data leaf overflow parent:" + + parentPageId + " next:" + nextPage; + } + + /** + * Get the estimated memory size. + * + * @return number of double words (4 bytes) + */ + @Override + public int getMemory() { + return (Constants.MEMORY_PAGE_DATA_OVERFLOW + store.getPageSize()) >> 2; + } + + void setParentPageId(int parent) { + store.logUndo(this, data); + this.parentPageId = parent; + } + + @Override + public void moveTo(Session session, int newPos) { + // load the pages into the cache, to ensure old pages + // are written + Page parent = store.getPage(parentPageId); + if (parent == null) { + throw DbException.throwInternalError(); + } + PageDataOverflow next = null; + if (nextPage != 0) { + next = (PageDataOverflow) store.getPage(nextPage); + } + store.logUndo(this, data); + PageDataOverflow p2 = PageDataOverflow.create(store, newPos, type, + parentPageId, nextPage, data, start, size); + store.update(p2); + if (next != null) { + next.setParentPageId(newPos); + store.update(next); + } + if (parent instanceof PageDataOverflow) { + PageDataOverflow p1 = (PageDataOverflow) parent; + p1.setNext(getPos(), 
newPos); + } else { + PageDataLeaf p1 = (PageDataLeaf) parent; + p1.setOverflow(getPos(), newPos); + } + store.update(parent); + store.free(getPos()); + } + + private void setNext(int old, int nextPage) { + if (SysProperties.CHECK && old != this.nextPage) { + DbException.throwInternalError("move " + this + " " + nextPage); + } + store.logUndo(this, data); + this.nextPage = nextPage; + data.setInt(START_NEXT_OVERFLOW, nextPage); + } + + /** + * Free this page. + */ + void free() { + store.logUndo(this, data); + store.free(getPos()); + } + + @Override + public boolean canRemove() { + return true; + } + + @Override + public boolean isStream() { + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageDelegateIndex.java b/modules/h2/src/main/java/org/h2/index/PageDelegateIndex.java new file mode 100644 index 0000000000000..f87ca4002d1b6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageDelegateIndex.java @@ -0,0 +1,158 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.store.PageStore; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.TableFilter; + +/** + * An index that delegates indexing to the page data index. 
+ */ +public class PageDelegateIndex extends PageIndex { + + private final PageDataIndex mainIndex; + + public PageDelegateIndex(RegularTable table, int id, String name, + IndexType indexType, PageDataIndex mainIndex, boolean create, + Session session) { + IndexColumn[] cols = IndexColumn.wrap( + new Column[] { table.getColumn(mainIndex.getMainIndexColumn())}); + this.initBaseIndex(table, id, name, cols, indexType); + this.mainIndex = mainIndex; + if (!database.isPersistent() || id < 0) { + throw DbException.throwInternalError("" + name); + } + PageStore store = database.getPageStore(); + store.addIndex(this); + if (create) { + store.addMeta(this, session); + } + } + + @Override + public void add(Session session, Row row) { + // nothing to do + } + + @Override + public boolean canFindNext() { + return false; + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + long min = mainIndex.getKey(first, Long.MIN_VALUE, Long.MIN_VALUE); + // ifNull is MIN_VALUE as well, because the column is never NULL + // so avoid returning all rows (returning one row is OK) + long max = mainIndex.getKey(last, Long.MAX_VALUE, Long.MIN_VALUE); + return mainIndex.find(session, min, max, false); + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + Cursor cursor; + if (first) { + cursor = mainIndex.find(session, Long.MIN_VALUE, Long.MAX_VALUE, false); + } else { + long x = mainIndex.getLastKey(); + cursor = mainIndex.find(session, x, x, false); + } + cursor.next(); + return cursor; + } + + @Override + public Cursor findNext(Session session, SearchRow higherThan, SearchRow last) { + throw DbException.throwInternalError(toString()); + } + + @Override + public int getColumnIndex(Column col) { + if (col.getColumnId() == mainIndex.getMainIndexColumn()) { + return 0; + } + return 
-1; + } + + @Override + public boolean isFirstColumn(Column column) { + return getColumnIndex(column) == 0; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return 10 * getCostRangeIndex(masks, mainIndex.getRowCount(session), + filters, filter, sortOrder, false, allColumnsSet); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public void remove(Session session, Row row) { + // nothing to do + } + + @Override + public void remove(Session session) { + mainIndex.setMainIndexColumn(-1); + session.getDatabase().getPageStore().removeMeta(this, session); + } + + @Override + public void truncate(Session session) { + // nothing to do + } + + @Override + public void checkRename() { + // ok + } + + @Override + public long getRowCount(Session session) { + return mainIndex.getRowCount(session); + } + + @Override + public long getRowCountApproximation() { + return mainIndex.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return mainIndex.getDiskSpaceUsed(); + } + + @Override + public void writeRowCount() { + // ignore + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/PageIndex.java b/modules/h2/src/main/java/org/h2/index/PageIndex.java new file mode 100644 index 0000000000000..f22b929d051c1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/PageIndex.java @@ -0,0 +1,44 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + + +/** + * A page store index. + */ +public abstract class PageIndex extends BaseIndex { + + /** + * The root page of this index. + */ + protected int rootPageId; + + private boolean sortedInsertMode; + + /** + * Get the root page of this index. 
+ * + * @return the root page id + */ + public int getRootPageId() { + return rootPageId; + } + + /** + * Write back the row count if it has changed. + */ + public abstract void writeRowCount(); + + @Override + public void setSortedInsertMode(boolean sortedInsertMode) { + this.sortedInsertMode = sortedInsertMode; + } + + boolean isSortedInsertMode() { + return sortedInsertMode; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/RangeCursor.java b/modules/h2/src/main/java/org/h2/index/RangeCursor.java new file mode 100644 index 0000000000000..f0253b390d083 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/RangeCursor.java @@ -0,0 +1,65 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.value.Value; +import org.h2.value.ValueLong; + +/** + * The cursor implementation for the range index. + */ +class RangeCursor implements Cursor { + + private final Session session; + private boolean beforeFirst; + private long current; + private Row currentRow; + private final long start, end, step; + + RangeCursor(Session session, long start, long end) { + this(session, start, end, 1); + } + + RangeCursor(Session session, long start, long end, long step) { + this.session = session; + this.start = start; + this.end = end; + this.step = step; + beforeFirst = true; + } + + @Override + public Row get() { + return currentRow; + } + + @Override + public SearchRow getSearchRow() { + return currentRow; + } + + @Override + public boolean next() { + if (beforeFirst) { + beforeFirst = false; + current = start; + } else { + current += step; + } + currentRow = session.createRow(new Value[]{ValueLong.get(current)}, 1); + return step > 0 ? 
current <= end : current >= end; + } + + @Override + public boolean previous() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/RangeIndex.java b/modules/h2/src/main/java/org/h2/index/RangeIndex.java new file mode 100644 index 0000000000000..977efcc599538 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/RangeIndex.java @@ -0,0 +1,123 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RangeTable; +import org.h2.table.TableFilter; + +/** + * An index for the SYSTEM_RANGE table. + * This index can only scan through all rows, search is not supported. + */ +public class RangeIndex extends BaseIndex { + + private final RangeTable rangeTable; + + public RangeIndex(RangeTable table, IndexColumn[] columns) { + initBaseIndex(table, 0, "RANGE_INDEX", columns, + IndexType.createNonUnique(true)); + this.rangeTable = table; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void add(Session session, Row row) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public void remove(Session session, Row row) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + long min = rangeTable.getMin(session), start = min; + long max = rangeTable.getMax(session), end = max; + long step = rangeTable.getStep(session); + try { + start = Math.max(min, first == null ? 
min : first.getValue(0).getLong()); + } catch (Exception e) { + // error when converting the value - ignore + } + try { + end = Math.min(max, last == null ? max : last.getValue(0).getLong()); + } catch (Exception e) { + // error when converting the value - ignore + } + return new RangeCursor(session, start, end, step); + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return 1; + } + + @Override + public String getCreateSQL() { + return null; + } + + @Override + public void remove(Session session) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + long pos = first ? rangeTable.getMin(session) : rangeTable.getMax(session); + return new RangeCursor(session, pos, pos); + } + + @Override + public long getRowCount(Session session) { + return rangeTable.getRowCountApproximation(); + } + + @Override + public long getRowCountApproximation() { + return rangeTable.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } +} diff --git a/modules/h2/src/main/java/org/h2/index/ScanCursor.java b/modules/h2/src/main/java/org/h2/index/ScanCursor.java new file mode 100644 index 0000000000000..5ff28080a13eb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/ScanCursor.java @@ -0,0 +1,78 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.Iterator; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * The cursor implementation for the scan index. + */ +public class ScanCursor implements Cursor { + private final ScanIndex scan; + private Row row; + private final Session session; + private final boolean multiVersion; + private Iterator delta; + + ScanCursor(Session session, ScanIndex scan, boolean multiVersion) { + this.session = session; + this.scan = scan; + this.multiVersion = multiVersion; + if (multiVersion) { + delta = scan.getDelta(); + } + row = null; + } + + @Override + public Row get() { + return row; + } + + @Override + public SearchRow getSearchRow() { + return row; + } + + @Override + public boolean next() { + if (multiVersion) { + while (true) { + if (delta != null) { + if (!delta.hasNext()) { + delta = null; + row = null; + continue; + } + row = delta.next(); + if (!row.isDeleted() || row.getSessionId() == session.getId()) { + continue; + } + } else { + row = scan.getNextRow(row); + if (row != null && row.getSessionId() != 0 && + row.getSessionId() != session.getId()) { + continue; + } + } + break; + } + return row != null; + } + row = scan.getNextRow(row); + return row != null; + } + + @Override + public boolean previous() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/ScanIndex.java b/modules/h2/src/main/java/org/h2/index/ScanIndex.java new file mode 100644 index 0000000000000..55b503ac1a2e9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/ScanIndex.java @@ -0,0 +1,273 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.engine.UndoLogRecord; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.TableFilter; +import org.h2.util.New; + +/** + * The scan index is not really an 'index' in the strict sense, because it can + * not be used for direct lookup. It can only be used to iterate over all rows + * of a table. Each regular table has one such object, even if no primary key or + * indexes are defined. + */ +public class ScanIndex extends BaseIndex { + private long firstFree = -1; + private ArrayList rows = New.arrayList(); + private final RegularTable tableData; + private int rowCountDiff; + private final HashMap sessionRowCount; + private HashSet delta; + private long rowCount; + + public ScanIndex(RegularTable table, int id, IndexColumn[] columns, + IndexType indexType) { + initBaseIndex(table, id, table.getName() + "_DATA", columns, indexType); + if (database.isMultiVersion()) { + sessionRowCount = new HashMap<>(); + } else { + sessionRowCount = null; + } + tableData = table; + } + + @Override + public void remove(Session session) { + truncate(session); + } + + @Override + public void truncate(Session session) { + rows = New.arrayList(); + firstFree = -1; + if (tableData.getContainsLargeObject() && tableData.isPersistData()) { + database.getLobStorage().removeAllForTable(table.getId()); + } + tableData.setRowCount(0); + rowCount = 0; + rowCountDiff = 0; + if (database.isMultiVersion()) { + sessionRowCount.clear(); + } + } + + @Override + public String 
getCreateSQL() { + return null; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public Row getRow(Session session, long key) { + return rows.get((int) key); + } + + @Override + public void add(Session session, Row row) { + // in-memory + if (firstFree == -1) { + int key = rows.size(); + row.setKey(key); + rows.add(row); + } else { + long key = firstFree; + Row free = rows.get((int) key); + firstFree = free.getKey(); + row.setKey(key); + rows.set((int) key, row); + } + row.setDeleted(false); + if (database.isMultiVersion()) { + if (delta == null) { + delta = new HashSet<>(); + } + boolean wasDeleted = delta.remove(row); + if (!wasDeleted) { + delta.add(row); + } + incrementRowCount(session.getId(), 1); + } + rowCount++; + } + + @Override + public void commit(int operation, Row row) { + if (database.isMultiVersion()) { + if (delta != null) { + delta.remove(row); + } + incrementRowCount(row.getSessionId(), + operation == UndoLogRecord.DELETE ? 1 : -1); + } + } + + private void incrementRowCount(int sessionId, int count) { + if (database.isMultiVersion()) { + Integer id = sessionId; + Integer c = sessionRowCount.get(id); + int current = c == null ? 
0 : c.intValue(); + sessionRowCount.put(id, current + count); + rowCountDiff += count; + } + } + + @Override + public void remove(Session session, Row row) { + // in-memory + if (!database.isMultiVersion() && rowCount == 1) { + rows = New.arrayList(); + firstFree = -1; + } else { + Row free = session.createRow(null, 1); + free.setKey(firstFree); + long key = row.getKey(); + if (rows.size() <= key) { + throw DbException.get(ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1, + rows.size() + ": " + key); + } + rows.set((int) key, free); + firstFree = key; + } + if (database.isMultiVersion()) { + // if storage is null, the delete flag is not yet set + row.setDeleted(true); + if (delta == null) { + delta = new HashSet<>(); + } + boolean wasAdded = delta.remove(row); + if (!wasAdded) { + delta.add(row); + } + incrementRowCount(session.getId(), -1); + } + rowCount--; + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return new ScanCursor(session, this, database.isMultiVersion()); + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return tableData.getRowCountApproximation() + Constants.COST_ROW_OFFSET; + } + + @Override + public long getRowCount(Session session) { + if (database.isMultiVersion()) { + Integer i = sessionRowCount.get(session.getId()); + long count = i == null ? 0 : i.intValue(); + count += rowCount; + count -= rowCountDiff; + return count; + } + return rowCount; + } + + /** + * Get the next row that is stored after this row. 
+ * + * @param row the current row or null to start the scan + * @return the next row or null if there are no more rows + */ + Row getNextRow(Row row) { + long key; + if (row == null) { + key = -1; + } else { + key = row.getKey(); + } + while (true) { + key++; + if (key >= rows.size()) { + return null; + } + row = rows.get((int) key); + if (!row.isEmpty()) { + return row; + } + } + } + + @Override + public int getColumnIndex(Column col) { + // the scan index cannot use any columns + return -1; + } + + @Override + public boolean isFirstColumn(Column column) { + return false; + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("SCAN"); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.getUnsupportedException("SCAN"); + } + + Iterator getDelta() { + if (delta == null) { + List e = Collections.emptyList(); + return e.iterator(); + } + return delta.iterator(); + } + + @Override + public long getRowCountApproximation() { + return rowCount; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public String getPlanSQL() { + return table.getSQL() + ".tableScan"; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/SingleRowCursor.java b/modules/h2/src/main/java/org/h2/index/SingleRowCursor.java new file mode 100644 index 0000000000000..013238e806ff2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/SingleRowCursor.java @@ -0,0 +1,53 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * A cursor with at most one row. 
+ */ +public class SingleRowCursor implements Cursor { + private Row row; + private boolean end; + + /** + * Create a new cursor. + * + * @param row - the single row (if null then cursor is empty) + */ + public SingleRowCursor(Row row) { + this.row = row; + } + + @Override + public Row get() { + return row; + } + + @Override + public SearchRow getSearchRow() { + return row; + } + + @Override + public boolean next() { + if (row == null || end) { + row = null; + return false; + } + end = true; + return true; + } + + @Override + public boolean previous() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/SpatialIndex.java b/modules/h2/src/main/java/org/h2/index/SpatialIndex.java new file mode 100644 index 0000000000000..5c4f875fcd60b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/SpatialIndex.java @@ -0,0 +1,32 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.result.SearchRow; +import org.h2.table.TableFilter; + +/** + * A spatial index. Spatial indexes are used to speed up searching + * spatial/geometric data. + */ +public interface SpatialIndex extends Index { + + /** + * Find a row or a list of rows and create a cursor to iterate over the + * result. 
+ * + * @param filter the table filter (which possibly knows about additional + * conditions) + * @param first the lower bound + * @param last the upper bound + * @param intersection the geometry which values should intersect with, or + * null for anything + * @return the cursor to iterate over the results + */ + Cursor findByGeometry(TableFilter filter, SearchRow first, SearchRow last, + SearchRow intersection); + +} diff --git a/modules/h2/src/main/java/org/h2/index/SpatialTreeIndex.java b/modules/h2/src/main/java/org/h2/index/SpatialTreeIndex.java new file mode 100644 index 0000000000000..f443b5a358390 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/SpatialTreeIndex.java @@ -0,0 +1,307 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import java.util.Iterator; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.db.MVTableEngine; +import org.h2.mvstore.rtree.MVRTreeMap; +import org.h2.mvstore.rtree.SpatialKey; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.Table; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueGeometry; +import org.h2.value.ValueNull; +import org.locationtech.jts.geom.Envelope; +import org.locationtech.jts.geom.Geometry; + +/** + * This is an index based on a MVR-TreeMap. 
+ * + * @author Thomas Mueller + * @author Noel Grandin + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public class SpatialTreeIndex extends BaseIndex implements SpatialIndex { + + private static final String MAP_PREFIX = "RTREE_"; + + private final MVRTreeMap treeMap; + private final MVStore store; + + private boolean closed; + private boolean needRebuild; + + /** + * Constructor. + * + * @param table the table instance + * @param id the index id + * @param indexName the index name + * @param columns the indexed columns (only one geometry column allowed) + * @param persistent whether the index should be persisted + * @param indexType the index type (only spatial index) + * @param create whether to create a new index + * @param session the session. + */ + public SpatialTreeIndex(Table table, int id, String indexName, + IndexColumn[] columns, IndexType indexType, boolean persistent, + boolean create, Session session) { + if (indexType.isUnique()) { + throw DbException.getUnsupportedException("not unique"); + } + if (!persistent && !create) { + throw DbException.getUnsupportedException( + "Non persistent index called with create==false"); + } + if (columns.length > 1) { + throw DbException.getUnsupportedException( + "can only do one column"); + } + if ((columns[0].sortType & SortOrder.DESCENDING) != 0) { + throw DbException.getUnsupportedException( + "cannot do descending"); + } + if ((columns[0].sortType & SortOrder.NULLS_FIRST) != 0) { + throw DbException.getUnsupportedException( + "cannot do nulls first"); + } + if ((columns[0].sortType & SortOrder.NULLS_LAST) != 0) { + throw DbException.getUnsupportedException( + "cannot do nulls last"); + } + initBaseIndex(table, id, indexName, columns, indexType); + this.needRebuild = create; + this.table = table; + if (!database.isStarting()) { + if (columns[0].column.getType() != Value.GEOMETRY) { + throw DbException.getUnsupportedException( + "spatial index on non-geometry column, " + + 
columns[0].column.getCreateSQL()); + } + } + if (!persistent) { + // Index in memory + store = MVStore.open(null); + treeMap = store.openMap("spatialIndex", + new MVRTreeMap.Builder()); + } else { + if (id < 0) { + throw DbException.getUnsupportedException( + "Persistent index with id<0"); + } + MVTableEngine.init(session.getDatabase()); + store = session.getDatabase().getMvStore().getStore(); + // Called after CREATE SPATIAL INDEX or + // by PageStore.addMeta + treeMap = store.openMap(MAP_PREFIX + getId(), + new MVRTreeMap.Builder()); + if (treeMap.isEmpty()) { + needRebuild = true; + } + } + } + + @Override + public void close(Session session) { + store.close(); + closed = true; + } + + @Override + public void add(Session session, Row row) { + if (closed) { + throw DbException.throwInternalError(); + } + treeMap.add(getKey(row), row.getKey()); + } + + private SpatialKey getKey(SearchRow row) { + if (row == null) { + return null; + } + Value v = row.getValue(columnIds[0]); + if (v == ValueNull.INSTANCE) { + return null; + } + Geometry g = ((ValueGeometry) v.convertTo(Value.GEOMETRY)).getGeometryNoCopy(); + Envelope env = g.getEnvelopeInternal(); + return new SpatialKey(row.getKey(), + (float) env.getMinX(), (float) env.getMaxX(), + (float) env.getMinY(), (float) env.getMaxY()); + } + + @Override + public void remove(Session session, Row row) { + if (closed) { + throw DbException.throwInternalError(); + } + if (!treeMap.remove(getKey(row), row.getKey())) { + throw DbException.throwInternalError("row not found"); + } + } + + @Override + public Cursor find(TableFilter filter, SearchRow first, SearchRow last) { + return find(filter.getSession()); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return find(session); + } + + private Cursor find(Session session) { + return new SpatialCursor(treeMap.keySet().iterator(), table, session); + } + + @Override + public Cursor findByGeometry(TableFilter filter, SearchRow first, + 
SearchRow last, SearchRow intersection) { + if (intersection == null) { + return find(filter.getSession(), first, last); + } + return new SpatialCursor( + treeMap.findIntersectingKeys(getKey(intersection)), table, + filter.getSession()); + } + + /** + * Compute spatial index cost + * @param masks Search mask + * @param columns Table columns + * @return Index cost hint + */ + public static long getCostRangeIndex(int[] masks, Column[] columns) { + // Never use spatial tree index without spatial filter + if (columns.length == 0) { + return Long.MAX_VALUE; + } + for (Column column : columns) { + int index = column.getColumnId(); + int mask = masks[index]; + if ((mask & IndexCondition.SPATIAL_INTERSECTS) != IndexCondition.SPATIAL_INTERSECTS) { + return Long.MAX_VALUE; + } + } + return 2; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return getCostRangeIndex(masks, columns); + } + + + @Override + public void remove(Session session) { + if (!treeMap.isClosed()) { + store.removeMap(treeMap); + } + } + + @Override + public void truncate(Session session) { + treeMap.clear(); + } + + @Override + public void checkRename() { + // nothing to do + } + + @Override + public boolean needRebuild() { + return needRebuild; + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + if (closed) { + throw DbException.throwInternalError(toString()); + } + if (!first) { + throw DbException.throwInternalError( + "Spatial Index can only be fetch by ascending order"); + } + return find(session); + } + + @Override + public long getRowCount(Session session) { + return treeMap.sizeAsLong(); + } + + @Override + public long getRowCountApproximation() { + return treeMap.sizeAsLong(); + } + + @Override + public long getDiskSpaceUsed() { + // TODO estimate disk space usage + return 0; + } + + 
/** + * A cursor to iterate over spatial keys. + */ + private static final class SpatialCursor implements Cursor { + + private final Iterator<SpatialKey> it; + private SpatialKey current; + private final Table table; + private Session session; + + public SpatialCursor(Iterator<SpatialKey> it, Table table, Session session) { + this.it = it; + this.table = table; + this.session = session; + } + + @Override + public Row get() { + return table.getRow(session, current.getId()); + } + + @Override + public SearchRow getSearchRow() { + return get(); + } + + @Override + public boolean next() { + if (!it.hasNext()) { + return false; + } + current = it.next(); + return true; + } + + @Override + public boolean previous() { + return false; + } + + } + +} + diff --git a/modules/h2/src/main/java/org/h2/index/TreeCursor.java b/modules/h2/src/main/java/org/h2/index/TreeCursor.java new file mode 100644 index 0000000000000..a700062a44d3f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/TreeCursor.java @@ -0,0 +1,124 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.result.Row; +import org.h2.result.SearchRow; + +/** + * The cursor implementation for a tree index. + */ +public class TreeCursor implements Cursor { + private final TreeIndex tree; + private TreeNode node; + private boolean beforeFirst; + private final SearchRow first, last; + + TreeCursor(TreeIndex tree, TreeNode node, SearchRow first, SearchRow last) { + this.tree = tree; + this.node = node; + this.first = first; + this.last = last; + beforeFirst = true; + } + + @Override + public Row get() { + return node == null ?
null : node.row; + } + + @Override + public SearchRow getSearchRow() { + return get(); + } + + @Override + public boolean next() { + if (beforeFirst) { + beforeFirst = false; + if (node == null) { + return false; + } + if (first != null && tree.compareRows(node.row, first) < 0) { + node = next(node); + } + } else { + node = next(node); + } + if (node != null && last != null) { + if (tree.compareRows(node.row, last) > 0) { + node = null; + } + } + return node != null; + } + + @Override + public boolean previous() { + node = previous(node); + return node != null; + } + + /** + * Get the next node if there is one. + * + * @param x the node + * @return the next node or null + */ + private static TreeNode next(TreeNode x) { + if (x == null) { + return null; + } + TreeNode r = x.right; + if (r != null) { + x = r; + TreeNode l = x.left; + while (l != null) { + x = l; + l = x.left; + } + return x; + } + TreeNode ch = x; + x = x.parent; + while (x != null && ch == x.right) { + ch = x; + x = x.parent; + } + return x; + } + + + /** + * Get the previous node if there is one. + * + * @param x the node + * @return the previous node or null + */ + private static TreeNode previous(TreeNode x) { + if (x == null) { + return null; + } + TreeNode l = x.left; + if (l != null) { + x = l; + TreeNode r = x.right; + while (r != null) { + x = r; + r = x.right; + } + return x; + } + TreeNode ch = x; + x = x.parent; + while (x != null && ch == x.left) { + ch = x; + x = x.parent; + } + return x; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/TreeIndex.java b/modules/h2/src/main/java/org/h2/index/TreeIndex.java new file mode 100644 index 0000000000000..ed38930044e5a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/TreeIndex.java @@ -0,0 +1,413 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * The tree index is an in-memory index based on a binary AVL trees. + */ +public class TreeIndex extends BaseIndex { + + private TreeNode root; + private final RegularTable tableData; + private long rowCount; + private boolean closed; + + public TreeIndex(RegularTable table, int id, String indexName, + IndexColumn[] columns, IndexType indexType) { + initBaseIndex(table, id, indexName, columns, indexType); + tableData = table; + if (!database.isStarting()) { + checkIndexColumnTypes(columns); + } + } + + @Override + public void close(Session session) { + root = null; + closed = true; + } + + @Override + public void add(Session session, Row row) { + if (closed) { + throw DbException.throwInternalError(); + } + TreeNode i = new TreeNode(row); + TreeNode n = root, x = n; + boolean isLeft = true; + while (true) { + if (n == null) { + if (x == null) { + root = i; + rowCount++; + return; + } + set(x, isLeft, i); + break; + } + Row r = n.row; + int compare = compareRows(row, r); + if (compare == 0) { + if (indexType.isUnique()) { + if (!mayHaveNullDuplicates(row)) { + throw getDuplicateKeyException(row.toString()); + } + } + compare = compareKeys(row, r); + } + isLeft = compare < 0; + x = n; + n = child(x, isLeft); + } + balance(x, isLeft); + rowCount++; + } + + private void balance(TreeNode x, boolean isLeft) { + while (true) { + int sign = isLeft ? 
1 : -1; + switch (x.balance * sign) { + case 1: + x.balance = 0; + return; + case 0: + x.balance = -sign; + break; + case -1: + TreeNode l = child(x, isLeft); + if (l.balance == -sign) { + replace(x, l); + set(x, isLeft, child(l, !isLeft)); + set(l, !isLeft, x); + x.balance = 0; + l.balance = 0; + } else { + TreeNode r = child(l, !isLeft); + replace(x, r); + set(l, !isLeft, child(r, isLeft)); + set(r, isLeft, l); + set(x, isLeft, child(r, !isLeft)); + set(r, !isLeft, x); + int rb = r.balance; + x.balance = (rb == -sign) ? sign : 0; + l.balance = (rb == sign) ? -sign : 0; + r.balance = 0; + } + return; + default: + DbException.throwInternalError("b:" + x.balance * sign); + } + if (x == root) { + return; + } + isLeft = x.isFromLeft(); + x = x.parent; + } + } + + private static TreeNode child(TreeNode x, boolean isLeft) { + return isLeft ? x.left : x.right; + } + + private void replace(TreeNode x, TreeNode n) { + if (x == root) { + root = n; + if (n != null) { + n.parent = null; + } + } else { + set(x.parent, x.isFromLeft(), n); + } + } + + private static void set(TreeNode parent, boolean left, TreeNode n) { + if (left) { + parent.left = n; + } else { + parent.right = n; + } + if (n != null) { + n.parent = parent; + } + } + + @Override + public void remove(Session session, Row row) { + if (closed) { + throw DbException.throwInternalError(); + } + TreeNode x = findFirstNode(row, true); + if (x == null) { + throw DbException.throwInternalError("not found!"); + } + TreeNode n; + if (x.left == null) { + n = x.right; + } else if (x.right == null) { + n = x.left; + } else { + TreeNode d = x; + x = x.left; + for (TreeNode temp = x; (temp = temp.right) != null;) { + x = temp; + } + // x will be replaced with n later + n = x.left; + // swap d and x + int b = x.balance; + x.balance = d.balance; + d.balance = b; + + // set x.parent + TreeNode xp = x.parent; + TreeNode dp = d.parent; + if (d == root) { + root = x; + } + x.parent = dp; + if (dp != null) { + if (dp.right == d) { + 
dp.right = x; + } else { + dp.left = x; + } + } + // TODO index / tree: link d.r = x(p?).r directly + if (xp == d) { + d.parent = x; + if (d.left == x) { + x.left = d; + x.right = d.right; + } else { + x.right = d; + x.left = d.left; + } + } else { + d.parent = xp; + xp.right = d; + x.right = d.right; + x.left = d.left; + } + + if (SysProperties.CHECK && x.right == null) { + DbException.throwInternalError("tree corrupted"); + } + x.right.parent = x; + x.left.parent = x; + // set d.left, d.right + d.left = n; + if (n != null) { + n.parent = d; + } + d.right = null; + x = d; + } + rowCount--; + + boolean isLeft = x.isFromLeft(); + replace(x, n); + n = x.parent; + while (n != null) { + x = n; + int sign = isLeft ? 1 : -1; + switch (x.balance * sign) { + case -1: + x.balance = 0; + break; + case 0: + x.balance = sign; + return; + case 1: + TreeNode r = child(x, !isLeft); + int b = r.balance; + if (b * sign >= 0) { + replace(x, r); + set(x, !isLeft, child(r, isLeft)); + set(r, isLeft, x); + if (b == 0) { + x.balance = sign; + r.balance = -sign; + return; + } + x.balance = 0; + r.balance = 0; + x = r; + } else { + TreeNode l = child(r, isLeft); + replace(x, l); + b = l.balance; + set(r, isLeft, child(l, !isLeft)); + set(l, !isLeft, r); + set(x, !isLeft, child(l, isLeft)); + set(l, isLeft, x); + x.balance = (b == sign) ? -sign : 0; + r.balance = (b == -sign) ? 
sign : 0; + l.balance = 0; + x = l; + } + break; + default: + DbException.throwInternalError("b: " + x.balance * sign); + } + isLeft = x.isFromLeft(); + n = x.parent; + } + } + + private TreeNode findFirstNode(SearchRow row, boolean withKey) { + TreeNode x = root, result = x; + while (x != null) { + result = x; + int compare = compareRows(x.row, row); + if (compare == 0 && withKey) { + compare = compareKeys(x.row, row); + } + if (compare == 0) { + if (withKey) { + return x; + } + x = x.left; + } else if (compare > 0) { + x = x.left; + } else { + x = x.right; + } + } + return result; + } + + @Override + public Cursor find(TableFilter filter, SearchRow first, SearchRow last) { + return find(first, last); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return find(first, last); + } + + private Cursor find(SearchRow first, SearchRow last) { + if (first == null) { + TreeNode x = root, n; + while (x != null) { + n = x.left; + if (n == null) { + break; + } + x = n; + } + return new TreeCursor(this, x, null, last); + } + TreeNode x = findFirstNode(first, false); + return new TreeCursor(this, x, first, last); + } + + @Override + public double getCost(Session session, int[] masks, TableFilter[] filters, int filter, + SortOrder sortOrder, HashSet<Column> allColumnsSet) { + return getCostRangeIndex(masks, tableData.getRowCountApproximation(), + filters, filter, sortOrder, false, allColumnsSet); + } + + @Override + public void remove(Session session) { + truncate(session); + } + + @Override + public void truncate(Session session) { + root = null; + rowCount = 0; + } + + @Override + public void checkRename() { + // nothing to do + } + + @Override + public boolean needRebuild() { + return true; + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + if (closed) { + throw DbException.throwInternalError(toString()); + } + if (first) { + // TODO
optimization: this loops through NULL + Cursor cursor = find(session, null, null); + while (cursor.next()) { + SearchRow row = cursor.getSearchRow(); + Value v = row.getValue(columnIds[0]); + if (v != ValueNull.INSTANCE) { + return cursor; + } + } + return cursor; + } + TreeNode x = root, n; + while (x != null) { + n = x.right; + if (n == null) { + break; + } + x = n; + } + TreeCursor cursor = new TreeCursor(this, x, null, null); + if (x == null) { + return cursor; + } + // TODO optimization: this loops through NULL elements + do { + SearchRow row = cursor.getSearchRow(); + if (row == null) { + break; + } + Value v = row.getValue(columnIds[0]); + if (v != ValueNull.INSTANCE) { + return cursor; + } + } while (cursor.previous()); + return cursor; + } + + @Override + public long getRowCount(Session session) { + return rowCount; + } + + @Override + public long getRowCountApproximation() { + return rowCount; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/TreeNode.java b/modules/h2/src/main/java/org/h2/index/TreeNode.java new file mode 100644 index 0000000000000..db1ec8c8e8c77 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/TreeNode.java @@ -0,0 +1,54 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.result.Row; + +/** + * Represents a index node of a tree index. + */ +class TreeNode { + + /** + * The balance. For more information, see the AVL tree documentation. + */ + int balance; + + /** + * The left child node or null. + */ + TreeNode left; + + /** + * The right child node or null. + */ + TreeNode right; + + /** + * The parent node or null if this is the root node. + */ + TreeNode parent; + + /** + * The row. 
+ */ + final Row row; + + TreeNode(Row row) { + this.row = row; + } + + /** + * Check if this node is the left child of its parent. This method returns + * true if this is the root node. + * + * @return true if this node is the root or a left child + */ + boolean isFromLeft() { + return parent == null || parent.left == this; + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/ViewCursor.java b/modules/h2/src/main/java/org/h2/index/ViewCursor.java new file mode 100644 index 0000000000000..422a74be69811 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/ViewCursor.java @@ -0,0 +1,87 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.index; + +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.table.Table; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * The cursor implementation of a view index. 
+ */ +public class ViewCursor implements Cursor { + + private final Table table; + private final ViewIndex index; + private final ResultInterface result; + private final SearchRow first, last; + private Row current; + + public ViewCursor(ViewIndex index, ResultInterface result, SearchRow first, + SearchRow last) { + this.table = index.getTable(); + this.index = index; + this.result = result; + this.first = first; + this.last = last; + } + + @Override + public Row get() { + return current; + } + + @Override + public SearchRow getSearchRow() { + return current; + } + + @Override + public boolean next() { + while (true) { + boolean res = result.next(); + if (!res) { + if (index.isRecursive()) { + result.reset(); + } else { + result.close(); + } + current = null; + return false; + } + current = table.getTemplateRow(); + Value[] values = result.currentRow(); + for (int i = 0, len = current.getColumnCount(); i < len; i++) { + Value v = i < values.length ? values[i] : ValueNull.INSTANCE; + current.setValue(i, v); + } + int comp; + if (first != null) { + comp = index.compareRows(current, first); + if (comp < 0) { + continue; + } + } + if (last != null) { + comp = index.compareRows(current, last); + if (comp > 0) { + continue; + } + } + return true; + } + } + + @Override + public boolean previous() { + throw DbException.throwInternalError(toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/index/ViewIndex.java b/modules/h2/src/main/java/org/h2/index/ViewIndex.java new file mode 100644 index 0000000000000..140b4a6890584 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/ViewIndex.java @@ -0,0 +1,450 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.index; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.concurrent.TimeUnit; +import org.h2.api.ErrorCode; +import org.h2.command.Parser; +import org.h2.command.Prepared; +import org.h2.command.dml.Query; +import org.h2.command.dml.SelectUnion; +import org.h2.engine.Constants; +import org.h2.engine.Session; +import org.h2.expression.Comparison; +import org.h2.expression.Parameter; +import org.h2.message.DbException; +import org.h2.result.LocalResult; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.JoinBatch; +import org.h2.table.TableFilter; +import org.h2.table.TableView; +import org.h2.util.IntArray; +import org.h2.util.New; +import org.h2.value.Value; + +/** + * This object represents a virtual index for a query. + * Actually it only represents a prepared SELECT statement. + */ +public class ViewIndex extends BaseIndex implements SpatialIndex { + + private static final long MAX_AGE_NANOS = + TimeUnit.MILLISECONDS.toNanos(Constants.VIEW_COST_CACHE_MAX_AGE); + + private final TableView view; + private final String querySQL; + private final ArrayList<Parameter> originalParameters; + private boolean recursive; + private final int[] indexMasks; + private Query query; + private final Session createSession; + + /** + * The time in nanoseconds when this index (and its cost) was calculated. + */ + private final long evaluatedAt; + + /** + * Constructor for the original index in {@link TableView}.
+ * + * @param view the table view + * @param querySQL the query SQL + * @param originalParameters the original parameters + * @param recursive if the view is recursive + */ + public ViewIndex(TableView view, String querySQL, + ArrayList<Parameter> originalParameters, boolean recursive) { + initBaseIndex(view, 0, null, null, IndexType.createNonUnique(false)); + this.view = view; + this.querySQL = querySQL; + this.originalParameters = originalParameters; + this.recursive = recursive; + columns = new Column[0]; + this.createSession = null; + this.indexMasks = null; + // this is a main index of TableView, it does not need eviction time + // stamp + evaluatedAt = Long.MIN_VALUE; + } + + /** + * Constructor for plan item generation. Over this index the query will be + * executed. + * + * @param view the table view + * @param index the view index + * @param session the session + * @param masks the masks + * @param filters table filters + * @param filter current filter + * @param sortOrder sort order + */ + public ViewIndex(TableView view, ViewIndex index, Session session, + int[] masks, TableFilter[] filters, int filter, SortOrder sortOrder) { + initBaseIndex(view, 0, null, null, IndexType.createNonUnique(false)); + this.view = view; + this.querySQL = index.querySQL; + this.originalParameters = index.originalParameters; + this.recursive = index.recursive; + this.indexMasks = masks; + this.createSession = session; + columns = new Column[0]; + if (!recursive) { + query = getQuery(session, masks, filters, filter, sortOrder); + } + // we don't need eviction for recursive views since we can't calculate + // their cost if it is a sub-query we don't need eviction as well + // because the whole ViewIndex cache is getting dropped in + // Session.prepareLocal + evaluatedAt = recursive || view.getTopQuery() != null ?
Long.MAX_VALUE : System.nanoTime(); + } + + @Override + public IndexLookupBatch createLookupBatch(TableFilter[] filters, int filter) { + if (recursive) { + // we do not support batching for recursive queries + return null; + } + return JoinBatch.createViewIndexLookupBatch(this); + } + + public Session getSession() { + return createSession; + } + + public boolean isExpired() { + assert evaluatedAt != Long.MIN_VALUE : "must not be called for main index of TableView"; + return !recursive && view.getTopQuery() == null && + System.nanoTime() - evaluatedAt > MAX_AGE_NANOS; + } + + @Override + public String getPlanSQL() { + return query == null ? null : query.getPlanSQL(); + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void add(Session session, Row row) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public void remove(Session session, Row row) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return recursive ? 
1000 : query.getCost(); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return find(session, first, last, null); + } + + @Override + public Cursor findByGeometry(TableFilter filter, SearchRow first, + SearchRow last, SearchRow intersection) { + return find(filter.getSession(), first, last, intersection); + } + + private static Query prepareSubQuery(String sql, Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder) { + Prepared p; + session.pushSubQueryInfo(masks, filters, filter, sortOrder); + try { + p = session.prepare(sql, true, true); + } finally { + session.popSubQueryInfo(); + } + return (Query) p; + } + + private Cursor findRecursive(SearchRow first, SearchRow last) { + assert recursive; + ResultInterface recursiveResult = view.getRecursiveResult(); + if (recursiveResult != null) { + recursiveResult.reset(); + return new ViewCursor(this, recursiveResult, first, last); + } + if (query == null) { + Parser parser = new Parser(createSession); + parser.setRightsChecked(true); + parser.setSuppliedParameterList(originalParameters); + query = (Query) parser.prepare(querySQL); + query.setNeverLazy(true); + } + if (!query.isUnion()) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_2, + "recursive queries without UNION"); + } + SelectUnion union = (SelectUnion) query; + Query left = union.getLeft(); + left.setNeverLazy(true); + // to ensure the last result is not closed + left.disableCache(); + ResultInterface resultInterface = left.query(0); + LocalResult localResult = union.getEmptyResult(); + // ensure it is not written to disk, + // because it is not closed normally + localResult.setMaxMemoryRows(Integer.MAX_VALUE); + while (resultInterface.next()) { + Value[] cr = resultInterface.currentRow(); + localResult.addRow(cr); + } + Query right = union.getRight(); + right.setNeverLazy(true); + resultInterface.reset(); + view.setRecursiveResult(resultInterface); + // to ensure the last result is 
not closed + right.disableCache(); + while (true) { + resultInterface = right.query(0); + if (!resultInterface.hasNext()) { + break; + } + while (resultInterface.next()) { + Value[] cr = resultInterface.currentRow(); + localResult.addRow(cr); + } + resultInterface.reset(); + view.setRecursiveResult(resultInterface); + } + view.setRecursiveResult(null); + localResult.done(); + return new ViewCursor(this, localResult, first, last); + } + + /** + * Set the query parameters. + * + * @param session the session + * @param first the lower bound + * @param last the upper bound + * @param intersection the intersection + */ + public void setupQueryParameters(Session session, SearchRow first, SearchRow last, + SearchRow intersection) { + ArrayList<Parameter> paramList = query.getParameters(); + if (originalParameters != null) { + for (Parameter orig : originalParameters) { + int idx = orig.getIndex(); + Value value = orig.getValue(session); + setParameter(paramList, idx, value); + } + } + int len; + if (first != null) { + len = first.getColumnCount(); + } else if (last != null) { + len = last.getColumnCount(); + } else if (intersection != null) { + len = intersection.getColumnCount(); + } else { + len = 0; + } + int idx = view.getParameterOffset(originalParameters); + for (int i = 0; i < len; i++) { + int mask = indexMasks[i]; + if ((mask & IndexCondition.EQUALITY) != 0) { + setParameter(paramList, idx++, first.getValue(i)); + } + if ((mask & IndexCondition.START) != 0) { + setParameter(paramList, idx++, first.getValue(i)); + } + if ((mask & IndexCondition.END) != 0) { + setParameter(paramList, idx++, last.getValue(i)); + } + if ((mask & IndexCondition.SPATIAL_INTERSECTS) != 0) { + setParameter(paramList, idx++, intersection.getValue(i)); + } + } + } + + private Cursor find(Session session, SearchRow first, SearchRow last, + SearchRow intersection) { + if (recursive) { + return findRecursive(first, last); + } + setupQueryParameters(session, first, last, intersection); + ResultInterface
result = query.query(0); + return new ViewCursor(this, result, first, last); + } + + private static void setParameter(ArrayList<Parameter> paramList, int x, + Value v) { + if (x >= paramList.size()) { + // the parameter may be optimized away as in + // select * from (select null as x) where x=1; + return; + } + Parameter param = paramList.get(x); + param.setValue(v); + } + + public Query getQuery() { + return query; + } + + private Query getQuery(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder) { + Query q = prepareSubQuery(querySQL, session, masks, filters, filter, sortOrder); + if (masks == null) { + return q; + } + if (!q.allowGlobalConditions()) { + return q; + } + int firstIndexParam = view.getParameterOffset(originalParameters); + // the column index of each parameter + // (for example: paramColumnIndex {0, 0} mean + // param[0] is column 0, and param[1] is also column 0) + IntArray paramColumnIndex = new IntArray(); + int indexColumnCount = 0; + for (int i = 0; i < masks.length; i++) { + int mask = masks[i]; + if (mask == 0) { + continue; + } + indexColumnCount++; + // the number of parameters depends on the mask; + // for range queries it is 2: >= x AND <= y + // but bitMask could also be 7 (=, and <=, and >=) + int bitCount = Integer.bitCount(mask); + for (int j = 0; j < bitCount; j++) { + paramColumnIndex.add(i); + } + } + int len = paramColumnIndex.size(); + ArrayList<Column> columnList = New.arrayList(); + for (int i = 0; i < len;) { + int idx = paramColumnIndex.get(i); + columnList.add(table.getColumn(idx)); + int mask = masks[idx]; + if ((mask & IndexCondition.EQUALITY) != 0) { + Parameter param = new Parameter(firstIndexParam + i); + q.addGlobalCondition(param, idx, Comparison.EQUAL_NULL_SAFE); + i++; + } + if ((mask & IndexCondition.START) != 0) { + Parameter param = new Parameter(firstIndexParam + i); + q.addGlobalCondition(param, idx, Comparison.BIGGER_EQUAL); + i++; + } + if ((mask & IndexCondition.END) != 0) { + Parameter
param = new Parameter(firstIndexParam + i); + q.addGlobalCondition(param, idx, Comparison.SMALLER_EQUAL); + i++; + } + if ((mask & IndexCondition.SPATIAL_INTERSECTS) != 0) { + Parameter param = new Parameter(firstIndexParam + i); + q.addGlobalCondition(param, idx, Comparison.SPATIAL_INTERSECTS); + i++; + } + } + columns = columnList.toArray(new Column[0]); + + // reconstruct the index columns from the masks + this.indexColumns = new IndexColumn[indexColumnCount]; + this.columnIds = new int[indexColumnCount]; + for (int type = 0, indexColumnId = 0; type < 2; type++) { + for (int i = 0; i < masks.length; i++) { + int mask = masks[i]; + if (mask == 0) { + continue; + } + if (type == 0) { + if ((mask & IndexCondition.EQUALITY) == 0) { + // the first columns need to be equality conditions + continue; + } + } else { + if ((mask & IndexCondition.EQUALITY) != 0) { + // after that only range conditions + continue; + } + } + IndexColumn c = new IndexColumn(); + c.column = table.getColumn(i); + indexColumns[indexColumnId] = c; + columnIds[indexColumnId] = c.column.getColumnId(); + indexColumnId++; + } + } + + String sql = q.getPlanSQL(); + q = prepareSubQuery(sql, session, masks, filters, filter, sortOrder); + return q; + } + + @Override + public void remove(Session session) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.getUnsupportedException("VIEW"); + } + + public void setRecursive(boolean value) { + this.recursive = value; + } + + @Override + public long getRowCount(Session session) { + return 0; + } + + @Override 
+ public long getRowCountApproximation() { + return 0; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + public boolean isRecursive() { + return recursive; + } +} diff --git a/modules/h2/src/main/java/org/h2/index/package.html b/modules/h2/src/main/java/org/h2/index/package.html new file mode 100644 index 0000000000000..03bfb92f2447c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/index/package.html @@ -0,0 +1,14 @@
+<!--
+Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+and the EPL 1.0 (http://h2database.com/html/license.html).
+Initial Developer: H2 Group
+-->
+<html><head>
+    <meta http-equiv="Content-Type" content="text/html;charset=utf-8" /><title>
+Javadoc package documentation
+</title></head><body style="font: 9pt/130% Tahoma, Arial, Helvetica, sans-serif; margin: 0px;"><p>
+
+Various table index implementations, as well as cursors to navigate in an index.
+
+</p></body></html>
    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcArray.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcArray.java new file mode 100644 index 0000000000000..b34d24b96919b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcArray.java @@ -0,0 +1,299 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.sql.Array; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Types; +import java.util.Arrays; +import java.util.Map; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.tools.SimpleResultSet; +import org.h2.value.Value; + +/** + * Represents an ARRAY value. + */ +public class JdbcArray extends TraceObject implements Array { + + private Value value; + private final JdbcConnection conn; + + /** + * INTERNAL + */ + public JdbcArray(JdbcConnection conn, Value value, int id) { + setTrace(conn.getSession().getTrace(), TraceObject.ARRAY, id); + this.conn = conn; + this.value = value; + } + + /** + * Returns the value as a Java array. + * This method always returns an Object[]. + * + * @return the Object array + */ + @Override + public Object getArray() throws SQLException { + try { + debugCodeCall("getArray"); + checkClosed(); + return get(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value as a Java array. + * This method always returns an Object[]. + * + * @param map is ignored. 
Only empty or null maps are supported + * @return the Object array + */ + @Override + public Object getArray(Map<String, Class<?>> map) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getArray("+quoteMap(map)+");"); + } + JdbcConnection.checkMap(map); + checkClosed(); + return get(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value as a Java array. A subset of the array is returned, + * starting from the index (1 meaning the first element) and up to the given + * object count. This method always returns an Object[]. + * + * @param index the start index of the subset (starting with 1) + * @param count the maximum number of values + * @return the Object array + */ + @Override + public Object getArray(long index, int count) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getArray(" + index + ", " + count + ");"); + } + checkClosed(); + return get(index, count); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value as a Java array. A subset of the array is returned, + * starting from the index (1 meaning the first element) and up to the given + * object count. This method always returns an Object[]. + * + * @param index the start index of the subset (starting with 1) + * @param count the maximum number of values + * @param map is ignored. Only empty or null maps are supported + * @return the Object array + */ + @Override + public Object getArray(long index, int count, Map<String, Class<?>> map) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getArray(" + index + ", " + count + ", " + quoteMap(map)+");"); + } + checkClosed(); + JdbcConnection.checkMap(map); + return get(index, count); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the base type of the array. This database does support mixed type + * arrays and therefore there is no base type.
+ * + * @return Types.NULL + */ + @Override + public int getBaseType() throws SQLException { + try { + debugCodeCall("getBaseType"); + checkClosed(); + return Types.NULL; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the base type name of the array. This database does support mixed + * type arrays and therefore there is no base type. + * + * @return "NULL" + */ + @Override + public String getBaseTypeName() throws SQLException { + try { + debugCodeCall("getBaseTypeName"); + checkClosed(); + return "NULL"; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value as a result set. + * The first column contains the index + * (starting with 1) and the second column the value. + * + * @return the result set + */ + @Override + public ResultSet getResultSet() throws SQLException { + try { + debugCodeCall("getResultSet"); + checkClosed(); + return getResultSet(get(), 0); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value as a result set. The first column contains the index + * (starting with 1) and the second column the value. + * + * @param map is ignored. Only empty or null maps are supported + * @return the result set + */ + @Override + public ResultSet getResultSet(Map<String, Class<?>> map) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getResultSet("+quoteMap(map)+");"); + } + checkClosed(); + JdbcConnection.checkMap(map); + return getResultSet(get(), 0); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value as a result set. The first column contains the index + * (starting with 1) and the second column the value. A subset of the array + * is returned, starting from the index (1 meaning the first element) and + * up to the given object count. 
+ * + * @param index the start index of the subset (starting with 1) + * @param count the maximum number of values + * @return the result set + */ + @Override + public ResultSet getResultSet(long index, int count) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getResultSet("+index+", " + count+");"); + } + checkClosed(); + return getResultSet(get(index, count), index - 1); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value as a result set. + * The first column contains the index + * (starting with 1) and the second column the value. + * A subset of the array is returned, starting from the index + * (1 meaning the first element) and up to the given object count. + * + * @param index the start index of the subset (starting with 1) + * @param count the maximum number of values + * @param map is ignored. Only empty or null maps are supported + * @return the result set + */ + @Override + public ResultSet getResultSet(long index, int count, + Map<String, Class<?>> map) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getResultSet("+index+", " + count+", " + quoteMap(map)+");"); + } + checkClosed(); + JdbcConnection.checkMap(map); + return getResultSet(get(index, count), index - 1); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Release all resources of this object. 
+ */ + @Override + public void free() { + debugCodeCall("free"); + value = null; + } + + private static ResultSet getResultSet(Object[] array, long offset) { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("INDEX", Types.BIGINT, 0, 0); + // TODO array result set: there are multiple data types possible + rs.addColumn("VALUE", Types.NULL, 0, 0); + for (int i = 0; i < array.length; i++) { + rs.addRow(offset + i + 1, array[i]); + } + return rs; + } + + private void checkClosed() { + conn.checkClosed(); + if (value == null) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + } + + private Object[] get() { + return (Object[]) value.convertTo(Value.ARRAY).getObject(); + } + + private Object[] get(long index, int count) { + Object[] array = get(); + if (count < 0 || count > array.length) { + throw DbException.getInvalidValueException("count (1.." + + array.length + ")", count); + } + if (index < 1 || index > array.length) { + throw DbException.getInvalidValueException("index (1.." + + array.length + ")", index); + } + int offset = (int) (index - 1); + return Arrays.copyOfRange(array, offset, offset + count); + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return value == null ? "null" : + (getTraceObjectName() + ": " + value.getTraceSQL()); + } +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcBatchUpdateException.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcBatchUpdateException.java new file mode 100644 index 0000000000000..9860f385b302d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcBatchUpdateException.java @@ -0,0 +1,66 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.io.PrintStream; +import java.io.PrintWriter; +import java.sql.BatchUpdateException; +import java.sql.SQLException; + +/** + * Represents a batch update database exception. 
+ */ +public class JdbcBatchUpdateException extends BatchUpdateException { + + private static final long serialVersionUID = 1L; + + /** + * INTERNAL + */ + JdbcBatchUpdateException(SQLException next, int[] updateCounts) { + super(next.getMessage(), next.getSQLState(), next.getErrorCode(), updateCounts); + setNextException(next); + } + + /** + * INTERNAL + */ + @Override + public void printStackTrace() { + // The default implementation already does that, + // but we do it again to avoid problems. + // If it is not implemented, somebody might implement it + // later on which would be a problem if done in the wrong way. + printStackTrace(System.err); + } + + /** + * INTERNAL + */ + @Override + public void printStackTrace(PrintWriter s) { + if (s != null) { + super.printStackTrace(s); + if (getNextException() != null) { + getNextException().printStackTrace(s); + } + } + } + + /** + * INTERNAL + */ + @Override + public void printStackTrace(PrintStream s) { + if (s != null) { + super.printStackTrace(s); + if (getNextException() != null) { + getNextException().printStackTrace(s); + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcBlob.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcBlob.java new file mode 100644 index 0000000000000..bc58bbd153560 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcBlob.java @@ -0,0 +1,352 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.PipedInputStream; +import java.io.PipedOutputStream; +import java.sql.Blob; +import java.sql.SQLException; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.util.IOUtils; +import org.h2.util.Task; +import org.h2.value.Value; + +/** + * Represents a BLOB value. + */ +public class JdbcBlob extends TraceObject implements Blob { + + Value value; + private final JdbcConnection conn; + + /** + * INTERNAL + */ + public JdbcBlob(JdbcConnection conn, Value value, int id) { + setTrace(conn.getSession().getTrace(), TraceObject.BLOB, id); + this.conn = conn; + this.value = value; + } + + /** + * Returns the length. + * + * @return the length + */ + @Override + public long length() throws SQLException { + try { + debugCodeCall("length"); + checkClosed(); + if (value.getType() == Value.BLOB) { + long precision = value.getPrecision(); + if (precision > 0) { + return precision; + } + } + return IOUtils.copyAndCloseInput(value.getInputStream(), null); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Truncates the object. + * + * @param len the new length + */ + @Override + public void truncate(long len) throws SQLException { + throw unsupported("LOB update"); + } + + /** + * Returns some bytes of the object. 
+ * + * @param pos the index, the first byte is at position 1 + * @param length the number of bytes + * @return the bytes, at most length bytes + */ + @Override + public byte[] getBytes(long pos, int length) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getBytes("+pos+", "+length+");"); + } + checkClosed(); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + try (InputStream in = value.getInputStream()) { + IOUtils.skipFully(in, pos - 1); + IOUtils.copy(in, out, length); + } + return out.toByteArray(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Fills the Blob. This is only supported for new, empty Blob objects that + * were created with Connection.createBlob(). The position + * must be 1, meaning the whole Blob data is set. + * + * @param pos where to start writing (the first byte is at position 1) + * @param bytes the bytes to set + * @return the length of the added data + */ + @Override + public int setBytes(long pos, byte[] bytes) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setBytes("+pos+", "+quoteBytes(bytes)+");"); + } + checkClosed(); + if (pos != 1) { + throw DbException.getInvalidValueException("pos", pos); + } + value = conn.createBlob(new ByteArrayInputStream(bytes), -1); + return bytes.length; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Sets some bytes of the object. 
+ * + * @param pos the write position + * @param bytes the bytes to set + * @param offset the bytes offset + * @param len the number of bytes to write + * @return how many bytes have been written + */ + @Override + public int setBytes(long pos, byte[] bytes, int offset, int len) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setBytes(" + pos + ", " + quoteBytes(bytes) + ", " + offset + ", " + len + ");"); + } + checkClosed(); + if (pos != 1) { + throw DbException.getInvalidValueException("pos", pos); + } + value = conn.createBlob(new ByteArrayInputStream(bytes, offset, len), -1); + return (int) value.getPrecision(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the input stream. + * + * @return the input stream + */ + @Override + public InputStream getBinaryStream() throws SQLException { + try { + debugCodeCall("getBinaryStream"); + checkClosed(); + return value.getInputStream(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Get a writer to update the Blob. This is only supported for new, empty + * Blob objects that were created with Connection.createBlob(). The Blob is + * created in a separate thread, and the object is only updated when + * OutputStream.close() is called. The position must be 1, meaning the whole + * Blob data is set. 
+ * + * @param pos where to start writing (the first byte is at position 1) + * @return an output stream + */ + @Override + public OutputStream setBinaryStream(long pos) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setBinaryStream("+pos+");"); + } + checkClosed(); + if (pos != 1) { + throw DbException.getInvalidValueException("pos", pos); + } + if (value.getPrecision() != 0) { + throw DbException.getInvalidValueException("length", value.getPrecision()); + } + final JdbcConnection c = conn; // local variable avoids generating synthetic accessor method + final PipedInputStream in = new PipedInputStream(); + final Task task = new Task() { + @Override + public void call() { + value = c.createBlob(in, -1); + } + }; + PipedOutputStream out = new PipedOutputStream(in) { + @Override + public void close() throws IOException { + super.close(); + try { + task.get(); + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + }; + task.execute(); + return new BufferedOutputStream(out); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Searches a pattern and return the position. 
+ * + * @param pattern the pattern to search + * @param start the index, the first byte is at position 1 + * @return the position (first byte is at position 1), or -1 for not found + */ + @Override + public long position(byte[] pattern, long start) throws SQLException { + if (isDebugEnabled()) { + debugCode("position("+quoteBytes(pattern)+", "+start+");"); + } + if (Constants.BLOB_SEARCH) { + try { + checkClosed(); + if (pattern == null) { + return -1; + } + if (pattern.length == 0) { + return 1; + } + // TODO performance: blob pattern search is slow + BufferedInputStream in = new BufferedInputStream(value.getInputStream()); + IOUtils.skipFully(in, start - 1); + int pos = 0; + int patternPos = 0; + while (true) { + int x = in.read(); + if (x < 0) { + break; + } + if (x == (pattern[patternPos] & 0xff)) { + if (patternPos == 0) { + in.mark(pattern.length); + } + if (patternPos == pattern.length) { + return pos - patternPos; + } + patternPos++; + } else { + if (patternPos > 0) { + in.reset(); + pos -= patternPos; + } + } + pos++; + } + return -1; + } catch (Exception e) { + throw logAndConvert(e); + } + } + throw unsupported("LOB search"); + } + + /** + * [Not supported] Searches a pattern and return the position. 
+ * + * @param blobPattern the pattern to search + * @param start the index, the first byte is at position 1 + * @return the position (first byte is at position 1), or -1 for not found + */ + @Override + public long position(Blob blobPattern, long start) throws SQLException { + if (isDebugEnabled()) { + debugCode("position(blobPattern, "+start+");"); + } + if (Constants.BLOB_SEARCH) { + try { + checkClosed(); + if (blobPattern == null) { + return -1; + } + ByteArrayOutputStream out = new ByteArrayOutputStream(); + InputStream in = blobPattern.getBinaryStream(); + while (true) { + int x = in.read(); + if (x < 0) { + break; + } + out.write(x); + } + return position(out.toByteArray(), start); + } catch (Exception e) { + throw logAndConvert(e); + } + } + throw unsupported("LOB subset"); + } + + /** + * Release all resources of this object. + */ + @Override + public void free() { + debugCodeCall("free"); + value = null; + } + + /** + * Returns the input stream, starting from an offset. + * + * @param pos where to start reading + * @param length the number of bytes that will be read + * @return the input stream to read + */ + @Override + public InputStream getBinaryStream(long pos, long length) throws SQLException { + try { + debugCodeCall("getBinaryStream(pos, length)"); + checkClosed(); + return value.getInputStream(pos, length); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + private void checkClosed() { + conn.checkClosed(); + if (value == null) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": " + + (value == null ? 
"null" : value.getTraceSQL()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcCallableStatement.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcCallableStatement.java new file mode 100644 index 0000000000000..d0ceaa27d15f8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcCallableStatement.java @@ -0,0 +1,1701 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.io.InputStream; +import java.io.Reader; +import java.math.BigDecimal; +import java.net.URL; +import java.sql.Array; +import java.sql.Blob; +import java.sql.CallableStatement; +import java.sql.Clob; +import java.sql.Date; +import java.sql.NClob; +import java.sql.Ref; +import java.sql.ResultSetMetaData; +import java.sql.RowId; +import java.sql.SQLException; +import java.sql.SQLXML; +import java.sql.Time; +import java.sql.Timestamp; +import java.util.Calendar; +import java.util.HashMap; +import java.util.Map; +import org.h2.api.ErrorCode; +import org.h2.expression.ParameterInterface; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.util.BitField; +import org.h2.value.ValueNull; + +/** + * Represents a callable statement. + * + * @author Sergi Vladykin + * @author Thomas Mueller + */ +public class JdbcCallableStatement extends JdbcPreparedStatement implements + CallableStatement, JdbcCallableStatementBackwardsCompat { + + private BitField outParameters; + private int maxOutParameters; + private HashMap<String, Integer> namedParameters; + + JdbcCallableStatement(JdbcConnection conn, String sql, int id, + int resultSetType, int resultSetConcurrency) { + super(conn, sql, id, resultSetType, resultSetConcurrency, false, false); + setTrace(session.getTrace(), TraceObject.CALLABLE_STATEMENT, id); + } + + /** + * Executes a statement (insert, update, delete, create, drop) + * and returns the update count. 
+ * If another result set exists for this statement, this will be closed + * (even if this statement fails). + * + * If auto commit is on, this statement will be committed. + * If the statement is a DDL statement (create, drop, alter) and does not + * throw an exception, the current transaction (if any) is committed after + * executing the statement. + * + * @return the update count (number of row affected by an insert, update or + * delete, or 0 if no rows or the statement was a create, drop, + * commit or rollback) + * @throws SQLException if this object is closed or invalid + */ + @Override + public int executeUpdate() throws SQLException { + try { + checkClosed(); + if (command.isQuery()) { + super.executeQuery(); + return 0; + } + return super.executeUpdate(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Executes a statement (insert, update, delete, create, drop) + * and returns the update count. + * If another result set exists for this statement, this will be closed + * (even if this statement fails). + * + * If auto commit is on, this statement will be committed. + * If the statement is a DDL statement (create, drop, alter) and does not + * throw an exception, the current transaction (if any) is committed after + * executing the statement. + * + * @return the update count (number of row affected by an insert, update or + * delete, or 0 if no rows or the statement was a create, drop, + * commit or rollback) + * @throws SQLException if this object is closed or invalid + */ + @Override + public long executeLargeUpdate() throws SQLException { + try { + checkClosed(); + if (command.isQuery()) { + super.executeQuery(); + return 0; + } + return super.executeLargeUpdate(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Registers the given OUT parameter. + * + * @param parameterIndex the parameter index (1, 2, ...) 
+ * @param sqlType the data type (Types.x) - ignored + */ + @Override + public void registerOutParameter(int parameterIndex, int sqlType) + throws SQLException { + registerOutParameter(parameterIndex); + } + + /** + * Registers the given OUT parameter. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param sqlType the data type (Types.x) - ignored + * @param typeName the SQL type name - ignored + */ + @Override + public void registerOutParameter(int parameterIndex, int sqlType, + String typeName) throws SQLException { + registerOutParameter(parameterIndex); + } + + /** + * Registers the given OUT parameter. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param sqlType the data type (Types.x) + * @param scale is ignored + */ + @Override + public void registerOutParameter(int parameterIndex, int sqlType, int scale) + throws SQLException { + registerOutParameter(parameterIndex); + } + + /** + * Registers the given OUT parameter. + * + * @param parameterName the parameter name + * @param sqlType the data type (Types.x) - ignored + * @param typeName the SQL type name - ignored + */ + @Override + public void registerOutParameter(String parameterName, int sqlType, + String typeName) throws SQLException { + registerOutParameter(getIndexForName(parameterName), sqlType, typeName); + } + + /** + * Registers the given OUT parameter. + * + * @param parameterName the parameter name + * @param sqlType the data type (Types.x) - ignored + * @param scale is ignored + */ + @Override + public void registerOutParameter(String parameterName, int sqlType, + int scale) throws SQLException { + registerOutParameter(getIndexForName(parameterName), sqlType, scale); + } + + /** + * Registers the given OUT parameter. 
+ * + * @param parameterName the parameter name + * @param sqlType the data type (Types.x) - ignored + */ + @Override + public void registerOutParameter(String parameterName, int sqlType) + throws SQLException { + registerOutParameter(getIndexForName(parameterName), sqlType); + } + + /** + * Returns whether the last column accessed was null. + * + * @return true if the last column accessed was null + */ + @Override + public boolean wasNull() throws SQLException { + return getOpenResultSet().wasNull(); + } + + /** + * [Not supported] + */ + @Override + public URL getURL(int parameterIndex) throws SQLException { + throw unsupported("url"); + } + + /** + * Returns the value of the specified column as a String. + * + * @param parameterIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public String getString(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getString(parameterIndex); + } + + /** + * Returns the value of the specified column as a boolean. + * + * @param parameterIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public boolean getBoolean(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getBoolean(parameterIndex); + } + + /** + * Returns the value of the specified column as a byte. + * + * @param parameterIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public byte getByte(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getByte(parameterIndex); + } + + /** + * Returns the value of the specified column as a short. + * + * @param parameterIndex (1,2,...) 
+ * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public short getShort(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getShort(parameterIndex); + } + + /** + * Returns the value of the specified column as an int. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public int getInt(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getInt(parameterIndex); + } + + /** + * Returns the value of the specified column as a long. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public long getLong(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getLong(parameterIndex); + } + + /** + * Returns the value of the specified column as a float. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public float getFloat(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getFloat(parameterIndex); + } + + /** + * Returns the value of the specified column as a double. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public double getDouble(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getDouble(parameterIndex); + } + + /** + * Returns the value of the specified column as a BigDecimal. 
+ * + * @deprecated use {@link #getBigDecimal(int)} + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param scale is ignored + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Deprecated + @Override + public BigDecimal getBigDecimal(int parameterIndex, int scale) + throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getBigDecimal(parameterIndex, scale); + } + + /** + * Returns the value of the specified column as a byte array. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public byte[] getBytes(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getBytes(parameterIndex); + } + + /** + * Returns the value of the specified column as a java.sql.Date. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Date getDate(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getDate(parameterIndex); + } + + /** + * Returns the value of the specified column as a java.sql.Time. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Time getTime(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getTime(parameterIndex); + } + + /** + * Returns the value of the specified column as a java.sql.Timestamp. + * + * @param parameterIndex the parameter index (1, 2, ...) 
+ * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Timestamp getTimestamp(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getTimestamp(parameterIndex); + } + + /** + * Returns a column value as a Java object. The data is + * de-serialized into a Java object (on the client side). + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value or null + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Object getObject(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getObject(parameterIndex); + } + + /** + * Returns the value of the specified column as a BigDecimal. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public BigDecimal getBigDecimal(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getBigDecimal(parameterIndex); + } + + /** + * [Not supported] Gets a column as an object using the specified type + * mapping. + */ + @Override + public Object getObject(int parameterIndex, Map<String, Class<?>> map) + throws SQLException { + throw unsupported("map"); + } + + /** + * [Not supported] Gets a column as a reference. + */ + @Override + public Ref getRef(int parameterIndex) throws SQLException { + throw unsupported("ref"); + } + + /** + * Returns the value of the specified column as a Blob. + * + * @param parameterIndex the parameter index (1, 2, ...) 
+ * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Blob getBlob(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getBlob(parameterIndex); + } + + /** + * Returns the value of the specified column as a Clob. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Clob getClob(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getClob(parameterIndex); + } + + /** + * Returns the value of the specified column as an Array. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Array getArray(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getArray(parameterIndex); + } + + /** + * Returns the value of the specified column as a java.sql.Date using a + * specified time zone. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param cal the calendar + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Date getDate(int parameterIndex, Calendar cal) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getDate(parameterIndex, cal); + } + + /** + * Returns the value of the specified column as a java.sql.Time using a + * specified time zone. + * + * @param parameterIndex the parameter index (1, 2, ...) 
+ * @param cal the calendar + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Time getTime(int parameterIndex, Calendar cal) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getTime(parameterIndex, cal); + } + + /** + * Returns the value of the specified column as a java.sql.Timestamp using a + * specified time zone. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param cal the calendar + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Timestamp getTimestamp(int parameterIndex, Calendar cal) + throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getTimestamp(parameterIndex, cal); + } + + /** + * [Not supported] + */ + @Override + public URL getURL(String parameterName) throws SQLException { + throw unsupported("url"); + } + + /** + * Returns the value of the specified column as a java.sql.Timestamp using a + * specified time zone. + * + * @param parameterName the parameter name + * @param cal the calendar + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Timestamp getTimestamp(String parameterName, Calendar cal) + throws SQLException { + return getTimestamp(getIndexForName(parameterName), cal); + } + + /** + * Returns the value of the specified column as a java.sql.Time using a + * specified time zone. + * + * @param parameterName the parameter name + * @param cal the calendar + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Time getTime(String parameterName, Calendar cal) throws SQLException { + return getTime(getIndexForName(parameterName), cal); + } + + /** + * Returns the value of the specified column as a java.sql.Date using a + * specified time zone. 
+ *
+ * @param parameterName the parameter name
+ * @param cal the calendar
+ * @return the value
+ * @throws SQLException if the column is not found or if this object is
+ * closed
+ */
+ @Override
+ public Date getDate(String parameterName, Calendar cal) throws SQLException {
+ return getDate(getIndexForName(parameterName), cal);
+ }
+
+ /**
+ * Returns the value of the specified column as an Array.
+ *
+ * @param parameterName the parameter name
+ * @return the value
+ * @throws SQLException if the column is not found or if this object is
+ * closed
+ */
+ @Override
+ public Array getArray(String parameterName) throws SQLException {
+ return getArray(getIndexForName(parameterName));
+ }
+
+ /**
+ * Returns the value of the specified column as a Clob.
+ *
+ * @param parameterName the parameter name
+ * @return the value
+ * @throws SQLException if the column is not found or if this object is
+ * closed
+ */
+ @Override
+ public Clob getClob(String parameterName) throws SQLException {
+ return getClob(getIndexForName(parameterName));
+ }
+
+ /**
+ * Returns the value of the specified column as a Blob.
+ *
+ * @param parameterName the parameter name
+ * @return the value
+ * @throws SQLException if the column is not found or if this object is
+ * closed
+ */
+ @Override
+ public Blob getBlob(String parameterName) throws SQLException {
+ return getBlob(getIndexForName(parameterName));
+ }
+
+ /**
+ * [Not supported] Gets a column as a reference.
+ */
+ @Override
+ public Ref getRef(String parameterName) throws SQLException {
+ throw unsupported("ref");
+ }
+
+ /**
+ * [Not supported] Gets a column as an object using the specified type
+ * mapping.
+ */
+ @Override
+ public Object getObject(String parameterName, Map<String, Class<?>> map)
+ throws SQLException {
+ throw unsupported("map");
+ }
+
+ /**
+ * Returns the value of the specified column as a BigDecimal.
+ * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public BigDecimal getBigDecimal(String parameterName) throws SQLException { + return getBigDecimal(getIndexForName(parameterName)); + } + + /** + * Returns a column value as a Java object. The data is + * de-serialized into a Java object (on the client side). + * + * @param parameterName the parameter name + * @return the value or null + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Object getObject(String parameterName) throws SQLException { + return getObject(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a java.sql.Timestamp. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Timestamp getTimestamp(String parameterName) throws SQLException { + return getTimestamp(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a java.sql.Time. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Time getTime(String parameterName) throws SQLException { + return getTime(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a java.sql.Date. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Date getDate(String parameterName) throws SQLException { + return getDate(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a byte array. 
+ * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public byte[] getBytes(String parameterName) throws SQLException { + return getBytes(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a double. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public double getDouble(String parameterName) throws SQLException { + return getDouble(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a float. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public float getFloat(String parameterName) throws SQLException { + return getFloat(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a long. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public long getLong(String parameterName) throws SQLException { + return getLong(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as an int. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public int getInt(String parameterName) throws SQLException { + return getInt(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a short. 
+ * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public short getShort(String parameterName) throws SQLException { + return getShort(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a byte. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public byte getByte(String parameterName) throws SQLException { + return getByte(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a boolean. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public boolean getBoolean(String parameterName) throws SQLException { + return getBoolean(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a String. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public String getString(String parameterName) throws SQLException { + return getString(getIndexForName(parameterName)); + } + + /** + * [Not supported] Returns the value of the specified column as a row id. + * + * @param parameterIndex the parameter index (1, 2, ...) + */ + @Override + public RowId getRowId(int parameterIndex) throws SQLException { + throw unsupported("rowId"); + } + + /** + * [Not supported] Returns the value of the specified column as a row id. + * + * @param parameterName the parameter name + */ + @Override + public RowId getRowId(String parameterName) throws SQLException { + throw unsupported("rowId"); + } + + /** + * Returns the value of the specified column as a Clob. 
+ * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public NClob getNClob(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getNClob(parameterIndex); + } + + /** + * Returns the value of the specified column as a Clob. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public NClob getNClob(String parameterName) throws SQLException { + return getNClob(getIndexForName(parameterName)); + } + + /** + * [Not supported] Returns the value of the specified column as a SQLXML + * object. + */ + @Override + public SQLXML getSQLXML(int parameterIndex) throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * [Not supported] Returns the value of the specified column as a SQLXML + * object. + */ + @Override + public SQLXML getSQLXML(String parameterName) throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * Returns the value of the specified column as a String. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public String getNString(int parameterIndex) throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getNString(parameterIndex); + } + + /** + * Returns the value of the specified column as a String. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public String getNString(String parameterName) throws SQLException { + return getNString(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a reader. 
+ * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Reader getNCharacterStream(int parameterIndex) + throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getNCharacterStream(parameterIndex); + } + + /** + * Returns the value of the specified column as a reader. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Reader getNCharacterStream(String parameterName) + throws SQLException { + return getNCharacterStream(getIndexForName(parameterName)); + } + + /** + * Returns the value of the specified column as a reader. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Reader getCharacterStream(int parameterIndex) + throws SQLException { + checkRegistered(parameterIndex); + return getOpenResultSet().getCharacterStream(parameterIndex); + } + + /** + * Returns the value of the specified column as a reader. + * + * @param parameterName the parameter name + * @return the value + * @throws SQLException if the column is not found or if this object is + * closed + */ + @Override + public Reader getCharacterStream(String parameterName) + throws SQLException { + return getCharacterStream(getIndexForName(parameterName)); + } + + // ============================================================= + + /** + * Sets a parameter to null. 
+ * + * @param parameterName the parameter name + * @param sqlType the data type (Types.x) + * @param typeName this parameter is ignored + * @throws SQLException if this object is closed + */ + @Override + public void setNull(String parameterName, int sqlType, String typeName) + throws SQLException { + setNull(getIndexForName(parameterName), sqlType, typeName); + } + + /** + * Sets a parameter to null. + * + * @param parameterName the parameter name + * @param sqlType the data type (Types.x) + * @throws SQLException if this object is closed + */ + @Override + public void setNull(String parameterName, int sqlType) throws SQLException { + setNull(getIndexForName(parameterName), sqlType); + } + + /** + * Sets the timestamp using a specified time zone. The value will be + * converted to the local time zone. + * + * @param parameterName the parameter name + * @param x the value + * @param cal the calendar + * @throws SQLException if this object is closed + */ + @Override + public void setTimestamp(String parameterName, Timestamp x, Calendar cal) + throws SQLException { + setTimestamp(getIndexForName(parameterName), x, cal); + } + + /** + * Sets the time using a specified time zone. The value will be converted to + * the local time zone. + * + * @param parameterName the parameter name + * @param x the value + * @param cal the calendar + * @throws SQLException if this object is closed + */ + @Override + public void setTime(String parameterName, Time x, Calendar cal) + throws SQLException { + setTime(getIndexForName(parameterName), x, cal); + } + + /** + * Sets the date using a specified time zone. The value will be converted to + * the local time zone. 
+ * + * @param parameterName the parameter name + * @param x the value + * @param cal the calendar + * @throws SQLException if this object is closed + */ + @Override + public void setDate(String parameterName, Date x, Calendar cal) + throws SQLException { + setDate(getIndexForName(parameterName), x, cal); + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setCharacterStream(String parameterName, Reader x, int length) + throws SQLException { + setCharacterStream(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter. + * Objects of unknown classes are serialized (on the client side). + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setObject(String parameterName, Object x) throws SQLException { + setObject(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. The object is converted, if required, to + * the specified data type before sending to the database. + * Objects of unknown classes are serialized (on the client side). + * + * @param parameterName the parameter name + * @param x the value, null is allowed + * @param targetSqlType the type as defined in java.sql.Types + * @throws SQLException if this object is closed + */ + @Override + public void setObject(String parameterName, Object x, int targetSqlType) + throws SQLException { + setObject(getIndexForName(parameterName), x, targetSqlType); + } + + /** + * Sets the value of a parameter. The object is converted, if required, to + * the specified data type before sending to the database. 
+ * Objects of unknown classes are serialized (on the client side). + * + * @param parameterName the parameter name + * @param x the value, null is allowed + * @param targetSqlType the type as defined in java.sql.Types + * @param scale is ignored + * @throws SQLException if this object is closed + */ + @Override + public void setObject(String parameterName, Object x, int targetSqlType, + int scale) throws SQLException { + setObject(getIndexForName(parameterName), x, targetSqlType, scale); + } + + /** + * Sets the value of a parameter as an input stream. + * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of bytes + * @throws SQLException if this object is closed + */ + @Override + public void setBinaryStream(String parameterName, InputStream x, int length) + throws SQLException { + setBinaryStream(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as an ASCII stream. + * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of bytes + * @throws SQLException if this object is closed + */ + @Override + public void setAsciiStream(String parameterName, + InputStream x, long length) throws SQLException { + setAsciiStream(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setTimestamp(String parameterName, Timestamp x) + throws SQLException { + setTimestamp(getIndexForName(parameterName), x); + } + + /** + * Sets the time using a specified time zone. 
+ * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setTime(String parameterName, Time x) throws SQLException { + setTime(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setDate(String parameterName, Date x) throws SQLException { + setDate(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a byte array. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setBytes(String parameterName, byte[] x) throws SQLException { + setBytes(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setString(String parameterName, String x) throws SQLException { + setString(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setBigDecimal(String parameterName, BigDecimal x) + throws SQLException { + setBigDecimal(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setDouble(String parameterName, double x) throws SQLException { + setDouble(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. 
+ * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setFloat(String parameterName, float x) throws SQLException { + setFloat(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setLong(String parameterName, long x) throws SQLException { + setLong(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setInt(String parameterName, int x) throws SQLException { + setInt(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setShort(String parameterName, short x) throws SQLException { + setShort(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setByte(String parameterName, byte x) throws SQLException { + setByte(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter. 
+ * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setBoolean(String parameterName, boolean x) throws SQLException { + setBoolean(getIndexForName(parameterName), x); + } + + /** + * [Not supported] + */ + @Override + public void setURL(String parameterName, URL val) throws SQLException { + throw unsupported("url"); + } + + /** + * [Not supported] Sets the value of a parameter as a row id. + */ + @Override + public void setRowId(String parameterName, RowId x) + throws SQLException { + throw unsupported("rowId"); + } + + /** + * Sets the value of a parameter. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNString(String parameterName, String x) + throws SQLException { + setNString(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setNCharacterStream(String parameterName, + Reader x, long length) throws SQLException { + setNCharacterStream(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as a Clob. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNClob(String parameterName, NClob x) + throws SQLException { + setNClob(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a Clob. + * This method does not close the reader. + * The reader may be closed after executing the statement. 
+ * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setClob(String parameterName, Reader x, + long length) throws SQLException { + setClob(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as a Blob. + * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of bytes + * @throws SQLException if this object is closed + */ + @Override + public void setBlob(String parameterName, InputStream x, + long length) throws SQLException { + setBlob(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as a Clob. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setNClob(String parameterName, Reader x, + long length) throws SQLException { + setNClob(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as a Blob. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setBlob(String parameterName, Blob x) + throws SQLException { + setBlob(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a Clob. 
+ * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setClob(String parameterName, Clob x) throws SQLException { + setClob(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as an ASCII stream. + * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setAsciiStream(String parameterName, InputStream x) + throws SQLException { + setAsciiStream(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as an ASCII stream. + * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of bytes + * @throws SQLException if this object is closed + */ + @Override + public void setAsciiStream(String parameterName, + InputStream x, int length) throws SQLException { + setAsciiStream(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as an input stream. + * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setBinaryStream(String parameterName, + InputStream x) throws SQLException { + setBinaryStream(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as an input stream. + * This method does not close the stream. + * The stream may be closed after executing the statement. 
+ * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of bytes + * @throws SQLException if this object is closed + */ + @Override + public void setBinaryStream(String parameterName, + InputStream x, long length) throws SQLException { + setBinaryStream(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as a Blob. + * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setBlob(String parameterName, InputStream x) + throws SQLException { + setBlob(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setCharacterStream(String parameterName, Reader x) + throws SQLException { + setCharacterStream(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setCharacterStream(String parameterName, + Reader x, long length) throws SQLException { + setCharacterStream(getIndexForName(parameterName), x, length); + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. 
+ * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setClob(String parameterName, Reader x) throws SQLException { + setClob(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNCharacterStream(String parameterName, Reader x) + throws SQLException { + setNCharacterStream(getIndexForName(parameterName), x); + } + + /** + * Sets the value of a parameter as a Clob. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterName the parameter name + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNClob(String parameterName, Reader x) + throws SQLException { + setNClob(getIndexForName(parameterName), x); + } + + /** + * [Not supported] Sets the value of a parameter as a SQLXML object. + */ + @Override + public void setSQLXML(String parameterName, SQLXML x) + throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * [Not supported] + * + * @param parameterIndex the parameter index (1, 2, ...) 
+ * @param type the class of the returned value
+ */
+ @Override
+ public <T> T getObject(int parameterIndex, Class<T> type) throws SQLException {
+ return getOpenResultSet().getObject(parameterIndex, type);
+ }
+
+ /**
+ * [Not supported]
+ *
+ * @param parameterName the parameter name
+ * @param type the class of the returned value
+ */
+ @Override
+ public <T> T getObject(String parameterName, Class<T> type) throws SQLException {
+ return getObject(getIndexForName(parameterName), type);
+ }
+
+ private ResultSetMetaData getCheckedMetaData() throws SQLException {
+ ResultSetMetaData meta = getMetaData();
+ if (meta == null) {
+ throw DbException.getUnsupportedException(
+ "Supported only for calling stored procedures");
+ }
+ return meta;
+ }
+
+ private void checkIndexBounds(int parameterIndex) {
+ checkClosed();
+ if (parameterIndex < 1 || parameterIndex > maxOutParameters) {
+ throw DbException.getInvalidValueException("parameterIndex", parameterIndex);
+ }
+ }
+
+ private void registerOutParameter(int parameterIndex) throws SQLException {
+ try {
+ checkClosed();
+ if (outParameters == null) {
+ maxOutParameters = Math.min(
+ getParameterMetaData().getParameterCount(),
+ getCheckedMetaData().getColumnCount());
+ outParameters = new BitField();
+ }
+ checkIndexBounds(parameterIndex);
+ ParameterInterface param = command.getParameters().get(--parameterIndex);
+ if (!param.isValueSet()) {
+ param.setValue(ValueNull.INSTANCE, false);
+ }
+ outParameters.set(parameterIndex);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ private void checkRegistered(int parameterIndex) throws SQLException {
+ try {
+ checkIndexBounds(parameterIndex);
+ if (!outParameters.get(parameterIndex - 1)) {
+ throw DbException.getInvalidValueException("parameterIndex", parameterIndex);
+ }
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ private int getIndexForName(String parameterName) throws SQLException {
+ try {
+ checkClosed();
+ if (namedParameters == null) {
+
ResultSetMetaData meta = getCheckedMetaData();
+ int columnCount = meta.getColumnCount();
+ HashMap<String, Integer> map = new HashMap<>(columnCount);
+ for (int i = 1; i <= columnCount; i++) {
+ map.put(meta.getColumnLabel(i), i);
+ }
+ namedParameters = map;
+ }
+ Integer index = namedParameters.get(parameterName);
+ if (index == null) {
+ throw DbException.getInvalidValueException("parameterName", parameterName);
+ }
+ return index;
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ private JdbcResultSet getOpenResultSet() throws SQLException {
+ try {
+ checkClosed();
+ if (resultSet == null) {
+ throw DbException.get(ErrorCode.NO_DATA_AVAILABLE);
+ }
+ if (resultSet.isBeforeFirst()) {
+ resultSet.next();
+ }
+ return resultSet;
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcCallableStatementBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcCallableStatementBackwardsCompat.java new file mode 100644 index 0000000000000..8a0ff0a34356d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcCallableStatementBackwardsCompat.java @@ -0,0 +1,16 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +/** + * Allows us to compile on older platforms, while still implementing the methods + * from the newer JDBC API. + */ +public interface JdbcCallableStatementBackwardsCompat { + + // compatibility interface + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcClob.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcClob.java new file mode 100644 index 0000000000000..929b82801abaf --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcClob.java @@ -0,0 +1,320 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.PipedInputStream; +import java.io.PipedOutputStream; +import java.io.Reader; +import java.io.StringReader; +import java.io.StringWriter; +import java.io.Writer; +import java.sql.Clob; +import java.sql.NClob; +import java.sql.SQLException; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.store.RangeReader; +import org.h2.util.IOUtils; +import org.h2.util.Task; +import org.h2.value.Value; + +/** + * Represents a CLOB value. + */ +public class JdbcClob extends TraceObject implements NClob +{ + + Value value; + private final JdbcConnection conn; + + /** + * INTERNAL + */ + public JdbcClob(JdbcConnection conn, Value value, int id) { + setTrace(conn.getSession().getTrace(), TraceObject.CLOB, id); + this.conn = conn; + this.value = value; + } + + /** + * Returns the length. + * + * @return the length + */ + @Override + public long length() throws SQLException { + try { + debugCodeCall("length"); + checkClosed(); + if (value.getType() == Value.CLOB) { + long precision = value.getPrecision(); + if (precision > 0) { + return precision; + } + } + return IOUtils.copyAndCloseInput(value.getReader(), null, Long.MAX_VALUE); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Truncates the object. + */ + @Override + public void truncate(long len) throws SQLException { + throw unsupported("LOB update"); + } + + /** + * Returns the input stream. 
+ * + * @return the input stream + */ + @Override + public InputStream getAsciiStream() throws SQLException { + try { + debugCodeCall("getAsciiStream"); + checkClosed(); + String s = value.getString(); + return IOUtils.getInputStreamFromString(s); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Returns an output stream. + */ + @Override + public OutputStream setAsciiStream(long pos) throws SQLException { + throw unsupported("LOB update"); + } + + /** + * Returns the reader. + * + * @return the reader + */ + @Override + public Reader getCharacterStream() throws SQLException { + try { + debugCodeCall("getCharacterStream"); + checkClosed(); + return value.getReader(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Get a writer to update the Clob. This is only supported for new, empty + * Clob objects that were created with Connection.createClob() or + * createNClob(). The Clob is created in a separate thread, and the object + * is only updated when Writer.close() is called. The position must be 1, + * meaning the whole Clob data is set. 
+ * + * @param pos where to start writing (the first character is at position 1) + * @return a writer + */ + @Override + public Writer setCharacterStream(long pos) throws SQLException { + try { + if (isDebugEnabled()) { + debugCodeCall("setCharacterStream(" + pos + ");"); + } + checkClosed(); + if (pos != 1) { + throw DbException.getInvalidValueException("pos", pos); + } + if (value.getPrecision() != 0) { + throw DbException.getInvalidValueException("length", value.getPrecision()); + } + final JdbcConnection c = conn; // required to avoid synthetic method creation + // PipedReader / PipedWriter are a lot slower + // than PipedInputStream / PipedOutputStream + // (Sun/Oracle Java 1.6.0_20) + final PipedInputStream in = new PipedInputStream(); + final Task task = new Task() { + @Override + public void call() { + value = c.createClob(IOUtils.getReader(in), -1); + } + }; + PipedOutputStream out = new PipedOutputStream(in) { + @Override + public void close() throws IOException { + super.close(); + try { + task.get(); + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + }; + task.execute(); + return IOUtils.getBufferedWriter(out); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns a substring. 
+ * + * @param pos the position (the first character is at position 1) + * @param length the number of characters + * @return the string + */ + @Override + public String getSubString(long pos, int length) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getSubString(" + pos + ", " + length + ");"); + } + checkClosed(); + if (pos < 1) { + throw DbException.getInvalidValueException("pos", pos); + } + if (length < 0) { + throw DbException.getInvalidValueException("length", length); + } + StringWriter writer = new StringWriter( + Math.min(Constants.IO_BUFFER_SIZE, length)); + try (Reader reader = value.getReader()) { + IOUtils.skipFully(reader, pos - 1); + IOUtils.copyAndCloseInput(reader, writer, length); + } + return writer.toString(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Fills the Clob. This is only supported for new, empty Clob objects that + * were created with Connection.createClob() or createNClob(). The position + * must be 1, meaning the whole Clob data is set. + * + * @param pos where to start writing (the first character is at position 1) + * @param str the string to add + * @return the length of the added text + */ + @Override + public int setString(long pos, String str) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setString(" + pos + ", " + quote(str) + ");"); + } + checkClosed(); + if (pos != 1) { + throw DbException.getInvalidValueException("pos", pos); + } else if (str == null) { + throw DbException.getInvalidValueException("str", str); + } + value = conn.createClob(new StringReader(str), -1); + return str.length(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Fills the Clob. This is only supported for new, empty Clob objects that + * were created with Connection.createClob() or createNClob(). The position + * must be 1, meaning the whole Clob data is set. 
+ * + * @param pos where to start writing (the first character is at position 1) + * @param str the string to add + * @param offset the string offset + * @param len the number of characters to read + * @return the length of the added text + */ + @Override + public int setString(long pos, String str, int offset, int len) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setString(" + pos + ", " + quote(str) + ", " + offset + ", " + len + ");"); + } + checkClosed(); + if (pos != 1) { + throw DbException.getInvalidValueException("pos", pos); + } else if (str == null) { + throw DbException.getInvalidValueException("str", str); + } + value = conn.createClob(new RangeReader(new StringReader(str), offset, len), -1); + return (int) value.getPrecision(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Searches a pattern and return the position. + */ + @Override + public long position(String pattern, long start) throws SQLException { + throw unsupported("LOB search"); + } + + /** + * [Not supported] Searches a pattern and return the position. + */ + @Override + public long position(Clob clobPattern, long start) throws SQLException { + throw unsupported("LOB search"); + } + + /** + * Release all resources of this object. + */ + @Override + public void free() { + debugCodeCall("free"); + value = null; + } + + /** + * Returns the reader, starting from an offset. 
+ * + * @param pos 1-based offset + * @param length length of requested area + * @return the reader + */ + @Override + public Reader getCharacterStream(long pos, long length) throws SQLException { + try { + debugCodeCall("getCharacterStream(pos, length)"); + checkClosed(); + return value.getReader(pos, length); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + private void checkClosed() { + conn.checkClosed(); + if (value == null) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": " + (value == null ? + "null" : value.getTraceSQL()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcConnection.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcConnection.java new file mode 100644 index 0000000000000..f1b0eb5c1b5c8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcConnection.java @@ -0,0 +1,2119 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, and the + * EPL 1.0 (http://h2database.com/html/license.html). 
Initial Developer: H2 + * Group + */ +package org.h2.jdbc; + +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.Reader; +import java.sql.Array; +import java.sql.Blob; +import java.sql.CallableStatement; +import java.sql.ClientInfoStatus; +import java.sql.Clob; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.NClob; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLClientInfoException; +import java.sql.SQLException; +import java.sql.SQLWarning; +import java.sql.SQLXML; +import java.sql.Savepoint; +import java.sql.Statement; +import java.sql.Struct; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; +import java.util.Properties; +import java.util.concurrent.Executor; +import java.util.regex.Pattern; + +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.Constants; +import org.h2.engine.Mode; +import org.h2.engine.SessionInterface; +import org.h2.engine.SessionRemote; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.result.ResultInterface; +import org.h2.util.CloseWatcher; +import org.h2.util.JdbcUtils; +import org.h2.util.Utils; +import org.h2.value.CompareMode; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueInt; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; + +/** + *

+ * <p>
+ * Represents a connection (session) to a database.
+ * </p>
+ * <p>
+ * Thread safety: the connection is thread-safe, because access is synchronized.
+ * However, for compatibility with other databases, a connection should only be
+ * used in one thread at any time.
+ * </p>
+ */
+public class JdbcConnection extends TraceObject
+        implements Connection, JdbcConnectionBackwardsCompat {
+
+    private static final String NUM_SERVERS = "numServers";
+    private static final String PREFIX_SERVER = "server";
+
+    private static boolean keepOpenStackTrace;
+
+    private final String url;
+    private final String user;
+
+    // ResultSet.HOLD_CURSORS_OVER_COMMIT
+    private int holdability = 1;
+
+    private SessionInterface session;
+    private CommandInterface commit, rollback;
+    private CommandInterface getReadOnly, getGeneratedKeys;
+    private CommandInterface setLockMode, getLockMode;
+    private CommandInterface setQueryTimeout, getQueryTimeout;
+
+    private int savepointId;
+    private String catalog;
+    private Statement executingStatement;
+    private final CloseWatcher watcher;
+    private int queryTimeoutCache = -1;
+
+    private Map<String, String> clientInfo;
+    private String mode;
+    private final boolean scopeGeneratedKeys;
+
+    /**
+     * INTERNAL
+     */
+    public JdbcConnection(String url, Properties info) throws SQLException {
+        this(new ConnectionInfo(url, info), true);
+    }
+
+    /**
+     * INTERNAL
+     */
+    /*
+     * the session closable object does not leak as Eclipse warns - due to the
+     * CloseWatcher.
+ */ + @SuppressWarnings("resource") + public JdbcConnection(ConnectionInfo ci, boolean useBaseDir) + throws SQLException { + try { + if (useBaseDir) { + String baseDir = SysProperties.getBaseDir(); + if (baseDir != null) { + ci.setBaseDir(baseDir); + } + } + // this will return an embedded or server connection + session = new SessionRemote(ci).connectEmbeddedOrServer(false); + trace = session.getTrace(); + int id = getNextId(TraceObject.CONNECTION); + setTrace(trace, TraceObject.CONNECTION, id); + this.user = ci.getUserName(); + if (isInfoEnabled()) { + trace.infoCode("Connection " + getTraceObjectName() + + " = DriverManager.getConnection(" + + quote(ci.getOriginalURL()) + ", " + quote(user) + + ", \"\");"); + } + this.url = ci.getURL(); + scopeGeneratedKeys = ci.getProperty("SCOPE_GENERATED_KEYS", false); + closeOld(); + watcher = CloseWatcher.register(this, session, keepOpenStackTrace); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * INTERNAL + */ + public JdbcConnection(JdbcConnection clone) { + this.session = clone.session; + trace = session.getTrace(); + int id = getNextId(TraceObject.CONNECTION); + setTrace(trace, TraceObject.CONNECTION, id); + this.user = clone.user; + this.url = clone.url; + this.catalog = clone.catalog; + this.commit = clone.commit; + this.getGeneratedKeys = clone.getGeneratedKeys; + this.getLockMode = clone.getLockMode; + this.getQueryTimeout = clone.getQueryTimeout; + this.getReadOnly = clone.getReadOnly; + this.rollback = clone.rollback; + this.scopeGeneratedKeys = clone.scopeGeneratedKeys; + this.watcher = null; + if (clone.clientInfo != null) { + this.clientInfo = new HashMap<>(clone.clientInfo); + } + } + + /** + * INTERNAL + */ + public JdbcConnection(SessionInterface session, String user, String url) { + this.session = session; + trace = session.getTrace(); + int id = getNextId(TraceObject.CONNECTION); + setTrace(trace, TraceObject.CONNECTION, id); + this.user = user; + this.url = url; + 
this.scopeGeneratedKeys = false; + this.watcher = null; + } + + private void closeOld() { + while (true) { + CloseWatcher w = CloseWatcher.pollUnclosed(); + if (w == null) { + break; + } + try { + w.getCloseable().close(); + } catch (Exception e) { + trace.error(e, "closing session"); + } + // there was an unclosed object - + // keep the stack trace from now on + keepOpenStackTrace = true; + String s = w.getOpenStackTrace(); + Exception ex = DbException + .get(ErrorCode.TRACE_CONNECTION_NOT_CLOSED); + trace.error(ex, s); + } + } + + /** + * Creates a new statement. + * + * @return the new statement + * @throws SQLException if the connection is closed + */ + @Override + public Statement createStatement() throws SQLException { + try { + int id = getNextId(TraceObject.STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("Statement", TraceObject.STATEMENT, id, + "createStatement()"); + } + checkClosed(); + return new JdbcStatement(this, id, ResultSet.TYPE_FORWARD_ONLY, + Constants.DEFAULT_RESULT_SET_CONCURRENCY, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a statement with the specified result set type and concurrency. 
+ * + * @param resultSetType the result set type (ResultSet.TYPE_*) + * @param resultSetConcurrency the concurrency (ResultSet.CONCUR_*) + * @return the statement + * @throws SQLException if the connection is closed or the result set type + * or concurrency are not supported + */ + @Override + public Statement createStatement(int resultSetType, + int resultSetConcurrency) throws SQLException { + try { + int id = getNextId(TraceObject.STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("Statement", TraceObject.STATEMENT, id, + "createStatement(" + resultSetType + ", " + + resultSetConcurrency + ")"); + } + checkTypeConcurrency(resultSetType, resultSetConcurrency); + checkClosed(); + return new JdbcStatement(this, id, resultSetType, + resultSetConcurrency, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a statement with the specified result set type, concurrency, and + * holdability. + * + * @param resultSetType the result set type (ResultSet.TYPE_*) + * @param resultSetConcurrency the concurrency (ResultSet.CONCUR_*) + * @param resultSetHoldability the holdability (ResultSet.HOLD* / CLOSE*) + * @return the statement + * @throws SQLException if the connection is closed or the result set type, + * concurrency, or holdability are not supported + */ + @Override + public Statement createStatement(int resultSetType, + int resultSetConcurrency, int resultSetHoldability) + throws SQLException { + try { + int id = getNextId(TraceObject.STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("Statement", TraceObject.STATEMENT, id, + "createStatement(" + resultSetType + ", " + + resultSetConcurrency + ", " + + resultSetHoldability + ")"); + } + checkTypeConcurrency(resultSetType, resultSetConcurrency); + checkHoldability(resultSetHoldability); + checkClosed(); + return new JdbcStatement(this, id, resultSetType, + resultSetConcurrency, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a new 
prepared statement. + * + * @param sql the SQL statement + * @return the prepared statement + * @throws SQLException if the connection is closed + */ + @Override + public PreparedStatement prepareStatement(String sql) throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("PreparedStatement", + TraceObject.PREPARED_STATEMENT, id, + "prepareStatement(" + quote(sql) + ")"); + } + checkClosed(); + sql = translateSQL(sql); + return new JdbcPreparedStatement(this, sql, id, + ResultSet.TYPE_FORWARD_ONLY, + Constants.DEFAULT_RESULT_SET_CONCURRENCY, false, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Prepare a statement that will automatically close when the result set is + * closed. This method is used to retrieve database meta data. + * + * @param sql the SQL statement + * @return the prepared statement + */ + PreparedStatement prepareAutoCloseStatement(String sql) + throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("PreparedStatement", + TraceObject.PREPARED_STATEMENT, id, + "prepareStatement(" + quote(sql) + ")"); + } + checkClosed(); + sql = translateSQL(sql); + return new JdbcPreparedStatement(this, sql, id, + ResultSet.TYPE_FORWARD_ONLY, + Constants.DEFAULT_RESULT_SET_CONCURRENCY, true, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the database meta data for this database. 
+ * + * @return the database meta data + * @throws SQLException if the connection is closed + */ + @Override + public DatabaseMetaData getMetaData() throws SQLException { + try { + int id = getNextId(TraceObject.DATABASE_META_DATA); + if (isDebugEnabled()) { + debugCodeAssign("DatabaseMetaData", + TraceObject.DATABASE_META_DATA, id, "getMetaData()"); + } + checkClosed(); + return new JdbcDatabaseMetaData(this, trace, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * INTERNAL + */ + public SessionInterface getSession() { + return session; + } + + /** + * Closes this connection. All open statements, prepared statements and + * result sets that where created by this connection become invalid after + * calling this method. If there is an uncommitted transaction, it will be + * rolled back. + */ + @Override + public synchronized void close() throws SQLException { + try { + debugCodeCall("close"); + if (session == null) { + return; + } + CloseWatcher.unregister(watcher); + session.cancel(); + synchronized (session) { + if (executingStatement != null) { + try { + executingStatement.cancel(); + } catch (NullPointerException e) { + // ignore + } + } + try { + if (!session.isClosed()) { + try { + if (session.hasPendingTransaction()) { + // roll back unless that would require to + // re-connect (the transaction can't be rolled + // back after re-connecting) + if (!session.isReconnectNeeded(true)) { + try { + rollbackInternal(); + } catch (DbException e) { + // ignore if the connection is broken + // right now + if (e.getErrorCode() != ErrorCode.CONNECTION_BROKEN_1) { + throw e; + } + } + } + session.afterWriting(); + } + closePreparedCommands(); + } finally { + session.close(); + } + } + } finally { + session = null; + } + } + } catch (Throwable e) { + throw logAndConvert(e); + } + } + + private void closePreparedCommands() { + commit = closeAndSetNull(commit); + rollback = closeAndSetNull(rollback); + getReadOnly = closeAndSetNull(getReadOnly); + 
getGeneratedKeys = closeAndSetNull(getGeneratedKeys); + getLockMode = closeAndSetNull(getLockMode); + setLockMode = closeAndSetNull(setLockMode); + getQueryTimeout = closeAndSetNull(getQueryTimeout); + setQueryTimeout = closeAndSetNull(setQueryTimeout); + } + + private static CommandInterface closeAndSetNull(CommandInterface command) { + if (command != null) { + command.close(); + } + return null; + } + + /** + * Switches auto commit on or off. Enabling it commits an uncommitted + * transaction, if there is one. + * + * @param autoCommit true for auto commit on, false for off + * @throws SQLException if the connection is closed + */ + @Override + public synchronized void setAutoCommit(boolean autoCommit) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setAutoCommit(" + autoCommit + ");"); + } + checkClosed(); + if (autoCommit && !session.getAutoCommit()) { + commit(); + } + synchronized (session) { + session.setAutoCommit(autoCommit); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the current setting for auto commit. + * + * @return true for on, false for off + * @throws SQLException if the connection is closed + */ + @Override + public synchronized boolean getAutoCommit() throws SQLException { + try { + checkClosed(); + debugCodeCall("getAutoCommit"); + return session.getAutoCommit(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Commits the current transaction. This call has only an effect if auto + * commit is switched off. + * + * @throws SQLException if the connection is closed + */ + @Override + public synchronized void commit() throws SQLException { + try { + debugCodeCall("commit"); + checkClosedForWrite(); + try { + commit = prepareCommand("COMMIT", commit); + commit.executeUpdate(false); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Rolls back the current transaction. 
This call has only an effect if auto + * commit is switched off. + * + * @throws SQLException if the connection is closed + */ + @Override + public synchronized void rollback() throws SQLException { + try { + debugCodeCall("rollback"); + checkClosedForWrite(); + try { + rollbackInternal(); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns true if this connection has been closed. + * + * @return true if close was called + */ + @Override + public boolean isClosed() throws SQLException { + try { + debugCodeCall("isClosed"); + return session == null || session.isClosed(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Translates a SQL statement into the database grammar. + * + * @param sql the SQL statement with or without JDBC escape sequences + * @return the translated statement + * @throws SQLException if the connection is closed + */ + @Override + public String nativeSQL(String sql) throws SQLException { + try { + debugCodeCall("nativeSQL", sql); + checkClosed(); + return translateSQL(sql); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * According to the JDBC specs, this setting is only a hint to the database + * to enable optimizations - it does not cause writes to be prohibited. + * + * @param readOnly ignored + * @throws SQLException if the connection is closed + */ + @Override + public void setReadOnly(boolean readOnly) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setReadOnly(" + readOnly + ");"); + } + checkClosed(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns true if the database is read-only. 
+ * + * @return if the database is read-only + * @throws SQLException if the connection is closed + */ + @Override + public boolean isReadOnly() throws SQLException { + try { + debugCodeCall("isReadOnly"); + checkClosed(); + getReadOnly = prepareCommand("CALL READONLY()", getReadOnly); + ResultInterface result = getReadOnly.executeQuery(0, false); + result.next(); + return result.currentRow()[0].getBoolean(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Set the default catalog name. This call is ignored. + * + * @param catalog ignored + * @throws SQLException if the connection is closed + */ + @Override + public void setCatalog(String catalog) throws SQLException { + try { + debugCodeCall("setCatalog", catalog); + checkClosed(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the current catalog name. + * + * @return the catalog name + * @throws SQLException if the connection is closed + */ + @Override + public String getCatalog() throws SQLException { + try { + debugCodeCall("getCatalog"); + checkClosed(); + if (catalog == null) { + CommandInterface cat = prepareCommand("CALL DATABASE()", + Integer.MAX_VALUE); + ResultInterface result = cat.executeQuery(0, false); + result.next(); + catalog = result.currentRow()[0].getString(); + cat.close(); + } + return catalog; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the first warning reported by calls on this object. + * + * @return null + */ + @Override + public SQLWarning getWarnings() throws SQLException { + try { + debugCodeCall("getWarnings"); + checkClosed(); + return null; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Clears all warnings. 
+ */ + @Override + public void clearWarnings() throws SQLException { + try { + debugCodeCall("clearWarnings"); + checkClosed(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a prepared statement with the specified result set type and + * concurrency. + * + * @param sql the SQL statement + * @param resultSetType the result set type (ResultSet.TYPE_*) + * @param resultSetConcurrency the concurrency (ResultSet.CONCUR_*) + * @return the prepared statement + * @throws SQLException if the connection is closed or the result set type + * or concurrency are not supported + */ + @Override + public PreparedStatement prepareStatement(String sql, int resultSetType, + int resultSetConcurrency) throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("PreparedStatement", + TraceObject.PREPARED_STATEMENT, id, + "prepareStatement(" + quote(sql) + ", " + resultSetType + + ", " + resultSetConcurrency + ")"); + } + checkTypeConcurrency(resultSetType, resultSetConcurrency); + checkClosed(); + sql = translateSQL(sql); + return new JdbcPreparedStatement(this, sql, id, resultSetType, + resultSetConcurrency, false, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Changes the current transaction isolation level. Calling this method will + * commit an open transaction, even if the new level is the same as the old + * one, except if the level is not supported. Internally, this method calls + * SET LOCK_MODE, which affects all connections. The following isolation + * levels are supported: + *
+ * <ul>
+ * <li>Connection.TRANSACTION_READ_UNCOMMITTED = SET LOCK_MODE 0: no locking
+ * (should only be used for testing).</li>
+ * <li>Connection.TRANSACTION_SERIALIZABLE = SET LOCK_MODE 1: table level
+ * locking.</li>
+ * <li>Connection.TRANSACTION_READ_COMMITTED = SET LOCK_MODE 3: table level
+ * locking, but read locks are released immediately (default).</li>
+ * </ul>
    + * This setting is not persistent. Please note that using + * TRANSACTION_READ_UNCOMMITTED while at the same time using multiple + * connections may result in inconsistent transactions. + * + * @param level the new transaction isolation level: + * Connection.TRANSACTION_READ_UNCOMMITTED, + * Connection.TRANSACTION_READ_COMMITTED, or + * Connection.TRANSACTION_SERIALIZABLE + * @throws SQLException if the connection is closed or the isolation level + * is not supported + */ + @Override + public void setTransactionIsolation(int level) throws SQLException { + try { + debugCodeCall("setTransactionIsolation", level); + checkClosed(); + int lockMode; + switch (level) { + case Connection.TRANSACTION_READ_UNCOMMITTED: + lockMode = Constants.LOCK_MODE_OFF; + break; + case Connection.TRANSACTION_READ_COMMITTED: + lockMode = Constants.LOCK_MODE_READ_COMMITTED; + break; + case Connection.TRANSACTION_REPEATABLE_READ: + case Connection.TRANSACTION_SERIALIZABLE: + lockMode = Constants.LOCK_MODE_TABLE; + break; + default: + throw DbException.getInvalidValueException("level", level); + } + commit(); + setLockMode = prepareCommand("SET LOCK_MODE ?", setLockMode); + setLockMode.getParameters().get(0).setValue(ValueInt.get(lockMode), + false); + setLockMode.executeUpdate(false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * INTERNAL + */ + public void setQueryTimeout(int seconds) throws SQLException { + try { + debugCodeCall("setQueryTimeout", seconds); + checkClosed(); + setQueryTimeout = prepareCommand("SET QUERY_TIMEOUT ?", + setQueryTimeout); + setQueryTimeout.getParameters().get(0) + .setValue(ValueInt.get(seconds * 1000), false); + setQueryTimeout.executeUpdate(false); + queryTimeoutCache = seconds; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * INTERNAL + */ + int getQueryTimeout() throws SQLException { + try { + if (queryTimeoutCache == -1) { + checkClosed(); + getQueryTimeout = prepareCommand( + "SELECT VALUE FROM 
INFORMATION_SCHEMA.SETTINGS " + + "WHERE NAME=?", + getQueryTimeout); + getQueryTimeout.getParameters().get(0) + .setValue(ValueString.get("QUERY_TIMEOUT"), false); + ResultInterface result = getQueryTimeout.executeQuery(0, false); + result.next(); + int queryTimeout = result.currentRow()[0].getInt(); + result.close(); + if (queryTimeout != 0) { + // round to the next second, otherwise 999 millis would + // return 0 seconds + queryTimeout = (queryTimeout + 999) / 1000; + } + queryTimeoutCache = queryTimeout; + } + return queryTimeoutCache; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the current transaction isolation level. + * + * @return the isolation level. + * @throws SQLException if the connection is closed + */ + @Override + public int getTransactionIsolation() throws SQLException { + try { + debugCodeCall("getTransactionIsolation"); + checkClosed(); + getLockMode = prepareCommand("CALL LOCK_MODE()", getLockMode); + ResultInterface result = getLockMode.executeQuery(0, false); + result.next(); + int lockMode = result.currentRow()[0].getInt(); + result.close(); + int transactionIsolationLevel; + switch (lockMode) { + case Constants.LOCK_MODE_OFF: + transactionIsolationLevel = Connection.TRANSACTION_READ_UNCOMMITTED; + break; + case Constants.LOCK_MODE_READ_COMMITTED: + transactionIsolationLevel = Connection.TRANSACTION_READ_COMMITTED; + break; + case Constants.LOCK_MODE_TABLE: + case Constants.LOCK_MODE_TABLE_GC: + transactionIsolationLevel = Connection.TRANSACTION_SERIALIZABLE; + break; + default: + throw DbException.throwInternalError("lockMode:" + lockMode); + } + return transactionIsolationLevel; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Changes the current result set holdability. 
+     *
+     * @param holdability ResultSet.HOLD_CURSORS_OVER_COMMIT or
+     *            ResultSet.CLOSE_CURSORS_AT_COMMIT;
+     * @throws SQLException if the connection is closed or the holdability is
+     *             not supported
+     */
+    @Override
+    public void setHoldability(int holdability) throws SQLException {
+        try {
+            debugCodeCall("setHoldability", holdability);
+            checkClosed();
+            checkHoldability(holdability);
+            this.holdability = holdability;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Returns the current result set holdability.
+     *
+     * @return the holdability
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public int getHoldability() throws SQLException {
+        try {
+            debugCodeCall("getHoldability");
+            checkClosed();
+            return holdability;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the type map.
+     *
+     * @return null
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public Map<String, Class<?>> getTypeMap() throws SQLException {
+        try {
+            debugCodeCall("getTypeMap");
+            checkClosed();
+            return null;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * [Partially supported] Sets the type map. This is only supported if the
+     * map is empty or null.
+     */
+    @Override
+    public void setTypeMap(Map<String, Class<?>> map) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setTypeMap(" + quoteMap(map) + ");");
+            }
+            checkMap(map);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Creates a new callable statement.
+ * + * @param sql the SQL statement + * @return the callable statement + * @throws SQLException if the connection is closed or the statement is not + * valid + */ + @Override + public CallableStatement prepareCall(String sql) throws SQLException { + try { + int id = getNextId(TraceObject.CALLABLE_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("CallableStatement", + TraceObject.CALLABLE_STATEMENT, id, + "prepareCall(" + quote(sql) + ")"); + } + checkClosed(); + sql = translateSQL(sql); + return new JdbcCallableStatement(this, sql, id, + ResultSet.TYPE_FORWARD_ONLY, + Constants.DEFAULT_RESULT_SET_CONCURRENCY); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a callable statement with the specified result set type and + * concurrency. + * + * @param sql the SQL statement + * @param resultSetType the result set type (ResultSet.TYPE_*) + * @param resultSetConcurrency the concurrency (ResultSet.CONCUR_*) + * @return the callable statement + * @throws SQLException if the connection is closed or the result set type + * or concurrency are not supported + */ + @Override + public CallableStatement prepareCall(String sql, int resultSetType, + int resultSetConcurrency) throws SQLException { + try { + int id = getNextId(TraceObject.CALLABLE_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("CallableStatement", + TraceObject.CALLABLE_STATEMENT, id, + "prepareCall(" + quote(sql) + ", " + resultSetType + + ", " + resultSetConcurrency + ")"); + } + checkTypeConcurrency(resultSetType, resultSetConcurrency); + checkClosed(); + sql = translateSQL(sql); + return new JdbcCallableStatement(this, sql, id, resultSetType, + resultSetConcurrency); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a callable statement with the specified result set type, + * concurrency, and holdability. 
+ * + * @param sql the SQL statement + * @param resultSetType the result set type (ResultSet.TYPE_*) + * @param resultSetConcurrency the concurrency (ResultSet.CONCUR_*) + * @param resultSetHoldability the holdability (ResultSet.HOLD* / CLOSE*) + * @return the callable statement + * @throws SQLException if the connection is closed or the result set type, + * concurrency, or holdability are not supported + */ + @Override + public CallableStatement prepareCall(String sql, int resultSetType, + int resultSetConcurrency, int resultSetHoldability) + throws SQLException { + try { + int id = getNextId(TraceObject.CALLABLE_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("CallableStatement", + TraceObject.CALLABLE_STATEMENT, id, + "prepareCall(" + quote(sql) + ", " + resultSetType + + ", " + resultSetConcurrency + ", " + + resultSetHoldability + ")"); + } + checkTypeConcurrency(resultSetType, resultSetConcurrency); + checkHoldability(resultSetHoldability); + checkClosed(); + sql = translateSQL(sql); + return new JdbcCallableStatement(this, sql, id, resultSetType, + resultSetConcurrency); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a new unnamed savepoint. + * + * @return the new savepoint + */ + @Override + public Savepoint setSavepoint() throws SQLException { + try { + int id = getNextId(TraceObject.SAVEPOINT); + if (isDebugEnabled()) { + debugCodeAssign("Savepoint", TraceObject.SAVEPOINT, id, + "setSavepoint()"); + } + checkClosed(); + CommandInterface set = prepareCommand( + "SAVEPOINT " + JdbcSavepoint.getName(null, savepointId), + Integer.MAX_VALUE); + set.executeUpdate(false); + JdbcSavepoint savepoint = new JdbcSavepoint(this, savepointId, null, + trace, id); + savepointId++; + return savepoint; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a new named savepoint. 
+ * + * @param name the savepoint name + * @return the new savepoint + */ + @Override + public Savepoint setSavepoint(String name) throws SQLException { + try { + int id = getNextId(TraceObject.SAVEPOINT); + if (isDebugEnabled()) { + debugCodeAssign("Savepoint", TraceObject.SAVEPOINT, id, + "setSavepoint(" + quote(name) + ")"); + } + checkClosed(); + CommandInterface set = prepareCommand( + "SAVEPOINT " + JdbcSavepoint.getName(name, 0), + Integer.MAX_VALUE); + set.executeUpdate(false); + return new JdbcSavepoint(this, 0, name, trace, + id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Rolls back to a savepoint. + * + * @param savepoint the savepoint + */ + @Override + public void rollback(Savepoint savepoint) throws SQLException { + try { + JdbcSavepoint sp = convertSavepoint(savepoint); + if (isDebugEnabled()) { + debugCode("rollback(" + sp.getTraceObjectName() + ");"); + } + checkClosedForWrite(); + try { + sp.rollback(); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Releases a savepoint. + * + * @param savepoint the savepoint to release + */ + @Override + public void releaseSavepoint(Savepoint savepoint) throws SQLException { + try { + debugCode("releaseSavepoint(savepoint);"); + checkClosed(); + convertSavepoint(savepoint).release(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + private static JdbcSavepoint convertSavepoint(Savepoint savepoint) { + if (!(savepoint instanceof JdbcSavepoint)) { + throw DbException.get(ErrorCode.SAVEPOINT_IS_INVALID_1, + "" + savepoint); + } + return (JdbcSavepoint) savepoint; + } + + /** + * Creates a prepared statement with the specified result set type, + * concurrency, and holdability. 
+ * + * @param sql the SQL statement + * @param resultSetType the result set type (ResultSet.TYPE_*) + * @param resultSetConcurrency the concurrency (ResultSet.CONCUR_*) + * @param resultSetHoldability the holdability (ResultSet.HOLD* / CLOSE*) + * @return the prepared statement + * @throws SQLException if the connection is closed or the result set type, + * concurrency, or holdability are not supported + */ + @Override + public PreparedStatement prepareStatement(String sql, int resultSetType, + int resultSetConcurrency, int resultSetHoldability) + throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("PreparedStatement", + TraceObject.PREPARED_STATEMENT, id, + "prepareStatement(" + quote(sql) + ", " + resultSetType + + ", " + resultSetConcurrency + ", " + + resultSetHoldability + ")"); + } + checkTypeConcurrency(resultSetType, resultSetConcurrency); + checkHoldability(resultSetHoldability); + checkClosed(); + sql = translateSQL(sql); + return new JdbcPreparedStatement(this, sql, id, resultSetType, + resultSetConcurrency, false, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a new prepared statement. 
+ * + * @param sql the SQL statement + * @param autoGeneratedKeys + * {@link Statement#RETURN_GENERATED_KEYS} if generated keys should + * be available for retrieval, {@link Statement#NO_GENERATED_KEYS} if + * generated keys should not be available + * @return the prepared statement + * @throws SQLException if the connection is closed + */ + @Override + public PreparedStatement prepareStatement(String sql, int autoGeneratedKeys) + throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("PreparedStatement", + TraceObject.PREPARED_STATEMENT, id, + "prepareStatement(" + quote(sql) + ", " + + autoGeneratedKeys + ");"); + } + checkClosed(); + sql = translateSQL(sql); + return new JdbcPreparedStatement(this, sql, id, + ResultSet.TYPE_FORWARD_ONLY, + Constants.DEFAULT_RESULT_SET_CONCURRENCY, false, + autoGeneratedKeys == Statement.RETURN_GENERATED_KEYS); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a new prepared statement. + * + * @param sql the SQL statement + * @param columnIndexes + * an array of column indexes indicating the columns with generated + * keys that should be returned from the inserted row + * @return the prepared statement + * @throws SQLException if the connection is closed + */ + @Override + public PreparedStatement prepareStatement(String sql, int[] columnIndexes) + throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("PreparedStatement", + TraceObject.PREPARED_STATEMENT, id, + "prepareStatement(" + quote(sql) + ", " + + quoteIntArray(columnIndexes) + ");"); + } + checkClosed(); + sql = translateSQL(sql); + return new JdbcPreparedStatement(this, sql, id, + ResultSet.TYPE_FORWARD_ONLY, + Constants.DEFAULT_RESULT_SET_CONCURRENCY, false, columnIndexes); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Creates a new prepared statement. 
+ * + * @param sql the SQL statement + * @param columnNames + * an array of column names indicating the columns with generated + * keys that should be returned from the inserted row + * @return the prepared statement + * @throws SQLException if the connection is closed + */ + @Override + public PreparedStatement prepareStatement(String sql, String[] columnNames) + throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + if (isDebugEnabled()) { + debugCodeAssign("PreparedStatement", + TraceObject.PREPARED_STATEMENT, id, + "prepareStatement(" + quote(sql) + ", " + + quoteArray(columnNames) + ");"); + } + checkClosed(); + sql = translateSQL(sql); + return new JdbcPreparedStatement(this, sql, id, + ResultSet.TYPE_FORWARD_ONLY, + Constants.DEFAULT_RESULT_SET_CONCURRENCY, false, columnNames); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + // ============================================================= + + /** + * Prepare a command. This will parse the SQL statement. + * + * @param sql the SQL statement + * @param fetchSize the fetch size (used in remote connections) + * @return the command + */ + CommandInterface prepareCommand(String sql, int fetchSize) { + return session.prepareCommand(sql, fetchSize); + } + + private CommandInterface prepareCommand(String sql, CommandInterface old) { + return old == null ? 
session.prepareCommand(sql, Integer.MAX_VALUE) + : old; + } + + private static int translateGetEnd(String sql, int i, char c) { + int len = sql.length(); + switch (c) { + case '$': { + if (i < len - 1 && sql.charAt(i + 1) == '$' + && (i == 0 || sql.charAt(i - 1) <= ' ')) { + int j = sql.indexOf("$$", i + 2); + if (j < 0) { + throw DbException.getSyntaxError(sql, i); + } + return j + 1; + } + return i; + } + case '\'': { + int j = sql.indexOf('\'', i + 1); + if (j < 0) { + throw DbException.getSyntaxError(sql, i); + } + return j; + } + case '"': { + int j = sql.indexOf('"', i + 1); + if (j < 0) { + throw DbException.getSyntaxError(sql, i); + } + return j; + } + case '/': { + checkRunOver(i + 1, len, sql); + if (sql.charAt(i + 1) == '*') { + // block comment + int j = sql.indexOf("*/", i + 2); + if (j < 0) { + throw DbException.getSyntaxError(sql, i); + } + i = j + 1; + } else if (sql.charAt(i + 1) == '/') { + // single line comment + i += 2; + while (i < len && (c = sql.charAt(i)) != '\r' && c != '\n') { + i++; + } + } + return i; + } + case '-': { + checkRunOver(i + 1, len, sql); + if (sql.charAt(i + 1) == '-') { + // single line comment + i += 2; + while (i < len && (c = sql.charAt(i)) != '\r' && c != '\n') { + i++; + } + } + return i; + } + default: + throw DbException.throwInternalError("c=" + c); + } + } + + /** + * Convert JDBC escape sequences in the SQL statement. This method throws an + * exception if the SQL statement is null. + * + * @param sql the SQL statement with or without JDBC escape sequences + * @return the SQL statement without JDBC escape sequences + */ + private static String translateSQL(String sql) { + return translateSQL(sql, true); + } + + /** + * Convert JDBC escape sequences in the SQL statement if required. This + * method throws an exception if the SQL statement is null. 
+ * + * @param sql the SQL statement with or without JDBC escape sequences + * @param escapeProcessing whether escape sequences should be replaced + * @return the SQL statement without JDBC escape sequences + */ + static String translateSQL(String sql, boolean escapeProcessing) { + if (sql == null) { + throw DbException.getInvalidValueException("SQL", null); + } + if (!escapeProcessing) { + return sql; + } + if (sql.indexOf('{') < 0) { + return sql; + } + int len = sql.length(); + char[] chars = null; + int level = 0; + for (int i = 0; i < len; i++) { + char c = sql.charAt(i); + switch (c) { + case '\'': + case '"': + case '/': + case '-': + i = translateGetEnd(sql, i, c); + break; + case '{': + level++; + if (chars == null) { + chars = sql.toCharArray(); + } + chars[i] = ' '; + while (Character.isSpaceChar(chars[i])) { + i++; + checkRunOver(i, len, sql); + } + int start = i; + if (chars[i] >= '0' && chars[i] <= '9') { + chars[i - 1] = '{'; + while (true) { + checkRunOver(i, len, sql); + c = chars[i]; + if (c == '}') { + break; + } + switch (c) { + case '\'': + case '"': + case '/': + case '-': + i = translateGetEnd(sql, i, c); + break; + default: + } + i++; + } + level--; + break; + } else if (chars[i] == '?') { + i++; + checkRunOver(i, len, sql); + while (Character.isSpaceChar(chars[i])) { + i++; + checkRunOver(i, len, sql); + } + if (sql.charAt(i) != '=') { + throw DbException.getSyntaxError(sql, i, "="); + } + i++; + checkRunOver(i, len, sql); + while (Character.isSpaceChar(chars[i])) { + i++; + checkRunOver(i, len, sql); + } + } + while (!Character.isSpaceChar(chars[i])) { + i++; + checkRunOver(i, len, sql); + } + int remove = 0; + if (found(sql, start, "fn")) { + remove = 2; + } else if (found(sql, start, "escape")) { + break; + } else if (found(sql, start, "call")) { + break; + } else if (found(sql, start, "oj")) { + remove = 2; + } else if (found(sql, start, "ts")) { + break; + } else if (found(sql, start, "t")) { + break; + } else if (found(sql, start, 
"d")) { + break; + } else if (found(sql, start, "params")) { + remove = "params".length(); + } + for (i = start; remove > 0; i++, remove--) { + chars[i] = ' '; + } + break; + case '}': + if (--level < 0) { + throw DbException.getSyntaxError(sql, i); + } + chars[i] = ' '; + break; + case '$': + i = translateGetEnd(sql, i, c); + break; + default: + } + } + if (level != 0) { + throw DbException.getSyntaxError(sql, sql.length() - 1); + } + if (chars != null) { + sql = new String(chars); + } + return sql; + } + + private static void checkRunOver(int i, int len, String sql) { + if (i >= len) { + throw DbException.getSyntaxError(sql, i); + } + } + + private static boolean found(String sql, int start, String other) { + return sql.regionMatches(true, start, other, 0, other.length()); + } + + private static void checkTypeConcurrency(int resultSetType, + int resultSetConcurrency) { + switch (resultSetType) { + case ResultSet.TYPE_FORWARD_ONLY: + case ResultSet.TYPE_SCROLL_INSENSITIVE: + case ResultSet.TYPE_SCROLL_SENSITIVE: + break; + default: + throw DbException.getInvalidValueException("resultSetType", + resultSetType); + } + switch (resultSetConcurrency) { + case ResultSet.CONCUR_READ_ONLY: + case ResultSet.CONCUR_UPDATABLE: + break; + default: + throw DbException.getInvalidValueException("resultSetConcurrency", + resultSetConcurrency); + } + } + + private static void checkHoldability(int resultSetHoldability) { + // TODO compatibility / correctness: DBPool uses + // ResultSet.HOLD_CURSORS_OVER_COMMIT + if (resultSetHoldability != ResultSet.HOLD_CURSORS_OVER_COMMIT + && resultSetHoldability != ResultSet.CLOSE_CURSORS_AT_COMMIT) { + throw DbException.getInvalidValueException("resultSetHoldability", + resultSetHoldability); + } + } + + /** + * INTERNAL. Check if this connection is closed. The next operation is a + * read request. 
+ * + * @throws DbException if the connection or session is closed + */ + protected void checkClosed() { + checkClosed(false); + } + + /** + * Check if this connection is closed. The next operation may be a write + * request. + * + * @throws DbException if the connection or session is closed + */ + private void checkClosedForWrite() { + checkClosed(true); + } + + /** + * INTERNAL. Check if this connection is closed. + * + * @param write if the next operation is possibly writing + * @throws DbException if the connection or session is closed + */ + protected void checkClosed(boolean write) { + if (session == null) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + if (session.isClosed()) { + throw DbException.get(ErrorCode.DATABASE_CALLED_AT_SHUTDOWN); + } + if (session.isReconnectNeeded(write)) { + trace.debug("reconnect"); + closePreparedCommands(); + session = session.reconnect(write); + trace = session.getTrace(); + } + } + + /** + * INTERNAL. Called after executing a command that could have written + * something. + */ + protected void afterWriting() { + if (session != null) { + session.afterWriting(); + } + } + + String getURL() { + checkClosed(); + return url; + } + + String getUser() { + checkClosed(); + return user; + } + + private void rollbackInternal() { + rollback = prepareCommand("ROLLBACK", rollback); + rollback.executeUpdate(false); + } + + /** + * INTERNAL + */ + public int getPowerOffCount() { + return (session == null || session.isClosed()) ? 
0 + : session.getPowerOffCount(); + } + + /** + * INTERNAL + */ + public void setPowerOffCount(int count) { + if (session != null) { + session.setPowerOffCount(count); + } + } + + /** + * INTERNAL + */ + public void setExecutingStatement(Statement stat) { + executingStatement = stat; + } + + /** + * INTERNAL + */ + boolean scopeGeneratedKeys() { + return scopeGeneratedKeys; + } + + /** + * INTERNAL + */ + ResultSet getGeneratedKeys(JdbcStatement stat, int id) { + getGeneratedKeys = prepareCommand( + "SELECT SCOPE_IDENTITY() " + + "WHERE SCOPE_IDENTITY() IS NOT NULL", + getGeneratedKeys); + ResultInterface result = getGeneratedKeys.executeQuery(0, false); + return new JdbcResultSet(this, stat, getGeneratedKeys, result, + id, false, true, false); + } + + /** + * Create a new empty Clob object. + * + * @return the object + */ + @Override + public Clob createClob() throws SQLException { + try { + int id = getNextId(TraceObject.CLOB); + debugCodeAssign("Clob", TraceObject.CLOB, id, "createClob()"); + checkClosedForWrite(); + try { + Value v = session.getDataHandler().getLobStorage() + .createClob(new InputStreamReader( + new ByteArrayInputStream(Utils.EMPTY_BYTES)), + 0); + session.addTemporaryLob(v); + return new JdbcClob(this, v, id); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Create a new empty Blob object. + * + * @return the object + */ + @Override + public Blob createBlob() throws SQLException { + try { + int id = getNextId(TraceObject.BLOB); + debugCodeAssign("Blob", TraceObject.BLOB, id, "createBlob()"); + checkClosedForWrite(); + try { + Value v = session.getDataHandler().getLobStorage().createBlob( + new ByteArrayInputStream(Utils.EMPTY_BYTES), 0); + synchronized (session) { + session.addTemporaryLob(v); + } + return new JdbcBlob(this, v, id); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Create a new empty NClob object. 
+ * + * @return the object + */ + @Override + public NClob createNClob() throws SQLException { + try { + int id = getNextId(TraceObject.CLOB); + debugCodeAssign("NClob", TraceObject.CLOB, id, "createNClob()"); + checkClosedForWrite(); + try { + Value v = session.getDataHandler().getLobStorage() + .createClob(new InputStreamReader( + new ByteArrayInputStream(Utils.EMPTY_BYTES)), + 0); + session.addTemporaryLob(v); + return new JdbcClob(this, v, id); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Create a new empty SQLXML object. + */ + @Override + public SQLXML createSQLXML() throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * Create a new Array object. + * + * @param typeName the type name + * @param elements the values + * @return the array + */ + @Override + public Array createArrayOf(String typeName, Object[] elements) + throws SQLException { + try { + int id = getNextId(TraceObject.ARRAY); + debugCodeAssign("Array", TraceObject.ARRAY, id, "createArrayOf()"); + checkClosed(); + Value value = DataType.convertToValue(session, elements, + Value.ARRAY); + return new JdbcArray(this, value, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Create a new empty Struct object. + */ + @Override + public Struct createStruct(String typeName, Object[] attributes) + throws SQLException { + throw unsupported("Struct"); + } + + /** + * Returns true if this connection is still valid. + * + * @param timeout the number of seconds to wait for the database to respond + * (ignored) + * @return true if the connection is valid. 
+ */ + @Override + public synchronized boolean isValid(int timeout) { + try { + debugCodeCall("isValid", timeout); + if (session == null || session.isClosed()) { + return false; + } + // force a network round trip (if networked) + getTransactionIsolation(); + return true; + } catch (Exception e) { + // this method doesn't throw an exception, but it logs it + logAndConvert(e); + return false; + } + } + + /** + * Set a client property. This method always throws a SQLClientInfoException + * in standard mode. In compatibility mode the following properties are + * supported: + *
+ * <ul>
+ * <li>DB2: The properties: ApplicationName, ClientAccountingInformation,
+ * ClientUser and ClientCorrelationToken are supported.</li>
+ * <li>MySQL: All property names are supported.</li>
+ * <li>Oracle: All properties in the form &lt;namespace&gt;.&lt;key name&gt;
+ * are supported.</li>
+ * <li>PostgreSQL: The ApplicationName property is supported.</li>
+ * </ul>
    + * For unsupported properties a SQLClientInfoException is thrown. + * + * @param name the name of the property + * @param value the value + */ + @Override + public void setClientInfo(String name, String value) + throws SQLClientInfoException { + try { + if (isDebugEnabled()) { + debugCode("setClientInfo(" + quote(name) + ", " + quote(value) + + ");"); + } + checkClosed(); + + // no change to property: Ignore call. This early exit fixes a + // problem with websphere liberty resetting the client info of a + // pooled connection to its initial values. + if (Objects.equals(value, getClientInfo(name))) { + return; + } + + if (isInternalProperty(name)) { + throw new SQLClientInfoException( + "Property name '" + name + " is used internally by H2.", + Collections. emptyMap()); + } + + Pattern clientInfoNameRegEx = Mode + .getInstance(getMode()).supportedClientInfoPropertiesRegEx; + + if (clientInfoNameRegEx != null + && clientInfoNameRegEx.matcher(name).matches()) { + if (clientInfo == null) { + clientInfo = new HashMap<>(); + } + clientInfo.put(name, value); + } else { + throw new SQLClientInfoException( + "Client info name '" + name + "' not supported.", + Collections. emptyMap()); + } + } catch (Exception e) { + throw convertToClientInfoException(logAndConvert(e)); + } + } + + private static boolean isInternalProperty(String name) { + return NUM_SERVERS.equals(name) || name.startsWith(PREFIX_SERVER); + } + + private static SQLClientInfoException convertToClientInfoException( + SQLException x) { + if (x instanceof SQLClientInfoException) { + return (SQLClientInfoException) x; + } + return new SQLClientInfoException(x.getMessage(), x.getSQLState(), + x.getErrorCode(), null, null); + } + + /** + * Set the client properties. This replaces all existing properties. This + * method always throws a SQLClientInfoException in standard mode. In + * compatibility mode some properties may be supported (see + * setProperty(String, String) for details). 
+ * + * @param properties the properties (ignored) + */ + @Override + public void setClientInfo(Properties properties) + throws SQLClientInfoException { + try { + if (isDebugEnabled()) { + debugCode("setClientInfo(properties);"); + } + checkClosed(); + if (clientInfo == null) { + clientInfo = new HashMap<>(); + } else { + clientInfo.clear(); + } + for (Map.Entry entry : properties.entrySet()) { + setClientInfo((String) entry.getKey(), + (String) entry.getValue()); + } + } catch (Exception e) { + throw convertToClientInfoException(logAndConvert(e)); + } + } + + /** + * Get the client properties. + * + * @return the property list + */ + @Override + public Properties getClientInfo() throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getClientInfo();"); + } + checkClosed(); + ArrayList serverList = session.getClusterServers(); + Properties p = new Properties(); + + if (clientInfo != null) { + for (Map.Entry entry : clientInfo.entrySet()) { + p.setProperty(entry.getKey(), entry.getValue()); + } + } + + p.setProperty(NUM_SERVERS, String.valueOf(serverList.size())); + for (int i = 0; i < serverList.size(); i++) { + p.setProperty(PREFIX_SERVER + String.valueOf(i), + serverList.get(i)); + } + + return p; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Get a client property. + * + * @param name the client info name + * @return the property value or null if the property is not found or not + * supported. + */ + @Override + public String getClientInfo(String name) throws SQLException { + try { + if (isDebugEnabled()) { + debugCodeCall("getClientInfo", name); + } + checkClosed(); + if (name == null) { + throw DbException.getInvalidValueException("name", null); + } + return getClientInfo().getProperty(name); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Return an object of this class if possible. 
+ * + * @param iface the class + * @return this + */ + @Override + @SuppressWarnings("unchecked") + public <T> T unwrap(Class<T> iface) throws SQLException { + try { + if (isWrapperFor(iface)) { + return (T) this; + } + throw DbException.getInvalidValueException("iface", iface); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if unwrap can return an object of this class. + * + * @param iface the class + * @return whether or not the interface is assignable from this class + */ + @Override + public boolean isWrapperFor(Class<?> iface) throws SQLException { + return iface != null && iface.isAssignableFrom(getClass()); + } + + /** + * Create a Clob value from this reader. + * + * @param x the reader + * @param length the length (if smaller or equal than 0, all data until the + * end of file is read) + * @return the value + */ + public Value createClob(Reader x, long length) { + if (x == null) { + return ValueNull.INSTANCE; + } + if (length <= 0) { + length = -1; + } + Value v = session.getDataHandler().getLobStorage().createClob(x, + length); + session.addTemporaryLob(v); + return v; + } + + /** + * Create a Blob value from this input stream. + * + * @param x the input stream + * @param length the length (if smaller or equal than 0, all data until the + * end of file is read) + * @return the value + */ + public Value createBlob(InputStream x, long length) { + if (x == null) { + return ValueNull.INSTANCE; + } + if (length <= 0) { + length = -1; + } + Value v = session.getDataHandler().getLobStorage().createBlob(x, + length); + session.addTemporaryLob(v); + return v; + } + + /** + * Sets the given schema name to access. Current implementation is case + * sensitive, i.e. requires schema name to be passed in correct case. 
+ * + * @param schema the schema name + */ + @Override + public void setSchema(String schema) throws SQLException { + try { + if (isDebugEnabled()) { + debugCodeCall("setSchema", schema); + } + checkClosed(); + session.setCurrentSchemaName(schema); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Retrieves the current schema name for this connection. + * + * @return current schema name + */ + @Override + public String getSchema() throws SQLException { + try { + if (isDebugEnabled()) { + debugCodeCall("getSchema"); + } + checkClosed(); + return session.getCurrentSchemaName(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] + * + * @param executor the executor used by this method + */ + @Override + public void abort(Executor executor) { + // not supported + } + + /** + * [Not supported] + * + * @param executor the executor used by this method + * @param milliseconds the TCP connection timeout + */ + @Override + public void setNetworkTimeout(Executor executor, int milliseconds) { + // not supported + } + + /** + * [Not supported] + */ + @Override + public int getNetworkTimeout() { + return 0; + } + + /** + * Check that the given type map is either null or empty. + * + * @param map the type map + * @throws DbException if the map is not empty + */ + static void checkMap(Map<String, Class<?>> map) { + if (map != null && map.size() > 0) { + throw DbException.getUnsupportedException("map.size > 0"); + } + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": url=" + url + " user=" + user; + } + + /** + * Convert an object to the default Java object for the given SQL type. For + * example, LOB objects are converted to java.sql.Clob / java.sql.Blob. 
+ * + * @param v the value + * @return the object + */ + Object convertToDefaultObject(Value v) { + switch (v.getType()) { + case Value.CLOB: { + int id = getNextId(TraceObject.CLOB); + return new JdbcClob(this, v, id); + } + case Value.BLOB: { + int id = getNextId(TraceObject.BLOB); + return new JdbcBlob(this, v, id); + } + case Value.JAVA_OBJECT: + if (SysProperties.serializeJavaObject) { + return JdbcUtils.deserialize(v.getBytesNoCopy(), + session.getDataHandler()); + } + break; + case Value.BYTE: + case Value.SHORT: + if (!SysProperties.OLD_RESULT_SET_GET_OBJECT) { + return v.getInt(); + } + break; + } + return v.getObject(); + } + + CompareMode getCompareMode() { + return session.getDataHandler().getCompareMode(); + } + + /** + * INTERNAL + */ + public void setTraceLevel(int level) { + trace.setLevel(level); + } + + String getMode() throws SQLException { + if (mode == null) { + PreparedStatement prep = prepareStatement( + "SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME=?"); + prep.setString(1, "MODE"); + ResultSet rs = prep.executeQuery(); + rs.next(); + mode = rs.getString(1); + prep.close(); + } + return mode; + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcConnectionBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcConnectionBackwardsCompat.java new file mode 100644 index 0000000000000..4ed025575be31 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcConnectionBackwardsCompat.java @@ -0,0 +1,16 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +/** + * Allows us to compile on older platforms, while still implementing the methods + * from the newer JDBC API. 
+ */ +public interface JdbcConnectionBackwardsCompat { + + // compatibility interface + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcDatabaseMetaData.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcDatabaseMetaData.java new file mode 100644 index 0000000000000..88ad44772bf85 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcDatabaseMetaData.java @@ -0,0 +1,3260 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.RowIdLifetime; +import java.sql.SQLException; +import java.sql.Types; +import java.util.Arrays; +import java.util.Properties; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.message.TraceObject; +import org.h2.tools.SimpleResultSet; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; + +/** + * Represents the meta data for a database. + */ +public class JdbcDatabaseMetaData extends TraceObject implements + DatabaseMetaData, JdbcDatabaseMetaDataBackwardsCompat { + + private final JdbcConnection conn; + + JdbcDatabaseMetaData(JdbcConnection conn, Trace trace, int id) { + setTrace(trace, TraceObject.DATABASE_META_DATA, id); + this.conn = conn; + } + + /** + * Returns the major version of this driver. + * + * @return the major version number + */ + @Override + public int getDriverMajorVersion() { + debugCodeCall("getDriverMajorVersion"); + return Constants.VERSION_MAJOR; + } + + /** + * Returns the minor version of this driver. 
+ * + * @return the minor version number + */ + @Override + public int getDriverMinorVersion() { + debugCodeCall("getDriverMinorVersion"); + return Constants.VERSION_MINOR; + } + + /** + * Gets the database product name. + * + * @return the product name ("H2") + */ + @Override + public String getDatabaseProductName() { + debugCodeCall("getDatabaseProductName"); + // This value must stay like that, see + // http://opensource.atlassian.com/projects/hibernate/browse/HHH-2682 + return "H2"; + } + + /** + * Gets the product version of the database. + * + * @return the product version + */ + @Override + public String getDatabaseProductVersion() { + debugCodeCall("getDatabaseProductVersion"); + return Constants.getFullVersion(); + } + + /** + * Gets the name of the JDBC driver. + * + * @return the driver name ("H2 JDBC Driver") + */ + @Override + public String getDriverName() { + debugCodeCall("getDriverName"); + return "H2 JDBC Driver"; + } + + /** + * Gets the version number of the driver. The format is + * [MajorVersion].[MinorVersion]. + * + * @return the version number + */ + @Override + public String getDriverVersion() { + debugCodeCall("getDriverVersion"); + return Constants.getFullVersion(); + } + + /** + * Gets the list of tables in the database. The result set is sorted by + * TABLE_TYPE, TABLE_SCHEM, and TABLE_NAME. + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) table catalog</li>
+ * <li>2 TABLE_SCHEM (String) table schema</li>
+ * <li>3 TABLE_NAME (String) table name</li>
+ * <li>4 TABLE_TYPE (String) table type</li>
+ * <li>5 REMARKS (String) comment</li>
+ * <li>6 TYPE_CAT (String) always null</li>
+ * <li>7 TYPE_SCHEM (String) always null</li>
+ * <li>8 TYPE_NAME (String) always null</li>
+ * <li>9 SELF_REFERENCING_COL_NAME (String) always null</li>
+ * <li>10 REF_GENERATION (String) always null</li>
+ * <li>11 SQL (String) the create table statement or NULL for system tables</li>
+ * </ul>
    + * + * @param catalogPattern null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableNamePattern null (to get all objects) or a table name + * (uppercase for unquoted names) + * @param types null or a list of table types + * @return the list of columns + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getTables(String catalogPattern, String schemaPattern, + String tableNamePattern, String[] types) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getTables(" + quote(catalogPattern) + ", " + + quote(schemaPattern) + ", " + quote(tableNamePattern) + + ", " + quoteArray(types) + ");"); + } + checkClosed(); + String tableType; + if (types != null && types.length > 0) { + StatementBuilder buff = new StatementBuilder("TABLE_TYPE IN("); + for (String ignored : types) { + buff.appendExceptFirst(", "); + buff.append('?'); + } + tableType = buff.append(')').toString(); + } else { + tableType = "TRUE"; + } + + String tableSelect = "SELECT " + + "TABLE_CATALOG TABLE_CAT, " + + "TABLE_SCHEMA TABLE_SCHEM, " + + "TABLE_NAME, " + + "TABLE_TYPE, " + + "REMARKS, " + + "TYPE_NAME TYPE_CAT, " + + "TYPE_NAME TYPE_SCHEM, " + + "TYPE_NAME, " + + "TYPE_NAME SELF_REFERENCING_COL_NAME, " + + "TYPE_NAME REF_GENERATION, " + + "SQL " + + "FROM INFORMATION_SCHEMA.TABLES " + + "WHERE TABLE_CATALOG LIKE ? ESCAPE ? " + + "AND TABLE_SCHEMA LIKE ? ESCAPE ? " + + "AND TABLE_NAME LIKE ? ESCAPE ? 
" + + "AND (" + tableType + ") "; + + boolean includeSynonyms = types == null || Arrays.asList(types).contains("SYNONYM"); + String synonymSelect = "SELECT " + + "SYNONYM_CATALOG TABLE_CAT, " + + "SYNONYM_SCHEMA TABLE_SCHEM, " + + "SYNONYM_NAME as TABLE_NAME, " + + "TYPE_NAME AS TABLE_TYPE, " + + "REMARKS, " + + "TYPE_NAME TYPE_CAT, " + + "TYPE_NAME TYPE_SCHEM, " + + "TYPE_NAME AS TYPE_NAME, " + + "TYPE_NAME SELF_REFERENCING_COL_NAME, " + + "TYPE_NAME REF_GENERATION, " + + "NULL AS SQL " + + "FROM INFORMATION_SCHEMA.SYNONYMS " + + "WHERE SYNONYM_CATALOG LIKE ? ESCAPE ? " + + "AND SYNONYM_SCHEMA LIKE ? ESCAPE ? " + + "AND SYNONYM_NAME LIKE ? ESCAPE ? " + + "AND (" + includeSynonyms + ") "; + + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "TABLE_CAT, " + + "TABLE_SCHEM, " + + "TABLE_NAME, " + + "TABLE_TYPE, " + + "REMARKS, " + + "TYPE_CAT, " + + "TYPE_SCHEM, " + + "TYPE_NAME, " + + "SELF_REFERENCING_COL_NAME, " + + "REF_GENERATION, " + + "SQL " + + "FROM (" + synonymSelect + " UNION " + tableSelect + ") " + + "ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME"); + prep.setString(1, getCatalogPattern(catalogPattern)); + prep.setString(2, "\\"); + prep.setString(3, getSchemaPattern(schemaPattern)); + prep.setString(4, "\\"); + prep.setString(5, getPattern(tableNamePattern)); + prep.setString(6, "\\"); + prep.setString(7, getCatalogPattern(catalogPattern)); + prep.setString(8, "\\"); + prep.setString(9, getSchemaPattern(schemaPattern)); + prep.setString(10, "\\"); + prep.setString(11, getPattern(tableNamePattern)); + prep.setString(12, "\\"); + for (int i = 0; types != null && i < types.length; i++) { + prep.setString(13 + i, types[i]); + } + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of columns. The result set is sorted by TABLE_SCHEM, + * TABLE_NAME, and ORDINAL_POSITION. + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) table catalog</li>
+ * <li>2 TABLE_SCHEM (String) table schema</li>
+ * <li>3 TABLE_NAME (String) table name</li>
+ * <li>4 COLUMN_NAME (String) column name</li>
+ * <li>5 DATA_TYPE (short) data type (see java.sql.Types)</li>
+ * <li>6 TYPE_NAME (String) data type name ("INTEGER", "VARCHAR",...)</li>
+ * <li>7 COLUMN_SIZE (int) precision (values larger than 2 GB are returned as 2 GB)</li>
+ * <li>8 BUFFER_LENGTH (int) unused</li>
+ * <li>9 DECIMAL_DIGITS (int) scale (0 for INTEGER and VARCHAR)</li>
+ * <li>10 NUM_PREC_RADIX (int) radix (always 10)</li>
+ * <li>11 NULLABLE (int) columnNoNulls or columnNullable</li>
+ * <li>12 REMARKS (String) comment (always empty)</li>
+ * <li>13 COLUMN_DEF (String) default value</li>
+ * <li>14 SQL_DATA_TYPE (int) unused</li>
+ * <li>15 SQL_DATETIME_SUB (int) unused</li>
+ * <li>16 CHAR_OCTET_LENGTH (int) unused</li>
+ * <li>17 ORDINAL_POSITION (int) the column index (1,2,...)</li>
+ * <li>18 IS_NULLABLE (String) "NO" or "YES"</li>
+ * <li>19 SCOPE_CATALOG (String) always null</li>
+ * <li>20 SCOPE_SCHEMA (String) always null</li>
+ * <li>21 SCOPE_TABLE (String) always null</li>
+ * <li>22 SOURCE_DATA_TYPE (short) null</li>
+ * <li>23 IS_AUTOINCREMENT (String) "NO" or "YES"</li>
+ * <li>24 SCOPE_CATLOG (String) always null (the typo is on purpose, for compatibility with the JDBC specification prior to 4.1)</li>
+ * </ul>
    + * + * @param catalogPattern null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableNamePattern null (to get all objects) or a table name + * (uppercase for unquoted names) + * @param columnNamePattern null (to get all objects) or a column name + * (uppercase for unquoted names) + * @return the list of columns + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getColumns(String catalogPattern, String schemaPattern, + String tableNamePattern, String columnNamePattern) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getColumns(" + quote(catalogPattern)+", " + +quote(schemaPattern)+", " + +quote(tableNamePattern)+", " + +quote(columnNamePattern)+");"); + } + checkClosed(); + String tableSql = "SELECT " + + "TABLE_CATALOG TABLE_CAT, " + + "TABLE_SCHEMA TABLE_SCHEM, " + + "TABLE_NAME, " + + "COLUMN_NAME, " + + "DATA_TYPE, " + + "TYPE_NAME, " + + "CHARACTER_MAXIMUM_LENGTH COLUMN_SIZE, " + + "CHARACTER_MAXIMUM_LENGTH BUFFER_LENGTH, " + + "NUMERIC_SCALE DECIMAL_DIGITS, " + + "NUMERIC_PRECISION_RADIX NUM_PREC_RADIX, " + + "NULLABLE, " + + "REMARKS, " + + "COLUMN_DEFAULT COLUMN_DEF, " + + "DATA_TYPE SQL_DATA_TYPE, " + + "ZERO() SQL_DATETIME_SUB, " + + "CHARACTER_OCTET_LENGTH CHAR_OCTET_LENGTH, " + + "ORDINAL_POSITION, " + + "IS_NULLABLE IS_NULLABLE, " + + "CAST(SOURCE_DATA_TYPE AS VARCHAR) SCOPE_CATALOG, " + + "CAST(SOURCE_DATA_TYPE AS VARCHAR) SCOPE_SCHEMA, " + + "CAST(SOURCE_DATA_TYPE AS VARCHAR) SCOPE_TABLE, " + + "SOURCE_DATA_TYPE, " + + "CASE WHEN SEQUENCE_NAME IS NULL THEN " + + "CAST(? AS VARCHAR) ELSE CAST(? AS VARCHAR) END IS_AUTOINCREMENT, " + + "CAST(SOURCE_DATA_TYPE AS VARCHAR) SCOPE_CATLOG " + + "FROM INFORMATION_SCHEMA.COLUMNS " + + "WHERE TABLE_CATALOG LIKE ? ESCAPE ? " + + "AND TABLE_SCHEMA LIKE ? ESCAPE ? " + + "AND TABLE_NAME LIKE ? ESCAPE ? " + + "AND COLUMN_NAME LIKE ? ESCAPE ? 
" + + "ORDER BY TABLE_SCHEM, TABLE_NAME, ORDINAL_POSITION"; + String synonymSql = "SELECT " + + "s.SYNONYM_CATALOG TABLE_CAT, " + + "s.SYNONYM_SCHEMA TABLE_SCHEM, " + + "s.SYNONYM_NAME TABLE_NAME, " + + "c.COLUMN_NAME, " + + "c.DATA_TYPE, " + + "c.TYPE_NAME, " + + "c.CHARACTER_MAXIMUM_LENGTH COLUMN_SIZE, " + + "c.CHARACTER_MAXIMUM_LENGTH BUFFER_LENGTH, " + + "c.NUMERIC_SCALE DECIMAL_DIGITS, " + + "c.NUMERIC_PRECISION_RADIX NUM_PREC_RADIX, " + + "c.NULLABLE, " + + "c.REMARKS, " + + "c.COLUMN_DEFAULT COLUMN_DEF, " + + "c.DATA_TYPE SQL_DATA_TYPE, " + + "ZERO() SQL_DATETIME_SUB, " + + "c.CHARACTER_OCTET_LENGTH CHAR_OCTET_LENGTH, " + + "c.ORDINAL_POSITION, " + + "c.IS_NULLABLE IS_NULLABLE, " + + "CAST(c.SOURCE_DATA_TYPE AS VARCHAR) SCOPE_CATALOG, " + + "CAST(c.SOURCE_DATA_TYPE AS VARCHAR) SCOPE_SCHEMA, " + + "CAST(c.SOURCE_DATA_TYPE AS VARCHAR) SCOPE_TABLE, " + + "c.SOURCE_DATA_TYPE, " + + "CASE WHEN c.SEQUENCE_NAME IS NULL THEN " + + "CAST(? AS VARCHAR) ELSE CAST(? AS VARCHAR) END IS_AUTOINCREMENT, " + + "CAST(c.SOURCE_DATA_TYPE AS VARCHAR) SCOPE_CATLOG " + + "FROM INFORMATION_SCHEMA.COLUMNS c JOIN INFORMATION_SCHEMA.SYNONYMS s ON " + + "s.SYNONYM_FOR = c.TABLE_NAME " + + "AND s.SYNONYM_FOR_SCHEMA = c.TABLE_SCHEMA " + + "WHERE s.SYNONYM_CATALOG LIKE ? ESCAPE ? " + + "AND s.SYNONYM_SCHEMA LIKE ? ESCAPE ? " + + "AND s.SYNONYM_NAME LIKE ? ESCAPE ? " + + "AND c.COLUMN_NAME LIKE ? ESCAPE ? 
"; + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "TABLE_CAT, " + + "TABLE_SCHEM, " + + "TABLE_NAME, " + + "COLUMN_NAME, " + + "DATA_TYPE, " + + "TYPE_NAME, " + + "COLUMN_SIZE, " + + "BUFFER_LENGTH, " + + "DECIMAL_DIGITS, " + + "NUM_PREC_RADIX, " + + "NULLABLE, " + + "REMARKS, " + + "COLUMN_DEF, " + + "SQL_DATA_TYPE, " + + "SQL_DATETIME_SUB, " + + "CHAR_OCTET_LENGTH, " + + "ORDINAL_POSITION, " + + "IS_NULLABLE, " + + "SCOPE_CATALOG, " + + "SCOPE_SCHEMA, " + + "SCOPE_TABLE, " + + "SOURCE_DATA_TYPE, " + + "IS_AUTOINCREMENT, " + + "SCOPE_CATLOG " + + "FROM ((" + tableSql + ") UNION (" + synonymSql + + ")) ORDER BY TABLE_SCHEM, TABLE_NAME, ORDINAL_POSITION"); + prep.setString(1, "NO"); + prep.setString(2, "YES"); + prep.setString(3, getCatalogPattern(catalogPattern)); + prep.setString(4, "\\"); + prep.setString(5, getSchemaPattern(schemaPattern)); + prep.setString(6, "\\"); + prep.setString(7, getPattern(tableNamePattern)); + prep.setString(8, "\\"); + prep.setString(9, getPattern(columnNamePattern)); + prep.setString(10, "\\"); + prep.setString(11, "NO"); + prep.setString(12, "YES"); + prep.setString(13, getCatalogPattern(catalogPattern)); + prep.setString(14, "\\"); + prep.setString(15, getSchemaPattern(schemaPattern)); + prep.setString(16, "\\"); + prep.setString(17, getPattern(tableNamePattern)); + prep.setString(18, "\\"); + prep.setString(19, getPattern(columnNamePattern)); + prep.setString(20, "\\"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of indexes for this database. The primary key index (if + * there is one) is also listed, with the name PRIMARY_KEY. The result set + * is sorted by NON_UNIQUE ('false' first), TYPE, TABLE_SCHEM, INDEX_NAME, + * and ORDINAL_POSITION. + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) table catalog</li>
+ * <li>2 TABLE_SCHEM (String) table schema</li>
+ * <li>3 TABLE_NAME (String) table name</li>
+ * <li>4 NON_UNIQUE (boolean) 'true' if non-unique</li>
+ * <li>5 INDEX_QUALIFIER (String) index catalog</li>
+ * <li>6 INDEX_NAME (String) index name</li>
+ * <li>7 TYPE (short) the index type (always tableIndexOther)</li>
+ * <li>8 ORDINAL_POSITION (short) column index (1, 2, ...)</li>
+ * <li>9 COLUMN_NAME (String) column name</li>
+ * <li>10 ASC_OR_DESC (String) ascending or descending (always 'A')</li>
+ * <li>11 CARDINALITY (int) number of unique values</li>
+ * <li>12 PAGES (int) number of pages used (always 0)</li>
+ * <li>13 FILTER_CONDITION (String) filter condition (always empty)</li>
+ * <li>14 SORT_TYPE (int) the sort type bit map: 1=DESCENDING, 2=NULLS_FIRST, 4=NULLS_LAST</li>
+ * </ul>
    + * + * @param catalogPattern null or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableName table name (must be specified) + * @param unique only unique indexes + * @param approximate is ignored + * @return the list of indexes and columns + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getIndexInfo(String catalogPattern, String schemaPattern, + String tableName, boolean unique, boolean approximate) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getIndexInfo(" + quote(catalogPattern) + ", " + + quote(schemaPattern) + ", " + quote(tableName) + ", " + + unique + ", " + approximate + ");"); + } + String uniqueCondition; + if (unique) { + uniqueCondition = "NON_UNIQUE=FALSE"; + } else { + uniqueCondition = "TRUE"; + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "TABLE_CATALOG TABLE_CAT, " + + "TABLE_SCHEMA TABLE_SCHEM, " + + "TABLE_NAME, " + + "NON_UNIQUE, " + + "TABLE_CATALOG INDEX_QUALIFIER, " + + "INDEX_NAME, " + + "INDEX_TYPE TYPE, " + + "ORDINAL_POSITION, " + + "COLUMN_NAME, " + + "ASC_OR_DESC, " + // TODO meta data for number of unique values in an index + + "CARDINALITY, " + + "PAGES, " + + "FILTER_CONDITION, " + + "SORT_TYPE " + + "FROM INFORMATION_SCHEMA.INDEXES " + + "WHERE TABLE_CATALOG LIKE ? ESCAPE ? " + + "AND TABLE_SCHEMA LIKE ? ESCAPE ? " + + "AND (" + uniqueCondition + ") " + + "AND TABLE_NAME = ? " + + "ORDER BY NON_UNIQUE, TYPE, TABLE_SCHEM, INDEX_NAME, ORDINAL_POSITION"); + prep.setString(1, getCatalogPattern(catalogPattern)); + prep.setString(2, "\\"); + prep.setString(3, getSchemaPattern(schemaPattern)); + prep.setString(4, "\\"); + prep.setString(5, tableName); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the primary key columns for a table. 
The result set is sorted by + * TABLE_SCHEM and COLUMN_NAME (and not by KEY_SEQ). + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) table catalog</li>
+ * <li>2 TABLE_SCHEM (String) table schema</li>
+ * <li>3 TABLE_NAME (String) table name</li>
+ * <li>4 COLUMN_NAME (String) column name</li>
+ * <li>5 KEY_SEQ (short) the column index of this column (1,2,...)</li>
+ * <li>6 PK_NAME (String) the name of the primary key index</li>
+ * </ul>
+ * + * @param catalogPattern null or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableName table name (must be specified) + * @return the list of primary key columns + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getPrimaryKeys(String catalogPattern, + String schemaPattern, String tableName) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getPrimaryKeys(" + +quote(catalogPattern)+", " + +quote(schemaPattern)+", " + +quote(tableName)+");"); + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "TABLE_CATALOG TABLE_CAT, " + + "TABLE_SCHEMA TABLE_SCHEM, " + + "TABLE_NAME, " + + "COLUMN_NAME, " + + "ORDINAL_POSITION KEY_SEQ, " + + "IFNULL(CONSTRAINT_NAME, INDEX_NAME) PK_NAME " + + "FROM INFORMATION_SCHEMA.INDEXES " + + "WHERE TABLE_CATALOG LIKE ? ESCAPE ? " + + "AND TABLE_SCHEMA LIKE ? ESCAPE ? " + + "AND TABLE_NAME = ? " + + "AND PRIMARY_KEY = TRUE " + + "ORDER BY COLUMN_NAME"); + prep.setString(1, getCatalogPattern(catalogPattern)); + prep.setString(2, "\\"); + prep.setString(3, getSchemaPattern(schemaPattern)); + prep.setString(4, "\\"); + prep.setString(5, tableName); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if all procedures are callable. + * + * @return true + */ + @Override + public boolean allProceduresAreCallable() { + debugCodeCall("allProceduresAreCallable"); + return true; + } + + /** + * Checks if it is possible to query all tables returned by getTables. + * + * @return true + */ + @Override + public boolean allTablesAreSelectable() { + debugCodeCall("allTablesAreSelectable"); + return true; + } + + /** + * Returns the database URL for this connection.
+ * + * @return the url + */ + @Override + public String getURL() throws SQLException { + try { + debugCodeCall("getURL"); + return conn.getURL(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the user name as passed to DriverManager.getConnection(url, user, + * password). + * + * @return the user name + */ + @Override + public String getUserName() throws SQLException { + try { + debugCodeCall("getUserName"); + return conn.getUser(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the same as Connection.isReadOnly(). + * + * @return if read only optimization is switched on + */ + @Override + public boolean isReadOnly() throws SQLException { + try { + debugCodeCall("isReadOnly"); + return conn.isReadOnly(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if NULL is sorted high (bigger than anything that is not null). + * + * @return false by default; true if the system property h2.sortNullsHigh is + * set to true + */ + @Override + public boolean nullsAreSortedHigh() { + debugCodeCall("nullsAreSortedHigh"); + return SysProperties.SORT_NULLS_HIGH; + } + + /** + * Checks if NULL is sorted low (smaller than anything that is not null). + * + * @return true by default; false if the system property h2.sortNullsHigh is + * set to true + */ + @Override + public boolean nullsAreSortedLow() { + debugCodeCall("nullsAreSortedLow"); + return !SysProperties.SORT_NULLS_HIGH; + } + + /** + * Checks if NULL is sorted at the beginning (no matter if ASC or DESC is + * used). + * + * @return false + */ + @Override + public boolean nullsAreSortedAtStart() { + debugCodeCall("nullsAreSortedAtStart"); + return false; + } + + /** + * Checks if NULL is sorted at the end (no matter if ASC or DESC is used). 
+ * + * @return false + */ + @Override + public boolean nullsAreSortedAtEnd() { + debugCodeCall("nullsAreSortedAtEnd"); + return false; + } + + /** + * Returns the connection that created this object. + * + * @return the connection + */ + @Override + public Connection getConnection() { + debugCodeCall("getConnection"); + return conn; + } + + /** + * Gets the list of procedures. The result set is sorted by PROCEDURE_SCHEM, + * PROCEDURE_NAME, and NUM_INPUT_PARAMS. There are potentially multiple + * procedures with the same name, each with a different number of input + * parameters. + * + *
      + *
+ * <ul>
+ * <li>1 PROCEDURE_CAT (String) catalog</li>
+ * <li>2 PROCEDURE_SCHEM (String) schema</li>
+ * <li>3 PROCEDURE_NAME (String) name</li>
+ * <li>4 NUM_INPUT_PARAMS (int) the number of arguments</li>
+ * <li>5 NUM_OUTPUT_PARAMS (int) for future use, always 0</li>
+ * <li>6 NUM_RESULT_SETS (int) for future use, always 0</li>
+ * <li>7 REMARKS (String) description</li>
+ * <li>8 PROCEDURE_TYPE (short) if this procedure returns a result (procedureNoResult or procedureReturnsResult)</li>
+ * <li>9 SPECIFIC_NAME (String) name</li>
+ * </ul>
    + * + * @param catalogPattern null or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param procedureNamePattern the procedure name pattern + * @return the procedures + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getProcedures(String catalogPattern, String schemaPattern, + String procedureNamePattern) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getProcedures(" + +quote(catalogPattern)+", " + +quote(schemaPattern)+", " + +quote(procedureNamePattern)+");"); + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "ALIAS_CATALOG PROCEDURE_CAT, " + + "ALIAS_SCHEMA PROCEDURE_SCHEM, " + + "ALIAS_NAME PROCEDURE_NAME, " + + "COLUMN_COUNT NUM_INPUT_PARAMS, " + + "ZERO() NUM_OUTPUT_PARAMS, " + + "ZERO() NUM_RESULT_SETS, " + + "REMARKS, " + + "RETURNS_RESULT PROCEDURE_TYPE, " + + "ALIAS_NAME SPECIFIC_NAME " + + "FROM INFORMATION_SCHEMA.FUNCTION_ALIASES " + + "WHERE ALIAS_CATALOG LIKE ? ESCAPE ? " + + "AND ALIAS_SCHEMA LIKE ? ESCAPE ? " + + "AND ALIAS_NAME LIKE ? ESCAPE ? " + + "ORDER BY PROCEDURE_SCHEM, PROCEDURE_NAME, NUM_INPUT_PARAMS"); + prep.setString(1, getCatalogPattern(catalogPattern)); + prep.setString(2, "\\"); + prep.setString(3, getSchemaPattern(schemaPattern)); + prep.setString(4, "\\"); + prep.setString(5, getPattern(procedureNamePattern)); + prep.setString(6, "\\"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of procedure columns. The result set is sorted by + * PROCEDURE_SCHEM, PROCEDURE_NAME, NUM_INPUT_PARAMS, and POS. + * There are potentially multiple procedures with the same name, each with a + * different number of input parameters. + * + *
      + *
+ * <ul>
+ * <li>1 PROCEDURE_CAT (String) catalog</li>
+ * <li>2 PROCEDURE_SCHEM (String) schema</li>
+ * <li>3 PROCEDURE_NAME (String) name</li>
+ * <li>4 COLUMN_NAME (String) column name</li>
+ * <li>5 COLUMN_TYPE (short) column type (always DatabaseMetaData.procedureColumnIn)</li>
+ * <li>6 DATA_TYPE (short) sql type</li>
+ * <li>7 TYPE_NAME (String) type name</li>
+ * <li>8 PRECISION (int) precision</li>
+ * <li>9 LENGTH (int) length</li>
+ * <li>10 SCALE (short) scale</li>
+ * <li>11 RADIX (int) always 10</li>
+ * <li>12 NULLABLE (short) nullable (DatabaseMetaData.columnNoNulls for primitive data types, DatabaseMetaData.columnNullable otherwise)</li>
+ * <li>13 REMARKS (String) description</li>
+ * <li>14 COLUMN_DEF (String) always null</li>
+ * <li>15 SQL_DATA_TYPE (int) for future use, always 0</li>
+ * <li>16 SQL_DATETIME_SUB (int) for future use, always 0</li>
+ * <li>17 CHAR_OCTET_LENGTH (int) always null</li>
+ * <li>18 ORDINAL_POSITION (int) the parameter index starting from 1 (0 is the return value)</li>
+ * <li>19 IS_NULLABLE (String) always "YES"</li>
+ * <li>20 SPECIFIC_NAME (String) name</li>
+ * </ul>
+ * + * @param catalogPattern null or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param procedureNamePattern the procedure name pattern + * @param columnNamePattern the column name pattern + * @return the procedure columns + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getProcedureColumns(String catalogPattern, + String schemaPattern, String procedureNamePattern, + String columnNamePattern) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getProcedureColumns(" + +quote(catalogPattern)+", " + +quote(schemaPattern)+", " + +quote(procedureNamePattern)+", " + +quote(columnNamePattern)+");"); + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "ALIAS_CATALOG PROCEDURE_CAT, " + + "ALIAS_SCHEMA PROCEDURE_SCHEM, " + + "ALIAS_NAME PROCEDURE_NAME, " + + "COLUMN_NAME, " + + "COLUMN_TYPE, " + + "DATA_TYPE, " + + "TYPE_NAME, " + + "PRECISION, " + + "PRECISION LENGTH, " + + "SCALE, " + + "RADIX, " + + "NULLABLE, " + + "REMARKS, " + + "COLUMN_DEFAULT COLUMN_DEF, " + + "ZERO() SQL_DATA_TYPE, " + + "ZERO() SQL_DATETIME_SUB, " + + "ZERO() CHAR_OCTET_LENGTH, " + + "POS ORDINAL_POSITION, " + + "? IS_NULLABLE, " + + "ALIAS_NAME SPECIFIC_NAME " + + "FROM INFORMATION_SCHEMA.FUNCTION_COLUMNS " + + "WHERE ALIAS_CATALOG LIKE ? ESCAPE ? " + + "AND ALIAS_SCHEMA LIKE ? ESCAPE ? " + + "AND ALIAS_NAME LIKE ? ESCAPE ? " + + "AND COLUMN_NAME LIKE ? ESCAPE ? 
" + + "ORDER BY PROCEDURE_SCHEM, PROCEDURE_NAME, ORDINAL_POSITION"); + prep.setString(1, "YES"); + prep.setString(2, getCatalogPattern(catalogPattern)); + prep.setString(3, "\\"); + prep.setString(4, getSchemaPattern(schemaPattern)); + prep.setString(5, "\\"); + prep.setString(6, getPattern(procedureNamePattern)); + prep.setString(7, "\\"); + prep.setString(8, getPattern(columnNamePattern)); + prep.setString(9, "\\"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of schemas. + * The result set is sorted by TABLE_SCHEM. + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_SCHEM (String) schema name</li>
+ * <li>2 TABLE_CATALOG (String) catalog name</li>
+ * <li>3 IS_DEFAULT (boolean) if this is the default schema</li>
+ * </ul>
    + * + * @return the schema list + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getSchemas() throws SQLException { + try { + debugCodeCall("getSchemas"); + checkClosed(); + PreparedStatement prep = conn + .prepareAutoCloseStatement("SELECT " + + "SCHEMA_NAME TABLE_SCHEM, " + + "CATALOG_NAME TABLE_CATALOG, " + +" IS_DEFAULT " + + "FROM INFORMATION_SCHEMA.SCHEMATA " + + "ORDER BY SCHEMA_NAME"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of catalogs. + * The result set is sorted by TABLE_CAT. + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) catalog name</li>
+ * </ul>
    + * + * @return the catalog list + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getCatalogs() throws SQLException { + try { + debugCodeCall("getCatalogs"); + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement( + "SELECT CATALOG_NAME TABLE_CAT " + + "FROM INFORMATION_SCHEMA.CATALOGS"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of table types. This call returns a result set with five + * records: "SYSTEM TABLE", "TABLE", "VIEW", "TABLE LINK" and "EXTERNAL". + *
      + *
+ * <ul>
+ * <li>1 TABLE_TYPE (String) table type</li>
+ * </ul>
    + * + * @return the table types + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getTableTypes() throws SQLException { + try { + debugCodeCall("getTableTypes"); + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "TYPE TABLE_TYPE " + + "FROM INFORMATION_SCHEMA.TABLE_TYPES " + + "ORDER BY TABLE_TYPE"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of column privileges. The result set is sorted by + * COLUMN_NAME and PRIVILEGE + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) table catalog</li>
+ * <li>2 TABLE_SCHEM (String) table schema</li>
+ * <li>3 TABLE_NAME (String) table name</li>
+ * <li>4 COLUMN_NAME (String) column name</li>
+ * <li>5 GRANTOR (String) grantor of access</li>
+ * <li>6 GRANTEE (String) grantee of access</li>
+ * <li>7 PRIVILEGE (String) SELECT, INSERT, UPDATE, DELETE or REFERENCES (only one per row)</li>
+ * <li>8 IS_GRANTABLE (String) YES means the grantee can grant access to others</li>
+ * </ul>
    + * + * @param catalogPattern null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param table a table name (uppercase for unquoted names) + * @param columnNamePattern null (to get all objects) or a column name + * (uppercase for unquoted names) + * @return the list of privileges + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getColumnPrivileges(String catalogPattern, + String schemaPattern, String table, String columnNamePattern) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getColumnPrivileges(" + +quote(catalogPattern)+", " + +quote(schemaPattern)+", " + +quote(table)+", " + +quote(columnNamePattern)+");"); + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "TABLE_CATALOG TABLE_CAT, " + + "TABLE_SCHEMA TABLE_SCHEM, " + + "TABLE_NAME, " + + "COLUMN_NAME, " + + "GRANTOR, " + + "GRANTEE, " + + "PRIVILEGE_TYPE PRIVILEGE, " + + "IS_GRANTABLE " + + "FROM INFORMATION_SCHEMA.COLUMN_PRIVILEGES " + + "WHERE TABLE_CATALOG LIKE ? ESCAPE ? " + + "AND TABLE_SCHEMA LIKE ? ESCAPE ? " + + "AND TABLE_NAME = ? " + + "AND COLUMN_NAME LIKE ? ESCAPE ? " + + "ORDER BY COLUMN_NAME, PRIVILEGE"); + prep.setString(1, getCatalogPattern(catalogPattern)); + prep.setString(2, "\\"); + prep.setString(3, getSchemaPattern(schemaPattern)); + prep.setString(4, "\\"); + prep.setString(5, table); + prep.setString(6, getPattern(columnNamePattern)); + prep.setString(7, "\\"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of table privileges. The result set is sorted by + * TABLE_SCHEM, TABLE_NAME, and PRIVILEGE. + * + *
      + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) table catalog</li>
+ * <li>2 TABLE_SCHEM (String) table schema</li>
+ * <li>3 TABLE_NAME (String) table name</li>
+ * <li>4 GRANTOR (String) grantor of access</li>
+ * <li>5 GRANTEE (String) grantee of access</li>
+ * <li>6 PRIVILEGE (String) SELECT, INSERT, UPDATE, DELETE or REFERENCES (only one per row)</li>
+ * <li>7 IS_GRANTABLE (String) YES means the grantee can grant access to others</li>
+ * </ul>
+ * + * @param catalogPattern null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableNamePattern null (to get all objects) or a table name + * (uppercase for unquoted names) + * @return the list of privileges + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getTablePrivileges(String catalogPattern, + String schemaPattern, String tableNamePattern) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getTablePrivileges(" + +quote(catalogPattern)+", " + +quote(schemaPattern)+", " + +quote(tableNamePattern)+");"); + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "TABLE_CATALOG TABLE_CAT, " + + "TABLE_SCHEMA TABLE_SCHEM, " + + "TABLE_NAME, " + + "GRANTOR, " + + "GRANTEE, " + + "PRIVILEGE_TYPE PRIVILEGE, " + + "IS_GRANTABLE " + + "FROM INFORMATION_SCHEMA.TABLE_PRIVILEGES " + + "WHERE TABLE_CATALOG LIKE ? ESCAPE ? " + + "AND TABLE_SCHEMA LIKE ? ESCAPE ? " + + "AND TABLE_NAME LIKE ? ESCAPE ? " + + "ORDER BY TABLE_SCHEM, TABLE_NAME, PRIVILEGE"); + prep.setString(1, getCatalogPattern(catalogPattern)); + prep.setString(2, "\\"); + prep.setString(3, getSchemaPattern(schemaPattern)); + prep.setString(4, "\\"); + prep.setString(5, getPattern(tableNamePattern)); + prep.setString(6, "\\"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the list of columns that best identify a row in a table. + * The list is ordered by SCOPE. + * + *
      + *
+ * <ul>
+ * <li>1 SCOPE (short) scope of result (always bestRowSession)</li>
+ * <li>2 COLUMN_NAME (String) column name</li>
+ * <li>3 DATA_TYPE (short) SQL data type, see also java.sql.Types</li>
+ * <li>4 TYPE_NAME (String) type name</li>
+ * <li>5 COLUMN_SIZE (int) precision (values larger than 2 GB are returned as 2 GB)</li>
+ * <li>6 BUFFER_LENGTH (int) unused</li>
+ * <li>7 DECIMAL_DIGITS (short) scale</li>
+ * <li>8 PSEUDO_COLUMN (short) (always bestRowNotPseudo)</li>
+ * </ul>
    + * + * @param catalogPattern null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableName table name (must be specified) + * @param scope ignored + * @param nullable ignored + * @return the primary key index + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getBestRowIdentifier(String catalogPattern, + String schemaPattern, String tableName, int scope, boolean nullable) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getBestRowIdentifier(" + +quote(catalogPattern)+", " + +quote(schemaPattern)+", " + +quote(tableName)+", " + +scope+", "+nullable+");"); + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "CAST(? AS SMALLINT) SCOPE, " + + "C.COLUMN_NAME, " + + "C.DATA_TYPE, " + + "C.TYPE_NAME, " + + "C.CHARACTER_MAXIMUM_LENGTH COLUMN_SIZE, " + + "C.CHARACTER_MAXIMUM_LENGTH BUFFER_LENGTH, " + + "CAST(C.NUMERIC_SCALE AS SMALLINT) DECIMAL_DIGITS, " + + "CAST(? AS SMALLINT) PSEUDO_COLUMN " + + "FROM INFORMATION_SCHEMA.INDEXES I, " + +" INFORMATION_SCHEMA.COLUMNS C " + + "WHERE C.TABLE_NAME = I.TABLE_NAME " + + "AND C.COLUMN_NAME = I.COLUMN_NAME " + + "AND C.TABLE_CATALOG LIKE ? ESCAPE ? " + + "AND C.TABLE_SCHEMA LIKE ? ESCAPE ? " + + "AND C.TABLE_NAME = ? " + + "AND I.PRIMARY_KEY = TRUE " + + "ORDER BY SCOPE"); + // SCOPE + prep.setInt(1, DatabaseMetaData.bestRowSession); + // PSEUDO_COLUMN + prep.setInt(2, DatabaseMetaData.bestRowNotPseudo); + prep.setString(3, getCatalogPattern(catalogPattern)); + prep.setString(4, "\\"); + prep.setString(5, getSchemaPattern(schemaPattern)); + prep.setString(6, "\\"); + prep.setString(7, tableName); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Get the list of columns that are update when any value is updated. + * The result set is always empty. + * + *
      + *
+     * <li>1 SCOPE (int) not used
+     * </li>
+     * <li>2 COLUMN_NAME (String) column name
+     * </li>
+     * <li>3 DATA_TYPE (int) SQL data type - see also java.sql.Types
+     * </li>
+     * <li>4 TYPE_NAME (String) data type name
+     * </li>
+     * <li>5 COLUMN_SIZE (int) precision
+     * (values larger than 2 GB are returned as 2 GB)
+     * </li>
+     * <li>6 BUFFER_LENGTH (int) length (bytes)
+     * </li>
+     * <li>7 DECIMAL_DIGITS (int) scale
+     * </li>
+     * <li>8 PSEUDO_COLUMN (int) is this column a pseudo column
+     * </li>
+     * </ul>
+     *
+     * @param catalog null (to get all objects) or the catalog name
+     * @param schema null (to get all objects) or a schema name
+     * @param tableName table name (must be specified)
+     * @return an empty result set
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public ResultSet getVersionColumns(String catalog, String schema,
+            String tableName) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("getVersionColumns("
+                        +quote(catalog)+", "
+                        +quote(schema)+", "
+                        +quote(tableName)+");");
+            }
+            checkClosed();
+            PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT "
+                    + "ZERO() SCOPE, "
+                    + "COLUMN_NAME, "
+                    + "CAST(DATA_TYPE AS INT) DATA_TYPE, "
+                    + "TYPE_NAME, "
+                    + "NUMERIC_PRECISION COLUMN_SIZE, "
+                    + "NUMERIC_PRECISION BUFFER_LENGTH, "
+                    + "NUMERIC_PRECISION DECIMAL_DIGITS, "
+                    + "ZERO() PSEUDO_COLUMN "
+                    + "FROM INFORMATION_SCHEMA.COLUMNS "
+                    + "WHERE FALSE");
+            return prep.executeQuery();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the list of primary key columns that are referenced by a table. The
+     * result set is sorted by PKTABLE_CAT, PKTABLE_SCHEM, PKTABLE_NAME,
+     * FK_NAME, KEY_SEQ.
+     *
+     * <ul>
+     * <li>1 PKTABLE_CAT (String) primary catalog
+     * </li>
+     * <li>2 PKTABLE_SCHEM (String) primary schema
+     * </li>
+     * <li>3 PKTABLE_NAME (String) primary table
+     * </li>
+     * <li>4 PKCOLUMN_NAME (String) primary column
+     * </li>
+     * <li>5 FKTABLE_CAT (String) foreign catalog
+     * </li>
+     * <li>6 FKTABLE_SCHEM (String) foreign schema
+     * </li>
+     * <li>7 FKTABLE_NAME (String) foreign table
+     * </li>
+     * <li>8 FKCOLUMN_NAME (String) foreign column
+     * </li>
+     * <li>9 KEY_SEQ (short) sequence number (1, 2, ...)
+     * </li>
+     * <li>10 UPDATE_RULE (short) action on update (see
+     * DatabaseMetaData.importedKey...)
+     * </li>
+     * <li>11 DELETE_RULE (short) action on delete (see
+     * DatabaseMetaData.importedKey...)
+     * </li>
+     * <li>12 FK_NAME (String) foreign key name
+     * </li>
+     * <li>13 PK_NAME (String) primary key name
+     * </li>
+     * <li>14 DEFERRABILITY (short) deferrable or not (always
+     * importedKeyNotDeferrable)
+     * </li>
+     * </ul>
+     *
+     * @param catalogPattern null (to get all objects) or the catalog name
+     * @param schemaPattern the schema name of the foreign table
+     * @param tableName the name of the foreign table
+     * @return the result set
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public ResultSet getImportedKeys(String catalogPattern,
+            String schemaPattern, String tableName) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("getImportedKeys("
+                        +quote(catalogPattern)+", "
+                        +quote(schemaPattern)+", "
+                        +quote(tableName)+");");
+            }
+            checkClosed();
+            PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT "
+                    + "PKTABLE_CATALOG PKTABLE_CAT, "
+                    + "PKTABLE_SCHEMA PKTABLE_SCHEM, "
+                    + "PKTABLE_NAME PKTABLE_NAME, "
+                    + "PKCOLUMN_NAME, "
+                    + "FKTABLE_CATALOG FKTABLE_CAT, "
+                    + "FKTABLE_SCHEMA FKTABLE_SCHEM, "
+                    + "FKTABLE_NAME, "
+                    + "FKCOLUMN_NAME, "
+                    + "ORDINAL_POSITION KEY_SEQ, "
+                    + "UPDATE_RULE, "
+                    + "DELETE_RULE, "
+                    + "FK_NAME, "
+                    + "PK_NAME, "
+                    + "DEFERRABILITY "
+                    + "FROM INFORMATION_SCHEMA.CROSS_REFERENCES "
+                    + "WHERE FKTABLE_CATALOG LIKE ? ESCAPE ? "
+                    + "AND FKTABLE_SCHEMA LIKE ? ESCAPE ? "
+                    + "AND FKTABLE_NAME = ? "
+                    + "ORDER BY PKTABLE_CAT, PKTABLE_SCHEM, PKTABLE_NAME, FK_NAME, KEY_SEQ");
+            prep.setString(1, getCatalogPattern(catalogPattern));
+            prep.setString(2, "\\");
+            prep.setString(3, getSchemaPattern(schemaPattern));
+            prep.setString(4, "\\");
+            prep.setString(5, tableName);
+            return prep.executeQuery();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the list of foreign key columns that reference a table. The result
+     * set is sorted by FKTABLE_CAT, FKTABLE_SCHEM, FKTABLE_NAME, FK_NAME,
+     * KEY_SEQ.
+     *
+     * <ul>
+     * <li>1 PKTABLE_CAT (String) primary catalog
+     * </li>
+     * <li>2 PKTABLE_SCHEM (String) primary schema
+     * </li>
+     * <li>3 PKTABLE_NAME (String) primary table
+     * </li>
+     * <li>4 PKCOLUMN_NAME (String) primary column
+     * </li>
+     * <li>5 FKTABLE_CAT (String) foreign catalog
+     * </li>
+     * <li>6 FKTABLE_SCHEM (String) foreign schema
+     * </li>
+     * <li>7 FKTABLE_NAME (String) foreign table
+     * </li>
+     * <li>8 FKCOLUMN_NAME (String) foreign column
+     * </li>
+     * <li>9 KEY_SEQ (short) sequence number (1,2,...)
+     * </li>
+     * <li>10 UPDATE_RULE (short) action on update (see
+     * DatabaseMetaData.importedKey...)
+     * </li>
+     * <li>11 DELETE_RULE (short) action on delete (see
+     * DatabaseMetaData.importedKey...)
+     * </li>
+     * <li>12 FK_NAME (String) foreign key name
+     * </li>
+     * <li>13 PK_NAME (String) primary key name
+     * </li>
+     * <li>14 DEFERRABILITY (short) deferrable or not (always
+     * importedKeyNotDeferrable)
+     * </li>
+     * </ul>
+     *
+     * @param catalogPattern null or the catalog name
+     * @param schemaPattern the schema name of the primary table
+     * @param tableName the name of the primary table
+     * @return the result set
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public ResultSet getExportedKeys(String catalogPattern,
+            String schemaPattern, String tableName) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("getExportedKeys("
+                        +quote(catalogPattern)+", "
+                        +quote(schemaPattern)+", "
+                        +quote(tableName)+");");
+            }
+            checkClosed();
+            PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT "
+                    + "PKTABLE_CATALOG PKTABLE_CAT, "
+                    + "PKTABLE_SCHEMA PKTABLE_SCHEM, "
+                    + "PKTABLE_NAME PKTABLE_NAME, "
+                    + "PKCOLUMN_NAME, "
+                    + "FKTABLE_CATALOG FKTABLE_CAT, "
+                    + "FKTABLE_SCHEMA FKTABLE_SCHEM, "
+                    + "FKTABLE_NAME, "
+                    + "FKCOLUMN_NAME, "
+                    + "ORDINAL_POSITION KEY_SEQ, "
+                    + "UPDATE_RULE, "
+                    + "DELETE_RULE, "
+                    + "FK_NAME, "
+                    + "PK_NAME, "
+                    + "DEFERRABILITY "
+                    + "FROM INFORMATION_SCHEMA.CROSS_REFERENCES "
+                    + "WHERE PKTABLE_CATALOG LIKE ? ESCAPE ? "
+                    + "AND PKTABLE_SCHEMA LIKE ? ESCAPE ? "
+                    + "AND PKTABLE_NAME = ? "
+                    + "ORDER BY FKTABLE_CAT, FKTABLE_SCHEM, FKTABLE_NAME, FK_NAME, KEY_SEQ");
+            prep.setString(1, getCatalogPattern(catalogPattern));
+            prep.setString(2, "\\");
+            prep.setString(3, getSchemaPattern(schemaPattern));
+            prep.setString(4, "\\");
+            prep.setString(5, tableName);
+            return prep.executeQuery();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the list of foreign key columns that reference a table, as well as
+     * the list of primary key columns that are referenced by a table. The
+     * result set is sorted by FKTABLE_CAT, FKTABLE_SCHEM, FKTABLE_NAME,
+     * FK_NAME, KEY_SEQ.
+     *
+     * <ul>
+     * <li>1 PKTABLE_CAT (String) primary catalog
+     * </li>
+     * <li>2 PKTABLE_SCHEM (String) primary schema
+     * </li>
+     * <li>3 PKTABLE_NAME (String) primary table
+     * </li>
+     * <li>4 PKCOLUMN_NAME (String) primary column
+     * </li>
+     * <li>5 FKTABLE_CAT (String) foreign catalog
+     * </li>
+     * <li>6 FKTABLE_SCHEM (String) foreign schema
+     * </li>
+     * <li>7 FKTABLE_NAME (String) foreign table
+     * </li>
+     * <li>8 FKCOLUMN_NAME (String) foreign column
+     * </li>
+     * <li>9 KEY_SEQ (short) sequence number (1,2,...)
+     * </li>
+     * <li>10 UPDATE_RULE (short) action on update (see
+     * DatabaseMetaData.importedKey...)
+     * </li>
+     * <li>11 DELETE_RULE (short) action on delete (see
+     * DatabaseMetaData.importedKey...)
+     * </li>
+     * <li>12 FK_NAME (String) foreign key name
+     * </li>
+     * <li>13 PK_NAME (String) primary key name
+     * </li>
+     * <li>14 DEFERRABILITY (short) deferrable or not (always
+     * importedKeyNotDeferrable)
+     * </li>
+     * </ul>
+     *
+     * @param primaryCatalogPattern null or the catalog name
+     * @param primarySchemaPattern the schema name of the primary table
+     *            (optional)
+     * @param primaryTable the name of the primary table (must be specified)
+     * @param foreignCatalogPattern null or the catalog name
+     * @param foreignSchemaPattern the schema name of the foreign table
+     *            (optional)
+     * @param foreignTable the name of the foreign table (must be specified)
+     * @return the result set
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public ResultSet getCrossReference(String primaryCatalogPattern,
+            String primarySchemaPattern, String primaryTable,
+            String foreignCatalogPattern, String foreignSchemaPattern,
+            String foreignTable) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("getCrossReference("
+                        +quote(primaryCatalogPattern)+", "
+                        +quote(primarySchemaPattern)+", "
+                        +quote(primaryTable)+", "
+                        +quote(foreignCatalogPattern)+", "
+                        +quote(foreignSchemaPattern)+", "
+                        +quote(foreignTable)+");");
+            }
+            checkClosed();
+            PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT "
+                    + "PKTABLE_CATALOG PKTABLE_CAT, "
+                    + "PKTABLE_SCHEMA PKTABLE_SCHEM, "
+                    + "PKTABLE_NAME PKTABLE_NAME, "
+                    + "PKCOLUMN_NAME, "
+                    + "FKTABLE_CATALOG FKTABLE_CAT, "
+                    + "FKTABLE_SCHEMA FKTABLE_SCHEM, "
+                    + "FKTABLE_NAME, "
+                    + "FKCOLUMN_NAME, "
+                    + "ORDINAL_POSITION KEY_SEQ, "
+                    + "UPDATE_RULE, "
+                    + "DELETE_RULE, "
+                    + "FK_NAME, "
+                    + "PK_NAME, "
+                    + "DEFERRABILITY "
+                    + "FROM INFORMATION_SCHEMA.CROSS_REFERENCES "
+                    + "WHERE PKTABLE_CATALOG LIKE ? ESCAPE ? "
+                    + "AND PKTABLE_SCHEMA LIKE ? ESCAPE ? "
+                    + "AND PKTABLE_NAME = ? "
+                    + "AND FKTABLE_CATALOG LIKE ? ESCAPE ? "
+                    + "AND FKTABLE_SCHEMA LIKE ? ESCAPE ? "
+                    + "AND FKTABLE_NAME = ? "
+                    + "ORDER BY FKTABLE_CAT, FKTABLE_SCHEM, FKTABLE_NAME, FK_NAME, KEY_SEQ");
+            prep.setString(1, getCatalogPattern(primaryCatalogPattern));
+            prep.setString(2, "\\");
+            prep.setString(3, getSchemaPattern(primarySchemaPattern));
+            prep.setString(4, "\\");
+            prep.setString(5, primaryTable);
+            prep.setString(6, getCatalogPattern(foreignCatalogPattern));
+            prep.setString(7, "\\");
+            prep.setString(8, getSchemaPattern(foreignSchemaPattern));
+            prep.setString(9, "\\");
+            prep.setString(10, foreignTable);
+            return prep.executeQuery();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the list of user-defined data types.
+     * This call returns an empty result set.
+     *
+     * <ul>
+     * <li>1 TYPE_CAT (String) catalog
+     * </li>
+     * <li>2 TYPE_SCHEM (String) schema
+     * </li>
+     * <li>3 TYPE_NAME (String) type name
+     * </li>
+     * <li>4 CLASS_NAME (String) Java class
+     * </li>
+     * <li>5 DATA_TYPE (short) SQL Type - see also java.sql.Types
+     * </li>
+     * <li>6 REMARKS (String) description
+     * </li>
+     * <li>7 BASE_TYPE (short) base type - see also java.sql.Types
+     * </li>
+     * </ul>
+     *
+     * @param catalog ignored
+     * @param schemaPattern ignored
+     * @param typeNamePattern ignored
+     * @param types ignored
+     * @return an empty result set
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public ResultSet getUDTs(String catalog, String schemaPattern,
+            String typeNamePattern, int[] types) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("getUDTs("
+                        +quote(catalog)+", "
+                        +quote(schemaPattern)+", "
+                        +quote(typeNamePattern)+", "
+                        +quoteIntArray(types)+");");
+            }
+            checkClosed();
+            PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT "
+                    + "CAST(NULL AS VARCHAR) TYPE_CAT, "
+                    + "CAST(NULL AS VARCHAR) TYPE_SCHEM, "
+                    + "CAST(NULL AS VARCHAR) TYPE_NAME, "
+                    + "CAST(NULL AS VARCHAR) CLASS_NAME, "
+                    + "CAST(NULL AS SMALLINT) DATA_TYPE, "
+                    + "CAST(NULL AS VARCHAR) REMARKS, "
+                    + "CAST(NULL AS SMALLINT) BASE_TYPE "
+                    + "FROM DUAL WHERE FALSE");
+            return prep.executeQuery();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the list of data types. The result set is sorted by DATA_TYPE and
+     * afterwards by how closely the data type maps to the corresponding JDBC
+     * SQL type (best match first).
+     *
+     * <ul>
+     * <li>1 TYPE_NAME (String) type name
+     * </li>
+     * <li>2 DATA_TYPE (short) SQL data type - see also java.sql.Types
+     * </li>
+     * <li>3 PRECISION (int) maximum precision
+     * </li>
+     * <li>4 LITERAL_PREFIX (String) prefix used to quote a literal
+     * </li>
+     * <li>5 LITERAL_SUFFIX (String) suffix used to quote a literal
+     * </li>
+     * <li>6 CREATE_PARAMS (String) parameters used (may be null)
+     * </li>
+     * <li>7 NULLABLE (short) typeNoNulls (NULL not allowed) or typeNullable
+     * </li>
+     * <li>8 CASE_SENSITIVE (boolean) case sensitive
+     * </li>
+     * <li>9 SEARCHABLE (short) typeSearchable
+     * </li>
+     * <li>10 UNSIGNED_ATTRIBUTE (boolean) unsigned
+     * </li>
+     * <li>11 FIXED_PREC_SCALE (boolean) fixed precision
+     * </li>
+     * <li>12 AUTO_INCREMENT (boolean) auto increment
+     * </li>
+     * <li>13 LOCAL_TYPE_NAME (String) localized version of the data type
+     * </li>
+     * <li>14 MINIMUM_SCALE (short) minimum scale
+     * </li>
+     * <li>15 MAXIMUM_SCALE (short) maximum scale
+     * </li>
+     * <li>16 SQL_DATA_TYPE (int) unused
+     * </li>
+     * <li>17 SQL_DATETIME_SUB (int) unused
+     * </li>
+     * <li>18 NUM_PREC_RADIX (int) 2 for binary, 10 for decimal
+     * </li>
+     * </ul>
+     *
+     * @return the list of data types
+     * @throws SQLException if the connection is closed
+     */
+    @Override
+    public ResultSet getTypeInfo() throws SQLException {
+        try {
+            debugCodeCall("getTypeInfo");
+            checkClosed();
+            PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT "
+                    + "TYPE_NAME, "
+                    + "DATA_TYPE, "
+                    + "PRECISION, "
+                    + "PREFIX LITERAL_PREFIX, "
+                    + "SUFFIX LITERAL_SUFFIX, "
+                    + "PARAMS CREATE_PARAMS, "
+                    + "NULLABLE, "
+                    + "CASE_SENSITIVE, "
+                    + "SEARCHABLE, "
+                    + "FALSE UNSIGNED_ATTRIBUTE, "
+                    + "FALSE FIXED_PREC_SCALE, "
+                    + "AUTO_INCREMENT, "
+                    + "TYPE_NAME LOCAL_TYPE_NAME, "
+                    + "MINIMUM_SCALE, "
+                    + "MAXIMUM_SCALE, "
+                    + "DATA_TYPE SQL_DATA_TYPE, "
+                    + "ZERO() SQL_DATETIME_SUB, "
+                    + "RADIX NUM_PREC_RADIX "
+                    + "FROM INFORMATION_SCHEMA.TYPE_INFO "
+                    + "ORDER BY DATA_TYPE, POS");
+            return prep.executeQuery();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Checks if this database stores data in local files.
+     *
+     * @return true
+     */
+    @Override
+    public boolean usesLocalFiles() {
+        debugCodeCall("usesLocalFiles");
+        return true;
+    }
+
+    /**
+     * Checks if this database uses one file per table.
+     *
+     * @return false
+     */
+    @Override
+    public boolean usesLocalFilePerTable() {
+        debugCodeCall("usesLocalFilePerTable");
+        return false;
+    }
+
+    /**
+     * Returns the string used to quote identifiers.
+     *
+     * @return a double quote
+     */
+    @Override
+    public String getIdentifierQuoteString() {
+        debugCodeCall("getIdentifierQuoteString");
+        return "\"";
+    }
+
+    /**
+     * Gets the comma-separated list of all SQL keywords that are not supported
+     * as table/column/index names, in addition to the SQL-2003 keywords. The
+     * list returned is:
+     * <pre>
+     * LIMIT,MINUS,OFFSET,ROWNUM,SYSDATE,SYSTIME,SYSTIMESTAMP,TODAY
+     * </pre>
+     * The complete list of keywords (including SQL-2003 keywords) is:
+     * <pre>
+     * ALL, CHECK, CONSTRAINT, CROSS, CURRENT_DATE, CURRENT_TIME,
+     * CURRENT_TIMESTAMP, DISTINCT, EXCEPT, EXISTS, FALSE, FETCH, FOR, FOREIGN,
+     * FROM, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, LIMIT,
+     * MINUS, NATURAL, NOT, NULL, OFFSET, ON, ORDER, PRIMARY, ROWNUM, SELECT,
+     * SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, UNIQUE, WHERE, WITH
+     * </pre>
+     *
+     * @return a list of the additional keywords
+     */
+    @Override
+    public String getSQLKeywords() {
+        debugCodeCall("getSQLKeywords");
+        return "LIMIT,MINUS,OFFSET,ROWNUM,SYSDATE,SYSTIME,SYSTIMESTAMP,TODAY";
+    }
+
+    /**
+     * Returns the list of numeric functions supported by this database.
+     *
+     * @return the list
+     */
+    @Override
+    public String getNumericFunctions() throws SQLException {
+        debugCodeCall("getNumericFunctions");
+        return getFunctions("Functions (Numeric)");
+    }
+
+    /**
+     * Returns the list of string functions supported by this database.
+     *
+     * @return the list
+     */
+    @Override
+    public String getStringFunctions() throws SQLException {
+        debugCodeCall("getStringFunctions");
+        return getFunctions("Functions (String)");
+    }
+
+    /**
+     * Returns the list of system functions supported by this database.
+     *
+     * @return the list
+     */
+    @Override
+    public String getSystemFunctions() throws SQLException {
+        debugCodeCall("getSystemFunctions");
+        return getFunctions("Functions (System)");
+    }
+
+    /**
+     * Returns the list of date and time functions supported by this database.
+     *
+     * @return the list
+     */
+    @Override
+    public String getTimeDateFunctions() throws SQLException {
+        debugCodeCall("getTimeDateFunctions");
+        return getFunctions("Functions (Time and Date)");
+    }
+
+    private String getFunctions(String section) throws SQLException {
+        try {
+            checkClosed();
+            PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT TOPIC "
+                    + "FROM INFORMATION_SCHEMA.HELP WHERE SECTION = ?");
+            prep.setString(1, section);
+            ResultSet rs = prep.executeQuery();
+            StatementBuilder buff = new StatementBuilder();
+            while (rs.next()) {
+                String s = rs.getString(1).trim();
+                String[] array = StringUtils.arraySplit(s, ',', true);
+                for (String a : array) {
+                    buff.appendExceptFirst(",");
+                    String f = a.trim();
+                    if (f.indexOf(' ') >= 0) {
+                        // remove 'Function' from 'INSERT Function'
+                        f = f.substring(0, f.indexOf(' ')).trim();
+                    }
+                    buff.append(f);
+                }
+            }
+            rs.close();
+            prep.close();
+            return buff.toString();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Returns the default escape character for DatabaseMetaData search
+     * patterns.
+     *
+     * @return the default escape character (always '\', independent of the
+     *         mode)
+     */
+    @Override
+    public String getSearchStringEscape() {
+        debugCodeCall("getSearchStringEscape");
+        return "\\";
+    }
+
+    /**
+     * Returns the characters that are allowed for identifiers in addition to
+     * A-Z, a-z, 0-9 and '_'.
+     *
+     * @return an empty String ("")
+     */
+    @Override
+    public String getExtraNameCharacters() {
+        debugCodeCall("getExtraNameCharacters");
+        return "";
+    }
+
+    /**
+     * Returns whether alter table with add column is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsAlterTableWithAddColumn() {
+        debugCodeCall("supportsAlterTableWithAddColumn");
+        return true;
+    }
+
+    /**
+     * Returns whether alter table with drop column is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsAlterTableWithDropColumn() {
+        debugCodeCall("supportsAlterTableWithDropColumn");
+        return true;
+    }
+
+    /**
+     * Returns whether column aliasing is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsColumnAliasing() {
+        debugCodeCall("supportsColumnAliasing");
+        return true;
+    }
+
+    /**
+     * Returns whether NULL+1 is NULL or not.
+     *
+     * @return true
+     */
+    @Override
+    public boolean nullPlusNonNullIsNull() {
+        debugCodeCall("nullPlusNonNullIsNull");
+        return true;
+    }
+
+    /**
+     * Returns whether CONVERT is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsConvert() {
+        debugCodeCall("supportsConvert");
+        return true;
+    }
+
+    /**
+     * Returns whether CONVERT is supported for one datatype to another.
+     *
+     * @param fromType the source SQL type
+     * @param toType the target SQL type
+     * @return true
+     */
+    @Override
+    public boolean supportsConvert(int fromType, int toType) {
+        if (isDebugEnabled()) {
+            debugCode("supportsConvert("+fromType+", "+toType+");");
+        }
+        return true;
+    }
+
+    /**
+     * Returns whether table correlation names (table alias) are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsTableCorrelationNames() {
+        debugCodeCall("supportsTableCorrelationNames");
+        return true;
+    }
+
+    /**
+     * Returns whether table correlation names (table alias) are restricted to
+     * be different than table names.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsDifferentTableCorrelationNames() {
+        debugCodeCall("supportsDifferentTableCorrelationNames");
+        return false;
+    }
+
+    /**
+     * Returns whether expressions in ORDER BY are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsExpressionsInOrderBy() {
+        debugCodeCall("supportsExpressionsInOrderBy");
+        return true;
+    }
+
+    /**
+     * Returns whether ORDER BY is supported if the column is not in the SELECT
+     * list.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsOrderByUnrelated() {
+        debugCodeCall("supportsOrderByUnrelated");
+        return true;
+    }
+
+    /**
+     * Returns whether GROUP BY is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsGroupBy() {
+        debugCodeCall("supportsGroupBy");
+        return true;
+    }
+
+    /**
+     * Returns whether GROUP BY is supported if the column is not in the SELECT
+     * list.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsGroupByUnrelated() {
+        debugCodeCall("supportsGroupByUnrelated");
+        return true;
+    }
+
+    /**
+     * Checks whether a GROUP BY clause can use columns that are not in the
+     * SELECT clause, provided that it specifies all the columns in the SELECT
+     * clause.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsGroupByBeyondSelect() {
+        debugCodeCall("supportsGroupByBeyondSelect");
+        return true;
+    }
+
+    /**
+     * Returns whether LIKE... ESCAPE is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsLikeEscapeClause() {
+        debugCodeCall("supportsLikeEscapeClause");
+        return true;
+    }
+
+    /**
+     * Returns whether multiple result sets are supported.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsMultipleResultSets() {
+        debugCodeCall("supportsMultipleResultSets");
+        return false;
+    }
+
+    /**
+     * Returns whether multiple transactions (on different connections) are
+     * supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsMultipleTransactions() {
+        debugCodeCall("supportsMultipleTransactions");
+        return true;
+    }
+
+    /**
+     * Returns whether columns with NOT NULL are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsNonNullableColumns() {
+        debugCodeCall("supportsNonNullableColumns");
+        return true;
+    }
+
+    /**
+     * Returns whether ODBC Minimum SQL grammar is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsMinimumSQLGrammar() {
+        debugCodeCall("supportsMinimumSQLGrammar");
+        return true;
+    }
+
+    /**
+     * Returns whether ODBC Core SQL grammar is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsCoreSQLGrammar() {
+        debugCodeCall("supportsCoreSQLGrammar");
+        return true;
+    }
+
+    /**
+     * Returns whether ODBC Extended SQL grammar is supported.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsExtendedSQLGrammar() {
+        debugCodeCall("supportsExtendedSQLGrammar");
+        return false;
+    }
+
+    /**
+     * Returns whether SQL-92 entry level grammar is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsANSI92EntryLevelSQL() {
+        debugCodeCall("supportsANSI92EntryLevelSQL");
+        return true;
+    }
+
+    /**
+     * Returns whether SQL-92 intermediate level grammar is supported.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsANSI92IntermediateSQL() {
+        debugCodeCall("supportsANSI92IntermediateSQL");
+        return false;
+    }
+
+    /**
+     * Returns whether SQL-92 full level grammar is supported.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsANSI92FullSQL() {
+        debugCodeCall("supportsANSI92FullSQL");
+        return false;
+    }
+
+    /**
+     * Returns whether referential integrity is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsIntegrityEnhancementFacility() {
+        debugCodeCall("supportsIntegrityEnhancementFacility");
+        return true;
+    }
+
+    /**
+     * Returns whether outer joins are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsOuterJoins() {
+        debugCodeCall("supportsOuterJoins");
+        return true;
+    }
+
+    /**
+     * Returns whether full outer joins are supported.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsFullOuterJoins() {
+        debugCodeCall("supportsFullOuterJoins");
+        return false;
+    }
+
+    /**
+     * Returns whether limited outer joins are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsLimitedOuterJoins() {
+        debugCodeCall("supportsLimitedOuterJoins");
+        return true;
+    }
+
+    /**
+     * Returns the term for "schema".
+     *
+     * @return "schema"
+     */
+    @Override
+    public String getSchemaTerm() {
+        debugCodeCall("getSchemaTerm");
+        return "schema";
+    }
+
+    /**
+     * Returns the term for "procedure".
+     *
+     * @return "procedure"
+     */
+    @Override
+    public String getProcedureTerm() {
+        debugCodeCall("getProcedureTerm");
+        return "procedure";
+    }
+
+    /**
+     * Returns the term for "catalog".
+     *
+     * @return "catalog"
+     */
+    @Override
+    public String getCatalogTerm() {
+        debugCodeCall("getCatalogTerm");
+        return "catalog";
+    }
+
+    /**
+     * Returns whether the catalog is at the beginning.
+     *
+     * @return true
+     */
+    @Override
+    public boolean isCatalogAtStart() {
+        debugCodeCall("isCatalogAtStart");
+        return true;
+    }
+
+    /**
+     * Returns the catalog separator.
+     *
+     * @return "."
+     */
+    @Override
+    public String getCatalogSeparator() {
+        debugCodeCall("getCatalogSeparator");
+        return ".";
+    }
+
+    /**
+     * Returns whether the schema name in INSERT, UPDATE, DELETE is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSchemasInDataManipulation() {
+        debugCodeCall("supportsSchemasInDataManipulation");
+        return true;
+    }
+
+    /**
+     * Returns whether the schema name in procedure calls is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSchemasInProcedureCalls() {
+        debugCodeCall("supportsSchemasInProcedureCalls");
+        return true;
+    }
+
+    /**
+     * Returns whether the schema name in CREATE TABLE is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSchemasInTableDefinitions() {
+        debugCodeCall("supportsSchemasInTableDefinitions");
+        return true;
+    }
+
+    /**
+     * Returns whether the schema name in CREATE INDEX is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSchemasInIndexDefinitions() {
+        debugCodeCall("supportsSchemasInIndexDefinitions");
+        return true;
+    }
+
+    /**
+     * Returns whether the schema name in GRANT is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSchemasInPrivilegeDefinitions() {
+        debugCodeCall("supportsSchemasInPrivilegeDefinitions");
+        return true;
+    }
+
+    /**
+     * Returns whether the catalog name in INSERT, UPDATE, DELETE is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsCatalogsInDataManipulation() {
+        debugCodeCall("supportsCatalogsInDataManipulation");
+        return true;
+    }
+
+    /**
+     * Returns whether the catalog name in procedure calls is supported.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsCatalogsInProcedureCalls() {
+        debugCodeCall("supportsCatalogsInProcedureCalls");
+        return false;
+    }
+
+    /**
+     * Returns whether the catalog name in CREATE TABLE is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsCatalogsInTableDefinitions() {
+        debugCodeCall("supportsCatalogsInTableDefinitions");
+        return true;
+    }
+
+    /**
+     * Returns whether the catalog name in CREATE INDEX is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsCatalogsInIndexDefinitions() {
+        debugCodeCall("supportsCatalogsInIndexDefinitions");
+        return true;
+    }
+
+    /**
+     * Returns whether the catalog name in GRANT is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsCatalogsInPrivilegeDefinitions() {
+        debugCodeCall("supportsCatalogsInPrivilegeDefinitions");
+        return true;
+    }
+
+    /**
+     * Returns whether positioned deletes are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsPositionedDelete() {
+        debugCodeCall("supportsPositionedDelete");
+        return true;
+    }
+
+    /**
+     * Returns whether positioned updates are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsPositionedUpdate() {
+        debugCodeCall("supportsPositionedUpdate");
+        return true;
+    }
+
+    /**
+     * Returns whether SELECT ... FOR UPDATE is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSelectForUpdate() {
+        debugCodeCall("supportsSelectForUpdate");
+        return true;
+    }
+
+    /**
+     * Returns whether stored procedures are supported.
+     *
+     * @return false
+     */
+    @Override
+    public boolean supportsStoredProcedures() {
+        debugCodeCall("supportsStoredProcedures");
+        return false;
+    }
+
+    /**
+     * Returns whether subqueries (SELECT) in comparisons are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSubqueriesInComparisons() {
+        debugCodeCall("supportsSubqueriesInComparisons");
+        return true;
+    }
+
+    /**
+     * Returns whether SELECT in EXISTS is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSubqueriesInExists() {
+        debugCodeCall("supportsSubqueriesInExists");
+        return true;
+    }
+
+    /**
+     * Returns whether IN(SELECT...) is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSubqueriesInIns() {
+        debugCodeCall("supportsSubqueriesInIns");
+        return true;
+    }
+
+    /**
+     * Returns whether subqueries in quantified expressions are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsSubqueriesInQuantifieds() {
+        debugCodeCall("supportsSubqueriesInQuantifieds");
+        return true;
+    }
+
+    /**
+     * Returns whether correlated subqueries are supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsCorrelatedSubqueries() {
+        debugCodeCall("supportsCorrelatedSubqueries");
+        return true;
+    }
+
+    /**
+     * Returns whether UNION SELECT is supported.
+     *
+     * @return true
+     */
+    @Override
+    public boolean supportsUnion() {
+        debugCodeCall("supportsUnion");
+        return true;
+    }
+
+    /**
+     * Returns whether UNION ALL SELECT is supported.
+ * + * @return true + */ + @Override + public boolean supportsUnionAll() { + debugCodeCall("supportsUnionAll"); + return true; + } + + /** + * Returns whether open result sets across commits are supported. + * + * @return false + */ + @Override + public boolean supportsOpenCursorsAcrossCommit() { + debugCodeCall("supportsOpenCursorsAcrossCommit"); + return false; + } + + /** + * Returns whether open result sets across rollback are supported. + * + * @return false + */ + @Override + public boolean supportsOpenCursorsAcrossRollback() { + debugCodeCall("supportsOpenCursorsAcrossRollback"); + return false; + } + + /** + * Returns whether open statements across commit are supported. + * + * @return true + */ + @Override + public boolean supportsOpenStatementsAcrossCommit() { + debugCodeCall("supportsOpenStatementsAcrossCommit"); + return true; + } + + /** + * Returns whether open statements across rollback are supported. + * + * @return true + */ + @Override + public boolean supportsOpenStatementsAcrossRollback() { + debugCodeCall("supportsOpenStatementsAcrossRollback"); + return true; + } + + /** + * Returns whether transactions are supported. + * + * @return true + */ + @Override + public boolean supportsTransactions() { + debugCodeCall("supportsTransactions"); + return true; + } + + /** + * Returns whether a specific transaction isolation level is supported. 
+ * + * @param level the transaction isolation level (Connection.TRANSACTION_*) + * @return true + */ + @Override + public boolean supportsTransactionIsolationLevel(int level) throws SQLException { + debugCodeCall("supportsTransactionIsolationLevel"); + if (level == Connection.TRANSACTION_READ_UNCOMMITTED) { + // currently the combination of LOCK_MODE=0 and MULTI_THREADED + // is not supported, also see code in Database#setLockMode(int) + PreparedStatement prep = conn.prepareAutoCloseStatement( + "SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME=?"); + prep.setString(1, "MULTI_THREADED"); + ResultSet rs = prep.executeQuery(); + return !rs.next() || !rs.getString(1).equals("1"); + } + return true; + } + + /** + * Returns whether data manipulation and CREATE/DROP is supported in + * transactions. + * + * @return false + */ + @Override + public boolean supportsDataDefinitionAndDataManipulationTransactions() { + debugCodeCall("supportsDataDefinitionAndDataManipulationTransactions"); + return false; + } + + /** + * Returns whether only data manipulations are supported in transactions. + * + * @return true + */ + @Override + public boolean supportsDataManipulationTransactionsOnly() { + debugCodeCall("supportsDataManipulationTransactionsOnly"); + return true; + } + + /** + * Returns whether CREATE/DROP commit an open transaction. + * + * @return true + */ + @Override + public boolean dataDefinitionCausesTransactionCommit() { + debugCodeCall("dataDefinitionCausesTransactionCommit"); + return true; + } + + /** + * Returns whether CREATE/DROP do not affect transactions. + * + * @return false + */ + @Override + public boolean dataDefinitionIgnoredInTransactions() { + debugCodeCall("dataDefinitionIgnoredInTransactions"); + return false; + } + + /** + * Returns whether a specific result set type is supported. + * ResultSet.TYPE_SCROLL_SENSITIVE is not supported. 
+ * + * @param type the result set type + * @return true for all types except ResultSet.TYPE_FORWARD_ONLY + */ + @Override + public boolean supportsResultSetType(int type) { + debugCodeCall("supportsResultSetType", type); + return type != ResultSet.TYPE_SCROLL_SENSITIVE; + } + + /** + * Returns whether a specific result set concurrency is supported. + * ResultSet.TYPE_SCROLL_SENSITIVE is not supported. + * + * @param type the result set type + * @param concurrency the result set concurrency + * @return true if the type is not ResultSet.TYPE_SCROLL_SENSITIVE + */ + @Override + public boolean supportsResultSetConcurrency(int type, int concurrency) { + if (isDebugEnabled()) { + debugCode("supportsResultSetConcurrency("+type+", "+concurrency+");"); + } + return type != ResultSet.TYPE_SCROLL_SENSITIVE; + } + + /** + * Returns whether own updates are visible. + * + * @param type the result set type + * @return true + */ + @Override + public boolean ownUpdatesAreVisible(int type) { + debugCodeCall("ownUpdatesAreVisible", type); + return true; + } + + /** + * Returns whether own deletes are visible. + * + * @param type the result set type + * @return false + */ + @Override + public boolean ownDeletesAreVisible(int type) { + debugCodeCall("ownDeletesAreVisible", type); + return false; + } + + /** + * Returns whether own inserts are visible. + * + * @param type the result set type + * @return false + */ + @Override + public boolean ownInsertsAreVisible(int type) { + debugCodeCall("ownInsertsAreVisible", type); + return false; + } + + /** + * Returns whether other updates are visible. + * + * @param type the result set type + * @return false + */ + @Override + public boolean othersUpdatesAreVisible(int type) { + debugCodeCall("othersUpdatesAreVisible", type); + return false; + } + + /** + * Returns whether other deletes are visible. 
+ * + * @param type the result set type + * @return false + */ + @Override + public boolean othersDeletesAreVisible(int type) { + debugCodeCall("othersDeletesAreVisible", type); + return false; + } + + /** + * Returns whether other inserts are visible. + * + * @param type the result set type + * @return false + */ + @Override + public boolean othersInsertsAreVisible(int type) { + debugCodeCall("othersInsertsAreVisible", type); + return false; + } + + /** + * Returns whether updates are detected. + * + * @param type the result set type + * @return false + */ + @Override + public boolean updatesAreDetected(int type) { + debugCodeCall("updatesAreDetected", type); + return false; + } + + /** + * Returns whether deletes are detected. + * + * @param type the result set type + * @return false + */ + @Override + public boolean deletesAreDetected(int type) { + debugCodeCall("deletesAreDetected", type); + return false; + } + + /** + * Returns whether inserts are detected. + * + * @param type the result set type + * @return false + */ + @Override + public boolean insertsAreDetected(int type) { + debugCodeCall("insertsAreDetected", type); + return false; + } + + /** + * Returns whether batch updates are supported. + * + * @return true + */ + @Override + public boolean supportsBatchUpdates() { + debugCodeCall("supportsBatchUpdates"); + return true; + } + + /** + * Returns whether the maximum row size includes blobs. + * + * @return false + */ + @Override + public boolean doesMaxRowSizeIncludeBlobs() { + debugCodeCall("doesMaxRowSizeIncludeBlobs"); + return false; + } + + /** + * Returns the default transaction isolation level. + * + * @return Connection.TRANSACTION_READ_COMMITTED + */ + @Override + public int getDefaultTransactionIsolation() { + debugCodeCall("getDefaultTransactionIsolation"); + return Connection.TRANSACTION_READ_COMMITTED; + } + + /** + * Checks if for CREATE TABLE Test(ID INT), getTables returns Test as the + * table name. 
+ * + * @return false + */ + @Override + public boolean supportsMixedCaseIdentifiers() { + debugCodeCall("supportsMixedCaseIdentifiers"); + return false; + } + + /** + * Checks if a table created with CREATE TABLE "Test"(ID INT) is a different + * table than a table created with CREATE TABLE TEST(ID INT). + * + * @return true usually, and false in MySQL mode + */ + @Override + public boolean supportsMixedCaseQuotedIdentifiers() throws SQLException { + debugCodeCall("supportsMixedCaseQuotedIdentifiers"); + String m = conn.getMode(); + return !m.equals("MySQL"); + } + + /** + * Checks if for CREATE TABLE Test(ID INT), getTables returns TEST as the + * table name. + * + * @return true usually, and false in MySQL mode + */ + @Override + public boolean storesUpperCaseIdentifiers() throws SQLException { + debugCodeCall("storesUpperCaseIdentifiers"); + String m = conn.getMode(); + return !m.equals("MySQL"); + } + + /** + * Checks if for CREATE TABLE Test(ID INT), getTables returns test as the + * table name. + * + * @return false usually, and true in MySQL mode + */ + @Override + public boolean storesLowerCaseIdentifiers() throws SQLException { + debugCodeCall("storesLowerCaseIdentifiers"); + String m = conn.getMode(); + return m.equals("MySQL"); + } + + /** + * Checks if for CREATE TABLE Test(ID INT), getTables returns Test as the + * table name. + * + * @return false + */ + @Override + public boolean storesMixedCaseIdentifiers() { + debugCodeCall("storesMixedCaseIdentifiers"); + return false; + } + + /** + * Checks if for CREATE TABLE "Test"(ID INT), getTables returns TEST as the + * table name. + * + * @return false usually, and true in MySQL mode + */ + @Override + public boolean storesUpperCaseQuotedIdentifiers() throws SQLException { + debugCodeCall("storesUpperCaseQuotedIdentifiers"); + String m = conn.getMode(); + return m.equals("MySQL"); + } + + /** + * Checks if for CREATE TABLE "Test"(ID INT), getTables returns test as the + * table name. 
+ * + * @return false usually, and true in MySQL mode + */ + @Override + public boolean storesLowerCaseQuotedIdentifiers() throws SQLException { + debugCodeCall("storesLowerCaseQuotedIdentifiers"); + String m = conn.getMode(); + return m.equals("MySQL"); + } + + /** + * Checks if for CREATE TABLE "Test"(ID INT), getTables returns Test as the + * table name. + * + * @return true usually, and false in MySQL mode + */ + @Override + public boolean storesMixedCaseQuotedIdentifiers() throws SQLException { + debugCodeCall("storesMixedCaseQuotedIdentifiers"); + String m = conn.getMode(); + return !m.equals("MySQL"); + } + + /** + * Returns the maximum length for hex values (characters). + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxBinaryLiteralLength() { + debugCodeCall("getMaxBinaryLiteralLength"); + return 0; + } + + /** + * Returns the maximum length for literals. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxCharLiteralLength() { + debugCodeCall("getMaxCharLiteralLength"); + return 0; + } + + /** + * Returns the maximum length for column names. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxColumnNameLength() { + debugCodeCall("getMaxColumnNameLength"); + return 0; + } + + /** + * Returns the maximum number of columns in GROUP BY. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxColumnsInGroupBy() { + debugCodeCall("getMaxColumnsInGroupBy"); + return 0; + } + + /** + * Returns the maximum number of columns in CREATE INDEX. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxColumnsInIndex() { + debugCodeCall("getMaxColumnsInIndex"); + return 0; + } + + /** + * Returns the maximum number of columns in ORDER BY. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxColumnsInOrderBy() { + debugCodeCall("getMaxColumnsInOrderBy"); + return 0; + } + + /** + * Returns the maximum number of columns in SELECT. 
+ * + * @return 0 for limit is unknown + */ + @Override + public int getMaxColumnsInSelect() { + debugCodeCall("getMaxColumnsInSelect"); + return 0; + } + + /** + * Returns the maximum number of columns in CREATE TABLE. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxColumnsInTable() { + debugCodeCall("getMaxColumnsInTable"); + return 0; + } + + /** + * Returns the maximum number of open connection. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxConnections() { + debugCodeCall("getMaxConnections"); + return 0; + } + + /** + * Returns the maximum length for a cursor name. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxCursorNameLength() { + debugCodeCall("getMaxCursorNameLength"); + return 0; + } + + /** + * Returns the maximum length for an index (in bytes). + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxIndexLength() { + debugCodeCall("getMaxIndexLength"); + return 0; + } + + /** + * Returns the maximum length for a schema name. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxSchemaNameLength() { + debugCodeCall("getMaxSchemaNameLength"); + return 0; + } + + /** + * Returns the maximum length for a procedure name. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxProcedureNameLength() { + debugCodeCall("getMaxProcedureNameLength"); + return 0; + } + + /** + * Returns the maximum length for a catalog name. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxCatalogNameLength() { + debugCodeCall("getMaxCatalogNameLength"); + return 0; + } + + /** + * Returns the maximum size of a row (in bytes). + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxRowSize() { + debugCodeCall("getMaxRowSize"); + return 0; + } + + /** + * Returns the maximum length of a statement. 
+ * + * @return 0 for limit is unknown + */ + @Override + public int getMaxStatementLength() { + debugCodeCall("getMaxStatementLength"); + return 0; + } + + /** + * Returns the maximum number of open statements. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxStatements() { + debugCodeCall("getMaxStatements"); + return 0; + } + + /** + * Returns the maximum length for a table name. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxTableNameLength() { + debugCodeCall("getMaxTableNameLength"); + return 0; + } + + /** + * Returns the maximum number of tables in a SELECT. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxTablesInSelect() { + debugCodeCall("getMaxTablesInSelect"); + return 0; + } + + /** + * Returns the maximum length for a user name. + * + * @return 0 for limit is unknown + */ + @Override + public int getMaxUserNameLength() { + debugCodeCall("getMaxUserNameLength"); + return 0; + } + + /** + * Does the database support savepoints. + * + * @return true + */ + @Override + public boolean supportsSavepoints() { + debugCodeCall("supportsSavepoints"); + return true; + } + + /** + * Does the database support named parameters. + * + * @return false + */ + @Override + public boolean supportsNamedParameters() { + debugCodeCall("supportsNamedParameters"); + return false; + } + + /** + * Does the database support multiple open result sets. + * + * @return true + */ + @Override + public boolean supportsMultipleOpenResults() { + debugCodeCall("supportsMultipleOpenResults"); + return true; + } + + /** + * Does the database support getGeneratedKeys. 
+ * + * @return true + */ + @Override + public boolean supportsGetGeneratedKeys() { + debugCodeCall("supportsGetGeneratedKeys"); + return true; + } + + /** + * [Not supported] + */ + @Override + public ResultSet getSuperTypes(String catalog, String schemaPattern, + String typeNamePattern) throws SQLException { + throw unsupported("superTypes"); + } + + /** + * Get the list of super tables of a table. This method currently returns an + * empty result set. + *
+ * <ul>
+ * <li>1 TABLE_CAT (String) table catalog </li>
+ * <li>2 TABLE_SCHEM (String) table schema </li>
+ * <li>3 TABLE_NAME (String) table name </li>
+ * <li>4 SUPERTABLE_NAME (String) the name of the super table </li>
+ * </ul>
    + * + * @param catalog null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableNamePattern null (to get all objects) or a table name pattern + * (uppercase for unquoted names) + * @return an empty result set + */ + @Override + public ResultSet getSuperTables(String catalog, String schemaPattern, + String tableNamePattern) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getSuperTables(" + +quote(catalog)+", " + +quote(schemaPattern)+", " + +quote(tableNamePattern)+");"); + } + checkClosed(); + PreparedStatement prep = conn.prepareAutoCloseStatement("SELECT " + + "CATALOG_NAME TABLE_CAT, " + + "CATALOG_NAME TABLE_SCHEM, " + + "CATALOG_NAME TABLE_NAME, " + + "CATALOG_NAME SUPERTABLE_NAME " + + "FROM INFORMATION_SCHEMA.CATALOGS " + + "WHERE FALSE"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] + */ + @Override + public ResultSet getAttributes(String catalog, String schemaPattern, + String typeNamePattern, String attributeNamePattern) + throws SQLException { + throw unsupported("attributes"); + } + + /** + * Does this database supports a result set holdability. + * + * @param holdability ResultSet.HOLD_CURSORS_OVER_COMMIT or + * CLOSE_CURSORS_AT_COMMIT + * @return true if the holdability is ResultSet.CLOSE_CURSORS_AT_COMMIT + */ + @Override + public boolean supportsResultSetHoldability(int holdability) { + debugCodeCall("supportsResultSetHoldability", holdability); + return holdability == ResultSet.CLOSE_CURSORS_AT_COMMIT; + } + + /** + * Gets the result set holdability. + * + * @return ResultSet.CLOSE_CURSORS_AT_COMMIT + */ + @Override + public int getResultSetHoldability() { + debugCodeCall("getResultSetHoldability"); + return ResultSet.CLOSE_CURSORS_AT_COMMIT; + } + + /** + * Gets the major version of the database. 
+ * + * @return the major version + */ + @Override + public int getDatabaseMajorVersion() { + debugCodeCall("getDatabaseMajorVersion"); + return Constants.VERSION_MAJOR; + } + + /** + * Gets the minor version of the database. + * + * @return the minor version + */ + @Override + public int getDatabaseMinorVersion() { + debugCodeCall("getDatabaseMinorVersion"); + return Constants.VERSION_MINOR; + } + + /** + * Gets the major version of the supported JDBC API. + * + * @return the major version (4) + */ + @Override + public int getJDBCMajorVersion() { + debugCodeCall("getJDBCMajorVersion"); + return 4; + } + + /** + * Gets the minor version of the supported JDBC API. + * + * @return the minor version (0) + */ + @Override + public int getJDBCMinorVersion() { + debugCodeCall("getJDBCMinorVersion"); + return 0; + } + + /** + * Gets the SQL State type. + * + * @return DatabaseMetaData.sqlStateSQL99 + */ + @Override + public int getSQLStateType() { + debugCodeCall("getSQLStateType"); + return DatabaseMetaData.sqlStateSQL99; + } + + /** + * Does the database make a copy before updating. + * + * @return false + */ + @Override + public boolean locatorsUpdateCopy() { + debugCodeCall("locatorsUpdateCopy"); + return false; + } + + /** + * Does the database support statement pooling. + * + * @return false + */ + @Override + public boolean supportsStatementPooling() { + debugCodeCall("supportsStatementPooling"); + return false; + } + + // ============================================================= + + private void checkClosed() { + conn.checkClosed(); + } + + private static String getPattern(String pattern) { + return pattern == null ? "%" : pattern; + } + + private static String getSchemaPattern(String pattern) { + return pattern == null ? "%" : pattern.length() == 0 ? 
+ Constants.SCHEMA_MAIN : pattern; + } + + private static String getCatalogPattern(String catalogPattern) { + // Workaround for OpenOffice: getColumns is called with "" as the + // catalog + return catalogPattern == null || catalogPattern.length() == 0 ? + "%" : catalogPattern; + } + + /** + * Get the lifetime of a rowid. + * + * @return ROWID_UNSUPPORTED + */ + @Override + public RowIdLifetime getRowIdLifetime() { + debugCodeCall("getRowIdLifetime"); + return RowIdLifetime.ROWID_UNSUPPORTED; + } + + /** + * Gets the list of schemas in the database. + * The result set is sorted by TABLE_SCHEM. + * + *
+ * <ul>
+ * <li>1 TABLE_SCHEM (String) schema name </li>
+ * <li>2 TABLE_CATALOG (String) catalog name </li>
+ * <li>3 IS_DEFAULT (boolean) if this is the default schema </li>
+ * </ul>
    + * + * @param catalogPattern null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @return the schema list + * @throws SQLException if the connection is closed + */ + @Override + public ResultSet getSchemas(String catalogPattern, String schemaPattern) + throws SQLException { + try { + debugCodeCall("getSchemas(String,String)"); + checkClosed(); + PreparedStatement prep = conn + .prepareAutoCloseStatement("SELECT " + + "SCHEMA_NAME TABLE_SCHEM, " + + "CATALOG_NAME TABLE_CATALOG, " + +" IS_DEFAULT " + + "FROM INFORMATION_SCHEMA.SCHEMATA " + + "WHERE CATALOG_NAME LIKE ? ESCAPE ? " + + "AND SCHEMA_NAME LIKE ? ESCAPE ? " + + "ORDER BY SCHEMA_NAME"); + prep.setString(1, getCatalogPattern(catalogPattern)); + prep.setString(2, "\\"); + prep.setString(3, getSchemaPattern(schemaPattern)); + prep.setString(4, "\\"); + return prep.executeQuery(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns whether the database supports calling functions using the call + * syntax. + * + * @return true + */ + @Override + public boolean supportsStoredFunctionsUsingCallSyntax() { + debugCodeCall("supportsStoredFunctionsUsingCallSyntax"); + return true; + } + + /** + * Returns whether an exception while auto commit is on closes all result + * sets. + * + * @return false + */ + @Override + public boolean autoCommitFailureClosesAllResultSets() { + debugCodeCall("autoCommitFailureClosesAllResultSets"); + return false; + } + + @Override + public ResultSet getClientInfoProperties() throws SQLException { + Properties clientInfo = conn.getClientInfo(); + SimpleResultSet result = new SimpleResultSet(); + result.addColumn("Name", Types.VARCHAR, 0, 0); + result.addColumn("Value", Types.VARCHAR, 0, 0); + for (Object key : clientInfo.keySet()) { + result.addRow(key, clientInfo.get(key)); + } + return result; + } + + /** + * Return an object of this class if possible. 
+ * + * @param iface the class + * @return this + */ + @Override + @SuppressWarnings("unchecked") + public T unwrap(Class iface) throws SQLException { + try { + if (isWrapperFor(iface)) { + return (T) this; + } + throw DbException.getInvalidValueException("iface", iface); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if unwrap can return an object of this class. + * + * @param iface the class + * @return whether or not the interface is assignable from this class + */ + @Override + public boolean isWrapperFor(Class iface) throws SQLException { + return iface != null && iface.isAssignableFrom(getClass()); + } + + /** + * [Not supported] Gets the list of function columns. + */ + @Override + public ResultSet getFunctionColumns(String catalog, String schemaPattern, + String functionNamePattern, String columnNamePattern) + throws SQLException { + throw unsupported("getFunctionColumns"); + } + + /** + * [Not supported] Gets the list of functions. + */ + @Override + public ResultSet getFunctions(String catalog, String schemaPattern, + String functionNamePattern) throws SQLException { + throw unsupported("getFunctions"); + } + + /** + * [Not supported] + */ + @Override + public boolean generatedKeyAlwaysReturned() { + return true; + } + + /** + * [Not supported] + * + * @param catalog null (to get all objects) or the catalog name + * @param schemaPattern null (to get all objects) or a schema name + * (uppercase for unquoted names) + * @param tableNamePattern null (to get all objects) or a table name + * (uppercase for unquoted names) + * @param columnNamePattern null (to get all objects) or a column name + * (uppercase for unquoted names) + */ + @Override + public ResultSet getPseudoColumns(String catalog, String schemaPattern, + String tableNamePattern, String columnNamePattern) { + return null; + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": " + conn; + } + +} diff --git 
a/modules/h2/src/main/java/org/h2/jdbc/JdbcDatabaseMetaDataBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcDatabaseMetaDataBackwardsCompat.java new file mode 100644 index 0000000000000..8aca1227416e5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcDatabaseMetaDataBackwardsCompat.java @@ -0,0 +1,16 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, and the + * EPL 1.0 (http://h2database.com/html/license.html). Initial Developer: H2 + * Group + */ +package org.h2.jdbc; + +/** + * Allows us to compile on older platforms, while still implementing the methods + * from the newer JDBC API. + */ +public interface JdbcDatabaseMetaDataBackwardsCompat { + + // compatibility interface + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcParameterMetaData.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcParameterMetaData.java new file mode 100644 index 0000000000000..cc25c50481bca --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcParameterMetaData.java @@ -0,0 +1,259 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.sql.ParameterMetaData; +import java.sql.SQLException; +import java.util.ArrayList; +import org.h2.command.CommandInterface; +import org.h2.expression.ParameterInterface; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.message.TraceObject; +import org.h2.util.MathUtils; +import org.h2.value.DataType; +import org.h2.value.Value; + +/** + * Information about the parameters of a prepared statement. 
+ */ +public class JdbcParameterMetaData extends TraceObject implements + ParameterMetaData { + + private final JdbcPreparedStatement prep; + private final int paramCount; + private final ArrayList parameters; + + JdbcParameterMetaData(Trace trace, JdbcPreparedStatement prep, + CommandInterface command, int id) { + setTrace(trace, TraceObject.PARAMETER_META_DATA, id); + this.prep = prep; + this.parameters = command.getParameters(); + this.paramCount = parameters.size(); + } + + /** + * Returns the number of parameters. + * + * @return the number + */ + @Override + public int getParameterCount() throws SQLException { + try { + debugCodeCall("getParameterCount"); + checkClosed(); + return paramCount; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the parameter mode. + * Always returns parameterModeIn. + * + * @param param the column index (1,2,...) + * @return parameterModeIn + */ + @Override + public int getParameterMode(int param) throws SQLException { + try { + debugCodeCall("getParameterMode", param); + getParameter(param); + return parameterModeIn; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the parameter type. + * java.sql.Types.VARCHAR is returned if the data type is not known. + * + * @param param the column index (1,2,...) + * @return the data type + */ + @Override + public int getParameterType(int param) throws SQLException { + try { + debugCodeCall("getParameterType", param); + ParameterInterface p = getParameter(param); + int type = p.getType(); + if (type == Value.UNKNOWN) { + type = Value.STRING; + } + return DataType.getDataType(type).sqlType; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the parameter precision. + * The value 0 is returned if the precision is not known. + * + * @param param the column index (1,2,...) 
+ * @return the precision + */ + @Override + public int getPrecision(int param) throws SQLException { + try { + debugCodeCall("getPrecision", param); + ParameterInterface p = getParameter(param); + return MathUtils.convertLongToInt(p.getPrecision()); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the parameter scale. + * The value 0 is returned if the scale is not known. + * + * @param param the column index (1,2,...) + * @return the scale + */ + @Override + public int getScale(int param) throws SQLException { + try { + debugCodeCall("getScale", param); + ParameterInterface p = getParameter(param); + return p.getScale(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this is nullable parameter. + * Returns ResultSetMetaData.columnNullableUnknown.. + * + * @param param the column index (1,2,...) + * @return ResultSetMetaData.columnNullableUnknown + */ + @Override + public int isNullable(int param) throws SQLException { + try { + debugCodeCall("isNullable", param); + return getParameter(param).getNullable(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this parameter is signed. + * It always returns true. + * + * @param param the column index (1,2,...) + * @return true + */ + @Override + public boolean isSigned(int param) throws SQLException { + try { + debugCodeCall("isSigned", param); + getParameter(param); + return true; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the Java class name of the parameter. + * "java.lang.String" is returned if the type is not known. + * + * @param param the column index (1,2,...) 
+ * @return the Java class name + */ + @Override + public String getParameterClassName(int param) throws SQLException { + try { + debugCodeCall("getParameterClassName", param); + ParameterInterface p = getParameter(param); + int type = p.getType(); + if (type == Value.UNKNOWN) { + type = Value.STRING; + } + return DataType.getTypeClassName(type); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the parameter type name. + * "VARCHAR" is returned if the type is not known. + * + * @param param the column index (1,2,...) + * @return the type name + */ + @Override + public String getParameterTypeName(int param) throws SQLException { + try { + debugCodeCall("getParameterTypeName", param); + ParameterInterface p = getParameter(param); + int type = p.getType(); + if (type == Value.UNKNOWN) { + type = Value.STRING; + } + return DataType.getDataType(type).name; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + private ParameterInterface getParameter(int param) { + checkClosed(); + if (param < 1 || param > paramCount) { + throw DbException.getInvalidValueException("param", param); + } + return parameters.get(param - 1); + } + + private void checkClosed() { + prep.checkClosed(); + } + + /** + * Return an object of this class if possible. + * + * @param iface the class + * @return this + */ + @Override + @SuppressWarnings("unchecked") + public T unwrap(Class iface) throws SQLException { + try { + if (isWrapperFor(iface)) { + return (T) this; + } + throw DbException.getInvalidValueException("iface", iface); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if unwrap can return an object of this class. 
+     *
+     * @param iface the class
+     * @return whether or not the interface is assignable from this class
+     */
+    @Override
+    public boolean isWrapperFor(Class<?> iface) throws SQLException {
+        return iface != null && iface.isAssignableFrom(getClass());
+    }
+
+    /**
+     * INTERNAL
+     */
+    @Override
+    public String toString() {
+        return getTraceObjectName() + ": parameterCount=" + paramCount;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcPreparedStatement.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcPreparedStatement.java
new file mode 100644
index 0000000000000..97bf150b20c4e
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcPreparedStatement.java
@@ -0,0 +1,1816 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.jdbc;
+
+import java.io.InputStream;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.net.URL;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.Clob;
+import java.sql.NClob;
+import java.sql.ParameterMetaData;
+import java.sql.PreparedStatement;
+import java.sql.Ref;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.RowId;
+import java.sql.SQLException;
+import java.sql.SQLXML;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Calendar;
+import java.util.HashMap;
+import org.h2.api.ErrorCode;
+import org.h2.command.CommandInterface;
+import org.h2.expression.ParameterInterface;
+import org.h2.message.DbException;
+import org.h2.message.TraceObject;
+import org.h2.result.ResultInterface;
+import org.h2.result.ResultWithGeneratedKeys;
+import org.h2.util.DateTimeUtils;
+import org.h2.util.IOUtils;
+import org.h2.util.MergedResultSet;
+import org.h2.util.New;
+import org.h2.value.DataType;
+import org.h2.value.Value;
+import org.h2.value.ValueBoolean;
+import org.h2.value.ValueByte;
+import org.h2.value.ValueBytes;
+import org.h2.value.ValueDate;
+import org.h2.value.ValueDecimal;
+import org.h2.value.ValueDouble;
+import org.h2.value.ValueFloat;
+import org.h2.value.ValueInt;
+import org.h2.value.ValueLong;
+import org.h2.value.ValueNull;
+import org.h2.value.ValueShort;
+import org.h2.value.ValueString;
+import org.h2.value.ValueTime;
+import org.h2.value.ValueTimestamp;
+
+/**
+ * Represents a prepared statement.
+ */
+public class JdbcPreparedStatement extends JdbcStatement implements
+        PreparedStatement, JdbcPreparedStatementBackwardsCompat {
+
+    protected CommandInterface command;
+    private final String sqlStatement;
+    private ArrayList<Value[]> batchParameters;
+    private MergedResultSet batchIdentities;
+    private HashMap<String, Integer> cachedColumnLabelMap;
+    private final Object generatedKeysRequest;
+
+    JdbcPreparedStatement(JdbcConnection conn, String sql, int id,
+            int resultSetType, int resultSetConcurrency,
+            boolean closeWithResultSet, Object generatedKeysRequest) {
+        super(conn, id, resultSetType, resultSetConcurrency, closeWithResultSet);
+        this.generatedKeysRequest = conn.scopeGeneratedKeys() ? false : generatedKeysRequest;
+        setTrace(session.getTrace(), TraceObject.PREPARED_STATEMENT, id);
+        this.sqlStatement = sql;
+        command = conn.prepareCommand(sql, fetchSize);
+    }
+
+    /**
+     * Cache the column labels (looking up the column index can sometimes show
+     * up on the performance profile).
+     *
+     * @param cachedColumnLabelMap the column map
+     */
+    void setCachedColumnLabelMap(HashMap<String, Integer> cachedColumnLabelMap) {
+        this.cachedColumnLabelMap = cachedColumnLabelMap;
+    }
+
+    /**
+     * Executes a query (select statement) and returns the result set. If
+     * another result set exists for this statement, this will be closed (even
+     * if this statement fails).
+     *
+     * @return the result set
+     * @throws SQLException if this object is closed or invalid
+     */
+    @Override
+    public ResultSet executeQuery() throws SQLException {
+        try {
+            int id = getNextId(TraceObject.RESULT_SET);
+            if (isDebugEnabled()) {
+                debugCodeAssign("ResultSet", TraceObject.RESULT_SET, id, "executeQuery()");
+            }
+            batchIdentities = null;
+            synchronized (session) {
+                checkClosed();
+                closeOldResultSet();
+                ResultInterface result;
+                boolean lazy = false;
+                boolean scrollable = resultSetType != ResultSet.TYPE_FORWARD_ONLY;
+                boolean updatable = resultSetConcurrency == ResultSet.CONCUR_UPDATABLE;
+                try {
+                    setExecutingStatement(command);
+                    result = command.executeQuery(maxRows, scrollable);
+                    lazy = result.isLazy();
+                } finally {
+                    if (!lazy) {
+                        setExecutingStatement(null);
+                    }
+                }
+                resultSet = new JdbcResultSet(conn, this, command, result, id,
+                        closedByResultSet, scrollable, updatable, cachedColumnLabelMap);
+            }
+            return resultSet;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Executes a statement (insert, update, delete, create, drop)
+     * and returns the update count.
+     * If another result set exists for this statement, this will be closed
+     * (even if this statement fails).
+     *
+     * If auto commit is on, this statement will be committed.
+     * If the statement is a DDL statement (create, drop, alter) and does not
+     * throw an exception, the current transaction (if any) is committed after
+     * executing the statement.
+     *
+     * @return the update count (number of rows affected by an insert, update or
+     *         delete, or 0 if no rows or the statement was a create, drop,
+     *         commit or rollback)
+     * @throws SQLException if this object is closed or invalid
+     */
+    @Override
+    public int executeUpdate() throws SQLException {
+        try {
+            debugCodeCall("executeUpdate");
+            checkClosedForWrite();
+            batchIdentities = null;
+            try {
+                return executeUpdateInternal();
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Executes a statement (insert, update, delete, create, drop)
+     * and returns the update count.
+     * If another result set exists for this statement, this will be closed
+     * (even if this statement fails).
+     *
+     * If auto commit is on, this statement will be committed.
+     * If the statement is a DDL statement (create, drop, alter) and does not
+     * throw an exception, the current transaction (if any) is committed after
+     * executing the statement.
+     *
+     * @return the update count (number of rows affected by an insert, update or
+     *         delete, or 0 if no rows or the statement was a create, drop,
+     *         commit or rollback)
+     * @throws SQLException if this object is closed or invalid
+     */
+    @Override
+    public long executeLargeUpdate() throws SQLException {
+        try {
+            debugCodeCall("executeLargeUpdate");
+            checkClosedForWrite();
+            batchIdentities = null;
+            try {
+                return executeUpdateInternal();
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    private int executeUpdateInternal() throws SQLException {
+        closeOldResultSet();
+        synchronized (session) {
+            try {
+                setExecutingStatement(command);
+                ResultWithGeneratedKeys result = command.executeUpdate(generatedKeysRequest);
+                updateCount = result.getUpdateCount();
+                ResultInterface gk = result.getGeneratedKeys();
+                if (gk != null) {
+                    int id = getNextId(TraceObject.RESULT_SET);
+                    generatedKeys = new JdbcResultSet(conn, this, command, gk, id,
+                            false, true, false);
+                }
+            } finally {
+                setExecutingStatement(null);
+            }
+        }
+        return updateCount;
+    }
+
+    /**
+     * Executes an arbitrary statement. If another result set exists for this
+     * statement, this will be closed (even if this statement fails). If auto
+     * commit is on, and the statement is not a select, this statement will be
+     * committed.
+     *
+     * @return true if a result set is available, false if not
+     * @throws SQLException if this object is closed or invalid
+     */
+    @Override
+    public boolean execute() throws SQLException {
+        try {
+            int id = getNextId(TraceObject.RESULT_SET);
+            if (isDebugEnabled()) {
+                debugCodeCall("execute");
+            }
+            checkClosedForWrite();
+            try {
+                boolean returnsResultSet;
+                synchronized (conn.getSession()) {
+                    closeOldResultSet();
+                    boolean lazy = false;
+                    try {
+                        setExecutingStatement(command);
+                        if (command.isQuery()) {
+                            returnsResultSet = true;
+                            boolean scrollable = resultSetType != ResultSet.TYPE_FORWARD_ONLY;
+                            boolean updatable = resultSetConcurrency == ResultSet.CONCUR_UPDATABLE;
+                            ResultInterface result = command.executeQuery(maxRows, scrollable);
+                            lazy = result.isLazy();
+                            resultSet = new JdbcResultSet(conn, this, command, result,
+                                    id, closedByResultSet, scrollable,
+                                    updatable, cachedColumnLabelMap);
+                        } else {
+                            returnsResultSet = false;
+                            ResultWithGeneratedKeys result = command.executeUpdate(generatedKeysRequest);
+                            updateCount = result.getUpdateCount();
+                            ResultInterface gk = result.getGeneratedKeys();
+                            if (gk != null) {
+                                generatedKeys = new JdbcResultSet(conn, this, command, gk, id,
+                                        false, true, false);
+                            }
+                        }
+                    } finally {
+                        if (!lazy) {
+                            setExecutingStatement(null);
+                        }
+                    }
+                }
+                return returnsResultSet;
+            } finally {
+                afterWriting();
+            }
+        } catch (Throwable e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Clears all parameters.
+     *
+     * @throws SQLException if this object is closed or invalid
+     */
+    @Override
+    public void clearParameters() throws SQLException {
+        try {
+            debugCodeCall("clearParameters");
+            checkClosed();
+            ArrayList<? extends ParameterInterface> parameters = command.getParameters();
+            for (ParameterInterface param : parameters) {
+                // can only delete old temp files if they are not in the batch
+                param.setValue(null, batchParameters == null);
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Calling this method is not legal on a PreparedStatement.
+     *
+     * @param sql ignored
+     * @throws SQLException Unsupported Feature
+     */
+    @Override
+    public ResultSet executeQuery(String sql) throws SQLException {
+        try {
+            debugCodeCall("executeQuery", sql);
+            throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Calling this method is not legal on a PreparedStatement.
+     *
+     * @param sql ignored
+     * @throws SQLException Unsupported Feature
+     */
+    @Override
+    public void addBatch(String sql) throws SQLException {
+        try {
+            debugCodeCall("addBatch", sql);
+            throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Calling this method is not legal on a PreparedStatement.
+     *
+     * @param sql ignored
+     * @throws SQLException Unsupported Feature
+     */
+    @Override
+    public int executeUpdate(String sql) throws SQLException {
+        try {
+            debugCodeCall("executeUpdate", sql);
+            throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Calling this method is not legal on a PreparedStatement.
+     *
+     * @param sql ignored
+     * @throws SQLException Unsupported Feature
+     */
+    @Override
+    public long executeLargeUpdate(String sql) throws SQLException {
+        try {
+            debugCodeCall("executeLargeUpdate", sql);
+            throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Calling this method is not legal on a PreparedStatement.
+     *
+     * @param sql ignored
+     * @throws SQLException Unsupported Feature
+     */
+    @Override
+    public boolean execute(String sql) throws SQLException {
+        try {
+            debugCodeCall("execute", sql);
+            throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    // =============================================================
+
+    /**
+     * Sets a parameter to null.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param sqlType the data type (Types.x)
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setNull(int parameterIndex, int sqlType) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setNull("+parameterIndex+", "+sqlType+");");
+            }
+            setParameter(parameterIndex, ValueNull.INSTANCE);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setInt(int parameterIndex, int x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setInt("+parameterIndex+", "+x+");");
+            }
+            setParameter(parameterIndex, ValueInt.get(x));
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setString(int parameterIndex, String x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setString("+parameterIndex+", "+quote(x)+");");
+            }
+            Value v = x == null ? (Value) ValueNull.INSTANCE : ValueString.get(x);
+            setParameter(parameterIndex, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBigDecimal(int parameterIndex, BigDecimal x)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setBigDecimal("+parameterIndex+", " + quoteBigDecimal(x) + ");");
+            }
+            Value v = x == null ? (Value) ValueNull.INSTANCE : ValueDecimal.get(x);
+            setParameter(parameterIndex, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setDate(int parameterIndex, java.sql.Date x)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setDate("+parameterIndex+", " + quoteDate(x) + ");");
+            }
+            Value v = x == null ? (Value) ValueNull.INSTANCE : ValueDate.get(x);
+            setParameter(parameterIndex, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setTime(int parameterIndex, java.sql.Time x)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setTime("+parameterIndex+", " + quoteTime(x) + ");");
+            }
+            Value v = x == null ? (Value) ValueNull.INSTANCE : ValueTime.get(x);
+            setParameter(parameterIndex, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setTimestamp(int parameterIndex, java.sql.Timestamp x)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setTimestamp("+parameterIndex+", " + quoteTimestamp(x) + ");");
+            }
+            Value v = x == null ? (Value) ValueNull.INSTANCE : ValueTimestamp.get(x);
+            setParameter(parameterIndex, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     * Objects of unknown classes are serialized (on the client side).
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setObject(int parameterIndex, Object x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setObject("+parameterIndex+", x);");
+            }
+            if (x == null) {
+                // throw Errors.getInvalidValueException("null", "x");
+                setParameter(parameterIndex, ValueNull.INSTANCE);
+            } else {
+                setParameter(parameterIndex,
+                        DataType.convertToValue(session, x, Value.UNKNOWN));
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter. The object is converted, if required, to
+     * the specified data type before sending to the database.
+     * Objects of unknown classes are serialized (on the client side).
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value, null is allowed
+     * @param targetSqlType the type as defined in java.sql.Types
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setObject(int parameterIndex, Object x, int targetSqlType)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setObject("+parameterIndex+", x, "+targetSqlType+");");
+            }
+            int type = DataType.convertSQLTypeToValueType(targetSqlType);
+            if (x == null) {
+                setParameter(parameterIndex, ValueNull.INSTANCE);
+            } else {
+                Value v = DataType.convertToValue(conn.getSession(), x, type);
+                setParameter(parameterIndex, v.convertTo(type));
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter. The object is converted, if required, to
+     * the specified data type before sending to the database.
+     * Objects of unknown classes are serialized (on the client side).
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value, null is allowed
+     * @param targetSqlType the type as defined in java.sql.Types
+     * @param scale is ignored
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setObject(int parameterIndex, Object x, int targetSqlType,
+            int scale) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setObject("+parameterIndex+", x, "+targetSqlType+", "+scale+");");
+            }
+            setObject(parameterIndex, x, targetSqlType);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBoolean(int parameterIndex, boolean x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setBoolean("+parameterIndex+", "+x+");");
+            }
+            setParameter(parameterIndex, ValueBoolean.get(x));
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setByte(int parameterIndex, byte x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setByte("+parameterIndex+", "+x+");");
+            }
+            setParameter(parameterIndex, ValueByte.get(x));
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setShort(int parameterIndex, short x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setShort("+parameterIndex+", (short) "+x+");");
+            }
+            setParameter(parameterIndex, ValueShort.get(x));
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setLong(int parameterIndex, long x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setLong("+parameterIndex+", "+x+"L);");
+            }
+            setParameter(parameterIndex, ValueLong.get(x));
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setFloat(int parameterIndex, float x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setFloat("+parameterIndex+", "+x+"f);");
+            }
+            setParameter(parameterIndex, ValueFloat.get(x));
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setDouble(int parameterIndex, double x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setDouble("+parameterIndex+", "+x+"d);");
+            }
+            setParameter(parameterIndex, ValueDouble.get(x));
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * [Not supported] Sets the value of a column as a reference.
+     */
+    @Override
+    public void setRef(int parameterIndex, Ref x) throws SQLException {
+        throw unsupported("ref");
+    }
+
+    /**
+     * Sets the date using a specified time zone. The value will be converted to
+     * the local time zone.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param calendar the calendar
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setDate(int parameterIndex, java.sql.Date x, Calendar calendar)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setDate("+parameterIndex+", " + quoteDate(x) + ", calendar);");
+            }
+            if (x == null) {
+                setParameter(parameterIndex, ValueNull.INSTANCE);
+            } else {
+                setParameter(parameterIndex, DateTimeUtils.convertDate(x, calendar));
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the time using a specified time zone. The value will be converted to
+     * the local time zone.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param calendar the calendar
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setTime(int parameterIndex, java.sql.Time x, Calendar calendar)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setTime("+parameterIndex+", " + quoteTime(x) + ", calendar);");
+            }
+            if (x == null) {
+                setParameter(parameterIndex, ValueNull.INSTANCE);
+            } else {
+                setParameter(parameterIndex, DateTimeUtils.convertTime(x, calendar));
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the timestamp using a specified time zone. The value will be
+     * converted to the local time zone.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param calendar the calendar
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setTimestamp(int parameterIndex, java.sql.Timestamp x,
+            Calendar calendar) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setTimestamp(" + parameterIndex + ", " +
+                        quoteTimestamp(x) + ", calendar);");
+            }
+            if (x == null) {
+                setParameter(parameterIndex, ValueNull.INSTANCE);
+            } else {
+                setParameter(parameterIndex, DateTimeUtils.convertTimestamp(x, calendar));
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * [Not supported] This feature is deprecated and not supported.
+     *
+     * @deprecated since JDBC 2.0, use setCharacterStream
+     */
+    @Deprecated
+    @Override
+    public void setUnicodeStream(int parameterIndex, InputStream x, int length)
+            throws SQLException {
+        throw unsupported("unicodeStream");
+    }
+
+    /**
+     * Sets a parameter to null.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param sqlType the data type (Types.x)
+     * @param typeName this parameter is ignored
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setNull(int parameterIndex, int sqlType, String typeName)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setNull("+parameterIndex+", "+sqlType+", "+quote(typeName)+");");
+            }
+            setNull(parameterIndex, sqlType);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as a Blob.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBlob(int parameterIndex, Blob x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setBlob("+parameterIndex+", x);");
+            }
+            checkClosedForWrite();
+            try {
+                Value v;
+                if (x == null) {
+                    v = ValueNull.INSTANCE;
+                } else {
+                    v = conn.createBlob(x.getBinaryStream(), -1);
+                }
+                setParameter(parameterIndex, v);
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as a Blob.
+     * This method does not close the stream.
+     * The stream may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBlob(int parameterIndex, InputStream x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setBlob("+parameterIndex+", x);");
+            }
+            checkClosedForWrite();
+            try {
+                Value v = conn.createBlob(x, -1);
+                setParameter(parameterIndex, v);
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as a Clob.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setClob(int parameterIndex, Clob x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setClob("+parameterIndex+", x);");
+            }
+            checkClosedForWrite();
+            try {
+                Value v;
+                if (x == null) {
+                    v = ValueNull.INSTANCE;
+                } else {
+                    v = conn.createClob(x.getCharacterStream(), -1);
+                }
+                setParameter(parameterIndex, v);
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as a Clob.
+     * This method does not close the reader.
+     * The reader may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setClob(int parameterIndex, Reader x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setClob("+parameterIndex+", x);");
+            }
+            checkClosedForWrite();
+            try {
+                Value v;
+                if (x == null) {
+                    v = ValueNull.INSTANCE;
+                } else {
+                    v = conn.createClob(x, -1);
+                }
+                setParameter(parameterIndex, v);
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as an Array.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setArray(int parameterIndex, Array x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setArray("+parameterIndex+", x);");
+            }
+            checkClosed();
+            Value v;
+            if (x == null) {
+                v = ValueNull.INSTANCE;
+            } else {
+                v = DataType.convertToValue(session, x.getArray(), Value.ARRAY);
+            }
+            setParameter(parameterIndex, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as a byte array.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBytes(int parameterIndex, byte[] x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setBytes("+parameterIndex+", "+quoteBytes(x)+");");
+            }
+            Value v = x == null ? (Value) ValueNull.INSTANCE : ValueBytes.get(x);
+            setParameter(parameterIndex, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as an input stream.
+     * This method does not close the stream.
+     * The stream may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param length the maximum number of bytes
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBinaryStream(int parameterIndex, InputStream x, long length)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setBinaryStream("+parameterIndex+", x, "+length+"L);");
+            }
+            checkClosedForWrite();
+            try {
+                Value v = conn.createBlob(x, length);
+                setParameter(parameterIndex, v);
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as an input stream.
+     * This method does not close the stream.
+     * The stream may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param length the maximum number of bytes
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBinaryStream(int parameterIndex, InputStream x, int length)
+            throws SQLException {
+        setBinaryStream(parameterIndex, x, (long) length);
+    }
+
+    /**
+     * Sets the value of a parameter as an input stream.
+     * This method does not close the stream.
+     * The stream may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setBinaryStream(int parameterIndex, InputStream x)
+            throws SQLException {
+        setBinaryStream(parameterIndex, x, -1);
+    }
+
+    /**
+     * Sets the value of a parameter as an ASCII stream.
+     * This method does not close the stream.
+     * The stream may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param length the maximum number of bytes
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setAsciiStream(int parameterIndex, InputStream x, int length)
+            throws SQLException {
+        setAsciiStream(parameterIndex, x, (long) length);
+    }
+
+    /**
+     * Sets the value of a parameter as an ASCII stream.
+     * This method does not close the stream.
+     * The stream may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param length the maximum number of bytes
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setAsciiStream(int parameterIndex, InputStream x, long length)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setAsciiStream("+parameterIndex+", x, "+length+"L);");
+            }
+            checkClosedForWrite();
+            try {
+                Value v = conn.createClob(IOUtils.getAsciiReader(x), length);
+                setParameter(parameterIndex, v);
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the value of a parameter as an ASCII stream.
+     * This method does not close the stream.
+     * The stream may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setAsciiStream(int parameterIndex, InputStream x)
+            throws SQLException {
+        setAsciiStream(parameterIndex, x, -1);
+    }
+
+    /**
+     * Sets the value of a parameter as a character stream.
+     * This method does not close the reader.
+     * The reader may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param length the maximum number of characters
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setCharacterStream(int parameterIndex, Reader x, int length)
+            throws SQLException {
+        setCharacterStream(parameterIndex, x, (long) length);
+    }
+
+    /**
+     * Sets the value of a parameter as a character stream.
+     * This method does not close the reader.
+     * The reader may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setCharacterStream(int parameterIndex, Reader x)
+            throws SQLException {
+        setCharacterStream(parameterIndex, x, -1);
+    }
+
+    /**
+     * Sets the value of a parameter as a character stream.
+     * This method does not close the reader.
+     * The reader may be closed after executing the statement.
+     *
+     * @param parameterIndex the parameter index (1, 2, ...)
+     * @param x the value
+     * @param length the maximum number of characters
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setCharacterStream(int parameterIndex, Reader x, long length)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setCharacterStream("+parameterIndex+", x, "+length+"L);");
+            }
+            checkClosedForWrite();
+            try {
+                Value v = conn.createClob(x, length);
+                setParameter(parameterIndex, v);
+            } finally {
+                afterWriting();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * [Not supported]
+     */
+    @Override
+    public void setURL(int parameterIndex, URL x) throws SQLException {
+        throw unsupported("url");
+    }
+
+    /**
+     * Gets the result set metadata of the query returned when the statement is
+     * executed. If this is not a query, this method returns null.
+     *
+     * @return the meta data or null if this is not a query
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public ResultSetMetaData getMetaData() throws SQLException {
+        try {
+            debugCodeCall("getMetaData");
+            checkClosed();
+            ResultInterface result = command.getMetaData();
+            if (result == null) {
+                return null;
+            }
+            int id = getNextId(TraceObject.RESULT_SET_META_DATA);
+            if (isDebugEnabled()) {
+                debugCodeAssign("ResultSetMetaData",
+                        TraceObject.RESULT_SET_META_DATA, id, "getMetaData()");
+            }
+            String catalog = conn.getCatalog();
+            return new JdbcResultSetMetaData(
+                    null, this, result, catalog, session.getTrace(), id);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Clears the batch.
+     */
+    @Override
+    public void clearBatch() throws SQLException {
+        try {
+            debugCodeCall("clearBatch");
+            checkClosed();
+            batchParameters = null;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Closes this statement.
+     * All result sets that were created by this statement
+     * become invalid after calling this method.
+ */ + @Override + public void close() throws SQLException { + try { + super.close(); + batchParameters = null; + if (command != null) { + command.close(); + command = null; + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Executes the batch. + * If one of the batched statements fails, this database will continue. + * + * @return the array of update counts + */ + @Override + public int[] executeBatch() throws SQLException { + try { + int id = getNextId(TraceObject.PREPARED_STATEMENT); + debugCodeCall("executeBatch"); + if (batchParameters == null) { + // TODO batch: check what other databases do if no parameters are + // set + batchParameters = New.arrayList(); + } + batchIdentities = new MergedResultSet(); + int size = batchParameters.size(); + int[] result = new int[size]; + boolean error = false; + SQLException next = null; + checkClosedForWrite(); + try { + for (int i = 0; i < size; i++) { + Value[] set = batchParameters.get(i); + ArrayList<? extends ParameterInterface> parameters = + command.getParameters(); + for (int j = 0; j < set.length; j++) { + Value value = set[j]; + ParameterInterface param = parameters.get(j); + param.setValue(value, false); + } + try { + result[i] = executeUpdateInternal(); + // Cannot use own implementation, it returns batch identities + ResultSet rs = super.getGeneratedKeys(); + batchIdentities.add(rs); + } catch (Exception re) { + SQLException e = logAndConvert(re); + if (next == null) { + next = e; + } else { + e.setNextException(next); + next = e; + } + result[i] = Statement.EXECUTE_FAILED; + error = true; + } + } + batchParameters = null; + if (error) { + throw new JdbcBatchUpdateException(next, result); + } + return result; + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + @Override + public ResultSet getGeneratedKeys() throws SQLException { + if (batchIdentities != null) { + return batchIdentities.getResult(); + } + return super.getGeneratedKeys(); + } + + /** + * Adds the current
settings to the batch. + */ + @Override + public void addBatch() throws SQLException { + try { + debugCodeCall("addBatch"); + checkClosedForWrite(); + try { + ArrayList<? extends ParameterInterface> parameters = + command.getParameters(); + int size = parameters.size(); + Value[] set = new Value[size]; + for (int i = 0; i < size; i++) { + ParameterInterface param = parameters.get(i); + param.checkSet(); + Value value = param.getParamValue(); + set[i] = value; + } + if (batchParameters == null) { + batchParameters = New.arrayList(); + } + batchParameters.add(set); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. + * + * @param sql ignored + * @param autoGeneratedKeys ignored + * @throws SQLException Unsupported Feature + */ + @Override + public int executeUpdate(String sql, int autoGeneratedKeys) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("executeUpdate("+quote(sql)+", "+autoGeneratedKeys+");"); + } + throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. + * + * @param sql ignored + * @param autoGeneratedKeys ignored + * @throws SQLException Unsupported Feature + */ + @Override + public long executeLargeUpdate(String sql, int autoGeneratedKeys) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("executeLargeUpdate("+quote(sql)+", "+autoGeneratedKeys+");"); + } + throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement.
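The `executeBatch()` loop above keeps going after a failed item: the failing slot in the update-count array becomes `Statement.EXECUTE_FAILED` and the per-item `SQLException`s are chained with the most recent failure at the head. That accumulation pattern can be sketched standalone with only `java.sql` types (the per-item work here is a hypothetical stand-in for `executeUpdateInternal()`):

```java
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Arrays;

public class BatchErrorChain {
    // Head of the exception chain after the last runBatch() call.
    static SQLException firstError;

    // Mimics the accumulation in executeBatch(): a failure does not stop
    // the batch; its slot becomes Statement.EXECUTE_FAILED (-3) and new
    // exceptions are linked in front of the previous ones.
    static int[] runBatch(boolean[] fails) {
        int[] result = new int[fails.length];
        SQLException next = null;
        for (int i = 0; i < fails.length; i++) {
            try {
                if (fails[i]) {
                    throw new SQLException("item " + i + " failed");
                }
                result[i] = 1; // pretend one row was affected
            } catch (SQLException e) {
                if (next == null) {
                    next = e;
                } else {
                    e.setNextException(next); // newest failure goes first
                    next = e;
                }
                result[i] = Statement.EXECUTE_FAILED;
            }
        }
        firstError = next;
        return result;
    }

    public static void main(String[] args) {
        int[] counts = runBatch(new boolean[] {false, true, false});
        System.out.println(Arrays.toString(counts)); // [1, -3, 1]
        System.out.println(firstError.getMessage()); // item 1 failed
    }
}
```

In the real method the whole chain and the count array are then wrapped in a `JdbcBatchUpdateException`, so callers see every per-item failure, not just the first.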
+ * + * @param sql ignored + * @param columnIndexes ignored + * @throws SQLException Unsupported Feature + */ + @Override + public int executeUpdate(String sql, int[] columnIndexes) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("executeUpdate(" + quote(sql) + ", " + + quoteIntArray(columnIndexes) + ");"); + } + throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. + * + * @param sql ignored + * @param columnIndexes ignored + * @throws SQLException Unsupported Feature + */ + @Override + public long executeLargeUpdate(String sql, int[] columnIndexes) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("executeLargeUpdate(" + quote(sql) + ", " + + quoteIntArray(columnIndexes) + ");"); + } + throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. + * + * @param sql ignored + * @param columnNames ignored + * @throws SQLException Unsupported Feature + */ + @Override + public int executeUpdate(String sql, String[] columnNames) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("executeUpdate(" + quote(sql) + ", " + + quoteArray(columnNames) + ");"); + } + throw DbException.get( + ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. 
+ * + * @param sql ignored + * @param columnNames ignored + * @throws SQLException Unsupported Feature + */ + @Override + public long executeLargeUpdate(String sql, String[] columnNames) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("executeLargeUpdate(" + quote(sql) + ", " + + quoteArray(columnNames) + ");"); + } + throw DbException.get( + ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. + * + * @param sql ignored + * @param autoGeneratedKeys ignored + * @throws SQLException Unsupported Feature + */ + @Override + public boolean execute(String sql, int autoGeneratedKeys) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("execute(" + quote(sql) + ", " + autoGeneratedKeys + ");"); + } + throw DbException.get( + ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. + * + * @param sql ignored + * @param columnIndexes ignored + * @throws SQLException Unsupported Feature + */ + @Override + public boolean execute(String sql, int[] columnIndexes) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("execute(" + quote(sql) + ", " + quoteIntArray(columnIndexes) + ");"); + } + throw DbException.get(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Calling this method is not legal on a PreparedStatement. 
+ * + * @param sql ignored + * @param columnNames ignored + * @throws SQLException Unsupported Feature + */ + @Override + public boolean execute(String sql, String[] columnNames) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("execute(" + quote(sql) + ", " + quoteArray(columnNames) + ");"); + } + throw DbException.get( + ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Get the parameter meta data of this prepared statement. + * + * @return the meta data + */ + @Override + public ParameterMetaData getParameterMetaData() throws SQLException { + try { + int id = getNextId(TraceObject.PARAMETER_META_DATA); + if (isDebugEnabled()) { + debugCodeAssign("ParameterMetaData", + TraceObject.PARAMETER_META_DATA, id, "getParameterMetaData()"); + } + checkClosed(); + return new JdbcParameterMetaData( + session.getTrace(), this, command, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + // ============================================================= + + private void setParameter(int parameterIndex, Value value) { + checkClosed(); + parameterIndex--; + ArrayList<? extends ParameterInterface> parameters = command.getParameters(); + if (parameterIndex < 0 || parameterIndex >= parameters.size()) { + throw DbException.getInvalidValueException("parameterIndex", + parameterIndex + 1); + } + ParameterInterface param = parameters.get(parameterIndex); + // can only delete old temp files if they are not in the batch + param.setValue(value, batchParameters == null); + } + + /** + * [Not supported] Sets the value of a parameter as a row id. + */ + @Override + public void setRowId(int parameterIndex, RowId x) throws SQLException { + throw unsupported("rowId"); + } + + /** + * Sets the value of a parameter. + * + * @param parameterIndex the parameter index (1, 2, ...)
+ * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNString(int parameterIndex, String x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setNString("+parameterIndex+", "+quote(x)+");"); + } + Value v = x == null ? (Value) ValueNull.INSTANCE : ValueString.get(x); + setParameter(parameterIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setNCharacterStream(int parameterIndex, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setNCharacterStream("+ + parameterIndex+", x, "+length+"L);"); + } + checkClosedForWrite(); + try { + Value v = conn.createClob(x, length); + setParameter(parameterIndex, v); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Sets the value of a parameter as a character stream. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNCharacterStream(int parameterIndex, Reader x) + throws SQLException { + setNCharacterStream(parameterIndex, x, -1); + } + + /** + * Sets the value of a parameter as a Clob. + * + * @param parameterIndex the parameter index (1, 2, ...) 
+ * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNClob(int parameterIndex, NClob x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setNClob("+parameterIndex+", x);"); + } + checkClosedForWrite(); + Value v; + if (x == null) { + v = ValueNull.INSTANCE; + } else { + v = conn.createClob(x.getCharacterStream(), -1); + } + setParameter(parameterIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Sets the value of a parameter as a Clob. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param x the value + * @throws SQLException if this object is closed + */ + @Override + public void setNClob(int parameterIndex, Reader x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setNClob("+parameterIndex+", x);"); + } + checkClosedForWrite(); + try { + Value v = conn.createClob(x, -1); + setParameter(parameterIndex, v); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Sets the value of a parameter as a Clob. This method does not close the + * reader. The reader may be closed after executing the statement. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setClob(int parameterIndex, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setClob("+parameterIndex+", x, "+length+"L);"); + } + checkClosedForWrite(); + try { + Value v = conn.createClob(x, length); + setParameter(parameterIndex, v); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Sets the value of a parameter as a Blob. 
+ * This method does not close the stream. + * The stream may be closed after executing the statement. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param x the value + * @param length the maximum number of bytes + * @throws SQLException if this object is closed + */ + @Override + public void setBlob(int parameterIndex, InputStream x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setBlob("+parameterIndex+", x, "+length+"L);"); + } + checkClosedForWrite(); + try { + Value v = conn.createBlob(x, length); + setParameter(parameterIndex, v); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Sets the value of a parameter as a Clob. + * This method does not close the reader. + * The reader may be closed after executing the statement. + * + * @param parameterIndex the parameter index (1, 2, ...) + * @param x the value + * @param length the maximum number of characters + * @throws SQLException if this object is closed + */ + @Override + public void setNClob(int parameterIndex, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("setNClob("+parameterIndex+", x, "+length+"L);"); + } + checkClosedForWrite(); + try { + Value v = conn.createClob(x, length); + setParameter(parameterIndex, v); + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Sets the value of a parameter as a SQLXML object. 
+ */ + @Override + public void setSQLXML(int parameterIndex, SQLXML x) throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": " + command; + } + + @Override + protected boolean checkClosed(boolean write) { + if (super.checkClosed(write)) { + // if the session was re-connected, re-prepare the statement + ArrayList<? extends ParameterInterface> oldParams = command.getParameters(); + command = conn.prepareCommand(sqlStatement, fetchSize); + ArrayList<? extends ParameterInterface> newParams = command.getParameters(); + for (int i = 0, size = oldParams.size(); i < size; i++) { + ParameterInterface old = oldParams.get(i); + Value value = old.getParamValue(); + if (value != null) { + ParameterInterface n = newParams.get(i); + n.setValue(value, false); + } + } + return true; + } + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcPreparedStatementBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcPreparedStatementBackwardsCompat.java new file mode 100644 index 0000000000000..282fdc736c4cf --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcPreparedStatementBackwardsCompat.java @@ -0,0 +1,37 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.sql.SQLException; + +/** + * Allows us to compile on older platforms, while still implementing the methods + * from the newer JDBC API. + */ +public interface JdbcPreparedStatementBackwardsCompat { + + // compatibility interface + + // JDBC 4.2 (incomplete) + + /** + * Executes a statement (insert, update, delete, create, drop) + * and returns the update count. + * If another result set exists for this statement, this will be closed + * (even if this statement fails). + * + * If auto commit is on, this statement will be committed.
+ * If the statement is a DDL statement (create, drop, alter) and does not + * throw an exception, the current transaction (if any) is committed after + * executing the statement. + * + * @return the update count (number of rows affected by an insert, update or + * delete, or 0 if no rows or the statement was a create, drop, + * commit or rollback) + * @throws SQLException if this object is closed or invalid + */ + long executeLargeUpdate() throws SQLException; +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSet.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSet.java new file mode 100644 index 0000000000000..fe150a6ae6387 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSet.java @@ -0,0 +1,3878 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.io.InputStream; +import java.io.Reader; +import java.math.BigDecimal; +import java.math.BigInteger; +import java.net.URL; +import java.sql.Array; +import java.sql.Blob; +import java.sql.Clob; +import java.sql.Date; +import java.sql.NClob; +import java.sql.Ref; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.RowId; +import java.sql.SQLException; +import java.sql.SQLWarning; +import java.sql.SQLXML; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.util.Calendar; +import java.util.HashMap; +import java.util.Map; +import java.util.UUID; +import org.h2.api.ErrorCode; +import org.h2.api.TimestampWithTimeZone; +import org.h2.command.CommandInterface; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.result.ResultInterface; +import org.h2.result.UpdatableRow; +import org.h2.util.DateTimeUtils; +import org.h2.util.IOUtils; +import org.h2.util.LocalDateTimeUtils; +import
org.h2.util.StringUtils; +import org.h2.value.CompareMode; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueByte; +import org.h2.value.ValueBytes; +import org.h2.value.ValueDate; +import org.h2.value.ValueDecimal; +import org.h2.value.ValueDouble; +import org.h2.value.ValueFloat; +import org.h2.value.ValueInt; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueShort; +import org.h2.value.ValueString; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; + +/** + * <p> + * Represents a result set. + * </p> + * <p> + * Column labels are case-insensitive, quotes are not supported. The first + * column has the column index 1. + * </p> + * <p> + * Updatable result sets: Result sets are updatable when the result only + * contains columns from one table, and if it contains all columns of a unique + * index (primary key or other) of this table. Key columns may not contain NULL + * (because multiple rows with NULL could exist). In updatable result sets, own + * changes are visible, but not own inserts and deletes. + * </p>
+ */ +public class JdbcResultSet extends TraceObject implements ResultSet, JdbcResultSetBackwardsCompat { + + private final boolean closeStatement; + private final boolean scrollable; + private final boolean updatable; + private ResultInterface result; + private JdbcConnection conn; + private JdbcStatement stat; + private int columnCount; + private boolean wasNull; + private Value[] insertRow; + private Value[] updateRow; + private HashMap<String, Integer> columnLabelMap; + private HashMap<Integer, Value[]> patchedRows; + private JdbcPreparedStatement preparedStatement; + private final CommandInterface command; + + JdbcResultSet(JdbcConnection conn, JdbcStatement stat, CommandInterface command, + ResultInterface result, int id, boolean closeStatement, + boolean scrollable, boolean updatable) { + setTrace(conn.getSession().getTrace(), TraceObject.RESULT_SET, id); + this.conn = conn; + this.stat = stat; + this.command = command; + this.result = result; + this.columnCount = result.getVisibleColumnCount(); + this.closeStatement = closeStatement; + this.scrollable = scrollable; + this.updatable = updatable; + } + + JdbcResultSet(JdbcConnection conn, JdbcPreparedStatement preparedStatement, + CommandInterface command, ResultInterface result, int id, boolean closeStatement, + boolean scrollable, boolean updatable, + HashMap<String, Integer> columnLabelMap) { + this(conn, preparedStatement, command, result, id, closeStatement, scrollable, + updatable); + this.columnLabelMap = columnLabelMap; + this.preparedStatement = preparedStatement; + } + + /** + * Moves the cursor to the next row of the result set. + * + * @return true if successful, false if there are no more rows + */ + @Override + public boolean next() throws SQLException { + try { + debugCodeCall("next"); + checkClosed(); + return nextRow(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the meta data of this result set.
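The class javadoc notes that column labels are matched case-insensitively and mapped to 1-based indexes (the role played by `columnLabelMap` and `findColumn`). A minimal stdlib sketch of that lookup behavior, assuming upper-casing as the normalization and first-label-wins for duplicates (an illustration of the described contract, not H2's implementation):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class ColumnLookup {
    // Maps upper-cased labels to 1-based column indexes; the first
    // occurrence of a duplicate label wins.
    private final Map<String, Integer> byLabel = new HashMap<>();

    ColumnLookup(String... labels) {
        for (int i = 0; i < labels.length; i++) {
            byLabel.putIfAbsent(labels[i].toUpperCase(Locale.ROOT), i + 1);
        }
    }

    // Case-insensitive lookup, like ResultSet.findColumn(label).
    int findColumn(String label) {
        Integer idx = byLabel.get(label.toUpperCase(Locale.ROOT));
        if (idx == null) {
            throw new IllegalArgumentException("Column not found: " + label);
        }
        return idx;
    }

    public static void main(String[] args) {
        ColumnLookup lookup = new ColumnLookup("ID", "Name");
        System.out.println(lookup.findColumn("id"));   // 1
        System.out.println(lookup.findColumn("NAME")); // 2
    }
}
```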
+ * + * @return the meta data + */ + @Override + public ResultSetMetaData getMetaData() throws SQLException { + try { + int id = getNextId(TraceObject.RESULT_SET_META_DATA); + if (isDebugEnabled()) { + debugCodeAssign("ResultSetMetaData", + TraceObject.RESULT_SET_META_DATA, id, "getMetaData()"); + } + checkClosed(); + String catalog = conn.getCatalog(); + return new JdbcResultSetMetaData(this, null, result, catalog, conn.getSession().getTrace(), id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns whether the last column accessed was null. + * + * @return true if the last column accessed was null + */ + @Override + public boolean wasNull() throws SQLException { + try { + debugCodeCall("wasNull"); + checkClosed(); + return wasNull; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Searches for a specific column in the result set. A case-insensitive + * search is made. + * + * @param columnLabel the column label + * @return the column index (1,2,...) + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public int findColumn(String columnLabel) throws SQLException { + try { + debugCodeCall("findColumn", columnLabel); + return getColumnIndex(columnLabel); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Closes the result set. + */ + @Override + public void close() throws SQLException { + try { + debugCodeCall("close"); + closeInternal(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Close the result set. This method also closes the statement if required. 
+ */ + void closeInternal() throws SQLException { + if (result != null) { + try { + if (result.isLazy()) { + stat.onLazyResultSetClose(command, preparedStatement == null); + } + result.close(); + if (closeStatement && stat != null) { + stat.close(); + } + } finally { + columnCount = 0; + result = null; + stat = null; + conn = null; + insertRow = null; + updateRow = null; + } + } + } + + /** + * Returns the statement that created this object. + * + * @return the statement or prepared statement, or null if created by a + * DatabaseMetaData call. + */ + @Override + public Statement getStatement() throws SQLException { + try { + debugCodeCall("getStatement"); + checkClosed(); + if (closeStatement) { + // if the result set was opened by a DatabaseMetaData call + return null; + } + return stat; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the first warning reported by calls on this object. + * + * @return null + */ + @Override + public SQLWarning getWarnings() throws SQLException { + try { + debugCodeCall("getWarnings"); + checkClosed(); + return null; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Clears all warnings. + */ + @Override + public void clearWarnings() throws SQLException { + try { + debugCodeCall("clearWarnings"); + checkClosed(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + // ============================================================= + + /** + * Returns the value of the specified column as a String. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public String getString(int columnIndex) throws SQLException { + try { + debugCodeCall("getString", columnIndex); + return get(columnIndex).getString(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a String. 
+ * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public String getString(String columnLabel) throws SQLException { + try { + debugCodeCall("getString", columnLabel); + return get(columnLabel).getString(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as an int. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public int getInt(int columnIndex) throws SQLException { + try { + debugCodeCall("getInt", columnIndex); + return get(columnIndex).getInt(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as an int. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public int getInt(String columnLabel) throws SQLException { + try { + debugCodeCall("getInt", columnLabel); + return get(columnLabel).getInt(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a BigDecimal. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public BigDecimal getBigDecimal(int columnIndex) throws SQLException { + try { + debugCodeCall("getBigDecimal", columnIndex); + return get(columnIndex).getBigDecimal(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Date. + * + * @param columnIndex (1,2,...) 
+ * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Date getDate(int columnIndex) throws SQLException { + try { + debugCodeCall("getDate", columnIndex); + return get(columnIndex).getDate(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Time. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Time getTime(int columnIndex) throws SQLException { + try { + debugCodeCall("getTime", columnIndex); + return get(columnIndex).getTime(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Timestamp. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Timestamp getTimestamp(int columnIndex) throws SQLException { + try { + debugCodeCall("getTimestamp", columnIndex); + return get(columnIndex).getTimestamp(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a BigDecimal. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public BigDecimal getBigDecimal(String columnLabel) throws SQLException { + try { + debugCodeCall("getBigDecimal", columnLabel); + return get(columnLabel).getBigDecimal(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Date. 
+ * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Date getDate(String columnLabel) throws SQLException { + try { + debugCodeCall("getDate", columnLabel); + return get(columnLabel).getDate(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Time. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Time getTime(String columnLabel) throws SQLException { + try { + debugCodeCall("getTime", columnLabel); + return get(columnLabel).getTime(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Timestamp. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Timestamp getTimestamp(String columnLabel) throws SQLException { + try { + debugCodeCall("getTimestamp", columnLabel); + return get(columnLabel).getTimestamp(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns a column value as a Java object. The data is + * de-serialized into a Java object (on the client side). + * + * @param columnIndex (1,2,...) + * @return the value or null + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Object getObject(int columnIndex) throws SQLException { + try { + debugCodeCall("getObject", columnIndex); + Value v = get(columnIndex); + return conn.convertToDefaultObject(v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns a column value as a Java object. The data is + * de-serialized into a Java object (on the client side). 
+ * + * @param columnLabel the column label + * @return the value or null + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Object getObject(String columnLabel) throws SQLException { + try { + debugCodeCall("getObject", columnLabel); + Value v = get(columnLabel); + return conn.convertToDefaultObject(v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a boolean. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public boolean getBoolean(int columnIndex) throws SQLException { + try { + debugCodeCall("getBoolean", columnIndex); + return get(columnIndex).getBoolean(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a boolean. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public boolean getBoolean(String columnLabel) throws SQLException { + try { + debugCodeCall("getBoolean", columnLabel); + return get(columnLabel).getBoolean(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a byte. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public byte getByte(int columnIndex) throws SQLException { + try { + debugCodeCall("getByte", columnIndex); + return get(columnIndex).getByte(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a byte. 
+ * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public byte getByte(String columnLabel) throws SQLException { + try { + debugCodeCall("getByte", columnLabel); + return get(columnLabel).getByte(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a short. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public short getShort(int columnIndex) throws SQLException { + try { + debugCodeCall("getShort", columnIndex); + return get(columnIndex).getShort(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a short. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public short getShort(String columnLabel) throws SQLException { + try { + debugCodeCall("getShort", columnLabel); + return get(columnLabel).getShort(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a long. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public long getLong(int columnIndex) throws SQLException { + try { + debugCodeCall("getLong", columnIndex); + return get(columnIndex).getLong(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a long. 
+ * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public long getLong(String columnLabel) throws SQLException { + try { + debugCodeCall("getLong", columnLabel); + return get(columnLabel).getLong(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a float. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public float getFloat(int columnIndex) throws SQLException { + try { + debugCodeCall("getFloat", columnIndex); + return get(columnIndex).getFloat(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a float. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public float getFloat(String columnLabel) throws SQLException { + try { + debugCodeCall("getFloat", columnLabel); + return get(columnLabel).getFloat(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a double. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public double getDouble(int columnIndex) throws SQLException { + try { + debugCodeCall("getDouble", columnIndex); + return get(columnIndex).getDouble(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a double. 
+ * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public double getDouble(String columnLabel) throws SQLException { + try { + debugCodeCall("getDouble", columnLabel); + return get(columnLabel).getDouble(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a BigDecimal. + * + * @deprecated use {@link #getBigDecimal(String)} + * + * @param columnLabel the column label + * @param scale the scale of the returned value + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Deprecated + @Override + public BigDecimal getBigDecimal(String columnLabel, int scale) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getBigDecimal(" + + StringUtils.quoteJavaString(columnLabel)+", "+scale+");"); + } + if (scale < 0) { + throw DbException.getInvalidValueException("scale", scale); + } + BigDecimal bd = get(columnLabel).getBigDecimal(); + return bd == null ? null : ValueDecimal.setScale(bd, scale); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a BigDecimal. + * + * @deprecated use {@link #getBigDecimal(int)} + * + * @param columnIndex (1,2,...) + * @param scale the scale of the returned value + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Deprecated + @Override + public BigDecimal getBigDecimal(int columnIndex, int scale) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getBigDecimal(" + columnIndex + ", " + scale + ");"); + } + if (scale < 0) { + throw DbException.getInvalidValueException("scale", scale); + } + BigDecimal bd = get(columnIndex).getBigDecimal(); + return bd == null ? 
null : ValueDecimal.setScale(bd, scale);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * [Not supported]
+ * @deprecated since JDBC 2.0, use getCharacterStream
+ */
+ @Deprecated
+ @Override
+ public InputStream getUnicodeStream(int columnIndex) throws SQLException {
+ throw unsupported("unicodeStream");
+ }
+
+ /**
+ * [Not supported]
+ * @deprecated since JDBC 2.0, use getCharacterStream
+ */
+ @Deprecated
+ @Override
+ public InputStream getUnicodeStream(String columnLabel) throws SQLException {
+ throw unsupported("unicodeStream");
+ }
+
+ /**
+ * [Not supported] Gets a column as an object using the specified type
+ * mapping.
+ */
+ @Override
+ public Object getObject(int columnIndex, Map<String, Class<?>> map)
+ throws SQLException {
+ throw unsupported("map");
+ }
+
+ /**
+ * [Not supported] Gets a column as an object using the specified type
+ * mapping.
+ */
+ @Override
+ public Object getObject(String columnLabel, Map<String, Class<?>> map)
+ throws SQLException {
+ throw unsupported("map");
+ }
+
+ /**
+ * [Not supported] Gets a column as a reference.
+ */
+ @Override
+ public Ref getRef(int columnIndex) throws SQLException {
+ throw unsupported("ref");
+ }
+
+ /**
+ * [Not supported] Gets a column as a reference.
+ */
+ @Override
+ public Ref getRef(String columnLabel) throws SQLException {
+ throw unsupported("ref");
+ }
+
+ /**
+ * Returns the value of the specified column as a java.sql.Date using a
+ * specified time zone.
+ *
+ * @param columnIndex (1,2,...)
+ * @param calendar the calendar + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Date getDate(int columnIndex, Calendar calendar) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getDate(" + columnIndex + ", calendar)"); + } + return DateTimeUtils.convertDate(get(columnIndex), calendar); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Date using a + * specified time zone. + * + * @param columnLabel the column label + * @param calendar the calendar + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Date getDate(String columnLabel, Calendar calendar) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getDate(" + + StringUtils.quoteJavaString(columnLabel) + + ", calendar)"); + } + return DateTimeUtils.convertDate(get(columnLabel), calendar); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Time using a + * specified time zone. + * + * @param columnIndex (1,2,...) + * @param calendar the calendar + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Time getTime(int columnIndex, Calendar calendar) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("getTime(" + columnIndex + ", calendar)"); + } + return DateTimeUtils.convertTime(get(columnIndex), calendar); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a java.sql.Time using a + * specified time zone. 
+ *
+ * @param columnLabel the column label
+ * @param calendar the calendar
+ * @return the value
+ * @throws SQLException if the column is not found or if the result set is
+ * closed
+ */
+ @Override
+ public Time getTime(String columnLabel, Calendar calendar)
+ throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("getTime(" +
+ StringUtils.quoteJavaString(columnLabel) +
+ ", calendar)");
+ }
+ return DateTimeUtils.convertTime(get(columnLabel), calendar);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Returns the value of the specified column as a java.sql.Timestamp using a
+ * specified time zone.
+ *
+ * @param columnIndex (1,2,...)
+ * @param calendar the calendar
+ * @return the value
+ * @throws SQLException if the column is not found or if the result set is
+ * closed
+ */
+ @Override
+ public Timestamp getTimestamp(int columnIndex, Calendar calendar)
+ throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("getTimestamp(" + columnIndex + ", calendar)");
+ }
+ Value value = get(columnIndex);
+ return DateTimeUtils.convertTimestamp(value, calendar);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Returns the value of the specified column as a java.sql.Timestamp using a
+ * specified time zone.
+ *
+ * @param columnLabel the column label
+ * @param calendar the calendar
+ * @return the value
+ * @throws SQLException if the column is not found or if the result set is
+ * closed
+ */
+ @Override
+ public Timestamp getTimestamp(String columnLabel, Calendar calendar)
+ throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("getTimestamp(" +
+ StringUtils.quoteJavaString(columnLabel) +
+ ", calendar)");
+ }
+ Value value = get(columnLabel);
+ return DateTimeUtils.convertTimestamp(value, calendar);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Returns the value of the specified column as a Blob.
+ *
+ * @param columnIndex (1,2,...)
+ * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Blob getBlob(int columnIndex) throws SQLException { + try { + int id = getNextId(TraceObject.BLOB); + if (isDebugEnabled()) { + debugCodeAssign("Blob", TraceObject.BLOB, + id, "getBlob(" + columnIndex + ")"); + } + Value v = get(columnIndex); + return v == ValueNull.INSTANCE ? null : new JdbcBlob(conn, v, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a Blob. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Blob getBlob(String columnLabel) throws SQLException { + try { + int id = getNextId(TraceObject.BLOB); + if (isDebugEnabled()) { + debugCodeAssign("Blob", TraceObject.BLOB, + id, "getBlob(" + quote(columnLabel) + ")"); + } + Value v = get(columnLabel); + return v == ValueNull.INSTANCE ? null : new JdbcBlob(conn, v, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a byte array. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public byte[] getBytes(int columnIndex) throws SQLException { + try { + debugCodeCall("getBytes", columnIndex); + return get(columnIndex).getBytes(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a byte array. 
+ * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public byte[] getBytes(String columnLabel) throws SQLException { + try { + debugCodeCall("getBytes", columnLabel); + return get(columnLabel).getBytes(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as an input stream. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public InputStream getBinaryStream(int columnIndex) throws SQLException { + try { + debugCodeCall("getBinaryStream", columnIndex); + return get(columnIndex).getInputStream(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as an input stream. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public InputStream getBinaryStream(String columnLabel) throws SQLException { + try { + debugCodeCall("getBinaryStream", columnLabel); + return get(columnLabel).getInputStream(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + + /** + * Returns the value of the specified column as a Clob. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Clob getClob(int columnIndex) throws SQLException { + try { + int id = getNextId(TraceObject.CLOB); + if (isDebugEnabled()) { + debugCodeAssign("Clob", TraceObject.CLOB, id, "getClob(" + columnIndex + ")"); + } + Value v = get(columnIndex); + return v == ValueNull.INSTANCE ? 
null : new JdbcClob(conn, v, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a Clob. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Clob getClob(String columnLabel) throws SQLException { + try { + int id = getNextId(TraceObject.CLOB); + if (isDebugEnabled()) { + debugCodeAssign("Clob", TraceObject.CLOB, id, "getClob(" + + quote(columnLabel) + ")"); + } + Value v = get(columnLabel); + return v == ValueNull.INSTANCE ? null : new JdbcClob(conn, v, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as an Array. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Array getArray(int columnIndex) throws SQLException { + try { + int id = getNextId(TraceObject.ARRAY); + if (isDebugEnabled()) { + debugCodeAssign("Array", TraceObject.ARRAY, id, "getArray(" + columnIndex + ")"); + } + Value v = get(columnIndex); + return v == ValueNull.INSTANCE ? null : new JdbcArray(conn, v, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as an Array. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Array getArray(String columnLabel) throws SQLException { + try { + int id = getNextId(TraceObject.ARRAY); + if (isDebugEnabled()) { + debugCodeAssign("Array", TraceObject.ARRAY, id, "getArray(" + + quote(columnLabel) + ")"); + } + Value v = get(columnLabel); + return v == ValueNull.INSTANCE ? 
null : new JdbcArray(conn, v, id);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Returns the value of the specified column as an input stream.
+ *
+ * @param columnIndex (1,2,...)
+ * @return the value
+ * @throws SQLException if the column is not found or if the result set is
+ * closed
+ */
+ @Override
+ public InputStream getAsciiStream(int columnIndex) throws SQLException {
+ try {
+ debugCodeCall("getAsciiStream", columnIndex);
+ String s = get(columnIndex).getString();
+ return s == null ? null : IOUtils.getInputStreamFromString(s);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Returns the value of the specified column as an input stream.
+ *
+ * @param columnLabel the column label
+ * @return the value
+ * @throws SQLException if the column is not found or if the result set is
+ * closed
+ */
+ @Override
+ public InputStream getAsciiStream(String columnLabel) throws SQLException {
+ try {
+ debugCodeCall("getAsciiStream", columnLabel);
+ String s = get(columnLabel).getString();
+ return s == null ? null : IOUtils.getInputStreamFromString(s);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Returns the value of the specified column as a reader.
+ *
+ * @param columnIndex (1,2,...)
+ * @return the value
+ * @throws SQLException if the column is not found or if the result set is
+ * closed
+ */
+ @Override
+ public Reader getCharacterStream(int columnIndex) throws SQLException {
+ try {
+ debugCodeCall("getCharacterStream", columnIndex);
+ return get(columnIndex).getReader();
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Returns the value of the specified column as a reader.
+ * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Reader getCharacterStream(String columnLabel) throws SQLException { + try { + debugCodeCall("getCharacterStream", columnLabel); + return get(columnLabel).getReader(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] + */ + @Override + public URL getURL(int columnIndex) throws SQLException { + throw unsupported("url"); + } + + /** + * [Not supported] + */ + @Override + public URL getURL(String columnLabel) throws SQLException { + throw unsupported("url"); + } + + // ============================================================= + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNull(int columnIndex) throws SQLException { + try { + debugCodeCall("updateNull", columnIndex); + update(columnIndex, ValueNull.INSTANCE); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNull(String columnLabel) throws SQLException { + try { + debugCodeCall("updateNull", columnLabel); + update(columnLabel, ValueNull.INSTANCE); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value
+ * @throws SQLException if the result set is closed or not updatable
+ */
+ @Override
+ public void updateBoolean(int columnIndex, boolean x) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("updateBoolean("+columnIndex+", "+x+");");
+ }
+ update(columnIndex, ValueBoolean.get(x));
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Updates a column in the current or insert row.
+ *
+ * @param columnLabel the column label
+ * @param x the value
+ * @throws SQLException if the result set is closed or not updatable
+ */
+ @Override
+ public void updateBoolean(String columnLabel, boolean x)
+ throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("updateBoolean("+quote(columnLabel)+", "+x+");");
+ }
+ update(columnLabel, ValueBoolean.get(x));
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Updates a column in the current or insert row.
+ *
+ * @param columnIndex (1,2,...)
+ * @param x the value
+ * @throws SQLException if the result set is closed or not updatable
+ */
+ @Override
+ public void updateByte(int columnIndex, byte x) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("updateByte("+columnIndex+", "+x+");");
+ }
+ update(columnIndex, ValueByte.get(x));
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Updates a column in the current or insert row.
+ *
+ * @param columnLabel the column label
+ * @param x the value
+ * @throws SQLException if the result set is closed or not updatable
+ */
+ @Override
+ public void updateByte(String columnLabel, byte x) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("updateByte("+quote(columnLabel)+", "+x+");");
+ }
+ update(columnLabel, ValueByte.get(x));
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Updates a column in the current or insert row.
+ *
+ * @param columnIndex (1,2,...)
+ * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBytes(int columnIndex, byte[] x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBytes("+columnIndex+", x);"); + } + update(columnIndex, x == null ? (Value) ValueNull.INSTANCE : ValueBytes.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBytes(String columnLabel, byte[] x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBytes("+quote(columnLabel)+", x);"); + } + update(columnLabel, x == null ? (Value) ValueNull.INSTANCE : ValueBytes.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateShort(int columnIndex, short x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateShort("+columnIndex+", (short) "+x+");"); + } + update(columnIndex, ValueShort.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateShort(String columnLabel, short x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateShort("+quote(columnLabel)+", (short) "+x+");"); + } + update(columnLabel, ValueShort.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateInt(int columnIndex, int x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateInt("+columnIndex+", "+x+");"); + } + update(columnIndex, ValueInt.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateInt(String columnLabel, int x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateInt("+quote(columnLabel)+", "+x+");"); + } + update(columnLabel, ValueInt.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateLong(int columnIndex, long x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateLong("+columnIndex+", "+x+"L);"); + } + update(columnIndex, ValueLong.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateLong(String columnLabel, long x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateLong("+quote(columnLabel)+", "+x+"L);"); + } + update(columnLabel, ValueLong.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateFloat(int columnIndex, float x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateFloat("+columnIndex+", "+x+"f);"); + } + update(columnIndex, ValueFloat.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateFloat(String columnLabel, float x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateFloat("+quote(columnLabel)+", "+x+"f);"); + } + update(columnLabel, ValueFloat.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateDouble(int columnIndex, double x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateDouble("+columnIndex+", "+x+"d);"); + } + update(columnIndex, ValueDouble.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateDouble(String columnLabel, double x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateDouble("+quote(columnLabel)+", "+x+"d);"); + } + update(columnLabel, ValueDouble.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBigDecimal(int columnIndex, BigDecimal x) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBigDecimal("+columnIndex+", " + quoteBigDecimal(x) + ");"); + } + update(columnIndex, x == null ? (Value) ValueNull.INSTANCE + : ValueDecimal.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBigDecimal(String columnLabel, BigDecimal x) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBigDecimal(" + quote(columnLabel) + ", " + + quoteBigDecimal(x) + ");"); + } + update(columnLabel, x == null ? (Value) ValueNull.INSTANCE + : ValueDecimal.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateString(int columnIndex, String x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateString("+columnIndex+", "+quote(x)+");"); + } + update(columnIndex, x == null ? (Value) ValueNull.INSTANCE + : ValueString.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateString(String columnLabel, String x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateString("+quote(columnLabel)+", "+quote(x)+");"); + } + update(columnLabel, x == null ? 
(Value) ValueNull.INSTANCE + : ValueString.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateDate(int columnIndex, Date x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateDate("+columnIndex+", x);"); + } + update(columnIndex, x == null ? (Value) ValueNull.INSTANCE : ValueDate.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateDate(String columnLabel, Date x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateDate("+quote(columnLabel)+", x);"); + } + update(columnLabel, x == null ? (Value) ValueNull.INSTANCE : ValueDate.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateTime(int columnIndex, Time x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateTime("+columnIndex+", x);"); + } + update(columnIndex, x == null ? (Value) ValueNull.INSTANCE : ValueTime.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateTime(String columnLabel, Time x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateTime("+quote(columnLabel)+", x);"); + } + update(columnLabel, x == null ? (Value) ValueNull.INSTANCE : ValueTime.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateTimestamp(int columnIndex, Timestamp x) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateTimestamp("+columnIndex+", x);"); + } + update(columnIndex, x == null ? (Value) ValueNull.INSTANCE + : ValueTimestamp.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateTimestamp(String columnLabel, Timestamp x) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateTimestamp("+quote(columnLabel)+", x);"); + } + update(columnLabel, x == null ? (Value) ValueNull.INSTANCE + : ValueTimestamp.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateAsciiStream(int columnIndex, InputStream x, int length) + throws SQLException { + updateAsciiStream(columnIndex, x, (long) length); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateAsciiStream(int columnIndex, InputStream x) + throws SQLException { + updateAsciiStream(columnIndex, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateAsciiStream(int columnIndex, InputStream x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateAsciiStream("+columnIndex+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createClob(IOUtils.getAsciiReader(x), length); + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateAsciiStream(String columnLabel, InputStream x, int length) + throws SQLException { + updateAsciiStream(columnLabel, x, (long) length); + } + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed + */ + @Override + public void updateAsciiStream(String columnLabel, InputStream x) + throws SQLException { + updateAsciiStream(columnLabel, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateAsciiStream(String columnLabel, InputStream x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateAsciiStream("+quote(columnLabel)+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createClob(IOUtils.getAsciiReader(x), length); + update(columnLabel, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBinaryStream(int columnIndex, InputStream x, int length) + throws SQLException { + updateBinaryStream(columnIndex, x, (long) length); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBinaryStream(int columnIndex, InputStream x) + throws SQLException { + updateBinaryStream(columnIndex, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBinaryStream(int columnIndex, InputStream x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBinaryStream("+columnIndex+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createBlob(x, length); + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBinaryStream(String columnLabel, InputStream x) + throws SQLException { + updateBinaryStream(columnLabel, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBinaryStream(String columnLabel, InputStream x, int length) + throws SQLException { + updateBinaryStream(columnLabel, x, (long) length); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBinaryStream(String columnLabel, InputStream x, + long length) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBinaryStream("+quote(columnLabel)+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createBlob(x, length); + update(columnLabel, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateCharacterStream(int columnIndex, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateCharacterStream("+columnIndex+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createClob(x, length); + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateCharacterStream(int columnIndex, Reader x, int length) + throws SQLException { + updateCharacterStream(columnIndex, x, (long) length); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateCharacterStream(int columnIndex, Reader x) + throws SQLException { + updateCharacterStream(columnIndex, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateCharacterStream(String columnLabel, Reader x, int length) + throws SQLException { + updateCharacterStream(columnLabel, x, (long) length); + } + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateCharacterStream(String columnLabel, Reader x) + throws SQLException { + updateCharacterStream(columnLabel, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateCharacterStream(String columnLabel, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateCharacterStream("+quote(columnLabel)+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createClob(x, length); + update(columnLabel, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param scale is ignored + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateObject(int columnIndex, Object x, int scale) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateObject("+columnIndex+", x, "+scale+");"); + } + update(columnIndex, convertToUnknownValue(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnLabel the column label + * @param x the value + * @param scale is ignored + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateObject(String columnLabel, Object x, int scale) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateObject("+quote(columnLabel)+", x, "+scale+");"); + } + update(columnLabel, convertToUnknownValue(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateObject(int columnIndex, Object x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateObject("+columnIndex+", x);"); + } + update(columnIndex, convertToUnknownValue(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateObject(String columnLabel, Object x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateObject("+quote(columnLabel)+", x);"); + } + update(columnLabel, convertToUnknownValue(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] + */ + @Override + public void updateRef(int columnIndex, Ref x) throws SQLException { + throw unsupported("ref"); + } + + /** + * [Not supported] + */ + @Override + public void updateRef(String columnLabel, Ref x) throws SQLException { + throw unsupported("ref"); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBlob(int columnIndex, InputStream x) throws SQLException { + updateBlob(columnIndex, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param length the length + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBlob(int columnIndex, InputStream x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBlob("+columnIndex+", x, " + length + "L);"); + } + checkClosed(); + Value v = conn.createBlob(x, length); + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateBlob(int columnIndex, Blob x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateBlob("+columnIndex+", x);"); + } + checkClosed(); + Value v; + if (x == null) { + v = ValueNull.INSTANCE; + } else { + v = conn.createBlob(x.getBinaryStream(), -1); + } + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. 
+     *
+     * @param columnLabel the column label
+     * @param x the value
+     * @throws SQLException if the result set is closed or not updatable
+     */
+    @Override
+    public void updateBlob(String columnLabel, Blob x) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("updateBlob("+quote(columnLabel)+", x);");
+            }
+            checkClosed();
+            Value v;
+            if (x == null) {
+                v = ValueNull.INSTANCE;
+            } else {
+                v = conn.createBlob(x.getBinaryStream(), -1);
+            }
+            update(columnLabel, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Updates a column in the current or insert row.
+     *
+     * @param columnLabel the column label
+     * @param x the value
+     * @throws SQLException if the result set is closed or not updatable
+     */
+    @Override
+    public void updateBlob(String columnLabel, InputStream x) throws SQLException {
+        updateBlob(columnLabel, x, -1);
+    }
+
+    /**
+     * Updates a column in the current or insert row.
+     *
+     * @param columnLabel the column label
+     * @param x the value
+     * @param length the length
+     * @throws SQLException if the result set is closed or not updatable
+     */
+    @Override
+    public void updateBlob(String columnLabel, InputStream x, long length)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("updateBlob("+quote(columnLabel)+", x, " + length + "L);");
+            }
+            checkClosed();
+            // use the caller-supplied length, as the columnIndex overload does
+            Value v = conn.createBlob(x, length);
+            update(columnLabel, v);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Updates a column in the current or insert row.
+     *
+     * @param columnIndex (1,2,...)
+ * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateClob(int columnIndex, Clob x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateClob("+columnIndex+", x);"); + } + checkClosed(); + Value v; + if (x == null) { + v = ValueNull.INSTANCE; + } else { + v = conn.createClob(x.getCharacterStream(), -1); + } + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateClob(int columnIndex, Reader x) throws SQLException { + updateClob(columnIndex, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param length the length + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateClob(int columnIndex, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateClob("+columnIndex+", x, " + length + "L);"); + } + checkClosed(); + Value v = conn.createClob(x, length); + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateClob(String columnLabel, Clob x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateClob("+quote(columnLabel)+", x);"); + } + checkClosed(); + Value v; + if (x == null) { + v = ValueNull.INSTANCE; + } else { + v = conn.createClob(x.getCharacterStream(), -1); + } + update(columnLabel, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateClob(String columnLabel, Reader x) throws SQLException { + updateClob(columnLabel, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the length + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateClob(String columnLabel, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateClob("+quote(columnLabel)+", x, " + length + "L);"); + } + checkClosed(); + Value v = conn.createClob(x, length); + update(columnLabel, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] + */ + @Override + public void updateArray(int columnIndex, Array x) throws SQLException { + throw unsupported("setArray"); + } + + /** + * [Not supported] + */ + @Override + public void updateArray(String columnLabel, Array x) throws SQLException { + throw unsupported("setArray"); + } + + /** + * [Not supported] Gets the cursor name if it was defined. This feature is + * superseded by updateX methods. This method throws a SQLException because + * cursor names are not supported. 
+ */ + @Override + public String getCursorName() throws SQLException { + throw unsupported("cursorName"); + } + + /** + * Gets the current row number. The first row is row 1, the second 2 and so + * on. This method returns 0 before the first and after the last row. + * + * @return the row number + */ + @Override + public int getRow() throws SQLException { + try { + debugCodeCall("getRow"); + checkClosed(); + if (result.isAfterLast()) { + return 0; + } + int rowId = result.getRowId(); + return rowId + 1; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the result set concurrency. Result sets are only updatable if the + * statement was created with updatable concurrency, and if the result set + * contains all columns of the primary key or of a unique index of a table. + * + * @return ResultSet.CONCUR_UPDATABLE if the result set is updatable, or + * ResultSet.CONCUR_READ_ONLY otherwise + */ + @Override + public int getConcurrency() throws SQLException { + try { + debugCodeCall("getConcurrency"); + checkClosed(); + if (!updatable) { + return ResultSet.CONCUR_READ_ONLY; + } + UpdatableRow row = new UpdatableRow(conn, result); + return row.isUpdatable() ? ResultSet.CONCUR_UPDATABLE + : ResultSet.CONCUR_READ_ONLY; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the fetch direction. + * + * @return the direction: FETCH_FORWARD + */ + @Override + public int getFetchDirection() throws SQLException { + try { + debugCodeCall("getFetchDirection"); + checkClosed(); + return ResultSet.FETCH_FORWARD; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the number of rows suggested to read in one step. 
+     *
+     * @return the current fetch size
+     */
+    @Override
+    public int getFetchSize() throws SQLException {
+        try {
+            debugCodeCall("getFetchSize");
+            checkClosed();
+            return result.getFetchSize();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the number of rows suggested to read in one step. This value cannot
+     * be higher than the maximum rows (setMaxRows) set by the statement or
+     * prepared statement, otherwise an exception is thrown. Setting the value
+     * to 0 will set the default value. The default value can be changed using
+     * the system property h2.serverResultSetFetchSize.
+     *
+     * @param rows the number of rows
+     */
+    @Override
+    public void setFetchSize(int rows) throws SQLException {
+        try {
+            debugCodeCall("setFetchSize", rows);
+            checkClosed();
+            if (rows < 0) {
+                throw DbException.getInvalidValueException("rows", rows);
+            } else if (rows > 0) {
+                if (stat != null) {
+                    int maxRows = stat.getMaxRows();
+                    if (maxRows > 0 && rows > maxRows) {
+                        throw DbException.getInvalidValueException("rows", rows);
+                    }
+                }
+            } else {
+                rows = SysProperties.SERVER_RESULT_SET_FETCH_SIZE;
+            }
+            result.setFetchSize(rows);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * [Not supported]
+     * Sets (changes) the fetch direction for this result set. This method
+     * should only be called for scrollable result sets, otherwise it will throw
+     * an exception (no matter what direction is used).
+     *
+     * @param direction the new fetch direction
+     * @throws SQLException Unsupported Feature if the method is called for a
+     *             forward-only result set
+     */
+    @Override
+    public void setFetchDirection(int direction) throws SQLException {
+        debugCodeCall("setFetchDirection", direction);
+        // ignore FETCH_FORWARD, that's the default value, which we do support
+        if (direction != ResultSet.FETCH_FORWARD) {
+            throw unsupported("setFetchDirection");
+        }
+    }
+
+    /**
+     * Get the result set type.
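As an aside (not part of the patch), the validation rules setFetchSize applies above — negative values rejected, positive values capped by the statement's max rows, zero selecting the default — can be sketched standalone. The class name and the default constant below are illustrative stand-ins, not H2 API:

```java
// Sketch of the setFetchSize() checks above; DEFAULT_FETCH_SIZE stands in
// for the h2.serverResultSetFetchSize system property.
public class FetchSizeRules {
    static final int DEFAULT_FETCH_SIZE = 64; // assumed value, for illustration only

    /** Mirrors the checks above: rows < 0 is invalid, rows > maxRows is invalid. */
    static int effectiveFetchSize(int rows, int maxRows) {
        if (rows < 0) {
            throw new IllegalArgumentException("rows: " + rows);
        }
        if (rows > 0) {
            // maxRows == 0 means "no limit", so any positive value is accepted
            if (maxRows > 0 && rows > maxRows) {
                throw new IllegalArgumentException("rows: " + rows);
            }
            return rows;
        }
        return DEFAULT_FETCH_SIZE; // rows == 0 selects the default
    }
}
```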
+ * + * @return the result set type (TYPE_FORWARD_ONLY, TYPE_SCROLL_INSENSITIVE + * or TYPE_SCROLL_SENSITIVE) + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public int getType() throws SQLException { + try { + debugCodeCall("getType"); + checkClosed(); + return stat == null ? ResultSet.TYPE_FORWARD_ONLY : stat.resultSetType; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if the current position is before the first row, that means next() + * was not called yet, and there is at least one row. + * + * @return if there are results and the current position is before the first + * row + * @throws SQLException if the result set is closed + */ + @Override + public boolean isBeforeFirst() throws SQLException { + try { + debugCodeCall("isBeforeFirst"); + checkClosed(); + return result.getRowId() < 0 && result.hasNext(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if the current position is after the last row, that means next() + * was called and returned false, and there was at least one row. + * + * @return if there are results and the current position is after the last + * row + * @throws SQLException if the result set is closed + */ + @Override + public boolean isAfterLast() throws SQLException { + try { + debugCodeCall("isAfterLast"); + checkClosed(); + return result.getRowId() > 0 && result.isAfterLast(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if the current position is row 1, that means next() was called + * once and returned true. 
+     *
+     * @return if the current position is the first row
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public boolean isFirst() throws SQLException {
+        try {
+            debugCodeCall("isFirst");
+            checkClosed();
+            return result.getRowId() == 0 && !result.isAfterLast();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Checks if the current position is the last row, that means next() was
+     * called and has not yet returned false, but will in the next call.
+     *
+     * @return if the current position is the last row
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public boolean isLast() throws SQLException {
+        try {
+            debugCodeCall("isLast");
+            checkClosed();
+            int rowId = result.getRowId();
+            return rowId >= 0 && !result.isAfterLast() && !result.hasNext();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to before the first row, that means resets the
+     * result set.
+     *
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public void beforeFirst() throws SQLException {
+        try {
+            debugCodeCall("beforeFirst");
+            checkClosed();
+            if (result.getRowId() >= 0) {
+                resetResult();
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to after the last row, that means after the
+     * end.
+     *
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public void afterLast() throws SQLException {
+        try {
+            debugCodeCall("afterLast");
+            checkClosed();
+            while (nextRow()) {
+                // nothing
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to the first row. This is the same as calling
+     * beforeFirst() followed by next().
+     *
+     * @return true if there is a row available, false if not
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public boolean first() throws SQLException {
+        try {
+            debugCodeCall("first");
+            checkClosed();
+            if (result.getRowId() >= 0) {
+                resetResult();
+            }
+            return nextRow();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to the last row.
+     *
+     * @return true if there is a row available, false if not
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public boolean last() throws SQLException {
+        try {
+            debugCodeCall("last");
+            checkClosed();
+            if (result.isAfterLast()) {
+                resetResult();
+            }
+            while (result.hasNext()) {
+                nextRow();
+            }
+            return isOnValidRow();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to a specific row.
+     *
+     * @param rowNumber the row number. 0 is not allowed, 1 means the first row,
+     *            2 the second. -1 means the last row, -2 the row before the
+     *            last row. If the value is too large, the position is moved
+     *            after the last row, and if the value is too small it is moved
+     *            before the first row.
+     * @return true if there is a row available, false if not
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public boolean absolute(int rowNumber) throws SQLException {
+        try {
+            debugCodeCall("absolute", rowNumber);
+            checkClosed();
+            if (rowNumber < 0) {
+                rowNumber = result.getRowCount() + rowNumber + 1;
+            }
+            if (--rowNumber < result.getRowId()) {
+                resetResult();
+            }
+            while (result.getRowId() < rowNumber) {
+                if (!nextRow()) {
+                    return false;
+                }
+            }
+            return isOnValidRow();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to a specific row relative to the current row.
+     *
+     * @param rowCount 0 means don't do anything, 1 is the next row, -1 the
+     *            previous.
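As an aside (not part of the patch), the argument arithmetic used by absolute() above — a 1-based row number, with negative values counting from the end — can be sketched in isolation. The class and method names below are illustrative only:

```java
// Illustrative only: the mapping from absolute()'s rowNumber argument to the
// zero-based row id it seeks to, mirroring the arithmetic in the method above.
public class AbsolutePosition {
    /** rowNumber is 1-based; -1 means the last row, -2 the row before it. */
    static int targetRowId(int rowNumber, int rowCount) {
        if (rowNumber < 0) {
            rowNumber = rowCount + rowNumber + 1; // -1 -> rowCount, -2 -> rowCount - 1
        }
        return rowNumber - 1; // convert to zero-based, as in `--rowNumber`
    }
}
```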
+     *            If the value is too large, the position is moved
+     *            after the last row, and if the value is too small it is moved
+     *            before the first row.
+     * @return true if there is a row available, false if not
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public boolean relative(int rowCount) throws SQLException {
+        try {
+            debugCodeCall("relative", rowCount);
+            checkClosed();
+            if (rowCount < 0) {
+                rowCount = result.getRowId() + rowCount + 1;
+                resetResult();
+            }
+            for (int i = 0; i < rowCount; i++) {
+                if (!nextRow()) {
+                    return false;
+                }
+            }
+            return isOnValidRow();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the cursor to the previous row, or before the first row if the
+     * current position is the first row.
+     *
+     * @return true if there is a row available, false if not
+     * @throws SQLException if the result set is closed
+     */
+    @Override
+    public boolean previous() throws SQLException {
+        try {
+            debugCodeCall("previous");
+            checkClosed();
+            return relative(-1);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to the insert row. The current row is
+     * remembered.
+     *
+     * @throws SQLException if the result set is closed or is not updatable
+     */
+    @Override
+    public void moveToInsertRow() throws SQLException {
+        try {
+            debugCodeCall("moveToInsertRow");
+            checkUpdatable();
+            insertRow = new Value[columnCount];
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves the current position to the current row.
+     *
+     * @throws SQLException if the result set is closed or is not updatable
+     */
+    @Override
+    public void moveToCurrentRow() throws SQLException {
+        try {
+            debugCodeCall("moveToCurrentRow");
+            checkUpdatable();
+            insertRow = null;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Detects if the row was updated (by somebody else or the caller).
+     *
+     * @return false because this driver does not detect this
+     */
+    @Override
+    public boolean rowUpdated() throws SQLException {
+        try {
+            debugCodeCall("rowUpdated");
+            return false;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Detects if the row was inserted.
+     *
+     * @return false because this driver does not detect this
+     */
+    @Override
+    public boolean rowInserted() throws SQLException {
+        try {
+            debugCodeCall("rowInserted");
+            return false;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Detects if the row was deleted (by somebody else or the caller).
+     *
+     * @return false because this driver does not detect this
+     */
+    @Override
+    public boolean rowDeleted() throws SQLException {
+        try {
+            debugCodeCall("rowDeleted");
+            return false;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Inserts the current row. The current position must be the insert row.
+     *
+     * @throws SQLException if the result set is closed or if not on the insert
+     *             row, or if the result set is not updatable
+     */
+    @Override
+    public void insertRow() throws SQLException {
+        try {
+            debugCodeCall("insertRow");
+            checkUpdatable();
+            if (insertRow == null) {
+                throw DbException.get(ErrorCode.NOT_ON_UPDATABLE_ROW);
+            }
+            getUpdatableRow().insertRow(insertRow);
+            insertRow = null;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Updates the current row.
+     *
+     * @throws SQLException if the result set is closed, if the current row is
+     *             the insert row or if not on a valid row, or if the result set
+     *             is not updatable
+     */
+    @Override
+    public void updateRow() throws SQLException {
+        try {
+            debugCodeCall("updateRow");
+            checkUpdatable();
+            if (insertRow != null) {
+                throw DbException.get(ErrorCode.NOT_ON_UPDATABLE_ROW);
+            }
+            checkOnValidRow();
+            if (updateRow != null) {
+                UpdatableRow row = getUpdatableRow();
+                Value[] current = new Value[columnCount];
+                for (int i = 0; i < updateRow.length; i++) {
+                    current[i] = get(i + 1);
+                }
+                row.updateRow(current, updateRow);
+                for (int i = 0; i < updateRow.length; i++) {
+                    if (updateRow[i] == null) {
+                        updateRow[i] = current[i];
+                    }
+                }
+                Value[] patch = row.readRow(updateRow);
+                patchCurrentRow(patch);
+                updateRow = null;
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Deletes the current row.
+     *
+     * @throws SQLException if the result set is closed, if the current row is
+     *             the insert row or if not on a valid row, or if the result set
+     *             is not updatable
+     */
+    @Override
+    public void deleteRow() throws SQLException {
+        try {
+            debugCodeCall("deleteRow");
+            checkUpdatable();
+            if (insertRow != null) {
+                throw DbException.get(ErrorCode.NOT_ON_UPDATABLE_ROW);
+            }
+            checkOnValidRow();
+            getUpdatableRow().deleteRow(result.currentRow());
+            updateRow = null;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Re-reads the current row from the database.
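As an aside (not part of the patch), updateRow() above fills columns that were not explicitly updated from the current row before re-reading: a null entry in the update array means "not updated". A minimal sketch of that merge, with illustrative names and plain Objects instead of H2 Values:

```java
// Illustrative only: the null-means-unchanged merge performed by updateRow()
// above, using Object in place of H2's internal Value type.
public class RowMerge {
    /** Returns a row where null entries in updates fall back to current. */
    static Object[] mergeRow(Object[] current, Object[] updates) {
        Object[] merged = new Object[current.length];
        for (int i = 0; i < current.length; i++) {
            merged[i] = updates[i] != null ? updates[i] : current[i];
        }
        return merged;
    }
}
```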
+ * + * @throws SQLException if the result set is closed or if the current row is + * the insert row or if the row has been deleted or if not on a + * valid row + */ + @Override + public void refreshRow() throws SQLException { + try { + debugCodeCall("refreshRow"); + checkClosed(); + if (insertRow != null) { + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + checkOnValidRow(); + patchCurrentRow(getUpdatableRow().readRow(result.currentRow())); + updateRow = null; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Cancels updating a row. + * + * @throws SQLException if the result set is closed or if the current row is + * the insert row + */ + @Override + public void cancelRowUpdates() throws SQLException { + try { + debugCodeCall("cancelRowUpdates"); + checkClosed(); + if (insertRow != null) { + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + updateRow = null; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + // ============================================================= + + private UpdatableRow getUpdatableRow() throws SQLException { + UpdatableRow row = new UpdatableRow(conn, result); + if (!row.isUpdatable()) { + throw DbException.get(ErrorCode.RESULT_SET_NOT_UPDATABLE); + } + return row; + } + + private int getColumnIndex(String columnLabel) { + checkClosed(); + if (columnLabel == null) { + throw DbException.getInvalidValueException("columnLabel", null); + } + if (columnCount >= 3) { + // use a hash table if more than 2 columns + if (columnLabelMap == null) { + HashMap map = new HashMap<>(columnCount); + // column labels have higher priority + for (int i = 0; i < columnCount; i++) { + String c = StringUtils.toUpperEnglish(result.getAlias(i)); + mapColumn(map, c, i); + } + for (int i = 0; i < columnCount; i++) { + String colName = result.getColumnName(i); + if (colName != null) { + colName = StringUtils.toUpperEnglish(colName); + mapColumn(map, colName, i); + String tabName = result.getTableName(i); + if 
(tabName != null) { + colName = StringUtils.toUpperEnglish(tabName) + "." + colName; + mapColumn(map, colName, i); + } + } + } + // assign at the end so concurrent access is supported + columnLabelMap = map; + if (preparedStatement != null) { + preparedStatement.setCachedColumnLabelMap(columnLabelMap); + } + } + Integer index = columnLabelMap.get(StringUtils.toUpperEnglish(columnLabel)); + if (index == null) { + throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1, columnLabel); + } + return index.intValue() + 1; + } + for (int i = 0; i < columnCount; i++) { + if (columnLabel.equalsIgnoreCase(result.getAlias(i))) { + return i + 1; + } + } + int idx = columnLabel.indexOf('.'); + if (idx > 0) { + String table = columnLabel.substring(0, idx); + String col = columnLabel.substring(idx + 1); + for (int i = 0; i < columnCount; i++) { + if (table.equalsIgnoreCase(result.getTableName(i)) && + col.equalsIgnoreCase(result.getColumnName(i))) { + return i + 1; + } + } + } else { + for (int i = 0; i < columnCount; i++) { + if (columnLabel.equalsIgnoreCase(result.getColumnName(i))) { + return i + 1; + } + } + } + throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1, columnLabel); + } + + private static void mapColumn(HashMap<String, Integer> map, String label, + int index) { + // put the index (usually that's the only operation) + Integer old = map.put(label, index); + if (old != null) { + // if there was a clash (which is seldom), + // put the old one back + map.put(label, old); + } + } + + private void checkColumnIndex(int columnIndex) { + checkClosed(); + if (columnIndex < 1 || columnIndex > columnCount) { + throw DbException.getInvalidValueException("columnIndex", columnIndex); + } + } + + /** + * Check if this result set is closed.
+ * + * @throws DbException if it is closed + */ + void checkClosed() { + if (result == null) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + if (stat != null) { + stat.checkClosed(); + } + if (conn != null) { + conn.checkClosed(); + } + } + + private boolean isOnValidRow() { + return result.getRowId() >= 0 && !result.isAfterLast(); + } + + private void checkOnValidRow() { + if (!isOnValidRow()) { + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + } + + /** + * INTERNAL + * + * @param columnIndex + * index of a column + * @return internal representation of the value in the specified column + */ + public Value get(int columnIndex) { + checkColumnIndex(columnIndex); + checkOnValidRow(); + Value[] list; + if (patchedRows == null) { + list = result.currentRow(); + } else { + list = patchedRows.get(result.getRowId()); + if (list == null) { + list = result.currentRow(); + } + } + Value value = list[columnIndex - 1]; + wasNull = value == ValueNull.INSTANCE; + return value; + } + + private Value get(String columnLabel) { + int columnIndex = getColumnIndex(columnLabel); + return get(columnIndex); + } + + private void update(String columnLabel, Value v) { + int columnIndex = getColumnIndex(columnLabel); + update(columnIndex, v); + } + + private void update(int columnIndex, Value v) { + checkUpdatable(); + checkColumnIndex(columnIndex); + if (insertRow != null) { + insertRow[columnIndex - 1] = v; + } else { + if (updateRow == null) { + updateRow = new Value[columnCount]; + } + updateRow[columnIndex - 1] = v; + } + } + + private boolean nextRow() { + if (result.isLazy() && stat.isCancelled()) { + throw DbException.get(ErrorCode.STATEMENT_WAS_CANCELED); + } + boolean next = result.next(); + if (!next && !scrollable) { + result.close(); + } + return next; + } + + private void resetResult() { + if (!scrollable) { + throw DbException.get(ErrorCode.RESULT_SET_NOT_SCROLLABLE); + } + result.reset(); + } + + /** + * [Not supported] Returns the value of the specified 
column as a row id. + * + * @param columnIndex (1,2,...) + */ + @Override + public RowId getRowId(int columnIndex) throws SQLException { + throw unsupported("rowId"); + } + + /** + * [Not supported] Returns the value of the specified column as a row id. + * + * @param columnLabel the column label + */ + @Override + public RowId getRowId(String columnLabel) throws SQLException { + throw unsupported("rowId"); + } + + /** + * [Not supported] Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + */ + @Override + public void updateRowId(int columnIndex, RowId x) throws SQLException { + throw unsupported("rowId"); + } + + /** + * [Not supported] Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + */ + @Override + public void updateRowId(String columnLabel, RowId x) throws SQLException { + throw unsupported("rowId"); + } + + /** + * Returns the current result set holdability. + * + * @return the holdability + * @throws SQLException if the connection is closed + */ + @Override + public int getHoldability() throws SQLException { + try { + debugCodeCall("getHoldability"); + checkClosed(); + return conn.getHoldability(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns whether this result set is closed. + * + * @return true if the result set is closed + */ + @Override + public boolean isClosed() throws SQLException { + try { + debugCodeCall("isClosed"); + return result == null; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) 
+ * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNString(int columnIndex, String x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateNString("+columnIndex+", "+quote(x)+");"); + } + update(columnIndex, x == null ? (Value) + ValueNull.INSTANCE : ValueString.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNString(String columnLabel, String x) throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateNString("+quote(columnLabel)+", "+quote(x)+");"); + } + update(columnLabel, x == null ? (Value) ValueNull.INSTANCE : + ValueString.get(x)); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] + */ + @Override + public void updateNClob(int columnIndex, NClob x) throws SQLException { + throw unsupported("NClob"); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNClob(int columnIndex, Reader x) throws SQLException { + updateClob(columnIndex, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param length the length + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNClob(int columnIndex, Reader x, long length) + throws SQLException { + updateClob(columnIndex, x, length); + } + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNClob(String columnLabel, Reader x) throws SQLException { + updateClob(columnLabel, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the length + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNClob(String columnLabel, Reader x, long length) + throws SQLException { + updateClob(columnLabel, x, length); + } + + /** + * [Not supported] + */ + @Override + public void updateNClob(String columnLabel, NClob x) throws SQLException { + throw unsupported("NClob"); + } + + /** + * Returns the value of the specified column as a Clob. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public NClob getNClob(int columnIndex) throws SQLException { + try { + int id = getNextId(TraceObject.CLOB); + if (isDebugEnabled()) { + debugCodeAssign("NClob", TraceObject.CLOB, id, "getNClob(" + columnIndex + ")"); + } + Value v = get(columnIndex); + return v == ValueNull.INSTANCE ? null : new JdbcClob(conn, v, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a Clob. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public NClob getNClob(String columnLabel) throws SQLException { + try { + int id = getNextId(TraceObject.CLOB); + if (isDebugEnabled()) { + debugCodeAssign("NClob", TraceObject.CLOB, id, "getNClob(" + columnLabel + ")"); + } + Value v = get(columnLabel); + return v == ValueNull.INSTANCE ? 
null : new JdbcClob(conn, v, id); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * [Not supported] Returns the value of the specified column as a SQLXML + * object. + */ + @Override + public SQLXML getSQLXML(int columnIndex) throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * [Not supported] Returns the value of the specified column as a SQLXML + * object. + */ + @Override + public SQLXML getSQLXML(String columnLabel) throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * [Not supported] Updates a column in the current or insert row. + */ + @Override + public void updateSQLXML(int columnIndex, SQLXML xmlObject) + throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * [Not supported] Updates a column in the current or insert row. + */ + @Override + public void updateSQLXML(String columnLabel, SQLXML xmlObject) + throws SQLException { + throw unsupported("SQLXML"); + } + + /** + * Returns the value of the specified column as a String. + * + * @param columnIndex (1,2,...) + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public String getNString(int columnIndex) throws SQLException { + try { + debugCodeCall("getNString", columnIndex); + return get(columnIndex).getString(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a String. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public String getNString(String columnLabel) throws SQLException { + try { + debugCodeCall("getNString", columnLabel); + return get(columnLabel).getString(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a reader. + * + * @param columnIndex (1,2,...) 
+ * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Reader getNCharacterStream(int columnIndex) throws SQLException { + try { + debugCodeCall("getNCharacterStream", columnIndex); + return get(columnIndex).getReader(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the value of the specified column as a reader. + * + * @param columnLabel the column label + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public Reader getNCharacterStream(String columnLabel) throws SQLException { + try { + debugCodeCall("getNCharacterStream", columnLabel); + return get(columnLabel).getReader(); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNCharacterStream(int columnIndex, Reader x) + throws SQLException { + updateNCharacterStream(columnIndex, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnIndex (1,2,...) + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNCharacterStream(int columnIndex, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateNCharacterStream("+columnIndex+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createClob(x, length); + update(columnIndex, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Updates a column in the current or insert row. 
+ * + * @param columnLabel the column label + * @param x the value + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNCharacterStream(String columnLabel, Reader x) + throws SQLException { + updateNCharacterStream(columnLabel, x, -1); + } + + /** + * Updates a column in the current or insert row. + * + * @param columnLabel the column label + * @param x the value + * @param length the number of characters + * @throws SQLException if the result set is closed or not updatable + */ + @Override + public void updateNCharacterStream(String columnLabel, Reader x, long length) + throws SQLException { + try { + if (isDebugEnabled()) { + debugCode("updateNCharacterStream("+quote(columnLabel)+", x, "+length+"L);"); + } + checkClosed(); + Value v = conn.createClob(x, length); + update(columnLabel, v); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Return an object of this class if possible. + * + * @param iface the class + * @return this + */ + @Override + @SuppressWarnings("unchecked") + public <T> T unwrap(Class<T> iface) throws SQLException { + try { + if (isWrapperFor(iface)) { + return (T) this; + } + throw DbException.getInvalidValueException("iface", iface); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if unwrap can return an object of this class. + * + * @param iface the class + * @return whether or not the interface is assignable from this class + */ + @Override + public boolean isWrapperFor(Class<?> iface) throws SQLException { + return iface != null && iface.isAssignableFrom(getClass()); + } + + /** + * Returns a column value as a Java object. The data is + * de-serialized into a Java object (on the client side). + * + * @param columnIndex the column index (1, 2, ...)
+ * @param type the class of the returned value + * @return the value + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public <T> T getObject(int columnIndex, Class<T> type) throws SQLException { + try { + if (type == null) { + throw DbException.getInvalidValueException("type", type); + } + debugCodeCall("getObject", columnIndex); + Value value = get(columnIndex); + return extractObjectOfType(type, value); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns a column value as a Java object. The data is + * de-serialized into a Java object (on the client side). + * + * @param columnName the column name + * @param type the class of the returned value + * @return the value + */ + @Override + public <T> T getObject(String columnName, Class<T> type) throws SQLException { + try { + if (type == null) { + throw DbException.getInvalidValueException("type", type); + } + debugCodeCall("getObject", columnName); + Value value = get(columnName); + return extractObjectOfType(type, value); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + private <T> T extractObjectOfType(Class<T> type, Value value) throws SQLException { + if (value == ValueNull.INSTANCE) { + return null; + } + if (type == BigDecimal.class) { + return type.cast(value.getBigDecimal()); + } else if (type == BigInteger.class) { + return type.cast(value.getBigDecimal().toBigInteger()); + } else if (type == String.class) { + return type.cast(value.getString()); + } else if (type == Boolean.class) { + return type.cast(value.getBoolean()); + } else if (type == Byte.class) { + return type.cast(value.getByte()); + } else if (type == Short.class) { + return type.cast(value.getShort()); + } else if (type == Integer.class) { + return type.cast(value.getInt()); + } else if (type == Long.class) { + return type.cast(value.getLong()); + } else if (type == Float.class) { + return type.cast(value.getFloat()); + } else if (type == Double.class) { + return
type.cast(value.getDouble()); + } else if (type == Date.class) { + return type.cast(value.getDate()); + } else if (type == Time.class) { + return type.cast(value.getTime()); + } else if (type == Timestamp.class) { + return type.cast(value.getTimestamp()); + } else if (type == java.util.Date.class) { + return type.cast(new java.util.Date(value.getTimestamp().getTime())); + } else if (type == Calendar.class) { + Calendar calendar = DateTimeUtils.createGregorianCalendar(); + calendar.setTime(value.getTimestamp()); + return type.cast(calendar); + } else if (type == UUID.class) { + return type.cast(value.getObject()); + } else if (type == byte[].class) { + return type.cast(value.getBytes()); + } else if (type == java.sql.Array.class) { + int id = getNextId(TraceObject.ARRAY); + return type.cast(value == ValueNull.INSTANCE ? null : new JdbcArray(conn, value, id)); + } else if (type == Blob.class) { + int id = getNextId(TraceObject.BLOB); + return type.cast(value == ValueNull.INSTANCE ? null : new JdbcBlob(conn, value, id)); + } else if (type == Clob.class) { + int id = getNextId(TraceObject.CLOB); + return type.cast(value == ValueNull.INSTANCE ? 
null : new JdbcClob(conn, value, id)); + } else if (type == TimestampWithTimeZone.class) { + return type.cast(value.getObject()); + } else if (DataType.isGeometryClass(type)) { + return type.cast(value.getObject()); + } else if (type == LocalDateTimeUtils.LOCAL_DATE) { + return type.cast(LocalDateTimeUtils.valueToLocalDate(value)); + } else if (type == LocalDateTimeUtils.LOCAL_TIME) { + return type.cast(LocalDateTimeUtils.valueToLocalTime(value)); + } else if (type == LocalDateTimeUtils.LOCAL_DATE_TIME) { + return type.cast(LocalDateTimeUtils.valueToLocalDateTime(value)); + } else if (type == LocalDateTimeUtils.INSTANT) { + return type.cast(LocalDateTimeUtils.valueToInstant(value)); + } else if (type == LocalDateTimeUtils.OFFSET_DATE_TIME) { + return type.cast(LocalDateTimeUtils.valueToOffsetDateTime(value)); + } else { + throw unsupported(type.getName()); + } + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": " + result; + } + + private void patchCurrentRow(Value[] row) { + boolean changed = false; + Value[] current = result.currentRow(); + CompareMode mode = conn.getCompareMode(); + for (int i = 0; i < row.length; i++) { + if (row[i].compareTo(current[i], mode) != 0) { + changed = true; + break; + } + } + if (patchedRows == null) { + patchedRows = new HashMap<>(); + } + Integer rowId = result.getRowId(); + if (!changed) { + patchedRows.remove(rowId); + } else { + patchedRows.put(rowId, row); + } + } + + private Value convertToUnknownValue(Object x) { + checkClosed(); + return DataType.convertToValue(conn.getSession(), x, Value.UNKNOWN); + } + + private void checkUpdatable() { + checkClosed(); + if (!updatable) { + throw DbException.get(ErrorCode.RESULT_SET_READONLY); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSetBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSetBackwardsCompat.java new file mode 100644 index 0000000000000..8310a2228ac92 --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSetBackwardsCompat.java @@ -0,0 +1,16 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +/** + * Allows us to compile on older platforms, while still implementing the methods + * from the newer JDBC API. + */ +public interface JdbcResultSetBackwardsCompat { + + // compatibility interface + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSetMetaData.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSetMetaData.java new file mode 100644 index 0000000000000..2718a8d37ef69 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcResultSetMetaData.java @@ -0,0 +1,489 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbc; + +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.message.TraceObject; +import org.h2.result.ResultInterface; +import org.h2.util.MathUtils; +import org.h2.value.DataType; + +/** + * Represents the meta data for a ResultSet. + */ +public class JdbcResultSetMetaData extends TraceObject implements + ResultSetMetaData { + + private final String catalog; + private final JdbcResultSet rs; + private final JdbcPreparedStatement prep; + private final ResultInterface result; + private final int columnCount; + + JdbcResultSetMetaData(JdbcResultSet rs, JdbcPreparedStatement prep, + ResultInterface result, String catalog, Trace trace, int id) { + setTrace(trace, TraceObject.RESULT_SET_META_DATA, id); + this.catalog = catalog; + this.rs = rs; + this.prep = prep; + this.result = result; + this.columnCount = result.getVisibleColumnCount(); + } + + /** + * Returns the number of columns. 
+ * + * @return the number of columns + * @throws SQLException if the result set is closed or invalid + */ + @Override + public int getColumnCount() throws SQLException { + try { + debugCodeCall("getColumnCount"); + checkClosed(); + return columnCount; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the column label. + * + * @param column the column index (1,2,...) + * @return the column label + * @throws SQLException if the result set is closed or invalid + */ + @Override + public String getColumnLabel(int column) throws SQLException { + try { + debugCodeCall("getColumnLabel", column); + checkColumnIndex(column); + return result.getAlias(--column); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the column name. + * + * @param column the column index (1,2,...) + * @return the column name + * @throws SQLException if the result set is closed or invalid + */ + @Override + public String getColumnName(int column) throws SQLException { + try { + debugCodeCall("getColumnName", column); + checkColumnIndex(column); + return result.getColumnName(--column); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the data type of a column. + * See also java.sql.Type. + * + * @param column the column index (1,2,...) + * @return the data type + * @throws SQLException if the result set is closed or invalid + */ + @Override + public int getColumnType(int column) throws SQLException { + try { + debugCodeCall("getColumnType", column); + checkColumnIndex(column); + int type = result.getColumnType(--column); + return DataType.convertTypeToSQLType(type); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the data type name of a column. + * + * @param column the column index (1,2,...) 
+ * @return the data type name + * @throws SQLException if the result set is closed or invalid + */ + @Override + public String getColumnTypeName(int column) throws SQLException { + try { + debugCodeCall("getColumnTypeName", column); + checkColumnIndex(column); + int type = result.getColumnType(--column); + return DataType.getDataType(type).name; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the schema name. + * + * @param column the column index (1,2,...) + * @return the schema name, or "" (an empty string) if not applicable + * @throws SQLException if the result set is closed or invalid + */ + @Override + public String getSchemaName(int column) throws SQLException { + try { + debugCodeCall("getSchemaName", column); + checkColumnIndex(column); + String schema = result.getSchemaName(--column); + return schema == null ? "" : schema; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the table name. + * + * @param column the column index (1,2,...) + * @return the table name + * @throws SQLException if the result set is closed or invalid + */ + @Override + public String getTableName(int column) throws SQLException { + try { + debugCodeCall("getTableName", column); + checkColumnIndex(column); + String table = result.getTableName(--column); + return table == null ? "" : table; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Returns the catalog name. + * + * @param column the column index (1,2,...) + * @return the catalog name + * @throws SQLException if the result set is closed or invalid + */ + @Override + public String getCatalogName(int column) throws SQLException { + try { + debugCodeCall("getCatalogName", column); + checkColumnIndex(column); + return catalog == null ? "" : catalog; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this is an autoincrement column. + * + * @param column the column index (1,2,...)
+ * @return true if the column is an autoincrement column + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isAutoIncrement(int column) throws SQLException { + try { + debugCodeCall("isAutoIncrement", column); + checkColumnIndex(column); + return result.isAutoIncrement(--column); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this column is case sensitive. + * It always returns true. + * + * @param column the column index (1,2,...) + * @return true + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isCaseSensitive(int column) throws SQLException { + try { + debugCodeCall("isCaseSensitive", column); + checkColumnIndex(column); + return true; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this column is searchable. + * It always returns true. + * + * @param column the column index (1,2,...) + * @return true + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isSearchable(int column) throws SQLException { + try { + debugCodeCall("isSearchable", column); + checkColumnIndex(column); + return true; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this is a currency column. + * It always returns false. + * + * @param column the column index (1,2,...) + * @return false + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isCurrency(int column) throws SQLException { + try { + debugCodeCall("isCurrency", column); + checkColumnIndex(column); + return false; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this is a nullable column. Returns + * ResultSetMetaData.columnNullableUnknown if this is not a column of a + * table. Otherwise, it returns ResultSetMetaData.columnNoNulls if the + * column is not nullable, and ResultSetMetaData.columnNullable if it is + * nullable.
+ * + * @param column the column index (1,2,...) + * @return ResultSetMetaData.column* + * @throws SQLException if the result set is closed or invalid + */ + @Override + public int isNullable(int column) throws SQLException { + try { + debugCodeCall("isNullable", column); + checkColumnIndex(column); + return result.getNullable(--column); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this column is signed. + * It always returns true. + * + * @param column the column index (1,2,...) + * @return true + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isSigned(int column) throws SQLException { + try { + debugCodeCall("isSigned", column); + checkColumnIndex(column); + return true; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if this column is read only. + * It always returns false. + * + * @param column the column index (1,2,...) + * @return false + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isReadOnly(int column) throws SQLException { + try { + debugCodeCall("isReadOnly", column); + checkColumnIndex(column); + return false; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks whether it is possible for a write on this column to succeed. + * It always returns true. + * + * @param column the column index (1,2,...) + * @return true + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isWritable(int column) throws SQLException { + try { + debugCodeCall("isWritable", column); + checkColumnIndex(column); + return true; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks whether a write on this column will definitely succeed. + * It always returns false. + * + * @param column the column index (1,2,...) 
+ * @return false + * @throws SQLException if the result set is closed or invalid + */ + @Override + public boolean isDefinitelyWritable(int column) throws SQLException { + try { + debugCodeCall("isDefinitelyWritable", column); + checkColumnIndex(column); + return false; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the Java class name of the object that will be returned + * if ResultSet.getObject is called. + * + * @param column the column index (1,2,...) + * @return the Java class name + * @throws SQLException if the result set is closed or invalid + */ + @Override + public String getColumnClassName(int column) throws SQLException { + try { + debugCodeCall("getColumnClassName", column); + checkColumnIndex(column); + int type = result.getColumnType(--column); + return DataType.getTypeClassName(type); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the precision for this column. + * + * @param column the column index (1,2,...) + * @return the precision + * @throws SQLException if the result set is closed or invalid + */ + @Override + public int getPrecision(int column) throws SQLException { + try { + debugCodeCall("getPrecision", column); + checkColumnIndex(column); + long prec = result.getColumnPrecision(--column); + return MathUtils.convertLongToInt(prec); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the scale for this column. + * + * @param column the column index (1,2,...) + * @return the scale + * @throws SQLException if the result set is closed or invalid + */ + @Override + public int getScale(int column) throws SQLException { + try { + debugCodeCall("getScale", column); + checkColumnIndex(column); + return result.getColumnScale(--column); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Gets the maximum display size for this column. + * + * @param column the column index (1,2,...) 
+ * @return the display size + * @throws SQLException if the result set is closed or invalid + */ + @Override + public int getColumnDisplaySize(int column) throws SQLException { + try { + debugCodeCall("getColumnDisplaySize", column); + checkColumnIndex(column); + return result.getDisplaySize(--column); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + private void checkClosed() { + if (rs != null) { + rs.checkClosed(); + } + if (prep != null) { + prep.checkClosed(); + } + } + + private void checkColumnIndex(int columnIndex) { + checkClosed(); + if (columnIndex < 1 || columnIndex > columnCount) { + throw DbException.getInvalidValueException("columnIndex", columnIndex); + } + } + + /** + * Return an object of this class if possible. + * + * @param iface the class + * @return this + */ + @Override + @SuppressWarnings("unchecked") + public <T> T unwrap(Class<T> iface) throws SQLException { + try { + if (isWrapperFor(iface)) { + return (T) this; + } + throw DbException.getInvalidValueException("iface", iface); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if unwrap can return an object of this class. + * + * @param iface the class + * @return whether or not the interface is assignable from this class + */ + @Override + public boolean isWrapperFor(Class<?> iface) throws SQLException { + return iface != null && iface.isAssignableFrom(getClass()); + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": columns=" + columnCount; + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcSQLException.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcSQLException.java new file mode 100644 index 0000000000000..b27010512438e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcSQLException.java @@ -0,0 +1,180 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.jdbc;
+
+import java.io.PrintStream;
+import java.io.PrintWriter;
+import java.sql.SQLException;
+
+import org.h2.engine.Constants;
+
+/**
+ * Represents a database exception.
+ */
+public class JdbcSQLException extends SQLException {
+
+    /**
+     * If the SQL statement contains this text, then it is never added to the
+     * SQL exception. Hiding the SQL statement may be important if it contains a
+     * password, such as a CREATE LINKED TABLE statement.
+     */
+    public static final String HIDE_SQL = "--hide--";
+
+    private static final long serialVersionUID = 1L;
+    private final String originalMessage;
+    private final Throwable cause;
+    private final String stackTrace;
+    private String message;
+    private String sql;
+
+    /**
+     * Creates an SQLException.
+     *
+     * @param message the reason
+     * @param sql the SQL statement
+     * @param state the SQL state
+     * @param errorCode the error code
+     * @param cause the exception that was the reason for this exception
+     * @param stackTrace the stack trace
+     */
+    public JdbcSQLException(String message, String sql, String state,
+            int errorCode, Throwable cause, String stackTrace) {
+        super(message, state, errorCode);
+        this.originalMessage = message;
+        setSQL(sql);
+        this.cause = cause;
+        this.stackTrace = stackTrace;
+        buildMessage();
+        initCause(cause);
+    }
+
+    /**
+     * Get the detail error message.
+     *
+     * @return the message
+     */
+    @Override
+    public String getMessage() {
+        return message;
+    }
+
+    /**
+     * INTERNAL
+     */
+    public String getOriginalMessage() {
+        return originalMessage;
+    }
+
+    /**
+     * Prints the stack trace to the standard error stream.
+     */
+    @Override
+    public void printStackTrace() {
+        // The default implementation already does that,
+        // but we do it again to avoid problems.
+        // If it is not implemented, somebody might implement it
+        // later on which would be a problem if done in the wrong way.
+ printStackTrace(System.err); + } + + /** + * Prints the stack trace to the specified print writer. + * + * @param s the print writer + */ + @Override + public void printStackTrace(PrintWriter s) { + if (s != null) { + super.printStackTrace(s); + // getNextException().printStackTrace(s) would be very very slow + // if many exceptions are joined + SQLException next = getNextException(); + for (int i = 0; i < 100 && next != null; i++) { + s.println(next.toString()); + next = next.getNextException(); + } + if (next != null) { + s.println("(truncated)"); + } + } + } + + /** + * Prints the stack trace to the specified print stream. + * + * @param s the print stream + */ + @Override + public void printStackTrace(PrintStream s) { + if (s != null) { + super.printStackTrace(s); + // getNextException().printStackTrace(s) would be very very slow + // if many exceptions are joined + SQLException next = getNextException(); + for (int i = 0; i < 100 && next != null; i++) { + s.println(next.toString()); + next = next.getNextException(); + } + if (next != null) { + s.println("(truncated)"); + } + } + } + + /** + * INTERNAL + */ + public Throwable getOriginalCause() { + return cause; + } + + /** + * Returns the SQL statement. + * SQL statements that contain '--hide--' are not listed. + * + * @return the SQL statement + */ + public String getSQL() { + return sql; + } + + /** + * INTERNAL + */ + public void setSQL(String sql) { + if (sql != null && sql.contains(HIDE_SQL)) { + sql = "-"; + } + this.sql = sql; + buildMessage(); + } + + private void buildMessage() { + StringBuilder buff = new StringBuilder(originalMessage == null ? + "- " : originalMessage); + if (sql != null) { + buff.append("; SQL statement:\n").append(sql); + } + buff.append(" [").append(getErrorCode()). 
+                append('-').append(Constants.BUILD_ID).append(']');
+        message = buff.toString();
+    }
+
+    /**
+     * Returns the class name, the message, and in the server mode, the stack
+     * trace of the server
+     *
+     * @return the string representation
+     */
+    @Override
+    public String toString() {
+        if (stackTrace == null) {
+            return super.toString();
+        }
+        return stackTrace;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcSavepoint.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcSavepoint.java
new file mode 100644
index 0000000000000..bc58dbfcf268e
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcSavepoint.java
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.jdbc;
+
+import java.sql.SQLException;
+import java.sql.Savepoint;
+
+import org.h2.api.ErrorCode;
+import org.h2.message.DbException;
+import org.h2.message.Trace;
+import org.h2.message.TraceObject;
+import org.h2.util.StringUtils;
+
+/**
+ * A savepoint is a point inside a transaction to where a transaction can be
+ * rolled back. The tasks that were done before the savepoint are not rolled
+ * back in this case.
+ */
+public class JdbcSavepoint extends TraceObject implements Savepoint {
+
+    private static final String SYSTEM_SAVEPOINT_PREFIX = "SYSTEM_SAVEPOINT_";
+
+    private final int savepointId;
+    private final String name;
+    private JdbcConnection conn;
+
+    JdbcSavepoint(JdbcConnection conn, int savepointId, String name,
+            Trace trace, int id) {
+        setTrace(trace, TraceObject.SAVEPOINT, id);
+        this.conn = conn;
+        this.savepointId = savepointId;
+        this.name = name;
+    }
+
+    /**
+     * Release this savepoint. This method only sets the connection to null and
+     * does not execute a statement.
+     */
+    void release() {
+        this.conn = null;
+    }
+
+    /**
+     * Get the savepoint name for this name or id.
+     * If the name is null, the id is used.
+ * + * @param name the name (may be null) + * @param id the id + * @return the savepoint name + */ + static String getName(String name, int id) { + if (name != null) { + return StringUtils.quoteJavaString(name); + } + return SYSTEM_SAVEPOINT_PREFIX + id; + } + + /** + * Roll back to this savepoint. + */ + void rollback() { + checkValid(); + conn.prepareCommand( + "ROLLBACK TO SAVEPOINT " + getName(name, savepointId), + Integer.MAX_VALUE).executeUpdate(false); + } + + private void checkValid() { + if (conn == null) { + throw DbException.get(ErrorCode.SAVEPOINT_IS_INVALID_1, + getName(name, savepointId)); + } + } + + /** + * Get the generated id of this savepoint. + * @return the id + */ + @Override + public int getSavepointId() throws SQLException { + try { + debugCodeCall("getSavepointId"); + checkValid(); + if (name != null) { + throw DbException.get(ErrorCode.SAVEPOINT_IS_NAMED); + } + return savepointId; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Get the name of this savepoint. + * @return the name + */ + @Override + public String getSavepointName() throws SQLException { + try { + debugCodeCall("getSavepointName"); + checkValid(); + if (name == null) { + throw DbException.get(ErrorCode.SAVEPOINT_IS_UNNAMED); + } + return name; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": id=" + savepointId + " name=" + name; + } + + +} diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcStatement.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcStatement.java new file mode 100644 index 0000000000000..b3f3966715943 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcStatement.java @@ -0,0 +1,1382 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
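JdbcSavepoint above derives the SQL-level savepoint name either from the quoted user-supplied name or from the `SYSTEM_SAVEPOINT_` prefix plus the generated id; `rollback()` then prepares `ROLLBACK TO SAVEPOINT <name>` from that. A simplified sketch of the naming rule; the quoting here is a stand-in for `org.h2.util.StringUtils.quoteJavaString`, not the real implementation:

```java
// Simplified sketch of JdbcSavepoint.getName(String, int). The quoting is a
// stand-in for org.h2.util.StringUtils.quoteJavaString and is illustrative.
public class SavepointNames {

    static final String SYSTEM_SAVEPOINT_PREFIX = "SYSTEM_SAVEPOINT_";

    /** Unnamed savepoints get a synthetic name built from the id. */
    static String getName(String name, int id) {
        if (name != null) {
            // H2 quotes the user-supplied name; a plain string quote is
            // used here for illustration only.
            return '"' + name.replace("\"", "\\\"") + '"';
        }
        return SYSTEM_SAVEPOINT_PREFIX + id;
    }
}
```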
+ * Initial Developer: H2 Group
+ */
+package org.h2.jdbc;
+
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.SQLWarning;
+import java.sql.Statement;
+import java.util.ArrayList;
+import org.h2.api.ErrorCode;
+import org.h2.command.CommandInterface;
+import org.h2.engine.SessionInterface;
+import org.h2.engine.SysProperties;
+import org.h2.message.DbException;
+import org.h2.message.TraceObject;
+import org.h2.result.ResultInterface;
+import org.h2.result.ResultWithGeneratedKeys;
+import org.h2.tools.SimpleResultSet;
+import org.h2.util.New;
+import org.h2.util.ParserUtil;
+import org.h2.util.StringUtils;
+
+/**
+ * Represents a statement.
+ */
+public class JdbcStatement extends TraceObject implements Statement, JdbcStatementBackwardsCompat {
+
+    protected JdbcConnection conn;
+    protected SessionInterface session;
+    protected JdbcResultSet resultSet;
+    protected int maxRows;
+    protected int fetchSize = SysProperties.SERVER_RESULT_SET_FETCH_SIZE;
+    protected int updateCount;
+    protected JdbcResultSet generatedKeys;
+    protected final int resultSetType;
+    protected final int resultSetConcurrency;
+    protected final boolean closedByResultSet;
+    private volatile CommandInterface executingCommand;
+    private int lastExecutedCommandType;
+    private ArrayList<String> batchCommands;
+    private boolean escapeProcessing = true;
+    private volatile boolean cancelled;
+
+    JdbcStatement(JdbcConnection conn, int id, int resultSetType,
+            int resultSetConcurrency, boolean closeWithResultSet) {
+        this.conn = conn;
+        this.session = conn.getSession();
+        setTrace(session.getTrace(), TraceObject.STATEMENT, id);
+        this.resultSetType = resultSetType;
+        this.resultSetConcurrency = resultSetConcurrency;
+        this.closedByResultSet = closeWithResultSet;
+    }
+
+    /**
+     * Executes a query (select statement) and returns the result set.
+     * If another result set exists for this statement, this will be closed
+     * (even if this statement fails).
+ * + * @param sql the SQL statement to execute + * @return the result set + */ + @Override + public ResultSet executeQuery(String sql) throws SQLException { + try { + int id = getNextId(TraceObject.RESULT_SET); + if (isDebugEnabled()) { + debugCodeAssign("ResultSet", TraceObject.RESULT_SET, id, + "executeQuery(" + quote(sql) + ")"); + } + synchronized (session) { + checkClosed(); + closeOldResultSet(); + sql = JdbcConnection.translateSQL(sql, escapeProcessing); + CommandInterface command = conn.prepareCommand(sql, fetchSize); + ResultInterface result; + boolean lazy = false; + boolean scrollable = resultSetType != ResultSet.TYPE_FORWARD_ONLY; + boolean updatable = resultSetConcurrency == ResultSet.CONCUR_UPDATABLE; + setExecutingStatement(command); + try { + result = command.executeQuery(maxRows, scrollable); + lazy = result.isLazy(); + } finally { + if (!lazy) { + setExecutingStatement(null); + } + } + if (!lazy) { + command.close(); + } + resultSet = new JdbcResultSet(conn, this, command, result, id, + closedByResultSet, scrollable, updatable); + } + return resultSet; + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Executes a statement (insert, update, delete, create, drop) + * and returns the update count. + * If another result set exists for this statement, this will be closed + * (even if this statement fails). + * + * If auto commit is on, this statement will be committed. + * If the statement is a DDL statement (create, drop, alter) and does not + * throw an exception, the current transaction (if any) is committed after + * executing the statement. 
+     *
+     * @param sql the SQL statement
+     * @return the update count (number of rows affected by an insert,
+     *         update or delete, or 0 if no rows or the statement was a
+     *         create, drop, commit or rollback)
+     * @throws SQLException if a database error occurred or a
+     *         select statement was executed
+     */
+    @Override
+    public int executeUpdate(String sql) throws SQLException {
+        try {
+            debugCodeCall("executeUpdate", sql);
+            return executeUpdateInternal(sql, false);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Executes a statement (insert, update, delete, create, drop)
+     * and returns the update count.
+     * If another result set exists for this statement, this will be closed
+     * (even if this statement fails).
+     *
+     * If auto commit is on, this statement will be committed.
+     * If the statement is a DDL statement (create, drop, alter) and does not
+     * throw an exception, the current transaction (if any) is committed after
+     * executing the statement.
+     *
+     * @param sql the SQL statement
+     * @return the update count (number of rows affected by an insert,
+     *         update or delete, or 0 if no rows or the statement was a
+     *         create, drop, commit or rollback)
+     * @throws SQLException if a database error occurred or a
+     *         select statement was executed
+     */
+    @Override
+    public long executeLargeUpdate(String sql) throws SQLException {
+        try {
+            debugCodeCall("executeLargeUpdate", sql);
+            return executeUpdateInternal(sql, false);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    private int executeUpdateInternal(String sql, Object generatedKeysRequest) throws SQLException {
+        checkClosedForWrite();
+        try {
+            closeOldResultSet();
+            sql = JdbcConnection.translateSQL(sql, escapeProcessing);
+            CommandInterface command = conn.prepareCommand(sql, fetchSize);
+            synchronized (session) {
+                setExecutingStatement(command);
+                try {
+                    ResultWithGeneratedKeys result = command.executeUpdate(
+                            conn.scopeGeneratedKeys() ?
false : generatedKeysRequest); + updateCount = result.getUpdateCount(); + ResultInterface gk = result.getGeneratedKeys(); + if (gk != null) { + int id = getNextId(TraceObject.RESULT_SET); + generatedKeys = new JdbcResultSet(conn, this, command, gk, id, + false, true, false); + } + } finally { + setExecutingStatement(null); + } + } + command.close(); + return updateCount; + } finally { + afterWriting(); + } + } + + /** + * Executes an arbitrary statement. If another result set exists for this + * statement, this will be closed (even if this statement fails). + * + * If the statement is a create or drop and does not throw an exception, the + * current transaction (if any) is committed after executing the statement. + * If auto commit is on, and the statement is not a select, this statement + * will be committed. + * + * @param sql the SQL statement to execute + * @return true if a result set is available, false if not + */ + @Override + public boolean execute(String sql) throws SQLException { + try { + debugCodeCall("execute", sql); + return executeInternal(sql, false); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + private boolean executeInternal(String sql, Object generatedKeysRequest) throws SQLException { + int id = getNextId(TraceObject.RESULT_SET); + checkClosedForWrite(); + try { + closeOldResultSet(); + sql = JdbcConnection.translateSQL(sql, escapeProcessing); + CommandInterface command = conn.prepareCommand(sql, fetchSize); + boolean lazy = false; + boolean returnsResultSet; + synchronized (session) { + setExecutingStatement(command); + try { + if (command.isQuery()) { + returnsResultSet = true; + boolean scrollable = resultSetType != ResultSet.TYPE_FORWARD_ONLY; + boolean updatable = resultSetConcurrency == ResultSet.CONCUR_UPDATABLE; + ResultInterface result = command.executeQuery(maxRows, scrollable); + lazy = result.isLazy(); + resultSet = new JdbcResultSet(conn, this, command, result, id, + closedByResultSet, scrollable, updatable); + } 
else {
+                        returnsResultSet = false;
+                        ResultWithGeneratedKeys result = command.executeUpdate(
+                                conn.scopeGeneratedKeys() ? false : generatedKeysRequest);
+                        updateCount = result.getUpdateCount();
+                        ResultInterface gk = result.getGeneratedKeys();
+                        if (gk != null) {
+                            generatedKeys = new JdbcResultSet(conn, this, command, gk, id,
+                                    false, true, false);
+                        }
+                    }
+                } finally {
+                    if (!lazy) {
+                        setExecutingStatement(null);
+                    }
+                }
+            }
+            if (!lazy) {
+                command.close();
+            }
+            return returnsResultSet;
+        } finally {
+            afterWriting();
+        }
+    }
+
+    /**
+     * Returns the last result set produced by this statement.
+     *
+     * @return the result set
+     */
+    @Override
+    public ResultSet getResultSet() throws SQLException {
+        try {
+            checkClosed();
+            if (resultSet != null) {
+                int id = resultSet.getTraceId();
+                debugCodeAssign("ResultSet", TraceObject.RESULT_SET, id, "getResultSet()");
+            } else {
+                debugCodeCall("getResultSet");
+            }
+            return resultSet;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Returns the last update count of this statement.
+     *
+     * @return the update count (number of rows affected by an insert, update or
+     *         delete, or 0 if no rows or the statement was a create, drop,
+     *         commit or rollback; -1 if the statement was a select).
+     * @throws SQLException if this object is closed or invalid
+     */
+    @Override
+    public int getUpdateCount() throws SQLException {
+        try {
+            debugCodeCall("getUpdateCount");
+            checkClosed();
+            return updateCount;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Returns the last update count of this statement.
+     *
+     * @return the update count (number of rows affected by an insert, update or
+     *         delete, or 0 if no rows or the statement was a create, drop,
+     *         commit or rollback; -1 if the statement was a select).
+     * @throws SQLException if this object is closed or invalid
+     */
+    @Override
+    public long getLargeUpdateCount() throws SQLException {
+        try {
+            debugCodeCall("getLargeUpdateCount");
+            checkClosed();
+            return updateCount;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Closes this statement.
+     * All result sets that were created by this statement
+     * become invalid after calling this method.
+     */
+    @Override
+    public void close() throws SQLException {
+        try {
+            debugCodeCall("close");
+            synchronized (session) {
+                closeOldResultSet();
+                if (conn != null) {
+                    conn = null;
+                }
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Returns the connection that created this object.
+     *
+     * @return the connection
+     */
+    @Override
+    public Connection getConnection() {
+        debugCodeCall("getConnection");
+        return conn;
+    }
+
+    /**
+     * Gets the first warning reported by calls on this object.
+     * This driver does not support warnings, and will always return null.
+     *
+     * @return null
+     */
+    @Override
+    public SQLWarning getWarnings() throws SQLException {
+        try {
+            debugCodeCall("getWarnings");
+            checkClosed();
+            return null;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Clears all warnings. As this driver does not support warnings,
+     * this call is ignored.
+     */
+    @Override
+    public void clearWarnings() throws SQLException {
+        try {
+            debugCodeCall("clearWarnings");
+            checkClosed();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the name of the cursor. This call is ignored.
+     *
+     * @param name ignored
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setCursorName(String name) throws SQLException {
+        try {
+            debugCodeCall("setCursorName", name);
+            checkClosed();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the fetch direction.
+     * This call is ignored by this driver.
+     *
+     * @param direction ignored
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setFetchDirection(int direction) throws SQLException {
+        try {
+            debugCodeCall("setFetchDirection", direction);
+            checkClosed();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the fetch direction.
+     *
+     * @return FETCH_FORWARD
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public int getFetchDirection() throws SQLException {
+        try {
+            debugCodeCall("getFetchDirection");
+            checkClosed();
+            return ResultSet.FETCH_FORWARD;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the maximum number of rows for a ResultSet.
+     *
+     * @return the number of rows where 0 means no limit
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public int getMaxRows() throws SQLException {
+        try {
+            debugCodeCall("getMaxRows");
+            checkClosed();
+            return maxRows;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the maximum number of rows for a ResultSet.
+     *
+     * @return the number of rows where 0 means no limit
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public long getLargeMaxRows() throws SQLException {
+        try {
+            debugCodeCall("getLargeMaxRows");
+            checkClosed();
+            return maxRows;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the maximum number of rows for a ResultSet.
+     *
+     * @param maxRows the number of rows where 0 means no limit
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setMaxRows(int maxRows) throws SQLException {
+        try {
+            debugCodeCall("setMaxRows", maxRows);
+            checkClosed();
+            if (maxRows < 0) {
+                throw DbException.getInvalidValueException("maxRows", maxRows);
+            }
+            this.maxRows = maxRows;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the maximum number of rows for a ResultSet.
+     *
+     * @param maxRows the number of rows where 0 means no limit
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setLargeMaxRows(long maxRows) throws SQLException {
+        try {
+            debugCodeCall("setLargeMaxRows", maxRows);
+            checkClosed();
+            if (maxRows < 0) {
+                throw DbException.getInvalidValueException("maxRows", maxRows);
+            }
+            this.maxRows = maxRows <= Integer.MAX_VALUE ? (int) maxRows : 0;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the number of rows suggested to read in one step.
+     * This value cannot be higher than the maximum rows (setMaxRows)
+     * set by the statement or prepared statement, otherwise an exception
+     * is thrown. Setting the value to 0 will set the default value.
+     * The default value can be changed using the system property
+     * h2.serverResultSetFetchSize.
+     *
+     * @param rows the number of rows
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setFetchSize(int rows) throws SQLException {
+        try {
+            debugCodeCall("setFetchSize", rows);
+            checkClosed();
+            if (rows < 0 || (rows > 0 && maxRows > 0 && rows > maxRows)) {
+                throw DbException.getInvalidValueException("rows", rows);
+            }
+            if (rows == 0) {
+                rows = SysProperties.SERVER_RESULT_SET_FETCH_SIZE;
+            }
+            fetchSize = rows;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the number of rows suggested to read in one step.
+     *
+     * @return the current fetch size
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public int getFetchSize() throws SQLException {
+        try {
+            debugCodeCall("getFetchSize");
+            checkClosed();
+            return fetchSize;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the result set concurrency created by this object.
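The fetch-size rules in `setFetchSize` above (non-negative, not above `maxRows` when a limit is set, and 0 restoring the default) can be sketched in isolation. The default constant here merely stands in for `SysProperties.SERVER_RESULT_SET_FETCH_SIZE`; its value is an assumption, since H2 makes it configurable:

```java
// Sketch of the setFetchSize validation above. DEFAULT_FETCH_SIZE stands in
// for SysProperties.SERVER_RESULT_SET_FETCH_SIZE; the value is illustrative.
public class FetchSizeRule {

    static final int DEFAULT_FETCH_SIZE = 512; // assumed default

    /** Returns the effective fetch size, or throws on an invalid request. */
    static int resolve(int rows, int maxRows) {
        // Negative sizes, or sizes above a configured row limit, are rejected.
        if (rows < 0 || (rows > 0 && maxRows > 0 && rows > maxRows)) {
            throw new IllegalArgumentException("Invalid fetch size: " + rows);
        }
        // Zero means "use the server default".
        return rows == 0 ? DEFAULT_FETCH_SIZE : rows;
    }
}
```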
+     *
+     * @return the concurrency
+     */
+    @Override
+    public int getResultSetConcurrency() throws SQLException {
+        try {
+            debugCodeCall("getResultSetConcurrency");
+            checkClosed();
+            return resultSetConcurrency;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the result set type.
+     *
+     * @return the type
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public int getResultSetType() throws SQLException {
+        try {
+            debugCodeCall("getResultSetType");
+            checkClosed();
+            return resultSetType;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Gets the maximum number of bytes for a result set column.
+     *
+     * @return always 0 for no limit
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public int getMaxFieldSize() throws SQLException {
+        try {
+            debugCodeCall("getMaxFieldSize");
+            checkClosed();
+            return 0;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the maximum number of bytes for a result set column.
+     * This method currently does nothing for this driver.
+     *
+     * @param max the maximum size - ignored
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setMaxFieldSize(int max) throws SQLException {
+        try {
+            debugCodeCall("setMaxFieldSize", max);
+            checkClosed();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Enables or disables processing of JDBC escape syntax.
+     * See also Connection.nativeSQL.
+     *
+     * @param enable - true (default) or false (no conversion is attempted)
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setEscapeProcessing(boolean enable) throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("setEscapeProcessing("+enable+");");
+            }
+            checkClosed();
+            escapeProcessing = enable;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Cancels a currently running statement.
+     * This method must be called from a different thread
+     * than the one executing the statement.
+     * Operations on large objects are not interrupted,
+     * only operations that process many rows.
+     *
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void cancel() throws SQLException {
+        try {
+            debugCodeCall("cancel");
+            checkClosed();
+            // executingCommand can be reset by another thread
+            CommandInterface c = executingCommand;
+            try {
+                if (c != null) {
+                    c.cancel();
+                    cancelled = true;
+                }
+            } finally {
+                setExecutingStatement(null);
+            }
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Check whether the statement was cancelled.
+     *
+     * @return true if yes
+     */
+    public boolean isCancelled() {
+        return cancelled;
+    }
+
+    /**
+     * Gets the current query timeout in seconds.
+     * This method will return 0 if no query timeout is set.
+     * The result is rounded to the next second.
+     * For performance reasons, only the first call to this method
+     * will query the database. If the query timeout was changed in another
+     * way than calling setQueryTimeout, this method will always return
+     * the last value.
+     *
+     * @return the timeout in seconds
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public int getQueryTimeout() throws SQLException {
+        try {
+            debugCodeCall("getQueryTimeout");
+            checkClosed();
+            return conn.getQueryTimeout();
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Sets the current query timeout in seconds.
+     * Changing the value will affect all statements of this connection.
+     * This method does not commit a transaction,
+     * and rolling back a transaction does not affect this setting.
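cancel() above works by flipping a flag on the executing command from a second thread while the first thread is still processing rows. The thread interaction can be sketched without a database; all names here are illustrative, not H2's:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Thread-interaction sketch of Statement.cancel(): one thread runs a long
// "command" loop, another requests cancellation. Names are illustrative.
public class CancelSketch {

    static final AtomicBoolean cancelRequested = new AtomicBoolean();

    /** Stands in for the long-running execute; stops once cancelled. */
    static int runUntilCancelled() throws InterruptedException {
        int rows = 0;
        while (!cancelRequested.get()) {
            rows++; // "process one row"
            Thread.sleep(1);
        }
        return rows;
    }

    public static void main(String[] args) throws Exception {
        // The cancel request must come from a different thread than the
        // one executing the statement, as the Javadoc above requires.
        Thread canceller = new Thread(() -> cancelRequested.set(true));
        canceller.start();
        int processed = runUntilCancelled();
        canceller.join();
        System.out.println("processed " + processed + " rows before cancel");
    }
}
```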
+     *
+     * @param seconds the timeout in seconds - 0 means no timeout, values
+     *        smaller than 0 will throw an exception
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public void setQueryTimeout(int seconds) throws SQLException {
+        try {
+            debugCodeCall("setQueryTimeout", seconds);
+            checkClosed();
+            if (seconds < 0) {
+                throw DbException.getInvalidValueException("seconds", seconds);
+            }
+            conn.setQueryTimeout(seconds);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Adds a statement to the batch.
+     *
+     * @param sql the SQL statement
+     */
+    @Override
+    public void addBatch(String sql) throws SQLException {
+        try {
+            debugCodeCall("addBatch", sql);
+            checkClosed();
+            sql = JdbcConnection.translateSQL(sql, escapeProcessing);
+            if (batchCommands == null) {
+                batchCommands = New.arrayList();
+            }
+            batchCommands.add(sql);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Clears the batch.
+     */
+    @Override
+    public void clearBatch() throws SQLException {
+        try {
+            debugCodeCall("clearBatch");
+            checkClosed();
+            batchCommands = null;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Executes the batch.
+     * If one of the batched statements fails, this database will continue.
+ * + * @return the array of update counts + */ + @Override + public int[] executeBatch() throws SQLException { + try { + debugCodeCall("executeBatch"); + checkClosedForWrite(); + try { + if (batchCommands == null) { + // TODO batch: check what other database do if no commands + // are set + batchCommands = New.arrayList(); + } + int size = batchCommands.size(); + int[] result = new int[size]; + boolean error = false; + SQLException next = null; + for (int i = 0; i < size; i++) { + String sql = batchCommands.get(i); + try { + result[i] = executeUpdateInternal(sql, false); + } catch (Exception re) { + SQLException e = logAndConvert(re); + if (next == null) { + next = e; + } else { + e.setNextException(next); + next = e; + } + result[i] = Statement.EXECUTE_FAILED; + error = true; + } + } + batchCommands = null; + if (error) { + throw new JdbcBatchUpdateException(next, result); + } + return result; + } finally { + afterWriting(); + } + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Executes the batch. + * If one of the batched statements fails, this database will continue. + * + * @return the array of update counts + */ + @Override + public long[] executeLargeBatch() throws SQLException { + int[] intResult = executeBatch(); + int count = intResult.length; + long[] longResult = new long[count]; + for (int i = 0; i < count; i++) { + longResult[i] = intResult[i]; + } + return longResult; + } + + /** + * Return a result set that contains the last generated auto-increment key + * for this connection, if there was one. If no key was generated by the + * last modification statement, then an empty result set is returned. + * The returned result set only contains the data for the very last row. 
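executeBatch above keeps going after a failure, records `Statement.EXECUTE_FAILED` for the failing entry, and chains the SQLExceptions newest-first. That policy can be sketched independently of the driver; the executor function below merely stands in for `executeUpdateInternal` and is illustrative:

```java
import java.sql.BatchUpdateException;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
import java.util.function.ToIntFunction;

// Sketch of the continue-on-failure batch policy above; the executor
// function stands in for executeUpdateInternal and is illustrative only.
public class BatchPolicy {

    static int[] run(List<String> commands, ToIntFunction<String> executor)
            throws SQLException {
        int[] result = new int[commands.size()];
        SQLException chain = null;
        for (int i = 0; i < commands.size(); i++) {
            try {
                result[i] = executor.applyAsInt(commands.get(i));
            } catch (RuntimeException e) {
                // Record the failure but keep executing the rest of the
                // batch, chaining exceptions newest-first as the code above.
                SQLException wrapped = new SQLException(e.getMessage(), e);
                wrapped.setNextException(chain);
                chain = wrapped;
                result[i] = Statement.EXECUTE_FAILED;
            }
        }
        if (chain != null) {
            // The standard java.sql type is used here in place of H2's
            // JdbcBatchUpdateException; it carries the per-entry counts.
            throw new BatchUpdateException(result, chain);
        }
        return result;
    }
}
```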
+     *
+     * @return the result set with one row and one column containing the key
+     * @throws SQLException if this object is closed
+     */
+    @Override
+    public ResultSet getGeneratedKeys() throws SQLException {
+        try {
+            int id = getNextId(TraceObject.RESULT_SET);
+            if (isDebugEnabled()) {
+                debugCodeAssign("ResultSet", TraceObject.RESULT_SET, id, "getGeneratedKeys()");
+            }
+            checkClosed();
+            if (!conn.scopeGeneratedKeys()) {
+                if (generatedKeys != null) {
+                    return generatedKeys;
+                }
+                if (session.isSupportsGeneratedKeys()) {
+                    return new SimpleResultSet();
+                }
+            }
+            // Compatibility mode or an old server, so use SCOPE_IDENTITY()
+            return conn.getGeneratedKeys(this, id);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Moves to the next result set - however there is always only one result
+     * set. This call also closes the current result set (if there is one).
+     * Since there is never a next result set, this method always returns
+     * false.
+     *
+     * @return false
+     * @throws SQLException if this object is closed.
+     */
+    @Override
+    public boolean getMoreResults() throws SQLException {
+        try {
+            debugCodeCall("getMoreResults");
+            checkClosed();
+            closeOldResultSet();
+            return false;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Move to the next result set.
+     * This method always returns false.
+     *
+     * @param current Statement.CLOSE_CURRENT_RESULT,
+     *            Statement.KEEP_CURRENT_RESULT,
+     *            or Statement.CLOSE_ALL_RESULTS
+     * @return false
+     */
+    @Override
+    public boolean getMoreResults(int current) throws SQLException {
+        try {
+            debugCodeCall("getMoreResults", current);
+            switch (current) {
+            case Statement.CLOSE_CURRENT_RESULT:
+            case Statement.CLOSE_ALL_RESULTS:
+                checkClosed();
+                closeOldResultSet();
+                break;
+            case Statement.KEEP_CURRENT_RESULT:
+                // nothing to do
+                break;
+            default:
+                throw DbException.getInvalidValueException("current", current);
+            }
+            return false;
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Executes a statement and returns the update count.
+     *
+     * @param sql the SQL statement
+     * @param autoGeneratedKeys
+     *            {@link Statement#RETURN_GENERATED_KEYS} if generated keys should
+     *            be available for retrieval, {@link Statement#NO_GENERATED_KEYS} if
+     *            generated keys should not be available
+     * @return the update count (number of rows affected by an insert,
+     *         update or delete, or 0 if no rows or the statement was a
+     *         create, drop, commit or rollback)
+     * @throws SQLException if a database error occurred or a
+     *         select statement was executed
+     */
+    @Override
+    public int executeUpdate(String sql, int autoGeneratedKeys)
+            throws SQLException {
+        try {
+            if (isDebugEnabled()) {
+                debugCode("executeUpdate("+quote(sql)+", "+autoGeneratedKeys+");");
+            }
+            return executeUpdateInternal(sql, autoGeneratedKeys == RETURN_GENERATED_KEYS);
+        } catch (Exception e) {
+            throw logAndConvert(e);
+        }
+    }
+
+    /**
+     * Executes a statement and returns the update count.
+ *
+ * @param sql the SQL statement
+ * @param autoGeneratedKeys
+ * {@link Statement#RETURN_GENERATED_KEYS} if generated keys should
+ * be available for retrieval, {@link Statement#NO_GENERATED_KEYS} if
+ * generated keys should not be available
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ @Override
+ public long executeLargeUpdate(String sql, int autoGeneratedKeys) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("executeLargeUpdate("+quote(sql)+", "+autoGeneratedKeys+");");
+ }
+ return executeUpdateInternal(sql, autoGeneratedKeys == RETURN_GENERATED_KEYS);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Executes a statement and returns the update count.
+ *
+ * @param sql the SQL statement
+ * @param columnIndexes
+ * an array of column indexes indicating the columns with generated
+ * keys that should be returned from the inserted row
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ @Override
+ public int executeUpdate(String sql, int[] columnIndexes) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("executeUpdate("+quote(sql)+", "+quoteIntArray(columnIndexes)+");");
+ }
+ return executeUpdateInternal(sql, columnIndexes);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Executes a statement and returns the update count.
+ *
+ * @param sql the SQL statement
+ * @param columnIndexes
+ * an array of column indexes indicating the columns with generated
+ * keys that should be returned from the inserted row
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ @Override
+ public long executeLargeUpdate(String sql, int[] columnIndexes) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("executeLargeUpdate("+quote(sql)+", "+quoteIntArray(columnIndexes)+");");
+ }
+ return executeUpdateInternal(sql, columnIndexes);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Executes a statement and returns the update count.
+ *
+ * @param sql the SQL statement
+ * @param columnNames
+ * an array of column names indicating the columns with generated
+ * keys that should be returned from the inserted row
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ @Override
+ public int executeUpdate(String sql, String[] columnNames) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("executeUpdate("+quote(sql)+", "+quoteArray(columnNames)+");");
+ }
+ return executeUpdateInternal(sql, columnNames);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Executes a statement and returns the update count.
+ *
+ * @param sql the SQL statement
+ * @param columnNames
+ * an array of column names indicating the columns with generated
+ * keys that should be returned from the inserted row
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ @Override
+ public long executeLargeUpdate(String sql, String[] columnNames) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("executeLargeUpdate("+quote(sql)+", "+quoteArray(columnNames)+");");
+ }
+ return executeUpdateInternal(sql, columnNames);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Executes a statement and returns whether the first result is a result
+ * set.
+ *
+ * @param sql the SQL statement
+ * @param autoGeneratedKeys
+ * {@link Statement#RETURN_GENERATED_KEYS} if generated keys should
+ * be available for retrieval, {@link Statement#NO_GENERATED_KEYS} if
+ * generated keys should not be available
+ * @return true if the first result is a result set, false otherwise
+ * @throws SQLException if a database error occurred
+ */
+ @Override
+ public boolean execute(String sql, int autoGeneratedKeys) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("execute("+quote(sql)+", "+autoGeneratedKeys+");");
+ }
+ return executeInternal(sql, autoGeneratedKeys == RETURN_GENERATED_KEYS);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Executes a statement and returns whether the first result is a result
+ * set.
+ *
+ * @param sql the SQL statement
+ * @param columnIndexes
+ * an array of column indexes indicating the columns with generated
+ * keys that should be returned from the inserted row
+ * @return true if the first result is a result set, false otherwise
+ * @throws SQLException if a database error occurred
+ */
+ @Override
+ public boolean execute(String sql, int[] columnIndexes) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("execute("+quote(sql)+", "+quoteIntArray(columnIndexes)+");");
+ }
+ return executeInternal(sql, columnIndexes);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Executes a statement and returns whether the first result is a result
+ * set.
+ *
+ * @param sql the SQL statement
+ * @param columnNames
+ * an array of column names indicating the columns with generated
+ * keys that should be returned from the inserted row
+ * @return true if the first result is a result set, false otherwise
+ * @throws SQLException if a database error occurred
+ */
+ @Override
+ public boolean execute(String sql, String[] columnNames) throws SQLException {
+ try {
+ if (isDebugEnabled()) {
+ debugCode("execute("+quote(sql)+", "+quoteArray(columnNames)+");");
+ }
+ return executeInternal(sql, columnNames);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Gets the result set holdability.
+ *
+ * @return the holdability
+ */
+ @Override
+ public int getResultSetHoldability() throws SQLException {
+ try {
+ debugCodeCall("getResultSetHoldability");
+ checkClosed();
+ return ResultSet.HOLD_CURSORS_OVER_COMMIT;
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * [Not supported]
+ */
+ @Override
+ public void closeOnCompletion() {
+ // not supported
+ }
+
+ /**
+ * [Not supported]
+ */
+ @Override
+ public boolean isCloseOnCompletion() {
+ return true;
+ }
+
+ // =============================================================
+
+ /**
+ * Check if this connection is closed.
+ * The next operation is a read request.
+ *
+ * @return true if the session was re-connected
+ * @throws DbException if the connection or session is closed
+ */
+ boolean checkClosed() {
+ return checkClosed(false);
+ }
+
+ /**
+ * Check if this connection is closed.
+ * The next operation may be a write request.
+ *
+ * @return true if the session was re-connected
+ * @throws DbException if the connection or session is closed
+ */
+ boolean checkClosedForWrite() {
+ return checkClosed(true);
+ }
+
+ /**
+ * INTERNAL.
+ * Check if the statement is closed.
+ *
+ * @param write if the next operation is possibly writing
+ * @return true if a reconnect was required
+ * @throws DbException if it is closed
+ */
+ protected boolean checkClosed(boolean write) {
+ if (conn == null) {
+ throw DbException.get(ErrorCode.OBJECT_CLOSED);
+ }
+ conn.checkClosed(write);
+ SessionInterface s = conn.getSession();
+ if (s != session) {
+ session = s;
+ trace = session.getTrace();
+ return true;
+ }
+ return false;
+ }
+
+ /**
+ * Called after each write operation.
+ */
+ void afterWriting() {
+ if (conn != null) {
+ conn.afterWriting();
+ }
+ }
+
+ /**
+ * INTERNAL.
+ * Close an old result set if there is still one open.
+ */
+ protected void closeOldResultSet() throws SQLException {
+ try {
+ if (!closedByResultSet) {
+ if (resultSet != null) {
+ resultSet.closeInternal();
+ }
+ if (generatedKeys != null) {
+ generatedKeys.closeInternal();
+ }
+ }
+ } finally {
+ cancelled = false;
+ resultSet = null;
+ updateCount = -1;
+ generatedKeys = null;
+ }
+ }
+
+ /**
+ * INTERNAL.
+ * Set the statement that is currently running.
+ *
+ * @param c the command
+ */
+ protected void setExecutingStatement(CommandInterface c) {
+ if (c == null) {
+ conn.setExecutingStatement(null);
+ } else {
+ conn.setExecutingStatement(this);
+ lastExecutedCommandType = c.getCommandType();
+ }
+ executingCommand = c;
+ }
+
+ /**
+ * Called when the result set is closed.
+ *
+ * @param command the command
+ * @param closeCommand whether to close the command
+ */
+ void onLazyResultSetClose(CommandInterface command, boolean closeCommand) {
+ setExecutingStatement(null);
+ command.stop();
+ if (closeCommand) {
+ command.close();
+ }
+ }
+
+ /**
+ * INTERNAL.
+ * Get the command type of the last executed command.
+ *
+ * @return the command type
+ */
+ public int getLastExecutedCommandType() {
+ return lastExecutedCommandType;
+ }
+
+ /**
+ * Returns whether this statement is closed.
+ *
+ * @return true if the statement is closed
+ */
+ @Override
+ public boolean isClosed() throws SQLException {
+ try {
+ debugCodeCall("isClosed");
+ return conn == null;
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Return an object of this class if possible.
+ *
+ * @param iface the class
+ * @return this
+ */
+ @Override
+ @SuppressWarnings("unchecked")
+ public <T> T unwrap(Class<T> iface) throws SQLException {
+ try {
+ if (isWrapperFor(iface)) {
+ return (T) this;
+ }
+ throw DbException.getInvalidValueException("iface", iface);
+ } catch (Exception e) {
+ throw logAndConvert(e);
+ }
+ }
+
+ /**
+ * Checks if unwrap can return an object of this class.
+ *
+ * @param iface the class
+ * @return whether or not the interface is assignable from this class
+ */
+ @Override
+ public boolean isWrapperFor(Class<?> iface) throws SQLException {
+ return iface != null && iface.isAssignableFrom(getClass());
+ }
+
+ /**
+ * Returns whether this object is poolable.
+ * @return false
+ */
+ @Override
+ public boolean isPoolable() {
+ debugCodeCall("isPoolable");
+ return false;
+ }
+
+ /**
+ * Requests that this object should be pooled or not.
+ * This call is ignored.
+ *
+ * @param poolable the requested value
+ */
+ @Override
+ public void setPoolable(boolean poolable) {
+ if (isDebugEnabled()) {
+ debugCode("setPoolable("+poolable+");");
+ }
+ }
+
+ /**
+ * @param identifier
+ * identifier to quote if required
+ * @param alwaysQuote
+ * if {@code true} identifier will be quoted unconditionally
+ * @return specified identifier, quoted if required or explicitly requested
+ */
+ @Override
+ public String enquoteIdentifier(String identifier, boolean alwaysQuote) throws SQLException {
+ if (alwaysQuote || !isSimpleIdentifier(identifier)) {
+ return StringUtils.quoteIdentifier(identifier);
+ }
+ return identifier;
+ }
+
+ /**
+ * @param identifier
+ * identifier to check
+ * @return whether the specified identifier may be used without quotes
+ */
+ @Override
+ public boolean isSimpleIdentifier(String identifier) throws SQLException {
+ return ParserUtil.isSimpleIdentifier(identifier, true);
+ }
+
+ /**
+ * INTERNAL
+ */
+ @Override
+ public String toString() {
+ return getTraceObjectName();
+ }
+
+}
+
diff --git a/modules/h2/src/main/java/org/h2/jdbc/JdbcStatementBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbc/JdbcStatementBackwardsCompat.java
new file mode 100644
index 0000000000000..4d5137a122be7
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/jdbc/JdbcStatementBackwardsCompat.java
@@ -0,0 +1,140 @@
+/*
+ * Copyright 2004-2018 H2 Group.
Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.jdbc;
+
+import java.sql.SQLException;
+
+/**
+ * Allows us to compile on older platforms, while still implementing the methods
+ * from the newer JDBC API.
+ */
+public interface JdbcStatementBackwardsCompat {
+
+ // compatibility interface
+
+ // JDBC 4.2
+
+ /**
+ * Returns the last update count of this statement.
+ *
+ * @return the update count (number of rows affected by an insert, update or
+ * delete, or 0 if no rows or the statement was a create, drop,
+ * commit or rollback; -1 if the statement was a select).
+ * @throws SQLException if this object is closed or invalid
+ */
+ long getLargeUpdateCount() throws SQLException;
+
+ /**
+ * Sets the maximum number of rows for a ResultSet.
+ *
+ * @param max the number of rows where 0 means no limit
+ * @throws SQLException if this object is closed
+ */
+ void setLargeMaxRows(long max) throws SQLException;
+
+ /**
+ * Gets the maximum number of rows for a ResultSet.
+ *
+ * @return the number of rows where 0 means no limit
+ * @throws SQLException if this object is closed
+ */
+ long getLargeMaxRows() throws SQLException;
+
+ /**
+ * Executes the batch.
+ * If one of the batched statements fails, this database will continue.
+ *
+ * @return the array of update counts
+ */
+ long[] executeLargeBatch() throws SQLException;
+
+ /**
+ * Executes a statement (insert, update, delete, create, drop)
+ * and returns the update count.
+ * If another result set exists for this statement, this will be closed
+ * (even if this statement fails).
+ *
+ * If auto commit is on, this statement will be committed.
+ * If the statement is a DDL statement (create, drop, alter) and does not
+ * throw an exception, the current transaction (if any) is committed after
+ * executing the statement.
+ *
+ * @param sql the SQL statement
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ long executeLargeUpdate(String sql) throws SQLException;
+
+ /**
+ * Executes a statement and returns the update count.
+ * This method just calls executeUpdate(String sql) internally.
+ * The method getGeneratedKeys supports at most one column and row.
+ *
+ * @param sql the SQL statement
+ * @param autoGeneratedKeys ignored
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ long executeLargeUpdate(String sql, int autoGeneratedKeys) throws SQLException;
+
+ /**
+ * Executes a statement and returns the update count.
+ * This method just calls executeUpdate(String sql) internally.
+ * The method getGeneratedKeys supports at most one column and row.
+ *
+ * @param sql the SQL statement
+ * @param columnIndexes ignored
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ long executeLargeUpdate(String sql, int[] columnIndexes) throws SQLException;
+
+ /**
+ * Executes a statement and returns the update count.
+ * This method just calls executeUpdate(String sql) internally.
+ * The method getGeneratedKeys supports at most one column and row.
+ *
+ * @param sql the SQL statement
+ * @param columnNames ignored
+ * @return the update count (number of rows affected by an insert,
+ * update or delete, or 0 if no rows or the statement was a
+ * create, drop, commit or rollback)
+ * @throws SQLException if a database error occurred or a
+ * select statement was executed
+ */
+ long executeLargeUpdate(String sql, String[] columnNames) throws SQLException;
+
+ // JDBC 4.3 (incomplete)
+
+ /**
+ * Enquotes the specified identifier.
+ *
+ * @param identifier
+ * identifier to quote if required
+ * @param alwaysQuote
+ * if {@code true} identifier will be quoted unconditionally
+ * @return specified identifier, quoted if required or explicitly requested
+ */
+ String enquoteIdentifier(String identifier, boolean alwaysQuote) throws SQLException;
+
+ /**
+ * Checks if specified identifier may be used without quotes.
+ *
+ * @param identifier
+ * identifier to check
+ * @return whether the specified identifier may be used without quotes
+ */
+ boolean isSimpleIdentifier(String identifier) throws SQLException;
+}
diff --git a/modules/h2/src/main/java/org/h2/jdbc/package.html b/modules/h2/src/main/java/org/h2/jdbc/package.html
new file mode 100644
index 0000000000000..2e630781d047a
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/jdbc/package.html
@@ -0,0 +1,14 @@
+
+
+
+
+Javadoc package documentation
+

    + +Implementation of the JDBC API (package java.sql). + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/jdbcx/JdbcConnectionPool.java b/modules/h2/src/main/java/org/h2/jdbcx/JdbcConnectionPool.java new file mode 100644 index 0000000000000..c8a0e49cb5909 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbcx/JdbcConnectionPool.java @@ -0,0 +1,340 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Christian d'Heureuse, www.source-code.biz + * + * This class is multi-licensed under LGPL, MPL 2.0, and EPL 1.0. + * + * This module is free software: you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation, either + * version 3 of the License, or (at your option) any later version. + * See http://www.gnu.org/licenses/lgpl.html + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied + * warranty of MERCHANTABILITY or FITNESS FOR A + * PARTICULAR PURPOSE. See the GNU Lesser General Public + * License for more details. + */ +package org.h2.jdbcx; + +import java.io.PrintWriter; +import java.sql.Connection; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.concurrent.TimeUnit; +import java.util.logging.Logger; +import javax.sql.ConnectionEvent; +import javax.sql.ConnectionEventListener; +import javax.sql.ConnectionPoolDataSource; +import javax.sql.DataSource; +import javax.sql.PooledConnection; +import org.h2.message.DbException; +import org.h2.util.New; + +/** + * A simple standalone JDBC connection pool. + * It is based on the + * + * MiniConnectionPoolManager written by Christian d'Heureuse (Java 1.5) + * . It is used as follows: + *
    + * import java.sql.*;
    + * import org.h2.jdbcx.JdbcConnectionPool;
    + * public class Test {
    + *     public static void main(String... args) throws Exception {
    + *         JdbcConnectionPool cp = JdbcConnectionPool.create(
    + *             "jdbc:h2:~/test", "sa", "sa");
    + *         for (String sql : args) {
    + *             Connection conn = cp.getConnection();
    + *             conn.createStatement().execute(sql);
    + *             conn.close();
    + *         }
    + *         cp.dispose();
    + *     }
    + * }
    + * 
+ *
+ * @author Christian d'Heureuse
+ * (www.source-code.biz)
+ * @author Thomas Mueller
+ */
+public class JdbcConnectionPool implements DataSource, ConnectionEventListener,
+ JdbcConnectionPoolBackwardsCompat {
+
+ private static final int DEFAULT_TIMEOUT = 30;
+ private static final int DEFAULT_MAX_CONNECTIONS = 10;
+
+ private final ConnectionPoolDataSource dataSource;
+ private final ArrayList<PooledConnection> recycledConnections = New.arrayList();
+ private PrintWriter logWriter;
+ private int maxConnections = DEFAULT_MAX_CONNECTIONS;
+ private int timeout = DEFAULT_TIMEOUT;
+ private int activeConnections;
+ private boolean isDisposed;
+
+ protected JdbcConnectionPool(ConnectionPoolDataSource dataSource) {
+ this.dataSource = dataSource;
+ if (dataSource != null) {
+ try {
+ logWriter = dataSource.getLogWriter();
+ } catch (SQLException e) {
+ // ignore
+ }
+ }
+ }
+
+ /**
+ * Constructs a new connection pool.
+ *
+ * @param dataSource the data source to create connections
+ * @return the connection pool
+ */
+ public static JdbcConnectionPool create(ConnectionPoolDataSource dataSource) {
+ return new JdbcConnectionPool(dataSource);
+ }
+
+ /**
+ * Constructs a new connection pool for H2 databases.
+ *
+ * @param url the database URL of the H2 connection
+ * @param user the user name
+ * @param password the password
+ * @return the connection pool
+ */
+ public static JdbcConnectionPool create(String url, String user,
+ String password) {
+ JdbcDataSource ds = new JdbcDataSource();
+ ds.setURL(url);
+ ds.setUser(user);
+ ds.setPassword(password);
+ return new JdbcConnectionPool(ds);
+ }
+
+ /**
+ * Sets the maximum number of connections to use from now on.
+ * The default value is 10 connections.
+ *
+ * @param max the maximum number of connections
+ */
+ public synchronized void setMaxConnections(int max) {
+ if (max < 1) {
+ throw new IllegalArgumentException("Invalid maxConnections value: " + max);
+ }
+ this.maxConnections = max;
+ // notify waiting threads if the value was increased
+ notifyAll();
+ }
+
+ /**
+ * Gets the maximum number of connections to use.
+ *
+ * @return the maximum number of connections
+ */
+ public synchronized int getMaxConnections() {
+ return maxConnections;
+ }
+
+ /**
+ * Gets the maximum time in seconds to wait for a free connection.
+ *
+ * @return the timeout in seconds
+ */
+ @Override
+ public synchronized int getLoginTimeout() {
+ return timeout;
+ }
+
+ /**
+ * Sets the maximum time in seconds to wait for a free connection.
+ * The default timeout is 30 seconds. Calling this method with the
+ * value 0 will set the timeout to the default value.
+ *
+ * @param seconds the timeout, 0 meaning the default
+ */
+ @Override
+ public synchronized void setLoginTimeout(int seconds) {
+ if (seconds == 0) {
+ seconds = DEFAULT_TIMEOUT;
+ }
+ this.timeout = seconds;
+ }
+
+ /**
+ * Closes all unused pooled connections.
+ * Exceptions while closing are written to the log stream (if set).
+ */
+ public synchronized void dispose() {
+ if (isDisposed) {
+ return;
+ }
+ isDisposed = true;
+ for (PooledConnection pc : recycledConnections) {
+ closeConnection(pc);
+ }
+ }
+
+ /**
+ * Retrieves a connection from the connection pool. If
+ * maxConnections connections are already in use, the method
+ * waits until a connection becomes available or timeout
+ * seconds have elapsed. When the application is finished using the
+ * connection, it must close it in order to return it to the pool.
+ * If no connection becomes available within the given timeout, an exception
+ * with SQL state 08001 and vendor code 8001 is thrown.
+ *
+ * @return a new Connection object.
+ * @throws SQLException when a new connection could not be established, + * or a timeout occurred + */ + @Override + public Connection getConnection() throws SQLException { + long max = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeout); + do { + synchronized (this) { + if (activeConnections < maxConnections) { + return getConnectionNow(); + } + try { + wait(1000); + } catch (InterruptedException e) { + // ignore + } + } + } while (System.nanoTime() <= max); + throw new SQLException("Login timeout", "08001", 8001); + } + + /** + * INTERNAL + */ + @Override + public Connection getConnection(String user, String password) { + throw new UnsupportedOperationException(); + } + + private Connection getConnectionNow() throws SQLException { + if (isDisposed) { + throw new IllegalStateException("Connection pool has been disposed."); + } + PooledConnection pc; + if (!recycledConnections.isEmpty()) { + pc = recycledConnections.remove(recycledConnections.size() - 1); + } else { + pc = dataSource.getPooledConnection(); + } + Connection conn = pc.getConnection(); + activeConnections++; + pc.addConnectionEventListener(this); + return conn; + } + + /** + * This method usually puts the connection back into the pool. There are + * some exceptions: if the pool is disposed, the connection is disposed as + * well. If the pool is full, the connection is closed. 
+ *
+ * @param pc the pooled connection
+ */
+ synchronized void recycleConnection(PooledConnection pc) {
+ if (activeConnections <= 0) {
+ throw new AssertionError();
+ }
+ activeConnections--;
+ if (!isDisposed && activeConnections < maxConnections) {
+ recycledConnections.add(pc);
+ } else {
+ closeConnection(pc);
+ }
+ if (activeConnections >= maxConnections - 1) {
+ notifyAll();
+ }
+ }
+
+ private void closeConnection(PooledConnection pc) {
+ try {
+ pc.close();
+ } catch (SQLException e) {
+ if (logWriter != null) {
+ e.printStackTrace(logWriter);
+ }
+ }
+ }
+
+ /**
+ * INTERNAL
+ */
+ @Override
+ public void connectionClosed(ConnectionEvent event) {
+ PooledConnection pc = (PooledConnection) event.getSource();
+ pc.removeConnectionEventListener(this);
+ recycleConnection(pc);
+ }
+
+ /**
+ * INTERNAL
+ */
+ @Override
+ public void connectionErrorOccurred(ConnectionEvent event) {
+ // not used
+ }
+
+ /**
+ * Returns the number of active (open) connections of this pool. This is the
+ * number of Connection objects that have been issued by
+ * getConnection() for which Connection.close() has
+ * not yet been called.
+ *
+ * @return the number of active connections.
+ */
+ public synchronized int getActiveConnections() {
+ return activeConnections;
+ }
+
+ /**
+ * INTERNAL
+ */
+ @Override
+ public PrintWriter getLogWriter() {
+ return logWriter;
+ }
+
+ /**
+ * INTERNAL
+ */
+ @Override
+ public void setLogWriter(PrintWriter logWriter) {
+ this.logWriter = logWriter;
+ }
+
+ /**
+ * [Not supported] Return an object of this class if possible.
+ *
+ * @param iface the class
+ */
+ @Override
+ public <T> T unwrap(Class<T> iface) throws SQLException {
+ throw DbException.getUnsupportedException("unwrap");
+ }
+
+ /**
+ * [Not supported] Checks if unwrap can return an object of this class.
+ *
+ * @param iface the class
+ */
+ @Override
+ public boolean isWrapperFor(Class<?> iface) throws SQLException {
+ throw DbException.getUnsupportedException("isWrapperFor");
+ }
+
+ /**
+ * [Not supported]
+ */
+ @Override
+ public Logger getParentLogger() {
+ return null;
+ }
+
+
+}
diff --git a/modules/h2/src/main/java/org/h2/jdbcx/JdbcConnectionPoolBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbcx/JdbcConnectionPoolBackwardsCompat.java
new file mode 100644
index 0000000000000..09f0bff2a3b21
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/jdbcx/JdbcConnectionPoolBackwardsCompat.java
@@ -0,0 +1,16 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.jdbcx;
+
+/**
+ * Allows us to compile on older platforms, while still implementing the methods
+ * from the newer JDBC API.
+ */
+public interface JdbcConnectionPoolBackwardsCompat {
+
+ // compatibility interface
+
+}
diff --git a/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSource.java b/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSource.java
new file mode 100644
index 0000000000000..0d529e9ad06cb
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSource.java
@@ -0,0 +1,450 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.jdbcx; + +import java.io.IOException; +import java.io.ObjectInputStream; +import java.io.PrintWriter; +import java.io.Serializable; +import java.sql.Connection; +import java.sql.SQLException; +import java.util.Properties; +import java.util.logging.Logger; +import javax.naming.Reference; +import javax.naming.Referenceable; +import javax.naming.StringRefAddr; +import javax.sql.ConnectionPoolDataSource; +import javax.sql.DataSource; +import javax.sql.PooledConnection; +import javax.sql.XAConnection; +import javax.sql.XADataSource; +import org.h2.Driver; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.util.StringUtils; + +/** + * A data source for H2 database connections. It is a factory for XAConnection + * and Connection objects. This class is usually registered in a JNDI naming + * service. To create a data source object and register it with a JNDI service, + * use the following code: + * + *
    + * import org.h2.jdbcx.JdbcDataSource;
    + * import javax.naming.Context;
    + * import javax.naming.InitialContext;
    + * JdbcDataSource ds = new JdbcDataSource();
+ * ds.setURL("jdbc:h2:~/test");
    + * ds.setUser("sa");
    + * ds.setPassword("sa");
    + * Context ctx = new InitialContext();
    + * ctx.bind("jdbc/dsName", ds);
    + * 
    + * + * To use a data source that is already registered, use the following code: + * + *
    + * import java.sql.Connection;
    + * import javax.sql.DataSource;
    + * import javax.naming.Context;
    + * import javax.naming.InitialContext;
    + * Context ctx = new InitialContext();
    + * DataSource ds = (DataSource) ctx.lookup("jdbc/dsName");
    + * Connection conn = ds.getConnection();
    + * 
    + * + * In this example the user name and password are serialized as + * well; this may be a security problem in some cases. + */ +public class JdbcDataSource extends TraceObject implements XADataSource, + DataSource, ConnectionPoolDataSource, Serializable, Referenceable, + JdbcDataSourceBackwardsCompat { + + private static final long serialVersionUID = 1288136338451857771L; + + private transient JdbcDataSourceFactory factory; + private transient PrintWriter logWriter; + private int loginTimeout; + private String userName = ""; + private char[] passwordChars = { }; + private String url = ""; + private String description; + + static { + org.h2.Driver.load(); + } + + /** + * The public constructor. + */ + public JdbcDataSource() { + initFactory(); + int id = getNextId(TraceObject.DATA_SOURCE); + setTrace(factory.getTrace(), TraceObject.DATA_SOURCE, id); + } + + /** + * Called when de-serializing the object. + * + * @param in the input stream + */ + private void readObject(ObjectInputStream in) throws IOException, + ClassNotFoundException { + initFactory(); + in.defaultReadObject(); + } + + private void initFactory() { + factory = new JdbcDataSourceFactory(); + } + + /** + * Get the login timeout in seconds, 0 meaning no timeout. + * + * @return the timeout in seconds + */ + @Override + public int getLoginTimeout() { + debugCodeCall("getLoginTimeout"); + return loginTimeout; + } + + /** + * Set the login timeout in seconds, 0 meaning no timeout. + * The default value is 0. + * This value is ignored by this database. + * + * @param timeout the timeout in seconds + */ + @Override + public void setLoginTimeout(int timeout) { + debugCodeCall("setLoginTimeout", timeout); + this.loginTimeout = timeout; + } + + /** + * Get the current log writer for this object. + * + * @return the log writer + */ + @Override + public PrintWriter getLogWriter() { + debugCodeCall("getLogWriter"); + return logWriter; + } + + /** + * Set the current log writer for this object. 
+ * This value is ignored by this database. + * + * @param out the log writer + */ + @Override + public void setLogWriter(PrintWriter out) { + debugCodeCall("setLogWriter(out)"); + logWriter = out; + } + + /** + * Open a new connection using the current URL, user name and password. + * + * @return the connection + */ + @Override + public Connection getConnection() throws SQLException { + debugCodeCall("getConnection"); + return getJdbcConnection(userName, + StringUtils.cloneCharArray(passwordChars)); + } + + /** + * Open a new connection using the current URL and the specified user name + * and password. + * + * @param user the user name + * @param password the password + * @return the connection + */ + @Override + public Connection getConnection(String user, String password) + throws SQLException { + if (isDebugEnabled()) { + debugCode("getConnection("+quote(user)+", \"\");"); + } + return getJdbcConnection(user, convertToCharArray(password)); + } + + private JdbcConnection getJdbcConnection(String user, char[] password) + throws SQLException { + if (isDebugEnabled()) { + debugCode("getJdbcConnection("+quote(user)+", new char[0]);"); + } + Properties info = new Properties(); + info.setProperty("user", user); + info.put("password", password); + Connection conn = Driver.load().connect(url, info); + if (conn == null) { + throw new SQLException("No suitable driver found for " + url, + "08001", 8001); + } else if (!(conn instanceof JdbcConnection)) { + throw new SQLException( + "Connecting with old version is not supported: " + url, + "08001", 8001); + } + return (JdbcConnection) conn; + } + + /** + * Get the current URL. + * + * @return the URL + */ + public String getURL() { + debugCodeCall("getURL"); + return url; + } + + /** + * Set the current URL. + * + * @param url the new URL + */ + public void setURL(String url) { + debugCodeCall("setURL", url); + this.url = url; + } + + /** + * Get the current URL. 
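Reviewer note: getJdbcConnection() above passes the password with info.put(...) rather than setProperty(...), so it can stay a char[] end to end and be wiped after use. A minimal standalone sketch of that pattern using only JDK types (no H2 classes; names here are illustrative):

```java
import java.util.Arrays;
import java.util.Properties;

public class CharArrayPassword {

    // Mirrors the idea in getJdbcConnection(): setProperty() only accepts
    // Strings, but put() lets the password travel as a char[] that the
    // caller can zero out afterwards (an immutable String would linger
    // in the heap until garbage-collected).
    public static Properties connectionInfo(String user, char[] password) {
        Properties info = new Properties();
        info.setProperty("user", user);
        info.put("password", password);
        return info;
    }

    public static void main(String[] args) {
        char[] password = {'s', 'e', 'c', 'r', 'e', 't'};
        Properties info = connectionInfo("sa", password);
        System.out.println(info.getProperty("user"));                // sa
        System.out.println(((char[]) info.get("password")).length);  // 6
        Arrays.fill(password, '\0'); // wipe once the driver has consumed it
    }
}
```

Note that getProperty("password") would return null here, because the value is not a String; a driver must read it with get().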
+ * This method does the same as getURL, but this method's signature conforms + * to the JavaBean naming convention. + * + * @return the URL + */ + public String getUrl() { + debugCodeCall("getUrl"); + return url; + } + + /** + * Set the current URL. + * This method does the same as setURL, but this method's signature conforms + * to the JavaBean naming convention. + * + * @param url the new URL + */ + public void setUrl(String url) { + debugCodeCall("setUrl", url); + this.url = url; + } + + /** + * Set the current password. + * + * @param password the new password. + */ + public void setPassword(String password) { + debugCodeCall("setPassword", ""); + this.passwordChars = convertToCharArray(password); + } + + /** + * Set the current password in the form of a char array. + * + * @param password the new password in the form of a char array. + */ + public void setPasswordChars(char[] password) { + if (isDebugEnabled()) { + debugCode("setPasswordChars(new char[0]);"); + } + this.passwordChars = password; + } + + private static char[] convertToCharArray(String s) { + return s == null ? null : s.toCharArray(); + } + + private static String convertToString(char[] a) { + return a == null ? null : new String(a); + } + + /** + * Get the current password. + * + * @return the password + */ + public String getPassword() { + debugCodeCall("getPassword"); + return convertToString(passwordChars); + } + + /** + * Get the current user name. + * + * @return the user name + */ + public String getUser() { + debugCodeCall("getUser"); + return userName; + } + + /** + * Set the current user name. + * + * @param user the new user name + */ + public void setUser(String user) { + debugCodeCall("setUser", user); + this.userName = user; + } + + /** + * Get the current description. + * + * @return the description + */ + public String getDescription() { + debugCodeCall("getDescription"); + return description; + } + + /** + * Set the description. 
+ * + * @param description the new description + */ + public void setDescription(String description) { + debugCodeCall("setDescription", description); + this.description = description; + } + + /** + * Get a new reference for this object, using the current settings. + * + * @return the new reference + */ + @Override + public Reference getReference() { + debugCodeCall("getReference"); + String factoryClassName = JdbcDataSourceFactory.class.getName(); + Reference ref = new Reference(getClass().getName(), factoryClassName, null); + ref.add(new StringRefAddr("url", url)); + ref.add(new StringRefAddr("user", userName)); + ref.add(new StringRefAddr("password", convertToString(passwordChars))); + ref.add(new StringRefAddr("loginTimeout", String.valueOf(loginTimeout))); + ref.add(new StringRefAddr("description", description)); + return ref; + } + + /** + * Open a new XA connection using the current URL, user name and password. + * + * @return the connection + */ + @Override + public XAConnection getXAConnection() throws SQLException { + debugCodeCall("getXAConnection"); + int id = getNextId(XA_DATA_SOURCE); + return new JdbcXAConnection(factory, id, getJdbcConnection(userName, + StringUtils.cloneCharArray(passwordChars))); + } + + /** + * Open a new XA connection using the current URL and the specified user + * name and password. + * + * @param user the user name + * @param password the password + * @return the connection + */ + @Override + public XAConnection getXAConnection(String user, String password) + throws SQLException { + if (isDebugEnabled()) { + debugCode("getXAConnection("+quote(user)+", \"\");"); + } + int id = getNextId(XA_DATA_SOURCE); + return new JdbcXAConnection(factory, id, getJdbcConnection(user, + convertToCharArray(password))); + } + + /** + * Open a new pooled connection using the current URL, user name and + * password. 
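getReference() above serializes every configurable property into StringRefAddr entries that JdbcDataSourceFactory later reads back in getObjectInstance(). A self-contained sketch of that round trip using only the JDK's javax.naming types (the class names and keys here are illustrative, not H2's):

```java
import javax.naming.Reference;
import javax.naming.StringRefAddr;

public class ReferenceRoundTrip {

    // Build a Reference the way getReference() does: one StringRefAddr per
    // property, plus the name of the factory class that can rebuild the object.
    public static Reference build(String url, String user, int loginTimeout) {
        Reference ref = new Reference("ExampleDataSource", "ExampleFactory", null);
        ref.add(new StringRefAddr("url", url));
        ref.add(new StringRefAddr("user", user));
        ref.add(new StringRefAddr("loginTimeout", String.valueOf(loginTimeout)));
        return ref;
    }

    // Read a property back, as an ObjectFactory.getObjectInstance() would.
    public static String get(Reference ref, String key) throws Exception {
        return (String) ref.get(key).getContent();
    }

    public static void main(String[] args) throws Exception {
        Reference ref = build("jdbc:h2:mem:demo", "sa", 30);
        System.out.println(get(ref, "url"));          // jdbc:h2:mem:demo
        System.out.println(get(ref, "loginTimeout")); // 30
    }
}
```

Everything crossing the Reference boundary must be a String, which is why loginTimeout is stringified here and parsed back with Integer.parseInt in the factory.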
+ * + * @return the connection + */ + @Override + public PooledConnection getPooledConnection() throws SQLException { + debugCodeCall("getPooledConnection"); + return getXAConnection(); + } + + /** + * Open a new pooled connection using the current URL and the specified user + * name and password. + * + * @param user the user name + * @param password the password + * @return the connection + */ + @Override + public PooledConnection getPooledConnection(String user, String password) + throws SQLException { + if (isDebugEnabled()) { + debugCode("getPooledConnection("+quote(user)+", \"\");"); + } + return getXAConnection(user, password); + } + + /** + * Return an object of this class if possible. + * + * @param iface the class + * @return this + */ + @Override + @SuppressWarnings("unchecked") + public <T> T unwrap(Class<T> iface) throws SQLException { + try { + if (isWrapperFor(iface)) { + return (T) this; + } + throw DbException.getInvalidValueException("iface", iface); + } catch (Exception e) { + throw logAndConvert(e); + } + } + + /** + * Checks if unwrap can return an object of this class. + * + * @param iface the class + * @return whether or not the interface is assignable from this class + */ + @Override + public boolean isWrapperFor(Class<?> iface) throws SQLException { + return iface != null && iface.isAssignableFrom(getClass()); + } + + /** + * [Not supported] + */ + @Override + public Logger getParentLogger() { + return null; + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": url=" + url + " user=" + userName; + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSourceBackwardsCompat.java b/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSourceBackwardsCompat.java new file mode 100644 index 0000000000000..30d1284de40d9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSourceBackwardsCompat.java @@ -0,0 +1,16 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbcx; + +/** + * Allows us to compile on older platforms, while still implementing the methods + * from the newer JDBC API. + */ +public interface JdbcDataSourceBackwardsCompat { + + // compatibility interface + +} diff --git a/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSourceFactory.java b/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSourceFactory.java new file mode 100644 index 0000000000000..31088d02ae6d8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbcx/JdbcDataSourceFactory.java @@ -0,0 +1,94 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbcx; + +import java.util.Hashtable; + +import javax.naming.Context; +import javax.naming.Name; +import javax.naming.Reference; +import javax.naming.spi.ObjectFactory; + +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.Trace; +import org.h2.message.TraceSystem; + +/** + * This class is used to create new DataSource objects. + * An application should not use this class directly. + */ +public class JdbcDataSourceFactory implements ObjectFactory { + + private static TraceSystem cachedTraceSystem; + private final Trace trace; + + static { + org.h2.Driver.load(); + } + + /** + * The public constructor to create new factory objects. + */ + public JdbcDataSourceFactory() { + trace = getTraceSystem().getTrace(Trace.JDBCX); + } + + /** + * Creates a new object using the specified location or reference + * information. 
+ * + * @param obj the reference (this factory only supports objects of type + * javax.naming.Reference) + * @param name unused + * @param nameCtx unused + * @param environment unused + * @return the new JdbcDataSource, or null if the reference class name is + * not JdbcDataSource. + */ + @Override + public synchronized Object getObjectInstance(Object obj, Name name, + Context nameCtx, Hashtable<?, ?> environment) { + if (trace.isDebugEnabled()) { + trace.debug("getObjectInstance obj={0} name={1} " + + "nameCtx={2} environment={3}", obj, name, nameCtx, environment); + } + if (obj instanceof Reference) { + Reference ref = (Reference) obj; + if (ref.getClassName().equals(JdbcDataSource.class.getName())) { + JdbcDataSource dataSource = new JdbcDataSource(); + dataSource.setURL((String) ref.get("url").getContent()); + dataSource.setUser((String) ref.get("user").getContent()); + dataSource.setPassword((String) ref.get("password").getContent()); + dataSource.setDescription((String) ref.get("description").getContent()); + String s = (String) ref.get("loginTimeout").getContent(); + dataSource.setLoginTimeout(Integer.parseInt(s)); + return dataSource; + } + } + return null; + } + + /** + * INTERNAL + */ + public static TraceSystem getTraceSystem() { + synchronized (JdbcDataSourceFactory.class) { + if (cachedTraceSystem == null) { + cachedTraceSystem = new TraceSystem( + SysProperties.CLIENT_TRACE_DIRECTORY + "h2datasource" + + Constants.SUFFIX_TRACE_FILE); + cachedTraceSystem.setLevelFile(SysProperties.DATASOURCE_TRACE_LEVEL); + } + return cachedTraceSystem; + } + } + + Trace getTrace() { + return trace; + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbcx/JdbcXAConnection.java b/modules/h2/src/main/java/org/h2/jdbcx/JdbcXAConnection.java new file mode 100644 index 0000000000000..c991ceacad9d6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbcx/JdbcXAConnection.java @@ -0,0 +1,475 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbcx; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import javax.sql.ConnectionEvent; +import javax.sql.ConnectionEventListener; +import javax.sql.StatementEventListener; +import javax.sql.XAConnection; +import javax.transaction.xa.XAException; +import javax.transaction.xa.XAResource; +import javax.transaction.xa.Xid; +import org.h2.api.ErrorCode; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.util.New; + + +/** + * This class provides support for distributed transactions. + * An application developer usually does not use this interface. + * It is used by the transaction manager internally. + */ +public class JdbcXAConnection extends TraceObject implements XAConnection, + XAResource { + + private final JdbcDataSourceFactory factory; + + // this connection is kept open as long as the XAConnection is alive + private JdbcConnection physicalConn; + + // this connection is replaced whenever getConnection is called + private volatile Connection handleConn; + private final ArrayList<ConnectionEventListener> listeners = New.arrayList(); + private Xid currentTransaction; + private boolean prepared; + + static { + org.h2.Driver.load(); + } + + JdbcXAConnection(JdbcDataSourceFactory factory, int id, + JdbcConnection physicalConn) { + this.factory = factory; + setTrace(factory.getTrace(), TraceObject.XA_DATA_SOURCE, id); + this.physicalConn = physicalConn; + } + + /** + * Get the XAResource object. + * + * @return itself + */ + @Override + public XAResource getXAResource() { + debugCodeCall("getXAResource"); + return this; + } + + /** + * Close the physical connection. + * This method is usually called by the connection pool. 
+ */ + @Override + public void close() throws SQLException { + debugCodeCall("close"); + Connection lastHandle = handleConn; + if (lastHandle != null) { + listeners.clear(); + lastHandle.close(); + } + if (physicalConn != null) { + try { + physicalConn.close(); + } finally { + physicalConn = null; + } + } + } + + /** + * Get a connection that is a handle to the physical connection. This method + * is usually called by the connection pool. This method closes the last + * connection handle if one exists. + * + * @return the connection + */ + @Override + public Connection getConnection() throws SQLException { + debugCodeCall("getConnection"); + Connection lastHandle = handleConn; + if (lastHandle != null) { + lastHandle.close(); + } + // this will ensure the rollback command is cached + physicalConn.rollback(); + handleConn = new PooledJdbcConnection(physicalConn); + return handleConn; + } + + /** + * Register a new listener for the connection. + * + * @param listener the event listener + */ + @Override + public void addConnectionEventListener(ConnectionEventListener listener) { + debugCode("addConnectionEventListener(listener);"); + listeners.add(listener); + } + + /** + * Remove the event listener. + * + * @param listener the event listener + */ + @Override + public void removeConnectionEventListener(ConnectionEventListener listener) { + debugCode("removeConnectionEventListener(listener);"); + listeners.remove(listener); + } + + /** + * INTERNAL + */ + void closedHandle() { + debugCode("closedHandle();"); + ConnectionEvent event = new ConnectionEvent(this); + // go backward so that a listener can remove itself + // (otherwise we need to clone the list) + for (int i = listeners.size() - 1; i >= 0; i--) { + ConnectionEventListener listener = listeners.get(i); + listener.connectionClosed(event); + } + handleConn = null; + } + + /** + * Get the transaction timeout. 
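closedHandle() above walks the listener list backwards precisely so that a listener may deregister itself from inside the callback without a copy of the list. A stripped-down demonstration of why the direction matters (hypothetical Listener type, not the javax.sql one):

```java
import java.util.ArrayList;
import java.util.List;

public class BackwardListenerDemo {

    interface Listener { void closed(); }

    private final List<Listener> listeners = new ArrayList<>();

    void add(Listener l) { listeners.add(l); }

    // Iterate backwards, as JdbcXAConnection.closedHandle() does: a listener
    // that removes itself only shifts elements it has already visited, so no
    // listener is skipped and no defensive clone of the list is needed.
    void fireClosed() {
        for (int i = listeners.size() - 1; i >= 0; i--) {
            listeners.get(i).closed();
        }
    }

    // Returns how many callbacks ran when the first listener removes itself.
    public static int run() {
        BackwardListenerDemo demo = new BackwardListenerDemo();
        int[] calls = {0};
        Listener selfRemoving = new Listener() {
            @Override
            public void closed() {
                calls[0]++;
                demo.listeners.remove(this);
            }
        };
        demo.add(selfRemoving);
        demo.add(() -> calls[0]++);
        demo.fireClosed();
        return calls[0];
    }

    public static void main(String[] args) {
        System.out.println(run()); // 2: both listeners ran exactly once
    }
}
```

With a forward index loop, the self-removal would shift the second listener into the slot just visited and it would never fire.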
+ * + * @return 0 + */ + @Override + public int getTransactionTimeout() { + debugCodeCall("getTransactionTimeout"); + return 0; + } + + /** + * Set the transaction timeout. + * + * @param seconds ignored + * @return false + */ + @Override + public boolean setTransactionTimeout(int seconds) { + debugCodeCall("setTransactionTimeout", seconds); + return false; + } + + /** + * Checks if this is the same XAResource. + * + * @param xares the other object + * @return true if this is the same object + */ + @Override + public boolean isSameRM(XAResource xares) { + debugCode("isSameRM(xares);"); + return xares == this; + } + + /** + * Get the list of prepared transaction branches. This method is called by + * the transaction manager during recovery. + * + * @param flag TMSTARTRSCAN, TMENDRSCAN, or TMNOFLAGS. If no other flags are + * set, TMNOFLAGS must be used. + * @return zero or more Xid objects + */ + @Override + public Xid[] recover(int flag) throws XAException { + debugCodeCall("recover", quoteFlags(flag)); + checkOpen(); + try (Statement stat = physicalConn.createStatement()) { + ResultSet rs = stat.executeQuery("SELECT * FROM " + + "INFORMATION_SCHEMA.IN_DOUBT ORDER BY TRANSACTION"); + ArrayList<Xid> list = New.arrayList(); + while (rs.next()) { + String tid = rs.getString("TRANSACTION"); + int id = getNextId(XID); + Xid xid = new JdbcXid(factory, id, tid); + list.add(xid); + } + rs.close(); + Xid[] result = list.toArray(new Xid[0]); + if (!list.isEmpty()) { + prepared = true; + } + return result; + } catch (SQLException e) { + XAException xa = new XAException(XAException.XAER_RMERR); + xa.initCause(e); + throw xa; + } + } + + /** + * Prepare a transaction. 
+ * + * @param xid the transaction id + * @return XA_OK + */ + @Override + public int prepare(Xid xid) throws XAException { + if (isDebugEnabled()) { + debugCode("prepare("+JdbcXid.toString(xid)+");"); + } + checkOpen(); + if (!currentTransaction.equals(xid)) { + throw new XAException(XAException.XAER_INVAL); + } + + try (Statement stat = physicalConn.createStatement()) { + stat.execute("PREPARE COMMIT " + JdbcXid.toString(xid)); + prepared = true; + } catch (SQLException e) { + throw convertException(e); + } + return XA_OK; + } + + /** + * Forget a transaction. + * This method does not have an effect for this database. + * + * @param xid the transaction id + */ + @Override + public void forget(Xid xid) { + if (isDebugEnabled()) { + debugCode("forget("+JdbcXid.toString(xid)+");"); + } + prepared = false; + } + + /** + * Roll back a transaction. + * + * @param xid the transaction id + */ + @Override + public void rollback(Xid xid) throws XAException { + if (isDebugEnabled()) { + debugCode("rollback("+JdbcXid.toString(xid)+");"); + } + try { + if (prepared) { + try (Statement stat = physicalConn.createStatement()) { + stat.execute("ROLLBACK TRANSACTION " + JdbcXid.toString(xid)); + } + prepared = false; + } else { + physicalConn.rollback(); + } + physicalConn.setAutoCommit(true); + } catch (SQLException e) { + throw convertException(e); + } + currentTransaction = null; + } + + /** + * End a transaction. + * + * @param xid the transaction id + * @param flags TMSUCCESS, TMFAIL, or TMSUSPEND + */ + @Override + public void end(Xid xid, int flags) throws XAException { + if (isDebugEnabled()) { + debugCode("end("+JdbcXid.toString(xid)+", "+quoteFlags(flags)+");"); + } + // TODO transaction end: implement this method + if (flags == TMSUSPEND) { + return; + } + if (!currentTransaction.equals(xid)) { + throw new XAException(XAException.XAER_OUTSIDE); + } + prepared = false; + } + + /** + * Start or continue to work on a transaction. 
+ * + * @param xid the transaction id + * @param flags TMNOFLAGS, TMJOIN, or TMRESUME + */ + @Override + public void start(Xid xid, int flags) throws XAException { + if (isDebugEnabled()) { + debugCode("start("+JdbcXid.toString(xid)+", "+quoteFlags(flags)+");"); + } + if (flags == TMRESUME) { + return; + } + if (flags == TMJOIN) { + if (currentTransaction != null && !currentTransaction.equals(xid)) { + throw new XAException(XAException.XAER_RMERR); + } + } else if (currentTransaction != null) { + throw new XAException(XAException.XAER_NOTA); + } + try { + physicalConn.setAutoCommit(false); + } catch (SQLException e) { + throw convertException(e); + } + currentTransaction = xid; + prepared = false; + } + + /** + * Commit a transaction. + * + * @param xid the transaction id + * @param onePhase use a one-phase protocol if true + */ + @Override + public void commit(Xid xid, boolean onePhase) throws XAException { + if (isDebugEnabled()) { + debugCode("commit("+JdbcXid.toString(xid)+", "+onePhase+");"); + } + + try { + if (onePhase) { + physicalConn.commit(); + } else { + try (Statement stat = physicalConn.createStatement()) { + stat.execute("COMMIT TRANSACTION " + JdbcXid.toString(xid)); + prepared = false; + } + } + physicalConn.setAutoCommit(true); + } catch (SQLException e) { + throw convertException(e); + } + currentTransaction = null; + } + + /** + * [Not supported] Add a statement event listener. + * + * @param listener the new statement event listener + */ + @Override + public void addStatementEventListener(StatementEventListener listener) { + throw new UnsupportedOperationException(); + } + + /** + * [Not supported] Remove a statement event listener. 
+ * + * @param listener the statement event listener + */ + @Override + public void removeStatementEventListener(StatementEventListener listener) { + throw new UnsupportedOperationException(); + } + + /** + * INTERNAL + */ + @Override + public String toString() { + return getTraceObjectName() + ": " + physicalConn; + } + + private static XAException convertException(SQLException e) { + XAException xa = new XAException(e.getMessage()); + xa.initCause(e); + return xa; + } + + private static String quoteFlags(int flags) { + StringBuilder buff = new StringBuilder(); + if ((flags & XAResource.TMENDRSCAN) != 0) { + buff.append("|XAResource.TMENDRSCAN"); + } + if ((flags & XAResource.TMFAIL) != 0) { + buff.append("|XAResource.TMFAIL"); + } + if ((flags & XAResource.TMJOIN) != 0) { + buff.append("|XAResource.TMJOIN"); + } + if ((flags & XAResource.TMONEPHASE) != 0) { + buff.append("|XAResource.TMONEPHASE"); + } + if ((flags & XAResource.TMRESUME) != 0) { + buff.append("|XAResource.TMRESUME"); + } + if ((flags & XAResource.TMSTARTRSCAN) != 0) { + buff.append("|XAResource.TMSTARTRSCAN"); + } + if ((flags & XAResource.TMSUCCESS) != 0) { + buff.append("|XAResource.TMSUCCESS"); + } + if ((flags & XAResource.TMSUSPEND) != 0) { + buff.append("|XAResource.TMSUSPEND"); + } + if ((flags & XAResource.XA_RDONLY) != 0) { + buff.append("|XAResource.XA_RDONLY"); + } + if (buff.length() == 0) { + buff.append("|XAResource.TMNOFLAGS"); + } + return buff.toString().substring(1); + } + + private void checkOpen() throws XAException { + if (physicalConn == null) { + throw new XAException(XAException.XAER_RMERR); + } + } + + /** + * A pooled connection. 
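quoteFlags() above is plain bit-testing against the XAResource flag constants, collecting names and trimming the leading separator at the end. A reduced, runnable version of the same technique covering a few of the flags (same approach, fewer cases):

```java
import javax.transaction.xa.XAResource;

public class XaFlagNames {

    // Turn a flag bitmask into readable names for trace output, the way
    // quoteFlags() does: test each bit, collect "|NAME" fragments, then
    // drop the leading '|'. TMNOFLAGS (0) gets a name of its own.
    public static String describe(int flags) {
        StringBuilder buff = new StringBuilder();
        if ((flags & XAResource.TMJOIN) != 0) {
            buff.append("|TMJOIN");
        }
        if ((flags & XAResource.TMRESUME) != 0) {
            buff.append("|TMRESUME");
        }
        if ((flags & XAResource.TMSUCCESS) != 0) {
            buff.append("|TMSUCCESS");
        }
        if ((flags & XAResource.TMSUSPEND) != 0) {
            buff.append("|TMSUSPEND");
        }
        if (buff.length() == 0) {
            buff.append("|TMNOFLAGS");
        }
        return buff.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(describe(XAResource.TMSUCCESS)); // TMSUCCESS
        System.out.println(describe(XAResource.TMNOFLAGS)); // TMNOFLAGS
    }
}
```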
+ */ + class PooledJdbcConnection extends JdbcConnection { + + private boolean isClosed; + + public PooledJdbcConnection(JdbcConnection conn) { + super(conn); + } + + @Override + public synchronized void close() throws SQLException { + if (!isClosed) { + try { + rollback(); + setAutoCommit(true); + } catch (SQLException e) { + // ignore + } + closedHandle(); + isClosed = true; + } + } + + @Override + public synchronized boolean isClosed() throws SQLException { + return isClosed || super.isClosed(); + } + + @Override + protected synchronized void checkClosed(boolean write) { + if (isClosed) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + super.checkClosed(write); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbcx/JdbcXid.java b/modules/h2/src/main/java/org/h2/jdbcx/JdbcXid.java new file mode 100644 index 0000000000000..622d8e37228b5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbcx/JdbcXid.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jdbcx; + +import java.util.StringTokenizer; +import javax.transaction.xa.Xid; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.message.TraceObject; +import org.h2.util.StringUtils; + +/** + * An object of this class represents a transaction id. 
+ */ +public class JdbcXid extends TraceObject implements Xid { + + private static final String PREFIX = "XID"; + + private final int formatId; + private final byte[] branchQualifier; + private final byte[] globalTransactionId; + + JdbcXid(JdbcDataSourceFactory factory, int id, String tid) { + setTrace(factory.getTrace(), TraceObject.XID, id); + try { + StringTokenizer tokenizer = new StringTokenizer(tid, "_"); + String prefix = tokenizer.nextToken(); + if (!PREFIX.equals(prefix)) { + throw DbException.get(ErrorCode.WRONG_XID_FORMAT_1, tid); + } + formatId = Integer.parseInt(tokenizer.nextToken()); + branchQualifier = StringUtils.convertHexToBytes(tokenizer.nextToken()); + globalTransactionId = StringUtils.convertHexToBytes(tokenizer.nextToken()); + } catch (RuntimeException e) { + throw DbException.get(ErrorCode.WRONG_XID_FORMAT_1, tid); + } + } + + /** + * INTERNAL + */ + public static String toString(Xid xid) { + return PREFIX + '_' + xid.getFormatId() + '_' + StringUtils.convertBytesToHex(xid.getBranchQualifier()) + '_' + + StringUtils.convertBytesToHex(xid.getGlobalTransactionId()); + } + + /** + * Get the format id. + * + * @return the format id + */ + @Override + public int getFormatId() { + debugCodeCall("getFormatId"); + return formatId; + } + + /** + * The transaction branch identifier. + * + * @return the identifier + */ + @Override + public byte[] getBranchQualifier() { + debugCodeCall("getBranchQualifier"); + return branchQualifier; + } + + /** + * The global transaction identifier. + * + * @return the transaction id + */ + @Override + public byte[] getGlobalTransactionId() { + debugCodeCall("getGlobalTransactionId"); + return globalTransactionId; + } + +} diff --git a/modules/h2/src/main/java/org/h2/jdbcx/package.html b/modules/h2/src/main/java/org/h2/jdbcx/package.html new file mode 100644 index 0000000000000..ab97d2dcc4241 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jdbcx/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Implementation of the extended JDBC API (package javax.sql). + +
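JdbcXid.toString() above flattens an Xid into the form XID_&lt;formatId&gt;_&lt;branch qualifier hex&gt;_&lt;global transaction id hex&gt;, which the JdbcXid constructor later parses back during recovery. A self-contained sketch of the encoding half using only JDK types (the hex helper stands in for H2's StringUtils.convertBytesToHex):

```java
import javax.transaction.xa.Xid;

public class XidStringDemo {

    // Hex-encode a byte array, standing in for StringUtils.convertBytesToHex.
    static String hex(byte[] bytes) {
        StringBuilder s = new StringBuilder();
        for (byte b : bytes) {
            s.append(String.format("%02x", b & 0xff));
        }
        return s.toString();
    }

    // Same layout JdbcXid.toString() produces and its constructor parses:
    // XID_<formatId>_<branch qualifier hex>_<global transaction id hex>
    public static String encode(Xid xid) {
        return "XID_" + xid.getFormatId() + '_' + hex(xid.getBranchQualifier())
                + '_' + hex(xid.getGlobalTransactionId());
    }

    // A fixed Xid, purely for demonstration.
    public static Xid demoXid() {
        return new Xid() {
            @Override public int getFormatId() { return 7; }
            @Override public byte[] getBranchQualifier() { return new byte[] {0x01, (byte) 0xff}; }
            @Override public byte[] getGlobalTransactionId() { return new byte[] {0x0a}; }
        };
    }

    public static void main(String[] args) {
        System.out.println(encode(demoXid())); // XID_7_01ff_0a
    }
}
```

Using '_' as the field separator is what lets the constructor split the string with a StringTokenizer; any segment that fails to parse is reported as WRONG_XID_FORMAT_1.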

\ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/jmx/DatabaseInfo.java b/modules/h2/src/main/java/org/h2/jmx/DatabaseInfo.java new file mode 100644 index 0000000000000..9d05380e5a847 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jmx/DatabaseInfo.java @@ -0,0 +1,280 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jmx; + +import java.lang.management.ManagementFactory; + +import java.sql.Timestamp; +import java.util.HashMap; +import java.util.Hashtable; +import java.util.Map; +import java.util.TreeMap; +import javax.management.JMException; +import javax.management.MBeanServer; +import javax.management.ObjectName; +import org.h2.command.Command; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.store.PageStore; +import org.h2.table.Table; + +/** + * The MBean implementation. + * + * @author Eric Dong + * @author Thomas Mueller + */ +public class DatabaseInfo implements DatabaseInfoMBean { + + private static final Map<String, ObjectName> MBEANS = new HashMap<>(); + + /** Database. */ + private final Database database; + + private DatabaseInfo(Database database) { + if (database == null) { + throw new IllegalArgumentException("Argument 'database' must not be null"); + } + this.database = database; + } + + /** + * Returns a new JMX ObjectName instance. 
+ * + * @param name name of the MBean + * @param path the path + * @return a new ObjectName instance + * @throws JMException if the ObjectName could not be created + */ + private static ObjectName getObjectName(String name, String path) + throws JMException { + name = name.replace(':', '_'); + path = path.replace(':', '_'); + Hashtable<String, String> map = new Hashtable<>(); + map.put("name", name); + map.put("path", path); + return new ObjectName("org.h2", map); + } + + /** + * Registers an MBean for the database. + * + * @param connectionInfo connection info + * @param database database + */ + public static void registerMBean(ConnectionInfo connectionInfo, + Database database) throws JMException { + String path = connectionInfo.getName(); + if (!MBEANS.containsKey(path)) { + MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer(); + String name = database.getShortName(); + ObjectName mbeanObjectName = getObjectName(name, path); + MBEANS.put(path, mbeanObjectName); + DatabaseInfo info = new DatabaseInfo(database); + Object mbean = new DocumentedMBean(info, DatabaseInfoMBean.class); + mbeanServer.registerMBean(mbean, mbeanObjectName); + } + } + + /** + * Unregisters the MBean for the database if one is registered. 
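registerMBean()/getObjectName() above build a key/value ObjectName and register a standard MBean on the platform MBeanServer. A self-contained sketch of the same mechanics (the demo domain, bean, and attribute are made up, not H2's; ':' is replaced because it is reserved in ObjectName syntax):

```java
import java.lang.management.ManagementFactory;
import java.util.Hashtable;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class MBeanRegisterDemo {

    // Standard-MBean naming convention: Demo's management interface
    // must be named DemoMBean and live in the same package.
    public interface DemoMBean { int getValue(); }

    public static class Demo implements DemoMBean {
        @Override public int getValue() { return 42; }
    }

    // Build a property-based ObjectName the way DatabaseInfo.getObjectName()
    // does; ':' is illegal in an unquoted property value, hence the replace.
    public static ObjectName objectName(String name, String path)
            throws MalformedObjectNameException {
        Hashtable<String, String> map = new Hashtable<>();
        map.put("name", name.replace(':', '_'));
        map.put("path", path.replace(':', '_'));
        return new ObjectName("org.h2.demo", map);
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName on = objectName("test", "mem:test");
        server.registerMBean(new Demo(), on);
        System.out.println(server.getAttribute(on, "Value")); // 42
        server.unregisterMBean(on);
        System.out.println(server.isRegistered(on)); // false
    }
}
```

Unregistering with the exact same ObjectName is what lets DatabaseInfo keep its MBEANS map keyed by database path.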
+ * + * @param name database name + */ + public static void unregisterMBean(String name) throws Exception { + ObjectName mbeanObjectName = MBEANS.remove(name); + if (mbeanObjectName != null) { + MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer(); + mbeanServer.unregisterMBean(mbeanObjectName); + } + } + + @Override + public boolean isExclusive() { + return database.getExclusiveSession() != null; + } + + @Override + public boolean isReadOnly() { + return database.isReadOnly(); + } + + @Override + public String getMode() { + return database.getMode().getName(); + } + + @Override + public boolean isMultiThreaded() { + return database.isMultiThreaded(); + } + + @Override + public boolean isMvcc() { + return database.isMultiVersion(); + } + + @Override + public int getLogMode() { + return database.getLogMode(); + } + + @Override + public void setLogMode(int value) { + database.setLogMode(value); + } + + @Override + public int getTraceLevel() { + return database.getTraceSystem().getLevelFile(); + } + + @Override + public void setTraceLevel(int level) { + database.getTraceSystem().setLevelFile(level); + } + + @Override + public long getFileWriteCountTotal() { + if (!database.isPersistent()) { + return 0; + } + PageStore p = database.getPageStore(); + if (p != null) { + return p.getWriteCountTotal(); + } + // TODO remove this method when removing the page store + // (the MVStore doesn't support it) + return 0; + } + + @Override + public long getFileWriteCount() { + if (!database.isPersistent()) { + return 0; + } + PageStore p = database.getPageStore(); + if (p != null) { + return p.getWriteCount(); + } + return database.getMvStore().getStore().getFileStore().getWriteCount(); + } + + @Override + public long getFileReadCount() { + if (!database.isPersistent()) { + return 0; + } + PageStore p = database.getPageStore(); + if (p != null) { + return p.getReadCount(); + } + return database.getMvStore().getStore().getFileStore().getReadCount(); + } + + @Override + 
public long getFileSize() { + if (!database.isPersistent()) { + return 0; + } + PageStore p = database.getPageStore(); + if (p != null) { + return p.getPageCount() * p.getPageSize() / 1024; + } + return database.getMvStore().getStore().getFileStore().size(); + } + + @Override + public int getCacheSizeMax() { + if (!database.isPersistent()) { + return 0; + } + PageStore p = database.getPageStore(); + if (p != null) { + return p.getCache().getMaxMemory(); + } + return database.getMvStore().getStore().getCacheSize() * 1024; + } + + @Override + public void setCacheSizeMax(int kb) { + if (database.isPersistent()) { + database.setCacheSize(kb); + } + } + + @Override + public int getCacheSize() { + if (!database.isPersistent()) { + return 0; + } + PageStore p = database.getPageStore(); + if (p != null) { + return p.getCache().getMemory(); + } + return database.getMvStore().getStore().getCacheSizeUsed() * 1024; + } + + @Override + public String getVersion() { + return Constants.getFullVersion(); + } + + @Override + public String listSettings() { + StringBuilder buff = new StringBuilder(); + for (Map.Entry<String, String> e : + new TreeMap<>( + database.getSettings().getSettings()).entrySet()) { + buff.append(e.getKey()).append(" = ").append(e.getValue()).append('\n'); + } + return buff.toString(); + } + + @Override + public String listSessions() { + StringBuilder buff = new StringBuilder(); + for (Session session : database.getSessions(false)) { + buff.append("session id: ").append(session.getId()); + buff.append(" user: "). + append(session.getUser().getName()). + append('\n'); + buff.append("connected: "). + append(new Timestamp(session.getSessionStart())). + append('\n'); + Command command = session.getCurrentCommand(); + if (command != null) { + buff.append("statement: "). + append(session.getCurrentCommand()). + append('\n'); + long commandStart = session.getCurrentCommandStart(); + if (commandStart != 0) { + buff.append("started: ").append( + new Timestamp(commandStart)). 
+ append('\n'); + } + } + Table[] t = session.getLocks(); + if (t.length > 0) { + for (Table table : session.getLocks()) { + if (table.isLockedExclusivelyBy(session)) { + buff.append("write lock on "); + } else { + buff.append("read lock on "); + } + buff.append(table.getSchema().getName()). + append('.').append(table.getName()). + append('\n'); + } + } + buff.append('\n'); + } + return buff.toString(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/jmx/DatabaseInfoMBean.java b/modules/h2/src/main/java/org/h2/jmx/DatabaseInfoMBean.java new file mode 100644 index 0000000000000..a42d6322a3861 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jmx/DatabaseInfoMBean.java @@ -0,0 +1,168 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.jmx; + +/** + * Information and management operations for the given database. + * @h2.resource + * + * @author Eric Dong + * @author Thomas Mueller + */ +public interface DatabaseInfoMBean { + + /** + * Is the database open in exclusive mode? + * @h2.resource + * + * @return true if the database is open in exclusive mode, false otherwise + */ + boolean isExclusive(); + + /** + * Is the database read-only? + * @h2.resource + * + * @return true if the database is read-only, false otherwise + */ + boolean isReadOnly(); + + /** + * The database compatibility mode (REGULAR if no compatibility mode is + * used). + * @h2.resource + * + * @return the database mode + */ + String getMode(); + + /** + * Is multi-threading enabled? + * @h2.resource + * + * @return true if multi-threading is enabled, false otherwise + */ + boolean isMultiThreaded(); + + /** + * Is MVCC (multi version concurrency) enabled? + * @h2.resource + * + * @return true if MVCC is enabled, false otherwise + */ + boolean isMvcc(); + + /** + * The transaction log mode (0 disabled, 1 without sync, 2 enabled). 
+     * @h2.resource
+     *
+     * @return the transaction log mode
+     */
+    int getLogMode();
+
+    /**
+     * Set the transaction log mode.
+     *
+     * @param value the new log mode
+     */
+    void setLogMode(int value);
+
+    /**
+     * The number of write operations since the database was created.
+     * @h2.resource
+     *
+     * @return the total write count
+     */
+    long getFileWriteCountTotal();
+
+    /**
+     * The number of write operations since the database was opened.
+     * @h2.resource
+     *
+     * @return the write count
+     */
+    long getFileWriteCount();
+
+    /**
+     * The file read count since the database was opened.
+     * @h2.resource
+     *
+     * @return the read count
+     */
+    long getFileReadCount();
+
+    /**
+     * The database file size in KB.
+     * @h2.resource
+     *
+     * @return the file size in KB
+     */
+    long getFileSize();
+
+    /**
+     * The maximum cache size in KB.
+     * @h2.resource
+     *
+     * @return the maximum size
+     */
+    int getCacheSizeMax();
+
+    /**
+     * Change the maximum size.
+     *
+     * @param kb the cache size in KB.
+     */
+    void setCacheSizeMax(int kb);
+
+    /**
+     * The current cache size in KB.
+     * @h2.resource
+     *
+     * @return the current size
+     */
+    int getCacheSize();
+
+    /**
+     * The database version.
+     * @h2.resource
+     *
+     * @return the version
+     */
+    String getVersion();
+
+    /**
+     * The trace level (0 disabled, 1 error, 2 info, 3 debug).
+     * @h2.resource
+     *
+     * @return the level
+     */
+    int getTraceLevel();
+
+    /**
+     * Set the trace level.
+     *
+     * @param level the new value
+     */
+    void setTraceLevel(int level);
+
+    /**
+     * List the database settings.
+     * @h2.resource
+     *
+     * @return the database settings
+     */
+    String listSettings();
+
+    /**
+     * List sessions, including the queries that are in
+     * progress, and locked tables.
+     * @h2.resource
+     *
+     * @return information about the sessions
+     */
+    String listSessions();
+
+}
diff --git a/modules/h2/src/main/java/org/h2/jmx/DocumentedMBean.java b/modules/h2/src/main/java/org/h2/jmx/DocumentedMBean.java
new file mode 100644
index 0000000000000..29a83aed276c4
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/jmx/DocumentedMBean.java
@@ -0,0 +1,76 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.jmx;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.util.Properties;
+import javax.management.MBeanAttributeInfo;
+import javax.management.MBeanInfo;
+import javax.management.MBeanOperationInfo;
+import javax.management.NotCompliantMBeanException;
+import javax.management.StandardMBean;
+import org.h2.util.Utils;
+
+/**
+ * An MBean that reads the documentation from a resource file.
+ */
+public class DocumentedMBean extends StandardMBean {
+
+    private final String interfaceName;
+    private Properties resources;
+
+    public <T> DocumentedMBean(T impl, Class<T> mbeanInterface)
+            throws NotCompliantMBeanException {
+        super(impl, mbeanInterface);
+        this.interfaceName = impl.getClass().getName() + "MBean";
+    }
+
+    private Properties getResources() {
+        if (resources == null) {
+            resources = new Properties();
+            String resourceName = "/org/h2/res/javadoc.properties";
+            try {
+                byte[] buff = Utils.getResource(resourceName);
+                if (buff != null) {
+                    resources.load(new ByteArrayInputStream(buff));
+                }
+            } catch (IOException e) {
+                // ignore
+            }
+        }
+        return resources;
+    }
+
+    @Override
+    protected String getDescription(MBeanInfo info) {
+        String s = getResources().getProperty(interfaceName);
+        return s == null ? super.getDescription(info) : s;
+    }
+
+    @Override
+    protected String getDescription(MBeanOperationInfo op) {
+        String s = getResources().getProperty(interfaceName + "."
+ op.getName()); + return s == null ? super.getDescription(op) : s; + } + + @Override + protected String getDescription(MBeanAttributeInfo info) { + String prefix = info.isIs() ? "is" : "get"; + String s = getResources().getProperty( + interfaceName + "." + prefix + info.getName()); + return s == null ? super.getDescription(info) : s; + } + + @Override + protected int getImpact(MBeanOperationInfo info) { + if (info.getName().startsWith("list")) { + return MBeanOperationInfo.INFO; + } + return MBeanOperationInfo.ACTION; + } + +} diff --git a/modules/h2/src/main/java/org/h2/jmx/package.html b/modules/h2/src/main/java/org/h2/jmx/package.html new file mode 100644 index 0000000000000..90cc43370d622 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/jmx/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Implementation of the Java Management Extension (JMX) features. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/message/DbException.java b/modules/h2/src/main/java/org/h2/message/DbException.java new file mode 100644 index 0000000000000..6ee66d672edc1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/message/DbException.java @@ -0,0 +1,399 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.message; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.PrintWriter; +import java.lang.reflect.InvocationTargetException; +import java.nio.charset.StandardCharsets; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.text.MessageFormat; +import java.util.Locale; +import java.util.Map.Entry; +import java.util.Properties; + +import org.h2.api.ErrorCode; +import org.h2.jdbc.JdbcSQLException; +import org.h2.util.SortedProperties; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * This exception wraps a checked exception. + * It is used in methods where checked exceptions are not supported, + * for example in a Comparator. 
+ */ +public class DbException extends RuntimeException { + + private static final long serialVersionUID = 1L; + + private static final Properties MESSAGES = new Properties(); + + private Object source; + + static { + try { + byte[] messages = Utils.getResource( + "/org/h2/res/_messages_en.prop"); + if (messages != null) { + MESSAGES.load(new ByteArrayInputStream(messages)); + } + String language = Locale.getDefault().getLanguage(); + if (!"en".equals(language)) { + byte[] translations = Utils.getResource( + "/org/h2/res/_messages_" + language + ".prop"); + // message: translated message + english + // (otherwise certain applications don't work) + if (translations != null) { + Properties p = SortedProperties.fromLines( + new String(translations, StandardCharsets.UTF_8)); + for (Entry e : p.entrySet()) { + String key = (String) e.getKey(); + String translation = (String) e.getValue(); + if (translation != null && !translation.startsWith("#")) { + String original = MESSAGES.getProperty(key); + String message = translation + "\n" + original; + MESSAGES.put(key, message); + } + } + } + } + } catch (OutOfMemoryError e) { + DbException.traceThrowable(e); + } catch (IOException e) { + DbException.traceThrowable(e); + } + } + + private DbException(SQLException e) { + super(e.getMessage(), e); + } + + private static String translate(String key, String... params) { + String message = null; + if (MESSAGES != null) { + // Tomcat sets final static fields to null sometimes + message = MESSAGES.getProperty(key); + } + if (message == null) { + message = "(Message " + key + " not found)"; + } + if (params != null) { + for (int i = 0; i < params.length; i++) { + String s = params[i]; + if (s != null && s.length() > 0) { + params[i] = StringUtils.quoteIdentifier(s); + } + } + message = MessageFormat.format(message, (Object[]) params); + } + return message; + } + + /** + * Get the SQLException object. 
+ * + * @return the exception + */ + public SQLException getSQLException() { + return (SQLException) getCause(); + } + + /** + * Get the error code. + * + * @return the error code + */ + public int getErrorCode() { + return getSQLException().getErrorCode(); + } + + /** + * Set the SQL statement of the given exception. + * This method may create a new object. + * + * @param sql the SQL statement + * @return the exception + */ + public DbException addSQL(String sql) { + SQLException e = getSQLException(); + if (e instanceof JdbcSQLException) { + JdbcSQLException j = (JdbcSQLException) e; + if (j.getSQL() == null) { + j.setSQL(sql); + } + return this; + } + e = new JdbcSQLException(e.getMessage(), sql, e.getSQLState(), + e.getErrorCode(), e, null); + return new DbException(e); + } + + /** + * Create a database exception for a specific error code. + * + * @param errorCode the error code + * @return the exception + */ + public static DbException get(int errorCode) { + return get(errorCode, (String) null); + } + + /** + * Create a database exception for a specific error code. + * + * @param errorCode the error code + * @param p1 the first parameter of the message + * @return the exception + */ + public static DbException get(int errorCode, String p1) { + return get(errorCode, new String[] { p1 }); + } + + /** + * Create a database exception for a specific error code. + * + * @param errorCode the error code + * @param cause the cause of the exception + * @param params the list of parameters of the message + * @return the exception + */ + public static DbException get(int errorCode, Throwable cause, + String... params) { + return new DbException(getJdbcSQLException(errorCode, cause, params)); + } + + /** + * Create a database exception for a specific error code. + * + * @param errorCode the error code + * @param params the list of parameters of the message + * @return the exception + */ + public static DbException get(int errorCode, String... 
params) { + return new DbException(getJdbcSQLException(errorCode, null, params)); + } + + /** + * Create a database exception for an arbitrary SQLState. + * + * @param sqlstate the state to use + * @param message the message to use + * @return the exception + */ + public static DbException fromUser(String sqlstate, String message) { + // do not translate as sqlstate is arbitrary : avoid "message not found" + return new DbException(new JdbcSQLException(message, null, sqlstate, 0, null, null)); + } + + /** + * Create a syntax error exception. + * + * @param sql the SQL statement + * @param index the position of the error in the SQL statement + * @return the exception + */ + public static DbException getSyntaxError(String sql, int index) { + sql = StringUtils.addAsterisk(sql, index); + return get(ErrorCode.SYNTAX_ERROR_1, sql); + } + + /** + * Create a syntax error exception. + * + * @param sql the SQL statement + * @param index the position of the error in the SQL statement + * @param message the message + * @return the exception + */ + public static DbException getSyntaxError(String sql, int index, + String message) { + sql = StringUtils.addAsterisk(sql, index); + return new DbException(getJdbcSQLException(ErrorCode.SYNTAX_ERROR_2, + null, sql, message)); + } + + /** + * Gets a SQL exception meaning this feature is not supported. + * + * @param message what exactly is not supported + * @return the exception + */ + public static DbException getUnsupportedException(String message) { + return get(ErrorCode.FEATURE_NOT_SUPPORTED_1, message); + } + + /** + * Gets a SQL exception meaning this value is invalid. + * + * @param param the name of the parameter + * @param value the value passed + * @return the IllegalArgumentException object + */ + public static DbException getInvalidValueException(String param, + Object value) { + return get(ErrorCode.INVALID_VALUE_2, + value == null ? "null" : value.toString(), param); + } + + /** + * Throw an internal error. 
This method seems to return an exception object, + * so that it can be used instead of 'return', but in fact it always throws + * the exception. + * + * @param s the message + * @return the RuntimeException object + * @throws RuntimeException the exception + */ + public static RuntimeException throwInternalError(String s) { + RuntimeException e = new RuntimeException(s); + DbException.traceThrowable(e); + throw e; + } + + /** + * Throw an internal error. This method seems to return an exception object, + * so that it can be used instead of 'return', but in fact it always throws + * the exception. + * + * @return the RuntimeException object + */ + public static RuntimeException throwInternalError() { + return throwInternalError("Unexpected code path"); + } + + /** + * Convert an exception to a SQL exception using the default mapping. + * + * @param e the root cause + * @return the SQL exception object + */ + public static SQLException toSQLException(Throwable e) { + if (e instanceof SQLException) { + return (SQLException) e; + } + return convert(e).getSQLException(); + } + + /** + * Convert a throwable to an SQL exception using the default mapping. All + * errors except the following are re-thrown: StackOverflowError, + * LinkageError. 
+ * + * @param e the root cause + * @return the exception object + */ + public static DbException convert(Throwable e) { + if (e instanceof DbException) { + return (DbException) e; + } else if (e instanceof SQLException) { + return new DbException((SQLException) e); + } else if (e instanceof InvocationTargetException) { + return convertInvocation((InvocationTargetException) e, null); + } else if (e instanceof IOException) { + return get(ErrorCode.IO_EXCEPTION_1, e, e.toString()); + } else if (e instanceof OutOfMemoryError) { + return get(ErrorCode.OUT_OF_MEMORY, e); + } else if (e instanceof StackOverflowError || e instanceof LinkageError) { + return get(ErrorCode.GENERAL_ERROR_1, e, e.toString()); + } else if (e instanceof Error) { + throw (Error) e; + } + return get(ErrorCode.GENERAL_ERROR_1, e, e.toString()); + } + + /** + * Convert an InvocationTarget exception to a database exception. + * + * @param te the root cause + * @param message the added message or null + * @return the database exception object + */ + public static DbException convertInvocation(InvocationTargetException te, + String message) { + Throwable t = te.getTargetException(); + if (t instanceof SQLException || t instanceof DbException) { + return convert(t); + } + message = message == null ? t.getMessage() : message + ": " + t.getMessage(); + return get(ErrorCode.EXCEPTION_IN_FUNCTION_1, t, message); + } + + /** + * Convert an IO exception to a database exception. + * + * @param e the root cause + * @param message the message or null + * @return the database exception object + */ + public static DbException convertIOException(IOException e, String message) { + if (message == null) { + Throwable t = e.getCause(); + if (t instanceof DbException) { + return (DbException) t; + } + return get(ErrorCode.IO_EXCEPTION_1, e, e.toString()); + } + return get(ErrorCode.IO_EXCEPTION_2, e, e.toString(), message); + } + + /** + * Gets the SQL exception object for a specific error code. 
+ * + * @param errorCode the error code + * @param cause the cause of the exception + * @param params the list of parameters of the message + * @return the SQLException object + */ + private static JdbcSQLException getJdbcSQLException(int errorCode, + Throwable cause, String... params) { + String sqlstate = ErrorCode.getState(errorCode); + String message = translate(sqlstate, params); + return new JdbcSQLException(message, null, sqlstate, errorCode, cause, null); + } + + /** + * Convert an exception to an IO exception. + * + * @param e the root cause + * @return the IO exception + */ + public static IOException convertToIOException(Throwable e) { + if (e instanceof IOException) { + return (IOException) e; + } + if (e instanceof JdbcSQLException) { + JdbcSQLException e2 = (JdbcSQLException) e; + if (e2.getOriginalCause() != null) { + e = e2.getOriginalCause(); + } + } + return new IOException(e.toString(), e); + } + + public Object getSource() { + return source; + } + + public void setSource(Object source) { + this.source = source; + } + + /** + * Write the exception to the driver manager log writer if configured. + * + * @param e the exception + */ + public static void traceThrowable(Throwable e) { + PrintWriter writer = DriverManager.getLogWriter(); + if (writer != null) { + e.printStackTrace(writer); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/message/Trace.java b/modules/h2/src/main/java/org/h2/message/Trace.java new file mode 100644 index 0000000000000..646f8c44eeccc --- /dev/null +++ b/modules/h2/src/main/java/org/h2/message/Trace.java @@ -0,0 +1,371 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.message; + +import java.text.MessageFormat; +import java.util.ArrayList; + +import org.h2.engine.SysProperties; +import org.h2.expression.ParameterInterface; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.Value; + +/** + * This class represents a trace module. + */ +public class Trace { + + /** + * The trace module id for commands. + */ + public static final int COMMAND = 0; + + /** + * The trace module id for constraints. + */ + public static final int CONSTRAINT = 1; + + /** + * The trace module id for databases. + */ + public static final int DATABASE = 2; + + /** + * The trace module id for functions. + */ + public static final int FUNCTION = 3; + + /** + * The trace module id for file locks. + */ + public static final int FILE_LOCK = 4; + + /** + * The trace module id for indexes. + */ + public static final int INDEX = 5; + + /** + * The trace module id for the JDBC API. + */ + public static final int JDBC = 6; + + /** + * The trace module id for locks. + */ + public static final int LOCK = 7; + + /** + * The trace module id for schemas. + */ + public static final int SCHEMA = 8; + + /** + * The trace module id for sequences. + */ + public static final int SEQUENCE = 9; + + /** + * The trace module id for settings. + */ + public static final int SETTING = 10; + + /** + * The trace module id for tables. + */ + public static final int TABLE = 11; + + /** + * The trace module id for triggers. + */ + public static final int TRIGGER = 12; + + /** + * The trace module id for users. + */ + public static final int USER = 13; + + /** + * The trace module id for the page store. + */ + public static final int PAGE_STORE = 14; + + /** + * The trace module id for the JDBCX API + */ + public static final int JDBCX = 15; + + /** + * Module names by their ids as array indexes. 
+ */ + public static final String[] MODULE_NAMES = { + "command", + "constraint", + "database", + "function", + "fileLock", + "index", + "jdbc", + "lock", + "schema", + "sequence", + "setting", + "table", + "trigger", + "user", + "pageStore", + "JDBCX" + }; + + private final TraceWriter traceWriter; + private final String module; + private final String lineSeparator; + private int traceLevel = TraceSystem.PARENT; + + Trace(TraceWriter traceWriter, int moduleId) { + this(traceWriter, MODULE_NAMES[moduleId]); + } + + Trace(TraceWriter traceWriter, String module) { + this.traceWriter = traceWriter; + this.module = module; + this.lineSeparator = SysProperties.LINE_SEPARATOR; + } + + /** + * Set the trace level of this component. This setting overrides the parent + * trace level. + * + * @param level the new level + */ + public void setLevel(int level) { + this.traceLevel = level; + } + + private boolean isEnabled(int level) { + if (this.traceLevel == TraceSystem.PARENT) { + return traceWriter.isEnabled(level); + } + return level <= this.traceLevel; + } + + /** + * Check if the trace level is equal or higher than INFO. + * + * @return true if it is + */ + public boolean isInfoEnabled() { + return isEnabled(TraceSystem.INFO); + } + + /** + * Check if the trace level is equal or higher than DEBUG. + * + * @return true if it is + */ + public boolean isDebugEnabled() { + return isEnabled(TraceSystem.DEBUG); + } + + /** + * Write a message with trace level ERROR to the trace system. + * + * @param t the exception + * @param s the message + */ + public void error(Throwable t, String s) { + if (isEnabled(TraceSystem.ERROR)) { + traceWriter.write(TraceSystem.ERROR, module, s, t); + } + } + + /** + * Write a message with trace level ERROR to the trace system. + * + * @param t the exception + * @param s the message + * @param params the parameters + */ + public void error(Throwable t, String s, Object... 
params) {
+        if (isEnabled(TraceSystem.ERROR)) {
+            s = MessageFormat.format(s, params);
+            traceWriter.write(TraceSystem.ERROR, module, s, t);
+        }
+    }
+
+    /**
+     * Write a message with trace level INFO to the trace system.
+     *
+     * @param s the message
+     */
+    public void info(String s) {
+        if (isEnabled(TraceSystem.INFO)) {
+            traceWriter.write(TraceSystem.INFO, module, s, null);
+        }
+    }
+
+    /**
+     * Write a message with trace level INFO to the trace system.
+     *
+     * @param s the message
+     * @param params the parameters
+     */
+    public void info(String s, Object... params) {
+        if (isEnabled(TraceSystem.INFO)) {
+            s = MessageFormat.format(s, params);
+            traceWriter.write(TraceSystem.INFO, module, s, null);
+        }
+    }
+
+    /**
+     * Write a message with trace level INFO to the trace system.
+     *
+     * @param t the exception
+     * @param s the message
+     */
+    void info(Throwable t, String s) {
+        if (isEnabled(TraceSystem.INFO)) {
+            traceWriter.write(TraceSystem.INFO, module, s, t);
+        }
+    }
+
+    /**
+     * Format the parameter list.
+     *
+     * @param parameters the parameter list
+     * @return the formatted text
+     */
+    public static String formatParams(
+            ArrayList<? extends ParameterInterface> parameters) {
+        if (parameters.isEmpty()) {
+            return "";
+        }
+        StatementBuilder buff = new StatementBuilder();
+        int i = 0;
+        boolean params = false;
+        for (ParameterInterface p : parameters) {
+            if (p.isValueSet()) {
+                if (!params) {
+                    buff.append(" {");
+                    params = true;
+                }
+                buff.appendExceptFirst(", ");
+                Value v = p.getParamValue();
+                buff.append(++i).append(": ").append(v.getTraceSQL());
+            }
+        }
+        if (params) {
+            buff.append('}');
+        }
+        return buff.toString();
+    }
+
+    /**
+     * Write a SQL statement with trace level INFO to the trace system.
+ * + * @param sql the SQL statement + * @param params the parameters used, in the for {1:...} + * @param count the update count + * @param time the time it took to run the statement in ms + */ + public void infoSQL(String sql, String params, int count, long time) { + if (!isEnabled(TraceSystem.INFO)) { + return; + } + StringBuilder buff = new StringBuilder(sql.length() + params.length() + 20); + buff.append(lineSeparator).append("/*SQL"); + boolean space = false; + if (params.length() > 0) { + // This looks like a bug, but it is intentional: + // If there are no parameters, the SQL statement is + // the rest of the line. If there are parameters, they + // are appended at the end of the line. Knowing the size + // of the statement simplifies separating the SQL statement + // from the parameters (no need to parse). + space = true; + buff.append(" l:").append(sql.length()); + } + if (count > 0) { + space = true; + buff.append(" #:").append(count); + } + if (time > 0) { + space = true; + buff.append(" t:").append(time); + } + if (!space) { + buff.append(' '); + } + buff.append("*/"). + append(StringUtils.javaEncode(sql)). + append(StringUtils.javaEncode(params)). + append(';'); + sql = buff.toString(); + traceWriter.write(TraceSystem.INFO, module, sql, null); + } + + /** + * Write a message with trace level DEBUG to the trace system. + * + * @param s the message + * @param params the parameters + */ + public void debug(String s, Object... params) { + if (isEnabled(TraceSystem.DEBUG)) { + s = MessageFormat.format(s, params); + traceWriter.write(TraceSystem.DEBUG, module, s, null); + } + } + + /** + * Write a message with trace level DEBUG to the trace system. + * + * @param s the message + */ + public void debug(String s) { + if (isEnabled(TraceSystem.DEBUG)) { + traceWriter.write(TraceSystem.DEBUG, module, s, null); + } + } + + /** + * Write a message with trace level DEBUG to the trace system. 
+ * @param t the exception + * @param s the message + */ + public void debug(Throwable t, String s) { + if (isEnabled(TraceSystem.DEBUG)) { + traceWriter.write(TraceSystem.DEBUG, module, s, t); + } + } + + + /** + * Write Java source code with trace level INFO to the trace system. + * + * @param java the source code + */ + public void infoCode(String java) { + if (isEnabled(TraceSystem.INFO)) { + traceWriter.write(TraceSystem.INFO, module, lineSeparator + + "/**/" + java, null); + } + } + + /** + * Write Java source code with trace level DEBUG to the trace system. + * + * @param java the source code + */ + void debugCode(String java) { + if (isEnabled(TraceSystem.DEBUG)) { + traceWriter.write(TraceSystem.DEBUG, module, lineSeparator + + "/**/" + java, null); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/message/TraceObject.java b/modules/h2/src/main/java/org/h2/message/TraceObject.java new file mode 100644 index 0000000000000..77f3a19804fe8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/message/TraceObject.java @@ -0,0 +1,391 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.message; + +import java.math.BigDecimal; +import java.sql.SQLException; +import java.util.Map; +import java.util.concurrent.atomic.AtomicIntegerArray; + +import org.h2.util.StringUtils; + +/** + * The base class for objects that can print trace information about themselves. + */ +public class TraceObject { + + /** + * The trace type id for callable statements. + */ + protected static final int CALLABLE_STATEMENT = 0; + + /** + * The trace type id for connections. + */ + protected static final int CONNECTION = 1; + + /** + * The trace type id for database meta data objects. + */ + protected static final int DATABASE_META_DATA = 2; + + /** + * The trace type id for prepared statements. 
+ */ + protected static final int PREPARED_STATEMENT = 3; + + /** + * The trace type id for result sets. + */ + protected static final int RESULT_SET = 4; + + /** + * The trace type id for result set meta data objects. + */ + protected static final int RESULT_SET_META_DATA = 5; + + /** + * The trace type id for savepoint objects. + */ + protected static final int SAVEPOINT = 6; + + /** + * The trace type id for statements. + */ + protected static final int STATEMENT = 8; + + /** + * The trace type id for blobs. + */ + protected static final int BLOB = 9; + + /** + * The trace type id for clobs. + */ + protected static final int CLOB = 10; + + /** + * The trace type id for parameter meta data objects. + */ + protected static final int PARAMETER_META_DATA = 11; + + /** + * The trace type id for data sources. + */ + protected static final int DATA_SOURCE = 12; + + /** + * The trace type id for XA data sources. + */ + protected static final int XA_DATA_SOURCE = 13; + + /** + * The trace type id for transaction ids. + */ + protected static final int XID = 15; + + /** + * The trace type id for array objects. + */ + protected static final int ARRAY = 16; + + private static final int LAST = ARRAY + 1; + private static final AtomicIntegerArray ID = new AtomicIntegerArray(LAST); + + private static final String[] PREFIX = { "call", "conn", "dbMeta", "prep", + "rs", "rsMeta", "sp", "ex", "stat", "blob", "clob", "pMeta", "ds", + "xads", "xares", "xid", "ar" }; + + /** + * The trace module used by this object. + */ + protected Trace trace; + + private int traceType; + private int id; + + /** + * Set the options to use when writing trace message. 
+ * + * @param trace the trace object + * @param type the trace object type + * @param id the trace object id + */ + protected void setTrace(Trace trace, int type, int id) { + this.trace = trace; + this.traceType = type; + this.id = id; + } + + /** + * INTERNAL + */ + public int getTraceId() { + return id; + } + + /** + * INTERNAL + */ + public String getTraceObjectName() { + return PREFIX[traceType] + id; + } + + /** + * Get the next trace object id for this object type. + * + * @param type the object type + * @return the new trace object id + */ + protected static int getNextId(int type) { + return ID.getAndIncrement(type); + } + + /** + * Check if the debug trace level is enabled. + * + * @return true if it is + */ + protected boolean isDebugEnabled() { + return trace.isDebugEnabled(); + } + + /** + * Check if info trace level is enabled. + * + * @return true if it is + */ + protected boolean isInfoEnabled() { + return trace.isInfoEnabled(); + } + + /** + * Write trace information as an assignment in the form + * className prefixId = objectName.value. + * + * @param className the class name of the result + * @param newType the prefix type + * @param newId the trace object id of the created object + * @param value the value to assign this new object to + */ + protected void debugCodeAssign(String className, int newType, int newId, + String value) { + if (trace.isDebugEnabled()) { + trace.debugCode(className + " " + PREFIX[newType] + + newId + " = " + getTraceObjectName() + "." + value + ";"); + } + } + + /** + * Write trace information as a method call in the form + * objectName.methodName(). + * + * @param methodName the method name + */ + protected void debugCodeCall(String methodName) { + if (trace.isDebugEnabled()) { + trace.debugCode(getTraceObjectName() + "." + methodName + "();"); + } + } + + /** + * Write trace information as a method call in the form + * objectName.methodName(param) where the parameter is formatted as a long + * value. 
+ * + * @param methodName the method name + * @param param one single long parameter + */ + protected void debugCodeCall(String methodName, long param) { + if (trace.isDebugEnabled()) { + trace.debugCode(getTraceObjectName() + "." + + methodName + "(" + param + ");"); + } + } + + /** + * Write trace information as a method call in the form + * objectName.methodName(param) where the parameter is formatted as a Java + * string. + * + * @param methodName the method name + * @param param one single string parameter + */ + protected void debugCodeCall(String methodName, String param) { + if (trace.isDebugEnabled()) { + trace.debugCode(getTraceObjectName() + "." + + methodName + "(" + quote(param) + ");"); + } + } + + /** + * Write trace information in the form objectName.text. + * + * @param text the trace text + */ + protected void debugCode(String text) { + if (trace.isDebugEnabled()) { + trace.debugCode(getTraceObjectName() + "." + text); + } + } + + /** + * Format a string as a Java string literal. + * + * @param s the string to convert + * @return the Java string literal + */ + protected static String quote(String s) { + return StringUtils.quoteJavaString(s); + } + + /** + * Format a time to the Java source code that represents this object. + * + * @param x the time to convert + * @return the Java source code + */ + protected static String quoteTime(java.sql.Time x) { + if (x == null) { + return "null"; + } + return "Time.valueOf(\"" + x.toString() + "\")"; + } + + /** + * Format a timestamp to the Java source code that represents this object. + * + * @param x the timestamp to convert + * @return the Java source code + */ + protected static String quoteTimestamp(java.sql.Timestamp x) { + if (x == null) { + return "null"; + } + return "Timestamp.valueOf(\"" + x.toString() + "\")"; + } + + /** + * Format a date to the Java source code that represents this object. 
+ * + * @param x the date to convert + * @return the Java source code + */ + protected static String quoteDate(java.sql.Date x) { + if (x == null) { + return "null"; + } + return "Date.valueOf(\"" + x.toString() + "\")"; + } + + /** + * Format a big decimal to the Java source code that represents this object. + * + * @param x the big decimal to convert + * @return the Java source code + */ + protected static String quoteBigDecimal(BigDecimal x) { + if (x == null) { + return "null"; + } + return "new BigDecimal(\"" + x.toString() + "\")"; + } + + /** + * Format a byte array to the Java source code that represents this object. + * + * @param x the byte array to convert + * @return the Java source code + */ + protected static String quoteBytes(byte[] x) { + if (x == null) { + return "null"; + } + return "org.h2.util.StringUtils.convertHexToBytes(\"" + + StringUtils.convertBytesToHex(x) + "\")"; + } + + /** + * Format a string array to the Java source code that represents this + * object. + * + * @param s the string array to convert + * @return the Java source code + */ + protected static String quoteArray(String[] s) { + return StringUtils.quoteJavaStringArray(s); + } + + /** + * Format an int array to the Java source code that represents this object. + * + * @param s the int array to convert + * @return the Java source code + */ + protected static String quoteIntArray(int[] s) { + return StringUtils.quoteJavaIntArray(s); + } + + /** + * Format a map to the Java source code that represents this object. + * + * @param map the map to convert + * @return the Java source code + */ + protected static String quoteMap(Map<String, Class<?>> map) { + if (map == null) { + return "null"; + } + if (map.size() == 0) { + return "new Map()"; + } + return "new Map() /* " + map.toString() + " */"; + } + + /** + * Log an exception and convert it to a SQL exception if required.
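As an aside on the quote* helpers above: each one renders a runtime value as the Java source expression that would recreate it, with null rendered as the literal "null". A minimal standalone sketch of that pattern (class and method names here are illustrative, not part of the patch):

```java
import java.math.BigDecimal;

public class QuoteDemo {
    // Mirrors the quoteBigDecimal helper: emit the Java source
    // expression that recreates the value; null stays "null".
    static String quoteBigDecimal(BigDecimal x) {
        return x == null ? "null" : "new BigDecimal(\"" + x + "\")";
    }

    public static void main(String[] args) {
        System.out.println(quoteBigDecimal(new BigDecimal("1.25")));
        // new BigDecimal("1.25")
        System.out.println(quoteBigDecimal(null)); // null
    }
}
```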
+ * + * @param ex the exception + * @return the SQL exception object + */ + protected SQLException logAndConvert(Throwable ex) { + SQLException e = null; + try { + e = DbException.toSQLException(ex); + if (trace == null) { + DbException.traceThrowable(e); + } else { + int errorCode = e.getErrorCode(); + if (errorCode >= 23000 && errorCode < 24000) { + trace.info(e, "exception"); + } else { + trace.error(e, "exception"); + } + } + } catch(Throwable ignore) { + if (e == null) { + e = new SQLException("", "HY000", ex); + } + e.addSuppressed(ignore); + } + return e; + } + + /** + * Get a SQL exception meaning this feature is not supported. + * + * @param message the message + * @return the SQL exception + */ + protected SQLException unsupported(String message) { + try { + throw DbException.getUnsupportedException(message); + } catch (Exception e) { + return logAndConvert(e); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/message/TraceSystem.java b/modules/h2/src/main/java/org/h2/message/TraceSystem.java new file mode 100644 index 0000000000000..b5909e85b4e28 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/message/TraceSystem.java @@ -0,0 +1,348 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.message; + +import java.io.IOException; +import java.io.PrintStream; +import java.io.PrintWriter; +import java.io.Writer; +import java.text.SimpleDateFormat; +import java.util.concurrent.atomic.AtomicReferenceArray; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.jdbc.JdbcSQLException; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; + +/** + * The trace mechanism is the logging facility of this database. There is + * usually one trace system per database. It is called 'trace' because the term + * 'log' is already used in the database domain and means 'transaction log'. 
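A note on the error-code classification in logAndConvert above: SQLSTATE class 23 covers integrity-constraint violations, which are expected application-level errors, so they are logged at info rather than error. A minimal sketch of that classification (names hypothetical):

```java
public class LogLevelDemo {
    // Mirrors the check in logAndConvert: vendor codes in the 23xxx
    // range (e.g. duplicate key) are routine, everything else is an error.
    static String levelFor(int errorCode) {
        return (errorCode >= 23000 && errorCode < 24000) ? "info" : "error";
    }

    public static void main(String[] args) {
        System.out.println(levelFor(23505)); // info
        System.out.println(levelFor(90008)); // error
    }
}
```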
It + * is possible to write after close was called, but that means for each write + * the file will be opened and closed again (which is slower). + */ +public class TraceSystem implements TraceWriter { + + /** + * The parent trace level should be used. + */ + public static final int PARENT = -1; + + /** + * This trace level means nothing should be written. + */ + public static final int OFF = 0; + + /** + * This trace level means only errors should be written. + */ + public static final int ERROR = 1; + + /** + * This trace level means errors and informational messages should be + * written. + */ + public static final int INFO = 2; + + /** + * This trace level means all type of messages should be written. + */ + public static final int DEBUG = 3; + + /** + * This trace level means all type of messages should be written, but + * instead of using the trace file the messages should be written to SLF4J. + */ + public static final int ADAPTER = 4; + + /** + * The default level for system out trace messages. + */ + public static final int DEFAULT_TRACE_LEVEL_SYSTEM_OUT = OFF; + + /** + * The default level for file trace messages. + */ + public static final int DEFAULT_TRACE_LEVEL_FILE = ERROR; + + /** + * The default maximum trace file size. It is currently 64 MB. Additionally, + * there could be a .old file of the same size. 
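The trace levels above form a simple hierarchy: a message is emitted when its level does not exceed the maximum of the two configured sinks (System.out and file). A minimal sketch of that precedence rule, ignoring the SLF4J ADAPTER special case (names hypothetical):

```java
public class TraceLevelDemo {
    static final int OFF = 0, ERROR = 1, INFO = 2, DEBUG = 3;

    // Mirrors TraceSystem.updateLevel/isEnabled: the effective level
    // is the maximum of the per-sink levels.
    static boolean isEnabled(int levelSystemOut, int levelFile, int msgLevel) {
        int levelMax = Math.max(levelSystemOut, levelFile);
        return msgLevel <= levelMax;
    }

    public static void main(String[] args) {
        // Console OFF, file at ERROR: only errors are traced.
        System.out.println(isEnabled(OFF, ERROR, ERROR)); // true
        System.out.println(isEnabled(OFF, ERROR, DEBUG)); // false
    }
}
```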
+ */ + private static final int DEFAULT_MAX_FILE_SIZE = 64 * 1024 * 1024; + + private static final int CHECK_SIZE_EACH_WRITES = 4096; + + private int levelSystemOut = DEFAULT_TRACE_LEVEL_SYSTEM_OUT; + private int levelFile = DEFAULT_TRACE_LEVEL_FILE; + private int levelMax; + private int maxFileSize = DEFAULT_MAX_FILE_SIZE; + private String fileName; + private final AtomicReferenceArray<Trace> traces = + new AtomicReferenceArray<>(Trace.MODULE_NAMES.length); + private SimpleDateFormat dateFormat; + private Writer fileWriter; + private PrintWriter printWriter; + private int checkSize; + private boolean closed; + private boolean writingErrorLogged; + private TraceWriter writer = this; + private PrintStream sysOut = System.out; + + /** + * Create a new trace system object. + * + * @param fileName the file name + */ + public TraceSystem(String fileName) { + this.fileName = fileName; + updateLevel(); + } + + private void updateLevel() { + levelMax = Math.max(levelSystemOut, levelFile); + } + + /** + * Set the print stream to use instead of System.out. + * + * @param out the new print stream + */ + public void setSysOut(PrintStream out) { + this.sysOut = out; + } + + /** + * Get or create a trace object for this module id. Trace modules with id + * are cached. + * + * @param moduleId module id + * @return the trace object + */ + public Trace getTrace(int moduleId) { + Trace t = traces.get(moduleId); + if (t == null) { + t = new Trace(writer, moduleId); + if (!traces.compareAndSet(moduleId, null, t)) { + t = traces.get(moduleId); + } + } + return t; + } + + /** + * Create a trace object for this module. Trace modules with names are not + * cached.
+ * + * @param module the module name + * @return the trace object + */ + public Trace getTrace(String module) { + return new Trace(writer, module); + } + + @Override + public boolean isEnabled(int level) { + if (levelMax == ADAPTER) { + return writer.isEnabled(level); + } + return level <= this.levelMax; + } + + /** + * Set the trace file name. + * + * @param name the file name + */ + public void setFileName(String name) { + this.fileName = name; + } + + /** + * Set the maximum trace file size in bytes. + * + * @param max the maximum size + */ + public void setMaxFileSize(int max) { + this.maxFileSize = max; + } + + /** + * Set the trace level to use for System.out + * + * @param level the new level + */ + public void setLevelSystemOut(int level) { + levelSystemOut = level; + updateLevel(); + } + + /** + * Set the file trace level. + * + * @param level the new level + */ + public void setLevelFile(int level) { + if (level == ADAPTER) { + String adapterClass = "org.h2.message.TraceWriterAdapter"; + try { + writer = (TraceWriter) Class.forName(adapterClass).newInstance(); + } catch (Throwable e) { + e = DbException.get(ErrorCode.CLASS_NOT_FOUND_1, e, adapterClass); + write(ERROR, Trace.DATABASE, adapterClass, e); + return; + } + String name = fileName; + if (name != null) { + if (name.endsWith(Constants.SUFFIX_TRACE_FILE)) { + name = name.substring(0, name.length() - Constants.SUFFIX_TRACE_FILE.length()); + } + int idx = Math.max(name.lastIndexOf('/'), name.lastIndexOf('\\')); + if (idx >= 0) { + name = name.substring(idx + 1); + } + writer.setName(name); + } + } + levelFile = level; + updateLevel(); + } + + public int getLevelFile() { + return levelFile; + } + + private synchronized String format(String module, String s) { + if (dateFormat == null) { + dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss "); + } + return dateFormat.format(System.currentTimeMillis()) + module + ": " + s; + } + + @Override + public void write(int level, int moduleId, String s, 
Throwable t) { + write(level, Trace.MODULE_NAMES[moduleId], s, t); + } + + @Override + public void write(int level, String module, String s, Throwable t) { + if (level <= levelSystemOut || level > this.levelMax) { + // level <= levelSystemOut: the system out level is set higher + // level > this.level: the level for this module is set higher + sysOut.println(format(module, s)); + if (t != null && levelSystemOut == DEBUG) { + t.printStackTrace(sysOut); + } + } + if (fileName != null) { + if (level <= levelFile) { + writeFile(format(module, s), t); + } + } + } + + private synchronized void writeFile(String s, Throwable t) { + try { + if (checkSize++ >= CHECK_SIZE_EACH_WRITES) { + checkSize = 0; + closeWriter(); + if (maxFileSize > 0 && FileUtils.size(fileName) > maxFileSize) { + String old = fileName + ".old"; + FileUtils.delete(old); + FileUtils.move(fileName, old); + } + } + if (!openWriter()) { + return; + } + printWriter.println(s); + if (t != null) { + if (levelFile == ERROR && t instanceof JdbcSQLException) { + JdbcSQLException se = (JdbcSQLException) t; + int code = se.getErrorCode(); + if (ErrorCode.isCommon(code)) { + printWriter.println(t.toString()); + } else { + t.printStackTrace(printWriter); + } + } else { + t.printStackTrace(printWriter); + } + } + printWriter.flush(); + if (closed) { + closeWriter(); + } + } catch (Exception e) { + logWritingError(e); + } + } + + private void logWritingError(Exception e) { + if (writingErrorLogged) { + return; + } + writingErrorLogged = true; + Exception se = DbException.get( + ErrorCode.TRACE_FILE_ERROR_2, e, fileName, e.toString()); + // print this error only once + fileName = null; + sysOut.println(se); + se.printStackTrace(); + } + + private boolean openWriter() { + if (printWriter == null) { + try { + FileUtils.createDirectories(FileUtils.getParent(fileName)); + if (FileUtils.exists(fileName) && !FileUtils.canWrite(fileName)) { + // read only database: don't log error if the trace file + // can't be opened + 
return false; + } + fileWriter = IOUtils.getBufferedWriter( + FileUtils.newOutputStream(fileName, true)); + printWriter = new PrintWriter(fileWriter, true); + } catch (Exception e) { + logWritingError(e); + return false; + } + } + return true; + } + + private synchronized void closeWriter() { + if (printWriter != null) { + printWriter.flush(); + printWriter.close(); + printWriter = null; + } + if (fileWriter != null) { + try { + fileWriter.close(); + } catch (IOException e) { + // ignore + } + fileWriter = null; + } + } + + /** + * Close the writers, and the files if required. It is still possible to + * write after closing, however after each write the file is closed again + * (slowing down tracing). + */ + public void close() { + closeWriter(); + closed = true; + } + + @Override + public void setName(String name) { + // nothing to do (the file name is already set) + } + +} diff --git a/modules/h2/src/main/java/org/h2/message/TraceWriter.java b/modules/h2/src/main/java/org/h2/message/TraceWriter.java new file mode 100644 index 0000000000000..08609f2332f02 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/message/TraceWriter.java @@ -0,0 +1,52 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.message; + +/** + * The backend of the trace system must implement this interface. Two + * implementations are supported: the (default) native trace writer + * implementation that can write to a file and to system out, and an adapter + * that uses SLF4J (Simple Logging Facade for Java). + */ +interface TraceWriter { + + /** + * Set the name of the database or trace object. + * + * @param name the new name + */ + void setName(String name); + + /** + * Write a message. 
+ * + * @param level the trace level + * @param module the name of the module + * @param s the message + * @param t the exception (may be null) + */ + void write(int level, String module, String s, Throwable t); + + /** + * Write a message. + * + * @param level the trace level + * @param moduleId the id of the module + * @param s the message + * @param t the exception (may be null) + */ + void write(int level, int moduleId, String s, Throwable t); + + + /** + * Check the given trace / log level is enabled. + * + * @param level the level + * @return true if the level is enabled + */ + boolean isEnabled(int level); + +} diff --git a/modules/h2/src/main/java/org/h2/message/TraceWriterAdapter.java b/modules/h2/src/main/java/org/h2/message/TraceWriterAdapter.java new file mode 100644 index 0000000000000..c5b6084685e99 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/message/TraceWriterAdapter.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.message; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * This adapter sends log output to SLF4J. SLF4J supports multiple + * implementations such as Logback, Log4j, Jakarta Commons Logging (JCL), JDK + * 1.4 logging, x4juli, and Simple Log. To use SLF4J, you need to add the + * required jar files to the classpath, and set the trace level to 4 when + * opening a database: + * + *
+ * jdbc:h2:~/test;TRACE_LEVEL_FILE=4
    + * 
    + * + * The logger name is 'h2database'. + */ +public class TraceWriterAdapter implements TraceWriter { + + private String name; + private final Logger logger = LoggerFactory.getLogger("h2database"); + + @Override + public void setName(String name) { + this.name = name; + } + + @Override + public boolean isEnabled(int level) { + switch (level) { + case TraceSystem.DEBUG: + return logger.isDebugEnabled(); + case TraceSystem.INFO: + return logger.isInfoEnabled(); + case TraceSystem.ERROR: + return logger.isErrorEnabled(); + default: + return false; + } + } + + @Override + public void write(int level, int moduleId, String s, Throwable t) { + write(level, Trace.MODULE_NAMES[moduleId], s, t); + } + + @Override + public void write(int level, String module, String s, Throwable t) { + if (isEnabled(level)) { + if (name != null) { + s = name + ":" + module + " " + s; + } else { + s = module + " " + s; + } + switch (level) { + case TraceSystem.DEBUG: + logger.debug(s, t); + break; + case TraceSystem.INFO: + logger.info(s, t); + break; + case TraceSystem.ERROR: + logger.error(s, t); + break; + default: + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/message/package.html b/modules/h2/src/main/java/org/h2/message/package.html new file mode 100644 index 0000000000000..3af0d3265e20b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/message/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Trace (logging facility) and error message tool. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/mode/FunctionsMySQL.java b/modules/h2/src/main/java/org/h2/mode/FunctionsMySQL.java new file mode 100644 index 0000000000000..41ce12de51723 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mode/FunctionsMySQL.java @@ -0,0 +1,163 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Jason Brittain (jason.brittain at gmail.com) + */ +package org.h2.mode; + +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Statement; +import java.text.SimpleDateFormat; +import java.util.Date; +import java.util.Locale; + +import org.h2.util.StringUtils; + +/** + * This class implements some MySQL-specific functions. + * + * @author Jason Brittain + * @author Thomas Mueller + */ +public class FunctionsMySQL { + + /** + * The date format of a MySQL formatted date/time. + * Example: 2008-09-25 08:40:59 + */ + private static final String DATE_TIME_FORMAT = "yyyy-MM-dd HH:mm:ss"; + + /** + * Format replacements for MySQL date formats. + * See + * http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-format + */ + private static final String[] FORMAT_REPLACE = { + "%a", "EEE", + "%b", "MMM", + "%c", "MM", + "%d", "dd", + "%e", "d", + "%H", "HH", + "%h", "hh", + "%I", "hh", + "%i", "mm", + "%j", "DDD", + "%k", "H", + "%l", "h", + "%M", "MMMM", + "%m", "MM", + "%p", "a", + "%r", "hh:mm:ss a", + "%S", "ss", + "%s", "ss", + "%T", "HH:mm:ss", + "%W", "EEEE", + "%w", "F", + "%Y", "yyyy", + "%y", "yy", + "%%", "%", + }; + + /** + * Register the functionality in the database. + * Nothing happens if the functions are already registered. 
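The FORMAT_REPLACE table above maps MySQL DATE_FORMAT % tokens onto SimpleDateFormat pattern letters by applying the pairs in order. A minimal sketch of that substitution using a small subset of the pairs and plain String.replace (the patch itself uses StringUtils.replaceAll over the full table):

```java
public class MySqlFormatDemo {
    // A subset of the %-token pairs from the FORMAT_REPLACE table.
    static final String[] REPLACE = { "%Y", "yyyy", "%m", "MM", "%d", "dd" };

    // Mirrors convertToSimpleDateFormat: substitute each MySQL token
    // with its SimpleDateFormat equivalent, pair by pair.
    static String toSimpleDateFormat(String format) {
        for (int i = 0; i < REPLACE.length; i += 2) {
            format = format.replace(REPLACE[i], REPLACE[i + 1]);
        }
        return format;
    }

    public static void main(String[] args) {
        System.out.println(toSimpleDateFormat("%Y-%m-%d")); // yyyy-MM-dd
    }
}
```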
+ * + * @param conn the connection + */ + public static void register(Connection conn) throws SQLException { + String[] init = { + "UNIX_TIMESTAMP", "unixTimestamp", + "FROM_UNIXTIME", "fromUnixTime", + "DATE", "date", + }; + Statement stat = conn.createStatement(); + for (int i = 0; i < init.length; i += 2) { + String alias = init[i], method = init[i + 1]; + stat.execute( + "CREATE ALIAS IF NOT EXISTS " + alias + + " FOR \"" + FunctionsMySQL.class.getName() + "." + method + "\""); + } + } + + /** + * Get the seconds since 1970-01-01 00:00:00 UTC. + * See + * http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp + * + * @return the current timestamp in seconds (not milliseconds). + */ + public static int unixTimestamp() { + return (int) (System.currentTimeMillis() / 1000L); + } + + /** + * Get the seconds since 1970-01-01 00:00:00 UTC of the given timestamp. + * See + * http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp + * + * @param timestamp the timestamp + * @return the current timestamp in seconds (not milliseconds). + */ + public static int unixTimestamp(java.sql.Timestamp timestamp) { + return (int) (timestamp.getTime() / 1000L); + } + + /** + * See + * http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_from-unixtime + * + * @param seconds The current timestamp in seconds. + * @return a formatted date/time String in the format "yyyy-MM-dd HH:mm:ss". + */ + public static String fromUnixTime(int seconds) { + SimpleDateFormat formatter = new SimpleDateFormat(DATE_TIME_FORMAT, + Locale.ENGLISH); + return formatter.format(new Date(seconds * 1000L)); + } + + /** + * See + * http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_from-unixtime + * + * @param seconds The current timestamp in seconds. + * @param format The format of the date/time String to return. + * @return a formatted date/time String in the given format. 
+ */ + public static String fromUnixTime(int seconds, String format) { + format = convertToSimpleDateFormat(format); + SimpleDateFormat formatter = new SimpleDateFormat(format, Locale.ENGLISH); + return formatter.format(new Date(seconds * 1000L)); + } + + private static String convertToSimpleDateFormat(String format) { + String[] replace = FORMAT_REPLACE; + for (int i = 0; i < replace.length; i += 2) { + format = StringUtils.replaceAll(format, replace[i], replace[i + 1]); + } + return format; + } + + /** + * See + * http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date + * This function is dependent on the exact formatting of the MySQL date/time + * string. + * + * @param dateTime The date/time String from which to extract just the date + * part. + * @return the date part of the given date/time String argument. + */ + public static String date(String dateTime) { + if (dateTime == null) { + return null; + } + int index = dateTime.indexOf(' '); + if (index != -1) { + return dateTime.substring(0, index); + } + return dateTime; + } + +} diff --git a/modules/h2/src/main/java/org/h2/mode/package.html b/modules/h2/src/main/java/org/h2/mode/package.html new file mode 100644 index 0000000000000..cad05ad7fdc13 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mode/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

+ +Utility classes for compatibility with other databases, for example MySQL. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/mvstore/Chunk.java b/modules/h2/src/main/java/org/h2/mvstore/Chunk.java new file mode 100644 index 0000000000000..07139b3245170 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/Chunk.java @@ -0,0 +1,277 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.util.HashMap; + +/** + * A chunk of data, containing one or multiple pages. + *

    + * Chunks are page aligned (each page is usually 4096 bytes). + * There are at most 67 million (2^26) chunks, + * each chunk is at most 2 GB large. + */ +public class Chunk { + + /** + * The maximum chunk id. + */ + public static final int MAX_ID = (1 << 26) - 1; + + /** + * The maximum length of a chunk header, in bytes. + */ + static final int MAX_HEADER_LENGTH = 1024; + + /** + * The length of the chunk footer. The longest footer is: + * chunk:ffffffff,block:ffffffffffffffff, + * version:ffffffffffffffff,fletcher:ffffffff + */ + static final int FOOTER_LENGTH = 128; + + /** + * The chunk id. + */ + public final int id; + + /** + * The start block number within the file. + */ + public long block; + + /** + * The length in number of blocks. + */ + public int len; + + /** + * The total number of pages in this chunk. + */ + public int pageCount; + + /** + * The number of pages still alive. + */ + public int pageCountLive; + + /** + * The sum of the max length of all pages. + */ + public long maxLen; + + /** + * The sum of the max length of all pages that are in use. + */ + public long maxLenLive; + + /** + * The garbage collection priority. Priority 0 means it needs to be + * collected, a high value means low priority. + */ + public int collectPriority; + + /** + * The position of the meta root. + */ + public long metaRootPos; + + /** + * The version stored in this chunk. + */ + public long version; + + /** + * When this chunk was created, in milliseconds after the store was created. + */ + public long time; + + /** + * When this chunk was no longer needed, in milliseconds after the store was + * created. After this, the chunk is kept alive a bit longer (in case it is + * referenced in older versions). + */ + public long unused; + + /** + * The last used map id. + */ + public int mapId; + + /** + * The predicted position of the next chunk. + */ + public long next; + + Chunk(int id) { + this.id = id; + } + + /** + * Read the header from the byte buffer. 
+ * + * @param buff the source buffer + * @param start the start of the chunk in the file + * @return the chunk + */ + static Chunk readChunkHeader(ByteBuffer buff, long start) { + int pos = buff.position(); + byte[] data = new byte[Math.min(buff.remaining(), MAX_HEADER_LENGTH)]; + buff.get(data); + try { + for (int i = 0; i < data.length; i++) { + if (data[i] == '\n') { + // set the position to the start of the first page + buff.position(pos + i + 1); + String s = new String(data, 0, i, StandardCharsets.ISO_8859_1).trim(); + return fromString(s); + } + } + } catch (Exception e) { + // there could be various reasons + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupt reading chunk at position {0}", start, e); + } + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupt reading chunk at position {0}", start); + } + + /** + * Write the chunk header. + * + * @param buff the target buffer + * @param minLength the minimum length + */ + void writeChunkHeader(WriteBuffer buff, int minLength) { + long pos = buff.position(); + buff.put(asString().getBytes(StandardCharsets.ISO_8859_1)); + while (buff.position() - pos < minLength - 1) { + buff.put((byte) ' '); + } + if (minLength != 0 && buff.position() > minLength) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, + "Chunk metadata too long"); + } + buff.put((byte) '\n'); + } + + /** + * Get the metadata key for the given chunk id. + * + * @param chunkId the chunk id + * @return the metadata key + */ + static String getMetaKey(int chunkId) { + return "chunk." + Integer.toHexString(chunkId); + } + + /** + * Build a block from the given string. 
+ * + * @param s the string + * @return the block + */ + public static Chunk fromString(String s) { + HashMap map = DataUtils.parseMap(s); + int id = DataUtils.readHexInt(map, "chunk", 0); + Chunk c = new Chunk(id); + c.block = DataUtils.readHexLong(map, "block", 0); + c.len = DataUtils.readHexInt(map, "len", 0); + c.pageCount = DataUtils.readHexInt(map, "pages", 0); + c.pageCountLive = DataUtils.readHexInt(map, "livePages", c.pageCount); + c.mapId = DataUtils.readHexInt(map, "map", 0); + c.maxLen = DataUtils.readHexLong(map, "max", 0); + c.maxLenLive = DataUtils.readHexLong(map, "liveMax", c.maxLen); + c.metaRootPos = DataUtils.readHexLong(map, "root", 0); + c.time = DataUtils.readHexLong(map, "time", 0); + c.unused = DataUtils.readHexLong(map, "unused", 0); + c.version = DataUtils.readHexLong(map, "version", id); + c.next = DataUtils.readHexLong(map, "next", 0); + return c; + } + + /** + * Calculate the fill rate in %. 0 means empty, 100 means full. + * + * @return the fill rate + */ + public int getFillRate() { + if (maxLenLive <= 0) { + return 0; + } else if (maxLenLive == maxLen) { + return 100; + } + return 1 + (int) (98 * maxLenLive / maxLen); + } + + @Override + public int hashCode() { + return id; + } + + @Override + public boolean equals(Object o) { + return o instanceof Chunk && ((Chunk) o).id == id; + } + + /** + * Get the chunk data as a string. 
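The getFillRate computation above deliberately clamps partially-filled chunks into the 1..99 range, so only a truly empty chunk reports 0 and only a fully live chunk reports 100. A standalone sketch of the same arithmetic (class name hypothetical):

```java
public class FillRateDemo {
    // Mirrors Chunk.getFillRate: 0 means empty, 100 means full;
    // anything in between is 1 + 98 * live / max, so a chunk with
    // any live data never rounds down to 0 or up to 100.
    static int fillRate(long maxLenLive, long maxLen) {
        if (maxLenLive <= 0) {
            return 0;
        } else if (maxLenLive == maxLen) {
            return 100;
        }
        return 1 + (int) (98 * maxLenLive / maxLen);
    }

    public static void main(String[] args) {
        System.out.println(fillRate(0, 1024));    // 0
        System.out.println(fillRate(512, 1024));  // 50
        System.out.println(fillRate(1024, 1024)); // 100
    }
}
```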
+ * + * @return the string + */ + public String asString() { + StringBuilder buff = new StringBuilder(240); + DataUtils.appendMap(buff, "chunk", id); + DataUtils.appendMap(buff, "block", block); + DataUtils.appendMap(buff, "len", len); + if (maxLen != maxLenLive) { + DataUtils.appendMap(buff, "liveMax", maxLenLive); + } + if (pageCount != pageCountLive) { + DataUtils.appendMap(buff, "livePages", pageCountLive); + } + DataUtils.appendMap(buff, "map", mapId); + DataUtils.appendMap(buff, "max", maxLen); + if (next != 0) { + DataUtils.appendMap(buff, "next", next); + } + DataUtils.appendMap(buff, "pages", pageCount); + DataUtils.appendMap(buff, "root", metaRootPos); + DataUtils.appendMap(buff, "time", time); + if (unused != 0) { + DataUtils.appendMap(buff, "unused", unused); + } + DataUtils.appendMap(buff, "version", version); + return buff.toString(); + } + + byte[] getFooterBytes() { + StringBuilder buff = new StringBuilder(FOOTER_LENGTH); + DataUtils.appendMap(buff, "chunk", id); + DataUtils.appendMap(buff, "block", block); + DataUtils.appendMap(buff, "version", version); + byte[] bytes = buff.toString().getBytes(StandardCharsets.ISO_8859_1); + int checksum = DataUtils.getFletcher32(bytes, 0, bytes.length); + DataUtils.appendMap(buff, "fletcher", checksum); + while (buff.length() < FOOTER_LENGTH - 1) { + buff.append(' '); + } + buff.append('\n'); + return buff.toString().getBytes(StandardCharsets.ISO_8859_1); + } + + @Override + public String toString() { + return asString(); + } + +} + diff --git a/modules/h2/src/main/java/org/h2/mvstore/ConcurrentArrayList.java b/modules/h2/src/main/java/org/h2/mvstore/ConcurrentArrayList.java new file mode 100644 index 0000000000000..1979c24344707 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/ConcurrentArrayList.java @@ -0,0 +1,122 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.util.Arrays; +import java.util.Iterator; + +/** + * A very simple array list that supports concurrent access. + * Internally, it uses immutable objects. + * + * @param <K> the key type + */ +public class ConcurrentArrayList<K> { + + /** + * The array. + */ + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[0]; + + /** + * Get the first element, or null if none. + * + * @return the first element + */ + public K peekFirst() { + K[] a = array; + return a.length == 0 ? null : a[0]; + } + + /** + * Get the last element, or null if none. + * + * @return the last element + */ + public K peekLast() { + K[] a = array; + int len = a.length; + return len == 0 ? null : a[len - 1]; + } + + /** + * Add an element at the end. + * + * @param obj the element + */ + public synchronized void add(K obj) { + if (obj == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, "adding null value to list"); + } + int len = array.length; + array = Arrays.copyOf(array, len + 1); + array[len] = obj; + } + + /** + * Remove the first element, if it matches. + * + * @param obj the element to remove + * @return true if the element matched and was removed + */ + public synchronized boolean removeFirst(K obj) { + if (peekFirst() != obj) { + return false; + } + int len = array.length; + @SuppressWarnings("unchecked") + K[] a = (K[]) new Object[len - 1]; + System.arraycopy(array, 1, a, 0, len - 1); + array = a; + return true; + } + + /** + * Remove the last element, if it matches. + * + * @param obj the element to remove + * @return true if the element matched and was removed + */ + public synchronized boolean removeLast(K obj) { + if (peekLast() != obj) { + return false; + } + array = Arrays.copyOf(array, array.length - 1); + return true; + } + + /** + * Get an iterator over all entries.
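The list above is a copy-on-write structure: readers take an immutable snapshot of the array, and synchronized writers publish a fresh copy. A minimal standalone sketch of that idiom (names hypothetical, not the patch's own class):

```java
import java.util.Arrays;

public class CowListDemo {
    // Minimal copy-on-write list in the spirit of ConcurrentArrayList:
    // readers see an immutable snapshot; writers replace the array.
    static class CowList<K> {
        volatile Object[] array = new Object[0];

        synchronized void add(K obj) {
            Object[] a = Arrays.copyOf(array, array.length + 1);
            a[a.length - 1] = obj;
            array = a; // publish the new snapshot
        }

        @SuppressWarnings("unchecked")
        K peekLast() {
            Object[] a = array; // read one consistent snapshot
            return a.length == 0 ? null : (K) a[a.length - 1];
        }
    }

    public static void main(String[] args) {
        CowList<String> list = new CowList<>();
        list.add("a");
        list.add("b");
        System.out.println(list.peekLast()); // b
    }
}
```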
+ * + * @return the iterator + */ + public Iterator<K> iterator() { + return new Iterator<K>() { + + K[] a = array; + int index; + + @Override + public boolean hasNext() { + return index < a.length; + } + + @Override + public K next() { + return a[index++]; + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException("remove"); + } + + }; + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/Cursor.java b/modules/h2/src/main/java/org/h2/mvstore/Cursor.java new file mode 100644 index 0000000000000..cd78fdfb0aee1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/Cursor.java @@ -0,0 +1,156 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.util.Iterator; + +/** + * A cursor to iterate over elements in ascending order. + * + * @param <K> the key type + * @param <V> the value type + */ +public class Cursor<K, V> implements Iterator<K> { + + private final MVMap<K, V> map; + private final K from; + private CursorPos pos; + private K current, last; + private V currentValue, lastValue; + private Page lastPage; + private final Page root; + private boolean initialized; + + Cursor(MVMap<K, V> map, Page root, K from) { + this.map = map; + this.root = root; + this.from = from; + } + + @Override + public boolean hasNext() { + if (!initialized) { + min(root, from); + initialized = true; + fetchNext(); + } + return current != null; + } + + @Override + public K next() { + hasNext(); + K c = current; + last = current; + lastValue = currentValue; + lastPage = pos == null ? null : pos.page; + fetchNext(); + return c; + } + + /** + * Get the last read key if there was one. + * + * @return the key or null + */ + public K getKey() { + return last; + } + + /** + * Get the last read value if there was one.
+ * + * @return the value or null + */ + public V getValue() { + return lastValue; + } + + Page getPage() { + return lastPage; + } + + /** + * Skip over that many entries. This method is relatively fast (for this map + * implementation) even if many entries need to be skipped. + * + * @param n the number of entries to skip + */ + public void skip(long n) { + if (!hasNext()) { + return; + } + if (n < 10) { + while (n-- > 0) { + fetchNext(); + } + return; + } + long index = map.getKeyIndex(current); + K k = map.getKey(index + n); + pos = null; + min(root, k); + fetchNext(); + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException( + "Removing is not supported"); + } + + /** + * Fetch the next entry that is equal or larger than the given key, starting + * from the given page. This method retains the stack. + * + * @param p the page to start + * @param from the key to search + */ + private void min(Page p, K from) { + while (true) { + if (p.isLeaf()) { + int x = from == null ? 0 : p.binarySearch(from); + if (x < 0) { + x = -x - 1; + } + pos = new CursorPos(p, x, pos); + break; + } + int x = from == null ? -1 : p.binarySearch(from); + if (x < 0) { + x = -x - 1; + } else { + x++; + } + pos = new CursorPos(p, x + 1, pos); + p = p.getChildPage(x); + } + } + + /** + * Fetch the next entry if there is one. 
+ */ + @SuppressWarnings("unchecked") + private void fetchNext() { + while (pos != null) { + if (pos.index < pos.page.getKeyCount()) { + int index = pos.index++; + current = (K) pos.page.getKey(index); + currentValue = (V) pos.page.getValue(index); + return; + } + pos = pos.parent; + if (pos == null) { + break; + } + if (pos.index < map.getChildPageCount(pos.page)) { + min(pos.page.getChildPage(pos.index++), null); + } + } + current = null; + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/CursorPos.java b/modules/h2/src/main/java/org/h2/mvstore/CursorPos.java new file mode 100644 index 0000000000000..65e41712ad15e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/CursorPos.java @@ -0,0 +1,35 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +/** + * A position in a cursor + */ +public class CursorPos { + + /** + * The current page. + */ + public Page page; + + /** + * The current index. + */ + public int index; + + /** + * The position in the parent page, if any. + */ + public final CursorPos parent; + + public CursorPos(Page page, int index, CursorPos parent) { + this.page = page; + this.index = index; + this.parent = parent; + } + +} + diff --git a/modules/h2/src/main/java/org/h2/mvstore/DataUtils.java b/modules/h2/src/main/java/org/h2/mvstore/DataUtils.java new file mode 100644 index 0000000000000..4938dc74e1243 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/DataUtils.java @@ -0,0 +1,1079 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.io.EOFException; +import java.io.IOException; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.charset.StandardCharsets; +import java.text.MessageFormat; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Map; + +import org.h2.engine.Constants; + +/** + * Utility methods + */ +public final class DataUtils { + + /** + * An error occurred while reading from the file. + */ + public static final int ERROR_READING_FAILED = 1; + + /** + * An error occurred when trying to write to the file. + */ + public static final int ERROR_WRITING_FAILED = 2; + + /** + * An internal error occurred. This could be a bug, or a memory corruption + * (for example caused by out of memory). + */ + public static final int ERROR_INTERNAL = 3; + + /** + * The object is already closed. + */ + public static final int ERROR_CLOSED = 4; + + /** + * The file format is not supported. + */ + public static final int ERROR_UNSUPPORTED_FORMAT = 5; + + /** + * The file is corrupt or (for encrypted files) the encryption key is wrong. + */ + public static final int ERROR_FILE_CORRUPT = 6; + + /** + * The file is locked. + */ + public static final int ERROR_FILE_LOCKED = 7; + + /** + * An error occurred when serializing or de-serializing. + */ + public static final int ERROR_SERIALIZATION = 8; + + /** + * The application was trying to read data from a chunk that is no longer + * available. + */ + public static final int ERROR_CHUNK_NOT_FOUND = 9; + + /** + * The block in the stream store was not found. + */ + public static final int ERROR_BLOCK_NOT_FOUND = 50; + + /** + * The transaction store is corrupt. + */ + public static final int ERROR_TRANSACTION_CORRUPT = 100; + + /** + * An entry is still locked by another transaction. + */ + public static final int ERROR_TRANSACTION_LOCKED = 101; + + /** + * There are too many open transactions. 
+ */ + public static final int ERROR_TOO_MANY_OPEN_TRANSACTIONS = 102; + + /** + * The transaction store is in an illegal state (for example, not yet + * initialized). + */ + public static final int ERROR_TRANSACTION_ILLEGAL_STATE = 103; + + /** + * The type for leaf page. + */ + public static final int PAGE_TYPE_LEAF = 0; + + /** + * The type for node page. + */ + public static final int PAGE_TYPE_NODE = 1; + + /** + * The bit mask for compressed pages (compression level fast). + */ + public static final int PAGE_COMPRESSED = 2; + + /** + * The bit mask for compressed pages (compression level high). + */ + public static final int PAGE_COMPRESSED_HIGH = 2 + 4; + + /** + * The maximum length of a variable size int. + */ + public static final int MAX_VAR_INT_LEN = 5; + + /** + * The maximum length of a variable size long. + */ + public static final int MAX_VAR_LONG_LEN = 10; + + /** + * The maximum integer that needs less space when using variable size + * encoding (only 3 bytes instead of 4). + */ + public static final int COMPRESSED_VAR_INT_MAX = 0x1fffff; + + /** + * The maximum long that needs less space when using variable size + * encoding (only 7 bytes instead of 8). + */ + public static final long COMPRESSED_VAR_LONG_MAX = 0x1ffffffffffffL; + + /** + * The estimated number of bytes used per page object. + */ + public static final int PAGE_MEMORY = 128; + + /** + * The estimated number of bytes used per child entry. + */ + public static final int PAGE_MEMORY_CHILD = 16; + + /** + * The marker size of a very large page. + */ + public static final int PAGE_LARGE = 2 * 1024 * 1024; + + /** + * Get the length of the variable size int. 
+ * + * @param x the value + * @return the length in bytes + */ + public static int getVarIntLen(int x) { + if ((x & (-1 << 7)) == 0) { + return 1; + } else if ((x & (-1 << 14)) == 0) { + return 2; + } else if ((x & (-1 << 21)) == 0) { + return 3; + } else if ((x & (-1 << 28)) == 0) { + return 4; + } + return 5; + } + + /** + * Get the length of the variable size long. + * + * @param x the value + * @return the length in bytes + */ + public static int getVarLongLen(long x) { + int i = 1; + while (true) { + x >>>= 7; + if (x == 0) { + return i; + } + i++; + } + } + + /** + * Read a variable size int. + * + * @param buff the source buffer + * @return the value + */ + public static int readVarInt(ByteBuffer buff) { + int b = buff.get(); + if (b >= 0) { + return b; + } + // a separate function so that this one can be inlined + return readVarIntRest(buff, b); + } + + private static int readVarIntRest(ByteBuffer buff, int b) { + int x = b & 0x7f; + b = buff.get(); + if (b >= 0) { + return x | (b << 7); + } + x |= (b & 0x7f) << 7; + b = buff.get(); + if (b >= 0) { + return x | (b << 14); + } + x |= (b & 0x7f) << 14; + b = buff.get(); + if (b >= 0) { + return x | b << 21; + } + x |= ((b & 0x7f) << 21) | (buff.get() << 28); + return x; + } + + /** + * Read a variable size long. + * + * @param buff the source buffer + * @return the value + */ + public static long readVarLong(ByteBuffer buff) { + long x = buff.get(); + if (x >= 0) { + return x; + } + x &= 0x7f; + for (int s = 7; s < 64; s += 7) { + long b = buff.get(); + x |= (b & 0x7f) << s; + if (b >= 0) { + break; + } + } + return x; + } + + /** + * Write a variable size int. + * + * @param out the output stream + * @param x the value + */ + public static void writeVarInt(OutputStream out, int x) throws IOException { + while ((x & ~0x7f) != 0) { + out.write((byte) (0x80 | (x & 0x7f))); + x >>>= 7; + } + out.write((byte) x); + } + + /** + * Write a variable size int. 
+ * + * @param buff the source buffer + * @param x the value + */ + public static void writeVarInt(ByteBuffer buff, int x) { + while ((x & ~0x7f) != 0) { + buff.put((byte) (0x80 | (x & 0x7f))); + x >>>= 7; + } + buff.put((byte) x); + } + + /** + * Write characters from a string (without the length). + * + * @param buff the target buffer (must be large enough) + * @param s the string + * @param len the number of characters + */ + public static void writeStringData(ByteBuffer buff, + String s, int len) { + for (int i = 0; i < len; i++) { + int c = s.charAt(i); + if (c < 0x80) { + buff.put((byte) c); + } else if (c >= 0x800) { + buff.put((byte) (0xe0 | (c >> 12))); + buff.put((byte) (((c >> 6) & 0x3f))); + buff.put((byte) (c & 0x3f)); + } else { + buff.put((byte) (0xc0 | (c >> 6))); + buff.put((byte) (c & 0x3f)); + } + } + } + + /** + * Read a string. + * + * @param buff the source buffer + * @param len the number of characters + * @return the value + */ + public static String readString(ByteBuffer buff, int len) { + char[] chars = new char[len]; + for (int i = 0; i < len; i++) { + int x = buff.get() & 0xff; + if (x < 0x80) { + chars[i] = (char) x; + } else if (x >= 0xe0) { + chars[i] = (char) (((x & 0xf) << 12) + + ((buff.get() & 0x3f) << 6) + (buff.get() & 0x3f)); + } else { + chars[i] = (char) (((x & 0x1f) << 6) + (buff.get() & 0x3f)); + } + } + return new String(chars); + } + + /** + * Write a variable size long. + * + * @param buff the target buffer + * @param x the value + */ + public static void writeVarLong(ByteBuffer buff, long x) { + while ((x & ~0x7f) != 0) { + buff.put((byte) (0x80 | (x & 0x7f))); + x >>>= 7; + } + buff.put((byte) x); + } + + /** + * Write a variable size long. 
+ * + * @param out the output stream + * @param x the value + */ + public static void writeVarLong(OutputStream out, long x) + throws IOException { + while ((x & ~0x7f) != 0) { + out.write((byte) (0x80 | (x & 0x7f))); + x >>>= 7; + } + out.write((byte) x); + } + + /** + * Copy the elements of an array, with a gap. + * + * @param src the source array + * @param dst the target array + * @param oldSize the size of the old array + * @param gapIndex the index of the gap + */ + public static void copyWithGap(Object src, Object dst, int oldSize, + int gapIndex) { + if (gapIndex > 0) { + System.arraycopy(src, 0, dst, 0, gapIndex); + } + if (gapIndex < oldSize) { + System.arraycopy(src, gapIndex, dst, gapIndex + 1, oldSize + - gapIndex); + } + } + + /** + * Copy the elements of an array, and remove one element. + * + * @param src the source array + * @param dst the target array + * @param oldSize the size of the old array + * @param removeIndex the index of the entry to remove + */ + public static void copyExcept(Object src, Object dst, int oldSize, + int removeIndex) { + if (removeIndex > 0 && oldSize > 0) { + System.arraycopy(src, 0, dst, 0, removeIndex); + } + if (removeIndex < oldSize) { + System.arraycopy(src, removeIndex + 1, dst, removeIndex, oldSize + - removeIndex - 1); + } + } + + /** + * Read from a file channel until the buffer is full. + * The buffer is rewind after reading. 
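Editor's note: the 7-bit variable-size integer encoding implemented by getVarIntLen/readVarInt/writeVarInt above can be sketched standalone as follows. The class and method names here (VarIntDemo, encodedLength) are illustrative only and not part of the H2 patch; each byte carries 7 payload bits, and a set high bit means another byte follows.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

/** Standalone sketch of the varint scheme used by DataUtils. */
public class VarIntDemo {

    // Mirrors DataUtils.writeVarInt: emit 7 bits at a time,
    // high bit set on every byte except the last.
    public static void writeVarInt(ByteArrayOutputStream out, int x) {
        while ((x & ~0x7f) != 0) {
            out.write(0x80 | (x & 0x7f));
            x >>>= 7;
        }
        out.write(x);
    }

    // Mirrors DataUtils.readVarInt (without the inlining split):
    // accumulate 7-bit groups until a byte with a clear high bit.
    public static int readVarInt(ByteBuffer buff) {
        int x = 0;
        for (int s = 0; s < 32; s += 7) {
            int b = buff.get(); // sign-extended; b >= 0 means high bit clear
            x |= (b & 0x7f) << s;
            if (b >= 0) {
                break;
            }
        }
        return x;
    }

    // Convenience: encoded length in bytes (1..5 for an int).
    public static int encodedLength(int x) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarInt(out, x);
        return out.size();
    }
}
```

Note that a negative int always takes the full 5 bytes, which is why the store writes most counters as unsigned values.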
+ * + * @param file the file channel + * @param pos the absolute position within the file + * @param dst the byte buffer + * @throws IllegalStateException if some data could not be read + */ + public static void readFully(FileChannel file, long pos, ByteBuffer dst) { + try { + do { + int len = file.read(dst, pos); + if (len < 0) { + throw new EOFException(); + } + pos += len; + } while (dst.remaining() > 0); + dst.rewind(); + } catch (IOException e) { + long size; + try { + size = file.size(); + } catch (IOException e2) { + size = -1; + } + throw newIllegalStateException( + ERROR_READING_FAILED, + "Reading from {0} failed; file length {1} " + + "read length {2} at {3}", + file, size, dst.remaining(), pos, e); + } + } + + /** + * Write to a file channel. + * + * @param file the file channel + * @param pos the absolute position within the file + * @param src the source buffer + */ + public static void writeFully(FileChannel file, long pos, ByteBuffer src) { + try { + int off = 0; + do { + int len = file.write(src, pos + off); + off += len; + } while (src.remaining() > 0); + } catch (IOException e) { + throw newIllegalStateException( + ERROR_WRITING_FAILED, + "Writing to {0} failed; length {1} at {2}", + file, src.remaining(), pos, e); + } + } + + /** + * Convert the length to a length code 0..31. 31 means more than 1 MB. 
+ * + * @param len the length + * @return the length code + */ + public static int encodeLength(int len) { + if (len <= 32) { + return 0; + } + int code = Integer.numberOfLeadingZeros(len); + int remaining = len << (code + 1); + code += code; + if ((remaining & (1 << 31)) != 0) { + code--; + } + if ((remaining << 1) != 0) { + code--; + } + code = Math.min(31, 52 - code); + // alternative code (slower): + // int x = len; + // int shift = 0; + // while (x > 3) { + // shift++; + // x = (x >>> 1) + (x & 1); + // } + // shift = Math.max(0, shift - 4); + // int code = (shift << 1) + (x & 1); + // code = Math.min(31, code); + return code; + } + + /** + * Get the chunk id from the position. + * + * @param pos the position + * @return the chunk id + */ + public static int getPageChunkId(long pos) { + return (int) (pos >>> 38); + } + + /** + * Get the maximum length for the given code. + * For the code 31, PAGE_LARGE is returned. + * + * @param pos the position + * @return the maximum length + */ + public static int getPageMaxLength(long pos) { + int code = (int) ((pos >> 1) & 31); + if (code == 31) { + return PAGE_LARGE; + } + return (2 + (code & 1)) << ((code >> 1) + 4); + } + + /** + * Get the offset from the position. + * + * @param pos the position + * @return the offset + */ + public static int getPageOffset(long pos) { + return (int) (pos >> 6); + } + + /** + * Get the page type from the position. + * + * @param pos the position + * @return the page type (PAGE_TYPE_NODE or PAGE_TYPE_LEAF) + */ + public static int getPageType(long pos) { + return ((int) pos) & 1; + } + + /** + * Get the position of this page. The following information is encoded in + * the position: the chunk id, the offset, the maximum length, and the type + * (node or leaf). 
+ * + * @param chunkId the chunk id + * @param offset the offset + * @param length the length + * @param type the page type (1 for node, 0 for leaf) + * @return the position + */ + public static long getPagePos(int chunkId, int offset, + int length, int type) { + long pos = (long) chunkId << 38; + pos |= (long) offset << 6; + pos |= encodeLength(length) << 1; + pos |= type; + return pos; + } + + /** + * Calculate a check value for the given integer. A check value is mean to + * verify the data is consistent with a high probability, but not meant to + * protect against media failure or deliberate changes. + * + * @param x the value + * @return the check value + */ + public static short getCheckValue(int x) { + return (short) ((x >> 16) ^ x); + } + + /** + * Append a map to the string builder, sorted by key. + * + * @param buff the target buffer + * @param map the map + * @return the string builder + */ + public static StringBuilder appendMap(StringBuilder buff, HashMap map) { + Object[] keys = map.keySet().toArray(); + Arrays.sort(keys); + for (Object k : keys) { + String key = (String) k; + Object value = map.get(key); + if (value instanceof Long) { + appendMap(buff, key, (long) value); + } else if (value instanceof Integer) { + appendMap(buff, key, (int) value); + } else { + appendMap(buff, key, value.toString()); + } + } + return buff; + } + + private static StringBuilder appendMapKey(StringBuilder buff, String key) { + if (buff.length() > 0) { + buff.append(','); + } + return buff.append(key).append(':'); + } + + /** + * Append a key-value pair to the string builder. Keys may not contain a + * colon. Values that contain a comma or a double quote are enclosed in + * double quotes, with special characters escaped using a backslash. 
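Editor's note: the page-position helpers above (getPagePos, getPageChunkId, getPageOffset, getPageType) pack four fields into one long: chunk id in bits 38..63, offset in bits 6..37, a 5-bit length code in bits 1..5, and the leaf/node type in bit 0. A standalone sketch (PagePosDemo is an illustrative name; the real code derives the length code via encodeLength, here it is passed in directly):

```java
/** Sketch of the MVStore page-position bit layout. */
public class PagePosDemo {

    public static final int PAGE_TYPE_LEAF = 0;
    public static final int PAGE_TYPE_NODE = 1;

    // Pack chunk id, byte offset, 5-bit length code and page type.
    public static long pagePos(int chunkId, int offset, int lengthCode, int type) {
        return ((long) chunkId << 38)
                | ((long) offset << 6)
                | ((long) lengthCode << 1)
                | type;
    }

    public static int chunkId(long pos) {
        return (int) (pos >>> 38);
    }

    // The int cast keeps exactly the 32-bit offset field.
    public static int offset(long pos) {
        return (int) (pos >> 6);
    }

    public static int lengthCode(long pos) {
        return (int) ((pos >> 1) & 31);
    }

    public static int pageType(long pos) {
        return ((int) pos) & 1;
    }
}
```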
+ * + * @param buff the target buffer + * @param key the key + * @param value the value + */ + public static void appendMap(StringBuilder buff, String key, String value) { + appendMapKey(buff, key); + if (value.indexOf(',') < 0 && value.indexOf('\"') < 0) { + buff.append(value); + } else { + buff.append('\"'); + for (int i = 0, size = value.length(); i < size; i++) { + char c = value.charAt(i); + if (c == '\"') { + buff.append('\\'); + } + buff.append(c); + } + buff.append('\"'); + } + } + + /** + * Append a key-value pair to the string builder. Keys may not contain a + * colon. + * + * @param buff the target buffer + * @param key the key + * @param value the value + */ + public static void appendMap(StringBuilder buff, String key, long value) { + appendMapKey(buff, key).append(Long.toHexString(value)); + } + + /** + * Append a key-value pair to the string builder. Keys may not contain a + * colon. + * + * @param buff the target buffer + * @param key the key + * @param value the value + */ + public static void appendMap(StringBuilder buff, String key, int value) { + appendMapKey(buff, key).append(Integer.toHexString(value)); + } + + /** + * @param buff output buffer, should be empty + * @param s parsed string + * @param i offset to parse from + * @param size stop offset (exclusive) + * @return new offset + */ + private static int parseMapValue(StringBuilder buff, String s, int i, int size) { + while (i < size) { + char c = s.charAt(i++); + if (c == ',') { + break; + } else if (c == '\"') { + while (i < size) { + c = s.charAt(i++); + if (c == '\\') { + if (i == size) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, "Not a map: {0}", s); + } + c = s.charAt(i++); + } else if (c == '\"') { + break; + } + buff.append(c); + } + } else { + buff.append(c); + } + } + return i; + } + + /** + * Parse a key-value pair list. 
+ * + * @param s the list + * @return the map + * @throws IllegalStateException if parsing failed + */ + public static HashMap parseMap(String s) { + HashMap map = new HashMap<>(); + StringBuilder buff = new StringBuilder(); + for (int i = 0, size = s.length(); i < size;) { + int startKey = i; + i = s.indexOf(':', i); + if (i < 0) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, "Not a map: {0}", s); + } + String key = s.substring(startKey, i++); + i = parseMapValue(buff, s, i, size); + map.put(key, buff.toString()); + buff.setLength(0); + } + return map; + } + + /** + * Parse a key-value pair list and checks its checksum. + * + * @param bytes encoded map + * @return the map without mapping for {@code "fletcher"}, or {@code null} if checksum is wrong + * @throws IllegalStateException if parsing failed + */ + public static HashMap parseChecksummedMap(byte[] bytes) { + int start = 0, end = bytes.length; + while (start < end && bytes[start] <= ' ') { + start++; + } + while (start < end && bytes[end - 1] <= ' ') { + end--; + } + String s = new String(bytes, start, end - start, StandardCharsets.ISO_8859_1); + HashMap map = new HashMap<>(); + StringBuilder buff = new StringBuilder(); + for (int i = 0, size = s.length(); i < size;) { + int startKey = i; + i = s.indexOf(':', i); + if (i < 0) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, "Not a map: {0}", s); + } + if (i - startKey == 8 && s.regionMatches(startKey, "fletcher", 0, 8)) { + parseMapValue(buff, s, i + 1, size); + int check = (int) Long.parseLong(buff.toString(), 16); + if (check == getFletcher32(bytes, start, startKey - 1)) { + return map; + } + // Corrupted map + return null; + } + String key = s.substring(startKey, i++); + i = parseMapValue(buff, s, i, size); + map.put(key, buff.toString()); + buff.setLength(0); + } + // Corrupted map + return null; + } + + /** + * Parse a name from key-value pair list. 
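Editor's note: the metadata format handled by appendMap/parseMap above is a comma-separated list of key:value pairs, where a value containing ',' or '"' is double-quoted with backslash escaping of quotes. A simplified round-trip sketch (MapFormatDemo is an illustrative name; error handling and the checksum variant are omitted):

```java
import java.util.HashMap;
import java.util.Map;

/** Simplified sketch of the key:value metadata format. */
public class MapFormatDemo {

    // Mirrors appendMap(StringBuilder, String, String) for one pair.
    public static String append(String key, String value) {
        StringBuilder buff = new StringBuilder();
        buff.append(key).append(':');
        if (value.indexOf(',') < 0 && value.indexOf('"') < 0) {
            buff.append(value);
        } else {
            buff.append('"');
            for (int i = 0; i < value.length(); i++) {
                char c = value.charAt(i);
                if (c == '"') {
                    buff.append('\\'); // escape embedded quotes
                }
                buff.append(c);
            }
            buff.append('"');
        }
        return buff.toString();
    }

    // Mirrors parseMap/parseMapValue, without corruption checks.
    public static Map<String, String> parse(String s) {
        Map<String, String> map = new HashMap<>();
        int i = 0, size = s.length();
        while (i < size) {
            int colon = s.indexOf(':', i);
            String key = s.substring(i, colon);
            i = colon + 1;
            StringBuilder buff = new StringBuilder();
            while (i < size) {
                char c = s.charAt(i++);
                if (c == ',') {
                    break;
                } else if (c == '"') {
                    while (i < size) {
                        c = s.charAt(i++);
                        if (c == '\\') {
                            c = s.charAt(i++); // take escaped char literally
                        } else if (c == '"') {
                            break;
                        }
                        buff.append(c);
                    }
                } else {
                    buff.append(c);
                }
            }
            map.put(key, buff.toString());
        }
        return map;
    }
}
```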
+ * + * @param s the list + * @return value of name item, or {@code null} + * @throws IllegalStateException if parsing failed + */ + public static String getMapName(String s) { + return getFromMap(s, "name"); + } + + /** + * Parse a specified pair from key-value pair list. + * + * @param s the list + * @param key the name of the key + * @return value of the specified item, or {@code null} + * @throws IllegalStateException if parsing failed + */ + public static String getFromMap(String s, String key) { + int keyLength = key.length(); + for (int i = 0, size = s.length(); i < size;) { + int startKey = i; + i = s.indexOf(':', i); + if (i < 0) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, "Not a map: {0}", s); + } + if (i++ - startKey == keyLength && s.regionMatches(startKey, key, 0, keyLength)) { + StringBuilder buff = new StringBuilder(); + parseMapValue(buff, s, i, size); + return buff.toString(); + } else { + while (i < size) { + char c = s.charAt(i++); + if (c == ',') { + break; + } else if (c == '\"') { + while (i < size) { + c = s.charAt(i++); + if (c == '\\') { + if (i++ == size) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, "Not a map: {0}", s); + } + } else if (c == '\"') { + break; + } + } + } + } + } + } + return null; + } + + /** + * Calculate the Fletcher32 checksum. 
+ * + * @param bytes the bytes + * @param offset initial offset + * @param length the message length (if odd, 0 is appended) + * @return the checksum + */ + public static int getFletcher32(byte[] bytes, int offset, int length) { + int s1 = 0xffff, s2 = 0xffff; + int i = offset, len = offset + (length & ~1); + while (i < len) { + // reduce after 360 words (each word is two bytes) + for (int end = Math.min(i + 720, len); i < end;) { + int x = ((bytes[i++] & 0xff) << 8) | (bytes[i++] & 0xff); + s2 += s1 += x; + } + s1 = (s1 & 0xffff) + (s1 >>> 16); + s2 = (s2 & 0xffff) + (s2 >>> 16); + } + if ((length & 1) != 0) { + // odd length: append 0 + int x = (bytes[i] & 0xff) << 8; + s2 += s1 += x; + } + s1 = (s1 & 0xffff) + (s1 >>> 16); + s2 = (s2 & 0xffff) + (s2 >>> 16); + return (s2 << 16) | s1; + } + + /** + * Throw an IllegalArgumentException if the argument is invalid. + * + * @param test true if the argument is valid + * @param message the message + * @param arguments the arguments + * @throws IllegalArgumentException if the argument is invalid + */ + public static void checkArgument(boolean test, String message, + Object... arguments) { + if (!test) { + throw newIllegalArgumentException(message, arguments); + } + } + + /** + * Create a new IllegalArgumentException. + * + * @param message the message + * @param arguments the arguments + * @return the exception + */ + public static IllegalArgumentException newIllegalArgumentException( + String message, Object... arguments) { + return initCause(new IllegalArgumentException( + formatMessage(0, message, arguments)), + arguments); + } + + /** + * Create a new UnsupportedOperationException. + * + * @param message the message + * @return the exception + */ + public static UnsupportedOperationException + newUnsupportedOperationException(String message) { + return new UnsupportedOperationException(formatMessage(0, message)); + } + + /** + * Create a new IllegalStateException. 
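Editor's note: getFletcher32 above is the checksum guarding the store header and chunk metadata. It processes big-endian 16-bit words with two running sums, reduced modulo 0xffff every 360 words to avoid overflow; an odd-length input is zero-padded. A self-contained copy for experimentation (Fletcher32Demo is an illustrative name):

```java
/** Standalone copy of the Fletcher-32 checksum used by DataUtils. */
public class Fletcher32Demo {

    public static int checksum(byte[] bytes, int offset, int length) {
        int s1 = 0xffff, s2 = 0xffff;
        int i = offset, len = offset + (length & ~1);
        while (i < len) {
            // reduce after 360 words (each word is two bytes)
            for (int end = Math.min(i + 720, len); i < end;) {
                int x = ((bytes[i++] & 0xff) << 8) | (bytes[i++] & 0xff);
                s2 += s1 += x;
            }
            s1 = (s1 & 0xffff) + (s1 >>> 16);
            s2 = (s2 & 0xffff) + (s2 >>> 16);
        }
        if ((length & 1) != 0) {
            // odd length: append a zero byte to complete the last word
            int x = (bytes[i] & 0xff) << 8;
            s2 += s1 += x;
        }
        s1 = (s1 & 0xffff) + (s1 >>> 16);
        s2 = (s2 & 0xffff) + (s2 >>> 16);
        return (s2 << 16) | s1;
    }
}
```

This is the checksum whose hex value is stored under the "fletcher" key checked by parseChecksummedMap.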
+ * + * @param errorCode the error code + * @param message the message + * @param arguments the arguments + * @return the exception + */ + public static IllegalStateException newIllegalStateException( + int errorCode, String message, Object... arguments) { + return initCause(new IllegalStateException( + formatMessage(errorCode, message, arguments)), + arguments); + } + + private static T initCause(T e, Object... arguments) { + int size = arguments.length; + if (size > 0) { + Object o = arguments[size - 1]; + if (o instanceof Throwable) { + e.initCause((Throwable) o); + } + } + return e; + } + + /** + * Format an error message. + * + * @param errorCode the error code + * @param message the message + * @param arguments the arguments + * @return the formatted message + */ + public static String formatMessage(int errorCode, String message, + Object... arguments) { + // convert arguments to strings, to avoid locale specific formatting + arguments = arguments.clone(); + for (int i = 0; i < arguments.length; i++) { + Object a = arguments[i]; + if (!(a instanceof Exception)) { + String s = a == null ? "null" : a.toString(); + if (s.length() > 1000) { + s = s.substring(0, 1000) + "..."; + } + arguments[i] = s; + } + } + return MessageFormat.format(message, arguments) + + " [" + Constants.VERSION_MAJOR + "." + + Constants.VERSION_MINOR + "." + Constants.BUILD_ID + + "/" + errorCode + "]"; + } + + /** + * Get the error code from an exception message. + * + * @param m the message + * @return the error code, or 0 if none + */ + public static int getErrorCode(String m) { + if (m != null && m.endsWith("]")) { + int dash = m.lastIndexOf('/'); + if (dash >= 0) { + String s = m.substring(dash + 1, m.length() - 1); + try { + return Integer.parseInt(s); + } catch (NumberFormatException e) { + // no error code + } + } + } + return 0; + } + + /** + * Read a hex long value from a map. 
+ * + * @param map the map + * @param key the key + * @param defaultValue if the value is null + * @return the parsed value + * @throws IllegalStateException if parsing fails + */ + public static long readHexLong(Map map, String key, long defaultValue) { + Object v = map.get(key); + if (v == null) { + return defaultValue; + } else if (v instanceof Long) { + return (Long) v; + } + try { + return parseHexLong((String) v); + } catch (NumberFormatException e) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, + "Error parsing the value {0}", v, e); + } + } + + /** + * Parse an unsigned, hex long. + * + * @param x the string + * @return the parsed value + * @throws IllegalStateException if parsing fails + */ + public static long parseHexLong(String x) { + try { + if (x.length() == 16) { + // avoid problems with overflow + // in Java 8, this special case is not needed + return (Long.parseLong(x.substring(0, 8), 16) << 32) | + Long.parseLong(x.substring(8, 16), 16); + } + return Long.parseLong(x, 16); + } catch (NumberFormatException e) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, + "Error parsing the value {0}", x, e); + } + } + + /** + * Parse an unsigned, hex long. + * + * @param x the string + * @return the parsed value + * @throws IllegalStateException if parsing fails + */ + public static int parseHexInt(String x) { + try { + // avoid problems with overflow + // in Java 8, we can use Integer.parseLong(x, 16); + return (int) Long.parseLong(x, 16); + } catch (NumberFormatException e) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, + "Error parsing the value {0}", x, e); + } + } + + /** + * Read a hex int value from a map. 
+ * + * @param map the map + * @param key the key + * @param defaultValue if the value is null + * @return the parsed value + * @throws IllegalStateException if parsing fails + */ + public static int readHexInt(HashMap map, String key, int defaultValue) { + Object v = map.get(key); + if (v == null) { + return defaultValue; + } else if (v instanceof Integer) { + return (Integer) v; + } + try { + // support unsigned hex value + return (int) Long.parseLong((String) v, 16); + } catch (NumberFormatException e) { + throw newIllegalStateException(ERROR_FILE_CORRUPT, + "Error parsing the value {0}", v, e); + } + } + + /** + * An entry of a map. + * + * @param the key type + * @param the value type + */ + public static final class MapEntry implements Map.Entry { + + private final K key; + private final V value; + + public MapEntry(K key, V value) { + this.key = key; + this.value = value; + } + + @Override + public K getKey() { + return key; + } + + @Override + public V getValue() { + return value; + } + + @Override + public V setValue(V value) { + throw newUnsupportedOperationException("Updating the value is not supported"); + } + + } + + /** + * Get the configuration parameter value, or default. 
+ * + * @param config the configuration + * @param key the key + * @param defaultValue the default + * @return the configured value or default + */ + public static int getConfigParam(Map config, String key, int defaultValue) { + Object o = config.get(key); + if (o instanceof Number) { + return ((Number) o).intValue(); + } else if (o != null) { + try { + return Integer.decode(o.toString()); + } catch (NumberFormatException e) { + // ignore + } + } + return defaultValue; + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/FileStore.java b/modules/h2/src/main/java/org/h2/mvstore/FileStore.java new file mode 100644 index 0000000000000..79b029cb569e9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/FileStore.java @@ -0,0 +1,378 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.nio.channels.OverlappingFileLockException; +import java.util.concurrent.atomic.AtomicLong; +import org.h2.mvstore.cache.FilePathCache; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathDisk; +import org.h2.store.fs.FilePathEncrypt; +import org.h2.store.fs.FilePathNio; + +/** + * The default storage mechanism of the MVStore. This implementation persists + * data to a file. The file store is responsible to persist data and for free + * space management. + */ +public class FileStore { + + /** + * The number of read operations. + */ + protected final AtomicLong readCount = new AtomicLong(0); + + /** + * The number of read bytes. + */ + protected final AtomicLong readBytes = new AtomicLong(0); + + /** + * The number of write operations. + */ + protected final AtomicLong writeCount = new AtomicLong(0); + + /** + * The number of written bytes. 
+ */ + protected final AtomicLong writeBytes = new AtomicLong(0); + + /** + * The free spaces between the chunks. The first block to use is block 2 + * (the first two blocks are the store header). + */ + protected final FreeSpaceBitSet freeSpace = + new FreeSpaceBitSet(2, MVStore.BLOCK_SIZE); + + /** + * The file name. + */ + protected String fileName; + + /** + * Whether this store is read-only. + */ + protected boolean readOnly; + + /** + * The file size (cached). + */ + protected long fileSize; + + /** + * The file. + */ + protected FileChannel file; + + /** + * The encrypted file (if encryption is used). + */ + protected FileChannel encryptedFile; + + /** + * The file lock. + */ + protected FileLock fileLock; + + @Override + public String toString() { + return fileName; + } + + /** + * Read from the file. + * + * @param pos the write position + * @param len the number of bytes to read + * @return the byte buffer + */ + public ByteBuffer readFully(long pos, int len) { + ByteBuffer dst = ByteBuffer.allocate(len); + DataUtils.readFully(file, pos, dst); + readCount.incrementAndGet(); + readBytes.addAndGet(len); + return dst; + } + + /** + * Write to the file. + * + * @param pos the write position + * @param src the source buffer + */ + public void writeFully(long pos, ByteBuffer src) { + int len = src.remaining(); + fileSize = Math.max(fileSize, pos + len); + DataUtils.writeFully(file, pos, src); + writeCount.incrementAndGet(); + writeBytes.addAndGet(len); + } + + /** + * Try to open the file. 
+ * + * @param fileName the file name + * @param readOnly whether the file should only be opened in read-only mode, + * even if the file is writable + * @param encryptionKey the encryption key, or null if encryption is not + * used + */ + public void open(String fileName, boolean readOnly, char[] encryptionKey) { + if (file != null) { + return; + } + if (fileName != null) { + // ensure the Cache file system is registered + FilePathCache.INSTANCE.getScheme(); + FilePath p = FilePath.get(fileName); + // if no explicit scheme was specified, NIO is used + if (p instanceof FilePathDisk && + !fileName.startsWith(p.getScheme() + ":")) { + // ensure the NIO file system is registered + FilePathNio.class.getName(); + fileName = "nio:" + fileName; + } + } + this.fileName = fileName; + FilePath f = FilePath.get(fileName); + FilePath parent = f.getParent(); + if (parent != null && !parent.exists()) { + throw DataUtils.newIllegalArgumentException( + "Directory does not exist: {0}", parent); + } + if (f.exists() && !f.canWrite()) { + readOnly = true; + } + this.readOnly = readOnly; + try { + file = f.open(readOnly ? "r" : "rw"); + if (encryptionKey != null) { + byte[] key = FilePathEncrypt.getPasswordBytes(encryptionKey); + encryptedFile = file; + file = new FilePathEncrypt.FileEncrypt(fileName, key, file); + } + try { + if (readOnly) { + fileLock = file.tryLock(0, Long.MAX_VALUE, true); + } else { + fileLock = file.tryLock(); + } + } catch (OverlappingFileLockException e) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_LOCKED, + "The file is locked: {0}", fileName, e); + } + if (fileLock == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_LOCKED, + "The file is locked: {0}", fileName); + } + fileSize = file.size(); + } catch (IOException e) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_READING_FAILED, + "Could not open file {0}", fileName, e); + } + } + + /** + * Close this store. 
+ */ + public void close() { + try { + if (fileLock != null) { + fileLock.release(); + fileLock = null; + } + file.close(); + freeSpace.clear(); + } catch (Exception e) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_WRITING_FAILED, + "Closing failed for file {0}", fileName, e); + } finally { + file = null; + } + } + + /** + * Flush all changes. + */ + public void sync() { + try { + file.force(true); + } catch (IOException e) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_WRITING_FAILED, + "Could not sync file {0}", fileName, e); + } + } + + /** + * Get the file size. + * + * @return the file size + */ + public long size() { + return fileSize; + } + + /** + * Truncate the file. + * + * @param size the new file size + */ + public void truncate(long size) { + try { + writeCount.incrementAndGet(); + file.truncate(size); + fileSize = Math.min(fileSize, size); + } catch (IOException e) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_WRITING_FAILED, + "Could not truncate file {0} to size {1}", + fileName, size, e); + } + } + + /** + * Get the file instance in use. + *
<p>
    + * The application may read from the file (for example for online backup), + * but not write to it or truncate it. + * + * @return the file + */ + public FileChannel getFile() { + return file; + } + + /** + * Get the encrypted file instance, if encryption is used. + *
<p>
    + * The application may read from the file (for example for online backup), + * but not write to it or truncate it. + * + * @return the encrypted file, or null if encryption is not used + */ + public FileChannel getEncryptedFile() { + return encryptedFile; + } + + /** + * Get the number of write operations since this store was opened. + * For file based stores, this is the number of file write operations. + * + * @return the number of write operations + */ + public long getWriteCount() { + return writeCount.get(); + } + + /** + * Get the number of written bytes since this store was opened. + * + * @return the number of write operations + */ + public long getWriteBytes() { + return writeBytes.get(); + } + + /** + * Get the number of read operations since this store was opened. + * For file based stores, this is the number of file read operations. + * + * @return the number of read operations + */ + public long getReadCount() { + return readCount.get(); + } + + /** + * Get the number of read bytes since this store was opened. + * + * @return the number of write operations + */ + public long getReadBytes() { + return readBytes.get(); + } + + public boolean isReadOnly() { + return readOnly; + } + + /** + * Get the default retention time for this store in milliseconds. + * + * @return the retention time + */ + public int getDefaultRetentionTime() { + return 45_000; + } + + /** + * Mark the space as in use. + * + * @param pos the position in bytes + * @param length the number of bytes + */ + public void markUsed(long pos, int length) { + freeSpace.markUsed(pos, length); + } + + /** + * Allocate a number of blocks and mark them as used. + * + * @param length the number of bytes to allocate + * @return the start position in bytes + */ + public long allocate(int length) { + return freeSpace.allocate(length); + } + + /** + * Mark the space as free. 
+ * + * @param pos the position in bytes + * @param length the number of bytes + */ + public void free(long pos, int length) { + freeSpace.free(pos, length); + } + + public int getFillRate() { + return freeSpace.getFillRate(); + } + + long getFirstFree() { + return freeSpace.getFirstFree(); + } + + long getFileLengthInUse() { + return freeSpace.getLastFree(); + } + + /** + * Mark the file as empty. + */ + public void clear() { + freeSpace.clear(); + } + + /** + * Get the file name. + * + * @return the file name + */ + public String getFileName() { + return fileName; + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/FreeSpaceBitSet.java b/modules/h2/src/main/java/org/h2/mvstore/FreeSpaceBitSet.java new file mode 100644 index 0000000000000..14e613d249bc7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/FreeSpaceBitSet.java @@ -0,0 +1,222 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.util.BitSet; + +import org.h2.util.MathUtils; + +/** + * A free space bit set. + */ +public class FreeSpaceBitSet { + + private static final boolean DETAILED_INFO = false; + + /** + * The first usable block. + */ + private final int firstFreeBlock; + + /** + * The block size in bytes. + */ + private final int blockSize; + + /** + * The bit set. + */ + private final BitSet set = new BitSet(); + + /** + * Create a new free space map. + * + * @param firstFreeBlock the first free block + * @param blockSize the block size + */ + public FreeSpaceBitSet(int firstFreeBlock, int blockSize) { + this.firstFreeBlock = firstFreeBlock; + this.blockSize = blockSize; + clear(); + } + + /** + * Reset the list. + */ + public void clear() { + set.clear(); + set.set(0, firstFreeBlock); + } + + /** + * Check whether one of the blocks is in use. 
+ * + * @param pos the position in bytes + * @param length the number of bytes + * @return true if a block is in use + */ + public boolean isUsed(long pos, int length) { + int start = getBlock(pos); + int blocks = getBlockCount(length); + for (int i = start; i < start + blocks; i++) { + if (!set.get(i)) { + return false; + } + } + return true; + } + + /** + * Check whether one of the blocks is free. + * + * @param pos the position in bytes + * @param length the number of bytes + * @return true if a block is free + */ + public boolean isFree(long pos, int length) { + int start = getBlock(pos); + int blocks = getBlockCount(length); + for (int i = start; i < start + blocks; i++) { + if (set.get(i)) { + return false; + } + } + return true; + } + + /** + * Allocate a number of blocks and mark them as used. + * + * @param length the number of bytes to allocate + * @return the start position in bytes + */ + public long allocate(int length) { + int blocks = getBlockCount(length); + for (int i = 0;;) { + int start = set.nextClearBit(i); + int end = set.nextSetBit(start + 1); + if (end < 0 || end - start >= blocks) { + set.set(start, start + blocks); + return getPos(start); + } + i = end; + } + } + + /** + * Mark the space as in use. + * + * @param pos the position in bytes + * @param length the number of bytes + */ + public void markUsed(long pos, int length) { + int start = getBlock(pos); + int blocks = getBlockCount(length); + set.set(start, start + blocks); + } + + /** + * Mark the space as free. 
+ * + * @param pos the position in bytes + * @param length the number of bytes + */ + public void free(long pos, int length) { + int start = getBlock(pos); + int blocks = getBlockCount(length); + set.clear(start, start + blocks); + } + + private long getPos(int block) { + return (long) block * (long) blockSize; + } + + private int getBlock(long pos) { + return (int) (pos / blockSize); + } + + private int getBlockCount(int length) { + return MathUtils.roundUpInt(length, blockSize) / blockSize; + } + + /** + * Get the fill rate of the space in percent. The value 0 means the space is + * completely free, and 100 means it is completely full. + * + * @return the fill rate (0 - 100) + */ + public int getFillRate() { + int total = set.length(), count = 0; + for (int i = 0; i < total; i++) { + if (set.get(i)) { + count++; + } + } + if (count == 0) { + return 0; + } + return Math.max(1, (int) (100L * count / total)); + } + + /** + * Get the position of the first free space. + * + * @return the position. + */ + public long getFirstFree() { + return getPos(set.nextClearBit(0)); + } + + /** + * Get the position of the last (infinite) free space. + * + * @return the position. 
+ */ + public long getLastFree() { + return getPos(set.previousSetBit(set.size()-1) + 1); + } + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + if (DETAILED_INFO) { + int onCount = 0, offCount = 0; + int on = 0; + for (int i = 0; i < set.length(); i++) { + if (set.get(i)) { + onCount++; + on++; + } else { + offCount++; + } + if ((i & 1023) == 1023) { + buff.append(String.format("%3x", on)).append(' '); + on = 0; + } + } + buff.append('\n') + .append(" on ").append(onCount).append(" off ").append(offCount) + .append(' ').append(100 * onCount / (onCount+offCount)).append("% used "); + } + buff.append('['); + for (int i = 0;;) { + if (i > 0) { + buff.append(", "); + } + int start = set.nextClearBit(i); + buff.append(Integer.toHexString(start)).append('-'); + int end = set.nextSetBit(start + 1); + if (end < 0) { + break; + } + buff.append(Integer.toHexString(end - 1)); + i = end + 1; + } + buff.append(']'); + return buff.toString(); + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/mvstore/MVMap.java b/modules/h2/src/main/java/org/h2/mvstore/MVMap.java new file mode 100644 index 0000000000000..ac191035e8662 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/MVMap.java @@ -0,0 +1,1299 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.util.AbstractList; +import java.util.AbstractMap; +import java.util.AbstractSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentMap; +import org.h2.mvstore.type.DataType; +import org.h2.mvstore.type.ObjectDataType; + +/** + * A stored map. + *
<p>
    + * Read operations can happen concurrently with all other + * operations, without risk of corruption. + *
<p>
    + * Write operations first read the relevant area from disk to memory + * concurrently, and only then modify the data. The in-memory part of write + * operations is synchronized. For scalable concurrent in-memory write + * operations, the map should be split into multiple smaller sub-maps that are + * then synchronized independently. + * + * @param <K> the key class + * @param <V> the value class + */ +public class MVMap<K, V> extends AbstractMap<K, V> + implements ConcurrentMap<K, V> { + + /** + * The store. + */ + protected MVStore store; + + /** + * The current root page (may not be null). + */ + protected volatile Page root; + + /** + * The version used for writing. + */ + protected volatile long writeVersion; + + private int id; + private long createVersion; + private final DataType keyType; + private final DataType valueType; + + private final ConcurrentArrayList<Page> oldRoots = + new ConcurrentArrayList<>(); + + + /** + * Whether the map is closed. Volatile so we don't accidentally write to a + * closed map in multithreaded mode. + */ + private volatile boolean closed; + private boolean readOnly; + private boolean isVolatile; + + protected MVMap(DataType keyType, DataType valueType) { + this.keyType = keyType; + this.valueType = valueType; + } + + /** + * Get the metadata key for the root of the given map id. + * + * @param mapId the map id + * @return the metadata key + */ + static String getMapRootKey(int mapId) { + return "root." + Integer.toHexString(mapId); + } + + /** + * Get the metadata key for the given map id. + * + * @param mapId the map id + * @return the metadata key + */ + static String getMapKey(int mapId) { + return "map." + Integer.toHexString(mapId); + } + + /** + * Open this map with the given store and configuration.
+ * + * @param store the store + * @param id map id + * @param createVersion version in which this map was created + */ + protected void init(MVStore store, int id, long createVersion) { + this.store = store; + this.id = id; + this.createVersion = createVersion; + this.writeVersion = store.getCurrentVersion(); + this.root = Page.createEmpty(this, -1); + } + + /** + * Add or replace a key-value pair. + * + * @param key the key (may not be null) + * @param value the value (may not be null) + * @return the old value if the key existed, or null otherwise + */ + @Override + @SuppressWarnings("unchecked") + public synchronized V put(K key, V value) { + DataUtils.checkArgument(value != null, "The value may not be null"); + beforeWrite(); + long v = writeVersion; + Page p = root.copy(v); + p = splitRootIfNeeded(p, v); + Object result = put(p, v, key, value); + newRoot(p); + return (V) result; + } + + /** + * Split the root page if necessary. + * + * @param p the page + * @param writeVersion the write version + * @return the new sibling + */ + protected Page splitRootIfNeeded(Page p, long writeVersion) { + if (p.getMemory() <= store.getPageSplitSize() || p.getKeyCount() <= 1) { + return p; + } + int at = p.getKeyCount() / 2; + long totalCount = p.getTotalCount(); + Object k = p.getKey(at); + Page split = p.split(at); + Object[] keys = { k }; + Page.PageReference[] children = { + new Page.PageReference(p, p.getPos(), p.getTotalCount()), + new Page.PageReference(split, split.getPos(), split.getTotalCount()), + }; + p = Page.create(this, writeVersion, + keys, null, + children, + totalCount, 0); + return p; + } + + /** + * Add or update a key-value pair. 
+ * + * @param p the page + * @param writeVersion the write version + * @param key the key (may not be null) + * @param value the value (may not be null) + * @return the old value, or null + */ + protected Object put(Page p, long writeVersion, Object key, Object value) { + int index = p.binarySearch(key); + if (p.isLeaf()) { + if (index < 0) { + index = -index - 1; + p.insertLeaf(index, key, value); + return null; + } + return p.setValue(index, value); + } + // p is a node + if (index < 0) { + index = -index - 1; + } else { + index++; + } + Page c = p.getChildPage(index).copy(writeVersion); + if (c.getMemory() > store.getPageSplitSize() && c.getKeyCount() > 1) { + // split on the way down + int at = c.getKeyCount() / 2; + Object k = c.getKey(at); + Page split = c.split(at); + p.setChild(index, split); + p.insertNode(index, k, c); + // now we are not sure where to add + return put(p, writeVersion, key, value); + } + Object result = put(c, writeVersion, key, value); + p.setChild(index, c); + return result; + } + + /** + * Get the first key, or null if the map is empty. + * + * @return the first key, or null + */ + public K firstKey() { + return getFirstLast(true); + } + + /** + * Get the last key, or null if the map is empty. + * + * @return the last key, or null + */ + public K lastKey() { + return getFirstLast(false); + } + + /** + * Get the key at the given index. + *
<p>
    + * This is a O(log(size)) operation. + * + * @param index the index + * @return the key + */ + @SuppressWarnings("unchecked") + public K getKey(long index) { + if (index < 0 || index >= size()) { + return null; + } + Page p = root; + long offset = 0; + while (true) { + if (p.isLeaf()) { + if (index >= offset + p.getKeyCount()) { + return null; + } + return (K) p.getKey((int) (index - offset)); + } + int i = 0, size = getChildPageCount(p); + for (; i < size; i++) { + long c = p.getCounts(i); + if (index < c + offset) { + break; + } + offset += c; + } + if (i == size) { + return null; + } + p = p.getChildPage(i); + } + } + + /** + * Get the key list. The list is a read-only representation of all keys. + *
<p>
    + * The get and indexOf methods are O(log(size)) operations. The result of + * indexOf is cast to an int. + * + * @return the key list + */ + public List<K> keyList() { + return new AbstractList<K>() { + + @Override + public K get(int index) { + return getKey(index); + } + + @Override + public int size() { + return MVMap.this.size(); + } + + @Override + @SuppressWarnings("unchecked") + public int indexOf(Object key) { + return (int) getKeyIndex((K) key); + } + + }; + } + + /** + * Get the index of the given key in the map. + *
<p>
    + * This is an O(log(size)) operation. + *
<p>
    + * If the key was found, the returned value is the index in the key array. + * If not found, the returned value is negative, where -1 means the provided + * key is smaller than any keys. See also Arrays.binarySearch. + * + * @param key the key + * @return the index + */ + public long getKeyIndex(K key) { + if (size() == 0) { + return -1; + } + Page p = root; + long offset = 0; + while (true) { + int x = p.binarySearch(key); + if (p.isLeaf()) { + if (x < 0) { + return -offset + x; + } + return offset + x; + } + if (x < 0) { + x = -x - 1; + } else { + x++; + } + for (int i = 0; i < x; i++) { + offset += p.getCounts(i); + } + p = p.getChildPage(x); + } + } + + /** + * Get the first (lowest) or last (largest) key. + * + * @param first whether to retrieve the first key + * @return the key, or null if the map is empty + */ + @SuppressWarnings("unchecked") + protected K getFirstLast(boolean first) { + if (size() == 0) { + return null; + } + Page p = root; + while (true) { + if (p.isLeaf()) { + return (K) p.getKey(first ? 0 : p.getKeyCount() - 1); + } + p = p.getChildPage(first ? 0 : getChildPageCount(p) - 1); + } + } + + /** + * Get the smallest key that is larger than the given key, or null if no + * such key exists. + * + * @param key the key + * @return the result + */ + public K higherKey(K key) { + return getMinMax(key, false, true); + } + + /** + * Get the smallest key that is larger or equal to this key. + * + * @param key the key + * @return the result + */ + public K ceilingKey(K key) { + return getMinMax(key, false, false); + } + + /** + * Get the largest key that is smaller or equal to this key. + * + * @param key the key + * @return the result + */ + public K floorKey(K key) { + return getMinMax(key, true, false); + } + + /** + * Get the largest key that is smaller than the given key, or null if no + * such key exists. 
+ * + * @param key the key + * @return the result + */ + public K lowerKey(K key) { + return getMinMax(key, true, true); + } + + /** + * Get the smallest or largest key using the given bounds. + * + * @param key the key + * @param min whether to retrieve the smallest key + * @param excluding if the given upper/lower bound is exclusive + * @return the key, or null if no such key exists + */ + protected K getMinMax(K key, boolean min, boolean excluding) { + return getMinMax(root, key, min, excluding); + } + + @SuppressWarnings("unchecked") + private K getMinMax(Page p, K key, boolean min, boolean excluding) { + if (p.isLeaf()) { + int x = p.binarySearch(key); + if (x < 0) { + x = -x - (min ? 2 : 1); + } else if (excluding) { + x += min ? -1 : 1; + } + if (x < 0 || x >= p.getKeyCount()) { + return null; + } + return (K) p.getKey(x); + } + int x = p.binarySearch(key); + if (x < 0) { + x = -x - 1; + } else { + x++; + } + while (true) { + if (x < 0 || x >= getChildPageCount(p)) { + return null; + } + K k = getMinMax(p.getChildPage(x), key, min, excluding); + if (k != null) { + return k; + } + x += min ? -1 : 1; + } + } + + + /** + * Get a value. + * + * @param key the key + * @return the value, or null if not found + */ + @Override + @SuppressWarnings("unchecked") + public V get(Object key) { + return (V) binarySearch(root, key); + } + + /** + * Get the value for the given key, or null if not found. + * + * @param p the page + * @param key the key + * @return the value or null + */ + protected Object binarySearch(Page p, Object key) { + int x = p.binarySearch(key); + if (!p.isLeaf()) { + if (x < 0) { + x = -x - 1; + } else { + x++; + } + p = p.getChildPage(x); + return binarySearch(p, key); + } + if (x >= 0) { + return p.getValue(x); + } + return null; + } + + @Override + public boolean containsKey(Object key) { + return get(key) != null; + } + + /** + * Remove all entries. 
+ */ + @Override + public synchronized void clear() { + beforeWrite(); + root.removeAllRecursive(); + newRoot(Page.createEmpty(this, writeVersion)); + } + + /** + * Close the map. Accessing the data is still possible (to allow concurrent + * reads), but it is marked as closed. + */ + void close() { + closed = true; + } + + public boolean isClosed() { + return closed; + } + + /** + * Remove a key-value pair, if the key exists. + * + * @param key the key (may not be null) + * @return the old value if the key existed, or null otherwise + */ + @Override + @SuppressWarnings("unchecked") + public V remove(Object key) { + beforeWrite(); + V result = get(key); + if (result == null) { + return null; + } + long v = writeVersion; + synchronized (this) { + Page p = root.copy(v); + result = (V) remove(p, v, key); + if (!p.isLeaf() && p.getTotalCount() == 0) { + p.removePage(); + p = Page.createEmpty(this, p.getVersion()); + } + newRoot(p); + } + return result; + } + + /** + * Add a key-value pair if it does not yet exist. + * + * @param key the key (may not be null) + * @param value the new value + * @return the old value if the key existed, or null otherwise + */ + @Override + public synchronized V putIfAbsent(K key, V value) { + V old = get(key); + if (old == null) { + put(key, value); + } + return old; + } + + /** + * Remove a key-value pair if the value matches the stored one. + * + * @param key the key (may not be null) + * @param value the expected value + * @return true if the item was removed + */ + @Override + public synchronized boolean remove(Object key, Object value) { + V old = get(key); + if (areValuesEqual(old, value)) { + remove(key); + return true; + } + return false; + } + + /** + * Check whether the two values are equal. 
+ * + * @param a the first value + * @param b the second value + * @return true if they are equal + */ + public boolean areValuesEqual(Object a, Object b) { + if (a == b) { + return true; + } else if (a == null || b == null) { + return false; + } + return valueType.compare(a, b) == 0; + } + + /** + * Replace a value for an existing key, if the value matches. + * + * @param key the key (may not be null) + * @param oldValue the expected value + * @param newValue the new value + * @return true if the value was replaced + */ + @Override + public synchronized boolean replace(K key, V oldValue, V newValue) { + V old = get(key); + if (areValuesEqual(old, oldValue)) { + put(key, newValue); + return true; + } + return false; + } + + /** + * Replace a value for an existing key. + * + * @param key the key (may not be null) + * @param value the new value + * @return the old value, if the value was replaced, or null + */ + @Override + public synchronized V replace(K key, V value) { + V old = get(key); + if (old != null) { + put(key, value); + return old; + } + return null; + } + + /** + * Remove a key-value pair. 
+ * + * @param p the page (may not be null) + * @param writeVersion the write version + * @param key the key + * @return the old value, or null if the key did not exist + */ + protected Object remove(Page p, long writeVersion, Object key) { + int index = p.binarySearch(key); + Object result = null; + if (p.isLeaf()) { + if (index >= 0) { + result = p.getValue(index); + p.remove(index); + } + return result; + } + // node + if (index < 0) { + index = -index - 1; + } else { + index++; + } + Page cOld = p.getChildPage(index); + Page c = cOld.copy(writeVersion); + result = remove(c, writeVersion, key); + if (result == null || c.getTotalCount() != 0) { + // no change, or + // there are more nodes + p.setChild(index, c); + } else { + // this child was deleted + if (p.getKeyCount() == 0) { + p.setChild(index, c); + c.removePage(); + } else { + p.remove(index); + } + } + return result; + } + + /** + * Use the new root page from now on. + * + * @param newRoot the new root page + */ + protected void newRoot(Page newRoot) { + if (root != newRoot) { + removeUnusedOldVersions(); + if (root.getVersion() != newRoot.getVersion()) { + Page last = oldRoots.peekLast(); + if (last == null || last.getVersion() != root.getVersion()) { + oldRoots.add(root); + } + } + root = newRoot; + } + } + + /** + * Compare two keys. + * + * @param a the first key + * @param b the second key + * @return -1 if the first key is smaller, 1 if bigger, 0 if equal + */ + int compare(Object a, Object b) { + return keyType.compare(a, b); + } + + /** + * Get the key type. + * + * @return the key type + */ + public DataType getKeyType() { + return keyType; + } + + /** + * Get the value type. + * + * @return the value type + */ + public DataType getValueType() { + return valueType; + } + + /** + * Read a page. + * + * @param pos the position of the page + * @return the page + */ + Page readPage(long pos) { + return store.readPage(this, pos); + } + + /** + * Set the position of the root page. 
+ * + * @param rootPos the position, 0 for empty + * @param version the version of the root + */ + void setRootPos(long rootPos, long version) { + root = rootPos == 0 ? Page.createEmpty(this, -1) : readPage(rootPos); + root.setVersion(version); + } + + /** + * Iterate over a number of keys. + * + * @param from the first key to return + * @return the iterator + */ + public Iterator<K> keyIterator(K from) { + return new Cursor<K, V>(this, root, from); + } + + /** + * Re-write any pages that belong to one of the chunks in the given set. + * + * @param set the set of chunk ids + * @return whether rewriting was successful + */ + boolean rewrite(Set<Integer> set) { + // read from old version, to avoid concurrent reads + long previousVersion = store.getCurrentVersion() - 1; + if (previousVersion < createVersion) { + // a new map + return true; + } + MVMap<K, V> readMap; + try { + readMap = openVersion(previousVersion); + } catch (IllegalArgumentException e) { + // unknown version: ok + // TODO should not rely on exception handling + return true; + } + try { + rewrite(readMap.root, set); + return true; + } catch (IllegalStateException e) { + // TODO should not rely on exception handling + if (DataUtils.getErrorCode(e.getMessage()) == DataUtils.ERROR_CHUNK_NOT_FOUND) { + // ignore + return false; + } + throw e; + } + } + + private int rewrite(Page p, Set<Integer> set) { + if (p.isLeaf()) { + long pos = p.getPos(); + int chunkId = DataUtils.getPageChunkId(pos); + if (!set.contains(chunkId)) { + return 0; + } + if (p.getKeyCount() > 0) { + @SuppressWarnings("unchecked") + K key = (K) p.getKey(0); + V value = get(key); + if (value != null) { + replace(key, value, value); + } + } + return 1; + } + int writtenPageCount = 0; + for (int i = 0; i < getChildPageCount(p); i++) { + long childPos = p.getChildPagePos(i); + if (childPos != 0 && DataUtils.getPageType(childPos) == DataUtils.PAGE_TYPE_LEAF) { + // we would need to load the page, and it's a leaf: + // only do that if it's within the set of chunks we are +
// interested in + int chunkId = DataUtils.getPageChunkId(childPos); + if (!set.contains(chunkId)) { + continue; + } + } + writtenPageCount += rewrite(p.getChildPage(i), set); + } + if (writtenPageCount == 0) { + long pos = p.getPos(); + int chunkId = DataUtils.getPageChunkId(pos); + if (set.contains(chunkId)) { + // an inner node page that is in one of the chunks, + // but only points to chunks that are not in the set: + // if no child was changed, we need to do that now + // (this is not needed if anyway one of the children + // was changed, as this would have updated this + // page as well) + Page p2 = p; + while (!p2.isLeaf()) { + p2 = p2.getChildPage(0); + } + @SuppressWarnings("unchecked") + K key = (K) p2.getKey(0); + V value = get(key); + if (value != null) { + replace(key, value, value); + } + writtenPageCount++; + } + } + return writtenPageCount; + } + + /** + * Get a cursor to iterate over a number of keys and values. + * + * @param from the first key to return + * @return the cursor + */ + public Cursor<K, V> cursor(K from) { + return new Cursor<>(this, root, from); + } + + @Override + public Set<Map.Entry<K, V>> entrySet() { + final MVMap<K, V> map = this; + final Page root = this.root; + return new AbstractSet<Entry<K, V>>() { + + @Override + public Iterator<Entry<K, V>> iterator() { + final Cursor<K, V> cursor = new Cursor<>(map, root, null); + return new Iterator<Entry<K, V>>() { + + @Override + public boolean hasNext() { + return cursor.hasNext(); + } + + @Override + public Entry<K, V> next() { + K k = cursor.next(); + return new DataUtils.MapEntry<>(k, cursor.getValue()); + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException( + "Removing is not supported"); + } + }; + + } + + @Override + public int size() { + return MVMap.this.size(); + } + + @Override + public boolean contains(Object o) { + return MVMap.this.containsKey(o); + } + + }; + + } + + @Override + public Set<K> keySet() { + final MVMap<K, V> map = this; + final Page root = this.root; + return new AbstractSet<K>() { + + @Override +
public Iterator<K> iterator() { + return new Cursor<K, V>(map, root, null); + } + + @Override + public int size() { + return MVMap.this.size(); + } + + @Override + public boolean contains(Object o) { + return MVMap.this.containsKey(o); + } + + }; + } + + /** + * Get the root page. + * + * @return the root page + */ + public Page getRoot() { + return root; + } + + /** + * Get the map name. + * + * @return the name + */ + public String getName() { + return store.getMapName(id); + } + + public MVStore getStore() { + return store; + } + + /** + * Get the map id. Please note the map id may be different after compacting + * a store. + * + * @return the map id + */ + public int getId() { + return id; + } + + /** + * Rollback to the given version. + * + * @param version the version + */ + void rollbackTo(long version) { + beforeWrite(); + if (version <= createVersion) { + // the map is removed later + } else if (root.getVersion() >= version) { + while (true) { + Page last = oldRoots.peekLast(); + if (last == null) { + break; + } + // slow, but rollback is not a common operation + oldRoots.removeLast(last); + root = last; + if (root.getVersion() < version) { + break; + } + } + } + } + + /** + * Forget those old versions that are no longer needed. + */ + void removeUnusedOldVersions() { + long oldest = store.getOldestVersionToKeep(); + if (oldest == -1) { + return; + } + Page last = oldRoots.peekLast(); + while (true) { + Page p = oldRoots.peekFirst(); + if (p == null || p.getVersion() >= oldest || p == last) { + break; + } + oldRoots.removeFirst(p); + } + } + + public boolean isReadOnly() { + return readOnly; + } + + /** + * Set the volatile flag of the map. + * + * @param isVolatile the volatile flag + */ + public void setVolatile(boolean isVolatile) { + this.isVolatile = isVolatile; + } + + /** + * Whether this is a volatile map, meaning that changes + * are not persisted. By default (even if the store is not persisted), + * maps are not volatile.
+ * + * @return whether this map is volatile + */ + public boolean isVolatile() { + return isVolatile; + } + + /** + * This method is called before writing to the map. The default + * implementation checks whether writing is allowed, and tries + * to detect concurrent modification. + * + * @throws UnsupportedOperationException if the map is read-only, + * or if another thread is concurrently writing + */ + protected void beforeWrite() { + if (closed) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_CLOSED, "This map is closed"); + } + if (readOnly) { + throw DataUtils.newUnsupportedOperationException( + "This map is read-only"); + } + store.beforeWrite(this); + } + + @Override + public int hashCode() { + return id; + } + + @Override + public boolean equals(Object o) { + return this == o; + } + + /** + * Get the number of entries, as a integer. Integer.MAX_VALUE is returned if + * there are more than this entries. + * + * @return the number of entries, as an integer + */ + @Override + public int size() { + long size = sizeAsLong(); + return size > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) size; + } + + /** + * Get the number of entries, as a long. + * + * @return the number of entries + */ + public long sizeAsLong() { + return root.getTotalCount(); + } + + @Override + public boolean isEmpty() { + // could also use (sizeAsLong() == 0) + return root.isLeaf() && root.getKeyCount() == 0; + } + + public long getCreateVersion() { + return createVersion; + } + + /** + * Remove the given page (make the space available). + * + * @param pos the position of the page to remove + * @param memory the number of bytes used for this page + */ + protected void removePage(long pos, int memory) { + store.removePage(this, pos, memory); + } + + /** + * Open an old version for the given map. 
+     *
+     * @param version the version
+     * @return the map
+     */
+    public MVMap<K, V> openVersion(long version) {
+        if (readOnly) {
+            throw DataUtils.newUnsupportedOperationException(
+                    "This map is read-only; need to call " +
+                    "the method on the writable map");
+        }
+        DataUtils.checkArgument(version >= createVersion,
+                "Unknown version {0}; this map was created in version {1}",
+                version, createVersion);
+        Page newest = null;
+        // need to copy because it can change
+        Page r = root;
+        if (version >= r.getVersion() &&
+                (version == writeVersion ||
+                r.getVersion() >= 0 ||
+                version <= createVersion ||
+                store.getFileStore() == null)) {
+            newest = r;
+        } else {
+            Page last = oldRoots.peekFirst();
+            if (last == null || version < last.getVersion()) {
+                // smaller than all in-memory versions
+                return store.openMapVersion(version, id, this);
+            }
+            Iterator<Page> it = oldRoots.iterator();
+            while (it.hasNext()) {
+                Page p = it.next();
+                if (p.getVersion() > version) {
+                    break;
+                }
+                last = p;
+            }
+            newest = last;
+        }
+        MVMap<K, V> m = openReadOnly();
+        m.root = newest;
+        return m;
+    }
+
+    /**
+     * Open a copy of the map in read-only mode.
+     *
+     * @return the opened map
+     */
+    MVMap<K, V> openReadOnly() {
+        MVMap<K, V> m = new MVMap<>(keyType, valueType);
+        m.readOnly = true;
+        m.init(store, id, createVersion);
+        m.root = root;
+        return m;
+    }
+
+    public long getVersion() {
+        return root.getVersion();
+    }
+
+    /**
+     * Get the child page count for this page. This is to allow another map
+     * implementation to override the default, in case the last child is not to
+     * be used.
+     *
+     * @param p the page
+     * @return the number of direct children
+     */
+    protected int getChildPageCount(Page p) {
+        return p.getRawChildPageCount();
+    }
+
+    /**
+     * Get the map type. When opening an existing map, the map type must match.
+     *
+     * @return the map type
+     */
+    public String getType() {
+        return null;
+    }
+
+    /**
+     * Get the map metadata as a string.
+     *
+     * @param name the map name (or null)
+     * @return the string
+     */
+    String asString(String name) {
+        StringBuilder buff = new StringBuilder();
+        if (name != null) {
+            DataUtils.appendMap(buff, "name", name);
+        }
+        if (createVersion != 0) {
+            DataUtils.appendMap(buff, "createVersion", createVersion);
+        }
+        String type = getType();
+        if (type != null) {
+            DataUtils.appendMap(buff, "type", type);
+        }
+        return buff.toString();
+    }
+
+    void setWriteVersion(long writeVersion) {
+        this.writeVersion = writeVersion;
+    }
+
+    /**
+     * Copy a map. All pages are copied.
+     *
+     * @param sourceMap the source map
+     */
+    void copyFrom(MVMap<K, V> sourceMap) {
+        beforeWrite();
+        newRoot(copy(sourceMap.root, null));
+    }
+
+    private Page copy(Page source, CursorPos parent) {
+        Page target = Page.create(this, writeVersion, source);
+        if (source.isLeaf()) {
+            Page child = target;
+            for (CursorPos p = parent; p != null; p = p.parent) {
+                p.page.setChild(p.index, child);
+                p.page = p.page.copy(writeVersion);
+                child = p.page;
+                if (p.parent == null) {
+                    newRoot(p.page);
+                    beforeWrite();
+                }
+            }
+        } else {
+            // temporarily, replace child pages with empty pages,
+            // to ensure there are no links to the old store
+            for (int i = 0; i < getChildPageCount(target); i++) {
+                target.setChild(i, null);
+            }
+            CursorPos pos = new CursorPos(target, 0, parent);
+            for (int i = 0; i < getChildPageCount(target); i++) {
+                pos.index = i;
+                long p = source.getChildPagePos(i);
+                if (p != 0) {
+                    // p == 0 means no child
+                    // (for example the last entry of an r-tree node)
+                    // (the MVMap is also used for r-trees for compacting)
+                    copy(source.getChildPage(i), pos);
+                }
+            }
+            target = pos.page;
+        }
+        return target;
+    }
+
+    @Override
+    public String toString() {
+        return asString(null);
+    }
+
+    /**
+     * A builder for maps.
+     *
+     * @param <M> the map type
+     * @param <K> the key type
+     * @param <V> the value type
+     */
+    public interface MapBuilder<M extends MVMap<K, V>, K, V> {
+
+        /**
+         * Create a new map of the given type.
+         *
+         * @return the map
+         */
+        M create();
+
+    }
+
+    /**
+     * A builder for this class.
+     *
+     * @param <K> the key type
+     * @param <V> the value type
+     */
+    public static class Builder<K, V> implements MapBuilder<MVMap<K, V>, K, V> {
+
+        protected DataType keyType;
+        protected DataType valueType;
+
+        /**
+         * Create a new builder with the default key and value data types.
+         */
+        public Builder() {
+            // ignore
+        }
+
+        /**
+         * Set the key data type.
+         *
+         * @param keyType the key type
+         * @return this
+         */
+        public Builder<K, V> keyType(DataType keyType) {
+            this.keyType = keyType;
+            return this;
+        }
+
+        public DataType getKeyType() {
+            return keyType;
+        }
+
+        public DataType getValueType() {
+            return valueType;
+        }
+
+        /**
+         * Set the value data type.
+         *
+         * @param valueType the value type
+         * @return this
+         */
+        public Builder<K, V> valueType(DataType valueType) {
+            this.valueType = valueType;
+            return this;
+        }
+
+        @Override
+        public MVMap<K, V> create() {
+            if (keyType == null) {
+                keyType = new ObjectDataType();
+            }
+            if (valueType == null) {
+                valueType = new ObjectDataType();
+            }
+            return new MVMap<>(keyType, valueType);
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/mvstore/MVMapConcurrent.java b/modules/h2/src/main/java/org/h2/mvstore/MVMapConcurrent.java
new file mode 100644
index 0000000000000..a0a6f53c444b5
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/mvstore/MVMapConcurrent.java
@@ -0,0 +1,77 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.mvstore;
+
+import org.h2.mvstore.type.DataType;
+import org.h2.mvstore.type.ObjectDataType;
+
+/**
+ * A class used for backward compatibility.
+ *
+ * @param <K> the key type
+ * @param <V> the value type
+ */
+public class MVMapConcurrent<K, V> extends MVMap<K, V> {
+
+    public MVMapConcurrent(DataType keyType, DataType valueType) {
+        super(keyType, valueType);
+    }
+
+    /**
+     * A builder for this class.
+     *
+     * @param <K> the key type
+     * @param <V> the value type
+     */
+    public static class Builder<K, V> implements
+            MapBuilder<MVMapConcurrent<K, V>, K, V> {
+
+        protected DataType keyType;
+        protected DataType valueType;
+
+        /**
+         * Create a new builder with the default key and value data types.
+         */
+        public Builder() {
+            // ignore
+        }
+
+        /**
+         * Set the key data type.
+         *
+         * @param keyType the key type
+         * @return this
+         */
+        public Builder<K, V> keyType(DataType keyType) {
+            this.keyType = keyType;
+            return this;
+        }
+
+        /**
+         * Set the value data type.
+         *
+         * @param valueType the value type
+         * @return this
+         */
+        public Builder<K, V> valueType(DataType valueType) {
+            this.valueType = valueType;
+            return this;
+        }
+
+        @Override
+        public MVMapConcurrent<K, V> create() {
+            if (keyType == null) {
+                keyType = new ObjectDataType();
+            }
+            if (valueType == null) {
+                valueType = new ObjectDataType();
+            }
+            return new MVMapConcurrent<>(keyType, valueType);
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/mvstore/MVStore.java b/modules/h2/src/main/java/org/h2/mvstore/MVStore.java
new file mode 100644
index 0000000000000..38ef2e064361c
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/mvstore/MVStore.java
@@ -0,0 +1,2956 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.lang.Thread.UncaughtExceptionHandler; +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import org.h2.compress.CompressDeflate; +import org.h2.compress.CompressLZF; +import org.h2.compress.Compressor; +import org.h2.mvstore.Page.PageChildren; +import org.h2.mvstore.cache.CacheLongKeyLIRS; +import org.h2.mvstore.type.StringDataType; +import org.h2.util.MathUtils; +import org.h2.util.New; + +/* + +TODO: + +Documentation +- rolling docs review: at "Metadata Map" +- better document that writes are in background thread +- better document how to do non-unique indexes +- document pluggable store and OffHeapStore + +TransactionStore: +- ability to disable the transaction log, + if there is only one connection + +MVStore: +- better and clearer memory usage accounting rules + (heap memory versus disk memory), so that even there is + never an out of memory + even for a small heap, and so that chunks + are still relatively big on average +- make sure serialization / deserialization errors don't corrupt the file +- test and possibly improve compact operation (for large dbs) +- automated 'kill process' and 'power failure' test +- defragment (re-creating maps, specially those with small pages) +- store number of write operations per page (maybe defragment + if much different than count) +- r-tree: nearest neighbor search +- use a small object value cache (StringCache), test on Android + for default serialization +- MVStoreTool.dump should dump the data if possible; + possibly using a callback for serialization +- implement a sharded map (in one store, multiple 
stores) + to support concurrent updates and writes, and very large maps +- to save space when persisting very small transactions, + use a transaction log where only the deltas are stored +- serialization for lists, sets, sets, sorted sets, maps, sorted maps +- maybe rename 'rollback' to 'revert' to distinguish from transactions +- support other compression algorithms (deflate, LZ4,...) +- remove features that are not really needed; simplify the code + possibly using a separate layer or tools + (retainVersion?) +- optional pluggable checksum mechanism (per page), which + requires that everything is a page (including headers) +- rename "store" to "save", as "store" is used in "storeVersion" +- rename setStoreVersion to setDataVersion, setSchemaVersion or similar +- temporary file storage +- simple rollback method (rollback to last committed version) +- MVMap to implement SortedMap, then NavigableMap +- storage that splits database into multiple files, + to speed up compact and allow using trim + (by truncating / deleting empty files) +- add new feature to the file system API to avoid copying data + (reads that returns a ByteBuffer instead of writing into one) + for memory mapped files and off-heap storage +- support log structured merge style operations (blind writes) + using one map per level plus bloom filter +- have a strict call order MVStore -> MVMap -> Page -> FileStore +- autocommit commits, stores, and compacts from time to time; + the background thread should wait at least 90% of the + configured write delay to store changes +- compact* should also store uncommitted changes (if there are any) +- write a LSM-tree (log structured merge tree) utility on top of the MVStore + with blind writes and/or a bloom filter that + internally uses regular maps and merge sort +- chunk metadata: maybe split into static and variable, + or use a small page size for metadata +- data type "string": maybe use prefix compression for keys +- test chunk id rollover +- feature to 
auto-compact from time to time and on close +- compact very small chunks +- Page: to save memory, combine keys & values into one array + (also children & counts). Maybe remove some other + fields (childrenCount for example) +- Support SortedMap for MVMap +- compact: copy whole pages (without having to open all maps) +- maybe change the length code to have lower gaps +- test with very low limits (such as: short chunks, small pages) +- maybe allow to read beyond the retention time: + when compacting, move live pages in old chunks + to a map (possibly the metadata map) - + this requires a change in the compaction code, plus + a map lookup when reading old data; also, this + old data map needs to be cleaned up somehow; + maybe using an additional timeout +- rollback of removeMap should restore the data - + which has big consequences, as the metadata map + would probably need references to the root nodes of all maps + +*/ + +/** + * A persistent storage for maps. + */ +public final class MVStore { + + /** + * Whether assertions are enabled. + */ + public static final boolean ASSERT = false; + + /** + * The block size (physical sector size) of the disk. The store header is + * written twice, one copy in each block, to ensure it survives a crash. + */ + static final int BLOCK_SIZE = 4 * 1024; + + private static final int FORMAT_WRITE = 1; + private static final int FORMAT_READ = 1; + + /** + * Used to mark a chunk as free, when it was detected that live bookkeeping + * is incorrect. + */ + private static final int MARKED_FREE = 10_000_000; + + /** + * The background thread, if any. + */ + volatile BackgroundWriterThread backgroundWriterThread; + + private volatile boolean reuseSpace = true; + + private volatile boolean closed; + + private final FileStore fileStore; + private final boolean fileStoreIsProvided; + + private final int pageSplitSize; + + /** + * The page cache. The default size is 16 MB, and the average size is 2 KB. + * It is split in 16 segments. 
 The stack move distance is 2% of the expected
+     * number of entries.
+     */
+    private final CacheLongKeyLIRS<Page> cache;
+
+    /**
+     * The page chunk references cache. The default size is 4 MB, and the
+     * average size is 2 KB. It is split in 16 segments. The stack move distance
+     * is 2% of the expected number of entries.
+     */
+    private final CacheLongKeyLIRS<PageChildren> cacheChunkRef;
+
+    /**
+     * The newest chunk. If nothing was stored yet, this field is not set.
+     */
+    private Chunk lastChunk;
+
+    /**
+     * The map of chunks.
+     */
+    private final ConcurrentHashMap<Integer, Chunk> chunks =
+            new ConcurrentHashMap<>();
+
+    /**
+     * The map of temporarily freed storage space caused by freed pages. The key
+     * is the unsaved version, the value is the map of chunks. The maps contain
+     * the number of freed entries per chunk.
+     * <p>
+     * Access is partially synchronized, hence the need for concurrent maps.
+     * Sometimes we hold the MVStore lock, sometimes the MVMap lock, and
+     * sometimes we even sync on the ConcurrentHashMap object.
+     */
+    private final ConcurrentHashMap<Long, HashMap<Integer, Chunk>> freedPageSpace =
+            new ConcurrentHashMap<>();
+
+    /**
+     * The metadata map. Write access to this map needs to be synchronized on
+     * the store.
+     */
+    private final MVMap<String, String> meta;
+
+    private final ConcurrentHashMap<Integer, MVMap<?, ?>> maps =
+            new ConcurrentHashMap<>();
+
+    private final HashMap<String, Object> storeHeader = new HashMap<>();
+
+    private WriteBuffer writeBuffer;
+
+    private int lastMapId;
+
+    private int versionsToKeep = 5;
+
+    /**
+     * The compression level for new pages (0 for disabled, 1 for fast, 2 for
+     * high). Even if disabled, the store may contain (old) compressed pages.
+     */
+    private final int compressionLevel;
+
+    private Compressor compressorFast;
+
+    private Compressor compressorHigh;
+
+    private final UncaughtExceptionHandler backgroundExceptionHandler;
+
+    private volatile long currentVersion;
+
+    /**
+     * The version of the last stored chunk, or -1 if nothing was stored so far.
+     */
+    private long lastStoredVersion;
+
+    /**
+     * The estimated memory used by unsaved pages. This number is not accurate,
+     * also because it may be changed concurrently, and because temporary pages
+     * are counted.
+     */
+    private int unsavedMemory;
+    private final int autoCommitMemory;
+    private boolean saveNeeded;
+
+    /**
+     * The time the store was created, in milliseconds since 1970.
+     */
+    private long creationTime;
+
+    /**
+     * How long to retain old, persisted chunks, in milliseconds. If larger
+     * than or equal to zero, a chunk is never directly overwritten if unused,
+     * but instead, the unused field is set. If smaller than zero, chunks are
+     * directly overwritten if unused.
+     */
+    private int retentionTime;
+
+    private long lastCommitTime;
+
+    /**
+     * The earliest chunk to retain, if any.
+ */ + private Chunk retainChunk; + + /** + * The version of the current store operation (if any). + */ + private volatile long currentStoreVersion = -1; + + private Thread currentStoreThread; + + private volatile boolean metaChanged; + + /** + * The delay in milliseconds to automatically commit and write changes. + */ + private int autoCommitDelay; + + private final int autoCompactFillRate; + private long autoCompactLastFileOpCount; + + private final Object compactSync = new Object(); + + private IllegalStateException panicException; + + private long lastTimeAbsolute; + + private long lastFreeUnusedChunks; + + /** + * Create and open the store. + * + * @param config the configuration to use + * @throws IllegalStateException if the file is corrupt, or an exception + * occurred while opening + * @throws IllegalArgumentException if the directory does not exist + */ + MVStore(Map config) { + this.compressionLevel = DataUtils.getConfigParam(config, "compress", 0); + String fileName = (String) config.get("fileName"); + FileStore fileStore = (FileStore) config.get("fileStore"); + fileStoreIsProvided = fileStore != null; + if(fileStore == null && fileName != null) { + fileStore = new FileStore(); + } + this.fileStore = fileStore; + + int pgSplitSize = 48; // for "mem:" case it is # of keys + CacheLongKeyLIRS.Config cc = null; + if (this.fileStore != null) { + int mb = DataUtils.getConfigParam(config, "cacheSize", 16); + if (mb > 0) { + cc = new CacheLongKeyLIRS.Config(); + cc.maxMemory = mb * 1024L * 1024L; + Object o = config.get("cacheConcurrency"); + if (o != null) { + cc.segmentCount = (Integer)o; + } + } + pgSplitSize = 16 * 1024; + } + if (cc != null) { + cache = new CacheLongKeyLIRS<>(cc); + cc.maxMemory /= 4; + cacheChunkRef = new CacheLongKeyLIRS<>(cc); + } else { + cache = null; + cacheChunkRef = null; + } + + pgSplitSize = DataUtils.getConfigParam(config, "pageSplitSize", pgSplitSize); + // Make sure pages will fit into cache + if (cache != null && pgSplitSize 
> cache.getMaxItemSize()) { + pgSplitSize = (int)cache.getMaxItemSize(); + } + pageSplitSize = pgSplitSize; + backgroundExceptionHandler = + (UncaughtExceptionHandler)config.get("backgroundExceptionHandler"); + meta = new MVMap<>(StringDataType.INSTANCE, + StringDataType.INSTANCE); + meta.init(this, 0, currentVersion); + if (this.fileStore != null) { + retentionTime = this.fileStore.getDefaultRetentionTime(); + int kb = DataUtils.getConfigParam(config, "autoCommitBufferSize", 1024); + // 19 KB memory is about 1 KB storage + autoCommitMemory = kb * 1024 * 19; + autoCompactFillRate = DataUtils.getConfigParam(config, "autoCompactFillRate", 40); + char[] encryptionKey = (char[]) config.get("encryptionKey"); + try { + if (!fileStoreIsProvided) { + boolean readOnly = config.containsKey("readOnly"); + this.fileStore.open(fileName, readOnly, encryptionKey); + } + if (this.fileStore.size() == 0) { + creationTime = getTimeAbsolute(); + lastCommitTime = creationTime; + storeHeader.put("H", 2); + storeHeader.put("blockSize", BLOCK_SIZE); + storeHeader.put("format", FORMAT_WRITE); + storeHeader.put("created", creationTime); + writeStoreHeader(); + } else { + readStoreHeader(); + } + } catch (IllegalStateException e) { + panic(e); + } finally { + if (encryptionKey != null) { + Arrays.fill(encryptionKey, (char) 0); + } + } + lastCommitTime = getTimeSinceCreation(); + + // setAutoCommitDelay starts the thread, but only if + // the parameter is different from the old value + int delay = DataUtils.getConfigParam(config, "autoCommitDelay", 1000); + setAutoCommitDelay(delay); + } else { + autoCommitMemory = 0; + autoCompactFillRate = 0; + } + } + + private void panic(IllegalStateException e) { + handleException(e); + panicException = e; + closeImmediately(); + throw e; + } + + /** + * Open a store in exclusive mode. For a file-based store, the parent + * directory must already exist. 
+     *
+     * @param fileName the file name (null for in-memory)
+     * @return the store
+     */
+    public static MVStore open(String fileName) {
+        HashMap<String, Object> config = new HashMap<>();
+        config.put("fileName", fileName);
+        return new MVStore(config);
+    }
+
+    /**
+     * Open an old, stored version of a map.
+     *
+     * @param version the version
+     * @param mapId the map id
+     * @param template the template map
+     * @return the read-only map
+     */
+    @SuppressWarnings("unchecked")
+    <T extends MVMap<?, ?>> T openMapVersion(long version, int mapId,
+            MVMap<?, ?> template) {
+        MVMap<String, String> oldMeta = getMetaMap(version);
+        long rootPos = getRootPos(oldMeta, mapId);
+        MVMap<?, ?> m = template.openReadOnly();
+        m.setRootPos(rootPos, version);
+        return (T) m;
+    }
+
+    /**
+     * Open a map with the default settings. The map is automatically created
+     * if it does not yet exist. If a map with this name is already open, this
+     * map is returned.
+     *
+     * @param <K> the key type
+     * @param <V> the value type
+     * @param name the name of the map
+     * @return the map
+     */
+    public <K, V> MVMap<K, V> openMap(String name) {
+        return openMap(name, new MVMap.Builder<K, V>());
+    }
+
+    /**
+     * Open a map with the given builder. The map is automatically created if
+     * it does not yet exist. If a map with this name is already open, this map
+     * is returned.
+     *
+     * @param <K> the key type
+     * @param <V> the value type
+     * @param name the name of the map
+     * @param builder the map builder
+     * @return the map
+     */
+    public synchronized <M extends MVMap<K, V>, K, V> M openMap(
+            String name, MVMap.MapBuilder<M, K, V> builder) {
+        checkOpen();
+        String x = meta.get("name." + name);
+        int id;
+        long root;
+        M map;
+        if (x != null) {
+            id = DataUtils.parseHexInt(x);
+            @SuppressWarnings("unchecked")
+            M old = (M) maps.get(id);
+            if (old != null) {
+                return old;
+            }
+            map = builder.create();
+            String config = meta.get(MVMap.getMapKey(id));
+            String v = DataUtils.getFromMap(config, "createVersion");
+            map.init(this, id, v != null ?
DataUtils.parseHexLong(v): 0); + root = getRootPos(meta, id); + } else { + id = ++lastMapId; + map = builder.create(); + map.init(this, id, currentVersion); + markMetaChanged(); + x = Integer.toHexString(id); + meta.put(MVMap.getMapKey(id), map.asString(name)); + meta.put("name." + name, x); + root = 0; + } + map.setRootPos(root, -1); + maps.put(id, map); + return map; + } + + /** + * Get the set of all map names. + * + * @return the set of names + */ + public synchronized Set getMapNames() { + HashSet set = new HashSet<>(); + checkOpen(); + for (Iterator it = meta.keyIterator("name."); it.hasNext();) { + String x = it.next(); + if (!x.startsWith("name.")) { + break; + } + set.add(x.substring("name.".length())); + } + return set; + } + + /** + * Get the metadata map. This data is for informational purposes only. The + * data is subject to change in future versions. + *

    + * The data in this map should not be modified (changing system data may + * corrupt the store). If modifications are needed, they need be + * synchronized on the store. + *

    + * The metadata map contains the following entries: + *

    +     * chunk.{chunkId} = {chunk metadata}
    +     * name.{name} = {mapId}
    +     * map.{mapId} = {map metadata}
    +     * root.{mapId} = {root position}
    +     * setting.storeVersion = {version}
    +     * 
    + * + * @return the metadata map + */ + public MVMap getMetaMap() { + checkOpen(); + return meta; + } + + private MVMap getMetaMap(long version) { + Chunk c = getChunkForVersion(version); + DataUtils.checkArgument(c != null, "Unknown version {0}", version); + c = readChunkHeader(c.block); + MVMap oldMeta = meta.openReadOnly(); + oldMeta.setRootPos(c.metaRootPos, version); + return oldMeta; + } + + private Chunk getChunkForVersion(long version) { + Chunk newest = null; + for (Chunk c : chunks.values()) { + if (c.version <= version) { + if (newest == null || c.id > newest.id) { + newest = c; + } + } + } + return newest; + } + + /** + * Check whether a given map exists. + * + * @param name the map name + * @return true if it exists + */ + public boolean hasMap(String name) { + return meta.containsKey("name." + name); + } + + private void markMetaChanged() { + // changes in the metadata alone are usually not detected, as the meta + // map is changed after storing + metaChanged = true; + } + + private synchronized void readStoreHeader() { + Chunk newest = null; + boolean validStoreHeader = false; + // find out which chunk and version are the newest + // read the first two blocks + ByteBuffer fileHeaderBlocks = fileStore.readFully(0, 2 * BLOCK_SIZE); + byte[] buff = new byte[BLOCK_SIZE]; + for (int i = 0; i <= BLOCK_SIZE; i += BLOCK_SIZE) { + fileHeaderBlocks.get(buff); + // the following can fail for various reasons + try { + HashMap m = DataUtils.parseChecksummedMap(buff); + if (m == null) { + continue; + } + int blockSize = DataUtils.readHexInt( + m, "blockSize", BLOCK_SIZE); + if (blockSize != BLOCK_SIZE) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_UNSUPPORTED_FORMAT, + "Block size {0} is currently not supported", + blockSize); + } + long version = DataUtils.readHexLong(m, "version", 0); + if (newest == null || version > newest.version) { + validStoreHeader = true; + storeHeader.putAll(m); + creationTime = DataUtils.readHexLong(m, "created", 
0); + int chunkId = DataUtils.readHexInt(m, "chunk", 0); + long block = DataUtils.readHexLong(m, "block", 0); + Chunk test = readChunkHeaderAndFooter(block); + if (test != null && test.id == chunkId) { + newest = test; + } + } + } catch (Exception e) { + } + } + if (!validStoreHeader) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "Store header is corrupt: {0}", fileStore); + } + long format = DataUtils.readHexLong(storeHeader, "format", 1); + if (format > FORMAT_WRITE && !fileStore.isReadOnly()) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_UNSUPPORTED_FORMAT, + "The write format {0} is larger " + + "than the supported format {1}, " + + "and the file was not opened in read-only mode", + format, FORMAT_WRITE); + } + format = DataUtils.readHexLong(storeHeader, "formatRead", format); + if (format > FORMAT_READ) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_UNSUPPORTED_FORMAT, + "The read format {0} is larger " + + "than the supported format {1}", + format, FORMAT_READ); + } + lastStoredVersion = -1; + chunks.clear(); + long now = System.currentTimeMillis(); + // calculate the year (doesn't have to be exact; + // we assume 365.25 days per year, * 4 = 1461) + int year = 1970 + (int) (now / (1000L * 60 * 60 * 6 * 1461)); + if (year < 2014) { + // if the year is before 2014, + // we assume the system doesn't have a real-time clock, + // and we set the creationTime to the past, so that + // existing chunks are overwritten + creationTime = now - fileStore.getDefaultRetentionTime(); + } else if (now < creationTime) { + // the system time was set to the past: + // we change the creation time + creationTime = now; + storeHeader.put("created", creationTime); + } + Chunk test = readChunkFooter(fileStore.size()); + if (test != null) { + test = readChunkHeaderAndFooter(test.block); + if (test != null) { + if (newest == null || test.version > newest.version) { + newest = test; + } + } + } + if (newest == null) { 
+ // no chunk + return; + } + // read the chunk header and footer, + // and follow the chain of next chunks + while (true) { + if (newest.next == 0 || + newest.next >= fileStore.size() / BLOCK_SIZE) { + // no (valid) next + break; + } + test = readChunkHeaderAndFooter(newest.next); + if (test == null || test.id <= newest.id) { + break; + } + newest = test; + } + setLastChunk(newest); + loadChunkMeta(); + // read all chunk headers and footers within the retention time, + // to detect unwritten data after a power failure + verifyLastChunks(); + // build the free space list + for (Chunk c : chunks.values()) { + if (c.pageCountLive == 0) { + // remove this chunk in the next save operation + registerFreePage(currentVersion, c.id, 0, 0); + } + long start = c.block * BLOCK_SIZE; + int length = c.len * BLOCK_SIZE; + fileStore.markUsed(start, length); + } + } + + private void loadChunkMeta() { + // load the chunk metadata: we can load in any order, + // because loading chunk metadata might recursively load another chunk + for (Iterator it = meta.keyIterator("chunk."); it.hasNext();) { + String s = it.next(); + if (!s.startsWith("chunk.")) { + break; + } + s = meta.get(s); + Chunk c = Chunk.fromString(s); + if (chunks.putIfAbsent(c.id, c) == null) { + if (c.block == Long.MAX_VALUE) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "Chunk {0} is invalid", c.id); + } + } + } + } + + private void setLastChunk(Chunk last) { + lastChunk = last; + if (last == null) { + // no valid chunk + lastMapId = 0; + currentVersion = 0; + meta.setRootPos(0, -1); + } else { + lastMapId = last.mapId; + currentVersion = last.version; + chunks.put(last.id, last); + meta.setRootPos(last.metaRootPos, -1); + } + setWriteVersion(currentVersion); + } + + private void verifyLastChunks() { + long time = getTimeSinceCreation(); + ArrayList ids = new ArrayList<>(chunks.keySet()); + Collections.sort(ids); + int newestValidChunk = -1; + Chunk old = null; + for (Integer chunkId 
: ids) { + Chunk c = chunks.get(chunkId); + if (old != null && c.time < old.time) { + // old chunk (maybe leftover from a previous crash) + break; + } + old = c; + if (c.time + retentionTime < time) { + // old chunk, no need to verify + newestValidChunk = c.id; + continue; + } + Chunk test = readChunkHeaderAndFooter(c.block); + if (test == null || test.id != c.id) { + break; + } + newestValidChunk = chunkId; + } + Chunk newest = chunks.get(newestValidChunk); + if (newest != lastChunk) { + // to avoid re-using newer chunks later on, we could clear + // the headers and footers of those, but we might not know about all + // of them, so that could be incomplete - but we check that newer + // chunks are written after older chunks, so we are safe + rollbackTo(newest == null ? 0 : newest.version); + } + } + + /** + * Read a chunk header and footer, and verify the stored data is consistent. + * + * @param block the block + * @return the chunk, or null if the header or footer don't match or are not + * consistent + */ + private Chunk readChunkHeaderAndFooter(long block) { + Chunk header; + try { + header = readChunkHeader(block); + } catch (Exception e) { + // invalid chunk header: ignore, but stop + return null; + } + if (header == null) { + return null; + } + Chunk footer = readChunkFooter((block + header.len) * BLOCK_SIZE); + if (footer == null || footer.id != header.id) { + return null; + } + return header; + } + + /** + * Try to read a chunk footer. 
+ * + * @param end the end of the chunk + * @return the chunk, or null if not successful + */ + private Chunk readChunkFooter(long end) { + // the following can fail for various reasons + try { + // read the chunk footer of the last block of the file + ByteBuffer lastBlock = fileStore.readFully( + end - Chunk.FOOTER_LENGTH, Chunk.FOOTER_LENGTH); + byte[] buff = new byte[Chunk.FOOTER_LENGTH]; + lastBlock.get(buff); + HashMap m = DataUtils.parseChecksummedMap(buff); + if (m != null) { + int chunk = DataUtils.readHexInt(m, "chunk", 0); + Chunk c = new Chunk(chunk); + c.version = DataUtils.readHexLong(m, "version", 0); + c.block = DataUtils.readHexLong(m, "block", 0); + return c; + } + } catch (Exception e) { + // ignore + } + return null; + } + + private void writeStoreHeader() { + StringBuilder buff = new StringBuilder(112); + if (lastChunk != null) { + storeHeader.put("block", lastChunk.block); + storeHeader.put("chunk", lastChunk.id); + storeHeader.put("version", lastChunk.version); + } + DataUtils.appendMap(buff, storeHeader); + byte[] bytes = buff.toString().getBytes(StandardCharsets.ISO_8859_1); + int checksum = DataUtils.getFletcher32(bytes, 0, bytes.length); + DataUtils.appendMap(buff, "fletcher", checksum); + buff.append('\n'); + bytes = buff.toString().getBytes(StandardCharsets.ISO_8859_1); + ByteBuffer header = ByteBuffer.allocate(2 * BLOCK_SIZE); + header.put(bytes); + header.position(BLOCK_SIZE); + header.put(bytes); + header.rewind(); + write(0, header); + } + + private void write(long pos, ByteBuffer buffer) { + try { + fileStore.writeFully(pos, buffer); + } catch (IllegalStateException e) { + panic(e); + throw e; + } + } + + /** + * Close the file and the store. Unsaved changes are written to disk first. 
+ */ + public void close() { + if (closed) { + return; + } + FileStore f = fileStore; + if (f != null && !f.isReadOnly()) { + stopBackgroundThread(); + if (hasUnsavedChanges()) { + commitAndSave(); + } + } + closeStore(true); + } + + /** + * Close the file and the store, without writing anything. This will stop + * the background thread. This method ignores all errors. + */ + public void closeImmediately() { + try { + closeStore(false); + } catch (Throwable e) { + handleException(e); + } + } + + private void closeStore(boolean shrinkIfPossible) { + if (closed) { + return; + } + // can not synchronize on this yet, because + // the thread also synchronized on this, which + // could result in a deadlock + stopBackgroundThread(); + closed = true; + synchronized (this) { + if (fileStore != null && shrinkIfPossible) { + shrinkFileIfPossible(0); + } + // release memory early - this is important when called + // because of out of memory + if (cache != null) { + cache.clear(); + } + if (cacheChunkRef != null) { + cacheChunkRef.clear(); + } + for (MVMap m : new ArrayList<>(maps.values())) { + m.close(); + } + chunks.clear(); + maps.clear(); + if (fileStore != null && !fileStoreIsProvided) { + fileStore.close(); + } + } + } + + /** + * Get the chunk for the given position. 
+ * + * @param pos the position + * @return the chunk + */ + private Chunk getChunk(long pos) { + Chunk c = getChunkIfFound(pos); + if (c == null) { + int chunkId = DataUtils.getPageChunkId(pos); + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "Chunk {0} not found", chunkId); + } + return c; + } + + private Chunk getChunkIfFound(long pos) { + int chunkId = DataUtils.getPageChunkId(pos); + Chunk c = chunks.get(chunkId); + if (c == null) { + checkOpen(); + if (!Thread.holdsLock(this)) { + // it could also be unsynchronized metadata + // access (if synchronization on this was forgotten) + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_CHUNK_NOT_FOUND, + "Chunk {0} no longer exists", + chunkId); + } + String s = meta.get(Chunk.getMetaKey(chunkId)); + if (s == null) { + return null; + } + c = Chunk.fromString(s); + if (c.block == Long.MAX_VALUE) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "Chunk {0} is invalid", chunkId); + } + chunks.put(c.id, c); + } + return c; + } + + private void setWriteVersion(long version) { + for (MVMap map : maps.values()) { + map.setWriteVersion(version); + } + MVMap m = meta; + if (m == null) { + checkOpen(); + } + m.setWriteVersion(version); + } + + /** + * Commit the changes. + *
+     * For in-memory stores, this method increments the version.
+     * <p>
    + * For persistent stores, it also writes changes to disk. It does nothing if + * there are no unsaved changes, and returns the old version. It is not + * necessary to call this method when auto-commit is enabled (the default + * setting), as in this case it is automatically called from time to time or + * when enough changes have accumulated. However, it may still be called to + * flush all changes to disk. + * + * @return the new version + */ + public synchronized long commit() { + if (fileStore != null) { + return commitAndSave(); + } + long v = ++currentVersion; + setWriteVersion(v); + return v; + } + + /** + * Commit all changes and persist them to disk. This method does nothing if + * there are no unsaved changes, otherwise it increments the current version + * and stores the data (for file based stores). + *
    + * At most one store operation may run at any time. + * + * @return the new version (incremented if there were changes) + */ + private synchronized long commitAndSave() { + if (closed) { + return currentVersion; + } + if (fileStore == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_WRITING_FAILED, + "This is an in-memory store"); + } + if (currentStoreVersion >= 0) { + // store is possibly called within store, if the meta map changed + return currentVersion; + } + if (!hasUnsavedChanges()) { + return currentVersion; + } + if (fileStore.isReadOnly()) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_WRITING_FAILED, "This store is read-only"); + } + try { + currentStoreVersion = currentVersion; + currentStoreThread = Thread.currentThread(); + return storeNow(); + } finally { + // in any case reset the current store version, + // to allow closing the store + currentStoreVersion = -1; + currentStoreThread = null; + } + } + + private long storeNow() { + try { + return storeNowTry(); + } catch (IllegalStateException e) { + panic(e); + return -1; + } + } + + private long storeNowTry() { + long time = getTimeSinceCreation(); + freeUnusedIfNeeded(time); + int currentUnsavedPageCount = unsavedMemory; + long storeVersion = currentStoreVersion; + long version = ++currentVersion; + lastCommitTime = time; + retainChunk = null; + + // the metadata of the last chunk was not stored so far, and needs to be + // set now (it's better not to update right after storing, because that + // would modify the meta map again) + int lastChunkId; + if (lastChunk == null) { + lastChunkId = 0; + } else { + lastChunkId = lastChunk.id; + meta.put(Chunk.getMetaKey(lastChunkId), lastChunk.asString()); + // never go backward in time + time = Math.max(lastChunk.time, time); + } + int newChunkId = lastChunkId; + while (true) { + newChunkId = (newChunkId + 1) % Chunk.MAX_ID; + Chunk old = chunks.get(newChunkId); + if (old == null) { + break; + } + if (old.block == 
Long.MAX_VALUE) { + IllegalStateException e = DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, + "Last block not stored, possibly due to out-of-memory"); + panic(e); + } + } + Chunk c = new Chunk(newChunkId); + c.pageCount = Integer.MAX_VALUE; + c.pageCountLive = Integer.MAX_VALUE; + c.maxLen = Long.MAX_VALUE; + c.maxLenLive = Long.MAX_VALUE; + c.metaRootPos = Long.MAX_VALUE; + c.block = Long.MAX_VALUE; + c.len = Integer.MAX_VALUE; + c.time = time; + c.version = version; + c.mapId = lastMapId; + c.next = Long.MAX_VALUE; + chunks.put(c.id, c); + // force a metadata update + meta.put(Chunk.getMetaKey(c.id), c.asString()); + meta.remove(Chunk.getMetaKey(c.id)); + ArrayList> list = new ArrayList<>(maps.values()); + ArrayList> changed = New.arrayList(); + for (MVMap m : list) { + m.setWriteVersion(version); + long v = m.getVersion(); + if (m.getCreateVersion() > storeVersion) { + // the map was created after storing started + continue; + } + if (m.isVolatile()) { + continue; + } + if (v >= 0 && v >= lastStoredVersion) { + MVMap r = m.openVersion(storeVersion); + if (r.getRoot().getPos() == 0) { + changed.add(r); + } + } + } + applyFreedSpace(storeVersion); + WriteBuffer buff = getWriteBuffer(); + // need to patch the header later + c.writeChunkHeader(buff, 0); + int headerLength = buff.position(); + c.pageCount = 0; + c.pageCountLive = 0; + c.maxLen = 0; + c.maxLenLive = 0; + for (MVMap m : changed) { + Page p = m.getRoot(); + String key = MVMap.getMapRootKey(m.getId()); + if (p.getTotalCount() == 0) { + meta.put(key, "0"); + } else { + p.writeUnsavedRecursive(c, buff); + long root = p.getPos(); + meta.put(key, Long.toHexString(root)); + } + } + meta.setWriteVersion(version); + + Page metaRoot = meta.getRoot(); + metaRoot.writeUnsavedRecursive(c, buff); + + int chunkLength = buff.position(); + + // add the store header and round to the next block + int length = MathUtils.roundUpInt(chunkLength + + Chunk.FOOTER_LENGTH, BLOCK_SIZE); + buff.limit(length); + 
+ // the length of the file that is still in use + // (not necessarily the end of the file) + long end = getFileLengthInUse(); + long filePos; + if (reuseSpace) { + filePos = fileStore.allocate(length); + } else { + filePos = end; + } + // end is not necessarily the end of the file + boolean storeAtEndOfFile = filePos + length >= fileStore.size(); + + if (!reuseSpace) { + // we can not mark it earlier, because it + // might have been allocated by one of the + // removed chunks + fileStore.markUsed(end, length); + } + + c.block = filePos / BLOCK_SIZE; + c.len = length / BLOCK_SIZE; + c.metaRootPos = metaRoot.getPos(); + // calculate and set the likely next position + if (reuseSpace) { + int predictBlocks = c.len; + long predictedNextStart = fileStore.allocate( + predictBlocks * BLOCK_SIZE); + fileStore.free(predictedNextStart, predictBlocks * BLOCK_SIZE); + c.next = predictedNextStart / BLOCK_SIZE; + } else { + // just after this chunk + c.next = 0; + } + buff.position(0); + c.writeChunkHeader(buff, headerLength); + revertTemp(storeVersion); + + buff.position(buff.limit() - Chunk.FOOTER_LENGTH); + buff.put(c.getFooterBytes()); + + buff.position(0); + write(filePos, buff.getBuffer()); + releaseWriteBuffer(buff); + + // whether we need to write the store header + boolean writeStoreHeader = false; + if (!storeAtEndOfFile) { + if (lastChunk == null) { + writeStoreHeader = true; + } else if (lastChunk.next != c.block) { + // the last prediction did not matched + writeStoreHeader = true; + } else { + long headerVersion = DataUtils.readHexLong( + storeHeader, "version", 0); + if (lastChunk.version - headerVersion > 20) { + // we write after at least 20 entries + writeStoreHeader = true; + } else { + int chunkId = DataUtils.readHexInt(storeHeader, "chunk", 0); + while (true) { + Chunk old = chunks.get(chunkId); + if (old == null) { + // one of the chunks in between + // was removed + writeStoreHeader = true; + break; + } + if (chunkId == lastChunk.id) { + break; + } + 
chunkId++; + } + } + } + } + + lastChunk = c; + if (writeStoreHeader) { + writeStoreHeader(); + } + if (!storeAtEndOfFile) { + // may only shrink after the store header was written + shrinkFileIfPossible(1); + } + for (MVMap m : changed) { + Page p = m.getRoot(); + if (p.getTotalCount() > 0) { + p.writeEnd(); + } + } + metaRoot.writeEnd(); + + // some pages might have been changed in the meantime (in the newest + // version) + unsavedMemory = Math.max(0, unsavedMemory + - currentUnsavedPageCount); + + metaChanged = false; + lastStoredVersion = storeVersion; + + return version; + } + + /** + * Try to free unused chunks. This method doesn't directly write, but can + * change the metadata, and therefore cause a background write. + */ + private void freeUnusedIfNeeded(long time) { + int freeDelay = retentionTime / 5; + if (time >= lastFreeUnusedChunks + freeDelay) { + // set early in case it fails (out of memory or so) + lastFreeUnusedChunks = time; + freeUnusedChunks(); + // set it here as well, to avoid calling it often if it was slow + lastFreeUnusedChunks = getTimeSinceCreation(); + } + } + + private synchronized void freeUnusedChunks() { + if (lastChunk == null || !reuseSpace) { + return; + } + Set referenced = collectReferencedChunks(); + long time = getTimeSinceCreation(); + for (Iterator it = chunks.values().iterator(); it.hasNext(); ) { + Chunk c = it.next(); + if (!referenced.contains(c.id)) { + if (canOverwriteChunk(c, time)) { + it.remove(); + markMetaChanged(); + meta.remove(Chunk.getMetaKey(c.id)); + long start = c.block * BLOCK_SIZE; + int length = c.len * BLOCK_SIZE; + fileStore.free(start, length); + } else { + if (c.unused == 0) { + c.unused = time; + meta.put(Chunk.getMetaKey(c.id), c.asString()); + markMetaChanged(); + } + } + } + } + } + + private Set collectReferencedChunks() { + long testVersion = lastChunk.version; + DataUtils.checkArgument(testVersion > 0, "Collect references on version 0"); + long readCount = getFileStore().readCount.get(); + 
Set referenced = new HashSet<>(); + for (Cursor c = meta.cursor("root."); c.hasNext();) { + String key = c.next(); + if (!key.startsWith("root.")) { + break; + } + long pos = DataUtils.parseHexLong(c.getValue()); + if (pos == 0) { + continue; + } + int mapId = DataUtils.parseHexInt(key.substring("root.".length())); + collectReferencedChunks(referenced, mapId, pos, 0); + } + long pos = lastChunk.metaRootPos; + collectReferencedChunks(referenced, 0, pos, 0); + readCount = fileStore.readCount.get() - readCount; + return referenced; + } + + private void collectReferencedChunks(Set targetChunkSet, + int mapId, long pos, int level) { + int c = DataUtils.getPageChunkId(pos); + targetChunkSet.add(c); + if (DataUtils.getPageType(pos) == DataUtils.PAGE_TYPE_LEAF) { + return; + } + PageChildren refs = readPageChunkReferences(mapId, pos, -1); + if (!refs.chunkList) { + Set target = new HashSet<>(); + for (int i = 0; i < refs.children.length; i++) { + long p = refs.children[i]; + collectReferencedChunks(target, mapId, p, level + 1); + } + // we don't need a reference to this chunk + target.remove(c); + long[] children = new long[target.size()]; + int i = 0; + for (Integer p : target) { + children[i++] = DataUtils.getPagePos(p, 0, 0, + DataUtils.PAGE_TYPE_LEAF); + } + refs.children = children; + refs.chunkList = true; + if (cacheChunkRef != null) { + cacheChunkRef.put(refs.pos, refs, refs.getMemory()); + } + } + for (long p : refs.children) { + targetChunkSet.add(DataUtils.getPageChunkId(p)); + } + } + + private PageChildren readPageChunkReferences(int mapId, long pos, int parentChunk) { + if (DataUtils.getPageType(pos) == DataUtils.PAGE_TYPE_LEAF) { + return null; + } + PageChildren r; + if (cacheChunkRef != null) { + r = cacheChunkRef.get(pos); + } else { + r = null; + } + if (r == null) { + // if possible, create it from the cached page + if (cache != null) { + Page p = cache.get(pos); + if (p != null) { + r = new PageChildren(p); + } + } + if (r == null) { + // page was not 
cached: read the data + Chunk c = getChunk(pos); + long filePos = c.block * BLOCK_SIZE; + filePos += DataUtils.getPageOffset(pos); + if (filePos < 0) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "Negative position {0}; p={1}, c={2}", filePos, pos, c.toString()); + } + long maxPos = (c.block + c.len) * BLOCK_SIZE; + r = PageChildren.read(fileStore, pos, mapId, filePos, maxPos); + } + r.removeDuplicateChunkReferences(); + if (cacheChunkRef != null) { + cacheChunkRef.put(pos, r, r.getMemory()); + } + } + if (r.children.length == 0) { + int chunk = DataUtils.getPageChunkId(pos); + if (chunk == parentChunk) { + return null; + } + } + return r; + } + + /** + * Get a buffer for writing. This caller must synchronize on the store + * before calling the method and until after using the buffer. + * + * @return the buffer + */ + private WriteBuffer getWriteBuffer() { + WriteBuffer buff; + if (writeBuffer != null) { + buff = writeBuffer; + buff.clear(); + } else { + buff = new WriteBuffer(); + } + return buff; + } + + /** + * Release a buffer for writing. This caller must synchronize on the store + * before calling the method and until after using the buffer. 
+     *
+     * @param buff the buffer that can be re-used
+     */
+    private void releaseWriteBuffer(WriteBuffer buff) {
+        if (buff.capacity() <= 4 * 1024 * 1024) {
+            writeBuffer = buff;
+        }
+    }
+
+    private boolean canOverwriteChunk(Chunk c, long time) {
+        if (retentionTime >= 0) {
+            if (c.time + retentionTime > time) {
+                return false;
+            }
+            if (c.unused == 0 || c.unused + retentionTime / 2 > time) {
+                return false;
+            }
+        }
+        Chunk r = retainChunk;
+        if (r != null && c.version > r.version) {
+            return false;
+        }
+        return true;
+    }
+
+    private long getTimeSinceCreation() {
+        return Math.max(0, getTimeAbsolute() - creationTime);
+    }
+
+    private long getTimeAbsolute() {
+        long now = System.currentTimeMillis();
+        if (lastTimeAbsolute != 0 && now < lastTimeAbsolute) {
+            // time seems to have run backwards - this can happen
+            // when the system time is adjusted, for example
+            // on a leap second
+            now = lastTimeAbsolute;
+        } else {
+            lastTimeAbsolute = now;
+        }
+        return now;
+    }
+
+    /**
+     * Apply the freed space to the chunk metadata. The metadata is updated, but
+     * completely free chunks are not removed from the set of chunks, and the
+     * disk space is not yet marked as free.
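getTimeAbsolute() above guards against the wall clock running backwards, so chunk timestamps stay monotonic even when the system time is adjusted. A testable sketch of that guard, taking the clock reading as a parameter instead of calling System.currentTimeMillis() directly:

```java
// Sketch of the backward-clock guard: remember the last value returned
// and never report an earlier time than that.
public final class MonotonicClock {
    private long last;

    long now(long wallClockMillis) {
        if (last != 0 && wallClockMillis < last) {
            // clock ran backwards (e.g. a time adjustment): reuse last value
            return last;
        }
        last = wallClockMillis;
        return wallClockMillis;
    }
}
```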
+ * + * @param storeVersion apply up to the given version + */ + private void applyFreedSpace(long storeVersion) { + while (true) { + ArrayList modified = New.arrayList(); + Iterator>> it; + it = freedPageSpace.entrySet().iterator(); + while (it.hasNext()) { + Entry> e = it.next(); + long v = e.getKey(); + if (v > storeVersion) { + continue; + } + ConcurrentHashMap freed = e.getValue(); + for (Chunk f : freed.values()) { + Chunk c = chunks.get(f.id); + if (c == null) { + // already removed + continue; + } + // no need to synchronize, as old entries + // are not concurrently modified + c.maxLenLive += f.maxLenLive; + c.pageCountLive += f.pageCountLive; + if (c.pageCountLive < 0 && c.pageCountLive > -MARKED_FREE) { + // can happen after a rollback + c.pageCountLive = 0; + } + if (c.maxLenLive < 0 && c.maxLenLive > -MARKED_FREE) { + // can happen after a rollback + c.maxLenLive = 0; + } + modified.add(c); + } + it.remove(); + } + for (Chunk c : modified) { + meta.put(Chunk.getMetaKey(c.id), c.asString()); + } + if (modified.isEmpty()) { + break; + } + } + } + + /** + * Shrink the file if possible, and if at least a given percentage can be + * saved. + * + * @param minPercent the minimum percentage to save + */ + private void shrinkFileIfPossible(int minPercent) { + if (fileStore.isReadOnly()) { + return; + } + long end = getFileLengthInUse(); + long fileSize = fileStore.size(); + if (end >= fileSize) { + return; + } + if (minPercent > 0 && fileSize - end < BLOCK_SIZE) { + return; + } + int savedPercent = (int) (100 - (end * 100 / fileSize)); + if (savedPercent < minPercent) { + return; + } + if (!closed) { + sync(); + } + fileStore.truncate(end); + } + + /** + * Get the position right after the last used byte. 
+ * + * @return the position + */ + private long getFileLengthInUse() { + long result = fileStore.getFileLengthInUse(); + assert result == measureFileLengthInUse() : result + " != " + measureFileLengthInUse(); + return result; + } + + private long measureFileLengthInUse() { + long size = 2; + for (Chunk c : chunks.values()) { + if (c.len != Integer.MAX_VALUE) { + size = Math.max(size, c.block + c.len); + } + } + return size * BLOCK_SIZE; + } + + /** + * Check whether there are any unsaved changes. + * + * @return if there are any changes + */ + public boolean hasUnsavedChanges() { + checkOpen(); + if (metaChanged) { + return true; + } + for (MVMap m : maps.values()) { + if (!m.isClosed()) { + long v = m.getVersion(); + if (v >= 0 && v > lastStoredVersion) { + return true; + } + } + } + return false; + } + + private Chunk readChunkHeader(long block) { + long p = block * BLOCK_SIZE; + ByteBuffer buff = fileStore.readFully(p, Chunk.MAX_HEADER_LENGTH); + return Chunk.readChunkHeader(buff, p); + } + + /** + * Compact the store by moving all live pages to new chunks. + * + * @return if anything was written + */ + public synchronized boolean compactRewriteFully() { + checkOpen(); + if (lastChunk == null) { + // nothing to do + return false; + } + for (MVMap m : maps.values()) { + @SuppressWarnings("unchecked") + MVMap map = (MVMap) m; + Cursor cursor = map.cursor(null); + Page lastPage = null; + while (cursor.hasNext()) { + cursor.next(); + Page p = cursor.getPage(); + if (p == lastPage) { + continue; + } + Object k = p.getKey(0); + Object v = p.getValue(0); + map.put(k, v); + lastPage = p; + } + } + commitAndSave(); + return true; + } + + /** + * Compact by moving all chunks next to each other. + * + * @return if anything was written + */ + public synchronized boolean compactMoveChunks() { + return compactMoveChunks(100, Long.MAX_VALUE); + } + + /** + * Compact the store by moving all chunks next to each other, if there is + * free space between chunks. 
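measureFileLengthInUse() above derives the used file length from the chunk table alone: the highest (block + len) over all completely written chunks, but never less than the two store-header blocks at the start of the file. A standalone sketch (BLOCK_SIZE and the chunk encoding are assumptions for illustration):

```java
// Sketch of measureFileLengthInUse(). Each long[] holds {block, lenInBlocks}
// for one chunk; a negative len marks a chunk whose length is not yet known
// (the Integer.MAX_VALUE placeholder in the source).
public final class UsedLength {
    static final int BLOCK_SIZE = 4096; // assumed block size

    static long lengthInUse(long[][] chunks) {
        long blocks = 2; // the store header occupies the first two blocks
        for (long[] c : chunks) {
            if (c[1] >= 0) {
                blocks = Math.max(blocks, c[0] + c[1]);
            }
        }
        return blocks * BLOCK_SIZE;
    }
}
```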
This might temporarily increase the file size. + * Chunks are overwritten irrespective of the current retention time. Before + * overwriting chunks and before resizing the file, syncFile() is called. + * + * @param targetFillRate do nothing if the file store fill rate is higher + * than this + * @param moveSize the number of bytes to move + * @return if anything was written + */ + public synchronized boolean compactMoveChunks(int targetFillRate, long moveSize) { + checkOpen(); + if (lastChunk == null || !reuseSpace) { + // nothing to do + return false; + } + int oldRetentionTime = retentionTime; + boolean oldReuse = reuseSpace; + try { + retentionTime = -1; + freeUnusedChunks(); + if (fileStore.getFillRate() > targetFillRate) { + return false; + } + long start = fileStore.getFirstFree() / BLOCK_SIZE; + ArrayList move = compactGetMoveBlocks(start, moveSize); + compactMoveChunks(move); + freeUnusedChunks(); + storeNow(); + } finally { + reuseSpace = oldReuse; + retentionTime = oldRetentionTime; + } + return true; + } + + private ArrayList compactGetMoveBlocks(long startBlock, long moveSize) { + ArrayList move = New.arrayList(); + for (Chunk c : chunks.values()) { + if (c.block > startBlock) { + move.add(c); + } + } + // sort by block + Collections.sort(move, new Comparator() { + @Override + public int compare(Chunk o1, Chunk o2) { + return Long.signum(o1.block - o2.block); + } + }); + // find which is the last block to keep + int count = 0; + long size = 0; + for (Chunk c : move) { + long chunkSize = c.len * (long) BLOCK_SIZE; + if (size + chunkSize > moveSize) { + break; + } + size += chunkSize; + count++; + } + // move the first block (so the first gap is moved), + // and the one at the end (so the file shrinks) + while (move.size() > count && move.size() > 1) { + move.remove(1); + } + + return move; + } + + private void compactMoveChunks(ArrayList move) { + for (Chunk c : move) { + WriteBuffer buff = getWriteBuffer(); + long start = c.block * BLOCK_SIZE; + int 
length = c.len * BLOCK_SIZE; + buff.limit(length); + ByteBuffer readBuff = fileStore.readFully(start, length); + Chunk.readChunkHeader(readBuff, start); + int chunkHeaderLen = readBuff.position(); + buff.position(chunkHeaderLen); + buff.put(readBuff); + long end = getFileLengthInUse(); + fileStore.markUsed(end, length); + fileStore.free(start, length); + c.block = end / BLOCK_SIZE; + c.next = 0; + buff.position(0); + c.writeChunkHeader(buff, chunkHeaderLen); + buff.position(length - Chunk.FOOTER_LENGTH); + buff.put(c.getFooterBytes()); + buff.position(0); + write(end, buff.getBuffer()); + releaseWriteBuffer(buff); + markMetaChanged(); + meta.put(Chunk.getMetaKey(c.id), c.asString()); + } + + // update the metadata (store at the end of the file) + reuseSpace = false; + commitAndSave(); + sync(); + + // now re-use the empty space + reuseSpace = true; + for (Chunk c : move) { + if (!chunks.containsKey(c.id)) { + // already removed during the + // previous store operation + continue; + } + WriteBuffer buff = getWriteBuffer(); + long start = c.block * BLOCK_SIZE; + int length = c.len * BLOCK_SIZE; + buff.limit(length); + ByteBuffer readBuff = fileStore.readFully(start, length); + Chunk.readChunkHeader(readBuff, 0); + int chunkHeaderLen = readBuff.position(); + buff.position(chunkHeaderLen); + buff.put(readBuff); + long pos = fileStore.allocate(length); + fileStore.free(start, length); + buff.position(0); + c.block = pos / BLOCK_SIZE; + c.writeChunkHeader(buff, chunkHeaderLen); + buff.position(length - Chunk.FOOTER_LENGTH); + buff.put(c.getFooterBytes()); + buff.position(0); + write(pos, buff.getBuffer()); + releaseWriteBuffer(buff); + markMetaChanged(); + meta.put(Chunk.getMetaKey(c.id), c.asString()); + } + + // update the metadata (within the file) + commitAndSave(); + sync(); + shrinkFileIfPossible(0); + } + + /** + * Force all stored changes to be written to the storage. The default + * implementation calls FileChannel.force(true). 
+     */
+    public void sync() {
+        checkOpen();
+        FileStore f = fileStore;
+        if (f != null) {
+            f.sync();
+        }
+    }
+
+    /**
+     * Try to increase the fill rate by re-writing partially full chunks. Chunks
+     * with a low number of live items are re-written.
+     * <p>
+     * If the current fill rate is higher than the target fill rate, nothing is
+     * done.
+     * <p>
+     * Please note this method will not necessarily reduce the file size, as
+     * empty chunks are not overwritten.
+     * <p>
    + * Only data of open maps can be moved. For maps that are not open, the old + * chunk is still referenced. Therefore, it is recommended to open all maps + * before calling this method. + * + * @param targetFillRate the minimum percentage of live entries + * @param write the minimum number of bytes to write + * @return if a chunk was re-written + */ + public boolean compact(int targetFillRate, int write) { + if (!reuseSpace) { + return false; + } + synchronized (compactSync) { + checkOpen(); + ArrayList old; + synchronized (this) { + old = compactGetOldChunks(targetFillRate, write); + } + if (old == null || old.isEmpty()) { + return false; + } + compactRewrite(old); + return true; + } + } + + /** + * Get the current fill rate (percentage of used space in the file). Unlike + * the fill rate of the store, here we only account for chunk data; the fill + * rate here is how much of the chunk data is live (still referenced). Young + * chunks are considered live. + * + * @return the fill rate, in percent (100 is completely full) + */ + public int getCurrentFillRate() { + long maxLengthSum = 1; + long maxLengthLiveSum = 1; + long time = getTimeSinceCreation(); + for (Chunk c : chunks.values()) { + maxLengthSum += c.maxLen; + if (c.time + retentionTime > time) { + // young chunks (we don't optimize those): + // assume if they are fully live + // so that we don't try to optimize yet + // until they get old + maxLengthLiveSum += c.maxLen; + } else { + maxLengthLiveSum += c.maxLenLive; + } + } + // the fill rate of all chunks combined + if (maxLengthSum <= 0) { + // avoid division by 0 + maxLengthSum = 1; + } + int fillRate = (int) (100 * maxLengthLiveSum / maxLengthSum); + return fillRate; + } + + private ArrayList compactGetOldChunks(int targetFillRate, int write) { + if (lastChunk == null) { + // nothing to do + return null; + } + long time = getTimeSinceCreation(); + int fillRate = getCurrentFillRate(); + if (fillRate >= targetFillRate) { + return null; + } + + // the 
'old' list contains the chunks we want to free up + ArrayList old = New.arrayList(); + Chunk last = chunks.get(lastChunk.id); + for (Chunk c : chunks.values()) { + // only look at chunk older than the retention time + // (it's possible to compact chunks earlier, but right + // now we don't do that) + if (c.time + retentionTime > time) { + continue; + } + long age = last.version - c.version + 1; + c.collectPriority = (int) (c.getFillRate() * 1000 / age); + old.add(c); + } + if (old.isEmpty()) { + return null; + } + + // sort the list, so the first entry should be collected first + Collections.sort(old, new Comparator() { + @Override + public int compare(Chunk o1, Chunk o2) { + int comp = Integer.compare(o1.collectPriority, + o2.collectPriority); + if (comp == 0) { + comp = Long.compare(o1.maxLenLive, + o2.maxLenLive); + } + return comp; + } + }); + // find out up to were in the old list we need to move + long written = 0; + int chunkCount = 0; + Chunk move = null; + for (Chunk c : old) { + if (move != null) { + if (c.collectPriority > 0 && written > write) { + break; + } + } + written += c.maxLenLive; + chunkCount++; + move = c; + } + if (chunkCount < 1) { + return null; + } + // remove the chunks we want to keep from this list + boolean remove = false; + for (Iterator it = old.iterator(); it.hasNext();) { + Chunk c = it.next(); + if (move == c) { + remove = true; + } else if (remove) { + it.remove(); + } + } + return old; + } + + private void compactRewrite(ArrayList old) { + HashSet set = new HashSet<>(); + for (Chunk c : old) { + set.add(c.id); + } + for (MVMap m : maps.values()) { + @SuppressWarnings("unchecked") + MVMap map = (MVMap) m; + if (!map.rewrite(set)) { + return; + } + } + if (!meta.rewrite(set)) { + return; + } + freeUnusedChunks(); + commitAndSave(); + } + + /** + * Read a page. 
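The compaction heuristics above reduce to two numbers: the store-wide fill rate from getCurrentFillRate() (young chunks count as fully live so they are not optimized yet), and a per-chunk collect priority from compactGetOldChunks() (the chunk's fill rate scaled down by its age, so old, sparse chunks sort first). A sketch with illustrative signatures:

```java
// Sketch of the two compaction metrics. Arrays stand in for the chunk map;
// all signatures are illustrative, not MVStore's API.
public final class CompactMetrics {
    // store-wide fill rate in percent; young[i] marks chunks within the
    // retention time, which are assumed fully live
    static int fillRate(long[] maxLen, long[] maxLenLive, boolean[] young) {
        long sum = 1, liveSum = 1; // start at 1 to avoid division by zero
        for (int i = 0; i < maxLen.length; i++) {
            sum += maxLen[i];
            liveSum += young[i] ? maxLen[i] : maxLenLive[i];
        }
        return (int) (100 * liveSum / sum);
    }

    // fill rate of one chunk, scaled down by its age in versions:
    // lower values are collected first
    static int collectPriority(int chunkFillRate, long lastVersion, long chunkVersion) {
        long age = lastVersion - chunkVersion + 1;
        return (int) (chunkFillRate * 1000 / age);
    }
}
```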
+     *
+     * @param map the map
+     * @param pos the page position
+     * @return the page
+     */
+    Page readPage(MVMap<?, ?> map, long pos) {
+        if (pos == 0) {
+            throw DataUtils.newIllegalStateException(
+                    DataUtils.ERROR_FILE_CORRUPT, "Position 0");
+        }
+        Page p = cache == null ? null : cache.get(pos);
+        if (p == null) {
+            Chunk c = getChunk(pos);
+            long filePos = c.block * BLOCK_SIZE;
+            filePos += DataUtils.getPageOffset(pos);
+            if (filePos < 0) {
+                throw DataUtils.newIllegalStateException(
+                        DataUtils.ERROR_FILE_CORRUPT,
+                        "Negative position {0}", filePos);
+            }
+            long maxPos = (c.block + c.len) * BLOCK_SIZE;
+            p = Page.read(fileStore, pos, map, filePos, maxPos);
+            cachePage(pos, p, p.getMemory());
+        }
+        return p;
+    }
+
+    /**
+     * Remove a page.
+     *
+     * @param map the map the page belongs to
+     * @param pos the position of the page
+     * @param memory the memory usage
+     */
+    void removePage(MVMap<?, ?> map, long pos, int memory) {
+        // we need to keep temporary pages,
+        // to support reading old versions and rollback
+        if (pos == 0) {
+            // the page was not yet stored:
+            // just using "unsavedMemory -= memory" could result in negative
+            // values, because in some cases a page is allocated, but never
+            // stored, so we need to use max
+            unsavedMemory = Math.max(0, unsavedMemory - memory);
+            return;
+        }
+
+        // This could result in a cache miss if the operation is rolled back,
+        // but we don't optimize for rollback.
+        // We could also keep the page in the cache, as somebody
+        // could still read it (reading the old version).
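readPage() and removePage() decode a page position long into a chunk id, an offset within the chunk, a length code, and a page type via DataUtils helpers. The exact bit layout lives in DataUtils; the packing below uses an assumed layout purely to illustrate the idea of encoding all four fields in one long:

```java
// Illustrative (NOT H2's actual) bit layout for a page position:
// [chunk id: 26 bits][offset: 32 bits][length code: 5 bits][type: 1 bit].
public final class PagePos {
    static long pack(int chunkId, int offset, int lengthCode, int type) {
        return ((long) chunkId << 38)
                | ((offset & 0xffffffffL) << 6)
                | ((long) lengthCode << 1)
                | type;
    }

    static int chunkId(long pos)    { return (int) (pos >>> 38); }
    static int offset(long pos)     { return (int) (pos >>> 6); } // truncates chunk bits
    static int lengthCode(long pos) { return (int) ((pos >>> 1) & 31); }
    static int type(long pos)       { return (int) (pos & 1); }
}
```

Packing everything into a long keeps page references cheap to store in node entries and usable as cache keys.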
+ if (cache != null) { + if (DataUtils.getPageType(pos) == DataUtils.PAGE_TYPE_LEAF) { + // keep nodes in the cache, because they are still used for + // garbage collection + cache.remove(pos); + } + } + + Chunk c = getChunk(pos); + long version = currentVersion; + if (map == meta && currentStoreVersion >= 0) { + if (Thread.currentThread() == currentStoreThread) { + // if the meta map is modified while storing, + // then this freed page needs to be registered + // with the stored chunk, so that the old chunk + // can be re-used + version = currentStoreVersion; + } + } + registerFreePage(version, c.id, + DataUtils.getPageMaxLength(pos), 1); + } + + private void registerFreePage(long version, int chunkId, + long maxLengthLive, int pageCount) { + ConcurrentHashMap freed = freedPageSpace.get(version); + if (freed == null) { + freed = new ConcurrentHashMap<>(); + ConcurrentHashMap f2 = freedPageSpace.putIfAbsent(version, + freed); + if (f2 != null) { + freed = f2; + } + } + // synchronize, because pages could be freed concurrently + synchronized (freed) { + Chunk chunk = freed.get(chunkId); + if (chunk == null) { + chunk = new Chunk(chunkId); + Chunk chunk2 = freed.putIfAbsent(chunkId, chunk); + if (chunk2 != null) { + chunk = chunk2; + } + } + chunk.maxLenLive -= maxLengthLive; + chunk.pageCountLive -= pageCount; + } + } + + Compressor getCompressorFast() { + if (compressorFast == null) { + compressorFast = new CompressLZF(); + } + return compressorFast; + } + + Compressor getCompressorHigh() { + if (compressorHigh == null) { + compressorHigh = new CompressDeflate(); + } + return compressorHigh; + } + + int getCompressionLevel() { + return compressionLevel; + } + + public int getPageSplitSize() { + return pageSplitSize; + } + + public boolean getReuseSpace() { + return reuseSpace; + } + + /** + * Whether empty space in the file should be re-used. If enabled, old data + * is overwritten (default). If disabled, writes are appended at the end of + * the file. + *
+     * This setting is especially useful for online backup. To create an online
+     * backup, disable this setting, then copy the file (starting at the
+     * beginning of the file). In this case, concurrent backup and write
+     * operations are possible (obviously the backup process needs to be faster
+     * than the write operations).
+     *
+     * @param reuseSpace the new value
+     */
+    public void setReuseSpace(boolean reuseSpace) {
+        this.reuseSpace = reuseSpace;
+    }
+
+    public int getRetentionTime() {
+        return retentionTime;
+    }
+
+    /**
+     * How long to retain old, persisted chunks, in milliseconds. Chunks that
+     * are older may be overwritten once they contain no live data.
+     * <p>

    + * The default value is 45000 (45 seconds) when using the default file + * store. It is assumed that a file system and hard disk will flush all + * write buffers within this time. Using a lower value might be dangerous, + * unless the file system and hard disk flush the buffers earlier. To + * manually flush the buffers, use + * MVStore.getFile().force(true), however please note that + * according to various tests this does not always work as expected + * depending on the operating system and hardware. + *

    + * The retention time needs to be long enough to allow reading old chunks + * while traversing over the entries of a map. + *

    + * This setting is not persisted. + * + * @param ms how many milliseconds to retain old chunks (0 to overwrite them + * as early as possible) + */ + public void setRetentionTime(int ms) { + this.retentionTime = ms; + } + + /** + * How many versions to retain for in-memory stores. If not set, 5 old + * versions are retained. + * + * @param count the number of versions to keep + */ + public void setVersionsToKeep(int count) { + this.versionsToKeep = count; + } + + /** + * Get the oldest version to retain in memory (for in-memory stores). + * + * @return the version + */ + public long getVersionsToKeep() { + return versionsToKeep; + } + + /** + * Get the oldest version to retain in memory, which is the manually set + * retain version, or the current store version (whatever is older). + * + * @return the version + */ + long getOldestVersionToKeep() { + long v = currentVersion; + if (fileStore == null) { + return v - versionsToKeep; + } + long storeVersion = currentStoreVersion; + if (storeVersion > -1) { + v = Math.min(v, storeVersion); + } + return v; + } + + /** + * Check whether all data can be read from this version. This requires that + * all chunks referenced by this version are still available (not + * overwritten). 
+ * + * @param version the version + * @return true if all data can be read + */ + private boolean isKnownVersion(long version) { + if (version > currentVersion || version < 0) { + return false; + } + if (version == currentVersion || chunks.size() == 0) { + // no stored data + return true; + } + // need to check if a chunk for this version exists + Chunk c = getChunkForVersion(version); + if (c == null) { + return false; + } + // also, all chunks referenced by this version + // need to be available in the file + MVMap oldMeta = getMetaMap(version); + if (oldMeta == null) { + return false; + } + try { + for (Iterator it = oldMeta.keyIterator("chunk."); + it.hasNext();) { + String chunkKey = it.next(); + if (!chunkKey.startsWith("chunk.")) { + break; + } + if (!meta.containsKey(chunkKey)) { + String s = oldMeta.get(chunkKey); + Chunk c2 = Chunk.fromString(s); + Chunk test = readChunkHeaderAndFooter(c2.block); + if (test == null || test.id != c2.id) { + return false; + } + // we store this chunk + chunks.put(c2.id, c2); + } + } + } catch (IllegalStateException e) { + // the chunk missing where the metadata is stored + return false; + } + return true; + } + + /** + * Increment the number of unsaved pages. + * + * @param memory the memory usage of the page + */ + void registerUnsavedPage(int memory) { + unsavedMemory += memory; + int newValue = unsavedMemory; + if (newValue > autoCommitMemory && autoCommitMemory > 0) { + saveNeeded = true; + } + } + + /** + * This method is called before writing to a map. 
+ * + * @param map the map + */ + void beforeWrite(MVMap map) { + if (saveNeeded) { + if (map == meta) { + // no, don't save while the metadata map is locked + // this is to avoid deadlocks that could occur when we + // synchronize on the store and then on the metadata map + // TODO there should be no deadlocks possible + return; + } + saveNeeded = false; + // check again, because it could have been written by now + if (unsavedMemory > autoCommitMemory && autoCommitMemory > 0) { + commitAndSave(); + } + } + } + + /** + * Get the store version. The store version is usually used to upgrade the + * structure of the store after upgrading the application. Initially the + * store version is 0, until it is changed. + * + * @return the store version + */ + public int getStoreVersion() { + checkOpen(); + String x = meta.get("setting.storeVersion"); + return x == null ? 0 : DataUtils.parseHexInt(x); + } + + /** + * Update the store version. + * + * @param version the new store version + */ + public synchronized void setStoreVersion(int version) { + checkOpen(); + markMetaChanged(); + meta.put("setting.storeVersion", Integer.toHexString(version)); + } + + /** + * Revert to the beginning of the current version, reverting all uncommitted + * changes. + */ + public void rollback() { + rollbackTo(currentVersion); + } + + /** + * Revert to the beginning of the given version. All later changes (stored + * or not) are forgotten. All maps that were created later are closed. A + * rollback to a version before the last stored version is immediately + * persisted. Rollback to version 0 means all data is removed. 
+ * + * @param version the version to revert to + */ + public synchronized void rollbackTo(long version) { + checkOpen(); + if (version == 0) { + // special case: remove all data + for (MVMap m : maps.values()) { + m.close(); + } + meta.clear(); + chunks.clear(); + if (fileStore != null) { + fileStore.clear(); + } + maps.clear(); + freedPageSpace.clear(); + currentVersion = version; + setWriteVersion(version); + metaChanged = false; + return; + } + DataUtils.checkArgument( + isKnownVersion(version), + "Unknown version {0}", version); + for (MVMap m : maps.values()) { + m.rollbackTo(version); + } + for (long v = currentVersion; v >= version; v--) { + if (freedPageSpace.size() == 0) { + break; + } + freedPageSpace.remove(v); + } + meta.rollbackTo(version); + metaChanged = false; + boolean loadFromFile = false; + // find out which chunks to remove, + // and which is the newest chunk to keep + // (the chunk list can have gaps) + ArrayList remove = new ArrayList<>(); + Chunk keep = null; + for (Chunk c : chunks.values()) { + if (c.version > version) { + remove.add(c.id); + } else if (keep == null || keep.id < c.id) { + keep = c; + } + } + if (!remove.isEmpty()) { + // remove the youngest first, so we don't create gaps + // (in case we remove many chunks) + Collections.sort(remove, Collections.reverseOrder()); + revertTemp(version); + loadFromFile = true; + for (int id : remove) { + Chunk c = chunks.remove(id); + long start = c.block * BLOCK_SIZE; + int length = c.len * BLOCK_SIZE; + fileStore.free(start, length); + // overwrite the chunk, + // so it is not used later on + WriteBuffer buff = getWriteBuffer(); + buff.limit(length); + // buff.clear() does not set the data + Arrays.fill(buff.getBuffer().array(), (byte) 0); + write(start, buff.getBuffer()); + releaseWriteBuffer(buff); + // only really needed if we remove many chunks, when writes are + // re-ordered - but we do it always, because rollback is not + // performance critical + sync(); + } + lastChunk = keep; + 
writeStoreHeader(); + readStoreHeader(); + } + for (MVMap m : new ArrayList<>(maps.values())) { + int id = m.getId(); + if (m.getCreateVersion() >= version) { + m.close(); + maps.remove(id); + } else { + if (loadFromFile) { + m.setRootPos(getRootPos(meta, id), -1); + } + } + } + // rollback might have rolled back the stored chunk metadata as well + if (lastChunk != null) { + for (Chunk c : chunks.values()) { + meta.put(Chunk.getMetaKey(c.id), c.asString()); + } + } + currentVersion = version; + setWriteVersion(version); + } + + private static long getRootPos(MVMap map, int mapId) { + String root = map.get(MVMap.getMapRootKey(mapId)); + return root == null ? 0 : DataUtils.parseHexLong(root); + } + + private void revertTemp(long storeVersion) { + for (Iterator>> it = + freedPageSpace.entrySet().iterator(); it.hasNext(); ) { + Entry> entry = it.next(); + Long v = entry.getKey(); + if (v <= storeVersion) { + it.remove(); + } + } + for (MVMap m : maps.values()) { + m.removeUnusedOldVersions(); + } + } + + /** + * Get the current version of the data. When a new store is created, the + * version is 0. + * + * @return the version + */ + public long getCurrentVersion() { + return currentVersion; + } + + /** + * Get the file store. + * + * @return the file store + */ + public FileStore getFileStore() { + return fileStore; + } + + /** + * Get the store header. This data is for informational purposes only. The + * data is subject to change in future versions. The data should not be + * modified (doing so may corrupt the store). + * + * @return the store header + */ + public Map getStoreHeader() { + return storeHeader; + } + + private void checkOpen() { + if (closed) { + throw DataUtils.newIllegalStateException(DataUtils.ERROR_CLOSED, + "This store is closed", panicException); + } + } + + /** + * Rename a map. 
+ * + * @param map the map + * @param newName the new name + */ + public synchronized void renameMap(MVMap map, String newName) { + checkOpen(); + DataUtils.checkArgument(map != meta, + "Renaming the meta map is not allowed"); + int id = map.getId(); + String oldName = getMapName(id); + if (oldName.equals(newName)) { + return; + } + DataUtils.checkArgument( + !meta.containsKey("name." + newName), + "A map named {0} already exists", newName); + markMetaChanged(); + String x = Integer.toHexString(id); + meta.remove("name." + oldName); + meta.put(MVMap.getMapKey(id), map.asString(newName)); + meta.put("name." + newName, x); + } + + /** + * Remove a map. Please note rolling back this operation does not restore + * the data; if you need this ability, use Map.clear(). + * + * @param map the map to remove + */ + public synchronized void removeMap(MVMap map) { + checkOpen(); + DataUtils.checkArgument(map != meta, + "Removing the meta map is not allowed"); + map.clear(); + int id = map.getId(); + String name = getMapName(id); + markMetaChanged(); + meta.remove(MVMap.getMapKey(id)); + meta.remove("name." + name); + meta.remove(MVMap.getMapRootKey(id)); + maps.remove(id); + } + + /** + * Get the name of the given map. + * + * @param id the map id + * @return the name, or null if not found + */ + public synchronized String getMapName(int id) { + checkOpen(); + String m = meta.get(MVMap.getMapKey(id)); + return m == null ? null : DataUtils.getMapName(m); + } + + /** + * Commit and save all changes, if there are any, and compact the store if + * needed. 
+ */ + void writeInBackground() { + try { + if (closed) { + return; + } + + // could also commit when there are many unsaved pages, + // but according to a test it doesn't really help + + long time = getTimeSinceCreation(); + if (time <= lastCommitTime + autoCommitDelay) { + return; + } + if (hasUnsavedChanges()) { + try { + commitAndSave(); + } catch (Throwable e) { + handleException(e); + return; + } + } + if (autoCompactFillRate > 0) { + // whether there were file read or write operations since + // the last time + boolean fileOps; + long fileOpCount = fileStore.getWriteCount() + fileStore.getReadCount(); + if (autoCompactLastFileOpCount != fileOpCount) { + fileOps = true; + } else { + fileOps = false; + } + // use a lower fill rate if there were any file operations + int targetFillRate = fileOps ? autoCompactFillRate / 3 : autoCompactFillRate; + compact(targetFillRate, autoCommitMemory); + autoCompactLastFileOpCount = fileStore.getWriteCount() + fileStore.getReadCount(); + } + } catch (Throwable e) { + handleException(e); + } + } + + private void handleException(Throwable ex) { + if (backgroundExceptionHandler != null) { + try { + backgroundExceptionHandler.uncaughtException(null, ex); + } catch(Throwable ignore) { + if (ex != ignore) { // OOME may be the same + ex.addSuppressed(ignore); + } + } + } + } + + /** + * Set the read cache size in MB. + * + * @param mb the cache size in MB. 
+ */ + public void setCacheSize(int mb) { + final long bytes = (long) mb * 1024 * 1024; + if (cache != null) { + cache.setMaxMemory(bytes); + cache.clear(); + } + if (cacheChunkRef != null) { + cacheChunkRef.setMaxMemory(bytes / 4); + cacheChunkRef.clear(); + } + } + + public boolean isClosed() { + return closed; + } + + private void stopBackgroundThread() { + BackgroundWriterThread t = backgroundWriterThread; + if (t == null) { + return; + } + backgroundWriterThread = null; + if (Thread.currentThread() == t) { + // within the thread itself - can not join + return; + } + synchronized (t.sync) { + t.sync.notifyAll(); + } + if (Thread.holdsLock(this)) { + // called from storeNow: can not join, + // because that could result in a deadlock + return; + } + try { + t.join(); + } catch (Exception e) { + // ignore + } + } + + /** + * Set the maximum delay in milliseconds to auto-commit changes. + *

    + * To disable auto-commit, set the value to 0. In this case, changes are + * only committed when explicitly calling commit. + *

    + * The default is 1000, meaning all changes are committed after at most one + * second. + * + * @param millis the maximum delay + */ + public void setAutoCommitDelay(int millis) { + if (autoCommitDelay == millis) { + return; + } + autoCommitDelay = millis; + if (fileStore == null || fileStore.isReadOnly()) { + return; + } + stopBackgroundThread(); + // start the background thread if needed + if (millis > 0) { + int sleep = Math.max(1, millis / 10); + BackgroundWriterThread t = + new BackgroundWriterThread(this, sleep, + fileStore.toString()); + t.start(); + backgroundWriterThread = t; + } + } + + /** + * Get the auto-commit delay. + * + * @return the delay in milliseconds, or 0 if auto-commit is disabled. + */ + public int getAutoCommitDelay() { + return autoCommitDelay; + } + + /** + * Get the maximum memory (in bytes) used for unsaved pages. If this number + * is exceeded, unsaved changes are stored to disk. + * + * @return the memory in bytes + */ + public int getAutoCommitMemory() { + return autoCommitMemory; + } + + /** + * Get the estimated memory (in bytes) of unsaved data. If the value exceeds + * the auto-commit memory, the changes are committed. + *

    + * The returned value is an estimation only. + * + * @return the memory in bytes + */ + public int getUnsavedMemory() { + return unsavedMemory; + } + + /** + * Put the page in the cache. + * + * @param pos the page position + * @param page the page + * @param memory the memory used + */ + void cachePage(long pos, Page page, int memory) { + if (cache != null) { + cache.put(pos, page, memory); + } + } + + /** + * Get the amount of memory used for caching, in MB. + * Note that this does not include the page chunk references cache, which is + * 25% of the size of the page cache. + * + * @return the amount of memory used for caching + */ + public int getCacheSizeUsed() { + if (cache == null) { + return 0; + } + return (int) (cache.getUsedMemory() / 1024 / 1024); + } + + /** + * Get the maximum cache size, in MB. + * Note that this does not include the page chunk references cache, which is + * 25% of the size of the page cache. + * + * @return the cache size + */ + public int getCacheSize() { + if (cache == null) { + return 0; + } + return (int) (cache.getMaxMemory() / 1024 / 1024); + } + + /** + * Get the cache. + * + * @return the cache + */ + public CacheLongKeyLIRS getCache() { + return cache; + } + + /** + * Whether the store is read-only. + * + * @return true if it is + */ + public boolean isReadOnly() { + return fileStore == null ? false : fileStore.isReadOnly(); + } + + /** + * A background writer thread to automatically store changes from time to + * time. 
+ */ + private static class BackgroundWriterThread extends Thread { + + public final Object sync = new Object(); + private final MVStore store; + private final int sleep; + + BackgroundWriterThread(MVStore store, int sleep, String fileStoreName) { + super("MVStore background writer " + fileStoreName); + this.store = store; + this.sleep = sleep; + setDaemon(true); + } + + @Override + public void run() { + while (true) { + Thread t = store.backgroundWriterThread; + if (t == null) { + break; + } + synchronized (sync) { + try { + sync.wait(sleep); + } catch (InterruptedException e) { + continue; + } + } + store.writeInBackground(); + } + } + + } + + /** + * A builder for an MVStore. + */ + public static class Builder { + + private final HashMap config; + + private Builder(HashMap config) { + this.config = config; + } + + /** + * Creates new instance of MVStore.Builder. + */ + public Builder() { + config = new HashMap<>(); + } + + private Builder set(String key, Object value) { + config.put(key, value); + return this; + } + + /** + * Disable auto-commit, by setting the auto-commit delay and auto-commit + * buffer size to 0. + * + * @return this + */ + public Builder autoCommitDisabled() { + // we have a separate config option so that + // no thread is started if the write delay is 0 + // (if we only had a setter in the MVStore, + // the thread would need to be started in any case) + set("autoCommitBufferSize", 0); + return set("autoCommitDelay", 0); + } + + /** + * Set the size of the write buffer, in KB disk space (for file-based + * stores). Unless auto-commit is disabled, changes are automatically + * saved if there are more than this amount of changes. + *

    + * The default is 1024 KB. + *

    + * When the value is set to 0 or lower, data is not automatically + * stored. + * + * @param kb the write buffer size, in kilobytes + * @return this + */ + public Builder autoCommitBufferSize(int kb) { + return set("autoCommitBufferSize", kb); + } + + /** + * Set the auto-compact target fill rate. If the average fill rate (the + * percentage of the storage space that contains active data) of the + * chunks is lower, then the chunks with a low fill rate are re-written. + * Also, if the percentage of empty space between chunks is higher than + * this value, then chunks at the end of the file are moved. Compaction + * stops if the target fill rate is reached. + *

    + * The default value is 40 (40%). The value 0 disables auto-compacting. + *

    + * + * @param percent the target fill rate + * @return this + */ + public Builder autoCompactFillRate(int percent) { + return set("autoCompactFillRate", percent); + } + + /** + * Use the following file name. If the file does not exist, it is + * automatically created. The parent directory already must exist. + * + * @param fileName the file name + * @return this + */ + public Builder fileName(String fileName) { + return set("fileName", fileName); + } + + /** + * Encrypt / decrypt the file using the given password. This method has + * no effect for in-memory stores. The password is passed as a + * char array so that it can be cleared as soon as possible. Please note + * there is still a small risk that password stays in memory (due to + * Java garbage collection). Also, the hashed encryption key is kept in + * memory as long as the file is open. + * + * @param password the password + * @return this + */ + public Builder encryptionKey(char[] password) { + return set("encryptionKey", password); + } + + /** + * Open the file in read-only mode. In this case, a shared lock will be + * acquired to ensure the file is not concurrently opened in write mode. + *

    + * If this option is not used, the file is locked exclusively. + *

    + * Please note a store may only be opened once in every JVM (no matter + * whether it is opened in read-only or read-write mode), because each + * file may be locked only once in a process. + * + * @return this + */ + public Builder readOnly() { + return set("readOnly", 1); + } + + /** + * Set the read cache size in MB. The default is 16 MB. + * + * @param mb the cache size in megabytes + * @return this + */ + public Builder cacheSize(int mb) { + return set("cacheSize", mb); + } + + /** + * Set the read cache concurrency. The default is 16, meaning 16 + * segments are used. + * + * @param concurrency the cache concurrency + * @return this + */ + public Builder cacheConcurrency(int concurrency) { + return set("cacheConcurrency", concurrency); + } + + /** + * Compress data before writing using the LZF algorithm. This will save + * about 50% of the disk space, but will slow down read and write + * operations slightly. + *

    + * This setting only affects writes; it is not necessary to enable + * compression when reading, even if compression was enabled when + * writing. + * + * @return this + */ + public Builder compress() { + return set("compress", 1); + } + + /** + * Compress data before writing using the Deflate algorithm. This will + * save more disk space, but will slow down read and write operations + * quite a bit. + *

    + * This setting only affects writes; it is not necessary to enable + * compression when reading, even if compression was enabled when + * writing. + * + * @return this + */ + public Builder compressHigh() { + return set("compress", 2); + } + + /** + * Set the amount of memory a page should contain at most, in bytes, + * before it is split. The default is 16 KB for persistent stores and 4 + * KB for in-memory stores. This is not a limit in the page size, as + * pages with one entry can get larger. It is just the point where pages + * that contain more than one entry are split. + * + * @param pageSplitSize the page size + * @return this + */ + public Builder pageSplitSize(int pageSplitSize) { + return set("pageSplitSize", pageSplitSize); + } + + /** + * Set the listener to be used for exceptions that occur when writing in + * the background thread. + * + * @param exceptionHandler the handler + * @return this + */ + public Builder backgroundExceptionHandler( + Thread.UncaughtExceptionHandler exceptionHandler) { + return set("backgroundExceptionHandler", exceptionHandler); + } + + /** + * Use the provided file store instead of the default one. + *

    + * File stores passed in this way need to be open. They are not closed + * when closing the store. + *

    + * Please note that any kind of store (including an off-heap store) is + * considered a "persistence", while an "in-memory store" means objects + * are not persisted and fully kept in the JVM heap. + * + * @param store the file store + * @return this + */ + public Builder fileStore(FileStore store) { + return set("fileStore", store); + } + + /** + * Open the store. + * + * @return the opened store + */ + public MVStore open() { + return new MVStore(config); + } + + @Override + public String toString() { + return DataUtils.appendMap(new StringBuilder(), config).toString(); + } + + /** + * Read the configuration from a string. + * + * @param s the string representation + * @return the builder + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + public static Builder fromString(String s) { + // Cast from HashMap to HashMap is safe + return new Builder((HashMap) DataUtils.parseMap(s)); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/MVStoreTool.java b/modules/h2/src/main/java/org/h2/mvstore/MVStoreTool.java new file mode 100644 index 0000000000000..1645489b26f5e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/MVStoreTool.java @@ -0,0 +1,720 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.io.IOException; +import java.io.OutputStream; +import java.io.PrintWriter; +import java.io.Writer; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.charset.StandardCharsets; +import java.sql.Timestamp; +import java.util.Map; +import java.util.Map.Entry; +import java.util.TreeMap; + +import org.h2.compress.CompressDeflate; +import org.h2.compress.CompressLZF; +import org.h2.compress.Compressor; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.mvstore.type.DataType; +import org.h2.mvstore.type.StringDataType; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FileUtils; +import org.h2.util.Utils; + +/** + * Utility methods used in combination with the MVStore. + */ +public class MVStoreTool { + + /** + * Runs this tool. + * Options are case sensitive. Supported options are: + * + * + * + * + * + * + * + * + * + *
    [-dump <fileName>]      Dump the contents of the file
    [-info <fileName>]      Get summary information about a file
    [-compact <fileName>]   Compact a store
    [-compress <fileName>]  Compact a store with compression enabled
    + * + * @param args the command line arguments + */ + public static void main(String... args) { + for (int i = 0; i < args.length; i++) { + if ("-dump".equals(args[i])) { + String fileName = args[++i]; + dump(fileName, new PrintWriter(System.out), true); + } else if ("-info".equals(args[i])) { + String fileName = args[++i]; + info(fileName, new PrintWriter(System.out)); + } else if ("-compact".equals(args[i])) { + String fileName = args[++i]; + compact(fileName, false); + } else if ("-compress".equals(args[i])) { + String fileName = args[++i]; + compact(fileName, true); + } else if ("-rollback".equals(args[i])) { + String fileName = args[++i]; + long targetVersion = Long.decode(args[++i]); + rollback(fileName, targetVersion, new PrintWriter(System.out)); + } else if ("-repair".equals(args[i])) { + String fileName = args[++i]; + repair(fileName); + } + } + } + + /** + * Read the contents of the file and write them to system out. + * + * @param fileName the name of the file + * @param details whether to print details + */ + public static void dump(String fileName, boolean details) { + dump(fileName, new PrintWriter(System.out), details); + } + + /** + * Read the summary information of the file and write them to system out. + * + * @param fileName the name of the file + */ + public static void info(String fileName) { + info(fileName, new PrintWriter(System.out)); + } + + /** + * Read the contents of the file and display them in a human-readable + * format. 
+ * + * @param fileName the name of the file + * @param writer the print writer + * @param details print the page details + */ + public static void dump(String fileName, Writer writer, boolean details) { + PrintWriter pw = new PrintWriter(writer, true); + if (!FilePath.get(fileName).exists()) { + pw.println("File not found: " + fileName); + return; + } + long size = FileUtils.size(fileName); + pw.printf("File %s, %d bytes, %d MB\n", fileName, size, size / 1024 / 1024); + FileChannel file = null; + int blockSize = MVStore.BLOCK_SIZE; + TreeMap mapSizesTotal = + new TreeMap<>(); + long pageSizeTotal = 0; + try { + file = FilePath.get(fileName).open("r"); + long fileSize = file.size(); + int len = Long.toHexString(fileSize).length(); + ByteBuffer block = ByteBuffer.allocate(4096); + long pageCount = 0; + for (long pos = 0; pos < fileSize;) { + block.rewind(); + DataUtils.readFully(file, pos, block); + block.rewind(); + int headerType = block.get(); + if (headerType == 'H') { + String header = new String(block.array(), StandardCharsets.ISO_8859_1).trim(); + pw.printf("%0" + len + "x fileHeader %s%n", + pos, header); + pos += blockSize; + continue; + } + if (headerType != 'c') { + pos += blockSize; + continue; + } + block.position(0); + Chunk c = null; + try { + c = Chunk.readChunkHeader(block, pos); + } catch (IllegalStateException e) { + pos += blockSize; + continue; + } + if (c.len <= 0) { + // not a chunk + pos += blockSize; + continue; + } + int length = c.len * MVStore.BLOCK_SIZE; + pw.printf("%n%0" + len + "x chunkHeader %s%n", + pos, c.toString()); + ByteBuffer chunk = ByteBuffer.allocate(length); + DataUtils.readFully(file, pos, chunk); + int p = block.position(); + pos += length; + int remaining = c.pageCount; + pageCount += c.pageCount; + TreeMap mapSizes = + new TreeMap<>(); + int pageSizeSum = 0; + while (remaining > 0) { + int start = p; + try { + chunk.position(p); + } catch (IllegalArgumentException e) { + // too far + pw.printf("ERROR illegal position 
%d%n", p); + break; + } + int pageSize = chunk.getInt(); + // check value (ignored) + chunk.getShort(); + int mapId = DataUtils.readVarInt(chunk); + int entries = DataUtils.readVarInt(chunk); + int type = chunk.get(); + boolean compressed = (type & DataUtils.PAGE_COMPRESSED) != 0; + boolean node = (type & 1) != 0; + if (details) { + pw.printf( + "+%0" + len + + "x %s, map %x, %d entries, %d bytes, maxLen %x%n", + p, + (node ? "node" : "leaf") + + (compressed ? " compressed" : ""), + mapId, + node ? entries + 1 : entries, + pageSize, + DataUtils.getPageMaxLength(DataUtils.getPagePos(0, 0, pageSize, 0)) + ); + } + p += pageSize; + Integer mapSize = mapSizes.get(mapId); + if (mapSize == null) { + mapSize = 0; + } + mapSizes.put(mapId, mapSize + pageSize); + Long total = mapSizesTotal.get(mapId); + if (total == null) { + total = 0L; + } + mapSizesTotal.put(mapId, total + pageSize); + pageSizeSum += pageSize; + pageSizeTotal += pageSize; + remaining--; + long[] children = null; + long[] counts = null; + if (node) { + children = new long[entries + 1]; + for (int i = 0; i <= entries; i++) { + children[i] = chunk.getLong(); + } + counts = new long[entries + 1]; + for (int i = 0; i <= entries; i++) { + long s = DataUtils.readVarLong(chunk); + counts[i] = s; + } + } + String[] keys = new String[entries]; + if (mapId == 0 && details) { + ByteBuffer data; + if (compressed) { + boolean fast = !((type & DataUtils.PAGE_COMPRESSED_HIGH) == + DataUtils.PAGE_COMPRESSED_HIGH); + Compressor compressor = getCompressor(fast); + int lenAdd = DataUtils.readVarInt(chunk); + int compLen = pageSize + start - chunk.position(); + byte[] comp = Utils.newBytes(compLen); + chunk.get(comp); + int l = compLen + lenAdd; + data = ByteBuffer.allocate(l); + compressor.expand(comp, 0, compLen, data.array(), 0, l); + } else { + data = chunk; + } + for (int i = 0; i < entries; i++) { + String k = StringDataType.INSTANCE.read(data); + keys[i] = k; + } + if (node) { + // meta map node + for (int i = 0; i < 
entries; i++) { + long cp = children[i]; + pw.printf(" %d children < %s @ " + + "chunk %x +%0" + + len + "x%n", + counts[i], + keys[i], + DataUtils.getPageChunkId(cp), + DataUtils.getPageOffset(cp)); + } + long cp = children[entries]; + pw.printf(" %d children >= %s @ chunk %x +%0" + + len + "x%n", + counts[entries], + keys.length >= entries ? null : keys[entries], + DataUtils.getPageChunkId(cp), + DataUtils.getPageOffset(cp)); + } else { + // meta map leaf + String[] values = new String[entries]; + for (int i = 0; i < entries; i++) { + String v = StringDataType.INSTANCE.read(data); + values[i] = v; + } + for (int i = 0; i < entries; i++) { + pw.println(" " + keys[i] + + " = " + values[i]); + } + } + } else { + if (node && details) { + for (int i = 0; i <= entries; i++) { + long cp = children[i]; + pw.printf(" %d children @ chunk %x +%0" + + len + "x%n", + counts[i], + DataUtils.getPageChunkId(cp), + DataUtils.getPageOffset(cp)); + } + } + } + } + pageSizeSum = Math.max(1, pageSizeSum); + for (Integer mapId : mapSizes.keySet()) { + int percent = 100 * mapSizes.get(mapId) / pageSizeSum; + pw.printf("map %x: %d bytes, %d%%%n", mapId, mapSizes.get(mapId), percent); + } + int footerPos = chunk.limit() - Chunk.FOOTER_LENGTH; + try { + chunk.position(footerPos); + pw.printf( + "+%0" + len + "x chunkFooter %s%n", + footerPos, + new String(chunk.array(), chunk.position(), + Chunk.FOOTER_LENGTH, StandardCharsets.ISO_8859_1).trim()); + } catch (IllegalArgumentException e) { + // too far + pw.printf("ERROR illegal footer position %d%n", footerPos); + } + } + pw.printf("%n%0" + len + "x eof%n", fileSize); + pw.printf("\n"); + pageCount = Math.max(1, pageCount); + pw.printf("page size total: %d bytes, page count: %d, average page size: %d bytes\n", + pageSizeTotal, pageCount, pageSizeTotal / pageCount); + pageSizeTotal = Math.max(1, pageSizeTotal); + for (Integer mapId : mapSizesTotal.keySet()) { + int percent = (int) (100 * mapSizesTotal.get(mapId) / pageSizeTotal); + 
pw.printf("map %x: %d bytes, %d%%%n", mapId, mapSizesTotal.get(mapId), percent); + } + } catch (IOException e) { + pw.println("ERROR: " + e); + e.printStackTrace(pw); + } finally { + if (file != null) { + try { + file.close(); + } catch (IOException e) { + // ignore + } + } + } + pw.flush(); + } + + private static Compressor getCompressor(boolean fast) { + return fast ? new CompressLZF() : new CompressDeflate(); + } + + /** + * Read the summary information of the file and write them to system out. + * + * @param fileName the name of the file + * @param writer the print writer + * @return null if successful (if there was no error), otherwise the error + * message + */ + public static String info(String fileName, Writer writer) { + PrintWriter pw = new PrintWriter(writer, true); + if (!FilePath.get(fileName).exists()) { + pw.println("File not found: " + fileName); + return "File not found: " + fileName; + } + long fileLength = FileUtils.size(fileName); + MVStore store = new MVStore.Builder(). + fileName(fileName). 
+ readOnly().open(); + try { + MVMap meta = store.getMetaMap(); + Map header = store.getStoreHeader(); + long fileCreated = DataUtils.readHexLong(header, "created", 0L); + TreeMap chunks = new TreeMap<>(); + long chunkLength = 0; + long maxLength = 0; + long maxLengthLive = 0; + long maxLengthNotEmpty = 0; + for (Entry e : meta.entrySet()) { + String k = e.getKey(); + if (k.startsWith("chunk.")) { + Chunk c = Chunk.fromString(e.getValue()); + chunks.put(c.id, c); + chunkLength += c.len * MVStore.BLOCK_SIZE; + maxLength += c.maxLen; + maxLengthLive += c.maxLenLive; + if (c.maxLenLive > 0) { + maxLengthNotEmpty += c.maxLen; + } + } + } + pw.printf("Created: %s\n", formatTimestamp(fileCreated, fileCreated)); + pw.printf("Last modified: %s\n", + formatTimestamp(FileUtils.lastModified(fileName), fileCreated)); + pw.printf("File length: %d\n", fileLength); + pw.printf("The last chunk is not listed\n"); + pw.printf("Chunk length: %d\n", chunkLength); + pw.printf("Chunk count: %d\n", chunks.size()); + pw.printf("Used space: %d%%\n", getPercent(chunkLength, fileLength)); + pw.printf("Chunk fill rate: %d%%\n", maxLength == 0 ? 100 : + getPercent(maxLengthLive, maxLength)); + pw.printf("Chunk fill rate excluding empty chunks: %d%%\n", + maxLengthNotEmpty == 0 ? 
100 : + getPercent(maxLengthLive, maxLengthNotEmpty)); + for (Entry e : chunks.entrySet()) { + Chunk c = e.getValue(); + long created = fileCreated + c.time; + pw.printf(" Chunk %d: %s, %d%% used, %d blocks", + c.id, formatTimestamp(created, fileCreated), + getPercent(c.maxLenLive, c.maxLen), + c.len + ); + if (c.maxLenLive == 0) { + pw.printf(", unused: %s", + formatTimestamp(fileCreated + c.unused, fileCreated)); + } + pw.printf("\n"); + } + pw.printf("\n"); + } catch (Exception e) { + pw.println("ERROR: " + e); + e.printStackTrace(pw); + return e.getMessage(); + } finally { + store.close(); + } + pw.flush(); + return null; + } + + private static String formatTimestamp(long t, long start) { + String x = new Timestamp(t).toString(); + String s = x.substring(0, 19); + s += " (+" + ((t - start) / 1000) + " s)"; + return s; + } + + private static int getPercent(long value, long max) { + if (value == 0) { + return 0; + } else if (value == max) { + return 100; + } + return (int) (1 + 98 * value / Math.max(1, max)); + } + + /** + * Compress the store by creating a new file and copying the live pages + * there. Temporarily, a file with the suffix ".tempFile" is created. This + * file is then renamed, replacing the original file, if possible. If not, + * the new file is renamed to ".newFile", then the old file is removed, and + * the new file is renamed. This might be interrupted, so it's better to + * compactCleanUp before opening a store, in case this method was used. 
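The rename protocol described above (write everything to a ".tempFile", atomically replace the original if the platform allows it, otherwise fall back to a ".newFile" plus delete-and-rename) can be sketched with plain java.nio.file. This is an illustration of the recovery-friendly ordering, not H2's implementation: the class and method names are invented, and java.nio.file stands in for H2's FileUtils.

```java
import java.io.IOException;
import java.nio.file.*;

public class CompactRename {
    // Suffixes mirror the ".tempFile" / ".newFile" convention described above.
    static final String TEMP = ".tempFile";
    static final String NEW = ".newFile";

    /** Replace 'file' with the already-written 'file + TEMP', surviving a crash at any step. */
    static void promote(Path file) throws IOException {
        Path temp = file.resolveSibling(file.getFileName() + TEMP);
        try {
            // Preferred path: a single atomic rename.
            Files.move(temp, file, StandardCopyOption.ATOMIC_MOVE,
                    StandardCopyOption.REPLACE_EXISTING);
        } catch (AtomicMoveNotSupportedException e) {
            // Fallback: temp -> new, delete old, new -> file.
            // After a crash at any of these steps, cleanup() below can finish the job.
            Path pending = file.resolveSibling(file.getFileName() + NEW);
            Files.deleteIfExists(pending);
            Files.move(temp, pending);
            Files.deleteIfExists(file);
            Files.move(pending, file);
        }
    }

    /** The equivalent of compactCleanUp above: run before opening the store. */
    static void cleanup(Path file) throws IOException {
        Files.deleteIfExists(file.resolveSibling(file.getFileName() + TEMP));
        Path pending = file.resolveSibling(file.getFileName() + NEW);
        if (Files.exists(pending)) {
            if (Files.exists(file)) {
                Files.deleteIfExists(pending); // old file survived; the copy is incomplete
            } else {
                Files.move(pending, file);     // crash happened between delete and rename
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("compact");
        Path db = dir.resolve("test.db");
        Files.write(db, new byte[] {1});
        Files.write(dir.resolve("test.db" + TEMP), new byte[] {2});
        promote(db);
        System.out.println(Files.readAllBytes(db)[0]); // the compacted content
    }
}
```

The key design point, preserved from the javadoc above, is that every intermediate state is recoverable: either the old file, the new file, or both exist, and cleanup() can tell which case it is in.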
+ * + * @param fileName the file name + * @param compress whether to compress the data + */ + public static void compact(String fileName, boolean compress) { + String tempName = fileName + Constants.SUFFIX_MV_STORE_TEMP_FILE; + FileUtils.delete(tempName); + compact(fileName, tempName, compress); + try { + FileUtils.moveAtomicReplace(tempName, fileName); + } catch (DbException e) { + String newName = fileName + Constants.SUFFIX_MV_STORE_NEW_FILE; + FileUtils.delete(newName); + FileUtils.move(tempName, newName); + FileUtils.delete(fileName); + FileUtils.move(newName, fileName); + } + } + + /** + * Clean up if needed, in case a compact operation was interrupted due to + * killing the process or a power failure. This will delete temporary files + * (if any), and in case atomic file replacements were not used, rename the + * new file. + * + * @param fileName the file name + */ + public static void compactCleanUp(String fileName) { + String tempName = fileName + Constants.SUFFIX_MV_STORE_TEMP_FILE; + if (FileUtils.exists(tempName)) { + FileUtils.delete(tempName); + } + String newName = fileName + Constants.SUFFIX_MV_STORE_NEW_FILE; + if (FileUtils.exists(newName)) { + if (FileUtils.exists(fileName)) { + FileUtils.delete(newName); + } else { + FileUtils.move(newName, fileName); + } + } + } + + /** + * Copy all live pages from the source store to the target store. + * + * @param sourceFileName the name of the source store + * @param targetFileName the name of the target store + * @param compress whether to compress the data + */ + public static void compact(String sourceFileName, String targetFileName, boolean compress) { + MVStore source = new MVStore.Builder(). + fileName(sourceFileName). + readOnly(). + open(); + FileUtils.delete(targetFileName); + MVStore.Builder b = new MVStore.Builder(). 
+ fileName(targetFileName); + if (compress) { + b.compress(); + } + MVStore target = b.open(); + compact(source, target); + target.close(); + source.close(); + } + + /** + * Copy all live pages from the source store to the target store. + * + * @param source the source store + * @param target the target store + */ + public static void compact(MVStore source, MVStore target) { + MVMap sourceMeta = source.getMetaMap(); + MVMap targetMeta = target.getMetaMap(); + for (Entry m : sourceMeta.entrySet()) { + String key = m.getKey(); + if (key.startsWith("chunk.")) { + // ignore + } else if (key.startsWith("map.")) { + // ignore + } else if (key.startsWith("name.")) { + // ignore + } else if (key.startsWith("root.")) { + // ignore + } else { + targetMeta.put(key, m.getValue()); + } + } + for (String mapName : source.getMapNames()) { + MVMap.Builder mp = + new MVMap.Builder<>(). + keyType(new GenericDataType()). + valueType(new GenericDataType()); + MVMap sourceMap = source.openMap(mapName, mp); + MVMap targetMap = target.openMap(mapName, mp); + targetMap.copyFrom(sourceMap); + } + } + + /** + * Repair a store by rolling back to the newest good version. + * + * @param fileName the file name + */ + public static void repair(String fileName) { + PrintWriter pw = new PrintWriter(System.out); + long version = Long.MAX_VALUE; + OutputStream ignore = new OutputStream() { + @Override + public void write(int b) throws IOException { + // ignore + } + }; + while (version >= 0) { + pw.println(version == Long.MAX_VALUE ? "Trying latest version" + : ("Trying version " + version)); + pw.flush(); + version = rollback(fileName, version, new PrintWriter(ignore)); + try { + String error = info(fileName + ".temp", new PrintWriter(ignore)); + if (error == null) { + FilePath.get(fileName).moveTo(FilePath.get(fileName + ".back"), true); + FilePath.get(fileName + ".temp").moveTo(FilePath.get(fileName), true); + pw.println("Success"); + break; + } + pw.println(" ... 
failed: " + error); + } catch (Exception e) { + pw.println("Fail: " + e.getMessage()); + pw.flush(); + } + version--; + } + pw.flush(); + } + + /** + * Roll back to a given revision into a a file called *.temp. + * + * @param fileName the file name + * @param targetVersion the version to roll back to (Long.MAX_VALUE for the + * latest version) + * @param writer the log writer + * @return the version rolled back to (-1 if no version) + */ + public static long rollback(String fileName, long targetVersion, Writer writer) { + long newestVersion = -1; + PrintWriter pw = new PrintWriter(writer, true); + if (!FilePath.get(fileName).exists()) { + pw.println("File not found: " + fileName); + return newestVersion; + } + FileChannel file = null; + FileChannel target = null; + int blockSize = MVStore.BLOCK_SIZE; + try { + file = FilePath.get(fileName).open("r"); + FilePath.get(fileName + ".temp").delete(); + target = FilePath.get(fileName + ".temp").open("rw"); + long fileSize = file.size(); + ByteBuffer block = ByteBuffer.allocate(4096); + Chunk newestChunk = null; + for (long pos = 0; pos < fileSize;) { + block.rewind(); + DataUtils.readFully(file, pos, block); + block.rewind(); + int headerType = block.get(); + if (headerType == 'H') { + block.rewind(); + target.write(block, pos); + pos += blockSize; + continue; + } + if (headerType != 'c') { + pos += blockSize; + continue; + } + Chunk c = null; + try { + c = Chunk.readChunkHeader(block, pos); + } catch (IllegalStateException e) { + pos += blockSize; + continue; + } + if (c.len <= 0) { + // not a chunk + pos += blockSize; + continue; + } + int length = c.len * MVStore.BLOCK_SIZE; + ByteBuffer chunk = ByteBuffer.allocate(length); + DataUtils.readFully(file, pos, chunk); + if (c.version > targetVersion) { + // newer than the requested version + pos += length; + continue; + } + chunk.rewind(); + target.write(chunk, pos); + if (newestChunk == null || c.version > newestChunk.version) { + newestChunk = c; + newestVersion = 
c.version; + } + pos += length; + } + int length = newestChunk.len * MVStore.BLOCK_SIZE; + ByteBuffer chunk = ByteBuffer.allocate(length); + DataUtils.readFully(file, newestChunk.block * MVStore.BLOCK_SIZE, chunk); + chunk.rewind(); + target.write(chunk, fileSize); + } catch (IOException e) { + pw.println("ERROR: " + e); + e.printStackTrace(pw); + } finally { + if (file != null) { + try { + file.close(); + } catch (IOException e) { + // ignore + } + } + if (target != null) { + try { + target.close(); + } catch (IOException e) { + // ignore + } + } + } + pw.flush(); + return newestVersion; + } + + /** + * A data type that can read any data that is persisted, and converts it to + * a byte array. + */ + static class GenericDataType implements DataType { + + @Override + public int compare(Object a, Object b) { + throw DataUtils.newUnsupportedOperationException("Can not compare"); + } + + @Override + public int getMemory(Object obj) { + return obj == null ? 0 : ((byte[]) obj).length * 8; + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (obj != null) { + buff.put((byte[]) obj); + } + } + + @Override + public void write(WriteBuffer buff, Object[] obj, int len, boolean key) { + for (Object o : obj) { + write(buff, o); + } + } + + @Override + public Object read(ByteBuffer buff) { + int len = buff.remaining(); + if (len == 0) { + return null; + } + byte[] data = new byte[len]; + buff.get(data); + return data; + } + + @Override + public void read(ByteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < obj.length; i++) { + obj[i] = read(buff); + } + } + + } + + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/OffHeapStore.java b/modules/h2/src/main/java/org/h2/mvstore/OffHeapStore.java new file mode 100644 index 0000000000000..e960cc64e0cfd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/OffHeapStore.java @@ -0,0 +1,148 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.nio.ByteBuffer; +import java.util.Iterator; +import java.util.Map.Entry; +import java.util.TreeMap; + +/** + * A storage mechanism that "persists" data in the off-heap area of the main + * memory. + */ +public class OffHeapStore extends FileStore { + + private final TreeMap memory = + new TreeMap<>(); + + @Override + public void open(String fileName, boolean readOnly, char[] encryptionKey) { + memory.clear(); + } + + @Override + public String toString() { + return memory.toString(); + } + + @Override + public ByteBuffer readFully(long pos, int len) { + Entry memEntry = memory.floorEntry(pos); + if (memEntry == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_READING_FAILED, + "Could not read from position {0}", pos); + } + readCount.incrementAndGet(); + readBytes.addAndGet(len); + ByteBuffer buff = memEntry.getValue(); + ByteBuffer read = buff.duplicate(); + int offset = (int) (pos - memEntry.getKey()); + read.position(offset); + read.limit(len + offset); + return read.slice(); + } + + @Override + public void free(long pos, int length) { + freeSpace.free(pos, length); + ByteBuffer buff = memory.remove(pos); + if (buff == null) { + // nothing was written (just allocated) + } else if (buff.remaining() != length) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_READING_FAILED, + "Partial remove is not supported at position {0}", pos); + } + } + + @Override + public void writeFully(long pos, ByteBuffer src) { + fileSize = Math.max(fileSize, pos + src.remaining()); + Entry mem = memory.floorEntry(pos); + if (mem == null) { + // not found: create a new entry + writeNewEntry(pos, src); + return; + } + long prevPos = mem.getKey(); + ByteBuffer buff = mem.getValue(); + int prevLength = buff.capacity(); + int length = src.remaining(); + if (prevPos == pos) { + if 
(prevLength != length) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_READING_FAILED, + "Could not write to position {0}; " + + "partial overwrite is not supported", pos); + } + writeCount.incrementAndGet(); + writeBytes.addAndGet(length); + buff.rewind(); + buff.put(src); + return; + } + if (prevPos + prevLength > pos) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_READING_FAILED, + "Could not write to position {0}; " + + "partial overwrite is not supported", pos); + } + writeNewEntry(pos, src); + } + + private void writeNewEntry(long pos, ByteBuffer src) { + int length = src.remaining(); + writeCount.incrementAndGet(); + writeBytes.addAndGet(length); + ByteBuffer buff = ByteBuffer.allocateDirect(length); + buff.put(src); + buff.rewind(); + memory.put(pos, buff); + } + + @Override + public void truncate(long size) { + writeCount.incrementAndGet(); + if (size == 0) { + fileSize = 0; + memory.clear(); + return; + } + fileSize = size; + for (Iterator it = memory.keySet().iterator(); it.hasNext();) { + long pos = it.next(); + if (pos < size) { + break; + } + ByteBuffer buff = memory.get(pos); + if (buff.capacity() > size) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_READING_FAILED, + "Could not truncate to {0}; " + + "partial truncate is not supported", pos); + } + it.remove(); + } + } + + @Override + public void close() { + memory.clear(); + } + + @Override + public void sync() { + // nothing to do + } + + @Override + public int getDefaultRetentionTime() { + return 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/Page.java b/modules/h2/src/main/java/org/h2/mvstore/Page.java new file mode 100644 index 0000000000000..a30dee4a1c1a4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/Page.java @@ -0,0 +1,1132 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.nio.ByteBuffer; +import java.util.HashSet; +import org.h2.compress.Compressor; +import org.h2.mvstore.type.DataType; +import org.h2.util.Utils; + +/** + * A page (a node or a leaf). + *
    + * For b-tree nodes, the key at a given index is larger than the largest key of + * the child at the same index. + *
    + * File format: + * page length (including length): int + * check value: short + * map id: varInt + * number of keys: varInt + * type: byte (0: leaf, 1: node; +2: compressed) + * compressed: bytes saved (varInt) + * keys + * leaf: values (one for each key) + * node: children (1 more than keys) + */ +public class Page { + + /** + * An empty object array. + */ + public static final Object[] EMPTY_OBJECT_ARRAY = new Object[0]; + private static final int IN_MEMORY = Integer.MIN_VALUE; + + private final MVMap map; + private long version; + private long pos; + + /** + * The total entry count of this page and all children. + */ + private long totalCount; + + /** + * The last result of a find operation is cached. + */ + private int cachedCompare; + + /** + * The estimated memory used in persistent case, IN_MEMORY marker value otherwise. + */ + private int memory; + + /** + * The keys. + *
    + * The array might be larger than needed, to avoid frequent re-sizing. + */ + private Object[] keys; + + /** + * The values. + *
    + * The array might be larger than needed, to avoid frequent re-sizing. + */ + private Object[] values; + + /** + * The child page references. + *
    + * The array might be larger than needed, to avoid frequent re-sizing. + */ + private PageReference[] children; + + /** + * Whether the page is an in-memory (not stored, or not yet stored) page, + * and it is removed. This is to keep track of pages that concurrently + * changed while they are being stored, in which case the live bookkeeping + * needs to be aware of such cases. + */ + private volatile boolean removedInMemory; + + Page(MVMap map, long version) { + this.map = map; + this.version = version; + } + + /** + * Create a new, empty page. + * + * @param map the map + * @param version the version + * @return the new page + */ + static Page createEmpty(MVMap map, long version) { + return create(map, version, + EMPTY_OBJECT_ARRAY, EMPTY_OBJECT_ARRAY, + null, + 0, DataUtils.PAGE_MEMORY); + } + + /** + * Create a new page. The arrays are not cloned. + * + * @param map the map + * @param version the version + * @param keys the keys + * @param values the values + * @param children the child page positions + * @param totalCount the total number of keys + * @param memory the memory used in bytes + * @return the page + */ + public static Page create(MVMap map, long version, + Object[] keys, Object[] values, PageReference[] children, + long totalCount, int memory) { + Page p = new Page(map, version); + // the position is 0 + p.keys = keys; + p.values = values; + p.children = children; + p.totalCount = totalCount; + MVStore store = map.store; + if(store.getFileStore() == null) { + p.memory = IN_MEMORY; + } else if (memory == 0) { + p.recalculateMemory(); + } else { + p.addMemory(memory); + } + if(store.getFileStore() != null) { + store.registerUnsavedPage(p.memory); + } + return p; + } + + /** + * Create a copy of a page. 
+ * + * @param map the map + * @param version the version + * @param source the source page + * @return the page + */ + public static Page create(MVMap map, long version, Page source) { + return create(map, version, source.keys, source.values, source.children, + source.totalCount, source.memory); + } + + /** + * Read a page. + * + * @param fileStore the file store + * @param pos the position + * @param map the map + * @param filePos the position in the file + * @param maxPos the maximum position (the end of the chunk) + * @return the page + */ + static Page read(FileStore fileStore, long pos, MVMap map, + long filePos, long maxPos) { + ByteBuffer buff; + int maxLength = DataUtils.getPageMaxLength(pos); + if (maxLength == DataUtils.PAGE_LARGE) { + buff = fileStore.readFully(filePos, 128); + maxLength = buff.getInt(); + // read the first bytes again + } + maxLength = (int) Math.min(maxPos - filePos, maxLength); + int length = maxLength; + if (length < 0) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "Illegal page length {0} reading at {1}; max pos {2} ", + length, filePos, maxPos); + } + buff = fileStore.readFully(filePos, length); + Page p = new Page(map, 0); + p.pos = pos; + int chunkId = DataUtils.getPageChunkId(pos); + int offset = DataUtils.getPageOffset(pos); + p.read(buff, chunkId, offset, maxLength); + return p; + } + + /** + * Get the key at the given index. + * + * @param index the index + * @return the key + */ + public Object getKey(int index) { + return keys[index]; + } + + /** + * Get the child page at the given index. + * + * @param index the index + * @return the child page + */ + public Page getChildPage(int index) { + PageReference ref = children[index]; + return ref.page != null ? ref.page : map.readPage(ref.pos); + } + + /** + * Get the position of the child. 
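A page position, as returned by getChildPagePos above and decoded by DataUtils.getPageChunkId / getPageOffset in the dump code earlier, packs several fields into one long. The bit widths below are made up for illustration — H2's DataUtils defines its own layout (which also encodes a length code) — but the pack/unpack pattern is the same:

```java
public class PagePos {
    // Illustrative layout (NOT H2's actual bit widths):
    // [chunkId: 30 bits][offset: 32 bits][type: 1 bit] packed into a long.
    static long encode(int chunkId, int offset, int type) {
        return ((long) chunkId << 33) | ((long) offset << 1) | type;
    }

    static int chunkId(long pos) { return (int) (pos >>> 33); }
    static int offset(long pos)  { return (int) ((pos >>> 1) & 0xffffffffL); }
    static int type(long pos)    { return (int) (pos & 1); }

    public static void main(String[] args) {
        long pos = encode(7, 4096, 1);
        // Unpacking recovers the original fields:
        System.out.println(chunkId(pos) + " " + offset(pos) + " " + type(pos)); // 7 4096 1
    }
}
```

Storing a single long instead of an object reference is what lets PageReference keep a child either loaded (page != null) or on disk (pos only), as getChildPage above shows.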
+ * + * @param index the index + * @return the position + */ + public long getChildPagePos(int index) { + return children[index].pos; + } + + /** + * Get the value at the given index. + * + * @param index the index + * @return the value + */ + public Object getValue(int index) { + return values[index]; + } + + /** + * Get the number of keys in this page. + * + * @return the number of keys + */ + public int getKeyCount() { + return keys.length; + } + + /** + * Check whether this is a leaf page. + * + * @return true if it is a leaf + */ + public boolean isLeaf() { + return children == null; + } + + /** + * Get the position of the page + * + * @return the position + */ + public long getPos() { + return pos; + } + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + buff.append("id: ").append(System.identityHashCode(this)).append('\n'); + buff.append("version: ").append(Long.toHexString(version)).append('\n'); + buff.append("pos: ").append(Long.toHexString(pos)).append('\n'); + if (pos != 0) { + int chunkId = DataUtils.getPageChunkId(pos); + buff.append("chunk: ").append(Long.toHexString(chunkId)).append('\n'); + } + for (int i = 0; i <= keys.length; i++) { + if (i > 0) { + buff.append(" "); + } + if (children != null) { + buff.append('[').append(Long.toHexString(children[i].pos)).append("] "); + } + if (i < keys.length) { + buff.append(keys[i]); + if (values != null) { + buff.append(':'); + buff.append(values[i]); + } + } + } + return buff.toString(); + } + + /** + * Create a copy of this page. + * + * @param version the new version + * @return a page with the given version + */ + public Page copy(long version) { + Page newPage = create(map, version, + keys, values, + children, totalCount, + memory); + // mark the old as deleted + removePage(); + newPage.cachedCompare = cachedCompare; + return newPage; + } + + /** + * Search the key in this page using a binary search. 
Instead of always + * starting the search in the middle, the last found index is cached. + *
    + * If the key was found, the returned value is the index in the key array. + * If not found, the returned value is negative, where -1 means the provided + * key is smaller than any keys in this page. See also Arrays.binarySearch. + * + * @param key the key + * @return the value or null + */ + public int binarySearch(Object key) { + int low = 0, high = keys.length - 1; + // the cached index minus one, so that + // for the first time (when cachedCompare is 0), + // the default value is used + int x = cachedCompare - 1; + if (x < 0 || x > high) { + x = high >>> 1; + } + Object[] k = keys; + while (low <= high) { + int compare = map.compare(key, k[x]); + if (compare > 0) { + low = x + 1; + } else if (compare < 0) { + high = x - 1; + } else { + cachedCompare = x + 1; + return x; + } + x = (low + high) >>> 1; + } + cachedCompare = low; + return -(low + 1); + + // regular binary search (without caching) + // int low = 0, high = keys.length - 1; + // while (low <= high) { + // int x = (low + high) >>> 1; + // int compare = map.compare(key, keys[x]); + // if (compare > 0) { + // low = x + 1; + // } else if (compare < 0) { + // high = x - 1; + // } else { + // return x; + // } + // } + // return -(low + 1); + } + + /** + * Split the page. This modifies the current page. + * + * @param at the split index + * @return the page with the entries after the split index + */ + Page split(int at) { + Page page = isLeaf() ? 
splitLeaf(at) : splitNode(at); + if(isPersistent()) { + recalculateMemory(); + } + return page; + } + + private Page splitLeaf(int at) { + int b = keys.length - at; + Object[] aKeys = new Object[at]; + Object[] bKeys = new Object[b]; + System.arraycopy(keys, 0, aKeys, 0, at); + System.arraycopy(keys, at, bKeys, 0, b); + keys = aKeys; + Object[] aValues = new Object[at]; + Object[] bValues = new Object[b]; + System.arraycopy(values, 0, aValues, 0, at); + System.arraycopy(values, at, bValues, 0, b); + values = aValues; + totalCount = at; + return create(map, version, + bKeys, bValues, + null, + b, 0); + } + + private Page splitNode(int at) { + int b = keys.length - at; + + Object[] aKeys = new Object[at]; + Object[] bKeys = new Object[b - 1]; + System.arraycopy(keys, 0, aKeys, 0, at); + System.arraycopy(keys, at + 1, bKeys, 0, b - 1); + keys = aKeys; + + PageReference[] aChildren = new PageReference[at + 1]; + PageReference[] bChildren = new PageReference[b]; + System.arraycopy(children, 0, aChildren, 0, at + 1); + System.arraycopy(children, at + 1, bChildren, 0, b); + children = aChildren; + + long t = 0; + for (PageReference x : aChildren) { + t += x.count; + } + totalCount = t; + t = 0; + for (PageReference x : bChildren) { + t += x.count; + } + return create(map, version, + bKeys, null, + bChildren, + t, 0); + } + + /** + * Get the total number of key-value pairs, including child pages. + * + * @return the number of key-value pairs + */ + public long getTotalCount() { + if (MVStore.ASSERT) { + long check = 0; + if (isLeaf()) { + check = keys.length; + } else { + for (PageReference x : children) { + check += x.count; + } + } + if (check != totalCount) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, + "Expected: {0} got: {1}", check, totalCount); + } + } + return totalCount; + } + + /** + * Get the descendant counts for the given child. 
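The binarySearch method above biases the starting pivot toward the last hit (cachedCompare) to speed up near-sequential key access, while keeping the same negative-insertion-point convention as Arrays.binarySearch. A standalone rendition over an int[] — a sketch for illustration, assuming a single-threaded caller, not the Page code itself:

```java
public class CachedSearch {
    private int cachedCompare; // last low + 1, as in the page code above

    int search(int[] keys, int key) {
        int low = 0, high = keys.length - 1;
        int x = cachedCompare - 1;      // start near the previous hit
        if (x < 0 || x > high) {
            x = high >>> 1;             // fall back to the middle
        }
        while (low <= high) {
            int k = keys[x];
            if (key > k) {
                low = x + 1;
            } else if (key < k) {
                high = x - 1;
            } else {
                cachedCompare = x + 1;
                return x;
            }
            x = (low + high) >>> 1;
        }
        cachedCompare = low;
        return -(low + 1);              // same convention as Arrays.binarySearch
    }

    public static void main(String[] args) {
        int[] keys = {2, 4, 6, 8};
        CachedSearch s = new CachedSearch();
        System.out.println(s.search(keys, 6));  // 2
        System.out.println(s.search(keys, 8));  // 3 (pivot starts next to the last hit)
        System.out.println(s.search(keys, 5));  // -3: 5 would be inserted at index 2
    }
}
```

The worst case stays O(log n) because after the first probe the loop degenerates into a regular binary search over the remaining range.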
+ * + * @param index the child index + * @return the descendant count + */ + long getCounts(int index) { + return children[index].count; + } + + /** + * Replace the child page. + * + * @param index the index + * @param c the new child page + */ + public void setChild(int index, Page c) { + if (c == null) { + long oldCount = children[index].count; + // this is slightly slower: + // children = Arrays.copyOf(children, children.length); + children = children.clone(); + PageReference ref = new PageReference(null, 0, 0); + children[index] = ref; + totalCount -= oldCount; + } else if (c != children[index].page || + c.getPos() != children[index].pos) { + long oldCount = children[index].count; + // this is slightly slower: + // children = Arrays.copyOf(children, children.length); + children = children.clone(); + PageReference ref = new PageReference(c, c.pos, c.totalCount); + children[index] = ref; + totalCount += c.totalCount - oldCount; + } + } + + /** + * Replace the key at an index in this page. + * + * @param index the index + * @param key the new key + */ + public void setKey(int index, Object key) { + // this is slightly slower: + // keys = Arrays.copyOf(keys, keys.length); + keys = keys.clone(); + if(isPersistent()) { + Object old = keys[index]; + DataType keyType = map.getKeyType(); + int mem = keyType.getMemory(key); + if (old != null) { + mem -= keyType.getMemory(old); + } + addMemory(mem); + } + keys[index] = key; + } + + /** + * Replace the value at an index in this page. 
+ * + * @param index the index + * @param value the new value + * @return the old value + */ + public Object setValue(int index, Object value) { + Object old = values[index]; + // this is slightly slower: + // values = Arrays.copyOf(values, values.length); + values = values.clone(); + DataType valueType = map.getValueType(); + if(isPersistent()) { + addMemory(valueType.getMemory(value) - + valueType.getMemory(old)); + } + values[index] = value; + return old; + } + + /** + * Remove this page and all child pages. + */ + void removeAllRecursive() { + if (children != null) { + for (int i = 0, size = map.getChildPageCount(this); i < size; i++) { + PageReference ref = children[i]; + if (ref.page != null) { + ref.page.removeAllRecursive(); + } else { + long c = children[i].pos; + int type = DataUtils.getPageType(c); + if (type == DataUtils.PAGE_TYPE_LEAF) { + int mem = DataUtils.getPageMaxLength(c); + map.removePage(c, mem); + } else { + map.readPage(c).removeAllRecursive(); + } + } + } + } + removePage(); + } + + /** + * Insert a key-value pair into this leaf. + * + * @param index the index + * @param key the key + * @param value the value + */ + public void insertLeaf(int index, Object key, Object value) { + int len = keys.length + 1; + Object[] newKeys = new Object[len]; + DataUtils.copyWithGap(keys, newKeys, len - 1, index); + keys = newKeys; + Object[] newValues = new Object[len]; + DataUtils.copyWithGap(values, newValues, len - 1, index); + values = newValues; + keys[index] = key; + values[index] = value; + totalCount++; + if(isPersistent()) { + addMemory(map.getKeyType().getMemory(key) + + map.getValueType().getMemory(value)); + } + } + + /** + * Insert a child page into this node. 
+ * + * @param index the index + * @param key the key + * @param childPage the child page + */ + public void insertNode(int index, Object key, Page childPage) { + + Object[] newKeys = new Object[keys.length + 1]; + DataUtils.copyWithGap(keys, newKeys, keys.length, index); + newKeys[index] = key; + keys = newKeys; + + int childCount = children.length; + PageReference[] newChildren = new PageReference[childCount + 1]; + DataUtils.copyWithGap(children, newChildren, childCount, index); + newChildren[index] = new PageReference( + childPage, childPage.getPos(), childPage.totalCount); + children = newChildren; + + totalCount += childPage.totalCount; + if(isPersistent()) { + addMemory(map.getKeyType().getMemory(key) + + DataUtils.PAGE_MEMORY_CHILD); + } + } + + /** + * Remove the key and value (or child) at the given index. + * + * @param index the index + */ + public void remove(int index) { + int keyLength = keys.length; + int keyIndex = index >= keyLength ? index - 1 : index; + if(isPersistent()) { + Object old = keys[keyIndex]; + addMemory(-map.getKeyType().getMemory(old)); + } + Object[] newKeys = new Object[keyLength - 1]; + DataUtils.copyExcept(keys, newKeys, keyLength, keyIndex); + keys = newKeys; + + if (values != null) { + if(isPersistent()) { + Object old = values[index]; + addMemory(-map.getValueType().getMemory(old)); + } + Object[] newValues = new Object[keyLength - 1]; + DataUtils.copyExcept(values, newValues, keyLength, index); + values = newValues; + totalCount--; + } + if (children != null) { + if(isPersistent()) { + addMemory(-DataUtils.PAGE_MEMORY_CHILD); + } + long countOffset = children[index].count; + + int childCount = children.length; + PageReference[] newChildren = new PageReference[childCount - 1]; + DataUtils.copyExcept(children, newChildren, childCount, index); + children = newChildren; + + totalCount -= countOffset; + } + } + + /** + * Read the page from the buffer. 
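The read path below rejects a page when a 16-bit check value — an XOR over check values derived from the chunk id, the page offset, and the page length — does not match what write() stored. A toy version of that validation; the 16-bit folding helper here is an assumed shape, not necessarily DataUtils.getCheckValue's exact code:

```java
public class PageCheck {
    // Fold an int to 16 bits; assumed implementation, for illustration only.
    static short checkValue(int x) {
        return (short) ((x >> 16) ^ x);
    }

    // The combined page check, mirroring the XOR of chunk id, offset and length.
    static short pageCheck(int chunkId, int offset, int pageLength) {
        return (short) (checkValue(chunkId) ^ checkValue(offset) ^ checkValue(pageLength));
    }

    public static void main(String[] args) {
        short written = pageCheck(3, 8192, 512);
        // Reading the same page back passes; a shifted offset fails:
        System.out.println(written == pageCheck(3, 8192, 512)); // true
        System.out.println(written == pageCheck(3, 8200, 512)); // false
    }
}
```

Because the offset and length participate in the check, a page that is accidentally read from the wrong file position fails validation even if its bytes are internally consistent.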
+ * + * @param buff the buffer + * @param chunkId the chunk id + * @param offset the offset within the chunk + * @param maxLength the maximum length + */ + void read(ByteBuffer buff, int chunkId, int offset, int maxLength) { + int start = buff.position(); + int pageLength = buff.getInt(); + if (pageLength > maxLength || pageLength < 4) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupted in chunk {0}, expected page length 4..{1}, got {2}", + chunkId, maxLength, pageLength); + } + buff.limit(start + pageLength); + short check = buff.getShort(); + int mapId = DataUtils.readVarInt(buff); + if (mapId != map.getId()) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupted in chunk {0}, expected map id {1}, got {2}", + chunkId, map.getId(), mapId); + } + int checkTest = DataUtils.getCheckValue(chunkId) + ^ DataUtils.getCheckValue(offset) + ^ DataUtils.getCheckValue(pageLength); + if (check != (short) checkTest) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupted in chunk {0}, expected check value {1}, got {2}", + chunkId, checkTest, check); + } + int len = DataUtils.readVarInt(buff); + keys = new Object[len]; + int type = buff.get(); + boolean node = (type & 1) == DataUtils.PAGE_TYPE_NODE; + if (node) { + children = new PageReference[len + 1]; + long[] p = new long[len + 1]; + for (int i = 0; i <= len; i++) { + p[i] = buff.getLong(); + } + long total = 0; + for (int i = 0; i <= len; i++) { + long s = DataUtils.readVarLong(buff); + total += s; + children[i] = new PageReference(null, p[i], s); + } + totalCount = total; + } + boolean compressed = (type & DataUtils.PAGE_COMPRESSED) != 0; + if (compressed) { + Compressor compressor; + if ((type & DataUtils.PAGE_COMPRESSED_HIGH) == + DataUtils.PAGE_COMPRESSED_HIGH) { + compressor = map.getStore().getCompressorHigh(); + } else { + compressor = map.getStore().getCompressorFast(); + } + int lenAdd = 
DataUtils.readVarInt(buff); + int compLen = pageLength + start - buff.position(); + byte[] comp = Utils.newBytes(compLen); + buff.get(comp); + int l = compLen + lenAdd; + buff = ByteBuffer.allocate(l); + compressor.expand(comp, 0, compLen, buff.array(), + buff.arrayOffset(), l); + } + map.getKeyType().read(buff, keys, len, true); + if (!node) { + values = new Object[len]; + map.getValueType().read(buff, values, len, false); + totalCount = len; + } + recalculateMemory(); + } + + /** + * Store the page and update the position. + * + * @param chunk the chunk + * @param buff the target buffer + * @return the position of the buffer just after the type + */ + private int write(Chunk chunk, WriteBuffer buff) { + int start = buff.position(); + int len = keys.length; + int type = children != null ? DataUtils.PAGE_TYPE_NODE + : DataUtils.PAGE_TYPE_LEAF; + buff.putInt(0). + putShort((byte) 0). + putVarInt(map.getId()). + putVarInt(len); + int typePos = buff.position(); + buff.put((byte) type); + if (type == DataUtils.PAGE_TYPE_NODE) { + writeChildren(buff); + for (int i = 0; i <= len; i++) { + buff.putVarLong(children[i].count); + } + } + int compressStart = buff.position(); + map.getKeyType().write(buff, keys, len, true); + if (type == DataUtils.PAGE_TYPE_LEAF) { + map.getValueType().write(buff, values, len, false); + } + MVStore store = map.getStore(); + int expLen = buff.position() - compressStart; + if (expLen > 16) { + int compressionLevel = store.getCompressionLevel(); + if (compressionLevel > 0) { + Compressor compressor; + int compressType; + if (compressionLevel == 1) { + compressor = map.getStore().getCompressorFast(); + compressType = DataUtils.PAGE_COMPRESSED; + } else { + compressor = map.getStore().getCompressorHigh(); + compressType = DataUtils.PAGE_COMPRESSED_HIGH; + } + byte[] exp = new byte[expLen]; + buff.position(compressStart).get(exp); + byte[] comp = new byte[expLen * 2]; + int compLen = compressor.compress(exp, expLen, comp, 0); + int plus = 
DataUtils.getVarIntLen(compLen - expLen);
+                if (compLen + plus < expLen) {
+                    buff.position(typePos).
+                        put((byte) (type + compressType));
+                    buff.position(compressStart).
+                        putVarInt(expLen - compLen).
+                        put(comp, 0, compLen);
+                }
+            }
+        }
+        int pageLength = buff.position() - start;
+        int chunkId = chunk.id;
+        int check = DataUtils.getCheckValue(chunkId)
+                ^ DataUtils.getCheckValue(start)
+                ^ DataUtils.getCheckValue(pageLength);
+        buff.putInt(start, pageLength).
+            putShort(start + 4, (short) check);
+        if (pos != 0) {
+            throw DataUtils.newIllegalStateException(
+                    DataUtils.ERROR_INTERNAL, "Page already stored");
+        }
+        pos = DataUtils.getPagePos(chunkId, start, pageLength, type);
+        store.cachePage(pos, this, getMemory());
+        if (type == DataUtils.PAGE_TYPE_NODE) {
+            // cache again - this will make sure nodes stay in the cache
+            // for a longer time
+            store.cachePage(pos, this, getMemory());
+        }
+        long max = DataUtils.getPageMaxLength(pos);
+        chunk.maxLen += max;
+        chunk.maxLenLive += max;
+        chunk.pageCount++;
+        chunk.pageCountLive++;
+        if (removedInMemory) {
+            // if the page was removed _before_ the position was assigned, we
+            // need to mark it removed here, so the fields are updated
+            // when the next chunk is stored
+            map.removePage(pos, memory);
+        }
+        return typePos + 1;
+    }
+
+    private void writeChildren(WriteBuffer buff) {
+        int len = keys.length;
+        for (int i = 0; i <= len; i++) {
+            buff.putLong(children[i].pos);
+        }
+    }
+
+    /**
+     * Store this page and all children that are changed, in reverse order, and
+     * update the position and the children.
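The write() path above seals each page header with a 16-bit check value XORed together from the chunk id, the write offset, and the page length; read() recomputes the same value and rejects the page on mismatch. A hedged sketch of the idea — getCheckValue here is an assumed stand-in, and the actual fold in DataUtils may differ:

```java
public class CheckValueDemo {

    // Fold an int into 16 bits by XOR-ing its halves (assumed stand-in
    // for DataUtils.getCheckValue; the exact upstream fold may differ).
    static short getCheckValue(int x) {
        return (short) ((x >> 16) ^ x);
    }

    // Combine the three header fields the page check is derived from.
    static short pageCheck(int chunkId, int offset, int pageLength) {
        return (short) (getCheckValue(chunkId)
                ^ getCheckValue(offset)
                ^ getCheckValue(pageLength));
    }

    public static void main(String[] args) {
        short stored = pageCheck(7, 4096, 1234);
        // Re-reading the same header fields reproduces the check value...
        System.out.println(stored == pageCheck(7, 4096, 1234)); // true
        // ...while a corrupted page length is detected.
        System.out.println(stored == pageCheck(7, 4096, 1235)); // false
    }
}
```

A 16-bit XOR check is cheap and catches most single-field corruption, which is all this format aims for; it is not a cryptographic integrity check.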
+ * + * @param chunk the chunk + * @param buff the target buffer + */ + void writeUnsavedRecursive(Chunk chunk, WriteBuffer buff) { + if (pos != 0) { + // already stored before + return; + } + int patch = write(chunk, buff); + if (!isLeaf()) { + int len = children.length; + for (int i = 0; i < len; i++) { + Page p = children[i].page; + if (p != null) { + p.writeUnsavedRecursive(chunk, buff); + children[i] = new PageReference(p, p.getPos(), p.totalCount); + } + } + int old = buff.position(); + buff.position(patch); + writeChildren(buff); + buff.position(old); + } + } + + /** + * Unlink the children recursively after all data is written. + */ + void writeEnd() { + if (isLeaf()) { + return; + } + int len = children.length; + for (int i = 0; i < len; i++) { + PageReference ref = children[i]; + if (ref.page != null) { + if (ref.page.getPos() == 0) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, "Page not written"); + } + ref.page.writeEnd(); + children[i] = new PageReference(null, ref.pos, ref.count); + } + } + } + + long getVersion() { + return version; + } + + public int getRawChildPageCount() { + return children.length; + } + + @Override + public boolean equals(Object other) { + if (other == this) { + return true; + } + if (other instanceof Page) { + if (pos != 0 && ((Page) other).pos == pos) { + return true; + } + return this == other; + } + return false; + } + + @Override + public int hashCode() { + return pos != 0 ? 
(int) (pos | (pos >>> 32)) : super.hashCode(); + } + + private boolean isPersistent() { + return memory != IN_MEMORY; + } + + public int getMemory() { + if (isPersistent()) { + if (MVStore.ASSERT) { + int mem = memory; + recalculateMemory(); + if (mem != memory) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, "Memory calculation error"); + } + } + return memory; + } + return getKeyCount(); + } + + private void addMemory(int mem) { + memory += mem; + } + + private void recalculateMemory() { + int mem = DataUtils.PAGE_MEMORY; + DataType keyType = map.getKeyType(); + for (Object key : keys) { + mem += keyType.getMemory(key); + } + if (this.isLeaf()) { + DataType valueType = map.getValueType(); + for (int i = 0; i < keys.length; i++) { + mem += valueType.getMemory(values[i]); + } + } else { + mem += this.getRawChildPageCount() * DataUtils.PAGE_MEMORY_CHILD; + } + addMemory(mem - memory); + } + + void setVersion(long version) { + this.version = version; + } + + /** + * Remove the page. + */ + public void removePage() { + if(isPersistent()) { + long p = pos; + if (p == 0) { + removedInMemory = true; + } + map.removePage(p, memory); + } + } + + /** + * A pointer to a page, either in-memory or using a page position. + */ + public static class PageReference { + + /** + * The position, if known, or 0. + */ + final long pos; + + /** + * The page, if in memory, or null. + */ + final Page page; + + /** + * The descendant count for this child page. + */ + final long count; + + public PageReference(Page page, long pos, long count) { + this.page = page; + this.pos = pos; + this.count = count; + } + + } + + /** + * Contains information about which other pages are referenced (directly or + * indirectly) by the given page. This is a subset of the page data, for + * pages of type node. This information is used for garbage collection (to + * quickly find out which chunks are still in use). 
+     */
+    public static class PageChildren {
+
+        /**
+         * An empty array of type long.
+         */
+        public static final long[] EMPTY_ARRAY = new long[0];
+
+        /**
+         * The position of the page.
+         */
+        final long pos;
+
+        /**
+         * The page positions of (direct or indirect) children. Depending on the
+         * use case, this can be the complete list, or only a subset of all
+         * children, for example only one reference to a child in another
+         * chunk.
+         */
+        long[] children;
+
+        /**
+         * Whether this object only contains the list of chunks.
+         */
+        boolean chunkList;
+
+        private PageChildren(long pos, long[] children) {
+            this.pos = pos;
+            this.children = children;
+        }
+
+        PageChildren(Page p) {
+            this.pos = p.getPos();
+            int count = p.getRawChildPageCount();
+            this.children = new long[count];
+            for (int i = 0; i < count; i++) {
+                children[i] = p.getChildPagePos(i);
+            }
+        }
+
+        int getMemory() {
+            return 64 + 8 * children.length;
+        }
+
+        /**
+         * Read an inner node page from the buffer, but ignore the keys and
+         * values.
+ * + * @param fileStore the file store + * @param pos the position + * @param mapId the map id + * @param filePos the position in the file + * @param maxPos the maximum position (the end of the chunk) + * @return the page children object + */ + static PageChildren read(FileStore fileStore, long pos, int mapId, + long filePos, long maxPos) { + ByteBuffer buff; + int maxLength = DataUtils.getPageMaxLength(pos); + if (maxLength == DataUtils.PAGE_LARGE) { + buff = fileStore.readFully(filePos, 128); + maxLength = buff.getInt(); + // read the first bytes again + } + maxLength = (int) Math.min(maxPos - filePos, maxLength); + int length = maxLength; + if (length < 0) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "Illegal page length {0} reading at {1}; max pos {2} ", + length, filePos, maxPos); + } + buff = fileStore.readFully(filePos, length); + int chunkId = DataUtils.getPageChunkId(pos); + int offset = DataUtils.getPageOffset(pos); + int start = buff.position(); + int pageLength = buff.getInt(); + if (pageLength > maxLength) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupted in chunk {0}, expected page length =< {1}, got {2}", + chunkId, maxLength, pageLength); + } + buff.limit(start + pageLength); + short check = buff.getShort(); + int m = DataUtils.readVarInt(buff); + if (m != mapId) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupted in chunk {0}, expected map id {1}, got {2}", + chunkId, mapId, m); + } + int checkTest = DataUtils.getCheckValue(chunkId) + ^ DataUtils.getCheckValue(offset) + ^ DataUtils.getCheckValue(pageLength); + if (check != (short) checkTest) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, + "File corrupted in chunk {0}, expected check value {1}, got {2}", + chunkId, checkTest, check); + } + int len = DataUtils.readVarInt(buff); + int type = buff.get(); + boolean node = (type & 1) == 
DataUtils.PAGE_TYPE_NODE;
+            if (!node) {
+                return null;
+            }
+            long[] children = new long[len + 1];
+            for (int i = 0; i <= len; i++) {
+                children[i] = buff.getLong();
+            }
+            return new PageChildren(pos, children);
+        }
+
+        /**
+         * Only keep one reference to the same chunk. Only leaf references are
+         * removed (references to inner nodes are not removed, as they could
+         * indirectly point to other chunks).
+         */
+        void removeDuplicateChunkReferences() {
+            HashSet<Integer> chunks = new HashSet<>();
+            // we don't need references to leaves in the same chunk
+            chunks.add(DataUtils.getPageChunkId(pos));
+            for (int i = 0; i < children.length; i++) {
+                long p = children[i];
+                int chunkId = DataUtils.getPageChunkId(p);
+                boolean wasNew = chunks.add(chunkId);
+                if (DataUtils.getPageType(p) == DataUtils.PAGE_TYPE_NODE) {
+                    continue;
+                }
+                if (wasNew) {
+                    continue;
+                }
+                removeChild(i--);
+            }
+        }
+
+        private void removeChild(int index) {
+            if (index == 0 && children.length == 1) {
+                children = EMPTY_ARRAY;
+                return;
+            }
+            long[] c2 = new long[children.length - 1];
+            DataUtils.copyExcept(children, c2, children.length, index);
+            children = c2;
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/mvstore/StreamStore.java b/modules/h2/src/main/java/org/h2/mvstore/StreamStore.java
new file mode 100644
index 0000000000000..cfa1aaacafe6b
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/mvstore/StreamStore.java
@@ -0,0 +1,582 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.mvstore;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * A facility to store streams in a map.
+ * Streams are split into blocks, which
+ * are stored in a map. Very small streams are inlined in the stream id.
+ * <p>
+ * The key of the map is a long (incremented for each stored block). The default
+ * initial value is 0. Before storing blocks into the map, the stream store
+ * checks if there is already a block with the next key, and if necessary
+ * searches the next free entry using a binary search (0 to Long.MAX_VALUE).
+ * <p>
+ * Before storing
+ * <p>
+ * The format of the binary id is: An empty id represents 0 bytes of data.
+ * In-place data is encoded as 0, the size (a variable size int), then the data.
+ * A stored block is encoded as 1, the length of the block (a variable size
+ * int), then the key (a variable size long). Multiple ids can be concatenated
+ * to concatenate the data. If the id is large, it is stored itself, which is
+ * encoded as 2, the total length (a variable size long), and the key of the
+ * block that contains the id (a variable size long).
+ */
+public class StreamStore {
+
+    private final Map<Long, byte[]> map;
+    private int minBlockSize = 256;
+    private int maxBlockSize = 256 * 1024;
+    private final AtomicLong nextKey = new AtomicLong();
+    private final AtomicReference<byte[]> nextBuffer =
+            new AtomicReference<>();
+
+    /**
+     * Create a stream store instance.
+     *
+     * @param map the map to store blocks of data
+     */
+    public StreamStore(Map<Long, byte[]> map) {
+        this.map = map;
+    }
+
+    public Map<Long, byte[]> getMap() {
+        return map;
+    }
+
+    public void setNextKey(long nextKey) {
+        this.nextKey.set(nextKey);
+    }
+
+    public long getNextKey() {
+        return nextKey.get();
+    }
+
+    /**
+     * Set the minimum block size. The default is 256 bytes.
+     *
+     * @param minBlockSize the new value
+     */
+    public void setMinBlockSize(int minBlockSize) {
+        this.minBlockSize = minBlockSize;
+    }
+
+    public int getMinBlockSize() {
+        return minBlockSize;
+    }
+
+    /**
+     * Set the maximum block size. The default is 256 KB.
+     *
+     * @param maxBlockSize the new value
+     */
+    public void setMaxBlockSize(int maxBlockSize) {
+        this.maxBlockSize = maxBlockSize;
+    }
+
+    public long getMaxBlockSize() {
+        return maxBlockSize;
+    }
+
+    /**
+     * Store the stream, and return the id. The stream is not closed.
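The id format described above leans heavily on variable-size ints and longs. A self-contained sketch of LEB128-style varint encoding in the spirit of DataUtils.writeVarInt/readVarInt — illustrative only, the upstream byte-level format may differ in detail:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class VarIntDemo {

    // Write an int using 7 data bits per byte; the high bit marks
    // "more bytes follow" (sketch of DataUtils.writeVarInt).
    static void writeVarInt(ByteArrayOutputStream out, int x) {
        while ((x & ~0x7f) != 0) {
            out.write((byte) (0x80 | (x & 0x7f)));
            x >>>= 7;
        }
        out.write((byte) x);
    }

    // Read a varint back, least significant group first
    // (sketch of DataUtils.readVarInt).
    static int readVarInt(ByteBuffer buff) {
        int b = buff.get();
        if (b >= 0) {
            return b;
        }
        int x = b & 0x7f;
        for (int shift = 7; shift < 32; shift += 7) {
            b = buff.get();
            x |= (b & 0x7f) << shift;
            if (b >= 0) {
                break;
            }
        }
        return x;
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarInt(out, 300);
        byte[] bytes = out.toByteArray();
        System.out.println(bytes.length);                       // 2
        System.out.println(readVarInt(ByteBuffer.wrap(bytes))); // 300
    }
}
```

Small values (the common case for lengths and keys) cost a single byte, which keeps inline ids compact.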
+ * + * @param in the stream + * @return the id (potentially an empty array) + */ + @SuppressWarnings("resource") + public byte[] put(InputStream in) throws IOException { + ByteArrayOutputStream id = new ByteArrayOutputStream(); + int level = 0; + try { + while (!put(id, in, level)) { + if (id.size() > maxBlockSize / 2) { + id = putIndirectId(id); + level++; + } + } + } catch (IOException e) { + remove(id.toByteArray()); + throw e; + } + if (id.size() > minBlockSize * 2) { + id = putIndirectId(id); + } + return id.toByteArray(); + } + + private boolean put(ByteArrayOutputStream id, InputStream in, int level) + throws IOException { + if (level > 0) { + ByteArrayOutputStream id2 = new ByteArrayOutputStream(); + while (true) { + boolean eof = put(id2, in, level - 1); + if (id2.size() > maxBlockSize / 2) { + id2 = putIndirectId(id2); + id2.writeTo(id); + return eof; + } else if (eof) { + id2.writeTo(id); + return true; + } + } + } + byte[] readBuffer = nextBuffer.getAndSet(null); + if (readBuffer == null) { + readBuffer = new byte[maxBlockSize]; + } + byte[] buff = read(in, readBuffer); + if (buff != readBuffer) { + // re-use the buffer if the result was shorter + nextBuffer.set(readBuffer); + } + int len = buff.length; + if (len == 0) { + return true; + } + boolean eof = len < maxBlockSize; + if (len < minBlockSize) { + // in-place: 0, len (int), data + id.write(0); + DataUtils.writeVarInt(id, len); + id.write(buff); + } else { + // block: 1, len (int), blockId (long) + id.write(1); + DataUtils.writeVarInt(id, len); + DataUtils.writeVarLong(id, writeBlock(buff)); + } + return eof; + } + + private static byte[] read(InputStream in, byte[] target) + throws IOException { + int copied = 0; + int remaining = target.length; + while (remaining > 0) { + try { + int len = in.read(target, copied, remaining); + if (len < 0) { + return Arrays.copyOf(target, copied); + } + copied += len; + remaining -= len; + } catch (RuntimeException e) { + throw new IOException(e); + } + } + 
return target;
+    }
+
+    private ByteArrayOutputStream putIndirectId(ByteArrayOutputStream id)
+            throws IOException {
+        byte[] data = id.toByteArray();
+        id = new ByteArrayOutputStream();
+        // indirect: 2, total len (long), blockId (long)
+        id.write(2);
+        DataUtils.writeVarLong(id, length(data));
+        DataUtils.writeVarLong(id, writeBlock(data));
+        return id;
+    }
+
+    private long writeBlock(byte[] data) {
+        long key = getAndIncrementNextKey();
+        map.put(key, data);
+        onStore(data.length);
+        return key;
+    }
+
+    /**
+     * This method is called after a block of data is stored. Override this
+     * method to persist data if necessary.
+     *
+     * @param len the length of the stored block.
+     */
+    @SuppressWarnings("unused")
+    protected void onStore(int len) {
+        // do nothing by default
+    }
+
+    /**
+     * Generate a new key.
+     *
+     * @return the new key
+     */
+    private long getAndIncrementNextKey() {
+        long key = nextKey.getAndIncrement();
+        if (!map.containsKey(key)) {
+            return key;
+        }
+        // search the next free id using binary search
+        synchronized (this) {
+            long low = key, high = Long.MAX_VALUE;
+            while (low < high) {
+                long x = (low + high) >>> 1;
+                if (map.containsKey(x)) {
+                    low = x + 1;
+                } else {
+                    high = x;
+                }
+            }
+            key = low;
+            nextKey.set(key + 1);
+            return key;
+        }
+    }
+
+    /**
+     * Get the key of the biggest block, or -1 for inline data.
+     * This method is used to garbage collect orphaned blocks.
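getAndIncrementNextKey() above falls back to a binary search over [key, Long.MAX_VALUE] when the sequential key is already taken. The same search, sketched standalone — it assumes keys were allocated densely from some starting point, which is what makes containsKey a monotone predicate here:

```java
import java.util.TreeMap;

public class FreeKeyDemo {

    // Find the smallest key >= from that is not present in the map, by
    // binary search over [from, Long.MAX_VALUE]. Only valid when every
    // key below the answer is occupied (dense allocation), since
    // containsKey must flip from true to false exactly once.
    static long firstFreeKey(TreeMap<Long, byte[]> map, long from) {
        long low = from, high = Long.MAX_VALUE;
        while (low < high) {
            long x = (low + high) >>> 1; // unsigned shift avoids overflow
            if (map.containsKey(x)) {
                low = x + 1;
            } else {
                high = x;
            }
        }
        return low;
    }

    public static void main(String[] args) {
        TreeMap<Long, byte[]> map = new TreeMap<>();
        for (long k = 0; k < 5; k++) {
            map.put(k, new byte[0]);
        }
        System.out.println(firstFreeKey(map, 0)); // 5
    }
}
```

About 63 containsKey probes suffice for the full long range, so recovering from a stale nextKey after a restart is cheap.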
+ * + * @param id the id + * @return the key, or -1 + */ + public long getMaxBlockKey(byte[] id) { + long maxKey = -1; + ByteBuffer idBuffer = ByteBuffer.wrap(id); + while (idBuffer.hasRemaining()) { + switch (idBuffer.get()) { + case 0: + // in-place: 0, len (int), data + int len = DataUtils.readVarInt(idBuffer); + idBuffer.position(idBuffer.position() + len); + break; + case 1: + // block: 1, len (int), blockId (long) + DataUtils.readVarInt(idBuffer); + long k = DataUtils.readVarLong(idBuffer); + maxKey = Math.max(maxKey, k); + break; + case 2: + // indirect: 2, total len (long), blockId (long) + DataUtils.readVarLong(idBuffer); + long k2 = DataUtils.readVarLong(idBuffer); + maxKey = k2; + byte[] r = map.get(k2); + // recurse + long m = getMaxBlockKey(r); + if (m >= 0) { + maxKey = Math.max(maxKey, m); + } + break; + default: + throw DataUtils.newIllegalArgumentException( + "Unsupported id {0}", Arrays.toString(id)); + } + } + return maxKey; + } + + /** + * Remove all stored blocks for the given id. + * + * @param id the id + */ + public void remove(byte[] id) { + ByteBuffer idBuffer = ByteBuffer.wrap(id); + while (idBuffer.hasRemaining()) { + switch (idBuffer.get()) { + case 0: + // in-place: 0, len (int), data + int len = DataUtils.readVarInt(idBuffer); + idBuffer.position(idBuffer.position() + len); + break; + case 1: + // block: 1, len (int), blockId (long) + DataUtils.readVarInt(idBuffer); + long k = DataUtils.readVarLong(idBuffer); + map.remove(k); + break; + case 2: + // indirect: 2, total len (long), blockId (long) + DataUtils.readVarLong(idBuffer); + long k2 = DataUtils.readVarLong(idBuffer); + // recurse + remove(map.get(k2)); + map.remove(k2); + break; + default: + throw DataUtils.newIllegalArgumentException( + "Unsupported id {0}", Arrays.toString(id)); + } + } + } + + /** + * Convert the id to a human readable string. 
+ * + * @param id the stream id + * @return the string + */ + public static String toString(byte[] id) { + StringBuilder buff = new StringBuilder(); + ByteBuffer idBuffer = ByteBuffer.wrap(id); + long length = 0; + while (idBuffer.hasRemaining()) { + long block; + int len; + switch (idBuffer.get()) { + case 0: + // in-place: 0, len (int), data + len = DataUtils.readVarInt(idBuffer); + idBuffer.position(idBuffer.position() + len); + buff.append("data len=").append(len); + length += len; + break; + case 1: + // block: 1, len (int), blockId (long) + len = DataUtils.readVarInt(idBuffer); + length += len; + block = DataUtils.readVarLong(idBuffer); + buff.append("block ").append(block).append(" len=").append(len); + break; + case 2: + // indirect: 2, total len (long), blockId (long) + len = DataUtils.readVarInt(idBuffer); + length += DataUtils.readVarLong(idBuffer); + block = DataUtils.readVarLong(idBuffer); + buff.append("indirect block ").append(block).append(" len=").append(len); + break; + default: + buff.append("error"); + } + buff.append(", "); + } + buff.append("length=").append(length); + return buff.toString(); + } + + /** + * Calculate the number of data bytes for the given id. As the length is + * encoded in the id, this operation does not cause any reads in the map. 
+ * + * @param id the id + * @return the length + */ + public long length(byte[] id) { + ByteBuffer idBuffer = ByteBuffer.wrap(id); + long length = 0; + while (idBuffer.hasRemaining()) { + switch (idBuffer.get()) { + case 0: + // in-place: 0, len (int), data + int len = DataUtils.readVarInt(idBuffer); + idBuffer.position(idBuffer.position() + len); + length += len; + break; + case 1: + // block: 1, len (int), blockId (long) + length += DataUtils.readVarInt(idBuffer); + DataUtils.readVarLong(idBuffer); + break; + case 2: + // indirect: 2, total len (long), blockId (long) + length += DataUtils.readVarLong(idBuffer); + DataUtils.readVarLong(idBuffer); + break; + default: + throw DataUtils.newIllegalArgumentException( + "Unsupported id {0}", Arrays.toString(id)); + } + } + return length; + } + + /** + * Check whether the id itself contains all the data. This operation does + * not cause any reads in the map. + * + * @param id the id + * @return if the id contains the data + */ + public boolean isInPlace(byte[] id) { + ByteBuffer idBuffer = ByteBuffer.wrap(id); + while (idBuffer.hasRemaining()) { + if (idBuffer.get() != 0) { + return false; + } + int len = DataUtils.readVarInt(idBuffer); + idBuffer.position(idBuffer.position() + len); + } + return true; + } + + /** + * Open an input stream to read data. + * + * @param id the id + * @return the stream + */ + public InputStream get(byte[] id) { + return new Stream(this, id); + } + + /** + * Get the block. + * + * @param key the key + * @return the block + */ + byte[] getBlock(long key) { + byte[] data = map.get(key); + if (data == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_BLOCK_NOT_FOUND, + "Block {0} not found", key); + } + return data; + } + + /** + * A stream backed by a map. 
+ */ + static class Stream extends InputStream { + + private final StreamStore store; + private byte[] oneByteBuffer; + private ByteBuffer idBuffer; + private ByteArrayInputStream buffer; + private long skip; + private final long length; + private long pos; + + Stream(StreamStore store, byte[] id) { + this.store = store; + this.length = store.length(id); + this.idBuffer = ByteBuffer.wrap(id); + } + + @Override + public int read() throws IOException { + byte[] buffer = oneByteBuffer; + if (buffer == null) { + buffer = oneByteBuffer = new byte[1]; + } + int len = read(buffer, 0, 1); + return len == -1 ? -1 : (buffer[0] & 255); + } + + @Override + public long skip(long n) { + n = Math.min(length - pos, n); + if (n == 0) { + return 0; + } + if (buffer != null) { + long s = buffer.skip(n); + if (s > 0) { + n = s; + } else { + buffer = null; + skip += n; + } + } else { + skip += n; + } + pos += n; + return n; + } + + @Override + public void close() { + buffer = null; + idBuffer.position(idBuffer.limit()); + pos = length; + } + + @Override + public int read(byte[] b, int off, int len) throws IOException { + if (len <= 0) { + return 0; + } + while (true) { + if (buffer == null) { + try { + buffer = nextBuffer(); + } catch (IllegalStateException e) { + String msg = DataUtils.formatMessage( + DataUtils.ERROR_BLOCK_NOT_FOUND, + "Block not found in id {0}", + Arrays.toString(idBuffer.array())); + throw new IOException(msg, e); + } + if (buffer == null) { + return -1; + } + } + int result = buffer.read(b, off, len); + if (result > 0) { + pos += result; + return result; + } + buffer = null; + } + } + + private ByteArrayInputStream nextBuffer() { + while (idBuffer.hasRemaining()) { + switch (idBuffer.get()) { + case 0: { + int len = DataUtils.readVarInt(idBuffer); + if (skip >= len) { + skip -= len; + idBuffer.position(idBuffer.position() + len); + continue; + } + int p = (int) (idBuffer.position() + skip); + int l = (int) (len - skip); + idBuffer.position(p + l); + return new 
ByteArrayInputStream(idBuffer.array(), p, l); + } + case 1: { + int len = DataUtils.readVarInt(idBuffer); + long key = DataUtils.readVarLong(idBuffer); + if (skip >= len) { + skip -= len; + continue; + } + byte[] data = store.getBlock(key); + int s = (int) skip; + skip = 0; + return new ByteArrayInputStream(data, s, data.length - s); + } + case 2: { + long len = DataUtils.readVarLong(idBuffer); + long key = DataUtils.readVarLong(idBuffer); + if (skip >= len) { + skip -= len; + continue; + } + byte[] k = store.getBlock(key); + ByteBuffer newBuffer = ByteBuffer.allocate(k.length + + idBuffer.limit() - idBuffer.position()); + newBuffer.put(k); + newBuffer.put(idBuffer); + newBuffer.flip(); + idBuffer = newBuffer; + return nextBuffer(); + } + default: + throw DataUtils.newIllegalArgumentException( + "Unsupported id {0}", + Arrays.toString(idBuffer.array())); + } + } + return null; + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/WriteBuffer.java b/modules/h2/src/main/java/org/h2/mvstore/WriteBuffer.java new file mode 100644 index 0000000000000..3931626ad033d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/WriteBuffer.java @@ -0,0 +1,331 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore; + +import java.nio.ByteBuffer; + +/** + * An auto-resize buffer to write data into a ByteBuffer. + */ +public class WriteBuffer { + + /** + * The maximum size of the buffer in order to be re-used after a clear + * operation. + */ + private static final int MAX_REUSE_CAPACITY = 4 * 1024 * 1024; + + /** + * The minimum number of bytes to grow a buffer at a time. + */ + private static final int MIN_GROW = 1024 * 1024; + + /** + * The buffer that is used after a clear operation. + */ + private ByteBuffer reuse; + + /** + * The current buffer (may be replaced if it is too small). 
+ */ + private ByteBuffer buff; + + public WriteBuffer(int initialSize) { + reuse = ByteBuffer.allocate(initialSize); + buff = reuse; + } + + public WriteBuffer() { + this(MIN_GROW); + } + + /** + * Write a variable size integer. + * + * @param x the value + * @return this + */ + public WriteBuffer putVarInt(int x) { + DataUtils.writeVarInt(ensureCapacity(5), x); + return this; + } + + /** + * Write a variable size long. + * + * @param x the value + * @return this + */ + public WriteBuffer putVarLong(long x) { + DataUtils.writeVarLong(ensureCapacity(10), x); + return this; + } + + /** + * Write the characters of a string in a format similar to UTF-8. + * + * @param s the string + * @param len the number of characters to write + * @return this + */ + public WriteBuffer putStringData(String s, int len) { + ByteBuffer b = ensureCapacity(3 * len); + DataUtils.writeStringData(b, s, len); + return this; + } + + /** + * Put a byte. + * + * @param x the value + * @return this + */ + public WriteBuffer put(byte x) { + ensureCapacity(1).put(x); + return this; + } + + /** + * Put a character. + * + * @param x the value + * @return this + */ + public WriteBuffer putChar(char x) { + ensureCapacity(2).putChar(x); + return this; + } + + /** + * Put a short. + * + * @param x the value + * @return this + */ + public WriteBuffer putShort(short x) { + ensureCapacity(2).putShort(x); + return this; + } + + /** + * Put an integer. + * + * @param x the value + * @return this + */ + public WriteBuffer putInt(int x) { + ensureCapacity(4).putInt(x); + return this; + } + + /** + * Put a long. + * + * @param x the value + * @return this + */ + public WriteBuffer putLong(long x) { + ensureCapacity(8).putLong(x); + return this; + } + + /** + * Put a float. + * + * @param x the value + * @return this + */ + public WriteBuffer putFloat(float x) { + ensureCapacity(4).putFloat(x); + return this; + } + + /** + * Put a double. 
+ * + * @param x the value + * @return this + */ + public WriteBuffer putDouble(double x) { + ensureCapacity(8).putDouble(x); + return this; + } + + /** + * Put a byte array. + * + * @param bytes the value + * @return this + */ + public WriteBuffer put(byte[] bytes) { + ensureCapacity(bytes.length).put(bytes); + return this; + } + + /** + * Put a byte array. + * + * @param bytes the value + * @param offset the source offset + * @param length the number of bytes + * @return this + */ + public WriteBuffer put(byte[] bytes, int offset, int length) { + ensureCapacity(length).put(bytes, offset, length); + return this; + } + + /** + * Put the contents of a byte buffer. + * + * @param src the source buffer + * @return this + */ + public WriteBuffer put(ByteBuffer src) { + ensureCapacity(src.remaining()).put(src); + return this; + } + + /** + * Set the limit, possibly growing the buffer. + * + * @param newLimit the new limit + * @return this + */ + public WriteBuffer limit(int newLimit) { + ensureCapacity(newLimit - buff.position()).limit(newLimit); + return this; + } + + /** + * Get the capacity. + * + * @return the capacity + */ + public int capacity() { + return buff.capacity(); + } + + /** + * Set the position. + * + * @param newPosition the new position + * @return the new position + */ + public WriteBuffer position(int newPosition) { + buff.position(newPosition); + return this; + } + + /** + * Get the limit. + * + * @return the limit + */ + public int limit() { + return buff.limit(); + } + + /** + * Get the current position. + * + * @return the position + */ + public int position() { + return buff.position(); + } + + /** + * Copy the data into the destination array. + * + * @param dst the destination array + * @return this + */ + public WriteBuffer get(byte[] dst) { + buff.get(dst); + return this; + } + + /** + * Update an integer at the given index. 
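WriteBuffer's ensureCapacity/grow (further down) replace the backing ByteBuffer when it runs out of room, growing by at least MIN_GROW and by at least 50% of the current capacity. A standalone sketch of that sizing rule — this mirrors the logic in grow() below, it is not the actual class:

```java
public class GrowDemo {

    static final int MIN_GROW = 1024 * 1024;

    // Mirror of WriteBuffer's growth policy: grow by at least what is
    // needed, at least MIN_GROW, and at least 50% of the current
    // capacity, capped at Integer.MAX_VALUE.
    static int newCapacity(int capacity, int remaining, int additional) {
        long needed = additional - remaining;
        long grow = Math.max(needed, MIN_GROW);
        grow = Math.max(capacity / 2, grow);
        return (int) Math.min(Integer.MAX_VALUE, capacity + grow);
    }

    public static void main(String[] args) {
        // Small buffer: grows by MIN_GROW, not by the tiny amount needed.
        System.out.println(newCapacity(1024, 0, 10));             // 1049600
        // Large buffer: 50% growth dominates MIN_GROW.
        System.out.println(newCapacity(16 * 1024 * 1024, 0, 10)); // 25165824
    }
}
```

Geometric (50%) growth keeps the amortized cost of repeated puts linear, while the MIN_GROW floor avoids many tiny reallocations for small buffers.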
+ * + * @param index the index + * @param value the value + * @return this + */ + public WriteBuffer putInt(int index, int value) { + buff.putInt(index, value); + return this; + } + + /** + * Update a short at the given index. + * + * @param index the index + * @param value the value + * @return this + */ + public WriteBuffer putShort(int index, short value) { + buff.putShort(index, value); + return this; + } + + /** + * Clear the buffer after use. + * + * @return this + */ + public WriteBuffer clear() { + if (buff.limit() > MAX_REUSE_CAPACITY) { + buff = reuse; + } else if (buff != reuse) { + reuse = buff; + } + buff.clear(); + return this; + } + + /** + * Get the byte buffer. + * + * @return the byte buffer + */ + public ByteBuffer getBuffer() { + return buff; + } + + private ByteBuffer ensureCapacity(int len) { + if (buff.remaining() < len) { + grow(len); + } + return buff; + } + + private void grow(int additional) { + ByteBuffer temp = buff; + int needed = additional - temp.remaining(); + // grow at least MIN_GROW + long grow = Math.max(needed, MIN_GROW); + // grow at least 50% of the current size + grow = Math.max(temp.capacity() / 2, grow); + // the new capacity is at most Integer.MAX_VALUE + int newCapacity = (int) Math.min(Integer.MAX_VALUE, temp.capacity() + grow); + if (newCapacity < needed) { + throw new OutOfMemoryError("Capacity: " + newCapacity + " needed: " + needed); + } + try { + buff = ByteBuffer.allocate(newCapacity); + } catch (OutOfMemoryError e) { + throw new OutOfMemoryError("Capacity: " + newCapacity); + } + temp.flip(); + buff.put(temp); + if (newCapacity <= MAX_REUSE_CAPACITY) { + reuse = buff; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/cache/CacheLongKeyLIRS.java b/modules/h2/src/main/java/org/h2/mvstore/cache/CacheLongKeyLIRS.java new file mode 100644 index 0000000000000..d890e7ff1f1ff --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/cache/CacheLongKeyLIRS.java @@ -0,0 +1,1199 @@ +/* + * Copyright 
2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.cache; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import org.h2.mvstore.DataUtils; + +/** + * A scan resistant cache that uses keys of type long. It is meant to cache + * objects that are relatively costly to acquire, for example file content. + *

+ * This implementation is multi-threading safe and supports concurrent access.
+ * Null keys or null values are not allowed. The map fill factor is at most 75%.
+ * <p>
+ * Each entry is assigned a distinct memory size, and the cache will try to use
+ * at most the specified amount of memory. The memory unit is not relevant,
+ * however it is suggested to use bytes as the unit.
+ * <p>
+ * This class implements an approximation of the LIRS replacement algorithm
+ * invented by Xiaodong Zhang and Song Jiang as described in
+ * http://www.cse.ohio-state.edu/~zhang/lirs-sigmetrics-02.html with a few
+ * smaller changes: An additional queue for non-resident entries is used, to
+ * prevent unbound memory usage. The maximum size of this queue is at most the
+ * size of the rest of the stack. About 6.25% of the mapped entries are cold.
+ * <p>
+ * Internally, the cache is split into a number of segments, and each segment is
+ * an individual LIRS cache.
+ * <p>
    + * Accessed entries are only moved to the top of the stack if at least a number + * of other entries have been moved to the front (8 per segment by default). + * Write access and moving entries to the top of the stack is synchronized per + * segment. + * + * @author Thomas Mueller + * @param the value type + */ +public class CacheLongKeyLIRS { + + /** + * The maximum memory this cache should use. + */ + private long maxMemory; + + private final Segment[] segments; + + private final int segmentCount; + private final int segmentShift; + private final int segmentMask; + private final int stackMoveDistance; + private final int nonResidentQueueSize; + + /** + * Create a new cache with the given memory size. + * + * @param config the configuration + */ + @SuppressWarnings("unchecked") + public CacheLongKeyLIRS(Config config) { + setMaxMemory(config.maxMemory); + this.nonResidentQueueSize = config.nonResidentQueueSize; + DataUtils.checkArgument( + Integer.bitCount(config.segmentCount) == 1, + "The segment count must be a power of 2, is {0}", config.segmentCount); + this.segmentCount = config.segmentCount; + this.segmentMask = segmentCount - 1; + this.stackMoveDistance = config.stackMoveDistance; + segments = new Segment[segmentCount]; + clear(); + // use the high bits for the segment + this.segmentShift = 32 - Integer.bitCount(segmentMask); + } + + /** + * Remove all entries. + */ + public void clear() { + long max = getMaxItemSize(); + for (int i = 0; i < segmentCount; i++) { + segments[i] = new Segment<>( + max, stackMoveDistance, 8, nonResidentQueueSize); + } + } + + /** + * Determines max size of the data item size to fit into cache + * @return data items size limit + */ + public long getMaxItemSize() { + return Math.max(1, maxMemory / segmentCount); + } + + private Entry find(long key) { + int hash = getHash(key); + return getSegment(hash).find(key, hash); + } + + /** + * Check whether there is a resident entry for the given key. 
This + * method does not adjust the internal state of the cache. + * + * @param key the key (may not be null) + * @return true if there is a resident entry + */ + public boolean containsKey(long key) { + int hash = getHash(key); + return getSegment(hash).containsKey(key, hash); + } + + /** + * Get the value for the given key if the entry is cached. This method does + * not modify the internal state. + * + * @param key the key (may not be null) + * @return the value, or null if there is no resident entry + */ + public V peek(long key) { + Entry e = find(key); + return e == null ? null : e.value; + } + + /** + * Add an entry to the cache using the average memory size. + * + * @param key the key (may not be null) + * @param value the value (may not be null) + * @return the old value, or null if there was no resident entry + */ + public V put(long key, V value) { + return put(key, value, sizeOf(value)); + } + + /** + * Add an entry to the cache. The entry may or may not exist in the + * cache yet. This method will usually mark unknown entries as cold and + * known entries as hot. 
+ * + * @param key the key (may not be null) + * @param value the value (may not be null) + * @param memory the memory used for the given entry + * @return the old value, or null if there was no resident entry + */ + public V put(long key, V value, int memory) { + int hash = getHash(key); + int segmentIndex = getSegmentIndex(hash); + Segment s = segments[segmentIndex]; + // check whether resize is required: synchronize on s, to avoid + // concurrent resizes (concurrent reads read + // from the old segment) + synchronized (s) { + s = resizeIfNeeded(s, segmentIndex); + return s.put(key, hash, value, memory); + } + } + + private Segment resizeIfNeeded(Segment s, int segmentIndex) { + int newLen = s.getNewMapLen(); + if (newLen == 0) { + return s; + } + // another thread might have resized + // (as we retrieved the segment before synchronizing on it) + Segment s2 = segments[segmentIndex]; + if (s == s2) { + // no other thread resized, so we do + s = new Segment<>(s, newLen); + segments[segmentIndex] = s; + } + return s; + } + + /** + * Get the size of the given value. The default implementation returns 1. + * + * @param value the value + * @return the size + */ + @SuppressWarnings("unused") + protected int sizeOf(V value) { + return 1; + } + + /** + * Remove an entry. Both resident and non-resident entries can be + * removed. + * + * @param key the key (may not be null) + * @return the old value, or null if there was no resident entry + */ + public V remove(long key) { + int hash = getHash(key); + int segmentIndex = getSegmentIndex(hash); + Segment s = segments[segmentIndex]; + // check whether resize is required: synchronize on s, to avoid + // concurrent resizes (concurrent reads read + // from the old segment) + synchronized (s) { + s = resizeIfNeeded(s, segmentIndex); + return s.remove(key, hash); + } + } + + /** + * Get the memory used for the given key. 
+ * + * @param key the key (may not be null) + * @return the memory, or 0 if there is no resident entry + */ + public int getMemory(long key) { + int hash = getHash(key); + return getSegment(hash).getMemory(key, hash); + } + + /** + * Get the value for the given key if the entry is cached. This method + * adjusts the internal state of the cache sometimes, to ensure commonly + * used entries stay in the cache. + * + * @param key the key (may not be null) + * @return the value, or null if there is no resident entry + */ + public V get(long key) { + int hash = getHash(key); + return getSegment(hash).get(key, hash); + } + + private Segment getSegment(int hash) { + return segments[getSegmentIndex(hash)]; + } + + private int getSegmentIndex(int hash) { + return (hash >>> segmentShift) & segmentMask; + } + + /** + * Get the hash code for the given key. The hash code is + * further enhanced to spread the values more evenly. + * + * @param key the key + * @return the hash code + */ + static int getHash(long key) { + int hash = (int) ((key >>> 32) ^ key); + // a supplemental secondary hash function + // to protect against hash codes that don't differ much + hash = ((hash >>> 16) ^ hash) * 0x45d9f3b; + hash = ((hash >>> 16) ^ hash) * 0x45d9f3b; + hash = (hash >>> 16) ^ hash; + return hash; + } + + /** + * Get the currently used memory. + * + * @return the used memory + */ + public long getUsedMemory() { + long x = 0; + for (Segment s : segments) { + x += s.usedMemory; + } + return x; + } + + /** + * Set the maximum memory this cache should use. This will not + * immediately cause entries to get removed however; it will only change + * the limit. To resize the internal array, call the clear method. 
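The key-to-segment routing above (getHash combined with getSegmentIndex) can be exercised in isolation. A self-contained sketch using the default of 16 segments from Config; the class and method names here are invented for illustration:

```java
// Standalone sketch of CacheLongKeyLIRS key routing (illustrative names).
public class HashDemo {
    // Same supplemental hash as in the source: fold the long to an int,
    // then mix twice with an odd constant to spread similar keys
    static int getHash(long key) {
        int hash = (int) ((key >>> 32) ^ key);
        hash = ((hash >>> 16) ^ hash) * 0x45d9f3b;
        hash = ((hash >>> 16) ^ hash) * 0x45d9f3b;
        hash = (hash >>> 16) ^ hash;
        return hash;
    }

    // The high bits pick the segment: 16 segments -> mask 15, shift 28
    static int segmentIndex(int hash) {
        int segmentMask = 15; // segmentCount - 1, default Config
        int segmentShift = 32 - Integer.bitCount(segmentMask);
        return (hash >>> segmentShift) & segmentMask;
    }

    public static void main(String[] args) {
        for (long k = 0; k < 100; k++) {
            int idx = segmentIndex(getHash(k));
            if (idx < 0 || idx > 15) throw new AssertionError("bad segment " + idx);
        }
        System.out.println("all keys routed to a valid segment");
    }
}
```

Because each mixing step is invertible on 32-bit ints, the int-range portion of the hash is a bijection, so distinct small keys never collide before the segment mask is applied.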
+ * + * @param maxMemory the maximum size (1 or larger) in bytes + */ + public void setMaxMemory(long maxMemory) { + DataUtils.checkArgument( + maxMemory > 0, + "Max memory must be larger than 0, is {0}", maxMemory); + this.maxMemory = maxMemory; + if (segments != null) { + long max = 1 + maxMemory / segments.length; + for (Segment s : segments) { + s.setMaxMemory(max); + } + } + } + + /** + * Get the maximum memory to use. + * + * @return the maximum memory + */ + public long getMaxMemory() { + return maxMemory; + } + + /** + * Get the entry set for all resident entries. + * + * @return the entry set + */ + public synchronized Set> entrySet() { + HashMap map = new HashMap<>(); + for (long k : keySet()) { + map.put(k, find(k).value); + } + return map.entrySet(); + } + + /** + * Get the set of keys for resident entries. + * + * @return the set of keys + */ + public Set keySet() { + HashSet set = new HashSet<>(); + for (Segment s : segments) { + set.addAll(s.keySet()); + } + return set; + } + + /** + * Get the number of non-resident entries in the cache. + * + * @return the number of non-resident entries + */ + public int sizeNonResident() { + int x = 0; + for (Segment s : segments) { + x += s.queue2Size; + } + return x; + } + + /** + * Get the length of the internal map array. + * + * @return the size of the array + */ + public int sizeMapArray() { + int x = 0; + for (Segment s : segments) { + x += s.entries.length; + } + return x; + } + + /** + * Get the number of hot entries in the cache. + * + * @return the number of hot entries + */ + public int sizeHot() { + int x = 0; + for (Segment s : segments) { + x += s.mapSize - s.queueSize - s.queue2Size; + } + return x; + } + + /** + * Get the number of cache hits. + * + * @return the cache hits + */ + public long getHits() { + long x = 0; + for (Segment s : segments) { + x += s.hits; + } + return x; + } + + /** + * Get the number of cache misses. 
+ * + * @return the cache misses + */ + public long getMisses() { + int x = 0; + for (Segment s : segments) { + x += s.misses; + } + return x; + } + + /** + * Get the number of resident entries. + * + * @return the number of entries + */ + public int size() { + int x = 0; + for (Segment s : segments) { + x += s.mapSize - s.queue2Size; + } + return x; + } + + /** + * Get the list of keys. This method allows to read the internal state of + * the cache. + * + * @param cold if true, only keys for the cold entries are returned + * @param nonResident true for non-resident entries + * @return the key list + */ + public List keys(boolean cold, boolean nonResident) { + ArrayList keys = new ArrayList<>(); + for (Segment s : segments) { + keys.addAll(s.keys(cold, nonResident)); + } + return keys; + } + + /** + * Get the values for all resident entries. + * + * @return the entry set + */ + public List values() { + ArrayList list = new ArrayList<>(); + for (long k : keySet()) { + V value = find(k).value; + if (value != null) { + list.add(value); + } + } + return list; + } + + /** + * Check whether the cache is empty. + * + * @return true if it is empty + */ + public boolean isEmpty() { + return size() == 0; + } + + /** + * Check whether the given value is stored. + * + * @param value the value + * @return true if it is stored + */ + public boolean containsValue(Object value) { + return getMap().containsValue(value); + } + + /** + * Convert this cache to a map. + * + * @return the map + */ + public Map getMap() { + HashMap map = new HashMap<>(); + for (long k : keySet()) { + V x = find(k).value; + if (x != null) { + map.put(k, x); + } + } + return map; + } + + /** + * Add all elements of the map to this cache. 
+ * + * @param m the map + */ + public void putAll(Map m) { + for (Map.Entry e : m.entrySet()) { + // copy only non-null entries + put(e.getKey(), e.getValue()); + } + } + + /** + * A cache segment + * + * @param the value type + */ + private static class Segment { + + /** + * The number of (hot, cold, and non-resident) entries in the map. + */ + int mapSize; + + /** + * The size of the LIRS queue for resident cold entries. + */ + int queueSize; + + /** + * The size of the LIRS queue for non-resident cold entries. + */ + int queue2Size; + + /** + * The number of cache hits. + */ + long hits; + + /** + * The number of cache misses. + */ + long misses; + + /** + * The map array. The size is always a power of 2. + */ + final Entry[] entries; + + /** + * The currently used memory. + */ + long usedMemory; + + /** + * How many other item are to be moved to the top of the stack before + * the current item is moved. + */ + private final int stackMoveDistance; + + /** + * The maximum memory this cache should use in bytes. + */ + private long maxMemory; + + /** + * The bit mask that is applied to the key hash code to get the index in + * the map array. The mask is the length of the array minus one. + */ + private final int mask; + + /** + * The number of entries in the non-resident queue, as a factor of the + * number of entries in the map. + */ + private final int nonResidentQueueSize; + + /** + * The stack of recently referenced elements. This includes all hot + * entries, and the recently referenced cold entries. Resident cold + * entries that were not recently referenced, as well as non-resident + * cold entries, are not in the stack. + *

+ * There is always at least one entry: the head entry.
+ */
+ private final Entry<V> stack;
+
+ /**
+ * The number of entries in the stack.
+ */
+ private int stackSize;
+
+ /**
+ * The queue of resident cold entries.
+ * <p>
+ * There is always at least one entry: the head entry.
+ */
+ private final Entry<V> queue;
+
+ /**
+ * The queue of non-resident cold entries.
+ * <p>
    + * There is always at least one entry: the head entry. + */ + private final Entry queue2; + + /** + * The number of times any item was moved to the top of the stack. + */ + private int stackMoveCounter; + + /** + * Create a new cache segment. + * + * @param maxMemory the maximum memory to use + * @param stackMoveDistance the number of other entries to be moved to + * the top of the stack before moving an entry to the top + * @param len the number of hash table buckets (must be a power of 2) + * @param nonResidentQueueSize the non-resident queue size factor + */ + Segment(long maxMemory, int stackMoveDistance, int len, + int nonResidentQueueSize) { + setMaxMemory(maxMemory); + this.stackMoveDistance = stackMoveDistance; + this.nonResidentQueueSize = nonResidentQueueSize; + + // the bit mask has all bits set + mask = len - 1; + + // initialize the stack and queue heads + stack = new Entry<>(); + stack.stackPrev = stack.stackNext = stack; + queue = new Entry<>(); + queue.queuePrev = queue.queueNext = queue; + queue2 = new Entry<>(); + queue2.queuePrev = queue2.queueNext = queue2; + + @SuppressWarnings("unchecked") + Entry[] e = new Entry[len]; + entries = e; + } + + /** + * Create a new cache segment from an existing one. + * The caller must synchronize on the old segment, to avoid + * concurrent modifications. 
+ * + * @param old the old segment + * @param len the number of hash table buckets (must be a power of 2) + */ + Segment(Segment old, int len) { + this(old.maxMemory, old.stackMoveDistance, len, old.nonResidentQueueSize); + hits = old.hits; + misses = old.misses; + Entry s = old.stack.stackPrev; + while (s != old.stack) { + Entry e = copy(s); + addToMap(e); + addToStack(e); + s = s.stackPrev; + } + s = old.queue.queuePrev; + while (s != old.queue) { + Entry e = find(s.key, getHash(s.key)); + if (e == null) { + e = copy(s); + addToMap(e); + } + addToQueue(queue, e); + s = s.queuePrev; + } + s = old.queue2.queuePrev; + while (s != old.queue2) { + Entry e = find(s.key, getHash(s.key)); + if (e == null) { + e = copy(s); + addToMap(e); + } + addToQueue(queue2, e); + s = s.queuePrev; + } + } + + /** + * Calculate the new number of hash table buckets if the internal map + * should be re-sized. + * + * @return 0 if no resizing is needed, or the new length + */ + int getNewMapLen() { + int len = mask + 1; + if (len * 3 < mapSize * 4 && len < (1 << 28)) { + // more than 75% usage + return len * 2; + } else if (len > 32 && len / 8 > mapSize) { + // less than 12% usage + return len / 2; + } + return 0; + } + + private void addToMap(Entry e) { + int index = getHash(e.key) & mask; + e.mapNext = entries[index]; + entries[index] = e; + usedMemory += e.memory; + mapSize++; + } + + private static Entry copy(Entry old) { + Entry e = new Entry<>(); + e.key = old.key; + e.value = old.value; + e.memory = old.memory; + e.topMove = old.topMove; + return e; + } + + /** + * Get the memory used for the given key. + * + * @param key the key (may not be null) + * @param hash the hash + * @return the memory, or 0 if there is no resident entry + */ + int getMemory(long key, int hash) { + Entry e = find(key, hash); + return e == null ? 0 : e.memory; + } + + /** + * Get the value for the given key if the entry is cached. 
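The resize rule in getNewMapLen — double the bucket array above 75% fill, halve it below 12.5% once past 32 buckets, otherwise leave it alone — is easy to check as a pure function. A sketch with an invented class name:

```java
// Standalone mirror of Segment.getNewMapLen (illustrative class name).
public class MapLenDemo {
    // len: current bucket count (power of 2); mapSize: entries in the map.
    // Returns the new length, or 0 when no resize is needed.
    static int getNewMapLen(int len, int mapSize) {
        if (len * 3 < mapSize * 4 && len < (1 << 28)) {
            // more than 75% usage: grow
            return len * 2;
        } else if (len > 32 && len / 8 > mapSize) {
            // less than 12.5% usage: shrink
            return len / 2;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(getNewMapLen(8, 7));   // 7/8 full -> grow to 16
        System.out.println(getNewMapLen(64, 5));  // sparse -> shrink to 32
        System.out.println(getNewMapLen(16, 10)); // healthy -> 0, keep as-is
    }
}
```

The asymmetric thresholds (grow at 75%, shrink only below 12.5%) leave a wide hysteresis band, so a segment near a boundary does not oscillate between sizes.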
This method + * adjusts the internal state of the cache sometimes, to ensure commonly + * used entries stay in the cache. + * + * @param key the key (may not be null) + * @param hash the hash + * @return the value, or null if there is no resident entry + */ + V get(long key, int hash) { + Entry e = find(key, hash); + if (e == null) { + // the entry was not found + misses++; + return null; + } + V value = e.value; + if (value == null) { + // it was a non-resident entry + misses++; + return null; + } + if (e.isHot()) { + if (e != stack.stackNext) { + if (stackMoveDistance == 0 || + stackMoveCounter - e.topMove > stackMoveDistance) { + access(key, hash); + } + } + } else { + access(key, hash); + } + hits++; + return value; + } + + /** + * Access an item, moving the entry to the top of the stack or front of + * the queue if found. + * + * @param key the key + */ + private synchronized void access(long key, int hash) { + Entry e = find(key, hash); + if (e == null || e.value == null) { + return; + } + if (e.isHot()) { + if (e != stack.stackNext) { + if (stackMoveDistance == 0 || + stackMoveCounter - e.topMove > stackMoveDistance) { + // move a hot entry to the top of the stack + // unless it is already there + boolean wasEnd = e == stack.stackPrev; + removeFromStack(e); + if (wasEnd) { + // if moving the last entry, the last entry + // could now be cold, which is not allowed + pruneStack(); + } + addToStack(e); + } + } + } else { + removeFromQueue(e); + if (e.stackNext != null) { + // resident cold entries become hot + // if they are on the stack + removeFromStack(e); + // which means a hot entry needs to become cold + // (this entry is cold, that means there is at least one + // more entry in the stack, which must be hot) + convertOldestHotToCold(); + } else { + // cold entries that are not on the stack + // move to the front of the queue + addToQueue(queue, e); + } + // in any case, the cold entry is moved to the top of the stack + addToStack(e); + } + } + + /** + * 
Add an entry to the cache. The entry may or may not exist in the + * cache yet. This method will usually mark unknown entries as cold and + * known entries as hot. + * + * @param key the key (may not be null) + * @param hash the hash + * @param value the value (may not be null) + * @param memory the memory used for the given entry + * @return the old value, or null if there was no resident entry + */ + synchronized V put(long key, int hash, V value, int memory) { + if (value == null) { + throw DataUtils.newIllegalArgumentException( + "The value may not be null"); + } + V old; + Entry e = find(key, hash); + boolean existed; + if (e == null) { + existed = false; + old = null; + } else { + existed = true; + old = e.value; + remove(key, hash); + } + if (memory > maxMemory) { + // the new entry is too big to fit + return old; + } + e = new Entry<>(); + e.key = key; + e.value = value; + e.memory = memory; + int index = hash & mask; + e.mapNext = entries[index]; + entries[index] = e; + usedMemory += memory; + if (usedMemory > maxMemory) { + // old entries needs to be removed + evict(); + // if the cache is full, the new entry is + // cold if possible + if (stackSize > 0) { + // the new cold entry is at the top of the queue + addToQueue(queue, e); + } + } + mapSize++; + // added entries are always added to the stack + addToStack(e); + if (existed) { + // if it was there before (even non-resident), it becomes hot + access(key, hash); + } + return old; + } + + /** + * Remove an entry. Both resident and non-resident entries can be + * removed. 
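get() and access() above throttle stack reordering: a hot entry is only moved back to the top of the stack once at least stackMoveDistance other moves have happened since its last move. That check, pulled out as a pure function with an invented name:

```java
// Standalone mirror of the throttle condition in Segment.get/access.
public class StackMoveDemo {
    // stackMoveCounter: total moves so far; topMove: counter value when this
    // entry was last moved; stackMoveDistance: 0 disables the throttle.
    static boolean shouldMoveToTop(int stackMoveCounter, int topMove,
            int stackMoveDistance) {
        return stackMoveDistance == 0
                || stackMoveCounter - topMove > stackMoveDistance;
    }

    public static void main(String[] args) {
        System.out.println(shouldMoveToTop(100, 50, 32)); // stale entry: move
        System.out.println(shouldMoveToTop(100, 90, 32)); // recently moved: skip
        System.out.println(shouldMoveToTop(7, 7, 0));     // throttle off: move
    }
}
```

Skipping the move for recently-promoted entries avoids taking the per-segment lock on every read of a hot key, which is what makes concurrent reads cheap.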
+ * + * @param key the key (may not be null) + * @param hash the hash + * @return the old value, or null if there was no resident entry + */ + synchronized V remove(long key, int hash) { + int index = hash & mask; + Entry e = entries[index]; + if (e == null) { + return null; + } + V old; + if (e.key == key) { + old = e.value; + entries[index] = e.mapNext; + } else { + Entry last; + do { + last = e; + e = e.mapNext; + if (e == null) { + return null; + } + } while (e.key != key); + old = e.value; + last.mapNext = e.mapNext; + } + mapSize--; + usedMemory -= e.memory; + if (e.stackNext != null) { + removeFromStack(e); + } + if (e.isHot()) { + // when removing a hot entry, the newest cold entry gets hot, + // so the number of hot entries does not change + e = queue.queueNext; + if (e != queue) { + removeFromQueue(e); + if (e.stackNext == null) { + addToStackBottom(e); + } + } + } else { + removeFromQueue(e); + } + pruneStack(); + return old; + } + + /** + * Evict cold entries (resident and non-resident) until the memory limit + * is reached. The new entry is added as a cold entry, except if it is + * the only entry. 
+ */ + private void evict() { + do { + evictBlock(); + } while (usedMemory > maxMemory); + } + + private void evictBlock() { + // ensure there are not too many hot entries: right shift of 5 is + // division by 32, that means if there are only 1/32 (3.125%) or + // less cold entries, a hot entry needs to become cold + while (queueSize <= (mapSize >>> 5) && stackSize > 0) { + convertOldestHotToCold(); + } + // the oldest resident cold entries become non-resident + while (usedMemory > maxMemory && queueSize > 0) { + Entry e = queue.queuePrev; + usedMemory -= e.memory; + removeFromQueue(e); + e.value = null; + e.memory = 0; + addToQueue(queue2, e); + // the size of the non-resident-cold entries needs to be limited + int maxQueue2Size = nonResidentQueueSize * (mapSize - queue2Size); + if (maxQueue2Size >= 0) { + while (queue2Size > maxQueue2Size) { + e = queue2.queuePrev; + int hash = getHash(e.key); + remove(e.key, hash); + } + } + } + } + + private void convertOldestHotToCold() { + // the last entry of the stack is known to be hot + Entry last = stack.stackPrev; + if (last == stack) { + // never remove the stack head itself (this would mean the + // internal structure of the cache is corrupt) + throw new IllegalStateException(); + } + // remove from stack - which is done anyway in the stack pruning, + // but we can do it here as well + removeFromStack(last); + // adding an entry to the queue will make it cold + addToQueue(queue, last); + pruneStack(); + } + + /** + * Ensure the last entry of the stack is cold. + */ + private void pruneStack() { + while (true) { + Entry last = stack.stackPrev; + // must stop at a hot entry or the stack head, + // but the stack head itself is also hot, so we + // don't have to test it + if (last.isHot()) { + break; + } + // the cold entry is still in the queue + removeFromStack(last); + } + } + + /** + * Try to find an entry in the map. 
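evictBlock above enforces two size invariants: a hot entry is demoted once 1/32 or fewer of the mapped entries are cold (the right shift by 5), and the non-resident queue is capped at nonResidentQueueSize times the count of all other entries. Both rules as pure functions — the names are invented for this sketch:

```java
// Standalone mirror of the two size rules in Segment.evictBlock.
public class EvictRulesDemo {
    // True when a hot entry must become cold: cold entries (queueSize) have
    // dropped to 1/32 (~3.125%) or less of the map, and a hot entry exists.
    static boolean mustDemoteHot(int queueSize, int mapSize, int stackSize) {
        return queueSize <= (mapSize >>> 5) && stackSize > 0;
    }

    // Cap on the non-resident queue, as a factor of the other entries.
    static int maxQueue2Size(int nonResidentQueueSize, int mapSize,
            int queue2Size) {
        return nonResidentQueueSize * (mapSize - queue2Size);
    }

    public static void main(String[] args) {
        System.out.println(mustDemoteHot(1, 64, 10));  // 1 <= 64/32: demote
        System.out.println(mustDemoteHot(3, 64, 10));  // 3 >  64/32: keep hot
        System.out.println(maxQueue2Size(3, 100, 10)); // cap = 3 * 90
    }
}
```

Keeping a small cold quota means there is always a victim available that can be evicted without touching the hot working set.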
+ * + * @param key the key + * @param hash the hash + * @return the entry (might be a non-resident) + */ + Entry find(long key, int hash) { + int index = hash & mask; + Entry e = entries[index]; + while (e != null && e.key != key) { + e = e.mapNext; + } + return e; + } + + private void addToStack(Entry e) { + e.stackPrev = stack; + e.stackNext = stack.stackNext; + e.stackNext.stackPrev = e; + stack.stackNext = e; + stackSize++; + e.topMove = stackMoveCounter++; + } + + private void addToStackBottom(Entry e) { + e.stackNext = stack; + e.stackPrev = stack.stackPrev; + e.stackPrev.stackNext = e; + stack.stackPrev = e; + stackSize++; + } + + /** + * Remove the entry from the stack. The head itself must not be removed. + * + * @param e the entry + */ + private void removeFromStack(Entry e) { + e.stackPrev.stackNext = e.stackNext; + e.stackNext.stackPrev = e.stackPrev; + e.stackPrev = e.stackNext = null; + stackSize--; + } + + private void addToQueue(Entry q, Entry e) { + e.queuePrev = q; + e.queueNext = q.queueNext; + e.queueNext.queuePrev = e; + q.queueNext = e; + if (e.value != null) { + queueSize++; + } else { + queue2Size++; + } + } + + private void removeFromQueue(Entry e) { + e.queuePrev.queueNext = e.queueNext; + e.queueNext.queuePrev = e.queuePrev; + e.queuePrev = e.queueNext = null; + if (e.value != null) { + queueSize--; + } else { + queue2Size--; + } + } + + /** + * Get the list of keys. This method allows to read the internal state + * of the cache. + * + * @param cold if true, only keys for the cold entries are returned + * @param nonResident true for non-resident entries + * @return the key list + */ + synchronized List keys(boolean cold, boolean nonResident) { + ArrayList keys = new ArrayList<>(); + if (cold) { + Entry start = nonResident ? 
queue2 : queue; + for (Entry e = start.queueNext; e != start; + e = e.queueNext) { + keys.add(e.key); + } + } else { + for (Entry e = stack.stackNext; e != stack; + e = e.stackNext) { + keys.add(e.key); + } + } + return keys; + } + + /** + * Check whether there is a resident entry for the given key. This + * method does not adjust the internal state of the cache. + * + * @param key the key (may not be null) + * @param hash the hash + * @return true if there is a resident entry + */ + boolean containsKey(long key, int hash) { + Entry e = find(key, hash); + return e != null && e.value != null; + } + + /** + * Get the set of keys for resident entries. + * + * @return the set of keys + */ + synchronized Set keySet() { + HashSet set = new HashSet<>(); + for (Entry e = stack.stackNext; e != stack; e = e.stackNext) { + set.add(e.key); + } + for (Entry e = queue.queueNext; e != queue; e = e.queueNext) { + set.add(e.key); + } + return set; + } + + /** + * Set the maximum memory this cache should use. This will not + * immediately cause entries to get removed however; it will only change + * the limit. To resize the internal array, call the clear method. + * + * @param maxMemory the maximum size (1 or larger) in bytes + */ + void setMaxMemory(long maxMemory) { + this.maxMemory = maxMemory; + } + + } + + /** + * A cache entry. Each entry is either hot (low inter-reference recency; + * LIR), cold (high inter-reference recency; HIR), or non-resident-cold. Hot + * entries are in the stack only. Cold entries are in the queue, and may be + * in the stack. Non-resident-cold entries have their value set to null and + * are in the stack and in the non-resident queue. + * + * @param the value type + */ + static class Entry { + + /** + * The key. + */ + long key; + + /** + * The value. Set to null for non-resident-cold entries. + */ + V value; + + /** + * The estimated memory used. + */ + int memory; + + /** + * When the item was last moved to the top of the stack. 
+ */ + int topMove; + + /** + * The next entry in the stack. + */ + Entry stackNext; + + /** + * The previous entry in the stack. + */ + Entry stackPrev; + + /** + * The next entry in the queue (either the resident queue or the + * non-resident queue). + */ + Entry queueNext; + + /** + * The previous entry in the queue. + */ + Entry queuePrev; + + /** + * The next entry in the map (the chained entry). + */ + Entry mapNext; + + /** + * Whether this entry is hot. Cold entries are in one of the two queues. + * + * @return whether the entry is hot + */ + boolean isHot() { + return queueNext == null; + } + + } + + /** + * The cache configuration. + */ + public static class Config { + + /** + * The maximum memory to use (1 or larger). + */ + public long maxMemory = 1; + + /** + * The number of cache segments (must be a power of 2). + */ + public int segmentCount = 16; + + /** + * How many other item are to be moved to the top of the stack before + * the current item is moved. + */ + public int stackMoveDistance = 32; + + /** + * The number of entries in the non-resident queue, as a factor of the + * number of all other entries in the map. + */ + public final int nonResidentQueueSize = 3; + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/cache/FilePathCache.java b/modules/h2/src/main/java/org/h2/mvstore/cache/FilePathCache.java new file mode 100644 index 0000000000000..b75f78f3114e5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/cache/FilePathCache.java @@ -0,0 +1,181 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore.cache; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import org.h2.store.fs.FileBase; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathWrapper; + +/** + * A file with a read cache. + */ +public class FilePathCache extends FilePathWrapper { + + /** + * The instance. + */ + public static final FilePathCache INSTANCE = new FilePathCache(); + + /** + * Register the file system. + */ + static { + FilePath.register(INSTANCE); + } + + public static FileChannel wrap(FileChannel f) { + return new FileCache(f); + } + + @Override + public FileChannel open(String mode) throws IOException { + return new FileCache(getBase().open(mode)); + } + + @Override + public String getScheme() { + return "cache"; + } + + /** + * A file with a read cache. + */ + public static class FileCache extends FileBase { + + private static final int CACHE_BLOCK_SIZE = 4 * 1024; + private final FileChannel base; + + private final CacheLongKeyLIRS cache; + + { + CacheLongKeyLIRS.Config cc = new CacheLongKeyLIRS.Config(); + // 1 MB cache size + cc.maxMemory = 1024 * 1024; + cache = new CacheLongKeyLIRS<>(cc); + } + + FileCache(FileChannel base) { + this.base = base; + } + + @Override + protected void implCloseChannel() throws IOException { + base.close(); + } + + @Override + public FileChannel position(long newPosition) throws IOException { + base.position(newPosition); + return this; + } + + @Override + public long position() throws IOException { + return base.position(); + } + + @Override + public int read(ByteBuffer dst) throws IOException { + return base.read(dst); + } + + @Override + public synchronized int read(ByteBuffer dst, long position) throws IOException { + long cachePos = getCachePos(position); + int off = (int) (position - cachePos); + int len = CACHE_BLOCK_SIZE - off; + len = Math.min(len, dst.remaining()); + ByteBuffer buff 
= cache.get(cachePos); + if (buff == null) { + buff = ByteBuffer.allocate(CACHE_BLOCK_SIZE); + long pos = cachePos; + while (true) { + int read = base.read(buff, pos); + if (read <= 0) { + break; + } + if (buff.remaining() == 0) { + break; + } + pos += read; + } + int read = buff.position(); + if (read == CACHE_BLOCK_SIZE) { + cache.put(cachePos, buff, CACHE_BLOCK_SIZE); + } else { + if (read <= 0) { + return -1; + } + len = Math.min(len, read - off); + } + } + dst.put(buff.array(), off, len); + return len == 0 ? -1 : len; + } + + private static long getCachePos(long pos) { + return (pos / CACHE_BLOCK_SIZE) * CACHE_BLOCK_SIZE; + } + + @Override + public long size() throws IOException { + return base.size(); + } + + @Override + public synchronized FileChannel truncate(long newSize) throws IOException { + cache.clear(); + base.truncate(newSize); + return this; + } + + @Override + public synchronized int write(ByteBuffer src, long position) throws IOException { + clearCache(src, position); + return base.write(src, position); + } + + @Override + public synchronized int write(ByteBuffer src) throws IOException { + clearCache(src, position()); + return base.write(src); + } + + private void clearCache(ByteBuffer src, long position) { + if (cache.size() > 0) { + int len = src.remaining(); + long p = getCachePos(position); + while (len > 0) { + cache.remove(p); + p += CACHE_BLOCK_SIZE; + len -= CACHE_BLOCK_SIZE; + } + } + } + + @Override + public void force(boolean metaData) throws IOException { + base.force(metaData); + } + + @Override + public FileLock tryLock(long position, long size, boolean shared) + throws IOException { + return base.tryLock(position, size, shared); + } + + @Override + public String toString() { + return "cache:" + base.toString(); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/cache/package.html b/modules/h2/src/main/java/org/h2/mvstore/cache/package.html new file mode 100644 index 0000000000000..9161d6c662041 --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/mvstore/cache/package.html
@@ -0,0 +1,14 @@
+<!--
+Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+and the EPL 1.0 (http://h2database.com/html/license.html).
+Initial Developer: H2 Group
+-->
+<html><head>
+    <meta http-equiv="Content-Type" content="text/html;charset=utf-8" /><title>
+Javadoc package documentation
+</title></head><body>
+<p>
+Classes related to caching.
+</p>
+</body></html>
+
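The FileCache wrapper above caches reads in fixed 4 KB blocks: getCachePos aligns a file position down to its block start, reads are served one block at a time from the LIRS cache, and a write drops every cached block it overlaps. A standalone sketch of that block arithmetic (class and method names here are illustrative, not H2 API; blocksTouched expresses the intended invalidation range directly rather than with the patch's stride loop):

```java
import java.util.HashSet;
import java.util.Set;

class BlockMath {
    static final int CACHE_BLOCK_SIZE = 4 * 1024;

    // Align a file position down to the start of its cache block,
    // as FilePathCache.FileCache.getCachePos does.
    static long getCachePos(long pos) {
        return (pos / CACHE_BLOCK_SIZE) * CACHE_BLOCK_SIZE;
    }

    // The block starts a write of 'len' bytes at 'position' overlaps;
    // these are the cache keys a write must invalidate.
    static Set<Long> blocksTouched(long position, int len) {
        Set<Long> blocks = new HashSet<>();
        if (len <= 0) {
            return blocks;
        }
        long end = getCachePos(position + len - 1);
        for (long p = getCachePos(position); p <= end; p += CACHE_BLOCK_SIZE) {
            blocks.add(p);
        }
        return blocks;
    }

    public static void main(String[] args) {
        System.out.println(getCachePos(5000));        // block start for position 5000
        System.out.println(blocksTouched(4000, 200)); // blocks a 200-byte write at 4000 overlaps
    }
}
```

Note that a write straddling a block boundary touches one more block than len / CACHE_BLOCK_SIZE rounded up, which is why the range is computed from the last written byte.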
    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/MVDelegateIndex.java b/modules/h2/src/main/java/org/h2/mvstore/db/MVDelegateIndex.java new file mode 100644 index 0000000000000..33ad397fa72d7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/MVDelegateIndex.java @@ -0,0 +1,142 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.util.HashSet; +import java.util.List; +import org.h2.engine.Session; +import org.h2.index.BaseIndex; +import org.h2.index.Cursor; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.TableFilter; +import org.h2.value.ValueLong; + +/** + * An index that delegates indexing to another index. 
+ */ +public class MVDelegateIndex extends BaseIndex implements MVIndex { + + private final MVPrimaryIndex mainIndex; + + public MVDelegateIndex(MVTable table, int id, String name, + MVPrimaryIndex mainIndex, + IndexType indexType) { + IndexColumn[] cols = IndexColumn.wrap(new Column[] { table + .getColumn(mainIndex.getMainIndexColumn()) }); + this.initBaseIndex(table, id, name, cols, indexType); + this.mainIndex = mainIndex; + if (id < 0) { + throw DbException.throwInternalError("" + name); + } + } + + @Override + public void addRowsToBuffer(List rows, String bufferName) { + throw DbException.throwInternalError(); + } + + @Override + public void addBufferedRows(List bufferNames) { + throw DbException.throwInternalError(); + } + + @Override + public void add(Session session, Row row) { + // nothing to do + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + ValueLong min = mainIndex.getKey(first, ValueLong.MIN, ValueLong.MIN); + // ifNull is MIN as well, because the column is never NULL + // so avoid returning all rows (returning one row is OK) + ValueLong max = mainIndex.getKey(last, ValueLong.MAX, ValueLong.MIN); + return mainIndex.find(session, min, max); + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + return mainIndex.findFirstOrLast(session, first); + } + + @Override + public int getColumnIndex(Column col) { + if (col.getColumnId() == mainIndex.getMainIndexColumn()) { + return 0; + } + return -1; + } + + @Override + public boolean isFirstColumn(Column column) { + return getColumnIndex(column) == 0; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return 10 * getCostRangeIndex(masks, mainIndex.getRowCountApproximation(), + filters, 
filter, sortOrder, true, allColumnsSet); + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public void remove(Session session, Row row) { + // nothing to do + } + + @Override + public void remove(Session session) { + mainIndex.setMainIndexColumn(-1); + } + + @Override + public void truncate(Session session) { + // nothing to do + } + + @Override + public void checkRename() { + // ok + } + + @Override + public long getRowCount(Session session) { + return mainIndex.getRowCount(session); + } + + @Override + public long getRowCountApproximation() { + return mainIndex.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/MVIndex.java b/modules/h2/src/main/java/org/h2/mvstore/db/MVIndex.java new file mode 100644 index 0000000000000..3727ec876d10f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/MVIndex.java @@ -0,0 +1,35 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.util.List; + +import org.h2.index.Index; +import org.h2.result.Row; + +/** + * An index that stores the data in an MVStore. + */ +public interface MVIndex extends Index { + + /** + * Add the rows to a temporary storage (not to the index yet). The rows are + * sorted by the index columns. This is to more quickly build the index. + * + * @param rows the rows + * @param bufferName the name of the temporary storage + */ + void addRowsToBuffer(List rows, String bufferName); + + /** + * Add all the index data from the buffers to the index. The index will + * typically use merge sort to add the data more quickly in sorted order. 
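The addBufferedRows contract above says buffered rows arrive pre-sorted and are merged into the index; MVSecondaryIndex later in this patch implements that with a PriorityQueue holding one cursor per buffer. A self-contained sketch of the same k-way merge over plain sorted lists (all names here are illustrative, not H2 API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

class KWayMerge {

    // One cursor per sorted buffer, ordered by its current element,
    // mirroring the Source helper in MVSecondaryIndex.
    static final class Source implements Comparable<Source> {
        final Iterator<Integer> it;
        int current;

        Source(Iterator<Integer> it) {
            this.it = it;
            this.current = it.next(); // caller guarantees a non-empty buffer
        }

        boolean advance() {
            if (!it.hasNext()) {
                return false;
            }
            current = it.next();
            return true;
        }

        @Override
        public int compareTo(Source o) {
            return Integer.compare(current, o.current);
        }
    }

    // Repeatedly emit the smallest head element across all buffers,
    // re-offering a source until it is exhausted.
    static List<Integer> merge(List<List<Integer>> buffers) {
        PriorityQueue<Source> queue = new PriorityQueue<>();
        for (List<Integer> buffer : buffers) {
            if (!buffer.isEmpty()) {
                queue.add(new Source(buffer.iterator()));
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!queue.isEmpty()) {
            Source s = queue.remove();
            out.add(s.current);
            if (s.advance()) {
                queue.offer(s);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> buffers = new ArrayList<>();
        buffers.add(java.util.Arrays.asList(1, 4, 7));
        buffers.add(java.util.Arrays.asList(2, 5));
        System.out.println(merge(buffers));
    }
}
```

The merge is O(n log k) for n total rows across k buffers, which is why building the index from sorted buffers beats inserting rows one by one.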
+ * + * @param bufferNames the names of the temporary storage + */ + void addBufferedRows(List bufferNames); + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/MVPrimaryIndex.java b/modules/h2/src/main/java/org/h2/mvstore/db/MVPrimaryIndex.java new file mode 100644 index 0000000000000..45414ab0f6b75 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/MVPrimaryIndex.java @@ -0,0 +1,416 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.util.Collections; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map.Entry; +import java.util.concurrent.atomic.AtomicLong; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.BaseIndex; +import org.h2.index.Cursor; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.db.TransactionStore.Transaction; +import org.h2.mvstore.db.TransactionStore.TransactionMap; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; + +/** + * A table stored in a MVStore. 
+ */ +public class MVPrimaryIndex extends BaseIndex { + + private final MVTable mvTable; + private final String mapName; + private final TransactionMap dataMap; + private final AtomicLong lastKey = new AtomicLong(0); + private int mainIndexColumn = -1; + + public MVPrimaryIndex(Database db, MVTable table, int id, + IndexColumn[] columns, IndexType indexType) { + this.mvTable = table; + initBaseIndex(table, id, table.getName() + "_DATA", columns, indexType); + int[] sortTypes = new int[columns.length]; + for (int i = 0; i < columns.length; i++) { + sortTypes[i] = SortOrder.ASCENDING; + } + ValueDataType keyType = new ValueDataType(null, null, null); + ValueDataType valueType = new ValueDataType(db.getCompareMode(), db, + sortTypes); + mapName = "table." + getId(); + Transaction t = mvTable.getTransactionBegin(); + dataMap = t.openMap(mapName, keyType, valueType); + t.commit(); + if (!table.isPersistData()) { + dataMap.map.setVolatile(true); + } + Value k = dataMap.map.lastKey(); // include uncommitted keys as well + lastKey.set(k == null ? 
0 : k.getLong()); + } + + @Override + public String getCreateSQL() { + return null; + } + + @Override + public String getPlanSQL() { + return table.getSQL() + ".tableScan"; + } + + public void setMainIndexColumn(int mainIndexColumn) { + this.mainIndexColumn = mainIndexColumn; + } + + public int getMainIndexColumn() { + return mainIndexColumn; + } + + @Override + public void close(Session session) { + // ok + } + + @Override + public void add(Session session, Row row) { + if (mainIndexColumn == -1) { + if (row.getKey() == 0) { + row.setKey(lastKey.incrementAndGet()); + } + } else { + long c = row.getValue(mainIndexColumn).getLong(); + row.setKey(c); + } + + if (mvTable.getContainsLargeObject()) { + for (int i = 0, len = row.getColumnCount(); i < len; i++) { + Value v = row.getValue(i); + Value v2 = v.copy(database, getId()); + if (v2.isLinkedToTable()) { + session.removeAtCommitStop(v2); + } + if (v != v2) { + row.setValue(i, v2); + } + } + } + + TransactionMap map = getMap(session); + Value key = ValueLong.get(row.getKey()); + Value old = map.getLatest(key); + if (old != null) { + String sql = "PRIMARY KEY ON " + table.getSQL(); + if (mainIndexColumn >= 0 && mainIndexColumn < indexColumns.length) { + sql += "(" + indexColumns[mainIndexColumn].getSQL() + ")"; + } + DbException e = DbException.get(ErrorCode.DUPLICATE_KEY_1, sql); + e.setSource(this); + throw e; + } + try { + map.put(key, ValueArray.get(row.getValueList())); + } catch (IllegalStateException e) { + throw mvTable.convertException(e); + } + // because it's possible to directly update the key using the _rowid_ + // syntax + if (row.getKey() > lastKey.get()) { + lastKey.set(row.getKey()); + } + } + + @Override + public void remove(Session session, Row row) { + if (mvTable.getContainsLargeObject()) { + for (int i = 0, len = row.getColumnCount(); i < len; i++) { + Value v = row.getValue(i); + if (v.isLinkedToTable()) { + session.removeAtCommit(v); + } + } + } + TransactionMap map = getMap(session); + try { + 
Value old = map.remove(ValueLong.get(row.getKey())); + if (old == null) { + throw DbException.get(ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1, + getSQL() + ": " + row.getKey()); + } + } catch (IllegalStateException e) { + throw mvTable.convertException(e); + } + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + ValueLong min, max; + if (first == null) { + min = ValueLong.MIN; + } else if (mainIndexColumn < 0) { + min = ValueLong.get(first.getKey()); + } else { + ValueLong v = (ValueLong) first.getValue(mainIndexColumn); + if (v == null) { + min = ValueLong.get(first.getKey()); + } else { + min = v; + } + } + if (last == null) { + max = ValueLong.MAX; + } else if (mainIndexColumn < 0) { + max = ValueLong.get(last.getKey()); + } else { + ValueLong v = (ValueLong) last.getValue(mainIndexColumn); + if (v == null) { + max = ValueLong.get(last.getKey()); + } else { + max = v; + } + } + TransactionMap map = getMap(session); + return new MVStoreCursor(session, map.entryIterator(min, max)); + } + + @Override + public MVTable getTable() { + return mvTable; + } + + @Override + public Row getRow(Session session, long key) { + TransactionMap map = getMap(session); + Value v = map.get(ValueLong.get(key)); + if (v == null) { + throw DbException.get(ErrorCode.ROW_NOT_FOUND_IN_PRIMARY_INDEX, + getSQL() + ": " + key); + } + ValueArray array = (ValueArray) v; + Row row = session.createRow(array.getList(), 0); + row.setKey(key); + return row; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + try { + return 10 * getCostRangeIndex(masks, dataMap.sizeAsLongMax(), + filters, filter, sortOrder, true, allColumnsSet); + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } + } + + @Override + public int getColumnIndex(Column col) { + // can not use this index - use the delegate index instead + return -1; + } + 
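MVPrimaryIndex.add above allocates a row key from an AtomicLong when none was supplied (key 0), and bumps the counter whenever an explicit _rowid_ overtakes it so later auto-assigned keys never collide. A minimal sketch of that allocation policy (RowKeyAllocator is an illustrative name, not H2 API; it uses accumulateAndGet where the patch uses a plain check-then-set):

```java
import java.util.concurrent.atomic.AtomicLong;

class RowKeyAllocator {
    private final AtomicLong lastKey = new AtomicLong(0);

    // Key 0 means "unassigned": hand out the next counter value.
    // An explicit key is kept, and the counter is raised past it,
    // mirroring the "_rowid_ syntax" case in MVPrimaryIndex.add.
    long assign(long requestedKey) {
        if (requestedKey == 0) {
            return lastKey.incrementAndGet();
        }
        lastKey.accumulateAndGet(requestedKey, Math::max);
        return requestedKey;
    }

    public static void main(String[] args) {
        RowKeyAllocator a = new RowKeyAllocator();
        System.out.println(a.assign(0));  // auto-assigned
        System.out.println(a.assign(10)); // explicit key
        System.out.println(a.assign(0));  // auto-assignment continues past 10
    }
}
```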
+ @Override + public boolean isFirstColumn(Column column) { + return false; + } + + @Override + public void remove(Session session) { + TransactionMap map = getMap(session); + if (!map.isClosed()) { + Transaction t = session.getTransaction(); + t.removeMap(map); + } + } + + @Override + public void truncate(Session session) { + TransactionMap map = getMap(session); + if (mvTable.getContainsLargeObject()) { + database.getLobStorage().removeAllForTable(table.getId()); + } + map.clear(); + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + TransactionMap map = getMap(session); + ValueLong v = (ValueLong) (first ? map.firstKey() : map.lastKey()); + if (v == null) { + return new MVStoreCursor(session, + Collections.> emptyList().iterator()); + } + Value value = map.get(v); + Entry e = new DataUtils.MapEntry(v, value); + List> list = Collections.singletonList(e); + MVStoreCursor c = new MVStoreCursor(session, list.iterator()); + c.next(); + return c; + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public long getRowCount(Session session) { + TransactionMap map = getMap(session); + return map.sizeAsLong(); + } + + /** + * The maximum number of rows, including uncommitted rows of any session. + * + * @return the maximum number of rows + */ + public long getRowCountMax() { + try { + return dataMap.sizeAsLongMax(); + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } + } + + @Override + public long getRowCountApproximation() { + return getRowCountMax(); + } + + @Override + public long getDiskSpaceUsed() { + // TODO estimate disk space usage + return 0; + } + + public String getMapName() { + return mapName; + } + + @Override + public void checkRename() { + // ok + } + + /** + * Get the key from the row. 
+ * + * @param row the row + * @param ifEmpty the value to use if the row is empty + * @param ifNull the value to use if the column is NULL + * @return the key + */ + ValueLong getKey(SearchRow row, ValueLong ifEmpty, ValueLong ifNull) { + if (row == null) { + return ifEmpty; + } + Value v = row.getValue(mainIndexColumn); + if (v == null) { + throw DbException.throwInternalError(row.toString()); + } else if (v == ValueNull.INSTANCE) { + return ifNull; + } + return (ValueLong) v.convertTo(Value.LONG); + } + + /** + * Search for a specific row or a set of rows. + * + * @param session the session + * @param first the key of the first row + * @param last the key of the last row + * @return the cursor + */ + Cursor find(Session session, ValueLong first, ValueLong last) { + TransactionMap map = getMap(session); + return new MVStoreCursor(session, map.entryIterator(first, last)); + } + + @Override + public boolean isRowIdIndex() { + return true; + } + + /** + * Get the map to store the data. + * + * @param session the session + * @return the map + */ + TransactionMap getMap(Session session) { + if (session == null) { + return dataMap; + } + Transaction t = session.getTransaction(); + return dataMap.getInstance(t, Long.MAX_VALUE); + } + + /** + * A cursor. + */ + class MVStoreCursor implements Cursor { + + private final Session session; + private final Iterator> it; + private Entry current; + private Row row; + + public MVStoreCursor(Session session, Iterator> it) { + this.session = session; + this.it = it; + } + + @Override + public Row get() { + if (row == null) { + if (current != null) { + ValueArray array = (ValueArray) current.getValue(); + row = session.createRow(array.getList(), 0); + row.setKey(current.getKey().getLong()); + } + } + return row; + } + + @Override + public SearchRow getSearchRow() { + return get(); + } + + @Override + public boolean next() { + current = it.hasNext() ? 
it.next() : null; + row = null; + return current != null; + } + + @Override + public boolean previous() { + throw DbException.getUnsupportedException("previous"); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/MVSecondaryIndex.java b/modules/h2/src/main/java/org/h2/mvstore/db/MVSecondaryIndex.java new file mode 100644 index 0000000000000..423c8eb5b2378 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/MVSecondaryIndex.java @@ -0,0 +1,530 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.PriorityQueue; +import java.util.Queue; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.BaseIndex; +import org.h2.index.Cursor; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.db.TransactionStore.Transaction; +import org.h2.mvstore.db.TransactionStore.TransactionMap; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.TableFilter; +import org.h2.util.New; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; + +/** + * A table stored in a MVStore. + */ +public final class MVSecondaryIndex extends BaseIndex implements MVIndex { + + /** + * The multi-value table. 
+ */ + final MVTable mvTable; + private final int keyColumns; + private final TransactionMap dataMap; + + public MVSecondaryIndex(Database db, MVTable table, int id, String indexName, + IndexColumn[] columns, IndexType indexType) { + this.mvTable = table; + initBaseIndex(table, id, indexName, columns, indexType); + if (!database.isStarting()) { + checkIndexColumnTypes(columns); + } + // always store the row key in the map key, + // even for unique indexes, as some of the index columns could be null + keyColumns = columns.length + 1; + String mapName = "index." + getId(); + int[] sortTypes = new int[keyColumns]; + for (int i = 0; i < columns.length; i++) { + sortTypes[i] = columns[i].sortType; + } + sortTypes[keyColumns - 1] = SortOrder.ASCENDING; + ValueDataType keyType = new ValueDataType( + db.getCompareMode(), db, sortTypes); + ValueDataType valueType = new ValueDataType(null, null, null); + Transaction t = mvTable.getTransactionBegin(); + dataMap = t.openMap(mapName, keyType, valueType); + t.commit(); + if (!keyType.equals(dataMap.getKeyType())) { + throw DbException.throwInternalError("Incompatible key type"); + } + } + + @Override + public void addRowsToBuffer(List rows, String bufferName) { + MVMap map = openMap(bufferName); + for (Row row : rows) { + ValueArray key = convertToKey(row); + map.put(key, ValueNull.INSTANCE); + } + } + + private static final class Source { + private final Iterator iterator; + ValueArray currentRowData; + + public Source(Iterator iterator) { + this.iterator = iterator; + this.currentRowData = iterator.next(); + } + + public boolean hasNext() { + boolean result = iterator.hasNext(); + if(result) { + currentRowData = iterator.next(); + } + return result; + } + + public ValueArray next() { + return currentRowData; + } + + public static final class Comparator implements java.util.Comparator { + private final CompareMode compareMode; + + public Comparator(CompareMode compareMode) { + this.compareMode = compareMode; + } + + @Override + 
public int compare(Source one, Source two) { + return one.currentRowData.compareTo(two.currentRowData, compareMode); + } + } + } + + @Override + public void addBufferedRows(List bufferNames) { + ArrayList mapNames = new ArrayList<>(bufferNames); + CompareMode compareMode = database.getCompareMode(); + int buffersCount = bufferNames.size(); + Queue queue = new PriorityQueue<>(buffersCount, new Source.Comparator(compareMode)); + for (String bufferName : bufferNames) { + Iterator iter = openMap(bufferName).keyIterator(null); + if (iter.hasNext()) { + queue.add(new Source(iter)); + } + } + + try { + while (!queue.isEmpty()) { + Source s = queue.remove(); + ValueArray rowData = s.next(); + + if (indexType.isUnique()) { + Value[] array = rowData.getList(); + // don't change the original value + array = array.clone(); + array[keyColumns - 1] = ValueLong.MIN; + ValueArray unique = ValueArray.get(array); + SearchRow row = convertToSearchRow(rowData); + if (!mayHaveNullDuplicates(row)) { + requireUnique(row, dataMap, unique); + } + } + + dataMap.putCommitted(rowData, ValueNull.INSTANCE); + + if (s.hasNext()) { + queue.offer(s); + } + } + } finally { + for (String tempMapName : mapNames) { + MVMap map = openMap(tempMapName); + map.getStore().removeMap(map); + } + } + } + + private MVMap openMap(String mapName) { + int[] sortTypes = new int[keyColumns]; + for (int i = 0; i < indexColumns.length; i++) { + sortTypes[i] = indexColumns[i].sortType; + } + sortTypes[keyColumns - 1] = SortOrder.ASCENDING; + ValueDataType keyType = new ValueDataType( + database.getCompareMode(), database, sortTypes); + ValueDataType valueType = new ValueDataType(null, null, null); + MVMap.Builder builder = + new MVMap.Builder().keyType(keyType).valueType(valueType); + MVMap map = database.getMvStore(). 
+ getStore().openMap(mapName, builder); + if (!keyType.equals(map.getKeyType())) { + throw DbException.throwInternalError("Incompatible key type"); + } + return map; + } + + @Override + public void close(Session session) { + // ok + } + + @Override + public void add(Session session, Row row) { + TransactionMap map = getMap(session); + ValueArray array = convertToKey(row); + ValueArray unique = null; + if (indexType.isUnique()) { + // this will detect committed entries only + unique = convertToKey(row); + unique.getList()[keyColumns - 1] = ValueLong.MIN; + if (mayHaveNullDuplicates(row)) { + // No further unique checks required + unique = null; + } else { + requireUnique(row, map, unique); + } + } + try { + map.put(array, ValueNull.INSTANCE); + } catch (IllegalStateException e) { + throw mvTable.convertException(e); + } + if (unique != null) { + // This code expects that mayHaveDuplicates(row) == false + Iterator it = map.keyIterator(unique, true); + while (it.hasNext()) { + ValueArray k = (ValueArray) it.next(); + if (compareRows(row, convertToSearchRow(k)) != 0) { + break; + } + if (map.isSameTransaction(k)) { + continue; + } + if (map.get(k) != null) { + // committed + throw getDuplicateKeyException(k.toString()); + } + throw DbException.get(ErrorCode.CONCURRENT_UPDATE_1, table.getName()); + } + } + } + + private void requireUnique(SearchRow row, TransactionMap map, ValueArray unique) { + Value key = map.ceilingKey(unique); + if (key != null) { + ValueArray k = (ValueArray) key; + if (compareRows(row, convertToSearchRow(k)) == 0) { + // committed + throw getDuplicateKeyException(k.toString()); + } + } + } + + @Override + public void remove(Session session, Row row) { + ValueArray array = convertToKey(row); + TransactionMap map = getMap(session); + try { + Value old = map.remove(array); + if (old == null) { + throw DbException.get(ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1, + getSQL() + ": " + row.getKey()); + } + } catch (IllegalStateException e) { + throw 
mvTable.convertException(e); + } + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return find(session, first, false, last); + } + + private Cursor find(Session session, SearchRow first, boolean bigger, SearchRow last) { + ValueArray min = convertToKey(first); + if (min != null) { + min.getList()[keyColumns - 1] = ValueLong.MIN; + } + TransactionMap map = getMap(session); + if (bigger && min != null) { + // search for the next: first skip 1, then 2, 4, 8, until + // we have a higher key; then skip 4, 2,... + // (binary search), until 1 + int offset = 1; + while (true) { + ValueArray v = (ValueArray) map.relativeKey(min, offset); + if (v != null) { + boolean foundHigher = false; + for (int i = 0; i < keyColumns - 1; i++) { + int idx = columnIds[i]; + Value b = first.getValue(idx); + if (b == null) { + break; + } + Value a = v.getList()[i]; + if (database.compare(a, b) > 0) { + foundHigher = true; + break; + } + } + if (!foundHigher) { + offset += offset; + min = v; + continue; + } + } + if (offset > 1) { + offset /= 2; + continue; + } + if (map.get(v) == null) { + min = (ValueArray) map.higherKey(min); + if (min == null) { + break; + } + continue; + } + min = v; + break; + } + if (min == null) { + return new MVStoreCursor(session, + Collections.emptyList().iterator(), null); + } + } + return new MVStoreCursor(session, map.keyIterator(min), last); + } + + private ValueArray convertToKey(SearchRow r) { + if (r == null) { + return null; + } + Value[] array = new Value[keyColumns]; + for (int i = 0; i < columns.length; i++) { + Column c = columns[i]; + int idx = c.getColumnId(); + Value v = r.getValue(idx); + if (v != null) { + array[i] = v.convertTo(c.getType()); + } + } + array[keyColumns - 1] = ValueLong.get(r.getKey()); + return ValueArray.get(array); + } + + /** + * Convert array of values to a SearchRow. 
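The find(..., bigger, ...) comment above describes an exponential probe: skip 1, then 2, 4, 8 entries until a higher key appears, then halve the stride back down to 1. A standalone sketch of that search over a sorted int array, assuming the starting element compares less than or equal to the target as it does in the index code (class and method names are illustrative):

```java
class Gallop {

    // First index after 'from' whose value is strictly greater than
    // 'target', or a.length if none. Assumes a is sorted ascending and
    // a[from] <= target. Mirrors the "skip 1, 2, 4, 8, then halve"
    // probe in MVSecondaryIndex.find(..., bigger, ...).
    static int firstGreater(int[] a, int from, int target) {
        int offset = 1;
        int pos = from;
        while (true) {
            int probe = pos + offset;
            if (probe < a.length && a[probe] <= target) {
                pos = probe;      // still not higher: double the stride
                offset += offset;
                continue;
            }
            if (offset > 1) {     // overshot (or ran off the end): halve
                offset /= 2;
                continue;
            }
            return pos + 1;       // stride is 1 and a[pos + 1] > target (or end)
        }
    }

    public static void main(String[] args) {
        System.out.println(firstGreater(new int[] {1, 2, 2, 2, 3, 5}, 0, 2));
    }
}
```

Compared with a plain linear scan this takes O(log d) probes, where d is the distance to the first higher key, which matters when many duplicate keys must be skipped.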
+ * + * @param key the index key + * @return the row + */ + SearchRow convertToSearchRow(ValueArray key) { + Value[] array = key.getList(); + SearchRow searchRow = mvTable.getTemplateRow(); + searchRow.setKey((array[array.length - 1]).getLong()); + Column[] cols = getColumns(); + for (int i = 0; i < array.length - 1; i++) { + Column c = cols[i]; + int idx = c.getColumnId(); + Value v = array[i]; + searchRow.setValue(idx, v); + } + return searchRow; + } + + @Override + public MVTable getTable() { + return mvTable; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + try { + return 10 * getCostRangeIndex(masks, dataMap.sizeAsLongMax(), + filters, filter, sortOrder, false, allColumnsSet); + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } + } + + @Override + public void remove(Session session) { + TransactionMap map = getMap(session); + if (!map.isClosed()) { + Transaction t = session.getTransaction(); + t.removeMap(map); + } + } + + @Override + public void truncate(Session session) { + TransactionMap map = getMap(session); + map.clear(); + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + TransactionMap map = getMap(session); + Value key = first ? map.firstKey() : map.lastKey(); + while (true) { + if (key == null) { + return new MVStoreCursor(session, + Collections.emptyList().iterator(), null); + } + if (((ValueArray) key).getList()[0] != ValueNull.INSTANCE) { + break; + } + key = first ? 
map.higherKey(key) : map.lowerKey(key); + } + ArrayList list = New.arrayList(); + list.add(key); + MVStoreCursor cursor = new MVStoreCursor(session, list.iterator(), null); + cursor.next(); + return cursor; + } + + @Override + public boolean needRebuild() { + try { + return dataMap.sizeAsLongMax() == 0; + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } + } + + @Override + public long getRowCount(Session session) { + TransactionMap map = getMap(session); + return map.sizeAsLong(); + } + + @Override + public long getRowCountApproximation() { + try { + return dataMap.sizeAsLongMax(); + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } + } + + @Override + public long getDiskSpaceUsed() { + // TODO estimate disk space usage + return 0; + } + + @Override + public boolean canFindNext() { + return true; + } + + @Override + public Cursor findNext(Session session, SearchRow higherThan, SearchRow last) { + return find(session, higherThan, true, last); + } + + @Override + public void checkRename() { + // ok + } + + /** + * Get the map to store the data. + * + * @param session the session + * @return the map + */ + private TransactionMap getMap(Session session) { + if (session == null) { + return dataMap; + } + Transaction t = session.getTransaction(); + return dataMap.getInstance(t, Long.MAX_VALUE); + } + + /** + * A cursor. 
+ */ + final class MVStoreCursor implements Cursor { + + private final Session session; + private final Iterator it; + private final SearchRow last; + private Value current; + private SearchRow searchRow; + private Row row; + + MVStoreCursor(Session session, Iterator it, SearchRow last) { + this.session = session; + this.it = it; + this.last = last; + } + + @Override + public Row get() { + if (row == null) { + SearchRow r = getSearchRow(); + if (r != null) { + row = mvTable.getRow(session, r.getKey()); + } + } + return row; + } + + @Override + public SearchRow getSearchRow() { + if (searchRow == null) { + if (current != null) { + searchRow = convertToSearchRow((ValueArray) current); + } + } + return searchRow; + } + + @Override + public boolean next() { + current = it.hasNext() ? it.next() : null; + searchRow = null; + if (current != null) { + if (last != null && compareRows(getSearchRow(), last) > 0) { + searchRow = null; + current = null; + } + } + row = null; + return current != null; + } + + @Override + public boolean previous() { + throw DbException.getUnsupportedException("previous"); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/MVSpatialIndex.java b/modules/h2/src/main/java/org/h2/mvstore/db/MVSpatialIndex.java new file mode 100644 index 0000000000000..aa9f67c22d277 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/MVSpatialIndex.java @@ -0,0 +1,390 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.index.BaseIndex; +import org.h2.index.Cursor; +import org.h2.index.IndexType; +import org.h2.index.SpatialIndex; +import org.h2.index.SpatialTreeIndex; +import org.h2.message.DbException; +import org.h2.mvstore.db.TransactionStore.Transaction; +import org.h2.mvstore.db.TransactionStore.TransactionMap; +import org.h2.mvstore.db.TransactionStore.VersionedValue; +import org.h2.mvstore.db.TransactionStore.VersionedValueType; +import org.h2.mvstore.rtree.MVRTreeMap; +import org.h2.mvstore.rtree.MVRTreeMap.RTreeCursor; +import org.h2.mvstore.rtree.SpatialKey; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.TableFilter; +import org.h2.value.Value; +import org.h2.value.ValueGeometry; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.locationtech.jts.geom.Envelope; +import org.locationtech.jts.geom.Geometry; + +/** + * This is an index based on a MVRTreeMap. + * + * @author Thomas Mueller + * @author Noel Grandin + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public class MVSpatialIndex extends BaseIndex implements SpatialIndex, MVIndex { + + /** + * The multi-value table. + */ + final MVTable mvTable; + + private final String mapName; + private final TransactionMap dataMap; + private final MVRTreeMap spatialMap; + + /** + * Constructor. 
+ * + * @param db the database + * @param table the table instance + * @param id the index id + * @param indexName the index name + * @param columns the indexed columns (only one geometry column allowed) + * @param indexType the index type (only spatial index) + */ + public MVSpatialIndex( + Database db, MVTable table, int id, String indexName, + IndexColumn[] columns, IndexType indexType) { + if (columns.length != 1) { + throw DbException.getUnsupportedException( + "Can only index one column"); + } + IndexColumn col = columns[0]; + if ((col.sortType & SortOrder.DESCENDING) != 0) { + throw DbException.getUnsupportedException( + "Cannot index in descending order"); + } + if ((col.sortType & SortOrder.NULLS_FIRST) != 0) { + throw DbException.getUnsupportedException( + "Nulls first is not supported"); + } + if ((col.sortType & SortOrder.NULLS_LAST) != 0) { + throw DbException.getUnsupportedException( + "Nulls last is not supported"); + } + if (col.column.getType() != Value.GEOMETRY) { + throw DbException.getUnsupportedException( + "Spatial index on non-geometry column, " + + col.column.getCreateSQL()); + } + this.mvTable = table; + initBaseIndex(table, id, indexName, columns, indexType); + if (!database.isStarting()) { + checkIndexColumnTypes(columns); + } + mapName = "index." + getId(); + ValueDataType vt = new ValueDataType(null, null, null); + VersionedValueType valueType = new VersionedValueType(vt); + MVRTreeMap.Builder mapBuilder = + new MVRTreeMap.Builder(). 
+ valueType(valueType); + spatialMap = db.getMvStore().getStore().openMap(mapName, mapBuilder); + Transaction t = mvTable.getTransactionBegin(); + dataMap = t.openMap(spatialMap); + t.commit(); + } + + @Override + public void addRowsToBuffer(List rows, String bufferName) { + throw DbException.throwInternalError(); + } + + @Override + public void addBufferedRows(List bufferNames) { + throw DbException.throwInternalError(); + } + + @Override + public void close(Session session) { + // ok + } + + @Override + public void add(Session session, Row row) { + TransactionMap map = getMap(session); + SpatialKey key = getKey(row); + + if (key.isNull()) { + return; + } + + if (indexType.isUnique()) { + // this will detect committed entries only + RTreeCursor cursor = spatialMap.findContainedKeys(key); + Iterator it = map.wrapIterator(cursor, false); + while (it.hasNext()) { + SpatialKey k = it.next(); + if (k.equalsIgnoringId(key)) { + throw getDuplicateKeyException(key.toString()); + } + } + } + try { + map.put(key, ValueLong.get(0)); + } catch (IllegalStateException e) { + throw mvTable.convertException(e); + } + if (indexType.isUnique()) { + // check if there is another (uncommitted) entry + RTreeCursor cursor = spatialMap.findContainedKeys(key); + Iterator it = map.wrapIterator(cursor, true); + while (it.hasNext()) { + SpatialKey k = it.next(); + if (k.equalsIgnoringId(key)) { + if (map.isSameTransaction(k)) { + continue; + } + map.remove(key); + if (map.get(k) != null) { + // committed + throw getDuplicateKeyException(k.toString()); + } + throw DbException.get(ErrorCode.CONCURRENT_UPDATE_1, table.getName()); + } + } + } + } + + @Override + public void remove(Session session, Row row) { + SpatialKey key = getKey(row); + + if (key.isNull()) { + return; + } + + TransactionMap map = getMap(session); + try { + Value old = map.remove(key); + if (old == null) { + old = map.remove(key); + throw DbException.get(ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1, + getSQL() + ": " + 
row.getKey()); + } + } catch (IllegalStateException e) { + throw mvTable.convertException(e); + } + } + + @Override + public Cursor find(TableFilter filter, SearchRow first, SearchRow last) { + return find(filter.getSession()); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return find(session); + } + + private Cursor find(Session session) { + Iterator cursor = spatialMap.keyIterator(null); + TransactionMap map = getMap(session); + Iterator it = map.wrapIterator(cursor, false); + return new MVStoreCursor(session, it); + } + + @Override + public Cursor findByGeometry(TableFilter filter, SearchRow first, + SearchRow last, SearchRow intersection) { + Session session = filter.getSession(); + if (intersection == null) { + return find(session, first, last); + } + Iterator cursor = + spatialMap.findIntersectingKeys(getKey(intersection)); + TransactionMap map = getMap(session); + Iterator it = map.wrapIterator(cursor, false); + return new MVStoreCursor(session, it); + } + + private SpatialKey getKey(SearchRow row) { + Value v = row.getValue(columnIds[0]); + if (v == ValueNull.INSTANCE) { + return new SpatialKey(row.getKey()); + } + Geometry g = ((ValueGeometry) v.convertTo(Value.GEOMETRY)).getGeometryNoCopy(); + Envelope env = g.getEnvelopeInternal(); + return new SpatialKey(row.getKey(), + (float) env.getMinX(), (float) env.getMaxX(), + (float) env.getMinY(), (float) env.getMaxY()); + } + + /** + * Get the row with the given index key. 
+ * + * @param key the index key + * @return the row + */ + SearchRow getRow(SpatialKey key) { + SearchRow searchRow = mvTable.getTemplateRow(); + searchRow.setKey(key.getId()); + return searchRow; + } + + @Override + public MVTable getTable() { + return mvTable; + } + + @Override + public double getCost(Session session, int[] masks, TableFilter[] filters, + int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return SpatialTreeIndex.getCostRangeIndex(masks, columns); + } + + @Override + public void remove(Session session) { + TransactionMap map = getMap(session); + if (!map.isClosed()) { + Transaction t = session.getTransaction(); + t.removeMap(map); + } + } + + @Override + public void truncate(Session session) { + TransactionMap map = getMap(session); + map.clear(); + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + if (!first) { + throw DbException.throwInternalError( + "Spatial Index can only be fetch in ascending order"); + } + return find(session); + } + + @Override + public boolean needRebuild() { + try { + return dataMap.sizeAsLongMax() == 0; + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } + } + + @Override + public long getRowCount(Session session) { + TransactionMap map = getMap(session); + return map.sizeAsLong(); + } + + @Override + public long getRowCountApproximation() { + try { + return dataMap.sizeAsLongMax(); + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } + } + + @Override + public long getDiskSpaceUsed() { + // TODO estimate disk space usage + return 0; + } + + @Override + public void checkRename() { + // ok + } + + /** + * Get the map to store the data. 
+ * + * @param session the session + * @return the map + */ + TransactionMap getMap(Session session) { + if (session == null) { + return dataMap; + } + Transaction t = session.getTransaction(); + return dataMap.getInstance(t, Long.MAX_VALUE); + } + + /** + * A cursor. + */ + class MVStoreCursor implements Cursor { + + private final Session session; + private final Iterator it; + private SpatialKey current; + private SearchRow searchRow; + private Row row; + + public MVStoreCursor(Session session, Iterator it) { + this.session = session; + this.it = it; + } + + @Override + public Row get() { + if (row == null) { + SearchRow r = getSearchRow(); + if (r != null) { + row = mvTable.getRow(session, r.getKey()); + } + } + return row; + } + + @Override + public SearchRow getSearchRow() { + if (searchRow == null) { + if (current != null) { + searchRow = getRow(current); + } + } + return searchRow; + } + + @Override + public boolean next() { + current = it.next(); + searchRow = null; + row = null; + return current != null; + } + + @Override + public boolean previous() { + throw DbException.getUnsupportedException("previous"); + } + + } + +} + diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/MVTable.java b/modules/h2/src/main/java/org/h2/mvstore/db/MVTable.java new file mode 100644 index 0000000000000..dbdd61c398dc1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/MVTable.java @@ -0,0 +1,928 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.mvstore.db;
+
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import org.h2.api.DatabaseEventListener;
+import org.h2.api.ErrorCode;
+import org.h2.command.ddl.CreateTableData;
+import org.h2.constraint.Constraint;
+import org.h2.constraint.ConstraintReferential;
+import org.h2.engine.Constants;
+import org.h2.engine.DbObject;
+import org.h2.engine.Session;
+import org.h2.engine.SysProperties;
+import org.h2.index.Cursor;
+import org.h2.index.Index;
+import org.h2.index.IndexType;
+import org.h2.index.MultiVersionIndex;
+import org.h2.message.DbException;
+import org.h2.message.Trace;
+import org.h2.mvstore.DataUtils;
+import org.h2.mvstore.db.MVTableEngine.Store;
+import org.h2.mvstore.db.TransactionStore.Transaction;
+import org.h2.result.Row;
+import org.h2.result.SortOrder;
+import org.h2.schema.SchemaObject;
+import org.h2.table.Column;
+import org.h2.table.IndexColumn;
+import org.h2.table.Table;
+import org.h2.table.TableBase;
+import org.h2.table.TableType;
+import org.h2.util.DebuggingThreadLocal;
+import org.h2.util.MathUtils;
+import org.h2.util.New;
+import org.h2.value.DataType;
+import org.h2.value.Value;
+
+/**
+ * A table stored in a MVStore.
+ */
+public class MVTable extends TableBase {
+    /**
+     * The table name this thread is waiting to lock.
+     */
+    public static final DebuggingThreadLocal<String> WAITING_FOR_LOCK;
+
+    /**
+     * The table names this thread has exclusively locked.
+     */
+    public static final DebuggingThreadLocal<ArrayList<String>> EXCLUSIVE_LOCKS;
+
+    /**
+     * The table names this thread has a shared lock on.
+ */
+    public static final DebuggingThreadLocal<ArrayList<String>> SHARED_LOCKS;
+
+    /**
+     * The type of trace lock events.
+     */
+    private enum TraceLockEvent {
+
+        TRACE_LOCK_OK("ok"),
+        TRACE_LOCK_WAITING_FOR("waiting for"),
+        TRACE_LOCK_REQUESTING_FOR("requesting for"),
+        TRACE_LOCK_TIMEOUT_AFTER("timeout after "),
+        TRACE_LOCK_UNLOCK("unlock"),
+        TRACE_LOCK_ADDED_FOR("added for"),
+        TRACE_LOCK_ADD_UPGRADED_FOR("add (upgraded) for ");
+
+        private final String eventText;
+
+        TraceLockEvent(String eventText) {
+            this.eventText = eventText;
+        }
+
+        public String getEventText() {
+            return eventText;
+        }
+    }
+
+    private static final String NO_EXTRA_INFO = "";
+
+    static {
+        if (SysProperties.THREAD_DEADLOCK_DETECTOR) {
+            WAITING_FOR_LOCK = new DebuggingThreadLocal<>();
+            EXCLUSIVE_LOCKS = new DebuggingThreadLocal<>();
+            SHARED_LOCKS = new DebuggingThreadLocal<>();
+        } else {
+            WAITING_FOR_LOCK = null;
+            EXCLUSIVE_LOCKS = null;
+            SHARED_LOCKS = null;
+        }
+    }
+
+    private MVPrimaryIndex primaryIndex;
+    private final ArrayList<Index> indexes = New.arrayList();
+    private volatile long lastModificationId;
+    private volatile Session lockExclusiveSession;
+
+    // using a ConcurrentHashMap as a set
+    private final ConcurrentHashMap<Session, Session> lockSharedSessions =
+            new ConcurrentHashMap<>();
+
+    /**
+     * The queue of sessions waiting to lock the table. It is a FIFO queue to
+     * prevent starvation, since Java's synchronized locking is biased.
+ */ + private final ArrayDeque waitingSessions = new ArrayDeque<>(); + private final Trace traceLock; + private int changesSinceAnalyze; + private int nextAnalyze; + private final boolean containsLargeObject; + private Column rowIdColumn; + + private final MVTableEngine.Store store; + private final TransactionStore transactionStore; + + public MVTable(CreateTableData data, MVTableEngine.Store store) { + super(data); + nextAnalyze = database.getSettings().analyzeAuto; + this.store = store; + this.transactionStore = store.getTransactionStore(); + this.isHidden = data.isHidden; + boolean b = false; + for (Column col : getColumns()) { + if (DataType.isLargeObject(col.getType())) { + b = true; + break; + } + } + containsLargeObject = b; + traceLock = database.getTrace(Trace.LOCK); + } + + /** + * Initialize the table. + * + * @param session the session + */ + void init(Session session) { + primaryIndex = new MVPrimaryIndex(session.getDatabase(), this, getId(), + IndexColumn.wrap(getColumns()), IndexType.createScan(true)); + indexes.add(primaryIndex); + } + + public String getMapName() { + return primaryIndex.getMapName(); + } + + @Override + public boolean lock(Session session, boolean exclusive, + boolean forceLockEvenInMvcc) { + int lockMode = database.getLockMode(); + if (lockMode == Constants.LOCK_MODE_OFF) { + return false; + } + if (!forceLockEvenInMvcc && database.isMultiVersion()) { + // MVCC: update, delete, and insert use a shared lock. 
+            // Select doesn't lock except when using FOR UPDATE and
+            // the system property h2.selectForUpdateMvcc
+            // is not enabled
+            if (exclusive) {
+                exclusive = false;
+            } else {
+                if (lockExclusiveSession == null) {
+                    return false;
+                }
+            }
+        }
+        if (lockExclusiveSession == session) {
+            return true;
+        }
+        if (!exclusive && lockSharedSessions.containsKey(session)) {
+            return true;
+        }
+        synchronized (getLockSyncObject()) {
+            if (!exclusive && lockSharedSessions.containsKey(session)) {
+                return true;
+            }
+            session.setWaitForLock(this, Thread.currentThread());
+            if (SysProperties.THREAD_DEADLOCK_DETECTOR) {
+                WAITING_FOR_LOCK.set(getName());
+            }
+            waitingSessions.addLast(session);
+            try {
+                doLock1(session, lockMode, exclusive);
+            } finally {
+                session.setWaitForLock(null, null);
+                if (SysProperties.THREAD_DEADLOCK_DETECTOR) {
+                    WAITING_FOR_LOCK.remove();
+                }
+                waitingSessions.remove(session);
+            }
+        }
+        return false;
+    }
+
+    /**
+     * The object to synchronize on and wait on. For the multi-threaded mode,
+     * this is this object, but for non-multi-threaded, it is the database, as
+     * in this case all operations are synchronized on the database object.
+ * + * @return the lock sync object + */ + private Object getLockSyncObject() { + if (database.isMultiThreaded()) { + return this; + } + return database; + } + + private void doLock1(Session session, int lockMode, boolean exclusive) { + traceLock(session, exclusive, TraceLockEvent.TRACE_LOCK_REQUESTING_FOR, NO_EXTRA_INFO); + // don't get the current time unless necessary + long max = 0; + boolean checkDeadlock = false; + while (true) { + // if I'm the next one in the queue + if (waitingSessions.getFirst() == session) { + if (doLock2(session, lockMode, exclusive)) { + return; + } + } + if (checkDeadlock) { + ArrayList sessions = checkDeadlock(session, null, null); + if (sessions != null) { + throw DbException.get(ErrorCode.DEADLOCK_1, + getDeadlockDetails(sessions, exclusive)); + } + } else { + // check for deadlocks from now on + checkDeadlock = true; + } + long now = System.nanoTime(); + if (max == 0) { + // try at least one more time + max = now + TimeUnit.MILLISECONDS.toNanos(session.getLockTimeout()); + } else if (now >= max) { + traceLock(session, exclusive, + TraceLockEvent.TRACE_LOCK_TIMEOUT_AFTER, NO_EXTRA_INFO+session.getLockTimeout()); + throw DbException.get(ErrorCode.LOCK_TIMEOUT_1, getName()); + } + try { + traceLock(session, exclusive, TraceLockEvent.TRACE_LOCK_WAITING_FOR, NO_EXTRA_INFO); + if (database.getLockMode() == Constants.LOCK_MODE_TABLE_GC) { + for (int i = 0; i < 20; i++) { + long free = Runtime.getRuntime().freeMemory(); + System.gc(); + long free2 = Runtime.getRuntime().freeMemory(); + if (free == free2) { + break; + } + } + } + // don't wait too long so that deadlocks are detected early + long sleep = Math.min(Constants.DEADLOCK_CHECK, + TimeUnit.NANOSECONDS.toMillis(max - now)); + if (sleep == 0) { + sleep = 1; + } + getLockSyncObject().wait(sleep); + } catch (InterruptedException e) { + // ignore + } + } + } + + private boolean doLock2(Session session, int lockMode, boolean exclusive) { + if (exclusive) { + if (lockExclusiveSession == 
null) { + if (lockSharedSessions.isEmpty()) { + traceLock(session, exclusive, TraceLockEvent.TRACE_LOCK_ADDED_FOR, NO_EXTRA_INFO); + session.addLock(this); + lockExclusiveSession = session; + if (SysProperties.THREAD_DEADLOCK_DETECTOR) { + if (EXCLUSIVE_LOCKS.get() == null) { + EXCLUSIVE_LOCKS.set(new ArrayList()); + } + EXCLUSIVE_LOCKS.get().add(getName()); + } + return true; + } else if (lockSharedSessions.size() == 1 && + lockSharedSessions.containsKey(session)) { + traceLock(session, exclusive, TraceLockEvent.TRACE_LOCK_ADD_UPGRADED_FOR, NO_EXTRA_INFO); + lockExclusiveSession = session; + if (SysProperties.THREAD_DEADLOCK_DETECTOR) { + if (EXCLUSIVE_LOCKS.get() == null) { + EXCLUSIVE_LOCKS.set(new ArrayList()); + } + EXCLUSIVE_LOCKS.get().add(getName()); + } + return true; + } + } + } else { + if (lockExclusiveSession == null) { + if (lockMode == Constants.LOCK_MODE_READ_COMMITTED) { + if (!database.isMultiThreaded() && + !database.isMultiVersion()) { + // READ_COMMITTED: a read lock is acquired, + // but released immediately after the operation + // is complete. + // When allowing only one thread, no lock is + // required. + // Row level locks work like read committed. + return true; + } + } + if (!lockSharedSessions.containsKey(session)) { + traceLock(session, exclusive, TraceLockEvent.TRACE_LOCK_OK, NO_EXTRA_INFO); + session.addLock(this); + lockSharedSessions.put(session, session); + if (SysProperties.THREAD_DEADLOCK_DETECTOR) { + if (SHARED_LOCKS.get() == null) { + SHARED_LOCKS.set(new ArrayList()); + } + SHARED_LOCKS.get().add(getName()); + } + } + return true; + } + } + return false; + } + + private static String getDeadlockDetails(ArrayList sessions, boolean exclusive) { + // We add the thread details here to make it easier for customers to + // match up these error messages with their own logs. 
+ StringBuilder buff = new StringBuilder(); + for (Session s : sessions) { + Table lock = s.getWaitForLock(); + Thread thread = s.getWaitForLockThread(); + buff.append("\nSession ").append(s.toString()) + .append(" on thread ").append(thread.getName()) + .append(" is waiting to lock ").append(lock.toString()) + .append(exclusive ? " (exclusive)" : " (shared)") + .append(" while locking "); + int i = 0; + for (Table t : s.getLocks()) { + if (i++ > 0) { + buff.append(", "); + } + buff.append(t.toString()); + if (t instanceof MVTable) { + if (((MVTable) t).lockExclusiveSession == s) { + buff.append(" (exclusive)"); + } else { + buff.append(" (shared)"); + } + } + } + buff.append('.'); + } + return buff.toString(); + } + + @Override + public ArrayList checkDeadlock(Session session, Session clash, + Set visited) { + // only one deadlock check at any given time + synchronized (MVTable.class) { + if (clash == null) { + // verification is started + clash = session; + visited = new HashSet<>(); + } else if (clash == session) { + // we found a circle where this session is involved + return New.arrayList(); + } else if (visited.contains(session)) { + // we have already checked this session. 
+ // there is a circle, but the sessions in the circle need to + // find it out themselves + return null; + } + visited.add(session); + ArrayList error = null; + for (Session s : lockSharedSessions.keySet()) { + if (s == session) { + // it doesn't matter if we have locked the object already + continue; + } + Table t = s.getWaitForLock(); + if (t != null) { + error = t.checkDeadlock(s, clash, visited); + if (error != null) { + error.add(session); + break; + } + } + } + // take a local copy so we don't see inconsistent data, since we are + // not locked while checking the lockExclusiveSession value + Session copyOfLockExclusiveSession = lockExclusiveSession; + if (error == null && copyOfLockExclusiveSession != null) { + Table t = copyOfLockExclusiveSession.getWaitForLock(); + if (t != null) { + error = t.checkDeadlock(copyOfLockExclusiveSession, clash, + visited); + if (error != null) { + error.add(session); + } + } + } + return error; + } + } + + private void traceLock(Session session, boolean exclusive, TraceLockEvent eventEnum, String extraInfo) { + if (traceLock.isDebugEnabled()) { + traceLock.debug("{0} {1} {2} {3} {4}", session.getId(), + exclusive ? 
"exclusive write lock" : "shared read lock", eventEnum.getEventText(), + getName(), extraInfo); + } + } + + @Override + public boolean isLockedExclusively() { + return lockExclusiveSession != null; + } + + @Override + public boolean isLockedExclusivelyBy(Session session) { + return lockExclusiveSession == session; + } + + @Override + public void unlock(Session s) { + if (database != null) { + traceLock(s, lockExclusiveSession == s, TraceLockEvent.TRACE_LOCK_UNLOCK, NO_EXTRA_INFO); + if (lockExclusiveSession == s) { + lockSharedSessions.remove(s); + lockExclusiveSession = null; + if (SysProperties.THREAD_DEADLOCK_DETECTOR) { + if (EXCLUSIVE_LOCKS.get() != null) { + EXCLUSIVE_LOCKS.get().remove(getName()); + } + } + } + synchronized (getLockSyncObject()) { + if (lockSharedSessions.size() > 0) { + lockSharedSessions.remove(s); + if (SysProperties.THREAD_DEADLOCK_DETECTOR) { + if (SHARED_LOCKS.get() != null) { + SHARED_LOCKS.get().remove(getName()); + } + } + } + if (!waitingSessions.isEmpty()) { + getLockSyncObject().notifyAll(); + } + } + } + } + + @Override + public boolean canTruncate() { + if (getCheckForeignKeyConstraints() && + database.getReferentialIntegrity()) { + ArrayList constraints = getConstraints(); + if (constraints != null) { + for (Constraint c : constraints) { + if (c.getConstraintType() != Constraint.Type.REFERENTIAL) { + continue; + } + ConstraintReferential ref = (ConstraintReferential) c; + if (ref.getRefTable() == this) { + return false; + } + } + } + } + return true; + } + + @Override + public void close(Session session) { + // ignore + } + + @Override + public Row getRow(Session session, long key) { + return primaryIndex.getRow(session, key); + } + + @Override + public Index addIndex(Session session, String indexName, int indexId, + IndexColumn[] cols, IndexType indexType, boolean create, + String indexComment) { + if (indexType.isPrimaryKey()) { + for (IndexColumn c : cols) { + Column column = c.column; + if (column.isNullable()) { + throw 
DbException.get( + ErrorCode.COLUMN_MUST_NOT_BE_NULLABLE_1, + column.getName()); + } + column.setPrimaryKey(true); + } + } + boolean isSessionTemporary = isTemporary() && !isGlobalTemporary(); + if (!isSessionTemporary) { + database.lockMeta(session); + } + MVIndex index; + int mainIndexColumn; + mainIndexColumn = getMainIndexColumn(indexType, cols); + if (database.isStarting()) { + if (transactionStore.store.hasMap("index." + indexId)) { + mainIndexColumn = -1; + } + } else if (primaryIndex.getRowCountMax() != 0) { + mainIndexColumn = -1; + } + if (mainIndexColumn != -1) { + primaryIndex.setMainIndexColumn(mainIndexColumn); + index = new MVDelegateIndex(this, indexId, indexName, primaryIndex, + indexType); + } else if (indexType.isSpatial()) { + index = new MVSpatialIndex(session.getDatabase(), this, indexId, + indexName, cols, indexType); + } else { + index = new MVSecondaryIndex(session.getDatabase(), this, indexId, + indexName, cols, indexType); + } + if (index.needRebuild()) { + rebuildIndex(session, index, indexName); + } + index.setTemporary(isTemporary()); + if (index.getCreateSQL() != null) { + index.setComment(indexComment); + if (isSessionTemporary) { + session.addLocalTempTableIndex(index); + } else { + database.addSchemaObject(session, index); + } + } + indexes.add(index); + setModified(); + return index; + } + + private void rebuildIndex(Session session, MVIndex index, String indexName) { + try { + if (session.getDatabase().getMvStore() == null || + index instanceof MVSpatialIndex) { + // in-memory + rebuildIndexBuffered(session, index); + } else { + rebuildIndexBlockMerge(session, index); + } + } catch (DbException e) { + getSchema().freeUniqueName(indexName); + try { + index.remove(session); + } catch (DbException e2) { + // this could happen, for example on failure in the storage + // but if that is not the case it means + // there is something wrong with the database + trace.error(e2, "could not remove index"); + throw e2; + } + throw e; + } + } + 
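Editor's aside, not part of the patch: `addIndex()` above only creates an `MVDelegateIndex` (a zero-cost view over the row-key map) when `getMainIndexColumn()` reports that the primary key can be stored as the MVStore row key directly. A minimal standalone sketch of that decision rule follows; the class and method names here are hypothetical and only mirror the conditions used by `MVTable.getMainIndexColumn()` (single column, ascending order, integer-like type).

```java
import java.util.Arrays;
import java.util.List;

public class MainIndexColumnRule {

    // Mirrors the checks in MVTable.getMainIndexColumn(): a PRIMARY KEY can
    // delegate to the primary (row-key) index only if it has exactly one
    // column, sorted ascending, of an integer-like SQL type.
    static boolean canDelegateToPrimary(boolean isPrimaryKey,
                                        List<String> columnTypes,
                                        boolean ascending) {
        if (!isPrimaryKey || columnTypes.size() != 1 || !ascending) {
            return false;
        }
        switch (columnTypes.get(0)) {
            case "BYTE":
            case "SHORT":
            case "INT":
            case "LONG":
                return true;
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        // A single ascending BIGINT primary key can delegate.
        System.out.println(canDelegateToPrimary(true, Arrays.asList("LONG"), true));
        // A composite key cannot, so a separate MVSecondaryIndex is built.
        System.out.println(canDelegateToPrimary(true, Arrays.asList("LONG", "INT"), true));
    }
}
```

When delegation applies, no extra map is created at all, which is why `addIndex()` also bails out of delegation if rows already exist or a persisted `index.<id>` map is found at startup.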
+    private void rebuildIndexBlockMerge(Session session, MVIndex index) {
+        if (index instanceof MVSpatialIndex) {
+            // the spatial index doesn't support multi-way merge sort
+            rebuildIndexBuffered(session, index);
+        }
+        // Read entries in memory, sort them, write to a new map (in sorted
+        // order); repeat (using a new map for every block of 1 MB) until all
+        // records are read. Merge all maps to the target (using merge sort;
+        // duplicates are detected in the target). For randomly ordered data,
+        // this should use relatively few write operations.
+        // A possible optimization is: change the buffer size from "row count"
+        // to "amount of memory", and buffer index keys instead of rows.
+        Index scan = getScanIndex(session);
+        long remaining = scan.getRowCount(session);
+        long total = remaining;
+        Cursor cursor = scan.find(session, null, null);
+        long i = 0;
+        Store store = session.getDatabase().getMvStore();
+
+        int bufferSize = database.getMaxMemoryRows() / 2;
+        ArrayList<Row> buffer = new ArrayList<>(bufferSize);
+        String n = getName() + ":" + index.getName();
+        int t = MathUtils.convertLongToInt(total);
+        ArrayList<String> bufferNames = New.arrayList();
+        while (cursor.next()) {
+            Row row = cursor.get();
+            buffer.add(row);
+            database.setProgress(DatabaseEventListener.STATE_CREATE_INDEX, n,
+                    MathUtils.convertLongToInt(i++), t);
+            if (buffer.size() >= bufferSize) {
+                sortRows(buffer, index);
+                String mapName = store.nextTemporaryMapName();
+                index.addRowsToBuffer(buffer, mapName);
+                bufferNames.add(mapName);
+                buffer.clear();
+            }
+            remaining--;
+        }
+        sortRows(buffer, index);
+        if (!bufferNames.isEmpty()) {
+            String mapName = store.nextTemporaryMapName();
+            index.addRowsToBuffer(buffer, mapName);
+            bufferNames.add(mapName);
+            buffer.clear();
+            index.addBufferedRows(bufferNames);
+        } else {
+            addRowsToIndex(session, buffer, index);
+        }
+        if (SysProperties.CHECK && remaining != 0) {
+            DbException.throwInternalError("rowcount remaining=" + remaining +
+                    " " + getName());
+        }
+    }
+
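Editor's aside, not part of the patch: the comment in `rebuildIndexBlockMerge()` describes a classic external sort — fill a fixed-size buffer, sort it, spill it as a sorted run, then k-way-merge all runs into the target. A minimal self-contained sketch of that scheme on integers follows (names are hypothetical; the real code spills runs into temporary MVStore maps and merges inside `addBufferedRows()`).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class BlockMergeSketch {

    // Sort "rows" using at most bufferSize elements of working memory per run,
    // then merge the sorted runs with a priority queue (k-way merge).
    static List<Integer> externalSort(List<Integer> rows, int bufferSize) {
        // Phase 1: spill sorted runs of at most bufferSize rows each.
        List<List<Integer>> runs = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += bufferSize) {
            List<Integer> run = new ArrayList<>(
                    rows.subList(i, Math.min(i + bufferSize, rows.size())));
            run.sort(null);
            runs.add(run);
        }
        // Phase 2: k-way merge; heap entries are {run index, position in run}.
        PriorityQueue<int[]> heap = new PriorityQueue<>(
                (a, b) -> Integer.compare(runs.get(a[0]).get(a[1]),
                                          runs.get(b[0]).get(b[1])));
        for (int r = 0; r < runs.size(); r++) {
            if (!runs.get(r).isEmpty()) {
                heap.add(new int[] {r, 0});
            }
        }
        List<Integer> sorted = new ArrayList<>(rows.size());
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            sorted.add(runs.get(top[0]).get(top[1]));
            if (top[1] + 1 < runs.get(top[0]).size()) {
                heap.add(new int[] {top[0], top[1] + 1});
            }
        }
        return sorted;
    }

    public static void main(String[] args) {
        // Seven rows, runs of three: spills [3,5,8], [1,2,9], [7], then merges.
        System.out.println(externalSort(Arrays.asList(5, 3, 8, 1, 9, 2, 7), 3));
    }
}
```

As the original comment notes, this writes each row roughly twice (once per run, once merged), which is cheap compared to inserting randomly ordered rows into a B-tree one at a time.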
+ private void rebuildIndexBuffered(Session session, Index index) { + Index scan = getScanIndex(session); + long remaining = scan.getRowCount(session); + long total = remaining; + Cursor cursor = scan.find(session, null, null); + long i = 0; + int bufferSize = (int) Math.min(total, database.getMaxMemoryRows()); + ArrayList buffer = new ArrayList<>(bufferSize); + String n = getName() + ":" + index.getName(); + int t = MathUtils.convertLongToInt(total); + while (cursor.next()) { + Row row = cursor.get(); + buffer.add(row); + database.setProgress(DatabaseEventListener.STATE_CREATE_INDEX, n, + MathUtils.convertLongToInt(i++), t); + if (buffer.size() >= bufferSize) { + addRowsToIndex(session, buffer, index); + } + remaining--; + } + addRowsToIndex(session, buffer, index); + if (SysProperties.CHECK && remaining != 0) { + DbException.throwInternalError("rowcount remaining=" + remaining + + " " + getName()); + } + } + + private int getMainIndexColumn(IndexType indexType, IndexColumn[] cols) { + if (primaryIndex.getMainIndexColumn() != -1) { + return -1; + } + if (!indexType.isPrimaryKey() || cols.length != 1) { + return -1; + } + IndexColumn first = cols[0]; + if (first.sortType != SortOrder.ASCENDING) { + return -1; + } + switch (first.column.getType()) { + case Value.BYTE: + case Value.SHORT: + case Value.INT: + case Value.LONG: + break; + default: + return -1; + } + return first.column.getColumnId(); + } + + private static void addRowsToIndex(Session session, ArrayList list, + Index index) { + sortRows(list, index); + for (Row row : list) { + index.add(session, row); + } + list.clear(); + } + + private static void sortRows(ArrayList list, final Index index) { + Collections.sort(list, new Comparator() { + @Override + public int compare(Row r1, Row r2) { + return index.compareRows(r1, r2); + } + }); + } + + @Override + public void removeRow(Session session, Row row) { + lastModificationId = database.getNextModificationDataId(); + Transaction t = session.getTransaction(); 
+ long savepoint = t.setSavepoint(); + try { + for (int i = indexes.size() - 1; i >= 0; i--) { + Index index = indexes.get(i); + index.remove(session, row); + } + } catch (Throwable e) { + t.rollbackToSavepoint(savepoint); + throw DbException.convert(e); + } + analyzeIfRequired(session); + } + + @Override + public void truncate(Session session) { + lastModificationId = database.getNextModificationDataId(); + for (int i = indexes.size() - 1; i >= 0; i--) { + Index index = indexes.get(i); + index.truncate(session); + } + changesSinceAnalyze = 0; + } + + @Override + public void addRow(Session session, Row row) { + lastModificationId = database.getNextModificationDataId(); + Transaction t = session.getTransaction(); + long savepoint = t.setSavepoint(); + try { + for (Index index : indexes) { + index.add(session, row); + } + } catch (Throwable e) { + t.rollbackToSavepoint(savepoint); + DbException de = DbException.convert(e); + if (de.getErrorCode() == ErrorCode.DUPLICATE_KEY_1) { + for (Index index : indexes) { + if (index.getIndexType().isUnique() && + index instanceof MultiVersionIndex) { + MultiVersionIndex mv = (MultiVersionIndex) index; + if (mv.isUncommittedFromOtherSession(session, row)) { + throw DbException.get( + ErrorCode.CONCURRENT_UPDATE_1, + index.getName()); + } + } + } + } + throw de; + } + analyzeIfRequired(session); + } + + private void analyzeIfRequired(Session session) { + synchronized (this) { + if (nextAnalyze == 0 || nextAnalyze > changesSinceAnalyze++) { + return; + } + changesSinceAnalyze = 0; + int n = 2 * nextAnalyze; + if (n > 0) { + nextAnalyze = n; + } + } + session.markTableForAnalyze(this); + } + + @Override + public void checkSupportAlter() { + // ok + } + + @Override + public TableType getTableType() { + return TableType.TABLE; + } + + @Override + public Index getScanIndex(Session session) { + return primaryIndex; + } + + @Override + public Index getUniqueIndex() { + return primaryIndex; + } + + @Override + public ArrayList 
getIndexes() { + return indexes; + } + + @Override + public long getMaxDataModificationId() { + return lastModificationId; + } + + public boolean getContainsLargeObject() { + return containsLargeObject; + } + + @Override + public boolean isDeterministic() { + return true; + } + + @Override + public boolean canGetRowCount() { + return true; + } + + @Override + public boolean canDrop() { + return true; + } + + @Override + public void removeChildrenAndResources(Session session) { + if (containsLargeObject) { + // unfortunately, the data is gone on rollback + truncate(session); + database.getLobStorage().removeAllForTable(getId()); + database.lockMeta(session); + } + database.getMvStore().removeTable(this); + super.removeChildrenAndResources(session); + // go backwards because database.removeIndex will + // call table.removeIndex + while (indexes.size() > 1) { + Index index = indexes.get(1); + if (index.getName() != null) { + database.removeSchemaObject(session, index); + } + // needed for session temporary indexes + indexes.remove(index); + } + if (SysProperties.CHECK) { + for (SchemaObject obj : database + .getAllSchemaObjects(DbObject.INDEX)) { + Index index = (Index) obj; + if (index.getTable() == this) { + DbException.throwInternalError("index not dropped: " + + index.getName()); + } + } + } + primaryIndex.remove(session); + database.removeMeta(session, getId()); + close(session); + invalidate(); + } + + @Override + public long getRowCount(Session session) { + return primaryIndex.getRowCount(session); + } + + @Override + public long getRowCountApproximation() { + return primaryIndex.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return primaryIndex.getDiskSpaceUsed(); + } + + @Override + public void checkRename() { + // ok + } + + /** + * Get a new transaction. 
+ * + * @return the transaction + */ + Transaction getTransactionBegin() { + // TODO need to commit/rollback the transaction + return transactionStore.begin(); + } + + @Override + public Column getRowIdColumn() { + if (rowIdColumn == null) { + rowIdColumn = new Column(Column.ROWID, Value.LONG); + rowIdColumn.setTable(this, -1); + } + return rowIdColumn; + } + + @Override + public String toString() { + return getSQL(); + } + + @Override + public boolean isMVStore() { + return true; + } + + /** + * Mark the transaction as committed, so that the modification counter of + * the database is incremented. + */ + public void commit() { + if (database != null) { + lastModificationId = database.getNextModificationDataId(); + } + } + + /** + * Convert the illegal state exception to a database exception. + * + * @param e the illegal state exception + * @return the database exception + */ + DbException convertException(IllegalStateException e) { + if (DataUtils.getErrorCode(e.getMessage()) == + DataUtils.ERROR_TRANSACTION_LOCKED) { + throw DbException.get(ErrorCode.CONCURRENT_UPDATE_1, + e, getName()); + } + return store.convertIllegalStateException(e); + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/MVTableEngine.java b/modules/h2/src/main/java/org/h2/mvstore/db/MVTableEngine.java new file mode 100644 index 0000000000000..a8256cf5e1f27 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/MVTableEngine.java @@ -0,0 +1,470 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.io.InputStream; +import java.lang.Thread.UncaughtExceptionHandler; +import java.nio.channels.FileChannel; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; + +import org.h2.api.ErrorCode; +import org.h2.api.TableEngine; +import org.h2.command.ddl.CreateTableData; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.FileStore; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.MVStoreTool; +import org.h2.mvstore.db.TransactionStore.Transaction; +import org.h2.mvstore.db.TransactionStore.TransactionMap; +import org.h2.store.InDoubtTransaction; +import org.h2.store.fs.FileChannelInputStream; +import org.h2.store.fs.FileUtils; +import org.h2.table.TableBase; +import org.h2.util.BitField; +import org.h2.util.New; + +/** + * A table engine that internally uses the MVStore. + */ +public class MVTableEngine implements TableEngine { + + /** + * Initialize the MVStore. 
+ * + * @param db the database + * @return the store + */ + public static Store init(final Database db) { + Store store = db.getMvStore(); + if (store != null) { + return store; + } + byte[] key = db.getFileEncryptionKey(); + String dbPath = db.getDatabasePath(); + MVStore.Builder builder = new MVStore.Builder(); + store = new Store(); + boolean encrypted = false; + if (dbPath != null) { + String fileName = dbPath + Constants.SUFFIX_MV_FILE; + MVStoreTool.compactCleanUp(fileName); + builder.fileName(fileName); + builder.pageSplitSize(db.getPageSize()); + if (db.isReadOnly()) { + builder.readOnly(); + } else { + // possibly create the directory + boolean exists = FileUtils.exists(fileName); + if (exists && !FileUtils.canWrite(fileName)) { + // read only + } else { + String dir = FileUtils.getParent(fileName); + FileUtils.createDirectories(dir); + } + } + if (key != null) { + encrypted = true; + char[] password = new char[key.length / 2]; + for (int i = 0; i < password.length; i++) { + password[i] = (char) (((key[i + i] & 255) << 16) | + ((key[i + i + 1]) & 255)); + } + builder.encryptionKey(password); + } + if (db.getSettings().compressData) { + builder.compress(); + // use a larger page split size to improve the compression ratio + builder.pageSplitSize(64 * 1024); + } + builder.backgroundExceptionHandler(new UncaughtExceptionHandler() { + + @Override + public void uncaughtException(Thread t, Throwable e) { + db.setBackgroundException(DbException.convert(e)); + } + + }); + } + store.open(db, builder, encrypted); + db.setMvStore(store); + return store; + } + + @Override + public TableBase createTable(CreateTableData data) { + Database db = data.session.getDatabase(); + Store store = init(db); + MVTable table = new MVTable(data, store); + table.init(data.session); + store.tableMap.put(table.getMapName(), table); + return table; + } + + /** + * A store with open tables. + */ + public static class Store { + + /** + * The map of open tables. 
+ * Key: the map name, value: the table. + */ + final ConcurrentHashMap<String, MVTable> tableMap = + new ConcurrentHashMap<>(); + + /** + * The store. + */ + private MVStore store; + + /** + * The transaction store. + */ + private TransactionStore transactionStore; + + private long statisticsStart; + + private int temporaryMapId; + + private boolean encrypted; + + private String fileName; + + /** + * Open the store for this database. + * + * @param db the database + * @param builder the builder + * @param encrypted whether the store is encrypted + */ + void open(Database db, MVStore.Builder builder, boolean encrypted) { + this.encrypted = encrypted; + try { + this.store = builder.open(); + FileStore fs = store.getFileStore(); + if (fs != null) { + this.fileName = fs.getFileName(); + } + if (!db.getSettings().reuseSpace) { + store.setReuseSpace(false); + } + this.transactionStore = new TransactionStore( + store, + new ValueDataType(db.getCompareMode(), db, null)); + transactionStore.init(); + } catch (IllegalStateException e) { + throw convertIllegalStateException(e); + } + } + + /** + * Convert the illegal state exception to the correct database + * exception.
+ * + * @param e the illegal state exception + * @return the database exception + */ + DbException convertIllegalStateException(IllegalStateException e) { + int errorCode = DataUtils.getErrorCode(e.getMessage()); + if (errorCode == DataUtils.ERROR_FILE_CORRUPT) { + if (encrypted) { + throw DbException.get( + ErrorCode.FILE_ENCRYPTION_ERROR_1, + e, fileName); + } + } else if (errorCode == DataUtils.ERROR_FILE_LOCKED) { + throw DbException.get( + ErrorCode.DATABASE_ALREADY_OPEN_1, + e, fileName); + } else if (errorCode == DataUtils.ERROR_READING_FAILED) { + throw DbException.get( + ErrorCode.IO_EXCEPTION_1, + e, fileName); + } + throw DbException.get( + ErrorCode.FILE_CORRUPTED_1, + e, fileName); + + } + + public MVStore getStore() { + return store; + } + + public TransactionStore getTransactionStore() { + return transactionStore; + } + + public HashMap<String, MVTable> getTables() { + return new HashMap<>(tableMap); + } + + /** + * Remove a table. + * + * @param table the table + */ + public void removeTable(MVTable table) { + tableMap.remove(table.getMapName()); + } + + /** + * Store all pending changes. + */ + public void flush() { + FileStore s = store.getFileStore(); + if (s == null || s.isReadOnly()) { + return; + } + if (!store.compact(50, 4 * 1024 * 1024)) { + store.commit(); + } + } + + /** + * Close the store, without persisting changes. + */ + public void closeImmediately() { + if (store.isClosed()) { + return; + } + store.closeImmediately(); + } + + /** + * Commit all transactions that are in the committing state, and + * rollback all open transactions. + */ + public void initTransactions() { + List<Transaction> list = transactionStore.getOpenTransactions(); + for (Transaction t : list) { + if (t.getStatus() == Transaction.STATUS_COMMITTING) { + t.commit(); + } else if (t.getStatus() != Transaction.STATUS_PREPARED) { + t.rollback(); + } + } + } + + /** + * Remove all temporary maps.
+ * + * @param objectIds the ids of the objects to keep + */ + public void removeTemporaryMaps(BitField objectIds) { + for (String mapName : store.getMapNames()) { + if (mapName.startsWith("temp.")) { + MVMap<Object, Object> map = store.openMap(mapName); + store.removeMap(map); + } else if (mapName.startsWith("table.") || mapName.startsWith("index.")) { + int id = Integer.parseInt(mapName.substring(1 + mapName.indexOf('.'))); + if (!objectIds.get(id)) { + ValueDataType keyType = new ValueDataType(null, null, null); + ValueDataType valueType = new ValueDataType(null, null, null); + Transaction t = transactionStore.begin(); + TransactionMap<?, ?> m = t.openMap(mapName, keyType, valueType); + transactionStore.removeMap(m); + t.commit(); + } + } + } + } + + /** + * Get the name of the next available temporary map. + * + * @return the map name + */ + public synchronized String nextTemporaryMapName() { + return "temp." + temporaryMapId++; + } + + /** + * Prepare a transaction. + * + * @param session the session + * @param transactionName the transaction name (may be null) + */ + public void prepareCommit(Session session, String transactionName) { + Transaction t = session.getTransaction(); + t.setName(transactionName); + t.prepare(); + store.commit(); + } + + public ArrayList<InDoubtTransaction> getInDoubtTransactions() { + List<Transaction> list = transactionStore.getOpenTransactions(); + ArrayList<InDoubtTransaction> result = New.arrayList(); + for (Transaction t : list) { + if (t.getStatus() == Transaction.STATUS_PREPARED) { + result.add(new MVInDoubtTransaction(store, t)); + } + } + return result; + } + + /** + * Set the maximum memory to be used by the cache.
+ * + * @param kb the maximum size in KB + */ + public void setCacheSize(int kb) { + store.setCacheSize(Math.max(1, kb / 1024)); + } + + public InputStream getInputStream() { + FileChannel fc = store.getFileStore().getEncryptedFile(); + if (fc == null) { + fc = store.getFileStore().getFile(); + } + return new FileChannelInputStream(fc, false); + } + + /** + * Force the changes to disk. + */ + public void sync() { + flush(); + store.sync(); + } + + /** + * Compact the database file, that is, compact blocks that have a low + * fill rate, and move chunks next to each other. This will typically + * shrink the database file. Changes are flushed to the file, and old + * chunks are overwritten. + * + * @param maxCompactTime the maximum time in milliseconds to compact + */ + public void compactFile(long maxCompactTime) { + store.setRetentionTime(0); + long start = System.nanoTime(); + while (store.compact(95, 16 * 1024 * 1024)) { + store.sync(); + store.compactMoveChunks(95, 16 * 1024 * 1024); + long time = System.nanoTime() - start; + if (time > TimeUnit.MILLISECONDS.toNanos(maxCompactTime)) { + break; + } + } + } + + /** + * Close the store. Pending changes are persisted. Chunks with a low + * fill rate are compacted, but old chunks are kept for some time, so + * most likely the database file will not shrink. 
+ * + * @param maxCompactTime the maximum time in milliseconds to compact + */ + public void close(long maxCompactTime) { + try { + if (!store.isClosed() && store.getFileStore() != null) { + boolean compactFully = false; + if (!store.getFileStore().isReadOnly()) { + transactionStore.close(); + if (maxCompactTime == Long.MAX_VALUE) { + compactFully = true; + } + } + String fileName = store.getFileStore().getFileName(); + store.close(); + if (compactFully && FileUtils.exists(fileName)) { + // the file could have been deleted concurrently, + // so only compact if the file still exists + MVStoreTool.compact(fileName, true); + } + } + } catch (IllegalStateException e) { + int errorCode = DataUtils.getErrorCode(e.getMessage()); + if (errorCode == DataUtils.ERROR_WRITING_FAILED) { + // disk full - ok + } else if (errorCode == DataUtils.ERROR_FILE_CORRUPT) { + // wrong encryption key - ok + } + store.closeImmediately(); + throw DbException.get(ErrorCode.IO_EXCEPTION_1, e, "Closing"); + } + } + + /** + * Start collecting statistics. + */ + public void statisticsStart() { + FileStore fs = store.getFileStore(); + statisticsStart = fs == null ? 0 : fs.getReadCount(); + } + + /** + * Stop collecting statistics. + * + * @return the statistics + */ + public Map<String, Integer> statisticsEnd() { + HashMap<String, Integer> map = new HashMap<>(); + FileStore fs = store.getFileStore(); + int reads = fs == null ? 0 : (int) (fs.getReadCount() - statisticsStart); + map.put("reads", reads); + return map; + } + + } + + /** + * An in-doubt transaction.
+ */ + private static class MVInDoubtTransaction implements InDoubtTransaction { + + private final MVStore store; + private final Transaction transaction; + private int state = InDoubtTransaction.IN_DOUBT; + + MVInDoubtTransaction(MVStore store, Transaction transaction) { + this.store = store; + this.transaction = transaction; + } + + @Override + public void setState(int state) { + if (state == InDoubtTransaction.COMMIT) { + transaction.commit(); + } else { + transaction.rollback(); + } + store.commit(); + this.state = state; + } + + @Override + public String getState() { + switch (state) { + case IN_DOUBT: + return "IN_DOUBT"; + case COMMIT: + return "COMMIT"; + case ROLLBACK: + return "ROLLBACK"; + default: + throw DbException.throwInternalError("state="+state); + } + } + + @Override + public String getTransactionName() { + return transaction.getName(); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/TransactionStore.java b/modules/h2/src/main/java/org/h2/mvstore/db/TransactionStore.java new file mode 100644 index 0000000000000..a1a5ee8c67bde --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/TransactionStore.java @@ -0,0 +1,1851 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.BitSet; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map.Entry; +import java.util.concurrent.locks.ReentrantReadWriteLock; +import org.h2.mvstore.Cursor; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.WriteBuffer; +import org.h2.mvstore.type.DataType; +import org.h2.mvstore.type.ObjectDataType; +import org.h2.util.New; + +/** + * A store that supports concurrent MVCC read-committed transactions. 
+ */ +public class TransactionStore { + + /** + * The store. + */ + final MVStore store; + + /** + * The persisted map of prepared transactions. + * Key: transactionId, value: [ status, name ]. + */ + final MVMap<Integer, Object[]> preparedTransactions; + + /** + * The undo log. + * <p>
+ * If the first entry for a transaction doesn't have a logId + * of 0, then the transaction is partially committed (which means rollback + * is not possible). Log entries are written before the data is changed + * (write-ahead). + * <p>
+ * Key: opId, value: [ mapId, key, oldValue ]. + */ + final MVMap<Long, Object[]> undoLog; + + /** + * the reader/writer lock for the undo-log. Allows us to process multiple + * selects in parallel. + */ + final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(); + + /** + * The map of maps. + */ + private final HashMap<Integer, MVMap<Object, VersionedValue>> maps = + new HashMap<>(); + + private final DataType dataType; + + private final BitSet openTransactions = new BitSet(); + + private boolean init; + + private int maxTransactionId = 0xffff; + + /** + * The next id of a temporary map. + */ + private int nextTempMapId; + + /** + * Create a new transaction store. + * + * @param store the store + */ + public TransactionStore(MVStore store) { + this(store, new ObjectDataType()); + } + + /** + * Create a new transaction store. + * + * @param store the store + * @param dataType the data type for map keys and values + */ + public TransactionStore(MVStore store, DataType dataType) { + this.store = store; + this.dataType = dataType; + preparedTransactions = store.openMap("openTransactions", + new MVMap.Builder<Integer, Object[]>()); + VersionedValueType oldValueType = new VersionedValueType(dataType); + ArrayType undoLogValueType = new ArrayType(new DataType[]{ + new ObjectDataType(), dataType, oldValueType + }); + MVMap.Builder<Long, Object[]> builder = + new MVMap.Builder<Long, Object[]>(). + valueType(undoLogValueType); + undoLog = store.openMap("undoLog", builder); + if (undoLog.getValueType() != undoLogValueType) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_TRANSACTION_CORRUPT, + "Undo map open with a different value type"); + } + } + + /** + * Initialize the store. This is needed before a transaction can be opened. + * If the transaction store is corrupt, this method can throw an exception, + * in which case the store can only be used for reading.
+ */ + public synchronized void init() { + init = true; + // remove all temporary maps + for (String mapName : store.getMapNames()) { + if (mapName.startsWith("temp.")) { + MVMap<Object, Integer> temp = openTempMap(mapName); + store.removeMap(temp); + } + } + rwLock.writeLock().lock(); + try { + if (undoLog.size() > 0) { + for (Long key : undoLog.keySet()) { + int transactionId = getTransactionId(key); + openTransactions.set(transactionId); + } + } + } finally { + rwLock.writeLock().unlock(); + } + } + + /** + * Set the maximum transaction id, after which ids are re-used. If the old + * transaction is still in use when re-using an old id, the new transaction + * fails. + * + * @param max the maximum id + */ + public void setMaxTransactionId(int max) { + this.maxTransactionId = max; + } + + /** + * Combine the transaction id and the log id to an operation id. + * + * @param transactionId the transaction id + * @param logId the log id + * @return the operation id + */ + static long getOperationId(int transactionId, long logId) { + DataUtils.checkArgument(transactionId >= 0 && transactionId < (1 << 24), + "Transaction id out of range: {0}", transactionId); + DataUtils.checkArgument(logId >= 0 && logId < (1L << 40), + "Transaction log id out of range: {0}", logId); + return ((long) transactionId << 40) | logId; + } + + /** + * Get the transaction id for the given operation id. + * + * @param operationId the operation id + * @return the transaction id + */ + static int getTransactionId(long operationId) { + return (int) (operationId >>> 40); + } + + /** + * Get the log id for the given operation id. + * + * @param operationId the operation id + * @return the log id + */ + static long getLogId(long operationId) { + return operationId & ((1L << 40) - 1); + } + + /** + * Get the list of unclosed transactions that have pending writes.
+ * + * @return the list of transactions (sorted by id) + */ + public List<Transaction> getOpenTransactions() { + rwLock.readLock().lock(); + try { + ArrayList<Transaction> list = New.arrayList(); + Long key = undoLog.firstKey(); + while (key != null) { + int transactionId = getTransactionId(key); + key = undoLog.lowerKey(getOperationId(transactionId + 1, 0)); + long logId = getLogId(key) + 1; + Object[] data = preparedTransactions.get(transactionId); + int status; + String name; + if (data == null) { + if (undoLog.containsKey(getOperationId(transactionId, 0))) { + status = Transaction.STATUS_OPEN; + } else { + status = Transaction.STATUS_COMMITTING; + } + name = null; + } else { + status = (Integer) data[0]; + name = (String) data[1]; + } + Transaction t = new Transaction(this, transactionId, status, + name, logId); + list.add(t); + key = undoLog.ceilingKey(getOperationId(transactionId + 1, 0)); + } + return list; + } finally { + rwLock.readLock().unlock(); + } + } + + /** + * Close the transaction store. + */ + public synchronized void close() { + store.commit(); + } + + /** + * Begin a new transaction. + * + * @return the transaction + */ + public synchronized Transaction begin() { + + int transactionId; + int status; + if (!init) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_TRANSACTION_ILLEGAL_STATE, + "Not initialized"); + } + transactionId = openTransactions.nextClearBit(1); + if (transactionId > maxTransactionId) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_TOO_MANY_OPEN_TRANSACTIONS, + "There are {0} open transactions", + transactionId - 1); + } + openTransactions.set(transactionId); + status = Transaction.STATUS_OPEN; + return new Transaction(this, transactionId, status, null, 0); + } + + /** + * Store a transaction.
+ * + * @param t the transaction + */ + synchronized void storeTransaction(Transaction t) { + if (t.getStatus() == Transaction.STATUS_PREPARED || + t.getName() != null) { + Object[] v = { t.getStatus(), t.getName() }; + preparedTransactions.put(t.getId(), v); + } + } + + /** + * Log an entry. + * + * @param t the transaction + * @param logId the log id + * @param mapId the map id + * @param key the key + * @param oldValue the old value + */ + void log(Transaction t, long logId, int mapId, + Object key, Object oldValue) { + Long undoKey = getOperationId(t.getId(), logId); + Object[] log = { mapId, key, oldValue }; + rwLock.writeLock().lock(); + try { + if (logId == 0) { + if (undoLog.containsKey(undoKey)) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_TOO_MANY_OPEN_TRANSACTIONS, + "An old transaction with the same id " + + "is still open: {0}", + t.getId()); + } + } + undoLog.put(undoKey, log); + } finally { + rwLock.writeLock().unlock(); + } + } + + /** + * Remove a log entry. + * + * @param t the transaction + * @param logId the log id + */ + public void logUndo(Transaction t, long logId) { + Long undoKey = getOperationId(t.getId(), logId); + rwLock.writeLock().lock(); + try { + Object[] old = undoLog.remove(undoKey); + if (old == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_TRANSACTION_ILLEGAL_STATE, + "Transaction {0} was concurrently rolled back", + t.getId()); + } + } finally { + rwLock.writeLock().unlock(); + } + } + + /** + * Remove the given map. + * + * @param <K> the key type + * @param <V> the value type + * @param map the map + */ + synchronized <K, V> void removeMap(TransactionMap<K, V> map) { + maps.remove(map.mapId); + store.removeMap(map.map); + } + + /** + * Commit a transaction.
+ * + * @param t the transaction + * @param maxLogId the last log id + */ + void commit(Transaction t, long maxLogId) { + if (store.isClosed()) { + return; + } + // TODO could synchronize on blocks (100 at a time or so) + rwLock.writeLock().lock(); + int oldStatus = t.getStatus(); + try { + t.setStatus(Transaction.STATUS_COMMITTING); + for (long logId = 0; logId < maxLogId; logId++) { + Long undoKey = getOperationId(t.getId(), logId); + Object[] op = undoLog.get(undoKey); + if (op == null) { + // partially committed: load next + undoKey = undoLog.ceilingKey(undoKey); + if (undoKey == null || + getTransactionId(undoKey) != t.getId()) { + break; + } + logId = getLogId(undoKey) - 1; + continue; + } + int mapId = (Integer) op[0]; + MVMap<Object, VersionedValue> map = openMap(mapId); + if (map != null) { // might be null if map was removed later + Object key = op[1]; + VersionedValue value = map.get(key); + if (value != null) { + // only commit (remove/update) value if we've reached + // last undoLog entry for a given key + if (value.operationId == undoKey) { + if (value.value == null) { + map.remove(key); + } else { + map.put(key, new VersionedValue(0L, value.value)); + } + } + } + } + undoLog.remove(undoKey); + } + } finally { + rwLock.writeLock().unlock(); + } + endTransaction(t, oldStatus); + } + + /** + * Open the map with the given name. + * + * @param <K> the key type + * @param name the map name + * @param keyType the key type + * @param valueType the value type + * @return the map + */ + synchronized <K> MVMap<K, VersionedValue> openMap(String name, + DataType keyType, DataType valueType) { + if (keyType == null) { + keyType = new ObjectDataType(); + } + if (valueType == null) { + valueType = new ObjectDataType(); + } + VersionedValueType vt = new VersionedValueType(valueType); + MVMap<K, VersionedValue> map; + MVMap.Builder<K, VersionedValue> builder = + new MVMap.Builder<K, VersionedValue>().
+ keyType(keyType).valueType(vt); + map = store.openMap(name, builder); + @SuppressWarnings("unchecked") + MVMap<Object, VersionedValue> m = (MVMap<Object, VersionedValue>) map; + maps.put(map.getId(), m); + return map; + } + + /** + * Open the map with the given id. + * + * @param mapId the id + * @return the map + */ + synchronized MVMap<Object, VersionedValue> openMap(int mapId) { + MVMap<Object, VersionedValue> map = maps.get(mapId); + if (map != null) { + return map; + } + String mapName = store.getMapName(mapId); + if (mapName == null) { + // the map was removed later on + return null; + } + VersionedValueType vt = new VersionedValueType(dataType); + MVMap.Builder<Object, VersionedValue> mapBuilder = + new MVMap.Builder<Object, VersionedValue>(). + keyType(dataType).valueType(vt); + map = store.openMap(mapName, mapBuilder); + maps.put(mapId, map); + return map; + } + + /** + * Create a temporary map. Such maps are removed when opening the store. + * + * @return the map + */ + synchronized MVMap<Object, Integer> createTempMap() { + String mapName = "temp." + nextTempMapId++; + return openTempMap(mapName); + } + + /** + * Open a temporary map. + * + * @param mapName the map name + * @return the map + */ + MVMap<Object, Integer> openTempMap(String mapName) { + MVMap.Builder<Object, Integer> mapBuilder = + new MVMap.Builder<Object, Integer>().
+ keyType(dataType); + return store.openMap(mapName, mapBuilder); + } + + /** + * End this transaction + * + * @param t the transaction + * @param oldStatus status of this transaction + */ + synchronized void endTransaction(Transaction t, int oldStatus) { + if (oldStatus == Transaction.STATUS_PREPARED) { + preparedTransactions.remove(t.getId()); + } + t.setStatus(Transaction.STATUS_CLOSED); + openTransactions.clear(t.transactionId); + if (oldStatus == Transaction.STATUS_PREPARED || store.getAutoCommitDelay() == 0) { + store.commit(); + return; + } + // to avoid having to store the transaction log, + // if there is no open transaction, + // and if there have been many changes, store them now + if (undoLog.isEmpty()) { + int unsaved = store.getUnsavedMemory(); + int max = store.getAutoCommitMemory(); + // save at 3/4 capacity + if (unsaved * 4 > max * 3) { + store.commit(); + } + } + } + + /** + * Rollback to an old savepoint. + * + * @param t the transaction + * @param maxLogId the last log id + * @param toLogId the log id to roll back to + */ + void rollbackTo(Transaction t, long maxLogId, long toLogId) { + // TODO could synchronize on blocks (100 at a time or so) + rwLock.writeLock().lock(); + try { + for (long logId = maxLogId - 1; logId >= toLogId; logId--) { + Long undoKey = getOperationId(t.getId(), logId); + Object[] op = undoLog.get(undoKey); + if (op == null) { + // partially rolled back: load previous + undoKey = undoLog.floorKey(undoKey); + if (undoKey == null || + getTransactionId(undoKey) != t.getId()) { + break; + } + logId = getLogId(undoKey) + 1; + continue; + } + int mapId = ((Integer) op[0]).intValue(); + MVMap<Object, VersionedValue> map = openMap(mapId); + if (map != null) { + Object key = op[1]; + VersionedValue oldValue = (VersionedValue) op[2]; + if (oldValue == null) { + // this transaction added the value + map.remove(key); + } else { + // this transaction updated the value + map.put(key, oldValue); + } + } + undoLog.remove(undoKey); + } + } finally {
rwLock.writeLock().unlock(); + } + } + + /** + * Get the changes of the given transaction, starting from the latest log id + * back to the given log id. + * + * @param t the transaction + * @param maxLogId the maximum log id + * @param toLogId the minimum log id + * @return the changes + */ + Iterator<Change> getChanges(final Transaction t, final long maxLogId, + final long toLogId) { + return new Iterator<Change>() { + + private long logId = maxLogId - 1; + private Change current; + + { + fetchNext(); + } + + private void fetchNext() { + rwLock.writeLock().lock(); + try { + while (logId >= toLogId) { + Long undoKey = getOperationId(t.getId(), logId); + Object[] op = undoLog.get(undoKey); + logId--; + if (op == null) { + // partially rolled back: load previous + undoKey = undoLog.floorKey(undoKey); + if (undoKey == null || + getTransactionId(undoKey) != t.getId()) { + break; + } + logId = getLogId(undoKey); + continue; + } + int mapId = ((Integer) op[0]).intValue(); + MVMap<Object, VersionedValue> m = openMap(mapId); + if (m == null) { + // map was removed later on + } else { + current = new Change(); + current.mapName = m.getName(); + current.key = op[1]; + VersionedValue oldValue = (VersionedValue) op[2]; + current.value = oldValue == null ? + null : oldValue.value; + return; + } + } + } finally { + rwLock.writeLock().unlock(); + } + current = null; + } + + @Override + public boolean hasNext() { + return current != null; + } + + @Override + public Change next() { + if (current == null) { + throw DataUtils.newUnsupportedOperationException("no data"); + } + Change result = current; + fetchNext(); + return result; + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException("remove"); + } + + }; + } + + /** + * A change in a map. + */ + public static class Change { + + /** + * The name of the map where the change occurred. + */ + public String mapName; + + /** + * The key. + */ + public Object key; + + /** + * The value.
+ */ + public Object value; + } + + /** + * A transaction. + */ + public static class Transaction { + + /** + * The status of a closed transaction (committed or rolled back). + */ + public static final int STATUS_CLOSED = 0; + + /** + * The status of an open transaction. + */ + public static final int STATUS_OPEN = 1; + + /** + * The status of a prepared transaction. + */ + public static final int STATUS_PREPARED = 2; + + /** + * The status of a transaction that is being committed, but possibly not + * yet finished. A transaction can go into this state when the store is + * closed while the transaction is committing. When opening a store, + * such transactions should be committed. + */ + public static final int STATUS_COMMITTING = 3; + + /** + * The transaction store. + */ + final TransactionStore store; + + /** + * The transaction id. + */ + final int transactionId; + + /** + * The log id of the last entry in the undo log map. + */ + long logId; + + private int status; + + private String name; + + Transaction(TransactionStore store, int transactionId, int status, + String name, long logId) { + this.store = store; + this.transactionId = transactionId; + this.status = status; + this.name = name; + this.logId = logId; + } + + public int getId() { + return transactionId; + } + + public int getStatus() { + return status; + } + + void setStatus(int status) { + this.status = status; + } + + public void setName(String name) { + checkNotClosed(); + this.name = name; + store.storeTransaction(this); + } + + public String getName() { + return name; + } + + /** + * Create a new savepoint. + * + * @return the savepoint id + */ + public long setSavepoint() { + return logId; + } + + /** + * Add a log entry.
+ * + * @param mapId the map id + * @param key the key + * @param oldValue the old value + */ + void log(int mapId, Object key, Object oldValue) { + store.log(this, logId, mapId, key, oldValue); + // only increment the log id if logging was successful + logId++; + } + + /** + * Remove the last log entry. + */ + void logUndo() { + store.logUndo(this, --logId); + } + + /** + * Open a data map. + * + * @param <K> the key type + * @param <V> the value type + * @param name the name of the map + * @return the transaction map + */ + public <K, V> TransactionMap<K, V> openMap(String name) { + return openMap(name, null, null); + } + + /** + * Open the map to store the data. + * + * @param <K> the key type + * @param <V> the value type + * @param name the name of the map + * @param keyType the key data type + * @param valueType the value data type + * @return the transaction map + */ + public <K, V> TransactionMap<K, V> openMap(String name, + DataType keyType, DataType valueType) { + checkNotClosed(); + MVMap<K, VersionedValue> map = store.openMap(name, keyType, + valueType); + int mapId = map.getId(); + return new TransactionMap<>(this, map, mapId); + } + + /** + * Open the transactional version of the given map. + * + * @param <K> the key type + * @param <V> the value type + * @param map the base map + * @return the transactional map + */ + public <K, V> TransactionMap<K, V> openMap( + MVMap<K, VersionedValue> map) { + checkNotClosed(); + int mapId = map.getId(); + return new TransactionMap<>(this, map, mapId); + } + + /** + * Prepare the transaction. Afterwards, the transaction can only be + * committed or rolled back. + */ + public void prepare() { + checkNotClosed(); + status = STATUS_PREPARED; + store.storeTransaction(this); + } + + /** + * Commit the transaction. Afterwards, this transaction is closed. + */ + public void commit() { + checkNotClosed(); + store.commit(this, logId); + } + + /** + * Roll back to the given savepoint. This is only allowed if the + * transaction is open.
+ * + * @param savepointId the savepoint id + */ + public void rollbackToSavepoint(long savepointId) { + checkNotClosed(); + store.rollbackTo(this, logId, savepointId); + logId = savepointId; + } + + /** + * Roll the transaction back. Afterwards, this transaction is closed. + */ + public void rollback() { + checkNotClosed(); + store.rollbackTo(this, logId, 0); + store.endTransaction(this, status); + } + + /** + * Get the list of changes, starting with the latest change, up to the + * given savepoint (in reverse order than they occurred). The value of + * the change is the value before the change was applied. + * + * @param savepointId the savepoint id, 0 meaning the beginning of the + * transaction + * @return the changes + */ + public Iterator<Change> getChanges(long savepointId) { + return store.getChanges(this, logId, savepointId); + } + + /** + * Check whether this transaction is open or prepared. + */ + void checkNotClosed() { + if (status == STATUS_CLOSED) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_CLOSED, "Transaction is closed"); + } + } + + /** + * Remove the map. + * + * @param map the map + */ + public <K, V> void removeMap(TransactionMap<K, V> map) { + store.removeMap(map); + } + + @Override + public String toString() { + return "" + transactionId; + } + + } + + /** + * A map that supports transactions. + * + * @param <K> the key type + * @param <V> the value type + */ + public static class TransactionMap<K, V> { + + /** + * The map id. + */ + final int mapId; + + /** + * If a record was read that was updated by this transaction, and the + * update occurred before this log id, the older version is read. This + * is so that changes are not immediately visible, to support statement + * processing (for example "update test set id = id + 1"). + */ + long readLogId = Long.MAX_VALUE; + + /** + * The map used for writing (the latest version). + * <p>
<p>
    + * Key: key the key of the data. + * Value: { transactionId, oldVersion, value } + */ + final MVMap map; + + /** + * The transaction which is used for this map. + */ + final Transaction transaction; + + TransactionMap(Transaction transaction, MVMap map, + int mapId) { + this.transaction = transaction; + this.map = map; + this.mapId = mapId; + } + + /** + * Set the savepoint. Afterwards, reads are based on the specified + * savepoint. + * + * @param savepoint the savepoint + */ + public void setSavepoint(long savepoint) { + this.readLogId = savepoint; + } + + /** + * Get a clone of this map for the given transaction. + * + * @param transaction the transaction + * @param savepoint the savepoint + * @return the map + */ + public TransactionMap getInstance(Transaction transaction, + long savepoint) { + TransactionMap m = + new TransactionMap<>(transaction, map, mapId); + m.setSavepoint(savepoint); + return m; + } + + /** + * Get the size of the raw map. This includes uncommitted entries, and + * transiently removed entries, so it is the maximum number of entries. + * + * @return the maximum size + */ + public long sizeAsLongMax() { + return map.sizeAsLong(); + } + + /** + * Get the size of the map as seen by this transaction. 
+ * + * @return the size + */ + public long sizeAsLong() { + transaction.store.rwLock.readLock().lock(); + try { + long sizeRaw = map.sizeAsLong(); + MVMap undo = transaction.store.undoLog; + long undoLogSize; + synchronized (undo) { + undoLogSize = undo.sizeAsLong(); + } + if (undoLogSize == 0) { + return sizeRaw; + } + if (undoLogSize > sizeRaw) { + // the undo log is larger than the map - + // count the entries of the map + long size = 0; + Cursor cursor = map.cursor(null); + while (cursor.hasNext()) { + K key = cursor.next(); + // cursor.getValue() returns outdated value + VersionedValue data = map.get(key); + data = getValue(key, readLogId, data); + if (data != null && data.value != null) { + size++; + } + } + return size; + } + // the undo log is smaller than the map - + // scan the undo log and subtract invisible entries + synchronized (undo) { + // re-fetch in case any transaction was committed now + long size = map.sizeAsLong(); + MVMap temp = transaction.store + .createTempMap(); + try { + for (Entry e : undo.entrySet()) { + Object[] op = e.getValue(); + int m = (Integer) op[0]; + if (m != mapId) { + // a different map - ignore + continue; + } + @SuppressWarnings("unchecked") + K key = (K) op[1]; + if (get(key) == null) { + Integer old = temp.put(key, 1); + // count each key only once (there might be + // multiple + // changes for the same key) + if (old == null) { + size--; + } + } + } + } finally { + transaction.store.store.removeMap(temp); + } + return size; + } + } finally { + transaction.store.rwLock.readLock().unlock(); + } + } + + /** + * Remove an entry. + *
<p>
    + * If the row is locked, this method will retry until the row could be + * updated or until a lock timeout. + * + * @param key the key + * @throws IllegalStateException if a lock timeout occurs + */ + public V remove(K key) { + return set(key, null); + } + + /** + * Update the value for the given key. + *
<p>
    + * If the row is locked, this method will retry until the row could be + * updated or until a lock timeout. + * + * @param key the key + * @param value the new value (not null) + * @return the old value + * @throws IllegalStateException if a lock timeout occurs + */ + public V put(K key, V value) { + DataUtils.checkArgument(value != null, "The value may not be null"); + return set(key, value); + } + + /** + * Update the value for the given key, without adding an undo log entry. + * + * @param key the key + * @param value the value + * @return the old value + */ + @SuppressWarnings("unchecked") + public V putCommitted(K key, V value) { + DataUtils.checkArgument(value != null, "The value may not be null"); + VersionedValue newValue = new VersionedValue(0L, value); + VersionedValue oldValue = map.put(key, newValue); + return (V) (oldValue == null ? null : oldValue.value); + } + + private V set(K key, V value) { + transaction.checkNotClosed(); + V old = get(key); + boolean ok = trySet(key, value, false); + if (ok) { + return old; + } + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_TRANSACTION_LOCKED, "Entry is locked"); + } + + /** + * Try to remove the value for the given key. + *
<p>
    + * This will fail if the row is locked by another transaction (that + * means, if another open transaction changed the row). + * + * @param key the key + * @return whether the entry could be removed + */ + public boolean tryRemove(K key) { + return trySet(key, null, false); + } + + /** + * Try to update the value for the given key. + *
<p>
    + * This will fail if the row is locked by another transaction (that + * means, if another open transaction changed the row). + * + * @param key the key + * @param value the new value + * @return whether the entry could be updated + */ + public boolean tryPut(K key, V value) { + DataUtils.checkArgument(value != null, "The value may not be null"); + return trySet(key, value, false); + } + + /** + * Try to set or remove the value. When updating only unchanged entries, + * then the value is only changed if it was not changed after opening + * the map. + * + * @param key the key + * @param value the new value (null to remove the value) + * @param onlyIfUnchanged only set the value if it was not changed (by + * this or another transaction) since the map was opened + * @return true if the value was set, false if there was a concurrent + * update + */ + public boolean trySet(K key, V value, boolean onlyIfUnchanged) { + VersionedValue current = map.get(key); + if (onlyIfUnchanged) { + VersionedValue old = getValue(key, readLogId); + if (!map.areValuesEqual(old, current)) { + long tx = getTransactionId(current.operationId); + if (tx == transaction.transactionId) { + if (value == null) { + // ignore removing an entry + // if it was added or changed + // in the same statement + return true; + } else if (current.value == null) { + // add an entry that was removed + // in the same statement + } else { + return false; + } + } else { + return false; + } + } + } + VersionedValue newValue = new VersionedValue( + getOperationId(transaction.transactionId, transaction.logId), + value); + if (current == null) { + // a new value + transaction.log(mapId, key, current); + VersionedValue old = map.putIfAbsent(key, newValue); + if (old != null) { + transaction.logUndo(); + return false; + } + return true; + } + long id = current.operationId; + if (id == 0) { + // committed + transaction.log(mapId, key, current); + // the transaction is committed: + // overwrite the value + if 
(!map.replace(key, current, newValue)) { + // somebody else was faster + transaction.logUndo(); + return false; + } + return true; + } + int tx = getTransactionId(current.operationId); + if (tx == transaction.transactionId) { + // added or updated by this transaction + transaction.log(mapId, key, current); + if (!map.replace(key, current, newValue)) { + // strange, somebody overwrote the value + // even though the change was not committed + transaction.logUndo(); + return false; + } + return true; + } + // the transaction is not yet committed + return false; + } + + /** + * Get the value for the given key at the time when this map was opened. + * + * @param key the key + * @return the value or null + */ + public V get(K key) { + return get(key, readLogId); + } + + /** + * Get the most recent value for the given key. + * + * @param key the key + * @return the value or null + */ + public V getLatest(K key) { + return get(key, Long.MAX_VALUE); + } + + /** + * Whether the map contains the key. + * + * @param key the key + * @return true if the map contains an entry for this key + */ + public boolean containsKey(K key) { + return get(key) != null; + } + + /** + * Get the value for the given key. + * + * @param key the key + * @param maxLogId the maximum log id + * @return the value or null + */ + @SuppressWarnings("unchecked") + public V get(K key, long maxLogId) { + VersionedValue data = getValue(key, maxLogId); + return data == null ? null : (V) data.value; + } + + /** + * Whether the entry for this key was added or removed from this + * session. 
+ * + * @param key the key + * @return true if yes + */ + public boolean isSameTransaction(K key) { + VersionedValue data = map.get(key); + if (data == null) { + // doesn't exist or deleted by a committed transaction + return false; + } + int tx = getTransactionId(data.operationId); + return tx == transaction.transactionId; + } + + private VersionedValue getValue(K key, long maxLog) { + transaction.store.rwLock.readLock().lock(); + try { + VersionedValue data = map.get(key); + return getValue(key, maxLog, data); + } finally { + transaction.store.rwLock.readLock().unlock(); + } + } + + /** + * Get the versioned value for the given key. + * + * @param key the key + * @param maxLog the maximum log id of the entry + * @param data the value stored in the main map + * @return the value + */ + VersionedValue getValue(K key, long maxLog, VersionedValue data) { + while (true) { + if (data == null) { + // doesn't exist or deleted by a committed transaction + return null; + } + long id = data.operationId; + if (id == 0) { + // it is committed + return data; + } + int tx = getTransactionId(id); + if (tx == transaction.transactionId) { + // added by this transaction + if (getLogId(id) < maxLog) { + return data; + } + } + // get the value before the uncommitted transaction + Object[] d; + d = transaction.store.undoLog.get(id); + if (d == null) { + if (transaction.store.store.isReadOnly()) { + // uncommitted transaction for a read-only store + return null; + } + // this entry should be committed or rolled back + // in the meantime (the transaction might still be open) + // or it might be changed again in a different + // transaction (possibly one with the same id) + data = map.get(key); + } else { + data = (VersionedValue) d[2]; + } + } + } + + /** + * Check whether this map is closed. + * + * @return true if closed + */ + public boolean isClosed() { + return map.isClosed(); + } + + /** + * Clear the map. + */ + public void clear() { + // TODO truncate transactionally? 
+ map.clear(); + } + + /** + * Get the first key. + * + * @return the first key, or null if empty + */ + public K firstKey() { + Iterator it = keyIterator(null); + return it.hasNext() ? it.next() : null; + } + + /** + * Get the last key. + * + * @return the last key, or null if empty + */ + public K lastKey() { + K k = map.lastKey(); + while (k != null && get(k) == null) { + k = map.lowerKey(k); + } + return k; + } + + /** + * Get the smallest key that is larger than the given key, or null if no + * such key exists. + * + * @param key the key (may not be null) + * @return the result + */ + public K higherKey(K key) { + do { + key = map.higherKey(key); + } while (key != null && get(key) == null); + return key; + } + + /** + * Get the smallest key that is larger than or equal to this key, + * or null if no such key exists. + * + * @param key the key (may not be null) + * @return the result + */ + public K ceilingKey(K key) { + Iterator it = keyIterator(key); + return it.hasNext() ? it.next() : null; + } + + /** + * Get one of the previous or next keys. There might be no value + * available for the returned key. + * + * @param key the key (may not be null) + * @param offset how many keys to skip (-1 for previous, 1 for next) + * @return the key + */ + public K relativeKey(K key, long offset) { + K k = offset > 0 ? map.ceilingKey(key) : map.floorKey(key); + if (k == null) { + return k; + } + long index = map.getKeyIndex(k); + return map.getKey(index + offset); + } + + /** + * Get the largest key that is smaller than or equal to this key, + * or null if no such key exists. + * + * @param key the key (may not be null) + * @return the result + */ + public K floorKey(K key) { + key = map.floorKey(key); + while (key != null && get(key) == null) { + // Use lowerKey() for the next attempts, otherwise we'll get an infinite loop + key = map.lowerKey(key); + } + return key; + } + + /** + * Get the largest key that is smaller than the given key, or null if no + * such key exists. 
+ * + * @param key the key (may not be null) + * @return the result + */ + public K lowerKey(K key) { + do { + key = map.lowerKey(key); + } while (key != null && get(key) == null); + return key; + } + + /** + * Iterate over keys. + * + * @param from the first key to return + * @return the iterator + */ + public Iterator keyIterator(K from) { + return keyIterator(from, false); + } + + /** + * Iterate over keys. + * + * @param from the first key to return + * @param includeUncommitted whether uncommitted entries should be + * included + * @return the iterator + */ + public Iterator keyIterator(final K from, final boolean includeUncommitted) { + return new Iterator() { + private K currentKey = from; + private Cursor cursor = map.cursor(currentKey); + + { + fetchNext(); + } + + private void fetchNext() { + while (cursor.hasNext()) { + K k; + try { + k = cursor.next(); + } catch (IllegalStateException e) { + // TODO this is a bit ugly + if (DataUtils.getErrorCode(e.getMessage()) == + DataUtils.ERROR_CHUNK_NOT_FOUND) { + cursor = map.cursor(currentKey); + // we (should) get the current key again, + // we need to ignore that one + if (!cursor.hasNext()) { + break; + } + cursor.next(); + if (!cursor.hasNext()) { + break; + } + k = cursor.next(); + } else { + throw e; + } + } + currentKey = k; + if (includeUncommitted) { + return; + } + if (containsKey(k)) { + return; + } + } + currentKey = null; + } + + @Override + public boolean hasNext() { + return currentKey != null; + } + + @Override + public K next() { + K result = currentKey; + fetchNext(); + return result; + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException( + "Removing is not supported"); + } + }; + } + + /** + * Iterate over entries. 
+ * + * @param from the first key to return + * @param to the last key to return + * @return the iterator + */ + public Iterator> entryIterator(final K from, final K to) { + return new Iterator>() { + private Entry current; + private K currentKey = from; + private Cursor cursor = map.cursor(currentKey); + + { + fetchNext(); + } + + private void fetchNext() { + while (cursor.hasNext()) { + transaction.store.rwLock.readLock().lock(); + try { + K k; + try { + k = cursor.next(); + } catch (IllegalStateException e) { + // TODO this is a bit ugly + if (DataUtils.getErrorCode(e.getMessage()) == + DataUtils.ERROR_CHUNK_NOT_FOUND) { + cursor = map.cursor(currentKey); + // we (should) get the current key again, + // we need to ignore that one + if (!cursor.hasNext()) { + break; + } + cursor.next(); + if (!cursor.hasNext()) { + break; + } + k = cursor.next(); + } else { + throw e; + } + } + final K key = k; + if (to != null && map.getKeyType().compare(k, to) > 0) { + break; + } + // cursor.getValue() returns outdated value + VersionedValue data = map.get(key); + data = getValue(key, readLogId, data); + if (data != null && data.value != null) { + @SuppressWarnings("unchecked") + final V value = (V) data.value; + current = new DataUtils.MapEntry<>(key, value); + currentKey = key; + return; + } + } finally { + transaction.store.rwLock.readLock().unlock(); + } + } + current = null; + currentKey = null; + } + + @Override + public boolean hasNext() { + return current != null; + } + + @Override + public Entry next() { + Entry result = current; + fetchNext(); + return result; + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException( + "Removing is not supported"); + } + }; + + } + + /** + * Iterate over keys. 
+ * + * @param iterator the iterator to wrap + * @param includeUncommitted whether uncommitted entries should be + * included + * @return the iterator + */ + public Iterator wrapIterator(final Iterator iterator, + final boolean includeUncommitted) { + // TODO duplicate code for wrapIterator and entryIterator + return new Iterator() { + private K current; + + { + fetchNext(); + } + + private void fetchNext() { + while (iterator.hasNext()) { + current = iterator.next(); + if (includeUncommitted) { + return; + } + if (containsKey(current)) { + return; + } + } + current = null; + } + + @Override + public boolean hasNext() { + return current != null; + } + + @Override + public K next() { + K result = current; + fetchNext(); + return result; + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException( + "Removing is not supported"); + } + }; + } + + public Transaction getTransaction() { + return transaction; + } + + public DataType getKeyType() { + return map.getKeyType(); + } + + } + + /** + * A versioned value (possibly null). It contains a pointer to the old + * value, and the value itself. + */ + static class VersionedValue { + + /** + * The operation id. + */ + final long operationId; + + /** + * The value. + */ + final Object value; + + VersionedValue(long operationId, Object value) { + this.operationId = operationId; + this.value = value; + } + + @Override + public String toString() { + return value + (operationId == 0 ? "" : ( + " " + + getTransactionId(operationId) + "/" + + getLogId(operationId))); + } + + } + + /** + * The value type for a versioned value. 
+ */ + public static class VersionedValueType implements DataType { + + private final DataType valueType; + + VersionedValueType(DataType valueType) { + this.valueType = valueType; + } + + @Override + public int getMemory(Object obj) { + VersionedValue v = (VersionedValue) obj; + return valueType.getMemory(v.value) + 8; + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj == bObj) { + return 0; + } + VersionedValue a = (VersionedValue) aObj; + VersionedValue b = (VersionedValue) bObj; + long comp = a.operationId - b.operationId; + if (comp == 0) { + return valueType.compare(a.value, b.value); + } + return Long.signum(comp); + } + + @Override + public void read(ByteBuffer buff, Object[] obj, int len, boolean key) { + if (buff.get() == 0) { + // fast path (no op ids or null entries) + for (int i = 0; i < len; i++) { + obj[i] = new VersionedValue(0L, valueType.read(buff)); + } + } else { + // slow path (some entries may be null) + for (int i = 0; i < len; i++) { + obj[i] = read(buff); + } + } + } + + @Override + public Object read(ByteBuffer buff) { + long operationId = DataUtils.readVarLong(buff); + Object value; + if (buff.get() == 1) { + value = valueType.read(buff); + } else { + value = null; + } + return new VersionedValue(operationId, value); + } + + @Override + public void write(WriteBuffer buff, Object[] obj, int len, boolean key) { + boolean fastPath = true; + for (int i = 0; i < len; i++) { + VersionedValue v = (VersionedValue) obj[i]; + if (v.operationId != 0 || v.value == null) { + fastPath = false; + } + } + if (fastPath) { + buff.put((byte) 0); + for (int i = 0; i < len; i++) { + VersionedValue v = (VersionedValue) obj[i]; + valueType.write(buff, v.value); + } + } else { + // slow path: + // store op ids, and some entries may be null + buff.put((byte) 1); + for (int i = 0; i < len; i++) { + write(buff, obj[i]); + } + } + } + + @Override + public void write(WriteBuffer buff, Object obj) { + VersionedValue v = (VersionedValue) obj; 
+ buff.putVarLong(v.operationId); + if (v.value == null) { + buff.put((byte) 0); + } else { + buff.put((byte) 1); + valueType.write(buff, v.value); + } + } + + } + + /** + * A data type that contains an array of objects with the specified data + * types. + */ + public static class ArrayType implements DataType { + + private final int arrayLength; + private final DataType[] elementTypes; + + ArrayType(DataType[] elementTypes) { + this.arrayLength = elementTypes.length; + this.elementTypes = elementTypes; + } + + @Override + public int getMemory(Object obj) { + Object[] array = (Object[]) obj; + int size = 0; + for (int i = 0; i < arrayLength; i++) { + DataType t = elementTypes[i]; + Object o = array[i]; + if (o != null) { + size += t.getMemory(o); + } + } + return size; + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj == bObj) { + return 0; + } + Object[] a = (Object[]) aObj; + Object[] b = (Object[]) bObj; + for (int i = 0; i < arrayLength; i++) { + DataType t = elementTypes[i]; + int comp = t.compare(a[i], b[i]); + if (comp != 0) { + return comp; + } + } + return 0; + } + + @Override + public void read(ByteBuffer buff, Object[] obj, + int len, boolean key) { + for (int i = 0; i < len; i++) { + obj[i] = read(buff); + } + } + + @Override + public void write(WriteBuffer buff, Object[] obj, + int len, boolean key) { + for (int i = 0; i < len; i++) { + write(buff, obj[i]); + } + } + + @Override + public void write(WriteBuffer buff, Object obj) { + Object[] array = (Object[]) obj; + for (int i = 0; i < arrayLength; i++) { + DataType t = elementTypes[i]; + Object o = array[i]; + if (o == null) { + buff.put((byte) 0); + } else { + buff.put((byte) 1); + t.write(buff, o); + } + } + } + + @Override + public Object read(ByteBuffer buff) { + Object[] array = new Object[arrayLength]; + for (int i = 0; i < arrayLength; i++) { + DataType t = elementTypes[i]; + if (buff.get() == 1) { + array[i] = t.read(buff); + } + } + return array; + } + + } + +} 
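The undo-log machinery above keys everything on a single long "operation id" that packs the transaction id together with the per-transaction log id — see the `getOperationId`, `getTransactionId` and `getLogId` calls, whose definitions fall outside this hunk. The following is a minimal self-contained sketch of that packing, assuming H2's layout of a 24-bit transaction id in the high bits and a 40-bit log id in the low bits (the bit widths and helper names mirror the H2 implementation but are not shown in this diff):

```java
// Sketch of the operation id packing assumed by the undo log above.
// Assumption: high 24 bits hold the transaction id, low 40 bits the log id.
public class OperationIdSketch {

    // Pack a transaction id and a per-transaction log id into one long.
    static long getOperationId(int transactionId, long logId) {
        return ((long) transactionId << 40) | logId;
    }

    // Recover the transaction id from the packed value.
    static int getTransactionId(long operationId) {
        return (int) (operationId >>> 40);
    }

    // Recover the log id from the packed value.
    static long getLogId(long operationId) {
        return operationId & ((1L << 40) - 1);
    }

    public static void main(String[] args) {
        long op = getOperationId(7, 123);
        // Round-trips: both components come back out unchanged.
        System.out.println(getTransactionId(op)); // 7
        System.out.println(getLogId(op));         // 123
    }
}
```

Because the log id occupies the low bits, packed ids from the same transaction compare in log order, which is what allows the undo log to keep its entries in a plain ordered MVMap keyed by operation id.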
diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/ValueDataType.java b/modules/h2/src/main/java/org/h2/mvstore/db/ValueDataType.java new file mode 100644 index 0000000000000..3da41d360a49c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/ValueDataType.java @@ -0,0 +1,668 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.db; + +import java.math.BigDecimal; +import java.math.BigInteger; +import java.nio.ByteBuffer; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.util.Arrays; +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.WriteBuffer; +import org.h2.mvstore.rtree.SpatialDataType; +import org.h2.mvstore.rtree.SpatialKey; +import org.h2.mvstore.type.DataType; +import org.h2.result.SortOrder; +import org.h2.store.DataHandler; +import org.h2.tools.SimpleResultSet; +import org.h2.util.JdbcUtils; +import org.h2.util.Utils; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueByte; +import org.h2.value.ValueBytes; +import org.h2.value.ValueDate; +import org.h2.value.ValueDecimal; +import org.h2.value.ValueDouble; +import org.h2.value.ValueFloat; +import org.h2.value.ValueGeometry; +import org.h2.value.ValueInt; +import org.h2.value.ValueJavaObject; +import org.h2.value.ValueLobDb; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueResultSet; +import org.h2.value.ValueShort; +import org.h2.value.ValueString; +import org.h2.value.ValueStringFixed; +import org.h2.value.ValueStringIgnoreCase; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; +import org.h2.value.ValueUuid; + 
+/** + * A row type. + */ +public class ValueDataType implements DataType { + + private static final int INT_0_15 = 32; + private static final int LONG_0_7 = 48; + private static final int DECIMAL_0_1 = 56; + private static final int DECIMAL_SMALL_0 = 58; + private static final int DECIMAL_SMALL = 59; + private static final int DOUBLE_0_1 = 60; + private static final int FLOAT_0_1 = 62; + private static final int BOOLEAN_FALSE = 64; + private static final int BOOLEAN_TRUE = 65; + private static final int INT_NEG = 66; + private static final int LONG_NEG = 67; + private static final int STRING_0_31 = 68; + private static final int BYTES_0_31 = 100; + private static final int SPATIAL_KEY_2D = 132; + private static final int CUSTOM_DATA_TYPE = 133; + + final DataHandler handler; + final CompareMode compareMode; + final int[] sortTypes; + SpatialDataType spatialType; + + public ValueDataType(CompareMode compareMode, DataHandler handler, + int[] sortTypes) { + this.compareMode = compareMode; + this.handler = handler; + this.sortTypes = sortTypes; + } + + private SpatialDataType getSpatialDataType() { + if (spatialType == null) { + spatialType = new SpatialDataType(2); + } + return spatialType; + } + + @Override + public int compare(Object a, Object b) { + if (a == b) { + return 0; + } + if (a instanceof ValueArray && b instanceof ValueArray) { + Value[] ax = ((ValueArray) a).getList(); + Value[] bx = ((ValueArray) b).getList(); + int al = ax.length; + int bl = bx.length; + int len = Math.min(al, bl); + for (int i = 0; i < len; i++) { + int sortType = sortTypes == null ? 
SortOrder.ASCENDING : sortTypes[i]; + int comp = compareValues(ax[i], bx[i], sortType); + if (comp != 0) { + return comp; + } + } + if (len < al) { + return -1; + } else if (len < bl) { + return 1; + } + return 0; + } + return compareValues((Value) a, (Value) b, SortOrder.ASCENDING); + } + + private int compareValues(Value a, Value b, int sortType) { + if (a == b) { + return 0; + } + // null is never stored; + // comparison with null is used to retrieve all entries + // in which case null is always lower than all entries + // (even for descending ordered indexes) + if (a == null) { + return -1; + } else if (b == null) { + return 1; + } + boolean aNull = a == ValueNull.INSTANCE; + boolean bNull = b == ValueNull.INSTANCE; + if (aNull || bNull) { + return SortOrder.compareNull(aNull, sortType); + } + int comp = a.compareTypeSafe(b, compareMode); + if ((sortType & SortOrder.DESCENDING) != 0) { + comp = -comp; + } + return comp; + } + + @Override + public int getMemory(Object obj) { + if (obj instanceof SpatialKey) { + return getSpatialDataType().getMemory(obj); + } + return getMemory((Value) obj); + } + + private static int getMemory(Value v) { + return v == null ? 
0 : v.getMemory(); + } + + @Override + public void read(ByteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + obj[i] = read(buff); + } + } + + @Override + public void write(WriteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + write(buff, obj[i]); + } + } + + @Override + public Object read(ByteBuffer buff) { + return readValue(buff); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (obj instanceof SpatialKey) { + buff.put((byte) SPATIAL_KEY_2D); + getSpatialDataType().write(buff, obj); + return; + } + Value x = (Value) obj; + writeValue(buff, x); + } + + private void writeValue(WriteBuffer buff, Value v) { + if (v == ValueNull.INSTANCE) { + buff.put((byte) 0); + return; + } + int type = v.getType(); + switch (type) { + case Value.BOOLEAN: + buff.put((byte) (v.getBoolean() ? BOOLEAN_TRUE : BOOLEAN_FALSE)); + break; + case Value.BYTE: + buff.put((byte) type).put(v.getByte()); + break; + case Value.SHORT: + buff.put((byte) type).putShort(v.getShort()); + break; + case Value.ENUM: + case Value.INT: { + int x = v.getInt(); + if (x < 0) { + buff.put((byte) INT_NEG).putVarInt(-x); + } else if (x < 16) { + buff.put((byte) (INT_0_15 + x)); + } else { + buff.put((byte) type).putVarInt(x); + } + break; + } + case Value.LONG: { + long x = v.getLong(); + if (x < 0) { + buff.put((byte) LONG_NEG).putVarLong(-x); + } else if (x < 8) { + buff.put((byte) (LONG_0_7 + x)); + } else { + buff.put((byte) type).putVarLong(x); + } + break; + } + case Value.DECIMAL: { + BigDecimal x = v.getBigDecimal(); + if (BigDecimal.ZERO.equals(x)) { + buff.put((byte) DECIMAL_0_1); + } else if (BigDecimal.ONE.equals(x)) { + buff.put((byte) (DECIMAL_0_1 + 1)); + } else { + int scale = x.scale(); + BigInteger b = x.unscaledValue(); + int bits = b.bitLength(); + if (bits <= 63) { + if (scale == 0) { + buff.put((byte) DECIMAL_SMALL_0). + putVarLong(b.longValue()); + } else { + buff.put((byte) DECIMAL_SMALL). 
+ putVarInt(scale). + putVarLong(b.longValue()); + } + } else { + byte[] bytes = b.toByteArray(); + buff.put((byte) type). + putVarInt(scale). + putVarInt(bytes.length). + put(bytes); + } + } + break; + } + case Value.TIME: { + ValueTime t = (ValueTime) v; + long nanos = t.getNanos(); + long millis = nanos / 1000000; + nanos -= millis * 1000000; + buff.put((byte) type). + putVarLong(millis). + putVarLong(nanos); + break; + } + case Value.DATE: { + long x = ((ValueDate) v).getDateValue(); + buff.put((byte) type).putVarLong(x); + break; + } + case Value.TIMESTAMP: { + ValueTimestamp ts = (ValueTimestamp) v; + long dateValue = ts.getDateValue(); + long nanos = ts.getTimeNanos(); + long millis = nanos / 1000000; + nanos -= millis * 1000000; + buff.put((byte) type). + putVarLong(dateValue). + putVarLong(millis). + putVarLong(nanos); + break; + } + case Value.TIMESTAMP_TZ: { + ValueTimestampTimeZone ts = (ValueTimestampTimeZone) v; + long dateValue = ts.getDateValue(); + long nanos = ts.getTimeNanos(); + long millis = nanos / 1000000; + nanos -= millis * 1000000; + buff.put((byte) type). + putVarLong(dateValue). + putVarLong(millis). + putVarLong(nanos). + putVarInt(ts.getTimeZoneOffsetMins()); + break; + } + case Value.JAVA_OBJECT: { + byte[] b = v.getBytesNoCopy(); + buff.put((byte) type). + putVarInt(b.length). + put(b); + break; + } + case Value.BYTES: { + byte[] b = v.getBytesNoCopy(); + int len = b.length; + if (len < 32) { + buff.put((byte) (BYTES_0_31 + len)). + put(b); + } else { + buff.put((byte) type). + putVarInt(b.length). + put(b); + } + break; + } + case Value.UUID: { + ValueUuid uuid = (ValueUuid) v; + buff.put((byte) type). + putLong(uuid.getHigh()). + putLong(uuid.getLow()); + break; + } + case Value.STRING: { + String s = v.getString(); + int len = s.length(); + if (len < 32) { + buff.put((byte) (STRING_0_31 + len)). 
+ putStringData(s, len); + } else { + buff.put((byte) type); + writeString(buff, s); + } + break; + } + case Value.STRING_IGNORECASE: + case Value.STRING_FIXED: + buff.put((byte) type); + writeString(buff, v.getString()); + break; + case Value.DOUBLE: { + double x = v.getDouble(); + if (x == 1.0d) { + buff.put((byte) (DOUBLE_0_1 + 1)); + } else { + long d = Double.doubleToLongBits(x); + if (d == ValueDouble.ZERO_BITS) { + buff.put((byte) DOUBLE_0_1); + } else { + buff.put((byte) type). + putVarLong(Long.reverse(d)); + } + } + break; + } + case Value.FLOAT: { + float x = v.getFloat(); + if (x == 1.0f) { + buff.put((byte) (FLOAT_0_1 + 1)); + } else { + int f = Float.floatToIntBits(x); + if (f == ValueFloat.ZERO_BITS) { + buff.put((byte) FLOAT_0_1); + } else { + buff.put((byte) type). + putVarInt(Integer.reverse(f)); + } + } + break; + } + case Value.BLOB: + case Value.CLOB: { + buff.put((byte) type); + ValueLobDb lob = (ValueLobDb) v; + byte[] small = lob.getSmall(); + if (small == null) { + buff.putVarInt(-3). + putVarInt(lob.getTableId()). + putVarLong(lob.getLobId()). + putVarLong(lob.getPrecision()); + } else { + buff.putVarInt(small.length). + put(small); + } + break; + } + case Value.ARRAY: { + Value[] list = ((ValueArray) v).getList(); + buff.put((byte) type).putVarInt(list.length); + for (Value x : list) { + writeValue(buff, x); + } + break; + } + case Value.RESULT_SET: { + buff.put((byte) type); + try { + ResultSet rs = ((ValueResultSet) v).getResultSet(); + rs.beforeFirst(); + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + buff.putVarInt(columnCount); + for (int i = 0; i < columnCount; i++) { + writeString(buff, meta.getColumnName(i + 1)); + buff.putVarInt(meta.getColumnType(i + 1)). + putVarInt(meta.getPrecision(i + 1)). + putVarInt(meta.getScale(i + 1)); + } + while (rs.next()) { + buff.put((byte) 1); + for (int i = 0; i < columnCount; i++) { + int t = org.h2.value.DataType. 
+ getValueTypeFromResultSet(meta, i + 1); + Value val = org.h2.value.DataType.readValue( + null, rs, i + 1, t); + writeValue(buff, val); + } + } + buff.put((byte) 0); + rs.beforeFirst(); + } catch (SQLException e) { + throw DbException.convert(e); + } + break; + } + case Value.GEOMETRY: { + byte[] b = v.getBytes(); + int len = b.length; + buff.put((byte) type). + putVarInt(len). + put(b); + break; + } + default: + if (JdbcUtils.customDataTypesHandler != null) { + byte[] b = v.getBytesNoCopy(); + buff.put((byte)CUSTOM_DATA_TYPE). + putVarInt(type). + putVarInt(b.length). + put(b); + break; + } + DbException.throwInternalError("type=" + v.getType()); + } + } + + private static void writeString(WriteBuffer buff, String s) { + int len = s.length(); + buff.putVarInt(len).putStringData(s, len); + } + + /** + * Read a value. + * + * @return the value + */ + private Object readValue(ByteBuffer buff) { + int type = buff.get() & 255; + switch (type) { + case Value.NULL: + return ValueNull.INSTANCE; + case BOOLEAN_TRUE: + return ValueBoolean.TRUE; + case BOOLEAN_FALSE: + return ValueBoolean.FALSE; + case INT_NEG: + return ValueInt.get(-readVarInt(buff)); + case Value.ENUM: + case Value.INT: + return ValueInt.get(readVarInt(buff)); + case LONG_NEG: + return ValueLong.get(-readVarLong(buff)); + case Value.LONG: + return ValueLong.get(readVarLong(buff)); + case Value.BYTE: + return ValueByte.get(buff.get()); + case Value.SHORT: + return ValueShort.get(buff.getShort()); + case DECIMAL_0_1: + return ValueDecimal.ZERO; + case DECIMAL_0_1 + 1: + return ValueDecimal.ONE; + case DECIMAL_SMALL_0: + return ValueDecimal.get(BigDecimal.valueOf( + readVarLong(buff))); + case DECIMAL_SMALL: { + int scale = readVarInt(buff); + return ValueDecimal.get(BigDecimal.valueOf( + readVarLong(buff), scale)); + } + case Value.DECIMAL: { + int scale = readVarInt(buff); + int len = readVarInt(buff); + byte[] buff2 = Utils.newBytes(len); + buff.get(buff2, 0, len); + BigInteger b = new BigInteger(buff2); 
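The serialization above leans heavily on `putVarInt`/`putVarLong` and the matching `readVarInt`/`readVarLong`. A minimal standalone sketch of the 7-bit variable-length coding these helpers are assumed to implement (low 7 bits of each byte carry payload, the high bit flags a continuation byte); this is an illustration only, not H2's `DataUtils`:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Sketch of the usual base-128 varint scheme: small non-negative ints
// take one byte, larger values spill into continuation bytes.
public final class VarIntSketch {

    static void writeVarInt(ByteArrayOutputStream out, int x) {
        while ((x & ~0x7f) != 0) {          // more than 7 bits left
            out.write(0x80 | (x & 0x7f));   // payload + continuation bit
            x >>>= 7;
        }
        out.write(x);                       // final byte, high bit clear
    }

    static int readVarInt(ByteBuffer buff) {
        int b = buff.get();
        if (b >= 0) {
            return b;                       // single-byte fast path
        }
        int x = b & 0x7f;
        int shift = 7;
        while (true) {
            b = buff.get();
            x |= (b & 0x7f) << shift;
            if (b >= 0) {
                return x;
            }
            shift += 7;
        }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int[] samples = { 0, 1, 127, 128, 300, Integer.MAX_VALUE };
        for (int v : samples) {
            writeVarInt(out, v);
        }
        ByteBuffer buff = ByteBuffer.wrap(out.toByteArray());
        for (int v : samples) {
            if (readVarInt(buff) != v) {
                throw new AssertionError("round-trip failed for " + v);
            }
        }
        System.out.println("varint round-trip ok");
    }
}
```

This is why the `DECIMAL_SMALL` and `STRING_0_31` style tags above pay off: common small values collapse to one or two bytes.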
+ return ValueDecimal.get(new BigDecimal(b, scale)); + } + case Value.DATE: { + return ValueDate.fromDateValue(readVarLong(buff)); + } + case Value.TIME: { + long nanos = readVarLong(buff) * 1000000 + readVarLong(buff); + return ValueTime.fromNanos(nanos); + } + case Value.TIMESTAMP: { + long dateValue = readVarLong(buff); + long nanos = readVarLong(buff) * 1000000 + readVarLong(buff); + return ValueTimestamp.fromDateValueAndNanos(dateValue, nanos); + } + case Value.TIMESTAMP_TZ: { + long dateValue = readVarLong(buff); + long nanos = readVarLong(buff) * 1000000 + readVarLong(buff); + short tz = (short) readVarInt(buff); + return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, nanos, tz); + } + case Value.BYTES: { + int len = readVarInt(buff); + byte[] b = Utils.newBytes(len); + buff.get(b, 0, len); + return ValueBytes.getNoCopy(b); + } + case Value.JAVA_OBJECT: { + int len = readVarInt(buff); + byte[] b = Utils.newBytes(len); + buff.get(b, 0, len); + return ValueJavaObject.getNoCopy(null, b, handler); + } + case Value.UUID: + return ValueUuid.get(buff.getLong(), buff.getLong()); + case Value.STRING: + return ValueString.get(readString(buff)); + case Value.STRING_IGNORECASE: + return ValueStringIgnoreCase.get(readString(buff)); + case Value.STRING_FIXED: + return ValueStringFixed.get(readString(buff)); + case FLOAT_0_1: + return ValueFloat.get(0); + case FLOAT_0_1 + 1: + return ValueFloat.get(1); + case DOUBLE_0_1: + return ValueDouble.get(0); + case DOUBLE_0_1 + 1: + return ValueDouble.get(1); + case Value.DOUBLE: + return ValueDouble.get(Double.longBitsToDouble( + Long.reverse(readVarLong(buff)))); + case Value.FLOAT: + return ValueFloat.get(Float.intBitsToFloat( + Integer.reverse(readVarInt(buff)))); + case Value.BLOB: + case Value.CLOB: { + int smallLen = readVarInt(buff); + if (smallLen >= 0) { + byte[] small = Utils.newBytes(smallLen); + buff.get(small, 0, smallLen); + return ValueLobDb.createSmallLob(type, small); + } else if (smallLen == -3) { + int 
tableId = readVarInt(buff); + long lobId = readVarLong(buff); + long precision = readVarLong(buff); + return ValueLobDb.create(type, + handler, tableId, lobId, null, precision); + } else { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "lob type: " + smallLen); + } + } + case Value.ARRAY: { + int len = readVarInt(buff); + Value[] list = new Value[len]; + for (int i = 0; i < len; i++) { + list[i] = (Value) readValue(buff); + } + return ValueArray.get(list); + } + case Value.RESULT_SET: { + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + int columns = readVarInt(buff); + for (int i = 0; i < columns; i++) { + rs.addColumn(readString(buff), + readVarInt(buff), + readVarInt(buff), + readVarInt(buff)); + } + while (buff.get() != 0) { + Object[] o = new Object[columns]; + for (int i = 0; i < columns; i++) { + o[i] = ((Value) readValue(buff)).getObject(); + } + rs.addRow(o); + } + return ValueResultSet.get(rs); + } + case Value.GEOMETRY: { + int len = readVarInt(buff); + byte[] b = Utils.newBytes(len); + buff.get(b, 0, len); + return ValueGeometry.get(b); + } + case SPATIAL_KEY_2D: + return getSpatialDataType().read(buff); + case CUSTOM_DATA_TYPE: { + if (JdbcUtils.customDataTypesHandler != null) { + int customType = readVarInt(buff); + int len = readVarInt(buff); + byte[] b = Utils.newBytes(len); + buff.get(b, 0, len); + return JdbcUtils.customDataTypesHandler.convert( + ValueBytes.getNoCopy(b), customType); + } + throw DbException.get(ErrorCode.UNKNOWN_DATA_TYPE_1, + "No CustomDataTypesHandler has been set up"); + } + default: + if (type >= INT_0_15 && type < INT_0_15 + 16) { + return ValueInt.get(type - INT_0_15); + } else if (type >= LONG_0_7 && type < LONG_0_7 + 8) { + return ValueLong.get(type - LONG_0_7); + } else if (type >= BYTES_0_31 && type < BYTES_0_31 + 32) { + int len = type - BYTES_0_31; + byte[] b = Utils.newBytes(len); + buff.get(b, 0, len); + return ValueBytes.getNoCopy(b); + } else if (type >= STRING_0_31 && type < 
STRING_0_31 + 32) { + return ValueString.get(readString(buff, type - STRING_0_31)); + } + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, "type: " + type); + } + } + + private static int readVarInt(ByteBuffer buff) { + return DataUtils.readVarInt(buff); + } + + private static long readVarLong(ByteBuffer buff) { + return DataUtils.readVarLong(buff); + } + + private static String readString(ByteBuffer buff, int len) { + return DataUtils.readString(buff, len); + } + + private static String readString(ByteBuffer buff) { + int len = readVarInt(buff); + return DataUtils.readString(buff, len); + } + + @Override + public int hashCode() { + return compareMode.hashCode() ^ Arrays.hashCode(sortTypes); + } + + @Override + public boolean equals(Object obj) { + if (obj == this) { + return true; + } else if (!(obj instanceof ValueDataType)) { + return false; + } + ValueDataType v = (ValueDataType) obj; + if (!compareMode.equals(v.compareMode)) { + return false; + } + return Arrays.equals(sortTypes, v.sortTypes); + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/db/package.html b/modules/h2/src/main/java/org/h2/mvstore/db/package.html new file mode 100644 index 0000000000000..6db68ad22f413 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/db/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Helper classes to use the MVStore in the H2 database. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/mvstore/package.html b/modules/h2/src/main/java/org/h2/mvstore/package.html new file mode 100644 index 0000000000000..af41161ab11db --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A persistent storage for tree maps. + +

\ No newline at end of file
diff --git a/modules/h2/src/main/java/org/h2/mvstore/rtree/MVRTreeMap.java b/modules/h2/src/main/java/org/h2/mvstore/rtree/MVRTreeMap.java
new file mode 100644
index 0000000000000..b1c26dcb0b569
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/mvstore/rtree/MVRTreeMap.java
@@ -0,0 +1,620 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.mvstore.rtree;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import org.h2.mvstore.CursorPos;
+import org.h2.mvstore.DataUtils;
+import org.h2.mvstore.MVMap;
+import org.h2.mvstore.Page;
+import org.h2.mvstore.type.DataType;
+import org.h2.mvstore.type.ObjectDataType;
+import org.h2.util.New;
+
+/**
+ * An r-tree implementation. It supports both the linear and the quadratic split
+ * algorithm.
+ *
+ * @param <V> the value class
+ */
+public class MVRTreeMap<V> extends MVMap<SpatialKey, V> {
+
+    /**
+     * The spatial key type.
+     */
+    final SpatialDataType keyType;
+
+    private boolean quadraticSplit;
+
+    public MVRTreeMap(int dimensions, DataType valueType) {
+        super(new SpatialDataType(dimensions), valueType);
+        this.keyType = (SpatialDataType) getKeyType();
+    }
+
+    /**
+     * Create a new map with the given dimensions and value type.
+     *
+     * @param <V> the value type
+     * @param dimensions the number of dimensions
+     * @param valueType the value type
+     * @return the map
+     */
+    public static <V> MVRTreeMap<V> create(int dimensions, DataType valueType) {
+        return new MVRTreeMap<>(dimensions, valueType);
+    }
+
+    @Override
+    @SuppressWarnings("unchecked")
+    public V get(Object key) {
+        return (V) get(root, key);
+    }
+
+    /**
+     * Iterate over all keys that have an intersection with the given rectangle.
+ * + * @param x the rectangle + * @return the iterator + */ + public RTreeCursor findIntersectingKeys(SpatialKey x) { + return new RTreeCursor(root, x) { + @Override + protected boolean check(boolean leaf, SpatialKey key, + SpatialKey test) { + return keyType.isOverlap(key, test); + } + }; + } + + /** + * Iterate over all keys that are fully contained within the given + * rectangle. + * + * @param x the rectangle + * @return the iterator + */ + public RTreeCursor findContainedKeys(SpatialKey x) { + return new RTreeCursor(root, x) { + @Override + protected boolean check(boolean leaf, SpatialKey key, + SpatialKey test) { + if (leaf) { + return keyType.isInside(key, test); + } + return keyType.isOverlap(key, test); + } + }; + } + + private boolean contains(Page p, int index, Object key) { + return keyType.contains(p.getKey(index), key); + } + + /** + * Get the object for the given key. An exact match is required. + * + * @param p the page + * @param key the key + * @return the value, or null if not found + */ + protected Object get(Page p, Object key) { + if (!p.isLeaf()) { + for (int i = 0; i < p.getKeyCount(); i++) { + if (contains(p, i, key)) { + Object o = get(p.getChildPage(i), key); + if (o != null) { + return o; + } + } + } + } else { + for (int i = 0; i < p.getKeyCount(); i++) { + if (keyType.equals(p.getKey(i), key)) { + return p.getValue(i); + } + } + } + return null; + } + + @Override + protected synchronized Object remove(Page p, long writeVersion, Object key) { + Object result = null; + if (p.isLeaf()) { + for (int i = 0; i < p.getKeyCount(); i++) { + if (keyType.equals(p.getKey(i), key)) { + result = p.getValue(i); + p.remove(i); + break; + } + } + return result; + } + for (int i = 0; i < p.getKeyCount(); i++) { + if (contains(p, i, key)) { + Page cOld = p.getChildPage(i); + // this will mark the old page as deleted + // so we need to update the parent in any case + // (otherwise the old page might be deleted again) + Page c = cOld.copy(writeVersion); + 
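The `remove()` path here follows MVStore's copy-on-write discipline: a page is never mutated in place for an old version; the child is copied for the current write version, modified, and re-linked into the (also copied) parent. A toy sketch of that pattern on a flat node of keys (the `Node` class and names are mine, not H2's):

```java
import java.util.Arrays;

// Toy copy-on-write node illustrating the update pattern used by
// remove()/putOrAdd() above: mutate a private copy, then re-link it,
// so readers pinned to an older version still see the old structure.
public final class CowNodeSketch {

    static final class Node {
        final long version;
        final int[] keys;

        Node(long version, int[] keys) {
            this.version = version;
            this.keys = keys;
        }

        Node copy(long newVersion) {
            return new Node(newVersion, Arrays.copyOf(keys, keys.length));
        }
    }

    // Returns a new root with 'key' removed; 'root' itself is untouched.
    static Node remove(Node root, long writeVersion, int key) {
        Node c = root.copy(writeVersion);
        int[] ks = c.keys;
        for (int i = 0; i < ks.length; i++) {
            if (ks[i] == key) {
                int[] nks = new int[ks.length - 1];
                System.arraycopy(ks, 0, nks, 0, i);
                System.arraycopy(ks, i + 1, nks, i, ks.length - 1 - i);
                return new Node(writeVersion, nks);
            }
        }
        return c; // key absent: still a fresh copy for this version
    }
}
```

The real code additionally shrinks bounding boxes and drops empty child pages, but the copy-modify-relink shape is the same.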
long oldSize = c.getTotalCount(); + result = remove(c, writeVersion, key); + p.setChild(i, c); + if (oldSize == c.getTotalCount()) { + continue; + } + if (c.getTotalCount() == 0) { + // this child was deleted + p.remove(i); + if (p.getKeyCount() == 0) { + c.removePage(); + } + break; + } + Object oldBounds = p.getKey(i); + if (!keyType.isInside(key, oldBounds)) { + p.setKey(i, getBounds(c)); + } + break; + } + } + return result; + } + + private Object getBounds(Page x) { + Object bounds = keyType.createBoundingBox(x.getKey(0)); + for (int i = 1; i < x.getKeyCount(); i++) { + keyType.increaseBounds(bounds, x.getKey(i)); + } + return bounds; + } + + @Override + @SuppressWarnings("unchecked") + public V put(SpatialKey key, V value) { + return (V) putOrAdd(key, value, false); + } + + /** + * Add a given key-value pair. The key should not exist (if it exists, the + * result is undefined). + * + * @param key the key + * @param value the value + */ + public void add(SpatialKey key, V value) { + putOrAdd(key, value, true); + } + + private synchronized Object putOrAdd(SpatialKey key, V value, boolean alwaysAdd) { + beforeWrite(); + long v = writeVersion; + Page p = root.copy(v); + Object result; + if (alwaysAdd || get(key) == null) { + if (p.getMemory() > store.getPageSplitSize() && + p.getKeyCount() > 3) { + // only possible if this is the root, else we would have + // split earlier (this requires pageSplitSize is fixed) + long totalCount = p.getTotalCount(); + Page split = split(p, v); + Object k1 = getBounds(p); + Object k2 = getBounds(split); + Object[] keys = { k1, k2 }; + Page.PageReference[] children = { + new Page.PageReference(p, p.getPos(), p.getTotalCount()), + new Page.PageReference(split, split.getPos(), split.getTotalCount()), + new Page.PageReference(null, 0, 0) + }; + p = Page.create(this, v, + keys, null, + children, + totalCount, 0); + // now p is a node; continues + } + add(p, v, key, value); + result = null; + } else { + result = set(p, v, key, value); + 
} + newRoot(p); + return result; + } + + /** + * Update the value for the given key. The key must exist. + * + * @param p the page + * @param writeVersion the write version + * @param key the key + * @param value the new value + * @return the old value (never null) + */ + private Object set(Page p, long writeVersion, Object key, Object value) { + if (p.isLeaf()) { + for (int i = 0; i < p.getKeyCount(); i++) { + if (keyType.equals(p.getKey(i), key)) { + p.setKey(i, key); + return p.setValue(i, value); + } + } + } else { + for (int i = 0; i < p.getKeyCount(); i++) { + if (contains(p, i, key)) { + Page c = p.getChildPage(i); + if (get(c, key) != null) { + c = c.copy(writeVersion); + Object result = set(c, writeVersion, key, value); + p.setChild(i, c); + return result; + } + } + } + } + throw DataUtils.newIllegalStateException(DataUtils.ERROR_INTERNAL, + "Not found: {0}", key); + } + + private void add(Page p, long writeVersion, Object key, Object value) { + if (p.isLeaf()) { + p.insertLeaf(p.getKeyCount(), key, value); + return; + } + // p is a node + int index = -1; + for (int i = 0; i < p.getKeyCount(); i++) { + if (contains(p, i, key)) { + index = i; + break; + } + } + if (index < 0) { + // a new entry, we don't know where to add yet + float min = Float.MAX_VALUE; + for (int i = 0; i < p.getKeyCount(); i++) { + Object k = p.getKey(i); + float areaIncrease = keyType.getAreaIncrease(k, key); + if (areaIncrease < min) { + index = i; + min = areaIncrease; + } + } + } + Page c = p.getChildPage(index).copy(writeVersion); + if (c.getMemory() > store.getPageSplitSize() && c.getKeyCount() > 4) { + // split on the way down + Page split = split(c, writeVersion); + p.setKey(index, getBounds(c)); + p.setChild(index, c); + p.insertNode(index, getBounds(split), split); + // now we are not sure where to add + add(p, writeVersion, key, value); + return; + } + add(c, writeVersion, key, value); + Object bounds = p.getKey(index); + keyType.increaseBounds(bounds, key); + 
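The child-selection loop in `add()` above picks the subtree whose bounding box grows least when the new key is added (`getAreaIncrease`). For 2-D float rectangles that heuristic can be sketched standalone; `Rect` and these method names are hypothetical, not H2's `SpatialKey`/`SpatialDataType` API:

```java
// Standalone 2-D illustration of the "least area enlargement"
// child choice used when descending the r-tree during insertion.
public final class RTreeHeuristicSketch {

    static final class Rect {
        final float minX, maxX, minY, maxY;

        Rect(float minX, float maxX, float minY, float maxY) {
            this.minX = minX; this.maxX = maxX;
            this.minY = minY; this.maxY = maxY;
        }

        // Area grown by extending this rectangle until it covers r
        // (the 2-D analogue of SpatialDataType.getAreaIncrease).
        float areaIncrease(Rect r) {
            float oldArea = (maxX - minX) * (maxY - minY);
            float nMinX = Math.min(minX, r.minX);
            float nMaxX = Math.max(maxX, r.maxX);
            float nMinY = Math.min(minY, r.minY);
            float nMaxY = Math.max(maxY, r.maxY);
            return (nMaxX - nMinX) * (nMaxY - nMinY) - oldArea;
        }
    }

    // Pick the child whose bounds grow least when the key is added.
    static int pickChild(Rect[] childBounds, Rect key) {
        int best = -1;
        float min = Float.MAX_VALUE;
        for (int i = 0; i < childBounds.length; i++) {
            float inc = childBounds[i].areaIncrease(key);
            if (inc < min) {
                min = inc;
                best = i;
            }
        }
        return best;
    }
}
```

A key already inside a child's bounds costs an increase of zero, so such children always win, which keeps overlap between sibling boxes low.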
p.setKey(index, bounds); + p.setChild(index, c); + } + + private Page split(Page p, long writeVersion) { + return quadraticSplit ? + splitQuadratic(p, writeVersion) : + splitLinear(p, writeVersion); + } + + private Page splitLinear(Page p, long writeVersion) { + ArrayList keys = New.arrayList(); + for (int i = 0; i < p.getKeyCount(); i++) { + keys.add(p.getKey(i)); + } + int[] extremes = keyType.getExtremes(keys); + if (extremes == null) { + return splitQuadratic(p, writeVersion); + } + Page splitA = newPage(p.isLeaf(), writeVersion); + Page splitB = newPage(p.isLeaf(), writeVersion); + move(p, splitA, extremes[0]); + if (extremes[1] > extremes[0]) { + extremes[1]--; + } + move(p, splitB, extremes[1]); + Object boundsA = keyType.createBoundingBox(splitA.getKey(0)); + Object boundsB = keyType.createBoundingBox(splitB.getKey(0)); + while (p.getKeyCount() > 0) { + Object o = p.getKey(0); + float a = keyType.getAreaIncrease(boundsA, o); + float b = keyType.getAreaIncrease(boundsB, o); + if (a < b) { + keyType.increaseBounds(boundsA, o); + move(p, splitA, 0); + } else { + keyType.increaseBounds(boundsB, o); + move(p, splitB, 0); + } + } + while (splitB.getKeyCount() > 0) { + move(splitB, p, 0); + } + return splitA; + } + + private Page splitQuadratic(Page p, long writeVersion) { + Page splitA = newPage(p.isLeaf(), writeVersion); + Page splitB = newPage(p.isLeaf(), writeVersion); + float largest = Float.MIN_VALUE; + int ia = 0, ib = 0; + for (int a = 0; a < p.getKeyCount(); a++) { + Object objA = p.getKey(a); + for (int b = 0; b < p.getKeyCount(); b++) { + if (a == b) { + continue; + } + Object objB = p.getKey(b); + float area = keyType.getCombinedArea(objA, objB); + if (area > largest) { + largest = area; + ia = a; + ib = b; + } + } + } + move(p, splitA, ia); + if (ia < ib) { + ib--; + } + move(p, splitB, ib); + Object boundsA = keyType.createBoundingBox(splitA.getKey(0)); + Object boundsB = keyType.createBoundingBox(splitB.getKey(0)); + while (p.getKeyCount() > 0) { + 
float diff = 0, bestA = 0, bestB = 0; + int best = 0; + for (int i = 0; i < p.getKeyCount(); i++) { + Object o = p.getKey(i); + float incA = keyType.getAreaIncrease(boundsA, o); + float incB = keyType.getAreaIncrease(boundsB, o); + float d = Math.abs(incA - incB); + if (d > diff) { + diff = d; + bestA = incA; + bestB = incB; + best = i; + } + } + if (bestA < bestB) { + keyType.increaseBounds(boundsA, p.getKey(best)); + move(p, splitA, best); + } else { + keyType.increaseBounds(boundsB, p.getKey(best)); + move(p, splitB, best); + } + } + while (splitB.getKeyCount() > 0) { + move(splitB, p, 0); + } + return splitA; + } + + private Page newPage(boolean leaf, long writeVersion) { + Object[] values; + Page.PageReference[] refs; + if (leaf) { + values = Page.EMPTY_OBJECT_ARRAY; + refs = null; + } else { + values = null; + refs = new Page.PageReference[] { + new Page.PageReference(null, 0, 0)}; + } + return Page.create(this, writeVersion, + Page.EMPTY_OBJECT_ARRAY, values, + refs, 0, 0); + } + + private static void move(Page source, Page target, int sourceIndex) { + Object k = source.getKey(sourceIndex); + if (source.isLeaf()) { + Object v = source.getValue(sourceIndex); + target.insertLeaf(0, k, v); + } else { + Page c = source.getChildPage(sourceIndex); + target.insertNode(0, k, c); + } + source.remove(sourceIndex); + } + + /** + * Add all node keys (including internal bounds) to the given list. + * This is mainly used to visualize the internal splits. 
+     *
+     * @param list the list
+     * @param p the root page
+     */
+    public void addNodeKeys(ArrayList<SpatialKey> list, Page p) {
+        if (p != null && !p.isLeaf()) {
+            for (int i = 0; i < p.getKeyCount(); i++) {
+                list.add((SpatialKey) p.getKey(i));
+                addNodeKeys(list, p.getChildPage(i));
+            }
+        }
+    }
+
+    public boolean isQuadraticSplit() {
+        return quadraticSplit;
+    }
+
+    public void setQuadraticSplit(boolean quadraticSplit) {
+        this.quadraticSplit = quadraticSplit;
+    }
+
+    @Override
+    protected int getChildPageCount(Page p) {
+        return p.getRawChildPageCount() - 1;
+    }
+
+    /**
+     * A cursor to iterate over a subset of the keys.
+     */
+    public static class RTreeCursor implements Iterator<SpatialKey> {
+
+        private final SpatialKey filter;
+        private CursorPos pos;
+        private SpatialKey current;
+        private final Page root;
+        private boolean initialized;
+
+        protected RTreeCursor(Page root, SpatialKey filter) {
+            this.root = root;
+            this.filter = filter;
+        }
+
+        @Override
+        public boolean hasNext() {
+            if (!initialized) {
+                // init
+                pos = new CursorPos(root, 0, null);
+                fetchNext();
+                initialized = true;
+            }
+            return current != null;
+        }
+
+        /**
+         * Skip over that many entries. This method is relatively fast (for this
+         * map implementation) even if many entries need to be skipped.
+         *
+         * @param n the number of entries to skip
+         */
+        public void skip(long n) {
+            while (hasNext() && n-- > 0) {
+                fetchNext();
+            }
+        }
+
+        @Override
+        public SpatialKey next() {
+            if (!hasNext()) {
+                return null;
+            }
+            SpatialKey c = current;
+            fetchNext();
+            return c;
+        }
+
+        @Override
+        public void remove() {
+            throw DataUtils.newUnsupportedOperationException(
+                    "Removing is not supported");
+        }
+
+        /**
+         * Fetch the next entry if there is one.
+         */
+        protected void fetchNext() {
+            while (pos != null) {
+                Page p = pos.page;
+                if (p.isLeaf()) {
+                    while (pos.index < p.getKeyCount()) {
+                        SpatialKey c = (SpatialKey) p.getKey(pos.index++);
+                        if (filter == null || check(true, c, filter)) {
+                            current = c;
+                            return;
+                        }
+                    }
+                } else {
+                    boolean found = false;
+                    while (pos.index < p.getKeyCount()) {
+                        int index = pos.index++;
+                        SpatialKey c = (SpatialKey) p.getKey(index);
+                        if (filter == null || check(false, c, filter)) {
+                            Page child = pos.page.getChildPage(index);
+                            pos = new CursorPos(child, 0, pos);
+                            found = true;
+                            break;
+                        }
+                    }
+                    if (found) {
+                        continue;
+                    }
+                }
+                // parent
+                pos = pos.parent;
+            }
+            current = null;
+        }
+
+        /**
+         * Check a given key.
+         *
+         * @param leaf if the key is from a leaf page
+         * @param key the stored key
+         * @param test the user-supplied test key
+         * @return true if there is a match
+         */
+        @SuppressWarnings("unused")
+        protected boolean check(boolean leaf, SpatialKey key, SpatialKey test) {
+            return true;
+        }
+
+    }
+
+    @Override
+    public String getType() {
+        return "rtree";
+    }
+
+    /**
+     * A builder for this class.
+     *
+     * @param <V> the value type
+     */
+    public static class Builder<V> implements
+            MVMap.MapBuilder<MVRTreeMap<V>, SpatialKey, V> {
+
+        private int dimensions = 2;
+        private DataType valueType;
+
+        /**
+         * Create a new builder for maps with 2 dimensions.
+         */
+        public Builder() {
+            // default
+        }
+
+        /**
+         * Set the dimensions.
+         *
+         * @param dimensions the dimensions to use
+         * @return this
+         */
+        public Builder<V> dimensions(int dimensions) {
+            this.dimensions = dimensions;
+            return this;
+        }
+
+        /**
+         * Set the value data type.
+         *
+         * @param valueType the value type
+         * @return this
+         */
+        public Builder<V> valueType(DataType valueType) {
+            this.valueType = valueType;
+            return this;
+        }
+
+        @Override
+        public MVRTreeMap<V> create() {
+            if (valueType == null) {
+                valueType = new ObjectDataType();
+            }
+            return new MVRTreeMap<>(dimensions, valueType);
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/mvstore/rtree/SpatialDataType.java b/modules/h2/src/main/java/org/h2/mvstore/rtree/SpatialDataType.java
new file mode 100644
index 0000000000000..b16831e9f41c7
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/mvstore/rtree/SpatialDataType.java
@@ -0,0 +1,384 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.mvstore.rtree;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import org.h2.mvstore.DataUtils;
+import org.h2.mvstore.WriteBuffer;
+import org.h2.mvstore.type.DataType;
+import org.h2.util.New;
+
+/**
+ * A spatial data type. This class supports up to 31 dimensions. Each dimension
+ * can have a minimum and a maximum value of type float. For each dimension, the
+ * maximum value is only stored when it is not the same as the minimum.
+ */
+public class SpatialDataType implements DataType {
+
+    private final int dimensions;
+
+    public SpatialDataType(int dimensions) {
+        // Because of how we are storing the
+        // min-max-flag in the read/write method
+        // the number of dimensions must be < 32.
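The 31-dimension cap exists because `write()` below packs one "min equals max" flag per dimension into a single int, letting point data store each coordinate only once. A toy codec showing that layout (fixed-width int instead of H2's varint, and all names are mine, not H2's):

```java
import java.nio.ByteBuffer;

// Toy version of SpatialDataType's wire layout: a bitmask of
// per-dimension flags, then min (and max only when it differs).
public final class BoxCodecSketch {

    static ByteBuffer write(float[] minMax, int dims) {
        int flags = 0;
        for (int i = 0; i < dims; i++) {
            if (minMax[2 * i] == minMax[2 * i + 1]) {
                flags |= 1 << i;           // min == max: store one float
            }
        }
        ByteBuffer buff = ByteBuffer.allocate(4 + 8 * dims);
        buff.putInt(flags);                // H2 uses a varint here
        for (int i = 0; i < dims; i++) {
            buff.putFloat(minMax[2 * i]);
            if ((flags & (1 << i)) == 0) {
                buff.putFloat(minMax[2 * i + 1]);
            }
        }
        buff.flip();
        return buff;
    }

    static float[] read(ByteBuffer buff, int dims) {
        int flags = buff.getInt();
        float[] minMax = new float[2 * dims];
        for (int i = 0; i < dims; i++) {
            float min = buff.getFloat();
            minMax[2 * i] = min;
            minMax[2 * i + 1] = (flags & (1 << i)) != 0 ? min : buff.getFloat();
        }
        return minMax;
    }
}
```

With an `int` bitmask only 31 dimensions fit once the real code also reserves the all-bits value `-1` as the null marker, hence the `dimensions < 32` check.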
+ DataUtils.checkArgument( + dimensions >= 1 && dimensions < 32, + "Dimensions must be between 1 and 31, is {0}", dimensions); + this.dimensions = dimensions; + } + + @Override + public int compare(Object a, Object b) { + if (a == b) { + return 0; + } else if (a == null) { + return -1; + } else if (b == null) { + return 1; + } + long la = ((SpatialKey) a).getId(); + long lb = ((SpatialKey) b).getId(); + return Long.compare(la, lb); + } + + /** + * Check whether two spatial values are equal. + * + * @param a the first value + * @param b the second value + * @return true if they are equal + */ + public boolean equals(Object a, Object b) { + if (a == b) { + return true; + } else if (a == null || b == null) { + return false; + } + long la = ((SpatialKey) a).getId(); + long lb = ((SpatialKey) b).getId(); + return la == lb; + } + + @Override + public int getMemory(Object obj) { + return 40 + dimensions * 4; + } + + @Override + public void read(ByteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + obj[i] = read(buff); + } + } + + @Override + public void write(WriteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + write(buff, obj[i]); + } + } + + @Override + public void write(WriteBuffer buff, Object obj) { + SpatialKey k = (SpatialKey) obj; + if (k.isNull()) { + buff.putVarInt(-1); + buff.putVarLong(k.getId()); + return; + } + int flags = 0; + for (int i = 0; i < dimensions; i++) { + if (k.min(i) == k.max(i)) { + flags |= 1 << i; + } + } + buff.putVarInt(flags); + for (int i = 0; i < dimensions; i++) { + buff.putFloat(k.min(i)); + if ((flags & (1 << i)) == 0) { + buff.putFloat(k.max(i)); + } + } + buff.putVarLong(k.getId()); + } + + @Override + public Object read(ByteBuffer buff) { + int flags = DataUtils.readVarInt(buff); + if (flags == -1) { + long id = DataUtils.readVarLong(buff); + return new SpatialKey(id); + } + float[] minMax = new float[dimensions * 2]; + for (int i = 0; i < dimensions; i++) 
{ + float min = buff.getFloat(); + float max; + if ((flags & (1 << i)) != 0) { + max = min; + } else { + max = buff.getFloat(); + } + minMax[i + i] = min; + minMax[i + i + 1] = max; + } + long id = DataUtils.readVarLong(buff); + return new SpatialKey(id, minMax); + } + + /** + * Check whether the two objects overlap. + * + * @param objA the first object + * @param objB the second object + * @return true if they overlap + */ + public boolean isOverlap(Object objA, Object objB) { + SpatialKey a = (SpatialKey) objA; + SpatialKey b = (SpatialKey) objB; + if (a.isNull() || b.isNull()) { + return false; + } + for (int i = 0; i < dimensions; i++) { + if (a.max(i) < b.min(i) || a.min(i) > b.max(i)) { + return false; + } + } + return true; + } + + /** + * Increase the bounds in the given spatial object. + * + * @param bounds the bounds (may be modified) + * @param add the value + */ + public void increaseBounds(Object bounds, Object add) { + SpatialKey a = (SpatialKey) add; + SpatialKey b = (SpatialKey) bounds; + if (a.isNull() || b.isNull()) { + return; + } + for (int i = 0; i < dimensions; i++) { + b.setMin(i, Math.min(b.min(i), a.min(i))); + b.setMax(i, Math.max(b.max(i), a.max(i))); + } + } + + /** + * Get the area increase by extending a to contain b. + * + * @param objA the bounding box + * @param objB the object + * @return the area + */ + public float getAreaIncrease(Object objA, Object objB) { + SpatialKey b = (SpatialKey) objB; + SpatialKey a = (SpatialKey) objA; + if (a.isNull() || b.isNull()) { + return 0; + } + float min = a.min(0); + float max = a.max(0); + float areaOld = max - min; + min = Math.min(min, b.min(0)); + max = Math.max(max, b.max(0)); + float areaNew = max - min; + for (int i = 1; i < dimensions; i++) { + min = a.min(i); + max = a.max(i); + areaOld *= max - min; + min = Math.min(min, b.min(i)); + max = Math.max(max, b.max(i)); + areaNew *= max - min; + } + return areaNew - areaOld; + } + + /** + * Get the combined area of both objects. 
+ * + * @param objA the first object + * @param objB the second object + * @return the area + */ + float getCombinedArea(Object objA, Object objB) { + SpatialKey a = (SpatialKey) objA; + SpatialKey b = (SpatialKey) objB; + if (a.isNull()) { + return getArea(b); + } else if (b.isNull()) { + return getArea(a); + } + float area = 1; + for (int i = 0; i < dimensions; i++) { + float min = Math.min(a.min(i), b.min(i)); + float max = Math.max(a.max(i), b.max(i)); + area *= max - min; + } + return area; + } + + private float getArea(SpatialKey a) { + if (a.isNull()) { + return 0; + } + float area = 1; + for (int i = 0; i < dimensions; i++) { + area *= a.max(i) - a.min(i); + } + return area; + } + + /** + * Check whether a contains b. + * + * @param objA the bounding box + * @param objB the object + * @return the area + */ + public boolean contains(Object objA, Object objB) { + SpatialKey a = (SpatialKey) objA; + SpatialKey b = (SpatialKey) objB; + if (a.isNull() || b.isNull()) { + return false; + } + for (int i = 0; i < dimensions; i++) { + if (a.min(i) > b.min(i) || a.max(i) < b.max(i)) { + return false; + } + } + return true; + } + + /** + * Check whether a is completely inside b and does not touch the + * given bound. + * + * @param objA the object to check + * @param objB the bounds + * @return true if a is completely inside b + */ + public boolean isInside(Object objA, Object objB) { + SpatialKey a = (SpatialKey) objA; + SpatialKey b = (SpatialKey) objB; + if (a.isNull() || b.isNull()) { + return false; + } + for (int i = 0; i < dimensions; i++) { + if (a.min(i) <= b.min(i) || a.max(i) >= b.max(i)) { + return false; + } + } + return true; + } + + /** + * Create a bounding box starting with the given object. 
+ * + * @param objA the object + * @return the bounding box + */ + Object createBoundingBox(Object objA) { + SpatialKey a = (SpatialKey) objA; + if (a.isNull()) { + return a; + } + float[] minMax = new float[dimensions * 2]; + for (int i = 0; i < dimensions; i++) { + minMax[i + i] = a.min(i); + minMax[i + i + 1] = a.max(i); + } + return new SpatialKey(0, minMax); + } + + /** + * Get the most extreme pair (elements that are as far apart as possible). + * This method is used to split a page (linear split). If no extreme objects + * could be found, this method returns null. + * + * @param list the objects + * @return the indexes of the extremes + */ + public int[] getExtremes(ArrayList list) { + list = getNotNull(list); + if (list.isEmpty()) { + return null; + } + SpatialKey bounds = (SpatialKey) createBoundingBox(list.get(0)); + SpatialKey boundsInner = (SpatialKey) createBoundingBox(bounds); + for (int i = 0; i < dimensions; i++) { + float t = boundsInner.min(i); + boundsInner.setMin(i, boundsInner.max(i)); + boundsInner.setMax(i, t); + } + for (Object o : list) { + increaseBounds(bounds, o); + increaseMaxInnerBounds(boundsInner, o); + } + double best = 0; + int bestDim = 0; + for (int i = 0; i < dimensions; i++) { + float inner = boundsInner.max(i) - boundsInner.min(i); + if (inner < 0) { + continue; + } + float outer = bounds.max(i) - bounds.min(i); + float d = inner / outer; + if (d > best) { + best = d; + bestDim = i; + } + } + if (best <= 0) { + return null; + } + float min = boundsInner.min(bestDim); + float max = boundsInner.max(bestDim); + int firstIndex = -1, lastIndex = -1; + for (int i = 0; i < list.size() && + (firstIndex < 0 || lastIndex < 0); i++) { + SpatialKey o = (SpatialKey) list.get(i); + if (firstIndex < 0 && o.max(bestDim) == min) { + firstIndex = i; + } else if (lastIndex < 0 && o.min(bestDim) == max) { + lastIndex = i; + } + } + return new int[] { firstIndex, lastIndex }; + } + + private static ArrayList getNotNull(ArrayList list) { + 
ArrayList result = null; + for (Object o : list) { + SpatialKey a = (SpatialKey) o; + if (a.isNull()) { + result = New.arrayList(); + break; + } + } + if (result == null) { + return list; + } + for (Object o : list) { + SpatialKey a = (SpatialKey) o; + if (!a.isNull()) { + result.add(a); + } + } + return result; + } + + private void increaseMaxInnerBounds(Object bounds, Object add) { + SpatialKey b = (SpatialKey) bounds; + SpatialKey a = (SpatialKey) add; + for (int i = 0; i < dimensions; i++) { + b.setMin(i, Math.min(b.min(i), a.max(i))); + b.setMax(i, Math.max(b.max(i), a.min(i))); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/rtree/SpatialKey.java b/modules/h2/src/main/java/org/h2/mvstore/rtree/SpatialKey.java new file mode 100644 index 0000000000000..150eb6fb32bf6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/rtree/SpatialKey.java @@ -0,0 +1,119 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.rtree; + +import java.util.Arrays; + +/** + * A unique spatial key. + */ +public class SpatialKey { + + private final long id; + private final float[] minMax; + + /** + * Create a new key. + * + * @param id the id + * @param minMax min x, max x, min y, max y, and so on + */ + public SpatialKey(long id, float... minMax) { + this.id = id; + this.minMax = minMax; + } + + /** + * Get the minimum value for the given dimension. + * + * @param dim the dimension + * @return the value + */ + public float min(int dim) { + return minMax[dim + dim]; + } + + /** + * Set the minimum value for the given dimension. + * + * @param dim the dimension + * @param x the value + */ + public void setMin(int dim, float x) { + minMax[dim + dim] = x; + } + + /** + * Get the maximum value for the given dimension. 
+ * + * @param dim the dimension + * @return the value + */ + public float max(int dim) { + return minMax[dim + dim + 1]; + } + + /** + * Set the maximum value for the given dimension. + * + * @param dim the dimension + * @param x the value + */ + public void setMax(int dim, float x) { + minMax[dim + dim + 1] = x; + } + + public long getId() { + return id; + } + + public boolean isNull() { + return minMax.length == 0; + } + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + buff.append(id).append(": ("); + for (int i = 0; i < minMax.length; i += 2) { + if (i > 0) { + buff.append(", "); + } + buff.append(minMax[i]).append('/').append(minMax[i + 1]); + } + return buff.append(")").toString(); + } + + @Override + public int hashCode() { + return (int) ((id >>> 32) ^ id); + } + + @Override + public boolean equals(Object other) { + if (other == this) { + return true; + } else if (!(other instanceof SpatialKey)) { + return false; + } + SpatialKey o = (SpatialKey) other; + if (id != o.id) { + return false; + } + return equalsIgnoringId(o); + } + + /** + * Check whether two objects are equals, but do not compare the id fields. + * + * @param o the other key + * @return true if the contents are the same + */ + public boolean equalsIgnoringId(SpatialKey o) { + return Arrays.equals(minMax, o.minMax); + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/rtree/package.html b/modules/h2/src/main/java/org/h2/mvstore/rtree/package.html new file mode 100644 index 0000000000000..9685a80631521 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/rtree/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +An R-tree implementation + +
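For reference, the `SpatialKey` class added above stores each bounding box as interleaved floats (min x, max x, min y, max y, and so on), so `min(dim)` reads index `2*dim` and `max(dim)` reads `2*dim + 1`. A minimal standalone sketch of that layout — the class name is illustrative, not part of H2:

```java
// Standalone sketch of SpatialKey's interleaved min/max layout.
// The index arithmetic mirrors min(dim) = minMax[dim + dim] and
// max(dim) = minMax[dim + dim + 1] from the class above.
public class SpatialKeySketch {
    final long id;
    final float[] minMax; // min x, max x, min y, max y, ...

    SpatialKeySketch(long id, float... minMax) {
        this.id = id;
        this.minMax = minMax;
    }

    float min(int dim) { return minMax[dim + dim]; }
    float max(int dim) { return minMax[dim + dim + 1]; }
    boolean isNull() { return minMax.length == 0; }
}
```

For a 2D box covering x in [0, 10] and y in [2, 5], the constructor call is `new SpatialKeySketch(1, 0f, 10f, 2f, 5f)`.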

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/mvstore/type/DataType.java b/modules/h2/src/main/java/org/h2/mvstore/type/DataType.java new file mode 100644 index 0000000000000..9b5e519c03d17 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/type/DataType.java @@ -0,0 +1,72 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.type; + +import java.nio.ByteBuffer; + +import org.h2.mvstore.WriteBuffer; + +/** + * A data type. + */ +public interface DataType { + + /** + * Compare two keys. + * + * @param a the first key + * @param b the second key + * @return -1 if the first key is smaller, 1 if larger, and 0 if equal + * @throws UnsupportedOperationException if the type is not orderable + */ + int compare(Object a, Object b); + + /** + * Estimate the used memory in bytes. + * + * @param obj the object + * @return the used memory + */ + int getMemory(Object obj); + + /** + * Write an object. + * + * @param buff the target buffer + * @param obj the value + */ + void write(WriteBuffer buff, Object obj); + + /** + * Write a list of objects. + * + * @param buff the target buffer + * @param obj the objects + * @param len the number of objects to write + * @param key whether the objects are keys + */ + void write(WriteBuffer buff, Object[] obj, int len, boolean key); + + /** + * Read an object. + * + * @param buff the source buffer + * @return the object + */ + Object read(ByteBuffer buff); + + /** + * Read a list of objects. 
+ * + * @param buff the target buffer + * @param obj the objects + * @param len the number of objects to read + * @param key whether the objects are keys + */ + void read(ByteBuffer buff, Object[] obj, int len, boolean key); + +} + diff --git a/modules/h2/src/main/java/org/h2/mvstore/type/ObjectDataType.java b/modules/h2/src/main/java/org/h2/mvstore/type/ObjectDataType.java new file mode 100644 index 0000000000000..733e55fb20ecb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/type/ObjectDataType.java @@ -0,0 +1,1560 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.type; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.ObjectInputStream; +import java.io.ObjectOutputStream; +import java.lang.reflect.Array; +import java.math.BigDecimal; +import java.math.BigInteger; +import java.nio.ByteBuffer; +import java.util.Arrays; +import java.util.Date; +import java.util.HashMap; +import java.util.UUID; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.WriteBuffer; +import org.h2.util.Utils; + +/** + * A data type implementation for the most common data types, including + * serializable objects. + */ +public class ObjectDataType implements DataType { + + /** + * The type constants are also used as tag values. 
+ */ + static final int TYPE_NULL = 0; + static final int TYPE_BOOLEAN = 1; + static final int TYPE_BYTE = 2; + static final int TYPE_SHORT = 3; + static final int TYPE_INT = 4; + static final int TYPE_LONG = 5; + static final int TYPE_BIG_INTEGER = 6; + static final int TYPE_FLOAT = 7; + static final int TYPE_DOUBLE = 8; + static final int TYPE_BIG_DECIMAL = 9; + static final int TYPE_CHAR = 10; + static final int TYPE_STRING = 11; + static final int TYPE_UUID = 12; + static final int TYPE_DATE = 13; + static final int TYPE_ARRAY = 14; + static final int TYPE_SERIALIZED_OBJECT = 19; + + /** + * For very common values (e.g. 0 and 1) we save space by encoding the value + * in the tag. e.g. TAG_BOOLEAN_TRUE and TAG_FLOAT_0. + */ + static final int TAG_BOOLEAN_TRUE = 32; + static final int TAG_INTEGER_NEGATIVE = 33; + static final int TAG_INTEGER_FIXED = 34; + static final int TAG_LONG_NEGATIVE = 35; + static final int TAG_LONG_FIXED = 36; + static final int TAG_BIG_INTEGER_0 = 37; + static final int TAG_BIG_INTEGER_1 = 38; + static final int TAG_BIG_INTEGER_SMALL = 39; + static final int TAG_FLOAT_0 = 40; + static final int TAG_FLOAT_1 = 41; + static final int TAG_FLOAT_FIXED = 42; + static final int TAG_DOUBLE_0 = 43; + static final int TAG_DOUBLE_1 = 44; + static final int TAG_DOUBLE_FIXED = 45; + static final int TAG_BIG_DECIMAL_0 = 46; + static final int TAG_BIG_DECIMAL_1 = 47; + static final int TAG_BIG_DECIMAL_SMALL = 48; + static final int TAG_BIG_DECIMAL_SMALL_SCALED = 49; + + /** + * For small-values/small-arrays, we encode the value/array-length in the + * tag. + */ + static final int TAG_INTEGER_0_15 = 64; + static final int TAG_LONG_0_7 = 80; + static final int TAG_STRING_0_15 = 88; + static final int TAG_BYTE_ARRAY_0_15 = 104; + + /** + * Constants for floating point synchronization. 
+ */ + static final int FLOAT_ZERO_BITS = Float.floatToIntBits(0.0f); + static final int FLOAT_ONE_BITS = Float.floatToIntBits(1.0f); + static final long DOUBLE_ZERO_BITS = Double.doubleToLongBits(0.0d); + static final long DOUBLE_ONE_BITS = Double.doubleToLongBits(1.0d); + + static final Class<?>[] COMMON_CLASSES = { boolean.class, byte.class, + short.class, char.class, int.class, long.class, float.class, + double.class, Object.class, Boolean.class, Byte.class, Short.class, + Character.class, Integer.class, Long.class, BigInteger.class, + Float.class, Double.class, BigDecimal.class, String.class, + UUID.class, Date.class }; + + private static final HashMap<Class<?>, Integer> COMMON_CLASSES_MAP = new HashMap<>(COMMON_CLASSES.length); + + private AutoDetectDataType last = new StringType(this); + + @Override + public int compare(Object a, Object b) { + return last.compare(a, b); + } + + @Override + public int getMemory(Object obj) { + return last.getMemory(obj); + } + + @Override + public void read(ByteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + obj[i] = read(buff); + } + } + + @Override + public void write(WriteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + write(buff, obj[i]); + } + } + + @Override + public void write(WriteBuffer buff, Object obj) { + last.write(buff, obj); + } + + private AutoDetectDataType newType(int typeId) { + switch (typeId) { + case TYPE_NULL: + return new NullType(this); + case TYPE_BOOLEAN: + return new BooleanType(this); + case TYPE_BYTE: + return new ByteType(this); + case TYPE_SHORT: + return new ShortType(this); + case TYPE_CHAR: + return new CharacterType(this); + case TYPE_INT: + return new IntegerType(this); + case TYPE_LONG: + return new LongType(this); + case TYPE_FLOAT: + return new FloatType(this); + case TYPE_DOUBLE: + return new DoubleType(this); + case TYPE_BIG_INTEGER: + return new BigIntegerType(this); + case TYPE_BIG_DECIMAL: + return new
BigDecimalType(this); + case TYPE_STRING: + return new StringType(this); + case TYPE_UUID: + return new UUIDType(this); + case TYPE_DATE: + return new DateType(this); + case TYPE_ARRAY: + return new ObjectArrayType(this); + case TYPE_SERIALIZED_OBJECT: + return new SerializedObjectType(this); + } + throw DataUtils.newIllegalStateException(DataUtils.ERROR_INTERNAL, + "Unsupported type {0}", typeId); + } + + @Override + public Object read(ByteBuffer buff) { + int tag = buff.get(); + int typeId; + if (tag <= TYPE_SERIALIZED_OBJECT) { + typeId = tag; + } else { + switch (tag) { + case TAG_BOOLEAN_TRUE: + typeId = TYPE_BOOLEAN; + break; + case TAG_INTEGER_NEGATIVE: + case TAG_INTEGER_FIXED: + typeId = TYPE_INT; + break; + case TAG_LONG_NEGATIVE: + case TAG_LONG_FIXED: + typeId = TYPE_LONG; + break; + case TAG_BIG_INTEGER_0: + case TAG_BIG_INTEGER_1: + case TAG_BIG_INTEGER_SMALL: + typeId = TYPE_BIG_INTEGER; + break; + case TAG_FLOAT_0: + case TAG_FLOAT_1: + case TAG_FLOAT_FIXED: + typeId = TYPE_FLOAT; + break; + case TAG_DOUBLE_0: + case TAG_DOUBLE_1: + case TAG_DOUBLE_FIXED: + typeId = TYPE_DOUBLE; + break; + case TAG_BIG_DECIMAL_0: + case TAG_BIG_DECIMAL_1: + case TAG_BIG_DECIMAL_SMALL: + case TAG_BIG_DECIMAL_SMALL_SCALED: + typeId = TYPE_BIG_DECIMAL; + break; + default: + if (tag >= TAG_INTEGER_0_15 && tag <= TAG_INTEGER_0_15 + 15) { + typeId = TYPE_INT; + } else if (tag >= TAG_STRING_0_15 + && tag <= TAG_STRING_0_15 + 15) { + typeId = TYPE_STRING; + } else if (tag >= TAG_LONG_0_7 && tag <= TAG_LONG_0_7 + 7) { + typeId = TYPE_LONG; + } else if (tag >= TAG_BYTE_ARRAY_0_15 + && tag <= TAG_BYTE_ARRAY_0_15 + 15) { + typeId = TYPE_ARRAY; + } else { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_FILE_CORRUPT, "Unknown tag {0}", + tag); + } + } + } + AutoDetectDataType t = last; + if (typeId != t.typeId) { + last = t = newType(typeId); + } + return t.read(buff, tag); + } + + private static int getTypeId(Object obj) { + if (obj instanceof Integer) { + return 
TYPE_INT; + } else if (obj instanceof String) { + return TYPE_STRING; + } else if (obj instanceof Long) { + return TYPE_LONG; + } else if (obj instanceof Double) { + return TYPE_DOUBLE; + } else if (obj instanceof Float) { + return TYPE_FLOAT; + } else if (obj instanceof Boolean) { + return TYPE_BOOLEAN; + } else if (obj instanceof UUID) { + return TYPE_UUID; + } else if (obj instanceof Byte) { + return TYPE_BYTE; + } else if (obj instanceof Short) { + return TYPE_SHORT; + } else if (obj instanceof Character) { + return TYPE_CHAR; + } else if (obj == null) { + return TYPE_NULL; + } else if (isDate(obj)) { + return TYPE_DATE; + } else if (isBigInteger(obj)) { + return TYPE_BIG_INTEGER; + } else if (isBigDecimal(obj)) { + return TYPE_BIG_DECIMAL; + } else if (obj.getClass().isArray()) { + return TYPE_ARRAY; + } + return TYPE_SERIALIZED_OBJECT; + } + + /** + * Switch the last remembered type to match the type of the given object. + * + * @param obj the object + * @return the auto-detected type used + */ + AutoDetectDataType switchType(Object obj) { + int typeId = getTypeId(obj); + AutoDetectDataType l = last; + if (typeId != l.typeId) { + last = l = newType(typeId); + } + return l; + } + + /** + * Check whether this object is a BigInteger. + * + * @param obj the object + * @return true if yes + */ + static boolean isBigInteger(Object obj) { + return obj instanceof BigInteger && obj.getClass() == BigInteger.class; + } + + /** + * Check whether this object is a BigDecimal. + * + * @param obj the object + * @return true if yes + */ + static boolean isBigDecimal(Object obj) { + return obj instanceof BigDecimal && obj.getClass() == BigDecimal.class; + } + + /** + * Check whether this object is a date. + * + * @param obj the object + * @return true if yes + */ + static boolean isDate(Object obj) { + return obj instanceof Date && obj.getClass() == Date.class; + } + + /** + * Check whether this object is an array. 
+ * + * @param obj the object + * @return true if yes + */ + static boolean isArray(Object obj) { + return obj != null && obj.getClass().isArray(); + } + + /** + * Get the class id, or null if not found. + * + * @param clazz the class + * @return the class id or null + */ + static Integer getCommonClassId(Class<?> clazz) { + HashMap<Class<?>, Integer> map = COMMON_CLASSES_MAP; + if (map.size() == 0) { + // lazy initialization + // synchronized, because the COMMON_CLASSES_MAP is not + synchronized (map) { + if (map.size() == 0) { + for (int i = 0, size = COMMON_CLASSES.length; i < size; i++) { + map.put(COMMON_CLASSES[i], i); + } + } + } + } + return map.get(clazz); + } + + /** + * Serialize the object to a byte array. + * + * @param obj the object to serialize + * @return the byte array + */ + public static byte[] serialize(Object obj) { + try { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + ObjectOutputStream os = new ObjectOutputStream(out); + os.writeObject(obj); + return out.toByteArray(); + } catch (Throwable e) { + throw DataUtils.newIllegalArgumentException( + "Could not serialize {0}", obj, e); + } + } + + /** + * De-serialize the byte array to an object. + * + * @param data the byte array + * @return the object + */ + public static Object deserialize(byte[] data) { + try { + ByteArrayInputStream in = new ByteArrayInputStream(data); + ObjectInputStream is = new ObjectInputStream(in); + return is.readObject(); + } catch (Throwable e) { + throw DataUtils.newIllegalArgumentException( + "Could not deserialize {0}", Arrays.toString(data), e); + } + } + + /** + * Compare the contents of two byte arrays. If the content or length of the + * first array is smaller than the second array, -1 is returned. If the + * content or length of the second array is smaller than the first array, 1 + * is returned. If the contents and lengths are the same, 0 is returned. + *

    + * This method interprets bytes as unsigned. + * + * @param data1 the first byte array (must not be null) + * @param data2 the second byte array (must not be null) + * @return the result of the comparison (-1, 1 or 0) + */ + public static int compareNotNull(byte[] data1, byte[] data2) { + if (data1 == data2) { + return 0; + } + int len = Math.min(data1.length, data2.length); + for (int i = 0; i < len; i++) { + int b = data1[i] & 255; + int b2 = data2[i] & 255; + if (b != b2) { + return b > b2 ? 1 : -1; + } + } + return Integer.signum(data1.length - data2.length); + } + + /** + * The base class for auto-detect data types. + */ + abstract static class AutoDetectDataType implements DataType { + + protected final ObjectDataType base; + protected final int typeId; + + AutoDetectDataType(ObjectDataType base, int typeId) { + this.base = base; + this.typeId = typeId; + } + + @Override + public int getMemory(Object o) { + return getType(o).getMemory(o); + } + + @Override + public int compare(Object aObj, Object bObj) { + AutoDetectDataType aType = getType(aObj); + AutoDetectDataType bType = getType(bObj); + int typeDiff = aType.typeId - bType.typeId; + if (typeDiff == 0) { + return aType.compare(aObj, bObj); + } + return Integer.signum(typeDiff); + } + + @Override + public void write(WriteBuffer buff, Object[] obj, + int len, boolean key) { + for (int i = 0; i < len; i++) { + write(buff, obj[i]); + } + } + + @Override + public void write(WriteBuffer buff, Object o) { + getType(o).write(buff, o); + } + + @Override + public void read(ByteBuffer buff, Object[] obj, + int len, boolean key) { + for (int i = 0; i < len; i++) { + obj[i] = read(buff); + } + } + + @Override + public final Object read(ByteBuffer buff) { + throw DataUtils.newIllegalStateException(DataUtils.ERROR_INTERNAL, + "Internal error"); + } + + /** + * Get the type for the given object. 
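The `serialize`/`deserialize` helpers above are thin wrappers over standard `java.io` object serialization (plus H2's `DataUtils` exception wrapping). A standalone round-trip sketch of the same mechanism, with the H2-specific wrapping replaced by a plain `RuntimeException`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Round-trip through plain java.io serialization, the mechanism the
// serialize/deserialize helpers above rely on (illustrative class name).
public class SerializeSketch {
    static byte[] serialize(Object obj) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (ObjectOutputStream os = new ObjectOutputStream(out)) {
                os.writeObject(obj); // obj must implement Serializable
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    static Object deserialize(byte[] data) {
        try (ObjectInputStream is =
                new ObjectInputStream(new ByteArrayInputStream(data))) {
            return is.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```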
+ * + * @param o the object + * @return the type + */ + AutoDetectDataType getType(Object o) { + return base.switchType(o); + } + + /** + * Read an object from the buffer. + * + * @param buff the buffer + * @param tag the first byte of the object (usually the type) + * @return the read object + */ + abstract Object read(ByteBuffer buff, int tag); + + } + + /** + * The type for the null value + */ + static class NullType extends AutoDetectDataType { + + NullType(ObjectDataType base) { + super(base, TYPE_NULL); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj == null && bObj == null) { + return 0; + } else if (aObj == null) { + return -1; + } else if (bObj == null) { + return 1; + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj == null ? 0 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (obj != null) { + super.write(buff, obj); + return; + } + buff.put((byte) TYPE_NULL); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + return null; + } + + } + + /** + * The type for boolean true and false. + */ + static class BooleanType extends AutoDetectDataType { + + BooleanType(ObjectDataType base) { + super(base, TYPE_BOOLEAN); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Boolean && bObj instanceof Boolean) { + Boolean a = (Boolean) aObj; + Boolean b = (Boolean) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Boolean ? 0 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Boolean)) { + super.write(buff, obj); + return; + } + int tag = ((Boolean) obj) ? TAG_BOOLEAN_TRUE : TYPE_BOOLEAN; + buff.put((byte) tag); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + return tag == TYPE_BOOLEAN ? 
Boolean.FALSE : Boolean.TRUE; + } + + } + + /** + * The type for byte objects. + */ + static class ByteType extends AutoDetectDataType { + + ByteType(ObjectDataType base) { + super(base, TYPE_BYTE); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Byte && bObj instanceof Byte) { + Byte a = (Byte) aObj; + Byte b = (Byte) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Byte ? 0 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Byte)) { + super.write(buff, obj); + return; + } + buff.put((byte) TYPE_BYTE); + buff.put(((Byte) obj).byteValue()); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + return buff.get(); + } + + } + + /** + * The type for character objects. + */ + static class CharacterType extends AutoDetectDataType { + + CharacterType(ObjectDataType base) { + super(base, TYPE_CHAR); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Character && bObj instanceof Character) { + Character a = (Character) aObj; + Character b = (Character) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Character ? 24 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Character)) { + super.write(buff, obj); + return; + } + buff.put((byte) TYPE_CHAR); + buff.putChar(((Character) obj).charValue()); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + return buff.getChar(); + } + + } + + /** + * The type for short objects. 
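The integer encoding that the tag constants above describe folds very small values into the tag byte itself: 0..15 are stored as `TAG_INTEGER_0_15 + x` with no payload at all. A simplified standalone sketch of that idea — it reuses the constant values from above but omits the var-int path the real code also has, so it is not H2's actual wire format:

```java
import java.nio.ByteBuffer;

// Simplified sketch of the small-integer tagging scheme: 0..15 live
// entirely in the tag byte; everything else falls back to a tag plus a
// fixed 4-byte payload. The real code adds a var-int path in between.
public class IntTagSketch {
    static final int TAG_INTEGER_0_15 = 64;  // same values as above
    static final int TAG_INTEGER_FIXED = 34;

    static byte[] encode(int x) {
        if (x >= 0 && x <= 15) {
            return new byte[] { (byte) (TAG_INTEGER_0_15 + x) }; // 1 byte total
        }
        return ByteBuffer.allocate(5)
                .put((byte) TAG_INTEGER_FIXED).putInt(x).array();
    }

    static int decode(byte[] enc) {
        int tag = enc[0] & 0xff;
        if (tag >= TAG_INTEGER_0_15 && tag <= TAG_INTEGER_0_15 + 15) {
            return tag - TAG_INTEGER_0_15; // value recovered from the tag alone
        }
        return ByteBuffer.wrap(enc, 1, 4).getInt();
    }
}
```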
+ */ + static class ShortType extends AutoDetectDataType { + + ShortType(ObjectDataType base) { + super(base, TYPE_SHORT); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Short && bObj instanceof Short) { + Short a = (Short) aObj; + Short b = (Short) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Short ? 24 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Short)) { + super.write(buff, obj); + return; + } + buff.put((byte) TYPE_SHORT); + buff.putShort(((Short) obj).shortValue()); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + return buff.getShort(); + } + + } + + /** + * The type for integer objects. + */ + static class IntegerType extends AutoDetectDataType { + + IntegerType(ObjectDataType base) { + super(base, TYPE_INT); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Integer && bObj instanceof Integer) { + Integer a = (Integer) aObj; + Integer b = (Integer) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Integer ? 
24 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Integer)) { + super.write(buff, obj); + return; + } + int x = (Integer) obj; + if (x < 0) { + // -Integer.MIN_VALUE is smaller than 0 + if (-x < 0 || -x > DataUtils.COMPRESSED_VAR_INT_MAX) { + buff.put((byte) TAG_INTEGER_FIXED).putInt(x); + } else { + buff.put((byte) TAG_INTEGER_NEGATIVE).putVarInt(-x); + } + } else if (x <= 15) { + buff.put((byte) (TAG_INTEGER_0_15 + x)); + } else if (x <= DataUtils.COMPRESSED_VAR_INT_MAX) { + buff.put((byte) TYPE_INT).putVarInt(x); + } else { + buff.put((byte) TAG_INTEGER_FIXED).putInt(x); + } + } + + @Override + public Object read(ByteBuffer buff, int tag) { + switch (tag) { + case TYPE_INT: + return DataUtils.readVarInt(buff); + case TAG_INTEGER_NEGATIVE: + return -DataUtils.readVarInt(buff); + case TAG_INTEGER_FIXED: + return buff.getInt(); + } + return tag - TAG_INTEGER_0_15; + } + + } + + /** + * The type for long objects. + */ + static class LongType extends AutoDetectDataType { + + LongType(ObjectDataType base) { + super(base, TYPE_LONG); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Long && bObj instanceof Long) { + Long a = (Long) aObj; + Long b = (Long) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Long ? 
30 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Long)) { + super.write(buff, obj); + return; + } + long x = (Long) obj; + if (x < 0) { + // -Long.MIN_VALUE is smaller than 0 + if (-x < 0 || -x > DataUtils.COMPRESSED_VAR_LONG_MAX) { + buff.put((byte) TAG_LONG_FIXED); + buff.putLong(x); + } else { + buff.put((byte) TAG_LONG_NEGATIVE); + buff.putVarLong(-x); + } + } else if (x <= 7) { + buff.put((byte) (TAG_LONG_0_7 + x)); + } else if (x <= DataUtils.COMPRESSED_VAR_LONG_MAX) { + buff.put((byte) TYPE_LONG); + buff.putVarLong(x); + } else { + buff.put((byte) TAG_LONG_FIXED); + buff.putLong(x); + } + } + + @Override + public Object read(ByteBuffer buff, int tag) { + switch (tag) { + case TYPE_LONG: + return DataUtils.readVarLong(buff); + case TAG_LONG_NEGATIVE: + return -DataUtils.readVarLong(buff); + case TAG_LONG_FIXED: + return buff.getLong(); + } + return (long) (tag - TAG_LONG_0_7); + } + + } + + /** + * The type for float objects. + */ + static class FloatType extends AutoDetectDataType { + + FloatType(ObjectDataType base) { + super(base, TYPE_FLOAT); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Float && bObj instanceof Float) { + Float a = (Float) aObj; + Float b = (Float) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Float ? 
24 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Float)) { + super.write(buff, obj); + return; + } + float x = (Float) obj; + int f = Float.floatToIntBits(x); + if (f == ObjectDataType.FLOAT_ZERO_BITS) { + buff.put((byte) TAG_FLOAT_0); + } else if (f == ObjectDataType.FLOAT_ONE_BITS) { + buff.put((byte) TAG_FLOAT_1); + } else { + int value = Integer.reverse(f); + if (value >= 0 && value <= DataUtils.COMPRESSED_VAR_INT_MAX) { + buff.put((byte) TYPE_FLOAT).putVarInt(value); + } else { + buff.put((byte) TAG_FLOAT_FIXED).putFloat(x); + } + } + } + + @Override + public Object read(ByteBuffer buff, int tag) { + switch (tag) { + case TAG_FLOAT_0: + return 0f; + case TAG_FLOAT_1: + return 1f; + case TAG_FLOAT_FIXED: + return buff.getFloat(); + } + return Float.intBitsToFloat(Integer.reverse(DataUtils + .readVarInt(buff))); + } + + } + + /** + * The type for double objects. + */ + static class DoubleType extends AutoDetectDataType { + + DoubleType(ObjectDataType base) { + super(base, TYPE_DOUBLE); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof Double && bObj instanceof Double) { + Double a = (Double) aObj; + Double b = (Double) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof Double ? 
30 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof Double)) { + super.write(buff, obj); + return; + } + double x = (Double) obj; + long d = Double.doubleToLongBits(x); + if (d == ObjectDataType.DOUBLE_ZERO_BITS) { + buff.put((byte) TAG_DOUBLE_0); + } else if (d == ObjectDataType.DOUBLE_ONE_BITS) { + buff.put((byte) TAG_DOUBLE_1); + } else { + long value = Long.reverse(d); + if (value >= 0 && value <= DataUtils.COMPRESSED_VAR_LONG_MAX) { + buff.put((byte) TYPE_DOUBLE); + buff.putVarLong(value); + } else { + buff.put((byte) TAG_DOUBLE_FIXED); + buff.putDouble(x); + } + } + } + + @Override + public Object read(ByteBuffer buff, int tag) { + switch (tag) { + case TAG_DOUBLE_0: + return 0d; + case TAG_DOUBLE_1: + return 1d; + case TAG_DOUBLE_FIXED: + return buff.getDouble(); + } + return Double.longBitsToDouble(Long.reverse(DataUtils + .readVarLong(buff))); + } + + } + + /** + * The type for BigInteger objects. + */ + static class BigIntegerType extends AutoDetectDataType { + + BigIntegerType(ObjectDataType base) { + super(base, TYPE_BIG_INTEGER); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (isBigInteger(aObj) && isBigInteger(bObj)) { + BigInteger a = (BigInteger) aObj; + BigInteger b = (BigInteger) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return isBigInteger(obj) ? 
100 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!isBigInteger(obj)) { + super.write(buff, obj); + return; + } + BigInteger x = (BigInteger) obj; + if (BigInteger.ZERO.equals(x)) { + buff.put((byte) TAG_BIG_INTEGER_0); + } else if (BigInteger.ONE.equals(x)) { + buff.put((byte) TAG_BIG_INTEGER_1); + } else { + int bits = x.bitLength(); + if (bits <= 63) { + buff.put((byte) TAG_BIG_INTEGER_SMALL).putVarLong( + x.longValue()); + } else { + byte[] bytes = x.toByteArray(); + buff.put((byte) TYPE_BIG_INTEGER).putVarInt(bytes.length) + .put(bytes); + } + } + } + + @Override + public Object read(ByteBuffer buff, int tag) { + switch (tag) { + case TAG_BIG_INTEGER_0: + return BigInteger.ZERO; + case TAG_BIG_INTEGER_1: + return BigInteger.ONE; + case TAG_BIG_INTEGER_SMALL: + return BigInteger.valueOf(DataUtils.readVarLong(buff)); + } + int len = DataUtils.readVarInt(buff); + byte[] bytes = Utils.newBytes(len); + buff.get(bytes); + return new BigInteger(bytes); + } + + } + + /** + * The type for BigDecimal objects. + */ + static class BigDecimalType extends AutoDetectDataType { + + BigDecimalType(ObjectDataType base) { + super(base, TYPE_BIG_DECIMAL); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (isBigDecimal(aObj) && isBigDecimal(bObj)) { + BigDecimal a = (BigDecimal) aObj; + BigDecimal b = (BigDecimal) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public int getMemory(Object obj) { + return isBigDecimal(obj) ? 
150 : super.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!isBigDecimal(obj)) { + super.write(buff, obj); + return; + } + BigDecimal x = (BigDecimal) obj; + if (BigDecimal.ZERO.equals(x)) { + buff.put((byte) TAG_BIG_DECIMAL_0); + } else if (BigDecimal.ONE.equals(x)) { + buff.put((byte) TAG_BIG_DECIMAL_1); + } else { + int scale = x.scale(); + BigInteger b = x.unscaledValue(); + int bits = b.bitLength(); + if (bits < 64) { + if (scale == 0) { + buff.put((byte) TAG_BIG_DECIMAL_SMALL); + } else { + buff.put((byte) TAG_BIG_DECIMAL_SMALL_SCALED) + .putVarInt(scale); + } + buff.putVarLong(b.longValue()); + } else { + byte[] bytes = b.toByteArray(); + buff.put((byte) TYPE_BIG_DECIMAL).putVarInt(scale) + .putVarInt(bytes.length).put(bytes); + } + } + } + + @Override + public Object read(ByteBuffer buff, int tag) { + switch (tag) { + case TAG_BIG_DECIMAL_0: + return BigDecimal.ZERO; + case TAG_BIG_DECIMAL_1: + return BigDecimal.ONE; + case TAG_BIG_DECIMAL_SMALL: + return BigDecimal.valueOf(DataUtils.readVarLong(buff)); + case TAG_BIG_DECIMAL_SMALL_SCALED: + int scale = DataUtils.readVarInt(buff); + return BigDecimal.valueOf(DataUtils.readVarLong(buff), scale); + } + int scale = DataUtils.readVarInt(buff); + int len = DataUtils.readVarInt(buff); + byte[] bytes = Utils.newBytes(len); + buff.get(bytes); + BigInteger b = new BigInteger(bytes); + return new BigDecimal(b, scale); + } + + } + + /** + * The type for string objects. 
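`StringType` below applies the same tag-folding trick to string lengths: for strings of up to 15 characters the length goes into the tag byte (`TAG_STRING_0_15 + len`), so no separate length field is written. An ASCII-only standalone sketch of that scheme — the real code uses `putStringData` and a var-int length for longer strings:

```java
import java.nio.charset.StandardCharsets;

// ASCII-only sketch of the small-string scheme: the length is folded
// into the tag byte (TAG_STRING_0_15 + len) for lengths <= 15.
public class StringTagSketch {
    static final int TAG_STRING_0_15 = 88; // same value as above

    static byte[] encode(String s) {
        if (s.length() > 15) {
            throw new IllegalArgumentException("sketch handles len <= 15 only");
        }
        byte[] data = s.getBytes(StandardCharsets.US_ASCII);
        byte[] out = new byte[1 + data.length];
        out[0] = (byte) (TAG_STRING_0_15 + s.length());
        System.arraycopy(data, 0, out, 1, data.length);
        return out;
    }

    static String decode(byte[] enc) {
        int len = (enc[0] & 0xff) - TAG_STRING_0_15; // length from the tag
        return new String(enc, 1, len, StandardCharsets.US_ASCII);
    }
}
```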
+ */ + static class StringType extends AutoDetectDataType { + + StringType(ObjectDataType base) { + super(base, TYPE_STRING); + } + + @Override + public int getMemory(Object obj) { + if (!(obj instanceof String)) { + return super.getMemory(obj); + } + return 24 + 2 * obj.toString().length(); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof String && bObj instanceof String) { + return aObj.toString().compareTo(bObj.toString()); + } + return super.compare(aObj, bObj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof String)) { + super.write(buff, obj); + return; + } + String s = (String) obj; + int len = s.length(); + if (len <= 15) { + buff.put((byte) (TAG_STRING_0_15 + len)); + } else { + buff.put((byte) TYPE_STRING).putVarInt(len); + } + buff.putStringData(s, len); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + int len; + if (tag == TYPE_STRING) { + len = DataUtils.readVarInt(buff); + } else { + len = tag - TAG_STRING_0_15; + } + return DataUtils.readString(buff, len); + } + + } + + /** + * The type for UUID objects. + */ + static class UUIDType extends AutoDetectDataType { + + UUIDType(ObjectDataType base) { + super(base, TYPE_UUID); + } + + @Override + public int getMemory(Object obj) { + return obj instanceof UUID ? 
40 : super.getMemory(obj); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (aObj instanceof UUID && bObj instanceof UUID) { + UUID a = (UUID) aObj; + UUID b = (UUID) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!(obj instanceof UUID)) { + super.write(buff, obj); + return; + } + buff.put((byte) TYPE_UUID); + UUID a = (UUID) obj; + buff.putLong(a.getMostSignificantBits()); + buff.putLong(a.getLeastSignificantBits()); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + long a = buff.getLong(), b = buff.getLong(); + return new UUID(a, b); + } + + } + + /** + * The type for java.util.Date objects. + */ + static class DateType extends AutoDetectDataType { + + DateType(ObjectDataType base) { + super(base, TYPE_DATE); + } + + @Override + public int getMemory(Object obj) { + return isDate(obj) ? 40 : super.getMemory(obj); + } + + @Override + public int compare(Object aObj, Object bObj) { + if (isDate(aObj) && isDate(bObj)) { + Date a = (Date) aObj; + Date b = (Date) bObj; + return a.compareTo(b); + } + return super.compare(aObj, bObj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!isDate(obj)) { + super.write(buff, obj); + return; + } + buff.put((byte) TYPE_DATE); + Date a = (Date) obj; + buff.putLong(a.getTime()); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + long a = buff.getLong(); + return new Date(a); + } + + } + + /** + * The type for object arrays. 
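`ObjectArrayType` below delegates `byte[]` elements to the unsigned, lexicographic comparison implemented by `compareNotNull` above. A standalone sketch of that comparison: each byte is masked with `0xff` so that `(byte) 0x80` compares greater than `0x7f`, and on a common prefix the shorter array sorts first.

```java
// Standalone copy of the unsigned byte[] comparison logic used by
// compareNotNull above (illustrative class name, same semantics).
public class UnsignedCompareSketch {
    static int compareNotNull(byte[] a, byte[] b) {
        if (a == b) {
            return 0;
        }
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int x = a[i] & 0xff, y = b[i] & 0xff; // unsigned view of each byte
            if (x != y) {
                return x > y ? 1 : -1;
            }
        }
        return Integer.signum(a.length - b.length); // shorter array sorts first
    }
}
```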
+ */ + static class ObjectArrayType extends AutoDetectDataType { + + private final ObjectDataType elementType = new ObjectDataType(); + + ObjectArrayType(ObjectDataType base) { + super(base, TYPE_ARRAY); + } + + @Override + public int getMemory(Object obj) { + if (!isArray(obj)) { + return super.getMemory(obj); + } + int size = 64; + Class type = obj.getClass().getComponentType(); + if (type.isPrimitive()) { + int len = Array.getLength(obj); + if (type == boolean.class) { + size += len; + } else if (type == byte.class) { + size += len; + } else if (type == char.class) { + size += len * 2; + } else if (type == short.class) { + size += len * 2; + } else if (type == int.class) { + size += len * 4; + } else if (type == float.class) { + size += len * 4; + } else if (type == double.class) { + size += len * 8; + } else if (type == long.class) { + size += len * 8; + } + } else { + for (Object x : (Object[]) obj) { + if (x != null) { + size += elementType.getMemory(x); + } + } + } + // we say they are larger, because these objects + // use quite a lot of disk space + return size * 2; + } + + @Override + public int compare(Object aObj, Object bObj) { + if (!isArray(aObj) || !isArray(bObj)) { + return super.compare(aObj, bObj); + } + if (aObj == bObj) { + return 0; + } + Class type = aObj.getClass().getComponentType(); + Class bType = bObj.getClass().getComponentType(); + if (type != bType) { + Integer classA = getCommonClassId(type); + Integer classB = getCommonClassId(bType); + if (classA != null) { + if (classB != null) { + return classA.compareTo(classB); + } + return -1; + } else if (classB != null) { + return 1; + } + return type.getName().compareTo(bType.getName()); + } + int aLen = Array.getLength(aObj); + int bLen = Array.getLength(bObj); + int len = Math.min(aLen, bLen); + if (type.isPrimitive()) { + if (type == byte.class) { + byte[] a = (byte[]) aObj; + byte[] b = (byte[]) bObj; + return compareNotNull(a, b); + } + for (int i = 0; i < len; i++) { + int x; + if 
(type == boolean.class) { + x = Integer.signum((((boolean[]) aObj)[i] ? 1 : 0) + - (((boolean[]) bObj)[i] ? 1 : 0)); + } else if (type == char.class) { + x = Integer.signum((((char[]) aObj)[i]) + - (((char[]) bObj)[i])); + } else if (type == short.class) { + x = Integer.signum((((short[]) aObj)[i]) + - (((short[]) bObj)[i])); + } else if (type == int.class) { + int a = ((int[]) aObj)[i]; + int b = ((int[]) bObj)[i]; + x = Integer.compare(a, b); + } else if (type == float.class) { + x = Float.compare(((float[]) aObj)[i], + ((float[]) bObj)[i]); + } else if (type == double.class) { + x = Double.compare(((double[]) aObj)[i], + ((double[]) bObj)[i]); + } else { + long a = ((long[]) aObj)[i]; + long b = ((long[]) bObj)[i]; + x = Long.compare(a, b); + } + if (x != 0) { + return x; + } + } + } else { + Object[] a = (Object[]) aObj; + Object[] b = (Object[]) bObj; + for (int i = 0; i < len; i++) { + int comp = elementType.compare(a[i], b[i]); + if (comp != 0) { + return comp; + } + } + } + return Integer.compare(aLen, bLen); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + if (!isArray(obj)) { + super.write(buff, obj); + return; + } + Class type = obj.getClass().getComponentType(); + Integer classId = getCommonClassId(type); + if (classId != null) { + if (type.isPrimitive()) { + if (type == byte.class) { + byte[] data = (byte[]) obj; + int len = data.length; + if (len <= 15) { + buff.put((byte) (TAG_BYTE_ARRAY_0_15 + len)); + } else { + buff.put((byte) TYPE_ARRAY) + .put((byte) classId.intValue()) + .putVarInt(len); + } + buff.put(data); + return; + } + int len = Array.getLength(obj); + buff.put((byte) TYPE_ARRAY).put((byte) classId.intValue()) + .putVarInt(len); + for (int i = 0; i < len; i++) { + if (type == boolean.class) { + buff.put((byte) (((boolean[]) obj)[i] ? 
1 : 0)); + } else if (type == char.class) { + buff.putChar(((char[]) obj)[i]); + } else if (type == short.class) { + buff.putShort(((short[]) obj)[i]); + } else if (type == int.class) { + buff.putInt(((int[]) obj)[i]); + } else if (type == float.class) { + buff.putFloat(((float[]) obj)[i]); + } else if (type == double.class) { + buff.putDouble(((double[]) obj)[i]); + } else { + buff.putLong(((long[]) obj)[i]); + } + } + return; + } + buff.put((byte) TYPE_ARRAY).put((byte) classId.intValue()); + } else { + buff.put((byte) TYPE_ARRAY).put((byte) -1); + String c = type.getName(); + StringDataType.INSTANCE.write(buff, c); + } + Object[] array = (Object[]) obj; + int len = array.length; + buff.putVarInt(len); + for (Object x : array) { + elementType.write(buff, x); + } + } + + @Override + public Object read(ByteBuffer buff, int tag) { + if (tag != TYPE_ARRAY) { + byte[] data; + int len = tag - TAG_BYTE_ARRAY_0_15; + data = Utils.newBytes(len); + buff.get(data); + return data; + } + int ct = buff.get(); + Class clazz; + Object obj; + if (ct == -1) { + String componentType = StringDataType.INSTANCE.read(buff); + try { + clazz = Class.forName(componentType); + } catch (Exception e) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_SERIALIZATION, + "Could not get class {0}", componentType, e); + } + } else { + clazz = COMMON_CLASSES[ct]; + } + int len = DataUtils.readVarInt(buff); + try { + obj = Array.newInstance(clazz, len); + } catch (Exception e) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_SERIALIZATION, + "Could not create array of type {0} length {1}", clazz, + len, e); + } + if (clazz.isPrimitive()) { + for (int i = 0; i < len; i++) { + if (clazz == boolean.class) { + ((boolean[]) obj)[i] = buff.get() == 1; + } else if (clazz == byte.class) { + ((byte[]) obj)[i] = buff.get(); + } else if (clazz == char.class) { + ((char[]) obj)[i] = buff.getChar(); + } else if (clazz == short.class) { + ((short[]) obj)[i] = buff.getShort(); + } else 
if (clazz == int.class) { + ((int[]) obj)[i] = buff.getInt(); + } else if (clazz == float.class) { + ((float[]) obj)[i] = buff.getFloat(); + } else if (clazz == double.class) { + ((double[]) obj)[i] = buff.getDouble(); + } else { + ((long[]) obj)[i] = buff.getLong(); + } + } + } else { + Object[] array = (Object[]) obj; + for (int i = 0; i < len; i++) { + array[i] = elementType.read(buff); + } + } + return obj; + } + + } + + /** + * The type for serialized objects. + */ + static class SerializedObjectType extends AutoDetectDataType { + + private int averageSize = 10_000; + + SerializedObjectType(ObjectDataType base) { + super(base, TYPE_SERIALIZED_OBJECT); + } + + @SuppressWarnings("unchecked") + @Override + public int compare(Object aObj, Object bObj) { + if (aObj == bObj) { + return 0; + } + DataType ta = getType(aObj); + DataType tb = getType(bObj); + if (ta != this || tb != this) { + if (ta == tb) { + return ta.compare(aObj, bObj); + } + return super.compare(aObj, bObj); + } + // TODO ensure comparable type (both may be comparable but not + // with each other) + if (aObj instanceof Comparable) { + if (aObj.getClass().isAssignableFrom(bObj.getClass())) { + return ((Comparable) aObj).compareTo(bObj); + } + } + if (bObj instanceof Comparable) { + if (bObj.getClass().isAssignableFrom(aObj.getClass())) { + return -((Comparable) bObj).compareTo(aObj); + } + } + byte[] a = serialize(aObj); + byte[] b = serialize(bObj); + return compareNotNull(a, b); + } + + @Override + public int getMemory(Object obj) { + DataType t = getType(obj); + if (t == this) { + return averageSize; + } + return t.getMemory(obj); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + DataType t = getType(obj); + if (t != this) { + t.write(buff, obj); + return; + } + byte[] data = serialize(obj); + // we say they are larger, because these objects + // use quite a lot of disk space + int size = data.length * 2; + // adjust the average size + // using an exponential moving average 
+ averageSize = (size + 15 * averageSize) / 16; + buff.put((byte) TYPE_SERIALIZED_OBJECT).putVarInt(data.length) + .put(data); + } + + @Override + public Object read(ByteBuffer buff, int tag) { + int len = DataUtils.readVarInt(buff); + byte[] data = Utils.newBytes(len); + int size = data.length * 2; + // adjust the average size + // using an exponential moving average + averageSize = (size + 15 * averageSize) / 16; + buff.get(data); + return deserialize(data); + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/mvstore/type/StringDataType.java b/modules/h2/src/main/java/org/h2/mvstore/type/StringDataType.java new file mode 100644 index 0000000000000..93d4b19c85d03 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/type/StringDataType.java @@ -0,0 +1,57 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.mvstore.type; + +import java.nio.ByteBuffer; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.WriteBuffer; + +/** + * A string type. 
+ */ +public class StringDataType implements DataType { + + public static final StringDataType INSTANCE = new StringDataType(); + + @Override + public int compare(Object a, Object b) { + return a.toString().compareTo(b.toString()); + } + + @Override + public int getMemory(Object obj) { + return 24 + 2 * obj.toString().length(); + } + + @Override + public void read(ByteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + obj[i] = read(buff); + } + } + + @Override + public void write(WriteBuffer buff, Object[] obj, int len, boolean key) { + for (int i = 0; i < len; i++) { + write(buff, obj[i]); + } + } + + @Override + public String read(ByteBuffer buff) { + int len = DataUtils.readVarInt(buff); + return DataUtils.readString(buff, len); + } + + @Override + public void write(WriteBuffer buff, Object obj) { + String s = obj.toString(); + int len = s.length(); + buff.putVarInt(len).putStringData(s, len); + } + +} + diff --git a/modules/h2/src/main/java/org/h2/mvstore/type/package.html b/modules/h2/src/main/java/org/h2/mvstore/type/package.html new file mode 100644 index 0000000000000..02ae23f32de3d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/mvstore/type/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

+
+Data types and serialization / deserialization
+
+
\ No newline at end of file
diff --git a/modules/h2/src/main/java/org/h2/package.html b/modules/h2/src/main/java/org/h2/package.html
new file mode 100644
index 0000000000000..228a08b1f5d3c
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/package.html
@@ -0,0 +1,14 @@
+
+
+
+
+Javadoc package documentation
+
+
+Implementation of the JDBC driver.
+
+

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/res/_messages_cs.prop b/modules/h2/src/main/java/org/h2/res/_messages_cs.prop new file mode 100644 index 0000000000000..b99a1e5a2ae7b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_cs.prop @@ -0,0 +1,179 @@ +.translator=Hannibal (http://hannibal.cestiny.cz/) +02000=Žádná data nejsou k dispozici +07001=Neplatný počet parametrů pro {0}, očekávaný počet: {1} +08000=Chyba při otevírání databáze: {0} +21S02=Počet sloupců nesouhlasí +22001=Příliš dlouhá hodnota pro sloupec {0}: {1} +22003=Číselná hodnota je mimo rozsah: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=Nelze zpracovat konstantu {0} {1} +22012=Dělení nulou: {0} +22018=Chyba při převodu dat {0} +22025=Chyba v LIKE escapování: {0} +22030=#Value not permitted for column {0}: {1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=Pro sloupec {0} není hodnota NULL povolena +23503=Nedodržení omezení referenční integrity: {0} +23505=Nedodržení unikátního indexu nebo primárního klíče: {0} +23506=Nedodržení omezení referenční integrity: {0} +23507=Nebyla nastavena žádná výchozí hodnota pro sloupec {0} +23513=Nedodržení omezení kontroly: {0} +23514=#Check constraint invalid: {0} +28000=Nesprávné uživatelské jméno nebo heslo +40001=Detekován deadlock. Probíhající transakce byla vrácena zpět. Podrobnosti: {0} +42000=Chyba syntaxe v SQL příkazu {0} +42001=Chyba syntaxe v SQL příkazu {0}; očekáváno {1} +42S01=Tabulka {0} již existuje +42S02=Tabulka {0} nenalezena +42S11=Index {0} již existuje +42S12=Index {0} nenalezen +42S21=Duplicitní název sloupce {0} +42S22=Sloupec {0} nenalezen +42S32=Nastavení {0} nenalezeno +57014=Příkaz byl zrušen nebo připojení vypršelo +90000=Funkce {0} musí vracet výsledek +90001=Metoda neumožňuje dotazování. 
Použijte execute nebo executeQuery namísto executeUpdate +90002=Metoda umožňuje pouze pro dotazování. Použijte execute nebo executeUpdate namísto executeQuery +90003=Hexadecimální řetězec s lichým počtem znaků: {0} +90004=Hexadecimální řetězec obsahuje neplatný znak: {0} +90006=#Sequence {0} has run out of numbers +90007=Tento objekt byl již uzavřen +90008=Neplatná hodnota {0} pro parametr {1} +90009=#Unable to create or alter sequence {0} because of invalid attributes (start value {1}, min value {2}, max value {3}, increment {4}) +90010=#Invalid TO_CHAR format {0} +90011=#A file path that is implicitly relative to the current working directory is not allowed in the database URL {0}. Use an absolute path, ~/name, ./name, or the baseDir setting instead. +90012=Parametr {0} není nastaven +90013=Databáze {0} nenalezena +90014=Chyba zpracování {0} +90015=SUM nebo AVG pro nesprávný datový typ {0} +90016=Sloupec {0} musí být uveden v seznamu GROUP BY +90017=Pokus o nastavení druhého primárního klíče +90018=Připojení nebylo ukončeno aplikací a bude automaticky odstraněno +90019=Nelze smazat stávajícího uživatele +90020=Databáze je pravděpodobně používána: {0}. Možná řešení: uzavřete všechny ostatní připojení; použijte režim server +90021=#This combination of database settings is not supported: {0} +90022=Funkce {0} nenalezena +90023=Sloupec {0} nesmí mít možnou hodnotu NULL +90024=Chyba při přejmenování souboru {0} na {1} +90025=Nelze smazat soubor {0} +90026=Serializace selhala, příčina: {0} +90027=Deserializace selhala, příčina: {0} +90028=Výjimka vstupu a výstupu: {0} +90029=Nyní nelze řádek aktualizovat +90030=Při čtení záznamu zjištěno poškození souboru: {0}. 
Možná řešení: použijte nástroj pro zotavení +90031=Výjimka vstupu a výstupu: {0}; {1} +90032=Uživatel {0} nenalezena +90033=Uživatel {0} již existuje +90034=Chyba log souboru: {0}, příčina: {1} +90035=Sekvence {0} již existuje +90036=Sekvence {0} nenalezena +90037=Pohled {0} nenalezen +90038=Pohled {0} již existuje +90039=#This CLOB or BLOB reference timed out: {0} +90040=Pro tuto operaci je vyžadováno oprávnění Admin +90041=Trigger {0} již existuje +90042=Trigger {0} nenalezen +90043=Chyba při vytvoření nebo zavedení triggeru {0} objekt, třída {1}, příčina: {2}; pro podrobnosti prostudujte původní příčinu +90044=Chyba při provádění triggeru {0}, třída {1}, příčina: {2}; pro podrobnosti prostudujte původní příčinu +90045=Omezení {0} již existuje +90046=Chyba formátu URL; musí být {0}, je však {1} +90047=Verze nesouhlasí, verze ovladače je {0}, ale verze serveru je {1} +90048=Nepodporovaná verze souboru databáze nebo neplatná hlavička souboru {0} +90049=Chyba šifrování v souboru {0} +90050=Nesprávný formát hesla, musí být: heslo k souboru uživatelské heslo +90051=#Scale(${0}) must not be bigger than precision({1}) +90052=Vnořený dotaz není pouze jediný sloupec dotazu +90053=Skalární vnořený dotaz obsahuje více než jeden řádek +90054=Neplatné použití agregátní funkce {0} +90055=Nepodporované šifrování {0} +90056=#Function {0}: Invalid date format: {1} +90057=Omezení {0} nenalezeno +90058=Vkládání nebo vrácení změn není povoleno uvnitř triggeru +90059=Dvojsmyslný název sloupce {0} +90060=Nepodporovaný způsob zamykání souboru {0} +90061=Výjimka při otevírání portu {0} (port je již pravděpodobně používán), příčina: {1} +90062=Chyba při vytváření souboru {0} +90063=Uložený bod je neplatný: {0} +90064=Uložený bod není pojmenovaný +90065=Uložený bod je pojmenovaný +90066=Duplicitní vlastnost {0} +90067=Připojení bylo přerušeno: {0} +90068=Výraz pro seřazení {0} musí být v tomto případě uveden v seznamu výsledků +90069=Role {0} již existuje +90070=Role {0} nenalezena 
+90071=Uživatel nebo role {0} nenalezena +90072=Role a oprávnění nelze vzájemně míchat +90073=Shodně pojmenované Java metody musí mít odlišný počet parametrů: {0} a {1} +90074=Role {0} byla již udělena +90075=Sloupec je součástí indexu {0} +90076=Alias funkce {0} již existuje +90077=Alias funkce {0} nenalezen +90078=Schéma {0} již existuje +90079=Schéma {0} nenalezeno +90080=Název schématu musí být shodný +90081=Sloupec {0} obsahuje NULL hodnoty +90082=Sekvence {0} patří k tabulce +90083=Sloupec může být odkazem {0} +90084=Nelze odstranit poslední sloupec {0} +90085=Index {0} patří k omezení {1} +90086=Třída {0} nenalezena +90087=Metoda {0} nenalezena +90088=Neznámý režim {0} +90089=Porovnání nemůže být změněno, protože jde o datovou tabulku: {0} +90090=Schéma {0} nemůže být odstraněno +90091=Role {0} nemůže být odstraněna +90093=Chyba clusteru - databáze je spuštěna v samostatném režimu +90094=Chyba clusteru - databáze je spuštěna v cluster režimu, seznam serverů: {0} +90095=Chyba formátu řetězce: {0} +90096=Nedostatečná oprávnění k objektu {0} +90097=Databáze je pouze pro čtení +90098=Databáze byla ukončena +90099=Chyba nastavení posluchače událostí databáze {0}, příčina: {1} +90101=Špatný XID formát: {0} +90102=Nepodporovaná kompresní volba: {0} +90103=Nepodporovaný kompresní algoritmus: {0} +90104=Chyba komprese +90105=Výjimka při volání uživatelsky definované funkce: {0} +90106=Nelze vyprázdnit {0} +90107=Nelze odstranit {0}, protože {1} na něm závisí +90108=Nedostatek paměti. 
+90109=Pohled {0} je neplatný: {1} +90111=Chyba přístupu propojené tabulky s SQL příkazem {0}, příčina: {1} +90112=Řádek nebyl nalezen při pokusu o smazání z indexu {0} +90113=Nepodporované nastavení připojení {0} +90114=Konstanta {0} již existuje +90115=Konstanta {0} nenalezena +90116=Definice tohoto druhu nejsou povoleny +90117=Vzdálené připojení není na tomto serveru povoleno, zkontrolujte volbu -tcpAllowOthers +90118=Nelze odstranit tabulku {0} +90119=Uživatelský datový typ {0} již existuje +90120=Uživatelský datový typ {0} nenalezen +90121=Databáze byla již ukončena (pro deaktivaci automatického ukončení při zastavení virtuálního stroje přidejte parametr ";DB_CLOSE_ON_EXIT=FALSE" do URL databáze) +90122=Operace není podporována pro tabulku {0}, pokud na tabulku existují pohledy: {1} +90123=Nelze vzájemně míchat indexované a neindexované parametry +90124=Soubor nenalezen: {0} +90125=Neplatná třída, očekáváno {0}, ale obdrženo {1} +90126=Databáze není perzistentní +90127=Vrácený výsledek nelze upravovat. Dotaz musí vybrat všechny sloupce, které jsou definovány jako unikátní klíč. Vybrána může být pouze jediná tabulka. +90128=Vrácený výsledek nelze procházet ani resetovat. Možná budete muset použít conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ...). +90129=Transakce {0} nenalezena +90130=Tato metoda není povolena pro připravený dotaz; místo toho použijte regulérní dotaz. +90131=Souběžná úprava v tabulce {0}: Jiná transakce upravila nebo smazala stejný řádek +90132=Agregátor {0} nenalezen +90133=Nelze změnit nastavení {0}, pokud je již databáze otevřena +90134=Přístup ke třídě {0} byl odepřen +90135=Databáze je spuštěna ve vyhrazeném režimu; nelze otevřít další spojení +90136=Nepodporovaná podmínka vnějšího spojení: {0} +90137=Lze přiřadit pouze proměnné, nikoli: {0} +90138=Neplatný název databáze: {0} +90139=Nenalezena veřejná statická Java metoda: {0} +90140=Vrácený výsledek je pouze pro čtení. 
Možná budete muset použít conn.createStatement(..., ResultSet.CONCUR_UPDATABLE). +90141=#Serializer cannot be changed because there is a data table: {0} +90142=#Step size must not be zero +90143=#Row {1} not found in primary index {0} +HY000=Obecná chyba: {0} +HY004=Neznámý datový typ: {0} +HYC00=Vlastnost není podporována: {0} +HYT00=Timeout se pokouší uzamknout tabulku {0} diff --git a/modules/h2/src/main/java/org/h2/res/_messages_de.prop b/modules/h2/src/main/java/org/h2/res/_messages_de.prop new file mode 100644 index 0000000000000..66c740bd2f796 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_de.prop @@ -0,0 +1,179 @@ +.translator=Thomas Mueller +02000=Keine Daten verfügbar +07001=Ungültige Anzahl Parameter für {0}, erwartet: {1} +08000=Fehler beim Öffnen der Datenbank: {0} +21S02=Anzahl der Felder stimmt nicht überein +22001=Wert zu gross / lang für Feld {0}: {1} +22003=Numerischer Wert ausserhalb des Bereichs: {0} +22004=Numerischer Wert ausserhalb des Bereichs: {0} in Feld {1} +22007=Kann {0} {1} nicht umwandeln +22012=Division durch 0: {0} +22018=Datenumwandlungsfehler beim Umwandeln von {0} +22025=Fehler in LIKE ESCAPE: {0} +22030=Wert nicht erlaubt für Feld {0}: {1} +22031=Wert nicht Teil der Aufzählung {0}: {1} +22032=Leere Aufzählungen sind nicht erlaubt +22033=Doppelte Nennungen sind nicht erlaubt für Aufzählungstypen: {0} +23502=NULL nicht zulässig für Feld {0} +23503=Referentielle Integrität verletzt: {0} +23505=Eindeutiger Index oder Primärschlüssel verletzt: {0} +23506=Referentielle Integrität verletzt: {0} +23507=Kein Vorgabewert für Feld {0} +23513=Bedingung verletzt: {0} +23514=Ungültige Bedingung: {0} +28000=Falscher Benutzer Name oder Passwort +40001=Eine Verklemmung (Deadlock) ist aufgetreten. Die aktuelle Transaktion wurde rückgängig gemacht. 
Details: {0} +42000=Syntax Fehler in SQL Befehl {0} +42001=Syntax Fehler in SQL Befehl {0}; erwartet {1} +42S01=Tabelle {0} besteht bereits +42S02=Tabelle {0} nicht gefunden +42S11=Index {0} besteht bereits +42S12=Index {0} nicht gefunden +42S21=Doppelter Feldname {0} +42S22=Feld {0} nicht gefunden +42S32=Einstellung {0} nicht gefunden +57014=Befehl wurde abgebrochen oder das Session-Timeout ist abgelaufen +90000=Funktion {0} muss Zeilen zurückgeben +90001=Methode nicht zulässig für eine Abfrage. Erlaubt sind execute oder executeQuery, nicht jedoch executeUpdate +90002=Methode nur zulässig for eine Abfrage. Erlaubt sind execute oder executeUpdate, nicht jedoch executeQuery +90003=Hexadezimal Zahl mit einer ungeraden Anzahl Zeichen: {0} +90004=Hexadezimal Zahl enthält unerlaubtes Zeichen: {0} +90006=Die Sequenz {0} hat keine freien Nummern mehr +90007=Das Objekt wurde bereits geschlossen +90008=Unerlaubter Wert {0} für Parameter {1} +90009=Kann die Sequenz {0} nicht ändern aufgrund falscher Attribute (Start-Wert {1}, Minimal-Wert {2}, Maximal-Wert {3}, Inkrement {4}) +90010=Ungültiges TO_CHAR Format {0} +90011=Ein implizit relativer Pfad zum Arbeitsverzeichnis ist nicht erlaubt in der Datenbank URL {0}. Bitte absolute Pfade, ~/name, ./name, oder baseDir verwenden. +90012=Parameter {0} wurde nicht gesetzt +90013=Datenbank {0} nicht gefunden +90014=Fehler beim Parsen von {0} +90015=SUM oder AVG auf falschem Datentyp für {0} +90016=Feld {0} muss in der GROUP BY Liste sein +90017=Versuche, einen zweiten Primärschlüssel zu definieren +90018=Die Datenbank-Verbindung wurde nicht explizit geschlossen (jetzt in der Müllabfuhr) +90019=Kann aktuellen Benutzer nicht löschen +90020=Datenbank wird wahrscheinlich bereits benutzt: {0}. 
Mögliche Lösungen: alle Verbindungen schliessen; Server Modus verwenden +90021=Diese Kombination von Einstellungen wird nicht unterstützt {0} +90022=Funktion {0} nicht gefunden +90023=Feld {0} darf nicht NULL nicht erlauben +90024=Fehler beim Umbenennen der Datei {0} nach {1} +90025=Kann Datei {0} nicht löschen +90026=Serialisierung fehlgeschlagen, Grund: {0} +90027=De-Serialisierung fehlgeschlagen, Grund: {1} +90028=Eingabe/Ausgabe Fehler: {0} +90029=Im Moment nicht auf einer veränderbaren Zeile +90030=Datei fehlerhaft beim Lesen des Datensatzes: {0}. Mögliche Lösung: Recovery Werkzeug verwenden +90031=Eingabe/Ausgabe: {0}; {1} +90032=Benutzer {0} nicht gefunden +90033=Benutzer {0} besteht bereits +90034=Log Datei Fehler: {0}, Grund: {1} +90035=Sequenz {0} besteht bereits +90036=Sequenz {0} nicht gefunden +90037=View {0} nicht gefunden +90038=View {0} besteht bereits +90039=Diese CLOB oder BLOB Reference ist abgelaufen: {0} +90040=Für diese Operation werden Administrator-Rechte benötigt +90041=Trigger {0} besteht bereits +90042=Trigger {0} nicht gefunden +90043=Fehler beim Erzeugen des Triggers {0}, Klasse {1}, Grund: {1}; siehe Ursache für Details +90044=Fehler beim Ausführen des Triggers {0}, Klasse {1}, Grund: {1}; siehe Ursache für Details +90045=Bedingung {0} besteht bereits +90046=URL Format Fehler; erwartet {0}, erhalten {1} +90047=Falsche Version, Treiber Version ist {0}, Server Version ist {1} +90048=Datenbank Datei Version wird nicht unterstützt oder ungültiger Dateikopf in Datei {0} +90049=Verschlüsselungsfehler in Datei {0} +90050=Falsches Passwort Format, benötigt wird: Datei-Passwort Benutzer-Passwort +90051=Skalierung(${0}) darf nicht grösser als Präzision sein({1}) +90052=Unterabfrage gibt mehr als eine Feld zurück +90053=Skalar-Unterabfrage enthält mehr als eine Zeile +90054=Ungültige Verwendung der Aggregat Funktion {0} +90055=Chiffre nicht unterstützt: {0} +90056=Funktion {0}: Ungültiges Datums-Format: {1} +90057=Bedingung {0} nicht gefunden 
+90058=Innerhalb eines Triggers sind Commit und Rollback ist nicht erlaubt +90059=Mehrdeutiger Feldname {0} +90060=Ungültige Datei-Sperr-Methode {0} +90061=Fehler beim Öffnen von Port {0} (Port wird ev. bereits verwendet), Grund: {1} +90062=Fehler beim Erzeugen der Datei {0} +90063=Savepoint ist ungültig: {0} +90064=Savepoint hat keinen Namen +90065=Savepoint hat einen Namen +90066=Doppeltes Merkmahl {0} +90067=Verbindung ist unterbrochen: {0} +90068=Sortier-Ausdruck {0} muss in diesem Fall im Resultat vorkommen +90069=Rolle {0} besteht bereits +90070=Rolle {0} nicht gefunden +90071=Benutzer or Rolle {0} nicht gefunden +90072=Rollen und Rechte können nicht gemischt werden +90073=Java Methoden müssen eine unterschiedliche Anzahl Parameter aufweisen: {0} und {1} +90074=Rolle {0} bereits zugewiesen +90075=Feld ist Teil eines Indexes {0} +90076=Funktions-Alias {0} besteht bereits +90077=Funktions-Alias {0} nicht gefunden +90078=Schema {0} besteht bereits +90079=Schema {0} nicht gefunden +90080=Schema Namen müssen übereinstimmen +90081=Feld {0} enthält NULL Werte +90082=Sequenz {0} gehört zu einer Tabelle +90083=Feld wird referenziert durch {0} +90084=Kann das letzte Feld nicht löschen {0} +90085=Index {0} gehört zur Bedingung {1} +90086=Klasse {0} nicht gefunden +90087=Methode {0} nicht gefunden +90088=Unbekannter Modus {0} +90089=Textvergleich-Modus kann nicht geändert werden wenn eine Daten-Tabelle existiert : {0} +90090=Schema {0} kann nicht gelöscht werden +90091=Rolle {0} kann nicht gelöscht werden +90093=Clustering Fehler - Datenbank läuft bereits im autonomen Modus +90094=Clustering Fehler - Datenbank läuft bereits im Cluster Modus, Serverliste: {0} +90095=Textformat Fehler: {0} +90096=Nicht genug Rechte für Objekt {0} +90097=Die Datenbank ist schreibgeschützt +90098=Die Datenbank ist bereits geschlossen +90099=Fehler beim Setzen des Datenbank Ereignis Empfängers {0}, Grund: {1} +90101=Falsches XID Format: {0} +90102=Datenkompressions-Option nicht unterstützt: 
{0} +90103=Datenkompressions-Algorithmus nicht unterstützt: {0} +90104=Datenkompressions Fehler +90105=Fehler beim Aufruf eine benutzerdefinierten Funktion: {0} +90106=Kann {0} nicht zurücksetzen per TRUNCATE +90107=Kann {0} nicht löschen weil {1} davon abhängt +90108=Nicht genug Hauptspeicher. +90109=View {0} ist ungültig: {1} +90111=Fehler beim Zugriff auf eine verknüpfte Tabelle mit SQL Befehl {0}, Grund: {1} +90112=Zeile nicht gefunden beim Löschen von Index {0} +90113=Datenbank-Verbindungs Option {0} nicht unterstützt +90114=Konstante {0} besteht bereits +90115=Konstante {0} nicht gefunden +90116=Literal dieser Art nicht zugelassen +90117=Verbindungen von anderen Rechnern sind nicht freigegeben, siehe -tcpAllowOthers +90118=Kann Tabelle nicht löschen {0} +90119=Benutzer-Datentyp {0} besteht bereits +90120=Benutzer-Datentyp {0} nicht gefunden +90121=Die Datenbank wurde bereits geschlossen (um das automatische Schliessen beim Stopp der VM zu deaktivieren, die Datenbank URL mit ";DB_CLOSE_ON_EXIT=FALSE" ergänzen) +90122=Funktion nicht unterstützt für Tabelle {0} wenn Views auf die Tabelle vorhanden sind: {1} +90123=Kann nicht indizierte und nicht indizierte Parameter mischen +90124=Datei nicht gefunden: {0} +90125=Ungültig Klasse, erwartet {0} erhalten {1} +90126=Datenbank ist nicht persistent +90127=Die Resultat-Zeilen können nicht verändert werden. Die Abfrage muss alle Felder eines eindeutigen Schlüssels enthalten, und nur eine Tabelle enthalten. +90128=Kann nicht an den Anfang der Resultat-Zeilen springen. Mögliche Lösung: conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). +90129=Transaktion {0} nicht gefunden +90130=Diese Methode ist nicht erlaubt für ein PreparedStatement; benützen Sie ein Statement. 
+90131=Gleichzeitige Änderung in Tabelle {0}: eine andere Transaktion hat den gleichen Datensatz geändert oder gelöscht +90132=Aggregat-Funktion {0} nicht gefunden +90133=Kann das Setting {0} nicht ändern wenn die Datenbank bereits geöffnet ist +90134=Der Zugriff auf die Klasse {0} ist nicht erlaubt +90135=Die Datenbank befindet sich im Exclusiv Modus; es können keine zusätzlichen Verbindungen geöffnet werden +90136=Diese Outer Join Bedingung wird nicht unterstützt: {0} +90137=Werte können nur einer Variablen zugewiesen werden, nicht an: {0} +90138=Ungültiger Datenbank Name: {0} +90139=Die (public static) Java Funktion wurde nicht gefunden: {0} +90140=Die Resultat-Zeilen können nicht verändert werden. Mögliche Lösung: conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=Serialisierer kann nicht geändert werden wenn eine Daten-Tabelle existiert: {0} +90142=Schrittgrösse darf nicht 0 sein +90143=#Row {1} not found in primary index {0} +HY000=Allgemeiner Fehler: {0} +HY004=Unbekannter Datentyp: {0} +HYC00=Dieses Feature wird nicht unterstützt: {0} +HYT00=Zeitüberschreitung beim Versuch die Tabelle {0} zu sperren diff --git a/modules/h2/src/main/java/org/h2/res/_messages_en.prop b/modules/h2/src/main/java/org/h2/res/_messages_en.prop new file mode 100644 index 0000000000000..b306a9b7f18cf --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_en.prop @@ -0,0 +1,179 @@ +.translator=Thomas Mueller +02000=No data is available +07001=Invalid parameter count for {0}, expected count: {1} +08000=Error opening database: {0} +21S02=Column count does not match +22001=Value too long for column {0}: {1} +22003=Numeric value out of range: {0} +22004=Numeric value out of range: {0} in column {1} +22007=Cannot parse {0} constant {1} +22012=Division by zero: {0} +22018=Data conversion error converting {0} +22025=Error in LIKE ESCAPE: {0} +22030=Value not permitted for column {0}: {1} +22031=Value not a member of enumerators {0}: {1} +22032=Empty enums are not 
allowed +22033=Duplicate enumerators are not allowed for enum types: {0} +23502=NULL not allowed for column {0} +23503=Referential integrity constraint violation: {0} +23505=Unique index or primary key violation: {0} +23506=Referential integrity constraint violation: {0} +23507=No default value is set for column {0} +23513=Check constraint violation: {0} +23514=Check constraint invalid: {0} +28000=Wrong user name or password +40001=Deadlock detected. The current transaction was rolled back. Details: {0} +42000=Syntax error in SQL statement {0} +42001=Syntax error in SQL statement {0}; expected {1} +42S01=Table {0} already exists +42S02=Table {0} not found +42S11=Index {0} already exists +42S12=Index {0} not found +42S21=Duplicate column name {0} +42S22=Column {0} not found +42S32=Setting {0} not found +57014=Statement was canceled or the session timed out +90000=Function {0} must return a result set +90001=Method is not allowed for a query. Use execute or executeQuery instead of executeUpdate +90002=Method is only allowed for a query. Use execute or executeUpdate instead of executeQuery +90003=Hexadecimal string with odd number of characters: {0} +90004=Hexadecimal string contains non-hex character: {0} +90006=Sequence {0} has run out of numbers +90007=The object is already closed +90008=Invalid value {0} for parameter {1} +90009=Unable to create or alter sequence {0} because of invalid attributes (start value {1}, min value {2}, max value {3}, increment {4}) +90010=Invalid TO_CHAR format {0} +90011=A file path that is implicitly relative to the current working directory is not allowed in the database URL {0}. Use an absolute path, ~/name, ./name, or the baseDir setting instead. 
+90012=Parameter {0} is not set +90013=Database {0} not found +90014=Error parsing {0} +90015=SUM or AVG on wrong data type for {0} +90016=Column {0} must be in the GROUP BY list +90017=Attempt to define a second primary key +90018=The connection was not closed by the application and is garbage collected +90019=Cannot drop the current user +90020=Database may be already in use: {0}. Possible solutions: close all other connection(s); use the server mode +90021=This combination of database settings is not supported: {0} +90022=Function {0} not found +90023=Column {0} must not be nullable +90024=Error while renaming file {0} to {1} +90025=Cannot delete file {0} +90026=Serialization failed, cause: {0} +90027=Deserialization failed, cause: {0} +90028=IO Exception: {0} +90029=Currently not on an updatable row +90030=File corrupted while reading record: {0}. Possible solution: use the recovery tool +90031=IO Exception: {0}; {1} +90032=User {0} not found +90033=User {0} already exists +90034=Log file error: {0}, cause: {1} +90035=Sequence {0} already exists +90036=Sequence {0} not found +90037=View {0} not found +90038=View {0} already exists +90039=This CLOB or BLOB reference timed out: {0} +90040=Admin rights are required for this operation +90041=Trigger {0} already exists +90042=Trigger {0} not found +90043=Error creating or initializing trigger {0} object, class {1}, cause: {2}; see root cause for details +90044=Error executing trigger {0}, class {1}, cause : {2}; see root cause for details +90045=Constraint {0} already exists +90046=URL format error; must be {0} but is {1} +90047=Version mismatch, driver version is {0} but server version is {1} +90048=Unsupported database file version or invalid file header in file {0} +90049=Encryption error in file {0} +90050=Wrong password format, must be: file password user password +90051=Scale(${0}) must not be bigger than precision({1}) +90052=Subquery is not a single column query +90053=Scalar subquery contains more than one 
row +90054=Invalid use of aggregate function {0} +90055=Unsupported cipher {0} +90056=Function {0}: Invalid date format: {1} +90057=Constraint {0} not found +90058=Commit or rollback is not allowed within a trigger +90059=Ambiguous column name {0} +90060=Unsupported file lock method {0} +90061=Exception opening port {0} (port may be in use), cause: {1} +90062=Error while creating file {0} +90063=Savepoint is invalid: {0} +90064=Savepoint is unnamed +90065=Savepoint is named +90066=Duplicate property {0} +90067=Connection is broken: {0} +90068=Order by expression {0} must be in the result list in this case +90069=Role {0} already exists +90070=Role {0} not found +90071=User or role {0} not found +90072=Roles and rights cannot be mixed +90073=Matching Java methods must have different parameter counts: {0} and {1} +90074=Role {0} already granted +90075=Column is part of the index {0} +90076=Function alias {0} already exists +90077=Function alias {0} not found +90078=Schema {0} already exists +90079=Schema {0} not found +90080=Schema name must match +90081=Column {0} contains null values +90082=Sequence {0} belongs to a table +90083=Column may be referenced by {0} +90084=Cannot drop last column {0} +90085=Index {0} belongs to constraint {1} +90086=Class {0} not found +90087=Method {0} not found +90088=Unknown mode {0} +90089=Collation cannot be changed because there is a data table: {0} +90090=Schema {0} cannot be dropped +90091=Role {0} cannot be dropped +90093=Clustering error - database currently runs in standalone mode +90094=Clustering error - database currently runs in cluster mode, server list: {0} +90095=String format error: {0} +90096=Not enough rights for object {0} +90097=The database is read only +90098=The database has been closed +90099=Error setting database event listener {0}, cause: {1} +90101=Wrong XID format: {0} +90102=Unsupported compression options: {0} +90103=Unsupported compression algorithm: {0} +90104=Compression error +90105=Exception calling 
user-defined function: {0} +90106=Cannot truncate {0} +90107=Cannot drop {0} because {1} depends on it +90108=Out of memory. +90109=View {0} is invalid: {1} +90111=Error accessing linked table with SQL statement {0}, cause: {1} +90112=Row not found when trying to delete from index {0} +90113=Unsupported connection setting {0} +90114=Constant {0} already exists +90115=Constant {0} not found +90116=Literals of this kind are not allowed +90117=Remote connections to this server are not allowed, see -tcpAllowOthers +90118=Cannot drop table {0} +90119=User data type {0} already exists +90120=User data type {0} not found +90121=Database is already closed (to disable automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the db URL) +90122=Operation not supported for table {0} when there are views on the table: {1} +90123=Cannot mix indexed and non-indexed parameters +90124=File not found: {0} +90125=Invalid class, expected {0} but got {1} +90126=Database is not persistent +90127=The result set is not updatable. The query must select all columns from a unique key. Only one table may be selected. +90128=The result set is not scrollable and can not be reset. You may need to use conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). +90129=Transaction {0} not found +90130=This method is not allowed for a prepared statement; use a regular statement instead. +90131=Concurrent update in table {0}: another transaction has updated or deleted the same row +90132=Aggregate {0} not found +90133=Cannot change the setting {0} when the database is already open +90134=Access to the class {0} is denied +90135=The database is open in exclusive mode; can not open additional connections +90136=Unsupported outer join condition: {0} +90137=Can only assign to a variable, not to: {0} +90138=Invalid database name: {0} +90139=The public static Java method was not found: {0} +90140=The result set is readonly. 
You may need to use conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=Serializer cannot be changed because there is a data table: {0} +90142=Step size must not be zero +90143=Row {1} not found in primary index {0} +HY000=General error: {0} +HY004=Unknown data type: {0} +HYC00=Feature not supported: {0} +HYT00=Timeout trying to lock table {0} diff --git a/modules/h2/src/main/java/org/h2/res/_messages_es.prop b/modules/h2/src/main/java/org/h2/res/_messages_es.prop new file mode 100644 index 0000000000000..1c66a273f0dff --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_es.prop @@ -0,0 +1,179 @@ +.translator=Dario V. Fassi +02000=No hay datos disponibles. +07001=Cantidad de parametros invalidos para {0}, cantidad esperada: {1} +08000=Error abriendo la base de datos: {0} +21S02=La cantidad de columnas no coincide +22001=Valor demasiado largo para la columna {0}: {1} +22003=Valor numerico fuera de rango: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=Imposible interpretar la constante {0} {1} +22012=División por cero: {0} +22018=Conversión de datos fallida, convirtiendo {0} +22025=Error en LIKE ESCAPE: {0} +22030=Valor no permitido para la columna {0}: {1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=La columna {0} no permite valores nulos (NULL) +23503=Violación de una restricción de Integridad Referencial: {0} +23505=Violación de indice de Unicidad ó Clave primaria: {0} +23506=Violación de una restricción de Integridad Referencial: {0} +23507=No se fijado un valor por defecto para la columna {0} +23513=Violación de Check constraint: {0} +23514=#Check constraint invalid: {0} +28000=Nombre de usuario ó password incorrecto +40001=Deadlock - Punto muerto detectado. La transacción actual fue retrotraída (rollback). 
Detalles: {0} +42000=Error de Sintaxis en sentencia SQL {0} +42001=Error de Sintaxis en sentencia SQL {0}; se esperaba {1} +42S01=Tabla {0} ya existe +42S02=Tabla {0} no encontrada +42S11=Indice {0} ya existe +42S12=Indice {0} no encontrado +42S21=Nombre de columna Duplicada {0} +42S22=Columna {0} no encontrada +42S32=Setting {0} no encontrado +57014=La sentencia fue cancelada ó la sesión expiró por tiempo vencido +90000=Función {0} debe devolver un set de resultados (ResultSet) +90001=Metodo no permitido en un query. Use execute ó executeQuery en lugar de executeUpdate +90002=Metodo permitido unicamente en un query. Use execute ó executeUpdate en lugar de executeQuery +90003=Cadena Hexadecimal con cantidad impar de caracteres: {0} +90004=Cadena Hexadecimal contiene caracteres invalidos: {0} +90006=#Sequence {0} has run out of numbers +90007=El objeto ya está cerrado +90008=Valor Invalido {0} para el parametro {1} +90009=#Unable to create or alter sequence {0} because of invalid attributes (start value {1}, min value {2}, max value {3}, increment {4}) +90010=#Invalid TO_CHAR format {0} +90011=#A file path that is implicitly relative to the current working directory is not allowed in the database URL {0}. Use an absolute path, ~/name, ./name, or the baseDir setting instead. +90012=Parametro {0} no está fijado +90013=Database {0} no encontrada +90014=Error interpretando {0} +90015=SUM ó AVG sobre un tipo de datos invalidado para {0} +90016=La columna {0} debe estar incluida en la lista de GROUP BY +90017=Intento de definir una segunda clave primaria +90018=La conexión no fue cerrada por la aplicación y esta siendo limpiada (garbage collected) +90019=Imposible eliminar el usuario actual +90020=La base de datos puede que ya esté siendo utilizada: {0}.
Soluciones Posibles: cierre todas las otras conexiones; use el modo server +90021=#This combination of database settings is not supported: {0} +90022=Función {0} no encontrada +90023=Columna {0} no puede ser nullable +90024=Error mientras se renombraba el archivo {0} a {1} +90025=No se pudo borrar el archivo {0} +90026=Serialización fallida, causa: {0} +90027=Deserialización fallida, causa: {0} +90028=IO Exception: {0} +90029=La fila actual no es actualizable +90030=Archivo corrupto mientras se leía el registro: {0}. Solución Posible: use la herramienta de recuperación (recovery tool) +90031=IO Exception: {0}; {1} +90032=Usuario {0} no encontrado +90033=Usuario {0} ya existe +90034=Log archivo error: {0}, causa: {1} +90035=Sequence {0} ya existe +90036=Sequence {0} no encontrado +90037=View {0} no encontrado +90038=View {0} ya existe +90039=#This CLOB or BLOB reference timed out: {0} +90040=Derechos de Admin son requeridos para esta operación +90041=Trigger {0} ya existe +90042=Trigger {0} no encontrado +90043=Error creando ó inicializando trigger {0} objeto, clase {1}, causa: {2}; vea la causa raiz para mas detalle +90044=Error ejecutando trigger {0}, clase {1}, causa : {2}; vea la causa raiz para mas detalle +90045=Constraint {0} ya existe +90046=Error de formato en URL; debe ser {0} pero es {1} +90047=Discordancia de Versión, la versión del driver es {0} y la del server es {1} +90048=Versión del archivo de base de datos no soportada ó encabezado de archivo invalido en archivo {0} +90049=Error de Encriptación en archivo {0} +90050=Formato de password erroneo, debe ser: archivo password Usuario password +90051=#Scale(${0}) must not be bigger than precision({1}) +90052=El Subquery no es un query escalar (debe devolver una sola columna) +90053=El Subquery escalar contiene mas de una fila +90054=Uso Invalido de la función de columna agregada {0} +90055=Cipher No soportado {0} +90056=Function {0}: Invalid date format: {1} +90057=Constraint {0} no encontrado 
+90058=Commit ó rollback no permitido dentro de un trigger +90059=Nombre de columna ambigua {0} +90060=Metodo de lockeo de archivo no soportado {0} +90061=Exception abriendo puerto {0} (el puerto puede estar en uso), causa: {1} +90062=Error creando archivo {0} +90063=Savepoint invalido: {0} +90064=Savepoint sin nombre +90065=Savepoint con nombre +90066=Propiedad Duplicada {0} +90067=Conexión rota: {0} +90068=Expresión Order by {0} debe estar en la lista de campos a devolver en este caso +90069=Role {0} ya existe +90070=Role {0} no encontrado +90071=Usuario ó role {0} no encontrado +90072=Roles y derechos no pueden ser mezclados +90073=Los metodos Java de Matching deben tener diferente cantidad de parametros: {0} y {1} +90074=Role {0} ya otorgado +90075=La columna es parte del indice {0} +90076=Función alias {0} ya existe +90077=Función alias {0} no encontrada +90078=Schema {0} ya existe +90079=Schema {0} no encontrado +90080=El nombre del Schema debe concordar +90081=Columna {0} contiene valores nulos +90082=Sequence {0} pertenece a una tabla +90083=La columna puede estar referenciada por {0} +90084=Imposible eliminar la ultima columna {0} +90085=Index {0} pertenece a un constraint {1} +90086=Class {0} no encontrada +90087=#Method {0} not found +90088=Modo desconocido {0} +90089=Collation no puede ser cambiado debido a que existe una tabla de datos: {0} +90090=Schema {0} no puede ser eliminado +90091=Role {0} no puede ser eliminado +90093=Clustering error - la base de datos se esta corriendo en modo standalone +90094=Clustering error - la base de datos esta corriendo en modo CLUSTER, lista de servidores: {0} +90095=String con error de formato: {0} +90096=No tiene los derechos necesarios para el objeto {0} +90097=La base de datos es de solo lectura (read only) +90098=La base de datos ha sido cerrada +90099=Error setting database event listener {0}, causa: {1} +90101=Formato erroneo de XID : {0} +90102=Opciones de compresión No soportadas: {0} +90103=Algoritmo de 
compresión No soportado: {0} +90104=Error de Compresión +90105=Exception llamando a una función definida por el usuario: {0} +90106=Imposible truncar {0} +90107=Imposible eliminar {0} debido a que {1} depende de él. +90108=Memoria Insuficiente - Out of memory. Tamaño: {0} +90109=La Vista {0} es invalida: {1} +90111=Error accediendo Linked Table con sentencia SQL {0}, causa: {1} +90112=Fila no encontrada mientras se intentaba borrar del indice {0} +90113=Parametro de conexión No soportado {0} +90114=Constante {0} ya existe +90115=Constante {0} no encontrado +90116=Literales de este tipo no estan permitidos +90117=Este server no permite Conexiones Remotas, vea -tcpAllowOthers +90118=Imposible eliminar tabla {0} +90119=Tipo de dato de usuario {0} ya existe +90120=Tipo de dato de usuario {0} no encontrado +90121=La base de datos ya esta cerrada (para des-habilitar el cerrado automatico durante el shutdown de la VM, agregue ";DB_CLOSE_ON_EXIT=FALSE" a la URL de conexión) +90122=Operación no soportada para la tabla {0} cuando existen vistas sobre la tabla: {1} +90123=No se puede mezclar parametros indexados y no-indexados +90124=Archivo no encontrado: {0} +90125=Clase Invalida, se esperaba {0} pero se obtuvo {1} +90126=La base de datos no es persistente +90127=El conjunto de resultados NO es actualizable. El query debe seleccionar todas la columnas desde una clave de unicidad. Solo una tabla puede ser seleccionada. +90128=El conjunto de resultados NO es scrollable y no puede ser reseteada. You may need to use conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). +90129=Transacción {0} no encontrada +90130=Este metodo no esta permitido para una sentencia preparada; en su lugar use una sentencia regular. 
+90131=Actualización concurrente sobre la tabla {0}: otra transacción ha actualizado ó borrado la misma fila +90132=Aggregate {0} no encontrado +90133=No puede cambiar el setting {0} cuando la base de datos esta abierta +90134=Acceso denegado a la clase {0} +90135=La base de datos esta abierta en modo EXCLUSIVO; no puede abrir conexiones adicionales +90136=Condición No soportada en Outer join : {0} +90137=Solo puede asignarse a una variable, no a: {0} +90138=Nombre de base de datos Invalido: {0} +90139=El metodo Java (publico y estatico) : {0} no fue encontrado +90140=El conjunto de resultados es de solo lectura. Puede ser necesario usar conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=#Serializer cannot be changed because there is a data table: {0} +90142=#Step size must not be zero +90143=#Row {1} not found in primary index {0} +HY000=Error General : {0} +HY004=Tipo de dato desconocido : {0} +HYC00=Caracteristica no soportada: {0} +HYT00=Tiempo vencido intentando trabar (lock) la tabla {0} diff --git a/modules/h2/src/main/java/org/h2/res/_messages_fr.prop b/modules/h2/src/main/java/org/h2/res/_messages_fr.prop new file mode 100644 index 0000000000000..d1e899c47b23f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_fr.prop @@ -0,0 +1,179 @@ +.translator=Xavier Bouclet +02000=Aucune donnée disponible +07001=Nombre de paramètre invalide pour {0}, nombre de paramètre attendu: {1} +08000=Une erreur est survenue lors de l'ouverture de la base de données: {0} +21S02=Le nombre de colonnes ne correspond pas +22001=Valeur trop longue pour la colonne {0}: {1} +22003=Valeur numérique hors de portée: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=Impossible d'analyser {0} constante {1} +22012=Division par zéro: {0} +22018=Erreur lors de la conversion de données {0} +22025=Erreur dans LIKE ESCAPE: {0} +22030=Valeur non permise pour la colonne {0}: {1} +22031=La valeur n'est pas un membre de l'énumération {0}: {1} +22032=Les enums 
vides ne sont pas permis +22033=Les valeurs énumérées en double ne sont pas autorisées pour les types énumérés: {0} +23502=NULL non permis pour la colonne {0} +23503=Intégrité référentielle violation de contrainte: {0} +23505=Violation d'index unique ou clé primaire: {0} +23506=Intégrité référentielle violation de contrainte: {0} +23507=Pas de valeur par défaut initialisée pour la colonne {0} +23513=Vérifiez la violation de contrainte: {0} +23514=Vérifiez la contraite invalide: {0} +28000=Mauvais nom d'utilisateur ou mot de passe +40001=Deadlock détecté. La transaction courante a été annulée. Détails: {0} +42000=Erreur de syntaxe dans l'instruction SQL {0} +42001=Erreur de syntaxe dans l'instruction SQL {0}; attendu {1} +42S01=La table {0} existe déjà +42S02=Table {0} non trouvée +42S11=L'index {0} existe déjà +42S12=Index {0} non trouvé +42S21=Duplication du nom de colonnes {0} +42S22=Colonne {0} non trouvée +42S32=Paramètre {0} non trouvé +57014=L'instruction a été annulée ou la session a expiré +90000=La fonction {0} doit retourner résultat +90001=Methode non autorisée pour une requête. Utilisez execute ou executeQuery à la place d'executeUpdate +90002=Methode est autorisée uniquement pour une requête. Utilisez execute ou executeUpdate à la place d'executeQuery +90003=Chaîne héxadecimale contenant un nombre impair de caractères: {0} +90004=Chaîne héxadecimale contenant un caractère non-héxa: {0} +90006=La séquence {0} a épuisé ses éléments +90007=L'objet est déjà fermé +90008=Valeur invalide {0} pour le paramètre {1} +90009=Impossible de créer ou modifier la séquence {0} car les attributs sont invalides (start value {1}, min value {2}, max value {3}, increment {4}) +90010=Format invalide TO_CHAR {0} +90011=Un chemin de fichier implicitement relatif au répertoire de travail actuel n'est pas autorisé dans l'URL de la base de données {0}. Utilisez un chemin absolu, ~ /nom, ./nom ou le paramètre baseDir à la place. 
+90012=La paramètre {0} n'est pas initialisé +90013=Base de données {0} non trouvée +90014=Analyse d'erreur {0} +90015=SUM ou AVG sur le mauvais type de données pour {0} +90016=La colonne {0} doit être dans la liste du GROUP BY +90017=Tentative de définir une seconde clé primaire +90018=La connexion n'a pas été fermée et a été récupérée par le ramasse miette. +90019=Impossible de supprimer l'utilisateur actuel +90020=La base de données est peut-être en cours d'utilisation: {0}. Solutions posibles: fermer toutes les autres connexions; utilisez le mode serveur +90021=Cette combinaison de paramètres de base de données n'est pas supportée: {0} +90022=La fonction {0} n'a pas été trouvée +90023=La colonne {0} ne doit pas être nulle +90024=Erreur lors du renommage du fichier {0} vers {1} +90025=Impossible de supprimer le fichier {0} +90026=La sérialisation a échoué, cause: {0} +90027=La désérialisation a échoué, cause: {0} +90028=IO Exception: {0} +90029=Actuellement sur une ligne non actualisable +90030=Fichier corrompu lors de la lecture de l'enregistrement: {0}. 
Solution possible: utiliser l'outil de récupération +90031=IO Exception: {0}; {1} +90032=Utilisateur {0} non trouvé +90033=L'utilisateur {0} existe déjà +90034=Erreur du fichier journal: {0}, cause: {1} +90035=La séquence {0} existe déjà +90036=Séquence {0} non trouvée +90037=Vue {0} non trouvée +90038=La vue {0} existe déjà +90039=La référence CLOB ou BLOB a expiré: {0} +90040=Les droits admins sont requis pour cette opération +90041=Le trigger {0} existe déjà +90042=Trigger {0} non trouvé +90043=Erreur lors de la création ou l'initialisation du trigger {0} object, class {1}, cause: {2}; voir la racine de l'erreur pour les détails +90044=Erreur lors de l'exécution du trigger {0}, class {1}, cause: {2}; voir la racine de l'erreur pour les détails +90045=La contrainte {0} existe déjà +90046=Erreur dans le format de l'URL; doit être {0} mais est {1} +90047=Version non correspondante, la version du driver est {0} mais la version du serveur est {1} +90048=Version de fichier de base de données non supportée ou entête de fichier invalide dans le fichier {0} +90049=Erreur de cryptage dans le fichier {0} +90050=Mauvais format de mot de passe, doit être: mot de passe du fichier mot de passe de l'utilisateur +90051=L'échelle(${0}) ne doit pas être plus grande que la précision({1}) +90052=La sous-requête n'est pas une requête sur une seule colonne +90053=La sous-requête scalaire contient plus d'une rangée +90054=Utilisation invalide de la fonction agrégée {0} +90055=Chiffrement non pris en charge {0} +90056=Fonction {0}: Format de date invalide: {1} +90057=Contrainte {0} non trouvée
est sans nom +90065=Le point de sauvegarde est nommé +90066=Propriété dupliquée {0} +90067=La connexion est cassée: {0} +90068=L'expression Order by {0} doit être dans ce cas dans la liste des résultats +90069=Le rôle {0} existe déjà +90070=Rôle {0} non trouvé +90071=Utilisateur ou rôle {0} non trouvé +90072=Les rôles et les droits ne peuvent être mélangés +90073=Les méthodes Java correspondantes doivent avoir un nombre de paramètres différents: {0} et {1} +90074=Le rôle {0} est déjà accordé +90075=La colonne fait partie de l'index {0} +90076=L'alias de fonction {0} existe déjà +90077=Alias de fonction {0} non trouvé +90078=Le schéma {0} existe déjà +90079=Schéma {0} non trouvé +90080=Le nom de schéma doit correspondre +90081=La colonne {0} contient des valeurs nulles +90082=La séquence {0} appartient à une table +90083=La colonne peut être référencée par {0} +90084=Impossible de supprimer la dernière colonne {0} +90085=L'index {0} appartient à la contrainte {1} +90086=Classe {0} non trouvée +90087=Méthode {0} non trouvée +90088=Mode inconnu {0} +90089=La collation ne peut pas être changée parce qu'il y a des données dans la table: {0} +90090=Le schéma {0} ne peut pas être supprimé +90091=Le rôle {0} ne peut pas être supprimé +90093=Erreur de clustering - la base de données s'exécute actuellement en mode autonome +90094=Erreur de clustering - la base de données s'exécute actuellement en mode cluster, liste de serveurs: {0} +90095=Erreur de format de chaîne: {0} +90096=Pas assez de droits pour l'objet {0} +90097=La base de données est en lecture seule +90098=La base de données a été fermée +90099=Erreur lors du paramétrage de l'auditeur d'événements de la base de données {0}, cause: {1} +90101=Mauvais format XID: {0} +90102=Options de compression non supportées: {0} +90103=Algorithme de compression non supporté: {0} +90104=Erreur de compression +90105=Exception lors de l'appel de la fonction définie par l'utilisateur: {0} +90106=Impossible de tronquer {0} 
+90107=Impossible de supprimer {0} car {1} dépend de lui +90108=Mémoire insuffisante. +90109=La vue {0} est invalide: {1} +90111=Erreur lors de l'accès à la table liée à l'aide de l'instruction SQL {0}, cause: {1} +90112=Ligne non trouvée lors de la tentative de suppression à partir de l'index {0} +90113=Paramétrage de connexion non pris en charge {0} +90114=La constante {0} existe déjà +90115=Constante {0} non trouvée +90116=Les littéraux de ce type ne sont pas permis +90117=Les connexions à distance à ce serveur ne sont pas autorisées, voir -tcpAllowOthers +90118=Impossible de supprimer la table {0} +90119=Le type de données utilisateur {0} existe déjà +90120=Type de données utilisateur {0} non trouvé +90121=La base de données est déjà fermée (pour désactiver la fermeture automatique à l'arrêt de la VM, ajoutez ";DB_CLOSE_ON_EXIT=FALSE" à l'URL db) +90122=Opération non prise en charge pour la table {0} lorsqu'il existe des vues sur la table: {1} +90123=Impossible de mélanger des paramètres indexés et non-indexés +90124=Fichier non trouvé: {0} +90125=Classe invalide, attendue {0} mais obtenue {1} +90126=La base de données n'est pas persistante +90127=L'ensemble des résultats ne peut pas être mis à jour. La requête doit sélectionner toutes les colonnes à partir d'une clé unique. Seule une table peut être sélectionnée. +90128=L'ensemble des résultats n'est pas scrollable et ne peut pas être réinitialisé. Vous pouvez avoir besoin d'utiliser conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). +90129=Transaction {0} non trouvée +90130=Cette méthode n'est pas autorisée pour une instruction paramétrée; à la place utilisez une instruction régulière. 
+90131=Mise à jour concurrente dans la table {0}: une autre transaction a mis à jour ou supprimé la même ligne +90132=Agrégat {0} non trouvé +90133=Impossible de changer le paramétrage {0} lorsque la base de données est déjà ouverte +90134=L'accès à la classe {0} est interdit +90135=La base de données est ouverte en mode exclusif; impossible d'ouvrir des connexions additionnelles +90136=Condition de jointure extérieure non prise en charge: {0} +90137=Peut seulement être assigné à une variable, pas à: {0} +90138=Nom de la base de données invalide: {0} +90139=La méthode Java public static n'a pas été trouvée: {0} +90140=L'ensemble des résultats est en lecture seule. Vous pouvez avoir besoin d'utiliser conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=Le sérialiseur ne peut être changé parce qu'il y a des données dans la table: {0} +90142=La taille de l'étape ne doit pas être de 0 +90143=#Row {1} not found in primary index {0} +HY000=Erreur générale: {0} +HY004=Type de données inconnu: {0} +HYC00=Fonctionnalité non supportée: {0} +HYT00=Dépassement du temps lors du verrouillage de la table {0} diff --git a/modules/h2/src/main/java/org/h2/res/_messages_ja.prop b/modules/h2/src/main/java/org/h2/res/_messages_ja.prop new file mode 100644 index 0000000000000..4b98a3258bbf0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_ja.prop @@ -0,0 +1,179 @@ +.translator=IKEMOTO, Masahiro +02000=有効なデータがありません +07001={0} は無効なパラメータ番号です, 期待される番号: {1} +08000=データベースオープンエラー: {0} +21S02=列番号が一致しません +22001=列 {0} の値が長過ぎます: {1} +22003=範囲外の数値です: {0} +22004=#Numeric value out of range: {0} in column {1} +22007={0} 定数 {1} を解析できません +22012=ゼロで除算しました: {0} +22018=データ変換中にエラーが発生しました {0} +22025=LIKE ESCAPE にエラーがあります: {0} +22030=#Value not permitted for column {0}: {1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=列 {0} にはnull値が許されていません +23503=参照整合性制約違反: {0} 
+23505=ユニークインデックス、またはプライマリキー違反: {0} +23506=参照整合性制約違反: {0} +23507=列 {0} にデフォルト値が設定されていません +23513=制約違反を確認してください: {0} +23514=制約が無効です。確認してください: {0} +28000=ユーザ名またはパスワードが不正です +40001=デッドロックが検出されました。現在のトランザクションはロールバックされました。詳細: {0} +42000=SQLステートメントに文法エラーがあります {0} +42001=SQLステートメントに文法エラーがあります {0}; 期待されるステートメント {1} +42S01=テーブル {0} はすでに存在します +42S02=テーブル {0} が見つかりません +42S11=インデックス {0} はすでに存在します +42S12=インデックス {0} が見つかりません +42S21=列名 {0} が重複しています +42S22=列 {0} が見つかりません +42S32=設定 {0} が見つかりません +57014=ステートメントがキャンセルされたか、セッションがタイムアウトしました +90000=関数 {0} はリザルトセットを返さなければなりません +90001=メソッドはクエリをサポートしていません。executeUpdateのかわりに、execute、またはexecuteQueryを使用してください +90002=メソッドはクエリしかサポートしていません。executeQueryのかわりに、execute、またはexecuteUpdateを使用してください +90003=文字数が奇数の16進文字列です: {0} +90004=16進文字列に不正な文字が含まれています: {0} +90006=シーケンス {0} を使い果たしました +90007=オブジェクトはすでに閉じられています +90008=パラメータ {1} に対する値 {0} が不正です +90009=無効な属性により、シーケンス {0} の作成または変更ができません。(開始値 {1}, 最小値 {2}, 最大値 {3}, 増分 {4}) +90010=無効な TO_CHAR フォーマット {0} +90011=暗黙的なカレントディレクトリからの相対ファイルパスをデータベースURL({0})に指定することは許可されていません。代わりに絶対パスか相対パス( ~/name, ./name)あるいは baseDir を指定して下さい. +90012=パラメータ {0} がセットされていません +90013=データベース {0} が見つかりません +90014=解析エラー {0} +90015=SUM、AVGを不正なデータ型 {0} に使用しました +90016=列 {0} はリストによりグループ化されなければなりません +90017=複数のプライマリキーを定義しようとしました +90018=アプリケーションにより閉じられていない接続がガベージコレクトされました +90019=使用中のユーザをドロップすることはできません +90020=データベースが使用中です: {0}. 
可能な解決策: 他の接続を全て閉じる; サーバモードを使う +90021=この組み合わせのデータベース設定はサポートされていません: {0} +90022=関数 {0} が見つかりません +90023=列 {0} にはnull値を許すべきてはありません +90024=ファイル名を {0} から {1} に変更中にエラーが発生しました +90025=ファイル {0} を削除できません +90026=直列化に失敗しました +90027=直列化復元に失敗しました +90028=入出力例外: {0} +90029=現在行は更新不可です +90030=レコード {0} を読み込み中にファイルの破損を検出しました。可能な解決策: リカバリツールを使用してください +90031=入出力例外: {0}; {1} +90032=ユーザ {0} が見つかりません +90033=ユーザ {0} はすでに存在します +90034=ログファイルエラー: {0} +90035=シーケンス {0} はすでに存在します +90036=シーケンス {0} が見つかりません +90037=ビュー {0} が見つかりません +90038=ビュー {0} はすでに存在します +90039=この CLOB または BLOB の参照がタイムアウトしました: {0} +90040=この操作には管理権限が必要です +90041=トリガ {0} はすでに存在します +90042=トリガ {0} が見つかりません +90043=トリガ {0} オブジェクト, クラス {1} を生成中にエラーが発生しました +90044=トリガ {0} オブジェクト, クラス {1} を実行中にエラーが発生しました +90045=制約 {0} はすでに存在します +90046=URLフォーマットエラー; {1} ではなく {0} でなければなりません +90047=バージョンが一致しません。ドライババージョンは {0} ですが、サーババージョンは {1} です +90048=ファイル {0} は、未サポートのバージョンか、不正なファイルヘッダを持つデータベースファイルです +90049=ファイル {0} の暗号化エラーです +90050=不正なパスワードフォーマットです。正しくは: ファイルパスワード <空白> ユーザパスワード +90051=スケール(${0}) より大きい精度({1})は指定できません +90052=サブクエリが単一列のクエリではありません +90053=数値サブクエリが複数の行を含んでいます +90054=集約関数 {0} の不正な使用 +90055={0} は未サポートの暗号です +90056=関数 {0}: 無効な日付フォーマット: {1} +90057=制約 {0} が見つかりません +90058=トリガ内でのコミット、ロールバックは許されていません +90059=列名 {0} があいまいです +90060={0} は未サポートのファイルロック方式です +90061=ポート {0} をオープン中に例外が発生しました (ポートが使用中の可能性があります) +90062=ファイル {0} を作成中にエラーが発生しました +90063=セーブポイントが不正です: {0} +90064=セーブポイントの名前を削除しました +90065=セーブポイントに名前を設定しました +90066=プロパティ {0} が重複しています +90067=接続が壊れています: {0} +90068=order by 対象の式 {0} は、結果リストに含まれる必要があります +90069=ロール {0} はすでに存在します +90070=ロール {0} が見つかりません +90071=ユーザ、またはロール {0} が見つかりません +90072=ロールと権限は混在できません +90073=適合するJavaメソッドは異なるパラメータ数である必要があります: {0}, {1} +90074=ロール {0} はすでに許可されています +90075=列はインデックス {0} の一部です +90076=関数の別名 {0} はすでに存在します +90077=関数の別名 {0} が見つかりません +90078=スキーマ {0} はすでに存在します +90079=スキーマ {0} が見つかりません +90080=スキーマ名が一致しません +90081=列 {0} がnull値を含んでいます +90082=シーケンス {0} はテーブルに属します +90083=列は {0} に参照されている可能性があります +90084=最後の列 {0} をドロップすることはできません +90085=インデックス {0} 
は制約に属しています {1} +90086=クラス {0} が見つかりません +90087=メソッド {0} が見つかりません +90088={0} は不明なモードです +90089=データテーブル {0} があるため、照合順序の変更はできません +90090=スキーマ {0} はドロップできません +90091=ロール {0} はドロップできません +90093=クラスタリングエラー - データベースはスタンドアロンモードで動作しています +90094=クラスタリングエラー - データベースはクラスターモードで動作しています, サーバリスト: {0} +90095=文字列フォーマットエラー: {0} +90096=オブジェクト {0} に対する十分な権限がありません +90097=データベースは読み込み専用です +90098=データベースはすでに閉じられています +90099=データベースイベントリスナの設定エラー {0} +90101=不正なXIDフォーマット: {0} +90102=未サポートの圧縮オプション: {0} +90103=未サポートの圧縮アルゴリズム: {0} +90104=圧縮エラー +90105=ユーザ定義関数を実行中に例外が発生しました: {0} +90106={0} を空にできません +90107={1} が依存しているため、{0} をドロップすることはできません +90108=メモリが不足しています +90109=ビュー {0} は無効です: {1} +90111=SQLステートメント {0} による結合テーブルアクセスエラー +90112=インデックス {0} から削除を試みましたが、行が見つかりません +90113=未サポートの接続設定 {0} +90114=定数 {0} はすでに存在します +90115=定数 {0} が見つかりません +90116=この種類のリテラルは許されていません +90117=このサーバへのリモート接続は許されていません, -tcpAllowOthersを参照 +90118=テーブル {0} はドロップできません +90119=ユーザデータ型 {0} はすでに存在します +90120=ユーザデータ型 {0} が見つかりません +90121=データベースはすでに閉じられています (VM終了時の自動データベースクローズを無効にするためには、db URLに ";DB_CLOSE_ON_EXIT=FALSE" を追加してください) +90122=ビューが存在するテーブル {0} に対する操作はサポートされていません: {1} +90123=インデックスの付いたパラメータと付いていないパラメータを混在させることはできません +90124=ファイルが見つかりません: {0} +90125=無効なクラス, {0} が期待されているにもかかわらず {1} を取得しました +90126=データベースは永続的ではありません +90127=リザルトセットが更新可能ではありません。クエリは単一のテーブルから、ユニークキーを全てselectしなければなりません +90128=リザルトセットがスクロール、リセット可能ではありません。conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..) 
を使う必要があるかもしれません +90129=トランザクション {0} が見つかりません +90130=プリペアドステートメントにこのメソッドは許されていません; かわりに通常のステートメントを使用してください +90131=テーブル {0} に並行して更新が行われました: 別のトランザクションが、同じ行に更新か削除を行いました +90132=集約 {0} が見つかりません +90133=データベースオープン中には、設定 {0} を変更できません +90134=クラス {0} へのアクセスが拒否されました +90135=データベースは排他モードでオープンされています; 接続を追加することはできません +90136=未サポートの外部結合条件: {0} +90137=割り当ては変数にのみ可能です。{0} にはできません +90138=不正なデータベース名: {0} +90139=public staticであるJavaメソッドが見つかりません: {0} +90140=リザルトセットは読み込み専用です。conn.createStatement(.., ResultSet.CONCUR_UPDATABLE) を使う必要があるかもしれません +90141=データテーブル {0} があるため、シリアライザを変更することはできません +90142=ステップサイズに0は指定できません +90143=#Row {1} not found in primary index {0} +HY000=一般エラー: {0} +HY004=不明なデータ型: {0} +HYC00=機能はサポートされていません: {0} +HYT00=テーブル {0} のロック試行がタイムアウトしました diff --git a/modules/h2/src/main/java/org/h2/res/_messages_pl.prop b/modules/h2/src/main/java/org/h2/res/_messages_pl.prop new file mode 100644 index 0000000000000..b62d9a8a08736 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_pl.prop @@ -0,0 +1,179 @@ +.translator=Tomek; Wojciech S. 
Jurczyk +02000=Dane nie są dostępne +07001=Niewłaściwa liczba parametrów dla {0}, oczekiwano ilości: {1} +08000=Błąd otwarcia bazy danych: {0} +21S02=Niezgodna ilość kolumn +22001=Wartość za długa dla kolumny {0}: {1} +22003=Wartość numeryczna poza zakresem: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=Nie można odczytać {0} jako {1} +22012=Dzielenie przez zero: {0} +22018=Błąd konwersji danych {0} +22025=Błąd w LIKE ESCAPE: {0} +22030=#Value not permitted for column {0}: {1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=Pole nie może być NULL{0} +23503=Naruszenie więzów integralności: {0} +23505=Naruszenie ograniczenia Klucza Głównego lub Indeksu Unikalnego: {0} +23506=Naruszenie więzów integralności: {0} +23507=Brak domyślnej wartości dla kolumny {0} +23513=Naruszenie ograniczenia CHECK {0} +23514=Nieprawidłowe ograniczenie CHECK: {0} +28000=Nieprawidłowy użytkownik/hasło +40001=Wykryto zakleszczenie. Bieżąca transakcja została wycofana. 
Szczegóły : {0} +42000=Błąd składniowy w wyrażeniu SQL {0} +42001=Błąd składniowy w wyrażeniu SQL {0}; oczekiwano {1} +42S01=Tabela {0} już istnieje +42S02=Tabela {0} nie istnieje +42S11=Indeks {0} już istnieje +42S12=Indeks {0} nie istnieje +42S21=Zduplikowana nazwa kolumny {0} +42S22=Kolumna {0} nie istnieje +42S32=Ustawienie {0} nie istnieje +57014=Kwerenda została anulowana albo sesja wygasła +90000=Funkcja {0} musi zwrócić dane +90001=Metoda nie jest dozwolona w kwerendzie +90002=Metoda jest dozwolona tylko w kwerendzie +90003=Heksadecymalny string z nieparzystą liczbą znaków: {0} +90004=Heksadecymalny string zawiera niedozwolony znak: {0} +90006=Sekwencja {0} została wyczerpana +90007=Obiekt jest zamknięty +90008=Nieprawidłowa wartość {0} parametru {1} +90009=Nie można utworzyć/zmienić sekwencji {0} ponieważ podane atrybuty są nieprawidłowe (wartość początkowa {1}, wartość minimalna {2}, wartość maksymalna {3}, przyrost {4}) +90010=Nieprawidłowy format TO_CHAR {0} +90011=#A file path that is implicitly relative to the current working directory is not allowed in the database URL {0}. Use an absolute path, ~/name, ./name, or the baseDir setting instead. 
+90012=Parametr o numerze {0} nie jest ustalony +90013=Baza danych {0} nie znaleziona +90014=Błąd parsowania {0} +90015=Agregacja SUM lub AVG wywołana na złym typie danych {0} +90016=Kolumna {0} musi być na liście grupowania +90017=Próba zdefiniowania drugiego klucza głównego +90018=Połączenie zostało zamknięte przez aplikacje i zostało usunięte przez kolektor nieużytków +90019=Nie można skasować aktualnego użytkownika +90020=Baza danych może być już otwarta: {0} +90021=#This combination of database settings is not supported: {0} +90022=Funkcja {0} nie istnieje +90023=Kolumna {0} nie może zawierać wartości pustej +90024=Błąd w zmianie nazwy pliku {0} na {1} +90025=Nie można skasować pliku {0} +90026=Serializacja nie powiodła się: {0} +90027=Deserializacja nie powiodła się: {0} +90028=Błąd wejścia/wyjścia: {0} +90029=Aktualny rekord nie pozwala na aktualizacje +90030=Uszkodzenie pliku podczas wczytywania rekordu: {0} +90031=Połączenie nie zostało zamknięte +90032=Użytkownik {0} nie istnieje +90033=Użytkownik {0} już istnieje +90034=Błąd pliku dziennika: {0}, błąd: {1} +90035=Sekwencja {0} już istnieje +90036=Sekwencja {0} nie istnieje +90037=Widok {0} nie istnieje +90038=Widok {0} już istnieje +90039=#This CLOB or BLOB reference timed out: {0} +90040=Uprawnienia administratora są wymagane do wykonania tej operacji +90041=Wyzwalacz {0} już istnieje +90042=Wyzwalacz {0} nie istnieje +90043=Błąd tworzenia/inicjowania wyzwalacza {0} klasy {1}, błąd: {2}; szczegóły w źródle błędu +90044=Błąd uruchomienia wyzwalacza {0}, klasy {1}, błąd: {2}; szczegóły w źródle błędu +90045=Ograniczenie {0} już istnieje +90046=Błędny format URL; powinno być {0} a jest {1} +90047=Niezgodna wersja sterownika, aktualna wersja to {0} a wersja serwera to {1} +90048=Nieprawidłowa wersja pliku bazy danych lub nieprawidłowy nagłówek pliku {0} +90049=Błąd szyfrowania pliku {0} +90050=Zły format hasła, powinno być: plik hasło użytkownik hasło +90051=#Scale(${0}) must not be bigger than 
precision({1}) +90052=Podzapytanie nie jest zapytaniem opartym o jedna kolumnę +90053=Skalarna pod-kwerenda zawiera więcej niż jeden wiersz +90054=Nieprawidłowe użycie funkcji agregującej {0} +90055=Nieobsługiwany szyfr {0} +90056=Function {0}: Invalid date format: {1} +90057=Ograniczenie {0} nie istnieje +90058=Zatwierdzenie lub wycofanie transakcji nie jest dozwolone w wyzwalaczu +90059=Niejednoznaczna nazwa kolumny {0} +90060=Niewspierana metoda blokowania pliku {0} +90061=Błąd podczas otwierania portu {0} (może być w użyciu), błąd: {1} +90062=Błąd tworzenia pliku {0} +90063=Zakładką jest nieprawidłowa: {0} +90064=Zakładka jest bez nazwy +90065=Zakładka jest nazwana +90066=Zduplikowana właściwość {0} +90067=Połączenie uszkodzone: {0} +90068=Wyrażenie sortowania {0} musi być na liście wyboru w tym przypadku +90069=Rola {0} już istnieje +90070=Rola {0} nie istnieje +90071=Użytkownik lub rola {0} nie istnieje +90072=Role i prawa nie mogą być mieszane +90073=Metody Javy muszą mieć różną liczbę parametrów: {0} oraz {1} +90074=Rola {0} już przyznana +90075=Kolumna jest częścią indeksu {0} +90076=Alias funkcji {0} już istnieje +90077=Alias funkcji {0} nie istnieje +90078=Schemat{0} już istnieje +90079=Schemat {0} nie istnieje +90080=Nazwa schematu musi pasować +90081=Kolumna {0} zawiera wartości puste +90082=Sekwencja {0} należy do tabeli +90083=Do kolumny można odwołać się przez referencje {0} +90084=Nie można skasować ostatniej kolumny {0} +90085=Indeks {0} należy do ograniczenia {1} +90086=Klasa {0} nie istnieje +90087=#Method {0} not found +90088=Nieznany stan {0} +90089=Metoda porównywania językowego nie może być zmieniona z powodu istnienia danych w tabeli {0} +90090=Schemat {0} nie może zostać skasowany +90091=Rola {0} nie może zostać skasowana +90093=Błąd klastrowania - baza obecnie pracuje w trybie autonomicznym +90094=Błąd klastrowania - baza obecnie pracuje w trybie klastra, lista serwerów: {0} +90095=Błąd ciągu formatowania: {0} +90096=Brak wystarczających 
praw do obiektu {0} +90097=Baza danych jest w trybie tylko do odczytu +90098=Baza danych została zamknięta +90099=Błąd ustawiania detektora zdarzeń bazy danych {0}, błąd: {1} +90101=Zły format XID: {0} +90102=Nie wspierana opcja kompresji: {0} +90103=Nie wspierany algorytm kompresji: {0} +90104=Błąd kompresji +90105=Wyjątek wywołuje funkcję użytkownika: {0} +90106=Nie można obciąć {0} +90107=Nie można skasować {0} ponieważ zależy od {1} +90108=Brak pamięci. +90109=Widok {0} jest nieprawidłowy +90111=Błąd dostępu do tabeli skrzyżowań przy pomocy zapytania SQL {0}, błąd: {1} +90112=Rekord nie znaleziony przy probie kasowania z indeksu {0} +90113=Nie wspierana opcja połączenia {0} +90114=Stała {0} już istnieje +90115=Stała {0} nie istnieje +90116=Literał tego typu nie jest dozwolony +90117=Zdalne połączenia do tego serwera nie są dozwolone, zobacz -tcpAllowOthers +90118=Nie można skasować tabeli {0} +90119=Typ danych użytkownika {0} już istnieje +90120=Typ danych użytkownika {0} nie istnieje +90121=Baza danych jest już zamknięta (aby zablokować samoczynne zamykanie podczas zamknięcia VM dodaj ";DB_CLOSE_ON_EXIT=FALSE" do URL bazy danych) +90122=Operacja nie jest dozwolona dla tabeli {0} gdy istnieją widoki oparte na tabeli: {1} +90123=Nie można mieszać parametrów indeksowych z nieindeksowymi +90124=Plik nie istnieje: {0} +90125=Nieprawidłowa klasa, oczekiwano {0}, a jest {1} +90126=Baza danych nie jest trwała +90127=Wynik nie jest uaktualnialny. Kwerenda musi wybrać wszystkie kolumny z klucza unikalnego. Tylko jedna tabela może zostać wybrana. +90128=Wynik nie jest typu SCROLLABLE i nie może być zresetowany. Być może powinieneś użyć conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). 
+90129=Transakcja {0} nie znaleziona +90130=Ta metoda jest niedozwolona dla sparametryzowanych kwerend (prepared statement); użyj zwykłej kwerendy +90131=Jednoczesna zmiana w tabeli {0}: inna transakcja zaktualizowała lub usunęła ten sam wiersz +90132=Agregacja {0} nie znaleziona +90133=Nie można zmienić ustawienia {0} gdy baza danych jest otwarta +90134=Dostęp do klasy {0} jest zabroniony +90135=Baza danych jest otwarta w trybie wyłączności, nie można otworzyć dodatkowych połączeń +90136=Nieobsługiwany warunek złączenia zewnętrznego: {0} +90137=Można przypisywać tylko do zmiennych, nie do: {0} +90138=Nieprawidłowa nazwa bazy danych: {0} +90139=Publiczna, statyczna metoda Java nie znaleziona: {0} +90140=Wyniki są tylko do odczytu. Być może potrzebujesz użyć conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=Serializator nie może być zmieniony ponieważ istnieje tabela z danymi: {0} +90142=#Step size must not be zero +90143=#Row {1} not found in primary index {0} +HY000=Błąd ogólny: {0} +HY004=Nieznany typ danych: {0} +HYC00=Cecha nie jest wspierana: {0} +HYT00=Czas oczekiwania na blokadę tabeli {0} skończył się diff --git a/modules/h2/src/main/java/org/h2/res/_messages_pt_br.prop b/modules/h2/src/main/java/org/h2/res/_messages_pt_br.prop new file mode 100644 index 0000000000000..da7612d54bad7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_pt_br.prop @@ -0,0 +1,179 @@ +.translator=Eduardo Fonseca Velasques +02000=Não há dados disponíveis +07001=Quantidade de parâmetros errados para {0}, experado: {1} +08000=Erro ao abrir a base de dados: {0} +21S02=A quantidade de colunas não corresponde +22001=Valor muito longo para a coluna {0}: {1} +22003=Valor númerico não esta dentro do limite: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=Não é possível converter {1} para {0} +22012=Divisão por zero: {0} +22018=Erro na conversão de dado, convertendo {0} +22025=Erro em LIKE ESCAPE: {0} +22030=#Value not permitted for column {0}: 
{1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=NULL não é permitido para a coluna {0} +23503=Violação da integridade de restrição: {0} +23505=Violação de índice único ou de chave primária: {0} +23506=Violação da integridade de restrição: {0} +23507=Nenhum valor pré-definido foi especificado para a coluna {0} +23513=Violação da restrição: {0} +23514=#Check constraint invalid: {0} +28000=Autenticaçao inválida, verifique o usuário ou a senha +40001=#Deadlock detected. The current transaction was rolled back. Details: {0} +42000=Erro de sintax na declaração SQL {0} +42001=Erro de sintax na declaração SQL {0}; esperado {1} +42S01=Tabela {0} já existe +42S02=Tabela {0} não foi encontrada +42S11=índice {0} já existe +42S12=índice {0} não foi encontrado +42S21=Nome duplicado da coluna {0} +42S22=Coluna {0} não foi encontrada +42S32=Definição {0} não foi encontrada +57014=#Statement was canceled or the session timed out +90000=Função {0} deve retornar algum resultado +90001=O método não esta hábilitado para consulta. Use o execute ou o executeQuery em vez de executeUpdate +90002=O método é apenas para consulta. Use o execute ou o executeUpdate em vez de executeQuery +90003=Sequência Hexadecimal com número ímpar de caracteres: {0} +90004=Sequência Hexadecimal contêm caracteres inválidos: {0} +90006=#Sequence {0} has run out of numbers +90007=O objeto está fechado +90008=Valor inválido {0} para o parâmetro {1} +90009=#Unable to create or alter sequence {0} because of invalid attributes (start value {1}, min value {2}, max value {3}, increment {4}) +90010=#Invalid TO_CHAR format {0} +90011=#A file path that is implicitly relative to the current working directory is not allowed in the database URL {0}. Use an absolute path, ~/name, ./name, or the baseDir setting instead. 
+90012=Parâmetro {0} não esta definido +90013=Base de dados {0} não encontrada +90014=Erro na conversão {0} +90015=SUM ou AVG com tipo de dados errado para {0} +90016=Coluna {0} também deve estar no GROUP BY +90017=Tentativa para definir uma segunda chave primária +90018=A conecção foi fechada pela aplicação e retirada da memória +90019=Não pode remover o usuário corrente +90020=A base de dados talvez esteja em uso: {0}. Solução possível: fechar todas as outras conecções; use o modo servidor +90021=#This combination of database settings is not supported: {0} +90022=Função {0} não encontrada +90023=Coluna {0} não deve permitir valor nulo +90024=Erro ao renomear arquivo {0} para {1} +90025=Não pode apagar o arquivo {0} +90026=Serialização falhada, causa: {0} +90027=Deserialização falhada, causa: {0} +90028=Exceção de IO: {0} +90029=No momento não é possível fazer alterações nas linhas +90030=Arquivo corrompido durante a leitura: {0}. Possível solução: use o recovery tool +90031=Exceção de IO: {0}; {1} +90032=Usuário {0} não foi encontrado +90033=Usuário {0} já existe +90034=Erro no arquivo de Log: {0}, causa: {1} +90035=Sequência {0} já existe +90036=Sequência {0} não foi encontrada +90037=Vista {0} não foi encontrada +90038=Vista {0} já existe +90039=#This CLOB or BLOB reference timed out: {0} +90040=Direitos de permisões do Admin são necessários para está operação +90041=Trigger {0} já existe +90042=Trigger {0} não foi encontrada +90043=Erro na criação ou inicialização da trigger no objeto {0}, classe {1}, causa: {2}; para maior detalhe veja a raiz da causa +90044=Erro executando trigger {0}, classe {1}, causa : {2}; para maior detalhe veja a raiz da causa +90045=Restrição {0} já existe +90046=Erro no formato da URL; deve ser {0} mas está {1} +90047=Versões incompatíveis, versão do driver é {0} mas a versão do servidor é {1} +90048=Versão do arquivo de base de dados não é suportado, ou o cabeçalho do arquivo é inválido, no arquivo {0} +90049=Erro de encriptação no 
arquivo {0} +90050=Erro no formato da senha, deveria ser: arquivo de senha senha do usuário +90051=#Scale(${0}) must not be bigger than precision({1}) +90052=A Subquery não é de coluna única +90053=A Subquery contém mais de uma linha +90054=Uso inválido da função {0} agregada +90055=Cipher {0} não é suportado +90056=Function {0}: Invalid date format: {1} +90057=Restrição {0} não foi encontrada +90058=#Commit or rollback is not allowed within a trigger +90059=Nome da coluna {0} é ambíguo. +90060=Não suporta o método do arquivo de bloqueio {0} +90061=Exceção ao abrir no porto {0} (provavelmente está em uso), causa: {1} +90062=Erro na criação do arquivo {0} +90063=Savepoint é inválido: {0} +90064=Savepoint não está nomeado +90065=Savepoint está nomeado +90066=Propriedade {0} duplicada +90067=A conecção está quebrada: {0} +90068=Expressão order by {0} deve estar na lista neste caso +90069=Regra {0} já existe +90070=Regra {0} não foi encontrada +90071=Usuário ou regra {0} não foram encontrados +90072=Regras e permissões não podem ser combinados +90073=#Matching Java methods must have different parameter counts: {0} and {1} +90074=Regra {0} já foi concedida +90075=A coluna faz parte do índice {0} +90076=Nome alternativo da função {0} já existe +90077=Nome alternativo da função {0} não foi encontrado +90078=Esquema {0} já existe +90079=Esquema {0} não foi encontrado +90080=Nome do esquema deve ser válido +90081=Coluna {0} restringe valores nulos +90082=Sequência {0} pertence a uma tabela +90083=A coluna pode ser referênciada por {0} +90084=Não pode apagar a última coluna {0} +90085=índice {0} pertence a uma restrição {1} +90086=Classe {0} não foi encontrada +90087=#Method {0} not found +90088=Modo {0} desconhecido +90089=A coleção não pode ser alterada, porque existe uma tabela de dados: {0} +90090=Esquema {0} não pode ser apagado +90091=Regra {0} não pode ser apagada +90093=Erro de clusterização - base de dados rodando no modo standalone no momento +90094=Erro de 
clusterização - base de dados rodando no modo cluster, lista do servidor: {0} +90095=Erro no formato da string: {0} +90096=Permissões insuficientes para o objeto {0} +90097=Base de dados é somente leitura +90098=Base de dados foi fechada +90099=Erro na definição do evento listener {0} da base de dados, causa: {1} +90101=Formato XID incorreto: {0} +90102=Não é suportado as opções de compressão: {0} +90103=Não é suportado o algorítimo de compressão: {0} +90104=Erro na compressão +90105=Exceção na chamada da função definida pelo usuário: {0} +90106=Não pode fazer o truncate {0} +90107=Não pode apagar {0} por que depende de {1} +90108=#Out of memory. +90109=Vista {0} é inválida: {1} +90111=Erro ao acessar a tabela lincada com a instrução SQL {0}, causa: {1} +90112=A linha não foi encontrada ao tentar eliminar apartir do índice {0} +90113=Não suporta a definição de conecção {0} +90114=Constante {0} já existe +90115=Constante {0} não foi encontrada +90116=Literais deste tipo não são permitidas +90117=Conecções remotas para este servidor não estão habilitadas, veja -tcpAllowOthers +90118=Não pode apagar a tabela {0} +90119=Tipo de dados do usuário {0} já existe +90120=Tipo de dados do usuário {0} não foram encontrados +90121=Base de dados já está fechada (para desabilitar o fechamento automático quando a VM terminar, addicione ";DB_CLOSE_ON_EXIT=FALSE" na url da base de dados) +90122=Operação não suportada para a tabela {0} quando existe alguma vista sobre a tabela: {1} +90123=Não pode combinar parâmetros de índices com não índices +90124=Arquivo não encontrado: {0} +90125=Classe inválida, experada {0} mas está {1} +90126=Base de dados não é persistente +90127=O resultado definido não é atualizável. A consulta deve selecionar todas as colunas de chave primária. Somente uma tabela pode ser selecionada. +90128=O resultado não é navegável e não pode ser resetado. Você pode usar conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). 
+90129=Transação {0} não encontrada +90130=Este método não é permitido para um statement preparado; Utilize um statement instanciado. +90131=Atualização concorrente na tabela {0}: outra transação atualizou ou deletou a mesma linha +90132=Agregação {0} não encontrada +90133=#Cannot change the setting {0} when the database is already open +90134=#Access to the class {0} is denied +90135=#The database is open in exclusive mode; can not open additional connections +90136=#Unsupported outer join condition: {0} +90137=#Can only assign to a variable, not to: {0} +90138=#Invalid database name: {0} +90139=#The public static Java method was not found: {0} +90140=#The result set is readonly. You may need to use conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=#Serializer cannot be changed because there is a data table: {0} +90142=#Step size must not be zero +90143=#Row {1} not found in primary index {0} +HY000=Erro geral: {0} +HY004=Tipo de dados desconhecido: {0} +HYC00=Recurso não suportado: {0} +HYT00=Timeout ao tentar bloquear a tabela {0} diff --git a/modules/h2/src/main/java/org/h2/res/_messages_ru.prop b/modules/h2/src/main/java/org/h2/res/_messages_ru.prop new file mode 100644 index 0000000000000..4838bc6a1242b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_ru.prop @@ -0,0 +1,179 @@ +.translator=Sergi Vladykin +02000=Нет данных +07001=Неверное количество параметров для функции {0}, ожидаемое количество: {1} +08000=Ошибка при открытии базы данных: {0} +21S02=Неверное количество столбцов +22001=Значение слишком длинное для поля {0}: {1} +22003=Численное значение вне допустимого диапазона: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=Невозможно преобразование строки {1} в тип {0} +22012=Деление на ноль: {0} +22018=Ошибка преобразования данных при конвертации {0} +22025=Ошибка в LIKE ESCAPE: {0} +22030=#Value not permitted for column {0}: {1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not 
allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=Значение NULL не разрешено для поля {0} +23503=Нарушение ссылочной целостности: {0} +23505=Нарушение уникального индекса или первичного ключа: {0} +23506=Нарушение ссылочной целостности: {0} +23507=Для поля {0} не установлено значение по умолчанию +23513=Нарушение ограничения: {0} +23514=Неправильное ограничение CHECK: {0} +28000=Неверное имя пользователя или пароль +40001=Обнаружена взаимная блокировка потоков. Текущая транзакция была откачена. Детали: {0} +42000=Синтаксическая ошибка в выражении SQL {0} +42001=Синтаксическая ошибка в выражении SQL {0}; ожидалось {1} +42S01=Таблица {0} уже существует +42S02=Таблица {0} не найдена +42S11=Индекс {0} уже существует +42S12=Индекс {0} не найден +42S21=Повтор имени столбца {0} +42S22=Столбец {0} не найден +42S32=Настройка {0} не найдена +57014=Запрос был отменен или закончилось время ожидания сессии +90000=Функция {0} должна возвращать набор записей +90001=Метод не разрешен для запросов. Используйте execute или executeQuery вместо executeUpdate +90002=Метод разрешен только для запросов. Используйте execute или executeUpdate вместо executeQuery +90003=Шестнадцатиричная строка содержит нечетное количество символов: {0} +90004=Шестнадцатиричная строка содержит нешестнадцатиричные символы: {0} +90006=Последовательность {0} вышла за границы (MINVALUE, MAXVALUE) +90007=Объект уже закрыт +90008=Недопустимое значение {0} для параметра {1} +90009=Невозможно создать или изменить последовательность {0} из-за неправильных атрибутов (START/RESTART {1}, MINVALUE {2}, MAXVALUE {3}, INCREMENT {4}) +90010=Неправильный формат TO_CHAR {0} +90011=Путь неявно является относительным для текущего рабочего каталога и не допустим в URL базы данных {0}. Используйте абсолютный путь, ~/name, ./name, или настройку baseDir. 
+90012=Параметр {0} не установлен +90013=База данных {0} не найдена +90014=Ошибка при разборе {0} +90015=SUM или AVG на недопустимом типе данных {0} +90016=Столбец {0} должен быть в предложении GROUP BY +90017=Попытка создания второго первичного ключа +90018=Незакрытое приложением соединение уничтожено сборщиком мусора +90019=Невозможно удалить текущего пользователя +90020=База данных уже используется: {0}. Возможные решения: закрыть все другие соединения; использовать режим сервера +90021=Такое сочетание настроек базы данных не поддерживается: {0} +90022=Функция {0} не найдена +90023=Поле {0} не должно поддерживать значение NULL +90024=Ошибка при переименовании файла {0} в {1} +90025=Невозможно удалить файл {0} +90026=Ошибка сериализации, причина: {0} +90027=Ошибка десериализации, причина: {0} +90028=Ошибка ввода/вывода: {0} +90029=Запись не является обновляемой +90030=Файл поврежден при чтении строки: {0}. Возможные решения: используйте утилиту восстановления (recovery tool) +90031=Ошибка ввода/вывода: {0}; {1} +90032=Пользователь {0} не найден +90033=Пользователь {0} уже существует +90034=Ошибка записи в файл трассировки: {0}, причина: {1} +90035=Последовательность {0} уже существует +90036=Последовательность {0} не найдена +90037=Представление {0} не найдено +90038=Представление {0} уже существует +90039=Этот CLOB или BLOB объект закрыт по таймауту: {0} +90040=Для выполнения данной операции необходимы права администратора +90041=Триггер {0} уже существует +90042=Триггер {0} не найден +90043=Ошибка при создании или инициализации триггера {0}, класс {1}, причина: {2}; для изучения подробностей смотрите корневую причину +90044=Ошибка при выполнении триггера {0}, класс {1}, причина : {2}; для изучения подробностей смотрите корневую причину +90045=Ограничение {0} уже существует +90046=Ошибка формата URL; должно быть {0}, на текущий момент {1} +90047=Несовпадение версий: версия драйвера {0}, версия сервера {1} +90048=Неподдерживаемая версия файлов базы данных или 
некорректный заголовок в файле {0} +90049=Ошибка шифрования в файле {0} +90050=Некорректный формат пароля, должен быть: пароль файла <пробел> пароль пользователя +90051=Количество цифр после разделителя (scale) (${0}) не должно быть больше общего количества цифр (precision) ({1}) +90052=Подзапрос выбирает более одного столбца +90053=Подзапрос выбирает более одной строки +90054=Некорректное использование агрегирующей функции {0} +90055=Метод шифрования {0} не поддерживается +90056=Функция {0}: Неверный формат даты: {1} +90057=Ограничение {0} не найдено +90058=Commit или rollback внутри триггера не допускается +90059=Неоднозначное имя столбца {0} +90060=Метод блокировки файлов {0} не поддерживается +90061=Ошибка при открытии порта {0} (порт может уже использоваться), причина: {1} +90062=Ошибка при создании файла {0} +90063=Savepoint не существует: {0} +90064=Savepoint не имеет имени +90065=Savepoint является именованным +90066=Повтор свойства соединения {0} +90067=Соединение разорвано: {0} +90068=Столбцы предложения ORDER BY {0} в данном случае должны содержаться в выбираемом списке +90069=Роль {0} уже существует +90070=Роль {0} не найдена +90071=Пользователь или роль {0} не найдены +90072=Недопустимо одновременно управлять правами и ролями пользователя +90073=Соответствующие методы Java должны иметь различное количество аргументов: {0} и {1} +90074=Роль {0} уже выдана +90075=Поле является частью индекса {0} +90076=Псевдоним функции {0} уже существует +90077=Псевдоним функции {0} не найден +90078=Схема {0} уже существует +90079=Схема {0} не найдена +90080=Схема должна совпадать с текущей +90081=Таблица содержит записи со значением NULL в поле {0} +90082=Последовательность {0} относится к таблице +90083=На поле может ссылаться {0} +90084=Невозможно удалить последнее поле {0} +90085=Индекс {0} относится к ограничению {1} +90086=Класс {0} не найден +90087=Метод {0} не найден +90088=Неизвестный режим {0} +90089=Невозможно изменить collation, пока есть данные в таблицах: 
{0} +90090=Схема {0} не может быть удалена +90091=Роль {0} не может быть удалена +90093=Ошибка кластеризации - база данных работает в одиночном режиме +90094=Ошибка кластеризации - база данных работает в кластерном режиме, список серверов: {0} +90095=Ошибка формата строкового литерала: {0} +90096=Недостаточно прав на объект {0} +90097=База данных доступна только на чтение +90098=База данных уже закрыта +90099=Ошибка при установке слушателя событий базы данных {0}, причина: {1} +90101=Некорректный XID формат: {0} +90102=Неподдерживаемые опции сжатия: {0} +90103=Неподдерживаемый алгоритм сжатия: {0} +90104=Ошибка сжатия +90105=Ошибка при вызове пользовательской функции: {0} +90106=Невозможно очистить {0} +90107=Невозможно удалить {0}, пока существует зависимый объект {1} +90108=Ошибка нехватки памяти +90109=Представление {0} содержит ошибки: {1} +90111=Ошибка при обращении к линкованной таблице SQL запросом {0}, причина: {1} +90112=Запись не найдена при удалении из индекса {0} +90113=Неподдерживаемая опция соединения {0} +90114=Константа {0} уже существует +90115=Константа {0} не найдена +90116=Вычисление литералов запрещено +90117=Удаленные соединения к данному серверу запрещены, см. -tcpAllowOthers +90118=Невозможно удалить таблицу {0} +90119=Объект с именем {0} уже существует +90120=Домен {0} не найден +90121=База данных уже закрыта (чтобы отключить автоматическое закрытие базы данных при останове JVM, добавьте ";DB_CLOSE_ON_EXIT=FALSE" в URL) +90122=Операция для таблицы {0} не поддерживается, пока существуют представления: {1} +90123=Одновременное использование индексированных и неиндексированных параметров в запросе не поддерживается +90124=Файл не найден: {0} +90125=Недопустимый класс, ожидался {0}, но получен {1} +90126=База данных не является персистентной +90127=Набор записей не является обновляемым. Запрос должен выбирать все поля уникального ключа. Только одна таблица может быть выбрана. +90128=Набор записей не является прокручиваемым. 
Возможно необходимо использовать conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). +90129=Транзакция {0} не найдена +90130=Данный метод не разрешен для PreparedStatement; используйте Statement. +90131=Конкурентное изменение таблицы {0}: другая транзакция обновила или удалила ту же строку +90132=Агрегирующая функция {0} не найдена +90133=Невозможно изменить опцию {0}, когда база данных уже открыта +90134=Доступ к классу {0} запрещен +90135=База данных открыта в эксклюзивном режиме, открыть дополнительные соединения невозможно +90136=Данное условие не поддерживается в OUTER JOIN : {0} +90137=Присваивать значения возможно только переменным, но не: {0} +90138=Недопустимое имя базы данных: {0} +90139=public static Java метод не найден: {0} +90140=Набор записей не является обновляемым. Возможно необходимо использовать conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=Serializer не может быть изменен, потому что есть таблица данных: {0} +90142=Размер шага не должен быть равен нулю +90143=#Row {1} not found in primary index {0} +HY000=Внутренняя ошибка: {0} +HY004=Неизвестный тип данных: {0} +HYC00=Данная функция не поддерживается: {0} +HYT00=Время ожидания блокировки таблицы {0} истекло diff --git a/modules/h2/src/main/java/org/h2/res/_messages_sk.prop b/modules/h2/src/main/java/org/h2/res/_messages_sk.prop new file mode 100644 index 0000000000000..1f017a3688c19 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_sk.prop @@ -0,0 +1,179 @@ +.translator=Ľubomír Grajciar +02000=Žiadné dáta nie sú dostupné +07001=Nesprávny počet parametrov pre {0}, očakávaný počet: {1} +08000=Chyba otvorenia databázy: {0} +21S02=Počet stĺpcov sa nezhoduje +22001=Hodnota je príliš dlhá pre stĺpec {0}: {1} +22003=Číselná hodnota mimo rozsah: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=Nemožem rozobrať {0} konštantu {1} +22012=Delenie nulou: {0} +22018=Chyba konverzie dát pre {0} +22025=Chyba v LIKE ESCAPE: {0} +22030=#Value not permitted 
for column {0}: {1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=NULL nie je povolený pre stĺpec {0} +23503=Porušenie obmedzenia (constraint) referenčnej integrity: {0} +23505=Porušenie jedinečnosti (unique) indexu alebo primárneho kľúča: {0} +23506=Porušenie obmedzenia (constraint) referenčnej integrity: {0} +23507=Nie je nastavená vychodzia hodnota stĺpca {0} +23513=Skontrolujte porušenie obmedzenia (constraint): {0} +23514=#Check constraint invalid: {0} +28000=Nesprávne používateľské meno alebo heslo +40001=Mŕtvy bod (deadlock) detegovaný. Aktuálna transakcia bude odvolaná (rolled back). Podrobnosti: {0} +42000=Syntaktická chyba v SQL príkaze {0} +42001=Syntaktická chyba v SQL príkaze {0}; očakávané {1} +42S01=Tabuľka {0} už existuje +42S02=Tabuľka {0} nenájdená +42S11=Index {0} už existuje +42S12=Index {0} nenájdený +42S21=Duplicitné meno stĺpca {0} +42S22=Stĺpec {0} nenájdený +42S32=Nastavenie {0} nenájdené +57014=Príkaz bol zrušený alebo vypršal časový limit sedenia +90000=Funkcia {0} musí vracať výsledok (result set) +90001=Metóda nie je povolená pre dopyt (query). Použite execute alebo executeQuery namiesto executeUpdate +90002=Metóda je povolená iba pre dopyt (query). Použite execute alebo executeUpdate namiesto executeQuery +90003=Hexadecimálny reťazec s nepárnym počtom znakov: {0} +90004=Hexadecimálny reťazec obsahuje nepovolené znaky pre šestnáskovú sústavu: {0} +90006=#Sequence {0} has run out of numbers +90007=Objekt už je zatvorený +90008=Nesprávna hodnota {0} parametra {1} +90009=#Unable to create or alter sequence {0} because of invalid attributes (start value {1}, min value {2}, max value {3}, increment {4}) +90010=#Invalid TO_CHAR format {0} +90011=#A file path that is implicitly relative to the current working directory is not allowed in the database URL {0}. Use an absolute path, ~/name, ./name, or the baseDir setting instead. 
+90012=Parameter {0} nie je nastavený +90013=Databáza {0} nenájdená +90014=Chyba rozobrania (parse) {0} +90015=SUM alebo AVG na nesprávnom dátovom type {0} +90016=Stĺpec {0} musí byť uvedený v GROUP BY zozname +90017=Pokus o definíciu druhého primárneho kľúča +90018=Spojenie neuzatvorené aplikáciou bolo zrušené +90019=Nemôžem zmazať aktuálneho používateľa +90020=Databáza sa už asi používa: {0}. Možné riešenia: zatvorte všetky ďalšie spojenia; použite serverový mód +90021=#This combination of database settings is not supported: {0} +90022=Funkcia {0} nenájdená +90023=Stĺpec {0} nesmie umožniť vložiť NULL +90024=Chyba pri premenovaní súboru {0} na {1} +90025=Nemôžem zmazať súbor {0} +90026=Serializácia neúspešná, dôvod: {0} +90027=Deserializácia neúspešná, dôvod: {0} +90028=IO výnimka: {0} +90029=Momentálne nie ste na riadku umožňujúcom úpravu (update) +90030=Poškodenie súboru pri čítaní záznamu: {0}. Možné riešenie: použiť nástroj na opravu databázy +90031=IO výnimka: {0}; {1} +90032=Používateľ {0} nenájdený +90033=Používateľ {0} už existuje +90034=Chyba Log súboru: {0}, dôvod: {1} +90035=Sekvencia {0} už existuje +90036=Sekvencia {0} nenájdená +90037=Pohľad (view) {0} nenájdený +90038=Pohľad (view) {0} už existuje +90039=#This CLOB or BLOB reference timed out: {0} +90040=Administrátorské práva sú potrebné pre túto operáciu +90041=Spúšťač (trigger) {0} už existuje +90042=Spúšťač (trigger) {0} nenájdený +90043=Chyba vytvorenia alebo spustenia spúšťača (trigger) {0} objekt, trieda {1}, dôvod: {2}; pre podrobnosti pozrite prvotný (root) dôvod +90044=Chyba vykonania spúšťača (trigger) {0}, trieda {1}, dôvod: {2}; pre podrobnosti pozrite prvotný (root) dôvod +90045=Obmedzenie (constraint) {0} už existuje +90046=Chyba formátu URL; musí byť {0}, ale je {1} +90047=Nezhoda verzií, verzia ovládača je {0}, ale verzia servera je {1} +90048=Nepodporovaná verzia databázového súboru alebo chybná hlavička súboru {0} +90049=Chyba šifrovania súboru {0} +90050=Nesprávny formát
hesiel, musí byť: súborové heslo používateľské heslo +90051=#Scale(${0}) must not be bigger than precision({1}) +90052=Vnorený dopyt (subquery) nie je dopyt na jeden stĺpec +90053=Skalárny vnorený dopyt (scalar subquery) obsahuje viac ako jeden riadok +90054=Nesprávne použitie agregačnej funkcie {0} +90055=Nepodporovaný typ šifry {0} +90056=#Function {0}: Invalid date format: {1} +90057=Obmedzenie (constraint) {0} nenájdený +90058=Commit alebo Rollback nie je povolené použiť v spúšťači (trigger) +90059=Nejednoznačné meno stĺpca {0} +90060=Nepodporovaná metóda zamknutia súboru {0} +90061=Vznikla výnimka pri otváraní portu {0} (port sa asi už používa), dôvod: {1} +90062=Chyba vytvorenia súboru {0} +90063=Bod návratu (savepoint) je nesprávny: {0} +90064=Bod návratu (savepoint) nie je pomenovaný +90065=Bod návratu (savepoint) je pomenovaný +90066=Duplicitná vlastnosť (property) {0} +90067=Spojenie je prerušené: {0} +90068=Order by expression {0} must be in the result list in this case +90069=Rola {0} už existuje +90070=Rola {0} nenájdená +90071=Používateľ alebo rola {0} nenájdená +90072=Role a práva sa nemôžu miešať +90073=Zhodné Java metódy musia mať rozdielny počet parametrov: {0} a {1} +90074=Rola {0} už je udelená (granted) +90075=Stĺpec je časťou indexu {0} +90076=Alias funkcie {0} už existuje +90077=Alias funkcie {0} nenájdený +90078=Schéma {0} už existuje +90079=Schéma {0} nenájdená +90080=Meno schémy sa musí zhodovať +90081=Stĺpec {0} obsahuje NULL hodnoty +90082=Sekvencia {0} patrí tabuľke +90083=Na stĺpec asi odkazuje {0} +90084=Nemôžem zmazať posledný stĺpec {0} +90085=Index {0} patrí obmedzeniu (constraint) {1} +90086=Trieda {0} nenájdená +90087=Metóda {0} nenájdená +90088=Neznámy mód {0} +90089=Kolácia (collation) nemôže byť zmenená, pretože tu je dátová tabuľka: {0} +90090=Schéma {0} nemôže byť zmazaná +90091=Rola {0} nemôže byť zmazaná +90093=Chyba klustra - databáza je teraz v nezávislom (standalone) móde +90094=Chyba klustra - databáza je teraz v
kluster móde, zoznam serverov: {0} +90095=Chyba formátu reťazca: {0} +90096=Nedostatočné práva na objekt {0} +90097=Databáza je iba na čítanie +90098=Databáza bola zatvorená +90099=Chyba nastavenia poslucháča udalostí (event listener) {0} pre databázu, dôvod: {1} +90101=Nesprávny XID formát: {0} +90102=Nepodporované prepínače kompresie: {0} +90103=Nepodporovaný algoritmus kompresie: {0} +90104=Chyba kompresie +90105=Výnimka pri volaní používateľsky definovanej funkcie +90106=Nemôžem skrátiť {0} +90107=Nemôžem zmazať {0}, lebo {1} závisí na {0} +90108=Nedostatok pamäte. +90109=Pohľad (view) {0} je nesprávny: {1} +90111=Chyba prístupu k linkovanej tabuľke SQL príkazom {0}, dôvod: {1} +90112=Riadok nenájdený pri pokuse o vymazanie cez index {0} +90113=Nepodporované nastavenie spojenia {0} +90114=Konštanta {0} už existuje +90115=Konštanta {0} nenájdená +90116=Písmená (literals) tohto druhu nie sú povolené +90117=Vzdialené pripojenia na tento server nie sú povolené, pozrite -tcpAllowOthers +90118=Nemôžem zmazať tabuľku {0} +90119=Používateľský dátový typ {0} už existuje +90120=Používateľský dátový typ {0} nenájdený +90121=Databáza už je zatvorená (na zamedzenie automatického zatvárania pri ukončení VM, pridajte ";DB_CLOSE_ON_EXIT=FALSE" do DB URL) +90122=Operácia pre tabuľku {0} nie je podporovaná, keďže existujú na tabuľku pohľady (views): {1} +90123=Nemožno miešať indexované a neindexované parametre +90124=Súbor nenájdený: {0} +90125=Nesprávna trieda {1}, očakávaná je {0} +90126=Databáza nie je trvalá (persistent) +90127=Výsledok (result set) nie je upraviteľný. Dopyt musí vybrať všetky stĺpce zahrnuté v jedinečnom (unique) kľúči. Iba jedna tabuľka môže byť vybraná. +90128=Výsledok (result set) nie je rolovateľný (scrollable) a nemôže byť resetnutý. Je potrebné použiť conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..).
+90129=Transakcia {0} nenájdená +90130=Táto metóda nie je povolená pre pripravený príkaz (prepared statement); použite namiesto neho normálny príkaz (regular statement). +90131=Viacnásobná úprava v tabuľke {0}: iná transakcia upravila alebo zmazala rovnaký riadok +90132=Agregácia {0} nenájdená +90133=Nemôžem zmeniť nastavenie {0}, keď už je databáza otvorená +90134=Prístup k triede {0} odoprený +90135=Databáza je otvorená vo výhradnom (exclusive) móde; nemôžem na ňu otvoriť ďalšie pripojenia +90136=Nepodporovaná "outer join" podmienka: {0} +90137=Môžete priradiť len do premennej, nie do: {0} +90138=Nesprávne meno databázy: {0} +90139=Verejná statická Java metóda nebola nájdená: {0} +90140=Výsledok (result set) je iba na čítanie. Je potrebné použiť conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=#Serializer cannot be changed because there is a data table: {0} +90142=#Step size must not be zero +90143=#Row {1} not found in primary index {0} +HY000=Všeobecná chyba: {0} +HY004=Neznámy dátový typ: {0} +HYC00=Vlastnosť nie je podporovaná: {0} +HYT00=Vypršal časový limit na zamknutie tabuľky {0} diff --git a/modules/h2/src/main/java/org/h2/res/_messages_zh_cn.prop b/modules/h2/src/main/java/org/h2/res/_messages_zh_cn.prop new file mode 100644 index 0000000000000..aa5cace8ae56c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/_messages_zh_cn.prop @@ -0,0 +1,179 @@ +.translator=Huang JingCong,Terrence +02000=没有可用数据 +07001= {0}非法参数数量, 预期数量: {1} +08000=开启数据库错误: {0} +21S02=字段数目不匹配 +22001=字段 {0}数值太大: {1} +22003=数值超出范围: {0} +22004=#Numeric value out of range: {0} in column {1} +22007=不能解析字段 {0} 的数值: {1} +22012=除数为零: {0} +22018=转换数据{0}期间出现转换错误 +22025=LIKE ESCAPE(转义符)存在错误: {0} +22030=#Value not permitted for column {0}: {1} +22031=#Value not a member of enumerators {0}: {1} +22032=#Empty enums are not allowed +22033=#Duplicate enumerators are not allowed for enum types: {0} +23502=字段 {0} 不允许为NULL值 +23503=违反引用完整性约束: {0} +23505=违反唯一索引或主键约束: {0} +23506=违反引用完整性约束:
{0} +23507=没有为字段{0}设置缺省值 +23513=违反检查约束: {0} +23514=#Check constraint invalid: {0} +28000=用户名或口令错误 +40001=检测到死锁.当前事务已回滚.详情: {0} +42000=SQL语法错误 {0} +42001=SQL语法错误 {0}; 预期: {1} +42S01= {0}表已存在 +42S02=找不到表 {0} +42S11=索引 {0} 已存在 +42S12=找不到索引 {0} +42S21=重复的字段: {0} +42S22=找不到字段 {0} +42S32=找不到设置 {0} +57014=语句已取消执行或会话已过期 +90000={0} 函数必须返回一个结果集 +90001=不允许在查询内使用的方法,使用execute 或 executeQuery 代替 executeUpdate +90002=只允许在查询内使用的方法. 使用 execute 或 executeUpdate 代替 executeQuery +90003=十六进制字符串包含奇数个数字字符: {0} +90004=十六进制字符串包含非十六进制字符: {0} +90006=#Sequence {0} has run out of numbers +90007=对象已关闭 +90008=被发现非法的数值 {0} 在参数 {1} +90009=#Unable to create or alter sequence {0} because of invalid attributes (start value {1}, min value {2}, max value {3}, increment {4}) +90010=#Invalid TO_CHAR format {0} +90011=#A file path that is implicitly relative to the current working directory is not allowed in the database URL {0}. Use an absolute path, ~/name, ./name, or the baseDir setting instead. +90012=参数 {0} 的值还没有设置 +90013=找不到数据库 {0} +90014=解析 {0} 时发生错误 +90015=在错误的数据类型 {0} 上调用 SUM 或 AVG +90016=字段 {0} 必须出现在 GROUP BY 的列表内 +90017=尝试定义第二个主键 +90018=连接未被关闭但已被垃圾回收器回收 +90019=不能移除当前用户 +90020=数据库可能当前正被使用: {0}. 解决方案有:关闭所有其他连接; 使用服务器模式(server mode) +90021=#This combination of database settings is not supported: {0} +90022=未找到函数 {0} +90023=字段 {0} 不能为空 +90024=将文件 {0} 重命名为 {1} 时发生错误 +90025=不能删除文件 {0} +90026=序列化(串行化)失败,原因: {0} +90027=反串行化失败, 原因: {0} +90028=IO 异常: {0} +90029=当前不是一个可更新的行 +90030=读取记录时文件错误: {0}.
解决方案: 使用恢复工具(recovery tool) +90031=IO 异常: {0}; {1} +90032=找不到用户 {0} +90033=用户{0}已存在 +90034=日志文件错误: {0}, 原因: {1} +90035=序列 {0} 已存在 +90036=找不到序列 {0} +90037=视图 {0} 已存在 +90038=找不到 {0} 视图 +90039=#This CLOB or BLOB reference timed out: {0} +90040=此操作需要管理员权限 +90041=触发器 {0}已存在 +90042=找不到触发器 {0} +90043=创建或初始化触发器{0}时发生错误, 类型 {1}, 原因: {2}; 更多详情请查看根本原因 +90044=执行触发器 {0}时发生错误, 类型 {1}, 原因: {2}; 更多详情请查看根本原因 +90045=约束 {0} 已存在 +90046=URL 格式错误; 必须为 {0} 但错误为 {1} +90047=版本不匹配, 驱动版本为 {0} 但服务器版本为 {1} +90048=不支持的数据库文件版本或无效的文件头 {0} +90049=文件加密错误 {0} +90050=错误的口令格式, 必须为: 文件 口令 <空格> 用户 口令 +90051=#Scale(${0}) must not be bigger than precision({1}) +90052=子查询非单一字段查询 +90053=标量子查询(Scalar subquery)包含多于一行结果 +90054=非法使用聚合函数 {0} +90055=不支持的加密算法 {0} +90056=#Function {0}: Invalid date format: {1} +90057=约束 {0} 找不到 +90058=不允许在触发器内提交或回滚 +90059=不明确的字段名 {0} +90060=不支持的文件锁方法 {0} +90061=打开端口 {0} 异常(可能端口正被使用), 原因: {1} +90062=创建文件错误{0} +90063=非法记录点: {0} +90064=未命名的记录点 +90065=记录点已命名 +90066=重复的属性 {0} +90067=连接已断开: {0} +90068=Order by 表达式 {0} 必须在结果列表内 +90069=角色 {0} 已存在 +90070=找不到角色 {0} +90071=找不到用户或者角色{0} +90072=不能混合角色和权限 +90073=匹配的Java方法必须用不同的参数数目: {0} 和 {1} +90074=角色 {0}已授权 +90075=字段是 {0}索引的一部分 +90076=函数别名 {0} 已存在 +90077=找不到函数别名 {0} +90078=方案(Schema) {0} 已存在 +90079=找不到方案(Schema) {0} +90080=方案名(Schema) 必须匹配 +90081=字段 {0} 包含空值 +90082=序列 {0} 属于一个表 +90083=字段可能已被 {0} 引用 +90084=不能移除最后一个字段 {0} +90085=索引 {0} 属于一个约束 {1} +90086=找不到类 {0} +90087=找不到方法 {0} +90088=未知模式 {0} +90089=不能变更排序规则(Collation),因为存在一个数据表: {0} +90090=不能删除方案(schema) {0} +90091=不能删除角色 {0} +90093=集群错误 - 数据库现在运行在独立运行模式 +90094=集群错误 - 数据库现在运行在集群模式, 服务器列表: {0} +90095=字符串格式错误: {0} +90096=对象所需权限不够 {0} +90097=数据库为只读 +90098=数据库已关闭 +90099=设立数据库事件监听器错误 {0}, 原因: {1} +90101=错误的 XID 格式: {0} +90102=不支持的压缩选项: {0} +90103=不支持的压缩算法: {0} +90104=压缩错误 +90105=调用用户自定义函数时发生异常: {0} +90106=不能截断 {0} +90107=不能删除 {0},因为 {1} 依赖着它 +90108=内存不足.
+90109=视图 {0} 无效: {1} +90111=SQL语句访问表连接错误 {0}, 原因: {1} +90112=尝试从索引中删除 {0}的时候找不到行 +90113=不支持的连接设置 {0} +90114=常量 {0} 已存在 +90115=找不到常量{0} +90116=不允许此类型的字面值 +90117=不允许远程连接到本服务器, 参见 -tcpAllowOthers +90118=不能删除表 {0} +90119=用户数据类型 {0} 已存在 +90120=找不到用户数据类型 {0} +90121=数据库已关闭 (若需要禁用在虚拟机关闭的时候同时关闭数据库,请加上 ";DB_CLOSE_ON_EXIT=FALSE" 到数据库连接的 URL) +90122={0}表不支持本操作,因为在表上存在视图: {1} +90123=不能混合已索引和未索引的参数 +90124=找不到文件: {0} +90125=无效的类, 预期 {0} 但得到 {1} +90126=数据库不是持久化的 +90127=结果集不可更新. 查询必须选择唯一键的所有字段. 只能选择一个表. +90128=结果集不可滚动和重置. 你可以使用 conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ..). +90129=找不到事务 {0} +90130=预编译语句不允许使用此方法; 使用常规语句代替. +90131=并发修改表 {0}: 另一个事务已经更新或删除相同的行 +90132=找不到聚合{0} +90133=数据库已打开的时候不允许更改设置 {0} +90134=访问 {0}类时被拒绝 +90135=数据库运行在独占模式(exclusive mode); 不能打开额外的连接 +90136=不支持的外连接条件: {0} +90137=只能赋值到一个变量,而不是: {0} +90138=无效数据库名称: {0} +90139=找不到公用Java静态方法: {0} +90140=结果集是只读的. 你可以使用 conn.createStatement(.., ResultSet.CONCUR_UPDATABLE). +90141=#Serializer cannot be changed because there is a data table: {0} +90142=#Step size must not be zero +90143=#Row {1} not found in primary index {0} +HY000=常规错误: {0} +HY004=未知数据类型: {0} +HYC00=不支持的特性: {0} +HYT00=尝试锁定表 {0} 的时候超时 diff --git a/modules/h2/src/main/java/org/h2/res/h2-22-t.png b/modules/h2/src/main/java/org/h2/res/h2-22-t.png new file mode 100644 index 0000000000000..42cd1d5e24c4f Binary files /dev/null and b/modules/h2/src/main/java/org/h2/res/h2-22-t.png differ diff --git a/modules/h2/src/main/java/org/h2/res/h2-22.png b/modules/h2/src/main/java/org/h2/res/h2-22.png new file mode 100644 index 0000000000000..95ba9c7be15ba Binary files /dev/null and b/modules/h2/src/main/java/org/h2/res/h2-22.png differ diff --git a/modules/h2/src/main/java/org/h2/res/h2-24.png b/modules/h2/src/main/java/org/h2/res/h2-24.png new file mode 100644 index 0000000000000..04b5d2b524f21 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/res/h2-24.png differ diff --git a/modules/h2/src/main/java/org/h2/res/h2-64-t.png
b/modules/h2/src/main/java/org/h2/res/h2-64-t.png new file mode 100644 index 0000000000000..3680773d11e9b Binary files /dev/null and b/modules/h2/src/main/java/org/h2/res/h2-64-t.png differ diff --git a/modules/h2/src/main/java/org/h2/res/h2.png b/modules/h2/src/main/java/org/h2/res/h2.png new file mode 100644 index 0000000000000..d5340e557f3a9 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/res/h2.png differ diff --git a/modules/h2/src/main/java/org/h2/res/help.csv b/modules/h2/src/main/java/org/h2/res/help.csv new file mode 100644 index 0000000000000..f9ecd070b9707 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/help.csv @@ -0,0 +1,1660 @@ +# Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +# and the EPL 1.0 (http://h2database.com/html/license.html). +# Initial Developer: H2 Group +"SECTION","TOPIC","SYNTAX","TEXT" +"Commands (DML)","SELECT"," +SELECT [ TOP term ] [ DISTINCT | ALL ] selectExpression [,...] +FROM tableExpression [,...] [ WHERE expression ] +[ GROUP BY expression [,...] ] [ HAVING expression ] +[ { UNION [ ALL ] | MINUS | EXCEPT | INTERSECT } select ] +[ ORDER BY order [,...] ] +[ { LIMIT expression [ OFFSET expression ] [ SAMPLE_SIZE rowCountInt ] } + | { [ OFFSET expression { ROW | ROWS } ] + [ { FETCH { FIRST | NEXT } expression { ROW | ROWS } ONLY } ] } ] +[ FOR UPDATE ] +"," +Selects data from a table or multiple tables." +"Commands (DML)","INSERT"," +INSERT INTO tableName +{ [ ( columnName [,...] ) ] + { VALUES { ( { DEFAULT | expression } [,...] ) } [,...] + | [ DIRECT ] [ SORTED ] select } } | + { SET { columnName = { DEFAULT | expression } } [,...] } +"," +Inserts a new row / new rows into a table." +"Commands (DML)","UPDATE"," +UPDATE tableName [ [ AS ] newTableAlias ] SET +{ { columnName = { DEFAULT | expression } } [,...] } | + { ( columnName [,...] ) = ( select ) } +[ WHERE expression ] [ ORDER BY order [,...] ] [ LIMIT expression ] +"," +Updates data in a table."
+"Commands (DML)","DELETE"," +DELETE [ TOP term ] FROM tableName [ WHERE expression ] [ LIMIT term ] +"," +Deletes rows from a table." +"Commands (DML)","BACKUP"," +BACKUP TO fileNameString +"," +Backs up the database files to a .zip file." +"Commands (DML)","CALL"," +CALL expression +"," +Calculates a simple expression." +"Commands (DML)","EXPLAIN"," +EXPLAIN { [ PLAN FOR ] | ANALYZE } +{ select | insert | update | delete | merge } +"," +Shows the execution plan for a statement." +"Commands (DML)","MERGE"," +MERGE INTO tableName [ ( columnName [,...] ) ] +[ KEY ( columnName [,...] ) ] +{ VALUES { ( { DEFAULT | expression } [,...] ) } [,...] | select } +"," +Updates existing rows, and inserts rows that don't exist." +"Commands (DML)","MERGE USING"," +MERGE INTO targetTableName [ [AS] targetAlias] +USING { ( select ) | sourceTableName }[ [AS] sourceAlias ] +ON ( expression ) +[ WHEN MATCHED THEN [ update ] [ delete] ] +[ WHEN NOT MATCHED THEN insert ] +"," +Updates or deletes existing rows, and inserts rows that don't exist." +"Commands (DML)","RUNSCRIPT"," +RUNSCRIPT FROM fileNameString scriptCompressionEncryption +[ CHARSET charsetString ] +"," +Runs a SQL script from a file." +"Commands (DML)","SCRIPT"," +SCRIPT [ SIMPLE ] [ NODATA ] [ NOPASSWORDS ] [ NOSETTINGS ] +[ DROP ] [ BLOCKSIZE blockSizeInt ] +[ TO fileNameString scriptCompressionEncryption + [ CHARSET charsetString ] ] +[ TABLE tableName [, ...] ] +[ SCHEMA schemaName [, ...] ] +"," +Creates a SQL script from the database." +"Commands (DML)","SHOW"," +SHOW { SCHEMAS | TABLES [ FROM schemaName ] | + COLUMNS FROM tableName [ FROM schemaName ] } +"," +Lists the schemas, tables, or the columns of a table." +"Commands (DML)","WITH"," +WITH [ RECURSIVE ] { name [( columnName [,...] )] AS ( select ) [,...] } +{ select | insert | update | merge | delete | createTable } +"," +Can be used to create a recursive or non-recursive query (common table expression)."
+"Commands (DDL)","ALTER INDEX RENAME"," +ALTER INDEX [ IF EXISTS ] indexName RENAME TO newIndexName +"," +Renames an index." +"Commands (DDL)","ALTER SCHEMA RENAME"," +ALTER SCHEMA [ IF EXISTS ] schema RENAME TO newSchemaName +"," +Renames a schema." +"Commands (DDL)","ALTER SEQUENCE"," +ALTER SEQUENCE [ IF EXISTS ] sequenceName +[ RESTART WITH long ] +[ INCREMENT BY long ] +[ MINVALUE long | NOMINVALUE | NO MINVALUE ] +[ MAXVALUE long | NOMAXVALUE | NO MAXVALUE ] +[ CYCLE long | NOCYCLE | NO CYCLE ] +[ CACHE long | NOCACHE | NO CACHE ] +"," +Changes the parameters of a sequence." +"Commands (DDL)","ALTER TABLE ADD"," +ALTER TABLE [ IF EXISTS ] tableName ADD [ COLUMN ] +{ [ IF NOT EXISTS ] columnName columnDefinition + | ( { columnName columnDefinition | constraint } [,...] ) } +[ { { BEFORE | AFTER } columnName } | FIRST ] +"," +Adds a new column to a table." +"Commands (DDL)","ALTER TABLE ADD CONSTRAINT"," +ALTER TABLE [ IF EXISTS ] tableName ADD constraint [ CHECK | NOCHECK ] +"," +Adds a constraint to a table." +"Commands (DDL)","ALTER TABLE RENAME CONSTRAINT"," +ALTER TABLE [ IF EXISTS ] tableName RENAME oldConstraintName +TO newConstraintName +"," +Renames a constraint." +"Commands (DDL)","ALTER TABLE ALTER COLUMN"," +ALTER TABLE [ IF EXISTS ] tableName ALTER COLUMN columnName +{ { columnDefinition } + | { RENAME TO name } + | { RESTART WITH long } + | { SELECTIVITY int } + | { SET DEFAULT expression } + | { SET ON UPDATE expression } + | { SET NULL } + | { SET NOT NULL } + | { SET { VISIBLE | INVISIBLE } } + | { DROP { DEFAULT | ON UPDATE } } } +"," +Changes the data type of a column, renames a column, +changes the identity value, or changes the selectivity." +"Commands (DDL)","ALTER TABLE DROP COLUMN"," +ALTER TABLE [ IF EXISTS ] tableName DROP COLUMN [ IF EXISTS ] +columnName [,...] | ( columnName [,...] ) +"," +Removes column(s) from a table."
+"Commands (DDL)","ALTER TABLE DROP CONSTRAINT"," +ALTER TABLE [ IF EXISTS ] tableName DROP +{ CONSTRAINT [ IF EXISTS ] constraintName | PRIMARY KEY } +"," +Removes a constraint or a primary key from a table." +"Commands (DDL)","ALTER TABLE SET"," +ALTER TABLE [ IF EXISTS ] tableName SET REFERENTIAL_INTEGRITY +{ FALSE | TRUE } [ CHECK | NOCHECK ] +"," +Disables or enables referential integrity checking for a table." +"Commands (DDL)","ALTER TABLE RENAME"," +ALTER TABLE [ IF EXISTS ] tableName RENAME TO newName +"," +Renames a table." +"Commands (DDL)","ALTER USER ADMIN"," +ALTER USER userName ADMIN { TRUE | FALSE } +"," +Switches the admin flag of a user on or off." +"Commands (DDL)","ALTER USER RENAME"," +ALTER USER userName RENAME TO newUserName +"," +Renames a user." +"Commands (DDL)","ALTER USER SET PASSWORD"," +ALTER USER userName SET { PASSWORD string | SALT bytes HASH bytes } +"," +Changes the password of a user." +"Commands (DDL)","ALTER VIEW"," +ALTER VIEW [ IF EXISTS ] viewName RECOMPILE +"," +Recompiles a view after the underlying tables have been changed or created." +"Commands (DDL)","ANALYZE"," +ANALYZE [ TABLE tableName ] [ SAMPLE_SIZE rowCountInt ] +"," +Updates the selectivity statistics of tables." +"Commands (DDL)","COMMENT"," +COMMENT ON +{ { COLUMN [ schemaName. ] tableName.columnName } + | { { TABLE | VIEW | CONSTANT | CONSTRAINT | ALIAS | INDEX | ROLE + | SCHEMA | SEQUENCE | TRIGGER | USER | DOMAIN } [ schemaName. ] objectName } } +IS expression +"," +Sets the comment of a database object." +"Commands (DDL)","CREATE AGGREGATE"," +CREATE AGGREGATE [ IF NOT EXISTS ] newAggregateName FOR className +"," +Creates a new user-defined aggregate function." +"Commands (DDL)","CREATE ALIAS"," +CREATE ALIAS [ IF NOT EXISTS ] newFunctionAliasName [ DETERMINISTIC ] +[ NOBUFFER ] { FOR classAndMethodName | AS sourceCodeString } +"," +Creates a new function alias." 
+"Commands (DDL)","CREATE CONSTANT"," +CREATE CONSTANT [ IF NOT EXISTS ] newConstantName VALUE expression +"," +Creates a new constant." +"Commands (DDL)","CREATE DOMAIN"," +CREATE DOMAIN [ IF NOT EXISTS ] newDomainName AS dataType +[ DEFAULT expression ] [ [ NOT ] NULL ] [ SELECTIVITY selectivity ] +[ CHECK condition ] +"," +Creates a new data type (domain)." +"Commands (DDL)","CREATE INDEX"," +CREATE +{ [ UNIQUE ] [ HASH | SPATIAL] INDEX [ [ IF NOT EXISTS ] newIndexName ] + | PRIMARY KEY [ HASH ] } +ON tableName ( indexColumn [,...] ) +"," +Creates a new index." +"Commands (DDL)","CREATE LINKED TABLE"," +CREATE [ FORCE ] [ [ GLOBAL | LOCAL ] TEMPORARY ] +LINKED TABLE [ IF NOT EXISTS ] +name ( driverString, urlString, userString, passwordString, +[ originalSchemaString, ] originalTableString ) [ EMIT UPDATES | READONLY ] +"," +Creates a table link to an external table." +"Commands (DDL)","CREATE ROLE"," +CREATE ROLE [ IF NOT EXISTS ] newRoleName +"," +Creates a new role." +"Commands (DDL)","CREATE SCHEMA"," +CREATE SCHEMA [ IF NOT EXISTS ] name +[ AUTHORIZATION ownerUserName ] +[ WITH tableEngineParamName [,...] ] +"," +Creates a new schema." +"Commands (DDL)","CREATE SEQUENCE"," +CREATE SEQUENCE [ IF NOT EXISTS ] newSequenceName +[ START WITH long ] +[ INCREMENT BY long ] +[ MINVALUE long | NOMINVALUE | NO MINVALUE ] +[ MAXVALUE long | NOMAXVALUE | NO MAXVALUE ] +[ CYCLE long | NOCYCLE | NO CYCLE ] +[ CACHE long | NOCACHE | NO CACHE ] +"," +Creates a new sequence." +"Commands (DDL)","CREATE TABLE"," +CREATE [ CACHED | MEMORY ] [ TEMP | [ GLOBAL | LOCAL ] TEMPORARY ] +TABLE [ IF NOT EXISTS ] name +[ ( { columnName columnDefinition | constraint } [,...] ) ] +[ ENGINE tableEngineName ] +[ WITH tableEngineParamName [,...] ] +[ NOT PERSISTENT ] [ TRANSACTIONAL ] +[ AS select ]"," +Creates a new table." 
+"Commands (DDL)","CREATE TRIGGER"," +CREATE TRIGGER [ IF NOT EXISTS ] newTriggerName +{ BEFORE | AFTER | INSTEAD OF } +{ INSERT | UPDATE | DELETE | SELECT | ROLLBACK } +[,...] ON tableName [ FOR EACH ROW ] +[ QUEUE int ] [ NOWAIT ] +{ CALL triggeredClassName | AS sourceCodeString } +"," +Creates a new trigger." +"Commands (DDL)","CREATE USER"," +CREATE USER [ IF NOT EXISTS ] newUserName +{ PASSWORD string | SALT bytes HASH bytes } [ ADMIN ] +"," +Creates a new user." +"Commands (DDL)","CREATE VIEW"," +CREATE [ OR REPLACE ] [ FORCE ] VIEW [ IF NOT EXISTS ] newViewName +[ ( columnName [,...] ) ] AS select +"," +Creates a new view." +"Commands (DDL)","DROP AGGREGATE"," +DROP AGGREGATE [ IF EXISTS ] aggregateName +"," +Drops an existing user-defined aggregate function." +"Commands (DDL)","DROP ALIAS"," +DROP ALIAS [ IF EXISTS ] existingFunctionAliasName +"," +Drops an existing function alias." +"Commands (DDL)","DROP ALL OBJECTS"," +DROP ALL OBJECTS [ DELETE FILES ] +"," +Drops all existing views, tables, sequences, schemas, function aliases, roles, +user-defined aggregate functions, domains, and users (except the current user)." +"Commands (DDL)","DROP CONSTANT"," +DROP CONSTANT [ IF EXISTS ] constantName +"," +Drops a constant." +"Commands (DDL)","DROP DOMAIN"," +DROP DOMAIN [ IF EXISTS ] domainName +"," +Drops a data type (domain)." +"Commands (DDL)","DROP INDEX"," +DROP INDEX [ IF EXISTS ] indexName +"," +Drops an index." +"Commands (DDL)","DROP ROLE"," +DROP ROLE [ IF EXISTS ] roleName +"," +Drops a role." +"Commands (DDL)","DROP SCHEMA"," +DROP SCHEMA [ IF EXISTS ] schemaName [ RESTRICT | CASCADE ] +"," +Drops a schema." +"Commands (DDL)","DROP SEQUENCE"," +DROP SEQUENCE [ IF EXISTS ] sequenceName +"," +Drops a sequence." +"Commands (DDL)","DROP TABLE"," +DROP TABLE [ IF EXISTS ] tableName [,...] [ RESTRICT | CASCADE ] +"," +Drops an existing table, or a list of tables." 
+"Commands (DDL)","DROP TRIGGER"," +DROP TRIGGER [ IF EXISTS ] triggerName +"," +Drops an existing trigger." +"Commands (DDL)","DROP USER"," +DROP USER [ IF EXISTS ] userName +"," +Drops a user." +"Commands (DDL)","DROP VIEW"," +DROP VIEW [ IF EXISTS ] viewName [ RESTRICT | CASCADE ] +"," +Drops an existing view." +"Commands (DDL)","TRUNCATE TABLE"," +TRUNCATE TABLE tableName +"," +Removes all rows from a table." +"Commands (Other)","CHECKPOINT"," +CHECKPOINT +"," +Flushes the data to disk." +"Commands (Other)","CHECKPOINT SYNC"," +CHECKPOINT SYNC +"," +Flushes the data to disk and forces all system buffers to be written +to the underlying device." +"Commands (Other)","COMMIT"," +COMMIT [ WORK ] +"," +Commits a transaction." +"Commands (Other)","COMMIT TRANSACTION"," +COMMIT TRANSACTION transactionName +"," +Sets the resolution of an in-doubt transaction to 'commit'." +"Commands (Other)","GRANT RIGHT"," +GRANT { SELECT | INSERT | UPDATE | DELETE | ALL } [,...] ON +{ { SCHEMA schemaName } | { tableName [,...] } } +TO { PUBLIC | userName | roleName } +"," +Grants rights for a table to a user or role." +"Commands (Other)","GRANT ALTER ANY SCHEMA"," +GRANT ALTER ANY SCHEMA TO userName +"," +Grants schema altering rights to a user." +"Commands (Other)","GRANT ROLE"," +GRANT roleName TO { PUBLIC | userName | roleName } +"," +Grants a role to a user or role." +"Commands (Other)","HELP"," +HELP [ anything [...] ] +"," +Displays the help pages of SQL commands or keywords." +"Commands (Other)","PREPARE COMMIT"," +PREPARE COMMIT newTransactionName +"," +Prepares committing a transaction." +"Commands (Other)","REVOKE RIGHT"," +REVOKE { SELECT | INSERT | UPDATE | DELETE | ALL } [,...] ON +{ { SCHEMA schemaName } | { tableName [,...] } } +FROM { PUBLIC | userName | roleName } +"," +Removes rights for a table from a user or role." +"Commands (Other)","REVOKE ROLE"," +REVOKE roleName FROM { PUBLIC | userName | roleName } +"," +Removes a role from a user or role."
+"Commands (Other)","ROLLBACK"," +ROLLBACK [ TO SAVEPOINT savepointName ] +"," +Rolls back a transaction." +"Commands (Other)","ROLLBACK TRANSACTION"," +ROLLBACK TRANSACTION transactionName +"," +Sets the resolution of an in-doubt transaction to 'rollback'." +"Commands (Other)","SAVEPOINT"," +SAVEPOINT savepointName +"," +Creates a new savepoint." +"Commands (Other)","SET @"," +SET @variableName [ = ] expression +"," +Updates a user-defined variable." +"Commands (Other)","SET ALLOW_LITERALS"," +SET ALLOW_LITERALS { NONE | ALL | NUMBERS } +"," +This setting can help solve the SQL injection problem." +"Commands (Other)","SET AUTOCOMMIT"," +SET AUTOCOMMIT { TRUE | ON | FALSE | OFF } +"," +Switches auto commit on or off." +"Commands (Other)","SET CACHE_SIZE"," +SET CACHE_SIZE int +"," +Sets the size of the cache in KB (each KB being 1024 bytes) for the current database." +"Commands (Other)","SET CLUSTER"," +SET CLUSTER serverListString +"," +This command should not be used directly by an application; the statement is +executed automatically by the system." +"Commands (Other)","SET BINARY_COLLATION"," +SET BINARY_COLLATION +{ UNSIGNED | SIGNED } +"," +Sets the collation used for comparing BINARY columns, the default is SIGNED +for version 1." +"Commands (Other)","SET BUILTIN_ALIAS_OVERRIDE"," +SET BUILTIN_ALIAS_OVERRIDE +{ TRUE | FALSE } +"," +Allows the overriding of the builtin system date/time functions +for unit testing purposes." +"Commands (Other)","SET COLLATION"," +SET [ DATABASE ] COLLATION +{ OFF | collationName + [ STRENGTH { PRIMARY | SECONDARY | TERTIARY | IDENTICAL } ] } +"," +Sets the collation used for comparing strings." +"Commands (Other)","SET COMPRESS_LOB"," +SET COMPRESS_LOB { NO | LZF | DEFLATE } +"," +This feature is only available for the PageStore storage engine." +"Commands (Other)","SET DATABASE_EVENT_LISTENER"," +SET DATABASE_EVENT_LISTENER classNameString +"," +Sets the event listener class."
+"Commands (Other)","SET DB_CLOSE_DELAY"," +SET DB_CLOSE_DELAY int +"," +Sets the delay for closing a database if all connections are closed." +"Commands (Other)","SET DEFAULT_LOCK_TIMEOUT"," +SET DEFAULT LOCK_TIMEOUT int +"," +Sets the default lock timeout (in milliseconds) in this database that is used +for the new sessions." +"Commands (Other)","SET DEFAULT_TABLE_TYPE"," +SET DEFAULT_TABLE_TYPE { MEMORY | CACHED } +"," +Sets the default table storage type that is used when creating new tables." +"Commands (Other)","SET EXCLUSIVE"," +SET EXCLUSIVE { 0 | 1 | 2 } +"," +Switches the database to exclusive mode (1, 2) and back to normal mode (0)." +"Commands (Other)","SET IGNORECASE"," +SET IGNORECASE { TRUE | FALSE } +"," +If IGNORECASE is enabled, text columns in newly created tables will be +case-insensitive." +"Commands (Other)","SET JAVA_OBJECT_SERIALIZER"," +SET JAVA_OBJECT_SERIALIZER +{ null | className } +"," +Sets the object used to serialize and deserialize java objects being stored in columns of type OTHER." +"Commands (Other)","SET LAZY_QUERY_EXECUTION"," +SET LAZY_QUERY_EXECUTION int +"," +Sets the lazy query execution mode." +"Commands (Other)","SET LOG"," +SET LOG int +"," +Sets the transaction log mode." +"Commands (Other)","SET LOCK_MODE"," +SET LOCK_MODE int +"," +Sets the lock mode." +"Commands (Other)","SET LOCK_TIMEOUT"," +SET LOCK_TIMEOUT int +"," +Sets the lock timeout (in milliseconds) for the current session." +"Commands (Other)","SET MAX_LENGTH_INPLACE_LOB"," +SET MAX_LENGTH_INPLACE_LOB int +"," +Sets the maximum size of an in-place LOB object." +"Commands (Other)","SET MAX_LOG_SIZE"," +SET MAX_LOG_SIZE int +"," +Sets the maximum size of the transaction log, in megabytes." +"Commands (Other)","SET MAX_MEMORY_ROWS"," +SET MAX_MEMORY_ROWS int +"," +The maximum number of rows in a result set that are kept in-memory."
+"Commands (Other)","SET MAX_MEMORY_UNDO"," +SET MAX_MEMORY_UNDO int +"," +The maximum number of undo records per session that are kept in-memory." +"Commands (Other)","SET MAX_OPERATION_MEMORY"," +SET MAX_OPERATION_MEMORY int +"," +Sets the maximum memory used for large operations (delete and insert), in bytes." +"Commands (Other)","SET MODE"," +SET MODE { REGULAR | DB2 | DERBY | HSQLDB | MSSQLSERVER | MYSQL | ORACLE | POSTGRESQL } +"," +Changes to another database compatibility mode." +"Commands (Other)","SET MULTI_THREADED"," +SET MULTI_THREADED { 0 | 1 } +"," +Enables (1) or disables (0) multi-threading inside the database engine." +"Commands (Other)","SET OPTIMIZE_REUSE_RESULTS"," +SET OPTIMIZE_REUSE_RESULTS { 0 | 1 } +"," +Enables (1) or disables (0) the result reuse optimization." +"Commands (Other)","SET PASSWORD"," +SET PASSWORD string +"," +Changes the password of the current user." +"Commands (Other)","SET QUERY_STATISTICS"," +SET QUERY_STATISTICS { TRUE | FALSE } +"," +Disables or enables query statistics gathering for the whole database." +"Commands (Other)","SET QUERY_STATISTICS_MAX_ENTRIES"," +SET QUERY_STATISTICS_MAX_ENTRIES int +"," +Sets the maximum number of entries in query statistics meta-table." +"Commands (Other)","SET QUERY_TIMEOUT"," +SET QUERY_TIMEOUT int +"," +Sets the query timeout of the current session to the given value." +"Commands (Other)","SET REFERENTIAL_INTEGRITY"," +SET REFERENTIAL_INTEGRITY { TRUE | FALSE } +"," +Disables or enables referential integrity checking for the whole database." +"Commands (Other)","SET RETENTION_TIME"," +SET RETENTION_TIME int +"," +This property is only used when using the MVStore storage engine." +"Commands (Other)","SET SALT HASH"," +SET SALT bytes HASH bytes +"," +Sets the password salt and hash for the current user." +"Commands (Other)","SET SCHEMA"," +SET SCHEMA schemaName +"," +Changes the default schema of the current connection."
+"Commands (Other)","SET SCHEMA_SEARCH_PATH","
+SET SCHEMA_SEARCH_PATH schemaName [,...]
+","
+Changes the schema search path of the current connection."
+"Commands (Other)","SET THROTTLE","
+SET THROTTLE int
+","
+Sets the throttle for the current connection."
+"Commands (Other)","SET TRACE_LEVEL","
+SET { TRACE_LEVEL_FILE | TRACE_LEVEL_SYSTEM_OUT } int
+","
+Sets the trace level for the file or system out stream."
+"Commands (Other)","SET TRACE_MAX_FILE_SIZE","
+SET TRACE_MAX_FILE_SIZE int
+","
+Sets the maximum trace file size."
+"Commands (Other)","SET UNDO_LOG","
+SET UNDO_LOG int
+","
+Enables (1) or disables (0) the per session undo log."
+"Commands (Other)","SET WRITE_DELAY","
+SET WRITE_DELAY int
+","
+Sets the maximum delay between a commit and flushing the log, in milliseconds."
+"Commands (Other)","SHUTDOWN","
+SHUTDOWN [ IMMEDIATELY | COMPACT | DEFRAG ]
+","
+This statement closes all open connections to the database and closes the
+database."
+"Datetime fields","Datetime field","
+yearField | monthField | dayOfMonthField
+ | hourField | minuteField | secondField
+ | millisecondField | microsecondField | nanosecondField
+ | timezoneHourField | timezoneMinuteField
+ | dayOfWeekField | isoDayOfWeekField
+ | weekOfYearField | isoWeekOfYearField
+ | quarterField | dayOfYearField | epochField
+","
+Fields for EXTRACT, DATEADD, and DATEDIFF functions."
+"Datetime fields","Year field","
+YEAR | YYYY | YY | SQL_TSI_YEAR
+","
+Year."
+"Datetime fields","Month field","
+MONTH | MM | M | SQL_TSI_MONTH
+","
+Month (1-12)."
+"Datetime fields","Day of month field","
+DAY | DD | D | SQL_TSI_DAY
+","
+Day of month (1-31)."
+"Datetime fields","Hour field","
+HOUR | HH | SQL_TSI_HOUR
+","
+Hour (0-23)."
+"Datetime fields","Minute field","
+MINUTE | MI | N | SQL_TSI_MINUTE
+","
+Minute (0-59)."
+"Datetime fields","Second field","
+SECOND | SS | S | SQL_TSI_SECOND
+","
+Second (0-59)."
+"Datetime fields","Millisecond field","
+MILLISECOND | MS
+","
+Millisecond (0-999)."
+"Datetime fields","Microsecond field","
+MICROSECOND | MCS
+","
+Microsecond (0-999999)."
+"Datetime fields","Nanosecond field","
+NANOSECOND | NS
+","
+Nanosecond (0-999999999)."
+"Datetime fields","Timezone hour field","
+TIMEZONE_HOUR
+","
+Timezone hour (from -18 to +18)."
+"Datetime fields","Timezone minute field","
+TIMEZONE_MINUTE
+","
+Timezone minute (from -59 to +59)."
+"Datetime fields","Day of week field","
+DAY_OF_WEEK | DAYOFWEEK | DOW
+","
+Day of week (1-7)."
+"Datetime fields","ISO day of week field","
+ISO_DAY_OF_WEEK
+","
+ISO day of week (1-7)."
+"Datetime fields","Week of year field","
+WEEK | WW | W | SQL_TSI_WEEK
+","
+Week of year (1-53)."
+"Datetime fields","ISO week of year field","
+ISO_WEEK
+","
+ISO week of year (1-53)."
+"Datetime fields","Quarter field","
+QUARTER
+","
+Quarter (1-4)."
+"Datetime fields","Day of year field","
+DAYOFYEAR | DAY_OF_YEAR | DOY | DY
+","
+Day of year (1-366)."
+"Datetime fields","Epoch field","
+EPOCH
+","
+For TIMESTAMP values, the number of seconds since 1970-01-01 00:00:00 in the local time zone."
+"Other Grammar","Alias","
+name
+","
+An alias is a name that is only valid in the context of the statement."
+"Other Grammar","And Condition","
+condition [ { AND condition } [...] ]
+","
+Value or condition."
+"Other Grammar","Array","
+( [ expression, [ expression [,...] ] ] )
+","
+An array of values."
+"Other Grammar","Boolean","
+TRUE | FALSE
+","
+A boolean value."
+"Other Grammar","Bytes","
+X'hex'
+","
+A binary value."
+"Other Grammar","Case","
+CASE expression { WHEN expression THEN expression } [...]
+[ ELSE expression ] END
+","
+Returns the first expression where the value is equal to the test expression."
+"Other Grammar","Case When","
+CASE { WHEN expression THEN expression } [...]
+[ ELSE expression ] END
+","
+Returns the first expression where the condition is true."
+"Other Grammar","Cipher","
+AES
+","
+Only the algorithm AES (""AES-128"") is supported currently."
+"Other Grammar","Column Definition","
+dataType [ VISIBLE | INVISIBLE ]
+[ { DEFAULT expression | AS computedColumnExpression } ]
+[ ON UPDATE expression ] [ [ NOT ] NULL ]
+[ { AUTO_INCREMENT | IDENTITY } [ ( startInt [, incrementInt ] ) ] ]
+[ SELECTIVITY selectivity ] [ COMMENT expression ]
+[ PRIMARY KEY [ HASH ] | UNIQUE ] [ CHECK condition ]
+","
+Default expressions are used if no explicit value was used when adding a row."
+"Other Grammar","Comments","
+-- anythingUntilEndOfLine | // anythingUntilEndOfLine | /* anythingUntilEndComment */
+","
+Comments can be used anywhere in a command and are ignored by the database."
+"Other Grammar","Compare","
+<> | <= | >= | = | < | > | != | &&
+","
+Comparison operator."
+"Other Grammar","Condition","
+operand [ conditionRightHandSide ] | NOT condition | EXISTS ( select )
+","
+Boolean value or condition."
+"Other Grammar","Condition Right Hand Side","
+compare { { { ALL | ANY | SOME } ( select ) } | operand }
+ | IS [ NOT ] NULL
+ | IS [ NOT ] [ DISTINCT FROM ] operand
+ | BETWEEN operand AND operand
+ | IN ( { select | expression [,...] } )
+ | [ NOT ] [ LIKE | ILIKE ] operand [ ESCAPE string ]
+ | [ NOT ] REGEXP operand
+","
+The right hand side of a condition."
+"Other Grammar","Constraint","
+[ constraintNameDefinition ]
+{ CHECK expression
+ | UNIQUE ( columnName [,...] )
+ | referentialConstraint
+ | PRIMARY KEY [ HASH ] ( columnName [,...] ) }
+","
+Defines a constraint."
+"Other Grammar","Constraint Name Definition","
+CONSTRAINT [ IF NOT EXISTS ] newConstraintName
+","
+Defines a constraint name."
+"Other Grammar","Csv Options","
+charsetString [, fieldSepString [, fieldDelimString [, escString [, nullString]]]]
+ | optionString
+","
+Optional parameters for CSVREAD and CSVWRITE."
+"Other Grammar","Data Type","
+intType | booleanType | tinyintType | smallintType | bigintType | identityType
+ | decimalType | doubleType | realType | dateType | timeType | timestampType
+ | timestampWithTimeZoneType | binaryType | otherType | varcharType
+ | varcharIgnorecaseType | charType | blobType | clobType | uuidType
+ | arrayType | enumType
+","
+A data type definition."
+"Other Grammar","Date","
+DATE 'yyyy-MM-dd'
+","
+A date literal."
+"Other Grammar","Decimal","
+[ + | - ] { { number [ . number ] } | { . number } }
+[ E [ + | - ] expNumber [...] ]
+","
+A decimal number with fixed precision and scale."
+"Other Grammar","Digit","
+0-9
+","
+A digit."
+"Other Grammar","Dollar Quoted String","
+$$anythingExceptTwoDollarSigns$$
+","
+A string starts and ends with two dollar signs."
+"Other Grammar","Expression","
+andCondition [ { OR andCondition } [...] ]
+","
+Value or condition."
+"Other Grammar","Factor","
+term [ { { * | / | % } term } [...] ]
+","
+A value or a numeric factor."
+"Other Grammar","Hex","
+{ { digit | a-f | A-F } { digit | a-f | A-F } } [...]
+","
+The hexadecimal representation of a number or of bytes."
+"Other Grammar","Hex Number","
+[ + | - ] 0x hex
+","
+A number written in hexadecimal notation."
+"Other Grammar","Index Column","
+columnName [ ASC | DESC ] [ NULLS { FIRST | LAST } ]
+","
+Indexes this column in ascending or descending order."
+"Other Grammar","Int","
+[ + | - ] number
+","
+The maximum integer number is 2147483647, the minimum is -2147483648."
+"Other Grammar","Long","
+[ + | - ] number
+","
+Long numbers are between -9223372036854775808 and 9223372036854775807."
+"Other Grammar","Name","
+{ { A-Z|_ } [ { A-Z|_|0-9 } [...] ] } | quotedName
+","
+Names are not case sensitive."
+"Other Grammar","Null","
+NULL
+","
+NULL is a value without data type and means 'unknown value'."
+"Other Grammar","Number","
+digit [...]
+","
+The maximum length of the number depends on the data type used."
+"Other Grammar","Numeric","
+decimal | int | long | hexNumber
+","
+The data type of a numeric value is always the lowest possible for the given value."
+"Other Grammar","Operand","
+summand [ { || summand } [...] ]
+","
+A value or a concatenation of values."
+"Other Grammar","Order","
+{ int | expression } [ ASC | DESC ] [ NULLS { FIRST | LAST } ]
+","
+Sorts the result by the given column number, or by an expression."
+"Other Grammar","Quoted Name","
+""anythingExceptDoubleQuote""
+","
+Quoted names are case sensitive, and can contain spaces."
+"Other Grammar","Referential Constraint","
+FOREIGN KEY ( columnName [,...] )
+REFERENCES [ refTableName ] [ ( refColumnName [,...] ) ]
+[ ON DELETE referentialAction ] [ ON UPDATE referentialAction ]
+","
+Defines a referential constraint."
+"Other Grammar","Referential Action","
+CASCADE | RESTRICT | NO ACTION | SET { DEFAULT | NULL }
+","
+The action CASCADE will cause conflicting rows in the referencing (child) table to be deleted or updated."
+"Other Grammar","Script Compression Encryption","
+[ COMPRESSION { DEFLATE | LZF | ZIP | GZIP } ]
+[ CIPHER cipher PASSWORD string ]
+","
+The compression and encryption algorithm to use for script files."
+"Other Grammar","Select Expression","
+* | expression [ [ AS ] columnAlias ] | tableAlias.*
+","
+An expression in a SELECT statement."
+"Other Grammar","String","
+'anythingExceptSingleQuote'
+","
+A string starts and ends with a single quote."
+"Other Grammar","Summand","
+factor [ { { + | - } factor } [...] ]
+","
+A value or a numeric sum."
+"Other Grammar","Table Expression","
+{ [ schemaName. ] tableName | ( select ) | valuesExpression }
+[ [ AS ] newTableAlias [ ( columnName [,...] ) ] ]
+[ USE INDEX ([ indexName [,...] ]) ]
+[ { { LEFT | RIGHT } [ OUTER ] | [ INNER ] | CROSS | NATURAL }
+ JOIN tableExpression [ ON expression ] ]
+","
+Joins a table."
+"Other Grammar","Values Expression","
+VALUES { ( expression [,...] ) } [,...]
+","
+A list of rows that can be used like a table."
+"Other Grammar","Term","
+value
+ | columnName
+ | ?[ int ]
+ | NEXT VALUE FOR sequenceName
+ | function
+ | { - | + } term
+ | ( expression )
+ | select
+ | case
+ | caseWhen
+ | tableAlias.columnName
+ | userDefinedFunctionName
+","
+A value."
+"Other Grammar","Time","
+TIME [ WITHOUT TIME ZONE ] 'hh:mm:ss[.nnnnnnnnn]'
+","
+A time literal."
+"Other Grammar","Timestamp","
+TIMESTAMP [ WITHOUT TIME ZONE ] 'yyyy-MM-dd hh:mm:ss[.nnnnnnnnn]'
+","
+A timestamp literal."
+"Other Grammar","Timestamp with time zone","
+TIMESTAMP WITH TIME ZONE 'yyyy-MM-dd hh:mm:ss[.nnnnnnnnn]
+[Z | { - | + } timeZoneOffsetString | timeZoneNameString ]'
+","
+A timestamp with time zone literal."
+"Other Grammar","Date and time","
+date | time | timestamp | timestampWithTimeZone
+","
+A literal value of any date-time data type."
+"Other Grammar","Value","
+string | dollarQuotedString | numeric | dateAndTime | boolean | bytes | array | null
+","
+A literal value of any data type, or null."
+"Data Types","INT Type","
+INT | INTEGER | MEDIUMINT | INT4 | SIGNED
+","
+Possible values: -2147483648 to 2147483647."
+"Data Types","BOOLEAN Type","
+BOOLEAN | BIT | BOOL
+","
+Possible values: TRUE and FALSE."
+"Data Types","TINYINT Type","
+TINYINT
+","
+Possible values are: -128 to 127."
+"Data Types","SMALLINT Type","
+SMALLINT | INT2 | YEAR
+","
+Possible values: -32768 to 32767."
+"Data Types","BIGINT Type","
+BIGINT | INT8
+","
+Possible values: -9223372036854775808 to 9223372036854775807."
+"Data Types","IDENTITY Type","
+IDENTITY
+","
+Auto-Increment value."
+"Data Types","DECIMAL Type","
+{ DECIMAL | NUMBER | DEC | NUMERIC } ( precisionInt [ , scaleInt ] )
+","
+Data type with fixed precision and scale."
+"Data Types","DOUBLE Type","
+{ DOUBLE [ PRECISION ] | FLOAT [ ( precisionInt ) ] | FLOAT8 }
+","
+A floating point number."
+"Data Types","REAL Type","
+{ REAL | FLOAT ( precisionInt ) | FLOAT4 }
+","
+A single precision floating point number."
+"Data Types","TIME Type","
+TIME [ ( precisionInt ) ] [ WITHOUT TIME ZONE ]
+","
+The time data type."
+"Data Types","DATE Type","
+DATE
+","
+The date data type."
+"Data Types","TIMESTAMP Type","
+{ TIMESTAMP [ ( precisionInt ) ] [ WITHOUT TIME ZONE ]
+ | DATETIME | SMALLDATETIME }
+","
+The timestamp data type."
+"Data Types","TIMESTAMP WITH TIME ZONE Type","
+TIMESTAMP [ ( precisionInt ) ] WITH TIME ZONE
+","
+The timestamp with time zone data type."
+"Data Types","BINARY Type","
+{ BINARY | VARBINARY | LONGVARBINARY | RAW | BYTEA }
+[ ( precisionInt ) ]
+","
+Represents a byte array."
+"Data Types","OTHER Type","
+OTHER
+","
+This type allows storing serialized Java objects."
+"Data Types","VARCHAR Type","
+{ VARCHAR | LONGVARCHAR | VARCHAR2 | NVARCHAR
+ | NVARCHAR2 | VARCHAR_CASESENSITIVE } [ ( precisionInt ) ]
+","
+A Unicode String."
+"Data Types","VARCHAR_IGNORECASE Type","
+VARCHAR_IGNORECASE [ ( precisionInt ) ]
+","
+Same as VARCHAR, but not case sensitive when comparing."
+"Data Types","CHAR Type","
+{ CHAR | CHARACTER | NCHAR } [ ( precisionInt ) ]
+","
+A Unicode String."
+"Data Types","BLOB Type","
+{ BLOB | TINYBLOB | MEDIUMBLOB | LONGBLOB | IMAGE | OID }
+[ ( precisionInt ) ]
+","
+Like BINARY, but intended for very large values such as files or images."
+"Data Types","CLOB Type","
+{ CLOB | TINYTEXT | TEXT | MEDIUMTEXT | LONGTEXT | NTEXT | NCLOB }
+[ ( precisionInt ) ]
+","
+CLOB is like VARCHAR, but intended for very large values."
+"Data Types","UUID Type","
+UUID
+","
+Universally unique identifier."
+"Data Types","ARRAY Type","
+ARRAY
+","
+An array of values."
+"Data Types","ENUM Type","
+{ ENUM (string [, ... ]) }
+","
+A type with enumerated values."
+"Data Types","GEOMETRY Type","
+GEOMETRY
+","
+A spatial geometry type, based on the ""org.locationtech.jts"" library."
+"Functions (Aggregate)","AVG","
+AVG ( [ DISTINCT ] { numeric } ) [ FILTER ( WHERE expression ) ]
+","
+The average (mean) value."
+"Functions (Aggregate)","BIT_AND","
+BIT_AND(expression) [ FILTER ( WHERE expression ) ]
+","
+The bitwise AND of all non-null values."
+"Functions (Aggregate)","BIT_OR","
+BIT_OR(expression) [ FILTER ( WHERE expression ) ]
+","
+The bitwise OR of all non-null values."
+"Functions (Aggregate)","BOOL_AND","
+BOOL_AND(boolean) [ FILTER ( WHERE expression ) ]
+","
+Returns true if all expressions are true."
+"Functions (Aggregate)","BOOL_OR","
+BOOL_OR(boolean) [ FILTER ( WHERE expression ) ]
+","
+Returns true if any expression is true."
+"Functions (Aggregate)","COUNT","
+COUNT( { * | { [ DISTINCT ] expression } } ) [ FILTER ( WHERE expression ) ]
+","
+The count of all rows, or of the non-null values."
+"Functions (Aggregate)","GROUP_CONCAT","
+GROUP_CONCAT ( [ DISTINCT ] string
+[ ORDER BY { expression [ ASC | DESC ] } [,...] ]
+[ SEPARATOR expression ] ) [ FILTER ( WHERE expression ) ]
+","
+Concatenates strings with a separator."
+"Functions (Aggregate)","ARRAY_AGG","
+ARRAY_AGG ( [ DISTINCT ] string
+[ ORDER BY { expression [ ASC | DESC ] } [,...] ] )
+[ FILTER ( WHERE expression ) ]
+","
+Aggregates the values into an array."
+"Functions (Aggregate)","MAX","
+MAX(value) [ FILTER ( WHERE expression ) ]
+","
+The highest value."
+"Functions (Aggregate)","MIN","
+MIN(value) [ FILTER ( WHERE expression ) ]
+","
+The lowest value."
+"Functions (Aggregate)","SUM","
+SUM( [ DISTINCT ] { numeric } ) [ FILTER ( WHERE expression ) ]
+","
+The sum of all values."
+"Functions (Aggregate)","SELECTIVITY","
+SELECTIVITY(value) [ FILTER ( WHERE expression ) ]
+","
+Estimates the selectivity (0-100) of a value."
+"Functions (Aggregate)","STDDEV_POP","
+STDDEV_POP( [ DISTINCT ] numeric ) [ FILTER ( WHERE expression ) ]
+","
+The population standard deviation."
+"Functions (Aggregate)","STDDEV_SAMP","
+STDDEV_SAMP( [ DISTINCT ] numeric ) [ FILTER ( WHERE expression ) ]
+","
+The sample standard deviation."
+"Functions (Aggregate)","VAR_POP","
+VAR_POP( [ DISTINCT ] numeric ) [ FILTER ( WHERE expression ) ]
+","
+The population variance (square of the population standard deviation)."
+"Functions (Aggregate)","VAR_SAMP","
+VAR_SAMP( [ DISTINCT ] numeric ) [ FILTER ( WHERE expression ) ]
+","
+The sample variance (square of the sample standard deviation)."
+"Functions (Aggregate)","MEDIAN","
+MEDIAN( [ DISTINCT ] value ) [ FILTER ( WHERE expression ) ]
+","
+The value separating the higher half of the values from the lower half."
+"Functions (Numeric)","ABS","
+ABS ( { numeric } )
+","
+See also Java ""Math.abs""."
+"Functions (Numeric)","ACOS","
+ACOS(numeric)
+","
+Calculate the arc cosine."
+"Functions (Numeric)","ASIN","
+ASIN(numeric)
+","
+Calculate the arc sine."
+"Functions (Numeric)","ATAN","
+ATAN(numeric)
+","
+Calculate the arc tangent."
+"Functions (Numeric)","COS","
+COS(numeric)
+","
+Calculate the trigonometric cosine."
+"Functions (Numeric)","COSH","
+COSH(numeric)
+","
+Calculate the hyperbolic cosine."
+"Functions (Numeric)","COT","
+COT(numeric)
+","
+Calculate the trigonometric cotangent (""1/TAN(ANGLE)"")."
+"Functions (Numeric)","SIN","
+SIN(numeric)
+","
+Calculate the trigonometric sine."
+"Functions (Numeric)","SINH","
+SINH(numeric)
+","
+Calculate the hyperbolic sine."
+"Functions (Numeric)","TAN","
+TAN(numeric)
+","
+Calculate the trigonometric tangent."
+"Functions (Numeric)","TANH","
+TANH(numeric)
+","
+Calculate the hyperbolic tangent."
+"Functions (Numeric)","ATAN2","
+ATAN2(numeric, numeric)
+","
+Calculate the angle when converting the rectangular coordinates to polar coordinates."
+"Functions (Numeric)","BITAND","
+BITAND(long, long)
+","
+The bitwise AND operation."
+"Functions (Numeric)","BITGET","
+BITGET(long, int)
+","
+Returns true if and only if the first parameter has a bit set in the
+position specified by the second parameter."
+"Functions (Numeric)","BITOR","
+BITOR(long, long)
+","
+The bitwise OR operation."
+"Functions (Numeric)","BITXOR","
+BITXOR(long, long)
+","
+The bitwise XOR operation."
+"Functions (Numeric)","MOD","
+MOD(long, long)
+","
+The modulo operation."
+"Functions (Numeric)","CEILING","
+{ CEILING | CEIL } (numeric)
+","
+See also Java ""Math.ceil""."
+"Functions (Numeric)","DEGREES","
+DEGREES(numeric)
+","
+See also Java ""Math.toDegrees""."
+"Functions (Numeric)","EXP","
+EXP(numeric)
+","
+See also Java ""Math.exp""."
+"Functions (Numeric)","FLOOR","
+FLOOR(numeric)
+","
+See also Java ""Math.floor""."
+"Functions (Numeric)","LOG","
+{ LOG | LN } (numeric)
+","
+See also Java ""Math.log""."
+"Functions (Numeric)","LOG10","
+LOG10(numeric)
+","
+See also Java ""Math.log10""."
+"Functions (Numeric)","RADIANS","
+RADIANS(numeric)
+","
+See also Java ""Math.toRadians""."
+"Functions (Numeric)","SQRT","
+SQRT(numeric)
+","
+See also Java ""Math.sqrt""."
+"Functions (Numeric)","PI","
+PI()
+","
+See also Java ""Math.PI""."
+"Functions (Numeric)","POWER","
+POWER(numeric, numeric)
+","
+See also Java ""Math.pow""."
+"Functions (Numeric)","RAND","
+{ RAND | RANDOM } ( [ int ] )
+","
+Calling the function without a parameter returns the next pseudo random number."
+"Functions (Numeric)","RANDOM_UUID","
+{ RANDOM_UUID | UUID } ()
+","
+Returns a new UUID with 122 pseudo random bits."
+"Functions (Numeric)","ROUND","
+ROUND(numeric [, digitsInt])
+","
+Rounds to a number of digits, or to the nearest long if the number of digits is not set."
+"Functions (Numeric)","ROUNDMAGIC","
+ROUNDMAGIC(numeric)
+","
+This function rounds numbers in a good way, but it is slow."
+"Functions (Numeric)","SECURE_RAND","
+SECURE_RAND(int)
+","
+Generates a number of cryptographically secure random numbers."
+"Functions (Numeric)","SIGN","
+SIGN ( { numeric } )
+","
+Returns -1 if the value is smaller than 0, 0 if zero, and otherwise 1."
+"Functions (Numeric)","ENCRYPT","
+ENCRYPT(algorithmString, keyBytes, dataBytes)
+","
+Encrypts data using a key."
+"Functions (Numeric)","DECRYPT","
+DECRYPT(algorithmString, keyBytes, dataBytes)
+","
+Decrypts data using a key."
+"Functions (Numeric)","HASH","
+HASH(algorithmString, dataBytes, iterationInt)
+","
+Calculate the hash value using an algorithm, and repeat this process for a number of iterations."
+"Functions (Numeric)","TRUNCATE","
+{ TRUNC | TRUNCATE } ( { {numeric, digitsInt}
+ | timestamp | timestampWithTimeZone | date | timestampString } )
+","
+Truncates to a number of digits (to the next value closer to 0)."
+"Functions (Numeric)","COMPRESS","
+COMPRESS(dataBytes [, algorithmString])
+","
+Compresses the data using the specified compression algorithm."
+"Functions (Numeric)","EXPAND","
+EXPAND(bytes)
+","
+Expands data that was compressed using the COMPRESS function."
+"Functions (Numeric)","ZERO","
+ZERO()
+","
+Returns the value 0."
+"Functions (String)","ASCII","
+ASCII(string)
+","
+Returns the ASCII value of the first character in the string."
+"Functions (String)","BIT_LENGTH","
+BIT_LENGTH(string)
+","
+Returns the number of bits in a string."
+"Functions (String)","LENGTH","
+{ LENGTH | CHAR_LENGTH | CHARACTER_LENGTH } ( string )
+","
+Returns the number of characters in a string."
+"Functions (String)","OCTET_LENGTH","
+OCTET_LENGTH(string)
+","
+Returns the number of bytes in a string."
+"Functions (String)","CHAR","
+{ CHAR | CHR } ( int )
+","
+Returns the character that represents the ASCII value."
+"Functions (String)","CONCAT","
+CONCAT(string, string [,...])
+","
+Combines strings."
+"Functions (String)","CONCAT_WS","
+CONCAT_WS(separatorString, string, string [,...])
+","
+Combines strings with a separator."
+"Functions (String)","DIFFERENCE","
+DIFFERENCE(string, string)
+","
+Returns the difference between the sounds of two strings."
+"Functions (String)","HEXTORAW","
+HEXTORAW(string)
+","
+Converts a hex representation of a string to a string."
+"Functions (String)","RAWTOHEX","
+RAWTOHEX(string)
+","
+Converts a string to the hex representation."
+"Functions (String)","INSTR","
+INSTR(string, searchString [, startInt])
+","
+Returns the location of a search string in a string."
+"Functions (String)","INSERT Function","
+INSERT(originalString, startInt, lengthInt, addString)
+","
+Inserts an additional string into the original string at a specified start position."
+"Functions (String)","LOWER","
+{ LOWER | LCASE } ( string )
+","
+Converts a string to lowercase."
+"Functions (String)","UPPER","
+{ UPPER | UCASE } ( string )
+","
+Converts a string to uppercase."
+"Functions (String)","LEFT","
+LEFT(string, int)
+","
+Returns the leftmost number of characters."
+"Functions (String)","RIGHT","
+RIGHT(string, int)
+","
+Returns the rightmost number of characters."
+"Functions (String)","LOCATE","
+LOCATE(searchString, string [, startInt])
+","
+Returns the location of a search string in a string."
+"Functions (String)","POSITION","
+POSITION(searchString, string)
+","
+Returns the location of a search string in a string."
+"Functions (String)","LPAD","
+LPAD(string, int[, paddingString])
+","
+Left pad the string to the specified length."
+"Functions (String)","RPAD","
+RPAD(string, int[, paddingString])
+","
+Right pad the string to the specified length."
+"Functions (String)","LTRIM","
+LTRIM(string)
+","
+Removes all leading spaces from a string."
+"Functions (String)","RTRIM","
+RTRIM(string)
+","
+Removes all trailing spaces from a string."
+"Functions (String)","TRIM","
+TRIM ( [ { LEADING | TRAILING | BOTH } [ string ] FROM ] string )
+","
+Removes all leading spaces, trailing spaces, or spaces at both ends, from a string."
+"Functions (String)","REGEXP_REPLACE","
+REGEXP_REPLACE(inputString, regexString, replacementString [, flagsString])
+","
+Replaces each substring that matches a regular expression."
+"Functions (String)","REGEXP_LIKE","
+REGEXP_LIKE(inputString, regexString [, flagsString])
+","
+Matches string to a regular expression."
+"Functions (String)","REPEAT","
+REPEAT(string, int)
+","
+Returns a string repeated some number of times."
+"Functions (String)","REPLACE","
+REPLACE(string, searchString [, replacementString])
+","
+Replaces all occurrences of a search string in a text with another string."
+"Functions (String)","SOUNDEX","
+SOUNDEX(string)
+","
+Returns a four character code representing the sound of a string."
+"Functions (String)","SPACE","
+SPACE(int)
+","
+Returns a string consisting of a number of spaces."
+"Functions (String)","STRINGDECODE","
+STRINGDECODE(string)
+","
+Converts an encoded string using the Java string literal encoding format."
+"Functions (String)","STRINGENCODE","
+STRINGENCODE(string)
+","
+Encodes special characters in a string using the Java string literal encoding format."
+"Functions (String)","STRINGTOUTF8","
+STRINGTOUTF8(string)
+","
+Encodes a string to a byte array using the UTF8 encoding format."
+"Functions (String)","SUBSTRING","
+{ SUBSTRING | SUBSTR } ( string, startInt [, lengthInt ] )
+","
+Returns a substring of a string starting at a position."
+"Functions (String)","UTF8TOSTRING","
+UTF8TOSTRING(bytes)
+","
+Decodes a byte array in the UTF8 format to a string."
+"Functions (String)","XMLATTR","
+XMLATTR(nameString, valueString)
+","
+Creates an XML attribute element of the form ""name=value""."
+"Functions (String)","XMLNODE","
+XMLNODE(elementString [, attributesString [, contentString [, indentBoolean]]])
+","
+Create an XML node element."
+"Functions (String)","XMLCOMMENT","
+XMLCOMMENT(commentString)
+","
+Creates an XML comment."
+"Functions (String)","XMLCDATA","
+XMLCDATA(valueString)
+","
+Creates an XML CDATA element."
+"Functions (String)","XMLSTARTDOC","
+XMLSTARTDOC()
+","
+Returns the XML declaration."
+"Functions (String)","XMLTEXT","
+XMLTEXT(valueString [, escapeNewlineBoolean])
+","
+Creates an XML text element."
+"Functions (String)","TO_CHAR","
+TO_CHAR(value [, formatString[, nlsParamString]])
+","
+Oracle-compatible TO_CHAR function that can format a timestamp, a number, or text."
+"Functions (String)","TRANSLATE","
+TRANSLATE(value, searchString, replacementString)
+","
+Oracle-compatible TRANSLATE function that replaces a sequence of characters in a string with another set of characters."
+"Functions (Time and Date)","CURRENT_DATE","
+{ CURRENT_DATE [ () ] | CURDATE() | SYSDATE | TODAY }
+","
+Returns the current date."
+"Functions (Time and Date)","CURRENT_TIME","
+{ CURRENT_TIME [ () ] | CURTIME() }
+","
+Returns the current time."
+"Functions (Time and Date)","CURRENT_TIMESTAMP","
+{ CURRENT_TIMESTAMP [ ( [ int ] ) ] | NOW( [ int ] ) }
+","
+Returns the current timestamp."
+"Functions (Time and Date)","DATEADD","
+{ DATEADD | TIMESTAMPADD } (datetimeField, addIntLong, dateAndTime)
+","
+Adds units to a date-time value."
+"Functions (Time and Date)","DATEDIFF","
+{ DATEDIFF | TIMESTAMPDIFF } (datetimeField, aDateAndTime, bDateAndTime)
+","
+Returns the number of crossed unit boundaries between two date/time values."
+"Functions (Time and Date)","DAYNAME","
+DAYNAME(dateAndTime)
+","
+Returns the name of the day (in English)."
+"Functions (Time and Date)","DAY_OF_MONTH","
+DAY_OF_MONTH(dateAndTime)
+","
+Returns the day of the month (1-31)."
+"Functions (Time and Date)","DAY_OF_WEEK","
+DAY_OF_WEEK(dateAndTime)
+","
+Returns the day of the week (1 means Sunday)."
+"Functions (Time and Date)","ISO_DAY_OF_WEEK","
+ISO_DAY_OF_WEEK(dateAndTime)
+","
+Returns the ISO day of the week (1 means Monday)."
+"Functions (Time and Date)","DAY_OF_YEAR","
+DAY_OF_YEAR(dateAndTime)
+","
+Returns the day of the year (1-366)."
+"Functions (Time and Date)","EXTRACT","
+EXTRACT ( datetimeField FROM dateAndTime )
+","
+Returns a value of the specific time unit from a date/time value."
+"Functions (Time and Date)","FORMATDATETIME","
+FORMATDATETIME ( dateAndTime, formatString
+[ , localeString [ , timeZoneString ] ] )
+","
+Formats a date, time or timestamp as a string."
+"Functions (Time and Date)","HOUR","
+HOUR(dateAndTime)
+","
+Returns the hour (0-23) from a date/time value."
+"Functions (Time and Date)","MINUTE","
+MINUTE(dateAndTime)
+","
+Returns the minute (0-59) from a date/time value."
+"Functions (Time and Date)","MONTH","
+MONTH(dateAndTime)
+","
+Returns the month (1-12) from a date/time value."
+"Functions (Time and Date)","MONTHNAME","
+MONTHNAME(dateAndTime)
+","
+Returns the name of the month (in English)."
+"Functions (Time and Date)","PARSEDATETIME","
+PARSEDATETIME(string, formatString
+[, localeString [, timeZoneString]])
+","
+Parses a string and returns a timestamp."
+"Functions (Time and Date)","QUARTER","
+QUARTER(dateAndTime)
+","
+Returns the quarter (1-4) from a date/time value."
+"Functions (Time and Date)","SECOND","
+SECOND(dateAndTime)
+","
+Returns the second (0-59) from a date/time value."
+"Functions (Time and Date)","WEEK","
+WEEK(dateAndTime)
+","
+Returns the week (1-53) from a date/time value."
+"Functions (Time and Date)","ISO_WEEK","
+ISO_WEEK(dateAndTime)
+","
+Returns the ISO week (1-53) from a date/time value."
+"Functions (Time and Date)","YEAR","
+YEAR(dateAndTime)
+","
+Returns the year from a date/time value."
+"Functions (Time and Date)","ISO_YEAR","
+ISO_YEAR(dateAndTime)
+","
+Returns the ISO week year from a date/time value."
+"Functions (System)","ARRAY_GET","
+ARRAY_GET(arrayExpression, indexExpression)
+","
+Returns one element of an array."
+"Functions (System)","ARRAY_LENGTH","
+ARRAY_LENGTH(arrayExpression)
+","
+Returns the length of an array."
+"Functions (System)","ARRAY_CONTAINS","
+ARRAY_CONTAINS(arrayExpression, value)
+","
+Returns a boolean true if the array contains the value."
+"Functions (System)","AUTOCOMMIT","
+AUTOCOMMIT()
+","
+Returns true if auto commit is switched on for this session."
+"Functions (System)","CANCEL_SESSION","
+CANCEL_SESSION(sessionInt)
+","
+Cancels the currently executing statement of another session."
+"Functions (System)","CASEWHEN Function","
+CASEWHEN(boolean, aValue, bValue)
+","
+Returns 'a' if the boolean expression is true, otherwise 'b'."
+"Functions (System)","CAST","
+CAST(value AS dataType)
+","
+Converts a value to another data type."
+"Functions (System)","COALESCE","
+{ COALESCE | NVL } (aValue, bValue [,...])
+","
+Returns the first value that is not null."
+"Functions (System)","CONVERT","
+CONVERT(value, dataType)
+","
+Converts a value to another data type."
+"Functions (System)","CURRVAL","
+CURRVAL( [ schemaName, ] sequenceString )
+","
+Returns the current (last) value of the sequence, independent of the session."
+"Functions (System)","CSVREAD","
+CSVREAD(fileNameString [, columnsString [, csvOptions ] ] )
+","
+Returns the result set of reading the CSV (comma separated values) file."
+"Functions (System)","CSVWRITE","
+CSVWRITE ( fileNameString, queryString [, csvOptions [, lineSepString] ] )
+","
+Writes a CSV (comma separated values) file."
+"Functions (System)","DATABASE","
+DATABASE()
+","
+Returns the name of the database."
+"Functions (System)","DATABASE_PATH","
+DATABASE_PATH()
+","
+Returns the directory of the database files and the database name, if it is file based."
+"Functions (System)","DECODE","
+DECODE(value, whenValue, thenValue [,...])
+","
+Returns the first matching value."
+"Functions (System)","DISK_SPACE_USED","
+DISK_SPACE_USED(tableNameString)
+","
+Returns the approximate amount of space used by the table specified."
+"Functions (System)","SIGNAL","
+SIGNAL(sqlStateString, messageString)
+","
+Throws an SQLException with the passed SQLState and reason."
+"Functions (System)","FILE_READ","
+FILE_READ(fileNameString [,encodingString])
+","
+Returns the contents of a file."
+"Functions (System)","FILE_WRITE","
+FILE_WRITE(blobValue, fileNameString)
+","
+Writes the supplied parameter into a file."
+"Functions (System)","GREATEST","
+GREATEST(aValue, bValue [,...])
+","
+Returns the largest value that is not NULL, or NULL if all values are NULL."
+"Functions (System)","IDENTITY","
+IDENTITY()
+","
+Returns the last inserted identity value for this session."
+"Functions (System)","IFNULL","
+IFNULL(aValue, bValue)
+","
+Returns the value of 'a' if it is not null, otherwise 'b'."
+"Functions (System)","LEAST","
+LEAST(aValue, bValue [,...])
+","
+Returns the smallest value that is not NULL, or NULL if all values are NULL."
+"Functions (System)","LOCK_MODE","
+LOCK_MODE()
+","
+Returns the current lock mode."
+"Functions (System)","LOCK_TIMEOUT","
+LOCK_TIMEOUT()
+","
+Returns the lock timeout of the current session (in milliseconds)."
+"Functions (System)","LINK_SCHEMA","
+LINK_SCHEMA(targetSchemaString, driverString, urlString,
+userString, passwordString, sourceSchemaString)
+","
+Creates table links for all tables in a schema."
+"Functions (System)","MEMORY_FREE","
+MEMORY_FREE()
+","
+Returns the free memory in KB (where 1024 bytes is a KB)."
+"Functions (System)","MEMORY_USED","
+MEMORY_USED()
+","
+Returns the used memory in KB (where 1024 bytes is a KB)."
+"Functions (System)","NEXTVAL","
+NEXTVAL ( [ schemaName, ] sequenceString )
+","
+Returns the next value of the sequence."
+"Functions (System)","NULLIF","
+NULLIF(aValue, bValue)
+","
+Returns NULL if 'a' is equal to 'b', otherwise 'a'."
+"Functions (System)","NVL2","
+NVL2(testValue, aValue, bValue)
+","
+If the test value is null, then 'b' is returned."
+"Functions (System)","READONLY"," +READONLY() +"," +Returns true if the database is read-only." +"Functions (System)","ROWNUM"," +{ ROWNUM() } | { ROW_NUMBER() OVER() } +"," +Returns the number of the current row." +"Functions (System)","SCHEMA"," +SCHEMA() +"," +Returns the name of the default schema for this session." +"Functions (System)","SCOPE_IDENTITY"," +SCOPE_IDENTITY() +"," +Returns the last inserted identity value for this session for the current scope +(the current statement)." +"Functions (System)","SESSION_ID"," +SESSION_ID() +"," +Returns the unique session id number for the current database connection." +"Functions (System)","SET"," +SET(@variableName, value) +"," +Updates a variable with the given value." +"Functions (System)","TABLE"," +{ TABLE | TABLE_DISTINCT } ( { name dataType = expression } [,...] ) +"," +Returns the result set." +"Functions (System)","TRANSACTION_ID"," +TRANSACTION_ID() +"," +Returns the current transaction id for this session." +"Functions (System)","TRUNCATE_VALUE"," +TRUNCATE_VALUE(value, precisionInt, forceBoolean) +"," +Truncate a value to the required precision." +"Functions (System)","USER"," +{ USER | CURRENT_USER } () +"," +Returns the name of the current user of this session." +"Functions (System)","H2VERSION"," +H2VERSION() +"," +Returns the H2 version as a String." +"System Tables","Information Schema"," +INFORMATION_SCHEMA +"," +To get the list of system tables, execute the statement SELECT * FROM +INFORMATION_SCHEMA." +"System Tables","Range Table"," +SYSTEM_RANGE(start, end) +"," +Contains all values from start to end (this is a dynamic table)." diff --git a/modules/h2/src/main/java/org/h2/res/javadoc.properties b/modules/h2/src/main/java/org/h2/res/javadoc.properties new file mode 100644 index 0000000000000..620ebf80b022c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/res/javadoc.properties @@ -0,0 +1,41 @@ +org.h2.jmx.DatabaseInfoMBean=Information and management operations for the given database. 
+org.h2.jmx.DatabaseInfoMBean.getCacheSize=The current cache size in KB. +org.h2.jmx.DatabaseInfoMBean.getCacheSizeMax=The maximum cache size in KB. +org.h2.jmx.DatabaseInfoMBean.getFileReadCount=The file read count since the database was opened. +org.h2.jmx.DatabaseInfoMBean.getFileSize=The database file size in KB. +org.h2.jmx.DatabaseInfoMBean.getFileWriteCount=The number of write operations since the database was opened. +org.h2.jmx.DatabaseInfoMBean.getFileWriteCountTotal=The number of write operations since the database was created. +org.h2.jmx.DatabaseInfoMBean.getLogMode=The transaction log mode (0 disabled, 1 without sync, 2 enabled). +org.h2.jmx.DatabaseInfoMBean.getMode=The database compatibility mode (REGULAR if no compatibility mode is\n used). +org.h2.jmx.DatabaseInfoMBean.getTraceLevel=The trace level (0 disabled, 1 error, 2 info, 3 debug). +org.h2.jmx.DatabaseInfoMBean.getVersion=The database version. +org.h2.jmx.DatabaseInfoMBean.isExclusive=Is the database open in exclusive mode? +org.h2.jmx.DatabaseInfoMBean.isMultiThreaded=Is multi-threading enabled? +org.h2.jmx.DatabaseInfoMBean.isMvcc=Is MVCC (multi version concurrency) enabled? +org.h2.jmx.DatabaseInfoMBean.isReadOnly=Is the database read-only? +org.h2.jmx.DatabaseInfoMBean.listSessions=List sessions, including the queries that are in\n progress, and locked tables. +org.h2.jmx.DatabaseInfoMBean.listSettings=List the database settings. +org.h2.tools.Backup=Creates a backup of a database.\nThis tool copies all database files. The database must be closed before using\n this tool. To create a backup while the database is in use, run the BACKUP\n SQL statement. In an emergency, for example if the application is not\n responding, creating a backup using the Backup tool is possible by using the\n quiet mode. However, if the database is changed while the backup is running\n in quiet mode, the backup could be corrupt. +org.h2.tools.Backup.main=Options are case sensitive. 
Supported options are\:\n[-help] or [-?] Print the list of options\n[-file ] The target file name (default\: backup.zip)\n[-dir ] The source directory (default\: .)\n[-db ] Source database; not required if there is only one\n[-quiet] Do not print progress information +org.h2.tools.ChangeFileEncryption=Allows changing the database file encryption password or algorithm.\nThis tool can not be used to change a password of a user.\n The database must be closed before using this tool. +org.h2.tools.ChangeFileEncryption.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-cipher type] The encryption type (AES)\n[-dir ] The database directory (default\: .)\n[-db ] Database name (all databases if not set)\n[-decrypt ] The decryption password (if not set\: not yet encrypted)\n[-encrypt ] The encryption password (if not set\: do not encrypt)\n[-quiet] Do not print progress information +org.h2.tools.Console=Starts the H2 Console (web-) server, as well as the TCP and PG server. +org.h2.tools.Console.main=When running without options, -tcp, -web, -browser and -pg are started.\nOptions are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-url] Start a browser and connect to this URL\n[-driver] Used together with -url\: the driver\n[-user] Used together with -url\: the user name\n[-password] Used together with -url\: the password\n[-web] Start the web server with the H2 Console\n[-tool] Start the icon or window that allows to start a browser\n[-browser] Start a browser connecting to the web server\n[-tcp] Start the TCP server\n[-pg] Start the PG server\nFor each Server, additional options are available;\n for details, see the Server tool.\nIf a service can not be started, the program\n terminates with an exit code of 1. +org.h2.tools.ConvertTraceFile=Converts a .trace.db file to a SQL script and Java source code.\nSQL statement statistics are listed as well. 
+org.h2.tools.ConvertTraceFile.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-traceFile ] The trace file name (default\: test.trace.db)\n[-script ] The script file name (default\: test.sql)\n[-javaClass ] The Java directory and class file name (default\: Test) +org.h2.tools.CreateCluster=Creates a cluster from a stand-alone database.\nCopies a database to another location if required. +org.h2.tools.CreateCluster.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-urlSource ""] The database URL of the source database (jdbc\:h2\:...)\n[-urlTarget ""] The database URL of the target database (jdbc\:h2\:...)\n[-user ] The user name (default\: sa)\n[-password ] The password\n[-serverList ] The comma separated list of host names or IP addresses +org.h2.tools.DeleteDbFiles=Deletes all files belonging to a database.\nThe database must be closed before calling this tool. +org.h2.tools.DeleteDbFiles.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-dir ] The directory (default\: .)\n[-db ] The database name\n[-quiet] Do not print progress information +org.h2.tools.Recover=Helps recovering a corrupted database. +org.h2.tools.Recover.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-dir ] The directory (default\: .)\n[-db ] The database name (all databases if not set)\n[-trace] Print additional trace information\n[-transactionLog] Print the transaction log\nEncrypted databases need to be decrypted first. +org.h2.tools.Restore=Restores a H2 database by extracting the database files from a .zip file. +org.h2.tools.Restore.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] 
Print the list of options\n[-file ] The source file name (default\: backup.zip)\n[-dir ] The target directory (default\: .)\n[-db ] The target database name (as stored if not set)\n[-quiet] Do not print progress information +org.h2.tools.RunScript=Runs a SQL script against a database. +org.h2.tools.RunScript.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-url ""] The database URL (jdbc\:...)\n[-user ] The user name (default\: sa)\n[-password ] The password\n[-script ] The script file to run (default\: backup.sql)\n[-driver ] The JDBC driver class to use (not required in most cases)\n[-showResults] Show the statements and the results of queries\n[-checkResults] Check if the query results match the expected results\n[-continueOnError] Continue even if the script contains errors\n[-options ...] RUNSCRIPT options (embedded H2; -*Results not supported) +org.h2.tools.Script=Creates a SQL script file by extracting the schema and data of a database. +org.h2.tools.Script.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-url ""] The database URL (jdbc\:...)\n[-user ] The user name (default\: sa)\n[-password ] The password\n[-script ] The target script file name (default\: backup.sql)\n[-options ...] A list of options (only for embedded H2, see SCRIPT)\n[-quiet] Do not print progress information +org.h2.tools.Server=Starts the H2 Console (web-) server, TCP, and PG server. +org.h2.tools.Server.main=When running without options, -tcp, -web, -browser and -pg are started.\nOptions are case sensitive. Supported options are\:\n[-help] or [-?] 
Print the list of options\n[-web] Start the web server with the H2 Console\n[-webAllowOthers] Allow other computers to connect - see below\n[-webDaemon] Use a daemon thread\n[-webPort ] The port (default\: 8082)\n[-webSSL] Use encrypted (HTTPS) connections\n[-browser] Start a browser connecting to the web server\n[-tcp] Start the TCP server\n[-tcpAllowOthers] Allow other computers to connect - see below\n[-tcpDaemon] Use a daemon thread\n[-tcpPort ] The port (default\: 9092)\n[-tcpSSL] Use encrypted (SSL) connections\n[-tcpPassword ] The password for shutting down a TCP server\n[-tcpShutdown ""] Stop the TCP server; example\: tcp\://localhost\n[-tcpShutdownForce] Do not wait until all connections are closed\n[-pg] Start the PG server\n[-pgAllowOthers] Allow other computers to connect - see below\n[-pgDaemon] Use a daemon thread\n[-pgPort ] The port (default\: 5435)\n[-properties ""] Server properties (default\: ~, disable\: null)\n[-baseDir ] The base directory for H2 databases (all servers)\n[-ifExists] Only existing databases may be opened (all servers)\n[-trace] Print additional trace information (all servers)\n[-key ] Allows to map a database name to another (all servers)\nThe options -xAllowOthers are potentially risky.\nFor details, see Advanced Topics / Protection against Remote Access. +org.h2.tools.Shell=Interactive command line tool to access a database using JDBC. +org.h2.tools.Shell.main=Options are case sensitive. Supported options are\:\n[-help] or [-?] Print the list of options\n[-url ""] The database URL (jdbc\:h2\:...)\n[-user ] The user name\n[-password ] The password\n[-driver ] The JDBC driver class to use (not required in most cases)\n[-sql ""] Execute the SQL statements and exit\n[-properties ""] Load the server properties from this directory\nIf special characters don't work as expected, you may need to use\n -Dfile.encoding\=UTF-8 (Mac OS X) or CP850 (Windows). 
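The LazyResult class introduced in the next hunk never materializes the full result set: hasNext() prefetches a single row via fetchNextRow(), and next() hands that prefetched row over. A minimal, self-contained sketch of that prefetch-and-consume pattern (illustrative class and method names, not the H2 API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/**
 * Minimal sketch of the single-row prefetch pattern used by LazyResult.
 * Class and method names are illustrative, not part of the H2 API.
 */
public class LazyRowsDemo {

    /** Simplified analogue of LazyResult: hasNext() prefetches, next() consumes. */
    abstract static class LazyRows {
        private Object[] nextRow;   // the one prefetched row, or null
        private boolean afterLast;
        private int rowId = -1;
        private int limit;          // <= 0 means unlimited

        void setLimit(int limit) { this.limit = limit; }

        /** Fetch the next row from the underlying source, or null when exhausted. */
        protected abstract Object[] fetchNextRow();

        boolean hasNext() {
            if (afterLast) {
                return false;
            }
            // Prefetch at most one row, honoring the row limit.
            if (nextRow == null && (limit <= 0 || rowId + 1 < limit)) {
                nextRow = fetchNextRow();
            }
            return nextRow != null;
        }

        Object[] next() {
            if (hasNext()) {
                rowId++;
                Object[] row = nextRow;
                nextRow = null;     // hand over the prefetched row
                return row;
            }
            afterLast = true;
            return null;
        }
    }

    /** Drains a three-row source through LazyRows, honoring the given limit. */
    static List<String> collect(int limit) {
        final Iterator<Object[]> source = Arrays.asList(
                new Object[] {1, "a"}, new Object[] {2, "b"}, new Object[] {3, "c"}).iterator();
        LazyRows rows = new LazyRows() {
            @Override protected Object[] fetchNextRow() {
                return source.hasNext() ? source.next() : null;
            }
        };
        rows.setLimit(limit);
        List<String> out = new ArrayList<>();
        while (rows.hasNext()) {
            out.add(Arrays.toString(rows.next()));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(collect(2));  // only the first two rows are ever fetched
    }
}
```

Note that the limit is enforced in hasNext() before fetching, so rows past the limit are never pulled from the underlying source — the same rowId + 1 < limit check LazyResult performs.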
diff --git a/modules/h2/src/main/java/org/h2/result/LazyResult.java b/modules/h2/src/main/java/org/h2/result/LazyResult.java new file mode 100644 index 0000000000000..74c1dd9ed480a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/LazyResult.java @@ -0,0 +1,195 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import org.h2.engine.SessionInterface; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.value.Value; + +/** + * Lazy execution support for queries. + * + * @author Sergi Vladykin + */ +public abstract class LazyResult implements ResultInterface { + + private final Expression[] expressions; + private int rowId = -1; + private Value[] currentRow; + private Value[] nextRow; + private boolean closed; + private boolean afterLast; + private int limit; + + public LazyResult(Expression[] expressions) { + this.expressions = expressions; + } + + public void setLimit(int limit) { + this.limit = limit; + } + + @Override + public boolean isLazy() { + return true; + } + + @Override + public void reset() { + if (closed) { + throw DbException.throwInternalError(); + } + rowId = -1; + afterLast = false; + currentRow = null; + nextRow = null; + } + + @Override + public Value[] currentRow() { + return currentRow; + } + + @Override + public boolean next() { + if (hasNext()) { + rowId++; + currentRow = nextRow; + nextRow = null; + return true; + } + if (!afterLast) { + rowId++; + currentRow = null; + afterLast = true; + } + return false; + } + + @Override + public boolean hasNext() { + if (closed || afterLast) { + return false; + } + if (nextRow == null && (limit <= 0 || rowId + 1 < limit)) { + nextRow = fetchNextRow(); + } + return nextRow != null; + } + + /** + * Fetch next row or null if none available. 
+ * + * @return next row or null + */ + protected abstract Value[] fetchNextRow(); + + @Override + public boolean isAfterLast() { + return afterLast; + } + + @Override + public int getRowId() { + return rowId; + } + + @Override + public int getRowCount() { + throw DbException.getUnsupportedException("Row count is unknown for lazy result."); + } + + @Override + public boolean needToClose() { + return true; + } + + @Override + public boolean isClosed() { + return closed; + } + + @Override + public void close() { + closed = true; + } + + @Override + public String getAlias(int i) { + return expressions[i].getAlias(); + } + + @Override + public String getSchemaName(int i) { + return expressions[i].getSchemaName(); + } + + @Override + public String getTableName(int i) { + return expressions[i].getTableName(); + } + + @Override + public String getColumnName(int i) { + return expressions[i].getColumnName(); + } + + @Override + public int getColumnType(int i) { + return expressions[i].getType(); + } + + @Override + public long getColumnPrecision(int i) { + return expressions[i].getPrecision(); + } + + @Override + public int getColumnScale(int i) { + return expressions[i].getScale(); + } + + @Override + public int getDisplaySize(int i) { + return expressions[i].getDisplaySize(); + } + + @Override + public boolean isAutoIncrement(int i) { + return expressions[i].isAutoIncrement(); + } + + @Override + public int getNullable(int i) { + return expressions[i].getNullable(); + } + + @Override + public void setFetchSize(int fetchSize) { + // ignore + } + + @Override + public int getFetchSize() { + // We always fetch rows one by one. + return 1; + } + + @Override + public ResultInterface createShallowCopy(SessionInterface targetSession) { + // Copying is impossible with lazy result. 
+ return null; + } + + @Override + public boolean containsDistinct(Value[] values) { + // We have to make sure that we do not allow lazy + // evaluation when this call is needed: + // WHERE x IN (SELECT ...). + throw DbException.throwInternalError(); + } +} diff --git a/modules/h2/src/main/java/org/h2/result/LocalResult.java b/modules/h2/src/main/java/org/h2/result/LocalResult.java new file mode 100644 index 0000000000000..059eb246a8270 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/LocalResult.java @@ -0,0 +1,548 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Arrays; + +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.SessionInterface; +import org.h2.expression.Expression; +import org.h2.message.DbException; +import org.h2.util.New; +import org.h2.util.ValueHashMap; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueArray; + +/** + * A local result set contains all row data of a result set. + * This is the object generated by engine, + * and it is also used directly by the ResultSet class in the embedded mode. + * If the result does not fit in memory, it is written to a temporary file. 
+ */ +public class LocalResult implements ResultInterface, ResultTarget { + + private int maxMemoryRows; + private Session session; + private int visibleColumnCount; + private Expression[] expressions; + private int rowId, rowCount; + private ArrayList rows; + private SortOrder sort; + private ValueHashMap distinctRows; + private Value[] currentRow; + private int offset; + private int limit = -1; + private ResultExternal external; + private int diskOffset; + private boolean distinct; + private boolean randomAccess; + private boolean closed; + private boolean containsLobs; + + /** + * Construct a local result object. + */ + public LocalResult() { + // nothing to do + } + + /** + * Construct a local result object. + * + * @param session the session + * @param expressions the expression array + * @param visibleColumnCount the number of visible columns + */ + public LocalResult(Session session, Expression[] expressions, + int visibleColumnCount) { + this.session = session; + if (session == null) { + this.maxMemoryRows = Integer.MAX_VALUE; + } else { + Database db = session.getDatabase(); + if (db.isPersistent() && !db.isReadOnly()) { + this.maxMemoryRows = session.getDatabase().getMaxMemoryRows(); + } else { + this.maxMemoryRows = Integer.MAX_VALUE; + } + } + rows = New.arrayList(); + this.visibleColumnCount = visibleColumnCount; + rowId = -1; + this.expressions = expressions; + } + + @Override + public boolean isLazy() { + return false; + } + + public void setMaxMemoryRows(int maxValue) { + this.maxMemoryRows = maxValue; + } + + /** + * Construct a local result set by reading all data from a regular result + * set. 
+ * + * @param session the session + * @param rs the result set + * @param maxrows the maximum number of rows to read (0 for no limit) + * @return the local result set + */ + public static LocalResult read(Session session, ResultSet rs, int maxrows) { + Expression[] cols = Expression.getExpressionColumns(session, rs); + int columnCount = cols.length; + LocalResult result = new LocalResult(session, cols, columnCount); + try { + for (int i = 0; (maxrows == 0 || i < maxrows) && rs.next(); i++) { + Value[] list = new Value[columnCount]; + for (int j = 0; j < columnCount; j++) { + int type = result.getColumnType(j); + list[j] = DataType.readValue(session, rs, j + 1, type); + } + result.addRow(list); + } + } catch (SQLException e) { + throw DbException.convert(e); + } + result.done(); + return result; + } + + /** + * Create a shallow copy of the result set. The data and a temporary table + * (if there is any) is not copied. + * + * @param targetSession the session of the copy + * @return the copy if possible, or null if copying is not possible + */ + @Override + public LocalResult createShallowCopy(SessionInterface targetSession) { + if (external == null && (rows == null || rows.size() < rowCount)) { + return null; + } + if (containsLobs) { + return null; + } + ResultExternal e2 = null; + if (external != null) { + e2 = external.createShallowCopy(); + if (e2 == null) { + return null; + } + } + LocalResult copy = new LocalResult(); + copy.maxMemoryRows = this.maxMemoryRows; + copy.session = (Session) targetSession; + copy.visibleColumnCount = this.visibleColumnCount; + copy.expressions = this.expressions; + copy.rowId = -1; + copy.rowCount = this.rowCount; + copy.rows = this.rows; + copy.sort = this.sort; + copy.distinctRows = this.distinctRows; + copy.distinct = distinct; + copy.randomAccess = randomAccess; + copy.currentRow = null; + copy.offset = 0; + copy.limit = -1; + copy.external = e2; + copy.diskOffset = this.diskOffset; + return copy; + } + + /** + * Set the sort 
order. + * + * @param sort the sort order + */ + public void setSortOrder(SortOrder sort) { + this.sort = sort; + } + + /** + * Remove duplicate rows. + */ + public void setDistinct() { + distinct = true; + distinctRows = ValueHashMap.newInstance(); + } + + /** + * Random access is required (containsDistinct). + */ + public void setRandomAccess() { + this.randomAccess = true; + } + + /** + * Remove the row from the result set if it exists. + * + * @param values the row + */ + public void removeDistinct(Value[] values) { + if (!distinct) { + DbException.throwInternalError(); + } + if (distinctRows != null) { + ValueArray array = ValueArray.get(values); + distinctRows.remove(array); + rowCount = distinctRows.size(); + } else { + rowCount = external.removeRow(values); + } + } + + /** + * Check if this result set contains the given row. + * + * @param values the row + * @return true if the row exists + */ + @Override + public boolean containsDistinct(Value[] values) { + if (external != null) { + return external.contains(values); + } + if (distinctRows == null) { + distinctRows = ValueHashMap.newInstance(); + for (Value[] row : rows) { + ValueArray array = getArrayOfVisible(row); + distinctRows.put(array, array.getList()); + } + } + ValueArray array = ValueArray.get(values); + return distinctRows.get(array) != null; + } + + @Override + public void reset() { + rowId = -1; + currentRow = null; + if (external != null) { + external.reset(); + if (diskOffset > 0) { + for (int i = 0; i < diskOffset; i++) { + external.next(); + } + } + } + } + + @Override + public Value[] currentRow() { + return currentRow; + } + + @Override + public boolean next() { + if (!closed && rowId < rowCount) { + rowId++; + if (rowId < rowCount) { + if (external != null) { + currentRow = external.next(); + } else { + currentRow = rows.get(rowId); + } + return true; + } + currentRow = null; + } + return false; + } + + @Override + public int getRowId() { + return rowId; + } + + @Override + public 
boolean isAfterLast() { + return rowId >= rowCount; + } + + private void cloneLobs(Value[] values) { + for (int i = 0; i < values.length; i++) { + Value v = values[i]; + Value v2 = v.copyToResult(); + if (v2 != v) { + containsLobs = true; + session.addTemporaryLob(v2); + values[i] = v2; + } + } + } + + private ValueArray getArrayOfVisible(Value[] values) { + if (values.length > visibleColumnCount) { + values = Arrays.copyOf(values, visibleColumnCount); + } + return ValueArray.get(values); + } + + /** + * Add a row to this object. + * + * @param values the row to add + */ + @Override + public void addRow(Value[] values) { + cloneLobs(values); + if (distinct) { + if (distinctRows != null) { + ValueArray array = getArrayOfVisible(values); + distinctRows.put(array, values); + rowCount = distinctRows.size(); + if (rowCount > maxMemoryRows) { + external = new ResultTempTable(session, expressions, true, sort); + rowCount = external.addRows(distinctRows.values()); + distinctRows = null; + } + } else { + rowCount = external.addRow(values); + } + return; + } + rows.add(values); + rowCount++; + if (rows.size() > maxMemoryRows) { + if (external == null) { + external = new ResultTempTable(session, expressions, false, sort); + } + addRowsToDisk(); + } + } + + private void addRowsToDisk() { + rowCount = external.addRows(rows); + rows.clear(); + } + + @Override + public int getVisibleColumnCount() { + return visibleColumnCount; + } + + /** + * This method is called after all rows have been added. 
+ */ + public void done() { + if (distinct) { + if (distinctRows != null) { + rows = distinctRows.values(); + } else { + if (external != null && sort != null) { + // external sort + ResultExternal temp = external; + external = null; + temp.reset(); + rows = New.arrayList(); + // TODO use offset directly if possible + while (true) { + Value[] list = temp.next(); + if (list == null) { + break; + } + if (external == null) { + external = new ResultTempTable(session, expressions, true, sort); + } + rows.add(list); + if (rows.size() > maxMemoryRows) { + rowCount = external.addRows(rows); + rows.clear(); + } + } + temp.close(); + // the remaining data in rows is written in the following + // lines + } + } + } + if (external != null) { + addRowsToDisk(); + external.done(); + } else { + if (sort != null) { + if (offset > 0 || limit > 0) { + sort.sort(rows, offset, limit < 0 ? rows.size() : limit); + } else { + sort.sort(rows); + } + } + } + applyOffset(); + applyLimit(); + reset(); + } + + @Override + public int getRowCount() { + return rowCount; + } + + @Override + public boolean hasNext() { + return !closed && rowId < rowCount - 1; + } + + /** + * Set the number of rows that this result will return at the maximum. 
+ * + * @param limit the limit (-1 means no limit, 0 means no rows) + */ + public void setLimit(int limit) { + this.limit = limit; + } + + private void applyLimit() { + if (limit < 0) { + return; + } + if (external == null) { + if (rows.size() > limit) { + rows = new ArrayList<>(rows.subList(0, limit)); + rowCount = limit; + distinctRows = null; + } + } else { + if (limit < rowCount) { + rowCount = limit; + distinctRows = null; + } + } + } + + @Override + public boolean needToClose() { + return external != null; + } + + @Override + public void close() { + if (external != null) { + external.close(); + external = null; + closed = true; + } + } + + @Override + public String getAlias(int i) { + return expressions[i].getAlias(); + } + + @Override + public String getTableName(int i) { + return expressions[i].getTableName(); + } + + @Override + public String getSchemaName(int i) { + return expressions[i].getSchemaName(); + } + + @Override + public int getDisplaySize(int i) { + return expressions[i].getDisplaySize(); + } + + @Override + public String getColumnName(int i) { + return expressions[i].getColumnName(); + } + + @Override + public int getColumnType(int i) { + return expressions[i].getType(); + } + + @Override + public long getColumnPrecision(int i) { + return expressions[i].getPrecision(); + } + + @Override + public int getNullable(int i) { + return expressions[i].getNullable(); + } + + @Override + public boolean isAutoIncrement(int i) { + return expressions[i].isAutoIncrement(); + } + + @Override + public int getColumnScale(int i) { + return expressions[i].getScale(); + } + + /** + * Set the offset of the first row to return. 
+ * + * @param offset the offset + */ + public void setOffset(int offset) { + this.offset = offset; + } + + private void applyOffset() { + if (offset <= 0) { + return; + } + if (external == null) { + if (offset >= rows.size()) { + rows.clear(); + rowCount = 0; + } else { + // avoid copying the whole array for each row + int remove = Math.min(offset, rows.size()); + rows = new ArrayList<>(rows.subList(remove, rows.size())); + rowCount -= remove; + } + } else { + if (offset >= rowCount) { + rowCount = 0; + } else { + diskOffset = offset; + rowCount -= offset; + } + } + distinctRows = null; + } + + @Override + public String toString() { + return super.toString() + " columns: " + visibleColumnCount + + " rows: " + rowCount + " pos: " + rowId; + } + + /** + * Check if this result set is closed. + * + * @return true if it is + */ + @Override + public boolean isClosed() { + return closed; + } + + @Override + public int getFetchSize() { + return 0; + } + + @Override + public void setFetchSize(int fetchSize) { + // ignore + } + +} diff --git a/modules/h2/src/main/java/org/h2/result/ResultColumn.java b/modules/h2/src/main/java/org/h2/result/ResultColumn.java new file mode 100644 index 0000000000000..51915bb63b38b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/ResultColumn.java @@ -0,0 +1,106 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import java.io.IOException; + +import org.h2.value.Transfer; + +/** + * A result set column of a remote result. + */ +public class ResultColumn { + + /** + * The column alias. + */ + final String alias; + + /** + * The schema name or null. + */ + final String schemaName; + + /** + * The table name or null. + */ + final String tableName; + + /** + * The column name or null. + */ + final String columnName; + + /** + * The value type of this column. 
+ */ + final int columnType; + + /** + * The precision. + */ + final long precision; + + /** + * The scale. + */ + final int scale; + + /** + * The expected display size. + */ + final int displaySize; + + /** + * True if this is an autoincrement column. + */ + final boolean autoIncrement; + + /** + * True if this column is nullable. + */ + final int nullable; + + /** + * Read an object from the given transfer object. + * + * @param in the object from where to read the data + */ + ResultColumn(Transfer in) throws IOException { + alias = in.readString(); + schemaName = in.readString(); + tableName = in.readString(); + columnName = in.readString(); + columnType = in.readInt(); + precision = in.readLong(); + scale = in.readInt(); + displaySize = in.readInt(); + autoIncrement = in.readBoolean(); + nullable = in.readInt(); + } + + /** + * Write a result column to the given output. + * + * @param out the object to where to write the data + * @param result the result + * @param i the column index + */ + public static void writeColumn(Transfer out, ResultInterface result, int i) + throws IOException { + out.writeString(result.getAlias(i)); + out.writeString(result.getSchemaName(i)); + out.writeString(result.getTableName(i)); + out.writeString(result.getColumnName(i)); + out.writeInt(result.getColumnType(i)); + out.writeLong(result.getColumnPrecision(i)); + out.writeInt(result.getColumnScale(i)); + out.writeInt(result.getDisplaySize(i)); + out.writeBoolean(result.isAutoIncrement(i)); + out.writeInt(result.getNullable(i)); + } + +} diff --git a/modules/h2/src/main/java/org/h2/result/ResultExternal.java b/modules/h2/src/main/java/org/h2/result/ResultExternal.java new file mode 100644 index 0000000000000..f825cbc6abeb9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/ResultExternal.java @@ -0,0 +1,79 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.result; + +import java.util.ArrayList; +import org.h2.value.Value; + +/** + * This interface is used to extend the LocalResult class, if data does not fit + * in memory. + */ +public interface ResultExternal { + + /** + * Reset the current position of this object. + */ + void reset(); + + /** + * Get the next row from the result. + * + * @return the next row or null + */ + Value[] next(); + + /** + * Add a row to this object. + * + * @param values the row to add + * @return the new number of rows in this object + */ + int addRow(Value[] values); + + /** + * Add a number of rows to the result. + * + * @param rows the list of rows to add + * @return the new number of rows in this object + */ + int addRows(ArrayList rows); + + /** + * This method is called after all rows have been added. + */ + void done(); + + /** + * Close this object and delete the temporary file. + */ + void close(); + + /** + * Remove the row with the given values from this object if such a row + * exists. + * + * @param values the row + * @return the new row count + */ + int removeRow(Value[] values); + + /** + * Check if the given row exists in this object. + * + * @param values the row + * @return true if it exists + */ + boolean contains(Value[] values); + + /** + * Create a shallow copy of this object if possible. + * + * @return the shallow copy, or null + */ + ResultExternal createShallowCopy(); + +} diff --git a/modules/h2/src/main/java/org/h2/result/ResultInterface.java b/modules/h2/src/main/java/org/h2/result/ResultInterface.java new file mode 100644 index 0000000000000..4d5ee9316f22b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/ResultInterface.java @@ -0,0 +1,212 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.result; + +import org.h2.engine.SessionInterface; +import org.h2.value.Value; + +/** + * The result interface is used by the LocalResult and ResultRemote class. + * A result may contain rows, or just an update count. + */ +public interface ResultInterface extends AutoCloseable { + + /** + * Go to the beginning of the result, that means + * before the first row. + */ + void reset(); + + /** + * Get the current row. + * + * @return the row + */ + Value[] currentRow(); + + /** + * Go to the next row. + * + * @return true if a row exists + */ + boolean next(); + + /** + * Get the current row id, starting with 0. + * -1 is returned when next() was not called yet. + * + * @return the row id + */ + int getRowId(); + + /** + * Check if the current position is after last row. + * + * @return true if after last + */ + boolean isAfterLast(); + + /** + * Get the number of visible columns. + * More columns may exist internally for sorting or grouping. + * + * @return the number of columns + */ + int getVisibleColumnCount(); + + /** + * Get the number of rows in this object. + * + * @return the number of rows + */ + int getRowCount(); + + /** + * Check if this result has more rows to fetch. + * + * @return true if it has + */ + boolean hasNext(); + + /** + * Check if this result set should be closed, for example because it is + * buffered using a temporary file. + * + * @return true if close should be called. + */ + boolean needToClose(); + + /** + * Close the result and delete any temporary files + */ + @Override + void close(); + + /** + * Get the column alias name for the column. + * + * @param i the column number (starting with 0) + * @return the alias name + */ + String getAlias(int i); + + /** + * Get the schema name for the column, if one exists. 
+     *
+     * @param i the column number (starting with 0)
+     * @return the schema name or null
+     */
+    String getSchemaName(int i);
+
+    /**
+     * Get the table name for the column, if one exists.
+     *
+     * @param i the column number (starting with 0)
+     * @return the table name or null
+     */
+    String getTableName(int i);
+
+    /**
+     * Get the column name.
+     *
+     * @param i the column number (starting with 0)
+     * @return the column name
+     */
+    String getColumnName(int i);
+
+    /**
+     * Get the column data type.
+     *
+     * @param i the column number (starting with 0)
+     * @return the column data type
+     */
+    int getColumnType(int i);
+
+    /**
+     * Get the precision for this column.
+     *
+     * @param i the column number (starting with 0)
+     * @return the precision
+     */
+    long getColumnPrecision(int i);
+
+    /**
+     * Get the scale for this column.
+     *
+     * @param i the column number (starting with 0)
+     * @return the scale
+     */
+    int getColumnScale(int i);
+
+    /**
+     * Get the display size for this column.
+     *
+     * @param i the column number (starting with 0)
+     * @return the display size
+     */
+    int getDisplaySize(int i);
+
+    /**
+     * Check if this is an auto-increment column.
+     *
+     * @param i the column number (starting with 0)
+     * @return true for auto-increment columns
+     */
+    boolean isAutoIncrement(int i);
+
+    /**
+     * Check if this column is nullable.
+     *
+     * @param i the column number (starting with 0)
+     * @return Column.NULLABLE_*
+     */
+    int getNullable(int i);
+
+    /**
+     * Set the fetch size for this result set.
+     *
+     * @param fetchSize the new fetch size
+     */
+    void setFetchSize(int fetchSize);
+
+    /**
+     * Get the current fetch size for this result set.
+     *
+     * @return the fetch size
+     */
+    int getFetchSize();
+
+    /**
+     * Check if this is a lazy execution result.
+     *
+     * @return true if it is a lazy result
+     */
+    boolean isLazy();
+
+    /**
+     * Check if this result set is closed.
+     *
+     * @return true if it is
+     */
+    boolean isClosed();
+
+    /**
+     * Create a shallow copy of the result set. The data and a temporary table
+     * (if there is any) are not copied.
+     *
+     * @param targetSession the session of the copy
+     * @return the copy if possible, or null if copying is not possible
+     */
+    ResultInterface createShallowCopy(SessionInterface targetSession);
+
+    /**
+     * Check if this result set contains the given row.
+     *
+     * @param values the row
+     * @return true if the row exists
+     */
+    boolean containsDistinct(Value[] values);
+}
diff --git a/modules/h2/src/main/java/org/h2/result/ResultRemote.java b/modules/h2/src/main/java/org/h2/result/ResultRemote.java
new file mode 100644
index 0000000000000..eafe9802f7fae
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/result/ResultRemote.java
@@ -0,0 +1,289 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.result;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import org.h2.engine.SessionInterface;
+import org.h2.engine.SessionRemote;
+import org.h2.engine.SysProperties;
+import org.h2.message.DbException;
+import org.h2.message.Trace;
+import org.h2.util.New;
+import org.h2.value.Transfer;
+import org.h2.value.Value;
+
+/**
+ * The client side part of a result set that is kept on the server.
+ * In many cases, the complete data is kept on the client side,
+ * but for large results only a subset is in-memory.
+ */ +public class ResultRemote implements ResultInterface { + + private int fetchSize; + private SessionRemote session; + private Transfer transfer; + private int id; + private final ResultColumn[] columns; + private Value[] currentRow; + private final int rowCount; + private int rowId, rowOffset; + private ArrayList result; + private final Trace trace; + + public ResultRemote(SessionRemote session, Transfer transfer, int id, + int columnCount, int fetchSize) throws IOException { + this.session = session; + trace = session.getTrace(); + this.transfer = transfer; + this.id = id; + this.columns = new ResultColumn[columnCount]; + rowCount = transfer.readInt(); + for (int i = 0; i < columnCount; i++) { + columns[i] = new ResultColumn(transfer); + } + rowId = -1; + result = New.arrayList(); + this.fetchSize = fetchSize; + fetchRows(false); + } + + @Override + public boolean isLazy() { + return false; + } + + @Override + public String getAlias(int i) { + return columns[i].alias; + } + + @Override + public String getSchemaName(int i) { + return columns[i].schemaName; + } + + @Override + public String getTableName(int i) { + return columns[i].tableName; + } + + @Override + public String getColumnName(int i) { + return columns[i].columnName; + } + + @Override + public int getColumnType(int i) { + return columns[i].columnType; + } + + @Override + public long getColumnPrecision(int i) { + return columns[i].precision; + } + + @Override + public int getColumnScale(int i) { + return columns[i].scale; + } + + @Override + public int getDisplaySize(int i) { + return columns[i].displaySize; + } + + @Override + public boolean isAutoIncrement(int i) { + return columns[i].autoIncrement; + } + + @Override + public int getNullable(int i) { + return columns[i].nullable; + } + + @Override + public void reset() { + rowId = -1; + currentRow = null; + if (session == null) { + return; + } + synchronized (session) { + session.checkClosed(); + try { + session.traceOperation("RESULT_RESET", id); 
+ transfer.writeInt(SessionRemote.RESULT_RESET).writeInt(id).flush(); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + } + + @Override + public Value[] currentRow() { + return currentRow; + } + + @Override + public boolean next() { + if (rowId < rowCount) { + rowId++; + remapIfOld(); + if (rowId < rowCount) { + if (rowId - rowOffset >= result.size()) { + fetchRows(true); + } + currentRow = result.get(rowId - rowOffset); + return true; + } + currentRow = null; + } + return false; + } + + @Override + public int getRowId() { + return rowId; + } + + @Override + public boolean isAfterLast() { + return rowId >= rowCount; + } + + @Override + public int getVisibleColumnCount() { + return columns.length; + } + + @Override + public int getRowCount() { + return rowCount; + } + + @Override + public boolean hasNext() { + return rowId < rowCount - 1; + } + + private void sendClose() { + if (session == null) { + return; + } + // TODO result sets: no reset possible for larger remote result sets + try { + synchronized (session) { + session.traceOperation("RESULT_CLOSE", id); + transfer.writeInt(SessionRemote.RESULT_CLOSE).writeInt(id); + } + } catch (IOException e) { + trace.error(e, "close"); + } finally { + transfer = null; + session = null; + } + } + + @Override + public void close() { + result = null; + sendClose(); + } + + private void remapIfOld() { + if (session == null) { + return; + } + try { + if (id <= session.getCurrentId() - SysProperties.SERVER_CACHED_OBJECTS / 2) { + // object is too old - we need to map it to a new id + int newId = session.getNextId(); + session.traceOperation("CHANGE_ID", id); + transfer.writeInt(SessionRemote.CHANGE_ID).writeInt(id).writeInt(newId); + id = newId; + // TODO remote result set: very old result sets may be + // already removed on the server (theoretically) - how to + // solve this? 
+ } + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + private void fetchRows(boolean sendFetch) { + synchronized (session) { + session.checkClosed(); + try { + rowOffset += result.size(); + result.clear(); + int fetch = Math.min(fetchSize, rowCount - rowOffset); + if (sendFetch) { + session.traceOperation("RESULT_FETCH_ROWS", id); + transfer.writeInt(SessionRemote.RESULT_FETCH_ROWS). + writeInt(id).writeInt(fetch); + session.done(transfer); + } + for (int r = 0; r < fetch; r++) { + boolean row = transfer.readBoolean(); + if (!row) { + break; + } + int len = columns.length; + Value[] values = new Value[len]; + for (int i = 0; i < len; i++) { + Value v = transfer.readValue(); + values[i] = v; + } + result.add(values); + } + if (rowOffset + result.size() >= rowCount) { + sendClose(); + } + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + } + + @Override + public String toString() { + return "columns: " + columns.length + " rows: " + rowCount + " pos: " + rowId; + } + + @Override + public int getFetchSize() { + return fetchSize; + } + + @Override + public void setFetchSize(int fetchSize) { + this.fetchSize = fetchSize; + } + + @Override + public boolean needToClose() { + return true; + } + + @Override + public ResultInterface createShallowCopy(SessionInterface targetSession) { + // The operation is not supported on remote result. + return null; + } + + @Override + public boolean isClosed() { + return result == null; + } + + @Override + public boolean containsDistinct(Value[] values) { + // We should never do this on remote result. + throw DbException.throwInternalError(); + } +} diff --git a/modules/h2/src/main/java/org/h2/result/ResultTarget.java b/modules/h2/src/main/java/org/h2/result/ResultTarget.java new file mode 100644 index 0000000000000..951cf3b6bca20 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/ResultTarget.java @@ -0,0 +1,29 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.result;
+
+import org.h2.value.Value;
+
+/**
+ * An object to which rows are written.
+ */
+public interface ResultTarget {
+
+    /**
+     * Add the row to the result set.
+     *
+     * @param values the values
+     */
+    void addRow(Value[] values);
+
+    /**
+     * Get the number of rows.
+     *
+     * @return the number of rows
+     */
+    int getRowCount();
+
+}
diff --git a/modules/h2/src/main/java/org/h2/result/ResultTempTable.java b/modules/h2/src/main/java/org/h2/result/ResultTempTable.java
new file mode 100644
index 0000000000000..fdf633b92e619
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/result/ResultTempTable.java
@@ -0,0 +1,323 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.result;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import org.h2.command.ddl.CreateTableData;
+import org.h2.engine.Constants;
+import org.h2.engine.Database;
+import org.h2.engine.Session;
+import org.h2.expression.Expression;
+import org.h2.index.Cursor;
+import org.h2.index.Index;
+import org.h2.index.IndexType;
+import org.h2.schema.Schema;
+import org.h2.table.Column;
+import org.h2.table.IndexColumn;
+import org.h2.table.Table;
+import org.h2.value.Value;
+import org.h2.value.ValueNull;
+
+/**
+ * This class implements the temp table buffer for the LocalResult class.
+ */ +public class ResultTempTable implements ResultExternal { + + private static final String COLUMN_NAME = "DATA"; + private final boolean distinct; + private final SortOrder sort; + private Index index; + private final Session session; + private Table table; + private Cursor resultCursor; + private int rowCount; + private final int columnCount; + + private final ResultTempTable parent; + private boolean closed; + private int childCount; + private final boolean containsLob; + + ResultTempTable(Session session, Expression[] expressions, boolean distinct, SortOrder sort) { + this.session = session; + this.distinct = distinct; + this.sort = sort; + this.columnCount = expressions.length; + Schema schema = session.getDatabase().getSchema(Constants.SCHEMA_MAIN); + CreateTableData data = new CreateTableData(); + boolean b = false; + for (int i = 0; i < expressions.length; i++) { + int type = expressions[i].getType(); + Column col = new Column(COLUMN_NAME + i, + type); + if (type == Value.CLOB || type == Value.BLOB) { + b = true; + } + data.columns.add(col); + } + containsLob = b; + data.id = session.getDatabase().allocateObjectId(); + data.tableName = "TEMP_RESULT_SET_" + data.id; + data.temporary = true; + data.persistIndexes = false; + data.persistData = true; + data.create = true; + data.session = session; + table = schema.createTable(data); + if (sort != null || distinct) { + createIndex(); + } + parent = null; + } + + private ResultTempTable(ResultTempTable parent) { + this.parent = parent; + this.columnCount = parent.columnCount; + this.distinct = parent.distinct; + this.session = parent.session; + this.table = parent.table; + this.index = parent.index; + this.rowCount = parent.rowCount; + this.sort = parent.sort; + this.containsLob = parent.containsLob; + reset(); + } + + private void createIndex() { + IndexColumn[] indexCols = null; + // If we need to do distinct, the distinct columns may not match the + // sort columns. So we need to disregard the sort. 
Not ideal. + if (sort != null && !distinct) { + int[] colIndex = sort.getQueryColumnIndexes(); + indexCols = new IndexColumn[colIndex.length]; + for (int i = 0; i < colIndex.length; i++) { + IndexColumn indexColumn = new IndexColumn(); + indexColumn.column = table.getColumn(colIndex[i]); + indexColumn.sortType = sort.getSortTypes()[i]; + indexColumn.columnName = COLUMN_NAME + i; + indexCols[i] = indexColumn; + } + } else { + indexCols = new IndexColumn[columnCount]; + for (int i = 0; i < columnCount; i++) { + IndexColumn indexColumn = new IndexColumn(); + indexColumn.column = table.getColumn(i); + indexColumn.columnName = COLUMN_NAME + i; + indexCols[i] = indexColumn; + } + } + String indexName = table.getSchema().getUniqueIndexName(session, + table, Constants.PREFIX_INDEX); + int indexId = session.getDatabase().allocateObjectId(); + IndexType indexType = IndexType.createNonUnique(true); + index = table.addIndex(session, indexName, indexId, indexCols, + indexType, true, null); + } + + @Override + public synchronized ResultExternal createShallowCopy() { + if (parent != null) { + return parent.createShallowCopy(); + } + if (closed) { + return null; + } + childCount++; + return new ResultTempTable(this); + } + + @Override + public int removeRow(Value[] values) { + Row row = convertToRow(values); + Cursor cursor = find(row); + if (cursor != null) { + row = cursor.get(); + table.removeRow(session, row); + rowCount--; + } + return rowCount; + } + + @Override + public boolean contains(Value[] values) { + return find(convertToRow(values)) != null; + } + + @Override + public int addRow(Value[] values) { + Row row = convertToRow(values); + if (distinct) { + Cursor cursor = find(row); + if (cursor == null) { + table.addRow(session, row); + rowCount++; + } + } else { + table.addRow(session, row); + rowCount++; + } + return rowCount; + } + + @Override + public int addRows(ArrayList rows) { + // speeds up inserting, but not really needed: + if (sort != null) { + sort.sort(rows); 
+ } + for (Value[] values : rows) { + addRow(values); + } + return rowCount; + } + + private synchronized void closeChild() { + if (--childCount == 0 && closed) { + dropTable(); + } + } + + @Override + public synchronized void close() { + if (closed) { + return; + } + closed = true; + if (parent != null) { + parent.closeChild(); + } else { + if (childCount == 0) { + dropTable(); + } + } + } + + private void dropTable() { + if (table == null) { + return; + } + if (containsLob) { + // contains BLOB or CLOB: can not truncate now, + // otherwise the BLOB and CLOB entries are removed + return; + } + try { + Database database = session.getDatabase(); + // Need to lock because not all of the code-paths + // that reach here have already taken this lock, + // notably via the close() paths. + synchronized (session) { + synchronized (database) { + table.truncate(session); + } + } + // This session may not lock the sys table (except if it already has + // locked it) because it must be committed immediately, otherwise + // other threads can not access the sys table. If the table is not + // removed now, it will be when the database is opened the next + // time. 
(the table is truncated, so this is just one record) + if (!database.isSysTableLocked()) { + Session sysSession = database.getSystemSession(); + table.removeChildrenAndResources(sysSession); + if (index != null) { + // need to explicitly do this, + // as it's not registered in the system session + session.removeLocalTempTableIndex(index); + } + // the transaction must be committed immediately + // TODO this synchronization cascade is very ugly + synchronized (session) { + synchronized (sysSession) { + synchronized (database) { + sysSession.commit(false); + } + } + } + } + } finally { + table = null; + } + } + + @Override + public void done() { + // nothing to do + } + + @Override + public Value[] next() { + if (resultCursor == null) { + Index idx; + if (distinct || sort != null) { + idx = index; + } else { + idx = table.getScanIndex(session); + } + if (session.getDatabase().getMvStore() != null) { + // sometimes the transaction is already committed, + // in which case we can't use the session + if (idx.getRowCount(session) == 0 && rowCount > 0) { + // this means querying is not transactional + resultCursor = idx.find((Session) null, null, null); + } else { + // the transaction is still open + resultCursor = idx.find(session, null, null); + } + } else { + resultCursor = idx.find(session, null, null); + } + } + if (!resultCursor.next()) { + return null; + } + Row row = resultCursor.get(); + return row.getValueList(); + } + + @Override + public void reset() { + resultCursor = null; + } + + private Row convertToRow(Value[] values) { + if (values.length < columnCount) { + Value[] v2 = Arrays.copyOf(values, columnCount); + for (int i = values.length; i < columnCount; i++) { + v2[i] = ValueNull.INSTANCE; + } + values = v2; + } + return session.createRow(values, Row.MEMORY_CALCULATE); + } + + private Cursor find(Row row) { + if (index == null) { + // for the case "in(select ...)", the query might + // use an optimization and not create the index + // up front + 
createIndex(); + } + Cursor cursor = index.find(session, row, row); + while (cursor.next()) { + SearchRow found = cursor.getSearchRow(); + boolean ok = true; + Database db = session.getDatabase(); + for (int i = 0; i < row.getColumnCount(); i++) { + if (!db.areEqual(row.getValue(i), found.getValue(i))) { + ok = false; + break; + } + } + if (ok) { + return cursor; + } + } + return null; + } + +} + diff --git a/modules/h2/src/main/java/org/h2/result/ResultWithGeneratedKeys.java b/modules/h2/src/main/java/org/h2/result/ResultWithGeneratedKeys.java new file mode 100644 index 0000000000000..9db650df5ba0b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/ResultWithGeneratedKeys.java @@ -0,0 +1,72 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +/** + * Result of update command with optional generated keys. + */ +public class ResultWithGeneratedKeys { + /** + * Result of update command with generated keys; + */ + public static final class WithKeys extends ResultWithGeneratedKeys { + private final ResultInterface generatedKeys; + + /** + * Creates a result with update count and generated keys. + * + * @param updateCount + * update count + * @param generatedKeys + * generated keys + */ + public WithKeys(int updateCount, ResultInterface generatedKeys) { + super(updateCount); + this.generatedKeys = generatedKeys; + } + + @Override + public ResultInterface getGeneratedKeys() { + return generatedKeys; + } + } + + /** + * Returns a result with only update count. + * + * @param updateCount + * update count + * @return the result. 
+ */ + public static ResultWithGeneratedKeys of(int updateCount) { + return new ResultWithGeneratedKeys(updateCount); + } + + private final int updateCount; + + ResultWithGeneratedKeys(int updateCount) { + this.updateCount = updateCount; + } + + /** + * Returns generated keys, or {@code null}. + * + * @return generated keys, or {@code null} + */ + public ResultInterface getGeneratedKeys() { + return null; + } + + /** + * Returns update count. + * + * @return update count + */ + public int getUpdateCount() { + return updateCount; + } + +} diff --git a/modules/h2/src/main/java/org/h2/result/Row.java b/modules/h2/src/main/java/org/h2/result/Row.java new file mode 100644 index 0000000000000..7c1730c68e0ec --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/Row.java @@ -0,0 +1,88 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import org.h2.store.Data; +import org.h2.value.Value; + +/** + * Represents a row in a table. + */ +public interface Row extends SearchRow { + + int MEMORY_CALCULATE = -1; + Row[] EMPTY_ARRAY = {}; + + /** + * Get a copy of the row that is distinct from (not equal to) this row. + * This is used for FOR UPDATE to allow pseudo-updating a row. + * + * @return a new row with the same data + */ + Row getCopy(); + + /** + * Set version. + * + * @param version row version + */ + void setVersion(int version); + + /** + * Get the number of bytes required for the data. + * + * @param dummy the template buffer + * @return the number of bytes + */ + int getByteCount(Data dummy); + + /** + * Check if this is an empty row. + * + * @return {@code true} if the row is empty + */ + boolean isEmpty(); + + /** + * Mark the row as deleted. + * + * @param deleted deleted flag + */ + void setDeleted(boolean deleted); + + /** + * Set session id. 
+ * + * @param sessionId the session id + */ + void setSessionId(int sessionId); + + /** + * Get session id. + * + * @return the session id + */ + int getSessionId(); + + /** + * This record has been committed. The session id is reset. + */ + void commit(); + + /** + * Check if the row is deleted. + * + * @return {@code true} if the row is deleted + */ + boolean isDeleted(); + + /** + * Get values. + * + * @return values + */ + Value[] getValueList(); +} diff --git a/modules/h2/src/main/java/org/h2/result/RowFactory.java b/modules/h2/src/main/java/org/h2/result/RowFactory.java new file mode 100644 index 0000000000000..d3f3d9e06d761 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/RowFactory.java @@ -0,0 +1,39 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import org.h2.value.Value; + +/** + * Creates rows. + * + * @author Sergi Vladykin + */ +public abstract class RowFactory { + /** + * Default implementation of row factory. + */ + public static final RowFactory DEFAULT = new DefaultRowFactory(); + + /** + * Create new row. + * + * @param data the values + * @param memory whether the row is in memory + * @return the created row + */ + public abstract Row createRow(Value[] data, int memory); + + /** + * Default implementation of row factory. + */ + static final class DefaultRowFactory extends RowFactory { + @Override + public Row createRow(Value[] data, int memory) { + return new RowImpl(data, memory); + } + } +} diff --git a/modules/h2/src/main/java/org/h2/result/RowImpl.java b/modules/h2/src/main/java/org/h2/result/RowImpl.java new file mode 100644 index 0000000000000..177a99ba55145 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/RowImpl.java @@ -0,0 +1,184 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import java.util.Arrays; + +import org.h2.engine.Constants; +import org.h2.store.Data; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; +import org.h2.value.ValueLong; + +/** + * Default row implementation. + */ +public class RowImpl implements Row { + private long key; + private final Value[] data; + private int memory; + private int version; + private boolean deleted; + private int sessionId; + + public RowImpl(Value[] data, int memory) { + this.data = data; + this.memory = memory; + } + + /** + * Get a copy of the row that is distinct from (not equal to) this row. + * This is used for FOR UPDATE to allow pseudo-updating a row. + * + * @return a new row with the same data + */ + @Override + public Row getCopy() { + Value[] d2 = Arrays.copyOf(data, data.length); + RowImpl r2 = new RowImpl(d2, memory); + r2.key = key; + r2.version = version + 1; + r2.sessionId = sessionId; + return r2; + } + + @Override + public void setKeyAndVersion(SearchRow row) { + setKey(row.getKey()); + setVersion(row.getVersion()); + } + + @Override + public int getVersion() { + return version; + } + + @Override + public void setVersion(int version) { + this.version = version; + } + + @Override + public long getKey() { + return key; + } + + @Override + public void setKey(long key) { + this.key = key; + } + + @Override + public Value getValue(int i) { + return i == -1 ? ValueLong.get(key) : data[i]; + } + + /** + * Get the number of bytes required for the data. 
+ * + * @param dummy the template buffer + * @return the number of bytes + */ + @Override + public int getByteCount(Data dummy) { + int size = 0; + for (Value v : data) { + size += dummy.getValueLen(v); + } + return size; + } + + @Override + public void setValue(int i, Value v) { + if (i == -1) { + this.key = v.getLong(); + } else { + data[i] = v; + } + } + + @Override + public boolean isEmpty() { + return data == null; + } + + @Override + public int getColumnCount() { + return data.length; + } + + @Override + public int getMemory() { + if (memory != MEMORY_CALCULATE) { + return memory; + } + int m = Constants.MEMORY_ROW; + if (data != null) { + int len = data.length; + m += Constants.MEMORY_OBJECT + len * Constants.MEMORY_POINTER; + for (Value v : data) { + if (v != null) { + m += v.getMemory(); + } + } + } + this.memory = m; + return m; + } + + @Override + public String toString() { + StatementBuilder buff = new StatementBuilder("( /* key:"); + buff.append(getKey()); + if (version != 0) { + buff.append(" v:").append(version); + } + if (isDeleted()) { + buff.append(" deleted"); + } + buff.append(" */ "); + if (data != null) { + for (Value v : data) { + buff.appendExceptFirst(", "); + buff.append(v == null ? "null" : v.getTraceSQL()); + } + } + return buff.append(')').toString(); + } + + @Override + public void setDeleted(boolean deleted) { + this.deleted = deleted; + } + + @Override + public void setSessionId(int sessionId) { + this.sessionId = sessionId; + } + + @Override + public int getSessionId() { + return sessionId; + } + + /** + * This record has been committed. The session id is reset. 
+ */ + @Override + public void commit() { + this.sessionId = 0; + } + + @Override + public boolean isDeleted() { + return deleted; + } + + @Override + public Value[] getValueList() { + return data; + } +} diff --git a/modules/h2/src/main/java/org/h2/result/RowList.java b/modules/h2/src/main/java/org/h2/result/RowList.java new file mode 100644 index 0000000000000..814a6b536871c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/RowList.java @@ -0,0 +1,266 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import java.util.ArrayList; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.store.Data; +import org.h2.store.FileStore; +import org.h2.util.New; +import org.h2.value.Value; + +/** + * A list of rows. If the list grows too large, it is buffered to disk + * automatically. + */ +public class RowList { + + private final Session session; + private final ArrayList list = New.arrayList(); + private int size; + private int index, listIndex; + private FileStore file; + private Data rowBuff; + private ArrayList lobs; + private final int maxMemory; + private int memory; + private boolean written; + private boolean readUncached; + + /** + * Construct a new row list for this session. + * + * @param session the session + */ + public RowList(Session session) { + this.session = session; + if (session.getDatabase().isPersistent()) { + maxMemory = session.getDatabase().getMaxOperationMemory(); + } else { + maxMemory = 0; + } + } + + private void writeRow(Data buff, Row r) { + buff.checkCapacity(1 + Data.LENGTH_INT * 8); + buff.writeByte((byte) 1); + buff.writeInt(r.getMemory()); + int columnCount = r.getColumnCount(); + buff.writeInt(columnCount); + buff.writeLong(r.getKey()); + buff.writeInt(r.getVersion()); + buff.writeInt(r.isDeleted() ? 
1 : 0); + buff.writeInt(r.getSessionId()); + for (int i = 0; i < columnCount; i++) { + Value v = r.getValue(i); + buff.checkCapacity(1); + if (v == null) { + buff.writeByte((byte) 0); + } else { + buff.writeByte((byte) 1); + if (v.getType() == Value.CLOB || v.getType() == Value.BLOB) { + // need to keep a reference to temporary lobs, + // otherwise the temp file is deleted + if (v.getSmall() == null && v.getTableId() == 0) { + if (lobs == null) { + lobs = New.arrayList(); + } + // need to create a copy, otherwise, + // if stored multiple times, it may be renamed + // and then not found + v = v.copyToTemp(); + lobs.add(v); + } + } + buff.checkCapacity(buff.getValueLen(v)); + buff.writeValue(v); + } + } + } + + private void writeAllRows() { + if (file == null) { + Database db = session.getDatabase(); + String fileName = db.createTempFile(); + file = db.openFile(fileName, "rw", false); + file.setCheckedWriting(false); + file.seek(FileStore.HEADER_LENGTH); + rowBuff = Data.create(db, Constants.DEFAULT_PAGE_SIZE); + file.seek(FileStore.HEADER_LENGTH); + } + Data buff = rowBuff; + initBuffer(buff); + for (int i = 0, size = list.size(); i < size; i++) { + if (i > 0 && buff.length() > Constants.IO_BUFFER_SIZE) { + flushBuffer(buff); + initBuffer(buff); + } + Row r = list.get(i); + writeRow(buff, r); + } + flushBuffer(buff); + file.autoDelete(); + list.clear(); + memory = 0; + } + + private static void initBuffer(Data buff) { + buff.reset(); + buff.writeInt(0); + } + + private void flushBuffer(Data buff) { + buff.checkCapacity(1); + buff.writeByte((byte) 0); + buff.fillAligned(); + buff.setInt(0, buff.length() / Constants.FILE_BLOCK_SIZE); + file.write(buff.getBytes(), 0, buff.length()); + } + + /** + * Add a row to the list. 
+ * + * @param r the row to add + */ + public void add(Row r) { + list.add(r); + memory += r.getMemory() + Constants.MEMORY_POINTER; + if (maxMemory > 0 && memory > maxMemory) { + writeAllRows(); + } + size++; + } + + /** + * Remove all rows from the list. + */ + public void reset() { + index = 0; + if (file != null) { + listIndex = 0; + if (!written) { + writeAllRows(); + written = true; + } + list.clear(); + file.seek(FileStore.HEADER_LENGTH); + } + } + + /** + * Check if there are more rows in this list. + * + * @return true it there are more rows + */ + public boolean hasNext() { + return index < size; + } + + private Row readRow(Data buff) { + if (buff.readByte() == 0) { + return null; + } + int mem = buff.readInt(); + int columnCount = buff.readInt(); + long key = buff.readLong(); + int version = buff.readInt(); + if (readUncached) { + key = 0; + } + boolean deleted = buff.readInt() == 1; + int sessionId = buff.readInt(); + Value[] values = new Value[columnCount]; + for (int i = 0; i < columnCount; i++) { + Value v; + if (buff.readByte() == 0) { + v = null; + } else { + v = buff.readValue(); + if (v.isLinkedToTable()) { + // the table id is 0 if it was linked when writing + // a temporary entry + if (v.getTableId() == 0) { + session.removeAtCommit(v); + } + } + } + values[i] = v; + } + Row row = session.createRow(values, mem); + row.setKey(key); + row.setVersion(version); + row.setDeleted(deleted); + row.setSessionId(sessionId); + return row; + } + + /** + * Get the next row from the list. 
+ * + * @return the next row + */ + public Row next() { + Row r; + if (file == null) { + r = list.get(index++); + } else { + if (listIndex >= list.size()) { + list.clear(); + listIndex = 0; + Data buff = rowBuff; + buff.reset(); + int min = Constants.FILE_BLOCK_SIZE; + file.readFully(buff.getBytes(), 0, min); + int len = buff.readInt() * Constants.FILE_BLOCK_SIZE; + buff.checkCapacity(len); + if (len - min > 0) { + file.readFully(buff.getBytes(), min, len - min); + } + while (true) { + r = readRow(buff); + if (r == null) { + break; + } + list.add(r); + } + } + index++; + r = list.get(listIndex++); + } + return r; + } + + /** + * Get the number of rows in this list. + * + * @return the number of rows + */ + public int size() { + return size; + } + + /** + * Do not use the cache. + */ + public void invalidateCache() { + readUncached = true; + } + + /** + * Close the result list and delete the temporary file. + */ + public void close() { + if (file != null) { + file.autoDelete(); + file.closeAndDeleteSilently(); + file = null; + rowBuff = null; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/result/SearchRow.java b/modules/h2/src/main/java/org/h2/result/SearchRow.java new file mode 100644 index 0000000000000..baf7c28c80170 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/SearchRow.java @@ -0,0 +1,79 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import org.h2.value.Value; + +/** + * The interface for rows stored in a table, and for partial rows stored in the + * index. + */ +public interface SearchRow { + + /** + * An empty array of SearchRow objects. + */ + SearchRow[] EMPTY_ARRAY = {}; + + /** + * Get the column count. 
+ * + * @return the column count + */ + int getColumnCount(); + + /** + * Get the value for the column + * + * @param index the column number (starting with 0) + * @return the value + */ + Value getValue(int index); + + /** + * Set the value for given column + * + * @param index the column number (starting with 0) + * @param v the new value + */ + void setValue(int index, Value v); + + /** + * Set the position and version to match another row. + * + * @param old the other row. + */ + void setKeyAndVersion(SearchRow old); + + /** + * Get the version of the row. + * + * @return the version + */ + int getVersion(); + + /** + * Set the unique key of the row. + * + * @param key the key + */ + void setKey(long key); + + /** + * Get the unique key of the row. + * + * @return the key + */ + long getKey(); + + /** + * Get the estimated memory used for this row, in bytes. + * + * @return the memory + */ + int getMemory(); + +} diff --git a/modules/h2/src/main/java/org/h2/result/SimpleRow.java b/modules/h2/src/main/java/org/h2/result/SimpleRow.java new file mode 100644 index 0000000000000..77b7384855ebe --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/SimpleRow.java @@ -0,0 +1,91 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import org.h2.engine.Constants; +import org.h2.util.StatementBuilder; +import org.h2.value.Value; + +/** + * Represents a simple row without state. 
+ */ +public class SimpleRow implements SearchRow { + + private long key; + private int version; + private final Value[] data; + private int memory; + + public SimpleRow(Value[] data) { + this.data = data; + } + + @Override + public int getColumnCount() { + return data.length; + } + + @Override + public long getKey() { + return key; + } + + @Override + public void setKey(long key) { + this.key = key; + } + + @Override + public void setKeyAndVersion(SearchRow row) { + key = row.getKey(); + version = row.getVersion(); + } + + @Override + public int getVersion() { + return version; + } + + @Override + public void setValue(int i, Value v) { + data[i] = v; + } + + @Override + public Value getValue(int i) { + return data[i]; + } + + @Override + public String toString() { + StatementBuilder buff = new StatementBuilder("( /* key:"); + buff.append(getKey()); + if (version != 0) { + buff.append(" v:").append(version); + } + buff.append(" */ "); + for (Value v : data) { + buff.appendExceptFirst(", "); + buff.append(v == null ? "null" : v.getTraceSQL()); + } + return buff.append(')').toString(); + } + + @Override + public int getMemory() { + if (memory == 0) { + int len = data.length; + memory = Constants.MEMORY_OBJECT + len * Constants.MEMORY_POINTER; + for (Value v : data) { + if (v != null) { + memory += v.getMemory(); + } + } + } + return memory; + } + +} diff --git a/modules/h2/src/main/java/org/h2/result/SimpleRowValue.java b/modules/h2/src/main/java/org/h2/result/SimpleRowValue.java new file mode 100644 index 0000000000000..a8e5e9b08f1f1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/SimpleRowValue.java @@ -0,0 +1,74 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.result; + +import org.h2.engine.Constants; +import org.h2.value.Value; + +/** + * A simple row that contains data for only one column. 
+ */ +public class SimpleRowValue implements SearchRow { + + private long key; + private int version; + private int index; + private final int virtualColumnCount; + private Value data; + + public SimpleRowValue(int columnCount) { + this.virtualColumnCount = columnCount; + } + + @Override + public void setKeyAndVersion(SearchRow row) { + key = row.getKey(); + version = row.getVersion(); + } + + @Override + public int getVersion() { + return version; + } + + @Override + public int getColumnCount() { + return virtualColumnCount; + } + + @Override + public long getKey() { + return key; + } + + @Override + public void setKey(long key) { + this.key = key; + } + + @Override + public Value getValue(int idx) { + return idx == index ? data : null; + } + + @Override + public void setValue(int idx, Value v) { + index = idx; + data = v; + } + + @Override + public String toString() { + return "( /* " + key + " */ " + (data == null ? + "null" : data.getTraceSQL()) + " )"; + } + + @Override + public int getMemory() { + return Constants.MEMORY_OBJECT + (data == null ? 0 : data.getMemory()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/result/SortOrder.java b/modules/h2/src/main/java/org/h2/result/SortOrder.java new file mode 100644 index 0000000000000..fca1918526074 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/result/SortOrder.java @@ -0,0 +1,302 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.result;
+
+import org.h2.command.dml.SelectOrderBy;
+import org.h2.engine.Database;
+import org.h2.engine.SysProperties;
+import org.h2.expression.Expression;
+import org.h2.expression.ExpressionColumn;
+import org.h2.table.Column;
+import org.h2.table.TableFilter;
+import org.h2.util.StatementBuilder;
+import org.h2.util.StringUtils;
+import org.h2.util.Utils;
+import org.h2.value.Value;
+import org.h2.value.ValueNull;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+
+/**
+ * A sort order represents an ORDER BY clause in a query.
+ */
+public class SortOrder implements Comparator<Value[]> {
+
+    /**
+     * This bit mask means the values should be sorted in ascending order.
+     */
+    public static final int ASCENDING = 0;
+
+    /**
+     * This bit mask means the values should be sorted in descending order.
+     */
+    public static final int DESCENDING = 1;
+
+    /**
+     * This bit mask means NULLs should be sorted before other data, no matter
+     * if ascending or descending order is used.
+     */
+    public static final int NULLS_FIRST = 2;
+
+    /**
+     * This bit mask means NULLs should be sorted after other data, no matter
+     * if ascending or descending order is used.
+     */
+    public static final int NULLS_LAST = 4;
+
+    /**
+     * The default sort order for NULL.
+     */
+    private static final int DEFAULT_NULL_SORT =
+            SysProperties.SORT_NULLS_HIGH ? 1 : -1;
+
+    /**
+     * The default sort order bit for NULLs last.
+     */
+    private static final int DEFAULT_NULLS_LAST = SysProperties.SORT_NULLS_HIGH ? NULLS_LAST : NULLS_FIRST;
+
+    /**
+     * The default sort order bit for NULLs first.
+     */
+    private static final int DEFAULT_NULLS_FIRST = SysProperties.SORT_NULLS_HIGH ? NULLS_FIRST : NULLS_LAST;
+
+    private final Database database;
+
+    /**
+     * The column indexes of the order by expressions within the query.
+     */
+    private final int[] queryColumnIndexes;
+
+    /**
+     * The sort type bit mask (DESCENDING, NULLS_FIRST, NULLS_LAST).
+     */
+    private final int[] sortTypes;
+
+    /**
+     * The order list.
+     */
+    private final ArrayList<SelectOrderBy> orderList;
+
+    /**
+     * Construct a new sort order object.
+     *
+     * @param database the database
+     * @param queryColumnIndexes the column index list
+     * @param sortType the sort order bit masks
+     * @param orderList the original query order list (if this is a query)
+     */
+    public SortOrder(Database database, int[] queryColumnIndexes,
+            int[] sortType, ArrayList<SelectOrderBy> orderList) {
+        this.database = database;
+        this.queryColumnIndexes = queryColumnIndexes;
+        this.sortTypes = sortType;
+        this.orderList = orderList;
+    }
+
+    /**
+     * Create the SQL snippet that describes this sort order.
+     * This is the SQL snippet that usually appears after the ORDER BY clause.
+     *
+     * @param list the expression list
+     * @param visible the number of columns in the select list
+     * @return the SQL snippet
+     */
+    public String getSQL(Expression[] list, int visible) {
+        StatementBuilder buff = new StatementBuilder();
+        int i = 0;
+        for (int idx : queryColumnIndexes) {
+            buff.appendExceptFirst(", ");
+            if (idx < visible) {
+                buff.append(idx + 1);
+            } else {
+                buff.append('=').append(StringUtils.unEnclose(list[idx].getSQL()));
+            }
+            int type = sortTypes[i++];
+            if ((type & DESCENDING) != 0) {
+                buff.append(" DESC");
+            }
+            if ((type & NULLS_FIRST) != 0) {
+                buff.append(" NULLS FIRST");
+            } else if ((type & NULLS_LAST) != 0) {
+                buff.append(" NULLS LAST");
+            }
+        }
+        return buff.toString();
+    }
+
+    /**
+     * Compare two expressions where one of them is NULL.
+     *
+     * @param aNull whether the first expression is null
+     * @param sortType the sort bit mask to use
+     * @return the result of the comparison (-1 meaning the first expression
+     *         should appear before the second, 0 if they are equal)
+     */
+    public static int compareNull(boolean aNull, int sortType) {
+        if ((sortType & NULLS_FIRST) != 0) {
+            return aNull ? -1 : 1;
+        } else if ((sortType & NULLS_LAST) != 0) {
+            return aNull ? 1 : -1;
+        } else {
+            // see also JdbcDatabaseMetaData.nullsAreSorted*
+            int comp = aNull ? DEFAULT_NULL_SORT : -DEFAULT_NULL_SORT;
+            return (sortType & DESCENDING) == 0 ? comp : -comp;
+        }
+    }
+
+    /**
+     * Compare two expression lists.
+     *
+     * @param a the first expression list
+     * @param b the second expression list
+     * @return the result of the comparison
+     */
+    @Override
+    public int compare(Value[] a, Value[] b) {
+        for (int i = 0, len = queryColumnIndexes.length; i < len; i++) {
+            int idx = queryColumnIndexes[i];
+            int type = sortTypes[i];
+            Value ao = a[idx];
+            Value bo = b[idx];
+            boolean aNull = ao == ValueNull.INSTANCE, bNull = bo == ValueNull.INSTANCE;
+            if (aNull || bNull) {
+                if (aNull == bNull) {
+                    continue;
+                }
+                return compareNull(aNull, type);
+            }
+            int comp = database.compare(ao, bo);
+            if (comp != 0) {
+                return (type & DESCENDING) == 0 ? comp : -comp;
+            }
+        }
+        return 0;
+    }
+
+    /**
+     * Sort a list of rows.
+     *
+     * @param rows the list of rows
+     */
+    public void sort(ArrayList<Value[]> rows) {
+        Collections.sort(rows, this);
+    }
+
+    /**
+     * Sort a list of rows using offset and limit.
+     *
+     * @param rows the list of rows
+     * @param offset the offset
+     * @param limit the limit
+     */
+    public void sort(ArrayList<Value[]> rows, int offset, int limit) {
+        int rowsSize = rows.size();
+        if (rows.isEmpty() || offset >= rowsSize || limit == 0) {
+            return;
+        }
+        if (offset < 0) {
+            offset = 0;
+        }
+        if (offset + limit > rowsSize) {
+            limit = rowsSize - offset;
+        }
+        if (limit == 1 && offset == 0) {
+            rows.set(0, Collections.min(rows, this));
+            return;
+        }
+        Value[][] arr = rows.toArray(new Value[0][]);
+        Utils.sortTopN(arr, offset, limit, this);
+        for (int i = 0, end = Math.min(offset + limit, rowsSize); i < end; i++) {
+            rows.set(i, arr[i]);
+        }
+    }
+
+    /**
+     * Get the column index list. This is the column indexes of the order by
+     * expressions within the query.

    + * For the query "select name, id from test order by id, name" this is {1, + * 0} as the first order by expression (the column "id") is the second + * column of the query, and the second order by expression ("name") is the + * first column of the query. + * + * @return the list + */ + public int[] getQueryColumnIndexes() { + return queryColumnIndexes; + } + + /** + * Get the column for the given table filter, if the sort column is for this + * filter. + * + * @param index the column index (0, 1,..) + * @param filter the table filter + * @return the column, or null + */ + public Column getColumn(int index, TableFilter filter) { + if (orderList == null) { + return null; + } + SelectOrderBy order = orderList.get(index); + Expression expr = order.expression; + if (expr == null) { + return null; + } + expr = expr.getNonAliasExpression(); + if (expr.isConstant()) { + return null; + } + if (!(expr instanceof ExpressionColumn)) { + return null; + } + ExpressionColumn exprCol = (ExpressionColumn) expr; + if (exprCol.getTableFilter() != filter) { + return null; + } + return exprCol.getColumn(); + } + + /** + * Get the sort order bit masks. + * + * @return the list + */ + public int[] getSortTypes() { + return sortTypes; + } + + /** + * Returns sort order bit masks with {@link #NULLS_FIRST} or {@link #NULLS_LAST} + * explicitly set, depending on {@link SysProperties#SORT_NULLS_HIGH}. + * + * @return bit masks with either {@link #NULLS_FIRST} or {@link #NULLS_LAST} explicitly set. + */ + public int[] getSortTypesWithNullPosition() { + final int[] sortTypes = this.sortTypes.clone(); + for (int i=0, length = sortTypes.length; i key; + private boolean isUpdatable; + + /** + * Construct a new object that is linked to the result set. The constructor + * reads the database meta data to find out if the result set is updatable. 
+ * + * @param conn the database connection + * @param result the result + */ + public UpdatableRow(JdbcConnection conn, ResultInterface result) + throws SQLException { + this.conn = conn; + this.result = result; + columnCount = result.getVisibleColumnCount(); + for (int i = 0; i < columnCount; i++) { + String t = result.getTableName(i); + String s = result.getSchemaName(i); + if (t == null || s == null) { + return; + } + if (tableName == null) { + tableName = t; + } else if (!tableName.equals(t)) { + return; + } + if (schemaName == null) { + schemaName = s; + } else if (!schemaName.equals(s)) { + return; + } + } + final DatabaseMetaData meta = conn.getMetaData(); + ResultSet rs = meta.getTables(null, + StringUtils.escapeMetaDataPattern(schemaName), + StringUtils.escapeMetaDataPattern(tableName), + new String[] { "TABLE" }); + if (!rs.next()) { + return; + } + if (rs.getString("SQL") == null) { + // system table + return; + } + String table = rs.getString("TABLE_NAME"); + // if the table name in the database meta data is lower case, + // but the table in the result set meta data is not, then the column + // in the database meta data is also lower case + boolean toUpper = !table.equals(tableName) && table.equalsIgnoreCase(tableName); + key = New.arrayList(); + rs = meta.getPrimaryKeys(null, + StringUtils.escapeMetaDataPattern(schemaName), + tableName); + while (rs.next()) { + String c = rs.getString("COLUMN_NAME"); + key.add(toUpper ? StringUtils.toUpperEnglish(c) : c); + } + if (isIndexUsable(key)) { + isUpdatable = true; + return; + } + key.clear(); + rs = meta.getIndexInfo(null, + StringUtils.escapeMetaDataPattern(schemaName), + tableName, true, true); + while (rs.next()) { + int pos = rs.getShort("ORDINAL_POSITION"); + if (pos == 1) { + // check the last key if there was any + if (isIndexUsable(key)) { + isUpdatable = true; + return; + } + key.clear(); + } + String c = rs.getString("COLUMN_NAME"); + key.add(toUpper ? 
StringUtils.toUpperEnglish(c) : c);
+        }
+        if (isIndexUsable(key)) {
+            isUpdatable = true;
+            return;
+        }
+        key = null;
+    }
+
+    private boolean isIndexUsable(ArrayList<String> indexColumns) {
+        if (indexColumns.isEmpty()) {
+            return false;
+        }
+        for (String c : indexColumns) {
+            if (findColumnIndex(c) < 0) {
+                return false;
+            }
+        }
+        return true;
+    }
+
+    /**
+     * Check if this result set is updatable.
+     *
+     * @return true if it is
+     */
+    public boolean isUpdatable() {
+        return isUpdatable;
+    }
+
+    private int findColumnIndex(String columnName) {
+        for (int i = 0; i < columnCount; i++) {
+            String col = result.getColumnName(i);
+            if (col.equals(columnName)) {
+                return i;
+            }
+        }
+        return -1;
+    }
+
+    private int getColumnIndex(String columnName) {
+        int index = findColumnIndex(columnName);
+        if (index < 0) {
+            throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1, columnName);
+        }
+        return index;
+    }
+
+    private void appendColumnList(StatementBuilder buff, boolean set) {
+        buff.resetCount();
+        for (int i = 0; i < columnCount; i++) {
+            buff.appendExceptFirst(",");
+            String col = result.getColumnName(i);
+            buff.append(StringUtils.quoteIdentifier(col));
+            if (set) {
+                buff.append("=? 
"); + } + } + } + + private void appendKeyCondition(StatementBuilder buff) { + buff.append(" WHERE "); + buff.resetCount(); + for (String k : key) { + buff.appendExceptFirst(" AND "); + buff.append(StringUtils.quoteIdentifier(k)).append("=?"); + } + } + + private void setKey(PreparedStatement prep, int start, Value[] current) + throws SQLException { + for (int i = 0, size = key.size(); i < size; i++) { + String col = key.get(i); + int idx = getColumnIndex(col); + Value v = current[idx]; + if (v == null || v == ValueNull.INSTANCE) { + // rows with a unique key containing NULL are not supported, + // as multiple such rows could exist + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + v.set(prep, start + i); + } + } + +// public boolean isRowDeleted(Value[] row) throws SQLException { +// StringBuilder buff = new StringBuilder(); +// buff.append("SELECT COUNT(*) FROM "). +// append(StringUtils.quoteIdentifier(tableName)); +// appendKeyCondition(buff); +// PreparedStatement prep = conn.prepareStatement(buff.toString()); +// setKey(prep, 1, row); +// ResultSet rs = prep.executeQuery(); +// rs.next(); +// return rs.getInt(1) == 0; +// } + + private void appendTableName(StatementBuilder buff) { + if (schemaName != null && schemaName.length() > 0) { + buff.append(StringUtils.quoteIdentifier(schemaName)).append('.'); + } + buff.append(StringUtils.quoteIdentifier(tableName)); + } + + /** + * Re-reads a row from the database and updates the values in the array. 
+ * + * @param row the values that contain the key + * @return the row + */ + public Value[] readRow(Value[] row) throws SQLException { + StatementBuilder buff = new StatementBuilder("SELECT "); + appendColumnList(buff, false); + buff.append(" FROM "); + appendTableName(buff); + appendKeyCondition(buff); + PreparedStatement prep = conn.prepareStatement(buff.toString()); + setKey(prep, 1, row); + ResultSet rs = prep.executeQuery(); + if (!rs.next()) { + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + Value[] newRow = new Value[columnCount]; + for (int i = 0; i < columnCount; i++) { + int type = result.getColumnType(i); + newRow[i] = DataType.readValue(conn.getSession(), rs, i + 1, type); + } + return newRow; + } + + /** + * Delete the given row in the database. + * + * @param current the row + * @throws SQLException if this row has already been deleted + */ + public void deleteRow(Value[] current) throws SQLException { + StatementBuilder buff = new StatementBuilder("DELETE FROM "); + appendTableName(buff); + appendKeyCondition(buff); + PreparedStatement prep = conn.prepareStatement(buff.toString()); + setKey(prep, 1, current); + int count = prep.executeUpdate(); + if (count != 1) { + // the row has already been deleted + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + } + + /** + * Update a row in the database. 
+ * + * @param current the old row + * @param updateRow the new row + * @throws SQLException if the row has been deleted + */ + public void updateRow(Value[] current, Value[] updateRow) throws SQLException { + StatementBuilder buff = new StatementBuilder("UPDATE "); + appendTableName(buff); + buff.append(" SET "); + appendColumnList(buff, true); + // TODO updatable result set: we could add all current values to the + // where clause + // - like this optimistic ('no') locking is possible + appendKeyCondition(buff); + PreparedStatement prep = conn.prepareStatement(buff.toString()); + int j = 1; + for (int i = 0; i < columnCount; i++) { + Value v = updateRow[i]; + if (v == null) { + v = current[i]; + } + v.set(prep, j++); + } + setKey(prep, j, current); + int count = prep.executeUpdate(); + if (count != 1) { + // the row has been deleted + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + } + + /** + * Insert a new row into the database. + * + * @param row the new row + * @throws SQLException if the row could not be inserted + */ + public void insertRow(Value[] row) throws SQLException { + StatementBuilder buff = new StatementBuilder("INSERT INTO "); + appendTableName(buff); + buff.append('('); + appendColumnList(buff, false); + buff.append(")VALUES("); + buff.resetCount(); + for (int i = 0; i < columnCount; i++) { + buff.appendExceptFirst(","); + Value v = row[i]; + if (v == null) { + buff.append("DEFAULT"); + } else { + buff.append('?'); + } + } + buff.append(')'); + PreparedStatement prep = conn.prepareStatement(buff.toString()); + for (int i = 0, j = 0; i < columnCount; i++) { + Value v = row[i]; + if (v != null) { + v.set(prep, j++ + 1); + } + } + int count = prep.executeUpdate(); + if (count != 1) { + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/result/package.html b/modules/h2/src/main/java/org/h2/result/package.html new file mode 100644 index 0000000000000..0845d83e2bcfd --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/result/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Implementation of row and internal result sets. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/schema/Constant.java b/modules/h2/src/main/java/org/h2/schema/Constant.java new file mode 100644 index 0000000000000..5a4e1c558ea61 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/schema/Constant.java @@ -0,0 +1,69 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.schema; + +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.expression.ValueExpression; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.table.Table; +import org.h2.value.Value; + +/** + * A user-defined constant as created by the SQL statement + * CREATE CONSTANT + */ +public class Constant extends SchemaObjectBase { + + private Value value; + private ValueExpression expression; + + public Constant(Schema schema, int id, String name) { + initSchemaObjectBase(schema, id, name, Trace.SCHEMA); + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public String getCreateSQL() { + return "CREATE CONSTANT " + getSQL() + " VALUE " + value.getSQL(); + } + + @Override + public int getType() { + return DbObject.CONSTANT; + } + + @Override + public void removeChildrenAndResources(Session session) { + database.removeMeta(session, getId()); + invalidate(); + } + + @Override + public void checkRename() { + // ok + } + + public void setValue(Value value) { + this.value = value; + expression = ValueExpression.get(value); + } + + public ValueExpression getValue() { + return expression; + } + +} diff --git a/modules/h2/src/main/java/org/h2/schema/Schema.java b/modules/h2/src/main/java/org/h2/schema/Schema.java new file mode 100644 index 0000000000000..87d9c89e33b1a --- 
/dev/null
+++ b/modules/h2/src/main/java/org/h2/schema/Schema.java
@@ -0,0 +1,715 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.schema;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import org.h2.api.ErrorCode;
+import org.h2.command.ddl.CreateSynonymData;
+import org.h2.command.ddl.CreateTableData;
+import org.h2.constraint.Constraint;
+import org.h2.engine.Database;
+import org.h2.engine.DbObject;
+import org.h2.engine.DbObjectBase;
+import org.h2.engine.DbSettings;
+import org.h2.engine.FunctionAlias;
+import org.h2.engine.Session;
+import org.h2.engine.SysProperties;
+import org.h2.engine.User;
+import org.h2.index.Index;
+import org.h2.message.DbException;
+import org.h2.message.Trace;
+import org.h2.mvstore.db.MVTableEngine;
+import org.h2.table.RegularTable;
+import org.h2.table.Table;
+import org.h2.table.TableLink;
+import org.h2.table.TableSynonym;
+import org.h2.util.New;
+import org.h2.util.StringUtils;
+
+/**
+ * A schema as created by the SQL statement
+ * CREATE SCHEMA
+ */
+public class Schema extends DbObjectBase {
+
+    private User owner;
+    private final boolean system;
+    private ArrayList<String> tableEngineParams;
+
+    private final ConcurrentHashMap<String, Table> tablesAndViews;
+    private final ConcurrentHashMap<String, TableSynonym> synonyms;
+    private final ConcurrentHashMap<String, Index> indexes;
+    private final ConcurrentHashMap<String, Sequence> sequences;
+    private final ConcurrentHashMap<String, TriggerObject> triggers;
+    private final ConcurrentHashMap<String, Constraint> constraints;
+    private final ConcurrentHashMap<String, Constant> constants;
+    private final ConcurrentHashMap<String, FunctionAlias> functions;
+
+    /**
+     * The set of returned unique names that are not yet stored. It is used to
+     * avoid returning the same unique name twice when multiple threads
+     * concurrently create objects.
+     */
+    private final HashSet<String> temporaryUniqueNames = new HashSet<>();
+
+    /**
+     * Create a new schema object.
+     *
+     * @param database the database
+     * @param id the object id
+     * @param schemaName the schema name
+     * @param owner the owner of the schema
+     * @param system if this is a system schema (such a schema can not be
+     *            dropped)
+     */
+    public Schema(Database database, int id, String schemaName, User owner,
+            boolean system) {
+        tablesAndViews = database.newConcurrentStringMap();
+        synonyms = database.newConcurrentStringMap();
+        indexes = database.newConcurrentStringMap();
+        sequences = database.newConcurrentStringMap();
+        triggers = database.newConcurrentStringMap();
+        constraints = database.newConcurrentStringMap();
+        constants = database.newConcurrentStringMap();
+        functions = database.newConcurrentStringMap();
+        initDbObjectBase(database, id, schemaName, Trace.SCHEMA);
+        this.owner = owner;
+        this.system = system;
+    }
+
+    /**
+     * Check if this schema can be dropped. System schemas can not be dropped.
+     *
+     * @return true if it can be dropped
+     */
+    public boolean canDrop() {
+        return !system;
+    }
+
+    @Override
+    public String getCreateSQLForCopy(Table table, String quotedName) {
+        throw DbException.throwInternalError(toString());
+    }
+
+    @Override
+    public String getDropSQL() {
+        return null;
+    }
+
+    @Override
+    public String getCreateSQL() {
+        if (system) {
+            return null;
+        }
+        return "CREATE SCHEMA IF NOT EXISTS " +
+                getSQL() + " AUTHORIZATION " + owner.getSQL();
+    }
+
+    @Override
+    public int getType() {
+        return DbObject.SCHEMA;
+    }
+
+    /**
+     * Return whether is this schema is empty (does not contain any objects).
+ * + * @return {@code true} if this schema is empty, {@code false} otherwise + */ + public boolean isEmpty() { + return tablesAndViews.isEmpty() && synonyms.isEmpty() && indexes.isEmpty() && sequences.isEmpty() + && triggers.isEmpty() && constraints.isEmpty() && constants.isEmpty() && functions.isEmpty(); + } + + @Override + public void removeChildrenAndResources(Session session) { + while (triggers != null && triggers.size() > 0) { + TriggerObject obj = (TriggerObject) triggers.values().toArray()[0]; + database.removeSchemaObject(session, obj); + } + while (constraints != null && constraints.size() > 0) { + Constraint obj = (Constraint) constraints.values().toArray()[0]; + database.removeSchemaObject(session, obj); + } + // There can be dependencies between tables e.g. using computed columns, + // so we might need to loop over them multiple times. + boolean runLoopAgain = false; + do { + runLoopAgain = false; + if (tablesAndViews != null) { + // Loop over a copy because the map is modified underneath us. + for (Table obj : new ArrayList<>(tablesAndViews.values())) { + // Check for null because multiple tables might be deleted + // in one go underneath us. 
+ if (obj.getName() != null) { + if (database.getDependentTable(obj, obj) == null) { + database.removeSchemaObject(session, obj); + } else { + runLoopAgain = true; + } + } + } + } + } while (runLoopAgain); + while (indexes != null && indexes.size() > 0) { + Index obj = (Index) indexes.values().toArray()[0]; + database.removeSchemaObject(session, obj); + } + while (sequences != null && sequences.size() > 0) { + Sequence obj = (Sequence) sequences.values().toArray()[0]; + database.removeSchemaObject(session, obj); + } + while (constants != null && constants.size() > 0) { + Constant obj = (Constant) constants.values().toArray()[0]; + database.removeSchemaObject(session, obj); + } + while (functions != null && functions.size() > 0) { + FunctionAlias obj = (FunctionAlias) functions.values().toArray()[0]; + database.removeSchemaObject(session, obj); + } + database.removeMeta(session, getId()); + owner = null; + invalidate(); + } + + @Override + public void checkRename() { + // ok + } + + /** + * Get the owner of this schema. + * + * @return the owner + */ + public User getOwner() { + return owner; + } + + /** + * Get table engine params of this schema. + * + * @return default table engine params + */ + public ArrayList getTableEngineParams() { + return tableEngineParams; + } + + /** + * Set table engine params of this schema. 
+     * @param tableEngineParams default table engine params
+     */
+    public void setTableEngineParams(ArrayList<String> tableEngineParams) {
+        this.tableEngineParams = tableEngineParams;
+    }
+
+    @SuppressWarnings("unchecked")
+    private <T extends SchemaObject> Map<String, T> getMap(int type) {
+        Map<String, ? extends SchemaObject> result;
+        switch (type) {
+        case DbObject.TABLE_OR_VIEW:
+            result = tablesAndViews;
+            break;
+        case DbObject.SYNONYM:
+            result = synonyms;
+            break;
+        case DbObject.SEQUENCE:
+            result = sequences;
+            break;
+        case DbObject.INDEX:
+            result = indexes;
+            break;
+        case DbObject.TRIGGER:
+            result = triggers;
+            break;
+        case DbObject.CONSTRAINT:
+            result = constraints;
+            break;
+        case DbObject.CONSTANT:
+            result = constants;
+            break;
+        case DbObject.FUNCTION_ALIAS:
+            result = functions;
+            break;
+        default:
+            throw DbException.throwInternalError("type=" + type);
+        }
+        return (Map<String, T>) result;
+    }
+
+    /**
+     * Add an object to this schema.
+     * This method must not be called within CreateSchemaObject;
+     * use Database.addSchemaObject() instead
+     *
+     * @param obj the object to add
+     */
+    public void add(SchemaObject obj) {
+        if (SysProperties.CHECK && obj.getSchema() != this) {
+            DbException.throwInternalError("wrong schema");
+        }
+        String name = obj.getName();
+        Map<String, SchemaObject> map = getMap(obj.getType());
+        if (SysProperties.CHECK && map.get(name) != null) {
+            DbException.throwInternalError("object already exists: " + name);
+        }
+        map.put(name, obj);
+        freeUniqueName(name);
+    }
+
+    /**
+     * Rename an object.
+     *
+     * @param obj the object to rename
+     * @param newName the new name
+     */
+    public void rename(SchemaObject obj, String newName) {
+        int type = obj.getType();
+        Map<String, SchemaObject> map = getMap(type);
+        if (SysProperties.CHECK) {
+            if (!map.containsKey(obj.getName())) {
+                DbException.throwInternalError("not found: " + obj.getName());
+            }
+            if (obj.getName().equals(newName) || map.containsKey(newName)) {
+                DbException.throwInternalError("object already exists: " + newName);
+            }
+        }
+        obj.checkRename();
+        map.remove(obj.getName());
+        freeUniqueName(obj.getName());
+        obj.rename(newName);
+        map.put(newName, obj);
+        freeUniqueName(newName);
+    }
+
+    /**
+     * Try to find a table or view with this name. This method returns null if
+     * no object with this name exists. Local temporary tables are also
+     * returned. Synonyms are not returned or resolved.
+     *
+     * @param session the session
+     * @param name the object name
+     * @return the object or null
+     */
+    public Table findTableOrView(Session session, String name) {
+        Table table = tablesAndViews.get(name);
+        if (table == null && session != null) {
+            table = session.findLocalTempTable(name);
+        }
+        return table;
+    }
+
+    /**
+     * Try to find a table or view with this name. This method returns null if
+     * no object with this name exists. Local temporary tables are also
+     * returned. If a synonym with this name exists, the backing table of the
+     * synonym is returned.
+     *
+     * @param session the session
+     * @param name the object name
+     * @return the object or null
+     */
+    public Table resolveTableOrView(Session session, String name) {
+        Table table = findTableOrView(session, name);
+        if (table == null) {
+            TableSynonym synonym = synonyms.get(name);
+            if (synonym != null) {
+                return synonym.getSynonymFor();
+            }
+            return null;
+        }
+        return table;
+    }
+
+    /**
+     * Try to find a synonym with this name. This method returns null if
+     * no object with this name exists.
+ * + * @param name the object name + * @return the object or null + */ + public TableSynonym getSynonym(String name) { + return synonyms.get(name); + } + + /** + * Try to find an index with this name. This method returns null if + * no object with this name exists. + * + * @param session the session + * @param name the object name + * @return the object or null + */ + public Index findIndex(Session session, String name) { + Index index = indexes.get(name); + if (index == null) { + index = session.findLocalTempTableIndex(name); + } + return index; + } + + /** + * Try to find a trigger with this name. This method returns null if + * no object with this name exists. + * + * @param name the object name + * @return the object or null + */ + public TriggerObject findTrigger(String name) { + return triggers.get(name); + } + + /** + * Try to find a sequence with this name. This method returns null if + * no object with this name exists. + * + * @param sequenceName the object name + * @return the object or null + */ + public Sequence findSequence(String sequenceName) { + return sequences.get(sequenceName); + } + + /** + * Try to find a constraint with this name. This method returns null if no + * object with this name exists. + * + * @param session the session + * @param name the object name + * @return the object or null + */ + public Constraint findConstraint(Session session, String name) { + Constraint constraint = constraints.get(name); + if (constraint == null) { + constraint = session.findLocalTempTableConstraint(name); + } + return constraint; + } + + /** + * Try to find a user defined constant with this name. This method returns + * null if no object with this name exists. + * + * @param constantName the object name + * @return the object or null + */ + public Constant findConstant(String constantName) { + return constants.get(constantName); + } + + /** + * Try to find a user defined function with this name. 
This method returns
+     * null if no object with this name exists.
+     *
+     * @param functionAlias the object name
+     * @return the object or null
+     */
+    public FunctionAlias findFunction(String functionAlias) {
+        return functions.get(functionAlias);
+    }
+
+    /**
+     * Release a unique object name.
+     *
+     * @param name the object name
+     */
+    public void freeUniqueName(String name) {
+        if (name != null) {
+            synchronized (temporaryUniqueNames) {
+                temporaryUniqueNames.remove(name);
+            }
+        }
+    }
+
+    private String getUniqueName(DbObject obj,
+            Map<String, ? extends SchemaObject> map, String prefix) {
+        String hash = StringUtils.toUpperEnglish(Integer.toHexString(obj.getName().hashCode()));
+        String name = null;
+        synchronized (temporaryUniqueNames) {
+            for (int i = 1, len = hash.length(); i < len; i++) {
+                name = prefix + hash.substring(0, i);
+                if (!map.containsKey(name) && !temporaryUniqueNames.contains(name)) {
+                    break;
+                }
+                name = null;
+            }
+            if (name == null) {
+                prefix = prefix + hash + "_";
+                for (int i = 0;; i++) {
+                    name = prefix + i;
+                    if (!map.containsKey(name) && !temporaryUniqueNames.contains(name)) {
+                        break;
+                    }
+                }
+            }
+            temporaryUniqueNames.add(name);
+        }
+        return name;
+    }
+
+    /**
+     * Create a unique constraint name.
+     *
+     * @param session the session
+     * @param table the constraint table
+     * @return the unique name
+     */
+    public String getUniqueConstraintName(Session session, Table table) {
+        Map<String, Constraint> tableConstraints;
+        if (table.isTemporary() && !table.isGlobalTemporary()) {
+            tableConstraints = session.getLocalTempTableConstraints();
+        } else {
+            tableConstraints = constraints;
+        }
+        return getUniqueName(table, tableConstraints, "CONSTRAINT_");
+    }
+
+    /**
+     * Create a unique index name.
+     *
+     * @param session the session
+     * @param table the indexed table
+     * @param prefix the index name prefix
+     * @return the unique name
+     */
+    public String getUniqueIndexName(Session session, Table table, String prefix) {
+        Map<String, Index> tableIndexes;
+        if (table.isTemporary() && !table.isGlobalTemporary()) {
+            tableIndexes = session.getLocalTempTableIndexes();
+        } else {
+            tableIndexes = indexes;
+        }
+        return getUniqueName(table, tableIndexes, prefix);
+    }
+
+    /**
+     * Get the table or view with the given name.
+     * Local temporary tables are also returned.
+     *
+     * @param session the session
+     * @param name the table or view name
+     * @return the table or view
+     * @throws DbException if no such object exists
+     */
+    public Table getTableOrView(Session session, String name) {
+        Table table = tablesAndViews.get(name);
+        if (table == null) {
+            if (session != null) {
+                table = session.findLocalTempTable(name);
+            }
+            if (table == null) {
+                throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, name);
+            }
+        }
+        return table;
+    }
+
+    /**
+     * Get the index with the given name.
+     *
+     * @param name the index name
+     * @return the index
+     * @throws DbException if no such object exists
+     */
+    public Index getIndex(String name) {
+        Index index = indexes.get(name);
+        if (index == null) {
+            throw DbException.get(ErrorCode.INDEX_NOT_FOUND_1, name);
+        }
+        return index;
+    }
+
+    /**
+     * Get the constraint with the given name.
+     *
+     * @param name the constraint name
+     * @return the constraint
+     * @throws DbException if no such object exists
+     */
+    public Constraint getConstraint(String name) {
+        Constraint constraint = constraints.get(name);
+        if (constraint == null) {
+            throw DbException.get(ErrorCode.CONSTRAINT_NOT_FOUND_1, name);
+        }
+        return constraint;
+    }
+
+    /**
+     * Get the user defined constant with the given name.
+     *
+     * @param constantName the constant name
+     * @return the constant
+     * @throws DbException if no such object exists
+     */
+    public Constant getConstant(String constantName) {
+        Constant constant = constants.get(constantName);
+        if (constant == null) {
+            throw DbException.get(ErrorCode.CONSTANT_NOT_FOUND_1, constantName);
+        }
+        return constant;
+    }
+
+    /**
+     * Get the sequence with the given name.
+     *
+     * @param sequenceName the sequence name
+     * @return the sequence
+     * @throws DbException if no such object exists
+     */
+    public Sequence getSequence(String sequenceName) {
+        Sequence sequence = sequences.get(sequenceName);
+        if (sequence == null) {
+            throw DbException.get(ErrorCode.SEQUENCE_NOT_FOUND_1, sequenceName);
+        }
+        return sequence;
+    }
+
+    /**
+     * Get all objects.
+     *
+     * @return a (possibly empty) list of all objects
+     */
+    public ArrayList<SchemaObject> getAll() {
+        ArrayList<SchemaObject> all = New.arrayList();
+        all.addAll(getMap(DbObject.TABLE_OR_VIEW).values());
+        all.addAll(getMap(DbObject.SYNONYM).values());
+        all.addAll(getMap(DbObject.SEQUENCE).values());
+        all.addAll(getMap(DbObject.INDEX).values());
+        all.addAll(getMap(DbObject.TRIGGER).values());
+        all.addAll(getMap(DbObject.CONSTRAINT).values());
+        all.addAll(getMap(DbObject.CONSTANT).values());
+        all.addAll(getMap(DbObject.FUNCTION_ALIAS).values());
+        return all;
+    }
+
+    /**
+     * Get all objects of the given type.
+     *
+     * @param type the object type
+     * @return a (possibly empty) list of all objects
+     */
+    public ArrayList<SchemaObject> getAll(int type) {
+        Map<String, SchemaObject> map = getMap(type);
+        return new ArrayList<>(map.values());
+    }
+
+    /**
+     * Get all tables and views.
+     *
+     * @return a (possibly empty) list of all objects
+     */
+    public ArrayList<Table> getAllTablesAndViews() {
+        synchronized (database) {
+            return new ArrayList<>(tablesAndViews.values());
+        }
+    }
+
+    public ArrayList<TableSynonym> getAllSynonyms() {
+        synchronized (database) {
+            return new ArrayList<>(synonyms.values());
+        }
+    }
+
+    /**
+     * Get the table with the given name, if any.
+     *
+     * @param name the table name
+     * @return the table or null if not found
+     */
+    public Table getTableOrViewByName(String name) {
+        synchronized (database) {
+            return tablesAndViews.get(name);
+        }
+    }
+
+    /**
+     * Remove an object from this schema.
+     *
+     * @param obj the object to remove
+     */
+    public void remove(SchemaObject obj) {
+        String objName = obj.getName();
+        Map<String, SchemaObject> map = getMap(obj.getType());
+        if (SysProperties.CHECK && !map.containsKey(objName)) {
+            DbException.throwInternalError("not found: " + objName);
+        }
+        map.remove(objName);
+        freeUniqueName(objName);
+    }
+
+    /**
+     * Add a table to the schema.
+     *
+     * @param data the create table information
+     * @return the created {@link Table} object
+     */
+    public Table createTable(CreateTableData data) {
+        synchronized (database) {
+            if (!data.temporary || data.globalTemporary) {
+                database.lockMeta(data.session);
+            }
+            data.schema = this;
+            if (data.tableEngine == null) {
+                DbSettings s = database.getSettings();
+                if (s.defaultTableEngine != null) {
+                    data.tableEngine = s.defaultTableEngine;
+                } else if (s.mvStore) {
+                    data.tableEngine = MVTableEngine.class.getName();
+                }
+            }
+            if (data.tableEngine != null) {
+                if (data.tableEngineParams == null) {
+                    data.tableEngineParams = this.tableEngineParams;
+                }
+                return database.getTableEngine(data.tableEngine).createTable(data);
+            }
+            return new RegularTable(data);
+        }
+    }
+
+    /**
+     * Add a table synonym to the schema.
+     *
+     * @param data the create synonym information
+     * @return the created {@link TableSynonym} object
+     */
+    public TableSynonym createSynonym(CreateSynonymData data) {
+        synchronized (database) {
+            database.lockMeta(data.session);
+            data.schema = this;
+            return new TableSynonym(data);
+        }
+    }
+
+    /**
+     * Add a linked table to the schema.
+ * + * @param id the object id + * @param tableName the table name of the alias + * @param driver the driver class name + * @param url the database URL + * @param user the user name + * @param password the password + * @param originalSchema the schema name of the target table + * @param originalTable the table name of the target table + * @param emitUpdates if updates should be emitted instead of delete/insert + * @param force create the object even if the database can not be accessed + * @return the {@link TableLink} object + */ + public TableLink createTableLink(int id, String tableName, String driver, + String url, String user, String password, String originalSchema, + String originalTable, boolean emitUpdates, boolean force) { + synchronized (database) { + return new TableLink(this, id, tableName, + driver, url, user, password, + originalSchema, originalTable, emitUpdates, force); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/schema/SchemaObject.java b/modules/h2/src/main/java/org/h2/schema/SchemaObject.java new file mode 100644 index 0000000000000..c2c1c6d7a9794 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/schema/SchemaObject.java @@ -0,0 +1,30 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.schema; + +import org.h2.engine.DbObject; + +/** + * Any database object that is stored in a schema. + */ +public interface SchemaObject extends DbObject { + + /** + * Get the schema in which this object is defined + * + * @return the schema + */ + Schema getSchema(); + + /** + * Check whether this is a hidden object that doesn't appear in the meta + * data and in the script, and is not dropped on DROP ALL OBJECTS. 
+ * + * @return true if it is hidden + */ + boolean isHidden(); + +} diff --git a/modules/h2/src/main/java/org/h2/schema/SchemaObjectBase.java b/modules/h2/src/main/java/org/h2/schema/SchemaObjectBase.java new file mode 100644 index 0000000000000..2a8af49ce6857 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/schema/SchemaObjectBase.java @@ -0,0 +1,47 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.schema; + +import org.h2.engine.DbObjectBase; + +/** + * The base class for classes implementing SchemaObject. + */ +public abstract class SchemaObjectBase extends DbObjectBase implements + SchemaObject { + + private Schema schema; + + /** + * Initialize some attributes of this object. + * + * @param newSchema the schema + * @param id the object id + * @param name the name + * @param traceModuleId the trace module id + */ + protected void initSchemaObjectBase(Schema newSchema, int id, String name, + int traceModuleId) { + initDbObjectBase(newSchema.getDatabase(), id, name, traceModuleId); + this.schema = newSchema; + } + + @Override + public Schema getSchema() { + return schema; + } + + @Override + public String getSQL() { + return schema.getSQL() + "." + super.getSQL(); + } + + @Override + public boolean isHidden() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/schema/Sequence.java b/modules/h2/src/main/java/org/h2/schema/Sequence.java new file mode 100644 index 0000000000000..b840e236a42c7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/schema/Sequence.java @@ -0,0 +1,360 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.schema; + +import java.math.BigInteger; +import org.h2.api.ErrorCode; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.table.Table; + +/** + * A sequence is created using the statement + * CREATE SEQUENCE + */ +public class Sequence extends SchemaObjectBase { + + /** + * The default cache size for sequences. + */ + public static final int DEFAULT_CACHE_SIZE = 32; + + private long value; + private long valueWithMargin; + private long increment; + private long cacheSize; + private long minValue; + private long maxValue; + private boolean cycle; + private boolean belongsToTable; + private boolean writeWithMargin; + + /** + * Creates a new sequence for an auto-increment column. + * + * @param schema the schema + * @param id the object id + * @param name the sequence name + * @param startValue the first value to return + * @param increment the increment count + */ + public Sequence(Schema schema, int id, String name, long startValue, + long increment) { + this(schema, id, name, startValue, increment, null, null, null, false, + true); + } + + /** + * Creates a new sequence. + * + * @param schema the schema + * @param id the object id + * @param name the sequence name + * @param startValue the first value to return + * @param increment the increment count + * @param cacheSize the number of entries to pre-fetch + * @param minValue the minimum value + * @param maxValue the maximum value + * @param cycle whether to jump back to the min value if needed + * @param belongsToTable whether this sequence belongs to a table (for + * auto-increment columns) + */ + public Sequence(Schema schema, int id, String name, Long startValue, + Long increment, Long cacheSize, Long minValue, Long maxValue, + boolean cycle, boolean belongsToTable) { + initSchemaObjectBase(schema, id, name, Trace.SEQUENCE); + this.increment = increment != null ? 
+ increment : 1; + this.minValue = minValue != null ? + minValue : getDefaultMinValue(startValue, this.increment); + this.maxValue = maxValue != null ? + maxValue : getDefaultMaxValue(startValue, this.increment); + this.value = startValue != null ? + startValue : getDefaultStartValue(this.increment); + this.valueWithMargin = value; + this.cacheSize = cacheSize != null ? + Math.max(1, cacheSize) : DEFAULT_CACHE_SIZE; + this.cycle = cycle; + this.belongsToTable = belongsToTable; + if (!isValid(this.value, this.minValue, this.maxValue, this.increment)) { + throw DbException.get(ErrorCode.SEQUENCE_ATTRIBUTES_INVALID, name, + String.valueOf(this.value), String.valueOf(this.minValue), + String.valueOf(this.maxValue), + String.valueOf(this.increment)); + } + } + + /** + * Allows the start value, increment, min value and max value to be updated + * atomically, including atomic validation. Useful because setting these + * attributes one after the other could otherwise result in an invalid + * sequence state (e.g. min value > max value, start value < min value, + * etc). 
+ * + * @param startValue the new start value (null if no change) + * @param minValue the new min value (null if no change) + * @param maxValue the new max value (null if no change) + * @param increment the new increment (null if no change) + */ + public synchronized void modify(Long startValue, Long minValue, + Long maxValue, Long increment) { + if (startValue == null) { + startValue = this.value; + } + if (minValue == null) { + minValue = this.minValue; + } + if (maxValue == null) { + maxValue = this.maxValue; + } + if (increment == null) { + increment = this.increment; + } + if (!isValid(startValue, minValue, maxValue, increment)) { + throw DbException.get(ErrorCode.SEQUENCE_ATTRIBUTES_INVALID, + getName(), String.valueOf(startValue), + String.valueOf(minValue), + String.valueOf(maxValue), + String.valueOf(increment)); + } + this.value = startValue; + this.valueWithMargin = startValue; + this.minValue = minValue; + this.maxValue = maxValue; + this.increment = increment; + } + + /** + * Validates the specified prospective start value, min value, max value and + * increment relative to each other, since each of their respective + * validities are contingent on the values of the other parameters. + * + * @param value the prospective start value + * @param minValue the prospective min value + * @param maxValue the prospective max value + * @param increment the prospective increment + */ + private static boolean isValid(long value, long minValue, long maxValue, + long increment) { + return minValue <= value && + maxValue >= value && + maxValue > minValue && + increment != 0 && + // Math.abs(increment) < maxValue - minValue + // use BigInteger to avoid overflows when maxValue and minValue + // are really big + BigInteger.valueOf(increment).abs().compareTo( + BigInteger.valueOf(maxValue).subtract(BigInteger.valueOf(minValue))) < 0; + } + + private static long getDefaultMinValue(Long startValue, long increment) { + long v = increment >= 0 ? 
1 : Long.MIN_VALUE; + if (startValue != null && increment >= 0 && startValue < v) { + v = startValue; + } + return v; + } + + private static long getDefaultMaxValue(Long startValue, long increment) { + long v = increment >= 0 ? Long.MAX_VALUE : -1; + if (startValue != null && increment < 0 && startValue > v) { + v = startValue; + } + return v; + } + + private long getDefaultStartValue(long increment) { + return increment >= 0 ? minValue : maxValue; + } + + public boolean getBelongsToTable() { + return belongsToTable; + } + + public long getIncrement() { + return increment; + } + + public long getMinValue() { + return minValue; + } + + public long getMaxValue() { + return maxValue; + } + + public boolean getCycle() { + return cycle; + } + + public void setCycle(boolean cycle) { + this.cycle = cycle; + } + + @Override + public String getDropSQL() { + if (getBelongsToTable()) { + return null; + } + return "DROP SEQUENCE IF EXISTS " + getSQL(); + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + @Override + public synchronized String getCreateSQL() { + long v = writeWithMargin ? valueWithMargin : value; + StringBuilder buff = new StringBuilder("CREATE SEQUENCE "); + buff.append(getSQL()).append(" START WITH ").append(v); + if (increment != 1) { + buff.append(" INCREMENT BY ").append(increment); + } + if (minValue != getDefaultMinValue(v, increment)) { + buff.append(" MINVALUE ").append(minValue); + } + if (maxValue != getDefaultMaxValue(v, increment)) { + buff.append(" MAXVALUE ").append(maxValue); + } + if (cycle) { + buff.append(" CYCLE"); + } + if (cacheSize != DEFAULT_CACHE_SIZE) { + buff.append(" CACHE ").append(cacheSize); + } + if (belongsToTable) { + buff.append(" BELONGS_TO_TABLE"); + } + return buff.toString(); + } + + /** + * Get the next value for this sequence. 
+ * + * @param session the session + * @return the next value + */ + public long getNext(Session session) { + boolean needsFlush = false; + long result; + synchronized (this) { + if ((increment > 0 && value >= valueWithMargin) || + (increment < 0 && value <= valueWithMargin)) { + valueWithMargin += increment * cacheSize; + needsFlush = true; + } + if ((increment > 0 && value > maxValue) || + (increment < 0 && value < minValue)) { + if (cycle) { + value = increment > 0 ? minValue : maxValue; + valueWithMargin = value + (increment * cacheSize); + needsFlush = true; + } else { + throw DbException.get(ErrorCode.SEQUENCE_EXHAUSTED, getName()); + } + } + result = value; + value += increment; + } + if (needsFlush) { + flush(session); + } + return result; + } + + /** + * Flush the current value to disk. + */ + public void flushWithoutMargin() { + if (valueWithMargin != value) { + valueWithMargin = value; + flush(null); + } + } + + /** + * Flush the current value, including the margin, to disk. + * + * @param session the session + */ + public void flush(Session session) { + if (isTemporary()) { + return; + } + if (session == null || !database.isSysTableLockedBy(session)) { + // This session may not lock the sys table (except if it has already + // locked it) because it must be committed immediately, otherwise + // other threads can not access the sys table. + Session sysSession = database.getSystemSession(); + synchronized (database.isMultiThreaded() ? sysSession : database) { + flushInternal(sysSession); + sysSession.commit(false); + } + } else { + synchronized (database.isMultiThreaded() ? 
session : database) { + flushInternal(session); + } + } + } + + private void flushInternal(Session session) { + final boolean metaWasLocked = database.lockMeta(session); + // just for this case, use the value with the margin + try { + writeWithMargin = true; + database.updateMeta(session, this); + } finally { + writeWithMargin = false; + } + if (!metaWasLocked) { + database.unlockMeta(session); + } + } + + /** + * Flush the current value to disk and close this object. + */ + public void close() { + flushWithoutMargin(); + } + + @Override + public int getType() { + return DbObject.SEQUENCE; + } + + @Override + public void removeChildrenAndResources(Session session) { + database.removeMeta(session, getId()); + invalidate(); + } + + @Override + public void checkRename() { + // nothing to do + } + + public synchronized long getCurrentValue() { + return value - increment; + } + + public void setBelongsToTable(boolean b) { + this.belongsToTable = b; + } + + public void setCacheSize(long cacheSize) { + this.cacheSize = Math.max(1, cacheSize); + } + + public long getCacheSize() { + return cacheSize; + } + +} diff --git a/modules/h2/src/main/java/org/h2/schema/TriggerObject.java b/modules/h2/src/main/java/org/h2/schema/TriggerObject.java new file mode 100644 index 0000000000000..1fbef66dcd310 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/schema/TriggerObject.java @@ -0,0 +1,462 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.schema;
+
+import java.lang.reflect.Method;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.h2.api.ErrorCode;
+import org.h2.api.Trigger;
+import org.h2.command.Parser;
+import org.h2.engine.Constants;
+import org.h2.engine.DbObject;
+import org.h2.engine.Session;
+import org.h2.message.DbException;
+import org.h2.message.Trace;
+import org.h2.result.Row;
+import org.h2.table.Table;
+import org.h2.util.JdbcUtils;
+import org.h2.util.SourceCompiler;
+import org.h2.util.StatementBuilder;
+import org.h2.util.StringUtils;
+import org.h2.value.DataType;
+import org.h2.value.Value;
+
+/**
+ * A trigger is created using the statement
+ * CREATE TRIGGER
+ */
+public class TriggerObject extends SchemaObjectBase {
+
+    /**
+     * The default queue size.
+     */
+    public static final int DEFAULT_QUEUE_SIZE = 1024;
+
+    private boolean insteadOf;
+    private boolean before;
+    private int typeMask;
+    private boolean rowBased;
+    private boolean onRollback;
+    // TODO trigger: support queue and noWait = false as well
+    private int queueSize = DEFAULT_QUEUE_SIZE;
+    private boolean noWait;
+    private Table table;
+    private String triggerClassName;
+    private String triggerSource;
+    private Trigger triggerCallback;
+
+    public TriggerObject(Schema schema, int id, String name, Table table) {
+        initSchemaObjectBase(schema, id, name, Trace.TRIGGER);
+        this.table = table;
+        setTemporary(table.isTemporary());
+    }
+
+    public void setBefore(boolean before) {
+        this.before = before;
+    }
+
+    public void setInsteadOf(boolean insteadOf) {
+        this.insteadOf = insteadOf;
+    }
+
+    private synchronized void load() {
+        if (triggerCallback != null) {
+            return;
+        }
+        try {
+            Session sysSession = database.getSystemSession();
+            Connection c2 = sysSession.createConnection(false);
+            Object obj;
+            if (triggerClassName != null) {
+                obj = JdbcUtils.loadUserClass(triggerClassName).newInstance();
+            } else {
+                obj =
loadFromSource(); + } + triggerCallback = (Trigger) obj; + triggerCallback.init(c2, getSchema().getName(), getName(), + table.getName(), before, typeMask); + } catch (Throwable e) { + // try again later + triggerCallback = null; + throw DbException.get(ErrorCode.ERROR_CREATING_TRIGGER_OBJECT_3, e, getName(), + triggerClassName != null ? triggerClassName : "..source..", e.toString()); + } + } + + private Trigger loadFromSource() { + SourceCompiler compiler = database.getCompiler(); + synchronized (compiler) { + String fullClassName = Constants.USER_PACKAGE + ".trigger." + getName(); + compiler.setSource(fullClassName, triggerSource); + try { + if (SourceCompiler.isJavaxScriptSource(triggerSource)) { + return (Trigger) compiler.getCompiledScript(fullClassName).eval(); + } else { + final Method m = compiler.getMethod(fullClassName); + if (m.getParameterTypes().length > 0) { + throw new IllegalStateException("No parameters are allowed for a trigger"); + } + return (Trigger) m.invoke(null); + } + } catch (DbException e) { + throw e; + } catch (Exception e) { + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, e, triggerSource); + } + } + } + + /** + * Set the trigger class name and load the class if possible. + * + * @param triggerClassName the name of the trigger class + * @param force whether exceptions (due to missing class or access rights) + * should be ignored + */ + public void setTriggerClassName(String triggerClassName, boolean force) { + this.setTriggerAction(triggerClassName, null, force); + } + + /** + * Set the trigger source code and compile it if possible. 
+ * + * @param source the source code of a method returning a {@link Trigger} + * @param force whether exceptions (due to syntax error) + * should be ignored + */ + public void setTriggerSource(String source, boolean force) { + this.setTriggerAction(null, source, force); + } + + private void setTriggerAction(String triggerClassName, String source, boolean force) { + this.triggerClassName = triggerClassName; + this.triggerSource = source; + try { + load(); + } catch (DbException e) { + if (!force) { + throw e; + } + } + } + + /** + * Call the trigger class if required. This method does nothing if the + * trigger is not defined for the given action. This method is called before + * or after any rows have been processed, once for each statement. + * + * @param session the session + * @param type the trigger type + * @param beforeAction if this method is called before applying the changes + */ + public void fire(Session session, int type, boolean beforeAction) { + if (rowBased || before != beforeAction || (typeMask & type) == 0) { + return; + } + load(); + Connection c2 = session.createConnection(false); + boolean old = false; + if (type != Trigger.SELECT) { + old = session.setCommitOrRollbackDisabled(true); + } + Value identity = session.getLastScopeIdentity(); + try { + triggerCallback.fire(c2, null, null); + } catch (Throwable e) { + throw DbException.get(ErrorCode.ERROR_EXECUTING_TRIGGER_3, e, getName(), + triggerClassName != null ? 
triggerClassName : "..source..", e.toString()); + } finally { + if (session.getLastTriggerIdentity() != null) { + session.setLastScopeIdentity(session.getLastTriggerIdentity()); + session.setLastTriggerIdentity(null); + } else { + session.setLastScopeIdentity(identity); + } + if (type != Trigger.SELECT) { + session.setCommitOrRollbackDisabled(old); + } + } + } + + private static Object[] convertToObjectList(Row row) { + if (row == null) { + return null; + } + int len = row.getColumnCount(); + Object[] list = new Object[len]; + for (int i = 0; i < len; i++) { + list[i] = row.getValue(i).getObject(); + } + return list; + } + + /** + * Call the fire method of the user-defined trigger class if required. This + * method does nothing if the trigger is not defined for the given action. + * This method is called before or after a row is processed, possibly many + * times for each statement. + * + * @param session the session + * @param table the table + * @param oldRow the old row + * @param newRow the new row + * @param beforeAction true if this method is called before the operation is + * applied + * @param rollback when the operation occurred within a rollback + * @return true if no further action is required (for 'instead of' triggers) + */ + public boolean fireRow(Session session, Table table, Row oldRow, Row newRow, + boolean beforeAction, boolean rollback) { + if (!rowBased || before != beforeAction) { + return false; + } + if (rollback && !onRollback) { + return false; + } + load(); + Object[] oldList; + Object[] newList; + boolean fire = false; + if ((typeMask & Trigger.INSERT) != 0) { + if (oldRow == null && newRow != null) { + fire = true; + } + } + if ((typeMask & Trigger.UPDATE) != 0) { + if (oldRow != null && newRow != null) { + fire = true; + } + } + if ((typeMask & Trigger.DELETE) != 0) { + if (oldRow != null && newRow == null) { + fire = true; + } + } + if (!fire) { + return false; + } + oldList = convertToObjectList(oldRow); + newList = 
convertToObjectList(newRow); + Object[] newListBackup; + if (before && newList != null) { + newListBackup = Arrays.copyOf(newList, newList.length); + } else { + newListBackup = null; + } + Connection c2 = session.createConnection(false); + boolean old = session.getAutoCommit(); + boolean oldDisabled = session.setCommitOrRollbackDisabled(true); + Value identity = session.getLastScopeIdentity(); + try { + session.setAutoCommit(false); + triggerCallback.fire(c2, oldList, newList); + if (newListBackup != null) { + for (int i = 0; i < newList.length; i++) { + Object o = newList[i]; + if (o != newListBackup[i]) { + Value v = DataType.convertToValue(session, o, Value.UNKNOWN); + session.getGeneratedKeys().add(table.getColumn(i)); + newRow.setValue(i, v); + } + } + } + } catch (Exception e) { + if (onRollback) { + // ignore + } else { + throw DbException.convert(e); + } + } finally { + if (session.getLastTriggerIdentity() != null) { + session.setLastScopeIdentity(session.getLastTriggerIdentity()); + session.setLastTriggerIdentity(null); + } else { + session.setLastScopeIdentity(identity); + } + session.setCommitOrRollbackDisabled(oldDisabled); + session.setAutoCommit(old); + } + return insteadOf; + } + + /** + * Set the trigger type. 
+ * + * @param typeMask the type + */ + public void setTypeMask(int typeMask) { + this.typeMask = typeMask; + } + + public void setRowBased(boolean rowBased) { + this.rowBased = rowBased; + } + + public void setQueueSize(int size) { + this.queueSize = size; + } + + public int getQueueSize() { + return queueSize; + } + + public void setNoWait(boolean noWait) { + this.noWait = noWait; + } + + public boolean isNoWait() { + return noWait; + } + + public void setOnRollback(boolean onRollback) { + this.onRollback = onRollback; + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public String getCreateSQLForCopy(Table targetTable, String quotedName) { + StringBuilder buff = new StringBuilder("CREATE FORCE TRIGGER "); + buff.append(quotedName); + if (insteadOf) { + buff.append(" INSTEAD OF "); + } else if (before) { + buff.append(" BEFORE "); + } else { + buff.append(" AFTER "); + } + buff.append(getTypeNameList()); + buff.append(" ON ").append(targetTable.getSQL()); + if (rowBased) { + buff.append(" FOR EACH ROW"); + } + if (noWait) { + buff.append(" NOWAIT"); + } else { + buff.append(" QUEUE ").append(queueSize); + } + if (triggerClassName != null) { + buff.append(" CALL ").append(Parser.quoteIdentifier(triggerClassName)); + } else { + buff.append(" AS ").append(StringUtils.quoteStringSQL(triggerSource)); + } + return buff.toString(); + } + + public String getTypeNameList() { + StatementBuilder buff = new StatementBuilder(); + if ((typeMask & Trigger.INSERT) != 0) { + buff.appendExceptFirst(", "); + buff.append("INSERT"); + } + if ((typeMask & Trigger.UPDATE) != 0) { + buff.appendExceptFirst(", "); + buff.append("UPDATE"); + } + if ((typeMask & Trigger.DELETE) != 0) { + buff.appendExceptFirst(", "); + buff.append("DELETE"); + } + if ((typeMask & Trigger.SELECT) != 0) { + buff.appendExceptFirst(", "); + buff.append("SELECT"); + } + if (onRollback) { + buff.appendExceptFirst(", "); + buff.append("ROLLBACK"); + } + return buff.toString(); + } 
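The typeMask bit flags above drive both fire-time dispatch (fire() returns early when (typeMask & type) == 0) and the SQL keyword list produced by getTypeNameList(). A standalone sketch of the same encode/decode round trip; the flag values 1, 2, 4, 8 are assumed here to mirror the INSERT, UPDATE, DELETE and SELECT constants of org.h2.api.Trigger:

```java
// Standalone sketch of the typeMask encoding used by TriggerObject.
// The flag values mirror org.h2.api.Trigger's constants (an assumption of this sketch):
// INSERT = 1, UPDATE = 2, DELETE = 4, SELECT = 8.
public class TypeMaskSketch {
    static final int INSERT = 1, UPDATE = 2, DELETE = 4, SELECT = 8;
    static final String[] NAMES = {"INSERT", "UPDATE", "DELETE", "SELECT"};

    // Render the mask as a comma-separated keyword list, as getTypeNameList() does.
    static String typeNameList(int typeMask) {
        StringBuilder buff = new StringBuilder();
        for (int i = 0; i < NAMES.length; i++) {
            if ((typeMask & (1 << i)) != 0) {
                if (buff.length() > 0) {
                    buff.append(", ");
                }
                buff.append(NAMES[i]);
            }
        }
        return buff.toString();
    }

    public static void main(String[] args) {
        int mask = INSERT | DELETE; // trigger fires on INSERT and DELETE only
        System.out.println(typeNameList(mask)); // prints "INSERT, DELETE"
    }
}
```

Like the real method, an empty mask yields an empty list; the actual class additionally appends "ROLLBACK" when the onRollback flag is set, which this sketch omits.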
+ + @Override + public String getCreateSQL() { + return getCreateSQLForCopy(table, getSQL()); + } + + @Override + public int getType() { + return DbObject.TRIGGER; + } + + @Override + public void removeChildrenAndResources(Session session) { + table.removeTrigger(this); + database.removeMeta(session, getId()); + if (triggerCallback != null) { + try { + triggerCallback.remove(); + } catch (SQLException e) { + throw DbException.convert(e); + } + } + table = null; + triggerClassName = null; + triggerSource = null; + triggerCallback = null; + invalidate(); + } + + @Override + public void checkRename() { + // nothing to do + } + + /** + * Get the table of this trigger. + * + * @return the table + */ + public Table getTable() { + return table; + } + + /** + * Check if this is a before trigger. + * + * @return true if it is + */ + public boolean isBefore() { + return before; + } + + /** + * Get the trigger class name. + * + * @return the class name + */ + public String getTriggerClassName() { + return triggerClassName; + } + + public String getTriggerSource() { + return triggerSource; + } + + /** + * Close the trigger. + */ + public void close() throws SQLException { + if (triggerCallback != null) { + triggerCallback.close(); + } + } + + /** + * Check whether this is a select trigger. + * + * @return true if it is + */ + public boolean isSelectTrigger() { + return (typeMask & Trigger.SELECT) != 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/schema/package.html b/modules/h2/src/main/java/org/h2/schema/package.html new file mode 100644 index 0000000000000..37abbc3cc7a01 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/schema/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Schema implementation and objects that are stored in a schema (for example, sequences and constants). + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/security/AES.java b/modules/h2/src/main/java/org/h2/security/AES.java new file mode 100644 index 0000000000000..c4825292a1bbc --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/AES.java @@ -0,0 +1,327 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.security; + +import org.h2.util.Bits; + +/** + * An implementation of the AES block cipher algorithm, + * also known as Rijndael. Only AES-128 is supported by this class. + */ +public class AES implements BlockCipher { + + private static final int[] RCON = new int[10]; + private static final int[] FS = new int[256]; + private static final int[] FT0 = new int[256]; + private static final int[] FT1 = new int[256]; + private static final int[] FT2 = new int[256]; + private static final int[] FT3 = new int[256]; + private static final int[] RS = new int[256]; + private static final int[] RT0 = new int[256]; + private static final int[] RT1 = new int[256]; + private static final int[] RT2 = new int[256]; + private static final int[] RT3 = new int[256]; + private final int[] encKey = new int[44]; + private final int[] decKey = new int[44]; + + private static int rot8(int x) { + return (x >>> 8) | (x << 24); + } + + private static int xtime(int x) { + return ((x << 1) ^ (((x & 0x80) != 0) ? 0x1b : 0)) & 255; + } + + private static int mul(int[] pow, int[] log, int x, int y) { + return (x != 0 && y != 0) ? 
pow[(log[x] + log[y]) % 255] : 0; + } + + static { + int[] pow = new int[256]; + int[] log = new int[256]; + for (int i = 0, x = 1; i < 256; i++, x ^= xtime(x)) { + pow[i] = x; + log[x] = i; + } + for (int i = 0, x = 1; i < 10; i++, x = xtime(x)) { + RCON[i] = x << 24; + } + FS[0x00] = 0x63; + RS[0x63] = 0x00; + for (int i = 1; i < 256; i++) { + int x = pow[255 - log[i]], y = x; + y = ((y << 1) | (y >> 7)) & 255; + x ^= y; + y = ((y << 1) | (y >> 7)) & 255; + x ^= y; + y = ((y << 1) | (y >> 7)) & 255; + x ^= y; + y = ((y << 1) | (y >> 7)) & 255; + x ^= y ^ 0x63; + FS[i] = x & 255; + RS[x] = i & 255; + } + for (int i = 0; i < 256; i++) { + int x = FS[i], y = xtime(x); + FT0[i] = (x ^ y) ^ (x << 8) ^ (x << 16) ^ (y << 24); + FT1[i] = rot8(FT0[i]); + FT2[i] = rot8(FT1[i]); + FT3[i] = rot8(FT2[i]); + y = RS[i]; + RT0[i] = mul(pow, log, 0x0b, y) ^ (mul(pow, log, 0x0d, y) << 8) + ^ (mul(pow, log, 0x09, y) << 16) ^ (mul(pow, log, 0x0e, y) << 24); + RT1[i] = rot8(RT0[i]); + RT2[i] = rot8(RT1[i]); + RT3[i] = rot8(RT2[i]); + } + } + + private static int getDec(int t) { + return RT0[FS[(t >> 24) & 255]] ^ RT1[FS[(t >> 16) & 255]] + ^ RT2[FS[(t >> 8) & 255]] ^ RT3[FS[t & 255]]; + } + + @Override + public void setKey(byte[] key) { + for (int i = 0, j = 0; i < 4; i++) { + encKey[i] = decKey[i] = ((key[j++] & 255) << 24) + | ((key[j++] & 255) << 16) | ((key[j++] & 255) << 8) + | (key[j++] & 255); + } + int e = 0; + for (int i = 0; i < 10; i++, e += 4) { + encKey[e + 4] = encKey[e] ^ RCON[i] + ^ (FS[(encKey[e + 3] >> 16) & 255] << 24) + ^ (FS[(encKey[e + 3] >> 8) & 255] << 16) + ^ (FS[(encKey[e + 3]) & 255] << 8) + ^ FS[(encKey[e + 3] >> 24) & 255]; + encKey[e + 5] = encKey[e + 1] ^ encKey[e + 4]; + encKey[e + 6] = encKey[e + 2] ^ encKey[e + 5]; + encKey[e + 7] = encKey[e + 3] ^ encKey[e + 6]; + } + int d = 0; + decKey[d++] = encKey[e++]; + decKey[d++] = encKey[e++]; + decKey[d++] = encKey[e++]; + decKey[d++] = encKey[e++]; + for (int i = 1; i < 10; i++) { + e -= 8; + decKey[d++] 
= getDec(encKey[e++]); + decKey[d++] = getDec(encKey[e++]); + decKey[d++] = getDec(encKey[e++]); + decKey[d++] = getDec(encKey[e++]); + } + e -= 8; + decKey[d++] = encKey[e++]; + decKey[d++] = encKey[e++]; + decKey[d++] = encKey[e++]; + decKey[d] = encKey[e]; + } + + @Override + public void encrypt(byte[] bytes, int off, int len) { + for (int i = off; i < off + len; i += 16) { + encryptBlock(bytes, bytes, i); + } + } + + @Override + public void decrypt(byte[] bytes, int off, int len) { + for (int i = off; i < off + len; i += 16) { + decryptBlock(bytes, bytes, i); + } + } + + private void encryptBlock(byte[] in, byte[] out, int off) { + int[] k = encKey; + int x0 = Bits.readInt(in, off) ^ k[0]; + int x1 = Bits.readInt(in, off + 4) ^ k[1]; + int x2 = Bits.readInt(in, off + 8) ^ k[2]; + int x3 = Bits.readInt(in, off + 12) ^ k[3]; + int y0 = FT0[(x0 >> 24) & 255] ^ FT1[(x1 >> 16) & 255] + ^ FT2[(x2 >> 8) & 255] ^ FT3[x3 & 255] ^ k[4]; + int y1 = FT0[(x1 >> 24) & 255] ^ FT1[(x2 >> 16) & 255] + ^ FT2[(x3 >> 8) & 255] ^ FT3[x0 & 255] ^ k[5]; + int y2 = FT0[(x2 >> 24) & 255] ^ FT1[(x3 >> 16) & 255] + ^ FT2[(x0 >> 8) & 255] ^ FT3[x1 & 255] ^ k[6]; + int y3 = FT0[(x3 >> 24) & 255] ^ FT1[(x0 >> 16) & 255] + ^ FT2[(x1 >> 8) & 255] ^ FT3[x2 & 255] ^ k[7]; + x0 = FT0[(y0 >> 24) & 255] ^ FT1[(y1 >> 16) & 255] + ^ FT2[(y2 >> 8) & 255] ^ FT3[y3 & 255] ^ k[8]; + x1 = FT0[(y1 >> 24) & 255] ^ FT1[(y2 >> 16) & 255] + ^ FT2[(y3 >> 8) & 255] ^ FT3[y0 & 255] ^ k[9]; + x2 = FT0[(y2 >> 24) & 255] ^ FT1[(y3 >> 16) & 255] + ^ FT2[(y0 >> 8) & 255] ^ FT3[y1 & 255] ^ k[10]; + x3 = FT0[(y3 >> 24) & 255] ^ FT1[(y0 >> 16) & 255] + ^ FT2[(y1 >> 8) & 255] ^ FT3[y2 & 255] ^ k[11]; + y0 = FT0[(x0 >> 24) & 255] ^ FT1[(x1 >> 16) & 255] + ^ FT2[(x2 >> 8) & 255] ^ FT3[x3 & 255] ^ k[12]; + y1 = FT0[(x1 >> 24) & 255] ^ FT1[(x2 >> 16) & 255] + ^ FT2[(x3 >> 8) & 255] ^ FT3[x0 & 255] ^ k[13]; + y2 = FT0[(x2 >> 24) & 255] ^ FT1[(x3 >> 16) & 255] + ^ FT2[(x0 >> 8) & 255] ^ FT3[x1 & 255] ^ k[14]; + y3 = FT0[(x3 >> 
24) & 255] ^ FT1[(x0 >> 16) & 255] + ^ FT2[(x1 >> 8) & 255] ^ FT3[x2 & 255] ^ k[15]; + x0 = FT0[(y0 >> 24) & 255] ^ FT1[(y1 >> 16) & 255] + ^ FT2[(y2 >> 8) & 255] ^ FT3[y3 & 255] ^ k[16]; + x1 = FT0[(y1 >> 24) & 255] ^ FT1[(y2 >> 16) & 255] + ^ FT2[(y3 >> 8) & 255] ^ FT3[y0 & 255] ^ k[17]; + x2 = FT0[(y2 >> 24) & 255] ^ FT1[(y3 >> 16) & 255] + ^ FT2[(y0 >> 8) & 255] ^ FT3[y1 & 255] ^ k[18]; + x3 = FT0[(y3 >> 24) & 255] ^ FT1[(y0 >> 16) & 255] + ^ FT2[(y1 >> 8) & 255] ^ FT3[y2 & 255] ^ k[19]; + y0 = FT0[(x0 >> 24) & 255] ^ FT1[(x1 >> 16) & 255] + ^ FT2[(x2 >> 8) & 255] ^ FT3[x3 & 255] ^ k[20]; + y1 = FT0[(x1 >> 24) & 255] ^ FT1[(x2 >> 16) & 255] + ^ FT2[(x3 >> 8) & 255] ^ FT3[x0 & 255] ^ k[21]; + y2 = FT0[(x2 >> 24) & 255] ^ FT1[(x3 >> 16) & 255] + ^ FT2[(x0 >> 8) & 255] ^ FT3[x1 & 255] ^ k[22]; + y3 = FT0[(x3 >> 24) & 255] ^ FT1[(x0 >> 16) & 255] + ^ FT2[(x1 >> 8) & 255] ^ FT3[x2 & 255] ^ k[23]; + x0 = FT0[(y0 >> 24) & 255] ^ FT1[(y1 >> 16) & 255] + ^ FT2[(y2 >> 8) & 255] ^ FT3[y3 & 255] ^ k[24]; + x1 = FT0[(y1 >> 24) & 255] ^ FT1[(y2 >> 16) & 255] + ^ FT2[(y3 >> 8) & 255] ^ FT3[y0 & 255] ^ k[25]; + x2 = FT0[(y2 >> 24) & 255] ^ FT1[(y3 >> 16) & 255] + ^ FT2[(y0 >> 8) & 255] ^ FT3[y1 & 255] ^ k[26]; + x3 = FT0[(y3 >> 24) & 255] ^ FT1[(y0 >> 16) & 255] + ^ FT2[(y1 >> 8) & 255] ^ FT3[y2 & 255] ^ k[27]; + y0 = FT0[(x0 >> 24) & 255] ^ FT1[(x1 >> 16) & 255] + ^ FT2[(x2 >> 8) & 255] ^ FT3[x3 & 255] ^ k[28]; + y1 = FT0[(x1 >> 24) & 255] ^ FT1[(x2 >> 16) & 255] + ^ FT2[(x3 >> 8) & 255] ^ FT3[x0 & 255] ^ k[29]; + y2 = FT0[(x2 >> 24) & 255] ^ FT1[(x3 >> 16) & 255] + ^ FT2[(x0 >> 8) & 255] ^ FT3[x1 & 255] ^ k[30]; + y3 = FT0[(x3 >> 24) & 255] ^ FT1[(x0 >> 16) & 255] + ^ FT2[(x1 >> 8) & 255] ^ FT3[x2 & 255] ^ k[31]; + x0 = FT0[(y0 >> 24) & 255] ^ FT1[(y1 >> 16) & 255] + ^ FT2[(y2 >> 8) & 255] ^ FT3[y3 & 255] ^ k[32]; + x1 = FT0[(y1 >> 24) & 255] ^ FT1[(y2 >> 16) & 255] + ^ FT2[(y3 >> 8) & 255] ^ FT3[y0 & 255] ^ k[33]; + x2 = FT0[(y2 >> 24) & 255] ^ FT1[(y3 >> 16) & 255] + ^ 
FT2[(y0 >> 8) & 255] ^ FT3[y1 & 255] ^ k[34]; + x3 = FT0[(y3 >> 24) & 255] ^ FT1[(y0 >> 16) & 255] + ^ FT2[(y1 >> 8) & 255] ^ FT3[y2 & 255] ^ k[35]; + y0 = FT0[(x0 >> 24) & 255] ^ FT1[(x1 >> 16) & 255] + ^ FT2[(x2 >> 8) & 255] ^ FT3[x3 & 255] ^ k[36]; + y1 = FT0[(x1 >> 24) & 255] ^ FT1[(x2 >> 16) & 255] + ^ FT2[(x3 >> 8) & 255] ^ FT3[x0 & 255] ^ k[37]; + y2 = FT0[(x2 >> 24) & 255] ^ FT1[(x3 >> 16) & 255] + ^ FT2[(x0 >> 8) & 255] ^ FT3[x1 & 255] ^ k[38]; + y3 = FT0[(x3 >> 24) & 255] ^ FT1[(x0 >> 16) & 255] + ^ FT2[(x1 >> 8) & 255] ^ FT3[x2 & 255] ^ k[39]; + x0 = ((FS[(y0 >> 24) & 255] << 24) | (FS[(y1 >> 16) & 255] << 16) + | (FS[(y2 >> 8) & 255] << 8) | FS[y3 & 255]) ^ k[40]; + x1 = ((FS[(y1 >> 24) & 255] << 24) | (FS[(y2 >> 16) & 255] << 16) + | (FS[(y3 >> 8) & 255] << 8) | FS[y0 & 255]) ^ k[41]; + x2 = ((FS[(y2 >> 24) & 255] << 24) | (FS[(y3 >> 16) & 255] << 16) + | (FS[(y0 >> 8) & 255] << 8) | FS[y1 & 255]) ^ k[42]; + x3 = ((FS[(y3 >> 24) & 255] << 24) | (FS[(y0 >> 16) & 255] << 16) + | (FS[(y1 >> 8) & 255] << 8) | FS[y2 & 255]) ^ k[43]; + Bits.writeInt(out, off, x0); + Bits.writeInt(out, off + 4, x1); + Bits.writeInt(out, off + 8, x2); + Bits.writeInt(out, off + 12, x3); + } + + private void decryptBlock(byte[] in, byte[] out, int off) { + int[] k = decKey; + int x0 = Bits.readInt(in, off) ^ k[0]; + int x1 = Bits.readInt(in, off + 4) ^ k[1]; + int x2 = Bits.readInt(in, off + 8) ^ k[2]; + int x3 = Bits.readInt(in, off + 12) ^ k[3]; + int y0 = RT0[(x0 >> 24) & 255] ^ RT1[(x3 >> 16) & 255] + ^ RT2[(x2 >> 8) & 255] ^ RT3[x1 & 255] ^ k[4]; + int y1 = RT0[(x1 >> 24) & 255] ^ RT1[(x0 >> 16) & 255] + ^ RT2[(x3 >> 8) & 255] ^ RT3[x2 & 255] ^ k[5]; + int y2 = RT0[(x2 >> 24) & 255] ^ RT1[(x1 >> 16) & 255] + ^ RT2[(x0 >> 8) & 255] ^ RT3[x3 & 255] ^ k[6]; + int y3 = RT0[(x3 >> 24) & 255] ^ RT1[(x2 >> 16) & 255] + ^ RT2[(x1 >> 8) & 255] ^ RT3[x0 & 255] ^ k[7]; + x0 = RT0[(y0 >> 24) & 255] ^ RT1[(y3 >> 16) & 255] + ^ RT2[(y2 >> 8) & 255] ^ RT3[y1 & 255] ^ k[8]; + x1 = RT0[(y1 
>> 24) & 255] ^ RT1[(y0 >> 16) & 255] + ^ RT2[(y3 >> 8) & 255] ^ RT3[y2 & 255] ^ k[9]; + x2 = RT0[(y2 >> 24) & 255] ^ RT1[(y1 >> 16) & 255] + ^ RT2[(y0 >> 8) & 255] ^ RT3[y3 & 255] ^ k[10]; + x3 = RT0[(y3 >> 24) & 255] ^ RT1[(y2 >> 16) & 255] + ^ RT2[(y1 >> 8) & 255] ^ RT3[y0 & 255] ^ k[11]; + y0 = RT0[(x0 >> 24) & 255] ^ RT1[(x3 >> 16) & 255] + ^ RT2[(x2 >> 8) & 255] ^ RT3[x1 & 255] ^ k[12]; + y1 = RT0[(x1 >> 24) & 255] ^ RT1[(x0 >> 16) & 255] + ^ RT2[(x3 >> 8) & 255] ^ RT3[x2 & 255] ^ k[13]; + y2 = RT0[(x2 >> 24) & 255] ^ RT1[(x1 >> 16) & 255] + ^ RT2[(x0 >> 8) & 255] ^ RT3[x3 & 255] ^ k[14]; + y3 = RT0[(x3 >> 24) & 255] ^ RT1[(x2 >> 16) & 255] + ^ RT2[(x1 >> 8) & 255] ^ RT3[x0 & 255] ^ k[15]; + x0 = RT0[(y0 >> 24) & 255] ^ RT1[(y3 >> 16) & 255] + ^ RT2[(y2 >> 8) & 255] ^ RT3[y1 & 255] ^ k[16]; + x1 = RT0[(y1 >> 24) & 255] ^ RT1[(y0 >> 16) & 255] + ^ RT2[(y3 >> 8) & 255] ^ RT3[y2 & 255] ^ k[17]; + x2 = RT0[(y2 >> 24) & 255] ^ RT1[(y1 >> 16) & 255] + ^ RT2[(y0 >> 8) & 255] ^ RT3[y3 & 255] ^ k[18]; + x3 = RT0[(y3 >> 24) & 255] ^ RT1[(y2 >> 16) & 255] + ^ RT2[(y1 >> 8) & 255] ^ RT3[y0 & 255] ^ k[19]; + y0 = RT0[(x0 >> 24) & 255] ^ RT1[(x3 >> 16) & 255] + ^ RT2[(x2 >> 8) & 255] ^ RT3[x1 & 255] ^ k[20]; + y1 = RT0[(x1 >> 24) & 255] ^ RT1[(x0 >> 16) & 255] + ^ RT2[(x3 >> 8) & 255] ^ RT3[x2 & 255] ^ k[21]; + y2 = RT0[(x2 >> 24) & 255] ^ RT1[(x1 >> 16) & 255] + ^ RT2[(x0 >> 8) & 255] ^ RT3[x3 & 255] ^ k[22]; + y3 = RT0[(x3 >> 24) & 255] ^ RT1[(x2 >> 16) & 255] + ^ RT2[(x1 >> 8) & 255] ^ RT3[x0 & 255] ^ k[23]; + x0 = RT0[(y0 >> 24) & 255] ^ RT1[(y3 >> 16) & 255] + ^ RT2[(y2 >> 8) & 255] ^ RT3[y1 & 255] ^ k[24]; + x1 = RT0[(y1 >> 24) & 255] ^ RT1[(y0 >> 16) & 255] + ^ RT2[(y3 >> 8) & 255] ^ RT3[y2 & 255] ^ k[25]; + x2 = RT0[(y2 >> 24) & 255] ^ RT1[(y1 >> 16) & 255] + ^ RT2[(y0 >> 8) & 255] ^ RT3[y3 & 255] ^ k[26]; + x3 = RT0[(y3 >> 24) & 255] ^ RT1[(y2 >> 16) & 255] + ^ RT2[(y1 >> 8) & 255] ^ RT3[y0 & 255] ^ k[27]; + y0 = RT0[(x0 >> 24) & 255] ^ RT1[(x3 >> 16) & 255] + ^ 
RT2[(x2 >> 8) & 255] ^ RT3[x1 & 255] ^ k[28]; + y1 = RT0[(x1 >> 24) & 255] ^ RT1[(x0 >> 16) & 255] + ^ RT2[(x3 >> 8) & 255] ^ RT3[x2 & 255] ^ k[29]; + y2 = RT0[(x2 >> 24) & 255] ^ RT1[(x1 >> 16) & 255] + ^ RT2[(x0 >> 8) & 255] ^ RT3[x3 & 255] ^ k[30]; + y3 = RT0[(x3 >> 24) & 255] ^ RT1[(x2 >> 16) & 255] + ^ RT2[(x1 >> 8) & 255] ^ RT3[x0 & 255] ^ k[31]; + x0 = RT0[(y0 >> 24) & 255] ^ RT1[(y3 >> 16) & 255] + ^ RT2[(y2 >> 8) & 255] ^ RT3[y1 & 255] ^ k[32]; + x1 = RT0[(y1 >> 24) & 255] ^ RT1[(y0 >> 16) & 255] + ^ RT2[(y3 >> 8) & 255] ^ RT3[y2 & 255] ^ k[33]; + x2 = RT0[(y2 >> 24) & 255] ^ RT1[(y1 >> 16) & 255] + ^ RT2[(y0 >> 8) & 255] ^ RT3[y3 & 255] ^ k[34]; + x3 = RT0[(y3 >> 24) & 255] ^ RT1[(y2 >> 16) & 255] + ^ RT2[(y1 >> 8) & 255] ^ RT3[y0 & 255] ^ k[35]; + y0 = RT0[(x0 >> 24) & 255] ^ RT1[(x3 >> 16) & 255] + ^ RT2[(x2 >> 8) & 255] ^ RT3[x1 & 255] ^ k[36]; + y1 = RT0[(x1 >> 24) & 255] ^ RT1[(x0 >> 16) & 255] + ^ RT2[(x3 >> 8) & 255] ^ RT3[x2 & 255] ^ k[37]; + y2 = RT0[(x2 >> 24) & 255] ^ RT1[(x1 >> 16) & 255] + ^ RT2[(x0 >> 8) & 255] ^ RT3[x3 & 255] ^ k[38]; + y3 = RT0[(x3 >> 24) & 255] ^ RT1[(x2 >> 16) & 255] + ^ RT2[(x1 >> 8) & 255] ^ RT3[x0 & 255] ^ k[39]; + x0 = ((RS[(y0 >> 24) & 255] << 24) | (RS[(y3 >> 16) & 255] << 16) + | (RS[(y2 >> 8) & 255] << 8) | RS[y1 & 255]) ^ k[40]; + x1 = ((RS[(y1 >> 24) & 255] << 24) | (RS[(y0 >> 16) & 255] << 16) + | (RS[(y3 >> 8) & 255] << 8) | RS[y2 & 255]) ^ k[41]; + x2 = ((RS[(y2 >> 24) & 255] << 24) | (RS[(y1 >> 16) & 255] << 16) + | (RS[(y0 >> 8) & 255] << 8) | RS[y3 & 255]) ^ k[42]; + x3 = ((RS[(y3 >> 24) & 255] << 24) | (RS[(y2 >> 16) & 255] << 16) + | (RS[(y1 >> 8) & 255] << 8) | RS[y0 & 255]) ^ k[43]; + Bits.writeInt(out, off, x0); + Bits.writeInt(out, off + 4, x1); + Bits.writeInt(out, off + 8, x2); + Bits.writeInt(out, off + 12, x3); + } + + @Override + public int getKeyLength() { + return 16; + } + +} diff --git a/modules/h2/src/main/java/org/h2/security/BlockCipher.java 
b/modules/h2/src/main/java/org/h2/security/BlockCipher.java new file mode 100644 index 0000000000000..08e1a7ab8b5d7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/BlockCipher.java @@ -0,0 +1,53 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.security; + +/** + * A block cipher is a data encryption algorithm that operates on blocks. + */ +public interface BlockCipher { + + /** + * Blocks sizes are always multiples of this number. + */ + int ALIGN = 16; + + /** + * Set the encryption key used for encrypting and decrypting. + * The key needs to be 16 bytes long. + * + * @param key the key + */ + void setKey(byte[] key); + + /** + * Encrypt a number of bytes. This is done in-place, that + * means the bytes are overwritten. + * + * @param bytes the byte array + * @param off the start index + * @param len the number of bytes to encrypt + */ + void encrypt(byte[] bytes, int off, int len); + + /** + * Decrypt a number of bytes. This is done in-place, that + * means the bytes are overwritten. + * + * @param bytes the byte array + * @param off the start index + * @param len the number of bytes to decrypt + */ + void decrypt(byte[] bytes, int off, int len); + + /** + * Get the length of the key in bytes. + * + * @return the length of the key + */ + int getKeyLength(); + +} diff --git a/modules/h2/src/main/java/org/h2/security/CipherFactory.java b/modules/h2/src/main/java/org/h2/security/CipherFactory.java new file mode 100644 index 0000000000000..b2448c397d73e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/CipherFactory.java @@ -0,0 +1,411 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.security; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.net.ServerSocket; +import java.net.Socket; +import java.security.KeyFactory; +import java.security.KeyStore; +import java.security.PrivateKey; +import java.security.Security; +import java.security.cert.Certificate; +import java.security.cert.CertificateFactory; +import java.security.spec.PKCS8EncodedKeySpec; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.LinkedHashSet; +import java.util.LinkedList; +import java.util.List; +import java.util.Properties; + +import javax.net.ServerSocketFactory; +import javax.net.ssl.SSLServerSocket; +import javax.net.ssl.SSLServerSocketFactory; +import javax.net.ssl.SSLSocket; +import javax.net.ssl.SSLSocketFactory; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.StringUtils; + +/** + * A factory to create new block cipher objects. + */ +public class CipherFactory { + + /** + * The default password to use for the .h2.keystore file + */ + public static final String KEYSTORE_PASSWORD = + "h2pass"; + + /** + * The security property which can prevent anonymous TLS connections. + * Introduced into Java 6, 7, 8 in updates from July 2015. + */ + public static final String LEGACY_ALGORITHMS_SECURITY_KEY = + "jdk.tls.legacyAlgorithms"; + + /** + * The value of {@value #LEGACY_ALGORITHMS_SECURITY_KEY} security + * property at the time of class initialization. + * Null if it is not set. 
+ */ + public static final String DEFAULT_LEGACY_ALGORITHMS = getLegacyAlgorithmsSilently(); + + private static final String KEYSTORE = + "~/.h2.keystore"; + private static final String KEYSTORE_KEY = + "javax.net.ssl.keyStore"; + private static final String KEYSTORE_PASSWORD_KEY = + "javax.net.ssl.keyStorePassword"; + + + private CipherFactory() { + // utility class + } + + /** + * Get a new block cipher object for the given algorithm. + * + * @param algorithm the algorithm + * @return a new cipher object + */ + public static BlockCipher getBlockCipher(String algorithm) { + if ("XTEA".equalsIgnoreCase(algorithm)) { + return new XTEA(); + } else if ("AES".equalsIgnoreCase(algorithm)) { + return new AES(); + } else if ("FOG".equalsIgnoreCase(algorithm)) { + return new Fog(); + } + throw DbException.get(ErrorCode.UNSUPPORTED_CIPHER, algorithm); + } + + /** + * Create a secure client socket that is connected to the given address and + * port. + * + * @param address the address to connect to + * @param port the port + * @return the socket + */ + public static Socket createSocket(InetAddress address, int port) + throws IOException { + Socket socket = null; + setKeystore(); + SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault(); + SSLSocket secureSocket = (SSLSocket) f.createSocket(); + secureSocket.connect(new InetSocketAddress(address, port), + SysProperties.SOCKET_CONNECT_TIMEOUT); + secureSocket.setEnabledProtocols( + disableSSL(secureSocket.getEnabledProtocols())); + if (SysProperties.ENABLE_ANONYMOUS_TLS) { + String[] list = enableAnonymous( + secureSocket.getEnabledCipherSuites(), + secureSocket.getSupportedCipherSuites()); + secureSocket.setEnabledCipherSuites(list); + } + socket = secureSocket; + return socket; + } + +/** + * Create a secure server socket. If a bind address is specified, the + * socket is only bound to this address. 
+ * If h2.enableAnonymousTLS is true, an attempt is made to modify + * the security property jdk.tls.legacyAlgorithms (in newer JVMs) to allow + * anonymous TLS. This system change is effectively permanent for the + * lifetime of the JVM. + * @see #removeAnonFromLegacyAlgorithms() + * + * @param port the port to listen on + * @param bindAddress the address to bind to, or null to bind to all + * addresses + * @return the server socket + */ + public static ServerSocket createServerSocket(int port, + InetAddress bindAddress) throws IOException { + ServerSocket socket = null; + if (SysProperties.ENABLE_ANONYMOUS_TLS) { + removeAnonFromLegacyAlgorithms(); + } + setKeystore(); + ServerSocketFactory f = SSLServerSocketFactory.getDefault(); + SSLServerSocket secureSocket; + if (bindAddress == null) { + secureSocket = (SSLServerSocket) f.createServerSocket(port); + } else { + secureSocket = (SSLServerSocket) f.createServerSocket(port, 0, bindAddress); + } + secureSocket.setEnabledProtocols( + disableSSL(secureSocket.getEnabledProtocols())); + if (SysProperties.ENABLE_ANONYMOUS_TLS) { + String[] list = enableAnonymous( + secureSocket.getEnabledCipherSuites(), + secureSocket.getSupportedCipherSuites()); + secureSocket.setEnabledCipherSuites(list); + } + + socket = secureSocket; + return socket; + } + + /** + * Removes DH_anon and ECDH_anon from a comma separated list of ciphers. + * Only the first occurrence is removed. + * If there is nothing to remove, returns the reference to the argument. 
+ * @param list a list of names separated by commas (and spaces) + * @return a new string without DH_anon and ECDH_anon items, + * or the original if none were found + */ + public static String removeDhAnonFromCommaSeparatedList(String list) { + if (list == null) { + return list; + } + List<String> algorithms = new LinkedList<>(Arrays.asList(list.split("\\s*,\\s*"))); + boolean dhAnonRemoved = algorithms.remove("DH_anon"); + boolean ecdhAnonRemoved = algorithms.remove("ECDH_anon"); + if (dhAnonRemoved || ecdhAnonRemoved) { + String string = Arrays.toString(algorithms.toArray(new String[algorithms.size()])); + return (!algorithms.isEmpty()) ? string.substring(1, string.length() - 1) : ""; + } + return list; + } + + /** + * Attempts to weaken the security properties to allow anonymous TLS. + * New JREs would not choose an anonymous cipher suite in a TLS handshake + * if server-side security property + * {@value #LEGACY_ALGORITHMS_SECURITY_KEY} + * were not modified from the default value. + *

+ * NOTE: In current (as of 2016) default implementations of JSSE which use + * this security property, the value is permanently cached inside the + * ServerHandshake class upon its first use. + * Therefore the modification accomplished by this method has to be done + * before the first use of a server SSL socket. + * Later changes to this property will not have any effect on server socket + * behavior. + */ + public static synchronized void removeAnonFromLegacyAlgorithms() { + String legacyOriginal = getLegacyAlgorithmsSilently(); + if (legacyOriginal == null) { + return; + } + String legacyNew = removeDhAnonFromCommaSeparatedList(legacyOriginal); + if (!legacyOriginal.equals(legacyNew)) { + setLegacyAlgorithmsSilently(legacyNew); + } + } + + /** + * Attempts to reset the security property to the default value. + * The default value of {@value #LEGACY_ALGORITHMS_SECURITY_KEY} was + * obtained at the time of class initialization. + *

    + * NOTE: Resetting the property might not have any effect on server + * socket behavior. + * @see #removeAnonFromLegacyAlgorithms() + */ + public static synchronized void resetDefaultLegacyAlgorithms() { + setLegacyAlgorithmsSilently(DEFAULT_LEGACY_ALGORITHMS); + } + + /** + * Returns the security property {@value #LEGACY_ALGORITHMS_SECURITY_KEY}. + * Ignores security exceptions. + * + * @return the value of the security property, or null if not set + * or not accessible + */ + public static String getLegacyAlgorithmsSilently() { + String defaultLegacyAlgorithms = null; + try { + defaultLegacyAlgorithms = Security.getProperty(LEGACY_ALGORITHMS_SECURITY_KEY); + } catch (SecurityException e) { + // ignore + } + return defaultLegacyAlgorithms; + } + + private static void setLegacyAlgorithmsSilently(String legacyAlgorithms) { + if (legacyAlgorithms == null) { + return; + } + try { + Security.setProperty(LEGACY_ALGORITHMS_SECURITY_KEY, legacyAlgorithms); + } catch (SecurityException e) { + // ignore + } + } + + private static byte[] getKeyStoreBytes(KeyStore store, String password) + throws IOException { + ByteArrayOutputStream bout = new ByteArrayOutputStream(); + try { + store.store(bout, password.toCharArray()); + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + return bout.toByteArray(); + } + + /** + * Get the keystore object using the given password. + * + * @param password the keystore password + * @return the keystore + */ + public static KeyStore getKeyStore(String password) throws IOException { + try { + // The following source code can be re-generated + // if you have a keystore file. + // This code is (hopefully) more Java version independent + // than using keystores directly. 
See also: + // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4887561 + // (1.4.2 cannot read keystore written with 1.4.1) + // --- generated code start --- + + KeyStore store = KeyStore.getInstance(KeyStore.getDefaultType()); + + store.load(null, password.toCharArray()); + KeyFactory keyFactory = KeyFactory.getInstance("RSA"); + store.load(null, password.toCharArray()); + PKCS8EncodedKeySpec keySpec = new PKCS8EncodedKeySpec( + StringUtils.convertHexToBytes( + "30820277020100300d06092a864886f70d010101" + + "0500048202613082025d02010002818100dc0a13" + + "c602b7141110eade2f051b54777b060d0f74e6a1" + + "10f9cce81159f271ebc88d8e8aa1f743b505fc2e" + + "7dfe38d33b8d3f64d1b363d1af4d877833897954" + + "cbaec2fa384c22a415498cf306bb07ac09b76b00" + + "1cd68bf77ea0a628f5101959cf2993a9c23dbee7" + + "9b19305977f8715ae78d023471194cc900b231ee" + + "cb0aaea98d02030100010281810099aa4ff4d0a0" + + "9a5af0bd953cb10c4d08c3d98df565664ac5582e" + + "494314d5c3c92dddedd5d316a32a206be4ec0846" + + "16fe57be15e27cad111aa3c21fa79e32258c6ca8" + + "430afc69eddd52d3b751b37da6b6860910b94653" + + "192c0db1d02abcfd6ce14c01f238eec7c20bd3bb" + + "750940004bacba2880349a9494d10e139ecb2355" + + "d101024100ffdc3defd9c05a2d377ef6019fa62b" + + "3fbd5b0020a04cc8533bca730e1f6fcf5dfceea1" + + "b044fbe17d9eababfbc7d955edad6bc60f9be826" + + "ad2c22ba77d19a9f65024100dc28d43fdbbc9385" + + "2cc3567093157702bc16f156f709fb7db0d9eec0" + + "28f41fd0edcd17224c866e66be1744141fb724a1" + + "0fd741c8a96afdd9141b36d67fff6309024077b1" + + "cddbde0f69604bdcfe33263fb36ddf24aa3b9922" + + "327915b890f8a36648295d0139ecdf68c245652c" + + "4489c6257b58744fbdd961834a4cab201801a3b1" + + "e52d024100b17142e8991d1b350a0802624759d4" + + "8ae2b8071a158ff91fabeb6a8f7c328e762143dc" + + "726b8529f42b1fab6220d1c676fdc27ba5d44e84" + + "7c72c52064afd351a902407c6e23fe35bcfcd1a6" + + "62aa82a2aa725fcece311644d5b6e3894853fd4c" + + "e9fe78218c957b1ff03fc9e5ef8ffeb6bd58235f" + + "6a215c97d354fdace7e781e4a63e8b")); + PrivateKey privateKey = 
keyFactory.generatePrivate(keySpec); + Certificate[] certs = { CertificateFactory + .getInstance("X.509") + .generateCertificate( + new ByteArrayInputStream( + StringUtils.convertHexToBytes( + "3082018b3081f502044295ce6b300d06092a8648" + + "86f70d0101040500300d310b3009060355040313" + + "024832301e170d3035303532363133323630335a" + + "170d3337303933303036353734375a300d310b30" + + "0906035504031302483230819f300d06092a8648" + + "86f70d010101050003818d0030818902818100dc" + + "0a13c602b7141110eade2f051b54777b060d0f74" + + "e6a110f9cce81159f271ebc88d8e8aa1f743b505" + + "fc2e7dfe38d33b8d3f64d1b363d1af4d87783389" + + "7954cbaec2fa384c22a415498cf306bb07ac09b7" + + "6b001cd68bf77ea0a628f5101959cf2993a9c23d" + + "bee79b19305977f8715ae78d023471194cc900b2" + + "31eecb0aaea98d0203010001300d06092a864886" + + "f70d01010405000381810083f4401a279453701b" + + "ef9a7681a5b8b24f153f7d18c7c892133d97bd5f" + + "13736be7505290a445a7d5ceb75522403e509751" + + "5cd966ded6351ff60d5193de34cd36e5cb04d380" + + "398e66286f99923fd92296645fd4ada45844d194" + + "dfd815e6cd57f385c117be982809028bba1116c8" + + "5740b3d27a55b1a0948bf291ddba44bed337b9"))), }; + store.setKeyEntry("h2", privateKey, password.toCharArray(), certs); + // --- generated code end --- + return store; + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + + private static void setKeystore() throws IOException { + Properties p = System.getProperties(); + if (p.getProperty(KEYSTORE_KEY) == null) { + String fileName = KEYSTORE; + byte[] data = getKeyStoreBytes(getKeyStore( + KEYSTORE_PASSWORD), KEYSTORE_PASSWORD); + boolean needWrite = true; + if (FileUtils.exists(fileName) && FileUtils.size(fileName) == data.length) { + // don't need to overwrite the file if it did not change + InputStream fin = FileUtils.newInputStream(fileName); + byte[] now = IOUtils.readBytesAndClose(fin, 0); + if (now != null && Arrays.equals(data, now)) { + needWrite = false; + } + } + if (needWrite) { + try { + OutputStream out = 
FileUtils.newOutputStream(fileName, false); + out.write(data); + out.close(); + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + String absolutePath = FileUtils.toRealPath(fileName); + System.setProperty(KEYSTORE_KEY, absolutePath); + } + if (p.getProperty(KEYSTORE_PASSWORD_KEY) == null) { + System.setProperty(KEYSTORE_PASSWORD_KEY, KEYSTORE_PASSWORD); + } + } + + private static String[] enableAnonymous(String[] enabled, String[] supported) { + LinkedHashSet<String> set = new LinkedHashSet<>(); + for (String x : supported) { + if (!x.startsWith("SSL") && x.contains("_anon_") && + (x.contains("_AES_") || x.contains("_3DES_")) && x.contains("_SHA")) { + set.add(x); + } + } + Collections.addAll(set, enabled); + return set.toArray(new String[0]); + } + + private static String[] disableSSL(String[] enabled) { + HashSet<String> set = new HashSet<>(); + for (String x : enabled) { + if (!x.startsWith("SSL")) { + set.add(x); + } + } + return set.toArray(new String[0]); + } + +} diff --git a/modules/h2/src/main/java/org/h2/security/Fog.java b/modules/h2/src/main/java/org/h2/security/Fog.java new file mode 100644 index 0000000000000..ad54a90e4933d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/Fog.java @@ -0,0 +1,75 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.security; + +import org.h2.util.Bits; + +/** + * A pseudo-encryption algorithm that makes the data appear to be + * encrypted. This algorithm is cryptographically extremely weak, and should + * only be used to hide data from reading the plain text using a text editor.
+ */ +public class Fog implements BlockCipher { + + private int key; + + @Override + public void encrypt(byte[] bytes, int off, int len) { + for (int i = off; i < off + len; i += 16) { + encryptBlock(bytes, bytes, i); + } + } + + @Override + public void decrypt(byte[] bytes, int off, int len) { + for (int i = off; i < off + len; i += 16) { + decryptBlock(bytes, bytes, i); + } + } + + private void encryptBlock(byte[] in, byte[] out, int off) { + int x0 = Bits.readInt(in, off); + int x1 = Bits.readInt(in, off + 4); + int x2 = Bits.readInt(in, off + 8); + int x3 = Bits.readInt(in, off + 12); + int k = key; + x0 = Integer.rotateLeft(x0 ^ k, x1); + x2 = Integer.rotateLeft(x2 ^ k, x1); + x1 = Integer.rotateLeft(x1 ^ k, x0); + x3 = Integer.rotateLeft(x3 ^ k, x0); + Bits.writeInt(out, off, x0); + Bits.writeInt(out, off + 4, x1); + Bits.writeInt(out, off + 8, x2); + Bits.writeInt(out, off + 12, x3); + } + + private void decryptBlock(byte[] in, byte[] out, int off) { + int x0 = Bits.readInt(in, off); + int x1 = Bits.readInt(in, off + 4); + int x2 = Bits.readInt(in, off + 8); + int x3 = Bits.readInt(in, off + 12); + int k = key; + x1 = Integer.rotateRight(x1, x0) ^ k; + x3 = Integer.rotateRight(x3, x0) ^ k; + x0 = Integer.rotateRight(x0, x1) ^ k; + x2 = Integer.rotateRight(x2, x1) ^ k; + Bits.writeInt(out, off, x0); + Bits.writeInt(out, off + 4, x1); + Bits.writeInt(out, off + 8, x2); + Bits.writeInt(out, off + 12, x3); + } + + @Override + public int getKeyLength() { + return 16; + } + + @Override + public void setKey(byte[] key) { + this.key = (int) Bits.readLong(key, 0); + } + +} diff --git a/modules/h2/src/main/java/org/h2/security/SHA256.java b/modules/h2/src/main/java/org/h2/security/SHA256.java new file mode 100644 index 0000000000000..c89b5ac00023a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/SHA256.java @@ -0,0 +1,156 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.security; + +import java.security.GeneralSecurityException; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.Arrays; + +import javax.crypto.Mac; +import javax.crypto.spec.SecretKeySpec; + +import org.h2.util.Bits; + +/** + * This class implements the cryptographic hash function SHA-256. + */ +public class SHA256 { + + private SHA256() { + } + + /** + * Calculate the hash code by using the given salt. The salt is appended + * after the data before the hash code is calculated. After generating the + * hash code, the data and all internal buffers are filled with zeros to + * avoid keeping insecure data in memory longer than required (and possibly + * swapped to disk). + * + * @param data the data to hash + * @param salt the salt to use + * @return the hash code + */ + public static byte[] getHashWithSalt(byte[] data, byte[] salt) { + byte[] buff = new byte[data.length + salt.length]; + System.arraycopy(data, 0, buff, 0, data.length); + System.arraycopy(salt, 0, buff, data.length, salt.length); + return getHash(buff, true); + } + + /** + * Calculate the hash of a password by prepending the user name and a '@' + * character. Both the user name and the password are encoded to a byte + * array using UTF-16. After generating the hash code, the password array + * and all internal buffers are filled with zeros to avoid keeping the plain + * text password in memory longer than required (and possibly swapped to + * disk). 
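The "encoded to a byte array using UTF-16" wording above means big-endian, one `char` at a time: the manual high-byte/low-byte loop in `getKeyPasswordHash` below matches `String.getBytes(StandardCharsets.UTF_16BE)` for BMP characters. A stdlib-only check of that equivalence (class name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf16Check {
    // Same per-char layout as SHA256.getKeyPasswordHash: high byte, then low byte.
    static byte[] manualUtf16be(String s) {
        byte[] buff = new byte[2 * s.length()];
        int n = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            buff[n++] = (byte) (c >> 8);
            buff[n++] = (byte) c;
        }
        return buff;
    }

    public static void main(String[] args) {
        String s = "sa@pwd"; // user name + '@' + password, as the method concatenates
        System.out.println(Arrays.equals(
                manualUtf16be(s), s.getBytes(StandardCharsets.UTF_16BE))); // true
    }
}
```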
+ * + * @param userName the user name + * @param password the password + * @return the hash code + */ + public static byte[] getKeyPasswordHash(String userName, char[] password) { + String user = userName + "@"; + byte[] buff = new byte[2 * (user.length() + password.length)]; + int n = 0; + for (int i = 0, length = user.length(); i < length; i++) { + char c = user.charAt(i); + buff[n++] = (byte) (c >> 8); + buff[n++] = (byte) c; + } + for (char c : password) { + buff[n++] = (byte) (c >> 8); + buff[n++] = (byte) c; + } + Arrays.fill(password, (char) 0); + return getHash(buff, true); + } + + /** + * Calculate the hash-based message authentication code. + * + * @param key the key + * @param message the message + * @return the hash + */ + public static byte[] getHMAC(byte[] key, byte[] message) { + return initMac(key).doFinal(message); + } + + private static Mac initMac(byte[] key) { + // Java forbids empty keys + if (key.length == 0) { + key = new byte[1]; + } + try { + Mac mac = Mac.getInstance("HmacSHA256"); + mac.init(new SecretKeySpec(key, "HmacSHA256")); + return mac; + } catch (GeneralSecurityException e) { + throw new RuntimeException(e); + } + } + + /** + * Calculate the hash using the password-based key derivation function 2. 
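Java's `SecretKeySpec` rejects empty keys, hence the one-byte substitution in `initMac` above. The `HmacSHA256` algorithm it requests is the standard JDK provider, which can be exercised directly against an RFC 4231 test vector without any H2 code:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class HmacVector {
    static String hex(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02x", x & 0xff));
        return sb.toString();
    }

    static byte[] hmac(byte[] key, byte[] msg) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256"); // same algorithm name initMac() uses
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(msg);
    }

    public static void main(String[] args) throws Exception {
        // RFC 4231, test case 2: key "Jefe", data "what do ya want for nothing?"
        byte[] tag = hmac("Jefe".getBytes(StandardCharsets.US_ASCII),
                "what do ya want for nothing?".getBytes(StandardCharsets.US_ASCII));
        System.out.println(hex(tag));
        // 5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843
    }
}
```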
+ * + * @param password the password + * @param salt the salt + * @param iterations the number of iterations + * @param resultLen the number of bytes in the result + * @return the result + */ + public static byte[] getPBKDF2(byte[] password, byte[] salt, + int iterations, int resultLen) { + byte[] result = new byte[resultLen]; + Mac mac = initMac(password); + int len = 64 + Math.max(32, salt.length + 4); + byte[] message = new byte[len]; + byte[] macRes = null; + for (int k = 1, offset = 0; offset < resultLen; k++, offset += 32) { + for (int i = 0; i < iterations; i++) { + if (i == 0) { + System.arraycopy(salt, 0, message, 0, salt.length); + Bits.writeInt(message, salt.length, k); + len = salt.length + 4; + } else { + System.arraycopy(macRes, 0, message, 0, 32); + len = 32; + } + mac.update(message, 0, len); + macRes = mac.doFinal(); + for (int j = 0; j < 32 && j + offset < resultLen; j++) { + result[j + offset] ^= macRes[j]; + } + } + } + Arrays.fill(password, (byte) 0); + return result; + } + + /** + * Calculate the hash code for the given data. + * + * @param data the data to hash + * @param nullData if the data should be filled with zeros after calculating + * the hash code + * @return the hash code + */ + public static byte[] getHash(byte[] data, boolean nullData) { + byte[] result; + try { + result = MessageDigest.getInstance("SHA-256").digest(data); + } catch (NoSuchAlgorithmException e) { + throw new RuntimeException(e); + } + if (nullData) { + Arrays.fill(data, (byte) 0); + } + return result; + } + +} diff --git a/modules/h2/src/main/java/org/h2/security/SecureFileStore.java b/modules/h2/src/main/java/org/h2/security/SecureFileStore.java new file mode 100644 index 0000000000000..4e24a61b4f4cd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/SecureFileStore.java @@ -0,0 +1,114 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
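The `getPBKDF2` loop above is PBKDF2 (RFC 2898) specialized to HMAC-SHA-256, which the JDK also ships as `PBKDF2WithHmacSHA256`. For ASCII passwords (where H2's `byte[]` input equals the UTF-8 bytes of the JDK's `char[]` input) the two can be cross-checked; a sketch with arbitrary example parameters:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.util.Arrays;

public class Pbkdf2Check {
    static byte[] derive(char[] password, byte[] salt, int iterations, int lenBytes)
            throws Exception {
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        // PBEKeySpec takes the desired key length in bits, not bytes
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, lenBytes * 8);
        return f.generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = {1, 2, 3, 4};
        byte[] a = derive("secret".toCharArray(), salt, 1000, 32);
        byte[] b = derive("secret".toCharArray(), salt, 1000, 32);
        System.out.println(a.length == 32 && Arrays.equals(a, b)); // true
        // a different iteration count must change the derived key
        byte[] c = derive("secret".toCharArray(), salt, 1001, 32);
        System.out.println(Arrays.equals(a, c)); // false
    }
}
```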
+ * Initial Developer: H2 Group + */ +package org.h2.security; + +import org.h2.engine.Constants; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.util.Bits; +import org.h2.util.MathUtils; + +/** + * A file store that encrypts all data before writing, and decrypts all data + * after reading. Areas that were never written to (for example after calling + * setLength to enlarge the file) are not encrypted (contains 0 bytes). + */ +public class SecureFileStore extends FileStore { + + private byte[] key; + private final BlockCipher cipher; + private final BlockCipher cipherForInitVector; + private byte[] buffer = new byte[4]; + private long pos; + private final byte[] bufferForInitVector; + private final int keyIterations; + + public SecureFileStore(DataHandler handler, String name, String mode, + String cipher, byte[] key, int keyIterations) { + super(handler, name, mode); + this.key = key; + this.cipher = CipherFactory.getBlockCipher(cipher); + this.cipherForInitVector = CipherFactory.getBlockCipher(cipher); + this.keyIterations = keyIterations; + bufferForInitVector = new byte[Constants.FILE_BLOCK_SIZE]; + } + + @Override + protected byte[] generateSalt() { + return MathUtils.secureRandomBytes(Constants.FILE_BLOCK_SIZE); + } + + @Override + protected void initKey(byte[] salt) { + key = SHA256.getHashWithSalt(key, salt); + for (int i = 0; i < keyIterations; i++) { + key = SHA256.getHash(key, true); + } + cipher.setKey(key); + key = SHA256.getHash(key, true); + cipherForInitVector.setKey(key); + } + + @Override + protected void writeDirect(byte[] b, int off, int len) { + super.write(b, off, len); + pos += len; + } + + @Override + public void write(byte[] b, int off, int len) { + if (buffer.length < b.length) { + buffer = new byte[len]; + } + System.arraycopy(b, off, buffer, 0, len); + xorInitVector(buffer, 0, len, pos); + cipher.encrypt(buffer, 0, len); + super.write(buffer, 0, len); + pos += len; + } + + @Override + protected void 
readFullyDirect(byte[] b, int off, int len) { + super.readFully(b, off, len); + pos += len; + } + + @Override + public void readFully(byte[] b, int off, int len) { + super.readFully(b, off, len); + for (int i = 0; i < len; i++) { + if (b[i] != 0) { + cipher.decrypt(b, off, len); + xorInitVector(b, off, len, pos); + break; + } + } + pos += len; + } + + @Override + public void seek(long x) { + this.pos = x; + super.seek(x); + } + + private void xorInitVector(byte[] b, int off, int len, long p) { + byte[] iv = bufferForInitVector; + while (len > 0) { + for (int i = 0; i < Constants.FILE_BLOCK_SIZE; i += 8) { + Bits.writeLong(iv, i, (p + i) >>> 3); + } + cipherForInitVector.encrypt(iv, 0, Constants.FILE_BLOCK_SIZE); + for (int i = 0; i < Constants.FILE_BLOCK_SIZE; i++) { + b[off + i] ^= iv[i]; + } + p += Constants.FILE_BLOCK_SIZE; + off += Constants.FILE_BLOCK_SIZE; + len -= Constants.FILE_BLOCK_SIZE; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/security/XTEA.java b/modules/h2/src/main/java/org/h2/security/XTEA.java new file mode 100644 index 0000000000000..1072dedbf6830 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/XTEA.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.security; + +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.util.Bits; + +/** + * An implementation of the XTEA block cipher algorithm. + *

    + * This implementation uses 32 rounds. + * The best attack reported as of 2009 is 36 rounds (Wikipedia). + */ +public class XTEA implements BlockCipher { + + private static final int DELTA = 0x9E3779B9; + private int k0, k1, k2, k3, k4, k5, k6, k7; + private int k8, k9, k10, k11, k12, k13, k14, k15; + private int k16, k17, k18, k19, k20, k21, k22, k23; + private int k24, k25, k26, k27, k28, k29, k30, k31; + + @Override + public void setKey(byte[] b) { + int[] key = new int[4]; + for (int i = 0; i < 16; i += 4) { + key[i / 4] = Bits.readInt(b, i); + } + int[] r = new int[32]; + for (int i = 0, sum = 0; i < 32;) { + r[i++] = sum + key[sum & 3]; + sum += DELTA; + r[i++] = sum + key[ (sum >>> 11) & 3]; + } + k0 = r[0]; k1 = r[1]; k2 = r[2]; k3 = r[3]; + k4 = r[4]; k5 = r[5]; k6 = r[6]; k7 = r[7]; + k8 = r[8]; k9 = r[9]; k10 = r[10]; k11 = r[11]; + k12 = r[12]; k13 = r[13]; k14 = r[14]; k15 = r[15]; + k16 = r[16]; k17 = r[17]; k18 = r[18]; k19 = r[19]; + k20 = r[20]; k21 = r[21]; k22 = r[22]; k23 = r[23]; + k24 = r[24]; k25 = r[25]; k26 = r[26]; k27 = r[27]; + k28 = r[28]; k29 = r[29]; k30 = r[30]; k31 = r[31]; + } + + @Override + public void encrypt(byte[] bytes, int off, int len) { + if (SysProperties.CHECK) { + if (len % ALIGN != 0) { + DbException.throwInternalError("unaligned len " + len); + } + } + for (int i = off; i < off + len; i += 8) { + encryptBlock(bytes, bytes, i); + } + } + + @Override + public void decrypt(byte[] bytes, int off, int len) { + if (SysProperties.CHECK) { + if (len % ALIGN != 0) { + DbException.throwInternalError("unaligned len " + len); + } + } + for (int i = off; i < off + len; i += 8) { + decryptBlock(bytes, bytes, i); + } + } + + private void encryptBlock(byte[] in, byte[] out, int off) { + int y = Bits.readInt(in, off); + int z = Bits.readInt(in, off + 4); + y += (((z << 4) ^ (z >>> 5)) + z) ^ k0; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k1; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k2; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k3; + y += 
(((z << 4) ^ (z >>> 5)) + z) ^ k4; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k5; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k6; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k7; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k8; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k9; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k10; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k11; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k12; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k13; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k14; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k15; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k16; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k17; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k18; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k19; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k20; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k21; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k22; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k23; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k24; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k25; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k26; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k27; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k28; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k29; + y += (((z << 4) ^ (z >>> 5)) + z) ^ k30; + z += (((y >>> 5) ^ (y << 4)) + y) ^ k31; + Bits.writeInt(out, off, y); + Bits.writeInt(out, off + 4, z); + } + + private void decryptBlock(byte[] in, byte[] out, int off) { + int y = Bits.readInt(in, off); + int z = Bits.readInt(in, off + 4); + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k31; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k30; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k29; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k28; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k27; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k26; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k25; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k24; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k23; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k22; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k21; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k20; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k19; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k18; + z 
-= (((y >>> 5) ^ (y << 4)) + y) ^ k17; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k16; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k15; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k14; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k13; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k12; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k11; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k10; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k9; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k8; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k7; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k6; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k5; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k4; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k3; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k2; + z -= (((y >>> 5) ^ (y << 4)) + y) ^ k1; + y -= (((z << 4) ^ (z >>> 5)) + z) ^ k0; + Bits.writeInt(out, off, y); + Bits.writeInt(out, off + 4, z); + } + + @Override + public int getKeyLength() { + return 16; + } + +} diff --git a/modules/h2/src/main/java/org/h2/security/package.html b/modules/h2/src/main/java/org/h2/security/package.html new file mode 100644 index 0000000000000..164b246d8408b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/security/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +
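The unrolled rounds above follow the standard XTEA reference algorithm, with the subkeys `sum + key[sum & 3]` and `sum + key[(sum >>> 11) & 3]` precomputed in `setKey`. A compact loop form of the same 32-round cipher, handy for eyeballing the schedule, round-trips as expected:

```java
public class XteaSketch {
    private static final int DELTA = 0x9E3779B9;

    // Reference XTEA, 32 rounds; same schedule the unrolled code precomputes in setKey().
    static void encrypt(int[] v, int[] key) {
        int y = v[0], z = v[1], sum = 0;
        for (int i = 0; i < 32; i++) {
            y += (((z << 4) ^ (z >>> 5)) + z) ^ (sum + key[sum & 3]);
            sum += DELTA;
            z += (((y << 4) ^ (y >>> 5)) + y) ^ (sum + key[(sum >>> 11) & 3]);
        }
        v[0] = y; v[1] = z;
    }

    static void decrypt(int[] v, int[] key) {
        int y = v[0], z = v[1], sum = DELTA * 32; // int overflow wraps, as intended
        for (int i = 0; i < 32; i++) {
            z -= (((y << 4) ^ (y >>> 5)) + y) ^ (sum + key[(sum >>> 11) & 3]);
            sum -= DELTA;
            y -= (((z << 4) ^ (z >>> 5)) + z) ^ (sum + key[sum & 3]);
        }
        v[0] = y; v[1] = z;
    }

    public static void main(String[] args) {
        int[] key = {0x01234567, 0x89abcdef, 0xfedcba98, 0x76543210};
        int[] block = {0xdeadbeef, 0x0badf00d};
        encrypt(block, key);
        decrypt(block, key);
        System.out.println(block[0] == 0xdeadbeef && block[1] == 0x0badf00d); // true
    }
}
```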

+</title></head><body style="font: 9pt, 8pt sans-serif">
+
+Security classes, such as encryption and cryptographically secure hash algorithms.
+
+</body></html>

\ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/Service.java b/modules/h2/src/main/java/org/h2/server/Service.java new file mode 100644 index 0000000000000..466a30bc8d7fd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/Service.java @@ -0,0 +1,92 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server; + +import java.sql.SQLException; + +/** + * Classes implementing this interface usually provide a + * TCP/IP listener such as an FTP server. + * They can be started and stopped, and may or may not + * allow remote connections. + */ +public interface Service { + + /** + * Initialize the service from command line options. + * + * @param args the command line options + */ + void init(String... args) throws Exception; + + /** + * Get the URL of this service in a human-readable form. + * + * @return the url + */ + String getURL(); + + /** + * Start the service. This usually means creating the server socket. + * This method must not block. + */ + void start() throws SQLException; + + /** + * Listen for incoming connections. + * This method blocks. + */ + void listen(); + + /** + * Stop the service. + */ + void stop(); + + /** + * Check if the service is running. + * + * @param traceError if errors should be written + * @return true if the server is running + */ + boolean isRunning(boolean traceError); + + /** + * Check if remote connections are allowed. + * + * @return true if remote connections are allowed + */ + boolean getAllowOthers(); + + /** + * Get the human-readable name of the service. + * + * @return the name + */ + String getName(); + + /** + * Get the human-readable short name of the service. + * + * @return the type + */ + String getType(); + + /** + * Get the port this service is listening on.
+ * + * @return the port + */ + int getPort(); + + /** + * Check if a daemon thread should be used. + * + * @return true if a daemon thread should be used + */ + boolean isDaemon(); + +} diff --git a/modules/h2/src/main/java/org/h2/server/ShutdownHandler.java b/modules/h2/src/main/java/org/h2/server/ShutdownHandler.java new file mode 100644 index 0000000000000..a7d6d7dbb227f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/ShutdownHandler.java @@ -0,0 +1,17 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server; + +/** + * A shutdown handler is a listener for shutdown events. + */ +public interface ShutdownHandler { + + /** + * Tell the listener to shut down. + */ + void shutdown(); +} diff --git a/modules/h2/src/main/java/org/h2/server/TcpServer.java b/modules/h2/src/main/java/org/h2/server/TcpServer.java new file mode 100644 index 0000000000000..85b376a8dfdc7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/TcpServer.java @@ -0,0 +1,517 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.server; + +import java.io.IOException; +import java.net.ServerSocket; +import java.net.Socket; +import java.net.UnknownHostException; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Properties; +import java.util.Set; +import org.h2.Driver; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.util.JdbcUtils; +import org.h2.util.NetUtils; +import org.h2.util.StringUtils; +import org.h2.util.Tool; + +/** + * The TCP server implements the native H2 database server protocol. + * It supports multiple client connections to multiple databases + * (many to many). The same database may be opened by multiple clients. + * Also supported is the mixed mode: opening databases in embedded mode, + * and at the same time start a TCP server to allow clients to connect to + * the same database over the network. + */ +public class TcpServer implements Service { + + private static final int SHUTDOWN_NORMAL = 0; + private static final int SHUTDOWN_FORCE = 1; + + /** + * The name of the in-memory management database used by the TCP server + * to keep the active sessions. 
+ */ + private static final String MANAGEMENT_DB_PREFIX = "management_db_"; + + private static final Map<Integer, TcpServer> SERVERS = + Collections.synchronizedMap(new HashMap<Integer, TcpServer>()); + + private int port; + private boolean portIsSet; + private boolean trace; + private boolean ssl; + private boolean stop; + private ShutdownHandler shutdownHandler; + private ServerSocket serverSocket; + private final Set<TcpServerThread> running = + Collections.synchronizedSet(new HashSet<TcpServerThread>()); + private String baseDir; + private boolean allowOthers; + private boolean isDaemon; + private boolean ifExists; + private Connection managementDb; + private PreparedStatement managementDbAdd; + private PreparedStatement managementDbRemove; + private String managementPassword = ""; + private Thread listenerThread; + private int nextThreadId; + private String key, keyDatabase; + + /** + * Get the database name of the management database. + * The management database contains a table with active sessions (SESSIONS). + * + * @param port the TCP server port + * @return the database name (usually starting with mem:) + */ + public static String getManagementDbName(int port) { + return "mem:" + MANAGEMENT_DB_PREFIX + port; + } + + private void initManagementDb() throws SQLException { + Properties prop = new Properties(); + prop.setProperty("user", ""); + prop.setProperty("password", managementPassword); + // avoid using the driver manager + Connection conn = Driver.load().connect("jdbc:h2:" + + getManagementDbName(port), prop); + managementDb = conn; + + try (Statement stat = conn.createStatement()) { + stat.execute("CREATE ALIAS IF NOT EXISTS STOP_SERVER FOR \"" + + TcpServer.class.getName() + ".stopServer\""); + stat.execute("CREATE TABLE IF NOT EXISTS SESSIONS" + + "(ID INT PRIMARY KEY, URL VARCHAR, USER VARCHAR, " + + "CONNECTED TIMESTAMP)"); + managementDbAdd = conn.prepareStatement( + "INSERT INTO SESSIONS VALUES(?, ?, ?, NOW())"); + managementDbRemove = conn.prepareStatement( + "DELETE FROM SESSIONS WHERE ID=?"); + } + 
SERVERS.put(port, this); + } + + /** + * Shut down this server. + */ + void shutdown() { + if (shutdownHandler != null) { + shutdownHandler.shutdown(); + } + } + + public void setShutdownHandler(ShutdownHandler shutdownHandler) { + this.shutdownHandler = shutdownHandler; + } + + /** + * Add a connection to the management database. + * + * @param id the connection id + * @param url the database URL + * @param user the user name + */ + synchronized void addConnection(int id, String url, String user) { + try { + managementDbAdd.setInt(1, id); + managementDbAdd.setString(2, url); + managementDbAdd.setString(3, user); + managementDbAdd.execute(); + } catch (SQLException e) { + DbException.traceThrowable(e); + } + } + + /** + * Remove a connection from the management database. + * + * @param id the connection id + */ + synchronized void removeConnection(int id) { + try { + managementDbRemove.setInt(1, id); + managementDbRemove.execute(); + } catch (SQLException e) { + DbException.traceThrowable(e); + } + } + + private synchronized void stopManagementDb() { + if (managementDb != null) { + try { + managementDb.close(); + } catch (SQLException e) { + DbException.traceThrowable(e); + } + managementDb = null; + } + } + + @Override + public void init(String... 
args) { + port = Constants.DEFAULT_TCP_PORT; + for (int i = 0; args != null && i < args.length; i++) { + String a = args[i]; + if (Tool.isOption(a, "-trace")) { + trace = true; + } else if (Tool.isOption(a, "-tcpSSL")) { + ssl = true; + } else if (Tool.isOption(a, "-tcpPort")) { + port = Integer.decode(args[++i]); + portIsSet = true; + } else if (Tool.isOption(a, "-tcpPassword")) { + managementPassword = args[++i]; + } else if (Tool.isOption(a, "-baseDir")) { + baseDir = args[++i]; + } else if (Tool.isOption(a, "-key")) { + key = args[++i]; + keyDatabase = args[++i]; + } else if (Tool.isOption(a, "-tcpAllowOthers")) { + allowOthers = true; + } else if (Tool.isOption(a, "-tcpDaemon")) { + isDaemon = true; + } else if (Tool.isOption(a, "-ifExists")) { + ifExists = true; + } + } + org.h2.Driver.load(); + } + + @Override + public String getURL() { + return (ssl ? "ssl" : "tcp") + "://" + NetUtils.getLocalAddress() + ":" + port; + } + + @Override + public int getPort() { + return port; + } + + /** + * Check if this socket may connect to this server. Remote connections are + * not allowed if the flag allowOthers is set. 
+ * + * @param socket the socket + * @return true if this client may connect + */ + boolean allow(Socket socket) { + if (allowOthers) { + return true; + } + try { + return NetUtils.isLocalAddress(socket); + } catch (UnknownHostException e) { + traceError(e); + return false; + } + } + + @Override + public synchronized void start() throws SQLException { + stop = false; + try { + serverSocket = NetUtils.createServerSocket(port, ssl); + } catch (DbException e) { + if (!portIsSet) { + serverSocket = NetUtils.createServerSocket(0, ssl); + } else { + throw e; + } + } + port = serverSocket.getLocalPort(); + initManagementDb(); + } + + @Override + public void listen() { + listenerThread = Thread.currentThread(); + String threadName = listenerThread.getName(); + try { + while (!stop) { + Socket s = serverSocket.accept(); + TcpServerThread c = new TcpServerThread(s, this, nextThreadId++); + running.add(c); + Thread thread = new Thread(c, threadName + " thread"); + thread.setDaemon(isDaemon); + c.setThread(thread); + thread.start(); + } + serverSocket = NetUtils.closeSilently(serverSocket); + } catch (Exception e) { + if (!stop) { + DbException.traceThrowable(e); + } + } + stopManagementDb(); + } + + @Override + public synchronized boolean isRunning(boolean traceError) { + if (serverSocket == null) { + return false; + } + try { + Socket s = NetUtils.createLoopbackSocket(port, ssl); + s.close(); + return true; + } catch (Exception e) { + if (traceError) { + traceError(e); + } + return false; + } + } + + @Override + public void stop() { + // TODO server: share code between web and tcp servers + // need to remove the server first, otherwise the connection is broken + // while the server is still registered in this map + SERVERS.remove(port); + if (!stop) { + stopManagementDb(); + stop = true; + if (serverSocket != null) { + try { + serverSocket.close(); + } catch (IOException e) { + DbException.traceThrowable(e); + } catch (NullPointerException e) { + // ignore + } + serverSocket 
= null; + } + if (listenerThread != null) { + try { + listenerThread.join(1000); + } catch (InterruptedException e) { + DbException.traceThrowable(e); + } + } + } + // TODO server: using a boolean 'now' argument? a timeout? + for (TcpServerThread c : new ArrayList<>(running)) { + if (c != null) { + c.close(); + try { + c.getThread().join(100); + } catch (Exception e) { + DbException.traceThrowable(e); + } + } + } + } + + /** + * Stop a running server. This method is called via reflection from the + * STOP_SERVER function. + * + * @param port the port where the server runs, or 0 for all running servers + * @param password the password (or null) + * @param shutdownMode the shutdown mode, SHUTDOWN_NORMAL or SHUTDOWN_FORCE. + */ + public static void stopServer(int port, String password, int shutdownMode) { + if (port == 0) { + for (int p : SERVERS.keySet().toArray(new Integer[0])) { + if (p != 0) { + stopServer(p, password, shutdownMode); + } + } + return; + } + TcpServer server = SERVERS.get(port); + if (server == null) { + return; + } + if (!server.managementPassword.equals(password)) { + return; + } + if (shutdownMode == SHUTDOWN_NORMAL) { + server.stopManagementDb(); + server.stop = true; + try { + Socket s = NetUtils.createLoopbackSocket(port, false); + s.close(); + } catch (Exception e) { + // try to connect - so that accept returns + } + } else if (shutdownMode == SHUTDOWN_FORCE) { + server.stop(); + } + server.shutdown(); + } + + /** + * Remove a thread from the list. + * + * @param t the thread to remove + */ + void remove(TcpServerThread t) { + running.remove(t); + } + + /** + * Get the configured base directory. + * + * @return the base directory + */ + String getBaseDir() { + return baseDir; + } + + /** + * Print a message if the trace flag is enabled. + * + * @param s the message + */ + void trace(String s) { + if (trace) { + System.out.println(s); + } + } + /** + * Print a stack trace if the trace flag is enabled. 
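In `SHUTDOWN_NORMAL` mode above, the server socket is not closed directly: `stopServer` sets the `stop` flag and then opens a throwaway loopback connection purely so the blocking `accept()` in `listen()` returns and re-checks the flag. The pattern in isolation, with plain JDK sockets (class and method names are illustrative):

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicBoolean;

public class PokeAccept {
    // Returns true if the listener thread exited after the stop-flag + poke.
    static boolean demo() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            AtomicBoolean stop = new AtomicBoolean(false);
            Thread listener = new Thread(() -> {
                try {
                    while (!stop.get()) {
                        server.accept().close(); // blocks until someone connects
                    }
                } catch (Exception e) {
                    // socket closed - treated as shutdown
                }
            });
            listener.start();
            stop.set(true);
            // poke accept() so the loop re-checks the flag, as stopServer() does
            new Socket("127.0.0.1", server.getLocalPort()).close();
            listener.join(5000);
            return !listener.isAlive();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // true
    }
}
```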
+ * + * @param e the exception + */ + void traceError(Throwable e) { + if (trace) { + e.printStackTrace(); + } + } + + @Override + public boolean getAllowOthers() { + return allowOthers; + } + + @Override + public String getType() { + return "TCP"; + } + + @Override + public String getName() { + return "H2 TCP Server"; + } + + boolean getIfExists() { + return ifExists; + } + + /** + * Stop the TCP server with the given URL. + * + * @param url the database URL + * @param password the password + * @param force if the server should be stopped immediately + * @param all whether all TCP servers that are running in the JVM should be + * stopped + */ + public static synchronized void shutdown(String url, String password, + boolean force, boolean all) throws SQLException { + try { + int port = Constants.DEFAULT_TCP_PORT; + int idx = url.lastIndexOf(':'); + if (idx >= 0) { + String p = url.substring(idx + 1); + if (StringUtils.isNumber(p)) { + port = Integer.decode(p); + } + } + String db = getManagementDbName(port); + try { + org.h2.Driver.load(); + } catch (Throwable e) { + throw DbException.convert(e); + } + for (int i = 0; i < 2; i++) { + Connection conn = null; + PreparedStatement prep = null; + try { + conn = DriverManager.getConnection("jdbc:h2:" + url + "/" + db, "", password); + prep = conn.prepareStatement("CALL STOP_SERVER(?, ?, ?)"); + prep.setInt(1, all ? 0 : port); + prep.setString(2, password); + prep.setInt(3, force ? SHUTDOWN_FORCE : SHUTDOWN_NORMAL); + try { + prep.execute(); + } catch (SQLException e) { + if (force) { + // ignore + } else { + if (e.getErrorCode() != ErrorCode.CONNECTION_BROKEN_1) { + throw e; + } + } + } + break; + } catch (SQLException e) { + if (i == 1) { + throw e; + } + } finally { + JdbcUtils.closeSilently(prep); + JdbcUtils.closeSilently(conn); + } + } + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } + + /** + * Cancel a running statement. 
+ * + * @param sessionId the session id + * @param statementId the statement id + */ + void cancelStatement(String sessionId, int statementId) { + for (TcpServerThread c : new ArrayList<>(running)) { + if (c != null) { + c.cancelStatement(sessionId, statementId); + } + } + } + + /** + * If no key is set, return the original database name. If a key is set, + * check if the key matches. If yes, return the correct database name. If + * not, throw an exception. + * + * @param db the key to test (or database name if no key is used) + * @return the database name + * @throws DbException if a key is set but doesn't match + */ + public String checkKeyAndGetDatabaseName(String db) { + if (key == null) { + return db; + } + if (key.equals(db)) { + return keyDatabase; + } + throw DbException.get(ErrorCode.WRONG_USER_OR_PASSWORD); + } + + @Override + public boolean isDaemon() { + return isDaemon; + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/TcpServerThread.java b/modules/h2/src/main/java/org/h2/server/TcpServerThread.java new file mode 100644 index 0000000000000..490cfa24bc608 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/TcpServerThread.java @@ -0,0 +1,653 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.server; + +import java.io.ByteArrayInputStream; +import java.io.FilterInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.PrintWriter; +import java.io.StringWriter; +import java.net.Socket; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Objects; + +import org.h2.api.ErrorCode; +import org.h2.command.Command; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.Constants; +import org.h2.engine.Engine; +import org.h2.engine.GeneratedKeysMode; +import org.h2.engine.Session; +import org.h2.engine.SessionRemote; +import org.h2.engine.SysProperties; +import org.h2.expression.Parameter; +import org.h2.expression.ParameterInterface; +import org.h2.expression.ParameterRemote; +import org.h2.jdbc.JdbcSQLException; +import org.h2.message.DbException; +import org.h2.result.ResultColumn; +import org.h2.result.ResultInterface; +import org.h2.result.ResultWithGeneratedKeys; +import org.h2.store.LobStorageInterface; +import org.h2.util.IOUtils; +import org.h2.util.SmallLRUCache; +import org.h2.util.SmallMap; +import org.h2.value.Transfer; +import org.h2.value.Value; +import org.h2.value.ValueLobDb; + +/** + * One server thread is opened per client connection. 
+ */ +public class TcpServerThread implements Runnable { + + protected final Transfer transfer; + private final TcpServer server; + private Session session; + private boolean stop; + private Thread thread; + private Command commit; + private final SmallMap cache = + new SmallMap(SysProperties.SERVER_CACHED_OBJECTS); + private final SmallLRUCache lobs = + SmallLRUCache.newInstance(Math.max( + SysProperties.SERVER_CACHED_OBJECTS, + SysProperties.SERVER_RESULT_SET_FETCH_SIZE * 5)); + private final int threadId; + private int clientVersion; + private String sessionId; + + TcpServerThread(Socket socket, TcpServer server, int id) { + this.server = server; + this.threadId = id; + transfer = new Transfer(null, socket); + } + + private void trace(String s) { + server.trace(this + " " + s); + } + + @Override + public void run() { + try { + transfer.init(); + trace("Connect"); + // TODO server: should support a list of allowed databases + // and a list of allowed clients + try { + Socket socket = transfer.getSocket(); + if (socket == null) { + // the transfer is already closed, prevent NPE in TcpServer#allow(Socket) + return; + } + if (!server.allow(transfer.getSocket())) { + throw DbException.get(ErrorCode.REMOTE_CONNECTION_NOT_ALLOWED); + } + int minClientVersion = transfer.readInt(); + int maxClientVersion = transfer.readInt(); + if (maxClientVersion < Constants.TCP_PROTOCOL_VERSION_MIN_SUPPORTED) { + throw DbException.get(ErrorCode.DRIVER_VERSION_ERROR_2, + "" + clientVersion, "" + Constants.TCP_PROTOCOL_VERSION_MIN_SUPPORTED); + } else if (minClientVersion > Constants.TCP_PROTOCOL_VERSION_MAX_SUPPORTED) { + throw DbException.get(ErrorCode.DRIVER_VERSION_ERROR_2, + "" + clientVersion, "" + Constants.TCP_PROTOCOL_VERSION_MAX_SUPPORTED); + } + if (maxClientVersion >= Constants.TCP_PROTOCOL_VERSION_MAX_SUPPORTED) { + clientVersion = Constants.TCP_PROTOCOL_VERSION_MAX_SUPPORTED; + } else { + clientVersion = maxClientVersion; + } + transfer.setVersion(clientVersion); + String 
db = transfer.readString(); + String originalURL = transfer.readString(); + if (db == null && originalURL == null) { + String targetSessionId = transfer.readString(); + int command = transfer.readInt(); + stop = true; + if (command == SessionRemote.SESSION_CANCEL_STATEMENT) { + // cancel a running statement + int statementId = transfer.readInt(); + server.cancelStatement(targetSessionId, statementId); + } else if (command == SessionRemote.SESSION_CHECK_KEY) { + // check if this is the correct server + db = server.checkKeyAndGetDatabaseName(targetSessionId); + if (!targetSessionId.equals(db)) { + transfer.writeInt(SessionRemote.STATUS_OK); + } else { + transfer.writeInt(SessionRemote.STATUS_ERROR); + } + } + } + String baseDir = server.getBaseDir(); + if (baseDir == null) { + baseDir = SysProperties.getBaseDir(); + } + db = server.checkKeyAndGetDatabaseName(db); + ConnectionInfo ci = new ConnectionInfo(db); + ci.setOriginalURL(originalURL); + ci.setUserName(transfer.readString()); + ci.setUserPasswordHash(transfer.readBytes()); + ci.setFilePasswordHash(transfer.readBytes()); + int len = transfer.readInt(); + for (int i = 0; i < len; i++) { + ci.setProperty(transfer.readString(), transfer.readString()); + } + // override client's requested properties with server settings + if (baseDir != null) { + ci.setBaseDir(baseDir); + } + if (server.getIfExists()) { + ci.setProperty("IFEXISTS", "TRUE"); + } + transfer.writeInt(SessionRemote.STATUS_OK); + transfer.writeInt(clientVersion); + transfer.flush(); + if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_13) { + if (ci.getFilePasswordHash() != null) { + ci.setFileEncryptionKey(transfer.readBytes()); + } + } + session = Engine.getInstance().createSession(ci); + transfer.setSession(session); + server.addConnection(threadId, originalURL, ci.getUserName()); + trace("Connected"); + } catch (Throwable e) { + sendError(e); + stop = true; + } + while (!stop) { + try { + process(); + } catch (Throwable e) { + sendError(e); + } + } 
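The handshake at the top of run() clamps the client's advertised protocol range against the server's supported window: the connection fails when the two ranges do not overlap, and otherwise the highest mutually supported version wins. A standalone sketch of that negotiation rule (the bounds below are illustrative placeholders, not H2's real `Constants` values):

```java
public class VersionNegotiation {

    // Illustrative bounds; the real values are
    // Constants.TCP_PROTOCOL_VERSION_MIN_SUPPORTED / _MAX_SUPPORTED.
    static final int SERVER_MIN = 6;
    static final int SERVER_MAX = 17;

    /**
     * Mirrors the checks in TcpServerThread.run(): fail when the client's
     * [clientMin, clientMax] range misses the server's window, otherwise
     * pick the highest version both sides support.
     */
    static int negotiate(int clientMin, int clientMax) {
        if (clientMax < SERVER_MIN) {
            throw new IllegalArgumentException("client too old: " + clientMax);
        }
        if (clientMin > SERVER_MAX) {
            throw new IllegalArgumentException("client too new: " + clientMin);
        }
        // clientMax >= SERVER_MAX means the server's newest version is usable;
        // otherwise fall back to the newest version the client offers.
        return Math.min(clientMax, SERVER_MAX);
    }
}
```

Note that `min(clientMax, SERVER_MAX)` is exactly the two-branch assignment in the original code collapsed into one expression.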
+ trace("Disconnect"); + } catch (Throwable e) { + server.traceError(e); + } finally { + close(); + } + } + + private void closeSession() { + if (session != null) { + RuntimeException closeError = null; + try { + Command rollback = session.prepareLocal("ROLLBACK"); + rollback.executeUpdate(false); + } catch (RuntimeException e) { + closeError = e; + server.traceError(e); + } catch (Exception e) { + server.traceError(e); + } + try { + session.close(); + server.removeConnection(threadId); + } catch (RuntimeException e) { + if (closeError == null) { + closeError = e; + server.traceError(e); + } + } catch (Exception e) { + server.traceError(e); + } finally { + session = null; + } + if (closeError != null) { + throw closeError; + } + } + } + + /** + * Close a connection. + */ + void close() { + try { + stop = true; + closeSession(); + } catch (Exception e) { + server.traceError(e); + } finally { + transfer.close(); + trace("Close"); + server.remove(this); + } + } + + private void sendError(Throwable t) { + try { + SQLException e = DbException.convert(t).getSQLException(); + StringWriter writer = new StringWriter(); + e.printStackTrace(new PrintWriter(writer)); + String trace = writer.toString(); + String message; + String sql; + if (e instanceof JdbcSQLException) { + JdbcSQLException j = (JdbcSQLException) e; + message = j.getOriginalMessage(); + sql = j.getSQL(); + } else { + message = e.getMessage(); + sql = null; + } + transfer.writeInt(SessionRemote.STATUS_ERROR). + writeString(e.getSQLState()).writeString(message). 
+ writeString(sql).writeInt(e.getErrorCode()).writeString(trace).flush(); + } catch (Exception e2) { + if (!transfer.isClosed()) { + server.traceError(e2); + } + // if writing the error does not work, close the connection + stop = true; + } + } + + private void setParameters(Command command) throws IOException { + int len = transfer.readInt(); + ArrayList params = command.getParameters(); + for (int i = 0; i < len; i++) { + Parameter p = (Parameter) params.get(i); + p.setValue(transfer.readValue()); + } + } + + private void process() throws IOException { + int operation = transfer.readInt(); + switch (operation) { + case SessionRemote.SESSION_PREPARE_READ_PARAMS: + case SessionRemote.SESSION_PREPARE_READ_PARAMS2: + case SessionRemote.SESSION_PREPARE: { + int id = transfer.readInt(); + String sql = transfer.readString(); + int old = session.getModificationId(); + Command command = session.prepareLocal(sql); + boolean readonly = command.isReadOnly(); + cache.addObject(id, command); + boolean isQuery = command.isQuery(); + + transfer.writeInt(getState(old)).writeBoolean(isQuery). 
+ writeBoolean(readonly); + + if (operation == SessionRemote.SESSION_PREPARE_READ_PARAMS2) { + transfer.writeInt(command.getCommandType()); + } + + ArrayList params = command.getParameters(); + + transfer.writeInt(params.size()); + + if (operation != SessionRemote.SESSION_PREPARE) { + for (ParameterInterface p : params) { + ParameterRemote.writeMetaData(transfer, p); + } + } + transfer.flush(); + break; + } + case SessionRemote.SESSION_CLOSE: { + stop = true; + closeSession(); + transfer.writeInt(SessionRemote.STATUS_OK).flush(); + close(); + break; + } + case SessionRemote.COMMAND_COMMIT: { + if (commit == null) { + commit = session.prepareLocal("COMMIT"); + } + int old = session.getModificationId(); + commit.executeUpdate(false); + transfer.writeInt(getState(old)).flush(); + break; + } + case SessionRemote.COMMAND_GET_META_DATA: { + int id = transfer.readInt(); + int objectId = transfer.readInt(); + Command command = (Command) cache.getObject(id, false); + ResultInterface result = command.getMetaData(); + cache.addObject(objectId, result); + int columnCount = result.getVisibleColumnCount(); + transfer.writeInt(SessionRemote.STATUS_OK). 
+ writeInt(columnCount).writeInt(0); + for (int i = 0; i < columnCount; i++) { + ResultColumn.writeColumn(transfer, result, i); + } + transfer.flush(); + break; + } + case SessionRemote.COMMAND_EXECUTE_QUERY: { + int id = transfer.readInt(); + int objectId = transfer.readInt(); + int maxRows = transfer.readInt(); + int fetchSize = transfer.readInt(); + Command command = (Command) cache.getObject(id, false); + setParameters(command); + int old = session.getModificationId(); + ResultInterface result; + synchronized (session) { + result = command.executeQuery(maxRows, false); + } + cache.addObject(objectId, result); + int columnCount = result.getVisibleColumnCount(); + int state = getState(old); + transfer.writeInt(state).writeInt(columnCount); + int rowCount = result.getRowCount(); + transfer.writeInt(rowCount); + for (int i = 0; i < columnCount; i++) { + ResultColumn.writeColumn(transfer, result, i); + } + int fetch = Math.min(rowCount, fetchSize); + for (int i = 0; i < fetch; i++) { + sendRow(result); + } + transfer.flush(); + break; + } + case SessionRemote.COMMAND_EXECUTE_UPDATE: { + int id = transfer.readInt(); + Command command = (Command) cache.getObject(id, false); + setParameters(command); + boolean supportsGeneratedKeys = clientVersion >= Constants.TCP_PROTOCOL_VERSION_17; + boolean writeGeneratedKeys = supportsGeneratedKeys; + Object generatedKeysRequest; + if (supportsGeneratedKeys) { + int mode = transfer.readInt(); + switch (mode) { + case GeneratedKeysMode.NONE: + generatedKeysRequest = false; + writeGeneratedKeys = false; + break; + case GeneratedKeysMode.AUTO: + generatedKeysRequest = true; + break; + case GeneratedKeysMode.COLUMN_NUMBERS: { + int len = transfer.readInt(); + int[] keys = new int[len]; + for (int i = 0; i < len; i++) { + keys[i] = transfer.readInt(); + } + generatedKeysRequest = keys; + break; + } + case GeneratedKeysMode.COLUMN_NAMES: { + int len = transfer.readInt(); + String[] keys = new String[len]; + for (int i = 0; i < len; i++) 
{ + keys[i] = transfer.readString(); + } + generatedKeysRequest = keys; + break; + } + default: + throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, + "Unsupported generated keys' mode " + mode); + } + } else { + generatedKeysRequest = false; + } + int old = session.getModificationId(); + ResultWithGeneratedKeys result; + synchronized (session) { + result = command.executeUpdate(generatedKeysRequest); + } + int status; + if (session.isClosed()) { + status = SessionRemote.STATUS_CLOSED; + stop = true; + } else { + status = getState(old); + } + transfer.writeInt(status).writeInt(result.getUpdateCount()). + writeBoolean(session.getAutoCommit()); + if (writeGeneratedKeys) { + ResultInterface generatedKeys = result.getGeneratedKeys(); + int columnCount = generatedKeys.getVisibleColumnCount(); + transfer.writeInt(columnCount); + int rowCount = generatedKeys.getRowCount(); + transfer.writeInt(rowCount); + for (int i = 0; i < columnCount; i++) { + ResultColumn.writeColumn(transfer, generatedKeys, i); + } + for (int i = 0; i < rowCount; i++) { + sendRow(generatedKeys); + } + generatedKeys.close(); + } + transfer.flush(); + break; + } + case SessionRemote.COMMAND_CLOSE: { + int id = transfer.readInt(); + Command command = (Command) cache.getObject(id, true); + if (command != null) { + command.close(); + cache.freeObject(id); + } + break; + } + case SessionRemote.RESULT_FETCH_ROWS: { + int id = transfer.readInt(); + int count = transfer.readInt(); + ResultInterface result = (ResultInterface) cache.getObject(id, false); + transfer.writeInt(SessionRemote.STATUS_OK); + for (int i = 0; i < count; i++) { + sendRow(result); + } + transfer.flush(); + break; + } + case SessionRemote.RESULT_RESET: { + int id = transfer.readInt(); + ResultInterface result = (ResultInterface) cache.getObject(id, false); + result.reset(); + break; + } + case SessionRemote.RESULT_CLOSE: { + int id = transfer.readInt(); + ResultInterface result = (ResultInterface) cache.getObject(id, true); + if (result 
!= null) { + result.close(); + cache.freeObject(id); + } + break; + } + case SessionRemote.CHANGE_ID: { + int oldId = transfer.readInt(); + int newId = transfer.readInt(); + Object obj = cache.getObject(oldId, false); + cache.freeObject(oldId); + cache.addObject(newId, obj); + break; + } + case SessionRemote.SESSION_SET_ID: { + sessionId = transfer.readString(); + transfer.writeInt(SessionRemote.STATUS_OK); + if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_15) { + transfer.writeBoolean(session.getAutoCommit()); + } + transfer.flush(); + break; + } + case SessionRemote.SESSION_SET_AUTOCOMMIT: { + boolean autoCommit = transfer.readBoolean(); + session.setAutoCommit(autoCommit); + transfer.writeInt(SessionRemote.STATUS_OK).flush(); + break; + } + case SessionRemote.SESSION_HAS_PENDING_TRANSACTION: { + transfer.writeInt(SessionRemote.STATUS_OK). + writeInt(session.hasPendingTransaction() ? 1 : 0).flush(); + break; + } + case SessionRemote.LOB_READ: { + long lobId = transfer.readLong(); + byte[] hmac; + CachedInputStream in; + boolean verifyMac; + if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_11) { + if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_12) { + hmac = transfer.readBytes(); + verifyMac = true; + } else { + hmac = null; + verifyMac = false; + } + in = lobs.get(lobId); + if (in == null && verifyMac) { + in = new CachedInputStream(null); + lobs.put(lobId, in); + } + } else { + verifyMac = false; + hmac = null; + in = lobs.get(lobId); + } + long offset = transfer.readLong(); + int length = transfer.readInt(); + if (verifyMac) { + transfer.verifyLobMac(hmac, lobId); + } + if (in == null) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + if (in.getPos() != offset) { + LobStorageInterface lobStorage = session.getDataHandler().getLobStorage(); + // only the lob id is used + ValueLobDb lob = ValueLobDb.create(Value.BLOB, null, -1, lobId, hmac, -1); + InputStream lobIn = lobStorage.getInputStream(lob, hmac, -1); + in = new CachedInputStream(lobIn); 
+ lobs.put(lobId, in); + lobIn.skip(offset); + } + // limit the buffer size + length = Math.min(16 * Constants.IO_BUFFER_SIZE, length); + byte[] buff = new byte[length]; + length = IOUtils.readFully(in, buff, length); + transfer.writeInt(SessionRemote.STATUS_OK); + transfer.writeInt(length); + transfer.writeBytes(buff, 0, length); + transfer.flush(); + break; + } + default: + trace("Unknown operation: " + operation); + closeSession(); + close(); + } + } + + private int getState(int oldModificationId) { + if (session.getModificationId() == oldModificationId) { + return SessionRemote.STATUS_OK; + } + return SessionRemote.STATUS_OK_STATE_CHANGED; + } + + private void sendRow(ResultInterface result) throws IOException { + if (result.next()) { + transfer.writeBoolean(true); + Value[] v = result.currentRow(); + for (int i = 0; i < result.getVisibleColumnCount(); i++) { + if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_12) { + transfer.writeValue(v[i]); + } else { + writeValue(v[i]); + } + } + } else { + transfer.writeBoolean(false); + } + } + + private void writeValue(Value v) throws IOException { + if (v.getType() == Value.CLOB || v.getType() == Value.BLOB) { + if (v instanceof ValueLobDb) { + ValueLobDb lob = (ValueLobDb) v; + if (lob.isStored()) { + long id = lob.getLobId(); + lobs.put(id, new CachedInputStream(null)); + } + } + } + transfer.writeValue(v); + } + + void setThread(Thread thread) { + this.thread = thread; + } + + Thread getThread() { + return thread; + } + + /** + * Cancel a running statement. + * + * @param targetSessionId the session id + * @param statementId the statement to cancel + */ + void cancelStatement(String targetSessionId, int statementId) { + if (Objects.equals(targetSessionId, this.sessionId)) { + Command cmd = (Command) cache.getObject(statementId, false); + cmd.cancel(); + } + } + + /** + * An input stream with a position. 
+ */ + static class CachedInputStream extends FilterInputStream { + + private static final ByteArrayInputStream DUMMY = + new ByteArrayInputStream(new byte[0]); + private long pos; + + CachedInputStream(InputStream in) { + super(in == null ? DUMMY : in); + if (in == null) { + pos = -1; + } + } + + @Override + public int read(byte[] buff, int off, int len) throws IOException { + len = super.read(buff, off, len); + if (len > 0) { + pos += len; + } + return len; + } + + @Override + public int read() throws IOException { + int x = in.read(); + if (x >= 0) { + pos++; + } + return x; + } + + @Override + public long skip(long n) throws IOException { + n = super.skip(n); + if (n > 0) { + pos += n; + } + return n; + } + + public long getPos() { + return pos; + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/package.html b/modules/h2/src/main/java/org/h2/server/package.html new file mode 100644 index 0000000000000..7fffcd3d28a38 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

+
+A small FTP server.
+
+

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/pg/PgServer.java b/modules/h2/src/main/java/org/h2/server/pg/PgServer.java new file mode 100644 index 0000000000000..d6470a889c8c5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/pg/PgServer.java @@ -0,0 +1,593 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.pg; + +import java.io.IOException; +import java.net.ServerSocket; +import java.net.Socket; +import java.net.UnknownHostException; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Timestamp; +import java.sql.Types; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.server.Service; +import org.h2.util.NetUtils; +import org.h2.util.Tool; + +/** + * This class implements a subset of the PostgreSQL protocol as described here: + * http://developer.postgresql.org/pgdocs/postgres/protocol.html + * The PostgreSQL catalog is described here: + * http://www.postgresql.org/docs/7.4/static/catalogs.html + * + * @author Thomas Mueller + * @author Sergi Vladykin 2009-07-03 (convertType) + */ +public class PgServer implements Service { + + /** + * The default port to use for the PG server. + * This value is also in the documentation and in the Server javadoc. + */ + public static final int DEFAULT_PORT = 5435; + + /** + * The VARCHAR type. 
+ */ + public static final int PG_TYPE_VARCHAR = 1043; + + public static final int PG_TYPE_BOOL = 16; + public static final int PG_TYPE_BYTEA = 17; + public static final int PG_TYPE_BPCHAR = 1042; + public static final int PG_TYPE_INT8 = 20; + public static final int PG_TYPE_INT2 = 21; + public static final int PG_TYPE_INT4 = 23; + public static final int PG_TYPE_TEXT = 25; + public static final int PG_TYPE_OID = 26; + public static final int PG_TYPE_FLOAT4 = 700; + public static final int PG_TYPE_FLOAT8 = 701; + public static final int PG_TYPE_UNKNOWN = 705; + public static final int PG_TYPE_TEXTARRAY = 1009; + public static final int PG_TYPE_DATE = 1082; + public static final int PG_TYPE_TIME = 1083; + public static final int PG_TYPE_TIMESTAMP_NO_TMZONE = 1114; + public static final int PG_TYPE_NUMERIC = 1700; + + private final HashSet typeSet = new HashSet<>(); + + private int port = PgServer.DEFAULT_PORT; + private boolean portIsSet; + private boolean stop; + private boolean trace; + private ServerSocket serverSocket; + private final Set running = Collections. + synchronizedSet(new HashSet()); + private final AtomicInteger pid = new AtomicInteger(); + private String baseDir; + private boolean allowOthers; + private boolean isDaemon; + private boolean ifExists; + private String key, keyDatabase; + + @Override + public void init(String... 
args) { + port = DEFAULT_PORT; + for (int i = 0; args != null && i < args.length; i++) { + String a = args[i]; + if (Tool.isOption(a, "-trace")) { + trace = true; + } else if (Tool.isOption(a, "-pgPort")) { + port = Integer.decode(args[++i]); + portIsSet = true; + } else if (Tool.isOption(a, "-baseDir")) { + baseDir = args[++i]; + } else if (Tool.isOption(a, "-pgAllowOthers")) { + allowOthers = true; + } else if (Tool.isOption(a, "-pgDaemon")) { + isDaemon = true; + } else if (Tool.isOption(a, "-ifExists")) { + ifExists = true; + } else if (Tool.isOption(a, "-key")) { + key = args[++i]; + keyDatabase = args[++i]; + } + } + org.h2.Driver.load(); + // int testing; + // trace = true; + } + + boolean getTrace() { + return trace; + } + + /** + * Print a message if the trace flag is enabled. + * + * @param s the message + */ + void trace(String s) { + if (trace) { + System.out.println(s); + } + } + + /** + * Remove a thread from the list. + * + * @param t the thread to remove + */ + synchronized void remove(PgServerThread t) { + running.remove(t); + } + + /** + * Print the stack trace if the trace flag is enabled. 
+ * + * @param e the exception + */ + void traceError(Exception e) { + if (trace) { + e.printStackTrace(); + } + } + + @Override + public String getURL() { + return "pg://" + NetUtils.getLocalAddress() + ":" + port; + } + + @Override + public int getPort() { + return port; + } + + private boolean allow(Socket socket) { + if (allowOthers) { + return true; + } + try { + return NetUtils.isLocalAddress(socket); + } catch (UnknownHostException e) { + traceError(e); + return false; + } + } + + @Override + public void start() { + stop = false; + try { + serverSocket = NetUtils.createServerSocket(port, false); + } catch (DbException e) { + if (!portIsSet) { + serverSocket = NetUtils.createServerSocket(0, false); + } else { + throw e; + } + } + port = serverSocket.getLocalPort(); + } + + @Override + public void listen() { + String threadName = Thread.currentThread().getName(); + try { + while (!stop) { + Socket s = serverSocket.accept(); + if (!allow(s)) { + trace("Connection not allowed"); + s.close(); + } else { + PgServerThread c = new PgServerThread(s, this); + running.add(c); + c.setProcessId(pid.incrementAndGet()); + Thread thread = new Thread(c, threadName+" thread"); + thread.setDaemon(isDaemon); + c.setThread(thread); + thread.start(); + } + } + } catch (Exception e) { + if (!stop) { + e.printStackTrace(); + } + } + } + + @Override + public void stop() { + // TODO server: combine with tcp server + if (!stop) { + stop = true; + if (serverSocket != null) { + try { + serverSocket.close(); + } catch (IOException e) { + // TODO log exception + e.printStackTrace(); + } + serverSocket = null; + } + } + // TODO server: using a boolean 'now' argument? a timeout? 
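TcpServer.stop() and this PgServer.stop() share the same shutdown recipe: flip a stop flag, close the listening socket so accept() unblocks, then join each worker thread with a bounded timeout rather than blocking forever. A minimal self-contained sketch of the flag-then-bounded-join part of that pattern, using illustrative names rather than H2 types:

```java
import java.util.ArrayList;
import java.util.List;

public class GracefulStop {

    private volatile boolean stop;
    private final List<Thread> workers = new ArrayList<>();

    // Start a worker; the real servers track TcpServerThread/PgServerThread here.
    void addWorker(Runnable r) {
        Thread t = new Thread(r);
        workers.add(t);
        t.start();
    }

    boolean isStopped() {
        return stop;
    }

    /**
     * Signal shutdown first, then join each worker with a short timeout,
     * mirroring the c.getThread().join(100) calls in both servers.
     */
    void stopAll() {
        stop = true;
        for (Thread t : new ArrayList<>(workers)) {
            try {
                t.join(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Iterating over a copy of the worker list, as both servers do with `new ArrayList<>(running)`, avoids a ConcurrentModificationException when a worker removes itself while the shutdown loop is running.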
+ for (PgServerThread c : new ArrayList<>(running)) { + c.close(); + try { + Thread t = c.getThread(); + if (t != null) { + t.join(100); + } + } catch (Exception e) { + // TODO log exception + e.printStackTrace(); + } + } + } + + @Override + public boolean isRunning(boolean traceError) { + if (serverSocket == null) { + return false; + } + try { + Socket s = NetUtils.createLoopbackSocket(serverSocket.getLocalPort(), false); + s.close(); + return true; + } catch (Exception e) { + if (traceError) { + traceError(e); + } + return false; + } + } + + /** + * Get the thread with the given process id. + * + * @param processId the process id + * @return the thread + */ + PgServerThread getThread(int processId) { + for (PgServerThread c : new ArrayList<>(running)) { + if (c.getProcessId() == processId) { + return c; + } + } + return null; + } + + String getBaseDir() { + return baseDir; + } + + @Override + public boolean getAllowOthers() { + return allowOthers; + } + + @Override + public String getType() { + return "PG"; + } + + @Override + public String getName() { + return "H2 PG Server"; + } + + boolean getIfExists() { + return ifExists; + } + + /** + * The Java implementation of the PostgreSQL function pg_get_indexdef. The + * method is used to get CREATE INDEX command for an index, or the column + * definition of one column in the index. 
+ * + * @param conn the connection + * @param indexId the index id + * @param ordinalPosition the ordinal position (null if the SQL statement + * should be returned) + * @param pretty this flag is ignored + * @return the SQL statement or the column name + */ + @SuppressWarnings("unused") + public static String getIndexColumn(Connection conn, int indexId, + Integer ordinalPosition, Boolean pretty) throws SQLException { + if (ordinalPosition == null || ordinalPosition.intValue() == 0) { + PreparedStatement prep = conn.prepareStatement( + "select sql from information_schema.indexes where id=?"); + prep.setInt(1, indexId); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + return rs.getString(1); + } + return ""; + } + PreparedStatement prep = conn.prepareStatement( + "select column_name from information_schema.indexes " + + "where id=? and ordinal_position=?"); + prep.setInt(1, indexId); + prep.setInt(2, ordinalPosition.intValue()); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + return rs.getString(1); + } + return ""; + } + + /** + * Get the name of the current schema. + * This method is called by the database. + * + * @param conn the connection + * @return the schema name + */ + public static String getCurrentSchema(Connection conn) throws SQLException { + ResultSet rs = conn.createStatement().executeQuery("call schema()"); + rs.next(); + return rs.getString(1); + } + + /** + * Get the OID of an object. This method is called by the database. 
+ * + * @param conn the connection + * @param tableName the table name + * @return the oid + */ + public static int getOid(Connection conn, String tableName) + throws SQLException { + if (tableName.startsWith("\"") && tableName.endsWith("\"")) { + tableName = tableName.substring(1, tableName.length() - 1); + } + PreparedStatement prep = conn.prepareStatement( + "select oid from pg_class where relName = ?"); + prep.setString(1, tableName); + ResultSet rs = prep.executeQuery(); + if (!rs.next()) { + return 0; + } + return rs.getInt(1); + } + + /** + * Get the name of this encoding code. + * This method is called by the database. + * + * @param code the encoding code + * @return the encoding name + */ + public static String getEncodingName(int code) { + switch (code) { + case 0: + return "SQL_ASCII"; + case 6: + return "UTF8"; + case 8: + return "LATIN1"; + default: + return code < 40 ? "UTF8" : ""; + } + } + + /** + * Get the version. This method must return PostgreSQL to keep some clients + * happy. This method is called by the database. + * + * @return the server name and version + */ + public static String getVersion() { + return "PostgreSQL " + Constants.PG_VERSION + " server protocol using H2 " + + Constants.getFullVersion(); + } + + /** + * Get the current system time. + * This method is called by the database. + * + * @return the current system time + */ + public static Timestamp getStartTime() { + return new Timestamp(System.currentTimeMillis()); + } + + /** + * Get the user name for this id. + * This method is called by the database. 
+ * + * @param conn the connection + * @param id the user id + * @return the user name + */ + public static String getUserById(Connection conn, int id) throws SQLException { + PreparedStatement prep = conn.prepareStatement( + "SELECT NAME FROM INFORMATION_SCHEMA.USERS WHERE ID=?"); + prep.setInt(1, id); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + return rs.getString(1); + } + return null; + } + + /** + * Check if the this session has the given database privilege. + * This method is called by the database. + * + * @param id the session id + * @param privilege the privilege to check + * @return true + */ + @SuppressWarnings("unused") + public static boolean hasDatabasePrivilege(int id, String privilege) { + return true; + } + + /** + * Check if the current session has access to this table. + * This method is called by the database. + * + * @param table the table name + * @param privilege the privilege to check + * @return true + */ + @SuppressWarnings("unused") + public static boolean hasTablePrivilege(String table, String privilege) { + return true; + } + + /** + * Get the current transaction id. + * This method is called by the database. + * + * @param table the table name + * @param id the id + * @return 1 + */ + @SuppressWarnings("unused") + public static int getCurrentTid(String table, String id) { + return 1; + } + + /** + * A fake wrapper around pg_get_expr(expr_text, relation_oid), in PostgreSQL + * it "decompiles the internal form of an expression, assuming that any vars + * in it refer to the relation indicated by the second parameter". + * + * @param exprText the expression text + * @param relationOid the relation object id + * @return always null + */ + @SuppressWarnings("unused") + public static String getPgExpr(String exprText, int relationOid) { + return null; + } + + /** + * Check if the current session has access to this table. + * This method is called by the database. 
+ * + * @param conn the connection + * @param pgType the PostgreSQL type oid + * @param typeMod the type modifier (typically -1) + * @return the name of the given type + */ + public static String formatType(Connection conn, int pgType, int typeMod) + throws SQLException { + PreparedStatement prep = conn.prepareStatement( + "select typname from pg_catalog.pg_type where oid = ? and typtypmod = ?"); + prep.setInt(1, pgType); + prep.setInt(2, typeMod); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + return rs.getString(1); + } + return null; + } + + /** + * Convert the SQL type to a PostgreSQL type + * + * @param type the SQL type + * @return the PostgreSQL type + */ + public static int convertType(final int type) { + switch (type) { + case Types.BOOLEAN: + return PG_TYPE_BOOL; + case Types.VARCHAR: + return PG_TYPE_VARCHAR; + case Types.CLOB: + return PG_TYPE_TEXT; + case Types.CHAR: + return PG_TYPE_BPCHAR; + case Types.SMALLINT: + return PG_TYPE_INT2; + case Types.INTEGER: + return PG_TYPE_INT4; + case Types.BIGINT: + return PG_TYPE_INT8; + case Types.DECIMAL: + return PG_TYPE_NUMERIC; + case Types.REAL: + return PG_TYPE_FLOAT4; + case Types.DOUBLE: + return PG_TYPE_FLOAT8; + case Types.TIME: + return PG_TYPE_TIME; + case Types.DATE: + return PG_TYPE_DATE; + case Types.TIMESTAMP: + return PG_TYPE_TIMESTAMP_NO_TMZONE; + case Types.VARBINARY: + return PG_TYPE_BYTEA; + case Types.BLOB: + return PG_TYPE_OID; + case Types.ARRAY: + return PG_TYPE_TEXTARRAY; + default: + return PG_TYPE_UNKNOWN; + } + } + + /** + * Get the type hash set. + * + * @return the type set + */ + HashSet getTypeSet() { + return typeSet; + } + + /** + * Check whether a data type is supported. + * A warning is logged if not. + * + * @param type the type + */ + void checkType(int type) { + if (!typeSet.contains(type)) { + trace("Unsupported type: " + type); + } + } + + /** + * If no key is set, return the original database name. If a key is set, + * check if the key matches. 
If yes, return the correct database name. If + * not, throw an exception. + * + * @param db the key to test (or database name if no key is used) + * @return the database name + * @throws DbException if a key is set but doesn't match + */ + public String checkKeyAndGetDatabaseName(String db) { + if (key == null) { + return db; + } + if (key.equals(db)) { + return keyDatabase; + } + throw DbException.get(ErrorCode.WRONG_USER_OR_PASSWORD); + } + + @Override + public boolean isDaemon() { + return isDaemon; + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/pg/PgServerThread.java b/modules/h2/src/main/java/org/h2/server/pg/PgServerThread.java new file mode 100644 index 0000000000000..bacce5158398e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/pg/PgServerThread.java @@ -0,0 +1,1119 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.pg; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.DataInputStream; +import java.io.DataOutputStream; +import java.io.EOFException; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.Reader; +import java.io.StringReader; +import java.net.Socket; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.sql.ParameterMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Properties; + +import org.h2.command.CommandInterface; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.jdbc.JdbcConnection; +import 
org.h2.jdbc.JdbcPreparedStatement; +import org.h2.jdbc.JdbcResultSet; +import org.h2.jdbc.JdbcStatement; +import org.h2.message.DbException; +import org.h2.util.DateTimeUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.MathUtils; +import org.h2.util.ScriptReader; +import org.h2.util.StringUtils; +import org.h2.util.Utils; +import org.h2.value.CaseInsensitiveMap; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueNull; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; + +/** + * One server thread is opened for each client. + */ +public class PgServerThread implements Runnable { + private static final boolean INTEGER_DATE_TYPES = false; + + private final PgServer server; + private Socket socket; + private Connection conn; + private boolean stop; + private DataInputStream dataInRaw; + private DataInputStream dataIn; + private OutputStream out; + private int messageType; + private ByteArrayOutputStream outBuffer; + private DataOutputStream dataOut; + private Thread thread; + private boolean initDone; + private String userName; + private String databaseName; + private int processId; + private final int secret; + private JdbcStatement activeRequest; + private String clientEncoding = SysProperties.PG_DEFAULT_CLIENT_ENCODING; + private String dateStyle = "ISO, MDY"; + private final HashMap<String, Prepared> prepared = + new CaseInsensitiveMap<>(); + private final HashMap<String, Portal> portals = + new CaseInsensitiveMap<>(); + + PgServerThread(Socket socket, PgServer server) { + this.server = server; + this.socket = socket; + this.secret = (int) MathUtils.secureRandomLong(); + } + + @Override + public void run() { + try { + server.trace("Connect"); + InputStream ins = socket.getInputStream(); + out = socket.getOutputStream(); + dataInRaw = new DataInputStream(ins); + while (!stop) { + process(); + out.flush(); + } + } catch (EOFException e) { + // more or less normal disconnect + } catch (Exception e) { + server.traceError(e); + } finally { +
server.trace("Disconnect"); + close(); + } + } + + private String readString() throws IOException { + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + while (true) { + int x = dataIn.read(); + if (x <= 0) { + break; + } + buff.write(x); + } + return new String(buff.toByteArray(), getEncoding()); + } + + private int readInt() throws IOException { + return dataIn.readInt(); + } + + private short readShort() throws IOException { + return dataIn.readShort(); + } + + private byte readByte() throws IOException { + return dataIn.readByte(); + } + + private void readFully(byte[] buff) throws IOException { + dataIn.readFully(buff); + } + + private void process() throws IOException { + int x; + if (initDone) { + x = dataInRaw.read(); + if (x < 0) { + stop = true; + return; + } + } else { + x = 0; + } + int len = dataInRaw.readInt(); + len -= 4; + byte[] data = Utils.newBytes(len); + dataInRaw.readFully(data, 0, len); + dataIn = new DataInputStream(new ByteArrayInputStream(data, 0, len)); + switch (x) { + case 0: + server.trace("Init"); + int version = readInt(); + if (version == 80877102) { + server.trace("CancelRequest"); + int pid = readInt(); + int key = readInt(); + PgServerThread c = server.getThread(pid); + if (c != null && key == c.secret) { + c.cancelRequest(); + } else { + // According to the PostgreSQL documentation, when canceling + // a request, if an invalid secret is provided then no + // exception should be sent back to the client. + server.trace("Invalid CancelRequest: pid=" + pid + ", key=" + key); + } + close(); + } else if (version == 80877103) { + server.trace("SSLRequest"); + out.write('N'); + } else { + server.trace("StartupMessage"); + server.trace(" version " + version + + " (" + (version >> 16) + "." 
+ (version & 0xff) + ")"); + while (true) { + String param = readString(); + if (param.length() == 0) { + break; + } + String value = readString(); + if ("user".equals(param)) { + this.userName = value; + } else if ("database".equals(param)) { + this.databaseName = server.checkKeyAndGetDatabaseName(value); + } else if ("client_encoding".equals(param)) { + // UTF8 + clientEncoding = value; + } else if ("DateStyle".equals(param)) { + if (value.indexOf(',') < 0) { + value += ", MDY"; + } + dateStyle = value; + } + // extra_float_digits 2 + // geqo on (Genetic Query Optimization) + server.trace(" param " + param + "=" + value); + } + sendAuthenticationCleartextPassword(); + initDone = true; + } + break; + case 'p': { + server.trace("PasswordMessage"); + String password = readString(); + try { + Properties info = new Properties(); + info.put("MODE", "PostgreSQL"); + info.put("USER", userName); + info.put("PASSWORD", password); + String url = "jdbc:h2:" + databaseName; + ConnectionInfo ci = new ConnectionInfo(url, info); + String baseDir = server.getBaseDir(); + if (baseDir == null) { + baseDir = SysProperties.getBaseDir(); + } + if (baseDir != null) { + ci.setBaseDir(baseDir); + } + if (server.getIfExists()) { + ci.setProperty("IFEXISTS", "TRUE"); + } + conn = new JdbcConnection(ci, false); + // can not do this because when called inside + // DriverManager.getConnection, a deadlock occurs + // conn = DriverManager.getConnection(url, userName, password); + initDb(); + sendAuthenticationOk(); + } catch (Exception e) { + e.printStackTrace(); + stop = true; + } + break; + } + case 'P': { + server.trace("Parse"); + Prepared p = new Prepared(); + p.name = readString(); + p.sql = getSQL(readString()); + int paramTypesCount = readShort(); + int[] paramTypes = null; + if (paramTypesCount > 0) { + paramTypes = new int[paramTypesCount]; + for (int i = 0; i < paramTypesCount; i++) { + paramTypes[i] = readInt(); + } + } + try { + p.prep = (JdbcPreparedStatement) 
conn.prepareStatement(p.sql); + ParameterMetaData meta = p.prep.getParameterMetaData(); + p.paramType = new int[meta.getParameterCount()]; + for (int i = 0; i < p.paramType.length; i++) { + int type; + if (i < paramTypesCount && paramTypes[i] != 0) { + type = paramTypes[i]; + server.checkType(type); + } else { + type = PgServer.convertType(meta.getParameterType(i + 1)); + } + p.paramType[i] = type; + } + prepared.put(p.name, p); + sendParseComplete(); + } catch (Exception e) { + sendErrorResponse(e); + } + break; + } + case 'B': { + server.trace("Bind"); + Portal portal = new Portal(); + portal.name = readString(); + String prepName = readString(); + Prepared prep = prepared.get(prepName); + if (prep == null) { + sendErrorResponse("Prepared not found"); + break; + } + portal.prep = prep; + portals.put(portal.name, portal); + int formatCodeCount = readShort(); + int[] formatCodes = new int[formatCodeCount]; + for (int i = 0; i < formatCodeCount; i++) { + formatCodes[i] = readShort(); + } + int paramCount = readShort(); + try { + for (int i = 0; i < paramCount; i++) { + setParameter(prep.prep, prep.paramType[i], i, formatCodes); + } + } catch (Exception e) { + sendErrorResponse(e); + break; + } + int resultCodeCount = readShort(); + portal.resultColumnFormat = new int[resultCodeCount]; + for (int i = 0; i < resultCodeCount; i++) { + portal.resultColumnFormat[i] = readShort(); + } + sendBindComplete(); + break; + } + case 'C': { + char type = (char) readByte(); + String name = readString(); + server.trace("Close"); + if (type == 'S') { + Prepared p = prepared.remove(name); + if (p != null) { + JdbcUtils.closeSilently(p.prep); + } + } else if (type == 'P') { + portals.remove(name); + } else { + server.trace("expected S or P, got " + type); + sendErrorResponse("expected S or P"); + break; + } + sendCloseComplete(); + break; + } + case 'D': { + char type = (char) readByte(); + String name = readString(); + server.trace("Describe"); + if (type == 'S') { + Prepared p = 
prepared.get(name); + if (p == null) { + sendErrorResponse("Prepared not found: " + name); + } else { + try { + sendParameterDescription(p.prep.getParameterMetaData(), p.paramType); + sendRowDescription(p.prep.getMetaData()); + } catch (Exception e) { + sendErrorResponse(e); + } + } + } else if (type == 'P') { + Portal p = portals.get(name); + if (p == null) { + sendErrorResponse("Portal not found: " + name); + } else { + PreparedStatement prep = p.prep.prep; + try { + ResultSetMetaData meta = prep.getMetaData(); + sendRowDescription(meta); + } catch (Exception e) { + sendErrorResponse(e); + } + } + } else { + server.trace("expected S or P, got " + type); + sendErrorResponse("expected S or P"); + } + break; + } + case 'E': { + String name = readString(); + server.trace("Execute"); + Portal p = portals.get(name); + if (p == null) { + sendErrorResponse("Portal not found: " + name); + break; + } + int maxRows = readShort(); + Prepared prepared = p.prep; + JdbcPreparedStatement prep = prepared.prep; + server.trace(prepared.sql); + try { + prep.setMaxRows(maxRows); + setActiveRequest(prep); + boolean result = prep.execute(); + if (result) { + try { + ResultSet rs = prep.getResultSet(); + // the meta-data is sent in the prior 'Describe' + while (rs.next()) { + sendDataRow(rs, p.resultColumnFormat); + } + sendCommandComplete(prep, 0); + } catch (Exception e) { + sendErrorResponse(e); + } + } else { + sendCommandComplete(prep, prep.getUpdateCount()); + } + } catch (Exception e) { + if (prep.isCancelled()) { + sendCancelQueryResponse(); + } else { + sendErrorResponse(e); + } + } finally { + setActiveRequest(null); + } + break; + } + case 'S': { + server.trace("Sync"); + sendReadyForQuery(); + break; + } + case 'Q': { + server.trace("Query"); + String query = readString(); + ScriptReader reader = new ScriptReader(new StringReader(query)); + while (true) { + JdbcStatement stat = null; + try { + String s = reader.readStatement(); + if (s == null) { + break; + } + s = 
getSQL(s); + stat = (JdbcStatement) conn.createStatement(); + setActiveRequest(stat); + boolean result = stat.execute(s); + if (result) { + ResultSet rs = stat.getResultSet(); + ResultSetMetaData meta = rs.getMetaData(); + try { + sendRowDescription(meta); + while (rs.next()) { + sendDataRow(rs, null); + } + sendCommandComplete(stat, 0); + } catch (Exception e) { + sendErrorResponse(e); + break; + } + } else { + sendCommandComplete(stat, stat.getUpdateCount()); + } + } catch (SQLException e) { + if (stat != null && stat.isCancelled()) { + sendCancelQueryResponse(); + } else { + sendErrorResponse(e); + } + break; + } finally { + JdbcUtils.closeSilently(stat); + setActiveRequest(null); + } + } + sendReadyForQuery(); + break; + } + case 'X': { + server.trace("Terminate"); + close(); + break; + } + default: + server.trace("Unsupported: " + x + " (" + (char) x + ")"); + break; + } + } + + private String getSQL(String s) { + String lower = StringUtils.toLowerEnglish(s); + if (lower.startsWith("show max_identifier_length")) { + s = "CALL 63"; + } else if (lower.startsWith("set client_encoding to")) { + s = "set DATESTYLE ISO"; + } + // s = StringUtils.replaceAll(s, "i.indkey[ia.attnum-1]", "0"); + if (server.getTrace()) { + server.trace(s + ";"); + } + return s; + } + + private void sendCommandComplete(JdbcStatement stat, int updateCount) + throws IOException { + startMessage('C'); + switch (stat.getLastExecutedCommandType()) { + case CommandInterface.INSERT: + writeStringPart("INSERT 0 "); + writeString(Integer.toString(updateCount)); + break; + case CommandInterface.UPDATE: + writeStringPart("UPDATE "); + writeString(Integer.toString(updateCount)); + break; + case CommandInterface.DELETE: + writeStringPart("DELETE "); + writeString(Integer.toString(updateCount)); + break; + case CommandInterface.SELECT: + case CommandInterface.CALL: + writeString("SELECT"); + break; + case CommandInterface.BEGIN: + writeString("BEGIN"); + break; + default: + server.trace("check 
CommandComplete tag for command " + stat); + writeStringPart("UPDATE "); + writeString(Integer.toString(updateCount)); + } + sendMessage(); + } + + private void sendDataRow(ResultSet rs, int[] formatCodes) throws Exception { + ResultSetMetaData metaData = rs.getMetaData(); + int columns = metaData.getColumnCount(); + startMessage('D'); + writeShort(columns); + for (int i = 1; i <= columns; i++) { + int pgType = PgServer.convertType(metaData.getColumnType(i)); + boolean text = formatAsText(pgType); + if (formatCodes != null) { + if (formatCodes.length == 0) { + text = true; + } else if (formatCodes.length == 1) { + text = formatCodes[0] == 0; + } else if (i - 1 < formatCodes.length) { + text = formatCodes[i - 1] == 0; + } + } + writeDataColumn(rs, i, pgType, text); + } + sendMessage(); + } + + private static long toPostgreDays(long dateValue) { + return DateTimeUtils.prolepticGregorianAbsoluteDayFromDateValue(dateValue) - 10_957; + } + + private void writeDataColumn(ResultSet rs, int column, int pgType, boolean text) + throws Exception { + Value v = ((JdbcResultSet) rs).get(column); + if (v == ValueNull.INSTANCE) { + writeInt(-1); + return; + } + if (text) { + // plain text + switch (pgType) { + case PgServer.PG_TYPE_BOOL: + writeInt(1); + dataOut.writeByte(v.getBoolean() ? 
't' : 'f'); + break; + default: + byte[] data = v.getString().getBytes(getEncoding()); + writeInt(data.length); + write(data); + } + } else { + // binary + switch (pgType) { + case PgServer.PG_TYPE_INT2: + writeInt(2); + writeShort(v.getShort()); + break; + case PgServer.PG_TYPE_INT4: + writeInt(4); + writeInt(v.getInt()); + break; + case PgServer.PG_TYPE_INT8: + writeInt(8); + dataOut.writeLong(v.getLong()); + break; + case PgServer.PG_TYPE_FLOAT4: + writeInt(4); + dataOut.writeFloat(v.getFloat()); + break; + case PgServer.PG_TYPE_FLOAT8: + writeInt(8); + dataOut.writeDouble(v.getDouble()); + break; + case PgServer.PG_TYPE_BYTEA: { + byte[] data = v.getBytesNoCopy(); + writeInt(data.length); + write(data); + break; + } + case PgServer.PG_TYPE_DATE: { + ValueDate d = (ValueDate) v.convertTo(Value.DATE); + writeInt(4); + writeInt((int) (toPostgreDays(d.getDateValue()))); + break; + } + case PgServer.PG_TYPE_TIME: { + ValueTime t = (ValueTime) v.convertTo(Value.TIME); + writeInt(8); + long m = t.getNanos(); + if (INTEGER_DATE_TYPES) { + // long format + m /= 1_000; + } else { + // double format + m = Double.doubleToLongBits(m * 0.000_000_001); + } + dataOut.writeLong(m); + break; + } + case PgServer.PG_TYPE_TIMESTAMP_NO_TMZONE: { + ValueTimestamp t = (ValueTimestamp) v.convertTo(Value.TIMESTAMP); + writeInt(8); + long m = toPostgreDays(t.getDateValue()) * 86_400; + long nanos = t.getTimeNanos(); + if (INTEGER_DATE_TYPES) { + // long format + m = m * 1_000_000 + nanos / 1_000; + } else { + // double format + m = Double.doubleToLongBits(m + nanos * 0.000_000_001); + } + dataOut.writeLong(m); + break; + } + default: throw new IllegalStateException("output binary format is undefined"); + } + } + } + + private Charset getEncoding() { + if ("UNICODE".equals(clientEncoding)) { + return StandardCharsets.UTF_8; + } + return Charset.forName(clientEncoding); + } + + private void setParameter(PreparedStatement prep, + int pgType, int i, int[] formatCodes) throws SQLException, 
IOException { + boolean text = (i >= formatCodes.length) || (formatCodes[i] == 0); + int col = i + 1; + int paramLen = readInt(); + if (paramLen == -1) { + prep.setNull(col, Types.NULL); + } else if (text) { + // plain text + byte[] data = Utils.newBytes(paramLen); + readFully(data); + String str = new String(data, getEncoding()); + switch (pgType) { + case PgServer.PG_TYPE_DATE: { + // Strip timezone offset + int idx = str.indexOf(' '); + if (idx > 0) { + str = str.substring(0, idx); + } + break; + } + case PgServer.PG_TYPE_TIME: { + // Strip timezone offset + int idx = str.indexOf('+'); + if (idx <= 0) { + idx = str.indexOf('-'); + } + if (idx > 0) { + str = str.substring(0, idx); + } + break; + } + } + prep.setString(col, str); + } else { + // binary + switch (pgType) { + case PgServer.PG_TYPE_INT2: + checkParamLength(2, paramLen); + prep.setShort(col, readShort()); + break; + case PgServer.PG_TYPE_INT4: + checkParamLength(4, paramLen); + prep.setInt(col, readInt()); + break; + case PgServer.PG_TYPE_INT8: + checkParamLength(8, paramLen); + prep.setLong(col, dataIn.readLong()); + break; + case PgServer.PG_TYPE_FLOAT4: + checkParamLength(4, paramLen); + prep.setFloat(col, dataIn.readFloat()); + break; + case PgServer.PG_TYPE_FLOAT8: + checkParamLength(8, paramLen); + prep.setDouble(col, dataIn.readDouble()); + break; + case PgServer.PG_TYPE_BYTEA: + byte[] d1 = Utils.newBytes(paramLen); + readFully(d1); + prep.setBytes(col, d1); + break; + default: + server.trace("Binary format for type: "+pgType+" is unsupported"); + byte[] d2 = Utils.newBytes(paramLen); + readFully(d2); + prep.setString(col, new String(d2, getEncoding())); + } + } + } + + private static void checkParamLength(int expected, int got) { + if (expected != got) { + throw DbException.getInvalidValueException("paramLen", got); + } + } + + private void sendErrorResponse(Exception re) throws IOException { + SQLException e = DbException.toSQLException(re); + server.traceError(e); + startMessage('E'); + 
write('S'); + writeString("ERROR"); + write('C'); + writeString(e.getSQLState()); + write('M'); + writeString(e.getMessage()); + write('D'); + writeString(e.toString()); + write(0); + sendMessage(); + } + + private void sendCancelQueryResponse() throws IOException { + server.trace("CancelSuccessResponse"); + startMessage('E'); + write('S'); + writeString("ERROR"); + write('C'); + writeString("57014"); + write('M'); + writeString("canceling statement due to user request"); + write(0); + sendMessage(); + } + + private void sendParameterDescription(ParameterMetaData meta, + int[] paramTypes) throws Exception { + int count = meta.getParameterCount(); + startMessage('t'); + writeShort(count); + for (int i = 0; i < count; i++) { + int type; + if (paramTypes != null && paramTypes[i] != 0) { + type = paramTypes[i]; + } else { + type = PgServer.PG_TYPE_VARCHAR; + } + server.checkType(type); + writeInt(type); + } + sendMessage(); + } + + private void sendNoData() throws IOException { + startMessage('n'); + sendMessage(); + } + + private void sendRowDescription(ResultSetMetaData meta) throws Exception { + if (meta == null) { + sendNoData(); + } else { + int columns = meta.getColumnCount(); + int[] types = new int[columns]; + int[] precision = new int[columns]; + String[] names = new String[columns]; + for (int i = 0; i < columns; i++) { + String name = meta.getColumnName(i + 1); + names[i] = name; + int type = meta.getColumnType(i + 1); + int pgType = PgServer.convertType(type); + // the ODBC client needs the column pg_catalog.pg_index + // to be of type 'int2vector' + // if (name.equalsIgnoreCase("indkey") && + // "pg_index".equalsIgnoreCase( + // meta.getTableName(i + 1))) { + // type = PgServer.PG_TYPE_INT2VECTOR; + // } + precision[i] = meta.getColumnDisplaySize(i + 1); + if (type != Types.NULL) { + server.checkType(pgType); + } + types[i] = pgType; + } + startMessage('T'); + writeShort(columns); + for (int i = 0; i < columns; i++) { + 
writeString(StringUtils.toLowerEnglish(names[i])); + // object ID + writeInt(0); + // attribute number of the column + writeShort(0); + // data type + writeInt(types[i]); + // pg_type.typlen + writeShort(getTypeSize(types[i], precision[i])); + // pg_attribute.atttypmod + writeInt(-1); + // the format type: text = 0, binary = 1 + writeShort(formatAsText(types[i]) ? 0 : 1); + } + sendMessage(); + } + } + + /** + * Check whether the given type should be formatted as text. + * + * @return true for text + */ + private static boolean formatAsText(int pgType) { + switch (pgType) { + // TODO: add more types to send as binary once compatibility is + // confirmed + case PgServer.PG_TYPE_BYTEA: + return false; + } + return true; + } + + private static int getTypeSize(int pgType, int precision) { + switch (pgType) { + case PgServer.PG_TYPE_BOOL: + return 1; + case PgServer.PG_TYPE_VARCHAR: + return Math.max(255, precision + 10); + default: + return precision + 4; + } + } + + private void sendErrorResponse(String message) throws IOException { + server.trace("Exception: " + message); + startMessage('E'); + write('S'); + writeString("ERROR"); + write('C'); + // PROTOCOL VIOLATION + writeString("08P01"); + write('M'); + writeString(message); + sendMessage(); + } + + private void sendParseComplete() throws IOException { + startMessage('1'); + sendMessage(); + } + + private void sendBindComplete() throws IOException { + startMessage('2'); + sendMessage(); + } + + private void sendCloseComplete() throws IOException { + startMessage('3'); + sendMessage(); + } + + private void initDb() throws SQLException { + Statement stat = null; + try { + synchronized (server) { + // better would be: set the database to exclusive mode + boolean tableFound; + try (ResultSet rs = conn.getMetaData().getTables(null, "PG_CATALOG", "PG_VERSION", null)) { + tableFound = rs.next(); + } + stat = conn.createStatement(); + if (!tableFound) { + installPgCatalog(stat); + } + try (ResultSet rs =
stat.executeQuery("select * from pg_catalog.pg_version")) { + if (!rs.next() || rs.getInt(1) < 2) { + // installation incomplete, or old version + installPgCatalog(stat); + } else { + // version 2 or newer: check the read version + int versionRead = rs.getInt(2); + if (versionRead > 2) { + throw DbException.throwInternalError("Incompatible PG_VERSION"); + } + } + } + } + stat.execute("set search_path = PUBLIC, pg_catalog"); + HashSet typeSet = server.getTypeSet(); + if (typeSet.isEmpty()) { + try (ResultSet rs = stat.executeQuery("select oid from pg_catalog.pg_type")) { + while (rs.next()) { + typeSet.add(rs.getInt(1)); + } + } + } + } finally { + JdbcUtils.closeSilently(stat); + } + } + + private static void installPgCatalog(Statement stat) throws SQLException { + try (Reader r = new InputStreamReader(new ByteArrayInputStream(Utils + .getResource("/org/h2/server/pg/pg_catalog.sql")))) { + ScriptReader reader = new ScriptReader(r); + while (true) { + String sql = reader.readStatement(); + if (sql == null) { + break; + } + stat.execute(sql); + } + reader.close(); + } catch (IOException e) { + throw DbException.convertIOException(e, "Can not read pg_catalog resource"); + } + } + + /** + * Close this connection. 
+ */ + void close() { + try { + stop = true; + JdbcUtils.closeSilently(conn); + if (socket != null) { + socket.close(); + } + server.trace("Close"); + } catch (Exception e) { + server.traceError(e); + } + conn = null; + socket = null; + server.remove(this); + } + + private void sendAuthenticationCleartextPassword() throws IOException { + startMessage('R'); + writeInt(3); + sendMessage(); + } + + private void sendAuthenticationOk() throws IOException { + startMessage('R'); + writeInt(0); + sendMessage(); + sendParameterStatus("client_encoding", clientEncoding); + sendParameterStatus("DateStyle", dateStyle); + sendParameterStatus("integer_datetimes", "off"); + sendParameterStatus("is_superuser", "off"); + sendParameterStatus("server_encoding", "SQL_ASCII"); + sendParameterStatus("server_version", Constants.PG_VERSION); + sendParameterStatus("session_authorization", userName); + sendParameterStatus("standard_conforming_strings", "off"); + // TODO PostgreSQL TimeZone + sendParameterStatus("TimeZone", "CET"); + sendParameterStatus("integer_datetimes", INTEGER_DATE_TYPES ? 
"on" : "off"); + sendBackendKeyData(); + sendReadyForQuery(); + } + + private void sendReadyForQuery() throws IOException { + startMessage('Z'); + char c; + try { + if (conn.getAutoCommit()) { + // idle + c = 'I'; + } else { + // in a transaction block + c = 'T'; + } + } catch (SQLException e) { + // failed transaction block + c = 'E'; + } + write((byte) c); + sendMessage(); + } + + private void sendBackendKeyData() throws IOException { + startMessage('K'); + writeInt(processId); + writeInt(secret); + sendMessage(); + } + + private void writeString(String s) throws IOException { + writeStringPart(s); + write(0); + } + + private void writeStringPart(String s) throws IOException { + write(s.getBytes(getEncoding())); + } + + private void writeInt(int i) throws IOException { + dataOut.writeInt(i); + } + + private void writeShort(int i) throws IOException { + dataOut.writeShort(i); + } + + private void write(byte[] data) throws IOException { + dataOut.write(data); + } + + private void write(int b) throws IOException { + dataOut.write(b); + } + + private void startMessage(int newMessageType) { + this.messageType = newMessageType; + outBuffer = new ByteArrayOutputStream(); + dataOut = new DataOutputStream(outBuffer); + } + + private void sendMessage() throws IOException { + dataOut.flush(); + byte[] buff = outBuffer.toByteArray(); + int len = buff.length; + dataOut = new DataOutputStream(out); + dataOut.write(messageType); + dataOut.writeInt(len + 4); + dataOut.write(buff); + dataOut.flush(); + } + + private void sendParameterStatus(String param, String value) + throws IOException { + startMessage('S'); + writeString(param); + writeString(value); + sendMessage(); + } + + void setThread(Thread thread) { + this.thread = thread; + } + + Thread getThread() { + return thread; + } + + void setProcessId(int id) { + this.processId = id; + } + + int getProcessId() { + return this.processId; + } + + private synchronized void setActiveRequest(JdbcStatement statement) { + 
activeRequest = statement; + } + + /** + * Kill a currently running query on this thread. + */ + private synchronized void cancelRequest() { + if (activeRequest != null) { + try { + activeRequest.cancel(); + activeRequest = null; + } catch (SQLException e) { + throw DbException.convert(e); + } + } + } + + /** + * Represents a PostgreSQL Prepared object. + */ + static class Prepared { + + /** + * The object name. + */ + String name; + + /** + * The SQL statement. + */ + String sql; + + /** + * The prepared statement. + */ + JdbcPreparedStatement prep; + + /** + * The list of parameter types (if set). + */ + int[] paramType; + } + + /** + * Represents a PostgreSQL Portal object. + */ + static class Portal { + + /** + * The portal name. + */ + String name; + + /** + * The format used in the result set columns (if set). + */ + int[] resultColumnFormat; + + /** + * The prepared object. + */ + Prepared prep; + } +} diff --git a/modules/h2/src/main/java/org/h2/server/pg/package.html b/modules/h2/src/main/java/org/h2/server/pg/package.html new file mode 100644 index 0000000000000..d5b2b738e48ac --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/pg/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +PostgreSQL server implementation of this database. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/pg/pg_catalog.sql b/modules/h2/src/main/java/org/h2/server/pg/pg_catalog.sql new file mode 100644 index 0000000000000..4937295d5e94a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/pg/pg_catalog.sql @@ -0,0 +1,378 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +; +drop schema if exists pg_catalog cascade; +create schema pg_catalog; + +drop alias if exists pg_convertType; +create alias pg_convertType deterministic for "org.h2.server.pg.PgServer.convertType"; + +drop alias if exists pg_get_oid; +create alias pg_get_oid deterministic for "org.h2.server.pg.PgServer.getOid"; + +create table pg_catalog.pg_version as select 2 as version, 2 as version_read; +grant select on pg_catalog.pg_version to PUBLIC; + +create view pg_catalog.pg_roles -- (oid, rolname, rolcreaterole, rolcreatedb) +as +select + id oid, + cast(name as varchar_ignorecase) rolname, + case when admin then 't' else 'f' end as rolcreaterole, + case when admin then 't' else 'f' end as rolcreatedb +from INFORMATION_SCHEMA.users; +grant select on pg_catalog.pg_roles to PUBLIC; + +create view pg_catalog.pg_namespace -- (oid, nspname) +as +select + id oid, + cast(schema_name as varchar_ignorecase) nspname +from INFORMATION_SCHEMA.schemata; +grant select on pg_catalog.pg_namespace to PUBLIC; + +create table pg_catalog.pg_type( + oid int primary key, + typname varchar_ignorecase, + typnamespace int, + typlen int, + typtype varchar, + typbasetype int, + typtypmod int, + typnotnull boolean, + typinput varchar +); +grant select on pg_catalog.pg_type to PUBLIC; + +insert into pg_catalog.pg_type +select + pg_convertType(data_type) oid, + cast(type_name as varchar_ignorecase) typname, + (select oid from pg_catalog.pg_namespace where nspname = 'pg_catalog') typnamespace, + -1 typlen, + 'c' typtype, 
+ 0 typbasetype, + -1 typtypmod, + false typnotnull, + null typinput +from INFORMATION_SCHEMA.type_info +where pos = 0 + and pg_convertType(data_type) <> 705; -- not unknown + +merge into pg_catalog.pg_type values( + 19, + 'name', + (select oid from pg_catalog.pg_namespace where nspname = 'pg_catalog'), + -1, + 'c', + 0, + -1, + false, + null +); +merge into pg_catalog.pg_type values( + 0, + 'null', + (select oid from pg_catalog.pg_namespace where nspname = 'pg_catalog'), + -1, + 'c', + 0, + -1, + false, + null +); +merge into pg_catalog.pg_type values( + 22, + 'int2vector', + (select oid from pg_catalog.pg_namespace where nspname = 'pg_catalog'), + -1, + 'c', + 0, + -1, + false, + null +); +merge into pg_catalog.pg_type values( + 2205, + 'regproc', + (select oid from pg_catalog.pg_namespace where nspname = 'pg_catalog'), + 4, + 'b', + 0, + -1, + false, + null +); + +create domain regproc as varchar_ignorecase; + +create view pg_catalog.pg_class -- (oid, relname, relnamespace, relkind, relam, reltuples, reltablespace, relpages, relhasindex, relhasrules, relhasoids, relchecks, reltriggers) +as +select + id oid, + cast(table_name as varchar_ignorecase) relname, + (select id from INFORMATION_SCHEMA.schemata where schema_name = table_schema) relnamespace, + case table_type when 'TABLE' then 'r' else 'v' end relkind, + 0 relam, + cast(0 as float) reltuples, + 0 reltablespace, + 0 relpages, + false relhasindex, + false relhasrules, + false relhasoids, + cast(0 as smallint) relchecks, + (select count(*) from INFORMATION_SCHEMA.triggers t where t.table_schema = table_schema and t.table_name = table_name) reltriggers +from INFORMATION_SCHEMA.tables +union all +select + id oid, + cast(index_name as varchar_ignorecase) relname, + (select id from INFORMATION_SCHEMA.schemata where schema_name = table_schema) relnamespace, + 'i' relkind, + 0 relam, + cast(0 as float) reltuples, + 0 reltablespace, + 0 relpages, + true relhasindex, + false relhasrules, + false relhasoids, + cast(0 
as smallint) relchecks, + 0 reltriggers +from INFORMATION_SCHEMA.indexes; +grant select on pg_catalog.pg_class to PUBLIC; + +create table pg_catalog.pg_proc( + oid int, + proname varchar_ignorecase, + prorettype int, + pronamespace int +); +grant select on pg_catalog.pg_proc to PUBLIC; + +create table pg_catalog.pg_trigger( + oid int, + tgconstrrelid int, + tgfoid int, + tgargs int, + tgnargs int, + tgdeferrable boolean, + tginitdeferred boolean, + tgconstrname varchar_ignorecase, + tgrelid int +); +grant select on pg_catalog.pg_trigger to PUBLIC; + +create view pg_catalog.pg_attrdef -- (oid, adsrc, adrelid, adnum) +as +select + id oid, + 0 adsrc, + 0 adrelid, + 0 adnum, + null adbin +from INFORMATION_SCHEMA.tables where 1=0; +grant select on pg_catalog.pg_attrdef to PUBLIC; + +create view pg_catalog.pg_attribute -- (oid, attrelid, attname, atttypid, attlen, attnum, atttypmod, attnotnull, attisdropped, atthasdef) +as +select + t.id*10000 + c.ordinal_position oid, + t.id attrelid, + c.column_name attname, + pg_convertType(data_type) atttypid, + case when numeric_precision > 255 then -1 else numeric_precision end attlen, + c.ordinal_position attnum, + -1 atttypmod, + case c.is_nullable when 'YES' then false else true end attnotnull, + false attisdropped, + false atthasdef +from INFORMATION_SCHEMA.tables t, INFORMATION_SCHEMA.columns c +where t.table_name = c.table_name +and t.table_schema = c.table_schema +union all +select + 1000000 + t.id*10000 + c.ordinal_position oid, + i.id attrelid, + c.column_name attname, + pg_convertType(data_type) atttypid, + case when numeric_precision > 255 then -1 else numeric_precision end attlen, + c.ordinal_position attnum, + -1 atttypmod, + case c.is_nullable when 'YES' then false else true end attnotnull, + false attisdropped, + false atthasdef +from INFORMATION_SCHEMA.tables t, INFORMATION_SCHEMA.indexes i, INFORMATION_SCHEMA.columns c +where t.table_name = i.table_name +and t.table_schema = i.table_schema +and t.table_name = 
c.table_name +and t.table_schema = c.table_schema; +grant select on pg_catalog.pg_attribute to PUBLIC; + +create view pg_catalog.pg_index -- (oid, indexrelid, indrelid, indisclustered, indisunique, indisprimary, indexprs, indkey, indpred) +as +select + i.id oid, + i.id indexrelid, + t.id indrelid, + false indisclustered, + not non_unique indisunique, + primary_key indisprimary, + cast('' as varchar_ignorecase) indexprs, + cast(1 as array) indkey, + null indpred +from INFORMATION_SCHEMA.indexes i, INFORMATION_SCHEMA.tables t +where i.table_schema = t.table_schema +and i.table_name = t.table_name +and i.ordinal_position = 1 +-- workaround for MS Access problem opening tables with primary key +and 1=0; +grant select on pg_catalog.pg_index to PUBLIC; + +drop alias if exists pg_get_indexdef; +create alias pg_get_indexdef for "org.h2.server.pg.PgServer.getIndexColumn"; + +drop alias if exists pg_catalog.pg_get_indexdef; +create alias pg_catalog.pg_get_indexdef for "org.h2.server.pg.PgServer.getIndexColumn"; + +drop alias if exists pg_catalog.pg_get_expr; +create alias pg_catalog.pg_get_expr for "org.h2.server.pg.PgServer.getPgExpr"; + +drop alias if exists pg_catalog.format_type; +create alias pg_catalog.format_type for "org.h2.server.pg.PgServer.formatType"; + +drop alias if exists version; +create alias version for "org.h2.server.pg.PgServer.getVersion"; + +drop alias if exists current_schema; +create alias current_schema for "org.h2.server.pg.PgServer.getCurrentSchema"; + +drop alias if exists pg_encoding_to_char; +create alias pg_encoding_to_char for "org.h2.server.pg.PgServer.getEncodingName"; + +drop alias if exists pg_postmaster_start_time; +create alias pg_postmaster_start_time for "org.h2.server.pg.PgServer.getStartTime"; + +drop alias if exists pg_get_userbyid; +create alias pg_get_userbyid for "org.h2.server.pg.PgServer.getUserById"; + +drop alias if exists has_database_privilege; +create alias has_database_privilege for 
"org.h2.server.pg.PgServer.hasDatabasePrivilege"; + +drop alias if exists has_table_privilege; +create alias has_table_privilege for "org.h2.server.pg.PgServer.hasTablePrivilege"; + +drop alias if exists currtid2; +create alias currtid2 for "org.h2.server.pg.PgServer.getCurrentTid"; + +create table pg_catalog.pg_database( + oid int, + datname varchar_ignorecase, + encoding int, + datlastsysoid int, + datallowconn boolean, + datconfig array, -- text[] + datacl array, -- aclitem[] + datdba int, + dattablespace int +); +grant select on pg_catalog.pg_database to PUBLIC; + +insert into pg_catalog.pg_database values( + 0, -- oid + 'postgres', -- datname + 6, -- encoding, UTF8 + 100000, -- datlastsysoid + true, -- datallowconn + null, -- datconfig + null, -- datacl + select min(id) from INFORMATION_SCHEMA.users where admin=true, -- datdba + 0 -- dattablespace +); + +create table pg_catalog.pg_tablespace( + oid int, + spcname varchar_ignorecase, + spclocation varchar_ignorecase, + spcowner int, + spcacl array -- aclitem[] +); +grant select on pg_catalog.pg_tablespace to PUBLIC; + +insert into pg_catalog.pg_tablespace values( + 0, + 'main', -- spcname + '?', -- spclocation + 0, -- spcowner, + null -- spcacl +); + +create table pg_catalog.pg_settings( + oid int, + name varchar_ignorecase, + setting varchar_ignorecase +); +grant select on pg_catalog.pg_settings to PUBLIC; + +insert into pg_catalog.pg_settings values +(0, 'autovacuum', 'on'), +(1, 'stats_start_collector', 'on'), +(2, 'stats_row_level', 'on'); + +create view pg_catalog.pg_user -- oid, usename, usecreatedb, usesuper +as +select + id oid, + cast(name as varchar_ignorecase) usename, + true usecreatedb, + true usesuper +from INFORMATION_SCHEMA.users; +grant select on pg_catalog.pg_user to PUBLIC; + +create table pg_catalog.pg_authid( + oid int, + rolname varchar_ignorecase, + rolsuper boolean, + rolinherit boolean, + rolcreaterole boolean, + rolcreatedb boolean, + rolcatupdate boolean, + rolcanlogin boolean, + 
rolconnlimit boolean, + rolpassword boolean, + rolvaliduntil timestamp, -- timestamptz + rolconfig array -- text[] +); +grant select on pg_catalog.pg_authid to PUBLIC; + +create table pg_catalog.pg_am(oid int, amname varchar_ignorecase); +grant select on pg_catalog.pg_am to PUBLIC; +insert into pg_catalog.pg_am values(0, 'btree'); +insert into pg_catalog.pg_am values(1, 'hash'); + +create table pg_catalog.pg_description -- (objoid, objsubid, classoid, description) +as +select + oid objoid, + 0 objsubid, + -1 classoid, + cast(datname as varchar_ignorecase) description +from pg_catalog.pg_database; +grant select on pg_catalog.pg_description to PUBLIC; + +create table pg_catalog.pg_group -- oid, groname +as +select + 0 oid, + cast('' as varchar_ignorecase) groname +from pg_catalog.pg_database where 1=0; +grant select on pg_catalog.pg_group to PUBLIC; + +create table pg_catalog.pg_inherits( + inhrelid int, + inhparent int, + inhseqno int +); +grant select on pg_catalog.pg_inherits to PUBLIC; diff --git a/modules/h2/src/main/java/org/h2/server/web/ConnectionInfo.java b/modules/h2/src/main/java/org/h2/server/web/ConnectionInfo.java new file mode 100644 index 0000000000000..ea1dca9f86816 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/ConnectionInfo.java @@ -0,0 +1,66 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import org.h2.util.StringUtils; + +/** + * The connection info object is a wrapper for database connection information + * such as the database URL, user name and password. + * This class is used by the H2 Console. + */ +public class ConnectionInfo implements Comparable<ConnectionInfo> { + /** + * The driver class name. + */ + public String driver; + + /** + * The database URL. + */ + public String url; + + /** + * The user name. + */ + public String user; + + /** + * The connection display name. 
+ */ + String name; + + /** + * The last time this connection was used. + */ + int lastAccess; + + ConnectionInfo() { + // nothing to do + } + + public ConnectionInfo(String data) { + String[] array = StringUtils.arraySplit(data, '|', false); + name = get(array, 0); + driver = get(array, 1); + url = get(array, 2); + user = get(array, 3); + } + + private static String get(String[] array, int i) { + return array != null && array.length > i ? array[i] : ""; + } + + String getString() { + return StringUtils.arrayCombine(new String[] { name, driver, url, user }, '|'); + } + + @Override + public int compareTo(ConnectionInfo o) { + return -Integer.compare(lastAccess, o.lastAccess); + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/web/DbStarter.java b/modules/h2/src/main/java/org/h2/server/web/DbStarter.java new file mode 100644 index 0000000000000..16cd0b0fa17db --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/DbStarter.java @@ -0,0 +1,93 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.Statement; + +import javax.servlet.ServletContext; +import javax.servlet.ServletContextEvent; +import javax.servlet.ServletContextListener; + +import org.h2.tools.Server; +import org.h2.util.StringUtils; + +/** + * This class can be used to start the H2 TCP server (or other H2 servers, for + * example the PG server) inside a web application container such as Tomcat or + * Jetty. It can also open a database connection. 
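The ConnectionInfo class above persists each saved connection as a single '|'-separated record via StringUtils.arrayCombine and restores it with arraySplit. A minimal JDK-only sketch of that round trip (illustrative names; unlike H2's helpers, this does not escape a literal '|' inside a field):

```java
// Sketch of ConnectionInfo.getString() / ConnectionInfo(String data):
// the four fields name|driver|url|user are joined and split back.
public class ConnectionRecordSketch {
    public static String combine(String[] parts) {
        return String.join("|", parts);
    }

    public static String[] split(String data) {
        // limit -1 keeps trailing empty fields, matching a record with a blank user
        return data.split("\\|", -1);
    }

    public static void main(String[] args) {
        String data = combine(new String[] {"Test DB", "org.h2.Driver", "jdbc:h2:~/test", "sa"});
        String[] fields = split(data);
        System.out.println(fields[2]); // the database URL field
    }
}
```

H2's real helpers additionally backslash-escape the separator, which is why the class can round-trip arbitrary display names.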
+ */ +public class DbStarter implements ServletContextListener { + + private Connection conn; + private Server server; + + @Override + public void contextInitialized(ServletContextEvent servletContextEvent) { + try { + org.h2.Driver.load(); + + // This will get the setting from a context-param in web.xml if + // defined: + ServletContext servletContext = servletContextEvent.getServletContext(); + String url = getParameter(servletContext, "db.url", "jdbc:h2:~/test"); + String user = getParameter(servletContext, "db.user", "sa"); + String password = getParameter(servletContext, "db.password", "sa"); + + // Start the server if configured to do so + String serverParams = getParameter(servletContext, "db.tcpServer", null); + if (serverParams != null) { + String[] params = StringUtils.arraySplit(serverParams, ' ', true); + server = Server.createTcpServer(params); + server.start(); + } + + // To access the database in server mode, use the database URL: + // jdbc:h2:tcp://localhost/~/test + conn = DriverManager.getConnection(url, user, password); + servletContext.setAttribute("connection", conn); + } catch (Exception e) { + e.printStackTrace(); + } + } + + private static String getParameter(ServletContext servletContext, + String key, String defaultValue) { + String value = servletContext.getInitParameter(key); + return value == null ? defaultValue : value; + } + + /** + * Get the connection. 
+ * + * @return the connection + */ + public Connection getConnection() { + return conn; + } + + @Override + public void contextDestroyed(ServletContextEvent servletContextEvent) { + try { + Statement stat = conn.createStatement(); + stat.execute("SHUTDOWN"); + stat.close(); + } catch (Exception e) { + e.printStackTrace(); + } + try { + conn.close(); + } catch (Exception e) { + e.printStackTrace(); + } + if (server != null) { + server.stop(); + server = null; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/web/PageParser.java b/modules/h2/src/main/java/org/h2/server/web/PageParser.java new file mode 100644 index 0000000000000..e80dacc2a28ff --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/PageParser.java @@ -0,0 +1,345 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import java.text.ParseException; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import org.h2.util.New; + +/** + * A page parser can parse an HTML page and replace the tags there. + * This class is used by the H2 Console. + */ +public class PageParser { + private static final int TAB_WIDTH = 4; + + private final String page; + private int pos; + private final Map<String, Object> settings; + private final int len; + private StringBuilder result; + + private PageParser(String page, Map<String, Object> settings, int pos) { + this.page = page; + this.pos = pos; + this.len = page.length(); + this.settings = settings; + result = new StringBuilder(len); + } + + /** + * Replace the tags in the HTML page with the given settings. 
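PageParser's simplest job is substituting ${key} placeholders from the settings map (it also interprets <c:forEach> and <c:if> template blocks, which the sketch below omits). A self-contained approximation of just the placeholder pass, with hypothetical class and method names:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified re-implementation of the ${item} handling in PageParser.parseAll:
// scan for "${", look up the key in the settings map, and emit the value.
public class TemplateSketch {
    public static String substitute(String page, Map<String, String> settings) {
        StringBuilder out = new StringBuilder(page.length());
        for (int i = 0; i < page.length(); i++) {
            char c = page.charAt(i);
            if (c == '$' && i + 1 < page.length() && page.charAt(i + 1) == '{') {
                int j = page.indexOf('}', i + 2);
                if (j < 0) {            // unterminated placeholder: emit as-is
                    out.append(c);
                    continue;
                }
                String key = page.substring(i + 2, j).trim();
                String val = settings.get(key);
                // "?key?" marker for a missing entry, in the spirit of
                // PageParser's "?items?" error marker
                out.append(val == null ? "?" + key + "?" : val);
                i = j;                  // continue after the '}'
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> s = new HashMap<>();
        s.put("user", "sa");
        System.out.println(substitute("Hello, ${user}!", s)); // Hello, sa!
    }
}
```

The real parser additionally recurses into the substituted value, so a setting can itself contain template syntax.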
+ * + * @param page the HTML page + * @param settings the settings + * @return the converted page + */ + public static String parse(String page, Map<String, Object> settings) { + PageParser block = new PageParser(page, settings, 0); + return block.replaceTags(); + } + + private void setError(int i) { + String s = page.substring(0, i) + "####BUG####" + page.substring(i); + s = PageParser.escapeHtml(s); + result = new StringBuilder(); + result.append(s); + } + + private String parseBlockUntil(String end) throws ParseException { + PageParser block = new PageParser(page, settings, pos); + block.parseAll(); + if (!block.readIf(end)) { + throw new ParseException(page, block.pos); + } + pos = block.pos; + return block.result.toString(); + } + + private String replaceTags() { + try { + parseAll(); + if (pos != len) { + setError(pos); + } + } catch (ParseException e) { + setError(pos); + } + return result.toString(); + } + + @SuppressWarnings("unchecked") + private void parseAll() throws ParseException { + StringBuilder buff = result; + String p = page; + int i = pos; + for (; i < len; i++) { + char c = p.charAt(i); + switch (c) { + case '<': { + if (p.charAt(i + 3) == ':' && p.charAt(i + 1) == '/') { + // end tag + pos = i; + return; + } else if (p.charAt(i + 2) == ':') { + pos = i; + if (readIf("<c:forEach")) { + String var = readParam("var"); + String items = readParam("items"); + read(">"); + int start = pos; + List<Object> list = (List<Object>) get(items); + if (list == null) { + result.append("?items?"); + list = New.arrayList(); + } + if (list.isEmpty()) { + parseBlockUntil("</c:forEach>"); + } + for (Object o : list) { + settings.put(var, o); + pos = start; + String block = parseBlockUntil("</c:forEach>"); + result.append(block); + } + } else if (readIf("<c:if")) { + String test = readParam("test"); + int eq = test.indexOf("=='"); + if (eq < 0) { + setError(i); + return; + } + String val = test.substring(eq + 3, test.length() - 1); + test = test.substring(0, eq); + String value = (String) get(test); + read(">"); + String block = parseBlockUntil("</c:if>"); + pos--; + if (value.equals(val)) { + result.append(block); + } + } else { + setError(i); + return; + } + i = pos; + } else { + buff.append(c); + } + break; + } + case '$': + if (p.length() > i + 1 && p.charAt(i + 1) == '{') { + i += 2; + int j = p.indexOf('}', i); + if (j < 0) { + setError(i); + return; + } + String item 
= p.substring(i, j).trim(); + i = j; + String s = (String) get(item); + replaceTags(s); + } else { + buff.append(c); + } + break; + default: + buff.append(c); + break; + } + } + pos = i; + } + + @SuppressWarnings("unchecked") + private Object get(String item) { + int dot = item.indexOf('.'); + if (dot >= 0) { + String sub = item.substring(dot + 1); + item = item.substring(0, dot); + HashMap map = (HashMap) settings.get(item); + if (map == null) { + return "?" + item + "?"; + } + return map.get(sub); + } + return settings.get(item); + } + + private void replaceTags(String s) { + if (s != null) { + result.append(PageParser.parse(s, settings)); + } + } + + private String readParam(String name) throws ParseException { + read(name); + read("="); + read("\""); + int start = pos; + while (page.charAt(pos) != '"') { + pos++; + } + int end = pos; + read("\""); + String s = page.substring(start, end); + return PageParser.parse(s, settings); + } + + private void skipSpaces() { + while (page.charAt(pos) == ' ') { + pos++; + } + } + + private void read(String s) throws ParseException { + if (!readIf(s)) { + throw new ParseException(s, pos); + } + } + + private boolean readIf(String s) { + skipSpaces(); + if (page.regionMatches(pos, s, 0, s.length())) { + pos += s.length(); + skipSpaces(); + return true; + } + return false; + } + + /** + * Convert data to HTML, but don't convert newlines and multiple spaces. + * + * @param s the data + * @return the escaped html text + */ + static String escapeHtmlData(String s) { + return escapeHtml(s, false); + } + + /** + * Convert data to HTML, including newlines and multiple spaces. 
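The entity table used by escapeHtml can be exercised in isolation. This sketch reproduces only the per-character escapes (the five markup-significant characters plus numeric references for non-ASCII, as in the method body), without the tab, space-run, and newline handling:

```java
// Minimal sketch of the character-escaping core of PageParser.escapeHtml.
public class EscapeSketch {
    public static String escapeHtml(String s) {
        StringBuilder b = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
            case '<':  b.append("&lt;");   break;
            case '>':  b.append("&gt;");   break;
            case '&':  b.append("&amp;");  break;
            case '"':  b.append("&quot;"); break;
            case '\'': b.append("&#39;");  break;
            default:
                if (c >= 128) {
                    // numeric character reference for non-ASCII
                    b.append("&#").append((int) c).append(';');
                } else {
                    b.append(c);
                }
            }
        }
        return b.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeHtml("<a href=\"x\">")); // &lt;a href=&quot;x&quot;&gt;
    }
}
```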
+ * + * @param s the data + * @return the escaped html text + */ + public static String escapeHtml(String s) { + return escapeHtml(s, true); + } + + private static String escapeHtml(String s, boolean convertBreakAndSpace) { + if (s == null) { + return null; + } + if (convertBreakAndSpace) { + if (s.length() == 0) { + return "&nbsp;"; + } + } + StringBuilder buff = new StringBuilder(s.length()); + boolean convertSpace = true; + for (int i = 0; i < s.length(); i++) { + char c = s.charAt(i); + if (c == ' ' || c == '\t') { + // convert tabs into spaces + for (int j = 0; j < (c == ' ' ? 1 : TAB_WIDTH); j++) { + if (convertSpace && convertBreakAndSpace) { + buff.append("&nbsp;"); + } else { + buff.append(' '); + convertSpace = true; + } + } + continue; + } + convertSpace = false; + switch (c) { + case '$': + // so that ${ } in the text is interpreted correctly + buff.append("&#36;"); + break; + case '<': + buff.append("&lt;"); + break; + case '>': + buff.append("&gt;"); + break; + case '&': + buff.append("&amp;"); + break; + case '"': + buff.append("&quot;"); + break; + case '\'': + buff.append("&#39;"); + break; + case '\n': + if (convertBreakAndSpace) { + buff.append("<br />"); + convertSpace = true; + } else { + buff.append(c); + } + break; + default: + if (c >= 128) { + buff.append("&#").append((int) c).append(';'); + } else { + buff.append(c); + } + break; + } + } + return buff.toString(); + } + + /** + * Escape text as a JavaScript string. + * + * @param s the text + * @return the javascript string + */ + static String escapeJavaScript(String s) { + if (s == null) { + return null; + } + if (s.length() == 0) { + return ""; + } + StringBuilder buff = new StringBuilder(s.length()); + for (int i = 0; i < s.length(); i++) { + char c = s.charAt(i); + switch (c) { + case '"': + buff.append("\\\""); + break; + case '\'': + buff.append("\\'"); + break; + case '\\': + buff.append("\\\\"); + break; + case '\n': + buff.append("\\n"); + break; + case '\r': + buff.append("\\r"); + break; + case '\t': + buff.append("\\t"); + break; + default: + buff.append(c); + break; + } + } + return buff.toString(); + } +} diff --git a/modules/h2/src/main/java/org/h2/server/web/WebApp.java b/modules/h2/src/main/java/org/h2/server/web/WebApp.java new file mode 100644 index 0000000000000..4b95e852b65bd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/WebApp.java @@ -0,0 +1,1898 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import java.io.ByteArrayOutputStream; +import java.io.PrintStream; +import java.io.PrintWriter; +import java.io.StringReader; +import java.io.StringWriter; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.math.BigDecimal; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.ParameterMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.Iterator; +import java.util.Locale; +import java.util.Map; +import java.util.Properties; +import java.util.Random; +import org.h2.api.ErrorCode; +import org.h2.bnf.Bnf; +import org.h2.bnf.context.DbColumn; +import org.h2.bnf.context.DbContents; +import org.h2.bnf.context.DbSchema; +import org.h2.bnf.context.DbTableOrView; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.jdbc.JdbcSQLException; +import org.h2.message.DbException; +import org.h2.security.SHA256; +import org.h2.tools.Backup; +import org.h2.tools.ChangeFileEncryption; +import org.h2.tools.ConvertTraceFile; +import org.h2.tools.CreateCluster; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Recover; +import org.h2.tools.Restore; +import org.h2.tools.RunScript; +import org.h2.tools.Script; +import org.h2.tools.SimpleResultSet; +import org.h2.util.JdbcUtils; +import org.h2.util.New; +import org.h2.util.Profiler; +import org.h2.util.ScriptReader; +import org.h2.util.SortedProperties; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.util.Tool; +import org.h2.util.Utils; + +/** + * For each connection to a session, an 
object of this class is created. + * This class is used by the H2 Console. + */ +public class WebApp { + + /** + * The web server. + */ + protected final WebServer server; + + /** + * The session. + */ + protected WebSession session; + + /** + * The session attributes + */ + protected Properties attributes; + + /** + * The mime type of the current response. + */ + protected String mimeType; + + /** + * Whether the response can be cached. + */ + protected boolean cache; + + /** + * Whether to close the connection. + */ + protected boolean stop; + + /** + * The language in the HTTP header. + */ + protected String headerLanguage; + + private Profiler profiler; + + WebApp(WebServer server) { + this.server = server; + } + + /** + * Set the web session and attributes. + * + * @param session the session + * @param attributes the attributes + */ + void setSession(WebSession session, Properties attributes) { + this.session = session; + this.attributes = attributes; + } + + /** + * Process an HTTP request. 
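processRequest below picks a MIME type from the file suffix before dispatching *.do actions. The same decision can be expressed as a table-driven sketch (the mapping is copied from the if/else chain in the code; class and method names here are illustrative, and the real method also tracks cacheability and session creation):

```java
import java.util.Map;

public class MimeSketch {
    // Suffix-to-MIME mapping from WebApp.processRequest;
    // "do" and "jsp" are server-processed pages, hence text/html.
    static final Map<String, String> MIME = Map.of(
            "ico", "image/x-icon",
            "gif", "image/gif",
            "css", "text/css",
            "html", "text/html",
            "do", "text/html",
            "jsp", "text/html",
            "js", "text/javascript");

    public static String mimeFor(String file) {
        int i = file.lastIndexOf('.');
        String suffix = i >= 0 ? file.substring(i + 1) : "";
        // anything unrecognized is served as a binary download, as in the original
        return MIME.getOrDefault(suffix, "application/octet-stream");
    }

    public static void main(String[] args) {
        System.out.println(mimeFor("login.do"));       // text/html
        System.out.println(mimeFor("stylesheet.css")); // text/css
    }
}
```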
+ * + * @param file the file that was requested + * @param hostAddr the host address + * @return the name of the file to return to the client + */ + String processRequest(String file, String hostAddr) { + int index = file.lastIndexOf('.'); + String suffix; + if (index >= 0) { + suffix = file.substring(index + 1); + } else { + suffix = ""; + } + if ("ico".equals(suffix)) { + mimeType = "image/x-icon"; + cache = true; + } else if ("gif".equals(suffix)) { + mimeType = "image/gif"; + cache = true; + } else if ("css".equals(suffix)) { + cache = true; + mimeType = "text/css"; + } else if ("html".equals(suffix) || + "do".equals(suffix) || + "jsp".equals(suffix)) { + cache = false; + mimeType = "text/html"; + if (session == null) { + session = server.createNewSession(hostAddr); + if (!"notAllowed.jsp".equals(file)) { + file = "index.do"; + } + } + } else if ("js".equals(suffix)) { + cache = true; + mimeType = "text/javascript"; + } else { + cache = true; + mimeType = "application/octet-stream"; + } + trace("mimeType=" + mimeType); + trace(file); + if (file.endsWith(".do")) { + file = process(file); + } + return file; + } + + private static String getComboBox(String[] elements, String selected) { + StringBuilder buff = new StringBuilder(); + for (String value : elements) { + buff.append("<option value=\"").append( + PageParser.escapeHtmlData(value)).append('\"'); + if (value.equals(selected)) { + buff.append(" selected"); + } + buff.append('>').append(PageParser.escapeHtml(value)). + append("</option>"); + } + return buff.toString(); + } + + private static String getComboBox(String[][] elements, String selected) { + StringBuilder buff = new StringBuilder(); + for (String[] n : elements) { + buff.append("<option value=\"").append( + PageParser.escapeHtmlData(n[0])).append('\"'); + if (n[0].equals(selected)) { + buff.append(" selected"); + } + buff.append('>').append(PageParser.escapeHtml(n[1])). + append("</option>"); + } + return buff.toString(); + } + + private String process(String file) { + trace("process " + file); + while (file.endsWith(".do")) { + if ("login.do".equals(file)) { + file = login(); + } else if ("index.do".equals(file)) { + file = index(); + } else if ("logout.do".equals(file)) { + file = logout(); + } else if ("settingRemove.do".equals(file)) { + file = settingRemove(); + } else if ("settingSave.do".equals(file)) { + file = settingSave(); + } else if ("test.do".equals(file)) { 
+ file = test(); + } else if ("query.do".equals(file)) { + file = query(); + } else if ("tables.do".equals(file)) { + file = tables(); + } else if ("editResult.do".equals(file)) { + file = editResult(); + } else if ("getHistory.do".equals(file)) { + file = getHistory(); + } else if ("admin.do".equals(file)) { + file = admin(); + } else if ("adminSave.do".equals(file)) { + file = adminSave(); + } else if ("adminStartTranslate.do".equals(file)) { + file = adminStartTranslate(); + } else if ("adminShutdown.do".equals(file)) { + file = adminShutdown(); + } else if ("autoCompleteList.do".equals(file)) { + file = autoCompleteList(); + } else if ("tools.do".equals(file)) { + file = tools(); + } else { + file = "error.jsp"; + } + } + trace("return " + file); + return file; + } + + private String autoCompleteList() { + String query = (String) attributes.get("query"); + boolean lowercase = false; + if (query.trim().length() > 0 && + Character.isLowerCase(query.trim().charAt(0))) { + lowercase = true; + } + try { + String sql = query; + if (sql.endsWith(";")) { + sql += " "; + } + ScriptReader reader = new ScriptReader(new StringReader(sql)); + reader.setSkipRemarks(true); + String lastSql = ""; + while (true) { + String n = reader.readStatement(); + if (n == null) { + break; + } + lastSql = n; + } + String result = ""; + if (reader.isInsideRemark()) { + if (reader.isBlockRemark()) { + result = "1#(End Remark)# */\n" + result; + } else { + result = "1#(Newline)#\n" + result; + } + } else { + sql = lastSql; + while (sql.length() > 0 && sql.charAt(0) <= ' ') { + sql = sql.substring(1); + } + if (sql.trim().length() > 0 && Character.isLowerCase(sql.trim().charAt(0))) { + lowercase = true; + } + Bnf bnf = session.getBnf(); + if (bnf == null) { + return "autoCompleteList.jsp"; + } + HashMap map = bnf.getNextTokenList(sql); + String space = ""; + if (sql.length() > 0) { + char last = sql.charAt(sql.length() - 1); + if (!Character.isWhitespace(last) && (last != '.' 
&& + last >= ' ' && last != '\'' && last != '"')) { + space = " "; + } + } + ArrayList list = new ArrayList<>(map.size()); + for (Map.Entry entry : map.entrySet()) { + String key = entry.getKey(); + String value = entry.getValue(); + String type = "" + key.charAt(0); + if (Integer.parseInt(type) > 2) { + continue; + } + key = key.substring(2); + if (Character.isLetter(key.charAt(0)) && lowercase) { + key = StringUtils.toLowerEnglish(key); + value = StringUtils.toLowerEnglish(value); + } + if (key.equals(value) && !".".equals(value)) { + value = space + value; + } + key = StringUtils.urlEncode(key); + key = key.replace('+', ' '); + value = StringUtils.urlEncode(value); + value = value.replace('+', ' '); + list.add(type + "#" + key + "#" + value); + } + Collections.sort(list); + if (query.endsWith("\n") || query.trim().endsWith(";")) { + list.add(0, "1#(Newline)#\n"); + } + StatementBuilder buff = new StatementBuilder(); + for (String s : list) { + buff.appendExceptFirst("|"); + buff.append(s); + } + result = buff.toString(); + } + session.put("autoCompleteList", result); + } catch (Throwable e) { + server.traceError(e); + } + return "autoCompleteList.jsp"; + } + + private String admin() { + session.put("port", "" + server.getPort()); + session.put("allowOthers", "" + server.getAllowOthers()); + session.put("ssl", String.valueOf(server.getSSL())); + session.put("sessions", server.getSessions()); + return "admin.jsp"; + } + + private String adminSave() { + try { + Properties prop = new SortedProperties(); + int port = Integer.decode((String) attributes.get("port")); + prop.setProperty("webPort", String.valueOf(port)); + server.setPort(port); + boolean allowOthers = Utils.parseBoolean((String) attributes.get("allowOthers"), false, false); + prop.setProperty("webAllowOthers", String.valueOf(allowOthers)); + server.setAllowOthers(allowOthers); + boolean ssl = Utils.parseBoolean((String) attributes.get("ssl"), false, false); + prop.setProperty("webSSL", 
String.valueOf(ssl)); + server.setSSL(ssl); + server.saveProperties(prop); + } catch (Exception e) { + trace(e.toString()); + } + return admin(); + } + + private String tools() { + try { + String toolName = (String) attributes.get("tool"); + session.put("tool", toolName); + String args = (String) attributes.get("args"); + String[] argList = StringUtils.arraySplit(args, ',', false); + Tool tool = null; + if ("Backup".equals(toolName)) { + tool = new Backup(); + } else if ("Restore".equals(toolName)) { + tool = new Restore(); + } else if ("Recover".equals(toolName)) { + tool = new Recover(); + } else if ("DeleteDbFiles".equals(toolName)) { + tool = new DeleteDbFiles(); + } else if ("ChangeFileEncryption".equals(toolName)) { + tool = new ChangeFileEncryption(); + } else if ("Script".equals(toolName)) { + tool = new Script(); + } else if ("RunScript".equals(toolName)) { + tool = new RunScript(); + } else if ("ConvertTraceFile".equals(toolName)) { + tool = new ConvertTraceFile(); + } else if ("CreateCluster".equals(toolName)) { + tool = new CreateCluster(); + } else { + throw DbException.throwInternalError(toolName); + } + ByteArrayOutputStream outBuff = new ByteArrayOutputStream(); + PrintStream out = new PrintStream(outBuff, false, "UTF-8"); + tool.setOut(out); + try { + tool.runTool(argList); + out.flush(); + String o = new String(outBuff.toByteArray(), StandardCharsets.UTF_8); + String result = PageParser.escapeHtml(o); + session.put("toolResult", result); + } catch (Exception e) { + session.put("toolResult", getStackTrace(0, e, true)); + } + } catch (Exception e) { + server.traceError(e); + } + return "tools.jsp"; + } + + private String adminStartTranslate() { + Map p = Map.class.cast(session.map.get("text")); + @SuppressWarnings("unchecked") + Map p2 = (Map) p; + String file = server.startTranslate(p2); + session.put("translationFile", file); + return "helpTranslate.jsp"; + } + + /** + * Stop the application and the server. 
+ * + * @return the page to display + */ + protected String adminShutdown() { + server.shutdown(); + return "admin.jsp"; + } + + private String index() { + String[][] languageArray = WebServer.LANGUAGES; + String language = (String) attributes.get("language"); + Locale locale = session.locale; + if (language != null) { + if (locale == null || !StringUtils.toLowerEnglish( + locale.getLanguage()).equals(language)) { + locale = new Locale(language, ""); + server.readTranslations(session, locale.getLanguage()); + session.put("language", language); + session.locale = locale; + } + } else { + language = (String) session.get("language"); + } + if (language == null) { + // if the language is not yet known + // use the last header + language = headerLanguage; + } + session.put("languageCombo", getComboBox(languageArray, language)); + String[] settingNames = server.getSettingNames(); + String setting = attributes.getProperty("setting"); + if (setting == null && settingNames.length > 0) { + setting = settingNames[0]; + } + String combobox = getComboBox(settingNames, setting); + session.put("settingsList", combobox); + ConnectionInfo info = server.getSetting(setting); + if (info == null) { + info = new ConnectionInfo(); + } + session.put("setting", PageParser.escapeHtmlData(setting)); + session.put("name", PageParser.escapeHtmlData(setting)); + session.put("driver", PageParser.escapeHtmlData(info.driver)); + session.put("url", PageParser.escapeHtmlData(info.url)); + session.put("user", PageParser.escapeHtmlData(info.user)); + return "index.jsp"; + } + + private String getHistory() { + int id = Integer.parseInt(attributes.getProperty("id")); + String sql = session.getCommand(id); + session.put("query", PageParser.escapeHtmlData(sql)); + return "query.jsp"; + } + + private static int addColumns(boolean mainSchema, DbTableOrView table, + StringBuilder buff, int treeIndex, boolean showColumnTypes, + StringBuilder columnsBuffer) { + DbColumn[] columns = table.getColumns(); + for 
(int i = 0; columns != null && i < columns.length; i++) { + DbColumn column = columns[i]; + if (columnsBuffer.length() > 0) { + columnsBuffer.append(' '); + } + columnsBuffer.append(column.getName()); + String col = escapeIdentifier(column.getName()); + String level = mainSchema ? ", 1, 1" : ", 2, 2"; + buff.append("setNode(").append(treeIndex).append(level) + .append(", 'column', '") + .append(PageParser.escapeJavaScript(column.getName())) + .append("', 'javascript:ins(\\'").append(col).append("\\')');\n"); + treeIndex++; + if (mainSchema && showColumnTypes) { + buff.append("setNode(").append(treeIndex) + .append(", 2, 2, 'type', '") + .append(PageParser.escapeJavaScript(column.getDataType())) + .append("', null);\n"); + treeIndex++; + } + } + return treeIndex; + } + + private static String escapeIdentifier(String name) { + return StringUtils.urlEncode( + PageParser.escapeJavaScript(name)).replace('+', ' '); + } + + /** + * This class represents index information for the GUI. + */ + static class IndexInfo { + + /** + * The index name. + */ + String name; + + /** + * The index type name. + */ + String type; + + /** + * The indexed columns. 
+         */
+        String columns;
+    }
+
+    private static int addIndexes(boolean mainSchema, DatabaseMetaData meta,
+            String table, String schema, StringBuilder buff, int treeIndex)
+            throws SQLException {
+        ResultSet rs;
+        try {
+            rs = meta.getIndexInfo(null, schema, table, false, true);
+        } catch (SQLException e) {
+            // SQLite does not support getIndexInfo
+            return treeIndex;
+        }
+        HashMap<String, IndexInfo> indexMap = new HashMap<>();
+        while (rs.next()) {
+            String name = rs.getString("INDEX_NAME");
+            IndexInfo info = indexMap.get(name);
+            if (info == null) {
+                int t = rs.getInt("TYPE");
+                String type;
+                if (t == DatabaseMetaData.tableIndexClustered) {
+                    type = "";
+                } else if (t == DatabaseMetaData.tableIndexHashed) {
+                    type = " (${text.tree.hashed})";
+                } else if (t == DatabaseMetaData.tableIndexOther) {
+                    type = "";
+                } else {
+                    type = null;
+                }
+                if (name != null && type != null) {
+                    info = new IndexInfo();
+                    info.name = name;
+                    type = (rs.getBoolean("NON_UNIQUE") ?
+                            "${text.tree.nonUnique}" : "${text.tree.unique}") + type;
+                    info.type = type;
+                    info.columns = rs.getString("COLUMN_NAME");
+                    indexMap.put(name, info);
+                }
+            } else {
+                info.columns += ", " + rs.getString("COLUMN_NAME");
+            }
+        }
+        rs.close();
+        if (indexMap.size() > 0) {
+            String level = mainSchema ? ", 1, 1" : ", 2, 1";
+            String levelIndex = mainSchema ? ", 2, 1" : ", 3, 1";
+            String levelColumnType = mainSchema ?
", 3, 2" : ", 4, 2"; + buff.append("setNode(").append(treeIndex).append(level) + .append(", 'index_az', '${text.tree.indexes}', null);\n"); + treeIndex++; + for (IndexInfo info : indexMap.values()) { + buff.append("setNode(").append(treeIndex).append(levelIndex) + .append(", 'index', '") + .append(PageParser.escapeJavaScript(info.name)) + .append("', null);\n"); + treeIndex++; + buff.append("setNode(").append(treeIndex).append(levelColumnType) + .append(", 'type', '").append(info.type).append("', null);\n"); + treeIndex++; + buff.append("setNode(").append(treeIndex).append(levelColumnType) + .append(", 'type', '") + .append(PageParser.escapeJavaScript(info.columns)) + .append("', null);\n"); + treeIndex++; + } + } + return treeIndex; + } + + private int addTablesAndViews(DbSchema schema, boolean mainSchema, + StringBuilder buff, int treeIndex) throws SQLException { + if (schema == null) { + return treeIndex; + } + Connection conn = session.getConnection(); + DatabaseMetaData meta = session.getMetaData(); + int level = mainSchema ? 0 : 1; + boolean showColumns = mainSchema || !schema.isSystem; + String indentation = ", " + level + ", " + (showColumns ? "1" : "2") + ", "; + String indentNode = ", " + (level + 1) + ", 2, "; + DbTableOrView[] tables = schema.getTables(); + if (tables == null) { + return treeIndex; + } + boolean isOracle = schema.getContents().isOracle(); + boolean notManyTables = tables.length < SysProperties.CONSOLE_MAX_TABLES_LIST_INDEXES; + for (DbTableOrView table : tables) { + if (table.isView()) { + continue; + } + int tableId = treeIndex; + String tab = table.getQuotedName(); + if (!mainSchema) { + tab = schema.quotedName + "." 
+ tab; + } + tab = escapeIdentifier(tab); + buff.append("setNode(").append(treeIndex).append(indentation) + .append(" 'table', '") + .append(PageParser.escapeJavaScript(table.getName())) + .append("', 'javascript:ins(\\'").append(tab).append("\\',true)');\n"); + treeIndex++; + if (mainSchema || showColumns) { + StringBuilder columnsBuffer = new StringBuilder(); + treeIndex = addColumns(mainSchema, table, buff, treeIndex, + notManyTables, columnsBuffer); + if (!isOracle && notManyTables) { + treeIndex = addIndexes(mainSchema, meta, table.getName(), + schema.name, buff, treeIndex); + } + buff.append("addTable('") + .append(PageParser.escapeJavaScript(table.getName())).append("', '") + .append(PageParser.escapeJavaScript(columnsBuffer.toString())).append("', ") + .append(tableId).append(");\n"); + } + } + tables = schema.getTables(); + for (DbTableOrView view : tables) { + if (!view.isView()) { + continue; + } + int tableId = treeIndex; + String tab = view.getQuotedName(); + if (!mainSchema) { + tab = view.getSchema().quotedName + "." 
+ tab; + } + tab = escapeIdentifier(tab); + buff.append("setNode(").append(treeIndex).append(indentation) + .append(" 'view', '") + .append(PageParser.escapeJavaScript(view.getName())) + .append("', 'javascript:ins(\\'").append(tab).append("\\',true)');\n"); + treeIndex++; + if (mainSchema) { + StringBuilder columnsBuffer = new StringBuilder(); + treeIndex = addColumns(mainSchema, view, buff, + treeIndex, notManyTables, columnsBuffer); + if (schema.getContents().isH2()) { + + try (PreparedStatement prep = conn.prepareStatement("SELECT * FROM " + + "INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME=?")) { + prep.setString(1, view.getName()); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + String sql = rs.getString("SQL"); + buff.append("setNode(").append(treeIndex) + .append(indentNode) + .append(" 'type', '") + .append(PageParser.escapeJavaScript(sql)) + .append("', null);\n"); + treeIndex++; + } + rs.close(); + } + } + buff.append("addTable('") + .append(PageParser.escapeJavaScript(view.getName())).append("', '") + .append(PageParser.escapeJavaScript(columnsBuffer.toString())).append("', ") + .append(tableId).append(");\n"); + } + } + return treeIndex; + } + + private String tables() { + DbContents contents = session.getContents(); + boolean isH2 = false; + try { + String url = (String) session.get("url"); + Connection conn = session.getConnection(); + contents.readContents(url, conn); + session.loadBnf(); + isH2 = contents.isH2(); + + StringBuilder buff = new StringBuilder() + .append("setNode(0, 0, 0, 'database', '") + .append(PageParser.escapeJavaScript(url)) + .append("', null);\n"); + int treeIndex = 1; + + DbSchema defaultSchema = contents.getDefaultSchema(); + treeIndex = addTablesAndViews(defaultSchema, true, buff, treeIndex); + DbSchema[] schemas = contents.getSchemas(); + for (DbSchema schema : schemas) { + if (schema == defaultSchema || schema == null) { + continue; + } + buff.append("setNode(").append(treeIndex).append(", 0, 1, 'folder', '") + 
.append(PageParser.escapeJavaScript(schema.name)) + .append("', null);\n"); + treeIndex++; + treeIndex = addTablesAndViews(schema, false, buff, treeIndex); + } + if (isH2) { + try (Statement stat = conn.createStatement()) { + ResultSet rs = stat.executeQuery("SELECT * FROM " + + "INFORMATION_SCHEMA.SEQUENCES ORDER BY SEQUENCE_NAME"); + for (int i = 0; rs.next(); i++) { + if (i == 0) { + buff.append("setNode(").append(treeIndex) + .append(", 0, 1, 'sequences', '${text.tree.sequences}', null);\n"); + treeIndex++; + } + String name = rs.getString("SEQUENCE_NAME"); + String current = rs.getString("CURRENT_VALUE"); + String increment = rs.getString("INCREMENT"); + buff.append("setNode(").append(treeIndex) + .append(", 1, 1, 'sequence', '") + .append(PageParser.escapeJavaScript(name)) + .append("', null);\n"); + treeIndex++; + buff.append("setNode(").append(treeIndex) + .append(", 2, 2, 'type', '${text.tree.current}: ") + .append(PageParser.escapeJavaScript(current)) + .append("', null);\n"); + treeIndex++; + if (!"1".equals(increment)) { + buff.append("setNode(").append(treeIndex) + .append(", 2, 2, 'type', '${text.tree.increment}: ") + .append(PageParser.escapeJavaScript(increment)) + .append("', null);\n"); + treeIndex++; + } + } + rs.close(); + rs = stat.executeQuery("SELECT * FROM " + + "INFORMATION_SCHEMA.USERS ORDER BY NAME"); + for (int i = 0; rs.next(); i++) { + if (i == 0) { + buff.append("setNode(").append(treeIndex) + .append(", 0, 1, 'users', '${text.tree.users}', null);\n"); + treeIndex++; + } + String name = rs.getString("NAME"); + String admin = rs.getString("ADMIN"); + buff.append("setNode(").append(treeIndex) + .append(", 1, 1, 'user', '") + .append(PageParser.escapeJavaScript(name)) + .append("', null);\n"); + treeIndex++; + if (admin.equalsIgnoreCase("TRUE")) { + buff.append("setNode(").append(treeIndex) + .append(", 2, 2, 'type', '${text.tree.admin}', null);\n"); + treeIndex++; + } + } + rs.close(); + } + } + DatabaseMetaData meta = 
                session.getMetaData();
+            String version = meta.getDatabaseProductName() + " " +
+                    meta.getDatabaseProductVersion();
+            buff.append("setNode(").append(treeIndex)
+                    .append(", 0, 0, 'info', '")
+                    .append(PageParser.escapeJavaScript(version))
+                    .append("', null);\n")
+                    .append("refreshQueryTables();");
+            session.put("tree", buff.toString());
+        } catch (Exception e) {
+            session.put("tree", "");
+            session.put("error", getStackTrace(0, e, isH2));
+        }
+        return "tables.jsp";
+    }
+
+    private String getStackTrace(int id, Throwable e, boolean isH2) {
+        try {
+            StringWriter writer = new StringWriter();
+            e.printStackTrace(new PrintWriter(writer));
+            String stackTrace = writer.toString();
+            stackTrace = PageParser.escapeHtml(stackTrace);
+            if (isH2) {
+                stackTrace = linkToSource(stackTrace);
+            }
+            stackTrace = StringUtils.replaceAll(stackTrace, "\t",
+                    "&nbsp;&nbsp;&nbsp;&nbsp;");
+            String message = PageParser.escapeHtml(e.getMessage());
+            String error = "<b>" + message + "</b>";
+            if (e instanceof SQLException) {
+                SQLException se = (SQLException) e;
+                error += " " + se.getSQLState() + "/" + se.getErrorCode();
+                if (isH2) {
+                    int code = se.getErrorCode();
+                    error += " <a href=\"http://h2database.com/javadoc/" +
+                            "org/h2/api/ErrorCode.html#c" + code +
+                            "\">(${text.a.help})</a>";
+                }
+            }
+            error += "<span style=\"display:none;\" id=\"st" + id +
+                    "\"><br />" + stackTrace + "</span>";
+            error = formatAsError(error);
+            return error;
+        } catch (OutOfMemoryError e2) {
+            server.traceError(e);
+            return e.toString();
+        }
+    }
+
+    private static String linkToSource(String s) {
+        try {
+            StringBuilder result = new StringBuilder(s.length());
+            int idx = s.indexOf("<br />");
+            result.append(s, 0, idx);
+            while (true) {
+                int start = s.indexOf("org.h2.", idx);
+                if (start < 0) {
+                    result.append(s.substring(idx));
+                    break;
+                }
+                result.append(s, idx, start);
+                int end = s.indexOf(')', start);
+                if (end < 0) {
+                    result.append(s.substring(idx));
+                    break;
+                }
+                String element = s.substring(start, end);
+                int open = element.lastIndexOf('(');
+                int dotMethod = element.lastIndexOf('.', open - 1);
+                int dotClass = element.lastIndexOf('.', dotMethod - 1);
+                String packageName = element.substring(0, dotClass);
+                int colon = element.lastIndexOf(':');
+                String file = element.substring(open + 1, colon);
+                String lineNumber = element.substring(colon + 1, element.length());
+                String fullFileName = packageName.replace('.', '/') + "/" + file;
+                result.append("<a href=\"source.html?file=" + fullFileName +
+                        "&line=" + lineNumber + "\">");
+                result.append(element);
+                result.append("</a>");
+                idx = end;
+            }
+            return result.toString();
+        } catch (Throwable t) {
+            return s;
+        }
+    }
+
+    private static String formatAsError(String s) {
+        return "<div class=\"error\">" + s + "</div>";
+    }
+
+    private String test() {
+        String driver = attributes.getProperty("driver", "");
+        String url = attributes.getProperty("url", "");
+        String user = attributes.getProperty("user", "");
+        String password = attributes.getProperty("password", "");
+        session.put("driver", driver);
+        session.put("url", url);
+        session.put("user", user);
+        boolean isH2 = url.startsWith("jdbc:h2:");
+        try {
+            long start = System.currentTimeMillis();
+            String profOpen = "", profClose = "";
+            Profiler prof = new Profiler();
+            prof.startCollecting();
+            Connection conn;
+            try {
+                conn = server.getConnection(driver, url, user, password);
+            } finally {
+                prof.stopCollecting();
+                profOpen = prof.getTop(3);
+            }
+            prof = new Profiler();
+            prof.startCollecting();
+            try {
+                JdbcUtils.closeSilently(conn);
+            } finally {
+                prof.stopCollecting();
+                profClose = prof.getTop(3);
+            }
+            long time = System.currentTimeMillis() - start;
+            String success;
+            if (time > 1000) {
+                success = "<a class=\"error\" href=\"#\" onclick=\"var x=document.getElementById('prof').style;x.display=x.display==''?'none':'';return false;\">" +
+                        "${text.login.testSuccessful}" +
+                        "</a><span style=\"display:none\" id=\"prof\"><br />" +
+                        PageParser.escapeHtml(profOpen) +
+                        "<br />" +
+                        PageParser.escapeHtml(profClose) +
+                        "</span>";
+            } else {
+                success = "<div class=\"success\">" +
+                        "${text.login.testSuccessful}" +
+                        "</div>";
+            }
+            session.put("error", success);
+            // session.put("error", "${text.login.testSuccessful}");
+            return "login.jsp";
+        } catch (Exception e) {
+            session.put("error", getLoginError(e, isH2));
+            return "login.jsp";
+        }
+    }
+
+    /**
+     * Get the formatted login error message.
+     *
+     * @param e the exception
+     * @param isH2 if the current database is an H2 database
+     * @return the formatted error message
+     */
+    private String getLoginError(Exception e, boolean isH2) {
+        if (e instanceof JdbcSQLException &&
+                ((JdbcSQLException) e).getErrorCode() == ErrorCode.CLASS_NOT_FOUND_1) {
+            return "${text.login.driverNotFound}<br />" +
+                    getStackTrace(0, e, isH2);
+        }
+        return getStackTrace(0, e, isH2);
+    }
+
+    private String login() {
+        String driver = attributes.getProperty("driver", "");
+        String url = attributes.getProperty("url", "");
+        String user = attributes.getProperty("user", "");
+        String password = attributes.getProperty("password", "");
+        session.put("autoCommit", "checked");
+        session.put("autoComplete", "1");
+        session.put("maxrows", "1000");
+        boolean isH2 = url.startsWith("jdbc:h2:");
+        try {
+            Connection conn = server.getConnection(driver, url, user, password);
+            session.setConnection(conn);
+            session.put("url", url);
+            session.put("user", user);
+            session.remove("error");
+            settingSave();
+            return "frame.jsp";
+        } catch (Exception e) {
+            session.put("error", getLoginError(e, isH2));
+            return "login.jsp";
+        }
+    }
+
+    private String logout() {
+        try {
+            Connection conn = session.getConnection();
+            session.setConnection(null);
+            session.remove("conn");
+            session.remove("result");
+            session.remove("tables");
+            session.remove("user");
+            session.remove("tool");
+            if (conn != null) {
+                if (session.getShutdownServerOnDisconnect()) {
+                    server.shutdown();
+                } else {
+                    conn.close();
+                }
+            }
+        } catch (Exception e) {
+            trace(e.toString());
+        }
+        return "index.do";
+    }
+
+    private String query() {
+        String sql = attributes.getProperty("sql").trim();
+        try {
+            ScriptReader r = new ScriptReader(new StringReader(sql));
+            final ArrayList<String> list = New.arrayList();
+            while (true) {
+                String s = r.readStatement();
+                if (s == null) {
+                    break;
+                }
+                list.add(s);
+            }
+            final Connection conn = session.getConnection();
+            if (SysProperties.CONSOLE_STREAM && server.getAllowChunked()) {
+                String page = new String(server.getFile("result.jsp"),
+                        StandardCharsets.UTF_8);
+                int idx = page.indexOf("${result}");
+                // the first element of the list is the header,
+                // the last the footer
+                list.add(0, page.substring(0, idx));
+                list.add(page.substring(idx + "${result}".length()));
+                session.put("chunks", new
Iterator<String>() {
+                    private int i;
+                    @Override
+                    public boolean hasNext() {
+                        return i < list.size();
+                    }
+                    @Override
+                    public String next() {
+                        String s = list.get(i++);
+                        if (i == 1 || i == list.size()) {
+                            return s;
+                        }
+                        StringBuilder b = new StringBuilder();
+                        query(conn, s, i - 1, list.size() - 2, b);
+                        return b.toString();
+                    }
+                    @Override
+                    public void remove() {
+                        throw new UnsupportedOperationException();
+                    }
+                });
+                return "result.jsp";
+            }
+            String result;
+            StringBuilder buff = new StringBuilder();
+            for (int i = 0; i < list.size(); i++) {
+                String s = list.get(i);
+                query(conn, s, i, list.size(), buff);
+            }
+            result = buff.toString();
+            session.put("result", result);
+        } catch (Throwable e) {
+            session.put("result", getStackTrace(0, e, session.getContents().isH2()));
+        }
+        return "result.jsp";
+    }
+
+    /**
+     * Execute a query and append the result to the buffer.
+     *
+     * @param conn the connection
+     * @param s the statement
+     * @param i the index
+     * @param size the number of statements
+     * @param buff the target buffer
+     */
+    void query(Connection conn, String s, int i, int size, StringBuilder buff) {
+        if (!(s.startsWith("@") && s.endsWith("."))) {
+            buff.append(PageParser.escapeHtml(s + ";")).append("<br />");
+        }
+        boolean forceEdit = s.startsWith("@edit");
+        buff.append(getResult(conn, i + 1, s, size == 1, forceEdit)).
+                append("<br />");
+    }
+
+    private String editResult() {
+        ResultSet rs = session.result;
+        int row = Integer.parseInt(attributes.getProperty("row"));
+        int op = Integer.parseInt(attributes.getProperty("op"));
+        String result = "", error = "";
+        try {
+            if (op == 1) {
+                boolean insert = row < 0;
+                if (insert) {
+                    rs.moveToInsertRow();
+                } else {
+                    rs.absolute(row);
+                }
+                for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) {
+                    String x = attributes.getProperty("r" + row + "c" + (i + 1));
+                    unescapeData(x, rs, i + 1);
+                }
+                if (insert) {
+                    rs.insertRow();
+                } else {
+                    rs.updateRow();
+                }
+            } else if (op == 2) {
+                rs.absolute(row);
+                rs.deleteRow();
+            } else if (op == 3) {
+                // cancel
+            }
+        } catch (Throwable e) {
+            result = "
    " + getStackTrace(0, e, session.getContents().isH2()); + error = formatAsError(e.getMessage()); + } + String sql = "@edit " + (String) session.get("resultSetSQL"); + Connection conn = session.getConnection(); + result = error + getResult(conn, -1, sql, true, true) + result; + session.put("result", result); + return "result.jsp"; + } + + private ResultSet getMetaResultSet(Connection conn, String sql) + throws SQLException { + DatabaseMetaData meta = conn.getMetaData(); + if (isBuiltIn(sql, "@best_row_identifier")) { + String[] p = split(sql); + int scale = p[4] == null ? 0 : Integer.parseInt(p[4]); + boolean nullable = Boolean.parseBoolean(p[5]); + return meta.getBestRowIdentifier(p[1], p[2], p[3], scale, nullable); + } else if (isBuiltIn(sql, "@catalogs")) { + return meta.getCatalogs(); + } else if (isBuiltIn(sql, "@columns")) { + String[] p = split(sql); + return meta.getColumns(p[1], p[2], p[3], p[4]); + } else if (isBuiltIn(sql, "@column_privileges")) { + String[] p = split(sql); + return meta.getColumnPrivileges(p[1], p[2], p[3], p[4]); + } else if (isBuiltIn(sql, "@cross_references")) { + String[] p = split(sql); + return meta.getCrossReference(p[1], p[2], p[3], p[4], p[5], p[6]); + } else if (isBuiltIn(sql, "@exported_keys")) { + String[] p = split(sql); + return meta.getExportedKeys(p[1], p[2], p[3]); + } else if (isBuiltIn(sql, "@imported_keys")) { + String[] p = split(sql); + return meta.getImportedKeys(p[1], p[2], p[3]); + } else if (isBuiltIn(sql, "@index_info")) { + String[] p = split(sql); + boolean unique = Boolean.parseBoolean(p[4]); + boolean approx = Boolean.parseBoolean(p[5]); + return meta.getIndexInfo(p[1], p[2], p[3], unique, approx); + } else if (isBuiltIn(sql, "@primary_keys")) { + String[] p = split(sql); + return meta.getPrimaryKeys(p[1], p[2], p[3]); + } else if (isBuiltIn(sql, "@procedures")) { + String[] p = split(sql); + return meta.getProcedures(p[1], p[2], p[3]); + } else if (isBuiltIn(sql, "@procedure_columns")) { + String[] p = 
split(sql); + return meta.getProcedureColumns(p[1], p[2], p[3], p[4]); + } else if (isBuiltIn(sql, "@schemas")) { + return meta.getSchemas(); + } else if (isBuiltIn(sql, "@tables")) { + String[] p = split(sql); + String[] types = p[4] == null ? null : StringUtils.arraySplit(p[4], ',', false); + return meta.getTables(p[1], p[2], p[3], types); + } else if (isBuiltIn(sql, "@table_privileges")) { + String[] p = split(sql); + return meta.getTablePrivileges(p[1], p[2], p[3]); + } else if (isBuiltIn(sql, "@table_types")) { + return meta.getTableTypes(); + } else if (isBuiltIn(sql, "@type_info")) { + return meta.getTypeInfo(); + } else if (isBuiltIn(sql, "@udts")) { + String[] p = split(sql); + int[] types; + if (p[4] == null) { + types = null; + } else { + String[] t = StringUtils.arraySplit(p[4], ',', false); + types = new int[t.length]; + for (int i = 0; i < t.length; i++) { + types[i] = Integer.parseInt(t[i]); + } + } + return meta.getUDTs(p[1], p[2], p[3], types); + } else if (isBuiltIn(sql, "@version_columns")) { + String[] p = split(sql); + return meta.getVersionColumns(p[1], p[2], p[3]); + } else if (isBuiltIn(sql, "@memory")) { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("Type", Types.VARCHAR, 0, 0); + rs.addColumn("KB", Types.VARCHAR, 0, 0); + rs.addRow("Used Memory", "" + Utils.getMemoryUsed()); + rs.addRow("Free Memory", "" + Utils.getMemoryFree()); + return rs; + } else if (isBuiltIn(sql, "@info")) { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("KEY", Types.VARCHAR, 0, 0); + rs.addColumn("VALUE", Types.VARCHAR, 0, 0); + rs.addRow("conn.getCatalog", conn.getCatalog()); + rs.addRow("conn.getAutoCommit", "" + conn.getAutoCommit()); + rs.addRow("conn.getTransactionIsolation", "" + conn.getTransactionIsolation()); + rs.addRow("conn.getWarnings", "" + conn.getWarnings()); + String map; + try { + map = "" + conn.getTypeMap(); + } catch (SQLException e) { + map = e.toString(); + } + rs.addRow("conn.getTypeMap", "" + map); + 
rs.addRow("conn.isReadOnly", "" + conn.isReadOnly());
+            rs.addRow("conn.getHoldability", "" + conn.getHoldability());
+            addDatabaseMetaData(rs, meta);
+            return rs;
+        } else if (isBuiltIn(sql, "@attributes")) {
+            String[] p = split(sql);
+            return meta.getAttributes(p[1], p[2], p[3], p[4]);
+        } else if (isBuiltIn(sql, "@super_tables")) {
+            String[] p = split(sql);
+            return meta.getSuperTables(p[1], p[2], p[3]);
+        } else if (isBuiltIn(sql, "@super_types")) {
+            String[] p = split(sql);
+            return meta.getSuperTypes(p[1], p[2], p[3]);
+        } else if (isBuiltIn(sql, "@prof_stop")) {
+            if (profiler != null) {
+                profiler.stopCollecting();
+                SimpleResultSet rs = new SimpleResultSet();
+                rs.addColumn("Top Stack Trace(s)", Types.VARCHAR, 0, 0);
+                rs.addRow(profiler.getTop(3));
+                profiler = null;
+                return rs;
+            }
+        }
+        return null;
+    }
+
+    private static void addDatabaseMetaData(SimpleResultSet rs,
+            DatabaseMetaData meta) {
+        Method[] methods = DatabaseMetaData.class.getDeclaredMethods();
+        Arrays.sort(methods, new Comparator<Method>() {
+            @Override
+            public int compare(Method o1, Method o2) {
+                return o1.toString().compareTo(o2.toString());
+            }
+        });
+        for (Method m : methods) {
+            if (m.getParameterTypes().length == 0) {
+                try {
+                    Object o = m.invoke(meta);
+                    rs.addRow("meta." + m.getName(), "" + o);
+                } catch (InvocationTargetException e) {
+                    rs.addRow("meta." + m.getName(), e.getTargetException().toString());
+                } catch (Exception e) {
+                    rs.addRow("meta." + m.getName(), e.toString());
+                }
+            }
+        }
+    }
+
+    private static String[] split(String s) {
+        String[] list = new String[10];
+        String[] t = StringUtils.arraySplit(s, ' ', true);
+        System.arraycopy(t, 0, list, 0, t.length);
+        for (int i = 0; i < list.length; i++) {
+            if ("null".equals(list[i])) {
+                list[i] = null;
+            }
+        }
+        return list;
+    }
+
+    private int getMaxrows() {
+        String r = (String) session.get("maxrows");
+        return r == null ?
0 : Integer.parseInt(r); + } + + private String getResult(Connection conn, int id, String sql, + boolean allowEdit, boolean forceEdit) { + try { + sql = sql.trim(); + StringBuilder buff = new StringBuilder(); + String sqlUpper = StringUtils.toUpperEnglish(sql); + if (sqlUpper.contains("CREATE") || + sqlUpper.contains("DROP") || + sqlUpper.contains("ALTER") || + sqlUpper.contains("RUNSCRIPT")) { + String sessionId = attributes.getProperty("jsessionid"); + buff.append(""); + } + Statement stat; + DbContents contents = session.getContents(); + if (forceEdit || (allowEdit && contents.isH2())) { + stat = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, + ResultSet.CONCUR_UPDATABLE); + } else { + stat = conn.createStatement(); + } + ResultSet rs; + long time = System.currentTimeMillis(); + boolean metadata = false; + int generatedKeys = Statement.NO_GENERATED_KEYS; + boolean edit = false; + boolean list = false; + if (isBuiltIn(sql, "@autocommit_true")) { + conn.setAutoCommit(true); + return "${text.result.autoCommitOn}"; + } else if (isBuiltIn(sql, "@autocommit_false")) { + conn.setAutoCommit(false); + return "${text.result.autoCommitOff}"; + } else if (isBuiltIn(sql, "@cancel")) { + stat = session.executingStatement; + if (stat != null) { + stat.cancel(); + buff.append("${text.result.statementWasCanceled}"); + } else { + buff.append("${text.result.noRunningStatement}"); + } + return buff.toString(); + } else if (isBuiltIn(sql, "@edit")) { + edit = true; + sql = sql.substring("@edit".length()).trim(); + session.put("resultSetSQL", sql); + } + if (isBuiltIn(sql, "@list")) { + list = true; + sql = sql.substring("@list".length()).trim(); + } + if (isBuiltIn(sql, "@meta")) { + metadata = true; + sql = sql.substring("@meta".length()).trim(); + } + if (isBuiltIn(sql, "@generated")) { + generatedKeys = Statement.RETURN_GENERATED_KEYS; + sql = sql.substring("@generated".length()).trim(); + } else if (isBuiltIn(sql, "@history")) { + 
buff.append(getCommandHistoryString()); + return buff.toString(); + } else if (isBuiltIn(sql, "@loop")) { + sql = sql.substring("@loop".length()).trim(); + int idx = sql.indexOf(' '); + int count = Integer.decode(sql.substring(0, idx)); + sql = sql.substring(idx).trim(); + return executeLoop(conn, count, sql); + } else if (isBuiltIn(sql, "@maxrows")) { + int maxrows = (int) Double.parseDouble( + sql.substring("@maxrows".length()).trim()); + session.put("maxrows", "" + maxrows); + return "${text.result.maxrowsSet}"; + } else if (isBuiltIn(sql, "@parameter_meta")) { + sql = sql.substring("@parameter_meta".length()).trim(); + PreparedStatement prep = conn.prepareStatement(sql); + buff.append(getParameterResultSet(prep.getParameterMetaData())); + return buff.toString(); + } else if (isBuiltIn(sql, "@password_hash")) { + sql = sql.substring("@password_hash".length()).trim(); + String[] p = split(sql); + return StringUtils.convertBytesToHex( + SHA256.getKeyPasswordHash(p[0], p[1].toCharArray())); + } else if (isBuiltIn(sql, "@prof_start")) { + if (profiler != null) { + profiler.stopCollecting(); + } + profiler = new Profiler(); + profiler.startCollecting(); + return "Ok"; + } else if (isBuiltIn(sql, "@sleep")) { + String s = sql.substring("@sleep".length()).trim(); + int sleep = 1; + if (s.length() > 0) { + sleep = Integer.parseInt(s); + } + Thread.sleep(sleep * 1000); + return "Ok"; + } else if (isBuiltIn(sql, "@transaction_isolation")) { + String s = sql.substring("@transaction_isolation".length()).trim(); + if (s.length() > 0) { + int level = Integer.parseInt(s); + conn.setTransactionIsolation(level); + } + buff.append("Transaction Isolation: ") + .append(conn.getTransactionIsolation()) + .append("
    "); + buff.append(Connection.TRANSACTION_READ_UNCOMMITTED) + .append(": read_uncommitted
    "); + buff.append(Connection.TRANSACTION_READ_COMMITTED) + .append(": read_committed
    "); + buff.append(Connection.TRANSACTION_REPEATABLE_READ) + .append(": repeatable_read
    "); + buff.append(Connection.TRANSACTION_SERIALIZABLE) + .append(": serializable"); + } + if (sql.startsWith("@")) { + rs = getMetaResultSet(conn, sql); + if (rs == null) { + buff.append("?: ").append(sql); + return buff.toString(); + } + } else { + int maxrows = getMaxrows(); + stat.setMaxRows(maxrows); + session.executingStatement = stat; + boolean isResultSet = stat.execute(sql, generatedKeys); + session.addCommand(sql); + if (generatedKeys == Statement.RETURN_GENERATED_KEYS) { + rs = null; + rs = stat.getGeneratedKeys(); + } else { + if (!isResultSet) { + buff.append("${text.result.updateCount}: ") + .append(stat.getUpdateCount()); + time = System.currentTimeMillis() - time; + buff.append("
    (").append(time).append(" ms)"); + stat.close(); + return buff.toString(); + } + rs = stat.getResultSet(); + } + } + time = System.currentTimeMillis() - time; + buff.append(getResultSet(sql, rs, metadata, list, edit, time, allowEdit)); + // SQLWarning warning = stat.getWarnings(); + // if (warning != null) { + // buff.append("
    Warning:
    "). + // append(getStackTrace(id, warning)); + // } + if (!edit) { + stat.close(); + } + return buff.toString(); + } catch (Throwable e) { + // throwable: including OutOfMemoryError and so on + return getStackTrace(id, e, session.getContents().isH2()); + } finally { + session.executingStatement = null; + } + } + + private static boolean isBuiltIn(String sql, String builtIn) { + return StringUtils.startsWithIgnoreCase(sql, builtIn); + } + + private String executeLoop(Connection conn, int count, String sql) + throws SQLException { + ArrayList params = New.arrayList(); + int idx = 0; + while (!stop) { + idx = sql.indexOf('?', idx); + if (idx < 0) { + break; + } + if (isBuiltIn(sql.substring(idx), "?/*rnd*/")) { + params.add(1); + sql = sql.substring(0, idx) + "?" + sql.substring(idx + "/*rnd*/".length() + 1); + } else { + params.add(0); + } + idx++; + } + boolean prepared; + Random random = new Random(1); + long time = System.currentTimeMillis(); + if (isBuiltIn(sql, "@statement")) { + sql = sql.substring("@statement".length()).trim(); + prepared = false; + Statement stat = conn.createStatement(); + for (int i = 0; !stop && i < count; i++) { + String s = sql; + for (Integer type : params) { + idx = s.indexOf('?'); + if (type.intValue() == 1) { + s = s.substring(0, idx) + random.nextInt(count) + s.substring(idx + 1); + } else { + s = s.substring(0, idx) + i + s.substring(idx + 1); + } + } + if (stat.execute(s)) { + ResultSet rs = stat.getResultSet(); + while (!stop && rs.next()) { + // maybe get the data as well + } + rs.close(); + } + } + } else { + prepared = true; + PreparedStatement prep = conn.prepareStatement(sql); + for (int i = 0; !stop && i < count; i++) { + for (int j = 0; j < params.size(); j++) { + Integer type = params.get(j); + if (type.intValue() == 1) { + prep.setInt(j + 1, random.nextInt(count)); + } else { + prep.setInt(j + 1, i); + } + } + if (session.getContents().isSQLite()) { + // SQLite currently throws an exception on prep.execute() + 
prep.executeUpdate(); + } else { + if (prep.execute()) { + ResultSet rs = prep.getResultSet(); + while (!stop && rs.next()) { + // maybe get the data as well + } + rs.close(); + } + } + } + } + time = System.currentTimeMillis() - time; + StatementBuilder buff = new StatementBuilder(); + buff.append(time).append(" ms: ").append(count).append(" * "); + if (prepared) { + buff.append("(Prepared) "); + } else { + buff.append("(Statement) "); + } + buff.append('('); + for (int p : params) { + buff.appendExceptFirst(", "); + buff.append(p == 0 ? "i" : "rnd"); + } + return buff.append(") ").append(sql).toString(); + } + + private String getCommandHistoryString() { + StringBuilder buff = new StringBuilder(); + ArrayList history = session.getCommandHistory(); + buff.append("
    " + + ""); + for (int i = history.size() - 1; i >= 0; i--) { + String sql = history.get(i); + buff.append(""); + } + buff.append("
    Command
    "). + append("\"${text.resultEdit.edit}\""). + append(""). + append(PageParser.escapeHtml(sql)). + append("
    "); + return buff.toString(); + } + + private static String getParameterResultSet(ParameterMetaData meta) + throws SQLException { + StringBuilder buff = new StringBuilder(); + if (meta == null) { + return "No parameter meta data"; + } + buff.append(""). + append(""). + append(""); + for (int i = 0; i < meta.getParameterCount(); i++) { + buff.append(""); + } + buff.append("
    classNamemodetypetypeNameprecisionscale
    "). + append(meta.getParameterClassName(i + 1)). + append(""). + append(meta.getParameterMode(i + 1)). + append(""). + append(meta.getParameterType(i + 1)). + append(""). + append(meta.getParameterTypeName(i + 1)). + append(""). + append(meta.getPrecision(i + 1)). + append(""). + append(meta.getScale(i + 1)). + append("
    "); + return buff.toString(); + } + + private String getResultSet(String sql, ResultSet rs, boolean metadata, + boolean list, boolean edit, long time, boolean allowEdit) + throws SQLException { + int maxrows = getMaxrows(); + time = System.currentTimeMillis() - time; + StringBuilder buff = new StringBuilder(); + if (edit) { + buff.append("
    " + + "" + + "" + + ""); + } else { + buff.append("
    "); + } + if (metadata) { + SimpleResultSet r = new SimpleResultSet(); + r.addColumn("#", Types.INTEGER, 0, 0); + r.addColumn("label", Types.VARCHAR, 0, 0); + r.addColumn("catalog", Types.VARCHAR, 0, 0); + r.addColumn("schema", Types.VARCHAR, 0, 0); + r.addColumn("table", Types.VARCHAR, 0, 0); + r.addColumn("column", Types.VARCHAR, 0, 0); + r.addColumn("type", Types.INTEGER, 0, 0); + r.addColumn("typeName", Types.VARCHAR, 0, 0); + r.addColumn("class", Types.VARCHAR, 0, 0); + r.addColumn("precision", Types.INTEGER, 0, 0); + r.addColumn("scale", Types.INTEGER, 0, 0); + r.addColumn("displaySize", Types.INTEGER, 0, 0); + r.addColumn("autoIncrement", Types.BOOLEAN, 0, 0); + r.addColumn("caseSensitive", Types.BOOLEAN, 0, 0); + r.addColumn("currency", Types.BOOLEAN, 0, 0); + r.addColumn("nullable", Types.INTEGER, 0, 0); + r.addColumn("readOnly", Types.BOOLEAN, 0, 0); + r.addColumn("searchable", Types.BOOLEAN, 0, 0); + r.addColumn("signed", Types.BOOLEAN, 0, 0); + r.addColumn("writable", Types.BOOLEAN, 0, 0); + r.addColumn("definitelyWritable", Types.BOOLEAN, 0, 0); + ResultSetMetaData m = rs.getMetaData(); + for (int i = 1; i <= m.getColumnCount(); i++) { + r.addRow(i, + m.getColumnLabel(i), + m.getCatalogName(i), + m.getSchemaName(i), + m.getTableName(i), + m.getColumnName(i), + m.getColumnType(i), + m.getColumnTypeName(i), + m.getColumnClassName(i), + m.getPrecision(i), + m.getScale(i), + m.getColumnDisplaySize(i), + m.isAutoIncrement(i), + m.isCaseSensitive(i), + m.isCurrency(i), + m.isNullable(i), + m.isReadOnly(i), + m.isSearchable(i), + m.isSigned(i), + m.isWritable(i), + m.isDefinitelyWritable(i)); + } + rs = r; + } + ResultSetMetaData meta = rs.getMetaData(); + int columns = meta.getColumnCount(); + int rows = 0; + if (list) { + buff.append(""); + while (rs.next()) { + if (maxrows > 0 && rows >= maxrows) { + break; + } + rows++; + buff.append(""); + for (int i = 0; i < columns; i++) { + buff.append(""); + } + } + } else { + buff.append(""); + if (edit) { + 
buff.append(""); + } + for (int i = 0; i < columns; i++) { + buff.append(""); + } + buff.append(""); + while (rs.next()) { + if (maxrows > 0 && rows >= maxrows) { + break; + } + rows++; + buff.append(""); + if (edit) { + buff.append(""); + } + for (int i = 0; i < columns; i++) { + buff.append(""); + } + buff.append(""); + } + } + boolean isUpdatable = false; + try { + if (!session.getContents().isDB2()) { + isUpdatable = rs.getConcurrency() == ResultSet.CONCUR_UPDATABLE + && rs.getType() != ResultSet.TYPE_FORWARD_ONLY; + } + } catch (NullPointerException e) { + // ignore + // workaround for a JDBC-ODBC bridge problem + } + if (edit) { + ResultSet old = session.result; + if (old != null) { + old.close(); + } + session.result = rs; + } else { + rs.close(); + } + if (edit) { + buff.append(""); + for (int i = 0; i < columns; i++) { + buff.append(""); + } + buff.append(""); + } + buff.append("
    ColumnData
    Row #"). + append(rows).append("
    "). + append(PageParser.escapeHtml(meta.getColumnLabel(i + 1))). + append(""). + append(escapeData(rs, i + 1)). + append("
    ${text.resultEdit.action}"). + append(PageParser.escapeHtml(meta.getColumnLabel(i + 1))). + append("
    "). + append("\"${text.resultEdit.edit}\""). + append("\"${text.resultEdit.delete}\""). + append(""). + append(escapeData(rs, i + 1)). + append("
    "). + append("\"${text.resultEdit.add}\""). + append("
    "); + if (edit) { + buff.append("
    "); + } + if (rows == 0) { + buff.append("(${text.result.noRows}"); + } else if (rows == 1) { + buff.append("(${text.result.1row}"); + } else { + buff.append('(').append(rows).append(" ${text.result.rows}"); + } + buff.append(", "); + time = System.currentTimeMillis() - time; + buff.append(time).append(" ms)"); + if (!edit && isUpdatable && allowEdit) { + buff.append("

    " + + "
    " + + "" + + "
    "); + } + return buff.toString(); + } + + /** + * Save the current connection settings to the properties file. + * + * @return the file to open afterwards + */ + private String settingSave() { + ConnectionInfo info = new ConnectionInfo(); + info.name = attributes.getProperty("name", ""); + info.driver = attributes.getProperty("driver", ""); + info.url = attributes.getProperty("url", ""); + info.user = attributes.getProperty("user", ""); + server.updateSetting(info); + attributes.put("setting", info.name); + server.saveProperties(null); + return "index.do"; + } + + private static String escapeData(ResultSet rs, int columnIndex) + throws SQLException { + String d = rs.getString(columnIndex); + if (d == null) { + return "null"; + } else if (d.length() > 100_000) { + String s; + if (isBinary(rs.getMetaData().getColumnType(columnIndex))) { + s = PageParser.escapeHtml(d.substring(0, 6)) + + "... (" + (d.length() / 2) + " ${text.result.bytes})"; + } else { + s = PageParser.escapeHtml(d.substring(0, 100)) + + "... (" + d.length() + " ${text.result.characters})"; + } + return "
    =+
    " + s; + } else if (d.equals("null") || d.startsWith("= ") || d.startsWith("=+")) { + return "
    =
    " + PageParser.escapeHtml(d); + } else if (d.equals("")) { + // PageParser.escapeHtml replaces "" with a non-breaking space + return ""; + } + return PageParser.escapeHtml(d); + } + + private static boolean isBinary(int sqlType) { + switch (sqlType) { + case Types.BINARY: + case Types.BLOB: + case Types.JAVA_OBJECT: + case Types.LONGVARBINARY: + case Types.OTHER: + case Types.VARBINARY: + return true; + } + return false; + } + + private void unescapeData(String x, ResultSet rs, int columnIndex) + throws SQLException { + if (x.equals("null")) { + rs.updateNull(columnIndex); + return; + } else if (x.startsWith("=+")) { + // don't update + return; + } else if (x.equals("=*")) { + // set an appropriate default value + int type = rs.getMetaData().getColumnType(columnIndex); + switch (type) { + case Types.TIME: + rs.updateString(columnIndex, "12:00:00"); + break; + case Types.TIMESTAMP: + case Types.DATE: + rs.updateString(columnIndex, "2001-01-01"); + break; + default: + rs.updateString(columnIndex, "1"); + break; + } + return; + } else if (x.startsWith("= ")) { + x = x.substring(2); + } + ResultSetMetaData meta = rs.getMetaData(); + int type = meta.getColumnType(columnIndex); + if (session.getContents().isH2()) { + rs.updateString(columnIndex, x); + return; + } + switch (type) { + case Types.BIGINT: + rs.updateLong(columnIndex, Long.decode(x)); + break; + case Types.DECIMAL: + rs.updateBigDecimal(columnIndex, new BigDecimal(x)); + break; + case Types.DOUBLE: + case Types.FLOAT: + rs.updateDouble(columnIndex, Double.parseDouble(x)); + break; + case Types.REAL: + rs.updateFloat(columnIndex, Float.parseFloat(x)); + break; + case Types.INTEGER: + rs.updateInt(columnIndex, Integer.decode(x)); + break; + case Types.TINYINT: + rs.updateShort(columnIndex, Short.decode(x)); + break; + default: + rs.updateString(columnIndex, x); + } + } + + private String settingRemove() { + String setting = attributes.getProperty("name", ""); + server.removeSetting(setting); + ArrayList 
settings = server.getSettings(); + if (!settings.isEmpty()) { + attributes.put("setting", settings.get(0)); + } + server.saveProperties(null); + return "index.do"; + } + + /** + * Get the current mime type. + * + * @return the mime type + */ + String getMimeType() { + return mimeType; + } + + boolean getCache() { + return cache; + } + + WebSession getSession() { + return session; + } + + private void trace(String s) { + server.trace(s); + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/web/WebServer.java b/modules/h2/src/main/java/org/h2/server/web/WebServer.java new file mode 100644 index 0000000000000..e30704f64cfc9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/WebServer.java @@ -0,0 +1,852 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.ServerSocket; +import java.net.Socket; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.sql.SQLException; +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Locale; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Properties; +import java.util.Set; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.server.Service; +import org.h2.server.ShutdownHandler; +import org.h2.store.fs.FileUtils; +import org.h2.util.DateTimeUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.MathUtils; +import org.h2.util.NetUtils; +import org.h2.util.New; +import org.h2.util.SortedProperties; +import org.h2.util.StringUtils; +import org.h2.util.Tool; +import org.h2.util.Utils; + +/** + * The web 
server is a simple standalone HTTP server that implements the H2 + * Console application. It is not optimized for performance. + */ +public class WebServer implements Service { + + static final String[][] LANGUAGES = { + { "cs", "\u010ce\u0161tina" }, + { "de", "Deutsch" }, + { "en", "English" }, + { "es", "Espa\u00f1ol" }, + { "fr", "Fran\u00e7ais" }, + { "hu", "Magyar"}, + { "ko", "\ud55c\uad6d\uc5b4"}, + { "in", "Indonesia"}, + { "it", "Italiano"}, + { "ja", "\u65e5\u672c\u8a9e"}, + { "nl", "Nederlands"}, + { "pl", "Polski"}, + { "pt_BR", "Portugu\u00eas (Brasil)"}, + { "pt_PT", "Portugu\u00eas (Europeu)"}, + { "ru", "\u0440\u0443\u0441\u0441\u043a\u0438\u0439"}, + { "sk", "Slovensky"}, + { "tr", "T\u00fcrk\u00e7e"}, + { "uk", "\u0423\u043A\u0440\u0430\u0457\u043D\u0441\u044C\u043A\u0430"}, + { "zh_CN", "\u4e2d\u6587 (\u7b80\u4f53)"}, + { "zh_TW", "\u4e2d\u6587 (\u7e41\u9ad4)"}, + }; + + private static final String COMMAND_HISTORY = "commandHistory"; + + private static final String DEFAULT_LANGUAGE = "en"; + + private static final String[] GENERIC = { + "Generic JNDI Data Source|javax.naming.InitialContext|" + + "java:comp/env/jdbc/Test|sa", + "Generic Teradata|com.teradata.jdbc.TeraDriver|" + + "jdbc:teradata://whomooz/|", + "Generic Snowflake|com.snowflake.client.jdbc.SnowflakeDriver|" + + "jdbc:snowflake://accountName.snowflakecomputing.com|", + "Generic Redshift|com.amazon.redshift.jdbc42.Driver|" + + "jdbc:redshift://endpoint:5439/database|", + "Generic Impala|org.cloudera.impala.jdbc41.Driver|" + + "jdbc:impala://clustername:21050/default|", + "Generic Hive 2|org.apache.hive.jdbc.HiveDriver|" + + "jdbc:hive2://clustername:10000/default|", + "Generic Hive|org.apache.hadoop.hive.jdbc.HiveDriver|" + + "jdbc:hive://clustername:10000/default|", + "Generic Azure SQL|com.microsoft.sqlserver.jdbc.SQLServerDriver|" + + "jdbc:sqlserver://name.database.windows.net:1433|", + "Generic Firebird Server|org.firebirdsql.jdbc.FBDriver|" + + 
"jdbc:firebirdsql:localhost:c:/temp/firebird/test|sysdba", + "Generic SQLite|org.sqlite.JDBC|" + + "jdbc:sqlite:test|sa", + "Generic DB2|com.ibm.db2.jcc.DB2Driver|" + + "jdbc:db2://localhost/test|" , + "Generic Oracle|oracle.jdbc.driver.OracleDriver|" + + "jdbc:oracle:thin:@localhost:1521:XE|sa" , + "Generic MS SQL Server 2000|com.microsoft.jdbc.sqlserver.SQLServerDriver|" + + "jdbc:microsoft:sqlserver://localhost:1433;DatabaseName=sqlexpress|sa", + "Generic MS SQL Server 2005|com.microsoft.sqlserver.jdbc.SQLServerDriver|" + + "jdbc:sqlserver://localhost;DatabaseName=test|sa", + "Generic PostgreSQL|org.postgresql.Driver|" + + "jdbc:postgresql:test|" , + "Generic MySQL|com.mysql.jdbc.Driver|" + + "jdbc:mysql://localhost:3306/test|" , + "Generic HSQLDB|org.hsqldb.jdbcDriver|" + + "jdbc:hsqldb:test;hsqldb.default_table_type=cached|sa" , + "Generic Derby (Server)|org.apache.derby.jdbc.ClientDriver|" + + "jdbc:derby://localhost:1527/test;create=true|sa", + "Generic Derby (Embedded)|org.apache.derby.jdbc.EmbeddedDriver|" + + "jdbc:derby:test;create=true|sa", + "Generic H2 (Server)|org.h2.Driver|" + + "jdbc:h2:tcp://localhost/~/test|sa", + // this will be listed on top for new installations + "Generic H2 (Embedded)|org.h2.Driver|" + + "jdbc:h2:~/test|sa", + }; + + private static int ticker; + + /** + * The session timeout (the default is 30 minutes). + */ + private static final long SESSION_TIMEOUT = SysProperties.CONSOLE_TIMEOUT; + +// public static void main(String... args) throws IOException { +// String s = IOUtils.readStringAndClose(new java.io.FileReader( +// // "src/main/org/h2/server/web/res/_text_cs.prop"), -1); +// "src/main/org/h2/res/_messages_cs.prop"), -1); +// System.out.println(StringUtils.javaEncode("...")); +// String[] list = Locale.getISOLanguages(); +// for (int i = 0; i < list.length; i++) { +// System.out.print(list[i] + " "); +// } +// System.out.println(); +// String l = "de"; +// String lang = new java.util.Locale(l). 
+// getDisplayLanguage(new java.util.Locale(l)); +// System.out.println(new java.util.Locale(l).getDisplayLanguage()); +// System.out.println(lang); +// java.util.Locale.CHINESE.getDisplayLanguage(java.util.Locale.CHINESE); +// for (int i = 0; i < lang.length(); i++) { +// System.out.println(Integer.toHexString(lang.charAt(i)) + " "); +// } +// } + + // private URLClassLoader urlClassLoader; + private int port; + private boolean allowOthers; + private boolean isDaemon; + private final Set running = + Collections.synchronizedSet(new HashSet()); + private boolean ssl; + private final HashMap connInfoMap = new HashMap<>(); + + private long lastTimeoutCheck; + private final HashMap sessions = new HashMap<>(); + private final HashSet languages = new HashSet<>(); + private String startDateTime; + private ServerSocket serverSocket; + private String url; + private ShutdownHandler shutdownHandler; + private Thread listenerThread; + private boolean ifExists; + private boolean trace; + private TranslateThread translateThread; + private boolean allowChunked = true; + private String serverPropertiesDir = Constants.SERVER_PROPERTIES_DIR; + // null means the history is not allowed to be stored + private String commandHistoryString; + + /** + * Read the given file from the file system or from the resources. + * + * @param file the file name + * @return the data + */ + byte[] getFile(String file) throws IOException { + trace("getFile <" + file + ">"); + byte[] data = Utils.getResource("/org/h2/server/web/res/" + file); + if (data == null) { + trace(" null"); + } else { + trace(" size=" + data.length); + } + return data; + } + + /** + * Remove this web thread from the set of running threads. 
+ * + * @param t the thread to remove + */ + synchronized void remove(WebThread t) { + running.remove(t); + } + + private static String generateSessionId() { + byte[] buff = MathUtils.secureRandomBytes(16); + return StringUtils.convertBytesToHex(buff); + } + + /** + * Get the web session object for the given session id. + * + * @param sessionId the session id + * @return the web session or null + */ + WebSession getSession(String sessionId) { + long now = System.currentTimeMillis(); + if (lastTimeoutCheck + SESSION_TIMEOUT < now) { + for (String id : new ArrayList<>(sessions.keySet())) { + WebSession session = sessions.get(id); + if (session.lastAccess + SESSION_TIMEOUT < now) { + trace("timeout for " + id); + sessions.remove(id); + } + } + lastTimeoutCheck = now; + } + WebSession session = sessions.get(sessionId); + if (session != null) { + session.lastAccess = System.currentTimeMillis(); + } + return session; + } + + /** + * Create a new web session id and object. + * + * @param hostAddr the host address + * @return the web session object + */ + WebSession createNewSession(String hostAddr) { + String newId; + do { + newId = generateSessionId(); + } while (sessions.get(newId) != null); + WebSession session = new WebSession(this); + session.lastAccess = System.currentTimeMillis(); + session.put("sessionId", newId); + session.put("ip", hostAddr); + session.put("language", DEFAULT_LANGUAGE); + session.put("frame-border", "0"); + session.put("frameset-border", "4"); + sessions.put(newId, session); + // always read the english translation, + // so that untranslated text appears at least in english + readTranslations(session, DEFAULT_LANGUAGE); + return getSession(newId); + } + + String getStartDateTime() { + if (startDateTime == null) { + SimpleDateFormat format = new SimpleDateFormat( + "EEE, d MMM yyyy HH:mm:ss z", new Locale("en", "")); + format.setTimeZone(DateTimeUtils.UTC); + startDateTime = format.format(System.currentTimeMillis()); + } + return startDateTime; + 
} + + @Override + public void init(String... args) { + // set the serverPropertiesDir, because it's used in loadProperties() + for (int i = 0; args != null && i < args.length; i++) { + if ("-properties".equals(args[i])) { + serverPropertiesDir = args[++i]; + } + } + Properties prop = loadProperties(); + port = SortedProperties.getIntProperty(prop, + "webPort", Constants.DEFAULT_HTTP_PORT); + ssl = SortedProperties.getBooleanProperty(prop, + "webSSL", false); + allowOthers = SortedProperties.getBooleanProperty(prop, + "webAllowOthers", false); + commandHistoryString = prop.getProperty(COMMAND_HISTORY); + for (int i = 0; args != null && i < args.length; i++) { + String a = args[i]; + if (Tool.isOption(a, "-webPort")) { + port = Integer.decode(args[++i]); + } else if (Tool.isOption(a, "-webSSL")) { + ssl = true; + } else if (Tool.isOption(a, "-webAllowOthers")) { + allowOthers = true; + } else if (Tool.isOption(a, "-webDaemon")) { + isDaemon = true; + } else if (Tool.isOption(a, "-baseDir")) { + String baseDir = args[++i]; + SysProperties.setBaseDir(baseDir); + } else if (Tool.isOption(a, "-ifExists")) { + ifExists = true; + } else if (Tool.isOption(a, "-properties")) { + // already set + i++; + } else if (Tool.isOption(a, "-trace")) { + trace = true; + } + } +// if (driverList != null) { +// try { +// String[] drivers = +// StringUtils.arraySplit(driverList, ',', false); +// URL[] urls = new URL[drivers.length]; +// for(int i=0; i(sessions.values())) { + session.close(); + } + for (WebThread c : new ArrayList<>(running)) { + try { + c.stopNow(); + c.join(100); + } catch (Exception e) { + traceError(e); + } + } + } + + /** + * Write trace information if trace is enabled. + * + * @param s the message to write + */ + void trace(String s) { + if (trace) { + System.out.println(s); + } + } + + /** + * Write the stack trace if trace is enabled. 
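The session handling above (getSession / createNewSession) combines two ideas: hex session ids built from secure random bytes, and a lazy timeout sweep that only scans the session map when a full timeout interval has elapsed since the last check. A minimal self-contained sketch of both, with illustrative names (H2 itself uses its MathUtils/StringUtils helpers rather than these):

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/**
 * Sketch of the session bookkeeping: random hex ids plus a lazy
 * timeout sweep. Not the actual H2 code; field and method names
 * are illustrative.
 */
public class SessionSweep {
    static final long TIMEOUT_MS = 30 * 60 * 1000L; // default 30 minutes
    final Map<String, Long> lastAccess = new HashMap<>();
    long lastTimeoutCheck;

    /** 16 secure random bytes rendered as 32 lowercase hex characters. */
    static String generateSessionId() {
        byte[] buff = new byte[16];
        new SecureRandom().nextBytes(buff);
        StringBuilder sb = new StringBuilder(32);
        for (byte b : buff) {
            sb.append(Character.forDigit((b >> 4) & 0xf, 16))
              .append(Character.forDigit(b & 0xf, 16));
        }
        return sb.toString();
    }

    /** Drop sessions idle longer than TIMEOUT_MS; scan at most once per interval. */
    void sweep(long now) {
        if (lastTimeoutCheck + TIMEOUT_MS < now) {
            Iterator<Map.Entry<String, Long>> it = lastAccess.entrySet().iterator();
            while (it.hasNext()) {
                if (it.next().getValue() + TIMEOUT_MS < now) {
                    it.remove();
                }
            }
            lastTimeoutCheck = now;
        }
    }
}
```

The lazy sweep keeps getSession O(1) on the common path; the full scan cost is only paid once per timeout interval.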
+ * + * @param e the exception + */ + void traceError(Throwable e) { + if (trace) { + e.printStackTrace(); + } + } + + /** + * Check if this language is supported / translated. + * + * @param language the language + * @return true if a translation is available + */ + boolean supportsLanguage(String language) { + return languages.contains(language); + } + + /** + * Read the translation for this language and save them in the 'text' + * property of this session. + * + * @param session the session + * @param language the language + */ + void readTranslations(WebSession session, String language) { + Properties text = new Properties(); + try { + trace("translation: "+language); + byte[] trans = getFile("_text_"+language+".prop"); + trace(" "+new String(trans)); + text = SortedProperties.fromLines(new String(trans, StandardCharsets.UTF_8)); + // remove starting # (if not translated yet) + for (Entry entry : text.entrySet()) { + String value = (String) entry.getValue(); + if (value.startsWith("#")) { + entry.setValue(value.substring(1)); + } + } + } catch (IOException e) { + DbException.traceThrowable(e); + } + session.put("text", new HashMap<>(text)); + } + + ArrayList> getSessions() { + ArrayList> list = New.arrayList(); + for (WebSession s : sessions.values()) { + list.add(s.getInfo()); + } + return list; + } + + @Override + public String getType() { + return "Web Console"; + } + + @Override + public String getName() { + return "H2 Console Server"; + } + + void setAllowOthers(boolean b) { + allowOthers = b; + } + + @Override + public boolean getAllowOthers() { + return allowOthers; + } + + void setSSL(boolean b) { + ssl = b; + } + + void setPort(int port) { + this.port = port; + } + + boolean getSSL() { + return ssl; + } + + @Override + public int getPort() { + return port; + } + + public boolean isCommandHistoryAllowed() { + return commandHistoryString != null; + } + + public void setCommandHistoryAllowed(boolean allowed) { + if (allowed) { + if (commandHistoryString 
== null) { + commandHistoryString = ""; + } + } else { + commandHistoryString = null; + } + } + + public ArrayList getCommandHistoryList() { + ArrayList result = New.arrayList(); + if (commandHistoryString == null) { + return result; + } + + // Split the commandHistoryString on non-escaped semicolons + // and unescape it. + StringBuilder sb = new StringBuilder(); + for (int end = 0;; end++) { + if (end == commandHistoryString.length() || + commandHistoryString.charAt(end) == ';') { + if (sb.length() > 0) { + result.add(sb.toString()); + sb.delete(0, sb.length()); + } + if (end == commandHistoryString.length()) { + break; + } + } else if (commandHistoryString.charAt(end) == '\\' && + end < commandHistoryString.length() - 1) { + sb.append(commandHistoryString.charAt(++end)); + } else { + sb.append(commandHistoryString.charAt(end)); + } + } + return result; + } + + /** + * Save the command history to the properties file. + * + * @param commandHistory the history + */ + public void saveCommandHistoryList(ArrayList commandHistory) { + StringBuilder sb = new StringBuilder(); + for (String s : commandHistory) { + if (sb.length() > 0) { + sb.append(';'); + } + sb.append(s.replace("\\", "\\\\").replace(";", "\\;")); + } + commandHistoryString = sb.toString(); + saveProperties(null); + } + + /** + * Get the connection information for this setting. + * + * @param name the setting name + * @return the connection information + */ + ConnectionInfo getSetting(String name) { + return connInfoMap.get(name); + } + + /** + * Update a connection information setting. 
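The command-history persistence above (getCommandHistoryList / saveCommandHistoryList) stores all statements in a single property value: entries are joined with ';' after escaping '\' and ';' with a backslash, and split again on unescaped semicolons. A standalone sketch of that round-trip, extracted from the logic shown above:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the command-history encoding: backslash-escape ';' and '\',
 * join with ';', and split again on unescaped semicolons.
 */
public class HistoryCodec {

    static String join(List<String> commands) {
        StringBuilder sb = new StringBuilder();
        for (String s : commands) {
            if (sb.length() > 0) {
                sb.append(';');
            }
            sb.append(s.replace("\\", "\\\\").replace(";", "\\;"));
        }
        return sb.toString();
    }

    static List<String> split(String encoded) {
        List<String> result = new ArrayList<>();
        StringBuilder sb = new StringBuilder();
        for (int i = 0;; i++) {
            if (i == encoded.length() || encoded.charAt(i) == ';') {
                if (sb.length() > 0) {
                    result.add(sb.toString());
                    sb.setLength(0);
                }
                if (i == encoded.length()) {
                    break;
                }
            } else if (encoded.charAt(i) == '\\' && i < encoded.length() - 1) {
                sb.append(encoded.charAt(++i)); // unescape the next character
            } else {
                sb.append(encoded.charAt(i));
            }
        }
        return result;
    }
}
```

Note that, as in the original, empty entries are silently dropped by the split, so the codec is lossless only for non-empty statements.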
+ * + * @param info the connection information + */ + void updateSetting(ConnectionInfo info) { + connInfoMap.put(info.name, info); + info.lastAccess = ticker++; + } + + /** + * Remove a connection information setting from the list + * + * @param name the setting to remove + */ + void removeSetting(String name) { + connInfoMap.remove(name); + } + + private Properties loadProperties() { + try { + if ("null".equals(serverPropertiesDir)) { + return new Properties(); + } + return SortedProperties.loadProperties( + serverPropertiesDir + "/" + Constants.SERVER_PROPERTIES_NAME); + } catch (Exception e) { + DbException.traceThrowable(e); + return new Properties(); + } + } + + /** + * Get the list of connection information setting names. + * + * @return the connection info names + */ + String[] getSettingNames() { + ArrayList list = getSettings(); + String[] names = new String[list.size()]; + for (int i = 0; i < list.size(); i++) { + names[i] = list.get(i).name; + } + return names; + } + + /** + * Get the list of connection info objects. + * + * @return the list + */ + synchronized ArrayList getSettings() { + ArrayList settings = New.arrayList(); + if (connInfoMap.size() == 0) { + Properties prop = loadProperties(); + if (prop.size() == 0) { + for (String gen : GENERIC) { + ConnectionInfo info = new ConnectionInfo(gen); + settings.add(info); + updateSetting(info); + } + } else { + for (int i = 0;; i++) { + String data = prop.getProperty(String.valueOf(i)); + if (data == null) { + break; + } + ConnectionInfo info = new ConnectionInfo(data); + settings.add(info); + updateSetting(info); + } + } + } else { + settings.addAll(connInfoMap.values()); + } + Collections.sort(settings); + return settings; + } + + /** + * Save the settings to the properties file. 
+ * + * @param prop null or the properties webPort, webAllowOthers, and webSSL + */ + synchronized void saveProperties(Properties prop) { + try { + if (prop == null) { + Properties old = loadProperties(); + prop = new SortedProperties(); + prop.setProperty("webPort", + "" + SortedProperties.getIntProperty(old, + "webPort", port)); + prop.setProperty("webAllowOthers", + "" + SortedProperties.getBooleanProperty(old, + "webAllowOthers", allowOthers)); + prop.setProperty("webSSL", + "" + SortedProperties.getBooleanProperty(old, + "webSSL", ssl)); + if (commandHistoryString != null) { + prop.setProperty(COMMAND_HISTORY, commandHistoryString); + } + } + ArrayList settings = getSettings(); + int len = settings.size(); + for (int i = 0; i < len; i++) { + ConnectionInfo info = settings.get(i); + if (info != null) { + prop.setProperty(String.valueOf(len - i - 1), info.getString()); + } + } + if (!"null".equals(serverPropertiesDir)) { + OutputStream out = FileUtils.newOutputStream( + serverPropertiesDir + "/" + Constants.SERVER_PROPERTIES_NAME, false); + prop.store(out, "H2 Server Properties"); + out.close(); + } + } catch (Exception e) { + DbException.traceThrowable(e); + } + } + + /** + * Open a database connection. 
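The saveProperties / getSettings pair above uses a small ordering trick: assuming the sort places the most recently used connection first (which the reload logic suggests), each setting is written under the key (len - i - 1), so the MRU entry gets the highest numeric key; on reload, keys are read in ascending order and each entry receives a larger access ticker, restoring the MRU order. A hypothetical sketch of just that round-trip, with plain strings standing in for ConnectionInfo:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the settings ordering round-trip: MRU-first list out,
 * reversed numeric keys in the property file, MRU-first list back.
 * Simplified stand-in; H2 stores ConnectionInfo objects, not strings.
 */
public class SettingsOrder {
    static Map<String, String> store(List<String> mruFirst) {
        Map<String, String> prop = new HashMap<>();
        int len = mruFirst.size();
        for (int i = 0; i < len; i++) {
            // MRU entry ends up under the highest numeric key
            prop.put(String.valueOf(len - i - 1), mruFirst.get(i));
        }
        return prop;
    }

    static List<String> load(Map<String, String> prop) {
        List<String> byTicker = new ArrayList<>();
        for (int i = 0;; i++) {
            String data = prop.get(String.valueOf(i));
            if (data == null) {
                break;
            }
            byTicker.add(data); // later entries get a larger access ticker
        }
        Collections.reverse(byTicker); // largest ticker (MRU) first
        return byTicker;
    }
}
```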
+ * + * @param driver the driver class name + * @param databaseUrl the database URL + * @param user the user name + * @param password the password + * @return the database connection + */ + Connection getConnection(String driver, String databaseUrl, String user, + String password) throws SQLException { + driver = driver.trim(); + databaseUrl = databaseUrl.trim(); + org.h2.Driver.load(); + Properties p = new Properties(); + p.setProperty("user", user.trim()); + // do not trim the password, otherwise an + // encrypted H2 database with empty user password doesn't work + p.setProperty("password", password); + if (databaseUrl.startsWith("jdbc:h2:")) { + if (ifExists) { + databaseUrl += ";IFEXISTS=TRUE"; + } + // PostgreSQL would throw a NullPointerException + // if it is loaded before the H2 driver + // because it can't deal with non-String objects in the connection + // Properties + return org.h2.Driver.load().connect(databaseUrl, p); + } +// try { +// Driver dr = (Driver) urlClassLoader. +// loadClass(driver).newInstance(); +// return dr.connect(url, p); +// } catch(ClassNotFoundException e2) { +// throw e2; +// } + return JdbcUtils.getConnection(driver, databaseUrl, p); + } + + /** + * Shut down the web server. + */ + void shutdown() { + if (shutdownHandler != null) { + shutdownHandler.shutdown(); + } + } + + public void setShutdownHandler(ShutdownHandler shutdownHandler) { + this.shutdownHandler = shutdownHandler; + } + + /** + * Create a session with a given connection. 
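The URL handling in getConnection above has one security-relevant branch: when the server was started with -ifExists, H2 URLs get ";IFEXISTS=TRUE" appended so the console cannot create new databases, while non-H2 URLs pass through untouched (and, as the comment notes, the password is deliberately not trimmed). A minimal sketch of just the URL policy:

```java
/**
 * Sketch of the URL policy in getConnection: force IFEXISTS=TRUE
 * on H2 URLs when database creation should be disallowed.
 */
public class UrlPolicy {
    static String applyIfExists(String databaseUrl, boolean ifExists) {
        databaseUrl = databaseUrl.trim();
        if (ifExists && databaseUrl.startsWith("jdbc:h2:")) {
            return databaseUrl + ";IFEXISTS=TRUE";
        }
        return databaseUrl;
    }
}
```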
+ * + * @param conn the connection + * @return the URL of the web site to access this connection + */ + public String addSession(Connection conn) throws SQLException { + WebSession session = createNewSession("local"); + session.setShutdownServerOnDisconnect(); + session.setConnection(conn); + session.put("url", conn.getMetaData().getURL()); + String s = (String) session.get("sessionId"); + return url + "/frame.jsp?jsessionid=" + s; + } + + /** + * The translate thread reads and writes the file translation.properties + * once a second. + */ + private class TranslateThread extends Thread { + + private final File file = new File("translation.properties"); + private final Map translation; + private volatile boolean stopNow; + + TranslateThread(Map translation) { + this.translation = translation; + } + + public String getFileName() { + return file.getAbsolutePath(); + } + + public void stopNow() { + this.stopNow = true; + try { + join(); + } catch (InterruptedException e) { + // ignore + } + } + + @Override + public void run() { + while (!stopNow) { + try { + SortedProperties sp = new SortedProperties(); + if (file.exists()) { + InputStream in = FileUtils.newInputStream(file.getName()); + sp.load(in); + translation.putAll(sp); + } else { + OutputStream out = FileUtils.newOutputStream(file.getName(), false); + sp.putAll(translation); + sp.store(out, "Translation"); + } + Thread.sleep(1000); + } catch (Exception e) { + traceError(e); + } + } + } + + } + + /** + * Start the translation thread that reads the file once a second. 
+ * + * @param translation the translation map + * @return the name of the file to translate + */ + String startTranslate(Map translation) { + if (translateThread != null) { + translateThread.stopNow(); + } + translateThread = new TranslateThread(translation); + translateThread.setDaemon(true); + translateThread.start(); + return translateThread.getFileName(); + } + + @Override + public boolean isDaemon() { + return isDaemon; + } + + void setAllowChunked(boolean allowChunked) { + this.allowChunked = allowChunked; + } + + boolean getAllowChunked() { + return allowChunked; + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/web/WebServlet.java b/modules/h2/src/main/java/org/h2/server/web/WebServlet.java new file mode 100644 index 0000000000000..48eca4cef386b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/WebServlet.java @@ -0,0 +1,164 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import java.io.IOException; +import java.net.InetAddress; +import java.net.UnknownHostException; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Enumeration; +import java.util.Properties; + +import javax.servlet.ServletConfig; +import javax.servlet.ServletOutputStream; +import javax.servlet.http.HttpServlet; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import org.h2.util.New; + +/** + * This servlet lets the H2 Console be used in a standard servlet container + * such as Tomcat or Jetty. 
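The access check in WebServlet.allow() that follows is easy to state in isolation: unless webAllowOthers is enabled, only requests whose remote address is a loopback address are served. A self-contained sketch of that rule (omitting the Google App Engine NoClassDefFoundError workaround shown in the servlet):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * Sketch of the console access check: allow everyone when configured,
 * otherwise only loopback clients.
 */
public class AccessCheck {
    static boolean allow(String remoteAddr, boolean allowOthers) {
        if (allowOthers) {
            return true;
        }
        try {
            return InetAddress.getByName(remoteAddr).isLoopbackAddress();
        } catch (UnknownHostException e) {
            return false;
        }
    }
}
```

Because literal IP addresses are parsed without a DNS lookup, the common case (checking the socket's remote address) does not touch the network.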
+ */ +public class WebServlet extends HttpServlet { + + private static final long serialVersionUID = 1L; + private transient WebServer server; + + @Override + public void init() { + ServletConfig config = getServletConfig(); + Enumeration en = config.getInitParameterNames(); + ArrayList list = New.arrayList(); + while (en.hasMoreElements()) { + String name = en.nextElement().toString(); + String value = config.getInitParameter(name); + if (!name.startsWith("-")) { + name = "-" + name; + } + list.add(name); + if (value.length() > 0) { + list.add(value); + } + } + String[] args = list.toArray(new String[0]); + server = new WebServer(); + server.setAllowChunked(false); + server.init(args); + } + + @Override + public void destroy() { + server.stop(); + } + + private boolean allow(HttpServletRequest req) { + if (server.getAllowOthers()) { + return true; + } + String addr = req.getRemoteAddr(); + try { + InetAddress address = InetAddress.getByName(addr); + return address.isLoopbackAddress(); + } catch (UnknownHostException e) { + return false; + } catch (NoClassDefFoundError e) { + // Google App Engine does not allow java.net.InetAddress + return false; + } + } + + private String getAllowedFile(HttpServletRequest req, String requestedFile) { + if (!allow(req)) { + return "notAllowed.jsp"; + } + if (requestedFile.length() == 0) { + return "index.do"; + } + return requestedFile; + } + + @Override + public void doGet(HttpServletRequest req, HttpServletResponse resp) + throws IOException { + req.setCharacterEncoding("utf-8"); + String file = req.getPathInfo(); + if (file == null) { + resp.sendRedirect(req.getRequestURI() + "/"); + return; + } else if (file.startsWith("/")) { + file = file.substring(1); + } + file = getAllowedFile(req, file); + + // extract the request attributes + Properties attributes = new Properties(); + Enumeration en = req.getAttributeNames(); + while (en.hasMoreElements()) { + String name = en.nextElement().toString(); + String value = 
req.getAttribute(name).toString(); + attributes.put(name, value); + } + en = req.getParameterNames(); + while (en.hasMoreElements()) { + String name = en.nextElement().toString(); + String value = req.getParameter(name); + attributes.put(name, value); + } + + WebSession session = null; + String sessionId = attributes.getProperty("jsessionid"); + if (sessionId != null) { + session = server.getSession(sessionId); + } + WebApp app = new WebApp(server); + app.setSession(session, attributes); + String ifModifiedSince = req.getHeader("if-modified-since"); + + String hostAddr = req.getRemoteAddr(); + file = app.processRequest(file, hostAddr); + session = app.getSession(); + + String mimeType = app.getMimeType(); + boolean cache = app.getCache(); + + if (cache && server.getStartDateTime().equals(ifModifiedSince)) { + resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED); + return; + } + byte[] bytes = server.getFile(file); + if (bytes == null) { + resp.sendError(HttpServletResponse.SC_NOT_FOUND); + bytes = ("File not found: " + file).getBytes(StandardCharsets.UTF_8); + } else { + if (session != null && file.endsWith(".jsp")) { + String page = new String(bytes, StandardCharsets.UTF_8); + page = PageParser.parse(page, session.map); + bytes = page.getBytes(StandardCharsets.UTF_8); + } + resp.setContentType(mimeType); + if (!cache) { + resp.setHeader("Cache-Control", "no-cache"); + } else { + resp.setHeader("Cache-Control", "max-age=10"); + resp.setHeader("Last-Modified", server.getStartDateTime()); + } + } + if (bytes != null) { + ServletOutputStream out = resp.getOutputStream(); + out.write(bytes); + } + } + + @Override + public void doPost(HttpServletRequest req, HttpServletResponse resp) + throws IOException { + doGet(req, resp); + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/web/WebSession.java b/modules/h2/src/main/java/org/h2/server/web/WebSession.java new file mode 100644 index 0000000000000..966cf25ee9f28 --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/server/web/WebSession.java @@ -0,0 +1,274 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Timestamp; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Locale; + +import org.h2.bnf.Bnf; +import org.h2.bnf.context.DbContents; +import org.h2.bnf.context.DbContextRule; +import org.h2.message.DbException; + +/** + * The web session keeps all data of a user session. + * This class is used by the H2 Console. + */ +class WebSession { + + private static final int MAX_HISTORY = 1000; + + /** + * The last time this client sent a request. + */ + long lastAccess; + + /** + * The session attribute map. + */ + final HashMap map = new HashMap<>(); + + /** + * The current locale. + */ + Locale locale; + + /** + * The currently executing statement. + */ + Statement executingStatement; + + /** + * The current updatable result set. + */ + ResultSet result; + + private final WebServer server; + + private final ArrayList commandHistory; + + private Connection conn; + private DatabaseMetaData meta; + private DbContents contents = new DbContents(); + private Bnf bnf; + private boolean shutdownServerOnDisconnect; + + WebSession(WebServer server) { + this.server = server; + // This must be stored in the session rather than in the server. + // Otherwise, one client could allow + // saving history for others (insecure). + this.commandHistory = server.getCommandHistoryList(); + } + + /** + * Put an attribute value in the map. + * + * @param key the key + * @param value the new value + */ + void put(String key, Object value) { + map.put(key, value); + } + + /** + * Get the value for the given key. 
+ * + * @param key the key + * @return the value + */ + Object get(String key) { + if ("sessions".equals(key)) { + return server.getSessions(); + } + return map.get(key); + } + + /** + * Remove a session attribute from the map. + * + * @param key the key + */ + void remove(String key) { + map.remove(key); + } + + /** + * Get the BNF object. + * + * @return the BNF object + */ + Bnf getBnf() { + return bnf; + } + + /** + * Load the SQL grammar BNF. + */ + void loadBnf() { + try { + Bnf newBnf = Bnf.getInstance(null); + DbContextRule columnRule = + new DbContextRule(contents, DbContextRule.COLUMN); + DbContextRule newAliasRule = + new DbContextRule(contents, DbContextRule.NEW_TABLE_ALIAS); + DbContextRule aliasRule = + new DbContextRule(contents, DbContextRule.TABLE_ALIAS); + DbContextRule tableRule = + new DbContextRule(contents, DbContextRule.TABLE); + DbContextRule schemaRule = + new DbContextRule(contents, DbContextRule.SCHEMA); + DbContextRule columnAliasRule = + new DbContextRule(contents, DbContextRule.COLUMN_ALIAS); + DbContextRule procedure = + new DbContextRule(contents, DbContextRule.PROCEDURE); + newBnf.updateTopic("procedure", procedure); + newBnf.updateTopic("column_name", columnRule); + newBnf.updateTopic("new_table_alias", newAliasRule); + newBnf.updateTopic("table_alias", aliasRule); + newBnf.updateTopic("column_alias", columnAliasRule); + newBnf.updateTopic("table_name", tableRule); + newBnf.updateTopic("schema_name", schemaRule); + newBnf.linkStatements(); + bnf = newBnf; + } catch (Exception e) { + // ok we don't have the bnf + server.traceError(e); + } + } + + /** + * Get the SQL statement from history. + * + * @param id the history id + * @return the SQL statement + */ + String getCommand(int id) { + return commandHistory.get(id); + } + + /** + * Add a SQL statement to the history. 
+ * + * @param sql the SQL statement + */ + void addCommand(String sql) { + if (sql == null) { + return; + } + sql = sql.trim(); + if (sql.length() == 0) { + return; + } + if (commandHistory.size() > MAX_HISTORY) { + commandHistory.remove(0); + } + int idx = commandHistory.indexOf(sql); + if (idx >= 0) { + commandHistory.remove(idx); + } + commandHistory.add(sql); + if (server.isCommandHistoryAllowed()) { + server.saveCommandHistoryList(commandHistory); + } + } + + /** + * Get the list of SQL statements in the history. + * + * @return the commands + */ + ArrayList<String> getCommandHistory() { + return commandHistory; + } + + /** + * Update session meta data information and get the information in a map. + * + * @return a map containing the session meta data + */ + HashMap<String, Object> getInfo() { + HashMap<String, Object> m = new HashMap<>(map.size() + 5); + m.putAll(map); + m.put("lastAccess", new Timestamp(lastAccess).toString()); + try { + m.put("url", conn == null ? + "${text.admin.notConnected}" : conn.getMetaData().getURL()); + m.put("user", conn == null ? + "-" : conn.getMetaData().getUserName()); + m.put("lastQuery", commandHistory.isEmpty() ? + "" : commandHistory.get(0)); + m.put("executing", executingStatement == null ? + "${text.admin.no}" : "${text.admin.yes}"); + } catch (SQLException e) { + DbException.traceThrowable(e); + } + return m; + } + + void setConnection(Connection conn) throws SQLException { + this.conn = conn; + if (conn == null) { + meta = null; + } else { + meta = conn.getMetaData(); + } + contents = new DbContents(); + } + + DatabaseMetaData getMetaData() { + return meta; + } + + Connection getConnection() { + return conn; + } + + DbContents getContents() { + return contents; + } + + /** + * Shutdown the server when disconnecting. 
+ */ + void setShutdownServerOnDisconnect() { + this.shutdownServerOnDisconnect = true; + } + + boolean getShutdownServerOnDisconnect() { + return shutdownServerOnDisconnect; + } + + /** + * Close the connection and stop the statement if one is currently + * executing. + */ + void close() { + if (executingStatement != null) { + try { + executingStatement.cancel(); + } catch (Exception e) { + // ignore + } + } + if (conn != null) { + try { + conn.close(); + } catch (Exception e) { + // ignore + } + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/server/web/WebThread.java b/modules/h2/src/main/java/org/h2/server/web/WebThread.java new file mode 100644 index 0000000000000..1332a948ecfe3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/WebThread.java @@ -0,0 +1,355 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.server.web; + +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.Socket; +import java.net.UnknownHostException; +import java.nio.charset.StandardCharsets; +import java.util.Iterator; +import java.util.Locale; +import java.util.Properties; +import java.util.StringTokenizer; + +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.util.IOUtils; +import org.h2.util.NetUtils; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * For each connection to a session, an object of this class is created. + * This class is used by the H2 Console. 
+ */ +class WebThread extends WebApp implements Runnable { + + protected OutputStream output; + protected final Socket socket; + private final Thread thread; + private InputStream input; + private String ifModifiedSince; + + WebThread(Socket socket, WebServer server) { + super(server); + this.socket = socket; + thread = new Thread(this, "H2 Console thread"); + } + + /** + * Start the thread. + */ + void start() { + thread.start(); + } + + /** + * Wait until the thread is stopped. + * + * @param millis the maximum number of milliseconds to wait + */ + void join(int millis) throws InterruptedException { + thread.join(millis); + } + + /** + * Close the connection now. + */ + void stopNow() { + this.stop = true; + try { + socket.close(); + } catch (IOException e) { + // ignore + } + } + + private String getAllowedFile(String requestedFile) { + if (!allow()) { + return "notAllowed.jsp"; + } + if (requestedFile.length() == 0) { + return "index.do"; + } + return requestedFile; + } + + @Override + public void run() { + try { + input = new BufferedInputStream(socket.getInputStream()); + output = new BufferedOutputStream(socket.getOutputStream()); + while (!stop) { + if (!process()) { + break; + } + } + } catch (Exception e) { + DbException.traceThrowable(e); + } + IOUtils.closeSilently(output); + IOUtils.closeSilently(input); + try { + socket.close(); + } catch (IOException e) { + // ignore + } finally { + server.remove(this); + } + } + + @SuppressWarnings("unchecked") + private boolean process() throws IOException { + boolean keepAlive = false; + String head = readHeaderLine(); + if (head.startsWith("GET ") || head.startsWith("POST ")) { + int begin = head.indexOf('/'), end = head.lastIndexOf(' '); + String file; + if (begin < 0 || end < begin) { + file = ""; + } else { + file = head.substring(begin + 1, end).trim(); + } + trace(head + ": " + file); + file = getAllowedFile(file); + attributes = new Properties(); + int paramIndex = file.indexOf('?'); + session = null; + if 
(paramIndex >= 0) { + String attrib = file.substring(paramIndex + 1); + parseAttributes(attrib); + String sessionId = attributes.getProperty("jsessionid"); + file = file.substring(0, paramIndex); + session = server.getSession(sessionId); + } + keepAlive = parseHeader(); + String hostAddr = socket.getInetAddress().getHostAddress(); + file = processRequest(file, hostAddr); + if (file.length() == 0) { + // asynchronous request + return true; + } + String message; + byte[] bytes; + if (cache && ifModifiedSince != null && + ifModifiedSince.equals(server.getStartDateTime())) { + bytes = null; + message = "HTTP/1.1 304 Not Modified\r\n"; + } else { + bytes = server.getFile(file); + if (bytes == null) { + message = "HTTP/1.1 404 Not Found\r\n"; + bytes = ("File not found: " + file).getBytes(StandardCharsets.UTF_8); + message += "Content-Length: " + bytes.length + "\r\n"; + } else { + if (session != null && file.endsWith(".jsp")) { + String page = new String(bytes, StandardCharsets.UTF_8); + if (SysProperties.CONSOLE_STREAM) { + Iterator<String> it = (Iterator<String>) session.map.remove("chunks"); + if (it != null) { + message = "HTTP/1.1 200 OK\r\n"; + message += "Content-Type: " + mimeType + "\r\n"; + message += "Cache-Control: no-cache\r\n"; + message += "Transfer-Encoding: chunked\r\n"; + message += "\r\n"; + trace(message); + output.write(message.getBytes()); + while (it.hasNext()) { + String s = it.next(); + s = PageParser.parse(s, session.map); + bytes = s.getBytes(StandardCharsets.UTF_8); + if (bytes.length == 0) { + continue; + } + output.write(Integer.toHexString(bytes.length).getBytes()); + output.write("\r\n".getBytes()); + output.write(bytes); + output.write("\r\n".getBytes()); + output.flush(); + } + output.write("0\r\n\r\n".getBytes()); + output.flush(); + return keepAlive; + } + } + page = PageParser.parse(page, session.map); + bytes = page.getBytes(StandardCharsets.UTF_8); + } + message = "HTTP/1.1 200 OK\r\n"; + message += "Content-Type: " + mimeType + "\r\n"; + if 
(!cache) { + message += "Cache-Control: no-cache\r\n"; + } else { + message += "Cache-Control: max-age=10\r\n"; + message += "Last-Modified: " + server.getStartDateTime() + "\r\n"; + } + message += "Content-Length: " + bytes.length + "\r\n"; + } + } + message += "\r\n"; + trace(message); + output.write(message.getBytes()); + if (bytes != null) { + output.write(bytes); + } + output.flush(); + } + return keepAlive; + } + + private String readHeaderLine() throws IOException { + StringBuilder buff = new StringBuilder(); + while (true) { + int c = input.read(); + if (c == -1) { + throw new IOException("Unexpected EOF"); + } else if (c == '\r') { + if (input.read() == '\n') { + return buff.length() > 0 ? buff.toString() : null; + } + } else if (c == '\n') { + return buff.length() > 0 ? buff.toString() : null; + } else { + buff.append((char) c); + } + } + } + + private void parseAttributes(String s) { + trace("data=" + s); + while (s != null) { + int idx = s.indexOf('='); + if (idx >= 0) { + String property = s.substring(0, idx); + s = s.substring(idx + 1); + idx = s.indexOf('&'); + String value; + if (idx >= 0) { + value = s.substring(0, idx); + s = s.substring(idx + 1); + } else { + value = s; + } + String attr = StringUtils.urlDecode(value); + attributes.put(property, attr); + } else { + break; + } + } + trace(attributes.toString()); + } + + private boolean parseHeader() throws IOException { + boolean keepAlive = false; + trace("parseHeader"); + int len = 0; + ifModifiedSince = null; + boolean multipart = false; + while (true) { + String line = readHeaderLine(); + if (line == null) { + break; + } + trace(" " + line); + String lower = StringUtils.toLowerEnglish(line); + if (lower.startsWith("if-modified-since")) { + ifModifiedSince = getHeaderLineValue(line); + } else if (lower.startsWith("connection")) { + String conn = getHeaderLineValue(line); + if ("keep-alive".equals(conn)) { + keepAlive = true; + } + } else if (lower.startsWith("content-type")) { + String type = 
getHeaderLineValue(line); + if (type.startsWith("multipart/form-data")) { + multipart = true; + } + } else if (lower.startsWith("content-length")) { + len = Integer.parseInt(getHeaderLineValue(line)); + trace("len=" + len); + } else if (lower.startsWith("user-agent")) { + boolean isWebKit = lower.contains("webkit/"); + if (isWebKit && session != null) { + // workaround for what seems to be a WebKit bug: + // http://code.google.com/p/chromium/issues/detail?id=6402 + session.put("frame-border", "1"); + session.put("frameset-border", "2"); + } + } else if (lower.startsWith("accept-language")) { + Locale locale = session == null ? null : session.locale; + if (locale == null) { + String languages = getHeaderLineValue(line); + StringTokenizer tokenizer = new StringTokenizer(languages, ",;"); + while (tokenizer.hasMoreTokens()) { + String token = tokenizer.nextToken(); + if (!token.startsWith("q=")) { + if (server.supportsLanguage(token)) { + int dash = token.indexOf('-'); + if (dash >= 0) { + String language = token.substring(0, dash); + String country = token.substring(dash + 1); + locale = new Locale(language, country); + } else { + locale = new Locale(token, ""); + } + headerLanguage = locale.getLanguage(); + if (session != null) { + session.locale = locale; + session.put("language", headerLanguage); + server.readTranslations(session, headerLanguage); + } + break; + } + } + } + } + } else if (line.trim().length() == 0) { + break; + } + } + if (multipart) { + // not supported + } else if (session != null && len > 0) { + byte[] bytes = Utils.newBytes(len); + for (int pos = 0; pos < len;) { + pos += input.read(bytes, pos, len - pos); + } + String s = new String(bytes); + parseAttributes(s); + } + return keepAlive; + } + + private static String getHeaderLineValue(String line) { + return line.substring(line.indexOf(':') + 1).trim(); + } + + @Override + protected String adminShutdown() { + stopNow(); + return super.adminShutdown(); + } + + private boolean allow() { + if 
(server.getAllowOthers()) { + return true; + } + try { + return NetUtils.isLocalAddress(socket); + } catch (UnknownHostException e) { + server.traceError(e); + return false; + } + } + + private void trace(String s) { + server.trace(s); + } +} diff --git a/modules/h2/src/main/java/org/h2/server/web/package.html b/modules/h2/src/main/java/org/h2/server/web/package.html new file mode 100644 index 0000000000000..07c1c2e6b5c28 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +The H2 Console tool. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_cs.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_cs.prop new file mode 100644 index 0000000000000..2126edefaca36 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_cs.prop @@ -0,0 +1,163 @@ +.translator=Hannibal (http://hannibal.cestiny.cz/) +a.help=Nápověda +a.language=Čeština +a.lynxNotSupported=Bohužel, Lynx není zatím podporován +a.password=Heslo +a.remoteConnectionsDisabled=Bohužel, vzdálené připojení ('webAllowOthers') je na tomto serveru zakázáno. +a.title=H2 konzole +a.tools=Nástroje +a.user=Uživatelské jméno +admin.executing=Probíhá zpracování +admin.ip=IP +admin.lastAccess=Poslední přístup +admin.lastQuery=Poslední dotaz +admin.no=ne +admin.notConnected=nepřipojeno +admin.url=URL +admin.yes=ano +adminAllow=Povolení klienti +adminConnection=Zabezpečení připojení +adminHttp=Použít nešifrované HTTP připojení +adminHttps=Použít šifrované SSL (HTTPS) připojení +adminLocal=Povolit pouze lokální připojení +adminLogin=Správa přístupu +adminLoginCancel=Zrušit +adminLoginOk=OK +adminLogout=Odhlásit +adminOthers=Povolit připojení z jiných počítačů +adminPort=Číslo portu +adminPortWeb=Číslo portu webového serveru +adminRestart=Změny se projeví po restartu serveru. +adminSave=Uložit +adminSessions=Aktivní přístupy +adminShutdown=Zastavit +adminTitle=H2 konzole - Předvolby +adminTranslateHelp=Přeložit nebo zlepšit překlad H2 konzole. +adminTranslateStart=Přeložit +helpAction=Akce +helpAddAnotherRow=Přidat další řádek +helpAddDrivers=Přidání ovladačů k databázi +helpAddDriversText=Dodatečné ovladače k databázi lze zaregistrovat přidáním cesty k Jar souboru s ovladačem do proměnné prostředí H2DRIVERS nebo CLASSPATH. Příklad (Windows): k přidání knihovny s ovladačem k databázi C:/Programs/hsqldb/lib/hsqldb.jar, nastavíme hodnotu proměnné prostředí H2DRIVERS na C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Přidat nový řádek +helpCommandHistory=Zobrazí historii příkazů +helpCreateTable=Vytvořit novou tabulku +helpDeleteRow=Smazat řádek +helpDisconnect=Odpojení od databáze +helpDisplayThis=Zobrazí tuto stránku s nápovědou +helpDropTable=Smazat tabulku, pokud existuje +helpExecuteCurrent=Provede daný SQL příkaz +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Ikona +helpImportantCommands=Důležité příkazy +helpOperations=Operace +helpQuery=Dotaz na tabulku +helpSampleSQL=Ukázka SQL skriptu +helpStatements=SQL příkazy +helpUpdate=Změnit hodnotu v řádku +helpWithColumnsIdName=obsahující sloupce ID a NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Připojit +login.driverClass=Třída ovladače +login.driverNotFound=Ovladač k databázi nebyl nalezen
<br />
    Postup pro přidání ovladače naleznete v sekci Nápověda +login.goAdmin=Předvolby +login.jdbcUrl=JDBC URL +login.language=Jazyk +login.login=Přihlášení +login.remove=Odstranit +login.save=Uložit +login.savedSetting=Uložená nastavení +login.settingName=Název nastavení +login.testConnection=Vyzkoušet připojení +login.testSuccessful=Připojení úspěšné +login.welcome=H2 konzole +result.1row=1 řádek +result.autoCommitOff=Automatické vkládání je nyní VYPNUTÉ +result.autoCommitOn=Automatické vkládání je nyní ZAPNUTÉ +result.bytes=bytů +result.characters=znaků +result.maxrowsSet=Nastaven maximální počet řádků +result.noRows=žádné řádky +result.noRunningStatement=V tuto chvíli se neprovádí žádný příkaz +result.rows=řádků +result.statementWasCanceled=Příkaz byl zrušen +result.updateCount=Počet změn +resultEdit.action=Akce +resultEdit.add=Přidat +resultEdit.cancel=Zrušit +resultEdit.delete=Smazat +resultEdit.edit=Upravit +resultEdit.editResult=Upravit +resultEdit.save=Uložit +toolbar.all=Vše +toolbar.autoCommit=Automatické vkládání +toolbar.autoComplete=Automatické dokončování +toolbar.autoComplete.full=Úplné +toolbar.autoComplete.normal=Normální +toolbar.autoComplete.off=Vypnuto +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Vypnuto +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Zrušit prováděný příkaz +toolbar.clear=Vyčistit +toolbar.commit=Vložit +toolbar.disconnect=Odpojit +toolbar.history=Historie příkazů +toolbar.maxRows=Maximum řádků +toolbar.refresh=Aktualizovat +toolbar.rollback=Vrátit změny +toolbar.run=Provést příkaz +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=SQL příkaz +tools.backup=Zálohovat +tools.backup.help=Provede zálohu databáze. +tools.changeFileEncryption=Změnit šifrování souboru +tools.changeFileEncryption.help=Umožňuje změnu šifrovacího hesla a algoritmu souboru databáze. 
+tools.cipher=Šifrování (AES nebo XTEA) +tools.commandLine=Příkazová řádka +tools.convertTraceFile=Převést transakční soubor +tools.convertTraceFile.help=Převede soubor .trace.db na Java aplikaci a SQL skript. +tools.createCluster=Vytvořit cluster +tools.createCluster.help=Vytvoří cluster ze samostatné databáze. +tools.databaseName=Název databáze +tools.decryptionPassword=Heslo k dešifrování +tools.deleteDbFiles=Smazat databázové soubory +tools.deleteDbFiles.help=Vymaže všechny soubory, které patří databázi. +tools.directory=Adresář +tools.encryptionPassword=Heslo k šifrování +tools.javaDirectoryClassName=Java adresář a název třídy +tools.recover=Zotavení +tools.recover.help=Pomůže zotavit poškozenou databázi. +tools.restore=Obnovit +tools.restore.help=Obnoví zálohu databáze. +tools.result=Výsledek +tools.run=Spustit +tools.runScript=Spustit skript +tools.runScript.help=Spustí SQL skript. +tools.script=Skript +tools.script.help=Umožní převést databázi na SQL skript pro účely zálohy či přesunu. 
+tools.scriptFileName=Název souboru skriptu +tools.serverList=Seznam serverů +tools.sourceDatabaseName=Název zdrojové databáze +tools.sourceDatabaseURL=URL zdrojové databáze +tools.sourceDirectory=Zdrojový adresář +tools.sourceFileName=Název zdrojového souboru +tools.sourceScriptFileName=Název zdrojového souboru skriptu +tools.targetDatabaseName=Název cílové databáze +tools.targetDatabaseURL=URL cílové databáze +tools.targetDirectory=Cílový adresář +tools.targetFileName=Název cílového souboru +tools.targetScriptFileName=Název cílového souboru skriptu +tools.traceFileName=Název transakčního souboru +tree.admin=Admin +tree.current=Současná hodnota +tree.hashed=Hešováno +tree.increment=Přírůstek +tree.indexes=Indexy +tree.nonUnique=Neunikátní +tree.sequences=Sekvence +tree.unique=Unikátní +tree.users=Uživatelé diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_de.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_de.prop new file mode 100644 index 0000000000000..53cfa6f07e63f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_de.prop @@ -0,0 +1,163 @@ +.translator=Thomas Mueller +a.help=Hilfe +a.language=Deutsch +a.lynxNotSupported=Dieser Browser unterstützt keine Frames. Frames (und Javascript) werden benötigt. +a.password=Passwort +a.remoteConnectionsDisabled=Verbindungen von anderen Rechnern sind nicht freigegeben ('webAllowOthers'). 
+a.title=H2 Console +a.tools=Tools +a.user=Benutzername +admin.executing=Aktive Ausführung +admin.ip=IP +admin.lastAccess=Letzter Zugriff +admin.lastQuery=Letzter Befehl +admin.no=Nein +admin.notConnected=nicht verbunden +admin.url=URL +admin.yes=Ja +adminAllow=Zugelassene Verbindungen +adminConnection=Verbindungssicherheit +adminHttp=Unverschlüsselte HTTP Verbindungen verwenden +adminHttps=Verschlüsselte HTTPS Verbindungen verwenden +adminLocal=Nur lokale Verbindungen erlauben +adminLogin=Administration Login +adminLoginCancel=Abbrechen +adminLoginOk=OK +adminLogout=Beenden +adminOthers=Verbindungen von anderen Computern erlauben +adminPort=Admin Port +adminPortWeb=Web-Server Port +adminRestart=Änderungen werden nach einem Neustart des Servers aktiv. +adminSave=Speichern +adminSessions=Aktive Verbindungen +adminShutdown=Herunterfahren +adminTitle=H2 Console Optionen +adminTranslateHelp=Die H2 Console übersetzen oder die Übersetzung verbessern. +adminTranslateStart=Übersetzen +helpAction=Aktion +helpAddAnotherRow=Fügt einen weiteren Datensatz hinzu +helpAddDrivers=Datenbank Treiber hinzufügen +helpAddDriversText=Es ist möglich, zusätzliche Datenbank-Treiber zu laden, indem die Pfade der Treiber-Dateien in den Umgebungsvariablen H2DRIVERS oder CLASSPATH eingetragen werden. Beispiel (Windows): Um den Datenbank-Treiber mit dem Jar-File C:/Programs/hsqldb/lib/hsqldb.jar hinzuzufügen, setzen Sie die Umgebungvariable H2DRIVERS auf C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Fügt einen Datensatz hinzu +helpCommandHistory=Zeigt die Befehls-Chronik +helpCreateTable=Erzeugt eine neue Tabelle +helpDeleteRow=Entfernt einen Datensatz +helpDisconnect=Trennt die Verbindung zur Datenbank +helpDisplayThis=Zeigt diese Hilfe Seite +helpDropTable=Löscht die Tabelle falls es sie gibt +helpExecuteCurrent=Führt den aktuellen SQL Befehl aus +helpExecuteSelected=Führt den SQL Befehl des ausgwählten Textes aus +helpIcon=Schaltfläche +helpImportantCommands=Wichtige Befehle +helpOperations=Operationen +helpQuery=Fragt die Tabelle ab +helpSampleSQL=Beispiel SQL Skript +helpStatements=SQL Befehle +helpUpdate=Ändert Daten in einer Zeile +helpWithColumnsIdName=mit zwei Spalten +key.alt=Alt +key.ctrl=Strg +key.enter=Enter +key.shift=Umsch +key.space=Leer +login.connect=Verbinden +login.driverClass=Datenbank-Treiber Klasse +login.driverNotFound=Datenbank-Treiber nicht gefunden
<br />
    Für Informationen zum Hinzufügen von Treibern siehe Hilfe +login.goAdmin=Optionen +login.jdbcUrl=JDBC URL +login.language=Sprache +login.login=Login +login.remove=Entfernen +login.save=Speichern +login.savedSetting=Gespeicherte Einstellung +login.settingName=Einstellungs-Name +login.testConnection=Verbindung testen +login.testSuccessful=Test erfolgreich +login.welcome=H2 Console +result.1row=1 Datensatz +result.autoCommitOff=Auto-Commit ist jetzt ausgeschaltet +result.autoCommitOn=Auto-Commit ist jetzt eingeschaltet +result.bytes=Bytes +result.characters=Characters +result.maxrowsSet=Maximale Anzahl Zeilen ist jetzt gesetzt +result.noRows=keine Datensätze +result.noRunningStatement=Im Moment wird kein Befehl ausgeführt +result.rows=Datensätze +result.statementWasCanceled=Der Befehl wurde abgebrochen +result.updateCount=Änderungen +resultEdit.action=Aktion +resultEdit.add=Hinzufügen +resultEdit.cancel=Abbrechen +resultEdit.delete=Löschen +resultEdit.edit=Bearbeiten +resultEdit.editResult=Bearbeiten +resultEdit.save=Speichern +toolbar.all=Alle +toolbar.autoCommit=Auto-Commit +toolbar.autoComplete=Auto-Complete +toolbar.autoComplete.full=Alles +toolbar.autoComplete.normal=Normal +toolbar.autoComplete.off=Aus +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Aus +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Laufenden Befehl abbrechen +toolbar.clear=Leeren +toolbar.commit=Commit (Abschliessen/Speichern) +toolbar.disconnect=Verbindung trennen +toolbar.history=Befehls-Chronik +toolbar.maxRows=Maximale Anzahl Zeilen +toolbar.refresh=Aktualisieren +toolbar.rollback=Rollback (Rückgängig) +toolbar.run=Ausführen +toolbar.runSelected=Ausgewähltes Ausführen +toolbar.sqlStatement=SQL Befehl +tools.backup=Backup +tools.backup.help=Erzeugt eine Sicherheitskopie einer Datenbank. +tools.changeFileEncryption=ChangeFileEncryption +tools.changeFileEncryption.help=Erlaubt, Datei Verschlüsselungs-Passwort und -Algorithmus einer Datenbank zu ändern. 
+tools.cipher=Verschlüsselung (AES oder XTEA) +tools.commandLine=Kommandozeile +tools.convertTraceFile=ConvertTraceFile +tools.convertTraceFile.help=Konvertiert eine .trace.db Datei in eine Java Applikation und ein SQL Skript. +tools.createCluster=CreateCluster +tools.createCluster.help=Generiert ein Cluster aus einer autonomen Datenbank. +tools.databaseName=Datenbankname +tools.decryptionPassword=Entschlüsselungs-Passwort +tools.deleteDbFiles=DeleteDbFiles +tools.deleteDbFiles.help=Löscht alle Dateien die zu einer Datenbank gehören. +tools.directory=Verzeichnis +tools.encryptionPassword=Verschlüsselungs-Passwort +tools.javaDirectoryClassName=Java Verzeichnis- und Klassen-Name +tools.recover=Recover +tools.recover.help=Hilft bei der Reparatur einer beschädigten Datenbank. +tools.restore=Restore +tools.restore.help=Stellt eine Datenbank aus einem Backup her. +tools.result=Ergebnis +tools.run=Start +tools.runScript=RunScript +tools.runScript.help=Führt ein SQL Skript aus. +tools.script=Script +tools.script.help=Generiert ein SQL Skript einer Datenbank für Backup- und Migrationszwecke. 
+tools.scriptFileName=Skript Dateiname +tools.serverList=Server Liste +tools.sourceDatabaseName=Quell-Datenbankname +tools.sourceDatabaseURL=Quell-Datenbank URL +tools.sourceDirectory=Quell-Verzeichnis +tools.sourceFileName=Quell-Dateiname +tools.sourceScriptFileName=Dateiname des Skripts (Quelle) +tools.targetDatabaseName=Ziel-Datenbankname +tools.targetDatabaseURL=Ziel-Datenbank URL +tools.targetDirectory=Ziel-Verzeichnis +tools.targetFileName=Ziel-Dateiname +tools.targetScriptFileName=Dateiname des Skripts (Ziel) +tools.traceFileName=Name der Trace Datei +tree.admin=Administrator +tree.current=Aktueller Wert +tree.hashed=Hash-basiert +tree.increment=Inkrement +tree.indexes=Indexe +tree.nonUnique=nicht eindeutig +tree.sequences=Sequenzen +tree.unique=eindeutig +tree.users=Benutzer diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_en.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_en.prop new file mode 100644 index 0000000000000..fca703676f176 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_en.prop @@ -0,0 +1,163 @@ +.translator=Thomas Mueller +a.help=Help +a.language=English +a.lynxNotSupported=Sorry, Lynx not supported yet +a.password=Password +a.remoteConnectionsDisabled=Sorry, remote connections ('webAllowOthers') are disabled on this server. 
+a.title=H2 Console +a.tools=Tools +a.user=User Name +admin.executing=Executing +admin.ip=IP +admin.lastAccess=Last Access +admin.lastQuery=Last Query +admin.no=no +admin.notConnected=not connected +admin.url=URL +admin.yes=yes +adminAllow=Allowed clients +adminConnection=Connection security +adminHttp=Use unencrypted HTTP connections +adminHttps=Use encrypted SSL (HTTPS) connections +adminLocal=Only allow local connections +adminLogin=Administration Login +adminLoginCancel=Cancel +adminLoginOk=OK +adminLogout=Logout +adminOthers=Allow connections from other computers +adminPort=Port number +adminPortWeb=Web server port number +adminRestart=Changes take effect after restarting the server. +adminSave=Save +adminSessions=Active Sessions +adminShutdown=Shutdown +adminTitle=H2 Console Preferences +adminTranslateHelp=Translate or improve the translation of the H2 Console. +adminTranslateStart=Translate +helpAction=Action +helpAddAnotherRow=Add another row +helpAddDrivers=Adding Database Drivers +helpAddDriversText=Additional database drivers can be registered by adding the Jar file location of the driver to the the environment variables H2DRIVERS or CLASSPATH. Example (Windows): to add the database driver library C:/Programs/hsqldb/lib/hsqldb.jar, set the environment variable H2DRIVERS to C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Add a new row +helpCommandHistory=Shows the Command History +helpCreateTable=Create a new table +helpDeleteRow=Remove a row +helpDisconnect=Disconnects from the database +helpDisplayThis=Displays this Help Page +helpDropTable=Delete the table if it exists +helpExecuteCurrent=Executes the current SQL statement +helpExecuteSelected=Executes the SQL statement defined by the text selection +helpIcon=Icon +helpImportantCommands=Important Commands +helpOperations=Operations +helpQuery=Query the table +helpSampleSQL=Sample SQL Script +helpStatements=SQL statements +helpUpdate=Change data in a row +helpWithColumnsIdName=with ID and NAME columns +key.alt=Alt +key.ctrl=Ctrl +key.enter=Enter +key.shift=Shift +key.space=Space +login.connect=Connect +login.driverClass=Driver Class +login.driverNotFound=Database driver not found
<br />
    See in the Help for how to add drivers +login.goAdmin=Preferences +login.jdbcUrl=JDBC URL +login.language=Language +login.login=Login +login.remove=Remove +login.save=Save +login.savedSetting=Saved Settings +login.settingName=Setting Name +login.testConnection=Test Connection +login.testSuccessful=Test successful +login.welcome=H2 Console +result.1row=1 row +result.autoCommitOff=Auto commit is now OFF +result.autoCommitOn=Auto commit is now ON +result.bytes=bytes +result.characters=characters +result.maxrowsSet=Max rowcount is set +result.noRows=no rows +result.noRunningStatement=There is currently no running statement +result.rows=rows +result.statementWasCanceled=The statement was canceled +result.updateCount=Update count +resultEdit.action=Action +resultEdit.add=Add +resultEdit.cancel=Cancel +resultEdit.delete=Delete +resultEdit.edit=Edit +resultEdit.editResult=Edit +resultEdit.save=Save +toolbar.all=All +toolbar.autoCommit=Auto commit +toolbar.autoComplete=Auto complete +toolbar.autoComplete.full=Full +toolbar.autoComplete.normal=Normal +toolbar.autoComplete.off=Off +toolbar.autoSelect=Auto select +toolbar.autoSelect.off=Off +toolbar.autoSelect.on=On +toolbar.cancelStatement=Cancel the current statement +toolbar.clear=Clear +toolbar.commit=Commit +toolbar.disconnect=Disconnect +toolbar.history=Command history +toolbar.maxRows=Max rows +toolbar.refresh=Refresh +toolbar.rollback=Rollback +toolbar.run=Run +toolbar.runSelected=Run Selected +toolbar.sqlStatement=SQL statement +tools.backup=Backup +tools.backup.help=Creates a backup of a database. +tools.changeFileEncryption=ChangeFileEncryption +tools.changeFileEncryption.help=Allows changing the database file encryption password and algorithm. +tools.cipher=Cipher (AES or XTEA) +tools.commandLine=Command line +tools.convertTraceFile=ConvertTraceFile +tools.convertTraceFile.help=Converts a .trace.db file to a Java application and SQL script. 
+tools.createCluster=CreateCluster +tools.createCluster.help=Creates a cluster from a standalone database. +tools.databaseName=Database name +tools.decryptionPassword=Decryption password +tools.deleteDbFiles=DeleteDbFiles +tools.deleteDbFiles.help=Deletes all files belonging to a database. +tools.directory=Directory +tools.encryptionPassword=Encryption password +tools.javaDirectoryClassName=Java directory and class name +tools.recover=Recover +tools.recover.help=Helps recovering a corrupted database. +tools.restore=Restore +tools.restore.help=Restores a database backup. +tools.result=Result +tools.run=Run +tools.runScript=RunScript +tools.runScript.help=Runs a SQL script. +tools.script=Script +tools.script.help=Allows to convert a database to a SQL script for backup or migration. +tools.scriptFileName=Script file name +tools.serverList=Server list +tools.sourceDatabaseName=Source database name +tools.sourceDatabaseURL=Source database URL +tools.sourceDirectory=Source directory +tools.sourceFileName=Source file name +tools.sourceScriptFileName=Source script file name +tools.targetDatabaseName=Target database name +tools.targetDatabaseURL=Target database URL +tools.targetDirectory=Target directory +tools.targetFileName=Target file name +tools.targetScriptFileName=Target script file name +tools.traceFileName=Trace file name +tree.admin=Admin +tree.current=Current value +tree.hashed=Hashed +tree.increment=Increment +tree.indexes=Indexes +tree.nonUnique=Non unique +tree.sequences=Sequences +tree.unique=Unique +tree.users=Users diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_es.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_es.prop new file mode 100644 index 0000000000000..8f1e1c576e4df --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_es.prop @@ -0,0 +1,163 @@ +.translator=Miguel Angel +a.help=Ayuda +a.language=Español +a.lynxNotSupported=Lo sentimos, Lynx no está soportado todavía +a.password=Contraseña 
+a.remoteConnectionsDisabled=Conexiones remotas desactivadas +a.title=H2 Console +a.tools=#Tools +a.user=Nombre de usuario +admin.executing=Ejecutando +admin.ip=IP +admin.lastAccess=Último acceso +admin.lastQuery=Última consulta +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=Clientes permitidos +adminConnection=Conexión segura +adminHttp=Usar conexiones HTTP desencriptadas +adminHttps=Usar conexiones HTTPS (SSL) encriptadas +adminLocal=Permitir únicamente conexiones locales +adminLogin=Registrar administración +adminLoginCancel=Cancelar +adminLoginOk=Aceptar +adminLogout=Desconectar +adminOthers=Permitir conexiones desde otros ordenadores +adminPort=Puerto +adminPortWeb=Puerto del servidor Web +adminRestart=Los cambios tendrán efecto al reiniciar el servidor. +adminSave=Guardar +adminSessions=Sesiones activas +adminShutdown=Parar servidor +adminTitle=Preferencias de H2 Consola +adminTranslateHelp=#Translate or improve the translation of the H2 Console. +adminTranslateStart=#Translate +helpAction=Action +helpAddAnotherRow=Añadir otra fila +helpAddDrivers=Añadiendo drivers de base de datos +helpAddDriversText=Se pueden registrar otros drivers añadiendo el archivo Jar del driver a la variable de entorno H2DRIVERS o CLASSPATH. Por ejemplo (Windows): Para añadir la librería del driver de base de datos C:/Programs/hsqldb/lib/hsqldb.jar, hay que establecer la variable de entorno H2DRIVERS a C:/Programs/hsqldb/lib/hsqldb.jar. +helpAddRow=Añadir una fila nueva +helpCommandHistory=Ver histórico de comandos +helpCreateTable=Crear una tabla nueva +helpDeleteRow=Borrar una fila +helpDisconnect=Desconectar de la base de datos. +helpDisplayThis=Visualizar esta página de ayuda. 
+helpDropTable=Borrar la tabla si existe +helpExecuteCurrent=Ejecuta la actual sentencia SQL +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Icon +helpImportantCommands=Comandos importantes +helpOperations=Operaciones +helpQuery=Consulta la tabla +helpSampleSQL=Ejemplo SQL Script +helpStatements=Sentencias SQL +helpUpdate=Modificar datos en una fila +helpWithColumnsIdName=con las columnas ID y NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Conectar +login.driverClass=Controlador +login.driverNotFound=Controlador no encontrado +login.goAdmin=Preferencias +login.jdbcUrl=URL JDBC +login.language=Idioma +login.login=Registrar +login.remove=Eliminar +login.save=Guardar +login.savedSetting=Configuraciones guardadas +login.settingName=Nombre de la configuración +login.testConnection=Probar la conexión +login.testSuccessful=Prueba correcta +login.welcome=H2 Consola +result.1row=1 fila +result.autoCommitOff=El auto commit no está activo +result.autoCommitOn=El auto commit está activo +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=Número máximo de filas modificado +result.noRows=No se han recuperado filas +result.noRunningStatement=No hay una instrucción ejecutándose +result.rows=filas +result.statementWasCanceled=La instrucción fue cancelada +result.updateCount=Modificaciones +resultEdit.action=Action +resultEdit.add=Añadir +resultEdit.cancel=Cancelar +resultEdit.delete=Eliminar +resultEdit.edit=Editar +resultEdit.editResult=Editar +resultEdit.save=Guardar +toolbar.all=Todos +toolbar.autoCommit=Auto commit +toolbar.autoComplete=Auto completado +toolbar.autoComplete.full=Completo +toolbar.autoComplete.normal=Normal +toolbar.autoComplete.off=Desactivado +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Desactivado +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Cancelar la instrucción actual +toolbar.clear=Eliminar +toolbar.commit=Commit 
+toolbar.disconnect=Desconectar +toolbar.history=Histórico comandos +toolbar.maxRows=Número máximo de filas +toolbar.refresh=Actualizar +toolbar.rollback=Rollback +toolbar.run=Ejecutar +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=Instrucción SQL +tools.backup=#Backup +tools.backup.help=#Creates a backup of a database. +tools.changeFileEncryption=#ChangeFileEncryption +tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm. +tools.cipher=#Cipher (AES or XTEA) +tools.commandLine=#Command line +tools.convertTraceFile=#ConvertTraceFile +tools.convertTraceFile.help=#Converts a .trace.db file to a Java application and SQL script. +tools.createCluster=#CreateCluster +tools.createCluster.help=#Creates a cluster from a standalone database. +tools.databaseName=#Database name +tools.decryptionPassword=#Decryption password +tools.deleteDbFiles=#DeleteDbFiles +tools.deleteDbFiles.help=#Deletes all files belonging to a database. +tools.directory=#Directory +tools.encryptionPassword=#Encryption password +tools.javaDirectoryClassName=#Java directory and class name +tools.recover=#Recover +tools.recover.help=#Helps recovering a corrupted database. +tools.restore=#Restore +tools.restore.help=#Restores a database backup. +tools.result=#Result +tools.run=Ejecutar +tools.runScript=#RunScript +tools.runScript.help=#Runs a SQL script. +tools.script=#Script +tools.script.help=#Allows to convert a database to a SQL script for backup or migration. 
+tools.scriptFileName=#Script file name +tools.serverList=#Server list +tools.sourceDatabaseName=#Source database name +tools.sourceDatabaseURL=#Source database URL +tools.sourceDirectory=#Source directory +tools.sourceFileName=#Source file name +tools.sourceScriptFileName=#Source script file name +tools.targetDatabaseName=#Target database name +tools.targetDatabaseURL=#Target database URL +tools.targetDirectory=#Target directory +tools.targetFileName=#Target file name +tools.targetScriptFileName=#Target script file name +tools.traceFileName=#Trace file name +tree.admin=Administrador +tree.current=Valor actual +tree.hashed=Hash +tree.increment=Incremento +tree.indexes=Índices +tree.nonUnique=No único +tree.sequences=Secuencias +tree.unique=Único +tree.users=Usuarios diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_fr.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_fr.prop new file mode 100644 index 0000000000000..8380c479c8b24 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_fr.prop @@ -0,0 +1,163 @@ +.translator=Olivier Parent +a.help=Aide +a.language=Français +a.lynxNotSupported=Désolé, Lynx n'est pas encore supporté +a.password=Mot de passe +a.remoteConnectionsDisabled=Désolé, la gestion des connexions provenant de machines distantes est désactivée sur ce serveur ('webAllowOthers'). 
+a.title=Console H2 +a.tools=Outils +a.user=Nom d'utilisateur +admin.executing=Exécution en cours +admin.ip=Adresse IP +admin.lastAccess=Dernier accès +admin.lastQuery=Dernière requête +admin.no=Non +admin.notConnected=#not connected +admin.url=URL +admin.yes=Oui +adminAllow=Clients autorisés +adminConnection=Sécurité des connexions +adminHttp=Utiliser des connexions HTTP non sécurisées +adminHttps=Utiliser des connexions sécurisées +adminLocal=Autoriser uniquement les connexions locales +adminLogin=Login administrateur +adminLoginCancel=Annuler +adminLoginOk=OK +adminLogout=Déconnexion +adminOthers=Autoriser les connexions d'ordinateurs distants +adminPort=Numéro de port +adminPortWeb=Numéro de port du serveur Web +adminRestart=Modifications effectuées après redémarrage du serveur. +adminSave=Enregistrer +adminSessions=Sessions actives +adminShutdown=Arrêt +adminTitle=Console H2 de paramétrage des options +adminTranslateHelp=#Translate or improve the translation of the H2 Console. +adminTranslateStart=#Translate +helpAction=Action +helpAddAnotherRow=Ajouter un autre enregistrement +helpAddDrivers=Ajouter des pilotes de base de données +helpAddDriversText=Des pilotes additionels peuvent être configurés en déclarant l'emplacement du fichier Jar contenant ces pilotes dans les variables d'environnement H2DRIVERS ou CLASSPATH. Exemple (Windows): Pour ajouter la bibliothèque C:/Programs/hsqldb/lib/hsqldb.jar, définir la valeur de la variable d'environnement H2DRIVERS en C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Ajouter un nouvel enregistrement +helpCommandHistory=Affiche l'historique des commandes +helpCreateTable=Créer une nouvelle table +helpDeleteRow=Effacer un enregistrement +helpDisconnect=Déconnexion de la base de données +helpDisplayThis=Affiche cette page d'aide +helpDropTable=Effacer une table si elle existe +helpExecuteCurrent=Exécute la commande courante +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Icone +helpImportantCommands=Commandes principales +helpOperations=Opérations +helpQuery=Requêter une table +helpSampleSQL=Exemple de script SQL +helpStatements=Instructions SQL +helpUpdate=Modifier un enregistrement +helpWithColumnsIdName=avec les colonnes ID et NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Connecter +login.driverClass=Pilote JDBC +login.driverNotFound=Pilote non trouvé.
    Veuillez consulter dans l'aide la procédure d'ajout de pilotes. +login.goAdmin=Options +login.jdbcUrl=URL JDBC +login.language=Langue +login.login=Connexion +login.remove=Supprimer +login.save=Enregistrer +login.savedSetting=Configuration enregistrée +login.settingName=Nom de configuration +login.testConnection=Test de connexion +login.testSuccessful=Succès +login.welcome=Console H2 +result.1row=1 enregistrement +result.autoCommitOff=Validation automatique non activée +result.autoCommitOn=Validation automatique activée +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=Nombre max d'enregistrements défini +result.noRows=Aucun enregistrement +result.noRunningStatement=Pas d'instruction en cours +result.rows=enregistrements +result.statementWasCanceled=L'instruction a été annulée +result.updateCount=Nombre de modifications +resultEdit.action=Action +resultEdit.add=Ajouter +resultEdit.cancel=Annuler +resultEdit.delete=Supprimer +resultEdit.edit=Editer +resultEdit.editResult=Editer +resultEdit.save=Enregistrer +toolbar.all=Tous +toolbar.autoCommit=Validation automatique (auto commit) +toolbar.autoComplete=Complètement automatique +toolbar.autoComplete.full=Exhaustif +toolbar.autoComplete.normal=Normal +toolbar.autoComplete.off=Désactivé +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Désactivé +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Annuler l'instruction en cours +toolbar.clear=Effacer +toolbar.commit=Valider +toolbar.disconnect=Déconnecter +toolbar.history=Historique des instructions +toolbar.maxRows=Max lignes +toolbar.refresh=Rafraîchir +toolbar.rollback=Revenir en arrière +toolbar.run=Exécuter +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=Instruction SQL +tools.backup=Sauvegarde +tools.backup.help=Crée une sauvegarde de base de données. 
+tools.changeFileEncryption=Modifier le chiffrement +tools.changeFileEncryption.help=Permet la modification du mot de passe et de l'algorithme de chiffrement des fichiers de base de données. +tools.cipher=Chiffrement (AES ou XTEA) +tools.commandLine=Ligne de commande +tools.convertTraceFile=Convertir le fichier trace +tools.convertTraceFile.help=Convertit un fichier .trace.db en une application Java ou un script SQL. +tools.createCluster=Créer une grappe de serveurs +tools.createCluster.help=Crée une grappe de serveurs depuis une base de données autonome. +tools.databaseName=Nom de la base de données +tools.decryptionPassword=Mot de passe de déchiffrement +tools.deleteDbFiles=Supprimer les fichiers +tools.deleteDbFiles.help=Supprime tous les fichiers appartenant à une base de données +tools.directory=Répertoire +tools.encryptionPassword=Mot de passe de chiffrement +tools.javaDirectoryClassName=Répertoire Java et nom de classe +tools.recover=Réparation +tools.recover.help=Aide la réparation d'une base de données corrompue +tools.restore=Restauration +tools.restore.help=Restaure une sauvegarde de base de données. +tools.result=Résultat +tools.run=Exécuter +tools.runScript=Exécuter le script +tools.runScript.help=Exécute un script SQL. +tools.script=Script +tools.script.help=Convertit une base de données en script SQL pour une sauvegarde ou une migration. 
+tools.scriptFileName=Nom du fichier de script +tools.serverList=Liste des serveurs +tools.sourceDatabaseName=Base de données source +tools.sourceDatabaseURL=URL base de données source +tools.sourceDirectory=Répertoire source +tools.sourceFileName=Nom de fichier source +tools.sourceScriptFileName=Nom du fichier script source +tools.targetDatabaseName=Base de données cible +tools.targetDatabaseURL=URL base de données cible +tools.targetDirectory=Répertoire cible +tools.targetFileName=Nom de fichier cible +tools.targetScriptFileName=Nom de fichier script cible +tools.traceFileName=Nom du fichier de trace +tree.admin=Administrateur +tree.current=Valeur actuelle +tree.hashed=Haché (hashed) +tree.increment=Incrément +tree.indexes=Index +tree.nonUnique=Non unique +tree.sequences=Séquences +tree.unique=Unique +tree.users=Utilisateurs diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_hu.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_hu.prop new file mode 100644 index 0000000000000..56aeddfbcc98c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_hu.prop @@ -0,0 +1,163 @@ +.translator=Andras Hideg +a.help=Súgó +a.language=Magyar +a.lynxNotSupported=A Lynx böngésző egyelőre nem támogatott +a.password=Jelszó +a.remoteConnectionsDisabled=Ezen a kiszolgálón távoli kapcsolatok ('webAllowOthers') nem engedélyezettek. 
+a.title=H2 konzol +a.tools=#Tools +a.user=Felhasználónév +admin.executing=Utasítás végrehajtása +admin.ip=IP +admin.lastAccess=Legutóbbi hozzáférés +admin.lastQuery=Legutóbbi lekérdezés +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=Engedélyezett ügyfelek +adminConnection=Kapcsolatbiztonság +adminHttp=Titkosítatlan HTTP kapcsolatok használata +adminHttps=Titkosított SSL (HTTPS) kapcsolatok használata +adminLocal=Csak helyi kapcsolatok engedélyezése +adminLogin=Adminisztrációs bejelentkezés +adminLoginCancel=Mégse +adminLoginOk=OK +adminLogout=Kilépés +adminOthers=Más számítógépekről kezdeményezett kapcsolatok engedélyezése +adminPort=#Port number +adminPortWeb=Webkiszolgáló portszáma +adminRestart=A változtatások a kiszolgáló újraindítása után lépnek érvénybe +adminSave=Mentés +adminSessions=Aktív munkamenetek +adminShutdown=Leállítás +adminTitle=H2 Konzol tulajdonságai +adminTranslateHelp=#Translate or improve the translation of the H2 Console. +adminTranslateStart=#Translate +helpAction=Parancs +helpAddAnotherRow=Rekord hozzáadása +helpAddDrivers=Adatbázis-illesztőprogramok hozzáadása +helpAddDriversText=További adatbázis-illesztőprogramok regisztrálásakor a H2DRIVERS vagy CLASSPATH környezeti változókhoz kell adni a .jar illesztőprogram-fájlok elérési útvonalait. 
Például (Windows esetén) a C:/Programs/hsqldb/lib/hsqldb.jar illesztőprogram regisztrálásához a H2DRIVERS környezeti változónak az alábbi értékét kell megadni: C:/Programs/hsqldb/lib/hsqldb.jar +helpAddRow=Rekord hozzáadása +helpCommandHistory=Korábbi utasítások megjelenítése +helpCreateTable=Új tábla létrehozása +helpDeleteRow=rekord törlése +helpDisconnect=Kapcsolat megszakítása az adatbázissal +helpDisplayThis=Súgó megjelenítése +helpDropTable=Tábla törlése, ha az létezik +helpExecuteCurrent=Aktuális SQL utasítás végrehajtása +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Ikon +helpImportantCommands=Fontos utasítások +helpOperations=Műveletek +helpQuery=Tábla lekérdezése +helpSampleSQL=Minta SQL szkript +helpStatements=SQL utasítások +helpUpdate=Adatok módosítása egy rekordon belül +helpWithColumnsIdName=ID és NAME oszlopokkal +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Csatlakozás +login.driverClass=Illesztőprogram osztály +login.driverNotFound=Adatbázis-illesztőprogram nem található
    Illesztőprogramok hozzáadásáról a Súgó ad felvilágosítást. +login.goAdmin=Tulajdonságok +login.jdbcUrl=JDBC URL +login.language=Nyelv +login.login=Belépés +login.remove=Eltávolítás +login.save=Mentés +login.savedSetting=Mentett beállítások +login.settingName=Beállítás neve +login.testConnection=Kapcsolat tesztelése +login.testSuccessful=Kapcsolat tesztelése sikeres volt +login.welcome=H2 konzol +result.1row=1 rekord +result.autoCommitOff=Automatikus jóváhagyás kikapcsolva +result.autoCommitOn=Automatikus jóváhagyás bekapcsolva +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=Rekordok maximális száma beállítva +result.noRows=nincs rekord +result.noRunningStatement=Utasítás jelenleg nincs folyamatban +result.rows=rekordok +result.statementWasCanceled=Az utasítás végrehajtása megszakadt +result.updateCount=Frissítés +resultEdit.action=Parancs +resultEdit.add=Hozzáadás +resultEdit.cancel=Mégse +resultEdit.delete=Törlés +resultEdit.edit=Szerkesztés +resultEdit.editResult=Szerkesztés +resultEdit.save=Mentés +toolbar.all=Mind +toolbar.autoCommit=Automatikus jóváhagyás +toolbar.autoComplete=Automatikus kiegészítés +toolbar.autoComplete.full=Teljes +toolbar.autoComplete.normal=Normál +toolbar.autoComplete.off=Kikapcsolva +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Kikapcsolva +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Aktuális utasítás végrehajtásának megszakítása +toolbar.clear=Törlés +toolbar.commit=Jóváhagyás +toolbar.disconnect=Kapcsolat bontása +toolbar.history=Korábbi utasítások +toolbar.maxRows=Rekordok maximális száma +toolbar.refresh=Frissítés +toolbar.rollback=Visszagörgetés +toolbar.run=Végrehajtás +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=SQL utasítás +tools.backup=#Backup +tools.backup.help=#Creates a backup of a database. +tools.changeFileEncryption=#ChangeFileEncryption +tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm. 
+tools.cipher=#Cipher (AES or XTEA) +tools.commandLine=#Command line +tools.convertTraceFile=#ConvertTraceFile +tools.convertTraceFile.help=#Converts a .trace.db file to a Java application and SQL script. +tools.createCluster=#CreateCluster +tools.createCluster.help=#Creates a cluster from a standalone database. +tools.databaseName=#Database name +tools.decryptionPassword=#Decryption password +tools.deleteDbFiles=#DeleteDbFiles +tools.deleteDbFiles.help=#Deletes all files belonging to a database. +tools.directory=#Directory +tools.encryptionPassword=#Encryption password +tools.javaDirectoryClassName=#Java directory and class name +tools.recover=#Recover +tools.recover.help=#Helps recovering a corrupted database. +tools.restore=#Restore +tools.restore.help=#Restores a database backup. +tools.result=#Result +tools.run=Végrehajtás +tools.runScript=#RunScript +tools.runScript.help=#Runs a SQL script. +tools.script=#Script +tools.script.help=#Allows to convert a database to a SQL script for backup or migration. 
+tools.scriptFileName=#Script file name +tools.serverList=#Server list +tools.sourceDatabaseName=#Source database name +tools.sourceDatabaseURL=#Source database URL +tools.sourceDirectory=#Source directory +tools.sourceFileName=#Source file name +tools.sourceScriptFileName=#Source script file name +tools.targetDatabaseName=#Target database name +tools.targetDatabaseURL=#Target database URL +tools.targetDirectory=#Target directory +tools.targetFileName=#Target file name +tools.targetScriptFileName=#Target script file name +tools.traceFileName=#Trace file name +tree.admin=Adminisztrátor +tree.current=Aktuális érték +tree.hashed=Hash-értékkel ellátott +tree.increment=Inkrementált +tree.indexes=Indexek +tree.nonUnique=Nem egyedi +tree.sequences=Szekvencia +tree.unique=Egyedi +tree.users=Felhasználók diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_in.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_in.prop new file mode 100644 index 0000000000000..8a569cb42ebc1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_in.prop @@ -0,0 +1,163 @@ +.translator=Joko Yuliantoro +a.help=Bantuan +a.language=Indonesia +a.lynxNotSupported=Maaf, Lynx belum didukung. Gunakan browser yang mendukung Javascript (dan Frame). +a.password=Kata kunci +a.remoteConnectionsDisabled=Maaf, fungsi koneksi jarak jauh ('webAllowOthers') dimatikan pada server ini. 
+a.title=Konsol H2 +a.tools=#Tools +a.user=Nama pengguna +admin.executing=Sedang eksekusi +admin.ip=IP +admin.lastAccess=Akses Terakhir +admin.lastQuery=Kueri Terakhir +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=Klien terijin +adminConnection=Keamanan koneksi +adminHttp=Gunakan koneksi HTTP tidak terenkripsi +adminHttps=Gunakan koneksi SSL terenkripsi (HTTPS) +adminLocal=Hanya ijinkan koneksi lokal +adminLogin=Login Pengelola +adminLoginCancel=Batal +adminLoginOk=OK +adminLogout=Keluar +adminOthers=Ijinkan koneksi dari komputer lain +adminPort=Nomor port +adminPortWeb=Nomor port web server +adminRestart=Perubahan akan efektif setelah server di-restart. +adminSave=Simpan +adminSessions=Sesi aktif +adminShutdown=Matikan +adminTitle=Pilihan di Konsol H2 +adminTranslateHelp=#Translate or improve the translation of the H2 Console. +adminTranslateStart=#Translate +helpAction=Aksi +helpAddAnotherRow=Menambah sebuah baris +helpAddDrivers=Menambah pengendali basis data +helpAddDriversText=Pengendali basis data tambahan dapat didaftarkan dengan cara menambah lokasi file Jar dari si pengendali ke variabel lingkungan H2DRIVERS atau CLASSPATH. Contoh (Windows): Untuk menambah librari pengendali basis data C:/Programs/hsqldb/lib/hsqldb.jar, atur variabel lingkungan H2DRIVERS menjadi C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Tambah sebuah baris baru +helpCommandHistory=Tampilkan sejarah perintah +helpCreateTable=Ciptakan sebuah tabel +helpDeleteRow=Buang sebuah baris +helpDisconnect=Putuskan koneksi dari basis data +helpDisplayThis=Tampilkan laman bantuan ini +helpDropTable=Hapus tabel jika sudah ada +helpExecuteCurrent=Jalankan pernyataan SQL terkini +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Ikon +helpImportantCommands=Perintah penting +helpOperations=Operasi +helpQuery=Kueri ke tabel +helpSampleSQL=Contoh skrip SQL +helpStatements=Pernyataan SQL +helpUpdate=Rubah data dalam sebuah baris +helpWithColumnsIdName=dengan kolom ID dan NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Hubungkan +login.driverClass=Kelas Pengendali +login.driverNotFound=Pengendali basis data tidak ditemukan
    Pelajari bagian Bantuan untuk mengetahui bagaimana cara menambah pengendali basis data +login.goAdmin=Pilihan +login.jdbcUrl=JDBC URL +login.language=Bahasa +login.login=Login +login.remove=Buang +login.save=Simpan +login.savedSetting=Pengaturan tersimpan +login.settingName=Nama pengaturan +login.testConnection=Tes koneksi +login.testSuccessful=Tes berhasil +login.welcome=Konsol H2 +result.1row=1 baris +result.autoCommitOff=Autocommit sekarang OFF +result.autoCommitOn=Autocommit sekarang ON +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=Hitungan baris maksimum terpasang +result.noRows=Tidak ada hasil +result.noRunningStatement=Saat ini tidak ada pernyataan yang beroperasi +result.rows=baris +result.statementWasCanceled=Pernyataan tersebut dibatalkan +result.updateCount=Perbarui Hitungan +resultEdit.action=Aksi +resultEdit.add=Tambah +resultEdit.cancel=Batal +resultEdit.delete=Hapus +resultEdit.edit=Ubah +resultEdit.editResult=Ubah +resultEdit.save=Simpan +toolbar.all=Semua +toolbar.autoCommit=Autocommit +toolbar.autoComplete=Auto-Complete +toolbar.autoComplete.full=Full +toolbar.autoComplete.normal=Normal +toolbar.autoComplete.off=Off +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Off +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Batalkan pernyataan terkini +toolbar.clear=Bersihkan +toolbar.commit=Laksanakan +toolbar.disconnect=Putuskan koneksi +toolbar.history=Sejarah perintah +toolbar.maxRows=Baris maksimum +toolbar.refresh=Segarkan +toolbar.rollback=Gulung mundur +toolbar.run=Jalankan +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=Pernyataan SQL +tools.backup=#Backup +tools.backup.help=#Creates a backup of a database. +tools.changeFileEncryption=#ChangeFileEncryption +tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm. 
+tools.cipher=#Cipher (AES or XTEA) +tools.commandLine=#Command line +tools.convertTraceFile=#ConvertTraceFile +tools.convertTraceFile.help=#Converts a .trace.db file to a Java application and SQL script. +tools.createCluster=#CreateCluster +tools.createCluster.help=#Creates a cluster from a standalone database. +tools.databaseName=#Database name +tools.decryptionPassword=#Decryption password +tools.deleteDbFiles=#DeleteDbFiles +tools.deleteDbFiles.help=#Deletes all files belonging to a database. +tools.directory=#Directory +tools.encryptionPassword=#Encryption password +tools.javaDirectoryClassName=#Java directory and class name +tools.recover=#Recover +tools.recover.help=#Helps recovering a corrupted database. +tools.restore=#Restore +tools.restore.help=#Restores a database backup. +tools.result=#Result +tools.run=Jalankan +tools.runScript=#RunScript +tools.runScript.help=#Runs a SQL script. +tools.script=#Script +tools.script.help=#Allows to convert a database to a SQL script for backup or migration. 
+tools.scriptFileName=#Script file name +tools.serverList=#Server list +tools.sourceDatabaseName=#Source database name +tools.sourceDatabaseURL=#Source database URL +tools.sourceDirectory=#Source directory +tools.sourceFileName=#Source file name +tools.sourceScriptFileName=#Source script file name +tools.targetDatabaseName=#Target database name +tools.targetDatabaseURL=#Target database URL +tools.targetDirectory=#Target directory +tools.targetFileName=#Target file name +tools.targetScriptFileName=#Target script file name +tools.traceFileName=#Trace file name +tree.admin=Admin +tree.current=Nilai terkini +tree.hashed=Hashed +tree.increment=Penambahan +tree.indexes=Indeks +tree.nonUnique=Tidak-Unik +tree.sequences=Urut-urutan +tree.unique=Unik +tree.users=Para pengguna diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_it.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_it.prop new file mode 100644 index 0000000000000..ac32ed9406153 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_it.prop @@ -0,0 +1,163 @@ +.translator=PierPaolo Ucchino +a.help=Aiuto +a.language=Italiano +a.lynxNotSupported=Spiacente, Lynx non è al momento supportato +a.password=Password +a.remoteConnectionsDisabled=Spiacente, le connessioni remote ('webAllowOthers') sono disabilitate su questo server. 
+a.title=H2 Pannello di controllo +a.tools=Accessori +a.user=Nome utente +admin.executing=Esecuzione in corso +admin.ip=IP +admin.lastAccess=Ultimo accesso +admin.lastQuery=Ultima query +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=Client abilitati +adminConnection=Sicurezza della connessione +adminHttp=Usa connessioni HTTP non criptate +adminHttps=Usa connessioni criptate con SSL (HTTPS) +adminLocal=Abilita solo connessioni locali +adminLogin=Accesso amministratore +adminLoginCancel=Annulla +adminLoginOk=OK +adminLogout=Disconnessione +adminOthers=Abilita connessioni da altri computers +adminPort=Numero di porta +adminPortWeb=Numero di porta del server Web +adminRestart=Le modifiche saranno effettive dopo il riavvio del server. +adminSave=Salva +adminSessions=Connessioni attive +adminShutdown=Shutdown +adminTitle=Pannello di controllo preferenze H2 +adminTranslateHelp=Traduci o modifica la traduzione della console di H2. +adminTranslateStart=Traduci +helpAction=Azione +helpAddAnotherRow=Aggiunge un'altra riga +helpAddDrivers=Aggiunta di altri driver per l'accesso al database +helpAddDriversText=I drivers per il database possono essere inseriti aggiungendo la posizione del file Jar del driver stesso alle variabili di ambiente H2DRIVERS o CLASSPATH. Esempio (Windows): Per aggiungere alla libreria il drivers per il database C:/Programs/hsqldb/lib/hsqldb.jar, basta modificare la variabile di ambiente H2DRIVERS in C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Aggiunge una nuova riga +helpCommandHistory=Mostra l'elenco dei comandi eseguiti +helpCreateTable=Crea una nuova tabella +helpDeleteRow=Rimuove una riga +helpDisconnect=Disconnette dal database +helpDisplayThis=Mostra questa pagina di aiuto +helpDropTable=Cancella la tabella se esiste +helpExecuteCurrent=Esegue il corrente comando SQL +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Icona +helpImportantCommands=Comandi importanti +helpOperations=Operazioni +helpQuery=Seleziona tutto dalla tabella +helpSampleSQL=Programma SQL di esempio +helpStatements=Comandi SQL +helpUpdate=Cambia dei dati in una riga +helpWithColumnsIdName=tramite ID e NOME colonne +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=Invio +key.shift=#Shift +key.space=#Space +login.connect=Connetti +login.driverClass=Classe driver +login.driverNotFound=Il driver del database non è stato trovato
    Guardare in Aiuto su come aggiungere dei drivers +login.goAdmin=Preferenze +login.jdbcUrl=JDBC URL +login.language=Lingua +login.login=Accesso +login.remove=Rimuovi +login.save=Salva +login.savedSetting=Configurazioni salvate +login.settingName=Nome configurazione +login.testConnection=Prova la connessione +login.testSuccessful=Connessione riuscita +login.welcome=Pannello di controllo H2 +result.1row=1 riga +result.autoCommitOff=Auto inserimento adesso e' disattivo +result.autoCommitOn=Auto inserimento adesso e' attivo +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=Il numero massimo di righe e' stato impostato +result.noRows=nessuna riga +result.noRunningStatement=Non c'e' alcun comando in corso di esecuzione +result.rows=righe +result.statementWasCanceled=Il comando e' stato annullato +result.updateCount=Aggiorna il risultato +resultEdit.action=Azione +resultEdit.add=Aggiungi +resultEdit.cancel=Annulla +resultEdit.delete=Cancella +resultEdit.edit=Modifica +resultEdit.editResult=Modifica +resultEdit.save=Salva +toolbar.all=Tutto +toolbar.autoCommit=Auto inserimento +toolbar.autoComplete=Auto completamento +toolbar.autoComplete.full=Pieno +toolbar.autoComplete.normal=Normale +toolbar.autoComplete.off=Disattivo +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Disattivo +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Annulla il comando corrente +toolbar.clear=Annulla +toolbar.commit=Esegui comando +toolbar.disconnect=Disconnetti +toolbar.history=Comandi eseguiti +toolbar.maxRows=Massimo numero righe +toolbar.refresh=Aggiorna +toolbar.rollback=Annulla comando +toolbar.run=Esegui +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=Comando SQL +tools.backup=Backup +tools.backup.help=Crea un backup del database. +tools.changeFileEncryption=Cambia cifratura del file +tools.changeFileEncryption.help=Permette il cambiamento della password e dell'algoritmo di cifratura del database.
+tools.cipher=Cifratura (AES or XTEA) +tools.commandLine=Linea di comando +tools.convertTraceFile=Converti il file di tracciamento +tools.convertTraceFile.help=Converte un file .trace.db in una applicazione java e uno script SQL. +tools.createCluster=Crea un cluster +tools.createCluster.help=Crea un cluster da un database standalone. +tools.databaseName=Nome del database +tools.decryptionPassword=Password per la decifratura +tools.deleteDbFiles=Cancella files del database +tools.deleteDbFiles.help=Cancella tutti i files appartenenti a un database. +tools.directory=Directory +tools.encryptionPassword=Password di cifratura +tools.javaDirectoryClassName=Directory java e nome della classe +tools.recover=Recupera +tools.recover.help=Aiuta a recuperare un database corrotto. +tools.restore=Ripristina +tools.restore.help=Ripristina un backup di un database. +tools.result=Risultato +tools.run=Esegui +tools.runScript=Esegui script +tools.runScript.help=Esegui script SQL. +tools.script=Script +tools.script.help=Permette la conversione di un database in uno script SQL per un backup o una migrazione. 
+tools.scriptFileName=Nome del file di script
+tools.serverList=Lista dei servers
+tools.sourceDatabaseName=Nome del database sorgente
+tools.sourceDatabaseURL=URL del database sorgente
+tools.sourceDirectory=Directory sorgente
+tools.sourceFileName=Nome del file sorgente
+tools.sourceScriptFileName=Nome del file script sorgente
+tools.targetDatabaseName=Nome del database destinazione
+tools.targetDatabaseURL=URL del database destinazione
+tools.targetDirectory=Directory destinazione
+tools.targetFileName=Nome del file destinazione
+tools.targetScriptFileName=Nome del file di script destinazione
+tools.traceFileName=Nome del file di traccia
+tree.admin=Amministratore
+tree.current=Valore corrente
+tree.hashed=Valore hash
+tree.increment=Incremento
+tree.indexes=Indici
+tree.nonUnique=Non unico
+tree.sequences=Sequenze
+tree.unique=Unico
+tree.users=Utenti
diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_ja.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_ja.prop
new file mode 100644
index 0000000000000..e16f17ae4b43e
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_ja.prop
@@ -0,0 +1,163 @@
+.translator=IKEMOTO, Masahiro
+a.help=ヘルプ
+a.language=日本語
+a.lynxNotSupported=Lynxにはまだ対応していません
+a.password=パスワード
+a.remoteConnectionsDisabled=このサーバでは、リモート接続('webAllowOthers')が有効になっていません。
+a.title=H2コンソール
+a.tools=ツール
+a.user=ユーザ名
+admin.executing=実行中
+admin.ip=IP
+admin.lastAccess=最終アクセス
+admin.lastQuery=最終問い合せ
+admin.no=#no
+admin.notConnected=#not connected
+admin.url=URL
+admin.yes=#yes
+adminAllow=クライアント接続許可
+adminConnection=接続セキュリティ
+adminHttp=暗号化されない HTTP 接続を使用
+adminHttps=暗号化された SSL (HTTPS) 接続を使用
+adminLocal=ローカル接続のみ許可
+adminLogin=管理者ログイン
+adminLoginCancel=キャンセル
+adminLoginOk=OK
+adminLogout=ログアウト
+adminOthers=他のコンピュータからの接続を許可
+adminPort=ポート番号
+adminPortWeb=Webサーバポート番号
+adminRestart=変更はサーバの再起動後に有効になります。
+adminSave=保存
+adminSessions=アクティブセッション
+adminShutdown=シャットダウン
+adminTitle=H2コンソール設定
+adminTranslateHelp=H2コンソールの日本語訳を更新します。
+adminTranslateStart=翻訳
+helpAction=アクション
+helpAddAnotherRow=別の行を追加
+helpAddDrivers=データベースドライバの追加
+helpAddDriversText=追加ドライバは、Jarファイルの場所を環境変数 H2DRIVERS、または CLASSPATH に追加することで登録できます。例 (Windows): データベースドライバライブラリ C:/Programs/hsqldb/lib/hsqldb.jar を追加するには、環境変数 H2DRIVERS に、C:/Programs/hsqldb/lib/hsqldb.jar を設定します。
+helpAddRow=新しい行を追加
+helpCommandHistory=コマンド履歴を表示
+helpCreateTable=ID、および NAME 列を持つ
+helpDeleteRow=行を削除
+helpDisconnect=データベース接続を切断
+helpDisplayThis=このページを表示
+helpDropTable=存在するならテーブルを削除
+helpExecuteCurrent=現在のSQLステートメントを実行
+helpExecuteSelected=#Executes the SQL statement defined by the text selection
+helpIcon=アイコン
+helpImportantCommands=重要なコマンド
+helpOperations=オペレーション
+helpQuery=テーブル問い合わせ
+helpSampleSQL=SQLステートメントのサンプル
+helpStatements=SQLステートメント
+helpUpdate=行データの変更
+helpWithColumnsIdName=新しいテーブルを作成
+key.alt=#Alt
+key.ctrl=#Ctrl
+key.enter=#Enter
+key.shift=#Shift
+key.space=#Space
+login.connect=接続
+login.driverClass=ドライバクラス
+login.driverNotFound=データベースドライバが見付かりません
    ヘルプでドライバの追加方法を確認してください
+login.goAdmin=設定
+login.jdbcUrl=JDBC URL
+login.language=言語
+login.login=ログイン
+login.remove=削除
+login.save=保存
+login.savedSetting=保存済設定
+login.settingName=設定名
+login.testConnection=接続テスト
+login.testSuccessful=テストは成功しました
+login.welcome=H2コンソール
+result.1row=1 行
+result.autoCommitOff=オートコミットが無効になりました
+result.autoCommitOn=オートコミットが有効になりました
+result.bytes=バイト
+result.characters=文字
+result.maxrowsSet=最大行数が設定されました
+result.noRows=該当行無し
+result.noRunningStatement=現在実行中のステートメントはありません
+result.rows=行
+result.statementWasCanceled=ステートメントがキャンセルされました
+result.updateCount=更新数
+resultEdit.action=アクション
+resultEdit.add=追加
+resultEdit.cancel=キャンセル
+resultEdit.delete=削除
+resultEdit.edit=編集
+resultEdit.editResult=編集
+resultEdit.save=保存
+toolbar.all=全て
+toolbar.autoCommit=オートコミット
+toolbar.autoComplete=オートコンプリート
+toolbar.autoComplete.full=フル
+toolbar.autoComplete.normal=ノーマル
+toolbar.autoComplete.off=オフ
+toolbar.autoSelect=#Auto select
+toolbar.autoSelect.off=オフ
+toolbar.autoSelect.on=#On
+toolbar.cancelStatement=現在のステートメントをキャンセル
+toolbar.clear=クリア
+toolbar.commit=コミット
+toolbar.disconnect=切断
+toolbar.history=コマンド履歴
+toolbar.maxRows=最大行数
+toolbar.refresh=リフレッシュ
+toolbar.rollback=ロールバック
+toolbar.run=実行
+toolbar.runSelected=#Run Selected
+toolbar.sqlStatement=SQLステートメント
+tools.backup=バックアップ
+tools.backup.help=データベースのバックアップを作成します。
+tools.changeFileEncryption=ファイル暗号化設定の変更
+tools.changeFileEncryption.help=データベースファイルの暗号化方式とパスワードを変更します。
+tools.cipher=暗号化方式 (AES or XTEA)
+tools.commandLine=コマンドライン
+tools.convertTraceFile=トレースファイルの変換
+tools.convertTraceFile.help=.trace.dbファイルを、JavaアプリケーションとSQLスクリプトに変換します。
+tools.createCluster=クラスタ作成
+tools.createCluster.help=スタンドアロンデータベースからクラスタを作成します。
+tools.databaseName=データベース名
+tools.decryptionPassword=復号化パスワード
+tools.deleteDbFiles=データベースファイルの削除
+tools.deleteDbFiles.help=データベースに関連する全てのファイルを削除します。
+tools.directory=ディレクトリ
+tools.encryptionPassword=暗号化パスワード
+tools.javaDirectoryClassName=Javaディレクトリとクラス名
+tools.recover=修復
+tools.recover.help=破損したデータベースの修復を支援します。
+tools.restore=リストア
+tools.restore.help=データベースをバックアップからリストアします。
+tools.result=結果
+tools.run=実行
+tools.runScript=スクリプトの実行
+tools.runScript.help=SQLスクリプトを実行します。
+tools.script=スクリプト
+tools.script.help=バックアップ、または移動のために、データベースをSQLスクリプトに変換します。
+tools.scriptFileName=スクリプトファイル名
+tools.serverList=サーバリスト
+tools.sourceDatabaseName=ソースデータベース名
+tools.sourceDatabaseURL=ソースデータベースURL
+tools.sourceDirectory=ソースディレクトリ
+tools.sourceFileName=ソースファイル名
+tools.sourceScriptFileName=ソーススクリプトファイル名
+tools.targetDatabaseName=ターゲットデータベース名
+tools.targetDatabaseURL=ターゲットデータベースURL
+tools.targetDirectory=ターゲットディレクトリ
+tools.targetFileName=ターゲットファイル名
+tools.targetScriptFileName=ターゲットスクリプトファイル名
+tools.traceFileName=トレースファイル名
+tree.admin=管理者
+tree.current=現在値
+tree.hashed=ハッシュ
+tree.increment=インクリメント
+tree.indexes=インデックス
+tree.nonUnique=Non-Unique
+tree.sequences=シーケンス
+tree.unique=Unique
+tree.users=ユーザ
diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_ko.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_ko.prop
new file mode 100644
index 0000000000000..780072c65d1a6
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_ko.prop
@@ -0,0 +1,163 @@
+.translator=Dylan
+a.help=도움말
+a.language=한국어
+a.lynxNotSupported=죄송하지만 Lynx는 아직 지원되지 않습니다
+a.password=비밀번호
+a.remoteConnectionsDisabled=죄송하지만 이 서버는 원격 연결('webAllowOthers')이 사용 중지된 상태입니다.
+a.title=H2 콘솔
+a.tools=도구
+a.user=사용자명
+admin.executing=실행 여부
+admin.ip=IP
+admin.lastAccess=최종 액세스
+admin.lastQuery=최종 질의
+admin.no=아니요
+admin.notConnected=연결되지 않았음
+admin.url=URL
+admin.yes=예
+adminAllow=클라이언트 허가
+adminConnection=연결 보안
+adminHttp=암호화되지 않은 HTTP 연결 사용
+adminHttps=암호화된 SSL(HTTPS) 연결 사용
+adminLocal=로컬 연결만 허가
+adminLogin=관리자 로그인
+adminLoginCancel=취소
+adminLoginOk=확인
+adminLogout=로그아웃
+adminOthers=다른 컴퓨터에서의 연결 허가
+adminPort=포트 번호
+adminPortWeb=웹 서버 포트 번호
+adminRestart=변경 사항은 서버 재시작 후 반영됩니다.
+adminSave=저장
+adminSessions=활동중인 세션
+adminShutdown=종료
+adminTitle=H2 콘솔 설정
+adminTranslateHelp=H2 콘솔을 번역하거나 번역을 수정할 수 있습니다.
+adminTranslateStart=번역
+helpAction=작업
+helpAddAnotherRow=행 추가
+helpAddDrivers=데이터베이스 드라이버 추가
+helpAddDriversText=데이터베이스 드라이버를 추가로 등록하려면 H2DRIVERS나 CLASSPATH 환경 변수에 드라이버의 Jar 파일 위치를 추가하면 됩니다. 예 (Windows): 데이터베이스 드라이버 라이브러리로 C:/Programs/hsqldb/lib/hsqldb.jar를 추가하려면 H2DRIVERS 환경 변수를 C:/Programs/hsqldb/lib/hsqldb.jar로 설정합니다.
+helpAddRow=행 추가
+helpCommandHistory=명령 이력 보기
+helpCreateTable=새 테이블 만들기
+helpDeleteRow=행 삭제
+helpDisconnect=데이터베이스 연결 끊기
+helpDisplayThis=이 도움말 페이지 보기
+helpDropTable=테이블이 존재하는 경우 삭제하기
+helpExecuteCurrent=현재의 SQL 문 실행
+helpExecuteSelected=#Executes the SQL statement defined by the text selection
+helpIcon=아이콘
+helpImportantCommands=중요 명령
+helpOperations=작동
+helpQuery=테이블 질의
+helpSampleSQL=샘플 SQL 스크립트
+helpStatements=SQL 문
+helpUpdate=행 데이터 변경
+helpWithColumnsIdName=컬럼은 ID와 NAME
+key.alt=#Alt
+key.ctrl=#Ctrl
+key.enter=엔터
+key.shift=#Shift
+key.space=#Space
+login.connect=연결
+login.driverClass=드라이버 클래스
+login.driverNotFound=데이터베이스 드라이버를 찾지 못함
    도움말에서 드라이버 추가 방법 참고
+login.goAdmin=설정
+login.jdbcUrl=JDBC URL
+login.language=언어
+login.login=로그인
+login.remove=삭제
+login.save=저장
+login.savedSetting=저장한 설정
+login.settingName=설정 이름
+login.testConnection=연결 시험
+login.testSuccessful=시험 성공
+login.welcome=H2 콘솔
+result.1row=1 row
+result.autoCommitOff=자동 커밋이 중지됨
+result.autoCommitOn=자동 커밋이 작동됨
+result.bytes=바이트
+result.characters=자
+result.maxrowsSet=최대 행 개수 설정
+result.noRows=행 없음
+result.noRunningStatement=현재 실행중인 문 없음
+result.rows=행
+result.statementWasCanceled=문이 취소됨
+result.updateCount=갱신된 개수
+resultEdit.action=작업
+resultEdit.add=추가
+resultEdit.cancel=취소
+resultEdit.delete=삭제
+resultEdit.edit=편집
+resultEdit.editResult=편집
+resultEdit.save=저장
+toolbar.all=전체
+toolbar.autoCommit=자동 커밋
+toolbar.autoComplete=자동 완성
+toolbar.autoComplete.full=전체
+toolbar.autoComplete.normal=보통
+toolbar.autoComplete.off=안함
+toolbar.autoSelect=#Auto select
+toolbar.autoSelect.off=안함
+toolbar.autoSelect.on=#On
+toolbar.cancelStatement=현재 문 취소
+toolbar.clear=지우기
+toolbar.commit=커밋
+toolbar.disconnect=연결 끊기
+toolbar.history=명령 이력
+toolbar.maxRows=최대 행 수
+toolbar.refresh=새로 고침
+toolbar.rollback=롤백
+toolbar.run=실행
+toolbar.runSelected=#Run Selected
+toolbar.sqlStatement=SQL 문
+tools.backup=백업
+tools.backup.help=데이터베이스 백업 만들기.
+tools.changeFileEncryption=파일암호화변경
+tools.changeFileEncryption.help=데이터베이스 파일 암호화 비밀번호 및 알고리즘을 변경할 수 있습니다.
+tools.cipher=Cipher (AES 또는 XTEA)
+tools.commandLine=명령줄
+tools.convertTraceFile=트레이스파일변환
+tools.convertTraceFile.help=.trace.db 파일을 Java 애플리케이션 및 SQL 스크립트로 변환합니다.
+tools.createCluster=클러스터생성
+tools.createCluster.help=독립형 데이터베이스에서 클러스터를 만듭니다.
+tools.databaseName=데이터베이스 이름
+tools.decryptionPassword=복호화 비밀번호
+tools.deleteDbFiles=Db파일삭제
+tools.deleteDbFiles.help=데이터베이스에 속한 모든 파일을 삭제합니다.
+tools.directory=디렉터리
+tools.encryptionPassword=암호화 비밀번호
+tools.javaDirectoryClassName=Java 디렉터리 및 클래스 이름
+tools.recover=복구
+tools.recover.help=손상된 데이터베이스를 복구하도록 합니다.
+tools.restore=복원
+tools.restore.help=데이터베이스 백업을 복원합니다.
+tools.result=결과
+tools.run=실행
+tools.runScript=스크립트실행
+tools.runScript.help=SQL 스크립트를 실행합니다.
+tools.script=스크립트
+tools.script.help=백업 또는 이관을 위해 데이터베이스를 SQL 스크립트로 변환할 수 있습니다.
+tools.scriptFileName=스크립트 파일명
+tools.serverList=서버 목록
+tools.sourceDatabaseName=원본 데이터베이스 이름
+tools.sourceDatabaseURL=원본 데이터베이스 URL
+tools.sourceDirectory=원본 디렉터리
+tools.sourceFileName=원본 파일명
+tools.sourceScriptFileName=원본 스크립트 파일명
+tools.targetDatabaseName=대상 데이터베이스 이름
+tools.targetDatabaseURL=대상 데이터베이스 URL
+tools.targetDirectory=대상 디렉터리
+tools.targetFileName=대상 파일명
+tools.targetScriptFileName=대상 스크립트 파일명
+tools.traceFileName=트레이스 파일명
+tree.admin=Admin
+tree.current=현재 값
+tree.hashed=Hashed
+tree.increment=증가
+tree.indexes=인덱스
+tree.nonUnique=유니크아님
+tree.sequences=시퀀스
+tree.unique=유니크
+tree.users=사용자
diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_nl.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_nl.prop
new file mode 100644
index 0000000000000..ccea089d33a82
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_nl.prop
@@ -0,0 +1,163 @@
+.translator=Remco Schoen
+a.help=Help
+a.language=Nederlands
+a.lynxNotSupported=Sorry, Lynx wordt nog niet ondersteund
+a.password=Wachtwoord
+a.remoteConnectionsDisabled=Sorry, externe verbindingen ('webAllowOthers') zijn niet toegelaten voor deze server.
+a.title=H2 Console
+a.tools=#Tools
+a.user=Gebruikersnaam
+admin.executing=Uitvoeren
+admin.ip=IP
+admin.lastAccess=Laatste toegang
+admin.lastQuery=Laatste query
+admin.no=#no
+admin.notConnected=#not connected
+admin.url=URL
+admin.yes=#yes
+adminAllow=Toegestane clients
+adminConnection=Beveiliging verbinding
+adminHttp=Gebruik onversleutelde HTTP verbindingen
+adminHttps=Gebruik versleutelde SSL (HTTPS) verbindingen
+adminLocal=Sta alleen lokale verbindingen toe
+adminLogin=Inloggen instellingen
+adminLoginCancel=Annuleren
+adminLoginOk=OK
+adminLogout=Uitloggen
+adminOthers=Sta verbindingen vanaf andere computers toe
+adminPort=Poortnummer
+adminPortWeb=Webserver poortnummer
+adminRestart=Wijzigingen worden doorgevoerd na herstarten server
+adminSave=Opslaan
+adminSessions=Actieve sessies
+adminShutdown=Server stoppen
+adminTitle=H2 Console instellingen
+adminTranslateHelp=#Translate or improve the translation of the H2 Console.
+adminTranslateStart=#Translate
+helpAction=Gebeurtenis
+helpAddAnotherRow=Voeg nogmaals een nieuwe regel toe
+helpAddDrivers=Toevoegen drivers voor een database
+helpAddDriversText=Extra drivers voor een database kunnen worden geregistreerd door het toevoegen van het Jar-bestand van de driver aan de omgevingsvariabelen H2DRIVERS of CLASSPATH. Voorbeeld (Windows): om de driver bibliotheek C:/Programs/hsqldb/lib/hsqldb.jar toe te voegen, moet de omgevingsvariabele H2DRIVERS op C:/Programs/hsqldb/lib/hsqldb.jar gezet worden.
+helpAddRow=Voeg een nieuwe regel toe
+helpCommandHistory=Toont de geschiedenis van commando's
+helpCreateTable=Maak een nieuwe tabel aan
+helpDeleteRow=Verwijder een regel
+helpDisconnect=Verbreekt de verbinding met de database
+helpDisplayThis=Toont deze helppagina
+helpDropTable=Verwijdert de tabel als deze bestaat
+helpExecuteCurrent=Voert het huidige SQL-statement uit
+helpExecuteSelected=#Executes the SQL statement defined by the text selection
+helpIcon=Icoon
+helpImportantCommands=Belangrijke commando's
+helpOperations=Acties
+helpQuery=Vraag gegevens op uit de tabel
+helpSampleSQL=Voorbeeld SQL-script
+helpStatements=SQL-statements
+helpUpdate=Verander de gegevens in een rij
+helpWithColumnsIdName=met de kolommen ID en NAME
+key.alt=#Alt
+key.ctrl=#Ctrl
+key.enter=#Enter
+key.shift=#Shift
+key.space=#Space
+login.connect=Verbind
+login.driverClass=Driver klasse
+login.driverNotFound=Driver voor database niet gevonden
    Kijk in de help hoe je drivers kunt toevoegen
+login.goAdmin=Instellingen
+login.jdbcUrl=JDBC URL
+login.language=Taal
+login.login=Inloggen
+login.remove=Verwijderen
+login.save=Opslaan
+login.savedSetting=Opgeslagen instellingen
+login.settingName=Instellingsnaam
+login.testConnection=Test verbinding
+login.testSuccessful=Test succesvol
+login.welcome=H2 Console
+result.1row=1 rij
+result.autoCommitOff=Auto commit is nu UIT
+result.autoCommitOn=Auto commit is nu AAN
+result.bytes=#bytes
+result.characters=#characters
+result.maxrowsSet=Het maximum aantal rijen is ingesteld
+result.noRows=geen rijen
+result.noRunningStatement=Er wordt momenteel geen statement uitgevoerd
+result.rows=rijen
+result.statementWasCanceled=Het statement is geannuleerd
+result.updateCount=Aantal wijzigingen
+resultEdit.action=Gebeurtenis
+resultEdit.add=Toevoegen
+resultEdit.cancel=Annuleren
+resultEdit.delete=Verwijderen
+resultEdit.edit=Wijzigen
+resultEdit.editResult=Wijzigen
+resultEdit.save=Opslaan
+toolbar.all=Alle
+toolbar.autoCommit=Auto commit
+toolbar.autoComplete=Auto aanvullen
+toolbar.autoComplete.full=Volledig
+toolbar.autoComplete.normal=Normaal
+toolbar.autoComplete.off=Uit
+toolbar.autoSelect=#Auto select
+toolbar.autoSelect.off=Uit
+toolbar.autoSelect.on=#On
+toolbar.cancelStatement=Annuleer het huidige statement
+toolbar.clear=Wissen
+toolbar.commit=Commit
+toolbar.disconnect=Verbinding verbreken
+toolbar.history=Geschiedenis commando's
+toolbar.maxRows=Max rijen
+toolbar.refresh=Verversen
+toolbar.rollback=Rollback
+toolbar.run=Uitvoeren
+toolbar.runSelected=#Run Selected
+toolbar.sqlStatement=SQL statement
+tools.backup=#Backup
+tools.backup.help=#Creates a backup of a database.
+tools.changeFileEncryption=#ChangeFileEncryption
+tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm.
+tools.cipher=#Cipher (AES or XTEA)
+tools.commandLine=#Command line
+tools.convertTraceFile=#ConvertTraceFile
+tools.convertTraceFile.help=#Converts a .trace.db file to a Java application and SQL script.
+tools.createCluster=#CreateCluster
+tools.createCluster.help=#Creates a cluster from a standalone database.
+tools.databaseName=#Database name
+tools.decryptionPassword=#Decryption password
+tools.deleteDbFiles=#DeleteDbFiles
+tools.deleteDbFiles.help=#Deletes all files belonging to a database.
+tools.directory=#Directory
+tools.encryptionPassword=#Encryption password
+tools.javaDirectoryClassName=#Java directory and class name
+tools.recover=#Recover
+tools.recover.help=#Helps recovering a corrupted database.
+tools.restore=#Restore
+tools.restore.help=#Restores a database backup.
+tools.result=#Result
+tools.run=Uitvoeren
+tools.runScript=#RunScript
+tools.runScript.help=#Runs a SQL script.
+tools.script=#Script
+tools.script.help=#Allows to convert a database to a SQL script for backup or migration.
+tools.scriptFileName=#Script file name
+tools.serverList=#Server list
+tools.sourceDatabaseName=#Source database name
+tools.sourceDatabaseURL=#Source database URL
+tools.sourceDirectory=#Source directory
+tools.sourceFileName=#Source file name
+tools.sourceScriptFileName=#Source script file name
+tools.targetDatabaseName=#Target database name
+tools.targetDatabaseURL=#Target database URL
+tools.targetDirectory=#Target directory
+tools.targetFileName=#Target file name
+tools.targetScriptFileName=#Target script file name
+tools.traceFileName=#Trace file name
+tree.admin=Beheerder
+tree.current=Huidige waarde
+tree.hashed=Hashed
+tree.increment=Ophogen
+tree.indexes=Indices
+tree.nonUnique=Niet uniek
+tree.sequences=Sequenties
+tree.unique=Uniek
+tree.users=Gebruikers
diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_pl.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_pl.prop
new file mode 100644
index 0000000000000..0c10899fef597
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_pl.prop
@@ -0,0 +1,163 @@
+.translator=Tomek
+a.help=Pomoc
+a.language=Polski
+a.lynxNotSupported=Przeglądarka Lynx nie jest jeszcze wspierana
+a.password=Hasło
+a.remoteConnectionsDisabled=Połączenia zdalne ('webAllowOthers') są wyłączone na tym serwerze.
+a.title=Konsola H2
+a.tools=Narzędzia
+a.user=Użytkownik
+admin.executing=Wykonywanie
+admin.ip=IP
+admin.lastAccess=Ostatnie logowanie
+admin.lastQuery=Ostatnie zapytanie
+admin.no=no
+admin.notConnected=not connected
+admin.url=URL
+admin.yes=yes
+adminAllow=Zaufani klienci
+adminConnection=Bezpieczeństwo połączeń
+adminHttp=Używaj nieszyfrowanych połączeń HTTP
+adminHttps=Używaj szyfrowanych połączeń SSL (HTTPS)
+adminLocal=Tylko lokalne połączenia
+adminLogin=Logowanie administracyjne
+adminLoginCancel=Anuluj
+adminLoginOk=OK
+adminLogout=Wyloguj
+adminOthers=Pozwalaj na połączenia zdalne
+adminPort=Numer portu
+adminPortWeb=Numer portu serwera Web
+adminRestart=Zmiany będą widoczne po zrestartowaniu serwera.
+adminSave=Zapisz
+adminSessions=Aktywne sesje
+adminShutdown=Wyłącz
+adminTitle=Ustawienia konsoli H2
+adminTranslateHelp=Przetłumacz lub popraw tłumaczenie konsoli H2.
+adminTranslateStart=Tłumacz
+helpAction=Akcja
+helpAddAnotherRow=Dodaj kolejny rekord
+helpAddDrivers=Dodatkowe sterowniki baz danych
+helpAddDriversText=Dodatkowe sterowniki bazy danych mogą być rejestrowane przez dodanie plików Jar do zmiennych środowiskowych H2DRIVERS lub CLASSPATH. Przykładowo (Windows): Aby dodać sterownik z pliku C:/Programs/hsqldb/lib/hsqldb.jar, ustaw zmienną środowiskową H2DRIVERS na C:/Programs/hsqldb/lib/hsqldb.jar.
+helpAddRow=Dodaj nowy rekord
+helpCommandHistory=Pokazuje historię komend
+helpCreateTable=Tworzy nową tabelę
+helpDeleteRow=Usuń rekord
+helpDisconnect=Wyloguj się z bazy danych
+helpDisplayThis=Wyświetla tę stronę pomocy
+helpDropTable=Skasuj tabelę, jeżeli istnieje
+helpExecuteCurrent=Wykonuje bieżące zapytanie SQL
+helpExecuteSelected=#Executes the SQL statement defined by the text selection
+helpIcon=Ikona
+helpImportantCommands=Ważne komendy
+helpOperations=Operacje
+helpQuery=Wykonaj zapytanie wybierające
+helpSampleSQL=Prosty skrypt SQL
+helpStatements=Wyrażenia SQL
+helpUpdate=Zmień dane w rekordzie
+helpWithColumnsIdName=z kolumnami ID i NAME
+key.alt=#Alt
+key.ctrl=#Ctrl
+key.enter=#Enter
+key.shift=#Shift
+key.space=#Space
+login.connect=Połącz
+login.driverClass=Klasa sterownika
+login.driverNotFound=Sterownik nie istnieje
    Zobacz w dokumentacji opis dodawania sterowników
+login.goAdmin=Opcje
+login.jdbcUrl=JDBC URL
+login.language=Język
+login.login=Użytkownik
+login.remove=Usuń
+login.save=Zapisz
+login.savedSetting=Zapisz opcje
+login.settingName=Nazwa opcji
+login.testConnection=Testuj połączenie
+login.testSuccessful=Test zakończony sukcesem
+login.welcome=Konsola H2
+result.1row=1 rekord
+result.autoCommitOff=Automatyczne zatwierdzanie jest teraz WYŁĄCZONE
+result.autoCommitOn=Automatyczne zatwierdzanie jest teraz WŁĄCZONE
+result.bytes=bytes
+result.characters=characters
+result.maxrowsSet=Maksymalna ilość rekordów
+result.noRows=brak danych
+result.noRunningStatement=Obecnie nie jest wykonywane żadne zapytanie
+result.rows=rekordy
+result.statementWasCanceled=Zapytanie zostało anulowane
+result.updateCount=Ilość aktualizacji
+resultEdit.action=Akcja
+resultEdit.add=Dodaj
+resultEdit.cancel=Anuluj
+resultEdit.delete=Skasuj
+resultEdit.edit=Edytuj
+resultEdit.editResult=Edytuj
+resultEdit.save=Zapisz
+toolbar.all=Wszystko
+toolbar.autoCommit=Automatyczne zatwierdzanie
+toolbar.autoComplete=Automatyczne uzupełnianie
+toolbar.autoComplete.full=Pełny
+toolbar.autoComplete.normal=Normalny
+toolbar.autoComplete.off=Wyłączony
+toolbar.autoSelect=#Auto select
+toolbar.autoSelect.off=Wyłączony
+toolbar.autoSelect.on=#On
+toolbar.cancelStatement=Anuluj bieżące zapytanie
+toolbar.clear=Wyczyść
+toolbar.commit=Zatwierdź
+toolbar.disconnect=Rozłącz
+toolbar.history=Historia komend
+toolbar.maxRows=Maksymalna ilość rekordów
+toolbar.refresh=Odśwież
+toolbar.rollback=Cofnij zmiany
+toolbar.run=Wykonaj
+toolbar.runSelected=#Run Selected
+toolbar.sqlStatement=Zapytanie SQL
+tools.backup=Backup
+tools.backup.help=Tworzy kopię bezpieczeństwa bazy danych.
+tools.changeFileEncryption=Zmień kodowanie pliku
+tools.changeFileEncryption.help=Zezwala na zmianę hasła i algorytmu szyfrowania.
+tools.cipher=Cipher (AES lub XTEA)
+tools.commandLine=Linia komend
+tools.convertTraceFile=Konwertuj plik śladu
+tools.convertTraceFile.help=Konwertuje plik .trace.db na aplikację Java i skrypt SQL.
+tools.createCluster=Utwórz Klaster
+tools.createCluster.help=Tworzy klaster z samodzielnej bazy danych.
+tools.databaseName=Nazwa bazy danych
+tools.decryptionPassword=Hasło deszyfrowania
+tools.deleteDbFiles=Skasuj pliki bazy danych
+tools.deleteDbFiles.help=Skasuj wszystkie pliki bazy danych.
+tools.directory=Katalog
+tools.encryptionPassword=Hasło szyfrowania
+tools.javaDirectoryClassName=Katalog Javy i nazwa klasy
+tools.recover=Odzyskaj
+tools.recover.help=Pomaga odzyskać dane z uszkodzonej bazy.
+tools.restore=Przywróć
+tools.restore.help=Przywraca bazę z archiwum.
+tools.result=Wynik
+tools.run=Uruchom
+tools.runScript=Wykonaj Skrypt
+tools.runScript.help=Uruchamia skrypt SQL.
+tools.script=Skrypt
+tools.script.help=Pozwala na konwersję bazy danych do skryptu SQL dla potrzeb kopii bezpieczeństwa lub migracji bazy.
+tools.scriptFileName=Nazwa pliku skryptu
+tools.serverList=Lista serwerów
+tools.sourceDatabaseName=Nazwa źródłowej bazy danych
+tools.sourceDatabaseURL=URL źródłowej bazy danych
+tools.sourceDirectory=Źródłowy katalog
+tools.sourceFileName=Nazwa pliku źródłowego
+tools.sourceScriptFileName=Nazwa pliku źródłowego skryptu
+tools.targetDatabaseName=Nazwa docelowej bazy danych
+tools.targetDatabaseURL=URL docelowej bazy danych
+tools.targetDirectory=Docelowy katalog
+tools.targetFileName=Nazwa pliku docelowego
+tools.targetScriptFileName=Nazwa pliku docelowego skryptu
+tools.traceFileName=Nazwa pliku śladu
+tree.admin=Administrator
+tree.current=Bieżąca wartość
+tree.hashed=Haszowany
+tree.increment=Zwiększenie
+tree.indexes=Indeksy
+tree.nonUnique=Nieunikalny
+tree.sequences=Sekwencje
+tree.unique=Unikalny
+tree.users=Użytkownicy
diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_pt_br.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_pt_br.prop
new file mode 100644
index 0000000000000..ed98fc282f8f8
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_pt_br.prop
@@ -0,0 +1,163 @@
+.translator=Eduardo Fonseca Velasques
+a.help=Ajuda
+a.language=Português (Brasil)
+a.lynxNotSupported=Lynx não é suportado
+a.password=Senha
+a.remoteConnectionsDisabled=Conexões remotas ('webAllowOthers') estão desativadas neste servidor.
+a.title=H2 Terminal
+a.tools=#Tools
+a.user=Usuário
+admin.executing=Executando
+admin.ip=IP
+admin.lastAccess=Último Acesso
+admin.lastQuery=Última Query
+admin.no=#no
+admin.notConnected=#not connected
+admin.url=URL
+admin.yes=#yes
+adminAllow=Clientes com acesso
+adminConnection=Nível de segurança da conexão
+adminHttp=Usar conexão do tipo HTTP não segura (os dados não são encriptados)
+adminHttps=Usar conexão do tipo HTTPS segura por SSL (os dados são encriptados)
+adminLocal=Permitir apenas conexões do computador local
+adminLogin=Entrar como administrador
+adminLoginCancel=Cancelar
+adminLoginOk=Confirmar
+adminLogout=Sair
+adminOthers=Permitir conexões de outros computadores na rede
+adminPort=Número da porta
+adminPortWeb=Número da porta do servidor
+adminRestart=As alterações serão aplicadas depois de reiniciar o servidor.
+adminSave=Salvar
+adminSessions=Sessões ativas
+adminShutdown=Terminar
+adminTitle=Configuração do H2 Terminal
+adminTranslateHelp=#Translate or improve the translation of the H2 Console.
+adminTranslateStart=#Translate
+helpAction=Ação
+helpAddAnotherRow=Adicionar outra linha
+helpAddDrivers=Adicionar drivers de Base de Dados
+helpAddDriversText=É possível registrar outros drivers, adicionando o arquivo JAR respectivo na variável de ambiente H2DRIVERS ou CLASSPATH. Exemplo (Windows): Para adicionar o driver que está em C:/Programs/hsqldb/lib/hsqldb.jar altere o valor da variável de ambiente H2DRIVERS para C:/Programs/hsqldb/lib/hsqldb.jar.
+helpAddRow=Adicionar uma linha nova
+helpCommandHistory=Mostrar o histórico de comandos
+helpCreateTable=Criar uma tabela nova
+helpDeleteRow=Apagar uma linha
+helpDisconnect=Fechar conexão à Base de Dados
+helpDisplayThis=Mostrar esta página de ajuda
+helpDropTable=Apagar a tabela, caso ela exista
+helpExecuteCurrent=Executar o comando SQL corrente
+helpExecuteSelected=#Executes the SQL statement defined by the text selection
+helpIcon=Símbolo
+helpImportantCommands=Comandos importantes
+helpOperations=Operações
+helpQuery=Pesquisar uma tabela
+helpSampleSQL=Scripts de exemplo
+helpStatements=Comandos SQL
+helpUpdate=Alterar os dados de uma linha
+helpWithColumnsIdName=com as colunas ID e NAME
+key.alt=#Alt
+key.ctrl=#Ctrl
+key.enter=#Enter
+key.shift=#Shift
+key.space=#Space
+login.connect=Conectar
+login.driverClass=Classe com o driver
+login.driverNotFound=O driver não foi encontrado
    Ver na seção de ajuda, como adicionar drivers
+login.goAdmin=Preferências
+login.jdbcUrl=JDBC URL
+login.language=Língua
+login.login=Login
+login.remove=Remover
+login.save=Gravar
+login.savedSetting=Configuração ativa
+login.settingName=Nome da configuração
+login.testConnection=Testar conexão
+login.testSuccessful=Teste bem sucedido
+login.welcome=H2 Terminal
+result.1row=Uma linha
+result.autoCommitOff=Auto commit agora está desligado
+result.autoCommitOn=Auto commit agora está ligado
+result.bytes=#bytes
+result.characters=#characters
+result.maxrowsSet=Número máximo de linhas foi alterado
+result.noRows=sem linhas
+result.noRunningStatement=Atualmente não existe nenhum comando em execução
+result.rows=linhas
+result.statementWasCanceled=O comando foi cancelado
+result.updateCount=Número de registros alterados
+resultEdit.action=Ação
+resultEdit.add=Adicionar
+resultEdit.cancel=Cancelar
+resultEdit.delete=Apagar
+resultEdit.edit=Alterar
+resultEdit.editResult=Alterar
+resultEdit.save=Salvar
+toolbar.all=Todas
+toolbar.autoCommit=Auto commit
+toolbar.autoComplete=Auto complete
+toolbar.autoComplete.full=Total
+toolbar.autoComplete.normal=Normal
+toolbar.autoComplete.off=Desligado
+toolbar.autoSelect=Auto seleção
+toolbar.autoSelect.off=Desligado
+toolbar.autoSelect.on=Ligado
+toolbar.cancelStatement=Cancelar o comando que está em execução
+toolbar.clear=Limpar
+toolbar.commit=Commit
+toolbar.disconnect=Desligar
+toolbar.history=Histórico de comandos executados
+toolbar.maxRows=Número máximo de linhas
+toolbar.refresh=Atualizar
+toolbar.rollback=Rollback
+toolbar.run=Executar comando
+toolbar.runSelected=#Run Selected
+toolbar.sqlStatement=Comando SQL
+tools.backup=#Backup
+tools.backup.help=#Creates a backup of a database.
+tools.changeFileEncryption=#ChangeFileEncryption
+tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm.
+tools.cipher=#Cipher (AES or XTEA)
+tools.commandLine=#Command line
+tools.convertTraceFile=#ConvertTraceFile
+tools.convertTraceFile.help=#Converts a .trace.db file to a Java application and SQL script.
+tools.createCluster=#CreateCluster
+tools.createCluster.help=#Creates a cluster from a standalone database.
+tools.databaseName=#Database name
+tools.decryptionPassword=#Decryption password
+tools.deleteDbFiles=#DeleteDbFiles
+tools.deleteDbFiles.help=#Deletes all files belonging to a database.
+tools.directory=#Directory
+tools.encryptionPassword=#Encryption password
+tools.javaDirectoryClassName=#Java directory and class name
+tools.recover=#Recover
+tools.recover.help=#Helps recovering a corrupted database.
+tools.restore=#Restore
+tools.restore.help=#Restores a database backup.
+tools.result=#Result
+tools.run=Executar comando
+tools.runScript=#RunScript
+tools.runScript.help=#Runs a SQL script.
+tools.script=#Script
+tools.script.help=#Allows to convert a database to a SQL script for backup or migration.
+tools.scriptFileName=#Script file name
+tools.serverList=#Server list
+tools.sourceDatabaseName=#Source database name
+tools.sourceDatabaseURL=#Source database URL
+tools.sourceDirectory=#Source directory
+tools.sourceFileName=#Source file name
+tools.sourceScriptFileName=#Source script file name
+tools.targetDatabaseName=#Target database name
+tools.targetDatabaseURL=#Target database URL
+tools.targetDirectory=#Target directory
+tools.targetFileName=#Target file name
+tools.targetScriptFileName=#Target script file name
+tools.traceFileName=#Trace file name
+tree.admin=Administrador
+tree.current=Valor corrente
+tree.hashed=Hashed
+tree.increment=Incrementar
+tree.indexes=Índices
+tree.nonUnique=Não único
+tree.sequences=Sequências
+tree.unique=Único
+tree.users=Usuários
diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_pt_pt.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_pt_pt.prop
new file mode 100644
index 0000000000000..205084a6ac059
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_pt_pt.prop
@@ -0,0 +1,163 @@
+.translator=Antonio Casqueiro
+a.help=Ajuda
+a.language=Português (Europeu)
+a.lynxNotSupported=Lynx ainda não é suportado
+a.password=Senha
+a.remoteConnectionsDisabled=Conexões remotas ('webAllowOthers') encontram-se desactivadas neste servidor.
+a.title=H2 Consola
+a.tools=#Tools
+a.user=Nome Utilizador
+admin.executing=A executar
+admin.ip=IP
+admin.lastAccess=Último Acesso
+admin.lastQuery=Última Query
+admin.no=#no
+admin.notConnected=#not connected
+admin.url=URL
+admin.yes=#yes
+adminAllow=Clientes com acesso
+adminConnection=Nível de segurança da conexão
+adminHttp=Usar conexões HTTP não seguras (os dados não são cifrados)
+adminHttps=Usar conexões SSL (HTTPS) seguras (os dados são cifrados)
+adminLocal=Apenas permitir conexões a partir do computador local
+adminLogin=Entrar como administrador
+adminLoginCancel=Cancelar
+adminLoginOk=Confirmar
+adminLogout=Sair
+adminOthers=Permitir conexões a partir de outro computador na rede
+adminPort=Número do porto
+adminPortWeb=Número do porto do servidor
+adminRestart=As alterações apenas serão aplicadas após reiniciar o servidor.
+adminSave=Gravar
+adminSessions=Sessões activas
+adminShutdown=Encerrar
+adminTitle=Preferências da Consola H2
+adminTranslateHelp=#Translate or improve the translation of the H2 Console.
+adminTranslateStart=#Translate
+helpAction=Acção
+helpAddAnotherRow=Adicionar outra linha
+helpAddDrivers=Adicionar drivers de Base de Dados
+helpAddDriversText=É possível registar outros drivers, adicionando o ficheiro JAR respectivo, à variável de ambiente H2DRIVERS ou CLASSPATH. Exemplo (Windows): Para adicionar o driver que se encontra em C:/Programs/hsqldb/lib/hsqldb.jar, alterar o valor da variável de ambiente H2DRIVERS para C:/Programs/hsqldb/lib/hsqldb.jar.
+helpAddRow=Adicionar uma linha nova +helpCommandHistory=Mostrar o histórico de comandos +helpCreateTable=Criar uma tabela nova +helpDeleteRow=Apagar uma linha +helpDisconnect=Fechar conexão à Base de Dados +helpDisplayThis=Mostrar esta página de ajuda +helpDropTable=Apagar a tabela, caso ela exista +helpExecuteCurrent=Executar o comando SQL corrente +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Simbolo +helpImportantCommands=Comandos importantes +helpOperations=Operações +helpQuery=Pesquisar uma tabela +helpSampleSQL=Scripts de exemplo +helpStatements=Comandos SQL +helpUpdate=Modificar os dados de uma linha +helpWithColumnsIdName=com as colunas ID e NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Estabelecer conexão +login.driverClass=Classe com o driver +login.driverNotFound=O driver não foi encontrado
    Ver na secção de ajuda, como adicionar drivers +login.goAdmin=Preferencias +login.jdbcUrl=JDBC URL +login.language=Lingua +login.login=Login +login.remove=Remover +login.save=Gravar +login.savedSetting=Configuração activa +login.settingName=Nome da configuração +login.testConnection=Testar conexão +login.testSuccessful=Teste bem sucedido +login.welcome=Consola H2 +result.1row=1 linha +result.autoCommitOff=O auto commit passou a estar desligado +result.autoCommitOn=O auto commit passou a estar ligado +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=O número máximo de linhas foi alterado +result.noRows=sem linhas +result.noRunningStatement=Actualmente não existe nenhum comando em execução +result.rows=linhas +result.statementWasCanceled=O comando foi cancelado +result.updateCount=Número de registos actualizados +resultEdit.action=Acção +resultEdit.add=Adicionar +resultEdit.cancel=Cancelar +resultEdit.delete=Apagar +resultEdit.edit=Alterar +resultEdit.editResult=Alterar +resultEdit.save=Gravar +toolbar.all=Todas +toolbar.autoCommit=Auto commit +toolbar.autoComplete=Auto complete +toolbar.autoComplete.full=Total +toolbar.autoComplete.normal=Normal +toolbar.autoComplete.off=Desligado +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Desligado +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Cancelar o comando que se encontra em execução +toolbar.clear=Limpar +toolbar.commit=Commit +toolbar.disconnect=Desligar +toolbar.history=Histórico de comandos executados +toolbar.maxRows=Número máximo de linhas +toolbar.refresh=Actualizar +toolbar.rollback=Rollback +toolbar.run=Executar comando +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=Comando SQL +tools.backup=#Backup +tools.backup.help=#Creates a backup of a database. +tools.changeFileEncryption=#ChangeFileEncryption +tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm. 
+tools.cipher=#Cipher (AES or XTEA) +tools.commandLine=#Command line +tools.convertTraceFile=#ConvertTraceFile +tools.convertTraceFile.help=#Converts a .trace.db file to a Java application and SQL script. +tools.createCluster=#CreateCluster +tools.createCluster.help=#Creates a cluster from a standalone database. +tools.databaseName=#Database name +tools.decryptionPassword=#Decryption password +tools.deleteDbFiles=#DeleteDbFiles +tools.deleteDbFiles.help=#Deletes all files belonging to a database. +tools.directory=#Directory +tools.encryptionPassword=#Encryption password +tools.javaDirectoryClassName=#Java directory and class name +tools.recover=#Recover +tools.recover.help=#Helps recovering a corrupted database. +tools.restore=#Restore +tools.restore.help=#Restores a database backup. +tools.result=#Result +tools.run=Executar comando +tools.runScript=#RunScript +tools.runScript.help=#Runs a SQL script. +tools.script=#Script +tools.script.help=#Allows to convert a database to a SQL script for backup or migration. 
+tools.scriptFileName=#Script file name +tools.serverList=#Server list +tools.sourceDatabaseName=#Source database name +tools.sourceDatabaseURL=#Source database URL +tools.sourceDirectory=#Source directory +tools.sourceFileName=#Source file name +tools.sourceScriptFileName=#Source script file name +tools.targetDatabaseName=#Target database name +tools.targetDatabaseURL=#Target database URL +tools.targetDirectory=#Target directory +tools.targetFileName=#Target file name +tools.targetScriptFileName=#Target script file name +tools.traceFileName=#Trace file name +tree.admin=Administrador +tree.current=Valor corrente +tree.hashed=Hashed +tree.increment=Incrementar +tree.indexes=Índices +tree.nonUnique=Não único +tree.sequences=Sequências +tree.unique=Único +tree.users=Utilizadores diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_ru.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_ru.prop new file mode 100644 index 0000000000000..155b287f6d106 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_ru.prop @@ -0,0 +1,163 @@ +.translator=Vlad Alexahin +a.help=Помощь +a.language=Русский +a.lynxNotSupported=Извините, Lynx пока что не поддерживается +a.password=Пароль +a.remoteConnectionsDisabled=Извините, удаленные подключения ('webAllowOthers') запрещены на этом сервере. 
+a.title=H2 Console +a.tools=Инструменты +a.user=Пользователь Имя +admin.executing=Выполняется +admin.ip=IP +admin.lastAccess=Последний Вход +admin.lastQuery=Последний Запрос +admin.no=нет +admin.notConnected=нет соединения +admin.url=URL +admin.yes=да +adminAllow=Разрешенные клиенты +adminConnection=Безопасность подключения +adminHttp=Используйте незашифрованные HTTP-соединения +adminHttps=Используйте SSL (HTTPS) соединения +adminLocal=Разрешены только локальные подключения +adminLogin=Администратор Логин +adminLoginCancel=Отменить +adminLoginOk=OK +adminLogout=Выход +adminOthers=Разрешить удаленные подключения +adminPort=Номер порта +adminPortWeb=Порт web-сервера +adminRestart=Изменения вступят в силу после перезагрузки сервера. +adminSave=Сохранить +adminSessions=Активные сессии +adminShutdown=Выключить +adminTitle=Настройки консоли H2 +adminTranslateHelp=Перевести или улучшить перевод консоли H2 +adminTranslateStart=Перевести +helpAction=Действие +helpAddAnotherRow=Добавить строку +helpAddDrivers=Добавляем драйвер базы данных +helpAddDriversText=Дополнительные драйверы базы данных могут быть зарегистрированы добавлением соответствующих Jar-файлов в переменную среды H2DRIVERS или в CLASSPATH. Пример (Windows): Чтобы добавить библиотеку драйвера базы данных C:/Programs/hsqldb/lib/hsqldb.jar, установите в переменную среды H2DRIVERS значение C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Добавить новую строку +helpCommandHistory=Показывает историю выполненных команд +helpCreateTable=Создать новую таблицу +helpDeleteRow=Удалить строку +helpDisconnect=Отключиться от базы данных +helpDisplayThis=Показывает это окно помощи +helpDropTable=Удаляет таблицу, если она уже существует +helpExecuteCurrent=Выполнить текущий SQL-запрос +helpExecuteSelected=Выполнить SQL-запрос, выделенный в тексте +helpIcon=Иконка +helpImportantCommands=Важные команды +helpOperations=Операции +helpQuery=Запрос к таблице +helpSampleSQL=Примеры скриптов SQL +helpStatements=SQL-запрос +helpUpdate=Изменить данные в строке +helpWithColumnsIdName=с колонками ID и NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Соединиться +login.driverClass=Класс драйвера +login.driverNotFound=Драйвер базы данных не найден
    Посмотрите в Помощи, как добавить драйвер базы данных +login.goAdmin=Настройки +login.jdbcUrl=JDBC URL +login.language=Язык +login.login=Логин +login.remove=Удалить +login.save=Сохранить +login.savedSetting=Сохранить настройки +login.settingName=Имя настройки +login.testConnection=Тестовое соединение +login.testSuccessful=Тест прошел успешно +login.welcome=H2 Console +result.1row=1 строка +result.autoCommitOff=Авто-выполнение сейчас ВЫКЛЮЧЕНО +result.autoCommitOn=Авто-выполнение сейчас ВКЛЮЧЕНО +result.bytes=байт +result.characters=символов +result.maxrowsSet=Установлено максимальное количество строк +result.noRows=нет строк +result.noRunningStatement=Сейчас нету выполняемых запросов +result.rows=строки +result.statementWasCanceled=Запрос был отменен +result.updateCount=Обновить количество +resultEdit.action=Действие +resultEdit.add=Добавить +resultEdit.cancel=Отменить +resultEdit.delete=Удалить +resultEdit.edit=Править +resultEdit.editResult=Править +resultEdit.save=Сохранить +toolbar.all=Все +toolbar.autoCommit=Авто-выполнение +toolbar.autoComplete=Авто-завершение +toolbar.autoComplete.full=Все +toolbar.autoComplete.normal=Нормальные +toolbar.autoComplete.off=Выключено +toolbar.autoSelect=Автовыбор +toolbar.autoSelect.off=Выключено +toolbar.autoSelect.on=Включено +toolbar.cancelStatement=Отменить текущий запрос +toolbar.clear=Очистить +toolbar.commit=Выполнить +toolbar.disconnect=Отсоединиться +toolbar.history=История команд +toolbar.maxRows=Максимальное количество строк +toolbar.refresh=Обновить +toolbar.rollback=Вернуть назад +toolbar.run=Выполнить +toolbar.runSelected=Выполнить выделенное +toolbar.sqlStatement=SQL-запрос +tools.backup=Резервное копирование +tools.backup.help=Создает резервную копию базы данных. +tools.changeFileEncryption=Изменить шифрование файла +tools.changeFileEncryption.help=Позволяет изменить алгоритм шифрования файлов базы данных и пароль. 
+tools.cipher=Алгоритм (AES или XTEA) +tools.commandLine=Командная строка +tools.convertTraceFile=Преобразовать trace-файл +tools.convertTraceFile.help=Преобразует .trace.db файл в приложение Java и скрипт SQL. +tools.createCluster=Создать кластер +tools.createCluster.help=Создание кластера из автономной базы данных. +tools.databaseName=Имя базы данных +tools.decryptionPassword=Пароль дешифровки +tools.deleteDbFiles=Удалить файлы БД +tools.deleteDbFiles.help=Удаляет все файлы, относящиеся к базе данных. +tools.directory=Каталог +tools.encryptionPassword=Пароль шифровки +tools.javaDirectoryClassName=Каталог и имя класса +tools.recover=Восстановление +tools.recover.help=Восстановление поврежденной базы данных. +tools.restore=Восстановить +tools.restore.help=Восстановление из резервной копии базы данных. +tools.result=Результат +tools.run=Выполнить +tools.runScript=Выполнить скрипт +tools.runScript.help=Выполнение скрипта SQL. +tools.script=Скрипт +tools.script.help=Позволяет преобразовать базу данных в скрипт SQL для резервного копирования или миграции. 
+tools.scriptFileName=Имя файла скрипта SQL +tools.serverList=Список серверов +tools.sourceDatabaseName=Имя базы данных источника +tools.sourceDatabaseURL=URL базы данных источника +tools.sourceDirectory=Каталог источника +tools.sourceFileName=Имя файла источника +tools.sourceScriptFileName=Имя файла скрипта источника +tools.targetDatabaseName=Имя базы данных назначения +tools.targetDatabaseURL=URL базы данных назначения +tools.targetDirectory=Каталог назначения +tools.targetFileName=Имя файла назначения +tools.targetScriptFileName=Имя файла скрипта назначения +tools.traceFileName=Имя trace-файла +tree.admin=Администратор +tree.current=Текущее значение +tree.hashed=Hashed +tree.increment=Увеличить +tree.indexes=Индексы +tree.nonUnique=Неуникальное +tree.sequences=Последовательность +tree.unique=Уникальное +tree.users=Пользователи diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_sk.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_sk.prop new file mode 100644 index 0000000000000..2d9c227666880 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_sk.prop @@ -0,0 +1,163 @@ +.translator=Ľubomír Grajciar +a.help=Pomocník +a.language=Slovensky +a.lynxNotSupported=Prepáčte, Lynx zatiaľ nie je podporovaný +a.password=Heslo +a.remoteConnectionsDisabled=Prepáčte, vzdialené pripojenia ('webAllowOthers') sú pre tento server zakázané. 
+a.title=H2 konzola +a.tools=Nástroje +a.user=Meno používateľa +admin.executing=Vykonávanie +admin.ip=IP +admin.lastAccess=Posledný prístup +admin.lastQuery=Posledný príkaz +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=Povolenia klientov +adminConnection=Bezpečnosť pripojenia +adminHttp=Použiť nešifrované HTTP pripojenia +adminHttps=Použiť šifrované SSL (HTTPS) pripojenia +adminLocal=Povoliť iba lokálne pripojenia +adminLogin=Prihlásenie Správcu +adminLoginCancel=Zrušiť +adminLoginOk=OK +adminLogout=Odhlásiť +adminOthers=Povoliť pripojenia z iných počítačov +adminPort=Číslo portu +adminPortWeb=Číslo portu Web servera +adminRestart=Zmeny sa vykonajú po reštarte servera +adminSave=Uložiť +adminSessions=Aktívne pripojenia +adminShutdown=Ukončiť H2 server +adminTitle=Nastavenia H2 konzoly +adminTranslateHelp=Preložiť alebo zlepšiť preklad H2 konzoly. +adminTranslateStart=Preložiť +helpAction=Akcia +helpAddAnotherRow=Pridať ďalší riadok +helpAddDrivers=Pridanie databázových ovládačov +helpAddDriversText=Ďalšie databázové ovládače môžu byť zaregistrované pridaním mena jar súboru ovládača aj s cestou do premennej prostredia H2DRIVERS alebo CLASSPATH. Napríklad (Windows): na pridanie knižnice databázového ovládača C:/Programs/hsqldb/lib/hsqldb.jar, nastavte premennú prostredia H2DRIVERS na C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Pridať nový riadok +helpCommandHistory=Ukáž históriu príkazov +helpCreateTable=Vytvoriť novú tabuľku +helpDeleteRow=Odstrániť riadok +helpDisconnect=Odpojiť sa od databázy +helpDisplayThis=Zobraziť túto stránku pomocníka +helpDropTable=Zmazať tabuľku pokiaľ existuje +helpExecuteCurrent=Vykonať aktuálny SQL príkaz +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Ikona +helpImportantCommands=Dôležité príkazy +helpOperations=Operácie +helpQuery=Vypýtať dáta z tabuľky +helpSampleSQL=Príklad SQL skriptu +helpStatements=SQL príkazy +helpUpdate=Zmeniť dáta v riadku +helpWithColumnsIdName=so stĺpcami ID a NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Pripojiť +login.driverClass=Trieda ovládača +login.driverNotFound=Databázový ovládač nebol nájdený
    Použite pomocníka na zistenie, ako pridať databázový ovládač +login.goAdmin=Nastavenia +login.jdbcUrl=JDBC URL +login.language=Jazyk +login.login=Prihlásenie +login.remove=Odstrániť +login.save=Uložiť +login.savedSetting=Uložené nastavenia +login.settingName=Meno nastavenia +login.testConnection=Test pripojenia +login.testSuccessful=Test úspešný +login.welcome=H2 konzola +result.1row=1 riadok +result.autoCommitOff=Automatický commit je teraz VYPNUTÝ +result.autoCommitOn=Automatický commit je teraz ZAPNUTÝ +result.bytes=bajtov +result.characters=znakov +result.maxrowsSet=Maximalny počet riadkov v skupine +result.noRows=žiadne riadky +result.noRunningStatement=Momentálne sa nevykonáva žiadny príkaz +result.rows=riadky/ov +result.statementWasCanceled=Príkaz bol zrušený +result.updateCount=Počet aktualizácii +resultEdit.action=Akcia +resultEdit.add=Pridať +resultEdit.cancel=Zrušiť +resultEdit.delete=Zmazať +resultEdit.edit=Upraviť +resultEdit.editResult=Upraviť +resultEdit.save=Uložiť +toolbar.all=Všetko +toolbar.autoCommit=Auto commit +toolbar.autoComplete=Auto dokončovanie +toolbar.autoComplete.full=Plné +toolbar.autoComplete.normal=Normálne +toolbar.autoComplete.off=Vypnuté +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Vypnuté +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Zrušiť aktuálny príkaz +toolbar.clear=Vyčistiť +toolbar.commit=Commit (schváliť) +toolbar.disconnect=Odpojiť +toolbar.history=História príkazov +toolbar.maxRows=Max. riadkov +toolbar.refresh=Obnoviť +toolbar.rollback=Rollback (odvolať) +toolbar.run=Spustiť +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=SQL príkaz +tools.backup=Zálohovať +tools.backup.help=Vytvorenie zálohy databázy +tools.changeFileEncryption=ZmeniťŠifrovanieSúboru +tools.changeFileEncryption.help=Umožní zmeniť heslo a algoritmus šifrovania databázového súboru. 
+tools.cipher=Šifra (AES alebo XTEA) +tools.commandLine=Príkazový riadok +tools.convertTraceFile=PrekonvertovaťTraceSúbor +tools.convertTraceFile.help=Prekonvertuje .trace.db súbor na Java program a SQL skript. +tools.createCluster=VytvoriťKluster +tools.createCluster.help=Vytvoriť kluster zo samostatnej databázy. +tools.databaseName=Meno databázy +tools.decryptionPassword=Heslo na dešifrovanie +tools.deleteDbFiles=ZmazaťDbSúbory +tools.deleteDbFiles.help=Vymaže všetky súbory databázy +tools.directory=Priečinok +tools.encryptionPassword=Heslo na šifrovanie +tools.javaDirectoryClassName=Java priečinok a meno triedy +tools.recover=Opraviť +tools.recover.help=Umožní opraviť poškodenú databázu. +tools.restore=Obnoviť +tools.restore.help=Obnoviť databázu zo zálohy. +tools.result=Výsledok +tools.run=Spustiť +tools.runScript=SpustiťSkript +tools.runScript.help=Spustí SQL skript. +tools.script=Skript +tools.script.help=Umožní vytvoriť z databázy SQL skript na zálohu alebo prenesenie databázy. +tools.scriptFileName=Meno súboru skriptu +tools.serverList=Zoznam serverov +tools.sourceDatabaseName=Meno zdrojovej databázy +tools.sourceDatabaseURL=URL zdrojovej databázy +tools.sourceDirectory=Zdrojový priečinok +tools.sourceFileName=Meno zdrojového súboru +tools.sourceScriptFileName=Meno súboru zdrojového skriptu +tools.targetDatabaseName=Meno cieľovej databázy +tools.targetDatabaseURL=URL cieľovej databázy +tools.targetDirectory=Cieľový priečinok +tools.targetFileName=Meno cieľového súboru +tools.targetScriptFileName=Meno súboru cieľového skriptu +tools.traceFileName=Meno trace súboru +tree.admin=Admin +tree.current=Aktuálna hodnota +tree.hashed=Hashed (s kontrolným súčtom) +tree.increment=Inkrement +tree.indexes=Indexy +tree.nonUnique=Non unique (nie je jedinečný) +tree.sequences=Sekvencie +tree.unique=Unique (jedinečný) +tree.users=Používatelia diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_tr.prop 
b/modules/h2/src/main/java/org/h2/server/web/res/_text_tr.prop new file mode 100644 index 0000000000000..deac77695c41e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_tr.prop @@ -0,0 +1,163 @@ +.translator=Rıdvan Ağar +a.help=Yardım +a.language=Türkçe +a.lynxNotSupported=Web tarayıcınız HTML Frames'i desteklemiyor. Frames (ve Javascript) desteği gerekli. +a.password=Şifre +a.remoteConnectionsDisabled=Başka bilgisayarlardan, veri tabanına bağlanma izni henüz ayarlanmamış ('webAllowOthers'). +a.title=H2 Konsolu +a.tools=Araçlar +a.user=Kullanıcı adı +admin.executing=Aktif +admin.ip=IP +admin.lastAccess=Son bağlantı +admin.lastQuery=Son komut +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=İzin verilen bağlantılar +adminConnection=Bağlantı güvenliği +adminHttp=Şifrelenmemiş HTTP bağlantıları +adminHttps=Şifrelenmiş HTTP bağlantıları +adminLocal=Sadece yerel bağlantılara izin ver +adminLogin=Yönetim girişi +adminLoginCancel=İptal et +adminLoginOk=Tamam +adminLogout=Bitir +adminOthers=Başka bilgisayarlardan, veri tabanına bağlanma izni ver +adminPort=Port +adminPortWeb=Web-Server Port +adminRestart=Değişiklikler veri tabanı hizmetçisinin yeniden başlatılmasıyla etkinlik kazanacak. +adminSave=Kaydet +adminSessions=Aktif bağlantılar +adminShutdown=Kapat +adminTitle=H2 Konsol ayarları +adminTranslateHelp=H2 Kullanıcı arayüzünü (H2 Konsol) dilinize çevirin yada çeviriyi düzeltin. +adminTranslateStart=Çeviri +helpAction=Aksiyon +helpAddAnotherRow=Yeni bir satır ekle +helpAddDrivers=Veritabanı sürücüsü ekle +helpAddDriversText=Yeni veri tabanı sürücüleri eklemek için, sürücü dosyalarının yerini H2DRIVERS yada CLASSPATH çevre değişkenlerine ekleyebilirsiniz. Örnek (Windows): Sürücü dosyası C:/Programs/hsqldb/lib/hsqldb.jar ise H2DRIVERS değişkenini C:/Programs/hsqldb/lib/hsqldb.jar olarak girin. 
+helpAddRow=Veri tabanına yeni bir satır ekler +helpCommandHistory=Komut tarihçesini gösterir +helpCreateTable=Veri tabanına yeni bir tabela ekler +helpDeleteRow=Tabeladan satırı siler +helpDisconnect=Veri tabanı bağlantısını keser +helpDisplayThis=Bu yardım sayfasını gösterir +helpDropTable=Var ise, istenen tabelayı siler +helpExecuteCurrent=Girilen SQL komutunu icra eder +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Şalter +helpImportantCommands=Önemli komutlar +helpOperations=İşlemler +helpQuery=Tabela içeriğini gösterir +helpSampleSQL=Örnek SQL +helpStatements=SQL komutları +helpUpdate=Bir tabeladaki belli bir satır içeriğini değiştirir +helpWithColumnsIdName=Colon isimleriyle birlikte +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Bağlan +login.driverClass=Veri tabanı sürücü sınıfı +login.driverNotFound=İstenilen veri tabanı sürücüsü bulunamadı
    Sürücü ekleme konusunda bilgi için Yardım'a başvurunuz +login.goAdmin=Seçenekler +login.jdbcUrl=JDBC URL +login.language=Dil +login.login=Giriş +login.remove=Sil +login.save=Kaydet +login.savedSetting=Kayıtlı ayarlar +login.settingName=Ayar adı +login.testConnection=Bağlantıyı test et +login.testSuccessful=Test başarılı +login.welcome=H2 Konsolu +result.1row=1 dizi +result.autoCommitOff=Auto-Commit kapatıldı +result.autoCommitOn=Auto-Commit açıldı +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=Maximum dizi sayısı ayarı yapıldı +result.noRows=Hiç bir bilgi yok +result.noRunningStatement=Şu an icra edilen bir komut yok +result.rows=Dizi +result.statementWasCanceled=Komut iptal edildi +result.updateCount=Güncelleştirilen dizi sayısı +resultEdit.action=Aksiyon +resultEdit.add=Ekle +resultEdit.cancel=İptal +resultEdit.delete=Sil +resultEdit.edit=Değiştir +resultEdit.editResult=Değiştir +resultEdit.save=Kaydet +toolbar.all=Hepsi +toolbar.autoCommit=Auto-Commit +toolbar.autoComplete=Auto-Complete +toolbar.autoComplete.full=Hepsi +toolbar.autoComplete.normal=Normal +toolbar.autoComplete.off=Kapalı +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Kapalı +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Yürütülen işlemi iptal et +toolbar.clear=Temizle +toolbar.commit=Değişiklikleri kaydet +toolbar.disconnect=Bağlantıyı kes +toolbar.history=Verilmiş olan komutlar +toolbar.maxRows=Maximum dizi sayısı +toolbar.refresh=Güncelleştir +toolbar.rollback=Değişiklikleri geri al +toolbar.run=İşlemi yürüt +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=SQL komutu +tools.backup=Yedekle +tools.backup.help=Bir veritabanının yedeklemesini yapar. +tools.changeFileEncryption=DosyaKodla +tools.changeFileEncryption.help=Veritabanının dosya kodlama şifresi ve türünü belirler. 
+tools.cipher=Şifreleme türü (AES yada XTEA) +tools.commandLine=Komut +tools.convertTraceFile=TraceDosyasiDönüştür +tools.convertTraceFile.help=Verilen bir trace.db dosyasını Java uygulamasına ve SQL-Betiğe çevirir. +tools.createCluster=KümeYarat +tools.createCluster.help=Bağımsız bir veritabanından bir küme (Cluster) yaratır. +tools.databaseName=Veritabanının adı +tools.decryptionPassword=Kod çözme şifresi +tools.deleteDbFiles=VeritabanıDosyalarınıSil +tools.deleteDbFiles.help=Bir veritabanına ait bütün dosyaları siler. +tools.directory=Dizelge +tools.encryptionPassword=Kodlama şifresi +tools.javaDirectoryClassName=Java dizelge ve sınıf adı +tools.recover=Kurtar +tools.recover.help=Bozuk bir veritabanının kurtarılmasına yardımcı olur. +tools.restore=YenidenYükle +tools.restore.help=Bir veritabanının yedeklemesini yeniden yükler. +tools.result=Sonuç +tools.run=İşlemi yürüt +tools.runScript=BetikÇalıştır +tools.runScript.help=Bir betik dosyası çalıştırır. +tools.script=Betik +tools.script.help=Bir veritabanının yedekleme yada taşıma amaçlı SQL-Betiğe çevrilmesini sağlar +tools.scriptFileName=Betik dosya adı +tools.serverList=Hizmetçi listesi +tools.sourceDatabaseName=Kaynak veritabanının adı +tools.sourceDatabaseURL=Kaynak veritabanının URL'u +tools.sourceDirectory=Kaynak dizelge +tools.sourceFileName=Kaynak dosya adı +tools.sourceScriptFileName=Kaynak betik dosya adı +tools.targetDatabaseName=Hedef veritabanının adı +tools.targetDatabaseURL=Hedef veritabanının URL'u +tools.targetDirectory=Hedef dizelge +tools.targetFileName=Hedef dosya adı +tools.targetScriptFileName=Hedef betik dosya adı +tools.traceFileName=Trace dosya adı +tree.admin=Yönetici +tree.current=Güncel değer +tree.hashed=Hash tabanlı +tree.increment=Artır +tree.indexes=Indexler +tree.nonUnique=eşsiz değil +tree.sequences=Dizinler +tree.unique=Eşsiz +tree.users=Kullanıcı diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_uk.prop 
b/modules/h2/src/main/java/org/h2/server/web/res/_text_uk.prop new file mode 100644 index 0000000000000..8b32ea913a192 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_uk.prop @@ -0,0 +1,163 @@ +.translator=Igor Dobrovolskyi +a.help=Допомога +a.language=Українська +a.lynxNotSupported=Вибачте, але Lynx не підтримується +a.password=Пароль +a.remoteConnectionsDisabled=Вибачте, віддалені підключення ('webAllowOthers') на цьому сервері заборонені. +a.title=Консоль H2 +a.tools=#Tools +a.user=Iм'я користувача +admin.executing=Виконується +admin.ip=IP +admin.lastAccess=Останній доступ +admin.lastQuery=Останній запит +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=Дозволені клієнти +adminConnection=Безпека під'єднання +adminHttp=Використовуйте незашифровані HTTP під'єднання +adminHttps=Використовуйте зашифровані SSL (HTTPS) під'єднання +adminLocal=Дозволено лише локальні під'єднання +adminLogin=Адміністративний логін +adminLoginCancel=Відмінити +adminLoginOk=OK +adminLogout=Завершення сеансу +adminOthers=Дозволити під'єднання з інших копм'ютерів +adminPort=Номер порта +adminPortWeb=Номер порта веб сервера +adminRestart=Зміни вступлять в силу після перезавантаження сервера. +adminSave=Зберегти +adminSessions=Активні сесії +adminShutdown=Виключити +adminTitle=Настройки консолі H2 +adminTranslateHelp=#Translate or improve the translation of the H2 Console. +adminTranslateStart=#Translate +helpAction=Дія +helpAddAnotherRow=Додати новий рядок +helpAddDrivers=Додати драйвер бази даних +helpAddDriversText=Нові драйвери баз даних можуть бути зареєстровані додаванням шляху до Jar-файлу з драйвером до змінної оточення H2DRIVERS або CLASSPATH. Наприклад (Windows): Щоб додати драйвер бази даних C:/Programs/hsqldb/lib/hsqldb.jar, встановіть змінну оточення H2DRIVERS рівною C:/Programs/hsqldb/lib/hsqldb.jar. 
+helpAddRow=Додати новий рядок +helpCommandHistory=Показує історію команд +helpCreateTable=Створити нову таблицю +helpDeleteRow=Видалити рядок +helpDisconnect=Від'єднує від бази даних +helpDisplayThis=Показує цю сторінку допомоги +helpDropTable=Видалити таблицю, якщо вона існує +helpExecuteCurrent=Виконує поточний SQL запит +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=Iконка +helpImportantCommands=Важливі команди +helpOperations=Операції +helpQuery=Запит до таблиці +helpSampleSQL=Приклад SQL запиту +helpStatements=SQL запити +helpUpdate=Змінити дані в рядку +helpWithColumnsIdName=з колонками ID і NAME +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=Під'єднатись +login.driverClass=Driver Class +login.driverNotFound=Дравер бази даних не знайдено
    Подивіться в допомозі як додати нові драйвери +login.goAdmin=Настройки +login.jdbcUrl=JDBC URL +login.language=Мова +login.login=Логін +login.remove=Видалити +login.save=Зберегти +login.savedSetting=Збережені налаштування +login.settingName=Iм'я налаштування +login.testConnection=Тестове під'єднання +login.testSuccessful=Тест пройдено успішно +login.welcome=Консоль H2 +result.1row=1 рядок +result.autoCommitOff=Автозбереження виключене +result.autoCommitOn=Автозбереження включене +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=Встановлено максимальну кількість рядків +result.noRows=немає рядків +result.noRunningStatement=В даний момент не виконується жоден запит +result.rows=рядків +result.statementWasCanceled=Запит було відмінено +result.updateCount=Кількість змінених +resultEdit.action=Дія +resultEdit.add=Додати +resultEdit.cancel=Відмінити +resultEdit.delete=Видалити +resultEdit.edit=Редагувати +resultEdit.editResult=Редагувати +resultEdit.save=Зберегти +toolbar.all=Всі +toolbar.autoCommit=Автозбереження +toolbar.autoComplete=Авто доповнення +toolbar.autoComplete.full=Повне +toolbar.autoComplete.normal=Нормальне +toolbar.autoComplete.off=Виключене +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=Виключене +toolbar.autoSelect.on=#On +toolbar.cancelStatement=Відмінити поточний запит +toolbar.clear=Очистити +toolbar.commit=Підтвердити зміни +toolbar.disconnect=Від'єднатись +toolbar.history=Iсторія команд +toolbar.maxRows=Максимальна кількість рядків +toolbar.refresh=Оновити +toolbar.rollback=Вернути назад +toolbar.run=Виконати +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=SQL запит +tools.backup=#Backup +tools.backup.help=#Creates a backup of a database. +tools.changeFileEncryption=#ChangeFileEncryption +tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm. 
+tools.cipher=#Cipher (AES or XTEA) +tools.commandLine=#Command line +tools.convertTraceFile=#ConvertTraceFile +tools.convertTraceFile.help=#Converts a .trace.db file to a Java application and SQL script. +tools.createCluster=#CreateCluster +tools.createCluster.help=#Creates a cluster from a standalone database. +tools.databaseName=#Database name +tools.decryptionPassword=#Decryption password +tools.deleteDbFiles=#DeleteDbFiles +tools.deleteDbFiles.help=#Deletes all files belonging to a database. +tools.directory=#Directory +tools.encryptionPassword=#Encryption password +tools.javaDirectoryClassName=#Java directory and class name +tools.recover=#Recover +tools.recover.help=#Helps recovering a corrupted database. +tools.restore=#Restore +tools.restore.help=#Restores a database backup. +tools.result=#Result +tools.run=Виконати +tools.runScript=#RunScript +tools.runScript.help=#Runs a SQL script. +tools.script=#Script +tools.script.help=#Allows to convert a database to a SQL script for backup or migration. 
+tools.scriptFileName=#Script file name +tools.serverList=#Server list +tools.sourceDatabaseName=#Source database name +tools.sourceDatabaseURL=#Source database URL +tools.sourceDirectory=#Source directory +tools.sourceFileName=#Source file name +tools.sourceScriptFileName=#Source script file name +tools.targetDatabaseName=#Target database name +tools.targetDatabaseURL=#Target database URL +tools.targetDirectory=#Target directory +tools.targetFileName=#Target file name +tools.targetScriptFileName=#Target script file name +tools.traceFileName=#Trace file name +tree.admin=Адмін +tree.current=Поточне значення +tree.hashed=Хешований +tree.increment=Збільшити +tree.indexes=Iндекси +tree.nonUnique=Неунікальне +tree.sequences=Послідовності +tree.unique=Унікальне +tree.users=Користувачі diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_zh_cn.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_zh_cn.prop new file mode 100644 index 0000000000000..aac9fffdf945f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_zh_cn.prop @@ -0,0 +1,163 @@ +.translator=#Thomas Mueller +a.help=帮助 +a.language=中文 (简体) +a.lynxNotSupported=抱歉, 目前还不支持Lynx +a.password=密码 +a.remoteConnectionsDisabled=抱歉, 服务器上的远程计算机连接被禁用. +a.title=H2 控制台 +a.tools=工具 +a.user=用户名 +admin.executing=执行中 +admin.ip=IP地址 +admin.lastAccess=最近访问 +admin.lastQuery=最近查询 +admin.no=否 +admin.notConnected=未连接 +admin.url=URL +admin.yes=是 +adminAllow=允许客户端连接 +adminConnection=连接安全 +adminHttp=使用非加密的 HTTP 连接 +adminHttps=使用加密的 SSL (HTTPS) 连接 +adminLocal=只允许本地连接 +adminLogin=管理员登录 +adminLoginCancel=取消 +adminLoginOk=确认 +adminLogout=注销 +adminOthers=允许来自其他远程计算机的连接 +adminPort=端口号 +adminPortWeb=Web 服务器端口号 +adminRestart=更新配置将在重启服务器后生效. 
+adminSave=保存 +adminSessions=活动的会话 +adminShutdown=关闭 +adminTitle=H2 控制台配置 +adminTranslateHelp=创建或改进H2控制台的翻译 +adminTranslateStart=翻译 +helpAction=活动 +helpAddAnotherRow=增加另一行 +helpAddDrivers=增加数据库驱动 +helpAddDriversText=可以通过添加系统环境变量H2DRIVERS 或者 CLASSPATH 来增加数据库驱动注册。例如(Windows):要增加数据库驱动C:/Programs/hsqldb/lib/hsqldb.jar,可以增加系统环境变量H2DRIVERS并设置到C:/Programs/hsqldb/lib/hsqldb.jar。 +helpAddRow=增加新的一行 +helpCommandHistory=显示历史SQL命令 +helpCreateTable=创建一个新表 +helpDeleteRow=删除一行 +helpDisconnect=断开数据库连接 +helpDisplayThis=显示帮助页 +helpDropTable=如果表存在删除它 +helpExecuteCurrent=执行当前SQL语句 +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=图标 +helpImportantCommands=重要的命令 +helpOperations=操作 +helpQuery=查询表 +helpSampleSQL=样例SQL脚本 +helpStatements=SQL 语句 +helpUpdate=改变一行数据 +helpWithColumnsIdName=用ID和NAME列 +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=连接 +login.driverClass=驱动类 +login.driverNotFound=找不到数据库驱动
    请参考帮助去添加数据库驱动 +login.goAdmin=配置 +login.jdbcUrl=JDBC URL +login.language=语言 +login.login=登录 +login.remove=删除 +login.save=保存 +login.savedSetting=保存的连接设置 +login.settingName=连接设置名称 +login.testConnection=测试连接 +login.testSuccessful=测试成功 +login.welcome=H2 控制台 +result.1row=1 行 +result.autoCommitOff=自动提交已关闭 +result.autoCommitOn=自动提交已打开 +result.bytes=字节 +result.characters=字符 +result.maxrowsSet=最大返回行数被设置 +result.noRows=无返回行 +result.noRunningStatement=当前没有正在执行的SQL语句 +result.rows=行 +result.statementWasCanceled=SQL 语句被取消 +result.updateCount=更新行数 +resultEdit.action=活动 +resultEdit.add=增加 +resultEdit.cancel=取消 +resultEdit.delete=删除 +resultEdit.edit=编辑 +resultEdit.editResult=编辑结果集 +resultEdit.save=保存 +toolbar.all=全部 +toolbar.autoCommit=自动提交 +toolbar.autoComplete=自动完成 +toolbar.autoComplete.full=完全 +toolbar.autoComplete.normal=正常 +toolbar.autoComplete.off=关闭 +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=关闭 +toolbar.autoSelect.on=#On +toolbar.cancelStatement=取消当前的执行语句 +toolbar.clear=清除 +toolbar.commit=提交 +toolbar.disconnect=断开连接 +toolbar.history=历史SQL命令 +toolbar.maxRows=最大行数 +toolbar.refresh=刷新 +toolbar.rollback=回滚 +toolbar.run=执行 +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=SQL 语句 +tools.backup=备份 +tools.backup.help=创建一个数据库备份 +tools.changeFileEncryption=更改文件加密方式 +tools.changeFileEncryption.help=允许更改数据库文件加密方式和算法 +tools.cipher=加密方式 (AES 或 XTEA) +tools.commandLine=命令行 +tools.convertTraceFile=转换跟踪文件 +tools.convertTraceFile.help=转换 .trace.db 文件为Java程序和SQL脚本. +tools.createCluster=创建集群 +tools.createCluster.help=从单一数据库创建集群. 
+tools.databaseName=数据库名称 +tools.decryptionPassword=密码明文 +tools.deleteDbFiles=删除数据库文件 +tools.deleteDbFiles.help=删除数据库的所有文件 +tools.directory=目录 +tools.encryptionPassword=密码密文 +tools.javaDirectoryClassName=Java目录和类名 +tools.recover=恢复 +tools.recover.help=帮助恢复一个已崩溃的数据库 +tools.restore=还原 +tools.restore.help=还原数据库备份 +tools.result=结果 +tools.run=运行 +tools.runScript=运行脚本 +tools.runScript.help=运行SQL脚本 +tools.script=脚本 +tools.script.help=允许为备份或迁移而转换数据库为SQL脚本 +tools.scriptFileName=脚本文件名 +tools.serverList=服务器列表 +tools.sourceDatabaseName=源数据库名 +tools.sourceDatabaseURL=源数据库 URL +tools.sourceDirectory=源目录 +tools.sourceFileName=源文件名 +tools.sourceScriptFileName=源脚本文件名 +tools.targetDatabaseName=目标数据库名 +tools.targetDatabaseURL=目标数据库 URL +tools.targetDirectory=目标目录 +tools.targetFileName=目标文件名 +tools.targetScriptFileName=目标脚本文件名 +tools.traceFileName=跟踪文件名 +tree.admin=管理 +tree.current=当前值 +tree.hashed=杂乱的 +tree.increment=增加 +tree.indexes=索引 +tree.nonUnique=不唯一 +tree.sequences=序列 +tree.unique=唯一 +tree.users=用户 diff --git a/modules/h2/src/main/java/org/h2/server/web/res/_text_zh_tw.prop b/modules/h2/src/main/java/org/h2/server/web/res/_text_zh_tw.prop new file mode 100644 index 0000000000000..cd3f35eb38e94 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/_text_zh_tw.prop @@ -0,0 +1,163 @@ +.translator=derek chao (Dept of Geog., CCU, Taiwan), 2008.04.20 +a.help=輔助說明 +a.language=中文 (繁體) +a.lynxNotSupported=抱歉, 目前還不支援Lynx瀏覽器 +a.password=密碼 +a.remoteConnectionsDisabled=抱歉, 本伺服器禁用遠端連接 ('webAllowOthers'). 
+a.title=H2 控制台 +a.tools=#Tools +a.user=使用者名稱 +admin.executing=執行 +admin.ip=IP位址 +admin.lastAccess=上次存取 +admin.lastQuery=上次查詢 +admin.no=#no +admin.notConnected=#not connected +admin.url=URL +admin.yes=#yes +adminAllow=允許連接的客戶端 +adminConnection=連接之安全性 +adminHttp=使用非加密的 HTTP 連接 +adminHttps=使用加密的 SSL (HTTPS) 連接 +adminLocal=只允許本地連接 +adminLogin=管理員登入 +adminLoginCancel=取消 +adminLoginOk=確定 +adminLogout=登出 +adminOthers=允許來自其他電腦的連接 +adminPort=通訊埠 +adminPortWeb=Web 伺服器的通訊埠 +adminRestart=伺服器重新啟動後修改才會生效. +adminSave=儲存 +adminSessions=作用中的進程 (Active Sessions) +adminShutdown=關閉 +adminTitle=H2 控制台個人喜好設定 +adminTranslateHelp=#Translate or improve the translation of the H2 Console. +adminTranslateStart=#Translate +helpAction=動作 +helpAddAnotherRow=增加另一資料列 (row) +helpAddDrivers=增加資料庫驅動程式 +helpAddDriversText=可以透過添加系統環境變量H2DRIVERS 或 CLASSPATH 指向Jar檔案的位置,來註冊資料庫的驅動程式。例如(在Windows系統內):添加系統環境變數H2DRIVERS並指向C:/Programs/hsqldb/lib/hsqldb.jar,即可將C:/Programs/hsqldb/lib/hsqldb.jar資料庫驅動程式庫加入。 +helpAddRow=增加新的資料列 (row) +helpCommandHistory=顯示命令史 +helpCreateTable=創建新資料表 +helpDeleteRow=刪除資料列 (row) +helpDisconnect=中斷資料庫連接 +helpDisplayThis=顯示輔助說明頁 +helpDropTable=刪除存在的資料表 +helpExecuteCurrent=執行目前的SQL述句 +helpExecuteSelected=#Executes the SQL statement defined by the text selection +helpIcon=圖示 +helpImportantCommands=重要命令 +helpOperations=操作 +helpQuery=資料表查詢 +helpSampleSQL=範例SQL腳本 +helpStatements=SQL述句 +helpUpdate=變更資料列 (row) 中的資料 +helpWithColumnsIdName=用ID和NAME欄位 +key.alt=#Alt +key.ctrl=#Ctrl +key.enter=#Enter +key.shift=#Shift +key.space=#Space +login.connect=連接 +login.driverClass=驅動程式類別 (Driver Class) +login.driverNotFound=沒有找到資料庫驅動程式
    請參考輔助說明來添加驅動程式 +login.goAdmin=個人喜好設定 +login.jdbcUrl=JDBC URL +login.language=語言 +login.login=登入 +login.remove=刪除 +login.save=儲存 +login.savedSetting=儲存的設定值 +login.settingName=設定的名稱 +login.testConnection=測試連接 +login.testSuccessful=測試成功 +login.welcome=H2 控制台 +result.1row=1列資料列 (row) +result.autoCommitOff=自動提交現在為關閉狀態 +result.autoCommitOn=自動提交現在為開啟狀態 +result.bytes=#bytes +result.characters=#characters +result.maxrowsSet=最大資料列 (rowcount) 設定完成 +result.noRows=無資料列 (rows) +result.noRunningStatement=目前沒有正在執行的SQL述句 +result.rows=資料列 (rows) +result.statementWasCanceled=SQL述句已取消 +result.updateCount=更新計數 +resultEdit.action=動作 +resultEdit.add=增加 +resultEdit.cancel=取消 +resultEdit.delete=刪除 +resultEdit.edit=編輯 +resultEdit.editResult=編輯 +resultEdit.save=儲存 +toolbar.all=全部 +toolbar.autoCommit=自動提交 +toolbar.autoComplete=自動完成 (complete) +toolbar.autoComplete.full=完整 +toolbar.autoComplete.normal=標準 +toolbar.autoComplete.off=關閉 +toolbar.autoSelect=#Auto select +toolbar.autoSelect.off=關閉 +toolbar.autoSelect.on=#On +toolbar.cancelStatement=取消目前的SQL述句 +toolbar.clear=清除 +toolbar.commit=提交 +toolbar.disconnect=中斷連接 +toolbar.history=命令史 +toolbar.maxRows=最大資料列 (rows) +toolbar.refresh=更新 +toolbar.rollback=退返 (rollback) +toolbar.run=執行 +toolbar.runSelected=#Run Selected +toolbar.sqlStatement=SQL 述句 +tools.backup=備份 +tools.backup.help=建立資料庫的備份 +tools.changeFileEncryption=#ChangeFileEncryption +tools.changeFileEncryption.help=#Allows changing the database file encryption password and algorithm. +tools.cipher=加密 (AES 或 XTEA) +tools.commandLine=命令列 +tools.convertTraceFile=轉換Trace檔案 +tools.convertTraceFile.help=將.trace.db檔案轉換成Java應用程式與SQL腳本 (script). 
+tools.createCluster=建立叢集 (Cluster) +tools.createCluster.help=自獨立的資料庫建立叢集 (Cluster) +tools.databaseName=資料庫名稱 +tools.decryptionPassword=明文密碼 +tools.deleteDbFiles=刪除資料庫檔案 +tools.deleteDbFiles.help=刪除某一資料庫的所有相關檔案 +tools.directory=目錄 +tools.encryptionPassword=密文密碼 +tools.javaDirectoryClassName=Java目錄 (directory) 與類別 (class) 名稱 +tools.recover=修復 +tools.recover.help=協助修復損壞的資料庫 +tools.restore=回存 +tools.restore.help=回存資料庫的備份 +tools.result=結果 +tools.run=執行 +tools.runScript=執行腳本 (Script) +tools.runScript.help=執行SQL腳本 (script) +tools.script=腳本 (Script) +tools.script.help=允許自資料庫轉換出為備份或搬遷用的SQL腳本(script) +tools.scriptFileName=腳本 (Script) 檔案名稱 +tools.serverList=伺服器清單 +tools.sourceDatabaseName=來源 (source) 資料庫名稱 +tools.sourceDatabaseURL=來源 (source) 資料庫URL +tools.sourceDirectory=來源目錄 (source directory) +tools.sourceFileName=來源 (source) 檔案名稱 +tools.sourceScriptFileName=來源腳本 (script) 檔案名稱 +tools.targetDatabaseName=目的 (target) 資料庫名稱 +tools.targetDatabaseURL=目的 (target) 資料庫URL +tools.targetDirectory=目的目錄 (target directory) +tools.targetFileName=目的 (target) 檔案名稱 +tools.targetScriptFileName=目的腳本 (script) 檔案名稱 +tools.traceFileName=Trace 檔案名稱 +tree.admin=管理 +tree.current=目前的數值 +tree.hashed=使用雜湊法 (hashed) +tree.increment=遞增 +tree.indexes=索引 +tree.nonUnique=非唯一 +tree.sequences=序列 +tree.unique=唯一 +tree.users=使用者 diff --git a/modules/h2/src/main/java/org/h2/server/web/res/admin.jsp b/modules/h2/src/main/java/org/h2/server/web/res/admin.jsp new file mode 100644 index 0000000000000..f3ffb954b89bb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/admin.jsp @@ -0,0 +1,125 @@ + + + + + + ${text.a.title} + + + +

    + ${text.adminTitle} +

    +

    + ${text.adminLogout} +

    +
    +
    +

    + ${text.adminAllow} +

    +

    + + + + + + + ${text.adminLocal}
    + + + + + + + + ${text.adminOthers}
    +

    +

    + ${text.adminConnection} +

    +

    + + + + + + + ${text.adminHttp}
    + + + + + + + + ${text.adminHttps}
    +

    +

    + ${text.adminPort} +

    +

    + ${text.adminPortWeb}: +

    +
    +

    + +

    +

    + ${text.adminRestart} +

    +
    +
    +

    +

    + +
    +

    +

    + ${text.adminTranslateHelp} +

    +
    +

    + ${text.adminSessions} +

    + + + + + + + + + + + + + + + + + + + +
    ${text.admin.ip}${text.admin.url}${text.a.user}${text.admin.executing}${text.admin.lastAccess}${text.admin.lastQuery}
    + ${item.ip} + + ${item.url} + + ${item.user} + + ${item.executing} + + ${item.lastAccess} + + ${item.lastQuery} +
    +
    +
    + +
    + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/adminLogin.jsp b/modules/h2/src/main/java/org/h2/server/web/res/adminLogin.jsp new file mode 100644 index 0000000000000..66bc3609ac64b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/adminLogin.jsp @@ -0,0 +1,44 @@ + + + + + ${text.a.title} + + + +
    + + + + + + + + + + + + + + + +
    +

    ${error}

    +
    + + + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/autoCompleteList.jsp b/modules/h2/src/main/java/org/h2/server/web/res/autoCompleteList.jsp new file mode 100644 index 0000000000000..8d68480c79d1f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/autoCompleteList.jsp @@ -0,0 +1 @@ +${autoCompleteList} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/background.gif b/modules/h2/src/main/java/org/h2/server/web/res/background.gif new file mode 100644 index 0000000000000..f8832e6b1a0a2 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/background.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/error.jsp b/modules/h2/src/main/java/org/h2/server/web/res/error.jsp new file mode 100644 index 0000000000000..0dae6dfbe9d6e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/error.jsp @@ -0,0 +1,16 @@ + + + + + ${text.a.title} + + + +

    + ${error} +

    + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/favicon.ico b/modules/h2/src/main/java/org/h2/server/web/res/favicon.ico new file mode 100644 index 0000000000000..fd5e73a416cf2 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/favicon.ico differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/frame.jsp b/modules/h2/src/main/java/org/h2/server/web/res/frame.jsp new file mode 100644 index 0000000000000..7ff3a122cc1f2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/frame.jsp @@ -0,0 +1,28 @@ + + + + + + ${text.a.title} + + + + + + + + + + + + + +<body> + ${text.a.lynxNotSupported} +</body> + + diff --git a/modules/h2/src/main/java/org/h2/server/web/res/header.jsp b/modules/h2/src/main/java/org/h2/server/web/res/header.jsp new file mode 100644 index 0000000000000..0c5e78f68c73c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/header.jsp @@ -0,0 +1,163 @@ + + + + + +${text.a.title} + + + +
    + + + + + + + + + + + + + + + +
    + + ${text.toolbar.disconnect} + + + + ${text.toolbar.refresh} + + + + + + ${text.toolbar.autoCommit}  + + + ${text.toolbar.rollback} + + + ${text.toolbar.commit} + + + +  ${text.toolbar.maxRows}:  + +   + + + ${text.toolbar.run} + + + + ${text.toolbar.runSelected} + + + + ${text.toolbar.cancelStatement} + + + + ${text.toolbar.history} + + + + ${text.toolbar.autoComplete}  +   + + ${text.toolbar.autoSelect}  + + + + ${text.a.help} + +
    +
    + + + diff --git a/modules/h2/src/main/java/org/h2/server/web/res/help.jsp b/modules/h2/src/main/java/org/h2/server/web/res/help.jsp new file mode 100644 index 0000000000000..407d0d0dae32a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/help.jsp @@ -0,0 +1,99 @@ + + + + + + ${text.a.title} + + + + + + +
    + +

    ${text.helpImportantCommands}

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ${text.a.help}${text.helpDisplayThis}
    ${text.toolbar.history}${text.helpCommandHistory}
    ${text.toolbar.run}${text.key.ctrl}+${text.key.enter}${text.helpExecuteCurrent}
    ${text.toolbar.runSelected}${text.key.shift}+${text.key.enter}${text.helpExecuteSelected}
    ${text.key.ctrl}+${text.key.space}${text.toolbar.autoComplete}
    ${text.toolbar.disconnect}${text.helpDisconnect}
    + +

    ${text.helpSampleSQL}

    + + + +
    + ${text.helpDropTable}
    + ${text.helpCreateTable}
    +   ${text.helpWithColumnsIdName}
    + ${text.helpAddRow}
    + ${text.helpAddAnotherRow}
    + ${text.helpQuery}
    + ${text.helpUpdate}
    + ${text.helpDeleteRow} +
    + DROP TABLE IF EXISTS TEST;
    + CREATE TABLE TEST(ID INT PRIMARY KEY,
    +    NAME VARCHAR(255));
    + INSERT INTO TEST VALUES(1, 'Hello');
    + INSERT INTO TEST VALUES(2, 'World');
    + SELECT * FROM TEST ORDER BY ID;
    + UPDATE TEST SET NAME='Hi' WHERE ID=1;
    + DELETE FROM TEST WHERE ID=2; +
    + ${text.a.help} + + HELP ... +
    + +

    ${text.helpAddDrivers}

    +

    +${text.helpAddDriversText} +

    + +
    + +
    + + diff --git a/modules/h2/src/main/java/org/h2/server/web/res/helpTranslate.jsp b/modules/h2/src/main/java/org/h2/server/web/res/helpTranslate.jsp new file mode 100644 index 0000000000000..d63b6f1ac7351 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/helpTranslate.jsp @@ -0,0 +1,43 @@ + + + + + + ${text.a.title} + + + + +
    + +

    Translate

    +

    +You can now translate the file ${translationFile} with your favorite editor. +

    +

    +To view the changes in context, save the file and refresh the browser. +The H2 Console reads the file every second. +

    +

    +When done, please send the file to the H2 support. +Please send the file as an attachment (to avoid line breaks). +

    +

    +To translate from scratch: +

    +
    • Stop the H2 Console +
    • Rename or delete the translation file +
    • Start the H2 Console +
    • Select the source language of your choice +
    • Go to 'Preferences' and click 'Translation' +
    + +${text.adminLogout} + +
    + + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/ico_add.gif b/modules/h2/src/main/java/org/h2/server/web/res/ico_add.gif new file mode 100644 index 0000000000000..252d7ebcb8c74 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/ico_add.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/ico_ok.gif b/modules/h2/src/main/java/org/h2/server/web/res/ico_ok.gif new file mode 100644 index 0000000000000..9e80aafb0ff73 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/ico_ok.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/ico_remove.gif b/modules/h2/src/main/java/org/h2/server/web/res/ico_remove.gif new file mode 100644 index 0000000000000..64b438488bcb5 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/ico_remove.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/ico_remove_ok.gif b/modules/h2/src/main/java/org/h2/server/web/res/ico_remove_ok.gif new file mode 100644 index 0000000000000..facfd1395af87 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/ico_remove_ok.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/ico_search.gif b/modules/h2/src/main/java/org/h2/server/web/res/ico_search.gif new file mode 100644 index 0000000000000..a4548c5dc8cc3 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/ico_search.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/ico_undo.gif b/modules/h2/src/main/java/org/h2/server/web/res/ico_undo.gif new file mode 100644 index 0000000000000..d77f94fc0532c Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/ico_undo.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/ico_write.gif b/modules/h2/src/main/java/org/h2/server/web/res/ico_write.gif new file mode 100644 index 0000000000000..feb8e94a74f14 Binary files /dev/null and 
b/modules/h2/src/main/java/org/h2/server/web/res/ico_write.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_commit.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_commit.gif new file mode 100644 index 0000000000000..ac8e3eef6c3ed Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_commit.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_disconnect.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_disconnect.gif new file mode 100644 index 0000000000000..2ece8d838f369 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_disconnect.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_help.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_help.gif new file mode 100644 index 0000000000000..a5dfad03182a0 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_help.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_history.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_history.gif new file mode 100644 index 0000000000000..745fdf2c099da Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_history.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_line.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_line.gif new file mode 100644 index 0000000000000..98c2c3832c01f Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_line.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_refresh.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_refresh.gif new file mode 100644 index 0000000000000..2fdec08a9252d Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_refresh.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_rollback.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_rollback.gif 
new file mode 100644 index 0000000000000..bcc2a4c6b96d9 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_rollback.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_run.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_run.gif new file mode 100644 index 0000000000000..4c6a6bffcbbee Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_run.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_run_selected.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_run_selected.gif new file mode 100644 index 0000000000000..41e63796c0894 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_run_selected.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/icon_stop.gif b/modules/h2/src/main/java/org/h2/server/web/res/icon_stop.gif new file mode 100644 index 0000000000000..ae44bf3b6399e Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/icon_stop.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/index.jsp b/modules/h2/src/main/java/org/h2/server/web/res/index.jsp new file mode 100644 index 0000000000000..c1f7cdefbbbd8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/index.jsp @@ -0,0 +1,24 @@ + + + + + ${text.a.title} + + + + + +

    Welcome to H2

    +

    No Javascript

    +If you are not automatically redirected to the login page, then +Javascript is currently disabled or your browser does not support Javascript. +For this application to work, Javascript is essential. +Please enable Javascript now, or use another web browser that supports it. + + diff --git a/modules/h2/src/main/java/org/h2/server/web/res/login.jsp b/modules/h2/src/main/java/org/h2/server/web/res/login.jsp new file mode 100644 index 0000000000000..5ccaafffb45b4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/login.jsp @@ -0,0 +1,132 @@ + + + + + + ${text.a.title} + + + + +
    +

    +    ${text.login.goAdmin} + +    ${text.a.tools} +    ${text.a.help} +

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +

    ${error}

    +
    + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/notAllowed.jsp b/modules/h2/src/main/java/org/h2/server/web/res/notAllowed.jsp new file mode 100644 index 0000000000000..acd14003c74f0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/notAllowed.jsp @@ -0,0 +1,16 @@ + + + + + ${text.a.title} + + +

    ${text.a.title}

    +

    + ${text.a.remoteConnectionsDisabled} +

    + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/query.jsp b/modules/h2/src/main/java/org/h2/server/web/res/query.jsp new file mode 100644 index 0000000000000..9ab4240f6afdd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/query.jsp @@ -0,0 +1,530 @@ + + + + + + ${text.a.title} + + + + +
    + + + + + + ${text.toolbar.sqlStatement}: + +
    + +
    + +
    +
    + +
    + + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/result.jsp b/modules/h2/src/main/java/org/h2/server/web/res/result.jsp new file mode 100644 index 0000000000000..5aadb9807071b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/result.jsp @@ -0,0 +1,22 @@ + + + + + + ${text.a.title} + + + + + +
    +${result} +
    + +
    + + diff --git a/modules/h2/src/main/java/org/h2/server/web/res/stylesheet.css b/modules/h2/src/main/java/org/h2/server/web/res/stylesheet.css new file mode 100644 index 0000000000000..c5f7b7d813e91 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/stylesheet.css @@ -0,0 +1,334 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * * Initial Developer: H2 Group + */ + +td, input, select, textarea, body, code, pre { + font: 12px/1.4 Arial, sans-serif; +} + +h1, h2, h3, h4, h5 { + font: 12px/1.4 Arial, sans-serif; + font-weight: bold; +} + +a { + text-decoration: none; + color: #0000ff; +} + +a:hover { + text-decoration: underline; +} + +body { + margin: 4px; +} + +code { + background-color: #ece9d8; + padding: 0px 2px; +} + +h1 { + background-color: #0000bb; + padding: 2px 4px 2px 4px; + color: #fff; + font-size: 22px; + line-height: normal; +} + +h2 { + font-size: 19px; +} + +h3 { + font-size: 16px; +} + +li { + margin-top: 6px; +} + +ol { + list-style-type: upper-roman; + list-style-position: outside; +} + +table { + background-color: #ffffff; + border-collapse: collapse; + border: 1px solid #aca899; +} + +td { + background-color: #ffffff; + padding: 2px; + text-align: left; + vertical-align:top; + border: 1px solid #aca899; +} + +textarea { + width: 100%; + overflow: auto; +} + +th { + font-weight: normal; + text-align: left; + background-color: #ece9d8; + padding: 2px; + border: 1px solid #aca899; +} + +ul { + list-style-type: disc; + list-style-position: outside; + padding-left: 20px; +} + +.result { + background-color: #f4f0e0; + margin: 10px; +} + +.toolbar { + background-color: #ece9d8; +} + +table.toolbar { + border-collapse: collapse; + border: 0px; + padding: 0px 0px; +} + +th.toolbar { + border: 0px; +} + +tr.toolbar { + border: 0px; +} + +td.toolbar { + vertical-align: middle; + border: 0px; + padding: 0px 0px; +} + +table.nav { + border: 0px; 
+} + +tr.nav { + border: 0px; +} + +td.nav { + border: 0px; +} + +table.login { + background-color: #ece9d8; + border:1px solid #aca899; +} + +tr.login { + border: 0px; +} + +th.login { + color: #ffffff; + text-align: left; + border: 0px; + background-color: #ece9d8; + padding: 4px 10px; + background-image: url(background.gif); +} + +td.login { + background-color: #ece9d8; + padding: 5px 10px; + text-align: left; + border: 0px; +} + +.iconLine { + border-width:0px; + border-style:solid; +} + +.icon { + border-top-color:#ece9d8; + border-left-color:#ece9d8; + border-right-color:#ece9d8; + border-bottom-color:#ece9d8; + border-width:1px; + border-style:solid; +} + +.icon_hover { + border-color:#aca899; + border-radius: 2px; + -moz-border-radius: 2px; + -webkit-border-radius: 2px; + border-width:1px; + border-style:solid; +} + +table.empty { + background-color: #ffffff; + border: 0px; +} + +td.empty { + background-color: #ffffff; + border: 0px; + padding: 5px 10px; + text-align: left; +} + +.error { + color: #771111; +} + +div.error { + background-color: #eecccc; + border-color: #ddbbbb; +} + +div.success { + color: #226622; + background-color: #cceecc; + border-color: #bbddbb; +} + +div.error, div.success { + border-radius: 4px; + padding: 10px; + border-width: 1px; + border-style: solid; +} + +input.button { + padding: 3px; + background-color: #ece9d8; + border-color: #aca899; + border-radius: 2px; + -moz-border-radius: 2px; + -webkit-border-radius: 2px; + border-width: 1px; + border-style: solid; +} + +input.button:hover { + border-color: #5e5c55; +} + +input.button:active { + position:relative; + top:1px; +} + +.tree { + border: 0px; + vertical-align: middle; + white-space: nowrap; +} + +.tree img { + height: 18px; + width: 18px; + border: 0px; + vertical-align: middle; +} + +.tree a { + border: 0px; + text-decoration: none; + vertical-align: middle; + white-space: nowrap; + color: #000000; +} + +.tree a:hover { + color: #345373; +} + +table.content { + width: 
100%; + height: 100%; + border: 0px; +} + +tr.content { + border:0px; + border-left:1px solid #aca899; +} + +td.content { + border:0px; + border-left:1px solid #aca899; +} + +.contentDiv { + margin:10px; +} + +tr.contentResult { + border:0px; + border-top:1px solid #aca899; + border-left:1px solid #aca899; +} + +td.contentResult { + border:0px; + border-top:1px solid #aca899; + border-left:1px solid #aca899; +} + +table.autoComp { + background-color: #e0ecff; + border: 1px solid #7f9db9; + cursor: pointer; + position: absolute; + top: 1px; + left: 1px; + z-index:0; + padding: 0px; + margin: 0px; + border-spacing:2px; +} + +td.autoComp0 { + border-spacing: 0px; + padding: 1px 8px; + background-color: #cce0ff; + border: 0px; +} + +td.autoComp1 { + border-spacing: 0px; + padding: 1px 8px; + background-color: #e7f0ff; + border: 0px; +} + +td.autoComp2 { + border-spacing: 0px; + padding: 1px 8px; + background-color: #ffffff; + border: 0px; +} + +td.autoCompHide { + padding: 2px; + display: none; +} + +table.tool, table.tool tr, table.tool tr td { + padding: 0px; + border: 0px; +} diff --git a/modules/h2/src/main/java/org/h2/server/web/res/table.js b/modules/h2/src/main/java/org/h2/server/web/res/table.js new file mode 100644 index 0000000000000..f77b448123f76 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/table.js @@ -0,0 +1,245 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * * Initial Developer: H2 Group + */ + +addEvent(window, "load", initSort); + +function addEvent(elm, evType, fn, useCapture) { + // addEvent and removeEvent + // cross-browser event handling for IE5+, NS6 and Mozilla + // By Scott Andrew + if (elm.addEventListener){ + elm.addEventListener(evType, fn, useCapture); + return true; + } else if (elm.attachEvent){ + var r = elm.attachEvent("on"+evType, fn); + return r; + } else { + alert("Handler could not be added"); + } +} + +function initSort() { + if (document.getElementById('editing') != undefined) { + // don't allow sorting while editing + return; + } + var tables = document.getElementsByTagName("table"); + for (var i=0; i 0) { + var header = table.rows[0]; + for(var j=0;j  '; + } + } + } +} + +function editRow(row, session, write, undo) { + var table = document.getElementById('editTable'); + var y = row < 0 ? table.rows.length - 1 : row; + var i; + for(i=1; i'; + var undo = ''+undo+''; + cell.innerHTML = edit + undo; + } else { + cell.innerHTML = ''; + } + } + var cells = table.rows[y].cells; + for (i=1; i/g, '>'); + var size; + var newHTML; + if (text.indexOf('\n') >= 0) { + size = 40; + newHTML = ''; + } else { + size = text.length+5; + newHTML = ''; + } + newHTML = newHTML.replace('$rowName', 'r' + row + 'c' + i); + newHTML = newHTML.replace('$row', row); + newHTML = newHTML.replace('$t', text); + newHTML = newHTML.replace('$size', size); + cell.innerHTML = newHTML; + } +} + +function deleteRow(row, session, write, undo) { + var table = document.getElementById('editTable'); + var y = row < 0 ? table.rows.length - 1 : row; + var i; + for(i=1; i'; + var undo = ''+undo+''; + cell.innerHTML = edit + undo; + } else { + cell.innerHTML = ''; + } + } + var cells = table.rows[y].cells; + for (i=1; i + + + + + ${text.a.title} + + + + + + + +
    + +
    + +

    + ${error} +

    + + + diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tools.jsp b/modules/h2/src/main/java/org/h2/server/web/res/tools.jsp new file mode 100644 index 0000000000000..2c40b7588b6c0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/tools.jsp @@ -0,0 +1,241 @@ + + + + + + ${text.a.tools} + + + + +
    + +

    ${text.a.tools}

    +

    +${text.adminLogout} +

    +
    +

    +${text.tools.backup}   +${text.tools.restore}   +${text.tools.recover}   +${text.tools.deleteDbFiles}   +${text.tools.changeFileEncryption} +

    +${text.tools.script}   +${text.tools.runScript}   +${text.tools.convertTraceFile}   +${text.tools.createCluster} +

    +
    + + + + + + + + + + + + + + +
    + + + diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree.js b/modules/h2/src/main/java/org/h2/server/web/res/tree.js new file mode 100644 index 0000000000000..0e710609f59b0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/server/web/res/tree.js @@ -0,0 +1,124 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * * Initial Developer: H2 Group + */ + +var nodeList = new Array(); +var icons = new Array(); +var tables = new Array(); +var tablesByName = new Object(); + +function Table(name, columns, i) { + this.name = name; + this.columns = columns; + this.id = i; +} + +function addTable(name, columns, i) { + var t = new Table(name, columns, i); + tables[tables.length] = t; + tablesByName[name] = t; +} + +function ins(s, isTable) { + if (parent.h2query) { + if (parent.h2query.insertText) { + parent.h2query.insertText(s, isTable); + } + } +} + +function refreshQueryTables() { + if (parent.h2query) { + if (parent.h2query.refreshTables) { + parent.h2query.refreshTables(); + } + } +} + +function goToTable(s) { + var t = tablesByName[s]; + if (t) { + hitOpen(t.id); + return true; + } + return false; +} + +function loadIcons() { + icons[0] = new Image(); + icons[0].src = "tree_minus.gif"; + icons[1] = new Image(); + icons[1].src = "tree_plus.gif"; +} + +function Node(level, type, icon, text, link) { + this.level = level; + this.type = type; + this.icon = icon; + this.text = text; + this.link = link; +} + +function setNode(id, level, type, icon, text, link) { + nodeList[id] = new Node(level, type, icon, text, link); +} + +function writeDiv(i, level, dist) { + if (dist>0) { + document.write("
    "); + } else { + while (dist++<0) { + document.write("
    "); + } + } +} + +function writeTree() { + loadIcons(); + var last=nodeList[0]; + for (var i=0; i0) { + document.write(""); + } + if (node.type==1) { + if (i < nodeList.length-1 && nodeList[i+1].level > node.level) { + document.write(""); + } else { + document.write(""); + } + } + document.write(" "); + if (node.link==null) { + document.write(node.text); + } else { + document.write(""+node.text+""); + } + document.write("
    "); + } + writeDiv(0, 0, -last.type); +} + +function hit(i) { + var theDiv = document.getElementById("div"+i); + var theJoin = document.getElementById("join"+i); + if (theDiv.style.display == 'none') { + theJoin.src = icons[0].src; + theDiv.style.display = ''; + } else { + theJoin.src = icons[1].src; + theDiv.style.display = 'none'; + } +} + +function hitOpen(i) { + var theDiv = document.getElementById("div"+i); + var theJoin = document.getElementById("join"+i); + theJoin.src = icons[0].src; + theDiv.style.display = ''; +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_column.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_column.gif new file mode 100644 index 0000000000000..e6aeb35736119 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_column.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_database.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_database.gif new file mode 100644 index 0000000000000..c0cb0dacb59b2 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_database.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_empty.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_empty.gif new file mode 100644 index 0000000000000..b5cf52378fa5f Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_empty.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_folder.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_folder.gif new file mode 100644 index 0000000000000..eb129763dcea0 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_folder.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_index.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_index.gif new file mode 100644 index 0000000000000..a84151102a129 Binary files /dev/null and 
b/modules/h2/src/main/java/org/h2/server/web/res/tree_index.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_index_az.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_index_az.gif new file mode 100644 index 0000000000000..e956755469609 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_index_az.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_info.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_info.gif new file mode 100644 index 0000000000000..a9a38f4600fd8 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_info.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_line.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_line.gif new file mode 100644 index 0000000000000..1a259eea00c33 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_line.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_minus.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_minus.gif new file mode 100644 index 0000000000000..2592ac20f3f4c Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_minus.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_page.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_page.gif new file mode 100644 index 0000000000000..42d7318c5d928 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_page.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_plus.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_plus.gif new file mode 100644 index 0000000000000..f258ce211a0a1 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_plus.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_sequence.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_sequence.gif new file mode 100644 index 
0000000000000..60820a97564ef Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_sequence.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_sequences.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_sequences.gif new file mode 100644 index 0000000000000..e72a59619e628 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_sequences.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_table.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_table.gif new file mode 100644 index 0000000000000..e64a80d426cea Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_table.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_type.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_type.gif new file mode 100644 index 0000000000000..313fc00837eed Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_type.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_types.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_types.gif new file mode 100644 index 0000000000000..8e55d90b0e531 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_types.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_user.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_user.gif new file mode 100644 index 0000000000000..72160a838e5aa Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_user.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_users.gif b/modules/h2/src/main/java/org/h2/server/web/res/tree_users.gif new file mode 100644 index 0000000000000..3c3d84fb59c09 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_users.gif differ diff --git a/modules/h2/src/main/java/org/h2/server/web/res/tree_view.gif 
b/modules/h2/src/main/java/org/h2/server/web/res/tree_view.gif new file mode 100644 index 0000000000000..0ef44930a4df5 Binary files /dev/null and b/modules/h2/src/main/java/org/h2/server/web/res/tree_view.gif differ diff --git a/modules/h2/src/main/java/org/h2/store/CountingReaderInputStream.java b/modules/h2/src/main/java/org/h2/store/CountingReaderInputStream.java new file mode 100644 index 0000000000000..2f24b2d5ead5a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/CountingReaderInputStream.java @@ -0,0 +1,112 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import java.nio.ByteBuffer; +import java.nio.CharBuffer; +import java.nio.charset.CharsetEncoder; +import java.nio.charset.CodingErrorAction; +import java.nio.charset.StandardCharsets; + +import org.h2.engine.Constants; + +/** + * An input stream that reads the data from a reader and limits the number of + * bytes that can be read. + */ +public class CountingReaderInputStream extends InputStream { + + private final Reader reader; + + private final CharBuffer charBuffer = + CharBuffer.allocate(Constants.IO_BUFFER_SIZE); + + private final CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder(). + onMalformedInput(CodingErrorAction.REPLACE). 
+ onUnmappableCharacter(CodingErrorAction.REPLACE); + + private ByteBuffer byteBuffer = ByteBuffer.allocate(0); + private long length; + private long remaining; + + CountingReaderInputStream(Reader reader, long maxLength) { + this.reader = reader; + this.remaining = maxLength; + } + + @Override + public int read(byte[] buff, int offset, int len) throws IOException { + if (!fetch()) { + return -1; + } + len = Math.min(len, byteBuffer.remaining()); + byteBuffer.get(buff, offset, len); + return len; + } + + @Override + public int read() throws IOException { + if (!fetch()) { + return -1; + } + return byteBuffer.get() & 255; + } + + private boolean fetch() throws IOException { + if (byteBuffer != null && byteBuffer.remaining() == 0) { + fillBuffer(); + } + return byteBuffer != null; + } + + private void fillBuffer() throws IOException { + int len = (int) Math.min(charBuffer.capacity() - charBuffer.position(), + remaining); + if (len > 0) { + len = reader.read(charBuffer.array(), charBuffer.position(), len); + } + if (len > 0) { + remaining -= len; + } else { + len = 0; + remaining = 0; + } + length += len; + charBuffer.limit(charBuffer.position() + len); + charBuffer.rewind(); + byteBuffer = ByteBuffer.allocate(Constants.IO_BUFFER_SIZE); + boolean end = remaining == 0; + encoder.encode(charBuffer, byteBuffer, end); + if (end && byteBuffer.position() == 0) { + // EOF + byteBuffer = null; + return; + } + byteBuffer.flip(); + charBuffer.compact(); + charBuffer.flip(); + charBuffer.position(charBuffer.limit()); + } + + /** + * The number of characters read so far (but there might still be some bytes + * in the buffer). 
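The `fillBuffer()` method above drives a `java.nio` CharsetEncoder by hand: characters are pulled from the Reader into a CharBuffer, encoded into a fresh ByteBuffer, and the final call passes `endOfInput=true` so the encoder knows no more input follows. A minimal standalone sketch of that encode step (class and method names here are illustrative, not part of H2):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class EncodeStepSketch {

    /** Encode one chunk of chars to UTF-8 bytes, the way each fillBuffer() call does. */
    public static byte[] encodeChunk(CharsetEncoder encoder, CharBuffer in, boolean endOfInput) {
        // UTF-8 needs at most 3 bytes per UTF-16 char; over-allocate slightly.
        ByteBuffer out = ByteBuffer.allocate(in.remaining() * 3 + 4);
        encoder.encode(in, out, endOfInput);
        out.flip();
        byte[] bytes = new byte[out.remaining()];
        out.get(bytes);
        return bytes;
    }

    public static void main(String[] args) {
        CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
        // 'é' needs two bytes in UTF-8, so 5 chars become 6 bytes.
        byte[] bytes = encodeChunk(encoder, CharBuffer.wrap("héllo"), true);
        System.out.println(bytes.length);   // 6
    }
}
```

The REPLACE actions mirror the configuration in the class above, so malformed or unmappable input degrades to a replacement character instead of throwing.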
+ * + * @return the number of characters + */ + public long getLength() { + return length; + } + + @Override + public void close() throws IOException { + reader.close(); + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/store/Data.java b/modules/h2/src/main/java/org/h2/store/Data.java new file mode 100644 index 0000000000000..0589f5d471511 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/Data.java @@ -0,0 +1,1356 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + * + * The variable size number format code is a port from SQLite, + * but stored in reverse order (least significant bits in the first byte). + */ +package org.h2.store; + +import java.io.IOException; +import java.io.OutputStream; +import java.io.Reader; +import java.math.BigDecimal; +import java.math.BigInteger; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Timestamp; +import java.util.Arrays; + +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.tools.SimpleResultSet; +import org.h2.util.Bits; +import org.h2.util.DateTimeUtils; +import org.h2.util.MathUtils; +import org.h2.util.Utils; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueByte; +import org.h2.value.ValueBytes; +import org.h2.value.ValueDate; +import org.h2.value.ValueDecimal; +import org.h2.value.ValueDouble; +import org.h2.value.ValueFloat; +import org.h2.value.ValueGeometry; +import org.h2.value.ValueInt; +import org.h2.value.ValueJavaObject; +import org.h2.value.ValueLob; +import org.h2.value.ValueLobDb; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueResultSet; +import 
org.h2.value.ValueShort; +import org.h2.value.ValueString; +import org.h2.value.ValueStringFixed; +import org.h2.value.ValueStringIgnoreCase; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; +import org.h2.value.ValueUuid; + +/** + * This class represents a byte buffer that contains persistent data of a page. + * + * @author Thomas Mueller + * @author Noel Grandin + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public class Data { + + /** + * The length of an integer value. + */ + public static final int LENGTH_INT = 4; + + /** + * The length of a long value. + */ + private static final int LENGTH_LONG = 8; + + private static final int INT_0_15 = 32; + private static final int LONG_0_7 = 48; + private static final int DECIMAL_0_1 = 56; + private static final int DECIMAL_SMALL_0 = 58; + private static final int DECIMAL_SMALL = 59; + private static final int DOUBLE_0_1 = 60; + private static final int FLOAT_0_1 = 62; + private static final int BOOLEAN_FALSE = 64; + private static final int BOOLEAN_TRUE = 65; + private static final int INT_NEG = 66; + private static final int LONG_NEG = 67; + private static final int STRING_0_31 = 68; + private static final int BYTES_0_31 = 100; + private static final int LOCAL_TIME = 132; + private static final int LOCAL_DATE = 133; + private static final int LOCAL_TIMESTAMP = 134; + + private static final long MILLIS_PER_MINUTE = 1000 * 60; + + /** + * Can not store the local time, because doing so with old database files + * that didn't do it could result in an ArrayIndexOutOfBoundsException. The + * reason is that adding a row to a page only allocated space for the new + * row, but didn't take into account that existing rows now can use more + * space, due to the changed format. + */ + private static final boolean STORE_LOCAL_TIME = false; + + /** + * The data itself. + */ + private byte[] data; + + /** + * The current write or read position. 
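The file header notes that the variable-size number format is a port from SQLite, stored in reverse order (least significant bits in the first byte). The `writeVarInt`/`readVarInt` pair used throughout this class is consistent with the usual 7-bits-per-byte scheme; a self-contained sketch of that encoding, under the stated assumption (names are illustrative, this is not the H2 implementation itself):

```java
public class VarIntSketch {

    /** Write x in 7-bit groups, least significant group first; the high bit marks "more bytes follow". */
    public static int writeVarInt(byte[] buf, int pos, int x) {
        while ((x & ~0x7f) != 0) {
            buf[pos++] = (byte) (0x80 | (x & 0x7f));
            x >>>= 7;
        }
        buf[pos++] = (byte) x;
        return pos;                       // position after the last byte written
    }

    /** Read a value written by writeVarInt, starting at pos. */
    public static int readVarInt(byte[] buf, int pos) {
        int x = 0;
        for (int shift = 0; ; shift += 7) {
            byte b = buf[pos++];
            x |= (b & 0x7f) << shift;
            if (b >= 0) {                 // high bit clear: last byte
                return x;
            }
        }
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8];
        int end = writeVarInt(buf, 0, 300);
        System.out.println(end);                  // 2 (300 needs two bytes)
        System.out.println(readVarInt(buf, 0));   // 300
    }
}
```

Note how the value cases below exploit this: negative ints are stored as `INT_NEG` plus the var-encoded magnitude, since var-encoding a negative two's-complement value directly would always take five bytes.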
+ */ + private int pos; + + /** + * The data handler responsible for lob objects. + */ + private final DataHandler handler; + + private Data(DataHandler handler, byte[] data) { + this.handler = handler; + this.data = data; + } + + /** + * Update an integer at the given position. + * The current position is not changed. + * + * @param pos the position + * @param x the value + */ + public void setInt(int pos, int x) { + Bits.writeInt(data, pos, x); + } + + /** + * Write an integer at the current position. + * The current position is incremented. + * + * @param x the value + */ + public void writeInt(int x) { + Bits.writeInt(data, pos, x); + pos += 4; + } + + /** + * Read an integer at the current position. + * The current position is incremented. + * + * @return the value + */ + public int readInt() { + int x = Bits.readInt(data, pos); + pos += 4; + return x; + } + + /** + * Get the length of a String. This includes the bytes required to encode + * the length. + * + * @param s the string + * @return the number of bytes required + */ + public static int getStringLen(String s) { + int len = s.length(); + return getStringWithoutLengthLen(s, len) + getVarIntLen(len); + } + + /** + * Calculate the length of a String, excluding the bytes required to encode + * the length. + *
+ * <p>
    + * For performance reasons the internal representation of a String is + * similar to UTF-8, but not exactly UTF-8. + * + * @param s the string + * @param len the length of the string + * @return the number of bytes required + */ + private static int getStringWithoutLengthLen(String s, int len) { + int plus = 0; + for (int i = 0; i < len; i++) { + char c = s.charAt(i); + if (c >= 0x800) { + plus += 2; + } else if (c >= 0x80) { + plus++; + } + } + return len + plus; + } + + /** + * Read a String value. + * The current position is incremented. + * + * @return the value + */ + public String readString() { + int len = readVarInt(); + return readString(len); + } + + /** + * Read a String from the byte array. + *
+ * <p>
    + * For performance reasons the internal representation of a String is + * similar to UTF-8, but not exactly UTF-8. + * + * @param len the length of the resulting string + * @return the String + */ + private String readString(int len) { + byte[] buff = data; + int p = pos; + char[] chars = new char[len]; + for (int i = 0; i < len; i++) { + int x = buff[p++] & 0xff; + if (x < 0x80) { + chars[i] = (char) x; + } else if (x >= 0xe0) { + chars[i] = (char) (((x & 0xf) << 12) + + ((buff[p++] & 0x3f) << 6) + + (buff[p++] & 0x3f)); + } else { + chars[i] = (char) (((x & 0x1f) << 6) + + (buff[p++] & 0x3f)); + } + } + pos = p; + return new String(chars); + } + + /** + * Write a String. + * The current position is incremented. + * + * @param s the value + */ + public void writeString(String s) { + int len = s.length(); + writeVarInt(len); + writeStringWithoutLength(s, len); + } + + /** + * Write a String. + *
+ * <p>
    + * For performance reasons the internal representation of a String is + * similar to UTF-8, but not exactly UTF-8. + * + * @param s the string + * @param len the number of characters to write + */ + private void writeStringWithoutLength(String s, int len) { + int p = pos; + byte[] buff = data; + for (int i = 0; i < len; i++) { + int c = s.charAt(i); + if (c < 0x80) { + buff[p++] = (byte) c; + } else if (c >= 0x800) { + buff[p++] = (byte) (0xe0 | (c >> 12)); + buff[p++] = (byte) (((c >> 6) & 0x3f)); + buff[p++] = (byte) (c & 0x3f); + } else { + buff[p++] = (byte) (0xc0 | (c >> 6)); + buff[p++] = (byte) (c & 0x3f); + } + } + pos = p; + } + + private void writeStringWithoutLength(char[] chars, int len) { + int p = pos; + byte[] buff = data; + for (int i = 0; i < len; i++) { + int c = chars[i]; + if (c < 0x80) { + buff[p++] = (byte) c; + } else if (c >= 0x800) { + buff[p++] = (byte) (0xe0 | (c >> 12)); + buff[p++] = (byte) (((c >> 6) & 0x3f)); + buff[p++] = (byte) (c & 0x3f); + } else { + buff[p++] = (byte) (0xc0 | (c >> 6)); + buff[p++] = (byte) (c & 0x3f); + } + } + pos = p; + } + + /** + * Create a new buffer for the given handler. The + * handler will decide what type of buffer is created. + * + * @param handler the data handler + * @param capacity the initial capacity of the buffer + * @return the buffer + */ + public static Data create(DataHandler handler, int capacity) { + return new Data(handler, new byte[capacity]); + } + + /** + * Create a new buffer using the given data for the given handler. The + * handler will decide what type of buffer is created. + * + * @param handler the data handler + * @param buff the data + * @return the buffer + */ + public static Data create(DataHandler handler, byte[] buff) { + return new Data(handler, buff); + } + + /** + * Get the current write position of this buffer, which is the current + * length. 
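The `writeStringWithoutLength`/`readString` pair above encodes each UTF-16 char independently in 1 to 3 bytes, which is why the javadoc calls the format "similar to UTF-8, but not exactly UTF-8": a supplementary character is stored as two separate 3-byte surrogate encodings (6 bytes) where real UTF-8 would use one 4-byte sequence. A sketch of the per-char length rule, mirroring the branches in the code above (class name is illustrative):

```java
public class StringLenSketch {

    /** Bytes needed per the scheme above: 1 byte below 0x80, 3 bytes from 0x800 up, otherwise 2. */
    public static int encodedLength(String s) {
        int len = s.length();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c >= 0x800) {
                len += 2;
            } else if (c >= 0x80) {
                len += 1;
            }
        }
        return len;
    }

    public static void main(String[] args) {
        System.out.println(encodedLength("A"));        // 1
        System.out.println(encodedLength("\u00e9"));   // 2 (é)
        System.out.println(encodedLength("\u20ac"));   // 3 (€)
        // U+1F600 is one code point but two UTF-16 chars; each surrogate
        // is >= 0x800, so this scheme uses 6 bytes where real UTF-8 uses 4.
        System.out.println(encodedLength("\ud83d\ude00"));  // 6
    }
}
```

This is the same computation as `getStringWithoutLengthLen` earlier in the file, restated for readability; the decoder only ever reads back what this encoder wrote, so the deviation from strict UTF-8 is invisible at the API level.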
+ * + * @return the length + */ + public int length() { + return pos; + } + + /** + * Get the byte array used for this page. + * + * @return the byte array + */ + public byte[] getBytes() { + return data; + } + + /** + * Set the position to 0. + */ + public void reset() { + pos = 0; + } + + /** + * Append a number of bytes to this buffer. + * + * @param buff the data + * @param off the offset in the data + * @param len the length in bytes + */ + public void write(byte[] buff, int off, int len) { + System.arraycopy(buff, off, data, pos, len); + pos += len; + } + + /** + * Copy a number of bytes to the given buffer from the current position. The + * current position is incremented accordingly. + * + * @param buff the output buffer + * @param off the offset in the output buffer + * @param len the number of bytes to copy + */ + public void read(byte[] buff, int off, int len) { + System.arraycopy(data, pos, buff, off, len); + pos += len; + } + + /** + * Append one single byte. + * + * @param x the value + */ + public void writeByte(byte x) { + data[pos++] = x; + } + + /** + * Read one single byte. + * + * @return the value + */ + public byte readByte() { + return data[pos++]; + } + + /** + * Read a long value. This method reads two int values and combines them. + * + * @return the long value + */ + public long readLong() { + long x = Bits.readLong(data, pos); + pos += 8; + return x; + } + + /** + * Append a long value. This method writes two int values. + * + * @param x the value + */ + public void writeLong(long x) { + Bits.writeLong(data, pos, x); + pos += 8; + } + + /** + * Append a value. + * + * @param v the value + */ + public void writeValue(Value v) { + int start = pos; + if (v == ValueNull.INSTANCE) { + data[pos++] = 0; + return; + } + int type = v.getType(); + switch (type) { + case Value.BOOLEAN: + writeByte((byte) (v.getBoolean() ? 
BOOLEAN_TRUE : BOOLEAN_FALSE)); + break; + case Value.BYTE: + writeByte((byte) type); + writeByte(v.getByte()); + break; + case Value.SHORT: + writeByte((byte) type); + writeShortInt(v.getShort()); + break; + case Value.ENUM: + case Value.INT: { + int x = v.getInt(); + if (x < 0) { + writeByte((byte) INT_NEG); + writeVarInt(-x); + } else if (x < 16) { + writeByte((byte) (INT_0_15 + x)); + } else { + writeByte((byte) type); + writeVarInt(x); + } + break; + } + case Value.LONG: { + long x = v.getLong(); + if (x < 0) { + writeByte((byte) LONG_NEG); + writeVarLong(-x); + } else if (x < 8) { + writeByte((byte) (LONG_0_7 + x)); + } else { + writeByte((byte) type); + writeVarLong(x); + } + break; + } + case Value.DECIMAL: { + BigDecimal x = v.getBigDecimal(); + if (BigDecimal.ZERO.equals(x)) { + writeByte((byte) DECIMAL_0_1); + } else if (BigDecimal.ONE.equals(x)) { + writeByte((byte) (DECIMAL_0_1 + 1)); + } else { + int scale = x.scale(); + BigInteger b = x.unscaledValue(); + int bits = b.bitLength(); + if (bits <= 63) { + if (scale == 0) { + writeByte((byte) DECIMAL_SMALL_0); + writeVarLong(b.longValue()); + } else { + writeByte((byte) DECIMAL_SMALL); + writeVarInt(scale); + writeVarLong(b.longValue()); + } + } else { + writeByte((byte) type); + writeVarInt(scale); + byte[] bytes = b.toByteArray(); + writeVarInt(bytes.length); + write(bytes, 0, bytes.length); + } + } + break; + } + case Value.TIME: + if (STORE_LOCAL_TIME) { + writeByte((byte) LOCAL_TIME); + ValueTime t = (ValueTime) v; + long nanos = t.getNanos(); + long millis = nanos / 1_000_000; + nanos -= millis * 1_000_000; + writeVarLong(millis); + writeVarLong(nanos); + } else { + writeByte((byte) type); + writeVarLong(DateTimeUtils.getTimeLocalWithoutDst(v.getTime())); + } + break; + case Value.DATE: { + if (STORE_LOCAL_TIME) { + writeByte((byte) LOCAL_DATE); + long x = ((ValueDate) v).getDateValue(); + writeVarLong(x); + } else { + writeByte((byte) type); + long x = 
DateTimeUtils.getTimeLocalWithoutDst(v.getDate()); + writeVarLong(x / MILLIS_PER_MINUTE); + } + break; + } + case Value.TIMESTAMP: { + if (STORE_LOCAL_TIME) { + writeByte((byte) LOCAL_TIMESTAMP); + ValueTimestamp ts = (ValueTimestamp) v; + long dateValue = ts.getDateValue(); + writeVarLong(dateValue); + long nanos = ts.getTimeNanos(); + long millis = nanos / 1_000_000; + nanos -= millis * 1_000_000; + writeVarLong(millis); + writeVarLong(nanos); + } else { + Timestamp ts = v.getTimestamp(); + writeByte((byte) type); + writeVarLong(DateTimeUtils.getTimeLocalWithoutDst(ts)); + writeVarInt(ts.getNanos() % 1_000_000); + } + break; + } + case Value.TIMESTAMP_TZ: { + ValueTimestampTimeZone ts = (ValueTimestampTimeZone) v; + writeByte((byte) type); + writeVarLong(ts.getDateValue()); + writeVarLong(ts.getTimeNanos()); + writeVarInt(ts.getTimeZoneOffsetMins()); + break; + } + case Value.GEOMETRY: + // fall though + case Value.JAVA_OBJECT: { + writeByte((byte) type); + byte[] b = v.getBytesNoCopy(); + int len = b.length; + writeVarInt(len); + write(b, 0, len); + break; + } + case Value.BYTES: { + byte[] b = v.getBytesNoCopy(); + int len = b.length; + if (len < 32) { + writeByte((byte) (BYTES_0_31 + len)); + write(b, 0, len); + } else { + writeByte((byte) type); + writeVarInt(len); + write(b, 0, len); + } + break; + } + case Value.UUID: { + writeByte((byte) type); + ValueUuid uuid = (ValueUuid) v; + writeLong(uuid.getHigh()); + writeLong(uuid.getLow()); + break; + } + case Value.STRING: { + String s = v.getString(); + int len = s.length(); + if (len < 32) { + writeByte((byte) (STRING_0_31 + len)); + writeStringWithoutLength(s, len); + } else { + writeByte((byte) type); + writeString(s); + } + break; + } + case Value.STRING_IGNORECASE: + case Value.STRING_FIXED: + writeByte((byte) type); + writeString(v.getString()); + break; + case Value.DOUBLE: { + double x = v.getDouble(); + if (x == 1.0d) { + writeByte((byte) (DOUBLE_0_1 + 1)); + } else { + long d = 
Double.doubleToLongBits(x); + if (d == ValueDouble.ZERO_BITS) { + writeByte((byte) DOUBLE_0_1); + } else { + writeByte((byte) type); + writeVarLong(Long.reverse(d)); + } + } + break; + } + case Value.FLOAT: { + float x = v.getFloat(); + if (x == 1.0f) { + writeByte((byte) (FLOAT_0_1 + 1)); + } else { + int f = Float.floatToIntBits(x); + if (f == ValueFloat.ZERO_BITS) { + writeByte((byte) FLOAT_0_1); + } else { + writeByte((byte) type); + writeVarInt(Integer.reverse(f)); + } + } + break; + } + case Value.BLOB: + case Value.CLOB: { + writeByte((byte) type); + if (v instanceof ValueLob) { + ValueLob lob = (ValueLob) v; + lob.convertToFileIfRequired(handler); + byte[] small = lob.getSmall(); + if (small == null) { + int t = -1; + if (!lob.isLinkedToTable()) { + t = -2; + } + writeVarInt(t); + writeVarInt(lob.getTableId()); + writeVarInt(lob.getObjectId()); + writeVarLong(lob.getPrecision()); + writeByte((byte) (lob.isCompressed() ? 1 : 0)); + if (t == -2) { + writeString(lob.getFileName()); + } + } else { + writeVarInt(small.length); + write(small, 0, small.length); + } + } else { + ValueLobDb lob = (ValueLobDb) v; + byte[] small = lob.getSmall(); + if (small == null) { + writeVarInt(-3); + writeVarInt(lob.getTableId()); + writeVarLong(lob.getLobId()); + writeVarLong(lob.getPrecision()); + } else { + writeVarInt(small.length); + write(small, 0, small.length); + } + } + break; + } + case Value.ARRAY: { + writeByte((byte) type); + Value[] list = ((ValueArray) v).getList(); + writeVarInt(list.length); + for (Value x : list) { + writeValue(x); + } + break; + } + case Value.RESULT_SET: { + writeByte((byte) type); + try { + ResultSet rs = ((ValueResultSet) v).getResultSet(); + rs.beforeFirst(); + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + writeVarInt(columnCount); + for (int i = 0; i < columnCount; i++) { + writeString(meta.getColumnName(i + 1)); + writeVarInt(meta.getColumnType(i + 1)); + writeVarInt(meta.getPrecision(i + 1)); + 
writeVarInt(meta.getScale(i + 1)); + } + while (rs.next()) { + writeByte((byte) 1); + for (int i = 0; i < columnCount; i++) { + int t = DataType.getValueTypeFromResultSet(meta, i + 1); + Value val = DataType.readValue(null, rs, i + 1, t); + writeValue(val); + } + } + writeByte((byte) 0); + rs.beforeFirst(); + } catch (SQLException e) { + throw DbException.convert(e); + } + break; + } + default: + DbException.throwInternalError("type=" + v.getType()); + } + if (SysProperties.CHECK2) { + if (pos - start != getValueLen(v, handler)) { + throw DbException.throwInternalError( + "value size error: got " + (pos - start) + + " expected " + getValueLen(v, handler)); + } + } + } + + /** + * Read a value. + * + * @return the value + */ + public Value readValue() { + int type = data[pos++] & 255; + switch (type) { + case Value.NULL: + return ValueNull.INSTANCE; + case BOOLEAN_TRUE: + return ValueBoolean.TRUE; + case BOOLEAN_FALSE: + return ValueBoolean.FALSE; + case INT_NEG: + return ValueInt.get(-readVarInt()); + case Value.ENUM: + case Value.INT: + return ValueInt.get(readVarInt()); + case LONG_NEG: + return ValueLong.get(-readVarLong()); + case Value.LONG: + return ValueLong.get(readVarLong()); + case Value.BYTE: + return ValueByte.get(readByte()); + case Value.SHORT: + return ValueShort.get(readShortInt()); + case DECIMAL_0_1: + return (ValueDecimal) ValueDecimal.ZERO; + case DECIMAL_0_1 + 1: + return (ValueDecimal) ValueDecimal.ONE; + case DECIMAL_SMALL_0: + return ValueDecimal.get(BigDecimal.valueOf(readVarLong())); + case DECIMAL_SMALL: { + int scale = readVarInt(); + return ValueDecimal.get(BigDecimal.valueOf(readVarLong(), scale)); + } + case Value.DECIMAL: { + int scale = readVarInt(); + int len = readVarInt(); + byte[] buff = Utils.newBytes(len); + read(buff, 0, len); + BigInteger b = new BigInteger(buff); + return ValueDecimal.get(new BigDecimal(b, scale)); + } + case LOCAL_DATE: { + return ValueDate.fromDateValue(readVarLong()); + } + case Value.DATE: { + long x = 
readVarLong() * MILLIS_PER_MINUTE; + return ValueDate.fromMillis(DateTimeUtils.getTimeUTCWithoutDst(x)); + } + case LOCAL_TIME: { + long nanos = readVarLong() * 1_000_000 + readVarLong(); + return ValueTime.fromNanos(nanos); + } + case Value.TIME: + // need to normalize the year, month and day + return ValueTime.fromMillis( + DateTimeUtils.getTimeUTCWithoutDst(readVarLong())); + case LOCAL_TIMESTAMP: { + long dateValue = readVarLong(); + long nanos = readVarLong() * 1_000_000 + readVarLong(); + return ValueTimestamp.fromDateValueAndNanos(dateValue, nanos); + } + case Value.TIMESTAMP: { + return ValueTimestamp.fromMillisNanos( + DateTimeUtils.getTimeUTCWithoutDst(readVarLong()), + readVarInt()); + } + case Value.TIMESTAMP_TZ: { + long dateValue = readVarLong(); + long nanos = readVarLong(); + short tz = (short) readVarInt(); + return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, nanos, tz); + } + case Value.BYTES: { + int len = readVarInt(); + byte[] b = Utils.newBytes(len); + read(b, 0, len); + return ValueBytes.getNoCopy(b); + } + case Value.GEOMETRY: { + int len = readVarInt(); + byte[] b = Utils.newBytes(len); + read(b, 0, len); + return ValueGeometry.get(b); + } + case Value.JAVA_OBJECT: { + int len = readVarInt(); + byte[] b = Utils.newBytes(len); + read(b, 0, len); + return ValueJavaObject.getNoCopy(null, b, handler); + } + case Value.UUID: + return ValueUuid.get(readLong(), readLong()); + case Value.STRING: + return ValueString.get(readString()); + case Value.STRING_IGNORECASE: + return ValueStringIgnoreCase.get(readString()); + case Value.STRING_FIXED: + return ValueStringFixed.get(readString()); + case FLOAT_0_1: + return ValueFloat.get(0); + case FLOAT_0_1 + 1: + return ValueFloat.get(1); + case DOUBLE_0_1: + return ValueDouble.get(0); + case DOUBLE_0_1 + 1: + return ValueDouble.get(1); + case Value.DOUBLE: + return ValueDouble.get(Double.longBitsToDouble( + Long.reverse(readVarLong()))); + case Value.FLOAT: + return 
ValueFloat.get(Float.intBitsToFloat( + Integer.reverse(readVarInt()))); + case Value.BLOB: + case Value.CLOB: { + int smallLen = readVarInt(); + if (smallLen >= 0) { + byte[] small = Utils.newBytes(smallLen); + read(small, 0, smallLen); + return ValueLobDb.createSmallLob(type, small); + } else if (smallLen == -3) { + int tableId = readVarInt(); + long lobId = readVarLong(); + long precision = readVarLong(); + return ValueLobDb.create(type, handler, tableId, + lobId, null, precision); + } else { + int tableId = readVarInt(); + int objectId = readVarInt(); + long precision = 0; + boolean compression = false; + // -1: regular; -2: regular, but not linked (in this case: + // including file name) + if (smallLen == -1 || smallLen == -2) { + precision = readVarLong(); + compression = readByte() == 1; + } + if (smallLen == -2) { + String filename = readString(); + return ValueLob.openUnlinked(type, handler, tableId, + objectId, precision, compression, filename); + } + return ValueLob.openLinked(type, handler, tableId, + objectId, precision, compression); + } + } + case Value.ARRAY: { + int len = readVarInt(); + Value[] list = new Value[len]; + for (int i = 0; i < len; i++) { + list[i] = readValue(); + } + return ValueArray.get(list); + } + case Value.RESULT_SET: { + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + int columns = readVarInt(); + for (int i = 0; i < columns; i++) { + rs.addColumn(readString(), readVarInt(), readVarInt(), readVarInt()); + } + while (readByte() != 0) { + Object[] o = new Object[columns]; + for (int i = 0; i < columns; i++) { + o[i] = readValue().getObject(); + } + rs.addRow(o); + } + return ValueResultSet.get(rs); + } + default: + if (type >= INT_0_15 && type < INT_0_15 + 16) { + return ValueInt.get(type - INT_0_15); + } else if (type >= LONG_0_7 && type < LONG_0_7 + 8) { + return ValueLong.get(type - LONG_0_7); + } else if (type >= BYTES_0_31 && type < BYTES_0_31 + 32) { + int len = type - BYTES_0_31; + byte[] b = 
Utils.newBytes(len); + read(b, 0, len); + return ValueBytes.getNoCopy(b); + } else if (type >= STRING_0_31 && type < STRING_0_31 + 32) { + return ValueString.get(readString(type - STRING_0_31)); + } + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, "type: " + type); + } + } + + /** + * Calculate the number of bytes required to encode the given value. + * + * @param v the value + * @return the number of bytes required to store this value + */ + public int getValueLen(Value v) { + return getValueLen(v, handler); + } + + /** + * Calculate the number of bytes required to encode the given value. + * + * @param v the value + * @param handler the data handler for lobs + * @return the number of bytes required to store this value + */ + public static int getValueLen(Value v, DataHandler handler) { + if (v == ValueNull.INSTANCE) { + return 1; + } + switch (v.getType()) { + case Value.BOOLEAN: + return 1; + case Value.BYTE: + return 2; + case Value.SHORT: + return 3; + case Value.ENUM: + case Value.INT: { + int x = v.getInt(); + if (x < 0) { + return 1 + getVarIntLen(-x); + } else if (x < 16) { + return 1; + } else { + return 1 + getVarIntLen(x); + } + } + case Value.LONG: { + long x = v.getLong(); + if (x < 0) { + return 1 + getVarLongLen(-x); + } else if (x < 8) { + return 1; + } else { + return 1 + getVarLongLen(x); + } + } + case Value.DOUBLE: { + double x = v.getDouble(); + if (x == 1.0d) { + return 1; + } + long d = Double.doubleToLongBits(x); + if (d == ValueDouble.ZERO_BITS) { + return 1; + } + return 1 + getVarLongLen(Long.reverse(d)); + } + case Value.FLOAT: { + float x = v.getFloat(); + if (x == 1.0f) { + return 1; + } + int f = Float.floatToIntBits(x); + if (f == ValueFloat.ZERO_BITS) { + return 1; + } + return 1 + getVarIntLen(Integer.reverse(f)); + } + case Value.STRING: { + String s = v.getString(); + int len = s.length(); + if (len < 32) { + return 1 + getStringWithoutLengthLen(s, len); + } + return 1 + getStringLen(s); + } + case Value.STRING_IGNORECASE: + 
case Value.STRING_FIXED: + return 1 + getStringLen(v.getString()); + case Value.DECIMAL: { + BigDecimal x = v.getBigDecimal(); + if (BigDecimal.ZERO.equals(x)) { + return 1; + } else if (BigDecimal.ONE.equals(x)) { + return 1; + } + int scale = x.scale(); + BigInteger b = x.unscaledValue(); + int bits = b.bitLength(); + if (bits <= 63) { + if (scale == 0) { + return 1 + getVarLongLen(b.longValue()); + } + return 1 + getVarIntLen(scale) + getVarLongLen(b.longValue()); + } + byte[] bytes = b.toByteArray(); + return 1 + getVarIntLen(scale) + getVarIntLen(bytes.length) + bytes.length; + } + case Value.TIME: + if (STORE_LOCAL_TIME) { + long nanos = ((ValueTime) v).getNanos(); + long millis = nanos / 1_000_000; + nanos -= millis * 1_000_000; + return 1 + getVarLongLen(millis) + getVarLongLen(nanos); + } + return 1 + getVarLongLen(DateTimeUtils.getTimeLocalWithoutDst(v.getTime())); + case Value.DATE: { + if (STORE_LOCAL_TIME) { + long dateValue = ((ValueDate) v).getDateValue(); + return 1 + getVarLongLen(dateValue); + } + long x = DateTimeUtils.getTimeLocalWithoutDst(v.getDate()); + return 1 + getVarLongLen(x / MILLIS_PER_MINUTE); + } + case Value.TIMESTAMP: { + if (STORE_LOCAL_TIME) { + ValueTimestamp ts = (ValueTimestamp) v; + long dateValue = ts.getDateValue(); + long nanos = ts.getTimeNanos(); + long millis = nanos / 1_000_000; + nanos -= millis * 1_000_000; + return 1 + getVarLongLen(dateValue) + getVarLongLen(millis) + + getVarLongLen(nanos); + } + Timestamp ts = v.getTimestamp(); + return 1 + getVarLongLen(DateTimeUtils.getTimeLocalWithoutDst(ts)) + + getVarIntLen(ts.getNanos() % 1_000_000); + } + case Value.TIMESTAMP_TZ: { + ValueTimestampTimeZone ts = (ValueTimestampTimeZone) v; + long dateValue = ts.getDateValue(); + long nanos = ts.getTimeNanos(); + short tz = ts.getTimeZoneOffsetMins(); + return 1 + getVarLongLen(dateValue) + getVarLongLen(nanos) + + getVarIntLen(tz); + } + case Value.GEOMETRY: + case Value.JAVA_OBJECT: { + byte[] b = v.getBytesNoCopy(); + 
return 1 + getVarIntLen(b.length) + b.length; + } + case Value.BYTES: { + byte[] b = v.getBytesNoCopy(); + int len = b.length; + if (len < 32) { + return 1 + b.length; + } + return 1 + getVarIntLen(b.length) + b.length; + } + case Value.UUID: + return 1 + LENGTH_LONG + LENGTH_LONG; + case Value.BLOB: + case Value.CLOB: { + int len = 1; + if (v instanceof ValueLob) { + ValueLob lob = (ValueLob) v; + lob.convertToFileIfRequired(handler); + byte[] small = lob.getSmall(); + if (small == null) { + int t = -1; + if (!lob.isLinkedToTable()) { + t = -2; + } + len += getVarIntLen(t); + len += getVarIntLen(lob.getTableId()); + len += getVarIntLen(lob.getObjectId()); + len += getVarLongLen(lob.getPrecision()); + len += 1; + if (t == -2) { + len += getStringLen(lob.getFileName()); + } + } else { + len += getVarIntLen(small.length); + len += small.length; + } + } else { + ValueLobDb lob = (ValueLobDb) v; + byte[] small = lob.getSmall(); + if (small == null) { + len += getVarIntLen(-3); + len += getVarIntLen(lob.getTableId()); + len += getVarLongLen(lob.getLobId()); + len += getVarLongLen(lob.getPrecision()); + } else { + len += getVarIntLen(small.length); + len += small.length; + } + } + return len; + } + case Value.ARRAY: { + Value[] list = ((ValueArray) v).getList(); + int len = 1 + getVarIntLen(list.length); + for (Value x : list) { + len += getValueLen(x, handler); + } + return len; + } + case Value.RESULT_SET: { + int len = 1; + try { + ResultSet rs = ((ValueResultSet) v).getResultSet(); + rs.beforeFirst(); + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + len += getVarIntLen(columnCount); + for (int i = 0; i < columnCount; i++) { + len += getStringLen(meta.getColumnName(i + 1)); + len += getVarIntLen(meta.getColumnType(i + 1)); + len += getVarIntLen(meta.getPrecision(i + 1)); + len += getVarIntLen(meta.getScale(i + 1)); + } + while (rs.next()) { + len++; + for (int i = 0; i < columnCount; i++) { + int t = 
DataType.getValueTypeFromResultSet(meta, i + 1); + Value val = DataType.readValue(null, rs, i + 1, t); + len += getValueLen(val, handler); + } + } + len++; + rs.beforeFirst(); + } catch (SQLException e) { + throw DbException.convert(e); + } + return len; + } + default: + throw DbException.throwInternalError("type=" + v.getType()); + } + } + + /** + * Set the current read / write position. + * + * @param pos the new position + */ + public void setPos(int pos) { + this.pos = pos; + } + + /** + * Write a short integer at the current position. + * The current position is incremented. + * + * @param x the value + */ + public void writeShortInt(int x) { + byte[] buff = data; + buff[pos++] = (byte) (x >> 8); + buff[pos++] = (byte) x; + } + + /** + * Read a short integer at the current position. + * The current position is incremented. + * + * @return the value + */ + public short readShortInt() { + byte[] buff = data; + return (short) (((buff[pos++] & 0xff) << 8) + (buff[pos++] & 0xff)); + } + + /** + * Shrink the array to this size. + * + * @param size the new size + */ + public void truncate(int size) { + if (pos > size) { + byte[] buff = Arrays.copyOf(data, size); + this.pos = size; + data = buff; + } + } + + /** + * The number of bytes required for a variable size int. + * + * @param x the value + * @return the len + */ + private static int getVarIntLen(int x) { + if ((x & (-1 << 7)) == 0) { + return 1; + } else if ((x & (-1 << 14)) == 0) { + return 2; + } else if ((x & (-1 << 21)) == 0) { + return 3; + } else if ((x & (-1 << 28)) == 0) { + return 4; + } + return 5; + } + + /** + * Write a variable size int. + * + * @param x the value + */ + public void writeVarInt(int x) { + while ((x & ~0x7f) != 0) { + data[pos++] = (byte) (0x80 | (x & 0x7f)); + x >>>= 7; + } + data[pos++] = (byte) x; + } + + /** + * Read a variable size int.
+ * + * @return the value + */ + public int readVarInt() { + int b = data[pos]; + if (b >= 0) { + pos++; + return b; + } + // a separate function so that this one can be inlined + return readVarIntRest(b); + } + + private int readVarIntRest(int b) { + int x = b & 0x7f; + b = data[pos + 1]; + if (b >= 0) { + pos += 2; + return x | (b << 7); + } + x |= (b & 0x7f) << 7; + b = data[pos + 2]; + if (b >= 0) { + pos += 3; + return x | (b << 14); + } + x |= (b & 0x7f) << 14; + b = data[pos + 3]; + if (b >= 0) { + pos += 4; + return x | b << 21; + } + x |= ((b & 0x7f) << 21) | (data[pos + 4] << 28); + pos += 5; + return x; + } + + /** + * The number of bytes required for a variable size long. + * + * @param x the value + * @return the len + */ + public static int getVarLongLen(long x) { + int i = 1; + while (true) { + x >>>= 7; + if (x == 0) { + return i; + } + i++; + } + } + + /** + * Write a variable size long. + * + * @param x the value + */ + public void writeVarLong(long x) { + while ((x & ~0x7f) != 0) { + data[pos++] = (byte) ((x & 0x7f) | 0x80); + x >>>= 7; + } + data[pos++] = (byte) x; + } + + /** + * Read a variable size long. + * + * @return the value + */ + public long readVarLong() { + long x = data[pos++]; + if (x >= 0) { + return x; + } + x &= 0x7f; + for (int s = 7;; s += 7) { + long b = data[pos++]; + x |= (b & 0x7f) << s; + if (b >= 0) { + return x; + } + } + } + + /** + * Check if there is still enough capacity in the buffer. + * This method extends the buffer if required. 
+ * + * @param plus the number of additional bytes required + */ + public void checkCapacity(int plus) { + if (pos + plus >= data.length) { + // a separate method to simplify inlining + expand(plus); + } + } + + private void expand(int plus) { + // must copy everything, because pos could be 0 and data may be + // still required + data = Utils.copyBytes(data, (data.length + plus) * 2); + } + + /** + * Fill up the buffer with empty space and an (initially empty) checksum + * until the size is a multiple of Constants.FILE_BLOCK_SIZE. + */ + public void fillAligned() { + // 0..6 > 8, 7..14 > 16, 15..22 > 24, ... + int len = MathUtils.roundUpInt(pos + 2, Constants.FILE_BLOCK_SIZE); + pos = len; + if (data.length < len) { + checkCapacity(len - data.length); + } + } + + /** + * Copy a String from a reader to an output stream. + * + * @param source the reader + * @param target the output stream + */ + public static void copyString(Reader source, OutputStream target) + throws IOException { + char[] buff = new char[Constants.IO_BUFFER_SIZE]; + Data d = new Data(null, new byte[3 * Constants.IO_BUFFER_SIZE]); + while (true) { + int l = source.read(buff); + if (l < 0) { + break; + } + d.writeStringWithoutLength(buff, l); + target.write(d.data, 0, d.pos); + d.reset(); + } + } + + public DataHandler getHandler() { + return handler; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/DataHandler.java b/modules/h2/src/main/java/org/h2/store/DataHandler.java new file mode 100644 index 0000000000000..fed9ddbe81a52 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/DataHandler.java @@ -0,0 +1,124 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.store; + +import org.h2.api.JavaObjectSerializer; +import org.h2.message.DbException; +import org.h2.util.SmallLRUCache; +import org.h2.util.TempFileDeleter; +import org.h2.value.CompareMode; + +/** + * A data handler contains a number of callback methods, mostly related to CLOB + * and BLOB handling. The most important implementing class is a database. + */ +public interface DataHandler { + + /** + * Get the database path. + * + * @return the database path + */ + String getDatabasePath(); + + /** + * Open a file at the given location. + * + * @param name the file name + * @param mode the mode + * @param mustExist whether the file must already exist + * @return the file + */ + FileStore openFile(String name, String mode, boolean mustExist); + + /** + * Check if the simulated power failure occurred. + * This call will decrement the countdown. + * + * @throws DbException if the simulated power failure occurred + */ + void checkPowerOff() throws DbException; + + /** + * Check if writing is allowed. + * + * @throws DbException if it is not allowed + */ + void checkWritingAllowed() throws DbException; + + /** + * Get the maximum length of an in-place large object. + * + * @return the maximum size + */ + int getMaxLengthInplaceLob(); + + /** + * Get the compression algorithm used for large objects. + * + * @param type the data type (CLOB or BLOB) + * @return the compression algorithm, or null + */ + String getLobCompressionAlgorithm(int type); + + /** + * Get the temp file deleter mechanism. + * + * @return the temp file deleter + */ + TempFileDeleter getTempFileDeleter(); + + /** + * Get the synchronization object for lob operations. + * + * @return the synchronization object + */ + Object getLobSyncObject(); + + /** + * Get the lob file list cache if it is used. + * + * @return the cache or null + */ + SmallLRUCache getLobFileListCache(); + + /** + * Get the lob storage mechanism to use.
+ * + * @return the lob storage mechanism + */ + LobStorageInterface getLobStorage(); + + /** + * Read from a lob. + * + * @param lobId the lob id + * @param hmac the message authentication code + * @param offset the offset within the lob + * @param buff the target buffer + * @param off the offset within the target buffer + * @param length the number of bytes to read + * @return the number of bytes read + */ + int readLob(long lobId, byte[] hmac, long offset, byte[] buff, int off, + int length); + + /** + * Return the serializer to be used for java objects being stored in a + * column of type OTHER. + * + * @return the serializer to be used for java objects being stored in a + * column of type OTHER + */ + JavaObjectSerializer getJavaObjectSerializer(); + + /** + * Return the compare mode. + * + * @return the compare mode + */ + CompareMode getCompareMode(); +} diff --git a/modules/h2/src/main/java/org/h2/store/DataReader.java b/modules/h2/src/main/java/org/h2/store/DataReader.java new file mode 100644 index 0000000000000..c9911afb02a0a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/DataReader.java @@ -0,0 +1,201 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.EOFException; +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import org.h2.util.IOUtils; + +/** + * This class is backed by an input stream and supports reading values and + * variable size data. + */ +public class DataReader extends Reader { + + private final InputStream in; + + /** + * Create a new data reader. + * + * @param in the input stream + */ + public DataReader(InputStream in) { + this.in = in; + } + + /** + * Read a byte.
+ * + * @return the byte + */ + public byte readByte() throws IOException { + int x = in.read(); + if (x < 0) { + throw new FastEOFException(); + } + return (byte) x; + } + + /** + * Read a variable size integer. + * + * @return the value + */ + public int readVarInt() throws IOException { + int b = readByte(); + if (b >= 0) { + return b; + } + int x = b & 0x7f; + b = readByte(); + if (b >= 0) { + return x | (b << 7); + } + x |= (b & 0x7f) << 7; + b = readByte(); + if (b >= 0) { + return x | (b << 14); + } + x |= (b & 0x7f) << 14; + b = readByte(); + if (b >= 0) { + return x | b << 21; + } + return x | ((b & 0x7f) << 21) | (readByte() << 28); + } + + /** + * Read a variable size long. + * + * @return the value + */ + public long readVarLong() throws IOException { + long x = readByte(); + if (x >= 0) { + return x; + } + x &= 0x7f; + for (int s = 7;; s += 7) { + long b = readByte(); + x |= (b & 0x7f) << s; + if (b >= 0) { + return x; + } + } + } + + /** + * Read an integer. + * + * @return the value + */ + // public int readInt() throws IOException { + // return (read() << 24) + ((read() & 0xff) << 16) + + // ((read() & 0xff) << 8) + (read() & 0xff); + //} + + /** + * Read a long. + * + * @return the value + */ + // public long readLong() throws IOException { + // return ((long) (readInt()) << 32) + (readInt() & 0xffffffffL); + // } + + /** + * Read a number of bytes. + * + * @param buff the target buffer + * @param len the number of bytes to read + */ + public void readFully(byte[] buff, int len) throws IOException { + int got = IOUtils.readFully(in, buff, len); + if (got < len) { + throw new FastEOFException(); + } + } + + /** + * Read a string from the stream. 
+ * + * @return the string + */ + public String readString() throws IOException { + int len = readVarInt(); + return readString(len); + } + + private String readString(int len) throws IOException { + char[] chars = new char[len]; + for (int i = 0; i < len; i++) { + chars[i] = readChar(); + } + return new String(chars); + } + + /** + * Read one character from the input stream. + * + * @return the character + */ + private char readChar() throws IOException { + int x = readByte() & 0xff; + if (x < 0x80) { + return (char) x; + } else if (x >= 0xe0) { + return (char) (((x & 0xf) << 12) + + ((readByte() & 0x3f) << 6) + + (readByte() & 0x3f)); + } else { + return (char) (((x & 0x1f) << 6) + + (readByte() & 0x3f)); + } + } + + @Override + public void close() throws IOException { + // ignore + } + + @Override + public int read(char[] buff, int off, int len) throws IOException { + if (len == 0) { + return 0; + } + int i = 0; + try { + for (; i < len; i++) { + buff[off + i] = readChar(); + } + return len; + } catch (EOFException e) { + if (i == 0) { + return -1; + } + return i; + } + } + + /** + * Constructing such an EOF exception is fast, because the stack trace is + * not filled in. If used in a static context, this will also avoid + * classloader memory leaks. + */ + static class FastEOFException extends EOFException { + + private static final long serialVersionUID = 1L; + + @Override + public synchronized Throwable fillInStackTrace() { + return null; + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/FileLister.java b/modules/h2/src/main/java/org/h2/store/FileLister.java new file mode 100644 index 0000000000000..8036e950a9bc3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/FileLister.java @@ -0,0 +1,123 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.nio.channels.FileChannel; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.List; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.message.TraceSystem; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FileUtils; +import org.h2.util.New; + +/** + * Utility class to list the files of a database. + */ +public class FileLister { + + private FileLister() { + // utility class + } + + /** + * Try to lock the database, and then unlock it. If this worked, the + * .lock.db file will be removed. + * + * @param files the database files to check + * @param message the text to include in the error message + * @throws SQLException if it failed + */ + public static void tryUnlockDatabase(List files, String message) + throws SQLException { + for (String fileName : files) { + if (fileName.endsWith(Constants.SUFFIX_LOCK_FILE)) { + FileLock lock = new FileLock(new TraceSystem(null), fileName, + Constants.LOCK_SLEEP); + try { + lock.lock(FileLockMethod.FILE); + lock.unlock(); + } catch (DbException e) { + throw DbException.get( + ErrorCode.CANNOT_CHANGE_SETTING_WHEN_OPEN_1, + message).getSQLException(); + } + } else if (fileName.endsWith(Constants.SUFFIX_MV_FILE)) { + try (FileChannel f = FilePath.get(fileName).open("r")) { + java.nio.channels.FileLock lock = f.tryLock(0, Long.MAX_VALUE, true); + lock.release(); + } catch (Exception e) { + throw DbException.get( + ErrorCode.CANNOT_CHANGE_SETTING_WHEN_OPEN_1, e, + message).getSQLException(); + } + } + } + } + + /** + * Normalize the directory name. + * + * @param dir the directory (null for the current directory) + * @return the normalized directory name + */ + public static String getDir(String dir) { + if (dir == null || dir.equals("")) { + return "."; + } + return FileUtils.toRealPath(dir); + } + + /** + * Get the list of database files. 
+ * + * @param dir the directory (must be normalized) + * @param db the database name (null for all databases) + * @param all if true, files such as the lock, trace, and lob + * files are included. If false, only data, index, log, + * and lob files are returned + * @return the list of files + */ + public static ArrayList getDatabaseFiles(String dir, String db, + boolean all) { + ArrayList files = New.arrayList(); + // for Windows, File.getCanonicalPath("...b.") returns just "...b" + String start = db == null ? null : (FileUtils.toRealPath(dir + "/" + db) + "."); + for (String f : FileUtils.newDirectoryStream(dir)) { + boolean ok = false; + if (f.endsWith(Constants.SUFFIX_LOBS_DIRECTORY)) { + if (start == null || f.startsWith(start)) { + files.addAll(getDatabaseFiles(f, null, all)); + ok = true; + } + } else if (f.endsWith(Constants.SUFFIX_LOB_FILE)) { + ok = true; + } else if (f.endsWith(Constants.SUFFIX_PAGE_FILE)) { + ok = true; + } else if (f.endsWith(Constants.SUFFIX_MV_FILE)) { + ok = true; + } else if (all) { + if (f.endsWith(Constants.SUFFIX_LOCK_FILE)) { + ok = true; + } else if (f.endsWith(Constants.SUFFIX_TEMP_FILE)) { + ok = true; + } else if (f.endsWith(Constants.SUFFIX_TRACE_FILE)) { + ok = true; + } + } + if (ok) { + if (db == null || f.startsWith(start)) { + files.add(f); + } + } + } + return files; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/FileLock.java b/modules/h2/src/main/java/org/h2/store/FileLock.java new file mode 100644 index 0000000000000..d054e264a3e96 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/FileLock.java @@ -0,0 +1,519 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.OutputStream; +import java.net.BindException; +import java.net.ConnectException; +import java.net.InetAddress; +import java.net.ServerSocket; +import java.net.Socket; +import java.net.UnknownHostException; +import java.util.Properties; +import org.h2.Driver; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.SessionRemote; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.message.TraceSystem; +import org.h2.store.fs.FileUtils; +import org.h2.util.MathUtils; +import org.h2.util.NetUtils; +import org.h2.util.SortedProperties; +import org.h2.util.StringUtils; +import org.h2.value.Transfer; + +/** + * The file lock is used to lock a database so that only one process can write + * to it. It uses a cooperative locking protocol. Usually a .lock.db file is + * used, but locking by creating a socket is supported as well. + */ +public class FileLock implements Runnable { + + private static final String MAGIC = "FileLock"; + private static final String FILE = "file"; + private static final String SOCKET = "socket"; + private static final String SERIALIZED = "serialized"; + private static final int RANDOM_BYTES = 16; + private static final int SLEEP_GAP = 25; + private static final int TIME_GRANULARITY = 2000; + + /** + * The lock file name. + */ + private volatile String fileName; + + /** + * The server socket (only used when using the SOCKET mode). + */ + private volatile ServerSocket serverSocket; + + /** + * Whether the file is locked. + */ + private volatile boolean locked; + + /** + * The number of milliseconds to sleep after checking a file. + */ + private final int sleep; + + /** + * The trace object. + */ + private final Trace trace; + + /** + * The last time the lock file was written. 
+ */ + private long lastWrite; + + private String method, ipAddress; + private Properties properties; + private String uniqueId; + private Thread watchdog; + + /** + * Create a new file locking object. + * + * @param traceSystem the trace system to use + * @param fileName the file name + * @param sleep the number of milliseconds to sleep + */ + public FileLock(TraceSystem traceSystem, String fileName, int sleep) { + this.trace = traceSystem == null ? + null : traceSystem.getTrace(Trace.FILE_LOCK); + this.fileName = fileName; + this.sleep = sleep; + } + + /** + * Lock the file if possible. A file may only be locked once. + * + * @param fileLockMethod the file locking method to use + * @throws DbException if locking was not successful + */ + public synchronized void lock(FileLockMethod fileLockMethod) { + checkServer(); + if (locked) { + DbException.throwInternalError("already locked"); + } + switch (fileLockMethod) { + case FILE: + lockFile(); + break; + case SOCKET: + lockSocket(); + break; + case SERIALIZED: + lockSerialized(); + break; + case FS: + case NO: + break; + } + locked = true; + } + + /** + * Unlock the file. The watchdog thread is stopped. This method does nothing + * if the file is already unlocked. + */ + public synchronized void unlock() { + if (!locked) { + return; + } + locked = false; + try { + if (watchdog != null) { + watchdog.interrupt(); + } + } catch (Exception e) { + trace.debug(e, "unlock"); + } + try { + if (fileName != null) { + if (load().equals(properties)) { + FileUtils.delete(fileName); + } + } + if (serverSocket != null) { + serverSocket.close(); + } + } catch (Exception e) { + trace.debug(e, "unlock"); + } finally { + fileName = null; + serverSocket = null; + } + try { + if (watchdog != null) { + watchdog.join(); + } + } catch (Exception e) { + trace.debug(e, "unlock"); + } finally { + watchdog = null; + } + } + + /** + * Add or change a setting to the properties. This call does not save the + * file. 
+ * + * @param key the key + * @param value the value + */ + public void setProperty(String key, String value) { + if (value == null) { + properties.remove(key); + } else { + properties.put(key, value); + } + } + + /** + * Save the lock file. + * + * @return the saved properties + */ + public Properties save() { + try { + try (OutputStream out = FileUtils.newOutputStream(fileName, false)) { + properties.store(out, MAGIC); + } + lastWrite = FileUtils.lastModified(fileName); + if (trace.isDebugEnabled()) { + trace.debug("save " + properties); + } + return properties; + } catch (IOException e) { + throw getExceptionFatal("Could not save properties " + fileName, e); + } + } + + private void checkServer() { + Properties prop = load(); + String server = prop.getProperty("server"); + if (server == null) { + return; + } + boolean running = false; + String id = prop.getProperty("id"); + try { + Socket socket = NetUtils.createSocket(server, + Constants.DEFAULT_TCP_PORT, false); + Transfer transfer = new Transfer(null, socket); + transfer.init(); + transfer.writeInt(Constants.TCP_PROTOCOL_VERSION_MIN_SUPPORTED); + transfer.writeInt(Constants.TCP_PROTOCOL_VERSION_MAX_SUPPORTED); + transfer.writeString(null); + transfer.writeString(null); + transfer.writeString(id); + transfer.writeInt(SessionRemote.SESSION_CHECK_KEY); + transfer.flush(); + int state = transfer.readInt(); + if (state == SessionRemote.STATUS_OK) { + running = true; + } + transfer.close(); + socket.close(); + } catch (IOException e) { + return; + } + if (running) { + DbException e = DbException.get( + ErrorCode.DATABASE_ALREADY_OPEN_1, "Server is running"); + throw e.addSQL(server + "/" + id); + } + } + + /** + * Load the properties file. 
+ * + * @return the properties + */ + public Properties load() { + IOException lastException = null; + for (int i = 0; i < 5; i++) { + try { + Properties p2 = SortedProperties.loadProperties(fileName); + if (trace.isDebugEnabled()) { + trace.debug("load " + p2); + } + return p2; + } catch (IOException e) { + lastException = e; + } + } + throw getExceptionFatal( + "Could not load properties " + fileName, lastException); + } + + private void waitUntilOld() { + for (int i = 0; i < 2 * TIME_GRANULARITY / SLEEP_GAP; i++) { + long last = FileUtils.lastModified(fileName); + long dist = System.currentTimeMillis() - last; + if (dist < -TIME_GRANULARITY) { + // lock file modified in the future - + // wait for a bit longer than usual + try { + Thread.sleep(2 * (long) sleep); + } catch (Exception e) { + trace.debug(e, "sleep"); + } + return; + } else if (dist > TIME_GRANULARITY) { + return; + } + try { + Thread.sleep(SLEEP_GAP); + } catch (Exception e) { + trace.debug(e, "sleep"); + } + } + throw getExceptionFatal("Lock file recently modified", null); + } + + private void setUniqueId() { + byte[] bytes = MathUtils.secureRandomBytes(RANDOM_BYTES); + String random = StringUtils.convertBytesToHex(bytes); + uniqueId = Long.toHexString(System.currentTimeMillis()) + random; + properties.setProperty("id", uniqueId); + } + + private void lockSerialized() { + method = SERIALIZED; + FileUtils.createDirectories(FileUtils.getParent(fileName)); + if (FileUtils.createFile(fileName)) { + properties = new SortedProperties(); + properties.setProperty("method", String.valueOf(method)); + setUniqueId(); + save(); + } else { + while (true) { + try { + properties = load(); + } catch (DbException e) { + // ignore + } + return; + } + } + } + + private void lockFile() { + method = FILE; + properties = new SortedProperties(); + properties.setProperty("method", String.valueOf(method)); + setUniqueId(); + FileUtils.createDirectories(FileUtils.getParent(fileName)); + if (!FileUtils.createFile(fileName)) 
{ + waitUntilOld(); + String m2 = load().getProperty("method", FILE); + if (!m2.equals(FILE)) { + throw getExceptionFatal("Unsupported lock method " + m2, null); + } + save(); + sleep(2 * sleep); + if (!load().equals(properties)) { + throw getExceptionAlreadyInUse("Locked by another process: " + fileName); + } + FileUtils.delete(fileName); + if (!FileUtils.createFile(fileName)) { + throw getExceptionFatal("Another process was faster", null); + } + } + save(); + sleep(SLEEP_GAP); + if (!load().equals(properties)) { + fileName = null; + throw getExceptionFatal("Concurrent update", null); + } + locked = true; + watchdog = new Thread(this, "H2 File Lock Watchdog " + fileName); + Driver.setThreadContextClassLoader(watchdog); + watchdog.setDaemon(true); + watchdog.setPriority(Thread.MAX_PRIORITY - 1); + watchdog.start(); + } + + private void lockSocket() { + method = SOCKET; + properties = new SortedProperties(); + properties.setProperty("method", String.valueOf(method)); + setUniqueId(); + // if this returns 127.0.0.1, + // the computer is probably not networked + ipAddress = NetUtils.getLocalAddress(); + FileUtils.createDirectories(FileUtils.getParent(fileName)); + if (!FileUtils.createFile(fileName)) { + waitUntilOld(); + long read = FileUtils.lastModified(fileName); + Properties p2 = load(); + String m2 = p2.getProperty("method", SOCKET); + if (m2.equals(FILE)) { + lockFile(); + return; + } else if (!m2.equals(SOCKET)) { + throw getExceptionFatal("Unsupported lock method " + m2, null); + } + String ip = p2.getProperty("ipAddress", ipAddress); + if (!ipAddress.equals(ip)) { + throw getExceptionAlreadyInUse("Locked by another computer: " + ip); + } + String port = p2.getProperty("port", "0"); + int portId = Integer.parseInt(port); + InetAddress address; + try { + address = InetAddress.getByName(ip); + } catch (UnknownHostException e) { + throw getExceptionFatal("Unknown host " + ip, e); + } + for (int i = 0; i < 3; i++) { + try { + Socket s = new Socket(address, 
portId); + s.close(); + throw getExceptionAlreadyInUse("Locked by another process"); + } catch (BindException e) { + throw getExceptionFatal("Bind Exception", null); + } catch (ConnectException e) { + trace.debug(e, "socket not connected to port " + port); + } catch (IOException e) { + throw getExceptionFatal("IOException", null); + } + } + if (read != FileUtils.lastModified(fileName)) { + throw getExceptionFatal("Concurrent update", null); + } + FileUtils.delete(fileName); + if (!FileUtils.createFile(fileName)) { + throw getExceptionFatal("Another process was faster", null); + } + } + try { + // 0 to use any free port + serverSocket = NetUtils.createServerSocket(0, false); + int port = serverSocket.getLocalPort(); + properties.setProperty("ipAddress", ipAddress); + properties.setProperty("port", String.valueOf(port)); + } catch (Exception e) { + trace.debug(e, "lock"); + serverSocket = null; + lockFile(); + return; + } + save(); + locked = true; + watchdog = new Thread(this, + "H2 File Lock Watchdog (Socket) " + fileName); + watchdog.setDaemon(true); + watchdog.start(); + } + + private static void sleep(int time) { + try { + Thread.sleep(time); + } catch (InterruptedException e) { + throw getExceptionFatal("Sleep interrupted", e); + } + } + + private static DbException getExceptionFatal(String reason, Throwable t) { + return DbException.get( + ErrorCode.ERROR_OPENING_DATABASE_1, t, reason); + } + + private DbException getExceptionAlreadyInUse(String reason) { + DbException e = DbException.get( + ErrorCode.DATABASE_ALREADY_OPEN_1, reason); + if (fileName != null) { + try { + Properties prop = load(); + String server = prop.getProperty("server"); + if (server != null) { + String serverId = server + "/" + prop.getProperty("id"); + e = e.addSQL(serverId); + } + } catch (DbException e2) { + // ignore + } + } + return e; + } + + /** + * Get the file locking method type given a method name. 
+ * + * @param method the method name + * @return the method type + * @throws DbException if the method name is unknown + */ + public static FileLockMethod getFileLockMethod(String method) { + if (method == null || method.equalsIgnoreCase("FILE")) { + return FileLockMethod.FILE; + } else if (method.equalsIgnoreCase("NO")) { + return FileLockMethod.NO; + } else if (method.equalsIgnoreCase("SOCKET")) { + return FileLockMethod.SOCKET; + } else if (method.equalsIgnoreCase("SERIALIZED")) { + return FileLockMethod.SERIALIZED; + } else if (method.equalsIgnoreCase("FS")) { + return FileLockMethod.FS; + } else { + throw DbException.get( + ErrorCode.UNSUPPORTED_LOCK_METHOD_1, method); + } + } + + public String getUniqueId() { + return uniqueId; + } + + @Override + public void run() { + try { + while (locked && fileName != null) { + // trace.debug("watchdog check"); + try { + if (!FileUtils.exists(fileName) || + FileUtils.lastModified(fileName) != lastWrite) { + save(); + } + Thread.sleep(sleep); + } catch (OutOfMemoryError e) { + // ignore + } catch (InterruptedException e) { + // ignore + } catch (NullPointerException e) { + // ignore + } catch (Exception e) { + trace.debug(e, "watchdog"); + } + } + while (true) { + // take a copy so we don't get an NPE between checking it and using it + ServerSocket local = serverSocket; + if (local == null) { + break; + } + try { + trace.debug("watchdog accept"); + Socket s = local.accept(); + s.close(); + } catch (Exception e) { + trace.debug(e, "watchdog"); + } + } + } catch (Exception e) { + trace.debug(e, "watchdog"); + } + trace.debug("watchdog end"); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/FileLockMethod.java b/modules/h2/src/main/java/org/h2/store/FileLockMethod.java new file mode 100644 index 0000000000000..cb7e998fba527 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/FileLockMethod.java @@ -0,0 +1,35 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +public enum FileLockMethod { + /** + * This locking method means no locking is used at all. + */ + NO, + + /** + * This locking method means the cooperative file locking protocol should be + * used. + */ + FILE, + + /** + * This locking method means a socket is created on the given machine. + */ + SOCKET, + + /** + * This locking method means multiple writers are allowed, and they + * synchronize themselves. + */ + SERIALIZED, + + /** + * Use the file system to lock the file; don't use a separate lock file. + */ + FS +} diff --git a/modules/h2/src/main/java/org/h2/store/FileStore.java b/modules/h2/src/main/java/org/h2/store/FileStore.java new file mode 100644 index 0000000000000..be92f86d0c441 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/FileStore.java @@ -0,0 +1,510 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.lang.ref.Reference; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.security.SecureFileStore; +import org.h2.store.fs.FileUtils; + +/** + * This class is an abstraction of a random access file. + * Each file contains a magic header, and reading / writing is done in blocks. + * See also {@link SecureFileStore} + */ +public class FileStore { + + /** + * The size of the file header in bytes. + */ + public static final int HEADER_LENGTH = 3 * Constants.FILE_BLOCK_SIZE; + + /** + * The magic file header. 
+ */ + private static final String HEADER = + "-- H2 0.5/B -- ".substring(0, Constants.FILE_BLOCK_SIZE - 1) + "\n"; + + /** + * The file name. + */ + protected String name; + + /** + * The callback object is responsible to check access rights, and free up + * disk space if required. + */ + private final DataHandler handler; + + private FileChannel file; + private long filePos; + private long fileLength; + private Reference autoDeleteReference; + private boolean checkedWriting = true; + private final String mode; + private java.nio.channels.FileLock lock; + + /** + * Create a new file using the given settings. + * + * @param handler the callback object + * @param name the file name + * @param mode the access mode ("r", "rw", "rws", "rwd") + */ + protected FileStore(DataHandler handler, String name, String mode) { + this.handler = handler; + this.name = name; + try { + boolean exists = FileUtils.exists(name); + if (exists && !FileUtils.canWrite(name)) { + mode = "r"; + } else { + FileUtils.createDirectories(FileUtils.getParent(name)); + } + file = FileUtils.open(name, mode); + if (exists) { + fileLength = file.size(); + } + } catch (IOException e) { + throw DbException.convertIOException( + e, "name: " + name + " mode: " + mode); + } + this.mode = mode; + } + + /** + * Open a non encrypted file store with the given settings. + * + * @param handler the data handler + * @param name the file name + * @param mode the access mode (r, rw, rws, rwd) + * @return the created object + */ + public static FileStore open(DataHandler handler, String name, String mode) { + return open(handler, name, mode, null, null, 0); + } + + /** + * Open an encrypted file store with the given settings. 
+ * + * @param handler the data handler + * @param name the file name + * @param mode the access mode (r, rw, rws, rwd) + * @param cipher the name of the cipher algorithm + * @param key the encryption key + * @return the created object + */ + public static FileStore open(DataHandler handler, String name, String mode, + String cipher, byte[] key) { + return open(handler, name, mode, cipher, key, + Constants.ENCRYPTION_KEY_HASH_ITERATIONS); + } + + /** + * Open an encrypted file store with the given settings. + * + * @param handler the data handler + * @param name the file name + * @param mode the access mode (r, rw, rws, rwd) + * @param cipher the name of the cipher algorithm + * @param key the encryption key + * @param keyIterations the number of iterations the key should be hashed + * @return the created object + */ + public static FileStore open(DataHandler handler, String name, String mode, + String cipher, byte[] key, int keyIterations) { + FileStore store; + if (cipher == null) { + store = new FileStore(handler, name, mode); + } else { + store = new SecureFileStore(handler, name, mode, + cipher, key, keyIterations); + } + return store; + } + + /** + * Generate the random salt bytes if required. + * + * @return the random salt or the magic + */ + protected byte[] generateSalt() { + return HEADER.getBytes(StandardCharsets.UTF_8); + } + + /** + * Initialize the key using the given salt. + * + * @param salt the salt + */ + @SuppressWarnings("unused") + protected void initKey(byte[] salt) { + // do nothing + } + + public void setCheckedWriting(boolean value) { + this.checkedWriting = value; + } + + private void checkWritingAllowed() { + if (handler != null && checkedWriting) { + handler.checkWritingAllowed(); + } + } + + private void checkPowerOff() { + if (handler != null) { + handler.checkPowerOff(); + } + } + + /** + * Initialize the file. This method will write or check the file header if + * required. 
+ */ + public void init() { + int len = Constants.FILE_BLOCK_SIZE; + byte[] salt; + byte[] magic = HEADER.getBytes(StandardCharsets.UTF_8); + if (length() < HEADER_LENGTH) { + // write unencrypted + checkedWriting = false; + writeDirect(magic, 0, len); + salt = generateSalt(); + writeDirect(salt, 0, len); + initKey(salt); + // write (maybe) encrypted + write(magic, 0, len); + checkedWriting = true; + } else { + // read unencrypted + seek(0); + byte[] buff = new byte[len]; + readFullyDirect(buff, 0, len); + if (!Arrays.equals(buff, magic)) { + throw DbException.get(ErrorCode.FILE_VERSION_ERROR_1, name); + } + salt = new byte[len]; + readFullyDirect(salt, 0, len); + initKey(salt); + // read (maybe) encrypted + readFully(buff, 0, Constants.FILE_BLOCK_SIZE); + if (!Arrays.equals(buff, magic)) { + throw DbException.get(ErrorCode.FILE_ENCRYPTION_ERROR_1, name); + } + } + } + + /** + * Close the file. + */ + public void close() { + if (file != null) { + try { + trace("close", name, file); + file.close(); + } catch (IOException e) { + throw DbException.convertIOException(e, name); + } finally { + file = null; + } + } + } + + /** + * Close the file without throwing any exceptions. Exceptions are simply + * ignored. + */ + public void closeSilently() { + try { + close(); + } catch (Exception e) { + // ignore + } + } + + /** + * Close the file (ignoring exceptions) and delete the file. + */ + public void closeAndDeleteSilently() { + if (file != null) { + closeSilently(); + handler.getTempFileDeleter().deleteFile(autoDeleteReference, name); + name = null; + } + } + + /** + * Read a number of bytes without decrypting. + * + * @param b the target buffer + * @param off the offset + * @param len the number of bytes to read + */ + protected void readFullyDirect(byte[] b, int off, int len) { + readFully(b, off, len); + } + + /** + * Read a number of bytes. 
+ * + * @param b the target buffer + * @param off the offset + * @param len the number of bytes to read + */ + public void readFully(byte[] b, int off, int len) { + if (SysProperties.CHECK && + (len < 0 || len % Constants.FILE_BLOCK_SIZE != 0)) { + DbException.throwInternalError( + "unaligned read " + name + " len " + len); + } + checkPowerOff(); + try { + FileUtils.readFully(file, ByteBuffer.wrap(b, off, len)); + } catch (IOException e) { + throw DbException.convertIOException(e, name); + } + filePos += len; + } + + /** + * Go to the specified file location. + * + * @param pos the location + */ + public void seek(long pos) { + if (SysProperties.CHECK && + pos % Constants.FILE_BLOCK_SIZE != 0) { + DbException.throwInternalError( + "unaligned seek " + name + " pos " + pos); + } + try { + if (pos != filePos) { + file.position(pos); + filePos = pos; + } + } catch (IOException e) { + throw DbException.convertIOException(e, name); + } + } + + /** + * Write a number of bytes without encrypting. + * + * @param b the source buffer + * @param off the offset + * @param len the number of bytes to write + */ + protected void writeDirect(byte[] b, int off, int len) { + write(b, off, len); + } + + /** + * Write a number of bytes. + * + * @param b the source buffer + * @param off the offset + * @param len the number of bytes to write + */ + public void write(byte[] b, int off, int len) { + if (SysProperties.CHECK && (len < 0 || + len % Constants.FILE_BLOCK_SIZE != 0)) { + DbException.throwInternalError( + "unaligned write " + name + " len " + len); + } + checkWritingAllowed(); + checkPowerOff(); + try { + FileUtils.writeFully(file, ByteBuffer.wrap(b, off, len)); + } catch (IOException e) { + closeFileSilently(); + throw DbException.convertIOException(e, name); + } + filePos += len; + fileLength = Math.max(filePos, fileLength); + } + + /** + * Set the length of the file. This will expand or shrink the file. 
+ * + * @param newLength the new file size + */ + public void setLength(long newLength) { + if (SysProperties.CHECK && newLength % Constants.FILE_BLOCK_SIZE != 0) { + DbException.throwInternalError( + "unaligned setLength " + name + " pos " + newLength); + } + checkPowerOff(); + checkWritingAllowed(); + try { + if (newLength > fileLength) { + long pos = filePos; + file.position(newLength - 1); + FileUtils.writeFully(file, ByteBuffer.wrap(new byte[1])); + file.position(pos); + } else { + file.truncate(newLength); + } + fileLength = newLength; + } catch (IOException e) { + closeFileSilently(); + throw DbException.convertIOException(e, name); + } + } + + /** + * Get the file size in bytes. + * + * @return the file size + */ + public long length() { + try { + long len = fileLength; + if (SysProperties.CHECK2) { + len = file.size(); + if (len != fileLength) { + DbException.throwInternalError( + "file " + name + " length " + len + " expected " + fileLength); + } + } + if (SysProperties.CHECK2 && len % Constants.FILE_BLOCK_SIZE != 0) { + long newLength = len + Constants.FILE_BLOCK_SIZE - + (len % Constants.FILE_BLOCK_SIZE); + file.truncate(newLength); + fileLength = newLength; + DbException.throwInternalError( + "unaligned file length " + name + " len " + len); + } + return len; + } catch (IOException e) { + throw DbException.convertIOException(e, name); + } + } + + /** + * Get the current location of the file pointer. + * + * @return the location + */ + public long getFilePointer() { + if (SysProperties.CHECK2) { + try { + if (file.position() != filePos) { + DbException.throwInternalError(file.position() + " " + filePos); + } + } catch (IOException e) { + throw DbException.convertIOException(e, name); + } + } + return filePos; + } + + /** + * Call fsync. Depending on the operating system and hardware, this may or + * may not in fact write the changes. 
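The sync() comment above notes that fsync may or may not actually reach the disk. In plain NIO the call in question is FileChannel.force; this standalone sketch (not part of the patch, the temp file name is arbitrary) shows the same call FileStore.sync() relies on:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("fsync-demo", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
            // force(true) also flushes file metadata, like FileStore.sync()
            // does; whether the bytes truly hit the platter still depends on
            // the OS and the drive's write cache, which is the caveat above.
            ch.force(true);
        }
        System.out.println(Files.size(tmp)); // prints 5
        Files.delete(tmp);
    }
}
```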
+ */ + public void sync() { + try { + file.force(true); + } catch (IOException e) { + closeFileSilently(); + throw DbException.convertIOException(e, name); + } + } + + /** + * Automatically delete the file once it is no longer in use. + */ + public void autoDelete() { + if (autoDeleteReference == null) { + autoDeleteReference = handler.getTempFileDeleter().addFile(name, this); + } + } + + /** + * No longer automatically delete the file once it is no longer in use. + */ + public void stopAutoDelete() { + handler.getTempFileDeleter().stopAutoDelete(autoDeleteReference, name); + autoDeleteReference = null; + } + + /** + * Close the file. The file may later be re-opened using openFile. + */ + public void closeFile() throws IOException { + file.close(); + file = null; + } + + /** + * Just close the file, without setting the reference to null. This method + * is called when writing failed. The reference is not set to null so that + * there are no NullPointerExceptions later on. + */ + private void closeFileSilently() { + try { + file.close(); + } catch (IOException e) { + // ignore + } + } + + /** + * Re-open the file. The file pointer will be reset to the previous + * location. + */ + public void openFile() throws IOException { + if (file == null) { + file = FileUtils.open(name, mode); + file.position(filePos); + } + } + + private static void trace(String method, String fileName, Object o) { + if (SysProperties.TRACE_IO) { + System.out.println("FileStore." + method + " " + fileName + " " + o); + } + } + + /** + * Try to lock the file. + * + * @return true if successful + */ + public synchronized boolean tryLock() { + try { + lock = file.tryLock(); + return lock != null; + } catch (Exception e) { + // ignore OverlappingFileLockException + return false; + } + } + + /** + * Release the file lock. 
+ */ + public synchronized void releaseLock() { + if (file != null && lock != null) { + try { + lock.release(); + } catch (Exception e) { + // ignore + } + lock = null; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/FileStoreInputStream.java b/modules/h2/src/main/java/org/h2/store/FileStoreInputStream.java new file mode 100644 index 0000000000000..27cb2d4e01dcb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/FileStoreInputStream.java @@ -0,0 +1,160 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.InputStream; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.tools.CompressTool; +import org.h2.util.Utils; + +/** + * An input stream that is backed by a file store. + */ +public class FileStoreInputStream extends InputStream { + + private FileStore store; + private final Data page; + private int remainingInBuffer; + private final CompressTool compress; + private boolean endOfFile; + private final boolean alwaysClose; + + public FileStoreInputStream(FileStore store, DataHandler handler, + boolean compression, boolean alwaysClose) { + this.store = store; + this.alwaysClose = alwaysClose; + if (compression) { + compress = CompressTool.getInstance(); + } else { + compress = null; + } + page = Data.create(handler, Constants.FILE_BLOCK_SIZE); + try { + if (store.length() <= FileStore.HEADER_LENGTH) { + close(); + } else { + fillBuffer(); + } + } catch (IOException e) { + throw DbException.convertIOException(e, store.name); + } + } + + @Override + public int available() { + return remainingInBuffer <= 0 ? 
0 : remainingInBuffer; + } + + @Override + public int read(byte[] buff) throws IOException { + return read(buff, 0, buff.length); + } + + @Override + public int read(byte[] b, int off, int len) throws IOException { + if (len == 0) { + return 0; + } + int read = 0; + while (len > 0) { + int r = readBlock(b, off, len); + if (r < 0) { + break; + } + read += r; + off += r; + len -= r; + } + return read == 0 ? -1 : read; + } + + private int readBlock(byte[] buff, int off, int len) throws IOException { + fillBuffer(); + if (endOfFile) { + return -1; + } + int l = Math.min(remainingInBuffer, len); + page.read(buff, off, l); + remainingInBuffer -= l; + return l; + } + + private void fillBuffer() throws IOException { + if (remainingInBuffer > 0 || endOfFile) { + return; + } + page.reset(); + store.openFile(); + if (store.length() == store.getFilePointer()) { + close(); + return; + } + store.readFully(page.getBytes(), 0, Constants.FILE_BLOCK_SIZE); + page.reset(); + remainingInBuffer = page.readInt(); + if (remainingInBuffer < 0) { + close(); + return; + } + page.checkCapacity(remainingInBuffer); + // get the length to read + if (compress != null) { + page.checkCapacity(Data.LENGTH_INT); + page.readInt(); + } + page.setPos(page.length() + remainingInBuffer); + page.fillAligned(); + int len = page.length() - Constants.FILE_BLOCK_SIZE; + page.reset(); + page.readInt(); + store.readFully(page.getBytes(), Constants.FILE_BLOCK_SIZE, len); + page.reset(); + page.readInt(); + if (compress != null) { + int uncompressed = page.readInt(); + byte[] buff = Utils.newBytes(remainingInBuffer); + page.read(buff, 0, remainingInBuffer); + page.reset(); + page.checkCapacity(uncompressed); + CompressTool.expand(buff, page.getBytes(), 0); + remainingInBuffer = uncompressed; + } + if (alwaysClose) { + store.closeFile(); + } + } + + @Override + public void close() { + if (store != null) { + try { + store.close(); + endOfFile = true; + } finally { + store = null; + } + } + } + + @Override + 
protected void finalize() { + close(); + } + + @Override + public int read() throws IOException { + fillBuffer(); + if (endOfFile) { + return -1; + } + int i = page.readByte() & 0xff; + remainingInBuffer--; + return i; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/FileStoreOutputStream.java b/modules/h2/src/main/java/org/h2/store/FileStoreOutputStream.java new file mode 100644 index 0000000000000..ab320c504e989 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/FileStoreOutputStream.java @@ -0,0 +1,85 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.OutputStream; +import java.util.Arrays; + +import org.h2.engine.Constants; +import org.h2.tools.CompressTool; + +/** + * An output stream that is backed by a file store. + */ +public class FileStoreOutputStream extends OutputStream { + private FileStore store; + private final Data page; + private final String compressionAlgorithm; + private final CompressTool compress; + private final byte[] buffer = { 0 }; + + public FileStoreOutputStream(FileStore store, DataHandler handler, + String compressionAlgorithm) { + this.store = store; + if (compressionAlgorithm != null) { + this.compress = CompressTool.getInstance(); + this.compressionAlgorithm = compressionAlgorithm; + } else { + this.compress = null; + this.compressionAlgorithm = null; + } + page = Data.create(handler, Constants.FILE_BLOCK_SIZE); + } + + @Override + public void write(int b) { + buffer[0] = (byte) b; + write(buffer); + } + + @Override + public void write(byte[] buff) { + write(buff, 0, buff.length); + } + + @Override + public void write(byte[] buff, int off, int len) { + if (len > 0) { + page.reset(); + if (compress != null) { + if (off != 0 || len != buff.length) { + buff = Arrays.copyOfRange(buff, off, off + len); + off = 0; + } + int uncompressed = len; + buff = 
compress.compress(buff, compressionAlgorithm); + len = buff.length; + page.checkCapacity(2 * Data.LENGTH_INT + len); + page.writeInt(len); + page.writeInt(uncompressed); + page.write(buff, off, len); + } else { + page.checkCapacity(Data.LENGTH_INT + len); + page.writeInt(len); + page.write(buff, off, len); + } + page.fillAligned(); + store.write(page.getBytes(), 0, page.length()); + } + } + + @Override + public void close() { + if (store != null) { + try { + store.close(); + } finally { + store = null; + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/InDoubtTransaction.java b/modules/h2/src/main/java/org/h2/store/InDoubtTransaction.java new file mode 100644 index 0000000000000..c8576e091858e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/InDoubtTransaction.java @@ -0,0 +1,51 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +/** + * Represents an in-doubt transaction (a transaction in the prepare phase). + */ +public interface InDoubtTransaction { + + /** + * The transaction state meaning this transaction is not committed yet, but + * also not rolled back (in-doubt). + */ + int IN_DOUBT = 0; + + /** + * The transaction state meaning this transaction is committed. + */ + int COMMIT = 1; + + /** + * The transaction state meaning this transaction is rolled back. + */ + int ROLLBACK = 2; + + /** + * Change the state of this transaction. + * This will also update the transaction log. + * + * @param state the new state + */ + void setState(int state); + + /** + * Get the state of this transaction as a text. + * + * @return the transaction state text + */ + String getState(); + + /** + * Get the name of the transaction. 
+ * + * @return the transaction name + */ + String getTransactionName(); + +} diff --git a/modules/h2/src/main/java/org/h2/store/LobStorageBackend.java b/modules/h2/src/main/java/org/h2/store/LobStorageBackend.java new file mode 100644 index 0000000000000..10297bf52cbc6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/LobStorageBackend.java @@ -0,0 +1,790 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.engine.SysProperties; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.tools.CompressTool; +import org.h2.util.IOUtils; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.value.Value; +import org.h2.value.ValueLobDb; + +/** + * This class stores LOB objects in the database, in tables. This is the + * back-end i.e. the server side of the LOB storage. + *

+ * Using the system session
+ * <p>
+ * Why do we use the system session to store the data? Some LOB operations can
+ * take a very long time. If we did them on a normal session, we would be
+ * locking the LOB tables for long periods of time, which is extremely
+ * detrimental to the rest of the system. Perhaps when we shift to the MVStore
+ * engine, we can revisit this design decision (using the StreamStore, that is,
+ * no connection at all).
+ * <p>
+ * Locking
+ * <p>
+ * Normally, the locking order in H2 is: first lock the Session object, then
+ * lock the Database object. However, in the case of the LOB data, we are using
+ * the system session to store the data. If we locked the normal way, we see
+ * deadlocks caused by the following pattern:
+ *
+ * <pre>
+ *  Thread 1:
+ *     locks normal session
+ *     locks database
+ *     waiting to lock system session
+ *  Thread 2:
+ *      locks system session
+ *      waiting to lock database.
+ * </pre>
    + * + * So, in this class alone, we do two things: we have our very own dedicated + * session, the LOB session, and we take the locks in this order: first the + * Database object, and then the LOB session. Since we own the LOB session, + * no-one else can lock on it, and we are safe. + */ +public class LobStorageBackend implements LobStorageInterface { + + /** + * The name of the lob data table. If this table exists, then lob storage is + * used. + */ + public static final String LOB_DATA_TABLE = "LOB_DATA"; + + private static final String LOB_SCHEMA = "INFORMATION_SCHEMA"; + private static final String LOBS = LOB_SCHEMA + ".LOBS"; + private static final String LOB_MAP = LOB_SCHEMA + ".LOB_MAP"; + private static final String LOB_DATA = LOB_SCHEMA + "." + LOB_DATA_TABLE; + + /** + * The size of the chunks we use when storing LOBs inside the database file. + */ + private static final int BLOCK_LENGTH = 20_000; + + /** + * The size of cache for lob block hashes. Each entry needs 2 longs (16 + * bytes), therefore, the size 4096 means 64 KB. 
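The deadlock described in the LobStorageBackend class comment is avoided by a single global lock order: always the Database object first, then the dedicated LOB session. A minimal standalone sketch of that discipline (plain Objects stand in for the real Database and Session classes):

```java
public class LockOrderDemo {
    // Stand-ins for the Database object and the dedicated LOB session.
    static final Object database = new Object();
    static final Object lobSession = new Object();
    static int counter = 0;

    // Every caller takes the locks in the same order: database first, then
    // the LOB session. With one global order no lock cycle can form, so the
    // two-thread deadlock shown in the class comment cannot occur.
    static void doLobWork() {
        synchronized (database) {
            synchronized (lobSession) {
                counter++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrderDemo::doLobWork);
        Thread t2 = new Thread(LockOrderDemo::doLobWork);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter); // prints 2
    }
}
```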
+ */ + private static final int HASH_CACHE_SIZE = 4 * 1024; + + JdbcConnection conn; + final Database database; + + private final HashMap prepared = new HashMap<>(); + private long nextBlock; + private final CompressTool compress = CompressTool.getInstance(); + private long[] hashBlocks; + + private boolean init; + + public LobStorageBackend(Database database) { + this.database = database; + } + + @Override + public void init() { + if (init) { + return; + } + synchronized (database) { + // have to check this again or we might miss an update on another + // thread + if (init) { + return; + } + init = true; + conn = database.getLobConnectionForRegularUse(); + JdbcConnection initConn = database.getLobConnectionForInit(); + try { + Statement stat = initConn.createStatement(); + // stat.execute("SET UNDO_LOG 0"); + // stat.execute("SET REDO_LOG_BINARY 0"); + boolean create = true; + PreparedStatement prep = initConn.prepareStatement( + "SELECT ZERO() FROM INFORMATION_SCHEMA.COLUMNS WHERE " + + "TABLE_SCHEMA=? AND TABLE_NAME=? AND COLUMN_NAME=?"); + prep.setString(1, "INFORMATION_SCHEMA"); + prep.setString(2, "LOB_MAP"); + prep.setString(3, "POS"); + ResultSet rs; + rs = prep.executeQuery(); + if (rs.next()) { + prep = initConn.prepareStatement( + "SELECT ZERO() FROM INFORMATION_SCHEMA.TABLES WHERE " + + "TABLE_SCHEMA=? 
AND TABLE_NAME=?"); + prep.setString(1, "INFORMATION_SCHEMA"); + prep.setString(2, "LOB_DATA"); + rs = prep.executeQuery(); + if (rs.next()) { + create = false; + } + } + if (create) { + stat.execute("CREATE CACHED TABLE IF NOT EXISTS " + LOBS + + "(ID BIGINT PRIMARY KEY, BYTE_COUNT BIGINT, TABLE INT) HIDDEN"); + stat.execute("CREATE INDEX IF NOT EXISTS " + + "INFORMATION_SCHEMA.INDEX_LOB_TABLE ON " + + LOBS + "(TABLE)"); + stat.execute("CREATE CACHED TABLE IF NOT EXISTS " + LOB_MAP + + "(LOB BIGINT, SEQ INT, POS BIGINT, HASH INT, " + + "BLOCK BIGINT, PRIMARY KEY(LOB, SEQ)) HIDDEN"); + stat.execute("ALTER TABLE " + LOB_MAP + + " RENAME TO " + LOB_MAP + " HIDDEN"); + stat.execute("ALTER TABLE " + LOB_MAP + + " ADD IF NOT EXISTS POS BIGINT BEFORE HASH"); + // TODO the column name OFFSET was used in version 1.3.156, + // so this can be remove in a later version + stat.execute("ALTER TABLE " + LOB_MAP + + " DROP COLUMN IF EXISTS \"OFFSET\""); + stat.execute("CREATE INDEX IF NOT EXISTS " + + "INFORMATION_SCHEMA.INDEX_LOB_MAP_DATA_LOB ON " + + LOB_MAP + "(BLOCK, LOB)"); + stat.execute("CREATE CACHED TABLE IF NOT EXISTS " + + LOB_DATA + + "(BLOCK BIGINT PRIMARY KEY, COMPRESSED INT, DATA BINARY) HIDDEN"); + } + rs = stat.executeQuery("SELECT MAX(BLOCK) FROM " + LOB_DATA); + rs.next(); + nextBlock = rs.getLong(1) + 1; + stat.close(); + } catch (SQLException e) { + throw DbException.convert(e); + } + } + } + + private long getNextLobId() throws SQLException { + String sql = "SELECT MAX(LOB) FROM " + LOB_MAP; + PreparedStatement prep = prepare(sql); + ResultSet rs = prep.executeQuery(); + rs.next(); + long x = rs.getLong(1) + 1; + reuse(sql, prep); + sql = "SELECT MAX(ID) FROM " + LOBS; + prep = prepare(sql); + rs = prep.executeQuery(); + rs.next(); + x = Math.max(x, rs.getLong(1) + 1); + reuse(sql, prep); + return x; + } + + @Override + public void removeAllForTable(int tableId) { + init(); + try { + String sql = "SELECT ID FROM " + LOBS + " WHERE TABLE = ?"; + 
PreparedStatement prep = prepare(sql); + prep.setInt(1, tableId); + ResultSet rs = prep.executeQuery(); + while (rs.next()) { + removeLob(rs.getLong(1)); + } + reuse(sql, prep); + } catch (SQLException e) { + throw DbException.convert(e); + } + if (tableId == LobStorageFrontend.TABLE_ID_SESSION_VARIABLE) { + removeAllForTable(LobStorageFrontend.TABLE_TEMP); + removeAllForTable(LobStorageFrontend.TABLE_RESULT); + } + } + + /** + * Read a block of data from the given LOB. + * + * @param block the block number + * @return the block (expanded if stored compressed) + */ + byte[] readBlock(long block) throws SQLException { + // see locking discussion at the top + assertNotHolds(conn.getSession()); + synchronized (database) { + synchronized (conn.getSession()) { + String sql = "SELECT COMPRESSED, DATA FROM " + + LOB_DATA + " WHERE BLOCK = ?"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, block); + ResultSet rs = prep.executeQuery(); + if (!rs.next()) { + throw DbException.get(ErrorCode.IO_EXCEPTION_1, + "Missing lob entry, block: " + block) + .getSQLException(); + } + int compressed = rs.getInt(1); + byte[] buffer = rs.getBytes(2); + if (compressed != 0) { + buffer = compress.expand(buffer); + } + reuse(sql, prep); + return buffer; + } + } + } + + /** + * Create a prepared statement, or re-use an existing one. + * + * @param sql the SQL statement + * @return the prepared statement + */ + PreparedStatement prepare(String sql) throws SQLException { + if (SysProperties.CHECK2) { + if (!Thread.holdsLock(database)) { + throw DbException.throwInternalError(); + } + } + PreparedStatement prep = prepared.remove(sql); + if (prep == null) { + prep = conn.prepareStatement(sql); + } + return prep; + } + + /** + * Allow to re-use the prepared statement. 
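The prepare()/reuse() pair above is a small statement cache: borrowing removes the entry so two concurrent users of the same SQL never share one statement, and returning puts it back. The same remove-on-borrow, put-on-return idiom in isolation (plain Strings stand in for PreparedStatement objects):

```java
import java.util.HashMap;

public class StatementCacheDemo {
    // Maps SQL text to a cached "statement" (a String here; the real
    // code caches PreparedStatement objects).
    static final HashMap<String, String> cache = new HashMap<>();

    // Borrow: remove from the map, so a second caller with the same SQL
    // compiles a fresh statement instead of sharing this one.
    static String prepare(String sql) {
        String stmt = cache.remove(sql);
        return stmt != null ? stmt : "compiled:" + sql;
    }

    // Return: make the statement available to the next caller.
    static void reuse(String sql, String stmt) {
        cache.put(sql, stmt);
    }

    public static void main(String[] args) {
        String s1 = prepare("SELECT 1"); // miss: compiles
        reuse("SELECT 1", s1);
        String s2 = prepare("SELECT 1"); // hit: same object comes back
        System.out.println(s1 == s2);    // prints true
    }
}
```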
+ * + * @param sql the SQL statement + * @param prep the prepared statement + */ + void reuse(String sql, PreparedStatement prep) { + if (SysProperties.CHECK2) { + if (!Thread.holdsLock(database)) { + throw DbException.throwInternalError(); + } + } + prepared.put(sql, prep); + } + + @Override + public void removeLob(ValueLobDb lob) { + removeLob(lob.getLobId()); + } + + private void removeLob(long lobId) { + try { + // see locking discussion at the top + assertNotHolds(conn.getSession()); + synchronized (database) { + synchronized (conn.getSession()) { + String sql = "SELECT BLOCK, HASH FROM " + LOB_MAP + " D WHERE D.LOB = ? " + + "AND NOT EXISTS(SELECT 1 FROM " + LOB_MAP + " O " + + "WHERE O.BLOCK = D.BLOCK AND O.LOB <> ?)"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, lobId); + prep.setLong(2, lobId); + ResultSet rs = prep.executeQuery(); + ArrayList blocks = New.arrayList(); + while (rs.next()) { + blocks.add(rs.getLong(1)); + int hash = rs.getInt(2); + setHashCacheBlock(hash, -1); + } + reuse(sql, prep); + + sql = "DELETE FROM " + LOB_MAP + " WHERE LOB = ?"; + prep = prepare(sql); + prep.setLong(1, lobId); + prep.execute(); + reuse(sql, prep); + + sql = "DELETE FROM " + LOB_DATA + " WHERE BLOCK = ?"; + prep = prepare(sql); + for (long block : blocks) { + prep.setLong(1, block); + prep.execute(); + } + reuse(sql, prep); + + sql = "DELETE FROM " + LOBS + " WHERE ID = ?"; + prep = prepare(sql); + prep.setLong(1, lobId); + prep.execute(); + reuse(sql, prep); + } + } + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + public InputStream getInputStream(ValueLobDb lob, byte[] hmac, + long byteCount) throws IOException { + try { + init(); + assertNotHolds(conn.getSession()); + // see locking discussion at the top + synchronized (database) { + synchronized (conn.getSession()) { + long lobId = lob.getLobId(); + return new LobInputStream(lobId, byteCount); + } + } + } catch (SQLException e) { + throw 
DbException.convertToIOException(e); + } + } + + private ValueLobDb addLob(InputStream in, long maxLength, int type, + CountingReaderInputStream countingReaderForClob) { + try { + byte[] buff = new byte[BLOCK_LENGTH]; + if (maxLength < 0) { + maxLength = Long.MAX_VALUE; + } + long length = 0; + long lobId = -1; + int maxLengthInPlaceLob = database.getMaxLengthInplaceLob(); + String compressAlgorithm = database.getLobCompressionAlgorithm(type); + try { + byte[] small = null; + for (int seq = 0; maxLength > 0; seq++) { + int len = (int) Math.min(BLOCK_LENGTH, maxLength); + len = IOUtils.readFully(in, buff, len); + if (len <= 0) { + break; + } + maxLength -= len; + // if we had a short read, trim the buffer + byte[] b; + if (len != buff.length) { + b = Arrays.copyOf(buff, len); + } else { + b = buff; + } + if (seq == 0 && b.length < BLOCK_LENGTH && + b.length <= maxLengthInPlaceLob) { + small = b; + break; + } + assertNotHolds(conn.getSession()); + // see locking discussion at the top + synchronized (database) { + synchronized (conn.getSession()) { + if (seq == 0) { + lobId = getNextLobId(); + } + storeBlock(lobId, seq, length, b, compressAlgorithm); + } + } + length += len; + } + if (lobId == -1 && small == null) { + // zero length + small = new byte[0]; + } + if (small != null) { + // For a BLOB, precision is length in bytes. + // For a CLOB, precision is length in chars + long precision = countingReaderForClob == null ? + small.length : countingReaderForClob.getLength(); + return ValueLobDb.createSmallLob(type, small, precision); + } + // For a BLOB, precision is length in bytes. + // For a CLOB, precision is length in chars + long precision = countingReaderForClob == null ? 
+ length : countingReaderForClob.getLength(); + return registerLob(type, lobId, + LobStorageFrontend.TABLE_TEMP, length, precision); + } catch (IOException e) { + if (lobId != -1) { + removeLob(lobId); + } + throw DbException.convertIOException(e, null); + } + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + private ValueLobDb registerLob(int type, long lobId, int tableId, + long byteCount, long precision) throws SQLException { + assertNotHolds(conn.getSession()); + // see locking discussion at the top + synchronized (database) { + synchronized (conn.getSession()) { + String sql = "INSERT INTO " + LOBS + + "(ID, BYTE_COUNT, TABLE) VALUES(?, ?, ?)"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, lobId); + prep.setLong(2, byteCount); + prep.setInt(3, tableId); + prep.execute(); + reuse(sql, prep); + return ValueLobDb.create(type, + database, tableId, lobId, null, precision); + } + } + } + + @Override + public boolean isReadOnly() { + return database.isReadOnly(); + } + + @Override + public ValueLobDb copyLob(ValueLobDb old, int tableId, long length) { + int type = old.getType(); + long oldLobId = old.getLobId(); + assertNotHolds(conn.getSession()); + // see locking discussion at the top + synchronized (database) { + synchronized (conn.getSession()) { + try { + init(); + ValueLobDb v = null; + if (!old.isRecoveryReference()) { + long lobId = getNextLobId(); + String sql = "INSERT INTO " + LOB_MAP + + "(LOB, SEQ, POS, HASH, BLOCK) " + + "SELECT ?, SEQ, POS, HASH, BLOCK FROM " + + LOB_MAP + " WHERE LOB = ?"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, lobId); + prep.setLong(2, oldLobId); + prep.executeUpdate(); + reuse(sql, prep); + + sql = "INSERT INTO " + LOBS + + "(ID, BYTE_COUNT, TABLE) " + + "SELECT ?, BYTE_COUNT, ? 
FROM " + LOBS + + " WHERE ID = ?"; + prep = prepare(sql); + prep.setLong(1, lobId); + prep.setLong(2, tableId); + prep.setLong(3, oldLobId); + prep.executeUpdate(); + reuse(sql, prep); + + v = ValueLobDb.create(type, database, tableId, lobId, null, length); + } else { + // Recovery process, no need to copy LOB using normal + // infrastructure + v = ValueLobDb.create(type, database, tableId, oldLobId, null, length); + } + return v; + } catch (SQLException e) { + throw DbException.convert(e); + } + } + } + } + + private long getHashCacheBlock(int hash) { + if (HASH_CACHE_SIZE > 0) { + initHashCache(); + int index = hash & (HASH_CACHE_SIZE - 1); + long oldHash = hashBlocks[index]; + if (oldHash == hash) { + return hashBlocks[index + HASH_CACHE_SIZE]; + } + } + return -1; + } + + private void setHashCacheBlock(int hash, long block) { + if (HASH_CACHE_SIZE > 0) { + initHashCache(); + int index = hash & (HASH_CACHE_SIZE - 1); + hashBlocks[index] = hash; + hashBlocks[index + HASH_CACHE_SIZE] = block; + } + } + + private void initHashCache() { + if (hashBlocks == null) { + hashBlocks = new long[HASH_CACHE_SIZE * 2]; + } + } + + /** + * Store a block in the LOB storage. 
+ * + * @param lobId the lob id + * @param seq the sequence number + * @param pos the position within the lob + * @param b the data + * @param compressAlgorithm the compression algorithm (may be null) + */ + void storeBlock(long lobId, int seq, long pos, byte[] b, + String compressAlgorithm) throws SQLException { + long block; + boolean blockExists = false; + if (compressAlgorithm != null) { + b = compress.compress(b, compressAlgorithm); + } + int hash = Arrays.hashCode(b); + assertHoldsLock(conn.getSession()); + assertHoldsLock(database); + block = getHashCacheBlock(hash); + if (block != -1) { + String sql = "SELECT COMPRESSED, DATA FROM " + LOB_DATA + + " WHERE BLOCK = ?"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, block); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + boolean compressed = rs.getInt(1) != 0; + byte[] compare = rs.getBytes(2); + if (compressed == (compressAlgorithm != null) && Arrays.equals(b, compare)) { + blockExists = true; + } + } + reuse(sql, prep); + } + if (!blockExists) { + block = nextBlock++; + setHashCacheBlock(hash, block); + String sql = "INSERT INTO " + LOB_DATA + + "(BLOCK, COMPRESSED, DATA) VALUES(?, ?, ?)"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, block); + prep.setInt(2, compressAlgorithm == null ? 0 : 1); + prep.setBytes(3, b); + prep.execute(); + reuse(sql, prep); + } + String sql = "INSERT INTO " + LOB_MAP + + "(LOB, SEQ, POS, HASH, BLOCK) VALUES(?, ?, ?, ?, ?)"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, lobId); + prep.setInt(2, seq); + prep.setLong(3, pos); + prep.setLong(4, hash); + prep.setLong(5, block); + prep.execute(); + reuse(sql, prep); + } + + @Override + public Value createBlob(InputStream in, long maxLength) { + init(); + return addLob(in, maxLength, Value.BLOB, null); + } + + @Override + public Value createClob(Reader reader, long maxLength) { + init(); + long max = maxLength == -1 ? 
Long.MAX_VALUE : maxLength; + CountingReaderInputStream in = new CountingReaderInputStream(reader, max); + return addLob(in, Long.MAX_VALUE, Value.CLOB, in); + } + + private static void assertNotHolds(Object lock) { + if (Thread.holdsLock(lock)) { + throw DbException.throwInternalError(lock.toString()); + } + } + + /** + * Check whether this thread has synchronized on this object. + * + * @param lock the object + */ + static void assertHoldsLock(Object lock) { + if (!Thread.holdsLock(lock)) { + throw DbException.throwInternalError(lock.toString()); + } + } + + /** + * An input stream that reads from a LOB. + */ + public class LobInputStream extends InputStream { + + /** + * Data from the LOB_MAP table. We cache this to prevent other updates + * to the table that contains the LOB column from changing the data + * under us. + */ + private final long[] lobMapBlocks; + + /** + * index into the lobMapBlocks array. + */ + private int lobMapIndex; + + /** + * The remaining bytes in the lob. + */ + private long remainingBytes; + + /** + * The temporary buffer. + */ + private byte[] buffer; + + /** + * The position within the buffer. 
+ */ + private int bufferPos; + + + public LobInputStream(long lobId, long byteCount) throws SQLException { + + // we have to take the lock on the session + // before the lock on the database to prevent ABBA deadlocks + assertHoldsLock(conn.getSession()); + assertHoldsLock(database); + + if (byteCount == -1) { + String sql = "SELECT BYTE_COUNT FROM " + LOBS + " WHERE ID = ?"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, lobId); + ResultSet rs = prep.executeQuery(); + if (!rs.next()) { + throw DbException.get(ErrorCode.IO_EXCEPTION_1, + "Missing lob entry: " + lobId).getSQLException(); + } + byteCount = rs.getLong(1); + reuse(sql, prep); + } + this.remainingBytes = byteCount; + + String sql = "SELECT COUNT(*) FROM " + LOB_MAP + " WHERE LOB = ?"; + PreparedStatement prep = prepare(sql); + prep.setLong(1, lobId); + ResultSet rs = prep.executeQuery(); + rs.next(); + int lobMapCount = rs.getInt(1); + if (lobMapCount == 0) { + throw DbException.get(ErrorCode.IO_EXCEPTION_1, + "Missing lob entry: " + lobId).getSQLException(); + } + reuse(sql, prep); + + this.lobMapBlocks = new long[lobMapCount]; + + sql = "SELECT BLOCK FROM " + LOB_MAP + " WHERE LOB = ? 
ORDER BY SEQ"; + prep = prepare(sql); + prep.setLong(1, lobId); + rs = prep.executeQuery(); + int i = 0; + while (rs.next()) { + this.lobMapBlocks[i] = rs.getLong(1); + i++; + } + reuse(sql, prep); + } + + @Override + public int read() throws IOException { + fillBuffer(); + if (remainingBytes <= 0) { + return -1; + } + remainingBytes--; + return buffer[bufferPos++] & 255; + } + + @Override + public long skip(long n) throws IOException { + if (n <= 0) { + return 0; + } + long remaining = n; + remaining -= skipSmall(remaining); + if (remaining > BLOCK_LENGTH) { + while (remaining > BLOCK_LENGTH) { + remaining -= BLOCK_LENGTH; + remainingBytes -= BLOCK_LENGTH; + lobMapIndex++; + } + bufferPos = 0; + buffer = null; + } + fillBuffer(); + remaining -= skipSmall(remaining); + remaining -= super.skip(remaining); + return n - remaining; + } + + private int skipSmall(long n) { + if (buffer != null && bufferPos < buffer.length) { + int x = MathUtils.convertLongToInt(Math.min(n, buffer.length - bufferPos)); + bufferPos += x; + remainingBytes -= x; + return x; + } + return 0; + } + + @Override + public int available() throws IOException { + return MathUtils.convertLongToInt(remainingBytes); + } + + @Override + public int read(byte[] buff) throws IOException { + return readFully(buff, 0, buff.length); + } + + @Override + public int read(byte[] buff, int off, int length) throws IOException { + return readFully(buff, off, length); + } + + private int readFully(byte[] buff, int off, int length) throws IOException { + if (length == 0) { + return 0; + } + int read = 0; + while (length > 0) { + fillBuffer(); + if (remainingBytes <= 0) { + break; + } + int len = (int) Math.min(length, remainingBytes); + len = Math.min(len, buffer.length - bufferPos); + System.arraycopy(buffer, bufferPos, buff, off, len); + bufferPos += len; + read += len; + remainingBytes -= len; + off += len; + length -= len; + } + return read == 0 ? 
-1 : read; + } + + private void fillBuffer() throws IOException { + if (buffer != null && bufferPos < buffer.length) { + return; + } + if (remainingBytes <= 0) { + return; + } + if (lobMapIndex >= lobMapBlocks.length) { + throw DbException.throwInternalError("lobMapIndex: " + lobMapIndex); + } + try { + buffer = readBlock(lobMapBlocks[lobMapIndex]); + lobMapIndex++; + bufferPos = 0; + } catch (SQLException e) { + throw DbException.convertToIOException(e); + } + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/LobStorageFrontend.java b/modules/h2/src/main/java/org/h2/store/LobStorageFrontend.java new file mode 100644 index 0000000000000..cff97f0d741e8 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/LobStorageFrontend.java @@ -0,0 +1,108 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.BufferedInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import org.h2.value.Value; +import org.h2.value.ValueLobDb; + +/** + * This factory creates in-memory objects and temporary files. It is used on the + * client side. + */ +public class LobStorageFrontend implements LobStorageInterface { + + /** + * The table id for session variables (LOBs not assigned to a table). + */ + public static final int TABLE_ID_SESSION_VARIABLE = -1; + + /** + * The table id for temporary objects (not assigned to any object). + */ + public static final int TABLE_TEMP = -2; + + /** + * The table id for result sets. + */ + public static final int TABLE_RESULT = -3; + + private final DataHandler handler; + + public LobStorageFrontend(DataHandler handler) { + this.handler = handler; + } + + @Override + public void removeLob(ValueLobDb lob) { + // not stored in the database + } + + /** + * Get the input stream for the given lob.
+ * + * @param lob the lob + * @param hmac the message authentication code (for remote input streams) + * @param byteCount the number of bytes to read, or -1 if not known + * @return the stream + */ + @Override + public InputStream getInputStream(ValueLobDb lob, byte[] hmac, + long byteCount) throws IOException { + if (byteCount < 0) { + byteCount = Long.MAX_VALUE; + } + return new BufferedInputStream(new LobStorageRemoteInputStream( + handler, lob, hmac, byteCount)); + } + + @Override + public boolean isReadOnly() { + return false; + } + + @Override + public ValueLobDb copyLob(ValueLobDb old, int tableId, long length) { + throw new UnsupportedOperationException(); + } + + @Override + public void removeAllForTable(int tableId) { + throw new UnsupportedOperationException(); + } + + @Override + public Value createBlob(InputStream in, long maxLength) { + // need to use a temp file, because the input stream could come from + // the same database, which would create a weird situation (trying + // to read a block while writing something) + return ValueLobDb.createTempBlob(in, maxLength, handler); + } + + /** + * Create a CLOB object. + * + * @param reader the reader + * @param maxLength the maximum length (-1 if not known) + * @return the LOB + */ + @Override + public Value createClob(Reader reader, long maxLength) { + // need to use a temp file, because the input stream could come from + // the same database, which would create a weird situation (trying + // to read a block while writing something) + return ValueLobDb.createTempClob(reader, maxLength, handler); + } + + @Override + public void init() { + // nothing to do + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/LobStorageInterface.java b/modules/h2/src/main/java/org/h2/store/LobStorageInterface.java new file mode 100644 index 0000000000000..a36f56be611af --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/LobStorageInterface.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. 
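The CLOB paths above rely on H2's CountingReaderInputStream: a Reader is exposed as a byte stream (so CLOB data can be stored through the same block machinery as BLOBs) while the characters consumed are counted, which is how the CLOB precision (length in chars, not bytes) is recovered after storage. A minimal self-contained sketch of that idea — the class name and buffering scheme here are illustrative, not H2's actual implementation, and it assumes no surrogate pair is split across buffer refills:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.Reader;
import java.io.StringReader;
import java.nio.charset.StandardCharsets;

// Illustrative stand-in for H2's CountingReaderInputStream: expose a
// Reader as a UTF-8 byte stream and count the characters consumed, so
// the CLOB precision (length in chars) is known once the bytes are stored.
public class CountingReaderDemo {

    static class CountingUtf8InputStream extends InputStream {
        private final Reader reader;
        private final char[] charBuf = new char[4096];
        private ByteArrayInputStream pending =
                new ByteArrayInputStream(new byte[0]);
        private long charCount;

        CountingUtf8InputStream(Reader reader) {
            this.reader = reader;
        }

        long getLength() {
            return charCount;
        }

        @Override
        public int read() throws IOException {
            int b = pending.read();
            if (b >= 0) {
                return b;
            }
            int n = reader.read(charBuf);
            if (n < 0) {
                return -1; // reader exhausted
            }
            charCount += n;
            // caveat: a surrogate pair split across two refills would be
            // garbled here; a real implementation carries the high
            // surrogate over to the next refill
            pending = new ByteArrayInputStream(
                    new String(charBuf, 0, n).getBytes(StandardCharsets.UTF_8));
            return pending.read();
        }
    }

    public static void main(String[] args) throws IOException {
        CountingUtf8InputStream in =
                new CountingUtf8InputStream(new StringReader("héllo"));
        int byteCount = 0;
        while (in.read() >= 0) {
            byteCount++;
        }
        // "héllo" is 5 chars but 6 UTF-8 bytes
        System.out.println(byteCount + " bytes, " + in.getLength() + " chars");
    }
}
```

This is why both addLob and createClob above pass the counting wrapper alongside the raw stream: byte length goes into BYTE_COUNT, while the wrapper's character count becomes the value's precision.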
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import org.h2.value.Value; +import org.h2.value.ValueLobDb; + +/** + * A mechanism to store and retrieve lob data. + */ +public interface LobStorageInterface { + + /** + * Create a CLOB object. + * + * @param reader the reader + * @param maxLength the maximum length (-1 if not known) + * @return the LOB + */ + Value createClob(Reader reader, long maxLength); + + /** + * Create a BLOB object. + * + * @param in the input stream + * @param maxLength the maximum length (-1 if not known) + * @return the LOB + */ + Value createBlob(InputStream in, long maxLength); + + /** + * Copy a lob. + * + * @param old the old lob + * @param tableId the new table id + * @param length the length + * @return the new lob + */ + ValueLobDb copyLob(ValueLobDb old, int tableId, long length); + + /** + * Get the input stream for the given lob. + * + * @param lob the lob id + * @param hmac the message authentication code (for remote input streams) + * @param byteCount the number of bytes to read, or -1 if not known + * @return the stream + */ + InputStream getInputStream(ValueLobDb lob, byte[] hmac, long byteCount) + throws IOException; + + /** + * Delete a LOB (from the database, if it is stored there). + * + * @param lob the lob + */ + void removeLob(ValueLobDb lob); + + /** + * Remove all LOBs for this table. + * + * @param tableId the table id + */ + void removeAllForTable(int tableId); + + /** + * Initialize the lob storage. 
+ */ + void init(); + + /** + * Whether the storage is read-only. + * + * @return true if yes + */ + boolean isReadOnly(); + +} diff --git a/modules/h2/src/main/java/org/h2/store/LobStorageMap.java b/modules/h2/src/main/java/org/h2/store/LobStorageMap.java new file mode 100644 index 0000000000000..ffeb9076f7018 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/LobStorageMap.java @@ -0,0 +1,362 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Map.Entry; +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.message.DbException; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.StreamStore; +import org.h2.mvstore.db.MVTableEngine.Store; +import org.h2.util.IOUtils; +import org.h2.util.New; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueLobDb; + +/** + * This class stores LOB objects in the database, in maps. This is the back-end, + * i.e. the server side of the LOB storage. + */ +public class LobStorageMap implements LobStorageInterface { + + private static final boolean TRACE = false; + + private final Database database; + + private boolean init; + + private final Object nextLobIdSync = new Object(); + private long nextLobId; + + /** + * The lob metadata map. It contains the mapping from the lob id + * (which is a long) to the stream store id (which is a byte array). + * + * Key: lobId (long) + * Value: { streamStoreId (byte[]), tableId (int), + * byteCount (long), hash (long) }. + */ + private MVMap<Long, Object[]> lobMap; + + /** + * The reference map.
It is used to remove data from the stream store: if no + * more entries for the given streamStoreId exist, the data is removed from + * the stream store. + * + * Key: { streamStoreId (byte[]), lobId (long) }. + * Value: true (boolean). + */ + private MVMap<Object[], Boolean> refMap; + + /** + * The stream store data map. + * + * Key: stream store block id (long). + * Value: data (byte[]). + */ + private MVMap<Long, byte[]> dataMap; + + private StreamStore streamStore; + + public LobStorageMap(Database database) { + this.database = database; + } + + @Override + public void init() { + if (init) { + return; + } + init = true; + Store s = database.getMvStore(); + MVStore mvStore; + if (s == null) { + // in-memory database + mvStore = MVStore.open(null); + } else { + mvStore = s.getStore(); + } + lobMap = mvStore.openMap("lobMap"); + refMap = mvStore.openMap("lobRef"); + dataMap = mvStore.openMap("lobData"); + streamStore = new StreamStore(dataMap); + // garbage collection of the last blocks + if (database.isReadOnly()) { + return; + } + if (dataMap.isEmpty()) { + return; + } + // search for the last block + // (in theory, only the latest lob can have unreferenced blocks, + // but the latest lob could be a copy of another one, and + // we don't know that, so we iterate over all lobs) + long lastUsedKey = -1; + for (Entry<Long, Object[]> e : lobMap.entrySet()) { + long lobId = e.getKey(); + Object[] v = e.getValue(); + byte[] id = (byte[]) v[0]; + long max = streamStore.getMaxBlockKey(id); + // a lob may not have referenced blocks if its data is kept inline + if (max != -1 && max > lastUsedKey) { + lastUsedKey = max; + if (TRACE) { + trace("lob " + lobId + " lastUsedKey=" + lastUsedKey); + } + } + } + if (TRACE) { + trace("lastUsedKey=" + lastUsedKey); + } + // delete all blocks that are newer + while (true) { + Long last = dataMap.lastKey(); + if (last == null || last <= lastUsedKey) { + break; + } + if (TRACE) { + trace("gc " + last); + } + dataMap.remove(last); + } + // don't re-use block ids, except at the very end +
Long last = dataMap.lastKey(); + if (last != null) { + streamStore.setNextKey(last + 1); + } + } + + @Override + public Value createBlob(InputStream in, long maxLength) { + init(); + int type = Value.BLOB; + try { + if (maxLength != -1 + && maxLength <= database.getMaxLengthInplaceLob()) { + byte[] small = new byte[(int) maxLength]; + int len = IOUtils.readFully(in, small, (int) maxLength); + if (len > maxLength) { + throw new IllegalStateException( + "len > blobLength, " + len + " > " + maxLength); + } + if (len < small.length) { + small = Arrays.copyOf(small, len); + } + return ValueLobDb.createSmallLob(type, small); + } + if (maxLength != -1) { + in = new RangeInputStream(in, 0L, maxLength); + } + return createLob(in, type); + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + @Override + public Value createClob(Reader reader, long maxLength) { + init(); + int type = Value.CLOB; + try { + // we multiply by 3 here to get the worst-case size in bytes + if (maxLength != -1 + && maxLength * 3 <= database.getMaxLengthInplaceLob()) { + char[] small = new char[(int) maxLength]; + int len = IOUtils.readFully(reader, small, (int) maxLength); + if (len > maxLength) { + throw new IllegalStateException( + "len > blobLength, " + len + " > " + maxLength); + } + byte[] utf8 = new String(small, 0, len) + .getBytes(StandardCharsets.UTF_8); + if (utf8.length > database.getMaxLengthInplaceLob()) { + throw new IllegalStateException( + "len > maxinplace, " + utf8.length + " > " + + database.getMaxLengthInplaceLob()); + } + return ValueLobDb.createSmallLob(type, utf8); + } + if (maxLength < 0) { + maxLength = Long.MAX_VALUE; + } + CountingReaderInputStream in = new CountingReaderInputStream(reader, + maxLength); + ValueLobDb lob = createLob(in, type); + // the length is not correct + lob = ValueLobDb.create(type, database, lob.getTableId(), + lob.getLobId(),
null, in.getLength()); + return lob; + } catch (IllegalStateException e) { + throw DbException.get(ErrorCode.OBJECT_CLOSED, e); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + private ValueLobDb createLob(InputStream in, int type) throws IOException { + byte[] streamStoreId; + try { + streamStoreId = streamStore.put(in); + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + long lobId = generateLobId(); + long length = streamStore.length(streamStoreId); + int tableId = LobStorageFrontend.TABLE_TEMP; + Object[] value = { streamStoreId, tableId, length, 0 }; + lobMap.put(lobId, value); + Object[] key = { streamStoreId, lobId }; + refMap.put(key, Boolean.TRUE); + ValueLobDb lob = ValueLobDb.create( + type, database, tableId, lobId, null, length); + if (TRACE) { + trace("create " + tableId + "/" + lobId); + } + return lob; + } + + private long generateLobId() { + synchronized (nextLobIdSync) { + if (nextLobId == 0) { + Long id = lobMap.lastKey(); + nextLobId = id == null ? 
1 : id + 1; + } + return nextLobId++; + } + } + + @Override + public boolean isReadOnly() { + return database.isReadOnly(); + } + + @Override + public ValueLobDb copyLob(ValueLobDb old, int tableId, long length) { + init(); + int type = old.getType(); + long oldLobId = old.getLobId(); + long oldLength = old.getPrecision(); + if (oldLength != length) { + throw DbException.throwInternalError("Length is different"); + } + Object[] value = lobMap.get(oldLobId); + value = value.clone(); + byte[] streamStoreId = (byte[]) value[0]; + long lobId = generateLobId(); + value[1] = tableId; + lobMap.put(lobId, value); + Object[] key = { streamStoreId, lobId }; + refMap.put(key, Boolean.TRUE); + ValueLobDb lob = ValueLobDb.create( + type, database, tableId, lobId, null, length); + if (TRACE) { + trace("copy " + old.getTableId() + "/" + old.getLobId() + + " > " + tableId + "/" + lobId); + } + return lob; + } + + @Override + public InputStream getInputStream(ValueLobDb lob, byte[] hmac, long byteCount) + throws IOException { + init(); + Object[] value = lobMap.get(lob.getLobId()); + if (value == null) { + if (lob.getTableId() == LobStorageFrontend.TABLE_RESULT || + lob.getTableId() == LobStorageFrontend.TABLE_ID_SESSION_VARIABLE) { + throw DbException.get( + ErrorCode.LOB_CLOSED_ON_TIMEOUT_1, "" + + lob.getLobId() + "/" + lob.getTableId()); + } + throw DbException.throwInternalError("Lob not found: " + + lob.getLobId() + "/" + lob.getTableId()); + } + byte[] streamStoreId = (byte[]) value[0]; + return streamStore.get(streamStoreId); + } + + @Override + public void removeAllForTable(int tableId) { + init(); + if (database.getMvStore().getStore().isClosed()) { + return; + } + // this might not be very efficient - + // to speed it up, we would need yet another map + ArrayList<Long> list = New.arrayList(); + for (Entry<Long, Object[]> e : lobMap.entrySet()) { + Object[] value = e.getValue(); + int t = (Integer) value[1]; + if (t == tableId) { + list.add(e.getKey()); + } + } + for (long lobId : list) { +
removeLob(tableId, lobId); + } + if (tableId == LobStorageFrontend.TABLE_ID_SESSION_VARIABLE) { + removeAllForTable(LobStorageFrontend.TABLE_TEMP); + removeAllForTable(LobStorageFrontend.TABLE_RESULT); + } + } + + @Override + public void removeLob(ValueLobDb lob) { + init(); + int tableId = lob.getTableId(); + long lobId = lob.getLobId(); + removeLob(tableId, lobId); + } + + private void removeLob(int tableId, long lobId) { + if (TRACE) { + trace("remove " + tableId + "/" + lobId); + } + Object[] value = lobMap.remove(lobId); + if (value == null) { + // already removed + return; + } + byte[] streamStoreId = (byte[]) value[0]; + Object[] key = {streamStoreId, lobId }; + refMap.remove(key); + // check if there are more entries for this streamStoreId + key = new Object[] {streamStoreId, 0L }; + value = refMap.ceilingKey(key); + boolean hasMoreEntries = false; + if (value != null) { + byte[] s2 = (byte[]) value[0]; + if (Arrays.equals(streamStoreId, s2)) { + if (TRACE) { + trace(" stream still needed in lob " + value[1]); + } + hasMoreEntries = true; + } + } + if (!hasMoreEntries) { + if (TRACE) { + trace(" remove stream " + StringUtils.convertBytesToHex(streamStoreId)); + } + streamStore.remove(streamStoreId); + } + } + + private static void trace(String op) { + System.out.println("[" + Thread.currentThread().getName() + "] LOB " + op); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/LobStorageRemoteInputStream.java b/modules/h2/src/main/java/org/h2/store/LobStorageRemoteInputStream.java new file mode 100644 index 0000000000000..189aa3780882a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/LobStorageRemoteInputStream.java @@ -0,0 +1,90 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
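LobStorageMap.removeLob() above decides whether the shared stream data can be freed with a single ceiling lookup on the reference map, because the map is ordered by the composite key { streamStoreId, lobId }: after removing one lob's entry, the smallest remaining key at or above { streamStoreId, 0 } tells us whether any other lob still shares the stream. A runnable toy model of that check — it simplifies the stream store id to a long (H2 uses a byte[]) and a java.util.TreeMap in place of an MVMap:

```java
import java.util.Comparator;
import java.util.TreeMap;

// Toy model of LobStorageMap's refMap trick: keys are { streamId, lobId },
// ordered lexicographically, so ceilingKey({ streamId, 0 }) finds the next
// lob (if any) that still references the same stream data.
public class RefMapDemo {
    static final Comparator<long[]> CMP =
            Comparator.<long[]>comparingLong(k -> k[0]).thenComparingLong(k -> k[1]);
    static final TreeMap<long[], Boolean> refMap = new TreeMap<>(CMP);

    // remove one lob's reference; returns true if the underlying
    // stream data is now unreferenced and can be freed
    static boolean removeLob(long streamId, long lobId) {
        refMap.remove(new long[] { streamId, lobId });
        long[] next = refMap.ceilingKey(new long[] { streamId, 0L });
        return next == null || next[0] != streamId;
    }

    public static void main(String[] args) {
        // two lobs (ids 10 and 11) share stream 7; lob 12 owns stream 8
        refMap.put(new long[] { 7, 10 }, Boolean.TRUE);
        refMap.put(new long[] { 7, 11 }, Boolean.TRUE);
        refMap.put(new long[] { 8, 12 }, Boolean.TRUE);
        System.out.println(removeLob(7, 10)); // false: lob 11 still uses stream 7
        System.out.println(removeLob(7, 11)); // true: stream 7 now unreferenced
        System.out.println(removeLob(8, 12)); // true
    }
}
```

The same ordering argument is why copyLob can share a streamStoreId between two lobs without any explicit reference counter.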
+ * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.InputStream; + +import org.h2.message.DbException; +import org.h2.value.ValueLobDb; + +/** + * An input stream that reads from a remote LOB. + */ +class LobStorageRemoteInputStream extends InputStream { + + /** + * The data handler. + */ + private final DataHandler handler; + + /** + * The lob id. + */ + private final long lob; + + private final byte[] hmac; + + /** + * The position. + */ + private long pos; + + /** + * The remaining bytes in the lob. + */ + private long remainingBytes; + + public LobStorageRemoteInputStream(DataHandler handler, ValueLobDb lob, + byte[] hmac, long byteCount) { + this.handler = handler; + this.lob = lob.getLobId(); + this.hmac = hmac; + remainingBytes = byteCount; + } + + @Override + public int read() throws IOException { + byte[] buff = new byte[1]; + int len = read(buff, 0, 1); + return len < 0 ? len : (buff[0] & 255); + } + + @Override + public int read(byte[] buff) throws IOException { + return read(buff, 0, buff.length); + } + + @Override + public int read(byte[] buff, int off, int length) throws IOException { + if (length == 0) { + return 0; + } + length = (int) Math.min(length, remainingBytes); + if (length == 0) { + return -1; + } + try { + length = handler.readLob(lob, hmac, pos, buff, off, length); + } catch (DbException e) { + throw DbException.convertToIOException(e); + } + if (length == 0) { + return -1; + } + remainingBytes -= length; + pos += length; + return length; + } + + @Override + public long skip(long n) { + remainingBytes -= n; + pos += n; + return n; + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/store/Page.java b/modules/h2/src/main/java/org/h2/store/Page.java new file mode 100644 index 0000000000000..81ff764e28fbf --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/Page.java @@ -0,0 +1,264 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.lang.reflect.Array; +import org.h2.engine.Session; +import org.h2.util.CacheObject; + +/** + * A page. Format: + *
+ * <ul>
+ * <li>0-3: parent page id (0 for root)</li>
+ * <li>4-4: page type</li>
+ * <li>page-type specific data</li>
+ * </ul>
    + */ +public abstract class Page extends CacheObject { + + /** + * This is the last page of a chain. + */ + public static final int FLAG_LAST = 16; + + /** + * An empty page. + */ + public static final int TYPE_EMPTY = 0; + + /** + * A data leaf page (without overflow: + FLAG_LAST). + */ + public static final int TYPE_DATA_LEAF = 1; + + /** + * A data node page (never has overflow pages). + */ + public static final int TYPE_DATA_NODE = 2; + + /** + * A data overflow page (the last page: + FLAG_LAST). + */ + public static final int TYPE_DATA_OVERFLOW = 3; + + /** + * A b-tree leaf page (without overflow: + FLAG_LAST). + */ + public static final int TYPE_BTREE_LEAF = 4; + + /** + * A b-tree node page (never has overflow pages). + */ + public static final int TYPE_BTREE_NODE = 5; + + /** + * A page containing a list of free pages (the last page: + FLAG_LAST). + */ + public static final int TYPE_FREE_LIST = 6; + + /** + * A stream trunk page. + */ + public static final int TYPE_STREAM_TRUNK = 7; + + /** + * A stream data page. + */ + public static final int TYPE_STREAM_DATA = 8; + + private static final int COPY_THRESHOLD = 4; + + /** + * When this page was changed the last time. + */ + protected long changeCount; + + /** + * Copy the data to a new location, change the parent to point to the new + * location, and free up the current page. + * + * @param session the session + * @param newPos the new position + */ + public abstract void moveTo(Session session, int newPos); + + /** + * Write the page. + */ + public abstract void write(); + + /** + * Insert a value in an array. A new array is created if required. 
+ * + * @param old the old array + * @param oldSize the old size + * @param pos the position + * @param x the value to insert + * @return the (new) array + */ + @SuppressWarnings("unchecked") + public static <T> T[] insert(T[] old, int oldSize, int pos, T x) { + T[] result; + if (old.length > oldSize) { + result = old; + } else { + // according to a test, this is as fast as "new Row[..]" + result = (T[]) Array.newInstance( + old.getClass().getComponentType(), oldSize + 1 + COPY_THRESHOLD); + if (pos > 0) { + System.arraycopy(old, 0, result, 0, pos); + } + } + if (oldSize - pos > 0) { + System.arraycopy(old, pos, result, pos + 1, oldSize - pos); + } + result[pos] = x; + return result; + } + + /** + * Delete a value in an array. A new array is created if required. + * + * @param old the old array + * @param oldSize the old size + * @param pos the position + * @return the (new) array + */ + @SuppressWarnings("unchecked") + public static <T> T[] remove(T[] old, int oldSize, int pos) { + T[] result; + if (old.length - oldSize < COPY_THRESHOLD) { + result = old; + } else { + // according to a test, this is as fast as "new Row[..]" + result = (T[]) Array.newInstance( + old.getClass().getComponentType(), oldSize - 1); + System.arraycopy(old, 0, result, 0, Math.min(oldSize - 1, pos)); + } + if (pos < oldSize) { + System.arraycopy(old, pos + 1, result, pos, oldSize - pos - 1); + } + return result; + } + + /** + * Insert a value in an array. A new array is created if required.
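The Page.insert/remove helpers trade a little memory for fewer copies: when the backing array is full, the new array is over-allocated by COPY_THRESHOLD spare slots, so the next few inserts can shift elements in place and return the same array instance. A small standalone illustration of the long[] variant (hypothetical class name, same logic as the helpers above):

```java
import java.util.Arrays;

// Sketch of the Page.insert() growth strategy: over-allocate by
// COPY_THRESHOLD slots on growth so subsequent inserts reuse the array.
public class ArrayInsertDemo {
    static final int COPY_THRESHOLD = 4;

    static long[] insert(long[] old, int oldSize, int pos, long x) {
        long[] result;
        if (old != null && old.length > oldSize) {
            result = old; // spare capacity left: shift in place
        } else {
            result = new long[oldSize + 1 + COPY_THRESHOLD];
            if (pos > 0) {
                System.arraycopy(old, 0, result, 0, pos);
            }
        }
        if (old != null && oldSize - pos > 0) {
            // arraycopy handles the overlapping in-place shift correctly
            System.arraycopy(old, pos, result, pos + 1, oldSize - pos);
        }
        result[pos] = x;
        return result;
    }

    public static void main(String[] args) {
        long[] a = insert(new long[] { 1, 3 }, 2, 1, 2); // grows: length 2 -> 7
        System.out.println(Arrays.toString(Arrays.copyOf(a, 3)));
        long[] b = insert(a, 3, 3, 4); // fits in the spare capacity
        System.out.println(b == a);    // same array instance reused
    }
}
```

The COPY_THRESHOLD slack amortizes the copy cost when a page gains several rows in a row, at the cost of up to COPY_THRESHOLD unused slots per array.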
+ * + * @param old the old array + * @param oldSize the old size + * @param pos the position + * @param x the value to insert + * @return the (new) array + */ + protected static long[] insert(long[] old, int oldSize, int pos, long x) { + long[] result; + if (old != null && old.length > oldSize) { + result = old; + } else { + result = new long[oldSize + 1 + COPY_THRESHOLD]; + if (pos > 0) { + System.arraycopy(old, 0, result, 0, pos); + } + } + if (old != null && oldSize - pos > 0) { + System.arraycopy(old, pos, result, pos + 1, oldSize - pos); + } + result[pos] = x; + return result; + } + + /** + * Delete a value in an array. A new array is created if required. + * + * @param old the old array + * @param oldSize the old size + * @param pos the position + * @return the (new) array + */ + protected static long[] remove(long[] old, int oldSize, int pos) { + long[] result; + if (old.length - oldSize < COPY_THRESHOLD) { + result = old; + } else { + result = new long[oldSize - 1]; + System.arraycopy(old, 0, result, 0, pos); + } + System.arraycopy(old, pos + 1, result, pos, oldSize - pos - 1); + return result; + } + + /** + * Insert a value in an array. A new array is created if required. + * + * @param old the old array + * @param oldSize the old size + * @param pos the position + * @param x the value to insert + * @return the (new) array + */ + protected static int[] insert(int[] old, int oldSize, int pos, int x) { + int[] result; + if (old != null && old.length > oldSize) { + result = old; + } else { + result = new int[oldSize + 1 + COPY_THRESHOLD]; + if (pos > 0 && old != null) { + System.arraycopy(old, 0, result, 0, pos); + } + } + if (old != null && oldSize - pos > 0) { + System.arraycopy(old, pos, result, pos + 1, oldSize - pos); + } + result[pos] = x; + return result; + } + + /** + * Delete a value in an array. A new array is created if required. 
+ * + * @param old the old array + * @param oldSize the old size + * @param pos the position + * @return the (new) array + */ + protected static int[] remove(int[] old, int oldSize, int pos) { + int[] result; + if (old.length - oldSize < COPY_THRESHOLD) { + result = old; + } else { + result = new int[oldSize - 1]; + System.arraycopy(old, 0, result, 0, Math.min(oldSize - 1, pos)); + } + if (pos < oldSize) { + System.arraycopy(old, pos + 1, result, pos, oldSize - pos - 1); + } + return result; + } + + /** + * Add a value to a subset of the array. + * + * @param array the array + * @param from the index of the first element (including) + * @param to the index of the last element (excluding) + * @param x the value to add + */ + protected static void add(int[] array, int from, int to, int x) { + for (int i = from; i < to; i++) { + array[i] += x; + } + } + + /** + * If this page can be moved. Transaction log and free-list pages can not. + * + * @return true if moving is allowed + */ + public boolean canMove() { + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/PageFreeList.java b/modules/h2/src/main/java/org/h2/store/PageFreeList.java new file mode 100644 index 0000000000000..7c3a0ec7a7d62 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageFreeList.java @@ -0,0 +1,232 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import org.h2.engine.Session; +import org.h2.util.BitField; + +/** + * The list of free pages of a page store. The format of a free list trunk page + * is: + *
      + *
+ * <ul>
+ * <li>page type: byte (0)</li>
+ * <li>checksum: short (1-2)</li>
+ * <li>data (3-)</li>
+ * </ul>
    + */ +public class PageFreeList extends Page { + + private static final int DATA_START = 3; + + private final PageStore store; + private final BitField used; + private final int pageCount; + private boolean full; + private Data data; + + private PageFreeList(PageStore store, int pageId) { + // kept in cache, and array list in page store + setPos(pageId); + this.store = store; + pageCount = (store.getPageSize() - DATA_START) * 8; + used = new BitField(pageCount); + used.set(0); + } + + /** + * Read a free-list page. + * + * @param store the page store + * @param data the data + * @param pageId the page id + * @return the page + */ + static PageFreeList read(PageStore store, Data data, int pageId) { + PageFreeList p = new PageFreeList(store, pageId); + p.data = data; + p.read(); + return p; + } + + /** + * Create a new free-list page. + * + * @param store the page store + * @param pageId the page id + * @return the page + */ + static PageFreeList create(PageStore store, int pageId) { + return new PageFreeList(store, pageId); + } + + /** + * Allocate a page from the free list. + * + * @param exclude the exclude list or null + * @param first the first page to look for + * @return the page, or -1 if all pages are used + */ + int allocate(BitField exclude, int first) { + if (full) { + return -1; + } + // TODO cache last result + int start = Math.max(0, first - getPos()); + while (true) { + int free = used.nextClearBit(start); + if (free >= pageCount) { + if (start == 0) { + full = true; + } + return -1; + } + if (exclude != null && exclude.get(free + getPos())) { + start = exclude.nextClearBit(free + getPos()) - getPos(); + if (start >= pageCount) { + return -1; + } + } else { + // set the bit first, because logUndo can + // allocate other pages, and we must not + // return the same page twice + used.set(free); + store.logUndo(this, data); + store.update(this); + return free + getPos(); + } + } + } + + /** + * Get the first free page starting at the given offset. 
+ * + * @param first the page number to start the search + * @return the page number, or -1 + */ + int getFirstFree(int first) { + if (full) { + return -1; + } + int start = Math.max(0, first - getPos()); + int free = used.nextClearBit(start); + if (free >= pageCount) { + return -1; + } + return free + getPos(); + } + + int getLastUsed() { + int last = used.length() - 1; + return last <= 0 ? -1 : last + getPos(); + } + + /** + * Mark a page as used. + * + * @param pageId the page id + */ + void allocate(int pageId) { + int idx = pageId - getPos(); + if (idx >= 0 && !used.get(idx)) { + // set the bit first, because logUndo can + // allocate other pages, and we must not + // return the same page twice + used.set(idx); + store.logUndo(this, data); + store.update(this); + } + } + + /** + * Add a page to the free list. + * + * @param pageId the page id to add + */ + void free(int pageId) { + full = false; + store.logUndo(this, data); + used.clear(pageId - getPos()); + store.update(this); + } + + /** + * Read the page from the disk. + */ + private void read() { + data.reset(); + data.readByte(); + data.readShortInt(); + for (int i = 0; i < pageCount; i += 8) { + int x = data.readByte() & 255; + used.setByte(i, x); + } + full = false; + } + + @Override + public void write() { + data = store.createData(); + data.writeByte((byte) Page.TYPE_FREE_LIST); + data.writeShortInt(0); + for (int i = 0; i < pageCount; i += 8) { + data.writeByte((byte) used.getByte(i)); + } + store.writePage(getPos(), data); + } + + /** + * Get the number of pages that can fit in a free list. + * + * @param pageSize the page size + * @return the number of pages + */ + public static int getPagesAddressed(int pageSize) { + return (pageSize - DATA_START) * 8; + } + + /** + * Get the estimated memory size. + * + * @return number of double words (4 bytes) + */ + @Override + public int getMemory() { + return store.getPageSize() >> 2; + } + + /** + * Check if a page is already in use. 
+ * + * @param pageId the page to check + * @return true if it is in use + */ + boolean isUsed(int pageId) { + return used.get(pageId - getPos()); + } + + @Override + public void moveTo(Session session, int newPos) { + // the old data does not need to be copied, as free-list pages + // at the end of the file are not required + store.free(getPos(), false); + } + + @Override + public String toString() { + return "page [" + getPos() + "] freeList" + (full ? "full" : ""); + } + + @Override + public boolean canRemove() { + return true; + } + + @Override + public boolean canMove() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/PageInputStream.java b/modules/h2/src/main/java/org/h2/store/PageInputStream.java new file mode 100644 index 0000000000000..e2599bebd7c69 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageInputStream.java @@ -0,0 +1,171 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.EOFException; +import java.io.IOException; +import java.io.InputStream; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.util.BitField; + +/** + * An input stream that reads from a page store. 
+ */ +public class PageInputStream extends InputStream { + + private final PageStore store; + private final Trace trace; + private final int firstTrunkPage; + private final PageStreamTrunk.Iterator trunkIterator; + private int dataPage; + private PageStreamTrunk trunk; + private int trunkIndex; + private PageStreamData data; + private int dataPos; + private boolean endOfFile; + private int remaining; + private final byte[] buffer = { 0 }; + private int logKey; + + PageInputStream(PageStore store, int logKey, int firstTrunkPage, int dataPage) { + this.store = store; + this.trace = store.getTrace(); + // minus one because we increment before comparing + this.logKey = logKey - 1; + this.firstTrunkPage = firstTrunkPage; + trunkIterator = new PageStreamTrunk.Iterator(store, firstTrunkPage); + this.dataPage = dataPage; + } + + @Override + public int read() throws IOException { + int len = read(buffer); + return len < 0 ? -1 : (buffer[0] & 255); + } + + @Override + public int read(byte[] b) throws IOException { + return read(b, 0, b.length); + } + + @Override + public int read(byte[] b, int off, int len) throws IOException { + if (len == 0) { + return 0; + } + int read = 0; + while (len > 0) { + int r = readBlock(b, off, len); + if (r < 0) { + break; + } + read += r; + off += r; + len -= r; + } + return read == 0 ? 
-1 : read; + } + + private int readBlock(byte[] buff, int off, int len) throws IOException { + try { + fillBuffer(); + if (endOfFile) { + return -1; + } + int l = Math.min(remaining, len); + data.read(dataPos, buff, off, l); + remaining -= l; + dataPos += l; + return l; + } catch (DbException e) { + throw new EOFException(); + } + } + + private void fillBuffer() { + if (remaining > 0 || endOfFile) { + return; + } + int next; + while (true) { + if (trunk == null) { + trunk = trunkIterator.next(); + trunkIndex = 0; + logKey++; + if (trunk == null || trunk.getLogKey() != logKey) { + endOfFile = true; + return; + } + } + if (trunk != null) { + next = trunk.getPageData(trunkIndex++); + if (next == -1) { + trunk = null; + } else if (dataPage == -1 || dataPage == next) { + break; + } + } + } + if (trace.isDebugEnabled()) { + trace.debug("pageIn.readPage " + next); + } + dataPage = -1; + data = null; + Page p = store.getPage(next); + if (p instanceof PageStreamData) { + data = (PageStreamData) p; + } + if (data == null || data.getLogKey() != logKey) { + endOfFile = true; + return; + } + dataPos = PageStreamData.getReadStart(); + remaining = store.getPageSize() - dataPos; + } + + /** + * Set all pages as 'allocated' in the page store. 
+ * + * @return the bit set + */ + BitField allocateAllPages() { + BitField pages = new BitField(); + int key = logKey; + PageStreamTrunk.Iterator it = new PageStreamTrunk.Iterator( + store, firstTrunkPage); + while (true) { + PageStreamTrunk t = it.next(); + key++; + if (it.canDelete()) { + store.allocatePage(it.getCurrentPageId()); + } + if (t == null || t.getLogKey() != key) { + break; + } + pages.set(t.getPos()); + for (int i = 0;; i++) { + int n = t.getPageData(i); + if (n == -1) { + break; + } + pages.set(n); + store.allocatePage(n); + } + } + return pages; + } + + int getDataPage() { + return data.getPos(); + } + + @Override + public void close() { + // nothing to do + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/PageLog.java b/modules/h2/src/main/java/org/h2/store/PageLog.java new file mode 100644 index 0000000000000..a2ea65febd264 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageLog.java @@ -0,0 +1,897 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import org.h2.api.ErrorCode; +import org.h2.compress.CompressLZF; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.Row; +import org.h2.result.RowFactory; +import org.h2.util.BitField; +import org.h2.util.IntArray; +import org.h2.util.IntIntHashMap; +import org.h2.util.New; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * Transaction log mechanism. The stream contains a list of records. The data + * format for a record is: + *
      + *
+ * <ul>
+ * <li>type (0: no-op, 1: undo, 2: commit, ...)</li>
+ * <li>data</li>
+ * </ul>
    + * The transaction log is split into sections. + * A checkpoint starts a new section. + */ +public class PageLog { + + /** + * No operation. + */ + public static final int NOOP = 0; + + /** + * An undo log entry. Format: page id: varInt, size, page. Size 0 means + * uncompressed, size 1 means empty page, otherwise the size is the number + * of compressed bytes. + */ + public static final int UNDO = 1; + + /** + * A commit entry of a session. + * Format: session id: varInt. + */ + public static final int COMMIT = 2; + + /** + * A prepare commit entry for a session. + * Format: session id: varInt, transaction name: string. + */ + public static final int PREPARE_COMMIT = 3; + + /** + * Roll back a prepared transaction. + * Format: session id: varInt. + */ + public static final int ROLLBACK = 4; + + /** + * Add a record to a table. + * Format: session id: varInt, table id: varInt, row. + */ + public static final int ADD = 5; + + /** + * Remove a record from a table. + * Format: session id: varInt, table id: varInt, row. + */ + public static final int REMOVE = 6; + + /** + * Truncate a table. + * Format: session id: varInt, table id: varInt. + */ + public static final int TRUNCATE = 7; + + /** + * Perform a checkpoint. The log section id is incremented. + * Format: - + */ + public static final int CHECKPOINT = 8; + + /** + * Free a log page. + * Format: count: varInt, page ids: varInt + */ + public static final int FREE_LOG = 9; + + /** + * The recovery stage to undo changes (re-apply the backup). + */ + static final int RECOVERY_STAGE_UNDO = 0; + + /** + * The recovery stage to allocate pages used by the transaction log. + */ + static final int RECOVERY_STAGE_ALLOCATE = 1; + + /** + * The recovery stage to redo operations. 
+     */
+    static final int RECOVERY_STAGE_REDO = 2;
+
+    private static final boolean COMPRESS_UNDO = true;
+
+    private final PageStore store;
+    private final Trace trace;
+
+    private Data writeBuffer;
+    private PageOutputStream pageOut;
+    private int firstTrunkPage;
+    private int firstDataPage;
+    private final Data dataBuffer;
+    private int logKey;
+    private int logSectionId, logPos;
+    private int firstSectionId;
+
+    private final CompressLZF compress;
+    private final byte[] compressBuffer;
+
+    /**
+     * If the bit is set, the given page was written to the current log section.
+     * The undo entry of these pages doesn't need to be written again.
+     */
+    private BitField undo = new BitField();
+
+    /**
+     * The undo entry of those pages was written in any log section.
+     * These pages may not be used in the transaction log.
+     */
+    private final BitField undoAll = new BitField();
+
+    /**
+     * The map of section ids (key) and data page where the section starts
+     * (value).
+     */
+    private final IntIntHashMap logSectionPageMap = new IntIntHashMap();
+
+    /**
+     * The session state map.
+     * Only used during recovery.
+     */
+    private HashMap<Integer, SessionState> sessionStates = new HashMap<>();
+
+    /**
+     * The map of pages used by the transaction log.
+     * Only used during recovery.
+     */
+    private BitField usedLogPages;
+
+    /**
+     * This flag is set while freeing up pages.
+     */
+    private boolean freeing;
+
+    PageLog(PageStore store) {
+        this.store = store;
+        dataBuffer = store.createData();
+        trace = store.getTrace();
+        compress = new CompressLZF();
+        compressBuffer = new byte[store.getPageSize() * 2];
+    }
+
+    /**
+     * Open the log for writing. For an existing database, the recovery
+     * must be run first.
+ * + * @param newFirstTrunkPage the first trunk page + * @param atEnd whether only pages at the end of the file should be used + */ + void openForWriting(int newFirstTrunkPage, boolean atEnd) { + trace.debug("log openForWriting firstPage: " + newFirstTrunkPage); + this.firstTrunkPage = newFirstTrunkPage; + logKey++; + pageOut = new PageOutputStream(store, + newFirstTrunkPage, undoAll, logKey, atEnd); + pageOut.reserve(1); + // pageBuffer = new BufferedOutputStream(pageOut, 8 * 1024); + store.setLogFirstPage(logKey, newFirstTrunkPage, + pageOut.getCurrentDataPageId()); + writeBuffer = store.createData(); + } + + /** + * Free up all pages allocated by the log. + */ + void free() { + if (trace.isDebugEnabled()) { + trace.debug("log free"); + } + int currentDataPage = 0; + if (pageOut != null) { + currentDataPage = pageOut.getCurrentDataPageId(); + pageOut.freeReserved(); + } + try { + freeing = true; + int first = 0; + int loopDetect = 1024, loopCount = 0; + PageStreamTrunk.Iterator it = new PageStreamTrunk.Iterator( + store, firstTrunkPage); + while (firstTrunkPage != 0 && firstTrunkPage < store.getPageCount()) { + PageStreamTrunk t = it.next(); + if (t == null) { + if (it.canDelete()) { + store.free(firstTrunkPage, false); + } + break; + } + if (loopCount++ >= loopDetect) { + first = t.getPos(); + loopCount = 0; + loopDetect *= 2; + } else if (first != 0 && first == t.getPos()) { + throw DbException.throwInternalError( + "endless loop at " + t); + } + t.free(currentDataPage); + firstTrunkPage = t.getNextTrunk(); + } + } finally { + freeing = false; + } + } + + /** + * Open the log for reading. 
+ * + * @param newLogKey the first expected log key + * @param newFirstTrunkPage the first trunk page + * @param newFirstDataPage the index of the first data page + */ + void openForReading(int newLogKey, int newFirstTrunkPage, + int newFirstDataPage) { + this.logKey = newLogKey; + this.firstTrunkPage = newFirstTrunkPage; + this.firstDataPage = newFirstDataPage; + } + + /** + * Run one recovery stage. There are three recovery stages: 0: only the undo + * steps are run (restoring the state before the last checkpoint). 1: the + * pages that are used by the transaction log are allocated. 2: the + * committed operations are re-applied. + * + * @param stage the recovery stage + * @return whether the transaction log was empty + */ + boolean recover(int stage) { + if (trace.isDebugEnabled()) { + trace.debug("log recover stage: " + stage); + } + if (stage == RECOVERY_STAGE_ALLOCATE) { + PageInputStream in = new PageInputStream(store, + logKey, firstTrunkPage, firstDataPage); + usedLogPages = in.allocateAllPages(); + in.close(); + return true; + } + PageInputStream pageIn = new PageInputStream(store, + logKey, firstTrunkPage, firstDataPage); + DataReader in = new DataReader(pageIn); + int logId = 0; + Data data = store.createData(); + boolean isEmpty = true; + try { + int pos = 0; + while (true) { + int x = in.readByte(); + if (x < 0) { + break; + } + pos++; + isEmpty = false; + if (x == UNDO) { + int pageId = in.readVarInt(); + int size = in.readVarInt(); + if (size == 0) { + in.readFully(data.getBytes(), store.getPageSize()); + } else if (size == 1) { + // empty + Arrays.fill(data.getBytes(), 0, store.getPageSize(), (byte) 0); + } else { + in.readFully(compressBuffer, size); + try { + compress.expand(compressBuffer, 0, size, + data.getBytes(), 0, store.getPageSize()); + } catch (ArrayIndexOutOfBoundsException e) { + DbException.convertToIOException(e); + } + } + if (stage == RECOVERY_STAGE_UNDO) { + if (!undo.get(pageId)) { + if (trace.isDebugEnabled()) { + 
trace.debug("log undo {0}", pageId); + } + store.writePage(pageId, data); + undo.set(pageId); + undoAll.set(pageId); + } else { + if (trace.isDebugEnabled()) { + trace.debug("log undo skip {0}", pageId); + } + } + } + } else if (x == ADD) { + int sessionId = in.readVarInt(); + int tableId = in.readVarInt(); + Row row = readRow(store.getDatabase().getRowFactory(), in, data); + if (stage == RECOVERY_STAGE_UNDO) { + store.allocateIfIndexRoot(pos, tableId, row); + } else if (stage == RECOVERY_STAGE_REDO) { + if (isSessionCommitted(sessionId, logId, pos)) { + if (trace.isDebugEnabled()) { + trace.debug("log redo + table: " + tableId + + " s: " + sessionId + " " + row); + } + store.redo(tableId, row, true); + } else { + if (trace.isDebugEnabled()) { + trace.debug("log ignore s: " + sessionId + + " + table: " + tableId + " " + row); + } + } + } + } else if (x == REMOVE) { + int sessionId = in.readVarInt(); + int tableId = in.readVarInt(); + long key = in.readVarLong(); + if (stage == RECOVERY_STAGE_REDO) { + if (isSessionCommitted(sessionId, logId, pos)) { + if (trace.isDebugEnabled()) { + trace.debug("log redo - table: " + tableId + + " s:" + sessionId + " key: " + key); + } + store.redoDelete(tableId, key); + } else { + if (trace.isDebugEnabled()) { + trace.debug("log ignore s: " + sessionId + + " - table: " + tableId + " " + key); + } + } + } + } else if (x == TRUNCATE) { + int sessionId = in.readVarInt(); + int tableId = in.readVarInt(); + if (stage == RECOVERY_STAGE_REDO) { + if (isSessionCommitted(sessionId, logId, pos)) { + if (trace.isDebugEnabled()) { + trace.debug("log redo truncate table: " + tableId); + } + store.redoTruncate(tableId); + } else { + if (trace.isDebugEnabled()) { + trace.debug("log ignore s: "+ sessionId + + " truncate table: " + tableId); + } + } + } + } else if (x == PREPARE_COMMIT) { + int sessionId = in.readVarInt(); + String transaction = in.readString(); + if (trace.isDebugEnabled()) { + trace.debug("log prepare commit " + sessionId + " " 
+ + transaction + " pos: " + pos); + } + if (stage == RECOVERY_STAGE_UNDO) { + int page = pageIn.getDataPage(); + setPrepareCommit(sessionId, page, transaction); + } + } else if (x == ROLLBACK) { + int sessionId = in.readVarInt(); + if (trace.isDebugEnabled()) { + trace.debug("log rollback " + sessionId + " pos: " + pos); + } + // ignore - this entry is just informational + } else if (x == COMMIT) { + int sessionId = in.readVarInt(); + if (trace.isDebugEnabled()) { + trace.debug("log commit " + sessionId + " pos: " + pos); + } + if (stage == RECOVERY_STAGE_UNDO) { + setLastCommitForSession(sessionId, logId, pos); + } + } else if (x == NOOP) { + // nothing to do + } else if (x == CHECKPOINT) { + logId++; + } else if (x == FREE_LOG) { + int count = in.readVarInt(); + for (int i = 0; i < count; i++) { + int pageId = in.readVarInt(); + if (stage == RECOVERY_STAGE_REDO) { + if (!usedLogPages.get(pageId)) { + store.free(pageId, false); + } + } + } + } else { + if (trace.isDebugEnabled()) { + trace.debug("log end"); + break; + } + } + } + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.FILE_CORRUPTED_1) { + trace.debug("log recovery stopped"); + } else { + throw e; + } + } catch (IOException e) { + trace.debug("log recovery completed"); + } + undo = new BitField(); + if (stage == RECOVERY_STAGE_REDO) { + usedLogPages = null; + } + return isEmpty; + } + + /** + * This method is called when a 'prepare commit' log entry is read when + * opening the database. 
+ * + * @param sessionId the session id + * @param pageId the data page with the prepare entry + * @param transaction the transaction name, or null to rollback + */ + private void setPrepareCommit(int sessionId, int pageId, String transaction) { + SessionState state = getOrAddSessionState(sessionId); + PageStoreInDoubtTransaction doubt; + if (transaction == null) { + doubt = null; + } else { + doubt = new PageStoreInDoubtTransaction(store, sessionId, pageId, + transaction); + } + state.inDoubtTransaction = doubt; + } + + /** + * Read a row from an input stream. + * + * @param rowFactory the row factory + * @param in the input stream + * @param data a temporary buffer + * @return the row + */ + public static Row readRow(RowFactory rowFactory, DataReader in, Data data) throws IOException { + long key = in.readVarLong(); + int len = in.readVarInt(); + data.reset(); + data.checkCapacity(len); + in.readFully(data.getBytes(), len); + int columnCount = data.readVarInt(); + Value[] values = new Value[columnCount]; + for (int i = 0; i < columnCount; i++) { + values[i] = data.readValue(); + } + Row row = rowFactory.createRow(values, Row.MEMORY_CALCULATE); + row.setKey(key); + return row; + } + + /** + * Check if the undo entry was already written for the given page. + * + * @param pageId the page + * @return true if it was written + */ + boolean getUndo(int pageId) { + return undo.get(pageId); + } + + /** + * Add an undo entry to the log. The page data is only written once until + * the next checkpoint. 
+ * + * @param pageId the page id + * @param page the old page data + */ + void addUndo(int pageId, Data page) { + if (undo.get(pageId) || freeing) { + return; + } + if (trace.isDebugEnabled()) { + trace.debug("log undo " + pageId); + } + if (SysProperties.CHECK) { + if (page == null) { + DbException.throwInternalError("Undo entry not written"); + } + } + undo.set(pageId); + undoAll.set(pageId); + Data buffer = getBuffer(); + buffer.writeByte((byte) UNDO); + buffer.writeVarInt(pageId); + if (page.getBytes()[0] == 0) { + buffer.writeVarInt(1); + } else { + int pageSize = store.getPageSize(); + if (COMPRESS_UNDO) { + int size = compress.compress(page.getBytes(), + pageSize, compressBuffer, 0); + if (size < pageSize) { + buffer.writeVarInt(size); + buffer.checkCapacity(size); + buffer.write(compressBuffer, 0, size); + } else { + buffer.writeVarInt(0); + buffer.checkCapacity(pageSize); + buffer.write(page.getBytes(), 0, pageSize); + } + } else { + buffer.writeVarInt(0); + buffer.checkCapacity(pageSize); + buffer.write(page.getBytes(), 0, pageSize); + } + } + write(buffer); + } + + private void freeLogPages(IntArray pages) { + if (trace.isDebugEnabled()) { + trace.debug("log frees " + pages.get(0) + ".." + + pages.get(pages.size() - 1)); + } + Data buffer = getBuffer(); + buffer.writeByte((byte) FREE_LOG); + int size = pages.size(); + buffer.writeVarInt(size); + for (int i = 0; i < size; i++) { + buffer.writeVarInt(pages.get(i)); + } + write(buffer); + } + + private void write(Data data) { + pageOut.write(data.getBytes(), 0, data.length()); + data.reset(); + } + + /** + * Mark a transaction as committed. 
+ * + * @param sessionId the session + */ + void commit(int sessionId) { + if (trace.isDebugEnabled()) { + trace.debug("log commit s: " + sessionId); + } + if (store.getDatabase().getPageStore() == null) { + // database already closed + return; + } + Data buffer = getBuffer(); + buffer.writeByte((byte) COMMIT); + buffer.writeVarInt(sessionId); + write(buffer); + if (store.getDatabase().getFlushOnEachCommit()) { + flush(); + } + } + + /** + * Prepare a transaction. + * + * @param session the session + * @param transaction the name of the transaction + */ + void prepareCommit(Session session, String transaction) { + if (trace.isDebugEnabled()) { + trace.debug("log prepare commit s: " + session.getId() + ", " + transaction); + } + if (store.getDatabase().getPageStore() == null) { + // database already closed + return; + } + // store it on a separate log page + int pageSize = store.getPageSize(); + pageOut.flush(); + pageOut.fillPage(); + Data buffer = getBuffer(); + buffer.writeByte((byte) PREPARE_COMMIT); + buffer.writeVarInt(session.getId()); + buffer.writeString(transaction); + if (buffer.length() >= PageStreamData.getCapacity(pageSize)) { + throw DbException.getInvalidValueException( + "transaction name (too long)", transaction); + } + write(buffer); + // store it on a separate log page + flushOut(); + pageOut.fillPage(); + if (store.getDatabase().getFlushOnEachCommit()) { + flush(); + } + } + + /** + * A record is added to a table, or removed from a table. + * + * @param session the session + * @param tableId the table id + * @param row the row to add + * @param add true if the row is added, false if it is removed + */ + void logAddOrRemoveRow(Session session, int tableId, Row row, boolean add) { + if (trace.isDebugEnabled()) { + trace.debug("log " + (add ? 
"+" : "-") + + " s: " + session.getId() + " table: " + tableId + " row: " + row); + } + session.addLogPos(logSectionId, logPos); + logPos++; + Data data = dataBuffer; + data.reset(); + int columns = row.getColumnCount(); + data.writeVarInt(columns); + data.checkCapacity(row.getByteCount(data)); + if (session.isRedoLogBinaryEnabled()) { + for (int i = 0; i < columns; i++) { + data.writeValue(row.getValue(i)); + } + } else { + for (int i = 0; i < columns; i++) { + Value v = row.getValue(i); + if (v.getType() == Value.BYTES) { + data.writeValue(ValueNull.INSTANCE); + } else { + data.writeValue(v); + } + } + } + Data buffer = getBuffer(); + buffer.writeByte((byte) (add ? ADD : REMOVE)); + buffer.writeVarInt(session.getId()); + buffer.writeVarInt(tableId); + buffer.writeVarLong(row.getKey()); + if (add) { + buffer.writeVarInt(data.length()); + buffer.checkCapacity(data.length()); + buffer.write(data.getBytes(), 0, data.length()); + } + write(buffer); + } + + /** + * A table is truncated. + * + * @param session the session + * @param tableId the table id + */ + void logTruncate(Session session, int tableId) { + if (trace.isDebugEnabled()) { + trace.debug("log truncate s: " + session.getId() + " table: " + tableId); + } + session.addLogPos(logSectionId, logPos); + logPos++; + Data buffer = getBuffer(); + buffer.writeByte((byte) TRUNCATE); + buffer.writeVarInt(session.getId()); + buffer.writeVarInt(tableId); + write(buffer); + } + + /** + * Flush the transaction log. + */ + void flush() { + if (pageOut != null) { + flushOut(); + } + } + + /** + * Switch to a new log section. 
+ */ + void checkpoint() { + Data buffer = getBuffer(); + buffer.writeByte((byte) CHECKPOINT); + write(buffer); + undo = new BitField(); + logSectionId++; + logPos = 0; + pageOut.flush(); + pageOut.fillPage(); + int currentDataPage = pageOut.getCurrentDataPageId(); + logSectionPageMap.put(logSectionId, currentDataPage); + } + + int getLogSectionId() { + return logSectionId; + } + + int getLogFirstSectionId() { + return firstSectionId; + } + + int getLogPos() { + return logPos; + } + + /** + * Remove all pages until the given log (excluding). + * + * @param firstUncommittedSection the first log section to keep + */ + void removeUntil(int firstUncommittedSection) { + if (firstUncommittedSection == 0) { + return; + } + int firstDataPageToKeep = logSectionPageMap.get(firstUncommittedSection); + firstTrunkPage = removeUntil(firstTrunkPage, firstDataPageToKeep); + store.setLogFirstPage(logKey, firstTrunkPage, firstDataPageToKeep); + while (firstSectionId < firstUncommittedSection) { + if (firstSectionId > 0) { + // there is no entry for log 0 + logSectionPageMap.remove(firstSectionId); + } + firstSectionId++; + } + } + + /** + * Remove all pages until the given data page. 
+ * + * @param trunkPage the first trunk page + * @param firstDataPageToKeep the first data page to keep + * @return the trunk page of the data page to keep + */ + private int removeUntil(int trunkPage, int firstDataPageToKeep) { + trace.debug("log.removeUntil " + trunkPage + " " + firstDataPageToKeep); + int last = trunkPage; + while (true) { + Page p = store.getPage(trunkPage); + PageStreamTrunk t = (PageStreamTrunk) p; + if (t == null) { + throw DbException.throwInternalError( + "log.removeUntil not found: " + firstDataPageToKeep + " last " + last); + } + logKey = t.getLogKey(); + last = t.getPos(); + if (t.contains(firstDataPageToKeep)) { + return last; + } + trunkPage = t.getNextTrunk(); + IntArray list = new IntArray(); + list.add(t.getPos()); + for (int i = 0;; i++) { + int next = t.getPageData(i); + if (next == -1) { + break; + } + list.add(next); + } + freeLogPages(list); + pageOut.free(t); + } + } + + /** + * Close without further writing. + */ + void close() { + trace.debug("log close"); + if (pageOut != null) { + pageOut.close(); + pageOut = null; + } + writeBuffer = null; + } + + /** + * Check if the session committed after than the given position. + * + * @param sessionId the session id + * @param logId the log id + * @param pos the position in the log + * @return true if it is committed + */ + private boolean isSessionCommitted(int sessionId, int logId, int pos) { + SessionState state = sessionStates.get(sessionId); + if (state == null) { + return false; + } + return state.isCommitted(logId, pos); + } + + /** + * Set the last commit record for a session. + * + * @param sessionId the session id + * @param logId the log id + * @param pos the position in the log + */ + private void setLastCommitForSession(int sessionId, int logId, int pos) { + SessionState state = getOrAddSessionState(sessionId); + state.lastCommitLog = logId; + state.lastCommitPos = pos; + state.inDoubtTransaction = null; + } + + /** + * Get the session state for this session. 
A new object is created if there
+     * is no session state yet.
+     *
+     * @param sessionId the session id
+     * @return the session state object
+     */
+    private SessionState getOrAddSessionState(int sessionId) {
+        Integer key = sessionId;
+        SessionState state = sessionStates.get(key);
+        if (state == null) {
+            state = new SessionState();
+            sessionStates.put(key, state);
+            state.sessionId = sessionId;
+        }
+        return state;
+    }
+
+    long getSize() {
+        return pageOut == null ? 0 : pageOut.getSize();
+    }
+
+    ArrayList<InDoubtTransaction> getInDoubtTransactions() {
+        ArrayList<InDoubtTransaction> list = New.arrayList();
+        for (SessionState state : sessionStates.values()) {
+            PageStoreInDoubtTransaction in = state.inDoubtTransaction;
+            if (in != null) {
+                list.add(in);
+            }
+        }
+        return list;
+    }
+
+    /**
+     * Set the state of an in-doubt transaction.
+     *
+     * @param sessionId the session
+     * @param pageId the page where the commit was prepared
+     * @param commit whether the transaction should be committed
+     */
+    void setInDoubtTransactionState(int sessionId, int pageId, boolean commit) {
+        PageStreamData d = (PageStreamData) store.getPage(pageId);
+        d.initWrite();
+        Data buff = store.createData();
+        buff.writeByte((byte) (commit ? COMMIT : ROLLBACK));
+        buff.writeVarInt(sessionId);
+        byte[] bytes = buff.getBytes();
+        d.write(bytes, 0, bytes.length);
+        bytes = new byte[d.getRemaining()];
+        d.write(bytes, 0, bytes.length);
+        d.write();
+    }
+
+    /**
+     * Called after the recovery has been completed.
+     */
+    void recoverEnd() {
+        sessionStates = new HashMap<>();
+    }
+
+    private void flushOut() {
+        pageOut.flush();
+    }
+
+    private Data getBuffer() {
+        if (writeBuffer.length() == 0) {
+            return writeBuffer;
+        }
+        return store.createData();
+    }
+
+    /**
+     * Get the smallest possible page id used. This is the trunk page if only
+     * appending at the end of the file, or 0.
+     *
+     * @return the smallest possible page.
+     */
+    int getMinPageId() {
+        return pageOut == null ?
0 : pageOut.getMinPageId(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/PageOutputStream.java b/modules/h2/src/main/java/org/h2/store/PageOutputStream.java new file mode 100644 index 0000000000000..01e7a6b9955d0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageOutputStream.java @@ -0,0 +1,223 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.util.BitField; +import org.h2.util.IntArray; + +/** + * An output stream that writes into a page store. + */ +public class PageOutputStream { + + private PageStore store; + private final Trace trace; + private final BitField exclude; + private final boolean atEnd; + private final int minPageId; + + private int trunkPageId; + private int trunkNext; + private IntArray reservedPages = new IntArray(); + private PageStreamTrunk trunk; + private int trunkIndex; + private PageStreamData data; + private int reserved; + private boolean needFlush; + private boolean writing; + private int pageCount; + private int logKey; + + /** + * Create a new page output stream. + * + * @param store the page store + * @param trunkPage the first trunk page (already allocated) + * @param exclude the pages not to use + * @param logKey the log key of the first trunk page + * @param atEnd whether only pages at the end of the file should be used + */ + public PageOutputStream(PageStore store, int trunkPage, BitField exclude, + int logKey, boolean atEnd) { + this.trace = store.getTrace(); + this.store = store; + this.trunkPageId = trunkPage; + this.exclude = exclude; + // minus one, because we increment before creating a trunk page + this.logKey = logKey - 1; + this.atEnd = atEnd; + minPageId = atEnd ? 
trunkPage : 0; + } + + /** + * Allocate the required pages so that no pages need to be allocated while + * writing. + * + * @param minBuffer the number of bytes to allocate + */ + void reserve(int minBuffer) { + if (reserved < minBuffer) { + int pageSize = store.getPageSize(); + int capacityPerPage = PageStreamData.getCapacity(pageSize); + int pages = PageStreamTrunk.getPagesAddressed(pageSize); + int pagesToAllocate = 0, totalCapacity = 0; + do { + // allocate x data pages plus one trunk page + pagesToAllocate += pages + 1; + totalCapacity += pages * capacityPerPage; + } while (totalCapacity < minBuffer); + int firstPageToUse = atEnd ? trunkPageId : 0; + store.allocatePages(reservedPages, pagesToAllocate, exclude, firstPageToUse); + reserved += totalCapacity; + if (data == null) { + initNextData(); + } + } + } + + private void initNextData() { + int nextData = trunk == null ? -1 : trunk.getPageData(trunkIndex++); + if (nextData == -1) { + int parent = trunkPageId; + if (trunkNext != 0) { + trunkPageId = trunkNext; + } + int len = PageStreamTrunk.getPagesAddressed(store.getPageSize()); + int[] pageIds = new int[len]; + for (int i = 0; i < len; i++) { + pageIds[i] = reservedPages.get(i); + } + trunkNext = reservedPages.get(len); + logKey++; + trunk = PageStreamTrunk.create(store, parent, trunkPageId, + trunkNext, logKey, pageIds); + trunkIndex = 0; + pageCount++; + trunk.write(); + reservedPages.removeRange(0, len + 1); + nextData = trunk.getPageData(trunkIndex++); + } + data = PageStreamData.create(store, nextData, trunk.getPos(), logKey); + pageCount++; + data.initWrite(); + } + + /** + * Write the data. 
+ * + * @param b the buffer + * @param off the offset + * @param len the length + */ + public void write(byte[] b, int off, int len) { + if (len <= 0) { + return; + } + if (writing) { + DbException.throwInternalError("writing while still writing"); + } + try { + reserve(len); + writing = true; + while (len > 0) { + int l = data.write(b, off, len); + if (l < len) { + storePage(); + initNextData(); + } + reserved -= l; + off += l; + len -= l; + } + needFlush = true; + } finally { + writing = false; + } + } + + private void storePage() { + if (trace.isDebugEnabled()) { + trace.debug("pageOut.storePage " + data); + } + data.write(); + } + + /** + * Write all data. + */ + public void flush() { + if (needFlush) { + storePage(); + needFlush = false; + } + } + + /** + * Close the stream. + */ + public void close() { + store = null; + } + + int getCurrentDataPageId() { + return data.getPos(); + } + + /** + * Fill the data page with zeros and write it. + * This is required for a checkpoint. + */ + void fillPage() { + if (trace.isDebugEnabled()) { + trace.debug("pageOut.storePage fill " + data.getPos()); + } + reserve(data.getRemaining() + 1); + reserved -= data.getRemaining(); + data.write(); + initNextData(); + } + + long getSize() { + return pageCount * store.getPageSize(); + } + + /** + * Remove a trunk page from the stream. + * + * @param t the trunk page + */ + void free(PageStreamTrunk t) { + pageCount -= t.free(0); + } + + /** + * Free up all reserved pages. + */ + void freeReserved() { + if (reservedPages.size() > 0) { + int[] array = new int[reservedPages.size()]; + reservedPages.toArray(array); + reservedPages = new IntArray(); + reserved = 0; + for (int p : array) { + store.free(p, false); + } + } + } + + /** + * Get the smallest possible page id used. This is the trunk page if only + * appending at the end of the file, or 0. + * + * @return the smallest possible page. 
+ */ + int getMinPageId() { + return minPageId; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/PageStore.java b/modules/h2/src/main/java/org/h2/store/PageStore.java new file mode 100644 index 0000000000000..8a8e864a0652c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageStore.java @@ -0,0 +1,2044 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.OutputStream; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.concurrent.TimeUnit; +import java.util.zip.CRC32; +import org.h2.api.ErrorCode; +import org.h2.command.CommandInterface; +import org.h2.command.ddl.CreateTableData; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.index.Cursor; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.index.MultiVersionIndex; +import org.h2.index.PageBtreeIndex; +import org.h2.index.PageBtreeLeaf; +import org.h2.index.PageBtreeNode; +import org.h2.index.PageDataIndex; +import org.h2.index.PageDataLeaf; +import org.h2.index.PageDataNode; +import org.h2.index.PageDataOverflow; +import org.h2.index.PageDelegateIndex; +import org.h2.index.PageIndex; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.Row; +import org.h2.schema.Schema; +import org.h2.store.fs.FileUtils; +import org.h2.table.Column; +import org.h2.table.IndexColumn; +import org.h2.table.RegularTable; +import org.h2.table.Table; +import org.h2.table.TableType; +import org.h2.util.BitField; +import org.h2.util.Cache; +import org.h2.util.CacheLRU; +import org.h2.util.CacheObject; +import org.h2.util.CacheWriter; +import org.h2.util.IntArray; +import org.h2.util.IntIntHashMap; +import 
org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueInt; +import org.h2.value.ValueString; + +/** + * This class represents a file that is organized as a number of pages. Page 0 + * contains a static file header, and pages 1 and 2 both contain the variable + * file header (page 2 is a copy of page 1 and is only read if the checksum of + * page 1 is invalid). The format of page 0 is: + *
+ * <ul>
+ * <li>0-47: file header (3 time "-- H2 0.5/B -- \n")</li>
+ * <li>48-51: page size in bytes (512 - 32768, must be a power of 2)</li>
+ * <li>52: write version (read-only if larger than 1)</li>
+ * <li>53: read version (opening fails if larger than 1)</li>
+ * </ul>
+ * The format of page 1 and 2 is:
+ * <ul>
+ * <li>CRC32 of the remaining data: int (0-3)</li>
+ * <li>write counter (incremented on each write): long (4-11)</li>
+ * <li>log trunk key: int (12-15)</li>
+ * <li>log trunk page (0 for none): int (16-19)</li>
+ * <li>log data page (0 for none): int (20-23)</li>
+ * </ul>
    + * Page 3 contains the first free list page. + * Page 4 contains the meta table root page. + */ +public class PageStore implements CacheWriter { + + // TODO test running out of disk space (using a special file system) + // TODO unused pages should be freed once in a while + // TODO node row counts are incorrect (it's not splitting row counts) + // TODO after opening the database, delay writing until required + // TODO optimization: try to avoid allocating a byte array per page + // TODO optimization: check if calling Data.getValueLen slows things down + // TODO order pages so that searching for a key only seeks forward + // TODO optimization: update: only log the key and changed values + // TODO index creation: use less space (ordered, split at insertion point) + // TODO detect circles in linked lists + // (input stream, free list, extend pages...) + // at runtime and recovery + // TODO remove trace or use isDebugEnabled + // TODO recover tool: support syntax to delete a row with a key + // TODO don't store default values (store a special value) + // TODO check for file size (exception if not exact size expected) + // TODO online backup using bsdiff + + /** + * The smallest possible page size. + */ + public static final int PAGE_SIZE_MIN = 64; + + /** + * The biggest possible page size. + */ + public static final int PAGE_SIZE_MAX = 32768; + + /** + * This log mode means the transaction log is not used. + */ + public static final int LOG_MODE_OFF = 0; + + /** + * This log mode means the transaction log is used and FileDescriptor.sync() + * is called for each checkpoint. This is the default level. 
+ */ + public static final int LOG_MODE_SYNC = 2; + private static final int PAGE_ID_FREE_LIST_ROOT = 3; + private static final int PAGE_ID_META_ROOT = 4; + private static final int MIN_PAGE_COUNT = 5; + private static final int INCREMENT_KB = 1024; + private static final int INCREMENT_PERCENT_MIN = 35; + private static final int READ_VERSION = 3; + private static final int WRITE_VERSION = 3; + private static final int META_TYPE_DATA_INDEX = 0; + private static final int META_TYPE_BTREE_INDEX = 1; + private static final int META_TABLE_ID = -1; + private static final int COMPACT_BLOCK_SIZE = 1536; + private final Database database; + private final Trace trace; + private final String fileName; + private FileStore file; + private String accessMode; + private int pageSize = Constants.DEFAULT_PAGE_SIZE; + private int pageSizeShift; + private long writeCountBase, writeCount, readCount; + private int logKey, logFirstTrunkPage, logFirstDataPage; + private final Cache cache; + private int freeListPagesPerList; + private boolean recoveryRunning; + private boolean ignoreBigLog; + + /** + * The index to the first free-list page that potentially has free space. + */ + private int firstFreeListIndex; + + /** + * The file size in bytes. + */ + private long fileLength; + + /** + * Number of pages (including free pages). + */ + private int pageCount; + + private PageLog log; + private Schema metaSchema; + private RegularTable metaTable; + private PageDataIndex metaIndex; + private final IntIntHashMap metaRootPageId = new IntIntHashMap(); + private final HashMap metaObjects = new HashMap<>(); + private HashMap tempObjects; + + /** + * The map of reserved pages, to ensure index head pages + * are not used for regular data during recovery. The key is the page id, + * and the value the latest transaction position where this page is used. 
+ */ + private HashMap reservedPages; + private boolean isNew; + private long maxLogSize = Constants.DEFAULT_MAX_LOG_SIZE; + private final Session pageStoreSession; + + /** + * Each free page is marked with a set bit. + */ + private final BitField freed = new BitField(); + private final ArrayList freeLists = New.arrayList(); + + private boolean recordPageReads; + private ArrayList recordedPagesList; + private IntIntHashMap recordedPagesIndex; + + /** + * The change count is something like a "micro-transaction-id". + * It is used to ensure that changed pages are not written to the file + * before the the current operation is not finished. This is only a problem + * when using a very small cache size. The value starts at 1 so that + * pages with change count 0 can be evicted from the cache. + */ + private long changeCount = 1; + + private Data emptyPage; + private long logSizeBase; + private HashMap statistics; + private int logMode = LOG_MODE_SYNC; + private boolean lockFile; + private boolean readMode; + private int backupLevel; + + /** + * Create a new page store object. + * + * @param database the database + * @param fileName the file name + * @param accessMode the access mode + * @param cacheSizeDefault the default cache size + */ + public PageStore(Database database, String fileName, String accessMode, + int cacheSizeDefault) { + this.fileName = fileName; + this.accessMode = accessMode; + this.database = database; + trace = database.getTrace(Trace.PAGE_STORE); + // if (fileName.endsWith("X.h2.db")) + // trace.setLevel(TraceSystem.DEBUG); + String cacheType = database.getCacheType(); + this.cache = CacheLRU.getCache(this, cacheType, cacheSizeDefault); + pageStoreSession = new Session(database, null, 0); + } + + /** + * Start collecting statistics. + */ + public void statisticsStart() { + statistics = new HashMap<>(); + } + + /** + * Stop collecting statistics. 
+ * + * @return the statistics + */ + public HashMap statisticsEnd() { + HashMap result = statistics; + statistics = null; + return result; + } + + private void statisticsIncrement(String key) { + if (statistics != null) { + Integer old = statistics.get(key); + statistics.put(key, old == null ? 1 : old + 1); + } + } + + /** + * Copy the next page to the output stream. + * + * @param pageId the page to copy + * @param out the output stream + * @return the new position, or -1 if there is no more data to copy + */ + public synchronized int copyDirect(int pageId, OutputStream out) + throws IOException { + byte[] buffer = new byte[pageSize]; + if (pageId >= pageCount) { + return -1; + } + file.seek((long) pageId << pageSizeShift); + file.readFullyDirect(buffer, 0, pageSize); + readCount++; + out.write(buffer, 0, pageSize); + return pageId + 1; + } + + /** + * Open the file and read the header. + */ + public synchronized void open() { + try { + metaRootPageId.put(META_TABLE_ID, PAGE_ID_META_ROOT); + if (FileUtils.exists(fileName)) { + long length = FileUtils.size(fileName); + if (length < MIN_PAGE_COUNT * PAGE_SIZE_MIN) { + if (database.isReadOnly()) { + throw DbException.get( + ErrorCode.FILE_CORRUPTED_1, fileName + " length: " + length); + } + // the database was not fully created + openNew(); + } else { + openExisting(); + } + } else { + openNew(); + } + } catch (DbException e) { + close(); + throw e; + } + } + + private void openNew() { + setPageSize(pageSize); + freeListPagesPerList = PageFreeList.getPagesAddressed(pageSize); + file = database.openFile(fileName, accessMode, false); + lockFile(); + recoveryRunning = true; + writeStaticHeader(); + writeVariableHeader(); + log = new PageLog(this); + increaseFileSize(MIN_PAGE_COUNT); + openMetaIndex(); + logFirstTrunkPage = allocatePage(); + log.openForWriting(logFirstTrunkPage, false); + isNew = true; + recoveryRunning = false; + increaseFileSize(); + } + + private void lockFile() { + if (lockFile) { + if 
(!file.tryLock()) { + throw DbException.get( + ErrorCode.DATABASE_ALREADY_OPEN_1, fileName); + } + } + } + + private void openExisting() { + try { + file = database.openFile(fileName, accessMode, true); + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.IO_EXCEPTION_2) { + if (e.getMessage().contains("locked")) { + // in Windows, you can't open a locked file + // (in other operating systems, you can) + // the exact error message is: + // "The process cannot access the file because + // another process has locked a portion of the file" + throw DbException.get( + ErrorCode.DATABASE_ALREADY_OPEN_1, e, fileName); + } + } + throw e; + } + lockFile(); + readStaticHeader(); + freeListPagesPerList = PageFreeList.getPagesAddressed(pageSize); + fileLength = file.length(); + pageCount = (int) (fileLength / pageSize); + if (pageCount < MIN_PAGE_COUNT) { + if (database.isReadOnly()) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + fileName + " pageCount: " + pageCount); + } + file.releaseLock(); + file.close(); + FileUtils.delete(fileName); + openNew(); + return; + } + readVariableHeader(); + log = new PageLog(this); + log.openForReading(logKey, logFirstTrunkPage, logFirstDataPage); + boolean old = database.isMultiVersion(); + // temporarily disabling multi-version concurrency, because + // the multi-version index sometimes compares rows + // and the LOB storage is not yet available. 
+ database.setMultiVersion(false); + boolean isEmpty = recover(); + database.setMultiVersion(old); + if (!database.isReadOnly()) { + readMode = true; + if (!isEmpty || !SysProperties.MODIFY_ON_WRITE || tempObjects != null) { + openForWriting(); + removeOldTempIndexes(); + } + } + } + + private void openForWriting() { + if (!readMode || database.isReadOnly()) { + return; + } + readMode = false; + recoveryRunning = true; + log.free(); + logFirstTrunkPage = allocatePage(); + log.openForWriting(logFirstTrunkPage, false); + recoveryRunning = false; + freed.set(0, pageCount, true); + checkpoint(); + } + + private void removeOldTempIndexes() { + if (tempObjects != null) { + metaObjects.putAll(tempObjects); + for (PageIndex index: tempObjects.values()) { + if (index.getTable().isTemporary()) { + index.truncate(pageStoreSession); + index.remove(pageStoreSession); + } + } + pageStoreSession.commit(true); + tempObjects = null; + } + metaObjects.clear(); + metaObjects.put(-1, metaIndex); + } + + private void writeIndexRowCounts() { + for (PageIndex index: metaObjects.values()) { + index.writeRowCount(); + } + } + + private void writeBack() { + ArrayList list = cache.getAllChanged(); + Collections.sort(list); + for (CacheObject cacheObject : list) { + writeBack(cacheObject); + } + } + + /** + * Flush all pending changes to disk, and switch the new transaction log. 
+ */ + public synchronized void checkpoint() { + trace.debug("checkpoint"); + if (log == null || readMode || database.isReadOnly() || backupLevel > 0) { + // the file was never fully opened, or is read-only, + // or checkpoint is currently disabled + return; + } + database.checkPowerOff(); + writeIndexRowCounts(); + + log.checkpoint(); + writeBack(); + + int firstUncommittedSection = getFirstUncommittedSection(); + + log.removeUntil(firstUncommittedSection); + + // write back the free list + writeBack(); + + // ensure the free list is backed up again + log.checkpoint(); + + if (trace.isDebugEnabled()) { + trace.debug("writeFree"); + } + byte[] test = new byte[16]; + byte[] empty = new byte[pageSize]; + for (int i = PAGE_ID_FREE_LIST_ROOT; i < pageCount; i++) { + if (isUsed(i)) { + freed.clear(i); + } else if (!freed.get(i)) { + if (trace.isDebugEnabled()) { + trace.debug("free " + i); + } + file.seek((long) i << pageSizeShift); + file.readFully(test, 0, 16); + if (test[0] != 0) { + file.seek((long) i << pageSizeShift); + file.write(empty, 0, pageSize); + writeCount++; + } + freed.set(i); + } + } + } + + /** + * Shrink the file so there are no empty pages at the end. 
+ * + * @param compactMode 0 if no compacting should happen, otherwise + * TransactionCommand.SHUTDOWN_COMPACT or TransactionCommand.SHUTDOWN_DEFRAG + */ + public synchronized void compact(int compactMode) { + if (!database.getSettings().pageStoreTrim) { + return; + } + if (SysProperties.MODIFY_ON_WRITE && readMode && + compactMode == 0) { + return; + } + openForWriting(); + // find the last used page + int lastUsed = -1; + for (int i = getFreeListId(pageCount); i >= 0; i--) { + lastUsed = getFreeList(i).getLastUsed(); + if (lastUsed != -1) { + break; + } + } + // open a new log at the very end + // (to be truncated later) + writeBack(); + log.free(); + recoveryRunning = true; + try { + logFirstTrunkPage = lastUsed + 1; + allocatePage(logFirstTrunkPage); + log.openForWriting(logFirstTrunkPage, true); + // ensure the free list is backed up again + log.checkpoint(); + } finally { + recoveryRunning = false; + } + long start = System.nanoTime(); + boolean isCompactFully = compactMode == + CommandInterface.SHUTDOWN_COMPACT; + boolean isDefrag = compactMode == + CommandInterface.SHUTDOWN_DEFRAG; + + if (database.getSettings().defragAlways) { + isCompactFully = isDefrag = true; + } + + int maxCompactTime = database.getSettings().maxCompactTime; + int maxMove = database.getSettings().maxCompactCount; + + if (isCompactFully || isDefrag) { + maxCompactTime = Integer.MAX_VALUE; + maxMove = Integer.MAX_VALUE; + } + int blockSize = isCompactFully ? 
COMPACT_BLOCK_SIZE : 1; + int firstFree = MIN_PAGE_COUNT; + for (int x = lastUsed, j = 0; x > MIN_PAGE_COUNT && + j < maxMove; x -= blockSize) { + for (int full = x - blockSize + 1; full <= x; full++) { + if (full > MIN_PAGE_COUNT && isUsed(full)) { + synchronized (this) { + firstFree = getFirstFree(firstFree); + if (firstFree == -1 || firstFree >= full) { + j = maxMove; + break; + } + if (compact(full, firstFree)) { + j++; + long now = System.nanoTime(); + if (now > start + TimeUnit.MILLISECONDS.toNanos(maxCompactTime)) { + j = maxMove; + break; + } + } + } + } + } + } + if (isDefrag) { + log.checkpoint(); + writeBack(); + cache.clear(); + ArrayList tables = database.getAllTablesAndViews(false); + recordedPagesList = New.arrayList(); + recordedPagesIndex = new IntIntHashMap(); + recordPageReads = true; + Session sysSession = database.getSystemSession(); + for (Table table : tables) { + if (!table.isTemporary() && TableType.TABLE == table.getTableType()) { + Index scanIndex = table.getScanIndex(sysSession); + Cursor cursor = scanIndex.find(sysSession, null, null); + while (cursor.next()) { + cursor.get(); + } + for (Index index : table.getIndexes()) { + if (index != scanIndex && index.canScan()) { + cursor = index.find(sysSession, null, null); + while (cursor.next()) { + // the data is already read + } + } + } + } + } + recordPageReads = false; + int target = MIN_PAGE_COUNT - 1; + int temp = 0; + for (int i = 0, size = recordedPagesList.size(); i < size; i++) { + log.checkpoint(); + writeBack(); + int source = recordedPagesList.get(i); + Page pageSource = getPage(source); + if (!pageSource.canMove()) { + continue; + } + while (true) { + Page pageTarget = getPage(++target); + if (pageTarget == null || pageTarget.canMove()) { + break; + } + } + if (target == source) { + continue; + } + temp = getFirstFree(temp); + if (temp == -1) { + DbException.throwInternalError("no free page for defrag"); + } + cache.clear(); + swap(source, target, temp); + int index = 
recordedPagesIndex.get(target); + if (index != IntIntHashMap.NOT_FOUND) { + recordedPagesList.set(index, source); + recordedPagesIndex.put(source, index); + } + recordedPagesList.set(i, target); + recordedPagesIndex.put(target, i); + } + recordedPagesList = null; + recordedPagesIndex = null; + } + // TODO can most likely be simplified + checkpoint(); + log.checkpoint(); + writeIndexRowCounts(); + log.checkpoint(); + writeBack(); + commit(pageStoreSession); + writeBack(); + log.checkpoint(); + + log.free(); + // truncate the log + recoveryRunning = true; + try { + setLogFirstPage(++logKey, 0, 0); + } finally { + recoveryRunning = false; + } + writeBack(); + for (int i = getFreeListId(pageCount); i >= 0; i--) { + lastUsed = getFreeList(i).getLastUsed(); + if (lastUsed != -1) { + break; + } + } + int newPageCount = lastUsed + 1; + if (newPageCount < pageCount) { + freed.set(newPageCount, pageCount, false); + } + pageCount = newPageCount; + // the easiest way to remove superfluous entries + freeLists.clear(); + trace.debug("pageCount: " + pageCount); + long newLength = (long) pageCount << pageSizeShift; + if (file.length() != newLength) { + file.setLength(newLength); + writeCount++; + } + } + + private int getFirstFree(int start) { + int free = -1; + for (int id = getFreeListId(start); start < pageCount; id++) { + free = getFreeList(id).getFirstFree(start); + if (free != -1) { + break; + } + } + return free; + } + + private void swap(int a, int b, int free) { + if (a < MIN_PAGE_COUNT || b < MIN_PAGE_COUNT) { + System.out.println(isUsed(a) + " " + isUsed(b)); + DbException.throwInternalError("can't swap " + a + " and " + b); + } + Page f = (Page) cache.get(free); + if (f != null) { + DbException.throwInternalError("not free: " + f); + } + if (trace.isDebugEnabled()) { + trace.debug("swap " + a + " and " + b + " via " + free); + } + Page pageA = null; + if (isUsed(a)) { + pageA = getPage(a); + if (pageA != null) { + pageA.moveTo(pageStoreSession, free); + } + free(a); + 
} + if (free != b) { + if (isUsed(b)) { + Page pageB = getPage(b); + if (pageB != null) { + pageB.moveTo(pageStoreSession, a); + } + free(b); + } + if (pageA != null) { + f = getPage(free); + if (f != null) { + f.moveTo(pageStoreSession, b); + } + free(free); + } + } + } + + private boolean compact(int full, int free) { + if (full < MIN_PAGE_COUNT || free == -1 || free >= full || !isUsed(full)) { + return false; + } + Page f = (Page) cache.get(free); + if (f != null) { + DbException.throwInternalError("not free: " + f); + } + Page p = getPage(full); + if (p == null) { + freePage(full); + } else if (p instanceof PageStreamData || p instanceof PageStreamTrunk) { + if (p.getPos() < log.getMinPageId()) { + // an old transaction log page + // probably a leftover from a crash + freePage(full); + } + } else { + if (trace.isDebugEnabled()) { + trace.debug("move " + p.getPos() + " to " + free); + } + try { + p.moveTo(pageStoreSession, free); + } finally { + changeCount++; + if (SysProperties.CHECK && changeCount < 0) { + throw DbException.throwInternalError( + "changeCount has wrapped"); + } + } + } + return true; + } + + /** + * Read a page from the store. 
+ * + * @param pageId the page id + * @return the page + */ + public synchronized Page getPage(int pageId) { + Page p = (Page) cache.get(pageId); + if (p != null) { + return p; + } + + Data data = createData(); + readPage(pageId, data); + int type = data.readByte(); + if (type == Page.TYPE_EMPTY) { + return null; + } + data.readShortInt(); + data.readInt(); + if (!checksumTest(data.getBytes(), pageId, pageSize)) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "wrong checksum"); + } + switch (type & ~Page.FLAG_LAST) { + case Page.TYPE_FREE_LIST: + p = PageFreeList.read(this, data, pageId); + break; + case Page.TYPE_DATA_LEAF: { + int indexId = data.readVarInt(); + PageIndex idx = metaObjects.get(indexId); + if (idx == null) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "index not found " + indexId); + } + if (!(idx instanceof PageDataIndex)) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "not a data index " + indexId + " " + idx); + } + PageDataIndex index = (PageDataIndex) idx; + if (statistics != null) { + statisticsIncrement(index.getTable().getName() + "." + + index.getName() + " read"); + } + p = PageDataLeaf.read(index, data, pageId); + break; + } + case Page.TYPE_DATA_NODE: { + int indexId = data.readVarInt(); + PageIndex idx = metaObjects.get(indexId); + if (idx == null) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "index not found " + indexId); + } + if (!(idx instanceof PageDataIndex)) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "not a data index " + indexId + " " + idx); + } + PageDataIndex index = (PageDataIndex) idx; + if (statistics != null) { + statisticsIncrement(index.getTable().getName() + "." 
+ + index.getName() + " read"); + } + p = PageDataNode.read(index, data, pageId); + break; + } + case Page.TYPE_DATA_OVERFLOW: { + p = PageDataOverflow.read(this, data, pageId); + if (statistics != null) { + statisticsIncrement("overflow read"); + } + break; + } + case Page.TYPE_BTREE_LEAF: { + int indexId = data.readVarInt(); + PageIndex idx = metaObjects.get(indexId); + if (idx == null) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "index not found " + indexId); + } + if (!(idx instanceof PageBtreeIndex)) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "not a btree index " + indexId + " " + idx); + } + PageBtreeIndex index = (PageBtreeIndex) idx; + if (statistics != null) { + statisticsIncrement(index.getTable().getName() + "." + + index.getName() + " read"); + } + p = PageBtreeLeaf.read(index, data, pageId); + break; + } + case Page.TYPE_BTREE_NODE: { + int indexId = data.readVarInt(); + PageIndex idx = metaObjects.get(indexId); + if (idx == null) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "index not found " + indexId); + } + if (!(idx instanceof PageBtreeIndex)) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "not a btree index " + indexId + " " + idx); + } + PageBtreeIndex index = (PageBtreeIndex) idx; + if (statistics != null) { + statisticsIncrement(index.getTable().getName() + + "." 
+ index.getName() + " read"); + } + p = PageBtreeNode.read(index, data, pageId); + break; + } + case Page.TYPE_STREAM_TRUNK: + p = PageStreamTrunk.read(this, data, pageId); + break; + case Page.TYPE_STREAM_DATA: + p = PageStreamData.read(this, data, pageId); + break; + default: + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "page=" + pageId + " type=" + type); + } + cache.put(p); + return p; + } + + private int getFirstUncommittedSection() { + trace.debug("getFirstUncommittedSection"); + Session[] sessions = database.getSessions(true); + int firstUncommittedSection = log.getLogSectionId(); + for (Session session : sessions) { + int firstUncommitted = session.getFirstUncommittedLog(); + if (firstUncommitted != Session.LOG_WRITTEN) { + if (firstUncommitted < firstUncommittedSection) { + firstUncommittedSection = firstUncommitted; + } + } + } + return firstUncommittedSection; + } + + private void readStaticHeader() { + file.seek(FileStore.HEADER_LENGTH); + Data page = Data.create(database, + new byte[PAGE_SIZE_MIN - FileStore.HEADER_LENGTH]); + file.readFully(page.getBytes(), 0, + PAGE_SIZE_MIN - FileStore.HEADER_LENGTH); + readCount++; + setPageSize(page.readInt()); + int writeVersion = page.readByte(); + int readVersion = page.readByte(); + if (readVersion > READ_VERSION) { + throw DbException.get( + ErrorCode.FILE_VERSION_ERROR_1, fileName); + } + if (writeVersion > WRITE_VERSION) { + close(); + database.setReadOnly(true); + accessMode = "r"; + file = database.openFile(fileName, accessMode, true); + } + } + + private void readVariableHeader() { + Data page = createData(); + for (int i = 1;; i++) { + if (i == 3) { + throw DbException.get( + ErrorCode.FILE_CORRUPTED_1, fileName); + } + page.reset(); + readPage(i, page); + CRC32 crc = new CRC32(); + crc.update(page.getBytes(), 4, pageSize - 4); + int expected = (int) crc.getValue(); + int got = page.readInt(); + if (expected == got) { + writeCountBase = page.readLong(); + logKey = page.readInt(); + 
logFirstTrunkPage = page.readInt(); + logFirstDataPage = page.readInt(); + break; + } + } + } + + /** + * Set the page size. The size must be a power of two. This method must be + * called before opening. + * + * @param size the page size + */ + public void setPageSize(int size) { + if (size < PAGE_SIZE_MIN || size > PAGE_SIZE_MAX) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + fileName + " pageSize: " + size); + } + boolean good = false; + int shift = 0; + for (int i = 1; i <= size;) { + if (size == i) { + good = true; + break; + } + shift++; + i += i; + } + if (!good) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, fileName); + } + pageSize = size; + emptyPage = createData(); + pageSizeShift = shift; + } + + private void writeStaticHeader() { + Data page = Data.create(database, new byte[pageSize - FileStore.HEADER_LENGTH]); + page.writeInt(pageSize); + page.writeByte((byte) WRITE_VERSION); + page.writeByte((byte) READ_VERSION); + file.seek(FileStore.HEADER_LENGTH); + file.write(page.getBytes(), 0, pageSize - FileStore.HEADER_LENGTH); + writeCount++; + } + + /** + * Set the trunk page and data page id of the log. 
+ * + * @param logKey the log key of the trunk page + * @param trunkPageId the trunk page id + * @param dataPageId the data page id + */ + void setLogFirstPage(int logKey, int trunkPageId, int dataPageId) { + if (trace.isDebugEnabled()) { + trace.debug("setLogFirstPage key: " + logKey + + " trunk: "+ trunkPageId +" data: " + dataPageId); + } + this.logKey = logKey; + this.logFirstTrunkPage = trunkPageId; + this.logFirstDataPage = dataPageId; + writeVariableHeader(); + } + + private void writeVariableHeader() { + trace.debug("writeVariableHeader"); + if (logMode == LOG_MODE_SYNC) { + file.sync(); + } + Data page = createData(); + page.writeInt(0); + page.writeLong(getWriteCountTotal()); + page.writeInt(logKey); + page.writeInt(logFirstTrunkPage); + page.writeInt(logFirstDataPage); + CRC32 crc = new CRC32(); + crc.update(page.getBytes(), 4, pageSize - 4); + page.setInt(0, (int) crc.getValue()); + file.seek(pageSize); + file.write(page.getBytes(), 0, pageSize); + file.seek(pageSize + pageSize); + file.write(page.getBytes(), 0, pageSize); + // don't increment the write counter, because it was just written + } + + /** + * Close the file without further writing. + */ + public synchronized void close() { + trace.debug("close"); + if (log != null) { + log.close(); + log = null; + } + if (file != null) { + try { + file.releaseLock(); + file.close(); + } finally { + file = null; + } + } + } + + @Override + public synchronized void flushLog() { + if (file != null) { + log.flush(); + } + } + + /** + * Flush the transaction log and sync the file. + */ + public synchronized void sync() { + if (file != null) { + log.flush(); + file.sync(); + } + } + + @Override + public Trace getTrace() { + return trace; + } + + @Override + public synchronized void writeBack(CacheObject obj) { + Page record = (Page) obj; + if (trace.isDebugEnabled()) { + trace.debug("writeBack " + record); + } + record.write(); + record.setChanged(false); + } + + /** + * Write an undo log entry if required. 
+ * + * @param page the page + * @param old the old data (if known) or null + */ + public synchronized void logUndo(Page page, Data old) { + if (logMode == LOG_MODE_OFF) { + return; + } + checkOpen(); + database.checkWritingAllowed(); + if (!recoveryRunning) { + int pos = page.getPos(); + if (!log.getUndo(pos)) { + if (old == null) { + old = readPage(pos); + } + openForWriting(); + log.addUndo(pos, old); + } + } + } + + /** + * Update a page. + * + * @param page the page + */ + public synchronized void update(Page page) { + if (trace.isDebugEnabled()) { + if (!page.isChanged()) { + trace.debug("updateRecord " + page.toString()); + } + } + checkOpen(); + database.checkWritingAllowed(); + page.setChanged(true); + int pos = page.getPos(); + if (SysProperties.CHECK && !recoveryRunning) { + // ensure the undo entry is already written + if (logMode != LOG_MODE_OFF) { + log.addUndo(pos, null); + } + } + allocatePage(pos); + cache.update(pos, page); + } + + private int getFreeListId(int pageId) { + return (pageId - PAGE_ID_FREE_LIST_ROOT) / freeListPagesPerList; + } + + private PageFreeList getFreeListForPage(int pageId) { + return getFreeList(getFreeListId(pageId)); + } + + private PageFreeList getFreeList(int i) { + PageFreeList list = null; + if (i < freeLists.size()) { + list = freeLists.get(i); + if (list != null) { + return list; + } + } + int p = PAGE_ID_FREE_LIST_ROOT + i * freeListPagesPerList; + while (p >= pageCount) { + increaseFileSize(); + } + if (p < pageCount) { + list = (PageFreeList) getPage(p); + } + if (list == null) { + list = PageFreeList.create(this, p); + cache.put(list); + } + while (freeLists.size() <= i) { + freeLists.add(null); + } + freeLists.set(i, list); + return list; + } + + private void freePage(int pageId) { + int index = getFreeListId(pageId); + PageFreeList list = getFreeList(index); + firstFreeListIndex = Math.min(index, firstFreeListIndex); + list.free(pageId); + } + + /** + * Set the bit of an already allocated page. 
+ * + * @param pageId the page to allocate + */ + void allocatePage(int pageId) { + PageFreeList list = getFreeListForPage(pageId); + list.allocate(pageId); + } + + private boolean isUsed(int pageId) { + return getFreeListForPage(pageId).isUsed(pageId); + } + + /** + * Allocate a number of pages. + * + * @param list the list where to add the allocated pages + * @param pagesToAllocate the number of pages to allocate + * @param exclude the exclude list + * @param after all allocated pages are higher than this page + */ + void allocatePages(IntArray list, int pagesToAllocate, BitField exclude, + int after) { + list.ensureCapacity(list.size() + pagesToAllocate); + for (int i = 0; i < pagesToAllocate; i++) { + int page = allocatePage(exclude, after); + after = page; + list.add(page); + } + } + + /** + * Allocate a page. + * + * @return the page id + */ + public synchronized int allocatePage() { + openForWriting(); + int pos = allocatePage(null, 0); + if (!recoveryRunning) { + if (logMode != LOG_MODE_OFF) { + log.addUndo(pos, emptyPage); + } + } + return pos; + } + + private int allocatePage(BitField exclude, int first) { + int page; + for (int i = firstFreeListIndex;; i++) { + PageFreeList list = getFreeList(i); + page = list.allocate(exclude, first); + if (page >= 0) { + firstFreeListIndex = i; + break; + } + } + while (page >= pageCount) { + increaseFileSize(); + } + if (trace.isDebugEnabled()) { + // trace.debug("allocatePage " + pos); + } + return page; + } + + private void increaseFileSize() { + int increment = INCREMENT_KB * 1024 / pageSize; + int percent = pageCount * INCREMENT_PERCENT_MIN / 100; + if (increment < percent) { + increment = (1 + (percent / increment)) * increment; + } + int max = database.getSettings().pageStoreMaxGrowth; + if (max < increment) { + increment = max; + } + increaseFileSize(increment); + } + + private void increaseFileSize(int increment) { + for (int i = pageCount; i < pageCount + increment; i++) { + freed.set(i); + } + pageCount += 
increment; + long newLength = (long) pageCount << pageSizeShift; + file.setLength(newLength); + writeCount++; + fileLength = newLength; + } + + /** + * Add a page to the free list. The undo log entry must have been written. + * + * @param pageId the page id + */ + public synchronized void free(int pageId) { + free(pageId, true); + } + + /** + * Add a page to the free list. + * + * @param pageId the page id + * @param undo if the undo record must have been written + */ + void free(int pageId, boolean undo) { + if (trace.isDebugEnabled()) { + // trace.debug("free " + pageId + " " + undo); + } + cache.remove(pageId); + if (SysProperties.CHECK && !recoveryRunning && undo) { + // ensure the undo entry is already written + if (logMode != LOG_MODE_OFF) { + log.addUndo(pageId, null); + } + } + freePage(pageId); + if (recoveryRunning) { + writePage(pageId, createData()); + if (reservedPages != null && reservedPages.containsKey(pageId)) { + // re-allocate the page if it is used later on again + int latestPos = reservedPages.get(pageId); + if (latestPos > log.getLogPos()) { + allocatePage(pageId); + } + } + } + } + + /** + * Add a page to the free list. The page is not used, therefore doesn't need + * to be overwritten. + * + * @param pageId the page id + */ + void freeUnused(int pageId) { + if (trace.isDebugEnabled()) { + trace.debug("freeUnused " + pageId); + } + cache.remove(pageId); + freePage(pageId); + freed.set(pageId); + } + + /** + * Create a data object. + * + * @return the data page. + */ + public Data createData() { + return Data.create(database, new byte[pageSize]); + } + + /** + * Read a page. + * + * @param pos the page id + * @return the page + */ + public synchronized Data readPage(int pos) { + Data page = createData(); + readPage(pos, page); + return page; + } + + /** + * Read a page. 
+ * + * @param pos the page id + * @param page the page + */ + void readPage(int pos, Data page) { + if (recordPageReads) { + if (pos >= MIN_PAGE_COUNT && + recordedPagesIndex.get(pos) == IntIntHashMap.NOT_FOUND) { + recordedPagesIndex.put(pos, recordedPagesList.size()); + recordedPagesList.add(pos); + } + } + if (pos < 0 || pos >= pageCount) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, pos + + " of " + pageCount); + } + file.seek((long) pos << pageSizeShift); + file.readFully(page.getBytes(), 0, pageSize); + readCount++; + } + + /** + * Get the page size. + * + * @return the page size + */ + public int getPageSize() { + return pageSize; + } + + /** + * Get the number of pages (including free pages). + * + * @return the page count + */ + public int getPageCount() { + return pageCount; + } + + /** + * Write a page. + * + * @param pageId the page id + * @param data the data + */ + public synchronized void writePage(int pageId, Data data) { + if (pageId <= 0) { + DbException.throwInternalError("write to page " + pageId); + } + byte[] bytes = data.getBytes(); + if (SysProperties.CHECK) { + boolean shouldBeFreeList = (pageId - PAGE_ID_FREE_LIST_ROOT) % + freeListPagesPerList == 0; + boolean isFreeList = bytes[0] == Page.TYPE_FREE_LIST; + if (bytes[0] != 0 && shouldBeFreeList != isFreeList) { + throw DbException.throwInternalError(); + } + } + checksumSet(bytes, pageId); + file.seek((long) pageId << pageSizeShift); + file.write(bytes, 0, pageSize); + writeCount++; + } + + /** + * Remove a page from the cache. + * + * @param pageId the page id + */ + public synchronized void removeFromCache(int pageId) { + cache.remove(pageId); + } + + Database getDatabase() { + return database; + } + + /** + * Run recovery. 
+ * + * @return whether the transaction log was empty + */ + private boolean recover() { + trace.debug("log recover"); + recoveryRunning = true; + boolean isEmpty = true; + isEmpty &= log.recover(PageLog.RECOVERY_STAGE_UNDO); + if (reservedPages != null) { + for (int r : reservedPages.keySet()) { + if (trace.isDebugEnabled()) { + trace.debug("reserve " + r); + } + allocatePage(r); + } + } + isEmpty &= log.recover(PageLog.RECOVERY_STAGE_ALLOCATE); + openMetaIndex(); + readMetaData(); + isEmpty &= log.recover(PageLog.RECOVERY_STAGE_REDO); + boolean setReadOnly = false; + if (!database.isReadOnly()) { + if (log.getInDoubtTransactions().isEmpty()) { + log.recoverEnd(); + int firstUncommittedSection = getFirstUncommittedSection(); + log.removeUntil(firstUncommittedSection); + } else { + setReadOnly = true; + } + } + PageDataIndex systemTable = (PageDataIndex) metaObjects.get(0); + isNew = systemTable == null; + for (PageIndex index : metaObjects.values()) { + if (index.getTable().isTemporary()) { + // temporary indexes are removed after opening + if (tempObjects == null) { + tempObjects = new HashMap<>(); + } + tempObjects.put(index.getId(), index); + } else { + index.close(pageStoreSession); + } + } + + allocatePage(PAGE_ID_META_ROOT); + writeIndexRowCounts(); + recoveryRunning = false; + reservedPages = null; + + writeBack(); + // clear the cache because it contains pages with closed indexes + cache.clear(); + freeLists.clear(); + + metaObjects.clear(); + metaObjects.put(-1, metaIndex); + + if (setReadOnly) { + database.setReadOnly(true); + } + trace.debug("log recover done"); + return isEmpty; + } + + /** + * A record is added to a table, or removed from a table. 
+ * + * @param session the session + * @param tableId the table id + * @param row the row to add + * @param add true if the row is added, false if it is removed + */ + public synchronized void logAddOrRemoveRow(Session session, int tableId, + Row row, boolean add) { + if (logMode != LOG_MODE_OFF) { + if (!recoveryRunning) { + log.logAddOrRemoveRow(session, tableId, row, add); + } + } + } + + /** + * Mark a committed transaction. + * + * @param session the session + */ + public synchronized void commit(Session session) { + checkOpen(); + openForWriting(); + log.commit(session.getId()); + long size = log.getSize(); + if (size - logSizeBase > maxLogSize / 2) { + int firstSection = log.getLogFirstSectionId(); + checkpoint(); + int newSection = log.getLogSectionId(); + if (newSection - firstSection <= 2) { + // one section is always kept, and checkpoint + // advances two sections each time it is called + return; + } + long newSize = log.getSize(); + if (newSize < size || size < maxLogSize) { + ignoreBigLog = false; + return; + } + if (!ignoreBigLog) { + ignoreBigLog = true; + trace.error(null, + "Transaction log could not be truncated; size: " + + (newSize / 1024 / 1024) + " MB"); + } + logSizeBase = log.getSize(); + } + } + + /** + * Prepare a transaction. + * + * @param session the session + * @param transaction the name of the transaction + */ + public synchronized void prepareCommit(Session session, String transaction) { + log.prepareCommit(session, transaction); + } + + /** + * Check whether this is a new database. + * + * @return true if it is + */ + public boolean isNew() { + return isNew; + } + + /** + * Reserve the page if this is a index root page entry. 
+ * + * @param logPos the redo log position + * @param tableId the table id + * @param row the row + */ + void allocateIfIndexRoot(int logPos, int tableId, Row row) { + if (tableId == META_TABLE_ID) { + int rootPageId = row.getValue(3).getInt(); + if (reservedPages == null) { + reservedPages = new HashMap<>(); + } + reservedPages.put(rootPageId, logPos); + } + } + + /** + * Redo a delete in a table. + * + * @param tableId the object id of the table + * @param key the key of the row to delete + */ + void redoDelete(int tableId, long key) { + Index index = metaObjects.get(tableId); + PageDataIndex scan = (PageDataIndex) index; + Row row = scan.getRowWithKey(key); + if (row == null || row.getKey() != key) { + trace.error(null, "Entry not found: " + key + + " found instead: " + row + " - ignoring"); + return; + } + redo(tableId, row, false); + } + + /** + * Redo a change in a table. + * + * @param tableId the object id of the table + * @param row the row + * @param add true if the record is added, false if deleted + */ + void redo(int tableId, Row row, boolean add) { + if (tableId == META_TABLE_ID) { + if (add) { + addMeta(row, pageStoreSession, true); + } else { + removeMeta(row); + } + } + Index index = metaObjects.get(tableId); + if (index == null) { + throw DbException.throwInternalError( + "Table not found: " + tableId + " " + row + " " + add); + } + Table table = index.getTable(); + if (add) { + table.addRow(pageStoreSession, row); + } else { + table.removeRow(pageStoreSession, row); + } + } + + /** + * Redo a truncate. 
+ * + * @param tableId the object id of the table + */ + void redoTruncate(int tableId) { + Index index = metaObjects.get(tableId); + Table table = index.getTable(); + table.truncate(pageStoreSession); + } + + private void openMetaIndex() { + CreateTableData data = new CreateTableData(); + ArrayList cols = data.columns; + cols.add(new Column("ID", Value.INT)); + cols.add(new Column("TYPE", Value.INT)); + cols.add(new Column("PARENT", Value.INT)); + cols.add(new Column("HEAD", Value.INT)); + cols.add(new Column("OPTIONS", Value.STRING)); + cols.add(new Column("COLUMNS", Value.STRING)); + metaSchema = new Schema(database, 0, "", null, true); + data.schema = metaSchema; + data.tableName = "PAGE_INDEX"; + data.id = META_TABLE_ID; + data.temporary = false; + data.persistData = true; + data.persistIndexes = true; + data.create = false; + data.session = pageStoreSession; + metaTable = new RegularTable(data); + metaIndex = (PageDataIndex) metaTable.getScanIndex( + pageStoreSession); + metaObjects.clear(); + metaObjects.put(-1, metaIndex); + } + + private void readMetaData() { + Cursor cursor = metaIndex.find(pageStoreSession, null, null); + // first, create all tables + while (cursor.next()) { + Row row = cursor.get(); + int type = row.getValue(1).getInt(); + if (type == META_TYPE_DATA_INDEX) { + addMeta(row, pageStoreSession, false); + } + } + // now create all secondary indexes + // otherwise the table might not be created yet + cursor = metaIndex.find(pageStoreSession, null, null); + while (cursor.next()) { + Row row = cursor.get(); + int type = row.getValue(1).getInt(); + if (type != META_TYPE_DATA_INDEX) { + addMeta(row, pageStoreSession, false); + } + } + } + + private void removeMeta(Row row) { + int id = row.getValue(0).getInt(); + PageIndex index = metaObjects.get(id); + index.getTable().removeIndex(index); + if (index instanceof PageBtreeIndex || index instanceof PageDelegateIndex) { + if (index.isTemporary()) { + 
pageStoreSession.removeLocalTempTableIndex(index); + } else { + index.getSchema().remove(index); + } + } + index.remove(pageStoreSession); + metaObjects.remove(id); + } + + private void addMeta(Row row, Session session, boolean redo) { + int id = row.getValue(0).getInt(); + int type = row.getValue(1).getInt(); + int parent = row.getValue(2).getInt(); + int rootPageId = row.getValue(3).getInt(); + String[] options = StringUtils.arraySplit( + row.getValue(4).getString(), ',', false); + String columnList = row.getValue(5).getString(); + String[] columns = StringUtils.arraySplit(columnList, ',', false); + Index meta; + if (trace.isDebugEnabled()) { + trace.debug("addMeta id="+ id +" type=" + type + + " root=" + rootPageId + " parent=" + parent + " columns=" + columnList); + } + if (redo && rootPageId != 0) { + // ensure the page is empty, but not used by regular data + writePage(rootPageId, createData()); + allocatePage(rootPageId); + } + metaRootPageId.put(id, rootPageId); + if (type == META_TYPE_DATA_INDEX) { + CreateTableData data = new CreateTableData(); + if (SysProperties.CHECK) { + if (columns == null) { + throw DbException.throwInternalError(row.toString()); + } + } + for (int i = 0, len = columns.length; i < len; i++) { + Column col = new Column("C" + i, Value.INT); + data.columns.add(col); + } + data.schema = metaSchema; + data.tableName = "T" + id; + data.id = id; + data.temporary = options[2].equals("temp"); + data.persistData = true; + data.persistIndexes = true; + data.create = false; + data.session = session; + RegularTable table = new RegularTable(data); + boolean binaryUnsigned = SysProperties.SORT_BINARY_UNSIGNED; + if (options.length > 3) { + binaryUnsigned = Boolean.parseBoolean(options[3]); + } + CompareMode mode = CompareMode.getInstance( + options[0], Integer.parseInt(options[1]), binaryUnsigned); + table.setCompareMode(mode); + meta = table.getScanIndex(session); + } else { + Index p = metaObjects.get(parent); + if (p == null) { + throw 
DbException.get(ErrorCode.FILE_CORRUPTED_1, + "Table not found:" + parent + " for " + row + " meta:" + metaObjects); + } + RegularTable table = (RegularTable) p.getTable(); + Column[] tableCols = table.getColumns(); + int len = columns.length; + IndexColumn[] cols = new IndexColumn[len]; + for (int i = 0; i < len; i++) { + String c = columns[i]; + IndexColumn ic = new IndexColumn(); + int idx = c.indexOf('/'); + if (idx >= 0) { + String s = c.substring(idx + 1); + ic.sortType = Integer.parseInt(s); + c = c.substring(0, idx); + } + ic.column = tableCols[Integer.parseInt(c)]; + cols[i] = ic; + } + IndexType indexType; + if (options[3].equals("d")) { + indexType = IndexType.createPrimaryKey(true, false); + Column[] tableColumns = table.getColumns(); + for (IndexColumn indexColumn : cols) { + tableColumns[indexColumn.column.getColumnId()].setNullable(false); + } + } else { + indexType = IndexType.createNonUnique(true); + } + meta = table.addIndex(session, "I" + id, id, cols, indexType, false, null); + } + PageIndex index; + if (meta instanceof MultiVersionIndex) { + index = (PageIndex) ((MultiVersionIndex) meta).getBaseIndex(); + } else { + index = (PageIndex) meta; + } + metaObjects.put(id, index); + } + + /** + * Add an index to the in-memory index map. + * + * @param index the index + */ + public synchronized void addIndex(PageIndex index) { + metaObjects.put(index.getId(), index); + } + + /** + * Add the meta data of an index. + * + * @param index the index to add + * @param session the session + */ + public void addMeta(PageIndex index, Session session) { + Table table = index.getTable(); + if (SysProperties.CHECK) { + if (!table.isTemporary()) { + // to prevent ABBA locking problems, we need to always take + // the Database lock before we take the PageStore lock + synchronized (database) { + synchronized (this) { + database.verifyMetaLocked(session); + } + } + } + } + synchronized (this) { + int type = index instanceof PageDataIndex ? 
+ META_TYPE_DATA_INDEX : META_TYPE_BTREE_INDEX; + IndexColumn[] columns = index.getIndexColumns(); + StatementBuilder buff = new StatementBuilder(); + for (IndexColumn col : columns) { + buff.appendExceptFirst(","); + int id = col.column.getColumnId(); + buff.append(id); + int sortType = col.sortType; + if (sortType != 0) { + buff.append('/'); + buff.append(sortType); + } + } + String columnList = buff.toString(); + CompareMode mode = table.getCompareMode(); + String options = mode.getName()+ "," + mode.getStrength() + ","; + if (table.isTemporary()) { + options += "temp"; + } + options += ","; + if (index instanceof PageDelegateIndex) { + options += "d"; + } + options += "," + mode.isBinaryUnsigned(); + Row row = metaTable.getTemplateRow(); + row.setValue(0, ValueInt.get(index.getId())); + row.setValue(1, ValueInt.get(type)); + row.setValue(2, ValueInt.get(table.getId())); + row.setValue(3, ValueInt.get(index.getRootPageId())); + row.setValue(4, ValueString.get(options)); + row.setValue(5, ValueString.get(columnList)); + row.setKey(index.getId() + 1); + metaIndex.add(session, row); + } + } + + /** + * Remove the meta data of an index. 
+ * + * @param index the index to remove + * @param session the session + */ + public void removeMeta(Index index, Session session) { + if (SysProperties.CHECK) { + if (!index.getTable().isTemporary()) { + // to prevent ABBA locking problems, we need to always take + // the Database lock before we take the PageStore lock + synchronized (database) { + synchronized (this) { + database.verifyMetaLocked(session); + } + } + } + } + synchronized (this) { + if (!recoveryRunning) { + removeMetaIndex(index, session); + metaObjects.remove(index.getId()); + } + } + } + + private void removeMetaIndex(Index index, Session session) { + int key = index.getId() + 1; + Row row = metaIndex.getRow(session, key); + if (row.getKey() != key) { + throw DbException.get(ErrorCode.FILE_CORRUPTED_1, + "key: " + key + " index: " + index + + " table: " + index.getTable() + " row: " + row); + } + metaIndex.remove(session, row); + } + + /** + * Set the maximum transaction log size in megabytes. + * + * @param maxSize the new maximum log size + */ + public void setMaxLogSize(long maxSize) { + this.maxLogSize = maxSize; + } + + /** + * Commit or rollback a prepared transaction after opening a database with + * in-doubt transactions. + * + * @param sessionId the session id + * @param pageId the page where the transaction was prepared + * @param commit if the transaction should be committed + */ + public synchronized void setInDoubtTransactionState(int sessionId, + int pageId, boolean commit) { + boolean old = database.isReadOnly(); + try { + database.setReadOnly(false); + log.setInDoubtTransactionState(sessionId, pageId, commit); + } finally { + database.setReadOnly(old); + } + } + + /** + * Get the list of in-doubt transaction. + * + * @return the list + */ + public ArrayList getInDoubtTransactions() { + return log.getInDoubtTransactions(); + } + + /** + * Check whether the recovery process is currently running. 
+ * + * @return true if it is + */ + public boolean isRecoveryRunning() { + return recoveryRunning; + } + + private void checkOpen() { + if (file == null) { + throw DbException.get(ErrorCode.DATABASE_IS_CLOSED); + } + } + + /** + * Get the file write count since the database was created. + * + * @return the write count + */ + public long getWriteCountTotal() { + return writeCount + writeCountBase; + } + + /** + * Get the file write count since the database was opened. + * + * @return the write count + */ + public long getWriteCount() { + return writeCount; + } + + /** + * Get the file read count since the database was opened. + * + * @return the read count + */ + public long getReadCount() { + return readCount; + } + + /** + * A table is truncated. + * + * @param session the session + * @param tableId the table id + */ + public synchronized void logTruncate(Session session, int tableId) { + if (!recoveryRunning) { + openForWriting(); + log.logTruncate(session, tableId); + } + } + + /** + * Get the root page of an index. 
+ * + * @param indexId the index id + * @return the root page + */ + public int getRootPageId(int indexId) { + return metaRootPageId.get(indexId); + } + + public Cache getCache() { + return cache; + } + + private void checksumSet(byte[] d, int pageId) { + int ps = pageSize; + int type = d[0]; + if (type == Page.TYPE_EMPTY) { + return; + } + int s1 = 255 + (type & 255), s2 = 255 + s1; + s2 += s1 += d[6] & 255; + s2 += s1 += d[(ps >> 1) - 1] & 255; + s2 += s1 += d[ps >> 1] & 255; + s2 += s1 += d[ps - 2] & 255; + s2 += s1 += d[ps - 1] & 255; + d[1] = (byte) (((s1 & 255) + (s1 >> 8)) ^ pageId); + d[2] = (byte) (((s2 & 255) + (s2 >> 8)) ^ (pageId >> 8)); + } + + /** + * Check if the stored checksum is correct + * @param d the data + * @param pageId the page id + * @param pageSize the page size + * @return true if it is correct + */ + public static boolean checksumTest(byte[] d, int pageId, int pageSize) { + int s1 = 255 + (d[0] & 255), s2 = 255 + s1; + s2 += s1 += d[6] & 255; + s2 += s1 += d[(pageSize >> 1) - 1] & 255; + s2 += s1 += d[pageSize >> 1] & 255; + s2 += s1 += d[pageSize - 2] & 255; + s2 += s1 += d[pageSize - 1] & 255; + return d[1] == (byte) (((s1 & 255) + (s1 >> 8)) ^ pageId) && d[2] == (byte) (((s2 & 255) + (s2 >> 8)) ^ (pageId + >> 8)); + } + + /** + * Increment the change count. To be done after the operation has finished. + */ + public void incrementChangeCount() { + changeCount++; + if (SysProperties.CHECK && changeCount < 0) { + throw DbException.throwInternalError("changeCount has wrapped"); + } + } + + /** + * Get the current change count. 
The first value is 1 + * + * @return the change count + */ + public long getChangeCount() { + return changeCount; + } + + public void setLogMode(int logMode) { + this.logMode = logMode; + } + + public int getLogMode() { + return logMode; + } + + public void setLockFile(boolean lockFile) { + this.lockFile = lockFile; + } + + public BitField getObjectIds() { + BitField f = new BitField(); + Cursor cursor = metaIndex.find(pageStoreSession, null, null); + while (cursor.next()) { + Row row = cursor.get(); + int id = row.getValue(0).getInt(); + if (id > 0) { + f.set(id); + } + } + return f; + } + + public Session getPageStoreSession() { + return pageStoreSession; + } + + public synchronized void setBackup(boolean start) { + backupLevel += start ? 1 : -1; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/PageStoreInDoubtTransaction.java b/modules/h2/src/main/java/org/h2/store/PageStoreInDoubtTransaction.java new file mode 100644 index 0000000000000..4097a5364329b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageStoreInDoubtTransaction.java @@ -0,0 +1,72 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import org.h2.message.DbException; + +/** + * Represents an in-doubt transaction (a transaction in the prepare phase). + */ +public class PageStoreInDoubtTransaction implements InDoubtTransaction { + + private final PageStore store; + private final int sessionId; + private final int pos; + private final String transactionName; + private int state; + + /** + * Create a new in-doubt transaction info object. 
+ * + * @param store the page store + * @param sessionId the session id + * @param pos the position + * @param transaction the transaction name + */ + public PageStoreInDoubtTransaction(PageStore store, int sessionId, int pos, + String transaction) { + this.store = store; + this.sessionId = sessionId; + this.pos = pos; + this.transactionName = transaction; + this.state = IN_DOUBT; + } + + @Override + public void setState(int state) { + switch (state) { + case COMMIT: + store.setInDoubtTransactionState(sessionId, pos, true); + break; + case ROLLBACK: + store.setInDoubtTransactionState(sessionId, pos, false); + break; + default: + DbException.throwInternalError("state="+state); + } + this.state = state; + } + + @Override + public String getState() { + switch (state) { + case IN_DOUBT: + return "IN_DOUBT"; + case COMMIT: + return "COMMIT"; + case ROLLBACK: + return "ROLLBACK"; + default: + throw DbException.throwInternalError("state="+state); + } + } + + @Override + public String getTransactionName() { + return transactionName; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/PageStreamData.java b/modules/h2/src/main/java/org/h2/store/PageStreamData.java new file mode 100644 index 0000000000000..598be321c32fb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageStreamData.java @@ -0,0 +1,179 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import org.h2.engine.Session; + +/** + * A data page of a stream. The format is: + *
+ * <ul>
+ * <li>page type: byte (0)</li>
+ * <li>checksum: short (1-2)</li>
+ * <li>the trunk page id: int (3-6)</li>
+ * <li>log key: int (7-10)</li>
+ * <li>data (11-)</li>
+ * </ul>
    + */ +public class PageStreamData extends Page { + + private static final int DATA_START = 11; + + private final PageStore store; + private int trunk; + private int logKey; + private Data data; + private int remaining; + + private PageStreamData(PageStore store, int pageId, int trunk, int logKey) { + setPos(pageId); + this.store = store; + this.trunk = trunk; + this.logKey = logKey; + } + + /** + * Read a stream data page. + * + * @param store the page store + * @param data the data + * @param pageId the page id + * @return the page + */ + static PageStreamData read(PageStore store, Data data, int pageId) { + PageStreamData p = new PageStreamData(store, pageId, 0, 0); + p.data = data; + p.read(); + return p; + } + + /** + * Create a new stream trunk page. + * + * @param store the page store + * @param pageId the page id + * @param trunk the trunk page + * @param logKey the log key + * @return the page + */ + static PageStreamData create(PageStore store, int pageId, int trunk, + int logKey) { + return new PageStreamData(store, pageId, trunk, logKey); + } + + /** + * Read the page from the disk. + */ + private void read() { + data.reset(); + data.readByte(); + data.readShortInt(); + trunk = data.readInt(); + logKey = data.readInt(); + } + + /** + * Write the header data. + */ + void initWrite() { + data = store.createData(); + data.writeByte((byte) Page.TYPE_STREAM_DATA); + data.writeShortInt(0); + data.writeInt(trunk); + data.writeInt(logKey); + remaining = store.getPageSize() - data.length(); + } + + /** + * Write the data to the buffer. 
+ * + * @param buff the source data + * @param offset the offset in the source buffer + * @param len the number of bytes to write + * @return the number of bytes written + */ + int write(byte[] buff, int offset, int len) { + int max = Math.min(remaining, len); + data.write(buff, offset, max); + remaining -= max; + return max; + } + + @Override + public void write() { + store.writePage(getPos(), data); + } + + /** + * Get the number of bytes that fit in a page. + * + * @param pageSize the page size + * @return the number of bytes + */ + static int getCapacity(int pageSize) { + return pageSize - DATA_START; + } + + /** + * Read the next bytes from the buffer. + * + * @param startPos the position in the data page + * @param buff the target buffer + * @param off the offset in the target buffer + * @param len the number of bytes to read + */ + void read(int startPos, byte[] buff, int off, int len) { + System.arraycopy(data.getBytes(), startPos, buff, off, len); + } + + /** + * Get the number of remaining data bytes of this page. + * + * @return the remaining byte count + */ + int getRemaining() { + return remaining; + } + + /** + * Get the estimated memory size. 
+ * + * @return number of double words (4 bytes) + */ + @Override + public int getMemory() { + return store.getPageSize() >> 2; + } + + @Override + public void moveTo(Session session, int newPos) { + // not required + } + + int getLogKey() { + return logKey; + } + + @Override + public String toString() { + return "[" + getPos() + "] stream data key:" + logKey + + " pos:" + data.length() + " remaining:" + remaining; + } + + @Override + public boolean canRemove() { + return true; + } + + public static int getReadStart() { + return DATA_START; + } + + @Override + public boolean canMove() { + return false; + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/store/PageStreamTrunk.java b/modules/h2/src/main/java/org/h2/store/PageStreamTrunk.java new file mode 100644 index 0000000000000..2ea92b1dafb7a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/PageStreamTrunk.java @@ -0,0 +1,303 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.message.DbException; + +/** + * A trunk page of a stream. It contains the page numbers of the stream, and the + * page number of the next trunk. The format is: + *
+ * <ul>
+ * <li>page type: byte (0)</li>
+ * <li>checksum: short (1-2)</li>
+ * <li>previous trunk page, or 0 if none: int (3-6)</li>
+ * <li>log key: int (7-10)</li>
+ * <li>next trunk page: int (11-14)</li>
+ * <li>number of pages: short (15-16)</li>
+ * <li>page ids (17-)</li>
+ * </ul>
    + */ +public class PageStreamTrunk extends Page { + + private static final int DATA_START = 17; + + /** + * The previous stream trunk. + */ + int parent; + + /** + * The next stream trunk. + */ + int nextTrunk; + + private final PageStore store; + private int logKey; + private int[] pageIds; + private int pageCount; + private Data data; + + private PageStreamTrunk(PageStore store, int parent, int pageId, int next, + int logKey, int[] pageIds) { + setPos(pageId); + this.parent = parent; + this.store = store; + this.nextTrunk = next; + this.logKey = logKey; + this.pageCount = pageIds.length; + this.pageIds = pageIds; + } + + private PageStreamTrunk(PageStore store, Data data, int pageId) { + setPos(pageId); + this.data = data; + this.store = store; + } + + /** + * Read a stream trunk page. + * + * @param store the page store + * @param data the data + * @param pageId the page id + * @return the page + */ + static PageStreamTrunk read(PageStore store, Data data, int pageId) { + PageStreamTrunk p = new PageStreamTrunk(store, data, pageId); + p.read(); + return p; + } + + /** + * Create a new stream trunk page. + * + * @param store the page store + * @param parent the parent page + * @param pageId the page id + * @param next the next trunk page + * @param logKey the log key + * @param pageIds the stream data page ids + * @return the page + */ + static PageStreamTrunk create(PageStore store, int parent, int pageId, + int next, int logKey, int[] pageIds) { + return new PageStreamTrunk(store, parent, pageId, next, logKey, pageIds); + } + + /** + * Read the page from the disk. + */ + private void read() { + data.reset(); + data.readByte(); + data.readShortInt(); + parent = data.readInt(); + logKey = data.readInt(); + nextTrunk = data.readInt(); + pageCount = data.readShortInt(); + pageIds = new int[pageCount]; + for (int i = 0; i < pageCount; i++) { + pageIds[i] = data.readInt(); + } + } + + /** + * Get the data page id at the given position. 
+ * + * @param index the index (0, 1, ...) + * @return the value, or -1 if the index is too large + */ + int getPageData(int index) { + if (index >= pageIds.length) { + return -1; + } + return pageIds[index]; + } + + @Override + public void write() { + data = store.createData(); + data.writeByte((byte) Page.TYPE_STREAM_TRUNK); + data.writeShortInt(0); + data.writeInt(parent); + data.writeInt(logKey); + data.writeInt(nextTrunk); + data.writeShortInt(pageCount); + for (int i = 0; i < pageCount; i++) { + data.writeInt(pageIds[i]); + } + store.writePage(getPos(), data); + } + + /** + * Get the number of pages that can be addressed in a stream trunk page. + * + * @param pageSize the page size + * @return the number of pages + */ + static int getPagesAddressed(int pageSize) { + return (pageSize - DATA_START) / 4; + } + + /** + * Check if the given data page is in this trunk page. + * + * @param dataPageId the page id + * @return true if it is + */ + boolean contains(int dataPageId) { + for (int i = 0; i < pageCount; i++) { + if (pageIds[i] == dataPageId) { + return true; + } + } + return false; + } + + /** + * Free this page and all data pages. Pages after the last used data page + * (if within this list) are empty and therefore not just freed, but marked + * as not used. + * + * @param lastUsedPage the last used data page + * @return the number of pages freed + */ + int free(int lastUsedPage) { + store.free(getPos(), false); + int freed = 1; + boolean notUsed = false; + for (int i = 0; i < pageCount; i++) { + int page = pageIds[i]; + if (notUsed) { + store.freeUnused(page); + } else { + store.free(page, false); + } + freed++; + if (page == lastUsedPage) { + notUsed = true; + } + } + return freed; + } + + /** + * Get the estimated memory size. 
+ * + * @return number of double words (4 bytes) + */ + @Override + public int getMemory() { + return store.getPageSize() >> 2; + } + + @Override + public void moveTo(Session session, int newPos) { + // not required + } + + int getLogKey() { + return logKey; + } + + public int getNextTrunk() { + return nextTrunk; + } + + /** + * An iterator over page stream trunk pages. + */ + static class Iterator { + + private final PageStore store; + private int first; + private int next; + private int previous; + private boolean canDelete; + private int current; + + Iterator(PageStore store, int first) { + this.store = store; + this.next = first; + } + + int getCurrentPageId() { + return current; + } + + /** + * Get the next trunk page or null if no next trunk page. + * + * @return the next trunk page or null + */ + PageStreamTrunk next() { + canDelete = false; + if (first == 0) { + first = next; + } else if (first == next) { + return null; + } + if (next == 0 || next >= store.getPageCount()) { + return null; + } + Page p; + current = next; + try { + p = store.getPage(next); + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.FILE_CORRUPTED_1) { + // wrong checksum means end of stream + return null; + } + throw e; + } + if (p == null || p instanceof PageStreamTrunk || + p instanceof PageStreamData) { + canDelete = true; + } + if (!(p instanceof PageStreamTrunk)) { + return null; + } + PageStreamTrunk t = (PageStreamTrunk) p; + if (previous > 0 && t.parent != previous) { + return null; + } + previous = next; + next = t.nextTrunk; + return t; + } + + /** + * Check if the current page can be deleted. It can if it's empty, a + * stream trunk, or a stream data page. 
+ * + * @return true if it can be deleted + */ + boolean canDelete() { + return canDelete; + } + + } + + @Override + public boolean canRemove() { + return true; + } + + @Override + public String toString() { + return "page[" + getPos() + "] stream trunk key:" + logKey + + " next:" + nextTrunk; + } + + @Override + public boolean canMove() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/RangeInputStream.java b/modules/h2/src/main/java/org/h2/store/RangeInputStream.java new file mode 100644 index 0000000000000..eeffe18fc48bb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/RangeInputStream.java @@ -0,0 +1,102 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.FilterInputStream; +import java.io.IOException; +import java.io.InputStream; + +import org.h2.util.IOUtils; + +/** + * Input stream that reads only a specified range from the source stream. + */ +public final class RangeInputStream extends FilterInputStream { + private long limit; + + /** + * Creates new instance of range input stream. 
+ * + * @param in + * source stream + * @param offset + * offset of the range + * @param limit + * length of the range + * @throws IOException + * on I/O exception during seeking to the specified offset + */ + public RangeInputStream(InputStream in, long offset, long limit) throws IOException { + super(in); + this.limit = limit; + IOUtils.skipFully(in, offset); + } + + @Override + public int read() throws IOException { + if (limit <= 0) { + return -1; + } + int b = in.read(); + if (b >= 0) { + limit--; + } + return b; + } + + @Override + public int read(byte b[], int off, int len) throws IOException { + if (limit <= 0) { + return -1; + } + if (len > limit) { + len = (int) limit; + } + int cnt = in.read(b, off, len); + if (cnt > 0) { + limit -= cnt; + } + return cnt; + } + + @Override + public long skip(long n) throws IOException { + if (n > limit) { + n = (int) limit; + } + n = in.skip(n); + limit -= n; + return n; + } + + @Override + public int available() throws IOException { + int cnt = in.available(); + if (cnt > limit) { + return (int) limit; + } + return cnt; + } + + @Override + public void close() throws IOException { + in.close(); + } + + @Override + public void mark(int readlimit) { + } + + @Override + public synchronized void reset() throws IOException { + throw new IOException("mark/reset not supported"); + } + + @Override + public boolean markSupported() { + return false; + } +} diff --git a/modules/h2/src/main/java/org/h2/store/RangeReader.java b/modules/h2/src/main/java/org/h2/store/RangeReader.java new file mode 100644 index 0000000000000..ec0a5b1340b0f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/RangeReader.java @@ -0,0 +1,103 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.Reader; + +import org.h2.util.IOUtils; + +/** + * Reader that reads only a specified range from the source reader. + */ +public final class RangeReader extends Reader { + private final Reader r; + + private long limit; + + /** + * Creates new instance of range reader. + * + * @param r + * source reader + * @param offset + * offset of the range + * @param limit + * length of the range + * @throws IOException + * on I/O exception during seeking to the specified offset + */ + public RangeReader(Reader r, long offset, long limit) throws IOException { + this.r = r; + this.limit = limit; + IOUtils.skipFully(r, offset); + } + + @Override + public int read() throws IOException { + if (limit <= 0) { + return -1; + } + int c = r.read(); + if (c >= 0) { + limit--; + } + return c; + } + + @Override + public int read(char cbuf[], int off, int len) throws IOException { + if (limit <= 0) { + return -1; + } + if (len > limit) { + len = (int) limit; + } + int cnt = r.read(cbuf, off, len); + if (cnt > 0) { + limit -= cnt; + } + return cnt; + } + + @Override + public long skip(long n) throws IOException { + if (n > limit) { + n = (int) limit; + } + n = r.skip(n); + limit -= n; + return n; + } + + @Override + public boolean ready() throws IOException { + if (limit > 0) { + return r.ready(); + } + return false; + } + + @Override + public boolean markSupported() { + return false; + } + + @Override + public void mark(int readAheadLimit) throws IOException { + throw new IOException("mark() not supported"); + } + + @Override + public void reset() throws IOException { + throw new IOException("reset() not supported"); + } + + @Override + public void close() throws IOException { + r.close(); + } +} diff --git a/modules/h2/src/main/java/org/h2/store/RecoverTester.java b/modules/h2/src/main/java/org/h2/store/RecoverTester.java new file mode 100644 index 0000000000000..c091543dca187 --- 
/dev/null +++ b/modules/h2/src/main/java/org/h2/store/RecoverTester.java @@ -0,0 +1,196 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.io.IOException; +import java.io.OutputStreamWriter; +import java.io.PrintWriter; +import java.sql.SQLException; +import java.util.HashSet; +import java.util.Properties; + +import org.h2.api.ErrorCode; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.store.fs.FilePathRec; +import org.h2.store.fs.FileUtils; +import org.h2.store.fs.Recorder; +import org.h2.tools.Recover; +import org.h2.util.IOUtils; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * A tool that simulates a crash while writing to the database, and then + * verifies the database doesn't get corrupt. + */ +public class RecoverTester implements Recorder { + + private static RecoverTester instance; + + private String testDatabase = "memFS:reopen"; + private int writeCount = Utils.getProperty("h2.recoverTestOffset", 0); + private int testEvery = Utils.getProperty("h2.recoverTest", 64); + private final long maxFileSize = Utils.getProperty( + "h2.recoverTestMaxFileSize", Integer.MAX_VALUE) * 1024L * 1024; + private int verifyCount; + private final HashSet knownErrors = new HashSet<>(); + private volatile boolean testing; + + /** + * Initialize the recover test. 
+ * + * @param recoverTest the value of the recover test parameter + */ + public static synchronized void init(String recoverTest) { + RecoverTester tester = RecoverTester.getInstance(); + if (StringUtils.isNumber(recoverTest)) { + tester.setTestEvery(Integer.parseInt(recoverTest)); + } + FilePathRec.setRecorder(tester); + } + + public static synchronized RecoverTester getInstance() { + if (instance == null) { + instance = new RecoverTester(); + } + return instance; + } + + @Override + public void log(int op, String fileName, byte[] data, long x) { + if (op != Recorder.WRITE && op != Recorder.TRUNCATE) { + return; + } + if (!fileName.endsWith(Constants.SUFFIX_PAGE_FILE) && + !fileName.endsWith(Constants.SUFFIX_MV_FILE)) { + return; + } + writeCount++; + if ((writeCount % testEvery) != 0) { + return; + } + if (FileUtils.size(fileName) > maxFileSize) { + // System.out.println(fileName + " " + IOUtils.length(fileName)); + return; + } + if (testing) { + // avoid deadlocks + return; + } + testing = true; + PrintWriter out = null; + try { + out = new PrintWriter( + new OutputStreamWriter( + FileUtils.newOutputStream(fileName + ".log", true))); + testDatabase(fileName, out); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } finally { + IOUtils.closeSilently(out); + testing = false; + } + } + + private synchronized void testDatabase(String fileName, PrintWriter out) { + out.println("+ write #" + writeCount + " verify #" + verifyCount); + try { + IOUtils.copyFiles(fileName, testDatabase + Constants.SUFFIX_PAGE_FILE); + String mvFileName = fileName.substring(0, fileName.length() - + Constants.SUFFIX_PAGE_FILE.length()) + + Constants.SUFFIX_MV_FILE; + if (FileUtils.exists(mvFileName)) { + IOUtils.copyFiles(mvFileName, testDatabase + Constants.SUFFIX_MV_FILE); + } + verifyCount++; + // avoid using the Engine class to avoid deadlocks + Properties p = new Properties(); + p.setProperty("user", ""); + p.setProperty("password", ""); + ConnectionInfo 
ci = new ConnectionInfo("jdbc:h2:" + testDatabase + + ";FILE_LOCK=NO;TRACE_LEVEL_FILE=0", p); + Database database = new Database(ci, null); + // close the database + Session sysSession = database.getSystemSession(); + sysSession.prepare("script to '" + testDatabase + ".sql'").query(0); + sysSession.prepare("shutdown immediately").update(); + database.removeSession(null); + // everything OK - return + return; + } catch (DbException e) { + SQLException e2 = DbException.toSQLException(e); + int errorCode = e2.getErrorCode(); + if (errorCode == ErrorCode.WRONG_USER_OR_PASSWORD) { + return; + } else if (errorCode == ErrorCode.FILE_ENCRYPTION_ERROR_1) { + return; + } + e.printStackTrace(System.out); + } catch (Exception e) { + // failed + int errorCode = 0; + if (e instanceof SQLException) { + errorCode = ((SQLException) e).getErrorCode(); + } + if (errorCode == ErrorCode.WRONG_USER_OR_PASSWORD) { + return; + } else if (errorCode == ErrorCode.FILE_ENCRYPTION_ERROR_1) { + return; + } + e.printStackTrace(System.out); + } + out.println("begin ------------------------------ " + writeCount); + try { + Recover.execute(fileName.substring(0, fileName.lastIndexOf('/')), null); + } catch (SQLException e) { + // ignore + } + testDatabase += "X"; + try { + IOUtils.copyFiles(fileName, testDatabase + Constants.SUFFIX_PAGE_FILE); + // avoid using the Engine class to avoid deadlocks + Properties p = new Properties(); + ConnectionInfo ci = new ConnectionInfo("jdbc:h2:" + + testDatabase + ";FILE_LOCK=NO", p); + Database database = new Database(ci, null); + // close the database + database.removeSession(null); + } catch (Exception e) { + int errorCode = 0; + if (e instanceof DbException) { + e = ((DbException) e).getSQLException(); + errorCode = ((SQLException) e).getErrorCode(); + } + if (errorCode == ErrorCode.WRONG_USER_OR_PASSWORD) { + return; + } else if (errorCode == ErrorCode.FILE_ENCRYPTION_ERROR_1) { + return; + } + StringBuilder buff = new StringBuilder(); + StackTraceElement[] 
list = e.getStackTrace(); + for (int i = 0; i < 10 && i < list.length; i++) { + buff.append(list[i].toString()).append('\n'); + } + String s = buff.toString(); + if (!knownErrors.contains(s)) { + out.println(writeCount + " code: " + errorCode + " " + e.toString()); + e.printStackTrace(System.out); + knownErrors.add(s); + } else { + out.println(writeCount + " code: " + errorCode); + } + } + } + + public void setTestEvery(int testEvery) { + this.testEvery = testEvery; + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/store/SessionState.java b/modules/h2/src/main/java/org/h2/store/SessionState.java new file mode 100644 index 0000000000000..bb982f9d4a70a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/SessionState.java @@ -0,0 +1,54 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + + +/** + * The session state contains information about when was the last commit of a + * session. It is only used during recovery. + */ +class SessionState { + + /** + * The session id + */ + public int sessionId; + + /** + * The last log id where a commit for this session is found. + */ + public int lastCommitLog; + + /** + * The position where a commit for this session is found. + */ + public int lastCommitPos; + + /** + * The in-doubt transaction if there is one. + */ + public PageStoreInDoubtTransaction inDoubtTransaction; + + /** + * Check if this session state is already committed at this point. 
+ * + * @param logId the log id + * @param pos the position in the log + * @return true if it is committed + */ + public boolean isCommitted(int logId, int pos) { + if (logId != lastCommitLog) { + return lastCommitLog > logId; + } + return lastCommitPos >= pos; + } + + @Override + public String toString() { + return "sessionId:" + sessionId + " log:" + lastCommitLog + + " pos:" + lastCommitPos + " inDoubt:" + inDoubtTransaction; + } +} diff --git a/modules/h2/src/main/java/org/h2/store/WriterThread.java b/modules/h2/src/main/java/org/h2/store/WriterThread.java new file mode 100644 index 0000000000000..c73e0b1dad1aa --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/WriterThread.java @@ -0,0 +1,134 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store; + +import java.lang.ref.WeakReference; +import java.security.AccessControlException; +import org.h2.Driver; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.message.Trace; +import org.h2.message.TraceSystem; + +/** + * The writer thread is responsible for flushing the transaction log + * from time to time. + */ +public class WriterThread implements Runnable { + + /** + * The reference to the database. + * + * Thread objects are not garbage collected + * until they have returned from the run() method + * (even if they were never started), + * so if the connection was not closed, + * the database object cannot get reclaimed + * by the garbage collector if we use a hard reference.
+ */ + private volatile WeakReference<Database> databaseRef; + + private int writeDelay; + private Thread thread; + private volatile boolean stop; + + private WriterThread(Database database, int writeDelay) { + this.databaseRef = new WeakReference<>(database); + this.writeDelay = writeDelay; + } + + /** + * Change the write delay. + * + * @param writeDelay the new write delay + */ + public void setWriteDelay(int writeDelay) { + this.writeDelay = writeDelay; + } + + /** + * Create and start a new writer thread for the given database. If the + * thread can't be created, this method returns null. + * + * @param database the database + * @param writeDelay the delay + * @return the writer thread object or null + */ + public static WriterThread create(Database database, int writeDelay) { + try { + WriterThread writer = new WriterThread(database, writeDelay); + writer.thread = new Thread(writer, "H2 Log Writer " + database.getShortName()); + Driver.setThreadContextClassLoader(writer.thread); + writer.thread.setDaemon(true); + return writer; + } catch (AccessControlException e) { + // Google App Engine does not allow threads + return null; + } + } + + @Override + public void run() { + while (!stop) { + Database database = databaseRef.get(); + if (database == null) { + break; + } + int wait = writeDelay; + try { + if (database.isFileLockSerialized()) { + wait = Constants.MIN_WRITE_DELAY; + database.checkpointIfRequired(); + } else { + database.flush(); + } + } catch (Exception e) { + TraceSystem traceSystem = database.getTraceSystem(); + if (traceSystem != null) { + traceSystem.getTrace(Trace.DATABASE).error(e, "flush"); + } + } + + // wait 0 means wait forever, which is not what we want + wait = Math.max(wait, Constants.MIN_WRITE_DELAY); + synchronized (this) { + while (!stop && wait > 0) { + // wait 100 ms at a time + int w = Math.min(wait, 100); + try { + wait(w); + } catch (InterruptedException e) { + // ignore + } + wait -= w; + } + } + } + databaseRef = null; + } + + /** + *
Stop the thread. This method is called when closing the database. + */ + public void stopThread() { + stop = true; + synchronized (this) { + notify(); + } + // can't do thread.join(), because this thread might be holding + // a lock that the writer thread is waiting for + } + + /** + * Start the thread. This method is called after opening the database + * (to avoid deadlocks). + */ + public void startThread() { + thread.start(); + thread = null; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FakeFileChannel.java b/modules/h2/src/main/java/org/h2/store/fs/FakeFileChannel.java new file mode 100644 index 0000000000000..d9317ef24d207 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FakeFileChannel.java @@ -0,0 +1,104 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.MappedByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.nio.channels.ReadableByteChannel; +import java.nio.channels.WritableByteChannel; + +/** + * Fake file channel, used by the in-memory and ZIP file systems.
+ */ +public class FakeFileChannel extends FileChannel { + @Override + protected void implCloseChannel() throws IOException { + throw new IOException(); + } + + @Override + public FileLock lock(long position, long size, boolean shared) throws IOException { + throw new IOException(); + } + + @Override + public MappedByteBuffer map(MapMode mode, long position, long size) throws IOException { + throw new IOException(); + } + + @Override + public long position() throws IOException { + throw new IOException(); + } + + @Override + public FileChannel position(long newPosition) throws IOException { + throw new IOException(); + } + + @Override + public int read(ByteBuffer dst) throws IOException { + throw new IOException(); + } + + @Override + public int read(ByteBuffer dst, long position) throws IOException { + throw new IOException(); + } + + @Override + public long read(ByteBuffer[] dsts, int offset, int length) throws IOException { + throw new IOException(); + } + + @Override + public long size() throws IOException { + throw new IOException(); + } + + @Override + public long transferFrom(ReadableByteChannel src, long position, long count) throws IOException { + throw new IOException(); + } + + @Override + public long transferTo(long position, long count, WritableByteChannel target) throws IOException { + throw new IOException(); + } + + @Override + public FileChannel truncate(long size) throws IOException { + throw new IOException(); + } + + @Override + public FileLock tryLock(long position, long size, boolean shared) throws IOException { + throw new IOException(); + } + + @Override + public int write(ByteBuffer src) throws IOException { + throw new IOException(); + } + + @Override + public int write(ByteBuffer src, long position) throws IOException { + throw new IOException(); + } + + @Override + public long write(ByteBuffer[] srcs, int offset, int len) throws IOException { + throw new IOException(); + } + + @Override + public void force(boolean metaData) throws 
IOException { + throw new IOException(); + } +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/store/fs/FileBase.java b/modules/h2/src/main/java/org/h2/store/fs/FileBase.java new file mode 100644 index 0000000000000..c6335474e61ec --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FileBase.java @@ -0,0 +1,111 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.MappedByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.nio.channels.ReadableByteChannel; +import java.nio.channels.WritableByteChannel; + +/** + * The base class for file implementations. + */ +public abstract class FileBase extends FileChannel { + + @Override + public abstract long size() throws IOException; + + @Override + public abstract long position() throws IOException; + + @Override + public abstract FileChannel position(long newPosition) throws IOException; + + @Override + public abstract int read(ByteBuffer dst) throws IOException; + + @Override + public abstract int write(ByteBuffer src) throws IOException; + + @Override + public synchronized int read(ByteBuffer dst, long position) + throws IOException { + long oldPos = position(); + position(position); + int len = read(dst); + position(oldPos); + return len; + } + + @Override + public synchronized int write(ByteBuffer src, long position) + throws IOException { + long oldPos = position(); + position(position); + int len = write(src); + position(oldPos); + return len; + } + + @Override + public abstract FileChannel truncate(long size) throws IOException; + + @Override + public void force(boolean metaData) throws IOException { + // ignore + } + + @Override + protected void implCloseChannel() throws IOException { + // ignore + } + + @Override 
+ public FileLock lock(long position, long size, boolean shared) + throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public MappedByteBuffer map(MapMode mode, long position, long size) + throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long read(ByteBuffer[] dsts, int offset, int length) + throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long transferFrom(ReadableByteChannel src, long position, long count) + throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long transferTo(long position, long count, WritableByteChannel target) + throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public FileLock tryLock(long position, long size, boolean shared) + throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long write(ByteBuffer[] srcs, int offset, int length) + throws IOException { + throw new UnsupportedOperationException(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FileChannelInputStream.java b/modules/h2/src/main/java/org/h2/store/fs/FileChannelInputStream.java new file mode 100644 index 0000000000000..5e0ec8a714c3c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FileChannelInputStream.java @@ -0,0 +1,71 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.io.InputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; + +/** + * Allows to read from a file channel like an input stream. 
+ */ +public class FileChannelInputStream extends InputStream { + + private final FileChannel channel; + private final boolean closeChannel; + + private ByteBuffer buffer; + private long pos; + + /** + * Create a new file object input stream from the file channel. + * + * @param channel the file channel + * @param closeChannel whether closing the stream should close the channel + */ + public FileChannelInputStream(FileChannel channel, boolean closeChannel) { + this.channel = channel; + this.closeChannel = closeChannel; + } + + @Override + public int read() throws IOException { + if (buffer == null) { + buffer = ByteBuffer.allocate(1); + } + buffer.rewind(); + int len = channel.read(buffer, pos++); + if (len < 0) { + return -1; + } + return buffer.get(0) & 0xff; + } + + @Override + public int read(byte[] b) throws IOException { + return read(b, 0, b.length); + } + + @Override + public int read(byte[] b, int off, int len) throws IOException { + ByteBuffer buff = ByteBuffer.wrap(b, off, len); + int read = channel.read(buff, pos); + if (read == -1) { + return -1; + } + pos += read; + return read; + } + + @Override + public void close() throws IOException { + if (closeChannel) { + channel.close(); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FileChannelOutputStream.java b/modules/h2/src/main/java/org/h2/store/fs/FileChannelOutputStream.java new file mode 100644 index 0000000000000..bf2446cdab8b3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FileChannelOutputStream.java @@ -0,0 +1,59 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; + +/** + * Allows to write to a file channel like an output stream. 
+ */ +public class FileChannelOutputStream extends OutputStream { + + private final FileChannel channel; + private final byte[] buffer = { 0 }; + + /** + * Create a new file object output stream from the file channel. + * + * @param channel the file channel + * @param append true for append mode, false for truncate and overwrite + */ + public FileChannelOutputStream(FileChannel channel, boolean append) + throws IOException { + this.channel = channel; + if (append) { + channel.position(channel.size()); + } else { + channel.position(0); + channel.truncate(0); + } + } + + @Override + public void write(int b) throws IOException { + buffer[0] = (byte) b; + FileUtils.writeFully(channel, ByteBuffer.wrap(buffer)); + } + + @Override + public void write(byte[] b) throws IOException { + FileUtils.writeFully(channel, ByteBuffer.wrap(b)); + } + + @Override + public void write(byte[] b, int off, int len) throws IOException { + FileUtils.writeFully(channel, ByteBuffer.wrap(b, off, len)); + } + + @Override + public void close() throws IOException { + channel.close(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePath.java b/modules/h2/src/main/java/org/h2/store/fs/FilePath.java new file mode 100644 index 0000000000000..45fea0d78223e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePath.java @@ -0,0 +1,327 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.channels.FileChannel; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import org.h2.util.MathUtils; + +/** + * A path to a file. It is similar to the Java 7 java.nio.file.Path, + * but simpler, and works with older versions of Java.
It also implements the + * relevant methods found in java.nio.file.FileSystem and + * FileSystems + */ +public abstract class FilePath { + + private static FilePath defaultProvider; + + private static Map<String, FilePath> providers; + + /** + * The prefix for temporary files. + */ + private static String tempRandom; + private static long tempSequence; + + /** + * The complete path (which may be absolute or relative, depending on the + * file system). + */ + protected String name; + + /** + * Get the file path object for the given path. + * Windows-style '\' is replaced with '/'. + * + * @param path the path + * @return the file path object + */ + public static FilePath get(String path) { + path = path.replace('\\', '/'); + int index = path.indexOf(':'); + registerDefaultProviders(); + if (index < 2) { + // use the default provider if no prefix or + // only a single character (drive name) + return defaultProvider.getPath(path); + } + String scheme = path.substring(0, index); + FilePath p = providers.get(scheme); + if (p == null) { + // provider not found - use the default + p = defaultProvider; + } + return p.getPath(path); + } + + private static void registerDefaultProviders() { + if (providers == null || defaultProvider == null) { + Map<String, FilePath> map = Collections.synchronizedMap( + new HashMap<String, FilePath>()); + for (String c : new String[] { + "org.h2.store.fs.FilePathDisk", + "org.h2.store.fs.FilePathMem", + "org.h2.store.fs.FilePathMemLZF", + "org.h2.store.fs.FilePathNioMem", + "org.h2.store.fs.FilePathNioMemLZF", + "org.h2.store.fs.FilePathSplit", + "org.h2.store.fs.FilePathNio", + "org.h2.store.fs.FilePathNioMapped", + "org.h2.store.fs.FilePathZip", + "org.h2.store.fs.FilePathRetryOnInterrupt" + }) { + try { + FilePath p = (FilePath) Class.forName(c).newInstance(); + map.put(p.getScheme(), p); + if (defaultProvider == null) { + defaultProvider = p; + } + } catch (Exception e) { + // ignore - the files may be excluded on purpose + } + } + providers = map; + } + } + + /** + * Register a file
provider. + * + * @param provider the file provider + */ + public static void register(FilePath provider) { + registerDefaultProviders(); + providers.put(provider.getScheme(), provider); + } + + /** + * Unregister a file provider. + * + * @param provider the file provider + */ + public static void unregister(FilePath provider) { + registerDefaultProviders(); + providers.remove(provider.getScheme()); + } + + /** + * Get the size of a file in bytes. + * + * @return the size in bytes + */ + public abstract long size(); + + /** + * Rename a file if this is allowed. + * + * @param newName the new fully qualified file name + * @param atomicReplace whether the move should be atomic, and the target + * file should be replaced if it exists and replacing is possible + */ + public abstract void moveTo(FilePath newName, boolean atomicReplace); + + /** + * Create a new file. + * + * @return true if creating was successful + */ + public abstract boolean createFile(); + + /** + * Checks if a file exists. + * + * @return true if it exists + */ + public abstract boolean exists(); + + /** + * Delete a file or directory if it exists. + * Directories may only be deleted if they are empty. + */ + public abstract void delete(); + + /** + * List the files and directories in the given directory. + * + * @return the list of fully qualified file names + */ + public abstract List<FilePath> newDirectoryStream(); + + /** + * Normalize a file name. + * + * @return the normalized file name + */ + public abstract FilePath toRealPath(); + + /** + * Get the parent directory of a file or directory. + * + * @return the parent directory name + */ + public abstract FilePath getParent(); + + /** + * Check if it is a file or a directory. + * + * @return true if it is a directory + */ + public abstract boolean isDirectory(); + + /** + * Check if the file name includes a path.
+ * + * @return if the file name is absolute + */ + public abstract boolean isAbsolute(); + + /** + * Get the last modified date of a file + * + * @return the last modified date + */ + public abstract long lastModified(); + + /** + * Check if the file is writable. + * + * @return if the file is writable + */ + public abstract boolean canWrite(); + + /** + * Create a directory (all required parent directories already exist). + */ + public abstract void createDirectory(); + + /** + * Get the file or directory name (the last element of the path). + * + * @return the last element of the path + */ + public String getName() { + int idx = Math.max(name.indexOf(':'), name.lastIndexOf('/')); + return idx < 0 ? name : name.substring(idx + 1); + } + + /** + * Create an output stream to write into the file. + * + * @param append if true, the file will grow, if false, the file will be + * truncated first + * @return the output stream + */ + public abstract OutputStream newOutputStream(boolean append) throws IOException; + + /** + * Open a random access file object. + * + * @param mode the access mode. Supported are r, rw, rws, rwd + * @return the file object + */ + public abstract FileChannel open(String mode) throws IOException; + + /** + * Create an input stream to read from the file. + * + * @return the input stream + */ + public abstract InputStream newInputStream() throws IOException; + + /** + * Disable the ability to write. + * + * @return true if the call was successful + */ + public abstract boolean setReadOnly(); + + /** + * Create a new temporary file. 
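The getName() logic above takes the later of the scheme separator ':' and the final '/', which is what lets scheme-prefixed names such as memFS:test.h2.db and plain disk paths share one implementation. A standalone sketch (PathNameDemo is a hypothetical class name used for illustration, not part of H2):

```java
/**
 * Sketch of FilePath.getName(): the last path element starts after the
 * scheme separator ':' or the final '/', whichever comes later.
 * PathNameDemo is a hypothetical class name, not part of H2.
 */
class PathNameDemo {

    /** Return the last element of the path, mirroring FilePath.getName(). */
    static String lastElement(String name) {
        int idx = Math.max(name.indexOf(':'), name.lastIndexOf('/'));
        return idx < 0 ? name : name.substring(idx + 1);
    }

    public static void main(String[] args) {
        System.out.println(lastElement("memFS:data/test.h2.db")); // test.h2.db
        System.out.println(lastElement("memFS:test.h2.db"));      // test.h2.db
        System.out.println(lastElement("test.h2.db"));            // test.h2.db
    }
}
```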
+ * + * @param suffix the suffix + * @param deleteOnExit if the file should be deleted when the virtual + * machine exits + * @param inTempDir if the file should be stored in the temporary directory + * @return the name of the created file + */ + @SuppressWarnings("unused") + public FilePath createTempFile(String suffix, boolean deleteOnExit, + boolean inTempDir) throws IOException { + while (true) { + FilePath p = getPath(name + getNextTempFileNamePart(false) + suffix); + if (p.exists() || !p.createFile()) { + // in theory, the random number could collide + getNextTempFileNamePart(true); + continue; + } + p.open("rw").close(); + return p; + } + } + + /** + * Get the next temporary file name part (the part in the middle). + * + * @param newRandom if the random part of the filename should change + * @return the file name part + */ + protected static synchronized String getNextTempFileNamePart( + boolean newRandom) { + if (newRandom || tempRandom == null) { + tempRandom = MathUtils.randomInt(Integer.MAX_VALUE) + "."; + } + return tempRandom + tempSequence++; + } + + /** + * Get the string representation. The returned string can be used to + * construct a new object. + * + * @return the path as a string + */ + @Override + public String toString() { + return name; + } + + /** + * Get the scheme (prefix) for this file provider. + * This is similar to + * java.nio.file.spi.FileSystemProvider.getScheme. + * + * @return the scheme + */ + public abstract String getScheme(); + + /** + * Convert a file to a path. This is similar to + * java.nio.file.spi.FileSystemProvider.getPath, but may + * return an object even if the scheme doesn't match in case of the + * default file provider. + * + * @param path the path + * @return the file path object + */ + public abstract FilePath getPath(String path); + + /** + * Get the unwrapped file name (without wrapper prefixes if wrapping / + * delegating file systems are used).
+ * + * @return the unwrapped path + */ + public FilePath unwrap() { + return this; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathDisk.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathDisk.java new file mode 100644 index 0000000000000..d1efe5b77722a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathDisk.java @@ -0,0 +1,492 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.File; +import java.io.FileInputStream; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.RandomAccessFile; +import java.net.URL; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.nio.channels.NonWritableChannelException; +import java.util.ArrayList; +import java.util.List; +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.util.IOUtils; + +/** + * This file system stores files on disk. + * This is the most common file system. + */ +public class FilePathDisk extends FilePath { + + private static final String CLASSPATH_PREFIX = "classpath:"; + + @Override + public FilePathDisk getPath(String path) { + FilePathDisk p = new FilePathDisk(); + p.name = translateFileName(path); + return p; + } + + @Override + public long size() { + return new File(name).length(); + } + + /** + * Translate the file name to the native format. This will replace '\' with + * '/' and expand the home directory ('~'). 
+ * + * @param fileName the file name + * @return the native file name + */ + protected static String translateFileName(String fileName) { + fileName = fileName.replace('\\', '/'); + if (fileName.startsWith("file:")) { + fileName = fileName.substring("file:".length()); + } + return expandUserHomeDirectory(fileName); + } + + /** + * Expand '~' to the user home directory. It is only expanded if the '~' + * stands alone, or is followed by '/' or '\'. + * + * @param fileName the file name + * @return the native file name + */ + public static String expandUserHomeDirectory(String fileName) { + if (fileName.startsWith("~") && (fileName.length() == 1 || + fileName.startsWith("~/"))) { + String userDir = SysProperties.USER_HOME; + fileName = userDir + fileName.substring(1); + } + return fileName; + } + + @Override + public void moveTo(FilePath newName, boolean atomicReplace) { + File oldFile = new File(name); + File newFile = new File(newName.name); + if (oldFile.getAbsolutePath().equals(newFile.getAbsolutePath())) { + return; + } + if (!oldFile.exists()) { + throw DbException.get(ErrorCode.FILE_RENAME_FAILED_2, + name + " (not found)", + newName.name); + } + // Java 7: use java.nio.file.Files.move(Path source, Path target, + // CopyOption... options) + // with CopyOptions "REPLACE_EXISTING" and "ATOMIC_MOVE".
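expandUserHomeDirectory() above only touches a leading '~' that stands alone or is followed by '/'. A sketch with the home directory injected as a parameter (ExpandDemo and the userHome argument are illustrative assumptions; H2 itself reads SysProperties.USER_HOME):

```java
/**
 * Sketch of FilePathDisk.expandUserHomeDirectory() with the home directory
 * passed in explicitly. ExpandDemo is a hypothetical class name, not H2 API.
 */
class ExpandDemo {

    /** Expand a leading "~" only when it stands alone or is followed by '/'. */
    static String expand(String fileName, String userHome) {
        if (fileName.startsWith("~") && (fileName.length() == 1 ||
                fileName.startsWith("~/"))) {
            return userHome + fileName.substring(1);
        }
        return fileName;
    }

    public static void main(String[] args) {
        System.out.println(expand("~/db/test", "/home/alice")); // /home/alice/db/test
        System.out.println(expand("~alice/db", "/home/alice")); // unchanged: ~alice/db
    }
}
```

Note that "~alice" is deliberately left alone: only the current user's home is expanded.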
+ if (atomicReplace) { + boolean ok = oldFile.renameTo(newFile); + if (ok) { + return; + } + throw DbException.get(ErrorCode.FILE_RENAME_FAILED_2, name, newName.name); + } + if (newFile.exists()) { + throw DbException.get(ErrorCode.FILE_RENAME_FAILED_2, name, newName + " (exists)"); + } + for (int i = 0; i < SysProperties.MAX_FILE_RETRY; i++) { + IOUtils.trace("rename", name + " >" + newName, null); + boolean ok = oldFile.renameTo(newFile); + if (ok) { + return; + } + wait(i); + } + throw DbException.get(ErrorCode.FILE_RENAME_FAILED_2, name, newName.name); + } + + private static void wait(int i) { + if (i == 8) { + System.gc(); + } + try { + // sleep at most 256 ms + long sleep = Math.min(256, i * i); + Thread.sleep(sleep); + } catch (InterruptedException e) { + // ignore + } + } + + @Override + public boolean createFile() { + File file = new File(name); + for (int i = 0; i < SysProperties.MAX_FILE_RETRY; i++) { + try { + return file.createNewFile(); + } catch (IOException e) { + // 'access denied' is really a concurrent access problem + wait(i); + } + } + return false; + } + + @Override + public boolean exists() { + return new File(name).exists(); + } + + @Override + public void delete() { + File file = new File(name); + for (int i = 0; i < SysProperties.MAX_FILE_RETRY; i++) { + IOUtils.trace("delete", name, null); + boolean ok = file.delete(); + if (ok || !file.exists()) { + return; + } + wait(i); + } + throw DbException.get(ErrorCode.FILE_DELETE_FAILED_1, name); + } + + @Override + public List<FilePath> newDirectoryStream() { + ArrayList<FilePath> list = new ArrayList<>(); + File f = new File(name); + try { + String[] files = f.list(); + if (files != null) { + String base = f.getCanonicalPath(); + if (!base.endsWith(SysProperties.FILE_SEPARATOR)) { + base += SysProperties.FILE_SEPARATOR; + } + list.ensureCapacity(files.length); + for (String file : files) { + list.add(getPath(base + file)); + } + } + return list; + } catch (IOException e) { + throw DbException.convertIOException(e,
name); + } + } + + @Override + public boolean canWrite() { + return canWriteInternal(new File(name)); + } + + @Override + public boolean setReadOnly() { + File f = new File(name); + return f.setReadOnly(); + } + + @Override + public FilePathDisk toRealPath() { + try { + String fileName = new File(name).getCanonicalPath(); + return getPath(fileName); + } catch (IOException e) { + throw DbException.convertIOException(e, name); + } + } + + @Override + public FilePath getParent() { + String p = new File(name).getParent(); + return p == null ? null : getPath(p); + } + + @Override + public boolean isDirectory() { + return new File(name).isDirectory(); + } + + @Override + public boolean isAbsolute() { + return new File(name).isAbsolute(); + } + + @Override + public long lastModified() { + return new File(name).lastModified(); + } + + private static boolean canWriteInternal(File file) { + try { + if (!file.canWrite()) { + return false; + } + } catch (Exception e) { + // workaround for GAE which throws a + // java.security.AccessControlException + return false; + } + // File.canWrite() does not respect windows user permissions, + // so we must try to open it using the mode "rw". 
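The comment above notes that File.canWrite() does not respect Windows user permissions, so the real test is opening the file with mode "rw". The probe pattern in isolation (WriteProbeDemo is a hypothetical class name, not H2 API):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * Probe writability by actually opening with "rw", as
 * FilePathDisk.canWriteInternal does. WriteProbeDemo is a hypothetical
 * class name, not part of H2.
 */
class WriteProbeDemo {

    static boolean canActuallyWrite(File file) {
        if (!file.canWrite()) {
            return false;
        }
        // canWrite() may lie on Windows; opening with "rw" is the real test
        try (RandomAccessFile r = new RandomAccessFile(file, "rw")) {
            return true;
        } catch (FileNotFoundException e) {
            // "access denied" surfaces as FileNotFoundException here
            return false;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("probe", ".tmp");
        f.deleteOnExit();
        System.out.println(canActuallyWrite(f)); // true for a fresh temp file
    }
}
```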
+ // See also http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4420020 + RandomAccessFile r = null; + try { + r = new RandomAccessFile(file, "rw"); + return true; + } catch (FileNotFoundException e) { + return false; + } finally { + if (r != null) { + try { + r.close(); + } catch (IOException e) { + // ignore + } + } + } + } + + @Override + public void createDirectory() { + File dir = new File(name); + for (int i = 0; i < SysProperties.MAX_FILE_RETRY; i++) { + if (dir.exists()) { + if (dir.isDirectory()) { + return; + } + throw DbException.get(ErrorCode.FILE_CREATION_FAILED_1, + name + " (a file with this name already exists)"); + } else if (dir.mkdir()) { + return; + } + wait(i); + } + throw DbException.get(ErrorCode.FILE_CREATION_FAILED_1, name); + } + + @Override + public OutputStream newOutputStream(boolean append) throws IOException { + try { + File file = new File(name); + File parent = file.getParentFile(); + if (parent != null) { + FileUtils.createDirectories(parent.getAbsolutePath()); + } + FileOutputStream out = new FileOutputStream(name, append); + IOUtils.trace("openFileOutputStream", name, out); + return out; + } catch (IOException e) { + freeMemoryAndFinalize(); + return new FileOutputStream(name); + } + } + + @Override + public InputStream newInputStream() throws IOException { + if (name.matches("[a-zA-Z]{2,19}:.*")) { + // if the ':' is in position 1, a windows file access is assumed: + // C:.. or D:, and if the ':' is not at the beginning, assume its a + // file name with a colon + if (name.startsWith(CLASSPATH_PREFIX)) { + String fileName = name.substring(CLASSPATH_PREFIX.length()); + // Force absolute resolution in Class.getResourceAsStream + if (!fileName.startsWith("/")) { + fileName = "/" + fileName; + } + InputStream in = getClass().getResourceAsStream(fileName); + if (in == null) { + // ClassLoader.getResourceAsStream doesn't need leading "/" + in = Thread.currentThread().getContextClassLoader(). 
+ getResourceAsStream(fileName.substring(1)); + } + if (in == null) { + throw new FileNotFoundException("resource " + fileName); + } + return in; + } + // otherwise an URL is assumed + URL url = new URL(name); + return url.openStream(); + } + FileInputStream in = new FileInputStream(name); + IOUtils.trace("openFileInputStream", name, in); + return in; + } + + /** + * Call the garbage collection and run finalization. This close all files + * that were not closed, and are no longer referenced. + */ + static void freeMemoryAndFinalize() { + IOUtils.trace("freeMemoryAndFinalize", null, null); + Runtime rt = Runtime.getRuntime(); + long mem = rt.freeMemory(); + for (int i = 0; i < 16; i++) { + rt.gc(); + long now = rt.freeMemory(); + rt.runFinalization(); + if (now == mem) { + break; + } + mem = now; + } + } + + @Override + public FileChannel open(String mode) throws IOException { + FileDisk f; + try { + f = new FileDisk(name, mode); + IOUtils.trace("open", name, f); + } catch (IOException e) { + freeMemoryAndFinalize(); + try { + f = new FileDisk(name, mode); + } catch (IOException e2) { + throw e; + } + } + return f; + } + + @Override + public String getScheme() { + return "file"; + } + + @Override + public FilePath createTempFile(String suffix, boolean deleteOnExit, + boolean inTempDir) throws IOException { + String fileName = name + "."; + String prefix = new File(fileName).getName(); + File dir; + if (inTempDir) { + dir = new File(System.getProperty("java.io.tmpdir", ".")); + } else { + dir = new File(fileName).getAbsoluteFile().getParentFile(); + } + FileUtils.createDirectories(dir.getAbsolutePath()); + while (true) { + File f = new File(dir, prefix + getNextTempFileNamePart(false) + suffix); + if (f.exists() || !f.createNewFile()) { + // in theory, the random number could collide + getNextTempFileNamePart(true); + continue; + } + if (deleteOnExit) { + try { + f.deleteOnExit(); + } catch (Throwable e) { + // sometimes this throws a NullPointerException + // at 
java.io.DeleteOnExitHook.add(DeleteOnExitHook.java:33) + // we can ignore it + } + } + return get(f.getCanonicalPath()); + } + } + +} + +/** + * Uses java.io.RandomAccessFile to access a file. + */ +class FileDisk extends FileBase { + + private final RandomAccessFile file; + private final String name; + private final boolean readOnly; + + FileDisk(String fileName, String mode) throws FileNotFoundException { + this.file = new RandomAccessFile(fileName, mode); + this.name = fileName; + this.readOnly = mode.equals("r"); + } + + @Override + public void force(boolean metaData) throws IOException { + String m = SysProperties.SYNC_METHOD; + if ("".equals(m)) { + // do nothing + } else if ("sync".equals(m)) { + file.getFD().sync(); + } else if ("force".equals(m)) { + file.getChannel().force(true); + } else if ("forceFalse".equals(m)) { + file.getChannel().force(false); + } else { + file.getFD().sync(); + } + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + // compatibility with JDK FileChannel#truncate + if (readOnly) { + throw new NonWritableChannelException(); + } + /* + * RandomAccessFile.setLength() does not always work here since Java 9 for + * unknown reason so use FileChannel.truncate(). 
+ */ + file.getChannel().truncate(newLength); + return this; + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + return file.getChannel().tryLock(position, size, shared); + } + + @Override + public void implCloseChannel() throws IOException { + file.close(); + } + + @Override + public long position() throws IOException { + return file.getFilePointer(); + } + + @Override + public long size() throws IOException { + return file.length(); + } + + @Override + public int read(ByteBuffer dst) throws IOException { + int len = file.read(dst.array(), dst.arrayOffset() + dst.position(), + dst.remaining()); + if (len > 0) { + dst.position(dst.position() + len); + } + return len; + } + + @Override + public FileChannel position(long pos) throws IOException { + file.seek(pos); + return this; + } + + @Override + public int write(ByteBuffer src) throws IOException { + int len = src.remaining(); + file.write(src.array(), src.arrayOffset() + src.position(), len); + src.position(src.position() + len); + return len; + } + + @Override + public String toString() { + return name; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathEncrypt.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathEncrypt.java new file mode 100644 index 0000000000000..7dd96a97038bf --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathEncrypt.java @@ -0,0 +1,529 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.EOFException; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; + +import org.h2.security.AES; +import org.h2.security.BlockCipher; +import org.h2.security.SHA256; +import org.h2.util.MathUtils; + +/** + * An encrypted file. + */ +public class FilePathEncrypt extends FilePathWrapper { + + private static final String SCHEME = "encrypt"; + + /** + * Register this file system. + */ + public static void register() { + FilePath.register(new FilePathEncrypt()); + } + + @Override + public FileChannel open(String mode) throws IOException { + String[] parsed = parse(name); + FileChannel file = FileUtils.open(parsed[1], mode); + byte[] passwordBytes = parsed[0].getBytes(StandardCharsets.UTF_8); + return new FileEncrypt(name, passwordBytes, file); + } + + @Override + public String getScheme() { + return SCHEME; + } + + @Override + protected String getPrefix() { + String[] parsed = parse(name); + return getScheme() + ":" + parsed[0] + ":"; + } + + @Override + public FilePath unwrap(String fileName) { + return FilePath.get(parse(fileName)[1]); + } + + @Override + public long size() { + long size = getBase().size() - FileEncrypt.HEADER_LENGTH; + size = Math.max(0, size); + if ((size & FileEncrypt.BLOCK_SIZE_MASK) != 0) { + size -= FileEncrypt.BLOCK_SIZE; + } + return size; + } + + @Override + public OutputStream newOutputStream(boolean append) throws IOException { + return new FileChannelOutputStream(open("rw"), append); + } + + @Override + public InputStream newInputStream() throws IOException { + return new FileChannelInputStream(open("r"), true); + } + + /** + * Split the file name into algorithm, password, and base file name. 
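The size() override above subtracts the 4 KiB header from the stored length and, when the result is not block-aligned, one further block that holds a partly filled tail. The same arithmetic as a standalone sketch (SizeDemo is a hypothetical class name; 4096 mirrors FileEncrypt.BLOCK_SIZE):

```java
/**
 * Arithmetic of FilePathEncrypt.size(): an encrypted file stores a 4 KiB
 * header, full encrypted blocks, and for an unaligned tail a short run of
 * extra bytes whose length is the true remainder (see the FileEncrypt
 * write path). SizeDemo is a hypothetical class name, not part of H2.
 */
class SizeDemo {
    static final int BLOCK_SIZE = 4096;
    static final int HEADER_LENGTH = BLOCK_SIZE;

    /** User-visible size for a given on-disk (stored) size. */
    static long visibleSize(long storedSize) {
        long size = Math.max(0, storedSize - HEADER_LENGTH);
        if ((size & (BLOCK_SIZE - 1)) != 0) {
            // an unaligned tail means one full padded block plus the
            // short remainder run; drop the padded block
            size -= BLOCK_SIZE;
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println(visibleSize(4096));         // header only -> 0
        System.out.println(visibleSize(2 * 4096));     // one full block -> 4096
        System.out.println(visibleSize(2 * 4096 + 100)); // partial tail -> 100
    }
}
```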
+ * + * @param fileName the file name + * @return an array with algorithm, password, and base file name + */ + private String[] parse(String fileName) { + if (!fileName.startsWith(getScheme())) { + throw new IllegalArgumentException(fileName + + " doesn't start with " + getScheme()); + } + fileName = fileName.substring(getScheme().length() + 1); + int idx = fileName.indexOf(':'); + String password; + if (idx < 0) { + throw new IllegalArgumentException(fileName + + " doesn't contain encryption algorithm and password"); + } + password = fileName.substring(0, idx); + fileName = fileName.substring(idx + 1); + return new String[] { password, fileName }; + } + + /** + * Convert a char array to a byte array, in UTF-16 format. The char array is + * not cleared after use (this must be done by the caller). + * + * @param passwordChars the password characters + * @return the byte array + */ + public static byte[] getPasswordBytes(char[] passwordChars) { + // using UTF-16 + int len = passwordChars.length; + byte[] password = new byte[len * 2]; + for (int i = 0; i < len; i++) { + char c = passwordChars[i]; + password[i + i] = (byte) (c >>> 8); + password[i + i + 1] = (byte) c; + } + return password; + } + + /** + * An encrypted file with a read cache. + */ + public static class FileEncrypt extends FileBase { + + /** + * The block size. + */ + static final int BLOCK_SIZE = 4096; + + /** + * The block size bit mask. + */ + static final int BLOCK_SIZE_MASK = BLOCK_SIZE - 1; + + /** + * The length of the file header. Using a smaller header is possible, + * but would mean reads and writes are not aligned to the block size. + */ + static final int HEADER_LENGTH = BLOCK_SIZE; + + private static final byte[] HEADER = "H2encrypt\n".getBytes(); + private static final int SALT_POS = HEADER.length; + + /** + * The length of the salt, in bytes. + */ + private static final int SALT_LENGTH = 8; + + /** + * The number of iterations. 
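getPasswordBytes() above hand-rolls a big-endian UTF-16 encoding so a char[] password never has to be turned into an interned String. A standalone sketch (PasswordBytesDemo is a hypothetical class name, not part of H2):

```java
/**
 * Sketch of FilePathEncrypt.getPasswordBytes(): each char becomes two
 * bytes, high byte first (big-endian UTF-16, no byte-order mark).
 * PasswordBytesDemo is a hypothetical class name, not part of H2.
 */
class PasswordBytesDemo {

    /** Convert chars to big-endian UTF-16 bytes. */
    static byte[] toUtf16Bytes(char[] chars) {
        byte[] out = new byte[chars.length * 2];
        for (int i = 0; i < chars.length; i++) {
            char c = chars[i];
            out[i + i] = (byte) (c >>> 8); // high byte first
            out[i + i + 1] = (byte) c;     // then low byte
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] b = toUtf16Bytes(new char[] { 'a' });
        System.out.println(b[0] + "," + b[1]); // 0,97
    }
}
```

As the original javadoc says, clearing the char[] afterwards remains the caller's job.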
It is relatively low; a higher value would + * slow down opening files on Android too much. + */ + private static final int HASH_ITERATIONS = 10; + + private final FileChannel base; + + /** + * The current position within the file, from a user perspective. + */ + private long pos; + + /** + * The current file size, from a user perspective. + */ + private long size; + + private final String name; + + private XTS xts; + + private byte[] encryptionKey; + + public FileEncrypt(String name, byte[] encryptionKey, FileChannel base) { + // don't do any read or write operations here, because they could + // fail if the file is locked, and we want to give the caller a + // chance to lock the file first + this.name = name; + this.base = base; + this.encryptionKey = encryptionKey; + } + + private void init() throws IOException { + if (xts != null) { + return; + } + this.size = base.size() - HEADER_LENGTH; + boolean newFile = size < 0; + byte[] salt; + if (newFile) { + byte[] header = Arrays.copyOf(HEADER, BLOCK_SIZE); + salt = MathUtils.secureRandomBytes(SALT_LENGTH); + System.arraycopy(salt, 0, header, SALT_POS, salt.length); + writeFully(base, 0, ByteBuffer.wrap(header)); + size = 0; + } else { + salt = new byte[SALT_LENGTH]; + readFully(base, SALT_POS, ByteBuffer.wrap(salt)); + if ((size & BLOCK_SIZE_MASK) != 0) { + size -= BLOCK_SIZE; + } + } + AES cipher = new AES(); + cipher.setKey(SHA256.getPBKDF2( + encryptionKey, salt, HASH_ITERATIONS, 16)); + encryptionKey = null; + xts = new XTS(cipher); + } + + @Override + protected void implCloseChannel() throws IOException { + base.close(); + } + + @Override + public FileChannel position(long newPosition) throws IOException { + this.pos = newPosition; + return this; + } + + @Override + public long position() throws IOException { + return pos; + } + + @Override + public int read(ByteBuffer dst) throws IOException { + int len = read(dst, pos); + if (len > 0) { + pos += len; + } + return len; + } + + @Override + public int 
read(ByteBuffer dst, long position) throws IOException { + int len = dst.remaining(); + if (len == 0) { + return 0; + } + init(); + len = (int) Math.min(len, size - position); + if (position >= size) { + return -1; + } else if (position < 0) { + throw new IllegalArgumentException("pos: " + position); + } + if ((position & BLOCK_SIZE_MASK) != 0 || + (len & BLOCK_SIZE_MASK) != 0) { + // either the position or the len is unaligned: + // read aligned, and then truncate + long p = position / BLOCK_SIZE * BLOCK_SIZE; + int offset = (int) (position - p); + int l = (len + offset + BLOCK_SIZE - 1) / BLOCK_SIZE * BLOCK_SIZE; + ByteBuffer temp = ByteBuffer.allocate(l); + readInternal(temp, p, l); + temp.flip(); + temp.limit(offset + len); + temp.position(offset); + dst.put(temp); + return len; + } + readInternal(dst, position, len); + return len; + } + + private void readInternal(ByteBuffer dst, long position, int len) + throws IOException { + int x = dst.position(); + readFully(base, position + HEADER_LENGTH, dst); + long block = position / BLOCK_SIZE; + while (len > 0) { + xts.decrypt(block++, BLOCK_SIZE, dst.array(), dst.arrayOffset() + x); + x += BLOCK_SIZE; + len -= BLOCK_SIZE; + } + } + + private static void readFully(FileChannel file, long pos, ByteBuffer dst) + throws IOException { + do { + int len = file.read(dst, pos); + if (len < 0) { + throw new EOFException(); + } + pos += len; + } while (dst.remaining() > 0); + } + + @Override + public int write(ByteBuffer src, long position) throws IOException { + init(); + int len = src.remaining(); + if ((position & BLOCK_SIZE_MASK) != 0 || + (len & BLOCK_SIZE_MASK) != 0) { + // either the position or the len is unaligned: + // read aligned, and then truncate + long p = position / BLOCK_SIZE * BLOCK_SIZE; + int offset = (int) (position - p); + int l = (len + offset + BLOCK_SIZE - 1) / BLOCK_SIZE * BLOCK_SIZE; + ByteBuffer temp = ByteBuffer.allocate(l); + int available = (int) (size - p + BLOCK_SIZE - 1) / BLOCK_SIZE * 
BLOCK_SIZE; + int readLen = Math.min(l, available); + if (readLen > 0) { + readInternal(temp, p, readLen); + temp.rewind(); + } + temp.limit(offset + len); + temp.position(offset); + temp.put(src); + temp.limit(l); + temp.rewind(); + writeInternal(temp, p, l); + long p2 = position + len; + size = Math.max(size, p2); + int plus = (int) (size & BLOCK_SIZE_MASK); + if (plus > 0) { + temp = ByteBuffer.allocate(plus); + writeFully(base, p + HEADER_LENGTH + l, temp); + } + return len; + } + writeInternal(src, position, len); + long p2 = position + len; + size = Math.max(size, p2); + return len; + } + + private void writeInternal(ByteBuffer src, long position, int len) + throws IOException { + ByteBuffer crypt = ByteBuffer.allocate(len); + crypt.put(src); + crypt.flip(); + long block = position / BLOCK_SIZE; + int x = 0, l = len; + while (l > 0) { + xts.encrypt(block++, BLOCK_SIZE, crypt.array(), crypt.arrayOffset() + x); + x += BLOCK_SIZE; + l -= BLOCK_SIZE; + } + writeFully(base, position + HEADER_LENGTH, crypt); + } + + private static void writeFully(FileChannel file, long pos, + ByteBuffer src) throws IOException { + int off = 0; + do { + int len = file.write(src, pos + off); + off += len; + } while (src.remaining() > 0); + } + + @Override + public int write(ByteBuffer src) throws IOException { + int len = write(src, pos); + if (len > 0) { + pos += len; + } + return len; + } + + @Override + public long size() throws IOException { + init(); + return size; + } + + @Override + public FileChannel truncate(long newSize) throws IOException { + init(); + if (newSize > size) { + return this; + } + if (newSize < 0) { + throw new IllegalArgumentException("newSize: " + newSize); + } + int offset = (int) (newSize & BLOCK_SIZE_MASK); + if (offset > 0) { + base.truncate(newSize + HEADER_LENGTH + BLOCK_SIZE); + } else { + base.truncate(newSize + HEADER_LENGTH); + } + this.size = newSize; + pos = Math.min(pos, size); + return this; + } + + @Override + public void force(boolean 
metaData) throws IOException { + base.force(metaData); + } + + @Override + public FileLock tryLock(long position, long size, boolean shared) + throws IOException { + return base.tryLock(position, size, shared); + } + + @Override + public String toString() { + return name; + } + + } + + /** + * An XTS implementation as described in + * IEEE P1619 (Standard Architecture for Encrypted Shared Storage Media). + * See also + * http://axelkenzo.ru/downloads/1619-2007-NIST-Submission.pdf + */ + static class XTS { + + /** + * Galois field feedback. + */ + private static final int GF_128_FEEDBACK = 0x87; + + /** + * The AES encryption block size. + */ + private static final int CIPHER_BLOCK_SIZE = 16; + + private final BlockCipher cipher; + + XTS(BlockCipher cipher) { + this.cipher = cipher; + } + + /** + * Encrypt the data. + * + * @param id the (sector) id + * @param len the number of bytes + * @param data the data + * @param offset the offset within the data + */ + void encrypt(long id, int len, byte[] data, int offset) { + byte[] tweak = initTweak(id); + int i = 0; + for (; i + CIPHER_BLOCK_SIZE <= len; i += CIPHER_BLOCK_SIZE) { + if (i > 0) { + updateTweak(tweak); + } + xorTweak(data, i + offset, tweak); + cipher.encrypt(data, i + offset, CIPHER_BLOCK_SIZE); + xorTweak(data, i + offset, tweak); + } + if (i < len) { + updateTweak(tweak); + swap(data, i + offset, i - CIPHER_BLOCK_SIZE + offset, len - i); + xorTweak(data, i - CIPHER_BLOCK_SIZE + offset, tweak); + cipher.encrypt(data, i - CIPHER_BLOCK_SIZE + offset, CIPHER_BLOCK_SIZE); + xorTweak(data, i - CIPHER_BLOCK_SIZE + offset, tweak); + } + } + + /** + * Decrypt the data. 
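Between consecutive cipher blocks, this XTS class advances the tweak by multiplying it by x in GF(2^128): a one-bit left shift in little-endian byte order, folding any carry back in with the feedback constant 0x87. A standalone sketch of that step with a known carry case (TweakDemo is a hypothetical class name, not part of H2):

```java
/**
 * Sketch of the XTS tweak update: multiply the 16-byte tweak by x in
 * GF(2^128), little-endian byte order, feedback constant 0x87.
 * TweakDemo is a hypothetical class name, not part of H2.
 */
class TweakDemo {
    static final int GF_128_FEEDBACK = 0x87;

    /** Shift the tweak left one bit across bytes; fold the carry into byte 0. */
    static void updateTweak(byte[] tweak) {
        byte ci = 0, co = 0;
        for (int i = 0; i < 16; i++) {
            co = (byte) ((tweak[i] >> 7) & 1);          // bit carried out
            tweak[i] = (byte) (((tweak[i] << 1) + ci) & 255);
            ci = co;                                     // carried into next byte
        }
        if (co != 0) {
            tweak[0] ^= GF_128_FEEDBACK;                 // reduce modulo the polynomial
        }
    }

    public static void main(String[] args) {
        byte[] t = new byte[16];
        t[15] = (byte) 0x80;      // only the very top bit set
        updateTweak(t);
        // the overflow folds back into byte 0 as the feedback constant
        System.out.println(t[0] & 0xff);  // 135 (0x87)
        System.out.println(t[15] & 0xff); // 0
    }
}
```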
+         *
+         * @param id the (sector) id
+         * @param len the number of bytes
+         * @param data the data
+         * @param offset the offset within the data
+         */
+        void decrypt(long id, int len, byte[] data, int offset) {
+            byte[] tweak = initTweak(id), tweakEnd = tweak;
+            int i = 0;
+            for (; i + CIPHER_BLOCK_SIZE <= len; i += CIPHER_BLOCK_SIZE) {
+                if (i > 0) {
+                    updateTweak(tweak);
+                    if (i + CIPHER_BLOCK_SIZE + CIPHER_BLOCK_SIZE > len &&
+                            i + CIPHER_BLOCK_SIZE < len) {
+                        tweakEnd = tweak.clone();
+                        updateTweak(tweak);
+                    }
+                }
+                xorTweak(data, i + offset, tweak);
+                cipher.decrypt(data, i + offset, CIPHER_BLOCK_SIZE);
+                xorTweak(data, i + offset, tweak);
+            }
+            if (i < len) {
+                swap(data, i, i - CIPHER_BLOCK_SIZE + offset, len - i + offset);
+                xorTweak(data, i - CIPHER_BLOCK_SIZE + offset, tweakEnd);
+                cipher.decrypt(data, i - CIPHER_BLOCK_SIZE + offset, CIPHER_BLOCK_SIZE);
+                xorTweak(data, i - CIPHER_BLOCK_SIZE + offset, tweakEnd);
+            }
+        }
+
+        private byte[] initTweak(long id) {
+            byte[] tweak = new byte[CIPHER_BLOCK_SIZE];
+            for (int j = 0; j < CIPHER_BLOCK_SIZE; j++, id >>>= 8) {
+                tweak[j] = (byte) (id & 0xff);
+            }
+            cipher.encrypt(tweak, 0, CIPHER_BLOCK_SIZE);
+            return tweak;
+        }
+
+        private static void xorTweak(byte[] data, int pos, byte[] tweak) {
+            for (int i = 0; i < CIPHER_BLOCK_SIZE; i++) {
+                data[pos + i] ^= tweak[i];
+            }
+        }
+
+        private static void updateTweak(byte[] tweak) {
+            byte ci = 0, co = 0;
+            for (int i = 0; i < CIPHER_BLOCK_SIZE; i++) {
+                co = (byte) ((tweak[i] >> 7) & 1);
+                tweak[i] = (byte) (((tweak[i] << 1) + ci) & 255);
+                ci = co;
+            }
+            if (co != 0) {
+                tweak[0] ^= GF_128_FEEDBACK;
+            }
+        }
+
+        private static void swap(byte[] data, int source, int target, int len) {
+            for (int i = 0; i < len; i++) {
+                byte temp = data[source + i];
+                data[source + i] = data[target + i];
+                data[target + i] = temp;
+            }
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathMem.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathMem.java
new file mode 100644
index 0000000000000..51ceee669e6ef
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathMem.java
@@ -0,0 +1,783 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.store.fs;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.channels.FileLock;
+import java.nio.channels.NonWritableChannelException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.h2.api.ErrorCode;
+import org.h2.compress.CompressLZF;
+import org.h2.message.DbException;
+import org.h2.util.MathUtils;
+import org.h2.util.New;
+
+/**
+ * This file system keeps files fully in memory. There is an option to compress
+ * file blocks to save memory.
+ */
+public class FilePathMem extends FilePath {
+
+    private static final TreeMap<String, FileMemData> MEMORY_FILES =
+            new TreeMap<>();
+    private static final FileMemData DIRECTORY = new FileMemData("", false);
+
+    @Override
+    public FilePathMem getPath(String path) {
+        FilePathMem p = new FilePathMem();
+        p.name = getCanonicalPath(path);
+        return p;
+    }
+
+    @Override
+    public long size() {
+        return getMemoryFile().length();
+    }
+
+    @Override
+    public void moveTo(FilePath newName, boolean atomicReplace) {
+        synchronized (MEMORY_FILES) {
+            if (!atomicReplace && !newName.name.equals(name) &&
+                    MEMORY_FILES.containsKey(newName.name)) {
+                throw DbException.get(ErrorCode.FILE_RENAME_FAILED_2, name, newName + " (exists)");
+            }
+            FileMemData f = getMemoryFile();
+            f.setName(newName.name);
+            MEMORY_FILES.remove(name);
+            MEMORY_FILES.put(newName.name, f);
+        }
+    }
+
+    @Override
+    public boolean createFile() {
+        synchronized (MEMORY_FILES) {
+            if (exists()) {
+                return false;
+            }
+            getMemoryFile();
+        }
+        return true;
+    }
+
+    @Override
+    public boolean exists() {
+        if (isRoot()) {
+            return true;
+        }
+        synchronized (MEMORY_FILES) {
+            return MEMORY_FILES.get(name) != null;
+        }
+    }
+
+    @Override
+    public void delete() {
+        if (isRoot()) {
+            return;
+        }
+        synchronized (MEMORY_FILES) {
+            FileMemData old = MEMORY_FILES.remove(name);
+            if (old != null) {
+                old.truncate(0);
+            }
+        }
+    }
+
+    @Override
+    public List<FilePath> newDirectoryStream() {
+        ArrayList<FilePath> list = New.arrayList();
+        synchronized (MEMORY_FILES) {
+            for (String n : MEMORY_FILES.tailMap(name).keySet()) {
+                if (n.startsWith(name)) {
+                    if (!n.equals(name) && n.indexOf('/', name.length() + 1) < 0) {
+                        list.add(getPath(n));
+                    }
+                } else {
+                    break;
+                }
+            }
+            return list;
+        }
+    }
+
+    @Override
+    public boolean setReadOnly() {
+        return getMemoryFile().setReadOnly();
+    }
+
+    @Override
+    public boolean canWrite() {
+        return getMemoryFile().canWrite();
+    }
+
+    @Override
+    public FilePathMem getParent() {
+        int idx = name.lastIndexOf('/');
+        return idx < 0 ? null : getPath(name.substring(0, idx));
+    }
+
+    @Override
+    public boolean isDirectory() {
+        if (isRoot()) {
+            return true;
+        }
+        synchronized (MEMORY_FILES) {
+            FileMemData d = MEMORY_FILES.get(name);
+            return d == DIRECTORY;
+        }
+    }
+
+    @Override
+    public boolean isAbsolute() {
+        // TODO relative files are not supported
+        return true;
+    }
+
+    @Override
+    public FilePathMem toRealPath() {
+        return this;
+    }
+
+    @Override
+    public long lastModified() {
+        return getMemoryFile().getLastModified();
+    }
+
+    @Override
+    public void createDirectory() {
+        if (exists()) {
+            throw DbException.get(ErrorCode.FILE_CREATION_FAILED_1,
+                    name + " (a file with this name already exists)");
+        }
+        synchronized (MEMORY_FILES) {
+            MEMORY_FILES.put(name, DIRECTORY);
+        }
+    }
+
+    @Override
+    public OutputStream newOutputStream(boolean append) throws IOException {
+        FileMemData obj = getMemoryFile();
+        FileMem m = new FileMem(obj, false);
+        return new FileChannelOutputStream(m, append);
+    }
+
+    @Override
+    public InputStream newInputStream() {
+        FileMemData obj = getMemoryFile();
+        FileMem m = new FileMem(obj, true);
+        return new FileChannelInputStream(m, true);
+    }
+
+    @Override
+    public FileChannel open(String mode) {
+        FileMemData obj = getMemoryFile();
+        return new FileMem(obj, "r".equals(mode));
+    }
+
+    private FileMemData getMemoryFile() {
+        synchronized (MEMORY_FILES) {
+            FileMemData m = MEMORY_FILES.get(name);
+            if (m == DIRECTORY) {
+                throw DbException.get(ErrorCode.FILE_CREATION_FAILED_1,
+                        name + " (a directory with this name already exists)");
+            }
+            if (m == null) {
+                m = new FileMemData(name, compressed());
+                MEMORY_FILES.put(name, m);
+            }
+            return m;
+        }
+    }
+
+    private boolean isRoot() {
+        return name.equals(getScheme() + ":");
+    }
+
+    /**
+     * Get the canonical path for this file name.
+     *
+     * @param fileName the file name
+     * @return the canonical path
+     */
+    protected static String getCanonicalPath(String fileName) {
+        fileName = fileName.replace('\\', '/');
+        int idx = fileName.indexOf(':') + 1;
+        if (fileName.length() > idx && fileName.charAt(idx) != '/') {
+            fileName = fileName.substring(0, idx) + "/" + fileName.substring(idx);
+        }
+        return fileName;
+    }
+
+    @Override
+    public String getScheme() {
+        return "memFS";
+    }
+
+    /**
+     * Whether the file should be compressed.
+     *
+     * @return if it should be compressed.
+     */
+    boolean compressed() {
+        return false;
+    }
+
+}
+
+/**
+ * A memory file system that compresses blocks to conserve memory.
+ */
+class FilePathMemLZF extends FilePathMem {
+
+    @Override
+    public FilePathMem getPath(String path) {
+        FilePathMemLZF p = new FilePathMemLZF();
+        p.name = getCanonicalPath(path);
+        return p;
+    }
+
+    @Override
+    boolean compressed() {
+        return true;
+    }
+
+    @Override
+    public String getScheme() {
+        return "memLZF";
+    }
+
+}
+
+/**
+ * This class represents an in-memory file.
+ */
+class FileMem extends FileBase {
+
+    /**
+     * The file data.
+     */
+    final FileMemData data;
+
+    private final boolean readOnly;
+    private long pos;
+
+    FileMem(FileMemData data, boolean readOnly) {
+        this.data = data;
+        this.readOnly = readOnly;
+    }
+
+    @Override
+    public long size() {
+        return data.length();
+    }
+
+    @Override
+    public FileChannel truncate(long newLength) throws IOException {
+        // compatibility with JDK FileChannel#truncate
+        if (readOnly) {
+            throw new NonWritableChannelException();
+        }
+        if (newLength < size()) {
+            data.touch(readOnly);
+            pos = Math.min(pos, newLength);
+            data.truncate(newLength);
+        }
+        return this;
+    }
+
+    @Override
+    public FileChannel position(long newPos) {
+        this.pos = newPos;
+        return this;
+    }
+
+    @Override
+    public int write(ByteBuffer src, long position) throws IOException {
+        int len = src.remaining();
+        if (len == 0) {
+            return 0;
+        }
+        data.touch(readOnly);
+        data.readWrite(position, src.array(),
+                src.arrayOffset() + src.position(), len, true);
+        src.position(src.position() + len);
+        return len;
+    }
+
+    @Override
+    public int write(ByteBuffer src) throws IOException {
+        int len = src.remaining();
+        if (len == 0) {
+            return 0;
+        }
+        data.touch(readOnly);
+        pos = data.readWrite(pos, src.array(),
+                src.arrayOffset() + src.position(), len, true);
+        src.position(src.position() + len);
+        return len;
+    }
+
+    @Override
+    public int read(ByteBuffer dst, long position) throws IOException {
+        int len = dst.remaining();
+        if (len == 0) {
+            return 0;
+        }
+        long newPos = data.readWrite(position, dst.array(),
+                dst.arrayOffset() + dst.position(), len, false);
+        len = (int) (newPos - position);
+        if (len <= 0) {
+            return -1;
+        }
+        dst.position(dst.position() + len);
+        return len;
+    }
+
+    @Override
+    public int read(ByteBuffer dst) throws IOException {
+        int len = dst.remaining();
+        if (len == 0) {
+            return 0;
+        }
+        long newPos = data.readWrite(pos, dst.array(),
+                dst.arrayOffset() + dst.position(), len, false);
+        len = (int) (newPos - pos);
+        if (len <= 0) {
+            return -1;
+        }
+        dst.position(dst.position() + len);
+        pos = newPos;
+        return len;
+    }
+
+    @Override
+    public long position() {
+        return pos;
+    }
+
+    @Override
+    public void implCloseChannel() throws IOException {
+        pos = 0;
+    }
+
+    @Override
+    public void force(boolean metaData) throws IOException {
+        // do nothing
+    }
+
+    @Override
+    public synchronized FileLock tryLock(long position, long size,
+            boolean shared) throws IOException {
+        if (shared) {
+            if (!data.lockShared()) {
+                return null;
+            }
+        } else {
+            if (!data.lockExclusive()) {
+                return null;
+            }
+        }
+
+        return new FileLock(new FakeFileChannel(), position, size, shared) {
+
+            @Override
+            public boolean isValid() {
+                return true;
+            }
+
+            @Override
+            public void release() throws IOException {
+                data.unlock();
+            }
+        };
+    }
+
+    @Override
+    public String toString() {
+        return data.getName();
+    }
+
+}
+
+/**
+ * This class contains the data of an in-memory random access file.
+ * Data compression using the LZF algorithm is supported as well.
+ */
+class FileMemData {
+
+    private static final int CACHE_SIZE = 8;
+    private static final int BLOCK_SIZE_SHIFT = 10;
+    private static final int BLOCK_SIZE = 1 << BLOCK_SIZE_SHIFT;
+    private static final int BLOCK_SIZE_MASK = BLOCK_SIZE - 1;
+    private static final CompressLZF LZF = new CompressLZF();
+    private static final byte[] BUFFER = new byte[BLOCK_SIZE * 2];
+    private static final byte[] COMPRESSED_EMPTY_BLOCK;
+
+    private static final Cache<CompressItem, CompressItem> COMPRESS_LATER =
+            new Cache<>(CACHE_SIZE);
+
+    private String name;
+    private final int id;
+    private final boolean compress;
+    private long length;
+    private AtomicReference<byte[]>[] data;
+    private long lastModified;
+    private boolean isReadOnly;
+    private boolean isLockedExclusive;
+    private int sharedLockCount;
+
+    static {
+        byte[] n = new byte[BLOCK_SIZE];
+        int len = LZF.compress(n, BLOCK_SIZE, BUFFER, 0);
+        COMPRESSED_EMPTY_BLOCK = Arrays.copyOf(BUFFER, len);
+    }
+
+    @SuppressWarnings("unchecked")
+    FileMemData(String name, boolean compress) {
+        this.name = name;
+        this.id = name.hashCode();
+        this.compress = compress;
+        this.data = new AtomicReference[0];
+        lastModified = System.currentTimeMillis();
+    }
+
+    /**
+     * Get the page if it exists.
+     *
+     * @param page the page id
+     * @return the byte array, or null
+     */
+    byte[] getPage(int page) {
+        AtomicReference<byte[]>[] b = data;
+        if (page >= b.length) {
+            return null;
+        }
+        return b[page].get();
+    }
+
+    /**
+     * Set the page data.
+     *
+     * @param page the page id
+     * @param oldData the old data
+     * @param newData the new data
+     * @param force whether the data should be overwritten even if the old data
+     *            doesn't match
+     */
+    void setPage(int page, byte[] oldData, byte[] newData, boolean force) {
+        AtomicReference<byte[]>[] b = data;
+        if (page >= b.length) {
+            return;
+        }
+        if (force) {
+            b[page].set(newData);
+        } else {
+            b[page].compareAndSet(oldData, newData);
+        }
+    }
+
+    int getId() {
+        return id;
+    }
+
+    /**
+     * Lock the file in exclusive mode if possible.
+     *
+     * @return if locking was successful
+     */
+    synchronized boolean lockExclusive() {
+        if (sharedLockCount > 0 || isLockedExclusive) {
+            return false;
+        }
+        isLockedExclusive = true;
+        return true;
+    }
+
+    /**
+     * Lock the file in shared mode if possible.
+     *
+     * @return if locking was successful
+     */
+    synchronized boolean lockShared() {
+        if (isLockedExclusive) {
+            return false;
+        }
+        sharedLockCount++;
+        return true;
+    }
+
+    /**
+     * Unlock the file.
+     */
+    synchronized void unlock() {
+        if (isLockedExclusive) {
+            isLockedExclusive = false;
+        } else {
+            sharedLockCount = Math.max(0, sharedLockCount - 1);
+        }
+    }
+
+    /**
+     * This small cache compresses the data if an element leaves the cache.
+     */
+    static class Cache<K, V> extends LinkedHashMap<K, V> {
+
+        private static final long serialVersionUID = 1L;
+        private final int size;
+
+        Cache(int size) {
+            super(size, (float) 0.75, true);
+            this.size = size;
+        }
+
+        @Override
+        public synchronized V put(K key, V value) {
+            return super.put(key, value);
+        }
+
+        @Override
+        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
+            if (size() < size) {
+                return false;
+            }
+            CompressItem c = (CompressItem) eldest.getKey();
+            c.file.compress(c.page);
+            return true;
+        }
+    }
+
+    /**
+     * Points to a block of bytes that needs to be compressed.
+     */
+    static class CompressItem {
+
+        /**
+         * The file.
+         */
+        FileMemData file;
+
+        /**
+         * The page to compress.
+         */
+        int page;
+
+        @Override
+        public int hashCode() {
+            return page ^ file.getId();
+        }
+
+        @Override
+        public boolean equals(Object o) {
+            if (o instanceof CompressItem) {
+                CompressItem c = (CompressItem) o;
+                return c.page == page && c.file == file;
+            }
+            return false;
+        }
+
+    }
+
+    private void compressLater(int page) {
+        CompressItem c = new CompressItem();
+        c.file = this;
+        c.page = page;
+        synchronized (LZF) {
+            COMPRESS_LATER.put(c, c);
+        }
+    }
+
+    private byte[] expand(int page) {
+        byte[] d = getPage(page);
+        if (d.length == BLOCK_SIZE) {
+            return d;
+        }
+        byte[] out = new byte[BLOCK_SIZE];
+        if (d != COMPRESSED_EMPTY_BLOCK) {
+            synchronized (LZF) {
+                LZF.expand(d, 0, d.length, out, 0, BLOCK_SIZE);
+            }
+        }
+        setPage(page, d, out, false);
+        return out;
+    }
+
+    /**
+     * Compress the data in a byte array.
+     *
+     * @param page which page to compress
+     */
+    void compress(int page) {
+        byte[] old = getPage(page);
+        if (old == null || old.length != BLOCK_SIZE) {
+            // not yet initialized or already compressed
+            return;
+        }
+        synchronized (LZF) {
+            int len = LZF.compress(old, BLOCK_SIZE, BUFFER, 0);
+            if (len <= BLOCK_SIZE) {
+                byte[] d = Arrays.copyOf(BUFFER, len);
+                // maybe data was changed in the meantime
+                setPage(page, old, d, false);
+            }
+        }
+    }
+
+    /**
+     * Update the last modified time.
+     *
+     * @param openReadOnly if the file was opened in read-only mode
+     */
+    void touch(boolean openReadOnly) throws IOException {
+        if (isReadOnly || openReadOnly) {
+            throw new IOException("Read only");
+        }
+        lastModified = System.currentTimeMillis();
+    }
+
+    /**
+     * Get the file length.
+     *
+     * @return the length
+     */
+    long length() {
+        return length;
+    }
+
+    /**
+     * Truncate the file.
+     *
+     * @param newLength the new length
+     */
+    void truncate(long newLength) {
+        changeLength(newLength);
+        long end = MathUtils.roundUpLong(newLength, BLOCK_SIZE);
+        if (end != newLength) {
+            int lastPage = (int) (newLength >>> BLOCK_SIZE_SHIFT);
+            byte[] d = expand(lastPage);
+            byte[] d2 = Arrays.copyOf(d, d.length);
+            for (int i = (int) (newLength & BLOCK_SIZE_MASK); i < BLOCK_SIZE; i++) {
+                d2[i] = 0;
+            }
+            setPage(lastPage, d, d2, true);
+            if (compress) {
+                compressLater(lastPage);
+            }
+        }
+    }
+
+    private void changeLength(long len) {
+        length = len;
+        len = MathUtils.roundUpLong(len, BLOCK_SIZE);
+        int blocks = (int) (len >>> BLOCK_SIZE_SHIFT);
+        if (blocks != data.length) {
+            AtomicReference<byte[]>[] n = Arrays.copyOf(data, blocks);
+            for (int i = data.length; i < blocks; i++) {
+                n[i] = new AtomicReference<>(COMPRESSED_EMPTY_BLOCK);
+            }
+            data = n;
+        }
+    }
+
+    /**
+     * Read or write.
+     *
+     * @param pos the position
+     * @param b the byte array
+     * @param off the offset within the byte array
+     * @param len the number of bytes
+     * @param write true for writing
+     * @return the new position
+     */
+    long readWrite(long pos, byte[] b, int off, int len, boolean write) {
+        long end = pos + len;
+        if (end > length) {
+            if (write) {
+                changeLength(end);
+            } else {
+                len = (int) (length - pos);
+            }
+        }
+        while (len > 0) {
+            int l = (int) Math.min(len, BLOCK_SIZE - (pos & BLOCK_SIZE_MASK));
+            int page = (int) (pos >>> BLOCK_SIZE_SHIFT);
+            byte[] block = expand(page);
+            int blockOffset = (int) (pos & BLOCK_SIZE_MASK);
+            if (write) {
+                byte[] p2 = Arrays.copyOf(block, block.length);
+                System.arraycopy(b, off, p2, blockOffset, l);
+                setPage(page, block, p2, true);
+            } else {
+                System.arraycopy(block, blockOffset, b, off, l);
+            }
+            if (compress) {
+                compressLater(page);
+            }
+            off += l;
+            pos += l;
+            len -= l;
+        }
+        return pos;
+    }
+
+    /**
+     * Set the file name.
+     *
+     * @param name the name
+     */
+    void setName(String name) {
+        this.name = name;
+    }
+
+    /**
+     * Get the file name.
+     *
+     * @return the name
+     */
+    String getName() {
+        return name;
+    }
+
+    /**
+     * Get the last modified time.
+     *
+     * @return the time
+     */
+    long getLastModified() {
+        return lastModified;
+    }
+
+    /**
+     * Check whether writing is allowed.
+     *
+     * @return true if it is
+     */
+    boolean canWrite() {
+        return !isReadOnly;
+    }
+
+    /**
+     * Set the read-only flag.
+     *
+     * @return true
+     */
+    boolean setReadOnly() {
+        isReadOnly = true;
+        return true;
+    }
+
+}
+
+
diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathNio.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathNio.java
new file mode 100644
index 0000000000000..fb02f57a4e1e4
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathNio.java
@@ -0,0 +1,129 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.store.fs;
+
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.channels.FileLock;
+import java.nio.channels.NonWritableChannelException;
+
+/**
+ * This file system stores files on disk and uses java.nio to access the files.
+ * This class uses FileChannel.
+ */
+public class FilePathNio extends FilePathWrapper {
+
+    @Override
+    public FileChannel open(String mode) throws IOException {
+        return new FileNio(name.substring(getScheme().length() + 1), mode);
+    }
+
+    @Override
+    public String getScheme() {
+        return "nio";
+    }
+
+}
+
+/**
+ * File which uses NIO FileChannel.
+ */
+class FileNio extends FileBase {
+
+    private final String name;
+    private final FileChannel channel;
+
+    FileNio(String fileName, String mode) throws IOException {
+        this.name = fileName;
+        channel = new RandomAccessFile(fileName, mode).getChannel();
+    }
+
+    @Override
+    public void implCloseChannel() throws IOException {
+        channel.close();
+    }
+
+    @Override
+    public long position() throws IOException {
+        return channel.position();
+    }
+
+    @Override
+    public long size() throws IOException {
+        return channel.size();
+    }
+
+    @Override
+    public int read(ByteBuffer dst) throws IOException {
+        return channel.read(dst);
+    }
+
+    @Override
+    public FileChannel position(long pos) throws IOException {
+        channel.position(pos);
+        return this;
+    }
+
+    @Override
+    public int read(ByteBuffer dst, long position) throws IOException {
+        return channel.read(dst, position);
+    }
+
+    @Override
+    public int write(ByteBuffer src, long position) throws IOException {
+        return channel.write(src, position);
+    }
+
+    @Override
+    public FileChannel truncate(long newLength) throws IOException {
+        long size = channel.size();
+        if (newLength < size) {
+            long pos = channel.position();
+            channel.truncate(newLength);
+            long newPos = channel.position();
+            if (pos < newLength) {
+                // position should stay
+                // in theory, this should not be needed
+                if (newPos != pos) {
+                    channel.position(pos);
+                }
+            } else if (newPos > newLength) {
+                // looks like a bug in this FileChannel implementation, as
+                // the documentation says the position needs to be changed
+                channel.position(newLength);
+            }
+        }
+        return this;
+    }
+
+    @Override
+    public void force(boolean metaData) throws IOException {
+        channel.force(metaData);
+    }
+
+    @Override
+    public int write(ByteBuffer src) throws IOException {
+        try {
+            return channel.write(src);
+        } catch (NonWritableChannelException e) {
+            throw new IOException("read only");
+        }
+    }
+
+    @Override
+    public synchronized FileLock tryLock(long position, long size,
+            boolean shared) throws IOException {
+        return channel.tryLock(position, size, shared);
+    }
+
+    @Override
+    public String toString() {
+        return "nio:" + name;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathNioMapped.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathNioMapped.java
new file mode 100644
index 0000000000000..6aa7c0774051e
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathNioMapped.java
@@ -0,0 +1,261 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.store.fs;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.lang.ref.WeakReference;
+import java.lang.reflect.Method;
+import java.nio.BufferUnderflowException;
+import java.nio.ByteBuffer;
+import java.nio.MappedByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.channels.FileChannel.MapMode;
+import java.nio.channels.FileLock;
+import java.nio.channels.NonWritableChannelException;
+import java.util.concurrent.TimeUnit;
+
+import org.h2.engine.SysProperties;
+
+/**
+ * This file system stores files on disk and uses java.nio to access the files.
+ * This class uses memory mapped files.
+ */
+public class FilePathNioMapped extends FilePathNio {
+
+    @Override
+    public FileChannel open(String mode) throws IOException {
+        return new FileNioMapped(name.substring(getScheme().length() + 1), mode);
+    }
+
+    @Override
+    public String getScheme() {
+        return "nioMapped";
+    }
+
+}
+
+/**
+ * Uses memory mapped files.
+ * The file size is limited to 2 GB.
+ */
+class FileNioMapped extends FileBase {
+
+    private static final long GC_TIMEOUT_MS = 10_000;
+    private final String name;
+    private final MapMode mode;
+    private RandomAccessFile file;
+    private MappedByteBuffer mapped;
+    private long fileLength;
+
+    /**
+     * The position within the file. Can't use the position of the mapped buffer
+     * because it doesn't support seeking past the end of the file.
+     */
+    private int pos;
+
+    FileNioMapped(String fileName, String mode) throws IOException {
+        if ("r".equals(mode)) {
+            this.mode = MapMode.READ_ONLY;
+        } else {
+            this.mode = MapMode.READ_WRITE;
+        }
+        this.name = fileName;
+        file = new RandomAccessFile(fileName, mode);
+        reMap();
+    }
+
+    private void unMap() throws IOException {
+        if (mapped == null) {
+            return;
+        }
+        // first write all data
+        mapped.force();
+
+        // need to dispose old direct buffer, see bug
+        // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038
+
+        boolean useSystemGc = true;
+        if (SysProperties.NIO_CLEANER_HACK) {
+            try {
+                Method cleanerMethod = mapped.getClass().getMethod("cleaner");
+                cleanerMethod.setAccessible(true);
+                Object cleaner = cleanerMethod.invoke(mapped);
+                if (cleaner != null) {
+                    Method clearMethod = cleaner.getClass().getMethod("clean");
+                    clearMethod.invoke(cleaner);
+                }
+                useSystemGc = false;
+            } catch (Throwable e) {
+                // useSystemGc is already true
+            } finally {
+                mapped = null;
+            }
+        }
+        if (useSystemGc) {
+            WeakReference<MappedByteBuffer> bufferWeakRef =
+                    new WeakReference<>(mapped);
+            mapped = null;
+            long start = System.nanoTime();
+            while (bufferWeakRef.get() != null) {
+                if (System.nanoTime() - start >
+                        TimeUnit.MILLISECONDS.toNanos(GC_TIMEOUT_MS)) {
+                    throw new IOException("Timeout (" + GC_TIMEOUT_MS +
+                            " ms) reached while trying to GC mapped buffer");
+                }
+                System.gc();
+                Thread.yield();
+            }
+        }
+    }
+
+    /**
+     * Re-map byte buffer into memory, called when file size has changed or file
+     * was created.
+     */
+    private void reMap() throws IOException {
+        int oldPos = 0;
+        if (mapped != null) {
+            oldPos = pos;
+            unMap();
+        }
+        fileLength = file.length();
+        checkFileSizeLimit(fileLength);
+        // maps new MappedByteBuffer; the old one is disposed during GC
+        mapped = file.getChannel().map(mode, 0, fileLength);
+        int limit = mapped.limit();
+        int capacity = mapped.capacity();
+        if (limit < fileLength || capacity < fileLength) {
+            throw new IOException("Unable to map: length=" + limit +
+                    " capacity=" + capacity + " length=" + fileLength);
+        }
+        if (SysProperties.NIO_LOAD_MAPPED) {
+            mapped.load();
+        }
+        this.pos = Math.min(oldPos, (int) fileLength);
+    }
+
+    private static void checkFileSizeLimit(long length) throws IOException {
+        if (length > Integer.MAX_VALUE) {
+            throw new IOException(
+                    "File over 2GB is not supported yet when using this file system");
+        }
+    }
+
+    @Override
+    public void implCloseChannel() throws IOException {
+        if (file != null) {
+            unMap();
+            file.close();
+            file = null;
+        }
+    }
+
+    @Override
+    public long position() {
+        return pos;
+    }
+
+    @Override
+    public String toString() {
+        return "nioMapped:" + name;
+    }
+
+    @Override
+    public synchronized long size() throws IOException {
+        return fileLength;
+    }
+
+    @Override
+    public synchronized int read(ByteBuffer dst) throws IOException {
+        try {
+            int len = dst.remaining();
+            if (len == 0) {
+                return 0;
+            }
+            len = (int) Math.min(len, fileLength - pos);
+            if (len <= 0) {
+                return -1;
+            }
+            mapped.position(pos);
+            mapped.get(dst.array(), dst.arrayOffset() + dst.position(), len);
+            dst.position(dst.position() + len);
+            pos += len;
+            return len;
+        } catch (IllegalArgumentException e) {
+            EOFException e2 = new EOFException("EOF");
+            e2.initCause(e);
+            throw e2;
+        } catch (BufferUnderflowException e) {
+            EOFException e2 = new EOFException("EOF");
+            e2.initCause(e);
+            throw e2;
+        }
+    }
+
+    @Override
+    public FileChannel position(long pos) throws IOException {
+        checkFileSizeLimit(pos);
+        this.pos = (int) pos;
+        return this;
+    }
+
+    @Override
+    public synchronized FileChannel truncate(long newLength) throws IOException {
+        // compatibility with JDK FileChannel#truncate
+        if (mode == MapMode.READ_ONLY) {
+            throw new NonWritableChannelException();
+        }
+        if (newLength < size()) {
+            setFileLength(newLength);
+        }
+        return this;
+    }
+
+    public synchronized void setFileLength(long newLength) throws IOException {
+        checkFileSizeLimit(newLength);
+        int oldPos = pos;
+        unMap();
+        for (int i = 0;; i++) {
+            try {
+                file.setLength(newLength);
+                break;
+            } catch (IOException e) {
+                if (i > 16 || !e.toString().contains("user-mapped section open")) {
+                    throw e;
+                }
+            }
+            System.gc();
+        }
+        reMap();
+        pos = (int) Math.min(newLength, oldPos);
+    }
+
+    @Override
+    public void force(boolean metaData) throws IOException {
+        mapped.force();
+        file.getFD().sync();
+    }
+
+    @Override
+    public synchronized int write(ByteBuffer src) throws IOException {
+        int len = src.remaining();
+        // check if need to expand file
+        if (mapped.capacity() < pos + len) {
+            setFileLength(pos + len);
+        }
+        mapped.position(pos);
+        mapped.put(src);
+        pos += len;
+        return len;
+    }
+
+    @Override
+    public synchronized FileLock tryLock(long position, long size,
+            boolean shared) throws IOException {
+        return file.getChannel().tryLock(position, size, shared);
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathNioMem.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathNioMem.java
new file mode 100644
index 0000000000000..dcba6100d6cfc
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathNioMem.java
@@ -0,0 +1,798 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.store.fs;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.channels.FileLock;
+import java.nio.channels.NonWritableChannelException;
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import org.h2.api.ErrorCode;
+import org.h2.compress.CompressLZF;
+import org.h2.message.DbException;
+import org.h2.util.MathUtils;
+import org.h2.util.New;
+
+/**
+ * This file system keeps files fully in memory. There is an option to compress
+ * file blocks to save memory.
+ */
+public class FilePathNioMem extends FilePath {
+
+    private static final TreeMap<String, FileNioMemData> MEMORY_FILES =
+            new TreeMap<>();
+
+    /**
+     * The percentage of uncompressed (cached) entries.
+     */
+    float compressLaterCachePercent = 1;
+
+    @Override
+    public FilePathNioMem getPath(String path) {
+        FilePathNioMem p = new FilePathNioMem();
+        p.name = getCanonicalPath(path);
+        return p;
+    }
+
+    @Override
+    public long size() {
+        return getMemoryFile().length();
+    }
+
+    @Override
+    public void moveTo(FilePath newName, boolean atomicReplace) {
+        synchronized (MEMORY_FILES) {
+            if (!atomicReplace && !name.equals(newName.name) &&
+                    MEMORY_FILES.containsKey(newName.name)) {
+                throw DbException.get(ErrorCode.FILE_RENAME_FAILED_2, name, newName + " (exists)");
+            }
+            FileNioMemData f = getMemoryFile();
+            f.setName(newName.name);
+            MEMORY_FILES.remove(name);
+            MEMORY_FILES.put(newName.name, f);
+        }
+    }
+
+    @Override
+    public boolean createFile() {
+        synchronized (MEMORY_FILES) {
+            if (exists()) {
+                return false;
+            }
+            getMemoryFile();
+        }
+        return true;
+    }
+
+    @Override
+    public boolean exists() {
+        if (isRoot()) {
+            return true;
+        }
+        synchronized (MEMORY_FILES) {
+            return MEMORY_FILES.get(name) != null;
+        }
+    }
+
+    @Override
+    public void delete() {
+        if (isRoot()) {
+            return;
+        }
+        synchronized (MEMORY_FILES) {
+            MEMORY_FILES.remove(name);
+        }
+    }
+
+    @Override
+    public List<FilePath> newDirectoryStream() {
+        ArrayList<FilePath> list = New.arrayList();
+        synchronized (MEMORY_FILES) {
+            for (String n : MEMORY_FILES.tailMap(name).keySet()) {
+                if (n.startsWith(name)) {
+                    list.add(getPath(n));
+                } else {
+                    break;
+                }
+            }
+            return list;
+        }
+    }
+
+    @Override
+    public boolean setReadOnly() {
+        return getMemoryFile().setReadOnly();
+    }
+
+    @Override
+    public boolean canWrite() {
+        return getMemoryFile().canWrite();
+    }
+
+    @Override
+    public FilePathNioMem getParent() {
+        int idx = name.lastIndexOf('/');
+        return idx < 0 ? null : getPath(name.substring(0, idx));
+    }
+
+    @Override
+    public boolean isDirectory() {
+        if (isRoot()) {
+            return true;
+        }
+        // TODO in memory file system currently
+        // does not really support directories
+        synchronized (MEMORY_FILES) {
+            return MEMORY_FILES.get(name) == null;
+        }
+    }
+
+    @Override
+    public boolean isAbsolute() {
+        // TODO relative files are not supported
+        return true;
+    }
+
+    @Override
+    public FilePathNioMem toRealPath() {
+        return this;
+    }
+
+    @Override
+    public long lastModified() {
+        return getMemoryFile().getLastModified();
+    }
+
+    @Override
+    public void createDirectory() {
+        if (exists() && isDirectory()) {
+            throw DbException.get(ErrorCode.FILE_CREATION_FAILED_1,
+                    name + " (a file with this name already exists)");
+        }
+        // TODO directories are not really supported
+    }
+
+    @Override
+    public OutputStream newOutputStream(boolean append) throws IOException {
+        FileNioMemData obj = getMemoryFile();
+        FileNioMem m = new FileNioMem(obj, false);
+        return new FileChannelOutputStream(m, append);
+    }
+
+    @Override
+    public InputStream newInputStream() {
+        FileNioMemData obj = getMemoryFile();
+        FileNioMem m = new FileNioMem(obj, true);
+        return new FileChannelInputStream(m, true);
+    }
+
+    @Override
+    public FileChannel open(String mode) {
+        FileNioMemData obj = getMemoryFile();
+        return new FileNioMem(obj, "r".equals(mode));
+    }
+
+    private FileNioMemData getMemoryFile() {
+        synchronized (MEMORY_FILES) {
+            FileNioMemData m = MEMORY_FILES.get(name);
+            if (m == null) {
+                m = new FileNioMemData(name, compressed(), compressLaterCachePercent);
+                MEMORY_FILES.put(name, m);
+            }
+            return m;
+        }
+    }
+
+    protected boolean isRoot() {
+        return name.equals(getScheme() + ":");
+    }
+
+    /**
+     * Get the canonical path of a file (with backslashes replaced with forward
+     * slashes).
+     *
+     * @param fileName the file name
+     * @return the canonical path
+     */
+    protected static String getCanonicalPath(String fileName) {
+        fileName = fileName.replace('\\', '/');
+        int idx = fileName.lastIndexOf(':') + 1;
+        if (fileName.length() > idx && fileName.charAt(idx) != '/') {
+            fileName = fileName.substring(0, idx) + "/" + fileName.substring(idx);
+        }
+        return fileName;
+    }
+
+    @Override
+    public String getScheme() {
+        return "nioMemFS";
+    }
+
+    /**
+     * Whether the file should be compressed.
+     *
+     * @return true if it should be compressed.
+     */
+    boolean compressed() {
+        return false;
+    }
+
+}
+
+/**
+ * A memory file system that compresses blocks to conserve memory.
+ */
+class FilePathNioMemLZF extends FilePathNioMem {
+
+    @Override
+    boolean compressed() {
+        return true;
+    }
+
+    @Override
+    public FilePathNioMem getPath(String path) {
+        if (!path.startsWith(getScheme())) {
+            throw new IllegalArgumentException(path +
+                    " doesn't start with " + getScheme());
+        }
+        int idx1 = path.indexOf(':');
+        int idx2 = path.lastIndexOf(':');
+        final FilePathNioMemLZF p = new FilePathNioMemLZF();
+        if (idx1 != -1 && idx1 != idx2) {
+            p.compressLaterCachePercent = Float.parseFloat(path.substring(idx1 + 1, idx2));
+        }
+        p.name = getCanonicalPath(path);
+        return p;
+    }
+
+    @Override
+    protected boolean isRoot() {
+        return name.lastIndexOf(':') == name.length() - 1;
+    }
+
+    @Override
+    public String getScheme() {
+        return "nioMemLZF";
+    }
+
+}
+
+/**
+ * This class represents an in-memory file.
+ */
+class FileNioMem extends FileBase {
+
+    /**
+     * The file data.
+ */ + final FileNioMemData data; + + private final boolean readOnly; + private long pos; + + FileNioMem(FileNioMemData data, boolean readOnly) { + this.data = data; + this.readOnly = readOnly; + } + + @Override + public long size() { + return data.length(); + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + // compatibility with JDK FileChannel#truncate + if (readOnly) { + throw new NonWritableChannelException(); + } + if (newLength < size()) { + data.touch(readOnly); + pos = Math.min(pos, newLength); + data.truncate(newLength); + } + return this; + } + + @Override + public FileChannel position(long newPos) { + this.pos = (int) newPos; + return this; + } + + @Override + public int write(ByteBuffer src) throws IOException { + int len = src.remaining(); + if (len == 0) { + return 0; + } + data.touch(readOnly); + // offset is 0 because we start writing from src.position() + pos = data.readWrite(pos, src, 0, len, true); + src.position(src.position() + len); + return len; + } + + @Override + public int read(ByteBuffer dst) throws IOException { + int len = dst.remaining(); + if (len == 0) { + return 0; + } + long newPos = data.readWrite(pos, dst, dst.position(), len, false); + len = (int) (newPos - pos); + if (len <= 0) { + return -1; + } + dst.position(dst.position() + len); + pos = newPos; + return len; + } + + @Override + public int read(ByteBuffer dst, long position) throws IOException { + int len = dst.remaining(); + if (len == 0) { + return 0; + } + long newPos; + newPos = data.readWrite(position, dst, dst.position(), len, false); + len = (int) (newPos - position); + if (len <= 0) { + return -1; + } + dst.position(dst.position() + len); + return len; + } + + @Override + public long position() { + return pos; + } + + @Override + public void implCloseChannel() throws IOException { + pos = 0; + } + + @Override + public void force(boolean metaData) throws IOException { + // do nothing + } + + @Override + public synchronized FileLock 
tryLock(long position, long size, + boolean shared) throws IOException { + if (shared) { + if (!data.lockShared()) { + return null; + } + } else { + if (!data.lockExclusive()) { + return null; + } + } + + return new FileLock(new FakeFileChannel(), position, size, shared) { + + @Override + public boolean isValid() { + return true; + } + + @Override + public void release() throws IOException { + data.unlock(); + } + }; + } + + @Override + public String toString() { + return data.getName(); + } + +} + +/** + * This class contains the data of an in-memory random access file. + * Data compression using the LZF algorithm is supported as well. + */ +class FileNioMemData { + + private static final int CACHE_MIN_SIZE = 8; + private static final int BLOCK_SIZE_SHIFT = 16; + + private static final int BLOCK_SIZE = 1 << BLOCK_SIZE_SHIFT; + private static final int BLOCK_SIZE_MASK = BLOCK_SIZE - 1; + private static final ByteBuffer COMPRESSED_EMPTY_BLOCK; + + private static final ThreadLocal LZF_THREAD_LOCAL = + new ThreadLocal() { + @Override + protected CompressLZF initialValue() { + return new CompressLZF(); + } + }; + /** the output buffer when compressing */ + private static final ThreadLocal COMPRESS_OUT_BUF_THREAD_LOCAL = + new ThreadLocal() { + @Override + protected byte[] initialValue() { + return new byte[BLOCK_SIZE * 2]; + } + }; + + /** + * The hash code of the name. 
+ */ + final int nameHashCode; + + private final CompressLaterCache compressLaterCache = + new CompressLaterCache<>(CACHE_MIN_SIZE); + + private String name; + private final boolean compress; + private final float compressLaterCachePercent; + private long length; + private AtomicReference[] buffers; + private long lastModified; + private boolean isReadOnly; + private boolean isLockedExclusive; + private int sharedLockCount; + private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(); + + static { + final byte[] n = new byte[BLOCK_SIZE]; + final byte[] output = new byte[BLOCK_SIZE * 2]; + int len = new CompressLZF().compress(n, BLOCK_SIZE, output, 0); + COMPRESSED_EMPTY_BLOCK = ByteBuffer.allocateDirect(len); + COMPRESSED_EMPTY_BLOCK.put(output, 0, len); + } + + @SuppressWarnings("unchecked") + FileNioMemData(String name, boolean compress, float compressLaterCachePercent) { + this.name = name; + this.nameHashCode = name.hashCode(); + this.compress = compress; + this.compressLaterCachePercent = compressLaterCachePercent; + buffers = new AtomicReference[0]; + lastModified = System.currentTimeMillis(); + } + + /** + * Lock the file in exclusive mode if possible. + * + * @return if locking was successful + */ + synchronized boolean lockExclusive() { + if (sharedLockCount > 0 || isLockedExclusive) { + return false; + } + isLockedExclusive = true; + return true; + } + + /** + * Lock the file in shared mode if possible. + * + * @return if locking was successful + */ + synchronized boolean lockShared() { + if (isLockedExclusive) { + return false; + } + sharedLockCount++; + return true; + } + + /** + * Unlock the file. + */ + synchronized void unlock() { + if (isLockedExclusive) { + isLockedExclusive = false; + } else { + sharedLockCount = Math.max(0, sharedLockCount - 1); + } + } + + /** + * This small cache compresses the data if an element leaves the cache. 
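The "compress when an element leaves the cache" behavior described above is built on `LinkedHashMap`'s `removeEldestEntry` hook, which the `CompressLaterCache` below overrides. A minimal standalone sketch of that eviction-callback pattern (the class and field names here are illustrative, not from H2):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: an LRU map that fires a callback when an entry is evicted,
// the same mechanism CompressLaterCache uses to compress pages lazily.
class EvictCallbackCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;
    // Records keys as they leave the cache; FileNioMemData compresses the page here instead.
    final List<K> evicted = new ArrayList<>();

    EvictCallbackCache(int maxSize) {
        // accessOrder = true: iteration order is least-recently-accessed first,
        // matching the (size, 0.75, true) constructor call in CompressLaterCache.
        super(maxSize, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() <= maxSize) {
            return false;
        }
        evicted.add(eldest.getKey());
        return true;
    }
}
```

Because `put` counts as an access in access-order mode, the eldest entry is always the least-recently-touched page, so hot pages stay uncompressed.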
+ */ + static class CompressLaterCache extends LinkedHashMap { + + private static final long serialVersionUID = 1L; + private int size; + + CompressLaterCache(int size) { + super(size, (float) 0.75, true); + this.size = size; + } + + @Override + public synchronized V put(K key, V value) { + return super.put(key, value); + } + + @Override + protected boolean removeEldestEntry(Map.Entry eldest) { + if (size() < size) { + return false; + } + CompressItem c = (CompressItem) eldest.getKey(); + c.data.compressPage(c.page); + return true; + } + + public void setCacheSize(int size) { + this.size = size; + } + } + + /** + * Represents a compressed item. + */ + static class CompressItem { + + /** + * The file data. + */ + public final FileNioMemData data; + + /** + * The page to compress. + */ + public final int page; + + public CompressItem(FileNioMemData data, int page) { + this.data = data; + this.page = page; + } + + @Override + public int hashCode() { + return page ^ data.nameHashCode; + } + + @Override + public boolean equals(Object o) { + if (o instanceof CompressItem) { + CompressItem c = (CompressItem) o; + return c.data == data && c.page == page; + } + return false; + } + + } + + private void addToCompressLaterCache(int page) { + CompressItem c = new CompressItem(this, page); + compressLaterCache.put(c, c); + } + + private ByteBuffer expandPage(int page) { + final ByteBuffer d = buffers[page].get(); + if (d.capacity() == BLOCK_SIZE) { + // already expanded, or not compressed + return d; + } + synchronized (d) { + if (d.capacity() == BLOCK_SIZE) { + return d; + } + ByteBuffer out = ByteBuffer.allocateDirect(BLOCK_SIZE); + if (d != COMPRESSED_EMPTY_BLOCK) { + d.position(0); + CompressLZF.expand(d, out); + } + buffers[page].compareAndSet(d, out); + return out; + } + } + + /** + * Compress the data in a byte array. 
+ * + * @param page which page to compress + */ + void compressPage(int page) { + final ByteBuffer d = buffers[page].get(); + synchronized (d) { + if (d.capacity() != BLOCK_SIZE) { + // already compressed + return; + } + final byte[] compressOutputBuffer = COMPRESS_OUT_BUF_THREAD_LOCAL.get(); + int len = LZF_THREAD_LOCAL.get().compress(d, 0, compressOutputBuffer, 0); + ByteBuffer out = ByteBuffer.allocateDirect(len); + out.put(compressOutputBuffer, 0, len); + buffers[page].compareAndSet(d, out); + } + } + + /** + * Update the last modified time. + * + * @param openReadOnly if the file was opened in read-only mode + */ + void touch(boolean openReadOnly) throws IOException { + if (isReadOnly || openReadOnly) { + throw new IOException("Read only"); + } + lastModified = System.currentTimeMillis(); + } + + /** + * Get the file length. + * + * @return the length + */ + long length() { + return length; + } + + /** + * Truncate the file. + * + * @param newLength the new length + */ + void truncate(long newLength) { + rwLock.writeLock().lock(); + try { + changeLength(newLength); + long end = MathUtils.roundUpLong(newLength, BLOCK_SIZE); + if (end != newLength) { + int lastPage = (int) (newLength >>> BLOCK_SIZE_SHIFT); + ByteBuffer d = expandPage(lastPage); + for (int i = (int) (newLength & BLOCK_SIZE_MASK); i < BLOCK_SIZE; i++) { + d.put(i, (byte) 0); + } + if (compress) { + addToCompressLaterCache(lastPage); + } + } + } finally { + rwLock.writeLock().unlock(); + } + } + + @SuppressWarnings("unchecked") + private void changeLength(long len) { + length = len; + len = MathUtils.roundUpLong(len, BLOCK_SIZE); + int blocks = (int) (len >>> BLOCK_SIZE_SHIFT); + if (blocks != buffers.length) { + final AtomicReference[] newBuffers = new AtomicReference[blocks]; + System.arraycopy(buffers, 0, newBuffers, 0, + Math.min(buffers.length, newBuffers.length)); + for (int i = buffers.length; i < blocks; i++) { + newBuffers[i] = new AtomicReference<>(COMPRESSED_EMPTY_BLOCK); + } + buffers = 
newBuffers; + } + compressLaterCache.setCacheSize(Math.max(CACHE_MIN_SIZE, (int) (blocks * + compressLaterCachePercent / 100))); + } + + /** + * Read or write. + * + * @param pos the position + * @param b the byte array + * @param off the offset within the byte array + * @param len the number of bytes + * @param write true for writing + * @return the new position + */ + long readWrite(long pos, ByteBuffer b, int off, int len, boolean write) { + final java.util.concurrent.locks.Lock lock = write ? rwLock.writeLock() + : rwLock.readLock(); + lock.lock(); + try { + + long end = pos + len; + if (end > length) { + if (write) { + changeLength(end); + } else { + len = (int) (length - pos); + } + } + while (len > 0) { + final int l = (int) Math.min(len, BLOCK_SIZE - (pos & BLOCK_SIZE_MASK)); + final int page = (int) (pos >>> BLOCK_SIZE_SHIFT); + final ByteBuffer block = expandPage(page); + int blockOffset = (int) (pos & BLOCK_SIZE_MASK); + if (write) { + final ByteBuffer srcTmp = b.slice(); + final ByteBuffer dstTmp = block.duplicate(); + srcTmp.position(off); + srcTmp.limit(off + l); + dstTmp.position(blockOffset); + dstTmp.put(srcTmp); + } else { + // duplicate, so this can be done concurrently + final ByteBuffer tmp = block.duplicate(); + tmp.position(blockOffset); + tmp.limit(l + blockOffset); + int oldPosition = b.position(); + b.position(off); + b.put(tmp); + // restore old position + b.position(oldPosition); + } + if (compress) { + addToCompressLaterCache(page); + } + off += l; + pos += l; + len -= l; + } + return pos; + } finally { + lock.unlock(); + } + } + + /** + * Set the file name. + * + * @param name the name + */ + void setName(String name) { + this.name = name; + } + + /** + * Get the file name + * + * @return the name + */ + String getName() { + return name; + } + + /** + * Get the last modified time. + * + * @return the time + */ + long getLastModified() { + return lastModified; + } + + /** + * Check whether writing is allowed. 
+ * + * @return true if it is + */ + boolean canWrite() { + return !isReadOnly; + } + + /** + * Set the read-only flag. + * + * @return true + */ + boolean setReadOnly() { + isReadOnly = true; + return true; + } + +} + + diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathRec.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathRec.java new file mode 100644 index 0000000000000..a5825dc6d5f86 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathRec.java @@ -0,0 +1,218 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.util.Arrays; + +/** + * A file system that records all write operations and can re-play them. + */ +public class FilePathRec extends FilePathWrapper { + + private static final FilePathRec INSTANCE = new FilePathRec(); + + private static Recorder recorder; + + private boolean trace; + + /** + * Register the file system. + */ + public static void register() { + FilePath.register(INSTANCE); + } + + /** + * Set the recorder class. 
+ * + * @param recorder the recorder + */ + public static void setRecorder(Recorder recorder) { + FilePathRec.recorder = recorder; + } + + @Override + public boolean createFile() { + log(Recorder.CREATE_NEW_FILE, name); + return super.createFile(); + } + + @Override + public FilePath createTempFile(String suffix, boolean deleteOnExit, + boolean inTempDir) throws IOException { + log(Recorder.CREATE_TEMP_FILE, unwrap(name) + ":" + suffix + ":" + + deleteOnExit + ":" + inTempDir); + return super.createTempFile(suffix, deleteOnExit, inTempDir); + } + + @Override + public void delete() { + log(Recorder.DELETE, name); + super.delete(); + } + + @Override + public FileChannel open(String mode) throws IOException { + return new FileRec(this, super.open(mode), name); + } + + @Override + public OutputStream newOutputStream(boolean append) throws IOException { + log(Recorder.OPEN_OUTPUT_STREAM, name); + return super.newOutputStream(append); + } + + @Override + public void moveTo(FilePath newPath, boolean atomicReplace) { + log(Recorder.RENAME, unwrap(name) + ":" + unwrap(newPath.name)); + super.moveTo(newPath, atomicReplace); + } + + public boolean isTrace() { + return trace; + } + + public void setTrace(boolean trace) { + this.trace = trace; + } + + /** + * Log the operation. + * + * @param op the operation + * @param fileName the file name(s) + */ + void log(int op, String fileName) { + log(op, fileName, null, 0); + } + + /** + * Log the operation. + * + * @param op the operation + * @param fileName the file name + * @param data the data or null + * @param x the value or 0 + */ + void log(int op, String fileName, byte[] data, long x) { + if (recorder != null) { + recorder.log(op, fileName, data, x); + } + } + + /** + * Get the prefix for this file system. + * + * @return the prefix + */ + @Override + public String getScheme() { + return "rec"; + } + +} + +/** + * A file object that records all write operations and can re-play them. 
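The record-and-replay idea behind `FilePathRec`/`FileRec` is: every mutating operation is appended to a log via a `Recorder`, and replaying the log reproduces the file state. A minimal sketch of that pattern, under the assumption of a single append-only write stream (the `WriteRecorder` and `Op` names are hypothetical):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the record-for-replay idea: log each write, then
// re-apply the log to rebuild the final contents.
class WriteRecorder {
    // One logged operation: an opcode plus the payload that was written.
    static final class Op {
        final String kind;
        final byte[] data;
        Op(String kind, byte[] data) { this.kind = kind; this.data = data; }
    }

    private final List<Op> log = new ArrayList<>();

    void log(String kind, byte[] data) {
        log.add(new Op(kind, data));
    }

    // Replay every logged write into a fresh buffer, reproducing the final state.
    byte[] replay() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Op op : log) {
            if ("write".equals(op.kind)) {
                out.write(op.data, 0, op.data.length);
            }
        }
        return out.toByteArray();
    }
}
```

The real `FileRec.write` additionally copies out only the written slice of the `ByteBuffer` (see the `arrayOffset` + `position` handling) so the logged payload matches exactly what went to the channel.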
+ */ +class FileRec extends FileBase { + + private final FilePathRec rec; + private final FileChannel channel; + private final String name; + + FileRec(FilePathRec rec, FileChannel file, String fileName) { + this.rec = rec; + this.channel = file; + this.name = fileName; + } + + @Override + public void implCloseChannel() throws IOException { + channel.close(); + } + + @Override + public long position() throws IOException { + return channel.position(); + } + + @Override + public long size() throws IOException { + return channel.size(); + } + + @Override + public int read(ByteBuffer dst) throws IOException { + return channel.read(dst); + } + + @Override + public int read(ByteBuffer dst, long position) throws IOException { + return channel.read(dst, position); + } + + @Override + public FileChannel position(long pos) throws IOException { + channel.position(pos); + return this; + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + rec.log(Recorder.TRUNCATE, name, null, newLength); + channel.truncate(newLength); + return this; + } + + @Override + public void force(boolean metaData) throws IOException { + channel.force(metaData); + } + + @Override + public int write(ByteBuffer src) throws IOException { + byte[] buff = src.array(); + int len = src.remaining(); + if (src.position() != 0 || len != buff.length) { + int offset = src.arrayOffset() + src.position(); + buff = Arrays.copyOfRange(buff, offset, offset + len); + } + int result = channel.write(src); + rec.log(Recorder.WRITE, name, buff, channel.position()); + return result; + } + + @Override + public int write(ByteBuffer src, long position) throws IOException { + byte[] buff = src.array(); + int len = src.remaining(); + if (src.position() != 0 || len != buff.length) { + int offset = src.arrayOffset() + src.position(); + buff = Arrays.copyOfRange(buff, offset, offset + len); + } + int result = channel.write(src, position); + rec.log(Recorder.WRITE, name, buff, position); + return 
result; + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + return channel.tryLock(position, size, shared); + } + + @Override + public String toString() { + return name; + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathRetryOnInterrupt.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathRetryOnInterrupt.java new file mode 100644 index 0000000000000..2b28671256db0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathRetryOnInterrupt.java @@ -0,0 +1,257 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.channels.ClosedByInterruptException; +import java.nio.channels.ClosedChannelException; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; + +/** + * A file system that re-opens and re-tries the operation if the file was + * closed, because a thread was interrupted. This will clear the interrupt flag. + * It is mainly useful for applications that call Thread.interrupt by mistake. + */ +public class FilePathRetryOnInterrupt extends FilePathWrapper { + + /** + * The prefix. + */ + static final String SCHEME = "retry"; + + @Override + public FileChannel open(String mode) throws IOException { + return new FileRetryOnInterrupt(name.substring(getScheme().length() + 1), mode); + } + + @Override + public String getScheme() { + return SCHEME; + } + +} + +/** + * A file object that re-opens and re-tries the operation if the file was + * closed. 
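The retry wrapper's core loop, shown in the methods that follow, catches `ClosedByInterruptException`/`ClosedChannelException`, clears the interrupt flag with `Thread.interrupted()`, re-opens the channel, and retries a bounded number of times. A minimal sketch of that loop in isolation (the `InterruptRetry` and `IoCall` names are hypothetical; the re-open step is elided):

```java
import java.nio.channels.ClosedByInterruptException;

// Hypothetical sketch of the bounded retry loop used by FileRetryOnInterrupt.
class InterruptRetry {
    interface IoCall<T> {
        T run() throws Exception;
    }

    // Retry the call up to maxRetries times, clearing the thread's interrupt
    // flag between attempts so the re-opened channel is not closed again.
    static <T> T retry(IoCall<T> call, int maxRetries) throws Exception {
        for (int i = 0;; i++) {
            try {
                return call.run();
            } catch (ClosedByInterruptException e) {
                if (i >= maxRetries) {
                    throw e;
                }
                // Mirrors the Thread.interrupted() call in reopen(): clear the
                // flag, otherwise the next channel operation fails immediately.
                Thread.interrupted();
                // A real implementation re-opens the channel and re-acquires
                // any file lock here (see reopen() and reLock()).
            }
        }
    }
}
```

Note that for positional reads/writes the real class also restores the file position before retrying, since a failed operation may have moved it.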
+ */ +class FileRetryOnInterrupt extends FileBase { + + private final String fileName; + private final String mode; + private FileChannel channel; + private FileLockRetry lock; + + FileRetryOnInterrupt(String fileName, String mode) throws IOException { + this.fileName = fileName; + this.mode = mode; + open(); + } + + private void open() throws IOException { + channel = FileUtils.open(fileName, mode); + } + + private void reopen(int i, IOException e) throws IOException { + if (i > 20) { + throw e; + } + if (!(e instanceof ClosedByInterruptException) && + !(e instanceof ClosedChannelException)) { + throw e; + } + // clear the interrupt flag, to avoid re-opening many times + Thread.interrupted(); + FileChannel before = channel; + // ensure we don't re-open concurrently; + // sometimes we don't re-open, which is fine, + // as this method is called in a loop + synchronized (this) { + if (before == channel) { + open(); + reLock(); + } + } + } + + private void reLock() throws IOException { + if (lock == null) { + return; + } + try { + lock.base.release(); + } catch (IOException e) { + // ignore + } + FileLock l2 = channel.tryLock(lock.position(), lock.size(), lock.isShared()); + if (l2 == null) { + throw new IOException("Re-locking failed"); + } + lock.base = l2; + } + + @Override + public void implCloseChannel() throws IOException { + try { + channel.close(); + } catch (IOException e) { + // ignore + } + } + + @Override + public long position() throws IOException { + for (int i = 0;; i++) { + try { + return channel.position(); + } catch (IOException e) { + reopen(i, e); + } + } + } + + @Override + public long size() throws IOException { + for (int i = 0;; i++) { + try { + return channel.size(); + } catch (IOException e) { + reopen(i, e); + } + } + } + + @Override + public int read(ByteBuffer dst) throws IOException { + long pos = position(); + for (int i = 0;; i++) { + try { + return channel.read(dst); + } catch (IOException e) { + reopen(i, e); + position(pos); + } + } 
+ } + + @Override + public int read(ByteBuffer dst, long position) throws IOException { + for (int i = 0;; i++) { + try { + return channel.read(dst, position); + } catch (IOException e) { + reopen(i, e); + } + } + } + + @Override + public FileChannel position(long pos) throws IOException { + for (int i = 0;; i++) { + try { + channel.position(pos); + return this; + } catch (IOException e) { + reopen(i, e); + } + } + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + for (int i = 0;; i++) { + try { + channel.truncate(newLength); + return this; + } catch (IOException e) { + reopen(i, e); + } + } + } + + @Override + public void force(boolean metaData) throws IOException { + for (int i = 0;; i++) { + try { + channel.force(metaData); + return; + } catch (IOException e) { + reopen(i, e); + } + } + } + + @Override + public int write(ByteBuffer src) throws IOException { + long pos = position(); + for (int i = 0;; i++) { + try { + return channel.write(src); + } catch (IOException e) { + reopen(i, e); + position(pos); + } + } + } + + @Override + public int write(ByteBuffer src, long position) throws IOException { + for (int i = 0;; i++) { + try { + return channel.write(src, position); + } catch (IOException e) { + reopen(i, e); + } + } + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + FileLock l = channel.tryLock(position, size, shared); + if (l == null) { + return null; + } + lock = new FileLockRetry(l, this); + return lock; + } + + /** + * A wrapped file lock. + */ + static class FileLockRetry extends FileLock { + + /** + * The base lock. 
+ */ + FileLock base; + + protected FileLockRetry(FileLock base, FileChannel channel) { + super(channel, base.position(), base.size(), base.isShared()); + this.base = base; + } + + @Override + public boolean isValid() { + return base.isValid(); + } + + @Override + public void release() throws IOException { + base.release(); + } + + } + + @Override + public String toString() { + return FilePathRetryOnInterrupt.SCHEME + ":" + fileName; + } + +} + diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathSplit.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathSplit.java new file mode 100644 index 0000000000000..9f5f611347264 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathSplit.java @@ -0,0 +1,447 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.SequenceInputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.util.ArrayList; +import java.util.List; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.util.New; + +/** + * A file system that may split files into multiple smaller files. + * (required for FAT32, because it only supports files up to 2 GB).
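The split scheme encodes the part size in the path ("split:30:name" means parts of `1L << 30` bytes) and names the part files by appending ".N.part", as `getDefaultMaxLength()` and `getName(int)` below show. A small sketch of those two conventions (the `SplitNaming` class is illustrative, not from H2):

```java
// Hypothetical sketch of the naming and sizing conventions of the split file system.
class SplitNaming {
    static final String PART_SUFFIX = ".part";

    // Part 0 keeps the base name; later parts get ".N.part" appended,
    // as in FilePathSplit.getName(int).
    static String partName(String base, int id) {
        return id > 0 ? base + "." + id + PART_SUFFIX : base;
    }

    // "split:30:data.db" means each part holds 1L << 30 bytes (1 GiB),
    // as in FilePathSplit.getDefaultMaxLength().
    static long maxLength(int shift) {
        return 1L << shift;
    }
}
```

Keeping part 0 under the unsuffixed name means an unsplit file remains readable under its original path.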
+ */ +public class FilePathSplit extends FilePathWrapper { + + private static final String PART_SUFFIX = ".part"; + + @Override + protected String getPrefix() { + return getScheme() + ":" + parse(name)[0] + ":"; + } + + @Override + public FilePath unwrap(String fileName) { + return FilePath.get(parse(fileName)[1]); + } + + @Override + public boolean setReadOnly() { + boolean result = false; + for (int i = 0;; i++) { + FilePath f = getBase(i); + if (f.exists()) { + result = f.setReadOnly(); + } else { + break; + } + } + return result; + } + + @Override + public void delete() { + for (int i = 0;; i++) { + FilePath f = getBase(i); + if (f.exists()) { + f.delete(); + } else { + break; + } + } + } + + @Override + public long lastModified() { + long lastModified = 0; + for (int i = 0;; i++) { + FilePath f = getBase(i); + if (f.exists()) { + long l = f.lastModified(); + lastModified = Math.max(lastModified, l); + } else { + break; + } + } + return lastModified; + } + + @Override + public long size() { + long length = 0; + for (int i = 0;; i++) { + FilePath f = getBase(i); + if (f.exists()) { + length += f.size(); + } else { + break; + } + } + return length; + } + + @Override + public ArrayList newDirectoryStream() { + List list = getBase().newDirectoryStream(); + ArrayList newList = New.arrayList(); + for (FilePath f : list) { + if (!f.getName().endsWith(PART_SUFFIX)) { + newList.add(wrap(f)); + } + } + return newList; + } + + @Override + public InputStream newInputStream() throws IOException { + InputStream input = getBase().newInputStream(); + for (int i = 1;; i++) { + FilePath f = getBase(i); + if (f.exists()) { + InputStream i2 = f.newInputStream(); + input = new SequenceInputStream(input, i2); + } else { + break; + } + } + return input; + } + + @Override + public FileChannel open(String mode) throws IOException { + ArrayList list = New.arrayList(); + list.add(getBase().open(mode)); + for (int i = 1;; i++) { + FilePath f = getBase(i); + if (f.exists()) { + 
list.add(f.open(mode)); + } else { + break; + } + } + FileChannel[] array = list.toArray(new FileChannel[0]); + long maxLength = array[0].size(); + long length = maxLength; + if (array.length == 1) { + long defaultMaxLength = getDefaultMaxLength(); + if (maxLength < defaultMaxLength) { + maxLength = defaultMaxLength; + } + } else { + if (maxLength == 0) { + closeAndThrow(0, array, array[0], maxLength); + } + for (int i = 1; i < array.length - 1; i++) { + FileChannel c = array[i]; + long l = c.size(); + length += l; + if (l != maxLength) { + closeAndThrow(i, array, c, maxLength); + } + } + FileChannel c = array[array.length - 1]; + long l = c.size(); + length += l; + if (l > maxLength) { + closeAndThrow(array.length - 1, array, c, maxLength); + } + } + return new FileSplit(this, mode, array, length, maxLength); + } + + private long getDefaultMaxLength() { + return 1L << Integer.decode(parse(name)[0]).intValue(); + } + + private void closeAndThrow(int id, FileChannel[] array, FileChannel o, + long maxLength) throws IOException { + String message = "Expected file length: " + maxLength + " got: " + + o.size() + " for " + getName(id); + for (FileChannel f : array) { + f.close(); + } + throw new IOException(message); + } + + @Override + public OutputStream newOutputStream(boolean append) throws IOException { + return new FileChannelOutputStream(open("rw"), append); + } + + @Override + public void moveTo(FilePath path, boolean atomicReplace) { + FilePathSplit newName = (FilePathSplit) path; + for (int i = 0;; i++) { + FilePath o = getBase(i); + if (o.exists()) { + o.moveTo(newName.getBase(i), atomicReplace); + } else { + break; + } + } + } + + /** + * Split the file name into size and base file name. 
+ * + * @param fileName the file name + * @return an array with size and file name + */ + private String[] parse(String fileName) { + if (!fileName.startsWith(getScheme())) { + DbException.throwInternalError(fileName + " doesn't start with " + getScheme()); + } + fileName = fileName.substring(getScheme().length() + 1); + String size; + if (fileName.length() > 0 && Character.isDigit(fileName.charAt(0))) { + int idx = fileName.indexOf(':'); + size = fileName.substring(0, idx); + try { + fileName = fileName.substring(idx + 1); + } catch (NumberFormatException e) { + // ignore + } + } else { + size = Long.toString(SysProperties.SPLIT_FILE_SIZE_SHIFT); + } + return new String[] { size, fileName }; + } + + /** + * Get the file name of a part file. + * + * @param id the part id + * @return the file name including the part id + */ + FilePath getBase(int id) { + return FilePath.get(getName(id)); + } + + private String getName(int id) { + return id > 0 ? getBase().name + "." + id + PART_SUFFIX : getBase().name; + } + + @Override + public String getScheme() { + return "split"; + } + +} + +/** + * A file that may be split into multiple smaller files. 
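`FileSplit` below maps a logical file position onto a part file and an offset within it using plain division and remainder by `maxLength` (see `getFileChannel` and the `position % maxLength` lines in `read`/`write`). A minimal sketch of that addressing arithmetic (the `SplitAddress` class is illustrative):

```java
// Hypothetical sketch of the position arithmetic used by FileSplit:
// integer division selects the part file, the remainder is the offset inside it.
class SplitAddress {
    final int part;    // index of the part file holding the byte
    final long offset; // offset within that part file

    SplitAddress(long position, long maxLength) {
        this.part = (int) (position / maxLength);
        this.offset = position % maxLength;
    }
}
```

This is also why a single read or write is clamped to `maxLength - offset` bytes: an operation must never straddle a part boundary, so callers loop (or the stream layer chains parts via `SequenceInputStream`).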
+ */ +class FileSplit extends FileBase { + + private final FilePathSplit file; + private final String mode; + private final long maxLength; + private FileChannel[] list; + private long filePointer; + private long length; + + FileSplit(FilePathSplit file, String mode, FileChannel[] list, long length, + long maxLength) { + this.file = file; + this.mode = mode; + this.list = list; + this.length = length; + this.maxLength = maxLength; + } + + @Override + public void implCloseChannel() throws IOException { + for (FileChannel c : list) { + c.close(); + } + } + + @Override + public long position() { + return filePointer; + } + + @Override + public long size() { + return length; + } + + @Override + public synchronized int read(ByteBuffer dst, long position) + throws IOException { + int len = dst.remaining(); + if (len == 0) { + return 0; + } + len = (int) Math.min(len, length - position); + if (len <= 0) { + return -1; + } + long offset = position % maxLength; + len = (int) Math.min(len, maxLength - offset); + FileChannel channel = getFileChannel(position); + return channel.read(dst, offset); + } + + @Override + public int read(ByteBuffer dst) throws IOException { + int len = dst.remaining(); + if (len == 0) { + return 0; + } + len = (int) Math.min(len, length - filePointer); + if (len <= 0) { + return -1; + } + long offset = filePointer % maxLength; + len = (int) Math.min(len, maxLength - offset); + FileChannel channel = getFileChannel(filePointer); + channel.position(offset); + len = channel.read(dst); + filePointer += len; + return len; + } + + @Override + public FileChannel position(long pos) { + filePointer = pos; + return this; + } + + private FileChannel getFileChannel(long position) throws IOException { + int id = (int) (position / maxLength); + while (id >= list.length) { + int i = list.length; + FileChannel[] newList = new FileChannel[i + 1]; + System.arraycopy(list, 0, newList, 0, i); + FilePath f = file.getBase(i); + newList[i] = f.open(mode); + list = newList; 
+ } + return list[id]; + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + if (newLength >= length) { + return this; + } + filePointer = Math.min(filePointer, newLength); + int newFileCount = 1 + (int) (newLength / maxLength); + if (newFileCount < list.length) { + // delete some of the files + FileChannel[] newList = new FileChannel[newFileCount]; + // delete backwards, so that truncating is somewhat transactional + for (int i = list.length - 1; i >= newFileCount; i--) { + // verify the file is writable + list[i].truncate(0); + list[i].close(); + try { + file.getBase(i).delete(); + } catch (DbException e) { + throw DbException.convertToIOException(e); + } + } + System.arraycopy(list, 0, newList, 0, newList.length); + list = newList; + } + long size = newLength - maxLength * (newFileCount - 1); + list[list.length - 1].truncate(size); + this.length = newLength; + return this; + } + + @Override + public void force(boolean metaData) throws IOException { + for (FileChannel c : list) { + c.force(metaData); + } + } + + @Override + public int write(ByteBuffer src, long position) throws IOException { + if (position >= length && position > maxLength) { + // may need to extend and create files + long oldFilePointer = position; + long x = length - (length % maxLength) + maxLength; + for (; x < position; x += maxLength) { + if (x > length) { + // expand the file size + position(x - 1); + write(ByteBuffer.wrap(new byte[1])); + } + position = oldFilePointer; + } + } + long offset = position % maxLength; + int len = src.remaining(); + FileChannel channel = getFileChannel(position); + int l = (int) Math.min(len, maxLength - offset); + if (l == len) { + l = channel.write(src, offset); + } else { + int oldLimit = src.limit(); + src.limit(src.position() + l); + l = channel.write(src, offset); + src.limit(oldLimit); + } + length = Math.max(length, position + l); + return l; + } + + @Override + public int write(ByteBuffer src) throws IOException { + if 
(filePointer >= length && filePointer > maxLength) { + // may need to extend and create files + long oldFilePointer = filePointer; + long x = length - (length % maxLength) + maxLength; + for (; x < filePointer; x += maxLength) { + if (x > length) { + // expand the file size + position(x - 1); + write(ByteBuffer.wrap(new byte[1])); + } + filePointer = oldFilePointer; + } + } + long offset = filePointer % maxLength; + int len = src.remaining(); + FileChannel channel = getFileChannel(filePointer); + channel.position(offset); + int l = (int) Math.min(len, maxLength - offset); + if (l == len) { + l = channel.write(src); + } else { + int oldLimit = src.limit(); + src.limit(src.position() + l); + l = channel.write(src); + src.limit(oldLimit); + } + filePointer += l; + length = Math.max(length, filePointer); + return l; + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + return list[0].tryLock(position, size, shared); + } + + @Override + public String toString() { + return file.toString(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathWrapper.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathWrapper.java new file mode 100644 index 0000000000000..c4be219824861 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathWrapper.java @@ -0,0 +1,166 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.channels.FileChannel; +import java.util.List; + +/** + * The base class for wrapping / delegating file systems such as + * the split file system. 
+ */ +public abstract class FilePathWrapper extends FilePath { + + private FilePath base; + + @Override + public FilePathWrapper getPath(String path) { + return create(path, unwrap(path)); + } + + /** + * Create a wrapped path instance for the given base path. + * + * @param base the base path + * @return the wrapped path + */ + public FilePathWrapper wrap(FilePath base) { + return base == null ? null : create(getPrefix() + base.name, base); + } + + @Override + public FilePath unwrap() { + return unwrap(name); + } + + private FilePathWrapper create(String path, FilePath base) { + try { + FilePathWrapper p = getClass().newInstance(); + p.name = path; + p.base = base; + return p; + } catch (Exception e) { + throw new IllegalArgumentException("Path: " + path, e); + } + } + + protected String getPrefix() { + return getScheme() + ":"; + } + + /** + * Get the base path for the given wrapped path. + * + * @param path the path including the scheme prefix + * @return the base file path + */ + protected FilePath unwrap(String path) { + return FilePath.get(path.substring(getScheme().length() + 1)); + } + + protected FilePath getBase() { + return base; + } + + @Override + public boolean canWrite() { + return base.canWrite(); + } + + @Override + public void createDirectory() { + base.createDirectory(); + } + + @Override + public boolean createFile() { + return base.createFile(); + } + + @Override + public void delete() { + base.delete(); + } + + @Override + public boolean exists() { + return base.exists(); + } + + @Override + public FilePath getParent() { + return wrap(base.getParent()); + } + + @Override + public boolean isAbsolute() { + return base.isAbsolute(); + } + + @Override + public boolean isDirectory() { + return base.isDirectory(); + } + + @Override + public long lastModified() { + return base.lastModified(); + } + + @Override + public FilePath toRealPath() { + return wrap(base.toRealPath()); + } + + @Override + public List newDirectoryStream() { + List list = 
base.newDirectoryStream(); + for (int i = 0, len = list.size(); i < len; i++) { + list.set(i, wrap(list.get(i))); + } + return list; + } + + @Override + public void moveTo(FilePath newName, boolean atomicReplace) { + base.moveTo(((FilePathWrapper) newName).base, atomicReplace); + } + + @Override + public InputStream newInputStream() throws IOException { + return base.newInputStream(); + } + + @Override + public OutputStream newOutputStream(boolean append) throws IOException { + return base.newOutputStream(append); + } + + @Override + public FileChannel open(String mode) throws IOException { + return base.open(mode); + } + + @Override + public boolean setReadOnly() { + return base.setReadOnly(); + } + + @Override + public long size() { + return base.size(); + } + + @Override + public FilePath createTempFile(String suffix, boolean deleteOnExit, + boolean inTempDir) throws IOException { + return wrap(base.createTempFile(suffix, deleteOnExit, inTempDir)); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FilePathZip.java b/modules/h2/src/main/java/org/h2/store/fs/FilePathZip.java new file mode 100644 index 0000000000000..25831e6477363 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FilePathZip.java @@ -0,0 +1,381 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.FileNotFoundException; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.util.ArrayList; +import java.util.Enumeration; +import java.util.zip.ZipEntry; +import java.util.zip.ZipFile; +import org.h2.message.DbException; +import org.h2.util.IOUtils; +import org.h2.util.New; + +/** + * This is a read-only file system that allows + * to access databases stored in a .zip or .jar file. + */ +public class FilePathZip extends FilePath { + + @Override + public FilePathZip getPath(String path) { + FilePathZip p = new FilePathZip(); + p.name = path; + return p; + } + + @Override + public void createDirectory() { + // ignore + } + + @Override + public boolean createFile() { + throw DbException.getUnsupportedException("write"); + } + + @Override + public void delete() { + throw DbException.getUnsupportedException("write"); + } + + @Override + public boolean exists() { + try { + String entryName = getEntryName(); + if (entryName.length() == 0) { + return true; + } + try (ZipFile file = openZipFile()) { + return file.getEntry(entryName) != null; + } + } catch (IOException e) { + return false; + } + } + + @Override + public long lastModified() { + return 0; + } + + @Override + public FilePath getParent() { + int idx = name.lastIndexOf('/'); + return idx < 0 ? 
null : getPath(name.substring(0, idx)); + } + + @Override + public boolean isAbsolute() { + String fileName = translateFileName(name); + return FilePath.get(fileName).isAbsolute(); + } + + @Override + public FilePath unwrap() { + return FilePath.get(name.substring(getScheme().length() + 1)); + } + + @Override + public boolean isDirectory() { + try { + String entryName = getEntryName(); + if (entryName.length() == 0) { + return true; + } + try (ZipFile file = openZipFile()) { + Enumeration en = file.entries(); + while (en.hasMoreElements()) { + ZipEntry entry = en.nextElement(); + String n = entry.getName(); + if (n.equals(entryName)) { + return entry.isDirectory(); + } else if (n.startsWith(entryName)) { + if (n.length() == entryName.length() + 1) { + if (n.equals(entryName + "/")) { + return true; + } + } + } + } + } + return false; + } catch (IOException e) { + return false; + } + } + + @Override + public boolean canWrite() { + return false; + } + + @Override + public boolean setReadOnly() { + return true; + } + + @Override + public long size() { + try { + try (ZipFile file = openZipFile()) { + ZipEntry entry = file.getEntry(getEntryName()); + return entry == null ? 
0 : entry.getSize(); + } + } catch (IOException e) { + return 0; + } + } + + @Override + public ArrayList newDirectoryStream() { + String path = name; + ArrayList list = New.arrayList(); + try { + if (path.indexOf('!') < 0) { + path += "!"; + } + if (!path.endsWith("/")) { + path += "/"; + } + try (ZipFile file = openZipFile()) { + String dirName = getEntryName(); + String prefix = path.substring(0, path.length() - dirName.length()); + Enumeration en = file.entries(); + while (en.hasMoreElements()) { + ZipEntry entry = en.nextElement(); + String name = entry.getName(); + if (!name.startsWith(dirName)) { + continue; + } + if (name.length() <= dirName.length()) { + continue; + } + int idx = name.indexOf('/', dirName.length()); + if (idx < 0 || idx >= name.length() - 1) { + list.add(getPath(prefix + name)); + } + } + } + return list; + } catch (IOException e) { + throw DbException.convertIOException(e, "listFiles " + path); + } + } + + @Override + public InputStream newInputStream() throws IOException { + return new FileChannelInputStream(open("r"), true); + } + + @Override + public FileChannel open(String mode) throws IOException { + ZipFile file = openZipFile(); + ZipEntry entry = file.getEntry(getEntryName()); + if (entry == null) { + file.close(); + throw new FileNotFoundException(name); + } + return new FileZip(file, entry); + } + + @Override + public OutputStream newOutputStream(boolean append) throws IOException { + throw new IOException("write"); + } + + @Override + public void moveTo(FilePath newName, boolean atomicReplace) { + throw DbException.getUnsupportedException("write"); + } + + private static String translateFileName(String fileName) { + if (fileName.startsWith("zip:")) { + fileName = fileName.substring("zip:".length()); + } + int idx = fileName.indexOf('!'); + if (idx >= 0) { + fileName = fileName.substring(0, idx); + } + return FilePathDisk.expandUserHomeDirectory(fileName); + } + + @Override + public FilePath toRealPath() { + return this; + } + + 
private String getEntryName() { + int idx = name.indexOf('!'); + String fileName; + if (idx <= 0) { + fileName = ""; + } else { + fileName = name.substring(idx + 1); + } + fileName = fileName.replace('\\', '/'); + if (fileName.startsWith("/")) { + fileName = fileName.substring(1); + } + return fileName; + } + + private ZipFile openZipFile() throws IOException { + String fileName = translateFileName(name); + return new ZipFile(fileName); + } + + @Override + public FilePath createTempFile(String suffix, boolean deleteOnExit, + boolean inTempDir) throws IOException { + if (!inTempDir) { + throw new IOException("File system is read-only"); + } + return new FilePathDisk().getPath(name).createTempFile(suffix, + deleteOnExit, true); + } + + @Override + public String getScheme() { + return "zip"; + } + +} + +/** + * The file is read from a stream. When reading from start to end, the same + * input stream is re-used, however when reading from end to start, a new input + * stream is opened for each request. 
+ */ +class FileZip extends FileBase { + + private static final byte[] SKIP_BUFFER = new byte[1024]; + + private final ZipFile file; + private final ZipEntry entry; + private long pos; + private InputStream in; + private long inPos; + private final long length; + private boolean skipUsingRead; + + FileZip(ZipFile file, ZipEntry entry) { + this.file = file; + this.entry = entry; + length = entry.getSize(); + } + + @Override + public long position() { + return pos; + } + + @Override + public long size() { + return length; + } + + @Override + public int read(ByteBuffer dst) throws IOException { + seek(); + int len = in.read(dst.array(), dst.arrayOffset() + dst.position(), + dst.remaining()); + if (len > 0) { + dst.position(dst.position() + len); + pos += len; + inPos += len; + } + return len; + } + + private void seek() throws IOException { + if (inPos > pos) { + if (in != null) { + in.close(); + } + in = null; + } + if (in == null) { + in = file.getInputStream(entry); + inPos = 0; + } + if (inPos < pos) { + long skip = pos - inPos; + if (!skipUsingRead) { + try { + IOUtils.skipFully(in, skip); + } catch (NullPointerException e) { + // workaround for Android + skipUsingRead = true; + } + } + if (skipUsingRead) { + while (skip > 0) { + int s = (int) Math.min(SKIP_BUFFER.length, skip); + s = in.read(SKIP_BUFFER, 0, s); + skip -= s; + } + } + inPos = pos; + } + } + + @Override + public FileChannel position(long newPos) { + this.pos = newPos; + return this; + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + throw new IOException("File is read-only"); + } + + @Override + public void force(boolean metaData) throws IOException { + // nothing to do + } + + @Override + public int write(ByteBuffer src) throws IOException { + throw new IOException("File is read-only"); + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + if (shared) { + return new FileLock(new 
FakeFileChannel(), position, size, shared) { + + @Override + public boolean isValid() { + return true; + } + + @Override + public void release() throws IOException { + // ignore + }}; + } + return null; + } + + @Override + protected void implCloseChannel() throws IOException { + if (in != null) { + in.close(); + in = null; + } + file.close(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/FileUtils.java b/modules/h2/src/main/java/org/h2/store/fs/FileUtils.java new file mode 100644 index 0000000000000..841f03bcd3af4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/FileUtils.java @@ -0,0 +1,379 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +import java.io.EOFException; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.util.ArrayList; +import java.util.List; + +/** + * This utility class contains utility functions that use the file system + * abstraction. + */ +public class FileUtils { + + /** + * Checks if a file exists. + * This method is similar to Java 7 java.nio.file.Path.exists. + * + * @param fileName the file name + * @return true if it exists + */ + public static boolean exists(String fileName) { + return FilePath.get(fileName).exists(); + } + + /** + * Create a directory (all required parent directories must already exist). + * This method is similar to Java 7 + * java.nio.file.Path.createDirectory. + * + * @param directoryName the directory name + */ + public static void createDirectory(String directoryName) { + FilePath.get(directoryName).createDirectory(); + } + + /** + * Create a new file. This method is similar to Java 7 + * java.nio.file.Path.createFile, but returns false instead of + * throwing an exception if the file already existed.
+ * + * @param fileName the file name + * @return true if creating was successful + */ + public static boolean createFile(String fileName) { + return FilePath.get(fileName).createFile(); + } + + /** + * Delete a file or directory if it exists. + * Directories may only be deleted if they are empty. + * This method is similar to Java 7 + * java.nio.file.Path.deleteIfExists. + * + * @param path the file or directory name + */ + public static void delete(String path) { + FilePath.get(path).delete(); + } + + /** + * Get the canonical file or directory name. This method is similar to Java + * 7 java.nio.file.Path.toRealPath. + * + * @param fileName the file name + * @return the normalized file name + */ + public static String toRealPath(String fileName) { + return FilePath.get(fileName).toRealPath().toString(); + } + + /** + * Get the parent directory of a file or directory. This method returns null + * if there is no parent. This method is similar to Java 7 + * java.nio.file.Path.getParent. + * + * @param fileName the file or directory name + * @return the parent directory name + */ + public static String getParent(String fileName) { + FilePath p = FilePath.get(fileName).getParent(); + return p == null ? null : p.toString(); + } + + /** + * Check if the file name includes a path. This method is similar to Java 7 + * java.nio.file.Path.isAbsolute. + * + * @param fileName the file name + * @return if the file name is absolute + */ + public static boolean isAbsolute(String fileName) { + return FilePath.get(fileName).isAbsolute() + // Allows Windows to recognize "/path" as absolute. + // Makes the same configuration work on all platforms. + || fileName.startsWith("/"); + } + + /** + * Rename a file if this is allowed. This method is similar to Java 7 + * java.nio.file.Files.move. 
+ * + * @param source the old fully qualified file name + * @param target the new fully qualified file name + */ + public static void move(String source, String target) { + FilePath.get(source).moveTo(FilePath.get(target), false); + } + + /** + * Rename a file if this is allowed, and try to atomically replace an + * existing file. This method is similar to Java 7 + * java.nio.file.Files.move. + * + * @param source the old fully qualified file name + * @param target the new fully qualified file name + */ + public static void moveAtomicReplace(String source, String target) { + FilePath.get(source).moveTo(FilePath.get(target), true); + } + + /** + * Get the file or directory name (the last element of the path). + * This method is similar to Java 7 java.nio.file.Path.getName. + * + * @param path the directory and file name + * @return just the file name + */ + public static String getName(String path) { + return FilePath.get(path).getName(); + } + + /** + * List the files and directories in the given directory. + * This method is similar to Java 7 + * java.nio.file.Path.newDirectoryStream. + * + * @param path the directory + * @return the list of fully qualified file names + */ + public static List newDirectoryStream(String path) { + List list = FilePath.get(path).newDirectoryStream(); + int len = list.size(); + List result = new ArrayList<>(len); + for (FilePath filePath : list) { + result.add(filePath.toString()); + } + return result; + } + + /** + * Get the last modified date of a file. + * This method is similar to Java 7 + * java.nio.file.attribute.Attributes. + * readBasicFileAttributes(file).lastModified().toMillis() + * + * @param fileName the file name + * @return the last modified date + */ + public static long lastModified(String fileName) { + return FilePath.get(fileName).lastModified(); + } + + /** + * Get the size of a file in bytes + * This method is similar to Java 7 + * java.nio.file.attribute.Attributes. 
+ * readBasicFileAttributes(file).size() + * + * @param fileName the file name + * @return the size in bytes + */ + public static long size(String fileName) { + return FilePath.get(fileName).size(); + } + + /** + * Check if it is a file or a directory. + * java.nio.file.attribute.Attributes. + * readBasicFileAttributes(file).isDirectory() + * + * @param fileName the file or directory name + * @return true if it is a directory + */ + public static boolean isDirectory(String fileName) { + return FilePath.get(fileName).isDirectory(); + } + + /** + * Open a random access file object. + * This method is similar to Java 7 + * java.nio.channels.FileChannel.open. + * + * @param fileName the file name + * @param mode the access mode. Supported are r, rw, rws, rwd + * @return the file object + */ + public static FileChannel open(String fileName, String mode) + throws IOException { + return FilePath.get(fileName).open(mode); + } + + /** + * Create an input stream to read from the file. + * This method is similar to Java 7 + * java.nio.file.Path.newInputStream. + * + * @param fileName the file name + * @return the input stream + */ + public static InputStream newInputStream(String fileName) + throws IOException { + return FilePath.get(fileName).newInputStream(); + } + + /** + * Create an output stream to write into the file. + * This method is similar to Java 7 + * java.nio.file.Path.newOutputStream. + * + * @param fileName the file name + * @param append if true, the file will grow, if false, the file will be + * truncated first + * @return the output stream + */ + public static OutputStream newOutputStream(String fileName, boolean append) + throws IOException { + return FilePath.get(fileName).newOutputStream(append); + } + + /** + * Check if the file is writable. 
+ * This method is similar to Java 7 + * java.nio.file.Path.checkAccess(AccessMode.WRITE) + * + * @param fileName the file name + * @return if the file is writable + */ + public static boolean canWrite(String fileName) { + return FilePath.get(fileName).canWrite(); + } + + // special methods ======================================= + + /** + * Disable the ability to write. The file can still be deleted afterwards. + * + * @param fileName the file name + * @return true if the call was successful + */ + public static boolean setReadOnly(String fileName) { + return FilePath.get(fileName).setReadOnly(); + } + + /** + * Get the unwrapped file name (without wrapper prefixes if wrapping / + * delegating file systems are used). + * + * @param fileName the file name + * @return the unwrapped file name + */ + public static String unwrap(String fileName) { + return FilePath.get(fileName).unwrap().toString(); + } + + // utility methods ======================================= + + /** + * Delete a directory or file and all subdirectories and files. + * + * @param path the path + * @param tryOnly whether errors should be ignored + */ + public static void deleteRecursive(String path, boolean tryOnly) { + if (exists(path)) { + if (isDirectory(path)) { + for (String s : newDirectoryStream(path)) { + deleteRecursive(s, tryOnly); + } + } + if (tryOnly) { + tryDelete(path); + } else { + delete(path); + } + } + } + + /** + * Create the directory and all required parent directories. + * + * @param dir the directory name + */ + public static void createDirectories(String dir) { + if (dir != null) { + if (exists(dir)) { + if (!isDirectory(dir)) { + // this will fail + createDirectory(dir); + } + } else { + String parent = getParent(dir); + createDirectories(parent); + createDirectory(dir); + } + } + } + + /** + * Try to delete a file or directory (ignoring errors).
+ * + * @param path the file or directory name + * @return true if it worked + */ + public static boolean tryDelete(String path) { + try { + FilePath.get(path).delete(); + return true; + } catch (Exception e) { + return false; + } + } + + /** + * Create a new temporary file. + * + * @param prefix the prefix of the file name (including directory name if + * required) + * @param suffix the suffix + * @param deleteOnExit if the file should be deleted when the virtual + * machine exits + * @param inTempDir if the file should be stored in the temporary directory + * @return the name of the created file + */ + public static String createTempFile(String prefix, String suffix, + boolean deleteOnExit, boolean inTempDir) throws IOException { + return FilePath.get(prefix).createTempFile( + suffix, deleteOnExit, inTempDir).toString(); + } + + /** + * Fully read from the file. This will read all remaining bytes, + * or throw an EOFException if not successful. + * + * @param channel the file channel + * @param dst the byte buffer + */ + public static void readFully(FileChannel channel, ByteBuffer dst) + throws IOException { + do { + int r = channel.read(dst); + if (r < 0) { + throw new EOFException(); + } + } while (dst.remaining() > 0); + } + + /** + * Fully write to the file. This will write all remaining bytes. + * + * @param channel the file channel + * @param src the byte buffer + */ + public static void writeFully(FileChannel channel, ByteBuffer src) + throws IOException { + do { + channel.write(src); + } while (src.remaining() > 0); + } + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/Recorder.java b/modules/h2/src/main/java/org/h2/store/fs/Recorder.java new file mode 100644 index 0000000000000..c80d148a8dfab --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/Recorder.java @@ -0,0 +1,59 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.store.fs; + +/** + * A recorder for the recording file system. + */ +public interface Recorder { + + /** + * Create a new file. + */ + int CREATE_NEW_FILE = 2; + + /** + * Create a temporary file. + */ + int CREATE_TEMP_FILE = 3; + + /** + * Delete a file. + */ + int DELETE = 4; + + /** + * Open a file output stream. + */ + int OPEN_OUTPUT_STREAM = 5; + + /** + * Rename a file. The file name contains the source and the target file + * separated with a colon. + */ + int RENAME = 6; + + /** + * Truncate the file. + */ + int TRUNCATE = 7; + + /** + * Write to the file. + */ + int WRITE = 8; + + /** + * Record the method. + * + * @param op the operation + * @param fileName the file name or file name list + * @param data the data or null + * @param x the value or 0 + */ + void log(int op, String fileName, byte[] data, long x); + +} diff --git a/modules/h2/src/main/java/org/h2/store/fs/package.html b/modules/h2/src/main/java/org/h2/store/fs/package.html new file mode 100644 index 0000000000000..12846af90b2c7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/fs/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A file system abstraction. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/store/package.html b/modules/h2/src/main/java/org/h2/store/package.html new file mode 100644 index 0000000000000..6f704515a683a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/store/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Storage abstractions, such as a file with a cache, or a class to convert values to a byte array and vice versa. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/table/Column.java b/modules/h2/src/main/java/org/h2/table/Column.java new file mode 100644 index 0000000000000..235485872d9f7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/Column.java @@ -0,0 +1,873 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.sql.ResultSetMetaData; +import java.util.Arrays; +import org.h2.api.ErrorCode; +import org.h2.command.Parser; +import org.h2.engine.Constants; +import org.h2.engine.Mode; +import org.h2.engine.Session; +import org.h2.expression.ConditionAndOr; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.expression.SequenceValue; +import org.h2.expression.ValueExpression; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.schema.Schema; +import org.h2.schema.Sequence; +import org.h2.util.DateTimeUtils; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueEnum; +import org.h2.value.ValueInt; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; +import org.h2.value.ValueUuid; + +/** + * This class represents a column in a table. + */ +public class Column { + + /** + * The name of the rowid pseudo column. + */ + public static final String ROWID = "_ROWID_"; + + /** + * This column is not nullable. + */ + public static final int NOT_NULLABLE = + ResultSetMetaData.columnNoNulls; + + /** + * This column is nullable. 
+ */ + public static final int NULLABLE = + ResultSetMetaData.columnNullable; + + /** + * It is not known whether this column is nullable. + */ + public static final int NULLABLE_UNKNOWN = + ResultSetMetaData.columnNullableUnknown; + + private final int type; + private long precision; + private int scale; + private String[] enumerators; + private int displaySize; + private Table table; + private String name; + private int columnId; + private boolean nullable = true; + private Expression defaultExpression; + private Expression onUpdateExpression; + private Expression checkConstraint; + private String checkConstraintSQL; + private String originalSQL; + private boolean autoIncrement; + private long start; + private long increment; + private boolean convertNullToDefault; + private Sequence sequence; + private boolean isComputed; + private TableFilter computeTableFilter; + private int selectivity; + private SingleColumnResolver resolver; + private String comment; + private boolean primaryKey; + private boolean visible = true; + + public Column(String name, int type) { + this(name, type, -1, -1, -1, null); + } + + public Column(String name, int type, long precision, int scale, + int displaySize) { + this(name, type, precision, scale, displaySize, null); + } + + public Column(String name, int type, long precision, int scale, + int displaySize, String[] enumerators) { + this.name = name; + this.type = type; + if (precision == -1 && scale == -1 && displaySize == -1 && type != Value.UNKNOWN) { + DataType dt = DataType.getDataType(type); + precision = dt.defaultPrecision; + scale = dt.defaultScale; + displaySize = dt.defaultDisplaySize; + } + this.precision = precision; + this.scale = scale; + this.displaySize = displaySize; + this.enumerators = enumerators; + } + + @Override + public boolean equals(Object o) { + if (o == this) { + return true; + } else if (!(o instanceof Column)) { + return false; + } + Column other = (Column) o; + if (table == null || other.table == null ||
+ name == null || other.name == null) { + return false; + } + if (table != other.table) { + return false; + } + return name.equals(other.name); + } + + @Override + public int hashCode() { + if (table == null || name == null) { + return 0; + } + return table.getId() ^ name.hashCode(); + } + + public boolean isEnumerated() { + return type == Value.ENUM; + } + + public Column getClone() { + Column newColumn = new Column(name, type, precision, scale, displaySize, enumerators); + newColumn.copy(this); + return newColumn; + } + + /** + * Convert a value to this column's type. + * + * @param v the value + * @return the value + */ + public Value convert(Value v) { + return convert(v, null); + } + + /** + * Convert a value to this column's type using the given {@link Mode}. + *

    + * Use this method in case the conversion is Mode-dependent. + * + * @param v the value + * @param mode the database {@link Mode} to use + * @return the value + */ + public Value convert(Value v, Mode mode) { + try { + return v.convertTo(type, MathUtils.convertLongToInt(precision), mode, this, getEnumerators()); + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.DATA_CONVERSION_ERROR_1) { + String target = (table == null ? "" : table.getName() + ": ") + + getCreateSQL(); + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, e, + v.getSQL() + " (" + target + ")"); + } + throw e; + } + } + + boolean getComputed() { + return isComputed; + } + + /** + * Compute the value of this computed column. + * + * @param session the session + * @param row the row + * @return the value + */ + synchronized Value computeValue(Session session, Row row) { + computeTableFilter.setSession(session); + computeTableFilter.set(row); + return defaultExpression.getValue(session); + } + + /** + * Set the default value in the form of a computed expression of other + * columns. + * + * @param expression the computed expression + */ + public void setComputedExpression(Expression expression) { + this.isComputed = true; + this.defaultExpression = expression; + } + + /** + * Set the table and column id. + * + * @param table the table + * @param columnId the column index + */ + public void setTable(Table table, int columnId) { + this.table = table; + this.columnId = columnId; + } + + public Table getTable() { + return table; + } + + /** + * Set the default expression. 
+ * + * @param session the session + * @param defaultExpression the default expression + */ + public void setDefaultExpression(Session session, + Expression defaultExpression) { + // also to test that no column names are used + if (defaultExpression != null) { + defaultExpression = defaultExpression.optimize(session); + if (defaultExpression.isConstant()) { + defaultExpression = ValueExpression.get( + defaultExpression.getValue(session)); + } + } + this.defaultExpression = defaultExpression; + } + + /** + * Set the on update expression. + * + * @param session the session + * @param onUpdateExpression the on update expression + */ + public void setOnUpdateExpression(Session session, Expression onUpdateExpression) { + // also to test that no column names are used + if (onUpdateExpression != null) { + onUpdateExpression = onUpdateExpression.optimize(session); + if (onUpdateExpression.isConstant()) { + onUpdateExpression = ValueExpression.get(onUpdateExpression.getValue(session)); + } + } + this.onUpdateExpression = onUpdateExpression; + } + + public int getColumnId() { + return columnId; + } + + public String getSQL() { + return Parser.quoteIdentifier(name); + } + + public String getName() { + return name; + } + + public int getType() { + return type; + } + + public long getPrecision() { + return precision; + } + + public void setPrecision(long p) { + precision = p; + } + + public int getDisplaySize() { + return displaySize; + } + + public int getScale() { + return scale; + } + + public void setNullable(boolean b) { + nullable = b; + } + + public String[] getEnumerators() { + return enumerators; + } + + public void setEnumerators(String[] enumerators) { + this.enumerators = enumerators; + } + + public boolean getVisible() { + return visible; + } + + public void setVisible(boolean b) { + visible = b; + } + + /** + * Validate the value, convert it if required, and update the sequence value + * if required. 
If the value is null, the default value (NULL if no default + * is set) is returned. Check constraints are validated as well. + * + * @param session the session + * @param value the value or null + * @return the new or converted value + */ + public Value validateConvertUpdateSequence(Session session, Value value) { + // take a local copy of defaultExpression to avoid holding the lock + // while calling getValue + final Expression localDefaultExpression; + synchronized (this) { + localDefaultExpression = defaultExpression; + } + if (value == null) { + if (localDefaultExpression == null) { + value = ValueNull.INSTANCE; + } else { + value = localDefaultExpression.getValue(session).convertTo(type); + session.getGeneratedKeys().add(this); + if (primaryKey) { + session.setLastIdentity(value); + } + } + } + Mode mode = session.getDatabase().getMode(); + if (value == ValueNull.INSTANCE) { + if (convertNullToDefault) { + value = localDefaultExpression.getValue(session).convertTo(type); + session.getGeneratedKeys().add(this); + } + if (value == ValueNull.INSTANCE && !nullable) { + if (mode.convertInsertNullToZero) { + DataType dt = DataType.getDataType(type); + if (dt.decimal) { + value = ValueInt.get(0).convertTo(type); + } else if (dt.type == Value.TIMESTAMP) { + value = ValueTimestamp.fromMillis(session.getTransactionStart()); + } else if (dt.type == Value.TIMESTAMP_TZ) { + long ms = session.getTransactionStart(); + value = ValueTimestampTimeZone.fromDateValueAndNanos( + DateTimeUtils.dateValueFromDate(ms), + DateTimeUtils.nanosFromDate(ms), (short) 0); + } else if (dt.type == Value.TIME) { + value = ValueTime.fromNanos(0); + } else if (dt.type == Value.DATE) { + value = ValueDate.fromMillis(session.getTransactionStart()); + } else { + value = ValueString.get("").convertTo(type); + } + } else { + throw DbException.get(ErrorCode.NULL_NOT_ALLOWED, name); + } + } + } + if (checkConstraint != null) { + resolver.setValue(value); + Value v; + synchronized (this) { + v = 
checkConstraint.getValue(session); + } + // Both TRUE and NULL are ok + if (v != ValueNull.INSTANCE && !v.getBoolean()) { + throw DbException.get( + ErrorCode.CHECK_CONSTRAINT_VIOLATED_1, + checkConstraint.getSQL()); + } + } + value = value.convertScale(mode.convertOnlyToSmallerScale, scale); + if (precision > 0) { + if (!value.checkPrecision(precision)) { + String s = value.getTraceSQL(); + if (s.length() > 127) { + s = s.substring(0, 128) + "..."; + } + throw DbException.get(ErrorCode.VALUE_TOO_LONG_2, + getCreateSQL(), s + " (" + value.getPrecision() + ")"); + } + } + if (isEnumerated() && value != ValueNull.INSTANCE) { + if (!ValueEnum.isValid(enumerators, value)) { + String s = value.getTraceSQL(); + if (s.length() > 127) { + s = s.substring(0, 128) + "..."; + } + throw DbException.get(ErrorCode.ENUM_VALUE_NOT_PERMITTED, + getCreateSQL(), s); + } + + value = ValueEnum.get(enumerators, value.getInt()); + } + updateSequenceIfRequired(session, value); + return value; + } + + private void updateSequenceIfRequired(Session session, Value value) { + if (sequence != null) { + long current = sequence.getCurrentValue(); + long inc = sequence.getIncrement(); + long now = value.getLong(); + boolean update = false; + if (inc > 0 && now > current) { + update = true; + } else if (inc < 0 && now < current) { + update = true; + } + if (update) { + sequence.modify(now + inc, null, null, null); + session.setLastIdentity(ValueLong.get(now)); + sequence.flush(session); + } + } + } + + /** + * Convert the auto-increment flag to a sequence that is linked with this + * table. 
+ * + * @param session the session + * @param schema the schema where the sequence should be generated + * @param id the object id + * @param temporary true if the sequence is temporary and does not need to + * be stored + */ + public void convertAutoIncrementToSequence(Session session, Schema schema, + int id, boolean temporary) { + if (!autoIncrement) { + DbException.throwInternalError(); + } + if ("IDENTITY".equals(originalSQL)) { + originalSQL = "BIGINT"; + } else if ("SERIAL".equals(originalSQL)) { + originalSQL = "INT"; + } + String sequenceName; + do { + ValueUuid uuid = ValueUuid.getNewRandom(); + String s = uuid.getString(); + s = StringUtils.toUpperEnglish(s.replace('-', '_')); + sequenceName = "SYSTEM_SEQUENCE_" + s; + } while (schema.findSequence(sequenceName) != null); + Sequence seq = new Sequence(schema, id, sequenceName, start, increment); + seq.setTemporary(temporary); + session.getDatabase().addSchemaObject(session, seq); + setAutoIncrement(false, 0, 0); + SequenceValue seqValue = new SequenceValue(seq); + setDefaultExpression(session, seqValue); + setSequence(seq); + } + + /** + * Prepare all expressions of this column. 
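The loop in `convertAutoIncrementToSequence` above derives a collision-free sequence name from a random UUID: uppercase it, replace `-` with `_`, and prepend `SYSTEM_SEQUENCE_`. A minimal standalone sketch of that naming scheme, using `java.util.UUID` in place of H2's `ValueUuid`/`StringUtils` (and omitting the `schema.findSequence` retry loop shown above):

```java
import java.util.Locale;
import java.util.UUID;

public class SequenceNames {
    // Mirrors the naming scheme above: uppercase the UUID string and
    // replace '-' with '_' so the result is a plain SQL identifier.
    static String newSequenceName() {
        String s = UUID.randomUUID().toString()
                .replace('-', '_')
                .toUpperCase(Locale.ENGLISH);
        return "SYSTEM_SEQUENCE_" + s;
    }

    public static void main(String[] args) {
        System.out.println(newSequenceName());
    }
}
```

In the real method the generated name is additionally checked against the schema in a loop, so a (vanishingly unlikely) UUID collision cannot produce a duplicate sequence name.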
+ * + * @param session the session + */ + public void prepareExpression(Session session) { + if (defaultExpression != null || onUpdateExpression != null) { + computeTableFilter = new TableFilter(session, table, null, false, null, 0, null); + if (defaultExpression != null) { + defaultExpression.mapColumns(computeTableFilter, 0); + defaultExpression = defaultExpression.optimize(session); + } + if (onUpdateExpression != null) { + onUpdateExpression.mapColumns(computeTableFilter, 0); + onUpdateExpression = onUpdateExpression.optimize(session); + } + } + } + + public String getCreateSQLWithoutName() { + return getCreateSQL(false); + } + + public String getCreateSQL() { + return getCreateSQL(true); + } + + private String getCreateSQL(boolean includeName) { + StringBuilder buff = new StringBuilder(); + if (includeName && name != null) { + buff.append(Parser.quoteIdentifier(name)).append(' '); + } + if (originalSQL != null) { + buff.append(originalSQL); + } else { + buff.append(DataType.getDataType(type).name); + switch (type) { + case Value.DECIMAL: + buff.append('(').append(precision).append(", ").append(scale).append(')'); + break; + case Value.ENUM: + buff.append('('); + for (int i = 0; i < enumerators.length; i++) { + buff.append('\'').append(enumerators[i]).append('\''); + if(i < enumerators.length - 1) { + buff.append(','); + } + } + buff.append(')'); + break; + case Value.BYTES: + case Value.STRING: + case Value.STRING_IGNORECASE: + case Value.STRING_FIXED: + if (precision < Integer.MAX_VALUE) { + buff.append('(').append(precision).append(')'); + } + break; + default: + } + } + + if (!visible) { + buff.append(" INVISIBLE "); + } + + if (defaultExpression != null) { + String sql = defaultExpression.getSQL(); + if (sql != null) { + if (isComputed) { + buff.append(" AS ").append(sql); + } else if (defaultExpression != null) { + buff.append(" DEFAULT ").append(sql); + } + } + } + if (onUpdateExpression != null) { + String sql = onUpdateExpression.getSQL(); + if (sql != 
null) { + buff.append(" ON UPDATE ").append(sql); + } + } + if (!nullable) { + buff.append(" NOT NULL"); + } + if (convertNullToDefault) { + buff.append(" NULL_TO_DEFAULT"); + } + if (sequence != null) { + buff.append(" SEQUENCE ").append(sequence.getSQL()); + } + if (selectivity != 0) { + buff.append(" SELECTIVITY ").append(selectivity); + } + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + if (checkConstraint != null) { + buff.append(" CHECK ").append(checkConstraintSQL); + } + return buff.toString(); + } + + public boolean isNullable() { + return nullable; + } + + public void setOriginalSQL(String original) { + originalSQL = original; + } + + public String getOriginalSQL() { + return originalSQL; + } + + public Expression getDefaultExpression() { + return defaultExpression; + } + + public Expression getOnUpdateExpression() { + return onUpdateExpression; + } + + public boolean isAutoIncrement() { + return autoIncrement; + } + + /** + * Set the autoincrement flag and related properties of this column. + * + * @param autoInc the new autoincrement flag + * @param start the sequence start value + * @param increment the sequence increment + */ + public void setAutoIncrement(boolean autoInc, long start, long increment) { + this.autoIncrement = autoInc; + this.start = start; + this.increment = increment; + this.nullable = false; + if (autoInc) { + convertNullToDefault = true; + } + } + + public void setConvertNullToDefault(boolean convert) { + this.convertNullToDefault = convert; + } + + /** + * Rename the column. This method will only set the column name to the new + * value. + * + * @param newName the new column name + */ + public void rename(String newName) { + this.name = newName; + } + + public void setSequence(Sequence sequence) { + this.sequence = sequence; + } + + public Sequence getSequence() { + return sequence; + } + + /** + * Get the selectivity of the column. 
Selectivity 100 means values are + * unique, 10 means every distinct value appears 10 times on average. + * + * @return the selectivity + */ + public int getSelectivity() { + return selectivity == 0 ? Constants.SELECTIVITY_DEFAULT : selectivity; + } + + /** + * Set the new selectivity of a column. + * + * @param selectivity the new value + */ + public void setSelectivity(int selectivity) { + selectivity = selectivity < 0 ? 0 : (selectivity > 100 ? 100 : selectivity); + this.selectivity = selectivity; + } + + /** + * Add a check constraint expression to this column. An existing check + * constraint constraint is added using AND. + * + * @param session the session + * @param expr the (additional) constraint + */ + public void addCheckConstraint(Session session, Expression expr) { + if (expr == null) { + return; + } + resolver = new SingleColumnResolver(this); + synchronized (this) { + String oldName = name; + if (name == null) { + name = "VALUE"; + } + expr.mapColumns(resolver, 0); + name = oldName; + } + expr = expr.optimize(session); + resolver.setValue(ValueNull.INSTANCE); + // check if the column is mapped + synchronized (this) { + expr.getValue(session); + } + if (checkConstraint == null) { + checkConstraint = expr; + } else { + checkConstraint = new ConditionAndOr(ConditionAndOr.AND, checkConstraint, expr); + } + checkConstraintSQL = getCheckConstraintSQL(session, name); + } + + /** + * Remove the check constraint if there is one. + */ + public void removeCheckConstraint() { + checkConstraint = null; + checkConstraintSQL = null; + } + + /** + * Get the check constraint expression for this column if set. 
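The selectivity accessors above clamp the stored value to the range 0..100 and treat 0 as "not set", falling back to a default. A self-contained sketch of that behavior; the concrete default of 50 is an assumption standing in for `Constants.SELECTIVITY_DEFAULT`, which is not shown in this excerpt:

```java
public class Selectivity {
    // Assumed stand-in for Constants.SELECTIVITY_DEFAULT.
    static final int SELECTIVITY_DEFAULT = 50;

    // 0 means "not set", as in the Column field above.
    private int selectivity;

    // Same clamp as setSelectivity above: values outside 0..100 are pinned.
    void setSelectivity(int s) {
        this.selectivity = s < 0 ? 0 : (s > 100 ? 100 : s);
    }

    // Unset (0) falls back to the default, as in getSelectivity above.
    int getSelectivity() {
        return selectivity == 0 ? SELECTIVITY_DEFAULT : selectivity;
    }

    public static void main(String[] args) {
        Selectivity s = new Selectivity();
        System.out.println(s.getSelectivity());
    }
}
```

Note the consequence of the clamp: an explicit `setSelectivity(0)` (or any negative value) is indistinguishable from "not set" and reads back as the default.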
+ * + * @param session the session + * @param asColumnName the column name to use + * @return the constraint expression + */ + public Expression getCheckConstraint(Session session, String asColumnName) { + if (checkConstraint == null) { + return null; + } + Parser parser = new Parser(session); + String sql; + synchronized (this) { + String oldName = name; + name = asColumnName; + sql = checkConstraint.getSQL(); + name = oldName; + } + return parser.parseExpression(sql); + } + + String getDefaultSQL() { + return defaultExpression == null ? null : defaultExpression.getSQL(); + } + + String getOnUpdateSQL() { + return onUpdateExpression == null ? null : onUpdateExpression.getSQL(); + } + + int getPrecisionAsInt() { + return MathUtils.convertLongToInt(precision); + } + + DataType getDataType() { + return DataType.getDataType(type); + } + + /** + * Get the check constraint SQL snippet. + * + * @param session the session + * @param asColumnName the column name to use + * @return the SQL snippet + */ + String getCheckConstraintSQL(Session session, String asColumnName) { + Expression constraint = getCheckConstraint(session, asColumnName); + return constraint == null ? "" : constraint.getSQL(); + } + + public void setComment(String comment) { + this.comment = comment; + } + + public String getComment() { + return comment; + } + + public void setPrimaryKey(boolean primaryKey) { + this.primaryKey = primaryKey; + } + + /** + * Visit the default expression, the check constraint, and the sequence (if + * any). 
+ * + * @param visitor the visitor + * @return true if every visited expression returned true, or if there are + * no expressions + */ + boolean isEverything(ExpressionVisitor visitor) { + if (visitor.getType() == ExpressionVisitor.GET_DEPENDENCIES) { + if (sequence != null) { + visitor.getDependencies().add(sequence); + } + } + if (defaultExpression != null && !defaultExpression.isEverything(visitor)) { + return false; + } + if (checkConstraint != null && !checkConstraint.isEverything(visitor)) { + return false; + } + return true; + } + + public boolean isPrimaryKey() { + return primaryKey; + } + + @Override + public String toString() { + return name; + } + + /** + * Check whether the new column is of the same type and not more restricted + * than this column. + * + * @param newColumn the new (target) column + * @return true if the new column is compatible + */ + public boolean isWideningConversion(Column newColumn) { + if (type != newColumn.type) { + return false; + } + if (precision > newColumn.precision) { + return false; + } + if (scale != newColumn.scale) { + return false; + } + if (nullable && !newColumn.nullable) { + return false; + } + if (convertNullToDefault != newColumn.convertNullToDefault) { + return false; + } + if (primaryKey != newColumn.primaryKey) { + return false; + } + if (autoIncrement || newColumn.autoIncrement) { + return false; + } + if (checkConstraint != null || newColumn.checkConstraint != null) { + return false; + } + if (convertNullToDefault || newColumn.convertNullToDefault) { + return false; + } + if (defaultExpression != null || newColumn.defaultExpression != null) { + return false; + } + if (isComputed || newColumn.isComputed) { + return false; + } + if (onUpdateExpression != null || newColumn.onUpdateExpression != null) { + return false; + } + return true; + } + + /** + * Copy the data of the source column into the current column. 
+ * + * @param source the source column + */ + public void copy(Column source) { + checkConstraint = source.checkConstraint; + checkConstraintSQL = source.checkConstraintSQL; + displaySize = source.displaySize; + name = source.name; + precision = source.precision; + enumerators = source.enumerators == null ? null : + Arrays.copyOf(source.enumerators, source.enumerators.length); + scale = source.scale; + // table is not set + // columnId is not set + nullable = source.nullable; + defaultExpression = source.defaultExpression; + onUpdateExpression = source.onUpdateExpression; + originalSQL = source.originalSQL; + // autoIncrement, start, increment is not set + convertNullToDefault = source.convertNullToDefault; + sequence = source.sequence; + comment = source.comment; + computeTableFilter = source.computeTableFilter; + isComputed = source.isComputed; + selectivity = source.selectivity; + primaryKey = source.primaryKey; + visible = source.visible; + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/ColumnResolver.java b/modules/h2/src/main/java/org/h2/table/ColumnResolver.java new file mode 100644 index 0000000000000..471f29b2b10cb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/ColumnResolver.java @@ -0,0 +1,93 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import org.h2.command.dml.Select; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.value.Value; + +/** + * A column resolver is list of column (for example, a table) that can map a + * column name to an actual column. + */ +public interface ColumnResolver { + + /** + * Get the table alias. + * + * @return the table alias + */ + String getTableAlias(); + + /** + * Get the column list. 
+ * + * @return the column list + */ + Column[] getColumns(); + + /** + * Get derived column name, or {@code null}. + * + * @param column column + * @return derived column name, or {@code null} + */ + String getDerivedColumnName(Column column); + + /** + * Get the list of system columns, if any. + * + * @return the system columns or null + */ + Column[] getSystemColumns(); + + /** + * Get the row id pseudo column, if there is one. + * + * @return the row id column or null + */ + Column getRowIdColumn(); + + /** + * Get the schema name. + * + * @return the schema name + */ + String getSchemaName(); + + /** + * Get the value for the given column. + * + * @param column the column + * @return the value + */ + Value getValue(Column column); + + /** + * Get the table filter. + * + * @return the table filter + */ + TableFilter getTableFilter(); + + /** + * Get the select statement. + * + * @return the select statement + */ + Select getSelect(); + + /** + * Get the expression that represents this column. + * + * @param expressionColumn the expression column + * @param column the column + * @return the optimized expression + */ + Expression optimize(ExpressionColumn expressionColumn, Column column); + +} diff --git a/modules/h2/src/main/java/org/h2/table/FunctionTable.java b/modules/h2/src/main/java/org/h2/table/FunctionTable.java new file mode 100644 index 0000000000000..facd2db6b2fc7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/FunctionTable.java @@ -0,0 +1,266 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.FunctionCall; +import org.h2.expression.TableFunction; +import org.h2.index.FunctionIndex; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.result.LocalResult; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.schema.Schema; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueNull; +import org.h2.value.ValueResultSet; + +/** + * A table backed by a system or user-defined function that returns a result + * set. + */ +public class FunctionTable extends Table { + + private final FunctionCall function; + private final long rowCount; + private Expression functionExpr; + private LocalResult cachedResult; + private Value cachedValue; + + public FunctionTable(Schema schema, Session session, + Expression functionExpr, FunctionCall function) { + super(schema, 0, function.getName(), false, true); + this.functionExpr = functionExpr; + this.function = function; + if (function instanceof TableFunction) { + rowCount = ((TableFunction) function).getRowCount(); + } else { + rowCount = Long.MAX_VALUE; + } + function.optimize(session); + int type = function.getType(); + if (type != Value.RESULT_SET) { + throw DbException.get( + ErrorCode.FUNCTION_MUST_RETURN_RESULT_SET_1, function.getName()); + } + Expression[] args = function.getArgs(); + int numParams = args.length; + Expression[] columnListArgs = new Expression[numParams]; + for (int i = 0; i < numParams; i++) { + args[i] = args[i].optimize(session); + columnListArgs[i] = args[i]; + } + ValueResultSet template = function.getValueForColumnList( + session, columnListArgs); + if (template == null) { + 
throw DbException.get( + ErrorCode.FUNCTION_MUST_RETURN_RESULT_SET_1, function.getName()); + } + ResultSet rs = template.getResultSet(); + try { + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + Column[] cols = new Column[columnCount]; + for (int i = 0; i < columnCount; i++) { + cols[i] = new Column(meta.getColumnName(i + 1), + DataType.getValueTypeFromResultSet(meta, i + 1), + meta.getPrecision(i + 1), + meta.getScale(i + 1), meta.getColumnDisplaySize(i + 1)); + } + setColumns(cols); + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + public boolean lock(Session session, boolean exclusive, boolean forceLockEvenInMvcc) { + // nothing to do + return false; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void unlock(Session s) { + // nothing to do + } + + @Override + public boolean isLockedExclusively() { + return false; + } + + @Override + public Index addIndex(Session session, String indexName, int indexId, + IndexColumn[] cols, IndexType indexType, boolean create, + String indexComment) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public void removeRow(Session session, Row row) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public boolean canDrop() { + throw DbException.throwInternalError(toString()); + } + + @Override + public void addRow(Session session, Row row) { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public void checkSupportAlter() { + throw DbException.getUnsupportedException("ALIAS"); + } + + @Override + public TableType getTableType() { + return null; + } + + @Override + public Index getScanIndex(Session session) { + return new FunctionIndex(this, IndexColumn.wrap(columns)); + } + + @Override + public ArrayList getIndexes() { 
+ return null; + } + + @Override + public boolean canGetRowCount() { + return rowCount != Long.MAX_VALUE; + } + + @Override + public long getRowCount(Session session) { + return rowCount; + } + + @Override + public String getCreateSQL() { + return null; + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("ALIAS"); + } + + /** + * Read the result from the function. This method buffers the result in a + * temporary file. + * + * @param session the session + * @return the result + */ + public ResultInterface getResult(Session session) { + ValueResultSet v = getValueResultSet(session); + if (v == null) { + return null; + } + if (cachedResult != null && cachedValue == v) { + cachedResult.reset(); + return cachedResult; + } + LocalResult result = LocalResult.read(session, v.getResultSet(), 0); + if (function.isDeterministic()) { + cachedResult = result; + cachedValue = v; + } + return result; + } + + /** + * Read the result set from the function. This method doesn't cache. + * + * @param session the session + * @return the result set + */ + public ResultSet getResultSet(Session session) { + ValueResultSet v = getValueResultSet(session); + return v == null ? 
null : v.getResultSet(); + } + + private ValueResultSet getValueResultSet(Session session) { + functionExpr = functionExpr.optimize(session); + Value v = functionExpr.getValue(session); + if (v == ValueNull.INSTANCE) { + return null; + } + return (ValueResultSet) v; + } + + public boolean isBufferResultSetToLocalTemp() { + return function.isBufferResultSetToLocalTemp(); + } + + @Override + public long getMaxDataModificationId() { + // TODO optimization: table-as-a-function currently doesn't know the + // last modified date + return Long.MAX_VALUE; + } + + @Override + public Index getUniqueIndex() { + return null; + } + + @Override + public String getSQL() { + return function.getSQL(); + } + + @Override + public long getRowCountApproximation() { + return rowCount; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public boolean isDeterministic() { + return function.isDeterministic(); + } + + @Override + public boolean canReference() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/IndexColumn.java b/modules/h2/src/main/java/org/h2/table/IndexColumn.java new file mode 100644 index 0000000000000..04b300d45d09f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/IndexColumn.java @@ -0,0 +1,82 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import org.h2.result.SortOrder; + +/** + * This represents a column item of an index. This is required because some + * indexes support descending sorted columns. + */ +public class IndexColumn { + + /** + * The column name. + */ + public String columnName; + + /** + * The column, or null if not set. + */ + public Column column; + + /** + * The sort type. Ascending (the default) and descending are supported; + * nulls can be sorted first or last. 
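`IndexColumn` encodes the sort direction and null ordering as bit flags combined in `sortType`. A standalone sketch of the suffix logic from `getSQL` below, with flag values assumed to match H2's `SortOrder` constants (`ASCENDING = 0`, i.e. the absence of `DESCENDING = 1`; `NULLS_FIRST = 2`; `NULLS_LAST = 4`):

```java
public class SortSuffix {
    // Assumed SortOrder flag values; ASCENDING is simply DESCENDING not set.
    static final int DESCENDING = 1;
    static final int NULLS_FIRST = 2;
    static final int NULLS_LAST = 4;

    // Mirrors IndexColumn.getSQL(): append " DESC" and at most one
    // NULLS clause, depending on which bits are set.
    static String suffix(int sortType) {
        StringBuilder buff = new StringBuilder();
        if ((sortType & DESCENDING) != 0) {
            buff.append(" DESC");
        }
        if ((sortType & NULLS_FIRST) != 0) {
            buff.append(" NULLS FIRST");
        } else if ((sortType & NULLS_LAST) != 0) {
            buff.append(" NULLS LAST");
        }
        return buff.toString();
    }

    public static void main(String[] args) {
        System.out.println("ID" + suffix(DESCENDING | NULLS_FIRST));
    }
}
```

Because `NULLS FIRST` is checked before `NULLS LAST`, setting both bits silently yields `NULLS FIRST`, matching the `if`/`else if` shape of the original.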
+ */ + public int sortType = SortOrder.ASCENDING; + + /** + * Get the SQL snippet for this index column. + * + * @return the SQL snippet + */ + public String getSQL() { + StringBuilder buff = new StringBuilder(column.getSQL()); + if ((sortType & SortOrder.DESCENDING) != 0) { + buff.append(" DESC"); + } + if ((sortType & SortOrder.NULLS_FIRST) != 0) { + buff.append(" NULLS FIRST"); + } else if ((sortType & SortOrder.NULLS_LAST) != 0) { + buff.append(" NULLS LAST"); + } + return buff.toString(); + } + + /** + * Create an array of index columns from a list of columns. The default sort + * type is used. + * + * @param columns the column list + * @return the index column array + */ + public static IndexColumn[] wrap(Column[] columns) { + IndexColumn[] list = new IndexColumn[columns.length]; + for (int i = 0; i < list.length; i++) { + list[i] = new IndexColumn(); + list[i].column = columns[i]; + } + return list; + } + + /** + * Map the columns using the column names and the specified table. + * + * @param indexColumns the column list with column names set + * @param table the table from where to map the column names to columns + */ + public static void mapColumns(IndexColumn[] indexColumns, Table table) { + for (IndexColumn col : indexColumns) { + col.column = table.getColumn(col.columnName); + } + } + + @Override + public String toString() { + return "IndexColumn " + getSQL(); + } +} diff --git a/modules/h2/src/main/java/org/h2/table/IndexHints.java b/modules/h2/src/main/java/org/h2/table/IndexHints.java new file mode 100644 index 0000000000000..0b83731580177 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/IndexHints.java @@ -0,0 +1,57 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import org.h2.index.Index; + +import java.util.LinkedHashSet; +import java.util.Set; + +/** + * Contains the hints for which index to use for a specific table. Currently + * allows a list of "use indexes" to be specified. + *
+ * <p>
    + * Use the factory method IndexHints.createUseIndexHints(listOfIndexes) to limit + * the query planner to only use specific indexes when determining which index + * to use for a table + **/ +public final class IndexHints { + + private final LinkedHashSet allowedIndexes; + + private IndexHints(LinkedHashSet allowedIndexes) { + this.allowedIndexes = allowedIndexes; + } + + /** + * Create an index hint object. + * + * @param allowedIndexes the set of allowed indexes + * @return the hint object + */ + public static IndexHints createUseIndexHints(LinkedHashSet allowedIndexes) { + return new IndexHints(allowedIndexes); + } + + public Set getAllowedIndexes() { + return allowedIndexes; + } + + @Override + public String toString() { + return "IndexHints{allowedIndexes=" + allowedIndexes + '}'; + } + + /** + * Allow an index to be used. + * + * @param index the index + * @return whether it was already allowed + */ + public boolean allowIndex(Index index) { + return allowedIndexes.contains(index.getName()); + } +} diff --git a/modules/h2/src/main/java/org/h2/table/JoinBatch.java b/modules/h2/src/main/java/org/h2/table/JoinBatch.java new file mode 100644 index 0000000000000..8f5ed93a4c460 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/JoinBatch.java @@ -0,0 +1,1127 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.AbstractList; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.Future; +import org.h2.command.dml.Query; +import org.h2.command.dml.Select; +import org.h2.command.dml.SelectUnion; +import org.h2.index.BaseIndex; +import org.h2.index.Cursor; +import org.h2.index.IndexCursor; +import org.h2.index.IndexLookupBatch; +import org.h2.index.ViewCursor; +import org.h2.index.ViewIndex; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.util.DoneFuture; +import org.h2.util.LazyFuture; +import org.h2.util.New; +import org.h2.value.Value; +import org.h2.value.ValueLong; + +/** + * Support for asynchronous batched index lookups on joins. + * + * @see BaseIndex#createLookupBatch(org.h2.table.TableFilter[], int) + * @see IndexLookupBatch + * @author Sergi Vladykin + */ +public final class JoinBatch { + + /** + * An empty cursor. + */ + static final Cursor EMPTY_CURSOR = new Cursor() { + @Override + public boolean previous() { + return false; + } + + @Override + public boolean next() { + return false; + } + + @Override + public SearchRow getSearchRow() { + return null; + } + + @Override + public Row get() { + return null; + } + + @Override + public String toString() { + return "EMPTY_CURSOR"; + } + }; + + /** + * An empty future cursor. + */ + static final Future EMPTY_FUTURE_CURSOR = new DoneFuture<>(EMPTY_CURSOR); + + /** + * The top cursor. + */ + Future viewTopFutureCursor; + + /** + * The top filter. + */ + JoinFilter top; + + /** + * The filters. + */ + final JoinFilter[] filters; + + /** + * Whether this is a batched subquery. 
+ */ + boolean batchedSubQuery; + + private boolean started; + + private JoinRow current; + private boolean found; + + /** + * This filter joined after this batched join and can be used normally. + */ + private final TableFilter additionalFilter; + + /** + * @param filtersCount number of filters participating in this batched join + * @param additionalFilter table filter after this batched join. + */ + public JoinBatch(int filtersCount, TableFilter additionalFilter) { + if (filtersCount > 32) { + // This is because we store state in a 64 bit field, 2 bits per + // joined table. + throw DbException.getUnsupportedException( + "Too many tables in join (at most 32 supported)."); + } + filters = new JoinFilter[filtersCount]; + this.additionalFilter = additionalFilter; + } + + /** + * Get the lookup batch for the given table filter. + * + * @param joinFilterId joined table filter id + * @return lookup batch + */ + public IndexLookupBatch getLookupBatch(int joinFilterId) { + return filters[joinFilterId].lookupBatch; + } + + /** + * Reset state of this batch. + * + * @param beforeQuery {@code true} if reset was called before the query run, + * {@code false} if after + */ + public void reset(boolean beforeQuery) { + current = null; + started = false; + found = false; + for (JoinFilter jf : filters) { + jf.reset(beforeQuery); + } + if (beforeQuery && additionalFilter != null) { + additionalFilter.reset(); + } + } + + /** + * Register the table filter and lookup batch. + * + * @param filter table filter + * @param lookupBatch lookup batch + */ + public void register(TableFilter filter, IndexLookupBatch lookupBatch) { + assert filter != null; + top = new JoinFilter(lookupBatch, filter, top); + filters[top.id] = top; + } + + /** + * Get the value for the given column. 
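The `JoinBatch` constructor above rejects joins of more than 32 tables because, per its comment, per-table state is packed at 2 bits each into a single 64-bit field. The excerpt does not show that field's actual layout, so the following is a purely hypothetical sketch of such 2-bits-per-table packing, illustrating why 64 / 2 = 32 is the limit:

```java
public class JoinState {
    // Hypothetical packing: 2 bits of state per joined table in one long,
    // so at most 64 / 2 = 32 tables fit - the limit enforced above.
    static long setState(long state, int tableId, int s) {
        int shift = tableId * 2;
        // Clear the table's 2-bit slot, then set the new 2-bit value.
        return (state & ~(3L << shift)) | ((long) (s & 3) << shift);
    }

    static int getState(long state, int tableId) {
        // Unsigned shift so the topmost slot (tableId 31) reads correctly.
        return (int) (state >>> (tableId * 2)) & 3;
    }

    public static void main(String[] args) {
        long st = setState(0L, 31, 2);
        System.out.println(getState(st, 31));
    }
}
```

The unsigned shift (`>>>`) matters for table id 31, whose slot occupies the sign bit of the `long`.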
+ * + * @param filterId table filter id + * @param column the column + * @return column value for current row + */ + public Value getValue(int filterId, Column column) { + if (current == null) { + return null; + } + Object x = current.row(filterId); + assert x != null; + Row row = current.isRow(filterId) ? (Row) x : ((Cursor) x).get(); + int columnId = column.getColumnId(); + if (columnId == -1) { + return ValueLong.get(row.getKey()); + } + Value value = row.getValue(column.getColumnId()); + if (value == null) { + throw DbException.throwInternalError("value is null: " + column + " " + row); + } + return value; + } + + private void start() { + // initialize current row + current = new JoinRow(new Object[filters.length]); + // initialize top cursor + Cursor cursor; + if (batchedSubQuery) { + assert viewTopFutureCursor != null; + cursor = get(viewTopFutureCursor); + } else { + // setup usual index cursor + TableFilter f = top.filter; + IndexCursor indexCursor = f.getIndexCursor(); + indexCursor.find(f.getSession(), f.getIndexConditions()); + cursor = indexCursor; + } + current.updateRow(top.id, cursor, JoinRow.S_NULL, JoinRow.S_CURSOR); + // we need fake first row because batchedNext always will move to the + // next row + JoinRow fake = new JoinRow(null); + fake.next = current; + current = fake; + } + + /** + * Get next row from the join batch. 
+ * + * @return true if there is a next row + */ + public boolean next() { + if (!started) { + start(); + started = true; + } + if (additionalFilter == null) { + if (batchedNext()) { + assert current.isComplete(); + return true; + } + return false; + } + while (true) { + if (!found) { + if (!batchedNext()) { + return false; + } + assert current.isComplete(); + found = true; + additionalFilter.reset(); + } + // we call furtherFilter in usual way outside of this batch because + // it is more effective + if (additionalFilter.next()) { + return true; + } + found = false; + } + } + + private static Cursor get(Future f) { + Cursor c; + try { + c = f.get(); + } catch (Exception e) { + throw DbException.convert(e); + } + return c == null ? EMPTY_CURSOR : c; + } + + private boolean batchedNext() { + if (current == null) { + // after last + return false; + } + // go next + current = current.next; + if (current == null) { + return false; + } + current.prev = null; + + final int lastJfId = filters.length - 1; + + int jfId = lastJfId; + while (current.row(jfId) == null) { + // lookup for the first non fetched filter for the current row + jfId--; + } + + while (true) { + fetchCurrent(jfId); + + if (!current.isDropped()) { + // if current was not dropped then it must be fetched + // successfully + if (jfId == lastJfId) { + // the whole join row is ready to be returned + return true; + } + JoinFilter join = filters[jfId + 1]; + if (join.isBatchFull()) { + // get future cursors for join and go right to fetch them + current = join.find(current); + } + if (current.row(join.id) != null) { + // either find called or outer join with null-row + jfId = join.id; + continue; + } + } + // we have to go down and fetch next cursors for jfId if it is + // possible + if (current.next == null) { + // either dropped or null-row + if (current.isDropped()) { + current = current.prev; + if (current == null) { + return false; + } + } + assert !current.isDropped(); + assert jfId != lastJfId; + + jfId = 
0; + while (current.row(jfId) != null) { + jfId++; + } + // force find on half filled batch (there must be either + // searchRows or Cursor.EMPTY set for null-rows) + current = filters[jfId].find(current); + } else { + // here we don't care if the current was dropped + current = current.next; + assert !current.isRow(jfId); + while (current.row(jfId) == null) { + assert jfId != top.id; + // need to go left and fetch more search rows + jfId--; + assert !current.isRow(jfId); + } + } + } + } + + @SuppressWarnings("unchecked") + private void fetchCurrent(final int jfId) { + assert current.prev == null || current.prev.isRow(jfId) : "prev must be already fetched"; + assert jfId == 0 || current.isRow(jfId - 1) : "left must be already fetched"; + + assert !current.isRow(jfId) : "double fetching"; + + Object x = current.row(jfId); + assert x != null : "x null"; + + // in case of outer join we don't have any future around empty cursor + boolean newCursor = x == EMPTY_CURSOR; + + if (newCursor) { + if (jfId == 0) { + // the top cursor is new and empty, then the whole select will + // not produce any rows + current.drop(); + return; + } + } else if (current.isFuture(jfId)) { + // get cursor from a future + x = get((Future) x); + current.updateRow(jfId, x, JoinRow.S_FUTURE, JoinRow.S_CURSOR); + newCursor = true; + } + + final JoinFilter jf = filters[jfId]; + Cursor c = (Cursor) x; + assert c != null; + JoinFilter join = jf.join; + + while (true) { + if (c == null || !c.next()) { + if (newCursor && jf.isOuterJoin()) { + // replace cursor with null-row + current.updateRow(jfId, jf.getNullRow(), JoinRow.S_CURSOR, JoinRow.S_ROW); + c = null; + newCursor = false; + } else { + // cursor is done, drop it + current.drop(); + return; + } + } + if (!jf.isOk(c == null)) { + // try another row from the cursor + continue; + } + boolean joinEmpty = false; + if (join != null && !join.collectSearchRows()) { + if (join.isOuterJoin()) { + joinEmpty = true; + } else { + // join will fail, try next 
row in the cursor + continue; + } + } + if (c != null) { + current = current.copyBehind(jfId); + // update jf, set current row from cursor + current.updateRow(jfId, c.get(), JoinRow.S_CURSOR, JoinRow.S_ROW); + } + if (joinEmpty) { + // update jf.join, set an empty cursor + current.updateRow(join.id, EMPTY_CURSOR, JoinRow.S_NULL, JoinRow.S_CURSOR); + } + return; + } + } + + /** + * @return Adapter to allow joining to this batch in sub-queries and views. + */ + private IndexLookupBatch viewIndexLookupBatch(ViewIndex viewIndex) { + return new ViewIndexLookupBatch(viewIndex); + } + + /** + * Create index lookup batch for a view index. + * + * @param viewIndex view index + * @return index lookup batch or {@code null} if batching is not supported + * for this query + */ + public static IndexLookupBatch createViewIndexLookupBatch(ViewIndex viewIndex) { + Query query = viewIndex.getQuery(); + if (query.isUnion()) { + ViewIndexLookupBatchUnion unionBatch = new ViewIndexLookupBatchUnion(viewIndex); + return unionBatch.initialize() ? unionBatch : null; + } + JoinBatch jb = ((Select) query).getJoinBatch(); + if (jb == null || jb.getLookupBatch(0) == null) { + // our sub-query is not batched or is top batched sub-query + return null; + } + assert !jb.batchedSubQuery; + jb.batchedSubQuery = true; + return jb.viewIndexLookupBatch(viewIndex); + } + + /** + * Create fake index lookup batch for non-batched table filter. + * + * @param filter the table filter + * @return fake index lookup batch + */ + public static IndexLookupBatch createFakeIndexLookupBatch(TableFilter filter) { + return new FakeLookupBatch(filter); + } + + @Override + public String toString() { + return "JoinBatch->\n" + "prev->" + (current == null ? null : current.prev) + + "\n" + "curr->" + current + + "\n" + "next->" + (current == null ? null : current.next); + } + + /** + * Table filter participating in batched join. 
+ */ + private static final class JoinFilter { + final IndexLookupBatch lookupBatch; + final int id; + final JoinFilter join; + final TableFilter filter; + + JoinFilter(IndexLookupBatch lookupBatch, TableFilter filter, JoinFilter join) { + this.filter = filter; + this.id = filter.getJoinFilterId(); + this.join = join; + this.lookupBatch = lookupBatch; + assert lookupBatch != null || id == 0; + } + + void reset(boolean beforeQuery) { + if (lookupBatch != null) { + lookupBatch.reset(beforeQuery); + } + } + + Row getNullRow() { + return filter.getTable().getNullRow(); + } + + boolean isOuterJoin() { + return filter.isJoinOuter(); + } + + boolean isBatchFull() { + return lookupBatch.isBatchFull(); + } + + boolean isOk(boolean ignoreJoinCondition) { + boolean filterOk = filter.isOk(filter.getFilterCondition()); + boolean joinOk = filter.isOk(filter.getJoinCondition()); + + return filterOk && (ignoreJoinCondition || joinOk); + } + + boolean collectSearchRows() { + assert !isBatchFull(); + IndexCursor c = filter.getIndexCursor(); + c.prepare(filter.getSession(), filter.getIndexConditions()); + if (c.isAlwaysFalse()) { + return false; + } + return lookupBatch.addSearchRows(c.getStart(), c.getEnd()); + } + + List> find() { + return lookupBatch.find(); + } + + JoinRow find(JoinRow current) { + assert current != null; + + // lookupBatch is allowed to be empty when we have some null-rows + // and forced find call + List> result = lookupBatch.find(); + + // go backwards and assign futures + for (int i = result.size(); i > 0;) { + assert current.isRow(id - 1); + if (current.row(id) == EMPTY_CURSOR) { + // outer join support - skip row with existing empty cursor + current = current.prev; + continue; + } + assert current.row(id) == null; + Future future = result.get(--i); + if (future == null) { + current.updateRow(id, EMPTY_CURSOR, JoinRow.S_NULL, JoinRow.S_CURSOR); + } else { + current.updateRow(id, future, JoinRow.S_NULL, JoinRow.S_FUTURE); + } + if (current.prev == null || i 
== 0) { + break; + } + current = current.prev; + } + + // handle empty cursors (because of outer joins) at the beginning + while (current.prev != null && current.prev.row(id) == EMPTY_CURSOR) { + current = current.prev; + } + assert current.prev == null || current.prev.isRow(id); + assert current.row(id) != null; + assert !current.isRow(id); + + // the last updated row + return current; + } + + @Override + public String toString() { + return "JoinFilter->" + filter; + } + } + + /** + * Linked row in batched join. + */ + private static final class JoinRow { + private static final long S_NULL = 0; + private static final long S_FUTURE = 1; + private static final long S_CURSOR = 2; + private static final long S_ROW = 3; + + private static final long S_MASK = 3; + + JoinRow prev; + JoinRow next; + + /** + * May contain one of the following: + *

+     * <ul>
+     * <li>{@code null}: means that we need to get future cursor
+     * for this row</li>
+     * <li>{@link Future}: means that we need to get a new {@link Cursor}
+     * from the {@link Future}</li>
+     * <li>{@link Cursor}: means that we need to fetch {@link Row}s from the
+     * {@link Cursor}</li>
+     * <li>{@link Row}: the {@link Row} is already fetched and is ready to
+     * be used</li>
+     * </ul>
    + */ + private Object[] row; + private long state; + + /** + * @param row Row. + */ + JoinRow(Object[] row) { + this.row = row; + } + + /** + * @param joinFilterId Join filter id. + * @return Row state. + */ + private long getState(int joinFilterId) { + return (state >>> (joinFilterId << 1)) & S_MASK; + } + + /** + * Allows to do a state transition in the following order: + * 0. Slot contains {@code null} ({@link #S_NULL}). + * 1. Slot contains {@link Future} ({@link #S_FUTURE}). + * 2. Slot contains {@link Cursor} ({@link #S_CURSOR}). + * 3. Slot contains {@link Row} ({@link #S_ROW}). + * + * @param joinFilterId {@link JoinRow} filter id. + * @param i Increment by this number of moves. + */ + private void incrementState(int joinFilterId, long i) { + assert i > 0 : i; + state += i << (joinFilterId << 1); + } + + void updateRow(int joinFilterId, Object x, long oldState, long newState) { + assert getState(joinFilterId) == oldState : "old state: " + getState(joinFilterId); + row[joinFilterId] = x; + incrementState(joinFilterId, newState - oldState); + assert getState(joinFilterId) == newState : "new state: " + getState(joinFilterId); + } + + Object row(int joinFilterId) { + return row[joinFilterId]; + } + + boolean isRow(int joinFilterId) { + return getState(joinFilterId) == S_ROW; + } + + boolean isFuture(int joinFilterId) { + return getState(joinFilterId) == S_FUTURE; + } + + private boolean isCursor(int joinFilterId) { + return getState(joinFilterId) == S_CURSOR; + } + + boolean isComplete() { + return isRow(row.length - 1); + } + + boolean isDropped() { + return row == null; + } + + void drop() { + if (prev != null) { + prev.next = next; + } + if (next != null) { + next.prev = prev; + } + row = null; + } + + /** + * Copy this JoinRow behind itself in linked list of all in progress + * rows. + * + * @param jfId The last fetched filter id. + * @return The copy. 
+ */ + JoinRow copyBehind(int jfId) { + assert isCursor(jfId); + assert jfId + 1 == row.length || row[jfId + 1] == null; + + Object[] r = new Object[row.length]; + if (jfId != 0) { + System.arraycopy(row, 0, r, 0, jfId); + } + JoinRow copy = new JoinRow(r); + copy.state = state; + + if (prev != null) { + copy.prev = prev; + prev.next = copy; + } + prev = copy; + copy.next = this; + + return copy; + } + + @Override + public String toString() { + return "JoinRow->" + Arrays.toString(row); + } + } + + /** + * Fake Lookup batch for indexes which do not support batching but have to + * participate in batched joins. + */ + private static final class FakeLookupBatch implements IndexLookupBatch { + private final TableFilter filter; + + private SearchRow first; + private SearchRow last; + + private boolean full; + + private final List> result = new SingletonList<>(); + + FakeLookupBatch(TableFilter filter) { + this.filter = filter; + } + + @Override + public String getPlanSQL() { + return "fake"; + } + + @Override + public void reset(boolean beforeQuery) { + full = false; + first = last = null; + result.set(0, null); + } + + @Override + public boolean addSearchRows(SearchRow first, SearchRow last) { + assert !full; + this.first = first; + this.last = last; + full = true; + return true; + } + + @Override + public boolean isBatchFull() { + return full; + } + + @Override + public List> find() { + if (!full) { + return Collections.emptyList(); + } + Cursor c = filter.getIndex().find(filter, first, last); + result.set(0, new DoneFuture<>(c)); + full = false; + first = last = null; + return result; + } + } + + /** + * Simple singleton list. + * @param Element type. 
+     */
+    static final class SingletonList<E> extends AbstractList<E> {
+        private E element;
+
+        @Override
+        public E get(int index) {
+            assert index == 0;
+            return element;
+        }
+
+        @Override
+        public E set(int index, E element) {
+            assert index == 0;
+            this.element = element;
+            return null;
+        }
+
+        @Override
+        public int size() {
+            return 1;
+        }
+    }
+
+    /**
+     * Base class for SELECT and SELECT UNION view index lookup batches.
+     * @param <R> Runner type.
+     */
+    private abstract static class ViewIndexLookupBatchBase<R extends QueryRunnerBase>
+            implements IndexLookupBatch {
+        protected final ViewIndex viewIndex;
+        private final ArrayList<Future<Cursor>> result = New.arrayList();
+        private int resultSize;
+        private boolean findCalled;
+
+        protected ViewIndexLookupBatchBase(ViewIndex viewIndex) {
+            this.viewIndex = viewIndex;
+        }
+
+        @Override
+        public String getPlanSQL() {
+            return "view";
+        }
+
+        protected abstract boolean collectSearchRows(R r);
+
+        protected abstract R newQueryRunner();
+
+        protected abstract void startQueryRunners(int resultSize);
+
+        protected final boolean resetAfterFind() {
+            if (!findCalled) {
+                return false;
+            }
+            findCalled = false;
+            // method find was called, we need to reset futures to initial state
+            // for reuse
+            for (int i = 0; i < resultSize; i++) {
+                queryRunner(i).reset();
+            }
+            resultSize = 0;
+            return true;
+        }
+
+        @SuppressWarnings("unchecked")
+        protected R queryRunner(int i) {
+            return (R) result.get(i);
+        }
+
+        @Override
+        public final boolean addSearchRows(SearchRow first, SearchRow last) {
+            resetAfterFind();
+            viewIndex.setupQueryParameters(viewIndex.getSession(), first, last, null);
+            R r;
+            if (resultSize < result.size()) {
+                // get reused runner
+                r = queryRunner(resultSize);
+            } else {
+                // create new runner
+                result.add(r = newQueryRunner());
+            }
+            r.first = first;
+            r.last = last;
+            if (!collectSearchRows(r)) {
+                r.clear();
+                return false;
+            }
+            resultSize++;
+            return true;
+        }
+
+        @Override
+        public void reset(boolean beforeQuery) {
+            if (resultSize != 0 && !resetAfterFind()) {
+                // find was not called, need to just clear runners
+                for (int i = 0; i < resultSize; i++) {
+                    queryRunner(i).clear();
+                }
+                resultSize = 0;
+            }
+        }
+
+        @Override
+        public final List<Future<Cursor>> find() {
+            if (resultSize == 0) {
+                return Collections.emptyList();
+            }
+            findCalled = true;
+            startQueryRunners(resultSize);
+            return resultSize == result.size() ? result : result.subList(0, resultSize);
+        }
+    }
+
+    /**
+     * Lazy query runner base for subqueries and views.
+     */
+    private abstract static class QueryRunnerBase extends LazyFuture<Cursor> {
+        protected final ViewIndex viewIndex;
+        protected SearchRow first;
+        protected SearchRow last;
+        private boolean isLazyResult;
+
+        QueryRunnerBase(ViewIndex viewIndex) {
+            this.viewIndex = viewIndex;
+        }
+
+        protected void clear() {
+            first = last = null;
+        }
+
+        @Override
+        public final boolean reset() {
+            if (isLazyResult) {
+                resetViewTopFutureCursorAfterQuery();
+            }
+            if (super.reset()) {
+                return true;
+            }
+            // this query runner was never executed, need to clear manually
+            clear();
+            return false;
+        }
+
+        protected final ViewCursor newCursor(ResultInterface localResult) {
+            isLazyResult = localResult.isLazy();
+            ViewCursor cursor = new ViewCursor(viewIndex, localResult, first, last);
+            clear();
+            return cursor;
+        }
+
+        protected abstract void resetViewTopFutureCursorAfterQuery();
+    }
+
+    /**
+     * View index lookup batch for a simple SELECT.
+     */
+    private final class ViewIndexLookupBatch extends ViewIndexLookupBatchBase<QueryRunner> {
+        ViewIndexLookupBatch(ViewIndex viewIndex) {
+            super(viewIndex);
+        }
+
+        @Override
+        protected QueryRunner newQueryRunner() {
+            return new QueryRunner(viewIndex);
+        }
+
+        @Override
+        protected boolean collectSearchRows(QueryRunner r) {
+            return top.collectSearchRows();
+        }
+
+        @Override
+        public boolean isBatchFull() {
+            return top.isBatchFull();
+        }
+
+        @Override
+        protected void startQueryRunners(int resultSize) {
+            // we do batched find only for top table filter and then lazily run
+            // the ViewIndex query for each received top future cursor
+            List<Future<Cursor>> topFutureCursors = top.find();
+            if (topFutureCursors.size() != resultSize) {
+                throw DbException
+                        .throwInternalError("Unexpected result size: " +
+                                topFutureCursors.size() + ", expected :" +
+                                resultSize);
+            }
+            for (int i = 0; i < resultSize; i++) {
+                QueryRunner r = queryRunner(i);
+                r.topFutureCursor = topFutureCursors.get(i);
+            }
+        }
+    }
+
+    /**
+     * Query runner for SELECT.
+     */
+    private final class QueryRunner extends QueryRunnerBase {
+        Future<Cursor> topFutureCursor;
+
+        QueryRunner(ViewIndex viewIndex) {
+            super(viewIndex);
+        }
+
+        @Override
+        protected void clear() {
+            super.clear();
+            topFutureCursor = null;
+        }
+
+        @Override
+        protected Cursor run() throws Exception {
+            if (topFutureCursor == null) {
+                // if the top cursor is empty then the whole query will produce
+                // empty result
+                return EMPTY_CURSOR;
+            }
+            viewIndex.setupQueryParameters(viewIndex.getSession(), first, last, null);
+            JoinBatch.this.viewTopFutureCursor = topFutureCursor;
+            ResultInterface localResult;
+            boolean lazy = false;
+            try {
+                localResult = viewIndex.getQuery().query(0);
+                lazy = localResult.isLazy();
+            } finally {
+                if (!lazy) {
+                    resetViewTopFutureCursorAfterQuery();
+                }
+            }
+            return newCursor(localResult);
+        }
+
+        @Override
+        protected void resetViewTopFutureCursorAfterQuery() {
+            JoinBatch.this.viewTopFutureCursor = null;
+        }
+    }
+
+    /**
+     * View index lookup batch for UNION queries.
+ */ + private static final class ViewIndexLookupBatchUnion + extends ViewIndexLookupBatchBase { + ArrayList filters; + ArrayList joinBatches; + private boolean onlyBatchedQueries = true; + + protected ViewIndexLookupBatchUnion(ViewIndex viewIndex) { + super(viewIndex); + } + + boolean initialize() { + return collectJoinBatches(viewIndex.getQuery()) && joinBatches != null; + } + + private boolean collectJoinBatches(Query query) { + if (query.isUnion()) { + SelectUnion union = (SelectUnion) query; + return collectJoinBatches(union.getLeft()) && + collectJoinBatches(union.getRight()); + } + Select select = (Select) query; + JoinBatch jb = select.getJoinBatch(); + if (jb == null) { + onlyBatchedQueries = false; + } else { + if (jb.getLookupBatch(0) == null) { + // we are top sub-query + return false; + } + assert !jb.batchedSubQuery; + jb.batchedSubQuery = true; + if (joinBatches == null) { + joinBatches = New.arrayList(); + filters = New.arrayList(); + } + filters.add(jb.filters[0]); + joinBatches.add(jb); + } + return true; + } + + @Override + public boolean isBatchFull() { + // if at least one is full + for (JoinFilter filter : filters) { + if (filter.isBatchFull()) { + return true; + } + } + return false; + } + + @Override + protected boolean collectSearchRows(QueryRunnerUnion r) { + boolean collected = false; + for (int i = 0; i < filters.size(); i++) { + if (filters.get(i).collectSearchRows()) { + collected = true; + } else { + r.topFutureCursors[i] = EMPTY_FUTURE_CURSOR; + } + } + return collected || !onlyBatchedQueries; + } + + @Override + protected QueryRunnerUnion newQueryRunner() { + return new QueryRunnerUnion(this); + } + + @Override + protected void startQueryRunners(int resultSize) { + for (int f = 0; f < filters.size(); f++) { + List> topFutureCursors = filters.get(f).find(); + int r = 0, c = 0; + for (; r < resultSize; r++) { + Future[] cs = queryRunner(r).topFutureCursors; + if (cs[f] == null) { + cs[f] = topFutureCursors.get(c++); + } + } + assert r 
== resultSize; + assert c == topFutureCursors.size(); + } + } + } + + /** + * Query runner for UNION. + */ + private static class QueryRunnerUnion extends QueryRunnerBase { + Future[] topFutureCursors; + private ViewIndexLookupBatchUnion batchUnion; + + @SuppressWarnings("unchecked") + QueryRunnerUnion(ViewIndexLookupBatchUnion batchUnion) { + super(batchUnion.viewIndex); + this.batchUnion = batchUnion; + topFutureCursors = new Future[batchUnion.filters.size()]; + } + + @Override + protected void clear() { + super.clear(); + for (int i = 0; i < topFutureCursors.length; i++) { + topFutureCursors[i] = null; + } + } + + @Override + protected Cursor run() throws Exception { + viewIndex.setupQueryParameters(viewIndex.getSession(), first, last, null); + ArrayList joinBatches = batchUnion.joinBatches; + for (int i = 0, size = joinBatches.size(); i < size; i++) { + assert topFutureCursors[i] != null; + joinBatches.get(i).viewTopFutureCursor = topFutureCursors[i]; + } + ResultInterface localResult; + boolean lazy = false; + try { + localResult = viewIndex.getQuery().query(0); + lazy = localResult.isLazy(); + } finally { + if (!lazy) { + resetViewTopFutureCursorAfterQuery(); + } + } + return newCursor(localResult); + } + + @Override + protected void resetViewTopFutureCursorAfterQuery() { + ArrayList joinBatches = batchUnion.joinBatches; + if (joinBatches == null) { + return; + } + for (JoinBatch joinBatch : joinBatches) { + joinBatch.viewTopFutureCursor = null; + } + } + } +} + diff --git a/modules/h2/src/main/java/org/h2/table/LinkSchema.java b/modules/h2/src/main/java/org/h2/table/LinkSchema.java new file mode 100644 index 0000000000000..5766790f8e31c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/LinkSchema.java @@ -0,0 +1,99 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import org.h2.message.DbException; +import org.h2.tools.SimpleResultSet; +import org.h2.util.JdbcUtils; +import org.h2.util.StringUtils; + +/** + * A utility class to create table links for a whole schema. + */ +public class LinkSchema { + + private LinkSchema() { + // utility class + } + + /** + * Link all tables of a schema to the database. + * + * @param conn the connection to the database where the links are to be + * created + * @param targetSchema the schema name where the objects should be created + * @param driver the driver class name of the linked database + * @param url the database URL of the linked database + * @param user the user name + * @param password the password + * @param sourceSchema the schema where the existing tables are + * @return a result set with the created tables + */ + public static ResultSet linkSchema(Connection conn, String targetSchema, + String driver, String url, String user, String password, + String sourceSchema) { + Connection c2 = null; + Statement stat = null; + ResultSet rs = null; + SimpleResultSet result = new SimpleResultSet(); + result.setAutoClose(false); + result.addColumn("TABLE_NAME", Types.VARCHAR, Integer.MAX_VALUE, 0); + try { + c2 = JdbcUtils.getConnection(driver, url, user, password); + stat = conn.createStatement(); + stat.execute("CREATE SCHEMA IF NOT EXISTS " + + StringUtils.quoteIdentifier(targetSchema)); + //Workaround for PostgreSQL to avoid index names + if (url.startsWith("jdbc:postgresql:")) { + rs = c2.getMetaData().getTables(null, sourceSchema, null, + new String[] { "TABLE", "LINKED TABLE", "VIEW", "EXTERNAL" }); + } else { + rs = c2.getMetaData().getTables(null, sourceSchema, null, null); + } + while (rs.next()) { + String table = rs.getString("TABLE_NAME"); + StringBuilder buff = new 
StringBuilder(); + buff.append("DROP TABLE IF EXISTS "). + append(StringUtils.quoteIdentifier(targetSchema)). + append('.'). + append(StringUtils.quoteIdentifier(table)); + stat.execute(buff.toString()); + buff = new StringBuilder(); + buff.append("CREATE LINKED TABLE "). + append(StringUtils.quoteIdentifier(targetSchema)). + append('.'). + append(StringUtils.quoteIdentifier(table)). + append('('). + append(StringUtils.quoteStringSQL(driver)). + append(", "). + append(StringUtils.quoteStringSQL(url)). + append(", "). + append(StringUtils.quoteStringSQL(user)). + append(", "). + append(StringUtils.quoteStringSQL(password)). + append(", "). + append(StringUtils.quoteStringSQL(sourceSchema)). + append(", "). + append(StringUtils.quoteStringSQL(table)). + append(')'); + stat.execute(buff.toString()); + result.addRow(table); + } + } catch (SQLException e) { + throw DbException.convert(e); + } finally { + JdbcUtils.closeSilently(rs); + JdbcUtils.closeSilently(c2); + JdbcUtils.closeSilently(stat); + } + return result; + } +} diff --git a/modules/h2/src/main/java/org/h2/table/MetaTable.java b/modules/h2/src/main/java/org/h2/table/MetaTable.java new file mode 100644 index 0000000000000..5373d32e7ef59 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/MetaTable.java @@ -0,0 +1,2345 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStreamReader; +import java.io.Reader; +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.Timestamp; +import java.text.Collator; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.Locale; +import org.h2.command.Command; +import org.h2.constraint.Constraint; +import org.h2.constraint.ConstraintActionType; +import org.h2.constraint.ConstraintCheck; +import org.h2.constraint.ConstraintReferential; +import org.h2.constraint.ConstraintUnique; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.FunctionAlias; +import org.h2.engine.FunctionAlias.JavaMethod; +import org.h2.engine.QueryStatisticsData; +import org.h2.engine.Right; +import org.h2.engine.Role; +import org.h2.engine.Session; +import org.h2.engine.Setting; +import org.h2.engine.User; +import org.h2.engine.UserAggregate; +import org.h2.engine.UserDataType; +import org.h2.expression.ValueExpression; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.index.MetaIndex; +import org.h2.index.MultiVersionIndex; +import org.h2.jdbc.JdbcSQLException; +import org.h2.message.DbException; +import org.h2.mvstore.FileStore; +import org.h2.mvstore.db.MVTableEngine.Store; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.schema.Constant; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObject; +import org.h2.schema.Sequence; +import org.h2.schema.TriggerObject; +import org.h2.store.InDoubtTransaction; +import org.h2.store.PageStore; +import org.h2.tools.Csv; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.util.Utils; +import org.h2.value.CompareMode; 
+import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueNull; +import org.h2.value.ValueString; +import org.h2.value.ValueStringIgnoreCase; + +/** + * This class is responsible to build the database meta data pseudo tables. + */ +public class MetaTable extends Table { + + /** + * The approximate number of rows of a meta table. + */ + public static final long ROW_COUNT_APPROXIMATION = 1000; + + private static final String CHARACTER_SET_NAME = "Unicode"; + + private static final int TABLES = 0; + private static final int COLUMNS = 1; + private static final int INDEXES = 2; + private static final int TABLE_TYPES = 3; + private static final int TYPE_INFO = 4; + private static final int CATALOGS = 5; + private static final int SETTINGS = 6; + private static final int HELP = 7; + private static final int SEQUENCES = 8; + private static final int USERS = 9; + private static final int ROLES = 10; + private static final int RIGHTS = 11; + private static final int FUNCTION_ALIASES = 12; + private static final int SCHEMATA = 13; + private static final int TABLE_PRIVILEGES = 14; + private static final int COLUMN_PRIVILEGES = 15; + private static final int COLLATIONS = 16; + private static final int VIEWS = 17; + private static final int IN_DOUBT = 18; + private static final int CROSS_REFERENCES = 19; + private static final int CONSTRAINTS = 20; + private static final int FUNCTION_COLUMNS = 21; + private static final int CONSTANTS = 22; + private static final int DOMAINS = 23; + private static final int TRIGGERS = 24; + private static final int SESSIONS = 25; + private static final int LOCKS = 26; + private static final int SESSION_STATE = 27; + private static final int QUERY_STATISTICS = 28; + private static final int SYNONYMS = 29; + private static final int TABLE_CONSTRAINTS = 30; + private static final int KEY_COLUMN_USAGE = 31; + private static final int REFERENTIAL_CONSTRAINTS = 32; + private static final int META_TABLE_TYPE_COUNT = 
REFERENTIAL_CONSTRAINTS + 1; + + private final int type; + private final int indexColumn; + private final MetaIndex metaIndex; + + /** + * Create a new metadata table. + * + * @param schema the schema + * @param id the object id + * @param type the meta table type + */ + public MetaTable(Schema schema, int id, int type) { + // tableName will be set later + super(schema, id, null, true, true); + this.type = type; + Column[] cols; + String indexColumnName = null; + switch (type) { + case TABLES: + setObjectName("TABLES"); + cols = createColumns( + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "TABLE_TYPE", + // extensions + "STORAGE_TYPE", + "SQL", + "REMARKS", + "LAST_MODIFICATION BIGINT", + "ID INT", + "TYPE_NAME", + "TABLE_CLASS", + "ROW_COUNT_ESTIMATE BIGINT" + ); + indexColumnName = "TABLE_NAME"; + break; + case COLUMNS: + setObjectName("COLUMNS"); + cols = createColumns( + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "COLUMN_NAME", + "ORDINAL_POSITION INT", + "COLUMN_DEFAULT", + "IS_NULLABLE", + "DATA_TYPE INT", + "CHARACTER_MAXIMUM_LENGTH INT", + "CHARACTER_OCTET_LENGTH INT", + "NUMERIC_PRECISION INT", + "NUMERIC_PRECISION_RADIX INT", + "NUMERIC_SCALE INT", + "CHARACTER_SET_NAME", + "COLLATION_NAME", + // extensions + "TYPE_NAME", + "NULLABLE INT", + "IS_COMPUTED BIT", + "SELECTIVITY INT", + "CHECK_CONSTRAINT", + "SEQUENCE_NAME", + "REMARKS", + "SOURCE_DATA_TYPE SMALLINT", + "COLUMN_TYPE", + "COLUMN_ON_UPDATE" + ); + indexColumnName = "TABLE_NAME"; + break; + case INDEXES: + setObjectName("INDEXES"); + cols = createColumns( + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "NON_UNIQUE BIT", + "INDEX_NAME", + "ORDINAL_POSITION SMALLINT", + "COLUMN_NAME", + "CARDINALITY INT", + "PRIMARY_KEY BIT", + "INDEX_TYPE_NAME", + "IS_GENERATED BIT", + "INDEX_TYPE SMALLINT", + "ASC_OR_DESC", + "PAGES INT", + "FILTER_CONDITION", + "REMARKS", + "SQL", + "ID INT", + "SORT_TYPE INT", + "CONSTRAINT_NAME", + "INDEX_CLASS", + "AFFINITY BIT" + ); + 
indexColumnName = "TABLE_NAME"; + break; + case TABLE_TYPES: + setObjectName("TABLE_TYPES"); + cols = createColumns("TYPE"); + break; + case TYPE_INFO: + setObjectName("TYPE_INFO"); + cols = createColumns( + "TYPE_NAME", + "DATA_TYPE INT", + "PRECISION INT", + "PREFIX", + "SUFFIX", + "PARAMS", + "AUTO_INCREMENT BIT", + "MINIMUM_SCALE SMALLINT", + "MAXIMUM_SCALE SMALLINT", + "RADIX INT", + "POS INT", + "CASE_SENSITIVE BIT", + "NULLABLE SMALLINT", + "SEARCHABLE SMALLINT" + ); + break; + case CATALOGS: + setObjectName("CATALOGS"); + cols = createColumns("CATALOG_NAME"); + break; + case SETTINGS: + setObjectName("SETTINGS"); + cols = createColumns("NAME", "VALUE"); + break; + case HELP: + setObjectName("HELP"); + cols = createColumns( + "ID INT", + "SECTION", + "TOPIC", + "SYNTAX", + "TEXT" + ); + break; + case SEQUENCES: + setObjectName("SEQUENCES"); + cols = createColumns( + "SEQUENCE_CATALOG", + "SEQUENCE_SCHEMA", + "SEQUENCE_NAME", + "CURRENT_VALUE BIGINT", + "INCREMENT BIGINT", + "IS_GENERATED BIT", + "REMARKS", + "CACHE BIGINT", + "MIN_VALUE BIGINT", + "MAX_VALUE BIGINT", + "IS_CYCLE BIT", + "ID INT" + ); + break; + case USERS: + setObjectName("USERS"); + cols = createColumns( + "NAME", + "ADMIN", + "REMARKS", + "ID INT" + ); + break; + case ROLES: + setObjectName("ROLES"); + cols = createColumns( + "NAME", + "REMARKS", + "ID INT" + ); + break; + case RIGHTS: + setObjectName("RIGHTS"); + cols = createColumns( + "GRANTEE", + "GRANTEETYPE", + "GRANTEDROLE", + "RIGHTS", + "TABLE_SCHEMA", + "TABLE_NAME", + "ID INT" + ); + indexColumnName = "TABLE_NAME"; + break; + case FUNCTION_ALIASES: + setObjectName("FUNCTION_ALIASES"); + cols = createColumns( + "ALIAS_CATALOG", + "ALIAS_SCHEMA", + "ALIAS_NAME", + "JAVA_CLASS", + "JAVA_METHOD", + "DATA_TYPE INT", + "TYPE_NAME", + "COLUMN_COUNT INT", + "RETURNS_RESULT SMALLINT", + "REMARKS", + "ID INT", + "SOURCE" + ); + break; + case FUNCTION_COLUMNS: + setObjectName("FUNCTION_COLUMNS"); + cols = createColumns( + "ALIAS_CATALOG", 
+ "ALIAS_SCHEMA", + "ALIAS_NAME", + "JAVA_CLASS", + "JAVA_METHOD", + "COLUMN_COUNT INT", + "POS INT", + "COLUMN_NAME", + "DATA_TYPE INT", + "TYPE_NAME", + "PRECISION INT", + "SCALE SMALLINT", + "RADIX SMALLINT", + "NULLABLE SMALLINT", + "COLUMN_TYPE SMALLINT", + "REMARKS", + "COLUMN_DEFAULT" + ); + break; + case SCHEMATA: + setObjectName("SCHEMATA"); + cols = createColumns( + "CATALOG_NAME", + "SCHEMA_NAME", + "SCHEMA_OWNER", + "DEFAULT_CHARACTER_SET_NAME", + "DEFAULT_COLLATION_NAME", + "IS_DEFAULT BIT", + "REMARKS", + "ID INT" + ); + break; + case TABLE_PRIVILEGES: + setObjectName("TABLE_PRIVILEGES"); + cols = createColumns( + "GRANTOR", + "GRANTEE", + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "PRIVILEGE_TYPE", + "IS_GRANTABLE" + ); + indexColumnName = "TABLE_NAME"; + break; + case COLUMN_PRIVILEGES: + setObjectName("COLUMN_PRIVILEGES"); + cols = createColumns( + "GRANTOR", + "GRANTEE", + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "COLUMN_NAME", + "PRIVILEGE_TYPE", + "IS_GRANTABLE" + ); + indexColumnName = "TABLE_NAME"; + break; + case COLLATIONS: + setObjectName("COLLATIONS"); + cols = createColumns( + "NAME", + "KEY" + ); + break; + case VIEWS: + setObjectName("VIEWS"); + cols = createColumns( + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "VIEW_DEFINITION", + "CHECK_OPTION", + "IS_UPDATABLE", + "STATUS", + "REMARKS", + "ID INT" + ); + indexColumnName = "TABLE_NAME"; + break; + case IN_DOUBT: + setObjectName("IN_DOUBT"); + cols = createColumns( + "TRANSACTION", + "STATE" + ); + break; + case CROSS_REFERENCES: + setObjectName("CROSS_REFERENCES"); + cols = createColumns( + "PKTABLE_CATALOG", + "PKTABLE_SCHEMA", + "PKTABLE_NAME", + "PKCOLUMN_NAME", + "FKTABLE_CATALOG", + "FKTABLE_SCHEMA", + "FKTABLE_NAME", + "FKCOLUMN_NAME", + "ORDINAL_POSITION SMALLINT", + "UPDATE_RULE SMALLINT", + "DELETE_RULE SMALLINT", + "FK_NAME", + "PK_NAME", + "DEFERRABILITY SMALLINT" + ); + indexColumnName = "PKTABLE_NAME"; + break; + case CONSTRAINTS: + 
setObjectName("CONSTRAINTS"); + cols = createColumns( + "CONSTRAINT_CATALOG", + "CONSTRAINT_SCHEMA", + "CONSTRAINT_NAME", + "CONSTRAINT_TYPE", + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "UNIQUE_INDEX_NAME", + "CHECK_EXPRESSION", + "COLUMN_LIST", + "REMARKS", + "SQL", + "ID INT" + ); + indexColumnName = "TABLE_NAME"; + break; + case CONSTANTS: + setObjectName("CONSTANTS"); + cols = createColumns( + "CONSTANT_CATALOG", + "CONSTANT_SCHEMA", + "CONSTANT_NAME", + "DATA_TYPE INT", + "REMARKS", + "SQL", + "ID INT" + ); + break; + case DOMAINS: + setObjectName("DOMAINS"); + cols = createColumns( + "DOMAIN_CATALOG", + "DOMAIN_SCHEMA", + "DOMAIN_NAME", + "COLUMN_DEFAULT", + "IS_NULLABLE", + "DATA_TYPE INT", + "PRECISION INT", + "SCALE INT", + "TYPE_NAME", + "SELECTIVITY INT", + "CHECK_CONSTRAINT", + "REMARKS", + "SQL", + "ID INT" + ); + break; + case TRIGGERS: + setObjectName("TRIGGERS"); + cols = createColumns( + "TRIGGER_CATALOG", + "TRIGGER_SCHEMA", + "TRIGGER_NAME", + "TRIGGER_TYPE", + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "BEFORE BIT", + "JAVA_CLASS", + "QUEUE_SIZE INT", + "NO_WAIT BIT", + "REMARKS", + "SQL", + "ID INT" + ); + break; + case SESSIONS: { + setObjectName("SESSIONS"); + cols = createColumns( + "ID INT", + "USER_NAME", + "SESSION_START", + "STATEMENT", + "STATEMENT_START", + "CONTAINS_UNCOMMITTED" + ); + break; + } + case LOCKS: { + setObjectName("LOCKS"); + cols = createColumns( + "TABLE_SCHEMA", + "TABLE_NAME", + "SESSION_ID INT", + "LOCK_TYPE" + ); + break; + } + case SESSION_STATE: { + setObjectName("SESSION_STATE"); + cols = createColumns( + "KEY", + "SQL" + ); + break; + } + case QUERY_STATISTICS: { + setObjectName("QUERY_STATISTICS"); + cols = createColumns( + "SQL_STATEMENT", + "EXECUTION_COUNT INT", + "MIN_EXECUTION_TIME DOUBLE", + "MAX_EXECUTION_TIME DOUBLE", + "CUMULATIVE_EXECUTION_TIME DOUBLE", + "AVERAGE_EXECUTION_TIME DOUBLE", + "STD_DEV_EXECUTION_TIME DOUBLE", + "MIN_ROW_COUNT INT", + "MAX_ROW_COUNT INT", + 
"CUMULATIVE_ROW_COUNT LONG", + "AVERAGE_ROW_COUNT DOUBLE", + "STD_DEV_ROW_COUNT DOUBLE" + ); + break; + } + case SYNONYMS: { + setObjectName("SYNONYMS"); + cols = createColumns( + "SYNONYM_CATALOG", + "SYNONYM_SCHEMA", + "SYNONYM_NAME", + "SYNONYM_FOR", + "SYNONYM_FOR_SCHEMA", + "TYPE_NAME", + "STATUS", + "REMARKS", + "ID INT" + ); + indexColumnName = "SYNONYM_NAME"; + break; + } + case TABLE_CONSTRAINTS: { + setObjectName("TABLE_CONSTRAINTS"); + cols = createColumns( + "CONSTRAINT_CATALOG", + "CONSTRAINT_SCHEMA", + "CONSTRAINT_NAME", + "CONSTRAINT_TYPE", + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "IS_DEFERRABLE", + "INITIALLY_DEFERRED" + ); + indexColumnName = "TABLE_NAME"; + break; + } + case KEY_COLUMN_USAGE: { + setObjectName("KEY_COLUMN_USAGE"); + cols = createColumns( + "CONSTRAINT_CATALOG", + "CONSTRAINT_SCHEMA", + "CONSTRAINT_NAME", + "TABLE_CATALOG", + "TABLE_SCHEMA", + "TABLE_NAME", + "COLUMN_NAME", + "ORDINAL_POSITION", + "POSITION_IN_UNIQUE_CONSTRAINT" + ); + indexColumnName = "TABLE_NAME"; + break; + } + case REFERENTIAL_CONSTRAINTS: { + setObjectName("REFERENTIAL_CONSTRAINTS"); + cols = createColumns( + "CONSTRAINT_CATALOG", + "CONSTRAINT_SCHEMA", + "CONSTRAINT_NAME", + "UNIQUE_CONSTRAINT_CATALOG", + "UNIQUE_CONSTRAINT_SCHEMA", + "UNIQUE_CONSTRAINT_NAME", + "MATCH_OPTION", + "UPDATE_RULE", + "DELETE_RULE" + ); + break; + } + default: + throw DbException.throwInternalError("type="+type); + } + setColumns(cols); + + if (indexColumnName == null) { + indexColumn = -1; + metaIndex = null; + } else { + indexColumn = getColumn(indexColumnName).getColumnId(); + IndexColumn[] indexCols = IndexColumn.wrap( + new Column[] { cols[indexColumn] }); + metaIndex = new MetaIndex(this, indexCols, false); + } + } + + private Column[] createColumns(String... 
names) {
+ Column[] cols = new Column[names.length];
+ for (int i = 0; i < names.length; i++) {
+ String nameType = names[i];
+ int idx = nameType.indexOf(' ');
+ int dataType;
+ String name;
+ if (idx < 0) {
+ dataType = database.getMode().lowerCaseIdentifiers ?
+ Value.STRING_IGNORECASE : Value.STRING;
+ name = nameType;
+ } else {
+ dataType = DataType.getTypeByName(nameType.substring(idx + 1), database.getMode()).type;
+ name = nameType.substring(0, idx);
+ }
+ cols[i] = new Column(name, dataType);
+ }
+ return cols;
+ }
+
+ @Override
+ public String getDropSQL() {
+ return null;
+ }
+
+ @Override
+ public String getCreateSQL() {
+ return null;
+ }
+
+ @Override
+ public Index addIndex(Session session, String indexName, int indexId,
+ IndexColumn[] cols, IndexType indexType, boolean create,
+ String indexComment) {
+ throw DbException.getUnsupportedException("META");
+ }
+
+ @Override
+ public boolean lock(Session session, boolean exclusive, boolean forceLockEvenInMvcc) {
+ // nothing to do
+ return false;
+ }
+
+ @Override
+ public boolean isLockedExclusively() {
+ return false;
+ }
+
+ private String identifier(String s) {
+ if (database.getMode().lowerCaseIdentifiers) {
+ s = s == null ? null : StringUtils.toLowerEnglish(s);
+ }
+ return s;
+ }
+
+ private ArrayList<Table> getAllTables(Session session) {
+ ArrayList<Table> tables = database.getAllTablesAndViews(true);
+ ArrayList<Table> tempTables = session.getLocalTempTables();
+ tables.addAll(tempTables);
+ return tables;
+ }
+
+ private ArrayList<Table> getTablesByName(Session session, String tableName) {
+ if (database.getMode().lowerCaseIdentifiers) {
+ tableName = StringUtils.toUpperEnglish(tableName);
+ }
+ ArrayList<Table> tables = database.getTableOrViewByName(tableName);
+ for (Table temp : session.getLocalTempTables()) {
+ if (temp.getName().equals(tableName)) {
+ tables.add(temp);
+ }
+ }
+ return tables;
+ }
+
+ private boolean checkIndex(Session session, String value, Value indexFrom,
+ Value indexTo) {
+ if (value == null || (indexFrom == null && indexTo == null)) {
+ return true;
+ }
+ Database db = session.getDatabase();
+ Value v;
+ if (database.getMode().lowerCaseIdentifiers) {
+ v = ValueStringIgnoreCase.get(value);
+ } else {
+ v = ValueString.get(value);
+ }
+ if (indexFrom != null && db.compare(v, indexFrom) < 0) {
+ return false;
+ }
+ if (indexTo != null && db.compare(v, indexTo) > 0) {
+ return false;
+ }
+ return true;
+ }
+
+ private static String replaceNullWithEmpty(String s) {
+ return s == null ? "" : s;
+ }
+
+ private boolean hideTable(Table table, Session session) {
+ return table.isHidden() && session != database.getSystemSession();
+ }
+
+ /**
+ * Generate the data for the given metadata table using the given first and
+ * last row filters.
+ *
+ * @param session the session
+ * @param first the first row to return
+ * @param last the last row to return
+ * @return the generated rows
+ */
+ public ArrayList<Row> generateRows(Session session, SearchRow first,
+ SearchRow last) {
+ Value indexFrom = null, indexTo = null;
+
+ if (indexColumn >= 0) {
+ if (first != null) {
+ indexFrom = first.getValue(indexColumn);
+ }
+ if (last != null) {
+ indexTo = last.getValue(indexColumn);
+ }
+ }
+
+ ArrayList<Row> rows = New.arrayList();
+ String catalog = identifier(database.getShortName());
+ boolean admin = session.getUser().isAdmin();
+ switch (type) {
+ case TABLES: {
+ for (Table table : getAllTables(session)) {
+ String tableName = identifier(table.getName());
+ if (!checkIndex(session, tableName, indexFrom, indexTo)) {
+ continue;
+ }
+ if (hideTable(table, session)) {
+ continue;
+ }
+ String storageType;
+ if (table.isTemporary()) {
+ if (table.isGlobalTemporary()) {
+ storageType = "GLOBAL TEMPORARY";
+ } else {
+ storageType = "LOCAL TEMPORARY";
+ }
+ } else {
+ storageType = table.isPersistIndexes() ?
+ "CACHED" : "MEMORY";
+ }
+ String sql = table.getCreateSQL();
+ if (!admin) {
+ if (sql != null && sql.contains(JdbcSQLException.HIDE_SQL)) {
+ // hide the password of linked tables
+ sql = "-";
+ }
+ }
+ add(rows,
+ // TABLE_CATALOG
+ catalog,
+ // TABLE_SCHEMA
+ identifier(table.getSchema().getName()),
+ // TABLE_NAME
+ tableName,
+ // TABLE_TYPE
+ table.getTableType().toString(),
+ // STORAGE_TYPE
+ storageType,
+ // SQL
+ sql,
+ // REMARKS
+ replaceNullWithEmpty(table.getComment()),
+ // LAST_MODIFICATION
+ "" + table.getMaxDataModificationId(),
+ // ID
+ "" + table.getId(),
+ // TYPE_NAME
+ null,
+ // TABLE_CLASS
+ table.getClass().getName(),
+ // ROW_COUNT_ESTIMATE
+ "" + table.getRowCountApproximation()
+ );
+ }
+ break;
+ }
+ case COLUMNS: {
+ // reduce the number of tables to scan - makes some metadata queries
+ // 10x faster
+ final ArrayList<Table> tablesToList;
+ if (indexFrom != null && indexFrom.equals(indexTo)) {
+ String tableName = identifier(indexFrom.getString());
+ tablesToList = getTablesByName(session, tableName);
+ } else {
+ tablesToList = getAllTables(session);
+ }
+ for (Table table : tablesToList) {
+ String tableName = identifier(table.getName());
+ if (!checkIndex(session, tableName, indexFrom, indexTo)) {
+ continue;
+ }
+ if (hideTable(table, session)) {
+ continue;
+ }
+ Column[] cols = table.getColumns();
+ String collation = database.getCompareMode().getName();
+ for (int j = 0; j < cols.length; j++) {
+ Column c = cols[j];
+ Sequence sequence = c.getSequence();
+ add(rows,
+ // TABLE_CATALOG
+ catalog,
+ // TABLE_SCHEMA
+ identifier(table.getSchema().getName()),
+ // TABLE_NAME
+ tableName,
+ // COLUMN_NAME
+ identifier(c.getName()),
+ // ORDINAL_POSITION
+ String.valueOf(j + 1),
+ // COLUMN_DEFAULT
+ c.getDefaultSQL(),
+ // IS_NULLABLE
+ c.isNullable() ? "YES" : "NO",
+ // DATA_TYPE
+ "" + DataType.convertTypeToSQLType(c.getType()),
+ // CHARACTER_MAXIMUM_LENGTH
+ "" + c.getPrecisionAsInt(),
+ // CHARACTER_OCTET_LENGTH
+ "" + c.getPrecisionAsInt(),
+ // NUMERIC_PRECISION
+ "" + c.getPrecisionAsInt(),
+ // NUMERIC_PRECISION_RADIX
+ "10",
+ // NUMERIC_SCALE
+ "" + c.getScale(),
+ // CHARACTER_SET_NAME
+ CHARACTER_SET_NAME,
+ // COLLATION_NAME
+ collation,
+ // TYPE_NAME
+ identifier(DataType.getDataType(c.getType()).name),
+ // NULLABLE
+ "" + (c.isNullable() ?
+ DatabaseMetaData.columnNullable :
+ DatabaseMetaData.columnNoNulls) ,
+ // IS_COMPUTED
+ "" + (c.getComputed() ? "TRUE" : "FALSE"),
+ // SELECTIVITY
+ "" + (c.getSelectivity()),
+ // CHECK_CONSTRAINT
+ c.getCheckConstraintSQL(session, c.getName()),
+ // SEQUENCE_NAME
+ sequence == null ?
null : sequence.getName(),
+ // REMARKS
+ replaceNullWithEmpty(c.getComment()),
+ // SOURCE_DATA_TYPE
+ null,
+ // COLUMN_TYPE
+ c.getCreateSQLWithoutName(),
+ // COLUMN_ON_UPDATE
+ c.getOnUpdateSQL()
+ );
+ }
+ }
+ break;
+ }
+ case INDEXES: {
+ // reduce the number of tables to scan - makes some metadata queries
+ // 10x faster
+ final ArrayList<Table> tablesToList;
+ if (indexFrom != null && indexFrom.equals(indexTo)) {
+ String tableName = identifier(indexFrom.getString());
+ tablesToList = getTablesByName(session, tableName);
+ } else {
+ tablesToList = getAllTables(session);
+ }
+ for (Table table : tablesToList) {
+ String tableName = identifier(table.getName());
+ if (!checkIndex(session, tableName, indexFrom, indexTo)) {
+ continue;
+ }
+ if (hideTable(table, session)) {
+ continue;
+ }
+ ArrayList<Index> indexes = table.getIndexes();
+ ArrayList<Constraint> constraints = table.getConstraints();
+ for (int j = 0; indexes != null && j < indexes.size(); j++) {
+ Index index = indexes.get(j);
+ if (index.getCreateSQL() == null) {
+ continue;
+ }
+ String constraintName = null;
+ for (int k = 0; constraints != null && k < constraints.size(); k++) {
+ Constraint constraint = constraints.get(k);
+ if (constraint.usesIndex(index)) {
+ if (index.getIndexType().isPrimaryKey()) {
+ if (constraint.getConstraintType() == Constraint.Type.PRIMARY_KEY) {
+ constraintName = constraint.getName();
+ }
+ } else {
+ constraintName = constraint.getName();
+ }
+ }
+ }
+ IndexColumn[] cols = index.getIndexColumns();
+ String indexClass;
+ if (index instanceof MultiVersionIndex) {
+ indexClass = ((MultiVersionIndex) index).
+ getBaseIndex().getClass().getName();
+ } else {
+ indexClass = index.getClass().getName();
+ }
+ for (int k = 0; k < cols.length; k++) {
+ IndexColumn idxCol = cols[k];
+ Column column = idxCol.column;
+ add(rows,
+ // TABLE_CATALOG
+ catalog,
+ // TABLE_SCHEMA
+ identifier(table.getSchema().getName()),
+ // TABLE_NAME
+ tableName,
+ // NON_UNIQUE
+ index.getIndexType().isUnique() ?
+ "FALSE" : "TRUE",
+ // INDEX_NAME
+ identifier(index.getName()),
+ // ORDINAL_POSITION
+ "" + (k+1),
+ // COLUMN_NAME
+ identifier(column.getName()),
+ // CARDINALITY
+ "0",
+ // PRIMARY_KEY
+ index.getIndexType().isPrimaryKey() ?
+ "TRUE" : "FALSE", + // INDEX_TYPE_NAME + index.getIndexType().getSQL(), + // IS_GENERATED + index.getIndexType().getBelongsToConstraint() ? + "TRUE" : "FALSE", + // INDEX_TYPE + "" + DatabaseMetaData.tableIndexOther, + // ASC_OR_DESC + (idxCol.sortType & SortOrder.DESCENDING) != 0 ? + "D" : "A", + // PAGES + "0", + // FILTER_CONDITION + "", + // REMARKS + replaceNullWithEmpty(index.getComment()), + // SQL + index.getCreateSQL(), + // ID + "" + index.getId(), + // SORT_TYPE + "" + idxCol.sortType, + // CONSTRAINT_NAME + constraintName, + // INDEX_CLASS + indexClass, + // AFFINITY + index.getIndexType().isAffinity() ? + "TRUE" : "FALSE" + ); + } + } + } + break; + } + case TABLE_TYPES: { + add(rows, TableType.TABLE.toString()); + add(rows, TableType.TABLE_LINK.toString()); + add(rows, TableType.SYSTEM_TABLE.toString()); + add(rows, TableType.VIEW.toString()); + add(rows, TableType.EXTERNAL_TABLE_ENGINE.toString()); + break; + } + case CATALOGS: { + add(rows, catalog); + break; + } + case SETTINGS: { + for (Setting s : database.getAllSettings()) { + String value = s.getStringValue(); + if (value == null) { + value = "" + s.getIntValue(); + } + add(rows, + identifier(s.getName()), + value + ); + } + add(rows, "info.BUILD_ID", "" + Constants.BUILD_ID); + add(rows, "info.VERSION_MAJOR", "" + Constants.VERSION_MAJOR); + add(rows, "info.VERSION_MINOR", "" + Constants.VERSION_MINOR); + add(rows, "info.VERSION", "" + Constants.getFullVersion()); + if (admin) { + String[] settings = { + "java.runtime.version", "java.vm.name", + "java.vendor", "os.name", "os.arch", "os.version", + "sun.os.patch.level", "file.separator", + "path.separator", "line.separator", "user.country", + "user.language", "user.variant", "file.encoding" }; + for (String s : settings) { + add(rows, "property." + s, Utils.getProperty(s, "")); + } + } + add(rows, "EXCLUSIVE", database.getExclusiveSession() == null ? 
+ "FALSE" : "TRUE");
+ add(rows, "MODE", database.getMode().getName());
+ add(rows, "MULTI_THREADED", database.isMultiThreaded() ? "1" : "0");
+ add(rows, "MVCC", database.isMultiVersion() ? "TRUE" : "FALSE");
+ add(rows, "QUERY_TIMEOUT", "" + session.getQueryTimeout());
+ add(rows, "RETENTION_TIME", "" + database.getRetentionTime());
+ add(rows, "LOG", "" + database.getLogMode());
+ // database settings
+ ArrayList<String> settingNames = New.arrayList();
+ HashMap<String, String> s = database.getSettings().getSettings();
+ settingNames.addAll(s.keySet());
+ Collections.sort(settingNames);
+ for (String k : settingNames) {
+ add(rows, k, s.get(k));
+ }
+ if (database.isPersistent()) {
+ PageStore store = database.getPageStore();
+ if (store != null) {
+ add(rows, "info.FILE_WRITE_TOTAL",
+ "" + store.getWriteCountTotal());
+ add(rows, "info.FILE_WRITE",
+ "" + store.getWriteCount());
+ add(rows, "info.FILE_READ",
+ "" + store.getReadCount());
+ add(rows, "info.PAGE_COUNT",
+ "" + store.getPageCount());
+ add(rows, "info.PAGE_SIZE",
+ "" + store.getPageSize());
+ add(rows, "info.CACHE_MAX_SIZE",
+ "" + store.getCache().getMaxMemory());
+ add(rows, "info.CACHE_SIZE",
+ "" + store.getCache().getMemory());
+ }
+ Store mvStore = database.getMvStore();
+ if (mvStore != null) {
+ FileStore fs = mvStore.getStore().getFileStore();
+ add(rows, "info.FILE_WRITE", "" +
+ fs.getWriteCount());
+ add(rows, "info.FILE_READ", "" +
+ fs.getReadCount());
+ long size;
+ try {
+ size = fs.getFile().size();
+ } catch (IOException e) {
+ throw DbException.convertIOException(e, "Can not get size");
+ }
+ int pageSize = 4 * 1024;
+ long pageCount = size / pageSize;
+ add(rows, "info.PAGE_COUNT", "" +
+ pageCount);
+ add(rows, "info.PAGE_SIZE", "" +
+ pageSize);
+ add(rows, "info.CACHE_MAX_SIZE", "" +
+ mvStore.getStore().getCacheSize());
+ add(rows, "info.CACHE_SIZE", "" +
+ mvStore.getStore().getCacheSizeUsed());
+ }
+ }
+ break;
+ }
+ case TYPE_INFO: {
+ for (DataType t : DataType.getTypes()) {
+ if (t.hidden
|| t.sqlType == Value.NULL) { + continue; + } + add(rows, + // TYPE_NAME + t.name, + // DATA_TYPE + String.valueOf(t.sqlType), + // PRECISION + String.valueOf(MathUtils.convertLongToInt(t.maxPrecision)), + // PREFIX + t.prefix, + // SUFFIX + t.suffix, + // PARAMS + t.params, + // AUTO_INCREMENT + String.valueOf(t.autoIncrement), + // MINIMUM_SCALE + String.valueOf(t.minScale), + // MAXIMUM_SCALE + String.valueOf(t.maxScale), + // RADIX + t.decimal ? "10" : null, + // POS + String.valueOf(t.sqlTypePos), + // CASE_SENSITIVE + String.valueOf(t.caseSensitive), + // NULLABLE + "" + DatabaseMetaData.typeNullable, + // SEARCHABLE + "" + DatabaseMetaData.typeSearchable + ); + } + break; + } + case HELP: { + String resource = "/org/h2/res/help.csv"; + try { + byte[] data = Utils.getResource(resource); + Reader reader = new InputStreamReader( + new ByteArrayInputStream(data)); + Csv csv = new Csv(); + csv.setLineCommentCharacter('#'); + ResultSet rs = csv.read(reader, null); + for (int i = 0; rs.next(); i++) { + add(rows, + // ID + String.valueOf(i), + // SECTION + rs.getString(1).trim(), + // TOPIC + rs.getString(2).trim(), + // SYNTAX + rs.getString(3).trim(), + // TEXT + rs.getString(4).trim() + ); + } + } catch (Exception e) { + throw DbException.convert(e); + } + break; + } + case SEQUENCES: { + for (SchemaObject obj : database.getAllSchemaObjects( + DbObject.SEQUENCE)) { + Sequence s = (Sequence) obj; + add(rows, + // SEQUENCE_CATALOG + catalog, + // SEQUENCE_SCHEMA + identifier(s.getSchema().getName()), + // SEQUENCE_NAME + identifier(s.getName()), + // CURRENT_VALUE + String.valueOf(s.getCurrentValue()), + // INCREMENT + String.valueOf(s.getIncrement()), + // IS_GENERATED + s.getBelongsToTable() ? "TRUE" : "FALSE", + // REMARKS + replaceNullWithEmpty(s.getComment()), + // CACHE + String.valueOf(s.getCacheSize()), + // MIN_VALUE + String.valueOf(s.getMinValue()), + // MAX_VALUE + String.valueOf(s.getMaxValue()), + // IS_CYCLE + s.getCycle() ? 
"TRUE" : "FALSE", + // ID + "" + s.getId() + ); + } + break; + } + case USERS: { + for (User u : database.getAllUsers()) { + if (admin || session.getUser() == u) { + add(rows, + // NAME + identifier(u.getName()), + // ADMIN + String.valueOf(u.isAdmin()), + // REMARKS + replaceNullWithEmpty(u.getComment()), + // ID + "" + u.getId() + ); + } + } + break; + } + case ROLES: { + for (Role r : database.getAllRoles()) { + if (admin || session.getUser().isRoleGranted(r)) { + add(rows, + // NAME + identifier(r.getName()), + // REMARKS + replaceNullWithEmpty(r.getComment()), + // ID + "" + r.getId() + ); + } + } + break; + } + case RIGHTS: { + if (admin) { + for (Right r : database.getAllRights()) { + Role role = r.getGrantedRole(); + DbObject grantee = r.getGrantee(); + String rightType = grantee.getType() == DbObject.USER ? + "USER" : "ROLE"; + if (role == null) { + DbObject object = r.getGrantedObject(); + Schema schema = null; + Table table = null; + if (object != null) { + if (object instanceof Schema) { + schema = (Schema) object; + } else if (object instanceof Table) { + table = (Table) object; + schema = table.getSchema(); + } + } + String tableName = (table != null) ? identifier(table.getName()) : ""; + String schemaName = (schema != null) ? 
identifier(schema.getName()) : ""; + if (!checkIndex(session, tableName, indexFrom, indexTo)) { + continue; + } + add(rows, + // GRANTEE + identifier(grantee.getName()), + // GRANTEETYPE + rightType, + // GRANTEDROLE + "", + // RIGHTS + r.getRights(), + // TABLE_SCHEMA + schemaName, + // TABLE_NAME + tableName, + // ID + "" + r.getId() + ); + } else { + add(rows, + // GRANTEE + identifier(grantee.getName()), + // GRANTEETYPE + rightType, + // GRANTEDROLE + identifier(role.getName()), + // RIGHTS + "", + // TABLE_SCHEMA + "", + // TABLE_NAME + "", + // ID + "" + r.getId() + ); + } + } + } + break; + } + case FUNCTION_ALIASES: { + for (SchemaObject aliasAsSchemaObject : + database.getAllSchemaObjects(DbObject.FUNCTION_ALIAS)) { + FunctionAlias alias = (FunctionAlias) aliasAsSchemaObject; + JavaMethod[] methods; + try { + methods = alias.getJavaMethods(); + } catch (DbException e) { + methods = new JavaMethod[0]; + } + for (FunctionAlias.JavaMethod method : methods) { + int returnsResult = method.getDataType() == Value.NULL ? 
+ DatabaseMetaData.procedureNoResult : + DatabaseMetaData.procedureReturnsResult; + add(rows, + // ALIAS_CATALOG + catalog, + // ALIAS_SCHEMA + alias.getSchema().getName(), + // ALIAS_NAME + identifier(alias.getName()), + // JAVA_CLASS + alias.getJavaClassName(), + // JAVA_METHOD + alias.getJavaMethodName(), + // DATA_TYPE + "" + DataType.convertTypeToSQLType(method.getDataType()), + // TYPE_NAME + DataType.getDataType(method.getDataType()).name, + // COLUMN_COUNT INT + "" + method.getParameterCount(), + // RETURNS_RESULT SMALLINT + "" + returnsResult, + // REMARKS + replaceNullWithEmpty(alias.getComment()), + // ID + "" + alias.getId(), + // SOURCE + alias.getSource() + // when adding more columns, see also below + ); + } + } + for (UserAggregate agg : database.getAllAggregates()) { + int returnsResult = DatabaseMetaData.procedureReturnsResult; + add(rows, + // ALIAS_CATALOG + catalog, + // ALIAS_SCHEMA + Constants.SCHEMA_MAIN, + // ALIAS_NAME + identifier(agg.getName()), + // JAVA_CLASS + agg.getJavaClassName(), + // JAVA_METHOD + "", + // DATA_TYPE + "" + DataType.convertTypeToSQLType(Value.NULL), + // TYPE_NAME + DataType.getDataType(Value.NULL).name, + // COLUMN_COUNT INT + "1", + // RETURNS_RESULT SMALLINT + "" + returnsResult, + // REMARKS + replaceNullWithEmpty(agg.getComment()), + // ID + "" + agg.getId(), + // SOURCE + "" + // when adding more columns, see also below + ); + } + break; + } + case FUNCTION_COLUMNS: { + for (SchemaObject aliasAsSchemaObject : + database.getAllSchemaObjects(DbObject.FUNCTION_ALIAS)) { + FunctionAlias alias = (FunctionAlias) aliasAsSchemaObject; + JavaMethod[] methods; + try { + methods = alias.getJavaMethods(); + } catch (DbException e) { + methods = new JavaMethod[0]; + } + for (FunctionAlias.JavaMethod method : methods) { + // Add return column index 0 + if (method.getDataType() != Value.NULL) { + DataType dt = DataType.getDataType(method.getDataType()); + add(rows, + // ALIAS_CATALOG + catalog, + // ALIAS_SCHEMA + 
alias.getSchema().getName(),
+ // ALIAS_NAME
+ identifier(alias.getName()),
+ // JAVA_CLASS
+ alias.getJavaClassName(),
+ // JAVA_METHOD
+ alias.getJavaMethodName(),
+ // COLUMN_COUNT
+ "" + method.getParameterCount(),
+ // POS INT
+ "0",
+ // COLUMN_NAME
+ "P0",
+ // DATA_TYPE
+ "" + DataType.convertTypeToSQLType(method.getDataType()),
+ // TYPE_NAME
+ dt.name,
+ // PRECISION INT
+ "" + MathUtils.convertLongToInt(dt.defaultPrecision),
+ // SCALE
+ "" + dt.defaultScale,
+ // RADIX
+ "10",
+ // NULLABLE SMALLINT
+ "" + DatabaseMetaData.columnNullableUnknown,
+ // COLUMN_TYPE
+ "" + DatabaseMetaData.procedureColumnReturn,
+ // REMARKS
+ "",
+ // COLUMN_DEFAULT
+ null
+ );
+ }
+ Class<?>[] columnList = method.getColumnClasses();
+ for (int k = 0; k < columnList.length; k++) {
+ if (method.hasConnectionParam() && k == 0) {
+ continue;
+ }
+ Class<?> clazz = columnList[k];
+ int dataType = DataType.getTypeFromClass(clazz);
+ DataType dt = DataType.getDataType(dataType);
+ int nullable = clazz.isPrimitive() ? DatabaseMetaData.columnNoNulls
+ : DatabaseMetaData.columnNullable;
+ add(rows,
+ // ALIAS_CATALOG
+ catalog,
+ // ALIAS_SCHEMA
+ alias.getSchema().getName(),
+ // ALIAS_NAME
+ identifier(alias.getName()),
+ // JAVA_CLASS
+ alias.getJavaClassName(),
+ // JAVA_METHOD
+ alias.getJavaMethodName(),
+ // COLUMN_COUNT
+ "" + method.getParameterCount(),
+ // POS INT
+ "" + (k + (method.hasConnectionParam() ?
0 : 1)), + // COLUMN_NAME + "P" + (k + 1), + // DATA_TYPE + "" + DataType.convertTypeToSQLType(dt.type), + // TYPE_NAME + dt.name, + // PRECISION INT + "" + MathUtils.convertLongToInt(dt.defaultPrecision), + // SCALE + "" + dt.defaultScale, + // RADIX + "10", + // NULLABLE SMALLINT + "" + nullable, + // COLUMN_TYPE + "" + DatabaseMetaData.procedureColumnIn, + // REMARKS + "", + // COLUMN_DEFAULT + null + ); + } + } + } + break; + } + case SCHEMATA: { + String collation = database.getCompareMode().getName(); + for (Schema schema : database.getAllSchemas()) { + add(rows, + // CATALOG_NAME + catalog, + // SCHEMA_NAME + identifier(schema.getName()), + // SCHEMA_OWNER + identifier(schema.getOwner().getName()), + // DEFAULT_CHARACTER_SET_NAME + CHARACTER_SET_NAME, + // DEFAULT_COLLATION_NAME + collation, + // IS_DEFAULT + Constants.SCHEMA_MAIN.equals( + schema.getName()) ? "TRUE" : "FALSE", + // REMARKS + replaceNullWithEmpty(schema.getComment()), + // ID + "" + schema.getId() + ); + } + break; + } + case TABLE_PRIVILEGES: { + for (Right r : database.getAllRights()) { + DbObject object = r.getGrantedObject(); + if (!(object instanceof Table)) { + continue; + } + Table table = (Table) object; + if (hideTable(table, session)) { + continue; + } + String tableName = identifier(table.getName()); + if (!checkIndex(session, tableName, indexFrom, indexTo)) { + continue; + } + addPrivileges(rows, r.getGrantee(), catalog, table, null, + r.getRightMask()); + } + break; + } + case COLUMN_PRIVILEGES: { + for (Right r : database.getAllRights()) { + DbObject object = r.getGrantedObject(); + if (!(object instanceof Table)) { + continue; + } + Table table = (Table) object; + if (hideTable(table, session)) { + continue; + } + String tableName = identifier(table.getName()); + if (!checkIndex(session, tableName, indexFrom, indexTo)) { + continue; + } + DbObject grantee = r.getGrantee(); + int mask = r.getRightMask(); + for (Column column : table.getColumns()) { + addPrivileges(rows, 
grantee, catalog, table,
+ column.getName(), mask);
+ }
+ }
+ break;
+ }
+ case COLLATIONS: {
+ for (Locale l : Collator.getAvailableLocales()) {
+ add(rows,
+ // NAME
+ CompareMode.getName(l),
+ // KEY
+ l.toString()
+ );
+ }
+ break;
+ }
+ case VIEWS: {
+ for (Table table : getAllTables(session)) {
+ if (table.getTableType() != TableType.VIEW) {
+ continue;
+ }
+ String tableName = identifier(table.getName());
+ if (!checkIndex(session, tableName, indexFrom, indexTo)) {
+ continue;
+ }
+ TableView view = (TableView) table;
+ add(rows,
+ // TABLE_CATALOG
+ catalog,
+ // TABLE_SCHEMA
+ identifier(table.getSchema().getName()),
+ // TABLE_NAME
+ tableName,
+ // VIEW_DEFINITION
+ table.getCreateSQL(),
+ // CHECK_OPTION
+ "NONE",
+ // IS_UPDATABLE
+ "NO",
+ // STATUS
+ view.isInvalid() ? "INVALID" : "VALID",
+ // REMARKS
+ replaceNullWithEmpty(view.getComment()),
+ // ID
+ "" + view.getId()
+ );
+ }
+ break;
+ }
+ case IN_DOUBT: {
+ ArrayList<InDoubtTransaction> prepared = database.getInDoubtTransactions();
+ if (prepared != null && admin) {
+ for (InDoubtTransaction prep : prepared) {
+ add(rows,
+ // TRANSACTION
+ prep.getTransactionName(),
+ // STATE
+ prep.getState()
+ );
+ }
+ }
+ break;
+ }
+ case CROSS_REFERENCES: {
+ for (SchemaObject obj : database.getAllSchemaObjects(
+ DbObject.CONSTRAINT)) {
+ Constraint constraint = (Constraint) obj;
+ if (constraint.getConstraintType() != Constraint.Type.REFERENTIAL) {
+ continue;
+ }
+ ConstraintReferential ref = (ConstraintReferential) constraint;
+ IndexColumn[] cols = ref.getColumns();
+ IndexColumn[] refCols = ref.getRefColumns();
+ Table tab = ref.getTable();
+ Table refTab = ref.getRefTable();
+ String tableName = identifier(refTab.getName());
+ if (!checkIndex(session, tableName, indexFrom, indexTo)) {
+ continue;
+ }
+ int update = getRefAction(ref.getUpdateAction());
+ int delete = getRefAction(ref.getDeleteAction());
+ for (int j = 0; j < cols.length; j++) {
+ add(rows,
+ // PKTABLE_CATALOG
+ catalog,
+ // PKTABLE_SCHEMA
+
identifier(refTab.getSchema().getName()), + // PKTABLE_NAME + identifier(refTab.getName()), + // PKCOLUMN_NAME + identifier(refCols[j].column.getName()), + // FKTABLE_CATALOG + catalog, + // FKTABLE_SCHEMA + identifier(tab.getSchema().getName()), + // FKTABLE_NAME + identifier(tab.getName()), + // FKCOLUMN_NAME + identifier(cols[j].column.getName()), + // ORDINAL_POSITION + String.valueOf(j + 1), + // UPDATE_RULE SMALLINT + String.valueOf(update), + // DELETE_RULE SMALLINT + String.valueOf(delete), + // FK_NAME + identifier(ref.getName()), + // PK_NAME + identifier(ref.getUniqueIndex().getName()), + // DEFERRABILITY + "" + DatabaseMetaData.importedKeyNotDeferrable + ); + } + } + break; + } + case CONSTRAINTS: { + for (SchemaObject obj : database.getAllSchemaObjects( + DbObject.CONSTRAINT)) { + Constraint constraint = (Constraint) obj; + Constraint.Type constraintType = constraint.getConstraintType(); + String checkExpression = null; + IndexColumn[] indexColumns = null; + Table table = constraint.getTable(); + if (hideTable(table, session)) { + continue; + } + Index index = constraint.getUniqueIndex(); + String uniqueIndexName = null; + if (index != null) { + uniqueIndexName = index.getName(); + } + String tableName = identifier(table.getName()); + if (!checkIndex(session, tableName, indexFrom, indexTo)) { + continue; + } + if (constraintType == Constraint.Type.CHECK) { + checkExpression = ((ConstraintCheck) constraint).getExpression().getSQL(); + } else if (constraintType == Constraint.Type.UNIQUE || + constraintType == Constraint.Type.PRIMARY_KEY) { + indexColumns = ((ConstraintUnique) constraint).getColumns(); + } else if (constraintType == Constraint.Type.REFERENTIAL) { + indexColumns = ((ConstraintReferential) constraint).getColumns(); + } + String columnList = null; + if (indexColumns != null) { + StatementBuilder buff = new StatementBuilder(); + for (IndexColumn col : indexColumns) { + buff.appendExceptFirst(","); + buff.append(col.column.getName()); + } + 
columnList = buff.toString(); + } + add(rows, + // CONSTRAINT_CATALOG + catalog, + // CONSTRAINT_SCHEMA + identifier(constraint.getSchema().getName()), + // CONSTRAINT_NAME + identifier(constraint.getName()), + // CONSTRAINT_TYPE + constraintType.toString(), + // TABLE_CATALOG + catalog, + // TABLE_SCHEMA + identifier(table.getSchema().getName()), + // TABLE_NAME + tableName, + // UNIQUE_INDEX_NAME + uniqueIndexName, + // CHECK_EXPRESSION + checkExpression, + // COLUMN_LIST + columnList, + // REMARKS + replaceNullWithEmpty(constraint.getComment()), + // SQL + constraint.getCreateSQL(), + // ID + "" + constraint.getId() + ); + } + break; + } + case CONSTANTS: { + for (SchemaObject obj : database.getAllSchemaObjects( + DbObject.CONSTANT)) { + Constant constant = (Constant) obj; + ValueExpression expr = constant.getValue(); + add(rows, + // CONSTANT_CATALOG + catalog, + // CONSTANT_SCHEMA + identifier(constant.getSchema().getName()), + // CONSTANT_NAME + identifier(constant.getName()), + // CONSTANT_TYPE + "" + DataType.convertTypeToSQLType(expr.getType()), + // REMARKS + replaceNullWithEmpty(constant.getComment()), + // SQL + expr.getSQL(), + // ID + "" + constant.getId() + ); + } + break; + } + case DOMAINS: { + for (UserDataType dt : database.getAllUserDataTypes()) { + Column col = dt.getColumn(); + add(rows, + // DOMAIN_CATALOG + catalog, + // DOMAIN_SCHEMA + Constants.SCHEMA_MAIN, + // DOMAIN_NAME + identifier(dt.getName()), + // COLUMN_DEFAULT + col.getDefaultSQL(), + // IS_NULLABLE + col.isNullable() ? 
"YES" : "NO", + // DATA_TYPE + "" + col.getDataType().sqlType, + // PRECISION INT + "" + col.getPrecisionAsInt(), + // SCALE INT + "" + col.getScale(), + // TYPE_NAME + col.getDataType().name, + // SELECTIVITY INT + "" + col.getSelectivity(), + // CHECK_CONSTRAINT + "" + col.getCheckConstraintSQL(session, "VALUE"), + // REMARKS + replaceNullWithEmpty(dt.getComment()), + // SQL + "" + dt.getCreateSQL(), + // ID + "" + dt.getId() + ); + } + break; + } + case TRIGGERS: { + for (SchemaObject obj : database.getAllSchemaObjects( + DbObject.TRIGGER)) { + TriggerObject trigger = (TriggerObject) obj; + Table table = trigger.getTable(); + add(rows, + // TRIGGER_CATALOG + catalog, + // TRIGGER_SCHEMA + identifier(trigger.getSchema().getName()), + // TRIGGER_NAME + identifier(trigger.getName()), + // TRIGGER_TYPE + trigger.getTypeNameList(), + // TABLE_CATALOG + catalog, + // TABLE_SCHEMA + identifier(table.getSchema().getName()), + // TABLE_NAME + identifier(table.getName()), + // BEFORE BIT + "" + trigger.isBefore(), + // JAVA_CLASS + trigger.getTriggerClassName(), + // QUEUE_SIZE INT + "" + trigger.getQueueSize(), + // NO_WAIT BIT + "" + trigger.isNoWait(), + // REMARKS + replaceNullWithEmpty(trigger.getComment()), + // SQL + trigger.getCreateSQL(), + // ID + "" + trigger.getId() + ); + } + break; + } + case SESSIONS: { + long now = System.currentTimeMillis(); + for (Session s : database.getSessions(false)) { + if (admin || s == session) { + Command command = s.getCurrentCommand(); + long start = s.getCurrentCommandStart(); + if (start == 0) { + start = now; + } + add(rows, + // ID + "" + s.getId(), + // USER_NAME + s.getUser().getName(), + // SESSION_START + new Timestamp(s.getSessionStart()).toString(), + // STATEMENT + command == null ? 
null : command.toString(), + // STATEMENT_START + new Timestamp(start).toString(), + // CONTAINS_UNCOMMITTED + "" + s.containsUncommitted() + ); + } + } + break; + } + case LOCKS: { + for (Session s : database.getSessions(false)) { + if (admin || s == session) { + for (Table table : s.getLocks()) { + add(rows, + // TABLE_SCHEMA + table.getSchema().getName(), + // TABLE_NAME + table.getName(), + // SESSION_ID + "" + s.getId(), + // LOCK_TYPE + table.isLockedExclusivelyBy(s) ? "WRITE" : "READ" + ); + } + } + } + break; + } + case SESSION_STATE: { + for (String name : session.getVariableNames()) { + Value v = session.getVariable(name); + add(rows, + // KEY + "@" + name, + // SQL + "SET @" + name + " " + v.getSQL() + ); + } + for (Table table : session.getLocalTempTables()) { + add(rows, + // KEY + "TABLE " + table.getName(), + // SQL + table.getCreateSQL() + ); + } + String[] path = session.getSchemaSearchPath(); + if (path != null && path.length > 0) { + StatementBuilder buff = new StatementBuilder( + "SET SCHEMA_SEARCH_PATH "); + for (String p : path) { + buff.appendExceptFirst(", "); + buff.append(StringUtils.quoteIdentifier(p)); + } + add(rows, + // KEY + "SCHEMA_SEARCH_PATH", + // SQL + buff.toString() + ); + } + String schema = session.getCurrentSchemaName(); + if (schema != null) { + add(rows, + // KEY + "SCHEMA", + // SQL + "SET SCHEMA " + StringUtils.quoteIdentifier(schema) + ); + } + break; + } + case QUERY_STATISTICS: { + QueryStatisticsData control = database.getQueryStatisticsData(); + if (control != null) { + for (QueryStatisticsData.QueryEntry entry : control.getQueries()) { + add(rows, + // SQL_STATEMENT + entry.sqlStatement, + // EXECUTION_COUNT + "" + entry.count, + // MIN_EXECUTION_TIME + "" + entry.executionTimeMinNanos / 1000d / 1000, + // MAX_EXECUTION_TIME + "" + entry.executionTimeMaxNanos / 1000d / 1000, + // CUMULATIVE_EXECUTION_TIME + "" + entry.executionTimeCumulativeNanos / 1000d / 1000, + // AVERAGE_EXECUTION_TIME + "" + 
entry.executionTimeMeanNanos / 1000d / 1000, + // STD_DEV_EXECUTION_TIME + "" + entry.getExecutionTimeStandardDeviation() / 1000d / 1000, + // MIN_ROW_COUNT + "" + entry.rowCountMin, + // MAX_ROW_COUNT + "" + entry.rowCountMax, + // CUMULATIVE_ROW_COUNT + "" + entry.rowCountCumulative, + // AVERAGE_ROW_COUNT + "" + entry.rowCountMean, + // STD_DEV_ROW_COUNT + "" + entry.getRowCountStandardDeviation() + ); + } + } + break; + } + case SYNONYMS: { + for (TableSynonym synonym : database.getAllSynonyms()) { + add(rows, + // SYNONYM_CATALOG + catalog, + // SYNONYM_SCHEMA + identifier(synonym.getSchema().getName()), + // SYNONYM_NAME + identifier(synonym.getName()), + // SYNONYM_FOR + synonym.getSynonymForName(), + // SYNONYM_FOR_SCHEMA + synonym.getSynonymForSchema().getName(), + // TYPE NAME + "SYNONYM", + // STATUS + "VALID", + // REMARKS + replaceNullWithEmpty(synonym.getComment()), + // ID + "" + synonym.getId() + ); + } + break; + } + case TABLE_CONSTRAINTS: { + for (SchemaObject obj : database.getAllSchemaObjects(DbObject.CONSTRAINT)) { + Constraint constraint = (Constraint) obj; + Constraint.Type constraintType = constraint.getConstraintType(); + Table table = constraint.getTable(); + if (hideTable(table, session)) { + continue; + } + String tableName = identifier(table.getName()); + if (!checkIndex(session, tableName, indexFrom, indexTo)) { + continue; + } + add(rows, + // CONSTRAINT_CATALOG + catalog, + // CONSTRAINT_SCHEMA + identifier(constraint.getSchema().getName()), + // CONSTRAINT_NAME + identifier(constraint.getName()), + // CONSTRAINT_TYPE + constraintType.getSqlName(), + // TABLE_CATALOG + catalog, + // TABLE_SCHEMA + identifier(table.getSchema().getName()), + // TABLE_NAME + tableName, + // IS_DEFERRABLE + "NO", + // INITIALLY_DEFERRED + "NO" + ); + } + break; + } + case KEY_COLUMN_USAGE: { + for (SchemaObject obj : database.getAllSchemaObjects(DbObject.CONSTRAINT)) { + Constraint constraint = (Constraint) obj; + Constraint.Type constraintType = 
constraint.getConstraintType(); + IndexColumn[] indexColumns = null; + Table table = constraint.getTable(); + if (hideTable(table, session)) { + continue; + } + String tableName = identifier(table.getName()); + if (!checkIndex(session, tableName, indexFrom, indexTo)) { + continue; + } + if (constraintType == Constraint.Type.UNIQUE || + constraintType == Constraint.Type.PRIMARY_KEY) { + indexColumns = ((ConstraintUnique) constraint).getColumns(); + } else if (constraintType == Constraint.Type.REFERENTIAL) { + indexColumns = ((ConstraintReferential) constraint).getColumns(); + } + if (indexColumns == null) { + continue; + } + ConstraintUnique referenced; + if (constraintType == Constraint.Type.REFERENTIAL) { + referenced = lookupUniqueForReferential((ConstraintReferential) constraint); + } else { + referenced = null; + } + for (int i = 0; i < indexColumns.length; i++) { + IndexColumn indexColumn = indexColumns[i]; + String ordinalPosition = Integer.toString(i + 1); + String positionInUniqueConstraint; + if (constraintType == Constraint.Type.REFERENTIAL) { + positionInUniqueConstraint = ordinalPosition; + if (referenced != null) { + Column c = ((ConstraintReferential) constraint).getRefColumns()[i].column; + IndexColumn[] refColumns = referenced.getColumns(); + for (int j = 0; j < refColumns.length; j++) { + if (refColumns[j].column.equals(c)) { + positionInUniqueConstraint = Integer.toString(j + 1); + break; + } + } + } + } else { + positionInUniqueConstraint = null; + } + add(rows, + // CONSTRAINT_CATALOG + catalog, + // CONSTRAINT_SCHEMA + identifier(constraint.getSchema().getName()), + // CONSTRAINT_NAME + identifier(constraint.getName()), + // TABLE_CATALOG + catalog, + // TABLE_SCHEMA + identifier(table.getSchema().getName()), + // TABLE_NAME + tableName, + // COLUMN_NAME + indexColumn.columnName, + // ORDINAL_POSITION + ordinalPosition, + // POSITION_IN_UNIQUE_CONSTRAINT + positionInUniqueConstraint + ); + } + } + break; + } + case REFERENTIAL_CONSTRAINTS: { + 
for (SchemaObject obj : database.getAllSchemaObjects(DbObject.CONSTRAINT)) { + if (((Constraint) obj).getConstraintType() != Constraint.Type.REFERENTIAL) { + continue; + } + ConstraintReferential constraint = (ConstraintReferential) obj; + Table table = constraint.getTable(); + if (hideTable(table, session)) { + continue; + } + // Should be the referenced unique constraint, but H2 uses indexes instead. + // So try to find a matching unique constraint first, and if there is no such + // constraint, use the index name to return something. + SchemaObject unique = lookupUniqueForReferential(constraint); + if (unique == null) { + unique = constraint.getUniqueIndex(); + } + add(rows, + // CONSTRAINT_CATALOG + catalog, + // CONSTRAINT_SCHEMA + identifier(constraint.getSchema().getName()), + // CONSTRAINT_NAME + identifier(constraint.getName()), + // UNIQUE_CONSTRAINT_CATALOG + catalog, + // UNIQUE_CONSTRAINT_SCHEMA + identifier(unique.getSchema().getName()), + // UNIQUE_CONSTRAINT_NAME + unique.getName(), + // MATCH_OPTION + "NONE", + // UPDATE_RULE + constraint.getUpdateAction().getSqlName(), + // DELETE_RULE + constraint.getDeleteAction().getSqlName() + ); + } + break; + } + default: + DbException.throwInternalError("type="+type); + } + return rows; + } + + private static int getRefAction(ConstraintActionType action) { + switch (action) { + case CASCADE: + return DatabaseMetaData.importedKeyCascade; + case RESTRICT: + return DatabaseMetaData.importedKeyRestrict; + case SET_DEFAULT: + return DatabaseMetaData.importedKeySetDefault; + case SET_NULL: + return DatabaseMetaData.importedKeySetNull; + default: + throw DbException.throwInternalError("action="+action); + } + } + + private static ConstraintUnique lookupUniqueForReferential(ConstraintReferential referential) { + Table table = referential.getRefTable(); + for (Constraint c : table.getConstraints()) { + if (c.getConstraintType() == Constraint.Type.UNIQUE) { + ConstraintUnique unique = (ConstraintUnique) c; + if 
(unique.getReferencedColumns(table).equals(referential.getReferencedColumns(table))) { + return unique; + } + } + return null; + } + + @Override + public void removeRow(Session session, Row row) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public void addRow(Session session, Row row) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public void removeChildrenAndResources(Session session) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void unlock(Session s) { + // nothing to do + } + + private void addPrivileges(ArrayList<Row> rows, DbObject grantee, + String catalog, Table table, String column, int rightMask) { + if ((rightMask & Right.SELECT) != 0) { + addPrivilege(rows, grantee, catalog, table, column, "SELECT"); + } + if ((rightMask & Right.INSERT) != 0) { + addPrivilege(rows, grantee, catalog, table, column, "INSERT"); + } + if ((rightMask & Right.UPDATE) != 0) { + addPrivilege(rows, grantee, catalog, table, column, "UPDATE"); + } + if ((rightMask & Right.DELETE) != 0) { + addPrivilege(rows, grantee, catalog, table, column, "DELETE"); + } + } + + private void addPrivilege(ArrayList<Row> rows, DbObject grantee, + String catalog, Table table, String column, String right) { + String isGrantable = "NO"; + if (grantee.getType() == DbObject.USER) { + User user = (User) grantee; + if (user.isAdmin()) { + // the right is grantable if the grantee is an admin + isGrantable = "YES"; + } + } + if (column == null) { + add(rows, + // GRANTOR + null, + // GRANTEE + identifier(grantee.getName()), + // TABLE_CATALOG + catalog, + // TABLE_SCHEMA + identifier(table.getSchema().getName()), + // TABLE_NAME + identifier(table.getName()), + // PRIVILEGE_TYPE + right, + // IS_GRANTABLE + isGrantable + ); + } else { + add(rows, + // GRANTOR + null, + // GRANTEE + identifier(grantee.getName()), + // TABLE_CATALOG + catalog, + // 
TABLE_SCHEMA + identifier(table.getSchema().getName()), + // TABLE_NAME + identifier(table.getName()), + // COLUMN_NAME + identifier(column), + // PRIVILEGE_TYPE + right, + // IS_GRANTABLE + isGrantable + ); + } + } + + private void add(ArrayList<Row> rows, String... strings) { + Value[] values = new Value[strings.length]; + for (int i = 0; i < strings.length; i++) { + String s = strings[i]; + Value v = (s == null) ? (Value) ValueNull.INSTANCE : ValueString.get(s); + Column col = columns[i]; + v = col.convert(v); + values[i] = v; + } + Row row = database.createRow(values, 1); + row.setKey(rows.size()); + rows.add(row); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("META"); + } + + @Override + public void checkSupportAlter() { + throw DbException.getUnsupportedException("META"); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("META"); + } + + @Override + public long getRowCount(Session session) { + throw DbException.throwInternalError(toString()); + } + + @Override + public boolean canGetRowCount() { + return false; + } + + @Override + public boolean canDrop() { + return false; + } + + @Override + public TableType getTableType() { + return TableType.SYSTEM_TABLE; + } + + @Override + public Index getScanIndex(Session session) { + return new MetaIndex(this, IndexColumn.wrap(columns), true); + } + + @Override + public ArrayList<Index> getIndexes() { + ArrayList<Index> list = New.arrayList(); + if (metaIndex == null) { + return list; + } + list.add(new MetaIndex(this, IndexColumn.wrap(columns), true)); + // TODO re-use the index + list.add(metaIndex); + return list; + } + + @Override + public long getMaxDataModificationId() { + switch (type) { + case SETTINGS: + case IN_DOUBT: + case SESSIONS: + case LOCKS: + case SESSION_STATE: + return Long.MAX_VALUE; + } + return database.getModificationDataId(); + } + + @Override + public Index getUniqueIndex() { + return null; + } + + /** + * Get the 
number of meta table types. Supported meta table + * types are 0 .. this value - 1. + * + * @return the number of meta table types + */ + public static int getMetaTableTypeCount() { + return META_TABLE_TYPE_COUNT; + } + + @Override + public long getRowCountApproximation() { + return ROW_COUNT_APPROXIMATION; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public boolean isDeterministic() { + return true; + } + + @Override + public boolean canReference() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/Plan.java b/modules/h2/src/main/java/org/h2/table/Plan.java new file mode 100644 index 0000000000000..24909c188a49b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/Plan.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.message.Trace; +import org.h2.table.TableFilter.TableFilterVisitor; +import org.h2.util.New; + +/** + * A possible query execution plan. The time required to execute a query depends + * on the order the tables are accessed. + */ +public class Plan { + + private final TableFilter[] filters; + private final HashMap<TableFilter, PlanItem> planItems = new HashMap<>(); + private final Expression[] allConditions; + private final TableFilter[] allFilters; + + /** + * Create a query plan with the given order. 
+ * + * @param filters the tables of the query + * @param count the number of table items + * @param condition the condition in the WHERE clause + */ + public Plan(TableFilter[] filters, int count, Expression condition) { + this.filters = new TableFilter[count]; + System.arraycopy(filters, 0, this.filters, 0, count); + final ArrayList<Expression> allCond = New.arrayList(); + final ArrayList<TableFilter> all = New.arrayList(); + if (condition != null) { + allCond.add(condition); + } + for (int i = 0; i < count; i++) { + TableFilter f = filters[i]; + f.visit(new TableFilterVisitor() { + @Override + public void accept(TableFilter f) { + all.add(f); + if (f.getJoinCondition() != null) { + allCond.add(f.getJoinCondition()); + } + } + }); + } + allConditions = allCond.toArray(new Expression[0]); + allFilters = all.toArray(new TableFilter[0]); + } + + /** + * Get the plan item for the given table. + * + * @param filter the table + * @return the plan item + */ + public PlanItem getItem(TableFilter filter) { + return planItems.get(filter); + } + + /** + * Get the list of tables. + * + * @return the list of tables + */ + public TableFilter[] getFilters() { + return filters; + } + + /** + * Remove all index conditions that cannot be used. + */ + public void removeUnusableIndexConditions() { + for (int i = 0; i < allFilters.length; i++) { + TableFilter f = allFilters[i]; + setEvaluatable(f, true); + if (i < allFilters.length - 1 || + f.getSession().getDatabase().getSettings().earlyFilter) { + // the last table doesn't need the optimization, + // otherwise the expression is calculated twice unnecessarily + // (not that bad but not optimal) + f.optimizeFullCondition(false); + } + f.removeUnusableIndexConditions(); + } + for (TableFilter f : allFilters) { + setEvaluatable(f, false); + } + } + + /** + * Calculate the cost of this query plan. 
+ * + * @param session the session + * @return the cost + */ + public double calculateCost(Session session) { + Trace t = session.getTrace(); + if (t.isDebugEnabled()) { + t.debug("Plan : calculate cost for plan {0}", Arrays.toString(allFilters)); + } + double cost = 1; + boolean invalidPlan = false; + final HashSet<Column> allColumnsSet = ExpressionVisitor + .allColumnsForTableFilters(allFilters); + for (int i = 0; i < allFilters.length; i++) { + TableFilter tableFilter = allFilters[i]; + if (t.isDebugEnabled()) { + t.debug("Plan : for table filter {0}", tableFilter); + } + PlanItem item = tableFilter.getBestPlanItem(session, allFilters, i, allColumnsSet); + planItems.put(tableFilter, item); + if (t.isDebugEnabled()) { + t.debug("Plan : best plan item cost {0} index {1}", + item.cost, item.getIndex().getPlanSQL()); + } + cost += cost * item.cost; + setEvaluatable(tableFilter, true); + Expression on = tableFilter.getJoinCondition(); + if (on != null) { + if (!on.isEverything(ExpressionVisitor.EVALUATABLE_VISITOR)) { + invalidPlan = true; + break; + } + } + } + if (invalidPlan) { + cost = Double.POSITIVE_INFINITY; + } + if (t.isDebugEnabled()) { + session.getTrace().debug("Plan : plan cost {0}", cost); + } + for (TableFilter f : allFilters) { + setEvaluatable(f, false); + } + return cost; + } + + private void setEvaluatable(TableFilter filter, boolean b) { + filter.setEvaluatable(filter, b); + for (Expression e : allConditions) { + e.setEvaluatable(filter, b); + } + } +} diff --git a/modules/h2/src/main/java/org/h2/table/PlanItem.java b/modules/h2/src/main/java/org/h2/table/PlanItem.java new file mode 100644 index 0000000000000..2e08390ffbf25 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/PlanItem.java @@ -0,0 +1,58 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import org.h2.index.Index; + +/** + * The plan item describes the index to be used, and the estimated cost when + * using it. + */ +public class PlanItem { + + /** + * The cost. + */ + double cost; + + private int[] masks; + private Index index; + private PlanItem joinPlan; + private PlanItem nestedJoinPlan; + + void setMasks(int[] masks) { + this.masks = masks; + } + + int[] getMasks() { + return masks; + } + + void setIndex(Index index) { + this.index = index; + } + + public Index getIndex() { + return index; + } + + PlanItem getJoinPlan() { + return joinPlan; + } + + PlanItem getNestedJoinPlan() { + return nestedJoinPlan; + } + + void setJoinPlan(PlanItem joinPlan) { + this.joinPlan = joinPlan; + } + + void setNestedJoinPlan(PlanItem nestedJoinPlan) { + this.nestedJoinPlan = nestedJoinPlan; + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/RangeTable.java b/modules/h2/src/main/java/org/h2/table/RangeTable.java new file mode 100644 index 0000000000000..d36c199be5e1f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/RangeTable.java @@ -0,0 +1,257 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.expression.Expression; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.index.RangeIndex; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.schema.Schema; +import org.h2.value.Value; + +/** + * The table SYSTEM_RANGE is a virtual table that generates incrementing numbers + * with a given start and end point. + */ +public class RangeTable extends Table { + + /** + * The name of the range table. 
+ */ + public static final String NAME = "SYSTEM_RANGE"; + + /** + * The PostgreSQL alias for the range table. + */ + public static final String ALIAS = "GENERATE_SERIES"; + + private Expression min, max, step; + private boolean optimized; + + /** + * Create a new range with the given start and end expressions. + * + * @param schema the schema (always the main schema) + * @param min the start expression + * @param max the end expression + * @param noColumns whether this table has no columns + */ + public RangeTable(Schema schema, Expression min, Expression max, + boolean noColumns) { + super(schema, 0, NAME, true, true); + Column[] cols = noColumns ? new Column[0] : new Column[] { new Column( + "X", Value.LONG) }; + this.min = min; + this.max = max; + setColumns(cols); + } + + public RangeTable(Schema schema, Expression min, Expression max, + Expression step, boolean noColumns) { + this(schema, min, max, noColumns); + this.step = step; + } + + @Override + public String getDropSQL() { + return null; + } + + @Override + public String getCreateSQL() { + return null; + } + + @Override + public String getSQL() { + String sql = NAME + "(" + min.getSQL() + ", " + max.getSQL(); + if (step != null) { + sql += ", " + step.getSQL(); + } + return sql + ")"; + } + + @Override + public boolean lock(Session session, boolean exclusive, boolean forceLockEvenInMvcc) { + // nothing to do + return false; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void unlock(Session s) { + // nothing to do + } + + @Override + public boolean isLockedExclusively() { + return false; + } + + @Override + public Index addIndex(Session session, String indexName, + int indexId, IndexColumn[] cols, IndexType indexType, + boolean create, String indexComment) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public void removeRow(Session session, Row row) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } 
+ + @Override + public void addRow(Session session, Row row) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public void checkSupportAlter() { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public boolean canGetRowCount() { + return true; + } + + @Override + public boolean canDrop() { + return false; + } + + @Override + public long getRowCount(Session session) { + long step = getStep(session); + if (step == 0L) { + throw DbException.get(ErrorCode.STEP_SIZE_MUST_NOT_BE_ZERO); + } + long delta = getMax(session) - getMin(session); + if (step > 0) { + if (delta < 0) { + return 0; + } + } else if (delta > 0) { + return 0; + } + return delta / step + 1; + } + + @Override + public TableType getTableType() { + return TableType.SYSTEM_TABLE; + } + + @Override + public Index getScanIndex(Session session) { + if (getStep(session) == 0) { + throw DbException.get(ErrorCode.STEP_SIZE_MUST_NOT_BE_ZERO); + } + return new RangeIndex(this, IndexColumn.wrap(columns)); + } + + /** + * Calculate and get the start value of this range. + * + * @param session the session + * @return the start value + */ + public long getMin(Session session) { + optimize(session); + return min.getValue(session).getLong(); + } + + /** + * Calculate and get the end value of this range. + * + * @param session the session + * @return the end value + */ + public long getMax(Session session) { + optimize(session); + return max.getValue(session).getLong(); + } + + /** + * Get the increment. 
+ * + * @param session the session + * @return the increment (1 by default) + */ + public long getStep(Session session) { + optimize(session); + if (step == null) { + return 1; + } + return step.getValue(session).getLong(); + } + + private void optimize(Session s) { + if (!optimized) { + min = min.optimize(s); + max = max.optimize(s); + if (step != null) { + step = step.optimize(s); + } + optimized = true; + } + } + + @Override + public ArrayList<Index> getIndexes() { + return null; + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("SYSTEM_RANGE"); + } + + @Override + public long getMaxDataModificationId() { + return 0; + } + + @Override + public Index getUniqueIndex() { + return null; + } + + @Override + public long getRowCountApproximation() { + return 100; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public boolean isDeterministic() { + return true; + } + + @Override + public boolean canReference() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/RegularTable.java b/modules/h2/src/main/java/org/h2/table/RegularTable.java new file mode 100644 index 0000000000000..f60739675b613 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/RegularTable.java @@ -0,0 +1,804 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.ArrayDeque; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashSet; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.command.ddl.CreateTableData; +import org.h2.constraint.Constraint; +import org.h2.constraint.ConstraintReferential; +import org.h2.engine.Constants; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.index.Cursor; +import org.h2.index.HashIndex; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.index.MultiVersionIndex; +import org.h2.index.NonUniqueHashIndex; +import org.h2.index.PageBtreeIndex; +import org.h2.index.PageDataIndex; +import org.h2.index.PageDelegateIndex; +import org.h2.index.ScanIndex; +import org.h2.index.SpatialTreeIndex; +import org.h2.index.TreeIndex; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.Row; +import org.h2.result.SortOrder; +import org.h2.schema.SchemaObject; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.value.CompareMode; +import org.h2.value.DataType; +import org.h2.value.Value; + +/** + * Most tables are an instance of this class. For this table, the data is stored + * in the database. The actual data is not kept here, instead it is kept in the + * indexes. There is at least one index, the scan index. + */ +public class RegularTable extends TableBase { + + private Index scanIndex; + private long rowCount; + private volatile Session lockExclusiveSession; + private HashSet<Session> lockSharedSessions = new HashSet<>(); + + /** + * The queue of sessions waiting to lock the table. It is a FIFO queue to + * prevent starvation, since Java's synchronized locking is biased. 
+ */ + private final ArrayDeque<Session> waitingSessions = new ArrayDeque<>(); + private final Trace traceLock; + private final ArrayList<Index> indexes = New.arrayList(); + private long lastModificationId; + private final boolean containsLargeObject; + private final PageDataIndex mainIndex; + private int changesSinceAnalyze; + private int nextAnalyze; + private Column rowIdColumn; + + public RegularTable(CreateTableData data) { + super(data); + nextAnalyze = database.getSettings().analyzeAuto; + this.isHidden = data.isHidden; + boolean b = false; + for (Column col : getColumns()) { + if (DataType.isLargeObject(col.getType())) { + b = true; + break; + } + } + containsLargeObject = b; + if (data.persistData && database.isPersistent()) { + mainIndex = new PageDataIndex(this, data.id, + IndexColumn.wrap(getColumns()), + IndexType.createScan(data.persistData), + data.create, data.session); + scanIndex = mainIndex; + } else { + mainIndex = null; + scanIndex = new ScanIndex(this, data.id, + IndexColumn.wrap(getColumns()), IndexType.createScan(data.persistData)); + } + indexes.add(scanIndex); + traceLock = database.getTrace(Trace.LOCK); + } + + @Override + public void close(Session session) { + for (Index index : indexes) { + index.close(session); + } + } + + @Override + public Row getRow(Session session, long key) { + return scanIndex.getRow(session, key); + } + + @Override + public void addRow(Session session, Row row) { + lastModificationId = database.getNextModificationDataId(); + if (database.isMultiVersion()) { + row.setSessionId(session.getId()); + } + int i = 0; + try { + for (int size = indexes.size(); i < size; i++) { + Index index = indexes.get(i); + index.add(session, row); + checkRowCount(session, index, 1); + } + rowCount++; + } catch (Throwable e) { + try { + while (--i >= 0) { + Index index = indexes.get(i); + index.remove(session, row); + checkRowCount(session, index, 0); + } + } catch (DbException e2) { + // this could happen, for example on failure in the storage + // 
but if that is not the case it means there is something wrong + // with the database + trace.error(e2, "could not undo operation"); + throw e2; + } + DbException de = DbException.convert(e); + if (de.getErrorCode() == ErrorCode.DUPLICATE_KEY_1) { + for (Index index : indexes) { + if (index.getIndexType().isUnique() && index instanceof MultiVersionIndex) { + MultiVersionIndex mv = (MultiVersionIndex) index; + if (mv.isUncommittedFromOtherSession(session, row)) { + throw DbException.get( + ErrorCode.CONCURRENT_UPDATE_1, index.getName()); + } + } + } + } + throw de; + } + analyzeIfRequired(session); + } + + @Override + public void commit(short operation, Row row) { + lastModificationId = database.getNextModificationDataId(); + for (Index index : indexes) { + index.commit(operation, row); + } + } + + private void checkRowCount(Session session, Index index, int offset) { + if (SysProperties.CHECK && !database.isMultiVersion()) { + if (!(index instanceof PageDelegateIndex)) { + long rc = index.getRowCount(session); + if (rc != rowCount + offset) { + DbException.throwInternalError( + "rowCount expected " + (rowCount + offset) + + " got " + rc + " " + getName() + "." 
+ index.getName()); + } + } + } + } + + @Override + public Index getScanIndex(Session session) { + return indexes.get(0); + } + + @Override + public Index getUniqueIndex() { + for (Index idx : indexes) { + if (idx.getIndexType().isUnique()) { + return idx; + } + } + return null; + } + + @Override + public ArrayList getIndexes() { + return indexes; + } + + @Override + public Index addIndex(Session session, String indexName, int indexId, + IndexColumn[] cols, IndexType indexType, boolean create, + String indexComment) { + if (indexType.isPrimaryKey()) { + for (IndexColumn c : cols) { + Column column = c.column; + if (column.isNullable()) { + throw DbException.get( + ErrorCode.COLUMN_MUST_NOT_BE_NULLABLE_1, column.getName()); + } + column.setPrimaryKey(true); + } + } + boolean isSessionTemporary = isTemporary() && !isGlobalTemporary(); + if (!isSessionTemporary) { + database.lockMeta(session); + } + Index index; + if (isPersistIndexes() && indexType.isPersistent()) { + int mainIndexColumn; + if (database.isStarting() && + database.getPageStore().getRootPageId(indexId) != 0) { + mainIndexColumn = -1; + } else if (!database.isStarting() && mainIndex.getRowCount(session) != 0) { + mainIndexColumn = -1; + } else { + mainIndexColumn = getMainIndexColumn(indexType, cols); + } + if (mainIndexColumn != -1) { + mainIndex.setMainIndexColumn(mainIndexColumn); + index = new PageDelegateIndex(this, indexId, indexName, + indexType, mainIndex, create, session); + } else if (indexType.isSpatial()) { + index = new SpatialTreeIndex(this, indexId, indexName, cols, + indexType, true, create, session); + } else { + index = new PageBtreeIndex(this, indexId, indexName, cols, + indexType, create, session); + } + } else { + if (indexType.isHash()) { + if (cols.length != 1) { + throw DbException.getUnsupportedException( + "hash indexes may index only one column"); + } + if (indexType.isUnique()) { + index = new HashIndex(this, indexId, indexName, cols, + indexType); + } else { + index = new 
NonUniqueHashIndex(this, indexId, indexName, + cols, indexType); + } + } else if (indexType.isSpatial()) { + index = new SpatialTreeIndex(this, indexId, indexName, cols, + indexType, false, true, session); + } else { + index = new TreeIndex(this, indexId, indexName, cols, indexType); + } + } + if (database.isMultiVersion()) { + index = new MultiVersionIndex(index, this); + } + if (index.needRebuild() && rowCount > 0) { + try { + Index scan = getScanIndex(session); + long remaining = scan.getRowCount(session); + long total = remaining; + Cursor cursor = scan.find(session, null, null); + long i = 0; + int bufferSize = (int) Math.min(rowCount, database.getMaxMemoryRows()); + ArrayList buffer = new ArrayList<>(bufferSize); + String n = getName() + ":" + index.getName(); + int t = MathUtils.convertLongToInt(total); + while (cursor.next()) { + database.setProgress(DatabaseEventListener.STATE_CREATE_INDEX, n, + MathUtils.convertLongToInt(i++), t); + Row row = cursor.get(); + buffer.add(row); + if (buffer.size() >= bufferSize) { + addRowsToIndex(session, buffer, index); + } + remaining--; + } + addRowsToIndex(session, buffer, index); + if (SysProperties.CHECK && remaining != 0) { + DbException.throwInternalError("rowcount remaining=" + + remaining + " " + getName()); + } + } catch (DbException e) { + getSchema().freeUniqueName(indexName); + try { + index.remove(session); + } catch (DbException e2) { + // this could happen, for example on failure in the storage + // but if that is not the case it means + // there is something wrong with the database + trace.error(e2, "could not remove index"); + throw e2; + } + throw e; + } + } + index.setTemporary(isTemporary()); + if (index.getCreateSQL() != null) { + index.setComment(indexComment); + if (isSessionTemporary) { + session.addLocalTempTableIndex(index); + } else { + database.addSchemaObject(session, index); + } + } + indexes.add(index); + setModified(); + return index; + } + + private int getMainIndexColumn(IndexType 
indexType, IndexColumn[] cols) { + if (mainIndex.getMainIndexColumn() != -1) { + return -1; + } + if (!indexType.isPrimaryKey() || cols.length != 1) { + return -1; + } + IndexColumn first = cols[0]; + if (first.sortType != SortOrder.ASCENDING) { + return -1; + } + switch (first.column.getType()) { + case Value.BYTE: + case Value.SHORT: + case Value.INT: + case Value.LONG: + break; + default: + return -1; + } + return first.column.getColumnId(); + } + + @Override + public boolean canGetRowCount() { + return true; + } + + private static void addRowsToIndex(Session session, ArrayList list, + Index index) { + final Index idx = index; + Collections.sort(list, new Comparator() { + @Override + public int compare(Row r1, Row r2) { + return idx.compareRows(r1, r2); + } + }); + for (Row row : list) { + index.add(session, row); + } + list.clear(); + } + + @Override + public boolean canDrop() { + return true; + } + + @Override + public long getRowCount(Session session) { + if (database.isMultiVersion()) { + return getScanIndex(session).getRowCount(session); + } + return rowCount; + } + + @Override + public void removeRow(Session session, Row row) { + if (database.isMultiVersion()) { + if (row.isDeleted()) { + throw DbException.get(ErrorCode.CONCURRENT_UPDATE_1, getName()); + } + int old = row.getSessionId(); + int newId = session.getId(); + if (old == 0) { + row.setSessionId(newId); + } else if (old != newId) { + throw DbException.get(ErrorCode.CONCURRENT_UPDATE_1, getName()); + } + } + lastModificationId = database.getNextModificationDataId(); + int i = indexes.size() - 1; + try { + for (; i >= 0; i--) { + Index index = indexes.get(i); + index.remove(session, row); + checkRowCount(session, index, -1); + } + rowCount--; + } catch (Throwable e) { + try { + while (++i < indexes.size()) { + Index index = indexes.get(i); + index.add(session, row); + checkRowCount(session, index, 0); + } + } catch (DbException e2) { + // this could happen, for example on failure in the storage + // 
but if that is not the case it means there is something wrong + // with the database + trace.error(e2, "could not undo operation"); + throw e2; + } + throw DbException.convert(e); + } + analyzeIfRequired(session); + } + + @Override + public void truncate(Session session) { + lastModificationId = database.getNextModificationDataId(); + for (int i = indexes.size() - 1; i >= 0; i--) { + Index index = indexes.get(i); + index.truncate(session); + } + rowCount = 0; + changesSinceAnalyze = 0; + } + + private void analyzeIfRequired(Session session) { + if (nextAnalyze == 0 || nextAnalyze > changesSinceAnalyze++) { + return; + } + changesSinceAnalyze = 0; + int n = 2 * nextAnalyze; + if (n > 0) { + nextAnalyze = n; + } + session.markTableForAnalyze(this); + } + + @Override + public boolean isLockedExclusivelyBy(Session session) { + return lockExclusiveSession == session; + } + + @Override + public boolean lock(Session session, boolean exclusive, + boolean forceLockEvenInMvcc) { + int lockMode = database.getLockMode(); + if (lockMode == Constants.LOCK_MODE_OFF) { + return lockExclusiveSession != null; + } + if (!forceLockEvenInMvcc && database.isMultiVersion()) { + // MVCC: update, delete, and insert use a shared lock. 
+ // Select doesn't lock except when using FOR UPDATE + if (exclusive) { + exclusive = false; + } else { + if (lockExclusiveSession == null) { + return false; + } + } + } + if (lockExclusiveSession == session) { + return true; + } + synchronized (database) { + if (lockExclusiveSession == session) { + return true; + } + if (!exclusive && lockSharedSessions.contains(session)) { + return true; + } + session.setWaitForLock(this, Thread.currentThread()); + waitingSessions.addLast(session); + try { + doLock1(session, lockMode, exclusive); + } finally { + session.setWaitForLock(null, null); + waitingSessions.remove(session); + } + } + return false; + } + + private void doLock1(Session session, int lockMode, boolean exclusive) { + traceLock(session, exclusive, "requesting for"); + // don't get the current time unless necessary + long max = 0; + boolean checkDeadlock = false; + while (true) { + // if I'm the next one in the queue + if (waitingSessions.getFirst() == session) { + if (doLock2(session, lockMode, exclusive)) { + return; + } + } + if (checkDeadlock) { + ArrayList sessions = checkDeadlock(session, null, null); + if (sessions != null) { + throw DbException.get(ErrorCode.DEADLOCK_1, + getDeadlockDetails(sessions, exclusive)); + } + } else { + // check for deadlocks from now on + checkDeadlock = true; + } + long now = System.nanoTime(); + if (max == 0) { + // try at least one more time + max = now + TimeUnit.MILLISECONDS.toNanos(session.getLockTimeout()); + } else if (now >= max) { + traceLock(session, exclusive, "timeout after " + session.getLockTimeout()); + throw DbException.get(ErrorCode.LOCK_TIMEOUT_1, getName()); + } + try { + traceLock(session, exclusive, "waiting for"); + if (database.getLockMode() == Constants.LOCK_MODE_TABLE_GC) { + for (int i = 0; i < 20; i++) { + long free = Runtime.getRuntime().freeMemory(); + System.gc(); + long free2 = Runtime.getRuntime().freeMemory(); + if (free == free2) { + break; + } + } + } + // don't wait too long so that 
deadlocks are detected early + long sleep = Math.min(Constants.DEADLOCK_CHECK, + TimeUnit.NANOSECONDS.toMillis(max - now)); + if (sleep == 0) { + sleep = 1; + } + database.wait(sleep); + } catch (InterruptedException e) { + // ignore + } + } + } + + private boolean doLock2(Session session, int lockMode, boolean exclusive) { + if (exclusive) { + if (lockExclusiveSession == null) { + if (lockSharedSessions.isEmpty()) { + traceLock(session, exclusive, "added for"); + session.addLock(this); + lockExclusiveSession = session; + return true; + } else if (lockSharedSessions.size() == 1 && + lockSharedSessions.contains(session)) { + traceLock(session, exclusive, "add (upgraded) for "); + lockExclusiveSession = session; + return true; + } + } + } else { + if (lockExclusiveSession == null) { + if (lockMode == Constants.LOCK_MODE_READ_COMMITTED) { + if (!database.isMultiThreaded() && !database.isMultiVersion()) { + // READ_COMMITTED: a read lock is acquired, + // but released immediately after the operation + // is complete. + // When allowing only one thread, no lock is + // required. + // Row level locks work like read committed. + return true; + } + } + if (!lockSharedSessions.contains(session)) { + traceLock(session, exclusive, "ok"); + session.addLock(this); + lockSharedSessions.add(session); + } + return true; + } + } + return false; + } + private static String getDeadlockDetails(ArrayList sessions, boolean exclusive) { + // We add the thread details here to make it easier for customers to + // match up these error messages with their own logs. + StringBuilder buff = new StringBuilder(); + for (Session s : sessions) { + Table lock = s.getWaitForLock(); + Thread thread = s.getWaitForLockThread(); + buff.append("\nSession "). + append(s.toString()). + append(" on thread "). + append(thread.getName()). + append(" is waiting to lock "). + append(lock.toString()). + append(exclusive ? " (exclusive)" : " (shared)"). 
+ append(" while locking "); + int i = 0; + for (Table t : s.getLocks()) { + if (i++ > 0) { + buff.append(", "); + } + buff.append(t.toString()); + if (t instanceof RegularTable) { + if (((RegularTable) t).lockExclusiveSession == s) { + buff.append(" (exclusive)"); + } else { + buff.append(" (shared)"); + } + } + } + buff.append('.'); + } + return buff.toString(); + } + + @Override + public ArrayList checkDeadlock(Session session, Session clash, + Set visited) { + // only one deadlock check at any given time + synchronized (RegularTable.class) { + if (clash == null) { + // verification is started + clash = session; + visited = new HashSet<>(); + } else if (clash == session) { + // we found a circle where this session is involved + return New.arrayList(); + } else if (visited.contains(session)) { + // we have already checked this session. + // there is a circle, but the sessions in the circle need to + // find it out themselves + return null; + } + visited.add(session); + ArrayList error = null; + for (Session s : lockSharedSessions) { + if (s == session) { + // it doesn't matter if we have locked the object already + continue; + } + Table t = s.getWaitForLock(); + if (t != null) { + error = t.checkDeadlock(s, clash, visited); + if (error != null) { + error.add(session); + break; + } + } + } + if (error == null && lockExclusiveSession != null) { + Table t = lockExclusiveSession.getWaitForLock(); + if (t != null) { + error = t.checkDeadlock(lockExclusiveSession, clash, visited); + if (error != null) { + error.add(session); + } + } + } + return error; + } + } + + private void traceLock(Session session, boolean exclusive, String s) { + if (traceLock.isDebugEnabled()) { + traceLock.debug("{0} {1} {2} {3}", session.getId(), + exclusive ? 
"exclusive write lock" : "shared read lock", s, getName()); + } + } + + @Override + public boolean isLockedExclusively() { + return lockExclusiveSession != null; + } + + @Override + public void unlock(Session s) { + if (database != null) { + traceLock(s, lockExclusiveSession == s, "unlock"); + if (lockExclusiveSession == s) { + lockExclusiveSession = null; + } + synchronized (database) { + if (!lockSharedSessions.isEmpty()) { + lockSharedSessions.remove(s); + } + if (!waitingSessions.isEmpty()) { + database.notifyAll(); + } + } + } + } + + /** + * Set the row count of this table. + * + * @param count the row count + */ + public void setRowCount(long count) { + this.rowCount = count; + } + + @Override + public void removeChildrenAndResources(Session session) { + if (containsLargeObject) { + // unfortunately, the data is gone on rollback + truncate(session); + database.getLobStorage().removeAllForTable(getId()); + database.lockMeta(session); + } + super.removeChildrenAndResources(session); + // go backwards because database.removeIndex will call table.removeIndex + while (indexes.size() > 1) { + Index index = indexes.get(1); + if (index.getName() != null) { + database.removeSchemaObject(session, index); + } + // needed for session temporary indexes + indexes.remove(index); + } + if (SysProperties.CHECK) { + for (SchemaObject obj : database.getAllSchemaObjects(DbObject.INDEX)) { + Index index = (Index) obj; + if (index.getTable() == this) { + DbException.throwInternalError("index not dropped: " + index.getName()); + } + } + } + scanIndex.remove(session); + database.removeMeta(session, getId()); + scanIndex = null; + lockExclusiveSession = null; + lockSharedSessions = null; + invalidate(); + } + + @Override + public String toString() { + return getSQL(); + } + + @Override + public void checkRename() { + // ok + } + + @Override + public void checkSupportAlter() { + // ok + } + + @Override + public boolean canTruncate() { + if (getCheckForeignKeyConstraints() && 
+                database.getReferentialIntegrity()) {
+            ArrayList<Constraint> constraints = getConstraints();
+            if (constraints != null) {
+                for (Constraint c : constraints) {
+                    if (c.getConstraintType() != Constraint.Type.REFERENTIAL) {
+                        continue;
+                    }
+                    ConstraintReferential ref = (ConstraintReferential) c;
+                    if (ref.getRefTable() == this) {
+                        return false;
+                    }
+                }
+            }
+        }
+        return true;
+    }
+
+    @Override
+    public TableType getTableType() {
+        return TableType.TABLE;
+    }
+
+    @Override
+    public long getMaxDataModificationId() {
+        return lastModificationId;
+    }
+
+    public boolean getContainsLargeObject() {
+        return containsLargeObject;
+    }
+
+    @Override
+    public long getRowCountApproximation() {
+        return scanIndex.getRowCountApproximation();
+    }
+
+    @Override
+    public long getDiskSpaceUsed() {
+        return scanIndex.getDiskSpaceUsed();
+    }
+
+    public void setCompareMode(CompareMode compareMode) {
+        this.compareMode = compareMode;
+    }
+
+    @Override
+    public boolean isDeterministic() {
+        return true;
+    }
+
+    @Override
+    public Column getRowIdColumn() {
+        if (rowIdColumn == null) {
+            rowIdColumn = new Column(Column.ROWID, Value.LONG);
+            rowIdColumn.setTable(this, -1);
+        }
+        return rowIdColumn;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/table/SingleColumnResolver.java b/modules/h2/src/main/java/org/h2/table/SingleColumnResolver.java
new file mode 100644
index 0000000000000..f33cd8c9e5461
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/table/SingleColumnResolver.java
@@ -0,0 +1,80 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.table;
+
+import org.h2.command.dml.Select;
+import org.h2.expression.Expression;
+import org.h2.expression.ExpressionColumn;
+import org.h2.value.Value;
+
+/**
+ * The single column resolver is like a table with exactly one row.
+ * It is used to parse a simple one-column check constraint.
+ */
+public class SingleColumnResolver implements ColumnResolver {
+
+    private final Column column;
+    private Value value;
+
+    SingleColumnResolver(Column column) {
+        this.column = column;
+    }
+
+    @Override
+    public String getTableAlias() {
+        return null;
+    }
+
+    void setValue(Value value) {
+        this.value = value;
+    }
+
+    @Override
+    public Value getValue(Column col) {
+        return value;
+    }
+
+    @Override
+    public Column[] getColumns() {
+        return new Column[] { column };
+    }
+
+    @Override
+    public String getDerivedColumnName(Column column) {
+        return null;
+    }
+
+    @Override
+    public String getSchemaName() {
+        return null;
+    }
+
+    @Override
+    public TableFilter getTableFilter() {
+        return null;
+    }
+
+    @Override
+    public Select getSelect() {
+        return null;
+    }
+
+    @Override
+    public Column[] getSystemColumns() {
+        return null;
+    }
+
+    @Override
+    public Column getRowIdColumn() {
+        return null;
+    }
+
+    @Override
+    public Expression optimize(ExpressionColumn expressionColumn, Column col) {
+        return expressionColumn;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/table/SubQueryInfo.java b/modules/h2/src/main/java/org/h2/table/SubQueryInfo.java
new file mode 100644
index 0000000000000..dc68b443fdb12
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/table/SubQueryInfo.java
@@ -0,0 +1,59 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+
+package org.h2.table;
+
+import org.h2.result.SortOrder;
+
+/**
+ * Information about current sub-query being prepared.
+ *
+ * @author Sergi Vladykin
+ */
+public class SubQueryInfo {
+
+    private final int[] masks;
+    private final TableFilter[] filters;
+    private final int filter;
+    private final SortOrder sortOrder;
+    private final SubQueryInfo upper;
+
+    /**
+     * @param upper upper level sub-query if any
+     * @param masks index conditions masks
+     * @param filters table filters
+     * @param filter current filter
+     * @param sortOrder sort order
+     */
+    public SubQueryInfo(SubQueryInfo upper, int[] masks, TableFilter[] filters, int filter,
+            SortOrder sortOrder) {
+        this.upper = upper;
+        this.masks = masks;
+        this.filters = filters;
+        this.filter = filter;
+        this.sortOrder = sortOrder;
+    }
+
+    public SubQueryInfo getUpper() {
+        return upper;
+    }
+
+    public int[] getMasks() {
+        return masks;
+    }
+
+    public TableFilter[] getFilters() {
+        return filters;
+    }
+
+    public int getFilter() {
+        return filter;
+    }
+
+    public SortOrder getSortOrder() {
+        return sortOrder;
+    }
+}
diff --git a/modules/h2/src/main/java/org/h2/table/Table.java b/modules/h2/src/main/java/org/h2/table/Table.java
new file mode 100644
index 0000000000000..1a2a96db01793
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/table/Table.java
@@ -0,0 +1,1256 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Set; +import java.util.concurrent.CopyOnWriteArrayList; +import org.h2.api.ErrorCode; +import org.h2.command.Prepared; +import org.h2.constraint.Constraint; +import org.h2.engine.Constants; +import org.h2.engine.DbObject; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.engine.UndoLogRecord; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionVisitor; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.result.Row; +import org.h2.result.RowList; +import org.h2.result.SearchRow; +import org.h2.result.SimpleRow; +import org.h2.result.SimpleRowValue; +import org.h2.result.SortOrder; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObjectBase; +import org.h2.schema.Sequence; +import org.h2.schema.TriggerObject; +import org.h2.util.New; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * This is the base class for most tables. + * A table contains a list of columns and a list of rows. + */ +public abstract class Table extends SchemaObjectBase { + + /** + * The table type that means this table is a regular persistent table. + */ + public static final int TYPE_CACHED = 0; + + /** + * The table type that means this table is a regular persistent table. + */ + public static final int TYPE_MEMORY = 1; + + /** + * The columns of this table. + */ + protected Column[] columns; + + /** + * The compare mode used for this table. + */ + protected CompareMode compareMode; + + /** + * Protected tables are not listed in the meta data and are excluded when + * using the SCRIPT command. 
+ */ + protected boolean isHidden; + + private final HashMap columnMap; + private final boolean persistIndexes; + private final boolean persistData; + private ArrayList triggers; + private ArrayList constraints; + private ArrayList sequences; + /** + * views that depend on this table + */ + private final CopyOnWriteArrayList dependentViews = new CopyOnWriteArrayList<>(); + private ArrayList synonyms; + private boolean checkForeignKeyConstraints = true; + private boolean onCommitDrop, onCommitTruncate; + private volatile Row nullRow; + private boolean tableExpression; + + public Table(Schema schema, int id, String name, boolean persistIndexes, + boolean persistData) { + columnMap = schema.getDatabase().newStringMap(); + initSchemaObjectBase(schema, id, name, Trace.TABLE); + this.persistIndexes = persistIndexes; + this.persistData = persistData; + compareMode = schema.getDatabase().getCompareMode(); + } + + @Override + public void rename(String newName) { + super.rename(newName); + if (constraints != null) { + for (Constraint constraint : constraints) { + constraint.rebuild(); + } + } + } + + public boolean isView() { + return false; + } + + /** + * Lock the table for the given session. + * This method waits until the lock is granted. + * + * @param session the session + * @param exclusive true for write locks, false for read locks + * @param forceLockEvenInMvcc lock even in the MVCC mode + * @return true if the table was already exclusively locked by this session. + * @throws DbException if a lock timeout occurred + */ + public abstract boolean lock(Session session, boolean exclusive, boolean forceLockEvenInMvcc); + + /** + * Close the table object and flush changes. + * + * @param session the session + */ + public abstract void close(Session session); + + /** + * Release the lock for this session. 
+ * + * @param s the session + */ + public abstract void unlock(Session s); + + /** + * Create an index for this table + * + * @param session the session + * @param indexName the name of the index + * @param indexId the id + * @param cols the index columns + * @param indexType the index type + * @param create whether this is a new index + * @param indexComment the comment + * @return the index + */ + public abstract Index addIndex(Session session, String indexName, + int indexId, IndexColumn[] cols, IndexType indexType, + boolean create, String indexComment); + + /** + * Get the given row. + * + * @param session the session + * @param key the primary key + * @return the row + */ + @SuppressWarnings("unused") + public Row getRow(Session session, long key) { + return null; + } + + /** + * Remove a row from the table and all indexes. + * + * @param session the session + * @param row the row + */ + public abstract void removeRow(Session session, Row row); + + /** + * Remove all rows from the table and indexes. + * + * @param session the session + */ + public abstract void truncate(Session session); + + /** + * Add a row to the table and all indexes. + * + * @param session the session + * @param row the row + * @throws DbException if a constraint was violated + */ + public abstract void addRow(Session session, Row row); + + /** + * Commit an operation (when using multi-version concurrency). + * + * @param operation the operation + * @param row the row + */ + @SuppressWarnings("unused") + public void commit(short operation, Row row) { + // nothing to do + } + + /** + * Check if this table supports ALTER TABLE. + * + * @throws DbException if it is not supported + */ + public abstract void checkSupportAlter(); + + /** + * Get the table type name + * + * @return the table type name + */ + public abstract TableType getTableType(); + + /** + * Get the scan index to iterate through all rows. 
+ * + * @param session the session + * @return the index + */ + public abstract Index getScanIndex(Session session); + + /** + * Get the scan index for this table. + * + * @param session the session + * @param masks the search mask + * @param filters the table filters + * @param filter the filter index + * @param sortOrder the sort order + * @param allColumnsSet all columns + * @return the scan index + */ + @SuppressWarnings("unused") + public Index getScanIndex(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return getScanIndex(session); + } + + /** + * Get any unique index for this table if one exists. + * + * @return a unique index + */ + public abstract Index getUniqueIndex(); + + /** + * Get all indexes for this table. + * + * @return the list of indexes + */ + public abstract ArrayList getIndexes(); + + /** + * Get an index by name. + * + * @param indexName the index name to search for + * @return the found index + */ + public Index getIndex(String indexName) { + ArrayList indexes = getIndexes(); + if (indexes != null) { + for (Index index : indexes) { + if (index.getName().equals(indexName)) { + return index; + } + } + } + throw DbException.get(ErrorCode.INDEX_NOT_FOUND_1, indexName); + } + + /** + * Check if this table is locked exclusively. + * + * @return true if it is. + */ + public abstract boolean isLockedExclusively(); + + /** + * Get the last data modification id. + * + * @return the modification id + */ + public abstract long getMaxDataModificationId(); + + /** + * Check if the table is deterministic. + * + * @return true if it is + */ + public abstract boolean isDeterministic(); + + /** + * Check if the row count can be retrieved quickly. + * + * @return true if it can + */ + public abstract boolean canGetRowCount(); + + /** + * Check if this table can be referenced. 
+ * + * @return true if it can + */ + public boolean canReference() { + return true; + } + + /** + * Check if this table can be dropped. + * + * @return true if it can + */ + public abstract boolean canDrop(); + + /** + * Get the row count for this table. + * + * @param session the session + * @return the row count + */ + public abstract long getRowCount(Session session); + + /** + * Get the approximated row count for this table. + * + * @return the approximated row count + */ + public abstract long getRowCountApproximation(); + + public abstract long getDiskSpaceUsed(); + + /** + * Get the row id column if this table has one. + * + * @return the row id column, or null + */ + public Column getRowIdColumn() { + return null; + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + throw DbException.throwInternalError(toString()); + } + + /** + * Check whether the table (or view) contains no columns that prevent index + * conditions to be used. For example, a view that contains the ROWNUM() + * pseudo-column prevents this. + * + * @return true if the table contains no query-comparable column + */ + public boolean isQueryComparable() { + return true; + } + + /** + * Add all objects that this table depends on to the hash set. 
+ * + * @param dependencies the current set of dependencies + */ + public void addDependencies(HashSet dependencies) { + if (dependencies.contains(this)) { + // avoid endless recursion + return; + } + if (sequences != null) { + dependencies.addAll(sequences); + } + ExpressionVisitor visitor = ExpressionVisitor.getDependenciesVisitor( + dependencies); + for (Column col : columns) { + col.isEverything(visitor); + } + if (constraints != null) { + for (Constraint c : constraints) { + c.isEverything(visitor); + } + } + dependencies.add(this); + } + + @Override + public ArrayList getChildren() { + ArrayList children = New.arrayList(); + ArrayList indexes = getIndexes(); + if (indexes != null) { + children.addAll(indexes); + } + if (constraints != null) { + children.addAll(constraints); + } + if (triggers != null) { + children.addAll(triggers); + } + if (sequences != null) { + children.addAll(sequences); + } + children.addAll(dependentViews); + if (synonyms != null) { + children.addAll(synonyms); + } + ArrayList rights = database.getAllRights(); + for (Right right : rights) { + if (right.getGrantedObject() == this) { + children.add(right); + } + } + return children; + } + + protected void setColumns(Column[] columns) { + this.columns = columns; + if (columnMap.size() > 0) { + columnMap.clear(); + } + for (int i = 0; i < columns.length; i++) { + Column col = columns[i]; + int dataType = col.getType(); + if (dataType == Value.UNKNOWN) { + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, col.getSQL()); + } + col.setTable(this, i); + String columnName = col.getName(); + if (columnMap.get(columnName) != null) { + throw DbException.get( + ErrorCode.DUPLICATE_COLUMN_NAME_1, columnName); + } + columnMap.put(columnName, col); + } + } + + /** + * Rename a column of this table. 
+ * + * @param column the column to rename + * @param newName the new column name + */ + public void renameColumn(Column column, String newName) { + for (Column c : columns) { + if (c == column) { + continue; + } + if (c.getName().equals(newName)) { + throw DbException.get( + ErrorCode.DUPLICATE_COLUMN_NAME_1, newName); + } + } + columnMap.remove(column.getName()); + column.rename(newName); + columnMap.put(newName, column); + } + + /** + * Check if the table is exclusively locked by this session. + * + * @param session the session + * @return true if it is + */ + @SuppressWarnings("unused") + public boolean isLockedExclusivelyBy(Session session) { + return false; + } + + /** + * Update a list of rows in this table. + * + * @param prepared the prepared statement + * @param session the session + * @param rows a list of row pairs of the form old row, new row, old row, + * new row,... + */ + public void updateRows(Prepared prepared, Session session, RowList rows) { + // in case we need to undo the update + Session.Savepoint rollback = session.setSavepoint(); + // remove the old rows + int rowScanCount = 0; + for (rows.reset(); rows.hasNext();) { + if ((++rowScanCount & 127) == 0) { + prepared.checkCanceled(); + } + Row o = rows.next(); + rows.next(); + try { + removeRow(session, o); + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.CONCURRENT_UPDATE_1) { + session.rollbackTo(rollback, false); + session.startStatementWithinTransaction(); + rollback = session.setSavepoint(); + } + throw e; + } + session.log(this, UndoLogRecord.DELETE, o); + } + // add the new rows + for (rows.reset(); rows.hasNext();) { + if ((++rowScanCount & 127) == 0) { + prepared.checkCanceled(); + } + rows.next(); + Row n = rows.next(); + try { + addRow(session, n); + } catch (DbException e) { + if (e.getErrorCode() == ErrorCode.CONCURRENT_UPDATE_1) { + session.rollbackTo(rollback, false); + session.startStatementWithinTransaction(); + rollback = session.setSavepoint(); + } + throw e; 
+ } + session.log(this, UndoLogRecord.INSERT, n); + } + } + + public CopyOnWriteArrayList getDependentViews() { + return dependentViews; + } + + @Override + public void removeChildrenAndResources(Session session) { + while (!dependentViews.isEmpty()) { + TableView view = dependentViews.get(0); + dependentViews.remove(0); + database.removeSchemaObject(session, view); + } + while (synonyms != null && !synonyms.isEmpty()) { + TableSynonym synonym = synonyms.remove(0); + database.removeSchemaObject(session, synonym); + } + while (triggers != null && !triggers.isEmpty()) { + TriggerObject trigger = triggers.remove(0); + database.removeSchemaObject(session, trigger); + } + while (constraints != null && !constraints.isEmpty()) { + Constraint constraint = constraints.remove(0); + database.removeSchemaObject(session, constraint); + } + for (Right right : database.getAllRights()) { + if (right.getGrantedObject() == this) { + database.removeDatabaseObject(session, right); + } + } + database.removeMeta(session, getId()); + // must delete sequences later (in case there is a power failure + // before removing the table object) + while (sequences != null && !sequences.isEmpty()) { + Sequence sequence = sequences.remove(0); + // only remove if no other table depends on this sequence + // this is possible when calling ALTER TABLE ALTER COLUMN + if (database.getDependentTable(sequence, this) == null) { + database.removeSchemaObject(session, sequence); + } + } + } + + /** + * Check that these columns are not referenced by a multi-column constraint + * or multi-column index. If it is, an exception is thrown. Single-column + * references and indexes are dropped. 
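The two passes in updateRows above test `(++rowScanCount & 127) == 0` instead of a modulo, so the cancellation check runs only once every 128 rows and stays off the hot path. A minimal sketch of that pattern (class and method names here are mine, not H2's):

```java
class PeriodicCheck {
    /** Count how many times the periodic check fires while scanning n rows. */
    static int checksFired(int n) {
        int rowScanCount = 0;
        int fired = 0;
        for (int i = 0; i < n; i++) {
            // true once every 128 iterations; in H2 this guards
            // prepared.checkCanceled()
            if ((++rowScanCount & 127) == 0) {
                fired++;
            }
        }
        return fired;
    }
}
```

The bitwise AND with 127 works because 128 is a power of two, so the low seven bits of the counter cycle through 0..127.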
+ * + * @param session the session + * @param columnsToDrop the columns to drop + * @throws DbException if the column is referenced by multi-column + * constraints or indexes + */ + public void dropMultipleColumnsConstraintsAndIndexes(Session session, + ArrayList columnsToDrop) { + HashSet constraintsToDrop = new HashSet<>(); + if (constraints != null) { + for (Column col : columnsToDrop) { + for (Constraint constraint : constraints) { + HashSet columns = constraint.getReferencedColumns(this); + if (!columns.contains(col)) { + continue; + } + if (columns.size() == 1) { + constraintsToDrop.add(constraint); + } else { + throw DbException.get( + ErrorCode.COLUMN_IS_REFERENCED_1, constraint.getSQL()); + } + } + } + } + HashSet indexesToDrop = new HashSet<>(); + ArrayList indexes = getIndexes(); + if (indexes != null) { + for (Column col : columnsToDrop) { + for (Index index : indexes) { + if (index.getCreateSQL() == null) { + continue; + } + if (index.getColumnIndex(col) < 0) { + continue; + } + if (index.getColumns().length == 1) { + indexesToDrop.add(index); + } else { + throw DbException.get( + ErrorCode.COLUMN_IS_REFERENCED_1, index.getSQL()); + } + } + } + } + for (Constraint c : constraintsToDrop) { + session.getDatabase().removeSchemaObject(session, c); + } + for (Index i : indexesToDrop) { + // the index may already have been dropped when dropping the + // constraint + if (getIndexes().contains(i)) { + session.getDatabase().removeSchemaObject(session, i); + } + } + } + + public Row getTemplateRow() { + return database.createRow(new Value[columns.length], Row.MEMORY_CALCULATE); + } + + /** + * Get a new simple row object. 
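dropMultipleColumnsConstraintsAndIndexes above applies one rule twice (once for constraints, once for indexes): an object that references only the dropped column is dropped along with it, while a multi-column reference blocks the drop with COLUMN_IS_REFERENCED_1. A simplified sketch of that rule, with a plain name-to-columns map standing in for the Index objects:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class DropColumnCheck {
    /**
     * Returns the names of single-column indexes to drop together with the
     * column; throws if a multi-column index still references it.
     */
    static Set<String> checkDrop(Map<String, List<String>> indexes, String column) {
        Set<String> toDrop = new HashSet<>();
        for (Map.Entry<String, List<String>> e : indexes.entrySet()) {
            List<String> cols = e.getValue();
            if (!cols.contains(column)) {
                continue; // index does not reference the dropped column
            }
            if (cols.size() == 1) {
                toDrop.add(e.getKey()); // single-column: dropped with the column
            } else {
                // mirrors ErrorCode.COLUMN_IS_REFERENCED_1 in the method above
                throw new IllegalStateException("column is referenced by " + e.getKey());
            }
        }
        return toDrop;
    }
}
```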
+ * + * @param singleColumn if only one value need to be stored + * @return the simple row object + */ + public SearchRow getTemplateSimpleRow(boolean singleColumn) { + if (singleColumn) { + return new SimpleRowValue(columns.length); + } + return new SimpleRow(new Value[columns.length]); + } + + Row getNullRow() { + Row row = nullRow; + if (row == null) { + // Here can be concurrently produced more than one row, but it must + // be ok. + Value[] values = new Value[columns.length]; + Arrays.fill(values, ValueNull.INSTANCE); + nullRow = row = database.createRow(values, 1); + } + return row; + } + + public Column[] getColumns() { + return columns; + } + + @Override + public int getType() { + return DbObject.TABLE_OR_VIEW; + } + + /** + * Get the column at the given index. + * + * @param index the column index (0, 1,...) + * @return the column + */ + public Column getColumn(int index) { + return columns[index]; + } + + /** + * Get the column with the given name. + * + * @param columnName the column name + * @return the column + * @throws DbException if the column was not found + */ + public Column getColumn(String columnName) { + Column column = columnMap.get(columnName); + if (column == null) { + throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1, columnName); + } + return column; + } + + /** + * Does the column with the given name exist? + * + * @param columnName the column name + * @return true if the column exists + */ + public boolean doesColumnExist(String columnName) { + return columnMap.containsKey(columnName); + } + + /** + * Get the best plan for the given search mask. 
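getNullRow above uses an intentionally racy lazy initialization: several threads may each build and store a null row, but every thread stores an equivalent value, so losing the race is harmless (the comment "it must be ok" refers to exactly this). A sketch of the idiom under my own illustrative names; I add `volatile` here for safe publication, which the original relies on its own Row construction to provide:

```java
import java.util.Arrays;

class LazyNullRow {
    // volatile added in this sketch so a half-built array is never seen
    private volatile String[] nullRow;

    /** Lazily build the all-NULL row; concurrent callers may rebuild it. */
    String[] get(int width) {
        String[] row = nullRow;
        if (row == null) {
            row = new String[width];
            Arrays.fill(row, "NULL");
            nullRow = row; // a concurrent writer stores an equivalent row
        }
        return row;
    }
}
```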
+ * + * @param session the session + * @param masks per-column comparison bit masks, null means 'always false', + * see constants in IndexCondition + * @param filters all joined table filters + * @param filter the current table filter index + * @param sortOrder the sort order + * @param allColumnsSet the set of all columns + * @return the plan item + */ + public PlanItem getBestPlanItem(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + PlanItem item = new PlanItem(); + item.setIndex(getScanIndex(session)); + item.cost = item.getIndex().getCost(session, null, filters, filter, null, allColumnsSet); + Trace t = session.getTrace(); + if (t.isDebugEnabled()) { + t.debug("Table : potential plan item cost {0} index {1}", + item.cost, item.getIndex().getPlanSQL()); + } + ArrayList indexes = getIndexes(); + IndexHints indexHints = getIndexHints(filters, filter); + + if (indexes != null && masks != null) { + for (int i = 1, size = indexes.size(); i < size; i++) { + Index index = indexes.get(i); + + if (isIndexExcludedByHints(indexHints, index)) { + continue; + } + + double cost = index.getCost(session, masks, filters, filter, + sortOrder, allColumnsSet); + if (t.isDebugEnabled()) { + t.debug("Table : potential plan item cost {0} index {1}", + cost, index.getPlanSQL()); + } + if (cost < item.cost) { + item.cost = cost; + item.setIndex(index); + } + } + } + return item; + } + + private static boolean isIndexExcludedByHints(IndexHints indexHints, Index index) { + return indexHints != null && !indexHints.allowIndex(index); + } + + private static IndexHints getIndexHints(TableFilter[] filters, int filter) { + return filters == null ? null : filters[filter].getIndexHints(); + } + + /** + * Get the primary key index if there is one, or null if there is none. 
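The selection loop in getBestPlanItem above seeds the plan with the scan index cost and then keeps any candidate that is strictly cheaper, so ties preserve the earlier choice. The core of that comparison, with plain cost arrays standing in for Index objects:

```java
class PlanChoice {
    /** Returns the position of the cheapest candidate, or -1 to keep the scan. */
    static int cheapest(double scanCost, double[] indexCosts) {
        int best = -1;
        double bestCost = scanCost; // full-scan baseline
        for (int i = 0; i < indexCosts.length; i++) {
            // strictly cheaper wins; equal cost keeps the earlier choice
            if (indexCosts[i] < bestCost) {
                bestCost = indexCosts[i];
                best = i;
            }
        }
        return best;
    }
}
```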
+ * + * @return the primary key index or null + */ + public Index findPrimaryKey() { + ArrayList indexes = getIndexes(); + if (indexes != null) { + for (Index idx : indexes) { + if (idx.getIndexType().isPrimaryKey()) { + return idx; + } + } + } + return null; + } + + public Index getPrimaryKey() { + Index index = findPrimaryKey(); + if (index != null) { + return index; + } + throw DbException.get(ErrorCode.INDEX_NOT_FOUND_1, + Constants.PREFIX_PRIMARY_KEY); + } + + /** + * Validate all values in this row, convert the values if required, and + * update the sequence values if required. This call will also set the + * default values if required and set the computed column if there are any. + * + * @param session the session + * @param row the row + */ + public void validateConvertUpdateSequence(Session session, Row row) { + for (int i = 0; i < columns.length; i++) { + Value value = row.getValue(i); + Column column = columns[i]; + Value v2; + if (column.getComputed()) { + // force updating the value + value = null; + v2 = column.computeValue(session, row); + } + v2 = column.validateConvertUpdateSequence(session, value); + if (v2 != value) { + row.setValue(i, v2); + } + } + } + + private static void remove(ArrayList list, DbObject obj) { + if (list != null) { + list.remove(obj); + } + } + + /** + * Remove the given index from the list. + * + * @param index the index to remove + */ + public void removeIndex(Index index) { + ArrayList indexes = getIndexes(); + if (indexes != null) { + remove(indexes, index); + if (index.getIndexType().isPrimaryKey()) { + for (Column col : index.getColumns()) { + col.setPrimaryKey(false); + } + } + } + } + + /** + * Remove the given view from the dependent views list. + * + * @param view the view to remove + */ + public void removeDependentView(TableView view) { + dependentViews.remove(view); + } + + /** + * Remove the given view from the list. 
+ * + * @param synonym the synonym to remove + */ + public void removeSynonym(TableSynonym synonym) { + remove(synonyms, synonym); + } + + /** + * Remove the given constraint from the list. + * + * @param constraint the constraint to remove + */ + public void removeConstraint(Constraint constraint) { + remove(constraints, constraint); + } + + /** + * Remove a sequence from the table. Sequences are used as identity columns. + * + * @param sequence the sequence to remove + */ + public final void removeSequence(Sequence sequence) { + remove(sequences, sequence); + } + + /** + * Remove the given trigger from the list. + * + * @param trigger the trigger to remove + */ + public void removeTrigger(TriggerObject trigger) { + remove(triggers, trigger); + } + + /** + * Add a view to this table. + * + * @param view the view to add + */ + public void addDependentView(TableView view) { + dependentViews.add(view); + } + + /** + * Add a synonym to this table. + * + * @param synonym the synonym to add + */ + public void addSynonym(TableSynonym synonym) { + synonyms = add(synonyms, synonym); + } + + /** + * Add a constraint to the table. + * + * @param constraint the constraint to add + */ + public void addConstraint(Constraint constraint) { + if (constraints == null || !constraints.contains(constraint)) { + constraints = add(constraints, constraint); + } + } + + public ArrayList getConstraints() { + return constraints; + } + + /** + * Add a sequence to this table. + * + * @param sequence the sequence to add + */ + public void addSequence(Sequence sequence) { + sequences = add(sequences, sequence); + } + + /** + * Add a trigger to this table. 
+ * + * @param trigger the trigger to add + */ + public void addTrigger(TriggerObject trigger) { + triggers = add(triggers, trigger); + } + + private static ArrayList add(ArrayList list, T obj) { + if (list == null) { + list = New.arrayList(); + } + // self constraints are two entries in the list + list.add(obj); + return list; + } + + /** + * Fire the triggers for this table. + * + * @param session the session + * @param type the trigger type + * @param beforeAction whether 'before' triggers should be called + */ + public void fire(Session session, int type, boolean beforeAction) { + if (triggers != null) { + for (TriggerObject trigger : triggers) { + trigger.fire(session, type, beforeAction); + } + } + } + + /** + * Check whether this table has a select trigger. + * + * @return true if it has + */ + public boolean hasSelectTrigger() { + if (triggers != null) { + for (TriggerObject trigger : triggers) { + if (trigger.isSelectTrigger()) { + return true; + } + } + } + return false; + } + + /** + * Check if row based triggers or constraints are defined. + * In this case the fire after and before row methods need to be called. + * + * @return if there are any triggers or rows defined + */ + public boolean fireRow() { + return (constraints != null && !constraints.isEmpty()) || + (triggers != null && !triggers.isEmpty()); + } + + /** + * Fire all triggers that need to be called before a row is updated. 
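The private add(...) helper above keeps the trigger, constraint, and sequence lists null until the first element arrives, so plain tables without any of these pay no memory cost. A self-contained version of that lazy-list idiom:

```java
import java.util.ArrayList;

class LazyAdd {
    /** Create the list on first use, then append; callers store the result. */
    static <T> ArrayList<T> add(ArrayList<T> list, T obj) {
        if (list == null) {
            list = new ArrayList<>();
        }
        list.add(obj);
        return list;
    }
}
```

The caller must reassign the field (`triggers = add(triggers, trigger)`), which is why every add*() method above follows that shape.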
+ * + * @param session the session + * @param oldRow the old data or null for an insert + * @param newRow the new data or null for a delete + * @return true if no further action is required (for 'instead of' triggers) + */ + public boolean fireBeforeRow(Session session, Row oldRow, Row newRow) { + boolean done = fireRow(session, oldRow, newRow, true, false); + fireConstraints(session, oldRow, newRow, true); + return done; + } + + private void fireConstraints(Session session, Row oldRow, Row newRow, + boolean before) { + if (constraints != null) { + // don't use enhanced for loop to avoid creating objects + for (Constraint constraint : constraints) { + if (constraint.isBefore() == before) { + constraint.checkRow(session, this, oldRow, newRow); + } + } + } + } + + /** + * Fire all triggers that need to be called after a row is updated. + * + * @param session the session + * @param oldRow the old data or null for an insert + * @param newRow the new data or null for a delete + * @param rollback when the operation occurred within a rollback + */ + public void fireAfterRow(Session session, Row oldRow, Row newRow, + boolean rollback) { + fireRow(session, oldRow, newRow, false, rollback); + if (!rollback) { + fireConstraints(session, oldRow, newRow, false); + } + } + + private boolean fireRow(Session session, Row oldRow, Row newRow, + boolean beforeAction, boolean rollback) { + if (triggers != null) { + for (TriggerObject trigger : triggers) { + boolean done = trigger.fireRow(session, this, oldRow, newRow, beforeAction, rollback); + if (done) { + return true; + } + } + } + return false; + } + + public boolean isGlobalTemporary() { + return false; + } + + /** + * Check if this table can be truncated. + * + * @return true if it can + */ + public boolean canTruncate() { + return false; + } + + /** + * Enable or disable foreign key constraint checking for this table. 
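The private fireRow above walks the triggers in order and stops at the first one that reports it handled the row itself (an INSTEAD OF trigger), which is how fireBeforeRow can tell its caller "no further action required". A sketch of that short-circuit, with precomputed results standing in for TriggerObject.fireRow:

```java
import java.util.List;

class TriggerChain {
    /**
     * Fire "triggers" in order; stop at the first that handled the row.
     * Returns how many triggers actually fired.
     */
    static int fireRow(List<Boolean> results) {
        int fired = 0;
        for (Boolean done : results) {
            fired++;
            if (done) {
                break; // an INSTEAD OF trigger handled the row: stop the chain
            }
        }
        return fired;
    }
}
```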
+ * + * @param session the session + * @param enabled true if checking should be enabled + * @param checkExisting true if existing rows must be checked during this + * call + */ + public void setCheckForeignKeyConstraints(Session session, boolean enabled, + boolean checkExisting) { + if (enabled && checkExisting) { + if (constraints != null) { + for (Constraint c : constraints) { + c.checkExistingData(session); + } + } + } + checkForeignKeyConstraints = enabled; + } + + public boolean getCheckForeignKeyConstraints() { + return checkForeignKeyConstraints; + } + + /** + * Get the index that has the given column as the first element. + * This method returns null if no matching index is found. + * + * @param column the column + * @param needGetFirstOrLast if the returned index must be able + * to do {@link Index#canGetFirstOrLast()} + * @param needFindNext if the returned index must be able to do + * {@link Index#findNext(Session, SearchRow, SearchRow)} + * @return the index or null + */ + public Index getIndexForColumn(Column column, + boolean needGetFirstOrLast, boolean needFindNext) { + ArrayList indexes = getIndexes(); + Index result = null; + if (indexes != null) { + for (int i = 1, size = indexes.size(); i < size; i++) { + Index index = indexes.get(i); + if (needGetFirstOrLast && !index.canGetFirstOrLast()) { + continue; + } + if (needFindNext && !index.canFindNext()) { + continue; + } + // choose the minimal covering index with the needed first + // column to work consistently with execution plan from + // Optimizer + if (index.isFirstColumn(column) && (result == null || + result.getColumns().length > index.getColumns().length)) { + result = index; + } + } + } + return result; + } + + public boolean getOnCommitDrop() { + return onCommitDrop; + } + + public void setOnCommitDrop(boolean onCommitDrop) { + this.onCommitDrop = onCommitDrop; + } + + public boolean getOnCommitTruncate() { + return onCommitTruncate; + } + + public void setOnCommitTruncate(boolean 
onCommitTruncate) { + this.onCommitTruncate = onCommitTruncate; + } + + /** + * If the index is still required by a constraint, transfer the ownership to + * it. Otherwise, the index is removed. + * + * @param session the session + * @param index the index that is no longer required + */ + public void removeIndexOrTransferOwnership(Session session, Index index) { + boolean stillNeeded = false; + if (constraints != null) { + for (Constraint cons : constraints) { + if (cons.usesIndex(index)) { + cons.setIndexOwner(index); + database.updateMeta(session, cons); + stillNeeded = true; + } + } + } + if (!stillNeeded) { + database.removeSchemaObject(session, index); + } + } + + /** + * Check if a deadlock occurred. This method is called recursively. There is + * a circle if the session to be tested has already being visited. If this + * session is part of the circle (if it is the clash session), the method + * must return an empty object array. Once a deadlock has been detected, the + * methods must add the session to the list. If this session is not part of + * the circle, or if no deadlock is detected, this method returns null. + * + * @param session the session to be tested for + * @param clash set with sessions already visited, and null when starting + * verification + * @param visited set with sessions already visited, and null when starting + * verification + * @return an object array with the sessions involved in the deadlock, or + * null + */ + @SuppressWarnings("unused") + public ArrayList checkDeadlock(Session session, Session clash, + Set visited) { + return null; + } + + public boolean isPersistIndexes() { + return persistIndexes; + } + + public boolean isPersistData() { + return persistData; + } + + /** + * Compare two values with the current comparison mode. The values may be of + * different type. 
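The base-class checkDeadlock above always returns null; its contract describes a recursive walk of the wait-for graph that subclasses implement: follow "session X waits for session Y" edges, and if the walk returns to the starting (clash) session, the visited path is the deadlock cycle. A simplified iterative sketch of that contract, with a plain Map as the wait-for graph instead of table locks:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class DeadlockCheck {
    /** Returns the sessions forming the cycle, or null if there is no deadlock. */
    static List<String> findCycle(Map<String, String> waitsFor, String start) {
        List<String> path = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        String s = start;
        while (s != null && visited.add(s)) {
            path.add(s);
            s = waitsFor.get(s); // follow the "waits for" edge
        }
        if (start.equals(s)) {
            return path; // walk came back to the clash session: deadlock
        }
        // either no cycle, or a cycle that does not include the start session
        return null;
    }
}
```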
+ * + * @param a the first value + * @param b the second value + * @return 0 if both values are equal, -1 if the first value is smaller, and + * 1 otherwise + */ + public int compareTypeSafe(Value a, Value b) { + if (a == b) { + return 0; + } + int dataType = Value.getHigherOrder(a.getType(), b.getType()); + a = a.convertTo(dataType); + b = b.convertTo(dataType); + return a.compareTypeSafe(b, compareMode); + } + + public CompareMode getCompareMode() { + return compareMode; + } + + /** + * Tests if the table can be written. Usually, this depends on the + * database.checkWritingAllowed method, but some tables (eg. TableLink) + * overwrite this default behaviour. + */ + public void checkWritingAllowed() { + database.checkWritingAllowed(); + } + + private static Value getGeneratedValue(Session session, Column column, Expression expression) { + Value v; + if (expression == null) { + v = column.validateConvertUpdateSequence(session, null); + } else { + v = expression.getValue(session); + } + return column.convert(v); + } + + /** + * Get or generate a default value for the given column. + * + * @param session the session + * @param column the column + * @return the value + */ + public Value getDefaultValue(Session session, Column column) { + return getGeneratedValue(session, column, column.getDefaultExpression()); + } + + /** + * Generates on update value for the given column. 
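compareTypeSafe above compares values of possibly different types in two steps: both operands are first converted to the "higher" of the two types (Value.getHigherOrder), then compared within that common type. The same idea with int and double standing in for H2's Value hierarchy:

```java
class TypeSafeCompare {
    /** Compare two numbers after promoting both to their common higher type. */
    static int compare(Number a, Number b) {
        if (a == b) {
            return 0; // identical reference: trivially equal
        }
        // the "higher order" of (int, double) is double, so promote both
        double da = a.doubleValue();
        double db = b.doubleValue();
        return Double.compare(da, db);
    }
}
```

Without the promotion step, comparing 2 and 2.5 as integers would wrongly report equality after truncation; converting both sides first keeps the comparison symmetric.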
+ * + * @param session the session + * @param column the column + * @return the value + */ + public Value getOnUpdateValue(Session session, Column column) { + return getGeneratedValue(session, column, column.getOnUpdateExpression()); + } + + @Override + public boolean isHidden() { + return isHidden; + } + + public void setHidden(boolean hidden) { + this.isHidden = hidden; + } + + public boolean isMVStore() { + return false; + } + + public void setTableExpression(boolean tableExpression) { + this.tableExpression = tableExpression; + } + + public boolean isTableExpression() { + return tableExpression; + } +} diff --git a/modules/h2/src/main/java/org/h2/table/TableBase.java b/modules/h2/src/main/java/org/h2/table/TableBase.java new file mode 100644 index 0000000000000..4b23abb6bf60f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/TableBase.java @@ -0,0 +1,121 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.Collections; +import java.util.List; +import org.h2.command.ddl.CreateTableData; +import org.h2.engine.Database; +import org.h2.engine.DbSettings; +import org.h2.mvstore.db.MVTableEngine; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; + +/** + * The base class of a regular table, or a user defined table. + * + * @author Thomas Mueller + * @author Sergi Vladykin + */ +public abstract class TableBase extends Table { + + /** + * The table engine used (null for regular tables). 
+ */ + private final String tableEngine; + /** Provided table parameters */ + private final List tableEngineParams; + + private final boolean globalTemporary; + + public TableBase(CreateTableData data) { + super(data.schema, data.id, data.tableName, + data.persistIndexes, data.persistData); + this.tableEngine = data.tableEngine; + this.globalTemporary = data.globalTemporary; + if (data.tableEngineParams != null) { + this.tableEngineParams = data.tableEngineParams; + } else { + this.tableEngineParams = Collections.emptyList(); + } + setTemporary(data.temporary); + Column[] cols = data.columns.toArray(new Column[0]); + setColumns(cols); + } + + @Override + public String getDropSQL() { + return "DROP TABLE IF EXISTS " + getSQL() + " CASCADE"; + } + + @Override + public String getCreateSQL() { + Database db = getDatabase(); + if (db == null) { + // closed + return null; + } + StatementBuilder buff = new StatementBuilder("CREATE "); + if (isTemporary()) { + if (isGlobalTemporary()) { + buff.append("GLOBAL "); + } else { + buff.append("LOCAL "); + } + buff.append("TEMPORARY "); + } else if (isPersistIndexes()) { + buff.append("CACHED "); + } else { + buff.append("MEMORY "); + } + buff.append("TABLE "); + if (isHidden) { + buff.append("IF NOT EXISTS "); + } + buff.append(getSQL()); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + buff.append("(\n "); + for (Column column : columns) { + buff.appendExceptFirst(",\n "); + buff.append(column.getCreateSQL()); + } + buff.append("\n)"); + if (tableEngine != null) { + DbSettings s = db.getSettings(); + String d = s.defaultTableEngine; + if (d == null && s.mvStore) { + d = MVTableEngine.class.getName(); + } + if (d == null || !tableEngine.endsWith(d)) { + buff.append("\nENGINE "); + buff.append(StringUtils.quoteIdentifier(tableEngine)); + } + } + if (!tableEngineParams.isEmpty()) { + buff.append("\nWITH "); + buff.resetCount(); + for (String parameter : tableEngineParams) { + 
buff.appendExceptFirst(", "); + buff.append(StringUtils.quoteIdentifier(parameter)); + } + } + if (!isPersistIndexes() && !isPersistData()) { + buff.append("\nNOT PERSISTENT"); + } + if (isHidden) { + buff.append("\nHIDDEN"); + } + return buff.toString(); + } + + @Override + public boolean isGlobalTemporary() { + return globalTemporary; + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/TableFilter.java b/modules/h2/src/main/java/org/h2/table/TableFilter.java new file mode 100644 index 0000000000000..70920d7802405 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/TableFilter.java @@ -0,0 +1,1245 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import org.h2.api.ErrorCode; +import org.h2.command.Parser; +import org.h2.command.dml.Select; +import org.h2.engine.Right; +import org.h2.engine.Session; +import org.h2.engine.SysProperties; +import org.h2.engine.UndoLogRecord; +import org.h2.expression.Comparison; +import org.h2.expression.ConditionAndOr; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.index.Index; +import org.h2.index.IndexCondition; +import org.h2.index.IndexCursor; +import org.h2.index.IndexLookupBatch; +import org.h2.index.ViewIndex; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.result.SortOrder; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.Value; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; + +/** + * A table filter represents a table that is used in a query. There is one such + * object whenever a table (or view) is used in a query. 
For example the + * following query has 2 table filters: SELECT * FROM TEST T1, TEST T2. + */ +public class TableFilter implements ColumnResolver { + + private static final int BEFORE_FIRST = 0, FOUND = 1, AFTER_LAST = 2, + NULL_ROW = 3; + + /** + * Whether this is a direct or indirect (nested) outer join + */ + protected boolean joinOuterIndirect; + + private Session session; + + private final Table table; + private final Select select; + private String alias; + private Index index; + private final IndexHints indexHints; + private int[] masks; + private int scanCount; + private boolean evaluatable; + + /** + * Batched join support. + */ + private JoinBatch joinBatch; + private int joinFilterId = -1; + + /** + * Indicates that this filter is used in the plan. + */ + private boolean used; + + /** + * The filter used to walk through the index. + */ + private final IndexCursor cursor; + + /** + * The index conditions used for direct index lookup (start or end). + */ + private final ArrayList indexConditions = New.arrayList(); + + /** + * Additional conditions that can't be used for index lookup, but for row + * filter for this table (ID=ID, NAME LIKE '%X%') + */ + private Expression filterCondition; + + /** + * The complete join condition. + */ + private Expression joinCondition; + + private SearchRow currentSearchRow; + private Row current; + private int state; + + /** + * The joined table (if there is one). + */ + private TableFilter join; + + /** + * Whether this is an outer join. + */ + private boolean joinOuter; + + /** + * The nested joined table (if there is one). + */ + private TableFilter nestedJoin; + + private ArrayList naturalJoinColumns; + private boolean foundOne; + private Expression fullCondition; + private final int hashCode; + private final int orderInFrom; + + private HashMap derivedColumnMap; + + /** + * Create a new table filter object. 
+ * + * @param session the session + * @param table the table from where to read data + * @param alias the alias name + * @param rightsChecked true if rights are already checked + * @param select the select statement + * @param orderInFrom original order number (index) of this table filter in + * @param indexHints the index hints to be used by the query planner + */ + public TableFilter(Session session, Table table, String alias, + boolean rightsChecked, Select select, int orderInFrom, IndexHints indexHints) { + this.session = session; + this.table = table; + this.alias = alias; + this.select = select; + this.cursor = new IndexCursor(this); + if (!rightsChecked) { + session.getUser().checkRight(table, Right.SELECT); + } + hashCode = session.nextObjectId(); + this.orderInFrom = orderInFrom; + this.indexHints = indexHints; + } + + /** + * Get the order number (index) of this table filter in the "from" clause of + * the query. + * + * @return the index (0, 1, 2,...) + */ + public int getOrderInFrom() { + return orderInFrom; + } + + public IndexCursor getIndexCursor() { + return cursor; + } + + @Override + public Select getSelect() { + return select; + } + + public Table getTable() { + return table; + } + + /** + * Lock the table. This will also lock joined tables. + * + * @param s the session + * @param exclusive true if an exclusive lock is required + * @param forceLockEvenInMvcc lock even in the MVCC mode + */ + public void lock(Session s, boolean exclusive, boolean forceLockEvenInMvcc) { + table.lock(s, exclusive, forceLockEvenInMvcc); + if (join != null) { + join.lock(s, exclusive, forceLockEvenInMvcc); + } + } + + /** + * Get the best plan item (index, cost) to use use for the current join + * order. 
+ * + * @param s the session + * @param filters all joined table filters + * @param filter the current table filter index + * @param allColumnsSet the set of all columns + * @return the best plan item + */ + public PlanItem getBestPlanItem(Session s, TableFilter[] filters, int filter, + HashSet allColumnsSet) { + PlanItem item1 = null; + SortOrder sortOrder = null; + if (select != null) { + sortOrder = select.getSortOrder(); + } + if (indexConditions.isEmpty()) { + item1 = new PlanItem(); + item1.setIndex(table.getScanIndex(s, null, filters, filter, + sortOrder, allColumnsSet)); + item1.cost = item1.getIndex().getCost(s, null, filters, filter, + sortOrder, allColumnsSet); + } + int len = table.getColumns().length; + int[] masks = new int[len]; + for (IndexCondition condition : indexConditions) { + if (condition.isEvaluatable()) { + if (condition.isAlwaysFalse()) { + masks = null; + break; + } + int id = condition.getColumn().getColumnId(); + if (id >= 0) { + masks[id] |= condition.getMask(indexConditions); + } + } + } + PlanItem item = table.getBestPlanItem(s, masks, filters, filter, sortOrder, allColumnsSet); + item.setMasks(masks); + // The more index conditions, the earlier the table. 
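The mask-building loop in TableFilter.getBestPlanItem above ORs each evaluatable condition's comparison-kind bits into the slot of its column, and an always-false condition discards the whole mask array (the scan can return nothing). A simplified sketch; the bit constants are illustrative, not H2's IndexCondition values:

```java
class MaskBuild {
    // illustrative comparison-kind bits; 0 marks an always-false condition
    static final int EQUALITY = 1, RANGE = 2;

    /**
     * conditions are {columnId, maskBits} pairs; returns null when an
     * always-false condition is present.
     */
    static int[] buildMasks(int columnCount, int[][] conditions) {
        int[] masks = new int[columnCount];
        for (int[] c : conditions) {
            if (c[1] == 0) {
                return null; // always false: no index lookup can help
            }
            if (c[0] >= 0) {
                masks[c[0]] |= c[1]; // combine masks for the same column
            }
        }
        return masks;
    }
}
```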
+ // This is to ensure joins without indexes run quickly: + // x (x.a=10); y (x.b=y.b) - see issue 113 + item.cost -= item.cost * indexConditions.size() / 100 / (filter + 1); + + if (item1 != null && item1.cost < item.cost) { + item = item1; + } + + if (nestedJoin != null) { + setEvaluatable(true); + item.setNestedJoinPlan(nestedJoin.getBestPlanItem(s, filters, filter, allColumnsSet)); + // TODO optimizer: calculate cost of a join: should use separate + // expected row number and lookup cost + item.cost += item.cost * item.getNestedJoinPlan().cost; + } + if (join != null) { + setEvaluatable(true); + do { + filter++; + } while (filters[filter] != join); + item.setJoinPlan(join.getBestPlanItem(s, filters, filter, allColumnsSet)); + // TODO optimizer: calculate cost of a join: should use separate + // expected row number and lookup cost + item.cost += item.cost * item.getJoinPlan().cost; + } + return item; + } + + /** + * Set what plan item (index, cost, masks) to use. + * + * @param item the plan item + */ + public void setPlanItem(PlanItem item) { + if (item == null) { + // invalid plan, most likely because a column wasn't found + // this will result in an exception later on + return; + } + setIndex(item.getIndex()); + masks = item.getMasks(); + if (nestedJoin != null) { + if (item.getNestedJoinPlan() != null) { + nestedJoin.setPlanItem(item.getNestedJoinPlan()); + } else { + nestedJoin.setScanIndexes(); + } + } + if (join != null) { + if (item.getJoinPlan() != null) { + join.setPlanItem(item.getJoinPlan()); + } else { + join.setScanIndexes(); + } + } + } + + /** + * Set all missing indexes to scan indexes recursively. + */ + private void setScanIndexes() { + if (index == null) { + setIndex(table.getScanIndex(session)); + } + if (join != null) { + join.setScanIndexes(); + } + if (nestedJoin != null) { + nestedJoin.setScanIndexes(); + } + } + + /** + * Prepare reading rows. 
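The cost adjustment above (`item.cost -= item.cost * indexConditions.size() / 100 / (filter + 1)`) shaves roughly 1% off the cost per index condition, scaled down for later join positions, so tables with more usable conditions are moved earlier in the join order. The arithmetic in isolation:

```java
class CostDiscount {
    /** Apply the per-condition discount used when ranking join orders. */
    static double discounted(double cost, int conditionCount, int filter) {
        // cost is double, so the whole expression stays in floating point
        return cost - cost * conditionCount / 100 / (filter + 1);
    }
}
```

For example, five conditions at join position 0 turn a cost of 100 into 95, while the same conditions at position 1 only reduce it to 97.5.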
This method will remove all index conditions that + * can not be used, and optimize the conditions. + */ + public void prepare() { + // forget all unused index conditions + // the indexConditions list may be modified here + for (int i = 0; i < indexConditions.size(); i++) { + IndexCondition condition = indexConditions.get(i); + if (!condition.isAlwaysFalse()) { + Column col = condition.getColumn(); + if (col.getColumnId() >= 0) { + if (index.getColumnIndex(col) < 0) { + indexConditions.remove(i); + i--; + } + } + } + } + if (nestedJoin != null) { + if (SysProperties.CHECK && nestedJoin == this) { + DbException.throwInternalError("self join"); + } + nestedJoin.prepare(); + } + if (join != null) { + if (SysProperties.CHECK && join == this) { + DbException.throwInternalError("self join"); + } + join.prepare(); + } + if (filterCondition != null) { + filterCondition = filterCondition.optimize(session); + } + if (joinCondition != null) { + joinCondition = joinCondition.optimize(session); + } + } + + /** + * Start the query. This will reset the scan counts. + * + * @param s the session + */ + public void startQuery(Session s) { + this.session = s; + scanCount = 0; + if (nestedJoin != null) { + nestedJoin.startQuery(s); + } + if (join != null) { + join.startQuery(s); + } + } + + /** + * Reset to the current position. 
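The pruning pass in prepare() above removes entries from indexConditions while iterating by index, decrementing `i` after each removal to compensate for the left shift. The same in-place removal idiom on a plain list (names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class PruneList {
    /** Keep only the conditions whose column the chosen index covers. */
    static List<Integer> keepCovered(List<Integer> conditions, Set<Integer> covered) {
        List<Integer> list = new ArrayList<>(conditions);
        for (int i = 0; i < list.size(); i++) {
            if (!covered.contains(list.get(i))) {
                list.remove(i);
                i--; // compensate for the shift caused by remove(i)
            }
        }
        return list;
    }
}
```

Forgetting the `i--` would skip the element that slides into the removed slot, which is why the original loop carries the "the indexConditions list may be modified here" comment.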
+ */
+    public void reset() {
+        if (joinBatch != null && joinFilterId == 0) {
+            // reset join batch only on top table filter
+            joinBatch.reset(true);
+            return;
+        }
+        if (nestedJoin != null) {
+            nestedJoin.reset();
+        }
+        if (join != null) {
+            join.reset();
+        }
+        state = BEFORE_FIRST;
+        foundOne = false;
+    }
+
+    private boolean isAlwaysTopTableFilter(int filter) {
+        if (filter != 0) {
+            return false;
+        }
+        // check if we are at the top table filters all the way up
+        SubQueryInfo info = session.getSubQueryInfo();
+        while (true) {
+            if (info == null) {
+                return true;
+            }
+            if (info.getFilter() != 0) {
+                return false;
+            }
+            info = info.getUpper();
+        }
+    }
+
+    /**
+     * Attempt to initialize batched join.
+     *
+     * @param jb join batch if it is already created
+     * @param filters the table filters
+     * @param filter the filter index (0, 1,...)
+     * @return join batch if query runs over index which supports batched
+     *         lookups, {@code null} otherwise
+     */
+    public JoinBatch prepareJoinBatch(JoinBatch jb, TableFilter[] filters, int filter) {
+        assert filters[filter] == this;
+        joinBatch = null;
+        joinFilterId = -1;
+        if (getTable().isView()) {
+            session.pushSubQueryInfo(masks, filters, filter, select.getSortOrder());
+            try {
+                ((ViewIndex) index).getQuery().prepareJoinBatch();
+            } finally {
+                session.popSubQueryInfo();
+            }
+        }
+        // For the globally top table filter we don't need to create a lookup
+        // batch, because currently it will not be used (this will be shown in
+        // ViewIndex.getPlanSQL()). Probably later on it will make sense to
+        // create it to better support X IN (...) conditions, but this needs to
+        // be implemented separately. If isAlwaysTopTableFilter is false, then
+        // we are either not a top table filter, or we are a top table filter
+        // in a sub-query that is itself not top in the outer query; thus we
+        // need to enable batching here to allow the outer query to run a
+        // batched join against this sub-query.
+ IndexLookupBatch lookupBatch = null; + if (jb == null && select != null && !isAlwaysTopTableFilter(filter)) { + lookupBatch = index.createLookupBatch(filters, filter); + if (lookupBatch != null) { + jb = new JoinBatch(filter + 1, join); + } + } + if (jb != null) { + if (nestedJoin != null) { + throw DbException.throwInternalError(); + } + joinBatch = jb; + joinFilterId = filter; + if (lookupBatch == null && !isAlwaysTopTableFilter(filter)) { + // createLookupBatch will be called at most once because jb can + // be created only if lookupBatch is already not null from the + // call above. + lookupBatch = index.createLookupBatch(filters, filter); + if (lookupBatch == null) { + // the index does not support lookup batching, need to fake + // it because we are not top + lookupBatch = JoinBatch.createFakeIndexLookupBatch(this); + } + } + jb.register(this, lookupBatch); + } + return jb; + } + + public int getJoinFilterId() { + return joinFilterId; + } + + public JoinBatch getJoinBatch() { + return joinBatch; + } + + /** + * Check if there are more rows to read. 
+ * + * @return true if there are + */ + public boolean next() { + if (joinBatch != null) { + // will happen only on topTableFilter since joinBatch.next() does + // not call join.next() + return joinBatch.next(); + } + if (state == AFTER_LAST) { + return false; + } else if (state == BEFORE_FIRST) { + cursor.find(session, indexConditions); + if (!cursor.isAlwaysFalse()) { + if (nestedJoin != null) { + nestedJoin.reset(); + } + if (join != null) { + join.reset(); + } + } + } else { + // state == FOUND || NULL_ROW + // the last row was ok - try next row of the join + if (join != null && join.next()) { + return true; + } + } + while (true) { + // go to the next row + if (state == NULL_ROW) { + break; + } + if (cursor.isAlwaysFalse()) { + state = AFTER_LAST; + } else if (nestedJoin != null) { + if (state == BEFORE_FIRST) { + state = FOUND; + } + } else { + if ((++scanCount & 4095) == 0) { + checkTimeout(); + } + if (cursor.next()) { + currentSearchRow = cursor.getSearchRow(); + current = null; + state = FOUND; + } else { + state = AFTER_LAST; + } + } + if (nestedJoin != null && state == FOUND) { + if (!nestedJoin.next()) { + state = AFTER_LAST; + if (joinOuter && !foundOne) { + // possibly null row + } else { + continue; + } + } + } + // if no more rows found, try the null row (for outer joins only) + if (state == AFTER_LAST) { + if (joinOuter && !foundOne) { + setNullRow(); + } else { + break; + } + } + if (!isOk(filterCondition)) { + continue; + } + boolean joinConditionOk = isOk(joinCondition); + if (state == FOUND) { + if (joinConditionOk) { + foundOne = true; + } else { + continue; + } + } + if (join != null) { + join.reset(); + if (!join.next()) { + continue; + } + } + // check if it's ok + if (state == NULL_ROW || joinConditionOk) { + return true; + } + } + state = AFTER_LAST; + return false; + } + + /** + * Set the state of this and all nested tables to the NULL row. 
+ */ + protected void setNullRow() { + state = NULL_ROW; + current = table.getNullRow(); + currentSearchRow = current; + if (nestedJoin != null) { + nestedJoin.visit(new TableFilterVisitor() { + @Override + public void accept(TableFilter f) { + f.setNullRow(); + } + }); + } + } + + private void checkTimeout() { + session.checkCanceled(); + } + + /** + * Whether the current value of the condition is true, or there is no + * condition. + * + * @param condition the condition (null for no condition) + * @return true if yes + */ + boolean isOk(Expression condition) { + return condition == null || condition.getBooleanValue(session); + } + + /** + * Get the current row. + * + * @return the current row, or null + */ + public Row get() { + if (current == null && currentSearchRow != null) { + current = cursor.get(); + } + return current; + } + + /** + * Set the current row. + * + * @param current the current row + */ + public void set(Row current) { + // this is currently only used so that check constraints work - to set + // the current (new) row + this.current = current; + this.currentSearchRow = current; + } + + /** + * Get the table alias name. If no alias is specified, the table name is + * returned. + * + * @return the alias name + */ + @Override + public String getTableAlias() { + if (alias != null) { + return alias; + } + return table.getName(); + } + + /** + * Add an index condition. + * + * @param condition the index condition + */ + public void addIndexCondition(IndexCondition condition) { + indexConditions.add(condition); + } + + /** + * Add a filter condition. 
+ * + * @param condition the condition + * @param isJoin if this is in fact a join condition + */ + public void addFilterCondition(Expression condition, boolean isJoin) { + if (isJoin) { + if (joinCondition == null) { + joinCondition = condition; + } else { + joinCondition = new ConditionAndOr(ConditionAndOr.AND, + joinCondition, condition); + } + } else { + if (filterCondition == null) { + filterCondition = condition; + } else { + filterCondition = new ConditionAndOr(ConditionAndOr.AND, + filterCondition, condition); + } + } + } + + /** + * Add a joined table. + * + * @param filter the joined table filter + * @param outer if this is an outer join + * @param on the join condition + */ + public void addJoin(TableFilter filter, boolean outer, Expression on) { + if (on != null) { + on.mapColumns(this, 0); + TableFilterVisitor visitor = new MapColumnsVisitor(on); + visit(visitor); + filter.visit(visitor); + } + if (join == null) { + join = filter; + filter.joinOuter = outer; + if (outer) { + filter.visit(new JOIVisitor()); + } + if (on != null) { + filter.mapAndAddFilter(on); + } + } else { + join.addJoin(filter, outer, on); + } + } + + /** + * Set a nested joined table. + * + * @param filter the joined table filter + */ + public void setNestedJoin(TableFilter filter) { + nestedJoin = filter; + } + + /** + * Map the columns and add the join condition. + * + * @param on the condition + */ + public void mapAndAddFilter(Expression on) { + on.mapColumns(this, 0); + addFilterCondition(on, true); + on.createIndexConditions(session, this); + if (nestedJoin != null) { + on.mapColumns(nestedJoin, 0); + on.createIndexConditions(session, nestedJoin); + } + if (join != null) { + join.mapAndAddFilter(on); + } + } + + public TableFilter getJoin() { + return join; + } + + /** + * Whether this is an outer joined table. 
+ * + * @return true if it is + */ + public boolean isJoinOuter() { + return joinOuter; + } + + /** + * Whether this is indirectly an outer joined table (nested within an inner + * join). + * + * @return true if it is + */ + public boolean isJoinOuterIndirect() { + return joinOuterIndirect; + } + + /** + * Get the query execution plan text to use for this table filter. + * + * @param isJoin if this is a joined table + * @return the SQL statement snippet + */ + public String getPlanSQL(boolean isJoin) { + StringBuilder buff = new StringBuilder(); + if (isJoin) { + if (joinOuter) { + buff.append("LEFT OUTER JOIN "); + } else { + buff.append("INNER JOIN "); + } + } + if (nestedJoin != null) { + StringBuilder buffNested = new StringBuilder(); + TableFilter n = nestedJoin; + do { + buffNested.append(n.getPlanSQL(n != nestedJoin)); + buffNested.append('\n'); + n = n.getJoin(); + } while (n != null); + String nested = buffNested.toString(); + boolean enclose = !nested.startsWith("("); + if (enclose) { + buff.append("(\n"); + } + buff.append(StringUtils.indent(nested, 4, false)); + if (enclose) { + buff.append(')'); + } + if (isJoin) { + buff.append(" ON "); + if (joinCondition == null) { + // need to have a ON expression, + // otherwise the nesting is unclear + buff.append("1=1"); + } else { + buff.append(StringUtils.unEnclose(joinCondition.getSQL())); + } + } + return buff.toString(); + } + if (table.isView() && ((TableView) table).isRecursive()) { + buff.append(table.getName()); + } else { + buff.append(table.getSQL()); + } + if (table.isView() && ((TableView) table).isInvalid()) { + throw DbException.get(ErrorCode.VIEW_IS_INVALID_2, table.getName(), "not compiled"); + } + if (alias != null) { + buff.append(' ').append(Parser.quoteIdentifier(alias)); + } + if (indexHints != null) { + buff.append(" USE INDEX ("); + boolean first = true; + for (String index : indexHints.getAllowedIndexes()) { + if (!first) { + buff.append(", "); + } else { + first = false; + } + 
buff.append(Parser.quoteIdentifier(index)); + } + buff.append(")"); + } + if (index != null) { + buff.append('\n'); + StatementBuilder planBuff = new StatementBuilder(); + if (joinBatch != null) { + IndexLookupBatch lookupBatch = joinBatch.getLookupBatch(joinFilterId); + if (lookupBatch == null) { + if (joinFilterId != 0) { + throw DbException.throwInternalError("" + joinFilterId); + } + } else { + planBuff.append("batched:"); + String batchPlan = lookupBatch.getPlanSQL(); + planBuff.append(batchPlan); + planBuff.append(" "); + } + } + planBuff.append(index.getPlanSQL()); + if (!indexConditions.isEmpty()) { + planBuff.append(": "); + for (IndexCondition condition : indexConditions) { + planBuff.appendExceptFirst("\n AND "); + planBuff.append(condition.getSQL()); + } + } + String plan = StringUtils.quoteRemarkSQL(planBuff.toString()); + if (plan.indexOf('\n') >= 0) { + plan += "\n"; + } + buff.append(StringUtils.indent("/* " + plan + " */", 4, false)); + } + if (isJoin) { + buff.append("\n ON "); + if (joinCondition == null) { + // need to have a ON expression, otherwise the nesting is + // unclear + buff.append("1=1"); + } else { + buff.append(StringUtils.unEnclose(joinCondition.getSQL())); + } + } + if (filterCondition != null) { + buff.append('\n'); + String condition = StringUtils.unEnclose(filterCondition.getSQL()); + condition = "/* WHERE " + StringUtils.quoteRemarkSQL(condition) + "\n*/"; + buff.append(StringUtils.indent(condition, 4, false)); + } + if (scanCount > 0) { + buff.append("\n /* scanCount: ").append(scanCount).append(" */"); + } + return buff.toString(); + } + + /** + * Remove all index conditions that are not used by the current index. 
+ */ + void removeUnusableIndexConditions() { + // the indexConditions list may be modified here + for (int i = 0; i < indexConditions.size(); i++) { + IndexCondition cond = indexConditions.get(i); + if (!cond.isEvaluatable()) { + indexConditions.remove(i--); + } + } + } + + public int[] getMasks() { + return masks; + } + + public ArrayList getIndexConditions() { + return indexConditions; + } + + public Index getIndex() { + return index; + } + + public void setIndex(Index index) { + this.index = index; + cursor.setIndex(index); + } + + public void setUsed(boolean used) { + this.used = used; + } + + public boolean isUsed() { + return used; + } + + /** + * Set the session of this table filter. + * + * @param session the new session + */ + void setSession(Session session) { + this.session = session; + } + + /** + * Remove the joined table + */ + public void removeJoin() { + this.join = null; + } + + public Expression getJoinCondition() { + return joinCondition; + } + + /** + * Remove the join condition. + */ + public void removeJoinCondition() { + this.joinCondition = null; + } + + public Expression getFilterCondition() { + return filterCondition; + } + + /** + * Remove the filter condition. + */ + public void removeFilterCondition() { + this.filterCondition = null; + } + + public void setFullCondition(Expression condition) { + this.fullCondition = condition; + if (join != null) { + join.setFullCondition(condition); + } + } + + /** + * Optimize the full condition. This will add the full condition to the + * filter condition. 
+     *
+     * @param fromOuterJoin if this method was called from an outer joined table
+     */
+    void optimizeFullCondition(boolean fromOuterJoin) {
+        if (fullCondition != null) {
+            fullCondition.addFilterConditions(this, fromOuterJoin || joinOuter);
+            if (nestedJoin != null) {
+                nestedJoin.optimizeFullCondition(fromOuterJoin || joinOuter);
+            }
+            if (join != null) {
+                join.optimizeFullCondition(fromOuterJoin || joinOuter);
+            }
+        }
+    }
+
+    /**
+     * Update the filter and join conditions of this and all joined tables with
+     * the information that the given table filter and all nested filters can
+     * now return rows or not.
+     *
+     * @param filter the table filter
+     * @param b the new flag
+     */
+    public void setEvaluatable(TableFilter filter, boolean b) {
+        filter.setEvaluatable(b);
+        if (filterCondition != null) {
+            filterCondition.setEvaluatable(filter, b);
+        }
+        if (joinCondition != null) {
+            joinCondition.setEvaluatable(filter, b);
+        }
+        if (nestedJoin != null) {
+            // don't enable / disable the nested join filters
+            // if enabling a filter in a joined filter
+            if (this == filter) {
+                nestedJoin.setEvaluatable(nestedJoin, b);
+            }
+        }
+        if (join != null) {
+            join.setEvaluatable(filter, b);
+        }
+    }
+
+    public void setEvaluatable(boolean evaluatable) {
+        this.evaluatable = evaluatable;
+    }
+
+    @Override
+    public String getSchemaName() {
+        return table.getSchema().getName();
+    }
+
+    @Override
+    public Column[] getColumns() {
+        return table.getColumns();
+    }
+
+    @Override
+    public String getDerivedColumnName(Column column) {
+        HashMap<Column, String> map = derivedColumnMap;
+        return map != null ? map.get(column) : null;
+    }
+
+    /**
+     * Get the system columns that this table understands. This is used for
+     * compatibility with other databases. The columns are only returned if the
+     * current mode supports system columns.
+ * + * @return the system columns + */ + @Override + public Column[] getSystemColumns() { + if (!session.getDatabase().getMode().systemColumns) { + return null; + } + Column[] sys = new Column[3]; + sys[0] = new Column("oid", Value.INT); + sys[0].setTable(table, 0); + sys[1] = new Column("ctid", Value.STRING); + sys[1].setTable(table, 0); + sys[2] = new Column("CTID", Value.STRING); + sys[2].setTable(table, 0); + return sys; + } + + @Override + public Column getRowIdColumn() { + if (session.getDatabase().getSettings().rowId) { + return table.getRowIdColumn(); + } + return null; + } + + @Override + public Value getValue(Column column) { + if (joinBatch != null) { + return joinBatch.getValue(joinFilterId, column); + } + if (currentSearchRow == null) { + return null; + } + int columnId = column.getColumnId(); + if (columnId == -1) { + return ValueLong.get(currentSearchRow.getKey()); + } + if (current == null) { + Value v = currentSearchRow.getValue(columnId); + if (v != null) { + return v; + } + current = cursor.get(); + if (current == null) { + return ValueNull.INSTANCE; + } + } + return current.getValue(columnId); + } + + @Override + public TableFilter getTableFilter() { + return this; + } + + public void setAlias(String alias) { + this.alias = alias; + } + + /** + * Set derived column list. 
+     *
+     * @param derivedColumnNames names of derived columns
+     */
+    public void setDerivedColumns(ArrayList<String> derivedColumnNames) {
+        Column[] columns = getColumns();
+        int count = columns.length;
+        if (count != derivedColumnNames.size()) {
+            throw DbException.get(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH);
+        }
+        HashMap<Column, String> map = new HashMap<>(count);
+        for (int i = 0; i < count; i++) {
+            String alias = derivedColumnNames.get(i);
+            for (int j = 0; j < i; j++) {
+                if (alias.equals(derivedColumnNames.get(j))) {
+                    throw DbException.get(ErrorCode.DUPLICATE_COLUMN_NAME_1, alias);
+                }
+            }
+            map.put(columns[i], alias);
+        }
+        this.derivedColumnMap = map;
+    }
+
+    @Override
+    public Expression optimize(ExpressionColumn expressionColumn, Column column) {
+        return expressionColumn;
+    }
+
+    @Override
+    public String toString() {
+        return alias != null ? alias : table.toString();
+    }
+
+    /**
+     * Add a column to the natural join key column list.
+     *
+     * @param c the column to add
+     */
+    public void addNaturalJoinColumn(Column c) {
+        if (naturalJoinColumns == null) {
+            naturalJoinColumns = New.arrayList();
+        }
+        naturalJoinColumns.add(c);
+    }
+
+    /**
+     * Check if the given column is a natural join column.
+     *
+     * @param c the column to check
+     * @return true if this is a joined natural join column
+     */
+    public boolean isNaturalJoinColumn(Column c) {
+        return naturalJoinColumns != null && naturalJoinColumns.contains(c);
+    }
+
+    @Override
+    public int hashCode() {
+        return hashCode;
+    }
+
+    /**
+     * Are there any index conditions that involve IN(...).
+     *
+     * @return whether there are IN(...) comparisons
+     */
+    public boolean hasInComparisons() {
+        for (IndexCondition cond : indexConditions) {
+            int compareType = cond.getCompareType();
+            if (compareType == Comparison.IN_QUERY || compareType == Comparison.IN_LIST) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    /**
+     * Add the current row to the array, if there is a current row.
+     *
+     * @param rows the rows to lock
+     */
+    public void lockRowAdd(ArrayList<Row> rows) {
+        if (state == FOUND) {
+            rows.add(get());
+        }
+    }
+
+    /**
+     * Lock the given rows.
+     *
+     * @param forUpdateRows the rows to lock
+     */
+    public void lockRows(ArrayList<Row> forUpdateRows) {
+        for (Row row : forUpdateRows) {
+            Row newRow = row.getCopy();
+            table.removeRow(session, row);
+            session.log(table, UndoLogRecord.DELETE, row);
+            table.addRow(session, newRow);
+            session.log(table, UndoLogRecord.INSERT, newRow);
+        }
+    }
+
+    public TableFilter getNestedJoin() {
+        return nestedJoin;
+    }
+
+    /**
+     * Visit this and all joined or nested table filters.
+     *
+     * @param visitor the visitor
+     */
+    public void visit(TableFilterVisitor visitor) {
+        TableFilter f = this;
+        do {
+            visitor.accept(f);
+            TableFilter n = f.nestedJoin;
+            if (n != null) {
+                n.visit(visitor);
+            }
+            f = f.join;
+        } while (f != null);
+    }
+
+    public boolean isEvaluatable() {
+        return evaluatable;
+    }
+
+    public Session getSession() {
+        return session;
+    }
+
+    public IndexHints getIndexHints() {
+        return indexHints;
+    }
+
+    /**
+     * A visitor for table filters.
+     */
+    public interface TableFilterVisitor {
+
+        /**
+         * This method is called for each nested or joined table filter.
+         *
+         * @param f the filter
+         */
+        void accept(TableFilter f);
+    }
+
+    /**
+     * A visitor that maps columns.
+     */
+    private static final class MapColumnsVisitor implements TableFilterVisitor {
+        private final Expression on;
+
+        MapColumnsVisitor(Expression on) {
+            this.on = on;
+        }
+
+        @Override
+        public void accept(TableFilter f) {
+            on.mapColumns(f, 0);
+        }
+    }
+
+    /**
+     * A visitor that sets joinOuterIndirect to true.
+ */ + private static final class JOIVisitor implements TableFilterVisitor { + JOIVisitor() { + } + + @Override + public void accept(TableFilter f) { + f.joinOuterIndirect = true; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/TableLink.java b/modules/h2/src/main/java/org/h2/table/TableLink.java new file mode 100644 index 0000000000000..0a416549548f9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/TableLink.java @@ -0,0 +1,711 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Objects; + +import org.h2.api.ErrorCode; +import org.h2.command.Prepared; +import org.h2.engine.Session; +import org.h2.engine.UndoLogRecord; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.index.LinkedIndex; +import org.h2.jdbc.JdbcSQLException; +import org.h2.message.DbException; +import org.h2.result.Row; +import org.h2.result.RowList; +import org.h2.schema.Schema; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; + +/** + * A linked table contains connection information for a table accessible by + * JDBC. The table may be stored in a different database. 
+ */
+public class TableLink extends Table {
+
+    private static final int MAX_RETRY = 2;
+
+    private static final long ROW_COUNT_APPROXIMATION = 100_000;
+
+    private final String originalSchema;
+    private String driver, url, user, password, originalTable, qualifiedTableName;
+    private TableLinkConnection conn;
+    private HashMap<String, PreparedStatement> preparedMap = new HashMap<>();
+    private final ArrayList<Index> indexes = New.arrayList();
+    private final boolean emitUpdates;
+    private LinkedIndex linkedIndex;
+    private DbException connectException;
+    private boolean storesLowerCase;
+    private boolean storesMixedCase;
+    private boolean storesMixedCaseQuoted;
+    private boolean supportsMixedCaseIdentifiers;
+    private boolean globalTemporary;
+    private boolean readOnly;
+
+    public TableLink(Schema schema, int id, String name, String driver,
+            String url, String user, String password, String originalSchema,
+            String originalTable, boolean emitUpdates, boolean force) {
+        super(schema, id, name, false, true);
+        this.driver = driver;
+        this.url = url;
+        this.user = user;
+        this.password = password;
+        this.originalSchema = originalSchema;
+        this.originalTable = originalTable;
+        this.emitUpdates = emitUpdates;
+        try {
+            connect();
+        } catch (DbException e) {
+            if (!force) {
+                throw e;
+            }
+            Column[] cols = { };
+            setColumns(cols);
+            linkedIndex = new LinkedIndex(this, id, IndexColumn.wrap(cols),
+                    IndexType.createNonUnique(false));
+            indexes.add(linkedIndex);
+        }
+    }
+
+    private void connect() {
+        connectException = null;
+        for (int retry = 0;; retry++) {
+            try {
+                conn = database.getLinkConnection(driver, url, user, password);
+                synchronized (conn) {
+                    try {
+                        readMetaData();
+                        return;
+                    } catch (Exception e) {
+                        // could be SQLException or RuntimeException
+                        conn.close(true);
+                        conn = null;
+                        throw DbException.convert(e);
+                    }
+                }
+            } catch (DbException e) {
+                if (retry >= MAX_RETRY) {
+                    connectException = e;
+                    throw e;
+                }
+            }
+        }
+    }
+
+    private void readMetaData() throws SQLException {
+        DatabaseMetaData meta = conn.getConnection().getMetaData();
+        storesLowerCase = meta.storesLowerCaseIdentifiers();
+        storesMixedCase = meta.storesMixedCaseIdentifiers();
+        storesMixedCaseQuoted = meta.storesMixedCaseQuotedIdentifiers();
+        supportsMixedCaseIdentifiers = meta.supportsMixedCaseIdentifiers();
+        ResultSet rs = meta.getTables(null, originalSchema, originalTable, null);
+        if (rs.next() && rs.next()) {
+            throw DbException.get(ErrorCode.SCHEMA_NAME_MUST_MATCH, originalTable);
+        }
+        rs.close();
+        rs = meta.getColumns(null, originalSchema, originalTable, null);
+        int i = 0;
+        ArrayList<Column> columnList = New.arrayList();
+        HashMap<String, Column> columnMap = new HashMap<>();
+        String catalog = null, schema = null;
+        while (rs.next()) {
+            String thisCatalog = rs.getString("TABLE_CAT");
+            if (catalog == null) {
+                catalog = thisCatalog;
+            }
+            String thisSchema = rs.getString("TABLE_SCHEM");
+            if (schema == null) {
+                schema = thisSchema;
+            }
+            if (!Objects.equals(catalog, thisCatalog) ||
+                    !Objects.equals(schema, thisSchema)) {
+                // if the table exists in multiple schemas or tables,
+                // use the alternative solution
+                columnMap.clear();
+                columnList.clear();
+                break;
+            }
+            String n = rs.getString("COLUMN_NAME");
+            n = convertColumnName(n);
+            int sqlType = rs.getInt("DATA_TYPE");
+            String sqlTypeName = rs.getString("TYPE_NAME");
+            long precision = rs.getInt("COLUMN_SIZE");
+            precision = convertPrecision(sqlType, precision);
+            int scale = rs.getInt("DECIMAL_DIGITS");
+            scale = convertScale(sqlType, scale);
+            int displaySize = MathUtils.convertLongToInt(precision);
+            int type = DataType.convertSQLTypeToValueType(sqlType, sqlTypeName);
+            Column col = new Column(n, type, precision, scale, displaySize);
+            col.setTable(this, i++);
+            columnList.add(col);
+            columnMap.put(n, col);
+        }
+        rs.close();
+        if (originalTable.indexOf('.') < 0 && !StringUtils.isNullOrEmpty(schema)) {
+            qualifiedTableName = schema + "." 
+ originalTable; + } else { + qualifiedTableName = originalTable; + } + // check if the table is accessible + + try (Statement stat = conn.getConnection().createStatement()) { + rs = stat.executeQuery("SELECT * FROM " + + qualifiedTableName + " T WHERE 1=0"); + if (columnList.isEmpty()) { + // alternative solution + ResultSetMetaData rsMeta = rs.getMetaData(); + for (i = 0; i < rsMeta.getColumnCount();) { + String n = rsMeta.getColumnName(i + 1); + n = convertColumnName(n); + int sqlType = rsMeta.getColumnType(i + 1); + long precision = rsMeta.getPrecision(i + 1); + precision = convertPrecision(sqlType, precision); + int scale = rsMeta.getScale(i + 1); + scale = convertScale(sqlType, scale); + int displaySize = rsMeta.getColumnDisplaySize(i + 1); + int type = DataType.getValueTypeFromResultSet(rsMeta, i + 1); + Column col = new Column(n, type, precision, scale, displaySize); + col.setTable(this, i++); + columnList.add(col); + columnMap.put(n, col); + } + } + rs.close(); + } catch (Exception e) { + throw DbException.get(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, e, + originalTable + "(" + e.toString() + ")"); + } + Column[] cols = columnList.toArray(new Column[0]); + setColumns(cols); + int id = getId(); + linkedIndex = new LinkedIndex(this, id, IndexColumn.wrap(cols), + IndexType.createNonUnique(false)); + indexes.add(linkedIndex); + try { + rs = meta.getPrimaryKeys(null, originalSchema, originalTable); + } catch (Exception e) { + // Some ODBC bridge drivers don't support it: + // some combinations of "DataDirect SequeLink(R) for JDBC" + // http://www.datadirect.com/index.ssp + rs = null; + } + String pkName = ""; + ArrayList list; + if (rs != null && rs.next()) { + // the problem is, the rows are not sorted by KEY_SEQ + list = New.arrayList(); + do { + int idx = rs.getInt("KEY_SEQ"); + if (pkName == null) { + pkName = rs.getString("PK_NAME"); + } + while (list.size() < idx) { + list.add(null); + } + String col = rs.getString("COLUMN_NAME"); + col = 
convertColumnName(col); + Column column = columnMap.get(col); + if (idx == 0) { + // workaround for a bug in the SQLite JDBC driver + list.add(column); + } else { + list.set(idx - 1, column); + } + } while (rs.next()); + addIndex(list, IndexType.createPrimaryKey(false, false)); + rs.close(); + } + try { + rs = meta.getIndexInfo(null, originalSchema, originalTable, false, true); + } catch (Exception e) { + // Oracle throws an exception if the table is not found or is a + // SYNONYM + rs = null; + } + String indexName = null; + list = New.arrayList(); + IndexType indexType = null; + if (rs != null) { + while (rs.next()) { + if (rs.getShort("TYPE") == DatabaseMetaData.tableIndexStatistic) { + // ignore index statistics + continue; + } + String newIndex = rs.getString("INDEX_NAME"); + if (pkName.equals(newIndex)) { + continue; + } + if (indexName != null && !indexName.equals(newIndex)) { + addIndex(list, indexType); + indexName = null; + } + if (indexName == null) { + indexName = newIndex; + list.clear(); + } + boolean unique = !rs.getBoolean("NON_UNIQUE"); + indexType = unique ? 
IndexType.createUnique(false, false) : + IndexType.createNonUnique(false); + String col = rs.getString("COLUMN_NAME"); + col = convertColumnName(col); + Column column = columnMap.get(col); + list.add(column); + } + rs.close(); + } + if (indexName != null) { + addIndex(list, indexType); + } + } + + private static long convertPrecision(int sqlType, long precision) { + // workaround for an Oracle problem: + // for DATE columns, the reported precision is 7 + // for DECIMAL columns, the reported precision is 0 + switch (sqlType) { + case Types.DECIMAL: + case Types.NUMERIC: + if (precision == 0) { + precision = 65535; + } + break; + case Types.DATE: + precision = Math.max(ValueDate.PRECISION, precision); + break; + case Types.TIMESTAMP: + precision = Math.max(ValueTimestamp.MAXIMUM_PRECISION, precision); + break; + case Types.TIME: + precision = Math.max(ValueTime.MAXIMUM_PRECISION, precision); + break; + } + return precision; + } + + private static int convertScale(int sqlType, int scale) { + // workaround for an Oracle problem: + // for DECIMAL columns, the reported precision is -127 + switch (sqlType) { + case Types.DECIMAL: + case Types.NUMERIC: + if (scale < 0) { + scale = 32767; + } + break; + } + return scale; + } + + private String convertColumnName(String columnName) { + if ((storesMixedCase || storesLowerCase) && + columnName.equals(StringUtils.toLowerEnglish(columnName))) { + columnName = StringUtils.toUpperEnglish(columnName); + } else if (storesMixedCase && !supportsMixedCaseIdentifiers) { + // TeraData + columnName = StringUtils.toUpperEnglish(columnName); + } else if (storesMixedCase && storesMixedCaseQuoted) { + // MS SQL Server (identifiers are case insensitive even if quoted) + columnName = StringUtils.toUpperEnglish(columnName); + } + return columnName; + } + + private void addIndex(List list, IndexType indexType) { + // bind the index to the leading recognized columns in the index + // (null columns might come from a function-based index) + int 
firstNull = list.indexOf(null); + if (firstNull == 0) { + trace.info("Omitting linked index - no recognized columns."); + return; + } else if (firstNull > 0) { + trace.info("Unrecognized columns in linked index. " + + "Registering the index against the leading {0} " + + "recognized columns of {1} total columns.", firstNull, list.size()); + list = list.subList(0, firstNull); + } + Column[] cols = list.toArray(new Column[0]); + Index index = new LinkedIndex(this, 0, IndexColumn.wrap(cols), indexType); + indexes.add(index); + } + + @Override + public String getDropSQL() { + return "DROP TABLE IF EXISTS " + getSQL(); + } + + @Override + public String getCreateSQL() { + StringBuilder buff = new StringBuilder("CREATE FORCE "); + if (isTemporary()) { + if (globalTemporary) { + buff.append("GLOBAL "); + } else { + buff.append("LOCAL "); + } + buff.append("TEMPORARY "); + } + buff.append("LINKED TABLE ").append(getSQL()); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + buff.append('('). + append(StringUtils.quoteStringSQL(driver)). + append(", "). + append(StringUtils.quoteStringSQL(url)). + append(", "). + append(StringUtils.quoteStringSQL(user)). + append(", "). + append(StringUtils.quoteStringSQL(password)). + append(", "). + append(StringUtils.quoteStringSQL(originalTable)). 
+ append(')'); + if (emitUpdates) { + buff.append(" EMIT UPDATES"); + } + if (readOnly) { + buff.append(" READONLY"); + } + buff.append(" /*").append(JdbcSQLException.HIDE_SQL).append("*/"); + return buff.toString(); + } + + @Override + public Index addIndex(Session session, String indexName, int indexId, + IndexColumn[] cols, IndexType indexType, boolean create, + String indexComment) { + throw DbException.getUnsupportedException("LINK"); + } + + @Override + public boolean lock(Session session, boolean exclusive, boolean forceLockEvenInMvcc) { + // nothing to do + return false; + } + + @Override + public boolean isLockedExclusively() { + return false; + } + + @Override + public Index getScanIndex(Session session) { + return linkedIndex; + } + + private void checkReadOnly() { + if (readOnly) { + throw DbException.get(ErrorCode.DATABASE_IS_READ_ONLY); + } + } + + @Override + public void removeRow(Session session, Row row) { + checkReadOnly(); + getScanIndex(session).remove(session, row); + } + + @Override + public void addRow(Session session, Row row) { + checkReadOnly(); + getScanIndex(session).add(session, row); + } + + @Override + public void close(Session session) { + if (conn != null) { + try { + conn.close(false); + } finally { + conn = null; + } + } + } + + @Override + public synchronized long getRowCount(Session session) { + //The foo alias is used to support the PostgreSQL syntax + String sql = "SELECT COUNT(*) FROM " + qualifiedTableName + " as foo"; + try { + PreparedStatement prep = execute(sql, null, false); + ResultSet rs = prep.getResultSet(); + rs.next(); + long count = rs.getLong(1); + rs.close(); + reusePreparedStatement(prep, sql); + return count; + } catch (Exception e) { + throw wrapException(sql, e); + } + } + + /** + * Wrap a SQL exception that occurred while accessing a linked table. 
+ * + * @param sql the SQL statement + * @param ex the exception from the remote database + * @return the wrapped exception + */ + public static DbException wrapException(String sql, Exception ex) { + SQLException e = DbException.toSQLException(ex); + return DbException.get(ErrorCode.ERROR_ACCESSING_LINKED_TABLE_2, + e, sql, e.toString()); + } + + public String getQualifiedTable() { + return qualifiedTableName; + } + + /** + * Execute a SQL statement using the given parameters. Prepared + * statements are kept in a hash map to avoid re-creating them. + * + * @param sql the SQL statement + * @param params the parameters or null + * @param reusePrepared if the prepared statement can be re-used immediately + * @return the prepared statement, or null if it is re-used + */ + public PreparedStatement execute(String sql, ArrayList params, + boolean reusePrepared) { + if (conn == null) { + throw connectException; + } + for (int retry = 0;; retry++) { + try { + synchronized (conn) { + PreparedStatement prep = preparedMap.remove(sql); + if (prep == null) { + prep = conn.getConnection().prepareStatement(sql); + } + if (trace.isDebugEnabled()) { + StatementBuilder buff = new StatementBuilder(); + buff.append(getName()).append(":\n").append(sql); + if (params != null && !params.isEmpty()) { + buff.append(" {"); + int i = 1; + for (Value v : params) { + buff.appendExceptFirst(", "); + buff.append(i++).append(": ").append(v.getSQL()); + } + buff.append('}'); + } + buff.append(';'); + trace.debug(buff.toString()); + } + if (params != null) { + for (int i = 0, size = params.size(); i < size; i++) { + Value v = params.get(i); + v.set(prep, i + 1); + } + } + prep.execute(); + if (reusePrepared) { + reusePreparedStatement(prep, sql); + return null; + } + return prep; + } + } catch (SQLException e) { + if (retry >= MAX_RETRY) { + throw DbException.convert(e); + } + conn.close(true); + connect(); + } + } + } + + @Override + public void unlock(Session s) { + // nothing to do + } + + 
@Override + public void checkRename() { + // ok + } + + @Override + public void checkSupportAlter() { + throw DbException.getUnsupportedException("LINK"); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("LINK"); + } + + @Override + public boolean canGetRowCount() { + return true; + } + + @Override + public boolean canDrop() { + return true; + } + + @Override + public TableType getTableType() { + return TableType.TABLE_LINK; + } + + @Override + public void removeChildrenAndResources(Session session) { + super.removeChildrenAndResources(session); + close(session); + database.removeMeta(session, getId()); + driver = null; + url = user = password = originalTable = null; + preparedMap = null; + invalidate(); + } + + public boolean isOracle() { + return url.startsWith("jdbc:oracle:"); + } + + @Override + public ArrayList getIndexes() { + return indexes; + } + + @Override + public long getMaxDataModificationId() { + // data may have been modified externally + return Long.MAX_VALUE; + } + + @Override + public Index getUniqueIndex() { + for (Index idx : indexes) { + if (idx.getIndexType().isUnique()) { + return idx; + } + } + return null; + } + + @Override + public void updateRows(Prepared prepared, Session session, RowList rows) { + boolean deleteInsert; + checkReadOnly(); + if (emitUpdates) { + for (rows.reset(); rows.hasNext();) { + prepared.checkCanceled(); + Row oldRow = rows.next(); + Row newRow = rows.next(); + linkedIndex.update(oldRow, newRow); + session.log(this, UndoLogRecord.DELETE, oldRow); + session.log(this, UndoLogRecord.INSERT, newRow); + } + deleteInsert = false; + } else { + deleteInsert = true; + } + if (deleteInsert) { + super.updateRows(prepared, session, rows); + } + } + + public void setGlobalTemporary(boolean globalTemporary) { + this.globalTemporary = globalTemporary; + } + + public void setReadOnly(boolean readOnly) { + this.readOnly = readOnly; + } + + @Override + public long 
getRowCountApproximation() { + return ROW_COUNT_APPROXIMATION; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + /** + * Add this prepared statement to the list of cached statements. + * + * @param prep the prepared statement + * @param sql the SQL statement + */ + public void reusePreparedStatement(PreparedStatement prep, String sql) { + synchronized (conn) { + preparedMap.put(sql, prep); + } + } + + @Override + public boolean isDeterministic() { + return false; + } + + /** + * Linked tables don't know if they are read-only. This overrides + * the default handling. + */ + @Override + public void checkWritingAllowed() { + // only the target database can verify this + } + + /** + * Convert the values if required. Default values are not set (kept as + * null). + * + * @param session the session + * @param row the row + */ + @Override + public void validateConvertUpdateSequence(Session session, Row row) { + for (int i = 0; i < columns.length; i++) { + Value value = row.getValue(i); + if (value != null) { + // null means use the default value + Column column = columns[i]; + Value v2 = column.validateConvertUpdateSequence(session, value); + if (v2 != value) { + row.setValue(i, v2); + } + } + } + } + + /** + * Get or generate a default value for the given column. Default values are + * not set (kept as null). + * + * @param session the session + * @param column the column + * @return the value + */ + @Override + public Value getDefaultValue(Session session, Column column) { + return null; + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/TableLinkConnection.java b/modules/h2/src/main/java/org/h2/table/TableLinkConnection.java new file mode 100644 index 0000000000000..f6e0319848928 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/TableLinkConnection.java @@ -0,0 +1,145 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.sql.Connection; +import java.sql.SQLException; +import java.util.HashMap; +import java.util.Objects; +import org.h2.message.DbException; +import org.h2.util.JdbcUtils; + +/** + * A connection for a linked table. The same connection may be used for multiple + * tables, that means a connection may be shared. + */ +public class TableLinkConnection { + + /** + * The map where the link is kept. + */ + private final HashMap map; + + /** + * The connection information. + */ + private final String driver, url, user, password; + + /** + * The database connection. + */ + private Connection conn; + + /** + * How many times the connection is used. + */ + private int useCounter; + + private TableLinkConnection( + HashMap map, + String driver, String url, String user, String password) { + this.map = map; + this.driver = driver; + this.url = url; + this.user = user; + this.password = password; + } + + /** + * Open a new connection. + * + * @param map the map where the connection should be stored + * (if shared connections are enabled). 
+ * @param driver the JDBC driver class name + * @param url the database URL + * @param user the user name + * @param password the password + * @param shareLinkedConnections if connections should be shared + * @return a connection + */ + public static TableLinkConnection open( + HashMap map, + String driver, String url, String user, String password, + boolean shareLinkedConnections) { + TableLinkConnection t = new TableLinkConnection(map, driver, url, + user, password); + if (!shareLinkedConnections) { + t.open(); + return t; + } + synchronized (map) { + TableLinkConnection result = map.get(t); + if (result == null) { + t.open(); + // put the connection in the map after it has been opened, + // when we know it works + map.put(t, t); + result = t; + } + result.useCounter++; + return result; + } + } + + private void open() { + try { + conn = JdbcUtils.getConnection(driver, url, user, password); + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + public int hashCode() { + return Objects.hashCode(driver) + ^ Objects.hashCode(url) + ^ Objects.hashCode(user) + ^ Objects.hashCode(password); + } + + @Override + public boolean equals(Object o) { + if (o instanceof TableLinkConnection) { + TableLinkConnection other = (TableLinkConnection) o; + return Objects.equals(driver, other.driver) + && Objects.equals(url, other.url) + && Objects.equals(user, other.user) + && Objects.equals(password, other.password); + } + return false; + } + + /** + * Get the connection. + * This method and methods on the statement must be + * synchronized on this object. + * + * @return the connection + */ + Connection getConnection() { + return conn; + } + + /** + * Closes the connection if this is the last link to it.
+ * + * @param force if the connection needs to be closed even if it is still + * used elsewhere (for example, because the connection is broken) + */ + void close(boolean force) { + boolean actuallyClose = false; + synchronized (map) { + if (--useCounter <= 0 || force) { + actuallyClose = true; + map.remove(this); + } + } + if (actuallyClose) { + JdbcUtils.closeSilently(conn); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/TableSynonym.java b/modules/h2/src/main/java/org/h2/table/TableSynonym.java new file mode 100644 index 0000000000000..d58d19f975a61 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/TableSynonym.java @@ -0,0 +1,115 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +import org.h2.command.ddl.CreateSynonymData; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.message.Trace; +import org.h2.schema.Schema; +import org.h2.schema.SchemaObjectBase; + +/** + * Synonym for an existing table or view. All DML requests are forwarded to the backing table. + * Adding indices to a synonym or altering the table is not supported. + */ +public class TableSynonym extends SchemaObjectBase { + + private CreateSynonymData data; + + /** + * The table the synonym is created for. + */ + private Table synonymFor; + + public TableSynonym(CreateSynonymData data) { + initSchemaObjectBase(data.schema, data.id, data.synonymName, Trace.TABLE); + this.data = data; + } + + /** + * @return the table this is a synonym for + */ + public Table getSynonymFor() { + return synonymFor; + } + + /** + * Set (update) the data. 
+ * + * @param data the new data + */ + public void updateData(CreateSynonymData data) { + this.data = data; + } + + @Override + public int getType() { + return SYNONYM; + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + return synonymFor.getCreateSQLForCopy(table, quotedName); + } + + @Override + public void rename(String newName) { throw DbException.getUnsupportedException("SYNONYM"); } + + @Override + public void removeChildrenAndResources(Session session) { + synonymFor.removeSynonym(this); + database.removeMeta(session, getId()); + } + + @Override + public String getCreateSQL() { + return "CREATE SYNONYM " + getName() + " FOR " + data.synonymForSchema.getName() + "." + data.synonymFor; + } + + @Override + public String getDropSQL() { + return "DROP SYNONYM " + getName(); + } + + @Override + public void checkRename() { + throw DbException.getUnsupportedException("SYNONYM"); + } + + /** + * @return the table this synonym is for + */ + public String getSynonymForName() { + return data.synonymFor; + } + + /** + * @return the schema this synonym is for + */ + public Schema getSynonymForSchema() { + return data.synonymForSchema; + } + + /** + * @return true if this synonym does not currently point to a valid table + */ + public boolean isInvalid() { + return !synonymFor.isValid(); + } + + /** + * Update the table that this is a synonym for, to know about this synonym. + */ + public void updateSynonymFor() { + if (synonymFor != null) { + synonymFor.removeSynonym(this); + } + synonymFor = data.synonymForSchema.getTableOrView(data.session, data.synonymFor); + synonymFor.addSynonym(this); + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/TableType.java b/modules/h2/src/main/java/org/h2/table/TableType.java new file mode 100644 index 0000000000000..0a07145068930 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/TableType.java @@ -0,0 +1,51 @@ +/* + * Copyright 2004-2018 H2 Group.
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.table; + +/** + * The table types. + */ +public enum TableType { + + /** + * The table type name for linked tables. + */ + TABLE_LINK, + + /** + * The table type name for system tables. (aka. MetaTable) + */ + SYSTEM_TABLE, + + /** + * The table type name for regular data tables. + */ + TABLE, + + /** + * The table type name for views. + */ + VIEW, + + /** + * The table type name for external table engines. + */ + EXTERNAL_TABLE_ENGINE; + + @Override + public String toString() { + if (this == EXTERNAL_TABLE_ENGINE) { + return "EXTERNAL"; + } else if (this == SYSTEM_TABLE) { + return "SYSTEM TABLE"; + } else if (this == TABLE_LINK) { + return "TABLE LINK"; + } else { + return super.toString(); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/table/TableView.java b/modules/h2/src/main/java/org/h2/table/TableView.java new file mode 100644 index 0000000000000..acf17191a566e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/TableView.java @@ -0,0 +1,882 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.table; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import org.h2.api.ErrorCode; +import org.h2.command.Prepared; +import org.h2.command.ddl.CreateTableData; +import org.h2.command.dml.Query; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.DbObject; +import org.h2.engine.Session; +import org.h2.engine.User; +import org.h2.expression.Alias; +import org.h2.expression.Expression; +import org.h2.expression.ExpressionColumn; +import org.h2.expression.ExpressionVisitor; +import org.h2.expression.Parameter; +import org.h2.index.Index; +import org.h2.index.IndexType; +import org.h2.index.ViewIndex; +import org.h2.message.DbException; +import org.h2.result.ResultInterface; +import org.h2.result.Row; +import org.h2.result.SortOrder; +import org.h2.schema.Schema; +import org.h2.util.ColumnNamer; +import org.h2.util.New; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.value.Value; + +/** + * A view is a virtual table that is defined by a query. + * @author Thomas Mueller + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public class TableView extends Table { + + private static final long ROW_COUNT_APPROXIMATION = 100; + + private String querySQL; + private ArrayList
    tables; + private Column[] columnTemplates; + private Query viewQuery; + private ViewIndex index; + private boolean allowRecursive; + private DbException createException; + private long lastModificationCheck; + private long maxDataModificationId; + private User owner; + private Query topQuery; + private ResultInterface recursiveResult; + private boolean isRecursiveQueryDetected; + private boolean isTableExpression; + private boolean isPersistent; + + public TableView(Schema schema, int id, String name, String querySQL, + ArrayList params, Column[] columnTemplates, Session session, + boolean allowRecursive, boolean literalsChecked, boolean isTableExpression, boolean isPersistent) { + super(schema, id, name, false, true); + init(querySQL, params, columnTemplates, session, allowRecursive, literalsChecked, isTableExpression, + isPersistent); + } + + /** + * Try to replace the SQL statement of the view and re-compile this and all + * dependent views. + * + * @param querySQL the SQL statement + * @param newColumnTemplates the columns + * @param session the session + * @param recursive whether this is a recursive view + * @param force if errors should be ignored + * @param literalsChecked if literals have been checked + */ + public void replace(String querySQL, Column[] newColumnTemplates, Session session, + boolean recursive, boolean force, boolean literalsChecked) { + String oldQuerySQL = this.querySQL; + Column[] oldColumnTemplates = this.columnTemplates; + boolean oldRecursive = this.allowRecursive; + init(querySQL, null, + newColumnTemplates == null ? 
this.columnTemplates + : newColumnTemplates, + session, recursive, literalsChecked, isTableExpression, isPersistent); + DbException e = recompile(session, force, true); + if (e != null) { + init(oldQuerySQL, null, oldColumnTemplates, session, oldRecursive, + literalsChecked, isTableExpression, isPersistent); + recompile(session, true, false); + throw e; + } + } + + private synchronized void init(String querySQL, ArrayList params, + Column[] columnTemplates, Session session, boolean allowRecursive, boolean literalsChecked, + boolean isTableExpression, boolean isPersistent) { + this.querySQL = querySQL; + this.columnTemplates = columnTemplates; + this.allowRecursive = allowRecursive; + this.isRecursiveQueryDetected = false; + this.isTableExpression = isTableExpression; + this.isPersistent = isPersistent; + index = new ViewIndex(this, querySQL, params, allowRecursive); + initColumnsAndTables(session, literalsChecked); + } + + private Query compileViewQuery(Session session, String sql, boolean literalsChecked, String viewName) { + Prepared p; + session.setParsingCreateView(true, viewName); + try { + p = session.prepare(sql, false, literalsChecked); + } finally { + session.setParsingCreateView(false, viewName); + } + if (!(p instanceof Query)) { + throw DbException.getSyntaxError(sql, 0); + } + Query q = (Query) p; + // only potentially recursive cte queries need to be non-lazy + if (isTableExpression && allowRecursive) { + q.setNeverLazy(true); + } + return q; + } + + /** + * Re-compile the view query and all views that depend on this object. 
+ * + * @param session the session + * @param force if exceptions should be ignored + * @param clearIndexCache if we need to clear view index cache + * @return the exception if re-compiling this or any dependent view failed + * (only when force is disabled) + */ + public synchronized DbException recompile(Session session, boolean force, + boolean clearIndexCache) { + try { + compileViewQuery(session, querySQL, false, getName()); + } catch (DbException e) { + if (!force) { + return e; + } + } + ArrayList dependentViews = new ArrayList<>(getDependentViews()); + initColumnsAndTables(session, false); + for (TableView v : dependentViews) { + DbException e = v.recompile(session, force, false); + if (e != null && !force) { + return e; + } + } + if (clearIndexCache) { + clearIndexCaches(database); + } + return force ? null : createException; + } + + private void initColumnsAndTables(Session session, boolean literalsChecked) { + Column[] cols; + removeCurrentViewFromOtherTables(); + setTableExpression(isTableExpression); + try { + Query compiledQuery = compileViewQuery(session, querySQL, literalsChecked, getName()); + this.querySQL = compiledQuery.getPlanSQL(); + tables = new ArrayList<>(compiledQuery.getTables()); + ArrayList expressions = compiledQuery.getExpressions(); + ArrayList list = New.arrayList(); + ColumnNamer columnNamer = new ColumnNamer(session); + for (int i = 0, count = compiledQuery.getColumnCount(); i < count; i++) { + Expression expr = expressions.get(i); + String name = null; + int type = Value.UNKNOWN; + if (columnTemplates != null && columnTemplates.length > i) { + name = columnTemplates[i].getName(); + type = columnTemplates[i].getType(); + } + if (name == null) { + name = expr.getAlias(); + } + name = columnNamer.getColumnName(expr, i, name); + if (type == Value.UNKNOWN) { + type = expr.getType(); + } + long precision = expr.getPrecision(); + int scale = expr.getScale(); + int displaySize = expr.getDisplaySize(); + Column col = new Column(name, type, 
precision, scale, displaySize); + col.setTable(this, i); + // Fetch check constraint from view column source + ExpressionColumn fromColumn = null; + if (expr instanceof ExpressionColumn) { + fromColumn = (ExpressionColumn) expr; + } else if (expr instanceof Alias) { + Expression aliasExpr = expr.getNonAliasExpression(); + if (aliasExpr instanceof ExpressionColumn) { + fromColumn = (ExpressionColumn) aliasExpr; + } + } + if (fromColumn != null) { + Expression checkExpression = fromColumn.getColumn() + .getCheckConstraint(session, name); + if (checkExpression != null) { + col.addCheckConstraint(session, checkExpression); + } + } + list.add(col); + } + cols = list.toArray(new Column[0]); + createException = null; + viewQuery = compiledQuery; + } catch (DbException e) { + e.addSQL(getCreateSQL()); + createException = e; + // If it can't be compiled, then it's a 'zero column table' + // this avoids problems when creating the view when opening the + // database. + // If it can not be compiled - it could also be a recursive common + // table expression query. + if (isRecursiveQueryExceptionDetected(createException)) { + this.isRecursiveQueryDetected = true; + } + tables = New.arrayList(); + cols = new Column[0]; + if (allowRecursive && columnTemplates != null) { + cols = new Column[columnTemplates.length]; + for (int i = 0; i < columnTemplates.length; i++) { + cols[i] = columnTemplates[i].getClone(); + } + index.setRecursive(true); + createException = null; + } + } + setColumns(cols); + if (getId() != 0) { + addDependentViewToTables(); + } + } + + @Override + public boolean isView() { + return true; + } + + /** + * Check if this view is currently invalid. 
+ * + * @return true if it is + */ + public boolean isInvalid() { + return createException != null; + } + + @Override + public PlanItem getBestPlanItem(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + final CacheKey cacheKey = new CacheKey(masks, this); + Map indexCache = session.getViewIndexCache(topQuery != null); + ViewIndex i = indexCache.get(cacheKey); + if (i == null || i.isExpired()) { + i = new ViewIndex(this, index, session, masks, filters, filter, sortOrder); + indexCache.put(cacheKey, i); + } + PlanItem item = new PlanItem(); + item.cost = i.getCost(session, masks, filters, filter, sortOrder, allColumnsSet); + item.setIndex(i); + return item; + } + + @Override + public boolean isQueryComparable() { + if (!super.isQueryComparable()) { + return false; + } + for (Table t : tables) { + if (!t.isQueryComparable()) { + return false; + } + } + if (topQuery != null && + !topQuery.isEverything(ExpressionVisitor.QUERY_COMPARABLE_VISITOR)) { + return false; + } + return true; + } + + public Query getTopQuery() { + return topQuery; + } + + @Override + public String getDropSQL() { + return "DROP VIEW IF EXISTS " + getSQL() + " CASCADE"; + } + + @Override + public String getCreateSQLForCopy(Table table, String quotedName) { + return getCreateSQL(false, true, quotedName); + } + + + @Override + public String getCreateSQL() { + return getCreateSQL(false, true); + } + + /** + * Generate "CREATE" SQL statement for the view. 
+ * + * @param orReplace if true, then include the OR REPLACE clause + * @param force if true, then include the FORCE clause + * @return the SQL statement + */ + public String getCreateSQL(boolean orReplace, boolean force) { + return getCreateSQL(orReplace, force, getSQL()); + } + + private String getCreateSQL(boolean orReplace, boolean force, + String quotedName) { + StatementBuilder buff = new StatementBuilder("CREATE "); + if (orReplace) { + buff.append("OR REPLACE "); + } + if (force) { + buff.append("FORCE "); + } + buff.append("VIEW "); + if (isTableExpression) { + buff.append("TABLE_EXPRESSION "); + } + buff.append(quotedName); + if (comment != null) { + buff.append(" COMMENT ").append(StringUtils.quoteStringSQL(comment)); + } + if (columns != null && columns.length > 0) { + buff.append('('); + for (Column c : columns) { + buff.appendExceptFirst(", "); + buff.append(c.getSQL()); + } + buff.append(')'); + } else if (columnTemplates != null) { + buff.append('('); + for (Column c : columnTemplates) { + buff.appendExceptFirst(", "); + buff.append(c.getName()); + } + buff.append(')'); + } + return buff.append(" AS\n").append(querySQL).toString(); + } + + @Override + public void checkRename() { + // ok + } + + @Override + public boolean lock(Session session, boolean exclusive, boolean forceLockEvenInMvcc) { + // exclusive lock means: the view will be dropped + return false; + } + + @Override + public void close(Session session) { + // nothing to do + } + + @Override + public void unlock(Session s) { + // nothing to do + } + + @Override + public boolean isLockedExclusively() { + return false; + } + + @Override + public Index addIndex(Session session, String indexName, int indexId, + IndexColumn[] cols, IndexType indexType, boolean create, + String indexComment) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public void removeRow(Session session, Row row) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public 
void addRow(Session session, Row row) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public void checkSupportAlter() { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public void truncate(Session session) { + throw DbException.getUnsupportedException("VIEW"); + } + + @Override + public long getRowCount(Session session) { + throw DbException.throwInternalError(toString()); + } + + @Override + public boolean canGetRowCount() { + // TODO view: could get the row count, but not that easy + return false; + } + + @Override + public boolean canDrop() { + return true; + } + + @Override + public TableType getTableType() { + return TableType.VIEW; + } + + @Override + public void removeChildrenAndResources(Session session) { + removeCurrentViewFromOtherTables(); + super.removeChildrenAndResources(session); + database.removeMeta(session, getId()); + querySQL = null; + index = null; + clearIndexCaches(database); + invalidate(); + } + + /** + * Clear the cached indexes for all sessions. 
+ * + * @param database the database + */ + public static void clearIndexCaches(Database database) { + for (Session s : database.getSessions(true)) { + s.clearViewIndexCache(); + } + } + + @Override + public String getSQL() { + if (isTemporary() && querySQL != null) { + return "(\n" + StringUtils.indent(querySQL) + ")"; + } + return super.getSQL(); + } + + public String getQuery() { + return querySQL; + } + + @Override + public Index getScanIndex(Session session) { + return getBestPlanItem(session, null, null, -1, null, null).getIndex(); + } + + @Override + public Index getScanIndex(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + if (createException != null) { + String msg = createException.getMessage(); + throw DbException.get(ErrorCode.VIEW_IS_INVALID_2, + createException, getSQL(), msg); + } + PlanItem item = getBestPlanItem(session, masks, filters, filter, sortOrder, allColumnsSet); + return item.getIndex(); + } + + @Override + public boolean canReference() { + return false; + } + + @Override + public ArrayList getIndexes() { + return null; + } + + @Override + public long getMaxDataModificationId() { + if (createException != null) { + return Long.MAX_VALUE; + } + if (viewQuery == null) { + return Long.MAX_VALUE; + } + // if nothing was modified in the database since the last check, and the + // last modification id is known, then we don't need to check again + // this speeds up nested views + long dbMod = database.getModificationDataId(); + if (dbMod > lastModificationCheck && maxDataModificationId <= dbMod) { + maxDataModificationId = viewQuery.getMaxDataModificationId(); + lastModificationCheck = dbMod; + } + return maxDataModificationId; + } + + @Override + public Index getUniqueIndex() { + return null; + } + + private void removeCurrentViewFromOtherTables() { + if (tables != null) { + for (Table t : tables) { + t.removeDependentView(this); + } + tables.clear(); + } + } + + private void
addDependentViewToTables() { + for (Table t : tables) { + t.addDependentView(this); + } + } + + private void setOwner(User owner) { + this.owner = owner; + } + + public User getOwner() { + return owner; + } + + /** + * Create a temporary view out of the given query. + * + * @param session the session + * @param owner the owner of the query + * @param name the view name + * @param query the query + * @param topQuery the top level query + * @return the view table + */ + public static TableView createTempView(Session session, User owner, + String name, Query query, Query topQuery) { + Schema mainSchema = session.getDatabase().getSchema(Constants.SCHEMA_MAIN); + String querySQL = query.getPlanSQL(); + TableView v = new TableView(mainSchema, 0, name, + querySQL, query.getParameters(), null /* column templates */, session, + false/* allow recursive */, true /* literals have already been checked when parsing original query */, + false /* is table expression */, false/* is persistent*/); + if (v.createException != null) { + throw v.createException; + } + v.setTopQuery(topQuery); + v.setOwner(owner); + v.setTemporary(true); + return v; + } + + private void setTopQuery(Query topQuery) { + this.topQuery = topQuery; + } + + @Override + public long getRowCountApproximation() { + return ROW_COUNT_APPROXIMATION; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + /** + * Get the index of the first parameter. + * + * @param additionalParameters additional parameters + * @return the index of the first parameter + */ + public int getParameterOffset(ArrayList additionalParameters) { + int result = topQuery == null ? 
-1 : getMaxParameterIndex(topQuery.getParameters()); + if (additionalParameters != null) { + result = Math.max(result, getMaxParameterIndex(additionalParameters)); + } + return result + 1; + } + + private static int getMaxParameterIndex(ArrayList parameters) { + int result = -1; + for (Parameter p : parameters) { + result = Math.max(result, p.getIndex()); + } + return result; + } + + public boolean isRecursive() { + return allowRecursive; + } + + @Override + public boolean isDeterministic() { + if (allowRecursive || viewQuery == null) { + return false; + } + return viewQuery.isEverything(ExpressionVisitor.DETERMINISTIC_VISITOR); + } + + public void setRecursiveResult(ResultInterface value) { + if (recursiveResult != null) { + recursiveResult.close(); + } + this.recursiveResult = value; + } + + public ResultInterface getRecursiveResult() { + return recursiveResult; + } + + @Override + public void addDependencies(HashSet dependencies) { + super.addDependencies(dependencies); + if (tables != null) { + for (Table t : tables) { + if (TableType.VIEW != t.getTableType()) { + t.addDependencies(dependencies); + } + } + } + } + + /** + * The key of the index cache for views. + */ + private static final class CacheKey { + + private final int[] masks; + private final TableView view; + + CacheKey(int[] masks, TableView view) { + this.masks = masks; + this.view = view; + } + + @Override + public int hashCode() { + final int prime = 31; + int result = 1; + result = prime * result + Arrays.hashCode(masks); + result = prime * result + view.hashCode(); + return result; + } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null) { + return false; + } + if (getClass() != obj.getClass()) { + return false; + } + CacheKey other = (CacheKey) obj; + if (view != other.view) { + return false; + } + return Arrays.equals(masks, other.masks); + } + } + + /** + * Was query recursion detected during compiling. 
+ * + * @return true if yes + */ + public boolean isRecursiveQueryDetected() { + return isRecursiveQueryDetected; + } + + /** + * Does exception indicate query recursion? + */ + private boolean isRecursiveQueryExceptionDetected(DbException exception) { + if (exception == null) { + return false; + } + if (exception.getErrorCode() != ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1) { + return false; + } + return exception.getMessage().contains("\"" + this.getName() + "\""); + } + + public List
    getTables() { + return tables; + } + + public boolean isPersistent() { + return isPersistent; + } + + /** + * Create a view. + * + * @param schema the schema + * @param id the view id + * @param name the view name + * @param querySQL the query + * @param parameters the parameters + * @param columnTemplates the columns + * @param session the session + * @param literalsChecked whether literals in the query are checked + * @param isTableExpression if this is a table expression + * @param isPersistent whether the view is persisted + * @param db the database + * @return the view + */ + public static TableView createTableViewMaybeRecursive(Schema schema, int id, String name, String querySQL, + ArrayList parameters, Column[] columnTemplates, Session session, + boolean literalsChecked, boolean isTableExpression, boolean isPersistent, Database db) { + + + Table recursiveTable = TableView.createShadowTableForRecursiveTableExpression(isPersistent, session, name, + schema, Arrays.asList(columnTemplates), db); + + List columnTemplateList; + String[] querySQLOutput = {null}; + ArrayList columnNames = new ArrayList<>(); + for (Column columnTemplate: columnTemplates) { + columnNames.add(columnTemplate.getName()); + } + + try { + Prepared withQuery = session.prepare(querySQL, false, false); + if (isPersistent) { + withQuery.setSession(session); + } + columnTemplateList = TableView.createQueryColumnTemplateList(columnNames.toArray(new String[1]), + (Query) withQuery, querySQLOutput); + + } finally { + TableView.destroyShadowTableForRecursiveExpression(isPersistent, session, recursiveTable); + } + + // build with recursion turned on + TableView view = new TableView(schema, id, name, querySQL, + parameters, columnTemplateList.toArray(columnTemplates), session, + true/* try recursive */, literalsChecked, isTableExpression, isPersistent); + + // is recursion really detected ? 
if not - recreate it without recursion flag + // and no recursive index + if (!view.isRecursiveQueryDetected()) { + if (isPersistent) { + db.addSchemaObject(session, view); + view.lock(session, true, true); + session.getDatabase().removeSchemaObject(session, view); + + // during database startup - this method does not normally get called - and it + // needs to be to correctly un-register the table which the table expression + // uses... + view.removeChildrenAndResources(session); + } else { + session.removeLocalTempTable(view); + } + view = new TableView(schema, id, name, querySQL, parameters, + columnTemplates, session, + false/* detected not recursive */, literalsChecked, isTableExpression, isPersistent); + } + + return view; + } + + + /** + * Creates a list of column templates from a query (usually from WITH query, + * but could be any query) + * + * @param cols - an optional list of column names (can be specified by WITH + * clause overriding usual select names) + * @param theQuery - the query object we want the column list for + * @param querySQLOutput - array of length 1 to receive extra 'output' field + * in addition to return value - containing the SQL query of the + * Query object + * @return a list of column object returned by withQuery + */ + public static List createQueryColumnTemplateList(String[] cols, + Query theQuery, String[] querySQLOutput) { + List columnTemplateList = new ArrayList<>(); + theQuery.prepare(); + // String array of length 1 is to receive extra 'output' field in addition to + // return value + querySQLOutput[0] = StringUtils.cache(theQuery.getPlanSQL()); + ColumnNamer columnNamer = new ColumnNamer(theQuery.getSession()); + ArrayList withExpressions = theQuery.getExpressions(); + for (int i = 0; i < withExpressions.size(); ++i) { + Expression columnExp = withExpressions.get(i); + // use the passed in column name if supplied, otherwise use alias + // (if found) otherwise use column name derived from column + // expression + String 
columnName = columnNamer.getColumnName(columnExp, i, cols); + columnTemplateList.add(new Column(columnName, + columnExp.getType())); + + } + return columnTemplateList; + } + + /** + * Create a table for a recursive query. + * + * @param isPersistent whether the table is persisted + * @param targetSession the session + * @param cteViewName the name + * @param schema the schema + * @param columns the columns + * @param db the database + * @return the table + */ + public static Table createShadowTableForRecursiveTableExpression(boolean isPersistent, Session targetSession, + String cteViewName, Schema schema, List columns, Database db) { + + // create table data object + CreateTableData recursiveTableData = new CreateTableData(); + recursiveTableData.id = db.allocateObjectId(); + recursiveTableData.columns = new ArrayList<>(columns); + recursiveTableData.tableName = cteViewName; + recursiveTableData.temporary = !isPersistent; + recursiveTableData.persistData = true; + recursiveTableData.persistIndexes = isPersistent; + recursiveTableData.create = true; + recursiveTableData.session = targetSession; + + // this gets a meta table lock that is not released + Table recursiveTable = schema.createTable(recursiveTableData); + + if (isPersistent) { + // this unlock is to prevent lock leak from schema.createTable() + db.unlockMeta(targetSession); + synchronized (targetSession) { + db.addSchemaObject(targetSession, recursiveTable); + } + } else { + targetSession.addLocalTempTable(recursiveTable); + } + return recursiveTable; + } + + /** + * Remove a table for a recursive query. 
+ * + * @param isPersistent whether the table is persisted + * @param targetSession the session + * @param recursiveTable the table + */ + public static void destroyShadowTableForRecursiveExpression(boolean isPersistent, Session targetSession, + Table recursiveTable) { + if (recursiveTable != null) { + if (isPersistent) { + recursiveTable.lock(targetSession, true, true); + targetSession.getDatabase().removeSchemaObject(targetSession, recursiveTable); + + } else { + targetSession.removeLocalTempTable(recursiveTable); + } + + // both removeSchemaObject and removeLocalTempTable hold meta locks - release them here + targetSession.getDatabase().unlockMeta(targetSession); + } + } +} diff --git a/modules/h2/src/main/java/org/h2/table/package.html b/modules/h2/src/main/java/org/h2/table/package.html new file mode 100644 index 0000000000000..dc8b75b062a5e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/table/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

+ +Classes related to a table and table metadata. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/tools/Backup.java b/modules/h2/src/main/java/org/h2/tools/Backup.java new file mode 100644 index 0000000000000..d0fd4b7648bda --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Backup.java @@ -0,0 +1,179 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.FileNotFoundException; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.sql.SQLException; +import java.util.List; +import java.util.zip.ZipEntry; +import java.util.zip.ZipOutputStream; +import org.h2.command.dml.BackupCommand; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.store.FileLister; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.Tool; + +/** + * Creates a backup of a database. + *
    + * This tool copies all database files. The database must be closed before using + * this tool. To create a backup while the database is in use, run the BACKUP + * SQL statement. In an emergency, for example if the application is not + * responding, creating a backup using the Backup tool is possible by using the + * quiet mode. However, if the database is changed while the backup is running + * in quiet mode, the backup could be corrupt. + * + * @h2.resource + */ +public class Backup extends Tool { + + /** + * Options are case sensitive. Supported options are: + *
    + * + * + * + * + * + * + * + * + * + * + *
[-help] or [-?] - Print the list of options
[-file <filename>] - The target file name (default: backup.zip)
[-dir <dir>] - The source directory (default: .)
[-db <database>] - Source database; not required if there is only one
[-quiet] - Do not print progress information
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new Backup().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + String zipFileName = "backup.zip"; + String dir = "."; + String db = null; + boolean quiet = false; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-dir")) { + dir = args[++i]; + } else if (arg.equals("-db")) { + db = args[++i]; + } else if (arg.equals("-quiet")) { + quiet = true; + } else if (arg.equals("-file")) { + zipFileName = args[++i]; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + try { + process(zipFileName, dir, db, quiet); + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } + + /** + * Backs up database files. + * + * @param zipFileName the name of the target backup file (including path) + * @param directory the source directory name + * @param db the source database name (null if there is only one database, + * and and empty string to backup all files in this directory) + * @param quiet don't print progress information + */ + public static void execute(String zipFileName, String directory, String db, + boolean quiet) throws SQLException { + try { + new Backup().process(zipFileName, directory, db, quiet); + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } + + private void process(String zipFileName, String directory, String db, + boolean quiet) throws SQLException { + List list; + boolean allFiles = db != null && db.length() == 0; + if (allFiles) { + list = FileUtils.newDirectoryStream(directory); + } else { + list = FileLister.getDatabaseFiles(directory, db, true); + } + if (list.isEmpty()) { + if (!quiet) { + printNoDatabaseFilesFound(directory, db); + } + return; + } + if (!quiet) { + FileLister.tryUnlockDatabase(list, 
"backup"); + } + zipFileName = FileUtils.toRealPath(zipFileName); + FileUtils.delete(zipFileName); + OutputStream fileOut = null; + try { + fileOut = FileUtils.newOutputStream(zipFileName, false); + try (ZipOutputStream zipOut = new ZipOutputStream(fileOut)) { + String base = ""; + for (String fileName : list) { + if (allFiles || + fileName.endsWith(Constants.SUFFIX_PAGE_FILE) || + fileName.endsWith(Constants.SUFFIX_MV_FILE)) { + base = FileUtils.getParent(fileName); + break; + } + } + for (String fileName : list) { + String f = FileUtils.toRealPath(fileName); + if (!f.startsWith(base)) { + DbException.throwInternalError(f + " does not start with " + base); + } + if (f.endsWith(zipFileName)) { + continue; + } + if (FileUtils.isDirectory(fileName)) { + continue; + } + f = f.substring(base.length()); + f = BackupCommand.correctFileName(f); + ZipEntry entry = new ZipEntry(f); + zipOut.putNextEntry(entry); + InputStream in = null; + try { + in = FileUtils.newInputStream(fileName); + IOUtils.copyAndCloseInput(in, zipOut); + } catch (FileNotFoundException e) { + // the file could have been deleted in the meantime + // ignore this (in this case an empty file is created) + } finally { + IOUtils.closeSilently(in); + } + zipOut.closeEntry(); + if (!quiet) { + out.println("Processed: " + fileName); + } + } + } + } catch (IOException e) { + throw DbException.convertIOException(e, zipFileName); + } finally { + IOUtils.closeSilently(fileOut); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/ChangeFileEncryption.java b/modules/h2/src/main/java/org/h2/tools/ChangeFileEncryption.java new file mode 100644 index 0000000000000..c17032d912166 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/ChangeFileEncryption.java @@ -0,0 +1,293 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.channels.FileChannel; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.concurrent.TimeUnit; + +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.security.SHA256; +import org.h2.store.FileLister; +import org.h2.store.FileStore; +import org.h2.store.fs.FileChannelInputStream; +import org.h2.store.fs.FileChannelOutputStream; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathEncrypt; +import org.h2.store.fs.FileUtils; +import org.h2.util.Tool; + +/** + * Allows changing the database file encryption password or algorithm. + *
    + * This tool can not be used to change a password of a user. + * The database must be closed before using this tool. + * @h2.resource + */ +public class ChangeFileEncryption extends Tool { + + private String directory; + private String cipherType; + private byte[] decrypt; + private byte[] encrypt; + private byte[] decryptKey; + private byte[] encryptKey; + + /** + * Options are case sensitive. Supported options are: + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + *
[-help] or [-?] - Print the list of options
[-cipher type] - The encryption type (AES)
[-dir <dir>] - The database directory (default: .)
[-db <database>] - Database name (all databases if not set)
[-decrypt <pwd>] - The decryption password (if not set: not yet encrypted)
[-encrypt <pwd>] - The encryption password (if not set: do not encrypt)
[-quiet] - Do not print progress information
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new ChangeFileEncryption().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + String dir = "."; + String cipher = null; + char[] decryptPassword = null; + char[] encryptPassword = null; + String db = null; + boolean quiet = false; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-dir")) { + dir = args[++i]; + } else if (arg.equals("-cipher")) { + cipher = args[++i]; + } else if (arg.equals("-db")) { + db = args[++i]; + } else if (arg.equals("-decrypt")) { + decryptPassword = args[++i].toCharArray(); + } else if (arg.equals("-encrypt")) { + encryptPassword = args[++i].toCharArray(); + } else if (arg.equals("-quiet")) { + quiet = true; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + if ((encryptPassword == null && decryptPassword == null) || cipher == null) { + showUsage(); + throw new SQLException( + "Encryption or decryption password not set, or cipher not set"); + } + try { + process(dir, db, cipher, decryptPassword, encryptPassword, quiet); + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } + + /** + * Get the file encryption key for a given password. The password must be + * supplied as char arrays and is cleaned in this method. + * + * @param password the password as a char array + * @return the encryption key + */ + private static byte[] getFileEncryptionKey(char[] password) { + if (password == null) { + return null; + } + return SHA256.getKeyPasswordHash("file", password); + } + + /** + * Changes the password for a database. The passwords must be supplied as + * char arrays and are cleaned in this method. The database must be closed + * before calling this method. + * + * @param dir the directory (. 
for the current directory) + * @param db the database name (null for all databases) + * @param cipher the cipher (AES) + * @param decryptPassword the decryption password as a char array + * @param encryptPassword the encryption password as a char array + * @param quiet don't print progress information + */ + public static void execute(String dir, String db, String cipher, + char[] decryptPassword, char[] encryptPassword, boolean quiet) + throws SQLException { + try { + new ChangeFileEncryption().process(dir, db, cipher, + decryptPassword, encryptPassword, quiet); + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } + + private void process(String dir, String db, String cipher, + char[] decryptPassword, char[] encryptPassword, boolean quiet) + throws SQLException { + dir = FileLister.getDir(dir); + ChangeFileEncryption change = new ChangeFileEncryption(); + if (encryptPassword != null) { + for (char c : encryptPassword) { + if (c == ' ') { + throw new SQLException("The file password may not contain spaces"); + } + } + change.encryptKey = FilePathEncrypt.getPasswordBytes(encryptPassword); + change.encrypt = getFileEncryptionKey(encryptPassword); + } + if (decryptPassword != null) { + change.decryptKey = FilePathEncrypt.getPasswordBytes(decryptPassword); + change.decrypt = getFileEncryptionKey(decryptPassword); + } + change.out = out; + change.directory = dir; + change.cipherType = cipher; + + ArrayList files = FileLister.getDatabaseFiles(dir, db, true); + FileLister.tryUnlockDatabase(files, "encryption"); + files = FileLister.getDatabaseFiles(dir, db, false); + if (files.isEmpty() && !quiet) { + printNoDatabaseFilesFound(dir, db); + } + // first, test only if the file can be renamed + // (to find errors with locked files early) + for (String fileName : files) { + String temp = dir + "/temp.db"; + FileUtils.delete(temp); + FileUtils.move(fileName, temp); + FileUtils.move(temp, fileName); + } + // if this worked, the operation will (hopefully) be 
successful + // TODO changeFileEncryption: this is a workaround! + // make the operation atomic (all files or none) + for (String fileName : files) { + // don't process a lob directory, just the files in the directory. + if (!FileUtils.isDirectory(fileName)) { + change.process(fileName, quiet); + } + } + } + + private void process(String fileName, boolean quiet) { + if (fileName.endsWith(Constants.SUFFIX_MV_FILE)) { + try { + copy(fileName, quiet); + } catch (IOException e) { + throw DbException.convertIOException(e, + "Error encrypting / decrypting file " + fileName); + } + return; + } + FileStore in; + if (decrypt == null) { + in = FileStore.open(null, fileName, "r"); + } else { + in = FileStore.open(null, fileName, "r", cipherType, decrypt); + } + try { + in.init(); + copy(fileName, in, encrypt, quiet); + } finally { + in.closeSilently(); + } + } + + private void copy(String fileName, boolean quiet) throws IOException { + if (FileUtils.isDirectory(fileName)) { + return; + } + String temp = directory + "/temp.db"; + try (FileChannel fileIn = getFileChannel(fileName, "r", decryptKey)){ + try(InputStream inStream = new FileChannelInputStream(fileIn, true)) { + FileUtils.delete(temp); + try (OutputStream outStream = new FileChannelOutputStream(getFileChannel(temp, "rw", encryptKey), + true)) { + byte[] buffer = new byte[4 * 1024]; + long remaining = fileIn.size(); + long total = remaining; + long time = System.nanoTime(); + while (remaining > 0) { + if (!quiet && System.nanoTime() - time > TimeUnit.SECONDS.toNanos(1)) { + out.println(fileName + ": " + (100 - 100 * remaining / total) + "%"); + time = System.nanoTime(); + } + int len = (int) Math.min(buffer.length, remaining); + len = inStream.read(buffer, 0, len); + outStream.write(buffer, 0, len); + remaining -= len; + } + } + } + } + FileUtils.delete(fileName); + FileUtils.move(temp, fileName); + } + + private FileChannel getFileChannel(String fileName, String r, byte[] decryptKey) throws IOException { + 
FileChannel fileIn = FilePath.get(fileName).open(r); + if (decryptKey != null) { + fileIn = new FilePathEncrypt.FileEncrypt(fileName, decryptKey, fileIn); + } + return fileIn; + } + + private void copy(String fileName, FileStore in, byte[] key, boolean quiet) { + if (FileUtils.isDirectory(fileName)) { + return; + } + String temp = directory + "/temp.db"; + FileUtils.delete(temp); + FileStore fileOut; + if (key == null) { + fileOut = FileStore.open(null, temp, "rw"); + } else { + fileOut = FileStore.open(null, temp, "rw", cipherType, key); + } + fileOut.init(); + byte[] buffer = new byte[4 * 1024]; + long remaining = in.length() - FileStore.HEADER_LENGTH; + long total = remaining; + in.seek(FileStore.HEADER_LENGTH); + fileOut.seek(FileStore.HEADER_LENGTH); + long time = System.nanoTime(); + while (remaining > 0) { + if (!quiet && System.nanoTime() - time > TimeUnit.SECONDS.toNanos(1)) { + out.println(fileName + ": " + (100 - 100 * remaining / total) + "%"); + time = System.nanoTime(); + } + int len = (int) Math.min(buffer.length, remaining); + in.readFully(buffer, 0, len); + fileOut.write(buffer, 0, len); + remaining -= len; + } + in.close(); + fileOut.close(); + FileUtils.delete(fileName); + FileUtils.move(temp, fileName); + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/CompressTool.java b/modules/h2/src/main/java/org/h2/tools/CompressTool.java new file mode 100644 index 0000000000000..91635317dce82 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/CompressTool.java @@ -0,0 +1,334 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.zip.DeflaterOutputStream; +import java.util.zip.GZIPInputStream; +import java.util.zip.GZIPOutputStream; +import java.util.zip.InflaterInputStream; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; +import java.util.zip.ZipOutputStream; + +import org.h2.api.ErrorCode; +import org.h2.compress.CompressDeflate; +import org.h2.compress.CompressLZF; +import org.h2.compress.CompressNo; +import org.h2.compress.Compressor; +import org.h2.compress.LZFInputStream; +import org.h2.compress.LZFOutputStream; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.util.Bits; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * A tool to losslessly compress data, and expand the compressed data again. + */ +public class CompressTool { + + private static final int MAX_BUFFER_SIZE = + 3 * Constants.IO_BUFFER_SIZE_COMPRESS; + private byte[] cachedBuffer; + + private CompressTool() { + // don't allow construction + } + + private byte[] getBuffer(int min) { + if (min > MAX_BUFFER_SIZE) { + return Utils.newBytes(min); + } + if (cachedBuffer == null || cachedBuffer.length < min) { + cachedBuffer = Utils.newBytes(min); + } + return cachedBuffer; + } + + /** + * Get a new instance. Each instance uses a separate buffer, so multiple + * instances can be used concurrently. However each instance alone is not + * multithreading safe. + * + * @return a new instance + */ + public static CompressTool getInstance() { + return new CompressTool(); + } + + /** + * Compressed the data using the specified algorithm. 
If no algorithm is + * supplied, LZF is used + * + * @param in the byte array with the original data + * @param algorithm the algorithm (LZF, DEFLATE) + * @return the compressed data + */ + public byte[] compress(byte[] in, String algorithm) { + int len = in.length; + if (in.length < 5) { + algorithm = "NO"; + } + Compressor compress = getCompressor(algorithm); + byte[] buff = getBuffer((len < 100 ? len + 100 : len) * 2); + int newLen = compress(in, in.length, compress, buff); + return Utils.copyBytes(buff, newLen); + } + + private static int compress(byte[] in, int len, Compressor compress, + byte[] out) { + int newLen = 0; + out[0] = (byte) compress.getAlgorithm(); + int start = 1 + writeVariableInt(out, 1, len); + newLen = compress.compress(in, len, out, start); + if (newLen > len + start || newLen <= 0) { + out[0] = Compressor.NO; + System.arraycopy(in, 0, out, start, len); + newLen = len + start; + } + return newLen; + } + + /** + * Expands the compressed data. + * + * @param in the byte array with the compressed data + * @return the uncompressed data + */ + public byte[] expand(byte[] in) { + int algorithm = in[0]; + Compressor compress = getCompressor(algorithm); + try { + int len = readVariableInt(in, 1); + int start = 1 + getVariableIntLength(len); + byte[] buff = Utils.newBytes(len); + compress.expand(in, start, in.length - start, buff, 0, len); + return buff; + } catch (Exception e) { + throw DbException.get(ErrorCode.COMPRESSION_ERROR, e); + } + } + + /** + * INTERNAL + */ + public static void expand(byte[] in, byte[] out, int outPos) { + int algorithm = in[0]; + Compressor compress = getCompressor(algorithm); + try { + int len = readVariableInt(in, 1); + int start = 1 + getVariableIntLength(len); + compress.expand(in, start, in.length - start, out, outPos, len); + } catch (Exception e) { + throw DbException.get(ErrorCode.COMPRESSION_ERROR, e); + } + } + + /** + * Read a variable size integer using Rice coding. 
+ * + * @param buff the buffer + * @param pos the position + * @return the integer + */ + public static int readVariableInt(byte[] buff, int pos) { + int x = buff[pos++] & 0xff; + if (x < 0x80) { + return x; + } + if (x < 0xc0) { + return ((x & 0x3f) << 8) + (buff[pos] & 0xff); + } + if (x < 0xe0) { + return ((x & 0x1f) << 16) + + ((buff[pos++] & 0xff) << 8) + + (buff[pos] & 0xff); + } + if (x < 0xf0) { + return ((x & 0xf) << 24) + + ((buff[pos++] & 0xff) << 16) + + ((buff[pos++] & 0xff) << 8) + + (buff[pos] & 0xff); + } + return Bits.readInt(buff, pos); + } + + /** + * Write a variable size integer using Rice coding. + * Negative values need 5 bytes. + * + * @param buff the buffer + * @param pos the position + * @param x the value + * @return the number of bytes written (0-5) + */ + public static int writeVariableInt(byte[] buff, int pos, int x) { + if (x < 0) { + buff[pos++] = (byte) 0xf0; + Bits.writeInt(buff, pos, x); + return 5; + } else if (x < 0x80) { + buff[pos] = (byte) x; + return 1; + } else if (x < 0x4000) { + buff[pos++] = (byte) (0x80 | (x >> 8)); + buff[pos] = (byte) x; + return 2; + } else if (x < 0x20_0000) { + buff[pos++] = (byte) (0xc0 | (x >> 16)); + buff[pos++] = (byte) (x >> 8); + buff[pos] = (byte) x; + return 3; + } else if (x < 0x1000_0000) { + Bits.writeInt(buff, pos, x | 0xe000_0000); + return 4; + } else { + buff[pos++] = (byte) 0xf0; + Bits.writeInt(buff, pos, x); + return 5; + } + } + + /** + * Get a variable size integer length using Rice coding. + * Negative values need 5 bytes. 
+ * + * @param x the value + * @return the number of bytes needed (0-5) + */ + public static int getVariableIntLength(int x) { + if (x < 0) { + return 5; + } else if (x < 0x80) { + return 1; + } else if (x < 0x4000) { + return 2; + } else if (x < 0x20_0000) { + return 3; + } else if (x < 0x1000_0000) { + return 4; + } else { + return 5; + } + } + + private static Compressor getCompressor(String algorithm) { + if (algorithm == null) { + algorithm = "LZF"; + } + int idx = algorithm.indexOf(' '); + String options = null; + if (idx > 0) { + options = algorithm.substring(idx + 1); + algorithm = algorithm.substring(0, idx); + } + int a = getCompressAlgorithm(algorithm); + Compressor compress = getCompressor(a); + compress.setOptions(options); + return compress; + } + + /** + * INTERNAL + */ + public static int getCompressAlgorithm(String algorithm) { + algorithm = StringUtils.toUpperEnglish(algorithm); + if ("NO".equals(algorithm)) { + return Compressor.NO; + } else if ("LZF".equals(algorithm)) { + return Compressor.LZF; + } else if ("DEFLATE".equals(algorithm)) { + return Compressor.DEFLATE; + } else { + throw DbException.get( + ErrorCode.UNSUPPORTED_COMPRESSION_ALGORITHM_1, + algorithm); + } + } + + private static Compressor getCompressor(int algorithm) { + switch (algorithm) { + case Compressor.NO: + return new CompressNo(); + case Compressor.LZF: + return new CompressLZF(); + case Compressor.DEFLATE: + return new CompressDeflate(); + default: + throw DbException.get( + ErrorCode.UNSUPPORTED_COMPRESSION_ALGORITHM_1, + "" + algorithm); + } + } + + /** + * INTERNAL + */ + public static OutputStream wrapOutputStream(OutputStream out, + String compressionAlgorithm, String entryName) { + try { + if ("GZIP".equals(compressionAlgorithm)) { + out = new GZIPOutputStream(out); + } else if ("ZIP".equals(compressionAlgorithm)) { + ZipOutputStream z = new ZipOutputStream(out); + z.putNextEntry(new ZipEntry(entryName)); + out = z; + } else if 
("DEFLATE".equals(compressionAlgorithm)) { + out = new DeflaterOutputStream(out); + } else if ("LZF".equals(compressionAlgorithm)) { + out = new LZFOutputStream(out); + } else if (compressionAlgorithm != null) { + throw DbException.get( + ErrorCode.UNSUPPORTED_COMPRESSION_ALGORITHM_1, + compressionAlgorithm); + } + return out; + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + /** + * INTERNAL + */ + public static InputStream wrapInputStream(InputStream in, + String compressionAlgorithm, String entryName) { + try { + if ("GZIP".equals(compressionAlgorithm)) { + in = new GZIPInputStream(in); + } else if ("ZIP".equals(compressionAlgorithm)) { + ZipInputStream z = new ZipInputStream(in); + while (true) { + ZipEntry entry = z.getNextEntry(); + if (entry == null) { + return null; + } + if (entryName.equals(entry.getName())) { + break; + } + } + in = z; + } else if ("DEFLATE".equals(compressionAlgorithm)) { + in = new InflaterInputStream(in); + } else if ("LZF".equals(compressionAlgorithm)) { + in = new LZFInputStream(in); + } else if (compressionAlgorithm != null) { + throw DbException.get( + ErrorCode.UNSUPPORTED_COMPRESSION_ALGORITHM_1, + compressionAlgorithm); + } + return in; + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + +} + diff --git a/modules/h2/src/main/java/org/h2/tools/Console.java b/modules/h2/src/main/java/org/h2/tools/Console.java new file mode 100644 index 0000000000000..ff02bff3ab1ed --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Console.java @@ -0,0 +1,700 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.tools; + +//## AWT ## +import java.awt.Button; +import java.awt.Dimension; +import java.awt.Font; +import java.awt.Frame; +import java.awt.GraphicsEnvironment; +import java.awt.GridBagConstraints; +import java.awt.GridBagLayout; +import java.awt.Image; +import java.awt.Insets; +import java.awt.Label; +import java.awt.MenuItem; +import java.awt.Panel; +import java.awt.PopupMenu; +import java.awt.SystemColor; +import java.awt.TextField; +import java.awt.Toolkit; +import java.awt.event.ActionEvent; +import java.awt.event.ActionListener; +import java.awt.event.MouseEvent; +import java.awt.event.MouseListener; +import java.awt.event.WindowEvent; +import java.awt.event.WindowListener; +import java.io.IOException; +import java.sql.Connection; +import java.sql.SQLException; +import java.util.Locale; +import java.util.concurrent.TimeUnit; + +import org.h2.server.ShutdownHandler; +import org.h2.util.JdbcUtils; +import org.h2.util.Tool; +import org.h2.util.Utils; + +/** + * Starts the H2 Console (web-) server, as well as the TCP and PG server. + * @h2.resource + * + * @author Thomas Mueller, Ridvan Agar + */ +public class Console extends Tool implements +//## AWT ## +ActionListener, MouseListener, WindowListener, +//*/ +ShutdownHandler { + +//## AWT ## + private Frame frame; + private boolean trayIconUsed; + private Font font; + private Button startBrowser; + private TextField urlText; + private Object tray; + private Object trayIcon; +//*/ + private Server web, tcp, pg; + private boolean isWindows; + private long lastOpenNs; + + /** + * When running without options, -tcp, -web, -browser and -pg are started. + *
    + * Options are case sensitive. Supported options are:
    [-help] or [-?]       Print the list of options
    [-url]                Start a browser and connect to this URL
    [-driver]             Used together with -url: the driver
    [-user]               Used together with -url: the user name
    [-password]           Used together with -url: the password
    [-web]                Start the web server with the H2 Console
    [-tool]               Start the icon or window that allows starting a browser
    [-browser]            Start a browser connecting to the web server
    [-tcp]                Start the TCP server
    [-pg]                 Start the PG server
    + * For each Server, additional options are available; + * for details, see the Server tool.
    + * If a service can not be started, the program + * terminates with an exit code of 1. + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new Console().runTool(args); + } + + /** + * This tool starts the H2 Console (web-) server, as well as the TCP and PG + * server. For JDK 1.6, a system tray icon is created, for platforms that + * support it. Otherwise, a small window opens. + * + * @param args the command line arguments + */ + @Override + public void runTool(String... args) throws SQLException { + isWindows = Utils.getProperty("os.name", "").startsWith("Windows"); + boolean tcpStart = false, pgStart = false, webStart = false, toolStart = false; + boolean browserStart = false; + boolean startDefaultServers = true; + boolean printStatus = args != null && args.length > 0; + String driver = null, url = null, user = null, password = null; + boolean tcpShutdown = false, tcpShutdownForce = false; + String tcpPassword = ""; + String tcpShutdownServer = ""; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg == null) { + } else if ("-?".equals(arg) || "-help".equals(arg)) { + showUsage(); + return; + } else if ("-url".equals(arg)) { + startDefaultServers = false; + url = args[++i]; + } else if ("-driver".equals(arg)) { + driver = args[++i]; + } else if ("-user".equals(arg)) { + user = args[++i]; + } else if ("-password".equals(arg)) { + password = args[++i]; + } else if (arg.startsWith("-web")) { + if ("-web".equals(arg)) { + startDefaultServers = false; + webStart = true; + } else if ("-webAllowOthers".equals(arg)) { + // no parameters + } else if ("-webDaemon".equals(arg)) { + // no parameters + } else if ("-webSSL".equals(arg)) { + // no parameters + } else if ("-webPort".equals(arg)) { + i++; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } else if ("-tool".equals(arg)) { + startDefaultServers = false; + webStart = true; + 
toolStart = true; + } else if ("-browser".equals(arg)) { + startDefaultServers = false; + webStart = true; + browserStart = true; + } else if (arg.startsWith("-tcp")) { + if ("-tcp".equals(arg)) { + startDefaultServers = false; + tcpStart = true; + } else if ("-tcpAllowOthers".equals(arg)) { + // no parameters + } else if ("-tcpDaemon".equals(arg)) { + // no parameters + } else if ("-tcpSSL".equals(arg)) { + // no parameters + } else if ("-tcpPort".equals(arg)) { + i++; + } else if ("-tcpPassword".equals(arg)) { + tcpPassword = args[++i]; + } else if ("-tcpShutdown".equals(arg)) { + startDefaultServers = false; + tcpShutdown = true; + tcpShutdownServer = args[++i]; + } else if ("-tcpShutdownForce".equals(arg)) { + tcpShutdownForce = true; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } else if (arg.startsWith("-pg")) { + if ("-pg".equals(arg)) { + startDefaultServers = false; + pgStart = true; + } else if ("-pgAllowOthers".equals(arg)) { + // no parameters + } else if ("-pgDaemon".equals(arg)) { + // no parameters + } else if ("-pgPort".equals(arg)) { + i++; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } else if ("-properties".equals(arg)) { + i++; + } else if ("-trace".equals(arg)) { + // no parameters + } else if ("-ifExists".equals(arg)) { + // no parameters + } else if ("-baseDir".equals(arg)) { + i++; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + if (startDefaultServers) { + webStart = true; + toolStart = true; + browserStart = true; + tcpStart = true; + pgStart = true; + } + if (tcpShutdown) { + out.println("Shutting down TCP Server at " + tcpShutdownServer); + Server.shutdownTcpServer(tcpShutdownServer, + tcpPassword, tcpShutdownForce, false); + } + SQLException startException = null; + boolean webRunning = false; + + if (url != null) { + Connection conn = JdbcUtils.getConnection(driver, url, user, password); + Server.startWebServer(conn); + } + + if (webStart) { + try { + web = Server.createWebServer(args); + 
web.setShutdownHandler(this); + web.start(); + if (printStatus) { + out.println(web.getStatus()); + } + webRunning = true; + } catch (SQLException e) { + printProblem(e, web); + startException = e; + } + } + +//## AWT ## + if (toolStart && webRunning && !GraphicsEnvironment.isHeadless()) { + loadFont(); + try { + if (!createTrayIcon()) { + showWindow(); + } + } catch (Exception e) { + e.printStackTrace(); + } + } +//*/ + + // start browser in any case (even if the server is already running) + // because some people don't look at the output, + // but are wondering why nothing happens + if (browserStart && web != null) { + openBrowser(web.getURL()); + } + + if (tcpStart) { + try { + tcp = Server.createTcpServer(args); + tcp.start(); + if (printStatus) { + out.println(tcp.getStatus()); + } + tcp.setShutdownHandler(this); + } catch (SQLException e) { + printProblem(e, tcp); + if (startException == null) { + startException = e; + } + } + } + if (pgStart) { + try { + pg = Server.createPgServer(args); + pg.start(); + if (printStatus) { + out.println(pg.getStatus()); + } + } catch (SQLException e) { + printProblem(e, pg); + if (startException == null) { + startException = e; + } + } + } + if (startException != null) { + shutdown(); + throw startException; + } + } + + private void printProblem(Exception e, Server server) { + if (server == null) { + e.printStackTrace(); + } else { + out.println(server.getStatus()); + out.println("Root cause: " + e.getMessage()); + } + } + + private static Image loadImage(String name) { + try { + byte[] imageData = Utils.getResource(name); + if (imageData == null) { + return null; + } + return Toolkit.getDefaultToolkit().createImage(imageData); + } catch (IOException e) { + e.printStackTrace(); + return null; + } + } + + /** + * INTERNAL. + * Stop all servers that were started using the console. 
+ */ + @Override + public void shutdown() { + if (web != null && web.isRunning(false)) { + web.stop(); + web = null; + } + if (tcp != null && tcp.isRunning(false)) { + tcp.stop(); + tcp = null; + } + if (pg != null && pg.isRunning(false)) { + pg.stop(); + pg = null; + } +//## AWT ## + if (frame != null) { + frame.dispose(); + frame = null; + } + if (trayIconUsed) { + try { + // tray.remove(trayIcon); + Utils.callMethod(tray, "remove", trayIcon); + } catch (Exception e) { + // ignore + } finally { + trayIcon = null; + tray = null; + trayIconUsed = false; + } + System.gc(); + // Mac OS X: Console tool process did not stop on exit + String os = System.getProperty("os.name", "generic").toLowerCase(Locale.ENGLISH); + if (os.contains("mac")) { + for (Thread t : Thread.getAllStackTraces().keySet()) { + if (t.getName().startsWith("AWT-")) { + t.interrupt(); + } + } + } + Thread.currentThread().interrupt(); + // throw new ThreadDeath(); + } +//*/ + } + +//## AWT ## + private void loadFont() { + if (isWindows) { + font = new Font("Dialog", Font.PLAIN, 11); + } else { + font = new Font("Dialog", Font.PLAIN, 12); + } + } + + private boolean createTrayIcon() { + try { + // SystemTray.isSupported(); + boolean supported = (Boolean) Utils.callStaticMethod( + "java.awt.SystemTray.isSupported"); + if (!supported) { + return false; + } + PopupMenu menuConsole = new PopupMenu(); + MenuItem itemConsole = new MenuItem("H2 Console"); + itemConsole.setActionCommand("console"); + itemConsole.addActionListener(this); + itemConsole.setFont(font); + menuConsole.add(itemConsole); + MenuItem itemStatus = new MenuItem("Status"); + itemStatus.setActionCommand("status"); + itemStatus.addActionListener(this); + itemStatus.setFont(font); + menuConsole.add(itemStatus); + MenuItem itemExit = new MenuItem("Exit"); + itemExit.setFont(font); + itemExit.setActionCommand("exit"); + itemExit.addActionListener(this); + menuConsole.add(itemExit); + + // tray = SystemTray.getSystemTray(); + tray = 
Utils.callStaticMethod("java.awt.SystemTray.getSystemTray"); + + // Dimension d = tray.getTrayIconSize(); + Dimension d = (Dimension) Utils.callMethod(tray, "getTrayIconSize"); + String iconFile; + if (d.width >= 24 && d.height >= 24) { + iconFile = "/org/h2/res/h2-24.png"; + } else if (d.width >= 22 && d.height >= 22) { + // for Mac OS X 10.8.1 with retina display: + // the reported resolution is 22 x 22, but the image + // is scaled and the real resolution is 44 x 44 + iconFile = "/org/h2/res/h2-64-t.png"; + // iconFile = "/org/h2/res/h2-22-t.png"; + } else { + iconFile = "/org/h2/res/h2.png"; + } + Image icon = loadImage(iconFile); + + // trayIcon = new TrayIcon(image, "H2 Database Engine", + // menuConsole); + trayIcon = Utils.newInstance("java.awt.TrayIcon", + icon, "H2 Database Engine", menuConsole); + + // trayIcon.addMouseListener(this); + Utils.callMethod(trayIcon, "addMouseListener", this); + + // tray.add(trayIcon); + Utils.callMethod(tray, "add", trayIcon); + + this.trayIconUsed = true; + + return true; + } catch (Exception e) { + return false; + } + } + + private void showWindow() { + if (frame != null) { + return; + } + frame = new Frame("H2 Console"); + frame.addWindowListener(this); + Image image = loadImage("/org/h2/res/h2.png"); + if (image != null) { + frame.setIconImage(image); + } + frame.setResizable(false); + frame.setBackground(SystemColor.control); + + GridBagLayout layout = new GridBagLayout(); + frame.setLayout(layout); + + // the main panel keeps everything together + Panel mainPanel = new Panel(layout); + + GridBagConstraints constraintsPanel = new GridBagConstraints(); + constraintsPanel.gridx = 0; + constraintsPanel.weightx = 1.0D; + constraintsPanel.weighty = 1.0D; + constraintsPanel.fill = GridBagConstraints.BOTH; + constraintsPanel.insets = new Insets(0, 10, 0, 10); + constraintsPanel.gridy = 0; + + GridBagConstraints constraintsButton = new GridBagConstraints(); + constraintsButton.gridx = 0; + constraintsButton.gridwidth = 2; + 
constraintsButton.insets = new Insets(10, 0, 0, 0); + constraintsButton.gridy = 1; + constraintsButton.anchor = GridBagConstraints.EAST; + + GridBagConstraints constraintsTextField = new GridBagConstraints(); + constraintsTextField.fill = GridBagConstraints.HORIZONTAL; + constraintsTextField.gridy = 0; + constraintsTextField.weightx = 1.0; + constraintsTextField.insets = new Insets(0, 5, 0, 0); + constraintsTextField.gridx = 1; + + GridBagConstraints constraintsLabel = new GridBagConstraints(); + constraintsLabel.gridx = 0; + constraintsLabel.gridy = 0; + + Label label = new Label("H2 Console URL:", Label.LEFT); + label.setFont(font); + mainPanel.add(label, constraintsLabel); + + urlText = new TextField(); + urlText.setEditable(false); + urlText.setFont(font); + urlText.setText(web.getURL()); + if (isWindows) { + urlText.setFocusable(false); + } + mainPanel.add(urlText, constraintsTextField); + + startBrowser = new Button("Start Browser"); + startBrowser.setFocusable(false); + startBrowser.setActionCommand("console"); + startBrowser.addActionListener(this); + startBrowser.setFont(font); + mainPanel.add(startBrowser, constraintsButton); + frame.add(mainPanel, constraintsPanel); + + int width = 300, height = 120; + frame.setSize(width, height); + Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize(); + frame.setLocation((screenSize.width - width) / 2, + (screenSize.height - height) / 2); + try { + frame.setVisible(true); + } catch (Throwable t) { + // ignore + // some systems don't support this method, for example IKVM + // however it still works + } + try { + // ensure this window is in front of the browser + frame.setAlwaysOnTop(true); + frame.setAlwaysOnTop(false); + } catch (Throwable t) { + // ignore + } + } + + private void startBrowser() { + if (web != null) { + String url = web.getURL(); + if (urlText != null) { + urlText.setText(url); + } + long now = System.nanoTime(); + if (lastOpenNs == 0 || lastOpenNs + TimeUnit.MILLISECONDS.toNanos(100) < 
now) { + lastOpenNs = now; + openBrowser(url); + } + } + } +//*/ + + private void openBrowser(String url) { + try { + Server.openBrowser(url); + } catch (Exception e) { + out.println(e.getMessage()); + } + } + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void actionPerformed(ActionEvent e) { + String command = e.getActionCommand(); + if ("exit".equals(command)) { + shutdown(); + } else if ("console".equals(command)) { + startBrowser(); + } else if ("status".equals(command)) { + showWindow(); + } else if (startBrowser == e.getSource()) { + // for some reason, IKVM ignores setActionCommand + startBrowser(); + } + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void mouseClicked(MouseEvent e) { + if (e.getButton() == MouseEvent.BUTTON1) { + startBrowser(); + } + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void mouseEntered(MouseEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void mouseExited(MouseEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void mousePressed(MouseEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void mouseReleased(MouseEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void windowClosing(WindowEvent e) { + if (trayIconUsed) { + frame.dispose(); + frame = null; + } else { + shutdown(); + } + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void windowActivated(WindowEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void windowClosed(WindowEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void windowDeactivated(WindowEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void windowDeiconified(WindowEvent e) { + // nothing to do + } 
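The `startBrowser()` logic above rate-limits browser launches: it compares `System.nanoTime()` stamps and only opens a new window when at least 100 ms have passed since the last open. A minimal standalone sketch of that guard (the `Debouncer` class and its method names are invented for this sketch):

```java
import java.util.concurrent.TimeUnit;

// Minimal debounce guard modeled on Console.startBrowser() above: the
// action runs only if at least intervalMillis have elapsed since the last
// run. System.nanoTime() is monotonic, unlike currentTimeMillis(), which
// is why the original uses it for interval comparisons.
public class Debouncer {
    private final long intervalNs;
    private long lastRunNs;

    public Debouncer(long intervalMillis) {
        this.intervalNs = TimeUnit.MILLISECONDS.toNanos(intervalMillis);
    }

    public synchronized boolean tryRun(Runnable action) {
        long now = System.nanoTime();
        if (lastRunNs == 0 || lastRunNs + intervalNs < now) {
            lastRunNs = now;
            action.run();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Debouncer d = new Debouncer(100);
        System.out.println(d.tryRun(() -> {}));  // true: first call always runs
        System.out.println(d.tryRun(() -> {}));  // false: inside the 100 ms window
    }
}
```

As in the original, a zero timestamp is treated as "never ran", so the first invocation always goes through.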
+//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void windowIconified(WindowEvent e) { + // nothing to do + } +//*/ + + /** + * INTERNAL + */ +//## AWT ## + @Override + public void windowOpened(WindowEvent e) { + // nothing to do + } +//*/ + +} diff --git a/modules/h2/src/main/java/org/h2/tools/ConvertTraceFile.java b/modules/h2/src/main/java/org/h2/tools/ConvertTraceFile.java new file mode 100644 index 0000000000000..be2195dd178da --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/ConvertTraceFile.java @@ -0,0 +1,226 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; +import java.io.IOException; +import java.io.LineNumberReader; +import java.io.PrintWriter; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.StringTokenizer; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.StringUtils; +import org.h2.util.Tool; + +/** + * Converts a .trace.db file to a SQL script and Java source code. + *
    + * SQL statement statistics are listed as well. + * @h2.resource + */ +public class ConvertTraceFile extends Tool { + + private final HashMap<String, Stat> stats = new HashMap<>(); + private long timeTotal; + + /** + * This class holds statistics about a SQL statement. + */ + static class Stat implements Comparable<Stat> { + String sql; + int executeCount; + long time; + long resultCount; + + @Override + public int compareTo(Stat other) { + if (other == this) { + return 0; + } + int c = Long.compare(other.time, time); + if (c == 0) { + c = Integer.compare(other.executeCount, executeCount); + if (c == 0) { + c = sql.compareTo(other.sql); + } + } + return c; + } + } + + /** + * Options are case sensitive. Supported options are:
    [-help] or [-?]       Print the list of options
    [-traceFile <file>]   The trace file name (default: test.trace.db)
    [-script <file>]      The script file name (default: test.sql)
    [-javaClass <file>]   The Java directory and class file name (default: Test)
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new ConvertTraceFile().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + String traceFile = "test.trace.db"; + String javaClass = "Test"; + String script = "test.sql"; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-traceFile")) { + traceFile = args[++i]; + } else if (arg.equals("-javaClass")) { + javaClass = args[++i]; + } else if (arg.equals("-script")) { + script = args[++i]; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + try { + convertFile(traceFile, javaClass, script); + } catch (IOException e) { + throw DbException.convertIOException(e, traceFile); + } + } + + /** + * Converts a trace file to a Java class file and a script file. + */ + private void convertFile(String traceFileName, String javaClassName, + String script) throws IOException { + LineNumberReader reader = new LineNumberReader( + IOUtils.getBufferedReader( + FileUtils.newInputStream(traceFileName))); + PrintWriter javaWriter = new PrintWriter( + IOUtils.getBufferedWriter( + FileUtils.newOutputStream(javaClassName + ".java", false))); + PrintWriter scriptWriter = new PrintWriter( + IOUtils.getBufferedWriter( + FileUtils.newOutputStream(script, false))); + javaWriter.println("import java.io.*;"); + javaWriter.println("import java.sql.*;"); + javaWriter.println("import java.math.*;"); + javaWriter.println("import java.util.Calendar;"); + String cn = javaClassName.replace('\\', '/'); + int idx = cn.lastIndexOf('/'); + if (idx > 0) { + cn = cn.substring(idx + 1); + } + javaWriter.println("public class " + cn + " {"); + javaWriter.println(" public static void main(String... 
args) " + + "throws Exception {"); + javaWriter.println(" Class.forName(\"org.h2.Driver\");"); + while (true) { + String line = reader.readLine(); + if (line == null) { + break; + } + if (line.startsWith("/**/")) { + line = " " + line.substring(4); + javaWriter.println(line); + } else if (line.startsWith("/*SQL")) { + int end = line.indexOf("*/"); + String sql = line.substring(end + "*/".length()); + sql = StringUtils.javaDecode(sql); + line = line.substring("/*SQL".length(), end); + if (line.length() > 0) { + String statement = sql; + int count = 0; + long time = 0; + line = line.trim(); + if (line.length() > 0) { + StringTokenizer tk = new StringTokenizer(line, " :"); + while (tk.hasMoreElements()) { + String token = tk.nextToken(); + if ("l".equals(token)) { + int len = Integer.parseInt(tk.nextToken()); + statement = sql.substring(0, len) + ";"; + } else if ("#".equals(token)) { + count = Integer.parseInt(tk.nextToken()); + } else if ("t".equals(token)) { + time = Long.parseLong(tk.nextToken()); + } + } + } + addToStats(statement, count, time); + } + scriptWriter.println(sql); + } + } + javaWriter.println(" }"); + javaWriter.println('}'); + reader.close(); + javaWriter.close(); + if (stats.size() > 0) { + scriptWriter.println("-----------------------------------------"); + scriptWriter.println("-- SQL Statement Statistics"); + scriptWriter.println("-- time: total time in milliseconds (accumulated)"); + scriptWriter.println("-- count: how many times the statement ran"); + scriptWriter.println("-- result: total update count or row count"); + scriptWriter.println("-----------------------------------------"); + scriptWriter.println("-- self accu time count result sql"); + int accumTime = 0; + ArrayList<Stat> list = new ArrayList<>(stats.values()); + Collections.sort(list); + if (timeTotal == 0) { + timeTotal = 1; + } + for (Stat stat : list) { + accumTime += stat.time; + StringBuilder buff = new StringBuilder(100); + buff.append("-- "). 
+ append(padNumberLeft(100 * stat.time / timeTotal, 3)). + append("% "). + append(padNumberLeft(100 * accumTime / timeTotal, 3)). + append('%'). + append(padNumberLeft(stat.time, 8)). + append(padNumberLeft(stat.executeCount, 8)). + append(padNumberLeft(stat.resultCount, 8)). + append(' '). + append(removeNewlines(stat.sql)); + scriptWriter.println(buff.toString()); + } + } + scriptWriter.close(); + } + + private static String removeNewlines(String s) { + return s == null ? s : s.replace('\r', ' ').replace('\n', ' '); + } + + private static String padNumberLeft(long number, int digits) { + return StringUtils.pad(String.valueOf(number), digits, " ", false); + } + + private void addToStats(String sql, int resultCount, long time) { + Stat stat = stats.get(sql); + if (stat == null) { + stat = new Stat(); + stat.sql = sql; + stats.put(sql, stat); + } + stat.executeCount++; + stat.resultCount += resultCount; + stat.time += time; + timeTotal += time; + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/CreateCluster.java b/modules/h2/src/main/java/org/h2/tools/CreateCluster.java new file mode 100644 index 0000000000000..d283f105a25d3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/CreateCluster.java @@ -0,0 +1,186 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.IOException; +import java.io.PipedReader; +import java.io.PipedWriter; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.Future; +import org.h2.util.Tool; + +/** + * Creates a cluster from a stand-alone database. + *
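The statistics block in ConvertTraceFile above sorts statements with `Stat.compareTo`: largest total time first, ties broken by execute count (descending), then by SQL text. A self-contained check of that ordering (a trimmed copy of `Stat` inside an invented `StatOrder` wrapper, not the original class):

```java
import java.util.ArrayList;
import java.util.Collections;

// Trimmed, self-contained copy of ConvertTraceFile.Stat's ordering:
// slowest statements first, ties broken by execute count, then SQL text.
public class StatOrder {

    static class Stat implements Comparable<Stat> {
        final String sql;
        final int executeCount;
        final long time;

        Stat(String sql, int executeCount, long time) {
            this.sql = sql;
            this.executeCount = executeCount;
            this.time = time;
        }

        @Override
        public int compareTo(Stat other) {
            int c = Long.compare(other.time, time);      // more total time first
            if (c == 0) {
                c = Integer.compare(other.executeCount, executeCount);
                if (c == 0) {
                    c = sql.compareTo(other.sql);
                }
            }
            return c;
        }
    }

    public static void main(String[] args) {
        ArrayList<Stat> list = new ArrayList<>();
        list.add(new Stat("SELECT 1", 5, 10));
        list.add(new Stat("SELECT 2", 1, 500));
        list.add(new Stat("SELECT 0", 9, 10));
        Collections.sort(list);
        for (Stat s : list) {
            System.out.println(s.sql);  // SELECT 2, SELECT 0, SELECT 1
        }
    }
}
```

Comparing `other` against `this` (rather than the reverse) is what makes the natural sort order descending.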
    + * Copies a database to another location if required. + * @h2.resource + */ +public class CreateCluster extends Tool { + + /** + * Options are case sensitive. Supported options are:
    [-help] or [-?]       Print the list of options
    [-urlSource "<url>"]  The database URL of the source database (jdbc:h2:...)
    [-urlTarget "<url>"]  The database URL of the target database (jdbc:h2:...)
    [-user <user>]        The user name (default: sa)
    [-password <pwd>]     The password
    [-serverList <list>]  The comma-separated list of host names or IP addresses
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new CreateCluster().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + String urlSource = null; + String urlTarget = null; + String user = ""; + String password = ""; + String serverList = null; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-urlSource")) { + urlSource = args[++i]; + } else if (arg.equals("-urlTarget")) { + urlTarget = args[++i]; + } else if (arg.equals("-user")) { + user = args[++i]; + } else if (arg.equals("-password")) { + password = args[++i]; + } else if (arg.equals("-serverList")) { + serverList = args[++i]; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + if (urlSource == null || urlTarget == null || serverList == null) { + showUsage(); + throw new SQLException("Source URL, target URL, or server list not set"); + } + process(urlSource, urlTarget, user, password, serverList); + } + + /** + * Creates a cluster. 
+ * + * @param urlSource the database URL of the original database + * @param urlTarget the database URL of the copy + * @param user the user name + * @param password the password + * @param serverList the server list + */ + public void execute(String urlSource, String urlTarget, + String user, String password, String serverList) throws SQLException { + process(urlSource, urlTarget, user, password, serverList); + } + + private static void process(String urlSource, String urlTarget, + String user, String password, String serverList) throws SQLException { + org.h2.Driver.load(); + + // use cluster='' so connecting is possible + // even if the cluster is enabled + try (Connection connSource = DriverManager.getConnection(urlSource + ";CLUSTER=''", user, password); + Statement statSource = connSource.createStatement()) { + // enable the exclusive mode and close other connections, + // so that data can't change while restoring the second database + statSource.execute("SET EXCLUSIVE 2"); + try { + performTransfer(statSource, urlTarget, user, password, serverList); + } finally { + // switch back to the regular mode + statSource.execute("SET EXCLUSIVE FALSE"); + } + } + } + + private static void performTransfer(Statement statSource, String urlTarget, String user, String password, + String serverList) throws SQLException { + + // Delete the target database first. + try (Connection connTarget = DriverManager.getConnection(urlTarget + ";CLUSTER=''", user, password); + Statement statTarget = connTarget.createStatement()) { + statTarget.execute("DROP ALL OBJECTS DELETE FILES"); + } + + try (PipedReader pipeReader = new PipedReader()) { + Future threadFuture = startWriter(pipeReader, statSource); + + // Read data from pipe reader, restore on target. 
+ try (Connection connTarget = DriverManager.getConnection(urlTarget, user, password); + Statement statTarget = connTarget.createStatement()) { + RunScript.execute(connTarget, pipeReader); + + // Check if the writer encountered any exception + try { + threadFuture.get(); + } catch (ExecutionException ex) { + throw new SQLException(ex.getCause()); + } catch (InterruptedException ex) { + throw new SQLException(ex); + } + + // set the cluster to the serverList on both databases + statSource.executeUpdate("SET CLUSTER '" + serverList + "'"); + statTarget.executeUpdate("SET CLUSTER '" + serverList + "'"); + } + } catch (IOException ex) { + throw new SQLException(ex); + } + } + + private static Future startWriter(final PipedReader pipeReader, + final Statement statSource) { + final ExecutorService thread = Executors.newFixedThreadPool(1); + + // Since exceptions cannot be thrown across thread boundaries, return + // the task's future so we can check manually + Future threadFuture = thread.submit(new Runnable() { + @Override + public void run() { + // If the creation of the piped writer fails, the reader will + // throw an IOException as soon as read() is called: IOException + // - if the pipe is broken, unconnected, closed, or an I/O error + // occurs. The reader's IOException will then trigger the + // finally{} that releases exclusive mode on the source DB. 
+ try (final PipedWriter pipeWriter = new PipedWriter(pipeReader); + final ResultSet rs = statSource.executeQuery("SCRIPT")) { + while (rs.next()) { + pipeWriter.write(rs.getString(1) + "\n"); + } + } catch (SQLException | IOException ex) { + throw new IllegalStateException("Producing script from the source DB is failing.", ex); + } + } + }); + + thread.shutdown(); + + return threadFuture; + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/Csv.java b/modules/h2/src/main/java/org/h2/tools/Csv.java new file mode 100644 index 0000000000000..d5070c26af3a7 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Csv.java @@ -0,0 +1,875 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.OutputStreamWriter; +import java.io.Reader; +import java.io.Writer; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.ArrayList; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.New; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * A facility to read from and write to CSV (comma separated values) files. When + * reading, the BOM (the byte-order-mark) character 0xfeff at the beginning of + * the file is ignored. 
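The `performTransfer()`/`startWriter()` pair above moves the `SCRIPT` output across a thread boundary through a `PipedWriter`/`PipedReader` pair, and surfaces writer-side failures via the returned `Future`. A database-free sketch of the same pattern (the class name and the `String[]` standing in for the `SCRIPT` result set are invented for this sketch):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Database-free sketch of the pipe pattern in performTransfer()/startWriter()
// above: one writer thread feeds lines into a PipedWriter while the caller
// consumes the connected PipedReader, and Future.get() carries any
// writer-side exception back across the thread boundary.
public class PipeTransfer {

    public static String transfer(String[] lines) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try (PipedReader pipeReader = new PipedReader()) {
            Future<?> writerFuture = pool.submit(() -> {
                // Connecting the writer inside the task mirrors startWriter():
                // if this fails, the reader sees a broken pipe and stops.
                try (PipedWriter pipeWriter = new PipedWriter(pipeReader)) {
                    for (String line : lines) {
                        pipeWriter.write(line + "\n");
                    }
                } catch (IOException e) {
                    throw new IllegalStateException("writer failed", e);
                }
            });
            StringBuilder sb = new StringBuilder();
            BufferedReader reader = new BufferedReader(pipeReader);
            for (String line; (line = reader.readLine()) != null; ) {
                sb.append(line).append(';');
            }
            writerFuture.get();  // rethrows writer-side failures, if any
            return sb.toString();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transfer(new String[] {"CREATE TABLE T(ID INT)", "COMMIT"}));
        // prints: CREATE TABLE T(ID INT);COMMIT;
    }
}
```

Checking the future only after the reader drains the pipe matches the original's ordering: the restore finishes (or fails fast on a broken pipe) before the producer's exception, if any, is rethrown.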
+ * + * @author Thomas Mueller, Sylvain Cuaz + */ +public class Csv implements SimpleRowSource { + + private String[] columnNames; + + private String characterSet = SysProperties.FILE_ENCODING; + private char escapeCharacter = '\"'; + private char fieldDelimiter = '\"'; + private char fieldSeparatorRead = ','; + private String fieldSeparatorWrite = ","; + private boolean caseSensitiveColumnNames; + private boolean preserveWhitespace; + private boolean writeColumnHeader = true; + private char lineComment; + private String lineSeparator = SysProperties.LINE_SEPARATOR; + private String nullString = ""; + + private String fileName; + private Reader input; + private char[] inputBuffer; + private int inputBufferPos; + private int inputBufferStart = -1; + private int inputBufferEnd; + private Writer output; + private boolean endOfLine, endOfFile; + + private int writeResultSet(ResultSet rs) throws SQLException { + try { + int rows = 0; + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + String[] row = new String[columnCount]; + int[] sqlTypes = new int[columnCount]; + for (int i = 0; i < columnCount; i++) { + row[i] = meta.getColumnLabel(i + 1); + sqlTypes[i] = meta.getColumnType(i + 1); + } + if (writeColumnHeader) { + writeRow(row); + } + while (rs.next()) { + for (int i = 0; i < columnCount; i++) { + Object o; + switch (sqlTypes[i]) { + case Types.DATE: + o = rs.getDate(i + 1); + break; + case Types.TIME: + o = rs.getTime(i + 1); + break; + case Types.TIMESTAMP: + o = rs.getTimestamp(i + 1); + break; + default: + o = rs.getString(i + 1); + } + row[i] = o == null ? null : o.toString(); + } + writeRow(row); + rows++; + } + output.close(); + return rows; + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } finally { + close(); + JdbcUtils.closeSilently(rs); + } + } + + /** + * Writes the result set to a file in the CSV format. 
+ * + * @param writer the writer + * @param rs the result set + * @return the number of rows written + */ + public int write(Writer writer, ResultSet rs) throws SQLException { + this.output = writer; + return writeResultSet(rs); + } + + /** + * Writes the result set to a file in the CSV format. The result set is read + * using the following loop: + * + *
    +     * while (rs.next()) {
    +     *     writeRow(row);
    +     * }
    +     * 
    + * + * @param outputFileName the name of the csv file + * @param rs the result set - the result set must be positioned before the + * first row. + * @param charset the charset or null to use the system default charset + * (see system property file.encoding) + * @return the number of rows written + */ + public int write(String outputFileName, ResultSet rs, String charset) + throws SQLException { + init(outputFileName, charset); + try { + initWrite(); + return writeResultSet(rs); + } catch (IOException e) { + throw convertException("IOException writing " + outputFileName, e); + } + } + + /** + * Writes the result set of a query to a file in the CSV format. + * + * @param conn the connection + * @param outputFileName the file name + * @param sql the query + * @param charset the charset or null to use the system default charset + * (see system property file.encoding) + * @return the number of rows written + */ + public int write(Connection conn, String outputFileName, String sql, + String charset) throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery(sql); + int rows = write(outputFileName, rs, charset); + stat.close(); + return rows; + } + + /** + * Reads from the CSV file and returns a result set. The rows in the result + * set are created on demand, that means the file is kept open until all + * rows are read or the result set is closed. + *
    + * If the columns are read from the CSV file, then the following rules are + * used: columns names that start with a letter or '_', and only + * contain letters, '_', and digits, are considered case insensitive + * and are converted to uppercase. Other column names are considered + * case sensitive (that means they need to be quoted when accessed). + * + * @param inputFileName the file name + * @param colNames or null if the column names should be read from the CSV + * file + * @param charset the charset or null to use the system default charset + * (see system property file.encoding) + * @return the result set + */ + public ResultSet read(String inputFileName, String[] colNames, + String charset) throws SQLException { + init(inputFileName, charset); + try { + return readResultSet(colNames); + } catch (IOException e) { + throw convertException("IOException reading " + inputFileName, e); + } + } + + /** + * Reads CSV data from a reader and returns a result set. The rows in the + * result set are created on demand, that means the reader is kept open + * until all rows are read or the result set is closed. 
+ * + * @param reader the reader + * @param colNames or null if the column names should be read from the CSV + * file + * @return the result set + */ + public ResultSet read(Reader reader, String[] colNames) throws IOException { + init(null, null); + this.input = reader; + return readResultSet(colNames); + } + + private ResultSet readResultSet(String[] colNames) throws IOException { + this.columnNames = colNames; + initRead(); + SimpleResultSet result = new SimpleResultSet(this); + makeColumnNamesUnique(); + for (String columnName : columnNames) { + result.addColumn(columnName, Types.VARCHAR, Integer.MAX_VALUE, 0); + } + return result; + } + + private void makeColumnNamesUnique() { + for (int i = 0; i < columnNames.length; i++) { + StringBuilder buff = new StringBuilder(); + String n = columnNames[i]; + if (n == null || n.length() == 0) { + buff.append('C').append(i + 1); + } else { + buff.append(n); + } + for (int j = 0; j < i; j++) { + String y = columnNames[j]; + if (buff.toString().equals(y)) { + buff.append('1'); + j = -1; + } + } + columnNames[i] = buff.toString(); + } + } + + private void init(String newFileName, String charset) { + this.fileName = newFileName; + if (charset != null) { + this.characterSet = charset; + } + } + + private void initWrite() throws IOException { + if (output == null) { + try { + OutputStream out = FileUtils.newOutputStream(fileName, false); + out = new BufferedOutputStream(out, Constants.IO_BUFFER_SIZE); + output = new BufferedWriter(new OutputStreamWriter(out, characterSet)); + } catch (Exception e) { + close(); + throw DbException.convertToIOException(e); + } + } + } + + private void writeRow(String[] values) throws IOException { + for (int i = 0; i < values.length; i++) { + if (i > 0) { + if (fieldSeparatorWrite != null) { + output.write(fieldSeparatorWrite); + } + } + String s = values[i]; + if (s != null) { + if (escapeCharacter != 0) { + if (fieldDelimiter != 0) { + output.write(fieldDelimiter); + } + 
output.write(escape(s)); + if (fieldDelimiter != 0) { + output.write(fieldDelimiter); + } + } else { + output.write(s); + } + } else if (nullString != null && nullString.length() > 0) { + output.write(nullString); + } + } + output.write(lineSeparator); + } + + private String escape(String data) { + if (data.indexOf(fieldDelimiter) < 0) { + if (escapeCharacter == fieldDelimiter || data.indexOf(escapeCharacter) < 0) { + return data; + } + } + int length = data.length(); + StringBuilder buff = new StringBuilder(length); + for (int i = 0; i < length; i++) { + char ch = data.charAt(i); + if (ch == fieldDelimiter || ch == escapeCharacter) { + buff.append(escapeCharacter); + } + buff.append(ch); + } + return buff.toString(); + } + + private void initRead() throws IOException { + if (input == null) { + try { + InputStream in = FileUtils.newInputStream(fileName); + in = new BufferedInputStream(in, Constants.IO_BUFFER_SIZE); + input = new InputStreamReader(in, characterSet); + } catch (IOException e) { + close(); + throw e; + } + } + if (!input.markSupported()) { + input = new BufferedReader(input); + } + input.mark(1); + int bom = input.read(); + if (bom != 0xfeff) { + // Microsoft Excel compatibility + // ignore pseudo-BOM + input.reset(); + } + inputBuffer = new char[Constants.IO_BUFFER_SIZE * 2]; + if (columnNames == null) { + readHeader(); + } + } + + private void readHeader() throws IOException { + ArrayList list = New.arrayList(); + while (true) { + String v = readValue(); + if (v == null) { + if (endOfLine) { + if (endOfFile || !list.isEmpty()) { + break; + } + } else { + v = "COLUMN" + list.size(); + list.add(v); + } + } else { + if (v.length() == 0) { + v = "COLUMN" + list.size(); + } else if (!caseSensitiveColumnNames && isSimpleColumnName(v)) { + v = StringUtils.toUpperEnglish(v); + } + list.add(v); + if (endOfLine) { + break; + } + } + } + columnNames = list.toArray(new String[0]); + } + + private static boolean isSimpleColumnName(String columnName) { + for (int 
i = 0, length = columnName.length(); i < length; i++) { + char ch = columnName.charAt(i); + if (i == 0) { + if (ch != '_' && !Character.isLetter(ch)) { + return false; + } + } else { + if (ch != '_' && !Character.isLetterOrDigit(ch)) { + return false; + } + } + } + return columnName.length() != 0; + } + + private void pushBack() { + inputBufferPos--; + } + + private int readChar() throws IOException { + if (inputBufferPos >= inputBufferEnd) { + return readBuffer(); + } + return inputBuffer[inputBufferPos++]; + } + + private int readBuffer() throws IOException { + if (endOfFile) { + return -1; + } + int keep; + if (inputBufferStart >= 0) { + keep = inputBufferPos - inputBufferStart; + if (keep > 0) { + char[] src = inputBuffer; + if (keep + Constants.IO_BUFFER_SIZE > src.length) { + inputBuffer = new char[src.length * 2]; + } + System.arraycopy(src, inputBufferStart, inputBuffer, 0, keep); + } + inputBufferStart = 0; + } else { + keep = 0; + } + inputBufferPos = keep; + int len = input.read(inputBuffer, keep, Constants.IO_BUFFER_SIZE); + if (len == -1) { + // ensure bufferPos > bufferEnd + // even after pushBack + inputBufferEnd = -1024; + endOfFile = true; + // ensure the right number of characters are read + // in case the input buffer is still used + inputBufferPos++; + return -1; + } + inputBufferEnd = keep + len; + return inputBuffer[inputBufferPos++]; + } + + private String readValue() throws IOException { + endOfLine = false; + inputBufferStart = inputBufferPos; + while (true) { + int ch = readChar(); + if (ch == fieldDelimiter) { + // delimited value + boolean containsEscape = false; + inputBufferStart = inputBufferPos; + int sep; + while (true) { + ch = readChar(); + if (ch == fieldDelimiter) { + ch = readChar(); + if (ch != fieldDelimiter) { + sep = 2; + break; + } + containsEscape = true; + } else if (ch == escapeCharacter) { + ch = readChar(); + if (ch < 0) { + sep = 1; + break; + } + containsEscape = true; + } else if (ch < 0) { + sep = 1; + break; + } 
+ } + String s = new String(inputBuffer, + inputBufferStart, inputBufferPos - inputBufferStart - sep); + if (containsEscape) { + s = unEscape(s); + } + inputBufferStart = -1; + while (true) { + if (ch == fieldSeparatorRead) { + break; + } else if (ch == '\n' || ch < 0 || ch == '\r') { + endOfLine = true; + break; + } else if (ch == ' ' || ch == '\t') { + // ignore + } else { + pushBack(); + break; + } + ch = readChar(); + } + return s; + } else if (ch == '\n' || ch < 0 || ch == '\r') { + endOfLine = true; + return null; + } else if (ch == fieldSeparatorRead) { + // null + return null; + } else if (ch <= ' ') { + // ignore spaces + } else if (lineComment != 0 && ch == lineComment) { + // comment until end of line + inputBufferStart = -1; + do { + ch = readChar(); + } while (ch != '\n' && ch >= 0 && ch != '\r'); + endOfLine = true; + return null; + } else { + // un-delimited value + while (true) { + ch = readChar(); + if (ch == fieldSeparatorRead) { + break; + } else if (ch == '\n' || ch < 0 || ch == '\r') { + endOfLine = true; + break; + } + } + String s = new String(inputBuffer, + inputBufferStart, inputBufferPos - inputBufferStart - 1); + if (!preserveWhitespace) { + s = s.trim(); + } + inputBufferStart = -1; + // check un-delimited value for nullString + return readNull(s); + } + } + } + + private String readNull(String s) { + return s.equals(nullString) ? 
null : s; + } + + private String unEscape(String s) { + StringBuilder buff = new StringBuilder(s.length()); + int start = 0; + char[] chars = null; + while (true) { + int idx = s.indexOf(escapeCharacter, start); + if (idx < 0) { + idx = s.indexOf(fieldDelimiter, start); + if (idx < 0) { + break; + } + } + if (chars == null) { + chars = s.toCharArray(); + } + buff.append(chars, start, idx - start); + if (idx == s.length() - 1) { + start = s.length(); + break; + } + buff.append(chars[idx + 1]); + start = idx + 2; + } + buff.append(s, start, s.length()); + return buff.toString(); + } + + /** + * INTERNAL + */ + @Override + public Object[] readRow() throws SQLException { + if (input == null) { + return null; + } + String[] row = new String[columnNames.length]; + try { + int i = 0; + while (true) { + String v = readValue(); + if (v == null) { + if (endOfLine) { + if (i == 0) { + if (endOfFile) { + return null; + } + // empty line + continue; + } + break; + } + } + if (i < row.length) { + row[i++] = v; + } + if (endOfLine) { + break; + } + } + } catch (IOException e) { + throw convertException("IOException reading from " + fileName, e); + } + return row; + } + + private static SQLException convertException(String message, Exception e) { + return DbException.get(ErrorCode.IO_EXCEPTION_1, e, message).getSQLException(); + } + + /** + * INTERNAL + */ + @Override + public void close() { + IOUtils.closeSilently(input); + input = null; + IOUtils.closeSilently(output); + output = null; + } + + /** + * INTERNAL + */ + @Override + public void reset() throws SQLException { + throw new SQLException("Method is not supported", "CSV"); + } + + /** + * Override the field separator for writing. The default is ",". + * + * @param fieldSeparatorWrite the field separator + */ + public void setFieldSeparatorWrite(String fieldSeparatorWrite) { + this.fieldSeparatorWrite = fieldSeparatorWrite; + } + + /** + * Get the current field separator for writing. 
+ * + * @return the field separator + */ + public String getFieldSeparatorWrite() { + return fieldSeparatorWrite; + } + + /** + * Override the case sensitive column names setting. The default is false. + * If enabled, the case of all column names is always preserved. + * + * @param caseSensitiveColumnNames whether column names are case sensitive + */ + public void setCaseSensitiveColumnNames(boolean caseSensitiveColumnNames) { + this.caseSensitiveColumnNames = caseSensitiveColumnNames; + } + + /** + * Get the current case sensitive column names setting. + * + * @return whether column names are case sensitive + */ + public boolean getCaseSensitiveColumnNames() { + return caseSensitiveColumnNames; + } + + /** + * Override the field separator for reading. The default is ','. + * + * @param fieldSeparatorRead the field separator + */ + public void setFieldSeparatorRead(char fieldSeparatorRead) { + this.fieldSeparatorRead = fieldSeparatorRead; + } + + /** + * Get the current field separator for reading. + * + * @return the field separator + */ + public char getFieldSeparatorRead() { + return fieldSeparatorRead; + } + + /** + * Set the line comment character. The default is character code 0 (line + * comments are disabled). + * + * @param lineCommentCharacter the line comment character + */ + public void setLineCommentCharacter(char lineCommentCharacter) { + this.lineComment = lineCommentCharacter; + } + + /** + * Get the line comment character. + * + * @return the line comment character, or 0 if disabled + */ + public char getLineCommentCharacter() { + return lineComment; + } + + /** + * Set the field delimiter. The default is " (a double quote). + * The value 0 means no field delimiter is used. + * + * @param fieldDelimiter the field delimiter + */ + public void setFieldDelimiter(char fieldDelimiter) { + this.fieldDelimiter = fieldDelimiter; + } + + /** + * Get the current field delimiter. 
+ * + * @return the field delimiter + */ + public char getFieldDelimiter() { + return fieldDelimiter; + } + + /** + * Set the escape character. The escape character is used to escape the + * field delimiter. This is needed if the data contains the field delimiter. + * The default escape character is " (a double quote), which is the same as + * the field delimiter. If the field delimiter and the escape character are + * both " (double quote), and the data contains a double quote, then an + * additional double quote is added. Example: + *
    +     * Data: He said "Hello".
    +     * Escape character: "
    +     * Field delimiter: "
    +     * CSV file: "He said ""Hello""."
    +     * 
    + * If the field delimiter is a double quote and the escape character is a + * backslash, then escaping is done similar to Java (however, only the field + * delimiter is escaped). Example: + *
    +     * Data: He said "Hello".
    +     * Escape character: \
    +     * Field delimiter: "
    +     * CSV file: "He said \"Hello\"."
    +     * 
    + * The value 0 means no escape character is used. + * + * @param escapeCharacter the escape character + */ + public void setEscapeCharacter(char escapeCharacter) { + this.escapeCharacter = escapeCharacter; + } + + /** + * Get the current escape character. + * + * @return the escape character + */ + public char getEscapeCharacter() { + return escapeCharacter; + } + + /** + * Set the line separator used for writing. This is usually a line feed (\n + * or \r\n depending on the system settings). The line separator is written + * after each row (including the last row), so this option can include an + * end-of-row marker if needed. + * + * @param lineSeparator the line separator + */ + public void setLineSeparator(String lineSeparator) { + this.lineSeparator = lineSeparator; + } + + /** + * Get the line separator used for writing. + * + * @return the line separator + */ + public String getLineSeparator() { + return lineSeparator; + } + + /** + * Set the value that represents NULL. It is only used for non-delimited + * values. + * + * @param nullString the null + */ + public void setNullString(String nullString) { + this.nullString = nullString; + } + + /** + * Get the current null string. + * + * @return the null string. + */ + public String getNullString() { + return nullString; + } + + /** + * Enable or disable preserving whitespace in unquoted text. + * + * @param value the new value for the setting + */ + public void setPreserveWhitespace(boolean value) { + this.preserveWhitespace = value; + } + + /** + * Whether whitespace in unquoted text is preserved. + * + * @return the current value for the setting + */ + public boolean getPreserveWhitespace() { + return preserveWhitespace; + } + + /** + * Enable or disable writing the column header. + * + * @param value the new value for the setting + */ + public void setWriteColumnHeader(boolean value) { + this.writeColumnHeader = value; + } + + /** + * Whether the column header is written. 
+ * + * @return the current value for the setting + */ + public boolean getWriteColumnHeader() { + return writeColumnHeader; + } + + /** + * INTERNAL. + * Parse and set the CSV options. + * + * @param options the the options + * @return the character set + */ + public String setOptions(String options) { + String charset = null; + String[] keyValuePairs = StringUtils.arraySplit(options, ' ', false); + for (String pair : keyValuePairs) { + if (pair.length() == 0) { + continue; + } + int index = pair.indexOf('='); + String key = StringUtils.trim(pair.substring(0, index), true, true, " "); + String value = pair.substring(index + 1); + char ch = value.length() == 0 ? 0 : value.charAt(0); + if (isParam(key, "escape", "esc", "escapeCharacter")) { + setEscapeCharacter(ch); + } else if (isParam(key, "fieldDelimiter", "fieldDelim")) { + setFieldDelimiter(ch); + } else if (isParam(key, "fieldSeparator", "fieldSep")) { + setFieldSeparatorRead(ch); + setFieldSeparatorWrite(value); + } else if (isParam(key, "lineComment", "lineCommentCharacter")) { + setLineCommentCharacter(ch); + } else if (isParam(key, "lineSeparator", "lineSep")) { + setLineSeparator(value); + } else if (isParam(key, "null", "nullString")) { + setNullString(value); + } else if (isParam(key, "charset", "characterSet")) { + charset = value; + } else if (isParam(key, "preserveWhitespace")) { + setPreserveWhitespace(Utils.parseBoolean(value, false, false)); + } else if (isParam(key, "writeColumnHeader")) { + setWriteColumnHeader(Utils.parseBoolean(value, true, false)); + } else if (isParam(key, "caseSensitiveColumnNames")) { + setCaseSensitiveColumnNames(Utils.parseBoolean(value, false, false)); + } else { + throw DbException.getUnsupportedException(key); + } + } + return charset; + } + + private static boolean isParam(String key, String... 
values) { + for (String v : values) { + if (key.equalsIgnoreCase(v)) { + return true; + } + } + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/DeleteDbFiles.java b/modules/h2/src/main/java/org/h2/tools/DeleteDbFiles.java new file mode 100644 index 0000000000000..17d6a177855b6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/DeleteDbFiles.java @@ -0,0 +1,110 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.sql.SQLException; +import java.util.ArrayList; + +import org.h2.engine.Constants; +import org.h2.store.FileLister; +import org.h2.store.fs.FileUtils; +import org.h2.util.Tool; + +/** + * Deletes all files belonging to a database. + *
    + * The database must be closed before calling this tool. + * @h2.resource + */ +public class DeleteDbFiles extends Tool { + + /** + * Options are case sensitive. Supported options are: + * + * + * + * + * + * + * + * + * + *
    [-help] or [-?]      Print the list of options
    [-dir &lt;dir&gt;]         The directory (default: .)
    [-db &lt;database&gt;]     The database name
    [-quiet]             Do not print progress information
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new DeleteDbFiles().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + String dir = "."; + String db = null; + boolean quiet = false; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-dir")) { + dir = args[++i]; + } else if (arg.equals("-db")) { + db = args[++i]; + } else if (arg.equals("-quiet")) { + quiet = true; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + process(dir, db, quiet); + } + + /** + * Deletes the database files. + * + * @param dir the directory + * @param db the database name (null for all databases) + * @param quiet don't print progress information + */ + public static void execute(String dir, String db, boolean quiet) { + new DeleteDbFiles().process(dir, db, quiet); + } + + /** + * Deletes the database files. 
+ * + * @param dir the directory + * @param db the database name (null for all databases) + * @param quiet don't print progress information + */ + private void process(String dir, String db, boolean quiet) { + ArrayList files = FileLister.getDatabaseFiles(dir, db, true); + if (files.isEmpty() && !quiet) { + printNoDatabaseFilesFound(dir, db); + } + for (String fileName : files) { + process(fileName, quiet); + if (!quiet) { + out.println("Processed: " + fileName); + } + } + } + + private static void process(String fileName, boolean quiet) { + if (FileUtils.isDirectory(fileName)) { + // only delete empty directories + FileUtils.tryDelete(fileName); + } else if (quiet || fileName.endsWith(Constants.SUFFIX_TEMP_FILE) || + fileName.endsWith(Constants.SUFFIX_TRACE_FILE)) { + FileUtils.tryDelete(fileName); + } else { + FileUtils.delete(fileName); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/MultiDimension.java b/modules/h2/src/main/java/org/h2/tools/MultiDimension.java new file mode 100644 index 0000000000000..2c4203b1c63be --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/MultiDimension.java @@ -0,0 +1,336 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import org.h2.util.New; +import org.h2.util.StringUtils; + +/** + * A tool to help an application execute multi-dimensional range queries. + * The algorithm used is database independent, the only requirement + * is that the engine supports a range index (for example b-tree). 
+ */ +public class MultiDimension implements Comparator { + + private static final MultiDimension INSTANCE = new MultiDimension(); + + protected MultiDimension() { + // don't allow construction by normal code + // but allow tests + } + + /** + * Get the singleton. + * + * @return the singleton + */ + public static MultiDimension getInstance() { + return INSTANCE; + } + + /** + * Normalize a value so that it is between the minimum and maximum for the + * given number of dimensions. + * + * @param dimensions the number of dimensions + * @param value the value (must be in the range min..max) + * @param min the minimum value + * @param max the maximum value (must be larger than min) + * @return the normalized value in the range 0..getMaxValue(dimensions) + */ + public int normalize(int dimensions, double value, double min, double max) { + if (value < min || value > max) { + throw new IllegalArgumentException(min + "<" + value + "<" + max); + } + double x = (value - min) / (max - min); + return (int) (x * getMaxValue(dimensions)); + } + + /** + * Get the maximum value for the given dimension count. For two dimensions, + * each value must contain at most 32 bit, for 3: 21 bit, 4: 16 bit, 5: 12 + * bit, 6: 10 bit, 7: 9 bit, 8: 8 bit. + * + * @param dimensions the number of dimensions + * @return the maximum value + */ + public int getMaxValue(int dimensions) { + if (dimensions < 2 || dimensions > 32) { + throw new IllegalArgumentException("" + dimensions); + } + int bitsPerValue = getBitsPerValue(dimensions); + return (int) ((1L << bitsPerValue) - 1); + } + + private static int getBitsPerValue(int dimensions) { + return Math.min(31, 64 / dimensions); + } + + /** + * Convert the multi-dimensional value into a one-dimensional (scalar) + * value. This is done by interleaving the bits of the values. Each values + * must be between 0 (including) and the maximum value for the given number + * of dimensions (getMaxValue, excluding). 
To normalize values to this + * range, use the normalize function. + * + * @param values the multi-dimensional value + * @return the scalar value + */ + public long interleave(int... values) { + int dimensions = values.length; + long max = getMaxValue(dimensions); + int bitsPerValue = getBitsPerValue(dimensions); + long x = 0; + for (int i = 0; i < dimensions; i++) { + long k = values[i]; + if (k < 0 || k > max) { + throw new IllegalArgumentException(0 + "<" + k + "<" + max); + } + for (int b = 0; b < bitsPerValue; b++) { + x |= (k & (1L << b)) << (i + (dimensions - 1) * b); + } + } + return x; + } + + /** + * Convert the two-dimensional value into a one-dimensional (scalar) value. + * This is done by interleaving the bits of the values. + * Each values must be between 0 (including) and the maximum value + * for the given number of dimensions (getMaxValue, excluding). + * To normalize values to this range, use the normalize function. + * + * @param x the value of the first dimension, normalized + * @param y the value of the second dimension, normalized + * @return the scalar value + */ + public long interleave(int x, int y) { + if (x < 0) { + throw new IllegalArgumentException(0 + "<" + x); + } + if (y < 0) { + throw new IllegalArgumentException(0 + "<" + y); + } + long z = 0; + for (int i = 0; i < 32; i++) { + z |= (x & (1L << i)) << i; + z |= (y & (1L << i)) << (i + 1); + } + return z; + } + + /** + * Gets one of the original multi-dimensional values from a scalar value. 
+ * + * @param dimensions the number of dimensions + * @param scalar the scalar value + * @param dim the dimension of the returned value (starting from 0) + * @return the value + */ + public int deinterleave(int dimensions, long scalar, int dim) { + int bitsPerValue = getBitsPerValue(dimensions); + int value = 0; + for (int i = 0; i < bitsPerValue; i++) { + value |= (scalar >> (dim + (dimensions - 1) * i)) & (1L << i); + } + return value; + } + + /** + * Generates an optimized multi-dimensional range query. The query contains + * parameters. It can only be used with the H2 database. + * + * @param table the table name + * @param columns the list of columns + * @param scalarColumn the column name of the computed scalar column + * @return the query + */ + public String generatePreparedQuery(String table, String scalarColumn, + String[] columns) { + StringBuilder buff = new StringBuilder("SELECT D.* FROM "); + buff.append(StringUtils.quoteIdentifier(table)). + append(" D, TABLE(_FROM_ BIGINT=?, _TO_ BIGINT=?) WHERE "). + append(StringUtils.quoteIdentifier(scalarColumn)). + append(" BETWEEN _FROM_ AND _TO_"); + for (String col : columns) { + buff.append(" AND ").append(StringUtils.quoteIdentifier(col)). + append("+1 BETWEEN ?+1 AND ?+1"); + } + return buff.toString(); + } + + /** + * Executes a prepared query that was generated using generatePreparedQuery. 
+ * + * @param prep the prepared statement + * @param min the lower values + * @param max the upper values + * @return the result set + */ + public ResultSet getResult(PreparedStatement prep, int[] min, int[] max) + throws SQLException { + long[][] ranges = getMortonRanges(min, max); + int len = ranges.length; + Long[] from = new Long[len]; + Long[] to = new Long[len]; + for (int i = 0; i < len; i++) { + from[i] = ranges[i][0]; + to[i] = ranges[i][1]; + } + prep.setObject(1, from); + prep.setObject(2, to); + len = min.length; + for (int i = 0, idx = 3; i < len; i++) { + prep.setInt(idx++, min[i]); + prep.setInt(idx++, max[i]); + } + return prep.executeQuery(); + } + + /** + * Gets a list of ranges to be searched for a multi-dimensional range query + * where min <= value <= max. In most cases, the ranges will be larger + * than required in order to combine smaller ranges into one. Usually, about + * double as many points will be included in the resulting range. + * + * @param min the minimum value + * @param max the maximum value + * @return the list of ranges (low, high pairs) + */ + private long[][] getMortonRanges(int[] min, int[] max) { + int len = min.length; + if (max.length != len) { + throw new IllegalArgumentException(len + "=" + max.length); + } + for (int i = 0; i < len; i++) { + if (min[i] > max[i]) { + int temp = min[i]; + min[i] = max[i]; + max[i] = temp; + } + } + int total = getSize(min, max, len); + ArrayList list = New.arrayList(); + addMortonRanges(list, min, max, len, 0); + combineEntries(list, total); + return list.toArray(new long[0][]); + } + + private static int getSize(int[] min, int[] max, int len) { + int size = 1; + for (int i = 0; i < len; i++) { + int diff = max[i] - min[i]; + size *= diff + 1; + } + return size; + } + + /** + * Combine entries if the size of the list is too large. 
+ * + * @param list list of pairs(low, high) + * @param total product of the gap lengths + */ + private void combineEntries(ArrayList list, int total) { + Collections.sort(list, this); + for (int minGap = 10; minGap < total; minGap += minGap / 2) { + for (int i = 0; i < list.size() - 1; i++) { + long[] current = list.get(i); + long[] next = list.get(i + 1); + if (current[1] + minGap >= next[0]) { + current[1] = next[1]; + list.remove(i + 1); + i--; + } + } + int searched = 0; + for (long[] range : list) { + searched += range[1] - range[0] + 1; + } + if (searched > 2 * total || list.size() < 100) { + break; + } + } + } + + @Override + public int compare(long[] a, long[] b) { + return a[0] > b[0] ? 1 : -1; + } + + private void addMortonRanges(ArrayList list, int[] min, int[] max, + int len, int level) { + if (level > 100) { + throw new IllegalArgumentException("" + level); + } + int largest = 0, largestDiff = 0; + long size = 1; + for (int i = 0; i < len; i++) { + int diff = max[i] - min[i]; + if (diff < 0) { + throw new IllegalArgumentException(""+ diff); + } + size *= diff + 1; + if (size < 0) { + throw new IllegalArgumentException("" + size); + } + if (diff > largestDiff) { + largestDiff = diff; + largest = i; + } + } + long low = interleave(min), high = interleave(max); + if (high < low) { + throw new IllegalArgumentException(high + "<" + low); + } + long range = high - low + 1; + if (range == size) { + long[] item = { low, high }; + list.add(item); + } else { + int middle = findMiddle(min[largest], max[largest]); + int temp = max[largest]; + max[largest] = middle; + addMortonRanges(list, min, max, len, level + 1); + max[largest] = temp; + temp = min[largest]; + min[largest] = middle + 1; + addMortonRanges(list, min, max, len, level + 1); + min[largest] = temp; + } + } + + private static int roundUp(int x, int blockSizePowerOf2) { + return (x + blockSizePowerOf2 - 1) & (-blockSizePowerOf2); + } + + private static int findMiddle(int a, int b) { + int diff = b - a 
- 1; + if (diff == 0) { + return a; + } + if (diff == 1) { + return a + 1; + } + int scale = 0; + while ((1 << scale) < diff) { + scale++; + } + scale--; + int m = roundUp(a + 2, 1 << scale) - 1; + if (m <= a || m >= b) { + throw new IllegalArgumentException(a + "<" + m + "<" + b); + } + return m; + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/Recover.java b/modules/h2/src/main/java/org/h2/tools/Recover.java new file mode 100644 index 0000000000000..35e2ea39456b6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Recover.java @@ -0,0 +1,1759 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.BufferedInputStream; +import java.io.BufferedReader; +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.PrintWriter; +import java.io.Reader; +import java.io.SequenceInputStream; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Enumeration; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.Map; +import java.util.Map.Entry; +import java.util.zip.CRC32; +import org.h2.api.ErrorCode; +import org.h2.api.JavaObjectSerializer; +import org.h2.compress.CompressLZF; +import org.h2.engine.Constants; +import org.h2.engine.DbObject; +import org.h2.engine.MetaRecord; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.MVStoreTool; +import org.h2.mvstore.StreamStore; +import org.h2.mvstore.db.TransactionStore; +import 
org.h2.mvstore.db.TransactionStore.TransactionMap; +import org.h2.mvstore.db.ValueDataType; +import org.h2.result.Row; +import org.h2.result.RowFactory; +import org.h2.result.SimpleRow; +import org.h2.security.SHA256; +import org.h2.store.Data; +import org.h2.store.DataHandler; +import org.h2.store.DataReader; +import org.h2.store.FileLister; +import org.h2.store.FileStore; +import org.h2.store.FileStoreInputStream; +import org.h2.store.LobStorageBackend; +import org.h2.store.LobStorageFrontend; +import org.h2.store.LobStorageMap; +import org.h2.store.Page; +import org.h2.store.PageFreeList; +import org.h2.store.PageLog; +import org.h2.store.PageStore; +import org.h2.store.fs.FileUtils; +import org.h2.util.BitField; +import org.h2.util.IOUtils; +import org.h2.util.IntArray; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.util.SmallLRUCache; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; +import org.h2.util.TempFileDeleter; +import org.h2.util.Tool; +import org.h2.util.Utils; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueLob; +import org.h2.value.ValueLobDb; +import org.h2.value.ValueLong; + +/** + * Helps recovering a corrupted database. + * @h2.resource + */ +public class Recover extends Tool implements DataHandler { + + private String databaseName; + private int storageId; + private String storageName; + private int recordLength; + private int valueId; + private boolean trace; + private boolean transactionLog; + private ArrayList<MetaRecord> schema; + private HashSet<Integer> objectIdSet; + private HashMap<Integer, String> tableMap; + private HashMap<String, String> columnTypeMap; + private boolean remove; + + private int pageSize; + private FileStore store; + private int[] parents; + + private Stats stat; + private boolean lobMaps; + + /** + * Statistic data + */ + static class Stats { + + /** + * The empty space in bytes in data leaf pages.
+ */ + long pageDataEmpty; + + /** + * The number of bytes used for data. + */ + long pageDataRows; + + /** + * The number of bytes used for the page headers. + */ + long pageDataHead; + + /** + * The count per page type. + */ + final int[] pageTypeCount = new int[Page.TYPE_STREAM_DATA + 2]; + + /** + * The number of free pages. + */ + int free; + } + + /** + * Options are case sensitive. Supported options are: + * + * + * + * + * + * + * + * + * + * + * + *
 + * <table class="main">
 + * <tr><td>[-help] or [-?]</td>
 + * <td>Print the list of options</td></tr>
 + * <tr><td>[-dir &lt;dir&gt;]</td>
 + * <td>The directory (default: .)</td></tr>
 + * <tr><td>[-db &lt;database&gt;]</td>
 + * <td>The database name (all databases if not set)</td></tr>
 + * <tr><td>[-trace]</td>
 + * <td>Print additional trace information</td></tr>
 + * <tr><td>[-transactionLog]</td>
 + * <td>Print the transaction log</td></tr>
 + * </table>
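The option handling in runTool() below follows the usual H2 Tool convention: matching is case sensitive, value options consume the next argument, and flag options stand alone. A minimal, self-contained sketch of that loop (the ArgSketch helper is hypothetical, not part of the H2 sources):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-alone sketch of the case-sensitive option loop
// used by Recover.runTool(); not part of the H2 sources.
public class ArgSketch {
    static Map<String, String> parse(String... args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; args != null && i < args.length; i++) {
            String arg = args[i];
            if ("-dir".equals(arg) || "-db".equals(arg)) {
                // value options consume the following argument
                opts.put(arg, args[++i]);
            } else if ("-trace".equals(arg) || "-transactionLog".equals(arg)) {
                // flag options stand alone
                opts.put(arg, "true");
            } else {
                throw new IllegalArgumentException("Unsupported option: " + arg);
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        System.out.println(parse("-dir", ".", "-db", "test", "-trace"));
    }
}
```

The real loop differs only in that it assigns to fields and also accepts the undocumented -removePassword flag.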
+ * Encrypted databases need to be decrypted first. + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new Recover().runTool(args); + } + + /** + * Dumps the contents of a database file to a human readable text file. This + * text file can be used to recover most of the data. This tool does not + * open the database and can be used even if the database files are + * corrupted. A database can get corrupted if there is a bug in the database + * engine or file system software, or if an application that does not + * understand the file format writes into the database file, or if there is + * a hardware problem. + * + * @param args the command line arguments + */ + @Override + public void runTool(String... args) throws SQLException { + String dir = "."; + String db = null; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if ("-dir".equals(arg)) { + dir = args[++i]; + } else if ("-db".equals(arg)) { + db = args[++i]; + } else if ("-removePassword".equals(arg)) { + remove = true; + } else if ("-trace".equals(arg)) { + trace = true; + } else if ("-transactionLog".equals(arg)) { + transactionLog = true; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + process(dir, db); + } + + /** + * INTERNAL + */ + public static Reader readClob(String fileName) throws IOException { + return new BufferedReader(new InputStreamReader(readBlob(fileName), + StandardCharsets.UTF_8)); + } + + /** + * INTERNAL + */ + public static InputStream readBlob(String fileName) throws IOException { + return new BufferedInputStream(FileUtils.newInputStream(fileName)); + } + + /** + * INTERNAL + */ + public static Value.ValueBlob readBlobDb(Connection conn, long lobId, + long precision) { + DataHandler h = ((JdbcConnection) conn).getSession().getDataHandler(); + verifyPageStore(h); +
ValueLobDb lob = ValueLobDb.create(Value.BLOB, h, LobStorageFrontend.TABLE_TEMP, + lobId, null, precision); + lob.setRecoveryReference(true); + return lob; + } + + private static void verifyPageStore(DataHandler h) { + if (h.getLobStorage() instanceof LobStorageMap) { + throw DbException.get(ErrorCode.FEATURE_NOT_SUPPORTED_1, + "Restore page store recovery SQL script " + + "can only be restored to a PageStore file"); + } + } + + /** + * INTERNAL + */ + public static Value.ValueClob readClobDb(Connection conn, long lobId, + long precision) { + DataHandler h = ((JdbcConnection) conn).getSession().getDataHandler(); + verifyPageStore(h); + ValueLobDb lob = ValueLobDb.create(Value.CLOB, h, LobStorageFrontend.TABLE_TEMP, + lobId, null, precision); + lob.setRecoveryReference(true); + return lob; + } + + /** + * INTERNAL + */ + public static InputStream readBlobMap(Connection conn, long lobId, + long precision) throws SQLException { + final PreparedStatement prep = conn.prepareStatement( + "SELECT DATA FROM INFORMATION_SCHEMA.LOB_BLOCKS " + + "WHERE LOB_ID = ? AND SEQ = ? AND ? 
> 0"); + prep.setLong(1, lobId); + // precision is currently not really used, + // it is just to improve readability of the script + prep.setLong(3, precision); + return new SequenceInputStream( + new Enumeration() { + + private int seq; + private byte[] data = fetch(); + + private byte[] fetch() { + try { + prep.setInt(2, seq++); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + return rs.getBytes(1); + } + return null; + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + public boolean hasMoreElements() { + return data != null; + } + + @Override + public InputStream nextElement() { + ByteArrayInputStream in = new ByteArrayInputStream(data); + data = fetch(); + return in; + } + } + ); + } + + /** + * INTERNAL + */ + public static Reader readClobMap(Connection conn, long lobId, long precision) + throws Exception { + InputStream in = readBlobMap(conn, lobId, precision); + return new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8)); + } + + private void trace(String message) { + if (trace) { + out.println(message); + } + } + + private void traceError(String message, Throwable t) { + out.println(message + ": " + t.toString()); + if (trace) { + t.printStackTrace(out); + } + } + + /** + * Dumps the contents of a database to a SQL script file. 
+ * + * @param dir the directory + * @param db the database name (null for all databases) + */ + public static void execute(String dir, String db) throws SQLException { + try { + new Recover().process(dir, db); + } catch (DbException e) { + throw DbException.toSQLException(e); + } + } + + private void process(String dir, String db) { + ArrayList<String> list = FileLister.getDatabaseFiles(dir, db, true); + if (list.isEmpty()) { + printNoDatabaseFilesFound(dir, db); + } + for (String fileName : list) { + if (fileName.endsWith(Constants.SUFFIX_PAGE_FILE)) { + dumpPageStore(fileName); + } else if (fileName.endsWith(Constants.SUFFIX_LOB_FILE)) { + dumpLob(fileName, false); + } else if (fileName.endsWith(Constants.SUFFIX_MV_FILE)) { + String f = fileName.substring(0, fileName.length() - + Constants.SUFFIX_PAGE_FILE.length()); + PrintWriter writer; + writer = getWriter(fileName, ".txt"); + MVStoreTool.dump(fileName, writer, true); + MVStoreTool.info(fileName, writer); + writer.close(); + writer = getWriter(f + ".h2.db", ".sql"); + dumpMVStoreFile(writer, fileName); + writer.close(); + } + } + } + + private PrintWriter getWriter(String fileName, String suffix) { + fileName = fileName.substring(0, fileName.length() - 3); + String outputFile = fileName + suffix; + trace("Created file: " + outputFile); + try { + return new PrintWriter(IOUtils.getBufferedWriter( + FileUtils.newOutputStream(outputFile, false))); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + private void writeDataError(PrintWriter writer, String error, byte[] data) { + writer.println("-- ERROR: " + error + " storageId: " + + storageId + " recordLength: " + recordLength + " valueId: " + valueId); + StringBuilder sb = new StringBuilder(); + for (byte aData1 : data) { + int x = aData1 & 0xff; + if (x >= ' ' && x < 128) { + sb.append((char) x); + } else { + sb.append('?'); + } + } + writer.println("-- dump: " + sb.toString()); + sb = new StringBuilder(); + for (byte aData : data) {
+ int x = aData & 0xff; + sb.append(' '); + if (x < 16) { + sb.append('0'); + } + sb.append(Integer.toHexString(x)); + } + writer.println("-- dump: " + sb.toString()); + } + + private void dumpLob(String fileName, boolean lobCompression) { + OutputStream fileOut = null; + FileStore fileStore = null; + long size = 0; + String n = fileName + (lobCompression ? ".comp" : "") + ".txt"; + InputStream in = null; + try { + fileOut = FileUtils.newOutputStream(n, false); + fileStore = FileStore.open(null, fileName, "r"); + fileStore.init(); + in = new FileStoreInputStream(fileStore, this, lobCompression, false); + size = IOUtils.copy(in, fileOut); + } catch (Throwable e) { + // this is usually not a problem, because we try both compressed and + // uncompressed + } finally { + IOUtils.closeSilently(fileOut); + IOUtils.closeSilently(in); + closeSilently(fileStore); + } + if (size == 0) { + try { + FileUtils.delete(n); + } catch (Exception e) { + traceError(n, e); + } + } + } + + private String getSQL(String column, Value v) { + if (v instanceof ValueLob) { + ValueLob lob = (ValueLob) v; + byte[] small = lob.getSmall(); + if (small == null) { + String file = lob.getFileName(); + String type = lob.getType() == Value.BLOB ? 
"BLOB" : "CLOB"; + if (lob.isCompressed()) { + dumpLob(file, true); + file += ".comp"; + } + return "READ_" + type + "('" + file + ".txt')"; + } + } else if (v instanceof ValueLobDb) { + ValueLobDb lob = (ValueLobDb) v; + byte[] small = lob.getSmall(); + if (small == null) { + int type = lob.getType(); + long id = lob.getLobId(); + long precision = lob.getPrecision(); + String m; + String columnType; + if (type == Value.BLOB) { + columnType = "BLOB"; + m = "READ_BLOB"; + } else { + columnType = "CLOB"; + m = "READ_CLOB"; + } + if (lobMaps) { + m += "_MAP"; + } else { + m += "_DB"; + } + columnTypeMap.put(column, columnType); + return m + "(" + id + ", " + precision + ")"; + } + } + return v.getSQL(); + } + + private void setDatabaseName(String name) { + databaseName = name; + } + + private void dumpPageStore(String fileName) { + setDatabaseName(fileName.substring(0, fileName.length() - + Constants.SUFFIX_PAGE_FILE.length())); + PrintWriter writer = null; + stat = new Stats(); + try { + writer = getWriter(fileName, ".sql"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_BLOB FOR \"" + + this.getClass().getName() + ".readBlob\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_CLOB FOR \"" + + this.getClass().getName() + ".readClob\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_BLOB_DB FOR \"" + + this.getClass().getName() + ".readBlobDb\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_CLOB_DB FOR \"" + + this.getClass().getName() + ".readClobDb\";"); + resetSchema(); + store = FileStore.open(null, fileName, remove ? 
"rw" : "r"); + long length = store.length(); + try { + store.init(); + } catch (Exception e) { + writeError(writer, e); + } + Data s = Data.create(this, 128); + seek(0); + store.readFully(s.getBytes(), 0, 128); + s.setPos(48); + pageSize = s.readInt(); + int writeVersion = s.readByte(); + int readVersion = s.readByte(); + writer.println("-- pageSize: " + pageSize + + " writeVersion: " + writeVersion + + " readVersion: " + readVersion); + if (pageSize < PageStore.PAGE_SIZE_MIN || + pageSize > PageStore.PAGE_SIZE_MAX) { + pageSize = Constants.DEFAULT_PAGE_SIZE; + writer.println("-- ERROR: page size; using " + pageSize); + } + long pageCount = length / pageSize; + parents = new int[(int) pageCount]; + s = Data.create(this, pageSize); + for (long i = 3; i < pageCount; i++) { + s.reset(); + seek(i); + store.readFully(s.getBytes(), 0, 32); + s.readByte(); + s.readShortInt(); + parents[(int) i] = s.readInt(); + } + int logKey = 0, logFirstTrunkPage = 0, logFirstDataPage = 0; + s = Data.create(this, pageSize); + for (long i = 1;; i++) { + if (i == 3) { + break; + } + s.reset(); + seek(i); + store.readFully(s.getBytes(), 0, pageSize); + CRC32 crc = new CRC32(); + crc.update(s.getBytes(), 4, pageSize - 4); + int expected = (int) crc.getValue(); + int got = s.readInt(); + long writeCounter = s.readLong(); + int key = s.readInt(); + int firstTrunkPage = s.readInt(); + int firstDataPage = s.readInt(); + if (expected == got) { + logKey = key; + logFirstTrunkPage = firstTrunkPage; + logFirstDataPage = firstDataPage; + } + writer.println("-- head " + i + + ": writeCounter: " + writeCounter + + " log " + key + ":" + firstTrunkPage + "/" + firstDataPage + + " crc " + got + " (" + (expected == got ? 
+ "ok" : ("expected: " + expected)) + ")"); + } + writer.println("-- log " + logKey + ":" + logFirstTrunkPage + + "/" + logFirstDataPage); + + PrintWriter devNull = new PrintWriter(new OutputStream() { + @Override + public void write(int b) { + // ignore + } + }); + dumpPageStore(devNull, pageCount); + stat = new Stats(); + schema.clear(); + objectIdSet = new HashSet<>(); + dumpPageStore(writer, pageCount); + writeSchema(writer); + try { + dumpPageLogStream(writer, logKey, logFirstTrunkPage, + logFirstDataPage, pageCount); + } catch (IOException e) { + // ignore + } + writer.println("---- Statistics ----"); + writer.println("-- page count: " + pageCount + ", free: " + stat.free); + long total = Math.max(1, stat.pageDataRows + + stat.pageDataEmpty + stat.pageDataHead); + writer.println("-- page data bytes: head " + stat.pageDataHead + + ", empty " + stat.pageDataEmpty + + ", rows " + stat.pageDataRows + + " (" + (100 - 100L * stat.pageDataEmpty / total) + "% full)"); + for (int i = 0; i < stat.pageTypeCount.length; i++) { + int count = stat.pageTypeCount[i]; + if (count > 0) { + writer.println("-- " + getPageType(i) + " " + + (100 * count / pageCount) + "%, " + count + " page(s)"); + } + } + writer.close(); + } catch (Throwable e) { + writeError(writer, e); + } finally { + IOUtils.closeSilently(writer); + closeSilently(store); + } + } + + private void dumpMVStoreFile(PrintWriter writer, String fileName) { + writer.println("-- MVStore"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_BLOB FOR \"" + + this.getClass().getName() + ".readBlob\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_CLOB FOR \"" + + this.getClass().getName() + ".readClob\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_BLOB_DB FOR \"" + + this.getClass().getName() + ".readBlobDb\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_CLOB_DB FOR \"" + + this.getClass().getName() + ".readClobDb\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_BLOB_MAP FOR \"" + + 
this.getClass().getName() + ".readBlobMap\";"); + writer.println("CREATE ALIAS IF NOT EXISTS READ_CLOB_MAP FOR \"" + + this.getClass().getName() + ".readClobMap\";"); + resetSchema(); + setDatabaseName(fileName.substring(0, fileName.length() - + Constants.SUFFIX_MV_FILE.length())); + MVStore mv = new MVStore.Builder(). + fileName(fileName).readOnly().open(); + dumpLobMaps(writer, mv); + writer.println("-- Meta"); + dumpMeta(writer, mv); + writer.println("-- Tables"); + TransactionStore store = new TransactionStore(mv); + try { + store.init(); + } catch (Throwable e) { + writeError(writer, e); + } + try { + for (String mapName : mv.getMapNames()) { + if (!mapName.startsWith("table.")) { + continue; + } + String tableId = mapName.substring("table.".length()); + ValueDataType keyType = new ValueDataType( + null, this, null); + ValueDataType valueType = new ValueDataType( + null, this, null); + TransactionMap dataMap = store.begin().openMap( + mapName, keyType, valueType); + Iterator dataIt = dataMap.keyIterator(null); + boolean init = false; + while (dataIt.hasNext()) { + Value rowId = dataIt.next(); + Value[] values = ((ValueArray) dataMap.get(rowId)).getList(); + recordLength = values.length; + if (!init) { + setStorage(Integer.parseInt(tableId)); + // init the column types + for (valueId = 0; valueId < recordLength; valueId++) { + String columnName = storageName + "." + valueId; + getSQL(columnName, values[valueId]); + } + createTemporaryTable(writer); + init = true; + } + StringBuilder buff = new StringBuilder(); + buff.append("INSERT INTO O_").append(tableId) + .append(" VALUES("); + for (valueId = 0; valueId < recordLength; valueId++) { + if (valueId > 0) { + buff.append(", "); + } + String columnName = storageName + "." 
+ valueId; + buff.append(getSQL(columnName, values[valueId])); + } + buff.append(");"); + writer.println(buff.toString()); + if (storageId == 0) { + try { + SimpleRow r = new SimpleRow(values); + MetaRecord meta = new MetaRecord(r); + schema.add(meta); + if (meta.getObjectType() == DbObject.TABLE_OR_VIEW) { + String sql = values[3].getString(); + String name = extractTableOrViewName(sql); + tableMap.put(meta.getId(), name); + } + } catch (Throwable t) { + writeError(writer, t); + } + } + } + } + writeSchema(writer); + writer.println("DROP ALIAS READ_BLOB_MAP;"); + writer.println("DROP ALIAS READ_CLOB_MAP;"); + writer.println("DROP TABLE IF EXISTS INFORMATION_SCHEMA.LOB_BLOCKS;"); + } catch (Throwable e) { + writeError(writer, e); + } finally { + mv.close(); + } + } + + private static void dumpMeta(PrintWriter writer, MVStore mv) { + MVMap meta = mv.getMetaMap(); + for (Entry e : meta.entrySet()) { + writer.println("-- " + e.getKey() + " = " + e.getValue()); + } + } + + private void dumpLobMaps(PrintWriter writer, MVStore mv) { + lobMaps = mv.hasMap("lobData"); + if (!lobMaps) { + return; + } + MVMap lobData = mv.openMap("lobData"); + StreamStore streamStore = new StreamStore(lobData); + MVMap lobMap = mv.openMap("lobMap"); + writer.println("-- LOB"); + writer.println("CREATE TABLE IF NOT EXISTS " + + "INFORMATION_SCHEMA.LOB_BLOCKS(" + + "LOB_ID BIGINT, SEQ INT, DATA BINARY, " + + "PRIMARY KEY(LOB_ID, SEQ));"); + boolean hasErrors = false; + for (Entry e : lobMap.entrySet()) { + long lobId = e.getKey(); + Object[] value = e.getValue(); + byte[] streamStoreId = (byte[]) value[0]; + InputStream in = streamStore.get(streamStoreId); + int len = 8 * 1024; + byte[] block = new byte[len]; + try { + for (int seq = 0;; seq++) { + int l = IOUtils.readFully(in, block, block.length); + String x = StringUtils.convertBytesToHex(block, l); + if (l > 0) { + writer.println("INSERT INTO INFORMATION_SCHEMA.LOB_BLOCKS " + + "VALUES(" + lobId + ", " + seq + ", '" + x + "');"); + } + if 
(l != len) { + break; + } + } + } catch (IOException ex) { + writeError(writer, ex); + hasErrors = true; + } + } + writer.println("-- lobMap.size: " + lobMap.sizeAsLong()); + writer.println("-- lobData.size: " + lobData.sizeAsLong()); + + if (hasErrors) { + writer.println("-- lobMap"); + for (Long k : lobMap.keyList()) { + Object[] value = lobMap.get(k); + byte[] streamStoreId = (byte[]) value[0]; + writer.println("-- " + k + " " + StreamStore.toString(streamStoreId)); + } + writer.println("-- lobData"); + for (Long k : lobData.keyList()) { + writer.println("-- " + k + " len " + lobData.get(k).length); + } + } + } + + private static String getPageType(int type) { + switch (type) { + case 0: + return "free"; + case Page.TYPE_DATA_LEAF: + return "data leaf"; + case Page.TYPE_DATA_NODE: + return "data node"; + case Page.TYPE_DATA_OVERFLOW: + return "data overflow"; + case Page.TYPE_BTREE_LEAF: + return "btree leaf"; + case Page.TYPE_BTREE_NODE: + return "btree node"; + case Page.TYPE_FREE_LIST: + return "free list"; + case Page.TYPE_STREAM_TRUNK: + return "stream trunk"; + case Page.TYPE_STREAM_DATA: + return "stream data"; + } + return "[" + type + "]"; + } + + private void dumpPageStore(PrintWriter writer, long pageCount) { + Data s = Data.create(this, pageSize); + for (long page = 3; page < pageCount; page++) { + s = Data.create(this, pageSize); + seek(page); + store.readFully(s.getBytes(), 0, pageSize); + dumpPage(writer, s, page, pageCount); + } + } + + private void dumpPage(PrintWriter writer, Data s, long page, long pageCount) { + try { + int type = s.readByte(); + switch (type) { + case Page.TYPE_EMPTY: + stat.pageTypeCount[type]++; + return; + } + boolean last = (type & Page.FLAG_LAST) != 0; + type &= ~Page.FLAG_LAST; + if (!PageStore.checksumTest(s.getBytes(), (int) page, pageSize)) { + writeDataError(writer, "checksum mismatch type: " + type, s.getBytes()); + } + s.readShortInt(); + switch (type) { + // type 1 + case Page.TYPE_DATA_LEAF: { + 
stat.pageTypeCount[type]++; + int parentPageId = s.readInt(); + setStorage(s.readVarInt()); + int columnCount = s.readVarInt(); + int entries = s.readShortInt(); + writer.println("-- page " + page + ": data leaf " + + (last ? "(last) " : "") + "parent: " + parentPageId + + " table: " + storageId + " entries: " + entries + + " columns: " + columnCount); + dumpPageDataLeaf(writer, s, last, page, columnCount, entries); + break; + } + // type 2 + case Page.TYPE_DATA_NODE: { + stat.pageTypeCount[type]++; + int parentPageId = s.readInt(); + setStorage(s.readVarInt()); + int rowCount = s.readInt(); + int entries = s.readShortInt(); + writer.println("-- page " + page + ": data node " + + (last ? "(last) " : "") + "parent: " + parentPageId + + " table: " + storageId + " entries: " + entries + + " rowCount: " + rowCount); + dumpPageDataNode(writer, s, page, entries); + break; + } + // type 3 + case Page.TYPE_DATA_OVERFLOW: + stat.pageTypeCount[type]++; + writer.println("-- page " + page + ": data overflow " + + (last ? "(last) " : "")); + break; + // type 4 + case Page.TYPE_BTREE_LEAF: { + stat.pageTypeCount[type]++; + int parentPageId = s.readInt(); + setStorage(s.readVarInt()); + int entries = s.readShortInt(); + writer.println("-- page " + page + ": b-tree leaf " + + (last ? "(last) " : "") + "parent: " + parentPageId + + " index: " + storageId + " entries: " + entries); + if (trace) { + dumpPageBtreeLeaf(writer, s, entries, !last); + } + break; + } + // type 5 + case Page.TYPE_BTREE_NODE: + stat.pageTypeCount[type]++; + int parentPageId = s.readInt(); + setStorage(s.readVarInt()); + writer.println("-- page " + page + ": b-tree node " + + (last ? "(last) " : "") + "parent: " + parentPageId + + " index: " + storageId); + dumpPageBtreeNode(writer, s, page, !last); + break; + // type 6 + case Page.TYPE_FREE_LIST: + stat.pageTypeCount[type]++; + writer.println("-- page " + page + ": free list " + (last ? 
"(last)" : "")); + stat.free += dumpPageFreeList(writer, s, page, pageCount); + break; + // type 7 + case Page.TYPE_STREAM_TRUNK: + stat.pageTypeCount[type]++; + writer.println("-- page " + page + ": log trunk"); + break; + // type 8 + case Page.TYPE_STREAM_DATA: + stat.pageTypeCount[type]++; + writer.println("-- page " + page + ": log data"); + break; + default: + writer.println("-- ERROR page " + page + " unknown type " + type); + break; + } + } catch (Exception e) { + writeError(writer, e); + } + } + + private void dumpPageLogStream(PrintWriter writer, int logKey, + int logFirstTrunkPage, int logFirstDataPage, long pageCount) + throws IOException { + Data s = Data.create(this, pageSize); + DataReader in = new DataReader( + new PageInputStream(writer, this, store, logKey, + logFirstTrunkPage, logFirstDataPage, pageSize) + ); + writer.println("---- Transaction log ----"); + CompressLZF compress = new CompressLZF(); + while (true) { + int x = in.readByte(); + if (x < 0) { + break; + } + if (x == PageLog.NOOP) { + // ignore + } else if (x == PageLog.UNDO) { + int pageId = in.readVarInt(); + int size = in.readVarInt(); + byte[] data = new byte[pageSize]; + if (size == 0) { + in.readFully(data, pageSize); + } else if (size == 1) { + // empty + } else { + byte[] compressBuffer = new byte[size]; + in.readFully(compressBuffer, size); + try { + compress.expand(compressBuffer, 0, size, data, 0, pageSize); + } catch (ArrayIndexOutOfBoundsException e) { + throw DbException.convertToIOException(e); + } + } + String typeName = ""; + int type = data[0]; + boolean last = (type & Page.FLAG_LAST) != 0; + type &= ~Page.FLAG_LAST; + switch (type) { + case Page.TYPE_EMPTY: + typeName = "empty"; + break; + case Page.TYPE_DATA_LEAF: + typeName = "data leaf " + (last ? "(last)" : ""); + break; + case Page.TYPE_DATA_NODE: + typeName = "data node " + (last ? "(last)" : ""); + break; + case Page.TYPE_DATA_OVERFLOW: + typeName = "data overflow " + (last ? 
"(last)" : ""); + break; + case Page.TYPE_BTREE_LEAF: + typeName = "b-tree leaf " + (last ? "(last)" : ""); + break; + case Page.TYPE_BTREE_NODE: + typeName = "b-tree node " + (last ? "(last)" : ""); + break; + case Page.TYPE_FREE_LIST: + typeName = "free list " + (last ? "(last)" : ""); + break; + case Page.TYPE_STREAM_TRUNK: + typeName = "log trunk"; + break; + case Page.TYPE_STREAM_DATA: + typeName = "log data"; + break; + default: + typeName = "ERROR: unknown type " + type; + break; + } + writer.println("-- undo page " + pageId + " " + typeName); + if (trace) { + Data d = Data.create(null, data); + dumpPage(writer, d, pageId, pageCount); + } + } else if (x == PageLog.ADD) { + int sessionId = in.readVarInt(); + setStorage(in.readVarInt()); + Row row = PageLog.readRow(RowFactory.DEFAULT, in, s); + writer.println("-- session " + sessionId + + " table " + storageId + + " + " + row.toString()); + if (transactionLog) { + if (storageId == 0 && row.getColumnCount() >= 4) { + int tableId = (int) row.getKey(); + String sql = row.getValue(3).getString(); + String name = extractTableOrViewName(sql); + if (row.getValue(2).getInt() == DbObject.TABLE_OR_VIEW) { + tableMap.put(tableId, name); + } + writer.println(sql + ";"); + } else { + String tableName = tableMap.get(storageId); + if (tableName != null) { + StatementBuilder buff = new StatementBuilder(); + buff.append("INSERT INTO ").append(tableName). 
+ append(" VALUES("); + for (int i = 0; i < row.getColumnCount(); i++) { + buff.appendExceptFirst(", "); + buff.append(row.getValue(i).getSQL()); + } + buff.append(");"); + writer.println(buff.toString()); + } + } + } + } else if (x == PageLog.REMOVE) { + int sessionId = in.readVarInt(); + setStorage(in.readVarInt()); + long key = in.readVarLong(); + writer.println("-- session " + sessionId + + " table " + storageId + + " - " + key); + if (transactionLog) { + if (storageId == 0) { + int tableId = (int) key; + String tableName = tableMap.get(tableId); + if (tableName != null) { + writer.println("DROP TABLE IF EXISTS " + tableName + ";"); + } + } else { + String tableName = tableMap.get(storageId); + if (tableName != null) { + String sql = "DELETE FROM " + tableName + + " WHERE _ROWID_ = " + key + ";"; + writer.println(sql); + } + } + } + } else if (x == PageLog.TRUNCATE) { + int sessionId = in.readVarInt(); + setStorage(in.readVarInt()); + writer.println("-- session " + sessionId + + " table " + storageId + + " truncate"); + if (transactionLog) { + writer.println("TRUNCATE TABLE " + storageId); + } + } else if (x == PageLog.COMMIT) { + int sessionId = in.readVarInt(); + writer.println("-- commit " + sessionId); + } else if (x == PageLog.ROLLBACK) { + int sessionId = in.readVarInt(); + writer.println("-- rollback " + sessionId); + } else if (x == PageLog.PREPARE_COMMIT) { + int sessionId = in.readVarInt(); + String transaction = in.readString(); + writer.println("-- prepare commit " + sessionId + " " + transaction); + } else if (x == PageLog.NOOP) { + // nothing to do + } else if (x == PageLog.CHECKPOINT) { + writer.println("-- checkpoint"); + } else if (x == PageLog.FREE_LOG) { + int size = in.readVarInt(); + StringBuilder buff = new StringBuilder("-- free"); + for (int i = 0; i < size; i++) { + buff.append(' ').append(in.readVarInt()); + } + writer.println(buff); + } else { + writer.println("-- ERROR: unknown operation " + x); + break; + } + } + } + + private 
String setStorage(int storageId) { + this.storageId = storageId; + this.storageName = "O_" + String.valueOf(storageId).replace('-', 'M'); + return storageName; + } + + /** + * An input stream that reads the data from a page store. + */ + static class PageInputStream extends InputStream { + + private final PrintWriter writer; + private final FileStore store; + private final Data page; + private final int pageSize; + private long trunkPage; + private long nextTrunkPage; + private long dataPage; + private final IntArray dataPages = new IntArray(); + private boolean endOfFile; + private int remaining; + private int logKey; + + public PageInputStream(PrintWriter writer, DataHandler handler, + FileStore store, int logKey, long firstTrunkPage, + long firstDataPage, int pageSize) { + this.writer = writer; + this.store = store; + this.pageSize = pageSize; + this.logKey = logKey - 1; + this.nextTrunkPage = firstTrunkPage; + this.dataPage = firstDataPage; + page = Data.create(handler, pageSize); + } + + @Override + public int read() { + byte[] b = { 0 }; + int len = read(b); + return len < 0 ? -1 : (b[0] & 255); + } + + @Override + public int read(byte[] b) { + return read(b, 0, b.length); + } + + @Override + public int read(byte[] b, int off, int len) { + if (len == 0) { + return 0; + } + int read = 0; + while (len > 0) { + int r = readBlock(b, off, len); + if (r < 0) { + break; + } + read += r; + off += r; + len -= r; + } + return read == 0 ? 
-1 : read; + } + + private int readBlock(byte[] buff, int off, int len) { + fillBuffer(); + if (endOfFile) { + return -1; + } + int l = Math.min(remaining, len); + page.read(buff, off, l); + remaining -= l; + return l; + } + + private void fillBuffer() { + if (remaining > 0 || endOfFile) { + return; + } + while (dataPages.size() == 0) { + if (nextTrunkPage == 0) { + endOfFile = true; + return; + } + trunkPage = nextTrunkPage; + store.seek(trunkPage * pageSize); + store.readFully(page.getBytes(), 0, pageSize); + page.reset(); + if (!PageStore.checksumTest(page.getBytes(), (int) trunkPage, pageSize)) { + writer.println("-- ERROR: checksum mismatch page: " +trunkPage); + endOfFile = true; + return; + } + int t = page.readByte(); + page.readShortInt(); + if (t != Page.TYPE_STREAM_TRUNK) { + writer.println("-- log eof " + trunkPage + " type: " + t + + " expected type: " + Page.TYPE_STREAM_TRUNK); + endOfFile = true; + return; + } + page.readInt(); + int key = page.readInt(); + logKey++; + if (key != logKey) { + writer.println("-- log eof " + trunkPage + + " type: " + t + " expected key: " + logKey + " got: " + key); + } + nextTrunkPage = page.readInt(); + writer.println("-- log " + key + ":" + trunkPage + + " next: " + nextTrunkPage); + int pageCount = page.readShortInt(); + for (int i = 0; i < pageCount; i++) { + int d = page.readInt(); + if (dataPage != 0) { + if (d == dataPage) { + dataPage = 0; + } else { + // ignore the pages before the starting page + continue; + } + } + dataPages.add(d); + } + } + if (dataPages.size() > 0) { + page.reset(); + long nextPage = dataPages.get(0); + dataPages.remove(0); + store.seek(nextPage * pageSize); + store.readFully(page.getBytes(), 0, pageSize); + page.reset(); + int t = page.readByte(); + if (t != 0 && !PageStore.checksumTest(page.getBytes(), + (int) nextPage, pageSize)) { + writer.println("-- ERROR: checksum mismatch page: " +nextPage); + endOfFile = true; + return; + } + page.readShortInt(); + int p = page.readInt(); + int k 
= page.readInt(); + writer.println("-- log " + k + ":" + trunkPage + "/" + nextPage); + if (t != Page.TYPE_STREAM_DATA) { + writer.println("-- log eof " +nextPage+ " type: " + t + " parent: " + p + + " expected type: " + Page.TYPE_STREAM_DATA); + endOfFile = true; + return; + } else if (k != logKey) { + writer.println("-- log eof " +nextPage+ " type: " + t + " parent: " + p + + " expected key: " + logKey + " got: " + k); + endOfFile = true; + return; + } + remaining = pageSize - page.length(); + } + } + } + + private void dumpPageBtreeNode(PrintWriter writer, Data s, long pageId, + boolean positionOnly) { + int rowCount = s.readInt(); + int entryCount = s.readShortInt(); + int[] children = new int[entryCount + 1]; + int[] offsets = new int[entryCount]; + children[entryCount] = s.readInt(); + checkParent(writer, pageId, children, entryCount); + int empty = Integer.MAX_VALUE; + for (int i = 0; i < entryCount; i++) { + children[i] = s.readInt(); + checkParent(writer, pageId, children, i); + int off = s.readShortInt(); + empty = Math.min(off, empty); + offsets[i] = off; + } + empty = empty - s.length(); + if (!trace) { + return; + } + writer.println("-- empty: " + empty); + for (int i = 0; i < entryCount; i++) { + int off = offsets[i]; + s.setPos(off); + long key = s.readVarLong(); + Value data; + if (positionOnly) { + data = ValueLong.get(key); + } else { + try { + data = s.readValue(); + } catch (Throwable e) { + writeDataError(writer, "exception " + e, s.getBytes()); + continue; + } + } + writer.println("-- [" + i + "] child: " + children[i] + + " key: " + key + " data: " + data); + } + writer.println("-- [" + entryCount + "] child: " + + children[entryCount] + " rowCount: " + rowCount); + } + + private int dumpPageFreeList(PrintWriter writer, Data s, long pageId, + long pageCount) { + int pagesAddressed = PageFreeList.getPagesAddressed(pageSize); + BitField used = new BitField(); + for (int i = 0; i < pagesAddressed; i += 8) { + int x = s.readByte() & 255; + for 
(int j = 0; j < 8; j++) { + if ((x & (1 << j)) != 0) { + used.set(i + j); + } + } + } + int free = 0; + for (long i = 0, j = pageId; i < pagesAddressed && j < pageCount; i++, j++) { + if (i == 0 || j % 100 == 0) { + if (i > 0) { + writer.println(); + } + writer.print("-- " + j + " "); + } else if (j % 20 == 0) { + writer.print(" - "); + } else if (j % 10 == 0) { + writer.print(' '); + } + writer.print(used.get((int) i) ? '1' : '0'); + if (!used.get((int) i)) { + free++; + } + } + writer.println(); + return free; + } + + private void dumpPageBtreeLeaf(PrintWriter writer, Data s, int entryCount, + boolean positionOnly) { + int[] offsets = new int[entryCount]; + int empty = Integer.MAX_VALUE; + for (int i = 0; i < entryCount; i++) { + int off = s.readShortInt(); + empty = Math.min(off, empty); + offsets[i] = off; + } + empty = empty - s.length(); + writer.println("-- empty: " + empty); + for (int i = 0; i < entryCount; i++) { + int off = offsets[i]; + s.setPos(off); + long key = s.readVarLong(); + Value data; + if (positionOnly) { + data = ValueLong.get(key); + } else { + try { + data = s.readValue(); + } catch (Throwable e) { + writeDataError(writer, "exception " + e, s.getBytes()); + continue; + } + } + writer.println("-- [" + i + "] key: " + key + " data: " + data); + } + } + + private void checkParent(PrintWriter writer, long pageId, int[] children, + int index) { + int child = children[index]; + if (child < 0 || child >= parents.length) { + writer.println("-- ERROR [" + pageId + "] child[" + + index + "]: " + child + " >= page count: " + parents.length); + } else if (parents[child] != pageId) { + writer.println("-- ERROR [" + pageId + "] child[" + + index + "]: " + child + " parent: " + parents[child]); + } + } + + private void dumpPageDataNode(PrintWriter writer, Data s, long pageId, + int entryCount) { + int[] children = new int[entryCount + 1]; + long[] keys = new long[entryCount]; + children[entryCount] = s.readInt(); + checkParent(writer, pageId, children, 
entryCount); + for (int i = 0; i < entryCount; i++) { + children[i] = s.readInt(); + checkParent(writer, pageId, children, i); + keys[i] = s.readVarLong(); + } + if (!trace) { + return; + } + for (int i = 0; i < entryCount; i++) { + writer.println("-- [" + i + "] child: " + children[i] + " key: " + keys[i]); + } + writer.println("-- [" + entryCount + "] child: " + children[entryCount]); + } + + private void dumpPageDataLeaf(PrintWriter writer, Data s, boolean last, + long pageId, int columnCount, int entryCount) { + long[] keys = new long[entryCount]; + int[] offsets = new int[entryCount]; + long next = 0; + if (!last) { + next = s.readInt(); + writer.println("-- next: " + next); + } + int empty = pageSize; + for (int i = 0; i < entryCount; i++) { + keys[i] = s.readVarLong(); + int off = s.readShortInt(); + empty = Math.min(off, empty); + offsets[i] = off; + } + stat.pageDataRows += pageSize - empty; + empty = empty - s.length(); + stat.pageDataHead += s.length(); + stat.pageDataEmpty += empty; + if (trace) { + writer.println("-- empty: " + empty); + } + if (!last) { + Data s2 = Data.create(this, pageSize); + s.setPos(pageSize); + long parent = pageId; + while (true) { + checkParent(writer, parent, new int[]{(int) next}, 0); + parent = next; + seek(next); + store.readFully(s2.getBytes(), 0, pageSize); + s2.reset(); + int type = s2.readByte(); + s2.readShortInt(); + s2.readInt(); + if (type == (Page.TYPE_DATA_OVERFLOW | Page.FLAG_LAST)) { + int size = s2.readShortInt(); + writer.println("-- chain: " + next + + " type: " + type + " size: " + size); + s.checkCapacity(size); + s.write(s2.getBytes(), s2.length(), size); + break; + } else if (type == Page.TYPE_DATA_OVERFLOW) { + next = s2.readInt(); + if (next == 0) { + writeDataError(writer, "next:0", s2.getBytes()); + break; + } + int size = pageSize - s2.length(); + writer.println("-- chain: " + next + " type: " + type + + " size: " + size + " next: " + next); + s.checkCapacity(size); + s.write(s2.getBytes(), 
s2.length(), size); + } else { + writeDataError(writer, "type: " + type, s2.getBytes()); + break; + } + } + } + for (int i = 0; i < entryCount; i++) { + long key = keys[i]; + int off = offsets[i]; + if (trace) { + writer.println("-- [" + i + "] storage: " + storageId + + " key: " + key + " off: " + off); + } + s.setPos(off); + Value[] data = createRecord(writer, s, columnCount); + if (data != null) { + createTemporaryTable(writer); + writeRow(writer, s, data); + if (remove && storageId == 0) { + String sql = data[3].getString(); + if (sql.startsWith("CREATE USER ")) { + int saltIndex = Utils.indexOf(s.getBytes(), "SALT ".getBytes(), off); + if (saltIndex >= 0) { + String userName = sql.substring("CREATE USER ".length(), + sql.indexOf("SALT ") - 1); + if (userName.startsWith("IF NOT EXISTS ")) { + userName = userName.substring("IF NOT EXISTS ".length()); + } + if (userName.startsWith("\"")) { + // TODO doesn't work for all cases ("" inside + // user name) + userName = userName.substring(1, userName.length() - 1); + } + byte[] userPasswordHash = SHA256.getKeyPasswordHash( + userName, "".toCharArray()); + byte[] salt = MathUtils.secureRandomBytes(Constants.SALT_LEN); + byte[] passwordHash = SHA256.getHashWithSalt( + userPasswordHash, salt); + StringBuilder buff = new StringBuilder(); + buff.append("SALT '"). + append(StringUtils.convertBytesToHex(salt)). + append("' HASH '"). + append(StringUtils.convertBytesToHex(passwordHash)). 
+ append('\''); + byte[] replacement = buff.toString().getBytes(); + System.arraycopy(replacement, 0, s.getBytes(), + saltIndex, replacement.length); + seek(pageId); + store.write(s.getBytes(), 0, pageSize); + if (trace) { + out.println("User: " + userName); + } + remove = false; + } + } + } + } + } + } + + private void seek(long page) { + // page is long to avoid integer overflow + store.seek(page * pageSize); + } + + private Value[] createRecord(PrintWriter writer, Data s, int columnCount) { + recordLength = columnCount; + if (columnCount <= 0) { + writeDataError(writer, "columnCount<0", s.getBytes()); + return null; + } + Value[] data; + try { + data = new Value[columnCount]; + } catch (OutOfMemoryError e) { + writeDataError(writer, "out of memory", s.getBytes()); + return null; + } + return data; + } + + private void writeRow(PrintWriter writer, Data s, Value[] data) { + StringBuilder sb = new StringBuilder(); + sb.append("INSERT INTO ").append(storageName).append(" VALUES("); + for (valueId = 0; valueId < recordLength; valueId++) { + try { + Value v = s.readValue(); + data[valueId] = v; + if (valueId > 0) { + sb.append(", "); + } + String columnName = storageName + "." 
+ valueId; + sb.append(getSQL(columnName, v)); + } catch (Exception e) { + writeDataError(writer, "exception " + e, s.getBytes()); + } catch (OutOfMemoryError e) { + writeDataError(writer, "out of memory", s.getBytes()); + } + } + sb.append(");"); + writer.println(sb.toString()); + if (storageId == 0) { + try { + SimpleRow r = new SimpleRow(data); + MetaRecord meta = new MetaRecord(r); + schema.add(meta); + if (meta.getObjectType() == DbObject.TABLE_OR_VIEW) { + String sql = data[3].getString(); + String name = extractTableOrViewName(sql); + tableMap.put(meta.getId(), name); + } + } catch (Throwable t) { + writeError(writer, t); + } + } + } + + private void resetSchema() { + schema = New.arrayList(); + objectIdSet = new HashSet<>(); + tableMap = new HashMap<>(); + columnTypeMap = new HashMap<>(); + } + + private void writeSchema(PrintWriter writer) { + writer.println("---- Schema ----"); + Collections.sort(schema); + for (MetaRecord m : schema) { + if (!isSchemaObjectTypeDelayed(m)) { + // create, but not referential integrity constraints and so on + // because they could fail on duplicate keys + String sql = m.getSQL(); + writer.println(sql + ";"); + } + } + // first, copy the lob storage (if there is any) + // must occur before copying data, + // otherwise the lob storage may be overwritten + boolean deleteLobs = false; + for (Map.Entry entry : tableMap.entrySet()) { + Integer objectId = entry.getKey(); + String name = entry.getValue(); + if (objectIdSet.contains(objectId)) { + if (name.startsWith("INFORMATION_SCHEMA.LOB")) { + setStorage(objectId); + writer.println("DELETE FROM " + name + ";"); + writer.println("INSERT INTO " + name + " SELECT * FROM " + storageName + ";"); + if (name.startsWith("INFORMATION_SCHEMA.LOBS")) { + writer.println("UPDATE " + name + " SET TABLE = " + + LobStorageFrontend.TABLE_TEMP + ";"); + deleteLobs = true; + } + } + } + } + for (Map.Entry entry : tableMap.entrySet()) { + Integer objectId = entry.getKey(); + String name = 
entry.getValue(); + if (objectIdSet.contains(objectId)) { + setStorage(objectId); + if (name.startsWith("INFORMATION_SCHEMA.LOB")) { + continue; + } + writer.println("INSERT INTO " + name + " SELECT * FROM " + storageName + ";"); + } + } + for (Integer objectId : objectIdSet) { + setStorage(objectId); + writer.println("DROP TABLE " + storageName + ";"); + } + writer.println("DROP ALIAS READ_BLOB;"); + writer.println("DROP ALIAS READ_CLOB;"); + writer.println("DROP ALIAS READ_BLOB_DB;"); + writer.println("DROP ALIAS READ_CLOB_DB;"); + if (deleteLobs) { + writer.println("DELETE FROM INFORMATION_SCHEMA.LOBS WHERE TABLE = " + + LobStorageFrontend.TABLE_TEMP + ";"); + } + for (MetaRecord m : schema) { + if (isSchemaObjectTypeDelayed(m)) { + String sql = m.getSQL(); + writer.println(sql + ";"); + } + } + } + + private static boolean isSchemaObjectTypeDelayed(MetaRecord m) { + switch (m.getObjectType()) { + case DbObject.INDEX: + case DbObject.CONSTRAINT: + case DbObject.TRIGGER: + return true; + } + return false; + } + + private void createTemporaryTable(PrintWriter writer) { + if (!objectIdSet.contains(storageId)) { + objectIdSet.add(storageId); + StatementBuilder buff = new StatementBuilder("CREATE TABLE "); + buff.append(storageName).append('('); + for (int i = 0; i < recordLength; i++) { + buff.appendExceptFirst(", "); + buff.append('C').append(i).append(' '); + String columnType = columnTypeMap.get(storageName + "." 
+ i); + if (columnType == null) { + buff.append("VARCHAR"); + } else { + buff.append(columnType); + } + } + writer.println(buff.append(");").toString()); + writer.flush(); + } + } + + private static String extractTableOrViewName(String sql) { + int indexTable = sql.indexOf(" TABLE "); + int indexView = sql.indexOf(" VIEW "); + if (indexTable > 0 && indexView > 0) { + if (indexTable < indexView) { + indexView = -1; + } else { + indexTable = -1; + } + } + if (indexView > 0) { + sql = sql.substring(indexView + " VIEW ".length()); + } else if (indexTable > 0) { + sql = sql.substring(indexTable + " TABLE ".length()); + } else { + return "UNKNOWN"; + } + if (sql.startsWith("IF NOT EXISTS ")) { + sql = sql.substring("IF NOT EXISTS ".length()); + } + boolean ignore = false; + // sql is modified in the loop + for (int i = 0; i < sql.length(); i++) { + char ch = sql.charAt(i); + if (ch == '\"') { + ignore = !ignore; + } else if (!ignore && (ch <= ' ' || ch == '(')) { + sql = sql.substring(0, i); + return sql; + } + } + return "UNKNOWN"; + } + + + private static void closeSilently(FileStore fileStore) { + if (fileStore != null) { + fileStore.closeSilently(); + } + } + + private void writeError(PrintWriter writer, Throwable e) { + if (writer != null) { + writer.println("// error: " + e); + } + traceError("Error", e); + } + + /** + * INTERNAL + */ + @Override + public String getDatabasePath() { + return databaseName; + } + + /** + * INTERNAL + */ + @Override + public FileStore openFile(String name, String mode, boolean mustExist) { + return FileStore.open(this, name, "rw"); + } + + /** + * INTERNAL + */ + @Override + public void checkPowerOff() { + // nothing to do + } + + /** + * INTERNAL + */ + @Override + public void checkWritingAllowed() { + // nothing to do + } + + /** + * INTERNAL + */ + @Override + public int getMaxLengthInplaceLob() { + throw DbException.throwInternalError(); + } + + /** + * INTERNAL + */ + @Override + public String getLobCompressionAlgorithm(int type) 
{ + return null; + } + + /** + * INTERNAL + */ + @Override + public Object getLobSyncObject() { + return this; + } + + /** + * INTERNAL + */ + @Override + public SmallLRUCache getLobFileListCache() { + return null; + } + + /** + * INTERNAL + */ + @Override + public TempFileDeleter getTempFileDeleter() { + return TempFileDeleter.getInstance(); + } + + /** + * INTERNAL + */ + @Override + public LobStorageBackend getLobStorage() { + return null; + } + + /** + * INTERNAL + */ + @Override + public int readLob(long lobId, byte[] hmac, long offset, byte[] buff, + int off, int length) { + throw DbException.throwInternalError(); + } + + @Override + public JavaObjectSerializer getJavaObjectSerializer() { + return null; + } + + @Override + public CompareMode getCompareMode() { + return CompareMode.getInstance(null, 0); + } +} diff --git a/modules/h2/src/main/java/org/h2/tools/Restore.java b/modules/h2/src/main/java/org/h2/tools/Restore.java new file mode 100644 index 0000000000000..d5195ab83e08f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Restore.java @@ -0,0 +1,200 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.sql.SQLException; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.Tool; + +/** + * Restores a H2 database by extracting the database files from a .zip file. + * @h2.resource + */ +public class Restore extends Tool { + + /** + * Options are case sensitive. Supported options are: + * + * + * + * + * + * + * + * + * + * + * + *
+     * <tr><td>[-help] or [-?]</td>
+     * <td>Print the list of options</td></tr>
+     * <tr><td>[-file &lt;filename&gt;]</td>
+     * <td>The source file name (default: backup.zip)</td></tr>
+     * <tr><td>[-dir &lt;dir&gt;]</td>
+     * <td>The target directory (default: .)</td></tr>
+     * <tr><td>[-db &lt;database&gt;]</td>
+     * <td>The target database name (as stored if not set)</td></tr>
+     * <tr><td>[-quiet]</td>
+     * <td>Do not print progress information</td></tr>
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new Restore().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + String zipFileName = "backup.zip"; + String dir = "."; + String db = null; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-dir")) { + dir = args[++i]; + } else if (arg.equals("-file")) { + zipFileName = args[++i]; + } else if (arg.equals("-db")) { + db = args[++i]; + } else if (arg.equals("-quiet")) { + // ignore + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + execute(zipFileName, dir, db); + } + + private static String getOriginalDbName(String fileName, String db) + throws IOException { + + try (InputStream in = FileUtils.newInputStream(fileName)) { + ZipInputStream zipIn = new ZipInputStream(in); + String originalDbName = null; + boolean multiple = false; + while (true) { + ZipEntry entry = zipIn.getNextEntry(); + if (entry == null) { + break; + } + String entryName = entry.getName(); + zipIn.closeEntry(); + String name = getDatabaseNameFromFileName(entryName); + if (name != null) { + if (db.equals(name)) { + originalDbName = name; + // we found the correct database + break; + } else if (originalDbName == null) { + originalDbName = name; + // we found a database, but maybe another one + } else { + // we have found multiple databases, but not the correct + // one + multiple = true; + } + } + } + zipIn.close(); + if (multiple && !db.equals(originalDbName)) { + throw new IOException("Multiple databases found, but not " + db); + } + return originalDbName; + } + } + + /** + * Extract the name of the database from a given file name. + * Only files ending with .h2.db are considered, all others return null. 
+ * + * @param fileName the file name (without directory) + * @return the database name or null + */ + private static String getDatabaseNameFromFileName(String fileName) { + if (fileName.endsWith(Constants.SUFFIX_PAGE_FILE)) { + return fileName.substring(0, + fileName.length() - Constants.SUFFIX_PAGE_FILE.length()); + } + if (fileName.endsWith(Constants.SUFFIX_MV_FILE)) { + return fileName.substring(0, + fileName.length() - Constants.SUFFIX_MV_FILE.length()); + } + return null; + } + + /** + * Restores database files. + * + * @param zipFileName the name of the backup file + * @param directory the directory name + * @param db the database name (null for all databases) + * @throws DbException if there is an IOException + */ + public static void execute(String zipFileName, String directory, String db) { + InputStream in = null; + try { + if (!FileUtils.exists(zipFileName)) { + throw new IOException("File not found: " + zipFileName); + } + String originalDbName = null; + int originalDbLen = 0; + if (db != null) { + originalDbName = getOriginalDbName(zipFileName, db); + if (originalDbName == null) { + throw new IOException("No database named " + db + " found"); + } + if (originalDbName.startsWith(SysProperties.FILE_SEPARATOR)) { + originalDbName = originalDbName.substring(1); + } + originalDbLen = originalDbName.length(); + } + in = FileUtils.newInputStream(zipFileName); + try (ZipInputStream zipIn = new ZipInputStream(in)) { + while (true) { + ZipEntry entry = zipIn.getNextEntry(); + if (entry == null) { + break; + } + String fileName = entry.getName(); + // restoring windows backups on linux and vice versa + fileName = fileName.replace('\\', SysProperties.FILE_SEPARATOR.charAt(0)); + fileName = fileName.replace('/', SysProperties.FILE_SEPARATOR.charAt(0)); + if (fileName.startsWith(SysProperties.FILE_SEPARATOR)) { + fileName = fileName.substring(1); + } + boolean copy = false; + if (db == null) { + copy = true; + } else if (fileName.startsWith(originalDbName + ".")) { 
+ fileName = db + fileName.substring(originalDbLen); + copy = true; + } + if (copy) { + OutputStream o = null; + try { + o = FileUtils.newOutputStream( + directory + SysProperties.FILE_SEPARATOR + fileName, false); + IOUtils.copy(zipIn, o); + o.close(); + } finally { + IOUtils.closeSilently(o); + } + } + zipIn.closeEntry(); + } + zipIn.closeEntry(); + } + } catch (IOException e) { + throw DbException.convertIOException(e, zipFileName); + } finally { + IOUtils.closeSilently(in); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/RunScript.java b/modules/h2/src/main/java/org/h2/tools/RunScript.java new file mode 100644 index 0000000000000..4998ecd56c03e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/RunScript.java @@ -0,0 +1,336 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.BufferedInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.Reader; +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.ScriptReader; +import org.h2.util.StringUtils; +import org.h2.util.Tool; + +/** + * Runs a SQL script against a database. + * @h2.resource + */ +public class RunScript extends Tool { + + private boolean showResults; + private boolean checkResults; + + /** + * Options are case sensitive. 
Supported options are: + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + *
+     * <tr><td>[-help] or [-?]</td>
+     * <td>Print the list of options</td></tr>
+     * <tr><td>[-url "&lt;url&gt;"]</td>
+     * <td>The database URL (jdbc:...)</td></tr>
+     * <tr><td>[-user &lt;user&gt;]</td>
+     * <td>The user name (default: sa)</td></tr>
+     * <tr><td>[-password &lt;pwd&gt;]</td>
+     * <td>The password</td></tr>
+     * <tr><td>[-script &lt;file&gt;]</td>
+     * <td>The script file to run (default: backup.sql)</td></tr>
+     * <tr><td>[-driver &lt;class&gt;]</td>
+     * <td>The JDBC driver class to use (not required in most cases)</td></tr>
+     * <tr><td>[-showResults]</td>
+     * <td>Show the statements and the results of queries</td></tr>
+     * <tr><td>[-checkResults]</td>
+     * <td>Check if the query results match the expected results</td></tr>
+     * <tr><td>[-continueOnError]</td>
+     * <td>Continue even if the script contains errors</td></tr>
+     * <tr><td>[-options ...]</td>
+     * <td>RUNSCRIPT options (embedded H2; -*Results not supported)</td></tr>
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new RunScript().runTool(args); + } + + /** + * Executes the contents of a SQL script file against a database. + * This tool is usually used to create a database from script. + * It can also be used to analyze performance problems by running + * the tool using Java profiler settings such as: + *
    +     * java -Xrunhprof:cpu=samples,depth=16 ...
    +     * 
    + * To include local files when using remote databases, use the special + * syntax: + *
    +     * @INCLUDE fileName
    +     * 
    + * This syntax is only supported by this tool. Embedded RUNSCRIPT SQL + * statements will be executed by the database. + * + * @param args the command line arguments + */ + @Override + public void runTool(String... args) throws SQLException { + String url = null; + String user = ""; + String password = ""; + String script = "backup.sql"; + String options = null; + boolean continueOnError = false; + boolean showTime = false; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-url")) { + url = args[++i]; + } else if (arg.equals("-user")) { + user = args[++i]; + } else if (arg.equals("-password")) { + password = args[++i]; + } else if (arg.equals("-continueOnError")) { + continueOnError = true; + } else if (arg.equals("-checkResults")) { + checkResults = true; + } else if (arg.equals("-showResults")) { + showResults = true; + } else if (arg.equals("-script")) { + script = args[++i]; + } else if (arg.equals("-time")) { + showTime = true; + } else if (arg.equals("-driver")) { + String driver = args[++i]; + JdbcUtils.loadUserClass(driver); + } else if (arg.equals("-options")) { + StringBuilder buff = new StringBuilder(); + i++; + for (; i < args.length; i++) { + buff.append(' ').append(args[i]); + } + options = buff.toString(); + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + if (url == null) { + showUsage(); + throw new SQLException("URL not set"); + } + long time = System.nanoTime(); + if (options != null) { + processRunscript(url, user, password, script, options); + } else { + process(url, user, password, script, null, continueOnError); + } + if (showTime) { + time = System.nanoTime() - time; + out.println("Done in " + TimeUnit.NANOSECONDS.toMillis(time) + " ms"); + } + } + + /** + * Executes the SQL commands read from the reader against a database. 
+ * + * @param conn the connection to a database + * @param reader the reader + * @return the last result set + */ + public static ResultSet execute(Connection conn, Reader reader) + throws SQLException { + // can not close the statement because we return a result set from it + Statement stat = conn.createStatement(); + ResultSet rs = null; + ScriptReader r = new ScriptReader(reader); + while (true) { + String sql = r.readStatement(); + if (sql == null) { + break; + } + if (sql.trim().length() == 0) { + continue; + } + boolean resultSet = stat.execute(sql); + if (resultSet) { + if (rs != null) { + rs.close(); + rs = null; + } + rs = stat.getResultSet(); + } + } + return rs; + } + + private void process(Connection conn, String fileName, + boolean continueOnError, Charset charset) throws SQLException, + IOException { + InputStream in = FileUtils.newInputStream(fileName); + String path = FileUtils.getParent(fileName); + try { + in = new BufferedInputStream(in, Constants.IO_BUFFER_SIZE); + Reader reader = new InputStreamReader(in, charset); + process(conn, continueOnError, path, reader, charset); + } finally { + IOUtils.closeSilently(in); + } + } + + private void process(Connection conn, boolean continueOnError, String path, + Reader reader, Charset charset) throws SQLException, IOException { + Statement stat = conn.createStatement(); + ScriptReader r = new ScriptReader(reader); + while (true) { + String sql = r.readStatement(); + if (sql == null) { + break; + } + String trim = sql.trim(); + if (trim.length() == 0) { + continue; + } + if (trim.startsWith("@") && StringUtils.toUpperEnglish(trim). 
+ startsWith("@INCLUDE")) { + sql = trim; + sql = sql.substring("@INCLUDE".length()).trim(); + if (!FileUtils.isAbsolute(sql)) { + sql = path + SysProperties.FILE_SEPARATOR + sql; + } + process(conn, sql, continueOnError, charset); + } else { + try { + if (showResults && !trim.startsWith("-->")) { + out.print(sql + ";"); + } + if (showResults || checkResults) { + boolean query = stat.execute(sql); + if (query) { + ResultSet rs = stat.getResultSet(); + int columns = rs.getMetaData().getColumnCount(); + StringBuilder buff = new StringBuilder(); + while (rs.next()) { + buff.append("\n-->"); + for (int i = 0; i < columns; i++) { + String s = rs.getString(i + 1); + if (s != null) { + s = StringUtils.replaceAll(s, "\r\n", "\n"); + s = StringUtils.replaceAll(s, "\n", "\n--> "); + s = StringUtils.replaceAll(s, "\r", "\r--> "); + } + buff.append(' ').append(s); + } + } + buff.append("\n;"); + String result = buff.toString(); + if (showResults) { + out.print(result); + } + if (checkResults) { + String expected = r.readStatement() + ";"; + expected = StringUtils.replaceAll(expected, "\r\n", "\n"); + expected = StringUtils.replaceAll(expected, "\r", "\n"); + if (!expected.equals(result)) { + expected = StringUtils.replaceAll(expected, " ", "+"); + result = StringUtils.replaceAll(result, " ", "+"); + throw new SQLException( + "Unexpected output for:\n" + sql.trim() + + "\nGot:\n" + result + "\nExpected:\n" + expected); + } + } + + } + } else { + stat.execute(sql); + } + } catch (Exception e) { + if (continueOnError) { + e.printStackTrace(out); + } else { + throw DbException.toSQLException(e); + } + } + } + } + } + + private static void processRunscript(String url, String user, String password, + String fileName, String options) throws SQLException { + Connection conn = null; + Statement stat = null; + try { + org.h2.Driver.load(); + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + String sql = "RUNSCRIPT FROM '" + fileName + "' " + 
options; + stat.execute(sql); + } finally { + JdbcUtils.closeSilently(stat); + JdbcUtils.closeSilently(conn); + } + } + + /** + * Executes the SQL commands in a script file against a database. + * + * @param url the database URL + * @param user the user name + * @param password the password + * @param fileName the script file + * @param charset the character set or null for UTF-8 + * @param continueOnError if execution should be continued if an error + * occurs + */ + public static void execute(String url, String user, String password, + String fileName, Charset charset, boolean continueOnError) + throws SQLException { + new RunScript().process(url, user, password, fileName, charset, + continueOnError); + } + + /** + * Executes the SQL commands in a script file against a database. + * + * @param url the database URL + * @param user the user name + * @param password the password + * @param fileName the script file + * @param charset the character set or null for UTF-8 + * @param continueOnError if execution should be continued if an error + * occurs + */ + void process(String url, String user, String password, + String fileName, Charset charset, + boolean continueOnError) throws SQLException { + try { + org.h2.Driver.load(); + if (charset == null) { + charset = StandardCharsets.UTF_8; + } + try (Connection conn = DriverManager.getConnection(url, user, password)) { + process(conn, fileName, continueOnError, charset); + } + } catch (IOException e) { + throw DbException.convertIOException(e, fileName); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/Script.java b/modules/h2/src/main/java/org/h2/tools/Script.java new file mode 100644 index 0000000000000..4f094816c23b9 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Script.java @@ -0,0 +1,143 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.util.JdbcUtils; +import org.h2.util.StringUtils; +import org.h2.util.Tool; + +/** + * Creates a SQL script file by extracting the schema and data of a database. + * @h2.resource + */ +public class Script extends Tool { + + /** + * Options are case sensitive. Supported options are: + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + *
+     * <tr><td>[-help] or [-?]</td>
+     * <td>Print the list of options</td></tr>
+     * <tr><td>[-url "&lt;url&gt;"]</td>
+     * <td>The database URL (jdbc:...)</td></tr>
+     * <tr><td>[-user &lt;user&gt;]</td>
+     * <td>The user name (default: sa)</td></tr>
+     * <tr><td>[-password &lt;pwd&gt;]</td>
+     * <td>The password</td></tr>
+     * <tr><td>[-script &lt;file&gt;]</td>
+     * <td>The target script file name (default: backup.sql)</td></tr>
+     * <tr><td>[-options ...]</td>
+     * <td>A list of options (only for embedded H2, see SCRIPT)</td></tr>
+     * <tr><td>[-quiet]</td>
+     * <td>Do not print progress information</td></tr>
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new Script().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + String url = null; + String user = ""; + String password = ""; + String file = "backup.sql"; + String options1 = ""; + String options2 = ""; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-url")) { + url = args[++i]; + } else if (arg.equals("-user")) { + user = args[++i]; + } else if (arg.equals("-password")) { + password = args[++i]; + } else if (arg.equals("-script")) { + file = args[++i]; + } else if (arg.equals("-options")) { + StringBuilder buff1 = new StringBuilder(); + StringBuilder buff2 = new StringBuilder(); + i++; + for (; i < args.length; i++) { + String a = args[i]; + String upper = StringUtils.toUpperEnglish(a); + if ("SIMPLE".equals(upper) || upper.startsWith("NO") || "DROP".equals(upper)) { + buff1.append(' '); + buff1.append(args[i]); + } else if ("BLOCKSIZE".equals(upper)) { + buff1.append(' '); + buff1.append(args[i]); + i++; + buff1.append(' '); + buff1.append(args[i]); + } else { + buff2.append(' '); + buff2.append(args[i]); + } + } + options1 = buff1.toString(); + options2 = buff2.toString(); + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + if (url == null) { + showUsage(); + throw new SQLException("URL not set"); + } + process(url, user, password, file, options1, options2); + } + + /** + * Backs up a database to a stream. 
+ * + * @param url the database URL + * @param user the user name + * @param password the password + * @param fileName the target file name + * @param options1 the options before the file name (may be an empty string) + * @param options2 the options after the file name (may be an empty string) + */ + public static void process(String url, String user, String password, + String fileName, String options1, String options2) throws SQLException { + Connection conn = null; + try { + org.h2.Driver.load(); + conn = DriverManager.getConnection(url, user, password); + process(conn, fileName, options1, options2); + } finally { + JdbcUtils.closeSilently(conn); + } + } + + /** + * Backs up a database to a stream. The stream is not closed. + * The connection is not closed. + * + * @param conn the connection + * @param fileName the target file name + * @param options1 the options before the file name + * @param options2 the options after the file name + */ + public static void process(Connection conn, + String fileName, String options1, String options2) throws SQLException { + + try (Statement stat = conn.createStatement()) { + String sql = "SCRIPT " + options1 + " TO '" + fileName + "' " + options2; + stat.execute(sql); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/Server.java b/modules/h2/src/main/java/org/h2/tools/Server.java new file mode 100644 index 0000000000000..34266b5d88acc --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Server.java @@ -0,0 +1,744 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.net.URI; +import java.sql.Connection; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.server.Service; +import org.h2.server.ShutdownHandler; +import org.h2.server.TcpServer; +import org.h2.server.pg.PgServer; +import org.h2.server.web.WebServer; +import org.h2.util.StringUtils; +import org.h2.util.Tool; +import org.h2.util.Utils; + +/** + * Starts the H2 Console (web-) server, TCP, and PG server. + * @h2.resource + */ +public class Server extends Tool implements Runnable, ShutdownHandler { + + private final Service service; + private Server web, tcp, pg; + private ShutdownHandler shutdownHandler; + private boolean started; + + public Server() { + // nothing to do + this.service = null; + } + + /** + * Create a new server for the given service. + * + * @param service the service + * @param args the command line arguments + */ + public Server(Service service, String... args) throws SQLException { + verifyArgs(args); + this.service = service; + try { + service.init(args); + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } + + /** + * When running without options, -tcp, -web, -browser and -pg are started. + *
+ * Options are case sensitive. Supported options are:
    [-help] or [-?]         Print the list of options
    [-web]                  Start the web server with the H2 Console
    [-webAllowOthers]       Allow other computers to connect - see below
    [-webDaemon]            Use a daemon thread
    [-webPort <port>]       The port (default: 8082)
    [-webSSL]               Use encrypted (HTTPS) connections
    [-browser]              Start a browser connecting to the web server
    [-tcp]                  Start the TCP server
    [-tcpAllowOthers]       Allow other computers to connect - see below
    [-tcpDaemon]            Use a daemon thread
    [-tcpPort <port>]       The port (default: 9092)
    [-tcpSSL]               Use encrypted (SSL) connections
    [-tcpPassword <pwd>]    The password for shutting down a TCP server
    [-tcpShutdown "<url>"]  Stop the TCP server; example: tcp://localhost
    [-tcpShutdownForce]     Do not wait until all connections are closed
    [-pg]                   Start the PG server
    [-pgAllowOthers]        Allow other computers to connect - see below
    [-pgDaemon]             Use a daemon thread
    [-pgPort <port>]        The port (default: 5435)
    [-properties "<dir>"]   Server properties (default: ~, disable: null)
    [-baseDir <dir>]        The base directory for H2 databases (all servers)
    [-ifExists]             Only existing databases may be opened (all servers)
    [-trace]                Print additional trace information (all servers)
    [-key <from> <to>]      Allows to map a database name to another (all servers)
    + * The options -xAllowOthers are potentially risky. + *
    + * For details, see Advanced Topics / Protection against Remote Access. + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new Server().runTool(args); + } + + private void verifyArgs(String... args) throws SQLException { + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg == null) { + } else if ("-?".equals(arg) || "-help".equals(arg)) { + // ok + } else if (arg.startsWith("-web")) { + if ("-web".equals(arg)) { + // ok + } else if ("-webAllowOthers".equals(arg)) { + // no parameters + } else if ("-webDaemon".equals(arg)) { + // no parameters + } else if ("-webSSL".equals(arg)) { + // no parameters + } else if ("-webPort".equals(arg)) { + i++; + } else { + throwUnsupportedOption(arg); + } + } else if ("-browser".equals(arg)) { + // ok + } else if (arg.startsWith("-tcp")) { + if ("-tcp".equals(arg)) { + // ok + } else if ("-tcpAllowOthers".equals(arg)) { + // no parameters + } else if ("-tcpDaemon".equals(arg)) { + // no parameters + } else if ("-tcpSSL".equals(arg)) { + // no parameters + } else if ("-tcpPort".equals(arg)) { + i++; + } else if ("-tcpPassword".equals(arg)) { + i++; + } else if ("-tcpShutdown".equals(arg)) { + i++; + } else if ("-tcpShutdownForce".equals(arg)) { + // ok + } else { + throwUnsupportedOption(arg); + } + } else if (arg.startsWith("-pg")) { + if ("-pg".equals(arg)) { + // ok + } else if ("-pgAllowOthers".equals(arg)) { + // no parameters + } else if ("-pgDaemon".equals(arg)) { + // no parameters + } else if ("-pgPort".equals(arg)) { + i++; + } else { + throwUnsupportedOption(arg); + } + } else if (arg.startsWith("-ftp")) { + if ("-ftpPort".equals(arg)) { + i++; + } else if ("-ftpDir".equals(arg)) { + i++; + } else if ("-ftpRead".equals(arg)) { + i++; + } else if ("-ftpWrite".equals(arg)) { + i++; + } else if ("-ftpWritePassword".equals(arg)) { + i++; + } else if ("-ftpTask".equals(arg)) { + // no parameters + } 
else { + throwUnsupportedOption(arg); + } + } else if ("-properties".equals(arg)) { + i++; + } else if ("-trace".equals(arg)) { + // no parameters + } else if ("-ifExists".equals(arg)) { + // no parameters + } else if ("-baseDir".equals(arg)) { + i++; + } else if ("-key".equals(arg)) { + i += 2; + } else if ("-tool".equals(arg)) { + // no parameters + } else { + throwUnsupportedOption(arg); + } + } + } + + @Override + public void runTool(String... args) throws SQLException { + boolean tcpStart = false, pgStart = false, webStart = false; + boolean browserStart = false; + boolean tcpShutdown = false, tcpShutdownForce = false; + String tcpPassword = ""; + String tcpShutdownServer = ""; + boolean startDefaultServers = true; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg == null) { + } else if ("-?".equals(arg) || "-help".equals(arg)) { + showUsage(); + return; + } else if (arg.startsWith("-web")) { + if ("-web".equals(arg)) { + startDefaultServers = false; + webStart = true; + } else if ("-webAllowOthers".equals(arg)) { + // no parameters + } else if ("-webDaemon".equals(arg)) { + // no parameters + } else if ("-webSSL".equals(arg)) { + // no parameters + } else if ("-webPort".equals(arg)) { + i++; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } else if ("-browser".equals(arg)) { + startDefaultServers = false; + browserStart = true; + } else if (arg.startsWith("-tcp")) { + if ("-tcp".equals(arg)) { + startDefaultServers = false; + tcpStart = true; + } else if ("-tcpAllowOthers".equals(arg)) { + // no parameters + } else if ("-tcpDaemon".equals(arg)) { + // no parameters + } else if ("-tcpSSL".equals(arg)) { + // no parameters + } else if ("-tcpPort".equals(arg)) { + i++; + } else if ("-tcpPassword".equals(arg)) { + tcpPassword = args[++i]; + } else if ("-tcpShutdown".equals(arg)) { + startDefaultServers = false; + tcpShutdown = true; + tcpShutdownServer = args[++i]; + } else if ("-tcpShutdownForce".equals(arg)) 
{ + tcpShutdownForce = true; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } else if (arg.startsWith("-pg")) { + if ("-pg".equals(arg)) { + startDefaultServers = false; + pgStart = true; + } else if ("-pgAllowOthers".equals(arg)) { + // no parameters + } else if ("-pgDaemon".equals(arg)) { + // no parameters + } else if ("-pgPort".equals(arg)) { + i++; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } else if ("-properties".equals(arg)) { + i++; + } else if ("-trace".equals(arg)) { + // no parameters + } else if ("-ifExists".equals(arg)) { + // no parameters + } else if ("-baseDir".equals(arg)) { + i++; + } else if ("-key".equals(arg)) { + i += 2; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + verifyArgs(args); + if (startDefaultServers) { + tcpStart = true; + pgStart = true; + webStart = true; + browserStart = true; + } + // TODO server: maybe use one single properties file? + if (tcpShutdown) { + out.println("Shutting down TCP Server at " + tcpShutdownServer); + shutdownTcpServer(tcpShutdownServer, tcpPassword, + tcpShutdownForce, false); + } + try { + if (tcpStart) { + tcp = createTcpServer(args); + tcp.start(); + out.println(tcp.getStatus()); + tcp.setShutdownHandler(this); + } + if (pgStart) { + pg = createPgServer(args); + pg.start(); + out.println(pg.getStatus()); + } + if (webStart) { + web = createWebServer(args); + web.setShutdownHandler(this); + SQLException result = null; + try { + web.start(); + } catch (Exception e) { + result = DbException.toSQLException(e); + } + out.println(web.getStatus()); + // start browser in any case (even if the server is already + // running) because some people don't look at the output, but + // are wondering why nothing happens + if (browserStart) { + try { + openBrowser(web.getURL()); + } catch (Exception e) { + out.println(e.getMessage()); + } + } + if (result != null) { + throw result; + } + } else if (browserStart) { + out.println("The browser can only start if a web server is 
started (-web)"); + } + } catch (SQLException e) { + stopAll(); + throw e; + } + } + + /** + * Shutdown one or all TCP server. If force is set to false, the server will + * not allow new connections, but not kill existing connections, instead it + * will stop if the last connection is closed. If force is set to true, + * existing connections are killed. After calling the method with + * force=false, it is not possible to call it again with force=true because + * new connections are not allowed. Example: + * + *
    +     * Server.shutdownTcpServer("tcp://localhost:9094",
    +     *         password, true, false);
    +     * 
    + * + * @param url example: tcp://localhost:9094 + * @param password the password to use ("" for no password) + * @param force the shutdown (don't wait) + * @param all whether all TCP servers that are running in the JVM should be + * stopped + */ + public static void shutdownTcpServer(String url, String password, + boolean force, boolean all) throws SQLException { + TcpServer.shutdown(url, password, force, all); + } + + /** + * Get the status of this server. + * + * @return the status + */ + public String getStatus() { + StringBuilder buff = new StringBuilder(); + if (!started) { + buff.append("Not started"); + } else if (isRunning(false)) { + buff.append(service.getType()). + append(" server running at "). + append(service.getURL()). + append(" ("); + if (service.getAllowOthers()) { + buff.append("others can connect"); + } else { + buff.append("only local connections"); + } + buff.append(')'); + } else { + buff.append("The "). + append(service.getType()). + append(" server could not be started. " + + "Possible cause: another server is already running at "). + append(service.getURL()); + } + return buff.toString(); + } + + /** + * Create a new web server, but does not start it yet. Example: + * + *
    +     * Server server = Server.createWebServer("-trace").start();
    +     * 
    + * Supported options are: + * -webPort, -webSSL, -webAllowOthers, -webDaemon, + * -trace, -ifExists, -baseDir, -properties. + * See the main method for details. + * + * @param args the argument list + * @return the server + */ + public static Server createWebServer(String... args) throws SQLException { + WebServer service = new WebServer(); + Server server = new Server(service, args); + service.setShutdownHandler(server); + return server; + } + + /** + * Create a new TCP server, but does not start it yet. Example: + * + *
    +     * Server server = Server.createTcpServer(
    +     *     "-tcpPort", "9123", "-tcpAllowOthers").start();
    +     * 
    + * Supported options are: + * -tcpPort, -tcpSSL, -tcpPassword, -tcpAllowOthers, -tcpDaemon, + * -trace, -ifExists, -baseDir, -key. + * See the main method for details. + *

    + * If no port is specified, the default port is used if possible, + * and if this port is already used, a random port is used. + * Use getPort() or getURL() after starting to retrieve the port. + *

    + * + * @param args the argument list + * @return the server + */ + public static Server createTcpServer(String... args) throws SQLException { + TcpServer service = new TcpServer(); + Server server = new Server(service, args); + service.setShutdownHandler(server); + return server; + } + + /** + * Create a new PG server, but does not start it yet. + * Example: + *
    +     * Server server =
    +     *     Server.createPgServer("-pgAllowOthers").start();
    +     * 
    + * Supported options are: + * -pgPort, -pgAllowOthers, -pgDaemon, + * -trace, -ifExists, -baseDir, -key. + * See the main method for details. + *

    + * If no port is specified, the default port is used if possible, + * and if this port is already used, a random port is used. + * Use getPort() or getURL() after starting to retrieve the port. + *

    + * + * @param args the argument list + * @return the server + */ + public static Server createPgServer(String... args) throws SQLException { + return new Server(new PgServer(), args); + } + + /** + * Tries to start the server. + * @return the server if successful + * @throws SQLException if the server could not be started + */ + public Server start() throws SQLException { + try { + started = true; + service.start(); + String name = service.getName() + " (" + service.getURL() + ")"; + Thread t = new Thread(this, name); + t.setDaemon(service.isDaemon()); + t.start(); + for (int i = 1; i < 64; i += i) { + wait(i); + if (isRunning(false)) { + return this; + } + } + if (isRunning(true)) { + return this; + } + throw DbException.get(ErrorCode.EXCEPTION_OPENING_PORT_2, + name, "timeout; " + + "please check your network configuration, specially the file /etc/hosts"); + } catch (DbException e) { + throw DbException.toSQLException(e); + } + } + + private static void wait(int i) { + try { + // sleep at most 4096 ms + long sleep = (long) i * (long) i; + Thread.sleep(sleep); + } catch (InterruptedException e) { + // ignore + } + } + + private void stopAll() { + Server s = web; + if (s != null && s.isRunning(false)) { + s.stop(); + web = null; + } + s = tcp; + if (s != null && s.isRunning(false)) { + s.stop(); + tcp = null; + } + s = pg; + if (s != null && s.isRunning(false)) { + s.stop(); + pg = null; + } + } + + /** + * Checks if the server is running. + * + * @param traceError if errors should be written + * @return if the server is running + */ + public boolean isRunning(boolean traceError) { + return service.isRunning(traceError); + } + + /** + * Stops the server. + */ + public void stop() { + started = false; + if (service != null) { + service.stop(); + } + } + + /** + * Gets the URL of this server. + * + * @return the url + */ + public String getURL() { + return service.getURL(); + } + + /** + * Gets the port this server is listening on. 
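The `start()` method shown above polls the service with quadratically growing sleeps: `wait(i)` sleeps `i*i` milliseconds while `i` doubles from 1 below 64. The resulting worst-case poll budget can be computed in a standalone sketch — illustrative names, not part of the patch:

```java
// Illustrative sketch (class name not part of the patch): total sleep
// of the start() polling loop before the open-port timeout is raised.
public class StartupPoll {

    /** Sums i*i ms for i = 1, 2, 4, 8, 16, 32 (i doubles while i < 64). */
    public static long maxPollMillis() {
        long total = 0;
        for (int i = 1; i < 64; i += i) {
            total += (long) i * i; // 1 + 4 + 16 + 64 + 256 + 1024
        }
        return total;
    }
}
```

The loop runs for i = 1, 2, 4, 8, 16, 32, so the service gets roughly 1.4 seconds to come up before `EXCEPTION_OPENING_PORT_2` is thrown.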
+ * + * @return the port + */ + public int getPort() { + return service.getPort(); + } + + /** + * INTERNAL + */ + @Override + public void run() { + try { + service.listen(); + } catch (Exception e) { + DbException.traceThrowable(e); + } + } + + /** + * INTERNAL + */ + public void setShutdownHandler(ShutdownHandler shutdownHandler) { + this.shutdownHandler = shutdownHandler; + } + + /** + * INTERNAL + */ + @Override + public void shutdown() { + if (shutdownHandler != null) { + shutdownHandler.shutdown(); + } else { + stopAll(); + } + } + + /** + * Get the service attached to this server. + * + * @return the service + */ + public Service getService() { + return service; + } + + /** + * Open a new browser tab or window with the given URL. + * + * @param url the URL to open + */ + public static void openBrowser(String url) throws Exception { + try { + String osName = StringUtils.toLowerEnglish( + Utils.getProperty("os.name", "linux")); + Runtime rt = Runtime.getRuntime(); + String browser = Utils.getProperty(SysProperties.H2_BROWSER, null); + if (browser == null) { + // under Linux, this will point to the default system browser + try { + browser = System.getenv("BROWSER"); + } catch (SecurityException se) { + // ignore + } + } + if (browser != null) { + if (browser.startsWith("call:")) { + browser = browser.substring("call:".length()); + Utils.callStaticMethod(browser, url); + } else if (browser.contains("%url")) { + String[] args = StringUtils.arraySplit(browser, ',', false); + for (int i = 0; i < args.length; i++) { + args[i] = StringUtils.replaceAll(args[i], "%url", url); + } + rt.exec(args); + } else if (osName.contains("windows")) { + rt.exec(new String[] { "cmd.exe", "/C", browser, url }); + } else { + rt.exec(new String[] { browser, url }); + } + return; + } + try { + Class desktopClass = Class.forName("java.awt.Desktop"); + // Desktop.isDesktopSupported() + Boolean supported = (Boolean) desktopClass. + getMethod("isDesktopSupported"). 
+ invoke(null, new Object[0]); + URI uri = new URI(url); + if (supported) { + // Desktop.getDesktop(); + Object desktop = desktopClass.getMethod("getDesktop"). + invoke(null); + // desktop.browse(uri); + desktopClass.getMethod("browse", URI.class). + invoke(desktop, uri); + return; + } + } catch (Exception e) { + // ignore + } + if (osName.contains("windows")) { + rt.exec(new String[] { "rundll32", "url.dll,FileProtocolHandler", url }); + } else if (osName.contains("mac") || osName.contains("darwin")) { + // Mac OS: to open a page with Safari, use "open -a Safari" + Runtime.getRuntime().exec(new String[] { "open", url }); + } else { + String[] browsers = { "xdg-open", "chromium", "google-chrome", + "firefox", "mozilla-firefox", "mozilla", "konqueror", + "netscape", "opera", "midori" }; + boolean ok = false; + for (String b : browsers) { + try { + rt.exec(new String[] { b, url }); + ok = true; + break; + } catch (Exception e) { + // ignore and try the next + } + } + if (!ok) { + // No success in detection. + throw new Exception( + "Browser detection failed and system property " + + SysProperties.H2_BROWSER + " not set"); + } + } + } catch (Exception e) { + throw new Exception( + "Failed to start a browser to open the URL " + + url + ": " + e.getMessage()); + } + } + + /** + * Start a web server and a browser that uses the given connection. The + * current transaction is preserved. This is specially useful to manually + * inspect the database when debugging. This method return as soon as the + * user has disconnected. + * + * @param conn the database connection (the database must be open) + */ + public static void startWebServer(Connection conn) throws SQLException { + startWebServer(conn, false); + } + + /** + * Start a web server and a browser that uses the given connection. The + * current transaction is preserved. This is specially useful to manually + * inspect the database when debugging. This method return as soon as the + * user has disconnected. 
+ * + * @param conn the database connection (the database must be open) + * @param ignoreProperties if {@code true} properties from + * {@code .h2.server.properties} will be ignored + */ + public static void startWebServer(Connection conn, boolean ignoreProperties) throws SQLException { + WebServer webServer = new WebServer(); + String[] args; + if (ignoreProperties) { + args = new String[] { "-webPort", "0", "-properties", "null"}; + } else { + args = new String[] { "-webPort", "0" }; + } + Server web = new Server(webServer, args); + web.start(); + Server server = new Server(); + server.web = web; + webServer.setShutdownHandler(server); + String url = webServer.addSession(conn); + try { + Server.openBrowser(url); + while (!webServer.isStopped()) { + Thread.sleep(1000); + } + } catch (Exception e) { + // ignore + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/Shell.java b/modules/h2/src/main/java/org/h2/tools/Shell.java new file mode 100644 index 0000000000000..3df3f57425598 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/Shell.java @@ -0,0 +1,610 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.PrintStream; +import java.io.StringReader; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Properties; +import java.util.concurrent.TimeUnit; + +import org.h2.engine.Constants; +import org.h2.server.web.ConnectionInfo; +import org.h2.util.JdbcUtils; +import org.h2.util.New; +import org.h2.util.ScriptReader; +import org.h2.util.SortedProperties; +import org.h2.util.StringUtils; +import org.h2.util.Tool; +import org.h2.util.Utils; + +/** + * Interactive command line tool to access a database using JDBC. + * @h2.resource + */ +public class Shell extends Tool implements Runnable { + + private static final int MAX_ROW_BUFFER = 5000; + private static final int HISTORY_COUNT = 20; + // Windows: '\u00b3'; + private static final char BOX_VERTICAL = '|'; + + private PrintStream err = System.err; + private InputStream in = System.in; + private BufferedReader reader; + private Connection conn; + private Statement stat; + private boolean listMode; + private int maxColumnSize = 100; + private final ArrayList history = New.arrayList(); + private boolean stopHide; + private String serverPropertiesDir = Constants.SERVER_PROPERTIES_DIR; + + /** + * Options are case sensitive. Supported options are: + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + *
    [-help] or [-?]         Print the list of options
    [-url "<url>"]          The database URL (jdbc:h2:...)
    [-user <user>]          The user name
    [-password <pwd>]       The password
    [-driver <class>]       The JDBC driver class to use (not required in most cases)
    [-sql "<statements>"]   Execute the SQL statements and exit
    [-properties "<dir>"]   Load the server properties from this directory
    + * If special characters don't work as expected, you may need to use + * -Dfile.encoding=UTF-8 (Mac OS X) or CP850 (Windows). + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new Shell().runTool(args); + } + + /** + * Sets the standard error stream. + * + * @param err the new standard error stream + */ + public void setErr(PrintStream err) { + this.err = err; + } + + /** + * Redirects the standard input. By default, System.in is used. + * + * @param in the input stream to use + */ + public void setIn(InputStream in) { + this.in = in; + } + + /** + * Redirects the standard input. By default, System.in is used. + * + * @param reader the input stream reader to use + */ + public void setInReader(BufferedReader reader) { + this.reader = reader; + } + + /** + * Run the shell tool with the given command line settings. + * + * @param args the command line settings + */ + @Override + public void runTool(String... 
args) throws SQLException { + String url = null; + String user = ""; + String password = ""; + String sql = null; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-url")) { + url = args[++i]; + } else if (arg.equals("-user")) { + user = args[++i]; + } else if (arg.equals("-password")) { + password = args[++i]; + } else if (arg.equals("-driver")) { + String driver = args[++i]; + JdbcUtils.loadUserClass(driver); + } else if (arg.equals("-sql")) { + sql = args[++i]; + } else if (arg.equals("-properties")) { + serverPropertiesDir = args[++i]; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else if (arg.equals("-list")) { + listMode = true; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + if (url != null) { + org.h2.Driver.load(); + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + } + if (sql == null) { + promptLoop(); + } else { + ScriptReader r = new ScriptReader(new StringReader(sql)); + while (true) { + String s = r.readStatement(); + if (s == null) { + break; + } + execute(s); + } + if (conn != null) { + conn.close(); + } + } + } + + /** + * Run the shell tool with the given connection and command line settings. + * The connection will be closed when the shell exits. + * This is primary used to integrate the Shell into another application. + *

    + * Note: using the "-url" option in {@code args} doesn't make much sense + * since it will override the {@code conn} parameter. + *

    + * + * @param conn the connection + * @param args the command line settings + */ + public void runTool(Connection conn, String... args) throws SQLException { + this.conn = conn; + this.stat = conn.createStatement(); + runTool(args); + } + + private void showHelp() { + println("Commands are case insensitive; SQL statements end with ';'"); + println("help or ? Display this help"); + println("list Toggle result list / stack trace mode"); + println("maxwidth Set maximum column width (default is 100)"); + println("autocommit Enable or disable autocommit"); + println("history Show the last 20 statements"); + println("quit or exit Close the connection and exit"); + println(""); + } + + private void promptLoop() { + println(""); + println("Welcome to H2 Shell " + Constants.getFullVersion()); + println("Exit with Ctrl+C"); + if (conn != null) { + showHelp(); + } + String statement = null; + if (reader == null) { + reader = new BufferedReader(new InputStreamReader(in)); + } + while (true) { + try { + if (conn == null) { + connect(); + showHelp(); + } + if (statement == null) { + print("sql> "); + } else { + print("...> "); + } + String line = readLine(); + if (line == null) { + break; + } + String trimmed = line.trim(); + if (trimmed.length() == 0) { + continue; + } + boolean end = trimmed.endsWith(";"); + if (end) { + line = line.substring(0, line.lastIndexOf(';')); + trimmed = trimmed.substring(0, trimmed.length() - 1); + } + String lower = StringUtils.toLowerEnglish(trimmed); + if ("exit".equals(lower) || "quit".equals(lower)) { + break; + } else if ("help".equals(lower) || "?".equals(lower)) { + showHelp(); + } else if ("list".equals(lower)) { + listMode = !listMode; + println("Result list mode is now " + (listMode ? 
"on" : "off")); + } else if ("history".equals(lower)) { + for (int i = 0, size = history.size(); i < size; i++) { + String s = history.get(i); + s = s.replace('\n', ' ').replace('\r', ' '); + println("#" + (1 + i) + ": " + s); + } + if (!history.isEmpty()) { + println("To re-run a statement, type the number and press and enter"); + } else { + println("No history"); + } + } else if (lower.startsWith("autocommit")) { + lower = lower.substring("autocommit".length()).trim(); + if ("true".equals(lower)) { + conn.setAutoCommit(true); + } else if ("false".equals(lower)) { + conn.setAutoCommit(false); + } else { + println("Usage: autocommit [true|false]"); + } + println("Autocommit is now " + conn.getAutoCommit()); + } else if (lower.startsWith("maxwidth")) { + lower = lower.substring("maxwidth".length()).trim(); + try { + maxColumnSize = Integer.parseInt(lower); + } catch (NumberFormatException e) { + println("Usage: maxwidth "); + } + println("Maximum column width is now " + maxColumnSize); + } else { + boolean addToHistory = true; + if (statement == null) { + if (StringUtils.isNumber(line)) { + int pos = Integer.parseInt(line); + if (pos == 0 || pos > history.size()) { + println("Not found"); + } else { + statement = history.get(pos - 1); + addToHistory = false; + println(statement); + end = true; + } + } else { + statement = line; + } + } else { + statement += "\n" + line; + } + if (end) { + if (addToHistory) { + history.add(0, statement); + if (history.size() > HISTORY_COUNT) { + history.remove(HISTORY_COUNT); + } + } + execute(statement); + statement = null; + } + } + } catch (SQLException e) { + println("SQL Exception: " + e.getMessage()); + statement = null; + } catch (IOException e) { + println(e.getMessage()); + break; + } catch (Exception e) { + println("Exception: " + e.toString()); + e.printStackTrace(err); + break; + } + } + if (conn != null) { + try { + conn.close(); + println("Connection closed"); + } catch (SQLException e) { + println("SQL Exception: " + 
e.getMessage()); + e.printStackTrace(err); + } + } + } + + private void connect() throws IOException, SQLException { + String url = "jdbc:h2:~/test"; + String user = ""; + String driver = null; + try { + Properties prop; + if ("null".equals(serverPropertiesDir)) { + prop = new Properties(); + } else { + prop = SortedProperties.loadProperties( + serverPropertiesDir + "/" + Constants.SERVER_PROPERTIES_NAME); + } + String data = null; + boolean found = false; + for (int i = 0;; i++) { + String d = prop.getProperty(String.valueOf(i)); + if (d == null) { + break; + } + found = true; + data = d; + } + if (found) { + ConnectionInfo info = new ConnectionInfo(data); + url = info.url; + user = info.user; + driver = info.driver; + } + } catch (IOException e) { + // ignore + } + println("[Enter] " + url); + print("URL "); + url = readLine(url).trim(); + if (driver == null) { + driver = JdbcUtils.getDriver(url); + } + if (driver != null) { + println("[Enter] " + driver); + } + print("Driver "); + driver = readLine(driver).trim(); + println("[Enter] " + user); + print("User "); + user = readLine(user); + println("[Enter] Hide"); + print("Password "); + String password = readLine(); + if (password.length() == 0) { + password = readPassword(); + } + conn = JdbcUtils.getConnection(driver, url, user, password); + stat = conn.createStatement(); + println("Connected"); + } + + /** + * Print the string without newline, and flush. + * + * @param s the string to print + */ + protected void print(String s) { + out.print(s); + out.flush(); + } + + private void println(String s) { + out.println(s); + out.flush(); + } + + private String readPassword() throws IOException { + try { + Object console = Utils.callStaticMethod("java.lang.System.console"); + print("Password "); + char[] password = (char[]) Utils.callMethod(console, "readPassword"); + return password == null ? 
null : new String(password); + } catch (Exception e) { + // ignore, use the default solution + } + Thread passwordHider = new Thread(this, "Password hider"); + stopHide = false; + passwordHider.start(); + print("Password > "); + String p = readLine(); + stopHide = true; + try { + passwordHider.join(); + } catch (InterruptedException e) { + // ignore + } + print("\b\b"); + return p; + } + + /** + * INTERNAL. + * Hides the password by repeatedly printing + * backspace, backspace, >, <. + */ + @Override + public void run() { + while (!stopHide) { + print("\b\b><"); + try { + Thread.sleep(10); + } catch (InterruptedException e) { + // ignore + } + } + } + + + private String readLine(String defaultValue) throws IOException { + String s = readLine(); + return s.length() == 0 ? defaultValue : s; + } + + private String readLine() throws IOException { + String line = reader.readLine(); + if (line == null) { + throw new IOException("Aborted"); + } + return line; + } + + private void execute(String sql) { + if (sql.trim().length() == 0) { + return; + } + long time = System.nanoTime(); + try { + ResultSet rs = null; + try { + if (stat.execute(sql)) { + rs = stat.getResultSet(); + int rowCount = printResult(rs, listMode); + time = System.nanoTime() - time; + println("(" + rowCount + (rowCount == 1 ? 
+ " row, " : " rows, ") + TimeUnit.NANOSECONDS.toMillis(time) + " ms)"); + } else { + int updateCount = stat.getUpdateCount(); + time = System.nanoTime() - time; + println("(Update count: " + updateCount + ", " + + TimeUnit.NANOSECONDS.toMillis(time) + " ms)"); + } + } finally { + JdbcUtils.closeSilently(rs); + } + } catch (SQLException e) { + println("Error: " + e.toString()); + if (listMode) { + e.printStackTrace(err); + } + } + } + + private int printResult(ResultSet rs, boolean asList) throws SQLException { + if (asList) { + return printResultAsList(rs); + } + return printResultAsTable(rs); + } + + private int printResultAsTable(ResultSet rs) throws SQLException { + ResultSetMetaData meta = rs.getMetaData(); + int len = meta.getColumnCount(); + boolean truncated = false; + ArrayList rows = New.arrayList(); + // buffer the header + String[] columns = new String[len]; + for (int i = 0; i < len; i++) { + String s = meta.getColumnLabel(i + 1); + columns[i] = s == null ? "" : s; + } + rows.add(columns); + int rowCount = 0; + while (rs.next()) { + rowCount++; + truncated |= loadRow(rs, len, rows); + if (rowCount > MAX_ROW_BUFFER) { + printRows(rows, len); + rows.clear(); + } + } + printRows(rows, len); + rows.clear(); + if (truncated) { + println("(data is partially truncated)"); + } + return rowCount; + } + + private boolean loadRow(ResultSet rs, int len, ArrayList rows) + throws SQLException { + boolean truncated = false; + String[] row = new String[len]; + for (int i = 0; i < len; i++) { + String s = rs.getString(i + 1); + if (s == null) { + s = "null"; + } + // only truncate if more than one column + if (len > 1 && s.length() > maxColumnSize) { + s = s.substring(0, maxColumnSize); + truncated = true; + } + row[i] = s; + } + rows.add(row); + return truncated; + } + + private int[] printRows(ArrayList rows, int len) { + int[] columnSizes = new int[len]; + for (int i = 0; i < len; i++) { + int max = 0; + for (String[] row : rows) { + max = Math.max(max, 
row[i].length()); + } + if (len > 1) { + max = Math.min(maxColumnSize, max); + } + columnSizes[i] = max; + } + for (String[] row : rows) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < len; i++) { + if (i > 0) { + buff.append(' ').append(BOX_VERTICAL).append(' '); + } + String s = row[i]; + buff.append(s); + if (i < len - 1) { + for (int j = s.length(); j < columnSizes[i]; j++) { + buff.append(' '); + } + } + } + println(buff.toString()); + } + return columnSizes; + } + + private int printResultAsList(ResultSet rs) throws SQLException { + ResultSetMetaData meta = rs.getMetaData(); + int longestLabel = 0; + int len = meta.getColumnCount(); + String[] columns = new String[len]; + for (int i = 0; i < len; i++) { + String s = meta.getColumnLabel(i + 1); + columns[i] = s; + longestLabel = Math.max(longestLabel, s.length()); + } + StringBuilder buff = new StringBuilder(); + int rowCount = 0; + while (rs.next()) { + rowCount++; + buff.setLength(0); + if (rowCount > 1) { + println(""); + } + for (int i = 0; i < len; i++) { + if (i > 0) { + buff.append('\n'); + } + String label = columns[i]; + buff.append(label); + for (int j = label.length(); j < longestLabel; j++) { + buff.append(' '); + } + buff.append(": ").append(rs.getString(i + 1)); + } + println(buff.toString()); + } + if (rowCount == 0) { + for (int i = 0; i < len; i++) { + if (i > 0) { + buff.append('\n'); + } + String label = columns[i]; + buff.append(label); + } + println(buff.toString()); + } + return rowCount; + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/SimpleResultSet.java b/modules/h2/src/main/java/org/h2/tools/SimpleResultSet.java new file mode 100644 index 0000000000000..af18a5d6ff05b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/SimpleResultSet.java @@ -0,0 +1,2501 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.io.InputStream; +import java.io.Reader; +import java.math.BigDecimal; +import java.math.BigInteger; +import java.net.URL; +import java.sql.Array; +import java.sql.Blob; +import java.sql.Clob; +import java.sql.Date; +import java.sql.NClob; +import java.sql.Ref; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.RowId; +import java.sql.SQLException; +import java.sql.SQLWarning; +import java.sql.SQLXML; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; +import java.util.ArrayList; +import java.util.Calendar; +import java.util.Map; +import java.util.UUID; + +import org.h2.api.ErrorCode; +import org.h2.jdbc.JdbcResultSetBackwardsCompat; +import org.h2.message.DbException; +import org.h2.util.Bits; +import org.h2.util.JdbcUtils; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.util.Utils; +import org.h2.value.DataType; + +/** + * This class is a simple result set and meta data implementation. + * It can be used in Java functions that return a result set. + * Only the most basic methods are implemented, the others throw an exception. + * This implementation is standalone, and only relies on standard classes. + * It can be extended easily if required. + * + * An application can create a result set using the following code: + * + *
    + * SimpleResultSet rs = new SimpleResultSet();
    + * rs.addColumn("ID", Types.INTEGER, 10, 0);
    + * rs.addColumn("NAME", Types.VARCHAR, 255, 0);
    + * rs.addRow(0, "Hello" });
    + * rs.addRow(1, "World" });
    + * 
    + * + */ +public class SimpleResultSet implements ResultSet, ResultSetMetaData, + JdbcResultSetBackwardsCompat { + + private ArrayList<Object[]> rows; + private Object[] currentRow; + private int rowId = -1; + private boolean wasNull; + private SimpleRowSource source; + private ArrayList<Column> columns = New.arrayList(); + private boolean autoClose = true; + + /** + * This constructor is used if the result set is later populated with + * addRow. + */ + public SimpleResultSet() { + rows = New.arrayList(); + } + + /** + * This constructor is used if the result set should retrieve the rows using + * the specified row source object. + * + * @param source the row source + */ + public SimpleResultSet(SimpleRowSource source) { + this.source = source; + } + + /** + * Adds a column to the result set. + * All columns must be added before adding rows. + * This method uses the default SQL type names. + * + * @param name null is replaced with C1, C2,... + * @param sqlType the value returned in getColumnType(..) + * @param precision the precision + * @param scale the scale + */ + public void addColumn(String name, int sqlType, int precision, int scale) { + int valueType = DataType.convertSQLTypeToValueType(sqlType); + addColumn(name, sqlType, DataType.getDataType(valueType).name, + precision, scale); + } + + /** + * Adds a column to the result set. + * All columns must be added before adding rows. + * + * @param name null is replaced with C1, C2,... + * @param sqlType the value returned in getColumnType(..) + * @param sqlTypeName the type name returned in getColumnTypeName(..) 
+ * @param precision the precision + * @param scale the scale + */ + public void addColumn(String name, int sqlType, String sqlTypeName, + int precision, int scale) { + if (rows != null && !rows.isEmpty()) { + throw new IllegalStateException( + "Cannot add a column after adding rows"); + } + if (name == null) { + name = "C" + (columns.size() + 1); + } + Column column = new Column(); + column.name = name; + column.sqlType = sqlType; + column.precision = precision; + column.scale = scale; + column.sqlTypeName = sqlTypeName; + columns.add(column); + } + + /** + * Add a new row to the result set. + * Do not use this method when using a RowSource. + * + * @param row the row as an array of objects + */ + public void addRow(Object... row) { + if (rows == null) { + throw new IllegalStateException( + "Cannot add a row when using RowSource"); + } + rows.add(row); + } + + /** + * Returns ResultSet.CONCUR_READ_ONLY. + * + * @return CONCUR_READ_ONLY + */ + @Override + public int getConcurrency() { + return ResultSet.CONCUR_READ_ONLY; + } + + /** + * Returns ResultSet.FETCH_FORWARD. + * + * @return FETCH_FORWARD + */ + @Override + public int getFetchDirection() { + return ResultSet.FETCH_FORWARD; + } + + /** + * Returns 0. + * + * @return 0 + */ + @Override + public int getFetchSize() { + return 0; + } + + /** + * Returns the row number (1, 2,...) or 0 for no row. + * + * @return 0 + */ + @Override + public int getRow() { + return currentRow == null ? 0 : rowId + 1; + } + + /** + * Returns the result set type. This is ResultSet.TYPE_FORWARD_ONLY for + * auto-close result sets, and ResultSet.TYPE_SCROLL_INSENSITIVE for others. + * + * @return TYPE_FORWARD_ONLY or TYPE_SCROLL_INSENSITIVE + */ + @Override + public int getType() { + if (autoClose) { + return ResultSet.TYPE_FORWARD_ONLY; + } + return ResultSet.TYPE_SCROLL_INSENSITIVE; + } + + /** + * Closes the result set and releases the resources. 
+ */ + @Override + public void close() { + currentRow = null; + rows = null; + columns = null; + rowId = -1; + if (source != null) { + source.close(); + source = null; + } + } + + /** + * Moves the cursor to the next row of the result set. + * + * @return true if successful, false if there are no more rows + */ + @Override + public boolean next() throws SQLException { + if (source != null) { + rowId++; + currentRow = source.readRow(); + if (currentRow != null) { + return true; + } + } else if (rows != null && rowId < rows.size()) { + rowId++; + if (rowId < rows.size()) { + currentRow = rows.get(rowId); + return true; + } + currentRow = null; + } + if (autoClose) { + close(); + } + return false; + } + + /** + * Moves the current position to before the first row, that means the result + * set is reset. + */ + @Override + public void beforeFirst() throws SQLException { + if (autoClose) { + throw DbException.get(ErrorCode.RESULT_SET_NOT_SCROLLABLE); + } + rowId = -1; + if (source != null) { + source.reset(); + } + } + + /** + * Returns whether the last column accessed was null. + * + * @return true if the last column accessed was null + */ + @Override + public boolean wasNull() { + return wasNull; + } + + /** + * Searches for a specific column in the result set. A case-insensitive + * search is made. + * + * @param columnLabel the column label + * @return the column index (1,2,...) + * @throws SQLException if the column is not found or if the result set is + * closed + */ + @Override + public int findColumn(String columnLabel) throws SQLException { + if (columnLabel != null && columns != null) { + for (int i = 0, size = columns.size(); i < size; i++) { + if (columnLabel.equalsIgnoreCase(getColumn(i).name)) { + return i + 1; + } + } + } + throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1, columnLabel) + .getSQLException(); + } + + /** + * Returns a reference to itself. 
+ * + * @return this + */ + @Override + public ResultSetMetaData getMetaData() { + return this; + } + + /** + * Returns null. + * + * @return null + */ + @Override + public SQLWarning getWarnings() { + return null; + } + + /** + * Returns null. + * + * @return null + */ + @Override + public Statement getStatement() { + return null; + } + + /** + * INTERNAL + */ + @Override + public void clearWarnings() { + // nothing to do + } + + // ---- get --------------------------------------------- + + /** + * Returns the value as a java.sql.Array. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Array getArray(int columnIndex) throws SQLException { + Object[] o = (Object[]) get(columnIndex); + return o == null ? null : new SimpleArray(o); + } + + /** + * Returns the value as a java.sql.Array. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public Array getArray(String columnLabel) throws SQLException { + return getArray(findColumn(columnLabel)); + } + + /** + * INTERNAL + */ + @Override + public InputStream getAsciiStream(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public InputStream getAsciiStream(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + /** + * Returns the value as a java.math.BigDecimal. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public BigDecimal getBigDecimal(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o != null && !(o instanceof BigDecimal)) { + o = new BigDecimal(o.toString()); + } + return (BigDecimal) o; + } + + /** + * Returns the value as a java.math.BigDecimal. 
+ * + * @param columnLabel the column label + * @return the value + */ + @Override + public BigDecimal getBigDecimal(String columnLabel) throws SQLException { + return getBigDecimal(findColumn(columnLabel)); + } + + /** + * @deprecated INTERNAL + */ + @Deprecated + @Override + public BigDecimal getBigDecimal(int columnIndex, int scale) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * @deprecated INTERNAL + */ + @Deprecated + @Override + public BigDecimal getBigDecimal(String columnLabel, int scale) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * Returns the value as a java.io.InputStream. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public InputStream getBinaryStream(int columnIndex) throws SQLException { + return asInputStream(get(columnIndex)); + } + + private static InputStream asInputStream(Object o) throws SQLException { + if (o == null) { + return null; + } else if (o instanceof Blob) { + return ((Blob) o).getBinaryStream(); + } + return (InputStream) o; + } + + /** + * Returns the value as a java.io.InputStream. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public InputStream getBinaryStream(String columnLabel) throws SQLException { + return getBinaryStream(findColumn(columnLabel)); + } + + /** + * Returns the value as a java.sql.Blob. + * This is only supported if the + * result set was created using a Blob object. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Blob getBlob(int columnIndex) throws SQLException { + return (Blob) get(columnIndex); + } + + /** + * Returns the value as a java.sql.Blob. + * This is only supported if the + * result set was created using a Blob object. 
+ * + * @param columnLabel the column label + * @return the value + */ + @Override + public Blob getBlob(String columnLabel) throws SQLException { + return getBlob(findColumn(columnLabel)); + } + + /** + * Returns the value as a boolean. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public boolean getBoolean(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o == null) { + return false; + } + if (o instanceof Boolean) { + return (Boolean) o; + } + if (o instanceof Number) { + Number n = (Number) o; + if (n instanceof Double || n instanceof Float) { + return n.doubleValue() != 0; + } + if (n instanceof BigDecimal) { + return ((BigDecimal) n).signum() != 0; + } + if (n instanceof BigInteger) { + return ((BigInteger) n).signum() != 0; + } + return n.longValue() != 0; + } + return Utils.parseBoolean(o.toString(), false, true); + } + + /** + * Returns the value as a boolean. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public boolean getBoolean(String columnLabel) throws SQLException { + return getBoolean(findColumn(columnLabel)); + } + + /** + * Returns the value as a byte. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public byte getByte(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o != null && !(o instanceof Number)) { + o = Byte.decode(o.toString()); + } + return o == null ? 0 : ((Number) o).byteValue(); + } + + /** + * Returns the value as a byte. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public byte getByte(String columnLabel) throws SQLException { + return getByte(findColumn(columnLabel)); + } + + /** + * Returns the value as a byte array. + * + * @param columnIndex (1,2,...) 
+ * @return the value + */ + @Override + public byte[] getBytes(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o == null || o instanceof byte[]) { + return (byte[]) o; + } + if (o instanceof UUID) { + return Bits.uuidToBytes((UUID) o); + } + return JdbcUtils.serialize(o, null); + } + + /** + * Returns the value as a byte array. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public byte[] getBytes(String columnLabel) throws SQLException { + return getBytes(findColumn(columnLabel)); + } + + /** + * Returns the value as a java.io.Reader. + * This is only supported if the + * result set was created using a Clob or Reader object. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Reader getCharacterStream(int columnIndex) throws SQLException { + return asReader(get(columnIndex)); + } + + private static Reader asReader(Object o) throws SQLException { + if (o == null) { + return null; + } else if (o instanceof Clob) { + return ((Clob) o).getCharacterStream(); + } + return (Reader) o; + } + + /** + * Returns the value as a java.io.Reader. + * This is only supported if the + * result set was created using a Clob or Reader object. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public Reader getCharacterStream(String columnLabel) throws SQLException { + return getCharacterStream(findColumn(columnLabel)); + } + + /** + * Returns the value as a java.sql.Clob. + * This is only supported if the + * result set was created using a Clob object. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Clob getClob(int columnIndex) throws SQLException { + Clob c = (Clob) get(columnIndex); + return c == null ? null : c; + } + + /** + * Returns the value as a java.sql.Clob. + * This is only supported if the + * result set was created using a Clob object. 
+ * + * @param columnLabel the column label + * @return the value + */ + @Override + public Clob getClob(String columnLabel) throws SQLException { + return getClob(findColumn(columnLabel)); + } + + /** + * Returns the value as an java.sql.Date. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Date getDate(int columnIndex) throws SQLException { + return (Date) get(columnIndex); + } + + /** + * Returns the value as a java.sql.Date. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public Date getDate(String columnLabel) throws SQLException { + return getDate(findColumn(columnLabel)); + } + + /** + * INTERNAL + */ + @Override + public Date getDate(int columnIndex, Calendar cal) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Date getDate(String columnLabel, Calendar cal) throws SQLException { + throw getUnsupportedException(); + } + + /** + * Returns the value as an double. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public double getDouble(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o != null && !(o instanceof Number)) { + return Double.parseDouble(o.toString()); + } + return o == null ? 0 : ((Number) o).doubleValue(); + } + + /** + * Returns the value as a double. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public double getDouble(String columnLabel) throws SQLException { + return getDouble(findColumn(columnLabel)); + } + + /** + * Returns the value as a float. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public float getFloat(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o != null && !(o instanceof Number)) { + return Float.parseFloat(o.toString()); + } + return o == null ? 0 : ((Number) o).floatValue(); + } + + /** + * Returns the value as a float. 
+ * + * @param columnLabel the column label + * @return the value + */ + @Override + public float getFloat(String columnLabel) throws SQLException { + return getFloat(findColumn(columnLabel)); + } + + /** + * Returns the value as an int. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public int getInt(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o != null && !(o instanceof Number)) { + o = Integer.decode(o.toString()); + } + return o == null ? 0 : ((Number) o).intValue(); + } + + /** + * Returns the value as an int. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public int getInt(String columnLabel) throws SQLException { + return getInt(findColumn(columnLabel)); + } + + /** + * Returns the value as a long. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public long getLong(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o != null && !(o instanceof Number)) { + o = Long.decode(o.toString()); + } + return o == null ? 0 : ((Number) o).longValue(); + } + + /** + * Returns the value as a long. 
+ * + * @param columnLabel the column label + * @return the value + */ + @Override + public long getLong(String columnLabel) throws SQLException { + return getLong(findColumn(columnLabel)); + } + + /** + * INTERNAL + */ + @Override + public Reader getNCharacterStream(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Reader getNCharacterStream(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public NClob getNClob(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public NClob getNClob(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public String getNString(int columnIndex) throws SQLException { + return getString(columnIndex); + } + + /** + * INTERNAL + */ + @Override + public String getNString(String columnLabel) throws SQLException { + return getString(columnLabel); + } + + /** + * Returns the value as an Object. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Object getObject(int columnIndex) throws SQLException { + return get(columnIndex); + } + + /** + * Returns the value as an Object. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public Object getObject(String columnLabel) throws SQLException { + return getObject(findColumn(columnLabel)); + } + + /** + * Returns the value as an Object of the specified type. + * + * @param columnIndex the column index (1, 2, ...) 
+ * @param type the class of the returned value + * @return the value + */ + @Override + public <T> T getObject(int columnIndex, Class<T> type) throws SQLException { + if (wasNull()) { + return null; + } + + if (type == BigDecimal.class) { + return type.cast(getBigDecimal(columnIndex)); + } else if (type == BigInteger.class) { + return type.cast(getBigDecimal(columnIndex).toBigInteger()); + } else if (type == String.class) { + return type.cast(getString(columnIndex)); + } else if (type == Boolean.class) { + return type.cast(getBoolean(columnIndex)); + } else if (type == Byte.class) { + return type.cast(getByte(columnIndex)); + } else if (type == Short.class) { + return type.cast(getShort(columnIndex)); + } else if (type == Integer.class) { + return type.cast(getInt(columnIndex)); + } else if (type == Long.class) { + return type.cast(getLong(columnIndex)); + } else if (type == Float.class) { + return type.cast(getFloat(columnIndex)); + } else if (type == Double.class) { + return type.cast(getDouble(columnIndex)); + } else if (type == Date.class) { + return type.cast(getDate(columnIndex)); + } else if (type == Time.class) { + return type.cast(getTime(columnIndex)); + } else if (type == Timestamp.class) { + return type.cast(getTimestamp(columnIndex)); + } else if (type == UUID.class) { + return type.cast(getObject(columnIndex)); + } else if (type == byte[].class) { + return type.cast(getBytes(columnIndex)); + } else if (type == java.sql.Array.class) { + return type.cast(getArray(columnIndex)); + } else if (type == Blob.class) { + return type.cast(getBlob(columnIndex)); + } else if (type == Clob.class) { + return type.cast(getClob(columnIndex)); + } else { + throw getUnsupportedException(); + } + } + + /** + * Returns the value as an Object of the specified type. 
+ * + * @param columnName the column name + * @param type the class of the returned value + * @return the value + */ + @Override + public <T> T getObject(String columnName, Class<T> type) throws SQLException { + return getObject(findColumn(columnName), type); + } + + /** + * INTERNAL + */ + @Override + public Object getObject(int columnIndex, Map<String, Class<?>> map) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Object getObject(String columnLabel, Map<String, Class<?>> map) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Ref getRef(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Ref getRef(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public RowId getRowId(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public RowId getRowId(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + /** + * Returns the value as a short. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public short getShort(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o != null && !(o instanceof Number)) { + o = Short.decode(o.toString()); + } + return o == null ? 0 : ((Number) o).shortValue(); + } + + /** + * Returns the value as a short. 
+ * + * @param columnLabel the column label + * @return the value + */ + @Override + public short getShort(String columnLabel) throws SQLException { + return getShort(findColumn(columnLabel)); + } + + /** + * INTERNAL + */ + @Override + public SQLXML getSQLXML(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public SQLXML getSQLXML(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + /** + * Returns the value as a String. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public String getString(int columnIndex) throws SQLException { + Object o = get(columnIndex); + if (o == null) { + return null; + } + switch (columns.get(columnIndex - 1).sqlType) { + case Types.CLOB: + Clob c = (Clob) o; + return c.getSubString(1, MathUtils.convertLongToInt(c.length())); + } + return o.toString(); + } + + /** + * Returns the value as a String. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public String getString(String columnLabel) throws SQLException { + return getString(findColumn(columnLabel)); + } + + /** + * Returns the value as an java.sql.Time. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Time getTime(int columnIndex) throws SQLException { + return (Time) get(columnIndex); + } + + /** + * Returns the value as a java.sql.Time. 
+ * + * @param columnLabel the column label + * @return the value + */ + @Override + public Time getTime(String columnLabel) throws SQLException { + return getTime(findColumn(columnLabel)); + } + + /** + * INTERNAL + */ + @Override + public Time getTime(int columnIndex, Calendar cal) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Time getTime(String columnLabel, Calendar cal) throws SQLException { + throw getUnsupportedException(); + } + + /** + * Returns the value as an java.sql.Timestamp. + * + * @param columnIndex (1,2,...) + * @return the value + */ + @Override + public Timestamp getTimestamp(int columnIndex) throws SQLException { + return (Timestamp) get(columnIndex); + } + + /** + * Returns the value as a java.sql.Timestamp. + * + * @param columnLabel the column label + * @return the value + */ + @Override + public Timestamp getTimestamp(String columnLabel) throws SQLException { + return getTimestamp(findColumn(columnLabel)); + } + + /** + * INTERNAL + */ + @Override + public Timestamp getTimestamp(int columnIndex, Calendar cal) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Timestamp getTimestamp(String columnLabel, Calendar cal) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * @deprecated INTERNAL + */ + @Deprecated + @Override + public InputStream getUnicodeStream(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * @deprecated INTERNAL + */ + @Deprecated + @Override + public InputStream getUnicodeStream(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public URL getURL(int columnIndex) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public URL getURL(String columnLabel) throws SQLException { + throw getUnsupportedException(); + } + + // ---- update 
--------------------------------------------- + + /** + * INTERNAL + */ + @Override + public void updateArray(int columnIndex, Array x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateArray(String columnLabel, Array x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateAsciiStream(int columnIndex, InputStream x) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateAsciiStream(String columnLabel, InputStream x) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateAsciiStream(int columnIndex, InputStream x, int length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateAsciiStream(String columnLabel, InputStream x, int length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateAsciiStream(int columnIndex, InputStream x, long length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateAsciiStream(String columnLabel, InputStream x, long length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBigDecimal(int columnIndex, BigDecimal x) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBigDecimal(String columnLabel, BigDecimal x) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBinaryStream(int columnIndex, InputStream x) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBinaryStream(String columnLabel, InputStream x) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + 
public void updateBinaryStream(int columnIndex, InputStream x, int length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBinaryStream(String columnLabel, InputStream x, int length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBinaryStream(int columnIndex, InputStream x, long length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBinaryStream(String columnLabel, InputStream x, long length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBlob(int columnIndex, Blob x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBlob(String columnLabel, Blob x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBlob(int columnIndex, InputStream x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBlob(String columnLabel, InputStream x) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBlob(int columnIndex, InputStream x, long length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBlob(String columnLabel, InputStream x, long length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBoolean(int columnIndex, boolean x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBoolean(String columnLabel, boolean x) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateByte(int columnIndex, byte x) throws SQLException { + update(columnIndex, x); + } + + /** + 
* INTERNAL + */ + @Override + public void updateByte(String columnLabel, byte x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBytes(int columnIndex, byte[] x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateBytes(String columnLabel, byte[] x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateCharacterStream(int columnIndex, Reader x) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateCharacterStream(String columnLabel, Reader x) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateCharacterStream(int columnIndex, Reader x, int length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateCharacterStream(String columnLabel, Reader x, int length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateCharacterStream(int columnIndex, Reader x, long length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateCharacterStream(String columnLabel, Reader x, long length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateClob(int columnIndex, Clob x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateClob(String columnLabel, Clob x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateClob(int columnIndex, Reader x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateClob(String columnLabel, Reader x) throws SQLException { + update(columnLabel, x); + } + + /** + * 
INTERNAL + */ + @Override + public void updateClob(int columnIndex, Reader x, long length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateClob(String columnLabel, Reader x, long length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateDate(int columnIndex, Date x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateDate(String columnLabel, Date x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateDouble(int columnIndex, double x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateDouble(String columnLabel, double x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateFloat(int columnIndex, float x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateFloat(String columnLabel, float x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateInt(int columnIndex, int x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateInt(String columnLabel, int x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateLong(int columnIndex, long x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateLong(String columnLabel, long x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNCharacterStream(int columnIndex, Reader x) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNCharacterStream(String columnLabel, Reader x) + throws 
SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNCharacterStream(int columnIndex, Reader x, long length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNCharacterStream(String columnLabel, Reader x, long length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNClob(int columnIndex, NClob x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNClob(String columnLabel, NClob x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNClob(int columnIndex, Reader x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNClob(String columnLabel, Reader x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNClob(int columnIndex, Reader x, long length) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNClob(String columnLabel, Reader x, long length) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNString(int columnIndex, String x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNString(String columnLabel, String x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateNull(int columnIndex) throws SQLException { + update(columnIndex, null); + } + + /** + * INTERNAL + */ + @Override + public void updateNull(String columnLabel) throws SQLException { + update(columnLabel, null); + } + + /** + * INTERNAL + */ + @Override + public void updateObject(int columnIndex, Object x) throws SQLException { + update(columnIndex, x); + } + + /** 
+ * INTERNAL + */ + @Override + public void updateObject(String columnLabel, Object x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateObject(int columnIndex, Object x, int scale) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateObject(String columnLabel, Object x, int scale) + throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateRef(int columnIndex, Ref x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateRef(String columnLabel, Ref x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateRowId(int columnIndex, RowId x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateRowId(String columnLabel, RowId x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateShort(int columnIndex, short x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateShort(String columnLabel, short x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateSQLXML(int columnIndex, SQLXML x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateSQLXML(String columnLabel, SQLXML x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateString(int columnIndex, String x) throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateString(String columnLabel, String x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateTime(int columnIndex, Time x) throws SQLException 
{ + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateTime(String columnLabel, Time x) throws SQLException { + update(columnLabel, x); + } + + /** + * INTERNAL + */ + @Override + public void updateTimestamp(int columnIndex, Timestamp x) + throws SQLException { + update(columnIndex, x); + } + + /** + * INTERNAL + */ + @Override + public void updateTimestamp(String columnLabel, Timestamp x) + throws SQLException { + update(columnLabel, x); + } + + // ---- result set meta data --------------------------------------------- + + /** + * Returns the column count. + * + * @return the column count + */ + @Override + public int getColumnCount() { + return columns.size(); + } + + /** + * Returns 15. + * + * @param columnIndex (1,2,...) + * @return 15 + */ + @Override + public int getColumnDisplaySize(int columnIndex) { + return 15; + } + + /** + * Returns the SQL type. + * + * @param columnIndex (1,2,...) + * @return the SQL type + */ + @Override + public int getColumnType(int columnIndex) throws SQLException { + return getColumn(columnIndex - 1).sqlType; + } + + /** + * Returns the precision. + * + * @param columnIndex (1,2,...) + * @return the precision + */ + @Override + public int getPrecision(int columnIndex) throws SQLException { + return getColumn(columnIndex - 1).precision; + } + + /** + * Returns the scale. + * + * @param columnIndex (1,2,...) + * @return the scale + */ + @Override + public int getScale(int columnIndex) throws SQLException { + return getColumn(columnIndex - 1).scale; + } + + /** + * Returns ResultSetMetaData.columnNullableUnknown. + * + * @param columnIndex (1,2,...) + * @return columnNullableUnknown + */ + @Override + public int isNullable(int columnIndex) { + return ResultSetMetaData.columnNullableUnknown; + } + + /** + * Returns false. + * + * @param columnIndex (1,2,...) + * @return false + */ + @Override + public boolean isAutoIncrement(int columnIndex) { + return false; + } + + /** + * Returns true. 
+ * + * @param columnIndex (1,2,...) + * @return true + */ + @Override + public boolean isCaseSensitive(int columnIndex) { + return true; + } + + /** + * Returns false. + * + * @param columnIndex (1,2,...) + * @return false + */ + @Override + public boolean isCurrency(int columnIndex) { + return false; + } + + /** + * Returns false. + * + * @param columnIndex (1,2,...) + * @return false + */ + @Override + public boolean isDefinitelyWritable(int columnIndex) { + return false; + } + + /** + * Returns true. + * + * @param columnIndex (1,2,...) + * @return true + */ + @Override + public boolean isReadOnly(int columnIndex) { + return true; + } + + /** + * Returns true. + * + * @param columnIndex (1,2,...) + * @return true + */ + @Override + public boolean isSearchable(int columnIndex) { + return true; + } + + /** + * Returns true. + * + * @param columnIndex (1,2,...) + * @return true + */ + @Override + public boolean isSigned(int columnIndex) { + return true; + } + + /** + * Returns false. + * + * @param columnIndex (1,2,...) + * @return false + */ + @Override + public boolean isWritable(int columnIndex) { + return false; + } + + /** + * Returns null. + * + * @param columnIndex (1,2,...) + * @return null + */ + @Override + public String getCatalogName(int columnIndex) { + return null; + } + + /** + * Returns the Java class name of this column. + * + * @param columnIndex (1,2,...) + * @return the class name + */ + @Override + public String getColumnClassName(int columnIndex) throws SQLException { + int type = DataType.getValueTypeFromResultSet(this, columnIndex); + return DataType.getTypeClassName(type); + } + + /** + * Returns the column label. + * + * @param columnIndex (1,2,...) + * @return the column label + */ + @Override + public String getColumnLabel(int columnIndex) throws SQLException { + return getColumn(columnIndex - 1).name; + } + + /** + * Returns the column name. + * + * @param columnIndex (1,2,...) 
+ * @return the column name + */ + @Override + public String getColumnName(int columnIndex) throws SQLException { + return getColumnLabel(columnIndex); + } + + /** + * Returns the data type name of a column. + * + * @param columnIndex (1,2,...) + * @return the type name + */ + @Override + public String getColumnTypeName(int columnIndex) throws SQLException { + return getColumn(columnIndex - 1).sqlTypeName; + } + + /** + * Returns null. + * + * @param columnIndex (1,2,...) + * @return null + */ + @Override + public String getSchemaName(int columnIndex) { + return null; + } + + /** + * Returns null. + * + * @param columnIndex (1,2,...) + * @return null + */ + @Override + public String getTableName(int columnIndex) { + return null; + } + + // ---- unsupported / result set ----------------------------------- + + /** + * INTERNAL + */ + @Override + public void afterLast() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void cancelRowUpdates() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void deleteRow() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void insertRow() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void moveToCurrentRow() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void moveToInsertRow() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void refreshRow() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void updateRow() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean first() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override 
+ public boolean isAfterLast() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean isBeforeFirst() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean isFirst() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean isLast() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean last() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean previous() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean rowDeleted() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean rowInserted() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean rowUpdated() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void setFetchDirection(int direction) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void setFetchSize(int rows) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean absolute(int row) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean relative(int offset) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public String getCursorName() throws SQLException { + throw getUnsupportedException(); + } + + // --- private ----------------------------- + + private void update(int columnIndex, Object obj) throws SQLException { + checkColumnIndex(columnIndex); + this.currentRow[columnIndex - 1] = obj; + } 
+ + private void update(String columnLabel, Object obj) throws SQLException { + this.currentRow[findColumn(columnLabel) - 1] = obj; + } + + /** + * INTERNAL + */ + static SQLException getUnsupportedException() { + return DbException.get(ErrorCode.FEATURE_NOT_SUPPORTED_1). + getSQLException(); + } + + private void checkColumnIndex(int columnIndex) throws SQLException { + if (columnIndex < 1 || columnIndex > columns.size()) { + throw DbException.getInvalidValueException( + "columnIndex", columnIndex).getSQLException(); + } + } + + private Object get(int columnIndex) throws SQLException { + if (currentRow == null) { + throw DbException.get(ErrorCode.NO_DATA_AVAILABLE). + getSQLException(); + } + checkColumnIndex(columnIndex); + columnIndex--; + Object o = columnIndex < currentRow.length ? + currentRow[columnIndex] : null; + wasNull = o == null; + return o; + } + + private Column getColumn(int i) throws SQLException { + checkColumnIndex(i + 1); + return columns.get(i); + } + + /** + * Returns the current result set holdability. + * + * @return the holdability + */ + @Override + public int getHoldability() { + return ResultSet.HOLD_CURSORS_OVER_COMMIT; + } + + /** + * Returns whether this result set has been closed. + * + * @return true if the result set was closed + */ + @Override + public boolean isClosed() { + return rows == null && source == null; + } + + /** + * INTERNAL + */ + @Override + public <T> T unwrap(Class<T> iface) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public boolean isWrapperFor(Class<?> iface) throws SQLException { + throw getUnsupportedException(); + } + + /** + * Set the auto-close behavior. If enabled (the default), the result set is + * closed after reading the last row. + * + * @param autoClose the new value + */ + public void setAutoClose(boolean autoClose) { + this.autoClose = autoClose; + } + + /** + * Get the current auto-close behavior. 
+ * + * @return the auto-close value + */ + public boolean getAutoClose() { + return autoClose; + } + + /** + * This class holds the data of a result column. + */ + static class Column { + + /** + * The column label. + */ + String name; + + /** + * The column type name. + */ + String sqlTypeName; + + /** + * The SQL type. + */ + int sqlType; + + /** + * The precision. + */ + int precision; + + /** + * The scale. + */ + int scale; + } + + /** + * A simple array implementation, + * backed by an object array + */ + public static class SimpleArray implements Array { + + private final Object[] value; + + SimpleArray(Object[] value) { + this.value = value; + } + + /** + * Get the object array. + * + * @return the object array + */ + @Override + public Object getArray() { + return value; + } + + /** + * INTERNAL + */ + @Override + public Object getArray(Map<String, Class<?>> map) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Object getArray(long index, int count) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public Object getArray(long index, int count, Map<String, Class<?>> map) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * Get the base type of this array. + * + * @return Types.NULL + */ + @Override + public int getBaseType() { + return Types.NULL; + } + + /** + * Get the base type name of this array. 
+ * + * @return "NULL" + */ + @Override + public String getBaseTypeName() { + return "NULL"; + } + + /** + * INTERNAL + */ + @Override + public ResultSet getResultSet() throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public ResultSet getResultSet(Map<String, Class<?>> map) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public ResultSet getResultSet(long index, int count) + throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public ResultSet getResultSet(long index, int count, + Map<String, Class<?>> map) throws SQLException { + throw getUnsupportedException(); + } + + /** + * INTERNAL + */ + @Override + public void free() { + // nothing to do + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/SimpleRowSource.java b/modules/h2/src/main/java/org/h2/tools/SimpleRowSource.java new file mode 100644 index 0000000000000..d8607b508772e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/SimpleRowSource.java @@ -0,0 +1,34 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.sql.SQLException; + +/** + * This interface is for classes that create rows on demand. + * It is used together with SimpleResultSet to create a dynamic result set. + */ +public interface SimpleRowSource { + + /** + * Get the next row. Must return null if no more rows are available. + * + * @return the row or null + */ + Object[] readRow() throws SQLException; + + /** + * Close the row source. + */ + void close(); + + /** + * Reset the position (before the first row). 
+ * + * @throws SQLException if this operation is not supported + */ + void reset() throws SQLException; +} diff --git a/modules/h2/src/main/java/org/h2/tools/TriggerAdapter.java b/modules/h2/src/main/java/org/h2/tools/TriggerAdapter.java new file mode 100644 index 0000000000000..1e5c2b9306ef4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/TriggerAdapter.java @@ -0,0 +1,198 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.tools; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import org.h2.api.Trigger; + +/** + * An adapter for the trigger interface that allows using the ResultSet + * interface instead of a row array. + */ +public abstract class TriggerAdapter implements Trigger { + + /** + * The schema name. + */ + protected String schemaName; + + /** + * The name of the trigger. + */ + protected String triggerName; + + /** + * The name of the table. + */ + protected String tableName; + + /** + * Whether the fire method is called before or after the operation is + * performed. + */ + protected boolean before; + + /** + * The trigger type: INSERT, UPDATE, DELETE, SELECT, or a combination (a bit + * field). + */ + protected int type; + + private SimpleResultSet oldResultSet, newResultSet; + private TriggerRowSource oldSource, newSource; + + /** + * This method is called by the database engine once when initializing the + * trigger. It is called when the trigger is created, as well as when the + * database is opened. The default implementation initializes the result + * sets. 
+ * + * @param conn a connection to the database + * @param schemaName the name of the schema + * @param triggerName the name of the trigger used in the CREATE TRIGGER + * statement + * @param tableName the name of the table + * @param before whether the fire method is called before or after the + * operation is performed + * @param type the operation type: INSERT, UPDATE, DELETE, SELECT, or a + * combination (this parameter is a bit field) + */ + @Override + public void init(Connection conn, String schemaName, + String triggerName, String tableName, + boolean before, int type) throws SQLException { + ResultSet rs = conn.getMetaData().getColumns( + null, schemaName, tableName, null); + oldSource = new TriggerRowSource(); + newSource = new TriggerRowSource(); + oldResultSet = new SimpleResultSet(oldSource); + newResultSet = new SimpleResultSet(newSource); + while (rs.next()) { + String column = rs.getString("COLUMN_NAME"); + int dataType = rs.getInt("DATA_TYPE"); + int precision = rs.getInt("COLUMN_SIZE"); + int scale = rs.getInt("DECIMAL_DIGITS"); + oldResultSet.addColumn(column, dataType, precision, scale); + newResultSet.addColumn(column, dataType, precision, scale); + } + this.schemaName = schemaName; + this.triggerName = triggerName; + this.tableName = tableName; + this.before = before; + this.type = type; + } + + /** + * A row source that allows to set the next row. + */ + static class TriggerRowSource implements SimpleRowSource { + + private Object[] row; + + void setRow(Object[] row) { + this.row = row; + } + + @Override + public Object[] readRow() { + return row; + } + + @Override + public void close() { + // ignore + } + + @Override + public void reset() { + // ignore + } + + } + + /** + * This method is called for each triggered action. The method is called + * immediately when the operation occurred (before it is committed). 
A + * transaction rollback will also rollback the operations that were done + * within the trigger, if the operations occurred within the same database. + * If the trigger changes state outside the database, a rollback trigger + * should be used. + *

    + * The row arrays contain all columns of the table, in the same order + * as defined in the table. + *

    + *

    + * The default implementation calls the fire method with the ResultSet + * parameters. + *

    + * + * @param conn a connection to the database + * @param oldRow the old row, or null if no old row is available (for + * INSERT) + * @param newRow the new row, or null if no new row is available (for + * DELETE) + * @throws SQLException if the operation must be undone + */ + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + fire(conn, wrap(oldResultSet, oldSource, oldRow), + wrap(newResultSet, newSource, newRow)); + } + + /** + * This method is called for each triggered action by the default + * fire(Connection conn, Object[] oldRow, Object[] newRow) method. + * ResultSet.next does not need to be called (and calling it has no effect; + * it will always return true). + *

+ * <p> + * For "before" triggers, the new values of the new row may be changed + * using the ResultSet.updateX methods. + * </p>

    + * + * @param conn a connection to the database + * @param oldRow the old row, or null if no old row is available (for + * INSERT) + * @param newRow the new row, or null if no new row is available (for + * DELETE) + * @throws SQLException if the operation must be undone + */ + public abstract void fire(Connection conn, ResultSet oldRow, + ResultSet newRow) throws SQLException; + + private static SimpleResultSet wrap(SimpleResultSet rs, + TriggerRowSource source, Object[] row) throws SQLException { + if (row == null) { + return null; + } + source.setRow(row); + rs.next(); + return rs; + } + + /** + * This method is called when the database is closed. + * If the method throws an exception, it will be logged, but + * closing the database will continue. + * The default implementation does nothing. + */ + @Override + public void remove() throws SQLException { + // do nothing by default + } + + /** + * This method is called when the trigger is dropped. + * The default implementation does nothing. + */ + @Override + public void close() throws SQLException { + // do nothing by default + } + +} diff --git a/modules/h2/src/main/java/org/h2/tools/package.html b/modules/h2/src/main/java/org/h2/tools/package.html new file mode 100644 index 0000000000000..360cdcba96725 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/tools/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Various tools. Most tools are command line driven, but not all (for example the CSV tool). + +

\ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/upgrade/DbUpgrade.java b/modules/h2/src/main/java/org/h2/upgrade/DbUpgrade.java new file mode 100644 index 0000000000000..7526cb049041c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/upgrade/DbUpgrade.java @@ -0,0 +1,187 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.upgrade; + +import java.io.File; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Properties; +import java.util.UUID; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.Constants; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * This class starts the conversion from older database versions to the current + * version if the respective classes are found. + */ +public class DbUpgrade { + + private static final boolean UPGRADE_CLASSES_PRESENT; + + private static boolean scriptInTempDir; + private static boolean deleteOldDb; + + static { + UPGRADE_CLASSES_PRESENT = Utils.isClassPresent("org.h2.upgrade.v1_1.Driver"); + } + + /** + * If the upgrade classes are present, upgrade the database, or connect + * using the old version (if the parameter NO_UPGRADE is set to true). If + * the database is upgraded, or if no upgrade is possible or needed, this + * method returns null. 
+ * + * @param url the database URL + * @param info the properties + * @return the connection if connected with the old version (NO_UPGRADE) + */ + public static Connection connectOrUpgrade(String url, Properties info) + throws SQLException { + if (!UPGRADE_CLASSES_PRESENT) { + return null; + } + Properties i2 = new Properties(); + i2.putAll(info); + // clone so that the password (if set as a char array) is not cleared + Object o = info.get("password"); + if (o instanceof char[]) { + i2.put("password", StringUtils.cloneCharArray((char[]) o)); + } + info = i2; + ConnectionInfo ci = new ConnectionInfo(url, info); + if (ci.isRemote() || !ci.isPersistent()) { + return null; + } + String name = ci.getName(); + if (FileUtils.exists(name + Constants.SUFFIX_PAGE_FILE)) { + return null; + } + if (!FileUtils.exists(name + Constants.SUFFIX_OLD_DATABASE_FILE)) { + return null; + } + if (ci.removeProperty("NO_UPGRADE", false)) { + return connectWithOldVersion(url, info); + } + synchronized (DbUpgrade.class) { + upgrade(ci, info); + return null; + } + } + + /** + * The conversion script file will per default be created in the db + * directory. Use this method to change the directory to the temp + * directory. + * + * @param scriptInTempDir true if the conversion script should be + * located in the temp directory. + */ + public static void setScriptInTempDir(boolean scriptInTempDir) { + DbUpgrade.scriptInTempDir = scriptInTempDir; + } + + /** + * Old files will be renamed to .backup after a successful conversion. To + * delete them after the conversion, use this method with the parameter + * 'true'. + * + * @param deleteOldDb if true, the old db files will be deleted. 
+ */ + public static void setDeleteOldDb(boolean deleteOldDb) { + DbUpgrade.deleteOldDb = deleteOldDb; + } + + private static Connection connectWithOldVersion(String url, Properties info) + throws SQLException { + url = "jdbc:h2v1_1:" + url.substring("jdbc:h2:".length()) + + ";IGNORE_UNKNOWN_SETTINGS=TRUE"; + return DriverManager.getConnection(url, info); + } + + private static void upgrade(ConnectionInfo ci, Properties info) + throws SQLException { + String name = ci.getName(); + String data = name + Constants.SUFFIX_OLD_DATABASE_FILE; + String index = name + ".index.db"; + String lobs = name + ".lobs.db"; + String backupData = data + ".backup"; + String backupIndex = index + ".backup"; + String backupLobs = lobs + ".backup"; + String script = null; + try { + if (scriptInTempDir) { + new File(Utils.getProperty("java.io.tmpdir", ".")).mkdirs(); + script = File.createTempFile( + "h2dbmigration", "backup.sql").getAbsolutePath(); + } else { + script = name + ".script.sql"; + } + String oldUrl = "jdbc:h2v1_1:" + name + + ";UNDO_LOG=0;LOG=0;LOCK_MODE=0"; + String cipher = ci.getProperty("CIPHER", null); + if (cipher != null) { + oldUrl += ";CIPHER=" + cipher; + } + Connection conn = DriverManager.getConnection(oldUrl, info); + Statement stat = conn.createStatement(); + String uuid = UUID.randomUUID().toString(); + if (cipher != null) { + stat.execute("script to '" + script + + "' cipher aes password '" + uuid + "' --hide--"); + } else { + stat.execute("script to '" + script + "'"); + } + conn.close(); + FileUtils.move(data, backupData); + FileUtils.move(index, backupIndex); + if (FileUtils.exists(lobs)) { + FileUtils.move(lobs, backupLobs); + } + ci.removeProperty("IFEXISTS", false); + conn = new JdbcConnection(ci, true); + stat = conn.createStatement(); + if (cipher != null) { + stat.execute("runscript from '" + script + + "' cipher aes password '" + uuid + "' --hide--"); + } else { + stat.execute("runscript from '" + script + "'"); + } + stat.execute("analyze"); + 
stat.execute("shutdown compact"); + stat.close(); + conn.close(); + if (deleteOldDb) { + FileUtils.delete(backupData); + FileUtils.delete(backupIndex); + FileUtils.deleteRecursive(backupLobs, false); + } + } catch (Exception e) { + if (FileUtils.exists(backupData)) { + FileUtils.move(backupData, data); + } + if (FileUtils.exists(backupIndex)) { + FileUtils.move(backupIndex, index); + } + if (FileUtils.exists(backupLobs)) { + FileUtils.move(backupLobs, lobs); + } + FileUtils.delete(name + ".h2.db"); + throw DbException.toSQLException(e); + } finally { + if (script != null) { + FileUtils.delete(script); + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/upgrade/package.html b/modules/h2/src/main/java/org/h2/upgrade/package.html new file mode 100644 index 0000000000000..8f008a7186f59 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/upgrade/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation + + +Implementation of the database upgrade mechanism. + + \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/util/AbbaDetector.java b/modules/h2/src/main/java/org/h2/util/AbbaDetector.java new file mode 100644 index 0000000000000..080041fb21167 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/AbbaDetector.java @@ -0,0 +1,129 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.ArrayDeque; +import java.util.Deque; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; +import java.util.WeakHashMap; + +/** + * Utility to detect AB-BA deadlocks. 
+ */
+public class AbbaDetector {
+
+    private static final boolean TRACE = false;
+
+    private static final ThreadLocal<Deque<Object>> STACK =
+            new ThreadLocal<Deque<Object>>() {
+        @Override
+        protected Deque<Object> initialValue() {
+            return new ArrayDeque<>();
+        }
+    };
+
+    /**
+     * Map of (object A) -> (
+     *      map of (object locked before object A) ->
+     *      (stack trace where locked) )
+     */
+    private static final Map<Object, Map<Object, Exception>> LOCK_ORDERING =
+            new WeakHashMap<>();
+
+    private static final Set<String> KNOWN_DEADLOCKS = new HashSet<>();
+
+    /**
+     * This method is called just before or just after an object is
+     * synchronized.
+     *
+     * @param o the object, or null for the current class
+     * @return the object that was passed
+     */
+    public static Object begin(Object o) {
+        if (o == null) {
+            o = new SecurityManager() {
+                Class<?> clazz = getClassContext()[2];
+            }.clazz;
+        }
+        Deque<Object> stack = STACK.get();
+        if (!stack.isEmpty()) {
+            // Ignore locks which are locked multiple times in succession -
+            // Java locks are recursive
+            if (stack.contains(o)) {
+                // already synchronized on this
+                return o;
+            }
+            while (!stack.isEmpty()) {
+                Object last = stack.peek();
+                if (Thread.holdsLock(last)) {
+                    break;
+                }
+                stack.pop();
+            }
+        }
+        if (TRACE) {
+            String thread = "[thread " + Thread.currentThread().getId() + "]";
+            String indent = new String(new char[stack.size() * 2])
+                    .replace((char) 0, ' ');
+            System.out.println(thread + " " + indent +
+                    "sync " + getObjectName(o));
+        }
+        if (!stack.isEmpty()) {
+            markHigher(o, stack);
+        }
+        stack.push(o);
+        return o;
+    }
+
+    private static Object getTest(Object o) {
+        // return o.getClass();
+        return o;
+    }
+
+    private static String getObjectName(Object o) {
+        return o.getClass().getSimpleName() + "@" + System.identityHashCode(o);
+    }
+
+    private static synchronized void markHigher(Object o, Deque<Object> older) {
+        Object test = getTest(o);
+        Map<Object, Exception> map = LOCK_ORDERING.get(test);
+        if (map == null) {
+            map = new WeakHashMap<>();
+            LOCK_ORDERING.put(test, map);
+        }
+        Exception oldException = null;
+        for (Object
old : older) {
+            Object oldTest = getTest(old);
+            if (oldTest == test) {
+                continue;
+            }
+            Map<Object, Exception> oldMap = LOCK_ORDERING.get(oldTest);
+            if (oldMap != null) {
+                Exception e = oldMap.get(test);
+                if (e != null) {
+                    String deadlockType = test.getClass() + " " + oldTest.getClass();
+                    if (!KNOWN_DEADLOCKS.contains(deadlockType)) {
+                        String message = getObjectName(test) +
+                                " synchronized after \n " + getObjectName(oldTest) +
+                                ", but in the past before";
+                        RuntimeException ex = new RuntimeException(message);
+                        ex.initCause(e);
+                        ex.printStackTrace(System.out);
+                        // throw ex;
+                        KNOWN_DEADLOCKS.add(deadlockType);
+                    }
+                }
+            }
+            if (!map.containsKey(oldTest)) {
+                if (oldException == null) {
+                    oldException = new Exception("Before");
+                }
+                map.put(oldTest, oldException);
+            }
+        }
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/util/AbbaLockingDetector.java b/modules/h2/src/main/java/org/h2/util/AbbaLockingDetector.java
new file mode 100644
index 0000000000000..b11d30a633f95
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/util/AbbaLockingDetector.java
@@ -0,0 +1,262 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, Version
+ * 1.0, and under the Eclipse Public License, Version 1.0
+ * (http://h2database.com/html/license.html). Initial Developer: H2 Group
+ */
+package org.h2.util;
+
+import java.lang.management.ManagementFactory;
+import java.lang.management.MonitorInfo;
+import java.lang.management.ThreadInfo;
+import java.lang.management.ThreadMXBean;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.WeakHashMap;
+
+/**
+ * Utility to detect AB-BA deadlocks.
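The bookkeeping that AbbaDetector performs — remember, for each lock, which other locks were ever held before it, and flag the first time the reverse order appears — can be sketched in a few lines. The class below is an illustrative stand-in written for this note, not part of H2; all names are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of AB-BA bookkeeping: heldBefore maps each lock to the set
// of locks that were held at the moment it was acquired. A later acquisition
// that reverses a recorded ordering is reported as a potential deadlock.
public class AbbaSketch {
    private final Map<String, Set<String>> heldBefore = new HashMap<>();

    /**
     * Record that the locks in 'stack' were held when 'lock' was acquired.
     * Returns the conflicting older lock if the reverse order was seen
     * before, or null if the ordering is consistent so far. On a conflict
     * the lock is not pushed (this sketch just reports and stops).
     */
    public String acquire(String lock, Deque<String> stack) {
        for (String older : stack) {
            Set<String> before = heldBefore.get(older);
            if (before != null && before.contains(lock)) {
                return older; // reversal: 'lock' was once held before 'older'
            }
            heldBefore.computeIfAbsent(lock, k -> new HashSet<>()).add(older);
        }
        stack.push(lock);
        return null;
    }

    public static void main(String[] args) {
        AbbaSketch d = new AbbaSketch();
        Deque<String> t1 = new ArrayDeque<>();
        d.acquire("A", t1);              // thread 1: A then B
        d.acquire("B", t1);
        Deque<String> t2 = new ArrayDeque<>();
        d.acquire("B", t2);              // thread 2: B then A -> reversal
        System.out.println(d.acquire("A", t2)); // reports the conflict
    }
}
```

The real detector keys on objects (with weak references) and stores stack traces rather than lock names, but the ordering logic is the same shape.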
+ */
+public class AbbaLockingDetector implements Runnable {
+
+    private final int tickIntervalMs = 2;
+    private volatile boolean stop;
+
+    private final ThreadMXBean threadMXBean =
+            ManagementFactory.getThreadMXBean();
+    private Thread thread;
+
+    /**
+     * Map of (object A) -> ( map of (object locked before object A) ->
+     * (stack trace where locked) )
+     */
+    private final Map<String, Map<String, String>> lockOrdering =
+            new WeakHashMap<>();
+    private final Set<String> knownDeadlocks = new HashSet<>();
+
+    /**
+     * Start collecting locking data.
+     *
+     * @return this
+     */
+    public AbbaLockingDetector startCollecting() {
+        thread = new Thread(this, "AbbaLockingDetector");
+        thread.setDaemon(true);
+        thread.start();
+        return this;
+    }
+
+    /**
+     * Reset the state.
+     */
+    public synchronized void reset() {
+        lockOrdering.clear();
+        knownDeadlocks.clear();
+    }
+
+    /**
+     * Stop collecting.
+     *
+     * @return this
+     */
+    public AbbaLockingDetector stopCollecting() {
+        stop = true;
+        if (thread != null) {
+            try {
+                thread.join();
+            } catch (InterruptedException e) {
+                // ignore
+            }
+            thread = null;
+        }
+        return this;
+    }
+
+    @Override
+    public void run() {
+        while (!stop) {
+            try {
+                tick();
+            } catch (Throwable t) {
+                break;
+            }
+        }
+    }
+
+    private void tick() {
+        if (tickIntervalMs > 0) {
+            try {
+                Thread.sleep(tickIntervalMs);
+            } catch (InterruptedException ex) {
+                // ignore
+            }
+        }
+
+        ThreadInfo[] list = threadMXBean.dumpAllThreads(
+                // lockedMonitors
+                true,
+                // lockedSynchronizers
+                false);
+        processThreadList(list);
+    }
+
+    private void processThreadList(ThreadInfo[] threadInfoList) {
+        final List<String> lockOrder = new ArrayList<>();
+        for (ThreadInfo threadInfo : threadInfoList) {
+            lockOrder.clear();
+            generateOrdering(lockOrder, threadInfo);
+            if (lockOrder.size() > 1) {
+                markHigher(lockOrder, threadInfo);
+            }
+        }
+    }
+
+    /**
+     * We cannot simply call getLockedMonitors because it is not guaranteed to
+     * return the locks in the correct order.
+ */
+    private static void generateOrdering(final List<String> lockOrder,
+            ThreadInfo info) {
+        final MonitorInfo[] lockedMonitors = info.getLockedMonitors();
+        Arrays.sort(lockedMonitors, new Comparator<MonitorInfo>() {
+            @Override
+            public int compare(MonitorInfo a, MonitorInfo b) {
+                return b.getLockedStackDepth() - a.getLockedStackDepth();
+            }
+        });
+        for (MonitorInfo mi : lockedMonitors) {
+            String lockName = getObjectName(mi);
+            if (lockName.equals("sun.misc.Launcher$AppClassLoader")) {
+                // ignore, it shows up everywhere
+                continue;
+            }
+            // Ignore locks which are locked multiple times in
+            // succession - Java locks are recursive.
+            if (!lockOrder.contains(lockName)) {
+                lockOrder.add(lockName);
+            }
+        }
+    }
+
+    private synchronized void markHigher(List<String> lockOrder,
+            ThreadInfo threadInfo) {
+        String topLock = lockOrder.get(lockOrder.size() - 1);
+        Map<String, String> map = lockOrdering.get(topLock);
+        if (map == null) {
+            map = new WeakHashMap<>();
+            lockOrdering.put(topLock, map);
+        }
+        String oldException = null;
+        for (int i = 0; i < lockOrder.size() - 1; i++) {
+            String olderLock = lockOrder.get(i);
+            Map<String, String> oldMap = lockOrdering.get(olderLock);
+            boolean foundDeadLock = false;
+            if (oldMap != null) {
+                String e = oldMap.get(topLock);
+                if (e != null) {
+                    foundDeadLock = true;
+                    String deadlockType = topLock + " " + olderLock;
+                    if (!knownDeadlocks.contains(deadlockType)) {
+                        System.out.println(topLock + " synchronized after \n " + olderLock +
+                                ", but in the past before\n" + "AFTER\n" +
+                                getStackTraceForThread(threadInfo) +
+                                "BEFORE\n" + e);
+                        knownDeadlocks.add(deadlockType);
+                    }
+                }
+            }
+            if (!foundDeadLock && !map.containsKey(olderLock)) {
+                if (oldException == null) {
+                    oldException = getStackTraceForThread(threadInfo);
+                }
+                map.put(olderLock, oldException);
+            }
+        }
+    }
+
+    /**
+     * Dump data in the same format as {@link ThreadInfo#toString()}, but with
+     * some modifications (no stack frame limit, and removal of uninteresting
+     * stack frames)
+     */
+    private static String
getStackTraceForThread(ThreadInfo info) { + StringBuilder sb = new StringBuilder().append('"') + .append(info.getThreadName()).append("\"" + " Id=") + .append(info.getThreadId()).append(' ').append(info.getThreadState()); + if (info.getLockName() != null) { + sb.append(" on ").append(info.getLockName()); + } + if (info.getLockOwnerName() != null) { + sb.append(" owned by \"").append(info.getLockOwnerName()) + .append("\" Id=").append(info.getLockOwnerId()); + } + if (info.isSuspended()) { + sb.append(" (suspended)"); + } + if (info.isInNative()) { + sb.append(" (in native)"); + } + sb.append('\n'); + final StackTraceElement[] stackTrace = info.getStackTrace(); + final MonitorInfo[] lockedMonitors = info.getLockedMonitors(); + boolean startDumping = false; + for (int i = 0; i < stackTrace.length; i++) { + StackTraceElement e = stackTrace[i]; + if (startDumping) { + dumpStackTraceElement(info, sb, i, e); + } + + for (MonitorInfo mi : lockedMonitors) { + if (mi.getLockedStackDepth() == i) { + // Only start dumping the stack from the first time we lock + // something. + // Removes a lot of unnecessary noise from the output. 
+ if (!startDumping) { + dumpStackTraceElement(info, sb, i, e); + startDumping = true; + } + sb.append("\t- locked ").append(mi); + sb.append('\n'); + } + } + } + return sb.toString(); + } + + private static void dumpStackTraceElement(ThreadInfo info, + StringBuilder sb, int i, StackTraceElement e) { + sb.append('\t').append("at ").append(e) + .append('\n'); + if (i == 0 && info.getLockInfo() != null) { + Thread.State ts = info.getThreadState(); + switch (ts) { + case BLOCKED: + sb.append("\t- blocked on ") + .append(info.getLockInfo()) + .append('\n'); + break; + case WAITING: + sb.append("\t- waiting on ") + .append(info.getLockInfo()) + .append('\n'); + break; + case TIMED_WAITING: + sb.append("\t- waiting on ") + .append(info.getLockInfo()) + .append('\n'); + break; + default: + } + } + } + + private static String getObjectName(MonitorInfo info) { + return info.getClassName() + "@" + + Integer.toHexString(info.getIdentityHashCode()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/BitField.java b/modules/h2/src/main/java/org/h2/util/BitField.java new file mode 100644 index 0000000000000..ffac01c2c0405 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/BitField.java @@ -0,0 +1,188 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.Arrays; + +/** + * A list of bits. + */ +public final class BitField { + + private static final int ADDRESS_BITS = 6; + private static final int BITS = 64; + private static final int ADDRESS_MASK = BITS - 1; + private long[] data; + private int maxLength; + + public BitField() { + this(64); + } + + public BitField(int capacity) { + data = new long[capacity >>> 3]; + } + + /** + * Get the index of the next bit that is not set. 
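AbbaLockingDetector is built entirely on one JMX call: `ThreadMXBean.dumpAllThreads(lockedMonitors, lockedSynchronizers)`, which snapshots every live thread together with the monitors it holds. A minimal demonstration of that standard `java.lang.management` API (the class name below is invented for this note):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Demonstrates the snapshot the detector samples on every tick:
// lockedMonitors=true asks the JVM to report, for each thread, the object
// monitors it currently holds (with the stack depth where each was taken).
public class DumpDemo {
    public static ThreadInfo[] snapshot() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        return bean.dumpAllThreads(true, false);
    }

    public static void main(String[] args) {
        Object lock = new Object();
        synchronized (lock) {
            ThreadInfo[] infos = snapshot();
            String self = Thread.currentThread().getName();
            for (ThreadInfo ti : infos) {
                if (self.equals(ti.getThreadName())) {
                    // the held monitor on 'lock' should appear here
                    System.out.println("monitors held: "
                            + ti.getLockedMonitors().length);
                }
            }
        }
    }
}
```

Sampling this way is approximate (locks acquired and released between ticks are invisible), which is why the detector runs on a short interval rather than instrumenting every `synchronized` block like AbbaDetector does.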
+ * + * @param fromIndex where to start searching + * @return the index of the next disabled bit + */ + public int nextClearBit(int fromIndex) { + int i = fromIndex >> ADDRESS_BITS; + int max = data.length; + for (; i < max; i++) { + if (data[i] == -1) { + continue; + } + int j = Math.max(fromIndex, i << ADDRESS_BITS); + for (int end = j + 64; j < end; j++) { + if (!get(j)) { + return j; + } + } + } + return max << ADDRESS_BITS; + } + + /** + * Get the bit at the given index. + * + * @param i the index + * @return true if the bit is enabled + */ + public boolean get(int i) { + int addr = i >> ADDRESS_BITS; + if (addr >= data.length) { + return false; + } + return (data[addr] & getBitMask(i)) != 0; + } + + /** + * Get the next 8 bits at the given index. + * The index must be a multiple of 8. + * + * @param i the index + * @return the next 8 bits + */ + public int getByte(int i) { + int addr = i >> ADDRESS_BITS; + if (addr >= data.length) { + return 0; + } + return (int) (data[addr] >>> (i & (7 << 3)) & 255); + } + + /** + * Combine the next 8 bits at the given index with OR. + * The index must be a multiple of 8. + * + * @param i the index + * @param x the next 8 bits (0 - 255) + */ + public void setByte(int i, int x) { + int addr = i >> ADDRESS_BITS; + checkCapacity(addr); + data[addr] |= ((long) x) << (i & (7 << 3)); + if (maxLength < i && x != 0) { + maxLength = i + 7; + } + } + + /** + * Set bit at the given index to 'true'. + * + * @param i the index + */ + public void set(int i) { + int addr = i >> ADDRESS_BITS; + checkCapacity(addr); + data[addr] |= getBitMask(i); + if (maxLength < i) { + maxLength = i; + } + } + + /** + * Set bit at the given index to 'false'. 
+ * + * @param i the index + */ + public void clear(int i) { + int addr = i >> ADDRESS_BITS; + if (addr >= data.length) { + return; + } + data[addr] &= ~getBitMask(i); + } + + private static long getBitMask(int i) { + return 1L << (i & ADDRESS_MASK); + } + + private void checkCapacity(int size) { + if (size >= data.length) { + expandCapacity(size); + } + } + + private void expandCapacity(int size) { + while (size >= data.length) { + int newSize = data.length == 0 ? 1 : data.length * 2; + data = Arrays.copyOf(data, newSize); + } + } + + /** + * Enable or disable a number of bits. + * + * @param fromIndex the index of the first bit to enable or disable + * @param toIndex one plus the index of the last bit to enable or disable + * @param value the new value + */ + public void set(int fromIndex, int toIndex, boolean value) { + // go backwards so that OutOfMemory happens + // before some bytes are modified + for (int i = toIndex - 1; i >= fromIndex; i--) { + set(i, value); + } + if (value) { + if (toIndex > maxLength) { + maxLength = toIndex; + } + } else { + if (toIndex >= maxLength) { + maxLength = fromIndex; + } + } + } + + private void set(int i, boolean value) { + if (value) { + set(i); + } else { + clear(i); + } + } + + /** + * Get the index of the highest set bit plus one, or 0 if no bits are set. + * + * @return the length of the bit field + */ + public int length() { + int m = maxLength >> ADDRESS_BITS; + while (m > 0 && data[m] == 0) { + m--; + } + maxLength = (m << ADDRESS_BITS) + + (64 - Long.numberOfLeadingZeros(data[m])); + return maxLength; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/Bits.java b/modules/h2/src/main/java/org/h2/util/Bits.java new file mode 100644 index 0000000000000..b4c88a29c2af5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/Bits.java @@ -0,0 +1,178 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.UUID; + +/** + * Manipulations with bytes and arrays. This class can be overridden in + * multi-release JAR with more efficient implementation for a newer versions of + * Java. + */ +public final class Bits { + + /* + * Signatures of methods should match with + * h2/src/java9/src/org/h2/util/Bits.java and precompiled + * h2/src/java9/precompiled/org/h2/util/Bits.class. + */ + + /** + * Compare the contents of two byte arrays. If the content or length of the + * first array is smaller than the second array, -1 is returned. If the content + * or length of the second array is smaller than the first array, 1 is returned. + * If the contents and lengths are the same, 0 is returned. + * + *
+     * <p>This method interprets bytes as signed.</p>
    + * + * @param data1 + * the first byte array (must not be null) + * @param data2 + * the second byte array (must not be null) + * @return the result of the comparison (-1, 1 or 0) + */ + public static int compareNotNullSigned(byte[] data1, byte[] data2) { + if (data1 == data2) { + return 0; + } + int len = Math.min(data1.length, data2.length); + for (int i = 0; i < len; i++) { + byte b = data1[i]; + byte b2 = data2[i]; + if (b != b2) { + return b > b2 ? 1 : -1; + } + } + return Integer.signum(data1.length - data2.length); + } + + /** + * Compare the contents of two byte arrays. If the content or length of the + * first array is smaller than the second array, -1 is returned. If the content + * or length of the second array is smaller than the first array, 1 is returned. + * If the contents and lengths are the same, 0 is returned. + * + *
+     * <p>This method interprets bytes as unsigned.</p>
    + * + * @param data1 + * the first byte array (must not be null) + * @param data2 + * the second byte array (must not be null) + * @return the result of the comparison (-1, 1 or 0) + */ + public static int compareNotNullUnsigned(byte[] data1, byte[] data2) { + if (data1 == data2) { + return 0; + } + int len = Math.min(data1.length, data2.length); + for (int i = 0; i < len; i++) { + int b = data1[i] & 0xff; + int b2 = data2[i] & 0xff; + if (b != b2) { + return b > b2 ? 1 : -1; + } + } + return Integer.signum(data1.length - data2.length); + } + + /** + * Reads a int value from the byte array at the given position in big-endian + * order. + * + * @param buff + * the byte array + * @param pos + * the position + * @return the value + */ + public static int readInt(byte[] buff, int pos) { + return (buff[pos++] << 24) + ((buff[pos++] & 0xff) << 16) + ((buff[pos++] & 0xff) << 8) + (buff[pos] & 0xff); + } + + /** + * Reads a long value from the byte array at the given position in big-endian + * order. + * + * @param buff + * the byte array + * @param pos + * the position + * @return the value + */ + public static long readLong(byte[] buff, int pos) { + return (((long) readInt(buff, pos)) << 32) + (readInt(buff, pos + 4) & 0xffffffffL); + } + + /** + * Converts UUID value to byte array in big-endian order. + * + * @param msb + * most significant part of UUID + * @param lsb + * least significant part of UUID + * @return byte array representation + */ + public static byte[] uuidToBytes(long msb, long lsb) { + byte[] buff = new byte[16]; + for (int i = 0; i < 8; i++) { + buff[i] = (byte) ((msb >> (8 * (7 - i))) & 0xff); + buff[8 + i] = (byte) ((lsb >> (8 * (7 - i))) & 0xff); + } + return buff; + } + + /** + * Converts UUID value to byte array in big-endian order. 
+ * + * @param uuid + * UUID value + * @return byte array representation + */ + public static byte[] uuidToBytes(UUID uuid) { + return uuidToBytes(uuid.getMostSignificantBits(), uuid.getLeastSignificantBits()); + } + + /** + * Writes a int value to the byte array at the given position in big-endian + * order. + * + * @param buff + * the byte array + * @param pos + * the position + * @param x + * the value to write + */ + public static void writeInt(byte[] buff, int pos, int x) { + buff[pos++] = (byte) (x >> 24); + buff[pos++] = (byte) (x >> 16); + buff[pos++] = (byte) (x >> 8); + buff[pos] = (byte) x; + } + + /** + * Writes a long value to the byte array at the given position in big-endian + * order. + * + * @param buff + * the byte array + * @param pos + * the position + * @param x + * the value to write + */ + public static void writeLong(byte[] buff, int pos, long x) { + writeInt(buff, pos, (int) (x >> 32)); + writeInt(buff, pos + 4, (int) x); + } + + private Bits() { + } +} diff --git a/modules/h2/src/main/java/org/h2/util/Cache.java b/modules/h2/src/main/java/org/h2/util/Cache.java new file mode 100644 index 0000000000000..f3a0ece05a074 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/Cache.java @@ -0,0 +1,92 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.ArrayList; + +/** + * The cache keeps frequently used objects in the main memory. + */ +public interface Cache { + + /** + * Get all objects in the cache that have been changed. + * + * @return the list of objects + */ + ArrayList getAllChanged(); + + /** + * Clear the cache. + */ + void clear(); + + /** + * Get an element in the cache if it is available. + * This will move the item to the front of the list. 
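The big-endian layout used by `Bits.readInt`/`writeInt` can be cross-checked against `java.nio.ByteBuffer`, whose default byte order is also big-endian. The loops below mirror the patch's helpers; the class name is invented for this note:

```java
import java.nio.ByteBuffer;

// Round-trips an int through a byte array in big-endian order, the same
// layout Bits uses, and confirms it against ByteBuffer's default order.
public class BitsDemo {
    public static void writeInt(byte[] buff, int pos, int x) {
        buff[pos] = (byte) (x >> 24);       // most significant byte first
        buff[pos + 1] = (byte) (x >> 16);
        buff[pos + 2] = (byte) (x >> 8);
        buff[pos + 3] = (byte) x;
    }

    public static int readInt(byte[] buff, int pos) {
        // the first byte is sign-extended by the shift; the rest are masked
        return (buff[pos] << 24) + ((buff[pos + 1] & 0xff) << 16)
                + ((buff[pos + 2] & 0xff) << 8) + (buff[pos + 3] & 0xff);
    }

    public static void main(String[] args) {
        byte[] buff = new byte[4];
        writeInt(buff, 0, 0xCAFEBABE);
        System.out.println(readInt(buff, 0) == 0xCAFEBABE);               // true
        System.out.println(ByteBuffer.wrap(buff).getInt() == 0xCAFEBABE); // true
    }
}
```

Note the masking: only the leading byte may rely on sign extension, which is why `readInt` masks the three lower bytes with `0xff` before shifting.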
+ * + * @param pos the unique key of the element + * @return the element or null + */ + CacheObject get(int pos); + + /** + * Add an element to the cache. Other items may fall out of the cache + * because of this. It is not allowed to add the same record twice. + * + * @param r the object + */ + void put(CacheObject r); + + /** + * Update an element in the cache. + * This will move the item to the front of the list. + * + * @param pos the unique key of the element + * @param record the element + * @return the element + */ + CacheObject update(int pos, CacheObject record); + + /** + * Remove an object from the cache. + * + * @param pos the unique key of the element + * @return true if the key was in the cache + */ + boolean remove(int pos); + + /** + * Get an element from the cache if it is available. + * This will not move the item to the front of the list. + * + * @param pos the unique key of the element + * @return the element or null + */ + CacheObject find(int pos); + + /** + * Set the maximum memory to be used by this cache. + * + * @param size the maximum size in KB + */ + void setMaxMemory(int size); + + /** + * Get the maximum memory to be used. + * + * @return the maximum size in KB + */ + int getMaxMemory(); + + /** + * Get the used size in KB. + * + * @return the current size in KB + */ + int getMemory(); + +} diff --git a/modules/h2/src/main/java/org/h2/util/CacheHead.java b/modules/h2/src/main/java/org/h2/util/CacheHead.java new file mode 100644 index 0000000000000..7d7bb4d1634ec --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/CacheHead.java @@ -0,0 +1,23 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +/** + * The head element of the linked list. 
+ */ +public class CacheHead extends CacheObject { + + @Override + public boolean canRemove() { + return false; + } + + @Override + public int getMemory() { + return 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/CacheLRU.java b/modules/h2/src/main/java/org/h2/util/CacheLRU.java new file mode 100644 index 0000000000000..d634d36b56e35 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/CacheLRU.java @@ -0,0 +1,388 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.Map; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; + +/** + * A cache implementation based on the last recently used (LRU) algorithm. + */ +public class CacheLRU implements Cache { + + static final String TYPE_NAME = "LRU"; + + private final CacheWriter writer; + + /** + * Use First-In-First-Out (don't move recently used items to the front of + * the queue). + */ + private final boolean fifo; + + private final CacheObject head = new CacheHead(); + private final int mask; + private CacheObject[] values; + private int recordCount; + + /** + * The number of cache buckets. + */ + private final int len; + + /** + * The maximum memory, in words (4 bytes each). + */ + private long maxMemory; + + /** + * The current memory used in this cache, in words (4 bytes each). + */ + private long memory; + + CacheLRU(CacheWriter writer, int maxMemoryKb, boolean fifo) { + this.writer = writer; + this.fifo = fifo; + this.setMaxMemory(maxMemoryKb); + try { + // Since setMaxMemory() ensures that maxMemory is >=0, + // we don't have to worry about an underflow. 
+ long tmpLen = maxMemory / 64; + if (tmpLen > Integer.MAX_VALUE) { + throw new IllegalArgumentException(); + } + this.len = MathUtils.nextPowerOf2((int) tmpLen); + } catch (IllegalArgumentException e) { + throw new IllegalStateException("This much cache memory is not supported: " + maxMemoryKb + "kb", e); + } + this.mask = len - 1; + clear(); + } + + /** + * Create a cache of the given type and size. + * + * @param writer the cache writer + * @param cacheType the cache type + * @param cacheSize the size + * @return the cache object + */ + public static Cache getCache(CacheWriter writer, String cacheType, + int cacheSize) { + Map secondLevel = null; + if (cacheType.startsWith("SOFT_")) { + secondLevel = new SoftHashMap<>(); + cacheType = cacheType.substring("SOFT_".length()); + } + Cache cache; + if (CacheLRU.TYPE_NAME.equals(cacheType)) { + cache = new CacheLRU(writer, cacheSize, false); + } else if (CacheTQ.TYPE_NAME.equals(cacheType)) { + cache = new CacheTQ(writer, cacheSize); + } else { + throw DbException.getInvalidValueException("CACHE_TYPE", cacheType); + } + if (secondLevel != null) { + cache = new CacheSecondLevel(cache, secondLevel); + } + return cache; + } + + @Override + public void clear() { + head.cacheNext = head.cachePrevious = head; + // first set to null - avoiding out of memory + values = null; + values = new CacheObject[len]; + recordCount = 0; + memory = len * (long)Constants.MEMORY_POINTER; + } + + @Override + public void put(CacheObject rec) { + if (SysProperties.CHECK) { + int pos = rec.getPos(); + CacheObject old = find(pos); + if (old != null) { + DbException + .throwInternalError("try to add a record twice at pos " + + pos); + } + } + int index = rec.getPos() & mask; + rec.cacheChained = values[index]; + values[index] = rec; + recordCount++; + memory += rec.getMemory(); + addToFront(rec); + removeOldIfRequired(); + } + + @Override + public CacheObject update(int pos, CacheObject rec) { + CacheObject old = find(pos); + if (old == null) { 
+ put(rec); + } else { + if (SysProperties.CHECK) { + if (old != rec) { + DbException.throwInternalError("old!=record pos:" + pos + + " old:" + old + " new:" + rec); + } + } + if (!fifo) { + removeFromLinkedList(rec); + addToFront(rec); + } + } + return old; + } + + private void removeOldIfRequired() { + // a small method, to allow inlining + if (memory >= maxMemory) { + removeOld(); + } + } + + private void removeOld() { + int i = 0; + ArrayList changed = New.arrayList(); + long mem = memory; + int rc = recordCount; + boolean flushed = false; + CacheObject next = head.cacheNext; + while (true) { + if (rc <= Constants.CACHE_MIN_RECORDS) { + break; + } + if (changed.isEmpty()) { + if (mem <= maxMemory) { + break; + } + } else { + if (mem * 4 <= maxMemory * 3) { + break; + } + } + CacheObject check = next; + next = check.cacheNext; + i++; + if (i >= recordCount) { + if (!flushed) { + writer.flushLog(); + flushed = true; + i = 0; + } else { + // can't remove any record, because the records can not be + // removed hopefully this does not happen frequently, but it + // can happen + writer.getTrace() + .info("cannot remove records, cache size too small? 
records:" + + recordCount + " memory:" + memory); + break; + } + } + if (SysProperties.CHECK && check == head) { + DbException.throwInternalError("try to remove head"); + } + // we are not allowed to remove it if the log is not yet written + // (because we need to log before writing the data) + // also, can't write it if the record is pinned + if (!check.canRemove()) { + removeFromLinkedList(check); + addToFront(check); + continue; + } + rc--; + mem -= check.getMemory(); + if (check.isChanged()) { + changed.add(check); + } else { + remove(check.getPos()); + } + } + if (!changed.isEmpty()) { + if (!flushed) { + writer.flushLog(); + } + Collections.sort(changed); + long max = maxMemory; + int size = changed.size(); + try { + // temporary disable size checking, + // to avoid stack overflow + maxMemory = Long.MAX_VALUE; + for (i = 0; i < size; i++) { + CacheObject rec = changed.get(i); + writer.writeBack(rec); + } + } finally { + maxMemory = max; + } + for (i = 0; i < size; i++) { + CacheObject rec = changed.get(i); + remove(rec.getPos()); + if (SysProperties.CHECK) { + if (rec.cacheNext != null) { + throw DbException.throwInternalError(); + } + } + } + } + } + + private void addToFront(CacheObject rec) { + if (SysProperties.CHECK && rec == head) { + DbException.throwInternalError("try to move head"); + } + rec.cacheNext = head; + rec.cachePrevious = head.cachePrevious; + rec.cachePrevious.cacheNext = rec; + head.cachePrevious = rec; + } + + private void removeFromLinkedList(CacheObject rec) { + if (SysProperties.CHECK && rec == head) { + DbException.throwInternalError("try to remove head"); + } + rec.cachePrevious.cacheNext = rec.cacheNext; + rec.cacheNext.cachePrevious = rec.cachePrevious; + // TODO cache: mystery: why is this required? 
needs more memory if we + // don't do this + rec.cacheNext = null; + rec.cachePrevious = null; + } + + @Override + public boolean remove(int pos) { + int index = pos & mask; + CacheObject rec = values[index]; + if (rec == null) { + return false; + } + if (rec.getPos() == pos) { + values[index] = rec.cacheChained; + } else { + CacheObject last; + do { + last = rec; + rec = rec.cacheChained; + if (rec == null) { + return false; + } + } while (rec.getPos() != pos); + last.cacheChained = rec.cacheChained; + } + recordCount--; + memory -= rec.getMemory(); + removeFromLinkedList(rec); + if (SysProperties.CHECK) { + rec.cacheChained = null; + CacheObject o = find(pos); + if (o != null) { + DbException.throwInternalError("not removed: " + o); + } + } + return true; + } + + @Override + public CacheObject find(int pos) { + CacheObject rec = values[pos & mask]; + while (rec != null && rec.getPos() != pos) { + rec = rec.cacheChained; + } + return rec; + } + + @Override + public CacheObject get(int pos) { + CacheObject rec = find(pos); + if (rec != null) { + if (!fifo) { + removeFromLinkedList(rec); + addToFront(rec); + } + } + return rec; + } + + // private void testConsistency() { + // int s = size; + // HashSet set = new HashSet(); + // for(int i=0; i { + + /** + * The previous element in the LRU linked list. If the previous element is + * the head, then this element is the most recently used object. + */ + public CacheObject cachePrevious; + + /** + * The next element in the LRU linked list. If the next element is the head, + * then this element is the least recently used object. + */ + public CacheObject cacheNext; + + /** + * The next element in the hash chain. + */ + public CacheObject cacheChained; + + private int pos; + private boolean changed; + + /** + * Check if the object can be removed from the cache. + * For example pinned objects can not be removed. 
+ * + * @return true if it can be removed + */ + public abstract boolean canRemove(); + + /** + * Get the estimated used memory. + * + * @return number of words (one word is 4 bytes) + */ + public abstract int getMemory(); + + public void setPos(int pos) { + if (SysProperties.CHECK) { + if (cachePrevious != null || cacheNext != null || cacheChained != null) { + DbException.throwInternalError("setPos too late"); + } + } + this.pos = pos; + } + + public int getPos() { + return pos; + } + + /** + * Check if this cache object has been changed and thus needs to be written + * back to the storage. + * + * @return if it has been changed + */ + public boolean isChanged() { + return changed; + } + + public void setChanged(boolean b) { + changed = b; + } + + @Override + public int compareTo(CacheObject other) { + return Integer.compare(getPos(), other.getPos()); + } + + public boolean isStream() { + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/CacheSecondLevel.java b/modules/h2/src/main/java/org/h2/util/CacheSecondLevel.java new file mode 100644 index 0000000000000..0266055650dac --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/CacheSecondLevel.java @@ -0,0 +1,89 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Jan Kotek + */ +package org.h2.util; + +import java.util.ArrayList; +import java.util.Map; + +/** + * Cache which wraps another cache (proxy pattern) and adds caching using map. + * This is useful for WeakReference, SoftReference or hard reference cache. 
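The fall-through behaviour of the wrapper described above — look in the primary cache first, then in a backing map that still holds evicted entries — can be sketched with standard collections. This is an illustrative stand-in, not H2 code: a bounded `LinkedHashMap` plays the role of the LRU cache and a plain `HashMap` stands in for H2's `SoftHashMap`:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Two-level lookup: a tiny LRU primary cache backed by an unbounded map.
// Entries evicted from the primary remain reachable via the second level.
public class SecondLevelSketch {
    private final Map<Integer, String> primary =
            new LinkedHashMap<Integer, String>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Integer, String> e) {
            return size() > 2; // tiny capacity to force eviction
        }
    };
    private final Map<Integer, String> secondary = new HashMap<>();

    public void put(int pos, String v) {
        primary.put(pos, v);
        secondary.put(pos, v);
    }

    public String find(int pos) {
        String v = primary.get(pos);
        return v != null ? v : secondary.get(pos);
    }

    public static void main(String[] args) {
        SecondLevelSketch c = new SecondLevelSketch();
        c.put(1, "a");
        c.put(2, "b");
        c.put(3, "c"); // evicts entry 1 from the primary cache
        System.out.println(c.find(1)); // still found, via the second level
    }
}
```

H2 uses a `SoftHashMap` for the second level so that the extra entries survive only while heap pressure allows, which is the point of the soft-reference variant selected by the `SOFT_` cache-type prefix in `CacheLRU.getCache`.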
+ */ +class CacheSecondLevel implements Cache { + + private final Cache baseCache; + private final Map map; + + CacheSecondLevel(Cache cache, Map map) { + this.baseCache = cache; + this.map = map; + } + + @Override + public void clear() { + map.clear(); + baseCache.clear(); + } + + @Override + public CacheObject find(int pos) { + CacheObject ret = baseCache.find(pos); + if (ret == null) { + ret = map.get(pos); + } + return ret; + } + + @Override + public CacheObject get(int pos) { + CacheObject ret = baseCache.get(pos); + if (ret == null) { + ret = map.get(pos); + } + return ret; + } + + @Override + public ArrayList getAllChanged() { + return baseCache.getAllChanged(); + } + + @Override + public int getMaxMemory() { + return baseCache.getMaxMemory(); + } + + @Override + public int getMemory() { + return baseCache.getMemory(); + } + + @Override + public void put(CacheObject r) { + baseCache.put(r); + map.put(r.getPos(), r); + } + + @Override + public boolean remove(int pos) { + boolean result = baseCache.remove(pos); + result |= map.remove(pos) != null; + return result; + } + + @Override + public void setMaxMemory(int size) { + baseCache.setMaxMemory(size); + } + + @Override + public CacheObject update(int pos, CacheObject record) { + CacheObject oldRec = baseCache.update(pos, record); + map.put(pos, record); + return oldRec; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/CacheTQ.java b/modules/h2/src/main/java/org/h2/util/CacheTQ.java new file mode 100644 index 0000000000000..4a9995b6b2d4f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/CacheTQ.java @@ -0,0 +1,131 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.ArrayList; + +/** + * An alternative cache implementation. This implementation uses two caches: a + * LRU cache and a FIFO cache. 
Entries are first kept in the FIFO cache, and if + * referenced again then marked in a hash set. If referenced again, they are + * moved to the LRU cache. Stream pages are never added to the LRU cache. It is + * supposed to be more or less scan resistant, and it doesn't cache large rows + * in the LRU cache. + */ +public class CacheTQ implements Cache { + + static final String TYPE_NAME = "TQ"; + + private final Cache lru; + private final Cache fifo; + private final SmallLRUCache recentlyUsed = + SmallLRUCache.newInstance(1024); + private int lastUsed = -1; + + private int maxMemory; + + CacheTQ(CacheWriter writer, int maxMemoryKb) { + this.maxMemory = maxMemoryKb; + lru = new CacheLRU(writer, (int) (maxMemoryKb * 0.8), false); + fifo = new CacheLRU(writer, (int) (maxMemoryKb * 0.2), true); + setMaxMemory(4 * maxMemoryKb); + } + + @Override + public void clear() { + lru.clear(); + fifo.clear(); + recentlyUsed.clear(); + lastUsed = -1; + } + + @Override + public CacheObject find(int pos) { + CacheObject r = lru.find(pos); + if (r == null) { + r = fifo.find(pos); + } + return r; + } + + @Override + public CacheObject get(int pos) { + CacheObject r = lru.find(pos); + if (r != null) { + return r; + } + r = fifo.find(pos); + if (r != null && !r.isStream()) { + if (recentlyUsed.get(pos) != null) { + if (lastUsed != pos) { + fifo.remove(pos); + lru.put(r); + } + } else { + recentlyUsed.put(pos, this); + } + lastUsed = pos; + } + return r; + } + + @Override + public ArrayList getAllChanged() { + ArrayList changed = New.arrayList(); + changed.addAll(lru.getAllChanged()); + changed.addAll(fifo.getAllChanged()); + return changed; + } + + @Override + public int getMaxMemory() { + return maxMemory; + } + + @Override + public int getMemory() { + return lru.getMemory() + fifo.getMemory(); + } + + @Override + public void put(CacheObject r) { + if (r.isStream()) { + fifo.put(r); + } else if (recentlyUsed.get(r.getPos()) != null) { + lru.put(r); + } else { + fifo.put(r); + lastUsed 
= r.getPos(); + } + } + + @Override + public boolean remove(int pos) { + boolean result = lru.remove(pos); + if (!result) { + result = fifo.remove(pos); + } + recentlyUsed.remove(pos); + return result; + } + + @Override + public void setMaxMemory(int maxMemoryKb) { + this.maxMemory = maxMemoryKb; + lru.setMaxMemory((int) (maxMemoryKb * 0.8)); + fifo.setMaxMemory((int) (maxMemoryKb * 0.2)); + recentlyUsed.setMaxSize(4 * maxMemoryKb); + } + + @Override + public CacheObject update(int pos, CacheObject record) { + if (lru.find(pos) != null) { + return lru.update(pos, record); + } + return fifo.update(pos, record); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/CacheWriter.java b/modules/h2/src/main/java/org/h2/util/CacheWriter.java new file mode 100644 index 0000000000000..99013b6f0b7f4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/CacheWriter.java @@ -0,0 +1,38 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import org.h2.message.Trace; + +/** + * The cache writer is called by the cache to persist changed data that needs to + * be removed from the cache. + */ +public interface CacheWriter { + + /** + * Persist a record. + * + * @param entry the cache entry + */ + void writeBack(CacheObject entry); + + /** + * Flush the transaction log, so that entries can be removed from the cache. + * This is only required if the cache is full and contains data that is not + * yet written to the log. It is required to write the log entries to the + * log first, because the log is 'write ahead'. + */ + void flushLog(); + + /** + * Get the trace writer. 
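The scan-resistant promotion rule implemented by CacheTQ.get() and put() above can be sketched as follows (hypothetical TwoQueueDemo tracking only queue membership, without memory limits, stream pages, or eviction). A page starts in the FIFO queue; a repeated access is first only recorded in `recentlyUsed`, and only a later access, with another page touched in between, promotes it into the LRU queue:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the two-queue (TQ) promotion policy.
public class TwoQueueDemo {
    private final Set<Integer> fifo = new HashSet<>();
    private final Set<Integer> lru = new HashSet<>();
    private final Set<Integer> recentlyUsed = new HashSet<>();
    private int lastUsed = -1;

    public void put(int pos) {
        if (recentlyUsed.contains(pos)) {
            lru.add(pos);          // already known to be reused: goes to LRU
        } else {
            fifo.add(pos);
            lastUsed = pos;
        }
    }

    public boolean get(int pos) {
        if (lru.contains(pos)) {
            return true;
        }
        if (fifo.contains(pos)) {
            if (recentlyUsed.contains(pos)) {
                if (lastUsed != pos) {     // reused after other accesses: promote
                    fifo.remove(pos);
                    lru.add(pos);
                }
            } else {
                recentlyUsed.add(pos);     // remember the repeated access
            }
            lastUsed = pos;
            return true;
        }
        return false;
    }

    public boolean inLru(int pos) {
        return lru.contains(pos);
    }
}
```

A single linear scan touches each page once, so nothing is promoted and the LRU queue is not polluted; that is the scan resistance the class comment describes.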
+ * + * @return the trace writer + */ + Trace getTrace(); + +} diff --git a/modules/h2/src/main/java/org/h2/util/CloseWatcher.java b/modules/h2/src/main/java/org/h2/util/CloseWatcher.java new file mode 100644 index 0000000000000..e81c456f02500 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/CloseWatcher.java @@ -0,0 +1,135 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + * Iso8601: + * Initial Developer: Robert Rathsack (firstName dot lastName at gmx dot de) + */ +package org.h2.util; + +import java.io.Closeable; +import java.io.PrintWriter; +import java.io.StringWriter; +import java.lang.ref.PhantomReference; +import java.lang.ref.ReferenceQueue; +import java.util.Collections; +import java.util.HashSet; +import java.util.Set; + +/** + * A phantom reference to watch for unclosed objects. + */ +public class CloseWatcher extends PhantomReference { + + /** + * The queue (might be set to null at any time). + */ + private static ReferenceQueue queue = new ReferenceQueue<>(); + + /** + * The reference set. Must keep it, otherwise the references are garbage + * collected first and thus never enqueued. + */ + private static Set refs = createSet(); + + /** + * The stack trace of when the object was created. It is converted to a + * string early on to avoid classloader problems (a classloader can't be + * garbage collected if there is a static reference to one of its classes). + */ + private String openStackTrace; + + /** + * The closeable object. + */ + private Closeable closeable; + + public CloseWatcher(Object referent, ReferenceQueue q, + Closeable closeable) { + super(referent, q); + this.closeable = closeable; + } + + private static Set createSet() { + return Collections.synchronizedSet(new HashSet()); + } + + /** + * Check for an collected object. 
+ * + * @return the first watcher + */ + public static CloseWatcher pollUnclosed() { + ReferenceQueue q = queue; + if (q == null) { + return null; + } + while (true) { + CloseWatcher cw = (CloseWatcher) q.poll(); + if (cw == null) { + return null; + } + if (refs != null) { + refs.remove(cw); + } + if (cw.closeable != null) { + return cw; + } + } + } + + /** + * Register an object. Before calling this method, pollUnclosed() should be + * called in a loop to remove old references. + * + * @param o the object + * @param closeable the object to close + * @param stackTrace whether the stack trace should be registered (this is + * relatively slow) + * @return the close watcher + */ + public static CloseWatcher register(Object o, Closeable closeable, + boolean stackTrace) { + ReferenceQueue q = queue; + if (q == null) { + q = new ReferenceQueue<>(); + queue = q; + } + CloseWatcher cw = new CloseWatcher(o, q, closeable); + if (stackTrace) { + Exception e = new Exception("Open Stack Trace"); + StringWriter s = new StringWriter(); + e.printStackTrace(new PrintWriter(s)); + cw.openStackTrace = s.toString(); + } + if (refs == null) { + refs = createSet(); + } + refs.add(cw); + return cw; + } + + /** + * Unregister an object, so it is no longer tracked. + * + * @param w the reference + */ + public static void unregister(CloseWatcher w) { + w.closeable = null; + refs.remove(w); + } + + /** + * Get the open stack trace or null if none. + * + * @return the open stack trace + */ + public String getOpenStackTrace() { + return openStackTrace; + } + + public Closeable getCloseable() { + return closeable; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/ColumnNamer.java b/modules/h2/src/main/java/org/h2/util/ColumnNamer.java new file mode 100644 index 0000000000000..9beea8672a6e5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ColumnNamer.java @@ -0,0 +1,157 @@ +/* + * Copyright 2004-2018 H2 Group. 
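The phantom-reference mechanism used by CloseWatcher can be sketched without H2 (hypothetical LeakDemo; `Watcher` and `awaitUnclosed` are illustration names). A PhantomReference is enqueued only after its referent has been garbage collected, so a watcher that turns up in the queue while still carrying a Closeable indicates the resource was never closed:

```java
import java.io.Closeable;
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

// Sketch of leak detection via a phantom reference queue.
public class LeakDemo {
    static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();

    // Watches `referent`; `resource` is what should have been closed.
    static class Watcher extends PhantomReference<Object> {
        final Closeable resource;
        Watcher(Object referent, Closeable resource) {
            super(referent, QUEUE);
            this.resource = resource;
        }
    }

    // Prompts GC repeatedly until a collected watcher is enqueued (or times out).
    static Watcher awaitUnclosed() throws InterruptedException {
        for (int i = 0; i < 50; i++) {
            Watcher w = (Watcher) QUEUE.poll();
            if (w != null) {
                return w;
            }
            System.gc();
            Thread.sleep(10);
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // The referent is unreachable as soon as the constructor returns,
        // but the Watcher itself must stay strongly reachable (like the
        // static `refs` set in CloseWatcher), or it would never be enqueued.
        Watcher w = new Watcher(new Object(), () -> { });
        Watcher leaked = awaitUnclosed();
        System.out.println(leaked == w ? "leak detected" : "no leak seen");
    }
}
```

This is also why CloseWatcher keeps its static `refs` set: without a strong reference to the watchers themselves, they would be collected before ever reaching the queue.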
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + */ +package org.h2.util; + +import java.util.HashSet; +import java.util.Set; +import java.util.regex.Matcher; +import org.h2.engine.Session; +import org.h2.expression.Expression; + +/** + * A factory for column names. + */ +public class ColumnNamer { + + private static final String DEFAULT_COLUMN_NAME = "DEFAULT"; + + private final ColumnNamerConfiguration configuration; + private final Session session; + private final Set existingColumnNames = new HashSet<>(); + + public ColumnNamer(Session session) { + this.session = session; + if (this.session != null && this.session.getColumnNamerConfiguration() != null) { + // use original from session + this.configuration = this.session.getColumnNamerConfiguration(); + } else { + // detached namer, create new + this.configuration = ColumnNamerConfiguration.getDefault(); + if (session != null) { + session.setColumnNamerConfiguration(this.configuration); + } + } + } + + /** + * Create a standardized column name that isn't null and doesn't have a CR/LF in it. + * @param columnExp the column expression + * @param indexOfColumn index of column in below array + * @param columnNameOverides array of overriding column names + * @return the new column name + */ + public String getColumnName(Expression columnExp, int indexOfColumn, String[] columnNameOverides) { + String columnNameOverride = null; + if (columnNameOverides != null && columnNameOverides.length > indexOfColumn) { + columnNameOverride = columnNameOverides[indexOfColumn]; + } + return getColumnName(columnExp, indexOfColumn, columnNameOverride); + } + + /** + * Create a standardized column name that isn't null and doesn't have a CR/LF in it. 
+ * @param columnExp the column expression + * @param indexOfColumn index of column in below array + * @param columnNameOverride single overriding column name + * @return the new column name + */ + public String getColumnName(Expression columnExp, int indexOfColumn, String columnNameOverride) { + // try a name from the column name override + String columnName = null; + if (columnNameOverride != null) { + columnName = columnNameOverride; + if (!isAllowableColumnName(columnName)) { + columnName = fixColumnName(columnName); + } + if (!isAllowableColumnName(columnName)) { + columnName = null; + } + } + // try a name from the column alias + if (columnName == null && columnExp.getAlias() != null && !DEFAULT_COLUMN_NAME.equals(columnExp.getAlias())) { + columnName = columnExp.getAlias(); + if (!isAllowableColumnName(columnName)) { + columnName = fixColumnName(columnName); + } + if (!isAllowableColumnName(columnName)) { + columnName = null; + } + } + // try a name derived from the column expression SQL + if (columnName == null && columnExp.getColumnName() != null + && !DEFAULT_COLUMN_NAME.equals(columnExp.getColumnName())) { + columnName = columnExp.getColumnName(); + if (!isAllowableColumnName(columnName)) { + columnName = fixColumnName(columnName); + } + if (!isAllowableColumnName(columnName)) { + columnName = null; + } + } + // try a name derived from the column expression plan SQL + if (columnName == null && columnExp.getSQL() != null && !DEFAULT_COLUMN_NAME.equals(columnExp.getSQL())) { + columnName = columnExp.getSQL(); + if (!isAllowableColumnName(columnName)) { + columnName = fixColumnName(columnName); + } + if (!isAllowableColumnName(columnName)) { + columnName = null; + } + } + // go with a innocuous default name pattern + if (columnName == null) { + columnName = configuration.getDefaultColumnNamePattern().replace("$$", "" + (indexOfColumn + 1)); + } + if (existingColumnNames.contains(columnName) && configuration.isGenerateUniqueColumnNames()) { + columnName = 
generateUniqueName(columnName); + } + existingColumnNames.add(columnName); + return columnName; + } + + private String generateUniqueName(String columnName) { + String newColumnName = columnName; + int loopCount = 2; + while (existingColumnNames.contains(newColumnName)) { + String loopCountString = "_" + loopCount; + newColumnName = columnName.substring(0, + Math.min(columnName.length(), configuration.getMaxIdentiferLength() - loopCountString.length())) + + loopCountString; + loopCount++; + } + return newColumnName; + } + + private boolean isAllowableColumnName(String proposedName) { + + // check null + if (proposedName == null) { + return false; + } + // check size limits + if (proposedName.length() > configuration.getMaxIdentiferLength() || proposedName.length() == 0) { + return false; + } + Matcher match = configuration.getCompiledRegularExpressionMatchAllowed().matcher(proposedName); + return match.matches(); + } + + private String fixColumnName(String proposedName) { + Matcher match = configuration.getCompiledRegularExpressionMatchDisallowed().matcher(proposedName); + proposedName = match.replaceAll(""); + + // check size limits - then truncate + if (proposedName.length() > configuration.getMaxIdentiferLength()) { + proposedName = proposedName.substring(0, configuration.getMaxIdentiferLength()); + } + + return proposedName; + } + + public ColumnNamerConfiguration getConfiguration() { + return configuration; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/ColumnNamerConfiguration.java b/modules/h2/src/main/java/org/h2/util/ColumnNamerConfiguration.java new file mode 100644 index 0000000000000..04a8f93fb3c5c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ColumnNamerConfiguration.java @@ -0,0 +1,231 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
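The fix-then-uniquify flow of fixColumnName() and generateUniqueName() above can be sketched like this (hypothetical NamerDemo; the patterns and the 30-character limit are illustrative, not the REGULAR-mode defaults). Disallowed characters are stripped, the result is truncated to the length limit, and a clash with an existing name gets a numeric `_n` suffix that itself respects the limit:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Pattern;

// Sketch of column-name sanitization and uniquification.
public class NamerDemo {
    private static final Pattern DISALLOWED = Pattern.compile("[\\x00\\r\\n]");
    private static final int MAX_LEN = 30;
    private final Set<String> existing = new HashSet<>();

    public String name(String proposed) {
        // fixColumnName(): strip disallowed characters, then truncate.
        String n = DISALLOWED.matcher(proposed).replaceAll("");
        if (n.length() > MAX_LEN) {
            n = n.substring(0, MAX_LEN);
        }
        if (n.isEmpty()) {
            n = "_UNNAMED_" + (existing.size() + 1);   // default pattern fallback
        }
        // generateUniqueName(): append _2, _3, ... while the name clashes,
        // shortening the base so the suffixed name still fits the limit.
        String unique = n;
        for (int i = 2; existing.contains(unique); i++) {
            String suffix = "_" + i;
            unique = n.substring(0, Math.min(n.length(), MAX_LEN - suffix.length())) + suffix;
        }
        existing.add(unique);
        return unique;
    }

    public static void main(String[] args) {
        NamerDemo namer = new NamerDemo();
        System.out.println(namer.name("TOTAL\nPRICE"));  // newline stripped
        System.out.println(namer.name("TOTALPRICE"));    // clash -> suffix
    }
}
```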
+ */ +package org.h2.util; + +import java.util.regex.Pattern; +import org.h2.engine.Mode.ModeEnum; +import static org.h2.engine.Mode.ModeEnum.*; +import org.h2.message.DbException; + +/** + * The configuration for the allowed column names. + */ +public class ColumnNamerConfiguration { + + private static final String DEFAULT_COMMAND = "DEFAULT"; + private static final String REGULAR_EXPRESSION_MATCH_DISALLOWED = "REGULAR_EXPRESSION_MATCH_DISALLOWED = "; + private static final String REGULAR_EXPRESSION_MATCH_ALLOWED = "REGULAR_EXPRESSION_MATCH_ALLOWED = "; + private static final String DEFAULT_COLUMN_NAME_PATTERN = "DEFAULT_COLUMN_NAME_PATTERN = "; + private static final String MAX_IDENTIFIER_LENGTH = "MAX_IDENTIFIER_LENGTH = "; + private static final String EMULATE_COMMAND = "EMULATE = "; + private static final String GENERATE_UNIQUE_COLUMN_NAMES = "GENERATE_UNIQUE_COLUMN_NAMES = "; + + private int maxIdentiferLength; + private String regularExpressionMatchAllowed; + private String regularExpressionMatchDisallowed; + private String defaultColumnNamePattern; + private boolean generateUniqueColumnNames; + private Pattern compiledRegularExpressionMatchAllowed; + private Pattern compiledRegularExpressionMatchDisallowed; + + public ColumnNamerConfiguration(int maxIdentiferLength, String regularExpressionMatchAllowed, + String regularExpressionMatchDisallowed, String defaultColumnNamePattern, + boolean generateUniqueColumnNames) { + + this.maxIdentiferLength = maxIdentiferLength; + this.regularExpressionMatchAllowed = regularExpressionMatchAllowed; + this.regularExpressionMatchDisallowed = regularExpressionMatchDisallowed; + this.defaultColumnNamePattern = defaultColumnNamePattern; + this.generateUniqueColumnNames = generateUniqueColumnNames; + + compiledRegularExpressionMatchAllowed = Pattern.compile(regularExpressionMatchAllowed); + compiledRegularExpressionMatchDisallowed = Pattern.compile(regularExpressionMatchDisallowed); + } + + public int getMaxIdentiferLength() { 
+ return maxIdentiferLength; + } + + public void setMaxIdentiferLength(int maxIdentiferLength) { + this.maxIdentiferLength = Math.max(30, maxIdentiferLength); + if (maxIdentiferLength != getMaxIdentiferLength()) { + throw DbException.getInvalidValueException("Illegal value (<30) in SET COLUMN_NAME_RULES", + "MAX_IDENTIFIER_LENGTH=" + maxIdentiferLength); + } + } + + public String getRegularExpressionMatchAllowed() { + return regularExpressionMatchAllowed; + } + + public void setRegularExpressionMatchAllowed(String regularExpressionMatchAllowed) { + this.regularExpressionMatchAllowed = regularExpressionMatchAllowed; + } + + public String getRegularExpressionMatchDisallowed() { + return regularExpressionMatchDisallowed; + } + + public void setRegularExpressionMatchDisallowed(String regularExpressionMatchDisallowed) { + this.regularExpressionMatchDisallowed = regularExpressionMatchDisallowed; + } + + public String getDefaultColumnNamePattern() { + return defaultColumnNamePattern; + } + + public void setDefaultColumnNamePattern(String defaultColumnNamePattern) { + this.defaultColumnNamePattern = defaultColumnNamePattern; + } + + public Pattern getCompiledRegularExpressionMatchAllowed() { + return compiledRegularExpressionMatchAllowed; + } + + public void setCompiledRegularExpressionMatchAllowed(Pattern compiledRegularExpressionMatchAllowed) { + this.compiledRegularExpressionMatchAllowed = compiledRegularExpressionMatchAllowed; + } + + public Pattern getCompiledRegularExpressionMatchDisallowed() { + return compiledRegularExpressionMatchDisallowed; + } + + public void setCompiledRegularExpressionMatchDisallowed(Pattern compiledRegularExpressionMatchDisallowed) { + this.compiledRegularExpressionMatchDisallowed = compiledRegularExpressionMatchDisallowed; + } + + /** + * Configure the column namer. 
+ * + * @param stringValue the configuration + */ + public void configure(String stringValue) { + try { + if (stringValue.equalsIgnoreCase(DEFAULT_COMMAND)) { + configure(REGULAR); + } else if (stringValue.startsWith(EMULATE_COMMAND)) { + configure(ModeEnum.valueOf(unquoteString(stringValue.substring(EMULATE_COMMAND.length())))); + } else if (stringValue.startsWith(MAX_IDENTIFIER_LENGTH)) { + int maxLength = Integer.parseInt(stringValue.substring(MAX_IDENTIFIER_LENGTH.length())); + setMaxIdentiferLength(maxLength); + } else if (stringValue.startsWith(GENERATE_UNIQUE_COLUMN_NAMES)) { + setGenerateUniqueColumnNames( + Integer.parseInt(stringValue.substring(GENERATE_UNIQUE_COLUMN_NAMES.length())) == 1); + } else if (stringValue.startsWith(DEFAULT_COLUMN_NAME_PATTERN)) { + setDefaultColumnNamePattern( + unquoteString(stringValue.substring(DEFAULT_COLUMN_NAME_PATTERN.length()))); + } else if (stringValue.startsWith(REGULAR_EXPRESSION_MATCH_ALLOWED)) { + setRegularExpressionMatchAllowed( + unquoteString(stringValue.substring(REGULAR_EXPRESSION_MATCH_ALLOWED.length()))); + } else if (stringValue.startsWith(REGULAR_EXPRESSION_MATCH_DISALLOWED)) { + setRegularExpressionMatchDisallowed( + unquoteString(stringValue.substring(REGULAR_EXPRESSION_MATCH_DISALLOWED.length()))); + } else { + throw DbException.getInvalidValueException("SET COLUMN_NAME_RULES: unknown id:" + stringValue, + stringValue); + } + recompilePatterns(); + } + // Including NumberFormatException|PatternSyntaxException + catch (RuntimeException e) { + throw DbException.getInvalidValueException("SET COLUMN_NAME_RULES:" + e.getMessage(), stringValue); + + } + } + + private void recompilePatterns() { + try { + // recompile RE patterns + setCompiledRegularExpressionMatchAllowed(Pattern.compile(getRegularExpressionMatchAllowed())); + setCompiledRegularExpressionMatchDisallowed(Pattern.compile(getRegularExpressionMatchDisallowed())); + } catch (Exception e) { + configure(REGULAR); + throw e; + } + } + + public static 
ColumnNamerConfiguration getDefault() { + return new ColumnNamerConfiguration(Integer.MAX_VALUE, "(?m)(?s).+", "(?m)(?s)[\\x00]", "_UNNAMED_$$", false); + } + + private static String unquoteString(String s) { + if (s.startsWith("'") && s.endsWith("'")) { + s = s.substring(1, s.length() - 1); + return s; + } + return s; + } + + public boolean isGenerateUniqueColumnNames() { + return generateUniqueColumnNames; + } + + public void setGenerateUniqueColumnNames(boolean generateUniqueColumnNames) { + this.generateUniqueColumnNames = generateUniqueColumnNames; + } + + /** + * Configure the rules. + * + * @param modeEnum the mode + */ + public void configure(ModeEnum modeEnum) { + switch (modeEnum) { + case Oracle: + // Nonquoted identifiers can contain only alphanumeric characters + // from your database character set and the underscore (_), dollar + // sign ($), and pound sign (#). + setMaxIdentiferLength(128); + setRegularExpressionMatchAllowed("(?m)(?s)\"?[A-Za-z0-9_\\$#]+\"?"); + setRegularExpressionMatchDisallowed("(?m)(?s)[^A-Za-z0-9_\"\\$#]"); + setDefaultColumnNamePattern("_UNNAMED_$$"); + setGenerateUniqueColumnNames(false); + break; + + case MSSQLServer: + // https://docs.microsoft.com/en-us/sql/sql-server/maximum-capacity-specifications-for-sql-server + setMaxIdentiferLength(128); + // allows [] around names + setRegularExpressionMatchAllowed("(?m)(?s)[A-Za-z0-9_\\[\\]]+"); + setRegularExpressionMatchDisallowed("(?m)(?s)[^A-Za-z0-9_\\[\\]]"); + setDefaultColumnNamePattern("_UNNAMED_$$"); + setGenerateUniqueColumnNames(false); + break; + + case PostgreSQL: + // this default can be changed to 128 by postgres config + setMaxIdentiferLength(63); + setRegularExpressionMatchAllowed("(?m)(?s)[A-Za-z0-9_\\$]+"); + setRegularExpressionMatchDisallowed("(?m)(?s)[^A-Za-z0-9_\\$]"); + setDefaultColumnNamePattern("_UNNAMED_$$"); + setGenerateUniqueColumnNames(false); + break; + + case MySQL: + // https://dev.mysql.com/doc/refman/5.7/en/identifiers.html + 
setMaxIdentiferLength(64); + setRegularExpressionMatchAllowed("(?m)(?s)`?[A-Za-z0-9_`\\$]+`?"); + setRegularExpressionMatchDisallowed("(?m)(?s)[^A-Za-z0-9_`\\$]"); + setDefaultColumnNamePattern("_UNNAMED_$$"); + setGenerateUniqueColumnNames(false); + break; + + case REGULAR: + case DB2: + case Derby: + case HSQLDB: + case Ignite: + default: + setMaxIdentiferLength(Integer.MAX_VALUE); + setRegularExpressionMatchAllowed("(?m)(?s).+"); + setRegularExpressionMatchDisallowed("(?m)(?s)[\\x00]"); + setDefaultColumnNamePattern("_UNNAMED_$$"); + setGenerateUniqueColumnNames(false); + break; + } + recompilePatterns(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/DateTimeFunctions.java b/modules/h2/src/main/java/org/h2/util/DateTimeFunctions.java new file mode 100644 index 0000000000000..847184a31799c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/DateTimeFunctions.java @@ -0,0 +1,723 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.util; + +import static org.h2.expression.Function.CENTURY; +import static org.h2.expression.Function.DAY_OF_MONTH; +import static org.h2.expression.Function.DAY_OF_WEEK; +import static org.h2.expression.Function.DAY_OF_YEAR; +import static org.h2.expression.Function.DECADE; +import static org.h2.expression.Function.EPOCH; +import static org.h2.expression.Function.HOUR; +import static org.h2.expression.Function.ISO_DAY_OF_WEEK; +import static org.h2.expression.Function.ISO_WEEK; +import static org.h2.expression.Function.ISO_YEAR; +import static org.h2.expression.Function.MICROSECOND; +import static org.h2.expression.Function.MILLENNIUM; +import static org.h2.expression.Function.MILLISECOND; +import static org.h2.expression.Function.MINUTE; +import static org.h2.expression.Function.MONTH; +import static org.h2.expression.Function.NANOSECOND; +import static org.h2.expression.Function.QUARTER; +import static org.h2.expression.Function.SECOND; +import static org.h2.expression.Function.TIMEZONE_HOUR; +import static org.h2.expression.Function.TIMEZONE_MINUTE; +import static org.h2.expression.Function.WEEK; +import static org.h2.expression.Function.YEAR; +import java.math.BigDecimal; +import java.text.DateFormatSymbols; +import java.text.SimpleDateFormat; +import java.util.GregorianCalendar; +import java.util.HashMap; +import java.util.Locale; +import java.util.TimeZone; +import org.h2.api.ErrorCode; +import org.h2.expression.Function; +import org.h2.message.DbException; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueDecimal; +import org.h2.value.ValueInt; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + * Date and time functions. + */ +public final class DateTimeFunctions { + private static final HashMap DATE_PART = new HashMap<>(); + + /** + * English names of months and week days. 
+ */ + private static volatile String[][] MONTHS_AND_WEEKS; + + static { + // DATE_PART + DATE_PART.put("SQL_TSI_YEAR", YEAR); + DATE_PART.put("YEAR", YEAR); + DATE_PART.put("YYYY", YEAR); + DATE_PART.put("YY", YEAR); + DATE_PART.put("SQL_TSI_MONTH", MONTH); + DATE_PART.put("MONTH", MONTH); + DATE_PART.put("MM", MONTH); + DATE_PART.put("M", MONTH); + DATE_PART.put("QUARTER", QUARTER); + DATE_PART.put("SQL_TSI_WEEK", WEEK); + DATE_PART.put("WW", WEEK); + DATE_PART.put("WK", WEEK); + DATE_PART.put("WEEK", WEEK); + DATE_PART.put("ISO_WEEK", ISO_WEEK); + DATE_PART.put("DAY", DAY_OF_MONTH); + DATE_PART.put("DD", DAY_OF_MONTH); + DATE_PART.put("D", DAY_OF_MONTH); + DATE_PART.put("SQL_TSI_DAY", DAY_OF_MONTH); + DATE_PART.put("DAY_OF_WEEK", DAY_OF_WEEK); + DATE_PART.put("DAYOFWEEK", DAY_OF_WEEK); + DATE_PART.put("DOW", DAY_OF_WEEK); + DATE_PART.put("ISO_DAY_OF_WEEK", ISO_DAY_OF_WEEK); + DATE_PART.put("DAYOFYEAR", DAY_OF_YEAR); + DATE_PART.put("DAY_OF_YEAR", DAY_OF_YEAR); + DATE_PART.put("DY", DAY_OF_YEAR); + DATE_PART.put("DOY", DAY_OF_YEAR); + DATE_PART.put("SQL_TSI_HOUR", HOUR); + DATE_PART.put("HOUR", HOUR); + DATE_PART.put("HH", HOUR); + DATE_PART.put("SQL_TSI_MINUTE", MINUTE); + DATE_PART.put("MINUTE", MINUTE); + DATE_PART.put("MI", MINUTE); + DATE_PART.put("N", MINUTE); + DATE_PART.put("SQL_TSI_SECOND", SECOND); + DATE_PART.put("SECOND", SECOND); + DATE_PART.put("SS", SECOND); + DATE_PART.put("S", SECOND); + DATE_PART.put("MILLISECOND", MILLISECOND); + DATE_PART.put("MILLISECONDS", MILLISECOND); + DATE_PART.put("MS", MILLISECOND); + DATE_PART.put("EPOCH", EPOCH); + DATE_PART.put("MICROSECOND", MICROSECOND); + DATE_PART.put("MICROSECONDS", MICROSECOND); + DATE_PART.put("MCS", MICROSECOND); + DATE_PART.put("NANOSECOND", NANOSECOND); + DATE_PART.put("NS", NANOSECOND); + DATE_PART.put("TIMEZONE_HOUR", TIMEZONE_HOUR); + DATE_PART.put("TIMEZONE_MINUTE", TIMEZONE_MINUTE); + DATE_PART.put("DECADE", DECADE); + DATE_PART.put("CENTURY", CENTURY); + DATE_PART.put("MILLENNIUM", 
MILLENNIUM); + } + + /** + * DATEADD function. + * + * @param part + * name of date-time part + * @param count + * count to add + * @param v + * value to add to + * @return result + */ + public static Value dateadd(String part, long count, Value v) { + int field = getDatePart(part); + if (field != MILLISECOND && field != MICROSECOND && field != NANOSECOND + && (count > Integer.MAX_VALUE || count < Integer.MIN_VALUE)) { + throw DbException.getInvalidValueException("DATEADD count", count); + } + boolean withDate = !(v instanceof ValueTime); + boolean withTime = !(v instanceof ValueDate); + boolean forceTimestamp = false; + long[] a = DateTimeUtils.dateAndTimeFromValue(v); + long dateValue = a[0]; + long timeNanos = a[1]; + switch (field) { + case QUARTER: + count *= 3; + //$FALL-THROUGH$ + case YEAR: + case MONTH: { + if (!withDate) { + throw DbException.getInvalidValueException("DATEADD time part", part); + } + long year = DateTimeUtils.yearFromDateValue(dateValue); + long month = DateTimeUtils.monthFromDateValue(dateValue); + int day = DateTimeUtils.dayFromDateValue(dateValue); + if (field == YEAR) { + year += count; + } else { + month += count; + } + dateValue = DateTimeUtils.dateValueFromDenormalizedDate(year, month, day); + return DateTimeUtils.dateTimeToValue(v, dateValue, timeNanos, forceTimestamp); + } + case WEEK: + case ISO_WEEK: + count *= 7; + //$FALL-THROUGH$ + case DAY_OF_WEEK: + case ISO_DAY_OF_WEEK: + case DAY_OF_MONTH: + case DAY_OF_YEAR: + if (!withDate) { + throw DbException.getInvalidValueException("DATEADD time part", part); + } + dateValue = DateTimeUtils + .dateValueFromAbsoluteDay(DateTimeUtils.absoluteDayFromDateValue(dateValue) + count); + return DateTimeUtils.dateTimeToValue(v, dateValue, timeNanos, forceTimestamp); + case HOUR: + count *= 3_600_000_000_000L; + break; + case MINUTE: + count *= 60_000_000_000L; + break; + case SECOND: + case EPOCH: + count *= 1_000_000_000; + break; + case MILLISECOND: + count *= 1_000_000; + break; + case 
MICROSECOND: + count *= 1_000; + break; + case NANOSECOND: + break; + case TIMEZONE_HOUR: + count *= 60; + //$FALL-THROUGH$ + case TIMEZONE_MINUTE: { + if (!(v instanceof ValueTimestampTimeZone)) { + throw DbException.getUnsupportedException("DATEADD " + part); + } + count += ((ValueTimestampTimeZone) v).getTimeZoneOffsetMins(); + return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, timeNanos, (short) count); + } + default: + throw DbException.getUnsupportedException("DATEADD " + part); + } + if (!withTime) { + // Treat date as timestamp at the start of this date + forceTimestamp = true; + } + timeNanos += count; + if (timeNanos >= DateTimeUtils.NANOS_PER_DAY || timeNanos < 0) { + long d; + if (timeNanos >= DateTimeUtils.NANOS_PER_DAY) { + d = timeNanos / DateTimeUtils.NANOS_PER_DAY; + } else { + d = (timeNanos - DateTimeUtils.NANOS_PER_DAY + 1) / DateTimeUtils.NANOS_PER_DAY; + } + timeNanos -= d * DateTimeUtils.NANOS_PER_DAY; + return DateTimeUtils.dateTimeToValue(v, + DateTimeUtils.dateValueFromAbsoluteDay(DateTimeUtils.absoluteDayFromDateValue(dateValue) + d), + timeNanos, forceTimestamp); + } + return DateTimeUtils.dateTimeToValue(v, dateValue, timeNanos, forceTimestamp); + } + + /** + * Calculate the number of crossed unit boundaries between two timestamps. This + * method is supported for MS SQL Server compatibility. + * + *
     +     * <pre>
     +     * DATEDIFF(YEAR, '2004-12-31', '2005-01-01') = 1
     +     * </pre>
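The boundary-crossing semantics of DATEDIFF documented above can be illustrated without H2 (hypothetical BoundaryDemo using java.time; `yearDiff` and `hourDiff` are simplified stand-ins for the YEAR and HOUR branches of datediff). Only one day elapses between 2004-12-31 and 2005-01-01, but one year boundary is crossed, so the result is 1:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

// Sketch of "crossed unit boundaries" counting, as in MS SQL Server's DATEDIFF.
public class BoundaryDemo {
    // YEAR: difference of the year fields, ignoring months and days.
    static long yearDiff(LocalDate d1, LocalDate d2) {
        return d2.getYear() - d1.getYear();
    }

    // HOUR: whole days between the dates times 24, plus the hour-field difference,
    // mirroring the (absolute2 - absolute1) * 24 + ... shape of the HOUR branch.
    static long hourDiff(LocalDateTime t1, LocalDateTime t2) {
        long days = t2.toLocalDate().toEpochDay() - t1.toLocalDate().toEpochDay();
        return days * 24 + (t2.getHour() - t1.getHour());
    }

    public static void main(String[] args) {
        System.out.println(yearDiff(LocalDate.parse("2004-12-31"),
                LocalDate.parse("2005-01-01")));   // 1: one year boundary crossed
        System.out.println(hourDiff(LocalDateTime.parse("2010-01-01T23:59"),
                LocalDateTime.parse("2010-01-02T00:01")));  // 1: one hour boundary
    }
}
```

Note this is boundary counting, not elapsed time: two minutes that straddle midnight still count as one crossed hour (and one crossed day).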
+     *
+     * @param part
+     *            the part
+     * @param v1
+     *            the first date-time value
+     * @param v2
+     *            the second date-time value
+     * @return the number of crossed boundaries
+     */
+    public static long datediff(String part, Value v1, Value v2) {
+        int field = getDatePart(part);
+        long[] a1 = DateTimeUtils.dateAndTimeFromValue(v1);
+        long dateValue1 = a1[0];
+        long absolute1 = DateTimeUtils.absoluteDayFromDateValue(dateValue1);
+        long[] a2 = DateTimeUtils.dateAndTimeFromValue(v2);
+        long dateValue2 = a2[0];
+        long absolute2 = DateTimeUtils.absoluteDayFromDateValue(dateValue2);
+        switch (field) {
+        case NANOSECOND:
+        case MICROSECOND:
+        case MILLISECOND:
+        case SECOND:
+        case EPOCH:
+        case MINUTE:
+        case HOUR:
+            long timeNanos1 = a1[1];
+            long timeNanos2 = a2[1];
+            switch (field) {
+            case NANOSECOND:
+                return (absolute2 - absolute1) * DateTimeUtils.NANOS_PER_DAY + (timeNanos2 - timeNanos1);
+            case MICROSECOND:
+                return (absolute2 - absolute1) * (DateTimeUtils.MILLIS_PER_DAY * 1_000)
+                        + (timeNanos2 / 1_000 - timeNanos1 / 1_000);
+            case MILLISECOND:
+                return (absolute2 - absolute1) * DateTimeUtils.MILLIS_PER_DAY
+                        + (timeNanos2 / 1_000_000 - timeNanos1 / 1_000_000);
+            case SECOND:
+            case EPOCH:
+                return (absolute2 - absolute1) * 86_400 + (timeNanos2 / 1_000_000_000 - timeNanos1 / 1_000_000_000);
+            case MINUTE:
+                return (absolute2 - absolute1) * 1_440 + (timeNanos2 / 60_000_000_000L - timeNanos1 / 60_000_000_000L);
+            case HOUR:
+                return (absolute2 - absolute1) * 24
+                        + (timeNanos2 / 3_600_000_000_000L - timeNanos1 / 3_600_000_000_000L);
+            }
+            // Fake fall-through
+            // $FALL-THROUGH$
+        case DAY_OF_MONTH:
+        case DAY_OF_YEAR:
+        case DAY_OF_WEEK:
+        case ISO_DAY_OF_WEEK:
+            return absolute2 - absolute1;
+        case WEEK:
+            return weekdiff(absolute1, absolute2, 0);
+        case ISO_WEEK:
+            return weekdiff(absolute1, absolute2, 1);
+        case MONTH:
+            return (DateTimeUtils.yearFromDateValue(dateValue2) - DateTimeUtils.yearFromDateValue(dateValue1)) * 12
+                    + DateTimeUtils.monthFromDateValue(dateValue2) - DateTimeUtils.monthFromDateValue(dateValue1);
+        case QUARTER:
+            return (DateTimeUtils.yearFromDateValue(dateValue2) - DateTimeUtils.yearFromDateValue(dateValue1)) * 4
+                    + (DateTimeUtils.monthFromDateValue(dateValue2) - 1) / 3
+                    - (DateTimeUtils.monthFromDateValue(dateValue1) - 1) / 3;
+        case YEAR:
+            return DateTimeUtils.yearFromDateValue(dateValue2) - DateTimeUtils.yearFromDateValue(dateValue1);
+        case TIMEZONE_HOUR:
+        case TIMEZONE_MINUTE: {
+            int offsetMinutes1;
+            if (v1 instanceof ValueTimestampTimeZone) {
+                offsetMinutes1 = ((ValueTimestampTimeZone) v1).getTimeZoneOffsetMins();
+            } else {
+                offsetMinutes1 = DateTimeUtils.getTimeZoneOffsetMillis(null, dateValue1, a1[1]);
+            }
+            int offsetMinutes2;
+            if (v2 instanceof ValueTimestampTimeZone) {
+                offsetMinutes2 = ((ValueTimestampTimeZone) v2).getTimeZoneOffsetMins();
+            } else {
+                offsetMinutes2 = DateTimeUtils.getTimeZoneOffsetMillis(null, dateValue2, a2[1]);
+            }
+            if (field == TIMEZONE_HOUR) {
+                return (offsetMinutes2 / 60) - (offsetMinutes1 / 60);
+            } else {
+                return offsetMinutes2 - offsetMinutes1;
+            }
+        }
+        default:
+            throw DbException.getUnsupportedException("DATEDIFF " + part);
+        }
+    }
+
+    /**
+     * Extracts specified field from the specified date-time value.
+     *
+     * @param part
+     *            the date part
+     * @param value
+     *            the date-time value
+     * @return extracted field
+     */
+    public static Value extract(String part, Value value) {
+        Value result;
+        int field = getDatePart(part);
+        if (field != EPOCH) {
+            result = ValueInt.get(getIntDatePart(value, field));
+        } else {
+
+            // Case where we retrieve the EPOCH time.
+            // First we retrieve the dateValue and its time in nanoseconds.
+            long[] a = DateTimeUtils.dateAndTimeFromValue(value);
+            long dateValue = a[0];
+            long timeNanos = a[1];
+            // We compute the time in nanoseconds and the total number of days.
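For intuition about the DATEDIFF semantics implemented in the patch: it counts *crossed unit boundaries*, not elapsed units, which is why the javadoc example gives `DATEDIFF(YEAR, '2004-12-31', '2005-01-01') = 1`. The standalone sketch below (class and method names are illustrative, not part of the patch) contrasts this with `java.time`'s elapsed-unit counting:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Illustrative only: H2-style DATEDIFF counts crossed unit boundaries,
// while ChronoUnit.between() counts fully elapsed units, so the two
// disagree for adjacent dates such as 2004-12-31 and 2005-01-01.
public class DateDiffSketch {

    // Number of year boundaries crossed between two dates (DATEDIFF-style).
    public static long crossedYearBoundaries(LocalDate d1, LocalDate d2) {
        return d2.getYear() - d1.getYear();
    }

    public static void main(String[] args) {
        LocalDate d1 = LocalDate.of(2004, 12, 31);
        LocalDate d2 = LocalDate.of(2005, 1, 1);
        // One year boundary is crossed, even though only one day elapsed.
        System.out.println(crossedYearBoundaries(d1, d2));    // 1
        System.out.println(ChronoUnit.YEARS.between(d1, d2)); // 0
    }
}
```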
+            BigDecimal timeNanosBigDecimal = new BigDecimal(timeNanos);
+            BigDecimal numberOfDays = new BigDecimal(DateTimeUtils.absoluteDayFromDateValue(dateValue));
+            BigDecimal nanosSeconds = new BigDecimal(1_000_000_000);
+            BigDecimal secondsPerDay = new BigDecimal(DateTimeUtils.SECONDS_PER_DAY);
+
+            // Case where the value is of type time e.g. '10:00:00'
+            if (value instanceof ValueTime) {
+
+                // In order to retrieve the EPOCH time we only have to convert the time
+                // in nanoseconds (previously retrieved) into seconds.
+                result = ValueDecimal.get(timeNanosBigDecimal.divide(nanosSeconds));
+
+            } else if (value instanceof ValueDate) {
+
+                // Case where the value is of type date '2000:01:01', we have to retrieve the
+                // total number of days and multiply it by the number of seconds in a day.
+                result = ValueDecimal.get(numberOfDays.multiply(secondsPerDay));
+
+            } else if (value instanceof ValueTimestampTimeZone) {
+
+                // Case where the value is of type ValueTimestampTimeZone
+                // ('2000:01:01 10:00:00+05').
+                // We retrieve the time zone offset in minutes
+                ValueTimestampTimeZone v = (ValueTimestampTimeZone) value;
+                BigDecimal timeZoneOffsetSeconds = new BigDecimal(v.getTimeZoneOffsetMins() * 60);
+                // Sum the time in nanoseconds and the total number of days in seconds,
+                // then subtract the time zone offset in seconds.
+                result = ValueDecimal.get(timeNanosBigDecimal.divide(nanosSeconds)
+                        .add(numberOfDays.multiply(secondsPerDay)).subtract(timeZoneOffsetSeconds));
+
+            } else {
+
+                // By default, we have the date and the time ('2000:01:01 10:00:00') if no type
+                // is given.
+                // We just have to sum the time in nanoseconds and the total number of days in
+                // seconds.
+                result = ValueDecimal
+                        .get(timeNanosBigDecimal.divide(nanosSeconds).add(numberOfDays.multiply(secondsPerDay)));
+            }
+        }
+        return result;
+    }
+
+    /**
+     * Truncate the given date to the unit specified.
+     *
+     * @param datePartStr the time unit (e.g. 'DAY', 'HOUR', etc.)
+     * @param valueDate the date
+     * @return date truncated to the specified unit
+     */
+    public static Value truncateDate(String datePartStr, Value valueDate) {
+
+        int timeUnit = getDatePart(datePartStr);
+
+        // Retrieve the dateValue and the time in nanoseconds of the date.
+        long[] fieldDateAndTime = DateTimeUtils.dateAndTimeFromValue(valueDate);
+        long dateValue = fieldDateAndTime[0];
+        long timeNanosRetrieved = fieldDateAndTime[1];
+
+        // Variable used to hold the time in nanoseconds of the truncated date.
+        long timeNanos;
+
+        // Compute the number of complete time units in the date; for example, the
+        // number of 'HOUR' units in '15:14:13' is 15. Then convert the
+        // result back to nanoseconds.
+        switch (timeUnit) {
+
+        case MICROSECOND:
+
+            long nanoInMicroSecond = 1_000L;
+            long microseconds = timeNanosRetrieved / nanoInMicroSecond;
+            timeNanos = microseconds * nanoInMicroSecond;
+            break;
+
+        case MILLISECOND:
+
+            long nanoInMilliSecond = 1_000_000L;
+            long milliseconds = timeNanosRetrieved / nanoInMilliSecond;
+            timeNanos = milliseconds * nanoInMilliSecond;
+            break;
+
+        case SECOND:
+
+            long nanoInSecond = 1_000_000_000L;
+            long seconds = timeNanosRetrieved / nanoInSecond;
+            timeNanos = seconds * nanoInSecond;
+            break;
+
+        case MINUTE:
+
+            long nanoInMinute = 60_000_000_000L;
+            long minutes = timeNanosRetrieved / nanoInMinute;
+            timeNanos = minutes * nanoInMinute;
+            break;
+
+        case HOUR:
+
+            long nanoInHour = 3_600_000_000_000L;
+            long hours = timeNanosRetrieved / nanoInHour;
+            timeNanos = hours * nanoInHour;
+            break;
+
+        case DAY_OF_MONTH:
+
+            timeNanos = 0L;
+            break;
+
+        case WEEK:
+
+            long absoluteDay = DateTimeUtils.absoluteDayFromDateValue(dateValue);
+            int dayOfWeek = DateTimeUtils.getDayOfWeekFromAbsolute(absoluteDay, 1);
+            if (dayOfWeek != 1) {
+                dateValue = DateTimeUtils.dateValueFromAbsoluteDay(absoluteDay - dayOfWeek + 1);
+            }
+            timeNanos = 0L;
+            break;
+
+        case MONTH: {
+
+            long year = DateTimeUtils.yearFromDateValue(dateValue);
+            int month = DateTimeUtils.monthFromDateValue(dateValue);
+            dateValue = DateTimeUtils.dateValue(year, month, 1);
+            timeNanos = 0L;
+            break;
+
+        }
+        case QUARTER: {
+
+            long year = DateTimeUtils.yearFromDateValue(dateValue);
+            int month = DateTimeUtils.monthFromDateValue(dateValue);
+            month = ((month - 1) / 3) * 3 + 1;
+            dateValue = DateTimeUtils.dateValue(year, month, 1);
+            timeNanos = 0L;
+            break;
+
+        }
+        case YEAR: {
+
+            long year = DateTimeUtils.yearFromDateValue(dateValue);
+            dateValue = DateTimeUtils.dateValue(year, 1, 1);
+            timeNanos = 0L;
+            break;
+
+        }
+        case DECADE: {
+
+            long year = DateTimeUtils.yearFromDateValue(dateValue);
+            year = (year / 10) * 10;
+            dateValue = DateTimeUtils.dateValue(year, 1, 1);
+            timeNanos = 0L;
+            break;
+
+        }
+        case CENTURY: {
+
+            long year = DateTimeUtils.yearFromDateValue(dateValue);
+            year = ((year - 1) / 100) * 100 + 1;
+            dateValue = DateTimeUtils.dateValue(year, 1, 1);
+            timeNanos = 0L;
+            break;
+
+        }
+        case MILLENNIUM: {
+
+            long year = DateTimeUtils.yearFromDateValue(dateValue);
+            year = ((year - 1) / 1000) * 1000 + 1;
+            dateValue = DateTimeUtils.dateValue(year, 1, 1);
+            timeNanos = 0L;
+            break;
+
+        }
+        default:
+
+            // Throw an exception if the timeUnit is not recognized
+            throw DbException.getUnsupportedException(datePartStr);
+
+        }
+
+        Value result;
+
+        if (valueDate instanceof ValueTimestampTimeZone) {
+
+            // Case where we create a timestamp with timezone from the dateValue and
+            // timeNanos computed.
+            ValueTimestampTimeZone vTmp = (ValueTimestampTimeZone) valueDate;
+            result = ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, timeNanos, vTmp.getTimeZoneOffsetMins());
+
+        } else {
+
+            // By default, we create a timestamp from the dateValue and
+            // timeNanos computed.
+            result = ValueTimestamp.fromDateValueAndNanos(dateValue, timeNanos);
+
+        }
+
+        return result;
+    }
+
+    /**
+     * Formats a date using a format string.
+     *
+     * @param date
+     *            the date to format
+     * @param format
+     *            the format string
+     * @param locale
+     *            the locale
+     * @param timeZone
+     *            the timezone
+     * @return the formatted date
+     */
+    public static String formatDateTime(java.util.Date date, String format, String locale, String timeZone) {
+        SimpleDateFormat dateFormat = getDateFormat(format, locale, timeZone);
+        synchronized (dateFormat) {
+            return dateFormat.format(date);
+        }
+    }
+
+    private static SimpleDateFormat getDateFormat(String format, String locale, String timeZone) {
+        try {
+            // currently, a new instance is created for each call;
+            // however, we could cache the last few instances
+            SimpleDateFormat df;
+            if (locale == null) {
+                df = new SimpleDateFormat(format);
+            } else {
+                Locale l = new Locale(locale);
+                df = new SimpleDateFormat(format, l);
+            }
+            if (timeZone != null) {
+                df.setTimeZone(TimeZone.getTimeZone(timeZone));
+            }
+            return df;
+        } catch (Exception e) {
+            throw DbException.get(ErrorCode.PARSE_ERROR_1, e, format + "/" + locale + "/" + timeZone);
+        }
+    }
+
+    private static int getDatePart(String part) {
+        Integer p = DATE_PART.get(StringUtils.toUpperEnglish(part));
+        if (p == null) {
+            throw DbException.getInvalidValueException("date part", part);
+        }
+        return p.intValue();
+    }
+
+    /**
+     * Get the specified field of a date, however with years normalized to positive
+     * or negative, and month starting with 1.
+     *
+     * @param date
+     *            the date value
+     * @param field
+     *            the field type, see {@link Function} for constants
+     * @return the value
+     */
+    public static int getIntDatePart(Value date, int field) {
+        long[] a = DateTimeUtils.dateAndTimeFromValue(date);
+        long dateValue = a[0];
+        long timeNanos = a[1];
+        switch (field) {
+        case YEAR:
+            return DateTimeUtils.yearFromDateValue(dateValue);
+        case MONTH:
+            return DateTimeUtils.monthFromDateValue(dateValue);
+        case DAY_OF_MONTH:
+            return DateTimeUtils.dayFromDateValue(dateValue);
+        case HOUR:
+            return (int) (timeNanos / 3_600_000_000_000L % 24);
+        case MINUTE:
+            return (int) (timeNanos / 60_000_000_000L % 60);
+        case SECOND:
+            return (int) (timeNanos / 1_000_000_000 % 60);
+        case MILLISECOND:
+            return (int) (timeNanos / 1_000_000 % 1_000);
+        case MICROSECOND:
+            return (int) (timeNanos / 1_000 % 1_000_000);
+        case NANOSECOND:
+            return (int) (timeNanos % 1_000_000_000);
+        case DAY_OF_YEAR:
+            return DateTimeUtils.getDayOfYear(dateValue);
+        case DAY_OF_WEEK:
+            return DateTimeUtils.getSundayDayOfWeek(dateValue);
+        case WEEK:
+            GregorianCalendar gc = DateTimeUtils.getCalendar();
+            return DateTimeUtils.getWeekOfYear(dateValue, gc.getFirstDayOfWeek() - 1, gc.getMinimalDaysInFirstWeek());
+        case QUARTER:
+            return (DateTimeUtils.monthFromDateValue(dateValue) - 1) / 3 + 1;
+        case ISO_YEAR:
+            return DateTimeUtils.getIsoWeekYear(dateValue);
+        case ISO_WEEK:
+            return DateTimeUtils.getIsoWeekOfYear(dateValue);
+        case ISO_DAY_OF_WEEK:
+            return DateTimeUtils.getIsoDayOfWeek(dateValue);
+        case TIMEZONE_HOUR:
+        case TIMEZONE_MINUTE: {
+            int offsetMinutes;
+            if (date instanceof ValueTimestampTimeZone) {
+                offsetMinutes = ((ValueTimestampTimeZone) date).getTimeZoneOffsetMins();
+            } else {
+                offsetMinutes = DateTimeUtils.getTimeZoneOffsetMillis(null, dateValue, timeNanos);
+            }
+            if (field == TIMEZONE_HOUR) {
+                return offsetMinutes / 60;
+            }
+            return offsetMinutes % 60;
+        }
+        }
+        throw DbException.getUnsupportedException("getDatePart(" + date + ", " + field + ')');
+    }
+
+    /**
+     * Return names of months or weekdays.
+     *
+     * @param field
+     *            0 for months, 1 for weekdays
+     * @return names of months or weekdays
+     */
+    public static String[] getMonthsAndWeeks(int field) {
+        String[][] result = MONTHS_AND_WEEKS;
+        if (result == null) {
+            result = new String[2][];
+            DateFormatSymbols dfs = DateFormatSymbols.getInstance(Locale.ENGLISH);
+            result[0] = dfs.getMonths();
+            result[1] = dfs.getWeekdays();
+            MONTHS_AND_WEEKS = result;
+        }
+        return result[field];
+    }
+
+    /**
+     * Check if a given string is a valid date part string.
+     *
+     * @param part
+     *            the string
+     * @return true if it is
+     */
+    public static boolean isDatePart(String part) {
+        return DATE_PART.containsKey(StringUtils.toUpperEnglish(part));
+    }
+
+    /**
+     * Parses a date using a format string.
+     *
+     * @param date
+     *            the date to parse
+     * @param format
+     *            the parsing format
+     * @param locale
+     *            the locale
+     * @param timeZone
+     *            the timeZone
+     * @return the parsed date
+     */
+    public static java.util.Date parseDateTime(String date, String format, String locale, String timeZone) {
+        SimpleDateFormat dateFormat = getDateFormat(format, locale, timeZone);
+        try {
+            synchronized (dateFormat) {
+                return dateFormat.parse(date);
+            }
+        } catch (Exception e) {
+            // ParseException
+            throw DbException.get(ErrorCode.PARSE_ERROR_1, e, date);
+        }
+    }
+
+    private static long weekdiff(long absolute1, long absolute2, int firstDayOfWeek) {
+        absolute1 += 4 - firstDayOfWeek;
+        long r1 = absolute1 / 7;
+        if (absolute1 < 0 && (r1 * 7 != absolute1)) {
+            r1--;
+        }
+        absolute2 += 4 - firstDayOfWeek;
+        long r2 = absolute2 / 7;
+        if (absolute2 < 0 && (r2 * 7 != absolute2)) {
+            r2--;
+        }
+        return r2 - r1;
+    }
+
+    private DateTimeFunctions() {
+    }
+}
diff --git a/modules/h2/src/main/java/org/h2/util/DateTimeUtils.java b/modules/h2/src/main/java/org/h2/util/DateTimeUtils.java
new file mode 100644
index 0000000000000..d1adff56d9269
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/util/DateTimeUtils.java
@@ -0,0 +1,1492 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, and the
+ * EPL 1.0 (http://h2database.com/html/license.html). Initial Developer: H2
+ * Group. Iso8601: Initial Developer: Robert Rathsack (firstName dot lastName
+ * at gmx dot de)
+ */
+package org.h2.util;
+
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.util.Calendar;
+import java.util.GregorianCalendar;
+import java.util.TimeZone;
+import org.h2.engine.Mode;
+import org.h2.message.DbException;
+import org.h2.value.Value;
+import org.h2.value.ValueDate;
+import org.h2.value.ValueNull;
+import org.h2.value.ValueTime;
+import org.h2.value.ValueTimestamp;
+import org.h2.value.ValueTimestampTimeZone;
+
+/**
+ * This utility class contains time conversion functions.
+ * <p>
+ * Date value: a bit field with bits for the year, month, and day. Absolute day:
+ * the day number (0 means 1970-01-01).
+ */
+public class DateTimeUtils {
+
+    /**
+     * The number of milliseconds per day.
+     */
+    public static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000L;
+
+    /**
+     * The number of seconds per day.
+     */
+    public static final long SECONDS_PER_DAY = 24 * 60 * 60;
+
+    /**
+     * UTC time zone.
+     */
+    public static final TimeZone UTC = TimeZone.getTimeZone("UTC");
+
+    /**
+     * The number of nanoseconds per day.
+     */
+    public static final long NANOS_PER_DAY = MILLIS_PER_DAY * 1_000_000;
+
+    private static final int SHIFT_YEAR = 9;
+    private static final int SHIFT_MONTH = 5;
+
+    /**
+     * Date value for 1970-01-01.
+     */
+    public static final int EPOCH_DATE_VALUE = (1970 << SHIFT_YEAR) + (1 << SHIFT_MONTH) + 1;
+
+    private static final int[] NORMAL_DAYS_PER_MONTH = { 0, 31, 28, 31, 30, 31,
+            30, 31, 31, 30, 31, 30, 31 };
+
+    /**
+     * Offsets of month within a year, starting with March, April,...
+     */
+    private static final int[] DAYS_OFFSET = { 0, 31, 61, 92, 122, 153, 184,
+            214, 245, 275, 306, 337, 366 };
+
+    /**
+     * Multipliers for {@link #convertScale(long, int)}.
+     */
+    private static final int[] CONVERT_SCALE_TABLE = { 1_000_000_000, 100_000_000,
+            10_000_000, 1_000_000, 100_000, 10_000, 1_000, 100, 10 };
+
+    /**
+     * The thread local. Can not override initialValue because this would result
+     * in an inner class, which would not be garbage collected in a web
+     * container, and prevent the class loader of H2 from being garbage
+     * collected. Using a ThreadLocal on a system class like Calendar does not
+     * have that problem, and while it is still a small memory leak, it is not a
+     * class loader memory leak.
+     */
+    private static final ThreadLocal<GregorianCalendar> CACHED_CALENDAR = new ThreadLocal<>();
+
+    /**
+     * A cached instance of Calendar used when a timezone is specified.
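The packed "date value" layout described in the class javadoc (day in the low 5 bits, month shifted by 5, year shifted by 9, matching `SHIFT_YEAR`/`SHIFT_MONTH` and `EPOCH_DATE_VALUE` above) can be sketched outside H2 like this; the helper names are illustrative, only the shifts and masks mirror the patch:

```java
// Sketch of H2's packed date-value encoding: day | month << 5 | year << 9.
public class DateValueSketch {
    static final int SHIFT_YEAR = 9;
    static final int SHIFT_MONTH = 5;

    // Pack year/month/day into one long.
    static long dateValue(long year, int month, int day) {
        return (year << SHIFT_YEAR) | (month << SHIFT_MONTH) | day;
    }

    static long yearFromDateValue(long x) { return x >>> SHIFT_YEAR; }

    static int monthFromDateValue(long x) { return (int) (x >>> SHIFT_MONTH) & 15; }

    static int dayFromDateValue(long x) { return (int) (x & 31); }

    public static void main(String[] args) {
        long epoch = dateValue(1970, 1, 1);
        // Matches EPOCH_DATE_VALUE = (1970 << 9) + (1 << 5) + 1 in the patch.
        System.out.println(epoch == (1970 << 9) + (1 << 5) + 1); // true
        System.out.println(yearFromDateValue(epoch));            // 1970
    }
}
```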
+     */
+    private static final ThreadLocal<GregorianCalendar> CACHED_CALENDAR_NON_DEFAULT_TIMEZONE =
+            new ThreadLocal<>();
+
+    /**
+     * Cached local time zone.
+     */
+    private static volatile TimeZone timeZone;
+
+    /**
+     * Observed JVM behaviour is that if the timezone of the host computer is
+     * changed while the JVM is running, the zone offset does not change but
+     * keeps the initial value. So it is correct to measure this once and use
+     * this value throughout the JVM's lifecycle. In any case, it is safer to
+     * use a fixed value throughout the duration of the JVM's life, rather than
+     * have this offset change, possibly midway through a long-running query.
+     */
+    private static int zoneOffsetMillis = createGregorianCalendar().get(Calendar.ZONE_OFFSET);
+
+    private DateTimeUtils() {
+        // utility class
+    }
+
+    /**
+     * Returns local time zone.
+     *
+     * @return local time zone
+     */
+    private static TimeZone getTimeZone() {
+        TimeZone tz = timeZone;
+        if (tz == null) {
+            timeZone = tz = TimeZone.getDefault();
+        }
+        return tz;
+    }
+
+    /**
+     * Reset the cached calendar for default timezone, for example after
+     * changing the default timezone.
+     */
+    public static void resetCalendar() {
+        CACHED_CALENDAR.remove();
+        timeZone = null;
+        zoneOffsetMillis = createGregorianCalendar().get(Calendar.ZONE_OFFSET);
+    }
+
+    /**
+     * Get a calendar for the default timezone.
+     *
+     * @return a calendar instance. A cached instance is returned where possible
+     */
+    public static GregorianCalendar getCalendar() {
+        GregorianCalendar c = CACHED_CALENDAR.get();
+        if (c == null) {
+            c = createGregorianCalendar();
+            CACHED_CALENDAR.set(c);
+        }
+        c.clear();
+        return c;
+    }
+
+    /**
+     * Get a calendar for the given timezone.
+     *
+     * @param tz timezone for the calendar, is never null
+     * @return a calendar instance. A cached instance is returned where possible
+     */
+    private static GregorianCalendar getCalendar(TimeZone tz) {
+        GregorianCalendar c = CACHED_CALENDAR_NON_DEFAULT_TIMEZONE.get();
+        if (c == null || !c.getTimeZone().equals(tz)) {
+            c = createGregorianCalendar(tz);
+            CACHED_CALENDAR_NON_DEFAULT_TIMEZONE.set(c);
+        }
+        c.clear();
+        return c;
+    }
+
+    /**
+     * Creates a Gregorian calendar for the default timezone using the default
+     * locale. Dates in H2 are represented in a Gregorian calendar. So this
+     * method should be used instead of Calendar.getInstance() to ensure that
+     * the Gregorian calendar is used for all date processing instead of a
+     * default locale calendar that can be non-Gregorian in some locales.
+     *
+     * @return a new calendar instance.
+     */
+    public static GregorianCalendar createGregorianCalendar() {
+        return new GregorianCalendar();
+    }
+
+    /**
+     * Creates a Gregorian calendar for the given timezone using the default
+     * locale. Dates in H2 are represented in a Gregorian calendar. So this
+     * method should be used instead of Calendar.getInstance() to ensure that
+     * the Gregorian calendar is used for all date processing instead of a
+     * default locale calendar that can be non-Gregorian in some locales.
+     *
+     * @param tz timezone for the calendar, is never null
+     * @return a new calendar instance.
+     */
+    public static GregorianCalendar createGregorianCalendar(TimeZone tz) {
+        return new GregorianCalendar(tz);
+    }
+
+    /**
+     * Convert the date to the specified time zone.
+     *
+     * @param value the date (might be ValueNull)
+     * @param calendar the calendar
+     * @return the date using the correct time zone
+     */
+    public static Date convertDate(Value value, Calendar calendar) {
+        if (value == ValueNull.INSTANCE) {
+            return null;
+        }
+        ValueDate d = (ValueDate) value.convertTo(Value.DATE);
+        Calendar cal = (Calendar) calendar.clone();
+        cal.clear();
+        cal.setLenient(true);
+        long dateValue = d.getDateValue();
+        long ms = convertToMillis(cal, yearFromDateValue(dateValue),
+                monthFromDateValue(dateValue), dayFromDateValue(dateValue), 0,
+                0, 0, 0);
+        return new Date(ms);
+    }
+
+    /**
+     * Convert the time to the specified time zone.
+     *
+     * @param value the time (might be ValueNull)
+     * @param calendar the calendar
+     * @return the time using the correct time zone
+     */
+    public static Time convertTime(Value value, Calendar calendar) {
+        if (value == ValueNull.INSTANCE) {
+            return null;
+        }
+        ValueTime t = (ValueTime) value.convertTo(Value.TIME);
+        Calendar cal = (Calendar) calendar.clone();
+        cal.clear();
+        cal.setLenient(true);
+        long nanos = t.getNanos();
+        long millis = nanos / 1_000_000;
+        nanos -= millis * 1_000_000;
+        long s = millis / 1_000;
+        millis -= s * 1_000;
+        long m = s / 60;
+        s -= m * 60;
+        long h = m / 60;
+        m -= h * 60;
+        return new Time(convertToMillis(cal, 1970, 1, 1, (int) h, (int) m, (int) s, (int) millis));
+    }
+
+    /**
+     * Convert the timestamp to the specified time zone.
+     *
+     * @param value the timestamp (might be ValueNull)
+     * @param calendar the calendar
+     * @return the timestamp using the correct time zone
+     */
+    public static Timestamp convertTimestamp(Value value, Calendar calendar) {
+        if (value == ValueNull.INSTANCE) {
+            return null;
+        }
+        ValueTimestamp ts = (ValueTimestamp) value.convertTo(Value.TIMESTAMP);
+        Calendar cal = (Calendar) calendar.clone();
+        cal.clear();
+        cal.setLenient(true);
+        long dateValue = ts.getDateValue();
+        long nanos = ts.getTimeNanos();
+        long millis = nanos / 1_000_000;
+        nanos -= millis * 1_000_000;
+        long s = millis / 1_000;
+        millis -= s * 1_000;
+        long m = s / 60;
+        s -= m * 60;
+        long h = m / 60;
+        m -= h * 60;
+        long ms = convertToMillis(cal, yearFromDateValue(dateValue),
+                monthFromDateValue(dateValue), dayFromDateValue(dateValue),
+                (int) h, (int) m, (int) s, (int) millis);
+        Timestamp x = new Timestamp(ms);
+        x.setNanos((int) (nanos + millis * 1_000_000));
+        return x;
+    }
+
+    /**
+     * Convert a java.util.Date using the specified calendar.
+     *
+     * @param x the date
+     * @param calendar the calendar
+     * @return the date
+     */
+    public static ValueDate convertDate(Date x, Calendar calendar) {
+        if (calendar == null) {
+            throw DbException.getInvalidValueException("calendar", null);
+        }
+        Calendar cal = (Calendar) calendar.clone();
+        cal.setTimeInMillis(x.getTime());
+        long dateValue = dateValueFromCalendar(cal);
+        return ValueDate.fromDateValue(dateValue);
+    }
+
+    /**
+     * Convert the time using the specified calendar.
+     *
+     * @param x the time
+     * @param calendar the calendar
+     * @return the time
+     */
+    public static ValueTime convertTime(Time x, Calendar calendar) {
+        if (calendar == null) {
+            throw DbException.getInvalidValueException("calendar", null);
+        }
+        Calendar cal = (Calendar) calendar.clone();
+        cal.setTimeInMillis(x.getTime());
+        long nanos = nanosFromCalendar(cal);
+        return ValueTime.fromNanos(nanos);
+    }
+
+    /**
+     * Convert the timestamp using the specified calendar.
+     *
+     * @param x the timestamp
+     * @param calendar the calendar
+     * @return the timestamp
+     */
+    public static ValueTimestamp convertTimestamp(Timestamp x,
+            Calendar calendar) {
+        if (calendar == null) {
+            throw DbException.getInvalidValueException("calendar", null);
+        }
+        Calendar cal = (Calendar) calendar.clone();
+        cal.setTimeInMillis(x.getTime());
+        long dateValue = dateValueFromCalendar(cal);
+        long nanos = nanosFromCalendar(cal);
+        nanos += x.getNanos() % 1_000_000;
+        return ValueTimestamp.fromDateValueAndNanos(dateValue, nanos);
+    }
+
+    /**
+     * Parse a date string. The format is: [+|-]year-month-day
+     *
+     * @param s the string to parse
+     * @param start the parse index start
+     * @param end the parse index end
+     * @return the date value
+     * @throws IllegalArgumentException if there is a problem
+     */
+    public static long parseDateValue(String s, int start, int end) {
+        if (s.charAt(start) == '+') {
+            // +year
+            start++;
+        }
+        // start at position 1 to support "-year"
+        int s1 = s.indexOf('-', start + 1);
+        int s2 = s.indexOf('-', s1 + 1);
+        if (s1 <= 0 || s2 <= s1) {
+            throw new IllegalArgumentException(s);
+        }
+        int year = Integer.parseInt(s.substring(start, s1));
+        int month = Integer.parseInt(s.substring(s1 + 1, s2));
+        int day = Integer.parseInt(s.substring(s2 + 1, end));
+        if (!isValidDate(year, month, day)) {
+            throw new IllegalArgumentException(year + "-" + month + "-" + day);
+        }
+        return dateValue(year, month, day);
+    }
+
+    /**
+     * Parse a time string. The format is: [-]hour:minute:second[.nanos] or
+     * alternatively [-]hour.minute.second[.nanos].
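Times in this code are carried around as "nanos of day": `hh:mm:ss.nnnnnnnnn` becomes `((hh * 60 + mm) * 60 + ss) * 1e9 + nanos`, which is the value `parseTimeNanos` ultimately returns. A standalone sketch of that encoding (the class name is illustrative, not part of the patch):

```java
// Sketch of the nanos-of-day encoding used by the time parser: a wall-clock
// time is flattened to a single long counting nanoseconds since midnight.
public class TimeNanosSketch {

    static long timeNanos(int hour, int minute, int second, long nanos) {
        return ((((hour * 60L) + minute) * 60) + second) * 1_000_000_000 + nanos;
    }

    public static void main(String[] args) {
        // 10:00:00 is 36_000 seconds into the day.
        System.out.println(timeNanos(10, 0, 0, 0)); // 36000000000000
    }
}
```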
+     *
+     * @param s the string to parse
+     * @param start the parse index start
+     * @param end the parse index end
+     * @param timeOfDay whether the result needs to be within 0 (inclusive) and 1
+     *            day (exclusive)
+     * @return the time in nanoseconds
+     * @throws IllegalArgumentException if there is a problem
+     */
+    public static long parseTimeNanos(String s, int start, int end,
+            boolean timeOfDay) {
+        int hour = 0, minute = 0, second = 0;
+        long nanos = 0;
+        int s1 = s.indexOf(':', start);
+        int s2 = s.indexOf(':', s1 + 1);
+        int s3 = s.indexOf('.', s2 + 1);
+        if (s1 <= 0 || s2 <= s1) {
+            // if the first try fails, try to use the IBM DB2 time format
+            // [-]hour.minute.second[.nanos]
+            s1 = s.indexOf('.', start);
+            s2 = s.indexOf('.', s1 + 1);
+            s3 = s.indexOf('.', s2 + 1);
+
+            if (s1 <= 0 || s2 <= s1) {
+                throw new IllegalArgumentException(s);
+            }
+        }
+        boolean negative;
+        hour = Integer.parseInt(s.substring(start, s1));
+        if (hour < 0 || hour == 0 && s.charAt(0) == '-') {
+            if (timeOfDay) {
+                /*
+                 * This also forbids -00:00:00 and similar values.
+                 */
+                throw new IllegalArgumentException(s);
+            }
+            negative = true;
+            hour = -hour;
+        } else {
+            negative = false;
+        }
+        minute = Integer.parseInt(s.substring(s1 + 1, s2));
+        if (s3 < 0) {
+            second = Integer.parseInt(s.substring(s2 + 1, end));
+        } else {
+            second = Integer.parseInt(s.substring(s2 + 1, s3));
+            String n = (s.substring(s3 + 1, end) + "000000000").substring(0, 9);
+            nanos = Integer.parseInt(n);
+        }
+        if (hour >= 2_000_000 || minute < 0 || minute >= 60 || second < 0
+                || second >= 60) {
+            throw new IllegalArgumentException(s);
+        }
+        if (timeOfDay && hour >= 24) {
+            throw new IllegalArgumentException(s);
+        }
+        nanos += ((((hour * 60L) + minute) * 60) + second) * 1_000_000_000;
+        return negative ? -nanos : nanos;
+    }
+
+    /**
+     * See:
+     * https://stackoverflow.com/questions/3976616/how-to-find-nth-occurrence-of-character-in-a-string#answer-3976656
+     */
+    private static int findNthIndexOf(String str, char chr, int n) {
+        int pos = str.indexOf(chr);
+        while (--n > 0 && pos != -1) {
+            pos = str.indexOf(chr, pos + 1);
+        }
+        return pos;
+    }
+
+    /**
+     * Parses timestamp value from the specified string.
+     *
+     * @param s
+     *            string to parse
+     * @param mode
+     *            database mode, or {@code null}
+     * @param withTimeZone
+     *            if {@code true} return {@link ValueTimestampTimeZone} instead of
+     *            {@link ValueTimestamp}
+     * @return parsed timestamp
+     */
+    public static Value parseTimestamp(String s, Mode mode, boolean withTimeZone) {
+        int dateEnd = s.indexOf(' ');
+        if (dateEnd < 0) {
+            // ISO 8601 compatibility
+            dateEnd = s.indexOf('T');
+            if (dateEnd < 0 && mode != null && mode.allowDB2TimestampFormat) {
+                // DB2 also allows a dash between date and time
+                dateEnd = findNthIndexOf(s, '-', 3);
+            }
+        }
+        int timeStart;
+        if (dateEnd < 0) {
+            dateEnd = s.length();
+            timeStart = -1;
+        } else {
+            timeStart = dateEnd + 1;
+        }
+        long dateValue = parseDateValue(s, 0, dateEnd);
+        long nanos;
+        short tzMinutes = 0;
+        if (timeStart < 0) {
+            nanos = 0;
+        } else {
+            int timeEnd = s.length();
+            TimeZone tz = null;
+            if (s.endsWith("Z")) {
+                tz = UTC;
+                timeEnd--;
+            } else {
+                int timeZoneStart = s.indexOf('+', dateEnd + 1);
+                if (timeZoneStart < 0) {
+                    timeZoneStart = s.indexOf('-', dateEnd + 1);
+                }
+                if (timeZoneStart >= 0) {
+                    // Allow [timeZoneName] part after time zone offset
+                    int offsetEnd = s.indexOf('[', timeZoneStart + 1);
+                    if (offsetEnd < 0) {
+                        offsetEnd = s.length();
+                    }
+                    String tzName = "GMT" + s.substring(timeZoneStart, offsetEnd);
+                    tz = TimeZone.getTimeZone(tzName);
+                    if (!tz.getID().startsWith(tzName)) {
+                        throw new IllegalArgumentException(
+                                tzName + " (" + tz.getID() + "?)");
+                    }
+                    if (s.charAt(timeZoneStart - 1) == ' ') {
+                        timeZoneStart--;
+                    }
+                    timeEnd = timeZoneStart;
+                } else {
+                    timeZoneStart = s.indexOf(' ', dateEnd + 1);
+                    if (timeZoneStart > 0) {
+                        String tzName = s.substring(timeZoneStart + 1);
+                        tz = TimeZone.getTimeZone(tzName);
+                        if (!tz.getID().startsWith(tzName)) {
+                            throw new IllegalArgumentException(tzName);
+                        }
+                        timeEnd = timeZoneStart;
+                    }
+                }
+            }
+            nanos = parseTimeNanos(s, dateEnd + 1, timeEnd, true);
+            if (tz != null) {
+                if (withTimeZone) {
+                    if (tz != UTC) {
+                        long millis = convertDateTimeValueToMillis(tz, dateValue, nanos / 1_000_000);
+                        tzMinutes = (short) (tz.getOffset(millis) / 1000 / 60);
+                    }
+                } else {
+                    long millis = convertDateTimeValueToMillis(tz, dateValue, nanos / 1_000_000);
+                    dateValue = dateValueFromDate(millis);
+                    nanos = nanos % 1_000_000 + nanosFromDate(millis);
+                }
+            }
+        }
+        if (withTimeZone) {
+            return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, nanos, tzMinutes);
+        }
+        return ValueTimestamp.fromDateValueAndNanos(dateValue, nanos);
+    }
+
+    /**
+     * Calculates the time zone offset in milliseconds for the specified time
+     * zone, date value, and nanoseconds since midnight.
+     *
+     * @param tz
+     *            time zone, or {@code null} for default
+     * @param dateValue
+     *            date value
+     * @param timeNanos
+     *            nanoseconds since midnight
+     * @return time zone offset in milliseconds
+     */
+    public static int getTimeZoneOffsetMillis(TimeZone tz, long dateValue, long timeNanos) {
+        long msec = timeNanos / 1_000_000;
+        long utc = convertDateTimeValueToMillis(tz, dateValue, msec);
+        long local = absoluteDayFromDateValue(dateValue) * MILLIS_PER_DAY + msec;
+        return (int) (local - utc);
+    }
+
+    /**
+     * Calculates the milliseconds since epoch for the specified date value,
+     * nanoseconds since midnight, and time zone offset.
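The epoch-millis arithmetic used by `getMillis(dateValue, timeNanos, offsetMins)` is: absolute days since 1970-01-01 times millis per day, plus millis within the day, minus the zone offset. A standalone sketch (illustrative names; `java.time` is used only to supply the absolute day count):

```java
import java.time.LocalDate;

// Sketch of the getMillis() arithmetic: day count * 86_400_000
// + millis-of-day - offset-in-minutes * 60_000.
public class EpochMillisSketch {
    static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000L;

    static long getMillis(long absoluteDay, long timeNanos, short offsetMins) {
        return absoluteDay * MILLIS_PER_DAY + timeNanos / 1_000_000 - offsetMins * 60_000L;
    }

    public static void main(String[] args) {
        long day = LocalDate.of(2020, 1, 1).toEpochDay();
        // 2020-01-01 00:00 at UTC+05:00 is 2019-12-31T19:00Z.
        long utcMillis = getMillis(day, 0, (short) 300);
        System.out.println(java.time.Instant.ofEpochMilli(utcMillis)); // 2019-12-31T19:00:00Z
    }
}
```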
+     *
+     * @param dateValue
+     *            date value
+     * @param timeNanos
+     *            nanoseconds since midnight
+     * @param offsetMins
+     *            time zone offset in minutes
+     * @return milliseconds since epoch in UTC
+     */
+    public static long getMillis(long dateValue, long timeNanos, short offsetMins) {
+        return absoluteDayFromDateValue(dateValue) * MILLIS_PER_DAY
+                + timeNanos / 1_000_000 - offsetMins * 60_000;
+    }
+
+    /**
+     * Calculate the milliseconds since 1970-01-01 (UTC) for the given date and
+     * time (in the specified timezone).
+     *
+     * @param tz the timezone of the parameters, or null for the default
+     *            timezone
+     * @param year the absolute year (positive or negative)
+     * @param month the month (1-12)
+     * @param day the day (1-31)
+     * @param hour the hour (0-23)
+     * @param minute the minutes (0-59)
+     * @param second the number of seconds (0-59)
+     * @param millis the number of milliseconds
+     * @return the number of milliseconds (UTC)
+     */
+    public static long getMillis(TimeZone tz, int year, int month, int day,
+            int hour, int minute, int second, int millis) {
+        GregorianCalendar c;
+        if (tz == null) {
+            c = getCalendar();
+        } else {
+            c = getCalendar(tz);
+        }
+        c.setLenient(false);
+        try {
+            return convertToMillis(c, year, month, day, hour, minute, second, millis);
+        } catch (IllegalArgumentException e) {
+            // special case: if the time simply doesn't exist because of
+            // daylight saving time changes, use the lenient version
+            String message = e.toString();
+            if (message.indexOf("HOUR_OF_DAY") > 0) {
+                if (hour < 0 || hour > 23) {
+                    throw e;
+                }
+            } else if (message.indexOf("DAY_OF_MONTH") > 0) {
+                int maxDay;
+                if (month == 2) {
+                    maxDay = c.isLeapYear(year) ? 29 : 28;
+                } else {
+                    maxDay = NORMAL_DAYS_PER_MONTH[month];
+                }
+                if (day < 1 || day > maxDay) {
+                    throw e;
+                }
+                // DAY_OF_MONTH is thrown for years > 2037
+                // using the timezone Brasilia and others,
+                // for example for 2042-10-12 00:00:00.
+                hour += 6;
+            }
+            c.setLenient(true);
+            return convertToMillis(c, year, month, day, hour, minute, second, millis);
+        }
+    }
+
+    private static long convertToMillis(Calendar cal, int year, int month, int day,
+            int hour, int minute, int second, int millis) {
+        if (year <= 0) {
+            cal.set(Calendar.ERA, GregorianCalendar.BC);
+            cal.set(Calendar.YEAR, 1 - year);
+        } else {
+            cal.set(Calendar.ERA, GregorianCalendar.AD);
+            cal.set(Calendar.YEAR, year);
+        }
+        // january is 0
+        cal.set(Calendar.MONTH, month - 1);
+        cal.set(Calendar.DAY_OF_MONTH, day);
+        cal.set(Calendar.HOUR_OF_DAY, hour);
+        cal.set(Calendar.MINUTE, minute);
+        cal.set(Calendar.SECOND, second);
+        cal.set(Calendar.MILLISECOND, millis);
+        return cal.getTimeInMillis();
+    }
+
+    /**
+     * Extracts date value and nanos of day from the specified value.
+     *
+     * @param value
+     *            value to extract fields from
+     * @return array with date value and nanos of day
+     */
+    public static long[] dateAndTimeFromValue(Value value) {
+        long dateValue = EPOCH_DATE_VALUE;
+        long timeNanos = 0;
+        if (value instanceof ValueTimestamp) {
+            ValueTimestamp v = (ValueTimestamp) value;
+            dateValue = v.getDateValue();
+            timeNanos = v.getTimeNanos();
+        } else if (value instanceof ValueDate) {
+            dateValue = ((ValueDate) value).getDateValue();
+        } else if (value instanceof ValueTime) {
+            timeNanos = ((ValueTime) value).getNanos();
+        } else if (value instanceof ValueTimestampTimeZone) {
+            ValueTimestampTimeZone v = (ValueTimestampTimeZone) value;
+            dateValue = v.getDateValue();
+            timeNanos = v.getTimeNanos();
+        } else {
+            ValueTimestamp v = (ValueTimestamp) value.convertTo(Value.TIMESTAMP);
+            dateValue = v.getDateValue();
+            timeNanos = v.getTimeNanos();
+        }
+        return new long[] {dateValue, timeNanos};
+    }
+
+    /**
+     * Creates a new date-time value with the same type as the original value. If
+     * the original value is a ValueTimestampTimeZone, the returned value will have
+     * the same time zone offset as the original value.
+     *
+     * @param original
+     *            original value
+     * @param dateValue
+     *            date value for the returned value
+     * @param timeNanos
+     *            nanos of day for the returned value
+     * @param forceTimestamp
+     *            if {@code true} return ValueTimestamp if original argument is
+     *            ValueDate or ValueTime
+     * @return new value with the specified date value and nanos of day
+     */
+    public static Value dateTimeToValue(Value original, long dateValue, long timeNanos, boolean forceTimestamp) {
+        if (!(original instanceof ValueTimestamp)) {
+            if (!forceTimestamp) {
+                if (original instanceof ValueDate) {
+                    return ValueDate.fromDateValue(dateValue);
+                }
+                if (original instanceof ValueTime) {
+                    return ValueTime.fromNanos(timeNanos);
+                }
+            }
+            if (original instanceof ValueTimestampTimeZone) {
+                return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, timeNanos,
+                        ((ValueTimestampTimeZone) original).getTimeZoneOffsetMins());
+            }
+        }
+        return ValueTimestamp.fromDateValueAndNanos(dateValue, timeNanos);
+    }
+
+    /**
+     * Get the number of milliseconds since 1970-01-01 in the local timezone,
+     * without taking daylight saving time into account.
+     *
+     * @param d the date
+     * @return the milliseconds
+     */
+    public static long getTimeLocalWithoutDst(java.util.Date d) {
+        return d.getTime() + zoneOffsetMillis;
+    }
+
+    /**
+     * Convert the number of milliseconds since 1970-01-01 in the local timezone
+     * to UTC, without taking daylight saving time into account.
+     *
+     * @param millis the number of milliseconds in the local timezone
+     * @return the number of milliseconds in UTC
+     */
+    public static long getTimeUTCWithoutDst(long millis) {
+        return millis - zoneOffsetMillis;
+    }
+
+    /**
+     * Returns day of week.
+ * + * @param dateValue + * the date value + * @param firstDayOfWeek + * first day of week, Monday as 1, Sunday as 7 or 0 + * @return day of week + * @see #getIsoDayOfWeek(long) + */ + public static int getDayOfWeek(long dateValue, int firstDayOfWeek) { + return getDayOfWeekFromAbsolute(absoluteDayFromDateValue(dateValue), firstDayOfWeek); + } + + /** + * Get the day of the week from the absolute day value. + * + * @param absoluteValue the absolute day + * @param firstDayOfWeek the first day of the week + * @return the day of week + */ + public static int getDayOfWeekFromAbsolute(long absoluteValue, int firstDayOfWeek) { + return absoluteValue >= 0 ? (int) ((absoluteValue - firstDayOfWeek + 11) % 7) + 1 + : (int) ((absoluteValue - firstDayOfWeek - 2) % 7) + 7; + } + + /** + * Returns number of day in year. + * + * @param dateValue + * the date value + * @return number of day in year + */ + public static int getDayOfYear(long dateValue) { + return (int) (absoluteDayFromDateValue(dateValue) - absoluteDayFromYear(yearFromDateValue(dateValue))) + 1; + } + + /** + * Returns ISO day of week. + * + * @param dateValue + * the date value + * @return ISO day of week, Monday as 1 to Sunday as 7 + * @see #getSundayDayOfWeek(long) + */ + public static int getIsoDayOfWeek(long dateValue) { + return getDayOfWeek(dateValue, 1); + } + + /** + * Returns ISO number of week in year. + * + * @param dateValue + * the date value + * @return number of week in year + * @see #getIsoWeekYear(long) + * @see #getWeekOfYear(long, int, int) + */ + public static int getIsoWeekOfYear(long dateValue) { + return getWeekOfYear(dateValue, 1, 4); + } + + /** + * Returns ISO week year. + * + * @param dateValue + * the date value + * @return ISO week year + * @see #getIsoWeekOfYear(long) + * @see #getWeekYear(long, int, int) + */ + public static int getIsoWeekYear(long dateValue) { + return getWeekYear(dateValue, 1, 4); + } + + /** + * Returns day of week with Sunday as 1. 
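The modular arithmetic in getDayOfWeekFromAbsolute is easiest to sanity-check against a known anchor. Below is a minimal standalone sketch (the class name is made up); like the surrounding code, it assumes that absolute day 0 corresponds to 1970-01-01, which was a Thursday.

```java
// Standalone copy of the day-of-week formula from the patch, for verification.
public class DayOfWeekDemo {

    // The negative branch compensates for Java's remainder operator, which
    // yields values <= 0 for negative absolute days.
    public static int dayOfWeekFromAbsolute(long absoluteValue, int firstDayOfWeek) {
        return absoluteValue >= 0 ? (int) ((absoluteValue - firstDayOfWeek + 11) % 7) + 1
                : (int) ((absoluteValue - firstDayOfWeek - 2) % 7) + 7;
    }

    public static void main(String[] args) {
        // 1970-01-01 (absolute day 0) was a Thursday: ISO numbering gives 4.
        System.out.println(dayOfWeekFromAbsolute(0, 1));  // 4
        // 1969-12-31 (absolute day -1) was a Wednesday: ISO 3.
        System.out.println(dayOfWeekFromAbsolute(-1, 1)); // 3
        // Sunday-based numbering (firstDayOfWeek = 0): Thursday is 5.
        System.out.println(dayOfWeekFromAbsolute(0, 0));  // 5
    }
}
```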
+ * + * @param dateValue + * the date value + * @return day of week, Sunday as 1 to Monday as 7 + * @see #getIsoDayOfWeek(long) + */ + public static int getSundayDayOfWeek(long dateValue) { + return getDayOfWeek(dateValue, 0); + } + + /** + * Returns number of week in year. + * + * @param dateValue + * the date value + * @param firstDayOfWeek + * first day of week, Monday as 1, Sunday as 7 or 0 + * @param minimalDaysInFirstWeek + * minimal days in first week of year + * @return number of week in year + * @see #getIsoWeekOfYear(long) + */ + public static int getWeekOfYear(long dateValue, int firstDayOfWeek, int minimalDaysInFirstWeek) { + long abs = absoluteDayFromDateValue(dateValue); + int year = yearFromDateValue(dateValue); + long base = getWeekOfYearBase(year, firstDayOfWeek, minimalDaysInFirstWeek); + if (abs - base < 0) { + base = getWeekOfYearBase(year - 1, firstDayOfWeek, minimalDaysInFirstWeek); + } else if (monthFromDateValue(dateValue) == 12 && 24 + minimalDaysInFirstWeek < dayFromDateValue(dateValue)) { + if (abs >= getWeekOfYearBase(year + 1, firstDayOfWeek, minimalDaysInFirstWeek)) { + return 1; + } + } + return (int) ((abs - base) / 7) + 1; + } + + private static long getWeekOfYearBase(int year, int firstDayOfWeek, int minimalDaysInFirstWeek) { + long first = absoluteDayFromYear(year); + int daysInFirstWeek = 8 - getDayOfWeekFromAbsolute(first, firstDayOfWeek); + long base = first + daysInFirstWeek; + if (daysInFirstWeek >= minimalDaysInFirstWeek) { + base -= 7; + } + return base; + } + + /** + * Returns week year. 
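The ISO variants above fix firstDayOfWeek = 1 (Monday) and minimalDaysInFirstWeek = 4, which is exactly the rule java.time exposes as WeekFields.ISO. A quick cross-check on a year-boundary edge case (demo class name is hypothetical):

```java
// Cross-check of the ISO week rule (week 1 is the week containing the first
// Thursday of the year) using the JDK's own implementation.
import java.time.LocalDate;
import java.time.temporal.WeekFields;

public class IsoWeekDemo {
    public static void main(String[] args) {
        // 2016-01-01 was a Friday; its week holds only 3 days of 2016,
        // so it still belongs to week 53 of week-based year 2015.
        LocalDate d = LocalDate.of(2016, 1, 1);
        System.out.println(d.get(WeekFields.ISO.weekOfWeekBasedYear())); // 53
        System.out.println(d.get(WeekFields.ISO.weekBasedYear()));       // 2015
        // 2016-01-04 is the first Monday of 2016: week 1.
        System.out.println(LocalDate.of(2016, 1, 4)
                .get(WeekFields.ISO.weekOfWeekBasedYear()));             // 1
    }
}
```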
+ * + * @param dateValue + * the date value + * @param firstDayOfWeek + * first day of week, Monday as 1, Sunday as 7 or 0 + * @param minimalDaysInFirstWeek + * minimal days in first week of year + * @return week year + * @see #getIsoWeekYear(long) + */ + public static int getWeekYear(long dateValue, int firstDayOfWeek, int minimalDaysInFirstWeek) { + long abs = absoluteDayFromDateValue(dateValue); + int year = yearFromDateValue(dateValue); + long base = getWeekOfYearBase(year, firstDayOfWeek, minimalDaysInFirstWeek); + if (abs - base < 0) { + return year - 1; + } else if (monthFromDateValue(dateValue) == 12 && 24 + minimalDaysInFirstWeek < dayFromDateValue(dateValue)) { + if (abs >= getWeekOfYearBase(year + 1, firstDayOfWeek, minimalDaysInFirstWeek)) { + return year + 1; + } + } + return year; + } + + /** + * Returns number of days in month. + * + * @param year the year + * @param month the month + * @return number of days in the specified month + */ + public static int getDaysInMonth(int year, int month) { + if (month != 2) { + return NORMAL_DAYS_PER_MONTH[month]; + } + // All leap years divisible by 4 + return (year & 3) == 0 + // All such years before 1582 are Julian and leap + && (year < 1582 + // Otherwise check Gregorian conditions + || year % 100 != 0 || year % 400 == 0) + ? 29 : 28; + } + + /** + * Verify if the specified date is valid. + * + * @param year the year + * @param month the month (January is 1) + * @param day the day (1 is the first of the month) + * @return true if it is valid + */ + public static boolean isValidDate(int year, int month, int day) { + if (month < 1 || month > 12 || day < 1) { + return false; + } + if (year == 1582 && month == 10) { + // special case: days 1582-10-05 .. 1582-10-14 don't exist + return day < 5 || (day > 14 && day <= 31); + } + return day <= getDaysInMonth(year, month); + } + + /** + * Convert an encoded date value to a java.util.Date, using the default + * timezone. 
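getDaysInMonth above uses a hybrid leap-year rule: before 1582 every year divisible by 4 is leap (Julian calendar), and from 1582 on the Gregorian century exceptions apply. A standalone sketch of just that predicate (class name is made up):

```java
// Hybrid Julian/Gregorian leap-year rule, mirrored from getDaysInMonth.
public class LeapDemo {
    public static boolean isLeap(int year) {
        // Divisible by 4, and either Julian (pre-1582) or passing the
        // Gregorian century test.
        return (year & 3) == 0 && (year < 1582 || year % 100 != 0 || year % 400 == 0);
    }

    public static void main(String[] args) {
        System.out.println(isLeap(1500)); // true: Julian, divisible by 4
        System.out.println(isLeap(1900)); // false: Gregorian century, not /400
        System.out.println(isLeap(2000)); // true: divisible by 400
    }
}
```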
+ * + * @param dateValue the date value + * @return the date + */ + public static Date convertDateValueToDate(long dateValue) { + long millis = getMillis(null, yearFromDateValue(dateValue), + monthFromDateValue(dateValue), dayFromDateValue(dateValue), 0, + 0, 0, 0); + return new Date(millis); + } + + /** + * Convert an encoded date-time value to millis, using the supplied timezone. + * + * @param tz the timezone + * @param dateValue the date value + * @param ms milliseconds of day + * @return the date + */ + public static long convertDateTimeValueToMillis(TimeZone tz, long dateValue, long ms) { + long second = ms / 1000; + ms -= second * 1000; + int minute = (int) (second / 60); + second -= minute * 60; + int hour = minute / 60; + minute -= hour * 60; + return getMillis(tz, yearFromDateValue(dateValue), monthFromDateValue(dateValue), dayFromDateValue(dateValue), + hour, minute, (int) second, (int) ms); + } + + /** + * Convert an encoded date value / time value to a timestamp, using the + * default timezone. + * + * @param dateValue the date value + * @param timeNanos the nanoseconds since midnight + * @return the timestamp + */ + public static Timestamp convertDateValueToTimestamp(long dateValue, + long timeNanos) { + Timestamp ts = new Timestamp(convertDateTimeValueToMillis(null, dateValue, timeNanos / 1_000_000)); + // This method expects the complete nanoseconds value including milliseconds + ts.setNanos((int) (timeNanos % 1_000_000_000)); + return ts; + } + + /** + * Convert an encoded date value / time value to a timestamp using the specified + * time zone offset. 
+ * + * @param dateValue the date value + * @param timeNanos the nanoseconds since midnight + * @param offsetMins time zone offset in minutes + * @return the timestamp + */ + public static Timestamp convertTimestampTimeZoneToTimestamp(long dateValue, long timeNanos, short offsetMins) { + Timestamp ts = new Timestamp(getMillis(dateValue, timeNanos, offsetMins)); + ts.setNanos((int) (timeNanos % 1_000_000_000)); + return ts; + } + + /** + * Convert a time value to a time, using the default timezone. + * + * @param nanosSinceMidnight the nanoseconds since midnight + * @return the time + */ + public static Time convertNanoToTime(long nanosSinceMidnight) { + long millis = nanosSinceMidnight / 1_000_000; + long s = millis / 1_000; + millis -= s * 1_000; + long m = s / 60; + s -= m * 60; + long h = m / 60; + m -= h * 60; + long ms = getMillis(null, 1970, 1, 1, (int) (h % 24), (int) m, (int) s, + (int) millis); + return new Time(ms); + } + + /** + * Get the year from a date value. + * + * @param x the date value + * @return the year + */ + public static int yearFromDateValue(long x) { + return (int) (x >>> SHIFT_YEAR); + } + + /** + * Get the month from a date value. + * + * @param x the date value + * @return the month (1..12) + */ + public static int monthFromDateValue(long x) { + return (int) (x >>> SHIFT_MONTH) & 15; + } + + /** + * Get the day of month from a date value. + * + * @param x the date value + * @return the day (1..31) + */ + public static int dayFromDateValue(long x) { + return (int) (x & 31); + } + + /** + * Get the date value from a given date. + * + * @param year the year + * @param month the month (1..12) + * @param day the day (1..31) + * @return the date value + */ + public static long dateValue(long year, int month, int day) { + return (year << SHIFT_YEAR) | (month << SHIFT_MONTH) | day; + } + + /** + * Get the date value from a given denormalized date with possible out of range + * values of month and/or day. 
Used after adding or subtracting months or years to obtain a valid date.
+     *
+     * @param year
+     *            the year
+     * @param month
+     *            the month; if out of range, month and year are normalized
+     * @param day
+     *            the day of the month; if out of range, it is saturated
+     * @return the date value
+     */
+    public static long dateValueFromDenormalizedDate(long year, long month, int day) {
+        long mm1 = month - 1;
+        long yd = mm1 / 12;
+        if (mm1 < 0 && yd * 12 != mm1) {
+            yd--;
+        }
+        int y = (int) (year + yd);
+        int m = (int) (month - yd * 12);
+        if (day < 1) {
+            day = 1;
+        } else {
+            int max = getDaysInMonth(y, m);
+            if (day > max) {
+                day = max;
+            }
+        }
+        return dateValue(y, m, day);
+    }
+
+    /**
+     * Convert a UTC datetime in millis to an encoded date in the default
+     * timezone.
+     *
+     * @param ms the milliseconds
+     * @return the date value
+     */
+    public static long dateValueFromDate(long ms) {
+        ms += getTimeZone().getOffset(ms);
+        long absoluteDay = ms / MILLIS_PER_DAY;
+        // Round toward negative infinity
+        if (ms < 0 && (absoluteDay * MILLIS_PER_DAY != ms)) {
+            absoluteDay--;
+        }
+        return dateValueFromAbsoluteDay(absoluteDay);
+    }
+
+    /**
+     * Calculate the encoded date value from a given calendar.
+     *
+     * @param cal the calendar
+     * @return the date value
+     */
+    private static long dateValueFromCalendar(Calendar cal) {
+        int year = cal.get(Calendar.YEAR);
+        if (cal.get(Calendar.ERA) == GregorianCalendar.BC) {
+            year = 1 - year;
+        }
+        int month = cal.get(Calendar.MONTH) + 1;
+        int day = cal.get(Calendar.DAY_OF_MONTH);
+        return ((long) year << SHIFT_YEAR) | (month << SHIFT_MONTH) | day;
+    }
+
+    /**
+     * Convert a time in milliseconds in UTC to the nanoseconds since midnight
+     * (in the default timezone).
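The encoded date value packs year, month and day into one long. The accessors mask with 15 (4 bits for the month) and 31 (5 bits for the day), which implies SHIFT_MONTH = 5 and SHIFT_YEAR = 9; those constants are declared outside this excerpt, so they are an assumption here, as is the demo class name.

```java
// Sketch of the packed date representation: year << 9 | month << 5 | day.
public class DateValueDemo {
    static final int SHIFT_YEAR = 9;  // assumed, matches mask 15 for month
    static final int SHIFT_MONTH = 5; // assumed, matches mask 31 for day

    public static long dateValue(long year, int month, int day) {
        return (year << SHIFT_YEAR) | (month << SHIFT_MONTH) | day;
    }

    public static int yearFromDateValue(long x)  { return (int) (x >>> SHIFT_YEAR); }
    public static int monthFromDateValue(long x) { return (int) (x >>> SHIFT_MONTH) & 15; }
    public static int dayFromDateValue(long x)   { return (int) (x & 31); }

    public static void main(String[] args) {
        long dv = dateValue(2020, 2, 29);
        System.out.println(yearFromDateValue(dv));  // 2020
        System.out.println(monthFromDateValue(dv)); // 2
        System.out.println(dayFromDateValue(dv));   // 29
    }
}
```

A nice property of this encoding is that the natural ordering of the longs matches chronological ordering of the dates.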
+ * + * @param ms the milliseconds + * @return the nanoseconds + */ + public static long nanosFromDate(long ms) { + ms += getTimeZone().getOffset(ms); + long absoluteDay = ms / MILLIS_PER_DAY; + // Round toward negative infinity + if (ms < 0 && (absoluteDay * MILLIS_PER_DAY != ms)) { + absoluteDay--; + } + return (ms - absoluteDay * MILLIS_PER_DAY) * 1_000_000; + } + + /** + * Convert a java.util.Calendar to nanoseconds since midnight. + * + * @param cal the calendar + * @return the nanoseconds + */ + private static long nanosFromCalendar(Calendar cal) { + int h = cal.get(Calendar.HOUR_OF_DAY); + int m = cal.get(Calendar.MINUTE); + int s = cal.get(Calendar.SECOND); + int millis = cal.get(Calendar.MILLISECOND); + return ((((((h * 60L) + m) * 60) + s) * 1000) + millis) * 1000000; + } + + /** + * Calculate the normalized timestamp. + * + * @param absoluteDay the absolute day + * @param nanos the nanoseconds (may be negative or larger than one day) + * @return the timestamp + */ + public static ValueTimestamp normalizeTimestamp(long absoluteDay, + long nanos) { + if (nanos > NANOS_PER_DAY || nanos < 0) { + long d; + if (nanos > NANOS_PER_DAY) { + d = nanos / NANOS_PER_DAY; + } else { + d = (nanos - NANOS_PER_DAY + 1) / NANOS_PER_DAY; + } + nanos -= d * NANOS_PER_DAY; + absoluteDay += d; + } + return ValueTimestamp.fromDateValueAndNanos( + dateValueFromAbsoluteDay(absoluteDay), nanos); + } + + /** + * Converts local date value and nanoseconds to timestamp with time zone. 
+ * + * @param dateValue + * date value + * @param timeNanos + * nanoseconds since midnight + * @return timestamp with time zone + */ + public static ValueTimestampTimeZone timestampTimeZoneFromLocalDateValueAndNanos(long dateValue, long timeNanos) { + int timeZoneOffset = getTimeZoneOffsetMillis(null, dateValue, timeNanos); + int offsetMins = timeZoneOffset / 60_000; + int correction = timeZoneOffset % 60_000; + if (correction != 0) { + timeNanos -= correction; + if (timeNanos < 0) { + timeNanos += NANOS_PER_DAY; + dateValue = decrementDateValue(dateValue); + } else if (timeNanos >= NANOS_PER_DAY) { + timeNanos -= NANOS_PER_DAY; + dateValue = incrementDateValue(dateValue); + } + } + return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, timeNanos, (short) offsetMins); + } + + /** + * Calculate the absolute day for a January, 1 of the specified year. + * + * @param year + * the year + * @return the absolute day + */ + public static long absoluteDayFromYear(long year) { + year--; + long a = ((year * 1461L) >> 2) - 719_177; + if (year < 1582) { + // Julian calendar + a += 13; + } else if (year < 1900 || year > 2099) { + // Gregorian calendar (slow mode) + a += (year / 400) - (year / 100) + 15; + } + return a; + } + + /** + * Calculate the absolute day from an encoded date value. 
+ * + * @param dateValue the date value + * @return the absolute day + */ + public static long absoluteDayFromDateValue(long dateValue) { + long y = yearFromDateValue(dateValue); + int m = monthFromDateValue(dateValue); + int d = dayFromDateValue(dateValue); + if (m <= 2) { + y--; + m += 12; + } + long a = ((y * 1461L) >> 2) + DAYS_OFFSET[m - 3] + d - 719_484; + if (y <= 1582 && ((y < 1582) || (m * 100 + d < 10_15))) { + // Julian calendar (cutover at 1582-10-04 / 1582-10-15) + a += 13; + } else if (y < 1900 || y > 2099) { + // Gregorian calendar (slow mode) + a += (y / 400) - (y / 100) + 15; + } + return a; + } + + /** + * Calculate the absolute day from an encoded date value in proleptic Gregorian + * calendar. + * + * @param dateValue the date value + * @return the absolute day in proleptic Gregorian calendar + */ + public static long prolepticGregorianAbsoluteDayFromDateValue(long dateValue) { + long y = yearFromDateValue(dateValue); + int m = monthFromDateValue(dateValue); + int d = dayFromDateValue(dateValue); + if (m <= 2) { + y--; + m += 12; + } + long a = ((y * 1461L) >> 2) + DAYS_OFFSET[m - 3] + d - 719_484; + if (y < 1900 || y > 2099) { + // Slow mode + a += (y / 400) - (y / 100) + 15; + } + return a; + } + + /** + * Calculate the encoded date value from an absolute day. 
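The +13 Julian correction in absoluteDayFromDateValue encodes the same calendar cutover that isValidDate special-cases: 1582-10-05 through 1582-10-14 do not exist. java.util.GregorianCalendar uses the identical default cutover, which makes for an easy sanity check (demo class name is made up):

```java
// The day after 1582-10-04 (Julian) is 1582-10-15 (Gregorian) under the
// default cutover used both by this class and by GregorianCalendar.
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class CutoverDemo {
    public static void main(String[] args) {
        GregorianCalendar c = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
        c.clear();
        c.set(1582, Calendar.OCTOBER, 4);
        c.add(Calendar.DAY_OF_MONTH, 1);
        System.out.println(c.get(Calendar.DAY_OF_MONTH)); // 15
    }
}
```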
+ * + * @param absoluteDay the absolute day + * @return the date value + */ + public static long dateValueFromAbsoluteDay(long absoluteDay) { + long d = absoluteDay + 719_468; + long y100, offset; + if (d > 578_040) { + // Gregorian calendar + long y400 = d / 146_097; + d -= y400 * 146_097; + y100 = d / 36_524; + d -= y100 * 36_524; + offset = y400 * 400 + y100 * 100; + } else { + // Julian calendar + y100 = 0; + d += 292_200_000_002L; + offset = -800_000_000; + } + long y4 = d / 1461; + d -= y4 * 1461; + long y = d / 365; + d -= y * 365; + if (d == 0 && (y == 4 || y100 == 4)) { + y--; + d += 365; + } + y += offset + y4 * 4; + // month of a day + int m = ((int) d * 2 + 1) * 5 / 306; + d -= DAYS_OFFSET[m] - 1; + if (m >= 10) { + y++; + m -= 12; + } + return dateValue(y, m + 3, (int) d); + } + + /** + * Return the next date value. + * + * @param dateValue + * the date value + * @return the next date value + */ + public static long incrementDateValue(long dateValue) { + int year = yearFromDateValue(dateValue); + if (year == 1582) { + // Use slow way instead of rarely needed large custom code. + return dateValueFromAbsoluteDay(absoluteDayFromDateValue(dateValue) + 1); + } + int day = dayFromDateValue(dateValue); + if (day < 28) { + return dateValue + 1; + } + int month = monthFromDateValue(dateValue); + if (day < getDaysInMonth(year, month)) { + return dateValue + 1; + } + if (month < 12) { + month++; + } else { + month = 1; + year++; + } + return dateValue(year, month, 1); + } + + /** + * Return the previous date value. + * + * @param dateValue + * the date value + * @return the previous date value + */ + public static long decrementDateValue(long dateValue) { + int year = yearFromDateValue(dateValue); + if (year == 1582) { + // Use slow way instead of rarely needed large custom code. 
+ return dateValueFromAbsoluteDay(absoluteDayFromDateValue(dateValue) - 1); + } + if (dayFromDateValue(dateValue) > 1) { + return dateValue - 1; + } + int month = monthFromDateValue(dateValue); + if (month > 1) { + month--; + } else { + month = 12; + year--; + } + return dateValue(year, month, getDaysInMonth(year, month)); + } + + /** + * Append a date to the string builder. + * + * @param buff the target string builder + * @param dateValue the date value + */ + public static void appendDate(StringBuilder buff, long dateValue) { + int y = yearFromDateValue(dateValue); + int m = monthFromDateValue(dateValue); + int d = dayFromDateValue(dateValue); + if (y > 0 && y < 10_000) { + StringUtils.appendZeroPadded(buff, 4, y); + } else { + buff.append(y); + } + buff.append('-'); + StringUtils.appendZeroPadded(buff, 2, m); + buff.append('-'); + StringUtils.appendZeroPadded(buff, 2, d); + } + + /** + * Append a time to the string builder. + * + * @param buff the target string builder + * @param nanos the time in nanoseconds + */ + public static void appendTime(StringBuilder buff, long nanos) { + if (nanos < 0) { + buff.append('-'); + nanos = -nanos; + } + /* + * nanos now either in range from 0 to Long.MAX_VALUE or equals to + * Long.MIN_VALUE. We need to divide nanos by 1000000 with unsigned division to + * get correct result. The simplest way to do this with such constraints is to + * divide -nanos by -1000000. 
+ */ + long ms = -nanos / -1_000_000; + nanos -= ms * 1_000_000; + long s = ms / 1_000; + ms -= s * 1_000; + long m = s / 60; + s -= m * 60; + long h = m / 60; + m -= h * 60; + StringUtils.appendZeroPadded(buff, 2, h); + buff.append(':'); + StringUtils.appendZeroPadded(buff, 2, m); + buff.append(':'); + StringUtils.appendZeroPadded(buff, 2, s); + if (ms > 0 || nanos > 0) { + buff.append('.'); + int start = buff.length(); + StringUtils.appendZeroPadded(buff, 3, ms); + if (nanos > 0) { + StringUtils.appendZeroPadded(buff, 6, nanos); + } + for (int i = buff.length() - 1; i > start; i--) { + if (buff.charAt(i) != '0') { + break; + } + buff.deleteCharAt(i); + } + } + } + + /** + * Append a time zone to the string builder. + * + * @param buff the target string builder + * @param tz the time zone in minutes + */ + public static void appendTimeZone(StringBuilder buff, short tz) { + if (tz < 0) { + buff.append('-'); + tz = (short) -tz; + } else { + buff.append('+'); + } + int hours = tz / 60; + tz -= hours * 60; + int mins = tz; + StringUtils.appendZeroPadded(buff, 2, hours); + if (mins != 0) { + buff.append(':'); + StringUtils.appendZeroPadded(buff, 2, mins); + } + } + + /** + * Formats timestamp with time zone as string. + * + * @param dateValue the year-month-day bit field + * @param timeNanos nanoseconds since midnight + * @param timeZoneOffsetMins the time zone offset in minutes + * @return formatted string + */ + public static String timestampTimeZoneToString(long dateValue, long timeNanos, short timeZoneOffsetMins) { + StringBuilder buff = new StringBuilder(ValueTimestampTimeZone.MAXIMUM_PRECISION); + appendDate(buff, dateValue); + buff.append(' '); + appendTime(buff, timeNanos); + appendTimeZone(buff, timeZoneOffsetMins); + return buff.toString(); + } + + /** + * Generates time zone name for the specified offset in minutes. 
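The comment in appendTime relies on a small overflow trick: when nanos may be Long.MIN_VALUE, whose negation overflows back to itself, `-nanos / -1_000_000` behaves like an unsigned division by 1,000,000. A quick standalone check (class name is made up):

```java
// Demonstrates why appendTime divides -nanos by -1_000_000 instead of
// negating first: negating Long.MIN_VALUE overflows silently.
public class UnsignedDivDemo {
    public static void main(String[] args) {
        long nanos = Long.MIN_VALUE; // -nanos overflows back to Long.MIN_VALUE
        long ms = -nanos / -1_000_000;
        // Matches the JDK's unsigned division for the same inputs.
        System.out.println(ms == Long.divideUnsigned(nanos, 1_000_000)); // true
        // For ordinary magnitudes the trick reduces to a plain division.
        System.out.println(-123_456_789L / -1_000_000); // 123
    }
}
```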
+     *
+     * @param offsetMins
+     *            offset in minutes
+     * @return time zone name
+     */
+    public static String timeZoneNameFromOffsetMins(int offsetMins) {
+        if (offsetMins == 0) {
+            return "UTC";
+        }
+        StringBuilder b = new StringBuilder(9);
+        b.append("GMT");
+        if (offsetMins < 0) {
+            b.append('-');
+            offsetMins = -offsetMins;
+        } else {
+            b.append('+');
+        }
+        StringUtils.appendZeroPadded(b, 2, offsetMins / 60);
+        b.append(':');
+        StringUtils.appendZeroPadded(b, 2, offsetMins % 60);
+        return b.toString();
+    }
+
+    /**
+     * Converts scale of nanoseconds.
+     *
+     * @param nanosOfDay nanoseconds of day
+     * @param scale fractional seconds precision
+     * @return scaled value
+     */
+    public static long convertScale(long nanosOfDay, int scale) {
+        if (scale >= 9) {
+            return nanosOfDay;
+        }
+        int m = CONVERT_SCALE_TABLE[scale];
+        long mod = nanosOfDay % m;
+        if (mod >= m >>> 1) {
+            nanosOfDay += m;
+        }
+        return nanosOfDay - mod;
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/util/DebuggingThreadLocal.java b/modules/h2/src/main/java/org/h2/util/DebuggingThreadLocal.java
new file mode 100644
index 0000000000000..b055c42742939
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/util/DebuggingThreadLocal.java
@@ -0,0 +1,45 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.util;
+
+import java.util.HashMap;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Similar to ThreadLocal, except that it allows its data to be read from other
+ * threads - useful for debugging info.
+ *
+ * @param <T> the type
+ */
+public class DebuggingThreadLocal<T> {
+
+    private final ConcurrentHashMap<Long, T> map = new ConcurrentHashMap<>();
+
+    public void set(T value) {
+        map.put(Thread.currentThread().getId(), value);
+    }
+
+    /**
+     * Remove the value for the current thread.
+     */
+    public void remove() {
+        map.remove(Thread.currentThread().getId());
+    }
+
+    public T get() {
+        return map.get(Thread.currentThread().getId());
+    }
+
+    /**
+     * Get a snapshot of the data of all threads.
+     *
+     * @return a HashMap containing a mapping from thread-id to value
+     */
+    public HashMap<Long, T> getSnapshotOfAllThreads() {
+        return new HashMap<>(map);
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/util/DoneFuture.java b/modules/h2/src/main/java/org/h2/util/DoneFuture.java
new file mode 100644
index 0000000000000..9ccd020961925
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/util/DoneFuture.java
@@ -0,0 +1,56 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.util;
+
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Future which is already done.
+ *
+ * @param <T> the result value type
+ * @author Sergi Vladykin
+ */
+public class DoneFuture<T> implements Future<T> {
+    final T x;
+
+    public DoneFuture(T x) {
+        this.x = x;
+    }
+
+    @Override
+    public T get() throws InterruptedException, ExecutionException {
+        return x;
+    }
+
+    @Override
+    public T get(long timeout, TimeUnit unit) throws InterruptedException,
+            ExecutionException, TimeoutException {
+        return x;
+    }
+
+    @Override
+    public boolean isDone() {
+        return true;
+    }
+
+    @Override
+    public boolean cancel(boolean mayInterruptIfRunning) {
+        return false;
+    }
+
+    @Override
+    public boolean isCancelled() {
+        return false;
+    }
+
+    @Override
+    public String toString() {
+        return "DoneFuture->" + x;
+    }
+}
diff --git a/modules/h2/src/main/java/org/h2/util/HashBase.java b/modules/h2/src/main/java/org/h2/util/HashBase.java
new file mode 100644
index 0000000000000..f7aaa200ee853
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/util/HashBase.java
@@ -0,0 +1,128 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.util;
+
+
+/**
+ * The base for other hash classes.
+ */
+public abstract class HashBase {
+
+    /**
+     * The maximum load, in percent.
+     * Declared as long so that multiplications are done in long arithmetic
+     * and cannot overflow.
+     */
+    private static final long MAX_LOAD = 90;
+
+    /**
+     * The bit mask to get the index from the hash code.
+     */
+    protected int mask;
+
+    /**
+     * The number of slots in the table.
+     */
+    protected int len;
+
+    /**
+     * The number of occupied slots, excluding the zero key (if any).
+     */
+    protected int size;
+
+    /**
+     * The number of deleted slots.
+     */
+    protected int deletedCount;
+
+    /**
+     * The level. The number of slots is 2 ^ level.
+     */
+    protected int level;
+
+    /**
+     * Whether the zero key is used.
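DoneFuture above implements a Future that is completed from the moment it is constructed: get() returns immediately, and cancel() is a no-op. A minimal standalone re-implementation showing that contract (the class and method names here are made up for illustration):

```java
// Sketch of a pre-completed Future: same behavior as DoneFuture above.
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class DoneFutureDemo {
    public static final class Done<T> implements Future<T> {
        final T x;
        public Done(T x) { this.x = x; }
        @Override public T get() { return x; }
        @Override public T get(long timeout, TimeUnit unit) { return x; }
        @Override public boolean isDone() { return true; }
        @Override public boolean cancel(boolean mayInterruptIfRunning) { return false; }
        @Override public boolean isCancelled() { return false; }
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Future<Integer> f = new Done<>(42);
        System.out.println(f.get());        // 42, no blocking
        System.out.println(f.isDone());     // true
        System.out.println(f.cancel(true)); // false: already done
    }
}
```

Since Java 8, `CompletableFuture.completedFuture(value)` in the JDK provides the same already-completed behavior.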
+ */ + protected boolean zeroKey; + + private int maxSize, minSize, maxDeleted; + + public HashBase() { + reset(2); + } + + /** + * Increase the size of the underlying table and re-distribute the elements. + * + * @param newLevel the new level + */ + protected abstract void rehash(int newLevel); + + /** + * Get the size of the map. + * + * @return the size + */ + public int size() { + return size + (zeroKey ? 1 : 0); + } + + /** + * Check the size before adding an entry. This method resizes the map if + * required. + */ + void checkSizePut() { + if (deletedCount > size) { + rehash(level); + } + if (size + deletedCount >= maxSize) { + rehash(level + 1); + } + } + + /** + * Check the size before removing an entry. This method resizes the map if + * required. + */ + protected void checkSizeRemove() { + if (size < minSize && level > 0) { + rehash(level - 1); + } else if (deletedCount > maxDeleted) { + rehash(level); + } + } + + /** + * Clear the map and reset the level to the specified value. + * + * @param newLevel the new level + */ + protected void reset(int newLevel) { + // can't exceed 30 or we will generate a negative value + // for the "len" field + if (newLevel > 30) { + throw new IllegalStateException("exceeded max size of hash table"); + } + size = 0; + level = newLevel; + len = 2 << level; + mask = len - 1; + minSize = (int) ((1 << level) * MAX_LOAD / 100); + maxSize = (int) (len * MAX_LOAD / 100); + deletedCount = 0; + maxDeleted = 20 + len / 2; + } + + /** + * Calculate the index for this hash code. + * + * @param hash the hash code + * @return the index + */ + protected int getIndex(int hash) { + return hash & mask; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/IOUtils.java b/modules/h2/src/main/java/org/h2/util/IOUtils.java new file mode 100644 index 0000000000000..0b10fff89fdca --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/IOUtils.java @@ -0,0 +1,479 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.Closeable; +import java.io.EOFException; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.OutputStreamWriter; +import java.io.Reader; +import java.io.StringWriter; +import java.io.Writer; +import java.nio.charset.StandardCharsets; + +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; + +/** + * This utility class contains input/output functions. + */ +public class IOUtils { + + private IOUtils() { + // utility class + } + + /** + * Close a Closeable without throwing an exception. + * + * @param out the Closeable or null + */ + public static void closeSilently(Closeable out) { + if (out != null) { + try { + trace("closeSilently", null, out); + out.close(); + } catch (Exception e) { + // ignore + } + } + } + + /** + * Close an AutoCloseable without throwing an exception. + * + * @param out the AutoCloseable or null + */ + public static void closeSilently(AutoCloseable out) { + if (out != null) { + try { + trace("closeSilently", null, out); + out.close(); + } catch (Exception e) { + // ignore + } + } + } + + /** + * Skip a number of bytes in an input stream. 
+ * + * @param in the input stream + * @param skip the number of bytes to skip + * @throws EOFException if the end of file has been reached before all bytes + * could be skipped + * @throws IOException if an IO exception occurred while skipping + */ + public static void skipFully(InputStream in, long skip) throws IOException { + try { + while (skip > 0) { + long skipped = in.skip(skip); + if (skipped <= 0) { + throw new EOFException(); + } + skip -= skipped; + } + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + + /** + * Skip a number of characters in a reader. + * + * @param reader the reader + * @param skip the number of characters to skip + * @throws EOFException if the end of file has been reached before all + * characters could be skipped + * @throws IOException if an IO exception occurred while skipping + */ + public static void skipFully(Reader reader, long skip) throws IOException { + try { + while (skip > 0) { + long skipped = reader.skip(skip); + if (skipped <= 0) { + throw new EOFException(); + } + skip -= skipped; + } + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + + /** + * Copy all data from the input stream to the output stream and close both + * streams. Exceptions while closing are ignored. + * + * @param in the input stream + * @param out the output stream + * @return the number of bytes copied + */ + public static long copyAndClose(InputStream in, OutputStream out) + throws IOException { + try { + long len = copyAndCloseInput(in, out); + out.close(); + return len; + } catch (Exception e) { + throw DbException.convertToIOException(e); + } finally { + closeSilently(out); + } + } + + /** + * Copy all data from the input stream to the output stream and close the + * input stream. Exceptions while closing are ignored. 
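skipFully loops because InputStream.skip is allowed to skip fewer bytes than requested (or none at all); a non-positive return is treated as end of stream. A self-contained sketch of the same loop (class name is made up):

```java
// Loop until the requested number of bytes is skipped, as in IOUtils.skipFully.
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class SkipFullyDemo {
    public static void skipFully(InputStream in, long skip) throws IOException {
        while (skip > 0) {
            long skipped = in.skip(skip);
            if (skipped <= 0) {
                // skip() made no progress: treat as end of file.
                throw new EOFException();
            }
            skip -= skipped;
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3, 4, 5});
        skipFully(in, 3);
        System.out.println(in.read()); // 4: the first three bytes were skipped
    }
}
```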
+ * + * @param in the input stream + * @param out the output stream (null if writing is not required) + * @return the number of bytes copied + */ + public static long copyAndCloseInput(InputStream in, OutputStream out) + throws IOException { + try { + return copy(in, out); + } catch (Exception e) { + throw DbException.convertToIOException(e); + } finally { + closeSilently(in); + } + } + + /** + * Copy all data from the input stream to the output stream. Both streams + * are kept open. + * + * @param in the input stream + * @param out the output stream (null if writing is not required) + * @return the number of bytes copied + */ + public static long copy(InputStream in, OutputStream out) + throws IOException { + return copy(in, out, Long.MAX_VALUE); + } + + /** + * Copy all data from the input stream to the output stream. Both streams + * are kept open. + * + * @param in the input stream + * @param out the output stream (null if writing is not required) + * @param length the maximum number of bytes to copy + * @return the number of bytes copied + */ + public static long copy(InputStream in, OutputStream out, long length) + throws IOException { + try { + long copied = 0; + int len = (int) Math.min(length, Constants.IO_BUFFER_SIZE); + byte[] buffer = new byte[len]; + while (length > 0) { + len = in.read(buffer, 0, len); + if (len < 0) { + break; + } + if (out != null) { + out.write(buffer, 0, len); + } + copied += len; + length -= len; + len = (int) Math.min(length, Constants.IO_BUFFER_SIZE); + } + return copied; + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + + /** + * Copy all data from the reader to the writer and close the reader. + * Exceptions while closing are ignored. 
+ * + * @param in the reader + * @param out the writer (null if writing is not required) + * @param length the maximum number of bytes to copy + * @return the number of characters copied + */ + public static long copyAndCloseInput(Reader in, Writer out, long length) + throws IOException { + try { + long copied = 0; + int len = (int) Math.min(length, Constants.IO_BUFFER_SIZE); + char[] buffer = new char[len]; + while (length > 0) { + len = in.read(buffer, 0, len); + if (len < 0) { + break; + } + if (out != null) { + out.write(buffer, 0, len); + } + length -= len; + len = (int) Math.min(length, Constants.IO_BUFFER_SIZE); + copied += len; + } + return copied; + } catch (Exception e) { + throw DbException.convertToIOException(e); + } finally { + in.close(); + } + } + + /** + * Close an input stream without throwing an exception. + * + * @param in the input stream or null + */ + public static void closeSilently(InputStream in) { + if (in != null) { + try { + trace("closeSilently", null, in); + in.close(); + } catch (Exception e) { + // ignore + } + } + } + + /** + * Close a reader without throwing an exception. + * + * @param reader the reader or null + */ + public static void closeSilently(Reader reader) { + if (reader != null) { + try { + reader.close(); + } catch (Exception e) { + // ignore + } + } + } + + /** + * Close a writer without throwing an exception. + * + * @param writer the writer or null + */ + public static void closeSilently(Writer writer) { + if (writer != null) { + try { + writer.close(); + } catch (Exception e) { + // ignore + } + } + } + + /** + * Read a number of bytes from an input stream and close the stream. 
+ * + * @param in the input stream + * @param length the maximum number of bytes to read, or -1 to read until + * the end of file + * @return the bytes read + */ + public static byte[] readBytesAndClose(InputStream in, int length) + throws IOException { + try { + if (length <= 0) { + length = Integer.MAX_VALUE; + } + int block = Math.min(Constants.IO_BUFFER_SIZE, length); + ByteArrayOutputStream out = new ByteArrayOutputStream(block); + copy(in, out, length); + return out.toByteArray(); + } catch (Exception e) { + throw DbException.convertToIOException(e); + } finally { + in.close(); + } + } + + /** + * Read a number of characters from a reader and close it. + * + * @param in the reader + * @param length the maximum number of characters to read, or -1 to read + * until the end of file + * @return the string read + */ + public static String readStringAndClose(Reader in, int length) + throws IOException { + try { + if (length <= 0) { + length = Integer.MAX_VALUE; + } + int block = Math.min(Constants.IO_BUFFER_SIZE, length); + StringWriter out = new StringWriter(block); + copyAndCloseInput(in, out, length); + return out.toString(); + } finally { + in.close(); + } + } + + /** + * Try to read the given number of bytes to the buffer. This method reads + * until the maximum number of bytes have been read or until the end of + * file. + * + * @param in the input stream + * @param buffer the output buffer + * @param max the number of bytes to read at most + * @return the number of bytes read, 0 meaning EOF + */ + public static int readFully(InputStream in, byte[] buffer, int max) + throws IOException { + try { + int result = 0, len = Math.min(max, buffer.length); + while (len > 0) { + int l = in.read(buffer, result, len); + if (l < 0) { + break; + } + result += l; + len -= l; + } + return result; + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + + /** + * Try to read the given number of characters to the buffer. 
This method + * reads until the maximum number of characters have been read or until the + * end of file. + * + * @param in the reader + * @param buffer the output buffer + * @param max the number of characters to read at most + * @return the number of characters read, 0 meaning EOF + */ + public static int readFully(Reader in, char[] buffer, int max) + throws IOException { + try { + int result = 0, len = Math.min(max, buffer.length); + while (len > 0) { + int l = in.read(buffer, result, len); + if (l < 0) { + break; + } + result += l; + len -= l; + } + return result; + } catch (Exception e) { + throw DbException.convertToIOException(e); + } + } + + /** + * Create a buffered reader to read from an input stream using the UTF-8 + * format. If the input stream is null, this method returns null. The + * InputStreamReader that is used here is not exact, that means it may read + * some additional bytes when buffering. + * + * @param in the input stream or null + * @return the reader + */ + public static Reader getBufferedReader(InputStream in) { + return in == null ? null : new BufferedReader( + new InputStreamReader(in, StandardCharsets.UTF_8)); + } + + /** + * Create a reader to read from an input stream using the UTF-8 format. If + * the input stream is null, this method returns null. The InputStreamReader + * that is used here is not exact, that means it may read some additional + * bytes when buffering. + * + * @param in the input stream or null + * @return the reader + */ + public static Reader getReader(InputStream in) { + // InputStreamReader may read some more bytes + return in == null ? null : new BufferedReader( + new InputStreamReader(in, StandardCharsets.UTF_8)); + } + + /** + * Create a buffered writer to write to an output stream using the UTF-8 + * format. If the output stream is null, this method returns null. 
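The `readFully` helpers above loop for the same reason as `skipFully`: a single `read` call may return fewer bytes than requested even when more data is available. A standalone sketch of the byte variant (names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {

    /** Read up to 'max' bytes into 'buffer', looping over partial reads; returns 0 at EOF. */
    static int readFully(InputStream in, byte[] buffer, int max) throws IOException {
        int result = 0, len = Math.min(max, buffer.length);
        while (len > 0) {
            int l = in.read(buffer, result, len);
            if (l < 0) {
                break; // end of stream: return whatever was read so far
            }
            result += l;
            len -= l;
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[10];
        int n = readFully(new ByteArrayInputStream(new byte[] {1, 2, 3}), buf, 10);
        System.out.println(n); // 3: the stream ended before the buffer filled
    }
}
```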
+ * + * @param out the output stream or null + * @return the writer + */ + public static Writer getBufferedWriter(OutputStream out) { + return out == null ? null : new BufferedWriter( + new OutputStreamWriter(out, StandardCharsets.UTF_8)); + } + + /** + * Wrap an input stream in a reader. The bytes are converted to characters + * using the US-ASCII character set. + * + * @param in the input stream + * @return the reader + */ + public static Reader getAsciiReader(InputStream in) { + return in == null ? null : new InputStreamReader(in, StandardCharsets.US_ASCII); + } + + /** + * Trace input or output operations if enabled. + * + * @param method the method from where this method was called + * @param fileName the file name + * @param o the object to append to the message + */ + public static void trace(String method, String fileName, Object o) { + if (SysProperties.TRACE_IO) { + System.out.println("IOUtils." + method + " " + fileName + " " + o); + } + } + + /** + * Create an input stream to read from a string. The string is converted to + * a byte array using UTF-8 encoding. + * If the string is null, this method returns null. + * + * @param s the string + * @return the input stream + */ + public static InputStream getInputStreamFromString(String s) { + if (s == null) { + return null; + } + return new ByteArrayInputStream(s.getBytes(StandardCharsets.UTF_8)); + } + + /** + * Copy a file from one directory to another, or to another file. 
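The reader/writer factories above are null-safe and pin the charset explicitly (UTF-8 or US-ASCII) rather than relying on the platform default. A small sketch of the null-propagating UTF-8 wrapper pattern (names are illustrative):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class Utf8ReaderDemo {

    /** Wrap a byte stream in a buffered UTF-8 reader; null in, null out. */
    static BufferedReader reader(InputStream in) {
        return in == null ? null
                : new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        byte[] utf8 = "héllo".getBytes(StandardCharsets.UTF_8); // 6 bytes, 5 chars
        System.out.println(reader(new ByteArrayInputStream(utf8)).readLine()); // héllo
        System.out.println(reader(null)); // null
    }
}
```

Pinning the charset matters because `new InputStreamReader(in)` without an argument would decode with the JVM default encoding, which varies between systems.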
+ * + * @param original the original file name + * @param copy the file name of the copy + */ + public static void copyFiles(String original, String copy) throws IOException { + InputStream in = FileUtils.newInputStream(original); + OutputStream out = FileUtils.newOutputStream(copy, false); + copyAndClose(in, out); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/IntArray.java b/modules/h2/src/main/java/org/h2/util/IntArray.java new file mode 100644 index 0000000000000..ef18169d904d2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/IntArray.java @@ -0,0 +1,177 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.Arrays; + +import org.h2.engine.SysProperties; + +/** + * An array with integer element. + */ +public class IntArray { + + private int[] data; + private int size; + private int hash; + + /** + * Create an int array with the default initial capacity. + */ + public IntArray() { + this(10); + } + + /** + * Create an int array with specified initial capacity. + * + * @param capacity the initial capacity + */ + public IntArray(int capacity) { + data = new int[capacity]; + } + + /** + * Create an int array with the given values and size. + * + * @param data the int array + */ + public IntArray(int[] data) { + this.data = data; + size = data.length; + } + + /** + * Append a value. + * + * @param value the value to append + */ + public void add(int value) { + if (size >= data.length) { + ensureCapacity(size + size); + } + data[size++] = value; + } + + /** + * Get the value at the given index. 
+ * + * @param index the index + * @return the value + */ + public int get(int index) { + if (SysProperties.CHECK) { + if (index >= size) { + throw new ArrayIndexOutOfBoundsException("i=" + index + " size=" + size); + } + } + return data[index]; + } + + /** + * Remove the value at the given index. + * + * @param index the index + */ + public void remove(int index) { + if (SysProperties.CHECK) { + if (index >= size) { + throw new ArrayIndexOutOfBoundsException("i=" + index + " size=" + size); + } + } + System.arraycopy(data, index + 1, data, index, size - index - 1); + size--; + } + + /** + * Ensure the the underlying array is large enough for the given number of + * entries. + * + * @param minCapacity the minimum capacity + */ + public void ensureCapacity(int minCapacity) { + minCapacity = Math.max(4, minCapacity); + if (minCapacity >= data.length) { + data = Arrays.copyOf(data, minCapacity); + } + } + + @Override + public boolean equals(Object obj) { + if (!(obj instanceof IntArray)) { + return false; + } + IntArray other = (IntArray) obj; + if (hashCode() != other.hashCode() || size != other.size) { + return false; + } + for (int i = 0; i < size; i++) { + if (data[i] != other.data[i]) { + return false; + } + } + return true; + } + + @Override + public int hashCode() { + if (hash != 0) { + return hash; + } + int h = size + 1; + for (int i = 0; i < size; i++) { + h = h * 31 + data[i]; + } + hash = h; + return h; + } + + /** + * Get the size of the list. + * + * @return the size + */ + public int size() { + return size; + } + + /** + * Convert this list to an array. The target array must be big enough. 
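`IntArray.add` doubles the backing array via `ensureCapacity(size + size)` when full, giving amortized O(1) appends. A minimal standalone sketch of that grow-by-doubling pattern (this demo class is illustrative, not part of the patch):

```java
import java.util.Arrays;

public class GrowableIntArray {

    private int[] data = new int[4];
    private int size;

    /** Append a value, doubling capacity when full (mirrors ensureCapacity(size + size)). */
    void add(int value) {
        if (size >= data.length) {
            data = Arrays.copyOf(data, Math.max(4, size * 2));
        }
        data[size++] = value;
    }

    int get(int index) {
        if (index >= size) {
            throw new ArrayIndexOutOfBoundsException("i=" + index + " size=" + size);
        }
        return data[index];
    }

    int size() {
        return size;
    }

    public static void main(String[] args) {
        GrowableIntArray a = new GrowableIntArray();
        for (int i = 0; i < 10; i++) {
            a.add(i * i);
        }
        System.out.println(a.size() + " " + a.get(9)); // 10 81
    }
}
```

Note that `IntArray` also caches its hash code in a field; that is safe only because callers are expected not to mutate the array after using it as a map key.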
+ * + * @param array the target array + */ + public void toArray(int[] array) { + System.arraycopy(data, 0, array, 0, size); + } + + @Override + public String toString() { + StatementBuilder buff = new StatementBuilder("{"); + for (int i = 0; i < size; i++) { + buff.appendExceptFirst(", "); + buff.append(data[i]); + } + return buff.append('}').toString(); + } + + /** + * Remove a number of elements. + * + * @param fromIndex the index of the first item to remove + * @param toIndex upper bound (exclusive) + */ + public void removeRange(int fromIndex, int toIndex) { + if (SysProperties.CHECK) { + if (fromIndex > toIndex || toIndex > size) { + throw new ArrayIndexOutOfBoundsException("from=" + fromIndex + + " to=" + toIndex + " size=" + size); + } + } + System.arraycopy(data, toIndex, data, fromIndex, size - toIndex); + size -= toIndex - fromIndex; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/IntIntHashMap.java b/modules/h2/src/main/java/org/h2/util/IntIntHashMap.java new file mode 100644 index 0000000000000..d264c86d1562a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/IntIntHashMap.java @@ -0,0 +1,157 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import org.h2.message.DbException; + +/** + * A hash map with int key and int values. There is a restriction: the + * value -1 (NOT_FOUND) cannot be stored in the map. 0 can be stored. + * An empty record has key=0 and value=0. + * A deleted record has key=0 and value=DELETED + */ +public class IntIntHashMap extends HashBase { + + /** + * The value indicating that the entry has not been found. 
+ */ + public static final int NOT_FOUND = -1; + + private static final int DELETED = 1; + private int[] keys; + private int[] values; + private int zeroValue; + + @Override + protected void reset(int newLevel) { + super.reset(newLevel); + keys = new int[len]; + values = new int[len]; + } + + /** + * Store the given key-value pair. The value is overwritten or added. + * + * @param key the key + * @param value the value (-1 is not supported) + */ + public void put(int key, int value) { + if (key == 0) { + zeroKey = true; + zeroValue = value; + return; + } + checkSizePut(); + internalPut(key, value); + } + + private void internalPut(int key, int value) { + int index = getIndex(key); + int plus = 1; + int deleted = -1; + do { + int k = keys[index]; + if (k == 0) { + if (values[index] != DELETED) { + // found an empty record + if (deleted >= 0) { + index = deleted; + deletedCount--; + } + size++; + keys[index] = key; + values[index] = value; + return; + } + // found a deleted record + if (deleted < 0) { + deleted = index; + } + } else if (k == key) { + // update existing + values[index] = value; + return; + } + index = (index + plus++) & mask; + } while (plus <= len); + // no space + DbException.throwInternalError("hashmap is full"); + } + + /** + * Remove the key-value pair with the given key. 
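On a collision, `internalPut` advances with `index = (index + plus++) & mask`, i.e. step sizes 1, 2, 3, … wrapped by a power-of-two mask (triangular probing, which visits every slot of a power-of-two table). A small sketch that just enumerates the probe sequence (names are illustrative):

```java
import java.util.Arrays;

public class ProbeDemo {

    /**
     * Probe sequence used by the map: the step grows by one on each collision,
     * wrapping with a power-of-two mask.
     */
    static int[] probes(int start, int mask, int count) {
        int[] seq = new int[count];
        int index = start & mask, plus = 1;
        for (int i = 0; i < count; i++) {
            seq[i] = index;
            index = (index + plus++) & mask;
        }
        return seq;
    }

    public static void main(String[] args) {
        // For a table of length 8 (mask 7), starting at slot 3:
        System.out.println(Arrays.toString(probes(3, 7, 5))); // [3, 4, 6, 1, 5]
    }
}
```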
+ * + * @param key the key + */ + public void remove(int key) { + if (key == 0) { + zeroKey = false; + return; + } + checkSizeRemove(); + int index = getIndex(key); + int plus = 1; + do { + int k = keys[index]; + if (k == key) { + // found the record + keys[index] = 0; + values[index] = DELETED; + deletedCount++; + size--; + return; + } else if (k == 0 && values[index] == 0) { + // found an empty record + return; + } + index = (index + plus++) & mask; + } while (plus <= len); + // not found + } + + @Override + protected void rehash(int newLevel) { + int[] oldKeys = keys; + int[] oldValues = values; + reset(newLevel); + for (int i = 0; i < oldKeys.length; i++) { + int k = oldKeys[i]; + if (k != 0) { + // skip the checkSizePut so we don't end up + // accidentally recursing + internalPut(k, oldValues[i]); + } + } + } + + /** + * Get the value for the given key. This method returns NOT_FOUND if the + * entry has not been found. + * + * @param key the key + * @return the value or NOT_FOUND + */ + public int get(int key) { + if (key == 0) { + return zeroKey ? zeroValue : NOT_FOUND; + } + int index = getIndex(key); + int plus = 1; + do { + int k = keys[index]; + if (k == 0 && values[index] == 0) { + // found an empty record + return NOT_FOUND; + } else if (k == key) { + // found it + return values[index]; + } + index = (index + plus++) & mask; + } while (plus <= len); + return NOT_FOUND; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/JdbcUtils.java b/modules/h2/src/main/java/org/h2/util/JdbcUtils.java new file mode 100644 index 0000000000000..a157a1efb9065 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/JdbcUtils.java @@ -0,0 +1,427 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.ObjectInputStream; +import java.io.ObjectOutputStream; +import java.io.ObjectStreamClass; +import java.sql.*; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.Properties; +import javax.naming.Context; +import javax.sql.DataSource; +import org.h2.api.CustomDataTypesHandler; +import org.h2.api.ErrorCode; +import org.h2.api.JavaObjectSerializer; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.DataHandler; +import org.h2.util.Utils.ClassFactory; + +/** + * This is a utility class with JDBC helper functions. + */ +public class JdbcUtils { + + /** + * The serializer to use. + */ + public static JavaObjectSerializer serializer; + + /** + * Custom data types handler to use. + */ + public static CustomDataTypesHandler customDataTypesHandler; + + private static final String[] DRIVERS = { + "h2:", "org.h2.Driver", + "Cache:", "com.intersys.jdbc.CacheDriver", + "daffodilDB://", "in.co.daffodil.db.rmi.RmiDaffodilDBDriver", + "daffodil", "in.co.daffodil.db.jdbc.DaffodilDBDriver", + "db2:", "com.ibm.db2.jcc.DB2Driver", + "derby:net:", "org.apache.derby.jdbc.ClientDriver", + "derby://", "org.apache.derby.jdbc.ClientDriver", + "derby:", "org.apache.derby.jdbc.EmbeddedDriver", + "FrontBase:", "com.frontbase.jdbc.FBJDriver", + "firebirdsql:", "org.firebirdsql.jdbc.FBDriver", + "hsqldb:", "org.hsqldb.jdbcDriver", + "informix-sqli:", "com.informix.jdbc.IfxDriver", + "jtds:", "net.sourceforge.jtds.jdbc.Driver", + "microsoft:", "com.microsoft.jdbc.sqlserver.SQLServerDriver", + "mimer:", "com.mimer.jdbc.Driver", + "mysql:", "com.mysql.jdbc.Driver", + "odbc:", "sun.jdbc.odbc.JdbcOdbcDriver", + "oracle:", "oracle.jdbc.driver.OracleDriver", + "pervasive:", "com.pervasive.jdbc.v2.Driver", + "pointbase:micro:", 
"com.pointbase.me.jdbc.jdbcDriver", + "pointbase:", "com.pointbase.jdbc.jdbcUniversalDriver", + "postgresql:", "org.postgresql.Driver", + "sybase:", "com.sybase.jdbc3.jdbc.SybDriver", + "sqlserver:", "com.microsoft.sqlserver.jdbc.SQLServerDriver", + "teradata:", "com.ncr.teradata.TeraDriver", + }; + + private static boolean allowAllClasses; + private static HashSet allowedClassNames; + + /** + * In order to manage more than one class loader + */ + private static ArrayList userClassFactories = + new ArrayList<>(); + + private static String[] allowedClassNamePrefixes; + + private JdbcUtils() { + // utility class + } + + /** + * Add a class factory in order to manage more than one class loader. + * + * @param classFactory An object that implements ClassFactory + */ + public static void addClassFactory(ClassFactory classFactory) { + getUserClassFactories().add(classFactory); + } + + /** + * Remove a class factory + * + * @param classFactory Already inserted class factory instance + */ + public static void removeClassFactory(ClassFactory classFactory) { + getUserClassFactories().remove(classFactory); + } + + private static ArrayList getUserClassFactories() { + if (userClassFactories == null) { + // initially, it is empty + // but Apache Tomcat may clear the fields as well + userClassFactories = new ArrayList<>(); + } + return userClassFactories; + } + + static { + String clazz = SysProperties.JAVA_OBJECT_SERIALIZER; + if (clazz != null) { + try { + serializer = (JavaObjectSerializer) loadUserClass(clazz).newInstance(); + } catch (Exception e) { + throw DbException.convert(e); + } + } + + String customTypeHandlerClass = SysProperties.CUSTOM_DATA_TYPES_HANDLER; + if (customTypeHandlerClass != null) { + try { + customDataTypesHandler = (CustomDataTypesHandler) + loadUserClass(customTypeHandlerClass).newInstance(); + } catch (Exception e) { + throw DbException.convert(e); + } + } + } + + /** + * Load a class, but check if it is allowed to load this class first. 
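The `DRIVERS` table above pairs URL prefixes with driver class names in alternating slots, and `getDriver` later in this file scans it in steps of two. A standalone sketch of that lookup over an abbreviated table (the demo class name is illustrative):

```java
public class DriverLookupDemo {

    // Abbreviated prefix -> driver-class table, same alternating layout as DRIVERS.
    static final String[] DRIVERS = {
        "h2:", "org.h2.Driver",
        "postgresql:", "org.postgresql.Driver",
        "mysql:", "com.mysql.jdbc.Driver",
    };

    /** Strip "jdbc:" and scan prefix/class pairs; null when the URL is unknown. */
    static String getDriver(String url) {
        if (url.startsWith("jdbc:")) {
            url = url.substring("jdbc:".length());
            for (int i = 0; i < DRIVERS.length; i += 2) {
                if (url.startsWith(DRIVERS[i])) {
                    return DRIVERS[i + 1];
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(getDriver("jdbc:postgresql://host/db")); // org.postgresql.Driver
        System.out.println(getDriver("jdbc:unknown:foo"));          // null
    }
}
```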
To + * perform access rights checking, the system property h2.allowedClasses + * needs to be set to a list of class file name prefixes. + * + * @param className the name of the class + * @return the class object + */ + @SuppressWarnings("unchecked") + public static Class loadUserClass(String className) { + if (allowedClassNames == null) { + // initialize the static fields + String s = SysProperties.ALLOWED_CLASSES; + ArrayList prefixes = New.arrayList(); + boolean allowAll = false; + HashSet classNames = new HashSet<>(); + for (String p : StringUtils.arraySplit(s, ',', true)) { + if (p.equals("*")) { + allowAll = true; + } else if (p.endsWith("*")) { + prefixes.add(p.substring(0, p.length() - 1)); + } else { + classNames.add(p); + } + } + allowedClassNamePrefixes = prefixes.toArray(new String[0]); + allowAllClasses = allowAll; + allowedClassNames = classNames; + } + if (!allowAllClasses && !allowedClassNames.contains(className)) { + boolean allowed = false; + for (String s : allowedClassNamePrefixes) { + if (className.startsWith(s)) { + allowed = true; + } + } + if (!allowed) { + throw DbException.get( + ErrorCode.ACCESS_DENIED_TO_CLASS_1, className); + } + } + // Use provided class factory first. 
+ for (ClassFactory classFactory : getUserClassFactories()) { + if (classFactory.match(className)) { + try { + Class userClass = classFactory.loadClass(className); + if (!(userClass == null)) { + return (Class) userClass; + } + } catch (Exception e) { + throw DbException.get( + ErrorCode.CLASS_NOT_FOUND_1, e, className); + } + } + } + // Use local ClassLoader + try { + return (Class) Class.forName(className); + } catch (ClassNotFoundException e) { + try { + return (Class) Class.forName( + className, true, + Thread.currentThread().getContextClassLoader()); + } catch (Exception e2) { + throw DbException.get( + ErrorCode.CLASS_NOT_FOUND_1, e, className); + } + } catch (NoClassDefFoundError e) { + throw DbException.get( + ErrorCode.CLASS_NOT_FOUND_1, e, className); + } catch (Error e) { + // UnsupportedClassVersionError + throw DbException.get( + ErrorCode.GENERAL_ERROR_1, e, className); + } + } + + /** + * Close a statement without throwing an exception. + * + * @param stat the statement or null + */ + public static void closeSilently(Statement stat) { + if (stat != null) { + try { + stat.close(); + } catch (SQLException e) { + // ignore + } + } + } + + /** + * Close a connection without throwing an exception. + * + * @param conn the connection or null + */ + public static void closeSilently(Connection conn) { + if (conn != null) { + try { + conn.close(); + } catch (SQLException e) { + // ignore + } + } + } + + /** + * Close a result set without throwing an exception. + * + * @param rs the result set or null + */ + public static void closeSilently(ResultSet rs) { + if (rs != null) { + try { + rs.close(); + } catch (SQLException e) { + // ignore + } + } + } + + /** + * Open a new database connection with the given settings. 
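The allowlist check in `loadUserClass` combines an allow-all flag, exact class names, and `prefix*` wildcards parsed from the `h2.allowedClasses` property. A self-contained sketch of that matching logic (names and inputs here are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class ClassAllowlistDemo {

    /** Return true if className is covered by the allow-all flag, an exact name, or a "prefix*" wildcard. */
    static boolean isAllowed(String className, Set<String> exact,
            String[] prefixes, boolean allowAll) {
        if (allowAll || exact.contains(className)) {
            return true;
        }
        for (String p : prefixes) {
            if (className.startsWith(p)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> exact = new HashSet<>();
        exact.add("com.example.Exact");
        String[] prefixes = {"java.lang."}; // parsed from an entry like "java.lang.*"
        System.out.println(isAllowed("java.lang.Math", exact, prefixes, false)); // true
        System.out.println(isAllowed("evil.Clazz", exact, prefixes, false));     // false
    }
}
```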
+ * + * @param driver the driver class name + * @param url the database URL + * @param user the user name + * @param password the password + * @return the database connection + */ + public static Connection getConnection(String driver, String url, + String user, String password) throws SQLException { + Properties prop = new Properties(); + if (user != null) { + prop.setProperty("user", user); + } + if (password != null) { + prop.setProperty("password", password); + } + return getConnection(driver, url, prop); + } + + /** + * Open a new database connection with the given settings. + * + * @param driver the driver class name + * @param url the database URL + * @param prop the properties containing at least the user name and password + * @return the database connection + */ + public static Connection getConnection(String driver, String url, + Properties prop) throws SQLException { + if (StringUtils.isNullOrEmpty(driver)) { + JdbcUtils.load(url); + } else { + Class d = loadUserClass(driver); + if (java.sql.Driver.class.isAssignableFrom(d)) { + try { + Driver driverInstance = (Driver) d.newInstance(); + return driverInstance.connect(url, prop); /*fix issue #695 with drivers with the same + jdbc subprotocol in classpath of jdbc drivers (as example redshift and postgresql drivers)*/ + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } else if (javax.naming.Context.class.isAssignableFrom(d)) { + // JNDI context + try { + Context context = (Context) d.newInstance(); + DataSource ds = (DataSource) context.lookup(url); + String user = prop.getProperty("user"); + String password = prop.getProperty("password"); + if (StringUtils.isNullOrEmpty(user) && StringUtils.isNullOrEmpty(password)) { + return ds.getConnection(); + } + return ds.getConnection(user, password); + } catch (Exception e) { + throw DbException.toSQLException(e); + } + } else { + // don't know, but maybe it loaded a JDBC Driver + return DriverManager.getConnection(url, prop); + } + } + return 
DriverManager.getConnection(url, prop); + } + + /** + * Get the driver class name for the given URL, or null if the URL is + * unknown. + * + * @param url the database URL + * @return the driver class name + */ + public static String getDriver(String url) { + if (url.startsWith("jdbc:")) { + url = url.substring("jdbc:".length()); + for (int i = 0; i < DRIVERS.length; i += 2) { + String prefix = DRIVERS[i]; + if (url.startsWith(prefix)) { + return DRIVERS[i + 1]; + } + } + } + return null; + } + + /** + * Load the driver class for the given URL, if the database URL is known. + * + * @param url the database URL + */ + public static void load(String url) { + String driver = getDriver(url); + if (driver != null) { + loadUserClass(driver); + } + } + + /** + * Serialize the object to a byte array, using the serializer specified by + * the connection info if set, or the default serializer. + * + * @param obj the object to serialize + * @param dataHandler provides the object serializer (may be null) + * @return the byte array + */ + public static byte[] serialize(Object obj, DataHandler dataHandler) { + try { + JavaObjectSerializer handlerSerializer = null; + if (dataHandler != null) { + handlerSerializer = dataHandler.getJavaObjectSerializer(); + } + if (handlerSerializer != null) { + return handlerSerializer.serialize(obj); + } + if (serializer != null) { + return serializer.serialize(obj); + } + ByteArrayOutputStream out = new ByteArrayOutputStream(); + ObjectOutputStream os = new ObjectOutputStream(out); + os.writeObject(obj); + return out.toByteArray(); + } catch (Throwable e) { + throw DbException.get(ErrorCode.SERIALIZATION_FAILED_1, e, e.toString()); + } + } + + /** + * De-serialize the byte array to an object, eventually using the serializer + * specified by the connection info. 
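When no custom `JavaObjectSerializer` is configured, `serialize`/`deserialize` fall back to plain Java object serialization. A minimal standalone round-trip sketch of that default path (demo names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializeDemo {

    /** Default serialization path: write the object graph to a byte array. */
    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream os = new ObjectOutputStream(out)) {
            os.writeObject(obj);
        } // closing flushes the ObjectOutputStream's buffered block data
        return out.toByteArray();
    }

    /** Inverse: rebuild the object from the byte array. */
    static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream is = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return is.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = serialize("hello");
        System.out.println(deserialize(bytes)); // hello
    }
}
```

Closing (or at least flushing) the `ObjectOutputStream` before reading the byte array matters because it buffers block data internally; the custom `resolveClass` override in the patch additionally routes class lookup through the thread context class loader when `h2.useThreadContextClassLoader` is set.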
+ * + * @param data the byte array + * @param dataHandler provides the object serializer (may be null) + * @return the object + * @throws DbException if serialization fails + */ + public static Object deserialize(byte[] data, DataHandler dataHandler) { + try { + JavaObjectSerializer dbJavaObjectSerializer = null; + if (dataHandler != null) { + dbJavaObjectSerializer = dataHandler.getJavaObjectSerializer(); + } + if (dbJavaObjectSerializer != null) { + return dbJavaObjectSerializer.deserialize(data); + } + if (serializer != null) { + return serializer.deserialize(data); + } + ByteArrayInputStream in = new ByteArrayInputStream(data); + ObjectInputStream is; + if (SysProperties.USE_THREAD_CONTEXT_CLASS_LOADER) { + final ClassLoader loader = Thread.currentThread().getContextClassLoader(); + is = new ObjectInputStream(in) { + @Override + protected Class resolveClass(ObjectStreamClass desc) + throws IOException, ClassNotFoundException { + try { + return Class.forName(desc.getName(), true, loader); + } catch (ClassNotFoundException e) { + return super.resolveClass(desc); + } + } + }; + } else { + is = new ObjectInputStream(in); + } + return is.readObject(); + } catch (Throwable e) { + throw DbException.get(ErrorCode.DESERIALIZATION_FAILED_1, e, e.toString()); + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/LazyFuture.java b/modules/h2/src/main/java/org/h2/util/LazyFuture.java new file mode 100644 index 0000000000000..c45748a96d87f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/LazyFuture.java @@ -0,0 +1,107 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.concurrent.CancellationException; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; +import org.h2.message.DbException; + +/** + * Single threaded lazy future. + * + * @author Sergi Vladykin + * + * @param the result type + */ +public abstract class LazyFuture implements Future { + + private static final int S_READY = 0; + private static final int S_DONE = 1; + private static final int S_ERROR = 2; + private static final int S_CANCELED = 3; + + private int state = S_READY; + private T result; + private Exception error; + + /** + * Reset this future to the initial state. + * + * @return {@code false} if it was already in initial state + */ + public boolean reset() { + if (state == S_READY) { + return false; + } + state = S_READY; + result = null; + error = null; + return true; + } + + /** + * Run computation and produce the result. + * + * @return the result of computation + */ + protected abstract T run() throws Exception; + + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + if (state != S_READY) { + return false; + } + state = S_CANCELED; + return true; + } + + @Override + public T get() throws InterruptedException, ExecutionException { + switch (state) { + case S_READY: + try { + result = run(); + state = S_DONE; + } catch (Exception e) { + error = e; + if (e instanceof InterruptedException) { + throw (InterruptedException) e; + } + throw new ExecutionException(e); + } finally { + if (state != S_DONE) { + state = S_ERROR; + } + } + return result; + case S_DONE: + return result; + case S_ERROR: + throw new ExecutionException(error); + case S_CANCELED: + throw new CancellationException(); + default: + throw DbException.throwInternalError("" + state); + } + } + + @Override + public T get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException { + return get(); + } + + 
@Override + public boolean isCancelled() { + return state == S_CANCELED; + } + + @Override + public boolean isDone() { + return state != S_READY; + } +} diff --git a/modules/h2/src/main/java/org/h2/util/LocalDateTimeUtils.java b/modules/h2/src/main/java/org/h2/util/LocalDateTimeUtils.java new file mode 100644 index 0000000000000..0a503fc4f2dd5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/LocalDateTimeUtils.java @@ -0,0 +1,579 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + * Iso8601: Initial Developer: Philippe Marschall (firstName dot lastName + * at gmail dot com) + */ +package org.h2.util; + +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.sql.Timestamp; +import java.util.Arrays; +import java.util.concurrent.TimeUnit; +import org.h2.message.DbException; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + * This utility class contains time conversion functions for Java 8 + * Date and Time API classes. + * + *
+ * <p>This class is implemented using reflection so that it compiles on
+ * Java 7 as well.</p>
+ *
+ * <p>Custom conversion methods between H2 internal values and JSR-310 classes
+ * are used in most cases without intermediate conversions to java.sql classes.
+ * Direct conversion is simpler, faster, and it does not inherit limitations
+ * and issues from java.sql classes and conversion methods provided by JDK.</p>
+ *
+ * <p>The only exception is the conversion between {@link Timestamp} and
+ * Instant.</p>
+ *
+ * <p>Once the driver requires Java 8 and Android API 26 all the reflection
+ * can be removed.</p>
    + */ +public class LocalDateTimeUtils { + + /** + * {@code Class} or {@code null}. + */ + public static final Class LOCAL_DATE; + /** + * {@code Class} or {@code null}. + */ + public static final Class LOCAL_TIME; + /** + * {@code Class} or {@code null}. + */ + public static final Class LOCAL_DATE_TIME; + /** + * {@code Class} or {@code null}. + */ + public static final Class INSTANT; + /** + * {@code Class} or {@code null}. + */ + public static final Class OFFSET_DATE_TIME; + + /** + * {@code Class} or {@code null}. + */ + private static final Class ZONE_OFFSET; + + /** + * {@code java.time.LocalTime#ofNanoOfDay()} or {@code null}. + */ + private static final Method LOCAL_TIME_OF_NANO; + + /** + * {@code java.time.LocalTime#toNanoOfDay()} or {@code null}. + */ + private static final Method LOCAL_TIME_TO_NANO; + + /** + * {@code java.time.LocalDate#of(int, int, int)} or {@code null}. + */ + private static final Method LOCAL_DATE_OF_YEAR_MONTH_DAY; + /** + * {@code java.time.LocalDate#parse(CharSequence)} or {@code null}. + */ + private static final Method LOCAL_DATE_PARSE; + /** + * {@code java.time.LocalDate#getYear()} or {@code null}. + */ + private static final Method LOCAL_DATE_GET_YEAR; + /** + * {@code java.time.LocalDate#getMonthValue()} or {@code null}. + */ + private static final Method LOCAL_DATE_GET_MONTH_VALUE; + /** + * {@code java.time.LocalDate#getDayOfMonth()} or {@code null}. + */ + private static final Method LOCAL_DATE_GET_DAY_OF_MONTH; + /** + * {@code java.time.LocalDate#atStartOfDay()} or {@code null}. + */ + private static final Method LOCAL_DATE_AT_START_OF_DAY; + + /** + * {@code java.time.Instant#getEpochSecond()} or {@code null}. + */ + private static final Method INSTANT_GET_EPOCH_SECOND; + /** + * {@code java.time.Instant#getNano()} or {@code null}. + */ + private static final Method INSTANT_GET_NANO; + /** + * {@code java.sql.Timestamp.toInstant()} or {@code null}. 
+ */ + private static final Method TIMESTAMP_TO_INSTANT; + + /** + * {@code java.time.LocalTime#parse(CharSequence)} or {@code null}. + */ + private static final Method LOCAL_TIME_PARSE; + + /** + * {@code java.time.LocalDateTime#plusNanos(long)} or {@code null}. + */ + private static final Method LOCAL_DATE_TIME_PLUS_NANOS; + /** + * {@code java.time.LocalDateTime#toLocalDate()} or {@code null}. + */ + private static final Method LOCAL_DATE_TIME_TO_LOCAL_DATE; + /** + * {@code java.time.LocalDateTime#toLocalTime()} or {@code null}. + */ + private static final Method LOCAL_DATE_TIME_TO_LOCAL_TIME; + /** + * {@code java.time.LocalDateTime#parse(CharSequence)} or {@code null}. + */ + private static final Method LOCAL_DATE_TIME_PARSE; + + /** + * {@code java.time.ZoneOffset#ofTotalSeconds(int)} or {@code null}. + */ + private static final Method ZONE_OFFSET_OF_TOTAL_SECONDS; + + /** + * {@code java.time.OffsetDateTime#of(LocalDateTime, ZoneOffset)} or + * {@code null}. + */ + private static final Method OFFSET_DATE_TIME_OF_LOCAL_DATE_TIME_ZONE_OFFSET; + /** + * {@code java.time.OffsetDateTime#parse(CharSequence)} or {@code null}. + */ + private static final Method OFFSET_DATE_TIME_PARSE; + /** + * {@code java.time.OffsetDateTime#toLocalDateTime()} or {@code null}. + */ + private static final Method OFFSET_DATE_TIME_TO_LOCAL_DATE_TIME; + /** + * {@code java.time.OffsetDateTime#getOffset()} or {@code null}. + */ + private static final Method OFFSET_DATE_TIME_GET_OFFSET; + + /** + * {@code java.time.ZoneOffset#getTotalSeconds()} or {@code null}. 
+ */ + private static final Method ZONE_OFFSET_GET_TOTAL_SECONDS; + + private static final boolean IS_JAVA8_DATE_API_PRESENT; + + static { + LOCAL_DATE = tryGetClass("java.time.LocalDate"); + LOCAL_TIME = tryGetClass("java.time.LocalTime"); + LOCAL_DATE_TIME = tryGetClass("java.time.LocalDateTime"); + INSTANT = tryGetClass("java.time.Instant"); + OFFSET_DATE_TIME = tryGetClass("java.time.OffsetDateTime"); + ZONE_OFFSET = tryGetClass("java.time.ZoneOffset"); + IS_JAVA8_DATE_API_PRESENT = LOCAL_DATE != null && LOCAL_TIME != null && + LOCAL_DATE_TIME != null && INSTANT != null && + OFFSET_DATE_TIME != null && ZONE_OFFSET != null; + + if (IS_JAVA8_DATE_API_PRESENT) { + LOCAL_TIME_OF_NANO = getMethod(LOCAL_TIME, "ofNanoOfDay", long.class); + + LOCAL_TIME_TO_NANO = getMethod(LOCAL_TIME, "toNanoOfDay"); + + LOCAL_DATE_OF_YEAR_MONTH_DAY = getMethod(LOCAL_DATE, "of", + int.class, int.class, int.class); + LOCAL_DATE_PARSE = getMethod(LOCAL_DATE, "parse", + CharSequence.class); + LOCAL_DATE_GET_YEAR = getMethod(LOCAL_DATE, "getYear"); + LOCAL_DATE_GET_MONTH_VALUE = getMethod(LOCAL_DATE, "getMonthValue"); + LOCAL_DATE_GET_DAY_OF_MONTH = getMethod(LOCAL_DATE, "getDayOfMonth"); + LOCAL_DATE_AT_START_OF_DAY = getMethod(LOCAL_DATE, "atStartOfDay"); + + INSTANT_GET_EPOCH_SECOND = getMethod(INSTANT, "getEpochSecond"); + INSTANT_GET_NANO = getMethod(INSTANT, "getNano"); + TIMESTAMP_TO_INSTANT = getMethod(Timestamp.class, "toInstant"); + + LOCAL_TIME_PARSE = getMethod(LOCAL_TIME, "parse", CharSequence.class); + + LOCAL_DATE_TIME_PLUS_NANOS = getMethod(LOCAL_DATE_TIME, "plusNanos", long.class); + LOCAL_DATE_TIME_TO_LOCAL_DATE = getMethod(LOCAL_DATE_TIME, "toLocalDate"); + LOCAL_DATE_TIME_TO_LOCAL_TIME = getMethod(LOCAL_DATE_TIME, "toLocalTime"); + LOCAL_DATE_TIME_PARSE = getMethod(LOCAL_DATE_TIME, "parse", CharSequence.class); + + ZONE_OFFSET_OF_TOTAL_SECONDS = getMethod(ZONE_OFFSET, "ofTotalSeconds", int.class); + + OFFSET_DATE_TIME_TO_LOCAL_DATE_TIME = getMethod(OFFSET_DATE_TIME, 
"toLocalDateTime"); + OFFSET_DATE_TIME_GET_OFFSET = getMethod(OFFSET_DATE_TIME, "getOffset"); + OFFSET_DATE_TIME_OF_LOCAL_DATE_TIME_ZONE_OFFSET = getMethod( + OFFSET_DATE_TIME, "of", LOCAL_DATE_TIME, ZONE_OFFSET); + OFFSET_DATE_TIME_PARSE = getMethod(OFFSET_DATE_TIME, "parse", CharSequence.class); + + ZONE_OFFSET_GET_TOTAL_SECONDS = getMethod(ZONE_OFFSET, "getTotalSeconds"); + } else { + LOCAL_TIME_OF_NANO = null; + LOCAL_TIME_TO_NANO = null; + LOCAL_DATE_OF_YEAR_MONTH_DAY = null; + LOCAL_DATE_PARSE = null; + LOCAL_DATE_GET_YEAR = null; + LOCAL_DATE_GET_MONTH_VALUE = null; + LOCAL_DATE_GET_DAY_OF_MONTH = null; + LOCAL_DATE_AT_START_OF_DAY = null; + INSTANT_GET_EPOCH_SECOND = null; + INSTANT_GET_NANO = null; + TIMESTAMP_TO_INSTANT = null; + LOCAL_TIME_PARSE = null; + LOCAL_DATE_TIME_PLUS_NANOS = null; + LOCAL_DATE_TIME_TO_LOCAL_DATE = null; + LOCAL_DATE_TIME_TO_LOCAL_TIME = null; + LOCAL_DATE_TIME_PARSE = null; + ZONE_OFFSET_OF_TOTAL_SECONDS = null; + OFFSET_DATE_TIME_TO_LOCAL_DATE_TIME = null; + OFFSET_DATE_TIME_GET_OFFSET = null; + OFFSET_DATE_TIME_OF_LOCAL_DATE_TIME_ZONE_OFFSET = null; + OFFSET_DATE_TIME_PARSE = null; + ZONE_OFFSET_GET_TOTAL_SECONDS = null; + } + } + + private LocalDateTimeUtils() { + // utility class + } + + /** + * Checks if the Java 8 Date and Time API is present. + * + *

+ * This is the case on Java 8 and later and not the case on
+ * Java 7. Versions older than Java 7 are not supported.
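The detection described above boils down to resolving `java.time` classes by name and treating a lookup failure as "API absent". A minimal standalone sketch of that idiom (the class name `DatePresenceSketch` is illustrative, not part of H2; the real check probes all six `java.time` classes, not just one):

```java
// Standalone sketch (not the H2 class itself) of the reflective
// presence check: resolve a class by name, or return null on a miss.
class DatePresenceSketch {
    static Class<?> tryGetClass(String className) {
        try {
            return Class.forName(className);
        } catch (ClassNotFoundException e) {
            // class not on this runtime; callers treat null as "absent"
            return null;
        }
    }

    static boolean isJava8DateApiPresent() {
        // the full check also probes LocalTime, LocalDateTime, Instant,
        // OffsetDateTime and ZoneOffset; one class suffices as a sketch
        return tryGetClass("java.time.LocalDate") != null;
    }
}
```

Caching the result in a `static final` field, as the code above does, means the reflection cost is paid once at class-initialization time.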

    + * + * @return if the Java 8 Date and Time API is present + */ + public static boolean isJava8DateApiPresent() { + return IS_JAVA8_DATE_API_PRESENT; + } + + /** + * Parses an ISO date string into a java.time.LocalDate. + * + * @param text the ISO date string + * @return the java.time.LocalDate instance + */ + public static Object parseLocalDate(CharSequence text) { + try { + return LOCAL_DATE_PARSE.invoke(null, text); + } catch (IllegalAccessException | InvocationTargetException e) { + throw new IllegalArgumentException("error when parsing text '" + text + "'", e); + } + } + + /** + * Parses an ISO time string into a java.time.LocalTime. + * + * @param text the ISO time string + * @return the java.time.LocalTime instance + */ + public static Object parseLocalTime(CharSequence text) { + try { + return LOCAL_TIME_PARSE.invoke(null, text); + } catch (IllegalAccessException | InvocationTargetException e) { + throw new IllegalArgumentException("error when parsing text '" + text + "'", e); + } + } + + /** + * Parses an ISO date string into a java.time.LocalDateTime. + * + * @param text the ISO date string + * @return the java.time.LocalDateTime instance + */ + public static Object parseLocalDateTime(CharSequence text) { + try { + return LOCAL_DATE_TIME_PARSE.invoke(null, text); + } catch (IllegalAccessException | InvocationTargetException e) { + throw new IllegalArgumentException("error when parsing text '" + text + "'", e); + } + } + + /** + * Parses an ISO date string into a java.time.OffsetDateTime. 
+ * + * @param text the ISO date string + * @return the java.time.OffsetDateTime instance + */ + public static Object parseOffsetDateTime(CharSequence text) { + try { + return OFFSET_DATE_TIME_PARSE.invoke(null, text); + } catch (IllegalAccessException | InvocationTargetException e) { + throw new IllegalArgumentException("error when parsing text '" + text + "'", e); + } + } + + private static Class tryGetClass(String className) { + try { + return Class.forName(className); + } catch (ClassNotFoundException e) { + return null; + } + } + + private static Method getMethod(Class clazz, String methodName, + Class... parameterTypes) { + try { + return clazz.getMethod(methodName, parameterTypes); + } catch (NoSuchMethodException e) { + throw new IllegalStateException("Java 8 or later but method " + + clazz.getName() + "#" + methodName + "(" + + Arrays.toString(parameterTypes) + ") is missing", e); + } + } + + /** + * Converts a value to a LocalDate. + * + *

+ * This method should only be called from Java 8 or later.

    + * + * @param value the value to convert + * @return the LocalDate + */ + public static Object valueToLocalDate(Value value) { + try { + return localDateFromDateValue(((ValueDate) value.convertTo(Value.DATE)).getDateValue()); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "date conversion failed"); + } + } + + /** + * Converts a value to a LocalTime. + * + *

+ * This method should only be called from Java 8 or later.

    + * + * @param value the value to convert + * @return the LocalTime + */ + public static Object valueToLocalTime(Value value) { + try { + return LOCAL_TIME_OF_NANO.invoke(null, + ((ValueTime) value.convertTo(Value.TIME)).getNanos()); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "time conversion failed"); + } + } + + /** + * Converts a value to a LocalDateTime. + * + *

+ * This method should only be called from Java 8 or later.

    + * + * @param value the value to convert + * @return the LocalDateTime + */ + public static Object valueToLocalDateTime(Value value) { + ValueTimestamp valueTimestamp = (ValueTimestamp) value.convertTo(Value.TIMESTAMP); + long dateValue = valueTimestamp.getDateValue(); + long timeNanos = valueTimestamp.getTimeNanos(); + try { + return localDateTimeFromDateNanos(dateValue, timeNanos); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "timestamp conversion failed"); + } + } + + /** + * Converts a value to a Instant. + * + *

+ * This method should only be called from Java 8 or later.

    + * + * @param value the value to convert + * @return the Instant + */ + public static Object valueToInstant(Value value) { + try { + return TIMESTAMP_TO_INSTANT.invoke(value.getTimestamp()); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "timestamp conversion failed"); + } + } + + /** + * Converts a value to a OffsetDateTime. + * + *

+ * This method should only be called from Java 8 or later.

    + * + * @param value the value to convert + * @return the OffsetDateTime + */ + public static Object valueToOffsetDateTime(Value value) { + ValueTimestampTimeZone valueTimestampTimeZone = (ValueTimestampTimeZone) value.convertTo(Value.TIMESTAMP_TZ); + long dateValue = valueTimestampTimeZone.getDateValue(); + long timeNanos = valueTimestampTimeZone.getTimeNanos(); + try { + Object localDateTime = localDateTimeFromDateNanos(dateValue, timeNanos); + + short timeZoneOffsetMins = valueTimestampTimeZone.getTimeZoneOffsetMins(); + int offsetSeconds = (int) TimeUnit.MINUTES.toSeconds(timeZoneOffsetMins); + + Object offset = ZONE_OFFSET_OF_TOTAL_SECONDS.invoke(null, offsetSeconds); + + return OFFSET_DATE_TIME_OF_LOCAL_DATE_TIME_ZONE_OFFSET.invoke(null, + localDateTime, offset); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "timestamp with time zone conversion failed"); + } + } + + /** + * Converts a LocalDate to a Value. + * + * @param localDate the LocalDate to convert, not {@code null} + * @return the value + */ + public static Value localDateToDateValue(Object localDate) { + try { + return ValueDate.fromDateValue(dateValueFromLocalDate(localDate)); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "date conversion failed"); + } + } + + /** + * Converts a LocalTime to a Value. + * + * @param localTime the LocalTime to convert, not {@code null} + * @return the value + */ + public static Value localTimeToTimeValue(Object localTime) { + try { + return ValueTime.fromNanos((Long) LOCAL_TIME_TO_NANO.invoke(localTime)); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "time conversion failed"); + } + } + + /** + * Converts a LocalDateTime to a Value. 
+ * + * @param localDateTime the LocalDateTime to convert, not {@code null} + * @return the value + */ + public static Value localDateTimeToValue(Object localDateTime) { + try { + Object localDate = LOCAL_DATE_TIME_TO_LOCAL_DATE.invoke(localDateTime); + long dateValue = dateValueFromLocalDate(localDate); + long timeNanos = timeNanosFromLocalDateTime(localDateTime); + return ValueTimestamp.fromDateValueAndNanos(dateValue, timeNanos); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "local date time conversion failed"); + } + } + + /** + * Converts a Instant to a Value. + * + * @param instant the Instant to convert, not {@code null} + * @return the value + */ + public static Value instantToValue(Object instant) { + try { + long epochSecond = (long) INSTANT_GET_EPOCH_SECOND.invoke(instant); + int nano = (int) INSTANT_GET_NANO.invoke(instant); + long absoluteDay = epochSecond / 86_400; + // Round toward negative infinity + if (epochSecond < 0 && (absoluteDay * 86_400 != epochSecond)) { + absoluteDay--; + } + long timeNanos = (epochSecond - absoluteDay * 86_400) * 1_000_000_000 + nano; + return ValueTimestampTimeZone.fromDateValueAndNanos( + DateTimeUtils.dateValueFromAbsoluteDay(absoluteDay), timeNanos, (short) 0); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "instant conversion failed"); + } + } + + /** + * Converts a OffsetDateTime to a Value. 
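The "round toward negative infinity" step in `instantToValue` above is equivalent to a floor division of the epoch second by 86,400. A standalone sketch of the same decomposition (class name is illustrative):

```java
// Sketch of splitting an epoch second into an absolute day and the
// nanoseconds within that day, rounding the day toward negative
// infinity exactly as instantToValue does.
class EpochSplitSketch {
    static final long SECONDS_PER_DAY = 86_400;

    static long absoluteDay(long epochSecond) {
        long day = epochSecond / SECONDS_PER_DAY;
        // integer division truncates toward zero; step back one day for
        // negative instants that are not exactly on a day boundary
        if (epochSecond < 0 && day * SECONDS_PER_DAY != epochSecond) {
            day--;
        }
        return day;
    }

    static long nanosOfDay(long epochSecond, int nano) {
        return (epochSecond - absoluteDay(epochSecond) * SECONDS_PER_DAY)
                * 1_000_000_000L + nano;
    }
}
```

On Java 8 and later this is what `Math.floorDiv(epochSecond, 86_400L)` computes, but the code above cannot rely on Java 8 APIs, hence the manual adjustment.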
+ * + * @param offsetDateTime the OffsetDateTime to convert, not {@code null} + * @return the value + */ + public static Value offsetDateTimeToValue(Object offsetDateTime) { + try { + Object localDateTime = OFFSET_DATE_TIME_TO_LOCAL_DATE_TIME.invoke(offsetDateTime); + Object localDate = LOCAL_DATE_TIME_TO_LOCAL_DATE.invoke(localDateTime); + Object zoneOffset = OFFSET_DATE_TIME_GET_OFFSET.invoke(offsetDateTime); + + long dateValue = dateValueFromLocalDate(localDate); + long timeNanos = timeNanosFromLocalDateTime(localDateTime); + short timeZoneOffsetMins = zoneOffsetToOffsetMinute(zoneOffset); + return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, + timeNanos, timeZoneOffsetMins); + } catch (IllegalAccessException e) { + throw DbException.convert(e); + } catch (InvocationTargetException e) { + throw DbException.convertInvocation(e, "time conversion failed"); + } + } + + private static long dateValueFromLocalDate(Object localDate) + throws IllegalAccessException, InvocationTargetException { + int year = (Integer) LOCAL_DATE_GET_YEAR.invoke(localDate); + int month = (Integer) LOCAL_DATE_GET_MONTH_VALUE.invoke(localDate); + int day = (Integer) LOCAL_DATE_GET_DAY_OF_MONTH.invoke(localDate); + return DateTimeUtils.dateValue(year, month, day); + } + + private static long timeNanosFromLocalDateTime(Object localDateTime) + throws IllegalAccessException, InvocationTargetException { + Object localTime = LOCAL_DATE_TIME_TO_LOCAL_TIME.invoke(localDateTime); + return (Long) LOCAL_TIME_TO_NANO.invoke(localTime); + } + + private static short zoneOffsetToOffsetMinute(Object zoneOffset) + throws IllegalAccessException, InvocationTargetException { + int totalSeconds = (Integer) ZONE_OFFSET_GET_TOTAL_SECONDS.invoke(zoneOffset); + return (short) TimeUnit.SECONDS.toMinutes(totalSeconds); + } + + private static Object localDateFromDateValue(long dateValue) + throws IllegalAccessException, InvocationTargetException { + + int year = DateTimeUtils.yearFromDateValue(dateValue); + 
int month = DateTimeUtils.monthFromDateValue(dateValue); + int day = DateTimeUtils.dayFromDateValue(dateValue); + try { + return LOCAL_DATE_OF_YEAR_MONTH_DAY.invoke(null, year, month, day); + } catch (InvocationTargetException e) { + if (year <= 1500 && (year & 3) == 0 && month == 2 && day == 29) { + // If proleptic Gregorian doesn't have such date use the next day + return LOCAL_DATE_OF_YEAR_MONTH_DAY.invoke(null, year, 3, 1); + } + throw e; + } + } + + private static Object localDateTimeFromDateNanos(long dateValue, long timeNanos) + throws IllegalAccessException, InvocationTargetException { + Object localDate = localDateFromDateValue(dateValue); + Object localDateTime = LOCAL_DATE_AT_START_OF_DAY.invoke(localDate); + return LOCAL_DATE_TIME_PLUS_NANOS.invoke(localDateTime, timeNanos); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/MathUtils.java b/modules/h2/src/main/java/org/h2/util/MathUtils.java new file mode 100644 index 0000000000000..4521a05dc9a14 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/MathUtils.java @@ -0,0 +1,313 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.ByteArrayOutputStream; +import java.io.DataOutputStream; +import java.io.IOException; +import java.lang.reflect.Method; +import java.nio.charset.StandardCharsets; +import java.security.SecureRandom; +import java.util.concurrent.ThreadLocalRandom; + +/** + * This is a utility class with mathematical helper functions. + */ +public class MathUtils { + + /** + * The secure random object. + */ + static SecureRandom cachedSecureRandom; + + /** + * True if the secure random object is seeded. + */ + static volatile boolean seeded; + + private MathUtils() { + // utility class + } + + + /** + * Round the value up to the next block size. The block size must be a power + * of two. 
As an example, using the block size of 8, the following rounding + * operations are done: 0 stays 0; values 1..8 results in 8, 9..16 results + * in 16, and so on. + * + * @param x the value to be rounded + * @param blockSizePowerOf2 the block size + * @return the rounded value + */ + public static int roundUpInt(int x, int blockSizePowerOf2) { + return (x + blockSizePowerOf2 - 1) & (-blockSizePowerOf2); + } + + /** + * Round the value up to the next block size. The block size must be a power + * of two. As an example, using the block size of 8, the following rounding + * operations are done: 0 stays 0; values 1..8 results in 8, 9..16 results + * in 16, and so on. + * + * @param x the value to be rounded + * @param blockSizePowerOf2 the block size + * @return the rounded value + */ + public static long roundUpLong(long x, long blockSizePowerOf2) { + return (x + blockSizePowerOf2 - 1) & (-blockSizePowerOf2); + } + + private static synchronized SecureRandom getSecureRandom() { + if (cachedSecureRandom != null) { + return cachedSecureRandom; + } + // Workaround for SecureRandom problem as described in + // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6202721 + // Can not do that in a static initializer block, because + // threads are not started until after the initializer block exits + try { + cachedSecureRandom = SecureRandom.getInstance("SHA1PRNG"); + // On some systems, secureRandom.generateSeed() is very slow. + // In this case it is initialized using our own seed implementation + // and afterwards (in the thread) using the regular algorithm. 
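The mask arithmetic behind `roundUpInt` can be checked in isolation; a sketch (class name illustrative, logic copied from the method above): adding `block - 1` pushes any value with nonzero low bits into the next block, and `& -block` then clears those low bits, because for a power of two the two's-complement negation is a mask of all bits at and above the block size.

```java
// Sketch of the power-of-two rounding used by MathUtils.roundUpInt:
// add (block - 1), then mask with -block to clear the low bits.
class RoundUpSketch {
    static int roundUpInt(int x, int blockSizePowerOf2) {
        return (x + blockSizePowerOf2 - 1) & (-blockSizePowerOf2);
    }
}
```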
+ Runnable runnable = new Runnable() { + @Override + public void run() { + try { + SecureRandom sr = SecureRandom.getInstance("SHA1PRNG"); + byte[] seed = sr.generateSeed(20); + synchronized (cachedSecureRandom) { + cachedSecureRandom.setSeed(seed); + seeded = true; + } + } catch (Exception e) { + // NoSuchAlgorithmException + warn("SecureRandom", e); + } + } + }; + + try { + Thread t = new Thread(runnable, "Generate Seed"); + // let the process terminate even if generating the seed is + // really slow + t.setDaemon(true); + t.start(); + Thread.yield(); + try { + // normally, generateSeed takes less than 200 ms + t.join(400); + } catch (InterruptedException e) { + warn("InterruptedException", e); + } + if (!seeded) { + byte[] seed = generateAlternativeSeed(); + // this never reduces randomness + synchronized (cachedSecureRandom) { + cachedSecureRandom.setSeed(seed); + } + } + } catch (SecurityException e) { + // workaround for the Google App Engine: don't use a thread + runnable.run(); + generateAlternativeSeed(); + } + + } catch (Exception e) { + // NoSuchAlgorithmException + warn("SecureRandom", e); + cachedSecureRandom = new SecureRandom(); + } + return cachedSecureRandom; + } + + /** + * Generate a seed value, using as much unpredictable data as possible. 
+ * + * @return the seed + */ + public static byte[] generateAlternativeSeed() { + try { + ByteArrayOutputStream bout = new ByteArrayOutputStream(); + DataOutputStream out = new DataOutputStream(bout); + + // milliseconds and nanoseconds + out.writeLong(System.currentTimeMillis()); + out.writeLong(System.nanoTime()); + + // memory + out.writeInt(new Object().hashCode()); + Runtime runtime = Runtime.getRuntime(); + out.writeLong(runtime.freeMemory()); + out.writeLong(runtime.maxMemory()); + out.writeLong(runtime.totalMemory()); + + // environment + try { + String s = System.getProperties().toString(); + // can't use writeUTF, as the string + // might be larger than 64 KB + out.writeInt(s.length()); + out.write(s.getBytes(StandardCharsets.UTF_8)); + } catch (Exception e) { + warn("generateAlternativeSeed", e); + } + + // host name and ip addresses (if any) + try { + // workaround for the Google App Engine: don't use InetAddress + Class inetAddressClass = Class.forName( + "java.net.InetAddress"); + Object localHost = inetAddressClass.getMethod( + "getLocalHost").invoke(null); + String hostName = inetAddressClass.getMethod( + "getHostName").invoke(localHost).toString(); + out.writeUTF(hostName); + Object[] list = (Object[]) inetAddressClass.getMethod( + "getAllByName", String.class).invoke(null, hostName); + Method getAddress = inetAddressClass.getMethod( + "getAddress"); + for (Object o : list) { + out.write((byte[]) getAddress.invoke(o)); + } + } catch (Throwable e) { + // on some system, InetAddress is not supported + // on some system, InetAddress.getLocalHost() doesn't work + // for some reason (incorrect configuration) + } + + // timing (a second thread is already running usually) + for (int j = 0; j < 16; j++) { + int i = 0; + long end = System.currentTimeMillis(); + while (end == System.currentTimeMillis()) { + i++; + } + out.writeInt(i); + } + + out.close(); + return bout.toByteArray(); + } catch (IOException e) { + warn("generateAlternativeSeed", e); + return 
new byte[1]; + } + } + + /** + * Print a message to system output if there was a problem initializing the + * random number generator. + * + * @param s the message to print + * @param t the stack trace + */ + static void warn(String s, Throwable t) { + // not a fatal problem, but maybe reduced security + System.out.println("Warning: " + s); + if (t != null) { + t.printStackTrace(); + } + } + + /** + * Get the value that is equal to or higher than this value, and that is a + * power of two. + * + * @param x the original value + * @return the next power of two value + * @throws IllegalArgumentException if x < 0 or x > 0x40000000 + */ + public static int nextPowerOf2(int x) throws IllegalArgumentException { + if (x == 0) { + return 1; + } else if (x < 0 || x > 0x4000_0000 ) { + throw new IllegalArgumentException("Argument out of range" + + " [0x0-0x40000000]. Argument was: " + x); + } + x--; + x |= x >> 1; + x |= x >> 2; + x |= x >> 4; + x |= x >> 8; + x |= x >> 16; + return ++x; + } + + /** + * Convert a long value to an int value. Values larger than the biggest int + * value is converted to the biggest int value, and values smaller than the + * smallest int value are converted to the smallest int value. + * + * @param l the value to convert + * @return the converted int value + */ + public static int convertLongToInt(long l) { + if (l <= Integer.MIN_VALUE) { + return Integer.MIN_VALUE; + } else if (l >= Integer.MAX_VALUE) { + return Integer.MAX_VALUE; + } else { + return (int) l; + } + } + + /** + * Get a cryptographically secure pseudo random long value. + * + * @return the random long value + */ + public static long secureRandomLong() { + return getSecureRandom().nextLong(); + } + + /** + * Get a number of pseudo random bytes. + * + * @param bytes the target array + */ + public static void randomBytes(byte[] bytes) { + ThreadLocalRandom.current().nextBytes(bytes); + } + + /** + * Get a number of cryptographically secure pseudo random bytes. 
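The `nextPowerOf2` shown above uses a classic bit-smearing trick; a standalone sketch replicating it (class name illustrative): after decrementing, each shift-and-or copies the highest set bit into progressively more of the lower positions, leaving a value of the form 2^k - 1, so the final increment yields an exact power of two.

```java
// Sketch of MathUtils.nextPowerOf2: decrement, "smear" the top bit
// down into all lower positions, then increment back to a power of two.
class NextPowerSketch {
    static int nextPowerOf2(int x) {
        if (x == 0) {
            return 1;
        } else if (x < 0 || x > 0x4000_0000) {
            throw new IllegalArgumentException("out of range: " + x);
        }
        x--;
        x |= x >> 1;
        x |= x >> 2;
        x |= x >> 4;
        x |= x >> 8;
        x |= x >> 16;
        return ++x;
    }
}
```

The decrement first is what makes exact powers of two map to themselves rather than the next power up.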
+ * + * @param len the number of bytes + * @return the random bytes + */ + public static byte[] secureRandomBytes(int len) { + if (len <= 0) { + len = 1; + } + byte[] buff = new byte[len]; + getSecureRandom().nextBytes(buff); + return buff; + } + + /** + * Get a pseudo random int value between 0 (including and the given value + * (excluding). The value is not cryptographically secure. + * + * @param lowerThan the value returned will be lower than this value + * @return the random long value + */ + public static int randomInt(int lowerThan) { + return ThreadLocalRandom.current().nextInt(lowerThan); + } + + /** + * Get a cryptographically secure pseudo random int value between 0 + * (including and the given value (excluding). + * + * @param lowerThan the value returned will be lower than this value + * @return the random long value + */ + public static int secureRandomInt(int lowerThan) { + return getSecureRandom().nextInt(lowerThan); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/MergedResultSet.java b/modules/h2/src/main/java/org/h2/util/MergedResultSet.java new file mode 100644 index 0000000000000..0ec1760fb4e2e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/MergedResultSet.java @@ -0,0 +1,143 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import org.h2.tools.SimpleResultSet; + +/** + * Merged result set. Used to combine several result sets into one. Merged + * result set will contain rows from all appended result sets. 
Result sets are + * not required to have the same lists of columns, but required to have + * compatible column definitions, for example, if one result set has a + * {@link java.sql.Types#VARCHAR} column {@code NAME} then another results sets + * that have {@code NAME} column should also define it with the same type. + */ +public final class MergedResultSet { + /** + * Metadata of a column. + */ + private static final class ColumnInfo { + final String name; + + final int type; + + final int precision; + + final int scale; + + /** + * Creates metadata. + * + * @param name + * name of the column + * @param type + * type of the column, see {@link java.sql.Types} + * @param precision + * precision of the column + * @param scale + * scale of the column + */ + ColumnInfo(String name, int type, int precision, int scale) { + this.name = name; + this.type = type; + this.precision = precision; + this.scale = scale; + } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null || getClass() != obj.getClass()) { + return false; + } + ColumnInfo other = (ColumnInfo) obj; + return name.equals(other.name); + } + + @Override + public int hashCode() { + return name.hashCode(); + } + + } + + private final ArrayList> data = New.arrayList(); + + private final ArrayList columns = New.arrayList(); + + /** + * Appends a result set. 
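Independent of JDBC, the merge rule described above (the merged schema is the ordered union of the appended column sets; cells absent from a row stay null) can be sketched with plain maps. This is a hypothetical helper, not H2 API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of MergedResultSet's column handling: each appended row may
// carry a different column set; the merged schema is the ordered
// union of all column names, first appearance wins the position.
class MergeSketch {
    static List<String> mergedColumns(List<Map<String, Object>> rows) {
        List<String> columns = new ArrayList<>();
        for (Map<String, Object> row : rows) {
            for (String name : row.keySet()) {
                if (!columns.contains(name)) {
                    columns.add(name);
                }
            }
        }
        return columns;
    }
}
```

The real class keys cells by a `ColumnInfo` whose equality is the column name, which is what allows same-named columns from different result sets to share one output column.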
+ * + * @param rs + * result set to append + * @throws SQLException + * on SQL exception + */ + public void add(ResultSet rs) throws SQLException { + ResultSetMetaData meta = rs.getMetaData(); + int cols = meta.getColumnCount(); + if (cols == 0) { + return; + } + ColumnInfo[] info = new ColumnInfo[cols]; + for (int i = 1; i <= cols; i++) { + ColumnInfo ci = new ColumnInfo(meta.getColumnName(i), meta.getColumnType(i), meta.getPrecision(i), + meta.getScale(i)); + info[i - 1] = ci; + if (!columns.contains(ci)) { + columns.add(ci); + } + } + while (rs.next()) { + if (cols == 1) { + data.add(Collections.singletonMap(info[0], rs.getObject(1))); + } else { + HashMap map = new HashMap<>(); + for (int i = 1; i <= cols; i++) { + ColumnInfo ci = info[i - 1]; + map.put(ci, rs.getObject(i)); + } + data.add(map); + } + } + } + + /** + * Returns merged results set. + * + * @return result set with rows from all appended result sets + */ + public SimpleResultSet getResult() { + SimpleResultSet rs = new SimpleResultSet(); + for (ColumnInfo ci : columns) { + rs.addColumn(ci.name, ci.type, ci.precision, ci.scale); + } + for (Map map : data) { + Object[] row = new Object[columns.size()]; + for (Map.Entry entry : map.entrySet()) { + row[columns.indexOf(entry.getKey())] = entry.getValue(); + } + rs.addRow(row); + } + return rs; + } + + @Override + public String toString() { + return columns + ": " + data.size(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/NetUtils.java b/modules/h2/src/main/java/org/h2/util/NetUtils.java new file mode 100644 index 0000000000000..b7ef4c767faa4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/NetUtils.java @@ -0,0 +1,295 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.IOException; +import java.net.BindException; +import java.net.Inet6Address; +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.net.ServerSocket; +import java.net.Socket; +import java.net.UnknownHostException; +import java.util.concurrent.TimeUnit; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.security.CipherFactory; + +/** + * This utility class contains socket helper functions. + */ +public class NetUtils { + + private static final int CACHE_MILLIS = 1000; + private static InetAddress cachedBindAddress; + private static String cachedLocalAddress; + private static long cachedLocalAddressTime; + + private NetUtils() { + // utility class + } + + /** + * Create a loopback socket (a socket that is connected to localhost) on + * this port. + * + * @param port the port + * @param ssl if SSL should be used + * @return the socket + */ + public static Socket createLoopbackSocket(int port, boolean ssl) + throws IOException { + String local = getLocalAddress(); + try { + return createSocket(local, port, ssl); + } catch (IOException e) { + try { + return createSocket("localhost", port, ssl); + } catch (IOException e2) { + // throw the original exception + throw e; + } + } + } + + /** + * Create a client socket that is connected to the given address and port. 
+ * + * @param server to connect to (including an optional port) + * @param defaultPort the default port (if not specified in the server + * address) + * @param ssl if SSL should be used + * @return the socket + */ + public static Socket createSocket(String server, int defaultPort, + boolean ssl) throws IOException { + int port = defaultPort; + // IPv6: RFC 2732 format is '[a:b:c:d:e:f:g:h]' or + // '[a:b:c:d:e:f:g:h]:port' + // RFC 2396 format is 'a.b.c.d' or 'a.b.c.d:port' or 'hostname' or + // 'hostname:port' + int startIndex = server.startsWith("[") ? server.indexOf(']') : 0; + int idx = server.indexOf(':', startIndex); + if (idx >= 0) { + port = Integer.decode(server.substring(idx + 1)); + server = server.substring(0, idx); + } + InetAddress address = InetAddress.getByName(server); + return createSocket(address, port, ssl); + } + + /** + * Create a client socket that is connected to the given address and port. + * + * @param address the address to connect to + * @param port the port + * @param ssl if SSL should be used + * @return the socket + */ + public static Socket createSocket(InetAddress address, int port, boolean ssl) + throws IOException { + long start = System.nanoTime(); + for (int i = 0;; i++) { + try { + if (ssl) { + return CipherFactory.createSocket(address, port); + } + Socket socket = new Socket(); + socket.connect(new InetSocketAddress(address, port), + SysProperties.SOCKET_CONNECT_TIMEOUT); + return socket; + } catch (IOException e) { + if (System.nanoTime() - start >= + TimeUnit.MILLISECONDS.toNanos(SysProperties.SOCKET_CONNECT_TIMEOUT)) { + // either it was a connect timeout, + // or list of different exceptions + throw e; + } + if (i >= SysProperties.SOCKET_CONNECT_RETRY) { + throw e; + } + // wait a bit and retry + try { + // sleep at most 256 ms + long sleep = Math.min(256, i * i); + Thread.sleep(sleep); + } catch (InterruptedException e2) { + // ignore + } + } + } + } + + /** + * Create a server socket. 
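The RFC 2732 handling in `createSocket` above hinges on starting the port search after the closing `]` of a bracketed IPv6 literal, so the colons inside the literal are never mistaken for the port separator. A standalone sketch of that parsing (class name illustrative, logic mirrors the source):

```java
// Sketch of the address parsing in NetUtils.createSocket: an RFC 2732
// IPv6 literal like "[::1]:9092" keeps its brackets, and the port
// search begins at the closing ']' rather than at index 0.
class AddressParseSketch {
    static String host(String server) {
        int startIndex = server.startsWith("[") ? server.indexOf(']') : 0;
        int idx = server.indexOf(':', startIndex);
        return idx >= 0 ? server.substring(0, idx) : server;
    }

    static int port(String server, int defaultPort) {
        int startIndex = server.startsWith("[") ? server.indexOf(']') : 0;
        int idx = server.indexOf(':', startIndex);
        return idx >= 0 ? Integer.decode(server.substring(idx + 1)) : defaultPort;
    }
}
```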
The system property h2.bindAddress is used if
+     * set. If SSL is used and h2.enableAnonymousTLS is true, an attempt is
+     * made to modify the security property jdk.tls.legacyAlgorithms
+     * (in newer JVMs) to allow anonymous TLS.
+     * <p>
    + * This system change is effectively permanent for the lifetime of the JVM. + * @see CipherFactory#removeAnonFromLegacyAlgorithms() + * + * @param port the port to listen on + * @param ssl if SSL should be used + * @return the server socket + */ + public static ServerSocket createServerSocket(int port, boolean ssl) { + try { + return createServerSocketTry(port, ssl); + } catch (Exception e) { + // try again + return createServerSocketTry(port, ssl); + } + } + + /** + * Get the bind address if the system property h2.bindAddress is set, or + * null if not. + * + * @return the bind address + */ + private static InetAddress getBindAddress() throws UnknownHostException { + String host = SysProperties.BIND_ADDRESS; + if (host == null || host.length() == 0) { + return null; + } + synchronized (NetUtils.class) { + if (cachedBindAddress == null) { + cachedBindAddress = InetAddress.getByName(host); + } + } + return cachedBindAddress; + } + + private static ServerSocket createServerSocketTry(int port, boolean ssl) { + try { + InetAddress bindAddress = getBindAddress(); + if (ssl) { + return CipherFactory.createServerSocket(port, bindAddress); + } + if (bindAddress == null) { + return new ServerSocket(port); + } + return new ServerSocket(port, 0, bindAddress); + } catch (BindException be) { + throw DbException.get(ErrorCode.EXCEPTION_OPENING_PORT_2, + be, "" + port, be.toString()); + } catch (IOException e) { + throw DbException.convertIOException(e, "port: " + port + " ssl: " + ssl); + } + } + + /** + * Check if a socket is connected to a local address. 
+ * + * @param socket the socket + * @return true if it is + */ + public static boolean isLocalAddress(Socket socket) + throws UnknownHostException { + InetAddress test = socket.getInetAddress(); + if (test.isLoopbackAddress()) { + return true; + } + InetAddress localhost = InetAddress.getLocalHost(); + // localhost.getCanonicalHostName() is very very slow + String host = localhost.getHostAddress(); + for (InetAddress addr : InetAddress.getAllByName(host)) { + if (test.equals(addr)) { + return true; + } + } + return false; + } + + /** + * Close a server socket and ignore any exceptions. + * + * @param socket the socket + * @return null + */ + public static ServerSocket closeSilently(ServerSocket socket) { + if (socket != null) { + try { + socket.close(); + } catch (IOException e) { + // ignore + } + } + return null; + } + + /** + * Get the local host address as a string. + * For performance, the result is cached for one second. + * + * @return the local host address + */ + public static synchronized String getLocalAddress() { + long now = System.nanoTime(); + if (cachedLocalAddress != null) { + if (cachedLocalAddressTime + TimeUnit.MILLISECONDS.toNanos(CACHE_MILLIS) > now) { + return cachedLocalAddress; + } + } + InetAddress bind = null; + boolean useLocalhost = false; + try { + bind = getBindAddress(); + if (bind == null) { + useLocalhost = true; + } + } catch (UnknownHostException e) { + // ignore + } + if (useLocalhost) { + try { + bind = InetAddress.getLocalHost(); + } catch (UnknownHostException e) { + throw DbException.convert(e); + } + } + String address; + if (bind == null) { + address = "localhost"; + } else { + address = bind.getHostAddress(); + if (bind instanceof Inet6Address) { + if (address.indexOf('%') >= 0) { + address = "localhost"; + } else if (address.indexOf(':') >= 0 && !address.startsWith("[")) { + // adds'[' and ']' if required for + // Inet6Address that contain a ':'. 
+                    address = "[" + address + "]";
+                }
+            }
+        }
+        if (address.equals("127.0.0.1")) {
+            address = "localhost";
+        }
+        cachedLocalAddress = address;
+        cachedLocalAddressTime = now;
+        return address;
+    }
+
+    /**
+     * Get the host name of a local address, if available.
+     *
+     * @param localAddress the local address
+     * @return the host name, or another text if not available
+     */
+    public static String getHostName(String localAddress) {
+        try {
+            InetAddress addr = InetAddress.getByName(localAddress);
+            return addr.getHostName();
+        } catch (Exception e) {
+            return "unknown";
+        }
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/util/New.java b/modules/h2/src/main/java/org/h2/util/New.java
new file mode 100644
index 0000000000000..1ddff4338bf88
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/util/New.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.util;
+
+import java.util.ArrayList;
+
+/**
+ * This class contains static methods to construct commonly used generic objects
+ * such as ArrayList.
+ */
+public class New {
+
+    /**
+     * Create a new ArrayList.
+     *
+     * @param <T> the type
+     * @return the object
+     */
+    public static <T> ArrayList<T> arrayList() {
+        return new ArrayList<>(4);
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/util/ParserUtil.java b/modules/h2/src/main/java/org/h2/util/ParserUtil.java
new file mode 100644
index 0000000000000..7211a2dad747f
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/util/ParserUtil.java
@@ -0,0 +1,210 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.util;
+
+public class ParserUtil {
+
+    /**
+     * A keyword.
+     */
+    public static final int KEYWORD = 1;
+
+    /**
+     * An identifier (table name, column name,...).
+ */ + public static final int IDENTIFIER = 2; + + /** + * The token "null". + */ + public static final int NULL = 3; + + /** + * The token "true". + */ + public static final int TRUE = 4; + + /** + * The token "false". + */ + public static final int FALSE = 5; + + /** + * The token "rownum". + */ + public static final int ROWNUM = 6; + + private ParserUtil() { + // utility class + } + + /** + * Checks if this string is a SQL keyword. + * + * @param s the token to check + * @return true if it is a keyword + */ + public static boolean isKeyword(String s) { + if (s == null || s.length() == 0) { + return false; + } + return getSaveTokenType(s, false) != IDENTIFIER; + } + + /** + * Is this a simple identifier (in the JDBC specification sense). + * + * @param s identifier to check + * @param functionsAsKeywords treat system functions as keywords + * @return is specified identifier may be used without quotes + * @throws NullPointerException if s is {@code null} + */ + public static boolean isSimpleIdentifier(String s, boolean functionsAsKeywords) { + if (s.length() == 0) { + return false; + } + char c = s.charAt(0); + // lowercase a-z is quoted as well + if ((!Character.isLetter(c) && c != '_') || Character.isLowerCase(c)) { + return false; + } + for (int i = 1, length = s.length(); i < length; i++) { + c = s.charAt(i); + if ((!Character.isLetterOrDigit(c) && c != '_') || + Character.isLowerCase(c)) { + return false; + } + } + return getSaveTokenType(s, functionsAsKeywords) == IDENTIFIER; + } + + /** + * Get the token type. 
+ * + * @param s the token + * @param functionsAsKeywords whether "current data / time" functions are keywords + * @return the token type + */ + public static int getSaveTokenType(String s, boolean functionsAsKeywords) { + switch (s.charAt(0)) { + case 'A': + return getKeywordOrIdentifier(s, "ALL", KEYWORD); + case 'C': + if ("CHECK".equals(s)) { + return KEYWORD; + } else if ("CONSTRAINT".equals(s)) { + return KEYWORD; + } else if ("CROSS".equals(s)) { + return KEYWORD; + } + if (functionsAsKeywords) { + if ("CURRENT_DATE".equals(s) || "CURRENT_TIME".equals(s) || "CURRENT_TIMESTAMP".equals(s)) { + return KEYWORD; + } + } + return IDENTIFIER; + case 'D': + return getKeywordOrIdentifier(s, "DISTINCT", KEYWORD); + case 'E': + if ("EXCEPT".equals(s)) { + return KEYWORD; + } + return getKeywordOrIdentifier(s, "EXISTS", KEYWORD); + case 'F': + if ("FETCH".equals(s)) { + return KEYWORD; + } else if ("FROM".equals(s)) { + return KEYWORD; + } else if ("FOR".equals(s)) { + return KEYWORD; + } else if ("FOREIGN".equals(s)) { + return KEYWORD; + } else if ("FULL".equals(s)) { + return KEYWORD; + } + return getKeywordOrIdentifier(s, "FALSE", FALSE); + case 'G': + return getKeywordOrIdentifier(s, "GROUP", KEYWORD); + case 'H': + return getKeywordOrIdentifier(s, "HAVING", KEYWORD); + case 'I': + if ("INNER".equals(s)) { + return KEYWORD; + } else if ("INTERSECT".equals(s)) { + return KEYWORD; + } + return getKeywordOrIdentifier(s, "IS", KEYWORD); + case 'J': + return getKeywordOrIdentifier(s, "JOIN", KEYWORD); + case 'L': + if ("LIMIT".equals(s)) { + return KEYWORD; + } + return getKeywordOrIdentifier(s, "LIKE", KEYWORD); + case 'M': + return getKeywordOrIdentifier(s, "MINUS", KEYWORD); + case 'N': + if ("NOT".equals(s)) { + return KEYWORD; + } else if ("NATURAL".equals(s)) { + return KEYWORD; + } + return getKeywordOrIdentifier(s, "NULL", NULL); + case 'O': + if ("OFFSET".equals(s)) { + return KEYWORD; + } else if ("ON".equals(s)) { + return KEYWORD; + } + return 
getKeywordOrIdentifier(s, "ORDER", KEYWORD); + case 'P': + return getKeywordOrIdentifier(s, "PRIMARY", KEYWORD); + case 'R': + return getKeywordOrIdentifier(s, "ROWNUM", ROWNUM); + case 'S': + if ("SELECT".equals(s)) { + return KEYWORD; + } + if (functionsAsKeywords) { + if ("SYSDATE".equals(s) || "SYSTIME".equals(s) || "SYSTIMESTAMP".equals(s)) { + return KEYWORD; + } + } + return IDENTIFIER; + case 'T': + if ("TRUE".equals(s)) { + return TRUE; + } + if (functionsAsKeywords) { + if ("TODAY".equals(s)) { + return KEYWORD; + } + } + return IDENTIFIER; + case 'U': + if ("UNIQUE".equals(s)) { + return KEYWORD; + } + return getKeywordOrIdentifier(s, "UNION", KEYWORD); + case 'W': + if ("WITH".equals(s)) { + return KEYWORD; + } + return getKeywordOrIdentifier(s, "WHERE", KEYWORD); + default: + return IDENTIFIER; + } + } + + private static int getKeywordOrIdentifier(String s1, String s2, + int keywordType) { + if (s1.equals(s2)) { + return keywordType; + } + return IDENTIFIER; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/Permutations.java b/modules/h2/src/main/java/org/h2/util/Permutations.java new file mode 100644 index 0000000000000..f1a807f70a4a2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/Permutations.java @@ -0,0 +1,173 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + * + * According to a mail from Alan Tucker to Chris H Miller from IBM, + * the algorithm is in the public domain: + * + * Date: 2010-07-15 15:57 + * Subject: Re: Applied Combinatorics Code + * + * Chris, + * The combinatorics algorithms in my textbook are all not under patent + * or copyright. 
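The `getSaveTokenType` switch above dispatches on the first character so that each lookup compares only a handful of candidate keywords. A reduced sketch of the same idea, using a bucketed map instead of a hand-written switch (the keyword list here is a small subset, for illustration only):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Reduced sketch of ParserUtil's first-character dispatch: bucket keywords
// by their first letter so a lookup only scans a few candidates.
public class KeywordCheck {
    private static final Map<Character, Set<String>> BY_LETTER = new HashMap<>();
    static {
        for (String kw : new String[] {"ALL", "CHECK", "CROSS", "FROM",
                "GROUP", "HAVING", "SELECT", "WHERE"}) {
            BY_LETTER.computeIfAbsent(kw.charAt(0), k -> new HashSet<>()).add(kw);
        }
    }

    public static boolean isKeyword(String s) {
        if (s == null || s.isEmpty()) {
            return false;
        }
        Set<String> bucket = BY_LETTER.get(s.charAt(0));
        return bucket != null && bucket.contains(s);
    }
}
```

Like the original, the check is case-sensitive: lowercase identifiers fall through to "not a keyword", which matches the quoting rules in `isSimpleIdentifier`.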
They are as much in the public domain as the solution to any
+ * common question in an undergraduate mathematics course, e.g., in my
+ * combinatorics course, the solution to the problem of how many arrangements
+ * there are of the letters in the word MATHEMATICS. I appreciate your due
+ * diligence.
+ * -Alan
+ */
+package org.h2.util;
+
+import org.h2.message.DbException;
+
+/**
+ * A class to iterate over all permutations of an array.
+ * The algorithm is from Applied Combinatorics, by Alan Tucker as implemented in
+ * http://www.koders.com/java/fidD3445CD11B1DC687F6B8911075E7F01E23171553.aspx
+ *
+ * @param <T> the element type
+ */
+public class Permutations<T> {
+
+    private final T[] in;
+    private final T[] out;
+    private final int n, m;
+    private final int[] index;
+    private boolean hasNext = true;
+
+    private Permutations(T[] in, T[] out, int m) {
+        this.n = in.length;
+        this.m = m;
+        if (n < m || m < 0) {
+            DbException.throwInternalError("n < m or m < 0");
+        }
+        this.in = in;
+        this.out = out;
+        index = new int[n];
+        for (int i = 0; i < n; i++) {
+            index[i] = i;
+        }
+
+        // The elements from m to n are always kept ascending right to left.
+        // This keeps the dip in the interesting region.
+        reverseAfter(m - 1);
+    }
+
+    /**
+     * Create a new permutations object.
+     *
+     * @param <T> the type
+     * @param in the source array
+     * @param out the target array
+     * @return the generated permutations object
+     */
+    public static <T> Permutations<T> create(T[] in, T[] out) {
+        return new Permutations<>(in, out, in.length);
+    }
+
+    /**
+     * Create a new permutations object.
+     *
+     * @param <T> the type
+     * @param in the source array
+     * @param out the target array
+     * @param m the number of output elements to generate
+     * @return the generated permutations object
+     */
+    public static <T> Permutations<T> create(T[] in, T[] out, int m) {
+        return new Permutations<>(in, out, m);
+    }
+
+    /**
+     * Move the index forward a notch.
The algorithm first finds the rightmost + * index that is less than its neighbor to the right. This is the dip point. + * The algorithm next finds the least element to the right of the dip that + * is greater than the dip. That element is switched with the dip. Finally, + * the list of elements to the right of the dip is reversed. + * For example, in a permutation of 5 items, the index may be {1, 2, 4, 3, + * 0}. The dip is 2 the rightmost element less than its neighbor on its + * right. The least element to the right of 2 that is greater than 2 is 3. + * These elements are swapped, yielding {1, 3, 4, 2, 0}, and the list right + * of the dip point is reversed, yielding {1, 3, 0, 2, 4}. + */ + private void moveIndex() { + // find the index of the first element that dips + int i = rightmostDip(); + if (i < 0) { + hasNext = false; + return; + } + + // find the least greater element to the right of the dip + int leastToRightIndex = i + 1; + for (int j = i + 2; j < n; j++) { + if (index[j] < index[leastToRightIndex] && index[j] > index[i]) { + leastToRightIndex = j; + } + } + + // switch dip element with least greater element to its right + int t = index[i]; + index[i] = index[leastToRightIndex]; + index[leastToRightIndex] = t; + + if (m - 1 > i) { + // reverse the elements to the right of the dip + reverseAfter(i); + + // reverse the elements to the right of m - 1 + reverseAfter(m - 1); + } + } + + /** + * Get the index of the first element from the right that is less + * than its neighbor on the right. + * + * @return the index or -1 if non is found + */ + private int rightmostDip() { + for (int i = n - 2; i >= 0; i--) { + if (index[i] < index[i + 1]) { + return i; + } + } + return -1; + } + + /** + * Reverse the elements to the right of the specified index. 
+ * + * @param i the index + */ + private void reverseAfter(int i) { + int start = i + 1; + int end = n - 1; + while (start < end) { + int t = index[start]; + index[start] = index[end]; + index[end] = t; + start++; + end--; + } + } + + /** + * Go to the next lineup, and if available, fill the target array. + * + * @return if a new lineup is available + */ + public boolean next() { + if (!hasNext) { + return false; + } + for (int i = 0; i < m; i++) { + out[i] = in[index[i]]; + } + moveIndex(); + return true; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/Profiler.java b/modules/h2/src/main/java/org/h2/util/Profiler.java new file mode 100644 index 0000000000000..a73c8d5b67c08 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/Profiler.java @@ -0,0 +1,518 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.ByteArrayOutputStream; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.io.OutputStream; +import java.io.Reader; +import java.io.StringReader; +import java.lang.instrument.Instrumentation; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +/** + * A simple CPU profiling tool similar to java -Xrunhprof. It can be used + * in-process (to profile the current application) or as a standalone program + * (to profile a different process, or files containing full thread dumps). 
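The "dip point" step described in `moveIndex` above is the classic next-permutation algorithm, and it can be demonstrated standalone on a plain int array. This sketch reproduces the three steps — find the rightmost dip, swap it with the least greater element to its right, reverse the tail — and it yields exactly the {1, 2, 4, 3, 0} → {1, 3, 0, 2, 4} transition from the comment above:

```java
// Standalone sketch of Permutations.moveIndex()'s core step.
public class NextPermutation {
    // Advances a to its next lexicographic permutation; returns false if a
    // is already the last (fully descending) permutation.
    public static boolean next(int[] a) {
        int i = a.length - 2;
        while (i >= 0 && a[i] >= a[i + 1]) {
            i--;                              // rightmost dip
        }
        if (i < 0) {
            return false;
        }
        int j = a.length - 1;
        while (a[j] <= a[i]) {
            j--;                              // least greater element to the right
        }
        int t = a[i]; a[i] = a[j]; a[j] = t;  // swap with the dip
        for (int l = i + 1, r = a.length - 1; l < r; l++, r--) {
            t = a[l]; a[l] = a[r]; a[r] = t;  // reverse the tail
        }
        return true;
    }
}
```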
+ */ +public class Profiler implements Runnable { + + private static Instrumentation instrumentation; + private static final String LINE_SEPARATOR = + System.getProperty("line.separator", "\n"); + private static final int MAX_ELEMENTS = 1000; + + public int interval = 2; + public int depth = 48; + public boolean paused; + public boolean sumClasses; + public boolean sumMethods; + + private int pid; + + private final String[] ignoreLines = ( + "java," + + "sun," + + "com.sun.," + + "com.google.common.," + + "com.mongodb.," + + "org.bson.," + ).split(","); + private final String[] ignorePackages = ( + "java," + + "sun," + + "com.sun.," + + "com.google.common.," + + "com.mongodb.," + + "org.bson" + ).split(","); + private final String[] ignoreThreads = ( + "java.lang.Object.wait," + + "java.lang.Thread.dumpThreads," + + "java.lang.Thread.getThreads," + + "java.lang.Thread.sleep," + + "java.lang.UNIXProcess.waitForProcessExit," + + "java.net.PlainDatagramSocketImpl.receive0," + + "java.net.PlainSocketImpl.accept," + + "java.net.PlainSocketImpl.socketAccept," + + "java.net.SocketInputStream.socketRead," + + "java.net.SocketOutputStream.socketWrite," + + "org.eclipse.jetty.io.nio.SelectorManager$SelectSet.doSelect," + + "sun.awt.windows.WToolkit.eventLoop," + + "sun.misc.Unsafe.park," + + "sun.nio.ch.EPollArrayWrapper.epollWait," + + "sun.nio.ch.KQueueArrayWrapper.kevent0," + + "sun.nio.ch.ServerSocketChannelImpl.accept," + + "dalvik.system.VMStack.getThreadStackTrace," + + "dalvik.system.NativeStart.run" + ).split(","); + + private volatile boolean stop; + private final HashMap counts = + new HashMap<>(); + + /** + * The summary (usually one entry per package, unless sumClasses is enabled, + * in which case it's one entry per class). 
+ */ + private final HashMap summary = + new HashMap<>(); + private int minCount = 1; + private int total; + private Thread thread; + private long start; + private long time; + private int threadDumps; + + /** + * This method is called when the agent is installed. + * + * @param agentArgs the agent arguments + * @param inst the instrumentation object + */ + public static void premain(@SuppressWarnings("unused") String agentArgs, Instrumentation inst) { + instrumentation = inst; + } + + /** + * Get the instrumentation object if started as an agent. + * + * @return the instrumentation, or null + */ + public static Instrumentation getInstrumentation() { + return instrumentation; + } + + /** + * Run the command line version of the profiler. The JDK (jps and jstack) + * need to be in the path. + * + * @param args the process id of the process - if not set the java processes + * are listed + */ + public static void main(String... args) { + new Profiler().run(args); + } + + private void run(String... 
args) { + if (args.length == 0) { + System.out.println("Show profiling data"); + System.out.println("Usage: java " + getClass().getName() + + " | "); + System.out.println("Processes:"); + String processes = exec("jps", "-l"); + System.out.println(processes); + return; + } + start = System.nanoTime(); + if (args[0].matches("\\d+")) { + pid = Integer.parseInt(args[0]); + long last = 0; + while (true) { + tick(); + long t = System.nanoTime(); + if (t - last > TimeUnit.SECONDS.toNanos(5)) { + time = System.nanoTime() - start; + System.out.println(getTopTraces(3)); + last = t; + } + } + } + try { + for (String arg : args) { + if (arg.startsWith("-")) { + if ("-classes".equals(arg)) { + sumClasses = true; + } else if ("-methods".equals(arg)) { + sumMethods = true; + } else if ("-packages".equals(arg)) { + sumClasses = false; + sumMethods = false; + } else { + throw new IllegalArgumentException(arg); + } + continue; + } + try (Reader reader = new InputStreamReader(new FileInputStream(arg), "CP1252")) { + LineNumberReader r = new LineNumberReader(reader); + while (true) { + String line = r.readLine(); + if (line == null) { + break; + } else if (line.startsWith("Full thread dump")) { + threadDumps++; + } + } + } + try (Reader reader = new InputStreamReader(new FileInputStream(arg), "CP1252")) { + LineNumberReader r = new LineNumberReader(reader); + processList(readStackTrace(r)); + } + } + System.out.println(getTopTraces(5)); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + + private static List getRunnableStackTraces() { + ArrayList list = new ArrayList<>(); + Map map = Thread.getAllStackTraces(); + for (Map.Entry entry : map.entrySet()) { + Thread t = entry.getKey(); + if (t.getState() != Thread.State.RUNNABLE) { + continue; + } + StackTraceElement[] dump = entry.getValue(); + if (dump == null || dump.length == 0) { + continue; + } + list.add(dump); + } + return list; + } + + private static List readRunnableStackTraces(int pid) { + try { + String 
jstack = exec("jstack", "" + pid); + LineNumberReader r = new LineNumberReader( + new StringReader(jstack)); + return readStackTrace(r); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + + private static List readStackTrace(LineNumberReader r) + throws IOException { + ArrayList list = new ArrayList<>(); + while (true) { + String line = r.readLine(); + if (line == null) { + break; + } + if (!line.startsWith("\"")) { + // not a thread + continue; + } + line = r.readLine(); + if (line == null) { + break; + } + line = line.trim(); + if (!line.startsWith("java.lang.Thread.State: RUNNABLE")) { + continue; + } + ArrayList stack = new ArrayList<>(); + while (true) { + line = r.readLine(); + if (line == null) { + break; + } + line = line.trim(); + if (line.startsWith("- ")) { + continue; + } + if (!line.startsWith("at ")) { + break; + } + line = line.substring(3).trim(); + stack.add(line); + } + if (!stack.isEmpty()) { + String[] s = stack.toArray(new String[0]); + list.add(s); + } + } + return list; + } + + private static String exec(String... 
args) { + ByteArrayOutputStream err = new ByteArrayOutputStream(); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + try { + Process p = Runtime.getRuntime().exec(args); + copyInThread(p.getInputStream(), out); + copyInThread(p.getErrorStream(), err); + p.waitFor(); + String e = new String(err.toByteArray(), StandardCharsets.UTF_8); + if (e.length() > 0) { + throw new RuntimeException(e); + } + return new String(out.toByteArray(), StandardCharsets.UTF_8); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + private static void copyInThread(final InputStream in, + final OutputStream out) { + new Thread("Profiler stream copy") { + @Override + public void run() { + byte[] buffer = new byte[4096]; + try { + while (true) { + int len = in.read(buffer, 0, buffer.length); + if (len < 0) { + break; + } + out.write(buffer, 0, len); + } + } catch (Exception e) { + throw new RuntimeException(e); + } + } + }.start(); + } + + /** + * Start collecting profiling data. + * + * @return this + */ + public Profiler startCollecting() { + thread = new Thread(this, "Profiler"); + thread.setDaemon(true); + thread.start(); + return this; + } + + /** + * Stop collecting. 
+ * + * @return this + */ + public Profiler stopCollecting() { + stop = true; + if (thread != null) { + try { + thread.join(); + } catch (InterruptedException e) { + // ignore + } + thread = null; + } + return this; + } + + @Override + public void run() { + start = System.nanoTime(); + while (!stop) { + try { + tick(); + } catch (Throwable t) { + break; + } + } + time = System.nanoTime() - start; + } + + private void tick() { + if (interval > 0) { + if (paused) { + return; + } + try { + Thread.sleep(interval); + } catch (Exception e) { + // ignore + } + } + + List list; + if (pid != 0) { + list = readRunnableStackTraces(pid); + } else { + list = getRunnableStackTraces(); + } + threadDumps++; + processList(list); + } + + private void processList(List list) { + for (Object[] dump : list) { + if (startsWithAny(dump[0].toString(), ignoreThreads)) { + continue; + } + StringBuilder buff = new StringBuilder(); + // simple recursive calls are ignored + String last = null; + boolean packageCounts = false; + for (int j = 0, i = 0; i < dump.length && j < depth; i++) { + String el = dump[i].toString(); + if (!el.equals(last) && !startsWithAny(el, ignoreLines)) { + last = el; + buff.append("at ").append(el).append(LINE_SEPARATOR); + if (!packageCounts && !startsWithAny(el, ignorePackages)) { + packageCounts = true; + int index = 0; + for (; index < el.length(); index++) { + char c = el.charAt(index); + if (c == '(' || Character.isUpperCase(c)) { + break; + } + } + if (index > 0 && el.charAt(index - 1) == '.') { + index--; + } + if (sumClasses) { + int m = el.indexOf('.', index + 1); + index = m >= 0 ? m : index; + } + if (sumMethods) { + int m = el.indexOf('(', index + 1); + index = m >= 0 ? 
m : index; + } + String groupName = el.substring(0, index); + increment(summary, groupName, 0); + } + j++; + } + } + if (buff.length() > 0) { + minCount = increment(counts, buff.toString().trim(), minCount); + total++; + } + } + } + + private static boolean startsWithAny(String s, String[] prefixes) { + for (String p : prefixes) { + if (p.length() > 0 && s.startsWith(p)) { + return true; + } + } + return false; + } + + private static int increment(HashMap map, String trace, + int minCount) { + Integer oldCount = map.get(trace); + if (oldCount == null) { + map.put(trace, 1); + } else { + map.put(trace, oldCount + 1); + } + while (map.size() > MAX_ELEMENTS) { + for (Iterator> ei = + map.entrySet().iterator(); ei.hasNext();) { + Map.Entry e = ei.next(); + if (e.getValue() <= minCount) { + ei.remove(); + } + } + if (map.size() > MAX_ELEMENTS) { + minCount++; + } + } + return minCount; + } + + /** + * Get the top stack traces. + * + * @param count the maximum number of stack traces + * @return the stack traces. 
+ */ + public String getTop(int count) { + stopCollecting(); + return getTopTraces(count); + } + + private String getTopTraces(int count) { + StringBuilder buff = new StringBuilder(); + buff.append("Profiler: top ").append(count).append(" stack trace(s) of "); + if (time > 0) { + buff.append(" of ").append(TimeUnit.NANOSECONDS.toMillis(time)).append(" ms"); + } + if (threadDumps > 0) { + buff.append(" of ").append(threadDumps).append(" thread dumps"); + } + buff.append(":").append(LINE_SEPARATOR); + if (counts.size() == 0) { + buff.append("(none)").append(LINE_SEPARATOR); + } + HashMap copy = new HashMap<>(counts); + appendTop(buff, copy, count, total, false); + buff.append("summary:").append(LINE_SEPARATOR); + copy = new HashMap<>(summary); + appendTop(buff, copy, count, total, true); + buff.append('.'); + return buff.toString(); + } + + private static void appendTop(StringBuilder buff, + HashMap map, int count, int total, boolean table) { + for (int x = 0, min = 0;;) { + int highest = 0; + Map.Entry best = null; + for (Map.Entry el : map.entrySet()) { + if (el.getValue() > highest) { + best = el; + highest = el.getValue(); + } + } + if (best == null) { + break; + } + map.remove(best.getKey()); + if (++x >= count) { + if (best.getValue() < min) { + break; + } + min = best.getValue(); + } + int c = best.getValue(); + int percent = 100 * c / Math.max(total, 1); + if (table) { + if (percent > 1) { + buff.append(percent). + append("%: ").append(best.getKey()). + append(LINE_SEPARATOR); + } + } else { + buff.append(c).append('/').append(total).append(" ("). + append(percent). + append("%):").append(LINE_SEPARATOR). + append(best.getKey()). 
+ append(LINE_SEPARATOR); + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/ScriptReader.java b/modules/h2/src/main/java/org/h2/util/ScriptReader.java new file mode 100644 index 0000000000000..26ac8f9192266 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ScriptReader.java @@ -0,0 +1,324 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.Closeable; +import java.io.IOException; +import java.io.Reader; +import java.util.Arrays; +import org.h2.engine.Constants; +import org.h2.message.DbException; + +/** + * This class can split SQL scripts to single SQL statements. + * Each SQL statement ends with the character ';', however it is ignored + * in comments and quotes. + */ +public class ScriptReader implements Closeable { + + private final Reader reader; + private char[] buffer; + + /** + * The position in the buffer of the next char to be read + */ + private int bufferPos; + + /** + * The position in the buffer of the statement start + */ + private int bufferStart = -1; + + /** + * The position in the buffer of the last available char + */ + private int bufferEnd; + + /** + * True if we have read past the end of file + */ + private boolean endOfFile; + + /** + * True if we are inside a comment + */ + private boolean insideRemark; + + /** + * Only valid if insideRemark is true. True if we are inside a block + * comment, false if we are inside a line comment + */ + private boolean blockRemark; + + /** + * True if comments should be skipped completely by this reader. 
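Stripped of the thread-dump and jstack plumbing, the Profiler's core is simple: count identical samples, then report the most frequent. A sketch with synthetic string "samples" standing in for live stack traces (class and method names here are illustrative, not H2 API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Profiler's counting core: tally identical samples and pick
// the hottest one, mirroring increment() and the selection loop in appendTop().
public class SamplingSketch {
    public static Map<String, Integer> count(String[] samples) {
        Map<String, Integer> counts = new HashMap<>();
        for (String s : samples) {
            counts.merge(s, 1, Integer::sum); // like increment() above
        }
        return counts;
    }

    public static String hottest(Map<String, Integer> counts) {
        String best = null;
        int highest = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > highest) {     // same selection as appendTop()
                best = e.getKey();
                highest = e.getValue();
            }
        }
        return best;
    }
}
```

The real class adds the parts this sketch omits: sampling on a daemon thread, filtering frames by ignore-prefix lists, and pruning the map once it exceeds MAX_ELEMENTS.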
+ */ + private boolean skipRemarks; + + /** + * The position in buffer of start of comment + */ + private int remarkStart; + + /** + * Create a new SQL script reader from the given reader + * + * @param reader the reader + */ + public ScriptReader(Reader reader) { + this.reader = reader; + buffer = new char[Constants.IO_BUFFER_SIZE * 2]; + } + + /** + * Close the underlying reader. + */ + @Override + public void close() { + try { + reader.close(); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + /** + * Read a statement from the reader. This method returns null if the end has + * been reached. + * + * @return the SQL statement or null + */ + public String readStatement() { + if (endOfFile) { + return null; + } + try { + return readStatementLoop(); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + private String readStatementLoop() throws IOException { + bufferStart = bufferPos; + int c = read(); + while (true) { + if (c < 0) { + endOfFile = true; + if (bufferPos - 1 == bufferStart) { + return null; + } + break; + } else if (c == ';') { + break; + } + switch (c) { + case '$': { + c = read(); + if (c == '$' && (bufferPos - bufferStart < 3 || buffer[bufferPos - 3] <= ' ')) { + // dollar quoted string + while (true) { + c = read(); + if (c < 0) { + break; + } + if (c == '$') { + c = read(); + if (c < 0) { + break; + } + if (c == '$') { + break; + } + } + } + c = read(); + } + break; + } + case '\'': + while (true) { + c = read(); + if (c < 0) { + break; + } + if (c == '\'') { + break; + } + } + c = read(); + break; + case '"': + while (true) { + c = read(); + if (c < 0) { + break; + } + if (c == '\"') { + break; + } + } + c = read(); + break; + case '/': { + c = read(); + if (c == '*') { + // block comment + startRemark(true); + while (true) { + c = read(); + if (c < 0) { + break; + } + if (c == '*') { + c = read(); + if (c < 0) { + clearRemark(); + break; + } + if (c == '/') { + 
endRemark(); + break; + } + } + } + c = read(); + } else if (c == '/') { + // single line comment + startRemark(false); + while (true) { + c = read(); + if (c < 0) { + clearRemark(); + break; + } + if (c == '\r' || c == '\n') { + endRemark(); + break; + } + } + c = read(); + } + break; + } + case '-': { + c = read(); + if (c == '-') { + // single line comment + startRemark(false); + while (true) { + c = read(); + if (c < 0) { + clearRemark(); + break; + } + if (c == '\r' || c == '\n') { + endRemark(); + break; + } + } + c = read(); + } + break; + } + default: { + c = read(); + } + } + } + return new String(buffer, bufferStart, bufferPos - 1 - bufferStart); + } + + private void startRemark(boolean block) { + blockRemark = block; + remarkStart = bufferPos - 2; + insideRemark = true; + } + + private void endRemark() { + clearRemark(); + insideRemark = false; + } + + private void clearRemark() { + if (skipRemarks) { + Arrays.fill(buffer, remarkStart, bufferPos, ' '); + } + } + + private int read() throws IOException { + if (bufferPos >= bufferEnd) { + return readBuffer(); + } + return buffer[bufferPos++]; + } + + private int readBuffer() throws IOException { + if (endOfFile) { + return -1; + } + int keep = bufferPos - bufferStart; + if (keep > 0) { + char[] src = buffer; + if (keep + Constants.IO_BUFFER_SIZE > src.length) { + // protect against NegativeArraySizeException + if (src.length >= Integer.MAX_VALUE / 2) { + throw new IOException("Error in parsing script, " + + "statement size exceeds 1G, " + + "first 80 characters of statement looks like: " + + new String(buffer, bufferStart, 80)); + } + buffer = new char[src.length * 2]; + } + System.arraycopy(src, bufferStart, buffer, 0, keep); + } + remarkStart -= bufferStart; + bufferStart = 0; + bufferPos = keep; + int len = reader.read(buffer, keep, Constants.IO_BUFFER_SIZE); + if (len == -1) { + // ensure bufferPos > bufferEnd + bufferEnd = -1024; + endOfFile = true; + // ensure the right number of characters are read 
+ // in case the input buffer is still used + bufferPos++; + return -1; + } + bufferEnd = keep + len; + return buffer[bufferPos++]; + } + + /** + * Check if this is the last statement, and if the single line or block + * comment is not finished yet. + * + * @return true if the current position is inside a remark + */ + public boolean isInsideRemark() { + return insideRemark; + } + + /** + * If currently inside a remark, this method tells if it is a block comment + * (true) or single line comment (false) + * + * @return true if inside a block comment + */ + public boolean isBlockRemark() { + return blockRemark; + } + + /** + * If comments should be skipped completely by this reader. + * + * @param skipRemarks true if comments should be skipped + */ + public void setSkipRemarks(boolean skipRemarks) { + this.skipRemarks = skipRemarks; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/SmallLRUCache.java b/modules/h2/src/main/java/org/h2/util/SmallLRUCache.java new file mode 100644 index 0000000000000..8a15a82541d57 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/SmallLRUCache.java @@ -0,0 +1,48 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.LinkedHashMap; +import java.util.Map; + +/** + * This class implements a small LRU object cache. + * + * @param the key + * @param the value + */ +public class SmallLRUCache extends LinkedHashMap { + + private static final long serialVersionUID = 1L; + private int size; + + private SmallLRUCache(int size) { + super(size, (float) 0.75, true); + this.size = size; + } + + /** + * Create a new object with all elements of the given collection. 
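The remark handling in ScriptReader above blanks comments out with spaces (see clearRemark) rather than deleting them, so offsets into the statement buffer stay valid. A standalone, hypothetical sketch of that idea — class and method names are mine, not H2's, and this ignores the incremental-read machinery:

```java
// Hypothetical sketch of ScriptReader's remark handling: comments are
// overwritten with spaces so character positions in the statement survive.
public class RemarkStripper {
    public static String strip(String sql) {
        StringBuilder out = new StringBuilder(sql);
        int i = 0;
        while (i < sql.length() - 1) {
            char c = sql.charAt(i), n = sql.charAt(i + 1);
            if (c == '-' && n == '-') {
                // single line comment: blank until end of line
                while (i < sql.length() && sql.charAt(i) != '\n') {
                    out.setCharAt(i++, ' ');
                }
            } else if (c == '/' && n == '*') {
                // block comment: blank until the closing delimiter
                int end = sql.indexOf("*/", i + 2);
                end = end < 0 ? sql.length() : end + 2;
                while (i < end) {
                    out.setCharAt(i++, ' ');
                }
            } else if (c == '\'') {
                // skip string literals so a quoted "--" is preserved
                i = sql.indexOf('\'', i + 1);
                if (i < 0) {
                    break;
                }
                i++;
            } else {
                i++;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(strip("SELECT 1 -- comment\n/* block */ FROM DUAL"));
    }
}
```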
+ * + * @param the key type + * @param the value type + * @param size the number of elements + * @return the object + */ + public static SmallLRUCache newInstance(int size) { + return new SmallLRUCache<>(size); + } + + public void setMaxSize(int size) { + this.size = size; + } + + @Override + protected boolean removeEldestEntry(Map.Entry eldest) { + return size() > size; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/SmallMap.java b/modules/h2/src/main/java/org/h2/util/SmallMap.java new file mode 100644 index 0000000000000..ee82f772fcf21 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/SmallMap.java @@ -0,0 +1,94 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.HashMap; +import java.util.Iterator; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * A simple hash table with an optimization for the last recently used object. + */ +public class SmallMap { + + private final HashMap map = new HashMap<>(); + private Object cache; + private int cacheId; + private int lastId; + private final int maxElements; + + /** + * Create a map with the given maximum number of entries. + * + * @param maxElements the maximum number of entries + */ + public SmallMap(int maxElements) { + this.maxElements = maxElements; + } + + /** + * Add an object to the map. If the size of the map is larger than twice the + * maximum size, objects with a low id are removed. 
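SmallLRUCache is the standard LinkedHashMap eviction idiom: access-order iteration plus an overridden removeEldestEntry. A self-contained sketch of the same pattern (the class name LruSketch is mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the SmallLRUCache pattern: a LinkedHashMap in
// access order evicts the least recently used entry once
// removeEldestEntry reports the map is over capacity.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruSketch(int maxSize) {
        // 0.75f load factor, accessOrder = true: get() refreshes position
        super(maxSize, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }

    public static void main(String[] args) {
        LruSketch<Integer, String> cache = new LruSketch<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // touch 1 so it becomes most recently used
        cache.put(3, "c"); // evicts 2, the least recently used entry
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}
```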
+ * + * @param id the object id + * @param o the object + * @return the id + */ + public int addObject(int id, Object o) { + if (map.size() > maxElements * 2) { + Iterator it = map.keySet().iterator(); + while (it.hasNext()) { + Integer k = it.next(); + if (k.intValue() + maxElements < lastId) { + it.remove(); + } + } + } + if (id > lastId) { + lastId = id; + } + map.put(id, o); + cacheId = id; + cache = o; + return id; + } + + /** + * Remove an object from the map. + * + * @param id the id of the object to remove + */ + public void freeObject(int id) { + if (cacheId == id) { + cacheId = -1; + cache = null; + } + map.remove(id); + } + + /** + * Get an object from the map if it is stored. + * + * @param id the id of the object + * @param ifAvailable only return it if available, otherwise return null + * @return the object or null + * @throws DbException if isAvailable is false and the object has not been + * found + */ + public Object getObject(int id, boolean ifAvailable) { + if (id == cacheId) { + return cache; + } + Object obj = map.get(id); + if (obj == null && !ifAvailable) { + throw DbException.get(ErrorCode.OBJECT_CLOSED); + } + return obj; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/SoftHashMap.java b/modules/h2/src/main/java/org/h2/util/SoftHashMap.java new file mode 100644 index 0000000000000..8a3339aa35714 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/SoftHashMap.java @@ -0,0 +1,108 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.lang.ref.Reference; +import java.lang.ref.ReferenceQueue; +import java.lang.ref.SoftReference; +import java.util.AbstractMap; +import java.util.HashMap; +import java.util.Map; +import java.util.Set; + +/** + * Map which stores items using SoftReference. Items can be garbage collected + * and removed. 
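SmallMap above evicts by id distance rather than by access order: once the map exceeds twice its capacity, any id more than maxElements behind the newest id is dropped. A minimal standalone sketch of that rule, with the last-recently-used single-slot cache omitted (names are mine):

```java
import java.util.HashMap;
import java.util.Iterator;

// Sketch of SmallMap's eviction rule: when the map holds more than
// 2 * maxElements entries, every id with id + maxElements < lastId
// (an id that is "old" relative to the newest one) is removed.
public class IdMapSketch {
    private final HashMap<Integer, Object> map = new HashMap<>();
    private final int maxElements;
    private int lastId;

    public IdMapSketch(int maxElements) {
        this.maxElements = maxElements;
    }

    public int addObject(int id, Object o) {
        if (map.size() > maxElements * 2) {
            Iterator<Integer> it = map.keySet().iterator();
            while (it.hasNext()) {
                if (it.next() + maxElements < lastId) {
                    it.remove();
                }
            }
        }
        if (id > lastId) {
            lastId = id;
        }
        map.put(id, o);
        return id;
    }

    public Object getObject(int id) {
        return map.get(id);
    }

    public int size() {
        return map.size();
    }
}
```

With maxElements = 2, adding ids 1 through 6 triggers one eviction pass when the sixth object arrives, discarding ids 1 to 3.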
It is not a general purpose cache, as it doesn't implement some + * methods, and others not according to the map definition, to improve speed. + * + * @param the key type + * @param the value type + */ +public class SoftHashMap extends AbstractMap { + + private final Map> map; + private final ReferenceQueue queue = new ReferenceQueue<>(); + + public SoftHashMap() { + map = new HashMap<>(); + } + + @SuppressWarnings("unchecked") + private void processQueue() { + while (true) { + Reference o = queue.poll(); + if (o == null) { + return; + } + SoftValue k = (SoftValue) o; + Object key = k.key; + map.remove(key); + } + } + + @Override + public V get(Object key) { + processQueue(); + SoftReference o = map.get(key); + if (o == null) { + return null; + } + return o.get(); + } + + /** + * Store the object. The return value of this method is null or a + * SoftReference. + * + * @param key the key + * @param value the value + * @return null or the old object. + */ + @Override + public V put(K key, V value) { + processQueue(); + SoftValue old = map.put(key, new SoftValue<>(value, queue, key)); + return old == null ? null : old.get(); + } + + /** + * Remove an object. + * + * @param key the key + * @return null or the old object + */ + @Override + public V remove(Object key) { + processQueue(); + SoftReference ref = map.remove(key); + return ref == null ? null : ref.get(); + } + + @Override + public void clear() { + processQueue(); + map.clear(); + } + + @Override + public Set> entrySet() { + throw new UnsupportedOperationException(); + } + + /** + * A soft reference that has a hard reference to the key. 
+ */ + private static class SoftValue extends SoftReference { + final Object key; + + public SoftValue(T ref, ReferenceQueue q, Object key) { + super(ref, q); + this.key = key; + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/SortedProperties.java b/modules/h2/src/main/java/org/h2/util/SortedProperties.java new file mode 100644 index 0000000000000..b8235406f66a3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/SortedProperties.java @@ -0,0 +1,158 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.BufferedWriter; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.io.OutputStreamWriter; +import java.io.PrintWriter; +import java.io.Writer; +import java.nio.charset.StandardCharsets; +import java.util.Collections; +import java.util.Enumeration; +import java.util.Map.Entry; +import java.util.Properties; +import java.util.TreeMap; +import java.util.Vector; +import org.h2.store.fs.FileUtils; + +/** + * Sorted properties file. + * This implementation requires that store() internally calls keys(). + */ +public class SortedProperties extends Properties { + + private static final long serialVersionUID = 1L; + + @Override + public synchronized Enumeration keys() { + Vector v = new Vector<>(); + for (Object o : keySet()) { + v.add(o.toString()); + } + Collections.sort(v); + return new Vector(v).elements(); + } + + /** + * Get a boolean property value from a properties object. 
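SoftHashMap relies on each SoftReference remembering its own key, so stale entries can be purged by draining the ReferenceQueue before each operation. A self-contained sketch of the pattern — names are mine, not H2's, and only get/put are shown:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Sketch of the SoftHashMap pattern: values are held through
// SoftReferences that also keep a hard reference to their key, so
// that when the collector clears a value, the stale map entry can
// be removed by polling the ReferenceQueue.
public class SoftMapSketch<K, V> {
    // keeps a hard reference to the key, as in H2's SoftValue
    private static class Entry<V> extends SoftReference<V> {
        final Object key;

        Entry(V value, ReferenceQueue<V> q, Object key) {
            super(value, q);
            this.key = key;
        }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    private void processQueue() {
        // drop entries whose values were garbage collected
        Reference<? extends V> r;
        while ((r = queue.poll()) != null) {
            map.remove(((Entry<?>) r).key);
        }
    }

    public V get(K key) {
        processQueue();
        Entry<V> e = map.get(key);
        return e == null ? null : e.get();
    }

    public V put(K key, V value) {
        processQueue();
        Entry<V> old = map.put(key, new Entry<>(value, queue, key));
        return old == null ? null : old.get();
    }
}
```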
+ * + * @param prop the properties object + * @param key the key + * @param def the default value + * @return the value if set, or the default value if not + */ + public static boolean getBooleanProperty(Properties prop, String key, + boolean def) { + try { + return Utils.parseBoolean(prop.getProperty(key, null), def, true); + } catch (IllegalArgumentException e) { + e.printStackTrace(); + return def; + } + } + + /** + * Get an int property value from a properties object. + * + * @param prop the properties object + * @param key the key + * @param def the default value + * @return the value if set, or the default value if not + */ + public static int getIntProperty(Properties prop, String key, int def) { + String value = prop.getProperty(key, "" + def); + try { + return Integer.decode(value); + } catch (Exception e) { + e.printStackTrace(); + return def; + } + } + + /** + * Load a properties object from a file. + * + * @param fileName the name of the properties file + * @return the properties object + */ + public static synchronized SortedProperties loadProperties(String fileName) + throws IOException { + SortedProperties prop = new SortedProperties(); + if (FileUtils.exists(fileName)) { + try (InputStream in = FileUtils.newInputStream(fileName)) { + prop.load(in); + } + } + return prop; + } + + /** + * Store a properties file. The header and the date is not written. 
+ * + * @param fileName the target file name + */ + public synchronized void store(String fileName) throws IOException { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + store(out, null); + ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray()); + InputStreamReader reader = new InputStreamReader(in, StandardCharsets.ISO_8859_1); + LineNumberReader r = new LineNumberReader(reader); + Writer w; + try { + w = new OutputStreamWriter(FileUtils.newOutputStream(fileName, false)); + } catch (Exception e) { + throw new IOException(e.toString(), e); + } + try (PrintWriter writer = new PrintWriter(new BufferedWriter(w))) { + while (true) { + String line = r.readLine(); + if (line == null) { + break; + } + if (!line.startsWith("#")) { + writer.print(line + "\n"); + } + } + } + } + + /** + * Convert the map to a list of line in the form key=value. + * + * @return the lines + */ + public synchronized String toLines() { + StringBuilder buff = new StringBuilder(); + for (Entry e : new TreeMap<>(this).entrySet()) { + buff.append(e.getKey()).append('=').append(e.getValue()).append('\n'); + } + return buff.toString(); + } + + /** + * Convert a String to a map. + * + * @param s the string + * @return the map + */ + public static SortedProperties fromLines(String s) { + SortedProperties p = new SortedProperties(); + for (String line : StringUtils.arraySplit(s, '\n', true)) { + int idx = line.indexOf('='); + if (idx > 0) { + p.put(line.substring(0, idx), line.substring(idx + 1)); + } + } + return p; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/SourceCompiler.java b/modules/h2/src/main/java/org/h2/util/SourceCompiler.java new file mode 100644 index 0000000000000..0876cca45b892 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/SourceCompiler.java @@ -0,0 +1,590 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
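SortedProperties depends on Properties.store() obtaining its iteration order from keys(); as the class comment notes, this is an implementation detail of java.util.Properties, and JDK versions differ in whether store() still goes through keys(). A minimal sketch of just the sorted-keys override (class name is mine):

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.Properties;
import java.util.Vector;

// Sketch of the SortedProperties trick: overriding keys() to return a
// sorted enumeration makes any caller that iterates via keys() see the
// entries in deterministic, alphabetical order.
public class SortedPropsSketch extends Properties {
    @Override
    public synchronized Enumeration<Object> keys() {
        Vector<String> v = new Vector<>();
        for (Object o : keySet()) {
            v.add(o.toString());
        }
        Collections.sort(v);
        return new Vector<Object>(v).elements();
    }
}
```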
+ * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.BufferedReader; +import java.io.ByteArrayOutputStream; +import java.io.DataInputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.PrintStream; +import java.io.StringReader; +import java.io.StringWriter; +import java.io.Writer; +import java.lang.reflect.Array; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.net.URI; +import java.nio.charset.StandardCharsets; +import java.security.SecureClassLoader; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Map; + +import javax.script.Compilable; +import javax.script.CompiledScript; +import javax.script.ScriptEngineManager; +import javax.script.ScriptException; +import javax.tools.FileObject; +import javax.tools.ForwardingJavaFileManager; +import javax.tools.JavaCompiler; +import javax.tools.JavaFileManager; +import javax.tools.JavaFileObject; +import javax.tools.JavaFileObject.Kind; +import javax.tools.SimpleJavaFileObject; +import javax.tools.StandardJavaFileManager; +import javax.tools.ToolProvider; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; + +/** + * This class allows to convert source code to a class. It uses one class loader + * per class. + */ +public class SourceCompiler { + + /** + * The "com.sun.tools.javac.Main" (if available). + */ + static final JavaCompiler JAVA_COMPILER; + + private static final Class JAVAC_SUN; + + private static final String COMPILE_DIR = + Utils.getProperty("java.io.tmpdir", "."); + + /** + * The class name to source code map. + */ + final HashMap sources = new HashMap<>(); + + /** + * The class name to byte code map. + */ + final HashMap> compiled = new HashMap<>(); + + /** + * The class name to compiled scripts map. 
+ */ + final Map compiledScripts = new HashMap<>(); + + /** + * Whether to use the ToolProvider.getSystemJavaCompiler(). + */ + boolean useJavaSystemCompiler = SysProperties.JAVA_SYSTEM_COMPILER; + + static { + JavaCompiler c; + try { + c = ToolProvider.getSystemJavaCompiler(); + } catch (Exception e) { + // ignore + c = null; + } + JAVA_COMPILER = c; + Class clazz; + try { + clazz = Class.forName("com.sun.tools.javac.Main"); + } catch (Exception e) { + clazz = null; + } + JAVAC_SUN = clazz; + } + + /** + * Set the source code for the specified class. + * This will reset all compiled classes. + * + * @param className the class name + * @param source the source code + */ + public void setSource(String className, String source) { + sources.put(className, source); + compiled.clear(); + } + + /** + * Enable or disable the usage of the Java system compiler. + * + * @param enabled true to enable + */ + public void setJavaSystemCompiler(boolean enabled) { + this.useJavaSystemCompiler = enabled; + } + + /** + * Get the class object for the given name. 
+ * + * @param packageAndClassName the class name + * @return the class + */ + public Class getClass(String packageAndClassName) + throws ClassNotFoundException { + + Class compiledClass = compiled.get(packageAndClassName); + if (compiledClass != null) { + return compiledClass; + } + String source = sources.get(packageAndClassName); + if (isGroovySource(source)) { + Class clazz = GroovyCompiler.parseClass(source, packageAndClassName); + compiled.put(packageAndClassName, clazz); + return clazz; + } + + ClassLoader classLoader = new ClassLoader(getClass().getClassLoader()) { + + @Override + public Class findClass(String name) throws ClassNotFoundException { + Class classInstance = compiled.get(name); + if (classInstance == null) { + String source = sources.get(name); + String packageName = null; + int idx = name.lastIndexOf('.'); + String className; + if (idx >= 0) { + packageName = name.substring(0, idx); + className = name.substring(idx + 1); + } else { + className = name; + } + String s = getCompleteSourceCode(packageName, className, source); + if (JAVA_COMPILER != null && useJavaSystemCompiler) { + classInstance = javaxToolsJavac(packageName, className, s); + } else { + byte[] data = javacCompile(packageName, className, s); + if (data == null) { + classInstance = findSystemClass(name); + } else { + classInstance = defineClass(name, data, 0, data.length); + } + } + compiled.put(name, classInstance); + } + return classInstance; + } + }; + return classLoader.loadClass(packageAndClassName); + } + + private static boolean isGroovySource(String source) { + return source.startsWith("//groovy") || source.startsWith("@groovy"); + } + + private static boolean isJavascriptSource(String source) { + return source.startsWith("//javascript"); + } + + private static boolean isRubySource(String source) { + return source.startsWith("#ruby"); + } + + /** + * Whether the passed source can be compiled using {@link javax.script.ScriptEngineManager}. 
+ * + * @param source the source to test. + * @return true if {@link #getCompiledScript(String)} can be called. + */ + public static boolean isJavaxScriptSource(String source) { + return isJavascriptSource(source) || isRubySource(source); + } + + /** + * Get the compiled script. + * + * @param packageAndClassName the package and class name + * @return the compiled script + */ + public CompiledScript getCompiledScript(String packageAndClassName) throws ScriptException { + CompiledScript compiledScript = compiledScripts.get(packageAndClassName); + if (compiledScript == null) { + String source = sources.get(packageAndClassName); + final String lang; + if (isJavascriptSource(source)) { + lang = "javascript"; + } else if (isRubySource(source)) { + lang = "ruby"; + } else { + throw new IllegalStateException("Unknown language for " + source); + } + + final Compilable jsEngine = (Compilable) new ScriptEngineManager().getEngineByName(lang); + compiledScript = jsEngine.compile(source); + compiledScripts.put(packageAndClassName, compiledScript); + } + return compiledScript; + } + + /** + * Get the first public static method of the given class. + * + * @param className the class name + * @return the method name + */ + public Method getMethod(String className) throws ClassNotFoundException { + Class clazz = getClass(className); + Method[] methods = clazz.getDeclaredMethods(); + for (Method m : methods) { + int modifiers = m.getModifiers(); + if (Modifier.isPublic(modifiers) && Modifier.isStatic(modifiers)) { + String name = m.getName(); + if (!name.startsWith("_") && !m.getName().equals("main")) { + return m; + } + } + } + return null; + } + + /** + * Compile the given class. This method tries to use the class + * "com.sun.tools.javac.Main" if available. If not, it tries to run "javac" + * in a separate process. 
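The compile-to-disk path writes the generated source to a directory, invokes a compiler over it, and reads the resulting .class bytes back. A simplified, hypothetical sketch of the same round trip using only the public ToolProvider API — names and the temp-directory layout are mine, and the fallback mirrors the JRE-without-compiler case:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Sketch of the source -> .class round trip: write the source to a
// temp directory, run the platform compiler, read the class file back.
public class CompileSketch {
    public static byte[] compile(String className, String source) {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        if (javac == null) {
            return null; // running on a JRE, no compiler available
        }
        try {
            Path dir = Files.createTempDirectory("compile-sketch");
            Path javaFile = dir.resolve(className + ".java");
            Files.write(javaFile, source.getBytes(StandardCharsets.UTF_8));
            int rc = javac.run(null, null, null,
                    "-d", dir.toString(), javaFile.toString());
            if (rc != 0) {
                throw new IOException("compilation failed");
            }
            return Files.readAllBytes(dir.resolve(className + ".class"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] code = compile("Hello", "public class Hello { }");
        System.out.println(code == null ? "no JDK compiler" : code.length + " bytes");
    }
}
```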
+ * + * @param packageName the package name + * @param className the class name + * @param source the source code + * @return the class file + */ + byte[] javacCompile(String packageName, String className, String source) { + File dir = new File(COMPILE_DIR); + if (packageName != null) { + dir = new File(dir, packageName.replace('.', '/')); + FileUtils.createDirectories(dir.getAbsolutePath()); + } + File javaFile = new File(dir, className + ".java"); + File classFile = new File(dir, className + ".class"); + try { + OutputStream f = FileUtils.newOutputStream(javaFile.getAbsolutePath(), false); + Writer out = IOUtils.getBufferedWriter(f); + classFile.delete(); + out.write(source); + out.close(); + if (JAVAC_SUN != null) { + javacSun(javaFile); + } else { + javacProcess(javaFile); + } + byte[] data = new byte[(int) classFile.length()]; + DataInputStream in = new DataInputStream(new FileInputStream(classFile)); + in.readFully(data); + in.close(); + return data; + } catch (Exception e) { + throw DbException.convert(e); + } finally { + javaFile.delete(); + classFile.delete(); + } + } + + /** + * Get the complete source code (including package name, imports, and so + * on). 
+ * + * @param packageName the package name + * @param className the class name + * @param source the (possibly shortened) source code + * @return the full source code + */ + static String getCompleteSourceCode(String packageName, String className, + String source) { + if (source.startsWith("package ")) { + return source; + } + StringBuilder buff = new StringBuilder(); + if (packageName != null) { + buff.append("package ").append(packageName).append(";\n"); + } + int endImport = source.indexOf("@CODE"); + String importCode = + "import java.util.*;\n" + + "import java.math.*;\n" + + "import java.sql.*;\n"; + if (endImport >= 0) { + importCode = source.substring(0, endImport); + source = source.substring("@CODE".length() + endImport); + } + buff.append(importCode); + buff.append("public class ").append(className).append( + " {\n" + + " public static ").append(source).append("\n" + + "}\n"); + return buff.toString(); + } + + /** + * Compile using the standard java compiler. + * + * @param packageName the package name + * @param className the class name + * @param source the source code + * @return the class + */ + Class javaxToolsJavac(String packageName, String className, String source) { + String fullClassName = packageName + "." 
+ className; + StringWriter writer = new StringWriter(); + try (JavaFileManager fileManager = new + ClassFileManager(JAVA_COMPILER + .getStandardFileManager(null, null, null))) { + ArrayList compilationUnits = new ArrayList<>(); + compilationUnits.add(new StringJavaFileObject(fullClassName, source)); + // cannot concurrently compile + synchronized (JAVA_COMPILER) { + JAVA_COMPILER.getTask(writer, fileManager, null, null, + null, compilationUnits).call(); + } + String output = writer.toString(); + handleSyntaxError(output); + return fileManager.getClassLoader(null).loadClass(fullClassName); + } catch (ClassNotFoundException | IOException e) { + throw DbException.convert(e); + } + } + + private static void javacProcess(File javaFile) { + exec("javac", + "-sourcepath", COMPILE_DIR, + "-cp", System.getProperty("surefire.test.class.path", "target/classes:target/test-classes"), + "-d", COMPILE_DIR, + "-encoding", "UTF-8", + javaFile.getAbsolutePath()); + } + + private static int exec(String... args) { + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + try { + ProcessBuilder builder = new ProcessBuilder(); + // The javac executable allows some of it's flags + // to be smuggled in via environment variables. + // But if it sees those flags, it will write out a message + // to stderr, which messes up our parsing of the output. 
+ builder.environment().remove("JAVA_TOOL_OPTIONS"); + builder.command(args); + + Process p = builder.start(); + copyInThread(p.getInputStream(), buff); + copyInThread(p.getErrorStream(), buff); + p.waitFor(); + String output = new String(buff.toByteArray(), StandardCharsets.UTF_8); + handleSyntaxError(output); + return p.exitValue(); + } catch (Exception e) { + throw DbException.convert(e); + } + } + + private static void copyInThread(final InputStream in, final OutputStream out) { + new Task() { + @Override + public void call() throws IOException { + IOUtils.copy(in, out); + } + }.execute(); + } + + private static synchronized void javacSun(File javaFile) { + PrintStream old = System.err; + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + PrintStream temp = new PrintStream(buff); + try { + System.setErr(temp); + Method compile; + compile = JAVAC_SUN.getMethod("compile", String[].class); + Object javac = JAVAC_SUN.newInstance(); + compile.invoke(javac, (Object) new String[] { + "-sourcepath", COMPILE_DIR, + // "-Xlint:unchecked", + "-cp", System.getProperty("surefire.test.class.path", "target/classes:target/test-classes"), + "-d", COMPILE_DIR, + "-encoding", "UTF-8", + javaFile.getAbsolutePath() }); + String output = new String(buff.toByteArray(), StandardCharsets.UTF_8); + handleSyntaxError(output); + } catch (Exception e) { + throw DbException.convert(e); + } finally { + System.setErr(old); + } + } + + private static void handleSyntaxError(String output) { + boolean syntaxError = false; + final BufferedReader reader = new BufferedReader(new StringReader(output)); + try { + for (String line; (line = reader.readLine()) != null;) { + if (line.endsWith("warning")) { + // ignore summary line + } else if (line.startsWith("Note:") + || line.startsWith("warning:")) { + // just a warning (e.g. 
unchecked or unsafe operations) + } else { + syntaxError = true; + break; + } + } + } catch (IOException ignored) { + // exception ignored + } + + if (syntaxError) { + output = StringUtils.replaceAll(output, COMPILE_DIR, ""); + throw DbException.get(ErrorCode.SYNTAX_ERROR_1, output); + } + } + + + /** + * Access the Groovy compiler using reflection, so that we do not gain a + * compile-time dependency unnecessarily. + */ + private static final class GroovyCompiler { + + private static final Object LOADER; + private static final Throwable INIT_FAIL_EXCEPTION; + + static { + Object loader = null; + Throwable initFailException = null; + try { + // Create an instance of ImportCustomizer + Class importCustomizerClass = Class.forName( + "org.codehaus.groovy.control.customizers.ImportCustomizer"); + Object importCustomizer = Utils.newInstance( + "org.codehaus.groovy.control.customizers.ImportCustomizer"); + // Call the method ImportCustomizer.addImports(String[]) + String[] importsArray = { + "java.sql.Connection", + "java.sql.Types", + "java.sql.ResultSet", + "groovy.sql.Sql", + "org.h2.tools.SimpleResultSet" + }; + Utils.callMethod(importCustomizer, "addImports", new Object[] { importsArray }); + + // Call the method + // CompilerConfiguration.addCompilationCustomizers( + // ImportCustomizer...) 
+ Object importCustomizerArray = Array.newInstance(importCustomizerClass, 1); + Array.set(importCustomizerArray, 0, importCustomizer); + Object configuration = Utils.newInstance( + "org.codehaus.groovy.control.CompilerConfiguration"); + Utils.callMethod(configuration, + "addCompilationCustomizers", importCustomizerArray); + + ClassLoader parent = GroovyCompiler.class.getClassLoader(); + loader = Utils.newInstance( + "groovy.lang.GroovyClassLoader", parent, configuration); + } catch (Exception ex) { + initFailException = ex; + } + LOADER = loader; + INIT_FAIL_EXCEPTION = initFailException; + } + + public static Class parseClass(String source, + String packageAndClassName) { + if (LOADER == null) { + throw new RuntimeException( + "Compile fail: no Groovy jar in the classpath", INIT_FAIL_EXCEPTION); + } + try { + Object codeSource = Utils.newInstance("groovy.lang.GroovyCodeSource", + source, packageAndClassName + ".groovy", "UTF-8"); + Utils.callMethod(codeSource, "setCachable", false); + return (Class) Utils.callMethod( + LOADER, "parseClass", codeSource); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + } + + /** + * An in-memory java source file object. + */ + static class StringJavaFileObject extends SimpleJavaFileObject { + + private final String sourceCode; + + public StringJavaFileObject(String className, String sourceCode) { + super(URI.create("string:///" + className.replace('.', '/') + + Kind.SOURCE.extension), Kind.SOURCE); + this.sourceCode = sourceCode; + } + + @Override + public CharSequence getCharContent(boolean ignoreEncodingErrors) { + return sourceCode; + } + + } + + /** + * An in-memory java class object. 
+ */ + static class JavaClassObject extends SimpleJavaFileObject { + + private final ByteArrayOutputStream out = new ByteArrayOutputStream(); + + public JavaClassObject(String name, Kind kind) { + super(URI.create("string:///" + name.replace('.', '/') + + kind.extension), kind); + } + + public byte[] getBytes() { + return out.toByteArray(); + } + + @Override + public OutputStream openOutputStream() throws IOException { + return out; + } + } + + /** + * An in-memory class file manager. + */ + static class ClassFileManager extends + ForwardingJavaFileManager { + + /** + * The class (only one class is kept). + */ + JavaClassObject classObject; + + public ClassFileManager(StandardJavaFileManager standardManager) { + super(standardManager); + } + + @Override + public ClassLoader getClassLoader(Location location) { + return new SecureClassLoader() { + @Override + protected Class findClass(String name) + throws ClassNotFoundException { + byte[] bytes = classObject.getBytes(); + return super.defineClass(name, bytes, 0, + bytes.length); + } + }; + } + + @Override + public JavaFileObject getJavaFileForOutput(Location location, + String className, Kind kind, FileObject sibling) throws IOException { + classObject = new JavaClassObject(className, kind); + return classObject; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/StatementBuilder.java b/modules/h2/src/main/java/org/h2/util/StatementBuilder.java new file mode 100644 index 0000000000000..91ca3ca9e0d7b --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/StatementBuilder.java @@ -0,0 +1,130 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +/** + * A utility class to build a statement. In addition to the methods supported by + * StringBuilder, it allows to add a text only in the second iteration. This + * simplified constructs such as: + *
    + * StringBuilder buff = new StringBuilder();
    + * for (int i = 0; i < args.length; i++) {
    + *     if (i > 0) {
    + *         buff.append(", ");
    + *     }
    + *     buff.append(args[i]);
    + * }
    + * 
    + * to + *
    + * StatementBuilder buff = new StatementBuilder();
    + * for (String s : args) {
    + *     buff.appendExceptFirst(", ");
    + *     buff.append(s);
    + * }
    + *
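The appendExceptFirst idiom from the class comment, as a stripped-down runnable sketch (class name SeparatorSketch is mine, not H2's):

```java
// Minimal sketch of the StatementBuilder separator idiom: the first
// appendExceptFirst call is a no-op, every later one writes the
// separator, so "a, b, c" needs no special-casing of index 0.
public class SeparatorSketch {
    private final StringBuilder builder = new StringBuilder();
    private int index;

    public SeparatorSketch appendExceptFirst(String s) {
        if (index++ > 0) {
            builder.append(s);
        }
        return this;
    }

    public SeparatorSketch append(String s) {
        builder.append(s);
        return this;
    }

    @Override
    public String toString() {
        return builder.toString();
    }

    public static void main(String[] args) {
        SeparatorSketch buff = new SeparatorSketch();
        for (String s : new String[] { "a", "b", "c" }) {
            buff.appendExceptFirst(", ");
            buff.append(s);
        }
        System.out.println(buff); // prints a, b, c
    }
}
```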
    + */ +public class StatementBuilder { + + private final StringBuilder builder = new StringBuilder(); + private int index; + + /** + * Create a new builder. + */ + public StatementBuilder() { + // nothing to do + } + + /** + * Create a new builder. + * + * @param string the initial string + */ + public StatementBuilder(String string) { + builder.append(string); + } + + /** + * Append a text. + * + * @param s the text to append + * @return itself + */ + public StatementBuilder append(String s) { + builder.append(s); + return this; + } + + /** + * Append a character. + * + * @param c the character to append + * @return itself + */ + public StatementBuilder append(char c) { + builder.append(c); + return this; + } + + /** + * Append a number. + * + * @param x the number to append + * @return itself + */ + public StatementBuilder append(long x) { + builder.append(x); + return this; + } + + /** + * Reset the loop counter. + * + * @return itself + */ + public StatementBuilder resetCount() { + index = 0; + return this; + } + + /** + * Append a text, but only if appendExceptFirst was never called. + * + * @param s the text to append + */ + public void appendOnlyFirst(String s) { + if (index == 0) { + builder.append(s); + } + } + + /** + * Append a text, except when this method is called the first time. + * + * @param s the text to append + */ + public void appendExceptFirst(String s) { + if (index++ > 0) { + builder.append(s); + } + } + + @Override + public String toString() { + return builder.toString(); + } + + /** + * Get the length. + * + * @return the length + */ + public int length() { + return builder.length(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/StringUtils.java b/modules/h2/src/main/java/org/h2/util/StringUtils.java new file mode 100644 index 0000000000000..275eac32059dc --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/StringUtils.java @@ -0,0 +1,1018 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.lang.ref.SoftReference; +import java.net.URLEncoder; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Locale; +import java.util.concurrent.TimeUnit; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; + +/** + * A few String utility functions. + */ +public class StringUtils { + + private static SoftReference softCache = + new SoftReference<>(null); + private static long softCacheCreatedNs; + + private static final char[] HEX = "0123456789abcdef".toCharArray(); + private static final int[] HEX_DECODE = new int['f' + 1]; + + // memory used by this cache: + // 4 * 1024 * 2 (strings per pair) * 64 * 2 (bytes per char) = 0.5 MB + private static final int TO_UPPER_CACHE_LENGTH = 2 * 1024; + private static final int TO_UPPER_CACHE_MAX_ENTRY_LENGTH = 64; + private static final String[][] TO_UPPER_CACHE = new String[TO_UPPER_CACHE_LENGTH][]; + + static { + for (int i = 0; i < HEX_DECODE.length; i++) { + HEX_DECODE[i] = -1; + } + for (int i = 0; i <= 9; i++) { + HEX_DECODE[i + '0'] = i; + } + for (int i = 0; i <= 5; i++) { + HEX_DECODE[i + 'a'] = HEX_DECODE[i + 'A'] = i + 10; + } + } + + private StringUtils() { + // utility class + } + + private static String[] getCache() { + String[] cache; + // softCache can be null due to a Tomcat problem + // a workaround is disable the system property org.apache. 
+ // catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES + if (softCache != null) { + cache = softCache.get(); + if (cache != null) { + return cache; + } + } + // create a new cache at most every 5 seconds + // so that out of memory exceptions are not delayed + long time = System.nanoTime(); + if (softCacheCreatedNs != 0 && time - softCacheCreatedNs < TimeUnit.SECONDS.toNanos(5)) { + return null; + } + try { + cache = new String[SysProperties.OBJECT_CACHE_SIZE]; + softCache = new SoftReference<>(cache); + return cache; + } finally { + softCacheCreatedNs = System.nanoTime(); + } + } + + /** + * Convert a string to uppercase using the English locale. + * + * @param s the text to convert + * @return the uppercase text + */ + public static String toUpperEnglish(String s) { + if (s.length() > TO_UPPER_CACHE_MAX_ENTRY_LENGTH) { + return s.toUpperCase(Locale.ENGLISH); + } + int index = s.hashCode() & (TO_UPPER_CACHE_LENGTH - 1); + String[] e = TO_UPPER_CACHE[index]; + if (e != null) { + if (e[0].equals(s)) { + return e[1]; + } + } + String s2 = s.toUpperCase(Locale.ENGLISH); + e = new String[] { s, s2 }; + TO_UPPER_CACHE[index] = e; + return s2; + } + + /** + * Convert a string to lowercase using the English locale. + * + * @param s the text to convert + * @return the lowercase text + */ + public static String toLowerEnglish(String s) { + return s.toLowerCase(Locale.ENGLISH); + } + + /** + * Check if a string starts with another string, ignoring the case. + * + * @param s the string to check (must be longer than start) + * @param start the prefix of s + * @return true if start is a prefix of s + */ + public static boolean startsWithIgnoreCase(String s, String start) { + if (s.length() < start.length()) { + return false; + } + return s.substring(0, start.length()).equalsIgnoreCase(start); + } + + /** + * Convert a string to a SQL literal. Null is converted to NULL. The text is + * enclosed in single quotes.
If there are any special characters, the + * method STRINGDECODE is used. + * + * @param s the text to convert. + * @return the SQL literal + */ + public static String quoteStringSQL(String s) { + if (s == null) { + return "NULL"; + } + int length = s.length(); + StringBuilder buff = new StringBuilder(length + 2); + buff.append('\''); + for (int i = 0; i < length; i++) { + char c = s.charAt(i); + if (c == '\'') { + buff.append(c); + } else if (c < ' ' || c > 127) { + // need to start from the beginning because maybe there was a \ + // that was not quoted + return "STRINGDECODE(" + quoteStringSQL(javaEncode(s)) + ")"; + } + buff.append(c); + } + buff.append('\''); + return buff.toString(); + } + + /** + * Convert a string to a Java literal using the correct escape sequences. + * The literal is not enclosed in double quotes. The result can be used in + * properties files or in Java source code. + * + * @param s the text to convert + * @return the Java representation + */ + public static String javaEncode(String s) { + int length = s.length(); + StringBuilder buff = new StringBuilder(length); + for (int i = 0; i < length; i++) { + char c = s.charAt(i); + switch (c) { +// case '\b': +// // BS backspace +// // not supported in properties files +// buff.append("\\b"); +// break; + case '\t': + // HT horizontal tab + buff.append("\\t"); + break; + case '\n': + // LF linefeed + buff.append("\\n"); + break; + case '\f': + // FF form feed + buff.append("\\f"); + break; + case '\r': + // CR carriage return + buff.append("\\r"); + break; + case '"': + // double quote + buff.append("\\\""); + break; + case '\\': + // backslash + buff.append("\\\\"); + break; + default: + int ch = c & 0xffff; + if (ch >= ' ' && (ch < 0x80)) { + buff.append(c); + // not supported in properties files + // } else if (ch < 0xff) { + // buff.append("\\"); + // // make sure it's three characters (0x200 is octal 1000) + // buff.append(Integer.toOctalString(0x200 | ch).substring(1)); + } else { + 
buff.append("\\u"); + String hex = Integer.toHexString(ch); + // make sure it's four characters + for (int len = hex.length(); len < 4; len++) { + buff.append('0'); + } + buff.append(hex); + } + } + } + return buff.toString(); + } + + /** + * Add an asterisk ('[*]') at the given position. This format is used to + * show where parsing failed in a statement. + * + * @param s the text + * @param index the position + * @return the text with asterisk + */ + public static String addAsterisk(String s, int index) { + if (s != null) { + index = Math.min(index, s.length()); + s = s.substring(0, index) + "[*]" + s.substring(index); + } + return s; + } + + private static DbException getFormatException(String s, int i) { + return DbException.get(ErrorCode.STRING_FORMAT_ERROR_1, addAsterisk(s, i)); + } + + /** + * Decode a text that is encoded as a Java string literal. The Java + * properties file format and Java source code format is supported. + * + * @param s the encoded string + * @return the string + */ + public static String javaDecode(String s) { + int length = s.length(); + StringBuilder buff = new StringBuilder(length); + for (int i = 0; i < length; i++) { + char c = s.charAt(i); + if (c == '\\') { + if (i + 1 >= s.length()) { + throw getFormatException(s, i); + } + c = s.charAt(++i); + switch (c) { + case 't': + buff.append('\t'); + break; + case 'r': + buff.append('\r'); + break; + case 'n': + buff.append('\n'); + break; + case 'b': + buff.append('\b'); + break; + case 'f': + buff.append('\f'); + break; + case '#': + // for properties files + buff.append('#'); + break; + case '=': + // for properties files + buff.append('='); + break; + case ':': + // for properties files + buff.append(':'); + break; + case '"': + buff.append('"'); + break; + case '\\': + buff.append('\\'); + break; + case 'u': { + try { + c = (char) (Integer.parseInt(s.substring(i + 1, i + 5), 16)); + } catch (NumberFormatException e) { + throw getFormatException(s, i); + } + i += 4; + 
buff.append(c); + break; + } + default: + if (c >= '0' && c <= '9') { + try { + c = (char) (Integer.parseInt(s.substring(i, i + 3), 8)); + } catch (NumberFormatException e) { + throw getFormatException(s, i); + } + i += 2; + buff.append(c); + } else { + throw getFormatException(s, i); + } + } + } else { + buff.append(c); + } + } + return buff.toString(); + } + + /** + * Convert a string to the Java literal and enclose it with double quotes. + * Null will result in "null" (without double quotes). + * + * @param s the text to convert + * @return the Java representation + */ + public static String quoteJavaString(String s) { + if (s == null) { + return "null"; + } + return "\"" + javaEncode(s) + "\""; + } + + /** + * Convert a string array to the Java source code that represents this + * array. Null will be converted to 'null'. + * + * @param array the string array + * @return the Java source code (including new String[]{}) + */ + public static String quoteJavaStringArray(String[] array) { + if (array == null) { + return "null"; + } + StatementBuilder buff = new StatementBuilder("new String[]{"); + for (String a : array) { + buff.appendExceptFirst(", "); + buff.append(quoteJavaString(a)); + } + return buff.append('}').toString(); + } + + /** + * Convert an int array to the Java source code that represents this array. + * Null will be converted to 'null'. + * + * @param array the int array + * @return the Java source code (including new int[]{}) + */ + public static String quoteJavaIntArray(int[] array) { + if (array == null) { + return "null"; + } + StatementBuilder buff = new StatementBuilder("new int[]{"); + for (int a : array) { + buff.appendExceptFirst(", "); + buff.append(a); + } + return buff.append('}').toString(); + } + + /** + * Enclose a string with '(' and ')' if this is not yet done. 
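 + * <p> + * Illustrative example (editor's note, not in the original source): {@code enclose("a, b")} returns {@code "(a, b)"}, while {@code enclose("(a, b)")} is returned unchanged because it already starts with {@code '('}.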
+ * + * @param s the string + * @return the enclosed string + */ + public static String enclose(String s) { + if (s.startsWith("(")) { + return s; + } + return "(" + s + ")"; + } + + /** + * Remove enclosing '(' and ')' if this text is enclosed. + * + * @param s the potentially enclosed string + * @return the string + */ + public static String unEnclose(String s) { + if (s.startsWith("(") && s.endsWith(")")) { + return s.substring(1, s.length() - 1); + } + return s; + } + + /** + * Encode the string as an URL. + * + * @param s the string to encode + * @return the encoded string + */ + public static String urlEncode(String s) { + try { + return URLEncoder.encode(s, "UTF-8"); + } catch (Exception e) { + // UnsupportedEncodingException + throw DbException.convert(e); + } + } + + /** + * Decode the URL to a string. + * + * @param encoded the encoded URL + * @return the decoded string + */ + public static String urlDecode(String encoded) { + int length = encoded.length(); + byte[] buff = new byte[length]; + int j = 0; + for (int i = 0; i < length; i++) { + char ch = encoded.charAt(i); + if (ch == '+') { + buff[j++] = ' '; + } else if (ch == '%') { + buff[j++] = (byte) Integer.parseInt(encoded.substring(i + 1, i + 3), 16); + i += 2; + } else { + if (SysProperties.CHECK) { + if (ch > 127 || ch < ' ') { + throw new IllegalArgumentException( + "Unexpected char " + (int) ch + " decoding " + encoded); + } + } + buff[j++] = (byte) ch; + } + } + return new String(buff, 0, j, StandardCharsets.UTF_8); + } + + /** + * Split a string into an array of strings using the given separator. A null + * string will result in a null array, and an empty string in a zero element + * array. 
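 + * <p> + * Illustrative example (editor's note, not in the original source): {@code arraySplit("a,b\\,c", ',', true)} returns {@code {"a", "b,c"}}, because a backslash escapes the separator character.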
 + * + * @param s the string to split + * @param separatorChar the separator character + * @param trim whether each element should be trimmed + * @return the array list + */ + public static String[] arraySplit(String s, char separatorChar, boolean trim) { + if (s == null) { + return null; + } + int length = s.length(); + if (length == 0) { + return new String[0]; + } + ArrayList<String> list = New.arrayList(); + StringBuilder buff = new StringBuilder(length); + for (int i = 0; i < length; i++) { + char c = s.charAt(i); + if (c == separatorChar) { + String e = buff.toString(); + list.add(trim ? e.trim() : e); + buff.setLength(0); + } else if (c == '\\' && i < length - 1) { + buff.append(s.charAt(++i)); + } else { + buff.append(c); + } + } + String e = buff.toString(); + list.add(trim ? e.trim() : e); + return list.toArray(new String[0]); + } + + /** + * Combine an array of strings into one string using the given separator + * character. A backslash and the separator character are escaped using a + * backslash. + * + * @param list the string array + * @param separatorChar the separator character + * @return the combined string + */ + public static String arrayCombine(String[] list, char separatorChar) { + StatementBuilder buff = new StatementBuilder(); + for (String s : list) { + buff.appendExceptFirst(String.valueOf(separatorChar)); + if (s == null) { + s = ""; + } + for (int j = 0, length = s.length(); j < length; j++) { + char c = s.charAt(j); + if (c == '\\' || c == separatorChar) { + buff.append('\\'); + } + buff.append(c); + } + } + return buff.toString(); + } + + /** + * Creates an XML attribute of the form name="value". + * A single space is prepended to the name, + * so that multiple attributes can be concatenated.
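 + * <p> + * Illustrative example (editor's note, not in the original source): {@code xmlAttr("id", "test")} returns {@code " id=\"test\""}.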
 + * @param name the attribute name + * @param value the attribute value + * @return the attribute + */ + public static String xmlAttr(String name, String value) { + return " " + name + "=\"" + xmlText(value) + "\""; + } + + /** + * Create an XML node with optional attributes and content. + * The data is indented with 4 spaces if it contains a newline character. + * + * @param name the element name + * @param attributes the attributes (may be null) + * @param content the content (may be null) + * @return the node + */ + public static String xmlNode(String name, String attributes, String content) { + return xmlNode(name, attributes, content, true); + } + + /** + * Create an XML node with optional attributes and content. The data is + * indented with 4 spaces if it contains a newline character and the indent + * parameter is set to true. + * + * @param name the element name + * @param attributes the attributes (may be null) + * @param content the content (may be null) + * @param indent whether to indent the content if it contains a newline + * @return the node + */ + public static String xmlNode(String name, String attributes, + String content, boolean indent) { + String start = attributes == null ? name : name + attributes; + if (content == null) { + return "<" + start + "/>\n"; + } + if (indent && content.indexOf('\n') >= 0) { + content = "\n" + indent(content); + } + return "<" + start + ">" + content + "</" + name + ">\n"; + } + + /** + * Indents a string with 4 spaces. + * + * @param s the string + * @return the indented string + */ + public static String indent(String s) { + return indent(s, 4, true); + } + + /** + * Indents a string with spaces.
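 + * <p> + * Illustrative example (editor's note, not in the original source): {@code indent("a\nb", 2, true)} returns {@code "  a\n  b\n"}.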
 + * + * @param s the string + * @param spaces the number of spaces + * @param newline append a newline if there is none + * @return the indented string + */ + public static String indent(String s, int spaces, boolean newline) { + StringBuilder buff = new StringBuilder(s.length() + spaces); + for (int i = 0; i < s.length();) { + for (int j = 0; j < spaces; j++) { + buff.append(' '); + } + int n = s.indexOf('\n', i); + n = n < 0 ? s.length() : n + 1; + buff.append(s, i, n); + i = n; + } + if (newline && !s.endsWith("\n")) { + buff.append('\n'); + } + return buff.toString(); + } + + /** + * Escapes a comment. + * If the data contains '--', it is converted to '- -'. + * The data is indented with 4 spaces if it contains a newline character. + * + * @param data the comment text + * @return <!-- data --> + */ + public static String xmlComment(String data) { + int idx = 0; + while (true) { + idx = data.indexOf("--", idx); + if (idx < 0) { + break; + } + data = data.substring(0, idx + 1) + " " + data.substring(idx + 1); + } + // must have a space at the beginning and at the end, + // otherwise the data must not contain '-' as the first/last character + if (data.indexOf('\n') >= 0) { + return "<!--\n" + indent(data) + "-->\n"; + } + return "<!-- " + data + " -->\n"; + } + + /** + * Converts the data to a CDATA element. + * If the data contains ']]>', it is escaped as a text element. + * + * @param data the text data + * @return <![CDATA[data]]> + */ + public static String xmlCData(String data) { + if (data.contains("]]>")) { + return xmlText(data); + } + boolean newline = data.endsWith("\n"); + data = "<![CDATA[" + data + "]]>"; + return newline ? data + "\n" : data; + } + + /** + * Returns <?xml version="1.0"?> + * @return <?xml version="1.0"?> + */ + public static String xmlStartDoc() { + return "<?xml version=\"1.0\"?>\n"; + } + + /** + * Escapes an XML text element. + * + * @param text the text data + * @return the escaped text + */ + public static String xmlText(String text) { + return xmlText(text, false); + } + + /** + * Escapes an XML text element.
 + * + * @param text the text data + * @param escapeNewline whether to escape newlines + * @return the escaped text + */ + public static String xmlText(String text, boolean escapeNewline) { + int length = text.length(); + StringBuilder buff = new StringBuilder(length); + for (int i = 0; i < length; i++) { + char ch = text.charAt(i); + switch (ch) { + case '<': + buff.append("&lt;"); + break; + case '>': + buff.append("&gt;"); + break; + case '&': + buff.append("&amp;"); + break; + case '\'': + // &apos; is not valid in HTML + buff.append("&#39;"); + break; + case '\"': + buff.append("&quot;"); + break; + case '\r': + case '\n': + if (escapeNewline) { + buff.append("&#x"). + append(Integer.toHexString(ch)). + append(';'); + } else { + buff.append(ch); + } + break; + case '\t': + buff.append(ch); + break; + default: + if (ch < ' ' || ch > 127) { + buff.append("&#x"). + append(Integer.toHexString(ch)). + append(';'); + } else { + buff.append(ch); + } + } + } + return buff.toString(); + } + + /** + * Replace all occurrences of the before string with the after string. Unlike + * {@link String#replaceAll(String, String)} this method reads {@code before} + * and {@code after} arguments as plain strings and if {@code before} argument + * is an empty string this method returns original string {@code s}. + * + * @param s the string + * @param before the old text + * @param after the new text + * @return the string with the before string replaced + */ + public static String replaceAll(String s, String before, String after) { + int next = s.indexOf(before); + if (next < 0 || before.isEmpty()) { + return s; + } + StringBuilder buff = new StringBuilder( + s.length() - before.length() + after.length()); + int index = 0; + while (true) { + buff.append(s, index, next).append(after); + index = next + before.length(); + next = s.indexOf(before, index); + if (next < 0) { + buff.append(s, index, s.length()); + break; + } + } + return buff.toString(); + } + + /** + * Enclose a string with double quotes.
A double quote inside the string is + * escaped using a double quote. + * + * @param s the text + * @return the double quoted text + */ + public static String quoteIdentifier(String s) { + int length = s.length(); + StringBuilder buff = new StringBuilder(length + 2); + buff.append('\"'); + for (int i = 0; i < length; i++) { + char c = s.charAt(i); + if (c == '"') { + buff.append(c); + } + buff.append(c); + } + return buff.append('\"').toString(); + } + + /** + * Check if a String is null or empty (the length is zero). + * + * @param s the string to check + * @return true if it is null or empty + */ + public static boolean isNullOrEmpty(String s) { + return s == null || s.length() == 0; + } + + /** + * In a string, replace block comment marks with /++ .. ++/. + * + * @param sql the string + * @return the resulting string + */ + public static String quoteRemarkSQL(String sql) { + sql = replaceAll(sql, "*/", "++/"); + return replaceAll(sql, "/*", "/++"); + } + + /** + * Pad a string. This method is used for the SQL functions RPAD and LPAD. + * + * @param string the original string + * @param n the target length + * @param padding the padding string + * @param right true if the padding should be appended at the end + * @return the padded string + */ + public static String pad(String string, int n, String padding, boolean right) { + if (n < 0) { + n = 0; + } + if (n < string.length()) { + return string.substring(0, n); + } else if (n == string.length()) { + return string; + } + char paddingChar; + if (padding == null || padding.length() == 0) { + paddingChar = ' '; + } else { + paddingChar = padding.charAt(0); + } + StringBuilder buff = new StringBuilder(n); + n -= string.length(); + if (right) { + buff.append(string); + } + for (int i = 0; i < n; i++) { + buff.append(paddingChar); + } + if (!right) { + buff.append(string); + } + return buff.toString(); + } + + /** + * Create a new char array and copy all the data.
If the size of the byte + * array is zero, the same array is returned. + * + * @param chars the char array (may be null) + * @return a new char array + */ + public static char[] cloneCharArray(char[] chars) { + if (chars == null) { + return null; + } + int len = chars.length; + if (len == 0) { + return chars; + } + return Arrays.copyOf(chars, len); + } + + /** + * Trim a character from a string. + * + * @param s the string + * @param leading if leading characters should be removed + * @param trailing if trailing characters should be removed + * @param sp what to remove (only the first character is used) + * or null for a space + * @return the trimmed string + */ + public static String trim(String s, boolean leading, boolean trailing, + String sp) { + char space = sp == null || sp.isEmpty() ? ' ' : sp.charAt(0); + int begin = 0, end = s.length(); + if (leading) { + while (begin < end && s.charAt(begin) == space) { + begin++; + } + } + if (trailing) { + while (end > begin && s.charAt(end - 1) == space) { + end--; + } + } + // substring() returns self if start == 0 && end == length() + return s.substring(begin, end); + } + + /** + * Get the string from the cache if possible. If the string has not been + * found, it is added to the cache. If there is such a string in the cache, + * that one is returned. + * + * @param s the original string + * @return a string with the same content, if possible from the cache + */ + public static String cache(String s) { + if (!SysProperties.OBJECT_CACHE) { + return s; + } + if (s == null) { + return s; + } else if (s.length() == 0) { + return ""; + } + int hash = s.hashCode(); + String[] cache = getCache(); + if (cache != null) { + int index = hash & (SysProperties.OBJECT_CACHE_SIZE - 1); + String cached = cache[index]; + if (cached != null) { + if (s.equals(cached)) { + return cached; + } + } + cache[index] = s; + } + return s; + } + + /** + * Clear the cache. This method is used for testing. 
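 + * <p> + * Editor's note (illustrative, not in the original source): after {@code clearCache()}, a subsequent {@code cache(s)} call starts from a freshly allocated cache, so instances returned by earlier calls are no longer handed out.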
+ */ + public static void clearCache() { + softCache = new SoftReference<>(null); + } + + /** + * Convert a hex encoded string to a byte array. + * + * @param s the hex encoded string + * @return the byte array + */ + public static byte[] convertHexToBytes(String s) { + int len = s.length(); + if (len % 2 != 0) { + throw DbException.get(ErrorCode.HEX_STRING_ODD_1, s); + } + len /= 2; + byte[] buff = new byte[len]; + int mask = 0; + int[] hex = HEX_DECODE; + try { + for (int i = 0; i < len; i++) { + int d = hex[s.charAt(i + i)] << 4 | hex[s.charAt(i + i + 1)]; + mask |= d; + buff[i] = (byte) d; + } + } catch (ArrayIndexOutOfBoundsException e) { + throw DbException.get(ErrorCode.HEX_STRING_WRONG_1, s); + } + if ((mask & ~255) != 0) { + throw DbException.get(ErrorCode.HEX_STRING_WRONG_1, s); + } + return buff; + } + + /** + * Convert a byte array to a hex encoded string. + * + * @param value the byte array + * @return the hex encoded string + */ + public static String convertBytesToHex(byte[] value) { + return convertBytesToHex(value, value.length); + } + + /** + * Convert a byte array to a hex encoded string. + * + * @param value the byte array + * @param len the number of bytes to encode + * @return the hex encoded string + */ + public static String convertBytesToHex(byte[] value, int len) { + char[] buff = new char[len + len]; + char[] hex = HEX; + for (int i = 0; i < len; i++) { + int c = value[i] & 0xff; + buff[i + i] = hex[c >> 4]; + buff[i + i + 1] = hex[c & 0xf]; + } + return new String(buff); + } + + /** + * Check if this string is a decimal number. + * + * @param s the string + * @return true if it is + */ + public static boolean isNumber(String s) { + if (s.length() == 0) { + return false; + } + for (char c : s.toCharArray()) { + if (!Character.isDigit(c)) { + return false; + } + } + return true; + } + + /** + * Append a zero-padded number to a string builder. 
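 + * <p> + * Illustrative example (editor's note, not in the original source): {@code appendZeroPadded(buff, 4, 7)} appends {@code "0007"}, while {@code appendZeroPadded(buff, 2, 7)} appends {@code "07"}.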
 + * + * @param buff the string builder + * @param length the number of characters to append + * @param positiveValue the number to append + */ + public static void appendZeroPadded(StringBuilder buff, int length, + long positiveValue) { + if (length == 2) { + if (positiveValue < 10) { + buff.append('0'); + } + buff.append(positiveValue); + } else { + String s = Long.toString(positiveValue); + length -= s.length(); + while (length > 0) { + buff.append('0'); + length--; + } + buff.append(s); + } + } + + /** + * Escape table or schema patterns used for DatabaseMetaData functions. + * + * @param pattern the pattern + * @return the escaped pattern + */ + public static String escapeMetaDataPattern(String pattern) { + if (pattern == null || pattern.length() == 0) { + return pattern; + } + return replaceAll(pattern, "\\", "\\\\"); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/SynchronizedVerifier.java b/modules/h2/src/main/java/org/h2/util/SynchronizedVerifier.java new file mode 100644 index 0000000000000..87ce41418bc18 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/SynchronizedVerifier.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.Collections; +import java.util.HashMap; +import java.util.IdentityHashMap; +import java.util.Map; +import java.util.concurrent.atomic.AtomicBoolean; + +/** + * A utility class that allows one to verify that access to a resource is synchronized. + */ +public class SynchronizedVerifier { + + private static volatile boolean enabled; + private static final Map<Class<?>, AtomicBoolean> DETECT = + Collections.synchronizedMap(new HashMap<Class<?>, AtomicBoolean>()); + private static final Map<Object, Object> CURRENT = + Collections.synchronizedMap(new IdentityHashMap<>()); + + /** + * Enable or disable detection for a given class.
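 + * <p> + * Illustrative usage (editor's note; {@code MyClass} is a hypothetical name): call {@code setDetect(MyClass.class, true)} before a test and {@code setDetect(MyClass.class, false)} after it; the second call throws an {@link AssertionError} if no object of that class was checked in between.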
 + * + * @param clazz the class + * @param value the new value (true means detection is enabled) + */ + public static void setDetect(Class<?> clazz, boolean value) { + if (value) { + DETECT.put(clazz, new AtomicBoolean()); + } else { + AtomicBoolean b = DETECT.remove(clazz); + if (b == null) { + throw new AssertionError("Detection was not enabled"); + } else if (!b.get()) { + throw new AssertionError("No object of this class was tested"); + } + } + enabled = DETECT.size() > 0; + } + + /** + * Verify the object is not accessed concurrently. + * + * @param o the object + */ + public static void check(Object o) { + if (enabled) { + detectConcurrentAccess(o); + } + } + + private static void detectConcurrentAccess(Object o) { + AtomicBoolean value = DETECT.get(o.getClass()); + if (value != null) { + value.set(true); + if (CURRENT.remove(o) != null) { + throw new AssertionError("Concurrent access"); + } + CURRENT.put(o, o); + try { + Thread.sleep(1); + } catch (InterruptedException e) { + // ignore + } + Object old = CURRENT.remove(o); + if (old == null) { + throw new AssertionError("Concurrent access"); + } + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/Task.java b/modules/h2/src/main/java/org/h2/util/Task.java new file mode 100644 index 0000000000000..5187359636b30 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/Task.java @@ -0,0 +1,126 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.concurrent.atomic.AtomicInteger; + +/** + * A method call that is executed in a separate thread. If the method throws an + * exception, it is wrapped in a RuntimeException. + */ +public abstract class Task implements Runnable { + + private static final AtomicInteger counter = new AtomicInteger(); + + /** + * A flag indicating the get() method has been called.
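 + * Long-running implementations are expected to poll this flag in {@code call()} and return once it is set; it is set by {@code join()} (and therefore also by {@code get()}, which calls {@code join()} internally). (Editor's note: this usage sketch is illustrative and not part of the original source.)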
+ */ + public volatile boolean stop; + + /** + * The result, if any. + */ + private volatile Object result; + + private volatile boolean finished; + + private Thread thread; + + private volatile Exception ex; + + /** + * The method to be implemented. + * + * @throws Exception any exception is wrapped in a RuntimeException + */ + public abstract void call() throws Exception; + + @Override + public void run() { + try { + call(); + } catch (Exception e) { + this.ex = e; + } + finished = true; + } + + /** + * Start the thread. + * + * @return this + */ + public Task execute() { + return execute(getClass().getName() + ":" + counter.getAndIncrement()); + } + + /** + * Start the thread. + * + * @param threadName the name of the thread + * @return this + */ + public Task execute(String threadName) { + thread = new Thread(this, threadName); + thread.setDaemon(true); + thread.start(); + return this; + } + + /** + * Calling this method will set the stop flag and wait until the thread is + * stopped. + * + * @return the result, or null + * @throws RuntimeException if an exception in the method call occurs + */ + public Object get() { + Exception e = getException(); + if (e != null) { + throw new RuntimeException(e); + } + return result; + } + + /** + * Whether the call method has returned (with or without exception). + * + * @return true if yes + */ + public boolean isFinished() { + return finished; + } + + /** + * Get the exception that was thrown in the call (if any). + * + * @return the exception or null + */ + public Exception getException() { + join(); + if (ex != null) { + return ex; + } + return null; + } + + /** + * Stop the thread and wait until it is no longer running. Exceptions are + * ignored. 
 + */ + public void join() { + stop = true; + if (thread == null) { + throw new IllegalStateException("Thread not started"); + } + try { + thread.join(); + } catch (InterruptedException e) { + // ignore + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/TempFileDeleter.java b/modules/h2/src/main/java/org/h2/util/TempFileDeleter.java new file mode 100644 index 0000000000000..7317203bb36a2 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/TempFileDeleter.java @@ -0,0 +1,122 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.lang.ref.PhantomReference; +import java.lang.ref.Reference; +import java.lang.ref.ReferenceQueue; +import java.util.ArrayList; +import java.util.HashMap; + +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; + +/** + * This class deletes temporary files when they are not used any longer. + */ +public class TempFileDeleter { + + private final ReferenceQueue<Object> queue = new ReferenceQueue<>(); + private final HashMap<Reference<?>, String> refMap = new HashMap<>(); + + private TempFileDeleter() { + // utility class + } + + public static TempFileDeleter getInstance() { + return new TempFileDeleter(); + } + + /** + * Add a file to the list of temp files to delete. The file is deleted once + * the file object is garbage collected. + * + * @param fileName the file name + * @param file the object to monitor + * @return the reference that can be used to stop deleting the file + */ + public synchronized Reference<?> addFile(String fileName, Object file) { + IOUtils.trace("TempFileDeleter.addFile", fileName, file); + PhantomReference<Object> ref = new PhantomReference<>(file, queue); + refMap.put(ref, fileName); + deleteUnused(); + return ref; + } + + /** + * Delete the given file now. This will remove the reference from the list.
 + * + * @param ref the reference as returned by addFile + * @param fileName the file name + */ + public synchronized void deleteFile(Reference<?> ref, String fileName) { + if (ref != null) { + String f2 = refMap.remove(ref); + if (f2 != null) { + if (SysProperties.CHECK) { + if (fileName != null && !f2.equals(fileName)) { + DbException.throwInternalError("f2:" + f2 + " f:" + fileName); + } + } + fileName = f2; + } + } + if (fileName != null && FileUtils.exists(fileName)) { + try { + IOUtils.trace("TempFileDeleter.deleteFile", fileName, null); + FileUtils.tryDelete(fileName); + } catch (Exception e) { + // TODO log such errors? + } + } + } + + /** + * Delete all registered temp files. + */ + public void deleteAll() { + for (String tempFile : new ArrayList<>(refMap.values())) { + deleteFile(null, tempFile); + } + deleteUnused(); + } + + /** + * Delete all unused files now. + */ + public void deleteUnused() { + while (queue != null) { + Reference<? extends Object> ref = queue.poll(); + if (ref == null) { + break; + } + deleteFile(ref, null); + } + } + + /** + * This method is called if a file should no longer be deleted if the object + * is garbage collected. + * + * @param ref the reference as returned by addFile + * @param fileName the file name + */ + public void stopAutoDelete(Reference<?> ref, String fileName) { + IOUtils.trace("TempFileDeleter.stopAutoDelete", fileName, ref); + if (ref != null) { + String f2 = refMap.remove(ref); + if (SysProperties.CHECK) { + if (f2 == null || !f2.equals(fileName)) { + DbException.throwInternalError("f2:" + f2 + + " " + (f2 == null ? "" : f2) + " f:" + fileName); + } + } + } + deleteUnused(); + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/ThreadDeadlockDetector.java b/modules/h2/src/main/java/org/h2/util/ThreadDeadlockDetector.java new file mode 100644 index 0000000000000..f2952f0bba533 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ThreadDeadlockDetector.java @@ -0,0 +1,197 @@ +/* + * Copyright 2004-2018 H2 Group.
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.PrintStream; +import java.io.PrintWriter; +import java.io.StringWriter; +import java.lang.management.LockInfo; +import java.lang.management.ManagementFactory; +import java.lang.management.MonitorInfo; +import java.lang.management.ThreadInfo; +import java.lang.management.ThreadMXBean; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Timer; +import java.util.TimerTask; +import org.h2.engine.SysProperties; +import org.h2.mvstore.db.MVTable; + +/** + * Detects deadlocks between threads. Prints out data in the same format as the + * CTRL-BREAK handler, but includes information about table locks. + */ +public class ThreadDeadlockDetector { + + private static final String INDENT = " "; + + private static ThreadDeadlockDetector detector; + + private final ThreadMXBean threadBean; + + // a daemon thread + private final Timer threadCheck = new Timer("ThreadDeadlockDetector", true); + + private ThreadDeadlockDetector() { + this.threadBean = ManagementFactory.getThreadMXBean(); + // delay: 10 ms + // period: 10000 ms (100 seconds) + threadCheck.schedule(new TimerTask() { + @Override + public void run() { + checkForDeadlocks(); + } + }, 10, 10_000); + } + + /** + * Initialize the detector. + */ + public static synchronized void init() { + if (detector == null) { + detector = new ThreadDeadlockDetector(); + } + } + + /** + * Checks if any threads are deadlocked. If any, print the thread dump + * information. + */ + void checkForDeadlocks() { + long[] deadlockedThreadIds = threadBean.findDeadlockedThreads(); + if (deadlockedThreadIds == null) { + return; + } + dumpThreadsAndLocks("ThreadDeadlockDetector - deadlock found :", + threadBean, deadlockedThreadIds, System.out); + } + + /** + * Dump all deadlocks (if any). 
+ * + * @param msg the message + */ + public static void dumpAllThreadsAndLocks(String msg) { + dumpAllThreadsAndLocks(msg, System.out); + } + + /** + * Dump all deadlocks (if any). + * + * @param msg the message + * @param out the output + */ + public static void dumpAllThreadsAndLocks(String msg, PrintStream out) { + final ThreadMXBean threadBean = ManagementFactory.getThreadMXBean(); + final long[] allThreadIds = threadBean.getAllThreadIds(); + dumpThreadsAndLocks(msg, threadBean, allThreadIds, out); + } + + private static void dumpThreadsAndLocks(String msg, ThreadMXBean threadBean, + long[] threadIds, PrintStream out) { + final StringWriter stringWriter = new StringWriter(); + final PrintWriter print = new PrintWriter(stringWriter); + + print.println(msg); + + final HashMap tableWaitingForLockMap; + final HashMap> tableExclusiveLocksMap; + final HashMap> tableSharedLocksMap; + if (SysProperties.THREAD_DEADLOCK_DETECTOR) { + tableWaitingForLockMap = MVTable.WAITING_FOR_LOCK + .getSnapshotOfAllThreads(); + tableExclusiveLocksMap = MVTable.EXCLUSIVE_LOCKS + .getSnapshotOfAllThreads(); + tableSharedLocksMap = MVTable.SHARED_LOCKS + .getSnapshotOfAllThreads(); + } else { + tableWaitingForLockMap = new HashMap<>(); + tableExclusiveLocksMap = new HashMap<>(); + tableSharedLocksMap = new HashMap<>(); + } + + final ThreadInfo[] infos = threadBean.getThreadInfo(threadIds, true, + true); + for (ThreadInfo ti : infos) { + printThreadInfo(print, ti); + printLockInfo(print, ti.getLockedSynchronizers(), + tableWaitingForLockMap.get(ti.getThreadId()), + tableExclusiveLocksMap.get(ti.getThreadId()), + tableSharedLocksMap.get(ti.getThreadId())); + } + + print.flush(); + // Dump it to system.out in one block, so it doesn't get mixed up with + // other stuff when we're using a logging subsystem. 
+ out.println(stringWriter.getBuffer()); + out.flush(); + } + + private static void printThreadInfo(PrintWriter print, ThreadInfo ti) { + // print thread information + printThread(print, ti); + + // print stack trace with locks + StackTraceElement[] stackTrace = ti.getStackTrace(); + MonitorInfo[] monitors = ti.getLockedMonitors(); + for (int i = 0; i < stackTrace.length; i++) { + StackTraceElement e = stackTrace[i]; + print.println(INDENT + "at " + e.toString()); + for (MonitorInfo mi : monitors) { + if (mi.getLockedStackDepth() == i) { + print.println(INDENT + " - locked " + mi); + } + } + } + print.println(); + } + + private static void printThread(PrintWriter print, ThreadInfo ti) { + print.print("\"" + ti.getThreadName() + "\"" + " Id=" + + ti.getThreadId() + " in " + ti.getThreadState()); + if (ti.getLockName() != null) { + print.append(" on lock=").append(ti.getLockName()); + } + if (ti.isSuspended()) { + print.append(" (suspended)"); + } + if (ti.isInNative()) { + print.append(" (running in native)"); + } + print.println(); + if (ti.getLockOwnerName() != null) { + print.println(INDENT + " owned by " + ti.getLockOwnerName() + " Id=" + + ti.getLockOwnerId()); + } + } + + private static void printLockInfo(PrintWriter print, LockInfo[] locks, + String tableWaitingForLock, + ArrayList tableExclusiveLocks, + ArrayList tableSharedLocksMap) { + print.println(INDENT + "Locked synchronizers: count = " + locks.length); + for (LockInfo li : locks) { + print.println(INDENT + " - " + li); + } + if (tableWaitingForLock != null) { + print.println(INDENT + "Waiting for table: " + tableWaitingForLock); + } + if (tableExclusiveLocks != null) { + print.println(INDENT + "Exclusive table locks: count = " + tableExclusiveLocks.size()); + for (String name : tableExclusiveLocks) { + print.println(INDENT + " - " + name); + } + } + if (tableSharedLocksMap != null) { + print.println(INDENT + "Shared table locks: count = " + tableSharedLocksMap.size()); + for (String name : 
tableSharedLocksMap) { + print.println(INDENT + " - " + name); + } + } + print.println(); + } + +} \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/util/ToChar.java b/modules/h2/src/main/java/org/h2/util/ToChar.java new file mode 100644 index 0000000000000..bb87e45977c5c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ToChar.java @@ -0,0 +1,1041 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Daniel Gredler + */ +package org.h2.util; + +import java.math.BigDecimal; +import java.text.DateFormatSymbols; +import java.text.DecimalFormat; +import java.text.DecimalFormatSymbols; +import java.text.SimpleDateFormat; +import java.util.Arrays; +import java.util.Currency; +import java.util.Locale; +import java.util.TimeZone; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.value.Value; +import org.h2.value.ValueTimestampTimeZone; + +/** + * Emulates Oracle's TO_CHAR function. + */ +public class ToChar { + + /** + * The beginning of the Julian calendar. + */ + static final int JULIAN_EPOCH = -2_440_588; + + private static final int[] ROMAN_VALUES = { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, + 5, 4, 1 }; + + private static final String[] ROMAN_NUMERALS = { "M", "CM", "D", "CD", "C", "XC", + "L", "XL", "X", "IX", "V", "IV", "I" }; + + /** + * The month field. + */ + static final int MONTHS = 0; + + /** + * The month field (short form). + */ + static final int SHORT_MONTHS = 1; + + /** + * The weekday field. + */ + static final int WEEKDAYS = 2; + + /** + * The weekday field (short form). + */ + static final int SHORT_WEEKDAYS = 3; + + /** + * The AM / PM field. + */ + static final int AM_PM = 4; + + private static volatile String[][] NAMES; + + private ToChar() { + // utility class + } + + /** + * Emulates Oracle's TO_CHAR(number) function. + * + *

+ * <table border="1">
+ * <tr><th>Input</th><th>Output</th><th>Closest {@link DecimalFormat} Equivalent</th></tr>
+ * <tr><td>,</td><td>Grouping separator.</td><td>,</td></tr>
+ * <tr><td>.</td><td>Decimal separator.</td><td>.</td></tr>
+ * <tr><td>$</td><td>Leading dollar sign.</td><td>$</td></tr>
+ * <tr><td>0</td><td>Leading or trailing zeroes.</td><td>0</td></tr>
+ * <tr><td>9</td><td>Digit.</td><td>#</td></tr>
+ * <tr><td>B</td><td>Blanks integer part of a fixed point number less than 1.</td><td>#</td></tr>
+ * <tr><td>C</td><td>ISO currency symbol.</td><td>\u00A4</td></tr>
+ * <tr><td>D</td><td>Local decimal separator.</td><td>.</td></tr>
+ * <tr><td>EEEE</td><td>Returns a value in scientific notation.</td><td>E</td></tr>
+ * <tr><td>FM</td><td>Returns values with no leading or trailing spaces.</td><td>None.</td></tr>
+ * <tr><td>G</td><td>Local grouping separator.</td><td>,</td></tr>
+ * <tr><td>L</td><td>Local currency symbol.</td><td>\u00A4</td></tr>
+ * <tr><td>MI</td><td>Negative values get trailing minus sign,
+ * positive get trailing space.</td><td>-</td></tr>
+ * <tr><td>PR</td><td>Negative values get enclosing angle brackets,
+ * positive get spaces.</td><td>None.</td></tr>
+ * <tr><td>RN</td><td>Returns values in Roman numerals.</td><td>None.</td></tr>
+ * <tr><td>S</td><td>Returns values with leading/trailing +/- signs.</td><td>None.</td></tr>
+ * <tr><td>TM</td><td>Returns smallest number of characters possible.</td><td>None.</td></tr>
+ * <tr><td>U</td><td>Returns the dual currency symbol.</td><td>None.</td></tr>
+ * <tr><td>V</td><td>Returns a value multiplied by 10^n.</td><td>None.</td></tr>
+ * <tr><td>X</td><td>Hex value.</td><td>None.</td></tr>
+ * </table>
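For reviewers skimming the patch: the "TME" (scientific notation) short-circuit implemented in `toChar` below can be reproduced as a standalone sketch. The class and method names here are illustrative only, not part of the patch.

```java
import java.math.BigDecimal;

// Standalone sketch of the TME branch of ToChar.toChar below: shift the
// decimal point so exactly one digit remains before it, then append a
// signed, zero-padded two-digit exponent.
public class TmeSketch {
    public static String tme(BigDecimal number) {
        int pow = number.precision() - number.scale() - 1;
        number = number.movePointLeft(pow);
        return number.toPlainString() + "E"
                + (pow < 0 ? '-' : '+')
                + (Math.abs(pow) < 10 ? "0" : "") + Math.abs(pow);
    }
}
```

For example, `tme(new BigDecimal("123.45"))` yields `1.2345E+02`.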
    + * See also TO_CHAR(number) and number format models + * in the Oracle documentation. + * + * @param number the number to format + * @param format the format pattern to use (if any) + * @param nlsParam the NLS parameter (if any) + * @return the formatted number + */ + public static String toChar(BigDecimal number, String format, + @SuppressWarnings("unused") String nlsParam) { + + // short-circuit logic for formats that don't follow common logic below + String formatUp = format != null ? StringUtils.toUpperEnglish(format) : null; + if (formatUp == null || formatUp.equals("TM") || formatUp.equals("TM9")) { + String s = number.toPlainString(); + return s.startsWith("0.") ? s.substring(1) : s; + } else if (formatUp.equals("TME")) { + int pow = number.precision() - number.scale() - 1; + number = number.movePointLeft(pow); + return number.toPlainString() + "E" + + (pow < 0 ? '-' : '+') + (Math.abs(pow) < 10 ? "0" : "") + Math.abs(pow); + } else if (formatUp.equals("RN")) { + boolean lowercase = format.startsWith("r"); + String rn = StringUtils.pad(toRomanNumeral(number.intValue()), 15, " ", false); + return lowercase ? rn.toLowerCase() : rn; + } else if (formatUp.equals("FMRN")) { + boolean lowercase = format.charAt(2) == 'r'; + String rn = toRomanNumeral(number.intValue()); + return lowercase ? 
rn.toLowerCase() : rn; + } else if (formatUp.endsWith("X")) { + return toHex(number, format); + } + + String originalFormat = format; + DecimalFormatSymbols symbols = DecimalFormatSymbols.getInstance(); + char localGrouping = symbols.getGroupingSeparator(); + char localDecimal = symbols.getDecimalSeparator(); + + boolean leadingSign = formatUp.startsWith("S"); + if (leadingSign) { + format = format.substring(1); + } + + boolean trailingSign = formatUp.endsWith("S"); + if (trailingSign) { + format = format.substring(0, format.length() - 1); + } + + boolean trailingMinus = formatUp.endsWith("MI"); + if (trailingMinus) { + format = format.substring(0, format.length() - 2); + } + + boolean angleBrackets = formatUp.endsWith("PR"); + if (angleBrackets) { + format = format.substring(0, format.length() - 2); + } + + int v = formatUp.indexOf('V'); + if (v >= 0) { + int digits = 0; + for (int i = v + 1; i < format.length(); i++) { + char c = format.charAt(i); + if (c == '0' || c == '9') { + digits++; + } + } + number = number.movePointRight(digits); + format = format.substring(0, v) + format.substring(v + 1); + } + + Integer power; + if (format.endsWith("EEEE")) { + power = number.precision() - number.scale() - 1; + number = number.movePointLeft(power); + format = format.substring(0, format.length() - 4); + } else { + power = null; + } + + int maxLength = 1; + boolean fillMode = !formatUp.startsWith("FM"); + if (!fillMode) { + format = format.substring(2); + } + + // blanks flag doesn't seem to actually do anything + format = format.replaceAll("[Bb]", ""); + + // if we need to round the number to fit into the format specified, + // go ahead and do that first + int separator = findDecimalSeparator(format); + int formatScale = calculateScale(format, separator); + if (formatScale < number.scale()) { + number = number.setScale(formatScale, BigDecimal.ROUND_HALF_UP); + } + + // any 9s to the left of the decimal separator but to the right of a + // 0 behave the same as a 0, e.g. 
"09999.99" -> "00000.99" + for (int i = format.indexOf('0'); i >= 0 && i < separator; i++) { + if (format.charAt(i) == '9') { + format = format.substring(0, i) + "0" + format.substring(i + 1); + } + } + + StringBuilder output = new StringBuilder(); + String unscaled = (number.abs().compareTo(BigDecimal.ONE) < 0 ? + zeroesAfterDecimalSeparator(number) : "") + + number.unscaledValue().abs().toString(); + + // start at the decimal point and fill in the numbers to the left, + // working our way from right to left + int i = separator - 1; + int j = unscaled.length() - number.scale() - 1; + for (; i >= 0; i--) { + char c = format.charAt(i); + maxLength++; + if (c == '9' || c == '0') { + if (j >= 0) { + char digit = unscaled.charAt(j); + output.insert(0, digit); + j--; + } else if (c == '0' && power == null) { + output.insert(0, '0'); + } + } else if (c == ',') { + // only add the grouping separator if we have more numbers + if (j >= 0 || (i > 0 && format.charAt(i - 1) == '0')) { + output.insert(0, c); + } + } else if (c == 'G' || c == 'g') { + // only add the grouping separator if we have more numbers + if (j >= 0 || (i > 0 && format.charAt(i - 1) == '0')) { + output.insert(0, localGrouping); + } + } else if (c == 'C' || c == 'c') { + Currency currency = Currency.getInstance(Locale.getDefault()); + output.insert(0, currency.getCurrencyCode()); + maxLength += 6; + } else if (c == 'L' || c == 'l' || c == 'U' || c == 'u') { + Currency currency = Currency.getInstance(Locale.getDefault()); + output.insert(0, currency.getSymbol()); + maxLength += 9; + } else if (c == '$') { + Currency currency = Currency.getInstance(Locale.getDefault()); + String cs = currency.getSymbol(); + output.insert(0, cs); + } else { + throw DbException.get( + ErrorCode.INVALID_TO_CHAR_FORMAT, originalFormat); + } + } + + // if the format (to the left of the decimal point) was too small + // to hold the number, return a big "######" string + if (j >= 0) { + return StringUtils.pad("", format.length() + 
1, "#", true); + } + + if (separator < format.length()) { + + // add the decimal point + maxLength++; + char pt = format.charAt(separator); + if (pt == 'd' || pt == 'D') { + output.append(localDecimal); + } else { + output.append(pt); + } + + // start at the decimal point and fill in the numbers to the right, + // working our way from left to right + i = separator + 1; + j = unscaled.length() - number.scale(); + for (; i < format.length(); i++) { + char c = format.charAt(i); + maxLength++; + if (c == '9' || c == '0') { + if (j < unscaled.length()) { + char digit = unscaled.charAt(j); + output.append(digit); + j++; + } else { + if (c == '0' || fillMode) { + output.append('0'); + } + } + } else { + throw DbException.get( + ErrorCode.INVALID_TO_CHAR_FORMAT, originalFormat); + } + } + } + + addSign(output, number.signum(), leadingSign, trailingSign, + trailingMinus, angleBrackets, fillMode); + + if (power != null) { + output.append('E'); + output.append(power < 0 ? '-' : '+'); + output.append(Math.abs(power) < 10 ? "0" : ""); + output.append(Math.abs(power)); + } + + if (fillMode) { + if (power != null) { + output.insert(0, ' '); + } else { + while (output.length() < maxLength) { + output.insert(0, ' '); + } + } + } + + return output.toString(); + } + + private static String zeroesAfterDecimalSeparator(BigDecimal number) { + final String numberStr = number.toPlainString(); + final int idx = numberStr.indexOf('.'); + if (idx < 0) { + return ""; + } + int i = idx + 1; + boolean allZeroes = true; + for (; i < numberStr.length(); i++) { + if (numberStr.charAt(i) != '0') { + allZeroes = false; + break; + } + } + final char[] zeroes = new char[allZeroes ? 
numberStr.length() - idx - 1: i - 1 - idx]; + Arrays.fill(zeroes, '0'); + return String.valueOf(zeroes); + } + + private static void addSign(StringBuilder output, int signum, + boolean leadingSign, boolean trailingSign, boolean trailingMinus, + boolean angleBrackets, boolean fillMode) { + if (angleBrackets) { + if (signum < 0) { + output.insert(0, '<'); + output.append('>'); + } else if (fillMode) { + output.insert(0, ' '); + output.append(' '); + } + } else { + String sign; + if (signum == 0) { + sign = ""; + } else if (signum < 0) { + sign = "-"; + } else { + if (leadingSign || trailingSign) { + sign = "+"; + } else if (fillMode) { + sign = " "; + } else { + sign = ""; + } + } + if (trailingMinus || trailingSign) { + output.append(sign); + } else { + output.insert(0, sign); + } + } + } + + private static int findDecimalSeparator(String format) { + int index = format.indexOf('.'); + if (index == -1) { + index = format.indexOf('D'); + if (index == -1) { + index = format.indexOf('d'); + if (index == -1) { + index = format.length(); + } + } + } + return index; + } + + private static int calculateScale(String format, int separator) { + int scale = 0; + for (int i = separator; i < format.length(); i++) { + char c = format.charAt(i); + if (c == '0' || c == '9') { + scale++; + } + } + return scale; + } + + private static String toRomanNumeral(int number) { + StringBuilder result = new StringBuilder(); + for (int i = 0; i < ROMAN_VALUES.length; i++) { + int value = ROMAN_VALUES[i]; + String numeral = ROMAN_NUMERALS[i]; + while (number >= value) { + result.append(numeral); + number -= value; + } + } + return result.toString(); + } + + private static String toHex(BigDecimal number, String format) { + + boolean fillMode = !StringUtils.toUpperEnglish(format).startsWith("FM"); + boolean uppercase = !format.contains("x"); + boolean zeroPadded = format.startsWith("0"); + int digits = 0; + for (int i = 0; i < format.length(); i++) { + char c = format.charAt(i); + if (c == '0' || 
c == 'X' || c == 'x') { + digits++; + } + } + + int i = number.setScale(0, BigDecimal.ROUND_HALF_UP).intValue(); + String hex = Integer.toHexString(i); + if (digits < hex.length()) { + hex = StringUtils.pad("", digits + 1, "#", true); + } else { + if (uppercase) { + hex = StringUtils.toUpperEnglish(hex); + } + if (zeroPadded) { + hex = StringUtils.pad(hex, digits, "0", false); + } + if (fillMode) { + hex = StringUtils.pad(hex, format.length() + 1, " ", false); + } + } + + return hex; + } + + /** + * Get the date (month / weekday / ...) names. + * + * @param names the field + * @return the names + */ + static String[] getDateNames(int names) { + String[][] result = NAMES; + if (result == null) { + result = new String[5][]; + DateFormatSymbols dfs = DateFormatSymbols.getInstance(); + result[MONTHS] = dfs.getMonths(); + String[] months = dfs.getShortMonths(); + for (int i = 0; i < 12; i++) { + String month = months[i]; + if (month.endsWith(".")) { + months[i] = month.substring(0, month.length() - 1); + } + } + result[SHORT_MONTHS] = months; + result[WEEKDAYS] = dfs.getWeekdays(); + result[SHORT_WEEKDAYS] = dfs.getShortWeekdays(); + result[AM_PM] = dfs.getAmPmStrings(); + NAMES = result; + } + return result[names]; + } + + /** + * Returns time zone display name or ID for the specified date-time value. 
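The greedy conversion used by `toRomanNumeral` above (backing the RN format element) can be checked in isolation; this sketch copies the same value/numeral tables under an illustrative class name.

```java
// Standalone copy of the greedy algorithm in toRomanNumeral above:
// repeatedly emit the largest Roman numeral whose value still fits.
public class RomanSketch {
    private static final int[] VALUES = {
            1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
    private static final String[] NUMERALS = {
            "M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I" };

    public static String toRoman(int number) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < VALUES.length; i++) {
            while (number >= VALUES[i]) {
                result.append(NUMERALS[i]);
                number -= VALUES[i];
            }
        }
        return result.toString();
    }
}
```

For example, `toRoman(1999)` returns `MCMXCIX`.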
+ * + * @param value + * value + * @param tzd + * if {@code true} return TZD (time zone region with Daylight Saving + * Time information included), if {@code false} return TZR (time zone + * region) + * @return time zone display name or ID + */ + private static String getTimeZone(Value value, boolean tzd) { + if (!(value instanceof ValueTimestampTimeZone)) { + TimeZone tz = TimeZone.getDefault(); + if (tzd) { + boolean daylight = tz.inDaylightTime(value.getTimestamp()); + return tz.getDisplayName(daylight, TimeZone.SHORT); + } + return tz.getID(); + } + return DateTimeUtils.timeZoneNameFromOffsetMins(((ValueTimestampTimeZone) value).getTimeZoneOffsetMins()); + } + + /** + * Emulates Oracle's TO_CHAR(datetime) function. + * + *

+ * <table border="1">
+ * <tr><th>Input</th><th>Output</th><th>Closest {@link SimpleDateFormat} Equivalent</th></tr>
+ * <tr><td>- / , . ; : "text"</td><td>Reproduced verbatim.</td><td>'text'</td></tr>
+ * <tr><td>A.D. AD B.C. BC</td><td>Era designator, with or without periods.</td><td>G</td></tr>
+ * <tr><td>A.M. AM P.M. PM</td><td>AM/PM marker.</td><td>a</td></tr>
+ * <tr><td>CC SCC</td><td>Century.</td><td>None.</td></tr>
+ * <tr><td>D</td><td>Day of week.</td><td>u</td></tr>
+ * <tr><td>DAY</td><td>Name of day.</td><td>EEEE</td></tr>
+ * <tr><td>DY</td><td>Abbreviated day name.</td><td>EEE</td></tr>
+ * <tr><td>DD</td><td>Day of month.</td><td>d</td></tr>
+ * <tr><td>DDD</td><td>Day of year.</td><td>D</td></tr>
+ * <tr><td>DL</td><td>Long date format.</td><td>EEEE, MMMM d, yyyy</td></tr>
+ * <tr><td>DS</td><td>Short date format.</td><td>MM/dd/yyyy</td></tr>
+ * <tr><td>E</td><td>Abbreviated era name (Japanese, Chinese, Thai)</td><td>None.</td></tr>
+ * <tr><td>EE</td><td>Full era name (Japanese, Chinese, Thai)</td><td>None.</td></tr>
+ * <tr><td>FF[1-9]</td><td>Fractional seconds.</td><td>S</td></tr>
+ * <tr><td>FM</td><td>Returns values with no leading or trailing spaces.</td><td>None.</td></tr>
+ * <tr><td>FX</td><td>Requires exact matches between character data and format model.</td><td>None.</td></tr>
+ * <tr><td>HH HH12</td><td>Hour in AM/PM (1-12).</td><td>hh</td></tr>
+ * <tr><td>HH24</td><td>Hour in day (0-23).</td><td>HH</td></tr>
+ * <tr><td>IW</td><td>Week in year.</td><td>w</td></tr>
+ * <tr><td>WW</td><td>Week in year.</td><td>w</td></tr>
+ * <tr><td>W</td><td>Week in month.</td><td>W</td></tr>
+ * <tr><td>IYYY IYY IY I</td><td>Last 4/3/2/1 digit(s) of ISO year.</td><td>yyyy yyy yy y</td></tr>
+ * <tr><td>RRRR RR</td><td>Last 4/2 digits of year.</td><td>yyyy yy</td></tr>
+ * <tr><td>Y,YYY</td><td>Year with comma.</td><td>None.</td></tr>
+ * <tr><td>YEAR SYEAR</td><td>Year spelled out (S prefixes BC years with minus sign).</td><td>None.</td></tr>
+ * <tr><td>YYYY SYYYY</td><td>4-digit year (S prefixes BC years with minus sign).</td><td>yyyy</td></tr>
+ * <tr><td>YYY YY Y</td><td>Last 3/2/1 digit(s) of year.</td><td>yyy yy y</td></tr>
+ * <tr><td>J</td><td>Julian day (number of days since January 1, 4712 BC).</td><td>None.</td></tr>
+ * <tr><td>MI</td><td>Minute in hour.</td><td>mm</td></tr>
+ * <tr><td>MM</td><td>Month in year.</td><td>MM</td></tr>
+ * <tr><td>MON</td><td>Abbreviated name of month.</td><td>MMM</td></tr>
+ * <tr><td>MONTH</td><td>Name of month, padded with spaces.</td><td>MMMM</td></tr>
+ * <tr><td>RM</td><td>Roman numeral month.</td><td>None.</td></tr>
+ * <tr><td>Q</td><td>Quarter of year.</td><td>None.</td></tr>
+ * <tr><td>SS</td><td>Seconds in minute.</td><td>ss</td></tr>
+ * <tr><td>SSSSS</td><td>Seconds in day.</td><td>None.</td></tr>
+ * <tr><td>TS</td><td>Short time format.</td><td>h:mm:ss aa</td></tr>
+ * <tr><td>TZD</td><td>Daylight savings time zone abbreviation.</td><td>z</td></tr>
+ * <tr><td>TZR</td><td>Time zone region information.</td><td>zzzz</td></tr>
+ * <tr><td>X</td><td>Local radix character.</td><td>None.</td></tr>
+ * </table>

    + * See also TO_CHAR(datetime) and datetime format models + * in the Oracle documentation. + * + * @param value the date-time value to format + * @param format the format pattern to use (if any) + * @param nlsParam the NLS parameter (if any) + * @return the formatted timestamp + */ + public static String toCharDateTime(Value value, String format, @SuppressWarnings("unused") String nlsParam) { + long[] a = DateTimeUtils.dateAndTimeFromValue(value); + long dateValue = a[0]; + long timeNanos = a[1]; + int year = DateTimeUtils.yearFromDateValue(dateValue); + int monthOfYear = DateTimeUtils.monthFromDateValue(dateValue); + int dayOfMonth = DateTimeUtils.dayFromDateValue(dateValue); + int posYear = Math.abs(year); + long second = timeNanos / 1_000_000_000; + int nanos = (int) (timeNanos - second * 1_000_000_000); + int minute = (int) (second / 60); + second -= minute * 60; + int hour = minute / 60; + minute -= hour * 60; + int h12 = (hour + 11) % 12 + 1; + boolean isAM = hour < 12; + if (format == null) { + format = "DD-MON-YY HH.MI.SS.FF PM"; + } + + StringBuilder output = new StringBuilder(); + boolean fillMode = true; + + for (int i = 0; i < format.length();) { + + Capitalization cap; + + // AD / BC + + if ((cap = containsAt(format, i, "A.D.", "B.C.")) != null) { + String era = year > 0 ? "A.D." : "B.C."; + output.append(cap.apply(era)); + i += 4; + } else if ((cap = containsAt(format, i, "AD", "BC")) != null) { + String era = year > 0 ? "AD" : "BC"; + output.append(cap.apply(era)); + i += 2; + + // AM / PM + + } else if ((cap = containsAt(format, i, "A.M.", "P.M.")) != null) { + String am = isAM ? "A.M." : "P.M."; + output.append(cap.apply(am)); + i += 4; + } else if ((cap = containsAt(format, i, "AM", "PM")) != null) { + String am = isAM ? 
"AM" : "PM"; + output.append(cap.apply(am)); + i += 2; + + // Long/short date/time format + + } else if (containsAt(format, i, "DL") != null) { + String day = getDateNames(WEEKDAYS)[DateTimeUtils.getSundayDayOfWeek(dateValue)]; + String month = getDateNames(MONTHS)[monthOfYear - 1]; + output.append(day).append(", ").append(month).append(' ').append(dayOfMonth).append(", "); + StringUtils.appendZeroPadded(output, 4, posYear); + i += 2; + } else if (containsAt(format, i, "DS") != null) { + StringUtils.appendZeroPadded(output, 2, monthOfYear); + output.append('/'); + StringUtils.appendZeroPadded(output, 2, dayOfMonth); + output.append('/'); + StringUtils.appendZeroPadded(output, 4, posYear); + i += 2; + } else if (containsAt(format, i, "TS") != null) { + output.append(h12).append(':'); + StringUtils.appendZeroPadded(output, 2, minute); + output.append(':'); + StringUtils.appendZeroPadded(output, 2, second); + output.append(' '); + output.append(getDateNames(AM_PM)[isAM ? 0 : 1]); + i += 2; + + // Day + + } else if (containsAt(format, i, "DDD") != null) { + output.append(DateTimeUtils.getDayOfYear(dateValue)); + i += 3; + } else if (containsAt(format, i, "DD") != null) { + StringUtils.appendZeroPadded(output, 2, dayOfMonth); + i += 2; + } else if ((cap = containsAt(format, i, "DY")) != null) { + String day = getDateNames(SHORT_WEEKDAYS)[DateTimeUtils.getSundayDayOfWeek(dateValue)]; + output.append(cap.apply(day)); + i += 2; + } else if ((cap = containsAt(format, i, "DAY")) != null) { + String day = getDateNames(WEEKDAYS)[DateTimeUtils.getSundayDayOfWeek(dateValue)]; + if (fillMode) { + day = StringUtils.pad(day, "Wednesday".length(), " ", true); + } + output.append(cap.apply(day)); + i += 3; + } else if (containsAt(format, i, "D") != null) { + output.append(DateTimeUtils.getSundayDayOfWeek(dateValue)); + i += 1; + } else if (containsAt(format, i, "J") != null) { + output.append(DateTimeUtils.absoluteDayFromDateValue(dateValue) - JULIAN_EPOCH); + i += 1; + + // Hours + 
+ } else if (containsAt(format, i, "HH24") != null) { + StringUtils.appendZeroPadded(output, 2, hour); + i += 4; + } else if (containsAt(format, i, "HH12") != null) { + StringUtils.appendZeroPadded(output, 2, h12); + i += 4; + } else if (containsAt(format, i, "HH") != null) { + StringUtils.appendZeroPadded(output, 2, h12); + i += 2; + + // Minutes + + } else if (containsAt(format, i, "MI") != null) { + StringUtils.appendZeroPadded(output, 2, minute); + i += 2; + + // Seconds + + } else if (containsAt(format, i, "SSSSS") != null) { + int seconds = (int) (timeNanos / 1_000_000_000); + output.append(seconds); + i += 5; + } else if (containsAt(format, i, "SS") != null) { + StringUtils.appendZeroPadded(output, 2, second); + i += 2; + + // Fractional seconds + + } else if (containsAt(format, i, "FF1", "FF2", + "FF3", "FF4", "FF5", "FF6", "FF7", "FF8", "FF9") != null) { + int x = format.charAt(i + 2) - '0'; + int ff = (int) (nanos * Math.pow(10, x - 9)); + StringUtils.appendZeroPadded(output, x, ff); + i += 3; + } else if (containsAt(format, i, "FF") != null) { + StringUtils.appendZeroPadded(output, 9, nanos); + i += 2; + + // Time zone + + } else if (containsAt(format, i, "TZR") != null) { + output.append(getTimeZone(value, false)); + i += 3; + } else if (containsAt(format, i, "TZD") != null) { + output.append(getTimeZone(value, true)); + i += 3; + + // Week + + } else if (containsAt(format, i, "IW", "WW") != null) { + output.append(DateTimeUtils.getWeekOfYear(dateValue, 0, 1)); + i += 2; + } else if (containsAt(format, i, "W") != null) { + int w = 1 + dayOfMonth / 7; + output.append(w); + i += 1; + + // Year + + } else if (containsAt(format, i, "Y,YYY") != null) { + output.append(new DecimalFormat("#,###").format(posYear)); + i += 5; + } else if (containsAt(format, i, "SYYYY") != null) { + // Should be <= 0, but Oracle prints negative years with off-by-one difference + if (year < 0) { + output.append('-'); + } + StringUtils.appendZeroPadded(output, 4, posYear); + i += 
5; + } else if (containsAt(format, i, "YYYY", "RRRR") != null) { + StringUtils.appendZeroPadded(output, 4, posYear); + i += 4; + } else if (containsAt(format, i, "IYYY") != null) { + StringUtils.appendZeroPadded(output, 4, Math.abs(DateTimeUtils.getIsoWeekYear(dateValue))); + i += 4; + } else if (containsAt(format, i, "YYY") != null) { + StringUtils.appendZeroPadded(output, 3, posYear % 1000); + i += 3; + } else if (containsAt(format, i, "IYY") != null) { + StringUtils.appendZeroPadded(output, 3, Math.abs(DateTimeUtils.getIsoWeekYear(dateValue)) % 1000); + i += 3; + } else if (containsAt(format, i, "YY", "RR") != null) { + StringUtils.appendZeroPadded(output, 2, posYear % 100); + i += 2; + } else if (containsAt(format, i, "IY") != null) { + StringUtils.appendZeroPadded(output, 2, Math.abs(DateTimeUtils.getIsoWeekYear(dateValue)) % 100); + i += 2; + } else if (containsAt(format, i, "Y") != null) { + output.append(posYear % 10); + i += 1; + } else if (containsAt(format, i, "I") != null) { + output.append(Math.abs(DateTimeUtils.getIsoWeekYear(dateValue)) % 10); + i += 1; + + // Month / quarter + + } else if ((cap = containsAt(format, i, "MONTH")) != null) { + String month = getDateNames(MONTHS)[monthOfYear - 1]; + if (fillMode) { + month = StringUtils.pad(month, "September".length(), " ", true); + } + output.append(cap.apply(month)); + i += 5; + } else if ((cap = containsAt(format, i, "MON")) != null) { + String month = getDateNames(SHORT_MONTHS)[monthOfYear - 1]; + output.append(cap.apply(month)); + i += 3; + } else if (containsAt(format, i, "MM") != null) { + StringUtils.appendZeroPadded(output, 2, monthOfYear); + i += 2; + } else if ((cap = containsAt(format, i, "RM")) != null) { + output.append(cap.apply(toRomanNumeral(monthOfYear))); + i += 2; + } else if (containsAt(format, i, "Q") != null) { + int q = 1 + ((monthOfYear - 1) / 3); + output.append(q); + i += 1; + + // Local radix character + + } else if (containsAt(format, i, "X") != null) { + char c = 
DecimalFormatSymbols.getInstance().getDecimalSeparator(); + output.append(c); + i += 1; + + // Format modifiers + + } else if (containsAt(format, i, "FM") != null) { + fillMode = !fillMode; + i += 2; + } else if (containsAt(format, i, "FX") != null) { + i += 2; + + // Literal text + + } else if (containsAt(format, i, "\"") != null) { + for (i = i + 1; i < format.length(); i++) { + char c = format.charAt(i); + if (c != '"') { + output.append(c); + } else { + i++; + break; + } + } + } else if (format.charAt(i) == '-' + || format.charAt(i) == '/' + || format.charAt(i) == ',' + || format.charAt(i) == '.' + || format.charAt(i) == ';' + || format.charAt(i) == ':' + || format.charAt(i) == ' ') { + output.append(format.charAt(i)); + i += 1; + + // Anything else + + } else { + throw DbException.get(ErrorCode.INVALID_TO_CHAR_FORMAT, format); + } + } + + return output.toString(); + } + + /** + * Returns a capitalization strategy if the specified string contains any of + * the specified substrings at the specified index. The capitalization + * strategy indicates the casing of the substring that was found. If none of + * the specified substrings are found, this method returns null + * . + * + * @param s the string to check + * @param index the index to check at + * @param substrings the substrings to check for within the string + * @return a capitalization strategy if the specified string contains any of + * the specified substrings at the specified index, + * null otherwise + */ + private static Capitalization containsAt(String s, int index, + String... 
substrings) { + for (String substring : substrings) { + if (index + substring.length() <= s.length()) { + boolean found = true; + Boolean up1 = null; + Boolean up2 = null; + for (int i = 0; i < substring.length(); i++) { + char c1 = s.charAt(index + i); + char c2 = substring.charAt(i); + if (c1 != c2 && Character.toUpperCase(c1) != Character.toUpperCase(c2)) { + found = false; + break; + } else if (Character.isLetter(c1)) { + if (up1 == null) { + up1 = Character.isUpperCase(c1); + } else if (up2 == null) { + up2 = Character.isUpperCase(c1); + } + } + } + if (found) { + return Capitalization.toCapitalization(up1, up2); + } + } + } + return null; + } + + /** Represents a capitalization / casing strategy. */ + public enum Capitalization { + + /** + * All letters are uppercased. + */ + UPPERCASE, + + /** + * All letters are lowercased. + */ + LOWERCASE, + + /** + * The string is capitalized (first letter uppercased, subsequent + * letters lowercased). + */ + CAPITALIZE; + + /** + * Returns the capitalization / casing strategy which should be used + * when the first and second letters have the specified casing. + * + * @param up1 whether or not the first letter is uppercased + * @param up2 whether or not the second letter is uppercased + * @return the capitalization / casing strategy which should be used + * when the first and second letters have the specified casing + */ + static Capitalization toCapitalization(Boolean up1, Boolean up2) { + if (up1 == null) { + return Capitalization.CAPITALIZE; + } else if (up2 == null) { + return up1 ? Capitalization.UPPERCASE : Capitalization.LOWERCASE; + } else if (up1) { + return up2 ? Capitalization.UPPERCASE : Capitalization.CAPITALIZE; + } else { + return Capitalization.LOWERCASE; + } + } + + /** + * Applies this capitalization strategy to the specified string. 
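The two-letter casing detection and the three strategies implemented by `Capitalization` above can be sketched independently; the wrapper class name here is illustrative, and the JDK's `String` casing methods stand in for H2's `StringUtils` helpers.

```java
// Standalone sketch of the Capitalization strategy above: the casing of
// the first two letters of a matched format element (e.g. "Mon", "MON",
// "mon") decides how the emitted word is cased.
public class CapitalizationSketch {
    public enum Capitalization {
        UPPERCASE, LOWERCASE, CAPITALIZE;

        // up1/up2: whether the first/second letter is uppercase (null if absent)
        public static Capitalization of(Boolean up1, Boolean up2) {
            if (up1 == null) return CAPITALIZE;
            if (up2 == null) return up1 ? UPPERCASE : LOWERCASE;
            if (up1) return up2 ? UPPERCASE : CAPITALIZE;
            return LOWERCASE;
        }

        public String apply(String s) {
            if (s == null || s.isEmpty()) return s;
            switch (this) {
                case UPPERCASE: return s.toUpperCase(java.util.Locale.ENGLISH);
                case LOWERCASE: return s.toLowerCase(java.util.Locale.ENGLISH);
                default: return Character.toUpperCase(s.charAt(0))
                        + s.substring(1).toLowerCase(java.util.Locale.ENGLISH);
            }
        }
    }
}
```

For example, a "Mon"-style match (first letter upper, second lower) yields CAPITALIZE, so `apply("JANUARY")` returns `January`.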
+ * + * @param s the string to apply this strategy to + * @return the resultant string + */ + public String apply(String s) { + if (s == null || s.isEmpty()) { + return s; + } + switch (this) { + case UPPERCASE: + return StringUtils.toUpperEnglish(s); + case LOWERCASE: + return StringUtils.toLowerEnglish(s); + case CAPITALIZE: + return Character.toUpperCase(s.charAt(0)) + + (s.length() > 1 ? StringUtils.toLowerEnglish(s).substring(1) : ""); + default: + throw new IllegalArgumentException( + "Unknown capitalization strategy: " + this); + } + } + } +} diff --git a/modules/h2/src/main/java/org/h2/util/ToDateParser.java b/modules/h2/src/main/java/org/h2/util/ToDateParser.java new file mode 100644 index 0000000000000..2e8768ea84ab3 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ToDateParser.java @@ -0,0 +1,372 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Daniel Gredler + */ +package org.h2.util; + +import static java.lang.String.format; + +import java.util.Calendar; +import java.util.GregorianCalendar; +import java.util.List; +import java.util.TimeZone; + +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + * Emulates Oracle's TO_DATE function.
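The Capitalization enum just added in ToChar can be exercised in isolation. A minimal standalone sketch (CapitalizationDemo is a hypothetical class, not part of H2; it re-implements the rule that toCapitalization and apply encode: the casing of the first two letters of a format token selects the output casing):

```java
public class CapitalizationDemo {
    enum Cap { UPPERCASE, LOWERCASE, CAPITALIZE }

    // Inspect the first two letters of the token, as toCapitalization does.
    static Cap fromToken(String token) {
        Boolean up1 = null, up2 = null;
        for (int i = 0; i < token.length() && up2 == null; i++) {
            char c = token.charAt(i);
            if (Character.isLetter(c)) {
                if (up1 == null) up1 = Character.isUpperCase(c);
                else up2 = Character.isUpperCase(c);
            }
        }
        if (up1 == null) return Cap.CAPITALIZE;                    // no letters at all
        if (up2 == null) return up1 ? Cap.UPPERCASE : Cap.LOWERCASE;
        if (up1) return up2 ? Cap.UPPERCASE : Cap.CAPITALIZE;
        return Cap.LOWERCASE;
    }

    static String apply(Cap cap, String s) {
        switch (cap) {
            case UPPERCASE: return s.toUpperCase();
            case LOWERCASE: return s.toLowerCase();
            default: return Character.toUpperCase(s.charAt(0)) + s.substring(1).toLowerCase();
        }
    }

    public static void main(String[] args) {
        System.out.println(apply(fromToken("MONTH"), "january")); // JANUARY
        System.out.println(apply(fromToken("Month"), "JANUARY")); // January
        System.out.println(apply(fromToken("month"), "January")); // january
    }
}
```

So "MONTH", "Month" and "month" in a TO_CHAR format string yield upper, capitalized and lower output respectively.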
+ * This class holds and handles the input data from the TO_DATE method + */ +public class ToDateParser { + private final String unmodifiedInputStr; + private final String unmodifiedFormatStr; + private final ConfigParam functionName; + private String inputStr; + private String formatStr; + + private boolean doyValid = false, absoluteDayValid = false, + hour12Valid = false, + timeZoneHMValid = false; + + private boolean bc; + + private long absoluteDay; + + private int year, month, day = 1; + + private int dayOfYear; + + private int hour, minute, second, nanos; + + private int hour12; + + private boolean isAM = true; + + private TimeZone timeZone; + + private int timeZoneHour, timeZoneMinute; + + private int currentYear, currentMonth; + + /** + * @param input the input date with the date-time info + * @param format the format of date-time info + * @param functionName one of [TO_DATE, TO_TIMESTAMP] (both share the same + * code) + */ + private ToDateParser(ConfigParam functionName, String input, String format) { + this.functionName = functionName; + inputStr = input.trim(); + // Keep a copy + unmodifiedInputStr = inputStr; + if (format == null || format.isEmpty()) { + // default Oracle format.
+ formatStr = functionName.getDefaultFormatStr(); + } else { + formatStr = format.trim(); + } + // Keep a copy + unmodifiedFormatStr = formatStr; + } + + private static ToDateParser getTimestampParser(ConfigParam param, String input, String format) { + ToDateParser result = new ToDateParser(param, input, format); + parse(result); + return result; + } + + private ValueTimestamp getResultingValue() { + long dateValue; + if (absoluteDayValid) { + dateValue = DateTimeUtils.dateValueFromAbsoluteDay(absoluteDay); + } else { + int year = this.year; + if (year == 0) { + year = getCurrentYear(); + } + if (bc) { + year = 1 - year; + } + if (doyValid) { + dateValue = DateTimeUtils.dateValueFromAbsoluteDay( + DateTimeUtils.absoluteDayFromYear(year) + dayOfYear - 1); + } else { + int month = this.month; + if (month == 0) { + // Oracle uses current month as default + month = getCurrentMonth(); + } + dateValue = DateTimeUtils.dateValue(year, month, day); + } + } + int hour; + if (hour12Valid) { + hour = hour12 % 12; + if (!isAM) { + hour += 12; + } + } else { + hour = this.hour; + } + long timeNanos = ((((hour * 60) + minute) * 60) + second) * 1_000_000_000L + nanos; + return ValueTimestamp.fromDateValueAndNanos(dateValue, timeNanos); + } + + private ValueTimestampTimeZone getResultingValueWithTimeZone() { + ValueTimestamp ts = getResultingValue(); + long dateValue = ts.getDateValue(); + short offset; + if (timeZoneHMValid) { + offset = (short) (timeZoneHour * 60 + ((timeZoneHour >= 0) ? 
timeZoneMinute : -timeZoneMinute)); + } else { + TimeZone timeZone = this.timeZone; + if (timeZone == null) { + timeZone = TimeZone.getDefault(); + } + long millis = DateTimeUtils.convertDateTimeValueToMillis(timeZone, dateValue, nanos / 1_000_000); + offset = (short) (timeZone.getOffset(millis) / 60_000); + } + return ValueTimestampTimeZone.fromDateValueAndNanos(dateValue, ts.getTimeNanos(), offset); + } + + String getInputStr() { + return inputStr; + } + + String getFormatStr() { + return formatStr; + } + + String getFunctionName() { + return functionName.name(); + } + + private void queryCurrentYearAndMonth() { + GregorianCalendar gc = DateTimeUtils.getCalendar(); + gc.setTimeInMillis(System.currentTimeMillis()); + currentYear = gc.get(Calendar.YEAR); + currentMonth = gc.get(Calendar.MONTH) + 1; + } + + int getCurrentYear() { + if (currentYear == 0) { + queryCurrentYearAndMonth(); + } + return currentYear; + } + + int getCurrentMonth() { + if (currentMonth == 0) { + queryCurrentYearAndMonth(); + } + return currentMonth; + } + + void setAbsoluteDay(int absoluteDay) { + doyValid = false; + absoluteDayValid = true; + this.absoluteDay = absoluteDay; + } + + void setBC(boolean bc) { + doyValid = false; + absoluteDayValid = false; + this.bc = bc; + } + + void setYear(int year) { + doyValid = false; + absoluteDayValid = false; + this.year = year; + } + + void setMonth(int month) { + doyValid = false; + absoluteDayValid = false; + this.month = month; + if (year == 0) { + year = 1970; + } + } + + void setDay(int day) { + doyValid = false; + absoluteDayValid = false; + this.day = day; + if (year == 0) { + year = 1970; + } + } + + void setDayOfYear(int dayOfYear) { + doyValid = true; + absoluteDayValid = false; + this.dayOfYear = dayOfYear; + } + + void setHour(int hour) { + hour12Valid = false; + this.hour = hour; + } + + void setMinute(int minute) { + this.minute = minute; + } + + void setSecond(int second) { + this.second = second; + } + + void setNanos(int nanos) { + 
this.nanos = nanos; + } + + void setAmPm(boolean isAM) { + hour12Valid = true; + this.isAM = isAM; + } + + void setHour12(int hour12) { + hour12Valid = true; + this.hour12 = hour12; + } + + void setTimeZone(TimeZone timeZone) { + timeZoneHMValid = false; + this.timeZone = timeZone; + } + + void setTimeZoneHour(int timeZoneHour) { + timeZoneHMValid = true; + this.timeZoneHour = timeZoneHour; + } + + void setTimeZoneMinute(int timeZoneMinute) { + timeZoneHMValid = true; + this.timeZoneMinute = timeZoneMinute; + } + + private boolean hasToParseData() { + return formatStr.length() > 0; + } + + private void removeFirstChar() { + if (!formatStr.isEmpty()) { + formatStr = formatStr.substring(1); + } + if (!inputStr.isEmpty()) { + inputStr = inputStr.substring(1); + } + } + + private static ToDateParser parse(ToDateParser p) { + while (p.hasToParseData()) { + List tokenList = + ToDateTokenizer.FormatTokenEnum.getTokensInQuestion(p.getFormatStr()); + if (tokenList.isEmpty()) { + p.removeFirstChar(); + continue; + } + boolean foundAnToken = false; + for (ToDateTokenizer.FormatTokenEnum token : tokenList) { + if (token.parseFormatStrWithToken(p)) { + foundAnToken = true; + break; + } + } + if (!foundAnToken) { + p.removeFirstChar(); + } + } + return p; + } + + /** + * Remove a token from a string. + * + * @param inputFragmentStr the input fragment + * @param formatFragment the format fragment + */ + void remove(String inputFragmentStr, String formatFragment) { + if (inputFragmentStr != null && inputStr.length() >= inputFragmentStr.length()) { + inputStr = inputStr.substring(inputFragmentStr.length()); + } + if (formatFragment != null && formatStr.length() >= formatFragment.length()) { + formatStr = formatStr.substring(formatFragment.length()); + } + } + + @Override + public String toString() { + int inputStrLen = inputStr.length(); + int orgInputLen = unmodifiedInputStr.length(); + int currentInputPos = orgInputLen - inputStrLen; + int restInputLen = inputStrLen <= 0 ? 
inputStrLen : inputStrLen - 1; + + int orgFormatLen = unmodifiedFormatStr.length(); + int currentFormatPos = orgFormatLen - formatStr.length(); + + return format("\n %s('%s', '%s')", functionName, unmodifiedInputStr, unmodifiedFormatStr) + + format("\n %s^%s , %s^ <-- Parsing failed at this point", + format("%" + (functionName.name().length() + currentInputPos) + "s", ""), + restInputLen <= 0 ? "" : format("%" + restInputLen + "s", ""), + currentFormatPos <= 0 ? "" : format("%" + currentFormatPos + "s", "")); + } + + /** + * Parse a string as a timestamp with the given format. + * + * @param input the input + * @param format the format + * @return the timestamp + */ + public static ValueTimestamp toTimestamp(String input, String format) { + ToDateParser parser = getTimestampParser(ConfigParam.TO_TIMESTAMP, input, format); + return parser.getResultingValue(); + } + + /** + * Parse a string as a timestamp with the given format. + * + * @param input the input + * @param format the format + * @return the timestamp + */ + public static ValueTimestampTimeZone toTimestampTz(String input, String format) { + ToDateParser parser = getTimestampParser(ConfigParam.TO_TIMESTAMP_TZ, input, format); + return parser.getResultingValueWithTimeZone(); + } + + /** + * Parse a string as a date with the given format. + * + * @param input the input + * @param format the format + * @return the date as a timestamp + */ + public static ValueTimestamp toDate(String input, String format) { + ToDateParser parser = getTimestampParser(ConfigParam.TO_DATE, input, format); + return parser.getResultingValue(); + } + + /** + * The configuration of the date parser. 
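The ConfigParam enum below supplies the default Oracle format for each function ("DD MON YYYY", "DD MON YYYY HH:MI:SS", and the TZR variant). As a rough illustration only, a hedged java.time analogue (DefaultFormatDemo and its pattern string are my approximation, not H2 code; Oracle's HH is a 12-hour field, and 24-hour HH is used here purely to avoid needing an AM/PM marker):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class DefaultFormatDemo {
    // Approximate java.time equivalent of TO_TIMESTAMP's default
    // "DD MON YYYY HH:MI:SS" format.
    static LocalDateTime parseDefault(String s) {
        DateTimeFormatter f =
                DateTimeFormatter.ofPattern("dd MMM uuuu HH:mm:ss", Locale.ENGLISH);
        return LocalDateTime.parse(s, f);
    }

    public static void main(String[] args) {
        System.out.println(parseDefault("01 Jan 2020 11:30:15")); // 2020-01-01T11:30:15
    }
}
```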
+ */ + private enum ConfigParam { + TO_DATE("DD MON YYYY"), + TO_TIMESTAMP("DD MON YYYY HH:MI:SS"), + TO_TIMESTAMP_TZ("DD MON YYYY HH:MI:SS TZR"); + + private final String defaultFormatStr; + ConfigParam(String defaultFormatStr) { + this.defaultFormatStr = defaultFormatStr; + } + String getDefaultFormatStr() { + return defaultFormatStr; + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/ToDateTokenizer.java b/modules/h2/src/main/java/org/h2/util/ToDateTokenizer.java new file mode 100644 index 0000000000000..ad0ff818d0a2e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ToDateTokenizer.java @@ -0,0 +1,725 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Daniel Gredler + */ +package org.h2.util; + +import static java.lang.String.format; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.TimeZone; +import java.util.regex.Matcher; +import java.util.regex.Pattern; +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * Emulates Oracle's TO_DATE function. This class knows all about the + * TO_DATE-format conventions and how to parse the corresponding data. + */ +class ToDateTokenizer { + + /** + * The pattern for inline text (a double-quoted literal). + */ + static final Pattern PATTERN_INLINE = Pattern.compile("(\"[^\"]*\")"); + + /** + * The pattern for a number. + */ + static final Pattern PATTERN_NUMBER = Pattern.compile("^([+-]?[0-9]+)"); + + /** + * The pattern for four digits (typically a year). + */ + static final Pattern PATTERN_FOUR_DIGITS = Pattern + .compile("^([+-]?[0-9]{4})"); + + /** + * The pattern for 2-4 digits (e.g. for RRRR). + */ + static final Pattern PATTERN_TWO_TO_FOUR_DIGITS = Pattern + .compile("^([+-]?[0-9]{2,4})"); + /** + * The pattern for three digits.
+ */ + static final Pattern PATTERN_THREE_DIGITS = Pattern + .compile("^([+-]?[0-9]{3})"); + + /** + * The pattern for two digits. + */ + static final Pattern PATTERN_TWO_DIGITS = Pattern + .compile("^([+-]?[0-9]{2})"); + + /** + * The pattern for one or two digits. + */ + static final Pattern PATTERN_TWO_DIGITS_OR_LESS = Pattern + .compile("^([+-]?[0-9][0-9]?)"); + + /** + * The pattern for one digit. + */ + static final Pattern PATTERN_ONE_DIGIT = Pattern.compile("^([+-]?[0-9])"); + + /** + * The pattern for a fraction (of a second for example). + */ + static final Pattern PATTERN_FF = Pattern.compile("^(FF[0-9]?)", + Pattern.CASE_INSENSITIVE); + + /** + * The pattern for "am" or "pm". + */ + static final Pattern PATTERN_AM_PM = Pattern + .compile("^(AM|A\\.M\\.|PM|P\\.M\\.)", Pattern.CASE_INSENSITIVE); + + /** + * The pattern for "bc" or "ad". + */ + static final Pattern PATTERN_BC_AD = Pattern + .compile("^(BC|B\\.C\\.|AD|A\\.D\\.)", Pattern.CASE_INSENSITIVE); + + /** + * The parslet for a year. + */ + static final YearParslet PARSLET_YEAR = new YearParslet(); + + /** + * The parslet for a month. + */ + static final MonthParslet PARSLET_MONTH = new MonthParslet(); + + /** + * The parslet for a day. + */ + static final DayParslet PARSLET_DAY = new DayParslet(); + + /** + * The parslet for time. + */ + static final TimeParslet PARSLET_TIME = new TimeParslet(); + + /** + * The inline parslet. E.g. 'YYYY-MM-DD"T"HH24:MI:SS"Z"' where "T" and "Z" + * are inlined + */ + static final InlineParslet PARSLET_INLINE = new InlineParslet(); + + /** + * Interface of the classes that can parse a specialized small bit of the + * TO_DATE format-string. + */ + interface ToDateParslet { + + /** + * Parse a date part. 
+ * + * @param params the parameters that contains the string + * @param formatTokenEnum the format + * @param formatTokenStr the format string + */ + void parse(ToDateParser params, FormatTokenEnum formatTokenEnum, + String formatTokenStr); + } + + /** + * Parslet responsible for parsing year parameter + */ + static class YearParslet implements ToDateParslet { + + @Override + public void parse(ToDateParser params, FormatTokenEnum formatTokenEnum, + String formatTokenStr) { + String inputFragmentStr = null; + int dateNr = 0; + switch (formatTokenEnum) { + case SYYYY: + case YYYY: + inputFragmentStr = matchStringOrThrow(PATTERN_FOUR_DIGITS, + params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + // Gregorian calendar does not have a year 0. + // 0 = 0001 BC, -1 = 0002 BC, ... so we adjust + if (dateNr == 0) { + throwException(params, "Year may not be zero"); + } + params.setYear(dateNr >= 0 ? dateNr : dateNr + 1); + break; + case YYY: + inputFragmentStr = matchStringOrThrow(PATTERN_THREE_DIGITS, + params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + if (dateNr > 999) { + throwException(params, "Year may have only three digits with specified format"); + } + dateNr += (params.getCurrentYear() / 1_000) * 1_000; + // Gregorian calendar does not have a year 0. + // 0 = 0001 BC, -1 = 0002 BC, ... so we adjust + params.setYear(dateNr >= 0 ? 
dateNr : dateNr + 1); + break; + case RRRR: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_TO_FOUR_DIGITS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + if (inputFragmentStr.length() < 4) { + if (dateNr < 50) { + dateNr += 2000; + } else if (dateNr < 100) { + dateNr += 1900; + } + } + if (dateNr == 0) { + throwException(params, "Year may not be zero"); + } + params.setYear(dateNr); + break; + case RR: + int cc = params.getCurrentYear() / 100; + inputFragmentStr = matchStringOrThrow(PATTERN_TWO_DIGITS, + params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr) + cc * 100; + params.setYear(dateNr); + break; + case EE /* NOT supported yet */: + throwException(params, format("token '%s' not supported yet.", + formatTokenEnum.name())); + break; + case E /* NOT supported yet */: + throwException(params, format("token '%s' not supported yet.", + formatTokenEnum.name())); + break; + case YY: + inputFragmentStr = matchStringOrThrow(PATTERN_TWO_DIGITS, + params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + if (dateNr > 99) { + throwException(params, "Year may have only two digits with specified format"); + } + dateNr += (params.getCurrentYear() / 100) * 100; + // Gregorian calendar does not have a year 0. + // 0 = 0001 BC, -1 = 0002 BC, ... so we adjust + params.setYear(dateNr >= 0 ? dateNr : dateNr + 1); + break; + case SCC: + case CC: + inputFragmentStr = matchStringOrThrow(PATTERN_TWO_DIGITS, + params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr) * 100; + params.setYear(dateNr); + break; + case Y: + inputFragmentStr = matchStringOrThrow(PATTERN_ONE_DIGIT, params, + formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + if (dateNr > 9) { + throwException(params, "Year may have only two digits with specified format"); + } + dateNr += (params.getCurrentYear() / 10) * 10; + // Gregorian calendar does not have a year 0. + // 0 = 0001 BC, -1 = 0002 BC, ... 
so we adjust + params.setYear(dateNr >= 0 ? dateNr : dateNr + 1); + break; + case BC_AD: + inputFragmentStr = matchStringOrThrow(PATTERN_BC_AD, params, + formatTokenEnum); + params.setBC(inputFragmentStr.toUpperCase().startsWith("B")); + break; + default: + throw new IllegalArgumentException(format( + "%s: Internal Error. Unhandled case: %s", + this.getClass().getSimpleName(), formatTokenEnum)); + } + params.remove(inputFragmentStr, formatTokenStr); + } + } + + /** + * Parslet responsible for parsing month parameter + */ + static class MonthParslet implements ToDateParslet { + private static final String[] ROMAN_MONTH = { "I", "II", "III", "IV", + "V", "VI", "VII", "VIII", "IX", "X", "XI", "XII" }; + + @Override + public void parse(ToDateParser params, FormatTokenEnum formatTokenEnum, + String formatTokenStr) { + String s = params.getInputStr(); + String inputFragmentStr = null; + int dateNr = 0; + switch (formatTokenEnum) { + case MONTH: + inputFragmentStr = setByName(params, ToChar.MONTHS); + break; + case Q /* NOT supported yet */: + throwException(params, format("token '%s' not supported yet.", + formatTokenEnum.name())); + break; + case MON: + inputFragmentStr = setByName(params, ToChar.SHORT_MONTHS); + break; + case MM: + // Note: in Calendar, months go from 0 - 11 + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setMonth(dateNr); + break; + case RM: + dateNr = 0; + for (String monthName : ROMAN_MONTH) { + dateNr++; + int len = monthName.length(); + if (s.length() >= len && monthName + .equalsIgnoreCase(s.substring(0, len))) { + // dateNr is already 1-based; keep scanning so the + // longest matching numeral wins (e.g. "III" over "I") + params.setMonth(dateNr); + inputFragmentStr = monthName; + } + } + if (inputFragmentStr == null || inputFragmentStr.isEmpty()) { + throwException(params, + format("Issue happened when parsing token '%s'.
" + + "Expected one of: %s", + formatTokenEnum.name(), + Arrays.toString(ROMAN_MONTH))); + } + break; + default: + throw new IllegalArgumentException(format( + "%s: Internal Error. Unhandled case: %s", + this.getClass().getSimpleName(), formatTokenEnum)); + } + params.remove(inputFragmentStr, formatTokenStr); + } + } + + /** + * Parslet responsible for parsing day parameter + */ + static class DayParslet implements ToDateParslet { + @Override + public void parse(ToDateParser params, FormatTokenEnum formatTokenEnum, + String formatTokenStr) { + String inputFragmentStr = null; + int dateNr = 0; + switch (formatTokenEnum) { + case DDD: + inputFragmentStr = matchStringOrThrow(PATTERN_NUMBER, params, + formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setDayOfYear(dateNr); + break; + case DD: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setDay(dateNr); + break; + case D: + inputFragmentStr = matchStringOrThrow(PATTERN_ONE_DIGIT, params, + formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setDay(dateNr); + break; + case DAY: + inputFragmentStr = setByName(params, ToChar.WEEKDAYS); + break; + case DY: + inputFragmentStr = setByName(params, ToChar.SHORT_WEEKDAYS); + break; + case J: + inputFragmentStr = matchStringOrThrow(PATTERN_NUMBER, params, + formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setAbsoluteDay(dateNr + ToChar.JULIAN_EPOCH); + break; + default: + throw new IllegalArgumentException(format( + "%s: Internal Error. 
Unhandled case: %s", + this.getClass().getSimpleName(), formatTokenEnum)); + } + params.remove(inputFragmentStr, formatTokenStr); + } + } + + /** + * Parslet responsible for parsing time parameter + */ + static class TimeParslet implements ToDateParslet { + + @Override + public void parse(ToDateParser params, FormatTokenEnum formatTokenEnum, + String formatTokenStr) { + String inputFragmentStr = null; + int dateNr = 0; + switch (formatTokenEnum) { + case HH24: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setHour(dateNr); + break; + case HH12: + case HH: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setHour12(dateNr); + break; + case MI: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setMinute(dateNr); + break; + case SS: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setSecond(dateNr); + break; + case SSSSS: { + inputFragmentStr = matchStringOrThrow(PATTERN_NUMBER, params, + formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + int second = dateNr % 60; + dateNr /= 60; + int minute = dateNr % 60; + dateNr /= 60; + int hour = dateNr % 24; + params.setHour(hour); + params.setMinute(minute); + params.setSecond(second); + break; + } + case FF: + inputFragmentStr = matchStringOrThrow(PATTERN_NUMBER, params, + formatTokenEnum); + String paddedRightNrStr = format("%-9s", inputFragmentStr) + .replace(' ', '0'); + paddedRightNrStr = paddedRightNrStr.substring(0, 9); + double nineDigits = Double.parseDouble(paddedRightNrStr); + params.setNanos((int) nineDigits); + break; + case AM_PM: + inputFragmentStr = matchStringOrThrow(PATTERN_AM_PM, 
params, + formatTokenEnum); + if (inputFragmentStr.toUpperCase().startsWith("A")) { + params.setAmPm(true); + } else { + params.setAmPm(false); + } + break; + case TZH: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setTimeZoneHour(dateNr); + break; + case TZM: + inputFragmentStr = matchStringOrThrow( + PATTERN_TWO_DIGITS_OR_LESS, params, formatTokenEnum); + dateNr = Integer.parseInt(inputFragmentStr); + params.setTimeZoneMinute(dateNr); + break; + case TZR: + case TZD: + String tzName = params.getInputStr(); + params.setTimeZone(TimeZone.getTimeZone(tzName)); + inputFragmentStr = tzName; + break; + default: + throw new IllegalArgumentException(format( + "%s: Internal Error. Unhandled case: %s", + this.getClass().getSimpleName(), formatTokenEnum)); + } + params.remove(inputFragmentStr, formatTokenStr); + } + } + + /** + * Parslet responsible for parsing year parameter + */ + static class InlineParslet implements ToDateParslet { + @Override + public void parse(ToDateParser params, FormatTokenEnum formatTokenEnum, + String formatTokenStr) { + String inputFragmentStr = null; + switch (formatTokenEnum) { + case INLINE: + inputFragmentStr = formatTokenStr.replace("\"", ""); + break; + default: + throw new IllegalArgumentException(format( + "%s: Internal Error. Unhandled case: %s", + this.getClass().getSimpleName(), formatTokenEnum)); + } + params.remove(inputFragmentStr, formatTokenStr); + } + + } + + /** + * Match the pattern, or if not possible throw an exception. 
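The SSSSS branch of TimeParslet above turns seconds-past-midnight into hour, minute and second with repeated div/mod by 60. The same decomposition in isolation (SecondsPastMidnightDemo is illustrative only):

```java
public class SecondsPastMidnightDemo {
    // Same decomposition as the SSSSS branch: seconds past midnight -> h:m:s.
    static String hms(int secondsPastMidnight) {
        int n = secondsPastMidnight;
        int second = n % 60;
        n /= 60;
        int minute = n % 60;
        n /= 60;
        int hour = n % 24;
        return String.format("%02d:%02d:%02d", hour, minute, second);
    }

    public static void main(String[] args) {
        System.out.println(hms(45_000)); // 12:30:00
        System.out.println(hms(0));      // 00:00:00
        System.out.println(hms(86_399)); // 23:59:59
    }
}
```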
+ * + * @param p the pattern + * @param params the parameters with the input string + * @param aEnum the pattern name + * @return the matched value + */ + static String matchStringOrThrow(Pattern p, ToDateParser params, + Enum aEnum) { + String s = params.getInputStr(); + Matcher matcher = p.matcher(s); + if (!matcher.find()) { + throwException(params, format( + "Issue happened when parsing token '%s'", aEnum.name())); + } + return matcher.group(1); + } + + /** + * Set the given field in the calendar. + * + * @param params the parameters with the input string + * @param field the field to set + * @return the matched value + */ + static String setByName(ToDateParser params, int field) { + String inputFragmentStr = null; + String s = params.getInputStr(); + String[] values = ToChar.getDateNames(field); + for (int i = 0; i < values.length; i++) { + String dayName = values[i]; + if (dayName == null) { + continue; + } + int len = dayName.length(); + if (dayName.equalsIgnoreCase(s.substring(0, len))) { + switch (field) { + case ToChar.MONTHS: + case ToChar.SHORT_MONTHS: + params.setMonth(i + 1); + break; + case ToChar.WEEKDAYS: + case ToChar.SHORT_WEEKDAYS: + // TODO + break; + default: + throw new IllegalArgumentException(); + } + inputFragmentStr = dayName; + break; + } + } + if (inputFragmentStr == null || inputFragmentStr.isEmpty()) { + throwException(params, format( + "Tried to parse one of '%s' but failed (may be an internal error?)", + Arrays.toString(values))); + } + return inputFragmentStr; + } + + /** + * Throw a parse exception. + * + * @param params the parameters with the input string + * @param errorStr the error string + */ + static void throwException(ToDateParser params, String errorStr) { + throw DbException.get(ErrorCode.INVALID_TO_DATE_FORMAT, + params.getFunctionName(), + format(" %s. Details: %s", errorStr, params)); + } + + /** + * The format tokens. 
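setByName above matches the input's prefix against localized month or weekday names and records the 1-based index. A sketch of the matching idea (NameMatchDemo is hypothetical; it uses regionMatches so a too-short input simply fails to match, where the original's substring call would throw):

```java
public class NameMatchDemo {
    static final String[] SHORT_MONTHS = { "Jan", "Feb", "Mar", "Apr", "May", "Jun",
            "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };

    // Compare the input's prefix against each candidate name,
    // case-insensitively, and return the 1-based month number.
    static int monthByName(String input) {
        for (int i = 0; i < SHORT_MONTHS.length; i++) {
            String name = SHORT_MONTHS[i];
            if (input.regionMatches(true, 0, name, 0, name.length())) {
                return i + 1;
            }
        }
        return -1; // no match
    }

    public static void main(String[] args) {
        System.out.println(monthByName("FEB 2020")); // 2
        System.out.println(monthByName("dec"));      // 12
        System.out.println(monthByName("xyz"));      // -1
    }
}
```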
+ */ + public enum FormatTokenEnum { + // 4-digit year + YYYY(PARSLET_YEAR), + // 4-digit year with sign (- = B.C.) + SYYYY(PARSLET_YEAR), + // 3-digit year + YYY(PARSLET_YEAR), + // 2-digit year + YY(PARSLET_YEAR), + // Two-digit century with sign (- = B.C.) + SCC(PARSLET_YEAR), + // Two-digit century. + CC(PARSLET_YEAR), + // 2-digit -> 4-digit year 0-49 -> 20xx , 50-99 -> 19xx + RRRR(PARSLET_YEAR), + // last 2-digit of the year using "current" century value. + RR(PARSLET_YEAR), + // Era indicator (B.C. / A.D.) + BC_AD(PARSLET_YEAR, PATTERN_BC_AD), + // Full Name of month + MONTH(PARSLET_MONTH), + // Abbreviated name of month. + MON(PARSLET_MONTH), + // Month (01-12; JAN = 01). + MM(PARSLET_MONTH), + // Roman numeral month (I-XII; JAN = I). + RM(PARSLET_MONTH), + // Day of year (1-366). + DDD(PARSLET_DAY), + // Name of day. + DAY(PARSLET_DAY), + // Day of month (1-31). + DD(PARSLET_DAY), + // Abbreviated name of day. + DY(PARSLET_DAY), HH24(PARSLET_TIME), HH12(PARSLET_TIME), + // Hour of day (1-12). + HH(PARSLET_TIME), + // Minute (0-59). + MI(PARSLET_TIME), + // Seconds past midnight (0-86399) + SSSSS(PARSLET_TIME), SS(PARSLET_TIME), + // Fractional seconds + FF(PARSLET_TIME, PATTERN_FF), + // Time zone hour. + TZH(PARSLET_TIME), + // Time zone minute. + TZM(PARSLET_TIME), + // Time zone region ID + TZR(PARSLET_TIME), + // Daylight savings information. Example: + // PST (for US/Pacific standard time); + TZD(PARSLET_TIME), + // Meridian indicator + AM_PM(PARSLET_TIME, PATTERN_AM_PM), + // NOT supported yet - + // Full era name (Japanese Imperial, ROC Official, + // and Thai Buddha calendars). + EE(PARSLET_YEAR), + // NOT supported yet - + // Abbreviated era name (Japanese Imperial, + // ROC Official, and Thai Buddha calendars). + E(PARSLET_YEAR), Y(PARSLET_YEAR), + // Quarter of year (1, 2, 3, 4; JAN-MAR = 1). + Q(PARSLET_MONTH), + // Day of week (1-7). + D(PARSLET_DAY), + // NOT supported yet - + // Julian day; the number of days since Jan 1, 4712 BC.
+ J(PARSLET_DAY), + // Inline text e.g. to_date('2017-04-21T00:00:00Z', + // 'YYYY-MM-DD"T"HH24:MI:SS"Z"') + // where "T" and "Z" are inlined + INLINE(PARSLET_INLINE, PATTERN_INLINE); + + private static final List<FormatTokenEnum> EMPTY_LIST = Collections + .emptyList(); + + private static final Map<Character, List<FormatTokenEnum>> CACHE = new HashMap<>( + FormatTokenEnum.values().length); + private final ToDateParslet toDateParslet; + private final Pattern patternToUse; + + /** + * Construct a format token. + * + * @param toDateParslet the date parslet + * @param patternToUse the pattern + */ + FormatTokenEnum(ToDateParslet toDateParslet, Pattern patternToUse) { + this.toDateParslet = toDateParslet; + this.patternToUse = patternToUse; + } + + /** + * Construct a format token. + * + * @param toDateParslet the date parslet + */ + FormatTokenEnum(ToDateParslet toDateParslet) { + this.toDateParslet = toDateParslet; + patternToUse = Pattern.compile(format("^(%s)", name()), + Pattern.CASE_INSENSITIVE); + } + + /** + * Optimization: Only return a list of {@link FormatTokenEnum} that + * share the same 1st char using the 1st char of the 'to parse' + * formatStr. Or return empty list if no match.
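getTokensInQuestion narrows the candidate tokens using a first-character index built by initCache, so only tokens sharing the format string's first letter are tried. The bucketing idea in isolation (FirstCharIndexDemo is illustrative; the token names are a subset of the real enum):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FirstCharIndexDemo {
    // Bucket token names by their first letter, as the CACHE map does.
    static Map<Character, List<String>> index(String[] tokens) {
        Map<Character, List<String>> byFirst = new HashMap<>();
        for (String t : tokens) {
            byFirst.computeIfAbsent(t.charAt(0), k -> new ArrayList<>()).add(t);
        }
        return byFirst;
    }

    public static void main(String[] args) {
        String[] tokens = { "YYYY", "YYY", "YY", "Y", "MM", "MON", "MONTH", "MI" };
        System.out.println(index(tokens).get('M')); // [MM, MON, MONTH, MI]
        System.out.println(index(tokens).get('Y')); // [YYYY, YYY, YY, Y]
    }
}
```

A format string starting with 'M' then only has to be tested against the four M-tokens rather than the whole enum.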
+ * + * @param formatStr the format string + * @return the list of tokens + */ + static List<FormatTokenEnum> getTokensInQuestion(String formatStr) { + List<FormatTokenEnum> result = EMPTY_LIST; + if (CACHE.size() <= 0) { + initCache(); + } + if (formatStr != null && formatStr.length() > 0) { + Character key = Character.toUpperCase(formatStr.charAt(0)); + switch (key) { + case '"': + result = new ArrayList<>(); + result.add(INLINE); + break; + default: + result = CACHE.get(key); + } + } + if (result == null) { + result = EMPTY_LIST; + } + return result; + } + + private static synchronized void initCache() { + if (CACHE.size() <= 0) { + for (FormatTokenEnum token : FormatTokenEnum.values()) { + + List<Character> tokenKeys = new ArrayList<>(); + + if (token.name().contains("_")) { + String[] tokens = token.name().split("_"); + for (String tokenLets : tokens) { + tokenKeys.add(tokenLets.toUpperCase().charAt(0)); + } + } else { + tokenKeys.add(token.name().toUpperCase().charAt(0)); + } + + for (Character tokenKey : tokenKeys) { + List<FormatTokenEnum> l = CACHE.get(tokenKey); + if (l == null) { + l = new ArrayList<>(1); + CACHE.put(tokenKey, l); + } + l.add(token); + } + } + } + + } + + /** + * Parse the format-string with passed token of {@link FormatTokenEnum}. + * If token matches return true, otherwise false. + * + * @param params the parameters + * @return true if it matches + */ + boolean parseFormatStrWithToken(ToDateParser params) { + Matcher matcher = patternToUse.matcher(params.getFormatStr()); + boolean foundToken = matcher.find(); + if (foundToken) { + String formatTokenStr = matcher.group(1); + toDateParslet.parse(params, this, formatTokenStr); + } + return foundToken; + } + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/Tool.java b/modules/h2/src/main/java/org/h2/util/Tool.java new file mode 100644 index 0000000000000..e68dac56a096c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/Tool.java @@ -0,0 +1,138 @@ +/* + * Copyright 2004-2018 H2 Group.
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.PrintStream; +import java.sql.SQLException; +import java.util.Properties; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.store.FileLister; +import org.h2.store.fs.FileUtils; + +/** + * Command line tools implement the tool interface so that they can be used in + * the H2 Console. + */ +public abstract class Tool { + + /** + * The output stream where this tool writes to. + */ + protected PrintStream out = System.out; + + private Properties resources; + + /** + * Sets the standard output stream. + * + * @param out the new standard output stream + */ + public void setOut(PrintStream out) { + this.out = out; + } + + /** + * Run the tool with the given output stream and arguments. + * + * @param args the argument list + */ + public abstract void runTool(String... args) throws SQLException; + + /** + * Throw a SQLException saying this command line option is not supported. + * + * @param option the unsupported option + * @return this method never returns normally + */ + protected SQLException showUsageAndThrowUnsupportedOption(String option) + throws SQLException { + showUsage(); + throw throwUnsupportedOption(option); + } + + /** + * Throw a SQLException saying this command line option is not supported. + * + * @param option the unsupported option + * @return this method never returns normally + */ + protected SQLException throwUnsupportedOption(String option) + throws SQLException { + throw DbException.get( + ErrorCode.FEATURE_NOT_SUPPORTED_1, option).getSQLException(); + } + + /** + * Print to the output stream that no database files have been found. 
+ * + * @param dir the directory or null + * @param db the database name or null + */ + protected void printNoDatabaseFilesFound(String dir, String db) { + StringBuilder buff; + dir = FileLister.getDir(dir); + if (!FileUtils.isDirectory(dir)) { + buff = new StringBuilder("Directory not found: "); + buff.append(dir); + } else { + buff = new StringBuilder("No database files have been found"); + buff.append(" in directory ").append(dir); + if (db != null) { + buff.append(" for the database ").append(db); + } + } + out.println(buff.toString()); + } + + /** + * Print the usage of the tool. This method reads the description from the + * resource file. + */ + protected void showUsage() { + if (resources == null) { + resources = new Properties(); + String resourceName = "/org/h2/res/javadoc.properties"; + try { + byte[] buff = Utils.getResource(resourceName); + if (buff != null) { + resources.load(new ByteArrayInputStream(buff)); + } + } catch (IOException e) { + out.println("Cannot load " + resourceName); + } + } + String className = getClass().getName(); + out.println(resources.get(className)); + out.println("Usage: java " + getClass().getName() + " <options>"); + out.println(resources.get(className + ".main")); + out.println("See also http://h2database.com/javadoc/" + + className.replace('.', '/') + ".html"); + } + + /** + * Check if the argument matches the option. + * If the argument starts with this option, but doesn't match, + * then an exception is thrown.
+ * + * @param arg the argument + * @param option the command line option + * @return true if it matches + */ + public static boolean isOption(String arg, String option) { + if (arg.equals(option)) { + return true; + } else if (arg.startsWith(option)) { + throw DbException.getUnsupportedException( + "expected: " + option + " got: " + arg); + } + return false; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/Utils.java b/modules/h2/src/main/java/org/h2/util/Utils.java new file mode 100644 index 0000000000000..0b3e019a621fd --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/Utils.java @@ -0,0 +1,806 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.lang.management.GarbageCollectorMXBean; +import java.lang.management.ManagementFactory; +import java.lang.management.OperatingSystemMXBean; +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.util.Arrays; +import java.util.Comparator; +import java.util.HashMap; +import java.util.concurrent.TimeUnit; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; + +/** + * This utility class contains miscellaneous functions. + */ +public class Utils { + + /** + * A 0-size byte array. + */ + public static final byte[] EMPTY_BYTES = {}; + + /** + * A 0-size int array. + */ + public static final int[] EMPTY_INT_ARRAY = {}; + + /** + * A 0-size long array.
+ */ + private static final long[] EMPTY_LONG_ARRAY = {}; + + private static final int GC_DELAY = 50; + private static final int MAX_GC = 8; + private static long lastGC; + + private static final HashMap<String, byte[]> RESOURCES = new HashMap<>(); + + private Utils() { + // utility class + } + + /** + * Calculate the index of the first occurrence of the pattern in the byte + * array, starting with the given index. This method returns -1 if the + * pattern has not been found, and the start position if the pattern is + * empty. + * + * @param bytes the byte array + * @param pattern the pattern + * @param start the start index from where to search + * @return the index + */ + public static int indexOf(byte[] bytes, byte[] pattern, int start) { + if (pattern.length == 0) { + return start; + } + if (start > bytes.length) { + return -1; + } + int last = bytes.length - pattern.length + 1; + int patternLen = pattern.length; + next: + for (; start < last; start++) { + for (int i = 0; i < patternLen; i++) { + if (bytes[start + i] != pattern[i]) { + continue next; + } + } + return start; + } + return -1; + } + + /** + * Calculate the hash code of the given byte array. + * + * @param value the byte array + * @return the hash code + */ + public static int getByteArrayHash(byte[] value) { + int len = value.length; + int h = len; + if (len < 50) { + for (int i = 0; i < len; i++) { + h = 31 * h + value[i]; + } + } else { + int step = len / 16; + for (int i = 0; i < 4; i++) { + h = 31 * h + value[i]; + h = 31 * h + value[--len]; + } + for (int i = 4 + step; i < len; i += step) { + h = 31 * h + value[i]; + } + } + return h; + } + + /** + * Compare two byte arrays. This method will always loop over all bytes and + * doesn't use conditional operations in the loop to make sure an attacker + * cannot use a timing attack when trying out passwords.
+ * + * @param test the first array + * @param good the second array + * @return true if both byte arrays contain the same bytes + */ + public static boolean compareSecure(byte[] test, byte[] good) { + if ((test == null) || (good == null)) { + return (test == null) && (good == null); + } + int len = test.length; + if (len != good.length) { + return false; + } + if (len == 0) { + return true; + } + // don't use conditional operations inside the loop + int bits = 0; + for (int i = 0; i < len; i++) { + // this will never reset any bits + bits |= test[i] ^ good[i]; + } + return bits == 0; + } + + /** + * Copy the contents of the source array to the target array. If the size of + * the target array is too small, a larger array is created. + * + * @param source the source array + * @param target the target array + * @return the target array or a new one if the target array was too small + */ + public static byte[] copy(byte[] source, byte[] target) { + int len = source.length; + if (len > target.length) { + target = new byte[len]; + } + System.arraycopy(source, 0, target, 0, len); + return target; + } + + /** + * Create an array of bytes with the given size. If this is not possible + * because not enough memory is available, an OutOfMemoryError with the + * requested size in the message is thrown. + * <p> + * This method should be used if the size of the array is user defined, or + * stored in a file, so wrong size data can be distinguished from regular + * out-of-memory. + * </p> + * + * @param len the number of bytes requested + * @return the byte array + * @throws OutOfMemoryError if the allocation was too large + */ + public static byte[] newBytes(int len) { + if (len == 0) { + return EMPTY_BYTES; + } + try { + return new byte[len]; + } catch (OutOfMemoryError e) { + Error e2 = new OutOfMemoryError("Requested memory: " + len); + e2.initCause(e); + throw e2; + } + } + + /** + * Creates a copy of an array of bytes with the new size. If this is not possible + * because not enough memory is available, an OutOfMemoryError with the + * requested size in the message is thrown. + * <p> + * This method should be used if the size of the array is user defined, or + * stored in a file, so wrong size data can be distinguished from regular + * out-of-memory. + * </p>
    + * + * @param bytes source array + * @param len the number of bytes in the new array + * @return the byte array + * @throws OutOfMemoryError if the allocation was too large + * @see Arrays#copyOf(byte[], int) + */ + public static byte[] copyBytes(byte[] bytes, int len) { + if (len == 0) { + return EMPTY_BYTES; + } + try { + return Arrays.copyOf(bytes, len); + } catch (OutOfMemoryError e) { + Error e2 = new OutOfMemoryError("Requested memory: " + len); + e2.initCause(e); + throw e2; + } + } + + /** + * Create a new byte array and copy all the data. If the size of the byte + * array is zero, the same array is returned. + * + * @param b the byte array (may not be null) + * @return a new byte array + */ + public static byte[] cloneByteArray(byte[] b) { + if (b == null) { + return null; + } + int len = b.length; + if (len == 0) { + return EMPTY_BYTES; + } + return Arrays.copyOf(b, len); + } + + /** + * Get the used memory in KB. + * This method possibly calls System.gc(). + * + * @return the used memory + */ + public static int getMemoryUsed() { + collectGarbage(); + Runtime rt = Runtime.getRuntime(); + long mem = rt.totalMemory() - rt.freeMemory(); + return (int) (mem >> 10); + } + + /** + * Get the free memory in KB. + * This method possibly calls System.gc(). + * + * @return the free memory + */ + public static int getMemoryFree() { + collectGarbage(); + Runtime rt = Runtime.getRuntime(); + long mem = rt.freeMemory(); + return (int) (mem >> 10); + } + + /** + * Get the maximum memory in KB. 
+ * + * @return the maximum memory + */ + public static long getMemoryMax() { + long max = Runtime.getRuntime().maxMemory(); + return max / 1024; + } + + public static long getGarbageCollectionTime() { + long totalGCTime = 0; + for (GarbageCollectorMXBean gcMXBean : ManagementFactory.getGarbageCollectorMXBeans()) { + long collectionTime = gcMXBean.getCollectionTime(); + if (collectionTime > 0) { + totalGCTime += collectionTime; + } + } + return totalGCTime; + } + + private static synchronized void collectGarbage() { + Runtime runtime = Runtime.getRuntime(); + long total = runtime.totalMemory(); + long time = System.nanoTime(); + if (lastGC + TimeUnit.MILLISECONDS.toNanos(GC_DELAY) < time) { + for (int i = 0; i < MAX_GC; i++) { + runtime.gc(); + long now = runtime.totalMemory(); + if (now == total) { + lastGC = System.nanoTime(); + break; + } + total = now; + } + } + } + + /** + * Create an int array with the given size. + * + * @param len the number of ints requested + * @return the int array + */ + public static int[] newIntArray(int len) { + if (len == 0) { + return EMPTY_INT_ARRAY; + } + return new int[len]; + } + + /** + * Create a long array with the given size. + * + * @param len the number of longs requested + * @return the long array + */ + public static long[] newLongArray(int len) { + if (len == 0) { + return EMPTY_LONG_ARRAY; + } + return new long[len]; + } + + /** + * Find the top limit values using given comparator and place them as in a + * full array sort, in descending order. + * + * @param array the array. + * @param offset the offset. + * @param limit the limit. + * @param comp the comparator. + */ + public static <X> void sortTopN(X[] array, int offset, int limit, + Comparator<? super X> comp) { + partitionTopN(array, offset, limit, comp); + Arrays.sort(array, offset, + (int) Math.min((long) offset + limit, array.length), comp); + } + + /** + * Find the top limit values using given comparator and place them as in a + * full array sort. This method does not sort the top elements themselves. + * + * @param array the array + * @param offset the offset + * @param limit the limit + * @param comp the comparator + */ + private static <X> void partitionTopN(X[] array, int offset, int limit, + Comparator<? super X> comp) { + partialQuickSort(array, 0, array.length - 1, comp, offset, offset + + limit - 1); + } + + private static <X> void partialQuickSort(X[] array, int low, int high, + Comparator<? super X> comp, int start, int end) { + if (low > end || high < start || (low > start && high < end)) { + return; + } + if (low == high) { + return; + } + int i = low, j = high; + // use a random pivot to protect against + // the worst case order + int p = low + MathUtils.randomInt(high - low); + X pivot = array[p]; + int m = (low + high) >>> 1; + X temp = array[m]; + array[m] = pivot; + array[p] = temp; + while (i <= j) { + while (comp.compare(array[i], pivot) < 0) { + i++; + } + while (comp.compare(array[j], pivot) > 0) { + j--; + } + if (i <= j) { + temp = array[i]; + array[i++] = array[j]; + array[j--] = temp; + } + } + if (low < j) { + partialQuickSort(array, low, j, comp, start, end); + } + if (i < high) { + partialQuickSort(array, i, high, comp, start, end); + } + } + + /** + * Checks if given classes have a common Comparable superclass. + * + * @param c1 the first class + * @param c2 the second class + * @return true if they have + */ + public static boolean haveCommonComparableSuperclass( + Class<?> c1, Class<?> c2) { + if (c1 == c2 || c1.isAssignableFrom(c2) || c2.isAssignableFrom(c1)) { + return true; + } + Class<?> top1; + do { + top1 = c1; + c1 = c1.getSuperclass(); + } while (Comparable.class.isAssignableFrom(c1)); + + Class<?> top2; + do { + top2 = c2; + c2 = c2.getSuperclass(); + } while (Comparable.class.isAssignableFrom(c2)); + + return top1 == top2; + } + + /** + * Get a resource from the resource map.
+ * + * @param name the name of the resource + * @return the resource data + */ + public static byte[] getResource(String name) throws IOException { + byte[] data = RESOURCES.get(name); + if (data == null) { + data = loadResource(name); + if (data != null) { + RESOURCES.put(name, data); + } + } + return data; + } + + private static byte[] loadResource(String name) throws IOException { + InputStream in = Utils.class.getResourceAsStream("data.zip"); + if (in == null) { + in = Utils.class.getResourceAsStream(name); + if (in == null) { + return null; + } + return IOUtils.readBytesAndClose(in, 0); + } + + try (ZipInputStream zipIn = new ZipInputStream(in)) { + while (true) { + ZipEntry entry = zipIn.getNextEntry(); + if (entry == null) { + break; + } + String entryName = entry.getName(); + if (!entryName.startsWith("/")) { + entryName = "/" + entryName; + } + if (entryName.equals(name)) { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + IOUtils.copy(zipIn, out); + zipIn.closeEntry(); + return out.toByteArray(); + } + zipIn.closeEntry(); + } + } catch (IOException e) { + // if this happens we have a real problem + e.printStackTrace(); + } + return null; + } + + /** + * Calls a static method via reflection. This will try to use the method + * where the most parameter classes match exactly (this algorithm is simpler + * than the one in the Java specification, but works well for most cases). + * + * @param classAndMethod a string with the entire class and method name, eg. + * "java.lang.System.gc" + * @param params the method parameters + * @return the return value from this call + */ + public static Object callStaticMethod(String classAndMethod, + Object... 
params) throws Exception { + int lastDot = classAndMethod.lastIndexOf('.'); + String className = classAndMethod.substring(0, lastDot); + String methodName = classAndMethod.substring(lastDot + 1); + return callMethod(null, Class.forName(className), methodName, params); + } + + /** + * Calls an instance method via reflection. This will try to use the method + * where the most parameter classes match exactly (this algorithm is simpler + * than the one in the Java specification, but works well for most cases). + * + * @param instance the instance on which the call is done + * @param methodName a string with the method name + * @param params the method parameters + * @return the return value from this call + */ + public static Object callMethod( + Object instance, + String methodName, + Object... params) throws Exception { + return callMethod(instance, instance.getClass(), methodName, params); + } + + private static Object callMethod( + Object instance, Class<?> clazz, + String methodName, + Object... params) throws Exception { + Method best = null; + int bestMatch = 0; + boolean isStatic = instance == null; + for (Method m : clazz.getMethods()) { + if (Modifier.isStatic(m.getModifiers()) == isStatic && + m.getName().equals(methodName)) { + int p = match(m.getParameterTypes(), params); + if (p > bestMatch) { + bestMatch = p; + best = m; + } + } + } + if (best == null) { + throw new NoSuchMethodException(methodName); + } + return best.invoke(instance, params); + } + + /** + * Creates a new instance. This will try to use the constructor where the + * most parameter classes match exactly (this algorithm is simpler than the + * one in the Java specification, but works well for most cases). + * + * @param className a string with the entire class, eg. "java.lang.Integer" + * @param params the constructor parameters + * @return the newly created object + */ + public static Object newInstance(String className, Object... params) + throws Exception { + Constructor<?> best = null; + int bestMatch = 0; + for (Constructor<?> c : Class.forName(className).getConstructors()) { + int p = match(c.getParameterTypes(), params); + if (p > bestMatch) { + bestMatch = p; + best = c; + } + } + if (best == null) { + throw new NoSuchMethodException(className); + } + return best.newInstance(params); + } + + private static int match(Class<?>[] params, Object[] values) { + int len = params.length; + if (len == values.length) { + int points = 1; + for (int i = 0; i < len; i++) { + Class<?> pc = getNonPrimitiveClass(params[i]); + Object v = values[i]; + Class<?> vc = v == null ? null : v.getClass(); + if (pc == vc) { + points++; + } else if (vc == null) { + // can't verify + } else if (!pc.isAssignableFrom(vc)) { + return 0; + } + } + return points; + } + return 0; + } + + /** + * Returns a static field. + * + * @param classAndField a string with the entire class and field name + * @return the field value + */ + public static Object getStaticField(String classAndField) throws Exception { + int lastDot = classAndField.lastIndexOf('.'); + String className = classAndField.substring(0, lastDot); + String fieldName = classAndField.substring(lastDot + 1); + return Class.forName(className).getField(fieldName).get(null); + } + + /** + * Returns an instance field. + * + * @param instance the instance on which the call is done + * @param fieldName the field name + * @return the field value + */ + public static Object getField(Object instance, String fieldName) + throws Exception { + return instance.getClass().getField(fieldName).get(instance); + } + + /** + * Returns true if the class is present in the current class loader. + * + * @param fullyQualifiedClassName a string with the entire class name, eg. + * "java.lang.System" + * @return true if the class is present + */ + public static boolean isClassPresent(String fullyQualifiedClassName) { + try { + Class.forName(fullyQualifiedClassName); + return true; + } catch (ClassNotFoundException e) { + return false; + } + } + + /** + * Convert primitive class names to java.lang.* class names. + * + * @param clazz the class (for example: int) + * @return the non-primitive class (for example: java.lang.Integer) + */ + public static Class<?> getNonPrimitiveClass(Class<?> clazz) { + if (!clazz.isPrimitive()) { + return clazz; + } else if (clazz == boolean.class) { + return Boolean.class; + } else if (clazz == byte.class) { + return Byte.class; + } else if (clazz == char.class) { + return Character.class; + } else if (clazz == double.class) { + return Double.class; + } else if (clazz == float.class) { + return Float.class; + } else if (clazz == int.class) { + return Integer.class; + } else if (clazz == long.class) { + return Long.class; + } else if (clazz == short.class) { + return Short.class; + } else if (clazz == void.class) { + return Void.class; + } + return clazz; + } + + /** + * Parses the specified string to boolean value.
+ * + * @param value + * string to parse + * @param defaultValue + * value to return if value is null or on parsing error + * @param throwException + * throw exception on parsing error or return default value instead + * @return parsed or default value + * @throws IllegalArgumentException + * on parsing error if {@code throwException} is true + */ + public static boolean parseBoolean(String value, boolean defaultValue, boolean throwException) { + if (value == null) { + return defaultValue; + } + switch (value.length()) { + case 1: + if (value.equals("1") || value.equalsIgnoreCase("t") || value.equalsIgnoreCase("y")) { + return true; + } + if (value.equals("0") || value.equalsIgnoreCase("f") || value.equalsIgnoreCase("n")) { + return false; + } + break; + case 2: + if (value.equalsIgnoreCase("no")) { + return false; + } + break; + case 3: + if (value.equalsIgnoreCase("yes")) { + return true; + } + break; + case 4: + if (value.equalsIgnoreCase("true")) { + return true; + } + break; + case 5: + if (value.equalsIgnoreCase("false")) { + return false; + } + } + if (throwException) { + throw new IllegalArgumentException(value); + } + return defaultValue; + } + + /** + * Get the system property. If the system property is not set, or if a + * security exception occurs, the default value is returned. + * + * @param key the key + * @param defaultValue the default value + * @return the value + */ + public static String getProperty(String key, String defaultValue) { + try { + return System.getProperty(key, defaultValue); + } catch (SecurityException se) { + return defaultValue; + } + } + + /** + * Get the system property. If the system property is not set, or if a + * security exception occurs, the default value is returned. 
+ * + * @param key the key + * @param defaultValue the default value + * @return the value + */ + public static int getProperty(String key, int defaultValue) { + String s = getProperty(key, null); + if (s != null) { + try { + return Integer.decode(s); + } catch (NumberFormatException e) { + // ignore + } + } + return defaultValue; + } + + /** + * Get the system property. If the system property is not set, or if a + * security exception occurs, the default value is returned. + * + * @param key the key + * @param defaultValue the default value + * @return the value + */ + public static boolean getProperty(String key, boolean defaultValue) { + return parseBoolean(getProperty(key, null), defaultValue, false); + } + + /** + * Scale the value with the available memory. If 1 GB of RAM is available, + * the value is returned, if 2 GB are available, then twice the value, and + * so on. + * + * @param value the value to scale + * @return the scaled value + */ + public static int scaleForAvailableMemory(int value) { + long maxMemory = Runtime.getRuntime().maxMemory(); + if (maxMemory != Long.MAX_VALUE) { + // we are limited by an -Xmx parameter + return (int) (value * maxMemory / (1024 * 1024 * 1024)); + } + try { + OperatingSystemMXBean mxBean = ManagementFactory + .getOperatingSystemMXBean(); + // this method is only available on the class + // com.sun.management.OperatingSystemMXBean, which mxBean + // is an instance of under the Oracle JDK, but it is not present on + // Android and other JDKs + Method method = Class.forName( + "com.sun.management.OperatingSystemMXBean"). + getMethod("getTotalPhysicalMemorySize"); + long physicalMemorySize = ((Number) method.invoke(mxBean)).longValue(); + return (int) (value * physicalMemorySize / (1024 * 1024 * 1024)); + } catch (Exception e) { + // ignore + } + return value; + } + + /** + * The utility methods will try to use the provided class factories to + * convert the binary name of a class to a Class object.
Used by H2 OSGi Activator + * in order to provide a class from another bundle ClassLoader. + */ + public interface ClassFactory { + + /** + * Check whether the factory can return the named class. + * + * @param name the binary name of the class + * @return true if this factory can return a valid class for the + * provided class name + */ + boolean match(String name); + + /** + * Load the class. + * + * @param name the binary name of the class + * @return the class object + * @throws ClassNotFoundException If the class is not handled by this + * factory + */ + Class<?> loadClass(String name) + throws ClassNotFoundException; + } +} diff --git a/modules/h2/src/main/java/org/h2/util/ValueHashMap.java b/modules/h2/src/main/java/org/h2/util/ValueHashMap.java new file mode 100644 index 0000000000000..518e56803e3bb --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/ValueHashMap.java @@ -0,0 +1,190 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.util.ArrayList; +import org.h2.message.DbException; +import org.h2.value.Value; +import org.h2.value.ValueNull; + +/** + * This hash map supports keys of type Value. + * + * @param <V> the value type + */ +public class ValueHashMap<V> extends HashBase { + + private Value[] keys; + private V[] values; + + /** + * Create a new value hash map.
+ * + * @return the object + */ + public static <T> ValueHashMap<T> newInstance() { + return new ValueHashMap<>(); + } + + @Override + @SuppressWarnings("unchecked") + protected void reset(int newLevel) { + super.reset(newLevel); + keys = new Value[len]; + values = (V[]) new Object[len]; + } + + @Override + protected void rehash(int newLevel) { + Value[] oldKeys = keys; + V[] oldValues = values; + reset(newLevel); + int len = oldKeys.length; + for (int i = 0; i < len; i++) { + Value k = oldKeys[i]; + if (k != null && k != ValueNull.DELETED) { + // skip the checkSizePut so we don't end up + // accidentally recursing + internalPut(k, oldValues[i]); + } + } + } + + private int getIndex(Value key) { + return key.hashCode() & mask; + } + + /** + * Add or update a key value pair. + * + * @param key the key + * @param value the new value + */ + public void put(Value key, V value) { + checkSizePut(); + internalPut(key, value); + } + + private void internalPut(Value key, V value) { + int index = getIndex(key); + int plus = 1; + int deleted = -1; + do { + Value k = keys[index]; + if (k == null) { + // found an empty record + if (deleted >= 0) { + index = deleted; + deletedCount--; + } + size++; + keys[index] = key; + values[index] = value; + return; + } else if (k == ValueNull.DELETED) { + // found a deleted record + if (deleted < 0) { + deleted = index; + } + } else if (k.equals(key)) { + // update existing + values[index] = value; + return; + } + index = (index + plus++) & mask; + } while (plus <= len); + // no space + DbException.throwInternalError("hashmap is full"); + } + + /** + * Remove a key value pair.
+ * + * @param key the key + */ + public void remove(Value key) { + checkSizeRemove(); + int index = getIndex(key); + int plus = 1; + do { + Value k = keys[index]; + if (k == null) { + // found an empty record + return; + } else if (k == ValueNull.DELETED) { + // found a deleted record + } else if (k.equals(key)) { + // found the record + keys[index] = ValueNull.DELETED; + values[index] = null; + deletedCount++; + size--; + return; + } + index = (index + plus++) & mask; + } while (plus <= len); + // not found + } + + /** + * Get the value for this key. This method returns null if the key was not + * found. + * + * @param key the key + * @return the value for the given key + */ + public V get(Value key) { + int index = getIndex(key); + int plus = 1; + do { + Value k = keys[index]; + if (k == null) { + // found an empty record + return null; + } else if (k == ValueNull.DELETED) { + // found a deleted record + } else if (k.equals(key)) { + // found it + return values[index]; + } + index = (index + plus++) & mask; + } while (plus <= len); + return null; + } + + /** + * Get the list of keys. + * + * @return all keys + */ + public ArrayList<Value> keys() { + ArrayList<Value> list = new ArrayList<>(size); + for (Value k : keys) { + if (k != null && k != ValueNull.DELETED) { + list.add(k); + } + } + return list; + } + + /** + * Get the list of values. + * + * @return all values + */ + public ArrayList<V> values() { + ArrayList<V> list = new ArrayList<>(size); + int len = keys.length; + for (int i = 0; i < len; i++) { + Value k = keys[i]; + if (k != null && k != ValueNull.DELETED) { + list.add(values[i]); + } + } + return list; + } + +} diff --git a/modules/h2/src/main/java/org/h2/util/package.html b/modules/h2/src/main/java/org/h2/util/package.html new file mode 100644 index 0000000000000..e5fc27aee4625 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/util/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation + 

    + +Internal utility classes. + +

    \ No newline at end of file diff --git a/modules/h2/src/main/java/org/h2/value/CaseInsensitiveConcurrentMap.java b/modules/h2/src/main/java/org/h2/value/CaseInsensitiveConcurrentMap.java new file mode 100644 index 0000000000000..77032ef097b1c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/CaseInsensitiveConcurrentMap.java @@ -0,0 +1,46 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.util.concurrent.ConcurrentHashMap; +import org.h2.util.StringUtils; + +/** + * A concurrent hash map with a case-insensitive string key, that also allows + * NULL as a key. + * + * @param <V> the value type + */ +public class CaseInsensitiveConcurrentMap<V> extends ConcurrentHashMap<String, V> { + + private static final long serialVersionUID = 1L; + private static final String NULL = new String(new byte[0]); + + @Override + public V get(Object key) { + return super.get(toUpper(key)); + } + + @Override + public V put(String key, V value) { + return super.put(toUpper(key), value); + } + + @Override + public boolean containsKey(Object key) { + return super.containsKey(toUpper(key)); + } + + @Override + public V remove(Object key) { + return super.remove(toUpper(key)); + } + + private static String toUpper(Object key) { + return key == null ? NULL : StringUtils.toUpperEnglish(key.toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/CaseInsensitiveMap.java b/modules/h2/src/main/java/org/h2/value/CaseInsensitiveMap.java new file mode 100644 index 0000000000000..9ee6bbff84f3c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/CaseInsensitiveMap.java @@ -0,0 +1,44 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.util.HashMap; +import org.h2.util.StringUtils; + +/** + * A hash map with a case-insensitive string key. + * + * @param <V> the value type + */ +public class CaseInsensitiveMap<V> extends HashMap<String, V> { + + private static final long serialVersionUID = 1L; + + @Override + public V get(Object key) { + return super.get(toUpper(key)); + } + + @Override + public V put(String key, V value) { + return super.put(toUpper(key), value); + } + + @Override + public boolean containsKey(Object key) { + return super.containsKey(toUpper(key)); + } + + @Override + public V remove(Object key) { + return super.remove(toUpper(key)); + } + + private static String toUpper(Object key) { + return key == null ? null : StringUtils.toUpperEnglish(key.toString()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/CharsetCollator.java b/modules/h2/src/main/java/org/h2/value/CharsetCollator.java new file mode 100644 index 0000000000000..84715d7dcbd6f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/CharsetCollator.java @@ -0,0 +1,86 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.nio.charset.Charset; +import java.text.CollationKey; +import java.text.Collator; +import java.util.Comparator; + +/** + * The charset collator sorts strings according to the order in the given charset. + */ +public class CharsetCollator extends Collator { + + /** + * The comparator used to compare byte arrays.
+ */ + static final Comparator<byte[]> COMPARATOR = new Comparator<byte[]>() { + @Override + public int compare(byte[] b1, byte[] b2) { + int minLength = Math.min(b1.length, b2.length); + for (int index = 0; index < minLength; index++) { + int result = b1[index] - b2[index]; + if (result != 0) { + return result; + } + } + return b1.length - b2.length; + } + }; + private final Charset charset; + + public CharsetCollator(Charset charset) { + this.charset = charset; + } + + public Charset getCharset() { + return charset; + } + + @Override + public int compare(String source, String target) { + return COMPARATOR.compare(toBytes(source), toBytes(target)); + } + + /** + * Convert the source to bytes, using the character set. + * + * @param source the source + * @return the bytes + */ + byte[] toBytes(String source) { + return source.getBytes(charset); + } + + @Override + public CollationKey getCollationKey(final String source) { + return new CharsetCollationKey(source); + } + + @Override + public int hashCode() { + return 255; + } + + private class CharsetCollationKey extends CollationKey { + + CharsetCollationKey(String source) { + super(source); + } + + @Override + public int compareTo(CollationKey target) { + return COMPARATOR.compare(toByteArray(), toBytes(target.getSourceString())); + } + + @Override + public byte[] toByteArray() { + return toBytes(getSourceString()); + } + + } +} diff --git a/modules/h2/src/main/java/org/h2/value/CompareMode.java b/modules/h2/src/main/java/org/h2/value/CompareMode.java new file mode 100644 index 0000000000000..1dc31c7f85073 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/CompareMode.java @@ -0,0 +1,289 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.nio.charset.Charset; +import java.text.Collator; +import java.util.Locale; +import java.util.Objects; + +import org.h2.engine.SysProperties; +import org.h2.util.StringUtils; + +/** + * Instances of this class can compare strings. Case sensitive and case + * insensitive comparison is supported, and comparison using a collator. + */ +public class CompareMode { + + /** + * This constant means there is no collator set, and the default string + * comparison is to be used. + */ + public static final String OFF = "OFF"; + + /** + * This constant means the default collator should be used, even if ICU4J is + * in the classpath. + */ + public static final String DEFAULT = "DEFAULT_"; + + /** + * This constant means ICU4J should be used (this will fail if it is not in + * the classpath). + */ + public static final String ICU4J = "ICU4J_"; + + /** + * This constant means the charset specified should be used. + * This will fail if the specified charset does not exist. + */ + public static final String CHARSET = "CHARSET_"; + + /** + * This constant means that the BINARY columns are sorted as if the bytes + * were signed. + */ + public static final String SIGNED = "SIGNED"; + + /** + * This constant means that the BINARY columns are sorted as if the bytes + * were unsigned. + */ + public static final String UNSIGNED = "UNSIGNED"; + + private static volatile CompareMode lastUsed; + + private static final boolean CAN_USE_ICU4J; + + static { + boolean b = false; + try { + Class.forName("com.ibm.icu.text.Collator"); + b = true; + } catch (Exception e) { + // ignore + } + CAN_USE_ICU4J = b; + } + + private final String name; + private final int strength; + + /** + * If true, sort BINARY columns as if they contain unsigned bytes. 
+ */ + private final boolean binaryUnsigned; + + protected CompareMode(String name, int strength, boolean binaryUnsigned) { + this.name = name; + this.strength = strength; + this.binaryUnsigned = binaryUnsigned; + } + + /** + * Create a new compare mode with the given collator and strength. If + * required, a new CompareMode is created, or if possible the last one is + * returned. A cache is used to speed up comparison when using a collator; + * CollationKey objects are cached. + * + * @param name the collation name or null + * @param strength the collation strength + * @return the compare mode + */ + public static CompareMode getInstance(String name, int strength) { + return getInstance(name, strength, SysProperties.SORT_BINARY_UNSIGNED); + } + + /** + * Create a new compare mode with the given collator and strength. If + * required, a new CompareMode is created, or if possible the last one is + * returned. A cache is used to speed up comparison when using a collator; + * CollationKey objects are cached. 
+     *
+     * @param name the collation name or null
+     * @param strength the collation strength
+     * @param binaryUnsigned whether to compare binaries as unsigned
+     * @return the compare mode
+     */
+    public static CompareMode getInstance(String name, int strength, boolean binaryUnsigned) {
+        CompareMode last = lastUsed;
+        if (last != null) {
+            if (Objects.equals(last.name, name) &&
+                    last.strength == strength &&
+                    last.binaryUnsigned == binaryUnsigned) {
+                return last;
+            }
+        }
+        if (name == null || name.equals(OFF)) {
+            last = new CompareMode(name, strength, binaryUnsigned);
+        } else {
+            boolean useICU4J;
+            if (name.startsWith(ICU4J)) {
+                useICU4J = true;
+                name = name.substring(ICU4J.length());
+            } else if (name.startsWith(DEFAULT)) {
+                useICU4J = false;
+                name = name.substring(DEFAULT.length());
+            } else {
+                useICU4J = CAN_USE_ICU4J;
+            }
+            if (useICU4J) {
+                last = new CompareModeIcu4J(name, strength, binaryUnsigned);
+            } else {
+                last = new CompareModeDefault(name, strength, binaryUnsigned);
+            }
+        }
+        lastUsed = last;
+        return last;
+    }
+
+    /**
+     * Compare two characters in a string.
+     *
+     * @param a the first string
+     * @param ai the character index in the first string
+     * @param b the second string
+     * @param bi the character index in the second string
+     * @param ignoreCase true if a case-insensitive comparison should be made
+     * @return true if the characters are equal
+     */
+    public boolean equalsChars(String a, int ai, String b, int bi,
+            boolean ignoreCase) {
+        char ca = a.charAt(ai);
+        char cb = b.charAt(bi);
+        if (ignoreCase) {
+            ca = Character.toUpperCase(ca);
+            cb = Character.toUpperCase(cb);
+        }
+        return ca == cb;
+    }
+
+    /**
+     * Compare two strings.
+     *
+     * @param a the first string
+     * @param b the second string
+     * @param ignoreCase true if a case-insensitive comparison should be made
+     * @return -1 if the first string is 'smaller', 1 if the second string is
+     *         smaller, and 0 if they are equal
+     */
+    public int compareString(String a, String b, boolean ignoreCase) {
+        if (ignoreCase) {
+            return a.compareToIgnoreCase(b);
+        }
+        return a.compareTo(b);
+    }
+
+    /**
+     * Get the collation name.
+     *
+     * @param l the locale
+     * @return the name of the collation
+     */
+    public static String getName(Locale l) {
+        Locale english = Locale.ENGLISH;
+        String name = l.getDisplayLanguage(english) + ' ' +
+                l.getDisplayCountry(english) + ' ' + l.getVariant();
+        name = StringUtils.toUpperEnglish(name.trim().replace(' ', '_'));
+        return name;
+    }
+
+    /**
+     * Compare the name of the locale with the given name. The case of the name
+     * is ignored.
+     *
+     * @param locale the locale
+     * @param name the name
+     * @return true if they match
+     */
+    static boolean compareLocaleNames(Locale locale, String name) {
+        return name.equalsIgnoreCase(locale.toString()) ||
+                name.equalsIgnoreCase(getName(locale));
+    }
+
+    /**
+     * Get the collator object for the given language name or language / country
+     * combination.
+ * + * @param name the language name + * @return the collator + */ + public static Collator getCollator(String name) { + Collator result = null; + if (name.startsWith(ICU4J)) { + name = name.substring(ICU4J.length()); + } else if (name.startsWith(DEFAULT)) { + name = name.substring(DEFAULT.length()); + } else if (name.startsWith(CHARSET)) { + return new CharsetCollator(Charset.forName(name.substring(CHARSET.length()))); + } + if (name.length() == 2) { + Locale locale = new Locale(StringUtils.toLowerEnglish(name), ""); + if (compareLocaleNames(locale, name)) { + result = Collator.getInstance(locale); + } + } else if (name.length() == 5) { + // LL_CC (language_country) + int idx = name.indexOf('_'); + if (idx >= 0) { + String language = StringUtils.toLowerEnglish(name.substring(0, idx)); + String country = name.substring(idx + 1); + Locale locale = new Locale(language, country); + if (compareLocaleNames(locale, name)) { + result = Collator.getInstance(locale); + } + } + } + if (result == null) { + for (Locale locale : Collator.getAvailableLocales()) { + if (compareLocaleNames(locale, name)) { + result = Collator.getInstance(locale); + break; + } + } + } + return result; + } + + public String getName() { + return name == null ? OFF : name; + } + + public int getStrength() { + return strength; + } + + public boolean isBinaryUnsigned() { + return binaryUnsigned; + } + + @Override + public boolean equals(Object obj) { + if (obj == this) { + return true; + } else if (!(obj instanceof CompareMode)) { + return false; + } + CompareMode o = (CompareMode) obj; + if (!getName().equals(o.getName())) { + return false; + } + if (strength != o.strength) { + return false; + } + if (binaryUnsigned != o.binaryUnsigned) { + return false; + } + return true; + } + + @Override + public int hashCode() { + return getName().hashCode() ^ strength ^ (binaryUnsigned ? 
-1 : 0);
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/value/CompareModeDefault.java b/modules/h2/src/main/java/org/h2/value/CompareModeDefault.java
new file mode 100644
index 0000000000000..6206e6f9546e5
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/value/CompareModeDefault.java
@@ -0,0 +1,75 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.value;
+
+import java.text.CollationKey;
+import java.text.Collator;
+
+import org.h2.engine.SysProperties;
+import org.h2.message.DbException;
+import org.h2.util.SmallLRUCache;
+
+/**
+ * The default implementation of CompareMode. It uses java.text.Collator.
+ */
+public class CompareModeDefault extends CompareMode {
+
+    private final Collator collator;
+    private final SmallLRUCache<String, CollationKey> collationKeys;
+
+    protected CompareModeDefault(String name, int strength,
+            boolean binaryUnsigned) {
+        super(name, strength, binaryUnsigned);
+        collator = CompareMode.getCollator(name);
+        if (collator == null) {
+            throw DbException.throwInternalError(name);
+        }
+        collator.setStrength(strength);
+        int cacheSize = SysProperties.COLLATOR_CACHE_SIZE;
+        if (cacheSize != 0) {
+            collationKeys = SmallLRUCache.newInstance(cacheSize);
+        } else {
+            collationKeys = null;
+        }
+    }
+
+    @Override
+    public int compareString(String a, String b, boolean ignoreCase) {
+        if (ignoreCase) {
+            // this is locale sensitive
+            a = a.toUpperCase();
+            b = b.toUpperCase();
+        }
+        int comp;
+        if (collationKeys != null) {
+            CollationKey aKey = getKey(a);
+            CollationKey bKey = getKey(b);
+            comp = aKey.compareTo(bKey);
+        } else {
+            comp = collator.compare(a, b);
+        }
+        return comp;
+    }
+
+    @Override
+    public boolean equalsChars(String a, int ai, String b, int bi,
+            boolean ignoreCase) {
+        return compareString(a.substring(ai, ai + 1), b.substring(bi, bi + 1),
+                ignoreCase) == 0;
+    }
+
+    private CollationKey getKey(String a) {
+        synchronized (collationKeys) {
+            CollationKey key = collationKeys.get(a);
+            if (key == null) {
+                key = collator.getCollationKey(a);
+                collationKeys.put(a, key);
+            }
+            return key;
+        }
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/value/CompareModeIcu4J.java b/modules/h2/src/main/java/org/h2/value/CompareModeIcu4J.java
new file mode 100644
index 0000000000000..7f331f30a2fa0
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/value/CompareModeIcu4J.java
@@ -0,0 +1,88 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.value;
+
+import java.lang.reflect.Method;
+import java.util.Comparator;
+import java.util.Locale;
+
+import org.h2.message.DbException;
+import org.h2.util.JdbcUtils;
+import org.h2.util.StringUtils;
+
+/**
+ * An implementation of CompareMode that uses the ICU4J Collator.
+ */
+public class CompareModeIcu4J extends CompareMode {
+
+    private final Comparator<String> collator;
+
+    protected CompareModeIcu4J(String name, int strength, boolean binaryUnsigned) {
+        super(name, strength, binaryUnsigned);
+        collator = getIcu4jCollator(name, strength);
+    }
+
+    @Override
+    public int compareString(String a, String b, boolean ignoreCase) {
+        if (ignoreCase) {
+            a = a.toUpperCase();
+            b = b.toUpperCase();
+        }
+        return collator.compare(a, b);
+    }
+
+    @Override
+    public boolean equalsChars(String a, int ai, String b, int bi,
+            boolean ignoreCase) {
+        return compareString(a.substring(ai, ai + 1), b.substring(bi, bi + 1),
+                ignoreCase) == 0;
+    }
+
+    @SuppressWarnings("unchecked")
+    private static Comparator<String> getIcu4jCollator(String name, int strength) {
+        try {
+            Comparator<String> result = null;
+            Class<?> collatorClass = JdbcUtils.loadUserClass(
+                    "com.ibm.icu.text.Collator");
+            Method getInstanceMethod = collatorClass.getMethod(
+                    "getInstance", Locale.class);
+            if (name.length() == 2) {
+                Locale locale = new 
Locale(StringUtils.toLowerEnglish(name), "");
+                if (compareLocaleNames(locale, name)) {
+                    result = (Comparator<String>) getInstanceMethod.invoke(null, locale);
+                }
+            } else if (name.length() == 5) {
+                // LL_CC (language_country)
+                int idx = name.indexOf('_');
+                if (idx >= 0) {
+                    String language = StringUtils.toLowerEnglish(name.substring(0, idx));
+                    String country = name.substring(idx + 1);
+                    Locale locale = new Locale(language, country);
+                    if (compareLocaleNames(locale, name)) {
+                        result = (Comparator<String>) getInstanceMethod.invoke(null, locale);
+                    }
+                }
+            }
+            if (result == null) {
+                for (Locale locale : (Locale[]) collatorClass.getMethod(
+                        "getAvailableLocales").invoke(null)) {
+                    if (compareLocaleNames(locale, name)) {
+                        result = (Comparator<String>) getInstanceMethod.invoke(null, locale);
+                        break;
+                    }
+                }
+            }
+            if (result == null) {
+                throw DbException.getInvalidValueException("collator", name);
+            }
+            collatorClass.getMethod("setStrength", int.class).invoke(result, strength);
+            return result;
+        } catch (Exception e) {
+            throw DbException.convert(e);
+        }
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/value/DataType.java b/modules/h2/src/main/java/org/h2/value/DataType.java
new file mode 100644
index 0000000000000..0fad2d782c954
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/value/DataType.java
@@ -0,0 +1,1410 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.value;
+
+import java.io.BufferedReader;
+import java.io.InputStream;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.nio.charset.StandardCharsets;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.Clob;
+import java.sql.Date;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.sql.Types;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.UUID;
+import org.h2.api.ErrorCode;
+import org.h2.api.TimestampWithTimeZone;
+import org.h2.engine.Mode;
+import org.h2.engine.SessionInterface;
+import org.h2.engine.SysProperties;
+import org.h2.jdbc.JdbcArray;
+import org.h2.jdbc.JdbcBlob;
+import org.h2.jdbc.JdbcClob;
+import org.h2.jdbc.JdbcConnection;
+import org.h2.message.DbException;
+import org.h2.tools.SimpleResultSet;
+import org.h2.util.JdbcUtils;
+import org.h2.util.LocalDateTimeUtils;
+import org.h2.util.Utils;
+
+/**
+ * This class contains meta data information about data types,
+ * and can convert between Java objects and Values.
+ */
+public class DataType {
+
+    /**
+     * This constant is used to represent the type of a ResultSet. There is no
+     * equivalent java.sql.Types value, but Oracle uses it to represent a
+     * ResultSet (OracleTypes.CURSOR = -10).
+     */
+    public static final int TYPE_RESULT_SET = -10;
+
+    /**
+     * The Geometry class. This object is null if the jts jar file is not in the
+     * classpath.
+     */
+    public static final Class<?> GEOMETRY_CLASS;
+
+    private static final String GEOMETRY_CLASS_NAME =
+            "org.locationtech.jts.geom.Geometry";
+
+    /**
+     * The list of types. An ArrayList so that Tomcat doesn't set it to null
+     * when clearing references.
+     */
+    private static final ArrayList<DataType> TYPES = new ArrayList<>(96);
+    private static final HashMap<String, DataType> TYPES_BY_NAME = new HashMap<>(96);
+    private static final HashMap<Integer, DataType> TYPES_BY_VALUE_TYPE = new HashMap<>(48);
+
+    /**
+     * The value type of this data type.
+     */
+    public int type;
+
+    /**
+     * The data type name.
+     */
+    public String name;
+
+    /**
+     * The SQL type.
+     */
+    public int sqlType;
+
+    /**
+     * How closely the data type maps to the corresponding JDBC SQL type (low is
+     * best).
+     */
+    public int sqlTypePos;
+
+    /**
+     * The maximum supported precision.
+     */
+    public long maxPrecision;
+
+    /**
+     * The lowest possible scale.
+     */
+    public int minScale;
+
+    /**
+     * The highest possible scale.
+     */
+    public int maxScale;
+
+    /**
+     * If this is a numeric type.
+     */
+    public boolean decimal;
+
+    /**
+     * The prefix required for the SQL literal representation.
+     */
+    public String prefix;
+
+    /**
+     * The suffix required for the SQL literal representation.
+     */
+    public String suffix;
+
+    /**
+     * The list of parameters used in the column definition.
+     */
+    public String params;
+
+    /**
+     * If this is an autoincrement type.
+     */
+    public boolean autoIncrement;
+
+    /**
+     * If this data type is case sensitive.
+     */
+    public boolean caseSensitive;
+
+    /**
+     * If the precision parameter is supported.
+     */
+    public boolean supportsPrecision;
+
+    /**
+     * If the scale parameter is supported.
+     */
+    public boolean supportsScale;
+
+    /**
+     * The default precision.
+     */
+    public long defaultPrecision;
+
+    /**
+     * The default scale.
+     */
+    public int defaultScale;
+
+    /**
+     * The default display size.
+     */
+    public int defaultDisplaySize;
+
+    /**
+     * If this data type should not be listed in the database meta data.
+     */
+    public boolean hidden;
+
+    /**
+     * The number of bytes required for an object.
+     */
+    public int memory;
+
+    static {
+        Class<?> g;
+        try {
+            g = JdbcUtils.loadUserClass(GEOMETRY_CLASS_NAME);
+        } catch (Exception e) {
+            // class is not in the classpath - ignore
+            g = null;
+        }
+        GEOMETRY_CLASS = g;
+    }
+
+    static {
+        add(Value.NULL, Types.NULL,
+                new DataType(),
+                new String[]{"NULL"},
+                // the value is always in the cache
+                0
+        );
+        add(Value.STRING, Types.VARCHAR,
+                createString(true),
+                new String[]{"VARCHAR", "VARCHAR2", "NVARCHAR", "NVARCHAR2",
+                    "VARCHAR_CASESENSITIVE", "CHARACTER VARYING", "TID"},
+                // 24 for ValueString, 24 for String
+                48
+        );
+        add(Value.STRING, Types.LONGVARCHAR,
+                createString(true),
+                new String[]{"LONGVARCHAR", "LONGNVARCHAR"},
+                48
+        );
+        add(Value.STRING_FIXED, Types.CHAR,
+                createString(true),
+                new String[]{"CHAR", "CHARACTER", "NCHAR"},
+                48
+        );
+        add(Value.STRING_IGNORECASE, Types.VARCHAR,
+                createString(false),
+                new String[]{"VARCHAR_IGNORECASE"},
+                48
+        );
+        add(Value.BOOLEAN, Types.BOOLEAN,
+                createDecimal(ValueBoolean.PRECISION, ValueBoolean.PRECISION,
+                        0, ValueBoolean.DISPLAY_SIZE, false, false),
+                new String[]{"BOOLEAN", "BIT", "BOOL"},
+                // the value is always in the cache
+                0
+        );
+        add(Value.BYTE, Types.TINYINT,
+                createDecimal(ValueByte.PRECISION, ValueByte.PRECISION, 0,
+                        ValueByte.DISPLAY_SIZE, false, false),
+                new String[]{"TINYINT"},
+                // the value is almost always in the cache
+                1
+        );
+        add(Value.SHORT, Types.SMALLINT,
+                createDecimal(ValueShort.PRECISION, ValueShort.PRECISION, 0,
+                        ValueShort.DISPLAY_SIZE, false, false),
+                new String[]{"SMALLINT", "YEAR", "INT2"},
+                // in many cases the value is in the cache
+                20
+        );
+        add(Value.INT, Types.INTEGER,
+                createDecimal(ValueInt.PRECISION, ValueInt.PRECISION, 0,
+                        ValueInt.DISPLAY_SIZE, false, false),
+                new String[]{"INTEGER", "INT", "MEDIUMINT", "INT4", "SIGNED"},
+                // in many cases the value is in the cache
+                20
+        );
+        add(Value.INT, Types.INTEGER,
+                createDecimal(ValueInt.PRECISION, ValueInt.PRECISION, 0,
+                        ValueInt.DISPLAY_SIZE, false, 
true), + new String[]{"SERIAL"}, + 20 + ); + add(Value.LONG, Types.BIGINT, + createDecimal(ValueLong.PRECISION, ValueLong.PRECISION, 0, + ValueLong.DISPLAY_SIZE, false, false), + new String[]{"BIGINT", "INT8", "LONG"}, + 24 + ); + add(Value.LONG, Types.BIGINT, + createDecimal(ValueLong.PRECISION, ValueLong.PRECISION, 0, + ValueLong.DISPLAY_SIZE, false, true), + new String[]{"IDENTITY", "BIGSERIAL"}, + 24 + ); + if (SysProperties.BIG_DECIMAL_IS_DECIMAL) { + addDecimal(); + addNumeric(); + } else { + addNumeric(); + addDecimal(); + } + add(Value.FLOAT, Types.REAL, + createDecimal(ValueFloat.PRECISION, ValueFloat.PRECISION, + 0, ValueFloat.DISPLAY_SIZE, false, false), + new String[] {"REAL", "FLOAT4"}, + 24 + ); + add(Value.DOUBLE, Types.DOUBLE, + createDecimal(ValueDouble.PRECISION, ValueDouble.PRECISION, + 0, ValueDouble.DISPLAY_SIZE, false, false), + new String[] { "DOUBLE", "DOUBLE PRECISION" }, + 24 + ); + add(Value.DOUBLE, Types.FLOAT, + createDecimal(ValueDouble.PRECISION, ValueDouble.PRECISION, + 0, ValueDouble.DISPLAY_SIZE, false, false), + new String[] {"FLOAT", "FLOAT8" }, + 24 + ); + add(Value.TIME, Types.TIME, + createDate(ValueTime.MAXIMUM_PRECISION, ValueTime.DEFAULT_PRECISION, + "TIME", true, ValueTime.DEFAULT_SCALE, ValueTime.MAXIMUM_SCALE), + new String[]{"TIME", "TIME WITHOUT TIME ZONE"}, + // 24 for ValueTime, 32 for java.sql.Time + 56 + ); + add(Value.DATE, Types.DATE, + createDate(ValueDate.PRECISION, ValueDate.PRECISION, + "DATE", false, 0, 0), + new String[]{"DATE"}, + // 24 for ValueDate, 32 for java.sql.Data + 56 + ); + add(Value.TIMESTAMP, Types.TIMESTAMP, + createDate(ValueTimestamp.MAXIMUM_PRECISION, ValueTimestamp.DEFAULT_PRECISION, + "TIMESTAMP", true, ValueTimestamp.DEFAULT_SCALE, ValueTimestamp.MAXIMUM_SCALE), + new String[]{"TIMESTAMP", "TIMESTAMP WITHOUT TIME ZONE", + "DATETIME", "DATETIME2", "SMALLDATETIME"}, + // 24 for ValueTimestamp, 32 for java.sql.Timestamp + 56 + ); + // 2014 is the value of Types.TIMESTAMP_WITH_TIMEZONE + // 
use the value instead of the reference because the code has to + // compile (on Java 1.7). Can be replaced with + // Types.TIMESTAMP_WITH_TIMEZONE once Java 1.8 is required. + add(Value.TIMESTAMP_TZ, 2014, + createDate(ValueTimestampTimeZone.MAXIMUM_PRECISION, ValueTimestampTimeZone.DEFAULT_PRECISION, + "TIMESTAMP_TZ", true, ValueTimestampTimeZone.DEFAULT_SCALE, + ValueTimestampTimeZone.MAXIMUM_SCALE), + new String[]{"TIMESTAMP WITH TIME ZONE"}, + // 26 for ValueTimestampUtc, 32 for java.sql.Timestamp + 58 + ); + add(Value.BYTES, Types.VARBINARY, + createString(false), + new String[]{"VARBINARY"}, + 32 + ); + add(Value.BYTES, Types.BINARY, + createString(false), + new String[]{"BINARY", "RAW", "BYTEA", "LONG RAW"}, + 32 + ); + add(Value.BYTES, Types.LONGVARBINARY, + createString(false), + new String[]{"LONGVARBINARY"}, + 32 + ); + add(Value.UUID, Types.BINARY, + createString(false), + // UNIQUEIDENTIFIER is the MSSQL mode equivalent + new String[]{"UUID", "UNIQUEIDENTIFIER"}, + 32 + ); + add(Value.JAVA_OBJECT, Types.OTHER, + createString(false), + new String[]{"OTHER", "OBJECT", "JAVA_OBJECT"}, + 24 + ); + add(Value.BLOB, Types.BLOB, + createLob(), + new String[]{"BLOB", "TINYBLOB", "MEDIUMBLOB", + "LONGBLOB", "IMAGE", "OID"}, + // 80 for ValueLob, 24 for String + 104 + ); + add(Value.CLOB, Types.CLOB, + createLob(), + new String[]{"CLOB", "TINYTEXT", "TEXT", "MEDIUMTEXT", + "LONGTEXT", "NTEXT", "NCLOB"}, + // 80 for ValueLob, 24 for String + 104 + ); + add(Value.GEOMETRY, Types.OTHER, + createString(false), + new String[]{"GEOMETRY"}, + 32 + ); + DataType dataType = new DataType(); + dataType.prefix = "("; + dataType.suffix = "')"; + add(Value.ARRAY, Types.ARRAY, + dataType, + new String[]{"ARRAY"}, + 32 + ); + dataType = new DataType(); + add(Value.RESULT_SET, DataType.TYPE_RESULT_SET, + dataType, + new String[]{"RESULT_SET"}, + 400 + ); + dataType = createString(false); + dataType.supportsPrecision = false; + dataType.supportsScale = false; + add(Value.ENUM, 
Types.OTHER, + dataType, + new String[]{"ENUM"}, + 48 + ); + for (Integer i : TYPES_BY_VALUE_TYPE.keySet()) { + Value.getOrder(i); + } + } + + private static void addDecimal() { + add(Value.DECIMAL, Types.DECIMAL, + createDecimal(Integer.MAX_VALUE, + ValueDecimal.DEFAULT_PRECISION, + ValueDecimal.DEFAULT_SCALE, + ValueDecimal.DEFAULT_DISPLAY_SIZE, true, false), + new String[]{"DECIMAL", "DEC"}, + // 40 for ValueDecimal, + 64 + ); + } + + private static void addNumeric() { + add(Value.DECIMAL, Types.NUMERIC, + createDecimal(Integer.MAX_VALUE, + ValueDecimal.DEFAULT_PRECISION, + ValueDecimal.DEFAULT_SCALE, + ValueDecimal.DEFAULT_DISPLAY_SIZE, true, false), + new String[]{"NUMERIC", "NUMBER"}, + 64 + ); + } + + private static void add(int type, int sqlType, + DataType dataType, String[] names, int memory) { + for (int i = 0; i < names.length; i++) { + DataType dt = new DataType(); + dt.type = type; + dt.sqlType = sqlType; + dt.name = names[i]; + dt.autoIncrement = dataType.autoIncrement; + dt.decimal = dataType.decimal; + dt.maxPrecision = dataType.maxPrecision; + dt.maxScale = dataType.maxScale; + dt.minScale = dataType.minScale; + dt.params = dataType.params; + dt.prefix = dataType.prefix; + dt.suffix = dataType.suffix; + dt.supportsPrecision = dataType.supportsPrecision; + dt.supportsScale = dataType.supportsScale; + dt.defaultPrecision = dataType.defaultPrecision; + dt.defaultScale = dataType.defaultScale; + dt.defaultDisplaySize = dataType.defaultDisplaySize; + dt.caseSensitive = dataType.caseSensitive; + dt.hidden = i > 0; + dt.memory = memory; + for (DataType t2 : TYPES) { + if (t2.sqlType == dt.sqlType) { + dt.sqlTypePos++; + } + } + TYPES_BY_NAME.put(dt.name, dt); + if (TYPES_BY_VALUE_TYPE.get(type) == null) { + TYPES_BY_VALUE_TYPE.put(type, dt); + } + TYPES.add(dt); + } + } + + private static DataType createDecimal(int maxPrecision, + int defaultPrecision, int defaultScale, int defaultDisplaySize, + boolean needsPrecisionAndScale, boolean autoInc) { + 
DataType dataType = new DataType();
+        dataType.maxPrecision = maxPrecision;
+        dataType.defaultPrecision = defaultPrecision;
+        dataType.defaultScale = defaultScale;
+        dataType.defaultDisplaySize = defaultDisplaySize;
+        if (needsPrecisionAndScale) {
+            dataType.params = "PRECISION,SCALE";
+            dataType.supportsPrecision = true;
+            dataType.supportsScale = true;
+        }
+        dataType.decimal = true;
+        dataType.autoIncrement = autoInc;
+        return dataType;
+    }
+
+    private static DataType createDate(int maxPrecision, int precision, String prefix,
+            boolean supportsScale, int scale, int maxScale) {
+        DataType dataType = new DataType();
+        dataType.prefix = prefix + " '";
+        dataType.suffix = "'";
+        dataType.maxPrecision = maxPrecision;
+        dataType.supportsScale = supportsScale;
+        dataType.maxScale = maxScale;
+        dataType.defaultPrecision = precision;
+        dataType.defaultScale = scale;
+        dataType.defaultDisplaySize = precision;
+        return dataType;
+    }
+
+    private static DataType createString(boolean caseSensitive) {
+        DataType dataType = new DataType();
+        dataType.prefix = "'";
+        dataType.suffix = "'";
+        dataType.params = "LENGTH";
+        dataType.caseSensitive = caseSensitive;
+        dataType.supportsPrecision = true;
+        dataType.maxPrecision = Integer.MAX_VALUE;
+        dataType.defaultPrecision = Integer.MAX_VALUE;
+        dataType.defaultDisplaySize = Integer.MAX_VALUE;
+        return dataType;
+    }
+
+    private static DataType createLob() {
+        DataType t = createString(true);
+        t.maxPrecision = Long.MAX_VALUE;
+        t.defaultPrecision = Long.MAX_VALUE;
+        return t;
+    }
+
+    /**
+     * Get the list of data types.
+     *
+     * @return the list
+     */
+    public static ArrayList<DataType> getTypes() {
+        return TYPES;
+    }
+
+    /**
+     * Read a value from the given result set.
+ * + * @param session the session + * @param rs the result set + * @param columnIndex the column index (1 based) + * @param type the data type + * @return the value + */ + public static Value readValue(SessionInterface session, ResultSet rs, + int columnIndex, int type) { + try { + Value v; + switch (type) { + case Value.NULL: { + return ValueNull.INSTANCE; + } + case Value.BYTES: { + /* + * Both BINARY and UUID may be mapped to Value.BYTES. getObject() returns byte[] + * for SQL BINARY, UUID for SQL UUID and null for SQL NULL. + */ + Object o = rs.getObject(columnIndex); + if (o instanceof byte[]) { + v = ValueBytes.getNoCopy((byte[]) o); + } else if (o != null) { + v = ValueUuid.get((UUID) o); + } else { + v = ValueNull.INSTANCE; + } + break; + } + case Value.UUID: { + Object o = rs.getObject(columnIndex); + if (o instanceof UUID) { + v = ValueUuid.get((UUID) o); + } else if (o != null) { + v = ValueUuid.get((byte[]) o); + } else { + v = ValueNull.INSTANCE; + } + break; + } + case Value.BOOLEAN: { + boolean value = rs.getBoolean(columnIndex); + v = rs.wasNull() ? (Value) ValueNull.INSTANCE : + ValueBoolean.get(value); + break; + } + case Value.BYTE: { + byte value = rs.getByte(columnIndex); + v = rs.wasNull() ? (Value) ValueNull.INSTANCE : + ValueByte.get(value); + break; + } + case Value.DATE: { + Date value = rs.getDate(columnIndex); + v = value == null ? (Value) ValueNull.INSTANCE : + ValueDate.get(value); + break; + } + case Value.TIME: { + Time value = rs.getTime(columnIndex); + v = value == null ? (Value) ValueNull.INSTANCE : + ValueTime.get(value); + break; + } + case Value.TIMESTAMP: { + Timestamp value = rs.getTimestamp(columnIndex); + v = value == null ? (Value) ValueNull.INSTANCE : + ValueTimestamp.get(value); + break; + } + case Value.TIMESTAMP_TZ: { + TimestampWithTimeZone value = (TimestampWithTimeZone) rs.getObject(columnIndex); + v = value == null ? 
(Value) ValueNull.INSTANCE : + ValueTimestampTimeZone.get(value); + break; + } + case Value.DECIMAL: { + BigDecimal value = rs.getBigDecimal(columnIndex); + v = value == null ? (Value) ValueNull.INSTANCE : + ValueDecimal.get(value); + break; + } + case Value.DOUBLE: { + double value = rs.getDouble(columnIndex); + v = rs.wasNull() ? (Value) ValueNull.INSTANCE : + ValueDouble.get(value); + break; + } + case Value.FLOAT: { + float value = rs.getFloat(columnIndex); + v = rs.wasNull() ? (Value) ValueNull.INSTANCE : + ValueFloat.get(value); + break; + } + case Value.INT: { + int value = rs.getInt(columnIndex); + v = rs.wasNull() ? (Value) ValueNull.INSTANCE : + ValueInt.get(value); + break; + } + case Value.LONG: { + long value = rs.getLong(columnIndex); + v = rs.wasNull() ? (Value) ValueNull.INSTANCE : + ValueLong.get(value); + break; + } + case Value.SHORT: { + short value = rs.getShort(columnIndex); + v = rs.wasNull() ? (Value) ValueNull.INSTANCE : + ValueShort.get(value); + break; + } + case Value.STRING_IGNORECASE: { + String s = rs.getString(columnIndex); + v = (s == null) ? (Value) ValueNull.INSTANCE : + ValueStringIgnoreCase.get(s); + break; + } + case Value.STRING_FIXED: { + String s = rs.getString(columnIndex); + v = (s == null) ? (Value) ValueNull.INSTANCE : + ValueStringFixed.get(s); + break; + } + case Value.STRING: { + String s = rs.getString(columnIndex); + v = (s == null) ? (Value) ValueNull.INSTANCE : + ValueString.get(s); + break; + } + case Value.CLOB: { + if (session == null) { + String s = rs.getString(columnIndex); + v = s == null ? ValueNull.INSTANCE : + ValueLobDb.createSmallLob(Value.CLOB, s.getBytes(StandardCharsets.UTF_8)); + } else { + Reader in = rs.getCharacterStream(columnIndex); + if (in == null) { + v = ValueNull.INSTANCE; + } else { + v = session.getDataHandler().getLobStorage(). 
+                            createClob(new BufferedReader(in), -1);
+                    }
+                }
+                if (session != null) {
+                    session.addTemporaryLob(v);
+                }
+                break;
+            }
+            case Value.BLOB: {
+                if (session == null) {
+                    byte[] buff = rs.getBytes(columnIndex);
+                    return buff == null ? ValueNull.INSTANCE :
+                            ValueLobDb.createSmallLob(Value.BLOB, buff);
+                }
+                InputStream in = rs.getBinaryStream(columnIndex);
+                v = (in == null) ? (Value) ValueNull.INSTANCE :
+                        session.getDataHandler().getLobStorage().createBlob(in, -1);
+                session.addTemporaryLob(v);
+                break;
+            }
+            case Value.JAVA_OBJECT: {
+                if (SysProperties.serializeJavaObject) {
+                    byte[] buff = rs.getBytes(columnIndex);
+                    v = buff == null ? ValueNull.INSTANCE :
+                            ValueJavaObject.getNoCopy(null, buff, session.getDataHandler());
+                } else {
+                    Object o = rs.getObject(columnIndex);
+                    v = o == null ? ValueNull.INSTANCE :
+                            ValueJavaObject.getNoCopy(o, null, session.getDataHandler());
+                }
+                break;
+            }
+            case Value.ARRAY: {
+                Array array = rs.getArray(columnIndex);
+                if (array == null) {
+                    return ValueNull.INSTANCE;
+                }
+                Object[] list = (Object[]) array.getArray();
+                if (list == null) {
+                    return ValueNull.INSTANCE;
+                }
+                int len = list.length;
+                Value[] values = new Value[len];
+                for (int i = 0; i < len; i++) {
+                    values[i] = DataType.convertToValue(session, list[i], Value.NULL);
+                }
+                v = ValueArray.get(values);
+                break;
+            }
+            case Value.ENUM: {
+                int value = rs.getInt(columnIndex);
+                v = rs.wasNull() ? (Value) ValueNull.INSTANCE :
+                        ValueInt.get(value);
+                break;
+            }
+            case Value.RESULT_SET: {
+                ResultSet x = (ResultSet) rs.getObject(columnIndex);
+                if (x == null) {
+                    return ValueNull.INSTANCE;
+                }
+                return ValueResultSet.get(x);
+            }
+            case Value.GEOMETRY: {
+                Object x = rs.getObject(columnIndex);
+                if (x == null) {
+                    return ValueNull.INSTANCE;
+                }
+                return ValueGeometry.getFromGeometry(x);
+            }
+            default:
+                if (JdbcUtils.customDataTypesHandler != null) {
+                    return JdbcUtils.customDataTypesHandler.getValue(type,
+                            rs.getObject(columnIndex),
+                            session.getDataHandler());
+                }
+                throw DbException.throwInternalError("type="+type);
+            }
+            return v;
+        } catch (SQLException e) {
+            throw DbException.convert(e);
+        }
+    }
+
+    /**
+     * Get the name of the Java class for the given value type.
+     *
+     * @param type the value type
+     * @return the class name
+     */
+    public static String getTypeClassName(int type) {
+        switch (type) {
+        case Value.BOOLEAN:
+            // "java.lang.Boolean";
+            return Boolean.class.getName();
+        case Value.BYTE:
+            // "java.lang.Byte";
+            return Byte.class.getName();
+        case Value.SHORT:
+            // "java.lang.Short";
+            return Short.class.getName();
+        case Value.INT:
+        case Value.ENUM:
+            // "java.lang.Integer";
+            return Integer.class.getName();
+        case Value.LONG:
+            // "java.lang.Long";
+            return Long.class.getName();
+        case Value.DECIMAL:
+            // "java.math.BigDecimal";
+            return BigDecimal.class.getName();
+        case Value.TIME:
+            // "java.sql.Time";
+            return Time.class.getName();
+        case Value.DATE:
+            // "java.sql.Date";
+            return Date.class.getName();
+        case Value.TIMESTAMP:
+            // "java.sql.Timestamp";
+            return Timestamp.class.getName();
+        case Value.TIMESTAMP_TZ:
+            // "org.h2.api.TimestampWithTimeZone";
+            return TimestampWithTimeZone.class.getName();
+        case Value.BYTES:
+        case Value.UUID:
+            // "[B", not "byte[]";
+            return byte[].class.getName();
+        case Value.STRING:
+        case Value.STRING_IGNORECASE:
+        case Value.STRING_FIXED:
+            // "java.lang.String";
+            return String.class.getName();
+        case Value.BLOB:
+            // "java.sql.Blob";
+            return java.sql.Blob.class.getName();
+        case Value.CLOB:
+            // "java.sql.Clob";
+            return java.sql.Clob.class.getName();
+        case Value.DOUBLE:
+            // "java.lang.Double";
+            return Double.class.getName();
+        case Value.FLOAT:
+            // "java.lang.Float";
+            return Float.class.getName();
+        case Value.NULL:
+            return null;
+        case Value.JAVA_OBJECT:
+            // "java.lang.Object";
+            return Object.class.getName();
+        case Value.UNKNOWN:
+            // anything
+            return Object.class.getName();
+        case Value.ARRAY:
+            return Array.class.getName();
+        case Value.RESULT_SET:
+            return ResultSet.class.getName();
+        case Value.GEOMETRY:
+            return GEOMETRY_CLASS_NAME;
+        default:
+            if (JdbcUtils.customDataTypesHandler != null) {
+                return JdbcUtils.customDataTypesHandler.getDataTypeClassName(type);
+            }
+            throw DbException.throwInternalError("type="+type);
+        }
+    }
+
+    /**
+     * Get the data type object for the given value type.
+     *
+     * @param type the value type
+     * @return the data type object
+     */
+    public static DataType getDataType(int type) {
+        if (type == Value.UNKNOWN) {
+            throw DbException.get(ErrorCode.UNKNOWN_DATA_TYPE_1, "?");
+        }
+        DataType dt = TYPES_BY_VALUE_TYPE.get(type);
+        if (dt == null && JdbcUtils.customDataTypesHandler != null) {
+            dt = JdbcUtils.customDataTypesHandler.getDataTypeById(type);
+        }
+        if (dt == null) {
+            dt = TYPES_BY_VALUE_TYPE.get(Value.NULL);
+        }
+        return dt;
+    }
+
+    /**
+     * Convert a value type to a SQL type.
+     *
+     * @param type the value type
+     * @return the SQL type
+     */
+    public static int convertTypeToSQLType(int type) {
+        return getDataType(type).sqlType;
+    }
+
+    /**
+     * Convert a SQL type to a value type using SQL type name, in order to
+     * manage SQL type extension mechanism.
+     *
+     * @param sqlType the SQL type
+     * @param sqlTypeName the SQL type name
+     * @return the value type
+     */
+    public static int convertSQLTypeToValueType(int sqlType, String sqlTypeName) {
+        switch (sqlType) {
+        case Types.OTHER:
+        case Types.JAVA_OBJECT:
+            if (sqlTypeName.equalsIgnoreCase("geometry")) {
+                return Value.GEOMETRY;
+            }
+        }
+        return convertSQLTypeToValueType(sqlType);
+    }
+
+    /**
+     * Get the SQL type from the result set meta data for the given column. This
+     * method uses the SQL type and type name.
+     *
+     * @param meta the meta data
+     * @param columnIndex the column index (1, 2,...)
+     * @return the value type
+     */
+    public static int getValueTypeFromResultSet(ResultSetMetaData meta,
+            int columnIndex) throws SQLException {
+        return convertSQLTypeToValueType(
+                meta.getColumnType(columnIndex),
+                meta.getColumnTypeName(columnIndex));
+    }
+
+    /**
+     * Convert a SQL type to a value type.
+     *
+     * @param sqlType the SQL type
+     * @return the value type
+     */
+    public static int convertSQLTypeToValueType(int sqlType) {
+        switch (sqlType) {
+        case Types.CHAR:
+        case Types.NCHAR:
+            return Value.STRING_FIXED;
+        case Types.VARCHAR:
+        case Types.LONGVARCHAR:
+        case Types.NVARCHAR:
+        case Types.LONGNVARCHAR:
+            return Value.STRING;
+        case Types.NUMERIC:
+        case Types.DECIMAL:
+            return Value.DECIMAL;
+        case Types.BIT:
+        case Types.BOOLEAN:
+            return Value.BOOLEAN;
+        case Types.INTEGER:
+            return Value.INT;
+        case Types.SMALLINT:
+            return Value.SHORT;
+        case Types.TINYINT:
+            return Value.BYTE;
+        case Types.BIGINT:
+            return Value.LONG;
+        case Types.REAL:
+            return Value.FLOAT;
+        case Types.DOUBLE:
+        case Types.FLOAT:
+            return Value.DOUBLE;
+        case Types.BINARY:
+        case Types.VARBINARY:
+        case Types.LONGVARBINARY:
+            return Value.BYTES;
+        case Types.OTHER:
+        case Types.JAVA_OBJECT:
+            return Value.JAVA_OBJECT;
+        case Types.DATE:
+            return Value.DATE;
+        case Types.TIME:
+            return Value.TIME;
+        case Types.TIMESTAMP:
+            return Value.TIMESTAMP;
+        case 2014: // Types.TIMESTAMP_WITH_TIMEZONE
+            return Value.TIMESTAMP_TZ;
+        case Types.BLOB:
+            return Value.BLOB;
+        case Types.CLOB:
+        case Types.NCLOB:
+            return Value.CLOB;
+        case Types.NULL:
+            return Value.NULL;
+        case Types.ARRAY:
+            return Value.ARRAY;
+        case DataType.TYPE_RESULT_SET:
+            return Value.RESULT_SET;
+        default:
+            throw DbException.get(
+                    ErrorCode.UNKNOWN_DATA_TYPE_1, "" + sqlType);
+        }
+    }
+
+    /**
+     * Get the value type for the given Java class.
+     *
+     * @param x the Java class
+     * @return the value type
+     */
+    public static int getTypeFromClass(Class<?> x) {
+        // TODO refactor: too many if/else in functions, can reduce!
+        if (x == null || Void.TYPE == x) {
+            return Value.NULL;
+        }
+        if (x.isPrimitive()) {
+            x = Utils.getNonPrimitiveClass(x);
+        }
+        if (String.class == x) {
+            return Value.STRING;
+        } else if (Integer.class == x) {
+            return Value.INT;
+        } else if (Long.class == x) {
+            return Value.LONG;
+        } else if (Boolean.class == x) {
+            return Value.BOOLEAN;
+        } else if (Double.class == x) {
+            return Value.DOUBLE;
+        } else if (Byte.class == x) {
+            return Value.BYTE;
+        } else if (Short.class == x) {
+            return Value.SHORT;
+        } else if (Character.class == x) {
+            throw DbException.get(
+                    ErrorCode.DATA_CONVERSION_ERROR_1, "char (not supported)");
+        } else if (Float.class == x) {
+            return Value.FLOAT;
+        } else if (byte[].class == x) {
+            return Value.BYTES;
+        } else if (UUID.class == x) {
+            return Value.UUID;
+        } else if (Void.class == x) {
+            return Value.NULL;
+        } else if (BigDecimal.class.isAssignableFrom(x)) {
+            return Value.DECIMAL;
+        } else if (ResultSet.class.isAssignableFrom(x)) {
+            return Value.RESULT_SET;
+        } else if (Value.ValueBlob.class.isAssignableFrom(x)) {
+            return Value.BLOB;
+        } else if (Value.ValueClob.class.isAssignableFrom(x)) {
+            return Value.CLOB;
+        } else if (Date.class.isAssignableFrom(x)) {
+            return Value.DATE;
+        } else if (Time.class.isAssignableFrom(x)) {
+            return Value.TIME;
+        } else if (Timestamp.class.isAssignableFrom(x)) {
+            return Value.TIMESTAMP;
+        } else if (java.util.Date.class.isAssignableFrom(x)) {
+            return Value.TIMESTAMP;
+        } else if (java.io.Reader.class.isAssignableFrom(x)) {
+            return Value.CLOB;
+        } else if (java.sql.Clob.class.isAssignableFrom(x)) {
+            return Value.CLOB;
+        } else if (java.io.InputStream.class.isAssignableFrom(x)) {
+            return Value.BLOB;
+        } else if (java.sql.Blob.class.isAssignableFrom(x)) {
+            return Value.BLOB;
+        } else if (Object[].class.isAssignableFrom(x)) {
+            // this includes String[] and so on
+            return Value.ARRAY;
+        } else if (isGeometryClass(x)) {
+            return Value.GEOMETRY;
+        } else if (LocalDateTimeUtils.LOCAL_DATE == x) {
+            return Value.DATE;
+        } else if (LocalDateTimeUtils.LOCAL_TIME == x) {
+            return Value.TIME;
+        } else if (LocalDateTimeUtils.LOCAL_DATE_TIME == x) {
+            return Value.TIMESTAMP;
+        } else if (LocalDateTimeUtils.OFFSET_DATE_TIME == x || LocalDateTimeUtils.INSTANT == x) {
+            return Value.TIMESTAMP_TZ;
+        } else {
+            if (JdbcUtils.customDataTypesHandler != null) {
+                return JdbcUtils.customDataTypesHandler.getTypeIdFromClass(x);
+            }
+            return Value.JAVA_OBJECT;
+        }
+    }
+
+    /**
+     * Convert a Java object to a value.
+     *
+     * @param session the session
+     * @param x the value
+     * @param type the value type
+     * @return the value
+     */
+    public static Value convertToValue(SessionInterface session, Object x,
+            int type) {
+        Value v = convertToValue1(session, x, type);
+        if (session != null) {
+            session.addTemporaryLob(v);
+        }
+        return v;
+    }
+
+    private static Value convertToValue1(SessionInterface session, Object x,
+            int type) {
+        if (x == null) {
+            return ValueNull.INSTANCE;
+        }
+        if (type == Value.JAVA_OBJECT) {
+            return ValueJavaObject.getNoCopy(x, null, session.getDataHandler());
+        }
+        if (x instanceof String) {
+            return ValueString.get((String) x);
+        } else if (x instanceof Value) {
+            return (Value) x;
+        } else if (x instanceof Long) {
+            return ValueLong.get(((Long) x).longValue());
+        } else if (x instanceof Integer) {
+            return ValueInt.get(((Integer) x).intValue());
+        } else if (x instanceof BigInteger) {
+            return ValueDecimal.get(new BigDecimal((BigInteger) x));
+        } else if (x instanceof BigDecimal) {
+            return ValueDecimal.get((BigDecimal) x);
+        } else if (x instanceof Boolean) {
+            return ValueBoolean.get(((Boolean) x).booleanValue());
+        } else if (x instanceof Byte) {
+            return ValueByte.get(((Byte) x).byteValue());
+        } else if (x instanceof Short) {
+            return ValueShort.get(((Short) x).shortValue());
+        } else if (x instanceof Float) {
+            return ValueFloat.get(((Float) x).floatValue());
+        } else if (x instanceof Double) {
+            return ValueDouble.get(((Double) x).doubleValue());
+        } else if (x instanceof byte[]) {
+            return ValueBytes.get((byte[]) x);
+        } else if (x instanceof Date) {
+            return ValueDate.get((Date) x);
+        } else if (x instanceof Time) {
+            return ValueTime.get((Time) x);
+        } else if (x instanceof Timestamp) {
+            return ValueTimestamp.get((Timestamp) x);
+        } else if (x instanceof java.util.Date) {
+            return ValueTimestamp.fromMillis(((java.util.Date) x).getTime());
+        } else if (x instanceof java.io.Reader) {
+            Reader r = new BufferedReader((java.io.Reader) x);
+            return session.getDataHandler().getLobStorage().
+                    createClob(r, -1);
+        } else if (x instanceof java.sql.Clob) {
+            try {
+                java.sql.Clob clob = (java.sql.Clob) x;
+                Reader r = new BufferedReader(clob.getCharacterStream());
+                return session.getDataHandler().getLobStorage().
+                        createClob(r, clob.length());
+            } catch (SQLException e) {
+                throw DbException.convert(e);
+            }
+        } else if (x instanceof java.io.InputStream) {
+            return session.getDataHandler().getLobStorage().
+                    createBlob((java.io.InputStream) x, -1);
+        } else if (x instanceof java.sql.Blob) {
+            try {
+                java.sql.Blob blob = (java.sql.Blob) x;
+                return session.getDataHandler().getLobStorage().
+                        createBlob(blob.getBinaryStream(), blob.length());
+            } catch (SQLException e) {
+                throw DbException.convert(e);
+            }
+        } else if (x instanceof java.sql.Array) {
+            java.sql.Array array = (java.sql.Array) x;
+            try {
+                return convertToValue(session, array.getArray(), Value.ARRAY);
+            } catch (SQLException e) {
+                throw DbException.convert(e);
+            }
+        } else if (x instanceof ResultSet) {
+            if (x instanceof SimpleResultSet) {
+                return ValueResultSet.get((ResultSet) x);
+            }
+            return ValueResultSet.getCopy((ResultSet) x, Integer.MAX_VALUE);
+        } else if (x instanceof UUID) {
+            return ValueUuid.get((UUID) x);
+        }
+        Class<?> clazz = x.getClass();
+        if (x instanceof Object[]) {
+            // (a.getClass().isArray());
+            // (a.getClass().getComponentType().isPrimitive());
+            Object[] o = (Object[]) x;
+            int len = o.length;
+            Value[] v = new Value[len];
+            for (int i = 0; i < len; i++) {
+                v[i] = convertToValue(session, o[i], type);
+            }
+            return ValueArray.get(clazz.getComponentType(), v);
+        } else if (x instanceof Character) {
+            return ValueStringFixed.get(((Character) x).toString());
+        } else if (isGeometry(x)) {
+            return ValueGeometry.getFromGeometry(x);
+        } else if (clazz == LocalDateTimeUtils.LOCAL_DATE) {
+            return LocalDateTimeUtils.localDateToDateValue(x);
+        } else if (clazz == LocalDateTimeUtils.LOCAL_TIME) {
+            return LocalDateTimeUtils.localTimeToTimeValue(x);
+        } else if (clazz == LocalDateTimeUtils.LOCAL_DATE_TIME) {
+            return LocalDateTimeUtils.localDateTimeToValue(x);
+        } else if (clazz == LocalDateTimeUtils.INSTANT) {
+            return LocalDateTimeUtils.instantToValue(x);
+        } else if (clazz == LocalDateTimeUtils.OFFSET_DATE_TIME) {
+            return LocalDateTimeUtils.offsetDateTimeToValue(x);
+        } else if (x instanceof TimestampWithTimeZone) {
+            return ValueTimestampTimeZone.get((TimestampWithTimeZone) x);
+        } else {
+            if (JdbcUtils.customDataTypesHandler != null) {
+                return JdbcUtils.customDataTypesHandler.getValue(type, x,
+                        session.getDataHandler());
+            }
+            return ValueJavaObject.getNoCopy(x, null, session.getDataHandler());
+        }
+    }
+
+    /**
+     * Check whether a given class matches the Geometry class.
+     *
+     * @param x the class
+     * @return true if it is a Geometry class
+     */
+    public static boolean isGeometryClass(Class<?> x) {
+        if (x == null || GEOMETRY_CLASS == null) {
+            return false;
+        }
+        return GEOMETRY_CLASS.isAssignableFrom(x);
+    }
+
+    /**
+     * Check whether a given object is a Geometry object.
+     *
+     * @param x the object
+     * @return true if it is a Geometry object
+     */
+    public static boolean isGeometry(Object x) {
+        if (x == null) {
+            return false;
+        }
+        return isGeometryClass(x.getClass());
+    }
+
+    /**
+     * Get a data type object from a type name.
+     *
+     * @param s the type name
+     * @param mode database mode
+     * @return the data type object
+     */
+    public static DataType getTypeByName(String s, Mode mode) {
+        DataType result = mode.typeByNameMap.get(s);
+        if (result == null) {
+            result = TYPES_BY_NAME.get(s);
+            if (result == null && JdbcUtils.customDataTypesHandler != null) {
+                result = JdbcUtils.customDataTypesHandler.getDataTypeByName(s);
+            }
+        }
+        return result;
+    }
+
+    /**
+     * Check if the given value type is a large object (BLOB or CLOB).
+     *
+     * @param type the value type
+     * @return true if the value type is a lob type
+     */
+    public static boolean isLargeObject(int type) {
+        return type == Value.BLOB || type == Value.CLOB;
+    }
+
+    /**
+     * Check if the given value type is a String (VARCHAR,...).
+     *
+     * @param type the value type
+     * @return true if the value type is a String type
+     */
+    public static boolean isStringType(int type) {
+        return type == Value.STRING || type == Value.STRING_FIXED || type == Value.STRING_IGNORECASE;
+    }
+
+    /**
+     * Check if the given value type supports the add operation.
+     *
+     * @param type the value type
+     * @return true if add is supported
+     */
+    public static boolean supportsAdd(int type) {
+        switch (type) {
+        case Value.BYTE:
+        case Value.DECIMAL:
+        case Value.DOUBLE:
+        case Value.FLOAT:
+        case Value.INT:
+        case Value.LONG:
+        case Value.SHORT:
+            return true;
+        case Value.BOOLEAN:
+        case Value.TIME:
+        case Value.DATE:
+        case Value.TIMESTAMP:
+        case Value.TIMESTAMP_TZ:
+        case Value.BYTES:
+        case Value.UUID:
+        case Value.STRING:
+        case Value.STRING_IGNORECASE:
+        case Value.STRING_FIXED:
+        case Value.BLOB:
+        case Value.CLOB:
+        case Value.NULL:
+        case Value.JAVA_OBJECT:
+        case Value.UNKNOWN:
+        case Value.ARRAY:
+        case Value.RESULT_SET:
+        case Value.GEOMETRY:
+            return false;
+        default:
+            if (JdbcUtils.customDataTypesHandler != null) {
+                return JdbcUtils.customDataTypesHandler.supportsAdd(type);
+            }
+            return false;
+        }
+    }
+
+    /**
+     * Get the data type that will not overflow when calling 'add' 2 billion
+     * times.
+     *
+     * @param type the value type
+     * @return the data type that supports adding
+     */
+    public static int getAddProofType(int type) {
+        switch (type) {
+        case Value.BYTE:
+            return Value.LONG;
+        case Value.FLOAT:
+            return Value.DOUBLE;
+        case Value.INT:
+            return Value.LONG;
+        case Value.LONG:
+            return Value.DECIMAL;
+        case Value.SHORT:
+            return Value.LONG;
+        case Value.BOOLEAN:
+        case Value.DECIMAL:
+        case Value.TIME:
+        case Value.DATE:
+        case Value.TIMESTAMP:
+        case Value.TIMESTAMP_TZ:
+        case Value.BYTES:
+        case Value.UUID:
+        case Value.STRING:
+        case Value.STRING_IGNORECASE:
+        case Value.STRING_FIXED:
+        case Value.BLOB:
+        case Value.CLOB:
+        case Value.DOUBLE:
+        case Value.NULL:
+        case Value.JAVA_OBJECT:
+        case Value.UNKNOWN:
+        case Value.ARRAY:
+        case Value.RESULT_SET:
+        case Value.GEOMETRY:
+            return type;
+        default:
+            if (JdbcUtils.customDataTypesHandler != null) {
+                return JdbcUtils.customDataTypesHandler.getAddProofType(type);
+            }
+            return type;
+        }
+    }
+
+    /**
+     * Get the default value in the form of a Java object for the given Java
+     * class.
+     *
+     * @param clazz the Java class
+     * @return the default object
+     */
+    public static Object getDefaultForPrimitiveType(Class<?> clazz) {
+        if (clazz == Boolean.TYPE) {
+            return Boolean.FALSE;
+        } else if (clazz == Byte.TYPE) {
+            return (byte) 0;
+        } else if (clazz == Character.TYPE) {
+            return (char) 0;
+        } else if (clazz == Short.TYPE) {
+            return (short) 0;
+        } else if (clazz == Integer.TYPE) {
+            return 0;
+        } else if (clazz == Long.TYPE) {
+            return 0L;
+        } else if (clazz == Float.TYPE) {
+            return (float) 0;
+        } else if (clazz == Double.TYPE) {
+            return (double) 0;
+        }
+        throw DbException.throwInternalError(
+                "primitive=" + clazz.toString());
+    }
+
+    /**
+     * Convert a value to the specified class.
+     *
+     * @param conn the database connection
+     * @param v the value
+     * @param paramClass the target class
+     * @return the converted object
+     */
+    public static Object convertTo(JdbcConnection conn, Value v,
+            Class<?> paramClass) {
+        if (paramClass == Blob.class) {
+            return new JdbcBlob(conn, v, 0);
+        } else if (paramClass == Clob.class) {
+            return new JdbcClob(conn, v, 0);
+        } else if (paramClass == Array.class) {
+            return new JdbcArray(conn, v, 0);
+        }
+        switch (v.getType()) {
+        case Value.JAVA_OBJECT: {
+            Object o = SysProperties.serializeJavaObject ? JdbcUtils.deserialize(v.getBytes(),
+                    conn.getSession().getDataHandler()) : v.getObject();
+            if (paramClass.isAssignableFrom(o.getClass())) {
+                return o;
+            }
+            break;
+        }
+        case Value.BOOLEAN:
+        case Value.BYTE:
+        case Value.SHORT:
+        case Value.INT:
+        case Value.LONG:
+        case Value.DECIMAL:
+        case Value.TIME:
+        case Value.DATE:
+        case Value.TIMESTAMP:
+        case Value.TIMESTAMP_TZ:
+        case Value.BYTES:
+        case Value.UUID:
+        case Value.STRING:
+        case Value.STRING_IGNORECASE:
+        case Value.STRING_FIXED:
+        case Value.BLOB:
+        case Value.CLOB:
+        case Value.DOUBLE:
+        case Value.FLOAT:
+        case Value.NULL:
+        case Value.UNKNOWN:
+        case Value.ARRAY:
+        case Value.RESULT_SET:
+        case Value.GEOMETRY:
+            break;
+        default:
+            if (JdbcUtils.customDataTypesHandler != null) {
+                return JdbcUtils.customDataTypesHandler.getObject(v, paramClass);
+            }
+        }
+        throw DbException.getUnsupportedException("converting to class " + paramClass.getName());
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/value/NullableKeyConcurrentMap.java b/modules/h2/src/main/java/org/h2/value/NullableKeyConcurrentMap.java
new file mode 100644
index 0000000000000..4ca46df60ae4d
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/value/NullableKeyConcurrentMap.java
@@ -0,0 +1,44 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.value;
+
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * A concurrent hash map that allows null keys.
+ *
+ * @param <V> the value type
+ */
+public class NullableKeyConcurrentMap<V> extends ConcurrentHashMap<String, V> {
+
+    private static final long serialVersionUID = 1L;
+    private static final String NULL = new String(new byte[0]);
+
+    @Override
+    public V get(Object key) {
+        return super.get(toUpper(key));
+    }
+
+    @Override
+    public V put(String key, V value) {
+        return super.put(toUpper(key), value);
+    }
+
+    @Override
+    public boolean containsKey(Object key) {
+        return super.containsKey(toUpper(key));
+    }
+
+    @Override
+    public V remove(Object key) {
+        return super.remove(toUpper(key));
+    }
+
+    private static String toUpper(Object key) {
+        return key == null ? NULL : key.toString();
+    }
+
+}
diff --git a/modules/h2/src/main/java/org/h2/value/Transfer.java b/modules/h2/src/main/java/org/h2/value/Transfer.java
new file mode 100644
index 0000000000000..16cd4ba2c6f00
--- /dev/null
+++ b/modules/h2/src/main/java/org/h2/value/Transfer.java
@@ -0,0 +1,767 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.value;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.net.InetAddress;
+import java.net.Socket;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.sql.Timestamp;
+import org.h2.api.ErrorCode;
+import org.h2.engine.Constants;
+import org.h2.engine.SessionInterface;
+import org.h2.message.DbException;
+import org.h2.security.SHA256;
+import org.h2.store.Data;
+import org.h2.store.DataReader;
+import org.h2.tools.SimpleResultSet;
+import org.h2.util.Bits;
+import org.h2.util.DateTimeUtils;
+import org.h2.util.IOUtils;
+import org.h2.util.JdbcUtils;
+import org.h2.util.MathUtils;
+import org.h2.util.NetUtils;
+import org.h2.util.StringUtils;
+import org.h2.util.Utils;
+
+/**
+ * The transfer class is used to send and receive Value objects.
+ * It is used on both the client side, and on the server side.
+ */
+public class Transfer {
+
+    private static final int BUFFER_SIZE = 64 * 1024;
+    private static final int LOB_MAGIC = 0x1234;
+    private static final int LOB_MAC_SALT_LENGTH = 16;
+
+    private Socket socket;
+    private DataInputStream in;
+    private DataOutputStream out;
+    private SessionInterface session;
+    private boolean ssl;
+    private int version;
+    private byte[] lobMacSalt;
+
+    /**
+     * Create a new transfer object for the specified session.
+     *
+     * @param session the session
+     * @param s the socket
+     */
+    public Transfer(SessionInterface session, Socket s) {
+        this.session = session;
+        this.socket = s;
+    }
+
+    /**
+     * Initialize the transfer object. This method will try to open an input and
+     * output stream.
+     */
+    public synchronized void init() throws IOException {
+        if (socket != null) {
+            in = new DataInputStream(
+                    new BufferedInputStream(
+                            socket.getInputStream(), Transfer.BUFFER_SIZE));
+            out = new DataOutputStream(
+                    new BufferedOutputStream(
+                            socket.getOutputStream(), Transfer.BUFFER_SIZE));
+        }
+    }
+
+    /**
+     * Write pending changes.
+     */
+    public void flush() throws IOException {
+        out.flush();
+    }
+
+    /**
+     * Write a boolean.
+     *
+     * @param x the value
+     * @return itself
+     */
+    public Transfer writeBoolean(boolean x) throws IOException {
+        out.writeByte((byte) (x ? 1 : 0));
+        return this;
+    }
+
+    /**
+     * Read a boolean.
+     *
+     * @return the value
+     */
+    public boolean readBoolean() throws IOException {
+        return in.readByte() == 1;
+    }
+
+    /**
+     * Write a byte.
+     *
+     * @param x the value
+     * @return itself
+     */
+    private Transfer writeByte(byte x) throws IOException {
+        out.writeByte(x);
+        return this;
+    }
+
+    /**
+     * Read a byte.
+     *
+     * @return the value
+     */
+    private byte readByte() throws IOException {
+        return in.readByte();
+    }
+
+    /**
+     * Write an int.
+     *
+     * @param x the value
+     * @return itself
+     */
+    public Transfer writeInt(int x) throws IOException {
+        out.writeInt(x);
+        return this;
+    }
+
+    /**
+     * Read an int.
+     *
+     * @return the value
+     */
+    public int readInt() throws IOException {
+        return in.readInt();
+    }
+
+    /**
+     * Write a long.
+     *
+     * @param x the value
+     * @return itself
+     */
+    public Transfer writeLong(long x) throws IOException {
+        out.writeLong(x);
+        return this;
+    }
+
+    /**
+     * Read a long.
+     *
+     * @return the value
+     */
+    public long readLong() throws IOException {
+        return in.readLong();
+    }
+
+    /**
+     * Write a double.
+     *
+     * @param i the value
+     * @return itself
+     */
+    private Transfer writeDouble(double i) throws IOException {
+        out.writeDouble(i);
+        return this;
+    }
+
+    /**
+     * Write a float.
+     *
+     * @param i the value
+     * @return itself
+     */
+    private Transfer writeFloat(float i) throws IOException {
+        out.writeFloat(i);
+        return this;
+    }
+
+    /**
+     * Read a double.
+     *
+     * @return the value
+     */
+    private double readDouble() throws IOException {
+        return in.readDouble();
+    }
+
+    /**
+     * Read a float.
+     *
+     * @return the value
+     */
+    private float readFloat() throws IOException {
+        return in.readFloat();
+    }
+
+    /**
+     * Write a string. The maximum string length is Integer.MAX_VALUE.
+     *
+     * @param s the value
+     * @return itself
+     */
+    public Transfer writeString(String s) throws IOException {
+        if (s == null) {
+            out.writeInt(-1);
+        } else {
+            int len = s.length();
+            out.writeInt(len);
+            for (int i = 0; i < len; i++) {
+                out.writeChar(s.charAt(i));
+            }
+        }
+        return this;
+    }
+
+    /**
+     * Read a string.
+     *
+     * @return the value
+     */
+    public String readString() throws IOException {
+        int len = in.readInt();
+        if (len == -1) {
+            return null;
+        }
+        StringBuilder buff = new StringBuilder(len);
+        for (int i = 0; i < len; i++) {
+            buff.append(in.readChar());
+        }
+        String s = buff.toString();
+        s = StringUtils.cache(s);
+        return s;
+    }
+
+    /**
+     * Write a byte array.
+     *
+     * @param data the value
+     * @return itself
+     */
+    public Transfer writeBytes(byte[] data) throws IOException {
+        if (data == null) {
+            writeInt(-1);
+        } else {
+            writeInt(data.length);
+            out.write(data);
+        }
+        return this;
+    }
+
+    /**
+     * Write a number of bytes.
+     *
+     * @param buff the value
+     * @param off the offset
+     * @param len the length
+     * @return itself
+     */
+    public Transfer writeBytes(byte[] buff, int off, int len) throws IOException {
+        out.write(buff, off, len);
+        return this;
+    }
+
+    /**
+     * Read a byte array.
+     *
+     * @return the value
+     */
+    public byte[] readBytes() throws IOException {
+        int len = readInt();
+        if (len == -1) {
+            return null;
+        }
+        byte[] b = Utils.newBytes(len);
+        in.readFully(b);
+        return b;
+    }
+
+    /**
+     * Read a number of bytes.
+     *
+     * @param buff the target buffer
+     * @param off the offset
+     * @param len the number of bytes to read
+     */
+    public void readBytes(byte[] buff, int off, int len) throws IOException {
+        in.readFully(buff, off, len);
+    }
+
+    /**
+     * Close the transfer object and the socket.
+     */
+    public synchronized void close() {
+        if (socket != null) {
+            try {
+                if (out != null) {
+                    out.flush();
+                }
+                if (socket != null) {
+                    socket.close();
+                }
+            } catch (IOException e) {
+                DbException.traceThrowable(e);
+            } finally {
+                socket = null;
+            }
+        }
+    }
+
+    /**
+     * Write a value.
+     *
+     * @param v the value
+     */
+    public void writeValue(Value v) throws IOException {
+        int type = v.getType();
+        writeInt(type);
+        switch (type) {
+        case Value.NULL:
+            break;
+        case Value.BYTES:
+        case Value.JAVA_OBJECT:
+            writeBytes(v.getBytesNoCopy());
+            break;
+        case Value.UUID: {
+            ValueUuid uuid = (ValueUuid) v;
+            writeLong(uuid.getHigh());
+            writeLong(uuid.getLow());
+            break;
+        }
+        case Value.BOOLEAN:
+            writeBoolean(v.getBoolean());
+            break;
+        case Value.BYTE:
+            writeByte(v.getByte());
+            break;
+        case Value.TIME:
+            if (version >= Constants.TCP_PROTOCOL_VERSION_9) {
+                writeLong(((ValueTime) v).getNanos());
+            } else {
+                writeLong(DateTimeUtils.getTimeLocalWithoutDst(v.getTime()));
+            }
+            break;
+        case Value.DATE:
+            if (version >= Constants.TCP_PROTOCOL_VERSION_9) {
+                writeLong(((ValueDate) v).getDateValue());
+            } else {
+                writeLong(DateTimeUtils.getTimeLocalWithoutDst(v.getDate()));
+            }
+            break;
+        case Value.TIMESTAMP: {
+            if (version >= Constants.TCP_PROTOCOL_VERSION_9) {
+                ValueTimestamp ts = (ValueTimestamp) v;
+                writeLong(ts.getDateValue());
+                writeLong(ts.getTimeNanos());
+            } else {
+                Timestamp ts = v.getTimestamp();
+                writeLong(DateTimeUtils.getTimeLocalWithoutDst(ts));
+                writeInt(ts.getNanos() % 1_000_000);
+            }
+            break;
+        }
+        case Value.TIMESTAMP_TZ: {
+            ValueTimestampTimeZone ts = (ValueTimestampTimeZone) v;
+            writeLong(ts.getDateValue());
+            writeLong(ts.getTimeNanos());
+            writeInt(ts.getTimeZoneOffsetMins());
+            break;
+        }
+        case Value.DECIMAL:
+            writeString(v.getString());
+            break;
+        case Value.DOUBLE:
+            writeDouble(v.getDouble());
+            break;
+        case Value.FLOAT:
+            writeFloat(v.getFloat());
+            break;
+        case Value.INT:
+            writeInt(v.getInt());
+            break;
+        case Value.LONG:
+            writeLong(v.getLong());
+            break;
+        case Value.SHORT:
+            writeInt(v.getShort());
+            break;
+        case Value.STRING:
+        case Value.STRING_IGNORECASE:
+        case Value.STRING_FIXED:
+            writeString(v.getString());
+            break;
+        case Value.BLOB: {
+            if (version >= Constants.TCP_PROTOCOL_VERSION_11) {
+                if (v instanceof ValueLobDb) {
+                    ValueLobDb lob = (ValueLobDb) v;
+                    if (lob.isStored()) {
+                        writeLong(-1);
+                        writeInt(lob.getTableId());
+                        writeLong(lob.getLobId());
+                        if (version >= Constants.TCP_PROTOCOL_VERSION_12) {
+                            writeBytes(calculateLobMac(lob.getLobId()));
+                        }
+                        writeLong(lob.getPrecision());
+                        break;
+                    }
+                }
+            }
+            long length = v.getPrecision();
+            if (length < 0) {
+                throw DbException.get(
+                        ErrorCode.CONNECTION_BROKEN_1, "length=" + length);
+            }
+            writeLong(length);
+            long written = IOUtils.copyAndCloseInput(v.getInputStream(), out);
+            if (written != length) {
+                throw DbException.get(
+                        ErrorCode.CONNECTION_BROKEN_1, "length:" + length + " written:" + written);
+            }
+            writeInt(LOB_MAGIC);
+            break;
+        }
+        case Value.CLOB: {
+            if (version >= Constants.TCP_PROTOCOL_VERSION_11) {
+                if (v instanceof ValueLobDb) {
+                    ValueLobDb lob = (ValueLobDb) v;
+                    if (lob.isStored()) {
+                        writeLong(-1);
+                        writeInt(lob.getTableId());
+                        writeLong(lob.getLobId());
+                        if (version >= Constants.TCP_PROTOCOL_VERSION_12) {
+                            writeBytes(calculateLobMac(lob.getLobId()));
+                        }
+                        writeLong(lob.getPrecision());
+                        break;
+                    }
+                }
+            }
+            long length = v.getPrecision();
+            if (length < 0) {
+                throw DbException.get(
+                        ErrorCode.CONNECTION_BROKEN_1, "length=" + length);
+            }
+            writeLong(length);
+            Reader reader = v.getReader();
+            Data.copyString(reader, out);
+            writeInt(LOB_MAGIC);
+            break;
+        }
+        case Value.ARRAY: {
+            ValueArray va = (ValueArray) v;
+            Value[] list = va.getList();
+            int len = list.length;
+            Class<?> componentType = va.getComponentType();
+            if (componentType == Object.class) {
+                writeInt(len);
+            } else {
+                writeInt(-(len + 1));
+                writeString(componentType.getName());
+            }
+            for (Value value : list) {
+                writeValue(value);
+            }
+            break;
+        }
+        case Value.ENUM: {
+            writeInt(v.getInt());
+            writeString(v.getString());
+            break;
+        }
+        case Value.RESULT_SET: {
+            try {
+                ResultSet rs = ((ValueResultSet) v).getResultSet();
+                rs.beforeFirst();
+                ResultSetMetaData meta = rs.getMetaData();
+                int columnCount = meta.getColumnCount();
+                writeInt(columnCount);
+                for (int i = 0; i < columnCount; i++) {
+                    writeString(meta.getColumnName(i + 1));
+                    writeInt(meta.getColumnType(i + 1));
+                    writeInt(meta.getPrecision(i + 1));
+                    writeInt(meta.getScale(i + 1));
+                }
+                while (rs.next()) {
+                    writeBoolean(true);
+                    for (int i = 0; i < columnCount; i++) {
+                        int t = DataType.getValueTypeFromResultSet(meta, i + 1);
+                        Value val = DataType.readValue(session, rs, i + 1, t);
+                        writeValue(val);
+                    }
+                }
+                writeBoolean(false);
+                rs.beforeFirst();
+            } catch (SQLException e) {
+                throw DbException.convertToIOException(e);
+            }
+            break;
+        }
+        case Value.GEOMETRY:
+            if (version >= Constants.TCP_PROTOCOL_VERSION_14) {
+                writeBytes(v.getBytesNoCopy());
+            } else {
+                writeString(v.getString());
+            }
+            break;
+        default:
+            if (JdbcUtils.customDataTypesHandler != null) {
+                writeBytes(v.getBytesNoCopy());
+                break;
+            }
+            throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, "type=" + type);
+        }
+    }
+
+    /**
+     * Read a value.
+ * + * @return the value + */ + public Value readValue() throws IOException { + int type = readInt(); + switch (type) { + case Value.NULL: + return ValueNull.INSTANCE; + case Value.BYTES: + return ValueBytes.getNoCopy(readBytes()); + case Value.UUID: + return ValueUuid.get(readLong(), readLong()); + case Value.JAVA_OBJECT: + return ValueJavaObject.getNoCopy(null, readBytes(), session.getDataHandler()); + case Value.BOOLEAN: + return ValueBoolean.get(readBoolean()); + case Value.BYTE: + return ValueByte.get(readByte()); + case Value.DATE: + if (version >= Constants.TCP_PROTOCOL_VERSION_9) { + return ValueDate.fromDateValue(readLong()); + } else { + return ValueDate.fromMillis(DateTimeUtils.getTimeUTCWithoutDst(readLong())); + } + case Value.TIME: + if (version >= Constants.TCP_PROTOCOL_VERSION_9) { + return ValueTime.fromNanos(readLong()); + } else { + return ValueTime.fromMillis(DateTimeUtils.getTimeUTCWithoutDst(readLong())); + } + case Value.TIMESTAMP: { + if (version >= Constants.TCP_PROTOCOL_VERSION_9) { + return ValueTimestamp.fromDateValueAndNanos( + readLong(), readLong()); + } else { + return ValueTimestamp.fromMillisNanos( + DateTimeUtils.getTimeUTCWithoutDst(readLong()), + readInt() % 1_000_000); + } + } + case Value.TIMESTAMP_TZ: { + return ValueTimestampTimeZone.fromDateValueAndNanos(readLong(), + readLong(), (short) readInt()); + } + case Value.DECIMAL: + return ValueDecimal.get(new BigDecimal(readString())); + case Value.DOUBLE: + return ValueDouble.get(readDouble()); + case Value.FLOAT: + return ValueFloat.get(readFloat()); + case Value.ENUM: { + final int ordinal = readInt(); + final String label = readString(); + return ValueEnumBase.get(label, ordinal); + } + case Value.INT: + return ValueInt.get(readInt()); + case Value.LONG: + return ValueLong.get(readLong()); + case Value.SHORT: + return ValueShort.get((short) readInt()); + case Value.STRING: + return ValueString.get(readString()); + case Value.STRING_IGNORECASE: + return 
ValueStringIgnoreCase.get(readString()); + case Value.STRING_FIXED: + return ValueStringFixed.get(readString(), ValueStringFixed.PRECISION_DO_NOT_TRIM, null); + case Value.BLOB: { + long length = readLong(); + if (version >= Constants.TCP_PROTOCOL_VERSION_11) { + if (length == -1) { + int tableId = readInt(); + long id = readLong(); + byte[] hmac; + if (version >= Constants.TCP_PROTOCOL_VERSION_12) { + hmac = readBytes(); + } else { + hmac = null; + } + long precision = readLong(); + return ValueLobDb.create( + Value.BLOB, session.getDataHandler(), tableId, id, hmac, precision); + } + } + Value v = session.getDataHandler().getLobStorage().createBlob(in, length); + int magic = readInt(); + if (magic != LOB_MAGIC) { + throw DbException.get( + ErrorCode.CONNECTION_BROKEN_1, "magic=" + magic); + } + return v; + } + case Value.CLOB: { + long length = readLong(); + if (version >= Constants.TCP_PROTOCOL_VERSION_11) { + if (length == -1) { + int tableId = readInt(); + long id = readLong(); + byte[] hmac; + if (version >= Constants.TCP_PROTOCOL_VERSION_12) { + hmac = readBytes(); + } else { + hmac = null; + } + long precision = readLong(); + return ValueLobDb.create( + Value.CLOB, session.getDataHandler(), tableId, id, hmac, precision); + } + if (length < 0) { + throw DbException.get( + ErrorCode.CONNECTION_BROKEN_1, "length="+ length); + } + } + Value v = session.getDataHandler().getLobStorage(). 
+ createClob(new DataReader(in), length); + int magic = readInt(); + if (magic != LOB_MAGIC) { + throw DbException.get( + ErrorCode.CONNECTION_BROKEN_1, "magic=" + magic); + } + return v; + } + case Value.ARRAY: { + int len = readInt(); + Class<?> componentType = Object.class; + if (len < 0) { + len = -(len + 1); + componentType = JdbcUtils.loadUserClass(readString()); + } + Value[] list = new Value[len]; + for (int i = 0; i < len; i++) { + list[i] = readValue(); + } + return ValueArray.get(componentType, list); + } + case Value.RESULT_SET: { + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + int columns = readInt(); + for (int i = 0; i < columns; i++) { + rs.addColumn(readString(), readInt(), readInt(), readInt()); + } + while (readBoolean()) { + Object[] o = new Object[columns]; + for (int i = 0; i < columns; i++) { + o[i] = readValue().getObject(); + } + rs.addRow(o); + } + return ValueResultSet.get(rs); + } + case Value.GEOMETRY: + if (version >= Constants.TCP_PROTOCOL_VERSION_14) { + return ValueGeometry.get(readBytes()); + } + return ValueGeometry.get(readString()); + default: + if (JdbcUtils.customDataTypesHandler != null) { + return JdbcUtils.customDataTypesHandler.convert( + ValueBytes.getNoCopy(readBytes()), type); + } + throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, "type=" + type); + } + } + + /** + * Get the socket. + * + * @return the socket + */ + public Socket getSocket() { + return socket; + } + + /** + * Set the session. + * + * @param session the session + */ + public void setSession(SessionInterface session) { + this.session = session; + } + + /** + * Enable or disable SSL. + * + * @param ssl the new value + */ + public void setSSL(boolean ssl) { + this.ssl = ssl; + } + + /** + * Open a new connection to the same address and port as this one.
+ * + * @return the new transfer object + */ + public Transfer openNewConnection() throws IOException { + InetAddress address = socket.getInetAddress(); + int port = socket.getPort(); + Socket s2 = NetUtils.createSocket(address, port, ssl); + Transfer trans = new Transfer(null, s2); + trans.setSSL(ssl); + return trans; + } + + public void setVersion(int version) { + this.version = version; + } + + public synchronized boolean isClosed() { + return socket == null || socket.isClosed(); + } + + /** + * Verify the HMAC. + * + * @param hmac the message authentication code + * @param lobId the lobId + * @throws DbException if the HMAC does not match + */ + public void verifyLobMac(byte[] hmac, long lobId) { + byte[] result = calculateLobMac(lobId); + if (!Utils.compareSecure(hmac, result)) { + throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, + "Invalid lob hmac; possibly the connection was re-opened internally"); + } + } + + private byte[] calculateLobMac(long lobId) { + if (lobMacSalt == null) { + lobMacSalt = MathUtils.secureRandomBytes(LOB_MAC_SALT_LENGTH); + } + byte[] data = new byte[8]; + Bits.writeLong(data, 0, lobId); + return SHA256.getHashWithSalt(data, lobMacSalt); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/Value.java b/modules/h2/src/main/java/org/h2/value/Value.java new file mode 100644 index 0000000000000..f9962459fe438 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/Value.java @@ -0,0 +1,1382 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.io.Reader; +import java.io.StringReader; +import java.lang.ref.SoftReference; +import java.math.BigDecimal; +import java.nio.charset.StandardCharsets; +import java.sql.Date; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; +import org.h2.api.ErrorCode; +import org.h2.engine.Mode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.DataHandler; +import org.h2.tools.SimpleResultSet; +import org.h2.util.Bits; +import org.h2.util.DateTimeUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; + +/** + * This is the base class for all value classes. + * It provides conversion and comparison methods. + * + * @author Thomas Mueller + * @author Noel Grandin + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public abstract class Value { + + /** + * The data type is unknown at this time. + */ + public static final int UNKNOWN = -1; + + /** + * The value type for NULL. + */ + public static final int NULL = 0; + + /** + * The value type for BOOLEAN values. + */ + public static final int BOOLEAN = 1; + + /** + * The value type for BYTE values. + */ + public static final int BYTE = 2; + + /** + * The value type for SHORT values. + */ + public static final int SHORT = 3; + + /** + * The value type for INT values. + */ + public static final int INT = 4; + + /** + * The value type for LONG values. + */ + public static final int LONG = 5; + + /** + * The value type for DECIMAL values. + */ + public static final int DECIMAL = 6; + + /** + * The value type for DOUBLE values. + */ + public static final int DOUBLE = 7; + + /** + * The value type for FLOAT values. 
+ */ + public static final int FLOAT = 8; + + /** + * The value type for TIME values. + */ + public static final int TIME = 9; + + /** + * The value type for DATE values. + */ + public static final int DATE = 10; + + /** + * The value type for TIMESTAMP values. + */ + public static final int TIMESTAMP = 11; + + /** + * The value type for BYTES values. + */ + public static final int BYTES = 12; + + /** + * The value type for STRING values. + */ + public static final int STRING = 13; + + /** + * The value type for case insensitive STRING values. + */ + public static final int STRING_IGNORECASE = 14; + + /** + * The value type for BLOB values. + */ + public static final int BLOB = 15; + + /** + * The value type for CLOB values. + */ + public static final int CLOB = 16; + + /** + * The value type for ARRAY values. + */ + public static final int ARRAY = 17; + + /** + * The value type for RESULT_SET values. + */ + public static final int RESULT_SET = 18; + /** + * The value type for JAVA_OBJECT values. + */ + public static final int JAVA_OBJECT = 19; + + /** + * The value type for UUID values. + */ + public static final int UUID = 20; + + /** + * The value type for string values with a fixed size. + */ + public static final int STRING_FIXED = 21; + + /** + * The value type for GEOMETRY values. + */ + public static final int GEOMETRY = 22; + + /** + * 23 was a short-lived experiment "TIMESTAMP UTC" which has been removed. + */ + + /** + * The value type for TIMESTAMP WITH TIME ZONE values. + */ + public static final int TIMESTAMP_TZ = 24; + + /** + * The value type for ENUM values. + */ + public static final int ENUM = 25; + + /** + * The number of value types.
+ */ + public static final int TYPE_COUNT = ENUM; + + private static SoftReference<Value[]> softCache = + new SoftReference<>(null); + private static final BigDecimal MAX_LONG_DECIMAL = + BigDecimal.valueOf(Long.MAX_VALUE); + private static final BigDecimal MIN_LONG_DECIMAL = + BigDecimal.valueOf(Long.MIN_VALUE); + + /** + * Check the range of the parameters. + * + * @param zeroBasedOffset the offset (0 meaning no offset) + * @param length the length of the target + * @param dataSize the length of the source + */ + static void rangeCheck(long zeroBasedOffset, long length, long dataSize) { + if ((zeroBasedOffset | length) < 0 || length > dataSize - zeroBasedOffset) { + if (zeroBasedOffset < 0 || zeroBasedOffset > dataSize) { + throw DbException.getInvalidValueException("offset", zeroBasedOffset + 1); + } + throw DbException.getInvalidValueException("length", length); + } + } + + /** + * Get the SQL expression for this value. + * + * @return the SQL expression + */ + public abstract String getSQL(); + + /** + * Get the value type. + * + * @return the type + */ + public abstract int getType(); + + /** + * Get the precision. + * + * @return the precision + */ + public abstract long getPrecision(); + + /** + * Get the display size in characters. + * + * @return the display size + */ + public abstract int getDisplaySize(); + + /** + * Get the memory used by this object. + * + * @return the memory used in bytes + */ + public int getMemory() { + return DataType.getDataType(getType()).memory; + } + + /** + * Get the value as a string. + * + * @return the string + */ + public abstract String getString(); + + /** + * Get the value as an object. + * + * @return the object + */ + public abstract Object getObject(); + + /** + * Set the value as a parameter in a prepared statement.
+ * + * @param prep the prepared statement + * @param parameterIndex the parameter index + */ + public abstract void set(PreparedStatement prep, int parameterIndex) + throws SQLException; + + /** + * Compare the value with another value of the same type. + * + * @param v the other value + * @param mode the compare mode + * @return 0 if both values are equal, -1 if this value is smaller, and + * 1 otherwise + */ + protected abstract int compareSecure(Value v, CompareMode mode); + + @Override + public abstract int hashCode(); + + /** + * Check if the two values have the same hash code. No data conversion is + * made; this method returns false if the other object is not of the same + * class. For some values, compareTo may return 0 even if equals returns + * false. Example: ValueDecimal 0.0 and 0.00. + * + * @param other the other value + * @return true if they are equal + */ + @Override + public abstract boolean equals(Object other); + + /** + * Get the order of this value type. + * + * @param type the value type + * @return the order number + */ + static int getOrder(int type) { + switch (type) { + case UNKNOWN: + return 1_000; + case NULL: + return 2_000; + case STRING: + return 10_000; + case CLOB: + return 11_000; + case STRING_FIXED: + return 12_000; + case STRING_IGNORECASE: + return 13_000; + case BOOLEAN: + return 20_000; + case BYTE: + return 21_000; + case SHORT: + return 22_000; + case INT: + return 23_000; + case LONG: + return 24_000; + case DECIMAL: + return 25_000; + case FLOAT: + return 26_000; + case DOUBLE: + return 27_000; + case TIME: + return 30_000; + case DATE: + return 31_000; + case TIMESTAMP: + return 32_000; + case TIMESTAMP_TZ: + return 34_000; + case BYTES: + return 40_000; + case BLOB: + return 41_000; + case JAVA_OBJECT: + return 42_000; + case UUID: + return 43_000; + case GEOMETRY: + return 44_000; + case ARRAY: + return 50_000; + case RESULT_SET: + return 51_000; + case ENUM: + return 52_000; + default: + if
(JdbcUtils.customDataTypesHandler != null) { + return JdbcUtils.customDataTypesHandler.getDataTypeOrder(type); + } + throw DbException.throwInternalError("type:"+type); + } + } + + /** + * Get the higher value order type of two value types. If values need to be + * converted to match the other operand's value type, the value with the + * lower order is converted to the value with the higher order. + * + * @param t1 the first value type + * @param t2 the second value type + * @return the higher value type of the two + */ + public static int getHigherOrder(int t1, int t2) { + if (t1 == Value.UNKNOWN || t2 == Value.UNKNOWN) { + if (t1 == t2) { + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, "?, ?"); + } else if (t1 == Value.NULL) { + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, "NULL, ?"); + } else if (t2 == Value.NULL) { + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, "?, NULL"); + } + } + if (t1 == t2) { + return t1; + } + int o1 = getOrder(t1); + int o2 = getOrder(t2); + return o1 > o2 ? t1 : t2; + } + + /** + * Check if a value is in the cache that is equal to this value. If yes, + * this value should be used to save memory. If the value is not in the + * cache yet, it is added.
+ * + * @param v the value to look for + * @return the value in the cache or the value passed + */ + static Value cache(Value v) { + if (SysProperties.OBJECT_CACHE) { + int hash = v.hashCode(); + if (softCache == null) { + softCache = new SoftReference<>(null); + } + Value[] cache = softCache.get(); + if (cache == null) { + cache = new Value[SysProperties.OBJECT_CACHE_SIZE]; + softCache = new SoftReference<>(cache); + } + int index = hash & (SysProperties.OBJECT_CACHE_SIZE - 1); + Value cached = cache[index]; + if (cached != null) { + if (cached.getType() == v.getType() && v.equals(cached)) { + // cacheHit++; + return cached; + } + } + // cacheMiss++; + // cache[cacheCleaner] = null; + // cacheCleaner = (cacheCleaner + 1) & + // (Constants.OBJECT_CACHE_SIZE - 1); + cache[index] = v; + } + return v; + } + + /** + * Clear the value cache. Used for testing. + */ + public static void clearCache() { + softCache = null; + } + + public boolean getBoolean() { + return ((ValueBoolean) convertTo(Value.BOOLEAN)).getBoolean(); + } + + public Date getDate() { + return ((ValueDate) convertTo(Value.DATE)).getDate(); + } + + public Time getTime() { + return ((ValueTime) convertTo(Value.TIME)).getTime(); + } + + public Timestamp getTimestamp() { + return ((ValueTimestamp) convertTo(Value.TIMESTAMP)).getTimestamp(); + } + + public byte[] getBytes() { + return ((ValueBytes) convertTo(Value.BYTES)).getBytes(); + } + + public byte[] getBytesNoCopy() { + return ((ValueBytes) convertTo(Value.BYTES)).getBytesNoCopy(); + } + + public byte getByte() { + return ((ValueByte) convertTo(Value.BYTE)).getByte(); + } + + public short getShort() { + return ((ValueShort) convertTo(Value.SHORT)).getShort(); + } + + public BigDecimal getBigDecimal() { + return ((ValueDecimal) convertTo(Value.DECIMAL)).getBigDecimal(); + } + + public double getDouble() { + return ((ValueDouble) convertTo(Value.DOUBLE)).getDouble(); + } + + public float getFloat() { + return ((ValueFloat) 
convertTo(Value.FLOAT)).getFloat(); + } + + public int getInt() { + return ((ValueInt) convertTo(Value.INT)).getInt(); + } + + public long getLong() { + return ((ValueLong) convertTo(Value.LONG)).getLong(); + } + + public InputStream getInputStream() { + return new ByteArrayInputStream(getBytesNoCopy()); + } + + /** + * Get the input stream. + * + * @param oneBasedOffset the offset (1 means no offset) + * @param length the requested length + * @return the new input stream + */ + public InputStream getInputStream(long oneBasedOffset, long length) { + byte[] bytes = getBytesNoCopy(); + long zeroBasedOffset = oneBasedOffset - 1; + rangeCheck(zeroBasedOffset, length, bytes.length); + return new ByteArrayInputStream(bytes, (int) zeroBasedOffset, (int) length); + } + + public Reader getReader() { + return new StringReader(getString()); + } + + /** + * Get the reader. + * + * @param oneBasedOffset the offset (1 means no offset) + * @param length the requested length + * @return the new reader + */ + public Reader getReader(long oneBasedOffset, long length) { + String string = getString(); + long zeroBasedOffset = oneBasedOffset - 1; + rangeCheck(zeroBasedOffset, length, string.length()); + int offset = (int) zeroBasedOffset; + return new StringReader(string.substring(offset, offset + (int) length)); + } + + /** + * Add a value and return the result. + * + * @param v the value to add + * @return the result + */ + public Value add(@SuppressWarnings("unused") Value v) { + throw throwUnsupportedExceptionForType("+"); + } + + public int getSignum() { + throw throwUnsupportedExceptionForType("SIGNUM"); + } + + /** + * Return -value if this value supports arithmetic operations. + * + * @return the negative + */ + public Value negate() { + throw throwUnsupportedExceptionForType("NEG"); + } + + /** + * Subtract a value and return the result.
+ * + * @param v the value to subtract + * @return the result + */ + public Value subtract(@SuppressWarnings("unused") Value v) { + throw throwUnsupportedExceptionForType("-"); + } + + /** + * Divide by a value and return the result. + * + * @param v the value to divide by + * @return the result + */ + public Value divide(@SuppressWarnings("unused") Value v) { + throw throwUnsupportedExceptionForType("/"); + } + + /** + * Multiply with a value and return the result. + * + * @param v the value to multiply with + * @return the result + */ + public Value multiply(@SuppressWarnings("unused") Value v) { + throw throwUnsupportedExceptionForType("*"); + } + + /** + * Take the modulus with a value and return the result. + * + * @param v the value to take the modulus with + * @return the result + */ + public Value modulus(@SuppressWarnings("unused") Value v) { + throw throwUnsupportedExceptionForType("%"); + } + + /** + * Convert a value to the specified type. + * + * @param targetType the type of the returned value + * @return the converted value + */ + public Value convertTo(int targetType) { + // Use -1 to indicate "default behaviour" where value conversion should not + // depend on any datatype precision. + return convertTo(targetType, -1, null); + } + + /** + * Convert the value to an ENUM value. + * @param enumerators allowed values for the ENUM to which the value is converted + * @return value represented as ENUM + */ + public Value convertToEnum(String[] enumerators) { + // Use -1 to indicate "default behaviour" where value conversion should not + // depend on any datatype precision. + return convertTo(ENUM, -1, null, null, enumerators); + } + + /** + * Convert a value to the specified type. + * + * @param targetType the type of the returned value + * @param precision the precision of the column to convert this value to.
+ * The special constant -1 is used to indicate that + * the precision plays no role when converting the value + * @param mode the mode + * @return the converted value + */ + public final Value convertTo(int targetType, int precision, Mode mode) { + return convertTo(targetType, precision, mode, null, null); + } + + /** + * Convert a value to the specified type. + * + * @param targetType the type of the returned value + * @param precision the precision of the column to convert this value to. + * The special constant -1 is used to indicate that + * the precision plays no role when converting the value + * @param mode the conversion mode + * @param column the column (if any), used to improve the error message if conversion fails + * @param enumerators the ENUM datatype enumerators (if any), + * for dealing with ENUM conversions + * @return the converted value + */ + public Value convertTo(int targetType, int precision, Mode mode, Object column, String[] enumerators) { + // converting NULL is done in ValueNull + // converting BLOB to CLOB and vice versa is done in ValueLob + if (getType() == targetType) { + return this; + } + try { + // decimal conversion + switch (targetType) { + case BOOLEAN: { + switch (getType()) { + case BYTE: + case SHORT: + case INT: + case LONG: + case DECIMAL: + case DOUBLE: + case FLOAT: + return ValueBoolean.get(getSignum() != 0); + case TIME: + case DATE: + case TIMESTAMP: + case TIMESTAMP_TZ: + case BYTES: + case JAVA_OBJECT: + case UUID: + case ENUM: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case BYTE: { + switch (getType()) { + case BOOLEAN: + return ValueByte.get(getBoolean() ?
(byte) 1 : (byte) 0); + case SHORT: + case ENUM: + case INT: + return ValueByte.get(convertToByte(getInt(), column)); + case LONG: + return ValueByte.get(convertToByte(getLong(), column)); + case DECIMAL: + return ValueByte.get(convertToByte(convertToLong(getBigDecimal(), column), column)); + case DOUBLE: + return ValueByte.get(convertToByte(convertToLong(getDouble(), column), column)); + case FLOAT: + return ValueByte.get(convertToByte(convertToLong(getFloat(), column), column)); + case BYTES: + return ValueByte.get((byte) Integer.parseInt(getString(), 16)); + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case SHORT: { + switch (getType()) { + case BOOLEAN: + return ValueShort.get(getBoolean() ? (short) 1 : (short) 0); + case BYTE: + return ValueShort.get(getByte()); + case ENUM: + case INT: + return ValueShort.get(convertToShort(getInt(), column)); + case LONG: + return ValueShort.get(convertToShort(getLong(), column)); + case DECIMAL: + return ValueShort.get(convertToShort(convertToLong(getBigDecimal(), column), column)); + case DOUBLE: + return ValueShort.get(convertToShort(convertToLong(getDouble(), column), column)); + case FLOAT: + return ValueShort.get(convertToShort(convertToLong(getFloat(), column), column)); + case BYTES: + return ValueShort.get((short) Integer.parseInt(getString(), 16)); + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case INT: { + switch (getType()) { + case BOOLEAN: + return ValueInt.get(getBoolean() ? 
1 : 0); + case BYTE: + case ENUM: + case SHORT: + return ValueInt.get(getInt()); + case LONG: + return ValueInt.get(convertToInt(getLong(), column)); + case DECIMAL: + return ValueInt.get(convertToInt(convertToLong(getBigDecimal(), column), column)); + case DOUBLE: + return ValueInt.get(convertToInt(convertToLong(getDouble(), column), column)); + case FLOAT: + return ValueInt.get(convertToInt(convertToLong(getFloat(), column), column)); + case BYTES: + return ValueInt.get((int) Long.parseLong(getString(), 16)); + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case LONG: { + switch (getType()) { + case BOOLEAN: + return ValueLong.get(getBoolean() ? 1 : 0); + case BYTE: + case SHORT: + case ENUM: + case INT: + return ValueLong.get(getInt()); + case DECIMAL: + return ValueLong.get(convertToLong(getBigDecimal(), column)); + case DOUBLE: + return ValueLong.get(convertToLong(getDouble(), column)); + case FLOAT: + return ValueLong.get(convertToLong(getFloat(), column)); + case BYTES: { + // parseLong doesn't work for ffffffffffffffff + byte[] d = getBytes(); + if (d.length == 8) { + return ValueLong.get(Bits.readLong(d, 0)); + } + return ValueLong.get(Long.parseLong(getString(), 16)); + } + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case DECIMAL: { + switch (getType()) { + case BOOLEAN: + return ValueDecimal.get(BigDecimal.valueOf(getBoolean() ? 
1 : 0)); + case BYTE: + case SHORT: + case ENUM: + case INT: + return ValueDecimal.get(BigDecimal.valueOf(getInt())); + case LONG: + return ValueDecimal.get(BigDecimal.valueOf(getLong())); + case DOUBLE: { + double d = getDouble(); + if (Double.isInfinite(d) || Double.isNaN(d)) { + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, "" + d); + } + return ValueDecimal.get(BigDecimal.valueOf(d)); + } + case FLOAT: { + float f = getFloat(); + if (Float.isInfinite(f) || Float.isNaN(f)) { + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, "" + f); + } + // better rounding behavior than BigDecimal.valueOf(f) + return ValueDecimal.get(new BigDecimal(Float.toString(f))); + } + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case DOUBLE: { + switch (getType()) { + case BOOLEAN: + return ValueDouble.get(getBoolean() ? 1 : 0); + case BYTE: + case SHORT: + case INT: + return ValueDouble.get(getInt()); + case LONG: + return ValueDouble.get(getLong()); + case DECIMAL: + return ValueDouble.get(getBigDecimal().doubleValue()); + case FLOAT: + return ValueDouble.get(getFloat()); + case ENUM: + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case FLOAT: { + switch (getType()) { + case BOOLEAN: + return ValueFloat.get(getBoolean() ? 
1 : 0); + case BYTE: + case SHORT: + case INT: + return ValueFloat.get(getInt()); + case LONG: + return ValueFloat.get(getLong()); + case DECIMAL: + return ValueFloat.get(getBigDecimal().floatValue()); + case DOUBLE: + return ValueFloat.get((float) getDouble()); + case ENUM: + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case DATE: { + switch (getType()) { + case TIME: + // because the time has set the date to 1970-01-01, + // this will be the result + return ValueDate.fromDateValue(DateTimeUtils.EPOCH_DATE_VALUE); + case TIMESTAMP: + return ValueDate.fromDateValue( + ((ValueTimestamp) this).getDateValue()); + case TIMESTAMP_TZ: { + ValueTimestampTimeZone ts = (ValueTimestampTimeZone) this; + long dateValue = ts.getDateValue(), timeNanos = ts.getTimeNanos(); + long millis = DateTimeUtils.getMillis(dateValue, timeNanos, ts.getTimeZoneOffsetMins()); + return ValueDate.fromMillis(millis); + } + case ENUM: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case TIME: { + switch (getType()) { + case DATE: + // need to normalize the year, month and day because a date + // has the time set to 0, the result will be 0 + return ValueTime.fromNanos(0); + case TIMESTAMP: + return ValueTime.fromNanos( + ((ValueTimestamp) this).getTimeNanos()); + case TIMESTAMP_TZ: { + ValueTimestampTimeZone ts = (ValueTimestampTimeZone) this; + long dateValue = ts.getDateValue(), timeNanos = ts.getTimeNanos(); + long millis = DateTimeUtils.getMillis(dateValue, timeNanos, ts.getTimeZoneOffsetMins()); + return ValueTime.fromNanos(DateTimeUtils.nanosFromDate(millis) + timeNanos % 1_000_000); + } + case ENUM: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case TIMESTAMP: { + switch (getType()) { + case TIME: + return DateTimeUtils.normalizeTimestamp( + 0, ((ValueTime) this).getNanos()); + case DATE: + return 
ValueTimestamp.fromDateValueAndNanos( + ((ValueDate) this).getDateValue(), 0); + case TIMESTAMP_TZ: { + ValueTimestampTimeZone ts = (ValueTimestampTimeZone) this; + long dateValue = ts.getDateValue(), timeNanos = ts.getTimeNanos(); + long millis = DateTimeUtils.getMillis(dateValue, timeNanos, ts.getTimeZoneOffsetMins()); + return ValueTimestamp.fromMillisNanos(millis, (int) (timeNanos % 1_000_000)); + } + case ENUM: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case TIMESTAMP_TZ: { + switch (getType()) { + case TIME: { + ValueTimestamp ts = DateTimeUtils.normalizeTimestamp(0, ((ValueTime) this).getNanos()); + return DateTimeUtils.timestampTimeZoneFromLocalDateValueAndNanos( + ts.getDateValue(), ts.getTimeNanos()); + } + case DATE: + return DateTimeUtils.timestampTimeZoneFromLocalDateValueAndNanos( + ((ValueDate) this).getDateValue(), 0); + case TIMESTAMP: { + ValueTimestamp ts = (ValueTimestamp) this; + return DateTimeUtils.timestampTimeZoneFromLocalDateValueAndNanos( + ts.getDateValue(), ts.getTimeNanos()); + } + case ENUM: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case BYTES: { + switch (getType()) { + case JAVA_OBJECT: + case BLOB: + return ValueBytes.getNoCopy(getBytesNoCopy()); + case UUID: + case GEOMETRY: + return ValueBytes.getNoCopy(getBytes()); + case BYTE: + return ValueBytes.getNoCopy(new byte[]{getByte()}); + case SHORT: { + int x = getShort(); + return ValueBytes.getNoCopy(new byte[]{ + (byte) (x >> 8), + (byte) x + }); + } + case INT: { + byte[] b = new byte[4]; + Bits.writeInt(b, 0, getInt()); + return ValueBytes.getNoCopy(b); + } + case LONG: { + byte[] b = new byte[8]; + Bits.writeLong(b, 0, getLong()); + return ValueBytes.getNoCopy(b); + } + case ENUM: + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case JAVA_OBJECT: { + switch (getType()) { + case BYTES: + case BLOB: + return 
ValueJavaObject.getNoCopy( + null, getBytesNoCopy(), getDataHandler()); + case ENUM: + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case ENUM: { + switch (getType()) { + case BYTE: + case SHORT: + case INT: + case LONG: + case DECIMAL: + return ValueEnum.get(enumerators, getInt()); + case STRING: + case STRING_IGNORECASE: + case STRING_FIXED: + return ValueEnum.get(enumerators, getString()); + default: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + } + case BLOB: { + switch (getType()) { + case BYTES: + return ValueLobDb.createSmallLob( + Value.BLOB, getBytesNoCopy()); + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case UUID: { + switch (getType()) { + case BYTES: + return ValueUuid.get(getBytesNoCopy()); + case JAVA_OBJECT: + Object object = JdbcUtils.deserialize(getBytesNoCopy(), + getDataHandler()); + if (object instanceof java.util.UUID) { + return ValueUuid.get((java.util.UUID) object); + } + throw DbException.get(ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + case GEOMETRY: { + switch (getType()) { + case BYTES: + return ValueGeometry.get(getBytesNoCopy()); + case JAVA_OBJECT: + Object object = JdbcUtils.deserialize(getBytesNoCopy(), getDataHandler()); + if (DataType.isGeometry(object)) { + return ValueGeometry.getFromGeometry(object); + } + //$FALL-THROUGH$ + case TIMESTAMP_TZ: + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + break; + } + } + // conversion by parsing the string value + String s = getString(); + switch (targetType) { + case NULL: + return ValueNull.INSTANCE; + case BOOLEAN: { + if (s.equalsIgnoreCase("true") || + s.equalsIgnoreCase("t") || + s.equalsIgnoreCase("yes") || + s.equalsIgnoreCase("y")) { + return 
ValueBoolean.TRUE; + } else if (s.equalsIgnoreCase("false") || + s.equalsIgnoreCase("f") || + s.equalsIgnoreCase("no") || + s.equalsIgnoreCase("n")) { + return ValueBoolean.FALSE; + } else { + // convert to a number, and if it is not 0 then it is true + return ValueBoolean.get(new BigDecimal(s).signum() != 0); + } + } + case BYTE: + return ValueByte.get(Byte.parseByte(s.trim())); + case SHORT: + return ValueShort.get(Short.parseShort(s.trim())); + case INT: + return ValueInt.get(Integer.parseInt(s.trim())); + case LONG: + return ValueLong.get(Long.parseLong(s.trim())); + case DECIMAL: + return ValueDecimal.get(new BigDecimal(s.trim())); + case TIME: + return ValueTime.parse(s.trim()); + case DATE: + return ValueDate.parse(s.trim()); + case TIMESTAMP: + return ValueTimestamp.parse(s.trim(), mode); + case TIMESTAMP_TZ: + return ValueTimestampTimeZone.parse(s.trim()); + case BYTES: + return ValueBytes.getNoCopy( + StringUtils.convertHexToBytes(s.trim())); + case JAVA_OBJECT: + return ValueJavaObject.getNoCopy(null, + StringUtils.convertHexToBytes(s.trim()), getDataHandler()); + case STRING: + return ValueString.get(s); + case STRING_IGNORECASE: + return ValueStringIgnoreCase.get(s); + case STRING_FIXED: + return ValueStringFixed.get(s, precision, mode); + case DOUBLE: + return ValueDouble.get(Double.parseDouble(s.trim())); + case FLOAT: + return ValueFloat.get(Float.parseFloat(s.trim())); + case CLOB: + return ValueLobDb.createSmallLob( + CLOB, s.getBytes(StandardCharsets.UTF_8)); + case BLOB: + return ValueLobDb.createSmallLob( + BLOB, StringUtils.convertHexToBytes(s.trim())); + case ARRAY: + return ValueArray.get(new Value[]{ValueString.get(s)}); + case RESULT_SET: { + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + rs.addColumn("X", Types.VARCHAR, s.length(), 0); + rs.addRow(s); + return ValueResultSet.get(rs); + } + case UUID: + return ValueUuid.get(s); + case GEOMETRY: + return ValueGeometry.get(s); + default: + if 
(JdbcUtils.customDataTypesHandler != null) { + return JdbcUtils.customDataTypesHandler.convert(this, targetType); + } + throw DbException.throwInternalError("type=" + targetType); + } + } catch (NumberFormatException e) { + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, e, getString()); + } + } + + /** + * Compare this value against another value given that the values are of the + * same data type. + * + * @param v the other value + * @param mode the compare mode + * @return 0 if both values are equal, -1 if the other value is smaller, and + * 1 otherwise + */ + public final int compareTypeSafe(Value v, CompareMode mode) { + if (this == v) { + return 0; + } else if (this == ValueNull.INSTANCE) { + return -1; + } else if (v == ValueNull.INSTANCE) { + return 1; + } + return compareSecure(v, mode); + } + + /** + * Compare this value against another value using the specified compare + * mode. + * + * @param v the other value + * @param mode the compare mode + * @return 0 if both values are equal, -1 if the other value is smaller, and + * 1 otherwise + */ + public final int compareTo(Value v, CompareMode mode) { + if (this == v) { + return 0; + } + if (this == ValueNull.INSTANCE) { + return v == ValueNull.INSTANCE ? 0 : -1; + } else if (v == ValueNull.INSTANCE) { + return 1; + } + if (getType() == v.getType()) { + return compareSecure(v, mode); + } + int t2 = Value.getHigherOrder(getType(), v.getType()); + return convertTo(t2).compareSecure(v.convertTo(t2), mode); + } + + public int getScale() { + return 0; + } + + /** + * Convert the scale. + * + * @param onlyToSmallerScale if the scale should not be reduced + * @param targetScale the requested scale + * @return the value + */ + @SuppressWarnings("unused") + public Value convertScale(boolean onlyToSmallerScale, int targetScale) { + return this; + } + + /** + * Convert the precision to the requested value.
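The string-to-BOOLEAN branch in the conversion code above accepts the literals true/t/yes/y and false/f/no/n case-insensitively, and otherwise falls back to numeric interpretation where any non-zero number counts as TRUE. A minimal standalone sketch of that rule (plain Java; the class and method names are illustrative, not part of H2):

```java
import java.math.BigDecimal;

public class BooleanParseSketch {
    /** Mirrors the rule above: recognized literals first, then numeric signum. */
    static boolean parseBoolean(String s) {
        if (s.equalsIgnoreCase("true") || s.equalsIgnoreCase("t")
                || s.equalsIgnoreCase("yes") || s.equalsIgnoreCase("y")) {
            return true;
        }
        if (s.equalsIgnoreCase("false") || s.equalsIgnoreCase("f")
                || s.equalsIgnoreCase("no") || s.equalsIgnoreCase("n")) {
            return false;
        }
        // Not a recognized literal: parse as a number; non-zero means TRUE.
        return new BigDecimal(s).signum() != 0;
    }

    public static void main(String[] args) {
        System.out.println(parseBoolean("Y"));   // true
        System.out.println(parseBoolean("0.0")); // false
        System.out.println(parseBoolean("-3"));  // true
    }
}
```

Note that a non-numeric, non-literal string (e.g. "maybe") falls into the `NumberFormatException` handler, which the code above maps to DATA_CONVERSION_ERROR_1.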
The precision of the + * returned value may be somewhat larger than requested, because values with + * a fixed precision are not truncated. + * + * @param precision the new precision + * @param force true if losing numeric precision is allowed + * @return the new value + */ + @SuppressWarnings("unused") + public Value convertPrecision(long precision, boolean force) { + return this; + } + + private static byte convertToByte(long x, Object column) { + if (x > Byte.MAX_VALUE || x < Byte.MIN_VALUE) { + throw DbException.get( + ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_2, Long.toString(x), getColumnName(column)); + } + return (byte) x; + } + + private static short convertToShort(long x, Object column) { + if (x > Short.MAX_VALUE || x < Short.MIN_VALUE) { + throw DbException.get( + ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_2, Long.toString(x), getColumnName(column)); + } + return (short) x; + } + + private static int convertToInt(long x, Object column) { + if (x > Integer.MAX_VALUE || x < Integer.MIN_VALUE) { + throw DbException.get( + ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_2, Long.toString(x), getColumnName(column)); + } + return (int) x; + } + + private static long convertToLong(double x, Object column) { + if (x > Long.MAX_VALUE || x < Long.MIN_VALUE) { + // TODO document that +Infinity, -Infinity throw an exception and + // NaN returns 0 + throw DbException.get( + ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_2, Double.toString(x), getColumnName(column)); + } + return Math.round(x); + } + + private static long convertToLong(BigDecimal x, Object column) { + if (x.compareTo(MAX_LONG_DECIMAL) > 0 || + x.compareTo(Value.MIN_LONG_DECIMAL) < 0) { + throw DbException.get( + ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_2, x.toString(), getColumnName(column)); + } + return x.setScale(0, BigDecimal.ROUND_HALF_UP).longValue(); + } + + private static String getColumnName(Object column) { + return column == null ? "" : column.toString(); + } + + /** + * Copy a large value, to be used in the given table. 
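Both `compareTypeSafe` and `compareTo` above pin down the same NULL ordering: two NULLs compare equal, and a lone NULL sorts before every other value. A hedged sketch of just that ordering, using `Integer` with `null` standing in for `ValueNull.INSTANCE` (names are illustrative):

```java
import java.util.Arrays;

public class NullFirstCompareSketch {
    /** NULL sorts first: equal to another NULL, smaller than anything else. */
    static int compareNullsFirst(Integer a, Integer b) {
        if (a == null) {
            return b == null ? 0 : -1;
        }
        if (b == null) {
            return 1;
        }
        return a.compareTo(b);
    }

    public static void main(String[] args) {
        Integer[] values = {3, null, 1, null, 2};
        Arrays.sort(values, NullFirstCompareSketch::compareNullsFirst);
        System.out.println(Arrays.toString(values)); // [null, null, 1, 2, 3]
    }
}
```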
For values that are + * kept fully in memory this method has no effect. + * + * @param handler the data handler + * @param tableId the table where this object is used + * @return the new value or itself + */ + @SuppressWarnings("unused") + public Value copy(DataHandler handler, int tableId) { + return this; + } + + /** + * Check if this value is linked to a specific table. For values that are + * kept fully in memory, this method returns false. + * + * @return true if it is + */ + public boolean isLinkedToTable() { + return false; + } + + /** + * Remove the underlying resource, if any. For values that are kept fully in + * memory this method has no effect. + */ + public void remove() { + // nothing to do + } + + /** + * Check if the precision is smaller than or equal to the given precision. + * + * @param precision the maximum precision + * @return true if the precision of this value is smaller or equal to the + * given precision + */ + public boolean checkPrecision(long precision) { + return getPrecision() <= precision; + } + + /** + * Get a medium size SQL expression for debugging or tracing. If the + * precision is too large, only a subset of the value is returned. + * + * @return the SQL expression + */ + public String getTraceSQL() { + return getSQL(); + } + + @Override + public String toString() { + return getTraceSQL(); + } + + /** + * Throw the exception that the feature is not supported for the given data + * type. + * + * @param op the operation + * @return never returns normally + * @throws DbException the exception + */ + protected DbException throwUnsupportedExceptionForType(String op) { + throw DbException.getUnsupportedException( + DataType.getDataType(getType()).name + " " + op); + } + + /** + * Get the table (only for LOB object). + * + * @return the table id + */ + public int getTableId() { + return 0; + } + + /** + * Get the byte array.
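The `convertToByte`/`convertToShort`/`convertToInt`/`convertToLong` helpers above range-check before narrowing instead of letting a Java cast silently wrap, and the double variant rounds via `Math.round` (half-up) — note the source's own TODO that NaN slips through both comparisons and yields 0. A self-contained sketch of that behavior (class and exception choice are illustrative, not the H2 API):

```java
public class NarrowingSketch {
    /** Overflow-checked narrowing: throws instead of wrapping like a plain cast. */
    static byte toByteChecked(long x) {
        if (x > Byte.MAX_VALUE || x < Byte.MIN_VALUE) {
            throw new ArithmeticException("numeric value out of range: " + x);
        }
        return (byte) x;
    }

    /** Half-up rounding via Math.round; NaN fails both range tests and rounds to 0. */
    static long toLongChecked(double x) {
        if (x > Long.MAX_VALUE || x < Long.MIN_VALUE) {
            throw new ArithmeticException("numeric value out of range: " + x);
        }
        return Math.round(x);
    }

    public static void main(String[] args) {
        System.out.println(toByteChecked(127));        // 127
        System.out.println(toLongChecked(2.5));        // 3
        System.out.println(toLongChecked(Double.NaN)); // 0
    }
}
```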
+ * + * @return the byte array + */ + public byte[] getSmall() { + return null; + } + + /** + * Copy this value to a temporary file if necessary. + * + * @return the new value + */ + public Value copyToTemp() { + return this; + } + + /** + * Create an independent copy of this value if needed, that will be bound to + * a result. If the original row is removed, this copy is still readable. + * + * @return the value (this for small objects) + */ + public Value copyToResult() { + return this; + } + + public ResultSet getResultSet() { + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + rs.addColumn("X", DataType.convertTypeToSQLType(getType()), + MathUtils.convertLongToInt(getPrecision()), getScale()); + rs.addRow(getObject()); + return rs; + } + + /** + * Return the data handler for the values that support it + * (actually only Java objects). + * @return the data handler + */ + protected DataHandler getDataHandler() { + return null; + } + + /** + * A "character large object". + */ + public interface ValueClob { + // this is a marker interface + } + + /** + * A "binary large object". + */ + public interface ValueBlob { + // this is a marker interface + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueArray.java b/modules/h2/src/main/java/org/h2/value/ValueArray.java new file mode 100644 index 0000000000000..b9d711407fb53 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueArray.java @@ -0,0 +1,227 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.lang.reflect.Array; +import java.sql.PreparedStatement; +import java.util.ArrayList; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.util.StatementBuilder; + +/** + * Implementation of the ARRAY data type.
+ */ +public class ValueArray extends Value { + + private final Class componentType; + private final Value[] values; + private int hash; + + private ValueArray(Class componentType, Value[] list) { + this.componentType = componentType; + this.values = list; + } + + private ValueArray(Value[] list) { + this(Object.class, list); + } + + /** + * Get or create an array value for the given value array. + * Do not clone the data. + * + * @param list the value array + * @return the value + */ + public static ValueArray get(Value[] list) { + return new ValueArray(list); + } + + /** + * Get or create an array value for the given value array. + * Do not clone the data. + * + * @param componentType the array class (null for Object[]) + * @param list the value array + * @return the value + */ + public static ValueArray get(Class componentType, Value[] list) { + return new ValueArray(componentType, list); + } + + @Override + public int hashCode() { + if (hash != 0) { + return hash; + } + int h = 1; + for (Value v : values) { + h = h * 31 + v.hashCode(); + } + hash = h; + return h; + } + + public Value[] getList() { + return values; + } + + @Override + public int getType() { + return Value.ARRAY; + } + + public Class getComponentType() { + return componentType; + } + + @Override + public long getPrecision() { + long p = 0; + for (Value v : values) { + p += v.getPrecision(); + } + return p; + } + + @Override + public String getString() { + StatementBuilder buff = new StatementBuilder("("); + for (Value v : values) { + buff.appendExceptFirst(", "); + buff.append(v.getString()); + } + return buff.append(')').toString(); + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueArray v = (ValueArray) o; + if (values == v.values) { + return 0; + } + int l = values.length; + int ol = v.values.length; + int len = Math.min(l, ol); + for (int i = 0; i < len; i++) { + Value v1 = values[i]; + Value v2 = v.values[i]; + int comp = v1.compareTo(v2, mode); + if (comp !=
0) { + return comp; + } + } + return Integer.compare(l, ol); + } + + @Override + public Object getObject() { + int len = values.length; + Object[] list = (Object[]) Array.newInstance(componentType, len); + for (int i = 0; i < len; i++) { + final Value value = values[i]; + if (!SysProperties.OLD_RESULT_SET_GET_OBJECT) { + final int type = value.getType(); + if (type == Value.BYTE || type == Value.SHORT) { + list[i] = value.getInt(); + continue; + } + } + list[i] = value.getObject(); + } + return list; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) { + throw throwUnsupportedExceptionForType("PreparedStatement.set"); + } + + @Override + public String getSQL() { + StatementBuilder buff = new StatementBuilder("("); + for (Value v : values) { + buff.appendExceptFirst(", "); + buff.append(v.getSQL()); + } + if (values.length == 1) { + buff.append(','); + } + return buff.append(')').toString(); + } + + @Override + public String getTraceSQL() { + StatementBuilder buff = new StatementBuilder("("); + for (Value v : values) { + buff.appendExceptFirst(", "); + buff.append(v == null ? 
"null" : v.getTraceSQL()); + } + return buff.append(')').toString(); + } + + @Override + public int getDisplaySize() { + long size = 0; + for (Value v : values) { + size += v.getDisplaySize(); + } + return MathUtils.convertLongToInt(size); + } + + @Override + public boolean equals(Object other) { + if (!(other instanceof ValueArray)) { + return false; + } + ValueArray v = (ValueArray) other; + if (values == v.values) { + return true; + } + int len = values.length; + if (len != v.values.length) { + return false; + } + for (int i = 0; i < len; i++) { + if (!values[i].equals(v.values[i])) { + return false; + } + } + return true; + } + + @Override + public int getMemory() { + int memory = 32; + for (Value v : values) { + memory += v.getMemory() + Constants.MEMORY_POINTER; + } + return memory; + } + + @Override + public Value convertPrecision(long precision, boolean force) { + if (!force) { + return this; + } + ArrayList list = New.arrayList(); + for (Value v : values) { + v = v.convertPrecision(precision, true); + // empty byte arrays or strings have precision 0 + // they count as precision 1 here + precision -= Math.max(1, v.getPrecision()); + if (precision < 0) { + break; + } + list.add(v); + } + return get(list.toArray(new Value[0])); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueBoolean.java b/modules/h2/src/main/java/org/h2/value/ValueBoolean.java new file mode 100644 index 0000000000000..1ba95f275f800 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueBoolean.java @@ -0,0 +1,116 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +/** + * Implementation of the BOOLEAN data type. + */ +public class ValueBoolean extends Value { + + /** + * The precision in digits. 
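`ValueArray.compareSecure` above compares arrays element by element and, when one array is a prefix of the other, breaks the tie by length (`Integer.compare(l, ol)`), so the shorter array sorts first. A generic standalone sketch of that ordering (illustrative names, not the H2 API):

```java
public class ArrayCompareSketch {
    /** Pairwise element comparison; a strict prefix sorts before the longer array. */
    static <T extends Comparable<T>> int compareArrays(T[] a, T[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int comp = a[i].compareTo(b[i]);
            if (comp != 0) {
                return comp;
            }
        }
        // All shared positions are equal: the shorter array comes first.
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        Integer[] a = {1, 2};
        Integer[] b = {1, 2, 0};
        System.out.println(compareArrays(a, b) < 0); // true: (1,2) is a prefix of (1,2,0)
        System.out.println(compareArrays(b, a) > 0); // true
    }
}
```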
+ */ + public static final int PRECISION = 1; + + /** + * The maximum display size of a boolean. + * Example: FALSE + */ + public static final int DISPLAY_SIZE = 5; + + /** + * TRUE value. + */ + public static final ValueBoolean TRUE = new ValueBoolean(true); + + /** + * FALSE value. + */ + public static final ValueBoolean FALSE = new ValueBoolean(false); + + private final boolean value; + + private ValueBoolean(boolean value) { + this.value = value; + } + + @Override + public int getType() { + return Value.BOOLEAN; + } + + @Override + public String getSQL() { + return getString(); + } + + @Override + public String getString() { + return value ? "TRUE" : "FALSE"; + } + + @Override + public Value negate() { + return value ? FALSE : TRUE; + } + + @Override + public boolean getBoolean() { + return value; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueBoolean v = (ValueBoolean) o; + return Boolean.compare(value, v.value); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int hashCode() { + return value ? 1 : 0; + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setBoolean(parameterIndex, value); + } + + /** + * Get the boolean value for the given boolean. + * + * @param b the boolean + * @return the value + */ + public static ValueBoolean get(boolean b) { + return b ? 
TRUE : FALSE; + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + // there are only ever two instances, so the instance must match + return this == other; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueByte.java b/modules/h2/src/main/java/org/h2/value/ValueByte.java new file mode 100644 index 0000000000000..f769c2b485321 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueByte.java @@ -0,0 +1,162 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * Implementation of the BYTE data type. + */ +public class ValueByte extends Value { + + /** + * The precision in digits. + */ + static final int PRECISION = 3; + + /** + * The display size for a byte. 
+ * Example: -127 + */ + static final int DISPLAY_SIZE = 4; + + private final byte value; + + private ValueByte(byte value) { + this.value = value; + } + + @Override + public Value add(Value v) { + ValueByte other = (ValueByte) v; + return checkRange(value + other.value); + } + + private static ValueByte checkRange(int x) { + if (x < Byte.MIN_VALUE || x > Byte.MAX_VALUE) { + throw DbException.get(ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_1, + Integer.toString(x)); + } + return ValueByte.get((byte) x); + } + + @Override + public int getSignum() { + return Integer.signum(value); + } + + @Override + public Value negate() { + return checkRange(-(int) value); + } + + @Override + public Value subtract(Value v) { + ValueByte other = (ValueByte) v; + return checkRange(value - other.value); + } + + @Override + public Value multiply(Value v) { + ValueByte other = (ValueByte) v; + return checkRange(value * other.value); + } + + @Override + public Value divide(Value v) { + ValueByte other = (ValueByte) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueByte.get((byte) (value / other.value)); + } + + @Override + public Value modulus(Value v) { + ValueByte other = (ValueByte) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueByte.get((byte) (value % other.value)); + } + + @Override + public String getSQL() { + return getString(); + } + + @Override + public int getType() { + return Value.BYTE; + } + + @Override + public byte getByte() { + return value; + } + + @Override + public int getInt() { + return value; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueByte v = (ValueByte) o; + return Integer.compare(value, v.value); + } + + @Override + public String getString() { + return String.valueOf(value); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int hashCode() { + return value; + 
} + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setByte(parameterIndex, value); + } + + /** + * Get or create byte value for the given byte. + * + * @param i the byte + * @return the value + */ + public static ValueByte get(byte i) { + return (ValueByte) Value.cache(new ValueByte(i)); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueByte && value == ((ValueByte) other).value; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueBytes.java b/modules/h2/src/main/java/org/h2/value/ValueBytes.java new file mode 100644 index 0000000000000..cc755b07f5d44 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueBytes.java @@ -0,0 +1,157 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.util.Arrays; + +import org.h2.engine.SysProperties; +import org.h2.util.Bits; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * Implementation of the BINARY data type. + * It is also the base class for ValueJavaObject. + */ +public class ValueBytes extends Value { + + private static final ValueBytes EMPTY = new ValueBytes(Utils.EMPTY_BYTES); + + /** + * The value. + */ + protected byte[] value; + + /** + * The hash code. + */ + protected int hash; + + protected ValueBytes(byte[] v) { + this.value = v; + } + + /** + * Get or create a bytes value for the given byte array. + * Clone the data. 
+ * + * @param b the byte array + * @return the value + */ + public static ValueBytes get(byte[] b) { + if (b.length == 0) { + return EMPTY; + } + b = Utils.cloneByteArray(b); + return getNoCopy(b); + } + + /** + * Get or create a bytes value for the given byte array. + * Do not clone the data. + * + * @param b the byte array + * @return the value + */ + public static ValueBytes getNoCopy(byte[] b) { + if (b.length == 0) { + return EMPTY; + } + ValueBytes obj = new ValueBytes(b); + if (b.length > SysProperties.OBJECT_CACHE_MAX_PER_ELEMENT_SIZE) { + return obj; + } + return (ValueBytes) Value.cache(obj); + } + + @Override + public int getType() { + return Value.BYTES; + } + + @Override + public String getSQL() { + return "X'" + StringUtils.convertBytesToHex(getBytesNoCopy()) + "'"; + } + + @Override + public byte[] getBytesNoCopy() { + return value; + } + + @Override + public byte[] getBytes() { + return Utils.cloneByteArray(getBytesNoCopy()); + } + + @Override + protected int compareSecure(Value v, CompareMode mode) { + byte[] v2 = ((ValueBytes) v).value; + if (mode.isBinaryUnsigned()) { + return Bits.compareNotNullUnsigned(value, v2); + } + return Bits.compareNotNullSigned(value, v2); + } + + @Override + public String getString() { + return StringUtils.convertBytesToHex(value); + } + + @Override + public long getPrecision() { + return value.length; + } + + @Override + public int hashCode() { + if (hash == 0) { + hash = Utils.getByteArrayHash(value); + } + return hash; + } + + @Override + public Object getObject() { + return getBytes(); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setBytes(parameterIndex, value); + } + + @Override + public int getDisplaySize() { + return MathUtils.convertLongToInt(value.length * 2L); + } + + @Override + public int getMemory() { + return value.length + 24; + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueBytes + &&
Arrays.equals(value, ((ValueBytes) other).value); + } + + @Override + public Value convertPrecision(long precision, boolean force) { + if (value.length <= precision) { + return this; + } + int len = MathUtils.convertLongToInt(precision); + byte[] buff = Arrays.copyOf(value, len); + return get(buff); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueDate.java b/modules/h2/src/main/java/org/h2/value/ValueDate.java new file mode 100644 index 0000000000000..023f879ba2a79 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueDate.java @@ -0,0 +1,145 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.Date; +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.util.DateTimeUtils; + +/** + * Implementation of the DATE data type. + */ +public class ValueDate extends Value { + + /** + * The default precision and display size of the textual representation of a date. + * Example: 2000-01-02 + */ + public static final int PRECISION = 10; + + private final long dateValue; + + private ValueDate(long dateValue) { + this.dateValue = dateValue; + } + + /** + * Get or create a date value for the given date. + * + * @param dateValue the date value + * @return the value + */ + public static ValueDate fromDateValue(long dateValue) { + return (ValueDate) Value.cache(new ValueDate(dateValue)); + } + + /** + * Get or create a date value for the given date. + * + * @param date the date + * @return the value + */ + public static ValueDate get(Date date) { + return fromDateValue(DateTimeUtils.dateValueFromDate(date.getTime())); + } + + /** + * Calculate the date value (in the default timezone) from a given time in + * milliseconds in UTC. 
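`ValueBytes.compareSecure` above delegates to either an unsigned or a signed byte-array comparison depending on `CompareMode.isBinaryUnsigned()`. The difference matters for any byte above 0x7F: unsigned treats it as 128..255, signed as a negative Java `byte`. A standalone sketch of both orderings (the class is illustrative; H2's actual implementation lives in `org.h2.util.Bits`):

```java
public class ByteCompareSketch {
    /** Unsigned: each byte compared as 0..255, then shorter-first on prefix. */
    static int compareUnsigned(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int x = a[i] & 0xff, y = b[i] & 0xff;
            if (x != y) {
                return x < y ? -1 : 1;
            }
        }
        return Integer.compare(a.length, b.length);
    }

    /** Signed: each byte compared as -128..127, then shorter-first on prefix. */
    static int compareSigned(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            if (a[i] != b[i]) {
                return a[i] < b[i] ? -1 : 1;
            }
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        byte[] lo = {0x7f};        // 127 either way
        byte[] hi = {(byte) 0x80}; // 128 unsigned, -128 signed
        System.out.println(compareUnsigned(lo, hi) < 0); // true: 127 < 128
        System.out.println(compareSigned(lo, hi) > 0);   // true: 127 > -128
    }
}
```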
+ * + * @param ms the milliseconds + * @return the value + */ + public static ValueDate fromMillis(long ms) { + return fromDateValue(DateTimeUtils.dateValueFromDate(ms)); + } + + /** + * Parse a string to a ValueDate. + * + * @param s the string to parse + * @return the date + */ + public static ValueDate parse(String s) { + try { + return fromDateValue(DateTimeUtils.parseDateValue(s, 0, s.length())); + } catch (Exception e) { + throw DbException.get(ErrorCode.INVALID_DATETIME_CONSTANT_2, + e, "DATE", s); + } + } + + public long getDateValue() { + return dateValue; + } + + @Override + public Date getDate() { + return DateTimeUtils.convertDateValueToDate(dateValue); + } + + @Override + public int getType() { + return Value.DATE; + } + + @Override + public String getString() { + StringBuilder buff = new StringBuilder(PRECISION); + DateTimeUtils.appendDate(buff, dateValue); + return buff.toString(); + } + + @Override + public String getSQL() { + return "DATE '" + getString() + "'"; + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int getDisplaySize() { + return PRECISION; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + return Long.compare(dateValue, ((ValueDate) o).dateValue); + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } + return other instanceof ValueDate + && dateValue == (((ValueDate) other).dateValue); + } + + @Override + public int hashCode() { + return (int) (dateValue ^ (dateValue >>> 32)); + } + + @Override + public Object getObject() { + return getDate(); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setDate(parameterIndex, getDate()); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueDecimal.java b/modules/h2/src/main/java/org/h2/value/ValueDecimal.java new file mode 100644 index 0000000000000..71fdd653ce08b --- /dev/null +++ 
b/modules/h2/src/main/java/org/h2/value/ValueDecimal.java @@ -0,0 +1,272 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.math.BigDecimal; +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.util.MathUtils; + +/** + * Implementation of the DECIMAL data type. + */ +public class ValueDecimal extends Value { + + /** + * The value 'zero'. + */ + public static final Object ZERO = new ValueDecimal(BigDecimal.ZERO); + + /** + * The value 'one'. + */ + public static final Object ONE = new ValueDecimal(BigDecimal.ONE); + + /** + * The default precision for a decimal value. + */ + static final int DEFAULT_PRECISION = 65535; + + /** + * The default scale for a decimal value. + */ + static final int DEFAULT_SCALE = 32767; + + /** + * The default display size for a decimal value. + */ + static final int DEFAULT_DISPLAY_SIZE = 65535; + + private static final int DIVIDE_SCALE_ADD = 25; + + /** + * The maximum scale of a BigDecimal value. 
+ */ + private static final int BIG_DECIMAL_SCALE_MAX = 100_000; + + private final BigDecimal value; + private String valueString; + private int precision; + + private ValueDecimal(BigDecimal value) { + if (value == null) { + throw new IllegalArgumentException("null"); + } else if (!value.getClass().equals(BigDecimal.class)) { + throw DbException.get(ErrorCode.INVALID_CLASS_2, + BigDecimal.class.getName(), value.getClass().getName()); + } + this.value = value; + } + + @Override + public Value add(Value v) { + ValueDecimal dec = (ValueDecimal) v; + return ValueDecimal.get(value.add(dec.value)); + } + + @Override + public Value subtract(Value v) { + ValueDecimal dec = (ValueDecimal) v; + return ValueDecimal.get(value.subtract(dec.value)); + } + + @Override + public Value negate() { + return ValueDecimal.get(value.negate()); + } + + @Override + public Value multiply(Value v) { + ValueDecimal dec = (ValueDecimal) v; + return ValueDecimal.get(value.multiply(dec.value)); + } + + @Override + public Value divide(Value v) { + ValueDecimal dec = (ValueDecimal) v; + if (dec.value.signum() == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + BigDecimal bd = value.divide(dec.value, + value.scale() + DIVIDE_SCALE_ADD, + BigDecimal.ROUND_HALF_DOWN); + if (bd.signum() == 0) { + bd = BigDecimal.ZERO; + } else if (bd.scale() > 0) { + if (!bd.unscaledValue().testBit(0)) { + bd = bd.stripTrailingZeros(); + } + } + return ValueDecimal.get(bd); + } + + @Override + public ValueDecimal modulus(Value v) { + ValueDecimal dec = (ValueDecimal) v; + if (dec.value.signum() == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + BigDecimal bd = value.remainder(dec.value); + return ValueDecimal.get(bd); + } + + @Override + public String getSQL() { + return getString(); + } + + @Override + public int getType() { + return Value.DECIMAL; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueDecimal v = (ValueDecimal) o; + 
return value.compareTo(v.value); + } + + @Override + public int getSignum() { + return value.signum(); + } + + @Override + public BigDecimal getBigDecimal() { + return value; + } + + @Override + public String getString() { + if (valueString == null) { + String p = value.toPlainString(); + if (p.length() < 40) { + valueString = p; + } else { + valueString = value.toString(); + } + } + return valueString; + } + + @Override + public long getPrecision() { + if (precision == 0) { + precision = value.precision(); + } + return precision; + } + + @Override + public boolean checkPrecision(long prec) { + if (prec == DEFAULT_PRECISION) { + return true; + } + return getPrecision() <= prec; + } + + @Override + public int getScale() { + return value.scale(); + } + + @Override + public int hashCode() { + return value.hashCode(); + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setBigDecimal(parameterIndex, value); + } + + @Override + public Value convertScale(boolean onlyToSmallerScale, int targetScale) { + if (value.scale() == targetScale) { + return this; + } + if (onlyToSmallerScale || targetScale >= DEFAULT_SCALE) { + if (value.scale() < targetScale) { + return this; + } + } + BigDecimal bd = ValueDecimal.setScale(value, targetScale); + return ValueDecimal.get(bd); + } + + @Override + public Value convertPrecision(long precision, boolean force) { + if (getPrecision() <= precision) { + return this; + } + if (force) { + return get(BigDecimal.valueOf(value.doubleValue())); + } + throw DbException.get( + ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_1, + Long.toString(precision)); + } + + /** + * Get or create big decimal value for the given big decimal. 
+ * + * @param dec the big decimal + * @return the value + */ + public static ValueDecimal get(BigDecimal dec) { + if (BigDecimal.ZERO.equals(dec)) { + return (ValueDecimal) ZERO; + } else if (BigDecimal.ONE.equals(dec)) { + return (ValueDecimal) ONE; + } + return (ValueDecimal) Value.cache(new ValueDecimal(dec)); + } + + @Override + public int getDisplaySize() { + // add 2 characters for '-' and '.' + return MathUtils.convertLongToInt(getPrecision() + 2); + } + + @Override + public boolean equals(Object other) { + // Two BigDecimal objects are considered equal only if they are equal in + // value and scale (thus 2.0 is not equal to 2.00 when using equals; + // however -0.0 and 0.0 are). Can not use compareTo because 2.0 and 2.00 + // have different hash codes + return other instanceof ValueDecimal && + value.equals(((ValueDecimal) other).value); + } + + @Override + public int getMemory() { + return value.precision() + 120; + } + + /** + * Set the scale of a BigDecimal value. + * + * @param bd the BigDecimal value + * @param scale the new scale + * @return the scaled value + */ + public static BigDecimal setScale(BigDecimal bd, int scale) { + if (scale > BIG_DECIMAL_SCALE_MAX || scale < -BIG_DECIMAL_SCALE_MAX) { + throw DbException.getInvalidValueException("scale", scale); + } + return bd.setScale(scale, BigDecimal.ROUND_HALF_UP); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueDouble.java b/modules/h2/src/main/java/org/h2/value/ValueDouble.java new file mode 100644 index 0000000000000..a6847ca802535 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueDouble.java @@ -0,0 +1,182 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
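`ValueDecimal.divide` above adds 25 guard digits to the dividend's scale (`DIVIDE_SCALE_ADD`), rounds half-down, and strips trailing zeros only when the last digit of the unscaled value is even (an odd last digit cannot be a trailing zero, so the strip is skipped). A simplified standalone sketch of that guard-digit division (the constant is taken from the diff; the class name is illustrative):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalDivideSketch {
    // Guard digits added to the dividend's scale, as in the diff above.
    private static final int DIVIDE_SCALE_ADD = 25;

    static BigDecimal divide(BigDecimal a, BigDecimal b) {
        if (b.signum() == 0) {
            throw new ArithmeticException("division by zero");
        }
        BigDecimal bd = a.divide(b, a.scale() + DIVIDE_SCALE_ADD, RoundingMode.HALF_DOWN);
        if (bd.signum() == 0) {
            return BigDecimal.ZERO;
        }
        if (bd.scale() > 0 && !bd.unscaledValue().testBit(0)) {
            // Even last digit: trailing zeros are possible, so strip them.
            bd = bd.stripTrailingZeros();
        }
        return bd;
    }

    public static void main(String[] args) {
        System.out.println(divide(new BigDecimal("1"), new BigDecimal("4"))); // 0.25
    }
}
```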
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * Implementation of the DOUBLE data type. + */ +public class ValueDouble extends Value { + + /** + * The precision in digits. + */ + public static final int PRECISION = 17; + + /** + * The maximum display size of a double. + * Example: -3.3333333333333334E-100 + */ + public static final int DISPLAY_SIZE = 24; + + /** + * Double.doubleToLongBits(0.0) + */ + public static final long ZERO_BITS = Double.doubleToLongBits(0.0); + + private static final ValueDouble ZERO = new ValueDouble(0.0); + private static final ValueDouble ONE = new ValueDouble(1.0); + private static final ValueDouble NAN = new ValueDouble(Double.NaN); + + private final double value; + + private ValueDouble(double value) { + this.value = value; + } + + @Override + public Value add(Value v) { + ValueDouble v2 = (ValueDouble) v; + return ValueDouble.get(value + v2.value); + } + + @Override + public Value subtract(Value v) { + ValueDouble v2 = (ValueDouble) v; + return ValueDouble.get(value - v2.value); + } + + @Override + public Value negate() { + return ValueDouble.get(-value); + } + + @Override + public Value multiply(Value v) { + ValueDouble v2 = (ValueDouble) v; + return ValueDouble.get(value * v2.value); + } + + @Override + public Value divide(Value v) { + ValueDouble v2 = (ValueDouble) v; + if (v2.value == 0.0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueDouble.get(value / v2.value); + } + + @Override + public ValueDouble modulus(Value v) { + ValueDouble other = (ValueDouble) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueDouble.get(value % other.value); + } + + @Override + public String getSQL() { + if (value == Double.POSITIVE_INFINITY) { + return "POWER(0, -1)"; + } else if (value == 
Double.NEGATIVE_INFINITY) { + return "(-POWER(0, -1))"; + } else if (Double.isNaN(value)) { + return "SQRT(-1)"; + } + return getString(); + } + + @Override + public int getType() { + return Value.DOUBLE; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueDouble v = (ValueDouble) o; + return Double.compare(value, v.value); + } + + @Override + public int getSignum() { + return value == 0 ? 0 : (value < 0 ? -1 : 1); + } + + @Override + public double getDouble() { + return value; + } + + @Override + public String getString() { + return String.valueOf(value); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int getScale() { + return 0; + } + + @Override + public int hashCode() { + long hash = Double.doubleToLongBits(value); + return (int) (hash ^ (hash >> 32)); + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setDouble(parameterIndex, value); + } + + /** + * Get or create double value for the given double. 
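+     * <p>Note (illustrative): because {@code -0.0 == 0.0} in Java, negative
+     * zero is mapped to the shared 0.0 instance, so {@code get(-0.0d)} and
+     * {@code get(0.0d)} return the same value; NaN is likewise mapped to a
+     * single shared instance.</p>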
+ * + * @param d the double + * @return the value + */ + public static ValueDouble get(double d) { + if (d == 1.0) { + return ONE; + } else if (d == 0.0) { + // -0.0 == 0.0, and we want to return 0.0 for both + return ZERO; + } else if (Double.isNaN(d)) { + return NAN; + } + return (ValueDouble) Value.cache(new ValueDouble(d)); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + if (!(other instanceof ValueDouble)) { + return false; + } + return compareSecure((ValueDouble) other, null) == 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueEnum.java b/modules/h2/src/main/java/org/h2/value/ValueEnum.java new file mode 100644 index 0000000000000..2dc4f4027753d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueEnum.java @@ -0,0 +1,185 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.util.Locale; +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +public class ValueEnum extends ValueEnumBase { + private enum Validation { + DUPLICATE, + EMPTY, + INVALID, + VALID + } + + private final String[] enumerators; + + private ValueEnum(final String[] enumerators, final int ordinal) { + super(enumerators[ordinal], ordinal); + this.enumerators = enumerators; + } + + /** + * Check for any violations, such as empty + * values, duplicate values. 
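+     * <p>For example (assumed from the validation rules below): enumerators
+     * are compared after trimming and upper-casing, so an enumerator list such
+     * as ("A", "B", "a") is rejected as a duplicate, and a list containing
+     * {@code null} or an empty string is rejected as empty.</p>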
+ * + * @param enumerators the enumerators + */ + public static void check(final String[] enumerators) { + switch (validate(enumerators)) { + case VALID: + return; + case EMPTY: + throw DbException.get(ErrorCode.ENUM_EMPTY); + case DUPLICATE: + throw DbException.get(ErrorCode.ENUM_DUPLICATE, + toString(enumerators)); + default: + throw DbException.get(ErrorCode.INVALID_VALUE_2, + toString(enumerators)); + } + } + + private static void check(final String[] enumerators, final Value value) { + check(enumerators); + + if (validate(enumerators, value) != Validation.VALID) { + throw DbException.get(ErrorCode.ENUM_VALUE_NOT_PERMITTED, + toString(enumerators), value.toString()); + } + } + + @Override + protected int compareSecure(final Value v, final CompareMode mode) { + return Integer.compare(getInt(), v.getInt()); + } + + /** + * Create an ENUM value from the provided enumerators + * and value. + * + * @param enumerators the enumerators + * @param value a value + * @return the ENUM value + */ + public static ValueEnum get(final String[] enumerators, int value) { + check(enumerators, ValueInt.get(value)); + return new ValueEnum(enumerators, value); + } + + public static ValueEnum get(final String[] enumerators, String value) { + check(enumerators, ValueString.get(value)); + + final String cleanLabel = sanitize(value); + + for (int i = 0; i < enumerators.length; i++) { + if (cleanLabel.equals(sanitize(enumerators[i]))) { + return new ValueEnum(enumerators, i); + } + } + + throw DbException.get(ErrorCode.GENERAL_ERROR_1, "Unexpected error"); + } + + public String[] getEnumerators() { + return enumerators; + } + + /** + * Evaluates whether a valid ENUM can be constructed + * from the provided enumerators and value. 
+ * + * @param enumerators the enumerators + * @param value the value + * @return whether a valid ENUM can be constructed from the provided values + */ + public static boolean isValid(final String enumerators[], final Value value) { + return validate(enumerators, value).equals(Validation.VALID); + } + + private static String sanitize(final String label) { + return label == null ? null : label.trim().toUpperCase(Locale.ENGLISH); + } + + private static String[] sanitize(final String[] enumerators) { + if (enumerators == null || enumerators.length == 0) { + return null; + } + + final String[] clean = new String[enumerators.length]; + + for (int i = 0; i < enumerators.length; i++) { + clean[i] = sanitize(enumerators[i]); + } + + return clean; + } + + private static String toString(final String[] enumerators) { + String result = "("; + for (int i = 0; i < enumerators.length; i++) { + result += "'" + enumerators[i] + "'"; + if (i < enumerators.length - 1) { + result += ", "; + } + } + result += ")"; + return result; + } + + private static Validation validate(final String[] enumerators) { + final String[] cleaned = sanitize(enumerators); + + if (cleaned == null || cleaned.length == 0) { + return Validation.EMPTY; + } + + for (int i = 0; i < cleaned.length; i++) { + if (cleaned[i] == null || cleaned[i].equals("")) { + return Validation.EMPTY; + } + + if (i < cleaned.length - 1) { + for (int j = i + 1; j < cleaned.length; j++) { + if (cleaned[i].equals(cleaned[j])) { + return Validation.DUPLICATE; + } + } + } + } + + return Validation.VALID; + } + + private static Validation validate(final String[] enumerators, final Value value) { + final Validation validation = validate(enumerators); + if (!validation.equals(Validation.VALID)) { + return validation; + } + + if (DataType.isStringType(value.getType())) { + final String cleanLabel = sanitize(value.getString()); + + for (String enumerator : enumerators) { + if (cleanLabel.equals(sanitize(enumerator))) { + return 
Validation.VALID; + } + } + + return Validation.INVALID; + } else { + final int ordinal = value.getInt(); + + if (ordinal < 0 || ordinal >= enumerators.length) { + return Validation.INVALID; + } + + return Validation.VALID; + } + } +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueEnumBase.java b/modules/h2/src/main/java/org/h2/value/ValueEnumBase.java new file mode 100644 index 0000000000000..b9f1d959df21a --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueEnumBase.java @@ -0,0 +1,140 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +/** + * Base implementation of the ENUM data type. + * + * Currently, this class is used primarily for + * client-server communication. + */ +public class ValueEnumBase extends Value { + private static final int PRECISION = 10; + private static final int DISPLAY_SIZE = 11; + + private final String label; + private final int ordinal; + + protected ValueEnumBase(final String label, final int ordinal) { + this.label = label; + this.ordinal = ordinal; + } + + @Override + public Value add(final Value v) { + final Value iv = v.convertTo(Value.INT); + return convertTo(Value.INT).add(iv); + } + + @Override + protected int compareSecure(final Value v, final CompareMode mode) { + return Integer.compare(getInt(), v.getInt()); + } + + @Override + public Value divide(final Value v) { + final Value iv = v.convertTo(Value.INT); + return convertTo(Value.INT).divide(iv); + } + + @Override + public boolean equals(final Object other) { + return other instanceof ValueEnumBase && + getInt() == ((ValueEnumBase) other).getInt(); + } + + /** + * Get or create an enum value with the given label and ordinal. 
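+     * <p>Note: despite the "get or create" wording, this implementation always
+     * allocates a new instance; values of this type are not cached.</p>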
+ * + * @param label the label + * @param ordinal the ordinal + * @return the value + */ + public static ValueEnumBase get(final String label, final int ordinal) { + return new ValueEnumBase(label, ordinal); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public int getInt() { + return ordinal; + } + + @Override + public long getLong() { + return ordinal; + } + + @Override + public Object getObject() { + return ordinal; + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int getSignum() { + return Integer.signum(ordinal); + } + + @Override + public String getSQL() { + return getString(); + } + + @Override + public String getString() { + return label; + } + + @Override + public int getType() { + return Value.ENUM; + } + + @Override + public int hashCode() { + int results = 31; + results += getString().hashCode(); + results += getInt(); + return results; + } + + @Override + public Value modulus(final Value v) { + final Value iv = v.convertTo(Value.INT); + return convertTo(Value.INT).modulus(iv); + } + + @Override + public Value multiply(final Value v) { + final Value iv = v.convertTo(Value.INT); + return convertTo(Value.INT).multiply(iv); + } + + + @Override + public void set(final PreparedStatement prep, final int parameterIndex) + throws SQLException { + prep.setInt(parameterIndex, ordinal); + } + + @Override + public Value subtract(final Value v) { + final Value iv = v.convertTo(Value.INT); + return convertTo(Value.INT).subtract(iv); + } +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueFloat.java b/modules/h2/src/main/java/org/h2/value/ValueFloat.java new file mode 100644 index 0000000000000..d630dc675a298 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueFloat.java @@ -0,0 +1,180 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * Implementation of the REAL data type. + */ +public class ValueFloat extends Value { + + /** + * Float.floatToIntBits(0.0F). + */ + public static final int ZERO_BITS = Float.floatToIntBits(0.0F); + + /** + * The precision in digits. + */ + static final int PRECISION = 7; + + /** + * The maximum display size of a float. + * Example: -1.12345676E-20 + */ + static final int DISPLAY_SIZE = 15; + + private static final ValueFloat ZERO = new ValueFloat(0.0F); + private static final ValueFloat ONE = new ValueFloat(1.0F); + + private final float value; + + private ValueFloat(float value) { + this.value = value; + } + + @Override + public Value add(Value v) { + ValueFloat v2 = (ValueFloat) v; + return ValueFloat.get(value + v2.value); + } + + @Override + public Value subtract(Value v) { + ValueFloat v2 = (ValueFloat) v; + return ValueFloat.get(value - v2.value); + } + + @Override + public Value negate() { + return ValueFloat.get(-value); + } + + @Override + public Value multiply(Value v) { + ValueFloat v2 = (ValueFloat) v; + return ValueFloat.get(value * v2.value); + } + + @Override + public Value divide(Value v) { + ValueFloat v2 = (ValueFloat) v; + if (v2.value == 0.0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueFloat.get(value / v2.value); + } + + @Override + public Value modulus(Value v) { + ValueFloat other = (ValueFloat) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueFloat.get(value % other.value); + } + + @Override + public String getSQL() { + if (value == Float.POSITIVE_INFINITY) { + return "POWER(0, -1)"; + } else if (value == Float.NEGATIVE_INFINITY) { + return "(-POWER(0, -1))"; + } else if (Double.isNaN(value)) { + // NaN + return "SQRT(-1)"; + } + return 
getString(); + } + + @Override + public int getType() { + return Value.FLOAT; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueFloat v = (ValueFloat) o; + return Float.compare(value, v.value); + } + + @Override + public int getSignum() { + return value == 0 ? 0 : (value < 0 ? -1 : 1); + } + + @Override + public float getFloat() { + return value; + } + + @Override + public String getString() { + return String.valueOf(value); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int getScale() { + return 0; + } + + @Override + public int hashCode() { + long hash = Float.floatToIntBits(value); + return (int) (hash ^ (hash >> 32)); + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setFloat(parameterIndex, value); + } + + /** + * Get or create float value for the given float. + * + * @param d the float + * @return the value + */ + public static ValueFloat get(float d) { + if (d == 1.0F) { + return ONE; + } else if (d == 0.0F) { + // -0.0 == 0.0, and we want to return 0.0 for both + return ZERO; + } + return (ValueFloat) Value.cache(new ValueFloat(d)); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + if (!(other instanceof ValueFloat)) { + return false; + } + return compareSecure((ValueFloat) other, null) == 0; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueGeometry.java b/modules/h2/src/main/java/org/h2/value/ValueGeometry.java new file mode 100644 index 0000000000000..de82f190f0508 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueGeometry.java @@ -0,0 +1,318 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.value;
+
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Arrays;
+import org.h2.engine.Mode;
+import org.h2.message.DbException;
+import org.h2.util.StringUtils;
+import org.locationtech.jts.geom.CoordinateSequence;
+import org.locationtech.jts.geom.CoordinateSequenceFilter;
+import org.locationtech.jts.geom.Envelope;
+import org.locationtech.jts.geom.Geometry;
+import org.locationtech.jts.geom.GeometryFactory;
+import org.locationtech.jts.geom.PrecisionModel;
+import org.locationtech.jts.io.ParseException;
+import org.locationtech.jts.io.WKBReader;
+import org.locationtech.jts.io.WKBWriter;
+import org.locationtech.jts.io.WKTReader;
+import org.locationtech.jts.io.WKTWriter;
+
+/**
+ * Implementation of the GEOMETRY data type.
+ *
+ * @author Thomas Mueller
+ * @author Noel Grandin
+ * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888
+ */
+public class ValueGeometry extends Value {
+
+    /**
+     * As conversion from/to WKB costs a significant amount of CPU cycles, the
+     * WKB is kept in the ValueGeometry instance.
+     *
+     * We always calculate the WKB, because not all WKT values can be
+     * represented in WKB, but since we persist it in WKB format, it has to be
+     * valid in WKB
+     */
+    private final byte[] bytes;
+
+    private final int hashCode;
+
+    /**
+     * The value. Converted from WKB only on request as conversion from/to WKB
+     * costs a significant amount of CPU cycles.
+     */
+    private Geometry geometry;
+
+    /**
+     * Create a new geometry object.
+     *
+     * @param bytes the bytes (always known)
+     * @param geometry the geometry object (may be null)
+     */
+    private ValueGeometry(byte[] bytes, Geometry geometry) {
+        this.bytes = bytes;
+        this.geometry = geometry;
+        this.hashCode = Arrays.hashCode(bytes);
+    }
+
+    /**
+     * Get or create a geometry value for the given geometry.
+ * + * @param o the geometry object (of type + * org.locationtech.jts.geom.Geometry) + * @return the value + */ + public static ValueGeometry getFromGeometry(Object o) { + return get((Geometry) o); + } + + private static ValueGeometry get(Geometry g) { + byte[] bytes = convertToWKB(g); + return (ValueGeometry) Value.cache(new ValueGeometry(bytes, g)); + } + + private static byte[] convertToWKB(Geometry g) { + boolean includeSRID = g.getSRID() != 0; + int dimensionCount = getDimensionCount(g); + WKBWriter writer = new WKBWriter(dimensionCount, includeSRID); + return writer.write(g); + } + + private static int getDimensionCount(Geometry geometry) { + ZVisitor finder = new ZVisitor(); + geometry.apply(finder); + return finder.isFoundZ() ? 3 : 2; + } + + /** + * Get or create a geometry value for the given geometry. + * + * @param s the WKT representation of the geometry + * @return the value + */ + public static ValueGeometry get(String s) { + try { + Geometry g = new WKTReader().read(s); + return get(g); + } catch (ParseException ex) { + throw DbException.convert(ex); + } + } + + /** + * Get or create a geometry value for the given geometry. + * + * @param s the WKT representation of the geometry + * @param srid the srid of the object + * @return the value + */ + public static ValueGeometry get(String s, int srid) { + try { + GeometryFactory geometryFactory = new GeometryFactory(new PrecisionModel(), srid); + Geometry g = new WKTReader(geometryFactory).read(s); + return get(g); + } catch (ParseException ex) { + throw DbException.convert(ex); + } + } + + /** + * Get or create a geometry value for the given geometry. + * + * @param bytes the WKB representation of the geometry + * @return the value + */ + public static ValueGeometry get(byte[] bytes) { + return (ValueGeometry) Value.cache(new ValueGeometry(bytes, null)); + } + + /** + * Get a copy of geometry object. Geometry object is mutable. The returned + * object is therefore copied before returning. 
+     *
+     * @return a copy of the geometry object
+     */
+    public Geometry getGeometry() {
+        return getGeometryNoCopy().copy();
+    }
+
+    public Geometry getGeometryNoCopy() {
+        if (geometry == null) {
+            try {
+                geometry = new WKBReader().read(bytes);
+            } catch (ParseException ex) {
+                throw DbException.convert(ex);
+            }
+        }
+        return geometry;
+    }
+
+    /**
+     * Test if this geometry envelope intersects with the other geometry
+     * envelope.
+     *
+     * @param r the other geometry
+     * @return true if the two overlap
+     */
+    public boolean intersectsBoundingBox(ValueGeometry r) {
+        // the Geometry object caches the envelope
+        return getGeometryNoCopy().getEnvelopeInternal().intersects(
+                r.getGeometryNoCopy().getEnvelopeInternal());
+    }
+
+    /**
+     * Get the union.
+     *
+     * @param r the other geometry
+     * @return the union of this geometry envelope and another geometry envelope
+     */
+    public Value getEnvelopeUnion(ValueGeometry r) {
+        GeometryFactory gf = new GeometryFactory();
+        Envelope mergedEnvelope = new Envelope(getGeometryNoCopy().getEnvelopeInternal());
+        mergedEnvelope.expandToInclude(r.getGeometryNoCopy().getEnvelopeInternal());
+        return get(gf.toGeometry(mergedEnvelope));
+    }
+
+    @Override
+    public int getType() {
+        return Value.GEOMETRY;
+    }
+
+    @Override
+    public String getSQL() {
+        // WKT does not hold Z or SRID with JTS 1.13. As getSQL is used to
+        // export the database, it should contain all object attributes. Moreover
+        // using bytes is faster than converting WKB to Geometry then to WKT.
+ return "X'" + StringUtils.convertBytesToHex(getBytesNoCopy()) + "'::Geometry"; + } + + @Override + protected int compareSecure(Value v, CompareMode mode) { + Geometry g = ((ValueGeometry) v).getGeometryNoCopy(); + return getGeometryNoCopy().compareTo(g); + } + + @Override + public String getString() { + return getWKT(); + } + + @Override + public long getPrecision() { + return 0; + } + + @Override + public int hashCode() { + return hashCode; + } + + @Override + public Object getObject() { + return getGeometry(); + } + + @Override + public byte[] getBytes() { + return getWKB(); + } + + @Override + public byte[] getBytesNoCopy() { + return getWKB(); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setObject(parameterIndex, getGeometryNoCopy()); + } + + @Override + public int getDisplaySize() { + return getWKT().length(); + } + + @Override + public int getMemory() { + return getWKB().length * 20 + 24; + } + + @Override + public boolean equals(Object other) { + // The JTS library only does half-way support for 3D coordinates, so + // their equals method only checks the first two coordinates. + return other instanceof ValueGeometry && + Arrays.equals(getWKB(), ((ValueGeometry) other).getWKB()); + } + + /** + * Get the value in Well-Known-Text format. + * + * @return the well-known-text + */ + public String getWKT() { + return new WKTWriter(3).write(getGeometryNoCopy()); + } + + /** + * Get the value in Well-Known-Binary format. + * + * @return the well-known-binary + */ + public byte[] getWKB() { + return bytes; + } + + @Override + public Value convertTo(int targetType, int precision, Mode mode, Object column, String[] enumerators) { + if (targetType == Value.JAVA_OBJECT) { + return this; + } + return super.convertTo(targetType, precision, mode, column, null); + } + + /** + * A visitor that checks if there is a Z coordinate. 
+ */ + static class ZVisitor implements CoordinateSequenceFilter { + + private boolean foundZ; + + public boolean isFoundZ() { + return foundZ; + } + + /** + * Performs an operation on a coordinate in a CoordinateSequence. + * + * @param coordinateSequence the object to which the filter is applied + * @param i the index of the coordinate to apply the filter to + */ + @Override + public void filter(CoordinateSequence coordinateSequence, int i) { + if (!Double.isNaN(coordinateSequence.getOrdinate(i, 2))) { + foundZ = true; + } + } + + @Override + public boolean isDone() { + return foundZ; + } + + @Override + public boolean isGeometryChanged() { + return false; + } + + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueInt.java b/modules/h2/src/main/java/org/h2/value/ValueInt.java new file mode 100644 index 0000000000000..208659224056c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueInt.java @@ -0,0 +1,181 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * Implementation of the INT data type. + */ +public class ValueInt extends Value { + + /** + * The precision in digits. + */ + public static final int PRECISION = 10; + + /** + * The maximum display size of an int. 
+ * Example: -2147483648 + */ + public static final int DISPLAY_SIZE = 11; + + private static final int STATIC_SIZE = 128; + // must be a power of 2 + private static final int DYNAMIC_SIZE = 256; + private static final ValueInt[] STATIC_CACHE = new ValueInt[STATIC_SIZE]; + private static final ValueInt[] DYNAMIC_CACHE = new ValueInt[DYNAMIC_SIZE]; + + private final int value; + + static { + for (int i = 0; i < STATIC_SIZE; i++) { + STATIC_CACHE[i] = new ValueInt(i); + } + } + + private ValueInt(int value) { + this.value = value; + } + + /** + * Get or create an int value for the given int. + * + * @param i the int + * @return the value + */ + public static ValueInt get(int i) { + if (i >= 0 && i < STATIC_SIZE) { + return STATIC_CACHE[i]; + } + ValueInt v = DYNAMIC_CACHE[i & (DYNAMIC_SIZE - 1)]; + if (v == null || v.value != i) { + v = new ValueInt(i); + DYNAMIC_CACHE[i & (DYNAMIC_SIZE - 1)] = v; + } + return v; + } + + @Override + public Value add(Value v) { + ValueInt other = (ValueInt) v; + return checkRange((long) value + (long) other.value); + } + + private static ValueInt checkRange(long x) { + if (x < Integer.MIN_VALUE || x > Integer.MAX_VALUE) { + throw DbException.get(ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_1, Long.toString(x)); + } + return ValueInt.get((int) x); + } + + @Override + public int getSignum() { + return Integer.signum(value); + } + + @Override + public Value negate() { + return checkRange(-(long) value); + } + + @Override + public Value subtract(Value v) { + ValueInt other = (ValueInt) v; + return checkRange((long) value - (long) other.value); + } + + @Override + public Value multiply(Value v) { + ValueInt other = (ValueInt) v; + return checkRange((long) value * (long) other.value); + } + + @Override + public Value divide(Value v) { + ValueInt other = (ValueInt) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueInt.get(value / other.value); + } + + @Override + public Value 
modulus(Value v) { + ValueInt other = (ValueInt) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueInt.get(value % other.value); + } + + @Override + public String getSQL() { + return getString(); + } + + @Override + public int getType() { + return Value.INT; + } + + @Override + public int getInt() { + return value; + } + + @Override + public long getLong() { + return value; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueInt v = (ValueInt) o; + return Integer.compare(value, v.value); + } + + @Override + public String getString() { + return String.valueOf(value); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int hashCode() { + return value; + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setInt(parameterIndex, value); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueInt && value == ((ValueInt) other).value; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueJavaObject.java b/modules/h2/src/main/java/org/h2/value/ValueJavaObject.java new file mode 100644 index 0000000000000..abfec0dbad515 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueJavaObject.java @@ -0,0 +1,209 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Types; + +import org.h2.engine.SysProperties; +import org.h2.store.DataHandler; +import org.h2.util.Bits; +import org.h2.util.JdbcUtils; +import org.h2.util.Utils; + +/** + * Implementation of the OBJECT data type. 
+ */ +public class ValueJavaObject extends ValueBytes { + + private static final ValueJavaObject EMPTY = + new ValueJavaObject(Utils.EMPTY_BYTES, null); + private final DataHandler dataHandler; + + protected ValueJavaObject(byte[] v, DataHandler dataHandler) { + super(v); + this.dataHandler = dataHandler; + } + + /** + * Get or create a java object value for the given byte array. + * Do not clone the data. + * + * @param javaObject the object + * @param b the byte array + * @param dataHandler provides the object serializer + * @return the value + */ + public static ValueJavaObject getNoCopy(Object javaObject, byte[] b, + DataHandler dataHandler) { + if (b != null && b.length == 0) { + return EMPTY; + } + ValueJavaObject obj; + if (SysProperties.serializeJavaObject) { + if (b == null) { + b = JdbcUtils.serialize(javaObject, dataHandler); + } + obj = new ValueJavaObject(b, dataHandler); + } else { + obj = new NotSerialized(javaObject, b, dataHandler); + } + if (b == null || b.length > SysProperties.OBJECT_CACHE_MAX_PER_ELEMENT_SIZE) { + return obj; + } + return (ValueJavaObject) Value.cache(obj); + } + + @Override + public int getType() { + return Value.JAVA_OBJECT; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + Object obj = JdbcUtils.deserialize(getBytesNoCopy(), getDataHandler()); + prep.setObject(parameterIndex, obj, Types.JAVA_OBJECT); + } + + /** + * Value which serializes java object only for I/O operations. + * Used when property {@link SysProperties#serializeJavaObject} is disabled. 
+ * + * @author Sergi Vladykin + */ + private static class NotSerialized extends ValueJavaObject { + + private Object javaObject; + + private int displaySize = -1; + + NotSerialized(Object javaObject, byte[] v, DataHandler dataHandler) { + super(v, dataHandler); + this.javaObject = javaObject; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setObject(parameterIndex, getObject(), Types.JAVA_OBJECT); + } + + @Override + public byte[] getBytesNoCopy() { + if (value == null) { + value = JdbcUtils.serialize(javaObject, null); + } + return value; + } + + @Override + protected int compareSecure(Value v, CompareMode mode) { + Object o1 = getObject(); + Object o2 = v.getObject(); + + boolean o1Comparable = o1 instanceof Comparable; + boolean o2Comparable = o2 instanceof Comparable; + + if (o1Comparable && o2Comparable && + Utils.haveCommonComparableSuperclass(o1.getClass(), o2.getClass())) { + @SuppressWarnings("unchecked") + Comparable c1 = (Comparable) o1; + return c1.compareTo(o2); + } + + // group by types + if (o1.getClass() != o2.getClass()) { + if (o1Comparable != o2Comparable) { + return o1Comparable ? -1 : 1; + } + return o1.getClass().getName().compareTo(o2.getClass().getName()); + } + + // compare hash codes + int h1 = hashCode(); + int h2 = v.hashCode(); + + if (h1 == h2) { + if (o1.equals(o2)) { + return 0; + } + return Bits.compareNotNullSigned(getBytesNoCopy(), v.getBytesNoCopy()); + } + + return h1 > h2 ? 
1 : -1; + } + + @Override + public String getString() { + String str = getObject().toString(); + if (displaySize == -1) { + displaySize = str.length(); + } + return str; + } + + @Override + public long getPrecision() { + return 0; + } + + @Override + public int hashCode() { + if (hash == 0) { + hash = getObject().hashCode(); + } + return hash; + } + + @Override + public Object getObject() { + if (javaObject == null) { + javaObject = JdbcUtils.deserialize(value, getDataHandler()); + } + return javaObject; + } + + @Override + public int getDisplaySize() { + if (displaySize == -1) { + displaySize = getString().length(); + } + return displaySize; + } + + @Override + public int getMemory() { + if (value == null) { + return DataType.getDataType(getType()).memory; + } + int mem = super.getMemory(); + if (javaObject != null) { + mem *= 2; + } + return mem; + } + + @Override + public boolean equals(Object other) { + if (!(other instanceof NotSerialized)) { + return false; + } + return getObject().equals(((NotSerialized) other).getObject()); + } + + @Override + public Value convertPrecision(long precision, boolean force) { + return this; + } + } + + @Override + protected DataHandler getDataHandler() { + return dataHandler; + } +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueLob.java b/modules/h2/src/main/java/org/h2/value/ValueLob.java new file mode 100644 index 0000000000000..a6c5436b360f4 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueLob.java @@ -0,0 +1,888 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.io.BufferedInputStream; +import java.io.ByteArrayInputStream; +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import java.nio.charset.StandardCharsets; +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.engine.Constants; +import org.h2.engine.Mode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.store.FileStoreInputStream; +import org.h2.store.FileStoreOutputStream; +import org.h2.store.RangeInputStream; +import org.h2.store.RangeReader; +import org.h2.store.fs.FileUtils; +import org.h2.util.Bits; +import org.h2.util.IOUtils; +import org.h2.util.MathUtils; +import org.h2.util.SmallLRUCache; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * Implementation of the BLOB and CLOB data types. Small objects are kept in + * memory and stored in the record. + * + * Large objects are stored in their own files. When large objects are set in a + * prepared statement, they are first stored as 'temporary' files. Later, when + * they are used in a record, and when the record is stored, the lob files are + * linked: the file is renamed using the file format (tableId).(objectId). There + * is one exception: large variables are stored in the file (-1).(objectId). + * + * When lobs are deleted, they are first renamed to a temp file, and if the + * delete operation is committed the file is deleted. + * + * Data compression is supported. 
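+ *
+ * For example (illustrative): with the getFileName() helper defined below, a
+ * lob linked to table 10 gets a ".t10" marker in its file name, a temporary
+ * lob (negative table id) gets ".temp" instead, and both end with
+ * Constants.SUFFIX_LOB_FILE.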
+ */ +public class ValueLob extends Value { + + private static void rangeCheckUnknown(long zeroBasedOffset, long length) { + if (zeroBasedOffset < 0) { + throw DbException.getInvalidValueException("offset", zeroBasedOffset + 1); + } + if (length < 0) { + throw DbException.getInvalidValueException("length", length); + } + } + + /** + * Create an input stream that is a subset of the given stream. + * + * @param inputStream the source input stream + * @param oneBasedOffset the offset (1 means no offset) + * @param length the length of the result, in bytes + * @param dataSize the length of the input, in bytes + * @return the smaller input stream + */ + static InputStream rangeInputStream(InputStream inputStream, long oneBasedOffset, long length, long dataSize) { + if (dataSize > 0) { + rangeCheck(oneBasedOffset - 1, length, dataSize); + } else { + rangeCheckUnknown(oneBasedOffset - 1, length); + } + try { + return new RangeInputStream(inputStream, oneBasedOffset - 1, length); + } catch (IOException e) { + throw DbException.getInvalidValueException("offset", oneBasedOffset); + } + } + + /** + * Create a reader that is a subset of the given reader. + * + * @param reader the input reader + * @param oneBasedOffset the offset (1 means no offset) + * @param length the length of the result, in characters + * @param dataSize the length of the input, in characters + * @return the smaller reader + */ + static Reader rangeReader(Reader reader, long oneBasedOffset, long length, long dataSize) { + if (dataSize > 0) { + rangeCheck(oneBasedOffset - 1, length, dataSize); + } else { + rangeCheckUnknown(oneBasedOffset - 1, length); + } + try { + return new RangeReader(reader, oneBasedOffset - 1, length); + } catch (IOException e) { + throw DbException.getInvalidValueException("offset", oneBasedOffset); + } + } + + /** + * This counter is used to calculate the next directory to store lobs. It is + * better than using a random number because fewer directories are created. 
+ */ + private static int dirCounter; + + private final int type; + private long precision; + private DataHandler handler; + private int tableId; + private int objectId; + private String fileName; + private boolean linked; + private byte[] small; + private int hash; + private boolean compressed; + private FileStore tempFile; + + private ValueLob(int type, DataHandler handler, String fileName, + int tableId, int objectId, boolean linked, long precision, + boolean compressed) { + this.type = type; + this.handler = handler; + this.fileName = fileName; + this.tableId = tableId; + this.objectId = objectId; + this.linked = linked; + this.precision = precision; + this.compressed = compressed; + } + + private ValueLob(int type, byte[] small) { + this.type = type; + this.small = small; + if (small != null) { + if (type == Value.BLOB) { + this.precision = small.length; + } else { + this.precision = getString().length(); + } + } + } + + private static ValueLob copy(ValueLob lob) { + ValueLob copy = new ValueLob(lob.type, lob.handler, lob.fileName, + lob.tableId, lob.objectId, lob.linked, lob.precision, lob.compressed); + copy.small = lob.small; + copy.hash = lob.hash; + return copy; + } + + /** + * Create a small lob using the given byte array. + * + * @param type the type (Value.BLOB or CLOB) + * @param small the byte array + * @return the lob value + */ + private static ValueLob createSmallLob(int type, byte[] small) { + return new ValueLob(type, small); + } + + private static String getFileName(DataHandler handler, int tableId, + int objectId) { + if (SysProperties.CHECK && tableId == 0 && objectId == 0) { + DbException.throwInternalError("0 LOB"); + } + String table = tableId < 0 ? ".temp" : ".t" + tableId; + return getFileNamePrefix(handler.getDatabasePath(), objectId) + + table + Constants.SUFFIX_LOB_FILE; + } + + /** + * Create a LOB value with the given parameters. 
+ * + * @param type the data type + * @param handler the file handler + * @param tableId the table object id + * @param objectId the object id + * @param precision the precision (length in elements) + * @param compression if compression is used + * @return the value object + */ + public static ValueLob openLinked(int type, DataHandler handler, + int tableId, int objectId, long precision, boolean compression) { + String fileName = getFileName(handler, tableId, objectId); + return new ValueLob(type, handler, fileName, tableId, objectId, + true/* linked */, precision, compression); + } + + /** + * Create a LOB value with the given parameters. + * + * @param type the data type + * @param handler the file handler + * @param tableId the table object id + * @param objectId the object id + * @param precision the precision (length in elements) + * @param compression if compression is used + * @param fileName the file name + * @return the value object + */ + public static ValueLob openUnlinked(int type, DataHandler handler, + int tableId, int objectId, long precision, boolean compression, + String fileName) { + return new ValueLob(type, handler, fileName, tableId, objectId, + false/* linked */, precision, compression); + } + + /** + * Create a CLOB value from a stream. 
+ * + * @param in the reader + * @param length the number of characters to read, or -1 for no limit + * @param handler the data handler + * @return the lob value + */ + private static ValueLob createClob(Reader in, long length, + DataHandler handler) { + try { + if (handler == null) { + String s = IOUtils.readStringAndClose(in, (int) length); + return createSmallLob(Value.CLOB, s.getBytes(StandardCharsets.UTF_8)); + } + boolean compress = handler.getLobCompressionAlgorithm(Value.CLOB) != null; + long remaining = Long.MAX_VALUE; + if (length >= 0 && length < remaining) { + remaining = length; + } + int len = getBufferSize(handler, compress, remaining); + char[] buff; + if (len >= Integer.MAX_VALUE) { + String data = IOUtils.readStringAndClose(in, -1); + buff = data.toCharArray(); + len = buff.length; + } else { + buff = new char[len]; + len = IOUtils.readFully(in, buff, len); + } + if (len <= handler.getMaxLengthInplaceLob()) { + byte[] small = new String(buff, 0, len).getBytes(StandardCharsets.UTF_8); + return ValueLob.createSmallLob(Value.CLOB, small); + } + ValueLob lob = new ValueLob(Value.CLOB, null); + lob.createFromReader(buff, len, in, remaining, handler); + return lob; + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + private static int getBufferSize(DataHandler handler, boolean compress, + long remaining) { + if (remaining < 0 || remaining > Integer.MAX_VALUE) { + remaining = Integer.MAX_VALUE; + } + int inplace = handler.getMaxLengthInplaceLob(); + long m = compress ? 
+ Constants.IO_BUFFER_SIZE_COMPRESS : Constants.IO_BUFFER_SIZE; + if (m < remaining && m <= inplace) { + // using "1L" to force long arithmetic + m = Math.min(remaining, inplace + 1L); + // the buffer size must be bigger than the inplace lob, otherwise we + // can't know if it must be stored in-place or not + m = MathUtils.roundUpLong(m, Constants.IO_BUFFER_SIZE); + } + m = Math.min(remaining, m); + m = MathUtils.convertLongToInt(m); + if (m < 0) { + m = Integer.MAX_VALUE; + } + return (int) m; + } + + private void createFromReader(char[] buff, int len, Reader in, + long remaining, DataHandler h) throws IOException { + try (FileStoreOutputStream out = initLarge(h)) { + boolean compress = h.getLobCompressionAlgorithm(Value.CLOB) != null; + while (true) { + precision += len; + byte[] b = new String(buff, 0, len).getBytes(StandardCharsets.UTF_8); + out.write(b, 0, b.length); + remaining -= len; + if (remaining <= 0) { + break; + } + len = getBufferSize(h, compress, remaining); + len = IOUtils.readFully(in, buff, len); + if (len == 0) { + break; + } + } + } + } + + private static String getFileNamePrefix(String path, int objectId) { + String name; + int f = objectId % SysProperties.LOB_FILES_PER_DIRECTORY; + if (f > 0) { + name = SysProperties.FILE_SEPARATOR + objectId; + } else { + name = ""; + } + objectId /= SysProperties.LOB_FILES_PER_DIRECTORY; + while (objectId > 0) { + f = objectId % SysProperties.LOB_FILES_PER_DIRECTORY; + name = SysProperties.FILE_SEPARATOR + f + + Constants.SUFFIX_LOBS_DIRECTORY + name; + objectId /= SysProperties.LOB_FILES_PER_DIRECTORY; + } + name = FileUtils.toRealPath(path + + Constants.SUFFIX_LOBS_DIRECTORY + name); + return name; + } + + private static int getNewObjectId(DataHandler h) { + String path = h.getDatabasePath(); + if ((path != null) && (path.length() == 0)) { + path = new File(Utils.getProperty("java.io.tmpdir", "."), + SysProperties.PREFIX_TEMP_FILE).getAbsolutePath(); + } + int newId = 0; + int lobsPerDir = 
SysProperties.LOB_FILES_PER_DIRECTORY; + while (true) { + String dir = getFileNamePrefix(path, newId); + String[] list = getFileList(h, dir); + int fileCount = 0; + boolean[] used = new boolean[lobsPerDir]; + for (String name : list) { + if (name.endsWith(Constants.SUFFIX_DB_FILE)) { + name = FileUtils.getName(name); + String n = name.substring(0, name.indexOf('.')); + int id; + try { + id = Integer.parseInt(n); + } catch (NumberFormatException e) { + id = -1; + } + if (id > 0) { + fileCount++; + used[id % lobsPerDir] = true; + } + } + } + int fileId = -1; + if (fileCount < lobsPerDir) { + for (int i = 1; i < lobsPerDir; i++) { + if (!used[i]) { + fileId = i; + break; + } + } + } + if (fileId > 0) { + newId += fileId; + invalidateFileList(h, dir); + break; + } + if (newId > Integer.MAX_VALUE / lobsPerDir) { + // this directory path is full: start from zero + newId = 0; + dirCounter = MathUtils.randomInt(lobsPerDir - 1) * lobsPerDir; + } else { + // calculate the directory. + // start with 1 (otherwise we don't know the number of + // directories). 
+ // it doesn't really matter what directory is used, it might as + // well be random (but that would generate more directories): + // int dirId = RandomUtils.nextInt(lobsPerDir - 1) + 1; + int dirId = (dirCounter++ / (lobsPerDir - 1)) + 1; + newId = newId * lobsPerDir; + newId += dirId * lobsPerDir; + } + } + return newId; + } + + private static void invalidateFileList(DataHandler h, String dir) { + SmallLRUCache cache = h.getLobFileListCache(); + if (cache != null) { + synchronized (cache) { + cache.remove(dir); + } + } + } + + private static String[] getFileList(DataHandler h, String dir) { + SmallLRUCache cache = h.getLobFileListCache(); + String[] list; + if (cache == null) { + list = FileUtils.newDirectoryStream(dir).toArray(new String[0]); + } else { + synchronized (cache) { + list = cache.get(dir); + if (list == null) { + list = FileUtils.newDirectoryStream(dir).toArray(new String[0]); + cache.put(dir, list); + } + } + } + return list; + } + + /** + * Create a BLOB value from a stream. 
+ * + * @param in the input stream + * @param length the number of bytes to read, or -1 for no limit + * @param handler the data handler + * @return the lob value + */ + private static ValueLob createBlob(InputStream in, long length, + DataHandler handler) { + try { + if (handler == null) { + byte[] data = IOUtils.readBytesAndClose(in, (int) length); + return createSmallLob(Value.BLOB, data); + } + long remaining = Long.MAX_VALUE; + boolean compress = handler.getLobCompressionAlgorithm(Value.BLOB) != null; + if (length >= 0 && length < remaining) { + remaining = length; + } + int len = getBufferSize(handler, compress, remaining); + byte[] buff; + if (len >= Integer.MAX_VALUE) { + buff = IOUtils.readBytesAndClose(in, -1); + len = buff.length; + } else { + buff = Utils.newBytes(len); + len = IOUtils.readFully(in, buff, len); + } + if (len <= handler.getMaxLengthInplaceLob()) { + byte[] small = Utils.copyBytes(buff, len); + return ValueLob.createSmallLob(Value.BLOB, small); + } + ValueLob lob = new ValueLob(Value.BLOB, null); + lob.createFromStream(buff, len, in, remaining, handler); + return lob; + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + private FileStoreOutputStream initLarge(DataHandler h) { + this.handler = h; + this.tableId = 0; + this.linked = false; + this.precision = 0; + this.small = null; + this.hash = 0; + String compressionAlgorithm = h.getLobCompressionAlgorithm(type); + this.compressed = compressionAlgorithm != null; + synchronized (h) { + String path = h.getDatabasePath(); + if ((path != null) && (path.length() == 0)) { + path = new File(Utils.getProperty("java.io.tmpdir", "."), + SysProperties.PREFIX_TEMP_FILE).getAbsolutePath(); + } + objectId = getNewObjectId(h); + fileName = getFileNamePrefix(path, objectId) + Constants.SUFFIX_TEMP_FILE; + tempFile = h.openFile(fileName, "rw", false); + tempFile.autoDelete(); + } + return new FileStoreOutputStream(tempFile, h, + compressionAlgorithm); + } + + 
private void createFromStream(byte[] buff, int len, InputStream in, + long remaining, DataHandler h) throws IOException { + try (FileStoreOutputStream out = initLarge(h)) { + boolean compress = h.getLobCompressionAlgorithm(Value.BLOB) != null; + while (true) { + precision += len; + out.write(buff, 0, len); + remaining -= len; + if (remaining <= 0) { + break; + } + len = getBufferSize(h, compress, remaining); + len = IOUtils.readFully(in, buff, len); + if (len <= 0) { + break; + } + } + } + } + + /** + * Convert a lob to another data type. The data is fully read in memory + * except when converting to BLOB or CLOB. + * + * @param t the new type + * @param precision the precision of the column to convert this value to. + * The special constant -1 is used to indicate that + * the precision plays no role when converting the value + * @param mode the database mode + * @param column the column (if any), used to improve the error message if conversion fails + * @param enumerators the ENUM datatype enumerators (if any), + * for dealing with ENUM conversions + * @return the converted value + */ + @Override + public Value convertTo(int t, int precision, Mode mode, Object column, String[] enumerators) { + if (t == type) { + return this; + } else if (t == Value.CLOB) { + return ValueLob.createClob(getReader(), -1, handler); + } else if (t == Value.BLOB) { + return ValueLob.createBlob(getInputStream(), -1, handler); + } + return super.convertTo(t, precision, mode, column, null); + } + + @Override + public boolean isLinkedToTable() { + return linked; + } + + /** + * Get the current file name where the lob is saved. 
+ * + * @return the file name or null + */ + public String getFileName() { + return fileName; + } + + @Override + public void remove() { + if (fileName != null) { + if (tempFile != null) { + tempFile.stopAutoDelete(); + tempFile = null; + } + deleteFile(handler, fileName); + } + } + + @Override + public Value copy(DataHandler h, int tabId) { + if (fileName == null) { + this.tableId = tabId; + return this; + } + if (linked) { + ValueLob copy = ValueLob.copy(this); + copy.objectId = getNewObjectId(h); + copy.tableId = tabId; + String live = getFileName(h, copy.tableId, copy.objectId); + copyFileTo(h, fileName, live); + copy.fileName = live; + copy.linked = true; + return copy; + } + if (!linked) { + this.tableId = tabId; + String live = getFileName(h, tableId, objectId); + if (tempFile != null) { + tempFile.stopAutoDelete(); + tempFile = null; + } + renameFile(h, fileName, live); + fileName = live; + linked = true; + } + return this; + } + + /** + * Get the current table id of this lob. + * + * @return the table id + */ + @Override + public int getTableId() { + return tableId; + } + + /** + * Get the current object id of this lob. + * + * @return the object id + */ + public int getObjectId() { + return objectId; + } + + @Override + public int getType() { + return type; + } + + @Override + public long getPrecision() { + return precision; + } + + @Override + public String getString() { + int len = precision > Integer.MAX_VALUE || precision == 0 ? 
+ Integer.MAX_VALUE : (int) precision; + try { + if (type == Value.CLOB) { + if (small != null) { + return new String(small, StandardCharsets.UTF_8); + } + return IOUtils.readStringAndClose(getReader(), len); + } + byte[] buff; + if (small != null) { + buff = small; + } else { + buff = IOUtils.readBytesAndClose(getInputStream(), len); + } + return StringUtils.convertBytesToHex(buff); + } catch (IOException e) { + throw DbException.convertIOException(e, fileName); + } + } + + @Override + public byte[] getBytes() { + if (type == CLOB) { + // convert hex to string + return super.getBytes(); + } + byte[] data = getBytesNoCopy(); + return Utils.cloneByteArray(data); + } + + @Override + public byte[] getBytesNoCopy() { + if (type == CLOB) { + // convert hex to string + return super.getBytesNoCopy(); + } + if (small != null) { + return small; + } + try { + return IOUtils.readBytesAndClose( + getInputStream(), Integer.MAX_VALUE); + } catch (IOException e) { + throw DbException.convertIOException(e, fileName); + } + } + + @Override + public int hashCode() { + if (hash == 0) { + if (precision > 4096) { + // TODO: should calculate the hash code when saving, and store + // it in the database file + return (int) (precision ^ (precision >>> 32)); + } + if (type == CLOB) { + hash = getString().hashCode(); + } else { + hash = Utils.getByteArrayHash(getBytes()); + } + } + return hash; + } + + @Override + protected int compareSecure(Value v, CompareMode mode) { + if (type == Value.CLOB) { + return Integer.signum(getString().compareTo(v.getString())); + } + byte[] v2 = v.getBytesNoCopy(); + return Bits.compareNotNullSigned(getBytesNoCopy(), v2); + } + + @Override + public Object getObject() { + if (type == Value.CLOB) { + return getReader(); + } + return getInputStream(); + } + + @Override + public Reader getReader() { + return IOUtils.getBufferedReader(getInputStream()); + } + + @Override + public Reader getReader(long oneBasedOffset, long length) { + return rangeReader(getReader(), 
oneBasedOffset, length, type == Value.CLOB ? precision : -1); + } + + @Override + public InputStream getInputStream() { + if (fileName == null) { + return new ByteArrayInputStream(small); + } + FileStore store = handler.openFile(fileName, "r", true); + boolean alwaysClose = SysProperties.lobCloseBetweenReads; + return new BufferedInputStream( + new FileStoreInputStream(store, handler, compressed, alwaysClose), + Constants.IO_BUFFER_SIZE); + } + + @Override + public InputStream getInputStream(long oneBasedOffset, long length) { + if (fileName == null) { + return super.getInputStream(oneBasedOffset, length); + } + FileStore store = handler.openFile(fileName, "r", true); + boolean alwaysClose = SysProperties.lobCloseBetweenReads; + InputStream inputStream = new BufferedInputStream( + new FileStoreInputStream(store, handler, compressed, alwaysClose), + Constants.IO_BUFFER_SIZE); + return rangeInputStream(inputStream, oneBasedOffset, length, store.length()); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + long p = getPrecision(); + if (p > Integer.MAX_VALUE || p <= 0) { + p = -1; + } + if (type == Value.BLOB) { + prep.setBinaryStream(parameterIndex, getInputStream(), (int) p); + } else { + prep.setCharacterStream(parameterIndex, getReader(), (int) p); + } + } + + @Override + public String getSQL() { + String s; + if (type == Value.CLOB) { + s = getString(); + return StringUtils.quoteStringSQL(s); + } + byte[] buff = getBytes(); + s = StringUtils.convertBytesToHex(buff); + return "X'" + s + "'"; + } + + @Override + public String getTraceSQL() { + if (small != null && getPrecision() <= SysProperties.MAX_TRACE_DATA_LENGTH) { + return getSQL(); + } + StringBuilder buff = new StringBuilder(); + if (type == Value.CLOB) { + buff.append("SPACE(").append(getPrecision()); + } else { + buff.append("CAST(REPEAT('00', ").append(getPrecision()).append(") AS BINARY"); + } + buff.append(" /* ").append(fileName).append(" */)"); + 
return buff.toString(); + } + + /** + * Get the data if this is a small lob value. + * + * @return the data + */ + @Override + public byte[] getSmall() { + return small; + } + + @Override + public int getDisplaySize() { + return MathUtils.convertLongToInt(getPrecision()); + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueLob && compareSecure((Value) other, null) == 0; + } + + /** + * Store the lob data to a file if the size of the buffer is larger than the + * maximum size for an in-place lob. + * + * @param h the data handler + */ + public void convertToFileIfRequired(DataHandler h) { + try { + if (small != null && small.length > h.getMaxLengthInplaceLob()) { + boolean compress = h.getLobCompressionAlgorithm(type) != null; + int len = getBufferSize(h, compress, Long.MAX_VALUE); + int tabId = tableId; + if (type == Value.BLOB) { + createFromStream( + Utils.newBytes(len), 0, getInputStream(), Long.MAX_VALUE, h); + } else { + createFromReader( + new char[len], 0, getReader(), Long.MAX_VALUE, h); + } + Value v2 = copy(h, tabId); + if (SysProperties.CHECK && v2 != this) { + DbException.throwInternalError(v2.toString()); + } + } + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + /** + * Check if this lob value is compressed. 
+ * + * @return true if it is + */ + public boolean isCompressed() { + return compressed; + } + + private static synchronized void deleteFile(DataHandler handler, + String fileName) { + // synchronize on the database, to avoid concurrent temp file creation / + // deletion / backup + synchronized (handler.getLobSyncObject()) { + FileUtils.delete(fileName); + } + } + + private static synchronized void renameFile(DataHandler handler, + String oldName, String newName) { + synchronized (handler.getLobSyncObject()) { + FileUtils.move(oldName, newName); + } + } + + private static void copyFileTo(DataHandler h, String sourceFileName, + String targetFileName) { + synchronized (h.getLobSyncObject()) { + try { + IOUtils.copyFiles(sourceFileName, targetFileName); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + } + + @Override + public int getMemory() { + if (small != null) { + return small.length + 104; + } + return 140; + } + + /** + * Create an independent copy of this temporary value. + * The file will not be deleted automatically. 
+ * + * @return the value + */ + @Override + public ValueLob copyToTemp() { + ValueLob lob; + if (type == CLOB) { + lob = ValueLob.createClob(getReader(), precision, handler); + } else { + lob = ValueLob.createBlob(getInputStream(), precision, handler); + } + return lob; + } + + @Override + public Value convertPrecision(long precision, boolean force) { + if (this.precision <= precision) { + return this; + } + ValueLob lob; + if (type == CLOB) { + lob = ValueLob.createClob(getReader(), precision, handler); + } else { + lob = ValueLob.createBlob(getInputStream(), precision, handler); + } + return lob; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueLobDb.java b/modules/h2/src/main/java/org/h2/value/ValueLobDb.java new file mode 100644 index 0000000000000..599f8deccaf10 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueLobDb.java @@ -0,0 +1,720 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.io.BufferedInputStream; +import java.io.BufferedReader; +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import java.nio.charset.StandardCharsets; +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.engine.Constants; +import org.h2.engine.Mode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.store.FileStoreInputStream; +import org.h2.store.FileStoreOutputStream; +import org.h2.store.LobStorageFrontend; +import org.h2.store.LobStorageInterface; +import org.h2.store.RangeReader; +import org.h2.store.fs.FileUtils; +import org.h2.util.Bits; +import org.h2.util.IOUtils; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * An implementation of the BLOB and CLOB data types. + * + * Small objects are kept in memory and stored in the record. + * Large objects are either stored in the database, or in temporary files. + */ +public class ValueLobDb extends Value implements Value.ValueClob, + Value.ValueBlob { + + private final int type; + private final long lobId; + private final byte[] hmac; + private final byte[] small; + private final DataHandler handler; + + /** + * For a BLOB, precision is length in bytes. + * For a CLOB, precision is length in chars. + */ + private final long precision; + + private final String fileName; + private final FileStore tempFile; + private final int tableId; + private int hash; + + //Arbonaut: 13.07.2016 + // Fix for recovery tool. 
+ + private boolean isRecoveryReference; + + private ValueLobDb(int type, DataHandler handler, int tableId, long lobId, + byte[] hmac, long precision) { + this.type = type; + this.handler = handler; + this.tableId = tableId; + this.lobId = lobId; + this.hmac = hmac; + this.precision = precision; + this.small = null; + this.fileName = null; + this.tempFile = null; + } + + private ValueLobDb(int type, byte[] small, long precision) { + this.type = type; + this.small = small; + this.precision = precision; + this.lobId = 0; + this.hmac = null; + this.handler = null; + this.fileName = null; + this.tempFile = null; + this.tableId = 0; + } + + /** + * Create a CLOB in a temporary file. + */ + private ValueLobDb(DataHandler handler, Reader in, long remaining) + throws IOException { + this.type = Value.CLOB; + this.handler = handler; + this.small = null; + this.lobId = 0; + this.hmac = null; + this.fileName = createTempLobFileName(handler); + this.tempFile = this.handler.openFile(fileName, "rw", false); + this.tempFile.autoDelete(); + + long tmpPrecision = 0; + try (FileStoreOutputStream out = new FileStoreOutputStream(tempFile, null, null)) { + char[] buff = new char[Constants.IO_BUFFER_SIZE]; + while (true) { + int len = getBufferSize(this.handler, false, remaining); + len = IOUtils.readFully(in, buff, len); + if (len == 0) { + break; + } + byte[] data = new String(buff, 0, len).getBytes(StandardCharsets.UTF_8); + out.write(data); + tmpPrecision += len; + } + } + this.precision = tmpPrecision; + this.tableId = 0; + } + + /** + * Create a BLOB in a temporary file. 
+ */ + private ValueLobDb(DataHandler handler, byte[] buff, int len, InputStream in, + long remaining) throws IOException { + this.type = Value.BLOB; + this.handler = handler; + this.small = null; + this.lobId = 0; + this.hmac = null; + this.fileName = createTempLobFileName(handler); + this.tempFile = this.handler.openFile(fileName, "rw", false); + this.tempFile.autoDelete(); + long tmpPrecision = 0; + boolean compress = this.handler.getLobCompressionAlgorithm(Value.BLOB) != null; + try (FileStoreOutputStream out = new FileStoreOutputStream(tempFile, null, null)) { + while (true) { + tmpPrecision += len; + out.write(buff, 0, len); + remaining -= len; + if (remaining <= 0) { + break; + } + len = getBufferSize(this.handler, compress, remaining); + len = IOUtils.readFully(in, buff, len); + if (len <= 0) { + break; + } + } + } + this.precision = tmpPrecision; + this.tableId = 0; + } + + private static String createTempLobFileName(DataHandler handler) + throws IOException { + String path = handler.getDatabasePath(); + if (path.length() == 0) { + path = SysProperties.PREFIX_TEMP_FILE; + } + return FileUtils.createTempFile(path, Constants.SUFFIX_TEMP_FILE, true, true); + } + + /** + * Create a LOB value. + * + * @param type the type + * @param handler the data handler + * @param tableId the table id + * @param id the lob id + * @param hmac the message authentication code + * @param precision the precision (number of bytes / characters) + * @return the value + */ + public static ValueLobDb create(int type, DataHandler handler, + int tableId, long id, byte[] hmac, long precision) { + return new ValueLobDb(type, handler, tableId, id, hmac, precision); + } + + /** + * Convert a lob to another data type. The data is fully read in memory + * except when converting to BLOB or CLOB. 
+ * + * @param t the new type + * @param precision the precision + * @param mode the mode + * @param column the column (if any), used to improve the error message if conversion fails + * @param enumerators the ENUM datatype enumerators (if any), + * for dealing with ENUM conversions + * @return the converted value + */ + @Override + public Value convertTo(int t, int precision, Mode mode, Object column, String[] enumerators) { + if (t == type) { + return this; + } else if (t == Value.CLOB) { + if (handler != null) { + return handler.getLobStorage(). + createClob(getReader(), -1); + } else if (small != null) { + return ValueLobDb.createSmallLob(t, small); + } + } else if (t == Value.BLOB) { + if (handler != null) { + return handler.getLobStorage(). + createBlob(getInputStream(), -1); + } else if (small != null) { + return ValueLobDb.createSmallLob(t, small); + } + } + return super.convertTo(t, precision, mode, column, null); + } + + @Override + public boolean isLinkedToTable() { + return small == null && + tableId >= 0; + } + + public boolean isStored() { + return small == null && fileName == null; + } + + @Override + public void remove() { + if (fileName != null) { + if (tempFile != null) { + tempFile.stopAutoDelete(); + } + // synchronize on the database, to avoid concurrent temp file + // creation / deletion / backup + synchronized (handler.getLobSyncObject()) { + FileUtils.delete(fileName); + } + } + if (handler != null) { + handler.getLobStorage().removeLob(this); + } + } + + @Override + public Value copy(DataHandler database, int tableId) { + if (small == null) { + return handler.getLobStorage().copyLob(this, tableId, getPrecision()); + } else if (small.length > database.getMaxLengthInplaceLob()) { + LobStorageInterface s = database.getLobStorage(); + Value v; + if (type == Value.BLOB) { + v = s.createBlob(getInputStream(), getPrecision()); + } else { + v = s.createClob(getReader(), getPrecision()); + } + Value v2 = v.copy(database, tableId); + v.remove(); 
+ return v2; + } + return this; + } + + /** + * Get the current table id of this lob. + * + * @return the table id + */ + @Override + public int getTableId() { + return tableId; + } + + @Override + public int getType() { + return type; + } + + @Override + public long getPrecision() { + return precision; + } + + @Override + public String getString() { + int len = precision > Integer.MAX_VALUE || precision == 0 ? + Integer.MAX_VALUE : (int) precision; + try { + if (type == Value.CLOB) { + if (small != null) { + return new String(small, StandardCharsets.UTF_8); + } + return IOUtils.readStringAndClose(getReader(), len); + } + byte[] buff; + if (small != null) { + buff = small; + } else { + buff = IOUtils.readBytesAndClose(getInputStream(), len); + } + return StringUtils.convertBytesToHex(buff); + } catch (IOException e) { + throw DbException.convertIOException(e, toString()); + } + } + + @Override + public byte[] getBytes() { + if (type == CLOB) { + // convert hex to string + return super.getBytes(); + } + byte[] data = getBytesNoCopy(); + return Utils.cloneByteArray(data); + } + + @Override + public byte[] getBytesNoCopy() { + if (type == CLOB) { + // convert hex to string + return super.getBytesNoCopy(); + } + if (small != null) { + return small; + } + try { + return IOUtils.readBytesAndClose(getInputStream(), Integer.MAX_VALUE); + } catch (IOException e) { + throw DbException.convertIOException(e, toString()); + } + } + + @Override + public int hashCode() { + if (hash == 0) { + if (precision > 4096) { + // TODO: should calculate the hash code when saving, and store + // it in the database file + return (int) (precision ^ (precision >>> 32)); + } + if (type == CLOB) { + hash = getString().hashCode(); + } else { + hash = Utils.getByteArrayHash(getBytes()); + } + } + return hash; + } + + @Override + protected int compareSecure(Value v, CompareMode mode) { + if (v instanceof ValueLobDb) { + ValueLobDb v2 = (ValueLobDb) v; + if (v == this) { + return 0; + } + if (lobId 
== v2.lobId && small == null && v2.small == null) { + return 0; + } + } + if (type == Value.CLOB) { + return Integer.signum(getString().compareTo(v.getString())); + } + byte[] v2 = v.getBytesNoCopy(); + return Bits.compareNotNullSigned(getBytesNoCopy(), v2); + } + + @Override + public Object getObject() { + if (type == Value.CLOB) { + return getReader(); + } + return getInputStream(); + } + + @Override + public Reader getReader() { + return IOUtils.getBufferedReader(getInputStream()); + } + + @Override + public Reader getReader(long oneBasedOffset, long length) { + return ValueLob.rangeReader(getReader(), oneBasedOffset, length, type == Value.CLOB ? precision : -1); + } + + @Override + public InputStream getInputStream() { + if (small != null) { + return new ByteArrayInputStream(small); + } else if (fileName != null) { + FileStore store = handler.openFile(fileName, "r", true); + boolean alwaysClose = SysProperties.lobCloseBetweenReads; + return new BufferedInputStream(new FileStoreInputStream(store, + handler, false, alwaysClose), Constants.IO_BUFFER_SIZE); + } + long byteCount = (type == Value.BLOB) ? precision : -1; + try { + return handler.getLobStorage().getInputStream(this, hmac, byteCount); + } catch (IOException e) { + throw DbException.convertIOException(e, toString()); + } + } + + @Override + public InputStream getInputStream(long oneBasedOffset, long length) { + long byteCount; + InputStream inputStream; + if (small != null) { + return super.getInputStream(oneBasedOffset, length); + } else if (fileName != null) { + FileStore store = handler.openFile(fileName, "r", true); + boolean alwaysClose = SysProperties.lobCloseBetweenReads; + byteCount = store.length(); + inputStream = new BufferedInputStream(new FileStoreInputStream(store, + handler, false, alwaysClose), Constants.IO_BUFFER_SIZE); + } else { + byteCount = (type == Value.BLOB) ? 
precision : -1; + try { + inputStream = handler.getLobStorage().getInputStream(this, hmac, byteCount); + } catch (IOException e) { + throw DbException.convertIOException(e, toString()); + } + } + return ValueLob.rangeInputStream(inputStream, oneBasedOffset, length, byteCount); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + long p = getPrecision(); + if (p > Integer.MAX_VALUE || p <= 0) { + p = -1; + } + if (type == Value.BLOB) { + prep.setBinaryStream(parameterIndex, getInputStream(), (int) p); + } else { + prep.setCharacterStream(parameterIndex, getReader(), (int) p); + } + } + + @Override + public String getSQL() { + String s; + if (type == Value.CLOB) { + s = getString(); + return StringUtils.quoteStringSQL(s); + } + byte[] buff = getBytes(); + s = StringUtils.convertBytesToHex(buff); + return "X'" + s + "'"; + } + + @Override + public String getTraceSQL() { + if (small != null && getPrecision() <= SysProperties.MAX_TRACE_DATA_LENGTH) { + return getSQL(); + } + StringBuilder buff = new StringBuilder(); + if (type == Value.CLOB) { + buff.append("SPACE(").append(getPrecision()); + } else { + buff.append("CAST(REPEAT('00', ").append(getPrecision()).append(") AS BINARY"); + } + buff.append(" /* table: ").append(tableId).append(" id: ") + .append(lobId).append(" */)"); + return buff.toString(); + } + + /** + * Get the data if this is a small lob value. + * + * @return the data + */ + @Override + public byte[] getSmall() { + return small; + } + + @Override + public int getDisplaySize() { + return MathUtils.convertLongToInt(getPrecision()); + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueLobDb && compareSecure((Value) other, null) == 0; + } + + @Override + public int getMemory() { + if (small != null) { + return small.length + 104; + } + return 140; + } + + /** + * Create an independent copy of this temporary value. + * The file will not be deleted automatically.
+ * + * @return the value + */ + @Override + public ValueLobDb copyToTemp() { + return this; + } + + /** + * Create an independent copy of this value, + * that will be bound to a result. + * + * @return the value (this for small objects) + */ + @Override + public ValueLobDb copyToResult() { + if (handler == null) { + return this; + } + LobStorageInterface s = handler.getLobStorage(); + if (s.isReadOnly()) { + return this; + } + return s.copyLob(this, LobStorageFrontend.TABLE_RESULT, + getPrecision()); + } + + public long getLobId() { + return lobId; + } + + @Override + public String toString() { + return "lob: " + fileName + " table: " + tableId + " id: " + lobId; + } + + /** + * Create a temporary CLOB value from a stream. + * + * @param in the reader + * @param length the number of characters to read, or -1 for no limit + * @param handler the data handler + * @return the lob value + */ + public static ValueLobDb createTempClob(Reader in, long length, + DataHandler handler) { + if (length >= 0) { + // Otherwise BufferedReader may try to read more data than needed and that + // blocks the network level + try { + in = new RangeReader(in, 0, length); + } catch (IOException e) { + throw DbException.convert(e); + } + } + BufferedReader reader; + if (in instanceof BufferedReader) { + reader = (BufferedReader) in; + } else { + reader = new BufferedReader(in, Constants.IO_BUFFER_SIZE); + } + try { + boolean compress = handler.getLobCompressionAlgorithm(Value.CLOB) != null; + long remaining = Long.MAX_VALUE; + if (length >= 0 && length < remaining) { + remaining = length; + } + int len = getBufferSize(handler, compress, remaining); + char[] buff; + if (len >= Integer.MAX_VALUE) { + String data = IOUtils.readStringAndClose(reader, -1); + buff = data.toCharArray(); + len = buff.length; + } else { + buff = new char[len]; + reader.mark(len); + len = IOUtils.readFully(reader, buff, len); + } + if (len <= handler.getMaxLengthInplaceLob()) { + byte[] small = new String(buff, 0, 
len).getBytes(StandardCharsets.UTF_8); + return ValueLobDb.createSmallLob(Value.CLOB, small, len); + } + reader.reset(); + return new ValueLobDb(handler, reader, remaining); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + /** + * Create a temporary BLOB value from a stream. + * + * @param in the input stream + * @param length the number of characters to read, or -1 for no limit + * @param handler the data handler + * @return the lob value + */ + public static ValueLobDb createTempBlob(InputStream in, long length, + DataHandler handler) { + try { + long remaining = Long.MAX_VALUE; + boolean compress = handler.getLobCompressionAlgorithm(Value.BLOB) != null; + if (length >= 0 && length < remaining) { + remaining = length; + } + int len = getBufferSize(handler, compress, remaining); + byte[] buff; + if (len >= Integer.MAX_VALUE) { + buff = IOUtils.readBytesAndClose(in, -1); + len = buff.length; + } else { + buff = Utils.newBytes(len); + len = IOUtils.readFully(in, buff, len); + } + if (len <= handler.getMaxLengthInplaceLob()) { + byte[] small = Utils.copyBytes(buff, len); + return ValueLobDb.createSmallLob(Value.BLOB, small, small.length); + } + return new ValueLobDb(handler, buff, len, in, remaining); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } + + private static int getBufferSize(DataHandler handler, boolean compress, + long remaining) { + if (remaining < 0 || remaining > Integer.MAX_VALUE) { + remaining = Integer.MAX_VALUE; + } + int inplace = handler.getMaxLengthInplaceLob(); + long m = compress ? 
Constants.IO_BUFFER_SIZE_COMPRESS + : Constants.IO_BUFFER_SIZE; + if (m < remaining && m <= inplace) { + // using "1L" to force long arithmetic because + // inplace could be Integer.MAX_VALUE + m = Math.min(remaining, inplace + 1L); + // the buffer size must be bigger than the inplace lob, otherwise we + // can't know if it must be stored in-place or not + m = MathUtils.roundUpLong(m, Constants.IO_BUFFER_SIZE); + } + m = Math.min(remaining, m); + m = MathUtils.convertLongToInt(m); + if (m < 0) { + m = Integer.MAX_VALUE; + } + return (int) m; + } + + @Override + public Value convertPrecision(long precision, boolean force) { + if (this.precision <= precision) { + return this; + } + ValueLobDb lob; + if (type == CLOB) { + if (handler == null) { + try { + int p = MathUtils.convertLongToInt(precision); + String s = IOUtils.readStringAndClose(getReader(), p); + byte[] data = s.getBytes(StandardCharsets.UTF_8); + lob = ValueLobDb.createSmallLob(type, data, s.length()); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } else { + lob = ValueLobDb.createTempClob(getReader(), precision, handler); + } + } else { + if (handler == null) { + try { + int p = MathUtils.convertLongToInt(precision); + byte[] data = IOUtils.readBytesAndClose(getInputStream(), p); + lob = ValueLobDb.createSmallLob(type, data, data.length); + } catch (IOException e) { + throw DbException.convertIOException(e, null); + } + } else { + lob = ValueLobDb.createTempBlob(getInputStream(), precision, handler); + } + } + return lob; + } + + /** + * Create a LOB object that fits in memory. 
+ * + * @param type the type (Value.BLOB or CLOB) + * @param small the byte array + * @return the LOB + */ + public static Value createSmallLob(int type, byte[] small) { + int precision; + if (type == Value.CLOB) { + precision = new String(small, StandardCharsets.UTF_8).length(); + } else { + precision = small.length; + } + return createSmallLob(type, small, precision); + } + + /** + * Create a LOB object that fits in memory. + * + * @param type the type (Value.BLOB or CLOB) + * @param small the byte array + * @param precision the precision + * @return the LOB + */ + public static ValueLobDb createSmallLob(int type, byte[] small, + long precision) { + return new ValueLobDb(type, small, precision); + } + + + public void setRecoveryReference(boolean isRecoveryReference) { + this.isRecoveryReference = isRecoveryReference; + } + + public boolean isRecoveryReference() { + return isRecoveryReference; + } +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueLong.java b/modules/h2/src/main/java/org/h2/value/ValueLong.java new file mode 100644 index 0000000000000..57e67403c3aa1 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueLong.java @@ -0,0 +1,235 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.math.BigDecimal; +import java.math.BigInteger; +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * Implementation of the BIGINT data type. + */ +public class ValueLong extends Value { + + /** + * The smallest {@code ValueLong} value. + */ + public static final ValueLong MIN = get(Long.MIN_VALUE); + + /** + * The largest {@code ValueLong} value. + */ + public static final ValueLong MAX = get(Long.MAX_VALUE); + + /** + * The largest Long value, as a BigInteger. 
+ */ + public static final BigInteger MAX_BI = BigInteger.valueOf(Long.MAX_VALUE); + + /** + * The smallest Long value, as a BigDecimal. + */ + public static final BigDecimal MIN_BD = BigDecimal.valueOf(Long.MIN_VALUE); + + /** + * The precision in digits. + */ + public static final int PRECISION = 19; + + /** + * The maximum display size of a long. + * Example: 9223372036854775808 + */ + public static final int DISPLAY_SIZE = 20; + + private static final BigInteger MIN_BI = BigInteger.valueOf(Long.MIN_VALUE); + private static final int STATIC_SIZE = 100; + private static final ValueLong[] STATIC_CACHE; + + private final long value; + + static { + STATIC_CACHE = new ValueLong[STATIC_SIZE]; + for (int i = 0; i < STATIC_SIZE; i++) { + STATIC_CACHE[i] = new ValueLong(i); + } + } + + private ValueLong(long value) { + this.value = value; + } + + @Override + public Value add(Value v) { + ValueLong other = (ValueLong) v; + long result = value + other.value; + int sv = Long.signum(value); + int so = Long.signum(other.value); + int sr = Long.signum(result); + // if the operands have different signs overflow can not occur + // if the operands have the same sign, + // and the result has a different sign, then it is an overflow + // it can not be an overflow when one of the operands is 0 + if (sv != so || sr == so || sv == 0 || so == 0) { + return ValueLong.get(result); + } + throw getOverflow(); + } + + @Override + public int getSignum() { + return Long.signum(value); + } + + @Override + public Value negate() { + if (value == Long.MIN_VALUE) { + throw getOverflow(); + } + return ValueLong.get(-value); + } + + private DbException getOverflow() { + return DbException.get(ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_1, + Long.toString(value)); + } + + @Override + public Value subtract(Value v) { + ValueLong other = (ValueLong) v; + int sv = Long.signum(value); + int so = Long.signum(other.value); + // if the operands have the same sign, then overflow can not occur + // if the second 
operand is 0, then overflow can not occur + if (sv == so || so == 0) { + return ValueLong.get(value - other.value); + } + // now, if the other value is Long.MIN_VALUE, it must be an overflow + // x - Long.MIN_VALUE overflows for x>=0 + return add(other.negate()); + } + + private static boolean isInteger(long a) { + return a >= Integer.MIN_VALUE && a <= Integer.MAX_VALUE; + } + + @Override + public Value multiply(Value v) { + ValueLong other = (ValueLong) v; + long result = value * other.value; + if (value == 0 || value == 1 || other.value == 0 || other.value == 1) { + return ValueLong.get(result); + } + if (isInteger(value) && isInteger(other.value)) { + return ValueLong.get(result); + } + // just checking one case is not enough: Long.MIN_VALUE * -1 + // probably this is correct but I'm not sure + // if (result / value == other.value && result / other.value == value) { + // return ValueLong.get(result); + //} + BigInteger bv = BigInteger.valueOf(value); + BigInteger bo = BigInteger.valueOf(other.value); + BigInteger br = bv.multiply(bo); + if (br.compareTo(MIN_BI) < 0 || br.compareTo(MAX_BI) > 0) { + throw getOverflow(); + } + return ValueLong.get(br.longValue()); + } + + @Override + public Value divide(Value v) { + ValueLong other = (ValueLong) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueLong.get(value / other.value); + } + + @Override + public Value modulus(Value v) { + ValueLong other = (ValueLong) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueLong.get(this.value % other.value); + } + + @Override + public String getSQL() { + return getString(); + } + + @Override + public int getType() { + return Value.LONG; + } + + @Override + public long getLong() { + return value; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueLong v = (ValueLong) o; + return Long.compare(value, v.value); + } + + @Override + 
public String getString() { + return String.valueOf(value); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int hashCode() { + return (int) (value ^ (value >> 32)); + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setLong(parameterIndex, value); + } + + /** + * Get or create a long value for the given long. + * + * @param i the long + * @return the value + */ + public static ValueLong get(long i) { + if (i >= 0 && i < STATIC_SIZE) { + return STATIC_CACHE[(int) i]; + } + return (ValueLong) Value.cache(new ValueLong(i)); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueLong && value == ((ValueLong) other).value; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueNull.java b/modules/h2/src/main/java/org/h2/value/ValueNull.java new file mode 100644 index 0000000000000..1377e662640f0 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueNull.java @@ -0,0 +1,176 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.io.InputStream; +import java.io.Reader; +import java.math.BigDecimal; +import java.sql.Date; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Time; +import java.sql.Timestamp; + +import org.h2.engine.Mode; +import org.h2.message.DbException; + +/** + * Implementation of NULL. NULL is not a regular data type. + */ +public class ValueNull extends Value { + + /** + * The main NULL instance. + */ + public static final ValueNull INSTANCE = new ValueNull(); + + /** + * This special instance is used as a marker for deleted entries in a map. 
+ * It should not be used anywhere else. + */ + public static final ValueNull DELETED = new ValueNull(); + + /** + * The precision of NULL. + */ + private static final int PRECISION = 1; + + /** + * The display size of the textual representation of NULL. + */ + private static final int DISPLAY_SIZE = 4; + + private ValueNull() { + // don't allow construction + } + + @Override + public String getSQL() { + return "NULL"; + } + + @Override + public int getType() { + return Value.NULL; + } + + @Override + public String getString() { + return null; + } + + @Override + public boolean getBoolean() { + return false; + } + + @Override + public Date getDate() { + return null; + } + + @Override + public Time getTime() { + return null; + } + + @Override + public Timestamp getTimestamp() { + return null; + } + + @Override + public byte[] getBytes() { + return null; + } + + @Override + public byte getByte() { + return 0; + } + + @Override + public short getShort() { + return 0; + } + + @Override + public BigDecimal getBigDecimal() { + return null; + } + + @Override + public double getDouble() { + return 0.0; + } + + @Override + public float getFloat() { + return 0.0F; + } + + @Override + public int getInt() { + return 0; + } + + @Override + public long getLong() { + return 0; + } + + @Override + public InputStream getInputStream() { + return null; + } + + @Override + public Reader getReader() { + return null; + } + + @Override + public Value convertTo(int type, int precision, Mode mode, Object column, String[] enumerators) { + return this; + } + + @Override + protected int compareSecure(Value v, CompareMode mode) { + throw DbException.throwInternalError("compare null"); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int hashCode() { + return 0; + } + + @Override + public Object getObject() { + return null; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + 
prep.setNull(parameterIndex, DataType.convertTypeToSQLType(Value.NULL)); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + return other == this; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueResultSet.java b/modules/h2/src/main/java/org/h2/value/ValueResultSet.java new file mode 100644 index 0000000000000..526b106643d3e --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueResultSet.java @@ -0,0 +1,163 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import org.h2.message.DbException; +import org.h2.tools.SimpleResultSet; +import org.h2.util.StatementBuilder; + +/** + * Implementation of the RESULT_SET data type. + */ +public class ValueResultSet extends Value { + + private final ResultSet result; + + private ValueResultSet(ResultSet rs) { + this.result = rs; + } + + /** + * Create a result set value for the given result set. + * The result set will be wrapped. + * + * @param rs the result set + * @return the value + */ + public static ValueResultSet get(ResultSet rs) { + return new ValueResultSet(rs); + } + + /** + * Create a result set value for the given result set. The result set will + * be fully read in memory. The original result set is not closed. 
+ * + * @param rs the result set + * @param maxrows the maximum number of rows to read (0 to just read the + * meta data) + * @return the value + */ + public static ValueResultSet getCopy(ResultSet rs, int maxrows) { + try { + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + SimpleResultSet simple = new SimpleResultSet(); + simple.setAutoClose(false); + ValueResultSet val = new ValueResultSet(simple); + for (int i = 0; i < columnCount; i++) { + String name = meta.getColumnLabel(i + 1); + int sqlType = meta.getColumnType(i + 1); + int precision = meta.getPrecision(i + 1); + int scale = meta.getScale(i + 1); + simple.addColumn(name, sqlType, precision, scale); + } + for (int i = 0; i < maxrows && rs.next(); i++) { + Object[] list = new Object[columnCount]; + for (int j = 0; j < columnCount; j++) { + list[j] = rs.getObject(j + 1); + } + simple.addRow(list); + } + return val; + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + public int getType() { + return Value.RESULT_SET; + } + + @Override + public long getPrecision() { + return Integer.MAX_VALUE; + } + + @Override + public int getDisplaySize() { + // it doesn't make sense to calculate it + return Integer.MAX_VALUE; + } + + @Override + public String getString() { + try { + StatementBuilder buff = new StatementBuilder("("); + result.beforeFirst(); + ResultSetMetaData meta = result.getMetaData(); + int columnCount = meta.getColumnCount(); + for (int i = 0; result.next(); i++) { + if (i > 0) { + buff.append(", "); + } + buff.append('('); + buff.resetCount(); + for (int j = 0; j < columnCount; j++) { + buff.appendExceptFirst(", "); + int t = DataType.getValueTypeFromResultSet(meta, j + 1); + Value v = DataType.readValue(null, result, j + 1, t); + buff.append(v.getString()); + } + buff.append(')'); + } + result.beforeFirst(); + return buff.append(')').toString(); + } catch (SQLException e) { + throw DbException.convert(e); + } + } + + @Override + 
protected int compareSecure(Value v, CompareMode mode) { + return this == v ? 0 : super.toString().compareTo(v.toString()); + } + + @Override + public boolean equals(Object other) { + return other == this; + } + + @Override + public int hashCode() { + return 0; + } + + @Override + public Object getObject() { + return result; + } + + @Override + public ResultSet getResultSet() { + return result; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) { + throw throwUnsupportedExceptionForType("PreparedStatement.set"); + } + + @Override + public String getSQL() { + return ""; + } + + @Override + public Value convertPrecision(long precision, boolean force) { + if (!force) { + return this; + } + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + return ValueResultSet.get(rs); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueShort.java b/modules/h2/src/main/java/org/h2/value/ValueShort.java new file mode 100644 index 0000000000000..d45960de9b36c --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueShort.java @@ -0,0 +1,162 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; + +/** + * Implementation of the SMALLINT data type. + */ +public class ValueShort extends Value { + + /** + * The precision in digits. + */ + static final int PRECISION = 5; + + /** + * The maximum display size of a short. 
+ * Example: -32768 + */ + static final int DISPLAY_SIZE = 6; + + private final short value; + + private ValueShort(short value) { + this.value = value; + } + + @Override + public Value add(Value v) { + ValueShort other = (ValueShort) v; + return checkRange(value + other.value); + } + + private static ValueShort checkRange(int x) { + if (x < Short.MIN_VALUE || x > Short.MAX_VALUE) { + throw DbException.get(ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_1, + Integer.toString(x)); + } + return ValueShort.get((short) x); + } + + @Override + public int getSignum() { + return Integer.signum(value); + } + + @Override + public Value negate() { + return checkRange(-(int) value); + } + + @Override + public Value subtract(Value v) { + ValueShort other = (ValueShort) v; + return checkRange(value - other.value); + } + + @Override + public Value multiply(Value v) { + ValueShort other = (ValueShort) v; + return checkRange(value * other.value); + } + + @Override + public Value divide(Value v) { + ValueShort other = (ValueShort) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueShort.get((short) (value / other.value)); + } + + @Override + public Value modulus(Value v) { + ValueShort other = (ValueShort) v; + if (other.value == 0) { + throw DbException.get(ErrorCode.DIVISION_BY_ZERO_1, getSQL()); + } + return ValueShort.get((short) (value % other.value)); + } + + @Override + public String getSQL() { + return getString(); + } + + @Override + public int getType() { + return Value.SHORT; + } + + @Override + public short getShort() { + return value; + } + + @Override + public int getInt() { + return value; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueShort v = (ValueShort) o; + return Integer.compare(value, v.value); + } + + @Override + public String getString() { + return String.valueOf(value); + } + + @Override + public long getPrecision() { + return PRECISION; + } + + @Override + public int 
hashCode() { + return value; + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setShort(parameterIndex, value); + } + + /** + * Get or create a short value for the given short. + * + * @param i the short + * @return the value + */ + public static ValueShort get(short i) { + return (ValueShort) Value.cache(new ValueShort(i)); + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueShort && value == ((ValueShort) other).value; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueString.java b/modules/h2/src/main/java/org/h2/value/ValueString.java new file mode 100644 index 0000000000000..a60802fb1b4b6 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueString.java @@ -0,0 +1,169 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.engine.SysProperties; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; + +/** + * Implementation of the VARCHAR data type. + * It is also the base class for other ValueString* classes. + */ +public class ValueString extends Value { + + private static final ValueString EMPTY = new ValueString(""); + + /** + * The string data. 
+ */ + protected final String value; + + protected ValueString(String value) { + this.value = value; + } + + @Override + public String getSQL() { + return StringUtils.quoteStringSQL(value); + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueString + && value.equals(((ValueString) other).value); + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + // compatibility: the other object could be another type + ValueString v = (ValueString) o; + return mode.compareString(value, v.value, false); + } + + @Override + public String getString() { + return value; + } + + @Override + public long getPrecision() { + return value.length(); + } + + @Override + public Object getObject() { + return value; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setString(parameterIndex, value); + } + + @Override + public int getDisplaySize() { + return value.length(); + } + + @Override + public int getMemory() { + return value.length() * 2 + 48; + } + + @Override + public Value convertPrecision(long precision, boolean force) { + if (precision == 0 || value.length() <= precision) { + return this; + } + int p = MathUtils.convertLongToInt(precision); + return getNew(value.substring(0, p)); + } + + @Override + public int hashCode() { + // TODO hash performance: could build a quicker hash + // by hashing the size and a few characters + return value.hashCode(); + + // proposed code: +// private int hash = 0; +// +// public int hashCode() { +// int h = hash; +// if (h == 0) { +// String s = value; +// int l = s.length(); +// if (l > 0) { +// if (l < 16) +// h = s.hashCode(); +// else { +// h = l; +// for (int i = 1; i <= l; i <<= 1) +// h = 31 * +// (31 * h + s.charAt(i - 1)) + +// s.charAt(l - i); +// } +// hash = h; +// } +// } +// return h; +// } + + } + + @Override + public int getType() { + return Value.STRING; + } + + /** + * Get or create a string value for the given 
string. + * + * @param s the string + * @return the value + */ + public static Value get(String s) { + return get(s, false); + } + + /** + * Get or create a string value for the given string. + * + * @param s the string + * @param treatEmptyStringsAsNull whether or not to treat empty strings as + * NULL + * @return the value + */ + public static Value get(String s, boolean treatEmptyStringsAsNull) { + if (s.isEmpty()) { + return treatEmptyStringsAsNull ? ValueNull.INSTANCE : EMPTY; + } + ValueString obj = new ValueString(StringUtils.cache(s)); + if (s.length() > SysProperties.OBJECT_CACHE_MAX_PER_ELEMENT_SIZE) { + return obj; + } + return Value.cache(obj); + // this saves memory, but is really slow + // return new ValueString(s.intern()); + } + + /** + * Create a new String value of the current class. + * This method is meant to be overridden by subclasses. + * + * @param s the string + * @return the value + */ + protected Value getNew(String s) { + return ValueString.get(s); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueStringFixed.java b/modules/h2/src/main/java/org/h2/value/ValueStringFixed.java new file mode 100644 index 0000000000000..6e44e2a7fde78 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueStringFixed.java @@ -0,0 +1,127 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.util.Arrays; +import org.h2.engine.Mode; +import org.h2.engine.SysProperties; +import org.h2.util.StringUtils; + +/** + * Implementation of the CHAR data type. + */ +public class ValueStringFixed extends ValueString { + + /** + * Special value for the precision in {@link #get(String, int, Mode)} to indicate that the value + * should not be trimmed. 
+ */ + public static final int PRECISION_DO_NOT_TRIM = Integer.MIN_VALUE; + + /** + * Special value for the precision in {@link #get(String, int, Mode)} to indicate + * that the default behaviour of trimming the value should apply. + */ + public static final int PRECISION_TRIM = -1; + + private static final ValueStringFixed EMPTY = new ValueStringFixed(""); + + protected ValueStringFixed(String value) { + super(value); + } + + private static String trimRight(String s) { + return trimRight(s, 0); + } + private static String trimRight(String s, int minLength) { + int endIndex = s.length() - 1; + int i = endIndex; + while (i >= minLength && s.charAt(i) == ' ') { + i--; + } + s = i == endIndex ? s : s.substring(0, i + 1); + return s; + } + + private static String rightPadWithSpaces(String s, int length) { + int pad = length - s.length(); + if (pad <= 0) { + return s; + } + char[] res = new char[length]; + s.getChars(0, s.length(), res, 0); + Arrays.fill(res, s.length(), length, ' '); + return new String(res); + } + + @Override + public int getType() { + return Value.STRING_FIXED; + } + + /** + * Get or create a fixed length string value for the given string. + * Spaces at the end of the string will be removed. + * + * @param s the string + * @return the value + */ + public static ValueStringFixed get(String s) { + // Use the special precision constant PRECISION_TRIM to indicate + // default H2 behaviour of trimming the value. + return get(s, PRECISION_TRIM, null); + } + + /** + * Get or create a fixed length string value for the given string.

    + * This method will use a {@link Mode}-specific conversion when mode is not + * null. + * Otherwise it will use the default H2 behaviour of trimming the given string if + * precision is not {@link #PRECISION_DO_NOT_TRIM}. + * + * @param s the string + * @param precision if the {@link Mode#padFixedLengthStrings} indicates that strings should + * be padded, this defines the overall length of the (potentially padded) string. + * If the special constant {@link #PRECISION_DO_NOT_TRIM} is used the value will + * not be trimmed. + * @param mode the database mode + * @return the value + */ + public static ValueStringFixed get(String s, int precision, Mode mode) { + // Should fixed strings be padded? + if (mode != null && mode.padFixedLengthStrings) { + if (precision == Integer.MAX_VALUE) { + // CHAR without a length specification is identical to CHAR(1) + precision = 1; + } + if (s.length() < precision) { + // We have to pad + s = rightPadWithSpaces(s, precision); + } else { + // We should trim, because inserting 'A ' into a CHAR(1) is possible! + s = trimRight(s, precision); + } + } else if (precision != PRECISION_DO_NOT_TRIM) { + // Default behaviour of H2 + s = trimRight(s); + } + if (s.length() == 0) { + return EMPTY; + } + ValueStringFixed obj = new ValueStringFixed(StringUtils.cache(s)); + if (s.length() > SysProperties.OBJECT_CACHE_MAX_PER_ELEMENT_SIZE) { + return obj; + } + return (ValueStringFixed) Value.cache(obj); + } + + @Override + protected ValueString getNew(String s) { + return ValueStringFixed.get(s); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueStringIgnoreCase.java b/modules/h2/src/main/java/org/h2/value/ValueStringIgnoreCase.java new file mode 100644 index 0000000000000..32a11e943b7e5 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueStringIgnoreCase.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.value; + +import org.h2.engine.SysProperties; +import org.h2.util.StringUtils; + +/** + * Implementation of the VARCHAR_IGNORECASE data type. + */ +public class ValueStringIgnoreCase extends ValueString { + + private static final ValueStringIgnoreCase EMPTY = + new ValueStringIgnoreCase(""); + private int hash; + + protected ValueStringIgnoreCase(String value) { + super(value); + } + + @Override + public int getType() { + return Value.STRING_IGNORECASE; + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueStringIgnoreCase v = (ValueStringIgnoreCase) o; + return mode.compareString(value, v.value, true); + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueString + && value.equalsIgnoreCase(((ValueString) other).value); + } + + @Override + public int hashCode() { + if (hash == 0) { + // this is locale sensitive + hash = value.toUpperCase().hashCode(); + } + return hash; + } + + @Override + public String getSQL() { + return "CAST(" + StringUtils.quoteStringSQL(value) + " AS VARCHAR_IGNORECASE)"; + } + + /** + * Get or create a case insensitive string value for the given string. + * The value will have the same case as the passed string. 
+ * + * @param s the string + * @return the value + */ + public static ValueStringIgnoreCase get(String s) { + if (s.length() == 0) { + return EMPTY; + } + ValueStringIgnoreCase obj = new ValueStringIgnoreCase(StringUtils.cache(s)); + if (s.length() > SysProperties.OBJECT_CACHE_MAX_PER_ELEMENT_SIZE) { + return obj; + } + ValueStringIgnoreCase cache = (ValueStringIgnoreCase) Value.cache(obj); + // the cached object could have the wrong case + // (it would still be 'equal', but we don't like to store it) + if (cache.value.equals(s)) { + return cache; + } + return obj; + } + + @Override + protected ValueString getNew(String s) { + return ValueStringIgnoreCase.get(s); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueTime.java b/modules/h2/src/main/java/org/h2/value/ValueTime.java new file mode 100644 index 0000000000000..7f4732bb17156 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueTime.java @@ -0,0 +1,244 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Time; +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.util.DateTimeUtils; + +/** + * Implementation of the TIME data type. + */ +public class ValueTime extends Value { + + /** + * The default precision and display size of the textual representation of a time. + * Example: 10:00:00 + */ + public static final int DEFAULT_PRECISION = 8; + + /** + * The maximum precision and display size of the textual representation of a time. + * Example: 10:00:00.123456789 + */ + public static final int MAXIMUM_PRECISION = 18; + + /** + * The default scale for time. + */ + static final int DEFAULT_SCALE = 0; + + /** + * The maximum scale for time. 
+ */ + public static final int MAXIMUM_SCALE = 9; + + /** + * Get display size for the specified scale. + * + * @param scale scale + * @return display size + */ + public static int getDisplaySize(int scale) { + return scale == 0 ? 8 : 9 + scale; + } + + /** + * Nanoseconds since midnight + */ + private final long nanos; + + /** + * @param nanos nanoseconds since midnight + */ + private ValueTime(long nanos) { + this.nanos = nanos; + } + + /** + * Get or create a time value. + * + * @param nanos the nanoseconds since midnight + * @return the value + */ + public static ValueTime fromNanos(long nanos) { + if (!SysProperties.UNLIMITED_TIME_RANGE) { + if (nanos < 0L || nanos >= DateTimeUtils.NANOS_PER_DAY) { + StringBuilder builder = new StringBuilder(); + DateTimeUtils.appendTime(builder, nanos); + throw DbException.get(ErrorCode.INVALID_DATETIME_CONSTANT_2, + "TIME", builder.toString()); + } + } + return (ValueTime) Value.cache(new ValueTime(nanos)); + } + + /** + * Get or create a time value for the given time. + * + * @param time the time + * @return the value + */ + public static ValueTime get(Time time) { + return fromNanos(DateTimeUtils.nanosFromDate(time.getTime())); + } + + /** + * Calculate the time value from a given time in + * milliseconds in UTC. + * + * @param ms the milliseconds + * @return the value + */ + public static ValueTime fromMillis(long ms) { + return fromNanos(DateTimeUtils.nanosFromDate(ms)); + } + + /** + * Parse a string to a ValueTime. 
+ * + * @param s the string to parse + * @return the time + */ + public static ValueTime parse(String s) { + try { + return fromNanos(DateTimeUtils.parseTimeNanos(s, 0, s.length(), false)); + } catch (Exception e) { + throw DbException.get(ErrorCode.INVALID_DATETIME_CONSTANT_2, + e, "TIME", s); + } + } + + /** + * @return nanoseconds since midnight + */ + public long getNanos() { + return nanos; + } + + @Override + public Time getTime() { + return DateTimeUtils.convertNanoToTime(nanos); + } + + @Override + public int getType() { + return Value.TIME; + } + + @Override + public String getString() { + StringBuilder buff = new StringBuilder(MAXIMUM_PRECISION); + DateTimeUtils.appendTime(buff, nanos); + return buff.toString(); + } + + @Override + public String getSQL() { + return "TIME '" + getString() + "'"; + } + + @Override + public long getPrecision() { + return MAXIMUM_PRECISION; + } + + @Override + public int getDisplaySize() { + return MAXIMUM_PRECISION; + } + + @Override + public boolean checkPrecision(long precision) { + // TIME data type does not have precision parameter + return true; + } + + @Override + public Value convertScale(boolean onlyToSmallerScale, int targetScale) { + if (targetScale >= MAXIMUM_SCALE) { + return this; + } + if (targetScale < 0) { + throw DbException.getInvalidValueException("scale", targetScale); + } + long n = nanos; + long n2 = DateTimeUtils.convertScale(n, targetScale); + if (n2 == n) { + return this; + } + if (n2 >= DateTimeUtils.NANOS_PER_DAY) { + n2 = DateTimeUtils.NANOS_PER_DAY - 1; + } + return fromNanos(n2); + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + return Long.compare(nanos, ((ValueTime) o).nanos); + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } + return other instanceof ValueTime && nanos == (((ValueTime) other).nanos); + } + + @Override + public int hashCode() { + return (int) (nanos ^ (nanos >>> 32)); + } + + @Override + public 
Object getObject() { + return getTime(); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setTime(parameterIndex, getTime()); + } + + @Override + public Value add(Value v) { + ValueTime t = (ValueTime) v.convertTo(Value.TIME); + return ValueTime.fromNanos(nanos + t.getNanos()); + } + + @Override + public Value subtract(Value v) { + ValueTime t = (ValueTime) v.convertTo(Value.TIME); + return ValueTime.fromNanos(nanos - t.getNanos()); + } + + @Override + public Value multiply(Value v) { + return ValueTime.fromNanos((long) (nanos * v.getDouble())); + } + + @Override + public Value divide(Value v) { + return ValueTime.fromNanos((long) (nanos / v.getDouble())); + } + + @Override + public int getSignum() { + return Long.signum(nanos); + } + + @Override + public Value negate() { + return ValueTime.fromNanos(-nanos); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueTimestamp.java b/modules/h2/src/main/java/org/h2/value/ValueTimestamp.java new file mode 100644 index 0000000000000..86fb65b9ed45f --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueTimestamp.java @@ -0,0 +1,290 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Timestamp; +import org.h2.api.ErrorCode; +import org.h2.engine.Mode; +import org.h2.message.DbException; +import org.h2.util.DateTimeUtils; + +/** + * Implementation of the TIMESTAMP data type. + */ +public class ValueTimestamp extends Value { + + /** + * The default precision and display size of the textual representation of a timestamp. + * Example: 2001-01-01 23:59:59.123456 + */ + public static final int DEFAULT_PRECISION = 26; + + /** + * The maximum precision and display size of the textual representation of a timestamp. 
+ * Example: 2001-01-01 23:59:59.123456789 + */ + public static final int MAXIMUM_PRECISION = 29; + + /** + * The default scale for timestamps. + */ + static final int DEFAULT_SCALE = 6; + + /** + * The maximum scale for timestamps. + */ + public static final int MAXIMUM_SCALE = 9; + + /** + * Get display size for the specified scale. + * + * @param scale scale + * @return display size + */ + public static int getDisplaySize(int scale) { + return scale == 0 ? 19 : 20 + scale; + } + + /** + * A bit field with bits for the year, month, and day (see DateTimeUtils for + * encoding) + */ + private final long dateValue; + /** + * The nanoseconds since midnight. + */ + private final long timeNanos; + + private ValueTimestamp(long dateValue, long timeNanos) { + this.dateValue = dateValue; + if (timeNanos < 0 || timeNanos >= DateTimeUtils.NANOS_PER_DAY) { + throw new IllegalArgumentException("timeNanos out of range " + timeNanos); + } + this.timeNanos = timeNanos; + } + + /** + * Get or create a date value for the given date. + * + * @param dateValue the date value, a bit field with bits for the year, + * month, and day + * @param timeNanos the nanoseconds since midnight + * @return the value + */ + public static ValueTimestamp fromDateValueAndNanos(long dateValue, long timeNanos) { + return (ValueTimestamp) Value.cache(new ValueTimestamp(dateValue, timeNanos)); + } + + /** + * Get or create a timestamp value for the given timestamp. + * + * @param timestamp the timestamp + * @return the value + */ + public static ValueTimestamp get(Timestamp timestamp) { + long ms = timestamp.getTime(); + long nanos = timestamp.getNanos() % 1_000_000; + long dateValue = DateTimeUtils.dateValueFromDate(ms); + nanos += DateTimeUtils.nanosFromDate(ms); + return fromDateValueAndNanos(dateValue, nanos); + } + + /** + * Get or create a timestamp value for the given date/time in millis. 
+ * + * @param ms the milliseconds + * @param nanos the nanoseconds + * @return the value + */ + public static ValueTimestamp fromMillisNanos(long ms, int nanos) { + long dateValue = DateTimeUtils.dateValueFromDate(ms); + long timeNanos = nanos + DateTimeUtils.nanosFromDate(ms); + return fromDateValueAndNanos(dateValue, timeNanos); + } + + /** + * Get or create a timestamp value for the given date/time in millis. + * + * @param ms the milliseconds + * @return the value + */ + public static ValueTimestamp fromMillis(long ms) { + long dateValue = DateTimeUtils.dateValueFromDate(ms); + long nanos = DateTimeUtils.nanosFromDate(ms); + return fromDateValueAndNanos(dateValue, nanos); + } + + /** + * Parse a string to a ValueTimestamp. This method supports the format + * +/-year-month-day hour[:.]minute[:.]seconds.fractional and an optional timezone + * part. + * + * @param s the string to parse + * @return the date + */ + public static ValueTimestamp parse(String s) { + return parse(s, null); + } + + /** + * Parse a string to a ValueTimestamp, using the given {@link Mode}. + * This method supports the format +/-year-month-day[ -]hour[:.]minute[:.]seconds.fractional + * and an optional timezone part. + * + * @param s the string to parse + * @param mode the database {@link Mode} + * @return the date + */ + public static ValueTimestamp parse(String s, Mode mode) { + try { + return (ValueTimestamp) DateTimeUtils.parseTimestamp(s, mode, false); + } catch (Exception e) { + throw DbException.get(ErrorCode.INVALID_DATETIME_CONSTANT_2, + e, "TIMESTAMP", s); + } + } + + /** + * A bit field with bits for the year, month, and day (see DateTimeUtils for + * encoding). + * + * @return the data value + */ + public long getDateValue() { + return dateValue; + } + + /** + * The nanoseconds since midnight. 
+ * + * @return the nanoseconds + */ + public long getTimeNanos() { + return timeNanos; + } + + @Override + public Timestamp getTimestamp() { + return DateTimeUtils.convertDateValueToTimestamp(dateValue, timeNanos); + } + + @Override + public int getType() { + return Value.TIMESTAMP; + } + + @Override + public String getString() { + StringBuilder buff = new StringBuilder(MAXIMUM_PRECISION); + DateTimeUtils.appendDate(buff, dateValue); + buff.append(' '); + DateTimeUtils.appendTime(buff, timeNanos); + return buff.toString(); + } + + @Override + public String getSQL() { + return "TIMESTAMP '" + getString() + "'"; + } + + @Override + public long getPrecision() { + return MAXIMUM_PRECISION; + } + + @Override + public int getScale() { + return MAXIMUM_SCALE; + } + + @Override + public int getDisplaySize() { + return MAXIMUM_PRECISION; + } + + @Override + public boolean checkPrecision(long precision) { + // TIMESTAMP data type does not have precision parameter + return true; + } + + @Override + public Value convertScale(boolean onlyToSmallerScale, int targetScale) { + if (targetScale >= MAXIMUM_SCALE) { + return this; + } + if (targetScale < 0) { + throw DbException.getInvalidValueException("scale", targetScale); + } + long n = timeNanos; + long n2 = DateTimeUtils.convertScale(n, targetScale); + if (n2 == n) { + return this; + } + long dv = dateValue; + if (n2 >= DateTimeUtils.NANOS_PER_DAY) { + n2 -= DateTimeUtils.NANOS_PER_DAY; + dv = DateTimeUtils.incrementDateValue(dv); + } + return fromDateValueAndNanos(dv, n2); + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueTimestamp t = (ValueTimestamp) o; + int c = Long.compare(dateValue, t.dateValue); + if (c != 0) { + return c; + } + return Long.compare(timeNanos, t.timeNanos); + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } else if (!(other instanceof ValueTimestamp)) { + return false; + } + ValueTimestamp x = (ValueTimestamp) other; + 
return dateValue == x.dateValue && timeNanos == x.timeNanos; + } + + @Override + public int hashCode() { + return (int) (dateValue ^ (dateValue >>> 32) ^ timeNanos ^ (timeNanos >>> 32)); + } + + @Override + public Object getObject() { + return getTimestamp(); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setTimestamp(parameterIndex, getTimestamp()); + } + + @Override + public Value add(Value v) { + ValueTimestamp t = (ValueTimestamp) v.convertTo(Value.TIMESTAMP); + long d1 = DateTimeUtils.absoluteDayFromDateValue(dateValue); + long d2 = DateTimeUtils.absoluteDayFromDateValue(t.dateValue); + return DateTimeUtils.normalizeTimestamp(d1 + d2, timeNanos + t.timeNanos); + } + + @Override + public Value subtract(Value v) { + ValueTimestamp t = (ValueTimestamp) v.convertTo(Value.TIMESTAMP); + long d1 = DateTimeUtils.absoluteDayFromDateValue(dateValue); + long d2 = DateTimeUtils.absoluteDayFromDateValue(t.dateValue); + return DateTimeUtils.normalizeTimestamp(d1 - d2, timeNanos - t.timeNanos); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueTimestampTimeZone.java b/modules/h2/src/main/java/org/h2/value/ValueTimestampTimeZone.java new file mode 100644 index 0000000000000..3e07741d96516 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueTimestampTimeZone.java @@ -0,0 +1,298 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, and the + * EPL 1.0 (http://h2database.com/html/license.html). Initial Developer: H2 + * Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Timestamp; +import org.h2.api.ErrorCode; +import org.h2.api.TimestampWithTimeZone; +import org.h2.message.DbException; +import org.h2.util.DateTimeUtils; + +/** + * Implementation of the TIMESTAMP WITH TIME ZONE data type. 
+ * + * @see + * ISO 8601 Time zone designators + */ +public class ValueTimestampTimeZone extends Value { + + /** + * The default precision and display size of the textual representation of a timestamp. + * Example: 2001-01-01 23:59:59.123456+10:00 + */ + public static final int DEFAULT_PRECISION = 32; + + /** + * The maximum precision and display size of the textual representation of a timestamp. + * Example: 2001-01-01 23:59:59.123456789+10:00 + */ + public static final int MAXIMUM_PRECISION = 35; + + /** + * The default scale for timestamps. + */ + static final int DEFAULT_SCALE = ValueTimestamp.DEFAULT_SCALE; + + /** + * The maximum scale for timestamps. + */ + static final int MAXIMUM_SCALE = ValueTimestamp.MAXIMUM_SCALE; + + /** + * Get display size for the specified scale. + * + * @param scale scale + * @return display size + */ + public static int getDisplaySize(int scale) { + return scale == 0 ? 25 : 26 + scale; + } + + /** + * A bit field with bits for the year, month, and day (see DateTimeUtils for + * encoding) + */ + private final long dateValue; + /** + * The nanoseconds since midnight. + */ + private final long timeNanos; + /** + * Time zone offset from UTC in minutes, range of -18 hours to +18 hours. This + * range is compatible with OffsetDateTime from JSR-310. + */ + private final short timeZoneOffsetMins; + + private ValueTimestampTimeZone(long dateValue, long timeNanos, + short timeZoneOffsetMins) { + if (timeNanos < 0 || timeNanos >= DateTimeUtils.NANOS_PER_DAY) { + throw new IllegalArgumentException( + "timeNanos out of range " + timeNanos); + } + /* + * Some current and historic time zones have offsets larger than 12 hours. + * JSR-310 determines 18 hours as maximum possible offset in both directions, so + * we use this limit too for compatibility. 
+ */ + if (timeZoneOffsetMins < (-18 * 60) + || timeZoneOffsetMins > (18 * 60)) { + throw new IllegalArgumentException( + "timeZoneOffsetMins out of range " + timeZoneOffsetMins); + } + this.dateValue = dateValue; + this.timeNanos = timeNanos; + this.timeZoneOffsetMins = timeZoneOffsetMins; + } + + /** + * Get or create a date value for the given date. + * + * @param dateValue the date value, a bit field with bits for the year, + * month, and day + * @param timeNanos the nanoseconds since midnight + * @param timeZoneOffsetMins the timezone offset in minutes + * @return the value + */ + public static ValueTimestampTimeZone fromDateValueAndNanos(long dateValue, + long timeNanos, short timeZoneOffsetMins) { + return (ValueTimestampTimeZone) Value.cache(new ValueTimestampTimeZone( + dateValue, timeNanos, timeZoneOffsetMins)); + } + + /** + * Get or create a timestamp value for the given timestamp. + * + * @param timestamp the timestamp + * @return the value + */ + public static ValueTimestampTimeZone get(TimestampWithTimeZone timestamp) { + return fromDateValueAndNanos(timestamp.getYMD(), + timestamp.getNanosSinceMidnight(), + timestamp.getTimeZoneOffsetMins()); + } + + /** + * Parse a string to a ValueTimestamp. This method supports the format + * +/-year-month-day hour:minute:seconds.fractional and an optional timezone + * part. + * + * @param s the string to parse + * @return the date + */ + public static ValueTimestampTimeZone parse(String s) { + try { + return (ValueTimestampTimeZone) DateTimeUtils.parseTimestamp(s, null, true); + } catch (Exception e) { + throw DbException.get(ErrorCode.INVALID_DATETIME_CONSTANT_2, e, + "TIMESTAMP WITH TIME ZONE", s); + } + } + + /** + * A bit field with bits for the year, month, and day (see DateTimeUtils for + * encoding). + * + * @return the data value + */ + public long getDateValue() { + return dateValue; + } + + /** + * The nanoseconds since midnight. 
+ * + * @return the nanoseconds + */ + public long getTimeNanos() { + return timeNanos; + } + + /** + * The timezone offset in minutes. + * + * @return the offset + */ + public short getTimeZoneOffsetMins() { + return timeZoneOffsetMins; + } + + @Override + public Timestamp getTimestamp() { + return DateTimeUtils.convertTimestampTimeZoneToTimestamp(dateValue, timeNanos, timeZoneOffsetMins); + } + + @Override + public int getType() { + return Value.TIMESTAMP_TZ; + } + + @Override + public String getString() { + return DateTimeUtils.timestampTimeZoneToString(dateValue, timeNanos, timeZoneOffsetMins); + } + + @Override + public String getSQL() { + return "TIMESTAMP WITH TIME ZONE '" + getString() + "'"; + } + + @Override + public long getPrecision() { + return MAXIMUM_PRECISION; + } + + @Override + public int getScale() { + return MAXIMUM_SCALE; + } + + @Override + public int getDisplaySize() { + return MAXIMUM_PRECISION; + } + + @Override + public boolean checkPrecision(long precision) { + // TIMESTAMP WITH TIME ZONE data type does not have precision parameter + return true; + } + + @Override + public Value convertScale(boolean onlyToSmallerScale, int targetScale) { + if (targetScale >= MAXIMUM_SCALE) { + return this; + } + if (targetScale < 0) { + throw DbException.getInvalidValueException("scale", targetScale); + } + long n = timeNanos; + long n2 = DateTimeUtils.convertScale(n, targetScale); + if (n2 == n) { + return this; + } + long dv = dateValue; + if (n2 >= DateTimeUtils.NANOS_PER_DAY) { + n2 -= DateTimeUtils.NANOS_PER_DAY; + dv = DateTimeUtils.incrementDateValue(dv); + } + return fromDateValueAndNanos(dv, n2, timeZoneOffsetMins); + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + ValueTimestampTimeZone t = (ValueTimestampTimeZone) o; + // Maximum time zone offset is +/-18 hours so difference in days between local + // and UTC cannot be more than one day + long dateValueA = dateValue; + long timeA = timeNanos - timeZoneOffsetMins * 
60_000_000_000L; + if (timeA < 0) { + timeA += DateTimeUtils.NANOS_PER_DAY; + dateValueA = DateTimeUtils.decrementDateValue(dateValueA); + } else if (timeA >= DateTimeUtils.NANOS_PER_DAY) { + timeA -= DateTimeUtils.NANOS_PER_DAY; + dateValueA = DateTimeUtils.incrementDateValue(dateValueA); + } + long dateValueB = t.dateValue; + long timeB = t.timeNanos - t.timeZoneOffsetMins * 60_000_000_000L; + if (timeB < 0) { + timeB += DateTimeUtils.NANOS_PER_DAY; + dateValueB = DateTimeUtils.decrementDateValue(dateValueB); + } else if (timeB >= DateTimeUtils.NANOS_PER_DAY) { + timeB -= DateTimeUtils.NANOS_PER_DAY; + dateValueB = DateTimeUtils.incrementDateValue(dateValueB); + } + int cmp = Long.compare(dateValueA, dateValueB); + if (cmp != 0) { + return cmp; + } + return Long.compare(timeA, timeB); + } + + @Override + public boolean equals(Object other) { + if (this == other) { + return true; + } else if (!(other instanceof ValueTimestampTimeZone)) { + return false; + } + ValueTimestampTimeZone x = (ValueTimestampTimeZone) other; + return dateValue == x.dateValue && timeNanos == x.timeNanos + && timeZoneOffsetMins == x.timeZoneOffsetMins; + } + + @Override + public int hashCode() { + return (int) (dateValue ^ (dateValue >>> 32) ^ timeNanos + ^ (timeNanos >>> 32) ^ timeZoneOffsetMins); + } + + @Override + public Object getObject() { + return new TimestampWithTimeZone(dateValue, timeNanos, + timeZoneOffsetMins); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setString(parameterIndex, getString()); + } + + @Override + public Value add(Value v) { + throw DbException.getUnsupportedException( + "manipulating TIMESTAMP WITH TIME ZONE values is unsupported"); + } + + @Override + public Value subtract(Value v) { + throw DbException.getUnsupportedException( + "manipulating TIMESTAMP WITH TIME ZONE values is unsupported"); + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/ValueUuid.java 
b/modules/h2/src/main/java/org/h2/value/ValueUuid.java new file mode 100644 index 0000000000000..4d0f998519d2d --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/ValueUuid.java @@ -0,0 +1,221 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.value; + +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.util.UUID; + +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.util.Bits; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; + +/** + * Implementation of the UUID data type. + */ +public class ValueUuid extends Value { + + /** + * The precision of this value in number of bytes. + */ + private static final int PRECISION = 16; + + /** + * The display size of the textual representation of a UUID. + * Example: cd38d882-7ada-4589-b5fb-7da0ca559d9a + */ + private static final int DISPLAY_SIZE = 36; + + private final long high, low; + + private ValueUuid(long high, long low) { + this.high = high; + this.low = low; + } + + @Override + public int hashCode() { + return (int) ((high >>> 32) ^ high ^ (low >>> 32) ^ low); + } + + /** + * Create a new UUID using the pseudo random number generator. + * + * @return the new UUID + */ + public static ValueUuid getNewRandom() { + long high = MathUtils.secureRandomLong(); + long low = MathUtils.secureRandomLong(); + // version 4 (random) + high = (high & (~0xf000L)) | 0x4000L; + // variant (Leach-Salz) + low = (low & 0x3fff_ffff_ffff_ffffL) | 0x8000_0000_0000_0000L; + return new ValueUuid(high, low); + } + + /** + * Get or create a UUID for the given 16 bytes. 
+ * + * @param binary the byte array (must be at least 16 bytes long) + * @return the UUID + */ + public static ValueUuid get(byte[] binary) { + if (binary.length < 16) { + return get(StringUtils.convertBytesToHex(binary)); + } + long high = Bits.readLong(binary, 0); + long low = Bits.readLong(binary, 8); + return (ValueUuid) Value.cache(new ValueUuid(high, low)); + } + + /** + * Get or create a UUID for the given high and low order values. + * + * @param high the most significant bits + * @param low the least significant bits + * @return the UUID + */ + public static ValueUuid get(long high, long low) { + return (ValueUuid) Value.cache(new ValueUuid(high, low)); + } + + /** + * Get or create a UUID for the given Java UUID. + * + * @param uuid Java UUID + * @return the UUID + */ + public static ValueUuid get(UUID uuid) { + return get(uuid.getMostSignificantBits(), uuid.getLeastSignificantBits()); + } + + /** + * Get or create a UUID for the given text representation. + * + * @param s the text representation of the UUID + * @return the UUID + */ + public static ValueUuid get(String s) { + long low = 0, high = 0; + for (int i = 0, j = 0, length = s.length(); i < length; i++) { + char c = s.charAt(i); + if (c >= '0' && c <= '9') { + low = (low << 4) | (c - '0'); + } else if (c >= 'a' && c <= 'f') { + low = (low << 4) | (c - 'a' + 0xa); + } else if (c == '-') { + continue; + } else if (c >= 'A' && c <= 'F') { + low = (low << 4) | (c - 'A' + 0xa); + } else if (c <= ' ') { + continue; + } else { + throw DbException.get(ErrorCode.DATA_CONVERSION_ERROR_1, s); + } + if (j++ == 15) { + high = low; + low = 0; + } + } + return (ValueUuid) Value.cache(new ValueUuid(high, low)); + } + + @Override + public String getSQL() { + return StringUtils.quoteStringSQL(getString()); + } + + @Override + public int getType() { + return Value.UUID; + } + + @Override + public long getPrecision() { + return PRECISION; + } + + private static void appendHex(StringBuilder buff, long x, int bytes) 
{ + for (int i = bytes * 8 - 4; i >= 0; i -= 8) { + buff.append(Integer.toHexString((int) (x >> i) & 0xf)). + append(Integer.toHexString((int) (x >> (i - 4)) & 0xf)); + } + } + + @Override + public String getString() { + StringBuilder buff = new StringBuilder(36); + appendHex(buff, high >> 32, 4); + buff.append('-'); + appendHex(buff, high >> 16, 2); + buff.append('-'); + appendHex(buff, high, 2); + buff.append('-'); + appendHex(buff, low >> 48, 2); + buff.append('-'); + appendHex(buff, low, 6); + return buff.toString(); + } + + @Override + protected int compareSecure(Value o, CompareMode mode) { + if (o == this) { + return 0; + } + ValueUuid v = (ValueUuid) o; + if (high == v.high) { + return Long.compare(low, v.low); + } + return high > v.high ? 1 : -1; + } + + @Override + public boolean equals(Object other) { + return other instanceof ValueUuid && compareSecure((Value) other, null) == 0; + } + + @Override + public Object getObject() { + return new UUID(high, low); + } + + @Override + public byte[] getBytes() { + return Bits.uuidToBytes(high, low); + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) + throws SQLException { + prep.setBytes(parameterIndex, getBytes()); + } + + /** + * Get the most significant 64 bits of this UUID. + * + * @return the high order bits + */ + public long getHigh() { + return high; + } + + /** + * Get the least significant 64 bits of this UUID. + * + * @return the low order bits + */ + public long getLow() { + return low; + } + + @Override + public int getDisplaySize() { + return DISPLAY_SIZE; + } + +} diff --git a/modules/h2/src/main/java/org/h2/value/package.html b/modules/h2/src/main/java/org/h2/value/package.html new file mode 100644 index 0000000000000..930448355bb20 --- /dev/null +++ b/modules/h2/src/main/java/org/h2/value/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Data type and value implementations. + +

\ No newline at end of file diff --git a/modules/h2/src/main/java9/org/h2/util/Bits.java b/modules/h2/src/main/java9/org/h2/util/Bits.java new file mode 100644 index 0000000000000..15c20c8bb8753 --- /dev/null +++ b/modules/h2/src/main/java9/org/h2/util/Bits.java @@ -0,0 +1,159 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.util; + +import java.lang.invoke.MethodHandles; +import java.lang.invoke.VarHandle; +import java.nio.ByteOrder; +import java.util.Arrays; +import java.util.UUID; + +/** + * Manipulations with bytes and arrays. Specialized implementation for Java 9 + * and later versions. + */ +public final class Bits { + + /** + * VarHandle giving access to elements of a byte[] array viewed as if it were an + * int[] array on a big-endian system. + */ + private static final VarHandle INT_VH = MethodHandles.byteArrayViewVarHandle(int[].class, ByteOrder.BIG_ENDIAN); + + /** + * VarHandle giving access to elements of a byte[] array viewed as if it were a + * long[] array on a big-endian system. + */ + private static final VarHandle LONG_VH = MethodHandles.byteArrayViewVarHandle(long[].class, ByteOrder.BIG_ENDIAN); + + /** + * Compare the contents of two byte arrays. If the content or length of the + * first array is smaller than the second array, -1 is returned. If the content + * or length of the second array is smaller than the first array, 1 is returned. + * If the contents and lengths are the same, 0 is returned. + * + *

    + * This method interprets bytes as signed. + *

    + * + * @param data1 + * the first byte array (must not be null) + * @param data2 + * the second byte array (must not be null) + * @return the result of the comparison (-1, 1 or 0) + */ + public static int compareNotNullSigned(byte[] data1, byte[] data2) { + return Integer.signum(Arrays.compare(data1, data2)); + } + + /** + * Compare the contents of two byte arrays. If the content or length of the + * first array is smaller than the second array, -1 is returned. If the content + * or length of the second array is smaller than the first array, 1 is returned. + * If the contents and lengths are the same, 0 is returned. + * + *

    + * This method interprets bytes as unsigned. + *

+ *
+ * @param data1
+ *            the first byte array (must not be null)
+ * @param data2
+ *            the second byte array (must not be null)
+ * @return the result of the comparison (-1, 1 or 0)
+ */
+    public static int compareNotNullUnsigned(byte[] data1, byte[] data2) {
+        return Integer.signum(Arrays.compareUnsigned(data1, data2));
+    }
+
+    /**
+     * Reads an int value from the byte array at the given position in big-endian
+     * order.
+     *
+     * @param buff
+     *            the byte array
+     * @param pos
+     *            the position
+     * @return the value
+     */
+    public static int readInt(byte[] buff, int pos) {
+        return (int) INT_VH.get(buff, pos);
+    }
+
+    /**
+     * Reads a long value from the byte array at the given position in big-endian
+     * order.
+     *
+     * @param buff
+     *            the byte array
+     * @param pos
+     *            the position
+     * @return the value
+     */
+    public static long readLong(byte[] buff, int pos) {
+        return (long) LONG_VH.get(buff, pos);
+    }
+
+    /**
+     * Converts UUID value to byte array in big-endian order.
+     *
+     * @param msb
+     *            most significant part of UUID
+     * @param lsb
+     *            least significant part of UUID
+     * @return byte array representation
+     */
+    public static byte[] uuidToBytes(long msb, long lsb) {
+        byte[] buff = new byte[16];
+        LONG_VH.set(buff, 0, msb);
+        LONG_VH.set(buff, 8, lsb);
+        return buff;
+    }
+
+    /**
+     * Converts UUID value to byte array in big-endian order.
+     *
+     * @param uuid
+     *            UUID value
+     * @return byte array representation
+     */
+    public static byte[] uuidToBytes(UUID uuid) {
+        return uuidToBytes(uuid.getMostSignificantBits(), uuid.getLeastSignificantBits());
+    }
+
+    /**
+     * Writes an int value to the byte array at the given position in big-endian
+     * order.
+     *
+     * @param buff
+     *            the byte array
+     * @param pos
+     *            the position
+     * @param x
+     *            the value to write
+     */
+    public static void writeInt(byte[] buff, int pos, int x) {
+        INT_VH.set(buff, pos, x);
+    }
+
+    /**
+     * Writes a long value to the byte array at the given position in big-endian
+     * order.
+ * + * @param buff + * the byte array + * @param pos + * the position + * @param x + * the value to write + */ + public static void writeLong(byte[] buff, int pos, long x) { + LONG_VH.set(buff, pos, x); + } + + private Bits() { + } +} diff --git a/modules/h2/src/main/java9/org/h2/util/package.html b/modules/h2/src/main/java9/org/h2/util/package.html new file mode 100644 index 0000000000000..ab7c511465959 --- /dev/null +++ b/modules/h2/src/main/java9/org/h2/util/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Internal utility classes reimplemented for Java 9 and later versions. + +
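The Java 9 `Bits` implementation above avoids manual bit shifting by viewing a `byte[]` through `MethodHandles.byteArrayViewVarHandle`. A minimal standalone sketch of that technique follows; the class name `BitsDemo` and the sample values are illustrative only and are not part of the patch:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.nio.ByteOrder;

// Illustrative sketch of the VarHandle byte-array view used by org.h2.util.Bits.
public class BitsDemo {
    // View a byte[] as if it were a long[] in big-endian byte order.
    private static final VarHandle LONG_VH =
            MethodHandles.byteArrayViewVarHandle(long[].class, ByteOrder.BIG_ENDIAN);

    // Write a long at the given byte offset in big-endian order.
    public static void writeLong(byte[] buff, int pos, long x) {
        LONG_VH.set(buff, pos, x);
    }

    // Read a long back from the same byte offset.
    public static long readLong(byte[] buff, int pos) {
        return (long) LONG_VH.get(buff, pos);
    }

    public static void main(String[] args) {
        byte[] buff = new byte[16];
        writeLong(buff, 0, 0x0102030405060708L);
        // Big-endian: the most significant byte lands first.
        System.out.println(buff[0]);                                   // 1
        System.out.println(readLong(buff, 0) == 0x0102030405060708L); // true
    }
}
```

Plain `get`/`set` on a byte-array view are permitted at unaligned offsets, which is why `uuidToBytes` can write at offsets 0 and 8 of a 16-byte array without alignment checks.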

    \ No newline at end of file diff --git a/modules/h2/src/main/java9/precompiled/org/h2/util/Bits.class b/modules/h2/src/main/java9/precompiled/org/h2/util/Bits.class new file mode 100644 index 0000000000000..4b427d6a0cc07 Binary files /dev/null and b/modules/h2/src/main/java9/precompiled/org/h2/util/Bits.class differ diff --git a/modules/h2/src/test/java/META-INF/services/javax.annotation.processing.Processor b/modules/h2/src/test/java/META-INF/services/javax.annotation.processing.Processor new file mode 100644 index 0000000000000..0a9f0872e8f4a --- /dev/null +++ b/modules/h2/src/test/java/META-INF/services/javax.annotation.processing.Processor @@ -0,0 +1 @@ +org.h2.test.ap.TestAnnotationProcessor \ No newline at end of file diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2AllTestsSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2AllTestsSuite.java new file mode 100644 index 0000000000000..64bbde4e0b7cf --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2AllTestsSuite.java @@ -0,0 +1,34 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.testsuites; + +import org.junit.runner.RunWith; +import org.junit.runners.Suite; + +@RunWith(Suite.class) +@Suite.SuiteClasses({ + H2InMemoryMultiThreadTestSuite.class, +// H2InMemoryMultiThreadLazyTestSuite.class, +/* H2MultiThreadTestSuite.class, + H2PageStoreOffloadTestSuite.class, + H2MVStoreWithNetworkingTestSuite.class, + H2PageStoreDefragTestSuite.class, + H2PageStoreNoMVStoreTestSuite.class,*/ +}) +public class H2AllTestsSuite { +} + diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2InMemoryMultiThreadLazyTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2InMemoryMultiThreadLazyTestSuite.java new file mode 100644 index 0000000000000..17aaec12a48c9 --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2InMemoryMultiThreadLazyTestSuite.java @@ -0,0 +1,39 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 in-memory multi-threaded tests in lazy-mode. 
+ */ +@RunWith(SuiteMethod.class) +public class H2InMemoryMultiThreadLazyTestSuite { + /** */ + public static Test suite() { + H2TestSuiteBuilder builder = new H2TestSuiteBuilder(); + + builder.memory = true; + builder.multiThreaded = true; + builder.lazy = true; + + return builder.buildSuite(H2InMemoryMultiThreadLazyTestSuite.class, true); + } +} diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2InMemoryMultiThreadTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2InMemoryMultiThreadTestSuite.java new file mode 100644 index 0000000000000..328518768c32d --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2InMemoryMultiThreadTestSuite.java @@ -0,0 +1,38 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 in-memory multi-threaded test suite. 
+ */ +@RunWith(SuiteMethod.class) +public class H2InMemoryMultiThreadTestSuite { + /** */ + public static Test suite() { + H2TestSuiteBuilder builder = new H2TestSuiteBuilder(); + + builder.memory = true; + builder.multiThreaded = true; + + return builder.buildSuite(H2InMemoryMultiThreadTestSuite.class, true, true); + } +} diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2MVStoreWithNetworkingTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2MVStoreWithNetworkingTestSuite.java new file mode 100644 index 0000000000000..ef394183e4c97 --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2MVStoreWithNetworkingTestSuite.java @@ -0,0 +1,40 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 test suite with MVStore and networking enabled. 
+ */ +@RunWith(SuiteMethod.class) +public class H2MVStoreWithNetworkingTestSuite { + /** */ + public static Test suite() { + H2TestSuiteBuilder builder = new H2TestSuiteBuilder(); + + builder.mvStore = true; + builder.memory = false; + builder.multiThreaded = false; + builder.networked = true; + + return builder.buildSuite(H2MVStoreWithNetworkingTestSuite.class, true); + } +} diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2MultiThreadTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2MultiThreadTestSuite.java new file mode 100644 index 0000000000000..f2400ef79b8f7 --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2MultiThreadTestSuite.java @@ -0,0 +1,38 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 multi-threaded test suite. 
+ */ +@RunWith(SuiteMethod.class) +public class H2MultiThreadTestSuite { + /** */ + public static Test suite() { + H2TestSuiteBuilder builder = new H2TestSuiteBuilder(); + + builder.memory = false; + builder.multiThreaded = true; + + return builder.buildSuite(H2MultiThreadTestSuite.class, true, true); + } +} diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2NormalModeTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2NormalModeTestSuite.java new file mode 100644 index 0000000000000..8c77c902ec1a7 --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2NormalModeTestSuite.java @@ -0,0 +1,38 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 database tests. 
+ */ +@RunWith(SuiteMethod.class) +public class H2NormalModeTestSuite { + /** */ + public static Test suite() { + H2TestSuiteBuilder builder = new H2TestSuiteBuilder(); + + builder.memory = false; + builder.multiThreaded = false; + + return builder.buildSuite(H2NormalModeTestSuite.class, true, true); + } +} diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreDefragTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreDefragTestSuite.java new file mode 100644 index 0000000000000..27a5950c993ce --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreDefragTestSuite.java @@ -0,0 +1,38 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 page store defrag tests. 
+ */ +@RunWith(SuiteMethod.class) +public class H2PageStoreDefragTestSuite { + /** */ + public static Test suite() { + H2TestSuiteBuilder builder = new H2TestSuiteBuilder(); + + builder.traceLevelFile = 1; + builder.defrag = true; + + return builder.buildSuite(H2PageStoreDefragTestSuite.class, true, true); + } +} diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreNoMVStoreTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreNoMVStoreTestSuite.java new file mode 100644 index 0000000000000..e282555168f8f --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreNoMVStoreTestSuite.java @@ -0,0 +1,39 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 without MVStore test suite. 
+ */ +@RunWith(SuiteMethod.class) +public class H2PageStoreNoMVStoreTestSuite { + /** */ + public static Test suite() { + H2TestSuiteBuilder builder = new H2TestSuiteBuilder(); + + builder.memory = false; + builder.multiThreaded = false; + builder.mvStore = false; + + return builder.buildSuite(H2PageStoreNoMVStoreTestSuite.class, true); + } +} diff --git a/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreOffloadTestSuite.java b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreOffloadTestSuite.java new file mode 100644 index 0000000000000..ee4ee818e4378 --- /dev/null +++ b/modules/h2/src/test/java/org/apache/ignite/testsuites/H2PageStoreOffloadTestSuite.java @@ -0,0 +1,44 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.Test; +import org.h2.test.H2TestSuiteBuilder; +import org.junit.internal.runners.SuiteMethod; +import org.junit.runner.RunWith; + +/** + * H2 offload to disk tests. 
+ */
+@RunWith(SuiteMethod.class)
+public class H2PageStoreOffloadTestSuite {
+    /** */
+    public static Test suite() {
+        H2TestSuiteBuilder builder = new H2TestSuiteBuilder();
+
+        builder.memory = false;
+        builder.multiThreaded = false;
+        builder.diskUndo = true;
+        builder.diskResult = true;
+        builder.traceLevelFile = 3;
+        builder.throttle = 1;
+        builder.cacheType = "SOFT_LRU";
+        builder.cipher = "AES";
+
+        return builder.buildSuite(H2PageStoreOffloadTestSuite.class, true);
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/CachedPreparedStatements.java b/modules/h2/src/test/java/org/h2/samples/CachedPreparedStatements.java
new file mode 100644
index 0000000000000..69962d62cce9b
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/CachedPreparedStatements.java
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * This sample application shows how to cache prepared statements.
+ */
+public class CachedPreparedStatements {
+
+    private Connection conn;
+    private Statement stat;
+    private final Map<String, PreparedStatement> prepared =
+            Collections.synchronizedMap(
+                    new HashMap<String, PreparedStatement>());
+
+    /**
+     * This method is called when executing this sample application from the
+     * command line.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String...
args) throws Exception { + new CachedPreparedStatements().run(); + } + + private void run() throws Exception { + Class.forName("org.h2.Driver"); + conn = DriverManager.getConnection( + "jdbc:h2:mem:", "sa", ""); + stat = conn.createStatement(); + stat.execute( + "create table test(id int primary key, name varchar)"); + PreparedStatement prep = prepare( + "insert into test values(?, ?)"); + prep.setInt(1, 1); + prep.setString(2, "Hello"); + prep.execute(); + conn.close(); + } + + private PreparedStatement prepare(String sql) + throws SQLException { + PreparedStatement prep = prepared.get(sql); + if (prep == null) { + prep = conn.prepareStatement(sql); + prepared.put(sql, prep); + } + return prep; + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/Compact.java b/modules/h2/src/test/java/org/h2/samples/Compact.java new file mode 100644 index 0000000000000..88cbd74430a99 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/Compact.java @@ -0,0 +1,63 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.store.fs.FileUtils; +import org.h2.tools.Script; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.RunScript; + +/** + * This sample application shows how to compact the database files. + * This is done by creating a SQL script, and then re-creating the database + * using this script. + */ +public class Compact { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception {
+        DeleteDbFiles.execute("./target/data", "test", true);
+        Class.forName("org.h2.Driver");
+        Connection conn = DriverManager.getConnection("jdbc:h2:./target/data/test", "sa", "");
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)");
+        stat.execute("INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World');");
+        stat.close();
+        conn.close();
+        System.out.println("Compacting...");
+        compact("./target/data", "test", "sa", "");
+        System.out.println("Done.");
+    }
+
+    /**
+     * Utility method to compact a database.
+     *
+     * @param dir the directory
+     * @param dbName the database name
+     * @param user the user name
+     * @param password the password
+     */
+    public static void compact(String dir, String dbName,
+            String user, String password) throws SQLException {
+        String url = "jdbc:h2:" + dir + "/" + dbName;
+        String file = "target/data/test.sql";
+        Script.process(url, user, password, file, "", "");
+        DeleteDbFiles.execute(dir, dbName, true);
+        RunScript.execute(url, user, password, file, null, false);
+        FileUtils.delete(file);
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/CreateScriptFile.java b/modules/h2/src/test/java/org/h2/samples/CreateScriptFile.java
new file mode 100644
index 0000000000000..dde77fa9136d8
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/CreateScriptFile.java
@@ -0,0 +1,165 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.io.OutputStream; +import java.io.OutputStreamWriter; +import java.io.PrintWriter; +import java.sql.Connection; +import java.sql.DriverManager; +import org.h2.engine.Constants; +import org.h2.security.SHA256; +import org.h2.store.FileStore; +import org.h2.store.FileStoreInputStream; +import org.h2.store.FileStoreOutputStream; +import org.h2.store.fs.FileUtils; +import org.h2.tools.CompressTool; +import org.h2.tools.RunScript; +import org.h2.tools.Script; + +/** + * This sample application shows how to manually + * create an encrypted and compressed script file. + */ +public class CreateScriptFile { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + + String file = "test.txt"; + String cipher = "AES"; + String filePassword = "password"; + String compressionAlgorithm = "DEFLATE"; + + String url = "jdbc:h2:mem:test"; + String user = "sa", dbPassword = "sa"; + + PrintWriter w = openScriptWriter(file, + compressionAlgorithm, + cipher, filePassword, "UTF-8"); + w.println("create table test(id int primary key);"); + w.println("insert into test select x from system_range(1, 10);"); + w.close(); + + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection(url, user, dbPassword); + RunScript.main( + "-url", url, + "-user", user, "-password", dbPassword, + "-script", file, + "-options", + "compression", compressionAlgorithm, + "cipher", cipher, + "password", "'" + filePassword + "'" + ); + Script.main( + "-url", url, + "-user", user, "-password", dbPassword, + "-script", file, + "-options", + "compression", compressionAlgorithm, + "cipher", cipher, "password", "'" + filePassword + "'" + ); + conn.close(); + + LineNumberReader r = openScriptReader(file, + compressionAlgorithm, + cipher, filePassword, "UTF-8"); + while (true) { + String line = r.readLine(); + if (line == null) { + break; + } + System.out.println(line); + } + r.close(); + + } + + /** + * Open a script writer. 
+ * + * @param fileName the file name (the file will be overwritten) + * @param compressionAlgorithm the compression algorithm (uppercase) + * @param cipher the encryption algorithm or null + * @param password the encryption password + * @param charset the character set (for example UTF-8) + * @return the print writer + */ + public static PrintWriter openScriptWriter(String fileName, + String compressionAlgorithm, + String cipher, String password, + String charset) throws IOException { + try { + OutputStream out; + if (cipher != null) { + byte[] key = SHA256.getKeyPasswordHash("script", password.toCharArray()); + FileUtils.delete(fileName); + FileStore store = FileStore.open(null, fileName, "rw", cipher, key); + store.init(); + out = new FileStoreOutputStream(store, null, compressionAlgorithm); + out = new BufferedOutputStream(out, Constants.IO_BUFFER_SIZE_COMPRESS); + } else { + out = FileUtils.newOutputStream(fileName, false); + out = new BufferedOutputStream(out, Constants.IO_BUFFER_SIZE); + out = CompressTool.wrapOutputStream(out, + compressionAlgorithm, "script.sql"); + } + return new PrintWriter(new OutputStreamWriter(out, charset)); + } catch (Exception e) { + throw new IOException(e.getMessage(), e); + } + } + + /** + * Open a script reader. 
+ *
+ * @param fileName the file name (the file must already exist)
+ * @param compressionAlgorithm the compression algorithm (uppercase)
+ * @param cipher the encryption algorithm or null
+ * @param password the encryption password
+ * @param charset the character set (for example UTF-8)
+ * @return the script reader
+ */
+    public static LineNumberReader openScriptReader(String fileName,
+            String compressionAlgorithm,
+            String cipher, String password,
+            String charset) throws IOException {
+        try {
+            InputStream in;
+            if (cipher != null) {
+                byte[] key = SHA256.getKeyPasswordHash("script", password.toCharArray());
+                FileStore store = FileStore.open(null, fileName, "rw", cipher, key);
+                store.init();
+                in = new FileStoreInputStream(store, null,
+                        compressionAlgorithm != null, false);
+                in = new BufferedInputStream(in, Constants.IO_BUFFER_SIZE_COMPRESS);
+            } else {
+                in = FileUtils.newInputStream(fileName);
+                in = new BufferedInputStream(in, Constants.IO_BUFFER_SIZE);
+                in = CompressTool.wrapInputStream(in, compressionAlgorithm, "script.sql");
+                if (in == null) {
+                    throw new IOException("Entry not found: script.sql in " + fileName);
+                }
+            }
+            return new LineNumberReader(new InputStreamReader(in, charset));
+        } catch (Exception e) {
+            throw new IOException(e.getMessage(), e);
+        }
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/CsvSample.java b/modules/h2/src/test/java/org/h2/samples/CsvSample.java
new file mode 100644
index 0000000000000..36253f85aa657
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/CsvSample.java
@@ -0,0 +1,67 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Types; + +import org.h2.store.fs.FileUtils; +import org.h2.tools.Csv; +import org.h2.tools.SimpleResultSet; + +/** + * This sample application shows how to use the CSV tool + * to write CSV (comma separated values) files, and + * how to use the tool to read such files. + * See also the section CSV (Comma Separated Values) Support in the Tutorial. + */ +public class CsvSample { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws SQLException { + CsvSample.write(); + CsvSample.read(); + FileUtils.delete("target/data/test.csv"); + } + + /** + * Write a CSV file. + */ + static void write() throws SQLException { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("NAME", Types.VARCHAR, 255, 0); + rs.addColumn("EMAIL", Types.VARCHAR, 255, 0); + rs.addColumn("PHONE", Types.VARCHAR, 255, 0); + rs.addRow("Bob Meier", "bob.meier@abcde.abc", "+41123456789"); + rs.addRow("John Jones", "john.jones@abcde.abc", "+41976543210"); + new Csv().write("target/data/test.csv", rs, null); + } + + /** + * Read a CSV file. 
+ */ + static void read() throws SQLException { + ResultSet rs = new Csv().read("target/data/test.csv", null, null); + ResultSetMetaData meta = rs.getMetaData(); + while (rs.next()) { + for (int i = 0; i < meta.getColumnCount(); i++) { + System.out.println( + meta.getColumnLabel(i + 1) + ": " + + rs.getString(i + 1)); + } + System.out.println(); + } + rs.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/DirectInsert.java b/modules/h2/src/test/java/org/h2/samples/DirectInsert.java new file mode 100644 index 0000000000000..d361a5d9a22d1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/DirectInsert.java @@ -0,0 +1,80 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.tools.DeleteDbFiles; + +/** + * Demonstrates the benefit of using the CREATE TABLE ... AS SELECT + * optimization. + */ +public class DirectInsert { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + Class.forName("org.h2.Driver"); + DeleteDbFiles.execute("~", "test", true); + String url = "jdbc:h2:~/test"; + initialInsert(url, 200_000); + for (int i = 0; i < 3; i++) { + createAsSelect(url, true); + createAsSelect(url, false); + } + } + + private static void initialInsert(String url, int len) throws SQLException { + Connection conn = DriverManager.getConnection(url + ";LOG=0"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, 'Test' || SPACE(100))"); + long time = System.nanoTime(); + for (int i = 0; i < len; i++) { + long now = System.nanoTime(); + if (now > time + TimeUnit.SECONDS.toNanos(1)) { + time = now; + System.out.println("Inserting " + (100L * i / len) + "%"); + } + prep.setInt(1, i); + prep.execute(); + } + conn.commit(); + prep.close(); + stat.close(); + conn.close(); + } + + private static void createAsSelect(String url, boolean optimize) + throws SQLException { + Connection conn = DriverManager.getConnection(url + + ";OPTIMIZE_INSERT_FROM_SELECT=" + optimize); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST2"); + System.out.println("CREATE TABLE ... AS SELECT " + + (optimize ? 
"(optimized)" : "")); + long time = System.nanoTime(); + stat.execute("CREATE TABLE TEST2 AS SELECT * FROM TEST"); + System.out.printf("%.3f sec.\n", (double) (System.nanoTime() - time) / + TimeUnit.SECONDS.toNanos(1)); + stat.execute("INSERT INTO TEST2 SELECT * FROM TEST2"); + stat.close(); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/FileFunctions.java b/modules/h2/src/test/java/org/h2/samples/FileFunctions.java new file mode 100644 index 0000000000000..ca5bbcdb8bc3b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/FileFunctions.java @@ -0,0 +1,89 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.io.IOException; +import java.io.RandomAccessFile; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.Statement; + +/** + * This sample application shows how to create a user defined function + * to read a file from the file system. + */ +public class FileFunctions { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection("jdbc:h2:mem:", "sa", ""); + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS READ_TEXT_FILE " + + "FOR \"org.h2.samples.FileFunctions.readTextFile\" "); + stat.execute("CREATE ALIAS READ_TEXT_FILE_WITH_ENCODING " + + "FOR \"org.h2.samples.FileFunctions.readTextFileWithEncoding\" "); + stat.execute("CREATE ALIAS READ_FILE " + + "FOR \"org.h2.samples.FileFunctions.readFile\" "); + ResultSet rs = stat.executeQuery("CALL READ_FILE('test.txt')"); + rs.next(); + byte[] data = rs.getBytes(1); + System.out.println("length: " + data.length); + rs = stat.executeQuery("CALL READ_TEXT_FILE('test.txt')"); + rs.next(); + String text = rs.getString(1); + System.out.println("text: " + text); + stat.close(); + conn.close(); + } + + /** + * Read a String from a file. The default encoding for this platform is + * used. + * + * @param fileName the file name + * @return the text + */ + public static String readTextFile(String fileName) throws IOException { + byte[] buff = readFile(fileName); + String s = new String(buff); + return s; + } + + /** + * Read a String from a file using the specified encoding. + * + * @param fileName the file name + * @param encoding the encoding + * @return the text + */ + public static String readTextFileWithEncoding(String fileName, + String encoding) throws IOException { + byte[] buff = readFile(fileName); + String s = new String(buff, encoding); + return s; + } + + /** + * Read a file into a byte array. 
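+     * A typical call looks like this (illustrative only; the file name is an
+     * example):
+     * <pre>
+     * byte[] data = FileFunctions.readFile("test.txt");
+     * </pre>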
+ * + * @param fileName the file name + * @return the byte array + */ + public static byte[] readFile(String fileName) throws IOException { + try (RandomAccessFile file = new RandomAccessFile(fileName, "r")) { + byte[] buff = new byte[(int) file.length()]; + file.readFully(buff); + return buff; + } + } +} diff --git a/modules/h2/src/test/java/org/h2/samples/Function.java b/modules/h2/src/test/java/org/h2/samples/Function.java new file mode 100644 index 0000000000000..bc0b99fb1ad89 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/Function.java @@ -0,0 +1,156 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.math.BigInteger; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import org.h2.tools.SimpleResultSet; + +/** + * This sample application shows how to define and use + * custom (user defined) functions in this database. + */ +public class Function { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection( + "jdbc:h2:mem:", "sa", ""); + Statement stat = conn.createStatement(); + + // Using a custom Java function + stat.execute("CREATE ALIAS IS_PRIME " + + "FOR \"org.h2.samples.Function.isPrime\" "); + ResultSet rs; + rs = stat.executeQuery("SELECT IS_PRIME(X), X " + + "FROM SYSTEM_RANGE(1, 20) ORDER BY X"); + while (rs.next()) { + boolean isPrime = rs.getBoolean(1); + if (isPrime) { + int x = rs.getInt(2); + System.out.println(x + " is prime"); + } + } + + // Calling the built-in 'table' function + stat.execute("CREATE TABLE TEST(ID INT) AS " + + "SELECT X FROM SYSTEM_RANGE(1, 100)"); + PreparedStatement prep; + prep = conn.prepareStatement( + "SELECT * FROM TABLE(X INT=?, O INT=?) J " + + "INNER JOIN TEST T ON J.X=T.ID ORDER BY J.O"); + prep.setObject(1, new Integer[] { 30, 20 }); + prep.setObject(2, new Integer[] { 1, 2 }); + rs = prep.executeQuery(); + while (rs.next()) { + System.out.println(rs.getInt(1)); + } + prep.close(); + rs.close(); + + // Using a custom function like table + stat.execute("CREATE ALIAS MATRIX " + + "FOR \"org.h2.samples.Function.getMatrix\" "); + prep = conn.prepareStatement("SELECT * FROM MATRIX(?) 
" + + "ORDER BY X, Y"); + prep.setInt(1, 2); + rs = prep.executeQuery(); + while (rs.next()) { + System.out.println(rs.getInt(1) + "/" + rs.getInt(2)); + } + prep.close(); + + // Creating functions with source code + // in this case the JDK classes must be in the classpath + // where the database is running + stat.execute("create alias make_point as $$ " + + "java.awt.Point newPoint(int x, int y) { " + + "return new java.awt.Point(x, y); } $$"); + // parameters of type OTHER (or OBJECT or JAVA_OBJECT) + // are de-serialized to match the type + stat.execute("create alias get_x as $$ " + + "int pointX(java.awt.geom.Point2D p) { " + + "return (int) p.getX(); } $$"); + rs = stat.executeQuery("call get_x(make_point(10, 20))"); + while (rs.next()) { + System.out.println(rs.getString(1)); + } + + stat.close(); + conn.close(); + } + + /** + * Check if a value is a prime number. + * + * @param value the value + * @return true if it is a prime number + */ + public static boolean isPrime(int value) { + return new BigInteger(String.valueOf(value)).isProbablePrime(100); + } + + /** + * Execute a query. + * + * @param conn the connection + * @param sql the SQL statement + * @return the result set + */ + public static ResultSet query(Connection conn, String sql) throws SQLException { + return conn.createStatement().executeQuery(sql); + } + + /** + * Creates a simple result set with one row. + * + * @return the result set + */ + public static ResultSet simpleResultSet() { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("ID", Types.INTEGER, 10, 0); + rs.addColumn("NAME", Types.VARCHAR, 255, 0); + rs.addRow(0, "Hello"); + return rs; + } + + /** + * Creates a simple result set with two columns. 
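+     * Registered as the alias MATRIX in main() above, the function can then
+     * be queried like a table, for example:
+     * <pre>
+     * SELECT * FROM MATRIX(2) ORDER BY X, Y
+     * </pre>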
+ * + * @param conn the connection + * @param size the number of x and y values + * @return the result set with two columns + */ + public static ResultSet getMatrix(Connection conn, Integer size) + throws SQLException { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("X", Types.INTEGER, 10, 0); + rs.addColumn("Y", Types.INTEGER, 10, 0); + String url = conn.getMetaData().getURL(); + if (url.equals("jdbc:columnlist:connection")) { + return rs; + } + for (int s = size.intValue(), x = 0; x < s; x++) { + for (int y = 0; y < s; y++) { + rs.addRow(x, y); + } + } + return rs; + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/FunctionMultiReturn.java b/modules/h2/src/test/java/org/h2/samples/FunctionMultiReturn.java new file mode 100644 index 0000000000000..dcbe26369e16a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/FunctionMultiReturn.java @@ -0,0 +1,162 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; + +import org.h2.tools.SimpleResultSet; + +/** + * User defined functions can return a result set, + * and can therefore be used like a table. + * This sample application uses such a function to convert + * polar to cartesian coordinates. + */ +public class FunctionMultiReturn { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection( + "jdbc:h2:mem:", "sa", ""); + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS P2C " + + "FOR \"org.h2.samples.FunctionMultiReturn.polar2Cartesian\" "); + PreparedStatement prep = conn.prepareStatement( + "SELECT X, Y FROM P2C(?, ?)"); + prep.setDouble(1, 5.0); + prep.setDouble(2, 0.5); + ResultSet rs = prep.executeQuery(); + while (rs.next()) { + double x = rs.getDouble(1); + double y = rs.getDouble(2); + System.out.println("result: (x=" + x + ", y="+y+")"); + } + + stat.execute("CREATE TABLE TEST(ID IDENTITY, R DOUBLE, A DOUBLE)"); + stat.execute("INSERT INTO TEST(R, A) VALUES(5.0, 0.5), (10.0, 0.6)"); + stat.execute("CREATE ALIAS P2C_SET " + + "FOR \"org.h2.samples.FunctionMultiReturn.polar2CartesianSet\" "); + rs = conn.createStatement().executeQuery( + "SELECT * FROM P2C_SET('SELECT * FROM TEST')"); + while (rs.next()) { + double r = rs.getDouble("R"); + double a = rs.getDouble("A"); + double x = rs.getDouble("X"); + double y = rs.getDouble("Y"); + System.out.println("(r="+r+" a="+a+") :" + + " (x=" + x + ", y="+y+")"); + } + + stat.execute("CREATE ALIAS P2C_A " + + "FOR \"org.h2.samples.FunctionMultiReturn.polar2CartesianArray\" "); + rs = conn.createStatement().executeQuery( + "SELECT R, A, P2C_A(R, A) FROM TEST"); + while (rs.next()) { + double r = rs.getDouble(1); + double a = rs.getDouble(2); + Object o = rs.getObject(3); + Object[] xy = (Object[]) o; + double x = ((Double) xy[0]).doubleValue(); + double y = ((Double) xy[1]).doubleValue(); + System.out.println("(r=" + r + " a=" + a + ") :" + + " (x=" + x + ", y=" + y + ")"); + } + + rs = stat.executeQuery( + "SELECT R, A, ARRAY_GET(E, 1), ARRAY_GET(E, 2) " + + "FROM (SELECT R, A, P2C_A(R, A) E FROM TEST)"); + while (rs.next()) { + double r = rs.getDouble(1); + double a = rs.getDouble(2); + double x = rs.getDouble(3); + double y = rs.getDouble(4); + System.out.println("(r="+r+" 
a="+a+") :" + + " (x=" + x + ", y="+y+")"); + } + rs.close(); + + prep.close(); + conn.close(); + } + + /** + * Convert polar coordinates to cartesian coordinates. The function may be + * called twice, once to retrieve the result columns (with null parameters), + * and the second time to return the data. + * + * @param r the distance from the point 0/0 + * @param alpha the angle + * @return a result set with two columns: x and y + */ + public static ResultSet polar2Cartesian(Double r, Double alpha) { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("X", Types.DOUBLE, 0, 0); + rs.addColumn("Y", Types.DOUBLE, 0, 0); + if (r != null && alpha != null) { + double x = r.doubleValue() * Math.cos(alpha.doubleValue()); + double y = r.doubleValue() * Math.sin(alpha.doubleValue()); + rs.addRow(x, y); + } + return rs; + } + + /** + * Convert polar coordinates to cartesian coordinates. The function may be + * called twice, once to retrieve the result columns (with null parameters), + * and the second time to return the data. + * + * @param r the distance from the point 0/0 + * @param alpha the angle + * @return an array two values: x and y + */ + public static Object[] polar2CartesianArray(Double r, Double alpha) { + double x = r.doubleValue() * Math.cos(alpha.doubleValue()); + double y = r.doubleValue() * Math.sin(alpha.doubleValue()); + return new Object[]{x, y}; + } + + /** + * Convert a set of polar coordinates to cartesian coordinates. The function + * may be called twice, once to retrieve the result columns (with null + * parameters), and the second time to return the data. 
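+     * Registered as the alias P2C_SET in main() above, the function is then
+     * used like a table, for example:
+     * <pre>
+     * SELECT * FROM P2C_SET('SELECT * FROM TEST')
+     * </pre>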
+ * + * @param conn the connection + * @param query the query + * @return a result set with the coordinates + */ + public static ResultSet polar2CartesianSet(Connection conn, String query) + throws SQLException { + SimpleResultSet result = new SimpleResultSet(); + result.addColumn("R", Types.DOUBLE, 0, 0); + result.addColumn("A", Types.DOUBLE, 0, 0); + result.addColumn("X", Types.DOUBLE, 0, 0); + result.addColumn("Y", Types.DOUBLE, 0, 0); + if (query != null) { + ResultSet rs = conn.createStatement().executeQuery(query); + while (rs.next()) { + double r = rs.getDouble("R"); + double alpha = rs.getDouble("A"); + double x = r * Math.cos(alpha); + double y = r * Math.sin(alpha); + result.addRow(r, alpha, x, y); + } + } + return result; + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/HelloWorld.java b/modules/h2/src/test/java/org/h2/samples/HelloWorld.java new file mode 100644 index 0000000000000..f3fa43ed1412c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/HelloWorld.java @@ -0,0 +1,48 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.Statement; +import org.h2.tools.DeleteDbFiles; + +/** + * A very simple class that shows how to load the driver, create a database, + * create a table, and insert some data. + */ +public class HelloWorld { + + /** + * Called when ran from command line. + * + * @param args ignored + */ + public static void main(String... 
args) throws Exception { + // delete the database named 'test' in the user home directory + DeleteDbFiles.execute("~", "test", true); + + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection("jdbc:h2:~/test"); + Statement stat = conn.createStatement(); + + // this line would initialize the database + // from the SQL script file 'init.sql' + // stat.execute("runscript from 'init.sql'"); + + stat.execute("create table test(id int primary key, name varchar(255))"); + stat.execute("insert into test values(1, 'Hello')"); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + while (rs.next()) { + System.out.println(rs.getString("name")); + } + stat.close(); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/InitDatabaseFromJar.java b/modules/h2/src/test/java/org/h2/samples/InitDatabaseFromJar.java new file mode 100644 index 0000000000000..3d1bc8d61cca2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/InitDatabaseFromJar.java @@ -0,0 +1,70 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.io.InputStream; +import java.io.InputStreamReader; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.Statement; + +import org.h2.tools.RunScript; + +/** + * In this example a database is initialized from compressed script in a jar + * file. + */ +public class InitDatabaseFromJar { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + createScript(); + new InitDatabaseFromJar().initDb(); + } + + /** + * Create a script from a new database. 
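+     * The SCRIPT statement can also write a compressed script directly, for
+     * example (a sketch only; this sample writes a plain script.sql):
+     * <pre>
+     * SCRIPT TO 'script.zip' COMPRESSION ZIP
+     * </pre>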
+ */ + private static void createScript() throws Exception { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection("jdbc:h2:mem:test"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(NAME VARCHAR)"); + stat.execute("INSERT INTO TEST VALUES('Hello World')"); + stat.execute("SCRIPT TO 'script.sql'"); + stat.close(); + conn.close(); + } + + /** + * Initialize a database from a SQL script file. + */ + void initDb() throws Exception { + Class.forName("org.h2.Driver"); + InputStream in = getClass().getResourceAsStream("script.sql"); + if (in == null) { + System.out.println("Please add the file script.sql to the classpath, package " + + getClass().getPackage().getName()); + } else { + Connection conn = DriverManager.getConnection("jdbc:h2:mem:test"); + RunScript.execute(conn, new InputStreamReader(in)); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + while (rs.next()) { + System.out.println(rs.getString(1)); + } + rs.close(); + stat.close(); + conn.close(); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/samples/MixedMode.java b/modules/h2/src/test/java/org/h2/samples/MixedMode.java new file mode 100644 index 0000000000000..df9b3f514018b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/MixedMode.java @@ -0,0 +1,64 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.tools.Server; + +/** + * This sample program opens the same database once in embedded mode, + * and once in the server mode. The embedded mode is faster, but only + * the server mode supports remote connections. 
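+ * While this sample runs, a remote client could connect with the URL it
+ * prints, for example:
+ * <pre>
+ * Connection conn = DriverManager.getConnection(
+ *         "jdbc:h2:tcp://localhost:9081/~/test", "sa", "sa");
+ * </pre>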
+ */
+public class MixedMode {

+    /**
+     * This method is called when executing this sample application from the
+     * command line.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) throws Exception {

+        // start the server, allowing remote access to the database
+        Server server = Server.createTcpServer("-tcpPort", "9081");
+        server.start();
+        System.out.println(
+                "You can access the database remotely now, using the URL:");
+        System.out.println(
+                "jdbc:h2:tcp://localhost:9081/~/test (user: sa, password: sa)");

+        // now use the database in your application in embedded mode
+        Class.forName("org.h2.Driver");
+        Connection conn = DriverManager.getConnection(
+                "jdbc:h2:~/test", "sa", "sa");

+        // some simple 'business usage'
+        Statement stat = conn.createStatement();
+        stat.execute("DROP TABLE TIMER IF EXISTS");
+        stat.execute("CREATE TABLE TIMER(ID INT PRIMARY KEY, TIME VARCHAR)");
+        System.out.println("Execute this a few times: " +
+                "SELECT TIME FROM TIMER");
+        System.out.println("To stop this application " +
+                "(and the server), run: DROP TABLE TIMER");
+        try {
+            while (true) {
+                // runs forever, except if you drop the table remotely
+                stat.execute("MERGE INTO TIMER VALUES(1, NOW())");
+                Thread.sleep(1000);
+            }
+        } catch (SQLException e) {
+            System.out.println("Error: " + e.toString());
+        }
+        conn.close();

+        // stop the server
+        server.stop();
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/Newsfeed.java b/modules/h2/src/test/java/org/h2/samples/Newsfeed.java
new file mode 100644
index 0000000000000..24d700bbcf912
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/Newsfeed.java
@@ -0,0 +1,83 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+
+import org.h2.tools.RunScript;
+import org.h2.util.StringUtils;
+
+/**
+ * The newsfeed application uses XML functions to create an RSS and Atom feed
+ * from a simple SQL script. A textual representation of the data is created as
+ * well.
+ */
+public class Newsfeed {
+
+    /**
+     * This method is called when executing this sample application from the
+     * command line.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) throws Exception {
+        String targetDir = args.length == 0 ? "." : args[0];
+        Class.forName("org.h2.Driver");
+        Connection conn = DriverManager.getConnection("jdbc:h2:mem:", "sa", "");
+        InputStream in = Newsfeed.class.getResourceAsStream("newsfeed.sql");
+        ResultSet rs = RunScript.execute(conn,
+                new InputStreamReader(in, StandardCharsets.ISO_8859_1));
+        in.close();
+        while (rs.next()) {
+            String file = rs.getString("FILE");
+            String content = rs.getString("CONTENT");
+            if (file.endsWith(".txt")) {
+                content = convertHtml2Text(content);
+            }
+            new File(targetDir).mkdirs();
+            FileOutputStream out = new FileOutputStream(targetDir + "/" + file);
+            Writer writer = new OutputStreamWriter(out, StandardCharsets.UTF_8);
+            writer.write(content);
+            writer.close();
+            out.close();
+        }
+        conn.close();
+    }
+
+    /**
+     * Convert HTML text to plain text.
+     *
+     * @param html the html text
+     * @return the plain text
+     */
+    private static String convertHtml2Text(String html) {
+        String s = html;
+        s = StringUtils.replaceAll(s, "<b>", "");
+        s = StringUtils.replaceAll(s, "</b>", "");
+        s = StringUtils.replaceAll(s, "<ul>", "");
+        s = StringUtils.replaceAll(s, "</ul>", "");
+        s = StringUtils.replaceAll(s, "<li>", "- ");
+        s = StringUtils.replaceAll(s, "</li>", "");
+        s = StringUtils.replaceAll(s, "<a href=\"", "( ");
+        s = StringUtils.replaceAll(s, "\">", " ) ");
+        s = StringUtils.replaceAll(s, "</a>", "");
+        s = StringUtils.replaceAll(s, "<br />", "");
+        s = StringUtils.replaceAll(s, "<br/>", "");
+        s = StringUtils.replaceAll(s, "<br>", "");
+        if (s.indexOf('<') >= 0 || s.indexOf('>') >= 0) {
+            throw new RuntimeException("Unsupported HTML Tag: < or > in " + s);
+        }
+        return s;
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/ReadOnlyDatabaseInZip.java b/modules/h2/src/test/java/org/h2/samples/ReadOnlyDatabaseInZip.java
new file mode 100644
index 0000000000000..4bffe606e098c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/ReadOnlyDatabaseInZip.java
@@ -0,0 +1,69 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+import org.h2.store.fs.FileUtils;
+import org.h2.tools.Backup;
+import org.h2.tools.DeleteDbFiles;
+
+/**
+ * This sample application shows how to create and use a read-only database in a
+ * zip file. The database file is split into multiple smaller files, to speed up
+ * random access. Splitting up the file is only needed if the database file is
+ * larger than a few megabytes.
+ */
+public class ReadOnlyDatabaseInZip {
+
+    /**
+     * This method is called when executing this sample application from the
+     * command line.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) throws Exception {
+
+        // delete all files in this directory
+        FileUtils.deleteRecursive("~/temp", false);
+
+        Connection conn;
+        Class.forName("org.h2.Driver");
+
+        // create a database where the database file is split into
+        // multiple small files, 4 MB each (2^22). The larger the
+        // parts, the faster the database opens, but the more files
+        // there are. 4 MB seems to be a good compromise, so
+        // the prefix split:22: is used, which means each part is
+        // 2^22 bytes long
+        conn = DriverManager.getConnection(
+                "jdbc:h2:split:22:~/temp/test");
+
+        System.out.println("adding test data...");
+        Statement stat = conn.createStatement();
+        stat.execute(
+                "create table test(id int primary key, name varchar) " +
+                "as select x, space(1000) from system_range(1, 2000)");
+
+        System.out.println("defrag to reduce random access...");
+        stat.execute("shutdown defrag");
+        conn.close();
+
+        System.out.println("create the zip file...");
+        Backup.execute("~/temp/test.zip", "~/temp", "", true);
+
+        // delete the old database files
+        DeleteDbFiles.execute("split:~/temp", "test", true);
+
+        System.out.println("open the database from the zip file...");
+        conn = DriverManager.getConnection(
+                "jdbc:h2:split:zip:~/temp/test.zip!/test");
+        // the database can now be used
+        conn.close();
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/RowAccessRights.java b/modules/h2/src/test/java/org/h2/samples/RowAccessRights.java
new file mode 100644
index 0000000000000..1daa7a75a256c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/RowAccessRights.java
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import org.h2.tools.DeleteDbFiles;
+import org.h2.tools.TriggerAdapter;
+
+/**
+ * This sample application shows how to emulate per-row access rights so that
+ * each user can only access rows created by that user.
+ */
+public class RowAccessRights extends TriggerAdapter {
+
+    private PreparedStatement prepDelete, prepInsert;
+
+    /**
+     * Called when run from the command line.
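+     * The per-row filter is the WHERE clause of the view created below:
+     * <pre>
+     * create view test as select id, data from test_data where user = user()
+     * </pre>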
+ * + * @param args ignored + */ + public static void main(String... args) throws Exception { + DeleteDbFiles.execute("~", "test", true); + + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection( + "jdbc:h2:~/test"); + Statement stat = conn.createStatement(); + + stat.execute("create table test_data(" + + "id int, user varchar, data varchar, primary key(id, user))"); + stat.execute("create index on test_data(id, user)"); + + stat.execute("create view test as select id, data " + + "from test_data where user = user()"); + stat.execute("create trigger t_test instead of " + + "insert, update, delete on test for each row " + + "call \"" + RowAccessRights.class.getName() + "\""); + stat.execute("create user a password 'a'"); + stat.execute("create user b password 'b'"); + stat.execute("grant all on test to a"); + stat.execute("grant all on test to b"); + + ResultSet rs; + + Connection connA = DriverManager.getConnection( + "jdbc:h2:~/test", "a", "a"); + Statement statA = connA.createStatement(); + statA.execute("insert into test values(1, 'Hello'), (2, 'World')"); + statA.execute("update test set data = 'Hello!' where id = 1"); + statA.execute("delete from test where id = 2"); + + Connection connB = DriverManager.getConnection( + "jdbc:h2:~/test", "b", "b"); + Statement statB = connB.createStatement(); + statB.execute("insert into test values(1, 'Hallo'), (2, 'Welt')"); + statB.execute("update test set data = 'Hallo!' 
where id = 1"); + statB.execute("delete from test where id = 2"); + + rs = statA.executeQuery("select * from test"); + while (rs.next()) { + System.out.println("a: " + rs.getInt(1) + "/" + rs.getString(2)); + } + + rs = statB.executeQuery("select * from test"); + while (rs.next()) { + System.out.println("b: " + + rs.getInt(1) + "/" + rs.getString(2)); + } + + connA.close(); + connB.close(); + + rs = stat.executeQuery("select * from test_data"); + while (rs.next()) { + System.out.println(rs.getInt(1) + "/" + + rs.getString(2) + "/" + rs.getString(3)); + } + conn.close(); + + } + + @Override + public void init(Connection conn, String schemaName, String triggerName, + String tableName, boolean before, int type) throws SQLException { + prepDelete = conn.prepareStatement( + "delete from test_data where id = ? and user = ?"); + prepInsert = conn.prepareStatement( + "insert into test_data values(?, ?, ?)"); + super.init(conn, schemaName, triggerName, tableName, before, type); + } + + @Override + public void fire(Connection conn, ResultSet oldRow, ResultSet newRow) + throws SQLException { + String user = conn.getMetaData().getUserName(); + if (oldRow != null && oldRow.next()) { + prepDelete.setInt(1, oldRow.getInt(1)); + prepDelete.setString(2, user); + int deleted = prepDelete.executeUpdate(); + if (deleted == 0 && newRow != null) { + // update: + // if deleting failed, insert must fail as well + newRow = null; + } + } + if (newRow != null && newRow.next()) { + prepInsert.setInt(1, newRow.getInt(1)); + prepInsert.setString(2, user); + prepInsert.setString(3, newRow.getString(2)); + prepInsert.executeUpdate(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/SQLInjection.java b/modules/h2/src/test/java/org/h2/samples/SQLInjection.java new file mode 100644 index 0000000000000..3765e275ad017 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/SQLInjection.java @@ -0,0 +1,433 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+/**
+ * SQL injection is a common security vulnerability for applications that use a
+ * database. It is one of the most common security vulnerabilities for web
+ * applications today. This sample application shows how SQL injection works,
+ * and how to protect the application from it.
+ */
+public class SQLInjection {
+
+    private Connection conn;
+    private Statement stat;
+
+    /**
+     * This method is called when executing this sample application from the
+     * command line.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) throws Exception {
+        new SQLInjection().run("org.h2.Driver",
+                "jdbc:h2:test", "sa", "sa");
+//        new SQLInjection().run("org.postgresql.Driver",
+//                "jdbc:postgresql:jpox2", "sa", "sa");
+//        new SQLInjection().run("com.mysql.jdbc.Driver",
+//                "jdbc:mysql://localhost/test", "sa", "sa");
+//        new SQLInjection().run("org.hsqldb.jdbcDriver",
+//                "jdbc:hsqldb:test", "sa", "");
+//        new SQLInjection().run(
+//                "org.apache.derby.jdbc.EmbeddedDriver",
+//                "jdbc:derby:test3;create=true", "sa", "sa");
+    }
+
+    /**
+     * Run the test against the specified database.
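+     * The core of the protection demonstrated below is to bind user input
+     * instead of concatenating it into SQL, as in loginByNameSecure():
+     * <pre>
+     * PreparedStatement prep = conn.prepareStatement(
+     *         "SELECT * FROM USERS WHERE NAME=? AND PASSWORD=?");
+     * prep.setString(1, name);
+     * prep.setString(2, password);
+     * </pre>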
+ * + * @param driver the JDBC driver name + * @param url the database URL + * @param user the user name + * @param password the password + */ + void run(String driver, String url, String user, String password) + throws Exception { + Class.forName(driver); + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + try { + stat.execute("DROP TABLE USERS"); + } catch (SQLException e) { + // ignore + } + stat.execute("CREATE TABLE USERS(ID INT PRIMARY KEY, " + + "NAME VARCHAR(255), PASSWORD VARCHAR(255))"); + stat.execute("INSERT INTO USERS VALUES(1, 'admin', 'super')"); + stat.execute("INSERT INTO USERS VALUES(2, 'guest', '123456')"); + stat.execute("INSERT INTO USERS VALUES(3, 'test', 'abc')"); + + loginByNameInsecure(); + + if (url.startsWith("jdbc:h2:")) { + loginStoredProcedureInsecure(); + limitRowAccess(); + } + + loginByNameSecure(); + + if (url.startsWith("jdbc:h2:")) { + stat.execute("SET ALLOW_LITERALS NONE"); + stat.execute("SET ALLOW_LITERALS NUMBERS"); + stat.execute("SET ALLOW_LITERALS ALL"); + } + + loginByIdInsecure(); + loginByIdSecure(); + + try { + stat.execute("DROP TABLE ITEMS"); + } catch (SQLException e) { + // ignore + } + + stat.execute("CREATE TABLE ITEMS(ID INT PRIMARY KEY, " + + "NAME VARCHAR(255), ACTIVE INT)"); + stat.execute("INSERT INTO ITEMS VALUES(0, 'XBox', 0)"); + stat.execute("INSERT INTO ITEMS VALUES(1, 'XBox 360', 1)"); + stat.execute("INSERT INTO ITEMS VALUES(2, 'PlayStation 1', 0)"); + stat.execute("INSERT INTO ITEMS VALUES(3, 'PlayStation 2', 1)"); + stat.execute("INSERT INTO ITEMS VALUES(4, 'PlayStation 3', 1)"); + + listActiveItems(); + + if (url.startsWith("jdbc:h2:")) { + stat.execute("DROP CONSTANT IF EXISTS TYPE_INACTIVE"); + stat.execute("DROP CONSTANT IF EXISTS TYPE_ACTIVE"); + stat.execute("CREATE CONSTANT TYPE_INACTIVE VALUE 0"); + stat.execute("CREATE CONSTANT TYPE_ACTIVE VALUE 1"); + listActiveItemsUsingConstants(); + } + + listItemsSortedInsecure(); + listItemsSortedSecure(); + 
+ if (url.startsWith("jdbc:h2:")) { + listItemsSortedSecureParam(); + storePasswordHashWithSalt(); + } + + conn.close(); + } + + /** + * Simulate a login using an insecure method. + */ + void loginByNameInsecure() throws Exception { + System.out.println("Insecure Systems Inc. - login"); + String name = input("Name?"); + String password = input("Password?"); + ResultSet rs = stat.executeQuery("SELECT * FROM USERS WHERE " + + "NAME='" + name + "' AND PASSWORD='" + password + "'"); + if (rs.next()) { + System.out.println("Welcome!"); + } else { + System.out.println("Access denied!"); + } + } + + /** + * Utility method to get a user record given the user name and password. + * This method is secure. + * + * @param conn the database connection + * @param userName the user name + * @param password the password + * @return a result set with the user record if the password matches + */ + public static ResultSet getUser(Connection conn, String userName, + String password) throws Exception { + PreparedStatement prep = conn.prepareStatement( + "SELECT * FROM USERS WHERE NAME=? AND PASSWORD=?"); + prep.setString(1, userName); + prep.setString(2, password); + return prep.executeQuery(); + } + + /** + * Utility method to change a password of a user. + * This method is secure, except that the old password is not checked. + * + * @param conn the database connection + * @param userName the user name + * @param password the password + * @return the new password + */ + public static String changePassword(Connection conn, String userName, + String password) throws Exception { + PreparedStatement prep = conn.prepareStatement( + "UPDATE USERS SET PASSWORD=? WHERE NAME=?"); + prep.setString(1, password); + prep.setString(2, userName); + prep.executeUpdate(); + return password; + } + + /** + * Simulate a login using an insecure method. + * A stored procedure is used here. + */ + void loginStoredProcedureInsecure() throws Exception { + System.out.println("Insecure Systems Inc. 
- login using a stored procedure"); + stat.execute("CREATE ALIAS IF NOT EXISTS " + + "GET_USER FOR \"org.h2.samples.SQLInjection.getUser\""); + stat.execute("CREATE ALIAS IF NOT EXISTS " + + "CHANGE_PASSWORD FOR \"org.h2.samples.SQLInjection.changePassword\""); + String name = input("Name?"); + String password = input("Password?"); + ResultSet rs = stat.executeQuery( + "CALL GET_USER('" + name + "', '" + password + "')"); + if (rs.next()) { + System.out.println("Welcome!"); + } else { + System.out.println("Access denied!"); + } + } + + /** + * Simulate a login using a secure method. + */ + void loginByNameSecure() throws Exception { + System.out.println("Secure Systems Inc. - login using placeholders"); + String name = input("Name?"); + String password = input("Password?"); + PreparedStatement prep = conn.prepareStatement( + "SELECT * FROM USERS WHERE " + + "NAME=? AND PASSWORD=?"); + prep.setString(1, name); + prep.setString(2, password); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + System.out.println("Welcome!"); + } else { + System.out.println("Access denied!"); + } + rs.close(); + prep.close(); + } + + /** + * Sample code to limit access only to specific rows. + */ + void limitRowAccess() throws Exception { + System.out.println("Secure Systems Inc. - limit row access"); + stat.execute("DROP TABLE IF EXISTS SESSION_USER"); + stat.execute("CREATE TABLE SESSION_USER(ID INT, USER INT)"); + stat.execute("DROP VIEW IF EXISTS MY_USER"); + stat.execute("CREATE VIEW MY_USER AS " + + "SELECT U.* FROM SESSION_USER S, USERS U " + + "WHERE S.ID=SESSION_ID() AND S.USER=U.ID"); + stat.execute("INSERT INTO SESSION_USER VALUES(SESSION_ID(), 1)"); + ResultSet rs = stat.executeQuery("SELECT ID, NAME FROM MY_USER"); + while (rs.next()) { + System.out.println(rs.getString(1) + ": " + rs.getString(2)); + } + } + + /** + * Simulate a login using an insecure method. + */ + void loginByIdInsecure() throws Exception { + System.out.println("Half Secure Systems Inc. 
- login by id"); + String id = input("User ID?"); + String password = input("Password?"); + try { + PreparedStatement prep = conn.prepareStatement( + "SELECT * FROM USERS WHERE " + + "ID=" + id + " AND PASSWORD=?"); + prep.setString(1, password); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + System.out.println("Welcome!"); + } else { + System.out.println("Access denied!"); + } + rs.close(); + prep.close(); + } catch (SQLException e) { + System.out.println(e); + } + } + + /** + * Simulate a login using a secure method. + */ + void loginByIdSecure() throws Exception { + System.out.println("Secure Systems Inc. - login by id"); + String id = input("User ID?"); + String password = input("Password?"); + try { + PreparedStatement prep = conn.prepareStatement( + "SELECT * FROM USERS WHERE " + + "ID=? AND PASSWORD=?"); + prep.setInt(1, Integer.parseInt(id)); + prep.setString(2, password); + ResultSet rs = prep.executeQuery(); + if (rs.next()) { + System.out.println("Welcome!"); + } else { + System.out.println("Access denied!"); + } + rs.close(); + prep.close(); + } catch (Exception e) { + System.out.println(e); + } + } + + /** + * List active items. + * The method uses the hard coded value '1', and therefore the database + * can not verify if the SQL statement was constructed with user + * input or not. + */ + void listActiveItems() throws Exception { + System.out.println("Half Secure Systems Inc. - list active items"); + ResultSet rs = stat.executeQuery( + "SELECT NAME FROM ITEMS WHERE ACTIVE=1"); + while (rs.next()) { + System.out.println("Name: " + rs.getString(1)); + } + } + + /** + * List active items. + * The method uses a constant, and therefore the database + * knows it does not contain user input. + */ + void listActiveItemsUsingConstants() throws Exception { + System.out.println("Secure Systems Inc. 
- list active items"); + ResultSet rs = stat.executeQuery( + "SELECT NAME FROM ITEMS WHERE ACTIVE=TYPE_ACTIVE"); + while (rs.next()) { + System.out.println("Name: " + rs.getString(1)); + } + } + + /** + * List items using a specified sort order. + * The method is not secure as user input is used to construct the + * SQL statement. + */ + void listItemsSortedInsecure() throws Exception { + System.out.println("Insecure Systems Inc. - list items"); + String order = input("order (id, name)?"); + try { + ResultSet rs = stat.executeQuery( + "SELECT ID, NAME FROM ITEMS ORDER BY " + order); + while (rs.next()) { + System.out.println(rs.getString(1) + ": " + rs.getString(2)); + } + } catch (SQLException e) { + System.out.println(e); + } + } + + /** + * List items using a specified sort order. + * The method is secure as the user input is validated before use. + * However the database has no chance to verify this. + */ + void listItemsSortedSecure() throws Exception { + System.out.println("Secure Systems Inc. - list items"); + String order = input("order (id, name)?"); + if (!order.matches("[a-zA-Z0-9_]*")) { + order = "id"; + } + try { + ResultSet rs = stat.executeQuery( + "SELECT ID, NAME FROM ITEMS ORDER BY " + order); + while (rs.next()) { + System.out.println(rs.getString(1) + ": " + rs.getString(2)); + } + } catch (SQLException e) { + System.out.println(e); + } + } + + /** + * List items using a specified sort order. + * The method is secure as a parameterized statement is used. + */ + void listItemsSortedSecureParam() throws Exception { + System.out.println("Secure Systems Inc. 
- list items"); + String order = input("order (1, 2, -1, -2)?"); + PreparedStatement prep = conn.prepareStatement( + "SELECT ID, NAME FROM ITEMS ORDER BY ?"); + try { + prep.setInt(1, Integer.parseInt(order)); + ResultSet rs = prep.executeQuery(); + while (rs.next()) { + System.out.println(rs.getString(1) + ": " + rs.getString(2)); + } + rs.close(); + } catch (Exception e) { + System.out.println(e); + } + prep.close(); + } + + /** + * This method creates a one way hash from the password + * (using a random salt), and stores this information instead of the + * password. + */ + void storePasswordHashWithSalt() throws Exception { + System.out.println("Very Secure Systems Inc. - login"); + stat.execute("DROP TABLE IF EXISTS USERS2"); + stat.execute("CREATE TABLE USERS2(ID INT PRIMARY KEY, " + + "NAME VARCHAR, SALT BINARY, HASH BINARY)"); + stat.execute("INSERT INTO USERS2 VALUES" + + "(1, 'admin', SECURE_RAND(16), NULL)"); + stat.execute("DROP CONSTANT IF EXISTS HASH_ITERATIONS"); + stat.execute("DROP CONSTANT IF EXISTS HASH_ALGORITHM"); + stat.execute("CREATE CONSTANT HASH_ITERATIONS VALUE 100"); + stat.execute("CREATE CONSTANT HASH_ALGORITHM VALUE 'SHA256'"); + stat.execute("UPDATE USERS2 SET " + + "HASH=HASH(HASH_ALGORITHM, " + + "STRINGTOUTF8('abc' || SALT), HASH_ITERATIONS) " + + "WHERE ID=1"); + String user = input("user?"); + String password = input("password?"); + stat.execute("SET ALLOW_LITERALS NONE"); + PreparedStatement prep = conn.prepareStatement( + "SELECT * FROM USERS2 WHERE NAME=? AND " + + "HASH=HASH(HASH_ALGORITHM, STRINGTOUTF8(? 
|| SALT), HASH_ITERATIONS)"); + prep.setString(1, user); + prep.setString(2, password); + ResultSet rs = prep.executeQuery(); + while (rs.next()) { + System.out.println("name: " + rs.getString("NAME")); + System.out.println("salt: " + rs.getString("SALT")); + System.out.println("hash: " + rs.getString("HASH")); + } + rs.close(); + prep.close(); + stat.execute("SET ALLOW_LITERALS ALL"); + stat.close(); + } + + /** + * Utility method to get user input from the command line. + * + * @param prompt the prompt + * @return the user input + */ + String input(String prompt) throws Exception { + System.out.print(prompt); + return new BufferedReader(new InputStreamReader(System.in)).readLine(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/SecurePassword.java b/modules/h2/src/test/java/org/h2/samples/SecurePassword.java new file mode 100644 index 0000000000000..2ac0e14247269 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/SecurePassword.java @@ -0,0 +1,89 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import java.util.Properties; + +/** + * This example shows how to secure passwords + * (both database passwords, and account passwords). + */ +public class SecurePassword { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + + Class.forName("org.h2.Driver"); + String url = "jdbc:h2:data/simple"; + String user = "sam"; + char[] password = {'t', 'i', 'a', 'E', 'T', 'r', 'p'}; + + // This is the normal, but 'unsafe' way to connect: + // the password may reside in the main memory for an undefined time, + // or even written to disk (swap file): + // Connection conn = + // DriverManager.getConnection(url, user, new String(password)); + + // This is the most safe way to connect: the password is overwritten + // after use + Properties prop = new Properties(); + prop.setProperty("user", user); + prop.put("password", password); + Connection conn = DriverManager.getConnection(url, prop); + + // For security reasons, account passwords should not be stored directly + // in a database. Instead, only the hash should be stored. Also, + // PreparedStatements must be used to avoid SQL injection: + Statement stat = conn.createStatement(); + stat.execute( + "drop table account if exists"); + stat.execute( + "create table account(" + + "name varchar primary key, " + + "salt binary default secure_rand(16), " + + "hash binary)"); + PreparedStatement prep; + prep = conn.prepareStatement("insert into account(name) values(?)"); + prep.setString(1, "Joe"); + prep.execute(); + prep.close(); + + prep = conn.prepareStatement( + "update account set " + + "hash=hash('SHA256', stringtoutf8(salt||?), 10) " + + "where name=?"); + prep.setString(1, "secret"); + prep.setString(2, "Joe"); + prep.execute(); + prep.close(); + + prep = conn.prepareStatement( + "select * from account " + + "where name=? 
" + + "and hash=hash('SHA256', stringtoutf8(salt||?), 10)"); + prep.setString(1, "Joe"); + prep.setString(2, "secret"); + ResultSet rs = prep.executeQuery(); + while (rs.next()) { + System.out.println(rs.getString("name")); + } + rs.close(); + prep.close(); + stat.close(); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/ShowProgress.java b/modules/h2/src/test/java/org/h2/samples/ShowProgress.java new file mode 100644 index 0000000000000..523c86801f6ef --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/ShowProgress.java @@ -0,0 +1,172 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.api.DatabaseEventListener; +import org.h2.jdbc.JdbcConnection; + +/** + * This example application implements a database event listener. This is useful + * to display progress information while opening a large database, or to log + * database exceptions. + */ +public class ShowProgress implements DatabaseEventListener { + + private final long startNs; + private long lastNs; + + /** + * Create a new instance of this class, and startNs the timer. + */ + public ShowProgress() { + startNs = lastNs = System.nanoTime(); + } + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + new ShowProgress().test(); + } + + /** + * Run the progress test. 
+ */ + void test() throws Exception { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection("jdbc:h2:test", "sa", ""); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, 'Test' || SPACE(100))"); + long time; + time = System.nanoTime(); + int len = 1000; + for (int i = 0; i < len; i++) { + long now = System.nanoTime(); + if (now > time + TimeUnit.SECONDS.toNanos(1)) { + time = now; + System.out.println("Inserting " + (100L * i / len) + "%"); + } + prep.setInt(1, i); + prep.execute(); + } + boolean abnormalTermination = true; + if (abnormalTermination) { + ((JdbcConnection) conn).setPowerOffCount(1); + try { + stat.execute("INSERT INTO TEST VALUES(-1, 'Test' || SPACE(100))"); + } catch (SQLException e) { + // ignore + } + } else { + conn.close(); + } + + System.out.println("Open connection..."); + time = System.nanoTime(); + conn = DriverManager.getConnection( + "jdbc:h2:test;DATABASE_EVENT_LISTENER='" + + getClass().getName() + "'", "sa", ""); + time = System.nanoTime() - time; + System.out.println("Done after " + TimeUnit.NANOSECONDS.toMillis(time) + " ms"); + prep.close(); + stat.close(); + conn.close(); + + } + + /** + * This method is called if an exception occurs in the database. + * + * @param e the exception + * @param sql the SQL statement + */ + @Override + public void exceptionThrown(SQLException e, String sql) { + System.out.println("Error executing " + sql); + e.printStackTrace(); + } + + /** + * This method is called when opening the database to notify about the + * progress. 
+ * + * @param state the current state + * @param name the object name (depends on the state) + * @param current the current progress + * @param max the 100% mark + */ + @Override + public void setProgress(int state, String name, int current, int max) { + long time = System.nanoTime(); + if (time < lastNs + TimeUnit.SECONDS.toNanos(5)) { + return; + } + lastNs = time; + String stateName = "?"; + switch (state) { + case STATE_SCAN_FILE: + stateName = "Scan " + name; + break; + case STATE_CREATE_INDEX: + stateName = "Create Index " + name; + break; + case STATE_RECOVER: + stateName = "Recover"; + break; + default: + return; + } + try { + Thread.sleep(1); + } catch (InterruptedException e) { + // ignore + } + System.out.println("State: " + stateName + " " + + (100 * current / max) + "% (" + + current + " of " + max + ") " + + TimeUnit.NANOSECONDS.toMillis(time - startNs) + " ms"); + } + + /** + * This method is called when the database is closed. + */ + @Override + public void closingDatabase() { + System.out.println("Closing the database"); + } + + /** + * This method is called just after creating the instance. + * + * @param url the database URL + */ + @Override + public void init(String url) { + System.out.println("Initializing the event listener for database " + url); + } + + /** + * This method is called when the database is open. + */ + @Override + public void opened() { + // do nothing + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/ShutdownServer.java b/modules/h2/src/test/java/org/h2/samples/ShutdownServer.java new file mode 100644 index 0000000000000..2e6e366b84268 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/ShutdownServer.java @@ -0,0 +1,23 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+/**
+ * This very simple sample application stops an H2 TCP server
+ * if it is running.
+ */
+public class ShutdownServer {
+
+    /**
+     * This method is called when executing this sample application from the
+     * command line.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) throws Exception {
+        org.h2.tools.Server.shutdownTcpServer("tcp://localhost:9094", "", false, false);
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/ToDate.java b/modules/h2/src/test/java/org/h2/samples/ToDate.java
new file mode 100644
index 0000000000000..0b83831bfb1c8
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/ToDate.java
@@ -0,0 +1,68 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import org.h2.tools.DeleteDbFiles;
+
+/**
+ * A very simple class that shows how to load the driver, create a database,
+ * create a table, and insert some data.
+ */
+public class ToDate {
+
+    /**
+     * Called when run from the command line.
+     *
+     * @param args ignored
+     */
+    public static void main(String...
args) throws Exception {
+
+        // delete the database named 'test' in the user home directory
+        DeleteDbFiles.execute("~", "test", true);
+
+        Class.forName("org.h2.Driver");
+        Connection conn = DriverManager.getConnection("jdbc:h2:~/test");
+        Statement stat = conn.createStatement();
+
+        stat.execute("create table ToDateTest(id int primary key, " +
+                "start_date datetime, end_date datetime)");
+        stat.execute("insert into ToDateTest values(0, " +
+                "ADD_MONTHS(TO_DATE('2015-11-13', 'yyyy-MM-DD'), 1), " +
+                "TO_DATE('2015-12-15', 'YYYY-MM-DD'))");
+        stat.execute("insert into ToDateTest values(1, " +
+                "TO_DATE('2015-11-13', 'yyyy-MM-DD'), " +
+                "TO_DATE('2015-12-15', 'YYYY-MM-DD'))");
+        stat.execute("insert into ToDateTest values(2, " +
+                "TO_DATE('2015-12-12 00:00:00', 'yyyy-MM-DD HH24:MI:ss'), " +
+                "TO_DATE('2015-12-16 15:00:00', 'YYYY-MM-DD HH24:MI:ss'))");
+        stat.execute("insert into ToDateTest values(3, " +
+                "TO_DATE('2015-12-12 08:00 A.M.', 'yyyy-MM-DD HH:MI AM'), " +
+                "TO_DATE('2015-12-17 08:00 P.M.', 'YYYY-MM-DD HH:MI AM'))");
+        stat.execute("insert into ToDateTest values(4, " +
+                "TO_DATE(substr('2015-12-12 08:00 A.M.', 1, 10), 'yyyy-MM-DD'), " +
+                "TO_DATE('2015-12-17 08:00 P.M.', 'YYYY-MM-DD HH:MI AM'))");
+
+        ResultSet rs = stat.executeQuery("select * from ToDateTest");
+        while (rs.next()) {
+            System.out.println("Start date: " + dateToString(rs.getTimestamp("start_date")));
+            System.out.println("End date: " + dateToString(rs.getTimestamp("end_date")));
+            System.out.println();
+        }
+        stat.close();
+        conn.close();
+    }
+
+    private static String dateToString(Date date) {
+        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(date);
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/TriggerPassData.java b/modules/h2/src/test/java/org/h2/samples/TriggerPassData.java
new file mode 100644
index 0000000000000..6b8ba5384a12c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/TriggerPassData.java
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2004-2018
H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.samples;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import org.h2.api.Trigger;
+
+/**
+ * This sample application shows how to pass data to a trigger. Trigger data can
+ * be persisted by storing it in the database.
+ */
+public class TriggerPassData implements Trigger {
+
+    private static final Map<String, TriggerPassData> TRIGGERS =
+            Collections.synchronizedMap(new HashMap<String, TriggerPassData>());
+    private String triggerData;
+
+    /**
+     * This method is called when executing this sample application from the
+     * command line.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) throws Exception {
+        Class.forName("org.h2.Driver");
+        Connection conn = DriverManager.getConnection(
+                "jdbc:h2:mem:test", "sa", "");
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID INT)");
+        stat.execute("CREATE ALIAS TRIGGER_SET FOR \"" +
+                TriggerPassData.class.getName() +
+                ".setTriggerData\"");
+        stat.execute("CREATE TRIGGER T1 " +
+                "BEFORE INSERT ON TEST " +
+                "FOR EACH ROW CALL \"" +
+                TriggerPassData.class.getName() + "\"");
+        stat.execute("CALL TRIGGER_SET('T1', 'Hello')");
+        stat.execute("INSERT INTO TEST VALUES(1)");
+        stat.execute("CALL TRIGGER_SET('T1', 'World')");
+        stat.execute("INSERT INTO TEST VALUES(2)");
+        stat.close();
+        conn.close();
+    }
+
+    @Override
+    public void init(Connection conn, String schemaName,
+            String triggerName, String tableName, boolean before,
+            int type) throws SQLException {
+        TRIGGERS.put(getPrefix(conn) + triggerName, this);
+    }
+
+    @Override
+    public void fire(Connection conn, Object[] old, Object[] row) {
+        System.out.println(triggerData + ": " + row[0]);
+    }
+
+    
@Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + + /** + * Call this method to change a specific trigger. + * + * @param conn the connection + * @param trigger the trigger name + * @param data the data + */ + public static void setTriggerData(Connection conn, String trigger, + String data) throws SQLException { + TRIGGERS.get(getPrefix(conn) + trigger).triggerData = data; + } + + private static String getPrefix(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery( + "call ifnull(database_path() || '_', '') || database() || '_'"); + rs.next(); + return rs.getString(1); + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/TriggerSample.java b/modules/h2/src/test/java/org/h2/samples/TriggerSample.java new file mode 100644 index 0000000000000..301991ebcb27b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/TriggerSample.java @@ -0,0 +1,124 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.math.BigDecimal; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.Trigger; + +/** + * This sample application shows how to use database triggers. + */ +public class TriggerSample { + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection("jdbc:h2:mem:", "sa", ""); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE INVOICE(ID INT PRIMARY KEY, AMOUNT DECIMAL)"); + stat.execute("CREATE TABLE INVOICE_SUM(AMOUNT DECIMAL)"); + stat.execute("INSERT INTO INVOICE_SUM VALUES(0.0)"); + + stat.execute("CREATE TRIGGER INV_INS " + + "AFTER INSERT ON INVOICE FOR EACH ROW " + + "CALL \"org.h2.samples.TriggerSample$MyTrigger\" "); + stat.execute("CREATE TRIGGER INV_UPD " + + "AFTER UPDATE ON INVOICE FOR EACH ROW " + + "CALL \"org.h2.samples.TriggerSample$MyTrigger\" "); + stat.execute("CREATE TRIGGER INV_DEL " + + "AFTER DELETE ON INVOICE FOR EACH ROW " + + "CALL \"org.h2.samples.TriggerSample$MyTrigger\" "); + + stat.execute("INSERT INTO INVOICE VALUES(1, 10.0)"); + stat.execute("INSERT INTO INVOICE VALUES(2, 19.95)"); + stat.execute("UPDATE INVOICE SET AMOUNT=20.0 WHERE ID=2"); + stat.execute("DELETE FROM INVOICE WHERE ID=1"); + + ResultSet rs; + rs = stat.executeQuery("SELECT AMOUNT FROM INVOICE_SUM"); + rs.next(); + System.out.println("The sum is " + rs.getBigDecimal(1)); + rs.close(); + stat.close(); + conn.close(); + } + + /** + * This class is a simple trigger implementation. + */ + public static class MyTrigger implements Trigger { + + /** + * Initializes the trigger. 
+         *
+         * @param conn a connection to the database
+         * @param schemaName the name of the schema
+         * @param triggerName the name of the trigger used in the CREATE TRIGGER
+         *            statement
+         * @param tableName the name of the table
+         * @param before whether the fire method is called before or after the
+         *            operation is performed
+         * @param type the operation type: INSERT, UPDATE, or DELETE
+         */
+        @Override
+        public void init(Connection conn, String schemaName,
+                String triggerName, String tableName, boolean before, int type) {
+            // initialize the trigger object if necessary
+        }
+
+        /**
+         * This method is called for each triggered action.
+         *
+         * @param conn a connection to the database
+         * @param oldRow the old row, or null if no old row is available (for
+         *            INSERT)
+         * @param newRow the new row, or null if no new row is available (for
+         *            DELETE)
+         * @throws SQLException if the operation must be undone
+         */
+        @Override
+        public void fire(Connection conn,
+                Object[] oldRow, Object[] newRow)
+                throws SQLException {
+            BigDecimal diff = null;
+            if (newRow != null) {
+                diff = (BigDecimal) newRow[1];
+            }
+            if (oldRow != null) {
+                BigDecimal m = (BigDecimal) oldRow[1];
+                diff = diff == null ? m.negate() : diff.subtract(m);
+            }
+            PreparedStatement prep = conn.prepareStatement(
+                    "UPDATE INVOICE_SUM SET AMOUNT=AMOUNT+?");
+            prep.setBigDecimal(1, diff);
+            prep.execute();
+        }
+
+        @Override
+        public void close() {
+            // ignore
+        }
+
+        @Override
+        public void remove() {
+            // ignore
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/samples/UpdatableView.java b/modules/h2/src/test/java/org/h2/samples/UpdatableView.java
new file mode 100644
index 0000000000000..dd9689adfafcb
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/samples/UpdatableView.java
@@ -0,0 +1,89 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.samples; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.tools.TriggerAdapter; + +/** + * This sample application shows how to use triggers to create updatable views. + */ +public class UpdatableView extends TriggerAdapter { + + private PreparedStatement prepDelete, prepInsert; + + /** + * This method is called when executing this sample application from the + * command line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection("jdbc:h2:mem:"); + Statement stat; + stat = conn.createStatement(); + + // create the table and the view + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("create view test_view as select * from test"); + + // create the trigger that is called whenever + // the data in the view is modified + stat.execute("create trigger t_test_view instead of " + + "insert, update, delete on test_view for each row " + + "call \"" + UpdatableView.class.getName() + "\""); + + // test a few operations + stat.execute("insert into test_view values(1, 'Hello'), (2, 'World')"); + stat.execute("update test_view set name = 'Hallo' where id = 1"); + stat.execute("delete from test_view where id = 2"); + + // print the contents of the table and the view + System.out.println("table test:"); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + while (rs.next()) { + System.out.println(rs.getInt(1) + " " + rs.getString(2)); + } + System.out.println(); + System.out.println("test_view:"); + rs = stat.executeQuery("select * from test_view"); + while (rs.next()) { + System.out.println(rs.getInt(1) + " " + rs.getString(2)); + } + + conn.close(); + } + + @Override + public void 
init(Connection conn, String schemaName, String triggerName, + String tableName, boolean before, int type) throws SQLException { + prepDelete = conn.prepareStatement("delete from test where id = ?"); + prepInsert = conn.prepareStatement("insert into test values(?, ?)"); + super.init(conn, schemaName, triggerName, tableName, before, type); + } + + @Override + public void fire(Connection conn, ResultSet oldRow, ResultSet newRow) + throws SQLException { + if (oldRow != null && oldRow.next()) { + prepDelete.setInt(1, oldRow.getInt(1)); + prepDelete.execute(); + } + if (newRow != null && newRow.next()) { + prepInsert.setInt(1, newRow.getInt(1)); + prepInsert.setString(2, newRow.getString(2)); + prepInsert.execute(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/samples/newsfeed.sql b/modules/h2/src/test/java/org/h2/samples/newsfeed.sql new file mode 100644 index 0000000000000..a1d6b7a4f10af --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/newsfeed.sql @@ -0,0 +1,135 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+
+CREATE TABLE VERSION(ID INT PRIMARY KEY, VERSION VARCHAR, CREATED VARCHAR);
+INSERT INTO VERSION VALUES
+(147, '1.4.198', '2018-03-18'),
+(146, '1.4.197', '2017-06-10'),
+(145, '1.4.195', '2017-04-23'),
+(144, '1.4.194', '2017-03-10'),
+(143, '1.4.193', '2016-10-31'),
+(142, '1.4.192', '2016-05-26'),
+(141, '1.4.191', '2016-01-21'),
+(140, '1.4.190', '2015-10-11'),
+(139, '1.4.189', '2015-09-13'),
+(138, '1.4.188', '2015-08-01'),
+(137, '1.4.187', '2015-04-10'),
+(136, '1.4.186', '2015-03-02'),
+(135, '1.4.185', '2015-01-16'),
+(134, '1.4.184', '2014-12-19'),
+(133, '1.4.183', '2014-12-13'),
+(132, '1.4.182', '2014-10-17'),
+(131, '1.4.181', '2014-08-06');
+
+CREATE TABLE CHANNEL(TITLE VARCHAR, LINK VARCHAR, DESC VARCHAR,
+    LANGUAGE VARCHAR, PUB TIMESTAMP, LAST TIMESTAMP, AUTHOR VARCHAR);
+
+INSERT INTO CHANNEL VALUES('H2 Database Engine',
+    'http://www.h2database.com/', 'H2 Database Engine', 'en-us', NOW(), NOW(), 'Thomas Mueller');
+
+CREATE VIEW ITEM AS
+SELECT ID, 'New version available: ' || VERSION || ' (' || CREATED || ')' TITLE,
+CAST((CREATED || ' 12:00:00') AS TIMESTAMP) ISSUED,
+$$A new version of H2 is available for
+download.
+(You may have to click 'Refresh').
    +For details, see the +change log. +
    +For future plans, see the +roadmap. +$$ AS DESC FROM VERSION; + +SELECT 'newsfeed-rss.xml' FILE, + XMLSTARTDOC() || + XMLNODE('rss', XMLATTR('version', '2.0'), + XMLNODE('channel', NULL, + XMLNODE('title', NULL, C.TITLE) || + XMLNODE('link', NULL, C.LINK) || + XMLNODE('description', NULL, C.DESC) || + XMLNODE('language', NULL, C.LANGUAGE) || + XMLNODE('pubDate', NULL, FORMATDATETIME(C.PUB, 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT')) || + XMLNODE('lastBuildDate', NULL, FORMATDATETIME(C.LAST, 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT')) || + GROUP_CONCAT( + XMLNODE('item', NULL, + XMLNODE('title', NULL, I.TITLE) || + XMLNODE('link', NULL, C.LINK) || + XMLNODE('description', NULL, XMLCDATA(I.TITLE)) + ) + ORDER BY I.ID DESC SEPARATOR '') + ) + ) CONTENT +FROM CHANNEL C, ITEM I +UNION +SELECT 'newsfeed-atom.xml' FILE, + XMLSTARTDOC() || + XMLNODE('feed', XMLATTR('xmlns', 'http://www.w3.org/2005/Atom') || XMLATTR('xml:lang', C.LANGUAGE), + XMLNODE('title', XMLATTR('type', 'text'), C.TITLE) || + XMLNODE('id', NULL, XMLTEXT(C.LINK)) || + XMLNODE('author', NULL, XMLNODE('name', NULL, C.AUTHOR)) || + XMLNODE('link', XMLATTR('rel', 'self') || XMLATTR('href', 'http://www.h2database.com/html/newsfeed-atom.xml'), NULL) || + XMLNODE('updated', NULL, FORMATDATETIME(C.LAST, 'yyyy-MM-dd''T''HH:mm:ss''Z''', 'en', 'GMT')) || + GROUP_CONCAT( + XMLNODE('entry', NULL, + XMLNODE('title', XMLATTR('type', 'text'), I.TITLE) || + XMLNODE('link', XMLATTR('rel', 'alternate') || XMLATTR('type', 'text/html') || XMLATTR('href', C.LINK), NULL) || + XMLNODE('id', NULL, XMLTEXT(C.LINK || '/' || I.ID)) || + XMLNODE('updated', NULL, FORMATDATETIME(I.ISSUED, 'yyyy-MM-dd''T''HH:mm:ss''Z''', 'en', 'GMT')) || + XMLNODE('content', XMLATTR('type', 'html'), XMLCDATA(I.DESC)) + ) + ORDER BY I.ID DESC SEPARATOR '') + ) CONTENT +FROM CHANNEL C, ITEM I +UNION +SELECT 'newsletter.txt' FILE, I.DESC CONTENT FROM ITEM I WHERE I.ID = (SELECT MAX(ID) FROM ITEM) +UNION +SELECT 'doap-h2.rdf' FILE, + XMLSTARTDOC() || 
+$$ + + H2 Database Engine + + Java + + + + + + + H2 Database Engine + + H2 is a relational database management system written in Java. + It can be embedded in Java applications or run in the client-server mode. + The disk footprint is about 1 MB. The main programming APIs are SQL and JDBC, + however the database also supports using the PostgreSQL ODBC driver by acting like a PostgreSQL server. + It is possible to create both in-memory tables, as well as disk-based tables. + Tables can be persistent or temporary. Index types are hash table and tree for in-memory tables, + and b-tree for disk-based tables. + All data manipulation operations are transactional. (from Wikipedia) + + + + + + + + +$$ || + GROUP_CONCAT( + XMLNODE('release', NULL, + XMLNODE('Version', NULL, + XMLNODE('name', NULL, 'H2 ' || V.VERSION) || + XMLNODE('created', NULL, V.CREATED) || + XMLNODE('revision', NULL, V.VERSION) + ) + ) + ORDER BY V.ID DESC SEPARATOR '') || +' +' CONTENT +FROM VERSION V diff --git a/modules/h2/src/test/java/org/h2/samples/optimizations.sql b/modules/h2/src/test/java/org/h2/samples/optimizations.sql new file mode 100644 index 0000000000000..60fc44fb8c523 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/optimizations.sql @@ -0,0 +1,295 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ + +------------------------------------------------------------------------------- +-- Optimize Count Star +------------------------------------------------------------------------------- +-- This code snippet shows how to quickly get the number of rows in a table.
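+-- A related check (illustrative sketch, not part of the upstream file): the
+-- 'direct lookup' shortcut applies only to an unfiltered COUNT(*). A filtered
+-- count, for example
+--   EXPLAIN SELECT COUNT(*) FROM TEST WHERE ID < 100;
+-- has to visit the index or table, so its plan no longer shows
+-- '/* direct lookup */'.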
+ +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY); +INSERT INTO TEST SELECT X FROM SYSTEM_RANGE(1, 1000); + +-- Query the count +SELECT COUNT(*) FROM TEST; +--> 1000 +; + +-- Display the query plan - 'direct lookup' means the index is used +EXPLAIN SELECT COUNT(*) FROM TEST; +--> SELECT +--> COUNT(*) +--> FROM PUBLIC.TEST +--> /* PUBLIC.TEST.tableScan */ +--> /* direct lookup */ +; + +DROP TABLE TEST; + +------------------------------------------------------------------------------- +-- Optimize Distinct +------------------------------------------------------------------------------- +-- This code snippet shows how to quickly get all distinct values +-- of a column for the whole table. + +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY, TYPE INT); +CALL RAND(0); +--> 0.730967787376657 +; +INSERT INTO TEST SELECT X, MOD(X, 10) FROM SYSTEM_RANGE(1, 1000); + +-- Create an index on the column TYPE +CREATE INDEX IDX_TEST_TYPE ON TEST(TYPE); + +-- Calculate the selectivity - otherwise it will not be optimized +ANALYZE; + +-- Query the distinct values +SELECT DISTINCT TYPE FROM TEST ORDER BY TYPE LIMIT 3; +--> 0 +--> 1 +--> 2 +; + +-- Display the query plan - 'index sorted' means the index is used to order +EXPLAIN SELECT DISTINCT TYPE FROM TEST ORDER BY TYPE LIMIT 3; +--> SELECT DISTINCT +--> TYPE +--> FROM PUBLIC.TEST +--> /* PUBLIC.IDX_TEST_TYPE */ +--> ORDER BY 1 +--> LIMIT 3 +--> /* distinct */ +--> /* index sorted */ +; + +DROP TABLE TEST; + +------------------------------------------------------------------------------- +-- Optimize Min Max +------------------------------------------------------------------------------- +-- This code snippet shows how to quickly get the smallest and largest value +-- of a column for the whole table.
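+-- A quick sanity check (illustrative sketch, not part of the upstream file):
+-- the MIN/MAX direct lookup depends on the index created below. Without
+-- IDX_TEST_VALUE the same query must read every row, and EXPLAIN reports
+-- '/* PUBLIC.TEST.tableScan */' instead of '/* direct lookup */'.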
+ +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY, VALUE DECIMAL(100, 2)); +CALL RAND(0); +--> 0.730967787376657 +; +INSERT INTO TEST SELECT X, RAND()*100 FROM SYSTEM_RANGE(1, 1000); + +-- Create an index on the column VALUE +CREATE INDEX IDX_TEST_VALUE ON TEST(VALUE); + +-- Query the largest and smallest value - this is optimized +SELECT MIN(VALUE), MAX(VALUE) FROM TEST; +--> 0.01 99.89 +; + +-- Display the query plan - 'direct lookup' means it's optimized +EXPLAIN SELECT MIN(VALUE), MAX(VALUE) FROM TEST; +--> SELECT +--> MIN(VALUE), +--> MAX(VALUE) +--> FROM PUBLIC.TEST +--> /* PUBLIC.IDX_TEST_VALUE */ +--> /* direct lookup */ +; + +DROP TABLE TEST; + +------------------------------------------------------------------------------- +-- Optimize Grouped Min Max +------------------------------------------------------------------------------- +-- This code snippet shows how to quickly get the smallest and largest value +-- of a column for each group. + +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY, TYPE INT, VALUE DECIMAL(100, 2)); +CALL RAND(0); +--> 0.730967787376657 +; +INSERT INTO TEST SELECT X, MOD(X, 5), RAND()*100 FROM SYSTEM_RANGE(1, 1000); + +-- Create an index on the columns TYPE and VALUE +CREATE INDEX IDX_TEST_TYPE_VALUE ON TEST(TYPE, VALUE); + +-- Analyze to optimize the DISTINCT part of the query +ANALYZE; + +-- Query the largest and smallest value - this is optimized +SELECT TYPE, (SELECT VALUE FROM TEST T2 WHERE T.TYPE = T2.TYPE +ORDER BY TYPE, VALUE LIMIT 1) MIN +FROM (SELECT DISTINCT TYPE FROM TEST) T ORDER BY TYPE; +--> 0 0.42 +--> 1 0.14 +--> 2 0.01 +--> 3 0.40 +--> 4 0.44 +; + +-- Display the query plan +EXPLAIN SELECT TYPE, (SELECT VALUE FROM TEST T2 WHERE T.TYPE = T2.TYPE +ORDER BY TYPE, VALUE LIMIT 1) MIN +FROM (SELECT DISTINCT TYPE FROM TEST) T ORDER BY TYPE; +--> SELECT +--> TYPE, +--> (SELECT +--> VALUE +--> FROM PUBLIC.TEST T2 +--> /* PUBLIC.IDX_TEST_TYPE_VALUE: TYPE = T.TYPE */ +--> WHERE T.TYPE = 
T2.TYPE +--> ORDER BY =TYPE, 1 +--> LIMIT 1 +--> /* index sorted */) AS MIN +--> FROM ( +--> SELECT DISTINCT +--> TYPE +--> FROM PUBLIC.TEST +--> /* PUBLIC.IDX_TEST_TYPE_VALUE */ +--> /* distinct */ +--> ) T +--> /* SELECT DISTINCT +--> TYPE +--> FROM PUBLIC.TEST +--> /++ PUBLIC.IDX_TEST_TYPE_VALUE ++/ +--> /++ distinct ++/ +--> */ +--> ORDER BY 1 +; + +DROP TABLE TEST; + +------------------------------------------------------------------------------- +-- Optimize Top N -- +------------------------------------------------------------------------------- +-- This code snippet shows how to quickly get the smallest and largest N +-- values of a column for the whole table. + +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY, TYPE INT, VALUE DECIMAL(100, 2)); +CALL RAND(0); +--> 0.730967787376657 +; +INSERT INTO TEST SELECT X, MOD(X, 100), RAND()*100 FROM SYSTEM_RANGE(1, 1000); + +-- Create an index on the column VALUE +CREATE INDEX IDX_TEST_VALUE ON TEST(VALUE); + +-- Query the smallest 10 values +SELECT VALUE FROM TEST ORDER BY VALUE LIMIT 3; +--> 0.01 +--> 0.14 +--> 0.16 +; + +-- Display the query plan - 'index sorted' means the index is used +EXPLAIN SELECT VALUE FROM TEST ORDER BY VALUE LIMIT 10; +--> SELECT +--> VALUE +--> FROM PUBLIC.TEST +--> /* PUBLIC.IDX_TEST_VALUE */ +--> ORDER BY 1 +--> LIMIT 10 +--> /* index sorted */ +; + +-- To optimize getting the largest values, a new descending index is required +CREATE INDEX IDX_TEST_VALUE_D ON TEST(VALUE DESC); + +-- Query the largest 10 values +SELECT VALUE FROM TEST ORDER BY VALUE DESC LIMIT 3; +--> 99.89 +--> 99.73 +--> 99.68 +; + +-- Display the query plan - 'index sorted' means the index is used +EXPLAIN SELECT VALUE FROM TEST ORDER BY VALUE DESC LIMIT 10; +--> SELECT +--> VALUE +--> FROM PUBLIC.TEST +--> /* PUBLIC.IDX_TEST_VALUE_D */ +--> ORDER BY 1 DESC +--> LIMIT 10 +--> /* index sorted */ +; + +DROP TABLE TEST; + +------------------------------------------------------------------------------- +-- 
Optimize IN(..) +------------------------------------------------------------------------------- +-- This code snippet shows how IN(...) uses an index (unlike .. OR ..). + +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY); +INSERT INTO TEST SELECT X FROM SYSTEM_RANGE(1, 1000); + +-- Query the rows +SELECT * FROM TEST WHERE ID IN(1, 1000); +--> 1 +--> 1000 +; + +-- Display the query plan +EXPLAIN SELECT * FROM TEST WHERE ID IN(1, 1000); +--> SELECT +--> TEST.ID +--> FROM PUBLIC.TEST +--> /* PUBLIC.PRIMARY_KEY_2: ID IN(1, 1000) */ +--> WHERE ID IN(1, 1000) +; + +DROP TABLE TEST; + +------------------------------------------------------------------------------- +-- Optimize Multiple IN(..) +------------------------------------------------------------------------------- +-- This code snippet shows how multiple IN(...) conditions use an index. + +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY, DATA INT); +CREATE INDEX TEST_DATA ON TEST(DATA); + +INSERT INTO TEST SELECT X, MOD(X, 10) FROM SYSTEM_RANGE(1, 1000); + +-- Display the query plan +EXPLAIN SELECT * FROM TEST WHERE ID IN (10, 20) AND DATA IN (1, 2); +--> SELECT +--> TEST.ID, +--> TEST.DATA +--> FROM PUBLIC.TEST +--> /* PUBLIC.PRIMARY_KEY_2: ID IN(10, 20) */ +--> WHERE (ID IN(10, 20)) +--> AND (DATA IN(1, 2)) +; + +DROP TABLE TEST; + +------------------------------------------------------------------------------- +-- Optimize GROUP BY +------------------------------------------------------------------------------- +-- This code snippet shows how GROUP BY uses an index.
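+-- A note on the condition (illustrative sketch, not part of the upstream
+-- file): the '/* group sorted */' hint below appears because the rows arrive
+-- from the index already ordered by the GROUP BY column (ID, the primary
+-- key). Grouping by the non-indexed DATA column, e.g.
+--   EXPLAIN SELECT DATA, COUNT(*) FROM TEST GROUP BY DATA;
+-- would force H2 to aggregate without the sorted-input shortcut.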
+ +-- Initialize the data +CREATE TABLE TEST(ID INT PRIMARY KEY, DATA INT); + +INSERT INTO TEST SELECT X, X/10 FROM SYSTEM_RANGE(1, 100); + +-- Display the query plan +EXPLAIN SELECT ID X, COUNT(*) FROM TEST GROUP BY ID; +--> SELECT +--> ID AS X, +--> COUNT(*) +--> FROM PUBLIC.TEST +--> /* PUBLIC.PRIMARY_KEY_2 */ +--> GROUP BY ID +--> /* group sorted */ +; + +DROP TABLE TEST; diff --git a/modules/h2/src/test/java/org/h2/samples/package.html b/modules/h2/src/test/java/org/h2/samples/package.html new file mode 100644 index 0000000000000..3ba8525e94a09 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/samples/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Standalone sample applications. + +

\ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/H2TestCase.java b/modules/h2/src/test/java/org/h2/test/H2TestCase.java new file mode 100644 index 0000000000000..1f00e631923bc --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/H2TestCase.java @@ -0,0 +1,85 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.h2.test; + +import java.util.Timer; +import java.util.TimerTask; +import junit.framework.Protectable; +import junit.framework.Test; +import junit.framework.TestResult; +import org.h2.util.ThreadDeadlockDetector; +import org.junit.runner.Describable; +import org.junit.runner.Description; + +/** + * H2 test runner. Wraps an H2 test together with its config and provides a correct description for the JUnit engine. + */ +class H2TestCase implements Test, Describable { + /** Config holder. */ + private TestAll conf; + + /** Test to run. */ + private TestBase test; + + /** + * Constructor. + * + * @param conf Context for test. + * @param test H2 test implementation.
+ */ + H2TestCase(TestAll conf, TestBase test) { + this.conf = conf; + this.test = test; + } + + /** {@inheritDoc} */ + @Override public int countTestCases() { + return 1; + } + + /** {@inheritDoc} */ + @Override public void run(TestResult result) { + result.startTest(this); + // event queue watchdog for tests that get stuck when running in Jenkins + final Timer watchdog = new Timer(); + // 5 minutes + watchdog.schedule(new TimerTask() { + @Override + public void run() { + ThreadDeadlockDetector.dumpAllThreadsAndLocks("test watchdog timed out"); + } + }, 5 * 60 * 1000); + try { + Protectable p = new Protectable() { + public void protect() throws Throwable { + test.runTest(conf); + } + }; + result.runProtected(this, p); + } + finally { + watchdog.cancel(); + result.endTest(this); + } + } + + /** {@inheritDoc} */ + @Override public Description getDescription() { + return Description.createTestDescription(test.getClass(), "test[" + conf.toString() + "]"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/H2TestSuiteBuilder.java b/modules/h2/src/test/java/org/h2/test/H2TestSuiteBuilder.java new file mode 100644 index 0000000000000..e92684ea76dd5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/H2TestSuiteBuilder.java @@ -0,0 +1,114 @@ +/* + * Copyright 2020 GridGain Systems, Inc. and Contributors. + * + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.h2.test; + +import java.sql.SQLException; +import junit.framework.TestResult; +import junit.framework.TestSuite; + +/** + * TestSuite generator that adapts the H2 test suite so it can be run one-by-one with JUnit. + */ +public class H2TestSuiteBuilder extends TestAll { + /** Test suite. */ + private TestSuite suite; + + /** + * Constructor. + */ + public H2TestSuiteBuilder() { + travis = true; + // Fail fast: stop the suite on the first error + stopOnError = true; + + // Defaults, copied from base class (TestAll). + smallLog = big = networked = memory = ssl = false; + diskResult = traceSystemOut = diskUndo = false; + traceTest = false; + defrag = false; + traceLevelFile = throttle = 0; + cipher = null; + } + + /** {@inheritDoc} */ + @Override protected void addTest(TestBase test) { + suite.addTest(new H2TestCase(this, test)); + } + + /** + * @param suiteClass Suite class. + * @param baseTests Whether to include the base tests in the suite. + * @return Assembled test suite. + */ + public TestSuite buildSuite(Class suiteClass, boolean baseTests) { + return buildSuite(suiteClass, baseTests, false); + } + + /** + * @param suiteClass Suite class. + * @param baseTests Whether to include the base tests in the suite. + * @param additionalTests Whether to include the additional (unit) tests in the suite. + * @return Assembled test suite. 
+ */ + public TestSuite buildSuite(Class suiteClass, boolean baseTests, boolean additionalTests) { + suite = new TestSuite(suiteClass.getName()) { + /** {@inheritDoc} */ + @Override public void run(TestResult result) { + try { + beforeTest(); + super.run(result); + } + finally { + afterTest(); + } + } + }; + + try { + if (baseTests) + test(); + + if (additionalTests) + testUnit(); + } + catch (SQLException e) { + assert false : e; + } + + TestSuite suite0 = suite; + + suite = null; + + return suite0; + } + + /** {@inheritDoc} */ + @Override public void beforeTest() { + try { + super.beforeTest(); + } + catch (Exception e) { + e.printStackTrace(System.err); + throw new AssertionError("Failed to start suite.", e); + } + } + + /** {@inheritDoc} */ + @Override public void afterTest() { + super.afterTest(); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/TestAll.java b/modules/h2/src/test/java/org/h2/test/TestAll.java new file mode 100644 index 0000000000000..e70b5ebbad56c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/TestAll.java @@ -0,0 +1,1132 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test; + +import java.lang.management.ManagementFactory; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Properties; +import java.util.TimerTask; +import java.util.concurrent.TimeUnit; +import org.h2.Driver; +import org.h2.engine.Constants; +import org.h2.jdbcx.JdbcDataSourceFactory; +import org.h2.store.fs.FilePathRec; +import org.h2.store.fs.FileUtils; +import org.h2.test.bench.TestPerformance; +import org.h2.test.db.TestAlter; +import org.h2.test.db.TestAlterSchemaRename; +import org.h2.test.db.TestAutoRecompile; +import org.h2.test.db.TestBackup; +import org.h2.test.db.TestBigDb; +import org.h2.test.db.TestBigResult; +import org.h2.test.db.TestCases; +import org.h2.test.db.TestCheckpoint; +import org.h2.test.db.TestCluster; +import org.h2.test.db.TestCompatibility; +import org.h2.test.db.TestCompatibilityOracle; +import org.h2.test.db.TestCsv; +import org.h2.test.db.TestDateStorage; +import org.h2.test.db.TestDeadlock; +import org.h2.test.db.TestDrop; +import org.h2.test.db.TestDuplicateKeyUpdate; +import org.h2.test.db.TestEncryptedDb; +import org.h2.test.db.TestExclusive; +import org.h2.test.db.TestFunctionOverload; +import org.h2.test.db.TestFunctions; +import org.h2.test.db.TestGeneralCommonTableQueries; +import org.h2.test.db.TestIndex; +import org.h2.test.db.TestIndexHints; +import org.h2.test.db.TestLargeBlob; +import org.h2.test.db.TestLinkedTable; +import org.h2.test.db.TestListener; +import org.h2.test.db.TestLob; +import org.h2.test.db.TestMemoryUsage; +import org.h2.test.db.TestMergeUsing; +import org.h2.test.db.TestMultiConn; +import org.h2.test.db.TestMultiDimension; +import org.h2.test.db.TestMultiThread; +import org.h2.test.db.TestMultiThreadedKernel; +import org.h2.test.db.TestOpenClose; +import org.h2.test.db.TestOptimizations; +import org.h2.test.db.TestOptimizerHints; +import org.h2.test.db.TestOutOfMemory; +import 
org.h2.test.db.TestPersistentCommonTableExpressions; +import org.h2.test.db.TestPowerOff; +import org.h2.test.db.TestQueryCache; +import org.h2.test.db.TestReadOnly; +import org.h2.test.db.TestRecursiveQueries; +import org.h2.test.db.TestReplace; +import org.h2.test.db.TestRights; +import org.h2.test.db.TestRowFactory; +import org.h2.test.db.TestRunscript; +import org.h2.test.db.TestSQLInjection; +import org.h2.test.db.TestSelectCountNonNullColumn; +import org.h2.test.db.TestSequence; +import org.h2.test.db.TestSessionsLocks; +import org.h2.test.db.TestSetCollation; +import org.h2.test.db.TestShow; +import org.h2.test.db.TestSpaceReuse; +import org.h2.test.db.TestSpatial; +import org.h2.test.db.TestSpeed; +import org.h2.test.db.TestSynonymForTable; +import org.h2.test.db.TestTableEngines; +import org.h2.test.db.TestTempTables; +import org.h2.test.db.TestTransaction; +import org.h2.test.db.TestTriggersConstraints; +import org.h2.test.db.TestTwoPhaseCommit; +import org.h2.test.db.TestUpgrade; +import org.h2.test.db.TestUsingIndex; +import org.h2.test.db.TestView; +import org.h2.test.db.TestViewAlterTable; +import org.h2.test.db.TestViewDropView; +import org.h2.test.jdbc.TestBatchUpdates; +import org.h2.test.jdbc.TestCallableStatement; +import org.h2.test.jdbc.TestCancel; +import org.h2.test.jdbc.TestConcurrentConnectionUsage; +import org.h2.test.jdbc.TestConnection; +import org.h2.test.jdbc.TestCustomDataTypesHandler; +import org.h2.test.jdbc.TestDatabaseEventListener; +import org.h2.test.jdbc.TestDriver; +import org.h2.test.jdbc.TestGetGeneratedKeys; +import org.h2.test.jdbc.TestJavaObject; +import org.h2.test.jdbc.TestJavaObjectSerializer; +import org.h2.test.jdbc.TestLimitUpdates; +import org.h2.test.jdbc.TestLobApi; +import org.h2.test.jdbc.TestManyJdbcObjects; +import org.h2.test.jdbc.TestMetaData; +import org.h2.test.jdbc.TestNativeSQL; +import org.h2.test.jdbc.TestPreparedStatement; +import org.h2.test.jdbc.TestResultSet; +import 
org.h2.test.jdbc.TestStatement; +import org.h2.test.jdbc.TestTransactionIsolation; +import org.h2.test.jdbc.TestUpdatableResultSet; +import org.h2.test.jdbc.TestUrlJavaObjectSerializer; +import org.h2.test.jdbc.TestZloty; +import org.h2.test.jdbcx.TestConnectionPool; +import org.h2.test.jdbcx.TestDataSource; +import org.h2.test.jdbcx.TestXA; +import org.h2.test.jdbcx.TestXASimple; +import org.h2.test.mvcc.TestMvcc1; +import org.h2.test.mvcc.TestMvcc2; +import org.h2.test.mvcc.TestMvcc3; +import org.h2.test.mvcc.TestMvcc4; +import org.h2.test.mvcc.TestMvccMultiThreaded; +import org.h2.test.mvcc.TestMvccMultiThreaded2; +import org.h2.test.poweroff.TestReorderWrites; +import org.h2.test.recover.RecoverLobTest; +import org.h2.test.rowlock.TestRowLocks; +import org.h2.test.scripts.TestScript; +import org.h2.test.scripts.TestScriptSimple; +import org.h2.test.server.TestAutoServer; +import org.h2.test.server.TestInit; +import org.h2.test.server.TestNestedLoop; +import org.h2.test.server.TestWeb; +import org.h2.test.store.TestCacheConcurrentLIRS; +import org.h2.test.store.TestCacheLIRS; +import org.h2.test.store.TestCacheLongKeyLIRS; +import org.h2.test.store.TestConcurrent; +import org.h2.test.store.TestConcurrentLinkedList; +import org.h2.test.store.TestDataUtils; +import org.h2.test.store.TestFreeSpace; +import org.h2.test.store.TestKillProcessWhileWriting; +import org.h2.test.store.TestMVRTree; +import org.h2.test.store.TestMVStore; +import org.h2.test.store.TestMVStoreBenchmark; +import org.h2.test.store.TestMVStoreStopCompact; +import org.h2.test.store.TestMVStoreTool; +import org.h2.test.store.TestMVTableEngine; +import org.h2.test.store.TestObjectDataType; +import org.h2.test.store.TestRandomMapOps; +import org.h2.test.store.TestSpinLock; +import org.h2.test.store.TestStreamStore; +import org.h2.test.store.TestTransactionStore; +import org.h2.test.synth.TestBtreeIndex; +import org.h2.test.synth.TestConcurrentUpdate; +import org.h2.test.synth.TestCrashAPI; +import 
org.h2.test.synth.TestDiskFull; +import org.h2.test.synth.TestFuzzOptimizations; +import org.h2.test.synth.TestHaltApp; +import org.h2.test.synth.TestJoin; +import org.h2.test.synth.TestKill; +import org.h2.test.synth.TestKillRestart; +import org.h2.test.synth.TestKillRestartMulti; +import org.h2.test.synth.TestLimit; +import org.h2.test.synth.TestMultiThreaded; +import org.h2.test.synth.TestNestedJoins; +import org.h2.test.synth.TestOuterJoins; +import org.h2.test.synth.TestRandomCompare; +import org.h2.test.synth.TestRandomSQL; +import org.h2.test.synth.TestStringAggCompatibility; +import org.h2.test.synth.TestTimer; +import org.h2.test.synth.sql.TestSynth; +import org.h2.test.synth.thread.TestMulti; +import org.h2.test.unit.TestAnsCompression; +import org.h2.test.unit.TestBinaryArithmeticStream; +import org.h2.test.unit.TestBitField; +import org.h2.test.unit.TestBitStream; +import org.h2.test.unit.TestBnf; +import org.h2.test.unit.TestCache; +import org.h2.test.unit.TestCharsetCollator; +import org.h2.test.unit.TestClearReferences; +import org.h2.test.unit.TestCollation; +import org.h2.test.unit.TestCompress; +import org.h2.test.unit.TestConnectionInfo; +import org.h2.test.unit.TestDataPage; +import org.h2.test.unit.TestDate; +import org.h2.test.unit.TestDateIso8601; +import org.h2.test.unit.TestDateTimeUtils; +import org.h2.test.unit.TestExit; +import org.h2.test.unit.TestFile; +import org.h2.test.unit.TestFileLock; +import org.h2.test.unit.TestFileLockSerialized; +import org.h2.test.unit.TestFileSystem; +import org.h2.test.unit.TestFtp; +import org.h2.test.unit.TestIntArray; +import org.h2.test.unit.TestIntIntHashMap; +import org.h2.test.unit.TestIntPerfectHash; +import org.h2.test.unit.TestJmx; +import org.h2.test.unit.TestLocale; +import org.h2.test.unit.TestMathUtils; +import org.h2.test.unit.TestMode; +import org.h2.test.unit.TestModifyOnWrite; +import org.h2.test.unit.TestNetUtils; +import org.h2.test.unit.TestObjectDeserialization; +import 
org.h2.test.unit.TestOldVersion; +import org.h2.test.unit.TestOverflow; +import org.h2.test.unit.TestPageStore; +import org.h2.test.unit.TestPageStoreCoverage; +import org.h2.test.unit.TestPattern; +import org.h2.test.unit.TestPerfectHash; +import org.h2.test.unit.TestPgServer; +import org.h2.test.unit.TestReader; +import org.h2.test.unit.TestRecovery; +import org.h2.test.unit.TestReopen; +import org.h2.test.unit.TestSampleApps; +import org.h2.test.unit.TestScriptReader; +import org.h2.test.unit.TestSecurity; +import org.h2.test.unit.TestShell; +import org.h2.test.unit.TestSort; +import org.h2.test.unit.TestStreams; +import org.h2.test.unit.TestStringCache; +import org.h2.test.unit.TestStringUtils; +import org.h2.test.unit.TestTimeStampWithTimeZone; +import org.h2.test.unit.TestTraceSystem; +import org.h2.test.unit.TestUtils; +import org.h2.test.unit.TestValue; +import org.h2.test.unit.TestValueHashMap; +import org.h2.test.unit.TestValueMemory; +import org.h2.test.utils.OutputCatcher; +import org.h2.test.utils.SelfDestructor; +import org.h2.test.utils.TestColumnNamer; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Server; +import org.h2.util.AbbaLockingDetector; +import org.h2.util.Profiler; +import org.h2.util.StringUtils; +import org.h2.util.Task; +import org.h2.util.ThreadDeadlockDetector; +import org.h2.util.Utils; + +/** + * The main test application. JUnit is not used because loops are easier to + * write in regular Java applications (most tests are run multiple times using + * different settings). 
+ */ +public class TestAll { + + static { + // Locale.setDefault(new Locale("ru", "ru")); + } + +/* + +PIT test: +java org.pitest.mutationtest.MutationCoverageReport +--reportDir data --targetClasses org.h2.dev.store.btree.StreamStore* +--targetTests org.h2.test.store.TestStreamStore +--sourceDirs src/test,src/tools + +Dump heap on out of memory: +-XX:+HeapDumpOnOutOfMemoryError + +Random test: +java15 +cd h2database/h2/bin +del *.db +start cmd /k "java -cp .;%H2DRIVERS% org.h2.test.TestAll join >testJoin.txt" +start cmd /k "java -cp . org.h2.test.TestAll synth >testSynth.txt" +start cmd /k "java -cp . org.h2.test.TestAll all >testAll.txt" +start cmd /k "java -cp . org.h2.test.TestAll random >testRandom.txt" +start cmd /k "java -cp . org.h2.test.TestAll btree >testBtree.txt" +start cmd /k "java -cp . org.h2.test.TestAll halt >testHalt.txt" +java -cp . org.h2.test.TestAll crash >testCrash.txt + +java org.h2.test.TestAll timer + +*/ + + /** + * Set to true if any of the tests fail. Used to return an error code from + * the whole program. + */ + static boolean atLeastOneTestFailed; + + /** + * Whether the MVStore storage is used. + */ + public boolean mvStore = Constants.VERSION_MINOR >= 4; + + /** + * If the test should run with many rows. + */ + public boolean big; + + /** + * If remote database connections should be used. + */ + public boolean networked; + + /** + * If in-memory databases should be used. + */ + public boolean memory; + + /** + * If code coverage is enabled. + */ + public boolean codeCoverage; + + /** + * If the multi version concurrency control mode should be used. + */ + public boolean mvcc = mvStore; + + /** + * If the multi-threaded mode should be used. + */ + public boolean multiThreaded; + + /** + * If lazy queries should be used. + */ + public boolean lazy; + + /** + * The cipher to use (null for unencrypted). + */ + public String cipher; + + /** + * The file trace level value to use. 
+ */ + public int traceLevelFile; + + /** + * If test trace information should be written (for debugging only). + */ + public boolean traceTest; + + /** + * If testing on Google App Engine. + */ + public boolean googleAppEngine; + + /** + * If a small cache and a low number for MAX_MEMORY_ROWS should be used. + */ + public boolean diskResult; + + /** + * Test using the recording file system. + */ + public boolean reopen; + + /** + * Test the split file system. + */ + public boolean splitFileSystem; + + /** + * If only fast/CI/Jenkins/Travis tests should be run. + */ + public boolean travis; + + /** + * the vmlens.com race condition tool + */ + public boolean vmlens; + + /** + * The lock timeout to use + */ + public int lockTimeout = 50; + + /** + * If the transaction log should be kept small (that is, the log should be + * switched early). + */ + boolean smallLog; + + /** + * If SSL should be used for remote connections. + */ + boolean ssl; + + /** + * If MAX_MEMORY_UNDO=3 should be used. + */ + public boolean diskUndo; + + /** + * If TRACE_LEVEL_SYSTEM_OUT should be set to 2 (for debugging only). + */ + boolean traceSystemOut; + + /** + * If the tests should run forever. + */ + boolean endless; + + /** + * The THROTTLE value to use. + */ + public int throttle; + + /** + * The THROTTLE value to use by default. + */ + int throttleDefault = Integer.parseInt(System.getProperty("throttle", "0")); + + /** + * If the test should stop when the first error occurs. + */ + public boolean stopOnError; + + /** + * If the database should always be defragmented when closing. + */ + public boolean defrag; + + /** + * The cache type. + */ + public String cacheType; + + /** If not null the database should be opened with the collation parameter */ + public String collation; + + + /** + * The AB-BA locking detector. + */ + AbbaLockingDetector abbaLockingDetector; + + /** + * The list of tests. 
+ */ + ArrayList<TestBase> tests = new ArrayList<>(); + + private Server server; + + + /** + * Run all tests. + * + * @param args the command line arguments + */ + public static void main(String... args) throws Exception { + OutputCatcher catcher = OutputCatcher.start(); + run(args); + catcher.stop(); + catcher.writeTo("Test Output", "docs/html/testOutput.html"); + if (atLeastOneTestFailed) { + System.exit(1); + } + } + + private static void run(String... args) throws Exception { + SelfDestructor.startCountdown(4 * 60); + long time = System.nanoTime(); + printSystemInfo(); + + // use lower values, to better test those cases, + // and (for delays) to speed up the tests + + System.setProperty("h2.maxMemoryRows", "100"); + + //TODO: GG-19169: What does check2 do? + System.setProperty("h2.check2", "true"); + System.setProperty("h2.delayWrongPasswordMin", "0"); + System.setProperty("h2.delayWrongPasswordMax", "0"); + System.setProperty("h2.useThreadContextClassLoader", "true"); + + // System.setProperty("h2.modifyOnWrite", "true"); + + // speedup + // System.setProperty("h2.syncMethod", ""); + +/* + +recovery tests with small freeList pages, page size 64 + +reopen org.h2.test.unit.TestPageStore +-Xmx1500m -D reopenOffset=3 -D reopenShift=1 + +power failure test +power failure test: MULTI_THREADED=TRUE +power failure test: larger binaries and additional index. +power failure test with randomly generating / dropping indexes and tables. + +drop table test; +create table test(id identity, name varchar(100) default space(100)); +@LOOP 10 insert into test select null, null from system_range(1, 100000); +delete from test; + +documentation: review package and class level javadocs +documentation: rolling review at main.html + +------------- + +remove old TODO, move to roadmap + +kill a test: +kill -9 `jps -l | grep "org.h2.test." 
| cut -d " " -f 1` + +*/ + TestAll test = new TestAll(); + if (args.length > 0) { + if ("travis".equals(args[0])) { + test.travis = true; + test.testAll(); + } else if ("vmlens".equals(args[0])) { + test.vmlens = true; + test.testAll(); + } else if ("reopen".equals(args[0])) { + System.setProperty("h2.delayWrongPasswordMin", "0"); + System.setProperty("h2.check2", "false"); + System.setProperty("h2.analyzeAuto", "100"); + System.setProperty("h2.pageSize", "64"); + System.setProperty("h2.reopenShift", "5"); + FilePathRec.register(); + test.reopen = true; + TestReopen reopen = new TestReopen(); + reopen.init(); + FilePathRec.setRecorder(reopen); + test.runTests(); + } else if ("crash".equals(args[0])) { + test.endless = true; + new TestCrashAPI().runTest(test); + } else if ("synth".equals(args[0])) { + new TestSynth().runTest(test); + } else if ("kill".equals(args[0])) { + new TestKill().runTest(test); + } else if ("random".equals(args[0])) { + test.endless = true; + new TestRandomSQL().runTest(test); + } else if ("join".equals(args[0])) { + new TestJoin().runTest(test); + test.endless = true; + } else if ("btree".equals(args[0])) { + new TestBtreeIndex().runTest(test); + } else if ("all".equals(args[0])) { + test.testEverything(); + } else if ("codeCoverage".equals(args[0])) { + test.codeCoverage = true; + test.runCoverage(); + } else if ("multiThread".equals(args[0])) { + new TestMulti().runTest(test); + } else if ("halt".equals(args[0])) { + new TestHaltApp().runTest(test); + } else if ("timer".equals(args[0])) { + new TestTimer().runTest(test); + } + } else { + test.testAll(); + } + System.out.println(TestBase.formatTime( + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)) + " total"); + } + + private void testAll() throws Exception { + runTests(); + if (!travis && !vmlens) { + Profiler prof = new Profiler(); + prof.depth = 16; + prof.interval = 1; + prof.startCollecting(); + TestPerformance.main("-init", "-db", "1", "-size", "1000"); + 
prof.stopCollecting(); + System.out.println(prof.getTop(5)); + TestPerformance.main("-init", "-db", "1", "-size", "1000"); + } + } + + /** + * Run all tests in all possible combinations. + */ + private void testEverything() throws SQLException { + for (int c = 0; c < 2; c++) { + if (c == 0) { + cipher = null; + } else { + cipher = "AES"; + } + for (int a = 0; a < 64; a++) { + smallLog = (a & 1) != 0; + big = (a & 2) != 0; + networked = (a & 4) != 0; + memory = (a & 8) != 0; + ssl = (a & 16) != 0; + diskResult = (a & 32) != 0; + for (int trace = 0; trace < 3; trace++) { + traceLevelFile = trace; + test(); + } + } + } + } + + /** + * Run the tests with a number of different settings. + */ + private void runTests() throws SQLException { + + if (Boolean.getBoolean("abba")) { + abbaLockingDetector = new AbbaLockingDetector().startCollecting(); + } + + smallLog = big = networked = memory = ssl = false; + diskResult = traceSystemOut = diskUndo = false; + traceTest = stopOnError = false; + defrag = false; + traceLevelFile = throttle = 0; + cipher = null; + + // memory is a good match for multi-threaded, makes things happen + // faster, more chance of exposing race conditions + memory = true; + multiThreaded = true; + test(); + if (vmlens) { + return; + } + testUnit(); + + // lazy + lazy = true; + memory = true; + multiThreaded = true; + test(); + lazy = false; + + // but sometimes race conditions need bigger windows + memory = false; + multiThreaded = true; + test(); + testUnit(); + + // a more normal setup + memory = false; + multiThreaded = false; + test(); + testUnit(); + + memory = true; + multiThreaded = false; + networked = true; + test(); + + memory = false; + networked = false; + diskUndo = true; + diskResult = true; + traceLevelFile = 3; + throttle = 1; + cacheType = "SOFT_LRU"; + cipher = "AES"; + test(); + + diskUndo = false; + diskResult = false; + traceLevelFile = 1; + throttle = 0; + cacheType = null; + cipher = null; + defrag = true; + test(); + + if 
(!travis) { + traceLevelFile = 0; + smallLog = true; + networked = true; + defrag = false; + ssl = true; + test(); + + big = true; + smallLog = false; + networked = false; + ssl = false; + traceLevelFile = 0; + test(); + testUnit(); + + big = false; + cipher = "AES"; + test(); + cipher = null; + test(); + } + } + + private void runCoverage() throws SQLException { + smallLog = big = networked = memory = ssl = false; + diskResult = traceSystemOut = diskUndo = false; + traceTest = stopOnError = false; + defrag = false; + traceLevelFile = throttle = 0; + cipher = null; + + memory = true; + multiThreaded = true; + test(); + testUnit(); + + multiThreaded = false; + mvStore = false; + mvcc = false; + test(); + // testUnit(); + } + + /** + * Run all tests with the current settings. + */ + protected void test() throws SQLException { + System.out.println(); + System.out.println("Test " + toString() + + " (" + Utils.getMemoryUsed() + " KB used)"); + beforeTest(); + + // db + addTest(new TestScriptSimple()); + addTest(new TestScript()); + addTest(new TestAlter()); + addTest(new TestAlterSchemaRename()); + addTest(new TestAutoRecompile()); + addTest(new TestBackup()); + addTest(new TestBigDb()); + addTest(new TestBigResult()); + addTest(new TestCases()); + addTest(new TestCheckpoint()); + addTest(new TestCompatibility()); + addTest(new TestCompatibilityOracle()); + addTest(new TestCsv()); + addTest(new TestDeadlock()); + if (vmlens) { + return; + } + addTest(new TestDrop()); + addTest(new TestDuplicateKeyUpdate()); + addTest(new TestEncryptedDb()); + addTest(new TestExclusive()); + addTest(new TestFunctionOverload()); + addTest(new TestFunctions()); + addTest(new TestInit()); + addTest(new TestIndex()); + addTest(new TestIndexHints()); + addTest(new TestLargeBlob()); + addTest(new TestLinkedTable()); + addTest(new TestListener()); + addTest(new TestLob()); + addTest(new TestMergeUsing()); + addTest(new TestMultiConn()); + addTest(new TestMultiDimension()); + addTest(new 
TestMultiThreadedKernel()); + addTest(new TestOpenClose()); + addTest(new TestOptimizations()); + addTest(new TestOptimizerHints()); + addTest(new TestOutOfMemory()); + addTest(new TestReadOnly()); + addTest(new TestRecursiveQueries()); + addTest(new TestGeneralCommonTableQueries()); + if (!memory) { + // requires persistent store for reconnection tests + addTest(new TestPersistentCommonTableExpressions()); + } + addTest(new TestRights()); + addTest(new TestRunscript()); + addTest(new TestSQLInjection()); + addTest(new TestSessionsLocks()); + addTest(new TestSelectCountNonNullColumn()); + addTest(new TestSequence()); + addTest(new TestShow()); + addTest(new TestSpaceReuse()); + addTest(new TestSpatial()); + addTest(new TestSpeed()); + addTest(new TestTableEngines()); + addTest(new TestRowFactory()); + addTest(new TestTempTables()); + addTest(new TestTransaction()); + addTest(new TestTriggersConstraints()); + addTest(new TestTwoPhaseCommit()); + addTest(new TestView()); + addTest(new TestViewAlterTable()); + addTest(new TestViewDropView()); + addTest(new TestReplace()); + addTest(new TestSynonymForTable()); + addTest(new TestColumnNamer()); + + // jdbc + addTest(new TestBatchUpdates()); + addTest(new TestCallableStatement()); + addTest(new TestCancel()); + addTest(new TestConcurrentConnectionUsage()); + addTest(new TestConnection()); + addTest(new TestDatabaseEventListener()); + addTest(new TestJavaObject()); + addTest(new TestLimitUpdates()); + addTest(new TestLobApi()); + addTest(new TestManyJdbcObjects()); + addTest(new TestMetaData()); + addTest(new TestNativeSQL()); + addTest(new TestPreparedStatement()); + addTest(new TestResultSet()); + addTest(new TestStatement()); + addTest(new TestGetGeneratedKeys()); + addTest(new TestTransactionIsolation()); + addTest(new TestUpdatableResultSet()); + addTest(new TestZloty()); + addTest(new TestCustomDataTypesHandler()); + addTest(new TestSetCollation()); + + // jdbcx + addTest(new TestConnectionPool()); + addTest(new 
TestDataSource()); + addTest(new TestXA()); + addTest(new TestXASimple()); + + // server + addTest(new TestAutoServer()); + addTest(new TestNestedLoop()); + + // mvcc & row level locking + addTest(new TestMvcc1()); + addTest(new TestMvcc2()); + addTest(new TestMvcc3()); + addTest(new TestMvcc4()); + addTest(new TestMvccMultiThreaded()); + addTest(new TestMvccMultiThreaded2()); + addTest(new TestRowLocks()); + + // synth + addTest(new TestBtreeIndex()); + addTest(new TestConcurrentUpdate()); + addTest(new TestDiskFull()); + addTest(new TestCrashAPI()); + addTest(new TestFuzzOptimizations()); + addTest(new TestLimit()); + addTest(new TestRandomCompare()); + addTest(new TestKillRestart()); + addTest(new TestKillRestartMulti()); + addTest(new TestMultiThreaded()); + addTest(new TestOuterJoins()); + addTest(new TestNestedJoins()); + addTest(new TestStringAggCompatibility()); + + runAddedTests(); + + // serial + addTest(new TestDateStorage()); + addTest(new TestDriver()); + addTest(new TestJavaObjectSerializer()); + addTest(new TestLocale()); + addTest(new TestMemoryUsage()); + addTest(new TestMultiThread()); + addTest(new TestPowerOff()); + addTest(new TestReorderWrites()); + addTest(new TestRandomSQL()); + addTest(new TestQueryCache()); + addTest(new TestUrlJavaObjectSerializer()); + addTest(new TestWeb()); + + runAddedTests(1); + + afterTest(); + } + + protected void testUnit() { + // mv store + addTest(new TestCacheConcurrentLIRS()); + addTest(new TestCacheLIRS()); + addTest(new TestCacheLongKeyLIRS()); + addTest(new TestConcurrentLinkedList()); + addTest(new TestDataUtils()); + addTest(new TestFreeSpace()); + addTest(new TestKillProcessWhileWriting()); + addTest(new TestMVRTree()); + addTest(new TestMVStore()); + addTest(new TestMVStoreBenchmark()); + addTest(new TestMVStoreStopCompact()); + addTest(new TestMVStoreTool()); + addTest(new TestMVTableEngine()); + addTest(new TestObjectDataType()); + addTest(new TestRandomMapOps()); + addTest(new TestSpinLock()); + 
addTest(new TestStreamStore()); + addTest(new TestTransactionStore()); + + // unit + addTest(new TestAnsCompression()); +// addTest(new TestAutoReconnect()); + addTest(new TestBinaryArithmeticStream()); + addTest(new TestBitField()); + addTest(new TestBitStream()); + addTest(new TestBnf()); + addTest(new TestCache()); + addTest(new TestCharsetCollator()); + addTest(new TestClearReferences()); + addTest(new TestCollation()); + addTest(new TestCompress()); + addTest(new TestConnectionInfo()); + addTest(new TestDataPage()); + addTest(new TestDateIso8601()); + addTest(new TestExit()); + addTest(new TestFile()); + addTest(new TestFileLock()); + addTest(new TestFtp()); + addTest(new TestIntArray()); + addTest(new TestIntIntHashMap()); + addTest(new TestIntPerfectHash()); + addTest(new TestJmx()); + addTest(new TestMathUtils()); + addTest(new TestMode()); + addTest(new TestModifyOnWrite()); + addTest(new TestOldVersion()); + addTest(new TestObjectDeserialization()); + addTest(new TestMultiThreadedKernel()); + addTest(new TestOverflow()); + addTest(new TestPageStore()); + addTest(new TestPageStoreCoverage()); + addTest(new TestPerfectHash()); + addTest(new TestPgServer()); + addTest(new TestReader()); + addTest(new TestRecovery()); + addTest(new TestScriptReader()); + addTest(new RecoverLobTest()); + addTest(createTest("org.h2.test.unit.TestServlet")); + addTest(new TestSecurity()); + addTest(new TestShell()); + addTest(new TestSort()); + addTest(new TestStreams()); + addTest(new TestStringUtils()); + addTest(new TestTimeStampWithTimeZone()); + addTest(new TestTraceSystem()); + addTest(new TestUpgrade()); + addTest(new TestUsingIndex()); + addTest(new TestUtils()); + addTest(new TestValue()); + addTest(new TestValueHashMap()); + addTest(new TestWeb()); + + + runAddedTests(); + + // serial + addTest(new TestDate()); + addTest(new TestDateTimeUtils()); + addTest(new TestCluster()); + addTest(new TestConcurrent()); + addTest(new TestFileLockSerialized()); +// addTest(new 
TestFileLockProcess()); + addTest(new TestFileSystem()); + addTest(new TestNetUtils()); + addTest(new TestPattern()); +// addTest(new TestTools()); + addTest(new TestSampleApps()); + addTest(new TestStringCache()); + addTest(new TestValueMemory()); + + runAddedTests(1); + } + + protected void addTest(TestBase test) { + // tests.add(test); + // run directly for now, because concurrently running tests + // fail on Raspberry Pi quite often (seems to be a JVM problem) + + // event queue watchdog for tests that get stuck when running in Jenkins + final java.util.Timer watchdog = new java.util.Timer(); + // 5 minutes + watchdog.schedule(new TimerTask() { + @Override + public void run() { + ThreadDeadlockDetector.dumpAllThreadsAndLocks("test watchdog timed out"); + } + }, 5 * 60 * 1000); + try { + test.runTest(this); + } finally { + watchdog.cancel(); + } + } + + private void runAddedTests() { + int threadCount = ManagementFactory.getOperatingSystemMXBean().getAvailableProcessors(); + // threadCount = 2; + runAddedTests(threadCount); + } + + private void runAddedTests(int threadCount) { + Task[] tasks = new Task[threadCount]; + for (int i = 0; i < threadCount; i++) { + Task t = new Task() { + @Override + public void call() throws Exception { + while (true) { + TestBase test; + synchronized (tests) { + if (tests.isEmpty()) { + break; + } + test = tests.remove(0); + } + test.runTest(TestAll.this); + } + } + }; + t.execute(); + tasks[i] = t; + } + for (Task t : tasks) { + t.get(); + } + } + + private static TestBase createTest(String className) { + try { + Class<?> clazz = Class.forName(className); + return (TestBase) clazz.newInstance(); + } catch (Exception e) { + // ignore + TestBase.printlnWithTime(0, className + " class not found"); + } catch (NoClassDefFoundError e) { + // ignore + TestBase.printlnWithTime(0, className + " class not found"); + } + return new TestBase() { + + @Override + public void test() throws Exception { + // ignore + } + + }; + } + + /** + * This
method is called before a complete set of tests is run. It deletes + * old database files in the test directory and trace files. It also starts + * a TCP server if the test uses remote connections. + */ + public void beforeTest() throws SQLException { + Driver.load(); + FileUtils.deleteRecursive(TestBase.BASE_TEST_DIR, true); + DeleteDbFiles.execute(TestBase.BASE_TEST_DIR, null, true); + FileUtils.deleteRecursive("target/trace.db", false); + if (networked) { + String[] args = ssl ? new String[] { "-ifNotExists", "-tcpSSL" } : new String[] { "-ifNotExists" }; + server = Server.createTcpServer(args); + try { + server.start(); + } catch (SQLException e) { + System.out.println("FAIL: can not start server (may already be running)"); + server = null; + } + } + } + + /** + * Stop the server if it was started. + */ + public void afterTest() { + if (networked && server != null) { + server.stop(); + } + JdbcDataSourceFactory.getTraceSystem().close(); + FileUtils.deleteRecursive("target/trace.db", true); + FileUtils.deleteRecursive(TestBase.BASE_TEST_DIR, true); + } + + public int getPort() { + return server == null ? 9192 : server.getPort(); + } + + /** + * Print system information. 
+ */ + public static void printSystemInfo() { + Properties prop = System.getProperties(); + System.out.println("H2 " + Constants.getFullVersion() + + " @ " + new java.sql.Timestamp(System.currentTimeMillis()).toString()); + System.out.println("Java " + + prop.getProperty("java.runtime.version") + ", " + + prop.getProperty("java.vm.name")+", " + + prop.getProperty("java.vendor") + ", " + + prop.getProperty("sun.arch.data.model")); + System.out.println( + prop.getProperty("os.name") + ", " + + prop.getProperty("os.arch")+", "+ + prop.getProperty("os.version")+", "+ + prop.getProperty("sun.os.patch.level")+", "+ + prop.getProperty("file.separator")+" "+ + prop.getProperty("path.separator")+" "+ + StringUtils.javaEncode(prop.getProperty("line.separator")) + " " + + prop.getProperty("user.country") + " " + + prop.getProperty("user.language") + " " + + prop.getProperty("user.timezone") + " " + + prop.getProperty("user.variant")+" "+ + prop.getProperty("file.encoding")); + } + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + appendIf(buff, lazy, "lazy"); + appendIf(buff, mvStore, "mvStore"); + appendIf(buff, big, "big"); + appendIf(buff, networked, "net"); + appendIf(buff, memory, "memory"); + appendIf(buff, codeCoverage, "codeCoverage"); + appendIf(buff, mvcc, "mvcc"); + appendIf(buff, multiThreaded, "multiThreaded"); + appendIf(buff, cipher != null, cipher); + appendIf(buff, cacheType != null, cacheType); + appendIf(buff, smallLog, "smallLog"); + appendIf(buff, ssl, "ssl"); + appendIf(buff, diskUndo, "diskUndo"); + appendIf(buff, diskResult, "diskResult"); + appendIf(buff, traceSystemOut, "traceSystemOut"); + appendIf(buff, endless, "endless"); + appendIf(buff, traceLevelFile > 0, "traceLevelFile"); + appendIf(buff, throttle > 0, "throttle:" + throttle); + appendIf(buff, traceTest, "traceTest"); + appendIf(buff, stopOnError, "stopOnError"); + appendIf(buff, defrag, "defrag"); + appendIf(buff, splitFileSystem, "split"); + 
appendIf(buff, collation != null, collation); + return buff.toString(); + } + + private static void appendIf(StringBuilder buff, boolean flag, String text) { + if (flag) { + buff.append(text); + buff.append(' '); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/TestAllJunit.java b/modules/h2/src/test/java/org/h2/test/TestAllJunit.java new file mode 100644 index 0000000000000..9dbe51e9561f7 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/TestAllJunit.java @@ -0,0 +1,23 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test; + +import org.junit.Test; + +/** + * This class is a bridge between JUnit and the custom test framework + * used by H2. + */ +public class TestAllJunit { + + /** + * Run all the fast tests. + */ + @Test + public void testTravis() throws Exception { + TestAll.main("travis"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/TestBase.java b/modules/h2/src/test/java/org/h2/test/TestBase.java new file mode 100644 index 0000000000000..b62a48a789fa5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/TestBase.java @@ -0,0 +1,1758 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test; + +import java.io.ByteArrayInputStream; +import java.io.File; +import java.io.FileWriter; +import java.io.IOException; +import java.io.InputStream; +import java.io.PrintWriter; +import java.io.Reader; +import java.lang.reflect.Constructor; +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.lang.reflect.Proxy; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.text.DateFormat; +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Collections; +import java.util.LinkedList; +import java.util.List; +import java.util.Objects; +import java.util.SimpleTimeZone; +import java.util.concurrent.TimeUnit; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FileUtils; +import org.h2.test.utils.ProxyCodeGenerator; +import org.h2.test.utils.ResultVerifier; +import org.h2.test.utils.SelfDestructor; +import org.h2.tools.DeleteDbFiles; + +/** + * The base class for all tests. + */ +public abstract class TestBase { + + /** + * The base directory. + */ + public static final String BASE_TEST_DIR = "./target/data"; + + /** + * An id used to create unique file names. + */ + protected static int uniqueId; + + /** + * The temporary directory. + */ + private static final String TEMP_DIR = "./target/data/temp"; + + /** + * The base directory to write test databases. + */ + private static String baseDir = getTestDir(""); + + /** + * The test configuration. 
+ */ + public TestAll config; + + /** + * The time when the test was started. + */ + protected long start; + + private final LinkedList<byte[]> memory = new LinkedList<>(); + + private static final SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss"); + + /** + * Get the test directory for this test. + * + * @param name the directory name suffix + * @return the test directory + */ + public static String getTestDir(String name) { + return BASE_TEST_DIR + "/test" + name; + } + + /** + * Start the TCP server if enabled in the configuration. + */ + protected void startServerIfRequired() throws SQLException { + config.beforeTest(); + } + + /** + * Initialize the test configuration using the default settings. + * + * @return itself + */ + public TestBase init() throws Exception { + return init(new TestAll()); + } + + /** + * Initialize the test configuration. + * + * @param conf the configuration + * @return itself + */ + public TestBase init(TestAll conf) throws Exception { + baseDir = getTestDir(""); + FileUtils.createDirectories(baseDir); +// System.setProperty("java.io.tmpdir", TEMP_DIR); + this.config = conf; + return this; + } + + /** + * This method initializes the test, runs the test by calling the test() + * method, and prints status information. It also catches exceptions so that + * the tests can continue. + * + * @param conf the test configuration + */ + public void runTest(TestAll conf) { + if (conf.abbaLockingDetector != null) { + conf.abbaLockingDetector.reset(); + } + try { + init(conf); + start = System.nanoTime(); + test(); + println(""); + } catch (Throwable e) { + println("FAIL " + e.toString()); + logError("FAIL " + e.toString(), e); + if (config.stopOnError) { + throw new AssertionError("ERROR"); + } + TestAll.atLeastOneTestFailed = true; + if (e instanceof OutOfMemoryError) { + throw (OutOfMemoryError) e; + } + } + } + + /** + * Open a database connection in admin mode. The default user name and + * password are used.
+ * + * @param name the database name + * @return the connection + */ + public Connection getConnection(String name) throws SQLException { + return getConnectionInternal(getURL(name, true), getUser(), + getPassword()); + } + + /** + * Open a database connection. + * + * @param name the database name + * @param user the user name to use + * @param password the password to use + * @return the connection + */ + public Connection getConnection(String name, String user, String password) + throws SQLException { + return getConnectionInternal(getURL(name, false), user, password); + } + + /** + * Get the password to use to login for the given user password. The file + * password is added if required. + * + * @param userPassword the password of this user + * @return the login password + */ + protected String getPassword(String userPassword) { + return config == null || config.cipher == null ? + userPassword : getFilePassword() + " " + userPassword; + } + + /** + * Get the file password (only required if file encryption is used). + * + * @return the file password + */ + protected String getFilePassword() { + return "filePassword"; + } + + /** + * Get the login password. This is usually the user password. If file + * encryption is used it is combined with the file password. + * + * @return the login password + */ + protected String getPassword() { + return getPassword("123"); + } + + /** + * Get the base directory for tests. + * If a special file system is used, the prefix is prepended. + * + * @return the directory, possibly including file system prefix + */ + public String getBaseDir() { + String dir = baseDir; + if (config != null) { + if (config.reopen) { + dir = "rec:memFS:" + dir; + } + if (config.splitFileSystem) { + dir = "split:16:" + dir; + } + } + // return "split:nioMapped:" + baseDir; + return dir; + } + + /** + * Get the database URL for the given database name using the current + * configuration options. 
+ * + * @param name the database name + * @param admin true if the current user is an admin + * @return the database URL + */ + protected String getURL(String name, boolean admin) { + String url; + if (name.startsWith("jdbc:")) { + if (config.mvStore) { + name = addOption(name, "MV_STORE", "true"); + // name = addOption(name, "MVCC", "true"); + } + return name; + } + if (admin) { + // name = addOption(name, "RETENTION_TIME", "10"); + // name = addOption(name, "WRITE_DELAY", "10"); + } + int idx = name.indexOf(':'); + if (idx == -1 && config.memory) { + name = "mem:" + name; + } else { + if (idx < 0 || idx > 10) { + // index > 10 if in options + name = getBaseDir() + "/" + name; + } + } + if (config.networked) { + if (config.ssl) { + url = "ssl://localhost:"+config.getPort()+"/" + name; + } else { + url = "tcp://localhost:"+config.getPort()+"/" + name; + } + } else if (config.googleAppEngine) { + url = "gae://" + name + + ";FILE_LOCK=NO;AUTO_SERVER=FALSE;DB_CLOSE_ON_EXIT=FALSE"; + } else { + url = name; + } + if (config.mvStore) { + url = addOption(url, "MV_STORE", "true"); + // url = addOption(url, "MVCC", "true"); + } else { + url = addOption(url, "MV_STORE", "false"); + } + if (!config.memory) { + if (config.smallLog && admin) { + url = addOption(url, "MAX_LOG_SIZE", "1"); + } + } + if (config.traceSystemOut) { + url = addOption(url, "TRACE_LEVEL_SYSTEM_OUT", "2"); + } + if (config.traceLevelFile > 0 && admin) { + url = addOption(url, "TRACE_LEVEL_FILE", "" + config.traceLevelFile); + url = addOption(url, "TRACE_MAX_FILE_SIZE", "8"); + } + url = addOption(url, "LOG", "1"); + if (config.throttleDefault > 0) { + url = addOption(url, "THROTTLE", "" + config.throttleDefault); + } else if (config.throttle > 0) { + url = addOption(url, "THROTTLE", "" + config.throttle); + } + url = addOption(url, "LOCK_TIMEOUT", "" + config.lockTimeout); + if (config.diskUndo && admin) { + url = addOption(url, "MAX_MEMORY_UNDO", "3"); + } + if (config.big && admin) { + // force 
operations to disk + url = addOption(url, "MAX_OPERATION_MEMORY", "1"); + } + if (config.mvcc) { + url = addOption(url, "MVCC", "TRUE"); + } + if (config.multiThreaded) { + url = addOption(url, "MULTI_THREADED", "TRUE"); + } + if (config.lazy) { + url = addOption(url, "LAZY_QUERY_EXECUTION", "1"); + } + if (config.cacheType != null && admin) { + url = addOption(url, "CACHE_TYPE", config.cacheType); + } + if (config.diskResult && admin) { + url = addOption(url, "MAX_MEMORY_ROWS", "100"); + url = addOption(url, "CACHE_SIZE", "0"); + } + if (config.cipher != null) { + url = addOption(url, "CIPHER", config.cipher); + } + if (config.defrag) { + url = addOption(url, "DEFRAG_ALWAYS", "TRUE"); + } + if (config.collation != null) { + url = addOption(url, "COLLATION", config.collation); + } + return "jdbc:h2:" + url; + } + + private static String addOption(String url, String option, String value) { + if (url.indexOf(";" + option + "=") < 0) { + url += ";" + option + "=" + value; + } + return url; + } + + private static Connection getConnectionInternal(String url, String user, + String password) throws SQLException { + org.h2.Driver.load(); + // url += ";DEFAULT_TABLE_TYPE=1"; + // Class.forName("org.hsqldb.jdbcDriver"); + // return DriverManager.getConnection("jdbc:hsqldb:" + name, "sa", ""); + return DriverManager.getConnection(url, user, password); + } + + /** + * Get the small or the big value depending on the configuration. + * + * @param small the value to return if the current test mode is 'small' + * @param big the value to return if the current test mode is 'big' + * @return small or big, depending on the configuration + */ + protected int getSize(int small, int big) { + return config.endless ? Integer.MAX_VALUE : config.big ? big : small; + } + + protected String getUser() { + return "sa"; + } + + /** + * Write a message to system out if trace is enabled. 
+ * + * @param x the value to write + */ + protected void trace(int x) { + trace("" + x); + } + + /** + * Write a message to system out if trace is enabled. + * + * @param s the message to write + */ + public void trace(String s) { + if (config.traceTest) { + println(s); + } + } + + /** + * Print how much memory is currently used. + */ + protected void traceMemory() { + if (config.traceTest) { + trace("mem=" + getMemoryUsed()); + } + } + + /** + * Print the currently used memory, the message and the given time in + * milliseconds. + * + * @param s the message + * @param time the time in millis + */ + public void printTimeMemory(String s, long time) { + if (config.big) { + Runtime rt = Runtime.getRuntime(); + long memNow = rt.totalMemory() - rt.freeMemory(); + println(memNow / 1024 / 1024 + " MB: " + s + " ms: " + time); + } + } + + /** + * Get the number of megabytes heap memory in use. + * + * @return the used megabytes + */ + public static int getMemoryUsed() { + return (int) (getMemoryUsedBytes() / 1024 / 1024); + } + + /** + * Get the number of bytes heap memory in use. + * + * @return the used bytes + */ + public static long getMemoryUsedBytes() { + Runtime rt = Runtime.getRuntime(); + long memory = Long.MAX_VALUE; + for (int i = 0; i < 8; i++) { + rt.gc(); + long memNow = rt.totalMemory() - rt.freeMemory(); + if (memNow >= memory) { + break; + } + memory = memNow; + } + return memory; + } + + /** + * Called if the test reached a point that was not expected. + * + * @throws AssertionError always throws an AssertionError + */ + public void fail() { + fail("Failure"); + } + + /** + * Called if the test reached a point that was not expected. 
+ * + * @param string the error message + * @throws AssertionError always throws an AssertionError + */ + protected void fail(String string) { + if (string.length() > 100) { + // avoid long strings with special characters, because they are slow + // to display in Eclipse + char[] data = string.toCharArray(); + for (int i = 0; i < data.length; i++) { + char c = data[i]; + if (c >= 128 || c < 32) { + data[i] = (char) ('a' + (c & 15)); + string = null; + } + } + if (string == null) { + string = new String(data); + } + } + println(string); + throw new AssertionError(string); + } + + /** + * Log an error message. + * + * @param s the message + */ + public static void logErrorMessage(String s) { + System.out.flush(); + System.err.println("ERROR: " + s + "------------------------------"); + logThrowable(s, null); + } + + /** + * Log an error message. + * + * @param s the message + * @param e the exception + */ + public static void logError(String s, Throwable e) { + if (e == null) { + e = new Exception(s); + } + System.out.flush(); + System.err.println("ERROR: " + s + " " + e.toString() + + " ------------------------------"); + e.printStackTrace(); + logThrowable(null, e); + } + + private static void logThrowable(String s, Throwable e) { + // synchronize on this class, because file locks are only visible to + // other JVMs + synchronized (TestBase.class) { + try { + // lock + FileChannel fc = FilePath.get("target/error.lock").open("rw"); + FileLock lock; + while (true) { + lock = fc.tryLock(); + if (lock != null) { + break; + } + Thread.sleep(10); + } + // append + FileWriter fw = new FileWriter("target/error.txt", true); + if (s != null) { + fw.write(s); + } + if (e != null) { + PrintWriter pw = new PrintWriter(fw); + e.printStackTrace(pw); + pw.close(); + } + fw.close(); + // unlock + lock.release(); + } catch (Throwable t) { + t.printStackTrace(); + } + } + System.err.flush(); + } + + /** + * Print a message to system out. 
+ * + * @param s the message + */ + public void println(String s) { + long now = System.nanoTime(); + long time = TimeUnit.NANOSECONDS.toMillis(now - start); + printlnWithTime(time, getClass().getName() + " " + s); + } + + /** + * Print a message, prepended with the specified time in milliseconds. + * + * @param millis the time in milliseconds + * @param s the message + */ + static synchronized void printlnWithTime(long millis, String s) { + s = dateFormat.format(new java.util.Date()) + " " + + formatTime(millis) + " " + s; + System.out.println(s); + } + + /** + * Print the current time and a message to system out. + * + * @param s the message + */ + protected void printTime(String s) { + SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss"); + println(dateFormat.format(new java.util.Date()) + " " + s); + } + + /** + * Format the time in the format hh:mm:ss.1234 where 1234 is milliseconds. + * + * @param millis the time in milliseconds + * @return the formatted time + */ + static String formatTime(long millis) { + String s = new java.sql.Time( + java.sql.Time.valueOf("0:0:0").getTime() + millis).toString() + + "." + ("" + (1000 + (millis % 1000))).substring(1); + if (s.startsWith("00:")) { + s = s.substring(3); + } + return s; + } + + /** + * Delete all database files for this database. + * + * @param name the database name + */ + protected void deleteDb(String name) { + deleteDb(getBaseDir(), name); + } + + /** + * Delete all database files for a database. + * + * @param dir the directory where the database files are located + * @param name the database name + */ + protected void deleteDb(String dir, String name) { + DeleteDbFiles.execute(dir, name, true); + // ArrayList list; + // list = FileLister.getDatabaseFiles(baseDir, name, true); + // if (list.size() > 0) { + // System.out.println("Not deleted: " + list); + // } + } + + /** + * This method will be called by the test framework. 
+ * + * @throws Exception if an exception in the test occurs + */ + public abstract void test() throws Exception; + + /** + * Check if two values are equal, and if not throw an exception. + * + * @param message the message to print in case of error + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + public void assertEquals(String message, int expected, int actual) { + if (expected != actual) { + fail("Expected: " + expected + " actual: " + actual + " message: " + message); + } + } + + /** + * Check if two values are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + public void assertEquals(int expected, int actual) { + if (expected != actual) { + fail("Expected: " + expected + " actual: " + actual); + } + } + + /** + * Check if two values are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + public void assertEquals(byte[] expected, byte[] actual) { + if (expected == null || actual == null) { + assertTrue(expected == actual); + return; + } + assertEquals(expected.length, actual.length); + for (int i = 0; i < expected.length; i++) { + if (expected[i] != actual[i]) { + fail("[" + i + "]: expected: " + (int) expected[i] + + " actual: " + (int) actual[i]); + } + } + } + + /** + * Check if two values are equal, and if not throw an exception. 
+ * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + public void assertEquals(java.util.Date expected, java.util.Date actual) { + if (!Objects.equals(expected, actual)) { + DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS"); + SimpleTimeZone gmt = new SimpleTimeZone(0, "Z"); + df.setTimeZone(gmt); + fail("Expected: " + + (expected != null ? df.format(expected) : "null") + + " actual: " + + (actual != null ? df.format(actual) : "null")); + } + } + + /** + * Check if two arrays are equal, and if not throw an exception. + * If some of the elements in the arrays are themselves arrays this + * check is called recursively. + * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + public void assertEquals(Object[] expected, Object[] actual) { + if (expected == null || actual == null) { + assertTrue(expected == actual); + return; + } + assertEquals(expected.length, actual.length); + for (int i = 0; i < expected.length; i++) { + if (expected[i] == null || actual[i] == null) { + if (expected[i] != actual[i]) { + fail("[" + i + "]: expected: " + expected[i] + " actual: " + actual[i]); + } + } else if (expected[i] instanceof Object[] && actual[i] instanceof Object[]) { + assertEquals((Object[]) expected[i], (Object[]) actual[i]); + } else if (!expected[i].equals(actual[i])) { + fail("[" + i + "]: expected: " + expected[i] + " actual: " + actual[i]); + } + } + } + + /** + * Check if two values are equal, and if not throw an exception. 
+ * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + public void assertEquals(Object expected, Object actual) { + if (expected == null || actual == null) { + assertTrue(expected == actual); + return; + } + if (!expected.equals(actual)) { + fail(" expected: " + expected + " actual: " + actual); + } + } + + /** + * Check if two readers are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @param len the maximum length, or -1 + * @throws AssertionError if the values are not equal + */ + protected void assertEqualReaders(Reader expected, Reader actual, int len) + throws IOException { + for (int i = 0; len < 0 || i < len; i++) { + int ce = expected.read(); + int ca = actual.read(); + assertEquals("pos:" + i, ce, ca); + if (ce == -1) { + break; + } + } + expected.close(); + actual.close(); + } + + /** + * Check if two streams are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @param len the maximum length, or -1 + * @throws AssertionError if the values are not equal + */ + protected void assertEqualStreams(InputStream expected, InputStream actual, + int len) throws IOException { + // this doesn't actually read anything - just tests reading 0 bytes + actual.read(new byte[0]); + expected.read(new byte[0]); + actual.read(new byte[10], 3, 0); + expected.read(new byte[10], 0, 0); + + for (int i = 0; len < 0 || i < len; i++) { + int ca = actual.read(); + actual.read(new byte[0]); + int ce = expected.read(); + if (ca != ce) { + assertEquals("Error at index " + i, ce, ca); + } + if (ca == -1) { + break; + } + } + actual.read(new byte[10], 3, 0); + expected.read(new byte[10], 0, 0); + actual.read(new byte[0]); + expected.read(new byte[0]); + actual.close(); + expected.close(); + } + + /** + * Check if two values are equal, and if not throw an exception. 
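Setting aside the zero-length-read probes, the core of `assertEqualStreams` above is a byte-by-byte comparison until both streams hit end of stream. A minimal self-contained sketch of that comparison (the helper name `streamsEqual` is illustrative, not part of the framework):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamCompareSketch {
    // Compare two streams byte by byte; equal only if every byte matches
    // and both streams reach end of stream (-1) at the same position.
    static boolean streamsEqual(InputStream a, InputStream b) throws IOException {
        int x, y;
        do {
            x = a.read();
            y = b.read();
            if (x != y) {
                return false;
            }
        } while (x != -1);
        return true;
    }

    public static void main(String[] args) throws IOException {
        byte[] d1 = {1, 2, 3};
        byte[] d2 = {1, 2, 4};
        System.out.println(streamsEqual(
                new ByteArrayInputStream(d1), new ByteArrayInputStream(d1.clone())));
        System.out.println(streamsEqual(
                new ByteArrayInputStream(d1), new ByteArrayInputStream(d2)));
    }
}
```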
+ * + * @param message the message to use if the check fails + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + protected void assertEquals(String message, String expected, String actual) { + if (expected == null && actual == null) { + return; + } else if (expected == null || actual == null) { + fail("Expected: " + expected + " Actual: " + actual + " " + message); + } else if (!expected.equals(actual)) { + int al = expected.length(); + int bl = actual.length(); + for (int i = 0; i < expected.length(); i++) { + String s = expected.substring(0, i); + if (!actual.startsWith(s)) { + expected = expected.substring(0, i) + "<*>" + expected.substring(i); + if (al > 20) { + expected = "@" + i + " " + expected; + } + break; + } + } + if (al > 4000) { + expected = expected.substring(0, 4000); + } + if (bl > 4000) { + actual = actual.substring(0, 4000); + } + fail("Expected: " + expected + " (" + al + ") actual: " + actual + + " (" + bl + ") " + message); + } + } + + /** + * Check if two values are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + protected void assertEquals(String expected, String actual) { + assertEquals("", expected, actual); + } + + /** + * Check if two result sets are equal, and if not throw an exception. 
+     *
+     * @param message the message to use if the check fails
+     * @param rs0 the first result set
+     * @param rs1 the second result set
+     * @throws AssertionError if the values are not equal
+     */
+    protected void assertEquals(String message, ResultSet rs0, ResultSet rs1)
+            throws SQLException {
+        ResultSetMetaData meta = rs0.getMetaData();
+        int columns = meta.getColumnCount();
+        assertEquals(columns, rs1.getMetaData().getColumnCount());
+        while (rs0.next()) {
+            assertTrue(message, rs1.next());
+            for (int i = 0; i < columns; i++) {
+                assertEquals(message, rs0.getString(i + 1), rs1.getString(i + 1));
+            }
+        }
+        assertFalse(message, rs0.next());
+        assertFalse(message, rs1.next());
+    }
+
+    /**
+     * Check that the first value is smaller than the second value, and if
+     * not throw an exception.
+     *
+     * @param a the first value
+     * @param b the second value (must be larger than the first value)
+     * @throws AssertionError if the first value is larger or equal
+     */
+    protected void assertSmaller(long a, long b) {
+        if (a >= b) {
+            fail("a: " + a + " is not smaller than b: " + b);
+        }
+    }
+
+    /**
+     * Check that a result contains the given substring.
+     *
+     * @param result the result value
+     * @param contains the term that should appear in the result
+     * @throws AssertionError if the term was not found
+     */
+    protected void assertContains(String result, String contains) {
+        if (result.indexOf(contains) < 0) {
+            fail(result + " does not contain: " + contains);
+        }
+    }
+
+    /**
+     * Check that a text starts with the expected characters.
+     *
+     * @param text the text
+     * @param expectedStart the expected prefix
+     * @throws AssertionError if the text does not start with the expected
+     *             characters
+     */
+    protected void assertStartsWith(String text, String expectedStart) {
+        if (!text.startsWith(expectedStart)) {
+            fail("[" + text + "] does not start with: [" + expectedStart + "]");
+        }
+    }
+
+    /**
+     * Check if two values are equal, and if not throw an exception.
+ * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + protected void assertEquals(long expected, long actual) { + if (expected != actual) { + fail("Expected: " + expected + " actual: " + actual); + } + } + + /** + * Check if two values are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + protected void assertEquals(double expected, double actual) { + if (expected != actual) { + if (Double.isNaN(expected) && Double.isNaN(actual)) { + // if both a NaN, then there is no error + } else { + fail("Expected: " + expected + " actual: " + actual); + } + } + } + + /** + * Check if two values are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + protected void assertEquals(float expected, float actual) { + if (expected != actual) { + if (Float.isNaN(expected) && Float.isNaN(actual)) { + // if both a NaN, then there is no error + } else { + fail("Expected: " + expected + " actual: " + actual); + } + } + } + + /** + * Check if two values are equal, and if not throw an exception. + * + * @param expected the expected value + * @param actual the actual value + * @throws AssertionError if the values are not equal + */ + protected void assertEquals(boolean expected, boolean actual) { + if (expected != actual) { + fail("Boolean expected: " + expected + " actual: " + actual); + } + } + + /** + * Check that the passed boolean is true. + * + * @param condition the condition + * @throws AssertionError if the condition is false + */ + public void assertTrue(boolean condition) { + assertTrue("Expected: true got: false", condition); + } + + /** + * Check that the passed object is null. 
+ * + * @param obj the object + * @throws AssertionError if the condition is false + */ + public void assertNull(Object obj) { + if (obj != null) { + fail("Expected: null got: " + obj); + } + } + + /** + * Check that the passed boolean is true. + * + * @param message the message to print if the condition is false + * @param condition the condition + * @throws AssertionError if the condition is false + */ + public void assertTrue(String message, boolean condition) { + if (!condition) { + fail(message); + } + } + + /** + * Check that the passed boolean is false. + * + * @param value the condition + * @throws AssertionError if the condition is true + */ + protected void assertFalse(boolean value) { + assertFalse("Expected: false got: true", value); + } + + /** + * Check that the passed boolean is false. + * + * @param message the message to print if the condition is false + * @param value the condition + * @throws AssertionError if the condition is true + */ + protected void assertFalse(String message, boolean value) { + if (value) { + fail(message); + } + } + + /** + * Check that the result set row count matches. + * + * @param expected the number of expected rows + * @param rs the result set + * @throws AssertionError if a different number of rows have been found + */ + protected void assertResultRowCount(int expected, ResultSet rs) + throws SQLException { + int i = 0; + while (rs.next()) { + i++; + } + assertEquals(expected, i); + } + + /** + * Check that the result set of a query is exactly this value. 
+ * + * @param stat the statement + * @param sql the SQL statement to execute + * @param expected the expected result value + * @throws AssertionError if a different result value was returned + */ + protected void assertSingleValue(Statement stat, String sql, int expected) + throws SQLException { + ResultSet rs = stat.executeQuery(sql); + assertTrue(rs.next()); + assertEquals(expected, rs.getInt(1)); + assertFalse(rs.next()); + } + + /** + * Check that the result set of a query is exactly this value. + * + * @param expected the expected result value + * @param stat the statement + * @param sql the SQL statement to execute + * @throws AssertionError if a different result value was returned + */ + protected void assertResult(String expected, Statement stat, String sql) + throws SQLException { + ResultSet rs = stat.executeQuery(sql); + if (rs.next()) { + String actual = rs.getString(1); + assertEquals(expected, actual); + } else { + assertEquals(expected, null); + } + } + + /** + * Check that executing the specified query results in the specified error. + * + * @param expectedErrorCode the expected error code + * @param stat the statement + * @param sql the SQL statement to execute + */ + protected void assertThrows(int expectedErrorCode, Statement stat, + String sql) { + try { + execute(stat, sql); + fail("Expected error: " + expectedErrorCode); + } catch (SQLException ex) { + assertEquals(expectedErrorCode, ex.getErrorCode()); + } + } + + /** + * Execute the statement. + * + * @param stat the statement + */ + public void execute(PreparedStatement stat) throws SQLException { + execute(stat, null); + } + + /** + * Execute the statement. + * + * @param stat the statement + * @param sql the SQL command + */ + protected void execute(Statement stat, String sql) throws SQLException { + boolean query = sql == null ? 
((PreparedStatement) stat).execute() : + stat.execute(sql); + + if (query && config.lazy) { + try (ResultSet rs = stat.getResultSet()) { + while (rs.next()) { + // just loop + } + } + } + } + + /** + * Check if the result set meta data is correct. + * + * @param rs the result set + * @param columnCount the expected column count + * @param labels the expected column labels + * @param datatypes the expected data types + * @param precision the expected precisions + * @param scale the expected scales + */ + protected void assertResultSetMeta(ResultSet rs, int columnCount, + String[] labels, int[] datatypes, int[] precision, int[] scale) + throws SQLException { + ResultSetMetaData meta = rs.getMetaData(); + int cc = meta.getColumnCount(); + if (cc != columnCount) { + fail("result set contains " + cc + " columns not " + columnCount); + } + for (int i = 0; i < columnCount; i++) { + if (labels != null) { + String l = meta.getColumnLabel(i + 1); + if (!labels[i].equals(l)) { + fail("column label " + i + " is " + l + " not " + labels[i]); + } + } + if (datatypes != null) { + int t = meta.getColumnType(i + 1); + if (datatypes[i] != t) { + fail("column datatype " + i + " is " + t + " not " + datatypes[i] + " (prec=" + + meta.getPrecision(i + 1) + " scale=" + meta.getScale(i + 1) + ")"); + } + String typeName = meta.getColumnTypeName(i + 1); + String className = meta.getColumnClassName(i + 1); + switch (t) { + case Types.INTEGER: + assertEquals("INTEGER", typeName); + assertEquals("java.lang.Integer", className); + break; + case Types.VARCHAR: + assertEquals("VARCHAR", typeName); + assertEquals("java.lang.String", className); + break; + case Types.SMALLINT: + assertEquals("SMALLINT", typeName); + assertEquals("java.lang.Short", className); + break; + case Types.TIMESTAMP: + assertEquals("TIMESTAMP", typeName); + assertEquals("java.sql.Timestamp", className); + break; + case Types.DECIMAL: + assertEquals("DECIMAL", typeName); + assertEquals("java.math.BigDecimal", className); + 
break; + default: + } + } + if (precision != null) { + int p = meta.getPrecision(i + 1); + if (precision[i] != p) { + fail("column precision " + i + " is " + p + " not " + precision[i]); + } + } + if (scale != null) { + int s = meta.getScale(i + 1); + if (scale[i] != s) { + fail("column scale " + i + " is " + s + " not " + scale[i]); + } + } + + } + } + + /** + * Check if a result set contains the expected data. + * The sort order is significant + * + * @param rs the result set + * @param data the expected data + * @throws AssertionError if there is a mismatch + */ + protected void assertResultSetOrdered(ResultSet rs, String[][] data) + throws SQLException { + assertResultSet(true, rs, data); + } + + /** + * Check if a result set contains the expected data. + * + * @param ordered if the sort order is significant + * @param rs the result set + * @param data the expected data + * @throws AssertionError if there is a mismatch + */ + private void assertResultSet(boolean ordered, ResultSet rs, String[][] data) + throws SQLException { + int len = rs.getMetaData().getColumnCount(); + int rows = data.length; + if (rows == 0) { + // special case: no rows + if (rs.next()) { + fail("testResultSet expected rowCount:" + rows + " got:0"); + } + } + int len2 = data[0].length; + if (len < len2) { + fail("testResultSet expected columnCount:" + len2 + " got:" + len); + } + for (int i = 0; i < rows; i++) { + if (!rs.next()) { + fail("testResultSet expected rowCount:" + rows + " got:" + i); + } + String[] row = getData(rs, len); + if (ordered) { + String[] good = data[i]; + if (!testRow(good, row, good.length)) { + fail("testResultSet row not equal, got:\n" + formatRow(row) + + "\n" + formatRow(good)); + } + } else { + boolean found = false; + for (int j = 0; j < rows; j++) { + String[] good = data[i]; + if (testRow(good, row, good.length)) { + found = true; + break; + } + } + if (!found) { + fail("testResultSet no match for row:" + formatRow(row)); + } + } + } + if (rs.next()) { + 
String[] row = getData(rs, len); + fail("testResultSet expected rowcount:" + rows + " got:>=" + + (rows + 1) + " data:" + formatRow(row)); + } + } + + private static boolean testRow(String[] a, String[] b, int len) { + for (int i = 0; i < len; i++) { + String sa = a[i]; + String sb = b[i]; + if (sa == null || sb == null) { + if (sa != sb) { + return false; + } + } else { + if (!sa.equals(sb)) { + return false; + } + } + } + return true; + } + + private static String[] getData(ResultSet rs, int len) throws SQLException { + String[] data = new String[len]; + for (int i = 0; i < len; i++) { + data[i] = rs.getString(i + 1); + // just check if it works + rs.getObject(i + 1); + } + return data; + } + + private static String formatRow(String[] row) { + String sb = ""; + for (String r : row) { + sb += "{" + r + "}"; + } + return "{" + sb + "}"; + } + + /** + * Simulate a database crash. This method will also close the database + * files, but the files are in a state as the power was switched off. It + * doesn't throw an exception. + * + * @param conn the database connection + */ + protected void crash(Connection conn) { + ((JdbcConnection) conn).setPowerOffCount(1); + try { + conn.createStatement().execute("SET WRITE_DELAY 0"); + conn.createStatement().execute("CREATE TABLE TEST_A(ID INT)"); + fail("should be crashed already"); + } catch (SQLException e) { + // expected + } + try { + conn.close(); + } catch (SQLException e) { + // ignore + } + } + + /** + * Read a string from the reader. This method reads until end of file. 
+     *
+     * @param reader the reader
+     * @return the string read
+     */
+    protected String readString(Reader reader) {
+        if (reader == null) {
+            return null;
+        }
+        StringBuilder buffer = new StringBuilder();
+        try {
+            while (true) {
+                int c = reader.read();
+                if (c == -1) {
+                    break;
+                }
+                buffer.append((char) c);
+            }
+            return buffer.toString();
+        } catch (Exception e) {
+            assertTrue(false);
+            return null;
+        }
+    }
+
+    /**
+     * Check that a given exception is not an unexpected 'general error'
+     * exception.
+     *
+     * @param e the error
+     */
+    public void assertKnownException(SQLException e) {
+        assertKnownException("", e);
+    }
+
+    /**
+     * Check that a given exception is not an unexpected 'general error'
+     * exception.
+     *
+     * @param message the message
+     * @param e the exception
+     */
+    protected void assertKnownException(String message, SQLException e) {
+        if (e != null && e.getSQLState().startsWith("HY000")) {
+            TestBase.logError("Unexpected General error " + message, e);
+        }
+    }
+
+    /**
+     * Check if two values are equal, and if not throw an exception.
+     *
+     * @param expected the expected value
+     * @param actual the actual value
+     * @throws AssertionError if the values are not equal
+     */
+    protected void assertEquals(Integer expected, Integer actual) {
+        if (expected == null || actual == null) {
+            if (expected != actual) {
+                assertEquals("" + expected, "" + actual);
+            }
+        } else {
+            assertEquals(expected.intValue(), actual.intValue());
+        }
+    }
+
+    /**
+     * Check if two databases contain the same meta data.
+     *
+     * @param stat1 the statement of the first database
+     * @param stat2 the statement of the second database
+     * @throws AssertionError if the databases don't match
+     */
+    protected void assertEqualDatabases(Statement stat1, Statement stat2)
+            throws SQLException {
+        ResultSet rs = stat1.executeQuery(
+                "select value from information_schema.settings " +
+                "where name='ANALYZE_AUTO'");
+        int analyzeAuto = rs.next() ?
rs.getInt(1) : 0;
+        if (analyzeAuto > 0) {
+            stat1.execute("analyze");
+            stat2.execute("analyze");
+        }
+        ResultSet rs1 = stat1.executeQuery("SCRIPT simple NOPASSWORDS");
+        ResultSet rs2 = stat2.executeQuery("SCRIPT simple NOPASSWORDS");
+        ArrayList<String> list1 = new ArrayList<>();
+        ArrayList<String> list2 = new ArrayList<>();
+        while (rs1.next()) {
+            String s1 = rs1.getString(1);
+            s1 = removeRowCount(s1);
+            if (!rs2.next()) {
+                fail("expected: " + s1);
+            }
+            String s2 = rs2.getString(1);
+            s2 = removeRowCount(s2);
+            if (!s1.equals(s2)) {
+                list1.add(s1);
+                list2.add(s2);
+            }
+        }
+        for (String s : list1) {
+            if (!list2.remove(s)) {
+                fail("only found in first: " + s + " remaining: " + list2);
+            }
+        }
+        assertEquals("remaining: " + list2, 0, list2.size());
+        assertFalse(rs2.next());
+    }
+
+    private static String removeRowCount(String scriptLine) {
+        int index = scriptLine.indexOf("+/-");
+        if (index >= 0) {
+            scriptLine = scriptLine.substring(index);
+        }
+        return scriptLine;
+    }
+
+    /**
+     * Create a new object of the calling class.
+     *
+     * @return the new test
+     */
+    public static TestBase createCaller() {
+        org.h2.Driver.load();
+        try {
+            return (TestBase) new SecurityManager() {
+                Class<?> clazz = getClassContext()[2];
+            }.clazz.getDeclaredConstructor().newInstance();
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * Get the classpath list used to execute java -cp ...
+     *
+     * @return the classpath list
+     */
+    protected String getClassPath() {
+        String cp = System.getProperty("surefire.test.class.path", null);
+        if (cp != null) {
+            return cp;
+        }
+        return System.getProperty("java.class.path");
+    }
+
+    /**
+     * Get the path to a java executable of the current process.
+     *
+     * @return the path to java
+     */
+    public static String getJVM() {
+        return System.getProperty("java.home") + File.separatorChar + "bin"
+                + File.separator + "java";
+    }
+
+    /**
+     * Use up almost all memory.
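The `createCaller` method above discovers the calling class with the classic anonymous-`SecurityManager` trick (`getClassContext()[2]`). Since Java 9, `StackWalker` offers the same capability without a `SecurityManager`; this is a hedged sketch of that alternative, not what the code above uses:

```java
import java.lang.StackWalker.Option;

public class CallerSketch {
    // Returns the class of the method that called callerClass().
    // RETAIN_CLASS_REFERENCE is required, otherwise getCallerClass()
    // throws an UnsupportedOperationException.
    static Class<?> callerClass() {
        return StackWalker.getInstance(Option.RETAIN_CLASS_REFERENCE)
                .getCallerClass();
    }

    public static void main(String[] args) {
        // called from main, so the caller is CallerSketch itself
        System.out.println(callerClass().getSimpleName());
    }
}
```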
+     *
+     * @param remainingKB the number of kilobytes that are not referenced
+     */
+    protected void eatMemory(int remainingKB) {
+        byte[] reserve = new byte[remainingKB * 1024];
+        // first, eat memory in 16 KB blocks, then eat in 16 byte blocks
+        for (int size = 16 * 1024; size > 0; size /= 1024) {
+            while (true) {
+                try {
+                    byte[] block = new byte[size];
+                    memory.add(block);
+                } catch (OutOfMemoryError e) {
+                    break;
+                }
+            }
+        }
+        // silly code - makes sure there are no warnings
+        reserve[0] = reserve[1];
+    }
+
+    /**
+     * Remove the hard reference to the memory.
+     */
+    protected void freeMemory() {
+        memory.clear();
+        for (int i = 0; i < 5; i++) {
+            System.gc();
+            try {
+                Thread.sleep(20);
+            } catch (InterruptedException ignore) {/**/}
+        }
+    }
+
+    /**
+     * Verify the next method call on the object will throw an exception.
+     *
+     * @param <T> the class of the object
+     * @param expectedExceptionClass the expected exception class to be thrown
+     * @param obj the object to wrap
+     * @return a proxy for the object
+     */
+    protected <T> T assertThrows(final Class<?> expectedExceptionClass,
+            final T obj) {
+        return assertThrows(new ResultVerifier() {
+            @Override
+            public boolean verify(Object returnValue, Throwable t, Method m,
+                    Object... args) {
+                if (t == null) {
+                    throw new AssertionError("Expected an exception of type " +
+                            expectedExceptionClass.getSimpleName() +
+                            " to be thrown, but the method returned " +
+                            returnValue +
+                            " for " + ProxyCodeGenerator.formatMethodCall(m, args));
+                }
+                if (!expectedExceptionClass.isAssignableFrom(t.getClass())) {
+                    AssertionError ae = new AssertionError(
+                            "Expected an exception of type\n" +
+                            expectedExceptionClass.getSimpleName() +
+                            " to be thrown, but the method under test " +
+                            "threw an exception of type\n" +
+                            t.getClass().getSimpleName() +
+                            " (see in the 'Caused by' for the exception " +
+                            "that was thrown) " +
+                            " for " + ProxyCodeGenerator.
+                            formatMethodCall(m, args));
+                    ae.initCause(t);
+                    throw ae;
+                }
+                return false;
+            }
+        }, obj);
+    }
+
+    /**
+     * Verify the next method call on the object will throw an exception.
+     *
+     * @param <T> the class of the object
+     * @param expectedErrorCode the expected error code
+     * @param obj the object to wrap
+     * @return a proxy for the object
+     */
+    protected <T> T assertThrows(final int expectedErrorCode, final T obj) {
+        return assertThrows(new ResultVerifier() {
+            @Override
+            public boolean verify(Object returnValue, Throwable t, Method m,
+                    Object... args) {
+                int errorCode;
+                if (t instanceof DbException) {
+                    errorCode = ((DbException) t).getErrorCode();
+                } else if (t instanceof SQLException) {
+                    errorCode = ((SQLException) t).getErrorCode();
+                } else {
+                    errorCode = 0;
+                }
+                if (errorCode != expectedErrorCode) {
+                    AssertionError ae = new AssertionError(
+                            "Expected an SQLException or DbException with error code "
+                            + expectedErrorCode
+                            + ", but got a " + (t == null ? "null" :
+                            t.getClass().getName() + " exception "
+                            + " with error code " + errorCode));
+                    ae.initCause(t);
+                    throw ae;
+                }
+                return false;
+            }
+        }, obj);
+    }
+
+    /**
+     * Verify the next method call on the object will throw an exception.
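The `assertThrows` overloads above all funnel into a dynamic proxy that intercepts the next method call and hands the outcome to a `ResultVerifier`. The idea can be reduced to a small interface-only sketch with a plain JDK `Proxy` (the helper name `expectThrows` is illustrative; the real code additionally generates subclass proxies for non-interface types and returns primitive defaults after a verified throw):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxySketch {
    // Wrap an object so that the NEXT call through the proxy must throw
    // an exception of the expected type; the expected exception is then
    // swallowed and null is returned (works for reference return types).
    @SuppressWarnings("unchecked")
    static <T> T expectThrows(Class<? extends Throwable> expected, T obj) {
        return (T) Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                (proxy, method, args) -> {
                    try {
                        Object ret = method.invoke(obj, args);
                        throw new AssertionError("no exception, returned: " + ret);
                    } catch (InvocationTargetException e) {
                        Throwable t = e.getTargetException();
                        if (!expected.isAssignableFrom(t.getClass())) {
                            throw new AssertionError("wrong exception: " + t);
                        }
                        return null; // swallow the expected exception
                    }
                });
    }

    public static void main(String[] args) {
        List<String> empty = new ArrayList<>();
        // get(0) on an empty list throws IndexOutOfBoundsException: passes
        expectThrows(IndexOutOfBoundsException.class, empty).get(0);
    }
}
```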
+     *
+     * @param <T> the class of the object
+     * @param verifier the result verifier to call
+     * @param obj the object to wrap
+     * @return a proxy for the object
+     */
+    @SuppressWarnings("unchecked")
+    protected <T> T assertThrows(final ResultVerifier verifier, final T obj) {
+        Class<?> c = obj.getClass();
+        InvocationHandler ih = new InvocationHandler() {
+            private Exception called = new Exception("No method called");
+            @Override
+            protected void finalize() {
+                if (called != null) {
+                    called.printStackTrace(System.err);
+                }
+            }
+            @Override
+            public Object invoke(Object proxy, Method method, Object[] args)
+                    throws Exception {
+                try {
+                    called = null;
+                    Object ret = method.invoke(obj, args);
+                    verifier.verify(ret, null, method, args);
+                    return ret;
+                } catch (InvocationTargetException e) {
+                    verifier.verify(null, e.getTargetException(), method, args);
+                    Class<?> retClass = method.getReturnType();
+                    if (!retClass.isPrimitive()) {
+                        return null;
+                    }
+                    if (retClass == boolean.class) {
+                        return false;
+                    } else if (retClass == byte.class) {
+                        return (byte) 0;
+                    } else if (retClass == char.class) {
+                        return (char) 0;
+                    } else if (retClass == short.class) {
+                        return (short) 0;
+                    } else if (retClass == int.class) {
+                        return 0;
+                    } else if (retClass == long.class) {
+                        return 0L;
+                    } else if (retClass == float.class) {
+                        return 0F;
+                    } else if (retClass == double.class) {
+                        return 0D;
+                    }
+                    return null;
+                }
+            }
+        };
+        if (!ProxyCodeGenerator.isGenerated(c)) {
+            Class<?>[] interfaces = c.getInterfaces();
+            if (Modifier.isFinal(c.getModifiers())
+                    || (interfaces.length > 0 && getClass() != c)) {
+                // interface class proxies
+                if (interfaces.length == 0) {
+                    throw new RuntimeException("Can not create a proxy for the class " +
+                            c.getSimpleName() +
+                            " because it doesn't implement any interfaces and is final");
+                }
+                return (T) Proxy.newProxyInstance(c.getClassLoader(), interfaces, ih);
+            }
+        }
+        try {
+            Class<?> pc = ProxyCodeGenerator.getClassProxy(c);
+            Constructor<?> cons = pc
+                    .getConstructor(new Class<?>[] { InvocationHandler.class });
+            return (T) cons.newInstance(new Object[] { ih });
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * Create a proxy class that extends the given class.
+     *
+     * @param clazz the class
+     */
+    protected void createClassProxy(Class<?> clazz) {
+        try {
+            ProxyCodeGenerator.getClassProxy(clazz);
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * Construct a stream of 20 KB that fails while reading with the provided
+     * exception.
+     *
+     * @param e the exception
+     * @return the stream
+     */
+    public static ByteArrayInputStream createFailingStream(final Exception e) {
+        return new ByteArrayInputStream(new byte[20 * 1024]) {
+            @Override
+            public int read(byte[] buffer, int off, int len) {
+                if (this.pos > 10 * 1024) {
+                    throwException(e);
+                }
+                return super.read(buffer, off, len);
+            }
+        };
+    }
+
+    /**
+     * Throw a checked exception, without having to declare the method as
+     * throwing a checked exception.
+     *
+     * @param e the exception to throw
+     */
+    public static void throwException(Throwable e) {
+        TestBase.<RuntimeException>throwThis(e);
+    }
+
+    @SuppressWarnings("unchecked")
+    private static <E extends Throwable> void throwThis(Throwable e) throws E {
+        throw (E) e;
+    }
+
+    /**
+     * Get the name of the test.
+     *
+     * @return the name of the test class
+     */
+    public String getTestName() {
+        return getClass().getSimpleName();
+    }
+
+    /**
+     * Build a child process.
+     *
+     * @param name the name
+     * @param childClass the class
+     * @param jvmArgs the argument list
+     * @return the process builder
+     */
+    public ProcessBuilder buildChild(String name, Class<? extends TestBase> childClass,
+            String...
jvmArgs) {
+        List<String> args = new ArrayList<>(16);
+        args.add(getJVM());
+        Collections.addAll(args, jvmArgs);
+        Collections.addAll(args, "-cp", getClassPath(),
+                SelfDestructor.getPropertyString(1),
+                childClass.getName(),
+                "-url", getURL(name, true),
+                "-user", getUser(),
+                "-password", getPassword());
+        ProcessBuilder processBuilder = new ProcessBuilder()
+//                .redirectError(ProcessBuilder.Redirect.INHERIT)
+                .redirectErrorStream(true)
+                .redirectOutput(ProcessBuilder.Redirect.INHERIT)
+                .command(args);
+        return processBuilder;
+    }
+
+    public abstract static class Child extends TestBase {
+        private String url;
+        private String user;
+        private String password;
+
+        public Child(String... args) {
+            for (int i = 0; i < args.length; i++) {
+                if ("-url".equals(args[i])) {
+                    url = args[++i];
+                } else if ("-user".equals(args[i])) {
+                    user = args[++i];
+                } else if ("-password".equals(args[i])) {
+                    password = args[++i];
+                }
+            }
+            SelfDestructor.startCountdown(60);
+        }
+
+        public Connection getConnection() throws SQLException {
+            return getConnection(url, user, password);
+        }
+
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/ap/TestAnnotationProcessor.java b/modules/h2/src/test/java/org/h2/test/ap/TestAnnotationProcessor.java
new file mode 100644
index 0000000000000..fedf3f3e4485a
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/ap/TestAnnotationProcessor.java
@@ -0,0 +1,83 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.ap;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+
+import javax.annotation.processing.AbstractProcessor;
+import javax.annotation.processing.RoundEnvironment;
+import javax.lang.model.SourceVersion;
+import javax.lang.model.element.TypeElement;
+import javax.tools.Diagnostic;
+
+/**
+ * An annotation processor for testing.
+ */
+public class TestAnnotationProcessor extends AbstractProcessor {
+
+    /**
+     * The message key.
+     */
+    public static final String MESSAGES_KEY =
+            TestAnnotationProcessor.class.getName() + "-messages";
+
+    @Override
+    public Set<String> getSupportedAnnotationTypes() {
+
+        for (OutputMessage outputMessage : findMessages()) {
+            processingEnv.getMessager().printMessage(outputMessage.kind, outputMessage.message);
+        }
+
+        return Collections.emptySet();
+    }
+
+    private static List<OutputMessage> findMessages() {
+        final String messagesStr = System.getProperty(MESSAGES_KEY);
+        if (messagesStr == null || messagesStr.isEmpty()) {
+            return Collections.emptyList();
+        }
+        List<OutputMessage> outputMessages = new ArrayList<>();
+
+        for (String msg : messagesStr.split("\\|")) {
+            String[] split = msg.split(",");
+            if (split.length == 2) {
+                outputMessages.add(new OutputMessage(Diagnostic.Kind.valueOf(split[0]), split[1]));
+            } else {
+                throw new IllegalStateException(
+                        "Unable to parse messages definition for: '"
+                                + messagesStr + "'");
+            }
+        }
+
+        return outputMessages;
+    }
+
+    @Override
+    public SourceVersion getSupportedSourceVersion() {
+        return SourceVersion.latest();
+    }
+
+    @Override
+    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
+        return false;
+    }
+
+    /**
+     * An output message.
+     */
+    private static class OutputMessage {
+        public final Diagnostic.Kind kind;
+        public final String message;
+
+        OutputMessage(Diagnostic.Kind kind, String message) {
+            this.kind = kind;
+            this.message = message;
+        }
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/ap/package.html b/modules/h2/src/test/java/org/h2/test/ap/package.html
new file mode 100644
index 0000000000000..1e9cfc5daffe6
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/ap/package.html
@@ -0,0 +1,14 @@
+
+
+
+
+Javadoc package documentation

    + +An annotation processor used for testing. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/bench/Bench.java b/modules/h2/src/test/java/org/h2/test/bench/Bench.java new file mode 100644 index 0000000000000..301b2a49e146b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/Bench.java @@ -0,0 +1,36 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.sql.SQLException; + +/** + * The interface for benchmark tests. + */ +public interface Bench { + + /** + * Initialize the database. This includes creating tables and inserting + * data. + * + * @param db the database object + * @param size the amount of data + */ + void init(Database db, int size) throws SQLException; + + /** + * Run the test. + */ + void runTest() throws Exception; + + /** + * Get the name of the test. + * + * @return the test name + */ + String getName(); + +} diff --git a/modules/h2/src/test/java/org/h2/test/bench/BenchA.java b/modules/h2/src/test/java/org/h2/test/bench/BenchA.java new file mode 100644 index 0000000000000..8c8b79a61cc7b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/BenchA.java @@ -0,0 +1,201 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.math.BigDecimal; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.util.Random; + +/** + * This test is similar to the TPC-A test of the Transaction Processing Council + * (TPC). However, only one connection and one thread is used. + *

    + * See also: http://www.tpc.org/tpca/spec/tpca_current.pdf + */ +public class BenchA implements Bench { + + private static final String FILLER = "abcdefghijklmnopqrstuvwxyz"; + private static final int DELTA = 10000; + + private Database database; + + private int branches; + private int tellers; + private int accounts; + private int transactions; + + @Override + public void init(Database db, int size) throws SQLException { + this.database = db; + transactions = size * 6; + + int scale = 2; + accounts = size * 30; + tellers = Math.max(accounts / 10, 1); + branches = Math.max(tellers / 10, 1); + + db.start(this, "Init"); + + db.openConnection(); + + db.dropTable("BRANCHES"); + db.dropTable("TELLERS"); + db.dropTable("ACCOUNTS"); + db.dropTable("HISTORY"); + + String[] create = { + "CREATE TABLE BRANCHES(BID INT NOT NULL PRIMARY KEY, " + + "BBALANCE DECIMAL(15,2), FILLER VARCHAR(88))", + "CREATE TABLE TELLERS(TID INT NOT NULL PRIMARY KEY, " + + "BID INT, TBALANCE DECIMAL(15,2), FILLER VARCHAR(84))", + "CREATE TABLE ACCOUNTS(AID INT NOT NULL PRIMARY KEY, " + + "BID INT, ABALANCE DECIMAL(15,2), FILLER VARCHAR(84))", + "CREATE TABLE HISTORY(TID INT, " + + "BID INT, AID INT, DELTA DECIMAL(15,2), HTIME DATETIME, " + + "FILLER VARCHAR(40))" }; + + for (String sql : create) { + db.update(sql); + } + + PreparedStatement prep; + db.setAutoCommit(false); + int commitEvery = 1000; + prep = db.prepare( + "INSERT INTO BRANCHES(BID, BBALANCE, FILLER) " + + "VALUES(?, 10000.00, '" + FILLER + "')"); + for (int i = 0; i < branches * scale; i++) { + prep.setInt(1, i); + db.update(prep, "insertBranches"); + if (i % commitEvery == 0) { + db.commit(); + } + } + db.commit(); + prep = db.prepare( + "INSERT INTO TELLERS(TID, BID, TBALANCE, FILLER) " + + "VALUES(?, ?, 10000.00, '" + FILLER + "')"); + for (int i = 0; i < tellers * scale; i++) { + prep.setInt(1, i); + prep.setInt(2, i / tellers); + db.update(prep, "insertTellers"); + if (i % commitEvery == 0) { + db.commit(); + } + } + 
db.commit(); + int len = accounts * scale; + prep = db.prepare( + "INSERT INTO ACCOUNTS(AID, BID, ABALANCE, FILLER) " + + "VALUES(?, ?, 10000.00, '" + FILLER + "')"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.setInt(2, i / accounts); + db.update(prep, "insertAccounts"); + if (i % commitEvery == 0) { + db.commit(); + } + } + db.commit(); + db.closeConnection(); + db.end(); + +// db.start(this, "Open/Close"); +// db.openConnection(); +// db.closeConnection(); +// db.end(); + } + + @Override + public void runTest() throws SQLException { + + database.start(this, "Transactions"); + database.openConnection(); + processTransactions(); + database.closeConnection(); + database.end(); + + database.openConnection(); + processTransactions(); + database.logMemory(this, "Memory Usage"); + database.closeConnection(); + + } + + private void processTransactions() throws SQLException { + Random random = database.getRandom(); + int branch = random.nextInt(branches); + int teller = random.nextInt(tellers); + + PreparedStatement updateAccount = database.prepare( + "UPDATE ACCOUNTS SET ABALANCE=ABALANCE+? WHERE AID=?"); + PreparedStatement selectBalance = database.prepare( + "SELECT ABALANCE FROM ACCOUNTS WHERE AID=?"); + PreparedStatement updateTeller = database.prepare( + "UPDATE TELLERS SET TBALANCE=TBALANCE+? WHERE TID=?"); + PreparedStatement updateBranch = database.prepare( + "UPDATE BRANCHES SET BBALANCE=BBALANCE+? WHERE BID=?"); + PreparedStatement insertHistory = database.prepare( + "INSERT INTO HISTORY(AID, TID, BID, DELTA, HTIME, FILLER) " + + "VALUES(?, ?, ?, ?, ?, ?)"); + int accountsPerBranch = accounts / branches; + database.setAutoCommit(false); + + for (int i = 0; i < transactions; i++) { + int account; + if (random.nextInt(100) < 85) { + account = random.nextInt(accountsPerBranch) + branch * accountsPerBranch; + } else { + account = random.nextInt(accounts); + } + int max = BenchA.DELTA; + // delta: -max .. 
+max + + BigDecimal delta = BigDecimal.valueOf(random.nextInt(max * 2) - max); + long current = System.currentTimeMillis(); + + updateAccount.setBigDecimal(1, delta); + updateAccount.setInt(2, account); + database.update(updateAccount, "updateAccount"); + + updateTeller.setBigDecimal(1, delta); + updateTeller.setInt(2, teller); + database.update(updateTeller, "updateTeller"); + + updateBranch.setBigDecimal(1, delta); + updateBranch.setInt(2, branch); + database.update(updateBranch, "updateBranch"); + + selectBalance.setInt(1, account); + database.queryReadResult(selectBalance); + + insertHistory.setInt(1, account); + insertHistory.setInt(2, teller); + insertHistory.setInt(3, branch); + insertHistory.setBigDecimal(4, delta); + // TODO convert: should be able to convert date to timestamp + // (by using 0 for remaining fields) + // insertHistory.setDate(5, new java.sql.Date(current)); + insertHistory.setTimestamp(5, new java.sql.Timestamp(current)); + insertHistory.setString(6, BenchA.FILLER); + database.update(insertHistory, "insertHistory"); + + database.commit(); + } + updateAccount.close(); + selectBalance.close(); + updateTeller.close(); + updateBranch.close(); + insertHistory.close(); + } + + @Override + public String getName() { + return "BenchA"; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/bench/BenchB.java b/modules/h2/src/test/java/org/h2/test/bench/BenchB.java new file mode 100644 index 0000000000000..83091cbd86870 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/BenchB.java @@ -0,0 +1,241 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.Random; + +/** + * This test is similar to the TPC-B test of the Transaction Processing Council + * (TPC). Multiple threads are used (one thread per connection). Referential + * integrity is not implemented. + *

    + * See also http://www.tpc.org/tpcb + */ +public class BenchB implements Bench, Runnable { + + private static final int SCALE = 4; + private static final int BRANCHES = 1; + private static final int TELLERS = 10; + private static final int ACCOUNTS = 100000; + + private int threadCount = 10; + + // master data + private Database database; + private int transactionPerClient; + + // client data + private BenchB master; + private Connection conn; + private PreparedStatement updateAccount; + private PreparedStatement selectAccount; + private PreparedStatement updateTeller; + private PreparedStatement updateBranch; + private PreparedStatement insertHistory; + private Random random; + + public BenchB() { + // nothing to do + } + + private BenchB(BenchB master, int seed) throws SQLException { + this.master = master; + random = new Random(seed); + conn = master.database.openNewConnection(); + conn.setAutoCommit(false); + updateAccount = conn.prepareStatement( + "UPDATE ACCOUNTS SET ABALANCE=ABALANCE+? WHERE AID=?"); + selectAccount = conn.prepareStatement( + "SELECT ABALANCE FROM ACCOUNTS WHERE AID=?"); + updateTeller = conn.prepareStatement( + "UPDATE TELLERS SET TBALANCE=TBALANCE+? WHERE TID=?"); + updateBranch = conn.prepareStatement( + "UPDATE BRANCHES SET BBALANCE=BBALANCE+? 
WHERE BID=?"); + insertHistory = conn.prepareStatement( + "INSERT INTO HISTORY(TID, BID, AID, DELTA) VALUES(?, ?, ?, ?)"); + } + + @Override + public void init(Database db, int size) throws SQLException { + this.database = db; + this.transactionPerClient = getTransactionsPerClient(size); + + db.start(this, "Init"); + db.openConnection(); + db.dropTable("BRANCHES"); + db.dropTable("TELLERS"); + db.dropTable("ACCOUNTS"); + db.dropTable("HISTORY"); + String[] create = { + "CREATE TABLE BRANCHES(" + + "BID INT NOT NULL PRIMARY KEY, " + + "BBALANCE INT, FILLER VARCHAR(88))", + "CREATE TABLE TELLERS(" + + "TID INT NOT NULL PRIMARY KEY, " + + "BID INT, TBALANCE INT, FILLER VARCHAR(84))", + "CREATE TABLE ACCOUNTS(" + + "AID INT NOT NULL PRIMARY KEY, " + + "BID INT, ABALANCE INT, FILLER VARCHAR(84))", + "CREATE TABLE HISTORY(" + + "TID INT, BID INT, AID INT, " + + "DELTA INT, TIME DATETIME, FILLER VARCHAR(22))" }; + for (String sql : create) { + db.update(sql); + } + PreparedStatement prep; + db.setAutoCommit(false); + int commitEvery = 1000; + prep = db.prepare( + "INSERT INTO BRANCHES(BID, BBALANCE) VALUES(?, 0)"); + for (int i = 0; i < BRANCHES * SCALE; i++) { + prep.setInt(1, i); + db.update(prep, "insertBranches"); + if (i % commitEvery == 0) { + db.commit(); + } + } + db.commit(); + prep = db.prepare( + "INSERT INTO TELLERS(TID, BID, TBALANCE) VALUES(?, ?, 0)"); + for (int i = 0; i < TELLERS * SCALE; i++) { + prep.setInt(1, i); + prep.setInt(2, i / TELLERS); + db.update(prep, "insertTellers"); + if (i % commitEvery == 0) { + db.commit(); + } + } + db.commit(); + int len = ACCOUNTS * SCALE; + prep = db.prepare( + "INSERT INTO ACCOUNTS(AID, BID, ABALANCE) VALUES(?, ?, 0)"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.setInt(2, i / ACCOUNTS); + db.update(prep, "insertAccounts"); + if (i % commitEvery == 0) { + db.commit(); + } + } + db.commit(); + db.closeConnection(); + db.end(); +// db.start(this, "Open/Close"); +// db.openConnection(); +// 
db.closeConnection();
+//        db.end();
+    }
+
+    /**
+     * Get the number of transactions per client.
+     *
+     * @param size test size
+     * @return the transactions per client
+     */
+    protected int getTransactionsPerClient(int size) {
+        return size / 8;
+    }
+
+    @Override
+    public void run() {
+        int accountsPerBranch = ACCOUNTS / BRANCHES;
+        for (int i = 0; i < master.transactionPerClient; i++) {
+            int branch = random.nextInt(BRANCHES);
+            int teller = random.nextInt(TELLERS);
+            int account;
+            if (random.nextInt(100) < 85) {
+                account = random.nextInt(accountsPerBranch) + branch * accountsPerBranch;
+            } else {
+                account = random.nextInt(ACCOUNTS);
+            }
+            int delta = random.nextInt(1000);
+            doOne(branch, teller, account, delta);
+        }
+        try {
+            conn.close();
+        } catch (SQLException e) {
+            // ignore
+        }
+    }
+
+    private void doOne(int branch, int teller, int account, int delta) {
+        try {
+            // UPDATE ACCOUNTS SET ABALANCE=ABALANCE+? WHERE AID=?
+            updateAccount.setInt(1, delta);
+            updateAccount.setInt(2, account);
+            master.database.update(updateAccount, "UpdateAccounts");
+            // note: update() already executes the statement; calling
+            // updateAccount.executeUpdate() again would apply the delta twice
+
+            // SELECT ABALANCE FROM ACCOUNTS WHERE AID=?
+            selectAccount.setInt(1, account);
+            ResultSet rs = master.database.query(selectAccount);
+            while (rs.next()) {
+                rs.getInt(1);
+            }
+
+            // UPDATE TELLERS SET TBALANCE=TBALANCE+? WHERE TID=?
+            updateTeller.setInt(1, delta);
+            updateTeller.setInt(2, teller);
+            master.database.update(updateTeller, "UpdateTeller");
+
+            // UPDATE BRANCHES SET BBALANCE=BBALANCE+? WHERE BID=?
+            updateBranch.setInt(1, delta);
+            updateBranch.setInt(2, branch);
+            master.database.update(updateBranch, "UpdateBranch");
+
+            // INSERT INTO HISTORY(TID, BID, AID, DELTA) VALUES(?, ?, ?, ?)
+ insertHistory.setInt(1, teller); + insertHistory.setInt(2, branch); + insertHistory.setInt(3, account); + insertHistory.setInt(4, delta); + master.database.update(insertHistory, "InsertHistory"); + conn.commit(); + } catch (SQLException e) { + e.printStackTrace(); + } + } + + + @Override + public void runTest() throws Exception { + Database db = database; + db.start(this, "Transactions"); + db.openConnection(); + processTransactions(); + db.closeConnection(); + db.end(); + db.openConnection(); + processTransactions(); + db.logMemory(this, "Memory Usage"); + db.closeConnection(); + } + + private void processTransactions() throws Exception { + Thread[] threads = new Thread[threadCount]; + for (int i = 0; i < threadCount; i++) { + threads[i] = new Thread(new BenchB(this, i), "BenchB-" + i); + } + for (Thread t : threads) { + t.start(); + } + for (Thread t : threads) { + t.join(); + } + } + + @Override + public String getName() { + return "BenchB"; + } + + public void setThreadCount(int threadCount) { + this.threadCount = threadCount; + } +} diff --git a/modules/h2/src/test/java/org/h2/test/bench/BenchC.java b/modules/h2/src/test/java/org/h2/test/bench/BenchC.java new file mode 100644 index 0000000000000..9d46c4b8f7b34 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/BenchC.java @@ -0,0 +1,569 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.math.BigDecimal; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Timestamp; +import java.sql.Types; + +/** + * This test is similar to the TPC-C test of the Transaction Processing Council + * (TPC). Only one connection and one thread is used. Referential integrity is + * not implemented. + *

    + * See also http://www.tpc.org + */ +public class BenchC implements Bench { + + private static final int COMMIT_EVERY = 1000; + + private static final String[] TABLES = { "WAREHOUSE", "DISTRICT", + "CUSTOMER", "HISTORY", "ORDERS", "NEW_ORDER", "ITEM", "STOCK", + "ORDER_LINE", "RESULTS" }; + + private static final String[] CREATE_SQL = { + "CREATE TABLE WAREHOUSE(\n" + + " W_ID INT NOT NULL PRIMARY KEY,\n" + + " W_NAME VARCHAR(10),\n" + + " W_STREET_1 VARCHAR(20),\n" + + " W_STREET_2 VARCHAR(20),\n" + + " W_CITY VARCHAR(20),\n" + + " W_STATE CHAR(2),\n" + + " W_ZIP CHAR(9),\n" + + " W_TAX DECIMAL(4, 4),\n" + + " W_YTD DECIMAL(12, 2))", + "CREATE TABLE DISTRICT(\n" + + " D_ID INT NOT NULL,\n" + + " D_W_ID INT NOT NULL,\n" + + " D_NAME VARCHAR(10),\n" + + " D_STREET_1 VARCHAR(20),\n" + + " D_STREET_2 VARCHAR(20),\n" + + " D_CITY VARCHAR(20),\n" + + " D_STATE CHAR(2),\n" + + " D_ZIP CHAR(9),\n" + + " D_TAX DECIMAL(4, 4),\n" + + " D_YTD DECIMAL(12, 2),\n" + + " D_NEXT_O_ID INT,\n" + + " PRIMARY KEY (D_ID, D_W_ID))", + // + " FOREIGN KEY (D_W_ID)\n" + // + " REFERENCES WAREHOUSE(W_ID))", + "CREATE TABLE CUSTOMER(\n" + + " C_ID INT NOT NULL,\n" + + " C_D_ID INT NOT NULL,\n" + + " C_W_ID INT NOT NULL,\n" + + " C_FIRST VARCHAR(16),\n" + + " C_MIDDLE CHAR(2),\n" + + " C_LAST VARCHAR(16),\n" + + " C_STREET_1 VARCHAR(20),\n" + + " C_STREET_2 VARCHAR(20),\n" + + " C_CITY VARCHAR(20),\n" + + " C_STATE CHAR(2),\n" + + " C_ZIP CHAR(9),\n" + + " C_PHONE CHAR(16),\n" + + " C_SINCE TIMESTAMP,\n" + + " C_CREDIT CHAR(2),\n" + + " C_CREDIT_LIM DECIMAL(12, 2),\n" + + " C_DISCOUNT DECIMAL(4, 4),\n" + + " C_BALANCE DECIMAL(12, 2),\n" + + " C_YTD_PAYMENT DECIMAL(12, 2),\n" + + " C_PAYMENT_CNT DECIMAL(4),\n" + + " C_DELIVERY_CNT DECIMAL(4),\n" + + " C_DATA VARCHAR(500),\n" + + " PRIMARY KEY (C_W_ID, C_D_ID, C_ID))", + // + " FOREIGN KEY (C_W_ID, C_D_ID)\n" + // + " REFERENCES DISTRICT(D_W_ID, D_ID))", + "CREATE INDEX CUSTOMER_NAME ON CUSTOMER(C_LAST, C_D_ID, C_W_ID)", + "CREATE TABLE 
HISTORY(\n" + + " H_C_ID INT,\n" + + " H_C_D_ID INT,\n" + + " H_C_W_ID INT,\n" + + " H_D_ID INT,\n" + + " H_W_ID INT,\n" + + " H_DATE TIMESTAMP,\n" + + " H_AMOUNT DECIMAL(6, 2),\n" + + " H_DATA VARCHAR(24))", + // + " FOREIGN KEY(H_C_W_ID, H_C_D_ID, H_C_ID)\n" + // + " REFERENCES CUSTOMER(C_W_ID, C_D_ID, C_ID),\n" + // + " FOREIGN KEY(H_W_ID, H_D_ID)\n" + // + " REFERENCES DISTRICT(D_W_ID, D_ID))", + "CREATE TABLE ORDERS(\n" + + " O_ID INT NOT NULL,\n" + + " O_D_ID INT NOT NULL,\n" + + " O_W_ID INT NOT NULL,\n" + + " O_C_ID INT,\n" + + " O_ENTRY_D TIMESTAMP,\n" + + " O_CARRIER_ID INT,\n" + + " O_OL_CNT INT,\n" + + " O_ALL_LOCAL DECIMAL(1),\n" + + " PRIMARY KEY(O_W_ID, O_D_ID, O_ID))", + // + " FOREIGN KEY(O_W_ID, O_D_ID, O_C_ID)\n" + // + " REFERENCES CUSTOMER(C_W_ID, C_D_ID, C_ID))", + "CREATE INDEX ORDERS_OID ON ORDERS(O_ID)", + "CREATE TABLE NEW_ORDER(\n" + + " NO_O_ID INT NOT NULL,\n" + + " NO_D_ID INT NOT NULL,\n" + + " NO_W_ID INT NOT NULL,\n" + + " PRIMARY KEY(NO_W_ID, NO_D_ID, NO_O_ID))", + // + " FOREIGN KEY(NO_W_ID, NO_D_ID, NO_O_ID)\n" + // + " REFERENCES ORDER(O_W_ID, O_D_ID, O_ID))", + "CREATE TABLE ITEM(\n" + + " I_ID INT NOT NULL,\n" + + " I_IM_ID INT,\n" + + " I_NAME VARCHAR(24),\n" + + " I_PRICE DECIMAL(5, 2),\n" + + " I_DATA VARCHAR(50),\n" + + " PRIMARY KEY(I_ID))", + "CREATE TABLE STOCK(\n" + + " S_I_ID INT NOT NULL,\n" + + " S_W_ID INT NOT NULL,\n" + + " S_QUANTITY DECIMAL(4),\n" + + " S_DIST_01 CHAR(24),\n" + + " S_DIST_02 CHAR(24),\n" + + " S_DIST_03 CHAR(24),\n" + + " S_DIST_04 CHAR(24),\n" + + " S_DIST_05 CHAR(24),\n" + + " S_DIST_06 CHAR(24),\n" + + " S_DIST_07 CHAR(24),\n" + + " S_DIST_08 CHAR(24),\n" + + " S_DIST_09 CHAR(24),\n" + + " S_DIST_10 CHAR(24),\n" + + " S_YTD DECIMAL(8),\n" + + " S_ORDER_CNT DECIMAL(4),\n" + + " S_REMOTE_CNT DECIMAL(4),\n" + + " S_DATA VARCHAR(50),\n" + + " PRIMARY KEY(S_W_ID, S_I_ID))", + // + " FOREIGN KEY(S_W_ID)\n" + // + " REFERENCES WAREHOUSE(W_ID),\n" + // + " FOREIGN KEY(S_I_ID)\n" + " REFERENCES 
ITEM(I_ID))", + "CREATE TABLE ORDER_LINE(\n" + + " OL_O_ID INT NOT NULL,\n" + + " OL_D_ID INT NOT NULL,\n" + + " OL_W_ID INT NOT NULL,\n" + + " OL_NUMBER INT NOT NULL,\n" + + " OL_I_ID INT,\n" + + " OL_SUPPLY_W_ID INT,\n" + + " OL_DELIVERY_D TIMESTAMP,\n" + + " OL_QUANTITY DECIMAL(2),\n" + + " OL_AMOUNT DECIMAL(6, 2),\n" + + " OL_DIST_INFO CHAR(24),\n" + + " PRIMARY KEY (OL_W_ID, OL_D_ID, OL_O_ID, OL_NUMBER))", + // + " FOREIGN KEY(OL_W_ID, OL_D_ID, OL_O_ID)\n" + // + " REFERENCES ORDER(O_W_ID, O_D_ID, O_ID),\n" + // + " FOREIGN KEY(OL_SUPPLY_W_ID, OL_I_ID)\n" + // + " REFERENCES STOCK(S_W_ID, S_I_ID))", + "CREATE TABLE RESULTS(\n" + + " ID INT NOT NULL PRIMARY KEY,\n" + + " TERMINAL INT,\n" + + " OPERATION INT,\n" + + " RESPONSE_TIME INT,\n" + + " PROCESSING_TIME INT,\n" + + " KEYING_TIME INT,\n" + + " THINK_TIME INT,\n" + + " SUCCESSFUL INT,\n" + + " NOW TIMESTAMP)" }; + + int warehouses = 2; + int items = 10000; + int districtsPerWarehouse = 10; + int customersPerDistrict = 300; + + private Database database; + + private int ordersPerDistrict = 300; + + private BenchCRandom random; + private String action; + + @Override + public void init(Database db, int size) throws SQLException { + this.database = db; + + random = new BenchCRandom(); + + items = size * 10; + warehouses = 2; + districtsPerWarehouse = Math.max(1, size / 100); + customersPerDistrict = Math.max(1, size / 100); + ordersPerDistrict = Math.max(1, size / 1000); + + db.start(this, "Init"); + db.openConnection(); + load(); + db.commit(); + db.closeConnection(); + db.end(); + + // db.start(this, "Open/Close"); + // db.openConnection(); + // db.closeConnection(); + // db.end(); + + } + + private void load() throws SQLException { + for (String sql : TABLES) { + database.dropTable(sql); + } + for (String sql : CREATE_SQL) { + database.update(sql); + } + database.setAutoCommit(false); + loadItem(); + loadWarehouse(); + loadCustomer(); + loadOrder(); + database.commit(); + trace("Load done"); + } + + private 
void trace(String s) { + action = s; + } + + private void trace(int i, int max) { + database.trace(action, i, max); + } + + private void loadItem() throws SQLException { + trace("Loading item table"); + boolean[] original = random.getBoolean(items, items / 10); + PreparedStatement prep = database.prepare( + "INSERT INTO ITEM(I_ID, I_IM_ID, I_NAME, I_PRICE, I_DATA) " + + "VALUES(?, ?, ?, ?, ?)"); + for (int id = 1; id <= items; id++) { + String name = random.getString(14, 24); + BigDecimal price = random.getBigDecimal(random.getInt(100, 10000), 2); + String data = random.getString(26, 50); + if (original[id - 1]) { + data = random.replace(data, "original"); + } + prep.setInt(1, id); + prep.setInt(2, random.getInt(1, 10000)); + prep.setString(3, name); + prep.setBigDecimal(4, price); + prep.setString(5, data); + database.update(prep, "insertItem"); + trace(id, items); + if (id % COMMIT_EVERY == 0) { + database.commit(); + } + } + } + + private void loadWarehouse() throws SQLException { + trace("Loading warehouse table"); + PreparedStatement prep = database.prepare( + "INSERT INTO WAREHOUSE(W_ID, W_NAME, W_STREET_1, " + + "W_STREET_2, W_CITY, W_STATE, W_ZIP, W_TAX, W_YTD) " + + "VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?)"); + for (int id = 1; id <= warehouses; id++) { + String name = random.getString(6, 10); + String[] address = random.getAddress(); + String street1 = address[0]; + String street2 = address[1]; + String city = address[2]; + String state = address[3]; + String zip = address[4]; + BigDecimal tax = random.getBigDecimal(random.getInt(0, 2000), 4); + BigDecimal ytd = new BigDecimal("300000.00"); + prep.setInt(1, id); + prep.setString(2, name); + prep.setString(3, street1); + prep.setString(4, street2); + prep.setString(5, city); + prep.setString(6, state); + prep.setString(7, zip); + prep.setBigDecimal(8, tax); + prep.setBigDecimal(9, ytd); + database.update(prep, "insertWarehouse"); + loadStock(id); + loadDistrict(id); + if (id % COMMIT_EVERY == 0) { + 
database.commit(); + } + } + } + + private void loadCustomer() throws SQLException { + trace("Load customer table"); + int max = warehouses * districtsPerWarehouse; + int i = 0; + for (int id = 1; id <= warehouses; id++) { + for (int districtId = 1; districtId <= districtsPerWarehouse; districtId++) { + loadCustomerSub(districtId, id); + trace(i++, max); + if (i % COMMIT_EVERY == 0) { + database.commit(); + } + } + } + } + + private void loadCustomerSub(int dId, int wId) throws SQLException { + Timestamp timestamp = new Timestamp(System.currentTimeMillis()); + PreparedStatement prepCustomer = database.prepare( + "INSERT INTO CUSTOMER(C_ID, C_D_ID, C_W_ID, " + + "C_FIRST, C_MIDDLE, C_LAST, " + + "C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP, " + + "C_PHONE, C_SINCE, C_CREDIT, " + + "C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_DATA, " + + "C_YTD_PAYMENT, C_PAYMENT_CNT, C_DELIVERY_CNT) " + + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"); + PreparedStatement prepHistory = database.prepare( + "INSERT INTO HISTORY(H_C_ID, H_C_D_ID, H_C_W_ID, " + + "H_W_ID, H_D_ID, H_DATE, H_AMOUNT, H_DATA) " + + "VALUES (?, ?, ?, ?, ?, ?, ?, ?)"); + for (int cId = 1; cId <= customersPerDistrict; cId++) { + String first = random.getString(8, 16); + String middle = "OE"; + String last; + if (cId < 1000) { + last = random.getLastname(cId); + } else { + last = random.getLastname(random.getNonUniform(255, 0, 999)); + } + String[] address = random.getAddress(); + String street1 = address[0]; + String street2 = address[1]; + String city = address[2]; + String state = address[3]; + String zip = address[4]; + String phone = random.getNumberString(16, 16); + String credit; + if (random.getInt(0, 1) == 0) { + credit = "GC"; + } else { + credit = "BC"; + } + BigDecimal discount = random.getBigDecimal(random.getInt(0, 5000), 4); + BigDecimal balance = new BigDecimal("-10.00"); + BigDecimal creditLim = new BigDecimal("50000.00"); + String data = random.getString(300, 500); + 
BigDecimal ytdPayment = new BigDecimal("10.00"); + int paymentCnt = 1; + int deliveryCnt = 1; + prepCustomer.setInt(1, cId); + prepCustomer.setInt(2, dId); + prepCustomer.setInt(3, wId); + prepCustomer.setString(4, first); + prepCustomer.setString(5, middle); + prepCustomer.setString(6, last); + prepCustomer.setString(7, street1); + prepCustomer.setString(8, street2); + prepCustomer.setString(9, city); + prepCustomer.setString(10, state); + prepCustomer.setString(11, zip); + prepCustomer.setString(12, phone); + prepCustomer.setTimestamp(13, timestamp); + prepCustomer.setString(14, credit); + prepCustomer.setBigDecimal(15, creditLim); + prepCustomer.setBigDecimal(16, discount); + prepCustomer.setBigDecimal(17, balance); + prepCustomer.setString(18, data); + prepCustomer.setBigDecimal(19, ytdPayment); + prepCustomer.setInt(20, paymentCnt); + prepCustomer.setInt(21, deliveryCnt); + database.update(prepCustomer, "insertCustomer"); + BigDecimal amount = new BigDecimal("10.00"); + String hData = random.getString(12, 24); + prepHistory.setInt(1, cId); + prepHistory.setInt(2, dId); + prepHistory.setInt(3, wId); + prepHistory.setInt(4, wId); + prepHistory.setInt(5, dId); + prepHistory.setTimestamp(6, timestamp); + prepHistory.setBigDecimal(7, amount); + prepHistory.setString(8, hData); + database.update(prepHistory, "insertHistory"); + } + } + + private void loadOrder() throws SQLException { + trace("Loading order table"); + int max = warehouses * districtsPerWarehouse; + int i = 0; + for (int wId = 1; wId <= warehouses; wId++) { + for (int dId = 1; dId <= districtsPerWarehouse; dId++) { + loadOrderSub(dId, wId); + trace(i++, max); + } + } + } + + private void loadOrderSub(int dId, int wId) throws SQLException { + Timestamp timestamp = new Timestamp(System.currentTimeMillis()); + int[] orderid = random.getPermutation(ordersPerDistrict); + PreparedStatement prepOrder = database.prepare( + "INSERT INTO ORDERS(O_ID, O_C_ID, O_D_ID, O_W_ID, " + + "O_ENTRY_D, O_CARRIER_ID, 
O_OL_CNT, O_ALL_LOCAL) " + + "VALUES(?, ?, ?, ?, ?, ?, ?, 1)"); + PreparedStatement prepNewOrder = database.prepare( + "INSERT INTO NEW_ORDER (NO_O_ID, NO_D_ID, NO_W_ID) " + + "VALUES (?, ?, ?)"); + PreparedStatement prepLine = database.prepare( + "INSERT INTO ORDER_LINE(" + + "OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, " + + "OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, " + + "OL_DIST_INFO, OL_DELIVERY_D)" + + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, NULL)"); + for (int oId = 1, i = 0; oId <= ordersPerDistrict; oId++) { + int cId = orderid[oId - 1]; + int carrierId = random.getInt(1, 10); + int olCnt = random.getInt(5, 15); + prepOrder.setInt(1, oId); + prepOrder.setInt(2, cId); + prepOrder.setInt(3, dId); + prepOrder.setInt(4, wId); + prepOrder.setTimestamp(5, timestamp); + prepOrder.setInt(7, olCnt); + if (oId <= 2100) { + prepOrder.setInt(6, carrierId); + } else { + // the last 900 orders have not been delivered + prepOrder.setNull(6, Types.INTEGER); + prepNewOrder.setInt(1, oId); + prepNewOrder.setInt(2, dId); + prepNewOrder.setInt(3, wId); + database.update(prepNewOrder, "newNewOrder"); + } + database.update(prepOrder, "insertOrder"); + for (int ol = 1; ol <= olCnt; ol++) { + int id = random.getInt(1, items); + int supplyId = wId; + int quantity = 5; + String distInfo = random.getString(24); + BigDecimal amount; + if (oId < 2101) { + amount = random.getBigDecimal(0, 2); + } else { + amount = random.getBigDecimal(random.getInt(0, 1000000), 2); + } + prepLine.setInt(1, oId); + prepLine.setInt(2, dId); + prepLine.setInt(3, wId); + prepLine.setInt(4, ol); + prepLine.setInt(5, id); + prepLine.setInt(6, supplyId); + prepLine.setInt(7, quantity); + prepLine.setBigDecimal(8, amount); + prepLine.setString(9, distInfo); + database.update(prepLine, "insertOrderLine"); + if (i++ % COMMIT_EVERY == 0) { + database.commit(); + } + } + } + } + + private void loadStock(int wId) throws SQLException { + trace("Loading stock table (warehouse " + wId + ")"); + boolean[] original = 
random.getBoolean(items, items / 10); + PreparedStatement prep = database.prepare( + "INSERT INTO STOCK(S_I_ID, S_W_ID, S_QUANTITY, " + + "S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, " + + "S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10, " + + "S_DATA, S_YTD, S_ORDER_CNT, S_REMOTE_CNT) " + + "VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"); + for (int id = 1; id <= items; id++) { + int quantity = random.getInt(10, 100); + String dist01 = random.getString(24); + String dist02 = random.getString(24); + String dist03 = random.getString(24); + String dist04 = random.getString(24); + String dist05 = random.getString(24); + String dist06 = random.getString(24); + String dist07 = random.getString(24); + String dist08 = random.getString(24); + String dist09 = random.getString(24); + String dist10 = random.getString(24); + String data = random.getString(26, 50); + if (original[id - 1]) { + data = random.replace(data, "original"); + } + prep.setInt(1, id); + prep.setInt(2, wId); + prep.setInt(3, quantity); + prep.setString(4, dist01); + prep.setString(5, dist02); + prep.setString(6, dist03); + prep.setString(7, dist04); + prep.setString(8, dist05); + prep.setString(9, dist06); + prep.setString(10, dist07); + prep.setString(11, dist08); + prep.setString(12, dist09); + prep.setString(13, dist10); + prep.setString(14, data); + prep.setInt(15, 0); + prep.setInt(16, 0); + prep.setInt(17, 0); + database.update(prep, "insertStock"); + if (id % COMMIT_EVERY == 0) { + database.commit(); + } + trace(id, items); + } + } + + private void loadDistrict(int wId) throws SQLException { + BigDecimal ytd = new BigDecimal("300000.00"); + int nextId = 3001; + PreparedStatement prep = database.prepare( + "INSERT INTO DISTRICT(D_ID, D_W_ID, D_NAME, " + + "D_STREET_1, D_STREET_2, D_CITY, D_STATE, D_ZIP, " + + "D_TAX, D_YTD, D_NEXT_O_ID) " + + "VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"); + for (int dId = 1; dId <= districtsPerWarehouse; dId++) { + String name = 
random.getString(6, 10); + String[] address = random.getAddress(); + String street1 = address[0]; + String street2 = address[1]; + String city = address[2]; + String state = address[3]; + String zip = address[4]; + BigDecimal tax = random.getBigDecimal(random.getInt(0, 2000), 4); + prep.setInt(1, dId); + prep.setInt(2, wId); + prep.setString(3, name); + prep.setString(4, street1); + prep.setString(5, street2); + prep.setString(6, city); + prep.setString(7, state); + prep.setString(8, zip); + prep.setBigDecimal(9, tax); + prep.setBigDecimal(10, ytd); + prep.setInt(11, nextId); + database.update(prep, "insertDistrict"); + trace(dId, districtsPerWarehouse); + } + } + + @Override + public void runTest() throws SQLException { + database.start(this, "Transactions"); + database.openConnection(); + for (int i = 0; i < 70; i++) { + BenchCThread process = new BenchCThread(database, this, random, i); + process.process(); + } + database.closeConnection(); + database.end(); + + database.openConnection(); + BenchCThread process = new BenchCThread(database, this, random, 0); + process.process(); + database.logMemory(this, "Memory Usage"); + database.closeConnection(); + } + + @Override + public String getName() { + return "BenchC"; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/bench/BenchCRandom.java b/modules/h2/src/test/java/org/h2/test/bench/BenchCRandom.java new file mode 100644 index 0000000000000..958c053131004 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/BenchCRandom.java @@ -0,0 +1,179 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.math.BigDecimal; +import java.math.BigInteger; +import java.util.Random; + +/** + * The random data generator used for BenchC. 
+ */ +public class BenchCRandom { + + private final Random random = new Random(10); + + /** + * Get a non-uniform random integer value between min and max. + * + * @param a the bit mask + * @param min the minimum value + * @param max the maximum value + * @return the random value + */ + int getNonUniform(int a, int min, int max) { + int c = 0; + return (((getInt(0, a) | getInt(min, max)) + c) % (max - min + 1)) + + min; + } + + /** + * Get a random integer value between min and max. + * + * @param min the minimum value + * @param max the maximum value + * @return the random value + */ + int getInt(int min, int max) { + return max <= min ? min : (random.nextInt(max - min) + min); + } + + /** + * Generate a boolean array with this many items set to true (randomly + * distributed). + * + * @param length the size of the array + * @param trueCount the number of true elements + * @return the boolean array + */ + boolean[] getBoolean(int length, int trueCount) { + boolean[] data = new boolean[length]; + for (int i = 0, pos; i < trueCount; i++) { + do { + pos = getInt(0, length); + } while (data[pos]); + data[pos] = true; + } + return data; + } + + /** + * Replace a random part of the string with another text. + * + * @param text the original text + * @param replacement the replacement + * @return the patched string + */ + String replace(String text, String replacement) { + int pos = getInt(0, text.length() - replacement.length()); + StringBuilder buffer = new StringBuilder(text); + buffer.replace(pos, pos + 7, replacement); + return buffer.toString(); + } + + /** + * Get a random number string. + * + * @param min the minimum value + * @param max the maximum value + * @return the number string + */ + String getNumberString(int min, int max) { + int len = getInt(min, max); + char[] buff = new char[len]; + for (int i = 0; i < len; i++) { + buff[i] = (char) getInt('0', '9'); + } + return new String(buff); + } + + /** + * Get random address data. 
+ * + * @return the address + */ + String[] getAddress() { + String str1 = getString(10, 20); + String str2 = getString(10, 20); + String city = getString(10, 20); + String state = getString(2); + String zip = getNumberString(9, 9); + return new String[] { str1, str2, city, state, zip }; + } + + /** + * Get a random string. + * + * @param min the minimum size + * @param max the maximum size + * @return the string + */ + String getString(int min, int max) { + return getString(getInt(min, max)); + } + + /** + * Get a random string. + * + * @param len the size + * @return the string + */ + String getString(int len) { + char[] buff = new char[len]; + for (int i = 0; i < len; i++) { + buff[i] = (char) getInt('A', 'Z'); + } + return new String(buff); + } + + /** + * Generate a random permutation if the values 0 .. length. + * + * @param length the number of elements + * @return the random permutation + */ + int[] getPermutation(int length) { + int[] data = new int[length]; + for (int i = 0; i < length; i++) { + data[i] = i; + } + for (int i = 0; i < length; i++) { + int j = getInt(0, length); + int temp = data[i]; + data[i] = data[j]; + data[j] = temp; + } + return data; + } + + /** + * Create a big decimal value. 
+ * + * @param value the value + * @param scale the scale + * @return the big decimal object + */ + BigDecimal getBigDecimal(int value, int scale) { + return new BigDecimal(new BigInteger(String.valueOf(value)), scale); + } + + /** + * Generate a last name composed of three elements + * + * @param i the last name index + * @return the name + */ + String getLastname(int i) { + String[] n = { "BAR", "OUGHT", "ABLE", "PRI", "PRES", "ESE", "ANTI", + "CALLY", "ATION", "EING" }; + StringBuilder buff = new StringBuilder(); + buff.append(n[i / 100]); + buff.append(n[(i / 10) % 10]); + buff.append(n[i % 10]); + return buff.toString(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/bench/BenchCThread.java b/modules/h2/src/test/java/org/h2/test/bench/BenchCThread.java new file mode 100644 index 0000000000000..3112752523b6e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/BenchCThread.java @@ -0,0 +1,725 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.math.BigDecimal; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Timestamp; +import java.util.HashMap; + +/** + * This class implements the functionality of one thread of BenchC. 
+ */ +public class BenchCThread { + + private static final int OP_NEW_ORDER = 0, OP_PAYMENT = 1, + OP_ORDER_STATUS = 2, OP_DELIVERY = 3, + OP_STOCK_LEVEL = 4; + private static final BigDecimal ONE = new BigDecimal("1"); + + private final Database db; + private final int warehouseId; + private final int terminalId; + private final HashMap prepared = + new HashMap<>(); + private final BenchCRandom random; + private final BenchC bench; + + BenchCThread(Database db, BenchC bench, BenchCRandom random, int terminal) + throws SQLException { + this.db = db; + this.bench = bench; + this.terminalId = terminal; + db.setAutoCommit(false); + this.random = random; + warehouseId = random.getInt(1, bench.warehouses); + } + + /** + * Process the list of operations (a 'deck') in random order. + */ + void process() throws SQLException { + int[] deck = { OP_NEW_ORDER, OP_NEW_ORDER, OP_NEW_ORDER, + OP_NEW_ORDER, OP_NEW_ORDER, OP_NEW_ORDER, OP_NEW_ORDER, + OP_NEW_ORDER, OP_NEW_ORDER, OP_NEW_ORDER, OP_PAYMENT, + OP_PAYMENT, OP_PAYMENT, OP_PAYMENT, OP_PAYMENT, OP_PAYMENT, + OP_PAYMENT, OP_PAYMENT, OP_PAYMENT, OP_PAYMENT, + OP_ORDER_STATUS, OP_DELIVERY, OP_STOCK_LEVEL }; + int len = deck.length; + for (int i = 0; i < len; i++) { + int temp = deck[i]; + int j = random.getInt(0, len); + deck[i] = deck[j]; + deck[j] = temp; + } + for (int op : deck) { + switch (op) { + case OP_NEW_ORDER: + processNewOrder(); + break; + case OP_PAYMENT: + processPayment(); + break; + case OP_ORDER_STATUS: + processOrderStatus(); + break; + case OP_DELIVERY: + processDelivery(); + break; + case OP_STOCK_LEVEL: + processStockLevel(); + break; + default: + throw new AssertionError("op=" + op); + } + } + } + + private void processNewOrder() throws SQLException { + int dId = random.getInt(1, bench.districtsPerWarehouse); + int cId = random.getNonUniform(1023, 1, bench.customersPerDistrict); + int olCnt = random.getInt(5, 15); + boolean rollback = random.getInt(1, 100) == 1; + int[] supplyId = new int[olCnt]; + 
int[] itemId = new int[olCnt]; + int[] quantity = new int[olCnt]; + int allLocal = 1; + for (int i = 0; i < olCnt; i++) { + int w; + if (bench.warehouses > 1 && random.getInt(1, 100) == 1) { + do { + w = random.getInt(1, bench.warehouses); + } while (w != warehouseId); + allLocal = 0; + } else { + w = warehouseId; + } + supplyId[i] = w; + int item; + if (rollback && i == olCnt - 1) { + // unused order number + item = -1; + } else { + item = random.getNonUniform(8191, 1, bench.items); + } + itemId[i] = item; + quantity[i] = random.getInt(1, 10); + } + char[] bg = new char[olCnt]; + int[] stock = new int[olCnt]; + BigDecimal[] amt = new BigDecimal[olCnt]; + Timestamp datetime = new Timestamp(System.currentTimeMillis()); + PreparedStatement prep; + ResultSet rs; + + prep = prepare("UPDATE DISTRICT SET D_NEXT_O_ID=D_NEXT_O_ID+1 " + + "WHERE D_ID=? AND D_W_ID=?"); + prep.setInt(1, dId); + prep.setInt(2, warehouseId); + db.update(prep, "updateDistrict"); + prep = prepare("SELECT D_NEXT_O_ID, D_TAX FROM DISTRICT " + + "WHERE D_ID=? AND D_W_ID=?"); + prep.setInt(1, dId); + prep.setInt(2, warehouseId); + rs = db.query(prep); + rs.next(); + int oId = rs.getInt(1) - 1; + BigDecimal tax = rs.getBigDecimal(2); + rs.close(); + prep = prepare("SELECT C_DISCOUNT, C_LAST, C_CREDIT, W_TAX " + + "FROM CUSTOMER, WAREHOUSE " + + "WHERE C_ID=? AND W_ID=? 
AND C_W_ID=W_ID AND C_D_ID=?"); + prep.setInt(1, cId); + prep.setInt(2, warehouseId); + prep.setInt(3, dId); + rs = db.query(prep); + rs.next(); + BigDecimal discount = rs.getBigDecimal(1); + // c_last + rs.getString(2); + // c_credit + rs.getString(3); + BigDecimal wTax = rs.getBigDecimal(4); + rs.close(); + BigDecimal total = new BigDecimal("0"); + for (int number = 1; number <= olCnt; number++) { + int olId = itemId[number - 1]; + int olSupplyId = supplyId[number - 1]; + int olQuantity = quantity[number - 1]; + prep = prepare("SELECT I_PRICE, I_NAME, I_DATA " + + "FROM ITEM WHERE I_ID=?"); + prep.setInt(1, olId); + rs = db.query(prep); + if (!rs.next()) { + if (rollback) { + // item not found - correct behavior + db.rollback(); + return; + } + throw new SQLException("item not found: " + olId + " " + + olSupplyId); + } + BigDecimal price = rs.getBigDecimal(1); + // i_name + rs.getString(2); + String data = rs.getString(3); + rs.close(); + prep = prepare("SELECT S_QUANTITY, S_DATA, " + + "S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, " + + "S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 " + + "FROM STOCK WHERE S_I_ID=? AND S_W_ID=?"); + prep.setInt(1, olId); + prep.setInt(2, olSupplyId); + rs = db.query(prep); + if (!rs.next()) { + if (rollback) { + // item not found - correct behavior + db.rollback(); + return; + } + throw new SQLException("item not found: " + olId + " " + + olSupplyId); + } + int sQuantity = rs.getInt(1); + String sData = rs.getString(2); + String[] dist = new String[10]; + for (int i = 0; i < 10; i++) { + dist[i] = rs.getString(3 + i); + } + rs.close(); + String distInfo = dist[(dId - 1) % 10]; + stock[number - 1] = sQuantity; + if (data.contains("original") + && sData.contains("original")) { + bg[number - 1] = 'B'; + } else { + bg[number - 1] = 'G'; + } + if (sQuantity > olQuantity) { + sQuantity = sQuantity - olQuantity; + } else { + sQuantity = sQuantity - olQuantity + 91; + } + prep = prepare("UPDATE STOCK SET S_QUANTITY=? 
" + + "WHERE S_W_ID=? AND S_I_ID=?"); + prep.setInt(1, sQuantity); + prep.setInt(2, olSupplyId); + prep.setInt(3, olId); + db.update(prep, "updateStock"); + BigDecimal olAmount = new BigDecimal(olQuantity).multiply( + price).multiply(ONE.add(wTax).add(tax)).multiply( + ONE.subtract(discount)); + olAmount = olAmount.setScale(2, BigDecimal.ROUND_HALF_UP); + amt[number - 1] = olAmount; + total = total.add(olAmount); + prep = prepare("INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, " + + "OL_I_ID, OL_SUPPLY_W_ID, " + + "OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) " + + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"); + prep.setInt(1, oId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + prep.setInt(4, number); + prep.setInt(5, olId); + prep.setInt(6, olSupplyId); + prep.setInt(7, olQuantity); + prep.setBigDecimal(8, olAmount); + prep.setString(9, distInfo); + db.update(prep, "insertOrderLine"); + } + prep = prepare("INSERT INTO ORDERS (O_ID, O_D_ID, O_W_ID, O_C_ID, " + + "O_ENTRY_D, O_OL_CNT, O_ALL_LOCAL) " + + "VALUES (?, ?, ?, ?, ?, ?, ?)"); + prep.setInt(1, oId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + prep.setInt(4, cId); + prep.setTimestamp(5, datetime); + prep.setInt(6, olCnt); + prep.setInt(7, allLocal); + db.update(prep, "insertOrders"); + prep = prepare("INSERT INTO NEW_ORDER (NO_O_ID, NO_D_ID, NO_W_ID) " + + "VALUES (?, ?, ?)"); + prep.setInt(1, oId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + db.update(prep, "insertNewOrder"); + db.commit(); + } + + private void processPayment() throws SQLException { + int dId = random.getInt(1, bench.districtsPerWarehouse); + int wId, cdId; + if (bench.warehouses > 1 && random.getInt(1, 100) <= 15) { + do { + wId = random.getInt(1, bench.warehouses); + } while (wId != warehouseId); + cdId = random.getInt(1, bench.districtsPerWarehouse); + } else { + wId = warehouseId; + cdId = dId; + } + boolean byName; + String last; + int cId = 1; + if (random.getInt(1, 100) <= 60) { + byName = true; + 
last = random.getLastname(random.getNonUniform(255, 0, 999)); + } else { + byName = false; + last = ""; + cId = random.getNonUniform(1023, 1, bench.customersPerDistrict); + } + BigDecimal amount = random.getBigDecimal(random.getInt(100, 500000), + 2); + Timestamp datetime = new Timestamp(System.currentTimeMillis()); + PreparedStatement prep; + ResultSet rs; + + prep = prepare("UPDATE DISTRICT SET D_YTD = D_YTD+? " + + "WHERE D_ID=? AND D_W_ID=?"); + prep.setBigDecimal(1, amount); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + db.update(prep, "updateDistrict"); + prep = prepare("UPDATE WAREHOUSE SET W_YTD=W_YTD+? WHERE W_ID=?"); + prep.setBigDecimal(1, amount); + prep.setInt(2, warehouseId); + db.update(prep, "updateWarehouse"); + prep = prepare("SELECT W_STREET_1, W_STREET_2, W_CITY, W_STATE, W_ZIP, W_NAME " + + "FROM WAREHOUSE WHERE W_ID=?"); + prep.setInt(1, warehouseId); + rs = db.query(prep); + rs.next(); + // w_street_1 + rs.getString(1); + // w_street_2 + rs.getString(2); + // w_city + rs.getString(3); + // w_state + rs.getString(4); + // w_zip + rs.getString(5); + String wName = rs.getString(6); + rs.close(); + prep = prepare("SELECT D_STREET_1, D_STREET_2, D_CITY, D_STATE, D_ZIP, D_NAME " + + "FROM DISTRICT WHERE D_ID=? AND D_W_ID=?"); + prep.setInt(1, dId); + prep.setInt(2, warehouseId); + rs = db.query(prep); + rs.next(); + // d_street_1 + rs.getString(1); + // d_street_2 + rs.getString(2); + // d_city + rs.getString(3); + // d_state + rs.getString(4); + // d_zip + rs.getString(5); + String dName = rs.getString(6); + rs.close(); + BigDecimal balance; + String credit; + if (byName) { + prep = prepare("SELECT COUNT(C_ID) FROM CUSTOMER " + + "WHERE C_LAST=? AND C_D_ID=? 
AND C_W_ID=?"); + prep.setString(1, last); + prep.setInt(2, cdId); + prep.setInt(3, wId); + rs = db.query(prep); + rs.next(); + int namecnt = rs.getInt(1); + rs.close(); + if (namecnt == 0) { + // TODO TPC-C: check if this can happen + db.rollback(); + return; + } + prep = prepare("SELECT C_FIRST, C_MIDDLE, C_ID, " + + "C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP, " + + "C_PHONE, C_CREDIT, C_CREDIT_LIM, " + + "C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER " + + "WHERE C_LAST=? AND C_D_ID=? AND C_W_ID=? " + + "ORDER BY C_FIRST"); + prep.setString(1, last); + prep.setInt(2, cdId); + prep.setInt(3, wId); + rs = db.query(prep); + // locate midpoint customer + if (namecnt % 2 != 0) { + namecnt++; + } + for (int n = 0; n < namecnt / 2; n++) { + rs.next(); + } + // c_first + rs.getString(1); + // c_middle + rs.getString(2); + cId = rs.getInt(3); + // c_street_1 + rs.getString(4); + // c_street_2 + rs.getString(5); + // c_city + rs.getString(6); + // c_state + rs.getString(7); + // c_zip + rs.getString(8); + // c_phone + rs.getString(9); + credit = rs.getString(10); + // c_credit_lim + rs.getString(11); + // c_discount + rs.getBigDecimal(12); + balance = rs.getBigDecimal(13); + // c_since + rs.getTimestamp(14); + rs.close(); + } else { + prep = prepare("SELECT C_FIRST, C_MIDDLE, C_LAST, " + + "C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP, " + + "C_PHONE, C_CREDIT, C_CREDIT_LIM, " + + "C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER " + + "WHERE C_ID=? AND C_D_ID=? 
AND C_W_ID=?"); + prep.setInt(1, cId); + prep.setInt(2, cdId); + prep.setInt(3, wId); + rs = db.query(prep); + rs.next(); + // c_first + rs.getString(1); + // c_middle + rs.getString(2); + // c_last + rs.getString(3); + // c_street_1 + rs.getString(4); + // c_street_2 + rs.getString(5); + // c_city + rs.getString(6); + // c_state + rs.getString(7); + // c_zip + rs.getString(8); + // c_phone + rs.getString(9); + credit = rs.getString(10); + // c_credit_lim + rs.getString(11); + // c_discount + rs.getBigDecimal(12); + balance = rs.getBigDecimal(13); + // c_since + rs.getTimestamp(14); + rs.close(); + } + balance = balance.add(amount); + if (credit.equals("BC")) { + prep = prepare("SELECT C_DATA INTO FROM CUSTOMER " + + "WHERE C_ID=? AND C_D_ID=? AND C_W_ID=?"); + prep.setInt(1, cId); + prep.setInt(2, cdId); + prep.setInt(3, wId); + rs = db.query(prep); + rs.next(); + String cData = rs.getString(1); + rs.close(); + String cNewData = "| " + cId + " " + cdId + " " + wId + + " " + dId + " " + warehouseId + " " + amount + " " + + cData; + if (cNewData.length() > 500) { + cNewData = cNewData.substring(0, 500); + } + prep = prepare("UPDATE CUSTOMER SET C_BALANCE=?, C_DATA=? " + + "WHERE C_ID=? AND C_D_ID=? AND C_W_ID=?"); + prep.setBigDecimal(1, balance); + prep.setString(2, cNewData); + prep.setInt(3, cId); + prep.setInt(4, cdId); + prep.setInt(5, wId); + db.update(prep, "updateCustomer"); + } else { + prep = prepare("UPDATE CUSTOMER SET C_BALANCE=? " + + "WHERE C_ID=? AND C_D_ID=? AND C_W_ID=?"); + prep.setBigDecimal(1, balance); + prep.setInt(2, cId); + prep.setInt(3, cdId); + prep.setInt(4, wId); + db.update(prep, "updateCustomer"); + } + // MySQL bug? 
+// String h_data = w_name + " " + d_name; + String hData = wName + " " + dName; + prep = prepare("INSERT INTO HISTORY (H_C_D_ID, H_C_W_ID, H_C_ID, H_D_ID, " + + "H_W_ID, H_DATE, H_AMOUNT, H_DATA) " + + "VALUES (?, ?, ?, ?, ?, ?, ?, ?)"); + prep.setInt(1, cdId); + prep.setInt(2, wId); + prep.setInt(3, cId); + prep.setInt(4, dId); + prep.setInt(5, warehouseId); + prep.setTimestamp(6, datetime); + prep.setBigDecimal(7, amount); + prep.setString(8, hData); + db.update(prep, "insertHistory"); + db.commit(); + } + + private void processOrderStatus() throws SQLException { + int dId = random.getInt(1, bench.districtsPerWarehouse); + boolean byName; + String last = null; + int cId = -1; + if (random.getInt(1, 100) <= 60) { + byName = true; + last = random.getLastname(random.getNonUniform(255, 0, 999)); + } else { + byName = false; + cId = random.getNonUniform(1023, 1, bench.customersPerDistrict); + } + PreparedStatement prep; + ResultSet rs; + + prep = prepare("UPDATE DISTRICT SET D_NEXT_O_ID=-1 WHERE D_ID=-1"); + db.update(prep, "updateDistrict"); + if (byName) { + prep = prepare("SELECT COUNT(C_ID) FROM CUSTOMER " + + "WHERE C_LAST=? AND C_D_ID=? AND C_W_ID=?"); + prep.setString(1, last); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + rs = db.query(prep); + rs.next(); + int namecnt = rs.getInt(1); + rs.close(); + if (namecnt == 0) { + // TODO TPC-C: check if this can happen + db.rollback(); + return; + } + prep = prepare("SELECT C_BALANCE, C_FIRST, C_MIDDLE, C_ID " + + "FROM CUSTOMER " + + "WHERE C_LAST=? AND C_D_ID=? AND C_W_ID=? 
" + + "ORDER BY C_FIRST"); + prep.setString(1, last); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + rs = db.query(prep); + if (namecnt % 2 != 0) { + namecnt++; + } + for (int n = 0; n < namecnt / 2; n++) { + rs.next(); + } + // c_balance + rs.getBigDecimal(1); + // c_first + rs.getString(2); + // c_middle + rs.getString(3); + rs.close(); + } else { + prep = prepare("SELECT C_BALANCE, C_FIRST, C_MIDDLE, C_LAST " + + "FROM CUSTOMER " + + "WHERE C_ID=? AND C_D_ID=? AND C_W_ID=?"); + prep.setInt(1, cId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + rs = db.query(prep); + rs.next(); + // c_balance + rs.getBigDecimal(1); + // c_first + rs.getString(2); + // c_middle + rs.getString(3); + // c_last + rs.getString(4); + rs.close(); + } + prep = prepare("SELECT MAX(O_ID) " + + "FROM ORDERS WHERE O_C_ID=? AND O_D_ID=? AND O_W_ID=?"); + prep.setInt(1, cId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + rs = db.query(prep); + int oId = -1; + if (rs.next()) { + oId = rs.getInt(1); + if (rs.wasNull()) { + oId = -1; + } + } + rs.close(); + if (oId != -1) { + prep = prepare("SELECT O_ID, O_CARRIER_ID, O_ENTRY_D " + + "FROM ORDERS WHERE O_ID=?"); + prep.setInt(1, oId); + rs = db.query(prep); + rs.next(); + oId = rs.getInt(1); + // o_carrier_id + rs.getInt(2); + // o_entry_d + rs.getTimestamp(3); + rs.close(); + prep = prepare("SELECT OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, " + + "OL_AMOUNT, OL_DELIVERY_D FROM ORDER_LINE " + + "WHERE OL_O_ID=? AND OL_D_ID=? 
AND OL_W_ID=?"); + prep.setInt(1, oId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + rs = db.query(prep); + while (rs.next()) { + // o_i_id + rs.getInt(1); + // ol_supply_w_id + rs.getInt(2); + // ol_quantity + rs.getInt(3); + // ol_amount + rs.getBigDecimal(4); + // ol_delivery_d + rs.getTimestamp(5); + } + rs.close(); + } + db.commit(); + } + + private void processDelivery() throws SQLException { + int carrierId = random.getInt(1, 10); + Timestamp datetime = new Timestamp(System.currentTimeMillis()); + PreparedStatement prep; + ResultSet rs; + + prep = prepare("UPDATE DISTRICT SET D_NEXT_O_ID=-1 WHERE D_ID=-1"); + db.update(prep, "updateDistrict"); + for (int dId = 1; dId <= bench.districtsPerWarehouse; dId++) { + prep = prepare("SELECT MIN(NO_O_ID) FROM NEW_ORDER " + + "WHERE NO_D_ID=? AND NO_W_ID=?"); + prep.setInt(1, dId); + prep.setInt(2, warehouseId); + rs = db.query(prep); + int noId = -1; + if (rs.next()) { + noId = rs.getInt(1); + if (rs.wasNull()) { + noId = -1; + } + } + rs.close(); + if (noId != -1) { + prep = prepare("DELETE FROM NEW_ORDER " + + "WHERE NO_O_ID=? AND NO_D_ID=? AND NO_W_ID=?"); + prep.setInt(1, noId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + db.update(prep, "deleteNewOrder"); + prep = prepare("SELECT O_C_ID FROM ORDERS " + + "WHERE O_ID=? AND O_D_ID=? AND O_W_ID=?"); + prep.setInt(1, noId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + rs = db.query(prep); + rs.next(); + // o_c_id + rs.getInt(1); + rs.close(); + prep = prepare("UPDATE ORDERS SET O_CARRIER_ID=? " + + "WHERE O_ID=? AND O_D_ID=? AND O_W_ID=?"); + prep.setInt(1, carrierId); + prep.setInt(2, noId); + prep.setInt(3, dId); + prep.setInt(4, warehouseId); + db.update(prep, "updateOrders"); + prep = prepare("UPDATE ORDER_LINE SET OL_DELIVERY_D=? " + + "WHERE OL_O_ID=? AND OL_D_ID=? 
AND OL_W_ID=?"); + prep.setTimestamp(1, datetime); + prep.setInt(2, noId); + prep.setInt(3, dId); + prep.setInt(4, warehouseId); + db.update(prep, "updateOrderLine"); + prep = prepare("SELECT SUM(OL_AMOUNT) FROM ORDER_LINE " + + "WHERE OL_O_ID=? AND OL_D_ID=? AND OL_W_ID=?"); + prep.setInt(1, noId); + prep.setInt(2, dId); + prep.setInt(3, warehouseId); + rs = db.query(prep); + rs.next(); + BigDecimal amount = rs.getBigDecimal(1); + rs.close(); + prep = prepare("UPDATE CUSTOMER SET C_BALANCE=C_BALANCE+? " + + "WHERE C_ID=? AND C_D_ID=? AND C_W_ID=?"); + prep.setBigDecimal(1, amount); + prep.setInt(2, noId); + prep.setInt(3, dId); + prep.setInt(4, warehouseId); + db.update(prep, "updateCustomer"); + } + } + db.commit(); + } + + private void processStockLevel() throws SQLException { + int dId = (terminalId % bench.districtsPerWarehouse) + 1; + int threshold = random.getInt(10, 20); + PreparedStatement prep; + ResultSet rs; + + prep = prepare("UPDATE DISTRICT SET D_NEXT_O_ID=-1 WHERE D_ID=-1"); + db.update(prep, "updateDistrict"); + + prep = prepare("SELECT D_NEXT_O_ID FROM DISTRICT " + + "WHERE D_ID=? AND D_W_ID=?"); + prep.setInt(1, dId); + prep.setInt(2, warehouseId); + rs = db.query(prep); + rs.next(); + int oId = rs.getInt(1); + rs.close(); + prep = prepare("SELECT COUNT(DISTINCT S_I_ID) " + + "FROM ORDER_LINE, STOCK WHERE " + + "OL_W_ID=? AND " + + "OL_D_ID=? AND " + + "OL_O_ID=?-20 AND " + + "S_W_ID=? 
AND " + + "S_I_ID=OL_I_ID AND " + + "S_QUANTITY replace = new ArrayList<>(); + private String currentAction; + private long startTimeNs; + private long initialGCTime; + private Connection conn; + private Statement stat; + private long lastTrace; + private final Random random = new Random(1); + private final ArrayList results = new ArrayList<>(); + private int totalTime; + private int totalGCTime; + private final AtomicInteger executedStatements = new AtomicInteger(0); + private int threadCount; + + private Server serverH2; + private Object serverDerby; + private boolean serverHSQLDB; + + /** + * Get the database name. + * + * @return the database name + */ + String getName() { + return name; + } + + /** + * Get the total measured time. + * + * @return the time + */ + int getTotalTime() { + return totalTime; + } + + /** + * Get the total measured GC time. + * + * @return the time in milliseconds + */ + int getTotalGCTime() { + return totalGCTime; + } + + /** + * Get the result array. + * + * @return the result array + */ + ArrayList getResults() { + return results; + } + + /** + * Get the random number generator. + * + * @return the generator + */ + Random getRandom() { + return random; + } + + /** + * Start the server if the this is a remote connection. 
+ */ + void startServer() throws Exception { + if (url.startsWith("jdbc:h2:tcp:")) { + serverH2 = Server.createTcpServer().start(); + Thread.sleep(100); + } else if (url.startsWith("jdbc:derby://")) { + serverDerby = Class.forName( + "org.apache.derby.drda.NetworkServerControl").newInstance(); + Method m = serverDerby.getClass().getMethod("start", PrintWriter.class); + m.invoke(serverDerby, new Object[] { null }); + // serverDerby = new NetworkServerControl(); + // serverDerby.start(null); + Thread.sleep(100); + } else if (url.startsWith("jdbc:hsqldb:hsql:")) { + if (!serverHSQLDB) { + Class c; + try { + c = Class.forName("org.hsqldb.server.Server"); + } catch (Exception e) { + c = Class.forName("org.hsqldb.Server"); + } + Method m = c.getMethod("main", String[].class); + m.invoke(null, new Object[] { new String[] { "-database.0", + "data/mydb;hsqldb.default_table_type=cached", "-dbname.0", "xdb" } }); + // org.hsqldb.Server.main(new String[]{"-database.0", "mydb", + // "-dbname.0", "xdb"}); + serverHSQLDB = true; + Thread.sleep(100); + } + } + } + + /** + * Stop the server if this is a remote connection. + */ + void stopServer() throws Exception { + if (serverH2 != null) { + serverH2.stop(); + serverH2 = null; + } + if (serverDerby != null) { + Method m = serverDerby.getClass().getMethod("shutdown"); + // cast for JDK 1.5 + m.invoke(serverDerby, (Object[]) null); + // serverDerby.shutdown(); + serverDerby = null; + } else if (serverHSQLDB) { + // can not shut down (shutdown calls System.exit) + // openConnection(); + // update("SHUTDOWN"); + // closeConnection(); + // serverHSQLDB = false; + } + } + + /** + * Parse a database configuration and create a database object from it. 
+ * + * @param test the test application + * @param id the database id + * @param dbString the configuration string + * @param threadCount the number of threads to use + * @return a new database object with the given settings + */ + static Database parse(DatabaseTest test, int id, String dbString, + int threadCount) { + try { + StringTokenizer tokenizer = new StringTokenizer(dbString, ","); + Database db = new Database(); + db.id = id; + db.threadCount = threadCount; + db.test = test; + db.name = tokenizer.nextToken().trim(); + String driver = tokenizer.nextToken().trim(); + Class.forName(driver); + db.url = tokenizer.nextToken().trim(); + db.user = tokenizer.nextToken().trim(); + db.password = ""; + if (tokenizer.hasMoreTokens()) { + db.password = tokenizer.nextToken().trim(); + } + return db; + } catch (Exception e) { + System.out.println("Cannot load database " + dbString + " :" + e.toString()); + return null; + } + } + + /** + * Open a new database connection. This connection must be closed + * by calling conn.close(). + * + * @return the opened connection + */ + Connection openNewConnection() throws SQLException { + Connection newConn = DriverManager.getConnection(url, user, password); + if (url.startsWith("jdbc:derby:")) { + // Derby: use higher cache size + try (Statement s = newConn.createStatement()) { + // stat.execute("CALL + // SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY( + // 'derby.storage.pageCacheSize', '64')"); + // stat.execute("CALL + // SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY( + // 'derby.storage.pageSize', '8192')"); + } + } else if (url.startsWith("jdbc:hsqldb:")) { + // HSQLDB: use a WRITE_DELAY of 1 second + try (Statement s = newConn.createStatement()) { + s.execute("SET WRITE_DELAY 1"); + } + } + return newConn; + } + + /** + * Open the database connection. + */ + void openConnection() throws SQLException { + conn = openNewConnection(); + stat = conn.createStatement(); + } + + /** + * Close the database connection. 
+ */ + void closeConnection() throws SQLException { + // if(!serverHSQLDB && url.startsWith("jdbc:hsqldb:")) { + // stat.execute("SHUTDOWN"); + // } + conn.close(); + stat = null; + conn = null; + } + + /** + * Initialize the SQL statement translation of this database. + * + * @param prop the properties with the translations to use + */ + void setTranslations(Properties prop) { + String databaseType = url.substring("jdbc:".length()); + databaseType = databaseType.substring(0, databaseType.indexOf(':')); + for (Object k : prop.keySet()) { + String key = (String) k; + if (key.startsWith(databaseType + ".")) { + String pattern = key.substring(databaseType.length() + 1); + pattern = pattern.replace('_', ' '); + pattern = StringUtils.toUpperEnglish(pattern); + String replacement = prop.getProperty(key); + replace.add(new String[]{pattern, replacement}); + } + } + } + + /** + * Prepare a SQL statement. + * + * @param sql the SQL statement + * @return the prepared statement + */ + PreparedStatement prepare(String sql) throws SQLException { + sql = getSQL(sql); + return conn.prepareStatement(sql); + } + + private String getSQL(String sql) { + for (String[] pair : replace) { + String pattern = pair[0]; + String replacement = pair[1]; + sql = StringUtils.replaceAll(sql, pattern, replacement); + } + return sql; + } + + /** + * Start the benchmark. + * + * @param bench the benchmark + * @param action the action + */ + void start(Bench bench, String action) { + this.currentAction = bench.getName() + ": " + action; + this.startTimeNs = System.nanoTime(); + this.initialGCTime = Utils.getGarbageCollectionTime(); + } + + /** + * This method is called when the test run ends. This will stop collecting + * data. 
+ */ + void end() { + long time = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTimeNs); + long gcCollectionTime = Utils.getGarbageCollectionTime() - initialGCTime; + log(currentAction, "ms", (int) time); + if (test.isCollect()) { + totalTime += time; + totalGCTime += gcCollectionTime; + } + } + + /** + * Drop a table. Errors are ignored. + * + * @param table the table name + */ + void dropTable(String table) { + try { + update("DROP TABLE " + table); + } catch (Exception e) { + // ignore - table may not exist + } + } + + /** + * Execute an SQL statement. + * + * @param prep the prepared statement + * @param traceMessage the trace message + */ + void update(PreparedStatement prep, String traceMessage) throws SQLException { + test.trace(traceMessage); + prep.executeUpdate(); + if (test.isCollect()) { + executedStatements.incrementAndGet(); + } + } + + /** + * Execute an SQL statement. + * + * @param sql the SQL statement + */ + void update(String sql) throws SQLException { + sql = getSQL(sql); + if (sql.trim().length() > 0) { + if (test.isCollect()) { + executedStatements.incrementAndGet(); + } + stat.execute(sql); + } else { + System.out.println("?"); + } + } + + /** + * Enable or disable auto-commit. + * + * @param b false to disable + */ + void setAutoCommit(boolean b) throws SQLException { + conn.setAutoCommit(b); + } + + /** + * Commit a transaction. + */ + void commit() throws SQLException { + conn.commit(); + } + + /** + * Roll a transaction back. + */ + void rollback() throws SQLException { + conn.rollback(); + } + + /** + * Print trace information if trace is enabled. 
+ * + * @param action the action + * @param i the current value + * @param max the maximum value + */ + void trace(String action, int i, int max) { + if (TRACE) { + long time = System.nanoTime(); + if (i == 0 || lastTrace == 0) { + lastTrace = time; + } else if (time > lastTrace + TimeUnit.SECONDS.toNanos(1)) { + System.out.println(action + ": " + ((100 * i / max) + "%")); + lastTrace = time; + } + } + } + + /** + * If data collection is enabled, add the currently used memory size to the + * log. + * + * @param bench the benchmark + * @param action the action + */ + void logMemory(Bench bench, String action) { + log(bench.getName() + ": " + action, "MB", TestBase.getMemoryUsed()); + } + + /** + * If data collection is enabled, add this information to the log. + * + * @param action the action + * @param scale the scale + * @param value the value + */ + void log(String action, String scale, int value) { + if (test.isCollect()) { + results.add(new Object[] { action, scale, Integer.valueOf(value) }); + } + } + + /** + * Execute a query. + * + * @param prep the prepared statement + * @return the result set + */ + ResultSet query(PreparedStatement prep) throws SQLException { + // long time = System.nanoTime(); + ResultSet rs = prep.executeQuery(); + // time = System.nanoTime() - time; + // if(time > 100) { + // System.out.println("time="+time); + // } + if (test.isCollect()) { + executedStatements.incrementAndGet(); + } + return rs; + } + + /** + * Execute a query and read all rows. + * + * @param prep the prepared statement + */ + void queryReadResult(PreparedStatement prep) throws SQLException { + ResultSet rs = query(prep); + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + while (rs.next()) { + for (int i = 0; i < columnCount; i++) { + rs.getString(i + 1); + } + } + } + + /** + * Get the number of executed statements. 
+ * + * @return the number of statements + */ + int getExecutedStatements() { + return executedStatements.get(); + } + + /** + * Get the database id. + * + * @return the id + */ + int getId() { + return id; + } + + int getThreadsCount() { + return threadCount; + } + + /** + * The interface used for a test. + */ + public interface DatabaseTest { + + /** + * Whether data needs to be collected. + * + * @return true if yes + */ + boolean isCollect(); + + /** + * Print a message to system out if trace is enabled. + * + * @param msg the message + */ + void trace(String msg); + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/bench/TestPerformance.java b/modules/h2/src/test/java/org/h2/test/bench/TestPerformance.java new file mode 100644 index 0000000000000..22c7ce7abab16 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/TestPerformance.java @@ -0,0 +1,277 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.io.FileWriter; +import java.io.InputStream; +import java.io.PrintWriter; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Properties; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; +import org.h2.util.JdbcUtils; + +/** + * The main controller class of the benchmark application. + * To run the benchmark, call the main method of this class. + */ +public class TestPerformance implements Database.DatabaseTest { + + /** + * Whether data should be collected. + */ + boolean collect; + + /** + * The flag used to enable or disable trace messages. 
+ */
+    boolean trace;
+
+    /**
+     * This method is called when executing this sample application.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) throws Exception {
+        new TestPerformance().test(args);
+    }
+
+    private static Connection getResultConnection() throws SQLException {
+        org.h2.Driver.load();
+        return DriverManager.getConnection("jdbc:h2:./data/results");
+    }
+
+    private static void openResults() throws SQLException {
+        Connection conn = null;
+        Statement stat = null;
+        try {
+            conn = getResultConnection();
+            stat = conn.createStatement();
+            stat.execute(
+                    "CREATE TABLE IF NOT EXISTS RESULTS(" +
+                    "TESTID INT, TEST VARCHAR, " +
+                    "UNIT VARCHAR, DBID INT, DB VARCHAR, RESULT VARCHAR)");
+        } finally {
+            JdbcUtils.closeSilently(stat);
+            JdbcUtils.closeSilently(conn);
+        }
+    }
+
+    private void test(String... args) throws Exception {
+        int dbId = -1;
+        boolean exit = false;
+        String out = "benchmark.html";
+        Properties prop = new Properties();
+        InputStream in = getClass().getResourceAsStream("test.properties");
+        prop.load(in);
+        in.close();
+        int size = Integer.parseInt(prop.getProperty("size"));
+        for (int i = 0; i < args.length; i++) {
+            String arg = args[i];
+            if ("-db".equals(arg)) {
+                dbId = Integer.parseInt(args[++i]);
+            } else if ("-init".equals(arg)) {
+                FileUtils.deleteRecursive("data", true);
+            } else if ("-out".equals(arg)) {
+                out = args[++i];
+            } else if ("-trace".equals(arg)) {
+                trace = true;
+            } else if ("-exit".equals(arg)) {
+                exit = true;
+            } else if ("-size".equals(arg)) {
+                size = Integer.parseInt(args[++i]);
+            }
+        }
+        ArrayList<Database> dbs = new ArrayList<>();
+        for (int i = 0; i < 100; i++) {
+            if (dbId != -1 && i != dbId) {
+                continue;
+            }
+            String dbString = prop.getProperty("db" + i);
+            if (dbString != null) {
+                Database db = Database.parse(this, i, dbString, 1);
+                if (db != null) {
+                    db.setTranslations(prop);
+                    dbs.add(db);
+                }
+            }
+        }
+        ArrayList<Bench> tests = new ArrayList<>();
+        for (int i = 0; i < 100; i++) {
+            String testString = prop.getProperty("test" + i);
+            if (testString != null) {
+                Bench bench = (Bench) Class.forName(testString).newInstance();
+                tests.add(bench);
+            }
+        }
+        testAll(dbs, tests, size);
+        collect = false;
+        if (dbs.size() == 0) {
+            return;
+        }
+        ArrayList<Object[]> results = dbs.get(0).getResults();
+        Connection conn = null;
+        PreparedStatement prep = null;
+        Statement stat = null;
+        PrintWriter writer = null;
+        try {
+            openResults();
+            conn = getResultConnection();
+            stat = conn.createStatement();
+            prep = conn.prepareStatement(
+                    "INSERT INTO RESULTS(TESTID, TEST, " +
+                    "UNIT, DBID, DB, RESULT) VALUES(?, ?, ?, ?, ?, ?)");
+            for (int i = 0; i < results.size(); i++) {
+                Object[] res = results.get(i);
+                prep.setInt(1, i);
+                prep.setString(2, res[0].toString());
+                prep.setString(3, res[1].toString());
+                for (Database db : dbs) {
+                    prep.setInt(4, db.getId());
+                    prep.setString(5, db.getName());
+                    Object[] v = db.getResults().get(i);
+                    prep.setString(6, v[2].toString());
+                    prep.execute();
+                }
+            }
+
+            writer = new PrintWriter(new FileWriter(out));
+            ResultSet rs = stat.executeQuery(
+                    "CALL '<table border=\"1\"><tr><th>Test Case</th><th>Unit</th>' " +
+                    "|| (SELECT GROUP_CONCAT('<th>' || DB || '</th>' " +
+                    "ORDER BY DBID SEPARATOR '') FROM " +
+                    "(SELECT DISTINCT DBID, DB FROM RESULTS))" +
+                    "|| '</tr>' || CHAR(10) " +
+                    "|| (SELECT GROUP_CONCAT('<tr><td>' || TEST || " +
+                    "'</td><td>' || UNIT || '</td>' || ( " +
+                    "SELECT GROUP_CONCAT('<td>' || RESULT || '</td>' " +
+                    "ORDER BY DBID SEPARATOR '') FROM RESULTS R2 WHERE " +
+                    "R2.TESTID = R1.TESTID) || '</tr>' " +
+                    "ORDER BY TESTID SEPARATOR CHAR(10)) FROM " +
+                    "(SELECT DISTINCT TESTID, TEST, UNIT FROM RESULTS) R1)" +
+                    "|| '</table>'");
+            rs.next();
+            String result = rs.getString(1);
+            writer.println(result);
+        } finally {
+            JdbcUtils.closeSilently(prep);
+            JdbcUtils.closeSilently(stat);
+            JdbcUtils.closeSilently(conn);
+            IOUtils.closeSilently(writer);
+        }
+
+//        ResultSet rsDbs = conn.createStatement().executeQuery(
+//                "SELECT DB RESULTS GROUP BY DBID, DB ORDER BY DBID");
+//        while(rsDbs.next()) {
+//            writer.println("<th>" + rsDbs.getString(1) + "</th>");
+//        }
+//        ResultSet rs = conn.createStatement().executeQuery(
+//                "SELECT TEST, UNIT FROM RESULTS " +
+//                "GROUP BY TESTID, TEST, UNIT ORDER BY TESTID");
+//        while(rs.next()) {
+//            writer.println("<tr><td>" + rs.getString(1) + "</td>");
+//            writer.println("<td>" + rs.getString(2) + "</td>");
+//            ResultSet rsRes = conn.createStatement().executeQuery(
+//                    "SELECT RESULT FROM RESULTS WHERE TESTID=? ORDER BY DBID");
+//
+//
+//        }
+
+//        PrintWriter writer =
+//                new PrintWriter(new FileWriter("benchmark.html"));
+//        writer.println("<table><tr><th>Test Case</th><th>Unit</th>");
+//        for(int j=0; j<dbs.size(); j++) {
+//            Database db = dbs.get(j);
+//            writer.println("<th>" + db.getName() + "</th>");
+//        }
+//        writer.println("</tr>");
+//        for(int i=0; i<results.size(); i++) {
+//            Object[] res = results.get(i);
+//            writer.println("<tr>");
+//            writer.println("<td>" + res[0] + "</td>");
+//            writer.println("<td>" + res[1] + "</td>");
+//            for(int j=0; j<dbs.size(); j++) {
+//                Database db = dbs.get(j);
+//                Object[] v = db.getResults().get(i);
+//                writer.println("<td>" + v[2] + "</td>");
+//            }
+//            writer.println("</tr>");
+//        }
+//        writer.println("</table>");
+
+        if (exit) {
+            System.exit(0);
+        }
+    }
+
+    private void testAll(ArrayList<Database> dbs, ArrayList<Bench> tests,
+            int size) throws Exception {
+        for (int i = 0; i < dbs.size(); i++) {
+            if (i > 0) {
+                Thread.sleep(1000);
+            }
+            // calls garbage collection
+            TestBase.getMemoryUsed();
+            Database db = dbs.get(i);
+            System.out.println();
+            System.out.println("Testing the performance of " + db.getName());
+            db.startServer();
+            Connection conn = db.openNewConnection();
+            DatabaseMetaData meta = conn.getMetaData();
+            System.out.println(" " + meta.getDatabaseProductName() + " " +
+                    meta.getDatabaseProductVersion());
+            runDatabase(db, tests, 1);
+            runDatabase(db, tests, 1);
+            collect = true;
+            runDatabase(db, tests, size);
+            conn.close();
+            db.log("Executed statements", "#", db.getExecutedStatements());
+            db.log("Total time", "ms", db.getTotalTime());
+            int statPerSec = (int) (db.getExecutedStatements() * 1000L / db.getTotalTime());
+            db.log("Statements per second", "#", statPerSec);
+            System.out.println("Statements per second: " + statPerSec);
+            System.out.println("GC overhead: " + (100 * db.getTotalGCTime() / db.getTotalTime()) + "%");
+            collect = false;
+            db.stopServer();
+        }
+    }
+
+    private static void runDatabase(Database db, ArrayList<Bench> tests,
+            int size) throws Exception {
+        for (Bench bench : tests) {
+            runTest(db, bench, size);
+        }
+    }
+
+    private static void runTest(Database db, Bench bench, int size)
+            throws Exception {
+        bench.init(db, size);
+        bench.runTest();
+    }
+
+    @Override
+    public void trace(String msg) {
+        if (trace) {
+            System.out.println(msg);
+        }
+    }
+
+    @Override
+    public boolean isCollect() {
+        return collect;
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/bench/TestScalability.java b/modules/h2/src/test/java/org/h2/test/bench/TestScalability.java
new file mode 100644
index 0000000000000..32a1def082cb0
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/bench/TestScalability.java
@@ -0,0 +1,225 @@
+/*
+ * Copyright 2004-2018 H2 Group.
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.bench; + +import java.io.FileWriter; +import java.io.PrintWriter; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; +import org.h2.util.JdbcUtils; + +/** + * Used to compare scalability between the old engine and the new MVStore + * engine. Mostly it runs BenchB with various numbers of threads. + */ +public class TestScalability implements Database.DatabaseTest { + + /** + * Whether data should be collected. + */ + boolean collect; + + /** + * The flag used to enable or disable trace messages. + */ + boolean trace; + + /** + * This method is called when executing this sample application. + * + * @param args the command line parameters + */ + public static void main(String... 
+            args) throws Exception {
+        new TestScalability().test();
+    }
+
+    private static Connection getResultConnection() throws SQLException {
+        org.h2.Driver.load();
+        return DriverManager.getConnection("jdbc:h2:./data/results");
+    }
+
+    private static void openResults() throws SQLException {
+        Connection conn = null;
+        Statement stat = null;
+        try {
+            conn = getResultConnection();
+            stat = conn.createStatement();
+            stat.execute(
+                    "CREATE TABLE IF NOT EXISTS RESULTS(TESTID INT, " +
+                    "TEST VARCHAR, UNIT VARCHAR, DBID INT, " +
+                    "DB VARCHAR, TCNT INT, RESULT VARCHAR)");
+        } finally {
+            JdbcUtils.closeSilently(stat);
+            JdbcUtils.closeSilently(conn);
+        }
+    }
+
+    private void test() throws Exception {
+        FileUtils.deleteRecursive("data", true);
+        final String out = "benchmark.html";
+        final int size = 400;
+
+        ArrayList<Database> dbs = new ArrayList<>();
+        int id = 1;
+        final String h2Url = "jdbc:h2:./data/test;" +
+                "LOCK_TIMEOUT=10000;MV_STORE=FALSE;LOCK_MODE=3";
+        dbs.add(createDbEntry(id++, "H2", 1, h2Url));
+        dbs.add(createDbEntry(id++, "H2", 2, h2Url));
+        dbs.add(createDbEntry(id++, "H2", 4, h2Url));
+        dbs.add(createDbEntry(id++, "H2", 8, h2Url));
+        dbs.add(createDbEntry(id++, "H2", 16, h2Url));
+        dbs.add(createDbEntry(id++, "H2", 32, h2Url));
+        dbs.add(createDbEntry(id++, "H2", 64, h2Url));
+
+        final String mvUrl = "jdbc:h2:./data/mvTest;" +
+                "LOCK_TIMEOUT=10000;MULTI_THREADED=1";
+        dbs.add(createDbEntry(id++, "MV", 1, mvUrl));
+        dbs.add(createDbEntry(id++, "MV", 2, mvUrl));
+        dbs.add(createDbEntry(id++, "MV", 4, mvUrl));
+        dbs.add(createDbEntry(id++, "MV", 8, mvUrl));
+        dbs.add(createDbEntry(id++, "MV", 16, mvUrl));
+        dbs.add(createDbEntry(id++, "MV", 32, mvUrl));
+        dbs.add(createDbEntry(id++, "MV", 64, mvUrl));
+
+        final BenchB test = new BenchB() {
+            // Since we focus on scalability here, let's emphasize multi-threaded
+            // part of the test (transactions) and minimize impact of the init.
+            @Override
+            protected int getTransactionsPerClient(int size) {
+                return size * 8;
+            }
+        };
+        testAll(dbs, test, size);
+        collect = false;
+
+        ArrayList<Object[]> results = dbs.get(0).getResults();
+        Connection conn = null;
+        PreparedStatement prep = null;
+        Statement stat = null;
+        PrintWriter writer = null;
+        try {
+            openResults();
+            conn = getResultConnection();
+            stat = conn.createStatement();
+            prep = conn.prepareStatement(
+                    "INSERT INTO RESULTS(TESTID, " +
+                    "TEST, UNIT, DBID, DB, TCNT, RESULT) VALUES(?, ?, ?, ?, ?, ?, ?)");
+            for (int i = 0; i < results.size(); i++) {
+                Object[] res = results.get(i);
+                prep.setInt(1, i);
+                prep.setString(2, res[0].toString());
+                prep.setString(3, res[1].toString());
+                for (Database db : dbs) {
+                    prep.setInt(4, db.getId());
+                    prep.setString(5, db.getName());
+                    prep.setInt(6, db.getThreadsCount());
+                    Object[] v = db.getResults().get(i);
+                    prep.setString(7, v[2].toString());
+                    prep.execute();
+                }
+            }
+
+            writer = new PrintWriter(new FileWriter(out));
+            ResultSet rs = stat.executeQuery(
+                    "CALL '<table border=\"1\"><tr><th rowspan=\"2\">Test Case</th>" +
+                    "<th rowspan=\"2\">Unit</th>' " +
+                    "|| (SELECT GROUP_CONCAT('<th colspan=\"' || COLSPAN || '\">' || TCNT || '</th>' " +
+                    "ORDER BY TCNT SEPARATOR '') FROM " +
+                    "(SELECT TCNT, COUNT(*) COLSPAN FROM (SELECT DISTINCT DB, TCNT FROM RESULTS) GROUP BY TCNT))" +
+                    "|| '</tr>' || CHAR(10) " +
+                    "|| '<tr>' || (SELECT GROUP_CONCAT('<th>' || DB || '</th>' ORDER BY TCNT, DB SEPARATOR '')" +
+                    " FROM (SELECT DISTINCT DB, TCNT FROM RESULTS)) || '</tr>' || CHAR(10) " +
+                    "|| (SELECT GROUP_CONCAT('<tr><td>' || TEST || '</td><td>' || UNIT || '</td>' || ( " +
+                    "SELECT GROUP_CONCAT('<td>' || RESULT || '</td>' ORDER BY TCNT,DB SEPARATOR '')" +
+                    " FROM RESULTS R2 WHERE R2.TESTID = R1.TESTID) || '</tr>' " +
+                    "ORDER BY TESTID SEPARATOR CHAR(10)) FROM " +
+                    "(SELECT DISTINCT TESTID, TEST, UNIT FROM RESULTS) R1)" +
+                    "|| '</table>'");
+            rs.next();
+            String result = rs.getString(1);
+            writer.println(result);
+        } finally {
+            JdbcUtils.closeSilently(prep);
+            JdbcUtils.closeSilently(stat);
+            JdbcUtils.closeSilently(conn);
+            IOUtils.closeSilently(writer);
+        }
+    }
+
+    private Database createDbEntry(int id, String namePrefix,
+            int threadCount, String url) {
+        Database db = Database.parse(this, id, namePrefix +
+                ", org.h2.Driver, " + url + ", sa, sa", threadCount);
+        return db;
+    }
+
+
+    private void testAll(ArrayList<Database> dbs, BenchB test, int size)
+            throws Exception {
+        for (int i = 0; i < dbs.size(); i++) {
+            if (i > 0) {
+                Thread.sleep(1000);
+            }
+            // calls garbage collection
+            TestBase.getMemoryUsed();
+            Database db = dbs.get(i);
+            System.out.println("Testing the performance of " + db.getName() +
+                    " (" + db.getThreadsCount() + " threads)");
+            db.startServer();
+            Connection conn = db.openNewConnection();
+            DatabaseMetaData meta = conn.getMetaData();
+            System.out.println(" " + meta.getDatabaseProductName() + " " +
+                    meta.getDatabaseProductVersion());
+            runDatabase(db, test, 1);
+            runDatabase(db, test, 1);
+            collect = true;
+            runDatabase(db, test, size);
+            conn.close();
+            db.log("Executed statements", "#", db.getExecutedStatements());
+            db.log("Total time", "ms", db.getTotalTime());
+            int statPerSec = (int) (db.getExecutedStatements() *
+                    1000L / db.getTotalTime());
+            db.log("Statements per second", "#", statPerSec);
+            System.out.println("Statements per second: " + statPerSec);
+            System.out.println("GC overhead: " + (100 * db.getTotalGCTime() / db.getTotalTime()) + "%");
+            collect = false;
+            db.stopServer();
+        }
+    }
+
+    private static void runDatabase(Database db, BenchB bench, int size)
+            throws Exception {
+        bench.init(db, size);
+        bench.setThreadCount(db.getThreadsCount());
+        bench.runTest();
+    }
+
+    /**
+     * Print a message to system out if trace is enabled.
+     *
+     * @param s the message
+     */
+    @Override
+    public void trace(String s) {
+        if (trace) {
+            System.out.println(s);
+        }
+    }
+
+    @Override
+    public boolean isCollect() {
+        return collect;
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/bench/package.html b/modules/h2/src/test/java/org/h2/test/bench/package.html
new file mode 100644
index 0000000000000..4c4693440c9bc
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/bench/package.html
@@ -0,0 +1,14 @@
+<!--
+Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+and the EPL 1.0 (http://h2database.com/html/license.html).
+Initial Developer: H2 Group
+-->
+<html><head>
+    <meta http-equiv="Content-Type" content="text/html;charset=utf-8" /><title>
+Javadoc package documentation
+</title></head><body style="font: 9pt/130% Tahoma, Arial, Helvetica, sans-serif; margin: 0px;"><p>
+
+The implementation of the benchmark application.
+
+</p></body></html>
    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/bench/test.properties b/modules/h2/src/test/java/org/h2/test/bench/test.properties new file mode 100644 index 0000000000000..1239af1a7eb92 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/bench/test.properties @@ -0,0 +1,39 @@ +db1 = H2, org.h2.Driver, jdbc:h2:./data/test, sa, sa + +#xdb1 = H2, org.h2.Driver, jdbc:h2:./data/test;LOCK_TIMEOUT=10000;LOCK_MODE=3;DEFAULT_TABLE_ENGINE=org.h2.mvstore.db.MVTableEngine, sa, sa + +#xdb1 = H2, org.h2.Driver, jdbc:h2:./data/test;LOG=1;LOCK_TIMEOUT=10000;LOCK_MODE=3;ACCESS_MODE_DATA=rwd, sa, sa +#xdb2 = H2 (nio), org.h2.Driver, jdbc:h2:nio:data/test;LOCK_TIMEOUT=10000;LOCK_MODE=3, sa, sa +#xdb3 = H2 (nioMapped), org.h2.Driver, jdbc:h2:nioMapped:data/test;LOCK_TIMEOUT=10000;LOCK_MODE=3, sa, sa +#xdb2 = H2 (MVCC), org.h2.Driver, jdbc:h2:./data/test_mvcc;MVCC=TRUE, sa, sa +#xdb2 = H2 (XTEA), org.h2.Driver, jdbc:h2:./data/test_xtea;LOCK_TIMEOUT=10000;LOCK_MODE=3;CIPHER=XTEA, sa, sa 123 +#xdb3 = H2 (AES), org.h2.Driver, jdbc:h2:./data/test_aes;LOCK_TIMEOUT=10000;LOCK_MODE=3;CIPHER=AES, sa, sa 123 +#xdb4 = H2, org.h2.Driver, jdbc:h2:./data/test;LOCK_TIMEOUT=10000;LOCK_MODE=3;write_mode_log=rws;write_delay=0, sa, sa +#xdb5 = H2_PG, org.postgresql.Driver, jdbc:postgresql://localhost:5435/h2test, sa, sa + +db2 = HSQLDB, org.hsqldb.jdbcDriver, jdbc:hsqldb:data/test;hsqldb.default_table_type=cached;sql.enforce_size=true, sa +db3 = Derby, org.apache.derby.jdbc.EmbeddedDriver, jdbc:derby:data/derby;create=true, sa, sa + +db4 = H2 (Server), org.h2.Driver, jdbc:h2:tcp://localhost/./data/testServer, sa, sa +db5 = HSQLDB, org.hsqldb.jdbcDriver, jdbc:hsqldb:hsql://localhost/xdb, sa +db6 = Derby, org.apache.derby.jdbc.ClientDriver, jdbc:derby://localhost/data/derbyServer;create=true, sa, sa +db7 = PostgreSQL, org.postgresql.Driver, jdbc:postgresql:test, sa, sa +db8 = MySQL, com.mysql.jdbc.Driver, jdbc:mysql://localhost/test?jdbcCompliantTruncation=false, 
sa, sa + +#db2 = MSSQLServer, com.microsoft.jdbc.sqlserver.SQLServerDriver, jdbc:microsoft:sqlserver://127.0.0.1:1433;DatabaseName=test, test, test +#db2 = Oracle, oracle.jdbc.driver.OracleDriver, jdbc:oracle:thin:@localhost:1521:XE, client, client +#db2 = Firebird, org.firebirdsql.jdbc.FBDriver, jdbc:firebirdsql:localhost:c:/temp/firebird/test, sysdba, masterkey +#db2 = DB2, COM.ibm.db2.jdbc.net.DB2Driver, jdbc:db2://localhost/test, test, test +#db2 = OneDollarDB, in.co.daffodil.db.jdbc.DaffodilDBDriver, jdbc:daffodilDB_embedded:school;path=C:/temp;create=true, sa + +firebirdsql.datetime = TIMESTAMP +postgresql.datetime = TIMESTAMP +derby.datetime = TIMESTAMP +oracle.datetime = TIMESTAMP + +test1 = org.h2.test.bench.BenchSimple +test2 = org.h2.test.bench.BenchA +test3 = org.h2.test.bench.BenchB +test4 = org.h2.test.bench.BenchC + +size = 5000 diff --git a/modules/h2/src/test/java/org/h2/test/coverage/Coverage.java b/modules/h2/src/test/java/org/h2/test/coverage/Coverage.java new file mode 100644 index 0000000000000..1bdcca1e75ad7 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/coverage/Coverage.java @@ -0,0 +1,533 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.coverage; + +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.File; +import java.io.FileReader; +import java.io.FileWriter; +import java.io.IOException; +import java.io.Reader; +import java.io.Writer; +import java.util.ArrayList; +import java.util.concurrent.TimeUnit; + +import org.h2.util.New; + +/** + * Tool to instrument java files with profiler calls. The tool can be used for + * profiling an application and for coverage testing. This class is not used at + * runtime of the tested application. 
+ */
+public class Coverage {
+    private static final String IMPORT = "import " +
+            Coverage.class.getPackage().getName() + ".Profile";
+    private final ArrayList<String> files = New.arrayList();
+    private final ArrayList<String> exclude = New.arrayList();
+    private Tokenizer tokenizer;
+    private Writer writer;
+    private Writer data;
+    private String token = "";
+    private String add = "";
+    private String file;
+    private int index;
+    private int indent;
+    private int line;
+    private String last;
+    private String word, function;
+    private boolean perClass;
+    private boolean perFunction = true;
+
+    private void printUsage() {
+        System.out
+                .println("Usage:\n" +
+                        "- copy all your source files to another directory\n" +
+                        " (be careful, they will be modified - don't take originals!)\n" +
+                        "- java " + getClass().getName() + " <directory>\n" +
+                        " this will modify the source code and create 'profile.txt'\n" +
+                        "- compile the modified source files\n" +
+                        "- run your main application\n" +
+                        "- after the application exits, a file 'notCovered.txt' is created,\n" +
+                        " which contains the class names, function names and line numbers\n" +
+                        " of code that has not been covered\n\n" +
+                        "Options:\n" + "-r recurse all subdirectories\n" +
+                        "-e exclude files\n" +
+                        "-c coverage on a per-class basis\n" +
+                        "-f coverage on a per-function basis\n" +
+                        "<dir> directory name (. for current directory)");
+    }
+
+    /**
+     * This method is called when executing this application.
+     *
+     * @param args the command line parameters
+     */
+    public static void main(String... args) {
+        new Coverage().run(args);
+    }
+
+    private void run(String...
args) { + if (args.length == 0 || args[0].equals("-?")) { + printUsage(); + return; + } + Coverage c = new Coverage(); + int recurse = 1; + for (int i = 0; i < args.length; i++) { + String s = args[i]; + if (s.equals("-r")) { + // maximum recurse is 100 subdirectories, that should be enough + recurse = 100; + } else if (s.equals("-c")) { + c.perClass = true; + } else if (s.equals("-f")) { + c.perFunction = true; + } else if (s.equals("-e")) { + c.addExclude(args[++i]); + } else { + c.addDir(s, recurse); + } + } + try { + c.data = new BufferedWriter(new FileWriter("profile.txt")); + c.processAll(); + c.data.close(); + } catch (Exception e) { + e.printStackTrace(); + } + } + + private void addExclude(String fileName) { + exclude.add(fileName); + } + + private boolean isExcluded(String s) { + for (String e : exclude) { + if (s.startsWith(e)) { + return true; + } + } + return false; + } + + private void addDir(String path, int recurse) { + File f = new File(path); + if (f.isFile() && path.endsWith(".java")) { + if (!isExcluded(path)) { + files.add(path); + } + } else if (f.isDirectory() && recurse > 0) { + for (String name : f.list()) { + addDir(path + "/" + name, recurse - 1); + } + } + } + + private void processAll() { + int len = files.size(); + long time = System.nanoTime(); + for (int i = 0; i < len; i++) { + long t2 = System.nanoTime(); + if (t2 - time > TimeUnit.SECONDS.toNanos(1) || i >= len - 1) { + System.out.println((i + 1) + " of " + len + + " " + (100 * i / len) + "%"); + time = t2; + } + String fileName = files.get(i); + processFile(fileName); + } + } + + private void processFile(String name) { + file = name; + int i; + i = file.lastIndexOf('.'); + if (i != -1) { + file = file.substring(0, i); + } + while (true) { + i = file.indexOf('/'); + if (i < 0) { + i = file.indexOf('\\'); + } + if (i < 0) { + break; + } + file = file.substring(0, i) + "." 
+ file.substring(i + 1); + } + if (name.endsWith("Coverage.java") || + name.endsWith("Tokenizer.java") || + name.endsWith("Profile.java")) { + return; + } + File f = new File(name); + File fileNew = new File(name + ".new"); + try { + writer = new BufferedWriter(new FileWriter(fileNew)); + Reader r = new BufferedReader(new FileReader(f)); + tokenizer = new Tokenizer(r); + indent = 0; + try { + process(); + } catch (Exception e) { + r.close(); + writer.close(); + e.printStackTrace(); + printError(e.getMessage()); + throw e; + } + r.close(); + writer.close(); + File backup = new File(name + ".bak"); + backup.delete(); + f.renameTo(backup); + File copy = new File(name); + fileNew.renameTo(copy); + if (perClass) { + nextDebug(); + } + } catch (Exception e) { + e.printStackTrace(); + printError(e.getMessage()); + } + } + + private void read() throws IOException { + last = token; + String write = token; + token = null; + tokenizer.initToken(); + int i = tokenizer.nextToken(); + if (i != Tokenizer.TYPE_EOF) { + token = tokenizer.getString(); + if (token == null) { + token = "" + ((char) i); + } else if (i == '\'') { + // mToken="'"+getEscape(mToken)+"'"; + token = tokenizer.getToken(); + } else if (i == '\"') { + // mToken="\""+getEscape(mToken)+"\""; + token = tokenizer.getToken(); + } else { + if (write == null) { + write = ""; + } else { + write = write + " "; + } + } + } + if (write == null || (!write.equals("else ") && + !write.equals("else") && !write.equals("super ") && + !write.equals("super") && !write.equals("this ") && + !write.equals("this") && !write.equals("} ") && + !write.equals("}"))) { + if (add != null && !add.equals("")) { + writeLine(); + write(add); + if (!perClass) { + nextDebug(); + } + } + } + add = ""; + if (write != null) { + write(write); + } + } + + private void readThis(String s) throws IOException { + if (!token.equals(s)) { + throw new IOException("Expected: " + s + " got:" + token); + } + read(); + } + + private void process() throws 
IOException { + boolean imp = false; + read(); + do { + while (true) { + if (token == null || token.equals("{")) { + break; + } else if (token.equals(";")) { + if (!imp) { + write(";" + IMPORT); + imp = true; + } + } + read(); + } + processClass(); + } while (token != null); + } + + private void processInit() throws IOException { + do { + if (token.equals("{")) { + read(); + processInit(); + } else if (token.equals("}")) { + read(); + return; + } else { + read(); + } + } while (true); + } + + private void processClass() throws IOException { + int type = 0; + while (true) { + if (token == null) { + break; + } else if (token.equals("class")) { + read(); + type = 1; + } else if (token.equals("=")) { + read(); + type = 2; + } else if (token.equals("static")) { + word = "static"; + read(); + type = 3; + } else if (token.equals("(")) { + word = last + "("; + read(); + if (!token.equals(")")) { + word = word + token; + } + type = 3; + } else if (token.equals(",")) { + read(); + word = word + "," + token; + } else if (token.equals(")")) { + word = word + ")"; + read(); + } else if (token.equals(";")) { + read(); + type = 0; + } else if (token.equals("{")) { + read(); + if (type == 1) { + processClass(); + } else if (type == 2) { + processInit(); + } else if (type == 3) { + writeLine(); + setLine(); + processFunction(); + writeLine(); + } + } else if (token.equals("}")) { + read(); + break; + } else { + read(); + } + } + } + + private void processBracket() throws IOException { + do { + if (token.equals("(")) { + read(); + processBracket(); + } else if (token.equals(")")) { + read(); + return; + } else { + read(); + } + } while (true); + } + + private void processFunction() throws IOException { + function = word; + writeLine(); + do { + processStatement(); + } while (!token.equals("}")); + read(); + writeLine(); + } + + private void processBlockOrStatement() throws IOException { + if (!token.equals("{")) { + write("{ //++"); + writeLine(); + setLine(); + processStatement(); 
+ write("} //++"); + writeLine(); + } else { + read(); + setLine(); + processFunction(); + } + } + + private void processStatement() throws IOException { + while (true) { + if (token.equals("while") || token.equals("for") || + token.equals("synchronized")) { + read(); + readThis("("); + processBracket(); + indent++; + processBlockOrStatement(); + indent--; + return; + } else if (token.equals("if")) { + read(); + readThis("("); + processBracket(); + indent++; + processBlockOrStatement(); + indent--; + if (token.equals("else")) { + read(); + indent++; + processBlockOrStatement(); + indent--; + } + return; + } else if (token.equals("try")) { + read(); + indent++; + processBlockOrStatement(); + indent--; + while (true) { + if (token.equals("catch")) { + read(); + readThis("("); + processBracket(); + indent++; + processBlockOrStatement(); + indent--; + } else if (token.equals("finally")) { + read(); + indent++; + processBlockOrStatement(); + indent--; + } else { + break; + } + } + return; + } else if (token.equals("{")) { + if (last.equals(")")) { + // process anonymous inner classes (this is a hack) + read(); + processClass(); + return; + } else if (last.equals("]")) { + // process object array initialization (another hack) + while (!token.equals("}")) { + read(); + } + read(); + return; + } + indent++; + processBlockOrStatement(); + indent--; + return; + } else if (token.equals("do")) { + read(); + indent++; + processBlockOrStatement(); + readThis("while"); + readThis("("); + processBracket(); + readThis(";"); + setLine(); + indent--; + return; + } else if (token.equals("case")) { + add = ""; + read(); + while (!token.equals(":")) { + read(); + } + read(); + setLine(); + } else if (token.equals("default")) { + add = ""; + read(); + readThis(":"); + setLine(); + } else if (token.equals("switch")) { + read(); + readThis("("); + processBracket(); + indent++; + processBlockOrStatement(); + indent--; + return; + } else if (token.equals("class")) { + read(); + 
processClass(); + return; + } else if (token.equals("(")) { + read(); + processBracket(); + } else if (token.equals("=")) { + read(); + if (token.equals("{")) { + read(); + processInit(); + } + } else if (token.equals(";")) { + read(); + setLine(); + return; + } else if (token.equals("}")) { + return; + } else { + read(); + } + } + } + + private void setLine() { + add += "Profile.visit(" + index + ");"; + line = tokenizer.getLine(); + } + + private void nextDebug() throws IOException { + if (perFunction) { + int i = function.indexOf('('); + String func = i < 0 ? function : function.substring(0, i); + String fileLine = file + "." + func + "("; + i = file.lastIndexOf('.'); + String className = i < 0 ? file : file.substring(i + 1); + fileLine += className + ".java:" + line + ")"; + data.write(fileLine + " " + last + "\r\n"); + } else { + data.write(file + " " + line + "\r\n"); + } + index++; + } + + private void writeLine() throws IOException { + write("\r\n"); + for (int i = 0; i < indent; i++) { + writer.write(' '); + } + } + + private void write(String s) throws IOException { + writer.write(s); + // System.out.print(s); + } + + private void printError(String error) { + System.out.println(""); + System.out.println("File:" + file); + System.out.println("ERROR: " + error); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/coverage/Profile.java b/modules/h2/src/test/java/org/h2/test/coverage/Profile.java new file mode 100644 index 0000000000000..edc3fd9e94dc6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/coverage/Profile.java @@ -0,0 +1,228 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.coverage; + +import java.io.BufferedWriter; +import java.io.FileReader; +import java.io.FileWriter; +import java.io.IOException; +import java.io.LineNumberReader; +import java.util.concurrent.TimeUnit; + +import org.h2.util.IOUtils; + +/** + * The class used at runtime to measure the code usage and performance. + */ +public class Profile extends Thread { + private static final boolean LIST_UNVISITED = false; + private static final boolean TRACE = false; + private static final Profile MAIN = new Profile(); + private static int top = 15; + private int[] count; + private int[] time; + private boolean stop; + private int maxIndex; + private int lastIndex; + private long lastTimeNs; + private BufferedWriter trace; + + private Profile() { + try (LineNumberReader r = new LineNumberReader(new FileReader("profile.txt"))) { + while (r.readLine() != null) { + // nothing - just count lines + } + maxIndex = r.getLineNumber(); + count = new int[maxIndex]; + time = new int[maxIndex]; + lastTimeNs = System.nanoTime(); + Runtime.getRuntime().addShutdownHook(this); + } catch (Exception e) { + e.printStackTrace(); + System.exit(1); + } + } + + static { + try { + String s = System.getProperty("profile.top"); + if (s != null) { + top = Integer.parseInt(s); + } + } catch (Throwable e) { + // ignore SecurityExceptions + } + } + + /** + * This method is called by an instrumented application whenever a line of + * code is executed. + * + * @param i the line number that is executed + */ + public static void visit(int i) { + MAIN.addVisit(i); + } + + @Override + public void run() { + list(); + } + + /** + * Start collecting data. + */ + public static void startCollecting() { + MAIN.stop = false; + MAIN.lastTimeNs = System.nanoTime(); + } + + /** + * Stop collecting data. + */ + public static void stopCollecting() { + MAIN.stop = true; + } + + /** + * List all captured data. 
+ */ + public static void list() { + if (MAIN.lastIndex == 0) { + // don't list anything if no statistics collected + return; + } + try { + MAIN.listUnvisited(); + MAIN.listTop("MOST CALLED", MAIN.count, top); + MAIN.listTop("MOST TIME USED", MAIN.time, top); + } catch (Exception e) { + e.printStackTrace(); + } + } + + private void addVisit(int i) { + if (stop) { + return; + } + long now = System.nanoTime(); + if (TRACE) { + if (trace != null) { + long duration = TimeUnit.NANOSECONDS.toMillis(now - lastTimeNs); + try { + trace.write(i + "\t" + duration + "\r\n"); + } catch (Exception e) { + e.printStackTrace(); + System.exit(1); + } + } + } + count[i]++; + time[lastIndex] += (int) TimeUnit.NANOSECONDS.toMillis(now - lastTimeNs); + lastTimeNs = now; + lastIndex = i; + } + + private void listUnvisited() throws IOException { + printLine('='); + print("NOT COVERED"); + printLine('-'); + LineNumberReader r = null; + BufferedWriter writer = null; + try { + r = new LineNumberReader(new FileReader("profile.txt")); + writer = new BufferedWriter(new FileWriter("notCovered.txt")); + int unvisited = 0; + int unvisitedThrow = 0; + for (int i = 0; i < maxIndex; i++) { + String line = r.readLine(); + if (count[i] == 0) { + if (!line.endsWith("throw")) { + writer.write(line + "\r\n"); + if (LIST_UNVISITED) { + print(line + "\r\n"); + } + unvisited++; + } else { + unvisitedThrow++; + } + } + } + int percent = 100 * unvisited / maxIndex; + print("Not covered: " + percent + " % " + " (" + + unvisited + " of " + maxIndex + "; throw=" + + unvisitedThrow + ")"); + } finally { + IOUtils.closeSilently(writer); + IOUtils.closeSilently(r); + } + } + + private void listTop(String title, int[] list, int max) throws IOException { + printLine('-'); + int total = 0; + int totalLines = 0; + for (int j = 0; j < maxIndex; j++) { + int l = list[j]; + if (l > 0) { + total += list[j]; + totalLines++; + } + } + if (max == 0) { + max = totalLines; + } + print(title); + print("Total: " + total); + 
printLine('-'); + String[] text = new String[max]; + int[] index = new int[max]; + for (int i = 0; i < max; i++) { + int big = list[0]; + int bigIndex = 0; + for (int j = 1; j < maxIndex; j++) { + int l = list[j]; + if (l > big) { + big = l; + bigIndex = j; + } + } + list[bigIndex] = -(big + 1); + index[i] = bigIndex; + } + + try (LineNumberReader r = new LineNumberReader(new FileReader("profile.txt"))) { + for (int i = 0; i < maxIndex; i++) { + String line = r.readLine(); + int k = list[i]; + if (k < 0) { + k = -(k + 1); + list[i] = k; + for (int j = 0; j < max; j++) { + if (index[j] == i) { + int percent = 100 * k / total; + text[j] = k + " " + percent + "%: " + line; + } + } + } + } + for (int i = 0; i < max; i++) { + print(text[i]); + } + } + } + + private static void print(String s) { + System.out.println(s); + } + + private static void printLine(char c) { + for (int i = 0; i < 60; i++) { + System.out.print(c); + } + print(""); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/coverage/Tokenizer.java b/modules/h2/src/test/java/org/h2/test/coverage/Tokenizer.java new file mode 100644 index 0000000000000..2de4e543818c3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/coverage/Tokenizer.java @@ -0,0 +1,277 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.coverage; + +import java.io.EOFException; +import java.io.IOException; +import java.io.Reader; +import java.util.Arrays; + +/** + * Helper class for the java file parser. + */ +public class Tokenizer { + + /** + * This token type means no more tokens are available. 
+ */ + static final int TYPE_EOF = -1; + + private static final int TYPE_WORD = -2; + private static final int TYPE_NOTHING = -3; + private static final byte WHITESPACE = 1; + private static final byte ALPHA = 4; + private static final byte QUOTE = 8; + + private StringBuilder buffer; + + private Reader reader; + + private char[] chars = new char[20]; + private int peekChar; + private int line = 1; + + private byte[] charTypes = new byte[256]; + + private int type = TYPE_NOTHING; + private String value; + + private Tokenizer() { + wordChars('a', 'z'); + wordChars('A', 'Z'); + wordChars('0', '9'); + wordChars('.', '.'); + wordChars('+', '+'); + wordChars('-', '-'); + wordChars('_', '_'); + wordChars(128 + 32, 255); + whitespaceChars(0, ' '); + charTypes['"'] = QUOTE; + charTypes['\''] = QUOTE; + } + + Tokenizer(Reader r) { + this(); + reader = r; + } + + String getString() { + return value; + } + + private void wordChars(int low, int hi) { + while (low <= hi) { + charTypes[low++] |= ALPHA; + } + } + + private void whitespaceChars(int low, int hi) { + while (low <= hi) { + charTypes[low++] = WHITESPACE; + } + } + + private int read() throws IOException { + int i = reader.read(); + if (i != -1) { + append(i); + } + return i; + } + + /** + * Initialize the tokenizer. + */ + void initToken() { + buffer = new StringBuilder(); + } + + String getToken() { + buffer.setLength(buffer.length() - 1); + return buffer.toString(); + } + + private void append(int i) { + buffer.append((char) i); + } + + /** + * Read the next token and get the token type. + * + * @return the token type + */ + int nextToken() throws IOException { + byte[] ct = charTypes; + int c; + value = null; + + if (type == TYPE_NOTHING) { + c = read(); + if (c >= 0) { + type = c; + } + } else { + c = peekChar; + if (c < 0) { + try { + c = read(); + if (c >= 0) { + type = c; + } + } catch (EOFException e) { + c = -1; + } + } + } + + if (c < 0) { + return type = TYPE_EOF; + } + int charType = c < 256 ? 
ct[c] : ALPHA; + while ((charType & WHITESPACE) != 0) { + if (c == '\r') { + line++; + c = read(); + if (c == '\n') { + c = read(); + } + } else { + if (c == '\n') { + line++; + } + c = read(); + } + if (c < 0) { + return type = TYPE_EOF; + } + charType = c < 256 ? ct[c] : ALPHA; + } + if ((charType & ALPHA) != 0) { + initToken(); + append(c); + int i = 0; + do { + if (i >= chars.length) { + chars = Arrays.copyOf(chars, chars.length * 2); + } + chars[i++] = (char) c; + c = read(); + charType = c < 0 ? WHITESPACE : c < 256 ? ct[c] : ALPHA; + } while ((charType & ALPHA) != 0); + peekChar = c; + value = String.copyValueOf(chars, 0, i); + return type = TYPE_WORD; + } + if ((charType & QUOTE) != 0) { + initToken(); + append(c); + type = c; + int i = 0; + // \octal needs a lookahead + peekChar = read(); + while (peekChar >= 0 && peekChar != type && peekChar != '\n' + && peekChar != '\r') { + if (peekChar == '\\') { + c = read(); + // to allow \377, but not \477 + int first = c; + if (c >= '0' && c <= '7') { + c = c - '0'; + int c2 = read(); + if ('0' <= c2 && c2 <= '7') { + c = (c << 3) + (c2 - '0'); + c2 = read(); + if ('0' <= c2 && c2 <= '7' && first <= '3') { + c = (c << 3) + (c2 - '0'); + peekChar = read(); + } else { + peekChar = c2; + } + } else { + peekChar = c2; + } + } else { + switch (c) { + case 'b': + c = '\b'; + break; + case 'f': + c = '\f'; + break; + case 'n': + c = '\n'; + break; + case 'r': + c = '\r'; + break; + case 't': + c = '\t'; + break; + default: + } + peekChar = read(); + } + } else { + c = peekChar; + peekChar = read(); + } + + if (i >= chars.length) { + chars = Arrays.copyOf(chars, chars.length * 2); + } + chars[i++] = (char) c; + } + if (peekChar == type) { + // keep \n or \r intact in peekChar + peekChar = read(); + } + value = String.copyValueOf(chars, 0, i); + return type; + } + if (c == '/') { + c = read(); + if (c == '*') { + int prevChar = 0; + while ((c = read()) != '/' || prevChar != '*') { + if (c == '\r') { + line++; + c = read(); 
+ if (c == '\n') { + c = read(); + } + } else { + if (c == '\n') { + line++; + c = read(); + } + } + if (c < 0) { + return type = TYPE_EOF; + } + prevChar = c; + } + peekChar = read(); + return nextToken(); + } else if (c == '/') { + while ((c = read()) != '\n' && c != '\r' && c >= 0) { + // nothing + } + peekChar = c; + return nextToken(); + } else { + peekChar = c; + return type = '/'; + } + } + peekChar = read(); + return type = c; + } + + int getLine() { + return line; + } +} + diff --git a/modules/h2/src/test/java/org/h2/test/coverage/package.html b/modules/h2/src/test/java/org/h2/test/coverage/package.html new file mode 100644 index 0000000000000..25c4c740f99ba --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/coverage/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A standalone code coverage tool. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/db/AbstractBaseForCommonTableExpressions.java b/modules/h2/src/test/java/org/h2/test/db/AbstractBaseForCommonTableExpressions.java new file mode 100644 index 0000000000000..66143a07e8966 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/AbstractBaseForCommonTableExpressions.java @@ -0,0 +1,89 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; + +/** + * Base class for common table expression tests + */ +public abstract class AbstractBaseForCommonTableExpressions extends TestBase { + + /** + * Test a query. + * + * @param maxRetries the number of times the query is run + * @param expectedRowData the expected result data + * @param expectedColumnNames the expected columns of the result + * @param expectedNumberOfRows the expected number of rows + * @param setupSQL the SQL statement used for setup + * @param withQuery the query + * @param closeAndReopenDatabaseConnectionOnIteration whether the connection + * should be re-opened each time + * @param expectedColumnTypes the expected datatypes of the result + */ + void testRepeatedQueryWithSetup(int maxRetries, String[] expectedRowData, String[] expectedColumnNames, + int expectedNumberOfRows, String setupSQL, String withQuery, + int closeAndReopenDatabaseConnectionOnIteration, String[] expectedColumnTypes) throws SQLException { + + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + PreparedStatement prep; + ResultSet rs; + + for (int queryRunTries = 1; queryRunTries <= maxRetries; queryRunTries++) { + + Statement stat = 
conn.createStatement(); + stat.execute(setupSQL); + stat.close(); + + // close and re-open the connection on one iteration to make sure the + // queries work across connections + if (queryRunTries == closeAndReopenDatabaseConnectionOnIteration) { + conn.close(); + + conn = getConnection("commonTableExpressionQueries"); + } + prep = conn.prepareStatement(withQuery); + + rs = prep.executeQuery(); + for (int columnIndex = 1; columnIndex <= rs.getMetaData().getColumnCount(); columnIndex++) { + + assertTrue(rs.getMetaData().getColumnLabel(columnIndex) != null); + assertEquals(expectedColumnNames[columnIndex - 1], rs.getMetaData().getColumnLabel(columnIndex)); + assertEquals( + "wrong type of column " + rs.getMetaData().getColumnLabel(columnIndex) + " on iteration #" + + queryRunTries, + expectedColumnTypes[columnIndex - 1], rs.getMetaData().getColumnTypeName(columnIndex)); + } + + int rowNdx = 0; + while (rs.next()) { + StringBuilder buf = new StringBuilder(); + for (int columnIndex = 1; columnIndex <= rs.getMetaData().getColumnCount(); columnIndex++) { + buf.append("|" + rs.getString(columnIndex)); + } + assertEquals(expectedRowData[rowNdx], buf.toString()); + rowNdx++; + } + + assertEquals(expectedNumberOfRows, rowNdx); + + rs.close(); + prep.close(); + } + + conn.close(); + deleteDb("commonTableExpressionQueries"); + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/Db.java b/modules/h2/src/test/java/org/h2/test/db/Db.java new file mode 100644 index 0000000000000..8b3f2f220bef9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/Db.java @@ -0,0 +1,252 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.InputStream; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * A simple wrapper around the JDBC API. + * Currently used for testing. + * Features: + *
      + *
+ * <ul> + * <li>No checked exceptions + * </li><li>Easy to use, fluent API + * </li></ul> + */ +public class Db { + + private Connection conn; + private Statement stat; + private final HashMap<String, PreparedStatement> prepared = + new HashMap<>(); + + /** + * Create a database object using the given connection. + * + * @param conn the database connection + */ + public Db(Connection conn) { + try { + this.conn = conn; + stat = conn.createStatement(); + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Prepare a SQL statement; statements are cached and re-used per SQL string. + * + * @param sql the SQL statement + * @return the prepared statement + */ + public Prepared prepare(String sql) { + try { + PreparedStatement prep = prepared.get(sql); + if (prep == null) { + prep = conn.prepareStatement(sql); + prepared.put(sql, prep); + } + return new Prepared(prep); + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Execute a SQL statement. + * + * @param sql the SQL statement + */ + public void execute(String sql) { + try { + stat.execute(sql); + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Read a result set. + * + * @param rs the result set + * @return a list of maps + */ + static List<Map<String, Object>> query(ResultSet rs) throws SQLException { + List<Map<String, Object>> list = new ArrayList<>(); + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + while (rs.next()) { + HashMap<String, Object> map = new HashMap<>(); + for (int i = 0; i < columnCount; i++) { + map.put(meta.getColumnLabel(i + 1), rs.getObject(i + 1)); + } + list.add(map); + } + return list; + } + + /** + * Execute a SQL query. + * + * @param sql the SQL statement + * @return a list of maps + */ + public List<Map<String, Object>> query(String sql) { + try { + return query(stat.executeQuery(sql)); + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Close the database connection. + */ + public void close() { + try { + conn.close(); + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * This class represents a prepared statement.
+ */ + public static class Prepared { + private final PreparedStatement prep; + private int index; + + Prepared(PreparedStatement prep) { + this.prep = prep; + } + + /** + * Set the value of the current parameter. + * + * @param x the value + * @return itself + */ + public Prepared set(int x) { + try { + prep.setInt(++index, x); + return this; + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Set the value of the current parameter. + * + * @param x the value + * @return itself + */ + public Prepared set(String x) { + try { + prep.setString(++index, x); + return this; + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Set the value of the current parameter. + * + * @param x the value + * @return itself + */ + public Prepared set(byte[] x) { + try { + prep.setBytes(++index, x); + return this; + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Set the value of the current parameter. + * + * @param x the value + * @return itself + */ + public Prepared set(InputStream x) { + try { + prep.setBinaryStream(++index, x, -1); + return this; + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Execute the prepared statement. + */ + public void execute() { + try { + prep.execute(); + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Execute the prepared query. + * + * @return the result list + */ + public List<Map<String, Object>> query() { + try { + return Db.query(prep.executeQuery()); + } catch (SQLException e) { + throw convert(e); + } + } + } + + /** + * Convert a checked exception to a runtime exception. + * + * @param e the checked exception + * @return the runtime exception + */ + static RuntimeException convert(Exception e) { + return new RuntimeException(e.toString(), e); + } + + /** + * Set the auto-commit mode of the connection. + * + * @param autoCommit the new auto-commit mode + */ + public void setAutoCommit(boolean autoCommit) { + try { + conn.setAutoCommit(autoCommit); + } catch (SQLException e) { + throw convert(e); + } + } + + /** + * Commit a pending transaction.
+ */ + public void commit() { + try { + conn.commit(); + } catch (SQLException e) { + throw convert(e); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TaskDef.java b/modules/h2/src/test/java/org/h2/test/db/TaskDef.java new file mode 100644 index 0000000000000..15e94f20029a5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TaskDef.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStreamReader; +import java.util.Arrays; + +import org.h2.test.utils.SelfDestructor; + +/** + * A task that can be run as a separate process. + */ +public abstract class TaskDef { + + /** + * Run the class. This method is called by the task framework, and should + * not be called directly from the application. + * + * @param args the command line arguments + */ + public static void main(String... args) { + SelfDestructor.startCountdown(60); + TaskDef task; + try { + String className = args[0]; + task = (TaskDef) Class.forName(className).newInstance(); + System.out.println("running"); + } catch (Throwable t) { + System.out.println("init error: " + t); + t.printStackTrace(); + return; + } + try { + // args[0] is the task class name, so forward only the remaining arguments + task.run(Arrays.copyOfRange(args, 1, args.length)); + } catch (Throwable t) { + System.out.println("error: " + t); + t.printStackTrace(); + } + } + + /** + * Run the task. + * + * @param args the command line arguments + */ + abstract void run(String... args) throws Exception; + + /** + * Receive a message from the parent process over the standard input.
+ * + * @return the message + */ + protected String receive() { + try { + return new BufferedReader(new InputStreamReader(System.in)).readLine(); + } catch (IOException e) { + throw new RuntimeException("Error reading from input", e); + } + } + + /** + * Send a message to the parent process over the standard output. + * + * @param message the message + */ + protected void send(String message) { + System.out.println(message); + System.out.flush(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TaskProcess.java b/modules/h2/src/test/java/org/h2/test/db/TaskProcess.java new file mode 100644 index 0000000000000..fb425c7a0bb66 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TaskProcess.java @@ -0,0 +1,130 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.OutputStreamWriter; +import java.util.ArrayList; +import java.util.Arrays; +import org.h2.test.utils.SelfDestructor; +import org.h2.util.StringUtils; +import org.h2.util.Task; + +/** + * A task that is run as an external process. This class communicates over + * standard input / output with the process. The standard error stream of the + * process is sent directly to the standard error stream of this process. + */ +public class TaskProcess { + private final TaskDef taskDef; + private Process process; + private BufferedReader reader; + private BufferedWriter writer; + + /** + * Construct a new task process. The process is not started yet. + * + * @param taskDef the task + */ + public TaskProcess(TaskDef taskDef) { + this.taskDef = taskDef; + } + + /** + * Start the task with the given arguments.
+ * + * @param args the arguments, or null + */ + public void start(String... args) { + try { + String selfDestruct = SelfDestructor.getPropertyString(60); + ArrayList<String> list = new ArrayList<>(); + list.add("java"); + list.add(selfDestruct); + list.add("-cp"); + list.add("bin" + File.pathSeparator + "."); + list.add(TaskDef.class.getName()); + list.add(taskDef.getClass().getName()); + if (args != null && args.length > 0) { + list.addAll(Arrays.asList(args)); + } + String[] procDef = list.toArray(new String[0]); + process = Runtime.getRuntime().exec(procDef); + copyInThread(process.getErrorStream(), System.err); + reader = new BufferedReader(new InputStreamReader(process.getInputStream())); + writer = new BufferedWriter(new OutputStreamWriter(process.getOutputStream())); + String line = reader.readLine(); + if (line == null) { + throw new RuntimeException( + "No reply from process, command: " + + StringUtils.arrayCombine(procDef, ' ')); + } else if (line.startsWith("running")) { + // ok, the task started successfully + } else if (line.startsWith("init error")) { + throw new RuntimeException(line); + } + } catch (Throwable t) { + throw new RuntimeException("Error starting task", t); + } + } + + private static void copyInThread(final InputStream in, final OutputStream out) { + new Task() { + @Override + public void call() throws IOException { + while (true) { + int x = in.read(); + if (x < 0) { + return; + } + if (out != null) { + out.write(x); + } + } + } + }.execute(); + } + + /** + * Receive a message from the process over the standard output. + * + * @return the message + */ + public String receive() { + try { + return reader.readLine(); + } catch (IOException e) { + throw new RuntimeException("Error reading", e); + } + } + + /** + * Send a message to the process over the standard input.
+ * + * @param message the message + */ + public void send(String message) { + try { + writer.write(message + "\n"); + writer.flush(); + } catch (IOException e) { + throw new RuntimeException("Error writing " + message, e); + } + } + + /** + * Kill the process if it still runs. + */ + public void destroy() { + process.destroy(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestAlter.java b/modules/h2/src/test/java/org/h2/test/db/TestAlter.java new file mode 100644 index 0000000000000..43fbc9d841667 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestAlter.java @@ -0,0 +1,333 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Test ALTER statements. + */ +public class TestAlter extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb(getTestName()); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + testAlterTableRenameConstraint(); + testAlterTableAlterColumnAsSelfColumn(); + testAlterTableDropColumnWithReferences(); + testAlterTableDropMultipleColumns(); + testAlterTableAlterColumnWithConstraint(); + testAlterTableAlterColumn(); + testAlterTableAddColumnIdentity(); + testAlterTableDropIdentityColumn(); + testAlterTableAddColumnIfNotExists(); + testAlterTableAddMultipleColumns(); + testAlterTableAlterColumn2(); + testAlterTableAddColumnBefore(); + testAlterTableAddColumnAfter(); + testAlterTableAddMultipleColumnsBefore(); + testAlterTableAddMultipleColumnsAfter(); + testAlterTableModifyColumn(); + testAlterTableModifyColumnSetNull(); + testAlterTableModifyColumnNotNullOracle(); + conn.close(); + deleteDb(getTestName()); + } + + private void testAlterTableAlterColumnAsSelfColumn() throws SQLException { + stat.execute("create table test(id int, name varchar)"); + stat.execute("alter table test alter column id int as id+1"); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("update test set name='World'"); + ResultSet rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(3, rs.getInt(1)); + stat.execute("drop table test"); + } + + private void testAlterTableDropColumnWithReferences() throws SQLException { + stat.execute("create table parent(id int, b int)"); + stat.execute("create table child(p int primary key)"); + stat.execute("alter table child add foreign key(p) references parent(id)"); + stat.execute("alter table parent drop column id"); + stat.execute("drop table parent"); + stat.execute("drop table child"); + + stat.execute("create table test(id int, name varchar(255))"); + stat.execute("alter table test add constraint x check (id > name)"); + + // the constraint references multiple columns + 
assertThrows(ErrorCode.COLUMN_IS_REFERENCED_1, stat). + execute("alter table test drop column id"); + + stat.execute("drop table test"); + + stat.execute("create table test(id int, name varchar(255))"); + stat.execute("alter table test add constraint x unique(id, name)"); + + // the constraint references multiple columns + assertThrows(ErrorCode.COLUMN_IS_REFERENCED_1, stat). + execute("alter table test drop column id"); + + stat.execute("drop table test"); + + stat.execute("create table test(id int, name varchar(255))"); + stat.execute("alter table test add constraint x check (id > 1)"); + stat.execute("alter table test drop column id"); + stat.execute("drop table test"); + + stat.execute("create table test(id int, name varchar(255))"); + stat.execute("alter table test add constraint x check (name > 'TEST.ID')"); + // previous versions of H2 used sql.indexOf(columnName) + // to check if the column is referenced + stat.execute("alter table test drop column id"); + stat.execute("drop table test"); + + stat.execute("create table test(id int, name varchar(255))"); + stat.execute("alter table test add constraint x unique(id)"); + stat.execute("alter table test drop column id"); + stat.execute("drop table test"); + + } + + private void testAlterTableDropMultipleColumns() throws SQLException { + stat.execute("create table test(id int, b varchar, c int, d int)"); + stat.execute("alter table test drop column b, c"); + stat.execute("alter table test drop d"); + stat.execute("drop table test"); + // Test-Case: Same as above but using brackets (Oracle style) + stat.execute("create table test(id int, b varchar, c int, d int)"); + stat.execute("alter table test drop column (b, c)"); + assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, stat). 
+ execute("alter table test drop column b"); + stat.execute("alter table test drop (d)"); + stat.execute("drop table test"); + // Test-Case: Error if dropping all columns + stat.execute("create table test(id int, name varchar, name2 varchar)"); + assertThrows(ErrorCode.CANNOT_DROP_LAST_COLUMN, stat). + execute("alter table test drop column id, name, name2"); + stat.execute("drop table test"); + } + + /** + * Tests a bug we used to have where altering the name of a column that had + * a check constraint that referenced itself would result in not being able + * to re-open the DB. + */ + private void testAlterTableAlterColumnWithConstraint() throws SQLException { + if (config.memory) { + return; + } + stat.execute("create table test(id int check(id in (1,2)) )"); + stat.execute("alter table test alter id rename to id2"); + // disconnect and reconnect + conn.close(); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("insert into test values(1)"); + assertThrows(ErrorCode.CHECK_CONSTRAINT_VIOLATED_1, stat). 
+ execute("insert into test values(3)"); + stat.execute("drop table test"); + } + + private void testAlterTableRenameConstraint() throws SQLException { + stat.execute("create table test(id int, name varchar(255))"); + stat.execute("alter table test add constraint x check (id > name)"); + stat.execute("alter table test rename constraint x to x2"); + stat.execute("drop table test"); + } + + private void testAlterTableDropIdentityColumn() throws SQLException { + stat.execute("create table test(id int auto_increment, name varchar)"); + stat.execute("alter table test drop column id"); + ResultSet rs = stat.executeQuery("select * from INFORMATION_SCHEMA.SEQUENCES"); + assertFalse(rs.next()); + stat.execute("drop table test"); + + stat.execute("create table test(id int auto_increment, name varchar)"); + stat.execute("alter table test drop column name"); + rs = stat.executeQuery("select * from INFORMATION_SCHEMA.SEQUENCES"); + assertTrue(rs.next()); + stat.execute("drop table test"); + } + + private void testAlterTableAlterColumn() throws SQLException { + stat.execute("create table t(x varchar) as select 'x'"); + assertThrows(ErrorCode.DATA_CONVERSION_ERROR_1, stat). + execute("alter table t alter column x int"); + stat.execute("drop table t"); + stat.execute("create table t(id identity, x varchar) as select null, 'x'"); + assertThrows(ErrorCode.DATA_CONVERSION_ERROR_1, stat). 
+ execute("alter table t alter column x int"); + stat.execute("drop table t"); + } + + private void testAlterTableAddColumnIdentity() throws SQLException { + stat.execute("create table t(x varchar)"); + stat.execute("alter table t add id bigint identity(5, 5) not null"); + stat.execute("insert into t values (null, null)"); + stat.execute("insert into t values (null, null)"); + ResultSet rs = stat.executeQuery("select id from t order by id"); + assertTrue(rs.next()); + assertEquals(5, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(10, rs.getInt(1)); + assertFalse(rs.next()); + stat.execute("drop table t"); + } + + private void testAlterTableAddColumnIfNotExists() throws SQLException { + stat.execute("create table t(x varchar) as select 'x'"); + stat.execute("alter table t add if not exists x int"); + stat.execute("drop table t"); + stat.execute("create table t(x varchar) as select 'x'"); + stat.execute("alter table t add if not exists y int"); + stat.execute("select x, y from t"); + stat.execute("drop table t"); + } + + private void testAlterTableAddMultipleColumns() throws SQLException { + stat.execute("create table t(x varchar) as select 'x'"); + stat.execute("alter table t add (y int, z varchar)"); + stat.execute("drop table t"); + stat.execute("create table t(x varchar) as select 'x'"); + stat.execute("alter table t add (y int)"); + stat.execute("drop table t"); + } + + + + // column and field names must be upper-case due to getMetaData sensitivity + private void testAlterTableAddMultipleColumnsBefore() throws SQLException { + stat.execute("create table T(X varchar)"); + stat.execute("alter table T add (Y int, Z int) before X"); + DatabaseMetaData dbMeta = conn.getMetaData(); + ResultSet rs = dbMeta.getColumns(null, null, "T", null); + assertTrue(rs.next()); + assertEquals("Y", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + assertEquals("Z", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + assertEquals("X", rs.getString("COLUMN_NAME")); 
+ assertFalse(rs.next()); + stat.execute("drop table T"); + } + + // column and field names must be upper-case due to getMetaData sensitivity + private void testAlterTableAddMultipleColumnsAfter() throws SQLException { + stat.execute("create table T(X varchar)"); + stat.execute("alter table T add (Y int, Z int) after X"); + DatabaseMetaData dbMeta = conn.getMetaData(); + ResultSet rs = dbMeta.getColumns(null, null, "T", null); + assertTrue(rs.next()); + assertEquals("X", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + assertEquals("Y", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + assertEquals("Z", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + stat.execute("drop table T"); + } + + // column and field names must be upper-case due to getMetaData sensitivity + private void testAlterTableAddColumnBefore() throws SQLException { + stat.execute("create table T(X varchar)"); + stat.execute("alter table T add Y int before X"); + DatabaseMetaData dbMeta = conn.getMetaData(); + ResultSet rs = dbMeta.getColumns(null, null, "T", null); + assertTrue(rs.next()); + assertEquals("Y", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + assertEquals("X", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + stat.execute("drop table T"); + } + + // column and field names must be upper-case due to getMetaData sensitivity + private void testAlterTableAddColumnAfter() throws SQLException { + stat.execute("create table T(X varchar)"); + stat.execute("alter table T add Y int after X"); + DatabaseMetaData dbMeta = conn.getMetaData(); + ResultSet rs = dbMeta.getColumns(null, null, "T", null); + assertTrue(rs.next()); + assertEquals("X", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + assertEquals("Y", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + stat.execute("drop table T"); + } + + private void testAlterTableAlterColumn2() throws SQLException { + // ensure that increasing a VARCHAR columns length takes effect because + // we 
optimize this case + stat.execute("create table t(x varchar(2)) as select 'x'"); + stat.execute("alter table t alter column x varchar(20)"); + stat.execute("insert into t values('Hello')"); + stat.execute("drop table t"); + } + + private void testAlterTableModifyColumn() throws SQLException { + stat.execute("create table t(x int)"); + stat.execute("alter table t modify column x varchar(20)"); + stat.execute("insert into t values('Hello')"); + stat.execute("drop table t"); + } + + /** + * Test for fix "Change not-null / null -constraint to existing column" + * (MySql/ORACLE - SQL style) that failed silently corrupting the changed + * column.
    + * Before the change (added after v1.4.196) following was observed: + * <pre>
    +     *  alter table T modify C int null; -- Worked as expected
    +     *  alter table T modify C null;     -- Silently corrupted column C
    +     * </pre>
    + */ + private void testAlterTableModifyColumnSetNull() throws SQLException { + // This worked in v1.4.196 + stat.execute("create table T (C varchar not null)"); + stat.execute("alter table T modify C int null"); + stat.execute("insert into T values(null)"); + stat.execute("drop table T"); + // This failed in v1.4.196 + stat.execute("create table T (C int not null)"); + stat.execute("alter table T modify C null"); // Silently corrupted column C + stat.execute("insert into T values(null)"); // <- Fixed in v1.4.196 - NULL is allowed + stat.execute("drop table T"); + } + + private void testAlterTableModifyColumnNotNullOracle() throws SQLException { + stat.execute("create table foo (bar varchar(255))"); + stat.execute("alter table foo modify (bar varchar(255) not null)"); + try { + stat.execute("insert into foo values(null)"); + fail("Null should not be allowed after modification."); + } + catch(SQLException e) { + // This is what we expect, fails to insert null. + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestAlterSchemaRename.java b/modules/h2/src/test/java/org/h2/test/db/TestAlterSchemaRename.java new file mode 100644 index 0000000000000..0ee52f99e4726 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestAlterSchemaRename.java @@ -0,0 +1,126 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +/** + * Test ALTER SCHEMA RENAME statements. + */ +public class TestAlterSchemaRename extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb(getTestName()); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + testTryToRenameSystemSchemas(); + testSimpleRename(); + testRenameToExistingSchema(); + testCrossSchemaViews(); + testAlias(); + conn.close(); + deleteDb(getTestName()); + } + + private void testTryToRenameSystemSchemas() throws SQLException { + assertThrows(ErrorCode.SCHEMA_CAN_NOT_BE_DROPPED_1, stat). + execute("alter schema information_schema rename to test_info"); + stat.execute("create sequence test_sequence"); + assertThrows(ErrorCode.SCHEMA_CAN_NOT_BE_DROPPED_1, stat). + execute("alter schema public rename to test_schema"); + } + + private void testSimpleRename() throws SQLException { + stat.execute("create schema s1"); + stat.execute("create table s1.tab(val int)"); + stat.execute("insert into s1.tab(val) values (3)"); + ResultSet rs = stat.executeQuery("select * from s1.tab"); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + stat.execute("alter schema s1 rename to s2"); + rs = stat.executeQuery("select * from s2.tab"); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + stat.execute("drop schema s2 cascade"); + } + + + private void testRenameToExistingSchema() throws SQLException { + stat.execute("create schema s1"); + stat.execute("create schema s2"); + assertThrows(ErrorCode.SCHEMA_ALREADY_EXISTS_1, stat). 
+ execute("alter schema s1 rename to s2"); + stat.execute("drop schema s1"); + stat.execute("drop schema s2"); + } + + + private void testCrossSchemaViews() throws SQLException { + stat.execute("create schema s1"); + stat.execute("create schema s2"); + stat.execute("create table s1.tab(val int)"); + stat.execute("insert into s1.tab(val) values (3)"); + stat.execute("create view s1.v1 as select * from s1.tab"); + stat.execute("create view s2.v1 as select val * 2 from s1.tab"); + stat.execute("alter schema s2 rename to s2_new"); + ResultSet rs = stat.executeQuery("select * from s1.v1"); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + rs = stat.executeQuery("select * from s2_new.v1"); + assertTrue(rs.next()); + assertEquals(6, rs.getInt(1)); + if (!config.memory) { + conn.close(); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.executeQuery("select * from s2_new.v1"); + } + stat.execute("drop schema s1 cascade"); + stat.execute("drop schema s2_new cascade"); + } + + /** + * Check that aliases in the schema got moved + */ + private void testAlias() throws SQLException { + stat.execute("create schema s1"); + stat.execute("CREATE ALIAS S1.REVERSE AS $$ " + + "String reverse(String s) {" + + " return new StringBuilder(s).reverse().toString();" + + "} $$;"); + stat.execute("alter schema s1 rename to s2"); + ResultSet rs = stat.executeQuery("CALL S2.REVERSE('1234')"); + assertTrue(rs.next()); + assertEquals("4321", rs.getString(1)); + if (!config.memory) { + conn.close(); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.executeQuery("CALL S2.REVERSE('1234')"); + } + stat.execute("drop schema s2 cascade"); + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/db/TestAutoRecompile.java b/modules/h2/src/test/java/org/h2/test/db/TestAutoRecompile.java new file mode 100644 index 0000000000000..2be6d017bda49 --- /dev/null +++ 
b/modules/h2/src/test/java/org/h2/test/db/TestAutoRecompile.java @@ -0,0 +1,53 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests if prepared statements are re-compiled when required. + */ +public class TestAutoRecompile extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("autoRecompile"); + Connection conn = getConnection("autoRecompile"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)"); + PreparedStatement prep = conn.prepareStatement("SELECT * FROM TEST"); + assertEquals(1, prep.executeQuery().getMetaData().getColumnCount()); + stat.execute("ALTER TABLE TEST ADD COLUMN NAME VARCHAR(255)"); + assertEquals(2, prep.executeQuery().getMetaData().getColumnCount()); + stat.execute("DROP TABLE TEST"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, X INT, Y INT)"); + assertEquals(3, prep.executeQuery().getMetaData().getColumnCount()); + // TODO test auto-recompile with insert..select, views and so on + + prep = conn.prepareStatement("INSERT INTO TEST VALUES(1, 2, 3)"); + stat.execute("ALTER TABLE TEST ADD COLUMN Z INT"); + assertThrows(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH, prep).execute(); + assertThrows(ErrorCode.COLUMN_COUNT_DOES_NOT_MATCH, prep).execute(); + conn.close(); + deleteDb("autoRecompile"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestBackup.java b/modules/h2/src/test/java/org/h2/test/db/TestBackup.java new file mode 100644 index 
0000000000000..cb3b50d1b42dd --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestBackup.java @@ -0,0 +1,209 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; + +import org.h2.api.DatabaseEventListener; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Backup; +import org.h2.tools.Restore; +import org.h2.util.Task; + +/** + * Test for the BACKUP SQL statement. + */ +public class TestBackup extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.memory) { + return; + } + testConcurrentBackup(); + testBackupRestoreLobStatement(); + testBackupRestoreLob(); + testBackup(); + deleteDb("backup"); + FileUtils.delete(getBaseDir() + "/backup.zip"); + } + + private void testConcurrentBackup() throws SQLException { + if (config.networked || !config.big) { + return; + } + deleteDb("backup"); + String url = getURL("backup;multi_threaded=true", true); + Connection conn = getConnection(url); + final Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test select x, 'Hello' from system_range(1, 2)"); + conn.setAutoCommit(false); + Connection conn1; + conn1 = getConnection(url); + final AtomicLong updateEnd = new AtomicLong(); + final Statement stat1 = conn.createStatement(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + if (System.nanoTime() < 
updateEnd.get()) { + stat.execute("update test set name = 'Hallo'"); + stat1.execute("checkpoint"); + stat.execute("update test set name = 'Hello'"); + stat.execute("commit"); + stat.execute("checkpoint"); + } else { + Thread.sleep(10); + } + } + } + }; + Connection conn2; + conn2 = getConnection(url + ";database_event_listener='" + + BackupListener.class.getName() + "'"); + Statement stat2 = conn2.createStatement(); + task.execute(); + for (int i = 0; i < 10; i++) { + updateEnd.set(System.nanoTime() + TimeUnit.SECONDS.toNanos(2)); + stat2.execute("backup to '"+getBaseDir()+"/backup.zip'"); + stat2.execute("checkpoint"); + Restore.execute(getBaseDir() + "/backup.zip", getBaseDir() + "/t" + i, "backup"); + Connection conn3 = getConnection("t" + i + "/backup"); + Statement stat3 = conn3.createStatement(); + stat3.execute("script"); + ResultSet rs = stat3.executeQuery( + "select * from test where name='Hallo'"); + while (rs.next()) { + fail(); + } + conn3.close(); + } + task.get(); + conn2.close(); + conn.close(); + conn1.close(); + } + + /** + * A backup listener to test concurrent backup. 
+ */ + public static class BackupListener implements DatabaseEventListener { + + @Override + public void closingDatabase() { + // ignore + } + + @Override + public void exceptionThrown(SQLException e, String sql) { + // ignore + } + + @Override + public void init(String url) { + // ignore + } + + @Override + public void opened() { + // ignore + } + + @Override + public void setProgress(int state, String name, int x, int max) { + try { + Thread.sleep(1); + } catch (InterruptedException e) { + // ignore + } + if (x % 400 == 0) { + // System.out.println("state: " + state + + // " name: " + name + " x:" + x + "/" + max); + } + } + + } + + private void testBackupRestoreLob() throws SQLException { + deleteDb("backup"); + Connection conn = getConnection("backup"); + conn.createStatement().execute( + "create table test(x clob) as select space(10000)"); + conn.close(); + Backup.execute(getBaseDir() + "/backup.zip", + getBaseDir(), "backup", true); + deleteDb("backup"); + Restore.execute(getBaseDir() + "/backup.zip", + getBaseDir(), "backup"); + } + + private void testBackupRestoreLobStatement() throws SQLException { + deleteDb("backup"); + Connection conn = getConnection("backup"); + conn.createStatement().execute( + "create table test(x clob) as select space(10000)"); + conn.createStatement().execute("backup to '" + + getBaseDir() + "/backup.zip"+"'"); + conn.close(); + deleteDb("backup"); + Restore.execute(getBaseDir() + "/backup.zip", + getBaseDir(), "backup"); + } + + private void testBackup() throws SQLException { + deleteDb("backup"); + deleteDb("restored"); + Connection conn1, conn2, conn3; + Statement stat1, stat2, stat3; + conn1 = getConnection("backup"); + stat1 = conn1.createStatement(); + stat1.execute("create table test" + + "(id int primary key, name varchar(255))"); + stat1.execute("insert into test values" + + "(1, 'first'), (2, 'second')"); + stat1.execute("create table testlob" + + "(id int primary key, b blob, c clob)"); + stat1.execute("insert into 
testlob values" + + "(1, space(10000), repeat('00', 10000))"); + conn2 = getConnection("backup"); + stat2 = conn2.createStatement(); + stat2.execute("insert into test values(3, 'third')"); + conn2.setAutoCommit(false); + stat2.execute("insert into test values(4, 'fourth (uncommitted)')"); + stat2.execute("insert into testlob values(2, ' ', '00')"); + + stat1.execute("backup to '" + getBaseDir() + "/backup.zip'"); + conn2.rollback(); + assertEqualDatabases(stat1, stat2); + + Restore.execute(getBaseDir() + "/backup.zip", getBaseDir(), "restored"); + conn3 = getConnection("restored"); + stat3 = conn3.createStatement(); + assertEqualDatabases(stat1, stat3); + + conn1.close(); + conn2.close(); + conn3.close(); + deleteDb("restored"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestBigDb.java b/modules/h2/src/test/java/org/h2/test/db/TestBigDb.java new file mode 100644 index 0000000000000..58d81797942b2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestBigDb.java @@ -0,0 +1,164 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.test.TestBase; +import org.h2.util.Utils; + +/** + * Test for big databases. + */ +public class TestBigDb extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.memory) { + return; + } + if (config.networked && config.big) { + return; + } + testLargeTable(); + testInsert(); + testLeftSummary(); + deleteDb("bigDb"); + } + + private void testLargeTable() throws SQLException { + deleteDb("bigDb"); + Connection conn = getConnection("bigDb"); + Statement stat = conn.createStatement(); + stat.execute("CREATE CACHED TABLE TEST(" + + "M_CODE CHAR(1) DEFAULT CAST(RAND()*9 AS INT)," + + "PRD_CODE CHAR(20) DEFAULT SECURE_RAND(10)," + + "ORG_CODE_SUPPLIER CHAR(13) DEFAULT SECURE_RAND(6)," + + "PRD_CODE_1 CHAR(14) DEFAULT SECURE_RAND(7)," + + "PRD_CODE_2 CHAR(20) DEFAULT SECURE_RAND(10)," + + "ORG_CODE CHAR(13) DEFAULT SECURE_RAND(6)," + + "SUBSTITUTED_BY CHAR(20) DEFAULT SECURE_RAND(10)," + + "SUBSTITUTED_BY_2 CHAR(14) DEFAULT SECURE_RAND(7)," + + "SUBSTITUTION_FOR CHAR(20) DEFAULT SECURE_RAND(10)," + + "SUBSTITUTION_FOR_2 CHAR(14) DEFAULT SECURE_RAND(7)," + + "TEST CHAR(2) DEFAULT SECURE_RAND(1)," + + "TEST_2 CHAR(2) DEFAULT SECURE_RAND(1)," + + "TEST_3 DECIMAL(7,2) DEFAULT RAND()," + + "PRIMARY_UNIT_CODE CHAR(3) DEFAULT SECURE_RAND(1)," + + "RATE_PRICE_ORDER_UNIT DECIMAL(9,3) DEFAULT RAND()," + + "ORDER_UNIT_CODE CHAR(3) DEFAULT SECURE_RAND(1)," + + "ORDER_QTY_MIN DECIMAL(6,1) DEFAULT RAND()," + + "ORDER_QTY_LOT_SIZE DECIMAL(6,1) DEFAULT RAND()," + + "ORDER_UNIT_CODE_2 CHAR(3) DEFAULT SECURE_RAND(1)," + + "PRICE_GROUP CHAR(20) DEFAULT SECURE_RAND(10)," + + "LEAD_TIME INTEGER DEFAULT RAND()," + + "LEAD_TIME_UNIT_CODE CHAR(3) DEFAULT SECURE_RAND(1)," + + "PRD_GROUP CHAR(10) DEFAULT SECURE_RAND(5)," + + "WEIGHT_GROSS DECIMAL(7,3) DEFAULT RAND()," + + "WEIGHT_UNIT_CODE CHAR(3) DEFAULT SECURE_RAND(1)," + + "PACK_UNIT_CODE CHAR(3) DEFAULT SECURE_RAND(1)," + + "PACK_LENGTH DECIMAL(7,3) DEFAULT RAND()," + + "PACK_WIDTH DECIMAL(7,3) DEFAULT RAND()," + + "PACK_HEIGHT DECIMAL(7,3) DEFAULT RAND()," + + 
"SIZE_UNIT_CODE CHAR(3) DEFAULT SECURE_RAND(1)," + + "STATUS_CODE CHAR(3) DEFAULT SECURE_RAND(1)," + + "INTRA_STAT_CODE CHAR(12) DEFAULT SECURE_RAND(6)," + + "PRD_TITLE CHAR(50) DEFAULT SECURE_RAND(25)," + + "VALID_FROM DATE DEFAULT NOW()," + + "MOD_DATUM DATE DEFAULT NOW())"); + int len = getSize(10, 50000); + try { + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(PRD_CODE) VALUES('abc' || ?)"); + long time = System.nanoTime(); + for (int i = 0; i < len; i++) { + if ((i % 1000) == 0) { + long t = System.nanoTime(); + if (t - time > TimeUnit.SECONDS.toNanos(1)) { + time = t; + int free = Utils.getMemoryFree(); + println("i: " + i + " free: " + free + " used: " + Utils.getMemoryUsed()); + } + } + prep.setInt(1, i); + prep.execute(); + } + stat.execute("CREATE INDEX IDX_TEST_PRD_CODE ON TEST(PRD_CODE)"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + int columns = rs.getMetaData().getColumnCount(); + while (rs.next()) { + for (int i = 0; i < columns; i++) { + rs.getString(i + 1); + } + } + } catch (OutOfMemoryError e) { + TestBase.logError("memory", e); + conn.close(); + throw e; + } + conn.close(); + } + + private void testLeftSummary() throws SQLException { + deleteDb("bigDb"); + Connection conn = getConnection("bigDb"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT, NEG INT AS -ID, " + + "NAME VARCHAR, PRIMARY KEY(ID, NAME))"); + stat.execute("CREATE INDEX IDX_NEG ON TEST(NEG, NAME)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(ID, NAME) VALUES(?, '1234567890')"); + int len = getSize(10, 1000); + int block = getSize(3, 10); + int left, x = 0; + for (int i = 0; i < len; i++) { + left = x + block / 2; + for (int j = 0; j < block; j++) { + prep.setInt(1, x++); + prep.execute(); + } + stat.execute("DELETE FROM TEST WHERE ID>" + left); + ResultSet rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + int count = rs.getInt(1); + trace("count: " + count); + } 
+ conn.close(); + } + + private void testInsert() throws SQLException { + deleteDb("bigDb"); + Connection conn = getConnection("bigDb"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(NAME) VALUES('Hello World')"); + int len = getSize(1000, 10000); + for (int i = 0; i < len; i++) { + if (i % 1000 == 0) { + println("rows: " + i); + Thread.yield(); + } + prep.execute(); + } + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestBigResult.java b/modules/h2/src/test/java/org/h2/test/db/TestBigResult.java new file mode 100644 index 0000000000000..4bb0b77fb491f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestBigResult.java @@ -0,0 +1,204 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; + +import org.h2.store.FileLister; +import org.h2.test.TestBase; + +/** + * Test for big result sets. + */ +public class TestBigResult extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.memory) { + return; + } + testLargeSubquery(); + testLargeUpdateDelete(); + testCloseConnectionDelete(); + testOrderGroup(); + testLimitBufferedResult(); + deleteDb("bigResult"); + } + + private void testLargeSubquery() throws SQLException { + deleteDb("bigResult"); + Connection conn = getConnection("bigResult"); + Statement stat = conn.createStatement(); + int len = getSize(1000, 4000); + stat.execute("SET MAX_MEMORY_ROWS " + (len / 10)); + stat.execute("CREATE TABLE RECOVERY(TRANSACTION_ID INT, SQL_STMT VARCHAR)"); + stat.execute("INSERT INTO RECOVERY " + + "SELECT X, CASE MOD(X, 2) WHEN 0 THEN 'commit' ELSE 'begin' END " + + "FROM SYSTEM_RANGE(1, "+len+")"); + ResultSet rs = stat.executeQuery("SELECT * FROM RECOVERY " + + "WHERE SQL_STMT LIKE 'begin%' AND " + + "TRANSACTION_ID NOT IN(SELECT TRANSACTION_ID FROM RECOVERY " + + "WHERE SQL_STMT='commit' OR SQL_STMT='rollback')"); + int count = 0, last = 1; + while (rs.next()) { + assertEquals(last, rs.getInt(1)); + last += 2; + count++; + } + assertEquals(len / 2, count); + conn.close(); + } + + private void testLargeUpdateDelete() throws SQLException { + deleteDb("bigResult"); + Connection conn = getConnection("bigResult"); + Statement stat = conn.createStatement(); + int len = getSize(10000, 100000); + stat.execute("SET MAX_OPERATION_MEMORY 4096"); + stat.execute("CREATE TABLE TEST AS SELECT * FROM SYSTEM_RANGE(1, " + len + ")"); + stat.execute("UPDATE TEST SET X=X+1"); + stat.execute("DELETE FROM TEST"); + conn.close(); + } + + private void testCloseConnectionDelete() throws SQLException { + deleteDb("bigResult"); + Connection conn = getConnection("bigResult"); + Statement stat = conn.createStatement(); + stat.execute("SET MAX_MEMORY_ROWS 2"); + ResultSet rs = stat.executeQuery("SELECT * FROM SYSTEM_RANGE(1, 100)"); + while (rs.next()) { + // ignore + } + // rs.close(); + 
conn.close(); + deleteDb("bigResult"); + ArrayList files = FileLister.getDatabaseFiles(getBaseDir(), + "bigResult", true); + if (files.size() > 0) { + fail("file not deleted: " + files.get(0)); + } + } + + private void testLimitBufferedResult() throws SQLException { + deleteDb("bigResult"); + Connection conn = getConnection("bigResult"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT)"); + for (int i = 0; i < 200; i++) { + stat.execute("INSERT INTO TEST(ID) VALUES(" + i + ")"); + } + stat.execute("SET MAX_MEMORY_ROWS 100"); + ResultSet rs; + rs = stat.executeQuery("select id from test order by id limit 10 offset 85"); + for (int i = 85; rs.next(); i++) { + assertEquals(i, rs.getInt(1)); + } + rs = stat.executeQuery("select id from test order by id limit 10 offset 95"); + for (int i = 95; rs.next(); i++) { + assertEquals(i, rs.getInt(1)); + } + rs = stat.executeQuery("select id from test order by id limit 10 offset 105"); + for (int i = 105; rs.next(); i++) { + assertEquals(i, rs.getInt(1)); + } + conn.close(); + } + + private void testOrderGroup() throws SQLException { + deleteDb("bigResult"); + Connection conn = getConnection("bigResult"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(" + + "ID INT PRIMARY KEY, " + + "Name VARCHAR(255), " + + "FirstName VARCHAR(255), " + + "Points INT," + + "LicenseID INT)"); + int len = getSize(10, 5000); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?, ?, ?, ?)"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.setString(2, "Name " + i); + prep.setString(3, "First Name " + i); + prep.setInt(4, i * 10); + prep.setInt(5, i * i); + prep.execute(); + } + conn.close(); + conn = getConnection("bigResult"); + stat = conn.createStatement(); + stat.setMaxRows(len + 1); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY 
ID"); + for (int i = 0; i < len; i++) { + rs.next(); + assertEquals(i, rs.getInt(1)); + assertEquals("Name " + i, rs.getString(2)); + assertEquals("First Name " + i, rs.getString(3)); + assertEquals(i * 10, rs.getInt(4)); + assertEquals(i * i, rs.getInt(5)); + } + + stat.setMaxRows(len + 1); + rs = stat.executeQuery("SELECT * FROM TEST WHERE ID >= 1000 ORDER BY ID"); + for (int i = 1000; i < len; i++) { + rs.next(); + assertEquals(i, rs.getInt(1)); + assertEquals("Name " + i, rs.getString(2)); + assertEquals("First Name " + i, rs.getString(3)); + assertEquals(i * 10, rs.getInt(4)); + assertEquals(i * i, rs.getInt(5)); + } + + stat.execute("SET MAX_MEMORY_ROWS 2"); + rs = stat.executeQuery("SELECT Name, SUM(ID) FROM TEST GROUP BY NAME"); + while (rs.next()) { + rs.getString(1); + rs.getInt(2); + } + + conn.setAutoCommit(false); + stat.setMaxRows(0); + stat.execute("SET MAX_MEMORY_ROWS 0"); + stat.execute("CREATE TABLE DATA(ID INT, NAME VARCHAR_IGNORECASE(255))"); + prep = conn.prepareStatement("INSERT INTO DATA VALUES(?, ?)"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.setString(2, "" + i / 200); + prep.execute(); + } + Statement s2 = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, + ResultSet.CONCUR_UPDATABLE); + rs = s2.executeQuery("SELECT NAME FROM DATA"); + rs.last(); + conn.setAutoCommit(true); + + rs = s2.executeQuery("SELECT NAME FROM DATA ORDER BY ID"); + while (rs.next()) { + // do nothing + } + + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestCases.java b/modules/h2/src/test/java/org/h2/test/db/TestCases.java new file mode 100644 index 0000000000000..fe287b9a34a9e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestCases.java @@ -0,0 +1,1927 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.File; +import java.io.StringReader; +import java.sql.Connection; +import java.sql.Date; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.util.List; +import java.util.Random; +import java.util.concurrent.TimeUnit; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Various test cases. + */ +public class TestCases extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testMinimalCoveringIndexPlan(); + testMinMaxDirectLookupIndex(); + testReferenceLaterTable(); + testAutoCommitInDatabaseURL(); + testReferenceableIndexUsage(); + testClearSyntaxException(); + testEmptyStatements(); + testViewParameters(); + testLargeKeys(); + testExtraSemicolonInDatabaseURL(); + testGroupSubquery(); + testSelfReferentialColumn(); + testCountDistinctNotNull(); + testDependencies(); + testDropTable(); + testConvertType(); + testSortedSelect(); + testMaxMemoryRows(); + testDeleteTop(); + testLikeExpressions(); + testUnicode(); + testOuterJoin(); + testCommentOnColumnWithSchemaEqualDatabase(); + testColumnWithConstraintAndComment(); + testTruncateConstraintsDisabled(); + testPreparedSubquery2(); + testPreparedSubquery(); + testCompareDoubleWithIntColumn(); + testDeleteIndexOutOfBounds(); + testOrderByWithSubselect(); + testInsertDeleteRollback(); + testLargeRollback(); + testConstraintAlterTable(); + testJoinWithView(); + testLobDecrypt(); + testInvalidDatabaseName(); + testReuseSpace(); + testDeleteGroup(); + testDisconnect(); + testExecuteTrace(); + testExplain(); + 
testExplainAnalyze(); + if (config.memory) { + return; + } + testCheckConstraintWithFunction(); + testDeleteAndDropTableWithLobs(true); + testDeleteAndDropTableWithLobs(false); + testEmptyBtreeIndex(); + testReservedKeywordReconnect(); + testSpecialSQL(); + testUpperCaseLowerCaseDatabase(); + testManualCommitSet(); + testSchemaIdentityReconnect(); + testAlterTableReconnect(); + testPersistentSettings(); + testInsertSelectUnion(); + testViewReconnect(); + testDefaultQueryReconnect(); + testBigString(); + testRenameReconnect(); + testAllSizes(); + testCreateDrop(); + testPolePos(); + testQuick(); + testMutableObjects(); + testSelectForUpdate(); + testDoubleRecovery(); + testConstraintReconnect(); + testCollation(); + testBinaryCollation(); + deleteDb("cases"); + } + + private void testReferenceLaterTable() throws SQLException { + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("create table a(id int)"); + stat.execute("create table b(id int)"); + stat.execute("drop table a"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat). 
+                execute("create table a(id int check id < select max(id) from b)");
+        stat.execute("drop table b");
+        stat.execute("create table b(id int)");
+        stat.execute("create table a(id int check id < select max(id) from b)");
+        conn.close();
+        conn = getConnection("cases");
+        conn.close();
+    }
+
+    private void testAutoCommitInDatabaseURL() throws SQLException {
+        Connection conn = getConnection("cases;autocommit=false");
+        assertFalse(conn.getAutoCommit());
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("call autocommit()");
+        rs.next();
+        assertFalse(rs.getBoolean(1));
+        conn.close();
+    }
+
+    private void testReferenceableIndexUsage() throws SQLException {
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("drop table if exists a, b");
+        stat.execute("create table a(id int, x int) as select 1, 100");
+        stat.execute("create index idx1 on a(id, x)");
+        stat.execute("create table b(id int primary key, a_id int) as select 1, 1");
+        stat.execute("alter table b add constraint x " +
+                "foreign key(a_id) references a(id)");
+        stat.execute("update a set x=200");
+        stat.execute("drop table if exists a, b cascade");
+        conn.close();
+    }
+
+    private void testClearSyntaxException() throws SQLException {
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        assertThrows(42000, stat).execute("select t.x, t.x t.y from dual t");
+        conn.close();
+    }
+
+    private void testEmptyStatements() throws SQLException {
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;");
+        stat.execute("");
+        stat.execute(";");
+        stat.execute(" ;");
+        conn.close();
+    }
+
+    private void testViewParameters() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute(
+                "create view test as select 0 value, 'x' name from dual");
+        PreparedStatement prep = conn.prepareStatement(
+                "select 1 from test where name=? and value=? and value<=?");
+        prep.setString(1, "x");
+        prep.setInt(2, 0);
+        prep.setInt(3, 1);
+        prep.executeQuery();
+        conn.close();
+    }
+
+    private void testLargeKeys() throws SQLException {
+        if (config.memory) {
+            return;
+        }
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int primary key, name varchar)");
+        stat.execute("create index on test(name)");
+        stat.execute("insert into test values(1, '1' || space(1500))");
+        conn.close();
+        conn = getConnection("cases");
+        stat = conn.createStatement();
+        stat.execute("insert into test values(2, '2' || space(1500))");
+        conn.close();
+        conn = getConnection("cases");
+        stat = conn.createStatement();
+        stat.executeQuery("select name from test order by name");
+        conn.close();
+    }
+
+    private void testExtraSemicolonInDatabaseURL() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases;");
+        conn.close();
+        conn = getConnection("cases;;mode=mysql;");
+        conn.close();
+    }
+
+    private void testGroupSubquery() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(a int, b int)");
+        stat.execute("create index idx on test(a)");
+        stat.execute("insert into test values (1, 9), (2, 9), (3, 9)");
+        ResultSet rs = stat.executeQuery("select (select count(*)" +
+                " from test where a = t.a and b = 0) from test t group by a");
+        rs.next();
+        assertEquals(0, rs.getInt(1));
+        conn.close();
+    }
+
+    private void testSelfReferentialColumn() throws SQLException {
+        deleteDb("selfreferential");
+        Connection conn = getConnection("selfreferential");
+        Statement stat = conn.createStatement();
+        stat.execute("create table sr(id integer, usecount integer as usecount + 1)");
+        assertThrows(ErrorCode.NULL_NOT_ALLOWED, stat).execute("insert into sr(id) values (1)");
+        assertThrows(ErrorCode.MUST_GROUP_BY_COLUMN_1, stat).execute("select max(id), usecount from sr");
+        conn.close();
+    }
+
+    private void testCountDistinctNotNull() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int not null) as " +
+                "select 1 from system_range(1, 10)");
+        ResultSet rs = stat.executeQuery("select count(distinct id) from test");
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        conn.close();
+    }
+
+    private void testDependencies() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+
+        // avoid endless recursion when adding dependencies
+        stat.execute("create table test(id int primary key, parent int)");
+        stat.execute("alter table test add constraint test check " +
+                "(select count(*) from test) < 10");
+        stat.execute("create table b()");
+        stat.execute("drop table b");
+        stat.execute("drop table test");
+
+        // ensure the dependency is detected
+        stat.execute("create alias is_positive as " +
+                "'boolean isPositive(int x) { return x > 0; }'");
+        stat.execute("create table a(a integer, constraint test check is_positive(a))");
+        assertThrows(ErrorCode.CANNOT_DROP_2, stat).
+                execute("drop alias is_positive");
+        stat.execute("drop table a");
+        stat.execute("drop alias is_positive");
+
+        // ensure trying to reference the table fails
+        // (otherwise re-opening the database is not possible)
+        stat.execute("create table test(id int primary key)");
+        assertThrows(ErrorCode.COLUMN_IS_REFERENCED_1, stat).
+                execute("alter table test alter column id " +
+                        "set default ifnull((select max(id) from test for update)+1, 0)");
+        stat.execute("drop table test");
+        conn.close();
+    }
+
+    private void testDropTable() throws SQLException {
+        trace("testDropTable");
+        final boolean[] booleans = new boolean[] { true, false };
+        for (final boolean stdDropTableRestrict : booleans) {
+            for (final boolean restrict : booleans) {
+                testDropTableNoReference(stdDropTableRestrict, restrict);
+                testDropTableViewReference(stdDropTableRestrict, restrict);
+                testDropTableForeignKeyReference(stdDropTableRestrict, restrict);
+            }
+        }
+    }
+
+    private Statement createTable(final boolean stdDropTableRestrict) throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases;STANDARD_DROP_TABLE_RESTRICT=" + stdDropTableRestrict);
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int)");
+        return stat;
+    }
+
+    private void dropTable(final boolean restrict, Statement stat, final boolean expectedDropSuccess)
+            throws SQLException {
+        assertThrows(expectedDropSuccess ? 0 : ErrorCode.CANNOT_DROP_2, stat)
+                .execute("drop table test " + (restrict ? "restrict" : "cascade"));
+        assertThrows(expectedDropSuccess ? ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1 : 0, stat)
+                .execute("select * from test");
+    }
+
+    private void testDropTableNoReference(final boolean stdDropTableRestrict, final boolean restrict)
+            throws SQLException {
+        Statement stat = createTable(stdDropTableRestrict);
+        // always succeed as there's no reference to the table
+        dropTable(restrict, stat, true);
+        stat.getConnection().close();
+    }
+
+    private void testDropTableViewReference(final boolean stdDropTableRestrict, final boolean restrict)
+            throws SQLException {
+        Statement stat = createTable(stdDropTableRestrict);
+        stat.execute("create view abc as select * from test");
+        // drop allowed only if cascade
+        final boolean expectedDropSuccess = !restrict;
+        dropTable(restrict, stat, expectedDropSuccess);
+        // missing view if the drop succeeded
+        assertThrows(expectedDropSuccess ? ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1 : 0, stat).execute("select * from abc");
+        stat.getConnection().close();
+    }
+
+    private void testDropTableForeignKeyReference(final boolean stdDropTableRestrict, final boolean restrict)
+            throws SQLException {
+        Statement stat = createTable(stdDropTableRestrict);
+        stat.execute("create table ref(id int, id_test int, foreign key (id_test) references test (id)) ");
+        // test table is empty, so the foreign key forces ref table to be also
+        // empty
+        assertThrows(ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, stat)
+                .execute("insert into ref values(1,2)");
+        // drop allowed if cascade or old style
+        final boolean expectedDropSuccess = !stdDropTableRestrict || !restrict;
+        dropTable(restrict, stat, expectedDropSuccess);
+        // insertion succeeds if the foreign key was dropped
+        assertThrows(expectedDropSuccess ? 0 : ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, stat)
+                .execute("insert into ref values(1,2)");
+        stat.getConnection().close();
+    }
+
+    private void testConvertType() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test as select cast(0 as dec(10, 2)) x");
+        ResultSetMetaData meta = stat.executeQuery("select * from test").getMetaData();
+        assertEquals(2, meta.getPrecision(1));
+        assertEquals(2, meta.getScale(1));
+        stat.execute("alter table test add column y int");
+        stat.execute("drop table test");
+        conn.close();
+    }
+
+    private void testSortedSelect() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create memory temporary table test(id int) not persistent");
+        stat.execute("insert into test(id) direct sorted select 1");
+        stat.execute("drop table test");
+        conn.close();
+    }
+
+    private void testMaxMemoryRows() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection(
+                "cases;MAX_MEMORY_ROWS=1");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int primary key)");
+        stat.execute("insert into test values(1), (2)");
+        stat.execute("select * from dual where x not in " +
+                "(select id from test order by id)");
+        stat.execute("select * from dual where x not in " +
+                "(select id from test union select id from test)");
+        stat.execute("(select id from test order by id) " +
+                "intersect (select id from test order by id)");
+        conn.close();
+    }
+
+    private void testUnicode() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id identity, name text)");
+        String[] data = { "\uff1e", "\ud848\udf1e" };
+        PreparedStatement prep = conn.prepareStatement(
+                "insert into test(name) values(?)");
+        for (int i = 0; i < data.length; i++) {
+            prep.setString(1, data[i]);
+            prep.execute();
+        }
+        prep = conn.prepareStatement("select * from test order by id");
+        ResultSet rs = prep.executeQuery();
+        for (int i = 0; i < data.length; i++) {
+            assertTrue(rs.next());
+            assertEquals(data[i], rs.getString(2));
+        }
+        stat.execute("drop table test");
+        conn.close();
+    }
+
+    private void testCheckConstraintWithFunction() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create alias is_email as " +
+                "'boolean isEmail(String x) { return x != null && x.indexOf(''@'') > 0; }'");
+        stat.execute("create domain email as varchar check is_email(value)");
+        stat.execute("create table test(e email)");
+        conn.close();
+        conn = getConnection("cases");
+        conn.close();
+    }
+
+    private void testOuterJoin() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table parent(p int primary key) as select 1");
+        stat.execute("create table child(c int primary key, pc int) as select 2, 1");
+        ResultSet rs = stat.executeQuery("select * from parent " +
+                "left outer join child on p = pc where c is null");
+        assertFalse(rs.next());
+        stat.execute("drop all objects");
+        conn.close();
+    }
+
+    private void testCommentOnColumnWithSchemaEqualDatabase() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create schema cases");
+        stat.execute("create table cases.cases(cases int)");
+        stat.execute("comment on column " +
+                "cases.cases.cases is 'schema.table.column'");
+        stat.execute("comment on column " +
+                "cases.cases.cases.cases is 'db.schema.table.column'");
+        conn.close();
+    }
+
+    private void testColumnWithConstraintAndComment() throws SQLException {
+        if (config.memory) {
+            return;
+        }
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int check id < 500)");
+        stat.execute("comment on column test.id is 'comment'");
+        conn.close();
+        conn = getConnection("cases");
+        conn.close();
+    }
+
+    private void testTruncateConstraintsDisabled() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table parent(id identity) as select 0");
+        stat.execute("create table child(id identity, " +
+                "parent int references parent(id)) as select 0, 0");
+        assertThrows(ErrorCode.CANNOT_TRUNCATE_1, stat).
+                execute("truncate table parent");
+        assertThrows(ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_CHILD_EXISTS_1, stat).
+                execute("delete from parent");
+        stat.execute("alter table parent set referential_integrity false");
+        stat.execute("delete from parent");
+        stat.execute("truncate table parent");
+        stat.execute("alter table parent set referential_integrity true");
+        assertThrows(ErrorCode.CANNOT_TRUNCATE_1, stat).
+                execute("truncate table parent");
+        stat.execute("set referential_integrity false");
+        stat.execute("truncate table parent");
+        conn.close();
+    }
+
+    private void testPreparedSubquery2() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int primary key, name varchar(255))");
+        stat.execute("insert into test values(1, 'Hello')");
+        stat.execute("insert into test values(2, 'World')");
+
+        PreparedStatement ps = conn.prepareStatement(
+                "select name from test where id in " +
+                "(select id from test where name = ?)");
+        ps.setString(1, "Hello");
+        ResultSet rs = ps.executeQuery();
+        if (rs.next()) {
+            if (!rs.getString("name").equals("Hello")) {
+                fail("'" + rs.getString("name") + "' must be 'Hello'");
+            }
+        } else {
+            fail("Must have a result!");
+        }
+        rs.close();
+
+        ps.setString(1, "World");
+        rs = ps.executeQuery();
+        if (rs.next()) {
+            if (!rs.getString("name").equals("World")) {
+                fail("'" + rs.getString("name") + "' must be 'World'");
+            }
+        } else {
+            fail("Must have a result!");
+        }
+        rs.close();
+        conn.close();
+    }
+
+    private void testPreparedSubquery() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int)");
+        stat.execute("insert into test values(1)");
+        String sql = "select ?, ?, (select count(*) from test inner join " +
+                "(select id from test where 0=?) as t2 on t2.id=test.id) from test";
+        ResultSet rs;
+        rs = stat.executeQuery(sql.replace('?', '0'));
+        rs.next();
+        assertEquals(1, rs.getInt(3));
+        PreparedStatement prep = conn.prepareStatement(sql);
+        prep.setInt(1, 0);
+        prep.setInt(2, 0);
+        prep.setInt(3, 0);
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals(1, rs.getInt(3));
+        conn.close();
+    }
+
+    private void testCompareDoubleWithIntColumn() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        testCompareDoubleWithIntColumn(stat, false, 0.1, false);
+        testCompareDoubleWithIntColumn(stat, false, 0.1, true);
+        testCompareDoubleWithIntColumn(stat, false, 0.9, false);
+        testCompareDoubleWithIntColumn(stat, false, 0.9, true);
+        testCompareDoubleWithIntColumn(stat, true, 0.1, false);
+        testCompareDoubleWithIntColumn(stat, true, 0.1, true);
+        testCompareDoubleWithIntColumn(stat, true, 0.9, false);
+        testCompareDoubleWithIntColumn(stat, true, 0.9, true);
+        conn.close();
+    }
+
+    private void testCompareDoubleWithIntColumn(Statement stat, boolean pk,
+            double x, boolean prepared) throws SQLException {
+        if (pk) {
+            stat.execute("create table test(id int primary key)");
+        } else {
+            stat.execute("create table test(id int)");
+        }
+        stat.execute("insert into test values(1)");
+        ResultSet rs;
+        if (prepared) {
+            PreparedStatement prep = stat.getConnection().prepareStatement(
+                    "select * from test where id > ?");
+            prep.setDouble(1, x);
+            rs = prep.executeQuery();
+        } else {
+            rs = stat.executeQuery("select * from test where id > " + x);
+        }
+        assertTrue(rs.next());
+        stat.execute("drop table test");
+    }
+
+    private void testDeleteIndexOutOfBounds() throws SQLException {
+        if (config.memory || !config.big) {
+            return;
+        }
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE IF NOT EXISTS test " +
+                "(rowid INTEGER PRIMARY KEY AUTO_INCREMENT, txt VARCHAR(64000));");
+        PreparedStatement prep = conn.prepareStatement(
+                "insert into test (txt) values(space(?))");
+        for (int i = 0; i < 3000; i++) {
+            prep.setInt(1, i * 3);
+            prep.execute();
+        }
+        stat.execute("DELETE FROM test;");
+        conn.close();
+    }
+
+    private void testInsertDeleteRollback() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("set cache_size 1");
+        stat.execute("SET MAX_MEMORY_ROWS " + Integer.MAX_VALUE);
+        stat.execute("SET MAX_MEMORY_UNDO " + Integer.MAX_VALUE);
+        stat.execute("SET MAX_OPERATION_MEMORY " + Integer.MAX_VALUE);
+        stat.execute("create table test(id identity)");
+        conn.setAutoCommit(false);
+        stat.execute("insert into test select x from system_range(1, 11)");
+        stat.execute("delete from test");
+        conn.rollback();
+        conn.close();
+    }
+
+    private void testLargeRollback() throws SQLException {
+        Connection conn;
+        Statement stat;
+
+        deleteDb("cases");
+        conn = getConnection("cases");
+        stat = conn.createStatement();
+        stat.execute("set max_operation_memory 1");
+        stat.execute("create table test(id int)");
+        stat.execute("insert into test values(1), (2)");
+        stat.execute("create index idx on test(id)");
+        conn.setAutoCommit(false);
+        stat.execute("update test set id = id where id=2");
+        stat.execute("update test set id = id");
+        conn.rollback();
+        conn.close();
+
+        deleteDb("cases");
+        conn = getConnection("cases");
+        conn.createStatement().execute("set MAX_MEMORY_UNDO 1");
+        conn.createStatement().execute("create table test(id number primary key)");
+        conn.createStatement().execute(
+                "insert into test(id) select x from system_range(1, 2)");
+        Connection conn2 = getConnection("cases");
+        conn2.setAutoCommit(false);
+        assertEquals(2, conn2.createStatement().executeUpdate(
+                "delete from test"));
+        conn2.close();
+        conn.close();
+
+        deleteDb("cases");
+        conn = getConnection("cases");
+        conn.createStatement().execute("set MAX_MEMORY_UNDO 8");
+        conn.createStatement().execute("create table test(id number primary key)");
+        conn.setAutoCommit(false);
+        conn.createStatement().execute(
+                "insert into test select x from system_range(1, 10)");
+        conn.createStatement().execute("delete from test");
+        conn.rollback();
+        conn.close();
+    }
+
+    private void testConstraintAlterTable() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table parent (pid int)");
+        stat.execute("create table child (cid int primary key, pid int)");
+        stat.execute("alter table child add foreign key (pid) references parent(pid)");
+        stat.execute("alter table child add column c2 int");
+        stat.execute("alter table parent add column p2 varchar");
+        conn.close();
+    }
+
+    private void testEmptyBtreeIndex() throws SQLException {
+        deleteDb("cases");
+        Connection conn;
+        conn = getConnection("cases");
+        conn.createStatement().execute("CREATE TABLE test(id int PRIMARY KEY);");
+        conn.createStatement().execute(
+                "INSERT INTO test SELECT X FROM SYSTEM_RANGE(1, 77)");
+        conn.createStatement().execute("DELETE from test");
+        conn.close();
+        conn = getConnection("cases");
+        conn.createStatement().execute("INSERT INTO test (id) VALUES (1)");
+        conn.close();
+        conn = getConnection("cases");
+        conn.createStatement().execute("DELETE from test");
+        conn.createStatement().execute("drop table test");
+        conn.close();
+    }
+
+    private void testJoinWithView() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        conn.createStatement().execute(
+                "create table t(i identity, n varchar) as select 1, 'x'");
+        PreparedStatement prep = conn.prepareStatement(
+                "select 1 from dual " +
+                "inner join(select n from t where i=?) a on a.n='x' " +
+                "inner join(select n from t where i=?) b on b.n='x'");
+        prep.setInt(1, 1);
+        prep.setInt(2, 1);
+        prep.execute();
+        conn.close();
+    }
+
+    private void testLobDecrypt() throws SQLException {
+        Connection conn = getConnection("cases");
+        String key = "key";
+        String value = "Hello World";
+        PreparedStatement prep = conn.prepareStatement(
+                "CALL ENCRYPT('AES', RAWTOHEX(?), STRINGTOUTF8(?))");
+        prep.setCharacterStream(1, new StringReader(key), -1);
+        prep.setCharacterStream(2, new StringReader(value), -1);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        String encrypted = rs.getString(1);
+        PreparedStatement prep2 = conn.prepareStatement(
+                "CALL TRIM(CHAR(0) FROM " +
+                "UTF8TOSTRING(DECRYPT('AES', RAWTOHEX(?), ?)))");
+        prep2.setCharacterStream(1, new StringReader(key), -1);
+        prep2.setCharacterStream(2, new StringReader(encrypted), -1);
+        ResultSet rs2 = prep2.executeQuery();
+        rs2.first();
+        String decrypted = rs2.getString(1);
+        prep2.close();
+        assertEquals(value, decrypted);
+        conn.close();
+    }
+
+    private void testReservedKeywordReconnect() throws SQLException {
+        if (config.memory) {
+            return;
+        }
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table \"UNIQUE\"(\"UNIQUE\" int)");
+        conn.close();
+        conn = getConnection("cases");
+        stat = conn.createStatement();
+        stat.execute("select \"UNIQUE\" from \"UNIQUE\"");
+        stat.execute("drop table \"UNIQUE\"");
+        conn.close();
+    }
+
+    private void testInvalidDatabaseName() throws SQLException {
+        if (config.memory) {
+            return;
+        }
+        assertThrows(ErrorCode.INVALID_DATABASE_NAME_1, this).
+                getConnection("cases/");
+    }
+
+    private void testReuseSpace() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        int tableCount = getSize(2, 5);
+        for (int i = 0; i < tableCount; i++) {
+            stat.execute("create table t" + i + "(data varchar)");
+        }
+        Random random = new Random(1);
+        int len = getSize(50, 500);
+        for (int i = 0; i < len; i++) {
+            String table = "t" + random.nextInt(tableCount);
+            String sql;
+            if (random.nextBoolean()) {
+                sql = "insert into " + table + " values(space(100000))";
+            } else {
+                sql = "delete from " + table;
+            }
+            stat.execute(sql);
+            stat.execute("script to '" + getBaseDir() + "/test.sql'");
+        }
+        conn.close();
+        FileUtils.delete(getBaseDir() + "/test.sql");
+    }
+
+    private void testDeleteGroup() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("set max_memory_rows 2");
+        stat.execute("create table test(id int primary key, x int)");
+        stat.execute("insert into test values(0, 0), (1, 1), (2, 2)");
+        stat.execute("delete from test where id not in " +
+                "(select min(x) from test group by id)");
+        conn.close();
+    }
+
+    private void testSpecialSQL() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        assertThrows(ErrorCode.SYNTAX_ERROR_2, stat).
+                execute("create table address" +
+                        "(id identity, name varchar check? instr(value, '@') > 1)");
+        stat.execute("SET AUTOCOMMIT OFF; \n//" +
+                "create sequence if not exists object_id;\n");
+        stat.execute("SET AUTOCOMMIT OFF;\n//" +
+                "create sequence if not exists object_id;\n");
+        stat.execute("SET AUTOCOMMIT OFF; //" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF;//" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF \n//" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF\n//" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF //" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF//" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF; \n///" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF;\n///" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF; ///" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF;///" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF \n///" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF\n///" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF ///" +
+                "create sequence if not exists object_id;");
+        stat.execute("SET AUTOCOMMIT OFF///" +
+                "create sequence if not exists object_id;");
+        conn.close();
+    }
+
+    private void testUpperCaseLowerCaseDatabase() throws SQLException {
+        if (File.separatorChar != '\\' || config.googleAppEngine) {
+            return;
+        }
+        deleteDb("cases");
+        deleteDb("CaSeS");
+        Connection conn, conn2;
+        ResultSet rs;
+        conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("CHECKPOINT");
+        stat.execute("CREATE TABLE TEST(ID INT)");
+        stat.execute("INSERT INTO TEST VALUES(1)");
+        stat.execute("CHECKPOINT");
+
+        conn2 = getConnection("CaSeS");
+        rs = conn.createStatement().executeQuery("SELECT * FROM TEST");
+        assertTrue(rs.next());
+        conn2.close();
+
+        conn.close();
+
+        conn = getConnection("cases");
+        rs = conn.createStatement().executeQuery("SELECT * FROM TEST");
+        assertTrue(rs.next());
+        conn.close();
+
+        conn = getConnection("CaSeS");
+        rs = conn.createStatement().executeQuery("SELECT * FROM TEST");
+        assertTrue(rs.next());
+        conn.close();
+
+        deleteDb("cases");
+        deleteDb("CaSeS");
+
+    }
+
+    private void testManualCommitSet() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Connection conn2 = getConnection("cases");
+        conn.setAutoCommit(false);
+        conn2.setAutoCommit(false);
+        conn.createStatement().execute("SET MODE REGULAR");
+        conn2.createStatement().execute("SET MODE REGULAR");
+        conn.close();
+        conn2.close();
+    }
+
+    private void testSchemaIdentityReconnect() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create schema s authorization sa");
+        stat.execute("create table s.test(id identity)");
+        conn.close();
+        conn = getConnection("cases");
+        ResultSet rs = conn.createStatement().executeQuery(
+                "SELECT * FROM S.TEST");
+        while (rs.next()) {
+            // ignore
+        }
+        conn.close();
+    }
+
+    private void testDisconnect() throws Exception {
+        if (config.networked || config.codeCoverage) {
+            return;
+        }
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        final Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID IDENTITY)");
+        for (int i = 0; i < 1000; i++) {
+            stat.execute("INSERT INTO TEST() VALUES()");
+        }
+        final SQLException[] stopped = { null };
+        Thread t = new Thread(new Runnable() {
+            @Override
+            public void run() {
+                try {
+                    long time = System.nanoTime();
+                    ResultSet rs = stat.executeQuery("SELECT MAX(T.ID) " +
+                            "FROM TEST T, TEST, TEST, TEST, TEST, " +
+                            "TEST, TEST, TEST, TEST, TEST, TEST");
+                    rs.next();
+                    time = System.nanoTime() - time;
+                    TestBase.logError("query was too quick; result: " +
+                            rs.getInt(1) + " time:" + TimeUnit.NANOSECONDS.toMillis(time), null);
+                } catch (SQLException e) {
+                    stopped[0] = e;
+                    // ok
+                }
+            }
+        });
+        t.start();
+        Thread.sleep(300);
+        long time = System.nanoTime();
+        conn.close();
+        t.join(5000);
+        if (stopped[0] == null) {
+            fail("query still running");
+        } else {
+            assertKnownException(stopped[0]);
+        }
+        time = System.nanoTime() - time;
+        if (time > TimeUnit.SECONDS.toNanos(5)) {
+            if (!config.reopen) {
+                fail("closing took " + time);
+            }
+        }
+        deleteDb("cases");
+    }
+
+    private void testExecuteTrace() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery(
+                "SELECT ? FROM DUAL {1: 'Hello'}");
+        rs.next();
+        assertEquals("Hello", rs.getString(1));
+        assertFalse(rs.next());
+
+        rs = stat.executeQuery("SELECT ? FROM DUAL UNION ALL " +
+                "SELECT ? FROM DUAL {1: 'Hello', 2:'World' }");
+        rs.next();
+        assertEquals("Hello", rs.getString(1));
+        rs.next();
+        assertEquals("World", rs.getString(1));
+        assertFalse(rs.next());
+
+        conn.close();
+    }
+
+    private void checkExplain(Statement stat, String sql, String expected) throws SQLException {
+        ResultSet rs = stat.executeQuery(sql);
+
+        assertTrue(rs.next());
+
+        assertEquals(expected, rs.getString(1));
+    }
+
+    private void testExplain() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+
+        stat.execute("CREATE TABLE ORGANIZATION" +
+                "(id int primary key, name varchar(100))");
+        stat.execute("CREATE TABLE PERSON" +
+                "(id int primary key, orgId int, name varchar(100), salary int)");
+
+        checkExplain(stat, "/* bla-bla */ EXPLAIN SELECT ID FROM ORGANIZATION WHERE id = ?",
+                "SELECT\n" +
+                "    ID\n" +
+                "FROM PUBLIC.ORGANIZATION\n" +
+                "    /* PUBLIC.PRIMARY_KEY_D: ID = ?1 */\n" +
+                "WHERE ID = ?1");
+
+        checkExplain(stat, "EXPLAIN SELECT ID FROM ORGANIZATION WHERE id = 1",
+                "SELECT\n" +
+                "    ID\n" +
+                "FROM PUBLIC.ORGANIZATION\n" +
+                "    /* PUBLIC.PRIMARY_KEY_D: ID = 1 */\n" +
+                "WHERE ID = 1");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON WHERE id = ?",
+                "SELECT\n" +
+                "    PERSON.ID,\n" +
+                "    PERSON.ORGID,\n" +
+                "    PERSON.NAME,\n" +
+                "    PERSON.SALARY\n" +
+                "FROM PUBLIC.PERSON\n" +
+                "    /* PUBLIC.PRIMARY_KEY_8: ID = ?1 */\n" +
+                "WHERE ID = ?1");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON WHERE id = 50",
+                "SELECT\n" +
+                "    PERSON.ID,\n" +
+                "    PERSON.ORGID,\n" +
+                "    PERSON.NAME,\n" +
+                "    PERSON.SALARY\n" +
+                "FROM PUBLIC.PERSON\n" +
+                "    /* PUBLIC.PRIMARY_KEY_8: ID = 50 */\n" +
+                "WHERE ID = 50");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON WHERE salary > ? and salary < ?",
+                "SELECT\n" +
+                "    PERSON.ID,\n" +
+                "    PERSON.ORGID,\n" +
+                "    PERSON.NAME,\n" +
+                "    PERSON.SALARY\n" +
+                "FROM PUBLIC.PERSON\n" +
+                "    /* PUBLIC.PERSON.tableScan */\n" +
+                "WHERE (SALARY > ?1)\n" +
+                "    AND (SALARY < ?2)");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON WHERE salary > 1000 and salary < 2000",
+                "SELECT\n" +
+                "    PERSON.ID,\n" +
+                "    PERSON.ORGID,\n" +
+                "    PERSON.NAME,\n" +
+                "    PERSON.SALARY\n" +
+                "FROM PUBLIC.PERSON\n" +
+                "    /* PUBLIC.PERSON.tableScan */\n" +
+                "WHERE (SALARY > 1000)\n" +
+                "    AND (SALARY < 2000)");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON WHERE name = lower(?)",
+                "SELECT\n" +
+                "    PERSON.ID,\n" +
+                "    PERSON.ORGID,\n" +
+                "    PERSON.NAME,\n" +
+                "    PERSON.SALARY\n" +
+                "FROM PUBLIC.PERSON\n" +
+                "    /* PUBLIC.PERSON.tableScan */\n" +
+                "WHERE NAME = LOWER(?1)");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON WHERE name = lower('Smith')",
+                "SELECT\n" +
+                "    PERSON.ID,\n" +
+                "    PERSON.ORGID,\n" +
+                "    PERSON.NAME,\n" +
+                "    PERSON.SALARY\n" +
+                "FROM PUBLIC.PERSON\n" +
+                "    /* PUBLIC.PERSON.tableScan */\n" +
+                "WHERE NAME = 'smith'");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON p " +
+                "INNER JOIN ORGANIZATION o ON p.id = o.id WHERE o.id = ? AND p.salary > ?",
+                "SELECT\n" +
+                "    P.ID,\n" +
+                "    P.ORGID,\n" +
+                "    P.NAME,\n" +
+                "    P.SALARY,\n" +
+                "    O.ID,\n" +
+                "    O.NAME\n" +
+                "FROM PUBLIC.ORGANIZATION O\n" +
+                "    /* PUBLIC.PRIMARY_KEY_D: ID = ?1 */\n" +
+                "    /* WHERE O.ID = ?1\n" +
+                "    */\n" +
+                "INNER JOIN PUBLIC.PERSON P\n" +
+                "    /* PUBLIC.PRIMARY_KEY_8: ID = O.ID */\n" +
+                "    ON 1=1\n" +
+                "WHERE (P.ID = O.ID)\n" +
+                "    AND ((O.ID = ?1)\n" +
+                "    AND (P.SALARY > ?2))");
+
+        checkExplain(stat, "EXPLAIN SELECT * FROM PERSON p " +
+                "INNER JOIN ORGANIZATION o ON p.id = o.id WHERE o.id = 10 AND p.salary > 1000",
+                "SELECT\n" +
+                "    P.ID,\n" +
+                "    P.ORGID,\n" +
+                "    P.NAME,\n" +
+                "    P.SALARY,\n" +
+                "    O.ID,\n" +
+                "    O.NAME\n" +
+                "FROM PUBLIC.ORGANIZATION O\n" +
+                "    /* PUBLIC.PRIMARY_KEY_D: ID = 10 */\n" +
+                "    /* WHERE O.ID = 10\n" +
+                "    */\n" +
+                "INNER JOIN PUBLIC.PERSON P\n" +
+                "    /* PUBLIC.PRIMARY_KEY_8: ID = O.ID */\n" +
+                "    ON 1=1\n" +
+                "WHERE (P.ID = O.ID)\n" +
+                "    AND ((O.ID = 10)\n" +
+                "    AND (P.SALARY > 1000))");
+
+        PreparedStatement pStat = conn.prepareStatement(
+                "/* bla-bla */ EXPLAIN SELECT ID FROM ORGANIZATION WHERE id = ?");
+
+        ResultSet rs = pStat.executeQuery();
+
+        assertTrue(rs.next());
+
+        assertEquals("SELECT\n" +
+                "    ID\n" +
+                "FROM PUBLIC.ORGANIZATION\n" +
+                "    /* PUBLIC.PRIMARY_KEY_D: ID = ?1 */\n" +
+                "WHERE ID = ?1",
+                rs.getString(1));
+
+        conn.close();
+    }
+
+    private void testExplainAnalyze() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+
+        stat.execute("CREATE TABLE ORGANIZATION" +
+                "(id int primary key, name varchar(100))");
+        stat.execute("CREATE TABLE PERSON" +
+                "(id int primary key, orgId int, name varchar(100), salary int)");
+
+        stat.execute("INSERT INTO ORGANIZATION VALUES(1, 'org1')");
+        stat.execute("INSERT INTO ORGANIZATION VALUES(2, 'org2')");
+
+        stat.execute("INSERT INTO PERSON VALUES(1, 1, 'person1', 1000)");
+        stat.execute("INSERT INTO PERSON VALUES(2, 1, 'person2', 2000)");
+        stat.execute("INSERT INTO PERSON VALUES(3, 2, 'person3', 3000)");
+        stat.execute("INSERT INTO PERSON VALUES(4, 2, 'person4', 4000)");
+
+        assertThrows(ErrorCode.PARAMETER_NOT_SET_1, stat,
+                "/* bla-bla */ EXPLAIN ANALYZE SELECT ID FROM ORGANIZATION WHERE id = ?");
+
+        PreparedStatement pStat = conn.prepareStatement(
+                "/* bla-bla */ EXPLAIN ANALYZE SELECT ID FROM ORGANIZATION WHERE id = ?");
+
+        assertThrows(ErrorCode.PARAMETER_NOT_SET_1, pStat).executeQuery();
+
+        pStat.setInt(1, 1);
+
+        ResultSet rs = pStat.executeQuery();
+
+        assertTrue(rs.next());
+
+        assertEquals("SELECT\n" +
+                "    ID\n" +
+                "FROM PUBLIC.ORGANIZATION\n" +
+                "    /* PUBLIC.PRIMARY_KEY_D: ID = ?1 */\n" +
+                "    /* scanCount: 2 */\n" +
+                "WHERE ID = ?1",
+                rs.getString(1));
+
+        pStat = conn.prepareStatement("EXPLAIN ANALYZE SELECT * FROM PERSON p " +
+                "INNER JOIN ORGANIZATION o ON o.id = p.id WHERE o.id = ?");
+
+        assertThrows(ErrorCode.PARAMETER_NOT_SET_1, pStat).executeQuery();
+
+        pStat.setInt(1, 1);
+
+        rs = pStat.executeQuery();
+
+        assertTrue(rs.next());
+
+        assertEquals("SELECT\n" +
+                "    P.ID,\n" +
+                "    P.ORGID,\n" +
+                "    P.NAME,\n" +
+                "    P.SALARY,\n" +
+                "    O.ID,\n" +
+                "    O.NAME\n" +
+                "FROM PUBLIC.ORGANIZATION O\n" +
+                "    /* PUBLIC.PRIMARY_KEY_D: ID = ?1 */\n" +
+                "    /* WHERE O.ID = ?1\n" +
+                "    */\n" +
+                "    /* scanCount: 2 */\n" +
+                "INNER JOIN PUBLIC.PERSON P\n" +
+                "    /* PUBLIC.PRIMARY_KEY_8: ID = O.ID\n" +
+                "        AND ID = ?1\n" +
+                "    */\n" +
+                "    ON 1=1\n" +
+                "    /* scanCount: 2 */\n" +
+                "WHERE ((O.ID = ?1)\n" +
+                "    AND (O.ID = P.ID))\n" +
+                "    AND (P.ID = ?1)",
+                rs.getString(1));
+
+        conn.close();
+    }
+
+    private void testAlterTableReconnect() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id identity);");
+        stat.execute("insert into test values(1);");
+        assertThrows(ErrorCode.NULL_NOT_ALLOWED, stat).
+ execute("alter table test add column name varchar not null;"); + conn.close(); + conn = getConnection("cases"); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST"); + rs.next(); + assertEquals("1", rs.getString(1)); + assertFalse(rs.next()); + stat = conn.createStatement(); + stat.execute("drop table test"); + stat.execute("create table test(id identity)"); + stat.execute("insert into test values(1)"); + stat.execute("alter table test alter column id set default 'x'"); + conn.close(); + conn = getConnection("cases"); + stat = conn.createStatement(); + rs = conn.createStatement().executeQuery("SELECT * FROM TEST"); + rs.next(); + assertEquals("1", rs.getString(1)); + assertFalse(rs.next()); + stat.execute("drop table test"); + stat.execute("create table test(id identity)"); + stat.execute("insert into test values(1)"); + assertThrows(ErrorCode.INVALID_DATETIME_CONSTANT_2, stat). + execute("alter table test alter column id date"); + conn.close(); + conn = getConnection("cases"); + rs = conn.createStatement().executeQuery("SELECT * FROM TEST"); + rs.next(); + assertEquals("1", rs.getString(1)); + assertFalse(rs.next()); + conn.close(); + } + + private void testCollation() throws SQLException { + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("SET COLLATION ENGLISH STRENGTH PRIMARY"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello'), " + + "(2, 'World'), (3, 'WORLD'), (4, 'HELLO')"); + stat.execute("create index idxname on test(name)"); + ResultSet rs; + rs = stat.executeQuery("select name from test order by name"); + rs.next(); + assertEquals("Hello", rs.getString(1)); + rs.next(); + assertEquals("HELLO", rs.getString(1)); + rs.next(); + assertEquals("World", rs.getString(1)); + rs.next(); + assertEquals("WORLD", rs.getString(1)); + rs = stat.executeQuery("select name from test where 
name like 'He%'"); + rs.next(); + assertEquals("Hello", rs.getString(1)); + rs.next(); + assertEquals("HELLO", rs.getString(1)); + conn.close(); + } + + private void testBinaryCollation() throws SQLException { + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + ResultSet rs; + + // test the default (SIGNED) + if (Constants.VERSION_MINOR < 4) { + stat.execute("create table bin( x binary(1) );"); + stat.execute("insert into bin(x) values (x'09'),(x'0a'),(x'99'),(x'aa');"); + rs = stat.executeQuery("select * from bin order by x;"); + rs.next(); + assertEquals("99", rs.getString(1)); + rs.next(); + assertEquals("aa", rs.getString(1)); + rs.next(); + assertEquals("09", rs.getString(1)); + rs.next(); + assertEquals("0a", rs.getString(1)); + stat.execute("drop table bin"); + } + + // test UNSIGNED mode + stat.execute("SET BINARY_COLLATION UNSIGNED"); + stat.execute("create table bin( x binary(1) );"); + stat.execute("insert into bin(x) values (x'09'),(x'0a'),(x'99'),(x'aa');"); + rs = stat.executeQuery("select * from bin order by x;"); + rs.next(); + assertEquals("09", rs.getString(1)); + rs.next(); + assertEquals("0a", rs.getString(1)); + rs.next(); + assertEquals("99", rs.getString(1)); + rs.next(); + assertEquals("aa", rs.getString(1)); + + conn.close(); + } + + private void testPersistentSettings() throws SQLException { + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("SET COLLATION de_DE"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "NAME VARCHAR)"); + stat.execute("CREATE INDEX IDXNAME ON TEST(NAME)"); + // \u00f6 = oe + stat.execute("INSERT INTO TEST VALUES(1, 'B\u00f6hlen'), " + + "(2, 'Bach'), (3, 'Bucher')"); + conn.close(); + conn = getConnection("cases"); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT NAME FROM TEST ORDER BY NAME"); + rs.next(); + assertEquals("Bach", rs.getString(1)); + 
rs.next(); + assertEquals("B\u00f6hlen", rs.getString(1)); + rs.next(); + assertEquals("Bucher", rs.getString(1)); + conn.close(); + } + + private void testInsertSelectUnion() throws SQLException { + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ORDER_ID INT PRIMARY KEY, " + + "ORDER_DATE DATETIME, " + + "USER_ID INT, DESCRIPTION VARCHAR, STATE VARCHAR, " + + "TRACKING_ID VARCHAR)"); + Timestamp orderDate = Timestamp.valueOf("2005-05-21 17:46:00"); + String sql = "insert into TEST (ORDER_ID,ORDER_DATE," + + "USER_ID,DESCRIPTION,STATE,TRACKING_ID) " + + "select cast(? as int),cast(? as date),cast(? as int),cast(? as varchar)," + + "cast(? as varchar),cast(? as varchar) union all select ?,?,?,?,?,?"; + PreparedStatement ps = conn.prepareStatement(sql); + ps.setInt(1, 5555); + ps.setTimestamp(2, orderDate); + ps.setInt(3, 2222); + ps.setString(4, "test desc"); + ps.setString(5, "test_state"); + ps.setString(6, "testid"); + ps.setInt(7, 5556); + ps.setTimestamp(8, orderDate); + ps.setInt(9, 2222); + ps.setString(10, "test desc"); + ps.setString(11, "test_state"); + ps.setString(12, "testid"); + assertEquals(2, ps.executeUpdate()); + ps.close(); + conn.close(); + } + + private void testViewReconnect() throws SQLException { + trace("testViewReconnect"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("create view abc as select * from test"); + stat.execute("drop table test cascade"); + conn.close(); + conn = getConnection("cases"); + stat = conn.createStatement(); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat). 
+ execute("select * from abc"); + conn.close(); + } + + private void testDefaultQueryReconnect() throws SQLException { + trace("testDefaultQueryReconnect"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("create table parent(id int)"); + stat.execute("insert into parent values(1)"); + stat.execute("create table test(id int default " + + "(select max(id) from parent), name varchar)"); + + conn.close(); + conn = getConnection("cases"); + stat = conn.createStatement(); + conn.setAutoCommit(false); + stat.execute("insert into parent values(2)"); + stat.execute("insert into test(name) values('test')"); + ResultSet rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + conn.close(); + } + + private void testBigString() throws SQLException { + trace("testBigString"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT, TEXT VARCHAR, TEXT_C CLOB)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?, ?)"); + int len = getSize(1000, 66000); + char[] buff = new char[len]; + + // Unicode problem: + // The UCS code values 0xd800-0xdfff (UTF-16 surrogates) + // as well as 0xfffe and 0xffff (UCS non-characters) + // should not appear in conforming UTF-8 streams. 
+ // (String.getBytes("UTF-8") only returns 1 byte for 0xd800-0xdfff) + Random random = new Random(); + random.setSeed(1); + for (int i = 0; i < len; i++) { + char c; + do { + c = (char) random.nextInt(); + } while (c >= 0xd800 && c <= 0xdfff); + buff[i] = c; + } + String big = new String(buff); + prep.setInt(1, 1); + prep.setString(2, big); + prep.setString(3, big); + prep.execute(); + prep.setInt(1, 2); + prep.setCharacterStream(2, new StringReader(big), 0); + prep.setCharacterStream(3, new StringReader(big), 0); + prep.execute(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals(big, rs.getString(2)); + assertEquals(big, readString(rs.getCharacterStream(2))); + assertEquals(big, rs.getString(3)); + assertEquals(big, readString(rs.getCharacterStream(3))); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals(big, rs.getString(2)); + assertEquals(big, readString(rs.getCharacterStream(2))); + assertEquals(big, rs.getString(3)); + assertEquals(big, readString(rs.getCharacterStream(3))); + rs.next(); + assertFalse(rs.next()); + conn.close(); + } + + private void testConstraintReconnect() throws SQLException { + trace("testConstraintReconnect"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("drop table if exists parent"); + stat.execute("drop table if exists child"); + stat.execute("create table parent(id int)"); + stat.execute("create table child(c_id int, p_id int, " + + "foreign key(p_id) references parent(id))"); + stat.execute("insert into parent values(1), (2)"); + stat.execute("insert into child values(1, 1)"); + stat.execute("insert into child values(2, 2)"); + stat.execute("insert into child values(3, 2)"); + stat.execute("delete from child"); + conn.close(); + conn = getConnection("cases"); + conn.close(); + } + + private void testDoubleRecovery() throws SQLException { + if (config.networked || 
config.googleAppEngine) { + return; + } + trace("testDoubleRecovery"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("SET WRITE_DELAY 0"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + conn.setAutoCommit(false); + stat.execute("INSERT INTO TEST VALUES(2, 'World')"); + crash(conn); + + conn = getConnection("cases"); + stat = conn.createStatement(); + stat.execute("SET WRITE_DELAY 0"); + stat.execute("INSERT INTO TEST VALUES(3, 'Break')"); + crash(conn); + + conn = getConnection("cases"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals("Break", rs.getString(2)); + conn.close(); + } + + private void testRenameReconnect() throws SQLException { + trace("testRenameReconnect"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + conn.createStatement().execute("CREATE TABLE TEST_SEQ" + + "(ID INT IDENTITY, NAME VARCHAR(255))"); + conn.createStatement().execute("CREATE TABLE TEST" + + "(ID INT PRIMARY KEY)"); + conn.createStatement().execute("ALTER TABLE TEST RENAME TO TEST2"); + conn.createStatement().execute("CREATE TABLE TEST_B" + + "(ID INT PRIMARY KEY, NAME VARCHAR, UNIQUE(NAME))"); + conn.close(); + conn = getConnection("cases"); + conn.createStatement().execute("INSERT INTO TEST_SEQ(NAME) VALUES('Hi')"); + ResultSet rs = conn.createStatement().executeQuery("CALL IDENTITY()"); + rs.next(); + assertEquals(1, rs.getInt(1)); + conn.createStatement().execute("SELECT * FROM TEST2"); + conn.createStatement().execute("SELECT * FROM TEST_B"); + conn.createStatement().execute("ALTER TABLE TEST_B RENAME TO TEST_B2"); + conn.close(); + conn = 
getConnection("cases"); + conn.createStatement().execute("SELECT * FROM TEST_B2"); + conn.createStatement().execute( + "INSERT INTO TEST_SEQ(NAME) VALUES('World')"); + rs = conn.createStatement().executeQuery("CALL IDENTITY()"); + rs.next(); + assertEquals(2, rs.getInt(1)); + conn.close(); + } + + private void testAllSizes() throws SQLException { + trace("testAllSizes"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(A INT, B INT, C INT, DATA VARCHAR)"); + int increment = getSize(100, 1); + for (int i = 1; i < 500; i += increment) { + StringBuilder buff = new StringBuilder(); + buff.append("CREATE TABLE TEST"); + for (int j = 0; j < i; j++) { + buff.append('a'); + } + buff.append("(ID INT)"); + String sql = buff.toString(); + stat.execute(sql); + stat.execute("INSERT INTO TEST VALUES(" + i + ", 0, 0, '" + sql + "')"); + } + conn.close(); + conn = getConnection("cases"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + while (rs.next()) { + int id = rs.getInt(1); + String s = rs.getString("DATA"); + if (!s.endsWith(")")) { + fail("id=" + id); + } + } + conn.close(); + } + + private void testSelectForUpdate() throws SQLException { + trace("testSelectForUpdate"); + deleteDb("cases"); + Connection conn1 = getConnection("cases"); + Statement stat1 = conn1.createStatement(); + stat1.execute("CREATE TABLE TEST(ID INT)"); + stat1.execute("INSERT INTO TEST VALUES(1)"); + conn1.setAutoCommit(false); + stat1.execute("SELECT * FROM TEST FOR UPDATE"); + Connection conn2 = getConnection("cases"); + Statement stat2 = conn2.createStatement(); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, stat2). 
+ execute("UPDATE TEST SET ID=2"); + conn1.commit(); + stat2.execute("UPDATE TEST SET ID=2"); + conn1.close(); + conn2.close(); + } + + private void testMutableObjects() throws SQLException { + trace("testMutableObjects"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT, D DATE, T TIME, TS TIMESTAMP)"); + stat.execute("INSERT INTO TEST VALUES(1, '2001-01-01', " + + "'20:00:00', '2002-02-02 22:22:22.2')"); + stat.execute("INSERT INTO TEST VALUES(1, '2001-01-01', " + + "'20:00:00', '2002-02-02 22:22:22.2')"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + rs.next(); + Date d1 = rs.getDate("D"); + Time t1 = rs.getTime("T"); + Timestamp ts1 = rs.getTimestamp("TS"); + rs.next(); + Date d2 = rs.getDate("D"); + Time t2 = rs.getTime("T"); + Timestamp ts2 = rs.getTimestamp("TS"); + assertTrue(ts1 != ts2); + assertTrue(d1 != d2); + assertTrue(t1 != t2); + assertTrue(t2 != rs.getObject("T")); + assertTrue(d2 != rs.getObject("D")); + assertTrue(ts2 != rs.getObject("TS")); + assertFalse(rs.next()); + conn.close(); + } + + private void testCreateDrop() throws SQLException { + trace("testCreateDrop"); + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("create table employee(id int, firstName VARCHAR(50), " + + "salary decimal(10, 2), " + + "superior_id int, CONSTRAINT PK_employee PRIMARY KEY (id), " + + "CONSTRAINT FK_superior FOREIGN KEY (superior_id) " + + "REFERENCES employee(ID))"); + stat.execute("DROP TABLE employee"); + conn.close(); + conn = getConnection("cases"); + conn.close(); + } + + private void testPolePos() throws SQLException { + trace("testPolePos"); + // poleposition-0.20 + + Connection c0 = getConnection("cases"); + c0.createStatement().executeUpdate("SET AUTOCOMMIT FALSE"); + c0.createStatement().executeUpdate( + "create table australia (ID INTEGER NOT NULL, " + + "Name 
VARCHAR(100), firstName VARCHAR(100), " + + "Points INTEGER, LicenseID INTEGER, PRIMARY KEY(ID))"); + c0.createStatement().executeUpdate("COMMIT"); + c0.close(); + + c0 = getConnection("cases"); + c0.createStatement().executeUpdate("SET AUTOCOMMIT FALSE"); + PreparedStatement p15 = c0.prepareStatement("insert into australia" + + "(id, Name, firstName, Points, LicenseID) values (?, ?, ?, ?, ?)"); + int len = getSize(1, 1000); + for (int i = 0; i < len; i++) { + p15.setInt(1, i); + p15.setString(2, "Pilot_" + i); + p15.setString(3, "Herkules"); + p15.setInt(4, i); + p15.setInt(5, i); + p15.executeUpdate(); + } + c0.createStatement().executeUpdate("COMMIT"); + c0.close(); + + // c0=getConnection("cases"); + // c0.createStatement().executeUpdate("SET AUTOCOMMIT FALSE"); + // c0.createStatement().executeQuery("select * from australia"); + // c0.createStatement().executeQuery("select * from australia"); + // c0.close(); + + // c0=getConnection("cases"); + // c0.createStatement().executeUpdate("SET AUTOCOMMIT FALSE"); + // c0.createStatement().executeUpdate("COMMIT"); + // c0.createStatement().executeUpdate("delete from australia"); + // c0.createStatement().executeUpdate("COMMIT"); + // c0.close(); + + c0 = getConnection("cases"); + c0.createStatement().executeUpdate("SET AUTOCOMMIT FALSE"); + c0.createStatement().executeUpdate("drop table australia"); + c0.createStatement().executeUpdate("create table australia " + + "(ID INTEGER NOT NULL, Name VARCHAR(100), " + + "firstName VARCHAR(100), Points INTEGER, " + + "LicenseID INTEGER, PRIMARY KEY(ID))"); + c0.createStatement().executeUpdate("COMMIT"); + c0.close(); + + c0 = getConnection("cases"); + c0.createStatement().executeUpdate("SET AUTOCOMMIT FALSE"); + PreparedStatement p65 = c0.prepareStatement( + "insert into australia" + + "(id, Name, FirstName, Points, LicenseID) values (?, ?, ?, ?, ?)"); + len = getSize(1, 1000); + for (int i = 0; i < len; i++) { + p65.setInt(1, i); + p65.setString(2, "Pilot_" + i); + 
p65.setString(3, "Herkules"); + p65.setInt(4, i); + p65.setInt(5, i); + p65.executeUpdate(); + } + c0.createStatement().executeUpdate("COMMIT"); + c0.createStatement().executeUpdate("COMMIT"); + c0.createStatement().executeUpdate("COMMIT"); + c0.close(); + + c0 = getConnection("cases"); + c0.close(); + } + + private void testQuick() throws SQLException { + trace("testQuick"); + deleteDb("cases"); + + Connection c0 = getConnection("cases"); + c0.createStatement().executeUpdate( + "create table test (ID int PRIMARY KEY)"); + c0.createStatement().executeUpdate("insert into test values(1)"); + c0.createStatement().executeUpdate("drop table test"); + c0.createStatement().executeUpdate( + "create table test (ID int PRIMARY KEY)"); + c0.close(); + + c0 = getConnection("cases"); + c0.createStatement().executeUpdate("insert into test values(1)"); + c0.close(); + + c0 = getConnection("cases"); + c0.close(); + } + + private void testOrderByWithSubselect() throws SQLException { + deleteDb("cases"); + + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + stat.execute("create table master" + + "(id number primary key, name varchar2(30));"); + stat.execute("create table detail" + + "(id number references master(id), location varchar2(30));"); + + stat.execute("Insert into master values(1,'a'), (2,'b'), (3,'c');"); + stat.execute("Insert into detail values(1,'a'), (2,'b'), (3,'c');"); + + ResultSet rs = stat.executeQuery( + "select master.id, master.name " + + "from master " + + "where master.id in (select detail.id from detail) " + + "order by master.id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + + conn.close(); + } + + private void testDeleteAndDropTableWithLobs(boolean useDrop) + throws SQLException { + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + 
stat.execute("CREATE TABLE TEST(id int, content BLOB)");
+        stat.execute("set MAX_LENGTH_INPLACE_LOB 1");
+
+        PreparedStatement prepared = conn.prepareStatement(
+                "INSERT INTO TEST VALUES(?, ?)");
+        byte[] blobContent = "BLOB_CONTENT".getBytes();
+        prepared.setInt(1, 1);
+        prepared.setBytes(2, blobContent);
+        prepared.execute();
+
+        if (useDrop) {
+            stat.execute("DROP TABLE TEST");
+        } else {
+            stat.execute("DELETE FROM TEST");
+        }
+
+        conn.close();
+
+        List<String> list = FileUtils.newDirectoryStream(getBaseDir() +
+                "/cases.lobs.db");
+        assertEquals("Lob file was not deleted: " + list, 0, list.size());
+    }
+
+    private void testMinimalCoveringIndexPlan() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+
+        stat.execute("create table t(a int, b int, c int)");
+        stat.execute("create index a_idx on t(a)");
+        stat.execute("create index b_idx on t(b)");
+        stat.execute("create index ab_idx on t(a, b)");
+        stat.execute("create index abc_idx on t(a, b, c)");
+
+        ResultSet rs;
+        String plan;
+
+        rs = stat.executeQuery("explain select a from t");
+        assertTrue(rs.next());
+        plan = rs.getString(1);
+        assertContains(plan, "/* PUBLIC.A_IDX */");
+        rs.close();
+
+        rs = stat.executeQuery("explain select b from t");
+        assertTrue(rs.next());
+        plan = rs.getString(1);
+        assertContains(plan, "/* PUBLIC.B_IDX */");
+        rs.close();
+
+        rs = stat.executeQuery("explain select b, a from t");
+        assertTrue(rs.next());
+        plan = rs.getString(1);
+        assertContains(plan, "/* PUBLIC.AB_IDX */");
+        rs.close();
+
+        rs = stat.executeQuery("explain select b, a, c from t");
+        assertTrue(rs.next());
+        plan = rs.getString(1);
+        assertContains(plan, "/* PUBLIC.ABC_IDX */");
+        rs.close();
+
+        conn.close();
+    }
+
+    private void testMinMaxDirectLookupIndex() throws SQLException {
+        deleteDb("cases");
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+
+        stat.execute("create table t(a int, b int)");
+
stat.execute("create index b_idx on t(b desc)"); + stat.execute("create index ab_idx on t(a, b)"); + + final int count = 100; + + PreparedStatement p = conn.prepareStatement("insert into t values (?,?)"); + for (int i = 0; i <= count; i++) { + p.setInt(1, i); + p.setInt(2, count - i); + assertEquals(1, p.executeUpdate()); + } + p.close(); + + ResultSet rs; + String plan; + + rs = stat.executeQuery("select max(b) from t"); + assertTrue(rs.next()); + assertEquals(count, rs.getInt(1)); + rs.close(); + + rs = stat.executeQuery("explain select max(b) from t"); + assertTrue(rs.next()); + plan = rs.getString(1); + assertContains(plan, "/* PUBLIC.B_IDX */"); + assertContains(plan, "/* direct lookup */"); + rs.close(); + + rs = stat.executeQuery("select min(b) from t"); + assertTrue(rs.next()); + assertEquals(0, rs.getInt(1)); + rs.close(); + + rs = stat.executeQuery("explain select min(b) from t"); + assertTrue(rs.next()); + plan = rs.getString(1); + assertContains(plan, "/* PUBLIC.B_IDX */"); + assertContains(plan, "/* direct lookup */"); + rs.close(); + + conn.close(); + } + + private void testDeleteTop() throws SQLException { + deleteDb("cases"); + Connection conn = getConnection("cases"); + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE TEST(id int) AS " + + "SELECT x FROM system_range(1, 100)"); + stat.execute("DELETE TOP 10 FROM TEST"); + ResultSet rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + assertTrue(rs.next()); + assertEquals(90, rs.getInt(1)); + + stat.execute("DELETE FROM TEST LIMIT ((SELECT COUNT(*) FROM TEST) / 10)"); + rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + assertTrue(rs.next()); + assertEquals(81, rs.getInt(1)); + + rs = stat.executeQuery("EXPLAIN DELETE " + + "FROM TEST LIMIT ((SELECT COUNT(*) FROM TEST) / 10)"); + rs.next(); + assertEquals("DELETE FROM PUBLIC.TEST\n" + + " /* PUBLIC.TEST.tableScan */\n" + + "LIMIT ((SELECT\n" + + " COUNT(*)\n" + + "FROM PUBLIC.TEST\n" + + " /* PUBLIC.TEST.tableScan */\n" 
+
+                "/* direct lookup */) / 10)",
+                rs.getString(1));
+
+        PreparedStatement prep;
+        prep = conn.prepareStatement("SELECT * FROM TEST LIMIT ?");
+        prep.setInt(1, 10);
+        prep.execute();
+
+        prep = conn.prepareStatement("DELETE FROM TEST LIMIT ?");
+        prep.setInt(1, 10);
+        prep.execute();
+        rs = stat.executeQuery("SELECT COUNT(*) FROM TEST");
+        assertTrue(rs.next());
+        assertEquals(71, rs.getInt(1));
+
+        conn.close();
+    }
+
+    /** Tests fix for bug #682: Queries with 'like' expressions may filter rows incorrectly */
+    private void testLikeExpressions() throws SQLException {
+        Connection conn = getConnection("cases");
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("select * from (select 'fo%' a union all select '%oo') where 'foo' like a");
+        assertTrue(rs.next());
+        assertEquals("fo%", rs.getString(1));
+        assertTrue(rs.next());
+        assertEquals("%oo", rs.getString(1));
+        conn.close();
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/db/TestCheckpoint.java b/modules/h2/src/test/java/org/h2/test/db/TestCheckpoint.java
new file mode 100644
index 0000000000000..ac2503bd15113
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/db/TestCheckpoint.java
@@ -0,0 +1,57 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.db;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+import org.h2.test.TestBase;
+
+/**
+ * Tests the CHECKPOINT SQL statement.
+ */
+public class TestCheckpoint extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws SQLException {
+        // TODO test checkpoint with rollback, not only just run the command
+        deleteDb("checkpoint");
+        Connection c0 = getConnection("checkpoint");
+        Statement s0 = c0.createStatement();
+        Connection c1 = getConnection("checkpoint");
+        Statement s1 = c1.createStatement();
+        s1.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))");
+        s1.execute("INSERT INTO TEST VALUES(1, 'Hello')");
+        s0.execute("CHECKPOINT");
+
+        s1.execute("INSERT INTO TEST VALUES(2, 'World')");
+        c1.setAutoCommit(false);
+        s1.execute("INSERT INTO TEST VALUES(3, 'Maybe')");
+        s0.execute("CHECKPOINT");
+
+        s1.execute("INSERT INTO TEST VALUES(4, 'Or not')");
+        s0.execute("CHECKPOINT");
+
+        s1.execute("INSERT INTO TEST VALUES(5, 'ok yes')");
+        s1.execute("COMMIT");
+        s0.execute("CHECKPOINT");
+
+        c0.close();
+        c1.close();
+        deleteDb("checkpoint");
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/db/TestCluster.java b/modules/h2/src/test/java/org/h2/test/db/TestCluster.java
new file mode 100644
index 0000000000000..670d0822cdb93
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/db/TestCluster.java
@@ -0,0 +1,528 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.db;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Properties;
+
+import org.h2.api.ErrorCode;
+import org.h2.store.fs.FileUtils;
+import org.h2.test.TestBase;
+import org.h2.tools.CreateCluster;
+import org.h2.tools.DeleteDbFiles;
+import org.h2.tools.Server;
+import org.h2.util.JdbcUtils;
+
+/**
+ * Test the cluster feature.
+ */
+public class TestCluster extends TestBase {
+
+    /**
+     * Run just this test.
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testClob(); + testRecover(); + testRollback(); + testCase(); + testClientInfo(); + testCreateClusterAtRuntime(); + testStartStopCluster(); + } + + private void testClob() throws SQLException { + if (config.memory || config.networked || config.cipher != null) { + return; + } + deleteFiles(); + + org.h2.Driver.load(); + String user = getUser(), password = getPassword(); + Connection conn; + Statement stat; + + Server n1 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node1").start(); + int port1 = n1.getPort(); + String url1 = getURL("jdbc:h2:tcp://localhost:" + port1 + "/test", false); + + conn = getConnection(url1, user, password); + stat = conn.createStatement(); + stat.execute("create table t1(id int, name clob)"); + stat.execute("insert into t1 values(1, repeat('Hello', 50))"); + conn.close(); + + Server n2 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node2").start(); + int port2 = n2.getPort(); + String url2 = getURL("jdbc:h2:tcp://localhost:" + port2 + "/test", false); + + String serverList = "localhost:" + port1 + ",localhost:" + port2; + String urlCluster = getURL("jdbc:h2:tcp://" + serverList + "/test", true); + CreateCluster.main("-urlSource", url1, "-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + conn = getConnection(urlCluster, user, password); + conn.close(); + + n1.stop(); + n2.stop(); + deleteFiles(); + } + + private void testRecover() throws SQLException { + if (config.memory || config.networked || config.cipher != null) { + return; + } + deleteFiles(); + + org.h2.Driver.load(); + String user = getUser(), password = getPassword(); + Connection conn; + Statement stat; + ResultSet rs; + + + Server server1 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + 
"/node1").start(); + int port1 = server1.getPort(); + Server server2 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node2").start(); + int port2 = server2.getPort(); + + String url1 = getURL("jdbc:h2:tcp://localhost:" + port1 + "/test", true); + String url2 = getURL("jdbc:h2:tcp://localhost:" + port2 + "/test", true); + String serverList = "localhost:" + port1 + ",localhost:" + port2; + String urlCluster = getURL("jdbc:h2:tcp://" + serverList + "/test", true); + + CreateCluster.main("-urlSource", url1, "-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + conn = getConnection(urlCluster, user, password); + stat = conn.createStatement(); + stat.execute("create table t1(id int, name varchar(30))"); + stat.execute("insert into t1 values(1, 'a'), (2, 'b'), (3, 'c')"); + rs = stat.executeQuery("select count(*) from t1"); + rs.next(); + assertEquals(3, rs.getInt(1)); + + server2.stop(); + DeleteDbFiles.main("-dir", getBaseDir() + "/node2", "-quiet"); + + stat.execute("insert into t1 values(4, 'd'), (5, 'e')"); + rs = stat.executeQuery("select count(*) from t1"); + rs.next(); + assertEquals(5, rs.getInt(1)); + + server2 = org.h2.tools.Server.createTcpServer("-tcpPort", + "" + port2 , "-baseDir", getBaseDir() + "/node2").start(); + CreateCluster.main("-urlSource", url1, "-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + conn.close(); + + conn = getConnection(urlCluster, user, password); + stat = conn.createStatement(); + rs = stat.executeQuery("select count(*) from t1"); + rs.next(); + assertEquals(5, rs.getInt(1)); + conn.close(); + + server1.stop(); + server2.stop(); + deleteFiles(); + } + + private void testRollback() throws SQLException { + if (config.memory || config.networked || config.cipher != null) { + return; + } + deleteFiles(); + + org.h2.Driver.load(); + String user = getUser(), password = getPassword(); + Connection conn; + Statement stat; + ResultSet rs; + 
+ Server n1 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node1").start(); + int port1 = n1.getPort(); + Server n2 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node2").start(); + int port2 = n2.getPort(); + + String url1 = getURL("jdbc:h2:tcp://localhost:" + port1 + "/test", true); + String url2 = getURL("jdbc:h2:tcp://localhost:" + port2 + "/test", true); + String serverList = "localhost:" + port1 + ",localhost:" + port2; + String urlCluster = getURL("jdbc:h2:tcp://" + serverList + "/test", true); + + CreateCluster.main("-urlSource", url1, "-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + conn = getConnection(urlCluster, user, password); + stat = conn.createStatement(); + assertTrue(conn.getAutoCommit()); + stat.execute("create table test(id int, name varchar)"); + assertTrue(conn.getAutoCommit()); + stat.execute("set autocommit false"); + // issue 259 + // assertFalse(conn.getAutoCommit()); + conn.setAutoCommit(false); + assertFalse(conn.getAutoCommit()); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("rollback"); + rs = stat.executeQuery("select * from test order by id"); + assertFalse(rs.next()); + conn.close(); + + n1.stop(); + n2.stop(); + deleteFiles(); + } + + private void testCase() throws SQLException { + if (config.memory || config.networked || config.cipher != null) { + return; + } + deleteFiles(); + + org.h2.Driver.load(); + String user = getUser(), password = getPassword(); + Connection conn; + Statement stat; + ResultSet rs; + + + Server n1 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node1").start(); + int port1 = n1.getPort(); + Server n2 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node2").start(); + int port2 = n2.getPort(); + String serverList = "localhost:" + port1 + ",localhost:" + port2; + String url1 = getURL("jdbc:h2:tcp://localhost:" + port1 + "/test", true); + String url2 = 
getURL("jdbc:h2:tcp://localhost:" + port2 + "/test", true); + String urlCluster = getURL("jdbc:h2:tcp://" + serverList + "/test", true); + + CreateCluster.main("-urlSource", url1, "-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + conn = getConnection(urlCluster, user, password); + stat = conn.createStatement(); + assertTrue(conn.getAutoCommit()); + stat.execute("create table test(name int)"); + assertTrue(conn.getAutoCommit()); + stat.execute("insert into test values(1)"); + conn.setAutoCommit(false); + assertFalse(conn.getAutoCommit()); + stat.execute("insert into test values(2)"); + assertFalse(conn.getAutoCommit()); + conn.rollback(); + rs = stat.executeQuery("select * from test order by name"); + assertTrue(rs.next()); + assertFalse(rs.next()); + conn.close(); + + // stop server 2, and test if only one server is available + n2.stop(); + + conn = getConnection(urlCluster, user, password); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + conn.close(); + + n1.stop(); + deleteFiles(); + } + + private void testClientInfo() throws SQLException { + if (config.memory || config.networked || config.cipher != null) { + return; + } + deleteFiles(); + + org.h2.Driver.load(); + String user = getUser(), password = getPassword(); + Connection conn; + + + Server n1 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node1").start(); + int port1 = n1.getPort(); + Server n2 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node2").start(); + int port2 = n2.getPort(); + + String serverList = "localhost:" + port1 + ",localhost:" + port2; + String url1 = getURL("jdbc:h2:tcp://localhost:" + port1 + "/test", true); + String url2 = getURL("jdbc:h2:tcp://localhost:" + port2 + "/test", true); + String urlCluster = getURL("jdbc:h2:tcp://" + serverList + "/test", true); + + CreateCluster.main("-urlSource", url1, 
"-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + conn = getConnection(urlCluster, user, password); + Properties p = conn.getClientInfo(); + assertEquals("2", p.getProperty("numServers")); + assertEquals("127.0.0.1:" + port1, p.getProperty("server0")); + assertEquals("127.0.0.1:" + port2, p.getProperty("server1")); + + assertEquals("2", conn.getClientInfo("numServers")); + assertEquals("127.0.0.1:" + port1, conn.getClientInfo("server0")); + assertEquals("127.0.0.1:" + port2, conn.getClientInfo("server1")); + conn.close(); + + // stop server 2, and test if only one server is available + n2.stop(); + + conn = getConnection(urlCluster, user, password); + p = conn.getClientInfo(); + + assertEquals("1", p.getProperty("numServers")); + assertEquals("127.0.0.1:" + port1, p.getProperty("server0")); + assertEquals("1", conn.getClientInfo("numServers")); + assertEquals("127.0.0.1:" + port1, conn.getClientInfo("server0")); + conn.close(); + + n1.stop(); + deleteFiles(); + } + + private void testCreateClusterAtRuntime() throws SQLException { + if (config.memory || config.networked || config.cipher != null) { + return; + } + deleteFiles(); + + org.h2.Driver.load(); + String user = getUser(), password = getPassword(); + Connection conn; + Statement stat; + int len = 10; + + // initialize the database + Server n1 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node1").start(); + int port1 = n1.getPort(); + String url1 = getURL("jdbc:h2:tcp://localhost:" + port1 + "/test", false); + conn = getConnection(url1, user, password); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar) as " + + "select x, 'Data' || x from system_range(0, " + (len - 1) + ")"); + stat.execute("create user test password 'test'"); + stat.execute("grant all on test to test"); + + // start the second server + Server n2 = org.h2.tools.Server.createTcpServer("-baseDir", getBaseDir() + "/node2").start(); 
+ int port2 = n2.getPort(); + String url2 = getURL("jdbc:h2:tcp://localhost:" + port2 + "/test", false); + + // copy the database and initialize the cluster + String serverList = "localhost:" + port1 + ",localhost:" + port2; + CreateCluster.main("-urlSource", url1, "-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + // check the original connection is closed + assertThrows(ErrorCode.CONNECTION_BROKEN_1, stat). + execute("select * from test"); + JdbcUtils.closeSilently(conn); + + // test the cluster connection + String urlCluster = getURL("jdbc:h2:tcp://" + serverList + "/test", false); + Connection connApp = getConnection(urlCluster + + ";AUTO_RECONNECT=TRUE", user, password); + check(connApp, len, "'" + serverList + "'"); + + // delete the rows, but don't commit + connApp.setAutoCommit(false); + connApp.createStatement().execute("delete from test"); + + // stop server 2, and test if only one server is available + n2.stop(); + + // rollback the transaction + connApp.createStatement().executeQuery("select count(*) from test"); + connApp.rollback(); + check(connApp, len, "''"); + connApp.setAutoCommit(true); + + // re-create the cluster + n2 = org.h2.tools.Server.createTcpServer("-tcpPort", "" + port2, + "-baseDir", getBaseDir() + "/node2").start(); + CreateCluster.main("-urlSource", url1, "-urlTarget", url2, + "-user", user, "-password", password, "-serverList", + serverList); + + // test the cluster connection + check(connApp, len, "'" + serverList + "'"); + connApp.close(); + + // test a non-admin user + String user2 = "test", password2 = getPassword("test"); + connApp = getConnection(urlCluster, user2, password2); + check(connApp, len, "'" + serverList + "'"); + connApp.close(); + + n1.stop(); + + // test non-admin cluster connection if only one server runs + Connection connApp2 = getConnection(urlCluster + + ";AUTO_RECONNECT=TRUE", user2, password2); + check(connApp2, len, "''"); + connApp2.close(); + // test non-admin 
cluster connection if only one server runs + connApp2 = getConnection(urlCluster + + ";AUTO_RECONNECT=TRUE", user2, password2); + check(connApp2, len, "''"); + connApp2.close(); + + n2.stop(); + deleteFiles(); + } + + private void testStartStopCluster() throws SQLException { + if (config.memory || config.networked || config.cipher != null) { + return; + } + int port1 = 9193, port2 = 9194; + String serverList = "localhost:" + port1 + ",localhost:" + port2; + deleteFiles(); + + // initialize the database + Connection conn; + org.h2.Driver.load(); + + String urlNode1 = getURL("node1/test", true); + String urlNode2 = getURL("node2/test", true); + String user = getUser(), password = getPassword(); + conn = getConnection(urlNode1, user, password); + Statement stat; + stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?)"); + int len = getSize(10, 1000); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.setString(2, "Data" + i); + prep.executeUpdate(); + } + check(conn, len, "''"); + conn.close(); + + // copy the database and initialize the cluster + CreateCluster.main("-urlSource", urlNode1, "-urlTarget", + urlNode2, "-user", user, "-password", password, "-serverList", + serverList); + + // start both servers + Server n1 = org.h2.tools.Server.createTcpServer("-tcpPort", "" + + port1, "-baseDir", getBaseDir() + "/node1").start(); + Server n2 = org.h2.tools.Server.createTcpServer("-tcpPort", "" + + port2, "-baseDir", getBaseDir() + "/node2").start(); + + // try to connect in standalone mode - should fail + // should not be able to connect in standalone mode + assertThrows(ErrorCode.CLUSTER_ERROR_DATABASE_RUNS_CLUSTERED_1, this). + getConnection("jdbc:h2:tcp://localhost:"+port1+"/test", user, password); + assertThrows(ErrorCode.CLUSTER_ERROR_DATABASE_RUNS_CLUSTERED_1, this). 
+ getConnection("jdbc:h2:tcp://localhost:"+port2+"/test", user, password); + + // test a cluster connection + conn = getConnection("jdbc:h2:tcp://" + serverList + "/test", user, password); + check(conn, len, "'"+serverList+"'"); + conn.close(); + + // stop server 2, and test if only one server is available + n2.stop(); + conn = getConnection("jdbc:h2:tcp://" + serverList + "/test", user, password); + check(conn, len, "''"); + conn.close(); + conn = getConnection("jdbc:h2:tcp://" + serverList + "/test", user, password); + check(conn, len, "''"); + conn.close(); + + // disable the cluster + conn = getConnection("jdbc:h2:tcp://localhost:"+ + port1+"/test;CLUSTER=''", user, password); + conn.close(); + n1.stop(); + + // re-create the cluster + DeleteDbFiles.main("-dir", getBaseDir() + "/node2", "-quiet"); + CreateCluster.main("-urlSource", urlNode1, "-urlTarget", + urlNode2, "-user", user, "-password", password, "-serverList", + serverList); + n1 = org.h2.tools.Server.createTcpServer("-tcpPort", "" + + port1, "-baseDir", getBaseDir() + "/node1").start(); + n2 = org.h2.tools.Server.createTcpServer("-tcpPort", "" + + port2, "-baseDir", getBaseDir() + "/node2").start(); + + conn = getConnection("jdbc:h2:tcp://" + serverList + "/test", user, password); + stat = conn.createStatement(); + stat.execute("CREATE TABLE BOTH(ID INT)"); + + n1.stop(); + + stat.execute("CREATE TABLE A(ID INT)"); + conn.close(); + n2.stop(); + + n1 = org.h2.tools.Server.createTcpServer("-tcpPort", "" + + port1, "-baseDir", getBaseDir() + "/node1").start(); + conn = getConnection("jdbc:h2:tcp://localhost:"+ + port1+"/test;CLUSTER=''", user, password); + check(conn, len, "''"); + conn.close(); + n1.stop(); + + n2 = org.h2.tools.Server.createTcpServer("-tcpPort", "" + + port2, "-baseDir", getBaseDir() + "/node2").start(); + conn = getConnection("jdbc:h2:tcp://localhost:" + + port2 + "/test;CLUSTER=''", user, password); + check(conn, len, "''"); + conn.createStatement().execute("SELECT * FROM A"); + 
conn.close(); + n2.stop(); + deleteFiles(); + } + + private void deleteFiles() throws SQLException { + DeleteDbFiles.main("-dir", getBaseDir() + "/node1", "-quiet"); + DeleteDbFiles.main("-dir", getBaseDir() + "/node2", "-quiet"); + FileUtils.delete(getBaseDir() + "/node1"); + FileUtils.delete(getBaseDir() + "/node2"); + } + + private void check(Connection conn, int len, String expectedCluster) + throws SQLException { + PreparedStatement prep = conn.prepareStatement("SELECT * FROM TEST WHERE ID=?"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + ResultSet rs = prep.executeQuery(); + rs.next(); + assertEquals("Data" + i, rs.getString(2)); + assertFalse(rs.next()); + } + ResultSet rs = conn.createStatement().executeQuery( + "SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME='CLUSTER'"); + String cluster = rs.next() ? rs.getString(1) : "''"; + assertEquals(expectedCluster, cluster); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestCompatibility.java b/modules/h2/src/test/java/org/h2/test/db/TestCompatibility.java new file mode 100644 index 0000000000000..61d1f98129555 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestCompatibility.java @@ -0,0 +1,578 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests the compatibility with other databases. + */ +public class TestCompatibility extends TestBase { + + private Connection conn; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("compatibility"); + + testOnDuplicateKey(); + testCaseSensitiveIdentifiers(); + testKeyAsColumnInMySQLMode(); + + conn = getConnection("compatibility"); + testDomain(); + testColumnAlias(); + testUniqueIndexSingleNull(); + testUniqueIndexOracle(); + testPostgreSQL(); + testHsqlDb(); + testMySQL(); + testDB2(); + testDerby(); + testSybaseAndMSSQLServer(); + testIgnite(); + + conn.close(); + deleteDb("compatibility"); + } + + private void testOnDuplicateKey() throws SQLException { + Connection c = getConnection("compatibility;MODE=MYSQL"); + Statement stat = c.createStatement(); + stat.execute("set mode mysql"); + stat.execute("create schema s2"); + stat.execute("create table s2.test(id int primary key, name varchar(255))"); + stat.execute("insert into s2.test(id, name) values(1, 'a')"); + stat.execute("insert into s2.test(id, name) values(1, 'b') " + + "on duplicate key update name = values(name)"); + stat.execute("drop schema s2 cascade"); + c.close(); + } + + private void testKeyAsColumnInMySQLMode() throws SQLException { + Connection c = getConnection("compatibility;MODE=MYSQL"); + Statement stat = c.createStatement(); + stat.execute("create table test(id int primary key, key varchar)"); + stat.execute("drop table test"); + c.close(); + } + + private void testCaseSensitiveIdentifiers() throws SQLException { + Connection c = getConnection("compatibility;DATABASE_TO_UPPER=FALSE"); + Statement stat = c.createStatement(); + stat.execute("create table test(id int primary key, name varchar) " + + "as select 1, 'hello'"); + assertThrows(ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1, stat). + execute("create table test(id int primary key, name varchar)"); + assertThrows(ErrorCode.DUPLICATE_COLUMN_NAME_1, stat). 
+ execute("alter table test add column Name varchar"); + ResultSet rs; + + DatabaseMetaData meta = c.getMetaData(); + rs = meta.getTables(null, null, "test", null); + assertTrue(rs.next()); + assertEquals("test", rs.getString("TABLE_NAME")); + + rs = stat.executeQuery("select id, name from test"); + assertEquals("id", rs.getMetaData().getColumnLabel(1)); + assertEquals("name", rs.getMetaData().getColumnLabel(2)); + + rs = stat.executeQuery("select Id, Name from Test"); + assertEquals("id", rs.getMetaData().getColumnLabel(1)); + assertEquals("name", rs.getMetaData().getColumnLabel(2)); + + rs = stat.executeQuery("select ID, NAME from TEST"); + assertEquals("id", rs.getMetaData().getColumnLabel(1)); + assertEquals("name", rs.getMetaData().getColumnLabel(2)); + + stat.execute("select COUNT(*), count(*), Count(*), Sum(id) from test"); + + stat.execute("select LENGTH(name), length(name), Length(name) from test"); + + stat.execute("select t.id from test t group by t.id"); + stat.execute("select id from test t group by t.id"); + stat.execute("select id from test group by ID"); + stat.execute("select id as c from test group by c"); + stat.execute("select t.id from test t group by T.ID"); + stat.execute("select id from test t group by T.ID"); + + stat.execute("drop table test"); + c.close(); + } + + private void testDomain() throws SQLException { + if (config.memory) { + return; + } + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key) as select 1"); + assertThrows(ErrorCode.USER_DATA_TYPE_ALREADY_EXISTS_1, stat). 
+ execute("create domain int as varchar"); + conn.close(); + conn = getConnection("compatibility"); + stat = conn.createStatement(); + stat.execute("insert into test values(2)"); + stat.execute("drop table test"); + } + + private void testColumnAlias() throws SQLException { + Statement stat = conn.createStatement(); + String[] modes = { "PostgreSQL", "MySQL", "HSQLDB", "MSSQLServer", + "Derby", "Oracle", "Regular" }; + String columnAlias; + columnAlias = "MySQL,Regular"; + stat.execute("CREATE TABLE TEST(ID INT)"); + for (String mode : modes) { + stat.execute("SET MODE " + mode); + ResultSet rs = stat.executeQuery("SELECT ID I FROM TEST"); + ResultSetMetaData meta = rs.getMetaData(); + String columnName = meta.getColumnName(1); + String tableName = meta.getTableName(1); + if ("ID".equals(columnName) && "TEST".equals(tableName)) { + assertTrue(mode + " mode should not support columnAlias", + columnAlias.contains(mode)); + } else if ("I".equals(columnName) && tableName.equals("")) { + assertTrue(mode + " mode should support columnAlias", + columnAlias.indexOf(mode) < 0); + } else { + fail(); + } + } + stat.execute("DROP TABLE TEST"); + } + + private void testUniqueIndexSingleNull() throws SQLException { + Statement stat = conn.createStatement(); + String[] modes = { "PostgreSQL", "MySQL", "HSQLDB", "MSSQLServer", + "Derby", "Oracle", "Regular" }; + String multiNull = "PostgreSQL,MySQL,Oracle,Regular"; + for (String mode : modes) { + stat.execute("SET MODE " + mode); + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("CREATE UNIQUE INDEX IDX_ID_U ON TEST(ID)"); + try { + stat.execute("INSERT INTO TEST VALUES(1), (2), (NULL), (NULL)"); + assertTrue(mode + " mode should not support multiple NULL", + multiNull.contains(mode)); + } catch (SQLException e) { + assertTrue(mode + " mode should support multiple NULL", + multiNull.indexOf(mode) < 0); + } + stat.execute("DROP TABLE TEST"); + } + } + + private void testUniqueIndexOracle() throws SQLException { + 
Statement stat = conn.createStatement(); + stat.execute("SET MODE ORACLE"); + stat.execute("create table t2(c1 int, c2 int)"); + stat.execute("create unique index i2 on t2(c1, c2)"); + stat.execute("insert into t2 values (null, 1)"); + assertThrows(ErrorCode.DUPLICATE_KEY_1, stat). + execute("insert into t2 values (null, 1)"); + stat.execute("insert into t2 values (null, null)"); + stat.execute("insert into t2 values (null, null)"); + stat.execute("insert into t2 values (1, null)"); + assertThrows(ErrorCode.DUPLICATE_KEY_1, stat). + execute("insert into t2 values (1, null)"); + stat.execute("DROP TABLE T2"); + } + + private void testHsqlDb() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("set mode hsqldb"); + testLog(Math.log(10), stat); + + stat.execute("DROP TABLE TEST IF EXISTS; " + + "CREATE TABLE TEST(ID INT PRIMARY KEY); "); + stat.execute("CALL CURRENT_TIME"); + stat.execute("CALL CURRENT_TIMESTAMP"); + stat.execute("CALL CURRENT_DATE"); + stat.execute("CALL SYSDATE"); + stat.execute("CALL TODAY"); + + stat.execute("DROP TABLE TEST IF EXISTS"); + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("INSERT INTO TEST VALUES(1)"); + PreparedStatement prep = conn.prepareStatement( + "SELECT LIMIT ? 
1 ID FROM TEST"); + prep.setInt(1, 2); + prep.executeQuery(); + stat.execute("DROP TABLE TEST IF EXISTS"); + + stat.execute("DROP TABLE TEST IF EXISTS"); + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.executeQuery("SELECT * FROM TEST WHERE ID IN ()"); + stat.execute("DROP TABLE TEST IF EXISTS"); + } + + private void testLog(double expected, Statement stat) throws SQLException { + stat.execute("create table log(id int)"); + stat.execute("insert into log values(1)"); + ResultSet rs = stat.executeQuery("select log(10) from log"); + rs.next(); + assertEquals((int) (expected * 100), (int) (rs.getDouble(1) * 100)); + rs = stat.executeQuery("select ln(10) from log"); + rs.next(); + assertEquals((int) (Math.log(10) * 100), (int) (rs.getDouble(1) * 100)); + stat.execute("drop table log"); + } + + private void testPostgreSQL() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("SET MODE PostgreSQL"); + testLog(Math.log10(10), stat); + + assertResult("ABC", stat, "SELECT SUBSTRING('ABCDEF' FOR 3)"); + assertResult("ABCD", stat, "SELECT SUBSTRING('0ABCDEF' FROM 2 FOR 4)"); + + /* --------- Behaviour of CHAR(N) --------- */ + + /* Test right-padding of CHAR(N) at INSERT */ + stat.execute("CREATE TABLE TEST(CH CHAR(10))"); + stat.execute("INSERT INTO TEST (CH) VALUES ('Hello')"); + assertResult("Hello ", stat, "SELECT CH FROM TEST"); + + /* Test that WHERE clauses accept unpadded values and will pad before comparison */ + assertResult("Hello ", stat, "SELECT CH FROM TEST WHERE CH = 'Hello'"); + + /* Test CHAR which is identical to CHAR(1) */ + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(CH CHAR)"); + stat.execute("INSERT INTO TEST (CH) VALUES ('')"); + assertResult(" ", stat, "SELECT CH FROM TEST"); + assertResult(" ", stat, "SELECT CH FROM TEST WHERE CH = ''"); + + /* Test that excessive spaces are trimmed */ + stat.execute("DELETE FROM TEST"); + stat.execute("INSERT INTO TEST (CH) VALUES ('1 ')"); + 
assertResult("1", stat, "SELECT CH FROM TEST"); + assertResult("1", stat, "SELECT CH FROM TEST WHERE CH = '1 '"); + + /* Test that we do not trim too far */ + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(CH CHAR(2))"); + stat.execute("INSERT INTO TEST (CH) VALUES ('1 ')"); + assertResult("1 ", stat, "SELECT CH FROM TEST"); + assertResult("1 ", stat, "SELECT CH FROM TEST WHERE CH = '1 '"); + + /* --------- Disallowed column types --------- */ + + String[] DISALLOWED_TYPES = {"NUMBER", "IDENTITY", "TINYINT", "BLOB"}; + for (String type : DISALLOWED_TYPES) { + stat.execute("DROP TABLE IF EXISTS TEST"); + try { + stat.execute("CREATE TABLE TEST(COL " + type + ")"); + fail("Expect type " + type + " to not exist in PostgreSQL mode"); + } catch (org.h2.jdbc.JdbcSQLException e) { + /* Expected! */ + } + } + } + + private void testMySQL() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("set mode mysql"); + stat.execute("create schema test_schema"); + stat.execute("use test_schema"); + assertResult("TEST_SCHEMA", stat, "select schema()"); + stat.execute("use public"); + assertResult("PUBLIC", stat, "select schema()"); + + stat.execute("SELECT 1"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World')"); + org.h2.mode.FunctionsMySQL.register(conn); + assertResult("0", stat, "SELECT UNIX_TIMESTAMP('1970-01-01 00:00:00Z')"); + assertResult("1196418619", stat, + "SELECT UNIX_TIMESTAMP('2007-11-30 10:30:19Z')"); + assertResult("1196418619", stat, + "SELECT UNIX_TIMESTAMP(FROM_UNIXTIME(1196418619))"); + assertResult("2007 November", stat, + "SELECT FROM_UNIXTIME(1196300000, '%Y %M')"); + assertResult("2003-12-31", stat, + "SELECT DATE('2003-12-31 11:02:03')"); + assertResult("2003-12-31", stat, + "SELECT DATE('2003-12-31 11:02:03')"); + // check the weird MySQL variant of DELETE + 
stat.execute("DELETE TEST FROM TEST WHERE 1=2"); + + if (config.memory) { + return; + } + // need to reconnect, because metadata tables may be initialized + conn.close(); + conn = getConnection("compatibility;MODE=MYSQL"); + stat = conn.createStatement(); + testLog(Math.log(10), stat); + + DatabaseMetaData meta = conn.getMetaData(); + assertTrue(meta.storesLowerCaseIdentifiers()); + assertTrue(meta.storesLowerCaseQuotedIdentifiers()); + assertFalse(meta.storesMixedCaseIdentifiers()); + assertFalse(meta.storesMixedCaseQuotedIdentifiers()); + assertFalse(meta.storesUpperCaseIdentifiers()); + assertTrue(meta.storesUpperCaseQuotedIdentifiers()); + + stat = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, + ResultSet.CONCUR_UPDATABLE); + assertResult("test", stat, "SHOW TABLES"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + rs.next(); + rs.updateString(2, "Hallo"); + rs.updateRow(); + + // we used to have a NullPointerException in the MetaTable.checkIndex() + // method + rs = stat.executeQuery("SELECT * FROM " + + "INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME > 'aaaa'"); + rs = stat.executeQuery("SELECT * FROM " + + "INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME < 'aaaa'"); + + stat.execute("CREATE TABLE TEST_1" + + "(ID INT PRIMARY KEY) ENGINE=InnoDb"); + stat.execute("CREATE TABLE TEST_2" + + "(ID INT PRIMARY KEY) ENGINE=MyISAM"); + stat.execute("CREATE TABLE TEST_3" + + "(ID INT PRIMARY KEY) ENGINE=InnoDb charset=UTF8"); + stat.execute("CREATE TABLE TEST_4" + + "(ID INT PRIMARY KEY) charset=UTF8"); + stat.execute("CREATE TABLE TEST_5" + + "(ID INT PRIMARY KEY) ENGINE=InnoDb auto_increment=3 default charset=UTF8"); + stat.execute("CREATE TABLE TEST_6" + + "(ID INT PRIMARY KEY) ENGINE=InnoDb auto_increment=3 charset=UTF8"); + stat.execute("CREATE TABLE TEST_7" + + "(ID INT, KEY TEST_7_IDX(ID) USING BTREE)"); + stat.execute("CREATE TABLE TEST_8" + + "(ID INT, UNIQUE KEY TEST_8_IDX(ID) USING BTREE)"); + + // this maps to SET REFERENTIAL_INTEGRITY 
TRUE/FALSE + stat.execute("SET foreign_key_checks = 0"); + stat.execute("SET foreign_key_checks = 1"); + + // Check if mysql comments are supported, ensure clean connection + conn.close(); + conn = getConnection("compatibility;MODE=MYSQL"); + stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST_NO_COMMENT"); + stat.execute("CREATE table TEST_NO_COMMENT " + + "(ID bigint not null auto_increment, " + + "SOME_STR varchar(255), primary key (ID))"); + // now test creating a table with a comment + stat.execute("DROP TABLE IF EXISTS TEST_COMMENT"); + stat.execute("create table TEST_COMMENT (ID bigint not null auto_increment, " + + "SOME_STR varchar(255), primary key (ID)) comment='Some comment.'"); + // now test creating a table with a comment and engine + // and other typical mysql stuff as generated by hibernate + stat.execute("DROP TABLE IF EXISTS TEST_COMMENT_ENGINE"); + stat.execute("create table TEST_COMMENT_ENGINE " + + "(ID bigint not null auto_increment, " + + "ATTACHMENT_ID varchar(255), " + + "SOME_ITEM_ID bigint not null, primary key (ID), " + + "unique (ATTACHMENT_ID, SOME_ITEM_ID)) " + + "comment='Comment Again' ENGINE=InnoDB"); + + stat.execute("CREATE TABLE TEST2(ID INT) ROW_FORMAT=DYNAMIC"); + + // check the MySQL index dropping syntax + stat.execute("ALTER TABLE TEST_COMMENT_ENGINE ADD CONSTRAINT CommentUnique UNIQUE (SOME_ITEM_ID)"); + stat.execute("ALTER TABLE TEST_COMMENT_ENGINE DROP INDEX CommentUnique"); + stat.execute("CREATE INDEX IDX_ATTACHMENT_ID ON TEST_COMMENT_ENGINE (ATTACHMENT_ID)"); + stat.execute("DROP INDEX IDX_ATTACHMENT_ID ON TEST_COMMENT_ENGINE"); + + conn.close(); + conn = getConnection("compatibility"); + } + + private void testSybaseAndMSSQLServer() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("SET MODE MSSQLServer"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(NAME VARCHAR(50), SURNAME VARCHAR(50))"); + stat.execute("INSERT INTO TEST 
VALUES('John', 'Doe')"); + stat.execute("INSERT INTO TEST VALUES('Jack', 'Sullivan')"); + + assertResult("abcd123", stat, "SELECT 'abc' + 'd123'"); + + assertResult("Doe, John", stat, + "SELECT surname + ', ' + name FROM test " + + "WHERE SUBSTRING(NAME,1,1)+SUBSTRING(SURNAME,1,1) = 'JD'"); + + stat.execute("ALTER TABLE TEST ADD COLUMN full_name VARCHAR(100)"); + stat.execute("UPDATE TEST SET full_name = name + ', ' + surname"); + assertResult("John, Doe", stat, "SELECT full_name FROM TEST where name='John'"); + + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?, ? + ', ' + ?)"); + int ca = 1; + prep.setString(ca++, "Paul"); + prep.setString(ca++, "Frank"); + prep.setString(ca++, "Paul"); + prep.setString(ca++, "Frank"); + prep.executeUpdate(); + prep.close(); + + assertResult("Paul, Frank", stat, "SELECT full_name FROM test " + + "WHERE name = 'Paul'"); + + prep = conn.prepareStatement("SELECT ? + ?"); + int cb = 1; + prep.setString(cb++, "abcd123"); + prep.setString(cb++, "d123"); + prep.executeQuery(); + prep.close(); + + prep = conn.prepareStatement("SELECT full_name FROM test " + + "WHERE (SUBSTRING(name, 1, 1) + SUBSTRING(surname, 2, 3)) = ?"); + prep.setString(1, "Joe"); + ResultSet rs = prep.executeQuery(); + assertTrue("Result cannot be empty", rs.next()); + assertEquals("John, Doe", rs.getString(1)); + rs.close(); + prep.close(); + + // CONVERT has its parameters the other way around from the default + // mode + rs = stat.executeQuery("SELECT CONVERT(INT, '10')"); + rs.next(); + assertEquals(10, rs.getInt(1)); + rs.close(); + rs = stat.executeQuery("SELECT X FROM (SELECT CONVERT(INT, '10') AS X)"); + rs.next(); + assertEquals(10, rs.getInt(1)); + rs.close(); + + // make sure we're ignoring the index part of the statement + rs = stat.executeQuery("select * from test (index table1_index)"); + rs.close(); + + // UNIQUEIDENTIFIER is MSSQL's equivalent of UUID + stat.execute("create table test3 (id UNIQUEIDENTIFIER)"); + } + + 
private void testDB2() throws SQLException { + conn.close(); + conn = getConnection("compatibility;MODE=DB2"); + Statement stat = conn.createStatement(); + testLog(Math.log(10), stat); + + ResultSet res = conn.createStatement().executeQuery( + "SELECT 1 FROM sysibm.sysdummy1"); + res.next(); + assertEquals("1", res.getString(1)); + conn.close(); + conn = getConnection("compatibility;MODE=MySQL"); + assertThrows(ErrorCode.SCHEMA_NOT_FOUND_1, conn.createStatement()). + executeQuery("SELECT 1 FROM sysibm.sysdummy1"); + conn.close(); + conn = getConnection("compatibility;MODE=DB2"); + stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(id varchar)"); + stat.execute("insert into test values ('3'),('1'),('2')"); + res = stat.executeQuery("select id from test order by id " + + "fetch next 2 rows only"); + res.next(); + assertEquals("1", res.getString(1)); + res.next(); + assertEquals("2", res.getString(1)); + assertFalse(res.next()); + conn.close(); + + // test isolation-clause + conn = getConnection("compatibility;MODE=DB2"); + stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(id varchar)"); + res = stat.executeQuery("select * from test with ur"); + stat.executeUpdate("insert into test values (1) with ur"); + res = stat.executeQuery("select * from test where id = 1 with rr"); + res = stat.executeQuery("select * from test order by id " + + "fetch next 2 rows only with rr"); + res = stat.executeQuery("select * from test order by id " + + "fetch next 2 rows only with rs"); + res = stat.executeQuery("select * from test order by id " + + "fetch next 2 rows only with cs"); + res = stat.executeQuery("select * from test order by id " + + "fetch next 2 rows only with ur"); + // test isolation-clause with lock-request-clause + res = stat.executeQuery("select * from test order by id " + + "fetch next 2 rows only with rr use and keep share locks"); + res = 
stat.executeQuery("select * from test order by id " + + "fetch next 2 rows only with rs use and keep update locks"); + res = stat.executeQuery("select * from test order by id " + + "fetch next 2 rows only with rr use and keep exclusive locks"); + + // Test DB2 TIMESTAMP format with dash separating date and time + stat.execute("drop table test if exists"); + stat.execute("create table test(date TIMESTAMP)"); + stat.executeUpdate("insert into test (date) values ('2014-04-05-09.48.28.020005')"); + assertResult("2014-04-05 09:48:28.020005", stat, + "select date from test"); // <- result is always H2 format timestamp! + assertResult("2014-04-05 09:48:28.020005", stat, + "select date from test where date = '2014-04-05-09.48.28.020005'"); + assertResult("2014-04-05 09:48:28.020005", stat, + "select date from test where date = '2014-04-05 09:48:28.020005'"); + } + + private void testDerby() throws SQLException { + conn.close(); + conn = getConnection("compatibility;MODE=Derby"); + Statement stat = conn.createStatement(); + testLog(Math.log(10), stat); + + ResultSet res = conn.createStatement().executeQuery( + "SELECT 1 FROM sysibm.sysdummy1 fetch next 1 row only"); + res.next(); + assertEquals("1", res.getString(1)); + conn.close(); + conn = getConnection("compatibility;MODE=PostgreSQL"); + assertThrows(ErrorCode.SCHEMA_NOT_FOUND_1, conn.createStatement()). 
+ executeQuery("SELECT 1 FROM sysibm.sysdummy1"); + conn.close(); + conn = getConnection("compatibility"); + } + + private void testIgnite() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("SET MODE Ignite"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int affinity key)"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int affinity primary key)"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int, v1 varchar, v2 long affinity key, primary key(v1, id))"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int, v1 varchar, v2 long, primary key(v1, id), affinity key (id))"); + + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int shard key)"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int shard primary key)"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int, v1 varchar, v2 long shard key, primary key(v1, id))"); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int, v1 varchar, v2 long, primary key(v1, id), shard key (id))"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestCompatibilityOracle.java b/modules/h2/src/test/java/org/h2/test/db/TestCompatibilityOracle.java new file mode 100644 index 0000000000000..f3cb0d9de8e3e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestCompatibilityOracle.java @@ -0,0 +1,349 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Timestamp; +import java.sql.Types; +import java.text.SimpleDateFormat; +import java.util.Arrays; +import java.util.Locale; +import org.h2.test.TestBase; +import org.h2.tools.SimpleResultSet; + +/** + * Test Oracle compatibility mode. + */ +public class TestCompatibilityOracle extends TestBase { + + /** + * Run just this test. + * + * @param s ignored + */ + public static void main(String... s) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.test(); + } + + @Override + public void test() throws Exception { + testNotNullSyntax(); + testTreatEmptyStringsAsNull(); + testDecimalScale(); + testPoundSymbolInColumnName(); + testToDate(); + testForbidEmptyInClause(); + testSpecialTypes(); + testDate(); + } + + private void testNotNullSyntax() throws SQLException { + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = conn.createStatement(); + // Some other variation (oracle syntax) + stat.execute("create table T (C int not null enable)"); + stat.execute("insert into T values(1)"); + stat.execute("drop table T"); + stat.execute("create table T (C int not null enable validate)"); + stat.execute("insert into T values(1)"); + stat.execute("drop table T"); + // can set NULL + // can set NULL even with 'not null syntax' (oracle) + stat.execute("create table T (C int not null disable)"); + stat.execute("insert into T values(null)"); + stat.execute("drop table T"); + // can set NULL even with 'not null syntax' (oracle) + stat.execute("create table T (C int not null enable novalidate)"); + stat.execute("insert into T values(null)"); + stat.execute("drop table T"); + + // Some other variation with oracle syntax + stat.execute("create table T (C int not null)"); + stat.execute("insert 
into T values(1)"); + stat.execute("alter table T modify C not null"); + stat.execute("insert into T values(1)"); + stat.execute("alter table T modify C not null enable"); + stat.execute("insert into T values(1)"); + stat.execute("alter table T modify C not null enable validate"); + stat.execute("insert into T values(1)"); + stat.execute("drop table T"); + // can set NULL + stat.execute("create table T (C int null)"); + stat.execute("insert into T values(null)"); + stat.execute("alter table T modify C null enable"); + stat.execute("alter table T modify C null enable validate"); + stat.execute("insert into T values(null)"); + // can set NULL even with 'not null syntax' (oracle) + stat.execute("alter table T modify C not null disable"); + stat.execute("insert into T values(null)"); + // can set NULL even with 'not null syntax' (oracle) + stat.execute("alter table T modify C not null enable novalidate"); + stat.execute("insert into T values(null)"); + stat.execute("drop table T"); + + conn.close(); + } + + private void testSpecialTypes() throws SQLException { + // Test VARCHAR, VARCHAR2 with CHAR and BYTE + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = conn.createStatement(); + stat.execute("create table T (ID NUMBER)"); + stat.execute("alter table T add A_1 VARCHAR(1)"); + stat.execute("alter table T add A_2 VARCHAR2(1)"); + stat.execute("alter table T add B_1 VARCHAR(1 byte)"); // with BYTE + stat.execute("alter table T add B_2 VARCHAR2(1 byte)"); + stat.execute("alter table T add C_1 VARCHAR(1 char)"); // with CHAR + stat.execute("alter table T add C_2 VARCHAR2(1 char)"); + stat.execute("alter table T add B_255 VARCHAR(255 byte)"); + stat.execute("alter table T add C_255 VARCHAR(255 char)"); + stat.execute("drop table T"); + conn.close(); + } + + private void testTreatEmptyStringsAsNull() throws SQLException { + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = 
conn.createStatement(); + stat.execute("CREATE TABLE A (ID NUMBER, X VARCHAR2(1))"); + stat.execute("INSERT INTO A VALUES (1, 'a')"); + stat.execute("INSERT INTO A VALUES (2, '')"); + stat.execute("INSERT INTO A VALUES (3, ' ')"); + assertResult("3", stat, "SELECT COUNT(*) FROM A"); + assertResult("1", stat, "SELECT COUNT(*) FROM A WHERE X IS NULL"); + assertResult("2", stat, "SELECT COUNT(*) FROM A WHERE TRIM(X) IS NULL"); + assertResult("0", stat, "SELECT COUNT(*) FROM A WHERE X = ''"); + assertResult(new Object[][] { { 1, "a" }, { 2, null }, { 3, " " } }, + stat, "SELECT * FROM A"); + assertResult(new Object[][] { { 1, "a" }, { 2, null }, { 3, null } }, + stat, "SELECT ID, TRIM(X) FROM A"); + + stat.execute("CREATE TABLE B (ID NUMBER, X NUMBER)"); + stat.execute("INSERT INTO B VALUES (1, '5')"); + stat.execute("INSERT INTO B VALUES (2, '')"); + assertResult("2", stat, "SELECT COUNT(*) FROM B"); + assertResult("1", stat, "SELECT COUNT(*) FROM B WHERE X IS NULL"); + assertResult("0", stat, "SELECT COUNT(*) FROM B WHERE X = ''"); + assertResult(new Object[][] { { 1, 5 }, { 2, null } }, + stat, "SELECT * FROM B"); + + stat.execute("CREATE TABLE C (ID NUMBER, X TIMESTAMP)"); + stat.execute("INSERT INTO C VALUES (1, '1979-11-12')"); + stat.execute("INSERT INTO C VALUES (2, '')"); + assertResult("2", stat, "SELECT COUNT(*) FROM C"); + assertResult("1", stat, "SELECT COUNT(*) FROM C WHERE X IS NULL"); + assertResult("0", stat, "SELECT COUNT(*) FROM C WHERE X = ''"); + assertResult(new Object[][] { { 1, "1979-11-12 00:00:00.0" }, { 2, null } }, + stat, "SELECT * FROM C"); + + stat.execute("CREATE TABLE D (ID NUMBER, X VARCHAR2(1))"); + stat.execute("INSERT INTO D VALUES (1, 'a')"); + stat.execute("SET @FOO = ''"); + stat.execute("INSERT INTO D VALUES (2, @FOO)"); + assertResult("2", stat, "SELECT COUNT(*) FROM D"); + assertResult("1", stat, "SELECT COUNT(*) FROM D WHERE X IS NULL"); + assertResult("0", stat, "SELECT COUNT(*) FROM D WHERE X = ''"); + assertResult(new 
Object[][] { { 1, "a" }, { 2, null } }, + stat, "SELECT * FROM D"); + + stat.execute("CREATE TABLE E (ID NUMBER, X RAW(1))"); + stat.execute("INSERT INTO E VALUES (1, '0A')"); + stat.execute("INSERT INTO E VALUES (2, '')"); + assertResult("2", stat, "SELECT COUNT(*) FROM E"); + assertResult("1", stat, "SELECT COUNT(*) FROM E WHERE X IS NULL"); + assertResult("0", stat, "SELECT COUNT(*) FROM E WHERE X = ''"); + assertResult(new Object[][] { { 1, new byte[] { 10 } }, { 2, null } }, + stat, "SELECT * FROM E"); + + stat.execute("CREATE TABLE F (ID NUMBER, X VARCHAR2(1))"); + stat.execute("INSERT INTO F VALUES (1, 'a')"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO F VALUES (2, ?)"); + prep.setString(1, ""); + prep.execute(); + assertResult("2", stat, "SELECT COUNT(*) FROM F"); + assertResult("1", stat, "SELECT COUNT(*) FROM F WHERE X IS NULL"); + assertResult("0", stat, "SELECT COUNT(*) FROM F WHERE X = ''"); + assertResult(new Object[][]{{1, "a"}, {2, null}}, stat, "SELECT * FROM F"); + + conn.close(); + } + + private void testDecimalScale() throws SQLException { + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE A (ID NUMBER, X DECIMAL(9,5))"); + stat.execute("INSERT INTO A VALUES (1, 2)"); + stat.execute("INSERT INTO A VALUES (2, 4.3)"); + stat.execute("INSERT INTO A VALUES (3, '6.78')"); + assertResult("3", stat, "SELECT COUNT(*) FROM A"); + assertResult(new Object[][] { { 1, 2 }, { 2, 4.3 }, { 3, 6.78 } }, + stat, "SELECT * FROM A"); + + conn.close(); + } + + /** + * Test the # in a column name for oracle compatibility + */ + private void testPoundSymbolInColumnName() throws SQLException { + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = conn.createStatement(); + + stat.execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, U##NAME VARCHAR(255))"); + stat.execute( + "INSERT INTO TEST VALUES(1, 
'Hello'), (2, 'HelloWorld'), (3, 'HelloWorldWorld')"); + + assertResult("1", stat, "SELECT ID FROM TEST where U##NAME ='Hello'"); + + conn.close(); + } + + private void testToDate() throws SQLException { + if (Locale.getDefault() != Locale.ENGLISH) { + return; + } + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE DATE_TABLE (ID NUMBER PRIMARY KEY, TEST_VAL TIMESTAMP)"); + stat.execute("INSERT INTO DATE_TABLE VALUES (1, " + + "to_date('31-DEC-9999 23:59:59','DD-MON-RRRR HH24:MI:SS'))"); + stat.execute("INSERT INTO DATE_TABLE VALUES (2, " + + "to_date('01-JAN-0001 00:00:00','DD-MON-RRRR HH24:MI:SS'))"); + + assertResultDate("9999-12-31T23:59:59", stat, + "SELECT TEST_VAL FROM DATE_TABLE WHERE ID=1"); + assertResultDate("0001-01-01T00:00:00", stat, + "SELECT TEST_VAL FROM DATE_TABLE WHERE ID=2"); + + conn.close(); + } + + private void testForbidEmptyInClause() throws SQLException { + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE A (ID NUMBER, X VARCHAR2(1))"); + try { + stat.executeQuery("SELECT * FROM A WHERE ID IN ()"); + fail(); + } catch (SQLException e) { + // expected + } finally { + conn.close(); + } + } + + private void testDate() throws SQLException { + deleteDb("oracle"); + Connection conn = getConnection("oracle;MODE=Oracle"); + Statement stat = conn.createStatement(); + + Timestamp t1 = Timestamp.valueOf("2011-02-03 12:11:10"); + Timestamp t2 = Timestamp.valueOf("1999-10-15 13:14:15"); + Timestamp t3 = Timestamp.valueOf("2030-11-22 11:22:33"); + Timestamp t4 = Timestamp.valueOf("2018-01-10 22:10:01"); + + stat.execute("CREATE TABLE TEST (ID INT PRIMARY KEY, D DATE)"); + stat.executeUpdate("INSERT INTO TEST VALUES(1, TIMESTAMP '2011-02-03 12:11:10')"); + stat.executeUpdate("INSERT INTO TEST VALUES(2, CAST ('1999-10-15 13:14:15' AS DATE))"); + 
stat.executeUpdate("INSERT INTO TEST VALUES(3, '2030-11-22 11:22:33')"); + PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST VALUES (?, ?)"); + ps.setInt(1, 4); + ps.setTimestamp(2, t4); + ps.executeUpdate(); + ResultSet rs = stat.executeQuery("SELECT D FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(t1, rs.getTimestamp(1)); + rs.next(); + assertEquals(t2, rs.getTimestamp(1)); + rs.next(); + assertEquals(t3, rs.getTimestamp(1)); + rs.next(); + assertEquals(t4, rs.getTimestamp(1)); + assertFalse(rs.next()); + + conn.close(); + } + + private void assertResultDate(String expected, Statement stat, String sql) + throws SQLException { + SimpleDateFormat iso8601 = new SimpleDateFormat( + "yyyy-MM-dd'T'HH:mm:ss"); + ResultSet rs = stat.executeQuery(sql); + if (rs.next()) { + assertEquals(expected, iso8601.format(rs.getTimestamp(1))); + } else { + assertEquals(expected, null); + } + } + + private void assertResult(Object[][] expectedRowsOfValues, Statement stat, + String sql) throws SQLException { + assertResult(newSimpleResultSet(expectedRowsOfValues), stat, sql); + } + + private void assertResult(ResultSet expected, Statement stat, String sql) + throws SQLException { + ResultSet actual = stat.executeQuery(sql); + int expectedColumnCount = expected.getMetaData().getColumnCount(); + assertEquals(expectedColumnCount, actual.getMetaData().getColumnCount()); + while (true) { + boolean expectedNext = expected.next(); + boolean actualNext = actual.next(); + if (!expectedNext && !actualNext) { + return; + } + if (expectedNext != actualNext) { + fail("number of rows in actual and expected results sets does not match"); + } + for (int i = 0; i < expectedColumnCount; i++) { + String expectedString = columnResultToString(expected.getObject(i + 1)); + String actualString = columnResultToString(actual.getObject(i + 1)); + assertEquals(expectedString, actualString); + } + } + } + + private static String columnResultToString(Object object) { + if (object == null) { + 
return null; + } + if (object instanceof Object[]) { + return Arrays.deepToString((Object[]) object); + } + if (object instanceof byte[]) { + return Arrays.toString((byte[]) object); + } + return object.toString(); + } + + private static SimpleResultSet newSimpleResultSet(Object[][] rowsOfValues) { + SimpleResultSet result = new SimpleResultSet(); + for (int i = 0; i < rowsOfValues[0].length; i++) { + result.addColumn(i + "", Types.JAVA_OBJECT, 0, 0); + } + for (int i = 0; i < rowsOfValues.length; i++) { + result.addRow(rowsOfValues[i]); + } + return result; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestCsv.java b/modules/h2/src/test/java/org/h2/test/db/TestCsv.java new file mode 100644 index 0000000000000..bef96a7f11102 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestCsv.java @@ -0,0 +1,590 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.File; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.Reader; +import java.io.StringReader; +import java.io.StringWriter; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Csv; +import org.h2.util.IOUtils; +import org.h2.util.New; +import org.h2.util.StringUtils; + +/** + * CSVREAD and CSVWRITE tests. 
+ * + * @author Thomas Mueller + * @author Sylvain Cuaz (testNull) + */ +public class TestCsv extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws Exception { + testWriteColumnHeader(); + testCaseSensitiveColumnNames(); + testWriteResultSetDataType(); + testPreserveWhitespace(); + testChangeData(); + testOptions(); + testPseudoBom(); + testWriteRead(); + testColumnNames(); + testSpaceSeparated(); + testNull(); + testRandomData(); + testEmptyFieldDelimiter(); + testFieldDelimiter(); + testAsTable(); + testRead(); + testPipe(); + deleteDb("csv"); + } + + private void testWriteColumnHeader() throws Exception { + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("call csvwrite('" + getBaseDir() + + "/test.tsv', 'select x from dual', 'writeColumnHeader=false')"); + String x = IOUtils.readStringAndClose(IOUtils.getReader( + FileUtils.newInputStream(getBaseDir() + "/test.tsv")), -1); + assertEquals("\"1\"", x.trim()); + stat.execute("call csvwrite('" + getBaseDir() + + "/test.tsv', 'select x from dual', 'writeColumnHeader=true')"); + x = IOUtils.readStringAndClose(IOUtils.getReader( + FileUtils.newInputStream(getBaseDir() + "/test.tsv")), -1); + x = x.trim(); + assertTrue(x.startsWith("\"X\"")); + assertTrue(x.endsWith("\"1\"")); + conn.close(); + } + + + private void testWriteResultSetDataType() throws Exception { + // Oracle: ResultSet.getString on a date or time column returns a + // strange result (2009-6-30.16.17. 21. 
996802000 according to a + // customer) + StringWriter writer = new StringWriter(); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery( + "select timestamp '-100-01-01 12:00:00.0' ts, null n"); + Csv csv = new Csv(); + csv.setFieldDelimiter((char) 0); + csv.setLineSeparator(";"); + csv.write(writer, rs); + conn.close(); + // getTimestamp().getString() needs to be used (not for H2, but for + // Oracle) + assertEquals("TS,N;0101-01-01 12:00:00.0,;", writer.toString()); + } + + private void testCaseSensitiveColumnNames() throws Exception { + OutputStream out = FileUtils.newOutputStream( + getBaseDir() + "/test.tsv", false); + out.write("lower,Mixed,UPPER\n 1 , 2, 3 \n".getBytes()); + out.close(); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from csvread('" + + getBaseDir() + "/test.tsv')"); + rs.next(); + assertEquals("LOWER", rs.getMetaData().getColumnName(1)); + assertEquals("MIXED", rs.getMetaData().getColumnName(2)); + assertEquals("UPPER", rs.getMetaData().getColumnName(3)); + rs = stat.executeQuery("select * from csvread('" + + getBaseDir() + + "/test.tsv', null, 'caseSensitiveColumnNames=true')"); + rs.next(); + assertEquals("lower", rs.getMetaData().getColumnName(1)); + assertEquals("Mixed", rs.getMetaData().getColumnName(2)); + assertEquals("UPPER", rs.getMetaData().getColumnName(3)); + conn.close(); + } + + private void testPreserveWhitespace() throws Exception { + OutputStream out = FileUtils.newOutputStream( + getBaseDir() + "/test.tsv", false); + out.write("a,b\n 1 , 2 \n".getBytes()); + out.close(); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from csvread('" + + getBaseDir() + "/test.tsv')"); + rs.next(); + assertEquals("1", rs.getString(1)); + assertEquals("2", rs.getString(2)); + rs = 
stat.executeQuery("select * from csvread('" + + getBaseDir() + "/test.tsv', null, 'preserveWhitespace=true')"); + rs.next(); + assertEquals(" 1 ", rs.getString(1)); + assertEquals(" 2 ", rs.getString(2)); + conn.close(); + } + + private void testChangeData() throws Exception { + OutputStream out = FileUtils.newOutputStream( + getBaseDir() + "/test.tsv", false); + out.write("a,b,c,d,e,f,g\n1".getBytes()); + out.close(); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * from csvread('" + + getBaseDir() + "/test.tsv')"); + assertEquals(7, rs.getMetaData().getColumnCount()); + assertEquals("A", rs.getMetaData().getColumnLabel(1)); + rs.next(); + assertEquals(1, rs.getInt(1)); + out = FileUtils.newOutputStream(getBaseDir() + "/test.tsv", false); + out.write("x".getBytes()); + out.close(); + rs = stat.executeQuery("select * from csvread('" + + getBaseDir() + "/test.tsv')"); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("X", rs.getMetaData().getColumnLabel(1)); + assertFalse(rs.next()); + conn.close(); + } + + private void testOptions() { + Csv csv = new Csv(); + assertEquals(",", csv.getFieldSeparatorWrite()); + assertEquals(SysProperties.LINE_SEPARATOR, csv.getLineSeparator()); + assertEquals("", csv.getNullString()); + assertEquals('\"', csv.getEscapeCharacter()); + assertEquals('"', csv.getFieldDelimiter()); + assertEquals(',', csv.getFieldSeparatorRead()); + assertEquals(",", csv.getFieldSeparatorWrite()); + assertEquals(0, csv.getLineCommentCharacter()); + assertEquals(false, csv.getPreserveWhitespace()); + + String charset; + + charset = csv.setOptions("escape=\\ fieldDelimiter=\\\\ fieldSeparator=\n " + + "lineComment=\" lineSeparator=\\ \\\\\\ "); + assertEquals(' ', csv.getEscapeCharacter()); + assertEquals('\\', csv.getFieldDelimiter()); + assertEquals('\n', csv.getFieldSeparatorRead()); + assertEquals("\n", csv.getFieldSeparatorWrite()); + assertEquals('"', 
csv.getLineCommentCharacter()); + assertEquals(" \\ ", csv.getLineSeparator()); + assertFalse(csv.getPreserveWhitespace()); + assertFalse(csv.getCaseSensitiveColumnNames()); + + charset = csv.setOptions("escape=1x fieldDelimiter=2x " + + "fieldSeparator=3x " + "lineComment=4x lineSeparator=5x " + + "null=6x charset=7x " + + "preserveWhitespace=true caseSensitiveColumnNames=true"); + assertEquals('1', csv.getEscapeCharacter()); + assertEquals('2', csv.getFieldDelimiter()); + assertEquals('3', csv.getFieldSeparatorRead()); + assertEquals("3x", csv.getFieldSeparatorWrite()); + assertEquals('4', csv.getLineCommentCharacter()); + assertEquals("5x", csv.getLineSeparator()); + assertEquals("6x", csv.getNullString()); + assertEquals("7x", charset); + assertTrue(csv.getPreserveWhitespace()); + assertTrue(csv.getCaseSensitiveColumnNames()); + + charset = csv.setOptions("escape= fieldDelimiter= " + + "fieldSeparator= " + "lineComment= lineSeparator=\r\n " + + "null=\0 charset="); + assertEquals(0, csv.getEscapeCharacter()); + assertEquals(0, csv.getFieldDelimiter()); + assertEquals(0, csv.getFieldSeparatorRead()); + assertEquals("", csv.getFieldSeparatorWrite()); + assertEquals(0, csv.getLineCommentCharacter()); + assertEquals("\r\n", csv.getLineSeparator()); + assertEquals("\0", csv.getNullString()); + assertEquals("", charset); + + createClassProxy(Csv.class); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, csv). 
+ setOptions("escape=a error=b"); + assertEquals('a', csv.getEscapeCharacter()); + } + + private void testPseudoBom() throws Exception { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + // UTF-8 "BOM" / marker + out.write(StringUtils.convertHexToBytes("ef" + "bb" + "bf")); + out.write("\"ID\", \"NAME\"\n1, Hello".getBytes(StandardCharsets.UTF_8)); + byte[] buff = out.toByteArray(); + Reader r = new InputStreamReader(new ByteArrayInputStream(buff), StandardCharsets.UTF_8); + ResultSet rs = new Csv().read(r, null); + assertEquals("ID", rs.getMetaData().getColumnLabel(1)); + assertEquals("NAME", rs.getMetaData().getColumnLabel(2)); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + } + + private void testColumnNames() throws Exception { + ResultSet rs; + rs = new Csv().read(new StringReader("Id,First Name,2x,_x2\n1,2,3"), null); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("First Name", rs.getMetaData().getColumnName(2)); + assertEquals("2x", rs.getMetaData().getColumnName(3)); + assertEquals("_X2", rs.getMetaData().getColumnName(4)); + + rs = new Csv().read(new StringReader("a,a\n1,2"), null); + assertEquals("A", rs.getMetaData().getColumnName(1)); + assertEquals("A1", rs.getMetaData().getColumnName(2)); + + rs = new Csv().read(new StringReader("1,2"), new String[] { "", null }); + assertEquals("C1", rs.getMetaData().getColumnName(1)); + assertEquals("C2", rs.getMetaData().getColumnName(2)); + } + + private void testSpaceSeparated() throws SQLException { + deleteDb("csv"); + File f = new File(getBaseDir() + "/testSpace.csv"); + FileUtils.delete(f.getAbsolutePath()); + + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("create temporary table test (a int, b int, c int)"); + stat.execute("insert into test values(1,2,3)"); + stat.execute("insert into test values(4,null,5)"); + stat.execute("call csvwrite('" 
+ getBaseDir() + + "/test.tsv','select * from test',null,' ')"); + ResultSet rs1 = stat.executeQuery("select * from test"); + assertResultSetOrdered(rs1, new String[][] { + new String[] { "1", "2", "3" }, new String[] { "4", null, "5" } }); + ResultSet rs2 = stat.executeQuery("select * from csvread('" + + getBaseDir() + "/test.tsv',null,null,' ')"); + assertResultSetOrdered(rs2, new String[][] { + new String[] { "1", "2", "3" }, new String[] { "4", null, "5" } }); + conn.close(); + FileUtils.delete(f.getAbsolutePath()); + FileUtils.delete(getBaseDir() + "/test.tsv"); + } + + /** + * Test custom NULL string. + */ + private void testNull() throws Exception { + deleteDb("csv"); + + String fileName = getBaseDir() + "/testNull.csv"; + FileUtils.delete(fileName); + + OutputStream out = FileUtils.newOutputStream(fileName, false); + String csvContent = "\"A\",\"B\",\"C\",\"D\"\n\\N,\"\",\"\\N\","; + byte[] b = csvContent.getBytes(StandardCharsets.UTF_8); + out.write(b, 0, b.length); + out.close(); + Csv csv = new Csv(); + csv.setNullString("\\N"); + ResultSet rs = csv.read(fileName, null, "UTF8"); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(4, meta.getColumnCount()); + assertEquals("A", meta.getColumnLabel(1)); + assertEquals("B", meta.getColumnLabel(2)); + assertEquals("C", meta.getColumnLabel(3)); + assertEquals("D", meta.getColumnLabel(4)); + assertTrue(rs.next()); + assertEquals(null, rs.getString(1)); + assertEquals("", rs.getString(2)); + // null is never quoted + assertEquals("\\N", rs.getString(3)); + // an empty string is always parsed as null + assertEquals(null, rs.getString(4)); + assertFalse(rs.next()); + + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("call csvwrite('" + fileName + + "', 'select NULL as a, '''' as b, ''\\N'' as c, NULL as d', " + + "'UTF8', ',', '\"', NULL, '\\N', '\n')"); + InputStreamReader reader = new InputStreamReader( + FileUtils.newInputStream(fileName)); + // on read, 
an empty string is treated like null, + but on write a null is always written with the nullString + String data = IOUtils.readStringAndClose(reader, -1); + assertEquals(csvContent + "\\N", data.trim()); + conn.close(); + + FileUtils.delete(fileName); + } + + private void testRandomData() throws SQLException { + deleteDb("csv"); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("drop table if exists test"); + stat.execute("create table test(id identity, a varchar, b varchar)"); + int len = getSize(1000, 10000); + PreparedStatement prep = conn.prepareStatement( + "insert into test(a, b) values(?, ?)"); + ArrayList<String[]> list = New.arrayList(); + Random random = new Random(1); + for (int i = 0; i < len; i++) { + String a = randomData(random), b = randomData(random); + prep.setString(1, a); + prep.setString(2, b); + list.add(new String[] { a, b }); + prep.execute(); + } + stat.execute("call csvwrite('" + getBaseDir() + + "/test.csv', 'select a, b from test order by id', 'UTF-8', '|', '#')"); + Csv csv = new Csv(); + csv.setFieldSeparatorRead('|'); + csv.setFieldDelimiter('#'); + ResultSet rs = csv.read(getBaseDir() + "/test.csv", null, "UTF-8"); + for (int i = 0; i < len; i++) { + assertTrue(rs.next()); + String[] pair = list.get(i); + assertEquals(pair[0], rs.getString(1)); + assertEquals(pair[1], rs.getString(2)); + } + assertFalse(rs.next()); + conn.close(); + FileUtils.delete(getBaseDir() + "/test.csv"); + } + + private static String randomData(Random random) { + if (random.nextInt(10) == 1) { + return null; + } + int len = random.nextInt(5); + StringBuilder buff = new StringBuilder(); + String chars = "\\\'\",\r\n\t ;.-123456|#"; + for (int i = 0; i < len; i++) { + buff.append(chars.charAt(random.nextInt(chars.length()))); + } + return buff.toString(); + } + + private void testEmptyFieldDelimiter() throws Exception { + String fileName = getBaseDir() + "/test.csv"; + FileUtils.delete(fileName); + Connection conn = 
getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("call csvwrite('" + fileName + + "', 'select 1 id, ''Hello'' name', null, '|', '', null, null, chr(10))"); + InputStreamReader reader = new InputStreamReader( + FileUtils.newInputStream(fileName)); + String text = IOUtils.readStringAndClose(reader, -1).trim(); + text = text.replace('\n', ' '); + assertEquals("ID|NAME 1|Hello", text); + ResultSet rs = stat.executeQuery("select * from csvread('" + + fileName + "', null, null, '|', '')"); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(2, meta.getColumnCount()); + assertEquals("ID", meta.getColumnLabel(1)); + assertEquals("NAME", meta.getColumnLabel(2)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + conn.close(); + FileUtils.delete(fileName); + } + + private void testFieldDelimiter() throws Exception { + String fileName = getBaseDir() + "/test.csv"; + String fileName2 = getBaseDir() + "/test2.csv"; + FileUtils.delete(fileName); + OutputStream out = FileUtils.newOutputStream(fileName, false); + byte[] b = "'A'; 'B'\n\'It\\'s nice\'; '\nHello\\*\n'".getBytes(); + out.write(b, 0, b.length); + out.close(); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * from csvread('" + + fileName + "', null, null, ';', '''', '\\')"); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(2, meta.getColumnCount()); + assertEquals("A", meta.getColumnLabel(1)); + assertEquals("B", meta.getColumnLabel(2)); + assertTrue(rs.next()); + assertEquals("It's nice", rs.getString(1)); + assertEquals("\nHello*\n", rs.getString(2)); + assertFalse(rs.next()); + stat.execute("call csvwrite('" + fileName2 + + "', 'select * from csvread(''" + fileName + + "'', null, null, '';'', '''''''', ''\\'')', null, '+', '*', '#')"); + rs = stat.executeQuery("select * from csvread('" + fileName2 + + "', 
null, null, '+', '*', '#')"); + meta = rs.getMetaData(); + assertEquals(2, meta.getColumnCount()); + assertEquals("A", meta.getColumnLabel(1)); + assertEquals("B", meta.getColumnLabel(2)); + assertTrue(rs.next()); + assertEquals("It's nice", rs.getString(1)); + assertEquals("\nHello*\n", rs.getString(2)); + assertFalse(rs.next()); + conn.close(); + FileUtils.delete(fileName); + FileUtils.delete(fileName2); + } + + private void testPipe() throws SQLException { + deleteDb("csv"); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("call csvwrite('" + getBaseDir() + + "/test.csv', 'select 1 id, ''Hello'' name', 'utf-8', '|')"); + ResultSet rs = stat.executeQuery("select * from csvread('" + + getBaseDir() + "/test.csv', null, 'utf-8', '|')"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + new File(getBaseDir() + "/test.csv").delete(); + + // PreparedStatement prep = conn.prepareStatement("select * from + // csvread(?, null, ?, ?)"); + // prep.setString(1, BASE_DIR+"/test.csv"); + // prep.setString(2, "utf-8"); + // prep.setString(3, "|"); + // rs = prep.executeQuery(); + + conn.close(); + FileUtils.delete(getBaseDir() + "/test.csv"); + } + + private void testAsTable() throws SQLException { + deleteDb("csv"); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("call csvwrite('" + getBaseDir() + + "/test.csv', 'select 1 id, ''Hello'' name')"); + ResultSet rs = stat.executeQuery("select name from csvread('" + + getBaseDir() + "/test.csv')"); + assertTrue(rs.next()); + assertEquals("Hello", rs.getString(1)); + assertFalse(rs.next()); + rs = stat.executeQuery("call csvread('" + getBaseDir() + "/test.csv')"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + new File(getBaseDir() + "/test.csv").delete(); + conn.close(); + } 
+ + private void testRead() throws Exception { + String fileName = getBaseDir() + "/test.csv"; + FileUtils.delete(fileName); + OutputStream out = FileUtils.newOutputStream(fileName, false); + byte[] b = ("a,b,c,d\n201,-2,0,18\n, \"abc\"\"\" ," + + ",\"\"\n 1 ,2 , 3, 4 \n5, 6, 7, 8").getBytes(); + out.write(b, 0, b.length); + out.close(); + ResultSet rs = new Csv().read(fileName, null, "UTF8"); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(4, meta.getColumnCount()); + assertEquals("A", meta.getColumnLabel(1)); + assertEquals("B", meta.getColumnLabel(2)); + assertEquals("C", meta.getColumnLabel(3)); + assertEquals("D", meta.getColumnLabel(4)); + assertTrue(rs.next()); + assertEquals("201", rs.getString(1)); + assertEquals("-2", rs.getString(2)); + assertEquals("0", rs.getString(3)); + assertEquals("18", rs.getString(4)); + assertTrue(rs.next()); + assertEquals(null, rs.getString(1)); + assertEquals("abc\"", rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertEquals("", rs.getString(4)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("2", rs.getString(2)); + assertEquals("3", rs.getString(3)); + assertEquals("4", rs.getString(4)); + assertTrue(rs.next()); + assertEquals("5", rs.getString(1)); + assertEquals("6", rs.getString(2)); + assertEquals("7", rs.getString(3)); + assertEquals("8", rs.getString(4)); + assertFalse(rs.next()); + + // a,b,c,d + // 201,-2,0,18 + // 201,2,0,18 + // 201,2,0,18 + // 201,2,0,18 + // 201,2,0,18 + // 201,2,0,18 + FileUtils.delete(fileName); + } + + private void testWriteRead() throws SQLException { + deleteDb("csv"); + Connection conn = getConnection("csv"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR)"); + // int len = 100000; + int len = 100; + for (int i = 0; i < len; i++) { + stat.execute("INSERT INTO TEST(NAME) VALUES('Ruebezahl')"); + } + long time; + time = System.nanoTime(); + new Csv().write(conn, getBaseDir() + 
"/testRW.csv", + "SELECT X ID, 'Ruebezahl' NAME FROM SYSTEM_RANGE(1, " + len + ")", "UTF8"); + trace("write: " + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + ResultSet rs; + time = System.nanoTime(); + for (int i = 0; i < 30; i++) { + rs = new Csv().read(getBaseDir() + "/testRW.csv", null, "UTF8"); + while (rs.next()) { + // ignore + } + } + trace("read: " + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + rs = new Csv().read(getBaseDir() + "/testRW.csv", null, "UTF8"); + // stat.execute("CREATE ALIAS CSVREAD FOR \"org.h2.tools.Csv.read\""); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(2, meta.getColumnCount()); + for (int i = 0; i < len; i++) { + rs.next(); + assertEquals("" + (i + 1), rs.getString("ID")); + assertEquals("Ruebezahl", rs.getString("NAME")); + } + assertFalse(rs.next()); + rs.close(); + conn.close(); + FileUtils.delete(getBaseDir() + "/testRW.csv"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestDateStorage.java b/modules/h2/src/test/java/org/h2/test/db/TestDateStorage.java new file mode 100644 index 0000000000000..bcb435026afab --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestDateStorage.java @@ -0,0 +1,285 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.Date; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.util.ArrayList; +import java.util.Calendar; +import java.util.GregorianCalendar; +import java.util.SimpleTimeZone; +import java.util.TimeZone; +import org.h2.test.TestBase; +import org.h2.test.unit.TestDate; +import org.h2.util.DateTimeUtils; +import org.h2.value.ValueTimestamp; + +/** + * Tests the date transfer and storage. 
+ */ +public class TestDateStorage extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb(getTestName()); + testDateTimeTimestampWithCalendar(); + testMoveDatabaseToAnotherTimezone(); + testAllTimeZones(); + testCurrentTimeZone(); + } + + private void testDateTimeTimestampWithCalendar() throws SQLException { + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table ts(x timestamp primary key)"); + stat.execute("create table t(x time primary key)"); + stat.execute("create table d(x date)"); + Calendar utcCalendar = new GregorianCalendar(new SimpleTimeZone(0, "Z")); + TimeZone old = TimeZone.getDefault(); + DateTimeUtils.resetCalendar(); + TimeZone.setDefault(TimeZone.getTimeZone("PST")); + try { + Timestamp ts1 = Timestamp.valueOf("2010-03-13 18:15:00"); + Time t1 = new Time(ts1.getTime()); + Date d1 = new Date(ts1.getTime()); + // when converted to UTC, this is 03:15, which doesn't actually + // exist because of summer time change at that day + Timestamp ts2 = Timestamp.valueOf("2010-03-13 19:15:00"); + Time t2 = new Time(ts2.getTime()); + Date d2 = new Date(ts2.getTime()); + PreparedStatement prep; + ResultSet rs; + prep = conn.prepareStatement("insert into ts values(?)"); + prep.setTimestamp(1, ts1, utcCalendar); + prep.execute(); + prep.setTimestamp(1, ts2, utcCalendar); + prep.execute(); + prep = conn.prepareStatement("insert into t values(?)"); + prep.setTime(1, t1, utcCalendar); + prep.execute(); + prep.setTime(1, t2, utcCalendar); + prep.execute(); + prep = conn.prepareStatement("insert into d values(?)"); + prep.setDate(1, d1, utcCalendar); + prep.execute(); + prep.setDate(1, d2, utcCalendar); + prep.execute(); + rs = stat.executeQuery("select * from ts order by x"); + rs.next(); + 
assertEquals("2010-03-14 02:15:00", + rs.getString(1)); + assertEquals("2010-03-13 18:15:00.0", + rs.getTimestamp(1, utcCalendar).toString()); + assertEquals("2010-03-14 03:15:00.0", + rs.getTimestamp(1).toString()); + assertEquals("2010-03-14 02:15:00", + rs.getString("x")); + assertEquals("2010-03-13 18:15:00.0", + rs.getTimestamp("x", utcCalendar).toString()); + assertEquals("2010-03-14 03:15:00.0", + rs.getTimestamp("x").toString()); + rs.next(); + assertEquals("2010-03-14 03:15:00", + rs.getString(1)); + assertEquals("2010-03-13 19:15:00.0", + rs.getTimestamp(1, utcCalendar).toString()); + assertEquals("2010-03-14 03:15:00.0", + rs.getTimestamp(1).toString()); + assertEquals("2010-03-14 03:15:00", + rs.getString("x")); + assertEquals("2010-03-13 19:15:00.0", + rs.getTimestamp("x", utcCalendar).toString()); + assertEquals("2010-03-14 03:15:00.0", + rs.getTimestamp("x").toString()); + rs = stat.executeQuery("select * from t order by x"); + rs.next(); + assertEquals("02:15:00", rs.getString(1)); + assertEquals("18:15:00", rs.getTime(1, utcCalendar).toString()); + assertEquals("02:15:00", rs.getTime(1).toString()); + assertEquals("02:15:00", rs.getString("x")); + assertEquals("18:15:00", rs.getTime("x", utcCalendar).toString()); + assertEquals("02:15:00", rs.getTime("x").toString()); + rs.next(); + assertEquals("03:15:00", rs.getString(1)); + assertEquals("19:15:00", rs.getTime(1, utcCalendar).toString()); + assertEquals("03:15:00", rs.getTime(1).toString()); + assertEquals("03:15:00", rs.getString("x")); + assertEquals("19:15:00", rs.getTime("x", utcCalendar).toString()); + assertEquals("03:15:00", rs.getTime("x").toString()); + rs = stat.executeQuery("select * from d order by x"); + rs.next(); + assertEquals("2010-03-14", rs.getString(1)); + assertEquals("2010-03-13", rs.getDate(1, utcCalendar).toString()); + assertEquals("2010-03-14", rs.getDate(1).toString()); + assertEquals("2010-03-14", rs.getString("x")); + assertEquals("2010-03-13", rs.getDate("x", 
utcCalendar).toString()); + assertEquals("2010-03-14", rs.getDate("x").toString()); + rs.next(); + assertEquals("2010-03-14", rs.getString(1)); + assertEquals("2010-03-13", rs.getDate(1, utcCalendar).toString()); + assertEquals("2010-03-14", rs.getDate(1).toString()); + assertEquals("2010-03-14", rs.getString("x")); + assertEquals("2010-03-13", rs.getDate("x", utcCalendar).toString()); + assertEquals("2010-03-14", rs.getDate("x").toString()); + } finally { + TimeZone.setDefault(old); + DateTimeUtils.resetCalendar(); + } + stat.execute("drop table ts"); + stat.execute("drop table t"); + stat.execute("drop table d"); + conn.close(); + } + + private void testMoveDatabaseToAnotherTimezone() throws SQLException { + if (config.memory) { + return; + } + if (config.mvStore) { + return; + } + String db = getTestName() + ";LOG=0;FILE_LOCK=NO"; + Connection conn = getConnection(db); + Statement stat; + stat = conn.createStatement(); + stat.execute("create table date_list(tz varchar, t varchar, ts timestamp)"); + conn.close(); + TimeZone defaultTimeZone = TimeZone.getDefault(); + ArrayList distinct = TestDate.getDistinctTimeZones(); + try { + for (TimeZone tz : distinct) { + // println("insert using " + tz.getID()); + TimeZone.setDefault(tz); + DateTimeUtils.resetCalendar(); + conn = getConnection(db); + PreparedStatement prep = conn.prepareStatement( + "insert into date_list values(?, ?, ?)"); + prep.setString(1, tz.getID()); + for (int m = 1; m < 10; m++) { + String s = "2000-0" + m + "-01 15:00:00"; + prep.setString(2, s); + prep.setTimestamp(3, Timestamp.valueOf(s)); + prep.execute(); + } + conn.close(); + } + // printTime("inserted"); + for (TimeZone target : distinct) { + // println("select from " + target.getID()); + if ("Pacific/Kiritimati".equals(target.getID())) { + // there is a problem with this time zone, but it seems + // unrelated to this database (possibly wrong timezone + // information?) 
+ continue; + } + TimeZone.setDefault(target); + DateTimeUtils.resetCalendar(); + conn = getConnection(db); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * from date_list order by t"); + while (rs.next()) { + String source = rs.getString(1); + String a = rs.getString(2); + String b = rs.getString(3); + b = b.substring(0, a.length()); + if (!a.equals(b)) { + assertEquals(source + ">" + target, a, b); + } + } + conn.close(); + } + } finally { + TimeZone.setDefault(defaultTimeZone); + DateTimeUtils.resetCalendar(); + } + // printTime("done"); + conn = getConnection(db); + stat = conn.createStatement(); + stat.execute("drop table date_list"); + conn.close(); + } + + private static void testCurrentTimeZone() { + for (int year = 1890; year < 2050; year += 3) { + for (int month = 1; month <= 12; month++) { + for (int day = 1; day < 29; day++) { + for (int hour = 0; hour < 24; hour++) { + test(year, month, day, hour); + } + } + } + } + } + + private static void test(int year, int month, int day, int hour) { + ValueTimestamp.parse(year + "-" + month + "-" + day + " " + hour + ":00:00"); + } + + private void testAllTimeZones() throws SQLException { + Connection conn = getConnection(getTestName()); + TimeZone defaultTimeZone = TimeZone.getDefault(); + PreparedStatement prep = conn.prepareStatement("CALL CAST(? 
AS DATE)"); + try { + ArrayList distinct = TestDate.getDistinctTimeZones(); + for (TimeZone tz : distinct) { + // println(tz.getID()); + TimeZone.setDefault(tz); + DateTimeUtils.resetCalendar(); + for (int d = 101; d < 129; d++) { + test(prep, d); + } + } + } finally { + TimeZone.setDefault(defaultTimeZone); + DateTimeUtils.resetCalendar(); + } + conn.close(); + deleteDb(getTestName()); + } + + private void test(PreparedStatement prep, int d) throws SQLException { + String s = "2040-10-" + ("" + d).substring(1); + // some dates don't work in some versions of Java + // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6772689 + java.sql.Date date = java.sql.Date.valueOf(s); + long time = date.getTime(); + while (true) { + date = new java.sql.Date(time); + String x = date.toString(); + if (x.equals(s)) { + break; + } + time += 1000; + } + if (!date.toString().equals(s)) { + println(TimeZone.getDefault().getID() + " " + s + " <> " + date.toString()); + return; + } + prep.setString(1, s); + ResultSet rs = prep.executeQuery(); + rs.next(); + String t = rs.getString(1); + if (!s.equals(t)) { + assertEquals(TimeZone.getDefault().getID(), s, t); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestDeadlock.java b/modules/h2/src/test/java/org/h2/test/db/TestDeadlock.java new file mode 100644 index 0000000000000..06197d2180570 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestDeadlock.java @@ -0,0 +1,380 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.Reader; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Test for deadlocks in the code, and test the deadlock detection mechanism. + */ +public class TestDeadlock extends TestBase { + + /** + * The first connection. + */ + Connection c1; + + /** + * The second connection. + */ + Connection c2; + + /** + * The third connection. + */ + Connection c3; + private volatile SQLException lastException; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("deadlock"); + testTemporaryTablesAndMetaDataLocking(); + testConcurrentLobReadAndTempResultTableDelete(); + testDiningPhilosophers(); + testLockUpgrade(); + testThreePhilosophers(); + testNoDeadlock(); + testThreeSome(); + deleteDb("deadlock"); + } + + private void testConcurrentLobReadAndTempResultTableDelete() throws Exception { + deleteDb("deadlock"); + String url = "deadlock;MAX_MEMORY_ROWS=10"; + Connection conn, conn2; + Statement stat2; + conn = getConnection(url); + conn2 = getConnection(url); + final Statement stat = conn.createStatement(); + stat2 = conn2.createStatement(); + stat.execute("create table test(id int primary key, name varchar) as " + + "select x, 'Hello' from system_range(1,20)"); + stat2.execute("create table test_clob(id int primary key, data clob) as " + + "select 1, space(10000)"); + ResultSet rs2 = stat2.executeQuery("select * from test_clob"); + rs2.next(); + Task t = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + stat.execute("select * from (select distinct id from test)"); + } + } + }; + 
t.execute(); + long start = System.nanoTime(); + while (System.nanoTime() - start < TimeUnit.SECONDS.toNanos(1)) { + Reader r = rs2.getCharacterStream(2); + char[] buff = new char[1024]; + while (true) { + int x = r.read(buff); + if (x < 0) { + break; + } + } + } + t.get(); + stat.execute("drop all objects"); + conn.close(); + conn2.close(); + } + + private void initTest() throws SQLException { + c1 = getConnection("deadlock"); + c2 = getConnection("deadlock"); + c3 = getConnection("deadlock"); + c1.createStatement().execute("SET LOCK_TIMEOUT 1000"); + c2.createStatement().execute("SET LOCK_TIMEOUT 1000"); + c3.createStatement().execute("SET LOCK_TIMEOUT 1000"); + c1.setAutoCommit(false); + c2.setAutoCommit(false); + c3.setAutoCommit(false); + lastException = null; + } + + private void end() throws SQLException { + c1.close(); + c2.close(); + c3.close(); + } + + /** + * This class wraps exception handling to simplify creating small threads + * that execute a statement. + */ + abstract class DoIt extends Thread { + + /** + * The operation to execute. + */ + abstract void execute() throws SQLException; + + @Override + public void run() { + try { + execute(); + } catch (SQLException e) { + catchDeadlock(e); + } + } + } + + /** + * Add the exception to the list of exceptions. 
+ * + * @param e the exception + */ + void catchDeadlock(SQLException e) { + if (lastException != null) { + lastException.setNextException(e); + } else { + lastException = e; + } + } + + private void testNoDeadlock() throws Exception { + initTest(); + c1.createStatement().execute("CREATE TABLE TEST_A(ID INT PRIMARY KEY)"); + c1.createStatement().execute("CREATE TABLE TEST_B(ID INT PRIMARY KEY)"); + c1.createStatement().execute("CREATE TABLE TEST_C(ID INT PRIMARY KEY)"); + c1.commit(); + c1.createStatement().execute("INSERT INTO TEST_A VALUES(1)"); + c2.createStatement().execute("INSERT INTO TEST_B VALUES(1)"); + c3.createStatement().execute("INSERT INTO TEST_C VALUES(1)"); + DoIt t2 = new DoIt() { + @Override + public void execute() throws SQLException { + c1.createStatement().execute("DELETE FROM TEST_B"); + c1.commit(); + } + }; + t2.start(); + DoIt t3 = new DoIt() { + @Override + public void execute() throws SQLException { + c2.createStatement().execute("DELETE FROM TEST_C"); + c2.commit(); + } + }; + t3.start(); + Thread.sleep(500); + try { + c3.createStatement().execute("DELETE FROM TEST_C"); + c3.commit(); + } catch (SQLException e) { + catchDeadlock(e); + } + t2.join(); + t3.join(); + if (lastException != null) { + throw lastException; + } + c1.commit(); + c2.commit(); + c3.commit(); + c1.createStatement().execute("DROP TABLE TEST_A, TEST_B, TEST_C"); + end(); + + } + + private void testThreePhilosophers() throws Exception { + if (config.mvcc || config.mvStore) { + return; + } + initTest(); + c1.createStatement().execute("CREATE TABLE TEST_A(ID INT PRIMARY KEY)"); + c1.createStatement().execute("CREATE TABLE TEST_B(ID INT PRIMARY KEY)"); + c1.createStatement().execute("CREATE TABLE TEST_C(ID INT PRIMARY KEY)"); + c1.commit(); + c1.createStatement().execute("INSERT INTO TEST_A VALUES(1)"); + c2.createStatement().execute("INSERT INTO TEST_B VALUES(1)"); + c3.createStatement().execute("INSERT INTO TEST_C VALUES(1)"); + DoIt t2 = new DoIt() { + @Override + 
public void execute() throws SQLException { + c1.createStatement().execute("DELETE FROM TEST_B"); + c1.commit(); + } + }; + t2.start(); + DoIt t3 = new DoIt() { + @Override + public void execute() throws SQLException { + c2.createStatement().execute("DELETE FROM TEST_C"); + c2.commit(); + } + }; + t3.start(); + try { + c3.createStatement().execute("DELETE FROM TEST_A"); + c3.commit(); + } catch (SQLException e) { + catchDeadlock(e); + } + t2.join(); + t3.join(); + checkDeadlock(); + c1.commit(); + c2.commit(); + c3.commit(); + c1.createStatement().execute("DROP TABLE TEST_A, TEST_B, TEST_C"); + end(); + } + + // test case for issue #61: + // http://code.google.com/p/h2database/issues/detail?id=61 + private void testThreeSome() throws Exception { + if (config.mvcc || config.mvStore) { + return; + } + initTest(); + c1.createStatement().execute("CREATE TABLE TEST_A(ID INT PRIMARY KEY)"); + c1.createStatement().execute("CREATE TABLE TEST_B(ID INT PRIMARY KEY)"); + c1.createStatement().execute("CREATE TABLE TEST_C(ID INT PRIMARY KEY)"); + c1.commit(); + c1.createStatement().execute("INSERT INTO TEST_A VALUES(1)"); + c1.createStatement().execute("INSERT INTO TEST_B VALUES(1)"); + c2.createStatement().execute("INSERT INTO TEST_C VALUES(1)"); + DoIt t2 = new DoIt() { + @Override + public void execute() throws SQLException { + c3.createStatement().execute("INSERT INTO TEST_B VALUES(2)"); + c3.commit(); + } + }; + t2.start(); + DoIt t3 = new DoIt() { + @Override + public void execute() throws SQLException { + c2.createStatement().execute("INSERT INTO TEST_A VALUES(2)"); + c2.commit(); + } + }; + t3.start(); + try { + c1.createStatement().execute("INSERT INTO TEST_C VALUES(2)"); + c1.commit(); + } catch (SQLException e) { + catchDeadlock(e); + c1.rollback(); + } + t2.join(); + t3.join(); + checkDeadlock(); + c1.commit(); + c2.commit(); + c3.commit(); + c1.createStatement().execute("DROP TABLE TEST_A, TEST_B, TEST_C"); + end(); + } + + private void testLockUpgrade() throws 
Exception { + if (config.mvcc || config.mvStore) { + return; + } + initTest(); + c1.createStatement().execute("CREATE TABLE TEST(ID INT PRIMARY KEY)"); + c1.createStatement().execute("INSERT INTO TEST VALUES(1)"); + c1.commit(); + c1.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); + c2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); + c1.createStatement().executeQuery("SELECT * FROM TEST"); + c2.createStatement().executeQuery("SELECT * FROM TEST"); + Thread t1 = new DoIt() { + @Override + public void execute() throws SQLException { + c1.createStatement().execute("DELETE FROM TEST"); + c1.commit(); + } + }; + t1.start(); + try { + c2.createStatement().execute("DELETE FROM TEST"); + c2.commit(); + } catch (SQLException e) { + catchDeadlock(e); + } + t1.join(); + checkDeadlock(); + c1.commit(); + c2.commit(); + c1.createStatement().execute("DROP TABLE TEST"); + end(); + } + + private void testDiningPhilosophers() throws Exception { + if (config.mvcc || config.mvStore) { + return; + } + initTest(); + c1.createStatement().execute("CREATE TABLE T1(ID INT)"); + c1.createStatement().execute("CREATE TABLE T2(ID INT)"); + c1.createStatement().execute("INSERT INTO T1 VALUES(1)"); + c2.createStatement().execute("INSERT INTO T2 VALUES(1)"); + DoIt t1 = new DoIt() { + @Override + public void execute() throws SQLException { + c1.createStatement().execute("INSERT INTO T2 VALUES(2)"); + c1.commit(); + } + }; + t1.start(); + try { + c2.createStatement().execute("INSERT INTO T1 VALUES(2)"); + } catch (SQLException e) { + catchDeadlock(e); + } + t1.join(); + checkDeadlock(); + c1.commit(); + c2.commit(); + c1.createStatement().execute("DROP TABLE T1, T2"); + end(); + } + + private void checkDeadlock() throws SQLException { + assertTrue(lastException != null); + assertKnownException(lastException); + assertEquals(ErrorCode.DEADLOCK_1, lastException.getErrorCode()); + SQLException e2 = lastException.getNextException(); + if (e2 != null) { + // we have two 
exceptions, but there should only be one + throw new SQLException("Expected one exception, got multiple", e2); + } + } + + // there was a bug in the meta data locking here + private void testTemporaryTablesAndMetaDataLocking() throws Exception { + deleteDb("deadlock"); + Connection conn = getConnection("deadlock"); + Statement stmt = conn.createStatement(); + conn.setAutoCommit(false); + stmt.execute("CREATE SEQUENCE IF NOT EXISTS SEQ1 START WITH 1000000"); + stmt.execute("CREATE FORCE VIEW V1 AS WITH RECURSIVE TEMP(X) AS " + + "(SELECT x FROM DUAL) SELECT * FROM TEMP"); + stmt.executeQuery("SELECT SEQ1.NEXTVAL"); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestDrop.java b/modules/h2/src/test/java/org/h2/test/db/TestDrop.java new file mode 100644 index 0000000000000..60843fcb09e50 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestDrop.java @@ -0,0 +1,78 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; + +/** + * Test DROP statement + */ +public class TestDrop extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("drop"); + conn = getConnection("drop"); + stat = conn.createStatement(); + + testTableDependsOnView(); + testComputedColumnDependency(); + testInterSchemaDependency(); + + conn.close(); + deleteDb("drop"); + } + + private void testTableDependsOnView() throws SQLException { + stat.execute("drop all objects"); + stat.execute("create table a(x int)"); + stat.execute("create view b as select * from a"); + stat.execute("create table c(y int check (select count(*) from b) = 0)"); + stat.execute("drop all objects"); + } + + private void testComputedColumnDependency() throws SQLException { + stat.execute("DROP ALL OBJECTS"); + stat.execute("CREATE TABLE A (A INT);"); + stat.execute("CREATE TABLE B (B INT AS SELECT A FROM A);"); + stat.execute("DROP ALL OBJECTS"); + stat.execute("CREATE SCHEMA TEST_SCHEMA"); + stat.execute("CREATE TABLE TEST_SCHEMA.A (A INT);"); + stat.execute("CREATE TABLE TEST_SCHEMA.B " + + "(B INT AS SELECT A FROM TEST_SCHEMA.A);"); + stat.execute("DROP SCHEMA TEST_SCHEMA CASCADE"); + } + + private void testInterSchemaDependency() throws SQLException { + stat.execute("drop all objects;"); + stat.execute("create schema table_view"); + stat.execute("set schema table_view"); + stat.execute("create table test1 (id int, name varchar(20))"); + stat.execute("create view test_view_1 as (select * from test1)"); + stat.execute("set schema public"); + stat.execute("create schema test_run"); + stat.execute("set schema test_run"); + stat.execute("create table test2 (id int, address varchar(20), " + + "constraint a_cons check (id in (select id from table_view.test1)))"); + stat.execute("set schema public"); + stat.execute("drop all objects"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestDuplicateKeyUpdate.java b/modules/h2/src/test/java/org/h2/test/db/TestDuplicateKeyUpdate.java new file mode 100644 index 
0000000000000..4cc6e73c016d9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestDuplicateKeyUpdate.java @@ -0,0 +1,265 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; + +/** + * Tests for the ON DUPLICATE KEY UPDATE in the Insert class. + */ +public class TestDuplicateKeyUpdate extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("duplicateKeyUpdate"); + Connection conn = getConnection("duplicateKeyUpdate;MODE=MySQL"); + testDuplicateOnPrimary(conn); + testDuplicateOnUnique(conn); + testDuplicateCache(conn); + testDuplicateExpression(conn); + testOnDuplicateKeyInsertBatch(conn); + testOnDuplicateKeyInsertMultiValue(conn); + testPrimaryKeyAndUniqueKey(conn); + conn.close(); + deleteDb("duplicateKeyUpdate"); + } + + private void testDuplicateOnPrimary(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("CREATE TABLE table_test (\n" + + " id bigint(20) NOT NULL ,\n" + + " a_text varchar(254) NOT NULL,\n" + + " some_text varchar(254) NULL,\n" + + " PRIMARY KEY (id)\n" + + ") ;"); + + stat.execute("INSERT INTO table_test ( id, a_text, some_text ) VALUES " + + "(1, 'aaaaaaaaaa', 'aaaaaaaaaa'), " + + "(2, 'bbbbbbbbbb', 'bbbbbbbbbb'), "+ + "(3, 'cccccccccc', 'cccccccccc'), " + + "(4, 'dddddddddd', 'dddddddddd'), " + + "(5, 'eeeeeeeeee', 'eeeeeeeeee')"); + + stat.execute("INSERT INTO table_test ( id , a_text, some_text ) " + + "VALUES (1, 'zzzzzzzzzz', 'abcdefghij') " + + 
"ON DUPLICATE KEY UPDATE some_text='UPDATE'"); + + rs = stat.executeQuery("SELECT some_text FROM table_test where id = 1"); + rs.next(); + assertEquals("UPDATE", rs.getNString(1)); + + stat.execute("INSERT INTO table_test ( id , a_text, some_text ) " + + "VALUES (3, 'zzzzzzzzzz', 'SOME TEXT') " + + "ON DUPLICATE KEY UPDATE some_text=values(some_text)"); + rs = stat.executeQuery("SELECT some_text FROM table_test where id = 3"); + rs.next(); + assertEquals("SOME TEXT", rs.getNString(1)); + } + + private void testDuplicateOnUnique(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("CREATE TABLE table_test2 (\n" + + " id bigint(20) NOT NULL AUTO_INCREMENT,\n" + + " a_text varchar(254) NOT NULL,\n" + + " some_text varchar(254) NOT NULL,\n" + + " updatable_text varchar(254) NULL,\n" + + " PRIMARY KEY (id)\n" + ") ;"); + + stat.execute("CREATE UNIQUE INDEX index_name \n" + + "ON table_test2 (a_text, some_text);"); + + stat.execute("INSERT INTO table_test2 " + + "( a_text, some_text, updatable_text ) VALUES ('a', 'a', '1')"); + stat.execute("INSERT INTO table_test2 " + + "( a_text, some_text, updatable_text ) VALUES ('b', 'b', '2')"); + stat.execute("INSERT INTO table_test2 " + + "( a_text, some_text, updatable_text ) VALUES ('c', 'c', '3')"); + stat.execute("INSERT INTO table_test2 " + + "( a_text, some_text, updatable_text ) VALUES ('d', 'd', '4')"); + stat.execute("INSERT INTO table_test2 " + + "( a_text, some_text, updatable_text ) VALUES ('e', 'e', '5')"); + + stat.execute("INSERT INTO table_test2 ( a_text, some_text ) " + + "VALUES ('e', 'e') ON DUPLICATE KEY UPDATE updatable_text='UPDATE'"); + + rs = stat.executeQuery("SELECT updatable_text " + + "FROM table_test2 where a_text = 'e'"); + rs.next(); + assertEquals("UPDATE", rs.getNString(1)); + + stat.execute("INSERT INTO table_test2 (a_text, some_text, updatable_text ) " + + "VALUES ('b', 'b', 'test'), ('c', 'c', 'test2') " + + "ON DUPLICATE KEY UPDATE 
updatable_text=values(updatable_text)"); + rs = stat.executeQuery("SELECT updatable_text " + + "FROM table_test2 where a_text in ('b', 'c') order by a_text"); + rs.next(); + assertEquals("test", rs.getNString(1)); + rs.next(); + assertEquals("test2", rs.getNString(1)); + } + + private void testDuplicateCache(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("CREATE TABLE table_test3 (\n" + + " id bigint(20) NOT NULL ,\n" + + " a_text varchar(254) NOT NULL,\n" + + " some_text varchar(254) NULL,\n" + + " PRIMARY KEY (id)\n" + + ") ;"); + + stat.execute("INSERT INTO table_test3 ( id, a_text, some_text ) " + + "VALUES (1, 'aaaaaaaaaa', 'aaaaaaaaaa')"); + + stat.execute("INSERT INTO table_test3 ( id , a_text, some_text ) " + + "VALUES (1, 'zzzzzzzzzz', 'SOME TEXT') " + + "ON DUPLICATE KEY UPDATE some_text=values(some_text)"); + rs = stat.executeQuery("SELECT some_text FROM table_test3 where id = 1"); + rs.next(); + assertEquals("SOME TEXT", rs.getNString(1)); + + // Execute twice the same query to use the one from cache without + // parsing, caused the values parameter to be seen as ambiguous + stat.execute("INSERT INTO table_test3 ( id , a_text, some_text ) " + + "VALUES (1, 'zzzzzzzzzz', 'SOME TEXT') " + + "ON DUPLICATE KEY UPDATE some_text=values(some_text)"); + rs = stat.executeQuery("SELECT some_text FROM table_test3 where id = 1"); + rs.next(); + assertEquals("SOME TEXT", rs.getNString(1)); + } + + private void testDuplicateExpression(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("CREATE TABLE table_test4 (\n" + + " id bigint(20) NOT NULL ,\n" + + " a_text varchar(254) NOT NULL,\n" + + " some_value int(10) NULL,\n" + + " PRIMARY KEY (id)\n" + + ") ;"); + + stat.execute("INSERT INTO table_test4 ( id, a_text, some_value ) " + + "VALUES (1, 'aaaaaaaaaa', 5)"); + stat.execute("INSERT INTO table_test4 ( id, a_text, some_value ) " + + 
"VALUES (2, 'aaaaaaaaaa', 5)"); + + stat.execute("INSERT INTO table_test4 ( id , a_text, some_value ) " + + "VALUES (1, 'b', 1) " + + "ON DUPLICATE KEY UPDATE some_value=some_value + values(some_value)"); + stat.execute("INSERT INTO table_test4 ( id , a_text, some_value ) " + + "VALUES (1, 'b', 1) " + + "ON DUPLICATE KEY UPDATE some_value=some_value + 100"); + stat.execute("INSERT INTO table_test4 ( id , a_text, some_value ) " + + "VALUES (2, 'b', 1) " + + "ON DUPLICATE KEY UPDATE some_value=values(some_value) + 1"); + rs = stat.executeQuery("SELECT some_value FROM table_test4 where id = 1"); + rs.next(); + assertEquals(106, rs.getInt(1)); + rs = stat.executeQuery( + "SELECT some_value FROM table_test4 where id = 2"); + rs.next(); + assertEquals(2, rs.getInt(1)); + } + + private void testOnDuplicateKeyInsertBatch(Connection conn) + throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("create table test " + + "(key varchar(1) primary key, count int not null)"); + + // Insert multiple values as a batch + for (int i = 0; i <= 2; ++i) { + PreparedStatement prep = conn.prepareStatement( + "insert into test(key, count) values(?, ?) 
" + + "on duplicate key update count = count + 1"); + prep.setString(1, "a"); + prep.setInt(2, 1); + prep.addBatch(); + prep.setString(1, "b"); + prep.setInt(2, 1); + prep.addBatch(); + prep.setString(1, "b"); + prep.setInt(2, 1); + prep.addBatch(); + prep.executeBatch(); + } + + // Check result + ResultSet rs = stat.executeQuery( + "select count from test where key = 'a'"); + rs.next(); + assertEquals(3, rs.getInt(1)); + + stat.execute("drop table test"); + } + + private void testOnDuplicateKeyInsertMultiValue(Connection conn) + throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("create table test" + + "(key varchar(1) primary key, count int not null)"); + + // Insert multiple values in single insert operation + for (int i = 0; i <= 2; ++i) { + PreparedStatement prep = conn.prepareStatement( + "insert into test(key, count) values(?, ?), (?, ?), (?, ?) " + + "on duplicate key update count = count + 1"); + prep.setString(1, "a"); + prep.setInt(2, 1); + prep.setString(3, "b"); + prep.setInt(4, 1); + prep.setString(5, "b"); + prep.setInt(6, 1); + prep.executeUpdate(); + } + conn.commit(); + + // Check result + ResultSet rs = stat.executeQuery("select count from test where key = 'a'"); + rs.next(); + assertEquals(3, rs.getInt(1)); + + stat.execute("drop table test"); + } + + private void testPrimaryKeyAndUniqueKey(Connection conn) throws SQLException + { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE test (id INT, dup INT, " + + "counter INT, PRIMARY KEY(id), UNIQUE(dup))"); + stat.execute("INSERT INTO test (id, dup, counter) VALUES (1, 1, 1)"); + stat.execute("INSERT INTO test (id, dup, counter) VALUES (2, 1, 1) " + + "ON DUPLICATE KEY UPDATE counter = counter + VALUES(counter)"); + + // Check result + ResultSet rs = stat.executeQuery("SELECT counter FROM test ORDER BY id"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals(false, rs.next()); + + stat.execute("drop table test"); + } +} diff --git 
a/modules/h2/src/test/java/org/h2/test/db/TestEncryptedDb.java b/modules/h2/src/test/java/org/h2/test/db/TestEncryptedDb.java new file mode 100644 index 0000000000000..27b5b26ea80e0 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestEncryptedDb.java @@ -0,0 +1,59 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Test using an encrypted database. + */ +public class TestEncryptedDb extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.memory || config.cipher != null) { + return; + } + deleteDb("encrypted"); + Connection conn = getConnection("encrypted;CIPHER=AES", "sa", "123 123"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("CHECKPOINT"); + stat.execute("SET WRITE_DELAY 0"); + stat.execute("INSERT INTO TEST VALUES(1)"); + stat.execute("SHUTDOWN IMMEDIATELY"); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn).close(); + + assertThrows(ErrorCode.FILE_ENCRYPTION_ERROR_1, this). 
+ getConnection("encrypted;CIPHER=AES", "sa", "1234 1234"); + + conn = getConnection("encrypted;CIPHER=AES", "sa", "123 123"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + + conn.close(); + deleteDb("encrypted"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestExclusive.java b/modules/h2/src/test/java/org/h2/test/db/TestExclusive.java new file mode 100644 index 0000000000000..5eff689d6db31 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestExclusive.java @@ -0,0 +1,67 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.atomic.AtomicInteger; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Test for the exclusive mode. + */ +public class TestExclusive extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("exclusive"); + Connection conn = getConnection("exclusive"); + Statement stat = conn.createStatement(); + stat.execute("set exclusive true"); + assertThrows(ErrorCode.DATABASE_IS_IN_EXCLUSIVE_MODE, this). 
+ getConnection("exclusive"); + + stat.execute("set exclusive false"); + Connection conn2 = getConnection("exclusive"); + final Statement stat2 = conn2.createStatement(); + stat.execute("set exclusive true"); + final AtomicInteger state = new AtomicInteger(0); + Task task = new Task() { + @Override + public void call() throws SQLException { + stat2.execute("select * from dual"); + if (state.get() != 1) { + new Error("unexpected state: " + state.get()).printStackTrace(); + } + } + }; + task.execute(); + state.set(1); + stat.execute("set exclusive false"); + task.get(); + stat.execute("set exclusive true"); + conn.close(); + + // check that exclusive mode is off when disconnected + stat2.execute("select * from dual"); + conn2.close(); + deleteDb("exclusive"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestFunctionOverload.java b/modules/h2/src/test/java/org/h2/test/db/TestFunctionOverload.java new file mode 100644 index 0000000000000..a887ccf686835 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestFunctionOverload.java @@ -0,0 +1,201 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests for overloaded user defined functions. + * + * @author Gary Tong + */ +public class TestFunctionOverload extends TestBase { + + private static final String ME = TestFunctionOverload.class.getName(); + private Connection conn; + private DatabaseMetaData meta; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("functionOverload"); + conn = getConnection("functionOverload"); + meta = conn.getMetaData(); + testControl(); + testOverload(); + testOverloadNamedArgs(); + testOverloadWithConnection(); + testOverloadError(); + conn.close(); + deleteDb("functionOverload"); + } + + private void testOverloadError() throws SQLException { + Statement stat = conn.createStatement(); + assertThrows(ErrorCode.METHODS_MUST_HAVE_DIFFERENT_PARAMETER_COUNTS_2, stat). + execute("create alias overloadError for \"" + ME + ".overloadError\""); + } + + private void testControl() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("create alias overload0 for \"" + ME + ".overload0\""); + ResultSet rs = stat.executeQuery("select overload0() from dual"); + assertTrue(rs.next()); + assertEquals("0 args", 0, rs.getInt(1)); + assertFalse(rs.next()); + rs = meta.getProcedures(null, null, "OVERLOAD0"); + rs.next(); + assertFalse(rs.next()); + } + + private void testOverload() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("create alias overload1or2 for \"" + ME + ".overload1or2\""); + ResultSet rs = stat.executeQuery("select overload1or2(1) from dual"); + rs.next(); + assertEquals("1 arg", 1, rs.getInt(1)); + assertFalse(rs.next()); + rs = stat.executeQuery("select overload1or2(1, 2) from dual"); + rs.next(); + assertEquals("2 args", 3, rs.getInt(1)); + assertFalse(rs.next()); + rs = meta.getProcedures(null, null, "OVERLOAD1OR2"); + rs.next(); + assertEquals(1, rs.getInt("NUM_INPUT_PARAMS")); + rs.next(); + assertEquals(2, rs.getInt("NUM_INPUT_PARAMS")); + assertFalse(rs.next()); + } + + private void testOverloadNamedArgs() throws SQLException { + Statement stat = conn.createStatement(); + + stat.execute("create alias overload1or2Named for \"" + ME + + ".overload1or2(int)\""); + + ResultSet rs = 
stat.executeQuery("select overload1or2Named(1) from dual"); + assertTrue("First Row", rs.next()); + assertEquals("1 arg", 1, rs.getInt(1)); + assertFalse("Second Row", rs.next()); + rs.close(); + assertThrows(ErrorCode.METHOD_NOT_FOUND_1, stat). + executeQuery("select overload1or2Named(1, 2) from dual"); + stat.close(); + } + + private void testOverloadWithConnection() throws SQLException { + Statement stat = conn.createStatement(); + + stat.execute("create alias overload1or2WithConn for \"" + ME + + ".overload1or2WithConn\""); + + ResultSet rs = stat.executeQuery("select overload1or2WithConn(1) from dual"); + rs.next(); + assertEquals("1 arg", 1, rs.getInt(1)); + assertFalse(rs.next()); + rs.close(); + + rs = stat.executeQuery("select overload1or2WithConn(1, 2) from dual"); + rs.next(); + assertEquals("2 args", 3, rs.getInt(1)); + assertFalse(rs.next()); + rs.close(); + + stat.close(); + } + + /** + * This method is called via reflection from the database. + * + * @return 0 + */ + public static int overload0() { + return 0; + } + + /** + * This method is called via reflection from the database. + * + * @param one the value + * @return the value + */ + public static int overload1or2(int one) { + return one; + } + + /** + * This method is called via reflection from the database. + * + * @param one the first value + * @param two the second value + * @return the sum of both + */ + public static int overload1or2(int one, int two) { + return one + two; + } + + /** + * This method is called via reflection from the database. + * + * @param conn the connection + * @param one the value + * @return the value + */ + public static int overload1or2WithConn(Connection conn, int one) + throws SQLException { + conn.createStatement().executeQuery("select 1 from dual"); + return one; + } + + /** + * This method is called via reflection from the database. 
+ * + * @param one the first value + * @param two the second value + * @return the sum of both + */ + public static int overload1or2WithConn(int one, int two) { + return one + two; + } + + /** + * This method is called via reflection from the database. + * + * @param one the first value + * @param two the second value + * @return the sum of both + */ + public static int overloadError(int one, int two) { + return one + two; + } + + /** + * This method is called via reflection from the database. + * + * @param one the first value + * @param two the second value + * @return the sum of both + */ + public static int overloadError(double one, double two) { + return (int) (one + two); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestFunctions.java b/modules/h2/src/test/java/org/h2/test/db/TestFunctions.java new file mode 100644 index 0000000000000..2577dd8a63165 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestFunctions.java @@ -0,0 +1,2501 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.BufferedInputStream; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.File; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.math.BigDecimal; +import java.sql.Array; +import java.sql.Blob; +import java.sql.CallableStatement; +import java.sql.Clob; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Timestamp; +import java.sql.Types; +import java.text.DecimalFormatSymbols; +import java.text.ParseException; +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Calendar; +import java.util.Currency; +import java.util.Date; +import java.util.GregorianCalendar; +import java.util.HashMap; +import java.util.Locale; +import java.util.Properties; +import java.util.TimeZone; +import java.util.UUID; +import org.h2.api.Aggregate; +import org.h2.api.AggregateFunction; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.jdbc.JdbcSQLException; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.ap.TestAnnotationProcessor; +import org.h2.tools.SimpleResultSet; +import org.h2.util.DateTimeUtils; +import org.h2.util.IOUtils; +import org.h2.util.New; +import org.h2.util.StringUtils; +import org.h2.util.ToChar.Capitalization; +import org.h2.util.ToDateParser; +import org.h2.value.Value; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + * Tests for user defined functions and aggregates. + */ +public class TestFunctions extends TestBase implements AggregateFunction { + + static int count; + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + // Locale.setDefault(Locale.GERMANY); + // Locale.setDefault(Locale.US); + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("functions"); + testOverrideAlias(); + testIfNull(); + testToDate(); + testToDateException(); + testDataType(); + testVersion(); + testFunctionTable(); + testFunctionTableVarArgs(); + testArrayParameters(); + testDefaultConnection(); + testFunctionInSchema(); + testGreatest(); + testSource(); + testDynamicArgumentAndReturn(); + testUUID(); + testWhiteSpacesInParameters(); + testSchemaSearchPath(); + testDeterministic(); + testTransactionId(); + testPrecision(); + testMathFunctions(); + testVarArgs(); + testAggregate(); + testAggregateType(); + testFunctions(); + testFileRead(); + testValue(); + testNvl2(); + testConcatWs(); + testTruncate(); + testOraHash(); + testToCharFromDateTime(); + testToCharFromNumber(); + testToCharFromText(); + testTranslate(); + testGenerateSeries(); + testFileWrite(); + testThatCurrentTimestampIsSane(); + testThatCurrentTimestampStaysTheSameWithinATransaction(); + testThatCurrentTimestampUpdatesOutsideATransaction(); +// testAnnotationProcessorsOutput(); + testRound(); + testSignal(); + + deleteDb("functions"); + } + + private void testDataType() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + assertEquals(Types.DOUBLE, stat.executeQuery( + "select radians(x) from dual"). + getMetaData().getColumnType(1)); + assertEquals(Types.DOUBLE, stat.executeQuery( + "select power(10, 2*x) from dual"). 
+ getMetaData().getColumnType(1)); + stat.close(); + conn.close(); + } + + private void testVersion() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + String query = "select h2version()"; + ResultSet rs = stat.executeQuery(query); + assertTrue(rs.next()); + String version = rs.getString(1); + assertEquals(Constants.getVersion(), version); + assertFalse(rs.next()); + rs.close(); + stat.close(); + conn.close(); + } + + private void testFunctionTable() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + stat.execute("create alias simple_function_table for \"" + + TestFunctions.class.getName() + ".simpleFunctionTable\""); + stat.execute("select * from simple_function_table() " + + "where a>0 and b in ('x', 'y')"); + conn.close(); + } + + private void testFunctionTableVarArgs() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + stat.execute("create alias varargs_function_table for \"" + TestFunctions.class.getName() + + ".varArgsFunctionTable\""); + ResultSet rs = stat.executeQuery("select * from varargs_function_table(1,2,3,5,8,13)"); + for (int i : new int[] { 1, 2, 3, 5, 8, 13 }) { + assertTrue(rs.next()); + assertEquals(i, rs.getInt(1)); + } + assertFalse(rs.next()); + conn.close(); + } + + /** + * This method is called via reflection from the database. + * + * @param conn the connection + * @return a result set + */ + public static ResultSet simpleFunctionTable(@SuppressWarnings("unused") Connection conn) { + SimpleResultSet result = new SimpleResultSet(); + result.addColumn("A", Types.INTEGER, 0, 0); + result.addColumn("B", Types.CHAR, 0, 0); + result.addRow(42, 'X'); + return result; + } + + /** + * This method is called via reflection from the database. + * + * @param values the value array + * @return a result set + */ + public static ResultSet varArgsFunctionTable(int... 
values) throws SQLException { + if (values.length != 6) { + throw new SQLException("Unexpected argument count"); + } + SimpleResultSet result = new SimpleResultSet(); + result.addColumn("A", Types.INTEGER, 0, 0); + for (int value : values) { + result.addRow(value); + } + return result; + } + + private void testNvl2() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + + String createSQL = "CREATE TABLE testNvl2(id BIGINT, txt1 " + + "varchar, txt2 varchar, num number(9, 0));"; + stat.execute(createSQL); + stat.execute("insert into testNvl2(id, txt1, txt2, num) " + + "values(1, 'test1', 'test2', null)"); + stat.execute("insert into testNvl2(id, txt1, txt2, num) " + + "values(2, null, 'test4', null)"); + stat.execute("insert into testNvl2(id, txt1, txt2, num) " + + "values(3, 'test5', null, null)"); + stat.execute("insert into testNvl2(id, txt1, txt2, num) " + + "values(4, null, null, null)"); + stat.execute("insert into testNvl2(id, txt1, txt2, num) " + + "values(5, '2', null, 1)"); + stat.execute("insert into testNvl2(id, txt1, txt2, num) " + + "values(6, '2', null, null)"); + stat.execute("insert into testNvl2(id, txt1, txt2, num) " + + "values(7, 'test2', null, null)"); + + String query = "SELECT NVL2(txt1, txt1, txt2), txt1 " + + "FROM testNvl2 order by id asc"; + ResultSet rs = stat.executeQuery(query); + rs.next(); + String actual = rs.getString(1); + assertEquals("test1", actual); + rs.next(); + actual = rs.getString(1); + assertEquals("test4", actual); + rs.next(); + actual = rs.getString(1); + assertEquals("test5", actual); + rs.next(); + actual = rs.getString(1); + assertEquals(null, actual); + assertEquals(rs.getMetaData().getColumnType(2), + rs.getMetaData().getColumnType(1)); + rs.close(); + + rs = stat.executeQuery("SELECT NVL2(num, num, txt1), num " + + "FROM testNvl2 where id in(5, 6) order by id asc"); + rs.next(); + assertEquals(rs.getMetaData().getColumnType(2), + 
rs.getMetaData().getColumnType(1)); + + assertThrows(ErrorCode.DATA_CONVERSION_ERROR_1, stat). + executeQuery("SELECT NVL2(num, num, txt1), num " + + "FROM testNvl2 where id = 7 order by id asc"); + + // nvl2 should return expr2's datatype, if expr2 is character data. + rs = stat.executeQuery("SELECT NVL2(1, 'test', 123), 'test' FROM dual"); + rs.next(); + actual = rs.getString(1); + assertEquals("test", actual); + assertEquals(rs.getMetaData().getColumnType(2), + rs.getMetaData().getColumnType(1)); + + conn.close(); + } + + private void testConcatWs() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + + String createSQL = "CREATE TABLE testConcat(id BIGINT, txt1 " + + "varchar, txt2 varchar, txt3 varchar);"; + stat.execute(createSQL); + stat.execute("insert into testConcat(id, txt1, txt2, txt3) " + + "values(1, 'test1', 'test2', 'test3')"); + stat.execute("insert into testConcat(id, txt1, txt2, txt3) " + + "values(2, 'test1', 'test2', null)"); + stat.execute("insert into testConcat(id, txt1, txt2, txt3) " + + "values(3, 'test1', null, null)"); + stat.execute("insert into testConcat(id, txt1, txt2, txt3) " + + "values(4, null, 'test2', null)"); + stat.execute("insert into testConcat(id, txt1, txt2, txt3) " + + "values(5, null, null, null)"); + + String query = "SELECT concat_ws('_',txt1, txt2, txt3), txt1 " + + "FROM testConcat order by id asc"; + ResultSet rs = stat.executeQuery(query); + rs.next(); + String actual = rs.getString(1); + assertEquals("test1_test2_test3", actual); + rs.next(); + actual = rs.getString(1); + assertEquals("test1_test2", actual); + rs.next(); + actual = rs.getString(1); + assertEquals("test1", actual); + rs.next(); + actual = rs.getString(1); + assertEquals("test2", actual); + rs.next(); + actual = rs.getString(1); + assertEquals("", actual); + rs.close(); + + rs = stat.executeQuery("select concat_ws(null,null,null)"); + rs.next(); + assertNull(rs.getObject(1)); + + 
stat.execute("drop table testConcat"); + conn.close(); + } + + private void testValue() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("create alias TO_CHAR_2 for \"" + + getClass().getName() + ".toChar\""); + rs = stat.executeQuery( + "call TO_CHAR_2(TIMESTAMP '2001-02-03 04:05:06', 'format')"); + rs.next(); + assertEquals("2001-02-03 04:05:06", rs.getString(1)); + stat.execute("drop alias TO_CHAR_2"); + conn.close(); + } + + /** + * This method is called via reflection from the database. + * + * @param args the argument list + * @return the value + */ + public static Value toChar(Value... args) { + if (args.length == 0) { + return null; + } + return args[0].convertTo(Value.STRING); + } + + private void testDefaultConnection() throws SQLException { + Connection conn = getConnection("functions;DEFAULT_CONNECTION=TRUE"); + Statement stat = conn.createStatement(); + stat.execute("create alias test for \""+ + TestFunctions.class.getName()+".testDefaultConn\""); + stat.execute("call test()"); + stat.execute("drop alias test"); + conn.close(); + } + + /** + * This method is called via reflection from the database. 
+ */ + public static void testDefaultConn() throws SQLException { + DriverManager.getConnection("jdbc:default:connection"); + } + + private void testFunctionInSchema() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + + stat.execute("create schema schema2"); + stat.execute("create alias schema2.func as 'int x() { return 1; }'"); + stat.execute("create view test as select schema2.func()"); + ResultSet rs; + rs = stat.executeQuery("select * from information_schema.views"); + rs.next(); + assertContains(rs.getString("VIEW_DEFINITION"), "SCHEMA2.FUNC"); + + stat.execute("drop view test"); + stat.execute("drop schema schema2 cascade"); + + conn.close(); + } + + private void testGreatest() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + + String createSQL = "CREATE TABLE testGreatest (id BIGINT);"; + stat.execute(createSQL); + stat.execute("insert into testGreatest values (1)"); + + String query = "SELECT GREATEST(id, " + + ((long) Integer.MAX_VALUE) + ") FROM testGreatest"; + ResultSet rs = stat.executeQuery(query); + rs.next(); + Object o = rs.getObject(1); + assertEquals(Long.class.getName(), o.getClass().getName()); + + String query2 = "SELECT GREATEST(id, " + + ((long) Integer.MAX_VALUE + 1) + ") FROM testGreatest"; + ResultSet rs2 = stat.executeQuery(query2); + rs2.next(); + Object o2 = rs2.getObject(1); + assertEquals(Long.class.getName(), o2.getClass().getName()); + + conn.close(); + } + + private void testSource() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("create force alias sayHi as 'String test(String name) {\n" + + "return \"Hello \" + name;\n}'"); + rs = stat.executeQuery("SELECT ALIAS_NAME " + + "FROM INFORMATION_SCHEMA.FUNCTION_ALIASES"); + rs.next(); + assertEquals("SAY" + "HI", rs.getString(1)); + rs = 
stat.executeQuery("call sayHi('Joe')"); + rs.next(); + assertEquals("Hello Joe", rs.getString(1)); + if (!config.memory) { + conn.close(); + conn = getConnection("functions"); + stat = conn.createStatement(); + rs = stat.executeQuery("call sayHi('Joe')"); + rs.next(); + assertEquals("Hello Joe", rs.getString(1)); + } + stat.execute("drop alias sayHi"); + conn.close(); + } + + private void testDynamicArgumentAndReturn() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("create alias dynamic deterministic for \"" + + getClass().getName() + ".dynamic\""); + setCount(0); + rs = stat.executeQuery("call dynamic(('a', 1))[0]"); + rs.next(); + String a = rs.getString(1); + assertEquals("a1", a); + stat.execute("drop alias dynamic"); + conn.close(); + } + + private void testUUID() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("create alias xorUUID for \""+ + getClass().getName()+".xorUUID\""); + setCount(0); + rs = stat.executeQuery("call xorUUID(random_uuid(), random_uuid())"); + rs.next(); + Object o = rs.getObject(1); + assertEquals(UUID.class.toString(), o.getClass().toString()); + stat.execute("drop alias xorUUID"); + + conn.close(); + } + + private void testDeterministic() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("create alias getCount for \""+ + getClass().getName()+".getCount\""); + setCount(0); + rs = stat.executeQuery("select getCount() from system_range(1, 2)"); + rs.next(); + assertEquals(0, rs.getInt(1)); + rs.next(); + assertEquals(1, rs.getInt(1)); + stat.execute("drop alias getCount"); + + stat.execute("create alias getCount deterministic for \""+ + getClass().getName()+".getCount\""); + setCount(0); + rs = stat.executeQuery("select getCount() from system_range(1, 
2)"); + rs.next(); + assertEquals(0, rs.getInt(1)); + rs.next(); + assertEquals(0, rs.getInt(1)); + stat.execute("drop alias getCount"); + rs = stat.executeQuery("SELECT * FROM " + + "INFORMATION_SCHEMA.FUNCTION_ALIASES " + + "WHERE UPPER(ALIAS_NAME) = 'GET' || 'COUNT'"); + assertFalse(rs.next()); + stat.execute("create alias reverse deterministic for \""+ + getClass().getName()+".reverse\""); + rs = stat.executeQuery("select reverse(x) from system_range(700, 700)"); + rs.next(); + assertEquals("007", rs.getString(1)); + stat.execute("drop alias reverse"); + + conn.close(); + } + + private void testTransactionId() throws SQLException { + if (config.memory) { + return; + } + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + ResultSet rs; + rs = stat.executeQuery("call transaction_id()"); + rs.next(); + assertTrue(rs.getString(1) == null && rs.wasNull()); + stat.execute("insert into test values(1)"); + rs = stat.executeQuery("call transaction_id()"); + rs.next(); + assertTrue(rs.getString(1) == null && rs.wasNull()); + conn.setAutoCommit(false); + stat.execute("delete from test"); + rs = stat.executeQuery("call transaction_id()"); + rs.next(); + assertTrue(rs.getString(1) != null); + stat.execute("drop table test"); + conn.close(); + } + + private void testPrecision() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + stat.execute("create alias no_op for \""+getClass().getName()+".noOp\""); + PreparedStatement prep = conn.prepareStatement( + "select * from dual where no_op(1.6)=?"); + prep.setBigDecimal(1, new BigDecimal("1.6")); + ResultSet rs = prep.executeQuery(); + assertTrue(rs.next()); + + stat.execute("create aggregate agg_sum for \""+getClass().getName()+"\""); + rs = stat.executeQuery("select agg_sum(1), sum(1.6) from dual"); + rs.next(); + assertEquals(1, rs.getMetaData().getScale(2)); + assertEquals(32767, 
rs.getMetaData().getScale(1)); + stat.executeQuery("select * from information_schema.function_aliases"); + conn.close(); + } + + private void testMathFunctions() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("CALL SINH(50)"); + assertTrue(rs.next()); + assertEquals(Math.sinh(50), rs.getDouble(1)); + rs = stat.executeQuery("CALL COSH(50)"); + assertTrue(rs.next()); + assertEquals(Math.cosh(50), rs.getDouble(1)); + rs = stat.executeQuery("CALL TANH(50)"); + assertTrue(rs.next()); + assertEquals(Math.tanh(50), rs.getDouble(1)); + conn.close(); + } + + private void testVarArgs() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS mean FOR \"" + + getClass().getName() + ".mean\""); + ResultSet rs = stat.executeQuery( + "select mean(), mean(10), mean(10, 20), mean(10, 20, 30)"); + rs.next(); + assertEquals(1.0, rs.getDouble(1)); + assertEquals(10.0, rs.getDouble(2)); + assertEquals(15.0, rs.getDouble(3)); + assertEquals(20.0, rs.getDouble(4)); + + stat.execute("CREATE ALIAS mean2 FOR \"" + + getClass().getName() + ".mean2\""); + rs = stat.executeQuery( + "select mean2(), mean2(10), mean2(10, 20)"); + rs.next(); + assertEquals(Double.NaN, rs.getDouble(1)); + assertEquals(10.0, rs.getDouble(2)); + assertEquals(15.0, rs.getDouble(3)); + + DatabaseMetaData meta = conn.getMetaData(); + rs = meta.getProcedureColumns(null, null, "MEAN2", null); + assertTrue(rs.next()); + assertEquals("P0", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + assertEquals("FUNCTIONS", rs.getString("PROCEDURE_CAT")); + assertEquals("PUBLIC", rs.getString("PROCEDURE_SCHEM")); + assertEquals("MEAN2", rs.getString("PROCEDURE_NAME")); + assertEquals("P2", rs.getString("COLUMN_NAME")); + assertEquals(DatabaseMetaData.procedureColumnIn, + rs.getInt("COLUMN_TYPE")); + assertEquals("OTHER", 
rs.getString("TYPE_NAME")); + assertEquals(Integer.MAX_VALUE, rs.getInt("PRECISION")); + assertEquals(Integer.MAX_VALUE, rs.getInt("LENGTH")); + assertEquals(0, rs.getInt("SCALE")); + assertEquals(DatabaseMetaData.columnNullable, + rs.getInt("NULLABLE")); + assertEquals("", rs.getString("REMARKS")); + assertEquals(null, rs.getString("COLUMN_DEF")); + assertEquals(0, rs.getInt("SQL_DATA_TYPE")); + assertEquals(0, rs.getInt("SQL_DATETIME_SUB")); + assertEquals(0, rs.getInt("CHAR_OCTET_LENGTH")); + assertEquals(1, rs.getInt("ORDINAL_POSITION")); + assertEquals("YES", rs.getString("IS_NULLABLE")); + assertEquals("MEAN2", rs.getString("SPECIFIC_NAME")); + assertFalse(rs.next()); + + stat.execute("CREATE ALIAS printMean FOR \"" + + getClass().getName() + ".printMean\""); + rs = stat.executeQuery( + "select printMean('A'), printMean('A', 10), " + + "printMean('BB', 10, 20), printMean ('CCC', 10, 20, 30)"); + rs.next(); + assertEquals("A: 0", rs.getString(1)); + assertEquals("A: 10", rs.getString(2)); + assertEquals("BB: 15", rs.getString(3)); + assertEquals("CCC: 20", rs.getString(4)); + conn.close(); + } + + private void testFileRead() throws Exception { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + String fileName = getBaseDir() + "/test.txt"; + Properties prop = System.getProperties(); + OutputStream out = FileUtils.newOutputStream(fileName, false); + prop.store(out, ""); + out.close(); + ResultSet rs = stat.executeQuery("SELECT LENGTH(FILE_READ('" + + fileName + "')) LEN"); + rs.next(); + assertEquals(FileUtils.size(fileName), rs.getInt(1)); + rs = stat.executeQuery("SELECT FILE_READ('" + + fileName + "') PROP"); + rs.next(); + Properties p2 = new Properties(); + p2.load(rs.getBinaryStream(1)); + assertEquals(prop.size(), p2.size()); + rs = stat.executeQuery("SELECT FILE_READ('" + + fileName + "', NULL) PROP"); + rs.next(); + String ps = rs.getString(1); + InputStreamReader r = new 
InputStreamReader(FileUtils.newInputStream(fileName)); + String ps2 = IOUtils.readStringAndClose(r, -1); + assertEquals(ps, ps2); + conn.close(); + FileUtils.delete(fileName); + } + + + private void testFileWrite() throws Exception { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + // Copy data into clob table + stat.execute("DROP TABLE TEST IF EXISTS"); + PreparedStatement pst = conn.prepareStatement( + "CREATE TABLE TEST(data clob) AS SELECT ? " + "data"); + Properties prop = System.getProperties(); + ByteArrayOutputStream os = new ByteArrayOutputStream(prop.size()); + prop.store(os, ""); + pst.setBinaryStream(1, new ByteArrayInputStream(os.toByteArray())); + pst.execute(); + os.close(); + String fileName = new File(getBaseDir(), "test.txt").getPath(); + FileUtils.delete(fileName); + ResultSet rs = stat.executeQuery("SELECT FILE_WRITE(data, " + + StringUtils.quoteStringSQL(fileName) + ") len from test"); + assertTrue(rs.next()); + assertEquals(os.size(), rs.getInt(1)); + InputStreamReader r = new InputStreamReader(FileUtils.newInputStream(fileName)); + // Compare expected content with written file content + String ps2 = IOUtils.readStringAndClose(r, -1); + assertEquals(os.toString(), ps2); + conn.close(); + FileUtils.delete(fileName); + } + + + + /** + * This median implementation keeps all objects in memory. + */ + public static class MedianString implements AggregateFunction { + + private final ArrayList<String> list = New.arrayList(); + + @Override + public void add(Object value) { + list.add(value.toString()); + } + + @Override + public Object getResult() { + return list.get(list.size() / 2); + } + + @Override + public int getType(int[] inputType) { + return Types.VARCHAR; + } + + @Override + public void init(Connection conn) { + // nothing to do + } + + } + + /** + * This median implementation keeps all objects in memory. 
+ */ + public static class MedianStringType implements Aggregate { + + private final ArrayList<String> list = New.arrayList(); + + @Override + public void add(Object value) { + list.add(value.toString()); + } + + @Override + public Object getResult() { + return list.get(list.size() / 2); + } + + @Override + public int getInternalType(int[] inputTypes) throws SQLException { + return Value.STRING; + } + + @Override + public void init(Connection conn) { + // nothing to do + } + + } + + private void testAggregateType() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + stat.execute("CREATE AGGREGATE SIMPLE_MEDIAN FOR \"" + + MedianStringType.class.getName() + "\""); + stat.execute("CREATE AGGREGATE IF NOT EXISTS SIMPLE_MEDIAN FOR \"" + + MedianStringType.class.getName() + "\""); + ResultSet rs = stat.executeQuery( + "SELECT SIMPLE_MEDIAN(X) FROM SYSTEM_RANGE(1, 9)"); + rs.next(); + assertEquals("5", rs.getString(1)); + rs = stat.executeQuery( + "SELECT SIMPLE_MEDIAN(X) FILTER (WHERE X > 2) FROM SYSTEM_RANGE(1, 9)"); + rs.next(); + assertEquals("6", rs.getString(1)); + conn.close(); + + if (config.memory) { + return; + } + + conn = getConnection("functions"); + stat = conn.createStatement(); + stat.executeQuery("SELECT SIMPLE_MEDIAN(X) FROM SYSTEM_RANGE(1, 9)"); + DatabaseMetaData meta = conn.getMetaData(); + rs = meta.getProcedures(null, null, "SIMPLE_MEDIAN"); + assertTrue(rs.next()); + assertFalse(rs.next()); + rs = stat.executeQuery("SCRIPT"); + boolean found = false; + while (rs.next()) { + String sql = rs.getString(1); + if (sql.contains("SIMPLE_MEDIAN")) { + found = true; + } + } + assertTrue(found); + stat.execute("DROP AGGREGATE SIMPLE_MEDIAN"); + stat.execute("DROP AGGREGATE IF EXISTS SIMPLE_MEDIAN"); + conn.close(); + } + + private void testAggregate() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = 
conn.createStatement(); + stat.execute("CREATE AGGREGATE SIMPLE_MEDIAN FOR \"" + + MedianString.class.getName() + "\""); + stat.execute("CREATE AGGREGATE IF NOT EXISTS SIMPLE_MEDIAN FOR \"" + + MedianString.class.getName() + "\""); + ResultSet rs = stat.executeQuery( + "SELECT SIMPLE_MEDIAN(X) FROM SYSTEM_RANGE(1, 9)"); + rs.next(); + assertEquals("5", rs.getString(1)); + conn.close(); + + if (config.memory) { + return; + } + + conn = getConnection("functions"); + stat = conn.createStatement(); + stat.executeQuery("SELECT SIMPLE_MEDIAN(X) FROM SYSTEM_RANGE(1, 9)"); + DatabaseMetaData meta = conn.getMetaData(); + rs = meta.getProcedures(null, null, "SIMPLE_MEDIAN"); + assertTrue(rs.next()); + assertFalse(rs.next()); + rs = stat.executeQuery("SCRIPT"); + boolean found = false; + while (rs.next()) { + String sql = rs.getString(1); + if (sql.contains("SIMPLE_MEDIAN")) { + found = true; + } + } + assertTrue(found); + stat.execute("DROP AGGREGATE SIMPLE_MEDIAN"); + stat.execute("DROP AGGREGATE IF EXISTS SIMPLE_MEDIAN"); + conn.close(); + } + + private void testFunctions() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + assertCallResult(null, stat, "abs(null)"); + assertCallResult("1", stat, "abs(1)"); + assertCallResult("1", stat, "abs(1)"); + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("CREATE ALIAS ADD_ROW FOR \"" + + getClass().getName() + ".addRow\""); + ResultSet rs; + rs = stat.executeQuery("CALL ADD_ROW(1, 'Hello')"); + rs.next(); + assertEquals(1, rs.getInt(1)); + rs = stat.executeQuery("SELECT * FROM TEST"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + + DatabaseMetaData meta = conn.getMetaData(); + rs = meta.getProcedureColumns(null, null, "ADD_ROW", null); + assertTrue(rs.next()); + assertEquals("P0", rs.getString("COLUMN_NAME")); + assertTrue(rs.next()); + 
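The SIMPLE_MEDIAN aggregate registered above (via MedianString) returns the middle element of the collected values in arrival order, without sorting; the assertions rely on SYSTEM_RANGE producing ordered input. A plain-Java sketch of that selection rule (class and method names hypothetical, outside the test class):

```java
import java.util.List;

public class SimpleMedianSketch {
    // Same rule as MedianString.getResult() above: the element at
    // index size()/2 of the values in insertion order. For ordered
    // input 1..9 this is "5"; after filtering to 3..9 it is "6".
    static String median(List<String> values) {
        return values.get(values.size() / 2);
    }
}
```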
assertEquals("FUNCTIONS", rs.getString("PROCEDURE_CAT")); + assertEquals("PUBLIC", rs.getString("PROCEDURE_SCHEM")); + assertEquals("ADD_ROW", rs.getString("PROCEDURE_NAME")); + assertEquals("P2", rs.getString("COLUMN_NAME")); + assertEquals(DatabaseMetaData.procedureColumnIn, + rs.getInt("COLUMN_TYPE")); + assertEquals("INTEGER", rs.getString("TYPE_NAME")); + assertEquals(10, rs.getInt("PRECISION")); + assertEquals(10, rs.getInt("LENGTH")); + assertEquals(0, rs.getInt("SCALE")); + assertEquals(DatabaseMetaData.columnNoNulls, rs.getInt("NULLABLE")); + assertEquals("", rs.getString("REMARKS")); + assertEquals(null, rs.getString("COLUMN_DEF")); + assertEquals(0, rs.getInt("SQL_DATA_TYPE")); + assertEquals(0, rs.getInt("SQL_DATETIME_SUB")); + assertEquals(0, rs.getInt("CHAR_OCTET_LENGTH")); + assertEquals(1, rs.getInt("ORDINAL_POSITION")); + assertEquals("YES", rs.getString("IS_NULLABLE")); + assertEquals("ADD_ROW", rs.getString("SPECIFIC_NAME")); + assertTrue(rs.next()); + assertEquals("P3", rs.getString("COLUMN_NAME")); + assertEquals("VARCHAR", rs.getString("TYPE_NAME")); + assertFalse(rs.next()); + + stat.executeQuery("CALL ADD_ROW(2, 'World')"); + + stat.execute("CREATE ALIAS SELECT_F FOR \"" + + getClass().getName() + ".select\""); + rs = stat.executeQuery("CALL SELECT_F('SELECT * " + + "FROM TEST ORDER BY ID')"); + assertEquals(2, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + assertFalse(rs.next()); + + rs = stat.executeQuery("SELECT NAME FROM SELECT_F('SELECT * " + + "FROM TEST ORDER BY NAME') ORDER BY NAME DESC"); + assertEquals(1, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals("World", rs.getString(1)); + rs.next(); + assertEquals("Hello", rs.getString(1)); + assertFalse(rs.next()); + + rs = stat.executeQuery("SELECT SELECT_F('SELECT * " + + "FROM TEST WHERE ID=' || ID) FROM 
TEST ORDER BY ID"); + assertEquals(1, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals("((1, Hello))", rs.getString(1)); + rs.next(); + assertEquals("((2, World))", rs.getString(1)); + assertFalse(rs.next()); + + rs = stat.executeQuery("SELECT SELECT_F('SELECT * " + + "FROM TEST ORDER BY ID') FROM DUAL"); + assertEquals(1, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals("((1, Hello), (2, World))", rs.getString(1)); + assertFalse(rs.next()); + assertThrows(ErrorCode.SYNTAX_ERROR_2, stat). + executeQuery("CALL SELECT_F('ERROR')"); + stat.execute("CREATE ALIAS SIMPLE FOR \"" + + getClass().getName() + ".simpleResultSet\""); + rs = stat.executeQuery("CALL SIMPLE(2, 1, 1, 1, 1, 1, 1, 1)"); + assertEquals(2, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals(0, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + assertFalse(rs.next()); + + rs = stat.executeQuery("SELECT * FROM SIMPLE(1, 1, 1, 1, 1, 1, 1, 1)"); + assertEquals(2, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals(0, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + + stat.execute("CREATE ALIAS ARRAY FOR \"" + + getClass().getName() + ".getArray\""); + rs = stat.executeQuery("CALL ARRAY()"); + assertEquals(1, rs.getMetaData().getColumnCount()); + rs.next(); + Array a = rs.getArray(1); + Object[] array = (Object[]) a.getArray(); + assertEquals(2, array.length); + assertEquals(0, ((Integer) array[0]).intValue()); + assertEquals("Hello", (String) array[1]); + assertThrows(ErrorCode.INVALID_VALUE_2, a).getArray(1, -1); + assertThrows(ErrorCode.INVALID_VALUE_2, a).getArray(1, 3); + assertEquals(0, ((Object[]) a.getArray(1, 0)).length); + assertEquals(0, ((Object[]) a.getArray(2, 0)).length); + assertThrows(ErrorCode.INVALID_VALUE_2, a).getArray(0, 0); + assertThrows(ErrorCode.INVALID_VALUE_2, a).getArray(3, 0); + HashMap<String, Class<?>> map = 
new HashMap<>(); + assertEquals(0, ((Object[]) a.getArray(1, 0, map)).length); + assertEquals(2, ((Object[]) a.getArray(map)).length); + assertEquals(2, ((Object[]) a.getArray(null)).length); + map.put("x", Object.class); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, a).getArray(1, 0, map); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, a).getArray(map); + + ResultSet rs2; + rs2 = a.getResultSet(); + rs2.next(); + assertEquals(1, rs2.getInt(1)); + assertEquals(0, rs2.getInt(2)); + rs2.next(); + assertEquals(2, rs2.getInt(1)); + assertEquals("Hello", rs2.getString(2)); + assertFalse(rs2.next()); + + map.clear(); + rs2 = a.getResultSet(map); + rs2.next(); + assertEquals(1, rs2.getInt(1)); + assertEquals(0, rs2.getInt(2)); + rs2.next(); + assertEquals(2, rs2.getInt(1)); + assertEquals("Hello", rs2.getString(2)); + assertFalse(rs2.next()); + + rs2 = a.getResultSet(2, 1); + rs2.next(); + assertEquals(2, rs2.getInt(1)); + assertEquals("Hello", rs2.getString(2)); + assertFalse(rs2.next()); + + rs2 = a.getResultSet(1, 1, map); + rs2.next(); + assertEquals(1, rs2.getInt(1)); + assertEquals(0, rs2.getInt(2)); + assertFalse(rs2.next()); + + map.put("x", Object.class); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, a).getResultSet(map); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, a).getResultSet(0, 1, map); + + a.free(); + assertThrows(ErrorCode.OBJECT_CLOSED, a).getArray(); + assertThrows(ErrorCode.OBJECT_CLOSED, a).getResultSet(); + + stat.execute("CREATE ALIAS ROOT FOR \"" + getClass().getName() + ".root\""); + rs = stat.executeQuery("CALL ROOT(9)"); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + stat.execute("CREATE ALIAS MAX_ID FOR \"" + + getClass().getName() + ".selectMaxId\""); + rs = stat.executeQuery("CALL MAX_ID()"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + rs = stat.executeQuery("SELECT * FROM MAX_ID()"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + rs = 
stat.executeQuery("CALL CASE WHEN -9 < 0 THEN 0 ELSE ROOT(-9) END"); + rs.next(); + assertEquals(0, rs.getInt(1)); + assertFalse(rs.next()); + + stat.execute("CREATE ALIAS blob FOR \"" + getClass().getName() + ".blob\""); + rs = stat.executeQuery("SELECT blob(CAST('0102' AS BLOB)) FROM DUAL"); + while (rs.next()) { + // ignore + } + rs.close(); + + stat.execute("CREATE ALIAS clob FOR \"" + getClass().getName() + ".clob\""); + rs = stat.executeQuery("SELECT clob(CAST('Hello' AS CLOB)) FROM DUAL"); + while (rs.next()) { + // ignore + } + rs.close(); + + stat.execute("create alias sql as " + + "'ResultSet sql(Connection conn, String sql) " + + "throws SQLException { return conn.createStatement().executeQuery(sql); }'"); + rs = stat.executeQuery("select * from sql('select cast(''Hello'' as clob)')"); + assertTrue(rs.next()); + assertEquals("Hello", rs.getString(1)); + + rs = stat.executeQuery("select * from sql('select cast(''4869'' as blob)')"); + assertTrue(rs.next()); + assertEquals("Hi", new String(rs.getBytes(1))); + + rs = stat.executeQuery("select sql('select 1 a, ''Hello'' b')"); + assertTrue(rs.next()); + rs2 = (ResultSet) rs.getObject(1); + rs2.next(); + assertEquals(1, rs2.getInt(1)); + assertEquals("Hello", rs2.getString(2)); + ResultSetMetaData meta2 = rs2.getMetaData(); + assertEquals(Types.INTEGER, meta2.getColumnType(1)); + assertEquals("INTEGER", meta2.getColumnTypeName(1)); + assertEquals("java.lang.Integer", meta2.getColumnClassName(1)); + assertEquals(Types.VARCHAR, meta2.getColumnType(2)); + assertEquals("VARCHAR", meta2.getColumnTypeName(2)); + assertEquals("java.lang.String", meta2.getColumnClassName(2)); + + stat.execute("CREATE ALIAS blob2stream FOR \"" + + getClass().getName() + ".blob2stream\""); + stat.execute("CREATE ALIAS stream2stream FOR \"" + + getClass().getName() + ".stream2stream\""); + stat.execute("CREATE TABLE TEST_BLOB(ID INT PRIMARY KEY, VALUE BLOB)"); + stat.execute("INSERT INTO TEST_BLOB VALUES(0, null)"); + 
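The cast(''4869'' as blob) check above depends on hex decoding: 0x48 is ASCII 'H' and 0x69 is 'i'. A minimal sketch of that decoding (class and helper names hypothetical):

```java
public class HexDecodeSketch {
    // Decodes a hex string like "4869" into bytes {0x48, 0x69},
    // which read as ASCII is "Hi" -- the value the blob test expects.
    static byte[] parseHex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}
```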
stat.execute("INSERT INTO TEST_BLOB VALUES(1, 'edd1f011edd1f011edd1f011')"); + rs = stat.executeQuery("SELECT blob2stream(VALUE) FROM TEST_BLOB"); + while (rs.next()) { + // ignore + } + rs.close(); + rs = stat.executeQuery("SELECT stream2stream(VALUE) FROM TEST_BLOB"); + while (rs.next()) { + // ignore + } + + stat.execute("CREATE ALIAS NULL_RESULT FOR \"" + + getClass().getName() + ".nullResultSet\""); + rs = stat.executeQuery("CALL NULL_RESULT()"); + assertEquals(1, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals(null, rs.getString(1)); + assertFalse(rs.next()); + + rs = meta.getProcedures(null, null, "NULL_RESULT"); + rs.next(); + assertEquals("FUNCTIONS", rs.getString("PROCEDURE_CAT")); + assertEquals("PUBLIC", rs.getString("PROCEDURE_SCHEM")); + assertEquals("NULL_RESULT", rs.getString("PROCEDURE_NAME")); + assertEquals(0, rs.getInt("NUM_INPUT_PARAMS")); + assertEquals(0, rs.getInt("NUM_OUTPUT_PARAMS")); + assertEquals(0, rs.getInt("NUM_RESULT_SETS")); + assertEquals("", rs.getString("REMARKS")); + assertEquals(DatabaseMetaData.procedureReturnsResult, + rs.getInt("PROCEDURE_TYPE")); + assertEquals("NULL_RESULT", rs.getString("SPECIFIC_NAME")); + + rs = meta.getProcedureColumns(null, null, "NULL_RESULT", null); + assertTrue(rs.next()); + assertEquals("P0", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + + stat.execute("CREATE ALIAS RESULT_WITH_NULL FOR \"" + + getClass().getName() + ".resultSetWithNull\""); + rs = stat.executeQuery("CALL RESULT_WITH_NULL()"); + assertEquals(1, rs.getMetaData().getColumnCount()); + rs.next(); + assertEquals(null, rs.getString(1)); + assertFalse(rs.next()); + + conn.close(); + } + + private void testWhiteSpacesInParameters() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + // with white space + stat.execute("CREATE ALIAS PARSE_INT2 FOR " + + "\"java.lang.Integer.parseInt(java.lang.String, int)\""); + ResultSet rs; 
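The PARSE_INT2 alias above maps directly onto java.lang.Integer.parseInt(java.lang.String, int), so each CALL in these tests reduces to a plain parseInt invocation, for example:

```java
public class ParseIntDemo {
    public static void main(String[] args) {
        // Equivalent plain-Java calls for the PARSE_INT2 checks:
        System.out.println(Integer.parseInt("473", 10));   // 473
        System.out.println(Integer.parseInt("-FF", 16));   // -255
        System.out.println(Integer.parseInt("-2147483648", 10));  // Integer.MIN_VALUE
    }
}
```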
+ rs = stat.executeQuery("CALL PARSE_INT2('473', 10)"); + rs.next(); + assertEquals(473, rs.getInt(1)); + stat.execute("DROP ALIAS PARSE_INT2"); + // without white space + stat.execute("CREATE ALIAS PARSE_INT2 FOR " + + "\"java.lang.Integer.parseInt(java.lang.String,int)\""); + stat.execute("DROP ALIAS PARSE_INT2"); + conn.close(); + } + + private void testSchemaSearchPath() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("CREATE SCHEMA TEST"); + stat.execute("SET SCHEMA TEST"); + stat.execute("CREATE ALIAS PARSE_INT2 FOR " + + "\"java.lang.Integer.parseInt(java.lang.String, int)\";"); + rs = stat.executeQuery("SELECT ALIAS_NAME FROM " + + "INFORMATION_SCHEMA.FUNCTION_ALIASES WHERE ALIAS_SCHEMA ='TEST'"); + rs.next(); + assertEquals("PARSE_INT2", rs.getString(1)); + stat.execute("DROP ALIAS PARSE_INT2"); + + stat.execute("SET SCHEMA PUBLIC"); + stat.execute("CREATE ALIAS TEST.PARSE_INT2 FOR " + + "\"java.lang.Integer.parseInt(java.lang.String, int)\";"); + stat.execute("SET SCHEMA_SEARCH_PATH PUBLIC, TEST"); + + rs = stat.executeQuery("CALL PARSE_INT2('-FF', 16)"); + rs.next(); + assertEquals(-255, rs.getInt(1)); + rs = stat.executeQuery("SELECT ALIAS_NAME FROM " + + "INFORMATION_SCHEMA.FUNCTION_ALIASES WHERE ALIAS_SCHEMA ='TEST'"); + rs.next(); + assertEquals("PARSE_INT2", rs.getString(1)); + rs = stat.executeQuery("CALL TEST.PARSE_INT2('-2147483648', 10)"); + rs.next(); + assertEquals(-2147483648, rs.getInt(1)); + rs = stat.executeQuery("CALL FUNCTIONS.TEST.PARSE_INT2('-2147483648', 10)"); + rs.next(); + assertEquals(-2147483648, rs.getInt(1)); + conn.close(); + } + + private void testArrayParameters() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("create alias array_test AS " + + "$$ Integer[] array_test(Integer[] 
in_array) " + + "{ return in_array; } $$;"); + + PreparedStatement stmt = conn.prepareStatement( + "select array_test(?) from dual"); + stmt.setObject(1, new Integer[] { 1, 2 }); + rs = stmt.executeQuery(); + rs.next(); + assertEquals(Integer[].class.getName(), rs.getObject(1).getClass() + .getName()); + + CallableStatement call = conn.prepareCall("{ ? = call array_test(?) }"); + call.setObject(2, new Integer[] { 2, 1 }); + call.registerOutParameter(1, Types.ARRAY); + call.execute(); + assertEquals(Integer[].class.getName(), call.getArray(1).getArray() + .getClass().getName()); + assertEquals(new Integer[]{2, 1}, (Integer[]) call.getObject(1)); + + stat.execute("drop alias array_test"); + + conn.close(); + } + + private void testTruncate() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + + ResultSet rs = stat.executeQuery("SELECT TRUNCATE(1.234, 2) FROM dual"); + rs.next(); + assertEquals(1.23d, rs.getDouble(1)); + + rs = stat.executeQuery( + "SELECT CURRENT_TIMESTAMP(), " + + "TRUNCATE(CURRENT_TIMESTAMP()) FROM dual"); + rs.next(); + Calendar c = DateTimeUtils.createGregorianCalendar(); + c.setTime(rs.getTimestamp(1)); + c.set(Calendar.HOUR_OF_DAY, 0); + c.set(Calendar.MINUTE, 0); + c.set(Calendar.SECOND, 0); + c.set(Calendar.MILLISECOND, 0); + java.util.Date nowDate = c.getTime(); + assertEquals(nowDate, rs.getTimestamp(2)); + + assertThrows(SQLException.class, stat).executeQuery("SELECT TRUNCATE('bad', 1) FROM dual"); + + // check for passing wrong data type + rs = assertThrows(SQLException.class, stat).executeQuery("SELECT TRUNCATE('bad') FROM dual"); + + // check for too many parameters + rs = assertThrows(SQLException.class, stat).executeQuery("SELECT TRUNCATE(1,2,3) FROM dual"); + + conn.close(); + } + + private void testTranslate() throws SQLException { + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + + String createSQL = 
"CREATE TABLE testTranslate(id BIGINT, " + + "txt1 varchar);"; + stat.execute(createSQL); + stat.execute("insert into testTranslate(id, txt1) " + + "values(1, 'test1')"); + stat.execute("insert into testTranslate(id, txt1) " + + "values(2, null)"); + stat.execute("insert into testTranslate(id, txt1) " + + "values(3, '')"); + stat.execute("insert into testTranslate(id, txt1) " + + "values(4, 'caps')"); + + String query = "SELECT translate(txt1, 'p', 'r') " + + "FROM testTranslate order by id asc"; + ResultSet rs = stat.executeQuery(query); + rs.next(); + String actual = rs.getString(1); + assertEquals("test1", actual); + rs.next(); + actual = rs.getString(1); + assertNull(actual); + rs.next(); + actual = rs.getString(1); + assertEquals("", actual); + rs.next(); + actual = rs.getString(1); + assertEquals("cars", actual); + rs.close(); + + rs = stat.executeQuery("select translate(null,null,null)"); + rs.next(); + assertNull(rs.getObject(1)); + + stat.execute("drop table testTranslate"); + conn.close(); + } + + private void testOraHash() throws SQLException { + deleteDb("functions"); + Connection conn = getConnection("functions"); + Statement stat = conn.createStatement(); + String testStr = "foo"; + assertResult(String.valueOf("foo".hashCode()), stat, + String.format("SELECT ORA_HASH('%s') FROM DUAL", testStr)); + assertResult(String.valueOf("foo".hashCode()), stat, + String.format("SELECT ORA_HASH('%s', 0) FROM DUAL", testStr)); + assertResult(String.valueOf("foo".hashCode()), stat, + String.format("SELECT ORA_HASH('%s', 0, 0) FROM DUAL", testStr)); + conn.close(); + } + + private void testToDateException() { + try { + ToDateParser.toDate("1979-ThisWillFail-12", "YYYY-MM-DD"); + } catch (Exception e) { + assertEquals(DbException.class.getSimpleName(), e.getClass().getSimpleName()); + } + + try { + ToDateParser.toDate("1-DEC-0000", "DD-MON-RRRR"); + fail("Oracle to_date should reject year 0 (ORA-01841)"); + } catch (Exception e) { + // expected + } + } + + private 
void testToDate() throws ParseException { + GregorianCalendar calendar = DateTimeUtils.createGregorianCalendar(); + int year = calendar.get(Calendar.YEAR); + int month = calendar.get(Calendar.MONTH) + 1; + // Default date in Oracle is the first day of the current month + String defDate = year + "-" + month + "-1 "; + ValueTimestamp date = null; + date = ValueTimestamp.parse("1979-11-12"); + assertEquals(date, ToDateParser.toDate("1979-11-12T00:00:00Z", "YYYY-MM-DD\"T\"HH24:MI:SS\"Z\"")); + assertEquals(date, ToDateParser.toDate("1979*foo*1112", "YYYY\"*foo*\"MM\"\"DD")); + assertEquals(date, ToDateParser.toDate("1979-11-12", "YYYY-MM-DD")); + assertEquals(date, ToDateParser.toDate("1979/11/12", "YYYY/MM/DD")); + assertEquals(date, ToDateParser.toDate("1979,11,12", "YYYY,MM,DD")); + assertEquals(date, ToDateParser.toDate("1979.11.12", "YYYY.MM.DD")); + assertEquals(date, ToDateParser.toDate("1979;11;12", "YYYY;MM;DD")); + assertEquals(date, ToDateParser.toDate("1979:11:12", "YYYY:MM:DD")); + + date = ValueTimestamp.parse("1979-" + month + "-01"); + assertEquals(date, ToDateParser.toDate("1979", "YYYY")); + assertEquals(date, ToDateParser.toDate("1979 AD", "YYYY AD")); + assertEquals(date, ToDateParser.toDate("1979 A.D.", "YYYY A.D.")); + assertEquals(date, ToDateParser.toDate("1979 A.D.", "YYYY BC")); + assertEquals(date, ToDateParser.toDate("+1979", "SYYYY")); + assertEquals(date, ToDateParser.toDate("79", "RRRR")); + + date = ValueTimestamp.parse(defDate + "00:12:00"); + assertEquals(date, ToDateParser.toDate("12", "MI")); + + date = ValueTimestamp.parse("1970-11-01"); + assertEquals(date, ToDateParser.toDate("11", "MM")); + assertEquals(date, ToDateParser.toDate("11", "Mm")); + assertEquals(date, ToDateParser.toDate("11", "mM")); + assertEquals(date, ToDateParser.toDate("11", "mm")); + assertEquals(date, ToDateParser.toDate("XI", "RM")); + + int y = (year / 10) * 10 + 9; + date = ValueTimestamp.parse(y + "-" + month + "-01"); + assertEquals(date, 
ToDateParser.toDate("9", "Y")); + y = (year / 100) * 100 + 79; + date = ValueTimestamp.parse(y + "-" + month + "-01"); + assertEquals(date, ToDateParser.toDate("79", "YY")); + y = (year / 1_000) * 1_000 + 979; + date = ValueTimestamp.parse(y + "-" + month + "-01"); + assertEquals(date, ToDateParser.toDate("979", "YYY")); + + // Gregorian calendar does not have a year 0. + // 0 = 0001 BC, -1 = 0002 BC, ... so we adjust + date = ValueTimestamp.parse("-99-" + month + "-01"); + assertEquals(date, ToDateParser.toDate("0100 BC", "YYYY BC")); + assertEquals(date, ToDateParser.toDate("0100 B.C.", "YYYY B.C.")); + assertEquals(date, ToDateParser.toDate("-0100", "SYYYY")); + assertEquals(date, ToDateParser.toDate("-0100", "YYYY")); + + // Gregorian calendar does not have a year 0. + // 0 = 0001 BC, -1 = 0002 BC, ... so we adjust + y = -((year / 1_000) * 1_000 + 99); + date = ValueTimestamp.parse(y + "-" + month + "-01"); + assertEquals(date, ToDateParser.toDate("100 BC", "YYY BC")); + + // Gregorian calendar does not have a year 0. + // 0 = 0001 BC, -1 = 0002 BC, ... 
so we adjust + y = -((year / 100) * 100); + date = ValueTimestamp.parse(y + "-" + month + "-01"); + assertEquals(date, ToDateParser.toDate("01 BC", "YY BC")); + y = -((year / 10) * 10); + date = ValueTimestamp.parse(y + "-" + month + "-01"); + assertEquals(date, ToDateParser.toDate("1 BC", "Y BC")); + + date = ValueTimestamp.parse(defDate + "08:12:00"); + assertEquals(date, ToDateParser.toDate("08:12 AM", "HH:MI AM")); + assertEquals(date, ToDateParser.toDate("08:12 A.M.", "HH:MI A.M.")); + assertEquals(date, ToDateParser.toDate("08:12", "HH24:MI")); + + date = ValueTimestamp.parse(defDate + "08:12:00"); + assertEquals(date, ToDateParser.toDate("08:12", "HH:MI")); + assertEquals(date, ToDateParser.toDate("08:12", "HH12:MI")); + + date = ValueTimestamp.parse(defDate + "08:12:34"); + assertEquals(date, ToDateParser.toDate("08:12:34", "HH:MI:SS")); + + date = ValueTimestamp.parse(defDate + "12:00:00"); + assertEquals(date, ToDateParser.toDate("12:00:00 PM", "HH12:MI:SS AM")); + + date = ValueTimestamp.parse(defDate + "00:00:00"); + assertEquals(date, ToDateParser.toDate("12:00:00 AM", "HH12:MI:SS AM")); + + date = ValueTimestamp.parse(defDate + "00:00:34"); + assertEquals(date, ToDateParser.toDate("34", "SS")); + + date = ValueTimestamp.parse(defDate + "08:12:34"); + assertEquals(date, ToDateParser.toDate("29554", "SSSSS")); + + date = ValueTimestamp.parse(defDate + "08:12:34.550"); + assertEquals(date, ToDateParser.toDate("08:12:34 550", "HH:MI:SS FF")); + assertEquals(date, ToDateParser.toDate("08:12:34 55", "HH:MI:SS FF2")); + + date = ValueTimestamp.parse(defDate + "14:04:00"); + assertEquals(date, ToDateParser.toDate("02:04 P.M.", "HH:MI p.M.")); + assertEquals(date, ToDateParser.toDate("02:04 PM", "HH:MI PM")); + + date = ValueTimestamp.parse("1970-" + month + "-12"); + assertEquals(date, ToDateParser.toDate("12", "DD")); + + date = ValueTimestamp.parse(year + (calendar.isLeapYear(year) ? 
"11-11" : "-11-12")); + assertEquals(date, ToDateParser.toDate("316", "DDD")); + assertEquals(date, ToDateParser.toDate("316", "DdD")); + assertEquals(date, ToDateParser.toDate("316", "dDD")); + assertEquals(date, ToDateParser.toDate("316", "ddd")); + + date = ValueTimestamp.parse("2013-01-29"); + assertEquals(date, ToDateParser.toDate("2456322", "J")); + + if (Locale.getDefault().getLanguage().equals("en")) { + date = ValueTimestamp.parse("9999-12-31 23:59:59"); + assertEquals(date, ToDateParser.toDate("31-DEC-9999 23:59:59", "DD-MON-YYYY HH24:MI:SS")); + assertEquals(date, ToDateParser.toDate("31-DEC-9999 23:59:59", "DD-MON-RRRR HH24:MI:SS")); + assertEquals(ValueTimestamp.parse("0001-03-01"), ToDateParser.toDate("1-MAR-0001", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("9999-03-01"), ToDateParser.toDate("1-MAR-9999", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("2000-03-01"), ToDateParser.toDate("1-MAR-000", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("1999-03-01"), ToDateParser.toDate("1-MAR-099", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("0100-03-01"), ToDateParser.toDate("1-MAR-100", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("2000-03-01"), ToDateParser.toDate("1-MAR-00", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("2049-03-01"), ToDateParser.toDate("1-MAR-49", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("1950-03-01"), ToDateParser.toDate("1-MAR-50", "DD-MON-RRRR")); + assertEquals(ValueTimestamp.parse("1999-03-01"), ToDateParser.toDate("1-MAR-99", "DD-MON-RRRR")); + } + + assertEquals(ValueTimestampTimeZone.parse("2000-05-10 10:11:12-08:15"), + ToDateParser.toTimestampTz("2000-05-10 10:11:12 -8:15", "YYYY-MM-DD HH24:MI:SS TZH:TZM")); + assertEquals(ValueTimestampTimeZone.parse("2000-05-10 10:11:12-08:15"), + ToDateParser.toTimestampTz("2000-05-10 10:11:12 GMT-08:15", "YYYY-MM-DD HH24:MI:SS TZR")); + assertEquals(ValueTimestampTimeZone.parse("2000-02-10 10:11:12-08"), + 
ToDateParser.toTimestampTz("2000-02-10 10:11:12 US/Pacific", "YYYY-MM-DD HH24:MI:SS TZR"));
+        assertEquals(ValueTimestampTimeZone.parse("2000-02-10 10:11:12-08"),
+                ToDateParser.toTimestampTz("2000-02-10 10:11:12 PST", "YYYY-MM-DD HH24:MI:SS TZD"));
+    }
+
+    private void testToCharFromDateTime() throws SQLException {
+        deleteDb("functions");
+        Connection conn = getConnection("functions");
+        Statement stat = conn.createStatement();
+
+        TimeZone tz = TimeZone.getDefault();
+        final Timestamp timestamp1979 = Timestamp.valueOf("1979-11-12 08:12:34.560");
+        boolean daylight = tz.inDaylightTime(timestamp1979);
+        String tzShortName = tz.getDisplayName(daylight, TimeZone.SHORT);
+        String tzLongName = tz.getID();
+
+        stat.executeUpdate("CREATE TABLE T (X TIMESTAMP(6))");
+        stat.executeUpdate("INSERT INTO T VALUES " +
+                "(TIMESTAMP '"+timestamp1979.toString()+"')");
+        stat.executeUpdate("CREATE TABLE U (X TIMESTAMP(6))");
+        stat.executeUpdate("INSERT INTO U VALUES " +
+                "(TIMESTAMP '-100-01-15 14:04:02.120')");
+
+        assertResult("1979-11-12 08:12:34.56", stat, "SELECT X FROM T");
+        assertResult("-100-01-15 14:04:02.12", stat, "SELECT X FROM U");
+        String expected = String.format("%tb", timestamp1979).toUpperCase();
+        expected = stripTrailingPeriod(expected);
+        assertResult("12-" + expected + "-79 08.12.34.560000000 AM", stat,
+                "SELECT TO_CHAR(X) FROM T");
+        assertResult("- / , . ; : text - /", stat,
+                "SELECT TO_CHAR(X, '- / , . ; : \"text\" - /') FROM T");
+        assertResult("1979-11-12", stat,
+                "SELECT TO_CHAR(X, 'YYYY-MM-DD') FROM T");
+        assertResult("1979/11/12", stat,
+                "SELECT TO_CHAR(X, 'YYYY/MM/DD') FROM T");
+        assertResult("1979,11,12", stat,
+                "SELECT TO_CHAR(X, 'YYYY,MM,DD') FROM T");
+        assertResult("1979.11.12", stat,
+                "SELECT TO_CHAR(X, 'YYYY.MM.DD') FROM T");
+        assertResult("1979;11;12", stat,
+                "SELECT TO_CHAR(X, 'YYYY;MM;DD') FROM T");
+        assertResult("1979:11:12", stat,
+                "SELECT TO_CHAR(X, 'YYYY:MM:DD') FROM T");
+        assertResult("year 1979!", stat,
+                "SELECT TO_CHAR(X, '\"year \"YYYY\"!\"') FROM T");
+        assertResult("1979 AD", stat,
+                "SELECT TO_CHAR(X, 'YYYY AD') FROM T");
+        assertResult("1979 A.D.", stat,
+                "SELECT TO_CHAR(X, 'YYYY A.D.') FROM T");
+        assertResult("0100 B.C.", stat,
+                "SELECT TO_CHAR(X, 'YYYY A.D.') FROM U");
+        assertResult("1979 AD", stat,
+                "SELECT TO_CHAR(X, 'YYYY BC') FROM T");
+        assertResult("100 BC", stat,
+                "SELECT TO_CHAR(X, 'YYY BC') FROM U");
+        assertResult("00 BC", stat,
+                "SELECT TO_CHAR(X, 'YY BC') FROM U");
+        assertResult("0 BC", stat,
+                "SELECT TO_CHAR(X, 'Y BC') FROM U");
+        assertResult("1979 A.D.", stat, "SELECT TO_CHAR(X, 'YYYY B.C.') FROM T");
+        assertResult("2013", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'YYYY') FROM DUAL");
+        assertResult("013", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'YYY') FROM DUAL");
+        assertResult("13", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'YY') FROM DUAL");
+        assertResult("3", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'Y') FROM DUAL");
+        // ISO week year
+        assertResult("2014", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'IYYY') FROM DUAL");
+        assertResult("014", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'IYY') FROM DUAL");
+        assertResult("14", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'IY') FROM DUAL");
+        assertResult("4", stat, "SELECT TO_CHAR(DATE '2013-12-30', 'I') FROM DUAL");
+        assertResult("0001", stat, "SELECT TO_CHAR(DATE '-0001-01-01', 'IYYY') FROM DUAL");
+        assertResult("0005", stat, "SELECT TO_CHAR(DATE '-0004-01-01', 'IYYY') FROM DUAL");
+        assertResult("08:12 AM", stat, "SELECT TO_CHAR(X, 'HH:MI AM') FROM T");
+        assertResult("08:12 A.M.", stat, "SELECT TO_CHAR(X, 'HH:MI A.M.') FROM T");
+        assertResult("02:04 P.M.", stat, "SELECT TO_CHAR(X, 'HH:MI A.M.') FROM U");
+        assertResult("08:12 AM", stat, "SELECT TO_CHAR(X, 'HH:MI PM') FROM T");
+        assertResult("02:04 PM", stat, "SELECT TO_CHAR(X, 'HH:MI PM') FROM U");
+        assertResult("08:12 A.M.", stat, "SELECT TO_CHAR(X, 'HH:MI P.M.') FROM T");
+        assertResult("12 PM", stat, "SELECT TO_CHAR(TIME '12:00:00', 'HH AM')");
+        assertResult("12 AM", stat, "SELECT TO_CHAR(TIME '00:00:00', 'HH AM')");
+        assertResult("A.M.", stat, "SELECT TO_CHAR(X, 'P.M.') FROM T");
+        assertResult("a.m.", stat, "SELECT TO_CHAR(X, 'p.M.') FROM T");
+        assertResult("a.m.", stat, "SELECT TO_CHAR(X, 'p.m.') FROM T");
+        assertResult("AM", stat, "SELECT TO_CHAR(X, 'PM') FROM T");
+        assertResult("Am", stat, "SELECT TO_CHAR(X, 'Pm') FROM T");
+        assertResult("am", stat, "SELECT TO_CHAR(X, 'pM') FROM T");
+        assertResult("am", stat, "SELECT TO_CHAR(X, 'pm') FROM T");
+        assertResult("2", stat, "SELECT TO_CHAR(X, 'D') FROM T");
+        assertResult("2", stat, "SELECT TO_CHAR(X, 'd') FROM T");
+        expected = String.format("%tA", timestamp1979);
+        expected = expected.substring(0, 1).toUpperCase() + expected.substring(1);
+        String spaces = "         ";
+        String first9 = (expected + spaces).substring(0, 9);
+        assertResult(StringUtils.toUpperEnglish(first9),
+                stat, "SELECT TO_CHAR(X, 'DAY') FROM T");
+        assertResult(first9,
+                stat, "SELECT TO_CHAR(X, 'Day') FROM T");
+        assertResult(first9.toLowerCase(),
+                stat, "SELECT TO_CHAR(X, 'day') FROM T");
+        assertResult(first9.toLowerCase(),
+                stat, "SELECT TO_CHAR(X, 'dAY') FROM T");
+        assertResult(expected,
+                stat, "SELECT TO_CHAR(X, 'fmDay') FROM T");
+        assertResult("12", stat, "SELECT TO_CHAR(X, 'DD') FROM T");
+        assertResult("316", stat, "SELECT TO_CHAR(X, 'DDD') FROM T");
+        assertResult("316", stat, "SELECT TO_CHAR(X, 'DdD') FROM T");
+        assertResult("316", stat, "SELECT TO_CHAR(X, 'dDD') FROM T");
+        assertResult("316", stat, "SELECT TO_CHAR(X, 'ddd') FROM T");
+        expected = String.format("%1$tA, %1$tB %1$te, %1$tY", timestamp1979);
+        assertResult(expected, stat,
+                "SELECT TO_CHAR(X, 'DL') FROM T");
+        assertResult("11/12/1979", stat, "SELECT TO_CHAR(X, 'DS') FROM T");
+        assertResult("11/12/1979", stat, "SELECT TO_CHAR(X, 'Ds') FROM T");
+        assertResult("11/12/1979", stat, "SELECT TO_CHAR(X, 'dS') FROM T");
+        assertResult("11/12/1979", stat, "SELECT TO_CHAR(X, 'ds') FROM T");
+        expected = String.format("%1$ta", timestamp1979);
+        assertResult(expected.toUpperCase(), stat, "SELECT TO_CHAR(X, 'DY') FROM T");
+        assertResult(Capitalization.CAPITALIZE.apply(expected), stat, "SELECT TO_CHAR(X, 'Dy') FROM T");
+        assertResult(expected.toLowerCase(), stat, "SELECT TO_CHAR(X, 'dy') FROM T");
+        assertResult(expected.toLowerCase(), stat, "SELECT TO_CHAR(X, 'dY') FROM T");
+        assertResult("08:12:34.560000000", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF') FROM T");
+        assertResult("08:12:34.5", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF1') FROM T");
+        assertResult("08:12:34.56", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF2') FROM T");
+        assertResult("08:12:34.560", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF3') FROM T");
+        assertResult("08:12:34.5600", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF4') FROM T");
+        assertResult("08:12:34.56000", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF5') FROM T");
+        assertResult("08:12:34.560000", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF6') FROM T");
+        assertResult("08:12:34.5600000", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF7') FROM T");
+        assertResult("08:12:34.56000000", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF8') FROM T");
+        assertResult("08:12:34.560000000", stat,
+                "SELECT TO_CHAR(X, 'HH:MI:SS.FF9') FROM T");
+        assertResult("012345678", stat,
+                "SELECT TO_CHAR(TIME '0:00:00.012345678', 'FF') FROM T");
+        assertResult("00", stat,
+                "SELECT TO_CHAR(TIME '0:00:00.000', 'FF2') FROM T");
+        assertResult("08:12", stat, "SELECT TO_CHAR(X, 'HH:MI') FROM T");
+        assertResult("08:12", stat, "SELECT TO_CHAR(X, 'HH12:MI') FROM T");
+        assertResult("08:12", stat, "SELECT TO_CHAR(X, 'HH24:MI') FROM T");
+        assertResult("46", stat, "SELECT TO_CHAR(X, 'IW') FROM T");
+        assertResult("46", stat, "SELECT TO_CHAR(X, 'WW') FROM T");
+        assertResult("2", stat, "SELECT TO_CHAR(X, 'W') FROM T");
+        assertResult("9", stat, "SELECT TO_CHAR(X, 'I') FROM T");
+        assertResult("79", stat, "SELECT TO_CHAR(X, 'IY') FROM T");
+        assertResult("979", stat, "SELECT TO_CHAR(X, 'IYY') FROM T");
+        assertResult("1979", stat, "SELECT TO_CHAR(X, 'IYYY') FROM T");
+        assertResult("2444190", stat, "SELECT TO_CHAR(X, 'J') FROM T");
+        assertResult("12", stat, "SELECT TO_CHAR(X, 'MI') FROM T");
+        assertResult("11", stat, "SELECT TO_CHAR(X, 'MM') FROM T");
+        assertResult("11", stat, "SELECT TO_CHAR(X, 'Mm') FROM T");
+        assertResult("11", stat, "SELECT TO_CHAR(X, 'mM') FROM T");
+        assertResult("11", stat, "SELECT TO_CHAR(X, 'mm') FROM T");
+        expected = String.format("%1$tb", timestamp1979);
+        expected = stripTrailingPeriod(expected);
+        expected = expected.substring(0, 1).toUpperCase() + expected.substring(1);
+        assertResult(expected.toUpperCase(), stat,
+                "SELECT TO_CHAR(X, 'MON') FROM T");
+        assertResult(expected, stat,
+                "SELECT TO_CHAR(X, 'Mon') FROM T");
+        assertResult(expected.toLowerCase(), stat,
+                "SELECT TO_CHAR(X, 'mon') FROM T");
+        expected = String.format("%1$tB", timestamp1979);
+        expected = (expected + "         ").substring(0, 9);
+        assertResult(expected.toUpperCase(), stat,
+                "SELECT TO_CHAR(X, 'MONTH') FROM T");
+        assertResult(Capitalization.CAPITALIZE.apply(expected), stat,
+                "SELECT TO_CHAR(X, 'Month') FROM T");
+        assertResult(expected.toLowerCase(), stat,
+                "SELECT TO_CHAR(X, 'month') FROM T");
+        assertResult(Capitalization.CAPITALIZE.apply(expected.trim()), stat,
+                "SELECT TO_CHAR(X, 'fmMonth') FROM T");
+        assertResult("4", stat, "SELECT TO_CHAR(X, 'Q') FROM T");
+        assertResult("XI", stat, "SELECT TO_CHAR(X, 'RM') FROM T");
+        assertResult("xi", stat, "SELECT TO_CHAR(X, 'rm') FROM T");
+        assertResult("Xi", stat, "SELECT TO_CHAR(X, 'Rm') FROM T");
+        assertResult("79", stat, "SELECT TO_CHAR(X, 'RR') FROM T");
+        assertResult("1979", stat, "SELECT TO_CHAR(X, 'RRRR') FROM T");
+        assertResult("34", stat, "SELECT TO_CHAR(X, 'SS') FROM T");
+        assertResult("29554", stat, "SELECT TO_CHAR(X, 'SSSSS') FROM T");
+        expected = new SimpleDateFormat("h:mm:ss aa").format(timestamp1979);
+        if (Locale.getDefault().getLanguage().equals(Locale.ENGLISH.getLanguage())) {
+            assertEquals("8:12:34 AM", expected);
+        }
+        assertResult(expected, stat, "SELECT TO_CHAR(X, 'TS') FROM T");
+        assertResult(tzLongName, stat, "SELECT TO_CHAR(X, 'TZR') FROM T");
+        assertResult(tzShortName, stat, "SELECT TO_CHAR(X, 'TZD') FROM T");
+        assertResult("GMT+10:30", stat,
+                "SELECT TO_CHAR(TIMESTAMP WITH TIME ZONE '2010-01-01 0:00:00+10:30', 'TZR')");
+        assertResult("GMT+10:30", stat,
+                "SELECT TO_CHAR(TIMESTAMP WITH TIME ZONE '2010-01-01 0:00:00+10:30', 'TZD')");
+        expected = String.format("%f", 1.1).substring(1, 2);
+        assertResult(expected, stat, "SELECT TO_CHAR(X, 'X') FROM T");
+        expected = String.format("%,d", 1979);
+        assertResult(expected, stat, "SELECT TO_CHAR(X, 'Y,YYY') FROM T");
+        assertResult("1979", stat, "SELECT TO_CHAR(X, 'YYYY') FROM T");
+        assertResult("1979", stat, "SELECT TO_CHAR(X, 'SYYYY') FROM T");
+        assertResult("-0100", stat, "SELECT TO_CHAR(X, 'SYYYY') FROM U");
+        assertResult("979", stat, "SELECT TO_CHAR(X, 'YYY') FROM T");
+        assertResult("79", stat, "SELECT TO_CHAR(X, 'YY') FROM T");
+        assertResult("9", stat, "SELECT TO_CHAR(X, 'Y') FROM T");
+        assertResult("7979", stat, "SELECT TO_CHAR(X, 'yyfxyy') FROM T");
+        assertThrows(ErrorCode.INVALID_TO_CHAR_FORMAT, stat,
+                "SELECT TO_CHAR(X, 'A') FROM T");
+
+        // check a bug we had when the month or day of the month is 1 digit
+        stat.executeUpdate("TRUNCATE TABLE T");
+        stat.executeUpdate("INSERT INTO T VALUES (TIMESTAMP '1985-01-01 08:12:34.560')");
+        assertResult("19850101", stat, "SELECT TO_CHAR(X, 'YYYYMMDD') FROM T");
+
+        conn.close();
+    }
+
+    private static String stripTrailingPeriod(String expected) {
+        // CLDR provider appends period on some locales
+        int l = expected.length() - 1;
+        if (expected.charAt(l) == '.')
+            expected = expected.substring(0, l);
+        return expected;
+    }
+
+    private void testIfNull() throws SQLException {
+        deleteDb("functions");
+        Connection conn = getConnection("functions");
+        Statement stat = conn.createStatement(
+                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
+        stat.execute("CREATE TABLE T(f1 double)");
+        stat.executeUpdate("INSERT INTO T VALUES( 1.2 )");
+        stat.executeUpdate("INSERT INTO T VALUES( null )");
+        ResultSet rs = stat.executeQuery("SELECT IFNULL(f1, 0.0) FROM T");
+        ResultSetMetaData metaData = rs.getMetaData();
+        assertEquals("java.lang.Double", metaData.getColumnClassName(1));
+        rs.next();
+        assertEquals("java.lang.Double", rs.getObject(1).getClass().getName());
+        rs.next();
+        assertEquals("java.lang.Double", rs.getObject(1).getClass().getName());
+        conn.close();
+    }
+
+    private void testToCharFromNumber() throws SQLException {
+        deleteDb("functions");
+        Connection conn = getConnection("functions");
+        Statement stat = conn.createStatement();
+
+        Currency currency = Currency.getInstance(Locale.getDefault());
+        String cc = currency.getCurrencyCode();
+        String cs = currency.getSymbol();
+
+        assertResult(".45", stat,
+                "SELECT TO_CHAR(0.45) FROM DUAL");
+        assertResult("12923", stat,
+                "SELECT TO_CHAR(12923) FROM DUAL");
+        assertResult(" 12923.00", stat,
+                "SELECT TO_CHAR(12923, '99999.99', 'NLS_CURRENCY = BTC') FROM DUAL");
+        assertResult("12923.", stat,
+                "SELECT TO_CHAR(12923, 'FM99999.99', 'NLS_CURRENCY = BTC') FROM DUAL");
+        assertResult("######", stat,
+                "SELECT TO_CHAR(12345, '9,999') FROM DUAL");
+        assertResult("######", stat,
+                "SELECT TO_CHAR(1234567, '9,999') FROM DUAL");
+        assertResult(" 12,345", stat,
+                "SELECT TO_CHAR(12345, '99,999') FROM DUAL");
+        assertResult(" 123,45", stat,
+                "SELECT TO_CHAR(12345, '999,99') FROM DUAL");
+        assertResult("######", stat,
+                "SELECT TO_CHAR(12345, '9.999') FROM DUAL");
+        assertResult("#######", stat,
+                "SELECT TO_CHAR(12345, '99.999') FROM DUAL");
+        assertResult("########", stat,
+                "SELECT TO_CHAR(12345, '999.999') FROM DUAL");
+        assertResult("#########", stat,
+                "SELECT TO_CHAR(12345, '9999.999') FROM DUAL");
+        assertResult(" 12345.000", stat,
+                "SELECT TO_CHAR(12345, '99999.999') FROM DUAL");
+        assertResult("###", stat,
+                "SELECT TO_CHAR(12345, '$9') FROM DUAL");
+        assertResult("#####", stat,
+                "SELECT TO_CHAR(12345, '$999') FROM DUAL");
+        assertResult("######", stat,
+                "SELECT TO_CHAR(12345, '$9999') FROM DUAL");
+        String expected = String.format("%,d", 12345);
+        if (Locale.getDefault() == Locale.ENGLISH) {
+            assertResult(String.format("%5s12345", cs), stat,
+                    "SELECT TO_CHAR(12345, '$99999999') FROM DUAL");
+            assertResult(String.format("%6s12,345.35", cs), stat,
+                    "SELECT TO_CHAR(12345.345, '$99,999,999.99') FROM DUAL");
+            assertResult(String.format("%5s%s", cs, expected), stat,
+                    "SELECT TO_CHAR(12345.345, '$99g999g999') FROM DUAL");
+            assertResult(" " + cs + "123.45", stat,
+                    "SELECT TO_CHAR(123.45, 'L999.99') FROM DUAL");
+            assertResult(" -" + cs + "123.45", stat,
+                    "SELECT TO_CHAR(-123.45, 'L999.99') FROM DUAL");
+            assertResult(cs + "123.45", stat,
+                    "SELECT TO_CHAR(123.45, 'FML999.99') FROM DUAL");
+            assertResult(" " + cs + "123.45", stat,
+                    "SELECT TO_CHAR(123.45, 'U999.99') FROM DUAL");
+            assertResult(" " + cs + "123.45", stat,
+                    "SELECT TO_CHAR(123.45, 'u999.99') FROM DUAL");
+
+        }
+        assertResult(" 12,345.35", stat,
+                "SELECT TO_CHAR(12345.345, '99,999,999.99') FROM DUAL");
+        assertResult("12,345.35", stat,
+                "SELECT TO_CHAR(12345.345, 'FM99,999,999.99') FROM DUAL");
+        assertResult(" 00,012,345.35", stat,
+                "SELECT TO_CHAR(12345.345, '00,000,000.00') FROM DUAL");
+        assertResult("00,012,345.35", stat,
+                "SELECT TO_CHAR(12345.345, 'FM00,000,000.00') FROM DUAL");
+        assertResult("###", stat,
+                "SELECT TO_CHAR(12345, '09') FROM DUAL");
+        assertResult("#####", stat,
+                "SELECT TO_CHAR(12345, '0999') FROM DUAL");
+        assertResult(" 00012345", stat,
+                "SELECT TO_CHAR(12345, '09999999') FROM DUAL");
+        assertResult(" 0000012345", stat,
+                "SELECT TO_CHAR(12345, '0009999999') FROM DUAL");
+        assertResult("###", stat,
+                "SELECT TO_CHAR(12345, '90') FROM DUAL");
+        assertResult("#####", stat,
+                "SELECT TO_CHAR(12345, '9990') FROM DUAL");
+        assertResult(" 12345", stat,
+                "SELECT TO_CHAR(12345, '99999990') FROM DUAL");
+        assertResult(" 12345", stat,
+                "SELECT TO_CHAR(12345, '9999999000') FROM DUAL");
+        assertResult(" 12345", stat,
+                "SELECT TO_CHAR(12345, '9999999990') FROM DUAL");
+        assertResult("12345", stat,
+                "SELECT TO_CHAR(12345, 'FM9999999990') FROM DUAL");
+        assertResult(" 12345.2300", stat,
+                "SELECT TO_CHAR(12345.23, '9999999.9000') FROM DUAL");
+        assertResult(" 12345", stat,
+                "SELECT TO_CHAR(12345, '9999999') FROM DUAL");
+        assertResult(" 12345", stat,
+                "SELECT TO_CHAR(12345, '999999') FROM DUAL");
+        assertResult(" 12345", stat,
+                "SELECT TO_CHAR(12345, '99999') FROM DUAL");
+        assertResult(" 12345", stat,
+                "SELECT TO_CHAR(12345, '00000') FROM DUAL");
+        assertResult("#####", stat,
+                "SELECT TO_CHAR(12345, '9999') FROM DUAL");
+        assertResult("#####", stat,
+                "SELECT TO_CHAR(12345, '0000') FROM DUAL");
+        assertResult(" -12345", stat,
+                "SELECT TO_CHAR(-12345, '99999999') FROM DUAL");
+        assertResult(" -12345", stat,
+                "SELECT TO_CHAR(-12345, '9999999') FROM DUAL");
+        assertResult(" -12345", stat,
+                "SELECT TO_CHAR(-12345, '999999') FROM DUAL");
+        assertResult("-12345", stat,
+                "SELECT TO_CHAR(-12345, '99999') FROM DUAL");
+        assertResult("#####", stat,
+                "SELECT TO_CHAR(-12345, '9999') FROM DUAL");
+        assertResult("####", stat,
+                "SELECT TO_CHAR(-12345, '999') FROM DUAL");
+        assertResult(" 0", stat,
+                "SELECT TO_CHAR(0, '9999999') FROM DUAL");
+        assertResult(" 00.30", stat,
+                "SELECT TO_CHAR(0.3, '00.99') FROM DUAL");
+        assertResult("00.3", stat,
+                "SELECT TO_CHAR(0.3, 'FM00.99') FROM DUAL");
+        assertResult(" 00.30", stat,
+                "SELECT TO_CHAR(0.3, '00.00') FROM DUAL");
+        assertResult(" .30000", stat,
+                "SELECT TO_CHAR(0.3, '99.00000') FROM DUAL");
+        assertResult(".30000", stat,
+                "SELECT TO_CHAR(0.3, 'FM99.00000') FROM DUAL");
+        assertResult(" 00.30", stat,
+                "SELECT TO_CHAR(0.3, 'B00.99') FROM DUAL");
+        assertResult(" .30", stat,
+                "SELECT TO_CHAR(0.3, 'B99.99') FROM DUAL");
+        assertResult(" .30", stat,
+                "SELECT TO_CHAR(0.3, '99.99') FROM DUAL");
+        assertResult(".3", stat,
+                "SELECT TO_CHAR(0.3, 'FMB99.99') FROM DUAL");
+        assertResult(" 00.30", stat,
+                "SELECT TO_CHAR(0.3, 'B09.99') FROM DUAL");
+        assertResult(" 00.30", stat,
+                "SELECT TO_CHAR(0.3, 'B00.00') FROM DUAL");
+        assertResult(" " + cc + "123.45", stat,
+                "SELECT TO_CHAR(123.45, 'C999.99') FROM DUAL");
+        assertResult(" -" + cc + "123.45", stat,
+                "SELECT TO_CHAR(-123.45, 'C999.99') FROM DUAL");
+        assertResult(" " + cc + "123.45", stat,
+                "SELECT TO_CHAR(123.45, 'C999,999.99') FROM DUAL");
+        assertResult(" " + cc + "123", stat,
+                "SELECT TO_CHAR(123.45, 'C999g999') FROM DUAL");
+        assertResult(cc + "123.45", stat,
+                "SELECT TO_CHAR(123.45, 'FMC999,999.99') FROM DUAL");
+        expected = String.format("%.2f", 0.33f).substring(1);
+        assertResult(" " + expected, stat,
+                "SELECT TO_CHAR(0.326, '99D99') FROM DUAL");
+        assertResult(" 1.2E+02", stat,
+                "SELECT TO_CHAR(123.456, '9.9EEEE') FROM DUAL");
+        assertResult(" 1.2E+14", stat,
+                "SELECT TO_CHAR(123456789012345, '9.9EEEE') FROM DUAL");
+        assertResult(" 1E+02", stat, "SELECT TO_CHAR(123.456, '9EEEE') FROM DUAL");
+        assertResult(" 1E+02", stat, "SELECT TO_CHAR(123.456, '999EEEE') FROM DUAL");
+        assertResult(" 1E-03", stat, "SELECT TO_CHAR(.00123456, '999EEEE') FROM DUAL");
+        assertResult(" 1E+00", stat, "SELECT TO_CHAR(1, '999EEEE') FROM DUAL");
+        assertResult(" -1E+00", stat, "SELECT TO_CHAR(-1, '999EEEE') FROM DUAL");
+        assertResult(" 1.23456000E+02", stat,
+                "SELECT TO_CHAR(123.456, '00.00000000EEEE') FROM DUAL");
+        assertResult("1.23456000E+02", stat,
+                "SELECT TO_CHAR(123.456, 'fm00.00000000EEEE') FROM DUAL");
+        expected = String.format("%,d", 1234567);
+        assertResult(" " + expected, stat,
+                "SELECT TO_CHAR(1234567, '9G999G999') FROM DUAL");
+        assertResult("-" + expected, stat,
+                "SELECT TO_CHAR(-1234567, '9G999G999') FROM DUAL");
+        assertResult("123.45-", stat, "SELECT TO_CHAR(-123.45, '999.99MI') FROM DUAL");
+        assertResult("123.45-", stat, "SELECT TO_CHAR(-123.45, '999.99mi') FROM DUAL");
+        assertResult("123.45-", stat, "SELECT TO_CHAR(-123.45, '999.99mI') FROM DUAL");
+        assertResult("230.00-", stat, "SELECT TO_CHAR(-230, '999.99MI') FROM DUAL");
+        assertResult("230-", stat, "SELECT TO_CHAR(-230, '999MI') FROM DUAL");
+        assertResult("123.45 ", stat, "SELECT TO_CHAR(123.45, '999.99MI') FROM DUAL");
+        assertResult("230.00 ", stat, "SELECT TO_CHAR(230, '999.99MI') FROM DUAL");
+        assertResult("230 ", stat, "SELECT TO_CHAR(230, '999MI') FROM DUAL");
+        assertResult("230", stat, "SELECT TO_CHAR(230, 'FM999MI') FROM DUAL");
+        assertResult("<230>", stat, "SELECT TO_CHAR(-230, '999PR') FROM DUAL");
+        assertResult("<230>", stat, "SELECT TO_CHAR(-230, '999pr') FROM DUAL");
+        assertResult("<230>", stat, "SELECT TO_CHAR(-230, 'fm999pr') FROM DUAL");
+        assertResult(" 230 ", stat, "SELECT TO_CHAR(230, '999PR') FROM DUAL");
+        assertResult("230", stat, "SELECT TO_CHAR(230, 'FM999PR') FROM DUAL");
+        assertResult("0", stat, "SELECT TO_CHAR(0, 'fm999pr') FROM DUAL");
+        assertResult(" XI", stat, "SELECT TO_CHAR(11, 'RN') FROM DUAL");
+        assertResult("XI", stat, "SELECT TO_CHAR(11, 'FMRN') FROM DUAL");
+        assertResult("xi", stat, "SELECT TO_CHAR(11, 'FMrN') FROM DUAL");
+        assertResult(" XI", stat, "SELECT TO_CHAR(11, 'RN') FROM DUAL;");
+        assertResult(" xi", stat, "SELECT TO_CHAR(11, 'rN') FROM DUAL");
+        assertResult(" xi", stat, "SELECT TO_CHAR(11, 'rn') FROM DUAL");
+        assertResult(" +42", stat, "SELECT TO_CHAR(42, 'S999') FROM DUAL");
+        assertResult(" +42", stat, "SELECT TO_CHAR(42, 's999') FROM DUAL");
+        assertResult(" 42+", stat, "SELECT TO_CHAR(42, '999S') FROM DUAL");
+        assertResult(" -42", stat, "SELECT TO_CHAR(-42, 'S999') FROM DUAL");
+        assertResult(" 42-", stat, "SELECT TO_CHAR(-42, '999S') FROM DUAL");
+        assertResult("42", stat, "SELECT TO_CHAR(42, 'TM') FROM DUAL");
+        assertResult("-42", stat, "SELECT TO_CHAR(-42, 'TM') FROM DUAL");
+        assertResult("4212341241234.23412342", stat,
+                "SELECT TO_CHAR(4212341241234.23412342, 'tm') FROM DUAL");
+        assertResult(".23412342", stat, "SELECT TO_CHAR(0.23412342, 'tm') FROM DUAL");
+        assertResult(" 12300", stat, "SELECT TO_CHAR(123, '999V99') FROM DUAL");
+        assertResult("######", stat, "SELECT TO_CHAR(1234, '999V99') FROM DUAL");
+        assertResult("123400", stat, "SELECT TO_CHAR(1234, 'FM9999v99') FROM DUAL");
+        assertResult("1234", stat, "SELECT TO_CHAR(123.4, 'FM9999V9') FROM DUAL");
+        assertResult("123", stat, "SELECT TO_CHAR(123.4, 'FM9999V') FROM DUAL");
+        assertResult("123400000", stat,
+                "SELECT TO_CHAR(123.4, 'FM9999V090909') FROM DUAL");
+        assertResult("##", stat, "SELECT TO_CHAR(123, 'X') FROM DUAL");
+        assertResult(" 7B", stat, "SELECT TO_CHAR(123, 'XX') FROM DUAL");
+        assertResult(" 7b", stat, "SELECT TO_CHAR(123, 'Xx') FROM DUAL");
+        assertResult(" 7b", stat, "SELECT TO_CHAR(123, 'xX') FROM DUAL");
+        assertResult(" 7B", stat, "SELECT TO_CHAR(123, 'XXXX') FROM DUAL");
+        assertResult(" 007B", stat, "SELECT TO_CHAR(123, '000X') FROM DUAL");
+        assertResult(" 007B", stat, "SELECT TO_CHAR(123, '0XXX') FROM DUAL");
+        assertResult("####", stat, "SELECT TO_CHAR(123456789, 'FMXXX') FROM DUAL");
+        assertResult("7B", stat, "SELECT TO_CHAR(123, 'FMXX') FROM DUAL");
+        assertResult("C6", stat, "SELECT TO_CHAR(197.6, 'FMXX') FROM DUAL");
+        assertResult(" 7", stat, "SELECT TO_CHAR(7, 'XX') FROM DUAL");
+        assertResult("123", stat, "SELECT TO_CHAR(123, 'TM') FROM DUAL");
+        assertResult("123", stat, "SELECT TO_CHAR(123, 'tm') FROM DUAL");
+        assertResult("123", stat, "SELECT TO_CHAR(123, 'tM9') FROM DUAL");
+        assertResult("1.23E+02", stat, "SELECT TO_CHAR(123, 'TME') FROM DUAL");
+        assertResult("1.23456789012345E+14", stat,
+                "SELECT TO_CHAR(123456789012345, 'TME') FROM DUAL");
+        assertResult("4.5E-01", stat, "SELECT TO_CHAR(0.45, 'TME') FROM DUAL");
+        assertResult("4.5E-01", stat, "SELECT TO_CHAR(0.45, 'tMe') FROM DUAL");
+        assertThrows(ErrorCode.INVALID_TO_CHAR_FORMAT, stat,
+                "SELECT TO_CHAR(123.45, '999.99q') FROM DUAL");
+        assertThrows(ErrorCode.INVALID_TO_CHAR_FORMAT, stat,
+                "SELECT TO_CHAR(123.45, 'fm999.99q') FROM DUAL");
+        assertThrows(ErrorCode.INVALID_TO_CHAR_FORMAT, stat,
+                "SELECT TO_CHAR(123.45, 'q999.99') FROM DUAL");
+
+        // ISSUE-115
+        assertResult("0.123", stat, "select to_char(0.123, 'FM0.099') from dual;");
+        assertResult("1.123", stat, "select to_char(1.1234, 'FM0.099') from dual;");
+        assertResult("1.1234", stat, "select to_char(1.1234, 'FM0.0999') from dual;");
+        assertResult("1.023", stat, "select to_char(1.023, 'FM0.099') from dual;");
+        assertResult("0.012", stat, "select to_char(0.012, 'FM0.099') from dual;");
+        assertResult("0.123", stat, "select to_char(0.123, 'FM0.099') from dual;");
+        assertResult("0.001", stat, "select to_char(0.001, 'FM0.099') from dual;");
+        assertResult("0.001", stat, "select to_char(0.0012, 'FM0.099') from dual;");
+        assertResult("0.002", stat, "select to_char(0.0019, 'FM0.099') from dual;");
+        final char decimalSeparator = DecimalFormatSymbols.getInstance().getDecimalSeparator();
+        final String oneDecimal = "0" + decimalSeparator + "0";
+        final String twoDecimals = "0" + decimalSeparator + "00";
+        assertResult(oneDecimal, stat, "select to_char(0, 'FM0D099') from dual;");
+        assertResult(twoDecimals, stat, "select to_char(0., 'FM0D009') from dual;");
+        assertResult("0" + decimalSeparator + "000000000",
+                stat, "select to_char(0.000000000, 'FM0D999999999') from dual;");
+        assertResult("0" + decimalSeparator, stat, "select to_char(0, 'FM0D9') from dual;");
+        assertResult(oneDecimal, stat, "select to_char(0.0, 'FM0D099') from dual;");
+        assertResult(twoDecimals, stat, "select to_char(0.00, 'FM0D009') from dual;");
+        assertResult(twoDecimals, stat, "select to_char(0, 'FM0D009') from dual;");
+        assertResult(oneDecimal, stat, "select to_char(0, 'FM0D09') from dual;");
+        assertResult(oneDecimal, stat, "select to_char(0, 'FM0D0') from dual;");
+        conn.close();
+    }
+
+    private void testToCharFromText() throws SQLException {
+        deleteDb("functions");
+        Connection conn = getConnection("functions");
+        Statement stat = conn.createStatement();
+        assertResult("abc", stat, "SELECT TO_CHAR('abc') FROM DUAL");
+        conn.close();
+    }
+
+    private void testGenerateSeries() throws SQLException {
+        Connection conn = getConnection("functions");
+        Statement stat = conn.createStatement();
+
+        ResultSet rs = stat.executeQuery("select * from system_range(1,3)");
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        rs.next();
+        assertEquals(2, rs.getInt(1));
+        rs.next();
+        assertEquals(3, rs.getInt(1));
+
+        rs = stat.executeQuery("select * from system_range(2,2)");
+        assertTrue(rs.next());
+        assertEquals(2, rs.getInt(1));
+
+        rs = stat.executeQuery("select * from system_range(2,1)");
+        assertFalse(rs.next());
+
+        rs = stat.executeQuery("select * from system_range(1,2,-1)");
+        assertFalse(rs.next());
+
+        assertThrows(ErrorCode.STEP_SIZE_MUST_NOT_BE_ZERO, stat).executeQuery(
+                "select * from system_range(1,2,0)");
+
+        rs = stat.executeQuery("select * from system_range(2,1,-1)");
+        assertTrue(rs.next());
+        assertEquals(2, rs.getInt(1));
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+
+        rs = stat.executeQuery("select * from system_range(1,5,2)");
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+        assertTrue(rs.next());
+        assertEquals(3, rs.getInt(1));
+        assertTrue(rs.next());
+        assertEquals(5, rs.getInt(1));
+
+        rs = stat.executeQuery("select * from system_range(1,6,2)");
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+        assertTrue(rs.next());
+        assertEquals(3, rs.getInt(1));
+        assertTrue(rs.next());
+        assertEquals(5, rs.getInt(1));
+
+        conn.close();
+    }
+
+    private void testAnnotationProcessorsOutput() throws SQLException {
+        try {
+            System.setProperty(TestAnnotationProcessor.MESSAGES_KEY, "WARNING,foo1|ERROR,foo2");
+            callCompiledFunction("test_annotation_processor_warn_and_error");
+            fail();
+        } catch (JdbcSQLException e) {
+            assertEquals(ErrorCode.SYNTAX_ERROR_1, e.getErrorCode());
+            assertContains(e.getMessage(), "foo1");
+            assertContains(e.getMessage(), "foo2");
+        } finally {
+            System.clearProperty(TestAnnotationProcessor.MESSAGES_KEY);
+        }
+    }
+
+    private void testRound() throws SQLException {
+        deleteDb("functions");
+
+        Connection conn = getConnection("functions");
+        Statement stat = conn.createStatement();
+
+        final ResultSet rs = stat.executeQuery(
+                "select ROUND(-1.2), ROUND(-1.5), ROUND(-1.6), " +
+                "ROUND(2), ROUND(1.5), ROUND(1.8), ROUND(1.1) from dual");
+
+        rs.next();
+        assertEquals(-1, rs.getInt(1));
+        assertEquals(-2, rs.getInt(2));
+        assertEquals(-2, rs.getInt(3));
+        assertEquals(2, rs.getInt(4));
+        assertEquals(2, rs.getInt(5));
+        assertEquals(2, rs.getInt(6));
+        assertEquals(1, rs.getInt(7));
+
+        rs.close();
+        conn.close();
+    }
+
+    private void testSignal() throws SQLException {
+        deleteDb("functions");
+
+        Connection conn = getConnection("functions");
+        Statement stat = conn.createStatement();
+
+        assertThrows(ErrorCode.INVALID_VALUE_2, stat).execute("select signal('00145', 'success class is invalid')");
+        assertThrows(ErrorCode.INVALID_VALUE_2, stat).execute("select signal('foo', 'SQLSTATE has 5 chars')");
+        assertThrows(ErrorCode.INVALID_VALUE_2, stat)
+                .execute("select signal('Ab123', 'SQLSTATE has only digits or upper-case letters')");
+        try {
+            stat.execute("select signal('AB123', 'some custom error')");
+            fail("Should have thrown");
+        } catch (SQLException e) {
+            assertEquals("AB123", e.getSQLState());
+            assertContains(e.getMessage(), "some custom error");
+        }
+
+        conn.close();
+    }
+
+    private void testThatCurrentTimestampIsSane() throws SQLException,
+            ParseException {
+        deleteDb("functions");
+
+        Date before = new Date();
+
+        Connection conn = getConnection("functions");
+        conn.setAutoCommit(false);
+        Statement stat = conn.createStatement();
+
+
+        final String formatted;
+        final ResultSet rs = stat.executeQuery(
+                "select to_char(current_timestamp(9), 'YYYY MM DD HH24 MI SS FF3') from dual");
+        rs.next();
+        formatted = rs.getString(1);
+        rs.close();
+
+        Date after = new Date();
+
+        Date parsed = new SimpleDateFormat("y M d H m s S").parse(formatted);
+
+        assertFalse(parsed.before(before));
+        assertFalse(parsed.after(after));
+        conn.close();
+    }
+
+
+    private void testThatCurrentTimestampStaysTheSameWithinATransaction()
+            throws SQLException, InterruptedException {
+        deleteDb("functions");
+        Connection conn = getConnection("functions");
+        conn.setAutoCommit(false);
+        Statement stat = conn.createStatement();
+
+        Timestamp first;
+        ResultSet rs = stat.executeQuery("select CURRENT_TIMESTAMP from DUAL");
+        rs.next();
+        first = rs.getTimestamp(1);
+        rs.close();
+
+        Thread.sleep(1);
+
+        Timestamp second;
+        rs = stat.executeQuery("select CURRENT_TIMESTAMP from DUAL");
+        rs.next();
+        second = rs.getTimestamp(1);
+        rs.close();
+
+        assertEquals(first, second);
+        conn.close();
+    }
+
+    private void testThatCurrentTimestampUpdatesOutsideATransaction()
+            throws SQLException, InterruptedException {
+        deleteDb("functions");
+        Connection conn = getConnection("functions");
+        conn.setAutoCommit(true);
+        Statement stat = conn.createStatement();
+
+        Timestamp first;
+        ResultSet rs = stat.executeQuery("select CURRENT_TIMESTAMP from DUAL");
+        rs.next();
+        first = rs.getTimestamp(1);
+        rs.close();
+
+        Thread.sleep(1);
+
+        Timestamp second;
+        rs = stat.executeQuery("select CURRENT_TIMESTAMP from DUAL");
+        rs.next();
+        second = rs.getTimestamp(1);
+        rs.close();
+
+        assertTrue(second.after(first));
+        conn.close();
+    }
+
+    private void testOverrideAlias() throws SQLException {
+        deleteDb("functions");
+        Connection conn = getConnection("functions");
+        conn.setAutoCommit(true);
+        Statement stat = conn.createStatement();
+
+        assertThrows(ErrorCode.FUNCTION_ALIAS_ALREADY_EXISTS_1, stat).execute("create alias CURRENT_TIMESTAMP for \"" +
+                getClass().getName() + ".currentTimestamp\"");
+
+        stat.execute("set BUILTIN_ALIAS_OVERRIDE true");
+
+        stat.execute("create alias CURRENT_TIMESTAMP for \"" +
+                getClass().getName() + ".currentTimestampOverride\"");
+
+        assertCallResult("3141", stat, "CURRENT_TIMESTAMP");
+
+        conn.close();
+    }
+
+    private void callCompiledFunction(String functionName) throws SQLException {
+        deleteDb("functions");
+        try (Connection conn = getConnection("functions")) {
+            Statement stat = conn.createStatement();
+            ResultSet rs;
+            stat.execute("create alias " + functionName + " AS " +
+                    "$$ boolean " + functionName + "() " +
+                    "{ return true; } $$;");
+
+            PreparedStatement stmt = conn.prepareStatement(
+                    "select " + functionName + "() from dual");
+            rs = stmt.executeQuery();
+            rs.next();
+            assertEquals(Boolean.class.getName(), rs.getObject(1).getClass().getName());
+
+            stat.execute("drop alias " + functionName + "");
+        }
+    }
+
+    private void assertCallResult(String expected, Statement stat, String sql)
+            throws SQLException {
+        ResultSet rs = stat.executeQuery("CALL " + sql);
+        rs.next();
+        String s = rs.getString(1);
+        assertEquals(expected, s);
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param value the blob
+     * @return the input stream
+     */
+    public static BufferedInputStream blob2stream(Blob value)
+            throws SQLException {
+        if (value == null) {
+            return null;
+        }
+        BufferedInputStream bufferedInStream = new BufferedInputStream(
+                value.getBinaryStream());
+        return bufferedInStream;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param value the blob
+     * @return the blob
+     */
+    public static Blob blob(Blob value) {
+        return value;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param value the blob
+     * @return the blob
+     */
+    public static Clob clob(Clob value) {
+        return value;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param value the input stream
+     * @return the buffered input stream
+     */
+    public static BufferedInputStream stream2stream(InputStream value) {
+        if (value == null) {
+            return null;
+        }
+        BufferedInputStream bufferedInStream = new BufferedInputStream(value);
+        return bufferedInStream;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param conn the connection
+     * @param id the test id
+     * @param name the text
+     * @return the count
+     */
+    public static int addRow(Connection conn, int id, String name)
+            throws SQLException {
+        conn.createStatement().execute(
+                "INSERT INTO TEST VALUES(" + id + ", '" + name + "')");
+        ResultSet rs = conn.createStatement().executeQuery(
+                "SELECT COUNT(*) FROM TEST");
+        rs.next();
+        int result = rs.getInt(1);
+        rs.close();
+        return result;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param conn the connection
+     * @param sql the SQL statement
+     * @return the result set
+     */
+    public static ResultSet select(Connection conn, String sql)
+            throws SQLException {
+        Statement stat = conn.createStatement(
+                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
+        return stat.executeQuery(sql);
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param conn the connection
+     * @return the result set
+     */
+    public static ResultSet selectMaxId(Connection conn) throws SQLException {
+        return conn.createStatement().executeQuery(
+                "SELECT MAX(ID) FROM TEST");
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @return the test array
+     */
+    public static Object[] getArray() {
+        return new Object[] { 0, "Hello" };
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param conn the connection
+     * @return the result set
+     */
+    public static ResultSet resultSetWithNull(Connection conn) throws SQLException {
+        PreparedStatement statement = conn.prepareStatement(
+                "select null from system_range(1,1)");
+        return statement.executeQuery();
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param conn the connection
+     * @return the result set
+     */
+    public static ResultSet nullResultSet(@SuppressWarnings("unused") Connection conn) {
+        return null;
+    }
+
+    /**
+     * Test method to create a simple result set.
+     *
+     * @param rowCount the number of rows
+     * @param ip an int
+     * @param bp a boolean
+     * @param fp a float
+     * @param dp a double
+     * @param lp a long
+     * @param byParam a byte
+     * @param sp a short
+     * @return a result set
+     */
+    public static ResultSet simpleResultSet(Integer rowCount, int ip,
+            boolean bp, float fp, double dp, long lp, byte byParam, short sp) {
+        SimpleResultSet rs = new SimpleResultSet();
+        rs.addColumn("ID", Types.INTEGER, 10, 0);
+        rs.addColumn("NAME", Types.VARCHAR, 255, 0);
+        if (rowCount == null) {
+            if (ip != 0 || bp || fp != 0.0 || dp != 0.0 ||
+                    sp != 0 || lp != 0 || byParam != 0) {
+                throw new AssertionError("params not 0/false");
+            }
+        }
+        if (rowCount != null) {
+            if (ip != 1 || !bp || fp != 1.0 || dp != 1.0 ||
+                    sp != 1 || lp != 1 || byParam != 1) {
+                throw new AssertionError("params not 1/true");
+            }
+            if (rowCount.intValue() >= 1) {
+                rs.addRow(0, "Hello");
+            }
+            if (rowCount.intValue() >= 2) {
+                rs.addRow(1, "World");
+            }
+        }
+        return rs;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param value the value
+     * @return the square root
+     */
+    public static int root(int value) {
+        if (value < 0) {
+            TestBase.logError("function called but should not", null);
+        }
+        return (int) Math.sqrt(value);
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @return 1
+     */
+    public static double mean() {
+        return 1;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param dec the value
+     * @return the value
+     */
+    public static BigDecimal noOp(BigDecimal dec) {
+        return dec;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @return the count
+     */
+    public static int getCount() {
+        return count++;
+    }
+
+    private static void setCount(int newCount) {
+        count = newCount;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param s the string
+     * @return the string, reversed
+     */
+    public static String reverse(String s) {
+        return new StringBuilder(s).reverse().toString();
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param values the values
+     * @return the mean value
+     */
+    public static double mean(double... values) {
+        double sum = 0;
+        for (double x : values) {
+            sum += x;
+        }
+        return sum / values.length;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param conn the connection
+     * @param values the values
+     * @return the mean value
+     */
+    public static double mean2(Connection conn, double... values) {
+        conn.getClass();
+        double sum = 0;
+        for (double x : values) {
+            sum += x;
+        }
+        return sum / values.length;
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param prefix the print prefix
+     * @param values the values
+     * @return the text
+     */
+    public static String printMean(String prefix, double... values) {
+        double sum = 0;
+        for (double x : values) {
+            sum += x;
+        }
+        return prefix + ": " + (int) (sum / values.length);
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param a the first UUID
+     * @param b the second UUID
+     * @return a xor b
+     */
+    public static UUID xorUUID(UUID a, UUID b) {
+        return new UUID(a.getMostSignificantBits() ^ b.getMostSignificantBits(),
+                a.getLeastSignificantBits() ^ b.getLeastSignificantBits());
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param args the argument list
+     * @return an array of one element
+     */
+    public static Object[] dynamic(Object[] args) {
+        StringBuilder buff = new StringBuilder();
+        for (Object a : args) {
+            buff.append(a);
+        }
+        return new Object[] { buff.toString() };
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @return a fixed number
+     */
+    public static long currentTimestampOverride() {
+        return 3141;
+    }
+
+    @Override
+    public void add(Object value) {
+        // ignore
+    }
+
+    @Override
+    public Object getResult() {
+        return new BigDecimal("1.6");
+    }
+
+    @Override
+    public int getType(int[] inputTypes) {
+        if (inputTypes.length != 1 || inputTypes[0] != Types.INTEGER) {
+            throw new RuntimeException("unexpected data type");
+        }
+        return Types.DECIMAL;
+    }
+
+    @Override
+    public void init(Connection conn) {
+        // ignore
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/db/TestGeneralCommonTableQueries.java b/modules/h2/src/test/java/org/h2/test/db/TestGeneralCommonTableQueries.java
new file mode 100644
index 0000000000000..ed6b4f4499333
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/db/TestGeneralCommonTableQueries.java
@@ -0,0 +1,581 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import org.h2.jdbc.JdbcSQLException; +import org.h2.test.TestAll; +import org.h2.test.TestBase; + +/** + * Test non-recursive queries using WITH, but more than one common table defined. + */ +public class TestGeneralCommonTableQueries extends AbstractBaseForCommonTableExpressions { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testSimpleSelect(); + testImpliedColumnNames(); + testChainedQuery(); + testParameterizedQuery(); + testNumberedParameterizedQuery(); + testColumnNames(); + + testInsert(); + testUpdate(); + testDelete(); + testMerge(); + testCreateTable(); + testNestedSQL(); + testSimple4RowRecursiveQuery(); + testSimple2By4RowRecursiveQuery(); + testSimple3RowRecursiveQueryWithLazyEval(); + testSimple3RowRecursiveQueryDropAllObjects(); + } + + private void testSimpleSelect() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + Statement stat; + PreparedStatement prep; + ResultSet rs; + + stat = conn.createStatement(); + final String simpleTwoColumnQuery = "with " + + "t1(n) as (select 1 as first) " + + ",t2(n) as (select 2 as first) " + + "select * from t1 union all select * from t2"; + rs = stat.executeQuery(simpleTwoColumnQuery); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement(simpleTwoColumnQuery); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + prep = 
conn.prepareStatement("with " + + "t1(n) as (select 2 as first) " + + ",t2(n) as (select 3 as first) " + + "select * from t1 union all select * from t2 where n<>?"); + + prep.setInt(1, 0); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("with " + + "t1(n) as (select 2 as first) " + + ",t2(n) as (select 3 as first) " + + ",t3(n) as (select 4 as first) " + + "select * from t1 union all select * from t2 union all select * from t3 where n<>?"); + + prep.setInt(1, 4); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testImpliedColumnNames() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + PreparedStatement prep; + ResultSet rs; + + prep = conn.prepareStatement("with " + + "t1 as (select 2 as first_col) " + + ",t2 as (select first_col+1 from t1) " + + ",t3 as (select 4 as first_col) " + + "select * from t1 union all select * from t2 union all select * from t3 where first_col<>?"); + + prep.setInt(1, 4); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(3, rs.getInt("FIRST_COL")); + assertFalse(rs.next()); + assertEquals(rs.getMetaData().getColumnCount(), 1); + assertEquals("FIRST_COL", rs.getMetaData().getColumnLabel(1)); + + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testChainedQuery() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + PreparedStatement prep; + ResultSet rs; + + prep = conn.prepareStatement( + " WITH t1 AS (" + + " SELECT 1 AS 
FIRST_COLUMN" + + ")," + + " t2 AS (" + + " SELECT FIRST_COLUMN+1 AS FIRST_COLUMN FROM t1 " + + ") " + + "SELECT sum(FIRST_COLUMN) FROM t2"); + + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testParameterizedQuery() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + PreparedStatement prep; + ResultSet rs; + + prep = conn.prepareStatement("WITH t1 AS (" + + " SELECT X, 'T1' FROM SYSTEM_RANGE(?,?)" + + ")," + + "t2 AS (" + + " SELECT X, 'T2' FROM SYSTEM_RANGE(?,?)" + + ") " + + "SELECT * FROM t1 UNION ALL SELECT * FROM t2 " + + "UNION ALL SELECT X, 'Q' FROM SYSTEM_RANGE(?,?)"); + prep.setInt(1, 1); + prep.setInt(2, 2); + prep.setInt(3, 3); + prep.setInt(4, 4); + prep.setInt(5, 5); + prep.setInt(6, 6); + rs = prep.executeQuery(); + + for (int n: new int[]{1, 2, 3, 4, 5, 6}) { + assertTrue(rs.next()); + assertEquals(n, rs.getInt(1)); + } + assertFalse(rs.next()); + + // call it twice + rs = prep.executeQuery(); + + for (int n: new int[]{1, 2, 3, 4, 5, 6}) { + assertTrue(rs.next()); + assertEquals(n, rs.getInt(1)); + } + assertFalse(rs.next()); + + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testNumberedParameterizedQuery() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + PreparedStatement prep; + ResultSet rs; + + conn.setAutoCommit(false); + + prep = conn.prepareStatement("WITH t1 AS (" + +" SELECT R.X, 'T1' FROM SYSTEM_RANGE(?1,?2) R" + +")," + +"t2 AS (" + +" SELECT R.X, 'T2' FROM SYSTEM_RANGE(?3,?4) R" + +") " + +"SELECT * FROM t1 UNION ALL SELECT * FROM t2 UNION ALL SELECT X, 'Q' FROM SYSTEM_RANGE(?5,?6)"); + prep.setInt(1, 1); + prep.setInt(2, 2); + prep.setInt(3, 3); + prep.setInt(4, 4); + prep.setInt(5, 5); + 
prep.setInt(6, 6); + rs = prep.executeQuery(); + + for (int n : new int[] { 1, 2, 3, 4, 5, 6 }) { + assertTrue(rs.next()); + assertEquals(n, rs.getInt(1)); + } + assertEquals("X", rs.getMetaData().getColumnLabel(1)); + assertEquals("'T1'", rs.getMetaData().getColumnLabel(2)); + + assertFalse(rs.next()); + + try { + prep = conn.prepareStatement("SELECT * FROM t1 UNION ALL SELECT * FROM t2 "+ + "UNION ALL SELECT X, 'Q' FROM SYSTEM_RANGE(5,6)"); + rs = prep.executeQuery(); + fail("Temp view T1 was accessible after previous WITH statement finished "+ + "- but should not have been."); + } catch (JdbcSQLException e) { + // ensure the T1 table has been removed even without auto commit + assertContains(e.getMessage(), "Table \"T1\" not found;"); + } + + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testInsert() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + Statement stat; + PreparedStatement prep; + ResultSet rs; + int rowCount; + + stat = conn.createStatement(); + stat.execute("CREATE TABLE T1 ( ID INT IDENTITY, X INT NULL, Y VARCHAR(100) NULL )"); + + prep = conn.prepareStatement("WITH v1 AS (" + + " SELECT R.X, 'X1' AS Y FROM SYSTEM_RANGE(?1,?2) R" + + ")" + + "INSERT INTO T1 (X,Y) SELECT v1.X, v1.Y FROM v1"); + prep.setInt(1, 1); + prep.setInt(2, 2); + rowCount = prep.executeUpdate(); + + assertEquals(2, rowCount); + + rs = stat.executeQuery("SELECT ID, X,Y FROM T1"); + + for (int n : new int[]{1, 2}) { + assertTrue(rs.next()); + assertTrue(rs.getInt(1) != 0); + assertEquals(n, rs.getInt(2)); + assertEquals("X1", rs.getString(3)); + } + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testUpdate() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + Statement stat; + PreparedStatement prep; + ResultSet rs; + int rowCount; + + stat = 
conn.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS T1 AS SELECT R.X AS ID, R.X, 'X1' AS Y FROM SYSTEM_RANGE(1,2) R"); + + prep = conn.prepareStatement("WITH v1 AS (" + +" SELECT R.X, 'X1' AS Y FROM SYSTEM_RANGE(?1,?2) R" + +")" + +"UPDATE T1 SET Y = 'Y1' WHERE X IN ( SELECT v1.X FROM v1 )"); + prep.setInt(1, 1); + prep.setInt(2, 2); + rowCount = prep.executeUpdate(); + + assertEquals(2, rowCount); + + rs = stat.executeQuery("SELECT ID, X,Y FROM T1"); + + for (int n : new int[] { 1, 2 }) { + assertTrue(rs.next()); + assertTrue(rs.getInt(1) != 0); + assertEquals(n, rs.getInt(2)); + assertEquals("Y1", rs.getString(3)); + } + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testDelete() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + Statement stat; + PreparedStatement prep; + ResultSet rs; + int rowCount; + + stat = conn.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS T1 AS SELECT R.X AS ID, R.X, 'X1' AS Y FROM SYSTEM_RANGE(1,2) R"); + + prep = conn.prepareStatement("WITH v1 AS (" + +" SELECT R.X, 'X1' AS Y FROM SYSTEM_RANGE(1,2) R" + +")" + +"DELETE FROM T1 WHERE X IN ( SELECT v1.X FROM v1 )"); + rowCount = prep.executeUpdate(); + + assertEquals(2, rowCount); + + rs = stat.executeQuery("SELECT ID, X,Y FROM T1"); + + assertFalse(rs.next()); + + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testMerge() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + Statement stat; + PreparedStatement prep; + ResultSet rs; + int rowCount; + + stat = conn.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS T1 AS SELECT R.X AS ID, R.X, 'X1' AS Y FROM SYSTEM_RANGE(1,2) R"); + + prep = conn.prepareStatement("WITH v1 AS (" + +" SELECT R.X, 'X1' AS Y FROM SYSTEM_RANGE(1,3) R" + +")" + +"MERGE INTO T1 KEY(ID) SELECT v1.X AS 
ID, v1.X, v1.Y FROM v1"); + rowCount = prep.executeUpdate(); + + assertEquals(3, rowCount); + + rs = stat.executeQuery("SELECT ID, X,Y FROM T1"); + + for (int n : new int[] { 1, 2, 3 }) { + assertTrue(rs.next()); + assertTrue(rs.getInt(1) != 0); + assertEquals(n, rs.getInt(2)); + assertEquals("X1", rs.getString(3)); + } + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testCreateTable() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + Statement stat; + PreparedStatement prep; + ResultSet rs; + boolean success; + + stat = conn.createStatement(); + prep = conn.prepareStatement("WITH v1 AS (" + +" SELECT R.X, 'X1' AS Y FROM SYSTEM_RANGE(1,3) R" + +")" + +"CREATE TABLE IF NOT EXISTS T1 AS SELECT v1.X AS ID, v1.X, v1.Y FROM v1"); + success = prep.execute(); + + assertEquals(false, success); + + rs = stat.executeQuery("SELECT ID, X,Y FROM T1"); + + for (int n : new int[] { 1, 2, 3 }) { + assertTrue(rs.next()); + assertTrue(rs.getInt(1) != 0); + assertEquals(n, rs.getInt(2)); + assertEquals("X1", rs.getString(3)); + } + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testNestedSQL() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + PreparedStatement prep; + ResultSet rs; + + prep = conn.prepareStatement( + "WITH T1 AS ( "+ + " SELECT * "+ + " FROM TABLE ( "+ + " K VARCHAR = ('a', 'b'), "+ + " V INTEGER = (1, 2) "+ + " ) "+ + "), "+ + " "+ + " "+ + "T2 AS ( "+ + " SELECT * "+ + " FROM TABLE ( "+ + " K VARCHAR = ('a', 'b'), "+ + " V INTEGER = (3, 4) "+ + " ) "+ + "), "+ + " "+ + " "+ + "JOIN_CTE AS ( "+ + " SELECT T1.* "+ + " "+ + " FROM "+ + " T1 "+ + " JOIN T2 ON ( "+ + " T1.K = T2.K "+ + " ) "+ + ") "+ + " "+ + "SELECT * FROM JOIN_CTE"); + + rs = prep.executeQuery(); + + for (String keyLetter : new String[] { "a", "b" }) { + assertTrue(rs.next()); 
+ assertContains("ab", rs.getString(1)); + assertEquals(rs.getString(1), keyLetter); + assertTrue(rs.getInt(2) != 0); + } + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testColumnNames() throws Exception { + deleteDb("commonTableExpressionQueries"); + Connection conn = getConnection("commonTableExpressionQueries"); + PreparedStatement prep; + ResultSet rs; + + conn.setAutoCommit(false); + + prep = conn.prepareStatement("WITH t1 AS (" + +" SELECT 1 AS ONE, R.X AS TWO, 'T1' AS THREE, X FROM SYSTEM_RANGE(1,1) R" + +")" + +"SELECT * FROM t1"); + rs = prep.executeQuery(); + + for (int n : new int[] { 1 }) { + assertTrue(rs.next()); + assertEquals(n, rs.getInt(1)); + assertEquals(n, rs.getInt(4)); + } + assertEquals("ONE", rs.getMetaData().getColumnLabel(1)); + assertEquals("TWO", rs.getMetaData().getColumnLabel(2)); + assertEquals("THREE", rs.getMetaData().getColumnLabel(3)); + assertEquals("X", rs.getMetaData().getColumnLabel(4)); + + assertFalse(rs.next()); + + conn.close(); + deleteDb("commonTableExpressionQueries"); + } + + private void testSimple4RowRecursiveQuery() throws Exception { + + String[] expectedRowData = new String[]{"|1", "|2", "|3"}; + String[] expectedColumnTypes = new String[]{"INTEGER"}; + String[] expectedColumnNames = new String[]{"N"}; + + String setupSQL = "-- do nothing"; + String withQuery = "with recursive r(n) as (\n"+ + "(select 1) union all (select n+1 from r where n < 3)\n"+ + ")\n"+ + "select n from r"; + + int maxRetries = 3; + int expectedNumberOfRows = expectedRowData.length; + + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, expectedColumnTypes); + + } + + private void testSimple2By4RowRecursiveQuery() throws Exception { + + String[] expectedRowData = new String[]{"|0|1|10", "|1|2|11", "|2|3|12", "|3|4|13"}; + String[] expectedColumnTypes = new String[]{"INTEGER", "INTEGER", "INTEGER"}; + String[] 
expectedColumnNames = new String[]{"K", "N", "N2"}; + + String setupSQL = "-- do nothing"; + String withQuery = "with \n"+ + "r1(n,k) as ((select 1, 0) union all (select n+1,k+1 from r1 where n <= 3)),"+ + "r2(n,k) as ((select 10,0) union all (select n+1,k+1 from r2 where n <= 13))"+ + "select r1.k, r1.n, r2.n AS n2 from r1 inner join r2 ON r1.k= r2.k "; + + int maxRetries = 3; + int expectedNumberOfRows = expectedRowData.length; + + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, expectedColumnTypes); + + } + + private void testSimple3RowRecursiveQueryWithLazyEval() throws Exception { + + String[] expectedRowData = new String[]{"|6"}; + String[] expectedColumnTypes = new String[]{"BIGINT"}; + String[] expectedColumnNames = new String[]{"SUM(N)"}; + + // back up the config - to restore it after this test + TestAll backupConfig = config; + config = new TestAll(); + + try { + // Test with settings: lazy mvStore memory mvcc multiThreaded + // connection url is + // mem:script;MV_STORE=true;LOG=1;LOCK_TIMEOUT=50;MVCC=TRUE; + // MULTI_THREADED=TRUE;LAZY_QUERY_EXECUTION=1 + config.lazy = true; + config.mvStore = true; + config.memory = true; + config.mvcc = true; + config.multiThreaded = true; + + String setupSQL = "--no config set"; + String withQuery = "select sum(n) from (\n" + +" with recursive r(n) as (\n" + +" (select 1) union all (select n+1 from r where n < 3) \n" + +" )\n" + +" select n from r \n" + +")\n"; + + int maxRetries = 10; + int expectedNumberOfRows = expectedRowData.length; + + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, + setupSQL, withQuery, maxRetries - 1, expectedColumnTypes); + } finally { + config = backupConfig; + } + } + + private void testSimple3RowRecursiveQueryDropAllObjects() throws Exception { + + String[] expectedRowData = new String[]{"|6"}; + String[] expectedColumnTypes = new 
String[]{"BIGINT"}; + String[] expectedColumnNames = new String[]{"SUM(N)"}; + + String setupSQL = "DROP ALL OBJECTS;"; + String withQuery = "select sum(n) from (" + +" with recursive r(n) as (" + +" (select 1) union all (select n+1 from r where n < 3)" + +" )," + +" dummyUnusedCte(n) as (" + +" select 1 " + +" )" + +" select n from r" + +")"; + + int maxRetries = 10; + int expectedNumberOfRows = expectedRowData.length; + + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, expectedColumnTypes); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestIndex.java b/modules/h2/src/test/java/org/h2/test/db/TestIndex.java new file mode 100644 index 0000000000000..cc785d9ee5f04 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestIndex.java @@ -0,0 +1,753 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Random; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.api.ErrorCode; +import org.h2.command.dml.Select; +import org.h2.result.SortOrder; +import org.h2.test.TestBase; +import org.h2.tools.SimpleResultSet; +import org.h2.value.ValueInt; + +/** + * Index tests. + */ +public class TestIndex extends TestBase { + + private static int testFunctionIndexCounter; + + private Connection conn; + private Statement stat; + private final Random random = new Random(); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("index"); + testOrderIndex(); + testIndexTypes(); + testHashIndexOnMemoryTable(); + testErrorMessage(); + testDuplicateKeyException(); + testConcurrentUpdate(); + testNonUniqueHashIndex(); + testRenamePrimaryKey(); + testRandomized(); + testDescIndex(); + testHashIndex(); + + if (config.networked && config.big) { + return; + } + + random.setSeed(100); + + deleteDb("index"); + testWideIndex(147); + testWideIndex(313); + testWideIndex(979); + testWideIndex(1200); + testWideIndex(2400); + if (config.big) { + Random r = new Random(); + for (int j = 0; j < 10; j++) { + int i = r.nextInt(3000); + if ((i % 100) == 0) { + println("width: " + i); + } + testWideIndex(i); + } + } + + testLike(); + reconnect(); + testConstraint(); + testLargeIndex(); + testMultiColumnIndex(); + // long time; + // time = System.nanoTime(); + testHashIndex(true, false); + + testHashIndex(false, false); + testHashIndex(true, true); + testHashIndex(false, true); + + testMultiColumnHashIndex(); + + testFunctionIndex(); + + conn.close(); + deleteDb("index"); + } + + private void testOrderIndex() throws SQLException { + Connection conn = getConnection("index"); + stat = conn.createStatement(); + stat.execute("create table test(id int, name varchar)"); + stat.execute("insert into test values (2, 'a'), (1, 'a')"); + stat.execute("create index on test(name)"); + ResultSet rs = stat.executeQuery( + "select id from test where name = 'a' order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + conn.close(); + deleteDb("index"); + } + + private void testIndexTypes() throws SQLException { + Connection conn = getConnection("index"); + stat = conn.createStatement(); + for (String type : new String[] { "unique", "hash", "unique hash" }) { + stat.execute("create table test(id 
int)"); + stat.execute("create " + type + " index idx_name on test(id)"); + stat.execute("insert into test select x from system_range(1, 1000)"); + ResultSet rs = stat.executeQuery("select * from test where id=100"); + assertTrue(rs.next()); + assertFalse(rs.next()); + stat.execute("delete from test where id=100"); + rs = stat.executeQuery("select * from test where id=100"); + assertFalse(rs.next()); + rs = stat.executeQuery("select min(id), max(id) from test"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals(1000, rs.getInt(2)); + stat.execute("drop table test"); + } + conn.close(); + deleteDb("index"); + } + + private void testErrorMessage() throws SQLException { + reconnect(); + stat.execute("create table test(id int primary key, name varchar)"); + testErrorMessage("PRIMARY", "KEY", " ON PUBLIC.TEST(ID)"); + stat.execute("create table test(id int, name varchar primary key)"); + testErrorMessage("PRIMARY_KEY_2 ON PUBLIC.TEST(NAME)"); + stat.execute("create table test(id int, name varchar, primary key(id, name))"); + testErrorMessage("PRIMARY_KEY_2 ON PUBLIC.TEST(ID, NAME)"); + stat.execute("create table test(id int, name varchar, primary key(name, id))"); + testErrorMessage("PRIMARY_KEY_2 ON PUBLIC.TEST(NAME, ID)"); + stat.execute("create table test(id int, name int primary key)"); + testErrorMessage("PRIMARY", "KEY", " ON PUBLIC.TEST(NAME)"); + stat.execute("create table test(id int, name int, unique(name))"); + testErrorMessage("CONSTRAINT_INDEX_2 ON PUBLIC.TEST(NAME)"); + stat.execute("create table test(id int, name int, " + + "constraint abc unique(name, id))"); + testErrorMessage("ABC_INDEX_2 ON PUBLIC.TEST(NAME, ID)"); + } + + private void testErrorMessage(String... 
expected) throws SQLException { + try { + stat.execute("INSERT INTO TEST VALUES(1, 1)"); + stat.execute("INSERT INTO TEST VALUES(1, 1)"); + fail(); + } catch (SQLException e) { + String m = e.getMessage(); + int start = m.indexOf('\"'), end = m.indexOf('\"', start + 1); + String s = m.substring(start + 1, end); + for (String t : expected) { + assertContains(s, t); + } + } + stat.execute("drop table test"); + } + + private void testDuplicateKeyException() throws SQLException { + reconnect(); + stat.execute("create table test(id int primary key, name varchar(255))"); + stat.execute("create unique index idx_test_name on test(name)"); + stat.execute("insert into TEST values(1, 'Hello')"); + try { + stat.execute("insert into TEST values(2, 'Hello')"); + fail(); + } catch (SQLException ex) { + assertEquals(ErrorCode.DUPLICATE_KEY_1, ex.getErrorCode()); + String m = ex.getMessage(); + // The format of the VALUES clause varies a little depending on the + // type of the index, so just test that we're getting useful info + // back. 
+ assertContains(m, "IDX_TEST_NAME ON PUBLIC.TEST(NAME)"); + assertContains(m, "'Hello'"); + } + stat.execute("drop table test"); + } + + private class ConcurrentUpdateThread extends Thread { + private final AtomicInteger concurrentUpdateId, concurrentUpdateValue; + + private final PreparedStatement psInsert, psDelete; + + boolean haveDuplicateKeyException; + + ConcurrentUpdateThread(Connection c, AtomicInteger concurrentUpdateId, + AtomicInteger concurrentUpdateValue) throws SQLException { + this.concurrentUpdateId = concurrentUpdateId; + this.concurrentUpdateValue = concurrentUpdateValue; + psInsert = c.prepareStatement("insert into test(id, value) values (?, ?)"); + psDelete = c.prepareStatement("delete from test where value = ?"); + } + + @Override + public void run() { + for (int i = 0; i < 10000; i++) { + try { + if (Math.random() > 0.05) { + psInsert.setInt(1, concurrentUpdateId.incrementAndGet()); + psInsert.setInt(2, concurrentUpdateValue.get()); + psInsert.executeUpdate(); + } else { + psDelete.setInt(1, concurrentUpdateValue.get()); + psDelete.executeUpdate(); + } + } catch (SQLException ex) { + switch (ex.getErrorCode()) { + case 23505: + haveDuplicateKeyException = true; + break; + case 90131: + // Unlikely but possible + break; + default: + ex.printStackTrace(); + } + } + if (Math.random() > 0.95) + concurrentUpdateValue.incrementAndGet(); + } + } + } + + private void testConcurrentUpdate() throws SQLException { + Connection c = getConnection("index"); + Statement stat = c.createStatement(); + stat.execute("create table test(id int primary key, value int)"); + stat.execute("create unique index idx_value_name on test(value)"); + PreparedStatement check = c.prepareStatement("select value from test"); + ConcurrentUpdateThread[] threads = new ConcurrentUpdateThread[4]; + AtomicInteger concurrentUpdateId = new AtomicInteger(), concurrentUpdateValue = new AtomicInteger(); + + // The same connection + for (int i = 0; i < threads.length; i++) { + threads[i] = 
new ConcurrentUpdateThread(c, concurrentUpdateId, concurrentUpdateValue);
+ }
+ testConcurrentUpdateRun(threads, check);
+ // Different connections
+ Connection[] connections = new Connection[threads.length];
+ for (int i = 0; i < threads.length; i++) {
+ Connection c2 = getConnection("index");
+ connections[i] = c2;
+ threads[i] = new ConcurrentUpdateThread(c2, concurrentUpdateId, concurrentUpdateValue);
+ }
+ testConcurrentUpdateRun(threads, check);
+ for (Connection c2 : connections) {
+ c2.close();
+ }
+ stat.execute("drop table test");
+ c.close();
+ }
+
+ private void testConcurrentUpdateRun(ConcurrentUpdateThread[] threads, PreparedStatement check)
+ throws SQLException {
+ for (ConcurrentUpdateThread t : threads) {
+ t.start();
+ }
+ boolean haveDuplicateKeyException = false;
+ for (ConcurrentUpdateThread t : threads) {
+ try {
+ t.join();
+ haveDuplicateKeyException |= t.haveDuplicateKeyException;
+ } catch (InterruptedException e) {
+ // ignore
+ }
+ }
+ assertTrue("haveDuplicateKeys", haveDuplicateKeyException);
+ HashSet<Integer> set = new HashSet<>();
+ try (ResultSet rs = check.executeQuery()) {
+ while (rs.next()) {
+ if (!set.add(rs.getInt(1))) {
+ fail("unique index violation");
+ }
+ }
+ }
+ }
+
+ private void testNonUniqueHashIndex() throws SQLException {
+ reconnect();
+ stat.execute("create memory table test(id bigint, data bigint)");
+ stat.execute("create hash index on test(id)");
+ Random rand = new Random(1);
+ PreparedStatement prepInsert = conn.prepareStatement(
+ "insert into test values(?, ?)");
+ PreparedStatement prepDelete = conn.prepareStatement(
+ "delete from test where id=?");
+ PreparedStatement prepSelect = conn.prepareStatement(
+ "select count(*) from test where id=?");
+ HashMap<Long, Integer> map = new HashMap<>();
+ for (int i = 0; i < 1000; i++) {
+ long key = rand.nextInt(10) * 1000000000L;
+ Integer r = map.get(key);
+ int result = r == null ?
0 : (int) r; + if (rand.nextBoolean()) { + prepSelect.setLong(1, key); + ResultSet rs = prepSelect.executeQuery(); + rs.next(); + assertEquals(result, rs.getInt(1)); + } else { + if (rand.nextBoolean()) { + prepInsert.setLong(1, key); + prepInsert.setInt(2, rand.nextInt()); + prepInsert.execute(); + map.put(key, result + 1); + } else { + prepDelete.setLong(1, key); + prepDelete.execute(); + map.put(key, 0); + } + } + } + stat.execute("drop table test"); + conn.close(); + } + + private void testRenamePrimaryKey() throws SQLException { + if (config.memory) { + return; + } + reconnect(); + stat.execute("create table test(id int not null)"); + stat.execute("alter table test add constraint x primary key(id)"); + ResultSet rs; + rs = conn.getMetaData().getIndexInfo(null, null, "TEST", true, false); + rs.next(); + String old = rs.getString("INDEX_NAME"); + stat.execute("alter index " + old + " rename to y"); + rs = conn.getMetaData().getIndexInfo(null, null, "TEST", true, false); + rs.next(); + assertEquals("Y", rs.getString("INDEX_NAME")); + reconnect(); + rs = conn.getMetaData().getIndexInfo(null, null, "TEST", true, false); + rs.next(); + assertEquals("Y", rs.getString("INDEX_NAME")); + stat.execute("drop table test"); + } + + private void testRandomized() throws SQLException { + boolean reopen = !config.memory; + Random rand = new Random(1); + reconnect(); + stat.execute("drop all objects"); + stat.execute("CREATE TABLE TEST(ID identity)"); + int len = getSize(100, 1000); + for (int i = 0; i < len; i++) { + switch (rand.nextInt(4)) { + case 0: + if (rand.nextInt(10) == 0) { + if (reopen) { + trace("reconnect"); + reconnect(); + } + } + break; + case 1: + trace("insert"); + stat.execute("insert into test(id) values(null)"); + break; + case 2: + trace("delete"); + stat.execute("delete from test"); + break; + case 3: + trace("insert 1-100"); + stat.execute("insert into test select null from system_range(1, 100)"); + break; + } + } + stat.execute("drop table test"); + } + 
+ private void testHashIndex() throws SQLException { + reconnect(); + stat.execute("create table testA(id int primary key, name varchar)"); + stat.execute("create table testB(id int primary key hash, name varchar)"); + int len = getSize(300, 3000); + stat.execute("insert into testA select x, 'Hello' from " + + "system_range(1, " + len + ")"); + stat.execute("insert into testB select x, 'Hello' from " + + "system_range(1, " + len + ")"); + Random rand = new Random(1); + for (int i = 0; i < len; i++) { + int x = rand.nextInt(len); + String sql = ""; + switch (rand.nextInt(3)) { + case 0: + sql = "delete from testA where id = " + x; + break; + case 1: + sql = "update testA set name = " + rand.nextInt(100) + " where id = " + x; + break; + case 2: + sql = "select name from testA where id = " + x; + break; + default: + } + boolean result = stat.execute(sql); + if (result) { + ResultSet rs = stat.getResultSet(); + String s1 = rs.next() ? rs.getString(1) : null; + rs = stat.executeQuery(sql.replace('A', 'B')); + String s2 = rs.next() ? 
rs.getString(1) : null; + assertEquals(s1, s2); + } else { + int count1 = stat.getUpdateCount(); + int count2 = stat.executeUpdate(sql.replace('A', 'B')); + assertEquals(count1, count2); + } + } + stat.execute("drop table testA, testB"); + conn.close(); + } + + private void reconnect() throws SQLException { + if (conn != null) { + conn.close(); + conn = null; + } + conn = getConnection("index"); + stat = conn.createStatement(); + } + + private void testDescIndex() throws SQLException { + if (config.memory) { + return; + } + ResultSet rs; + reconnect(); + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("CREATE INDEX IDX_ND ON TEST(ID DESC)"); + rs = conn.getMetaData().getIndexInfo(null, null, "TEST", false, false); + rs.next(); + assertEquals("D", rs.getString("ASC_OR_DESC")); + assertEquals(SortOrder.DESCENDING, rs.getInt("SORT_TYPE")); + stat.execute("INSERT INTO TEST SELECT X FROM SYSTEM_RANGE(1, 30)"); + rs = stat.executeQuery( + "SELECT COUNT(*) FROM TEST WHERE ID BETWEEN 10 AND 20"); + rs.next(); + assertEquals(11, rs.getInt(1)); + reconnect(); + rs = conn.getMetaData().getIndexInfo(null, null, "TEST", false, false); + rs.next(); + assertEquals("D", rs.getString("ASC_OR_DESC")); + assertEquals(SortOrder.DESCENDING, rs.getInt("SORT_TYPE")); + rs = stat.executeQuery( + "SELECT COUNT(*) FROM TEST WHERE ID BETWEEN 10 AND 20"); + rs.next(); + assertEquals(11, rs.getInt(1)); + stat.execute("DROP TABLE TEST"); + + stat.execute("create table test(x int, y int)"); + stat.execute("insert into test values(1, 1), (1, 2)"); + stat.execute("create index test_x_y on test (x desc, y desc)"); + rs = stat.executeQuery("select * from test where x=1 and y<2"); + assertTrue(rs.next()); + + conn.close(); + } + + private String getRandomString(int len) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < len; i++) { + buff.append((char) ('a' + random.nextInt(26))); + } + return buff.toString(); + } + + private void testWideIndex(int length) throws 
SQLException { + reconnect(); + stat.execute("drop all objects"); + stat.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR)"); + stat.execute("CREATE INDEX IDXNAME ON TEST(NAME)"); + for (int i = 0; i < 100; i++) { + stat.execute("INSERT INTO TEST VALUES(" + i + + ", SPACE(" + length + ") || " + i + " )"); + } + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY NAME"); + while (rs.next()) { + int id = rs.getInt("ID"); + String name = rs.getString("NAME"); + assertEquals("" + id, name.trim()); + } + if (!config.memory) { + reconnect(); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY NAME"); + while (rs.next()) { + int id = rs.getInt("ID"); + String name = rs.getString("NAME"); + assertEquals("" + id, name.trim()); + } + } + stat.execute("drop all objects"); + } + + private void testLike() throws SQLException { + reconnect(); + stat.execute("CREATE TABLE ABC(ID INT, NAME VARCHAR)"); + stat.execute("INSERT INTO ABC VALUES(1, 'Hello')"); + PreparedStatement prep = conn.prepareStatement( + "SELECT * FROM ABC WHERE NAME LIKE CAST(? 
AS VARCHAR)"); + prep.setString(1, "Hi%"); + prep.execute(); + stat.execute("DROP TABLE ABC"); + } + + private void testConstraint() throws SQLException { + if (config.memory) { + return; + } + stat.execute("CREATE TABLE PARENT(ID INT PRIMARY KEY)"); + stat.execute("CREATE TABLE CHILD(ID INT PRIMARY KEY, " + + "PID INT, FOREIGN KEY(PID) REFERENCES PARENT(ID))"); + reconnect(); + stat.execute("DROP TABLE PARENT"); + stat.execute("DROP TABLE CHILD"); + } + + private void testLargeIndex() throws SQLException { + random.setSeed(10); + for (int i = 1; i < 100; i += getSize(1000, 7)) { + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(NAME VARCHAR(" + i + "))"); + stat.execute("CREATE INDEX IDXNAME ON TEST(NAME)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?)"); + for (int j = 0; j < getSize(2, 5); j++) { + prep.setString(1, getRandomString(i)); + prep.execute(); + } + if (!config.memory) { + conn.close(); + conn = getConnection("index"); + stat = conn.createStatement(); + } + ResultSet rs = stat.executeQuery( + "SELECT COUNT(*) FROM TEST WHERE NAME > 'mdd'"); + rs.next(); + int count = rs.getInt(1); + trace(i + " count=" + count); + } + + stat.execute("DROP TABLE IF EXISTS TEST"); + } + + private void testHashIndex(boolean primaryKey, boolean hash) + throws SQLException { + if (config.memory) { + return; + } + + reconnect(); + + stat.execute("DROP TABLE IF EXISTS TEST"); + if (primaryKey) { + stat.execute("CREATE TABLE TEST(A INT PRIMARY KEY " + + (hash ? "HASH" : "") + ", B INT)"); + } else { + stat.execute("CREATE TABLE TEST(A INT, B INT)"); + stat.execute("CREATE UNIQUE " + (hash ? 
"HASH" : "") + " INDEX ON TEST(A)"); + } + PreparedStatement prep; + prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?)"); + int len = getSize(5, 1000); + for (int a = 0; a < len; a++) { + prep.setInt(1, a); + prep.setInt(2, a); + prep.execute(); + assertEquals(1, + getValue("SELECT COUNT(*) FROM TEST WHERE A=" + a)); + assertEquals(0, + getValue("SELECT COUNT(*) FROM TEST WHERE A=-1-" + a)); + } + + reconnect(); + + prep = conn.prepareStatement("DELETE FROM TEST WHERE A=?"); + for (int a = 0; a < len; a++) { + if (getValue("SELECT COUNT(*) FROM TEST WHERE A=" + a) != 1) { + assertEquals(1, + getValue("SELECT COUNT(*) FROM TEST WHERE A=" + a)); + } + prep.setInt(1, a); + assertEquals(1, prep.executeUpdate()); + } + assertEquals(0, getValue("SELECT COUNT(*) FROM TEST")); + } + + private void testMultiColumnIndex() throws SQLException { + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(A INT, B INT)"); + PreparedStatement prep; + prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?)"); + int len = getSize(3, 260); + for (int a = 0; a < len; a++) { + prep.setInt(1, a); + prep.setInt(2, a); + prep.execute(); + } + stat.execute("INSERT INTO TEST SELECT A, B FROM TEST"); + stat.execute("CREATE INDEX ON TEST(A, B)"); + prep = conn.prepareStatement("DELETE FROM TEST WHERE A=?"); + for (int a = 0; a < len; a++) { + log("SELECT * FROM TEST"); + assertEquals(2, + getValue("SELECT COUNT(*) FROM TEST WHERE A=" + (len - a - 1))); + assertEquals((len - a) * 2, getValue("SELECT COUNT(*) FROM TEST")); + prep.setInt(1, len - a - 1); + prep.execute(); + } + assertEquals(0, getValue("SELECT COUNT(*) FROM TEST")); + } + + private void testMultiColumnHashIndex() throws SQLException { + if (config.memory) { + return; + } + + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(A INT, B INT, DATA VARCHAR(255))"); + stat.execute("CREATE UNIQUE HASH INDEX IDX_AB ON TEST(A, B)"); + PreparedStatement prep; + prep = 
conn.prepareStatement("INSERT INTO TEST VALUES(?, ?, ?)"); + // speed is quadratic (len*len) + int len = getSize(2, 14); + for (int a = 0; a < len; a++) { + for (int b = 0; b < len; b += 2) { + prep.setInt(1, a); + prep.setInt(2, b); + prep.setString(3, "i(" + a + "," + b + ")"); + prep.execute(); + } + } + + reconnect(); + + prep = conn.prepareStatement( + "UPDATE TEST SET DATA=DATA||? WHERE A=? AND B=?"); + for (int a = 0; a < len; a++) { + for (int b = 0; b < len; b += 2) { + prep.setString(1, "u(" + a + "," + b + ")"); + prep.setInt(2, a); + prep.setInt(3, b); + prep.execute(); + } + } + + reconnect(); + + ResultSet rs = stat.executeQuery( + "SELECT * FROM TEST WHERE DATA <> 'i('||a||','||b||')u('||a||','||b||')'"); + assertFalse(rs.next()); + assertEquals(len * (len / 2), getValue("SELECT COUNT(*) FROM TEST")); + stat.execute("DROP TABLE TEST"); + } + + private void testHashIndexOnMemoryTable() throws SQLException { + reconnect(); + stat.execute("drop table if exists hash_index_test"); + stat.execute("create memory table hash_index_test as " + + "select x as id, x % 10 as data from (select * from system_range(1, 100))"); + stat.execute("create hash index idx2 on hash_index_test(data)"); + assertEquals(10, + getValue("select count(*) from hash_index_test where data = 1")); + + stat.execute("drop index idx2"); + stat.execute("create unique hash index idx2 on hash_index_test(id)"); + assertEquals(1, + getValue("select count(*) from hash_index_test where id = 1")); + } + + private int getValue(String sql) throws SQLException { + ResultSet rs = stat.executeQuery(sql); + rs.next(); + return rs.getInt(1); + } + + private void log(String sql) throws SQLException { + trace(sql); + ResultSet rs = stat.executeQuery(sql); + int cols = rs.getMetaData().getColumnCount(); + while (rs.next()) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < cols; i++) { + if (i > 0) { + buff.append(", "); + } + buff.append("[" + i + "]=" + rs.getString(i + 1)); + } + 
trace(buff.toString());
+        }
+        trace("---done---");
+    }
+
+    /**
+     * This method is called from the database.
+     *
+     * @return the result set
+     */
+    public static ResultSet testFunctionIndexFunction() {
+        // There are additional callers like JdbcConnection.prepareCommand() and
+        // CommandContainer.recompileIfRequired()
+        for (StackTraceElement element : Thread.currentThread().getStackTrace()) {
+            if (element.getClassName().startsWith(Select.class.getName())) {
+                testFunctionIndexCounter++;
+                break;
+            }
+        }
+        SimpleResultSet rs = new SimpleResultSet();
+        rs.addColumn("ID", Types.INTEGER, ValueInt.PRECISION, 0);
+        rs.addColumn("VALUE", Types.INTEGER, ValueInt.PRECISION, 0);
+        rs.addRow(1, 10);
+        rs.addRow(2, 20);
+        rs.addRow(3, 30);
+        return rs;
+    }
+
+    private void testFunctionIndex() throws SQLException {
+        testFunctionIndexCounter = 0;
+        stat.execute("CREATE ALIAS TEST_INDEX FOR \"" + TestIndex.class.getName() + ".testFunctionIndexFunction\"");
+        try (ResultSet rs = stat.executeQuery("SELECT * FROM TEST_INDEX() WHERE ID = 1 OR ID = 3")) {
+            assertTrue(rs.next());
+            assertEquals(1, rs.getInt(1));
+            assertEquals(10, rs.getInt(2));
+            assertTrue(rs.next());
+            assertEquals(3, rs.getInt(1));
+            assertEquals(30, rs.getInt(2));
+            assertFalse(rs.next());
+        } finally {
+            stat.execute("DROP ALIAS TEST_INDEX");
+        }
+        assertEquals(1, testFunctionIndexCounter);
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/db/TestIndexHints.java b/modules/h2/src/test/java/org/h2/test/db/TestIndexHints.java
new file mode 100644
index 0000000000000..4dc9350a55370
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/db/TestIndexHints.java
@@ -0,0 +1,135 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.db;
+
+import org.h2.api.ErrorCode;
+import org.h2.test.TestBase;
+
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+/**
+ * Tests the index hints feature of this database.
+ */
+public class TestIndexHints extends TestBase {
+
+    private Connection conn;
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws Exception {
+        deleteDb("indexhints");
+        createDb();
+        testQuotedIdentifier();
+        testWithSingleIndexName();
+        testWithEmptyIndexHintsList();
+        testWithInvalidIndexName();
+        testWithMultipleIndexNames();
+        testPlanSqlHasIndexesInCorrectOrder();
+        testWithTableAlias();
+        testWithTableAliasCalledUse();
+        conn.close();
+        deleteDb("indexhints");
+    }
+
+    private void createDb() throws SQLException {
+        conn = getConnection("indexhints");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test (x int, y int)");
+        stat.execute("create index idx1 on test (x)");
+        stat.execute("create index idx2 on test (x, y)");
+        stat.execute("create index \"Idx3\" on test (y, x)");
+    }
+
+    private void testQuotedIdentifier() throws SQLException {
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("explain analyze select * " +
+                "from test use index(\"Idx3\") where x=1 and y=1");
+        assertTrue(rs.next());
+        String plan = rs.getString(1);
+        rs.close();
+        assertTrue(plan.contains("/* PUBLIC.\"Idx3\":"));
+        assertTrue(plan.contains("USE INDEX (\"Idx3\")"));
+        rs = stat.executeQuery("EXPLAIN ANALYZE " + plan);
+        assertTrue(rs.next());
+        plan = rs.getString(1);
+        assertTrue(plan.contains("/* PUBLIC.\"Idx3\":"));
+        assertTrue(plan.contains("USE INDEX (\"Idx3\")"));
+    }
+
+    private void testWithSingleIndexName() throws SQLException {
+        Statement stat =
conn.createStatement();
+        ResultSet rs = stat.executeQuery("explain analyze select * " +
+                "from test use index(idx1) where x=1 and y=1");
+        rs.next();
+        String result = rs.getString(1);
+        assertTrue(result.contains("/* PUBLIC.IDX1:"));
+    }
+
+    private void testWithTableAlias() throws SQLException {
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("explain analyze select * " +
+                "from test t use index(idx2) where x=1 and y=1");
+        rs.next();
+        String result = rs.getString(1);
+        assertTrue(result.contains("/* PUBLIC.IDX2:"));
+    }
+
+    private void testWithTableAliasCalledUse() throws SQLException {
+        // make sure that while adding new syntax for table hints, code
+        // that uses "USE" as a table alias still works
+        Statement stat = conn.createStatement();
+        stat.executeQuery("explain analyze select * " +
+                "from test use where use.x=1 and use.y=1");
+    }
+
+    private void testWithMultipleIndexNames() throws SQLException {
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("explain analyze select * " +
+                "from test use index(idx1, idx2) where x=1 and y=1");
+        rs.next();
+        String result = rs.getString(1);
+        assertTrue(result.contains("/* PUBLIC.IDX2:"));
+    }
+
+    private void testPlanSqlHasIndexesInCorrectOrder() throws SQLException {
+        ResultSet rs = conn.createStatement().executeQuery("explain analyze select * " +
+                "from test use index(idx1, idx2) where x=1 and y=1");
+        rs.next();
+        assertTrue(rs.getString(1).contains("USE INDEX (IDX1, IDX2)"));
+
+        ResultSet rs2 = conn.createStatement().executeQuery("explain analyze select * " +
+                "from test use index(idx2, idx1) where x=1 and y=1");
+        rs2.next();
+        assertTrue(rs2.getString(1).contains("USE INDEX (IDX2, IDX1)"));
+    }
+
+    private void testWithEmptyIndexHintsList() throws SQLException {
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("explain analyze select * " +
+                "from test use index () where x=1 and y=1");
+        rs.next();
+        String result
= rs.getString(1);
+        assertTrue(result.contains("/* PUBLIC.TEST.tableScan"));
+    }
+
+    private void testWithInvalidIndexName() throws SQLException {
+        Statement stat = conn.createStatement();
+        assertThrows(ErrorCode.INDEX_NOT_FOUND_1, stat).executeQuery("explain analyze select * " +
+                "from test use index(idx_doesnt_exist) where x=1 and y=1");
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/db/TestLargeBlob.java b/modules/h2/src/test/java/org/h2/test/db/TestLargeBlob.java
new file mode 100644
index 0000000000000..7185a2dc16dad
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/db/TestLargeBlob.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.db;
+
+import java.io.InputStream;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import org.h2.test.TestBase;
+
+/**
+ * Test a BLOB larger than Integer.MAX_VALUE
+ */
+public class TestLargeBlob extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String...
a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws Exception {
+        if (!config.big || config.memory || config.mvcc || config.networked) {
+            return;
+        }
+
+        deleteDb("largeBlob");
+        String url = getURL("largeBlob;TRACE_LEVEL_FILE=0", true);
+        Connection conn = getConnection(url);
+        final long testLength = Integer.MAX_VALUE + 110L;
+        Statement stat = conn.createStatement();
+        stat.execute("set COMPRESS_LOB LZF");
+        stat.execute("create table test(x blob)");
+        PreparedStatement prep = conn.prepareStatement(
+                "insert into test values(?)");
+        prep.setBinaryStream(1, new InputStream() {
+            long remaining = testLength;
+            int p;
+            byte[] oneByte = { 0 };
+            @Override
+            public void close() {
+                // ignore
+            }
+            @Override
+            public int read(byte[] buff, int off, int len) {
+                len = (int) Math.min(remaining, len);
+                remaining -= len;
+                if (p++ % 5000 == 0) {
+                    println("" + remaining);
+                }
+                return len == 0 ? -1 : len;
+            }
+            @Override
+            public int read() {
+                return read(oneByte, 0, 1) < 0 ? -1 : oneByte[0];
+            }
+        }, -1);
+        prep.executeUpdate();
+        ResultSet rs = stat.executeQuery(
+                "select length(x) from test");
+        rs.next();
+        assertEquals(testLength, rs.getLong(1));
+        rs = stat.executeQuery("select x from test");
+        rs.next();
+        InputStream in = rs.getBinaryStream(1);
+        byte[] buff = new byte[4 * 1024];
+        long length = 0;
+        while (true) {
+            int len = in.read(buff);
+            if (len < 0) {
+                break;
+            }
+            length += len;
+        }
+        assertEquals(testLength, length);
+        conn.close();
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/db/TestLinkedTable.java b/modules/h2/src/test/java/org/h2/test/db/TestLinkedTable.java
new file mode 100644
index 0000000000000..d827567c67dd2
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/db/TestLinkedTable.java
@@ -0,0 +1,723 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.math.BigDecimal; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Timestamp; +import org.h2.api.ErrorCode; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.value.DataType; + +/** + * Tests the linked table feature (CREATE LINKED TABLE). + */ +public class TestLinkedTable extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testLinkedServerMode(); + testDefaultValues(); + testHiddenSQL(); + // testLinkAutoAdd(); + testNestedQueriesToSameTable(); + testSharedConnection(); + testMultipleSchemas(); + testReadOnlyLinkedTable(); + testLinkOtherSchema(); + testLinkDrop(); + testLinkSchema(); + testLinkEmitUpdates(); + testLinkTable(); + testLinkTwoTables(); + testCachingResults(); + testLinkedTableInReadOnlyDb(); + testGeometry(); + deleteDb("linkedTable"); + } + + private void testLinkedServerMode() throws SQLException { + if (config.memory) { + return; + } + // the network mode will result in a deadlock + if (config.networked) { + return; + } + deleteDb("linkedTable1"); + deleteDb("linkedTable2"); + String url2 = getURL("linkedTable2", true); + String user = getUser(), password = getPassword(); + Connection conn = getConnection("linkedTable2"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + conn.close(); + conn = getConnection("linkedTable1"); + stat = conn.createStatement(); + stat.execute("create linked table link(null, '"+url2+ + "', '"+user+"', '"+password+"', 'TEST')"); + conn.close(); + conn = getConnection("linkedTable1"); + conn.close(); + } + 
+ private void testDefaultValues() throws SQLException { + if (config.memory) { + return; + } + deleteDb("linkedTable"); + Connection connMain = DriverManager.getConnection("jdbc:h2:mem:linkedTable"); + Statement statMain = connMain.createStatement(); + statMain.execute("create table test(id identity, name varchar default 'test')"); + + Connection conn = getConnection("linkedTable"); + Statement stat = conn.createStatement(); + stat.execute("create linked table test1('', " + + "'jdbc:h2:mem:linkedTable', '', '', 'TEST') emit updates"); + stat.execute("create linked table test2('', " + + "'jdbc:h2:mem:linkedTable', '', '', 'TEST')"); + stat.execute("insert into test1 values(default, default)"); + stat.execute("insert into test2 values(default, default)"); + stat.execute("merge into test2 values(3, default)"); + stat.execute("update test1 set name=default where id=1"); + stat.execute("update test2 set name=default where id=2"); + + ResultSet rs = statMain.executeQuery("select * from test order by id"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("test", rs.getString(2)); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("test", rs.getString(2)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals("test", rs.getString(2)); + assertFalse(rs.next()); + + stat.execute("delete from test1 where id=1"); + stat.execute("delete from test2 where id=2"); + stat.execute("delete from test2 where id=3"); + conn.close(); + rs = statMain.executeQuery("select * from test order by id"); + assertFalse(rs.next()); + + connMain.close(); + } + + private void testHiddenSQL() throws SQLException { + if (config.memory) { + return; + } + org.h2.Driver.load(); + deleteDb("linkedTable"); + Connection conn = getConnection( + "linkedTable;SHARE_LINKED_CONNECTIONS=TRUE"); + try { + conn.createStatement().execute("create linked table test" + + "(null, 'jdbc:h2:mem:', 'sa', 'pwd', 'DUAL2')"); + fail(); + } catch (SQLException e) { + assertContains(e.toString(), 
"pwd"); + } + try { + conn.createStatement().execute("create linked table test" + + "(null, 'jdbc:h2:mem:', 'sa', 'pwd', 'DUAL2') --hide--"); + fail(); + } catch (SQLException e) { + assertTrue(e.toString().indexOf("pwd") < 0); + } + conn.close(); + } + + // this is not a bug, it is the documented behavior +// private void testLinkAutoAdd() throws SQLException { +// Class.forName("org.h2.Driver"); +// Connection ca = +// DriverManager.getConnection("jdbc:h2:mem:one", "sa", "sa"); +// Connection cb = +// DriverManager.getConnection("jdbc:h2:mem:two", "sa", "sa"); +// Statement sa = ca.createStatement(); +// Statement sb = cb.createStatement(); +// sa.execute("CREATE TABLE ONE (X NUMBER)"); +// sb.execute( +// "CALL LINK_SCHEMA('GOOD', '', " + +// "'jdbc:h2:mem:one', 'sa', 'sa', 'PUBLIC'); "); +// sb.executeQuery("SELECT * FROM GOOD.ONE"); +// sa.execute("CREATE TABLE TWO (X NUMBER)"); +// sb.executeQuery("SELECT * FROM GOOD.TWO"); // FAILED +// ca.close(); +// cb.close(); +// } + + private void testNestedQueriesToSameTable() throws SQLException { + if (config.memory) { + return; + } + org.h2.Driver.load(); + deleteDb("linkedTable"); + String url = getURL("linkedTable;SHARE_LINKED_CONNECTIONS=TRUE", true); + String user = getUser(); + String password = getPassword(); + Connection ca = getConnection(url, user, password); + Statement sa = ca.createStatement(); + sa.execute("CREATE TABLE TEST(ID INT) AS SELECT 1"); + ca.close(); + Connection cb = DriverManager.getConnection("jdbc:h2:mem:two", "sa", "sa"); + Statement sb = cb.createStatement(); + sb.execute("CREATE LINKED TABLE T1(NULL, '" + + url + "', '"+user+"', '"+password+"', 'TEST')"); + sb.executeQuery("SELECT * FROM DUAL A " + + "LEFT OUTER JOIN T1 A ON A.ID=1 LEFT OUTER JOIN T1 B ON B.ID=1"); + sb.execute("DROP ALL OBJECTS"); + cb.close(); + } + + private void testSharedConnection() throws SQLException { + if (config.memory) { + return; + } + org.h2.Driver.load(); + deleteDb("linkedTable"); + String url = 
getURL("linkedTable;SHARE_LINKED_CONNECTIONS=TRUE", true); + String user = getUser(); + String password = getPassword(); + Connection ca = getConnection(url, user, password); + Statement sa = ca.createStatement(); + sa.execute("CREATE TABLE TEST(ID INT)"); + ca.close(); + Connection cb = DriverManager.getConnection("jdbc:h2:mem:two", "sa", "sa"); + Statement sb = cb.createStatement(); + sb.execute("CREATE LINKED TABLE T1(NULL, '" + url + + ";OPEN_NEW=TRUE', '"+user+"', '"+password+"', 'TEST')"); + sb.execute("CREATE LINKED TABLE T2(NULL, '" + url + + ";OPEN_NEW=TRUE', '"+user+"', '"+password+"', 'TEST')"); + sb.execute("DROP ALL OBJECTS"); + cb.close(); + } + + private void testMultipleSchemas() throws SQLException { + org.h2.Driver.load(); + Connection ca = DriverManager.getConnection("jdbc:h2:mem:one", "sa", "sa"); + Connection cb = DriverManager.getConnection("jdbc:h2:mem:two", "sa", "sa"); + Statement sa = ca.createStatement(); + Statement sb = cb.createStatement(); + sa.execute("CREATE TABLE TEST(ID INT)"); + sa.execute("CREATE SCHEMA P"); + sa.execute("CREATE TABLE P.TEST(X INT)"); + sa.execute("INSERT INTO TEST VALUES(1)"); + sa.execute("INSERT INTO P.TEST VALUES(2)"); + assertThrows(ErrorCode.SCHEMA_NAME_MUST_MATCH, sb). + execute("CREATE LINKED TABLE T(NULL, " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'TEST')"); + sb.execute("CREATE LINKED TABLE T(NULL, " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'PUBLIC', 'TEST')"); + sb.execute("CREATE LINKED TABLE T2(NULL, " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'P', 'TEST')"); + assertSingleValue(sb, "SELECT * FROM T", 1); + assertSingleValue(sb, "SELECT * FROM T2", 2); + sa.execute("DROP ALL OBJECTS"); + sb.execute("DROP ALL OBJECTS"); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, sa). 
+ execute("SELECT * FROM TEST"); + ca.close(); + cb.close(); + } + + private void testReadOnlyLinkedTable() throws SQLException { + org.h2.Driver.load(); + Connection ca = DriverManager.getConnection("jdbc:h2:mem:one", "sa", "sa"); + Connection cb = DriverManager.getConnection("jdbc:h2:mem:two", "sa", "sa"); + Statement sa = ca.createStatement(); + Statement sb = cb.createStatement(); + sa.execute("CREATE TABLE TEST(ID INT)"); + sa.execute("INSERT INTO TEST VALUES(1)"); + String[] suffix = {"", "READONLY", "EMIT UPDATES"}; + for (int i = 0; i < suffix.length; i++) { + String sql = "CREATE LINKED TABLE T(NULL, " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'TEST')" + suffix[i]; + sb.execute(sql); + sb.executeQuery("SELECT * FROM T"); + String[] update = {"DELETE FROM T", + "INSERT INTO T VALUES(2)", "UPDATE T SET ID = 3"}; + for (String u : update) { + try { + sb.execute(u); + if (i == 1) { + fail(); + } + } catch (SQLException e) { + if (i == 1) { + assertKnownException(e); + } else { + throw e; + } + } + } + sb.execute("DROP TABLE T"); + } + ca.close(); + cb.close(); + } + + private static void testLinkOtherSchema() throws SQLException { + org.h2.Driver.load(); + Connection ca = DriverManager.getConnection("jdbc:h2:mem:one", "sa", "sa"); + Connection cb = DriverManager.getConnection("jdbc:h2:mem:two", "sa", "sa"); + Statement sa = ca.createStatement(); + Statement sb = cb.createStatement(); + sa.execute("CREATE TABLE GOOD (X NUMBER)"); + sa.execute("CREATE SCHEMA S"); + sa.execute("CREATE TABLE S.BAD (X NUMBER)"); + sb.execute("CALL LINK_SCHEMA('G', '', " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'PUBLIC'); "); + sb.execute("CALL LINK_SCHEMA('B', '', " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'S'); "); + // OK + sb.executeQuery("SELECT * FROM G.GOOD"); + // FAILED + sb.executeQuery("SELECT * FROM B.BAD"); + ca.close(); + cb.close(); + } + + private void testLinkTwoTables() throws SQLException { + org.h2.Driver.load(); + Connection conn = DriverManager.getConnection( + 
"jdbc:h2:mem:one", "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("CREATE SCHEMA Y"); + stat.execute("CREATE TABLE A( C INT)"); + stat.execute("INSERT INTO A VALUES(1)"); + stat.execute("CREATE TABLE Y.A (C INT)"); + stat.execute("INSERT INTO Y.A VALUES(2)"); + Connection conn2 = DriverManager.getConnection("jdbc:h2:mem:two"); + Statement stat2 = conn2.createStatement(); + stat2.execute("CREATE LINKED TABLE one('org.h2.Driver', " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'Y.A');"); + stat2.execute("CREATE LINKED TABLE two('org.h2.Driver', " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'PUBLIC.A');"); + ResultSet rs = stat2.executeQuery("SELECT * FROM one"); + rs.next(); + assertEquals(2, rs.getInt(1)); + rs = stat2.executeQuery("SELECT * FROM two"); + rs.next(); + assertEquals(1, rs.getInt(1)); + conn.close(); + conn2.close(); + } + + private static void testLinkDrop() throws SQLException { + org.h2.Driver.load(); + Connection connA = DriverManager.getConnection("jdbc:h2:mem:a"); + Statement statA = connA.createStatement(); + statA.execute("CREATE TABLE TEST(ID INT)"); + Connection connB = DriverManager.getConnection("jdbc:h2:mem:b"); + Statement statB = connB.createStatement(); + statB.execute("CREATE LINKED TABLE " + + "TEST_LINK('', 'jdbc:h2:mem:a', '', '', 'TEST')"); + connA.close(); + // the connection should be closed now + // (and the table should disappear because the last connection was + // closed) + statB.execute("DROP TABLE TEST_LINK"); + connA = DriverManager.getConnection("jdbc:h2:mem:a"); + statA = connA.createStatement(); + // table should not exist now + statA.execute("CREATE TABLE TEST(ID INT)"); + connA.close(); + connB.close(); + } + + private void testLinkEmitUpdates() throws SQLException { + if (config.memory || config.networked) { + return; + } + + deleteDb("linked1"); + deleteDb("linked2"); + org.h2.Driver.load(); + + String url1 = getURL("linked1", true); + String url2 = getURL("linked2", true); + + Connection conn = 
DriverManager.getConnection(url1, "sa1", "abc abc"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + + Connection conn2 = DriverManager.getConnection(url2, "sa2", "def def"); + Statement stat2 = conn2.createStatement(); + String link = "CREATE LINKED TABLE TEST_LINK_U('', '" + url1 + + "', 'sa1', 'abc abc', 'TEST') EMIT UPDATES"; + stat2.execute(link); + link = "CREATE LINKED TABLE TEST_LINK_DI('', '" + url1 + + "', 'sa1', 'abc abc', 'TEST')"; + stat2.execute(link); + stat2.executeUpdate("INSERT INTO TEST_LINK_U VALUES(1, 'Hello')"); + stat2.executeUpdate("INSERT INTO TEST_LINK_DI VALUES(2, 'World')"); + assertThrows(ErrorCode.ERROR_ACCESSING_LINKED_TABLE_2, stat2). + executeUpdate("UPDATE TEST_LINK_U SET ID=ID+1"); + stat2.executeUpdate("UPDATE TEST_LINK_DI SET ID=ID+1"); + stat2.executeUpdate("UPDATE TEST_LINK_U SET NAME=NAME || ID"); + ResultSet rs; + + rs = stat2.executeQuery("SELECT * FROM TEST_LINK_DI ORDER BY ID"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("Hello2", rs.getString(2)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals("World3", rs.getString(2)); + assertFalse(rs.next()); + + rs = stat2.executeQuery("SELECT * FROM TEST_LINK_U ORDER BY ID"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("Hello2", rs.getString(2)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals("World3", rs.getString(2)); + assertFalse(rs.next()); + + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("Hello2", rs.getString(2)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals("World3", rs.getString(2)); + assertFalse(rs.next()); + + conn.close(); + conn2.close(); + } + + private void testLinkSchema() throws SQLException { + if (config.memory || config.networked) { + return; + } + + deleteDb("linked1"); + deleteDb("linked2"); + org.h2.Driver.load(); + String url1 = 
getURL("linked1", true); + String url2 = getURL("linked2", true); + + Connection conn = DriverManager.getConnection(url1, "sa1", "abc abc"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST1(ID INT PRIMARY KEY)"); + + Connection conn2 = DriverManager.getConnection(url2, "sa2", "def def"); + Statement stat2 = conn2.createStatement(); + String link = "CALL LINK_SCHEMA('LINKED', '', '" + url1 + + "', 'sa1', 'abc abc', 'PUBLIC')"; + stat2.execute(link); + stat2.executeQuery("SELECT * FROM LINKED.TEST1"); + + stat.execute("CREATE TABLE TEST2(ID INT PRIMARY KEY)"); + stat2.execute(link); + stat2.executeQuery("SELECT * FROM LINKED.TEST1"); + stat2.executeQuery("SELECT * FROM LINKED.TEST2"); + + conn.close(); + conn2.close(); + } + + private void testLinkTable() throws SQLException { + if (config.memory || config.networked || config.reopen) { + return; + } + + deleteDb("linked1"); + deleteDb("linked2"); + org.h2.Driver.load(); + + String url1 = getURL("linked1", true); + String url2 = getURL("linked2", true); + + Connection conn = DriverManager.getConnection(url1, "sa1", "abc abc"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TEMP TABLE TEST_TEMP(ID INT PRIMARY KEY)"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "NAME VARCHAR(200), XT TINYINT, XD DECIMAL(10,2), " + + "XTS TIMESTAMP, XBY BINARY(255), XBO BIT, XSM SMALLINT, " + + "XBI BIGINT, XBL BLOB, XDA DATE, XTI TIME, XCL CLOB, XDO DOUBLE)"); + stat.execute("CREATE INDEX IDXNAME ON TEST(NAME)"); + stat.execute("INSERT INTO TEST VALUES(0, NULL, NULL, NULL, NULL, " + + "NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello', -1, 10.30, " + + "'2001-02-03 11:22:33.4455', X'FF0102', TRUE, 3000, " + + "1234567890123456789, X'1122AA', DATE '0002-01-01', " + + "TIME '00:00:00', 'J\u00fcrg', 2.25)"); + testRow(stat, "TEST"); + stat.execute("INSERT INTO TEST VALUES(2, 'World', 30, 100.05, " + + "'2005-12-31 12:34:56.789', X'FFEECC33', FALSE, 1, " + + "-1234567890123456789, X'4455FF', DATE '9999-12-31', " + + "TIME '23:59:59', 'George', -2.5)"); + testRow(stat, "TEST"); + stat.execute("SELECT * FROM TEST_TEMP"); + conn.close(); + + conn = DriverManager.getConnection(url1, "sa1", "abc abc"); + stat = conn.createStatement(); + testRow(stat, "TEST"); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat). + execute("SELECT * FROM TEST_TEMP"); + conn.close(); + + conn = DriverManager.getConnection(url2, "sa2", "def def"); + stat = conn.createStatement(); + stat.execute("CREATE LINKED TABLE IF NOT EXISTS " + + "LINK_TEST('org.h2.Driver', '" + url1 + + "', 'sa1', 'abc abc', 'TEST')"); + stat.execute("CREATE LINKED TABLE IF NOT EXISTS " + + "LINK_TEST('org.h2.Driver', '" + url1 + + "', 'sa1', 'abc abc', 'TEST')"); + testRow(stat, "LINK_TEST"); + ResultSet rs = stat.executeQuery("SELECT * FROM LINK_TEST"); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(10, meta.getPrecision(1)); + assertEquals(200, meta.getPrecision(2)); + + conn.close(); + conn = DriverManager.getConnection(url2, "sa2", "def def"); + stat = conn.createStatement(); + + stat.execute("INSERT INTO LINK_TEST VALUES(3, 'Link Test', " + + "30, 100.05, '2005-12-31 12:34:56.789', X'FFEECC33', " + + "FALSE, 1, -1234567890123456789, X'4455FF', " + + "DATE '9999-12-31', TIME '23:59:59', 'George', -2.5)"); + + rs = stat.executeQuery("SELECT COUNT(*) FROM LINK_TEST"); + rs.next(); + assertEquals(4, rs.getInt(1)); + + rs = stat.executeQuery("SELECT COUNT(*) FROM LINK_TEST WHERE NAME='Link Test'"); + rs.next(); + assertEquals(1, rs.getInt(1)); + + int uc = stat.executeUpdate("DELETE FROM LINK_TEST WHERE ID=3"); + assertEquals(1, uc); + + rs = stat.executeQuery("SELECT COUNT(*) FROM LINK_TEST"); + rs.next(); + assertEquals(3, rs.getInt(1)); + + rs = stat.executeQuery("SELECT * FROM " + + "INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='LINK_TEST'"); + rs.next(); + assertEquals("TABLE LINK",
rs.getString("TABLE_TYPE")); + + rs.next(); + rs = stat.executeQuery("SELECT * FROM LINK_TEST WHERE ID=0"); + rs.next(); + assertTrue(rs.getString("NAME") == null && rs.wasNull()); + assertTrue(rs.getString("XT") == null && rs.wasNull()); + assertTrue(rs.getInt("ID") == 0 && !rs.wasNull()); + assertTrue(rs.getBigDecimal("XD") == null && rs.wasNull()); + assertTrue(rs.getTimestamp("XTS") == null && rs.wasNull()); + assertTrue(rs.getBytes("XBY") == null && rs.wasNull()); + assertTrue(!rs.getBoolean("XBO") && rs.wasNull()); + assertTrue(rs.getShort("XSM") == 0 && rs.wasNull()); + assertTrue(rs.getLong("XBI") == 0 && rs.wasNull()); + assertTrue(rs.getString("XBL") == null && rs.wasNull()); + assertTrue(rs.getString("XDA") == null && rs.wasNull()); + assertTrue(rs.getString("XTI") == null && rs.wasNull()); + assertTrue(rs.getString("XCL") == null && rs.wasNull()); + assertTrue(rs.getString("XDO") == null && rs.wasNull()); + assertFalse(rs.next()); + + stat.execute("DROP TABLE LINK_TEST"); + + stat.execute("CREATE LINKED TABLE LINK_TEST('org.h2.Driver', '" + url1 + + "', 'sa1', 'abc abc', '(SELECT COUNT(*) FROM TEST)')"); + rs = stat.executeQuery("SELECT * FROM LINK_TEST"); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + conn.close(); + + deleteDb("linked1"); + deleteDb("linked2"); + } + + private void testRow(Statement stat, String name) throws SQLException { + ResultSet rs = stat.executeQuery("SELECT * FROM " + name + " WHERE ID=1"); + rs.next(); + assertEquals("Hello", rs.getString("NAME")); + assertEquals(-1, rs.getByte("XT")); + BigDecimal bd = rs.getBigDecimal("XD"); + assertTrue(bd.equals(new BigDecimal("10.30"))); + Timestamp ts = rs.getTimestamp("XTS"); + String s = ts.toString(); + assertEquals("2001-02-03 11:22:33.4455", s); + assertTrue(ts.equals(Timestamp.valueOf("2001-02-03 11:22:33.4455"))); + assertEquals(new byte[] { (byte) 255, (byte) 1, (byte) 2 }, rs.getBytes("XBY")); + assertTrue(rs.getBoolean("XBO")); + assertEquals(3000, 
rs.getShort("XSM")); + assertEquals(1234567890123456789L, rs.getLong("XBI")); + assertEquals("1122aa", rs.getString("XBL")); + assertEquals("0002-01-01", rs.getString("XDA")); + assertEquals("00:00:00", rs.getString("XTI")); + assertEquals("J\u00fcrg", rs.getString("XCL")); + assertEquals("2.25", rs.getString("XDO")); + } + + private void testCachingResults() throws SQLException { + org.h2.Driver.load(); + Connection ca = DriverManager.getConnection( + "jdbc:h2:mem:one", "sa", "sa"); + Connection cb = DriverManager.getConnection( + "jdbc:h2:mem:two", "sa", "sa"); + + Statement sa = ca.createStatement(); + Statement sb = cb.createStatement(); + sa.execute("CREATE TABLE TEST(ID VARCHAR)"); + sa.execute("INSERT INTO TEST (ID) VALUES('abc')"); + sb.execute("CREATE LOCAL TEMPORARY LINKED TABLE T" + + "(NULL, 'jdbc:h2:mem:one', 'sa', 'sa', 'TEST')"); + + PreparedStatement paData = ca.prepareStatement( + "select id from TEST where id = ?"); + PreparedStatement pbData = cb.prepareStatement( + "select id from T where id = ?"); + PreparedStatement paCount = ca.prepareStatement( + "select count(*) from TEST"); + PreparedStatement pbCount = cb.prepareStatement( + "select count(*) from T"); + + // Direct query => Result 1 + testCachingResultsCheckResult(paData, 1, "abc"); + testCachingResultsCheckResult(paCount, 1); + + // Via linked table => Result 1 + testCachingResultsCheckResult(pbData, 1, "abc"); + testCachingResultsCheckResult(pbCount, 1); + + sa.execute("INSERT INTO TEST (ID) VALUES('abc')"); + + // Direct query => Result 2 + testCachingResultsCheckResult(paData, 2, "abc"); + testCachingResultsCheckResult(paCount, 2); + + // Via linked table => Result must be 2 + testCachingResultsCheckResult(pbData, 2, "abc"); + testCachingResultsCheckResult(pbCount, 2); + + ca.close(); + cb.close(); + } + + private void testCachingResultsCheckResult(PreparedStatement ps, + int expected) throws SQLException { + ResultSet rs = ps.executeQuery(); + rs.next(); + assertEquals(expected, 
rs.getInt(1)); + } + + private void testCachingResultsCheckResult(PreparedStatement ps, + int expected, String value) throws SQLException { + ps.setString(1, value); + ResultSet rs = ps.executeQuery(); + int counter = 0; + while (rs.next()) { + counter++; + String result = rs.getString(1); + assertEquals(result, value); + } + assertEquals(expected, counter); + } + + private void testLinkedTableInReadOnlyDb() throws SQLException { + if (config.memory || config.networked || config.googleAppEngine) { + return; + } + + deleteDb("testLinkedTableInReadOnlyDb"); + org.h2.Driver.load(); + + Connection memConn = DriverManager.getConnection( + "jdbc:h2:mem:one", "sa", "sa"); + Statement memStat = memConn.createStatement(); + memStat.execute("CREATE TABLE TEST(ID VARCHAR)"); + + String url1 = getURL("testLinkedTableInReadOnlyDb", true); + Connection conn = DriverManager.getConnection(url1, "sa1", "abc abc"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)"); + conn.close(); + + for (String file : FileUtils.newDirectoryStream(getBaseDir())) { + String name = FileUtils.getName(file); + if ((name.startsWith("testLinkedTableInReadOnlyDb")) && + (!name.endsWith(".trace.db"))) { + FileUtils.setReadOnly(file); + boolean isReadOnly = !FileUtils.canWrite(file); + if (!isReadOnly) { + fail("File " + file + " is not read only. Can't test it."); + } + } + } + + // Now it's read only + conn = DriverManager.getConnection(url1, "sa1", "abc abc"); + stat = conn.createStatement(); + stat.execute("CREATE LOCAL TEMPORARY LINKED TABLE T" + + "(NULL, 'jdbc:h2:mem:one', 'sa', 'sa', 'TEST')"); + // This is valid because it's a linked table + stat.execute("INSERT INTO T VALUES('abc')"); + + conn.close(); + memConn.close(); + + deleteDb("testLinkedTableInReadOnlyDb"); + } + + private void testGeometry() throws SQLException { + if (!config.mvStore && config.mvcc) { + return; + } + if (config.memory && config.mvcc) { + return; + } + if (DataType.GEOMETRY_CLASS == null) { + return; + } + org.h2.Driver.load(); + Connection ca = DriverManager.getConnection("jdbc:h2:mem:one", "sa", "sa"); + Connection cb = DriverManager.getConnection("jdbc:h2:mem:two", "sa", "sa"); + Statement sa = ca.createStatement(); + Statement sb = cb.createStatement(); + sa.execute("CREATE TABLE TEST(ID SERIAL, the_geom geometry)"); + sa.execute("INSERT INTO TEST(THE_GEOM) VALUES('POINT (1 1)')"); + String sql = "CREATE LINKED TABLE T(NULL, " + + "'jdbc:h2:mem:one', 'sa', 'sa', 'TEST') READONLY"; + sb.execute(sql); + try (ResultSet rs = sb.executeQuery("SELECT * FROM T")) { + assertTrue(rs.next()); + assertEquals("POINT (1 1)", rs.getString("THE_GEOM")); + } + sb.execute("DROP TABLE T"); + ca.close(); + cb.close(); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestListener.java b/modules/h2/src/test/java/org/h2/test/db/TestListener.java new file mode 100644 index 0000000000000..96aa7550a8f80 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestListener.java @@ -0,0 +1,145 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.api.DatabaseEventListener; +import org.h2.test.TestBase; + +/** + * Tests the DatabaseEventListener. + */ +public class TestListener extends TestBase implements DatabaseEventListener { + + private long last; + private int lastState = -1; + private String databaseUrl; + + public TestListener() { + start = last = System.nanoTime(); + } + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.networked || config.cipher != null) { + return; + } + deleteDb("listener"); + Connection conn; + conn = getConnection("listener"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, 'Test' || SPACE(100))"); + int len = getSize(100, 100000); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.execute(); + } + crash(conn); + + conn = getConnection("listener;database_event_listener='" + + getClass().getName() + "'"); + conn.close(); + deleteDb("listener"); + } + + @Override + public void exceptionThrown(SQLException e, String sql) { + TestBase.logError("exceptionThrown sql=" + sql, e); + } + + @Override + public void setProgress(int state, String name, int current, int max) { + long time = System.nanoTime(); + if (state == lastState && time < last + TimeUnit.SECONDS.toNanos(1)) { + return; + } + if (state == STATE_STATEMENT_START || + state == STATE_STATEMENT_END || + state == STATE_STATEMENT_PROGRESS) { + return; + } + if (name.length() > 30) { + name = "..." 
+ name.substring(name.length() - 30); + } + last = time; + lastState = state; + String stateName; + switch (state) { + case STATE_SCAN_FILE: + stateName = "Scan " + name; + break; + case STATE_CREATE_INDEX: + stateName = "Create Index " + name; + break; + case STATE_RECOVER: + stateName = "Recover"; + break; + default: + TestBase.logError("unknown state: " + state, null); + stateName = "? " + name; + } + try { + Thread.sleep(1); + } catch (InterruptedException e) { + // ignore + } + printTime("state: " + stateName + " " + + (100 * current / max) + " " + TimeUnit.NANOSECONDS.toMillis(time - start)); + } + + @Override + public void closingDatabase() { + if (databaseUrl.toUpperCase().contains("CIPHER")) { + return; + } + + try (Connection conn = DriverManager.getConnection(databaseUrl, + getUser(), getPassword())) { + conn.createStatement().execute("DROP TABLE TEST2"); + conn.close(); + } catch (SQLException e) { + e.printStackTrace(); + } + } + + @Override + public void init(String url) { + this.databaseUrl = url; + } + + @Override + public void opened() { + if (databaseUrl.toUpperCase().contains("CIPHER")) { + return; + } + + try (Connection conn = DriverManager.getConnection(databaseUrl, + getUser(), getPassword())) { + conn.createStatement().execute("CREATE TABLE IF NOT EXISTS TEST2(ID INT)"); + conn.close(); + } catch (SQLException e) { + e.printStackTrace(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestLob.java b/modules/h2/src/test/java/org/h2/test/db/TestLob.java new file mode 100644 index 0000000000000..5670f296f03b9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestLob.java @@ -0,0 +1,1675 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.ByteArrayInputStream; +import java.io.CharArrayReader; +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.Reader; +import java.io.StringReader; +import java.nio.charset.StandardCharsets; +import java.sql.Blob; +import java.sql.Clob; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Savepoint; +import java.sql.Statement; +import java.util.Random; +import java.util.concurrent.TimeUnit; +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.jdbc.JdbcConnection; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Recover; +import org.h2.util.IOUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.StringUtils; +import org.h2.util.Task; + +/** + * Tests LOB and CLOB data types. + */ +public class TestLob extends TestBase { + + private static final String MORE_THAN_128_CHARS = + "12345678901234567890123456789012345678901234567890" + + "12345678901234567890123456789012345678901234567890" + + "12345678901234567890123456789"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.big = true; + test.test(); + } + + @Override + public void test() throws Exception { + testRemoveAfterDeleteAndClose(); + testRemovedAfterTimeout(); + testConcurrentRemoveRead(); + testCloseLobTwice(); + testCleaningUpLobsOnRollback(); + testClobWithRandomUnicodeChars(); + testCommitOnExclusiveConnection(); + testReadManyLobs(); + testLobSkip(); + testLobSkipPastEnd(); + testCreateIndexOnLob(); + testBlobInputStreamSeek(true); + testBlobInputStreamSeek(false); + testDeadlock(); + testDeadlock2(); + testCopyManyLobs(); + testCopyLob(); + testConcurrentCreate(); + testLobInLargeResult(); + testUniqueIndex(); + testConvert(); + testCreateAsSelect(); + testDelete(); + testLobServerMemory(); + testUpdatingLobRow(); + testBufferedInputStreamBug(); + if (config.memory) { + return; + } + testLargeClob(); + testLobCleanupSessionTemporaries(); + testLobUpdateMany(); + testLobVariable(); + testLobDrop(); + testLobNoClose(); + testLobTransactions(10); + testLobTransactions(10000); + testLobRollbackStop(); + testLobCopy(); + testLobHibernate(); + testLobCopy(false); + testLobCopy(true); + testLobCompression(false); + testLobCompression(true); + testManyLobs(); + testClob(); + testUpdateLob(); + testLobReconnect(); + testLob(false); + testLob(true); + testJavaObject(); + deleteDb("lob"); + } + + private void testRemoveAfterDeleteAndClose() throws Exception { + if (config.memory || config.cipher != null) { + return; + } + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data clob)"); + for (int i = 0; i < 10; i++) { + stat.execute("insert into test values(1, space(100000))"); + if (i > 5) { + ResultSet rs = stat.executeQuery("select * from test"); + rs.next(); + Clob c = rs.getClob(2); + stat.execute("delete from test where id = 1"); + c.getSubString(1, 10); + } else { + 
stat.execute("delete from test where id = 1"); + } + } + // some clobs are removed only here (those that were queried for) + conn.close(); + Recover.execute(getBaseDir(), "lob"); + long size = FileUtils.size(getBaseDir() + "/lob.h2.sql"); + assertTrue("size: " + size, size > 1000 && size < 10000); + } + + private void testLargeClob() throws Exception { + deleteDb("lob"); + Connection conn; + conn = reconnect(null); + conn.createStatement().execute( + "CREATE TABLE TEST(ID IDENTITY, C CLOB)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(C) VALUES(?)"); + int len = SysProperties.LOB_CLIENT_MAX_SIZE_MEMORY + 1; + prep.setCharacterStream(1, getRandomReader(len, 2), -1); + prep.execute(); + conn = reconnect(conn); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEqualReaders(getRandomReader(len, 2), + rs.getCharacterStream("C"), -1); + assertFalse(rs.next()); + conn.close(); + } + + private void testRemovedAfterTimeout() throws Exception { + if (config.lazy) { + return; + } + deleteDb("lob"); + final String url = getURL("lob;lob_timeout=50", true); + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data clob)"); + PreparedStatement prep = conn.prepareStatement("insert into test values(?, ?)"); + prep.setInt(1, 1); + prep.setString(2, "aaa" + new String(new char[1024 * 16]).replace((char) 0, 'x')); + prep.execute(); + prep.setInt(1, 2); + prep.setString(2, "bbb" + new String(new char[1024 * 16]).replace((char) 0, 'x')); + prep.execute(); + ResultSet rs = stat.executeQuery("select * from test order by id"); + rs.next(); + Clob c1 = rs.getClob(2); + assertEquals("aaa", c1.getSubString(1, 3)); + rs.next(); + assertEquals("aaa", c1.getSubString(1, 3)); + rs.close(); + assertEquals("aaa", c1.getSubString(1, 3)); + stat.execute("delete from test"); + c1.getSubString(1, 3); + // wait until it times out +
Thread.sleep(100); + // start a new transaction, to be sure + stat.execute("delete from test"); + assertThrows(SQLException.class, c1).getSubString(1, 3); + conn.close(); + } + + private void testConcurrentRemoveRead() throws Exception { + if (config.lazy) { + return; + } + deleteDb("lob"); + final String url = getURL("lob", true); + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("set max_length_inplace_lob 5"); + stat.execute("create table lob(data clob)"); + stat.execute("insert into lob values(space(100))"); + Connection conn2 = getConnection(url); + Statement stat2 = conn2.createStatement(); + ResultSet rs = stat2.executeQuery("select data from lob"); + rs.next(); + stat.execute("delete lob"); + InputStream in = rs.getBinaryStream(1); + in.read(); + conn2.close(); + conn.close(); + } + + private void testCloseLobTwice() throws SQLException { + deleteDb("lob"); + Connection conn = getConnection("lob"); + PreparedStatement prep = conn.prepareStatement("set @c = ?"); + prep.setCharacterStream(1, new StringReader( + new String(new char[10000])), 10000); + prep.execute(); + prep.setCharacterStream(1, new StringReader( + new String(new char[10001])), 10001); + prep.execute(); + conn.setAutoCommit(true); + conn.close(); + } + + private void testCleaningUpLobsOnRollback() throws Exception { + if (config.mvStore) { + return; + } + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE test(id int, data CLOB)"); + conn.setAutoCommit(false); + stat.executeUpdate("insert into test values (1, '" + + MORE_THAN_128_CHARS + "')"); + conn.rollback(); + ResultSet rs = stat.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(0, rs.getInt(1)); + rs = stat.executeQuery("select * from information_schema.lobs"); + rs = stat.executeQuery("select count(*) from information_schema.lob_data"); + rs.next(); + assertEquals(0, rs.getInt(1)); + 
conn.close(); + } + + private void testReadManyLobs() throws Exception { + deleteDb("lob"); + Connection conn; + conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity, data clob)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(null, ?)"); + byte[] data = new byte[256]; + Random r = new Random(1); + for (int i = 0; i < 1000; i++) { + r.nextBytes(data); + prep.setBinaryStream(1, new ByteArrayInputStream(data), -1); + prep.execute(); + } + ResultSet rs = stat.executeQuery("select * from test"); + while (rs.next()) { + rs.getString(2); + } + conn.close(); + } + + private void testLobSkip() throws Exception { + deleteDb("lob"); + Connection conn; + conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.executeUpdate("create table test(x blob) as select secure_rand(1000)"); + ResultSet rs = stat.executeQuery("select * from test"); + rs.next(); + Blob b = rs.getBlob(1); + byte[] test = b.getBytes(5 + 1, 1000 - 5); + assertEquals(1000 - 5, test.length); + stat.execute("drop table test"); + conn.close(); + } + + private void testLobSkipPastEnd() throws Exception { + if (config.memory) { + return; + } + deleteDb("lob"); + Connection conn; + conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, data blob)"); + byte[] data = new byte[150000]; + new Random(0).nextBytes(data); + PreparedStatement prep = conn.prepareStatement("insert into test values(1, ?)"); + prep.setBytes(1, data); + prep.execute(); + ResultSet rs = stat.executeQuery("select data from test"); + rs.next(); + for (int blockSize = 1; blockSize < 100000; blockSize *= 10) { + for (int i = 0; i < data.length; i += 1000) { + InputStream in = rs.getBinaryStream(1); + in.skip(i); + byte[] d2 = new byte[data.length]; + int l = Math.min(blockSize, d2.length - i); + l = in.read(d2, i, l); + if (i >= data.length) { + assertEquals(-1, l); + } else 
if (i + blockSize >= data.length) { + assertEquals(data.length - i, l); + } + for (int j = i; j < blockSize && j < d2.length; j++) { + assertEquals(data[j], d2[j]); + } + } + } + stat.execute("drop table test"); + conn.close(); + } + + private void testCreateIndexOnLob() throws Exception { + if (config.memory) { + return; + } + deleteDb("lob"); + Connection conn; + conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, name clob)"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat). + execute("create index idx_n on test(name)"); + stat.execute("drop table test"); + conn.close(); + } + + private void testBlobInputStreamSeek(boolean upgraded) throws Exception { + deleteDb("lob"); + Connection conn; + conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data blob)"); + PreparedStatement prep; + Random random = new Random(); + byte[] buff = new byte[500000]; + for (int i = 0; i < 10; i++) { + prep = conn.prepareStatement("insert into test values(?, ?)"); + prep.setInt(1, i); + random.setSeed(i); + random.nextBytes(buff); + prep.setBinaryStream(2, new ByteArrayInputStream(buff), -1); + prep.execute(); + } + if (upgraded) { + if (!config.mvStore) { + if (config.memory) { + stat.execute("update information_schema.lob_map set pos=null"); + } else { + stat.execute("alter table information_schema.lob_map drop column pos"); + conn.close(); + conn = getConnection("lob"); + } + } + } + prep = conn.prepareStatement("select * from test where id = ?"); + for (int i = 0; i < 1; i++) { + random.setSeed(i); + random.nextBytes(buff); + for (int j = 0; j < buff.length; j += 10000) { + prep.setInt(1, i); + ResultSet rs = prep.executeQuery(); + rs.next(); + InputStream in = rs.getBinaryStream(2); + in.skip(j); + int t = in.read(); + assertEquals(t, buff[j] & 0xff); + } + } + conn.close(); + } + + /** + * Test for issue 315: Java Level Deadlock on 
Database & Session Objects + */ + private void testDeadlock() throws Exception { + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name clob)"); + stat.execute("insert into test select x, space(10000) from system_range(1, 3)"); + final Connection conn2 = getConnection("lob"); + Task task = new Task() { + + @Override + public void call() throws Exception { + Statement stat = conn2.createStatement(); + stat.setFetchSize(1); + for (int i = 0; !stop; i++) { + ResultSet rs = stat.executeQuery( + "select * from test where id > -" + i); + while (rs.next()) { + // ignore + } + } + } + + }; + task.execute(); + stat.execute("create table test2(id int primary key, name clob)"); + for (int i = 0; i < 100; i++) { + stat.execute("delete from test2"); + stat.execute("insert into test2 values(1, space(10000 + " + i + "))"); + } + task.get(); + conn.close(); + conn2.close(); + } + + /** + * A background task. 
+ */ + private final class Deadlock2Task1 extends Task { + + public final Connection conn; + + Deadlock2Task1() throws SQLException { + this.conn = getDeadlock2Connection(); + } + + @Override + public void call() throws Exception { + Random random = new Random(); + Statement stat = conn.createStatement(); + char[] tmp = new char[1024]; + while (!stop) { + try { + ResultSet rs = stat.executeQuery( + "select name from test where id = " + random.nextInt(999)); + if (rs.next()) { + Reader r = rs.getClob("name").getCharacterStream(); + while (r.read(tmp) >= 0) { + // ignore + } + r.close(); + } + rs.close(); + } catch (SQLException ex) { + // ignore "LOB gone away", this can happen + // in the presence of concurrent updates + if (ex.getErrorCode() != ErrorCode.IO_EXCEPTION_2) { + throw ex; + } + } catch (IOException ex) { + // ignore "LOB gone away", this can happen + // in the presence of concurrent updates + Exception e = ex; + if (e.getCause() instanceof DbException) { + e = (Exception) e.getCause(); + } + if (!(e.getCause() instanceof SQLException)) { + throw ex; + } + SQLException e2 = (SQLException) e.getCause(); + if (e2.getErrorCode() != ErrorCode.IO_EXCEPTION_1) { + throw ex; + } + } catch (Exception e) { + e.printStackTrace(System.out); + throw e; + } + } + } + + } + + /** + * A background task. 
+ */ + private final class Deadlock2Task2 extends Task { + + public final Connection conn; + + Deadlock2Task2() throws SQLException { + this.conn = getDeadlock2Connection(); + } + + @Override + public void call() throws Exception { + Random random = new Random(); + Statement stat = conn.createStatement(); + while (!stop) { + stat.execute("update test set counter = " + + random.nextInt(10) + " where id = " + random.nextInt(1000)); + } + } + + } + + private void testDeadlock2() throws Exception { + if (config.mvcc || config.memory) { + return; + } + deleteDb("lob"); + Connection conn = getDeadlock2Connection(); + Statement stat = conn.createStatement(); + stat.execute("create cached table test(id int not null identity, " + + "name clob, counter int)"); + stat.execute("insert into test(id, name) select x, space(100000) " + + "from system_range(1, 100)"); + Deadlock2Task1 task1 = new Deadlock2Task1(); + Deadlock2Task2 task2 = new Deadlock2Task2(); + task1.execute("task1"); + task2.execute("task2"); + for (int i = 0; i < 100; i++) { + stat.execute("insert into test values(null, space(10000 + " + i + "), 1)"); + } + task1.get(); + task1.conn.close(); + task2.get(); + task2.conn.close(); + conn.close(); + } + + Connection getDeadlock2Connection() throws SQLException { + return getConnection("lob;MULTI_THREADED=TRUE;LOCK_TIMEOUT=60000"); + } + + private void testCopyManyLobs() throws Exception { + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity, data clob) " + + "as select 1, space(10000)"); + stat.execute("insert into test(id, data) select null, data from test"); + stat.execute("insert into test(id, data) select null, data from test"); + stat.execute("insert into test(id, data) select null, data from test"); + stat.execute("insert into test(id, data) select null, data from test"); + stat.execute("delete from test where id < 10"); + stat.execute("shutdown compact"); + 
conn.close(); + } + + private void testCopyLob() throws Exception { + if (config.memory) { + return; + } + deleteDb("lob"); + Connection conn; + Statement stat; + ResultSet rs; + conn = getConnection("lob"); + stat = conn.createStatement(); + stat.execute("create table test(id identity, data clob) " + + "as select 1, space(10000)"); + stat.execute("insert into test(id, data) select 2, data from test"); + stat.execute("delete from test where id = 1"); + conn.close(); + conn = getConnection("lob"); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(10000, rs.getString(2).length()); + conn.close(); + } + + private void testConcurrentCreate() throws Exception { + deleteDb("lob"); + final JdbcConnection conn1 = (JdbcConnection) getConnection("lob"); + final JdbcConnection conn2 = (JdbcConnection) getConnection("lob"); + conn1.setAutoCommit(false); + conn2.setAutoCommit(false); + + final byte[] buffer = new byte[10000]; + + Task task1 = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + Blob b = conn1.createBlob(); + OutputStream out = b.setBinaryStream(1); + out.write(buffer); + out.close(); + } + } + }; + Task task2 = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + Blob b = conn2.createBlob(); + OutputStream out = b.setBinaryStream(1); + out.write(buffer); + out.close(); + } + } + }; + task1.execute(); + task2.execute(); + Thread.sleep(1000); + task1.get(); + task2.get(); + conn1.close(); + conn2.close(); + } + + private void testLobInLargeResult() throws Exception { + deleteDb("lob"); + Connection conn; + Statement stat; + conn = getConnection("lob"); + stat = conn.createStatement(); + stat.execute("create table test(id int, data clob) as " + + "select x, null from system_range(1, 1000)"); + stat.execute("insert into test values(0, space(10000))"); + stat.execute("set max_memory_rows 100"); + ResultSet rs = stat.executeQuery("select * from test order by id desc"); + while (rs.next()) { + // this threw a NullPointerException because + // the disk based result set didn't know the lob handler + } + conn.close(); + } + + private void testUniqueIndex() throws Exception { + deleteDb("lob"); + Connection conn; + Statement stat; + conn = getConnection("lob"); + stat = conn.createStatement(); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat).execute("create memory table test(x clob unique)"); + conn.close(); + } + + private void testConvert() throws Exception { + deleteDb("lob"); + Connection conn; + Statement stat; + conn = getConnection("lob"); + stat = conn.createStatement(); + stat.execute("create table test(id int, data blob)"); + stat.execute("insert into test values(1, '')"); + ResultSet rs; + rs = stat.executeQuery("select cast(data as clob) from test"); + rs.next(); + assertEquals("", rs.getString(1)); + stat.execute("drop table test"); + + stat.execute("create table test(id int, data clob)"); + stat.execute("insert into test values(1, '')"); + rs = stat.executeQuery("select cast(data as blob) from test"); + rs.next(); + assertEquals("", rs.getString(1)); + + conn.close(); + } + + private void testCreateAsSelect() throws Exception { + deleteDb("lob"); + Connection conn; + Statement stat; + conn = getConnection("lob"); + stat = conn.createStatement(); + stat.execute("create table test(id int, data clob) as select 1, space(10000)"); + conn.close(); + } + + private void testDelete() throws Exception { + if (config.memory || config.mvStore) { + return; + } + deleteDb("lob"); + Connection conn; + Statement stat; + conn = getConnection("lob"); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name clob)"); + stat.execute("insert into test values(1, space(10000))"); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 1); + stat.execute("insert into test values(2, space(10000))"); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 1); + stat.execute("delete from test where id = 1"); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 1); + stat.execute("insert into test values(3, space(10000))"); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 1); + stat.execute("insert into test values(4, space(10000))"); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 1); + stat.execute("delete from test where id = 2"); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 1); + stat.execute("delete from test where id = 3"); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 1); + stat.execute("delete from test"); + conn.close(); + conn = getConnection("lob"); + stat = conn.createStatement(); + assertSingleValue(stat, + "select count(*) from information_schema.lob_data", 0); + stat.execute("drop table test"); + conn.close(); + } + + private void testLobUpdateMany() throws SQLException { + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table post(id int primary key, text clob) as " + + "select x, space(96) from system_range(1, 329)"); + PreparedStatement prep = conn.prepareStatement("update post set text = ?"); + prep.setCharacterStream(1, new StringReader(new String(new char[1025])), -1); + prep.executeUpdate(); + conn.close(); + } + + private void testLobCleanupSessionTemporaries() throws SQLException { + if (config.mvStore) { + return; + } + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(data clob)"); + + ResultSet rs = stat.executeQuery("select count(*) " + + "from INFORMATION_SCHEMA.LOBS"); + assertTrue(rs.next()); + assertEquals(0, rs.getInt(1)); + rs.close(); + + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO test(data) VALUES(?)"); + String name
= new String(new char[200]).replace((char) 0, 'x'); + prep.setString(1, name); + prep.execute(); + prep.close(); + + rs = stat.executeQuery("select count(*) from INFORMATION_SCHEMA.LOBS"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + rs.close(); + conn.close(); + } + + private void testLobServerMemory() throws SQLException { + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT, DATA CLOB)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST VALUES(1, ?)"); + StringReader reader = new StringReader(new String(new char[100000])); + prep.setCharacterStream(1, reader, -1); + prep.execute(); + conn.close(); + } + + private void testLobVariable() throws SQLException { + deleteDb("lob"); + Connection conn = reconnect(null); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT, DATA CLOB)"); + stat.execute("INSERT INTO TEST VALUES(1, SPACE(100000))"); + stat.execute("SET @TOTAL = SELECT DATA FROM TEST WHERE ID=1"); + stat.execute("DROP TABLE TEST"); + stat.execute("CALL @TOTAL LIKE '%X'"); + stat.execute("CREATE TABLE TEST(ID INT, DATA CLOB)"); + stat.execute("INSERT INTO TEST VALUES(1, @TOTAL)"); + stat.execute("INSERT INTO TEST VALUES(2, @TOTAL)"); + stat.execute("DROP TABLE TEST"); + stat.execute("CALL @TOTAL LIKE '%X'"); + conn.close(); + } + + private void testLobDrop() throws SQLException { + if (config.networked) { + return; + } + deleteDb("lob"); + Connection conn = reconnect(null); + Statement stat = conn.createStatement(); + for (int i = 0; i < 500; i++) { + stat.execute("CREATE TABLE T" + i + "(ID INT, C CLOB)"); + } + stat.execute("CREATE TABLE TEST(ID INT, C CLOB)"); + stat.execute("INSERT INTO TEST VALUES(1, SPACE(10000))"); + for (int i = 0; i < 500; i++) { + stat.execute("DROP TABLE T" + i); + } + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + while (rs.next()) { + rs.getString("C"); + } + 
conn.close(); + } + + private void testLobNoClose() throws Exception { + if (config.networked) { + return; + } + deleteDb("lob"); + Connection conn = reconnect(null); + conn.createStatement().execute( + "CREATE TABLE TEST(ID IDENTITY, DATA CLOB)"); + conn.createStatement().execute( + "INSERT INTO TEST VALUES(1, SPACE(10000))"); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT DATA FROM TEST"); + rs.next(); + SysProperties.lobCloseBetweenReads = true; + Reader in = rs.getCharacterStream(1); + in.read(); + conn.createStatement().execute("DELETE FROM TEST"); + SysProperties.lobCloseBetweenReads = false; + conn.createStatement().execute( + "INSERT INTO TEST VALUES(1, SPACE(10000))"); + rs = conn.createStatement().executeQuery( + "SELECT DATA FROM TEST"); + rs.next(); + in = rs.getCharacterStream(1); + in.read(); + conn.setAutoCommit(false); + try { + conn.createStatement().execute("DELETE FROM TEST"); + conn.commit(); + // DELETE does not fail on Linux, but does on Windows + // error("Error expected"); + // but reading afterwards should fail + int len = 0; + while (true) { + int x = in.read(); + if (x < 0) { + break; + } + len++; + } + in.close(); + if (len > 0) { + // on Linux, it seems it is still possible to read files + // even if they are deleted + if (System.getProperty("os.name").contains("Windows")) { + fail("Error expected; len=" + len); + } + } + } catch (SQLException e) { + assertKnownException(e); + } + conn.rollback(); + conn.close(); + } + + private void testLobTransactions(int spaceLen) throws SQLException { + deleteDb("lob"); + Connection conn = reconnect(null); + conn.createStatement().execute("CREATE TABLE TEST(ID IDENTITY, " + + "DATA CLOB, DATA2 VARCHAR)"); + conn.setAutoCommit(false); + Random random = new Random(0); + int rows = 0; + Savepoint sp = null; + int len = getSize(100, 400); + // config.traceTest = true; + for (int i = 0; i < len; i++) { + switch (random.nextInt(10)) { + case 0: + trace("insert " + i); + 
conn.createStatement().execute( + "INSERT INTO TEST(DATA, DATA2) VALUES('" + i + + "' || SPACE(" + spaceLen + "), '" + i + "')"); + rows++; + break; + case 1: + if (rows > 0) { + int x = random.nextInt(rows); + trace("delete " + x); + conn.createStatement().execute( + "DELETE FROM TEST WHERE ID=" + x); + } + break; + case 2: + if (rows > 0) { + int x = random.nextInt(rows); + trace("update " + x); + conn.createStatement().execute( + "UPDATE TEST SET DATA='x' || DATA, " + + "DATA2='x' || DATA2 WHERE ID=" + x); + } + break; + case 3: + if (rows > 0) { + trace("commit"); + conn.commit(); + sp = null; + } + break; + case 4: + if (rows > 0) { + trace("rollback"); + conn.rollback(); + sp = null; + } + break; + case 5: + trace("savepoint"); + sp = conn.setSavepoint(); + break; + case 6: + if (sp != null) { + trace("rollback to savepoint"); + conn.rollback(sp); + } + break; + case 7: + if (rows > 0) { + trace("checkpoint"); + conn.createStatement().execute("CHECKPOINT"); + trace("shutdown immediately"); + conn.createStatement().execute("SHUTDOWN IMMEDIATELY"); + trace("shutdown done"); + conn = reconnect(conn); + conn.setAutoCommit(false); + sp = null; + } + break; + default: + } + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST"); + while (rs.next()) { + int id = rs.getInt("ID"); + String d1 = rs.getString("DATA").trim(); + String d2 = rs.getString("DATA2"); + assertEquals("id:" + id, d2, d1); + } + + } + conn.close(); + } + + private void testLobRollbackStop() throws SQLException { + deleteDb("lob"); + Connection conn = reconnect(null); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, DATA CLOB)"); + conn.createStatement().execute( + "INSERT INTO TEST VALUES(1, SPACE(10000))"); + conn.setAutoCommit(false); + conn.createStatement().execute("DELETE FROM TEST"); + conn.createStatement().execute("CHECKPOINT"); + conn.createStatement().execute("SHUTDOWN IMMEDIATELY"); + conn = reconnect(conn); + ResultSet rs = 
conn.createStatement().executeQuery("SELECT * FROM TEST"); + assertTrue(rs.next()); + rs.getInt(1); + assertEquals(10000, rs.getString(2).length()); + conn.close(); + } + + private void testLobCopy() throws SQLException { + deleteDb("lob"); + Connection conn = reconnect(null); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, data clob)"); + stat.execute("insert into test values(1, space(1000));"); + stat.execute("insert into test values(2, space(10000));"); + stat.execute("create table test2(id int, data clob);"); + stat.execute("insert into test2 select * from test;"); + stat.execute("drop table test;"); + stat.execute("select * from test2;"); + stat.execute("update test2 set id=id;"); + stat.execute("select * from test2;"); + conn.close(); + } + + private void testLobHibernate() throws Exception { + deleteDb("lob"); + Connection conn0 = reconnect(null); + + conn0.getAutoCommit(); + conn0.setAutoCommit(false); + DatabaseMetaData dbMeta0 = conn0.getMetaData(); + dbMeta0.getDatabaseProductName(); + dbMeta0.getDatabaseMajorVersion(); + dbMeta0.getDatabaseProductVersion(); + dbMeta0.getDriverName(); + dbMeta0.getDriverVersion(); + dbMeta0.supportsResultSetType(1004); + dbMeta0.supportsBatchUpdates(); + dbMeta0.dataDefinitionCausesTransactionCommit(); + dbMeta0.dataDefinitionIgnoredInTransactions(); + dbMeta0.supportsGetGeneratedKeys(); + conn0.getAutoCommit(); + conn0.getAutoCommit(); + conn0.commit(); + conn0.setAutoCommit(true); + Statement stat0 = conn0.createStatement(); + stat0.executeUpdate("drop table CLOB_ENTITY if exists"); + stat0.getWarnings(); + stat0.executeUpdate("create table CLOB_ENTITY (ID bigint not null, " + + "DATA clob, CLOB_DATA clob, primary key (ID))"); + stat0.getWarnings(); + stat0.close(); + conn0.getWarnings(); + conn0.clearWarnings(); + conn0.setAutoCommit(false); + conn0.getAutoCommit(); + conn0.getAutoCommit(); + PreparedStatement prep0 = conn0.prepareStatement( + "select max(ID) from CLOB_ENTITY"); + 
ResultSet rs0 = prep0.executeQuery(); + rs0.next(); + rs0.getLong(1); + rs0.wasNull(); + rs0.close(); + prep0.close(); + conn0.getAutoCommit(); + PreparedStatement prep1 = conn0 + .prepareStatement("insert into CLOB_ENTITY" + + "(DATA, CLOB_DATA, ID) values (?, ?, ?)"); + prep1.setNull(1, 2005); + StringBuilder buff = new StringBuilder(10000); + for (int i = 0; i < 10000; i++) { + buff.append((char) ('0' + (i % 10))); + } + Reader x = new StringReader(buff.toString()); + prep1.setCharacterStream(2, x, 10000); + prep1.setLong(3, 1); + prep1.addBatch(); + prep1.executeBatch(); + prep1.close(); + conn0.getAutoCommit(); + conn0.getAutoCommit(); + conn0.commit(); + conn0.isClosed(); + conn0.getWarnings(); + conn0.clearWarnings(); + conn0.getAutoCommit(); + conn0.getAutoCommit(); + PreparedStatement prep2 = conn0 + .prepareStatement("select c_.ID as ID0_0_, c_.DATA as S_, " + + "c_.CLOB_DATA as CLOB3_0_0_ from CLOB_ENTITY c_ where c_.ID=?"); + prep2.setLong(1, 1); + ResultSet rs1 = prep2.executeQuery(); + rs1.next(); + rs1.getCharacterStream("S_"); + Clob clob0 = rs1.getClob("CLOB3_0_0_"); + rs1.wasNull(); + rs1.next(); + rs1.close(); + prep2.getMaxRows(); + prep2.getQueryTimeout(); + prep2.close(); + conn0.getAutoCommit(); + Reader r; + int ch; + r = clob0.getCharacterStream(); + for (int i = 0; i < 10000; i++) { + ch = r.read(); + if (ch != ('0' + (i % 10))) { + fail("expected " + (char) ('0' + (i % 10)) + + " got: " + ch + " (" + (char) ch + ")"); + } + } + ch = r.read(); + if (ch != -1) { + fail("expected -1 got: " + ch); + } + r.close(); + r = clob0.getCharacterStream(1235, 1000); + for (int i = 1234; i < 2234; i++) { + ch = r.read(); + if (ch != ('0' + (i % 10))) { + fail("expected " + (char) ('0' + (i % 10)) + + " got: " + ch + " (" + (char) ch + ")"); + } + } + ch = r.read(); + if (ch != -1) { + fail("expected -1 got: " + ch); + } + r.close(); + assertThrows(ErrorCode.INVALID_VALUE_2, clob0).getCharacterStream(10001, 1); + assertThrows(ErrorCode.INVALID_VALUE_2, 
clob0).getCharacterStream(10002, 0); + conn0.close(); + } + + private void testLobCopy(boolean compress) throws SQLException { + deleteDb("lob"); + Connection conn; + conn = reconnect(null); + Statement stat = conn.createStatement(); + if (compress) { + stat.execute("SET COMPRESS_LOB LZF"); + } else { + stat.execute("SET COMPRESS_LOB NO"); + } + conn = reconnect(conn); + stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select value from information_schema.settings " + + "where NAME='COMPRESS_LOB'"); + rs.next(); + assertEquals(compress ? "LZF" : "NO", rs.getString(1)); + assertFalse(rs.next()); + stat.execute("create table test(text clob)"); + stat.execute("create table test2(text clob)"); + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < 1000; i++) { + buff.append(' '); + } + String spaces = buff.toString(); + stat.execute("insert into test values('" + spaces + "')"); + stat.execute("insert into test2 select * from test"); + rs = stat.executeQuery("select * from test2"); + rs.next(); + assertEquals(spaces, rs.getString(1)); + stat.execute("drop table test"); + rs = stat.executeQuery("select * from test2"); + rs.next(); + assertEquals(spaces, rs.getString(1)); + stat.execute("alter table test2 add column id int before text"); + rs = stat.executeQuery("select * from test2"); + rs.next(); + assertEquals(spaces, rs.getString("text")); + conn.close(); + } + + private void testLobCompression(boolean compress) throws Exception { + deleteDb("lob"); + Connection conn; + conn = reconnect(null); + if (compress) { + conn.createStatement().execute("SET COMPRESS_LOB LZF"); + } else { + conn.createStatement().execute("SET COMPRESS_LOB NO"); + } + conn.createStatement().execute("CREATE TABLE TEST(ID INT PRIMARY KEY, C CLOB)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?)"); + long time = System.nanoTime(); + int len = getSize(10, 40); + if (config.networked && config.big) { + len = 5; + } + 
StringBuilder buff = new StringBuilder(); + for (int i = 0; i < 1000; i++) { + buff.append(StringUtils.xmlNode("content", null, "This is a test " + i)); + } + String xml = buff.toString(); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.setString(2, xml + i); + prep.execute(); + } + for (int i = 0; i < len; i++) { + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST"); + while (rs.next()) { + if (i == 0) { + assertEquals(xml + rs.getInt(1), rs.getString(2)); + } else { + Reader r = rs.getCharacterStream(2); + String result = IOUtils.readStringAndClose(r, -1); + assertEquals(xml + rs.getInt(1), result); + } + } + } + time = System.nanoTime() - time; + trace("time: " + TimeUnit.NANOSECONDS.toMillis(time) + " compress: " + compress); + conn.close(); + if (!config.memory) { + long length = new File(getBaseDir() + "/lob.h2.db").length(); + trace("len: " + length + " compress: " + compress); + } + } + + private void testManyLobs() throws Exception { + deleteDb("lob"); + Connection conn; + conn = reconnect(null); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, B BLOB, C CLOB)"); + int len = getSize(10, 2000); + if (config.networked) { + len = 100; + } + + int first = 1, increment = 19; + + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(ID, B, C) VALUES(?, ?, ?)"); + for (int i = first; i < len; i += increment) { + int l = i; + prep.setInt(1, i); + prep.setBinaryStream(2, getRandomStream(l, i), -1); + prep.setCharacterStream(3, getRandomReader(l, i), -1); + prep.execute(); + } + + conn = reconnect(conn); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST ORDER BY ID"); + while (rs.next()) { + int i = rs.getInt("ID"); + Blob b = rs.getBlob("B"); + Clob c = rs.getClob("C"); + int l = i; + assertEquals(l, b.length()); + assertEquals(l, c.length()); + assertEqualStreams(getRandomStream(l, i), b.getBinaryStream(), -1); + assertEqualReaders(getRandomReader(l, i), 
c.getCharacterStream(), -1); + } + + prep = conn.prepareStatement( + "UPDATE TEST SET B=?, C=? WHERE ID=?"); + for (int i = first; i < len; i += increment) { + int l = i; + prep.setBinaryStream(1, getRandomStream(l, -i), -1); + prep.setCharacterStream(2, getRandomReader(l, -i), -1); + prep.setInt(3, i); + prep.execute(); + } + + conn = reconnect(conn); + rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST ORDER BY ID"); + while (rs.next()) { + int i = rs.getInt("ID"); + Blob b = rs.getBlob("B"); + Clob c = rs.getClob("C"); + int l = i; + assertEquals(l, b.length()); + assertEquals(l, c.length()); + assertEqualStreams(getRandomStream(l, -i), b.getBinaryStream(), -1); + assertEqualReaders(getRandomReader(l, -i), c.getCharacterStream(), -1); + } + + conn.close(); + } + + private void testClob() throws Exception { + deleteDb("lob"); + Connection conn; + conn = reconnect(null); + conn.createStatement().execute( + "CREATE TABLE TEST(ID IDENTITY, C CLOB)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(C) VALUES(?)"); + prep.setCharacterStream(1, + new CharArrayReader("Bohlen".toCharArray()), "Bohlen".length()); + prep.execute(); + prep.setCharacterStream(1, + new CharArrayReader("B\u00f6hlen".toCharArray()), "B\u00f6hlen".length()); + prep.execute(); + prep.setCharacterStream(1, getRandomReader(501, 1), -1); + prep.execute(); + prep.setCharacterStream(1, getRandomReader(1501, 2), 401); + prep.execute(); + conn = reconnect(conn); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals("Bohlen", rs.getString("C")); + assertEqualReaders(new CharArrayReader("Bohlen".toCharArray()), + rs.getCharacterStream("C"), -1); + rs.next(); + assertEqualReaders(new CharArrayReader("B\u00f6hlen".toCharArray()), + rs.getCharacterStream("C"), -1); + rs.next(); + assertEqualReaders(getRandomReader(501, 1), + rs.getCharacterStream("C"), -1); + Clob clob = rs.getClob("C"); + 
assertEqualReaders(getRandomReader(501, 1), + clob.getCharacterStream(), -1); + assertEquals(501, clob.length()); + rs.next(); + assertEqualReaders(getRandomReader(401, 2), + rs.getCharacterStream("C"), -1); + assertEqualReaders(getRandomReader(1500, 2), + rs.getCharacterStream("C"), 401); + clob = rs.getClob("C"); + assertEqualReaders(getRandomReader(1501, 2), + clob.getCharacterStream(), 401); + assertEqualReaders(getRandomReader(401, 2), + clob.getCharacterStream(), 401); + assertEquals(401, clob.length()); + assertFalse(rs.next()); + conn.close(); + } + + private Connection reconnect(Connection conn) throws SQLException { + long time = System.nanoTime(); + if (conn != null) { + JdbcUtils.closeSilently(conn); + } + conn = getConnection("lob"); + trace("re-connect=" + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + return conn; + } + + private void testUpdateLob() throws SQLException { + deleteDb("lob"); + Connection conn; + conn = reconnect(null); + + PreparedStatement prep = conn + .prepareStatement( + "CREATE TABLE IF NOT EXISTS p( id int primary key, rawbyte BLOB ); "); + prep.execute(); + prep.close(); + prep = conn.prepareStatement("INSERT INTO p(id) VALUES(?);"); + for (int i = 0; i < 10; i++) { + prep.setInt(1, i); + prep.execute(); + } + prep.close(); + + prep = conn.prepareStatement("UPDATE p set rawbyte=? WHERE id=?"); + for (int i = 0; i < 8; i++) { + prep.setBinaryStream(1, getRandomStream(10000, i), 0); + prep.setInt(2, i); + prep.execute(); + } + prep.close(); + conn.commit(); + + conn = reconnect(conn); + + conn.setAutoCommit(true); + prep = conn.prepareStatement("UPDATE p set rawbyte=? 
WHERE id=?"); + for (int i = 8; i < 10; i++) { + prep.setBinaryStream(1, getRandomStream(10000, i), 0); + prep.setInt(2, i); + prep.execute(); + } + prep.close(); + + prep = conn.prepareStatement("SELECT * from p"); + ResultSet rs = prep.executeQuery(); + while (rs.next()) { + for (int i = 1; i <= rs.getMetaData().getColumnCount(); i++) { + rs.getMetaData().getColumnName(i); + rs.getString(i); + } + } + conn.close(); + } + + private void testLobReconnect() throws Exception { + deleteDb("lob"); + Connection conn = reconnect(null); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, TEXT CLOB)"); + PreparedStatement prep; + prep = conn.prepareStatement("INSERT INTO TEST VALUES(1, ?)"); + String s = new String(getRandomChars(10000, 1)); + byte[] data = s.getBytes(StandardCharsets.UTF_8); + // if we keep the string, debugging with Eclipse is not possible + // because Eclipse wants to display the large string and fails + s = ""; + prep.setBinaryStream(1, new ByteArrayInputStream(data), 0); + prep.execute(); + + conn = reconnect(conn); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST WHERE ID=1"); + rs.next(); + InputStream in = new ByteArrayInputStream(data); + assertEqualStreams(in, rs.getBinaryStream("TEXT"), -1); + + prep = conn.prepareStatement("UPDATE TEST SET TEXT = ?"); + prep.setBinaryStream(1, new ByteArrayInputStream(data), 0); + prep.execute(); + + conn = reconnect(conn); + stat = conn.createStatement(); + rs = stat.executeQuery("SELECT * FROM TEST WHERE ID=1"); + rs.next(); + assertEqualStreams(rs.getBinaryStream("TEXT"), + new ByteArrayInputStream(data), -1); + + stat.execute("DROP TABLE IF EXISTS TEST"); + conn.close(); + } + + private void testLob(boolean clob) throws Exception { + deleteDb("lob"); + Connection conn = reconnect(null); + conn = reconnect(conn); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + PreparedStatement prep; 
+ ResultSet rs; + long time; + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, VALUE " + + (clob ? "CLOB" : "BLOB") + ")"); + + int len = getSize(1, 1000); + if (config.networked && config.big) { + len = 100; + } + + time = System.nanoTime(); + prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?)"); + for (int i = 0; i < len; i += i + i + 1) { + prep.setInt(1, i); + int size = i * i; + if (clob) { + prep.setCharacterStream(2, getRandomReader(size, i), 0); + } else { + prep.setBinaryStream(2, getRandomStream(size, i), 0); + } + prep.execute(); + } + trace("insert=" + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + traceMemory(); + conn = reconnect(conn); + + time = System.nanoTime(); + prep = conn.prepareStatement("SELECT ID, VALUE FROM TEST"); + rs = prep.executeQuery(); + while (rs.next()) { + int id = rs.getInt("ID"); + int size = id * id; + if (clob) { + Reader rt = rs.getCharacterStream(2); + assertEqualReaders(getRandomReader(size, id), rt, -1); + Object obj = rs.getObject(2); + if (obj instanceof Clob) { + obj = ((Clob) obj).getCharacterStream(); + } + assertEqualReaders(getRandomReader(size, id), + (Reader) obj, -1); + } else { + InputStream in = rs.getBinaryStream(2); + assertEqualStreams(getRandomStream(size, id), in, -1); + Object obj = rs.getObject(2); + if (obj instanceof Blob) { + obj = ((Blob) obj).getBinaryStream(); + } + assertEqualStreams(getRandomStream(size, id), + (InputStream) obj, -1); + } + } + trace("select=" + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + traceMemory(); + + conn = reconnect(conn); + + time = System.nanoTime(); + prep = conn.prepareStatement("DELETE FROM TEST WHERE ID=?"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.executeUpdate(); + } + trace("delete=" + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + traceMemory(); + conn = reconnect(conn); + + conn.setAutoCommit(false); + prep = conn.prepareStatement("INSERT INTO TEST VALUES(1, ?)"); + if (clob) { + 
prep.setCharacterStream(1, getRandomReader(0, 0), 0); + } else { + prep.setBinaryStream(1, getRandomStream(0, 0), 0); + } + prep.execute(); + conn.rollback(); + prep.execute(); + conn.commit(); + + conn.createStatement().execute("DELETE FROM TEST WHERE ID=1"); + conn.rollback(); + conn.createStatement().execute("DELETE FROM TEST WHERE ID=1"); + conn.commit(); + + conn.createStatement().execute("DROP TABLE TEST"); + conn.close(); + } + + private void testJavaObject() throws SQLException { + deleteDb("lob"); + JdbcConnection conn = (JdbcConnection) getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, DATA OTHER)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(1, ?)"); + prep.setObject(1, new TestLobObject("abc")); + prep.execute(); + ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM TEST"); + rs.next(); + Object oa = rs.getObject(2); + assertEquals(TestLobObject.class.getName(), oa.getClass().getName()); + Object ob = rs.getObject("DATA"); + assertEquals(TestLobObject.class.getName(), ob.getClass().getName()); + assertEquals("TestLobObject: abc", oa.toString()); + assertEquals("TestLobObject: abc", ob.toString()); + assertFalse(rs.next()); + + conn.createStatement().execute("drop table test"); + stat.execute("create table test(value other)"); + prep = conn.prepareStatement("insert into test values(?)"); + prep.setObject(1, JdbcUtils.serialize("", conn.getSession().getDataHandler())); + prep.execute(); + rs = stat.executeQuery("select value from test"); + while (rs.next()) { + assertEquals("", (String) rs.getObject("value")); + } + conn.close(); + } + + /** + * Test a bug where the usage of BufferedInputStream in LobStorageMap was + * causing a deadlock. 
+ */ + private void testBufferedInputStreamBug() throws SQLException { + deleteDb("lob"); + JdbcConnection conn = (JdbcConnection) getConnection("lob"); + conn.createStatement().execute("CREATE TABLE TEST(test BLOB)"); + PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST(test) VALUES(?)"); + ps.setBlob(1, new ByteArrayInputStream(new byte[257])); + ps.executeUpdate(); + conn.close(); + } + + private static Reader getRandomReader(int len, int seed) { + return new CharArrayReader(getRandomChars(len, seed)); + } + + private static char[] getRandomChars(int len, int seed) { + Random random = new Random(seed); + char[] buff = new char[len]; + for (int i = 0; i < len; i++) { + char ch; + do { + ch = (char) random.nextInt(Character.MAX_VALUE); + // UTF8: String.getBytes("UTF-8") only returns 1 byte for + // 0xd800-0xdfff + } while (ch >= 0xd800 && ch <= 0xdfff); + buff[i] = ch; + } + return buff; + } + + private static InputStream getRandomStream(int len, int seed) { + Random random = new Random(seed); + byte[] buff = new byte[len]; + random.nextBytes(buff); + return new ByteArrayInputStream(buff); + } + + /** + * Test the combination of updating a table which contains an LOB, and + * reading from the LOB at the same time + */ + private void testUpdatingLobRow() throws Exception { + if (config.memory) { + return; + } + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, " + + "name clob, counter int)"); + stat.execute("insert into test(id, name) select x, " + + "space(100000) from system_range(1, 3)"); + + ResultSet rs = stat.executeQuery("select name " + + "from test where id = 1"); + rs.next(); + Reader r = rs.getClob("name").getCharacterStream(); + Random random = new Random(); + char[] tmp = new char[256]; + while (r.read(tmp) > 0) { + stat.execute("update test set counter = " + + random.nextInt(1000) + " where id = 1"); + } + r.close(); + conn.close(); 
+ } + + private void testCommitOnExclusiveConnection() throws Exception { + deleteDb("lob"); + Connection conn = getConnection("lob;EXCLUSIVE=1"); + Statement statement = conn.createStatement(); + statement.execute("drop table if exists TEST"); + statement.execute("create table TEST (COL INTEGER, LOB CLOB)"); + conn.setAutoCommit(false); + statement.execute("insert into TEST (COL, LOB) values (1, '" + + MORE_THAN_128_CHARS + "')"); + statement.execute("update TEST set COL=2"); + // OK + // statement.execute("commit"); + // KO : should not hang + conn.commit(); + conn.close(); + } + + private void testClobWithRandomUnicodeChars() throws Exception { + // This tests an issue we had with storing unicode surrogate pairs, + // which only manifested at the boundaries between blocks i.e. at 4k + // boundaries + deleteDb("lob"); + Connection conn = getConnection("lob"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE logs" + + "(id int primary key auto_increment, message CLOB)"); + PreparedStatement s1 = conn.prepareStatement( + "INSERT INTO logs (id, message) VALUES(null, ?)"); + final Random rand = new Random(1); + for (int i = 1; i <= 100; i++) { + String data = randomUnicodeString(rand); + s1.setString(1, data); + s1.executeUpdate(); + ResultSet rs = stat.executeQuery("SELECT id, message " + + "FROM logs ORDER BY id DESC LIMIT 1"); + rs.next(); + String read = rs.getString(2); + if (!read.equals(data)) { + for (int j = 0; j < read.length(); j++) { + assertEquals("pos: " + j + " i:" + i, read.charAt(j), data.charAt(j)); + } + } + assertEquals(read, data); + } + conn.close(); + } + + private static String randomUnicodeString(Random rand) { + int count = 10000; + final char[] buffer = new char[count]; + while (count-- != 0) { + char ch = (char) rand.nextInt(); + if (ch >= 56320 && ch <= 57343) { + if (count == 0) { + count++; + } else { + // low surrogate, insert high surrogate after putting it + // in + buffer[count] = ch; + count--; + 
buffer[count] = (char) (55296 + rand.nextInt(128)); + } + } else if (ch >= 55296 && ch <= 56191) { + if (count == 0) { + count++; + } else { + // high surrogate, insert low surrogate before putting + // it in + buffer[count] = (char) (56320 + rand.nextInt(128)); + count--; + buffer[count] = ch; + } + } else if (ch >= 56192 && ch <= 56319) { + // private high surrogate: no clue, so skip it + count++; + } else { + buffer[count] = ch; + } + } + return new String(buffer); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestLobObject.java b/modules/h2/src/test/java/org/h2/test/db/TestLobObject.java new file mode 100644 index 0000000000000..51ecae3f04838 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestLobObject.java @@ -0,0 +1,26 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.Serializable; + +/** + * A utility class for TestLob. + */ +class TestLobObject implements Serializable { + + private static final long serialVersionUID = 1L; + String data; + + TestLobObject(String data) { + this.data = data; + } + + @Override + public String toString() { + return "TestLobObject: " + data; + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestMemoryUsage.java b/modules/h2/src/test/java/org/h2/test/db/TestMemoryUsage.java new file mode 100644 index 0000000000000..fce13839bb777 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestMemoryUsage.java @@ -0,0 +1,298 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.test.TestBase; +import org.h2.util.Utils; + +/** + * Tests the memory usage of the cache. + */ +public class TestMemoryUsage extends TestBase { + + private Connection conn; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testOpenCloseConnections(); + if (getBaseDir().indexOf(':') >= 0) { + // can't test in-memory databases + return; + } + // comment this out for now, not reliable when running on my 64-bit + // Java1.8 VM + // testCreateDropLoop(); + testCreateIndex(); + testClob(); + testReconnectOften(); + + deleteDb("memoryUsage"); + reconnect(); + insertUpdateSelectDelete(); + reconnect(); + insertUpdateSelectDelete(); + conn.close(); + + deleteDb("memoryUsage"); + } + + private void testOpenCloseConnections() throws SQLException { + if (!config.big) { + return; + } + deleteDb("memoryUsage"); + conn = getConnection("memoryUsage"); + eatMemory(4000); + for (int i = 0; i < 4000; i++) { + Connection c2 = getConnection("memoryUsage"); + c2.createStatement(); + c2.close(); + } + freeMemory(); + conn.close(); + } + + private void testCreateDropLoop() throws SQLException { + deleteDb("memoryUsageCreateDropLoop"); + conn = getConnection("memoryUsageCreateDropLoop"); + Statement stat = conn.createStatement(); + for (int i = 0; i < 100; i++) { + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("DROP TABLE TEST"); + } + stat.execute("checkpoint"); + int used = Utils.getMemoryUsed(); + for (int i = 0; i < 1000; i++) { + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)"); + stat.execute("DROP TABLE 
TEST");
+        }
+        stat.execute("checkpoint");
+        int usedNow = Utils.getMemoryUsed();
+        if (usedNow > used * 1.3) {
+            // the reading might be wrong, so try to lower memory usage
+            // by forcing an OOME (i overflows to negative to end the loop)
+            for (int i = 1024; i > 0; i *= 2) {
+                try {
+                    byte[] oome = new byte[1024 * 1024 * 256];
+                    oome[0] = (byte) i;
+                } catch (OutOfMemoryError e) {
+                    break;
+                }
+            }
+            usedNow = Utils.getMemoryUsed();
+            if (usedNow > used * 1.3) {
+                assertEquals(used, usedNow);
+            }
+        }
+        conn.close();
+    }
+
+
+    private void reconnect() throws SQLException {
+        if (conn != null) {
+            conn.close();
+        }
+        // Class.forName("org.hsqldb.jdbcDriver");
+        // conn = DriverManager.getConnection("jdbc:hsqldb:test", "sa", "");
+        conn = getConnection("memoryUsage");
+    }
+
+    private void testClob() throws SQLException {
+        if (config.memory || !config.big) {
+            return;
+        }
+        deleteDb("memoryUsageClob");
+        conn = getConnection("memoryUsageClob");
+        Statement stat = conn.createStatement();
+        stat.execute("SET MAX_LENGTH_INPLACE_LOB 8192");
+        stat.execute("SET CACHE_SIZE 8000");
+        stat.execute("CREATE TABLE TEST(ID IDENTITY, DATA CLOB)");
+        freeSoftReferences();
+        try {
+            int base = Utils.getMemoryUsed();
+            for (int i = 0; i < 4; i++) {
+                stat.execute("INSERT INTO TEST(DATA) " +
+                        "SELECT SPACE(8000) FROM SYSTEM_RANGE(1, 800)");
+                freeSoftReferences();
+                int used = Utils.getMemoryUsed();
+                if ((used - base) > 3 * 8192) {
+                    fail("Used: " + (used - base) + " i: " + i);
+                }
+            }
+        } finally {
+            conn.close();
+            freeMemory();
+        }
+    }
+
+    /**
+     * Eat memory so that all soft references are garbage collected.
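`Utils.getMemoryUsed()` underpins most of the assertions in this test. As a rough standalone analogue (an assumed sketch, not H2's actual implementation), used heap can be sampled via `Runtime`, prompting the collector until the reading stops shrinking:

```java
public class MemProbe {
    /** Used heap in KB, after best-effort GC so successive readings are comparable. */
    static int memoryUsedKb() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        // System.gc() is only a hint; loop until the figure settles.
        for (int i = 0; i < 8; i++) {
            System.gc();
            long now = rt.totalMemory() - rt.freeMemory();
            if (now >= used) {
                break;
            }
            used = now;
        }
        return (int) (used / 1024);
    }

    public static void main(String[] args) {
        System.out.println("used KB: " + memoryUsedKb());
    }
}
```

Readings like this are inherently noisy, which is why the assertions above tolerate a 1.3x slack factor and retry after repeated `System.gc()` calls rather than comparing exact values.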
+ */ + void freeSoftReferences() { + try { + eatMemory(1); + } catch (OutOfMemoryError e) { + // ignore + } + System.gc(); + System.gc(); + freeMemory(); + } + + private void testCreateIndex() throws SQLException { + if (config.memory) { + return; + } + deleteDb("memoryUsage"); + conn = getConnection("memoryUsage"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, name varchar(255))"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(?, space(200))"); + int len = getSize(10000, 100000); + for (int i = 0; i < len; i++) { + if (i % 1000 == 0) { + // trace("[" + i + "/" + len + "] KB: " + + // MemoryUtils.getMemoryUsed()); + } + prep.setInt(1, i); + prep.executeUpdate(); + } + int base = Utils.getMemoryUsed(); + stat.execute("create index idx_test_id on test(id)"); + for (int i = 0;; i++) { + System.gc(); + int used = Utils.getMemoryUsed() - base; + if (used <= getSize(7500, 12000)) { + break; + } + if (i < 16) { + continue; + } + fail("Used: " + used); + } + stat.execute("drop table test"); + conn.close(); + } + + private void testReconnectOften() throws SQLException { + deleteDb("memoryUsage"); + Connection conn1 = getConnection("memoryUsage"); + int len = getSize(1, 2000); + printTimeMemory("start", 0); + long time = System.nanoTime(); + for (int i = 0; i < len; i++) { + Connection conn2 = getConnection("memoryUsage"); + conn2.close(); + if (i % 10000 == 0) { + printTimeMemory("connect", + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + } + } + printTimeMemory("connect", + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + conn1.close(); + } + + private void insertUpdateSelectDelete() throws SQLException { + Statement stat = conn.createStatement(); + long time; + int len = getSize(1, 2000); + + // insert + time = System.nanoTime(); + stat.execute("DROP TABLE IF EXISTS TEST"); + trace("drop=" + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + stat.execute("CREATE CACHED 
TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, 'Hello World')"); + printTimeMemory("start", 0); + time = System.nanoTime(); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.execute(); + if (i % 50000 == 0) { + trace(" " + (100 * i / len) + "%"); + } + } + printTimeMemory("insert", TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + + // update + time = System.nanoTime(); + prep = conn.prepareStatement( + "UPDATE TEST SET NAME='Hallo Welt' || ID WHERE ID = ?"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.execute(); + if (i % 50000 == 0) { + trace(" " + (100 * i / len) + "%"); + } + } + printTimeMemory("update", TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + + // select + time = System.nanoTime(); + prep = conn.prepareStatement("SELECT * FROM TEST WHERE ID = ?"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + ResultSet rs = prep.executeQuery(); + rs.next(); + assertFalse(rs.next()); + if (i % 50000 == 0) { + trace(" " + (100 * i / len) + "%"); + } + } + printTimeMemory("select", + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + + // select randomized + Random random = new Random(1); + time = System.nanoTime(); + prep = conn.prepareStatement("SELECT * FROM TEST WHERE ID = ?"); + for (int i = 0; i < len; i++) { + prep.setInt(1, random.nextInt(len)); + ResultSet rs = prep.executeQuery(); + rs.next(); + assertFalse(rs.next()); + if (i % 50000 == 0) { + trace(" " + (100 * i / len) + "%"); + } + } + printTimeMemory("select randomized", + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + + // delete + time = System.nanoTime(); + prep = conn.prepareStatement("DELETE FROM TEST WHERE ID = ?"); + for (int i = 0; i < len; i++) { + prep.setInt(1, random.nextInt(len)); + prep.executeUpdate(); + if (i % 50000 == 0) { + trace(" " + (100 * i / len) + "%"); + } + } + printTimeMemory("delete", + 
TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestMergeUsing.java b/modules/h2/src/test/java/org/h2/test/db/TestMergeUsing.java new file mode 100644 index 0000000000000..78b5a59baf52b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestMergeUsing.java @@ -0,0 +1,408 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.Trigger; +import org.h2.test.TestBase; + +/** + * Test merge using syntax. + */ +public class TestMergeUsing extends TestBase implements Trigger { + + private static final String GATHER_ORDERED_RESULTS_SQL = "SELECT ID, NAME FROM PARENT ORDER BY ID ASC"; + private static int triggerTestingUpdateCount; + + private String triggerName; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + + // Simple ID,NAME inserts, target table with PK initially empty + testMergeUsing( + "CREATE TABLE PARENT(ID INT, NAME VARCHAR, PRIMARY KEY(ID) );", + "MERGE INTO PARENT AS P USING (SELECT X AS ID, 'Marcy'||X AS NAME " + + "FROM SYSTEM_RANGE(1,2) ) AS S ON (P.ID = S.ID AND 1=1 AND S.ID = P.ID) " + + "WHEN MATCHED THEN " + + "UPDATE SET P.NAME = S.NAME WHERE 2 = 2 WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (S.ID, S.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2)", 2); + // Simple NAME updates, target table missing PK + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );", + "MERGE INTO PARENT AS P USING (" + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) ) AS S " + + "ON (P.ID = S.ID AND 1=1 AND S.ID = P.ID) " + + "WHEN MATCHED THEN UPDATE SET P.NAME = S.NAME||S.ID WHERE 1 = 1 WHEN NOT MATCHED THEN " + + "INSERT (ID, NAME) VALUES (S.ID, S.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(1,2)", + 2); + // No NAME updates, WHERE clause is always false, insert clause missing + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );", + "MERGE INTO PARENT AS P USING (" + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) ) AS S ON (P.ID = S.ID) " + + "WHEN MATCHED THEN UPDATE SET P.NAME = S.NAME||S.ID WHERE 1 = 2", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2)", 0); + // No NAME updates, no WHERE clause, insert clause missing + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );", + "MERGE INTO PARENT AS P USING (" + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) ) AS S ON (P.ID = S.ID) " + + "WHEN MATCHED THEN 
UPDATE SET P.NAME = S.NAME||S.ID", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(1,2)", + 2); + // Two delete updates done, no WHERE clause, insert clause missing + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );", + "MERGE INTO PARENT AS P USING (" + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) ) AS S ON (P.ID = S.ID) " + + "WHEN MATCHED THEN DELETE", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) WHERE 1=0", + 2); + // One insert, one update one delete happens, target table missing PK + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );", + "MERGE INTO PARENT AS P USING (" + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) ) AS S ON (P.ID = S.ID) " + + "WHEN MATCHED THEN UPDATE SET P.NAME = S.NAME||S.ID WHERE P.ID = 2 " + + "DELETE WHERE P.ID = 1 WHEN NOT MATCHED THEN " + + "INSERT (ID, NAME) VALUES (S.ID, S.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(2,2) UNION ALL " + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(3,3)", + 3); + // No updates happen: No insert defined, no update or delete happens due + // to ON condition failing always, target table missing PK + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );", + "MERGE INTO PARENT AS P USING (" + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) ) AS S ON (P.ID = S.ID AND 1=0) " + + "WHEN MATCHED THEN " + + "UPDATE SET P.NAME = S.NAME||S.ID WHERE P.ID = 2 DELETE WHERE P.ID = 1", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2)", 0); + // One insert, one update one delete happens, target table missing PK + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );" + + "CREATE 
TABLE SOURCE AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );", + "MERGE INTO PARENT AS P USING SOURCE AS S ON (P.ID = S.ID) WHEN MATCHED THEN " + + "UPDATE SET P.NAME = S.NAME||S.ID WHERE P.ID = 2 DELETE WHERE P.ID = 1 WHEN NOT MATCHED THEN " + + "INSERT (ID, NAME) VALUES (S.ID, S.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(2,2) UNION ALL " + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(3,3)", + 3); + // One insert, one update one delete happens, target table missing PK, + // no source alias + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );" + + "CREATE TABLE SOURCE AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );", + "MERGE INTO PARENT AS P USING SOURCE ON (P.ID = SOURCE.ID) WHEN MATCHED THEN " + + "UPDATE SET P.NAME = SOURCE.NAME||SOURCE.ID WHERE P.ID = 2 DELETE WHERE P.ID = 1 " + + "WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (SOURCE.ID, SOURCE.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(2,2) UNION ALL " + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(3,3)", + 3); + // One insert, one update one delete happens, target table missing PK, + // no source or target alias + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );" + + "CREATE TABLE SOURCE AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );", + "MERGE INTO PARENT USING SOURCE ON (PARENT.ID = SOURCE.ID) WHEN MATCHED THEN " + + "UPDATE SET PARENT.NAME = SOURCE.NAME||SOURCE.ID WHERE PARENT.ID = 2 " + + "DELETE WHERE PARENT.ID = 1 WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (SOURCE.ID, SOURCE.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(2,2) UNION ALL " + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(3,3)", + 3); + + // Only insert clause, no update or delete clauses + 
testMergeUsing(
+                "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,1) );" +
+                        "DELETE FROM PARENT;",
+                "MERGE INTO PARENT AS P USING (" +
+                        "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) ) AS S ON (P.ID = S.ID) " +
+                        "WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (S.ID, S.NAME)",
+                GATHER_ORDERED_RESULTS_SQL,
+                "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3)", 3);
+        // no insert, no update, no delete clauses - essentially a no-op
+        testMergeUsingException(
+                "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,1) );" +
+                        "DELETE FROM PARENT;",
+                "MERGE INTO PARENT AS P USING (" +
+                        "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) ) AS S ON (P.ID = S.ID)",
+                GATHER_ORDERED_RESULTS_SQL,
+                "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) WHERE X<0",
+                0,
+                "At least UPDATE, DELETE or INSERT embedded statement must be supplied.");
+        // Two updates to same row - update and delete together - emptying the
+        // parent table
+        testMergeUsing(
+                "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,1) )",
+                "MERGE INTO PARENT AS P USING (" +
+                        "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) ) AS S ON (P.ID = S.ID) " +
+                        "WHEN MATCHED THEN " +
+                        "UPDATE SET P.NAME = P.NAME||S.ID WHERE P.ID = 1 DELETE WHERE P.ID = 1 AND P.NAME = 'Marcy11'",
+                GATHER_ORDERED_RESULTS_SQL,
+                "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,1) WHERE X<0",
+                2);
+        // Duplicate source keys but different ROWID update - so no error
+        // SQL standard says duplicate or repeated updates of same row in same
+        // statement should cause errors - but because first row is updated,
+        // deleted (on source row 1) then inserted (on source row 2)
+        // it's considered different - with respect to ROWID - so no error
+        // One insert, one update, one delete happens (on same row), target
+        // table missing PK, no source or target alias
+        testMergeUsing(
+                "CREATE TABLE 
PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,1) );" +
+                        "CREATE TABLE SOURCE AS (SELECT 1 AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2) );",
+                "MERGE INTO PARENT USING SOURCE ON (PARENT.ID = SOURCE.ID) WHEN MATCHED THEN " +
+                        "UPDATE SET PARENT.NAME = SOURCE.NAME||SOURCE.ID WHERE PARENT.ID = 2 " +
+                        "DELETE WHERE PARENT.ID = 1 WHEN NOT MATCHED THEN " +
+                        "INSERT (ID, NAME) VALUES (SOURCE.ID, SOURCE.NAME)",
+                GATHER_ORDERED_RESULTS_SQL,
+                "SELECT 1 AS ID, 'Marcy'||X||X UNION ALL SELECT 1 AS ID, 'Marcy2'",
+                2);
+
+        // Multiple update on same row: SQL standard says duplicate or repeated
+        // updates in same statement should cause errors - but because first row
+        // is updated, then deleted, then inserted, it's considered different
+        // One insert, one update, one delete happens (on same row, which is
+        // okay), then another update (which is illegal), target table missing PK,
+        // no source or target alias
+        testMergeUsingException(
+                "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,1) );" +
+                        "CREATE TABLE SOURCE AS (SELECT 1 AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );",
+                "MERGE INTO PARENT USING SOURCE ON (PARENT.ID = SOURCE.ID) WHEN MATCHED THEN " +
+                        "UPDATE SET PARENT.NAME = SOURCE.NAME||SOURCE.ID WHERE PARENT.ID = 2 " +
+                        "DELETE WHERE PARENT.ID = 1 WHEN NOT MATCHED THEN " +
+                        "INSERT (ID, NAME) VALUES (SOURCE.ID, SOURCE.NAME)",
+                GATHER_ORDERED_RESULTS_SQL,
+                "SELECT 1 AS ID, 'Marcy'||X||X UNION ALL SELECT 1 AS ID, 'Marcy2'",
+                3,
+                "Unique index or primary key violation: \"Merge using " +
+                        "ON column expression, duplicate _ROWID_ target record " +
+                        "already updated, deleted or inserted:_ROWID_=2:in:PUBLIC.PARENT:conflicting source row number:2");
+        // Duplicate key updated 3 rows at once, only 1 expected
+        testMergeUsingException(
+                "CREATE TABLE PARENT AS (SELECT 1 AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );" +
+                        "CREATE TABLE SOURCE AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );",
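Outside SQL, the matched/not-matched branching these MERGE USING cases exercise can be paraphrased over a plain map. A toy model only: the hard-coded conditions mirror the "one insert, one update, one delete" cases above, and H2 really tracks `_ROWID_`s rather than map keys.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeSketch {
    /**
     * For each source row: if the key exists in the target, apply the
     * UPDATE (and then DELETE if its condition holds); otherwise INSERT.
     */
    static void mergeUsing(Map<Integer, String> target, Map<Integer, String> source) {
        for (Map.Entry<Integer, String> s : source.entrySet()) {
            if (target.containsKey(s.getKey())) {
                // WHEN MATCHED THEN UPDATE SET name = source.name || id
                target.put(s.getKey(), s.getValue() + s.getKey());
                // ... DELETE WHERE id = 1
                if (s.getKey() == 1) {
                    target.remove(s.getKey());
                }
            } else {
                // WHEN NOT MATCHED THEN INSERT
                target.put(s.getKey(), s.getValue());
            }
        }
    }

    public static void main(String[] args) {
        Map<Integer, String> target = new LinkedHashMap<>();
        target.put(1, "Marcy1");
        target.put(2, "Marcy2");
        Map<Integer, String> source = new LinkedHashMap<>();
        source.put(1, "Marcy1");
        source.put(2, "Marcy2");
        source.put(3, "Marcy3");
        mergeUsing(target, source);
        // row 1 deleted, row 2 updated, row 3 inserted
        System.out.println(target); // prints {2=Marcy22, 3=Marcy3}
    }
}
```

The result matches the expected-results query of those cases: id 1 is gone, id 2 becomes `Marcy22`, and id 3 is freshly inserted as `Marcy3`.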
+ "MERGE INTO PARENT USING SOURCE ON (PARENT.ID = SOURCE.ID) WHEN MATCHED THEN " + + "UPDATE SET PARENT.NAME = SOURCE.NAME||SOURCE.ID WHERE PARENT.ID = 2 " + + "DELETE WHERE PARENT.ID = 1 WHEN NOT MATCHED THEN " + + "INSERT (ID, NAME) VALUES (SOURCE.ID, SOURCE.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(2,2) UNION ALL " + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(3,3)", + 3, "Duplicate key updated 3 rows at once, only 1 expected"); + // Missing target columns in ON expression + testMergeUsingException( + "CREATE TABLE PARENT AS (SELECT 1 AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );" + + "CREATE TABLE SOURCE AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );", + "MERGE INTO PARENT USING SOURCE ON (1 = SOURCE.ID) WHEN MATCHED THEN " + + "UPDATE SET PARENT.NAME = SOURCE.NAME||SOURCE.ID WHERE PARENT.ID = 2 " + + "DELETE WHERE PARENT.ID = 1 WHEN NOT MATCHED THEN " + + "INSERT (ID, NAME) VALUES (SOURCE.ID, SOURCE.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(2,2) UNION ALL " + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(3,3)", + 3, "No references to target columns found in ON clause"); + // Missing source columns in ON expression + testMergeUsingException( + "CREATE TABLE PARENT AS (SELECT 1 AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );" + + "CREATE TABLE SOURCE AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );", + "MERGE INTO PARENT USING SOURCE ON (PARENT.ID = 1) WHEN MATCHED THEN " + + "UPDATE SET PARENT.NAME = SOURCE.NAME||SOURCE.ID WHERE PARENT.ID = 2 " + + "DELETE WHERE PARENT.ID = 1 WHEN NOT MATCHED THEN " + + "INSERT (ID, NAME) VALUES (SOURCE.ID, SOURCE.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X||X AS NAME FROM SYSTEM_RANGE(2,2) UNION ALL " + + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(3,3)", + 3, "No references to source columns found in ON clause"); + // 
Insert does not insert correct values with respect to ON condition + // (inserts ID value above 100, instead) + testMergeUsingException( + "CREATE TABLE PARENT AS (SELECT 1 AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(4,4) );" + + "CREATE TABLE SOURCE AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,3) );", + "MERGE INTO PARENT USING SOURCE ON (PARENT.ID = SOURCE.ID) WHEN MATCHED THEN " + + "UPDATE SET PARENT.NAME = SOURCE.NAME||SOURCE.ID WHERE PARENT.ID = 2 " + + "DELETE WHERE PARENT.ID = 1 WHEN NOT MATCHED THEN " + + "INSERT (ID, NAME) VALUES (SOURCE.ID+100, SOURCE.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(4,4)", 1, + "Expected to find key after row inserted, but none found. Insert does not match ON condition."); + // One insert, one update one delete happens, target table missing PK, + // triggers update all NAME fields + triggerTestingUpdateCount = 0; + testMergeUsing( + "CREATE TABLE PARENT AS (SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,2));" + + getCreateTriggerSQL(), + "MERGE INTO PARENT AS P USING " + + "(SELECT X AS ID, 'Marcy'||X AS NAME FROM SYSTEM_RANGE(1,4) ) AS S ON (P.ID = S.ID) " + + "WHEN MATCHED THEN UPDATE SET P.NAME = S.NAME||S.ID WHERE P.ID = 2 " + + "DELETE WHERE P.ID = 1 WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (S.ID, S.NAME)", + GATHER_ORDERED_RESULTS_SQL, + "SELECT 2 AS ID, 'Marcy22-updated2' AS NAME UNION ALL " + + "SELECT X AS ID, 'Marcy'||X||'-inserted'||X AS NAME FROM SYSTEM_RANGE(3,4)", + 4); + } + + /** + * Run a test case of the merge using syntax + * + * @param setupSQL - one or more SQL statements to setup the case + * @param statementUnderTest - the merge statement being tested + * @param gatherResultsSQL - a select which gathers the results of the merge + * from the target table + * @param expectedResultsSQL - a select which returns the expected results + * in the target table + * @param expectedRowUpdateCount - how many updates should be expected from 
+     *            the merge using
+     */
+    private void testMergeUsing(String setupSQL, String statementUnderTest,
+            String gatherResultsSQL, String expectedResultsSQL,
+            int expectedRowUpdateCount) throws Exception {
+        deleteDb("mergeUsingQueries");
+
+        try (Connection conn = getConnection("mergeUsingQueries")) {
+            Statement stat = conn.createStatement();
+            stat.execute(setupSQL);
+
+            PreparedStatement prep = conn.prepareStatement(statementUnderTest);
+            int rowCountUpdate = prep.executeUpdate();
+
+            // compare actual results from the SQL result set with expected
+            // results - by diffing (aka the set MINUS operation)
+            ResultSet rs = stat.executeQuery("( " + gatherResultsSQL + " ) MINUS ( " +
+                    expectedResultsSQL + " )");
+
+            int rowCount = 0;
+            StringBuilder diffBuffer = new StringBuilder();
+            while (rs.next()) {
+                rowCount++;
+                diffBuffer.append("|");
+                for (int i = 1; i <= rs.getMetaData().getColumnCount(); i++) {
+                    diffBuffer.append(rs.getObject(i));
+                    diffBuffer.append("|\n");
+                }
+            }
+            assertEquals("Differences between expected and actual output found:" +
+                    diffBuffer, 0, rowCount);
+            assertEquals("Expected update counts differ",
+                    expectedRowUpdateCount, rowCountUpdate);
+        } finally {
+            deleteDb("mergeUsingQueries");
+        }
+    }
+
+    /**
+     * Run a test case of the merge using syntax.
+     *
+     * @param setupSQL - one or more SQL statements to set up the case
+     * @param statementUnderTest - the merge statement being tested
+     * @param gatherResultsSQL - a select which gathers the results of the merge
+     *            from the target table
+     * @param expectedResultsSQL - a select which returns the expected results
+     *            in the target table
+     * @param expectedRowUpdateCount - how many updates should be expected from
+     *            the merge using
+     * @param exceptionMessage - the exception message expected
+     */
+    private void testMergeUsingException(String setupSQL,
+            String statementUnderTest, String gatherResultsSQL,
+            String expectedResultsSQL, int expectedRowUpdateCount,
+            String exceptionMessage) throws Exception 
{ + try { + testMergeUsing(setupSQL, statementUnderTest, gatherResultsSQL, + expectedResultsSQL, expectedRowUpdateCount); + } catch (RuntimeException | org.h2.jdbc.JdbcSQLException e) { + if (!e.getMessage().contains(exceptionMessage)) { + e.printStackTrace(); + } + assertContains(e.getMessage(), exceptionMessage); + return; + } + fail("Failed to see exception with message:" + exceptionMessage); + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + + if (conn == null) { + throw new AssertionError("connection is null"); + } + if (triggerName.startsWith("INS_BEFORE")) { + newRow[1] = newRow[1] + "-inserted" + (++triggerTestingUpdateCount); + } else if (triggerName.startsWith("UPD_BEFORE")) { + newRow[1] = newRow[1] + "-updated" + (++triggerTestingUpdateCount); + } else if (triggerName.startsWith("DEL_BEFORE")) { + oldRow[1] = oldRow[1] + "-deleted" + (++triggerTestingUpdateCount); + } + } + + @Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + + @Override + public void init(Connection conn, String schemaName, String trigger, + String tableName, boolean before, int type) { + this.triggerName = trigger; + if (!"PARENT".equals(tableName)) { + throw new AssertionError("supposed to be PARENT"); + } + if ((trigger.endsWith("AFTER") && before) + || (trigger.endsWith("BEFORE") && !before)) { + throw new AssertionError( + "triggerName: " + trigger + " before:" + before); + } + if ((trigger.startsWith("UPD") && type != UPDATE) + || (trigger.startsWith("INS") && type != INSERT) + || (trigger.startsWith("DEL") && type != DELETE)) { + throw new AssertionError( + "triggerName: " + trigger + " type:" + type); + } + } + + private String getCreateTriggerSQL() { + StringBuilder buf = new StringBuilder(); + buf.append("CREATE TRIGGER INS_BEFORE " + "BEFORE INSERT ON PARENT " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\";"); + buf.append("CREATE TRIGGER 
UPD_BEFORE " + "BEFORE UPDATE ON PARENT " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\";"); + buf.append("CREATE TRIGGER DEL_BEFORE " + "BEFORE DELETE ON PARENT " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\";"); + return buf.toString(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestMultiConn.java b/modules/h2/src/test/java/org/h2/test/db/TestMultiConn.java new file mode 100644 index 0000000000000..2870437de94b3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestMultiConn.java @@ -0,0 +1,245 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.DatabaseEventListener; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Multi-connection tests. + */ +public class TestMultiConn extends TestBase { + + /** + * How long to wait in milliseconds. + */ + static int wait; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testConcurrentShutdownQuery(); + testCommitRollback(); + testConcurrentOpen(); + testThreeThreads(); + deleteDb("multiConn"); + } + + private void testConcurrentShutdownQuery() throws Exception { + Connection conn1 = getConnection("multiConn"); + Connection conn2 = getConnection("multiConn"); + final Statement stat1 = conn1.createStatement(); + stat1.execute("CREATE ALIAS SLEEP FOR \"java.lang.Thread.sleep(long)\""); + final Statement stat2 = conn2.createStatement(); + stat1.execute("SET THROTTLE 100"); + Task t = new Task() { + @Override + public void call() throws Exception { + stat2.executeQuery("CALL SLEEP(100)"); + Thread.sleep(10); + stat2.executeQuery("CALL SLEEP(100)"); + } + }; + t.execute(); + Thread.sleep(50); + stat1.execute("SHUTDOWN"); + conn1.close(); + try { + conn2.close(); + } catch (SQLException e) { + // ignore + } + try { + t.get(); + } catch (Exception e) { + // ignore + } + } + + private void testThreeThreads() throws Exception { + deleteDb("multiConn"); + final Connection conn1 = getConnection("multiConn"); + final Connection conn2 = getConnection("multiConn"); + final Connection conn3 = getConnection("multiConn"); + conn1.setAutoCommit(false); + conn2.setAutoCommit(false); + conn3.setAutoCommit(false); + final Statement s1 = conn1.createStatement(); + final Statement s2 = conn2.createStatement(); + final Statement s3 = conn3.createStatement(); + s1.execute("CREATE TABLE TEST1(ID INT)"); + s2.execute("CREATE TABLE TEST2(ID INT)"); + s3.execute("CREATE TABLE TEST3(ID INT)"); + s1.execute("INSERT INTO TEST1 VALUES(1)"); + s2.execute("INSERT INTO TEST2 VALUES(2)"); + s3.execute("INSERT INTO TEST3 VALUES(3)"); + s1.execute("SET LOCK_TIMEOUT 1000"); + s2.execute("SET LOCK_TIMEOUT 1000"); + s3.execute("SET LOCK_TIMEOUT 1000"); + Thread t1 = new Thread(new Runnable() { + @Override + public void run() { + try { + 
s3.execute("INSERT INTO TEST2 VALUES(4)"); + conn3.commit(); + } catch (SQLException e) { + TestBase.logError("insert", e); + } + } + }); + t1.start(); + Thread.sleep(20); + Thread t2 = new Thread(new Runnable() { + @Override + public void run() { + try { + s2.execute("INSERT INTO TEST1 VALUES(5)"); + conn2.commit(); + } catch (SQLException e) { + TestBase.logError("insert", e); + } + } + }); + t2.start(); + Thread.sleep(20); + conn1.commit(); + t2.join(1000); + t1.join(1000); + ResultSet rs = s1.executeQuery("SELECT * FROM TEST1 ORDER BY ID"); + rs.next(); + assertEquals(1, rs.getInt(1)); + rs.next(); + assertEquals(5, rs.getInt(1)); + assertFalse(rs.next()); + conn1.close(); + conn2.close(); + conn3.close(); + } + + private void testConcurrentOpen() throws Exception { + if (config.memory || config.googleAppEngine) { + return; + } + deleteDb("multiConn"); + Connection conn = getConnection("multiConn"); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + conn.createStatement().execute( + "INSERT INTO TEST VALUES(0, 'Hello'), (1, 'World')"); + conn.createStatement().execute("SHUTDOWN"); + conn.close(); + final String listener = MyDatabaseEventListener.class.getName(); + Runnable r = new Runnable() { + @Override + public void run() { + try { + Connection c1 = getConnection("multiConn;DATABASE_EVENT_LISTENER='" + listener + + "';file_lock=socket"); + c1.close(); + } catch (Exception e) { + TestBase.logError("connect", e); + } + } + }; + Thread thread = new Thread(r); + thread.start(); + Thread.sleep(10); + Connection c2 = getConnection("multiConn;file_lock=socket"); + c2.close(); + thread.join(); + } + + private void testCommitRollback() throws SQLException { + deleteDb("multiConn"); + Connection c1 = getConnection("multiConn"); + Connection c2 = getConnection("multiConn"); + c1.setAutoCommit(false); + c2.setAutoCommit(false); + Statement s1 = c1.createStatement(); + s1.execute("DROP TABLE IF EXISTS MULTI_A"); + 
s1.execute("CREATE TABLE MULTI_A(ID INT, NAME VARCHAR(255))"); + s1.execute("INSERT INTO MULTI_A VALUES(0, '0-insert-A')"); + Statement s2 = c2.createStatement(); + s1.execute("DROP TABLE IF EXISTS MULTI_B"); + s1.execute("CREATE TABLE MULTI_B(ID INT, NAME VARCHAR(255))"); + s2.execute("INSERT INTO MULTI_B VALUES(0, '1-insert-B')"); + c1.commit(); + c2.rollback(); + s1.execute("INSERT INTO MULTI_A VALUES(1, '0-insert-C')"); + s2.execute("INSERT INTO MULTI_B VALUES(1, '1-insert-D')"); + c1.rollback(); + c2.commit(); + c1.close(); + c2.close(); + + if (!config.memory) { + Connection conn = getConnection("multiConn"); + ResultSet rs; + rs = conn.createStatement().executeQuery("SELECT * FROM MULTI_A ORDER BY ID"); + rs.next(); + assertEquals("0-insert-A", rs.getString("NAME")); + assertFalse(rs.next()); + rs = conn.createStatement().executeQuery("SELECT * FROM MULTI_B ORDER BY ID"); + rs.next(); + assertEquals("1-insert-D", rs.getString("NAME")); + assertFalse(rs.next()); + conn.close(); + } + + } + + /** + * A database event listener used in this test. 
+ */ + public static final class MyDatabaseEventListener implements + DatabaseEventListener { + + @Override + public void exceptionThrown(SQLException e, String sql) { + // do nothing + } + + @Override + public void setProgress(int state, String name, int x, int max) { + if (wait > 0) { + try { + Thread.sleep(wait); + } catch (InterruptedException e) { + TestBase.logError("sleep", e); + } + } + } + + @Override + public void closingDatabase() { + // do nothing + } + + @Override + public void init(String url) { + // do nothing + } + + @Override + public void opened() { + // do nothing + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestMultiDimension.java b/modules/h2/src/test/java/org/h2/test/db/TestMultiDimension.java new file mode 100644 index 0000000000000..97fcfd653bf65 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestMultiDimension.java @@ -0,0 +1,270 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.test.TestBase; +import org.h2.tools.MultiDimension; + +/** + * Tests the multi-dimension index tool. + */ +public class TestMultiDimension extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws SQLException { + testHelperMethods(); + testPerformance2d(); + testPerformance3d(); + } + + private void testHelperMethods() { + MultiDimension m = MultiDimension.getInstance(); + assertEquals(Integer.MAX_VALUE, m.getMaxValue(2)); + assertEquals(0, m.normalize(2, 0, 0, 100)); + assertEquals(Integer.MAX_VALUE / 2, m.normalize(2, 50, 0, 100)); + assertEquals(Integer.MAX_VALUE, m.normalize(2, 100, 0, 100)); + assertEquals(Integer.MAX_VALUE / 10, m.normalize(2, 0.1, 0, 1)); + assertEquals(0, m.normalize(2, 1, 1, 1)); + assertEquals(0, m.normalize(2, 0, 0, 0)); + assertEquals(3, m.interleave(1, 1)); + assertEquals(3, m.interleave(new int[]{1, 1})); + assertEquals(5, m.interleave(3, 0)); + assertEquals(5, m.interleave(new int[]{3, 0})); + assertEquals(10, m.interleave(0, 3)); + assertEquals(10, m.interleave(new int[] { 0, 3 })); + long v = Integer.MAX_VALUE | ((long) Integer.MAX_VALUE << 31L); + assertEquals(v, m.interleave(Integer.MAX_VALUE, Integer.MAX_VALUE)); + assertEquals(v, m.interleave(new int[] { + Integer.MAX_VALUE, Integer.MAX_VALUE })); + Random random = new Random(1); + for (int i = 0; i < 1000; i++) { + int x = random.nextInt(Integer.MAX_VALUE), y = + random.nextInt(Integer.MAX_VALUE); + v = m.interleave(new int[] {x, y}); + long v2 = m.interleave(x, y); + assertEquals(v, v2); + int x1 = m.deinterleave(2, v, 0); + int y1 = m.deinterleave(2, v, 1); + assertEquals(x, x1); + assertEquals(y, y1); + } + for (int i = 0; i < 1000; i++) { + int x = random.nextInt(1000), y = random.nextInt(1000), + z = random.nextInt(1000); + MultiDimension tool = MultiDimension.getInstance(); + long xyz = tool.interleave(new int[] { x, y, z }); + assertEquals(x, tool.deinterleave(3, xyz, 0)); + assertEquals(y, tool.deinterleave(3, xyz, 1)); + assertEquals(z, tool.deinterleave(3, xyz, 2)); + } + 
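The round-trip loops above exercise Morton (z-order) encoding, which H2's MultiDimension tool implements. As an illustration only — a naive bit-by-bit sketch, not H2's actual implementation — 2-D interleaving and its inverse can be written as:

```java
public class MortonSketch {

    /** Interleave the low 31 bits of x and y: bit i of x goes to
     *  position 2*i of the result, bit i of y to position 2*i + 1. */
    static long interleave(int x, int y) {
        long z = 0;
        for (int i = 0; i < 31; i++) {
            z |= ((long) ((x >> i) & 1)) << (2 * i);
            z |= ((long) ((y >> i) & 1)) << (2 * i + 1);
        }
        return z;
    }

    /** Recover dimension d (0 = x, 1 = y) from an interleaved value. */
    static int deinterleave(long z, int d) {
        int v = 0;
        for (int i = 0; i < 31; i++) {
            v |= (int) ((z >> (2 * i + d)) & 1) << i;
        }
        return v;
    }

    public static void main(String[] args) {
        // matches the expectations asserted in testHelperMethods():
        System.out.println(interleave(1, 1));  // 3
        System.out.println(interleave(3, 0));  // 5
        System.out.println(interleave(0, 3));  // 10
        System.out.println(deinterleave(interleave(123456, 654321), 1)); // 654321
    }
}
```

The point of the encoding is that points close together in 2-D space tend to be close on the resulting curve, which is why a single index on the interleaved column can answer box queries with few range scans.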
createClassProxy(MultiDimension.class); + assertThrows(IllegalArgumentException.class, m).getMaxValue(1); + assertThrows(IllegalArgumentException.class, m).getMaxValue(33); + assertThrows(IllegalArgumentException.class, m).normalize(2, 10, 11, 12); + assertThrows(IllegalArgumentException.class, m).normalize(2, 5, 10, 0); + assertThrows(IllegalArgumentException.class, m).normalize(2, 10, 0, 9); + assertThrows(IllegalArgumentException.class, m).interleave(-1, 5); + assertThrows(IllegalArgumentException.class, m).interleave(5, -1); + assertThrows(IllegalArgumentException.class, m). + interleave(Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE); + } + + private void testPerformance2d() throws SQLException { + deleteDb("multiDimension"); + Connection conn; + conn = getConnection("multiDimension"); + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS MAP FOR \"" + + getClass().getName() + ".interleave\""); + stat.execute("CREATE TABLE TEST(X INT NOT NULL, Y INT NOT NULL, " + + "XY BIGINT AS MAP(X, Y), DATA VARCHAR)"); + stat.execute("CREATE INDEX IDX_X ON TEST(X, Y)"); + stat.execute("CREATE INDEX IDX_XY ON TEST(XY)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(X, Y, DATA) VALUES(?, ?, ?)"); + // the MultiDimension tool is faster for 4225 (65^2) points + // the more the bigger the difference + int max = getSize(30, 65); + long time = System.nanoTime(); + for (int x = 0; x < max; x++) { + for (int y = 0; y < max; y++) { + long t2 = System.nanoTime(); + if (t2 - time > TimeUnit.SECONDS.toNanos(1)) { + int percent = (int) (100.0 * ((double) x * max + y) / + ((double) max * max)); + trace(percent + "%"); + time = t2; + } + prep.setInt(1, x); + prep.setInt(2, y); + prep.setString(3, "Test data"); + prep.execute(); + } + } + stat.execute("ANALYZE SAMPLE_SIZE 10000"); + PreparedStatement prepRegular = conn.prepareStatement( + "SELECT * FROM TEST WHERE X BETWEEN ? AND ? " + + "AND Y BETWEEN ? AND ? 
ORDER BY X, Y"); + MultiDimension multi = MultiDimension.getInstance(); + String sql = multi.generatePreparedQuery("TEST", "XY", + new String[] { "X", "Y" }); + sql += " ORDER BY X, Y"; + PreparedStatement prepMulti = conn.prepareStatement(sql); + long timeMulti = 0, timeRegular = 0; + int timeMax = getSize(500, 2000); + Random rand = new Random(1); + while (timeMulti < timeMax) { + int size = rand.nextInt(max / 10); + int minX = rand.nextInt(max - size); + int minY = rand.nextInt(max - size); + int maxX = minX + size, maxY = minY + size; + time = System.nanoTime(); + ResultSet rs1 = multi.getResult(prepMulti, + new int[] { minX, minY }, new int[] { maxX, maxY }); + timeMulti += System.nanoTime() - time; + time = System.nanoTime(); + prepRegular.setInt(1, minX); + prepRegular.setInt(2, maxX); + prepRegular.setInt(3, minY); + prepRegular.setInt(4, maxY); + ResultSet rs2 = prepRegular.executeQuery(); + timeRegular += System.nanoTime() - time; + while (rs1.next()) { + assertTrue(rs2.next()); + assertEquals(rs1.getInt(1), rs2.getInt(1)); + assertEquals(rs1.getInt(2), rs2.getInt(2)); + } + assertFalse(rs2.next()); + } + conn.close(); + deleteDb("multiDimension"); + trace("2d: regular: " + TimeUnit.NANOSECONDS.toMillis(timeRegular) + + " MultiDimension: " + TimeUnit.NANOSECONDS.toMillis(timeMulti)); + } + + private void testPerformance3d() throws SQLException { + deleteDb("multiDimension"); + Connection conn; + conn = getConnection("multiDimension"); + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS MAP FOR \"" + + getClass().getName() + ".interleave\""); + stat.execute("CREATE TABLE TEST(X INT NOT NULL, " + + "Y INT NOT NULL, Z INT NOT NULL, " + + "XYZ BIGINT AS MAP(X, Y, Z), DATA VARCHAR)"); + stat.execute("CREATE INDEX IDX_X ON TEST(X, Y, Z)"); + stat.execute("CREATE INDEX IDX_XYZ ON TEST(XYZ)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(X, Y, Z, DATA) VALUES(?, ?, ?, ?)"); + // the MultiDimension tool is faster for 
8000 (20^3) points + // the more the bigger the difference + int max = getSize(10, 20); + long time = System.nanoTime(); + for (int x = 0; x < max; x++) { + for (int y = 0; y < max; y++) { + for (int z = 0; z < max; z++) { + long t2 = System.nanoTime(); + if (t2 - time > TimeUnit.SECONDS.toNanos(1)) { + int percent = (int) (100.0 * ((double) x * max + y) / + ((double) max * max)); + trace(percent + "%"); + time = t2; + } + prep.setInt(1, x); + prep.setInt(2, y); + prep.setInt(3, z); + prep.setString(4, "Test data"); + prep.execute(); + } + } + } + stat.execute("ANALYZE SAMPLE_SIZE 10000"); + PreparedStatement prepRegular = conn.prepareStatement( + "SELECT * FROM TEST WHERE X BETWEEN ? AND ? " + + "AND Y BETWEEN ? AND ? AND Z BETWEEN ? AND ? ORDER BY X, Y, Z"); + MultiDimension multi = MultiDimension.getInstance(); + String sql = multi.generatePreparedQuery("TEST", "XYZ", new String[] { + "X", "Y", "Z" }); + sql += " ORDER BY X, Y, Z"; + PreparedStatement prepMulti = conn.prepareStatement(sql); + long timeMulti = 0, timeRegular = 0; + int timeMax = getSize(500, 2000); + Random rand = new Random(1); + while (timeMulti < timeMax) { + int size = rand.nextInt(max / 10); + int minX = rand.nextInt(max - size); + int minY = rand.nextInt(max - size); + int minZ = rand.nextInt(max - size); + int maxX = minX + size, maxY = minY + size, maxZ = minZ + size; + time = System.nanoTime(); + ResultSet rs1 = multi.getResult(prepMulti, new int[] { minX, minY, + minZ }, new int[] { maxX, maxY, maxZ }); + timeMulti += System.nanoTime() - time; + time = System.nanoTime(); + prepRegular.setInt(1, minX); + prepRegular.setInt(2, maxX); + prepRegular.setInt(3, minY); + prepRegular.setInt(4, maxY); + prepRegular.setInt(5, minZ); + prepRegular.setInt(6, maxZ); + ResultSet rs2 = prepRegular.executeQuery(); + timeRegular += System.nanoTime() - time; + while (rs1.next()) { + assertTrue(rs2.next()); + assertEquals(rs1.getInt(1), rs2.getInt(1)); + assertEquals(rs1.getInt(2), rs2.getInt(2)); + } + 
assertFalse(rs2.next()); + } + conn.close(); + deleteDb("multiDimension"); + trace("3d: regular: " + TimeUnit.NANOSECONDS.toMillis(timeRegular) + + " MultiDimension: " + TimeUnit.NANOSECONDS.toMillis(timeMulti)); + } + + /** + * This method is called via reflection from the database. + * + * @param x the x value + * @param y the y value + * @return the bit-interleaved value + */ + public static long interleave(int x, int y) { + return MultiDimension.getInstance().interleave(x, y); + } + + /** + * This method is called via reflection from the database. + * + * @param x the x value + * @param y the y value + * @param z the z value + * @return the bit-interleaved value + */ + public static long interleave(int x, int y, int z) { + return MultiDimension.getInstance().interleave(new int[] { x, y, z }); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestMultiThread.java b/modules/h2/src/test/java/org/h2/test/db/TestMultiThread.java new file mode 100644 index 0000000000000..8363c7d259e73 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestMultiThread.java @@ -0,0 +1,489 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.StringReader; +import java.math.BigDecimal; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; +import org.h2.api.ErrorCode; +import org.h2.jdbc.JdbcSQLException; +import org.h2.test.TestAll; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; +import org.h2.util.SmallLRUCache; +import org.h2.util.SynchronizedVerifier; +import org.h2.util.Task; + +/** + * Multi-threaded tests. + */ +public class TestMultiThread extends TestBase implements Runnable { + + private boolean stop; + private TestMultiThread parent; + private Random random; + + public TestMultiThread() { + // nothing to do + } + + private TestMultiThread(TestAll config, TestMultiThread parent) { + this.config = config; + this.parent = parent; + random = new Random(); + } + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testConcurrentSchemaChange(); + testConcurrentLobAdd(); + testConcurrentView(); + testConcurrentAlter(); + testConcurrentAnalyze(); + testConcurrentInsertUpdateSelect(); + testLockModeWithMultiThreaded(); + testViews(); + testConcurrentInsert(); + testConcurrentUpdate(); + } + + private void testConcurrentSchemaChange() throws Exception { + String db = getTestName(); + deleteDb(db); + final String url = getURL(db + ";MULTI_THREADED=1;LOCK_TIMEOUT=10000", true); + try (Connection conn = getConnection(url)) { + Task[] tasks = new Task[2]; + for (int i = 0; i < tasks.length; i++) { + final int x = i; + Task t = new Task() { + @Override + public void call() throws Exception { + try (Connection c2 = getConnection(url)) { + Statement stat = c2.createStatement(); + for (int i = 0; !stop; i++) { + stat.execute("create table test" + x + "_" + i); + c2.getMetaData().getTables(null, null, null, null); + stat.execute("drop table test" + x + "_" + i); + } + } + } + }; + tasks[i] = t; + t.execute(); + } + Thread.sleep(1000); + for (Task t : tasks) { + t.get(); + } + } + } + + private void testConcurrentLobAdd() throws Exception { + String db = getTestName(); + deleteDb(db); + final String url = getURL(db + ";MULTI_THREADED=1", true); + try (Connection conn = getConnection(url)) { + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity, data clob)"); + Task[] tasks = new Task[2]; + for (int i = 0; i < tasks.length; i++) { + Task t = new Task() { + @Override + public void call() throws Exception { + try (Connection c2 = getConnection(url)) { + PreparedStatement p2 = c2 + .prepareStatement("insert into test(data) values(?)"); + while (!stop) { + p2.setCharacterStream(1, new StringReader(new String( + new char[10 * 1024]))); + p2.execute(); + } + } + } + }; + tasks[i] = t; + t.execute(); + } + Thread.sleep(500); + for (Task t : 
tasks) { + t.get(); + } + } + } + + private void testConcurrentView() throws Exception { + if (config.mvcc || config.mvStore) { + return; + } + String db = getTestName(); + deleteDb(db); + final String url = getURL(db + ";MULTI_THREADED=1", true); + final Random r = new Random(); + try (Connection conn = getConnection(url)) { + Statement stat = conn.createStatement(); + StringBuilder buff = new StringBuilder(); + buff.append("create table test(id int"); + final int len = 3; + for (int i = 0; i < len; i++) { + buff.append(", x" + i + " int"); + } + buff.append(")"); + stat.execute(buff.toString()); + stat.execute("create view test_view as select * from test"); + stat.execute("insert into test(id) select x from system_range(1, 2)"); + Task t = new Task() { + @Override + public void call() throws Exception { + Connection c2 = getConnection(url); + while (!stop) { + c2.prepareStatement("select * from test_view where x" + + r.nextInt(len) + "=1"); + } + c2.close(); + } + }; + t.execute(); + SynchronizedVerifier.setDetect(SmallLRUCache.class, true); + for (int i = 0; i < 1000; i++) { + conn.prepareStatement("select * from test_view where x" + + r.nextInt(len) + "=1"); + } + t.get(); + SynchronizedVerifier.setDetect(SmallLRUCache.class, false); + } + } + + private void testConcurrentAlter() throws Exception { + deleteDb(getTestName()); + try (final Connection conn = getConnection(getTestName())) { + Statement stat = conn.createStatement(); + Task t = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + conn.prepareStatement("select * from test"); + } + } + }; + stat.execute("create table test(id int)"); + t.execute(); + for (int i = 0; i < 200; i++) { + stat.execute("alter table test add column x int"); + stat.execute("alter table test drop column x"); + } + t.get(); + } + } + + private void testConcurrentAnalyze() throws Exception { + if (config.mvcc) { + return; + } + deleteDb(getTestName()); + final String url = 
getURL("concurrentAnalyze;MULTI_THREADED=1", true); + try (Connection conn = getConnection(url)) { + Statement stat = conn.createStatement(); + stat.execute("create table test(id bigint primary key) " + + "as select x from system_range(1, 1000)"); + Task t = new Task() { + @Override + public void call() throws SQLException { + try (Connection conn2 = getConnection(url)) { + for (int i = 0; i < 1000; i++) { + conn2.createStatement().execute("analyze"); + } + } + } + }; + t.execute(); + Thread.yield(); + for (int i = 0; i < 1000; i++) { + conn.createStatement().execute("analyze"); + } + t.get(); + stat.execute("drop table test"); + } + } + + private void testConcurrentInsertUpdateSelect() throws Exception { + try (Connection conn = getConnection()) { + Statement stmt = conn.createStatement(); + stmt.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR)"); + int len = getSize(10, 200); + Thread[] threads = new Thread[len]; + for (int i = 0; i < len; i++) { + threads[i] = new Thread(new TestMultiThread(config, this)); + } + for (int i = 0; i < len; i++) { + threads[i].start(); + } + int sleep = getSize(400, 10000); + Thread.sleep(sleep); + this.stop = true; + for (int i = 0; i < len; i++) { + threads[i].join(); + } + ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + trace("max id=" + rs.getInt(1)); + } + } + + private Connection getConnection() throws SQLException { + return getConnection("jdbc:h2:mem:" + getTestName()); + } + + @Override + public void run() { + try (Connection conn = getConnection()) { + Statement stmt = conn.createStatement(); + while (!parent.stop) { + stmt.execute("SELECT COUNT(*) FROM TEST"); + stmt.execute("INSERT INTO TEST VALUES(NULL, 'Hi')"); + PreparedStatement prep = conn.prepareStatement( + "UPDATE TEST SET NAME='Hello' WHERE ID=?"); + prep.setInt(1, random.nextInt(10000)); + prep.execute(); + prep = conn.prepareStatement("SELECT * FROM TEST WHERE ID=?"); + prep.setInt(1, random.nextInt(10000)); + ResultSet rs = 
prep.executeQuery(); + while (rs.next()) { + rs.getString("NAME"); + } + } + } catch (Exception e) { + logError("multi", e); + } + } + + private void testLockModeWithMultiThreaded() throws Exception { + // currently the combination of LOCK_MODE=0 and MULTI_THREADED + // is not supported + deleteDb("lockMode"); + final String url = getURL("lockMode;MULTI_THREADED=1", true); + try (Connection conn = getConnection(url)) { + DatabaseMetaData meta = conn.getMetaData(); + assertFalse(meta.supportsTransactionIsolationLevel( + Connection.TRANSACTION_READ_UNCOMMITTED)); + } + deleteDb("lockMode"); + } + + private void testViews() throws Exception { + // currently the combination of LOCK_MODE=0 and MULTI_THREADED + // is not supported + deleteDb("lockMode"); + final String url = getURL("lockMode;MULTI_THREADED=1", true); + + // create some common tables and views + ExecutorService executor = Executors.newFixedThreadPool(8); + Connection conn = getConnection(url); + try { + Statement stat = conn.createStatement(); + stat.execute( + "CREATE TABLE INVOICE(INVOICE_ID INT PRIMARY KEY, AMOUNT DECIMAL)"); + stat.execute("CREATE VIEW INVOICE_VIEW as SELECT * FROM INVOICE"); + + stat.execute( + "CREATE TABLE INVOICE_DETAIL(DETAIL_ID INT PRIMARY KEY, " + + "INVOICE_ID INT, DESCRIPTION VARCHAR)"); + stat.execute( + "CREATE VIEW INVOICE_DETAIL_VIEW as SELECT * FROM INVOICE_DETAIL"); + + stat.close(); + + // create views that reference the common views in different threads + ArrayList<Future<Void>> jobs = new ArrayList<>(); + for (int i = 0; i < 1000; i++) { + final int j = i; + jobs.add(executor.submit(new Callable<Void>() { + @Override + public Void call() throws Exception { + try (Connection conn2 = getConnection(url)) { + Statement stat2 = conn2.createStatement(); + + stat2.execute("CREATE VIEW INVOICE_VIEW" + j + + " as SELECT * FROM INVOICE_VIEW"); + + // the following query intermittently results in a + // NullPointerException + stat2.execute("CREATE VIEW INVOICE_DETAIL_VIEW" + j + + " as SELECT 
DTL.* FROM INVOICE_VIEW" + j + + " INV JOIN INVOICE_DETAIL_VIEW DTL " + + "ON INV.INVOICE_ID = DTL.INVOICE_ID" + + " WHERE DESCRIPTION='TEST'"); + + ResultSet rs = stat2 + .executeQuery("SELECT * FROM INVOICE_VIEW" + j); + rs.next(); + rs.close(); + + rs = stat2.executeQuery( + "SELECT * FROM INVOICE_DETAIL_VIEW" + j); + rs.next(); + rs.close(); + + stat2.close(); + } + return null; + } + })); + } + // check for exceptions + for (Future<Void> job : jobs) { + try { + job.get(); + } catch (ExecutionException ex) { + // ignore timeout exceptions, happens periodically when the + // machine is really busy and it's not the thing we are + // trying to test + if (!(ex.getCause() instanceof JdbcSQLException) + || ((JdbcSQLException) ex.getCause()) + .getErrorCode() != ErrorCode.LOCK_TIMEOUT_1) { + throw ex; + } + } + } + } finally { + IOUtils.closeSilently(conn); + executor.shutdown(); + executor.awaitTermination(20, TimeUnit.SECONDS); + } + + deleteDb("lockMode"); + } + + private void testConcurrentInsert() throws Exception { + deleteDb("lockMode"); + + final String url = getURL("lockMode;MULTI_THREADED=1;LOCK_TIMEOUT=10000", true); + int threadCount = 25; + ExecutorService executor = Executors.newFixedThreadPool(threadCount); + Connection conn = getConnection(url); + try { + conn.createStatement().execute( + "CREATE TABLE IF NOT EXISTS TRAN (ID NUMBER(18,0) not null PRIMARY KEY)"); + + final ArrayList<Callable<Void>> callables = new ArrayList<>(); + for (int i = 0; i < threadCount; i++) { + final long initialTransactionId = i * 1000000L; + callables.add(new Callable<Void>() { + @Override + public Void call() throws Exception { + try (Connection taskConn = getConnection(url)) { + taskConn.setAutoCommit(false); + PreparedStatement insertTranStmt = taskConn + .prepareStatement("INSERT INTO tran (id) VALUES(?)"); + // to guarantee uniqueness + long tranId = initialTransactionId; + for (int j = 0; j < 1000; j++) { + insertTranStmt.setLong(1, tranId++); + insertTranStmt.execute(); + taskConn.commit(); + 
} + } + return null; + } + }); + } + + final ArrayList<Future<Void>> jobs = new ArrayList<>(); + for (int i = 0; i < threadCount; i++) { + jobs.add(executor.submit(callables.get(i))); + } + // check for exceptions + for (Future<Void> job : jobs) { + job.get(5, TimeUnit.MINUTES); + } + } finally { + IOUtils.closeSilently(conn); + executor.shutdown(); + executor.awaitTermination(20, TimeUnit.SECONDS); + } + + deleteDb("lockMode"); + } + + private void testConcurrentUpdate() throws Exception { + deleteDb("lockMode"); + + final int objectCount = 10000; + final String url = getURL("lockMode;MULTI_THREADED=1;LOCK_TIMEOUT=10000", true); + int threadCount = 25; + ExecutorService executor = Executors.newFixedThreadPool(threadCount); + Connection conn = getConnection(url); + try { + conn.createStatement().execute( + "CREATE TABLE IF NOT EXISTS ACCOUNT" + + "(ID NUMBER(18,0) not null PRIMARY KEY, BALANCE NUMBER null)"); + final PreparedStatement mergeAcctStmt = conn + .prepareStatement("MERGE INTO Account(id, balance) key (id) VALUES (?, ?)"); + for (int i = 0; i < objectCount; i++) { + mergeAcctStmt.setLong(1, i); + mergeAcctStmt.setBigDecimal(2, BigDecimal.ZERO); + mergeAcctStmt.execute(); + } + + final ArrayList<Callable<Void>> callables = new ArrayList<>(); + for (int i = 0; i < threadCount; i++) { + callables.add(new Callable<Void>() { + @Override + public Void call() throws Exception { + try (Connection taskConn = getConnection(url)) { + taskConn.setAutoCommit(false); + final PreparedStatement updateAcctStmt = taskConn + .prepareStatement("UPDATE account SET balance = ? 
WHERE id = ?"); + for (int j = 0; j < 1000; j++) { + updateAcctStmt.setDouble(1, Math.random()); + updateAcctStmt.setLong(2, (int) (Math.random() * objectCount)); + updateAcctStmt.execute(); + taskConn.commit(); + } + } + return null; + } + }); + } + + final ArrayList<Future<Void>> jobs = new ArrayList<>(); + for (int i = 0; i < threadCount; i++) { + jobs.add(executor.submit(callables.get(i))); + } + // check for exceptions + for (Future<Void> job : jobs) { + job.get(5, TimeUnit.MINUTES); + } + } finally { + IOUtils.closeSilently(conn); + executor.shutdown(); + executor.awaitTermination(20, TimeUnit.SECONDS); + } + + deleteDb("lockMode"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestMultiThreadedKernel.java b/modules/h2/src/test/java/org/h2/test/db/TestMultiThreadedKernel.java new file mode 100644 index 0000000000000..b24a9e2805dca --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestMultiThreadedKernel.java @@ -0,0 +1,187 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; +import org.h2.test.TestBase; +import org.h2.util.JdbcUtils; +import org.h2.util.New; +import org.h2.util.Task; + +/** + * A multi-threaded test case. + */ +public class TestMultiThreadedKernel extends TestBase { + + /** + * Stop the current thread. + */ + volatile boolean stop; + + /** + * The exception that occurred in the thread. + */ + Exception exception; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
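testConcurrentInsert and testConcurrentUpdate above share one shape: submit Callable<Void> workers to an ExecutorService, then call Future.get() so an exception thrown on a worker thread is rethrown in the test thread instead of being silently lost. A minimal self-contained sketch of that pattern, using an invented AtomicLong "transaction" workload in place of the database:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class WorkerPoolSketch {

    /** Runs perThread increments on each of threadCount workers and
     *  returns the final count; worker failures propagate via Future.get(). */
    static long runWorkers(int threadCount, final int perThread) throws Exception {
        final AtomicLong counter = new AtomicLong(); // stand-in for the database
        ExecutorService executor = Executors.newFixedThreadPool(threadCount);
        try {
            List<Callable<Void>> callables = new ArrayList<>();
            for (int i = 0; i < threadCount; i++) {
                callables.add(new Callable<Void>() {
                    @Override
                    public Void call() {
                        for (int j = 0; j < perThread; j++) {
                            counter.incrementAndGet(); // one "transaction"
                        }
                        return null;
                    }
                });
            }
            List<Future<Void>> jobs = new ArrayList<>();
            for (Callable<Void> c : callables) {
                jobs.add(executor.submit(c));
            }
            // get() rethrows any worker exception wrapped in an
            // ExecutionException, so a failing worker fails the caller too
            for (Future<Void> job : jobs) {
                job.get(5, TimeUnit.MINUTES);
            }
        } finally {
            executor.shutdown();
            executor.awaitTermination(20, TimeUnit.SECONDS);
        }
        return counter.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWorkers(4, 1000)); // 4000
    }
}
```

This is why the tests collect the Future objects and loop over job.get(...) at the end: plain Thread objects would swallow worker exceptions unless an explicit handler is installed.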
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.mvcc) { + return; + } + if (config.mvStore) { // FIXME can't see why test should not work in MVStore mode + return; + } + deleteDb("multiThreadedKernel"); + testConcurrentRead(); + testCache(); + deleteDb("multiThreadedKernel"); + final String url = getURL("multiThreadedKernel;" + + "DB_CLOSE_DELAY=-1;MULTI_THREADED=1", true); + final String user = getUser(), password = getPassword(); + int len = 3; + Thread[] threads = new Thread[len]; + for (int i = 0; i < len; i++) { + threads[i] = new Thread(new Runnable() { + @Override + public void run() { + Connection conn = null; + try { + for (int j = 0; j < 100 && !stop; j++) { + conn = DriverManager.getConnection( + url, user, password); + work(conn); + } + } catch (Exception e) { + exception = e; + } finally { + JdbcUtils.closeSilently(conn); + } + } + private void work(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + stat.execute( + "create local temporary table temp(id identity)"); + stat.execute( + "insert into temp values(1)"); + conn.close(); + } + }); + } + for (int i = 0; i < len; i++) { + threads[i].start(); + } + Thread.sleep(1000); + stop = true; + for (int i = 0; i < len; i++) { + threads[i].join(); + } + Connection conn = DriverManager.getConnection(url, user, password); + conn.createStatement().execute("shutdown"); + conn.close(); + if (exception != null) { + throw exception; + } + deleteDb("multiThreadedKernel"); + } + + private void testConcurrentRead() throws Exception { + ArrayList<Task> list = New.arrayList(); + int size = 2; + final int count = 1000; + final Connection[] connections = new Connection[count]; + String url = getURL("multiThreadedKernel;" + + "MULTI_THREADED=TRUE;CACHE_SIZE=16", true); + for (int i = 0; i < size; i++) { + final Connection conn = DriverManager.getConnection( + url, getUser(), getPassword()); + connections[i] = 
conn; + if (i == 0) { + Statement stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(id int primary key, name varchar) " + + "as select x, x || space(10) from system_range(1, " + count + ")"); + } + final Random random = new Random(i); + Task t = new Task() { + @Override + public void call() throws Exception { + PreparedStatement prep = conn.prepareStatement( + "select * from test where id = ?"); + while (!stop) { + prep.setInt(1, random.nextInt(count)); + prep.execute(); + } + } + }; + t.execute(); + list.add(t); + } + Thread.sleep(1000); + for (Task t : list) { + t.get(); + } + for (int i = 0; i < size; i++) { + connections[i].close(); + } + } + + private void testCache() throws Exception { + ArrayList<Task> list = New.arrayList(); + int size = 3; + final int count = 100; + final Connection[] connections = new Connection[count]; + String url = getURL("multiThreadedKernel;" + + "MULTI_THREADED=TRUE;CACHE_SIZE=1", true); + for (int i = 0; i < size; i++) { + final Connection conn = DriverManager.getConnection( + url, getUser(), getPassword()); + connections[i] = conn; + if (i == 0) { + Statement stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(id int primary key, name varchar) " + + "as select x, space(3000) from system_range(1, " + count + ")"); + } + final Random random = new Random(i); + Task t = new Task() { + @Override + public void call() throws SQLException { + PreparedStatement prep = conn.prepareStatement( + "select * from test where id = ?"); + while (!stop) { + prep.setInt(1, random.nextInt(count)); + prep.execute(); + } + } + }; + t.execute(); + list.add(t); + } + Thread.sleep(1000); + for (Task t : list) { + t.get(); + } + for (int i = 0; i < size; i++) { + connections[i].close(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestOpenClose.java b/modules/h2/src/test/java/org/h2/test/db/TestOpenClose.java new file mode 
100644 index 0000000000000..501a30782c4f0 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestOpenClose.java @@ -0,0 +1,279 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Restore; +import org.h2.util.Task; + +/** + * Tests opening and closing a database. + */ +public class TestOpenClose extends TestBase { + + private int nextId = 10; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testErrorMessageLocked(); + testErrorMessageWrongSplit(); + testCloseDelay(); + testBackup(); + testCase(); + testReconnectFast(); + deleteDb("openClose"); + } + + private void testErrorMessageLocked() throws Exception { + if (config.memory) { + return; + } + deleteDb("openClose"); + Connection conn; + conn = getConnection("jdbc:h2:" + getBaseDir() + "/openClose;FILE_LOCK=FS"); + assertThrows(ErrorCode.DATABASE_ALREADY_OPEN_1, this).getConnection( + "jdbc:h2:" + getBaseDir() + "/openClose;FILE_LOCK=FS;OPEN_NEW=TRUE"); + conn.close(); + } + + private void testErrorMessageWrongSplit() throws Exception { + if (config.memory || config.reopen) { + return; + } + String fn = getBaseDir() + "/openClose2"; + if (config.mvStore) { + fn += Constants.SUFFIX_MV_FILE; + } else { + fn += Constants.SUFFIX_PAGE_FILE; + } + FileUtils.delete("split:" + fn); + Connection conn; + String url = "jdbc:h2:split:18:" + getBaseDir() + "/openClose2"; + url = getURL(url, true); + conn = DriverManager.getConnection(url); + conn.createStatement().execute("create table test(id int, name varchar) " + + "as select 1, space(1000000)"); + conn.close(); + FileChannel c = FileUtils.open(fn+".1.part", "rw"); + c.position(c.size() * 2 - 1); + c.write(ByteBuffer.wrap(new byte[1])); + c.close(); + if (config.mvStore) { + assertThrows(ErrorCode.IO_EXCEPTION_1, this).getConnection(url); + } else { + assertThrows(ErrorCode.IO_EXCEPTION_2, this).getConnection(url); + } + FileUtils.delete("split:" + fn); + } + + private void testCloseDelay() throws Exception { + deleteDb("openClose"); + String url = getURL("openClose;DB_CLOSE_DELAY=1", true); + String user = getUser(), password = getPassword(); + Connection conn = DriverManager.getConnection(url, user, password); + conn.close(); + Thread.sleep(950); + long time = System.nanoTime(); + while (System.nanoTime() - time < 
TimeUnit.MILLISECONDS.toNanos(100)) { + conn = DriverManager.getConnection(url, user, password); + conn.close(); + } + conn = DriverManager.getConnection(url, user, password); + conn.createStatement().execute("SHUTDOWN"); + conn.close(); + } + + private void testBackup() throws SQLException { + if (config.memory) { + return; + } + deleteDb("openClose"); + String url = getURL("openClose", true); + org.h2.Driver.load(); + Connection conn = DriverManager.getConnection(url, "sa", "abc def"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(C CLOB)"); + stat.execute("INSERT INTO TEST VALUES(SPACE(10000))"); + stat.execute("BACKUP TO '" + getBaseDir() + "/test.zip'"); + conn.close(); + deleteDb("openClose"); + Restore.execute(getBaseDir() + "/test.zip", getBaseDir(), null); + conn = DriverManager.getConnection(url, "sa", "abc def"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + rs.next(); + assertEquals(10000, rs.getString(1).length()); + assertFalse(rs.next()); + conn.close(); + FileUtils.delete(getBaseDir() + "/test.zip"); + } + + private void testReconnectFast() throws SQLException { + if (config.memory) { + return; + } + + deleteDb("openClose"); + String user = getUser(), password = getPassword(); + String url = getURL("openClose;DATABASE_EVENT_LISTENER='" + + MyDatabaseEventListener.class.getName() + "'", true); + Connection conn = DriverManager.getConnection(url, user, password); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR)"); + stat.execute("SET MAX_MEMORY_UNDO 100000"); + stat.execute("CREATE INDEX IDXNAME ON TEST(NAME)"); + stat.execute("INSERT INTO TEST SELECT X, X || ' Data' " + + "FROM SYSTEM_RANGE(1, 1000)"); + stat.close(); + conn.close(); + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM DUAL"); + if (rs.next()) { + rs.getString(1); + 
+        }
+        rs.close();
+        stat.close();
+        conn.close();
+        conn = DriverManager.getConnection(url, user, password);
+        stat = conn.createStatement();
+        // stat.execute("SET DB_CLOSE_DELAY 0");
+        stat.executeUpdate("SHUTDOWN");
+        stat.close();
+        conn.close();
+    }
+
+    private void testCase() throws Exception {
+        if (config.memory) {
+            return;
+        }
+
+        org.h2.Driver.load();
+        deleteDb("openClose");
+        final String url = getURL("openClose;FILE_LOCK=NO", true);
+        final String user = getUser(), password = getPassword();
+        Connection conn = DriverManager.getConnection(url, user, password);
+        conn.createStatement().execute("drop table employee if exists");
+        conn.createStatement().execute(
+                "create table employee(id int primary key, name varchar, salary int)");
+        conn.close();
+        // previously using getSize(200, 1000);
+        // but for Ubuntu, the default ulimit is 1024,
+        // which breaks the test
+        int len = getSize(10, 50);
+        Task[] tasks = new Task[len];
+        for (int i = 0; i < len; i++) {
+            tasks[i] = new Task() {
+                @Override
+                public void call() throws SQLException {
+                    Connection c = DriverManager.getConnection(url, user, password);
+                    PreparedStatement prep = c
+                            .prepareStatement("insert into employee values(?, ?, 0)");
+                    int id = getNextId();
+                    prep.setInt(1, id);
+                    prep.setString(2, "employee " + id);
+                    prep.execute();
+                    c.close();
+                }
+            };
+            tasks[i].execute();
+        }
+        // for(int i=0; i<len; i++) {
+        //     tasks[i].join();
+        // }
+        for (Task t : tasks) {
+            t.get();
+        }
+    }
+
+    synchronized int getNextId() {
+        return nextId++;
+    }
+
+    /**
+     * A database event listener used in this test.
+     */
+    public static final class MyDatabaseEventListener implements
+            DatabaseEventListener {
+
+        @Override
+        public void exceptionThrown(SQLException e, String sql) {
+            throw new AssertionError("unexpected: " + e + " sql: " + sql);
+        }
+
+        @Override
+        public void setProgress(int state, String name, int current, int max) {
+            String stateName;
+            switch (state) {
+            case STATE_SCAN_FILE:
+                stateName = "Scan " + name + " " + current + "/" + max;
+                if (current > 0) {
+                    throw new AssertionError("unexpected: " + stateName);
+                }
+                break;
+            case STATE_STATEMENT_START:
+                break;
+            case STATE_CREATE_INDEX:
+                stateName = "Create Index " + name + " " + current + "/" + max;
+                if (!"SYS:SYS_ID".equals(name)) {
+                    throw new AssertionError("unexpected: " + stateName);
+                }
+                break;
+            case STATE_RECOVER:
+                stateName = "Recover " + current + "/" + max;
+                break;
+            default:
+                stateName = "?";
+            }
+            // System.out.println(": " + stateName);
+        }
+
+        @Override
+        public void closingDatabase() {
+            // nothing to do
+        }
+
+        @Override
+        public void init(String url) {
+            // nothing to do
+ } + + @Override + public void opened() { + // nothing to do + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestOptimizations.java b/modules/h2/src/test/java/org/h2/test/db/TestOptimizations.java new file mode 100644 index 0000000000000..5ca792133e468 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestOptimizations.java @@ -0,0 +1,1178 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Random; +import java.util.TreeSet; +import java.util.concurrent.TimeUnit; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.tools.SimpleResultSet; +import org.h2.util.StringUtils; +import org.h2.util.Task; + +/** + * Test various optimizations (query cache, optimization for MIN(..), and + * MAX(..)). + */ +public class TestOptimizations extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("optimizations"); + testIdentityIndexUsage(); + testFastRowIdCondition(); + testExplainRoundTrip(); + testOrderByExpression(); + testGroupSubquery(); + testAnalyzeLob(); + testLike(); + testExistsSubquery(); + testQueryCacheConcurrentUse(); + testQueryCacheResetParams(); + testRowId(); + testSortIndex(); + testAutoAnalyze(); + testInAndBetween(); + testNestedIn(); + testConstantIn1(); + testConstantIn2(); + testConstantTypeConversionToColumnType(); + testNestedInSelectAndLike(); + testNestedInSelect(); + testInSelectJoin(); + testMinMaxNullOptimization(); + testUseCoveringIndex(); + // testUseIndexWhenAllColumnsNotInOrderBy(); + if (config.networked) { + return; + } + testOptimizeInJoinSelect(); + testOptimizeInJoin(); + testMultiColumnRangeQuery(); + testDistinctOptimization(); + testQueryCacheTimestamp(); + if (!config.lazy) { + testQueryCacheSpeed(); + } + testQueryCache(true); + testQueryCache(false); + testIn(); + testMinMaxCountOptimization(true); + testMinMaxCountOptimization(false); + testOrderedIndexes(); + testIndexUseDespiteNullsFirst(); + testConvertOrToIn(); + deleteDb("optimizations"); + } + + private void testIdentityIndexUsage() throws Exception { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(a identity)"); + stat.execute("insert into test values()"); + ResultSet rs = stat.executeQuery("explain select * from test where a = 1"); + rs.next(); + assertContains(rs.getString(1), "PRIMARY_KEY"); + stat.execute("drop table test"); + conn.close(); + } + + private void testFastRowIdCondition() throws Exception { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.executeUpdate("create table many(id int) " + + "as select x from system_range(1, 10000)"); + ResultSet rs = stat.executeQuery("explain 
analyze select * from many " + + "where _rowid_ = 400"); + rs.next(); + assertContains(rs.getString(1), "/* scanCount: 2 */"); + conn.close(); + } + + private void testExplainRoundTrip() throws Exception { + Connection conn = getConnection("optimizations"); + assertExplainRoundTrip(conn, + "select x from dual where x > any(select x from dual)"); + conn.close(); + } + + private void assertExplainRoundTrip(Connection conn, String sql) + throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("explain " + sql); + rs.next(); + String plan = rs.getString(1).toLowerCase(); + plan = plan.replaceAll("\\s+", " "); + plan = plan.replaceAll("/\\*[^\\*]*\\*/", ""); + plan = plan.replaceAll("\\s+", " "); + plan = StringUtils.replaceAll(plan, "system_range(1, 1)", "dual"); + plan = plan.replaceAll("\\( ", "\\("); + plan = plan.replaceAll(" \\)", "\\)"); + assertEquals(plan, sql); + } + + private void testOrderByExpression() throws Exception { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello'), (2, 'Hello'), (3, 'Hello')"); + ResultSet rs; + rs = stat.executeQuery( + "explain select name from test where name='Hello' order by name"); + rs.next(); + String plan = rs.getString(1); + assertContains(plan, "tableScan"); + stat.execute("drop table test"); + conn.close(); + } + + private void testGroupSubquery() throws Exception { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table t1(id int)"); + stat.execute("create table t2(id int)"); + stat.execute("insert into t1 values(2), (2), (3)"); + stat.execute("insert into t2 values(2), (3)"); + stat.execute("create index t1id_index on t1(id)"); + ResultSet rs; + rs = stat.executeQuery("select id, (select count(*) from t2 " + + "where t2.id = t1.id) cc from t1 
group by id order by id"); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals(1, rs.getInt(2)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals(1, rs.getInt(2)); + rs.next(); + stat.execute("drop table t1, t2"); + conn.close(); + } + + private void testAnalyzeLob() throws Exception { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(v varchar, b binary, cl clob, bl blob) as " + + "select ' ', '00', ' ', '00' from system_range(1, 100)"); + stat.execute("analyze"); + ResultSet rs = stat.executeQuery("select column_name, selectivity " + + "from information_schema.columns where table_name='TEST'"); + rs.next(); + assertEquals("V", rs.getString(1)); + assertEquals(1, rs.getInt(2)); + rs.next(); + assertEquals("B", rs.getString(1)); + assertEquals(1, rs.getInt(2)); + rs.next(); + assertEquals("CL", rs.getString(1)); + assertEquals(50, rs.getInt(2)); + rs.next(); + assertEquals("BL", rs.getString(1)); + assertEquals(50, rs.getInt(2)); + stat.execute("drop table test"); + conn.close(); + } + + private void testLike() throws Exception { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(name varchar primary key) as " + + "select x from system_range(1, 10)"); + ResultSet rs = stat.executeQuery("explain select * from test " + + "where name like ? 
|| '%' {1: 'Hello'}"); + rs.next(); + // ensure the ID = 10 part is evaluated first + assertContains(rs.getString(1), "PRIMARY_KEY_"); + stat.execute("drop table test"); + conn.close(); + } + + private void testExistsSubquery() throws Exception { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int) as select x from system_range(1, 10)"); + ResultSet rs = stat.executeQuery("explain select * from test " + + "where exists(select 1 from test, test, test) and id = 10"); + rs.next(); + // ensure the ID = 10 part is evaluated first + assertContains(rs.getString(1), "WHERE (ID = 10)"); + stat.execute("drop table test"); + conn.close(); + } + + private void testQueryCacheConcurrentUse() throws Exception { + if (config.lazy) { + return; + } + final Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data clob)"); + stat.execute("insert into test values(0, space(10000))"); + stat.execute("insert into test values(1, space(10001))"); + Task[] tasks = new Task[2]; + for (int i = 0; i < tasks.length; i++) { + tasks[i] = new Task() { + @Override + public void call() throws Exception { + PreparedStatement prep = conn.prepareStatement( + "select * from test where id = ?"); + while (!stop) { + int x = (int) (Math.random() * 2); + prep.setInt(1, x); + ResultSet rs = prep.executeQuery(); + rs.next(); + String data = rs.getString(2); + if (data.length() != 10000 + x) { + throw new Exception(data.length() + " != " + x); + } + rs.close(); + } + } + }; + tasks[i].execute(); + } + Thread.sleep(1000); + for (Task t : tasks) { + t.get(); + } + stat.execute("drop table test"); + conn.close(); + } + + private void testQueryCacheResetParams() throws SQLException { + Connection conn = getConnection("optimizations"); + PreparedStatement prep; + prep = conn.prepareStatement("select ?"); + prep.setString(1, "Hello"); + 
prep.execute(); + prep.close(); + prep = conn.prepareStatement("select ?"); + assertThrows(ErrorCode.PARAMETER_NOT_SET_1, prep).execute(); + prep.close(); + conn.close(); + } + + private void testRowId() throws SQLException { + if (config.memory) { + return; + } + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("create table test(data varchar)"); + stat.execute("select min(_rowid_ + 1) from test"); + stat.execute("insert into test(_rowid_, data) values(10, 'Hello')"); + stat.execute("insert into test(data) values('World')"); + stat.execute("insert into test(_rowid_, data) values(20, 'Hello')"); + stat.execute( + "merge into test(_rowid_, data) key(_rowid_) values(20, 'Hallo')"); + rs = stat.executeQuery( + "select _rowid_, data from test order by _rowid_"); + rs.next(); + assertEquals(10, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + rs.next(); + assertEquals(11, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + rs.next(); + assertEquals(21, rs.getInt(1)); + assertEquals("Hallo", rs.getString(2)); + assertFalse(rs.next()); + stat.execute("drop table test"); + + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(0, 'Hello')"); + stat.execute("insert into test values(3, 'Hello')"); + stat.execute("insert into test values(2, 'Hello')"); + + rs = stat.executeQuery("explain select * from test where _rowid_ = 2"); + rs.next(); + assertContains(rs.getString(1), ".tableScan: _ROWID_ ="); + + rs = stat.executeQuery("explain select * from test where _rowid_ > 2"); + rs.next(); + assertContains(rs.getString(1), ".tableScan: _ROWID_ >"); + + rs = stat.executeQuery("explain select * from test order by _rowid_"); + rs.next(); + assertContains(rs.getString(1), "/* index sorted */"); + rs = stat.executeQuery("select _rowid_, * from test order by _rowid_"); + rs.next(); + assertEquals(0, rs.getInt(1)); + assertEquals(0, 
rs.getInt(2)); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals(2, rs.getInt(2)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals(3, rs.getInt(2)); + + stat.execute("drop table test"); + conn.close(); + } + + private void testSortIndex() throws SQLException { + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(id int)"); + stat.execute("create index idx_id_desc on test(id desc)"); + stat.execute("create index idx_id_asc on test(id)"); + ResultSet rs; + + rs = stat.executeQuery("explain select * from test " + + "where id > 10 order by id"); + rs.next(); + assertContains(rs.getString(1), "IDX_ID_ASC"); + + rs = stat.executeQuery("explain select * from test " + + "where id < 10 order by id desc"); + rs.next(); + assertContains(rs.getString(1), "IDX_ID_DESC"); + + rs.next(); + stat.execute("drop table test"); + conn.close(); + } + + private void testAutoAnalyze() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select value " + + "from information_schema.settings where name='analyzeAuto'"); + int auto = rs.next() ? 
rs.getInt(1) : 0; + if (auto != 0) { + stat.execute("create table test(id int)"); + stat.execute("create user onlyInsert password ''"); + stat.execute("grant insert on test to onlyInsert"); + Connection conn2 = getConnection( + "optimizations", "onlyInsert", getPassword("")); + Statement stat2 = conn2.createStatement(); + stat2.execute("insert into test select x " + + "from system_range(1, " + (auto + 10) + ")"); + conn2.close(); + } + conn.close(); + } + + private void testInAndBetween() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("create table test(id int, name varchar)"); + stat.execute("create index idx_name on test(id, name)"); + stat.execute("insert into test values(1, 'Hello'), (2, 'World')"); + rs = stat.executeQuery("select * from test " + + "where id between 1 and 3 and name in ('World')"); + assertTrue(rs.next()); + rs = stat.executeQuery("select * from test " + + "where id between 1 and 3 and name in (select 'World')"); + assertTrue(rs.next()); + stat.execute("drop table test"); + conn.close(); + } + + private void testNestedIn() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("create table accounts(id integer primary key, " + + "status varchar(255), tag varchar(255))"); + stat.execute("insert into accounts values (31, 'X', 'A')"); + stat.execute("create table parent(id int)"); + stat.execute("insert into parent values(31)"); + stat.execute("create view test_view as select a.status, a.tag " + + "from accounts a, parent t where a.id = t.id"); + rs = stat.executeQuery("select * from test_view " + + "where status='X' and tag in ('A','B')"); + assertTrue(rs.next()); + rs = stat.executeQuery("select * from (select a.status, a.tag " + + "from accounts a, parent t where a.id = t.id) x " + + "where 
status='X' and tag in ('A','B')"); + assertTrue(rs.next()); + + stat.execute("create table test(id int primary key, name varchar(255))"); + stat.execute("create unique index idx_name on test(name, id)"); + stat.execute("insert into test values(1, 'Hello'), (2, 'World')"); + rs = stat.executeQuery("select * from (select * from test) " + + "where id=1 and name in('Hello', 'World')"); + assertTrue(rs.next()); + stat.execute("drop table test"); + + conn.close(); + } + + private void testConstantIn1() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + + stat.execute("create table test(id int primary key, name varchar(255))"); + stat.execute("insert into test values(1, 'Hello'), (2, 'World')"); + assertSingleValue(stat, + "select count(*) from test where name in ('Hello', 'World', 1)", 2); + assertSingleValue(stat, + "select count(*) from test where name in ('Hello', 'World')", 2); + assertSingleValue(stat, + "select count(*) from test where name in ('Hello', 'Not')", 1); + stat.execute("drop table test"); + + conn.close(); + } + + private void testConstantIn2() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations;IGNORECASE=TRUE"); + Statement stat = conn.createStatement(); + + stat.executeUpdate("CREATE TABLE testValues (x VARCHAR(50))"); + stat.executeUpdate("INSERT INTO testValues (x) SELECT 'foo' x"); + ResultSet resultSet; + resultSet = stat.executeQuery( + "SELECT x FROM testValues WHERE x IN ('foo')"); + assertTrue(resultSet.next()); + resultSet = stat.executeQuery( + "SELECT x FROM testValues WHERE x IN ('FOO')"); + assertTrue(resultSet.next()); + resultSet = stat.executeQuery( + "SELECT x FROM testValues WHERE x IN ('foo','bar')"); + assertTrue(resultSet.next()); + resultSet = stat.executeQuery( + "SELECT x FROM testValues WHERE x IN ('FOO','bar')"); + assertTrue(resultSet.next()); + + conn.close(); + } + + private void 
testConstantTypeConversionToColumnType() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations;IGNORECASE=TRUE"); + Statement stat = conn.createStatement(); + + stat.executeUpdate("CREATE TABLE test (x int)"); + ResultSet resultSet; + resultSet = stat.executeQuery( + "EXPLAIN SELECT x FROM test WHERE x = '5'"); + + assertTrue(resultSet.next()); + // String constant '5' has been converted to int constant 5 on + // optimization + assertTrue(resultSet.getString(1).endsWith("X = 5")); + + stat.execute("drop table test"); + + conn.close(); + } + + private void testNestedInSelect() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + ResultSet rs; + + stat.execute("create table test(id int primary key, name varchar) " + + "as select 1, 'Hello'"); + stat.execute("select * from (select * from test) " + + "where id=1 and name in('Hello', 'World')"); + + stat.execute("drop table test"); + + stat.execute("create table test(id int, name varchar) as select 1, 'Hello'"); + stat.execute("create index idx2 on test(id, name)"); + rs = stat.executeQuery("select count(*) from test " + + "where id=1 and name in('Hello', 'x')"); + rs.next(); + assertEquals(1, rs.getInt(1)); + + conn.close(); + } + + private void testNestedInSelectAndLike() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test values(2)"); + ResultSet rs = stat.executeQuery("select * from test where id in(1, 2)"); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + stat.execute("create table test2(id int primary key hash)"); + stat.execute("insert into test2 values(2)"); + rs = stat.executeQuery("select * from test where id in(1, 2)"); + assertTrue(rs.next()); + 
assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + PreparedStatement prep; + prep = conn.prepareStatement("SELECT * FROM DUAL A " + + "WHERE A.X IN (SELECT B.X FROM DUAL B WHERE B.X LIKE ?)"); + prep.setString(1, "1"); + prep.execute(); + prep = conn.prepareStatement("SELECT * FROM DUAL A " + + "WHERE A.X IN (SELECT B.X FROM DUAL B WHERE B.X IN (?, ?))"); + prep.setInt(1, 1); + prep.setInt(2, 1); + prep.executeQuery(); + conn.close(); + } + + private void testInSelectJoin() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(a int, b int, c int, d int) " + + "as select 1, 1, 1, 1 from dual;"); + ResultSet rs; + PreparedStatement prep; + prep = conn.prepareStatement("SELECT 2 FROM TEST A " + + "INNER JOIN (SELECT DISTINCT B.C AS X FROM TEST B " + + "WHERE B.D = ?2) V ON 1=1 WHERE (A = ?1) AND (B = V.X)"); + prep.setInt(1, 1); + prep.setInt(2, 1); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertFalse(rs.next()); + + prep = conn.prepareStatement( + "select 2 from test a where a=? 
and b in(" + + "select b.c from test b where b.d=?)"); + prep.setInt(1, 1); + prep.setInt(2, 1); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertFalse(rs.next()); + conn.close(); + } + + + private void testOptimizeInJoinSelect() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table item(id int primary key)"); + stat.execute("insert into item values(1)"); + stat.execute("create alias opt for \"" + + getClass().getName() + + ".optimizeInJoinSelect\""); + PreparedStatement prep = conn.prepareStatement( + "select * from item where id in (select x from opt())"); + ResultSet rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + conn.close(); + } + + /** + * This method is called via reflection from the database. + * + * @return a result set + */ + public static ResultSet optimizeInJoinSelect() { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("X", Types.INTEGER, 0, 0); + rs.addRow(1); + return rs; + } + + private void testOptimizeInJoin() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test select x from system_range(1, 1000)"); + ResultSet rs = stat.executeQuery("explain select * " + + "from test where id in (400, 300)"); + rs.next(); + String plan = rs.getString(1); + if (plan.indexOf("/* PUBLIC.PRIMARY_KEY_") < 0) { + fail("Expected using the primary key, got: " + plan); + } + conn.close(); + } + + private void testMinMaxNullOptimization() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + Random random = new Random(1); + int len = getSize(50, 500); + for (int i = 0; i < len; i++) { + 
stat.execute("drop table if exists test"); + stat.execute("create table test(x int)"); + if (random.nextBoolean()) { + int count = random.nextBoolean() ? 1 : 1 + random.nextInt(len); + if (count > 0) { + stat.execute("insert into test select null " + + "from system_range(1, " + count + ")"); + } + } + int maxExpected = -1; + int minExpected = -1; + if (random.nextInt(10) != 1) { + minExpected = 1; + maxExpected = 1 + random.nextInt(len); + stat.execute("insert into test select x " + + "from system_range(1, " + maxExpected + ")"); + } + String sql = "create index idx on test(x"; + if (random.nextBoolean()) { + sql += " desc"; + } + if (random.nextBoolean()) { + if (random.nextBoolean()) { + sql += " nulls first"; + } else { + sql += " nulls last"; + } + } + sql += ")"; + stat.execute(sql); + ResultSet rs = stat.executeQuery( + "explain select min(x), max(x) from test"); + rs.next(); + if (!config.mvcc) { + String plan = rs.getString(1); + assertContains(plan, "direct"); + } + rs = stat.executeQuery("select min(x), max(x) from test"); + rs.next(); + int min = rs.getInt(1); + if (rs.wasNull()) { + min = -1; + } + int max = rs.getInt(2); + if (rs.wasNull()) { + max = -1; + } + assertEquals(minExpected, min); + assertEquals(maxExpected, max); + } + conn.close(); + } + + private void testMultiColumnRangeQuery() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE Logs(id INT PRIMARY KEY, type INT)"); + stat.execute("CREATE unique INDEX type_index ON Logs(type, id)"); + stat.execute("INSERT INTO Logs SELECT X, MOD(X, 3) " + + "FROM SYSTEM_RANGE(1, 1000)"); + stat.execute("ANALYZE SAMPLE_SIZE 0"); + ResultSet rs; + rs = stat.executeQuery("EXPLAIN SELECT id FROM Logs " + + "WHERE id < 100 and type=2 AND id<100"); + rs.next(); + String plan = rs.getString(1); + assertContains(plan, "TYPE_INDEX"); + conn.close(); + } + + private void 
testUseIndexWhenAllColumnsNotInOrderBy() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, account int, tx int)"); + stat.execute("insert into test select x, x*100, x from system_range(1, 10000)"); + stat.execute("analyze sample_size 5"); + stat.execute("create unique index idx_test_account_tx on test(account, tx desc)"); + ResultSet rs; + rs = stat.executeQuery("explain analyze " + + "select tx from test " + + "where account=22 and tx<9999999 " + + "order by tx desc limit 25"); + rs.next(); + String plan = rs.getString(1); + assertContains(plan, "index sorted"); + conn.close(); + } + + private void testDistinctOptimization() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "NAME VARCHAR, TYPE INT)"); + stat.execute("CREATE INDEX IDX_TEST_TYPE ON TEST(TYPE)"); + Random random = new Random(1); + int len = getSize(10000, 100000); + int[] groupCount = new int[10]; + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?, ?)"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.setString(2, "Hello World"); + int type = random.nextInt(10); + groupCount[type]++; + prep.setInt(3, type); + prep.execute(); + } + ResultSet rs; + rs = stat.executeQuery("SELECT TYPE, COUNT(*) FROM TEST " + + "GROUP BY TYPE ORDER BY TYPE"); + for (int i = 0; rs.next(); i++) { + assertEquals(i, rs.getInt(1)); + assertEquals(groupCount[i], rs.getInt(2)); + } + assertFalse(rs.next()); + rs = stat.executeQuery("SELECT DISTINCT TYPE FROM TEST " + + "ORDER BY TYPE"); + for (int i = 0; rs.next(); i++) { + assertEquals(i, rs.getInt(1)); + } + assertFalse(rs.next()); + stat.execute("ANALYZE"); + rs = stat.executeQuery("SELECT DISTINCT TYPE FROM TEST " + + "ORDER BY 
TYPE"); + for (int i = 0; i < 10; i++) { + assertTrue(rs.next()); + assertEquals(i, rs.getInt(1)); + } + assertFalse(rs.next()); + rs = stat.executeQuery("SELECT DISTINCT TYPE FROM TEST " + + "ORDER BY TYPE LIMIT 5 OFFSET 2"); + for (int i = 2; i < 7; i++) { + assertTrue(rs.next()); + assertEquals(i, rs.getInt(1)); + } + assertFalse(rs.next()); + rs = stat.executeQuery("SELECT DISTINCT TYPE FROM TEST " + + "ORDER BY TYPE LIMIT -1 OFFSET 0 SAMPLE_SIZE 3"); + // must have at least one row + assertTrue(rs.next()); + for (int i = 0; i < 3; i++) { + rs.getInt(1); + if (i > 0 && !rs.next()) { + break; + } + } + assertFalse(rs.next()); + conn.close(); + } + + private void testQueryCacheTimestamp() throws Exception { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + PreparedStatement prep = conn.prepareStatement( + "SELECT CURRENT_TIMESTAMP()"); + ResultSet rs = prep.executeQuery(); + rs.next(); + String a = rs.getString(1); + Thread.sleep(50); + rs = prep.executeQuery(); + rs.next(); + String b = rs.getString(1); + assertFalse(a.equals(b)); + conn.close(); + } + + private void testQueryCacheSpeed() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + testQuerySpeed(stat, + "select sum(a.n), sum(b.x) from system_range(1, 100) b, " + + "(select sum(x) n from system_range(1, 4000)) a"); + conn.close(); + } + + private void testQuerySpeed(Statement stat, String sql) throws SQLException { + long totalTime = 0; + long totalTimeOptimized = 0; + for (int i = 0; i < 3; i++) { + totalTime += measureQuerySpeed(stat, sql, false); + totalTimeOptimized += measureQuerySpeed(stat, sql, true); + } + // System.out.println( + // TimeUnit.NANOSECONDS.toMillis(totalTime) + " " + + // TimeUnit.NANOSECONDS.toMillis(totalTimeOptimized)); + if (totalTimeOptimized > totalTime) { + fail("not optimized: " + TimeUnit.NANOSECONDS.toMillis(totalTime) + + " optimized: " + 
TimeUnit.NANOSECONDS.toMillis(totalTimeOptimized) + + " sql:" + sql); + } + } + + private static long measureQuerySpeed(Statement stat, String sql, boolean optimized) throws SQLException { + stat.execute("set OPTIMIZE_REUSE_RESULTS " + (optimized ? "1" : "0")); + stat.execute(sql); + long time = System.nanoTime(); + stat.execute(sql); + return System.nanoTime() - time; + } + + private void testQueryCache(boolean optimize) throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + if (optimize) { + stat.execute("set OPTIMIZE_REUSE_RESULTS 1"); + } else { + stat.execute("set OPTIMIZE_REUSE_RESULTS 0"); + } + stat.execute("create table test(id int)"); + stat.execute("create table test2(id int)"); + stat.execute("insert into test values(1), (1), (2)"); + stat.execute("insert into test2 values(1)"); + PreparedStatement prep = conn.prepareStatement( + "select * from test where id = (select id from test2)"); + ResultSet rs1 = prep.executeQuery(); + rs1.next(); + assertEquals(1, rs1.getInt(1)); + rs1.next(); + assertEquals(1, rs1.getInt(1)); + assertFalse(rs1.next()); + + stat.execute("update test2 set id = 2"); + ResultSet rs2 = prep.executeQuery(); + rs2.next(); + assertEquals(2, rs2.getInt(1)); + + conn.close(); + } + + private void testMinMaxCountOptimization(boolean memory) + throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("create " + (memory ? 
"memory" : "") + + " table test(id int primary key, value int)"); + stat.execute("create index idx_value_id on test(value, id);"); + int len = getSize(1000, 10000); + HashMap<Integer, Integer> map = new HashMap<>(); + TreeSet<Integer> set = new TreeSet<>(); + Random random = new Random(1); + for (int i = 0; i < len; i++) { + if (i == len / 2) { + if (!config.memory) { + conn.close(); + conn = getConnection("optimizations"); + stat = conn.createStatement(); + } + } + switch (random.nextInt(10)) { + case 0: + case 1: + case 2: + case 3: + case 4: + case 5: + if (random.nextInt(1000) == 1) { + stat.execute("insert into test values(" + i + ", null)"); + map.put(i, null); + } else { + int value = random.nextInt(); + stat.execute("insert into test values(" + i + ", " + value + ")"); + map.put(i, value); + set.add(value); + } + break; + case 6: + case 7: + case 8: { + if (map.size() > 0) { + for (int j = random.nextInt(i), k = 0; k < 10; k++, j++) { + if (map.containsKey(j)) { + Integer x = map.remove(j); + if (x != null) { + set.remove(x); + } + stat.execute("delete from test where id=" + j); + } + } + } + break; + } + case 9: { + ArrayList<Integer> list = new ArrayList<>(map.values()); + int count = list.size(); + Integer min = null, max = null; + if (count > 0) { + min = set.first(); + max = set.last(); + } + ResultSet rs = stat.executeQuery( + "select min(value), max(value), count(*) from test"); + rs.next(); + Integer minDb = (Integer) rs.getObject(1); + Integer maxDb = (Integer) rs.getObject(2); + int countDb = rs.getInt(3); + assertEquals(minDb, min); + assertEquals(maxDb, max); + assertEquals(countDb, count); + break; + } + default: + } + } + conn.close(); + } + + private void testIn() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + PreparedStatement prep; + ResultSet rs; + + assertFalse(stat.executeQuery("select * from dual " + + "where x in()").next()); + assertFalse(stat.executeQuery("select * from 
dual " + + "where null in(1)").next()); + assertFalse(stat.executeQuery("select * from dual " + + "where null in(null)").next()); + assertFalse(stat.executeQuery("select * from dual " + + "where null in(null, 1)").next()); + + assertFalse(stat.executeQuery("select * from dual " + + "where 1+x in(3, 4)").next()); + assertFalse(stat.executeQuery("select * from dual d1, dual d2 " + + "where d1.x in(3, 4)").next()); + + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("insert into test values(2, 'World')"); + + prep = conn.prepareStatement("select * from test t1 where t1.id in(?)"); + prep.setInt(1, 1); + rs = prep.executeQuery(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("select * from test t1 " + + "where t1.id in(?, ?) order by id"); + prep.setInt(1, 1); + prep.setInt(2, 2); + rs = prep.executeQuery(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("select * from test t1 where t1.id " + + "in(select t2.id from test t2 where t2.id=?)"); + prep.setInt(1, 2); + rs = prep.executeQuery(); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("select * from test t1 where t1.id " + + "in(select t2.id from test t2 where t2.id=? 
and t1.id<>t2.id)"); + prep.setInt(1, 2); + rs = prep.executeQuery(); + assertFalse(rs.next()); + + prep = conn.prepareStatement("select * from test t1 where t1.id " + + "in(select t2.id from test t2 where t2.id in(cast(?+10 as varchar)))"); + prep.setInt(1, 2); + rs = prep.executeQuery(); + assertFalse(rs.next()); + + conn.close(); + } + + /** + * Where there are multiple indices, and we have an ORDER BY, select the + * index that already has the required ordering. + */ + private void testOrderedIndexes() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE my_table(K1 INT, K2 INT, " + + "VAL VARCHAR, PRIMARY KEY(K1, K2))"); + stat.execute("CREATE INDEX my_index ON my_table(K1, VAL)"); + ResultSet rs = stat.executeQuery( + "EXPLAIN PLAN FOR SELECT * FROM my_table WHERE K1=7 " + + "ORDER BY K1, VAL"); + rs.next(); + assertContains(rs.getString(1), "/* PUBLIC.MY_INDEX: K1 = 7 */"); + + stat.execute("DROP TABLE my_table"); + + // where we have two covering indexes, make sure + // we choose the one that covers more + stat.execute("CREATE TABLE my_table(K1 INT, K2 INT, VAL VARCHAR)"); + stat.execute("CREATE INDEX my_index1 ON my_table(K1, K2)"); + stat.execute("CREATE INDEX my_index2 ON my_table(K1, K2, VAL)"); + rs = stat.executeQuery( + "EXPLAIN PLAN FOR SELECT * FROM my_table WHERE K1=7 " + + "ORDER BY K1, K2, VAL"); + rs.next(); + assertContains(rs.getString(1), "/* PUBLIC.MY_INDEX2: K1 = 7 */"); + + conn.close(); + } + + private void testIndexUseDespiteNullsFirst() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE my_table(K1 INT)"); + stat.execute("CREATE INDEX my_index ON my_table(K1)"); + stat.execute("INSERT INTO my_table VALUES (NULL)"); + stat.execute("INSERT INTO my_table VALUES (1)"); + stat.execute("INSERT 
INTO my_table VALUES (2)"); + + ResultSet rs; + String result; + + + rs = stat.executeQuery( + "EXPLAIN PLAN FOR SELECT * FROM my_table " + + "ORDER BY K1 ASC NULLS FIRST"); + rs.next(); + result = rs.getString(1); + assertContains(result, "/* index sorted */"); + + rs = stat.executeQuery( + "SELECT * FROM my_table " + + "ORDER BY K1 ASC NULLS FIRST"); + rs.next(); + assertNull(rs.getObject(1)); + rs.next(); + assertEquals(1, rs.getInt(1)); + rs.next(); + assertEquals(2, rs.getInt(1)); + + // === + rs = stat.executeQuery( + "EXPLAIN PLAN FOR SELECT * FROM my_table " + + "ORDER BY K1 DESC NULLS FIRST"); + rs.next(); + result = rs.getString(1); + if (result.contains("/* index sorted */")) { + fail(result + " does not contain: /* index sorted */"); + } + + rs = stat.executeQuery( + "SELECT * FROM my_table " + + "ORDER BY K1 DESC NULLS FIRST"); + rs.next(); + assertNull(rs.getObject(1)); + rs.next(); + assertEquals(2, rs.getInt(1)); + rs.next(); + assertEquals(1, rs.getInt(1)); + + // === + rs = stat.executeQuery( + "EXPLAIN PLAN FOR SELECT * FROM my_table " + + "ORDER BY K1 ASC NULLS LAST"); + rs.next(); + result = rs.getString(1); + if (result.contains("/* index sorted */")) { + fail(result + " does not contain: /* index sorted */"); + } + + rs = stat.executeQuery( + "SELECT * FROM my_table " + + "ORDER BY K1 ASC NULLS LAST"); + rs.next(); + assertEquals(1, rs.getInt(1)); + rs.next(); + assertEquals(2, rs.getInt(1)); + rs.next(); + assertNull(rs.getObject(1)); + + // TODO: Test "EXPLAIN PLAN FOR SELECT * FROM my_table ORDER BY K1 DESC NULLS FIRST" + // Currently fails, as using the index when sorting DESC is currently not supported. 
+ + stat.execute("DROP TABLE my_table"); + conn.close(); + } + + private void testConvertOrToIn() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + + stat.execute("create table test(id int primary key, name varchar(255))"); + stat.execute("insert into test values" + + "(1, '1'), (2, '2'), (3, '3'), (4, '4'), (5, '5')"); + + ResultSet rs = stat.executeQuery("EXPLAIN PLAN FOR SELECT * " + + "FROM test WHERE ID=1 OR ID=2 OR ID=3 OR ID=4 OR ID=5"); + rs.next(); + assertContains(rs.getString(1), "ID IN(1, 2, 3, 4, 5)"); + + rs = stat.executeQuery("SELECT COUNT(*) FROM test " + + "WHERE ID=1 OR ID=2 OR ID=3 OR ID=4 OR ID=5"); + rs.next(); + assertEquals(5, rs.getInt(1)); + + conn.close(); + } + + private void testUseCoveringIndex() throws SQLException { + deleteDb("optimizations"); + Connection conn = getConnection("optimizations"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TABLE_A(id IDENTITY PRIMARY KEY NOT NULL, " + + "name VARCHAR NOT NULL, active BOOLEAN DEFAULT TRUE, " + + "UNIQUE KEY TABLE_A_UK (name) )"); + stat.execute("CREATE TABLE TABLE_B(id IDENTITY PRIMARY KEY NOT NULL, " + + "TABLE_a_id BIGINT NOT NULL, createDate TIMESTAMP DEFAULT NOW(), " + + "UNIQUE KEY TABLE_B_UK (table_a_id, createDate), " + + "FOREIGN KEY (table_a_id) REFERENCES TABLE_A(id) )"); + stat.execute("INSERT INTO TABLE_A (name) SELECT 'package_' || CAST(X as VARCHAR) " + + "FROM SYSTEM_RANGE(1, 100) WHERE X <= 100"); + stat.execute("INSERT INTO TABLE_B (table_a_id, createDate) SELECT " + + "CASE WHEN table_a_id = 0 THEN 1 ELSE table_a_id END, createDate " + + "FROM ( SELECT ROUND((RAND() * 100)) AS table_a_id, " + + "DATEADD('SECOND', X, NOW()) as createDate FROM SYSTEM_RANGE(1, 50000) " + + "WHERE X < 50000 )"); + stat.execute("CREATE INDEX table_b_idx ON table_b(table_a_id, id)"); + stat.execute("ANALYZE"); + + ResultSet rs = stat.executeQuery("EXPLAIN ANALYZE 
SELECT MAX(b.id) as id " + + "FROM table_b b JOIN table_a a ON b.table_a_id = a.id GROUP BY b.table_a_id " + + "HAVING A.ACTIVE = TRUE"); + rs.next(); + assertContains(rs.getString(1), "/* PUBLIC.TABLE_B_IDX: TABLE_A_ID = A.ID */"); + + rs = stat.executeQuery("EXPLAIN ANALYZE SELECT MAX(id) FROM table_b GROUP BY table_a_id"); + rs.next(); + assertContains(rs.getString(1), "/* PUBLIC.TABLE_B_IDX"); + conn.close(); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestOptimizerHints.java b/modules/h2/src/test/java/org/h2/test/db/TestOptimizerHints.java new file mode 100644 index 0000000000000..41b1883263b5b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestOptimizerHints.java @@ -0,0 +1,172 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Arrays; +import org.h2.test.TestBase; +import org.h2.util.StatementBuilder; + +/** + * Test for optimizer hint SET FORCE_JOIN_ORDER. + * + * @author Sergi Vladykin + */ +public class TestOptimizerHints extends TestBase { + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String[] a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("testOptimizerHints"); + Connection conn = getConnection("testOptimizerHints;FORCE_JOIN_ORDER=1"); + Statement s = conn.createStatement(); + + s.execute("create table t1(id int unique)"); + s.execute("create table t2(id int unique, t1_id int)"); + s.execute("create table t3(id int unique)"); + s.execute("create table t4(id int unique, t2_id int, t3_id int)"); + + String plan; + + plan = plan(s, "select * from t1, t2 where t1.id = t2.t1_id"); + assertContains(plan, "INNER JOIN PUBLIC.T2"); + + plan = plan(s, "select * from t2, t1 where t1.id = t2.t1_id"); + assertContains(plan, "INNER JOIN PUBLIC.T1"); + + plan = plan(s, "select * from t2, t1 where t1.id = 1"); + assertContains(plan, "INNER JOIN PUBLIC.T1"); + + plan = plan(s, "select * from t2, t1 where t1.id = t2.t1_id and t2.id = 1"); + assertContains(plan, "INNER JOIN PUBLIC.T1"); + + plan = plan(s, "select * from t1, t2 where t1.id = t2.t1_id and t2.id = 1"); + assertContains(plan, "INNER JOIN PUBLIC.T2"); + + checkPlanComma(s, "t1", "t2", "t3", "t4"); + checkPlanComma(s, "t4", "t2", "t3", "t1"); + checkPlanComma(s, "t2", "t1", "t3", "t4"); + checkPlanComma(s, "t1", "t4", "t3", "t2"); + checkPlanComma(s, "t2", "t1", "t4", "t3"); + checkPlanComma(s, "t4", "t3", "t2", "t1"); + + boolean on = false; + boolean left = false; + + checkPlanJoin(s, on, left, "t1", "t2", "t3", "t4"); + checkPlanJoin(s, on, left, "t4", "t2", "t3", "t1"); + checkPlanJoin(s, on, left, "t2", "t1", "t3", "t4"); + checkPlanJoin(s, on, left, "t1", "t4", "t3", "t2"); + checkPlanJoin(s, on, left, "t2", "t1", "t4", "t3"); + checkPlanJoin(s, on, left, "t4", "t3", "t2", "t1"); + + on = false; + left = true; + + checkPlanJoin(s, on, left, "t1", "t2", "t3", "t4"); + checkPlanJoin(s, on, left, "t4", "t2", "t3", "t1"); + checkPlanJoin(s, on, left, "t2", "t1", 
"t3", "t4"); + checkPlanJoin(s, on, left, "t1", "t4", "t3", "t2"); + checkPlanJoin(s, on, left, "t2", "t1", "t4", "t3"); + checkPlanJoin(s, on, left, "t4", "t3", "t2", "t1"); + + on = true; + left = false; + + checkPlanJoin(s, on, left, "t1", "t2", "t3", "t4"); + checkPlanJoin(s, on, left, "t4", "t2", "t3", "t1"); + checkPlanJoin(s, on, left, "t2", "t1", "t3", "t4"); + checkPlanJoin(s, on, left, "t1", "t4", "t3", "t2"); + checkPlanJoin(s, on, left, "t2", "t1", "t4", "t3"); + checkPlanJoin(s, on, left, "t4", "t3", "t2", "t1"); + + on = true; + left = true; + + checkPlanJoin(s, on, left, "t1", "t2", "t3", "t4"); + checkPlanJoin(s, on, left, "t4", "t2", "t3", "t1"); + checkPlanJoin(s, on, left, "t2", "t1", "t3", "t4"); + checkPlanJoin(s, on, left, "t1", "t4", "t3", "t2"); + checkPlanJoin(s, on, left, "t2", "t1", "t4", "t3"); + checkPlanJoin(s, on, left, "t4", "t3", "t2", "t1"); + + s.close(); + conn.close(); + deleteDb("testOptimizerHints"); + } + + private void checkPlanComma(Statement s, String ... t) throws SQLException { + StatementBuilder from = new StatementBuilder(); + for (String table : t) { + from.appendExceptFirst(", "); + from.append(table); + } + String plan = plan(s, "select 1 from " + from.toString() + " where t1.id = t2.t1_id " + + "and t2.id = t4.t2_id and t3.id = t4.t3_id"); + int prev = plan.indexOf("FROM PUBLIC." + t[0].toUpperCase()); + for (int i = 1; i < t.length; i++) { + int next = plan.indexOf("INNER JOIN PUBLIC." + t[i].toUpperCase()); + assertTrue("Wrong plan for : " + Arrays.toString(t) + "\n" + plan, next > prev); + prev = next; + } + } + + private void checkPlanJoin(Statement s, boolean on, boolean left, + String... 
t) throws SQLException { + StatementBuilder from = new StatementBuilder(); + for (int i = 0; i < t.length; i++) { + if (i != 0) { + if (left) { + from.append(" left join "); + } else { + from.append(" inner join "); + } + } + from.append(t[i]); + if (on && i != 0) { + from.append(" on 1=1 "); + } + } + String plan = plan(s, "select 1 from " + from.toString() + " where t1.id = t2.t1_id " + + "and t2.id = t4.t2_id and t3.id = t4.t3_id"); + int prev = plan.indexOf("FROM PUBLIC." + t[0].toUpperCase()); + for (int i = 1; i < t.length; i++) { + int next = plan.indexOf( + (!left ? "INNER JOIN PUBLIC." : on ? "LEFT OUTER JOIN PUBLIC." : "PUBLIC.") + + t[i].toUpperCase()); + if (prev > next) { + System.err.println(plan); + fail("Wrong plan for : " + Arrays.toString(t) + "\n" + plan); + } + prev = next; + } + } + + /** + * @param s Statement. + * @param query Query. + * @return Plan. + * @throws SQLException If failed. + */ + private String plan(Statement s, String query) throws SQLException { + ResultSet rs = s.executeQuery("explain " + query); + assertTrue(rs.next()); + String plan = rs.getString(1); + rs.close(); + return plan; + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestOutOfMemory.java b/modules/h2/src/test/java/org/h2/test/db/TestOutOfMemory.java new file mode 100644 index 0000000000000..9a776d58aafeb --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestOutOfMemory.java @@ -0,0 +1,247 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Map; +import java.util.Random; +import java.util.concurrent.atomic.AtomicReference; +import org.h2.api.ErrorCode; +import org.h2.mvstore.MVStore; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathMem; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests out of memory situations. The database must not get corrupted, and + * transactions must stay atomic. + */ +public class TestOutOfMemory extends TestBase { + + private static final String DB_NAME = "outOfMemory"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.vmlens) { + // running out of memory will cause the vmlens agent to stop working + return; + } + try { + if (!config.travis) { + System.gc(); + testMVStoreUsingInMemoryFileSystem(); + System.gc(); + testDatabaseUsingInMemoryFileSystem(); + } + System.gc(); + testUpdateWhenNearlyOutOfMemory(); + } finally { + System.gc(); + } + } + + private void testMVStoreUsingInMemoryFileSystem() { + FilePath.register(new FilePathMem()); + String fileName = "memFS:" + getTestName(); + final AtomicReference<Throwable> exRef = new AtomicReference<>(); + MVStore store = new MVStore.Builder() + .fileName(fileName) + .backgroundExceptionHandler(new Thread.UncaughtExceptionHandler() { + @Override + public void uncaughtException(Thread t, Throwable e) { + exRef.compareAndSet(null, e); + } + }) + .open(); + try { + Map<Integer, byte[]> map = store.openMap("test"); + Random r = new Random(1); + try { + for (int i = 0; i < 100; i++) { + byte[] data = new byte[10 * 1024 * 1024]; + r.nextBytes(data); + map.put(i, data); + } + 
Throwable throwable = exRef.get(); + if(throwable instanceof OutOfMemoryError) throw (OutOfMemoryError)throwable; + if(throwable instanceof IllegalStateException) throw (IllegalStateException)throwable; + fail(); + } catch (OutOfMemoryError | IllegalStateException e) { + // expected + } + try { + store.close(); + } catch (IllegalStateException e) { + // expected + } + store.closeImmediately(); + store = MVStore.open(fileName); + store.openMap("test"); + store.close(); + } finally { + // just in case, otherwise if this test suffers a spurious failure, + // succeeding tests will too, because they will OOM + store.closeImmediately(); + FileUtils.delete(fileName); + } + } + + private void testDatabaseUsingInMemoryFileSystem() throws SQLException, InterruptedException { + String filename = "memFS:" + getTestName(); + String url = "jdbc:h2:" + filename + "/test"; + try { + Connection conn = DriverManager.getConnection(url); + Statement stat = conn.createStatement(); + try { + stat.execute("create table test(id int, name varchar) as " + + "select x, space(10000000+x) from system_range(1, 1000)"); + fail(); + } catch (SQLException e) { + assertTrue("Unexpected error code: " + e.getErrorCode(), + ErrorCode.OUT_OF_MEMORY == e.getErrorCode() || + ErrorCode.FILE_CORRUPTED_1 == e.getErrorCode() || + ErrorCode.DATABASE_IS_CLOSED == e.getErrorCode() || + ErrorCode.GENERAL_ERROR_1 == e.getErrorCode()); + } + recoverAfterOOM(); + try { + conn.close(); + fail(); + } catch (SQLException e) { + assertTrue("Unexpected error code: " + e.getErrorCode(), + ErrorCode.OUT_OF_MEMORY == e.getErrorCode() || + ErrorCode.FILE_CORRUPTED_1 == e.getErrorCode() || + ErrorCode.DATABASE_IS_CLOSED == e.getErrorCode() || + ErrorCode.GENERAL_ERROR_1 == e.getErrorCode()); + } + recoverAfterOOM(); + conn = DriverManager.getConnection(url); + stat = conn.createStatement(); + stat.execute("SELECT 1"); + conn.close(); + } finally { + // release the static data this test generates + 
FileUtils.deleteRecursive(filename, true); + } + } + + private static void recoverAfterOOM() throws InterruptedException { + for (int i = 0; i < 5; i++) { + System.gc(); + Thread.sleep(20); + } + } + + private void testUpdateWhenNearlyOutOfMemory() throws Exception { + if (config.memory) { + return; + } + deleteDb(DB_NAME); + + ProcessBuilder processBuilder = buildChild( + DB_NAME + ";MAX_OPERATION_MEMORY=1000000", + MyChild.class, + "-XX:+UseParallelGC", +// "-XX:+UseG1GC", + "-Xmx128m"); +//* + processBuilder.start().waitFor(); +/*/ + List<String> args = processBuilder.command(); + for (Iterator<String> iter = args.iterator(); iter.hasNext(); ) { + String arg = iter.next(); + if(arg.equals(MyChild.class.getName())) { + iter.remove(); + break; + } + iter.remove(); + } + MyChild.main(args.toArray(new String[0])); +//*/ + try (Connection conn = getConnection(DB_NAME)) { + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT count(*) FROM stuff"); + assertTrue(rs.next()); + assertEquals(3000, rs.getInt(1)); + + rs = stat.executeQuery("SELECT * FROM stuff WHERE id = 3000"); + assertTrue(rs.next()); + String text = rs.getString(2); + assertFalse(rs.wasNull()); + assertEquals(1004, text.length()); + + // TODO: there are intermittent failures here + // where number is about 1000 short of expected value. + // This indicates a real problem - durability failure + // and need to be looked at. + rs = stat.executeQuery("SELECT sum(length(text)) FROM stuff"); + assertTrue(rs.next()); + int totalSize = rs.getInt(1); + if (3010893 > totalSize) { + TestBase.logErrorMessage("Durability failure - expected: 3010893, actual: " + totalSize); + } + } finally { + deleteDb(DB_NAME); + } + } + + public static final class MyChild extends TestBase.Child + { + + /** + * Run just this test. + * + * @param args the arguments + */ + public static void main(String... args) throws Exception { + new MyChild(args).init().test(); + } + + private MyChild(String... 
args) { + super(args); + } + + @Override + public void test() { + try (Connection conn = getConnection()) { + Statement stat = conn.createStatement(); + stat.execute("DROP ALL OBJECTS"); + stat.execute("CREATE TABLE stuff (id INT, text VARCHAR)"); + stat.execute("INSERT INTO stuff(id) SELECT x FROM system_range(1, 3000)"); + PreparedStatement prep = conn.prepareStatement( + "UPDATE stuff SET text = IFNULL(text,'') || space(1000) || id"); + prep.execute(); + stat.execute("CHECKPOINT"); + + ResultSet rs = stat.executeQuery("SELECT sum(length(text)) FROM stuff"); + assertTrue(rs.next()); + assertEquals(3010893, rs.getInt(1)); + + eatMemory(80); + prep.execute(); + fail(); + } catch (SQLException ignore) { + } finally { + freeMemory(); + } + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestPersistentCommonTableExpressions.java b/modules/h2/src/test/java/org/h2/test/db/TestPersistentCommonTableExpressions.java new file mode 100644 index 0000000000000..33cdffd28eb65 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestPersistentCommonTableExpressions.java @@ -0,0 +1,276 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import org.h2.engine.SysProperties; +import org.h2.test.TestBase; + +/** + * Test persistent common table expressions queries using WITH. + */ +public class TestPersistentCommonTableExpressions extends AbstractBaseForCommonTableExpressions { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + // persistent cte tests - also tests reconnects and database reloading... 
+ testRecursiveTable(); + testPersistentNonRecursiveTableInCreateView(); + testPersistentRecursiveTableInCreateView(); + testPersistentNonRecursiveTableInCreateViewDropAllObjects(); + testPersistentRecursiveTableInCreateViewDropAllObjects(); + } + + private void testRecursiveTable() throws Exception { + String numericName; + if (SysProperties.BIG_DECIMAL_IS_DECIMAL) { + numericName = "DECIMAL"; + } else { + numericName = "NUMERIC"; + } + String[] expectedRowData = new String[]{"|meat|null", "|fruit|3", "|veg|2"}; + String[] expectedColumnTypes = new String[]{"VARCHAR", numericName}; + String[] expectedColumnNames = new String[]{"VAL", + "SUM(SELECT\n" + + " X\n" + + "FROM PUBLIC.\"\" BB\n" + + " /* SELECT\n" + + " SUM(1) AS X,\n" + + " A\n" + + " FROM PUBLIC.B\n" + + " /++ PUBLIC.B.tableScan ++/\n" + + " /++ WHERE A IS ?1\n" + + " ++/\n" + + " /++ scanCount: 4 ++/\n" + + " INNER JOIN PUBLIC.C\n" + + " /++ PUBLIC.C.tableScan ++/\n" + + " ON 1=1\n" + + " WHERE (A IS ?1)\n" + + " AND (B.VAL = C.B)\n" + + " GROUP BY A: A IS A.VAL\n" + + " */\n" + + " /* scanCount: 1 */\n" + + "WHERE BB.A IS A.VAL)"}; + + String setupSQL = + "DROP TABLE IF EXISTS A; " + +"DROP TABLE IF EXISTS B; " + +"DROP TABLE IF EXISTS C; " + +"CREATE TABLE A(VAL VARCHAR(255)); " + +"CREATE TABLE B(A VARCHAR(255), VAL VARCHAR(255)); " + +"CREATE TABLE C(B VARCHAR(255), VAL VARCHAR(255)); " + +" " + +"INSERT INTO A VALUES('fruit'); " + +"INSERT INTO B VALUES('fruit','apple'); " + +"INSERT INTO B VALUES('fruit','banana'); " + +"INSERT INTO C VALUES('apple', 'golden delicious');" + +"INSERT INTO C VALUES('apple', 'granny smith'); " + +"INSERT INTO C VALUES('apple', 'pippin'); " + +"INSERT INTO A VALUES('veg'); " + +"INSERT INTO B VALUES('veg', 'carrot'); " + +"INSERT INTO C VALUES('carrot', 'nantes'); " + +"INSERT INTO C VALUES('carrot', 'imperator'); " + +"INSERT INTO C VALUES(null, 'banapple'); " + +"INSERT INTO A VALUES('meat'); "; + + String withQuery = "WITH BB as (SELECT \n" + + "sum(1) as X, \n" 
+ + "a \n" + + "FROM B \n" + + "JOIN C ON B.val=C.b \n" + + "GROUP BY a) \n" + + "SELECT \n" + + "A.val, \n" + + "sum(SELECT X FROM BB WHERE BB.a IS A.val)\n" + + "FROM A \n" + "GROUP BY A.val"; + int maxRetries = 3; + int expectedNumberOfRows = expectedRowData.length; + + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, expectedColumnTypes); + + } + + private void testPersistentRecursiveTableInCreateView() throws Exception { + String setupSQL = "--SET TRACE_LEVEL_SYSTEM_OUT 3;\n" + +"DROP TABLE IF EXISTS my_tree; \n" + +"DROP VIEW IF EXISTS v_my_tree; \n" + +"CREATE TABLE my_tree ( \n" + +" id INTEGER, \n" + +" parent_fk INTEGER \n" + +"); \n" + +" \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 1, NULL ); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 11, 1 ); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 111, 11 ); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 12, 1 ); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 121, 12 ); \n" + +" \n" + +"CREATE OR REPLACE VIEW v_my_tree AS \n" + +"WITH RECURSIVE tree_cte (sub_tree_root_id, tree_level, parent_fk, child_fk) AS ( \n" + +" SELECT mt.ID AS sub_tree_root_id, CAST(0 AS INT) AS tree_level, mt.parent_fk, mt.id \n" + +" FROM my_tree mt \n" + +" UNION ALL \n" + +" SELECT sub_tree_root_id, mtc.tree_level + 1 AS tree_level, mtc.parent_fk, mt.id \n" + +" FROM my_tree mt \n" + +"INNER JOIN tree_cte mtc ON mtc.child_fk = mt.parent_fk \n" + +"), \n" + +"unused_cte AS ( SELECT 1 AS unUsedColumn ) \n" + +"SELECT sub_tree_root_id, tree_level, parent_fk, child_fk FROM tree_cte; \n"; + + String withQuery = "SELECT * FROM v_my_tree"; + int maxRetries = 4; + String[] expectedRowData = new String[]{"|1|0|null|1", + "|11|0|1|11", + "|111|0|11|111", + "|12|0|1|12", + "|121|0|12|121", + "|1|1|null|11", + "|11|1|1|111", + "|1|1|null|12", + "|12|1|1|121", + "|1|2|null|111", + "|1|2|null|121" + }; + String[] 
expectedColumnNames = new String[]{"SUB_TREE_ROOT_ID", "TREE_LEVEL", "PARENT_FK", "CHILD_FK"}; + String[] expectedColumnTypes = new String[]{"INTEGER", "INTEGER", "INTEGER", "INTEGER"}; + int expectedNumberOfRows = 11; + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, expectedColumnTypes); + } + + private void testPersistentNonRecursiveTableInCreateView() throws Exception { + String setupSQL = "" + +"DROP VIEW IF EXISTS v_my_nr_tree; \n" + +"DROP TABLE IF EXISTS my_table; \n" + +"CREATE TABLE my_table ( \n" + +" id INTEGER, \n" + +" parent_fk INTEGER \n" + +"); \n" + +" \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 1, NULL ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 11, 1 ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 111, 11 ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 12, 1 ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 121, 12 ); \n" + +" \n" + +"CREATE OR REPLACE VIEW v_my_nr_tree AS \n" + +"WITH tree_cte_nr (sub_tree_root_id, tree_level, parent_fk, child_fk) AS ( \n" + +" SELECT mt.ID AS sub_tree_root_id, CAST(0 AS INT) AS tree_level, mt.parent_fk, mt.id \n" + +" FROM my_table mt \n" + +"), \n" + +"unused_cte AS ( SELECT 1 AS unUsedColumn ) \n" + +"SELECT sub_tree_root_id, tree_level, parent_fk, child_fk FROM tree_cte_nr; \n"; + + String withQuery = "SELECT * FROM v_my_nr_tree"; + int maxRetries = 6; + String[] expectedRowData = new String[]{ + "|1|0|null|1", + "|11|0|1|11", + "|111|0|11|111", + "|12|0|1|12", + "|121|0|12|121", + }; + String[] expectedColumnNames = new String[]{"SUB_TREE_ROOT_ID", "TREE_LEVEL", "PARENT_FK", "CHILD_FK"}; + String[] expectedColumnTypes = new String[]{"INTEGER", "INTEGER", "INTEGER", "INTEGER"}; + int expectedNumberOfRows = 5; + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, 
expectedColumnTypes); + } + + private void testPersistentNonRecursiveTableInCreateViewDropAllObjects() throws Exception { + String setupSQL = "" + +"DROP ALL OBJECTS; \n" + +"CREATE TABLE my_table ( \n" + +" id INTEGER, \n" + +" parent_fk INTEGER \n" + +"); \n" + +" \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 1, NULL ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 11, 1 ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 111, 11 ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 12, 1 ); \n" + +"INSERT INTO my_table ( id, parent_fk) VALUES ( 121, 12 ); \n" + +" \n" + +"CREATE OR REPLACE VIEW v_my_nr_tree AS \n" + +"WITH tree_cte_nr (sub_tree_root_id, tree_level, parent_fk, child_fk) AS ( \n" + +" SELECT mt.ID AS sub_tree_root_id, CAST(0 AS INT) AS tree_level, mt.parent_fk, mt.id \n" + +" FROM my_table mt \n" + +"), \n" + +"unused_cte AS ( SELECT 1 AS unUsedColumn ) \n" + +"SELECT sub_tree_root_id, tree_level, parent_fk, child_fk FROM tree_cte_nr; \n"; + + String withQuery = "SELECT * FROM v_my_nr_tree"; + int maxRetries = 6; + String[] expectedRowData = new String[]{ + "|1|0|null|1", + "|11|0|1|11", + "|111|0|11|111", + "|12|0|1|12", + "|121|0|12|121", + }; + String[] expectedColumnNames = new String[]{"SUB_TREE_ROOT_ID", "TREE_LEVEL", "PARENT_FK", "CHILD_FK"}; + String[] expectedColumnTypes = new String[]{"INTEGER", "INTEGER", "INTEGER", "INTEGER"}; + int expectedNumberOfRows = 5; + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, expectedColumnTypes); + } + + private void testPersistentRecursiveTableInCreateViewDropAllObjects() throws Exception { + String setupSQL = "--SET TRACE_LEVEL_SYSTEM_OUT 3;\n" + +"DROP ALL OBJECTS; \n" + +"CREATE TABLE my_tree ( \n" + +" id INTEGER, \n" + +" parent_fk INTEGER \n" + +"); \n" + +" \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 1, NULL ); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 11, 1 
); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 111, 11 ); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 12, 1 ); \n" + +"INSERT INTO my_tree ( id, parent_fk) VALUES ( 121, 12 ); \n" + +" \n" + +"CREATE OR REPLACE VIEW v_my_tree AS \n" + +"WITH RECURSIVE tree_cte (sub_tree_root_id, tree_level, parent_fk, child_fk) AS ( \n" + +" SELECT mt.ID AS sub_tree_root_id, CAST(0 AS INT) AS tree_level, mt.parent_fk, mt.id \n" + +" FROM my_tree mt \n" + +" UNION ALL \n" + +" SELECT sub_tree_root_id, mtc.tree_level + 1 AS tree_level, mtc.parent_fk, mt.id \n" + +" FROM my_tree mt \n" + +"INNER JOIN tree_cte mtc ON mtc.child_fk = mt.parent_fk \n" + +"), \n" + +"unused_cte AS ( SELECT 1 AS unUsedColumn ) \n" + +"SELECT sub_tree_root_id, tree_level, parent_fk, child_fk FROM tree_cte; \n"; + + String withQuery = "SELECT * FROM v_my_tree"; + int maxRetries = 4; + String[] expectedRowData = new String[]{"|1|0|null|1", + "|11|0|1|11", + "|111|0|11|111", + "|12|0|1|12", + "|121|0|12|121", + "|1|1|null|11", + "|11|1|1|111", + "|1|1|null|12", + "|12|1|1|121", + "|1|2|null|111", + "|1|2|null|121" + }; + String[] expectedColumnNames = new String[]{"SUB_TREE_ROOT_ID", "TREE_LEVEL", "PARENT_FK", "CHILD_FK"}; + String[] expectedColumnTypes = new String[]{"INTEGER", "INTEGER", "INTEGER", "INTEGER"}; + int expectedNumberOfRows = 11; + testRepeatedQueryWithSetup(maxRetries, expectedRowData, expectedColumnNames, expectedNumberOfRows, setupSQL, + withQuery, maxRetries - 1, expectedColumnTypes); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestPowerOff.java b/modules/h2/src/test/java/org/h2/test/db/TestPowerOff.java new file mode 100644 index 0000000000000..0d941a5badd7d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestPowerOff.java @@ -0,0 +1,352 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.api.ErrorCode; +import org.h2.engine.Database; +import org.h2.jdbc.JdbcConnection; +import org.h2.test.TestBase; +import org.h2.util.JdbcUtils; + +/** + * Tests simulated power off conditions. + */ +public class TestPowerOff extends TestBase { + + private static final String DB_NAME = "powerOff"; + private String dir, url; + + private int maxPowerOffCount; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.memory) { + return; + } + if (config.big || config.googleAppEngine) { + dir = getBaseDir(); + url = DB_NAME; + } else { + dir = "memFS:"; + url = "memFS:/" + DB_NAME; + } + url += ";FILE_LOCK=NO;TRACE_LEVEL_FILE=0"; + testLobCrash(); + testSummaryCrash(); + testCrash(); + testShutdown(); + testMemoryTables(); + testPersistentTables(); + deleteDb(dir, DB_NAME); + } + + private void testLobCrash() throws SQLException { + if (config.networked) { + return; + } + deleteDb(dir, DB_NAME); + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity, data clob)"); + conn.close(); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("set write_delay 0"); + ((JdbcConnection) conn).setPowerOffCount(Integer.MAX_VALUE); + stat.execute("insert into test values(null, space(11000))"); + int max = Integer.MAX_VALUE - ((JdbcConnection) conn).getPowerOffCount(); + for (int i = 0; i < max + 10; i++) { + conn.close(); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("insert into test values(null, space(11000))"); + stat.execute("set 
write_delay 0"); + ((JdbcConnection) conn).setPowerOffCount(i); + try { + stat.execute("insert into test values(null, space(11000))"); + } catch (SQLException e) { + // ignore + } + JdbcUtils.closeSilently(conn); + } + } + + private void testSummaryCrash() throws SQLException { + if (config.networked) { + return; + } + deleteDb(dir, DB_NAME); + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + for (int i = 0; i < 10; i++) { + stat.execute("CREATE TABLE TEST" + i + + "(ID INT PRIMARY KEY, NAME VARCHAR)"); + for (int j = 0; j < 10; j++) { + stat.execute("INSERT INTO TEST" + i + + " VALUES(" + j + ", 'Hello')"); + } + } + for (int i = 0; i < 10; i += 2) { + stat.execute("DROP TABLE TEST" + i); + } + stat.execute("SET WRITE_DELAY 0"); + stat.execute("CHECKPOINT"); + for (int j = 0; j < 10; j++) { + stat.execute("INSERT INTO TEST1 VALUES(" + (10 + j) + ", 'World')"); + } + stat.execute("SHUTDOWN IMMEDIATELY"); + JdbcUtils.closeSilently(conn); + conn = getConnection(url); + stat = conn.createStatement(); + for (int i = 1; i < 10; i += 2) { + ResultSet rs = stat.executeQuery( + "SELECT * FROM TEST" + i + " ORDER BY ID"); + for (int j = 0; j < 10; j++) { + rs.next(); + assertEquals(j, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + } + if (i == 1) { + for (int j = 0; j < 10; j++) { + rs.next(); + assertEquals(j + 10, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + } + } + assertFalse(rs.next()); + } + conn.close(); + } + + private void testCrash() throws SQLException { + if (config.networked) { + return; + } + deleteDb(dir, DB_NAME); + Random random = new Random(1); + int repeat = getSize(1, 20); + for (int i = 0; i < repeat; i++) { + Connection conn = getConnection(url); + conn.close(); + conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("SET WRITE_DELAY 0"); + ((JdbcConnection) conn).setPowerOffCount(random.nextInt(100)); + try { + stat.execute("DROP TABLE IF EXISTS TEST"); + 
stat.execute("CREATE TABLE TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + conn.setAutoCommit(false); + int len = getSize(3, 100); + for (int j = 0; j < len; j++) { + stat.execute("INSERT INTO TEST VALUES(" + j + ", 'Hello')"); + if (random.nextInt(5) == 0) { + conn.commit(); + } + if (random.nextInt(10) == 0) { + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + } + } + stat.execute("DROP TABLE IF EXISTS TEST"); + conn.close(); + } catch (SQLException e) { + if (!e.getSQLState().equals("90098")) { + TestBase.logError("power", e); + } + } + } + } + + private void testShutdown() throws SQLException { + deleteDb(dir, DB_NAME); + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + stat.execute("SHUTDOWN"); + conn.close(); + + conn = getConnection(url); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + assertTrue(rs.next()); + assertFalse(rs.next()); + conn.close(); + } + + private void testMemoryTables() throws SQLException { + if (config.networked) { + return; + } + deleteDb(dir, DB_NAME); + + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("CREATE MEMORY TABLE TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + stat.execute("CHECKPOINT"); + ((JdbcConnection) conn).setPowerOffCount(1); + try { + stat.execute("INSERT INTO TEST VALUES(2, 'Hello')"); + stat.execute("INSERT INTO TEST VALUES(3, 'Hello')"); + stat.execute("CHECKPOINT"); + fail(); + } catch (SQLException e) { + assertKnownException(e); + } + + ((JdbcConnection) conn).setPowerOffCount(0); + try { + conn.close(); + } catch (SQLException e) { + // ignore + } + conn = getConnection(url); + stat = 
conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + assertEquals(1, rs.getInt(1)); + conn.close(); + } + + private void testPersistentTables() throws SQLException { + if (config.networked) { + return; + } + if (config.cipher != null) { + // this would take too long (setLength uses + // individual writes, many thousand operations) + return; + } + deleteDb(dir, DB_NAME); + + // ((JdbcConnection)conn).setPowerOffCount(Integer.MAX_VALUE); + testRun(true); + int max = maxPowerOffCount; + trace("max=" + max); + runTest(0, max, true); + recoverAndCheckConsistency(); + runTest(0, max, false); + recoverAndCheckConsistency(); + } + + private void runTest(int min, int max, boolean withConsistencyCheck) + throws SQLException { + for (int i = min; i < max; i++) { + deleteDb(dir, DB_NAME); + Database.setInitialPowerOffCount(i); + int expect = testRun(false); + if (withConsistencyCheck) { + int got = recoverAndCheckConsistency(); + trace("test " + i + " of " + max + " expect=" + expect + " got=" + got); + } else { + trace("test " + i + " of " + max + " expect=" + expect); + } + } + Database.setInitialPowerOffCount(0); + } + + private int testRun(boolean init) throws SQLException { + if (init) { + Database.setInitialPowerOffCount(Integer.MAX_VALUE); + } + int state = 0; + Connection conn = null; + try { + conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("SET WRITE_DELAY 0"); + stat.execute("CREATE TABLE IF NOT EXISTS TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + state = 1; + conn.setAutoCommit(false); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + stat.execute("INSERT INTO TEST VALUES(2, 'World')"); + conn.commit(); + state = 2; + stat.execute("UPDATE TEST SET NAME='Hallo' WHERE ID=1"); + stat.execute("UPDATE TEST SET NAME='Welt' WHERE ID=2"); + conn.commit(); + state = 3; + stat.execute("DELETE FROM TEST WHERE ID=1"); + stat.execute("DELETE FROM TEST WHERE ID=2"); + 
conn.commit(); + state = 1; + stat.execute("DROP TABLE TEST"); + state = 0; + if (init) { + maxPowerOffCount = Integer.MAX_VALUE - + ((JdbcConnection) conn).getPowerOffCount(); + } + conn.close(); + } catch (SQLException e) { + if (e.getSQLState().equals("" + ErrorCode.DATABASE_IS_CLOSED)) { + // this is ok + } else { + throw e; + } + } + JdbcUtils.closeSilently(conn); + return state; + } + + private int recoverAndCheckConsistency() throws SQLException { + int state; + Database.setInitialPowerOffCount(0); + Connection conn = getConnection(url); + assertEquals(0, ((JdbcConnection) conn).getPowerOffCount()); + Statement stat = conn.createStatement(); + DatabaseMetaData meta = conn.getMetaData(); + ResultSet rs = meta.getTables(null, null, "TEST", null); + if (!rs.next()) { + state = 0; + } else { + // table exists; check its contents + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + if (!rs.next()) { + state = 1; + } else { + assertEquals(1, rs.getInt(1)); + String name1 = rs.getString(2); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + String name2 = rs.getString(2); + assertFalse(rs.next()); + if ("Hello".equals(name1)) { + assertEquals("World", name2); + state = 2; + } else { + assertEquals("Hallo", name1); + assertEquals("Welt", name2); + state = 3; + } + } + } + conn.close(); + return state; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestQueryCache.java b/modules/h2/src/test/java/org/h2/test/db/TestQueryCache.java new file mode 100644 index 0000000000000..d611ace1b94d6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestQueryCache.java @@ -0,0 +1,111 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests the query cache. + */ +public class TestQueryCache extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("queryCache"); + test1(); + testClearingCacheWithTableStructureChanges(); + deleteDb("queryCache"); + } + + private void test1() throws Exception { + try (Connection conn = getConnection("queryCache;QUERY_CACHE_SIZE=10")) { + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, name varchar)"); + PreparedStatement prep; + // query execution may be fast here but the parsing must be slow + StringBuilder queryBuilder = new StringBuilder("select count(*) from test t1 where \n"); + for (int i = 0; i < 1000; i++) { + if (i != 0) { + queryBuilder.append(" and "); + } + queryBuilder.append(" TIMESTAMP '2005-12-31 23:59:59' = TIMESTAMP '2005-12-31 23:59:59' "); + } + String query = queryBuilder.toString(); + conn.prepareStatement(query); + int firstGreater = 0; + int firstSmaller = 0; + long time; + ResultSet rs; + long first = 0; + // 1000 iterations to warm up and avoid JIT effects + for (int i = 0; i < 1005; i++) { + // this should both ensure results are not re-used + // stat.execute("set mode regular"); + // stat.execute("create table x()"); + // stat.execute("drop table x"); + time = System.nanoTime(); + prep = conn.prepareStatement(query); + execute(prep); + prep.close(); + rs = stat.executeQuery(query); + rs.next(); + int c = rs.getInt(1); + rs.close(); + assertEquals(0, c); + time = System.nanoTime() - time; + if (i == 1000) { + // take from cache and do not close, + // so that next 
iteration will have a cache miss + prep = conn.prepareStatement(query); + } else if (i == 1001) { + first = time; + // try to avoid pauses in subsequent iterations + System.gc(); + } else if (i > 1001) { + if (first > time) { + firstGreater++; + } else { + firstSmaller++; + } + } + } + // first prepare time must be always greater because of query cache, + // but JVM is too unpredictable to assert that, so just check that + // usually this is true + assertSmaller(firstSmaller, firstGreater); + stat.execute("drop table test"); + } + } + + private void testClearingCacheWithTableStructureChanges() throws Exception { + try (Connection conn = getConnection("queryCache;QUERY_CACHE_SIZE=10")) { + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, conn). + prepareStatement("SELECT * FROM TEST"); + Statement stat = conn.createStatement(); + stat.executeUpdate("CREATE TABLE TEST(col1 bigint, col2 varchar(255))"); + PreparedStatement prep = conn.prepareStatement("SELECT * FROM TEST"); + prep.close(); + stat.executeUpdate("DROP TABLE TEST"); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, conn). + prepareStatement("SELECT * FROM TEST"); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestReadOnly.java b/modules/h2/src/test/java/org/h2/test/db/TestReadOnly.java new file mode 100644 index 0000000000000..6cc193284db99 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestReadOnly.java @@ -0,0 +1,201 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.File; +import java.io.RandomAccessFile; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; + +import org.h2.api.ErrorCode; +import org.h2.dev.fs.FilePathZip2; +import org.h2.store.FileLister; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Backup; +import org.h2.tools.Server; + +/** + * Test for the read-only database feature. + */ +public class TestReadOnly extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.memory) { + return; + } + testReadOnlyInZip(); + testReadOnlyTempTableResult(); + testReadOnlyConnect(); + testReadOnlyDbCreate(); + if (!config.googleAppEngine) { + testReadOnlyFiles(true); + } + testReadOnlyFiles(false); + } + + private void testReadOnlyInZip() throws SQLException { + if (config.cipher != null) { + return; + } + deleteDb("readonlyInZip"); + String dir = getBaseDir(); + Connection conn = getConnection("readonlyInZip"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT) AS " + + "SELECT X FROM SYSTEM_RANGE(1, 20)"); + conn.close(); + Backup.execute(dir + "/readonly.zip", dir, "readonlyInZip", true); + conn = getConnection( + "jdbc:h2:zip:"+dir+"/readonly.zip!/readonlyInZip", getUser(), getPassword()); + conn.createStatement().execute("select * from test where id=1"); + conn.close(); + Server server = Server.createTcpServer("-baseDir", dir); + server.start(); + int port = server.getPort(); + try { + conn = getConnection( + "jdbc:h2:tcp://localhost:" + port + "/zip:readonly.zip!/readonlyInZip", + getUser(), getPassword()); + conn.createStatement().execute("select * from test where id=1"); + conn.close(); + 
FilePathZip2.register(); + conn = getConnection( + "jdbc:h2:tcp://localhost:" + port + "/zip2:readonly.zip!/readonlyInZip", + getUser(), getPassword()); + conn.createStatement().execute("select * from test where id=1"); + conn.close(); + } finally { + server.stop(); + } + deleteDb("readonlyInZip"); + } + + private void testReadOnlyTempTableResult() throws SQLException { + deleteDb("readonlyTemp"); + Connection conn = getConnection("readonlyTemp"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT) AS " + + "SELECT X FROM SYSTEM_RANGE(1, 20)"); + conn.close(); + conn = getConnection( + "readonlyTemp;ACCESS_MODE_DATA=r;" + + "MAX_MEMORY_ROWS=10"); + stat = conn.createStatement(); + stat.execute("SELECT DISTINCT ID FROM TEST"); + conn.close(); + deleteDb("readonlyTemp"); + } + + private void testReadOnlyDbCreate() throws SQLException { + deleteDb("readonlyDbCreate"); + Connection conn = getConnection("readonlyDbCreate"); + Statement stat = conn.createStatement(); + stat.execute("create table a(id int)"); + stat.execute("create index ai on a(id)"); + conn.close(); + conn = getConnection("readonlyDbCreate;ACCESS_MODE_DATA=r"); + stat = conn.createStatement(); + stat.execute("create table if not exists a(id int)"); + stat.execute("create index if not exists ai on a(id)"); + assertThrows(ErrorCode.DATABASE_IS_READ_ONLY, stat). + execute("CREATE TABLE TEST(ID INT)"); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat). 
+ execute("SELECT * FROM TEST"); + stat.execute("create local temporary linked table test(" + + "null, 'jdbc:h2:mem:test3', 'sa', 'sa', 'INFORMATION_SCHEMA.TABLES')"); + ResultSet rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + conn.close(); + } + + private void testReadOnlyFiles(boolean setReadOnly) throws Exception { + new File(System.getProperty("java.io.tmpdir")).mkdirs(); + File f = File.createTempFile("test", "temp"); + assertTrue(f.canWrite()); + f.setReadOnly(); + assertTrue(!f.canWrite()); + f.delete(); + + f = File.createTempFile("test", "temp"); + RandomAccessFile r = new RandomAccessFile(f, "rw"); + r.write(1); + f.setReadOnly(); + r.close(); + assertTrue(!f.canWrite()); + f.delete(); + + deleteDb("readonlyFiles"); + Connection conn = getConnection("readonlyFiles"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + stat.execute("INSERT INTO TEST VALUES(2, 'World')"); + assertTrue(!conn.isReadOnly()); + conn.close(); + + if (setReadOnly) { + setReadOnly(); + conn = getConnection("readonlyFiles"); + } else { + conn = getConnection("readonlyFiles;ACCESS_MODE_DATA=r"); + } + assertTrue(conn.isReadOnly()); + stat = conn.createStatement(); + stat.execute("SELECT * FROM TEST"); + assertThrows(ErrorCode.DATABASE_IS_READ_ONLY, stat). + execute("DELETE FROM TEST"); + conn.close(); + + if (setReadOnly) { + conn = getConnection( + "readonlyFiles;DB_CLOSE_DELAY=1"); + } else { + conn = getConnection( + "readonlyFiles;DB_CLOSE_DELAY=1;ACCESS_MODE_DATA=r"); + } + stat = conn.createStatement(); + stat.execute("SELECT * FROM TEST"); + assertThrows(ErrorCode.DATABASE_IS_READ_ONLY, stat). 
+ execute("DELETE FROM TEST"); + stat.execute("SET DB_CLOSE_DELAY=0"); + conn.close(); + } + + private void setReadOnly() { + ArrayList list = FileLister.getDatabaseFiles( + getBaseDir(), "readonlyFiles", true); + for (String fileName : list) { + FileUtils.setReadOnly(fileName); + } + } + + private void testReadOnlyConnect() throws SQLException { + deleteDb("readonlyConnect"); + Connection conn = getConnection("readonlyConnect;OPEN_NEW=TRUE"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity)"); + stat.execute("insert into test select x from system_range(1, 11)"); + assertThrows(ErrorCode.DATABASE_ALREADY_OPEN_1, this). + getConnection("readonlyConnect;ACCESS_MODE_DATA=r;OPEN_NEW=TRUE"); + conn.close(); + deleteDb("readonlyConnect"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestRecursiveQueries.java b/modules/h2/src/test/java/org/h2/test/db/TestRecursiveQueries.java new file mode 100644 index 0000000000000..147dafebe1676 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestRecursiveQueries.java @@ -0,0 +1,182 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import java.sql.Types; +import org.h2.test.TestBase; + +/** + * Test recursive queries using WITH. + */ +public class TestRecursiveQueries extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testWrongLinkLargeResult(); + testSimpleUnionAll(); + testSimpleUnion(); + } + + private void testWrongLinkLargeResult() throws Exception { + deleteDb("recursiveQueries"); + Connection conn = getConnection("recursiveQueries"); + Statement stat; + stat = conn.createStatement(); + stat.execute("create table test(parent varchar(255), child varchar(255))"); + stat.execute("insert into test values('/', 'a'), ('a', 'b1'), " + + "('a', 'b2'), ('a', 'c'), ('c', 'd1'), ('c', 'd2')"); + + ResultSet rs = stat.executeQuery( + "with recursive rec_test(depth, parent, child) as (" + + "select 0, parent, child from test where parent = '/' " + + "union all " + + "select depth+1, r.parent, r.child from test i join rec_test r " + + "on (i.parent = r.child) where depth<9 " + + ") select count(*) from rec_test"); + rs.next(); + assertEquals(29524, rs.getInt(1)); + stat.execute("with recursive rec_test(depth, parent, child) as ( "+ + "select 0, parent, child from test where parent = '/' "+ + "union all "+ + "select depth+1, i.parent, i.child from test i join rec_test r "+ + "on (r.child = i.parent) where depth<10 "+ + ") select * from rec_test"); + conn.close(); + deleteDb("recursiveQueries"); + } + + private void testSimpleUnionAll() throws Exception { + deleteDb("recursiveQueries"); + Connection conn = getConnection("recursiveQueries"); + Statement stat; + PreparedStatement prep, prep2; + ResultSet rs; + + stat = conn.createStatement(); + rs = stat.executeQuery("with recursive t(n) as " + + "(select 1 union all select n+1 from t where n<3) " + + "select * from t"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("with recursive t(n) as " + + "(select 1 union all select n+1 from t 
where n<3) " + + "select * from t where n>?"); + prep.setInt(1, 2); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + prep.setInt(1, 1); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("with recursive t(n) as " + + "(select @start union all select n+@inc from t where n<@end) " + + "select * from t"); + prep2 = conn.prepareStatement("select @start:=?, @inc:=?, @end:=?"); + prep2.setInt(1, 10); + prep2.setInt(2, 2); + prep2.setInt(3, 14); + assertTrue(prep2.executeQuery().next()); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(10, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(12, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(14, rs.getInt(1)); + assertFalse(rs.next()); + + prep2.setInt(1, 100); + prep2.setInt(2, 3); + prep2.setInt(3, 103); + assertTrue(prep2.executeQuery().next()); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(100, rs.getInt(1)); + assertTrue(rs.next()); + assertEquals(103, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("with recursive t(n) as " + + "(select ? union all select n+? from t where n= 1000); + conn.close(); + deleteDb("rowFactory"); + } + + /** + * Test row factory. + */ + public static class MyTestRowFactory extends RowFactory { + + /** + * A simple counter. 
+ */ + static final AtomicInteger COUNTER = new AtomicInteger(); + + @Override + public Row createRow(Value[] data, int memory) { + COUNTER.incrementAndGet(); + return new RowImpl(data, memory); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestRunscript.java b/modules/h2/src/test/java/org/h2/test/db/TestRunscript.java new file mode 100644 index 0000000000000..d28f62d171229 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestRunscript.java @@ -0,0 +1,565 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.ChangeFileEncryption; +import org.h2.tools.Recover; +import org.h2.util.Task; + +/** + * Tests the RUNSCRIPT SQL statement. + */ +public class TestRunscript extends TestBase implements Trigger { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + test(false); + test(true); + testDropReferencedUserDefinedFunction(); + testDropCascade(); + testScriptExcludeSchema(); + testScriptExcludeTable(); + testScriptExcludeFunctionAlias(); + testScriptExcludeConstant(); + testScriptExcludeSequence(); + testScriptExcludeConstraint(); + testScriptExcludeTrigger(); + testScriptExcludeRight(); + testRunscriptFromClasspath(); + testCancelScript(); + testEncoding(); + testClobPrimaryKey(); + deleteDb("runscript"); + } + + private void testDropReferencedUserDefinedFunction() throws Exception { + deleteDb("runscript"); + Connection conn; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create alias int_decode for \"java.lang.Integer.decode\""); + stat.execute("create table test(x varchar, y int as int_decode(x))"); + stat.execute("script simple drop to '" + + getBaseDir() + "/backup.sql'"); + stat.execute("runscript from '" + + getBaseDir() + "/backup.sql'"); + conn.close(); + } + + private void testDropCascade() throws Exception { + deleteDb("runscript"); + Connection conn; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create table b(x int)"); + stat.execute("create view a as select * from b"); + stat.execute("script simple drop to '" + + getBaseDir() + "/backup.sql'"); + stat.execute("runscript from '" + + getBaseDir() + "/backup.sql'"); + conn.close(); + } + + private void testScriptExcludeSchema() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create schema include_schema1"); + stat.execute("create schema exclude_schema1"); + stat.execute("script schema include_schema1"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The schema 'exclude_schema1' should not be 
present in the script", + rs.getString(1).contains("exclude_schema1".toUpperCase())); + } + rs.close(); + stat.execute("create schema include_schema2"); + stat.execute("script nosettings schema include_schema1, include_schema2"); + rs = stat.getResultSet(); + // user and one row per schema = 3 + assertResultRowCount(3, rs); + rs.close(); + conn.close(); + } + + private void testScriptExcludeTable() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create schema a"); + stat.execute("create schema b"); + stat.execute("create schema c"); + stat.execute("create table a.test1(x varchar, y int)"); + stat.execute("create table a.test2(x varchar, y int)"); + stat.execute("create table b.test1(x varchar, y int)"); + stat.execute("create table b.test2(x varchar, y int)"); + stat.execute("script table a.test1"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The table 'a.test2' should not be present in the script", + rs.getString(1).contains("a.test2".toUpperCase())); + assertFalse("The table 'b.test1' should not be present in the script", + rs.getString(1).contains("b.test1".toUpperCase())); + assertFalse("The table 'b.test2' should not be present in the script", + rs.getString(1).contains("b.test2".toUpperCase())); + } + rs.close(); + stat.execute("set schema b"); + stat.execute("script table test1"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The table 'a.test1' should not be present in the script", + rs.getString(1).contains("a.test1".toUpperCase())); + assertFalse("The table 'a.test2' should not be present in the script", + rs.getString(1).contains("a.test2".toUpperCase())); + assertFalse("The table 'b.test2' should not be present in the script", + rs.getString(1).contains("b.test2".toUpperCase())); + } + stat.execute("script nosettings table a.test1, test2"); + rs = stat.getResultSet(); + // user, schemas 
'a' & 'b' and 2 rows per table = 7 + assertResultRowCount(7, rs); + rs.close(); + conn.close(); + } + + private void testScriptExcludeFunctionAlias() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create schema a"); + stat.execute("create schema b"); + stat.execute("create schema c"); + stat.execute("create alias a.int_decode for \"java.lang.Integer.decode\""); + stat.execute("create table a.test(x varchar, y int as a.int_decode(x))"); + stat.execute("script schema b"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The function alias 'int_decode' " + + "should not be present in the script", + rs.getString(1).contains("int_decode".toUpperCase())); + } + rs.close(); + conn.close(); + } + + private void testScriptExcludeConstant() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create schema a"); + stat.execute("create schema b"); + stat.execute("create schema c"); + stat.execute("create constant a.default_email value 'no@thanks.org'"); + stat.execute("create table a.test1(x varchar, " + + "email varchar default a.default_email)"); + stat.execute("script schema b"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The constant 'default_email' " + + "should not be present in the script", + rs.getString(1).contains("default_email".toUpperCase())); + } + rs.close(); + conn.close(); + } + + private void testScriptExcludeSequence() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create schema a"); + stat.execute("create schema b"); + stat.execute("create schema c"); + stat.execute("create sequence a.seq_id"); + stat.execute("script schema b"); + rs = 
stat.getResultSet(); + while (rs.next()) { + assertFalse("The sequence 'seq_id' should not be present in the script", + rs.getString(1).contains("seq_id".toUpperCase())); + } + rs.close(); + conn.close(); + } + + private void testScriptExcludeConstraint() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create schema a"); + stat.execute("create schema b"); + stat.execute("create schema c"); + stat.execute("create table a.test1(x varchar, y int)"); + stat.execute("alter table a.test1 add constraint " + + "unique_constraint unique (x, y) "); + stat.execute("script schema b"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The constraint 'unique_constraint' " + + "should not be present in the script", + rs.getString(1).contains("unique_constraint".toUpperCase())); + } + rs.close(); + stat.execute("create table a.test2(x varchar, y int)"); + stat.execute("script table a.test2"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The constraint 'unique_constraint' " + + "should not be present in the script", + rs.getString(1).contains("unique_constraint".toUpperCase())); + } + rs.close(); + conn.close(); + } + + private void testScriptExcludeTrigger() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create schema a"); + stat.execute("create schema b"); + stat.execute("create schema c"); + stat.execute("create table a.test1(x varchar, y int)"); + stat.execute("create trigger trigger_insert before insert on a.test1 " + + "for each row call \"org.h2.test.db.TestRunscript\""); + stat.execute("script schema b"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The trigger 'trigger_insert' should not be present in the script", + 
rs.getString(1).contains("trigger_insert".toUpperCase())); + } + rs.close(); + stat.execute("create table a.test2(x varchar, y int)"); + stat.execute("script table a.test2"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The trigger 'trigger_insert' should not be present in the script", + rs.getString(1).contains("trigger_insert".toUpperCase())); + } + rs.close(); + conn.close(); + } + + private void testScriptExcludeRight() throws Exception { + deleteDb("runscript"); + Connection conn; + ResultSet rs; + conn = getConnection("runscript"); + Statement stat = conn.createStatement(); + stat.execute("create user USER_A1 password 'test'"); + stat.execute("create user USER_B1 password 'test'"); + stat.execute("create schema a"); + stat.execute("create schema b"); + stat.execute("create schema c"); + stat.execute("create table a.test1(x varchar, y int)"); + stat.execute("create table b.test1(x varchar, y int)"); + stat.execute("grant select on a.test1 to USER_A1"); + stat.execute("grant select on b.test1 to USER_B1"); + stat.execute("script schema b"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The grant to 'USER_A1' should not be present in the script", + rs.getString(1).contains("to USER_A1".toUpperCase())); + } + rs.close(); + stat.execute("create user USER_A2 password 'test'"); + stat.execute("create table a.test2(x varchar, y int)"); + stat.execute("grant select on a.test2 to USER_A2"); + stat.execute("script table a.test2"); + rs = stat.getResultSet(); + while (rs.next()) { + assertFalse("The grant to 'USER_A1' should not be present in the script", + rs.getString(1).contains("to USER_A1".toUpperCase())); + assertFalse("The grant to 'USER_B1' should not be present in the script", + rs.getString(1).contains("to USER_B1".toUpperCase())); + } + rs.close(); + conn.close(); + } + + private void testRunscriptFromClasspath() throws Exception { + deleteDb("runscript"); + Connection conn; + conn = getConnection("runscript"); + 
Statement stat = conn.createStatement(); + stat.execute("runscript from 'classpath:/org/h2/samples/newsfeed.sql'"); + stat.execute("select * from version"); + conn.close(); + } + + private void testCancelScript() throws Exception { + if (config.travis) { + // fails regularly under Travis, not sure why + return; + } + deleteDb("runscript"); + Connection conn; + conn = getConnection("runscript"); + final Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key) as " + + "select x from system_range(1, 20000)"); + stat.execute("script simple drop to '"+ + getBaseDir()+"/backup.sql'"); + stat.execute("set throttle 1000"); + // need to wait a bit (throttle is only used every 50 ms) + Thread.sleep(200); + final String dir = getBaseDir(); + Task task; + task = new Task() { + @Override + public void call() throws SQLException { + stat.execute("script simple drop to '"+dir+"/backup2.sql'"); + } + }; + task.execute(); + Thread.sleep(200); + stat.cancel(); + SQLException e = (SQLException) task.getException(); + assertTrue(e != null); + assertEquals(ErrorCode.STATEMENT_WAS_CANCELED, e.getErrorCode()); + + stat.execute("set throttle 1000"); + // need to wait a bit (throttle is only used every 50 ms) + Thread.sleep(100); + + task = new Task() { + @Override + public void call() throws SQLException { + stat.execute("runscript from '"+dir+"/backup.sql'"); + } + }; + task.execute(); + Thread.sleep(200); + stat.cancel(); + e = (SQLException) task.getException(); + assertTrue(e != null); + assertEquals(ErrorCode.STATEMENT_WAS_CANCELED, e.getErrorCode()); + + conn.close(); + FileUtils.delete(getBaseDir() + "/backup.sql"); + FileUtils.delete(getBaseDir() + "/backup2.sql"); + } + + private void testEncoding() throws SQLException { + deleteDb("runscript"); + Connection conn; + Statement stat; + conn = getConnection("runscript"); + stat = conn.createStatement(); + stat.execute("create table \"t\u00f6\"(id int)"); + stat.execute("script to '"+ + 
getBaseDir()+"/backup.sql'"); + stat.execute("drop all objects"); + stat.execute("runscript from '"+ + getBaseDir()+"/backup.sql'"); + stat.execute("select * from \"t\u00f6\""); + stat.execute("script to '"+ + getBaseDir()+"/backup.sql' charset 'UTF-8'"); + stat.execute("drop all objects"); + stat.execute("runscript from '"+ + getBaseDir()+"/backup.sql' charset 'UTF-8'"); + stat.execute("select * from \"t\u00f6\""); + conn.close(); + FileUtils.delete(getBaseDir() + "/backup.sql"); + } + + /** + * This method is called via reflection from the database. + * + * @param a the value + * @return the absolute value + */ + public static int test(int a) { + return Math.abs(a); + } + + private void testClobPrimaryKey() throws SQLException { + deleteDb("runscript"); + Connection conn; + Statement stat; + conn = getConnection("runscript"); + stat = conn.createStatement(); + stat.execute("create table test(id int not null, data clob) " + + "as select 1, space(4100)"); + // the primary key for SYSTEM_LOB_STREAM used to be named like this + stat.execute("create primary key primary_key_e on test(id)"); + stat.execute("script to '" + getBaseDir() + "/backup.sql'"); + conn.close(); + deleteDb("runscript"); + conn = getConnection("runscript"); + stat = conn.createStatement(); + stat.execute("runscript from '" + getBaseDir() + "/backup.sql'"); + conn.close(); + deleteDb("runscriptRestore"); + FileUtils.delete(getBaseDir() + "/backup.sql"); + } + + private void test(boolean password) throws SQLException { + deleteDb("runscript"); + Connection conn1, conn2; + Statement stat1, stat2; + conn1 = getConnection("runscript"); + stat1 = conn1.createStatement(); + stat1.execute("create table test (id identity, name varchar(12))"); + stat1.execute("insert into test (name) values ('first'), ('second')"); + stat1.execute("create table test2(id int primary key) as " + + "select x from system_range(1, 5000)"); + stat1.execute("create sequence testSeq start with 100 increment by 10"); + 
stat1.execute("create alias myTest for \"" + + getClass().getName() + ".test\""); + stat1.execute("create trigger myTrigger before insert " + + "on test nowait call \"" + getClass().getName() + "\""); + stat1.execute("create view testView as select * " + + "from test where 1=0 union all " + + "select * from test where 0=1"); + stat1.execute("create user testAdmin salt '00' hash '01' admin"); + stat1.execute("create schema testSchema authorization testAdmin"); + stat1.execute("create table testSchema.parent" + + "(id int primary key, name varchar)"); + stat1.execute("create index idxname on testSchema.parent(name)"); + stat1.execute("create table testSchema.child(id int primary key, " + + "parentId int, name varchar, foreign key(parentId) " + + "references parent(id))"); + stat1.execute("create user testUser salt '02' hash '03'"); + stat1.execute("create role testRole"); + stat1.execute("grant all on testSchema.child to testUser"); + stat1.execute("grant select, insert on testSchema.parent to testRole"); + stat1.execute("grant testRole to testUser"); + stat1.execute("create table blob (value blob)"); + PreparedStatement prep = conn1.prepareStatement( + "insert into blob values (?)"); + prep.setBytes(1, new byte[65536]); + prep.execute(); + String sql = "script to ?"; + if (password) { + sql += " CIPHER AES PASSWORD ?"; + } + prep = conn1.prepareStatement(sql); + prep.setString(1, getBaseDir() + "/backup.2.sql"); + if (password) { + prep.setString(2, "t1e2s3t4"); + } + prep.execute(); + + deleteDb("runscriptRestore"); + conn2 = getConnection("runscriptRestore"); + stat2 = conn2.createStatement(); + sql = "runscript from '" + getBaseDir() + "/backup.2.sql'"; + if (password) { + sql += " CIPHER AES PASSWORD 'wrongPassword'"; + } + if (password) { + assertThrows(ErrorCode.FILE_ENCRYPTION_ERROR_1, stat2). 
+ execute(sql); + } + sql = "runscript from '" + getBaseDir() + "/backup.2.sql'"; + if (password) { + sql += " CIPHER AES PASSWORD 't1e2s3t4'"; + } + stat2.execute(sql); + stat2.execute("script to '" + getBaseDir() + "/backup.3.sql'"); + + assertEqualDatabases(stat1, stat2); + + if (!config.memory && !config.reopen) { + conn1.close(); + + if (config.cipher != null) { + ChangeFileEncryption.execute(getBaseDir(), "runscript", + config.cipher, getFilePassword().toCharArray(), null, true); + } + Recover.execute(getBaseDir(), "runscript"); + + deleteDb("runscriptRestoreRecover"); + Connection conn3 = getConnection("runscriptRestoreRecover"); + Statement stat3 = conn3.createStatement(); + stat3.execute("runscript from '" + getBaseDir() + "/runscript.h2.sql'"); + conn3.close(); + conn3 = getConnection("runscriptRestoreRecover"); + stat3 = conn3.createStatement(); + + if (config.cipher != null) { + ChangeFileEncryption.execute(getBaseDir(), + "runscript", config.cipher, null, getFilePassword().toCharArray(), true); + } + + conn1 = getConnection("runscript"); + stat1 = conn1.createStatement(); + + assertEqualDatabases(stat1, stat3); + conn3.close(); + } + + assertEqualDatabases(stat1, stat2); + + conn1.close(); + conn2.close(); + deleteDb("runscriptRestore"); + deleteDb("runscriptRestoreRecover"); + FileUtils.delete(getBaseDir() + "/backup.2.sql"); + FileUtils.delete(getBaseDir() + "/backup.3.sql"); + + } + + @Override + public void init(Connection conn, String schemaName, String triggerName, + String tableName, boolean before, int type) { + if (!before) { + throw new InternalError("before:" + before); + } + if (type != INSERT) { + throw new InternalError("type:" + type); + } + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) { + // nothing to do + } + + @Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSQLInjection.java 
b/modules/h2/src/test/java/org/h2/test/db/TestSQLInjection.java new file mode 100644 index 0000000000000..070955b803ed0 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestSQLInjection.java @@ -0,0 +1,117 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests the ALLOW_LITERALS feature (protection against SQL injection). + */ +public class TestSQLInjection extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.reopen) { + return; + } + deleteDb("sqlInjection"); + reconnect("sqlInjection"); + stat.execute("DROP TABLE IF EXISTS USERS"); + stat.execute("CREATE TABLE USERS(NAME VARCHAR PRIMARY KEY, " + + "PASSWORD VARCHAR, TYPE VARCHAR)"); + stat.execute("CREATE SCHEMA CONST"); + stat.execute("CREATE CONSTANT CONST.ACTIVE VALUE 'Active'"); + stat.execute("INSERT INTO USERS VALUES('James', '123456', CONST.ACTIVE)"); + assertTrue(checkPasswordInsecure("123456")); + assertFalse(checkPasswordInsecure("abcdef")); + assertTrue(checkPasswordInsecure("' OR ''='")); + assertTrue(checkPasswordSecure("123456")); + assertFalse(checkPasswordSecure("abcdef")); + assertFalse(checkPasswordSecure("' OR ''='")); + stat.execute("CALL 123"); + stat.execute("CALL 'Hello'"); + stat.execute("CALL $$Hello World$$"); + stat.execute("SET ALLOW_LITERALS NUMBERS"); + stat.execute("CALL 123"); + assertThrows(ErrorCode.LITERALS_ARE_NOT_ALLOWED, stat). 
+ execute("CALL 'Hello'"); + assertThrows(ErrorCode.LITERALS_ARE_NOT_ALLOWED, stat). + execute("CALL $$Hello World$$"); + stat.execute("SET ALLOW_LITERALS NONE"); + try { + checkPasswordInsecure("123456"); + fail(); + } catch (SQLException e) { + assertKnownException(e); + } + assertTrue(checkPasswordSecure("123456")); + assertFalse(checkPasswordSecure("' OR ''='")); + conn.close(); + + if (config.memory) { + return; + } + + reconnect("sqlInjection"); + + try { + checkPasswordInsecure("123456"); + fail(); + } catch (SQLException e) { + assertKnownException(e); + } + assertTrue(checkPasswordSecure("123456")); + assertFalse(checkPasswordSecure("' OR ''='")); + conn.close(); + deleteDb("sqlInjection"); + } + + private boolean checkPasswordInsecure(String pwd) throws SQLException { + String sql = "SELECT * FROM USERS WHERE PASSWORD='" + pwd + "'"; + ResultSet rs = conn.createStatement().executeQuery(sql); + return rs.next(); + } + + private boolean checkPasswordSecure(String pwd) throws SQLException { + String sql = "SELECT * FROM USERS WHERE PASSWORD=?"; + PreparedStatement prep = conn.prepareStatement(sql); + prep.setString(1, pwd); + ResultSet rs = prep.executeQuery(); + return rs.next(); + } + + private void reconnect(String name) throws SQLException { + if (!config.memory) { + if (conn != null) { + conn.close(); + conn = null; + } + } + if (conn == null) { + conn = getConnection(name); + stat = conn.createStatement(); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSelectCountNonNullColumn.java b/modules/h2/src/test/java/org/h2/test/db/TestSelectCountNonNullColumn.java new file mode 100644 index 0000000000000..2b52ac29e5b86 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestSelectCountNonNullColumn.java @@ -0,0 +1,107 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; + +/** + * Test that count(column) is converted to count(*) if the column is not + * nullable. + */ +public class TestSelectCountNonNullColumn extends TestBase { + + private static final String DBNAME = "selectCountNonNullColumn"; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + + deleteDb(DBNAME); + Connection conn = getConnection(DBNAME); + stat = conn.createStatement(); + + stat.execute("CREATE TABLE SIMPLE(KEY VARCHAR(25) " + + "PRIMARY KEY, NAME VARCHAR(25))"); + stat.execute("INSERT INTO SIMPLE(KEY) VALUES('k1')"); + stat.execute("INSERT INTO SIMPLE(KEY,NAME) VALUES('k2','name2')"); + + checkKeyCount(-1); + checkNameCount(-1); + checkStarCount(-1); + + checkKeyCount(2); + checkNameCount(1); + checkStarCount(2); + + conn.close(); + + } + + private void checkStarCount(long expect) throws SQLException { + String sql = "SELECT COUNT(*) FROM SIMPLE"; + if (expect < 0) { + sql = "EXPLAIN " + sql; + } + ResultSet rs = stat.executeQuery(sql); + rs.next(); + if (expect >= 0) { + assertEquals(expect, rs.getLong(1)); + } else { + // System.out.println(rs.getString(1)); + assertEquals("SELECT\n" + " COUNT(*)\n" + "FROM PUBLIC.SIMPLE\n" + + " /* PUBLIC.SIMPLE.tableScan */\n" + + "/* direct lookup */", rs.getString(1)); + } + } + + private void checkKeyCount(long expect) throws SQLException { + String sql = "SELECT COUNT(KEY) FROM SIMPLE"; + if (expect < 0) { + sql = "EXPLAIN " + sql; + } + ResultSet rs = stat.executeQuery(sql); + rs.next(); + if (expect >= 0) { + assertEquals(expect, rs.getLong(1)); + } else { + assertEquals("SELECT\n" + + " COUNT(KEY)\n" + + "FROM 
PUBLIC.SIMPLE\n" + + " /* PUBLIC.PRIMARY_KEY_9 */\n" + + "/* direct lookup */", rs.getString(1)); + } + } + + private void checkNameCount(long expect) throws SQLException { + String sql = "SELECT COUNT(NAME) FROM SIMPLE"; + if (expect < 0) { + sql = "EXPLAIN " + sql; + } + ResultSet rs = stat.executeQuery(sql); + rs.next(); + if (expect >= 0) { + assertEquals(expect, rs.getLong(1)); + } else { + // System.out.println(rs.getString(1)); + assertEquals("SELECT\n" + " COUNT(NAME)\n" + "FROM PUBLIC.SIMPLE\n" + + " /* PUBLIC.SIMPLE.tableScan */", rs.getString(1)); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSequence.java b/modules/h2/src/test/java/org/h2/test/db/TestSequence.java new file mode 100644 index 0000000000000..2abbc8daf1add --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestSequence.java @@ -0,0 +1,449 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import org.h2.api.Trigger; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Tests the sequence feature of this database. + */ +public class TestSequence extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testConcurrentCreate(); + testSchemaSearchPath(); + testAlterSequenceColumn(); + testAlterSequence(); + testCache(); + testTwo(); + testMetaTable(); + testCreateWithMinValue(); + testCreateWithMaxValue(); + testCreationErrors(); + testCreateSql(); + testDefaultMinMax(); + deleteDb("sequence"); + } + + private void testConcurrentCreate() throws Exception { + deleteDb("sequence"); + final String url = getURL("sequence;MULTI_THREADED=1;LOCK_TIMEOUT=2000", true); + Connection conn = getConnection(url); + Task[] tasks = new Task[2]; + try { + Statement stat = conn.createStatement(); + stat.execute("create table dummy(id bigint primary key)"); + stat.execute("create table test(id bigint primary key)"); + stat.execute("create sequence test_seq cache 2"); + for (int i = 0; i < tasks.length; i++) { + final int x = i; + tasks[i] = new Task() { + @Override + public void call() throws Exception { + try (Connection conn = getConnection(url)) { + PreparedStatement prep = conn.prepareStatement( + "insert into test(id) values(next value for test_seq)"); + PreparedStatement prep2 = conn.prepareStatement( + "delete from test"); + while (!stop) { + prep.execute(); + if (Math.random() < 0.01) { + prep2.execute(); + } + if (Math.random() < 0.01) { + createDropTrigger(conn); + } + } + } + } + + private void createDropTrigger(Connection conn) throws Exception { + String triggerName = "t_" + x; + Statement stat = conn.createStatement(); + stat.execute("create trigger " + triggerName + + " before insert on dummy call \"" + + TriggerTest.class.getName() + "\""); + stat.execute("drop trigger " + triggerName); + } + + }.execute(); + } + Thread.sleep(1000); + for (Task t : tasks) { + t.get(); + } + } finally { + for (Task t : tasks) { + t.join(); + } + conn.close(); + } + } + + private void testSchemaSearchPath() throws SQLException { + deleteDb("sequence"); + Connection 
conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute("CREATE SCHEMA TEST"); + stat.execute("CREATE SEQUENCE TEST.TEST_SEQ"); + stat.execute("SET SCHEMA_SEARCH_PATH PUBLIC, TEST"); + stat.execute("CALL TEST_SEQ.NEXTVAL"); + stat.execute("CALL TEST_SEQ.CURRVAL"); + conn.close(); + } + + private void testAlterSequenceColumn() throws SQLException { + deleteDb("sequence"); + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT , NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + stat.execute("ALTER TABLE TEST ALTER COLUMN ID INT IDENTITY"); + stat.execute("ALTER TABLE test ALTER COLUMN ID RESTART WITH 3"); + stat.execute("INSERT INTO TEST (name) VALUES('Other World')"); + conn.close(); + } + + private void testAlterSequence() throws SQLException { + test("create sequence s; alter sequence s restart with 2", null, 2, 3, 4); + test("create sequence s; alter sequence s restart with 7", null, 7, 8, 9, 10); + test("create sequence s; alter sequence s restart with 11 " + + "minvalue 3 maxvalue 12 cycle", null, 11, 12, 3, 4); + test("create sequence s; alter sequence s restart with 5 cache 2", + null, 5, 6, 7, 8); + test("create sequence s; alter sequence s restart with 9 " + + "maxvalue 12 nocycle nocache", + "Sequence \"S\" has run out of numbers", 9, 10, 11, 12); + } + + private void testCache() throws SQLException { + if (config.memory) { + return; + } + deleteDb("sequence"); + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute("create sequence test_Sequence"); + stat.execute("create sequence test_Sequence3 cache 3"); + conn.close(); + conn = getConnection("sequence"); + stat = conn.createStatement(); + stat.execute("call next value for test_Sequence"); + stat.execute("call next value for test_Sequence3"); + ResultSet rs = stat.executeQuery("select * from " + + 
"information_schema.sequences order by sequence_name"); + rs.next(); + assertEquals("TEST_SEQUENCE", rs.getString("SEQUENCE_NAME")); + assertEquals("32", rs.getString("CACHE")); + rs.next(); + assertEquals("TEST_SEQUENCE3", rs.getString("SEQUENCE_NAME")); + assertEquals("3", rs.getString("CACHE")); + assertFalse(rs.next()); + conn.close(); + } + + private void testMetaTable() throws SQLException { + deleteDb("sequence"); + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute("create sequence a"); + stat.execute("create sequence b start with 7 minvalue 5 " + + "maxvalue 9 cycle increment by 2 nocache"); + stat.execute("create sequence c start with -4 minvalue -9 " + + "maxvalue -3 no cycle increment by -2 cache 3"); + + if (!config.memory) { + conn.close(); + conn = getConnection("sequence"); + } + + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * from " + + "information_schema.sequences order by sequence_name"); + rs.next(); + assertEquals("SEQUENCE", rs.getString("SEQUENCE_CATALOG")); + assertEquals("PUBLIC", rs.getString("SEQUENCE_SCHEMA")); + assertEquals("A", rs.getString("SEQUENCE_NAME")); + assertEquals(0, rs.getLong("CURRENT_VALUE")); + assertEquals(1, rs.getLong("INCREMENT")); + assertEquals(false, rs.getBoolean("IS_GENERATED")); + assertEquals("", rs.getString("REMARKS")); + assertEquals(32, rs.getLong("CACHE")); + assertEquals(1, rs.getLong("MIN_VALUE")); + assertEquals(Long.MAX_VALUE, rs.getLong("MAX_VALUE")); + assertEquals(false, rs.getBoolean("IS_CYCLE")); + rs.next(); + assertEquals("SEQUENCE", rs.getString("SEQUENCE_CATALOG")); + assertEquals("PUBLIC", rs.getString("SEQUENCE_SCHEMA")); + assertEquals("B", rs.getString("SEQUENCE_NAME")); + assertEquals(5, rs.getLong("CURRENT_VALUE")); + assertEquals(2, rs.getLong("INCREMENT")); + assertEquals(false, rs.getBoolean("IS_GENERATED")); + assertEquals("", rs.getString("REMARKS")); + assertEquals(1, rs.getLong("CACHE")); + 
assertEquals(5, rs.getLong("MIN_VALUE")); + assertEquals(9, rs.getLong("MAX_VALUE")); + assertEquals(true, rs.getBoolean("IS_CYCLE")); + rs.next(); + assertEquals("SEQUENCE", rs.getString("SEQUENCE_CATALOG")); + assertEquals("PUBLIC", rs.getString("SEQUENCE_SCHEMA")); + assertEquals("C", rs.getString("SEQUENCE_NAME")); + assertEquals(-2, rs.getLong("CURRENT_VALUE")); + assertEquals(-2, rs.getLong("INCREMENT")); + assertEquals(false, rs.getBoolean("IS_GENERATED")); + assertEquals("", rs.getString("REMARKS")); + assertEquals(3, rs.getLong("CACHE")); + assertEquals(-9, rs.getLong("MIN_VALUE")); + assertEquals(-3, rs.getLong("MAX_VALUE")); + assertEquals(false, rs.getBoolean("IS_CYCLE")); + assertFalse(rs.next()); + conn.close(); + } + + private void testCreateWithMinValue() throws SQLException { + test("create sequence s minvalue 3", null, 3, 4, 5, 6); + test("create sequence s minvalue -3 increment by -1 cycle", + null, -1, -2, -3, -1); + test("create sequence s minvalue -3 increment by -1", + "Sequence \"S\" has run out of numbers", -1, -2, -3); + test("create sequence s minvalue -3 increment by -1 nocycle", + "Sequence \"S\" has run out of numbers", -1, -2, -3); + test("create sequence s minvalue -3 increment by -1 no cycle", + "Sequence \"S\" has run out of numbers", -1, -2, -3); + test("create sequence s minvalue -3 increment by -1 nocache cycle", + null, -1, -2, -3, -1); + test("create sequence s minvalue -3 increment by -1 nocache", + "Sequence \"S\" has run out of numbers", -1, -2, -3); + test("create sequence s minvalue -3 increment by -1 nocache nocycle", + "Sequence \"S\" has run out of numbers", -1, -2, -3); + test("create sequence s minvalue -3 increment by -1 no cache no cycle", + "Sequence \"S\" has run out of numbers", -1, -2, -3); + } + + private void testCreateWithMaxValue() throws SQLException { + test("create sequence s maxvalue -3 increment by -1", + null, -3, -4, -5, -6); + test("create sequence s maxvalue 3 cycle", null, 1, 2, 3, 1); + 
test("create sequence s maxvalue 3", + "Sequence \"S\" has run out of numbers", 1, 2, 3); + test("create sequence s maxvalue 3 nocycle", + "Sequence \"S\" has run out of numbers", 1, 2, 3); + test("create sequence s maxvalue 3 no cycle", + "Sequence \"S\" has run out of numbers", 1, 2, 3); + test("create sequence s maxvalue 3 nocache cycle", + null, 1, 2, 3, 1); + test("create sequence s maxvalue 3 nocache", + "Sequence \"S\" has run out of numbers", 1, 2, 3); + test("create sequence s maxvalue 3 nocache nocycle", + "Sequence \"S\" has run out of numbers", 1, 2, 3); + test("create sequence s maxvalue 3 no cache no cycle", + "Sequence \"S\" has run out of numbers", 1, 2, 3); + } + + private void testCreationErrors() throws SQLException { + deleteDb("sequence"); + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + expectError( + stat, + "create sequence a minvalue 5 start with 2", + "Unable to create or alter sequence \"A\" because of " + + "invalid attributes (start value \"2\", " + + "min value \"5\", max value \"" + Long.MAX_VALUE + + "\", increment \"1\")"); + expectError( + stat, + "create sequence b maxvalue 5 start with 7", + "Unable to create or alter sequence \"B\" because of " + + "invalid attributes (start value \"7\", " + + "min value \"1\", max value \"5\", increment \"1\")"); + expectError( + stat, + "create sequence c minvalue 5 maxvalue 2", + "Unable to create or alter sequence \"C\" because of " + + "invalid attributes (start value \"5\", " + + "min value \"5\", max value \"2\", increment \"1\")"); + expectError( + stat, + "create sequence d increment by 0", + "Unable to create or alter sequence \"D\" because of " + + "invalid attributes (start value \"1\", " + + "min value \"1\", max value \"" + + Long.MAX_VALUE + "\", increment \"0\")"); + expectError(stat, + "create sequence e minvalue 1 maxvalue 5 increment 99", + "Unable to create or alter sequence \"E\" because of " + + "invalid attributes (start value 
\"1\", " + + "min value \"1\", max value \"5\", increment \"99\")"); + conn.close(); + } + + private void testCreateSql() throws SQLException { + deleteDb("sequence"); + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute("create sequence a"); + stat.execute("create sequence b start with 5 increment by 2 " + + "minvalue 3 maxvalue 7 cycle nocache"); + stat.execute("create sequence c start with 3 increment by 1 " + + "minvalue 2 maxvalue 9 nocycle cache 2"); + stat.execute("create sequence d nomaxvalue no minvalue no cache nocycle"); + stat.execute("create sequence e cache 1"); + List script = new ArrayList<>(); + ResultSet rs = stat.executeQuery("script nodata"); + while (rs.next()) { + script.add(rs.getString(1)); + } + Collections.sort(script); + assertEquals("CREATE SEQUENCE PUBLIC.A START WITH 1;", script.get(0)); + assertEquals("CREATE SEQUENCE PUBLIC.B START " + + "WITH 5 INCREMENT BY 2 " + + "MINVALUE 3 MAXVALUE 7 CYCLE CACHE 1;", script.get(1)); + assertEquals("CREATE SEQUENCE PUBLIC.C START " + + "WITH 3 MINVALUE 2 MAXVALUE 9 CACHE 2;", + script.get(2)); + assertEquals("CREATE SEQUENCE PUBLIC.D START " + + "WITH 1 CACHE 1;", script.get(3)); + assertEquals("CREATE SEQUENCE PUBLIC.E START " + + "WITH 1 CACHE 1;", script.get(4)); + conn.close(); + } + + private void testDefaultMinMax() throws SQLException { + // test that we calculate default MIN and MAX values correctly + deleteDb("sequence"); + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute("create sequence a START WITH -7320917853639540658"); + stat.execute("create sequence b START WITH 7320917853639540658 INCREMENT -1"); + conn.close(); + } + + private void testTwo() throws SQLException { + deleteDb("sequence"); + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute("create sequence s"); + conn.setAutoCommit(false); + + Connection conn2 = 
getConnection("sequence"); + Statement stat2 = conn2.createStatement(); + conn2.setAutoCommit(false); + + long last = 0; + for (int i = 0; i < 100; i++) { + long v1 = getNext(stat); + assertTrue(v1 > last); + last = v1; + for (int j = 0; j < 100; j++) { + long v2 = getNext(stat2); + assertTrue(v2 > last); + last = v2; + } + } + + conn2.close(); + conn.close(); + } + + private void test(String setupSql, String finalError, long... values) + throws SQLException { + + deleteDb("sequence"); + + Connection conn = getConnection("sequence"); + Statement stat = conn.createStatement(); + stat.execute(setupSql); + + if (!config.memory) { + conn.close(); + conn = getConnection("sequence"); + } + + stat = conn.createStatement(); + for (long value : values) { + assertEquals(value, getNext(stat)); + } + + if (finalError != null) { + try { + getNext(stat); + fail("Expected error: " + finalError); + } catch (SQLException e) { + assertContains(e.getMessage(), finalError); + } + } + + conn.close(); + } + + private void expectError(Statement stat, String sql, String error) { + try { + stat.execute(sql); + fail("Expected error: " + error); + } catch (SQLException e) { + assertContains(e.getMessage(), error); + } + } + + private static long getNext(Statement stat) throws SQLException { + ResultSet rs = stat.executeQuery("call next value for s"); + rs.next(); + long value = rs.getLong(1); + return value; + } + + /** + * A test trigger. 
+ */ + public static class TriggerTest implements Trigger { + + @Override + public void init(Connection conn, String schemaName, + String triggerName, String tableName, boolean before, int type) + throws SQLException { + conn.createStatement().executeQuery("call next value for test_seq"); + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + // ignore + } + + @Override + public void close() throws SQLException { + // ignore + } + + @Override + public void remove() throws SQLException { + // ignore + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSessionsLocks.java b/modules/h2/src/test/java/org/h2/test/db/TestSessionsLocks.java new file mode 100644 index 0000000000000..5ecee1effbdc1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestSessionsLocks.java @@ -0,0 +1,141 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; + +/** + * Tests the meta data tables information_schema.locks and sessions. + */ +public class TestSessionsLocks extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testCancelStatement(); + if (!config.mvcc) { + testLocks(); + } + deleteDb("sessionsLocks"); + } + + private void testLocks() throws SQLException { + deleteDb("sessionsLocks"); + Connection conn = getConnection("sessionsLocks;MULTI_THREADED=1"); + Statement stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from information_schema.locks " + + "order by session_id"); + assertFalse(rs.next()); + Connection conn2 = getConnection("sessionsLocks"); + Statement stat2 = conn2.createStatement(); + stat2.execute("create table test(id int primary key, name varchar)"); + conn2.setAutoCommit(false); + stat2.execute("insert into test values(1, 'Hello')"); + rs = stat.executeQuery("select * from information_schema.locks " + + "order by session_id"); + rs.next(); + assertEquals("PUBLIC", rs.getString("TABLE_SCHEMA")); + assertEquals("TEST", rs.getString("TABLE_NAME")); + rs.getString("SESSION_ID"); + if (config.mvcc || config.mvStore) { + assertEquals("READ", rs.getString("LOCK_TYPE")); + } else { + assertEquals("WRITE", rs.getString("LOCK_TYPE")); + } + assertFalse(rs.next()); + conn2.commit(); + conn2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); + stat2.execute("SELECT * FROM TEST"); + rs = stat.executeQuery("select * from information_schema.locks " + + "order by session_id"); + if (!config.mvcc && !config.mvStore) { + rs.next(); + assertEquals("PUBLIC", rs.getString("TABLE_SCHEMA")); + assertEquals("TEST", rs.getString("TABLE_NAME")); + rs.getString("SESSION_ID"); + assertEquals("READ", rs.getString("LOCK_TYPE")); + } + assertFalse(rs.next()); + conn2.commit(); + rs = stat.executeQuery("select * from information_schema.locks " + + "order by session_id"); + assertFalse(rs.next()); + conn.close(); + conn2.close(); + } + + private void testCancelStatement() throws Exception { + deleteDb("sessionsLocks"); + 
Connection conn = getConnection("sessionsLocks;MULTI_THREADED=1"); + Statement stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from information_schema.sessions " + + "order by SESSION_START, ID"); + rs.next(); + int sessionId = rs.getInt("ID"); + rs.getString("USER_NAME"); + rs.getTimestamp("SESSION_START"); + rs.getString("STATEMENT"); + rs.getTimestamp("STATEMENT_START"); + assertFalse(rs.next()); + Connection conn2 = getConnection("sessionsLocks"); + final Statement stat2 = conn2.createStatement(); + rs = stat.executeQuery("select * from information_schema.sessions " + + "order by SESSION_START, ID"); + assertTrue(rs.next()); + assertEquals(sessionId, rs.getInt("ID")); + assertTrue(rs.next()); + int otherId = rs.getInt("ID"); + assertTrue(otherId != sessionId); + assertFalse(rs.next()); + stat2.execute("set throttle 1"); + final boolean[] done = { false }; + Runnable runnable = new Runnable() { + @Override + public void run() { + try { + stat2.execute("select count(*) from " + + "system_range(1, 10000000) t1, system_range(1, 10000000) t2"); + new Error("Unexpected success").printStackTrace(); + } catch (SQLException e) { + done[0] = true; + } + } + }; + new Thread(runnable).start(); + while (true) { + Thread.sleep(100); + rs = stat.executeQuery("CALL CANCEL_SESSION(" + otherId + ")"); + rs.next(); + if (rs.getBoolean(1)) { + for (int i = 0; i < 20; i++) { + Thread.sleep(100); + if (done[0]) { + break; + } + } + assertTrue(done[0]); + break; + } + } + conn2.close(); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSetCollation.java b/modules/h2/src/test/java/org/h2/test/db/TestSetCollation.java new file mode 100644 index 0000000000000..c48b78f347785 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestSetCollation.java @@ -0,0 +1,192 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import org.h2.jdbc.JdbcSQLException; +import org.h2.test.TestBase; + +public class TestSetCollation extends TestBase { + private static final String[] TEST_STRINGS = new String[]{"A", "\u00c4", "AA", "B", "$", "1A", null}; + + private static final String DB_NAME = "collator"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testDefaultCollator(); + testCp500Collator(); + testDeCollator(); + testUrlParameter(); + testReopenDatabase(); + testReopenDatabaseWithUrlParameter(); + testReopenDatabaseWithDifferentCollationInUrl(); + testReopenDatabaseWithSameCollationInUrl(); + } + + + private void testDefaultCollator() throws Exception { + assertEquals(Arrays.asList(null, "$", "1A", "A", "AA", "B", "\u00c4"), orderedWithCollator(null)); + } + + private void testDeCollator() throws Exception { + assertEquals(Arrays.asList(null, "$", "1A", "A", "\u00c4", "AA", "B"), orderedWithCollator("DE")); + assertEquals(Arrays.asList(null, "$", "1A", "A", "\u00c4", "AA", "B"), orderedWithCollator("DEFAULT_DE")); + } + + private void testCp500Collator() throws Exception { + // IBM z/OS codepage + assertEquals(Arrays.asList(null, "A", "AA", "B", "1A", "$", "\u00c4"), + orderedWithCollator("CHARSET_CP500")); + } + + private void testUrlParameter() throws Exception { + // Specifying the collator in the JDBC Url should have the same effect + // as setting it with a set statement + config.collation = "CHARSET_CP500"; + try { + assertEquals(Arrays.asList(null, "A", "AA", "B", "1A", "$", "\u00c4"), orderedWithCollator(null)); + } 
finally { + config.collation = null; + } + } + + private void testReopenDatabase() throws Exception { + if (config.memory) { + return; + } + + orderedWithCollator("DE"); + + try (Connection con = getConnection(DB_NAME)) { + insertValues(con, new String[]{"A", "\u00c4"}, 100); + + assertEquals(Arrays.asList(null, "$", "1A", "A", "A", "\u00c4", "\u00c4", "AA", "B"), + loadTableValues(con)); + } + } + + private void testReopenDatabaseWithUrlParameter() throws Exception { + if (config.memory) { + return; + } + + config.collation = "DE"; + try { + orderedWithCollator(null); + } finally { + config.collation = null; + } + + // reopen the database without specifying a collation in the url. + // This should keep the initial collation. + try (Connection con = getConnection(DB_NAME)) { + insertValues(con, new String[]{"A", "\u00c4"}, 100); + + assertEquals(Arrays.asList(null, "$", "1A", "A", "A", "\u00c4", "\u00c4", "AA", "B"), + loadTableValues(con)); + } + + } + + private void testReopenDatabaseWithDifferentCollationInUrl() throws Exception { + if (config.memory) { + return; + } + config.collation = "DE"; + try { + orderedWithCollator(null); + } finally { + config.collation = null; + } + + config.collation = "CHARSET_CP500"; + try { + getConnection(DB_NAME); + fail(); + } catch (JdbcSQLException e) { + // expected + } finally { + config.collation = null; + } + } + + private void testReopenDatabaseWithSameCollationInUrl() throws Exception { + if (config.memory) { + return; + } + config.collation = "DE"; + try { + orderedWithCollator(null); + } finally { + config.collation = null; + } + + config.collation = "DE"; + try (Connection con = getConnection(DB_NAME)) { + insertValues(con, new String[]{"A", "\u00c4"}, 100); + + assertEquals(Arrays.asList(null, "$", "1A", "A", "A", "\u00c4", "\u00c4", "AA", "B"), + loadTableValues(con)); + } finally { + config.collation = null; + } + } + + + private List orderedWithCollator(String collator) throws SQLException { + deleteDb(DB_NAME); + 
try (Connection con = getConnection(DB_NAME); Statement statement = con.createStatement()) { + if (collator != null) { + statement.execute("SET COLLATION " + collator); + } + statement.execute("CREATE TABLE charsettable(id INT PRIMARY KEY, testvalue VARCHAR(50))"); + + insertValues(con, TEST_STRINGS, 1); + + return loadTableValues(con); + } + } + + private static void insertValues(Connection con, String[] values, int startId) throws SQLException { + PreparedStatement ps = con.prepareStatement("INSERT INTO charsettable VALUES (?, ?)"); + int id = startId; + for (String value : values) { + ps.setInt(1, id++); + ps.setString(2, value); + ps.execute(); + } + ps.close(); + } + + private static List loadTableValues(Connection con) throws SQLException { + List results = new ArrayList<>(); + Statement statement = con.createStatement(); + ResultSet resultSet = statement.executeQuery("select testvalue from charsettable order by testvalue"); + while (resultSet.next()) { + results.add(resultSet.getString(1)); + } + statement.close(); + return results; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestShow.java b/modules/h2/src/test/java/org/h2/test/db/TestShow.java new file mode 100644 index 0000000000000..004a3d64fee05 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestShow.java @@ -0,0 +1,68 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import org.h2.test.TestBase; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +/** + * Test of compatibility for the SHOW statement. + */ +public class TestShow extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testPgCompatibility(); + testMysqlCompatibility(); + } + + private void testPgCompatibility() throws SQLException { + try (Connection conn = getConnection("mem:pg")) { + Statement stat = conn.createStatement(); + + assertResult("UNICODE", stat, "SHOW CLIENT_ENCODING"); + assertResult("read committed", stat, "SHOW DEFAULT_TRANSACTION_ISOLATION"); + assertResult("read committed", stat, "SHOW TRANSACTION ISOLATION LEVEL"); + assertResult("ISO", stat, "SHOW DATESTYLE"); + assertResult("8.2.23", stat, "SHOW SERVER_VERSION"); + assertResult("UTF8", stat, "SHOW SERVER_ENCODING"); + } + } + + private void testMysqlCompatibility() throws SQLException { + try (Connection conn = getConnection("mem:pg")) { + Statement stat = conn.createStatement(); + ResultSet rs; + + // show tables without a schema + stat.execute("create table person(id int, name varchar)"); + rs = stat.executeQuery("SHOW TABLES"); + assertTrue(rs.next()); + assertEquals("PERSON", rs.getString(1)); + assertEquals("PUBLIC", rs.getString(2)); + assertFalse(rs.next()); + + // show tables with a schema + assertResultRowCount(1, stat.executeQuery("SHOW TABLES FROM PUBLIC")); + + // columns + assertResultRowCount(2, stat.executeQuery("SHOW COLUMNS FROM person")); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSpaceReuse.java b/modules/h2/src/test/java/org/h2/test/db/TestSpaceReuse.java new file mode 100644 index 0000000000000..56e8dd20720c2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestSpaceReuse.java @@ -0,0 +1,65 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.engine.Constants; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests if disk space is reused after deleting many rows. + */ +public class TestSpaceReuse extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.memory) { + return; + } + deleteDb("spaceReuse"); + long max = 0, now = 0, min = Long.MAX_VALUE; + for (int i = 0; i < 20; i++) { + Connection conn = getConnection("spaceReuse"); + Statement stat = conn.createStatement(); + stat.execute("set retention_time 0"); + stat.execute("create table if not exists t(i int)"); + stat.execute("insert into t select x from system_range(1, 500)"); + conn.close(); + conn = getConnection("spaceReuse"); + conn.createStatement().execute("delete from t"); + conn.close(); + String fileName = getBaseDir() + "/spaceReuse"; + if (Constants.VERSION_MINOR >= 4) { + fileName += Constants.SUFFIX_MV_FILE; + } else { + fileName += Constants.SUFFIX_PAGE_FILE; + } + now = FileUtils.size(fileName); + assertTrue(now > 0); + if (i < 10) { + max = Math.max(max, now); + } else { + min = Math.min(min, now); + } + } + assertTrue("min: " + min + " max: " + max, min <= max); + deleteDb("spaceReuse"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSpatial.java b/modules/h2/src/test/java/org/h2/test/db/TestSpatial.java new file mode 100644 index 0000000000000..52bb75a82c8c6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestSpatial.java @@ -0,0 +1,1159 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Savepoint; +import java.sql.Statement; +import java.sql.Types; +import java.util.Random; +import org.h2.api.Aggregate; +import org.h2.test.TestBase; +import org.h2.tools.SimpleResultSet; +import org.h2.tools.SimpleRowSource; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueGeometry; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.Envelope; +import org.locationtech.jts.geom.Geometry; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.Point; +import org.locationtech.jts.geom.Polygon; +import org.locationtech.jts.geom.util.AffineTransformation; +import org.locationtech.jts.io.ParseException; +import org.locationtech.jts.io.WKTReader; + +/** + * Spatial datatype and index tests. + * + * @author Thomas Mueller + * @author Noel Grandin + * @author Nicolas Fortin, Atelier SIG, IRSTV FR CNRS 24888 + */ +public class TestSpatial extends TestBase { + + private static final String URL = "spatial"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (!config.mvStore && config.mvcc) { + return; + } + if (config.memory && config.mvcc) { + return; + } + if (DataType.GEOMETRY_CLASS != null) { + deleteDb("spatial"); + testSpatial(); + deleteDb("spatial"); + } + } + + private void testSpatial() throws SQLException { + testBug1(); + testSpatialValues(); + testOverlap(); + testNotOverlap(); + testPersistentSpatialIndex(); + testSpatialIndexQueryMultipleTable(); + testIndexTransaction(); + testJavaAlias(); + testJavaAliasTableFunction(); + testMemorySpatialIndex(); + testGeometryDataType(); + testWKB(); + testValueConversion(); + testEquals(); + testTableFunctionGeometry(); + testHashCode(); + testAggregateWithGeometry(); + testTableViewSpatialPredicate(); + testValueGeometryScript(); + testInPlaceUpdate(); + testScanIndexOnNonSpatialQuery(); + testStoreCorruption(); + testExplainSpatialIndexWithPk(); + testNullableGeometry(); + testNullableGeometryDelete(); + testNullableGeometryInsert(); + testNullableGeometryUpdate(); + testIndexUpdateNullGeometry(); + testInsertNull(); + testSpatialIndexWithOrder(); + } + + private void testBug1() throws SQLException { + deleteDb("spatial"); + Connection conn = getConnection(URL); + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE VECTORS (ID INTEGER NOT NULL, GEOM GEOMETRY, S INTEGER)"); + stat.execute("INSERT INTO VECTORS(ID, GEOM, S) " + + "VALUES(0, 'POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))', 1)"); + + stat.executeQuery("select * from (select * from VECTORS) WHERE S=1 " + + "AND GEOM && 'POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))'"); + conn.close(); + deleteDb("spatial"); + } + + private void testHashCode() { + ValueGeometry geomA = ValueGeometry + .get("POLYGON ((67 13 6, 67 18 5, 59 18 4, 59 13 6, 67 13 6))"); + ValueGeometry geomB = ValueGeometry + .get("POLYGON ((67 13 6, 67 18 5, 59 18 4, 59 13 6, 67 13 6))"); + 
ValueGeometry geomC = ValueGeometry + .get("POLYGON ((67 13 6, 67 18 5, 59 18 4, 59 13 5, 67 13 6))"); + assertEquals(geomA.hashCode(), geomB.hashCode()); + assertFalse(geomA.hashCode() == geomC.hashCode()); + } + + private void testSpatialValues() throws SQLException { + deleteDb("spatial"); + Connection conn = getConnection(URL); + Statement stat = conn.createStatement(); + + stat.execute("create memory table test" + + "(id int primary key, polygon geometry)"); + stat.execute("insert into test values(1, " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + ResultSet rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("POLYGON ((1 1, 1 2, 2 2, 1 1))", rs.getString(2)); + GeometryFactory f = new GeometryFactory(); + Polygon polygon = f.createPolygon(new Coordinate[] { + new Coordinate(1, 1), + new Coordinate(1, 2), + new Coordinate(2, 2), + new Coordinate(1, 1) }); + assertTrue(polygon.equals(rs.getObject(2))); + + rs = stat.executeQuery("select * from test where polygon = " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))'"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + stat.executeQuery("select * from test where polygon > " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))'"); + stat.executeQuery("select * from test where polygon < " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))'"); + + stat.execute("drop table test"); + conn.close(); + deleteDb("spatial"); + } + + /** + * Generate a random line string under the given bounding box. 
+ * + * @param geometryRand the random generator + * @param minX Bounding box min x + * @param maxX Bounding box max x + * @param minY Bounding box min y + * @param maxY Bounding box max y + * @param maxLength LineString maximum length + * @return A segment within this bounding box + */ + static Geometry getRandomGeometry(Random geometryRand, + double minX, double maxX, + double minY, double maxY, double maxLength) { + GeometryFactory factory = new GeometryFactory(); + // Create the start point + Coordinate start = new Coordinate( + geometryRand.nextDouble() * (maxX - minX) + minX, + geometryRand.nextDouble() * (maxY - minY) + minY); + // Compute an angle + double angle = geometryRand.nextDouble() * Math.PI * 2; + // Compute length + double length = geometryRand.nextDouble() * maxLength; + // Compute end point + Coordinate end = new Coordinate( + start.x + Math.cos(angle) * length, + start.y + Math.sin(angle) * length); + return factory.createLineString(new Coordinate[] { start, end }); + } + + private void testOverlap() throws SQLException { + deleteDb("spatial"); + try (Connection conn = getConnection(URL)) { + Statement stat = conn.createStatement(); + stat.execute("create memory table test" + + "(id int primary key, poly geometry)"); + stat.execute("insert into test values(1, " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + stat.execute("insert into test values(2, " + + "'POLYGON ((3 1, 3 2, 4 2, 3 1))')"); + stat.execute("insert into test values(3, " + + "'POLYGON ((1 3, 1 4, 2 4, 1 3))')"); + + ResultSet rs = stat.executeQuery( + "select * from test " + + "where poly && 'POINT (1.5 1.5)'::Geometry"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt("id")); + assertFalse(rs.next()); + stat.execute("drop table test"); + } + } + private void testPersistentSpatialIndex() throws SQLException { + deleteDb("spatial"); + try (Connection conn = getConnection(URL)) { + Statement stat = conn.createStatement(); + stat.execute("create table test" + + "(id int primary 
key, poly geometry)"); + stat.execute("insert into test values(1, " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + stat.execute("insert into test values(2,null)"); + stat.execute("insert into test values(3, " + + "'POLYGON ((3 1, 3 2, 4 2, 3 1))')"); + stat.execute("insert into test values(4,null)"); + stat.execute("insert into test values(5, " + + "'POLYGON ((1 3, 1 4, 2 4, 1 3))')"); + stat.execute("create spatial index on test(poly)"); + + ResultSet rs = stat.executeQuery( + "select * from test " + + "where poly && 'POINT (1.5 1.5)'::Geometry"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt("id")); + assertFalse(rs.next()); + rs.close(); + + // Test with multiple operator + rs = stat.executeQuery( + "select * from test " + + "where poly && 'POINT (1.5 1.5)'::Geometry " + + "AND poly && 'POINT (1.7 1.75)'::Geometry"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt("id")); + assertFalse(rs.next()); + rs.close(); + } + + if (config.memory) { + return; + } + + try (Connection conn = getConnection(URL)) { + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery( + "select * from test " + + "where poly && 'POINT (1.5 1.5)'::Geometry"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt("id")); + assertFalse(rs.next()); + stat.execute("drop table test"); + } + } + + private void testNotOverlap() throws SQLException { + deleteDb("spatial"); + try (Connection conn = getConnection(URL)) { + Statement stat = conn.createStatement(); + stat.execute("create memory table test" + + "(id int primary key, poly geometry)"); + stat.execute("insert into test values(1, " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + stat.execute("insert into test values(2,null)"); + stat.execute("insert into test values(3, " + + "'POLYGON ((3 1, 3 2, 4 2, 3 1))')"); + stat.execute("insert into test values(4,null)"); + stat.execute("insert into test values(5, " + + "'POLYGON ((1 3, 1 4, 2 4, 1 3))')"); + + ResultSet rs = stat.executeQuery( + "select * from test " + + 
"where NOT poly && 'POINT (1.5 1.5)'::Geometry"); + assertTrue(rs.next()); + assertEquals(3, rs.getInt("id")); + assertTrue(rs.next()); + assertEquals(5, rs.getInt("id")); + assertFalse(rs.next()); + stat.execute("drop table test"); + } + } + + private static void createTestTable(Statement stat) throws SQLException { + stat.execute("create table area(idArea int primary key, the_geom geometry)"); + stat.execute("create spatial index on area(the_geom)"); + stat.execute("insert into area values(1, " + + "'POLYGON ((-10 109, 90 109, 90 9, -10 9, -10 109))')"); + stat.execute("insert into area values(2, " + + "'POLYGON ((90 109, 190 109, 190 9, 90 9, 90 109))')"); + stat.execute("insert into area values(3, " + + "'POLYGON ((190 109, 290 109, 290 9, 190 9, 190 109))')"); + stat.execute("insert into area values(4, " + + "'POLYGON ((-10 9, 90 9, 90 -91, -10 -91, -10 9))')"); + stat.execute("insert into area values(5, " + + "'POLYGON ((90 9, 190 9, 190 -91, 90 -91, 90 9))')"); + stat.execute("insert into area values(6, " + + "'POLYGON ((190 9, 290 9, 290 -91, 190 -91, 190 9))')"); + stat.execute("insert into area values(7,null)"); + stat.execute("insert into area values(8,null)"); + + + stat.execute("create table roads(idRoad int primary key, the_geom geometry)"); + stat.execute("create spatial index on roads(the_geom)"); + stat.execute("insert into roads values(1, " + + "'LINESTRING (27.65595463138 -16.728733459357244, " + + "47.61814744801515 40.435727788279806)')"); + stat.execute("insert into roads values(2, " + + "'LINESTRING (17.674858223062415 55.861058601134246, " + + "55.78449905482046 76.73062381852554)')"); + stat.execute("insert into roads values(3, " + + "'LINESTRING (68.48771266540646 67.65689981096412, " + + "108.4120982986768 88.52646502835542)')"); + stat.execute("insert into roads values(4, " + + "'LINESTRING (177.3724007561437 18.65879017013235, " + + "196.4272211720227 -16.728733459357244)')"); + stat.execute("insert into roads values(5, " + + 
"'LINESTRING (106.5973534971645 -12.191871455576518, " + + "143.79962192816637 30.454631379962223)')"); + stat.execute("insert into roads values(6, " + + "'LINESTRING (144.70699432892252 55.861058601134246, " + + "150.1512287334594 83.9896030245747)')"); + stat.execute("insert into roads values(7, " + + "'LINESTRING (60.321361058601155 -13.099243856332663, " + + "149.24385633270325 5.955576559546344)')"); + stat.execute("insert into roads values(8, null)"); + stat.execute("insert into roads values(9, null)"); + + } + + private void testSpatialIndexQueryMultipleTable() throws SQLException { + deleteDb("spatial"); + try (Connection conn = getConnection(URL)) { + Statement stat = conn.createStatement(); + createTestTable(stat); + testRoadAndArea(stat); + } + deleteDb("spatial"); + } + private void testRoadAndArea(Statement stat) throws SQLException { + ResultSet rs = stat.executeQuery( + "select idArea, COUNT(idRoad) roadCount " + + "from area, roads " + + "where area.the_geom && roads.the_geom " + + "GROUP BY idArea ORDER BY idArea"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt("idArea")); + assertEquals(3, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(2, rs.getInt("idArea")); + assertEquals(4, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(3, rs.getInt("idArea")); + assertEquals(1, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(4, rs.getInt("idArea")); + assertEquals(2, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(5, rs.getInt("idArea")); + assertEquals(3, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(6, rs.getInt("idArea")); + assertEquals(1, rs.getInt("roadCount")); + assertFalse(rs.next()); + rs.close(); + } + private void testIndexTransaction() throws SQLException { + // Check session management in index + deleteDb("spatial"); + try (Connection conn = getConnection(URL)) { + conn.setAutoCommit(false); + Statement stat = conn.createStatement(); + 
createTestTable(stat); + Savepoint sp = conn.setSavepoint(); + // Remove a row but do not commit + stat.execute("delete from roads where idRoad=9"); + stat.execute("delete from roads where idRoad=7"); + // Check if index is updated + ResultSet rs = stat.executeQuery( + "select idArea, COUNT(idRoad) roadCount " + + "from area, roads " + + "where area.the_geom && roads.the_geom " + + "GROUP BY idArea ORDER BY idArea"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt("idArea")); + assertEquals(3, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(2, rs.getInt("idArea")); + assertEquals(4, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(3, rs.getInt("idArea")); + assertEquals(1, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(4, rs.getInt("idArea")); + assertEquals(1, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(5, rs.getInt("idArea")); + assertEquals(2, rs.getInt("roadCount")); + assertTrue(rs.next()); + assertEquals(6, rs.getInt("idArea")); + assertEquals(1, rs.getInt("roadCount")); + assertFalse(rs.next()); + rs.close(); + conn.rollback(sp); + // Check if the index is restored + testRoadAndArea(stat); + } + } + + /** + * Test the in-memory spatial index. + */ + private void testMemorySpatialIndex() throws SQLException { + deleteDb("spatial"); + Connection conn = getConnection(URL); + Statement stat = conn.createStatement(); + + stat.execute("create memory table test(id int primary key, polygon geometry)"); + stat.execute("create spatial index idx_test_polygon on test(polygon)"); + stat.execute("insert into test values(1, 'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + stat.execute("insert into test values(2, null)"); + ResultSet rs; + + // a query that cannot possibly return a result + rs = stat.executeQuery("select * from test " + + "where polygon && 'POLYGON ((1 1, 1 2, 2 2, 1 1))'::Geometry " + + "and polygon && 'POLYGON ((10 10, 10 20, 20 20, 10 10))'::Geometry"); + assertFalse(rs.next()); + 
+ rs = stat.executeQuery( + "explain select * from test " + + "where polygon && 'POLYGON ((1 1, 1 2, 2 2, 1 1))'::Geometry"); + rs.next(); + if (config.mvStore) { + assertContains(rs.getString(1), "/* PUBLIC.IDX_TEST_POLYGON: POLYGON &&"); + } + + // TODO equality should probably also use the spatial index + // rs = stat.executeQuery("explain select * from test " + + // "where polygon = 'POLYGON ((1 1, 1 2, 2 2, 1 1))'"); + // rs.next(); + // assertContains(rs.getString(1), + // "/* PUBLIC.IDX_TEST_POLYGON: POLYGON ="); + + // these queries actually have no meaning in the context of a spatial + // index, but + // check them anyhow + stat.executeQuery("select * from test where polygon > " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))'::Geometry"); + stat.executeQuery("select * from test where polygon < " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))'::Geometry"); + + rs = stat.executeQuery( + "select * from test " + + "where intersects(polygon, 'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + assertTrue(rs.next()); + + rs = stat.executeQuery( + "select * from test " + + "where intersects(polygon, 'POINT (1 1)')"); + assertTrue(rs.next()); + + rs = stat.executeQuery( + "select * from test " + + "where intersects(polygon, 'POINT (0 0)')"); + assertFalse(rs.next()); + + stat.execute("drop table test"); + conn.close(); + deleteDb("spatial"); + } + + /** + * Test java alias with Geometry type. 
+ */ + private void testJavaAlias() throws SQLException { + deleteDb("spatial"); + try (Connection conn = getConnection(URL)) { + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS T_GEOM_FROM_TEXT FOR \"" + + TestSpatial.class.getName() + ".geomFromText\""); + stat.execute("create table test(id int primary key " + + "auto_increment, the_geom geometry)"); + stat.execute("insert into test(the_geom) values(" + + "T_GEOM_FROM_TEXT('POLYGON ((" + + "62 48, 84 48, 84 42, 56 34, 62 48))',1488))"); + stat.execute("DROP ALIAS T_GEOM_FROM_TEXT"); + ResultSet rs = stat.executeQuery("select the_geom from test"); + assertTrue(rs.next()); + assertEquals("POLYGON ((62 48, 84 48, 84 42, 56 34, 62 48))", + rs.getObject(1).toString()); + } + deleteDb("spatial"); + } + + /** + * Test java alias with Geometry type. + */ + private void testJavaAliasTableFunction() throws SQLException { + deleteDb("spatial"); + try (Connection conn = getConnection(URL)) { + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS T_RANDOM_GEOM_TABLE FOR \"" + + TestSpatial.class.getName() + ".getRandomGeometryTable\""); + stat.execute( + "create table test as " + + "select * from T_RANDOM_GEOM_TABLE(42,20,-100,100,-100,100,4)"); + stat.execute("DROP ALIAS T_RANDOM_GEOM_TABLE"); + ResultSet rs = stat.executeQuery("select count(*) from test"); + assertTrue(rs.next()); + assertEquals(20, rs.getInt(1)); + } + deleteDb("spatial"); + } + + /** + * Generate a result set with random geometry data. + * Used as an ALIAS function. 
+ * + * @param seed the random seed + * @param rowCount the number of rows + * @param minX the smallest x + * @param maxX the largest x + * @param minY the smallest y + * @param maxY the largest y + * @param maxLength the maximum length + * @return a result set + */ + public static ResultSet getRandomGeometryTable( + final long seed, final long rowCount, + final double minX, final double maxX, + final double minY, final double maxY, final double maxLength) { + + SimpleResultSet rs = new SimpleResultSet(new SimpleRowSource() { + + private final Random random = new Random(seed); + private int currentRow; + + @Override + public Object[] readRow() throws SQLException { + if (currentRow++ < rowCount) { + return new Object[] { + getRandomGeometry(random, + minX, maxX, minY, maxY, maxLength) }; + } + return null; + } + + @Override + public void close() { + // nothing to do + } + + @Override + public void reset() throws SQLException { + random.setSeed(seed); + } + }); + rs.addColumn("the_geom", Types.OTHER, "GEOMETRY", Integer.MAX_VALUE, 0); + return rs; + } + + /** + * Convert the text to a geometry object. + * + * @param text the geometry as a Well Known Text + * @param srid the projection id + * @return Geometry object + */ + public static Geometry geomFromText(String text, int srid) throws SQLException { + WKTReader wktReader = new WKTReader(); + try { + Geometry geom = wktReader.read(text); + geom.setSRID(srid); + return geom; + } catch (ParseException ex) { + throw new SQLException(ex); + } + } + + private void testGeometryDataType() { + GeometryFactory geometryFactory = new GeometryFactory(); + Geometry geometry = geometryFactory.createPoint(new Coordinate(0, 0)); + assertEquals(Value.GEOMETRY, DataType.getTypeFromClass(geometry.getClass())); + } + + /** + * Test serialization of Z and SRID values. 
+     */
+    private void testWKB() {
+        ValueGeometry geom3d = ValueGeometry.get(
+                "POLYGON ((67 13 6, 67 18 5, 59 18 4, 59 13 6, 67 13 6))", 27572);
+        ValueGeometry copy = ValueGeometry.get(geom3d.getBytes());
+        assertEquals(6, copy.getGeometry().getCoordinates()[0].z);
+        assertEquals(5, copy.getGeometry().getCoordinates()[1].z);
+        assertEquals(4, copy.getGeometry().getCoordinates()[2].z);
+        // Test SRID
+        copy = ValueGeometry.get(geom3d.getBytes());
+        assertEquals(27572, copy.getGeometry().getSRID());
+    }
+
+    /**
+     * Test conversion of a Geometry object into an Object
+     */
+    private void testValueConversion() throws SQLException {
+        deleteDb("spatial");
+        Connection conn = getConnection(URL);
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE ALIAS OBJ_STRING FOR \"" +
+                TestSpatial.class.getName() +
+                ".getObjectString\"");
+        ResultSet rs = stat.executeQuery(
+                "select OBJ_STRING('POINT( 15 25 )'::geometry)");
+        assertTrue(rs.next());
+        assertEquals("POINT (15 25)", rs.getString(1));
+        conn.close();
+        deleteDb("spatial");
+    }
+
+    /**
+     * Get the toString value of the object.
+     *
+     * @param object the object
+     * @return the string representation
+     */
+    public static String getObjectString(Object object) {
+        return object.toString();
+    }
+
+    /**
+     * Test equality method on ValueGeometry
+     */
+    private void testEquals() {
+        // 3d equality test
+        ValueGeometry geom3d = ValueGeometry.get(
+                "POLYGON ((67 13 6, 67 18 5, 59 18 4, 59 13 6, 67 13 6))");
+        ValueGeometry geom2d = ValueGeometry.get(
+                "POLYGON ((67 13, 67 18, 59 18, 59 13, 67 13))");
+        assertFalse(geom3d.equals(geom2d));
+        // SRID equality test
+        GeometryFactory geometryFactory = new GeometryFactory();
+        Geometry geometry = geometryFactory.createPoint(new Coordinate(0, 0));
+        geometry.setSRID(27572);
+        ValueGeometry valueGeometry =
+                ValueGeometry.getFromGeometry(geometry);
+        Geometry geometry2 = geometryFactory.createPoint(new Coordinate(0, 0));
+        geometry2.setSRID(5326);
+        ValueGeometry valueGeometry2 =
+                ValueGeometry.getFromGeometry(geometry2);
+        assertFalse(valueGeometry.equals(valueGeometry2));
+        // Check illegal geometry (no WKB representation)
+        try {
+            ValueGeometry.get("POINT EMPTY");
+            fail("expected this to throw IllegalArgumentException");
+        } catch (IllegalArgumentException ex) {
+            // expected
+        }
+    }
+
+    /**
+     * Check that geometry column type is kept with a table function
+     */
+    private void testTableFunctionGeometry() throws SQLException {
+        deleteDb("spatial");
+        try (Connection conn = getConnection(URL)) {
+            Statement stat = conn.createStatement();
+            stat.execute("CREATE ALIAS POINT_TABLE FOR \"" +
+                    TestSpatial.class.getName() + ".pointTable\"");
+            stat.execute("create table test as select * from point_table(1, 1)");
+            // Read column type
+            ResultSet columnMeta = conn.getMetaData().
+                    getColumns(null, null, "TEST", "THE_GEOM");
+            assertTrue(columnMeta.next());
+            assertEquals("geometry",
+                    columnMeta.getString("TYPE_NAME").toLowerCase());
+            assertFalse(columnMeta.next());
+        }
+        deleteDb("spatial");
+    }
+
+    /**
+     * This method is called via reflection from the database.
+     *
+     * @param x the x position of the point
+     * @param y the y position of the point
+     * @return a result set with this point
+     */
+    public static ResultSet pointTable(double x, double y) {
+        GeometryFactory factory = new GeometryFactory();
+        SimpleResultSet rs = new SimpleResultSet();
+        rs.addColumn("THE_GEOM", Types.JAVA_OBJECT, "GEOMETRY", 0, 0);
+        rs.addRow(factory.createPoint(new Coordinate(x, y)));
+        return rs;
+    }
+
+    private void testAggregateWithGeometry() throws SQLException {
+        deleteDb("spatialIndex");
+        try (Connection conn = getConnection("spatialIndex")) {
+            Statement st = conn.createStatement();
+            st.execute("CREATE AGGREGATE TABLE_ENVELOPE FOR \""+
+                    TableEnvelope.class.getName()+"\"");
+            st.execute("CREATE TABLE test(the_geom GEOMETRY)");
+            st.execute("INSERT INTO test VALUES ('POINT(1 1)'), (null), (null), ('POINT(10 5)')");
+            ResultSet rs = st.executeQuery("select TABLE_ENVELOPE(the_geom) from test");
+            assertEquals("geometry", rs.getMetaData().
+                    getColumnTypeName(1).toLowerCase());
+            assertTrue(rs.next());
+            assertTrue(rs.getObject(1) instanceof Geometry);
+            assertTrue(new Envelope(1, 10, 1, 5).equals(
+                    ((Geometry) rs.getObject(1)).getEnvelopeInternal()));
+            assertFalse(rs.next());
+        }
+        deleteDb("spatialIndex");
+    }
+
+    /**
+     * An aggregate function that calculates the envelope.
+     */
+    public static class TableEnvelope implements Aggregate {
+        private Envelope tableEnvelope;
+
+        @Override
+        public int getInternalType(int[] inputTypes) throws SQLException {
+            for (int inputType : inputTypes) {
+                if (inputType != Value.GEOMETRY) {
+                    throw new SQLException("TableEnvelope accepts only Geometry arguments");
+                }
+            }
+            return Value.GEOMETRY;
+        }
+
+        @Override
+        public void init(Connection conn) throws SQLException {
+            tableEnvelope = null;
+        }
+
+        @Override
+        public void add(Object value) throws SQLException {
+            if (value instanceof Geometry) {
+                if (tableEnvelope == null) {
+                    tableEnvelope = ((Geometry) value).getEnvelopeInternal();
+                } else {
+                    tableEnvelope.expandToInclude(((Geometry) value).getEnvelopeInternal());
+                }
+            }
+        }
+
+        @Override
+        public Object getResult() throws SQLException {
+            return new GeometryFactory().toGeometry(tableEnvelope);
+        }
+    }
+
+    private void testTableViewSpatialPredicate() throws SQLException {
+        deleteDb("spatial");
+        try (Connection conn = getConnection(URL)) {
+            Statement stat = conn.createStatement();
+            stat.execute("drop table if exists test");
+            stat.execute("drop view if exists test_view");
+            stat.execute("create table test(id int primary key, poly geometry)");
+            stat.execute("insert into test values(1, 'POLYGON ((1 1, 1 2, 2 2, 1 1))')");
+            stat.execute("insert into test values(4, null)");
+            stat.execute("insert into test values(2, 'POLYGON ((3 1, 3 2, 4 2, 3 1))')");
+            stat.execute("insert into test values(3, 'POLYGON ((1 3, 1 4, 2 4, 1 3))')");
+            stat.execute("create view test_view as select * from test");
+
+            // Check result with view
+            ResultSet rs;
+            rs = stat.executeQuery(
+                    "select * from test where poly && 'POINT (1.5 1.5)'::Geometry");
+            assertTrue(rs.next());
+            assertEquals(1, rs.getInt("id"));
+            assertFalse(rs.next());
+
+            rs = stat.executeQuery(
+                    "select * from test_view where poly && 'POINT (1.5 1.5)'::Geometry");
+            assertTrue(rs.next());
+            assertEquals(1, rs.getInt("id"));
+            assertFalse(rs.next());
+            rs.close();
+        }
+        deleteDb("spatial");
+    }
+
+    /**
+     * Check ValueGeometry conversion into SQL script
+     */
+    private void testValueGeometryScript() throws SQLException {
+        ValueGeometry valueGeometry = ValueGeometry.get("POINT(1 1 5)");
+        try (Connection conn = getConnection(URL)) {
+            ResultSet rs = conn.createStatement().executeQuery(
+                    "SELECT " + valueGeometry.getSQL());
+            assertTrue(rs.next());
+            Object obj = rs.getObject(1);
+            ValueGeometry g = ValueGeometry.getFromGeometry(obj);
+            assertTrue("got: " + g + " exp: " + valueGeometry, valueGeometry.equals(g));
+        }
+    }
+
+    /**
+     * If the user mutates the geometry of the object, the object cache must not
+     * be updated.
+     */
+    private void testInPlaceUpdate() throws SQLException {
+        try (Connection conn = getConnection(URL)) {
+            ResultSet rs = conn.createStatement().executeQuery(
+                    "SELECT 'POINT(1 1)'::geometry");
+            assertTrue(rs.next());
+            // Mutate the geometry
+            ((Geometry) rs.getObject(1)).apply(new AffineTransformation(1, 0,
+                    1, 1, 0, 1));
+            rs.close();
+            rs = conn.createStatement().executeQuery(
+                    "SELECT 'POINT(1 1)'::geometry");
+            assertTrue(rs.next());
+            // Check if the geometry is the one requested
+            assertEquals(1, ((Point) rs.getObject(1)).getX());
+            assertEquals(1, ((Point) rs.getObject(1)).getY());
+            rs.close();
+        }
+    }
+
+    private void testScanIndexOnNonSpatialQuery() throws SQLException {
+        deleteDb("spatial");
+        try (Connection conn = getConnection(URL)) {
+            Statement stat = conn.createStatement();
+            stat.execute("drop table if exists test");
+            stat.execute("create table test(id serial primary key, " +
+                    "value double, the_geom geometry)");
+            stat.execute("create spatial index spatial on test(the_geom)");
+            ResultSet rs = stat.executeQuery("explain select * from test where _ROWID_ = 5");
+            assertTrue(rs.next());
+            assertFalse(rs.getString(1).contains("/* PUBLIC.SPATIAL: _ROWID_ = " +
+                    "5 */"));
+        }
+        deleteDb("spatial");
+    }
+
+    private void testStoreCorruption() throws
SQLException {
+        deleteDb("spatial");
+        try (Connection conn = getConnection(URL)) {
+            Statement stat = conn.createStatement();
+            stat.execute("drop table if exists pt_cloud;\n" +
+                    "CREATE TABLE PT_CLOUD AS " +
+                    " SELECT CONCAT('POINT(',A.X,' ',B.X,')')::geometry the_geom from" +
+                    " system_range(1e6,1e6+10) A,system_range(6e6,6e6+10) B;\n" +
+                    "create spatial index pt_index on pt_cloud(the_geom);");
+            // Wait some time
+            try {
+                Thread.sleep(1000);
+            } catch (InterruptedException ex) {
+                throw new SQLException(ex);
+            }
+            stat.execute("drop table if exists pt_cloud;\n" +
+                    "CREATE TABLE PT_CLOUD AS " +
+                    " SELECT CONCAT('POINT(',A.X,' ',B.X,')')::geometry the_geom from" +
+                    " system_range(1e6,1e6+50) A,system_range(6e6,6e6+50) B;\n" +
+                    "create spatial index pt_index on pt_cloud(the_geom);\n" +
+                    "shutdown compact;");
+        }
+        deleteDb("spatial");
+    }
+
+    private void testExplainSpatialIndexWithPk() throws SQLException {
+        deleteDb("spatial");
+        try (Connection conn = getConnection(URL)) {
+            Statement stat = conn.createStatement();
+            stat.execute("drop table if exists pt_cloud;");
+            stat.execute("CREATE TABLE PT_CLOUD(id serial, the_geom geometry) AS " +
+                    "SELECT null, CONCAT('POINT(',A.X,' ',B.X,')')::geometry the_geom " +
+                    "from system_range(0,120) A,system_range(0,10) B;");
+            stat.execute("create spatial index on pt_cloud(the_geom);");
+            try (ResultSet rs = stat.executeQuery(
+                    "explain select * from PT_CLOUD " +
+                    "where the_geom && 'POINT(1 1)'")) {
+                assertTrue(rs.next());
+                assertFalse("H2 should use the spatial index; got this explain:\n" +
+                        rs.getString(1), rs.getString(1).contains("tableScan"));
+            }
+        }
+        deleteDb("spatial");
+    }
+
+    private void testNullableGeometry() throws SQLException {
+        deleteDb("spatial");
+        Connection conn = getConnection(URL);
+        Statement stat = conn.createStatement();
+
+        stat.execute("create memory table test" +
+                "(id int primary key, the_geom geometry)");
+        stat.execute("create spatial index on test(the_geom)");
+        stat.execute("insert into test values(1, null)");
+        stat.execute("insert into test values(2, null)");
+        stat.execute("delete from test where the_geom is null");
+        stat.execute("insert into test values(1, null)");
+        stat.execute("insert into test values(2, null)");
+        stat.execute("insert into test values(3, " +
+                "'POLYGON ((1000 2000, 1000 3000, 2000 3000, 1000 2000))')");
+        stat.execute("insert into test values(4, null)");
+        stat.execute("insert into test values(5, null)");
+        stat.execute("insert into test values(6, " +
+                "'POLYGON ((1000 3000, 1000 4000, 2000 4000, 1000 3000))')");
+
+        ResultSet rs = stat.executeQuery("select * from test");
+        int count = 0;
+        while (rs.next()) {
+            count++;
+            int id = rs.getInt(1);
+            if (id == 3 || id == 6) {
+                assertTrue(rs.getObject(2) != null);
+            } else {
+                assertNull(rs.getObject(2));
+            }
+        }
+        assertEquals(6, count);
+
+        rs = stat.executeQuery("select * from test where the_geom is null");
+        count = 0;
+        while (rs.next()) {
+            count++;
+            assertNull(rs.getObject(2));
+        }
+        assertEquals(4, count);
+
+        rs = stat.executeQuery("select * from test where the_geom is not null");
+        count = 0;
+        while (rs.next()) {
+            count++;
+            assertTrue(rs.getObject(2) != null);
+        }
+        assertEquals(2, count);
+
+        rs = stat.executeQuery(
+                "select * from test " +
+                "where intersects(the_geom, " +
+                "'POLYGON ((1000 1000, 1000 2000, 2000 2000, 1000 1000))')");
+
+        conn.close();
+        if (!config.memory) {
+            conn = getConnection(URL);
+            stat = conn.createStatement();
+            rs = stat.executeQuery("select * from test");
+            assertTrue(rs.next());
+            assertEquals(1, rs.getInt(1));
+            assertNull(rs.getObject(2));
+            conn.close();
+        }
+        deleteDb("spatial");
+    }
+
+    private void testNullableGeometryDelete() throws SQLException {
+        deleteDb("spatial");
+        Connection conn = getConnection(URL);
+        Statement stat = conn.createStatement();
+        stat.execute("create memory table test" +
+                "(id int primary key, the_geom geometry)");
+        stat.execute("create spatial index on
test(the_geom)");
+        stat.execute("insert into test values(1, null)");
+        stat.execute("insert into test values(2, null)");
+        stat.execute("insert into test values(3, null)");
+        ResultSet rs = stat.executeQuery("select * from test order by id");
+        while (rs.next()) {
+            assertNull(rs.getObject(2));
+        }
+        stat.execute("delete from test where id = 1");
+        stat.execute("delete from test where id = 2");
+        stat.execute("delete from test where id = 3");
+        stat.execute("insert into test values(4, null)");
+        stat.execute("insert into test values(5, null)");
+        stat.execute("insert into test values(6, null)");
+        stat.execute("delete from test where id = 4");
+        stat.execute("delete from test where id = 5");
+        stat.execute("delete from test where id = 6");
+        conn.close();
+        deleteDb("spatial");
+    }
+
+    private void testNullableGeometryInsert() throws SQLException {
+        deleteDb("spatial");
+        Connection conn = getConnection(URL);
+        Statement stat = conn.createStatement();
+        stat.execute("create memory table test" +
+                "(id identity, the_geom geometry)");
+        stat.execute("create spatial index on test(the_geom)");
+        for (int i = 0; i < 1000; i++) {
+            stat.execute("insert into test values(null, null)");
+        }
+        ResultSet rs = stat.executeQuery("select * from test");
+        while (rs.next()) {
+            assertNull(rs.getObject(2));
+        }
+        conn.close();
+        deleteDb("spatial");
+    }
+
+    private void testNullableGeometryUpdate() throws SQLException {
+        deleteDb("spatial");
+        Connection conn = getConnection(URL);
+        Statement stat = conn.createStatement();
+        stat.execute("create memory table test" +
+                "(id int primary key, the_geom geometry, description varchar2(32))");
+        stat.execute("create spatial index on test(the_geom)");
+        for (int i = 0; i < 1000; i++) {
+            stat.execute("insert into test values("+ (i + 1) +", null, null)");
+        }
+        ResultSet rs = stat.executeQuery("select * from test");
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+        assertNull(rs.getObject(2));
+        stat.execute("update test set description='DESCRIPTION' where id = 1");
+        stat.execute("update test set description='DESCRIPTION' where id = 2");
+        stat.execute("update test set description='DESCRIPTION' where id = 3");
+        conn.close();
+        deleteDb("spatial");
+    }
+
+    private void testIndexUpdateNullGeometry() throws SQLException {
+        deleteDb("spatial");
+        Connection conn = getConnection(URL);
+        Statement stat = conn.createStatement();
+        stat.execute("drop table if exists DUMMY_11;");
+        stat.execute("CREATE TABLE PUBLIC.DUMMY_11 (fid serial, GEOM GEOMETRY);");
+        stat.execute("CREATE SPATIAL INDEX PUBLIC_DUMMY_11_SPATIAL_INDEX on" +
+                " PUBLIC.DUMMY_11(GEOM);");
+        stat.execute("insert into PUBLIC.DUMMY_11(geom) values(null);");
+        stat.execute("update PUBLIC.DUMMY_11 set geom =" +
+                " 'POLYGON ((1 1, 5 1, 5 5, 1 5, 1 1))';");
+        ResultSet rs = stat.executeQuery("select fid, GEOM from DUMMY_11 " +
+                "where GEOM && " +
+                "'POLYGON" +
+                "((1 1,5 1,5 5,1 5,1 1))';");
+        try {
+            assertTrue(rs.next());
+            assertEquals("POLYGON ((1 1, 5 1, 5 5, 1 5, 1 1))", rs.getString(2));
+        } finally {
+            rs.close();
+        }
+        // Update the geometry again, elsewhere
+        stat.execute("update PUBLIC.DUMMY_11 set geom =" +
+                " 'POLYGON ((10 10, 50 10, 50 50, 10 50, 10 10))';");
+
+        rs = stat.executeQuery("select fid, GEOM from DUMMY_11 " +
+                "where GEOM && " +
+                "'POLYGON ((10 10, 50 10, 50 50, 10 50, 10 10))';");
+        try {
+            assertTrue(rs.next());
+            assertEquals("POLYGON ((10 10, 50 10, 50 50, 10 50, 10 10))", rs.getString(2));
+        } finally {
+            rs.close();
+        }
+        conn.close();
+        deleteDb("spatial");
+    }
+
+    private void testInsertNull() throws SQLException {
+        deleteDb("spatial");
+        try (Connection conn = getConnection(URL)) {
+            Statement stat = conn.createStatement();
+            stat.execute("\n" +
+                    "drop table if exists PUBLIC.DUMMY_12;\n" +
+                    "CREATE TABLE PUBLIC.DUMMY_12 (\n" +
+                    " \"fid\" serial,\n" +
+                    " Z_ID INTEGER,\n" +
+                    " GEOM GEOMETRY,\n" +
+                    " CONSTRAINT CONSTRAINT_DUMMY_12 PRIMARY KEY (\"fid\")\n" +
+                    ");\n" +
+                    "CREATE INDEX
PRIMARY_KEY_DUMMY_12 ON PUBLIC.DUMMY_12 (\"fid\");\n" +
+                    "CREATE spatial INDEX PUBLIC_DUMMY_12_SPATIAL_INDEX_ ON PUBLIC.DUMMY_12 (GEOM);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (123,3125163,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (124,3125164,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (125,3125173,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (126,3125174,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (127,3125175,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (128,3125176,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (129,3125177,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (130,3125178,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (131,3125179,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (132,3125180,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (133,3125335,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (134,3125336,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (135,3125165,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (136,3125337,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (137,3125338,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (138,3125339,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (139,3125340,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (140,3125341,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (141,3125342,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (142,3125343,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (143,3125344,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (144,3125345,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (145,3125346,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (146,3125166,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (147,3125347,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (148,3125348,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (149,3125349,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (150,3125350,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (151,3125351,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (152,3125352,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (153,3125353,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (154,3125354,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (155,3125355,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (156,3125356,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (157,3125167,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (158,3125357,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (159,3125358,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (160,3125359,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (161,3125360,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (162,3125361,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (163,3125362,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (164,3125363,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (165,3125364,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (166,3125365,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (167,3125366,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (168,3125168,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (169,3125367,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (170,3125368,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (171,3125369,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (172,3125370,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (173,3125169,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (174,3125170,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (175,3125171,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (176,3125172,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (177,-2,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (178,-1,NULL);\n" +
+                    "INSERT INTO PUBLIC.DUMMY_12 (\"fid\",Z_ID,GEOM) VALUES (179," +
+                    "-1,NULL);");
+            try (ResultSet rs = stat.executeQuery("select * from DUMMY_12")) {
+                assertTrue(rs.next());
+            }
+        }
+        deleteDb("spatial");
+    }
+
+    private void testSpatialIndexWithOrder() throws SQLException {
+        deleteDb("spatial");
+        try (Connection conn = getConnection(URL)) {
+            Statement stat = conn.createStatement();
+            stat.execute("DROP TABLE IF EXISTS BUILDINGS;" +
+                    "CREATE TABLE BUILDINGS (PK serial, THE_GEOM geometry);" +
+                    "insert into buildings(the_geom) SELECT 'POINT(1 1)" +
+                    "'::geometry from SYSTEM_RANGE(1,10000);\n" +
+                    "CREATE SPATIAL INDEX ON PUBLIC.BUILDINGS(THE_GEOM);\n");
+
+            try (ResultSet rs = stat.executeQuery("EXPLAIN SELECT * FROM " +
+                    "BUILDINGS ORDER BY PK LIMIT 51;")) {
+                assertTrue(rs.next());
+                assertTrue(rs.getString(1).contains("PRIMARY_KEY"));
+            }
+        }
+        deleteDb("spatial");
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/db/TestSpeed.java b/modules/h2/src/test/java/org/h2/test/db/TestSpeed.java
new file mode 100644
index 0000000000000..992723b376bb3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/db/TestSpeed.java
@@ -0,0 +1,163 @@
+/*
+ * Copyright 2004-2018 H2 Group.
Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.db;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.concurrent.TimeUnit;
+
+import org.h2.test.TestBase;
+
+/**
+ * Various small performance tests.
+ */
+public class TestSpeed extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws SQLException {
+
+        deleteDb("speed");
+        Connection conn;
+
+        conn = getConnection("speed");
+
+        Statement stat = conn.createStatement();
+        stat.execute("DROP TABLE IF EXISTS TEST");
+        stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))");
+        int len = getSize(1, 10000);
+        for (int i = 0; i < len; i++) {
+            stat.execute("SELECT ID, NAME FROM TEST ORDER BY ID");
+        }
+
+        // drop table if exists test;
+        // CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255));
+        // @LOOP 100000 INSERT INTO TEST VALUES(?, 'Hello');
+        // @LOOP 100000 SELECT * FROM TEST WHERE ID = ?;
+
+        // stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME
+        // VARCHAR(255))");
+        // for(int i=0; i<1000; i++) {
+        // stat.execute("INSERT INTO TEST VALUES("+i+", 'Hello')");
+        // }
+        // stat.execute("CREATE TABLE TEST_A(ID INT PRIMARY KEY, NAME
+        // VARCHAR(255))");
+        // stat.execute("INSERT INTO TEST_A VALUES(0, 'Hello')");
+        long time = System.nanoTime();
+        // for(int i=1; i<8000; i*=2) {
+        // stat.execute("INSERT INTO TEST_A SELECT ID+"+i+", NAME FROM TEST_A");
+        //
+        // // stat.execute("INSERT INTO TEST_A VALUES("+i+", 'Hello')");
+        // }
+        // for(int i=0; i<4; i++) {
+        // ResultSet rs = stat.executeQuery("SELECT * FROM TEST_A");
+        // while(rs.next()) {
+        // rs.getInt(1);
+        // rs.getString(2);
+        // }
+        // }
+
+        //
+        // stat.execute("CREATE TABLE TEST_B(ID INT PRIMARY KEY, NAME
+        // VARCHAR(255))");
+        // for(int i=0; i<80000; i++) {
+        // stat.execute("INSERT INTO TEST_B VALUES("+i+", 'Hello')");
+        // }
+
+        // conn.close();
+        // System.exit(0);
+        // int testParser;
+        // java -Xrunhprof:cpu=samples,depth=8 -cp . org.h2.test.TestAll
+        //
+        // stat.execute("CREATE TABLE TEST(ID INT)");
+        // stat.execute("INSERT INTO TEST VALUES(1)");
+        // ResultSet rs = stat.executeQuery("SELECT ID OTHER_ID FROM TEST");
+        // rs.next();
+        // rs.getString("ID");
+        // stat.execute("DROP TABLE TEST");
+
+        // long time = System.nanoTime();
+
+        stat.execute("DROP TABLE IF EXISTS TEST");
+        stat.execute("CREATE CACHED TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))");
+        PreparedStatement prep = conn.prepareStatement(
+                "INSERT INTO TEST VALUES(?, ?)");
+
+        int max = getSize(1, 10000);
+        for (int i = 0; i < max; i++) {
+            prep.setInt(1, i);
+            prep.setString(2,
+                    "abchelloasdfaldsjflajdflajdslfoajlskdfkjasdf" +
+                    "abcfasdfadsfadfsalksdjflasjflajsdlkfjaksdjflkskd" + i);
+            prep.execute();
+        }
+
+        // System.exit(0);
+        // System.out.println("END "+Value.cacheHit+" "+Value.cacheMiss);
+
+        time = System.nanoTime() - time;
+        trace(TimeUnit.NANOSECONDS.toMillis(time) + " insert");
+
+        // if(true) return;
+
+        // if(config.log) {
+        // System.gc();
+        // System.gc();
+        // log("mem="+(Runtime.getRuntime().totalMemory() -
+        // Runtime.getRuntime().freeMemory())/1024);
+        // }
+
+        // conn.close();
+
+        time = System.nanoTime();
+
+        prep = conn.prepareStatement("UPDATE TEST " +
+                "SET NAME='Another data row which is long' WHERE ID=?");
+        for (int i = 0; i < max; i++) {
+            prep.setInt(1, i);
+            prep.execute();
+
+            // System.out.println("updated "+i);
+            // stat.execute("UPDATE TEST SET NAME='Another data row which is
+            // long' WHERE ID="+i);
+            // ResultSet rs = stat.executeQuery("SELECT * FROM TEST WHERE
+            // ID="+i);
+            // if(!rs.next()) {
+            // throw new AssertionError("hey!
i="+i);
+            // }
+            // if(rs.next()) {
+            // throw new AssertionError("hey! i="+i);
+            // }
+        }
+        // for(int i=0; i> dataSet = New.arrayList();
+
+        dataSet.add(Arrays.asList(1, "1", 1L));
+        dataSet.add(Arrays.asList(1, "0", 2L));
+        dataSet.add(Arrays.asList(2, "0", -1L));
+        dataSet.add(Arrays.asList(0, "0", 1L));
+        dataSet.add(Arrays.asList(0, "1", null));
+        dataSet.add(Arrays.asList(2, null, 0L));
+
+        PreparedStatement prep = conn.prepareStatement("INSERT INTO T(A,B,C) VALUES(?,?,?)");
+        for (List<Object> row : dataSet) {
+            for (int i = 0; i < row.size(); i++) {
+                prep.setObject(i + 1, row.get(i));
+            }
+            assertEquals(1, prep.executeUpdate());
+        }
+        prep.close();
+
+        checkPlan(stat, "select max(c) from t", "direct lookup");
+        checkPlan(stat, "select min(c) from t", "direct lookup");
+        checkPlan(stat, "select count(*) from t", "direct lookup");
+
+        checkPlan(stat, "select * from t", "scan");
+
+        checkPlan(stat, "select * from t order by c", "IDX_C_B_A");
+        checkPlan(stat, "select * from t order by c, b", "IDX_C_B_A");
+        checkPlan(stat, "select * from t order by b", "IDX_B_A");
+        checkPlan(stat, "select * from t order by b, a", "IDX_B_A");
+        checkPlan(stat, "select * from t order by b, c", "scan");
+        checkPlan(stat, "select * from t order by a, b", "scan");
+        checkPlan(stat, "select * from t order by a, c, b", "scan");
+
+        checkPlan(stat, "select * from t where b > ''", "IDX_B_A");
+        checkPlan(stat, "select * from t where a > 0 and b > ''", "IDX_B_A");
+        checkPlan(stat, "select * from t where b < ''", "IDX_B_A");
+        checkPlan(stat, "select * from t where b < '' and c < 1", "IDX_C_B_A");
+        checkPlan(stat, "select * from t where a = 0", "scan");
+        checkPlan(stat, "select * from t where a > 0 order by c, b", "IDX_C_B_A");
+        checkPlan(stat, "select * from t where a = 0 and c > 0", "IDX_C_B_A");
+        checkPlan(stat, "select * from t where a = 0 and b < 0", "IDX_B_A");
+
+        assertEquals(6, ((Number) query(stat, "select count(*) from t").get(0).get(0)).intValue());
+
+        checkResultsNoOrder(stat, 6, "select * from t", "select * from t order by a");
+        checkResultsNoOrder(stat, 6, "select * from t", "select * from t order by b");
+        checkResultsNoOrder(stat, 6, "select * from t", "select * from t order by c");
+        checkResultsNoOrder(stat, 6, "select * from t", "select * from t order by c, a");
+        checkResultsNoOrder(stat, 6, "select * from t", "select * from t order by b, a");
+        checkResultsNoOrder(stat, 6, "select * from t", "select * from t order by c, b, a");
+        checkResultsNoOrder(stat, 6, "select * from t", "select * from t order by a, c, b");
+
+        checkResultsNoOrder(stat, 4, "select * from t where a > 0",
+                "select * from t where a > 0 order by a");
+        checkResultsNoOrder(stat, 4, "select * from t where a > 0",
+                "select * from t where a > 0 order by b");
+        checkResultsNoOrder(stat, 4, "select * from t where a > 0",
+                "select * from t where a > 0 order by c");
+        checkResultsNoOrder(stat, 4, "select * from t where a > 0",
+                "select * from t where a > 0 order by c, a");
+        checkResultsNoOrder(stat, 4, "select * from t where a > 0",
+                "select * from t where a > 0 order by b, a");
+        checkResultsNoOrder(stat, 4, "select * from t where a > 0",
+                "select * from t where a > 0 order by c, b, a");
+        checkResultsNoOrder(stat, 4, "select * from t where a > 0",
+                "select * from t where a > 0 order by a, c, b");
+
+        checkResults(6, dataSet, stat,
+                "select * from t order by a", null, new RowComparator(0));
+        checkResults(6, dataSet, stat,
+                "select * from t order by a desc", null, new RowComparator(true, 0));
+        checkResults(6, dataSet, stat,
+                "select * from t order by b, c", null, new RowComparator(1, 2));
+        checkResults(6, dataSet, stat,
+                "select * from t order by c, a", null, new RowComparator(2, 0));
+        checkResults(6, dataSet, stat,
+                "select * from t order by b, a", null, new RowComparator(1, 0));
+        checkResults(6, dataSet, stat,
+                "select * from t order by c, b, a", null, new RowComparator(2, 1, 0));
+
+        checkResults(4, dataSet, stat,
+
"select * from t where a > 0", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                return getInt(row, 0) > 0;
+            }
+        }, null);
+        checkResults(3, dataSet, stat, "select * from t where b = '0'", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                return "0".equals(getString(row, 1));
+            }
+        }, null);
+        checkResults(5, dataSet, stat, "select * from t where b >= '0'", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                String b = getString(row, 1);
+                return b != null && b.compareTo("0") >= 0;
+            }
+        }, null);
+        checkResults(2, dataSet, stat, "select * from t where b > '0'", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                String b = getString(row, 1);
+                return b != null && b.compareTo("0") > 0;
+            }
+        }, null);
+        checkResults(1, dataSet, stat, "select * from t where b > '0' and c > 0", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                String b = getString(row, 1);
+                Long c = getLong(row, 2);
+                return b != null && b.compareTo("0") > 0 && c != null && c > 0;
+            }
+        }, null);
+        checkResults(1, dataSet, stat, "select * from t where b > '0' and c < 2", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                String b = getString(row, 1);
+                Long c = getLong(row, 2);
+                return b != null && b.compareTo("0") > 0 && c != null && c < 2;
+            }
+        }, null);
+        checkResults(2, dataSet, stat, "select * from t where b > '0' and a < 2", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                Integer a = getInt(row, 0);
+                String b = getString(row, 1);
+                return b != null && b.compareTo("0") > 0 && a != null && a < 2;
+            }
+        }, null);
+        checkResults(1, dataSet, stat, "select * from t where b > '0' and a > 0", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                Integer a = getInt(row, 0);
+                String b = getString(row, 1);
+                return b != null && b.compareTo("0") > 0 && a != null && a > 0;
+            }
+        }, null);
+        checkResults(2, dataSet, stat, "select * from t where b = '0' and a > 0", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                Integer a = getInt(row, 0);
+                String b = getString(row, 1);
+                return "0".equals(b) && a != null && a > 0;
+            }
+        }, null);
+        checkResults(2, dataSet, stat, "select * from t where b = '0' and a < 2", new RowFilter() {
+            @Override
+            protected boolean accept(List<Object> row) {
+                Integer a = getInt(row, 0);
+                String b = getString(row, 1);
+                return "0".equals(b) && a != null && a < 2;
+            }
+        }, null);
+        conn.close();
+        deleteDb("tableEngine");
+    }
+
+    private void testQueryExpressionFlag() throws SQLException {
+        deleteDb("testQueryExpressionFlag");
+        Connection conn = getConnection("testQueryExpressionFlag");
+        Statement stat = conn.createStatement();
+        stat.execute("create table QUERY_EXPR_TEST(id int) ENGINE \"" +
+                TreeSetIndexTableEngine.class.getName() + "\"");
+        stat.execute("create table QUERY_EXPR_TEST_NO(id int) ENGINE \"" +
+                TreeSetIndexTableEngine.class.getName() + "\"");
+        stat.executeQuery("select 1 + (select 1 from QUERY_EXPR_TEST)").next();
+        stat.executeQuery("select 1 from QUERY_EXPR_TEST_NO where id in " +
+                "(select id from QUERY_EXPR_TEST)");
+        stat.executeQuery("select 1 from QUERY_EXPR_TEST_NO n " +
+                "where exists(select 1 from QUERY_EXPR_TEST y where y.id = n.id)");
+        conn.close();
+        deleteDb("testQueryExpressionFlag");
+    }
+
+    private void testSubQueryInfo() throws SQLException {
+        deleteDb("testSubQueryInfo");
+        Connection conn = getConnection("testSubQueryInfo");
+        Statement stat = conn.createStatement();
+        stat.execute("create table SUB_QUERY_TEST(id int primary key, name varchar) ENGINE \"" +
+                TreeSetIndexTableEngine.class.getName() + "\"");
+        // test sub-queries
+        stat.executeQuery("select * from " +
+                "(select t2.id from " +
+                "(select t3.id from sub_query_test t3 where t3.name = '') t4, " +
+                "sub_query_test t2 " +
+                "where t2.id = t4.id) t5").next();
+        // test view 1
+        stat.execute("create view t4 as (select t3.id from sub_query_test t3 where t3.name = '')");
+        stat.executeQuery("select * from " +
+                "(select t2.id from t4, sub_query_test t2 where t2.id = t4.id) t5").next();
+        // test view 2
+        stat.execute("create view t5 as " +
+                "(select t2.id from t4, sub_query_test t2 where t2.id = t4.id)");
+        stat.executeQuery("select * from t5").next();
+        // test select expressions
+        stat.execute("create table EXPR_TEST(id int) ENGINE \"" +
+                TreeSetIndexTableEngine.class.getName() + "\"");
+        stat.executeQuery("select * from (select (select id from EXPR_TEST x limit 1) a " +
+                "from dual where 1 = (select id from EXPR_TEST y limit 1)) z").next();
+        // test select expressions 2
+        stat.execute("create table EXPR_TEST2(id int) ENGINE \"" +
+                TreeSetIndexTableEngine.class.getName() + "\"");
+        stat.executeQuery("select * from (select (select 1 from " +
+                "(select (select 2 from EXPR_TEST) from EXPR_TEST2) ZZ) from dual)").next();
+        // test select expression plan
+        stat.execute("create table test_plan(id int primary key, name varchar)");
+        stat.execute("create index MY_NAME_INDEX on test_plan(name)");
+        checkPlan(stat, "select * from (select (select id from test_plan " +
+                "where name = 'z') from dual)",
+                "MY_NAME_INDEX");
+        conn.close();
+        deleteDb("testSubQueryInfo");
+    }
+
+    private void setBatchingEnabled(Statement stat, boolean enabled) throws SQLException {
+        stat.execute("SET BATCH_JOINS " + enabled);
+        if (!config.networked) {
+            Session s = (Session) ((JdbcConnection) stat.getConnection()).getSession();
+            assertEquals(enabled, s.isJoinBatchEnabled());
+        }
+    }
+
+    private void testBatchedJoin() throws SQLException {
+        deleteDb("testBatchedJoin");
+        Connection conn = getConnection("testBatchedJoin;OPTIMIZE_REUSE_RESULTS=0;BATCH_JOINS=1");
+        Statement stat = conn.createStatement();
+        setBatchingEnabled(stat, false);
+        setBatchingEnabled(stat, true);
+
+        TreeSetIndex.exec = Executors.newFixedThreadPool(8, new ThreadFactory() {
+            @Override
+            public Thread newThread(Runnable r) {
+                Thread t = new Thread(r);
+                t.setDaemon(true);
+ return t; + } + }); + + forceJoinOrder(stat, true); + try { + doTestBatchedJoinSubQueryUnion(stat); + + TreeSetIndex.lookupBatches.set(0); + doTestBatchedJoin(stat, 1, 0, 0); + doTestBatchedJoin(stat, 0, 1, 0); + doTestBatchedJoin(stat, 0, 0, 1); + + doTestBatchedJoin(stat, 0, 2, 0); + doTestBatchedJoin(stat, 0, 0, 2); + + doTestBatchedJoin(stat, 0, 0, 3); + doTestBatchedJoin(stat, 0, 0, 4); + doTestBatchedJoin(stat, 0, 0, 5); + + doTestBatchedJoin(stat, 0, 3, 1); + doTestBatchedJoin(stat, 0, 3, 3); + doTestBatchedJoin(stat, 0, 3, 7); + + doTestBatchedJoin(stat, 0, 4, 1); + doTestBatchedJoin(stat, 0, 4, 6); + doTestBatchedJoin(stat, 0, 4, 20); + + doTestBatchedJoin(stat, 0, 10, 0); + doTestBatchedJoin(stat, 0, 0, 10); + + doTestBatchedJoin(stat, 0, 20, 0); + doTestBatchedJoin(stat, 0, 0, 20); + doTestBatchedJoin(stat, 0, 20, 20); + + doTestBatchedJoin(stat, 3, 7, 0); + doTestBatchedJoin(stat, 0, 0, 5); + doTestBatchedJoin(stat, 0, 8, 1); + doTestBatchedJoin(stat, 0, 2, 1); + + assertTrue(TreeSetIndex.lookupBatches.get() > 0); + } finally { + forceJoinOrder(stat, false); + TreeSetIndex.exec.shutdownNow(); + } + conn.close(); + deleteDb("testBatchedJoin"); + } + + private void testAffinityKey() throws SQLException { + deleteDb("tableEngine"); + Connection conn = getConnection("tableEngine;mode=Ignite;MV_STORE=FALSE"); + Statement stat = conn.createStatement(); + + stat.executeUpdate("CREATE TABLE T(ID INT AFFINITY PRIMARY KEY, NAME VARCHAR, AGE INT)" + + " ENGINE \"" + AffinityTableEngine.class.getName() + "\""); + Table tbl = AffinityTableEngine.createdTbl; + assertTrue(tbl != null); + assertEquals(3, tbl.getIndexes().size()); + Index aff = tbl.getIndexes().get(2); + assertTrue(aff.getIndexType().isAffinity()); + assertEquals("T_AFF", aff.getName()); + assertEquals(1, aff.getIndexColumns().length); + assertEquals("ID", aff.getIndexColumns()[0].columnName); + conn.close(); + deleteDb("tableEngine"); + } + + private static void forceJoinOrder(Statement s, boolean 
force) throws SQLException { + s.executeUpdate("SET FORCE_JOIN_ORDER " + force); + } + + private void checkPlan(Statement stat, String sql) throws SQLException { + ResultSet rs = stat.executeQuery("EXPLAIN " + sql); + assertTrue(rs.next()); + String plan = rs.getString(1); + assertEquals(normalize(sql), normalize(plan)); + } + + private static String normalize(String sql) { + sql = sql.replace('\n', ' '); + return sql.replaceAll("\\s+", " ").trim(); + } + + private void doTestBatchedJoinSubQueryUnion(Statement stat) throws SQLException { + String engine = '"' + TreeSetIndexTableEngine.class.getName() + '"'; + stat.execute("CREATE TABLE t (a int, b int) ENGINE " + engine); + TreeSetTable t = TreeSetIndexTableEngine.created; + stat.execute("CREATE INDEX T_IDX_A ON t(a)"); + stat.execute("CREATE INDEX T_IDX_B ON t(b)"); + setBatchSize(t, 3); + for (int i = 0; i < 20; i++) { + stat.execute("insert into t values (" + i + "," + (i + 10) + ")"); + } + stat.execute("CREATE TABLE u (a int, b int) ENGINE " + engine); + TreeSetTable u = TreeSetIndexTableEngine.created; + stat.execute("CREATE INDEX U_IDX_A ON u(a)"); + stat.execute("CREATE INDEX U_IDX_B ON u(b)"); + setBatchSize(u, 0); + for (int i = 10; i < 25; i++) { + stat.execute("insert into u values (" + i + "," + (i - 15)+ ")"); + } + + checkPlan(stat, "SELECT 1 FROM PUBLIC.T T1 /* PUBLIC.\"scan\" */ " + + "INNER JOIN PUBLIC.T T2 /* batched:test PUBLIC.T_IDX_B: B = T1.A */ " + + "ON 1=1 WHERE T1.A = T2.B"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.T T1 /* PUBLIC.\"scan\" */ " + + "INNER JOIN PUBLIC.T T2 /* batched:test PUBLIC.T_IDX_B: B = T1.A */ " + + "ON 1=1 /* WHERE T1.A = T2.B */ " + + "INNER JOIN PUBLIC.T T3 /* batched:test PUBLIC.T_IDX_B: B = T2.A */ " + + "ON 1=1 WHERE (T2.A = T3.B) AND (T1.A = T2.B)"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.T T1 /* PUBLIC.\"scan\" */ " + + "INNER JOIN PUBLIC.U /* batched:fake PUBLIC.U_IDX_A: A = T1.A */ " + + "ON 1=1 /* WHERE T1.A = U.A */ " + + "INNER JOIN PUBLIC.T T2 /* 
batched:test PUBLIC.T_IDX_B: B = U.B */ " + + "ON 1=1 WHERE (T1.A = U.A) AND (U.B = T2.B)"); + checkPlan(stat, "SELECT 1 FROM ( SELECT A FROM PUBLIC.T ) Z " + + "/* SELECT A FROM PUBLIC.T /++ PUBLIC.T_IDX_A ++/ */ " + + "INNER JOIN PUBLIC.T /* batched:test PUBLIC.T_IDX_B: B = Z.A */ " + + "ON 1=1 WHERE Z.A = T.B"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.T /* PUBLIC.T_IDX_B */ " + + "INNER JOIN ( SELECT A FROM PUBLIC.T ) Z " + + "/* batched:view SELECT A FROM PUBLIC.T " + + "/++ batched:test PUBLIC.T_IDX_A: A IS ?1 ++/ " + + "WHERE A IS ?1: A = T.B */ ON 1=1 WHERE Z.A = T.B"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.T /* PUBLIC.T_IDX_A */ " + + "INNER JOIN ( ((SELECT A FROM PUBLIC.T) UNION ALL (SELECT B FROM PUBLIC.U)) " + + "UNION ALL (SELECT B FROM PUBLIC.T) ) Z /* batched:view " + + "((SELECT A FROM PUBLIC.T /++ batched:test PUBLIC.T_IDX_A: A IS ?1 ++/ " + + "WHERE A IS ?1) " + + "UNION ALL " + + "(SELECT B FROM PUBLIC.U /++ PUBLIC.U_IDX_B: B IS ?1 ++/ WHERE B IS ?1)) " + + "UNION ALL " + + "(SELECT B FROM PUBLIC.T /++ batched:test PUBLIC.T_IDX_B: B IS ?1 ++/ " + + "WHERE B IS ?1): A = T.A */ ON 1=1 WHERE Z.A = T.A"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.T /* PUBLIC.T_IDX_A */ " + + "INNER JOIN ( SELECT U.A FROM PUBLIC.U INNER JOIN PUBLIC.T ON 1=1 " + + "WHERE U.B = T.B ) Z " + + "/* batched:view SELECT U.A FROM PUBLIC.U " + + "/++ batched:fake PUBLIC.U_IDX_A: A IS ?1 ++/ " + + "/++ WHERE U.A IS ?1 ++/ INNER JOIN PUBLIC.T " + + "/++ batched:test PUBLIC.T_IDX_B: B = U.B ++/ " + + "ON 1=1 WHERE (U.A IS ?1) AND (U.B = T.B): A = T.A */ ON 1=1 WHERE Z.A = T.A"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.T /* PUBLIC.T_IDX_A */ " + + "INNER JOIN ( SELECT A FROM PUBLIC.U ) Z /* SELECT A FROM PUBLIC.U " + + "/++ PUBLIC.U_IDX_A: A IS ?1 ++/ WHERE A IS ?1: A = T.A */ " + + "ON 1=1 WHERE T.A = Z.A"); + checkPlan(stat, "SELECT 1 FROM " + + "( SELECT U.A FROM PUBLIC.U INNER JOIN PUBLIC.T ON 1=1 WHERE U.B = T.B ) Z " + + "/* SELECT U.A FROM PUBLIC.U /++ PUBLIC.\"scan\" ++/ " 
+ + "INNER JOIN PUBLIC.T /++ batched:test PUBLIC.T_IDX_B: B = U.B ++/ " + + "ON 1=1 WHERE U.B = T.B */ " + + "INNER JOIN PUBLIC.T /* batched:test PUBLIC.T_IDX_A: A = Z.A */ ON 1=1 " + + "WHERE T.A = Z.A"); + checkPlan(stat, "SELECT 1 FROM " + + "( SELECT U.A FROM PUBLIC.T INNER JOIN PUBLIC.U ON 1=1 WHERE T.B = U.B ) Z " + + "/* SELECT U.A FROM PUBLIC.T /++ PUBLIC.T_IDX_B ++/ " + + "INNER JOIN PUBLIC.U /++ PUBLIC.U_IDX_B: B = T.B ++/ " + + "ON 1=1 WHERE T.B = U.B */ INNER JOIN PUBLIC.T " + + "/* batched:test PUBLIC.T_IDX_A: A = Z.A */ " + + "ON 1=1 WHERE Z.A = T.A"); + checkPlan(stat, "SELECT 1 FROM ( (SELECT A FROM PUBLIC.T) UNION " + + "(SELECT A FROM PUBLIC.U) ) Z " + + "/* (SELECT A FROM PUBLIC.T /++ PUBLIC.T_IDX_A ++/) " + + "UNION " + + "(SELECT A FROM PUBLIC.U /++ PUBLIC.U_IDX_A ++/) */ " + + "INNER JOIN PUBLIC.T /* batched:test PUBLIC.T_IDX_A: A = Z.A */ ON 1=1 " + + "WHERE Z.A = T.A"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.U /* PUBLIC.U_IDX_B */ " + + "INNER JOIN ( (SELECT A, B FROM PUBLIC.T) UNION (SELECT B, A FROM PUBLIC.U) ) Z " + + "/* batched:view (SELECT A, B FROM PUBLIC.T " + + "/++ batched:test PUBLIC.T_IDX_B: B IS ?1 ++/ " + + "WHERE B IS ?1) UNION (SELECT B, A FROM PUBLIC.U " + + "/++ PUBLIC.U_IDX_A: A IS ?1 ++/ " + + "WHERE A IS ?1): B = U.B */ ON 1=1 /* WHERE U.B = Z.B */ " + + "INNER JOIN PUBLIC.T /* batched:test PUBLIC.T_IDX_A: A = Z.A */ ON 1=1 " + + "WHERE (U.B = Z.B) AND (Z.A = T.A)"); + checkPlan(stat, "SELECT 1 FROM PUBLIC.U /* PUBLIC.U_IDX_A */ " + + "INNER JOIN ( SELECT A, B FROM PUBLIC.U ) Z " + + "/* batched:fake SELECT A, B FROM PUBLIC.U /++ PUBLIC.U_IDX_A: A IS ?1 ++/ " + + "WHERE A IS ?1: A = U.A */ ON 1=1 /* WHERE U.A = Z.A */ " + + "INNER JOIN PUBLIC.T /* batched:test PUBLIC.T_IDX_B: B = Z.B */ " + + "ON 1=1 WHERE (U.A = Z.A) AND (Z.B = T.B)"); + + // t: a = [ 0..20), b = [10..30) + // u: a = [10..25), b = [-5..10) + checkBatchedQueryResult(stat, 10, + "select t.a from t, (select t.b from u, t where u.a = t.a) z " + + "where 
t.b = z.b"); + checkBatchedQueryResult(stat, 5, + "select t.a from (select t1.b from t t1, t t2 where t1.a = t2.b) z, t " + + "where t.b = z.b + 5"); + checkBatchedQueryResult(stat, 1, + "select t.a from (select u.b from u, t t2 where u.a = t2.b) z, t " + + "where t.b = z.b + 1"); + checkBatchedQueryResult(stat, 15, + "select t.a from (select u.b from u, t t2 where u.a = t2.b) z " + + "left join t on t.b = z.b"); + checkBatchedQueryResult(stat, 15, + "select t.a from (select t1.b from t t1 left join t t2 on t1.a = t2.b) z, t " + + "where t.b = z.b + 5"); + checkBatchedQueryResult(stat, 1, + "select t.a from t,(select 5 as b from t union select 10 from u) z " + + "where t.b = z.b"); + checkBatchedQueryResult(stat, 15, "select t.a from u,(select 5 as b, a from t " + + "union select 10, a from u) z, t where t.b = z.b and z.a = u.a"); + + stat.execute("DROP TABLE T"); + stat.execute("DROP TABLE U"); + } + + private void checkBatchedQueryResult(Statement stat, int size, String sql) + throws SQLException { + setBatchingEnabled(stat, false); + List> expected = query(stat, sql); + assertEquals(size, expected.size()); + setBatchingEnabled(stat, true); + List> actual = query(stat, sql); + if (!expected.equals(actual)) { + fail("\n" + "expected: " + expected + "\n" + "actual: " + actual); + } + } + + private void doTestBatchedJoin(Statement stat, int... 
batchSizes) throws SQLException { + ArrayList tables = new ArrayList<>(batchSizes.length); + + for (int i = 0; i < batchSizes.length; i++) { + stat.executeUpdate("DROP TABLE IF EXISTS T" + i); + stat.executeUpdate("CREATE TABLE T" + i + "(A INT, B INT) ENGINE \"" + + TreeSetIndexTableEngine.class.getName() + "\""); + tables.add(TreeSetIndexTableEngine.created); + + stat.executeUpdate("CREATE INDEX IDX_B ON T" + i + "(B)"); + stat.executeUpdate("CREATE INDEX IDX_A ON T" + i + "(A)"); + + PreparedStatement insert = stat.getConnection().prepareStatement( + "INSERT INTO T"+ i + " VALUES (?,?)"); + + for (int j = i, size = i + 10; j < size; j++) { + insert.setInt(1, j); + insert.setInt(2, j); + insert.executeUpdate(); + } + + for (TreeSetTable table : tables) { + assertEquals(10, table.getRowCount(null)); + } + } + + int[] zeroBatchSizes = new int[batchSizes.length]; + int tests = 1 << (batchSizes.length * 4); + + for (int test = 0; test < tests; test++) { + String query = generateQuery(test, batchSizes.length); + + // System.out.println(Arrays.toString(batchSizes) + + // ": " + test + " -> " + query); + + setBatchSize(tables, batchSizes); + List> res1 = query(stat, query); + + setBatchSize(tables, zeroBatchSizes); + List> res2 = query(stat, query); + + // System.out.println(res1 + " " + res2); + + if (!res2.equals(res1)) { + System.err.println(Arrays.toString(batchSizes) + ": " + res1 + " " + res2); + System.err.println("Test " + test); + System.err.println(query); + for (TreeSetTable table : tables) { + System.err.println(table.getName() + " = " + + query(stat, "select * from " + table.getName())); + } + fail(); + } + } + for (int i = 0; i < batchSizes.length; i++) { + stat.executeUpdate("DROP TABLE IF EXISTS T" + i); + } + } + + /** + * A static assertion method. 
+ * + * @param condition the condition + * @param message the error message + */ + static void assert0(boolean condition, String message) { + if (!condition) { + throw new AssertionError(message); + } + } + + private static void setBatchSize(ArrayList tables, int... batchSizes) { + for (int i = 0; i < batchSizes.length; i++) { + int batchSize = batchSizes[i]; + setBatchSize(tables.get(i), batchSize); + } + } + + private static void setBatchSize(TreeSetTable t, int batchSize) { + if (t.getIndexes() == null) { + t.scan.preferredBatchSize = batchSize; + } else { + for (Index idx : t.getIndexes()) { + ((TreeSetIndex) idx).preferredBatchSize = batchSize; + } + } + } + + private static String generateQuery(int t, int tables) { + final int withLeft = 1; + final int withFalse = 2; + final int withWhere = 4; + final int withOnIsNull = 8; + + StringBuilder b = new StringBuilder(); + b.append("select count(*) from "); + + StringBuilder where = new StringBuilder(); + + for (int i = 0; i < tables; i++) { + if (i != 0) { + if ((t & withLeft) != 0) { + b.append(" left "); + } + b.append(" join "); + } + b.append("\nT").append(i).append(' '); + if (i != 0) { + boolean even = (i & 1) == 0; + if ((t & withOnIsNull) != 0) { + b.append(" on T").append(i - 1).append(even ? ".B" : ".A").append(" is null"); + } else if ((t & withFalse) != 0) { + b.append(" on false "); + } else { + b.append(" on T").append(i - 1).append(even ? ".B = " : ".A = "); + b.append("T").append(i).append(even ? 
".B " : ".A "); + } + } + if ((t & withWhere) != 0) { + if (where.length() != 0) { + where.append(" and "); + } + where.append(" T").append(i).append(".A > 5"); + } + t >>>= 4; + } + if (where.length() != 0) { + b.append("\n" + "where ").append(where); + } + + return b.toString(); + } + + private void checkResultsNoOrder(Statement stat, int size, String query1, String query2) + throws SQLException { + List> res1 = query(stat, query1); + List> res2 = query(stat, query2); + if (size != res1.size() || size != res2.size()) { + fail("Wrong size: \n" + res1 + "\n" + res2); + } + if (size == 0) { + return; + } + int[] cols = new int[res1.get(0).size()]; + for (int i = 0; i < cols.length; i++) { + cols[i] = i; + } + Comparator> comp = new RowComparator(cols); + Collections.sort(res1, comp); + Collections.sort(res2, comp); + assertTrue("Wrong data: \n" + res1 + "\n" + res2, res1.equals(res2)); + } + + private void checkResults(int size, List> dataSet, + Statement stat, String query, RowFilter filter, RowComparator sort) + throws SQLException { + List> res1 = query(stat, query); + List> res2 = query(dataSet, filter, sort); + + assertTrue("Wrong size: " + size + " \n" + res1 + "\n" + res2, + res1.size() == size && res2.size() == size); + assertTrue(filter != null || sort != null); + + for (int i = 0; i < res1.size(); i++) { + List row1 = res1.get(i); + List row2 = res2.get(i); + + assertTrue("Filter failed on row " + i + " of \n" + res1 + "\n" + res2, + filter == null || filter.accept(row1)); + assertTrue("Sort failed on row " + i + " of \n" + res1 + "\n" + res2, + sort == null || sort.compare(row1, row2) == 0); + } + } + + private static List> query(List> dataSet, + RowFilter filter, RowComparator sort) { + List> res = New.arrayList(); + if (filter == null) { + res.addAll(dataSet); + } else { + for (List row : dataSet) { + if (filter.accept(row)) { + res.add(row); + } + } + } + if (sort != null) { + Collections.sort(res, sort); + } + return res; + } + + private static List> 
query(Statement stat, String query) throws SQLException { + ResultSet rs = stat.executeQuery(query); + int cols = rs.getMetaData().getColumnCount(); + List> list = New.arrayList(); + while (rs.next()) { + List row = new ArrayList<>(cols); + for (int i = 1; i <= cols; i++) { + row.add(rs.getObject(i)); + } + list.add(row); + } + rs.close(); + return list; + } + + private void checkPlan(Statement stat, String query, String index) + throws SQLException { + String plan = query(stat, "EXPLAIN " + query).get(0).get(0).toString(); + assertTrue("Index '" + index + "' is not used in query plan: " + plan, + plan.contains(index)); + } + + /** + * A test table factory. + */ + public static class OneRowTableEngine implements TableEngine { + + /** + * A table implementation with one row. + */ + private static class OneRowTable extends TableBase { + + /** + * A scan index for one row. + */ + public class Scan extends BaseIndex { + + Scan(Table table) { + initBaseIndex(table, table.getId(), table.getName() + "_SCAN", + IndexColumn.wrap(table.getColumns()), IndexType.createScan(false)); + } + + @Override + public long getRowCountApproximation() { + return table.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return table.getDiskSpaceUsed(); + } + + @Override + public long getRowCount(Session session) { + return table.getRowCount(session); + } + + @Override + public void checkRename() { + // do nothing + } + + @Override + public void truncate(Session session) { + // do nothing + } + + @Override + public void remove(Session session) { + // do nothing + } + + @Override + public void remove(Session session, Row r) { + // do nothing + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return 0; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + 
return new SingleRowCursor(row); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return new SingleRowCursor(row); + } + + @Override + public void close(Session session) { + // do nothing + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public void add(Session session, Row r) { + // do nothing + } + } + + protected Index scanIndex; + + volatile Row row; + + OneRowTable(CreateTableData data) { + super(data); + scanIndex = new Scan(this); + } + + @Override + public Index addIndex(Session session, String indexName, + int indexId, IndexColumn[] cols, IndexType indexType, + boolean create, String indexComment) { + return null; + } + + @Override + public void addRow(Session session, Row r) { + this.row = r; + } + + @Override + public boolean canDrop() { + return true; + } + + @Override + public boolean canGetRowCount() { + return true; + } + + @Override + public void checkSupportAlter() { + // do nothing + } + + @Override + public void close(Session session) { + // do nothing + } + + @Override + public ArrayList getIndexes() { + return null; + } + + @Override + public long getMaxDataModificationId() { + return 0; + } + + @Override + public long getRowCount(Session session) { + return getRowCountApproximation(); + } + + @Override + public long getRowCountApproximation() { + return row == null ? 
0 : 1; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public Index getScanIndex(Session session) { + return scanIndex; + } + + @Override + public TableType getTableType() { + return TableType.EXTERNAL_TABLE_ENGINE; + } + + @Override + public Index getUniqueIndex() { + return null; + } + + @Override + public boolean isDeterministic() { + return false; + } + + @Override + public boolean isLockedExclusively() { + return false; + } + + @Override + public boolean lock(Session session, boolean exclusive, boolean force) { + // do nothing + return false; + } + + @Override + public void removeRow(Session session, Row r) { + this.row = null; + } + + @Override + public void truncate(Session session) { + row = null; + } + + @Override + public void unlock(Session s) { + // do nothing + } + + @Override + public void checkRename() { + // do nothing + } + + } + + /** + * Create a new OneRowTable. + * + * @param data the meta data of the table to create + * @return the new table + */ + @Override + public OneRowTable createTable(CreateTableData data) { + return new OneRowTable(data); + } + + } + + /** + * A test table factory producing affinity aware tables. + */ + public static class AffinityTableEngine implements TableEngine { + public static Table createdTbl; + + /** + * A table able to handle affinity indexes. + */ + private static class AffinityTable extends RegularTable { + + /** + * A (no-op) affinity index. 
+ */ + public class AffinityIndex extends BaseIndex { + AffinityIndex(Table table, int id, String name, IndexColumn[] newIndexColumns) { + initBaseIndex(table, id, name, newIndexColumns, IndexType.createAffinity()); + } + + @Override + public long getRowCountApproximation() { + return table.getRowCountApproximation(); + } + + @Override + public long getDiskSpaceUsed() { + return table.getDiskSpaceUsed(); + } + + @Override + public long getRowCount(Session session) { + return table.getRowCount(session); + } + + @Override + public void checkRename() { + // do nothing + } + + @Override + public void truncate(Session session) { + // do nothing + } + + @Override + public void remove(Session session) { + // do nothing + } + + @Override + public void remove(Session session, Row r) { + // do nothing + } + + @Override + public boolean needRebuild() { + return false; + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + return 0; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + throw DbException.getUnsupportedException("TEST"); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + throw DbException.getUnsupportedException("TEST"); + } + + @Override + public void close(Session session) { + // do nothing + } + + @Override + public boolean canGetFirstOrLast() { + return false; + } + + @Override + public void add(Session session, Row r) { + // do nothing + } + } + + AffinityTable(CreateTableData data) { + super(data); + } + + @Override + public Index addIndex(Session session, String indexName, + int indexId, IndexColumn[] cols, IndexType indexType, + boolean create, String indexComment) { + if (!indexType.isAffinity()) { + return super.addIndex(session, indexName, indexId, cols, indexType, create, indexComment); + } + + boolean isSessionTemporary = isTemporary() && !isGlobalTemporary(); + if 
(!isSessionTemporary) { + database.lockMeta(session); + } + AffinityIndex index = new AffinityIndex(this, indexId, getName() + "_AFF", cols); + index.setTemporary(isTemporary()); + if (index.getCreateSQL() != null) { + index.setComment(indexComment); + if (isSessionTemporary) { + session.addLocalTempTableIndex(index); + } else { + database.addSchemaObject(session, index); + } + } + getIndexes().add(index); + setModified(); + return index; + } + + } + + /** + * Create a new OneRowTable. + * + * @param data the meta data of the table to create + * @return the new table + */ + @Override + public Table createTable(CreateTableData data) { + return (createdTbl = new AffinityTable(data)); + } + + } + + /** + * A test table factory. + */ + public static class EndlessTableEngine implements TableEngine { + + public static CreateTableData createTableData; + + /** + * A table implementation with one row. + */ + private static class EndlessTable extends OneRowTableEngine.OneRowTable { + + EndlessTable(CreateTableData data) { + super(data); + row = data.schema.getDatabase().createRow( + new Value[] { ValueInt.get(1), ValueNull.INSTANCE }, 0); + scanIndex = new Auto(this); + } + + /** + * A scan index for one row. + */ + public class Auto extends OneRowTableEngine.OneRowTable.Scan { + + Auto(Table table) { + super(table); + } + + @Override + public Cursor find(TableFilter filter, SearchRow first, SearchRow last) { + return find(filter.getFilterCondition()); + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + return find(null); + } + + /** + * Search within the table. + * + * @param filter the table filter (optional) + * @return the cursor + */ + private Cursor find(Expression filter) { + if (filter != null) { + row.setValue(1, ValueString.get(filter.getSQL())); + } + return new SingleRowCursor(row); + } + + } + + } + + /** + * Create a new table. 
+ * + * @param data the meta data of the table to create + * @return the new table + */ + @Override + public EndlessTable createTable(CreateTableData data) { + createTableData = data; + return new EndlessTable(data); + } + + } + + /** + * A table engine that internally uses a tree set. + */ + public static class TreeSetIndexTableEngine implements TableEngine { + + static TreeSetTable created; + + @Override + public Table createTable(CreateTableData data) { + return created = new TreeSetTable(data); + } + } + + /** + * A table that internally uses a tree set. + */ + private static class TreeSetTable extends TableBase { + int dataModificationId; + + ArrayList indexes; + + TreeSetIndex scan = new TreeSetIndex(this, "scan", + IndexColumn.wrap(getColumns()), IndexType.createScan(false)) { + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet allColumnsSet) { + doTests(session); + return getCostRangeIndex(masks, getRowCount(session), filters, + filter, sortOrder, true, allColumnsSet); + } + }; + + TreeSetTable(CreateTableData data) { + super(data); + } + + @Override + public void checkRename() { + // No-op. + } + + @Override + public void unlock(Session s) { + // No-op. 
+ } + + @Override + public void truncate(Session session) { + if (indexes != null) { + for (Index index : indexes) { + index.truncate(session); + } + } else { + scan.truncate(session); + } + dataModificationId++; + } + + @Override + public void removeRow(Session session, Row row) { + if (indexes != null) { + for (Index index : indexes) { + index.remove(session, row); + } + } else { + scan.remove(session, row); + } + dataModificationId++; + } + + @Override + public void addRow(Session session, Row row) { + if (indexes != null) { + for (Index index : indexes) { + index.add(session, row); + } + } else { + scan.add(session, row); + } + dataModificationId++; + } + + @Override + public Index addIndex(Session session, String indexName, int indexId, IndexColumn[] cols, + IndexType indexType, boolean create, String indexComment) { + if (indexes == null) { + indexes = new ArrayList<>(2); + // Scan must be always at 0. + indexes.add(scan); + } + Index index = new TreeSetIndex(this, indexName, cols, indexType); + for (SearchRow row : scan.set) { + index.add(session, (Row) row); + } + indexes.add(index); + dataModificationId++; + setModified(); + return index; + } + + @Override + public boolean lock(Session session, boolean exclusive, boolean forceLockEvenInMvcc) { + return true; + } + + @Override + public boolean isLockedExclusively() { + return false; + } + + @Override + public boolean isDeterministic() { + return false; + } + + @Override + public Index getUniqueIndex() { + return null; + } + + @Override + public TableType getTableType() { + return TableType.EXTERNAL_TABLE_ENGINE; + } + + @Override + public Index getScanIndex(Session session) { + return scan; + } + + @Override + public long getRowCountApproximation() { + return getScanIndex(null).getRowCountApproximation(); + } + + @Override + public long getRowCount(Session session) { + return scan.getRowCount(session); + } + + @Override + public long getMaxDataModificationId() { + return dataModificationId; + } + + 
@Override + public ArrayList getIndexes() { + return indexes; + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public void close(Session session) { + // No-op. + } + + @Override + public void checkSupportAlter() { + // No-op. + } + + @Override + public boolean canGetRowCount() { + return true; + } + + @Override + public boolean canDrop() { + return true; + } + } + + /** + * An index that internally uses a tree set. + */ + private static class TreeSetIndex extends BaseIndex implements Comparator { + /** + * Executor service to test batched joins. + */ + static ExecutorService exec; + + static AtomicInteger lookupBatches = new AtomicInteger(); + + int preferredBatchSize; + + final TreeSet set = new TreeSet<>(this); + + TreeSetIndex(Table t, String name, IndexColumn[] cols, IndexType type) { + initBaseIndex(t, 0, name, cols, type); + } + + @Override + public int compare(SearchRow o1, SearchRow o2) { + int res = compareRows(o1, o2); + if (res == 0) { + if (o1.getKey() == Long.MAX_VALUE || o2.getKey() == Long.MIN_VALUE) { + res = 1; + } else if (o1.getKey() == Long.MIN_VALUE || o2.getKey() == Long.MAX_VALUE) { + res = -1; + } + } + return res; + } + + @Override + public IndexLookupBatch createLookupBatch(TableFilter[] filters, int f) { + final TableFilter filter = filters[f]; + assert0(filter.getMasks() != null || "scan".equals(getName()), "masks"); + final int preferredSize = preferredBatchSize; + if (preferredSize == 0) { + return null; + } + lookupBatches.incrementAndGet(); + return new IndexLookupBatch() { + List searchRows = New.arrayList(); + + @Override + public String getPlanSQL() { + return "test"; + } + + @Override public boolean isBatchFull() { + return searchRows.size() >= preferredSize * 2; + } + + @Override + public List> find() { + List> res = findBatched(filter, searchRows); + searchRows.clear(); + return res; + } + + @Override + public boolean addSearchRows(SearchRow first, SearchRow last) { + assert !isBatchFull(); + 
searchRows.add(first); + searchRows.add(last); + return true; + } + + @Override + public void reset(boolean beforeQuery) { + searchRows.clear(); + } + }; + } + + public List<Future<Cursor>> findBatched(final TableFilter filter, + List<SearchRow> firstLastPairs) { + ArrayList<Future<Cursor>> result = new ArrayList<>(firstLastPairs.size()); + final Random rnd = new Random(); + for (int i = 0; i < firstLastPairs.size(); i += 2) { + final SearchRow first = firstLastPairs.get(i); + final SearchRow last = firstLastPairs.get(i + 1); + Future<Cursor> future; + if (rnd.nextBoolean()) { + IteratorCursor c = (IteratorCursor) find(filter, first, last); + if (c.it.hasNext()) { + future = new DoneFuture<>(c); + } else { + // we can return null instead of future of empty cursor + future = null; + } + } else { + future = exec.submit(new Callable<Cursor>() { + @Override + public Cursor call() throws Exception { + if (rnd.nextInt(50) == 0) { + Thread.sleep(0, 500); + } + return find(filter, first, last); + } + }); + } + result.add(future); + } + return result; + } + + @Override + public void close(Session session) { + // No-op. + } + + @Override + public void add(Session session, Row row) { + set.add(row); + } + + @Override + public void remove(Session session, Row row) { + set.remove(row); + } + + private static SearchRow mark(SearchRow row, boolean first) { + if (row != null) { + // Mark this row to be a search row. + row.setKey(first ? 
Long.MIN_VALUE : Long.MAX_VALUE); + } + return row; + } + + @Override + public Cursor find(Session session, SearchRow first, SearchRow last) { + Set<SearchRow> subSet; + if (first != null && last != null && compareRows(last, first) < 0) { + subSet = Collections.emptySet(); + } else { + if (first != null) { + first = set.floor(mark(first, true)); + } + if (last != null) { + last = set.ceiling(mark(last, false)); + } + if (first == null && last == null) { + subSet = set; + } else if (first != null) { + if (last != null) { + subSet = set.subSet(first, true, last, true); + } else { + subSet = set.tailSet(first, true); + } + } else if (last != null) { + subSet = set.headSet(last, true); + } else { + throw new IllegalStateException(); + } + } + return new IteratorCursor(subSet.iterator()); + } + + private static String alias(SubQueryInfo info) { + return info.getFilters()[info.getFilter()].getTableAlias(); + } + + private void checkInfo(SubQueryInfo info) { + if (info.getUpper() == null) { + // check 1st level info + assert0(info.getFilters().length == 1, "getFilters().length " + + info.getFilters().length); + String alias = alias(info); + assert0("T5".equals(alias), "alias: " + alias); + } else { + // check 2nd level info + assert0(info.getFilters().length == 2, "getFilters().length " + + info.getFilters().length); + String alias = alias(info); + assert0("T4".equals(alias), "alias: " + alias); + checkInfo(info.getUpper()); + } + } + + protected void doTests(Session session) { + if (getTable().getName().equals("SUB_QUERY_TEST")) { + checkInfo(session.getSubQueryInfo()); + } else if (getTable().getName().equals("EXPR_TEST")) { + assert0(session.getSubQueryInfo() == null, "select expression"); + } else if (getTable().getName().equals("EXPR_TEST2")) { + String alias = alias(session.getSubQueryInfo()); + assert0(alias.equals("ZZ"), "select expression sub-query: " + alias); + assert0(session.getSubQueryInfo().getUpper() == null, "upper"); + } else if 
(getTable().getName().equals("QUERY_EXPR_TEST")) { + assert0(session.isPreparingQueryExpression(), "preparing query expression"); + } else if (getTable().getName().equals("QUERY_EXPR_TEST_NO")) { + assert0(!session.isPreparingQueryExpression(), "not preparing query expression"); + } + } + + @Override + public double getCost(Session session, int[] masks, + TableFilter[] filters, int filter, SortOrder sortOrder, + HashSet<Column> allColumnsSet) { + doTests(session); + return getCostRangeIndex(masks, set.size(), filters, filter, + sortOrder, false, allColumnsSet); + } + + @Override + public void remove(Session session) { + // No-op. + } + + @Override + public void truncate(Session session) { + set.clear(); + } + + @Override + public boolean canGetFirstOrLast() { + return true; + } + + @Override + public Cursor findFirstOrLast(Session session, boolean first) { + return new SingleRowCursor((Row) + (set.isEmpty() ? null : first ? set.first() : set.last())); + } + + @Override + public boolean needRebuild() { + return true; + } + + @Override + public long getRowCount(Session session) { + return set.size(); + } + + @Override + public long getRowCountApproximation() { + return getRowCount(null); + } + + @Override + public long getDiskSpaceUsed() { + return 0; + } + + @Override + public void checkRename() { + // No-op. 
+ } + } + + /** + */ + private static class IteratorCursor implements Cursor { + Iterator<SearchRow> it; + private Row current; + + IteratorCursor(Iterator<SearchRow> it) { + this.it = it; + } + + @Override + public boolean previous() { + throw DbException.getUnsupportedException("prev"); + } + + @Override + public boolean next() { + if (it.hasNext()) { + current = (Row) it.next(); + return true; + } + current = null; + return false; + } + + @Override + public SearchRow getSearchRow() { + return get(); + } + + @Override + public Row get() { + return current; + } + + @Override + public String toString() { + return "IteratorCursor->" + current; + } + } + + /** + * A comparator for rows (lists of comparable objects). + */ + private static class RowComparator implements Comparator<List<Object>> { + private int[] cols; + private boolean descending; + + RowComparator(int... cols) { + this.descending = false; + this.cols = cols; + } + + RowComparator(boolean descending, int... cols) { + this.descending = descending; + this.cols = cols; + } + + @SuppressWarnings("unchecked") + @Override + public int compare(List<Object> row1, List<Object> row2) { + for (int i = 0; i < cols.length; i++) { + int col = cols[i]; + Comparable<Object> o1 = (Comparable<Object>) row1.get(col); + Comparable<Object> o2 = (Comparable<Object>) row2.get(col); + if (o1 == null) { + return applyDescending(o2 == null ? 0 : -1); + } + if (o2 == null) { + return applyDescending(1); + } + int res = o1.compareTo(o2); + if (res != 0) { + return applyDescending(res); + } + } + return 0; + } + + private int applyDescending(int v) { + if (!descending) { + return v; + } + if (v == 0) { + return v; + } + return -v; + } + } + + /** + * A filter for rows (lists of objects). + */ + abstract static class RowFilter { + + /** + * Check whether the row needs to be processed. + * + * @param row the row + * @return true if yes + */ + protected abstract boolean accept(List<Object> row); + + /** + * Get an integer from a row. 
+ * + * @param row the row + * @param col the column index + * @return the value + */ + protected Integer getInt(List<Object> row, int col) { + return (Integer) row.get(col); + } + + /** + * Get a long from a row. + * + * @param row the row + * @param col the column index + * @return the value + */ + protected Long getLong(List<Object> row, int col) { + return (Long) row.get(col); + } + + /** + * Get a string from a row. + * + * @param row the row + * @param col the column index + * @return the value + */ + protected String getString(List<Object> row, int col) { + return (String) row.get(col); + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestTempTables.java b/modules/h2/src/test/java/org/h2/test/db/TestTempTables.java new file mode 100644 index 0000000000000..8181ac891a160 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestTempTables.java @@ -0,0 +1,359 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Temporary table tests. + */ +public class TestTempTables extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("tempTables"); + testAnalyzeReuseObjectId(); + testTempSequence(); + testTempFileResultSet(); + testTempTableResultSet(); + testTransactionalTemp(); + testDeleteGlobalTempTableWhenClosing(); + Connection c1 = getConnection("tempTables"); + testAlter(c1); + Connection c2 = getConnection("tempTables"); + testConstraints(c1, c2); + testTables(c1, c2); + testIndexes(c1, c2); + c1.close(); + c2.close(); + testLotsOfTables(); + testCreateAsSelectDistinct(); + deleteDb("tempTables"); + } + + private void testAnalyzeReuseObjectId() throws SQLException { + deleteDb("tempTables"); + Connection conn = getConnection("tempTables"); + Statement stat = conn.createStatement(); + stat.execute("create local temporary table test(id identity)"); + PreparedStatement prep = conn + .prepareStatement("insert into test values(null)"); + for (int i = 0; i < 10000; i++) { + prep.execute(); + } + stat.execute("create local temporary table " + + "test2(id identity) as select x from system_range(1, 10)"); + conn.close(); + } + + private void testTempSequence() throws SQLException { + deleteDb("tempTables"); + Connection conn = getConnection("tempTables"); + Statement stat = conn.createStatement(); + stat.execute("create local temporary table test(id identity)"); + ResultSet rs = stat.executeQuery("script"); + boolean foundSequence = false; + while (rs.next()) { + if (rs.getString(1).startsWith("CREATE SEQUENCE")) { + foundSequence = true; + } + } + assertTrue(foundSequence); + stat.execute("insert into test values(null)"); + stat.execute("shutdown"); + conn.close(); + conn = getConnection("tempTables"); + rs = conn.createStatement().executeQuery( + "select * from information_schema.sequences"); + assertFalse(rs.next()); + conn.close(); + } + + private void testTempFileResultSet() throws SQLException { + if (config.lazy) { + return; + } + deleteDb("tempTables"); 
+ Connection conn = getConnection("tempTables;MAX_MEMORY_ROWS=10"); + ResultSet rs1, rs2; + Statement stat1 = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); + Statement stat2 = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); + rs1 = stat1.executeQuery("select * from system_range(1, 20)"); + rs2 = stat2.executeQuery("select * from system_range(1, 20)"); + for (int i = 0; i < 20; i++) { + assertTrue(rs1.next()); + assertTrue(rs2.next()); + assertEquals(i + 1, rs1.getInt(1)); + assertEquals(i + 1, rs2.getInt(1)); + } + rs2.close(); + // verify the temp table is not deleted yet + rs1.beforeFirst(); + for (int i = 0; i < 20; i++) { + rs1.next(); + rs1.getInt(1); + } + rs1.close(); + + rs1 = stat1.executeQuery( + "select * from system_range(1, 20) order by x desc"); + rs2 = stat2.executeQuery( + "select * from system_range(1, 20) order by x desc"); + for (int i = 0; i < 20; i++) { + rs1.next(); + rs2.next(); + rs1.getInt(1); + rs2.getInt(1); + } + rs1.close(); + // verify the temp table is not deleted yet + rs2.beforeFirst(); + for (int i = 0; i < 20; i++) { + rs2.next(); + rs2.getInt(1); + } + rs2.close(); + + conn.close(); + } + + private void testTempTableResultSet() throws SQLException { + deleteDb("tempTables"); + Connection conn = getConnection( + "tempTables;MAX_MEMORY_ROWS=10"); + Statement stat1 = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); + Statement stat2 = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); + ResultSet rs1, rs2; + rs1 = stat1.executeQuery("select distinct * from system_range(1, 20)"); + // this will re-use the same temp table + rs2 = stat2.executeQuery("select distinct * from system_range(1, 20)"); + for (int i = 0; i < 20; i++) { + rs1.next(); + rs2.next(); + rs1.getInt(1); + rs2.getInt(1); + } + rs2.close(); + // verify the temp table is not deleted yet + rs1.beforeFirst(); + for 
(int i = 0; i < 20; i++) { + rs1.next(); + rs1.getInt(1); + } + rs1.close(); + + rs1 = stat1.executeQuery("select distinct * from system_range(1, 20)"); + rs2 = stat2.executeQuery("select distinct * from system_range(1, 20)"); + for (int i = 0; i < 20; i++) { + rs1.next(); + rs2.next(); + rs1.getInt(1); + rs2.getInt(1); + } + rs1.close(); + // verify the temp table is not deleted yet + rs2.beforeFirst(); + for (int i = 0; i < 20; i++) { + rs2.next(); + rs2.getInt(1); + } + rs2.close(); + + conn.close(); + } + + private void testTransactionalTemp() throws SQLException { + deleteDb("tempTables"); + Connection conn = getConnection("tempTables"); + conn.setAutoCommit(false); + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test values(1)"); + stat.execute("commit"); + stat.execute("insert into test values(2)"); + stat.execute("create local temporary table temp(" + + "id int primary key, name varchar, constraint x index(name)) transactional"); + stat.execute("insert into temp values(3, 'test')"); + stat.execute("rollback"); + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + assertFalse(rs.next()); + stat.execute("drop table test"); + stat.execute("drop table temp"); + conn.close(); + } + + private void testDeleteGlobalTempTableWhenClosing() throws SQLException { + if (config.memory) { + return; + } + if (config.mvStore) { + return; + } + deleteDb("tempTables"); + Connection conn = getConnection("tempTables"); + Statement stat = conn.createStatement(); + stat.execute("create global temporary table test(id int, data varchar)"); + stat.execute("insert into test " + + "select x, space(1000) from system_range(1, 1000)"); + stat.execute("shutdown compact"); + try { + conn.close(); + } catch (SQLException e) { + // expected + } + String dbName = getBaseDir() + "/tempTables" + Constants.SUFFIX_PAGE_FILE; + long before = FileUtils.size(dbName); + 
assertTrue(before > 0); + conn = getConnection("tempTables"); + conn.close(); + long after = FileUtils.size(dbName); + assertEquals(after, before); + } + + private void testAlter(Connection conn) throws SQLException { + Statement stat; + stat = conn.createStatement(); + stat.execute("create temporary table test(id varchar)"); + stat.execute("create index idx1 on test(id)"); + stat.execute("drop index idx1"); + stat.execute("create index idx1 on test(id)"); + stat.execute("insert into test select x from system_range(1, 10)"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat). + execute("alter table test add column x int"); + stat.execute("drop table test"); + } + + private static void testConstraints(Connection conn1, Connection conn2) + throws SQLException { + Statement s1 = conn1.createStatement(), s2 = conn2.createStatement(); + s1.execute("create local temporary table test(id int unique)"); + s2.execute("create local temporary table test(id int unique)"); + s1.execute("alter table test add constraint a unique(id)"); + s2.execute("alter table test add constraint a unique(id)"); + s1.execute("drop table test"); + s2.execute("drop table test"); + } + + private static void testIndexes(Connection conn1, Connection conn2) + throws SQLException { + conn1.createStatement().executeUpdate( + "create local temporary table test(id int)"); + conn1.createStatement().executeUpdate( + "create index idx_id on test(id)"); + conn2.createStatement().executeUpdate( + "create local temporary table test(id int)"); + conn2.createStatement().executeUpdate( + "create index idx_id on test(id)"); + conn2.createStatement().executeUpdate("drop index idx_id"); + conn2.createStatement().executeUpdate("drop table test"); + conn2.createStatement().executeUpdate("create table test(id int)"); + conn2.createStatement().executeUpdate("create index idx_id on test(id)"); + conn1.createStatement().executeUpdate("drop table test"); + conn1.createStatement().executeUpdate("drop table test"); + } + 
+ private void testTables(Connection c1, Connection c2) throws SQLException { + Statement s1 = c1.createStatement(); + Statement s2 = c2.createStatement(); + s1.execute("CREATE LOCAL TEMPORARY TABLE LT(A INT)"); + s1.execute("CREATE GLOBAL TEMPORARY TABLE GT1(ID INT)"); + s2.execute("CREATE GLOBAL TEMPORARY TABLE GT2(ID INT)"); + s2.execute("CREATE LOCAL TEMPORARY TABLE LT(B INT)"); + s2.execute("SELECT B FROM LT"); + s1.execute("SELECT A FROM LT"); + s1.execute("SELECT * FROM GT1"); + s2.execute("SELECT * FROM GT1"); + s1.execute("SELECT * FROM GT2"); + s2.execute("SELECT * FROM GT2"); + s2.execute("DROP TABLE GT1"); + s2.execute("DROP TABLE GT2"); + s2.execute("DROP TABLE LT"); + s1.execute("DROP TABLE LT"); + + // temp tables: 'on commit' syntax is currently not documented, because + // not tested well + // and hopefully nobody is using it, as it looks like functional sugar + // (these features are here for compatibility only) + ResultSet rs; + c1.setAutoCommit(false); + s1.execute("create local temporary table test_temp(id int) " + + "on commit delete rows"); + s1.execute("insert into test_temp values(1)"); + rs = s1.executeQuery("select * from test_temp"); + assertResultRowCount(1, rs); + c1.commit(); + rs = s1.executeQuery("select * from test_temp"); + assertResultRowCount(0, rs); + s1.execute("drop table test_temp"); + + s1.execute("create local temporary table test_temp(id int) on commit drop"); + s1.execute("insert into test_temp values(1)"); + rs = s1.executeQuery("select * from test_temp"); + assertResultRowCount(1, rs); + c1.commit(); + // test_temp should have been dropped automatically + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, s1). 
+ executeQuery("select * from test_temp"); + } + + /** + * There was a bug where creating lots of tables would overflow the + * transaction table in the MVStore + */ + private void testLotsOfTables() throws SQLException { + deleteDb("tempTables"); + Connection conn = getConnection("tempTables"); + Statement stat = conn.createStatement(); + for (int i = 0; i < 100000; i++) { + stat.executeUpdate("create local temporary table t(id int)"); + stat.executeUpdate("drop table t"); + } + conn.close(); + } + + /** + * Issue #401: NPE in "SELECT DISTINCT * ORDER BY" + */ + private void testCreateAsSelectDistinct() throws SQLException { + deleteDb("tempTables"); + Connection conn = getConnection("tempTables;MAX_MEMORY_ROWS=1000"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE ONE(S1 VARCHAR(255), S2 VARCHAR(255))"); + PreparedStatement prep = conn + .prepareStatement("insert into one values(?,?)"); + for (int row = 0; row < 10000; row++) { + prep.setString(1, "abc"); + prep.setString(2, "def" + row); + prep.execute(); + } + stat.execute( + "CREATE TABLE TWO AS SELECT DISTINCT * FROM ONE ORDER BY S1"); + conn.close(); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestTransaction.java b/modules/h2/src/test/java/org/h2/test/db/TestTransaction.java new file mode 100644 index 0000000000000..e2f1bdff6ba2d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestTransaction.java @@ -0,0 +1,585 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Savepoint; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.New; + +/** + * Transactional tests, including transaction isolation tests, and tests related + * to savepoints. + */ +public class TestTransaction extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testClosingConnectionWithSessionTempTable(); + testClosingConnectionWithLockedTable(); + testConstraintCreationRollback(); + testCommitOnAutoCommitChange(); + testConcurrentSelectForUpdate(); + testLogMode(); + testRollback(); + testRollback2(); + testForUpdate(); + testSetTransaction(); + testReferential(); + testSavepoint(); + testIsolation(); + testTwoPhaseCommit(); + deleteDb("transaction"); + } + + private void testConstraintCreationRollback() throws SQLException { + deleteDb("transaction"); + Connection conn = getConnection("transaction"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, p int)"); + stat.execute("insert into test values(1, 2)"); + try { + stat.execute("alter table test add constraint fail " + + "foreign key(p) references test(id)"); + fail(); + } catch (SQLException e) { + // expected + } + stat.execute("insert into test values(1, 2)"); + stat.execute("drop table test"); + conn.close(); + } + + private void testCommitOnAutoCommitChange() throws SQLException { + deleteDb("transaction"); + Connection conn = getConnection("transaction"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int 
primary key)"); + + Connection conn2 = getConnection("transaction"); + Statement stat2 = conn2.createStatement(); + + conn.setAutoCommit(false); + stat.execute("insert into test values(1)"); + + // should have no effect + conn.setAutoCommit(false); + + ResultSet rs; + if (config.mvcc || config.mvStore) { + rs = stat2.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(0, rs.getInt(1)); + } else { + assertThrows(ErrorCode.LOCK_TIMEOUT_1, stat2). + executeQuery("select count(*) from test"); + } + + // should commit + conn.setAutoCommit(true); + + rs = stat2.executeQuery("select * from test"); + assertTrue(rs.next()); + + stat.execute("drop table test"); + + conn2.close(); + conn.close(); + } + + private void testLogMode() throws SQLException { + if (config.memory) { + return; + } + if (config.mvStore) { + return; + } + deleteDb("transaction"); + testLogMode(0); + testLogMode(1); + testLogMode(2); + } + + private void testLogMode(int logMode) throws SQLException { + Connection conn; + Statement stat; + ResultSet rs; + conn = getConnection("transaction"); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key) as select 1"); + stat.execute("set write_delay 0"); + stat.execute("set log " + logMode); + rs = stat.executeQuery( + "select value from information_schema.settings where name = 'LOG'"); + rs.next(); + assertEquals(logMode, rs.getInt(1)); + stat.execute("insert into test values(2)"); + stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (SQLException e) { + // expected + } + conn = getConnection("transaction"); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + if (logMode != 0) { + assertTrue(rs.next()); + } + assertFalse(rs.next()); + stat.execute("drop table test"); + conn.close(); + } + + private void testConcurrentSelectForUpdate() throws SQLException { + deleteDb("transaction"); + Connection conn = 
getConnection("transaction"); + conn.setAutoCommit(false); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello'), (2, 'World')"); + conn.commit(); + PreparedStatement prep = conn.prepareStatement( + "select * from test for update"); + prep.execute(); + Connection conn2 = getConnection("transaction"); + conn2.setAutoCommit(false); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, conn2.createStatement()). + execute("select * from test for update"); + conn2.close(); + conn.close(); + } + + private void testForUpdate() throws SQLException { + deleteDb("transaction"); + Connection conn = getConnection("transaction"); + conn.setAutoCommit(false); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello'), (2, 'World')"); + conn.commit(); + PreparedStatement prep = conn.prepareStatement( + "select * from test where id = 1 for update"); + prep.execute(); + // releases the lock + conn.commit(); + prep.execute(); + Connection conn2 = getConnection("transaction"); + conn2.setAutoCommit(false); + Statement stat2 = conn2.createStatement(); + if (config.mvcc) { + stat2.execute("update test set name = 'Welt' where id = 2"); + } + assertThrows(ErrorCode.LOCK_TIMEOUT_1, stat2). 
+ execute("update test set name = 'Hallo' where id = 1"); + conn2.close(); + conn.close(); + } + + private void testRollback() throws SQLException { + deleteDb("transaction"); + Connection conn = getConnection("transaction"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("create index idx_id on test(id)"); + stat.execute("insert into test values(1), (1), (1)"); + if (!config.memory) { + conn.close(); + conn = getConnection("transaction"); + stat = conn.createStatement(); + } + conn.setAutoCommit(false); + stat.execute("delete from test"); + conn.rollback(); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + assertResultRowCount(3, rs); + rs = stat.executeQuery("select * from test where id = 1"); + assertResultRowCount(3, rs); + conn.close(); + + conn = getConnection("transaction"); + stat = conn.createStatement(); + stat.execute("create table master(id int) as select 1"); + stat.execute("create table child1(id int references master(id) " + + "on delete cascade)"); + stat.execute("insert into child1 values(1), (1), (1)"); + stat.execute("create table child2(id int references master(id)) as select 1"); + if (!config.memory) { + conn.close(); + conn = getConnection("transaction"); + } + stat = conn.createStatement(); + assertThrows( + ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_CHILD_EXISTS_1, stat). 
+ execute("delete from master"); + conn.rollback(); + rs = stat.executeQuery("select * from master where id=1"); + assertResultRowCount(1, rs); + rs = stat.executeQuery("select * from child1"); + assertResultRowCount(3, rs); + rs = stat.executeQuery("select * from child1 where id=1"); + assertResultRowCount(3, rs); + conn.close(); + } + + private void testRollback2() throws SQLException { + deleteDb("transaction"); + Connection conn = getConnection("transaction"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("create index idx_id on test(id)"); + stat.execute("insert into test values(1), (1)"); + if (!config.memory) { + conn.close(); + conn = getConnection("transaction"); + stat = conn.createStatement(); + } + conn.setAutoCommit(false); + stat.execute("delete from test"); + conn.rollback(); + ResultSet rs; + rs = stat.executeQuery("select * from test where id = 1"); + assertResultRowCount(2, rs); + conn.close(); + + conn = getConnection("transaction"); + stat = conn.createStatement(); + stat.execute("create table master(id int) as select 1"); + stat.execute("create table child1(id int references master(id) " + + "on delete cascade)"); + stat.execute("insert into child1 values(1), (1)"); + stat.execute("create table child2(id int references master(id)) as select 1"); + if (!config.memory) { + conn.close(); + conn = getConnection("transaction"); + } + stat = conn.createStatement(); + assertThrows( + ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_CHILD_EXISTS_1, stat). 
+ execute("delete from master"); + rs = stat.executeQuery("select * from master where id=1"); + assertResultRowCount(1, rs); + rs = stat.executeQuery("select * from child1 where id=1"); + assertResultRowCount(2, rs); + conn.close(); + } + + private void testSetTransaction() throws SQLException { + deleteDb("transaction"); + Connection conn = getConnection("transaction"); + conn.setAutoCommit(false); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("insert into test values(1)"); + stat.execute("set @x = 1"); + conn.commit(); + assertSingleValue(stat, "select id from test", 1); + assertSingleValue(stat, "call @x", 1); + + stat.execute("update test set id=2"); + stat.execute("set @x = 2"); + conn.rollback(); + assertSingleValue(stat, "select id from test", 1); + assertSingleValue(stat, "call @x", 2); + + conn.close(); + } + + private void testReferential() throws SQLException { + deleteDb("transaction"); + Connection c1 = getConnection("transaction"); + c1.setAutoCommit(false); + Statement s1 = c1.createStatement(); + s1.execute("drop table if exists a"); + s1.execute("drop table if exists b"); + s1.execute("create table a (id integer identity not null, " + + "code varchar(10) not null, primary key(id))"); + s1.execute("create table b (name varchar(100) not null, a integer, " + + "primary key(name), foreign key(a) references a(id))"); + Connection c2 = getConnection("transaction"); + c2.setAutoCommit(false); + s1.executeUpdate("insert into A(code) values('one')"); + Statement s2 = c2.createStatement(); + if (config.mvcc || config.mvStore) { + assertThrows( + ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, s2). + executeUpdate("insert into B values('two', 1)"); + } else { + assertThrows(ErrorCode.LOCK_TIMEOUT_1, s2). 
+ executeUpdate("insert into B values('two', 1)"); + } + c2.commit(); + c1.rollback(); + c1.close(); + c2.close(); + } + + private void testClosingConnectionWithLockedTable() throws SQLException { + deleteDb("transaction"); + Connection c1 = getConnection("transaction"); + Connection c2 = getConnection("transaction"); + c1.setAutoCommit(false); + c2.setAutoCommit(false); + + Statement s1 = c1.createStatement(); + s1.execute("create table a (id integer identity not null, " + + "code varchar(10) not null, primary key(id))"); + s1.executeUpdate("insert into a(code) values('one')"); + c1.commit(); + s1.executeQuery("select * from a for update"); + c1.close(); + + Statement s2 = c2.createStatement(); + s2.executeQuery("select * from a for update"); + c2.close(); + } + + private void testClosingConnectionWithSessionTempTable() throws SQLException { + deleteDb("transaction"); + Connection c1 = getConnection("transaction"); + Connection c2 = getConnection("transaction"); + c1.setAutoCommit(false); + c2.setAutoCommit(false); + + Statement s1 = c1.createStatement(); + s1.execute("create local temporary table a (id int, x BLOB)"); + c1.commit(); + c1.close(); + + Statement s2 = c2.createStatement(); + s2.execute("create table c (id int)"); + c2.close(); + } + + private void testSavepoint() throws SQLException { + deleteDb("transaction"); + Connection conn = getConnection("transaction"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST0(ID IDENTITY, NAME VARCHAR)"); + stat.execute("CREATE TABLE TEST1(NAME VARCHAR, " + + "ID IDENTITY, X TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"); + conn.setAutoCommit(false); + int[] count = new int[2]; + int[] countCommitted = new int[2]; + int[] countSave = new int[2]; + int len = getSize(2000, 10000); + Random random = new Random(10); + Savepoint sp = null; + for (int i = 0; i < len; i++) { + int tableId = random.nextInt(2); + String table = "TEST" + tableId; + int op = random.nextInt(6); + switch (op) { + case 0: + 
stat.execute("INSERT INTO " + table + "(NAME) VALUES('op" + i + "')"); + count[tableId]++; + break; + case 1: + if (count[tableId] > 0) { + int updateCount = stat.executeUpdate( + "DELETE FROM " + table + + " WHERE ID=(SELECT MIN(ID) FROM " + table + ")"); + assertEquals(1, updateCount); + count[tableId]--; + } + break; + case 2: + sp = conn.setSavepoint(); + countSave[0] = count[0]; + countSave[1] = count[1]; + break; + case 3: + if (sp != null) { + conn.rollback(sp); + count[0] = countSave[0]; + count[1] = countSave[1]; + } + break; + case 4: + conn.commit(); + sp = null; + countCommitted[0] = count[0]; + countCommitted[1] = count[1]; + break; + case 5: + conn.rollback(); + sp = null; + count[0] = countCommitted[0]; + count[1] = countCommitted[1]; + break; + default: + } + checkTableCount(stat, "TEST0", count[0]); + checkTableCount(stat, "TEST1", count[1]); + } + conn.close(); + } + + private void checkTableCount(Statement stat, String tableName, int count) + throws SQLException { + ResultSet rs; + rs = stat.executeQuery("SELECT COUNT(*) FROM " + tableName); + rs.next(); + assertEquals(count, rs.getInt(1)); + } + + private void testIsolation() throws SQLException { + Connection conn = getConnection("transaction"); + trace("default TransactionIsolation=" + conn.getTransactionIsolation()); + conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); + assertTrue(conn.getTransactionIsolation() == + Connection.TRANSACTION_READ_COMMITTED); + conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); + assertTrue(conn.getTransactionIsolation() == + Connection.TRANSACTION_SERIALIZABLE); + Statement stat = conn.createStatement(); + assertTrue(conn.getAutoCommit()); + conn.setAutoCommit(false); + assertFalse(conn.getAutoCommit()); + conn.setAutoCommit(true); + assertTrue(conn.getAutoCommit()); + test(stat, "CREATE TABLE TEST(ID INT PRIMARY KEY)"); + conn.commit(); + test(stat, "INSERT INTO TEST VALUES(0)"); + conn.rollback(); + testValue(stat, "SELECT COUNT(*) 
FROM TEST", "1"); + conn.setAutoCommit(false); + test(stat, "DELETE FROM TEST"); + // testValue("SELECT COUNT(*) FROM TEST", "0"); + conn.rollback(); + testValue(stat, "SELECT COUNT(*) FROM TEST", "1"); + conn.commit(); + conn.setAutoCommit(true); + testNestedResultSets(conn); + conn.setAutoCommit(false); + testNestedResultSets(conn); + conn.close(); + } + + private void testNestedResultSets(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + test(stat, "CREATE TABLE NEST1(ID INT PRIMARY KEY,VALUE VARCHAR(255))"); + test(stat, "CREATE TABLE NEST2(ID INT PRIMARY KEY,VALUE VARCHAR(255))"); + DatabaseMetaData meta = conn.getMetaData(); + ArrayList<String> result = New.arrayList(); + ResultSet rs1, rs2; + rs1 = meta.getTables(null, null, "NEST%", null); + while (rs1.next()) { + String table = rs1.getString("TABLE_NAME"); + rs2 = meta.getColumns(null, null, table, null); + while (rs2.next()) { + String column = rs2.getString("COLUMN_NAME"); + trace("Table: " + table + " Column: " + column); + result.add(table + "." 
+ column); + } + } + // should be NEST1.ID, NEST1.VALUE, NEST2.ID, NEST2.VALUE + assertEquals(result.toString(), 4, result.size()); + result = New.arrayList(); + test(stat, "INSERT INTO NEST1 VALUES(1,'A')"); + test(stat, "INSERT INTO NEST1 VALUES(2,'B')"); + test(stat, "INSERT INTO NEST2 VALUES(1,'1')"); + test(stat, "INSERT INTO NEST2 VALUES(2,'2')"); + Statement s1 = conn.createStatement(); + Statement s2 = conn.createStatement(); + rs1 = s1.executeQuery("SELECT * FROM NEST1 ORDER BY ID"); + while (rs1.next()) { + rs2 = s2.executeQuery("SELECT * FROM NEST2 ORDER BY ID"); + while (rs2.next()) { + String v1 = rs1.getString("VALUE"); + String v2 = rs2.getString("VALUE"); + result.add(v1 + "/" + v2); + } + } + // should be A/1, A/2, B/1, B/2 + assertEquals(result.toString(), 4, result.size()); + result = New.arrayList(); + rs1 = s1.executeQuery("SELECT * FROM NEST1 ORDER BY ID"); + rs2 = s1.executeQuery("SELECT * FROM NEST2 ORDER BY ID"); + assertThrows(ErrorCode.OBJECT_CLOSED, rs1).next(); + // rs1 is already closed, but closing it again should not do any harm + rs1.close(); + while (rs2.next()) { + String v1 = rs2.getString("VALUE"); + result.add(v1); + } + // should be 1, 2 + assertEquals(result.toString(), 2, result.size()); + test(stat, "DROP TABLE NEST1"); + test(stat, "DROP TABLE NEST2"); + } + + private void testValue(Statement stat, String sql, String data) + throws SQLException { + ResultSet rs = stat.executeQuery(sql); + rs.next(); + String s = rs.getString(1); + assertEquals(data, s); + } + + private void test(Statement stat, String sql) throws SQLException { + trace(sql); + stat.execute(sql); + } + + private void testTwoPhaseCommit() throws SQLException { + if (config.memory) { + return; + } + deleteDb("transaction2pc"); + Connection conn = getConnection("transaction2pc"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID INT PRIMARY KEY)"); + conn.setAutoCommit(false); + stat.execute("INSERT INTO TEST VALUES (1)"); + 
stat.execute("PREPARE COMMIT \"#1\""); + conn.commit(); + stat.execute("SHUTDOWN IMMEDIATELY"); + conn = getConnection("transaction2pc"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT TRANSACTION, STATE FROM INFORMATION_SCHEMA.IN_DOUBT"); + assertFalse(rs.next()); + rs = stat.executeQuery("SELECT ID FROM TEST"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + conn.setAutoCommit(false); + stat.execute("INSERT INTO TEST VALUES (2)"); + stat.execute("PREPARE COMMIT \"#2\""); + conn.rollback(); + stat.execute("SHUTDOWN IMMEDIATELY"); + conn = getConnection("transaction2pc"); + stat = conn.createStatement(); + rs = stat.executeQuery("SELECT TRANSACTION, STATE FROM INFORMATION_SCHEMA.IN_DOUBT"); + assertFalse(rs.next()); + rs = stat.executeQuery("SELECT ID FROM TEST"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + conn.close(); + deleteDb("transaction2pc"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestTriggersConstraints.java b/modules/h2/src/test/java/org/h2/test/db/TestTriggersConstraints.java new file mode 100644 index 0000000000000..7cc18e462a55b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestTriggersConstraints.java @@ -0,0 +1,742 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Arrays; +import java.util.HashSet; + +import org.h2.api.ErrorCode; +import org.h2.api.Trigger; +import org.h2.engine.Session; +import org.h2.jdbc.JdbcConnection; +import org.h2.test.TestBase; +import org.h2.tools.TriggerAdapter; +import org.h2.util.Task; +import org.h2.value.ValueLong; + +/** + * Tests for trigger and constraints. + */ +public class TestTriggersConstraints extends TestBase implements Trigger { + + private static boolean mustNotCallTrigger; + private String triggerName; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("trigger"); + testTriggerDeadlock(); + testDeleteInTrigger(); + testTriggerAdapter(); + testTriggerSelectEachRow(); + testViewTrigger(); + testViewTriggerGeneratedKeys(); + testTriggerBeforeSelect(); + testTriggerAlterTable(); + testTriggerAsSource(); +// testTriggerAsJavascript(); + testTriggers(); + testConstraints(); + testCheckConstraintErrorMessage(); + testMultiPartForeignKeys(); + deleteDb("trigger"); + } + + /** + * A trigger that deletes all rows in the test table. 
+ */ + public static class DeleteTrigger extends TriggerAdapter { + @Override + public void fire(Connection conn, ResultSet oldRow, ResultSet newRow) + throws SQLException { + conn.createStatement().execute("delete from test"); + } + } + + private void testTriggerDeadlock() throws Exception { + final Connection conn, conn2; + final Statement stat, stat2; + conn = getConnection("trigger"); + conn2 = getConnection("trigger"); + stat = conn.createStatement(); + stat2 = conn2.createStatement(); + stat.execute("create table test(id int) as select 1"); + stat.execute("create table test2(id int) as select 1"); + stat.execute("create trigger test_u before update on test2 " + + "for each row call \"" + DeleteTrigger.class.getName() + "\""); + conn.setAutoCommit(false); + conn2.setAutoCommit(false); + stat2.execute("update test set id = 2"); + Task task = new Task() { + @Override + public void call() throws Exception { + Thread.sleep(300); + stat2.execute("update test2 set id = 4"); + } + }; + task.execute(); + Thread.sleep(100); + try { + stat.execute("update test2 set id = 3"); + task.get(); + } catch (SQLException e) { + assertEquals(ErrorCode.LOCK_TIMEOUT_1, e.getErrorCode()); + } + conn2.rollback(); + conn.rollback(); + stat.execute("drop table test"); + stat.execute("drop table test2"); + conn.close(); + conn2.close(); + } + + private void testDeleteInTrigger() throws SQLException { + if (config.mvcc || config.mvStore) { + return; + } + Connection conn; + Statement stat; + conn = getConnection("trigger"); + stat = conn.createStatement(); + stat.execute("create table test(id int) as select 1"); + stat.execute("create trigger test_u before update on test " + + "for each row call \"" + DeleteTrigger.class.getName() + "\""); + // this threw a NullPointerException + assertThrows(ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1, stat). 
+ execute("update test set id = 2"); + stat.execute("drop table test"); + conn.close(); + } + + private void testTriggerAdapter() throws SQLException { + Connection conn; + Statement stat; + conn = getConnection("trigger"); + stat = conn.createStatement(); + stat.execute("drop table if exists test"); + stat.execute("create table test(id int, c clob, b blob)"); + stat.execute("create table message(name varchar)"); + stat.execute( + "create trigger test_insert before insert, update, delete on test " + + "for each row call \"" + TestTriggerAdapter.class.getName() + "\""); + stat.execute("insert into test values(1, 'hello', 'abcd')"); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(10, rs.getInt(1)); + stat.execute("update test set id = 2"); + rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(20, rs.getInt(1)); + stat.execute("delete from test"); + rs = stat.executeQuery("select * from message"); + assertTrue(rs.next()); + assertEquals("+1;", rs.getString(1)); + assertTrue(rs.next()); + assertEquals("-10;+2;", rs.getString(1)); + assertTrue(rs.next()); + assertEquals("-20;", rs.getString(1)); + assertFalse(rs.next()); + stat.execute("drop table test, message"); + conn.close(); + } + + private void testTriggerSelectEachRow() throws SQLException { + Connection conn; + Statement stat; + conn = getConnection("trigger"); + stat = conn.createStatement(); + stat.execute("drop table if exists test"); + stat.execute("create table test(id int)"); + assertThrows(ErrorCode.TRIGGER_SELECT_AND_ROW_BASED_NOT_SUPPORTED, stat) + .execute("create trigger test_insert before select on test " + + "for each row call \"" + TestTriggerAdapter.class.getName() + "\""); + conn.close(); + } + + private void testViewTrigger() throws SQLException { + Connection conn; + Statement stat; + conn = getConnection("trigger"); + stat = conn.createStatement(); + stat.execute("drop table if exists test"); + stat.execute("create table test(id 
int)"); + stat.execute("create view test_view as select * from test"); + stat.execute("create trigger test_view_insert " + + "instead of insert on test_view for each row call \"" + + TestView.class.getName() + "\""); + stat.execute("create trigger test_view_delete " + + "instead of delete on test_view for each row call \"" + + TestView.class.getName() + "\""); + if (!config.memory) { + conn.close(); + conn = getConnection("trigger"); + stat = conn.createStatement(); + } + int count = stat.executeUpdate("insert into test_view values(1)"); + assertEquals(1, count); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + assertFalse(rs.next()); + count = stat.executeUpdate("delete from test_view"); + assertEquals(1, count); + stat.execute("drop view test_view"); + stat.execute("drop table test"); + conn.close(); + } + + private void testViewTriggerGeneratedKeys() throws SQLException { + Connection conn; + Statement stat; + conn = getConnection("trigger"); + stat = conn.createStatement(); + stat.execute("drop table if exists test"); + stat.execute("create table test(id int identity)"); + stat.execute("create view test_view as select * from test"); + stat.execute("create trigger test_view_insert " + + "instead of insert on test_view for each row call \"" + + TestViewGeneratedKeys.class.getName() + "\""); + if (!config.memory) { + conn.close(); + conn = getConnection("trigger"); + stat = conn.createStatement(); + } + + PreparedStatement pstat; + pstat = conn.prepareStatement( + "insert into test_view values()", Statement.RETURN_GENERATED_KEYS); + int count = pstat.executeUpdate(); + assertEquals(1, count); + + ResultSet gkRs; + gkRs = stat.executeQuery("select scope_identity()"); + + assertTrue(gkRs.next()); + assertEquals(1, gkRs.getInt(1)); + assertFalse(gkRs.next()); + + ResultSet rs; + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + assertFalse(rs.next()); + stat.execute("drop view test_view"); + 
stat.execute("drop table test"); + conn.close(); + } + + /** + * A test trigger adapter implementation. + */ + public static class TestTriggerAdapter extends TriggerAdapter { + + @Override + public void fire(Connection conn, ResultSet oldRow, ResultSet newRow) + throws SQLException { + StringBuilder buff = new StringBuilder(); + if (oldRow != null) { + buff.append("-").append(oldRow.getString("id")).append(';'); + } + if (newRow != null) { + buff.append("+").append(newRow.getString("id")).append(';'); + } + if (!"TEST_INSERT".equals(triggerName)) { + throw new RuntimeException("Wrong trigger name: " + triggerName); + } + if (!"TEST".equals(tableName)) { + throw new RuntimeException("Wrong table name: " + tableName); + } + if (!"PUBLIC".equals(schemaName)) { + throw new RuntimeException("Wrong schema name: " + schemaName); + } + if (type != (Trigger.INSERT | Trigger.UPDATE | Trigger.DELETE)) { + throw new RuntimeException("Wrong type: " + type); + } + if (newRow != null) { + if (oldRow == null) { + if (newRow.getInt(1) != 1) { + throw new RuntimeException("Expected: 1 got: " + + newRow.getString(1)); + } + } else { + if (newRow.getInt(1) != 2) { + throw new RuntimeException("Expected: 2 got: " + + newRow.getString(1)); + } + } + newRow.getCharacterStream(2); + newRow.getBinaryStream(3); + newRow.updateInt(1, newRow.getInt(1) * 10); + } + conn.createStatement().execute("insert into message values('" + + buff.toString() + "')"); + } + + } + + /** + * A test trigger implementation. 
+ */ + public static class TestView implements Trigger { + + PreparedStatement prepInsert; + + @Override + public void init(Connection conn, String schemaName, + String triggerName, String tableName, boolean before, int type) + throws SQLException { + prepInsert = conn.prepareStatement("insert into test values(?)"); + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + if (newRow != null) { + prepInsert.setInt(1, (Integer) newRow[0]); + prepInsert.execute(); + } + } + + @Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + + } + + /** + * A test trigger implementation that passes the generated key of its + * insert back to the session. + */ + public static class TestViewGeneratedKeys implements Trigger { + + PreparedStatement prepInsert; + + @Override + public void init(Connection conn, String schemaName, + String triggerName, String tableName, boolean before, int type) + throws SQLException { + prepInsert = conn.prepareStatement( + "insert into test values()", Statement.RETURN_GENERATED_KEYS); + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + if (newRow != null) { + prepInsert.execute(); + ResultSet rs = prepInsert.getGeneratedKeys(); + if (rs.next()) { + JdbcConnection jconn = (JdbcConnection) conn; + Session session = (Session) jconn.getSession(); + session.setLastTriggerIdentity(ValueLong.get(rs.getLong(1))); + } + } + } + + @Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + + } + + private void testTriggerBeforeSelect() throws SQLException { + Connection conn; + Statement stat; + conn = getConnection("trigger"); + stat = conn.createStatement(); + stat.execute("drop table if exists meta_tables"); + stat.execute("create table meta_tables(name varchar)"); + stat.execute("create trigger meta_tables_select " + + "before select on meta_tables call \"" + TestSelect.class.getName() + "\""); + ResultSet rs; + rs = 
stat.executeQuery("select * from meta_tables"); + assertTrue(rs.next()); + assertFalse(rs.next()); + stat.execute("create table test(id int)"); + rs = stat.executeQuery("select * from meta_tables"); + assertTrue(rs.next()); + assertTrue(rs.next()); + assertFalse(rs.next()); + conn.close(); + if (!config.memory) { + conn = getConnection("trigger"); + stat = conn.createStatement(); + stat.execute("create table test2(id int)"); + rs = stat.executeQuery("select * from meta_tables"); + assertTrue(rs.next()); + assertTrue(rs.next()); + assertTrue(rs.next()); + assertFalse(rs.next()); + conn.close(); + } + } + + /** + * A test trigger implementation. + */ + public static class TestSelect implements Trigger { + + PreparedStatement prepMeta; + + @Override + public void init(Connection conn, String schemaName, + String triggerName, String tableName, boolean before, int type) + throws SQLException { + prepMeta = conn.prepareStatement("insert into meta_tables " + + "select table_name from information_schema.tables " + + "where table_schema='PUBLIC'"); + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + if (oldRow != null || newRow != null) { + throw new SQLException("old and new must be null"); + } + conn.createStatement().execute("delete from meta_tables"); + prepMeta.execute(); + } + + @Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + + } + + /** + * A test trigger implementation. 
+ */ + public static class TestTriggerAlterTable implements Trigger { + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + conn.createStatement().execute("call seq.nextval"); + } + + @Override + public void init(Connection conn, String schemaName, + String triggerName, String tableName, boolean before, int type) { + // nothing to do + } + + @Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + + } + + private void testTriggerAlterTable() throws SQLException { + deleteDb("trigger"); + testTrigger(null); + } + + private void testTriggerAsSource() throws SQLException { + deleteDb("trigger"); + testTrigger("java"); + } + + private void testTriggerAsJavascript() throws SQLException { + deleteDb("trigger"); + testTrigger("javascript"); + } + + private void testTrigger(final String sourceLang) throws SQLException { + final String callSeq = "call seq.nextval"; + Connection conn = getConnection("trigger"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create sequence seq"); + stat.execute("create table test(id int primary key)"); + assertSingleValue(stat, callSeq, 1); + conn.setAutoCommit(false); + Trigger t = new org.h2.test.db.TestTriggersConstraints.TestTriggerAlterTable(); + t.close(); + if ("java".equals(sourceLang)) { + String triggerClassName = this.getClass().getName() + "." + + TestTriggerAlterTable.class.getSimpleName(); + stat.execute("create trigger test_upd before insert on test " + + "as $$org.h2.api.Trigger create() " + "{ return new " + + triggerClassName + "(); } $$"); + } else if ("javascript".equals(sourceLang)) { + String triggerClassName = this.getClass().getName() + "." + + TestTriggerAlterTable.class.getSimpleName(); + final String body = "//javascript\n" + + "new Packages." 
+ triggerClassName + "();"; + stat.execute("create trigger test_upd before insert on test as $$" + + body + " $$"); + } else { + stat.execute("create trigger test_upd before insert on test call \"" + + TestTriggerAlterTable.class.getName() + "\""); + } + stat.execute("insert into test values(1)"); + assertSingleValue(stat, callSeq, 3); + stat.execute("alter table test add column name varchar"); + assertSingleValue(stat, callSeq, 4); + stat.execute("drop sequence seq"); + stat.execute("drop table test"); + conn.close(); + } + + private void testConstraints() throws SQLException { + Connection conn = getConnection("trigger"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("create table test(id int primary key, parent int)"); + stat.execute("alter table test add constraint test_parent_id " + + "foreign key(parent) references test (id) on delete cascade"); + stat.execute("insert into test select x, x/2 from system_range(0, 100)"); + stat.execute("delete from test"); + assertSingleValue(stat, "select count(*) from test", 0); + stat.execute("drop table test"); + conn.close(); + } + + private void testCheckConstraintErrorMessage() throws SQLException { + Connection conn = getConnection("trigger"); + Statement stat = conn.createStatement(); + + stat.execute("create table companies(id identity)"); + stat.execute("create table departments(id identity, " + + "company_id int not null, " + + "foreign key(company_id) references companies(id))"); + stat.execute("create table connections (id identity, company_id int not null, " + + "first int not null, second int not null, " + + "foreign key (company_id) references companies(id), " + + "foreign key (first) references departments(id), " + + "foreign key (second) references departments(id), " + + "check (select departments.company_id from departments, companies where " + + " departments.id in (first, second)) = company_id)"); + stat.execute("insert into companies(id) 
values(1)"); + stat.execute("insert into departments(id, company_id) " + + "values(10, 1)"); + stat.execute("insert into departments(id, company_id) " + + "values(20, 1)"); + assertThrows(ErrorCode.CHECK_CONSTRAINT_INVALID, stat) + .execute("insert into connections(id, company_id, first, second) " + + "values(100, 1, 10, 20)"); + + stat.execute("drop table connections"); + stat.execute("drop table departments"); + stat.execute("drop table companies"); + conn.close(); + } + + /** + * Regression test: we had a bug where the AlterTableAddConstraint class + * used to sometimes pick the wrong unique index for a foreign key. + */ + private void testMultiPartForeignKeys() throws SQLException { + Connection conn = getConnection("trigger"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST1"); + stat.execute("DROP TABLE IF EXISTS TEST2"); + + stat.execute("create table test1(id int primary key, col1 int)"); + stat.execute("alter table test1 add constraint unique_test1 " + + "unique (id,col1)"); + + stat.execute("create table test2(id int primary key, col1 int)"); + stat.execute("alter table test2 add constraint fk_test2 " + + "foreign key(id,col1) references test1 (id,col1)"); + + stat.execute("insert into test1 values (1,1)"); + stat.execute("insert into test1 values (2,2)"); + stat.execute("insert into test1 values (3,3)"); + stat.execute("insert into test2 values (1,1)"); + assertThrows(23506, stat).execute("insert into test2 values (2,1)"); + assertSingleValue(stat, "select count(*) from test1", 3); + assertSingleValue(stat, "select count(*) from test2", 1); + + stat.execute("drop table test1"); + stat.execute("drop table test2"); + conn.close(); + } + + private void testTriggers() throws SQLException { + mustNotCallTrigger = false; + Connection conn = getConnection("trigger"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME 
VARCHAR(255))"); + // CREATE TRIGGER trigger {BEFORE|AFTER} + // {INSERT|UPDATE|DELETE|ROLLBACK} ON table + // [FOR EACH ROW] [QUEUE n] [NOWAIT] CALL triggeredClass + stat.execute("CREATE TRIGGER IF NOT EXISTS INS_BEFORE " + + "BEFORE INSERT ON TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\""); + stat.execute("CREATE TRIGGER IF NOT EXISTS INS_BEFORE " + + "BEFORE INSERT ON TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\""); + stat.execute("CREATE TRIGGER INS_AFTER " + "" + + "AFTER INSERT ON TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\""); + stat.execute("CREATE TRIGGER UPD_BEFORE " + + "BEFORE UPDATE ON TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\""); + stat.execute("CREATE TRIGGER INS_AFTER_ROLLBACK " + + "AFTER INSERT, ROLLBACK ON TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\""); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + ResultSet rs; + rs = stat.executeQuery("SCRIPT"); + checkRows(rs, new String[] { + "CREATE FORCE TRIGGER PUBLIC.INS_BEFORE " + + "BEFORE INSERT ON PUBLIC.TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\";", + "CREATE FORCE TRIGGER PUBLIC.INS_AFTER " + + "AFTER INSERT ON PUBLIC.TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\";", + "CREATE FORCE TRIGGER PUBLIC.UPD_BEFORE " + + "BEFORE UPDATE ON PUBLIC.TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\";", + "CREATE FORCE TRIGGER PUBLIC.INS_AFTER_ROLLBACK " + + "AFTER INSERT, ROLLBACK ON PUBLIC.TEST " + + "FOR EACH ROW NOWAIT CALL \"" + getClass().getName() + "\";", + }); + while (rs.next()) { + String sql = rs.getString(1); + if (sql.startsWith("CREATE TRIGGER")) { + System.out.println(sql); + } + } + + rs = stat.executeQuery("SELECT * FROM TEST"); + rs.next(); + assertEquals("Hello-updated", rs.getString(2)); + assertFalse(rs.next()); + stat.execute("UPDATE TEST SET NAME=NAME||'-upd'"); + rs = 
stat.executeQuery("SELECT * FROM TEST"); + rs.next(); + assertEquals("Hello-updated-upd-updated2", rs.getString(2)); + assertFalse(rs.next()); + + mustNotCallTrigger = true; + stat.execute("DROP TRIGGER IF EXISTS INS_BEFORE"); + stat.execute("DROP TRIGGER IF EXISTS INS_BEFORE"); + stat.execute("DROP TRIGGER IF EXISTS INS_AFTER_ROLLBACK"); + assertThrows(ErrorCode.TRIGGER_NOT_FOUND_1, stat). + execute("DROP TRIGGER INS_BEFORE"); + stat.execute("DROP TRIGGER INS_AFTER"); + stat.execute("DROP TRIGGER UPD_BEFORE"); + stat.execute("UPDATE TEST SET NAME=NAME||'-upd-no_trigger'"); + stat.execute("INSERT INTO TEST VALUES(100, 'Insert-no_trigger')"); + conn.close(); + + conn = getConnection("trigger"); + + mustNotCallTrigger = false; + conn.close(); + } + + private void checkRows(ResultSet rs, String[] expected) throws SQLException { + HashSet<String> set = new HashSet<>(Arrays.asList(expected)); + while (rs.next()) { + set.remove(rs.getString(1)); + } + if (set.size() > 0) { + fail("set should be empty: " + set); + } + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) + throws SQLException { + if (mustNotCallTrigger) { + throw new AssertionError("must not be called now"); + } + if (conn == null) { + throw new AssertionError("connection is null"); + } + if (triggerName.startsWith("INS_BEFORE")) { + newRow[1] = newRow[1] + "-updated"; + } else if (triggerName.startsWith("INS_AFTER")) { + if (!newRow[1].toString().endsWith("-updated")) { + throw new AssertionError("supposed to be updated"); + } + checkCommit(conn); + } else if (triggerName.startsWith("UPD_BEFORE")) { + newRow[1] = newRow[1] + "-updated2"; + } else if (triggerName.startsWith("UPD_AFTER")) { + if (!newRow[1].toString().endsWith("-updated2")) { + throw new AssertionError("supposed to be updated2"); + } + checkCommit(conn); + } + } + + @Override + public void close() { + // ignore + } + + @Override + public void remove() { + // ignore + } + + private void checkCommit(Connection conn) 
throws SQLException { + assertThrows(ErrorCode.COMMIT_ROLLBACK_NOT_ALLOWED, conn).commit(); + assertThrows(ErrorCode.COMMIT_ROLLBACK_NOT_ALLOWED, conn.createStatement()). + execute("CREATE TABLE X(ID INT)"); + } + + @Override + public void init(Connection conn, String schemaName, String trigger, + String tableName, boolean before, int type) { + this.triggerName = trigger; + if (!"TEST".equals(tableName)) { + throw new AssertionError("supposed to be TEST"); + } + if ((trigger.endsWith("AFTER") && before) || + (trigger.endsWith("BEFORE") && !before)) { + throw new AssertionError("triggerName: " + trigger + " before:" + before); + } + if ((trigger.startsWith("UPD") && type != UPDATE) || + (trigger.startsWith("INS") && type != INSERT) || + (trigger.startsWith("DEL") && type != DELETE)) { + throw new AssertionError("triggerName: " + trigger + " type:" + type); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestTwoPhaseCommit.java b/modules/h2/src/test/java/org/h2/test/db/TestTwoPhaseCommit.java new file mode 100644 index 0000000000000..24f68413b4aad --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestTwoPhaseCommit.java @@ -0,0 +1,119 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; + +import org.h2.test.TestBase; +import org.h2.util.New; + +/** + * Tests for the two-phase-commit feature. + */ +public class TestTwoPhaseCommit extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.memory || config.networked) { + return; + } + + deleteDb("twoPhaseCommit"); + + prepare(); + openWith(true); + test(true); + + prepare(); + openWith(false); + test(false); + + if (!config.mvStore) { + testLargeTransactionName(); + } + deleteDb("twoPhaseCommit"); + } + + private void testLargeTransactionName() throws SQLException { + Connection conn = getConnection("twoPhaseCommit"); + Statement stat = conn.createStatement(); + conn.setAutoCommit(false); + stat.execute("CREATE TABLE TEST2(ID INT)"); + String name = "tx12345678"; + try { + while (true) { + stat.execute("INSERT INTO TEST2 VALUES(1)"); + name += "x"; + stat.execute("PREPARE COMMIT " + name); + } + } catch (SQLException e) { + assertKnownException(e); + } + conn.close(); + } + + private void test(boolean rolledBack) throws SQLException { + Connection conn = getConnection("twoPhaseCommit"); + Statement stat = conn.createStatement(); + stat.execute("SET WRITE_DELAY 0"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + if (!rolledBack) { + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + } + assertFalse(rs.next()); + conn.close(); + } + + private void openWith(boolean rollback) throws SQLException { + Connection conn = getConnection("twoPhaseCommit"); + Statement stat = conn.createStatement(); + ArrayList<String> list = New.arrayList(); + ResultSet rs = stat.executeQuery("SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT"); + while (rs.next()) { + list.add(rs.getString("TRANSACTION")); + } + for (String s : list) { + if (rollback) { + stat.execute("ROLLBACK TRANSACTION " + s); + } else { + stat.execute("COMMIT TRANSACTION " + s); + } + } + conn.close(); + } + + private void prepare() throws SQLException { + deleteDb("twoPhaseCommit"); + 
Connection conn = getConnection("twoPhaseCommit"); + Statement stat = conn.createStatement(); + stat.execute("SET WRITE_DELAY 0"); + conn.setAutoCommit(false); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + conn.commit(); + stat.execute("INSERT INTO TEST VALUES(2, 'World')"); + stat.execute("PREPARE COMMIT XID_TEST_TRANSACTION_WITH_LONG_NAME"); + crash(conn); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestUpgrade.java b/modules/h2/src/test/java/org/h2/test/db/TestUpgrade.java new file mode 100644 index 0000000000000..ae33b0677394d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestUpgrade.java @@ -0,0 +1,230 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.io.OutputStream; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.upgrade.DbUpgrade; +import org.h2.util.Utils; + +/** + * Automatic upgrade test cases. + */ +public class TestUpgrade extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.mvStore) { + return; + } + if (!Utils.isClassPresent("org.h2.upgrade.v1_1.Driver")) { + return; + } + testLobs(); + testErrorUpgrading(); + testNoDb(); + testNoUpgradeOldAndNew(); + testIfExists(); + testCipher(); + } + + private void testLobs() throws Exception { + deleteDb("upgrade"); + Connection conn; + conn = DriverManager.getConnection("jdbc:h2v1_1:" + + getBaseDir() + "/upgrade;PAGE_STORE=FALSE", getUser(), getPassword()); + conn.createStatement().execute( + "create table test(data clob) as select space(100000)"); + conn.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.data.db")); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.index.db")); + DbUpgrade.setDeleteOldDb(true); + DbUpgrade.setScriptInTempDir(true); + conn = getConnection("upgrade"); + assertFalse(FileUtils.exists(getBaseDir() + "/upgrade.data.db")); + assertFalse(FileUtils.exists(getBaseDir() + "/upgrade.index.db")); + ResultSet rs = conn.createStatement().executeQuery("select * from test"); + rs.next(); + assertEquals(new String(new char[100000]).replace((char) 0, ' '), + rs.getString(1)); + conn.close(); + DbUpgrade.setDeleteOldDb(false); + DbUpgrade.setScriptInTempDir(false); + deleteDb("upgrade"); + } + + private void testErrorUpgrading() throws Exception { + deleteDb("upgrade"); + OutputStream out; + out = FileUtils.newOutputStream(getBaseDir() + "/upgrade.data.db", false); + out.write(new byte[10000]); + out.close(); + out = FileUtils.newOutputStream(getBaseDir() + "/upgrade.index.db", false); + out.write(new byte[10000]); + out.close(); + assertThrows(ErrorCode.FILE_VERSION_ERROR_1, this). 
+ getConnection("upgrade"); + + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.data.db")); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.index.db")); + deleteDb("upgrade"); + } + + private void testNoDb() throws SQLException { + deleteDb("upgrade"); + Connection conn = getConnection("upgrade"); + conn.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.h2.db")); + deleteDb("upgrade"); + + conn = getConnection("upgrade;NO_UPGRADE=TRUE"); + conn.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.h2.db")); + deleteDb("upgrade"); + } + + private void testNoUpgradeOldAndNew() throws Exception { + deleteDb("upgrade"); + deleteDb("upgradeOld"); + String additionalParameters = ";AUTO_SERVER=TRUE;OPEN_NEW=TRUE"; + + // Create old db + Utils.callStaticMethod("org.h2.upgrade.v1_1.Driver.load"); + Connection connOld = DriverManager.getConnection("jdbc:h2v1_1:" + + getBaseDir() + "/upgradeOld;PAGE_STORE=FALSE" + additionalParameters); + // Test auto server, too + Connection connOld2 = DriverManager.getConnection("jdbc:h2v1_1:" + + getBaseDir() + "/upgradeOld;PAGE_STORE=FALSE" + additionalParameters); + Statement statOld = connOld.createStatement(); + statOld.execute("create table testOld(id int)"); + connOld.close(); + connOld2.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgradeOld.data.db")); + + // Create new DB + Connection connNew = DriverManager.getConnection("jdbc:h2:" + + getBaseDir() + "/upgrade" + additionalParameters); + Connection connNew2 = DriverManager.getConnection("jdbc:h2:" + + getBaseDir() + "/upgrade" + additionalParameters); + Statement statNew = connNew.createStatement(); + statNew.execute("create table test(id int)"); + + // Link to old DB without upgrade + statNew.executeUpdate("CREATE LOCAL TEMPORARY LINKED TABLE " + + "linkedTestOld('org.h2.Driver', 'jdbc:h2v1_1:" + + getBaseDir() + "/upgradeOld" + additionalParameters + "', '', '', 'TestOld')"); + statNew.executeQuery("select * from linkedTestOld"); + 
connNew.close(); + connNew2.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgradeOld.data.db")); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.h2.db")); + + connNew = DriverManager.getConnection("jdbc:h2:" + + getBaseDir() + "/upgrade" + additionalParameters); + connNew2 = DriverManager.getConnection("jdbc:h2:" + + getBaseDir() + "/upgrade" + additionalParameters); + statNew = connNew.createStatement(); + // Link to old DB with upgrade + statNew.executeUpdate("CREATE LOCAL TEMPORARY LINKED TABLE " + + "linkedTestOld('org.h2.Driver', 'jdbc:h2:" + + getBaseDir() + "/upgradeOld" + additionalParameters + "', '', '', 'TestOld')"); + statNew.executeQuery("select * from linkedTestOld"); + connNew.close(); + connNew2.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgradeOld.h2.db")); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.h2.db")); + + deleteDb("upgrade"); + deleteDb("upgradeOld"); + } + + private void testIfExists() throws Exception { + deleteDb("upgrade"); + + // Create old + Utils.callStaticMethod("org.h2.upgrade.v1_1.Driver.load"); + Connection connOld = DriverManager.getConnection( + "jdbc:h2v1_1:" + getBaseDir() + "/upgrade;PAGE_STORE=FALSE"); + // Test auto server, too + Connection connOld2 = DriverManager.getConnection( + "jdbc:h2v1_1:" + getBaseDir() + "/upgrade;PAGE_STORE=FALSE"); + Statement statOld = connOld.createStatement(); + statOld.execute("create table test(id int)"); + connOld.close(); + connOld2.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.data.db")); + + // Upgrade + Connection connOldViaNew = DriverManager.getConnection( + "jdbc:h2:" + getBaseDir() + "/upgrade;ifexists=true"); + Statement statOldViaNew = connOldViaNew.createStatement(); + statOldViaNew.executeQuery("select * from test"); + connOldViaNew.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.h2.db")); + + deleteDb("upgrade"); + } + + private void testCipher() throws Exception { + deleteDb("upgrade"); + + // Create 
old db + Utils.callStaticMethod("org.h2.upgrade.v1_1.Driver.load"); + Connection conn = DriverManager.getConnection("jdbc:h2v1_1:" + + getBaseDir() + "/upgrade;PAGE_STORE=FALSE;" + + "CIPHER=AES", "abc", "abc abc"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + conn.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.data.db")); + + // Connect to old DB with upgrade + conn = DriverManager.getConnection("jdbc:h2:" + + getBaseDir() + "/upgrade;CIPHER=AES", "abc", "abc abc"); + stat = conn.createStatement(); + stat.executeQuery("select * from test"); + conn.close(); + assertTrue(FileUtils.exists(getBaseDir() + "/upgrade.h2.db")); + + deleteDb("upgrade"); + } + + @Override + public void deleteDb(String dbName) { + super.deleteDb(dbName); + try { + Utils.callStaticMethod( + "org.h2.upgrade.v1_1.tools.DeleteDbFiles.execute", + getBaseDir(), dbName, true); + } catch (Exception e) { + throw new RuntimeException(e.getMessage()); + } + FileUtils.delete(getBaseDir() + "/" + + dbName + ".data.db.backup"); + FileUtils.delete(getBaseDir() + "/" + + dbName + ".index.db.backup"); + FileUtils.deleteRecursive(getBaseDir() + "/" + + dbName + ".lobs.db.backup", false); + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/db/TestUsingIndex.java b/modules/h2/src/test/java/org/h2/test/db/TestUsingIndex.java new file mode 100644 index 0000000000000..559d32f1353b5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestUsingIndex.java @@ -0,0 +1,172 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; +import org.h2.value.DataType; + +/** + * Tests the "create index ... using" syntax. 
+ * + * @author Erwan Bocher Atelier SIG, IRSTV FR CNRS 2488 + */ +public class TestUsingIndex extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("using_index"); + testUsingBadSyntax(); + testUsingGoodSyntax(); + testHashIndex(); + testSpatialIndex(); + testBadSpatialSyntax(); + } + + private void testHashIndex() throws SQLException { + conn = getConnection("using_index"); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("create index idx_name on test(id) using hash"); + stat.execute("insert into test select x from system_range(1, 1000)"); + ResultSet rs = stat.executeQuery("select * from test where id=100"); + assertTrue(rs.next()); + assertFalse(rs.next()); + stat.execute("delete from test where id=100"); + rs = stat.executeQuery("select * from test where id=100"); + assertFalse(rs.next()); + rs = stat.executeQuery("select min(id), max(id) from test"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals(1000, rs.getInt(2)); + stat.execute("drop table test"); + conn.close(); + deleteDb("using_index"); + } + + private void testUsingBadSyntax() throws SQLException { + conn = getConnection("using_index"); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + assertFalse(isSupportedSyntax(stat, + "create hash index idx_name_1 on test(id) using hash")); + assertFalse(isSupportedSyntax(stat, + "create hash index idx_name_2 on test(id) using btree")); + assertFalse(isSupportedSyntax(stat, + "create index idx_name_3 on test(id) using hash_tree")); + assertFalse(isSupportedSyntax(stat, + "create unique hash index idx_name_4 on test(id) using hash")); + assertFalse(isSupportedSyntax(stat, + "create index idx_name_5 on test(id) using 
hash table")); + conn.close(); + deleteDb("using_index"); + } + + private void testUsingGoodSyntax() throws SQLException { + conn = getConnection("using_index"); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + assertTrue(isSupportedSyntax(stat, + "create index idx_name_1 on test(id) using hash")); + assertTrue(isSupportedSyntax(stat, + "create index idx_name_2 on test(id) using btree")); + assertTrue(isSupportedSyntax(stat, + "create unique index idx_name_3 on test(id) using hash")); + conn.close(); + deleteDb("using_index"); + } + + /** + * Return if the syntax is supported otherwise false + * + * @param stat the statement + * @param sql the SQL statement + * @return true if the query works, false if it fails + */ + private static boolean isSupportedSyntax(Statement stat, String sql) { + try { + stat.execute(sql); + return true; + } catch (SQLException ex) { + return false; + } + } + + private void testSpatialIndex() throws SQLException { + if (!config.mvStore && config.mvcc) { + return; + } + if (config.memory && config.mvcc) { + return; + } + if (DataType.GEOMETRY_CLASS == null) { + return; + } + deleteDb("spatial"); + conn = getConnection("spatial"); + stat = conn.createStatement(); + stat.execute("create table test" + + "(id int primary key, poly geometry)"); + stat.execute("insert into test values(1, " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + stat.execute("insert into test values(2,null)"); + stat.execute("insert into test values(3, " + + "'POLYGON ((3 1, 3 2, 4 2, 3 1))')"); + stat.execute("insert into test values(4,null)"); + stat.execute("insert into test values(5, " + + "'POLYGON ((1 3, 1 4, 2 4, 1 3))')"); + stat.execute("create index on test(poly) using rtree"); + + ResultSet rs = stat.executeQuery( + "select * from test " + + "where poly && 'POINT (1.5 1.5)'::Geometry"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt("id")); + assertFalse(rs.next()); + rs.close(); + conn.close(); + deleteDb("spatial"); + } + + 
private void testBadSpatialSyntax() throws SQLException { + if (!config.mvStore && config.mvcc) { + return; + } + if (config.memory && config.mvcc) { + return; + } + if (DataType.GEOMETRY_CLASS == null) { + return; + } + deleteDb("spatial"); + conn = getConnection("spatial"); + stat = conn.createStatement(); + stat.execute("create table test" + + "(id int primary key, poly geometry)"); + stat.execute("insert into test values(1, " + + "'POLYGON ((1 1, 1 2, 2 2, 1 1))')"); + assertFalse(isSupportedSyntax(stat, + "create spatial index on test(poly) using rtree")); + conn.close(); + deleteDb("spatial"); + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/db/TestView.java b/modules/h2/src/test/java/org/h2/test/db/TestView.java new file mode 100644 index 0000000000000..8a36672de73a1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestView.java @@ -0,0 +1,385 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.ErrorCode; +import org.h2.engine.Session; +import org.h2.jdbc.JdbcConnection; +import org.h2.test.TestBase; + +/** + * Test for views. + */ +public class TestView extends TestBase { + + private static int x; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("view"); + testSubSubQuery(); + testSubQueryViewIndexCache(); + testInnerSelectWithRownum(); + testInnerSelectWithRange(); + testEmptyColumn(); + testChangeSchemaSearchPath(); + testParameterizedView(); + testCache(); + testCacheFunction(true); + testCacheFunction(false); + testInSelect(); + testUnionReconnect(); + testManyViews(); + testReferenceView(); + testViewAlterAndCommandCache(); + testViewConstraintFromColumnExpression(); + deleteDb("view"); + } + + private void testSubSubQuery() throws SQLException { + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(a int, b int, c int)"); + stat.execute("insert into test values(1, 1, 1)"); + ResultSet rs = stat.executeQuery("select 1 x from (select a, b, c from " + + "(select * from test) bbb where bbb.a >=1 and bbb.a <= 1) sp " + + "where sp.a = 1 and sp.b = 1 and sp.c = 1"); + assertTrue(rs.next()); + conn.close(); + } + + private void testSubQueryViewIndexCache() throws SQLException { + if (config.networked) { + return; + } + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(id int primary key, " + + "name varchar(25) unique, age int unique)"); + + // check that initial cache size is empty + Session s = (Session) ((JdbcConnection) conn).getSession(); + s.clearViewIndexCache(); + assertTrue(s.getViewIndexCache(true).isEmpty()); + assertTrue(s.getViewIndexCache(false).isEmpty()); + + // create view command should not affect caches + stat.execute("create view v as select * from test"); + assertTrue(s.getViewIndexCache(true).isEmpty()); + assertTrue(s.getViewIndexCache(false).isEmpty()); + + // check view index cache + stat.executeQuery("select * from v where id > 
0").next(); + int size1 = s.getViewIndexCache(false).size(); + assertTrue(size1 > 0); + assertTrue(s.getViewIndexCache(true).isEmpty()); + stat.executeQuery("select * from v where name = 'xyz'").next(); + int size2 = s.getViewIndexCache(false).size(); + assertTrue(size2 > size1); + assertTrue(s.getViewIndexCache(true).isEmpty()); + + // check we did not add anything to view cache if we run a sub-query + stat.executeQuery("select * from (select * from test) where age = 17").next(); + int size3 = s.getViewIndexCache(false).size(); + assertEquals(size2, size3); + assertTrue(s.getViewIndexCache(true).isEmpty()); + + // check clear works + s.clearViewIndexCache(); + assertTrue(s.getViewIndexCache(false).isEmpty()); + assertTrue(s.getViewIndexCache(true).isEmpty()); + + // drop everything + stat.execute("drop view v"); + stat.execute("drop table test"); + conn.close(); + } + + private void testInnerSelectWithRownum() throws SQLException { + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("drop table test if exists"); + stat.execute("create table test(id int primary key, name varchar(1))"); + stat.execute("insert into test(id, name) values(1, 'b'), (3, 'a')"); + ResultSet rs = stat.executeQuery( + "select nr from (select row_number() over() as nr, " + + "a.id as id from (select id from test order by name) as a) as b " + + "where b.id = 1;"); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + stat.execute("drop table test"); + conn.close(); + } + + private void testInnerSelectWithRange() throws SQLException { + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery( + "select x from (select x from (" + + "select x from system_range(1, 5)) " + + "where x > 2 and x < 4) where x = 3"); + assertTrue(rs.next()); + assertFalse(rs.next()); + rs = stat.executeQuery( + "select x from (select x from (" + + "select x from 
system_range(1, 5)) " + + "where x = 3) where x > 2 and x < 4"); + assertTrue(rs.next()); + assertFalse(rs.next()); + conn.close(); + } + + private void testEmptyColumn() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("create table test(a int, b int)"); + stat.execute("create view test_view as select a, b from test"); + stat.execute("select * from test_view where a between 1 and 2 and b = 2"); + conn.close(); + } + + private void testChangeSchemaSearchPath() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view;FUNCTIONS_IN_SCHEMA=TRUE"); + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS X AS $$ int x() { return 1; } $$;"); + stat.execute("CREATE SCHEMA S"); + stat.execute("CREATE VIEW S.TEST AS SELECT X() FROM DUAL"); + stat.execute("SET SCHEMA=S"); + stat.execute("SET SCHEMA_SEARCH_PATH=S"); + stat.execute("SELECT * FROM TEST"); + conn.close(); + } + + private void testParameterizedView() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE Test(id INT AUTO_INCREMENT NOT NULL, " + + "f1 VARCHAR NOT NULL, f2 VARCHAR NOT NULL)"); + stat.execute("INSERT INTO Test(f1, f2) VALUES ('value1','value2')"); + stat.execute("INSERT INTO Test(f1, f2) VALUES ('value1','value3')"); + PreparedStatement ps = conn.prepareStatement( + "CREATE VIEW Test_View AS SELECT f2 FROM Test WHERE f1=?"); + ps.setString(1, "value1"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, ps). 
+ executeUpdate(); + // ResultSet rs; + // rs = stat.executeQuery("SELECT * FROM Test_View"); + // assertTrue(rs.next()); + // assertFalse(rs.next()); + // rs = stat.executeQuery("select VIEW_DEFINITION " + + // "from information_schema.views " + + // "where TABLE_NAME='TEST_VIEW'"); + // rs.next(); + // assertEquals("...", rs.getString(1)); + conn.close(); + } + + private void testCacheFunction(boolean deterministic) throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + x = 8; + stat.execute("CREATE ALIAS GET_X " + + (deterministic ? "DETERMINISTIC" : "") + + " FOR \"" + getClass().getName() + ".getX\""); + stat.execute("CREATE VIEW V AS SELECT * FROM (SELECT GET_X())"); + ResultSet rs; + rs = stat.executeQuery("SELECT * FROM V"); + rs.next(); + assertEquals(8, rs.getInt(1)); + x = 5; + rs = stat.executeQuery("SELECT * FROM V"); + rs.next(); + assertEquals(deterministic ? 8 : 5, rs.getInt(1)); + conn.close(); + } + + /** + * This method is called via reflection from the database. 
+ * + * @return the static value x + */ + public static int getX() { + return x; + } + + private void testCache() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("SET @X 8"); + stat.execute("CREATE VIEW V AS SELECT * FROM (SELECT @X)"); + ResultSet rs; + rs = stat.executeQuery("SELECT * FROM V"); + rs.next(); + assertEquals(8, rs.getInt(1)); + stat.execute("SET @X 5"); + rs = stat.executeQuery("SELECT * FROM V"); + rs.next(); + assertEquals(5, rs.getInt(1)); + conn.close(); + } + + private void testInSelect() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key) as select 1"); + PreparedStatement prep = conn.prepareStatement( + "select * from test t where t.id in " + + "(select t2.id from test t2 where t2.id in (?, ?))"); + prep.setInt(1, 1); + prep.setInt(2, 2); + prep.execute(); + conn.close(); + } + + private void testUnionReconnect() throws SQLException { + if (config.memory) { + return; + } + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("create table t1(k smallint, ts timestamp(6))"); + stat.execute("create table t2(k smallint, ts timestamp(6))"); + stat.execute("create table t3(k smallint, ts timestamp(6))"); + stat.execute("create view v_max_ts as select " + + "max(ts) from (select max(ts) as ts from t1 " + + "union select max(ts) as ts from t2 " + + "union select max(ts) as ts from t3)"); + stat.execute("create view v_test as select max(ts) as ts from t1 " + + "union select max(ts) as ts from t2 " + + "union select max(ts) as ts from t3"); + conn.close(); + conn = getConnection("view"); + stat = conn.createStatement(); + stat.execute("select * from v_max_ts"); + conn.close(); + deleteDb("view"); + } + + private void testManyViews() throws SQLException { + 
deleteDb("view"); + Connection conn = getConnection("view"); + Statement s = conn.createStatement(); + s.execute("create table t0(id int primary key)"); + s.execute("insert into t0 values(1), (2), (3)"); + for (int i = 0; i < 30; i++) { + s.execute("create view t" + (i + 1) + " as select * from t" + i); + s.execute("select * from t" + (i + 1)); + ResultSet rs = s.executeQuery( + "select count(*) from t" + (i + 1) + " where id=2"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + } + conn.close(); + conn = getConnection("view"); + conn.close(); + deleteDb("view"); + } + + private void testReferenceView() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement s = conn.createStatement(); + s.execute("create table t0(id int primary key)"); + s.execute("create view t1 as select * from t0"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, s).execute( + "create table t2(id int primary key, " + + "col1 int not null, foreign key (col1) references t1(id))"); + conn.close(); + deleteDb("view"); + } + + /** + * Make sure that when we change a view, that change in reflected in other + * sessions command cache. 
+ */ + private void testViewAlterAndCommandCache() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("create table t0(id int primary key)"); + stat.execute("create table t1(id int primary key)"); + stat.execute("insert into t0 values(0)"); + stat.execute("insert into t1 values(1)"); + stat.execute("create view v1 as select * from t0"); + ResultSet rs = stat.executeQuery("select * from v1"); + assertTrue(rs.next()); + assertEquals(0, rs.getInt(1)); + stat.execute("create or replace view v1 as select * from t1"); + rs = stat.executeQuery("select * from v1"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + conn.close(); + deleteDb("view"); + } + + /** + * Make sure that the table constraint is still available when create a view + * of other table. + */ + private void testViewConstraintFromColumnExpression() throws SQLException { + deleteDb("view"); + Connection conn = getConnection("view"); + Statement stat = conn.createStatement(); + stat.execute("create table t0(id1 int primary key CHECK ((ID1 % 2) = 0))"); + stat.execute("create table t1(id2 int primary key CHECK ((ID2 % 1) = 0))"); + stat.execute("insert into t0 values(0)"); + stat.execute("insert into t1 values(1)"); + stat.execute("create view v1 as select * from t0,t1"); + // Check with ColumnExpression + ResultSet rs = stat.executeQuery( + "select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'V1'"); + assertTrue(rs.next()); + assertEquals("ID1", rs.getString("COLUMN_NAME")); + assertEquals("((ID1 % 2) = 0)", rs.getString("CHECK_CONSTRAINT")); + assertTrue(rs.next()); + assertEquals("ID2", rs.getString("COLUMN_NAME")); + assertEquals("((ID2 % 1) = 0)", rs.getString("CHECK_CONSTRAINT")); + // Check with AliasExpression + stat.execute("create view v2 as select ID1 key1,ID2 key2 from t0,t1"); + rs = stat.executeQuery("select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'V2'"); + 
assertTrue(rs.next()); + assertEquals("KEY1", rs.getString("COLUMN_NAME")); + assertEquals("((KEY1 % 2) = 0)", rs.getString("CHECK_CONSTRAINT")); + assertTrue(rs.next()); + assertEquals("KEY2", rs.getString("COLUMN_NAME")); + assertEquals("((KEY2 % 1) = 0)", rs.getString("CHECK_CONSTRAINT")); + // Check hide of constraint if column is an Operation + stat.execute("create view v3 as select ID1 + 1 ID1, ID2 + 1 ID2 from t0,t1"); + rs = stat.executeQuery("select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'V3'"); + assertTrue(rs.next()); + assertEquals("ID1", rs.getString("COLUMN_NAME")); + assertEquals("", rs.getString("CHECK_CONSTRAINT")); + assertTrue(rs.next()); + assertEquals("ID2", rs.getString("COLUMN_NAME")); + assertEquals("", rs.getString("CHECK_CONSTRAINT")); + conn.close(); + deleteDb("view"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestViewAlterTable.java b/modules/h2/src/test/java/org/h2/test/db/TestViewAlterTable.java new file mode 100644 index 0000000000000..af8108aaeb5a3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestViewAlterTable.java @@ -0,0 +1,199 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; +import org.h2.api.ErrorCode; + +/** + * Test the impact of ALTER TABLE statements on views. + */ +public class TestViewAlterTable extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb(getTestName()); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + + testDropColumnWithoutViews(); + testViewsAreWorking(); + testAlterTableDropColumnNotInView(); + testAlterTableDropColumnInView(); + testAlterTableAddColumnWithView(); + testAlterTableAlterColumnDataTypeWithView(); + testSelectStar(); + testJoinAndAlias(); + testSubSelect(); + testForeignKey(); + + conn.close(); + deleteDb(getTestName()); + } + + private void testDropColumnWithoutViews() throws SQLException { + stat.execute("create table test(a int, b int, c int)"); + stat.execute("alter table test drop column c"); + stat.execute("drop table test"); + } + + private void testViewsAreWorking() throws SQLException { + createTestData(); + checkViewRemainsValid(); + } + + private void testAlterTableDropColumnNotInView() throws SQLException { + createTestData(); + stat.execute("alter table test drop column c"); + checkViewRemainsValid(); + } + + private void testAlterTableDropColumnInView() throws SQLException { + // simple + stat.execute("create table test(id identity, name varchar) " + + "as select x, 'Hello'"); + stat.execute("create view test_view as select * from test"); + assertThrows(ErrorCode.VIEW_IS_INVALID_2, stat). + execute("alter table test drop name"); + ResultSet rs = stat.executeQuery("select * from test_view"); + assertTrue(rs.next()); + stat.execute("drop view test_view"); + stat.execute("drop table test"); + + // nested + createTestData(); + // should throw exception because V1 uses column A + assertThrows(ErrorCode.VIEW_IS_INVALID_2, stat). 
+ execute("alter table test drop column a"); + stat.execute("drop table test cascade"); + } + + private void testAlterTableAddColumnWithView() throws SQLException { + createTestData(); + stat.execute("alter table test add column d int"); + checkViewRemainsValid(); + } + + private void testAlterTableAlterColumnDataTypeWithView() + throws SQLException { + createTestData(); + stat.execute("alter table test alter b char(1)"); + checkViewRemainsValid(); + } + + private void testSelectStar() throws SQLException { + createTestData(); + stat.execute("create view v4 as select * from test"); + stat.execute("alter table test add d int default 6"); + // H2 doesn't remember v4 as 'select * from test', it instead remembers + // each individual column that was in 'test' when the view was + // originally created. This is consistent with PostgreSQL. + assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, stat). + executeQuery("select d from v4"); + checkViewRemainsValid(); + } + + private void testJoinAndAlias() throws SQLException { + createTestData(); + stat.execute("create view v4 as select v1.a dog, v3.a cat " + + "from v1 join v3 on v1.b = v3.a"); + // should make no difference + stat.execute("alter table test add d int default 6"); + ResultSet rs = stat.executeQuery("select cat, dog from v4"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals(2, rs.getInt(2)); + assertFalse(rs.next()); + checkViewRemainsValid(); + } + + private void testSubSelect() throws SQLException { + createTestData(); + stat.execute("create view v4 as select * from v3 " + + "where a in (select b from v2)"); + // should make no difference + stat.execute("alter table test add d int default 6"); + ResultSet rs = stat.executeQuery("select a from v4"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + checkViewRemainsValid(); + } + + private void testForeignKey() throws SQLException { + createTestData(); + stat.execute("create table test2(z int, a int, primary 
key(z), " + + "foreign key (a) references TEST(a))"); + stat.execute("insert into test2(z, a) values (99, 1)"); + // should make no difference + stat.execute("alter table test add d int default 6"); + ResultSet rs = stat.executeQuery("select z from test2"); + assertTrue(rs.next()); + assertEquals(99, rs.getInt(1)); + assertFalse(rs.next()); + stat.execute("drop table test2"); + checkViewRemainsValid(); + } + + private void createTestData() throws SQLException { + stat.execute("create table test(a int, b int, c int)"); + stat.execute("insert into test(a, b, c) values (1, 2, 3)"); + stat.execute("create view v1 as select a as b, b as a from test"); + // child of v1 + stat.execute("create view v2 as select * from v1"); + stat.execute("create user if not exists test_user password 'x'"); + stat.execute("grant select on v2 to test_user"); + // sibling of v1 + stat.execute("create view v3 as select a from test"); + } + + private void checkViewRemainsValid() throws SQLException { + ResultSet rs = stat.executeQuery("select b from v1"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + + rs = stat.executeQuery("select b from v2"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + + rs = stat.executeQuery("select * from information_schema.rights"); + assertTrue(rs.next()); + assertEquals("TEST_USER", rs.getString("GRANTEE")); + assertEquals("V2", rs.getString("TABLE_NAME")); + rs = stat.executeQuery("select b from test"); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + stat.execute("drop table test cascade"); + + rs = conn.getMetaData().getTables(null, null, null, null); + while (rs.next()) { + // should have no tables left in the database + assertEquals(rs.getString(2) + "." 
+ rs.getString(3), + "INFORMATION_SCHEMA", rs.getString(2)); + } + + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/TestViewDropView.java b/modules/h2/src/test/java/org/h2/test/db/TestViewDropView.java new file mode 100644 index 0000000000000..8b5c5d542604d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/TestViewDropView.java @@ -0,0 +1,171 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.db; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Test the impact of DROP VIEW statements on dependent views. + */ +public class TestViewDropView extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb(getTestName()); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + + testDropViewDefaultBehaviour(); + testDropViewRestrict(); + testDropViewCascade(); + testCreateForceView(); + testCreateOrReplaceView(); + testCreateOrReplaceViewWithNowInvalidDependentViews(); + testCreateOrReplaceForceViewWithNowInvalidDependentViews(); + + conn.close(); + deleteDb(getTestName()); + } + + private void testCreateForceView() throws SQLException { + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat). 
+ execute("create view test_view as select * from test"); + stat.execute("create force view test_view as select * from test"); + stat.execute("create table test(id int)"); + stat.execute("alter view test_view recompile"); + stat.execute("select * from test_view"); + stat.execute("drop table test_view, test cascade"); + stat.execute("create force view test_view as select * from test where 1=0"); + stat.execute("create table test(id int)"); + stat.execute("alter view test_view recompile"); + stat.execute("select * from test_view"); + stat.execute("drop table test_view, test cascade"); + } + + private void testDropViewDefaultBehaviour() throws SQLException { + createTestData(); + ResultSet rs = stat.executeQuery("select value " + + "from information_schema.settings where name = 'DROP_RESTRICT'"); + rs.next(); + boolean dropRestrict = rs.getBoolean(1); + if (dropRestrict) { + // should fail because it has dependencies + assertThrows(ErrorCode.CANNOT_DROP_2, stat). + execute("drop view v1"); + } else { + stat.execute("drop view v1"); + checkViewRemainsValid(); + } + } + + private void testDropViewRestrict() throws SQLException { + createTestData(); + // should fail because it has dependencies + assertThrows(ErrorCode.CANNOT_DROP_2, stat). + execute("drop view v1 restrict"); + checkViewRemainsValid(); + } + + private void testDropViewCascade() throws SQLException { + createTestData(); + stat.execute("drop view v1 cascade"); + // v1, v2, v3 should be deleted + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat). + execute("select * from v1"); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat). + execute("select * from v2"); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat).
+ execute("select * from v3"); + stat.execute("drop table test"); + } + + private void testCreateOrReplaceView() throws SQLException { + createTestData(); + + stat.execute("create or replace view v1 as select a as b, b as a, c from test"); + + checkViewRemainsValid(); + } + + private void testCreateOrReplaceViewWithNowInvalidDependentViews() + throws SQLException { + createTestData(); + // the dependent views v2 and v3 need more than just "c", so we should get an error + assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, stat). + execute("create or replace view v1 as select c from test"); + // make sure our old views come back ok + checkViewRemainsValid(); + } + + private void testCreateOrReplaceForceViewWithNowInvalidDependentViews() + throws SQLException { + createTestData(); + + // v2 and v3 need more than just "c", + // but we want to force the creation of v1 anyway + stat.execute("create or replace force view v1 as select c from test"); + // now v2 and v3 are broken, but they still exist + assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, stat).
+ executeQuery("select b from v2"); + stat.execute("drop table test cascade"); + } + + private void createTestData() throws SQLException { + stat.execute("drop all objects"); + stat.execute("create table test(a int, b int, c int)"); + stat.execute("insert into test(a, b, c) values (1, 2, 3)"); + stat.execute("create view v1 as select a as b, b as a from test"); + // child of v1 + stat.execute("create view v2 as select * from v1"); + // child of v2 + stat.execute("create view v3 as select * from v2"); + } + + private void checkViewRemainsValid() throws SQLException { + ResultSet rs = stat.executeQuery("select b from v1"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + + rs = stat.executeQuery("select b from v2"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + + rs = stat.executeQuery("select b from test"); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + stat.execute("drop table test cascade"); + + ResultSet d = conn.getMetaData().getTables(null, null, null, null); + while (d.next()) { + // should have no tables left in the database + assertEquals(d.getString(2) + "." + d.getString(3), + "INFORMATION_SCHEMA", d.getString(2)); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/db/package.html b/modules/h2/src/test/java/org/h2/test/db/package.html new file mode 100644 index 0000000000000..83f531dd37134 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/db/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +
+
+Database tests. Most tests are on the SQL level.
+
+
    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestBatchUpdates.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestBatchUpdates.java new file mode 100644 index 0000000000000..e69e8821a6549 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestBatchUpdates.java @@ -0,0 +1,550 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.BatchUpdateException; +import java.sql.CallableStatement; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Test for batch updates. + */ +public class TestBatchUpdates extends TestBase { + + private static final String COFFEE_UPDATE = + "UPDATE TEST SET PRICE=PRICE*20 WHERE TYPE_ID=?"; + private static final String COFFEE_SELECT = + "SELECT PRICE FROM TEST WHERE KEY_ID=?"; + // private static final String COFFEE_QUERY = + // "SELECT C_NAME,PRICE FROM TEST WHERE TYPE_ID=?"; + // private static final String COFFEE_DELETE = + // "DELETE FROM TEST WHERE KEY_ID=?"; + private static final String COFFEE_INSERT1 = + "INSERT INTO TEST VALUES(9,'COFFEE-9',9.0,5)"; + private static final String COFFEE_DELETE1 = + "DELETE FROM TEST WHERE KEY_ID=9"; + private static final String COFFEE_UPDATE1 = + "UPDATE TEST SET PRICE=PRICE*20 WHERE TYPE_ID=1"; + private static final String COFFEE_SELECT1 = + "SELECT PRICE FROM TEST WHERE KEY_ID>4"; + private static final String COFFEE_UPDATE_SET = + "UPDATE TEST SET KEY_ID=?, C_NAME=? 
WHERE C_NAME=?"; + private static final String COFFEE_SELECT_CONTINUED = + "SELECT COUNT(*) FROM TEST WHERE C_NAME='Continue-1'"; + + private static final int COFFEE_SIZE = 10; + private static final int COFFEE_TYPE = 11; + + private Connection conn; + private Statement stat; + private PreparedStatement prep; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testRootCause(); + testExecuteCall(); + testException(); + testCoffee(); + deleteDb("batchUpdates"); + } + + private void testRootCause() throws SQLException { + deleteDb("batchUpdates"); + conn = getConnection("batchUpdates"); + stat = conn.createStatement(); + stat.addBatch("select * from test_x"); + stat.addBatch("select * from test_y"); + try { + stat.executeBatch(); + } catch (SQLException e) { + assertContains(e.toString(), "TEST_Y"); + e = e.getNextException(); + assertTrue(e != null); + assertContains(e.toString(), "TEST_Y"); + e = e.getNextException(); + assertTrue(e != null); + assertContains(e.toString(), "TEST_X"); + e = e.getNextException(); + assertTrue(e == null); + } + stat.execute("create table test(id int)"); + PreparedStatement prep = conn.prepareStatement("insert into test values(?)"); + prep.setString(1, "TEST_X"); + prep.addBatch(); + prep.setString(1, "TEST_Y"); + prep.addBatch(); + try { + prep.executeBatch(); + } catch (SQLException e) { + assertContains(e.toString(), "TEST_Y"); + e = e.getNextException(); + assertTrue(e != null); + assertContains(e.toString(), "TEST_Y"); + e = e.getNextException(); + assertTrue(e != null); + assertContains(e.toString(), "TEST_X"); + e = e.getNextException(); + assertTrue(e == null); + } + stat.execute("drop table test"); + conn.close(); + } + + private void testExecuteCall() throws SQLException { + deleteDb("batchUpdates"); + conn = getConnection("batchUpdates"); + stat = 
conn.createStatement(); + stat.execute("CREATE ALIAS updatePrices FOR \"" + + getClass().getName() + ".updatePrices\""); + CallableStatement call = conn.prepareCall("{call updatePrices(?, ?)}"); + call.setString(1, "Hello"); + call.setFloat(2, 1.4f); + call.addBatch(); + call.setString(1, "World"); + call.setFloat(2, 3.2f); + call.addBatch(); + int[] updateCounts = call.executeBatch(); + int total = 0; + for (int t : updateCounts) { + total += t; + } + assertEquals(4, total); + conn.close(); + } + + /** + * This method is called by the database. + * + * @param message the message (currently not used) + * @param f the float + * @return the float converted to an int + */ + public static int updatePrices(@SuppressWarnings("unused") String message, double f) { + return (int) f; + } + + private void testException() throws SQLException { + deleteDb("batchUpdates"); + conn = getConnection("batchUpdates"); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key)"); + prep = conn.prepareStatement("insert into test values(?)"); + for (int i = 0; i < 700; i++) { + prep.setString(1, "x"); + prep.addBatch(); + } + try { + prep.executeBatch(); + fail(); + } catch (BatchUpdateException e) { + // expected + } + conn.close(); + } + + private void testCoffee() throws SQLException { + deleteDb("batchUpdates"); + conn = getConnection("batchUpdates"); + stat = conn.createStatement(); + DatabaseMetaData meta = conn.getMetaData(); + assertTrue(meta.supportsBatchUpdates()); + stat.executeUpdate("CREATE TABLE TEST(KEY_ID INT PRIMARY KEY," + + "C_NAME VARCHAR(255),PRICE DECIMAL(20,2),TYPE_ID INT)"); + String newName = null; + float newPrice = 0; + int newType = 0; + prep = conn.prepareStatement("INSERT INTO TEST VALUES(?,?,?,?)"); + int newKey = 1; + for (int i = 1; i <= COFFEE_TYPE && newKey <= COFFEE_SIZE; i++) { + for (int j = 1; j <= i && newKey <= COFFEE_SIZE; j++) { + newName = "COFFEE-" + newKey; + newPrice = newKey + (float) .00; + newType = i; + 
prep.setInt(1, newKey); + prep.setString(2, newName); + prep.setFloat(3, newPrice); + prep.setInt(4, newType); + prep.execute(); + newKey = newKey + 1; + } + } + trace("Inserted the Rows "); + testAddBatch01(); + testAddBatch02(); + testClearBatch01(); + testClearBatch02(); + testExecuteBatch01(); + testExecuteBatch02(); + testExecuteBatch03(); + testExecuteBatch04(); + testExecuteBatch05(); + testExecuteBatch06(); + testExecuteBatch07(); + testContinueBatch01(); + + conn.close(); + } + + private void testAddBatch01() throws SQLException { + trace("testAddBatch01"); + int i = 0; + int[] retValue = { 0, 0, 0 }; + String s = COFFEE_UPDATE; + trace("Prepared Statement String:" + s); + prep = conn.prepareStatement(s); + assertThrows(ErrorCode.PARAMETER_NOT_SET_1, prep).addBatch(); + prep.setInt(1, 2); + prep.addBatch(); + prep.setInt(1, 3); + prep.addBatch(); + prep.setInt(1, 4); + prep.addBatch(); + int[] updateCount = prep.executeBatch(); + int updateCountLen = updateCount.length; + + // PreparedStatement p; + // p = conn.prepareStatement(COFFEE_UPDATE); + // p.setInt(1,2); + // System.out.println("upc="+p.executeUpdate()); + // p.setInt(1,3); + // System.out.println("upc="+p.executeUpdate()); + // p.setInt(1,4); + // System.out.println("upc="+p.executeUpdate()); + + trace("updateCount length:" + updateCountLen); + assertEquals(3, updateCountLen); + String query1 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=2"; + String query2 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=3"; + String query3 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=4"; + ResultSet rs = stat.executeQuery(query1); + rs.next(); + retValue[i++] = rs.getInt(1); + rs = stat.executeQuery(query2); + rs.next(); + retValue[i++] = rs.getInt(1); + rs = stat.executeQuery(query3); + rs.next(); + retValue[i++] = rs.getInt(1); + for (int j = 0; j < updateCount.length; j++) { + trace("UpdateCount:" + updateCount[j]); + assertEquals(updateCount[j], retValue[j]); + } + } + + private void testAddBatch02() throws 
SQLException { + trace("testAddBatch02"); + int i = 0; + int[] retValue = { 0, 0, 0 }; + int updCountLength = 0; + String sUpdCoffee = COFFEE_UPDATE1; + String sDelCoffee = COFFEE_DELETE1; + String sInsCoffee = COFFEE_INSERT1; + stat.addBatch(sUpdCoffee); + stat.addBatch(sDelCoffee); + stat.addBatch(sInsCoffee); + int[] updateCount = stat.executeBatch(); + updCountLength = updateCount.length; + trace("updateCount Length:" + updCountLength); + assertEquals(3, updCountLength); + String query1 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=1"; + ResultSet rs = stat.executeQuery(query1); + rs.next(); + retValue[i++] = rs.getInt(1); + // 1 as delete Statement will delete only one row + retValue[i++] = 1; + // 1 as insert Statement will insert only one row + retValue[i++] = 1; + trace("ReturnValue count : " + retValue.length); + for (int j = 0; j < updateCount.length; j++) { + trace("Update Count:" + updateCount[j]); + trace("Returned Value : " + retValue[j]); + assertEquals("j:" + j, retValue[j], updateCount[j]); + } + } + + private void testClearBatch01() throws SQLException { + trace("testClearBatch01"); + String sPrepStmt = COFFEE_UPDATE; + trace("Prepared Statement String:" + sPrepStmt); + prep = conn.prepareStatement(sPrepStmt); + prep.setInt(1, 2); + prep.addBatch(); + prep.setInt(1, 3); + prep.addBatch(); + prep.setInt(1, 4); + prep.addBatch(); + prep.clearBatch(); + assertEquals(0, prep.executeBatch().length); + } + + private void testClearBatch02() throws SQLException { + trace("testClearBatch02"); + String sUpdCoffee = COFFEE_UPDATE1; + String sInsCoffee = COFFEE_INSERT1; + String sDelCoffee = COFFEE_DELETE1; + stat.addBatch(sUpdCoffee); + stat.addBatch(sDelCoffee); + stat.addBatch(sInsCoffee); + stat.clearBatch(); + assertEquals(0, stat.executeBatch().length); + } + + private void testExecuteBatch01() throws SQLException { + trace("testExecuteBatch01"); + int i = 0; + int[] retValue = { 0, 0, 0 }; + int updCountLength = 0; + String sPrepStmt = COFFEE_UPDATE; + 
trace("Prepared Statement String:" + sPrepStmt); + // get the PreparedStatement object + prep = conn.prepareStatement(sPrepStmt); + prep.setInt(1, 1); + prep.addBatch(); + prep.setInt(1, 2); + prep.addBatch(); + prep.setInt(1, 3); + prep.addBatch(); + int[] updateCount = prep.executeBatch(); + updCountLength = updateCount.length; + trace("Successfully Updated"); + trace("updateCount Length:" + updCountLength); + if (updCountLength != 3) { + fail("executeBatch"); + } else { + trace("executeBatch executes the Batch of SQL statements"); + } + // 1 is the number that is set First for Type Id in Prepared Statement + String query1 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=1"; + // 2 is the number that is set second for Type id in Prepared Statement + String query2 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=2"; + // 3 is the number that is set Third for Type id in Prepared Statement + String query3 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=3"; + ResultSet rs = stat.executeQuery(query1); + rs.next(); + retValue[i++] = rs.getInt(1); + rs = stat.executeQuery(query2); + rs.next(); + retValue[i++] = rs.getInt(1); + rs = stat.executeQuery(query3); + rs.next(); + retValue[i++] = rs.getInt(1); + trace("retValue length : " + retValue.length); + for (int j = 0; j < updateCount.length; j++) { + trace("UpdateCount Value:" + updateCount[j]); + trace("RetValue : " + retValue[j]); + if (updateCount[j] != retValue[j]) { + fail("j=" + j + " right:" + retValue[j]); + } + } + } + + private void testExecuteBatch02() throws SQLException { + trace("testExecuteBatch02"); + String sPrepStmt = COFFEE_UPDATE; + trace("Prepared Statement String:" + sPrepStmt); + prep = conn.prepareStatement(sPrepStmt); + prep.setInt(1, 1); + prep.setInt(1, 2); + prep.setInt(1, 3); + int[] updateCount = prep.executeBatch(); + int updCountLength = updateCount.length; + trace("UpdateCount Length : " + updCountLength); + if (updCountLength == 0) { + trace("executeBatch does not execute Empty Batch"); + } else { + 
fail("executeBatch"); + } + } + + private void testExecuteBatch03() throws SQLException { + trace("testExecuteBatch03"); + boolean batchExceptionFlag = false; + String sPrepStmt = COFFEE_SELECT; + trace("Prepared Statement String :" + sPrepStmt); + prep = conn.prepareStatement(sPrepStmt); + prep.setInt(1, 1); + prep.addBatch(); + try { + int[] updateCount = prep.executeBatch(); + trace("Update Count" + updateCount.length); + } catch (BatchUpdateException b) { + batchExceptionFlag = true; + } + if (batchExceptionFlag) { + trace("select not allowed; correct"); + } else { + fail("executeBatch select"); + } + } + + private void testExecuteBatch04() throws SQLException { + trace("testExecuteBatch04"); + int i = 0; + int[] retValue = { 0, 0, 0 }; + int updCountLength = 0; + String sUpdCoffee = COFFEE_UPDATE1; + String sInsCoffee = COFFEE_INSERT1; + String sDelCoffee = COFFEE_DELETE1; + stat.addBatch(sUpdCoffee); + stat.addBatch(sDelCoffee); + stat.addBatch(sInsCoffee); + int[] updateCount = stat.executeBatch(); + updCountLength = updateCount.length; + trace("Successfully Updated"); + trace("updateCount Length:" + updCountLength); + if (updCountLength != 3) { + fail("executeBatch"); + } else { + trace("executeBatch executes the Batch of SQL statements"); + } + String query1 = "SELECT COUNT(*) FROM TEST WHERE TYPE_ID=1"; + ResultSet rs = stat.executeQuery(query1); + rs.next(); + retValue[i++] = rs.getInt(1); + // 1 as Delete Statement will delete only one row + retValue[i++] = 1; + // 1 as Insert Statement will insert only one row + retValue[i++] = 1; + for (int j = 0; j < updateCount.length; j++) { + trace("Update Count : " + updateCount[j]); + if (updateCount[j] != retValue[j]) { + fail("j=" + j + " right:" + retValue[j]); + } + } + } + + private void testExecuteBatch05() throws SQLException { + trace("testExecuteBatch05"); + int updCountLength = 0; + int[] updateCount = stat.executeBatch(); + updCountLength = updateCount.length; + trace("updateCount Length:" + 
updCountLength); + if (updCountLength == 0) { + trace("executeBatch Method does not execute the Empty Batch "); + } else { + fail("executeBatch 0!=" + updCountLength); + } + } + + private void testExecuteBatch06() throws SQLException { + trace("testExecuteBatch06"); + boolean batchExceptionFlag = false; + // Insert a row which is already Present + String sInsCoffee = COFFEE_INSERT1; + String sDelCoffee = COFFEE_DELETE1; + stat.addBatch(sInsCoffee); + stat.addBatch(sInsCoffee); + stat.addBatch(sDelCoffee); + try { + stat.executeBatch(); + } catch (BatchUpdateException b) { + batchExceptionFlag = true; + for (int uc : b.getUpdateCounts()) { + trace("Update counts:" + uc); + } + } + if (batchExceptionFlag) { + trace("executeBatch insert duplicate; correct"); + } else { + fail("executeBatch"); + } + } + + private void testExecuteBatch07() throws SQLException { + trace("testExecuteBatch07"); + boolean batchExceptionFlag = false; + String selectCoffee = COFFEE_SELECT1; + trace("selectCoffee = " + selectCoffee); + Statement stmt = conn.createStatement(); + stmt.addBatch(selectCoffee); + try { + int[] updateCount = stmt.executeBatch(); + trace("updateCount Length : " + updateCount.length); + } catch (BatchUpdateException be) { + batchExceptionFlag = true; + } + if (batchExceptionFlag) { + trace("executeBatch select"); + } else { + fail("executeBatch"); + } + } + + private void testContinueBatch01() throws SQLException { + trace("testContinueBatch01"); + int[] batchUpdates = { 0, 0, 0 }; + int buCountLen = 0; + try { + String sPrepStmt = COFFEE_UPDATE_SET; + trace("Prepared Statement String:" + sPrepStmt); + prep = conn.prepareStatement(sPrepStmt); + // Now add a legal update to the batch + prep.setInt(1, 1); + prep.setString(2, "Continue-1"); + prep.setString(3, "COFFEE-1"); + prep.addBatch(); + // Now add an illegal update to the batch by + // forcing a unique constraint violation + // Try changing the key_id of row 3 to 1. 
+ prep.setInt(1, 1); + prep.setString(2, "Invalid"); + prep.setString(3, "COFFEE-3"); + prep.addBatch(); + // Now add a second legal update to the batch + // which will be processed ONLY if the driver supports + // continued batch processing according to 6.2.2.3 + // of the J2EE platform spec. + prep.setInt(1, 2); + prep.setString(2, "Continue-2"); + prep.setString(3, "COFFEE-2"); + prep.addBatch(); + // The executeBatch() method will result in a + // BatchUpdateException + prep.executeBatch(); + } catch (BatchUpdateException b) { + trace("expected BatchUpdateException"); + batchUpdates = b.getUpdateCounts(); + buCountLen = batchUpdates.length; + } + if (buCountLen == 1) { + trace("no continued updates - OK"); + return; + } else if (buCountLen == 3) { + trace("Driver supports continued updates."); + // Check to see if the third row from the batch was added + String query = COFFEE_SELECT_CONTINUED; + trace("Query is: " + query); + ResultSet rs = stat.executeQuery(query); + rs.next(); + int count = rs.getInt(1); + rs.close(); + stat.close(); + trace("Count val is: " + count); + // make sure that we have the correct error code for + // the failed update. + if (!(batchUpdates[1] == -3 && count == 1)) { + fail("insert failed"); + } + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestCallableStatement.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestCallableStatement.java new file mode 100644 index 0000000000000..f25db22ff00d9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestCallableStatement.java @@ -0,0 +1,584 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.io.ByteArrayInputStream; +import java.io.Reader; +import java.io.StringReader; +import java.math.BigDecimal; +import java.net.URL; +import java.nio.charset.StandardCharsets; +import java.sql.Array; +import java.sql.CallableStatement; +import java.sql.Connection; +import java.sql.Ref; +import java.sql.ResultSet; +import java.sql.RowId; +import java.sql.SQLException; +import java.sql.SQLXML; +import java.sql.Statement; +import java.sql.Timestamp; +import java.sql.Types; +import java.util.Collections; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.tools.SimpleResultSet; +import org.h2.util.IOUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.LocalDateTimeUtils; +import org.h2.util.Utils; + +/** + * Tests for the CallableStatement class. + */ +public class TestCallableStatement extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("callableStatement"); + Connection conn = getConnection("callableStatement"); + testOutParameter(conn); + testUnsupportedOperations(conn); + testGetters(conn); + testCallWithResultSet(conn); + testPreparedStatement(conn); + testCallWithResult(conn); + testPrepare(conn); + testClassLoader(conn); + testArrayArgument(conn); + testArrayReturnValue(conn); + conn.close(); + deleteDb("callableStatement"); + } + + private void testOutParameter(Connection conn) throws SQLException { + conn.createStatement().execute( + "create table test(id identity) as select null"); + for (int i = 1; i < 20; i++) { + CallableStatement cs = conn.prepareCall("{ ? 
= call IDENTITY()}"); + cs.registerOutParameter(1, Types.BIGINT); + cs.execute(); + long id = cs.getLong(1); + assertEquals(1, id); + cs.close(); + } + conn.createStatement().execute( + "drop table test"); + } + + private void testUnsupportedOperations(Connection conn) throws SQLException { + CallableStatement call; + call = conn.prepareCall("select 10 as a"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getURL(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getObject(1, Collections.<String, Class<?>>emptyMap()); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getRef(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getRowId(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getSQLXML(1); + + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getURL("a"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getObject("a", Collections.<String, Class<?>>emptyMap()); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getRef("a"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getRowId("a"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + getSQLXML("a"); + + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + setURL(1, (URL) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + setRef(1, (Ref) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + setRowId(1, (RowId) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + setSQLXML(1, (SQLXML) null); + + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + setURL("a", (URL) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call). + setRowId("a", (RowId) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, call).
+ setSQLXML("a", (SQLXML) null); + + } + + private void testCallWithResultSet(Connection conn) throws SQLException { + CallableStatement call; + ResultSet rs; + call = conn.prepareCall("select 10 as a"); + call.execute(); + rs = call.getResultSet(); + rs.next(); + assertEquals(10, rs.getInt(1)); + } + + private void testPreparedStatement(Connection conn) throws SQLException { + // using a callable statement like a prepared statement + CallableStatement call; + call = conn.prepareCall("create table test(id int)"); + call.executeUpdate(); + call = conn.prepareCall("insert into test values(1), (2)"); + assertEquals(2, call.executeUpdate()); + call = conn.prepareCall("drop table test"); + call.executeUpdate(); + } + + private void testGetters(Connection conn) throws SQLException { + CallableStatement call; + call = conn.prepareCall("{?=call ?}"); + call.setLong(2, 1); + call.registerOutParameter(1, Types.BIGINT); + call.execute(); + assertEquals(1, call.getLong(1)); + assertEquals(1, call.getByte(1)); + assertEquals(1, ((Long) call.getObject(1)).longValue()); + assertEquals(1, call.getObject(1, Long.class).longValue()); + assertFalse(call.wasNull()); + + call.setFloat(2, 1.1f); + call.registerOutParameter(1, Types.REAL); + call.execute(); + assertEquals(1.1f, call.getFloat(1)); + + call.setDouble(2, Math.PI); + call.registerOutParameter(1, Types.DOUBLE); + call.execute(); + assertEquals(Math.PI, call.getDouble(1)); + + call.setBytes(2, new byte[11]); + call.registerOutParameter(1, Types.BINARY); + call.execute(); + assertEquals(11, call.getBytes(1).length); + assertEquals(11, call.getBlob(1).length()); + + call.setDate(2, java.sql.Date.valueOf("2000-01-01")); + call.registerOutParameter(1, Types.DATE); + call.execute(); + assertEquals("2000-01-01", call.getDate(1).toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2000-01-01", call.getObject(1, + LocalDateTimeUtils.LOCAL_DATE).toString()); + } + + call.setTime(2, 
java.sql.Time.valueOf("01:02:03")); + call.registerOutParameter(1, Types.TIME); + call.execute(); + assertEquals("01:02:03", call.getTime(1).toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("01:02:03", call.getObject(1, + LocalDateTimeUtils.LOCAL_TIME).toString()); + } + + call.setTimestamp(2, java.sql.Timestamp.valueOf( + "2001-02-03 04:05:06.789")); + call.registerOutParameter(1, Types.TIMESTAMP); + call.execute(); + assertEquals("2001-02-03 04:05:06.789", call.getTimestamp(1).toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2001-02-03T04:05:06.789", call.getObject(1, + LocalDateTimeUtils.LOCAL_DATE_TIME).toString()); + } + + call.setBoolean(2, true); + call.registerOutParameter(1, Types.BIT); + call.execute(); + assertEquals(true, call.getBoolean(1)); + + call.setShort(2, (short) 123); + call.registerOutParameter(1, Types.SMALLINT); + call.execute(); + assertEquals(123, call.getShort(1)); + + call.setBigDecimal(2, BigDecimal.TEN); + call.registerOutParameter(1, Types.DECIMAL); + call.execute(); + assertEquals("10", call.getBigDecimal(1).toString()); + } + + private void testCallWithResult(Connection conn) throws SQLException { + CallableStatement call; + for (String s : new String[]{"{?= call abs(?)}", + " { ? = call abs(?)}", " {? 
= call abs(?)}"}) { + call = conn.prepareCall(s); + call.setInt(2, -3); + call.registerOutParameter(1, Types.INTEGER); + call.execute(); + assertEquals(3, call.getInt(1)); + call.executeUpdate(); + assertEquals(3, call.getInt(1)); + } + } + + private void testPrepare(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + CallableStatement call; + ResultSet rs; + stat.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR)"); + call = conn.prepareCall("INSERT INTO TEST VALUES(?, ?)"); + call.setInt(1, 1); + call.setString(2, "Hello"); + call.execute(); + call = conn.prepareCall("SELECT * FROM TEST", + ResultSet.TYPE_FORWARD_ONLY, + ResultSet.CONCUR_READ_ONLY); + rs = call.executeQuery(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + call = conn.prepareCall("SELECT * FROM TEST", + ResultSet.TYPE_FORWARD_ONLY, + ResultSet.CONCUR_READ_ONLY, + ResultSet.HOLD_CURSORS_OVER_COMMIT); + rs = call.executeQuery(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + stat.execute("CREATE ALIAS testCall FOR \"" + + getClass().getName() + ".testCall\""); + call = conn.prepareCall("{CALL testCall(?, ?, ?, ?)}"); + call.setInt("A", 50); + call.setString("B", "abc"); + long t = System.currentTimeMillis(); + call.setTimestamp("C", new Timestamp(t)); + call.setTimestamp("D", Timestamp.valueOf("2001-02-03 10:20:30.0")); + call.registerOutParameter(1, Types.INTEGER); + call.registerOutParameter("B", Types.VARCHAR); + call.executeUpdate(); + try { + call.getTimestamp("C"); + fail("not registered out parameter accessible"); + } catch (SQLException e) { + // expected exception + } + call.registerOutParameter(3, Types.TIMESTAMP); + call.registerOutParameter(4, Types.TIMESTAMP); + call.executeUpdate(); + + assertEquals(t + 1, call.getTimestamp(3).getTime()); + assertEquals(t + 1, call.getTimestamp("C").getTime()); + + assertEquals("2001-02-03 
10:20:30.0", call.getTimestamp(4).toString()); + assertEquals("2001-02-03 10:20:30.0", call.getTimestamp("D").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2001-02-03T10:20:30", call.getObject(4, + LocalDateTimeUtils.LOCAL_DATE_TIME).toString()); + assertEquals("2001-02-03T10:20:30", call.getObject("D", + LocalDateTimeUtils.LOCAL_DATE_TIME).toString()); + } + assertEquals("10:20:30", call.getTime(4).toString()); + assertEquals("10:20:30", call.getTime("D").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("10:20:30", call.getObject(4, + LocalDateTimeUtils.LOCAL_TIME).toString()); + assertEquals("10:20:30", call.getObject("D", + LocalDateTimeUtils.LOCAL_TIME).toString()); + } + assertEquals("2001-02-03", call.getDate(4).toString()); + assertEquals("2001-02-03", call.getDate("D").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2001-02-03", call.getObject(4, + LocalDateTimeUtils.LOCAL_DATE).toString()); + assertEquals("2001-02-03", call.getObject("D", + LocalDateTimeUtils.LOCAL_DATE).toString()); + } + + assertEquals(100, call.getInt(1)); + assertEquals(100, call.getInt("A")); + assertEquals(100, call.getLong(1)); + assertEquals(100, call.getLong("A")); + assertEquals("100", call.getBigDecimal(1).toString()); + assertEquals("100", call.getBigDecimal("A").toString()); + assertEquals(100, call.getFloat(1)); + assertEquals(100, call.getFloat("A")); + assertEquals(100, call.getDouble(1)); + assertEquals(100, call.getDouble("A")); + assertEquals(100, call.getByte(1)); + assertEquals(100, call.getByte("A")); + assertEquals(100, call.getShort(1)); + assertEquals(100, call.getShort("A")); + assertTrue(call.getBoolean(1)); + assertTrue(call.getBoolean("A")); + + assertEquals("ABC", call.getString(2)); + Reader r = call.getCharacterStream(2); + assertEquals("ABC", IOUtils.readStringAndClose(r, -1)); + r = call.getNCharacterStream(2); + assertEquals("ABC", 
IOUtils.readStringAndClose(r, -1)); + assertEquals("ABC", call.getString("B")); + assertEquals("ABC", call.getNString(2)); + assertEquals("ABC", call.getNString("B")); + assertEquals("ABC", call.getClob(2).getSubString(1, 3)); + assertEquals("ABC", call.getClob("B").getSubString(1, 3)); + assertEquals("ABC", call.getNClob(2).getSubString(1, 3)); + assertEquals("ABC", call.getNClob("B").getSubString(1, 3)); + + try { + call.getString(100); + fail("incorrect parameter index value"); + } catch (SQLException e) { + // expected exception + } + try { + call.getString(0); + fail("incorrect parameter index value"); + } catch (SQLException e) { + // expected exception + } + try { + call.getBoolean("X"); + fail("incorrect parameter name value"); + } catch (SQLException e) { + // expected exception + } + + call.setCharacterStream("B", + new StringReader("xyz")); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setCharacterStream("B", + new StringReader("xyz-"), 3); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setCharacterStream("B", + new StringReader("xyz-"), 3L); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setAsciiStream("B", + new ByteArrayInputStream("xyz".getBytes(StandardCharsets.UTF_8))); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setAsciiStream("B", + new ByteArrayInputStream("xyz-".getBytes(StandardCharsets.UTF_8)), 3); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setAsciiStream("B", + new ByteArrayInputStream("xyz-".getBytes(StandardCharsets.UTF_8)), 3L); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + + call.setClob("B", new StringReader("xyz")); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setClob("B", new StringReader("xyz-"), 3); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + + call.setNClob("B", new StringReader("xyz")); + 
call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setNClob("B", new StringReader("xyz-"), 3); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + + call.setString("B", "xyz"); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + call.setNString("B", "xyz"); + call.executeUpdate(); + assertEquals("XYZ", call.getString("B")); + + // test for exceptions after closing + call.close(); + assertThrows(ErrorCode.OBJECT_CLOSED, call). + executeUpdate(); + assertThrows(ErrorCode.OBJECT_CLOSED, call). + registerOutParameter(1, Types.INTEGER); + assertThrows(ErrorCode.OBJECT_CLOSED, call). + getString("X"); + } + + private void testClassLoader(Connection conn) throws SQLException { + Utils.ClassFactory myFactory = new TestClassFactory(); + JdbcUtils.addClassFactory(myFactory); + try { + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS T_CLASSLOADER FOR \"TestClassFactory.testClassF\""); + ResultSet rs = stat.executeQuery("SELECT T_CLASSLOADER(true)"); + assertTrue(rs.next()); + assertEquals(false, rs.getBoolean(1)); + } finally { + JdbcUtils.removeClassFactory(myFactory); + } + } + + private void testArrayArgument(Connection connection) throws SQLException { + Array array = connection.createArrayOf("Int", new Object[] {0, 1, 2}); + try (Statement statement = connection.createStatement()) { + statement.execute("CREATE ALIAS getArrayLength FOR \"" + + getClass().getName() + ".getArrayLength\""); + + // test setArray + try (CallableStatement callableStatement = connection + .prepareCall("{call getArrayLength(?)}")) { + callableStatement.setArray(1, array); + assertTrue(callableStatement.execute()); + + try (ResultSet resultSet = callableStatement.getResultSet()) { + assertTrue(resultSet.next()); + assertEquals(3, resultSet.getInt(1)); + assertFalse(resultSet.next()); + } + } + + // test setObject + try (CallableStatement callableStatement = connection + .prepareCall("{call getArrayLength(?)}")) { + 
callableStatement.setObject(1, array); + assertTrue(callableStatement.execute()); + + try (ResultSet resultSet = callableStatement.getResultSet()) { + assertTrue(resultSet.next()); + assertEquals(3, resultSet.getInt(1)); + assertFalse(resultSet.next()); + } + } + } finally { + array.free(); + } + } + + private void testArrayReturnValue(Connection connection) throws SQLException { + Object[][] arraysToTest = new Object[][] { + new Object[] {0, 1, 2}, + new Object[] {0, "1", 2}, + new Object[] {0, null, 2}, + new Object[] {0, new Object[] {"s", 1}, new Object[] {null, 1L}}, + }; + try (Statement statement = connection.createStatement()) { + statement.execute("CREATE ALIAS arrayIdentity FOR \"" + + getClass().getName() + ".arrayIdentity\""); + + for (Object[] arrayToTest : arraysToTest) { + Array sqlInputArray = connection.createArrayOf("ignored", arrayToTest); + try { + try (CallableStatement callableStatement = connection + .prepareCall("{call arrayIdentity(?)}")) { + callableStatement.setArray(1, sqlInputArray); + assertTrue(callableStatement.execute()); + + try (ResultSet resultSet = callableStatement.getResultSet()) { + assertTrue(resultSet.next()); + + // test getArray() + Array sqlReturnArray = resultSet.getArray(1); + try { + assertEquals( + (Object[]) sqlInputArray.getArray(), + (Object[]) sqlReturnArray.getArray()); + } finally { + sqlReturnArray.free(); + } + + // test getObject(Array.class) + sqlReturnArray = resultSet.getObject(1, Array.class); + try { + assertEquals( + (Object[]) sqlInputArray.getArray(), + (Object[]) sqlReturnArray.getArray()); + } finally { + sqlReturnArray.free(); + } + + assertFalse(resultSet.next()); + } + } + } finally { + sqlInputArray.free(); + } + + } + } + } + + /** + * Class factory unit test. + * @param b boolean value + * @return !b + */ + public static Boolean testClassF(Boolean b) { + return !b; + } + + /** + * This method is called via reflection from the database. + * + * @param array the array + * @return the length of the array + */ + public static int getArrayLength(Object[] array) { + return array == null ? 0 : array.length; + } + + /** + * This method is called via reflection from the database. + * + * @param array the array + * @return the array + */ + public static Object[] arrayIdentity(Object[] array) { + return array; + } + + /** + * This method is called via reflection from the database. + * + * @param conn the connection + * @param a the value a + * @param b the value b + * @param c the value c + * @param d the value d + * @return a result set + */ + public static ResultSet testCall(Connection conn, int a, String b, + Timestamp c, Timestamp d) throws SQLException { + SimpleResultSet rs = new SimpleResultSet(); + rs.addColumn("A", Types.INTEGER, 0, 0); + rs.addColumn("B", Types.VARCHAR, 0, 0); + rs.addColumn("C", Types.TIMESTAMP, 0, 0); + rs.addColumn("D", Types.TIMESTAMP, 0, 0); + if ("jdbc:columnlist:connection".equals(conn.getMetaData().getURL())) { + return rs; + } + rs.addRow(a * 2, b.toUpperCase(), new Timestamp(c.getTime() + 1), d); + return rs; + } + + /** + * A class factory used for testing. + */ + static class TestClassFactory implements Utils.ClassFactory { + + @Override + public boolean match(String name) { + return name.equals("TestClassFactory"); + } + + @Override + public Class loadClass(String name) throws ClassNotFoundException { + return TestCallableStatement.class; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestCancel.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestCancel.java new file mode 100644 index 0000000000000..00e4440585c22 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestCancel.java @@ -0,0 +1,208 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Savepoint; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests Statement.cancel + */ +public class TestCancel extends TestBase { + + private static int lastVisited; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + /** + * This thread cancels a statement after some time. + */ + static class CancelThread extends Thread { + private final Statement cancel; + private final int wait; + private volatile boolean stop; + + CancelThread(Statement cancel, int wait) { + this.cancel = cancel; + this.wait = wait; + } + + /** + * Stop the test now. + */ + public void stopNow() { + this.stop = true; + } + + @Override + public void run() { + while (!stop) { + try { + Thread.sleep(wait); + cancel.cancel(); + Thread.yield(); + } catch (SQLException e) { + // ignore errors on closed statements + } catch (Exception e) { + TestBase.logError("sleep", e); + } + } + } + } + + @Override + public void test() throws Exception { + testQueryTimeoutInTransaction(); + testReset(); + testMaxQueryTimeout(); + testQueryTimeout(); + testJdbcQueryTimeout(); + testCancelStatement(); + deleteDb("cancel"); + } + + private void testReset() throws SQLException { + deleteDb("cancel"); + Connection conn = getConnection("cancel"); + Statement stat = conn.createStatement(); + stat.execute("set query_timeout 1"); + assertThrows(ErrorCode.STATEMENT_WAS_CANCELED, stat). 
+ execute("select count(*) from system_range(1, 1000000), " + + "system_range(1, 1000000)"); + stat.execute("set query_timeout 0"); + stat.execute("select count(*) from system_range(1, 1000), " + + "system_range(1, 1000)"); + conn.close(); + } + + private void testQueryTimeoutInTransaction() throws SQLException { + deleteDb("cancel"); + Connection conn = getConnection("cancel"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT)"); + conn.setAutoCommit(false); + stat.execute("INSERT INTO TEST VALUES(1)"); + Savepoint sp = conn.setSavepoint(); + stat.execute("INSERT INTO TEST VALUES(2)"); + stat.setQueryTimeout(1); + conn.rollback(sp); + conn.commit(); + conn.close(); + } + + private void testJdbcQueryTimeout() throws SQLException { + deleteDb("cancel"); + Connection conn = getConnection("cancel"); + Statement stat = conn.createStatement(); + assertEquals(0, stat.getQueryTimeout()); + stat.setQueryTimeout(1); + assertEquals(1, stat.getQueryTimeout()); + Statement s2 = conn.createStatement(); + assertEquals(1, s2.getQueryTimeout()); + ResultSet rs = s2.executeQuery("SELECT VALUE " + + "FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME = 'QUERY_TIMEOUT'"); + rs.next(); + assertEquals(1000, rs.getInt(1)); + assertThrows(ErrorCode.STATEMENT_WAS_CANCELED, stat). + executeQuery("SELECT MAX(RAND()) " + + "FROM SYSTEM_RANGE(1, 100000000)"); + stat.setQueryTimeout(0); + stat.execute("SET QUERY_TIMEOUT 1100"); + // explicit changes are not detected except, as documented + assertEquals(0, stat.getQueryTimeout()); + conn.close(); + } + + private void testQueryTimeout() throws SQLException { + deleteDb("cancel"); + Connection conn = getConnection("cancel"); + Statement stat = conn.createStatement(); + stat.execute("SET QUERY_TIMEOUT 10"); + assertThrows(ErrorCode.STATEMENT_WAS_CANCELED, stat). 
+ executeQuery("SELECT MAX(RAND()) " + + "FROM SYSTEM_RANGE(1, 100000000)"); + conn.close(); + } + + private void testMaxQueryTimeout() throws SQLException { + deleteDb("cancel"); + Connection conn = getConnection("cancel;MAX_QUERY_TIMEOUT=10"); + Statement stat = conn.createStatement(); + assertThrows(ErrorCode.STATEMENT_WAS_CANCELED, stat). + executeQuery("SELECT MAX(RAND()) " + + "FROM SYSTEM_RANGE(1, 100000000)"); + conn.close(); + } + + /** + * This method is called via reflection from the database. + * + * @param x the value + * @return the value + */ + public static int visit(int x) { + lastVisited = x; + return x; + } + + private void testCancelStatement() throws Exception { + deleteDb("cancel"); + Connection conn = getConnection("cancel"); + Statement stat = conn.createStatement(); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE ALIAS VISIT FOR \"" + getClass().getName() + ".visit\""); + stat.execute("CREATE MEMORY TABLE TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?)"); + trace("insert"); + int len = getSize(10, 1000); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + // prep.setString(2, "Test Value "+i); + prep.setString(2, "hi"); + prep.execute(); + } + trace("inserted"); + // TODO test insert into ... 
select + for (int i = 1;;) { + Statement query = conn.createStatement(); + CancelThread cancel = new CancelThread(query, i); + visit(0); + cancel.start(); + try { + Thread.yield(); + assertThrows(ErrorCode.STATEMENT_WAS_CANCELED, query, + "SELECT VISIT(ID), (SELECT SUM(X) " + + "FROM SYSTEM_RANGE(1, 100000) WHERE X<>ID) FROM TEST ORDER BY ID"); + } finally { + cancel.stopNow(); + cancel.join(); + } + if (lastVisited == 0) { + i += 10; + } else { + break; + } + } + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestConcurrentConnectionUsage.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestConcurrentConnectionUsage.java new file mode 100644 index 0000000000000..10ff8cbbb5d99 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestConcurrentConnectionUsage.java @@ -0,0 +1,58 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.io.ByteArrayInputStream; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Test concurrent usage of the same connection. + */ +public class TestConcurrentConnectionUsage extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testAutoCommit(); + } + + private void testAutoCommit() throws SQLException { + deleteDb(getTestName()); + final Connection conn = getConnection(getTestName()); + final PreparedStatement p1 = conn.prepareStatement("select 1 from dual"); + Task t = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + p1.executeQuery(); + conn.setAutoCommit(true); + conn.setAutoCommit(false); + } + } + }.execute(); + PreparedStatement prep = conn.prepareStatement("select ? from dual"); + for (int i = 0; i < 10; i++) { + prep.setBinaryStream(1, new ByteArrayInputStream(new byte[1024])); + prep.executeQuery(); + } + t.get(); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestConnection.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestConnection.java new file mode 100644 index 0000000000000..3b7aa6777250d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestConnection.java @@ -0,0 +1,137 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import java.sql.Connection; +import java.sql.SQLClientInfoException; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Properties; + + + +/** + * Tests the client info + */ +public class TestConnection extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testSetSupportedClientInfo(); + testSetUnsupportedClientInfo(); + testGetUnsupportedClientInfo(); + testSetSupportedClientInfoProperties(); + testSetUnsupportedClientInfoProperties(); + testSetInternalProperty(); + testSetInternalPropertyToInitialValue(); + testSetGetSchema(); + } + + private void testSetInternalProperty() throws SQLException { + // Use MySQL-mode since this allows all property names + // (apart from h2 internal names). + Connection conn = getConnection("clientInfoMySQL;MODE=MySQL"); + + assertThrows(SQLClientInfoException.class, conn).setClientInfo("numServers", "SomeValue"); + assertThrows(SQLClientInfoException.class, conn).setClientInfo("server23", "SomeValue"); + conn.close(); + } + + /** + * Test that no exception is thrown if the client info of a connection + * managed in a connection pool is reset to its initial values. + * + * This is needed when using H2 in WebSphere Liberty. + */ + private void testSetInternalPropertyToInitialValue() throws SQLException { + // Use MySQL-mode since this allows all property names + // (apart from h2 internal names).
+ Connection conn = getConnection("clientInfoMySQL;MODE=MySQL"); + String numServersPropertyName = "numServers"; + String numServers = conn.getClientInfo(numServersPropertyName); + conn.setClientInfo(numServersPropertyName, numServers); + assertEquals(conn.getClientInfo(numServersPropertyName), numServers); + conn.close(); + } + + private void testSetUnsupportedClientInfoProperties() throws SQLException { + Connection conn = getConnection("clientInfo"); + Properties properties = new Properties(); + properties.put("ClientUser", "someUser"); + assertThrows(SQLClientInfoException.class, conn).setClientInfo(properties); + conn.close(); + } + + private void testSetSupportedClientInfoProperties() throws SQLException { + Connection conn = getConnection("clientInfoDB2;MODE=DB2"); + conn.setClientInfo("ApplicationName", "Connection Test"); + + Properties properties = new Properties(); + properties.put("ClientUser", "someUser"); + conn.setClientInfo(properties); + // old property should have been removed + assertNull(conn.getClientInfo("ApplicationName")); + // new property has been set + assertEquals(conn.getClientInfo("ClientUser"), "someUser"); + conn.close(); + } + + private void testSetSupportedClientInfo() throws SQLException { + Connection conn = getConnection("clientInfoDB2;MODE=DB2"); + conn.setClientInfo("ApplicationName", "Connection Test"); + + assertEquals(conn.getClientInfo("ApplicationName"), "Connection Test"); + conn.close(); + } + + private void testSetUnsupportedClientInfo() throws SQLException { + Connection conn = getConnection("clientInfoDB2;MODE=DB2"); + assertThrows(SQLClientInfoException.class, conn).setClientInfo( + "UnsupportedName", "SomeValue"); + conn.close(); + } + + private void testGetUnsupportedClientInfo() throws SQLException { + Connection conn = getConnection("clientInfo"); + assertNull(conn.getClientInfo("UnknownProperty")); + conn.close(); + } + + private void testSetGetSchema() throws SQLException { + if (config.networked) { + return; 
+ } + deleteDb("schemaSetGet"); + Connection conn = getConnection("schemaSetGet"); + Statement s = conn.createStatement(); + s.executeUpdate("create schema my_test_schema"); + s.executeUpdate("create table my_test_schema.my_test_table(id uuid, name varchar)"); + assertEquals("PUBLIC", conn.getSchema()); + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, s, "select * from my_test_table"); + assertThrows(ErrorCode.SCHEMA_NOT_FOUND_1, conn).setSchema("my_test_table"); + conn.setSchema("MY_TEST_SCHEMA"); + assertEquals("MY_TEST_SCHEMA", conn.getSchema()); + s.executeQuery("select * from my_test_table"); + assertThrows(ErrorCode.SCHEMA_NOT_FOUND_1, conn).setSchema("NON_EXISTING_SCHEMA"); + assertEquals("MY_TEST_SCHEMA", conn.getSchema()); + s.executeUpdate("create schema \"otheR_schEma\""); + conn.setSchema("otheR_schEma"); + assertEquals("otheR_schEma", conn.getSchema()); + s.close(); + conn.close(); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestCustomDataTypesHandler.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestCustomDataTypesHandler.java new file mode 100644 index 0000000000000..e97ffb56b445c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestCustomDataTypesHandler.java @@ -0,0 +1,592 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.io.Serializable; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.text.DecimalFormat; +import java.util.Locale; +import org.h2.api.CustomDataTypesHandler; +import org.h2.api.ErrorCode; +import org.h2.message.DbException; +import org.h2.store.DataHandler; +import org.h2.test.TestBase; +import org.h2.util.JdbcUtils; +import org.h2.util.StringUtils; +import org.h2.value.CompareMode; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueBytes; +import org.h2.value.ValueDouble; +import org.h2.value.ValueJavaObject; +import org.h2.value.ValueString; + +/** + * Tests {@link CustomDataTypesHandler}. + */ +public class TestCustomDataTypesHandler extends TestBase { + + /** + * The database name. + */ + public final static String DB_NAME = "customDataTypes"; + + /** + * The system property name. + */ + public final static String HANDLER_NAME_PROPERTY = "h2.customDataTypesHandler"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + System.setProperty(HANDLER_NAME_PROPERTY, TestOnlyCustomDataTypesHandler.class.getName()); + TestBase test = createCaller().init(); + test.config.traceTest = true; + test.config.memory = true; + test.config.networked = true; + test.config.beforeTest(); + test.test(); + test.config.afterTest(); + System.clearProperty(HANDLER_NAME_PROPERTY); + } + + @Override + public void test() throws Exception { + try { + JdbcUtils.customDataTypesHandler = new TestOnlyCustomDataTypesHandler(); + + deleteDb(DB_NAME); + Connection conn = getConnection(DB_NAME); + + Statement stat = conn.createStatement(); + + //Test cast + ResultSet rs = stat.executeQuery("select CAST('1-1i' AS complex) + '1+1i' "); + rs.next(); + assertTrue(rs.getObject(1).equals(new ComplexNumber(2, 0))); + + //Test create table + stat.execute("create table t(id int, val complex)"); + rs = conn.getMetaData().getColumns(null, null, "T", "VAL"); + rs.next(); + assertEquals(rs.getString("TYPE_NAME"), "complex"); + assertEquals(rs.getInt("DATA_TYPE"), Types.JAVA_OBJECT); + + rs = stat.executeQuery("select val from t"); + assertEquals(ComplexNumber.class.getName(), rs.getMetaData().getColumnClassName(1)); + + //Test insert + PreparedStatement stmt = conn.prepareStatement( + "insert into t(id, val) values (0, '1.0+1.0i'), (1, ?), (2, ?), (3, ?)"); + stmt.setObject(1, new ComplexNumber(1, -1)); + stmt.setObject(2, "5.0+2.0i"); + stmt.setObject(3, 100.1); + stmt.executeUpdate(); + + //Test selects + ComplexNumber[] expected = new ComplexNumber[4]; + expected[0] = new ComplexNumber(1, 1); + expected[1] = new ComplexNumber(1, -1); + expected[2] = new ComplexNumber(5, 2); + expected[3] = new ComplexNumber(100.1, 0); + + for (int id = 0; id < expected.length; ++id) { + PreparedStatement prepStat =conn.prepareStatement( + "select val from t where id = ?"); + prepStat.setInt(1, id); + rs = prepStat.executeQuery(); + assertTrue(rs.next()); + assertTrue(rs.getObject(1).equals(expected[id])); + } + + for 
(int id = 0; id < expected.length; ++id) { + PreparedStatement prepStat = conn.prepareStatement( + "select id from t where val = ?"); + prepStat.setObject(1, expected[id]); + rs = prepStat.executeQuery(); + assertTrue(rs.next()); + assertEquals(rs.getInt(1), id); + } + + // Repeat selects with index + stat.execute("create index val_idx on t(val)"); + + for (int id = 0; id < expected.length; ++id) { + PreparedStatement prepStat = conn.prepareStatement( + "select id from t where val = ?"); + prepStat.setObject(1, expected[id]); + rs = prepStat.executeQuery(); + assertTrue(rs.next()); + assertEquals(rs.getInt(1), id); + } + + // sum function + rs = stat.executeQuery("select sum(val) from t"); + rs.next(); + assertTrue(rs.getObject(1).equals(new ComplexNumber(107.1, 2))); + + // user function + stat.execute("create alias complex_mod for \"" + + getClass().getName() + ".complexMod\""); + rs = stat.executeQuery("select complex_mod(val) from t where id=2"); + rs.next(); + assertEquals(complexMod(expected[2]), rs.getDouble(1)); + + conn.close(); + deleteDb(DB_NAME); + } finally { + JdbcUtils.customDataTypesHandler = null; + } + } + + /** + * The modulus function. + * + * @param val complex number + * @return result + */ + public static double complexMod(ComplexNumber val) { + return val.mod(); + } + + /** + * The custom data types handler to use for this test. 
+ */ + public static class TestOnlyCustomDataTypesHandler implements CustomDataTypesHandler { + + /** Type name for complex number */ + public final static String COMPLEX_DATA_TYPE_NAME = "complex"; + + /** Type id for complex number */ + public final static int COMPLEX_DATA_TYPE_ID = 1000; + + /** Order for complex number data type */ + public final static int COMPLEX_DATA_TYPE_ORDER = 100_000; + + /** Cached DataType instance for complex number */ + public final DataType complexDataType; + + /** */ + public TestOnlyCustomDataTypesHandler() { + complexDataType = createComplex(); + } + + @Override + public DataType getDataTypeByName(String name) { + if (name.toLowerCase(Locale.ENGLISH).equals(COMPLEX_DATA_TYPE_NAME)) { + return complexDataType; + } + + return null; + } + + @Override + public DataType getDataTypeById(int type) { + if (type == COMPLEX_DATA_TYPE_ID) { + return complexDataType; + } + return null; + } + + @Override + public String getDataTypeClassName(int type) { + if (type == COMPLEX_DATA_TYPE_ID) { + return ComplexNumber.class.getName(); + } + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, "type:" + type); + } + + @Override + public int getTypeIdFromClass(Class cls) { + if (cls == ComplexNumber.class) { + return COMPLEX_DATA_TYPE_ID; + } + return Value.JAVA_OBJECT; + } + + @Override + public Value convert(Value source, int targetType) { + if (source.getType() == targetType) { + return source; + } + if (targetType == COMPLEX_DATA_TYPE_ID) { + switch (source.getType()) { + case Value.JAVA_OBJECT: { + assert source instanceof ValueJavaObject; + return ValueComplex.get((ComplexNumber) + JdbcUtils.deserialize(source.getBytesNoCopy(), null)); + } + case Value.STRING: { + assert source instanceof ValueString; + return ValueComplex.get( + ComplexNumber.parseComplexNumber(source.getString())); + } + case Value.BYTES: { + assert source instanceof ValueBytes; + return ValueComplex.get((ComplexNumber) + JdbcUtils.deserialize(source.getBytesNoCopy(), 
null)); + } + case Value.DOUBLE: { + assert source instanceof ValueDouble; + return ValueComplex.get(new ComplexNumber(source.getDouble(), 0)); + } + } + + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, source.getString()); + } else { + return source.convertTo(targetType); + } + } + + @Override + public int getDataTypeOrder(int type) { + if (type == COMPLEX_DATA_TYPE_ID) { + return COMPLEX_DATA_TYPE_ORDER; + } + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, "type:" + type); + } + + @Override + public Value getValue(int type, Object data, DataHandler dataHandler) { + if (type == COMPLEX_DATA_TYPE_ID) { + assert data instanceof ComplexNumber; + return ValueComplex.get((ComplexNumber)data); + } + return ValueJavaObject.getNoCopy(data, null, dataHandler); + } + + @Override + public Object getObject(Value value, Class cls) { + if (cls.equals(ComplexNumber.class)) { + if (value.getType() == COMPLEX_DATA_TYPE_ID) { + return value.getObject(); + } + return convert(value, COMPLEX_DATA_TYPE_ID).getObject(); + } + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, "type:" + value.getType()); + } + + @Override + public boolean supportsAdd(int type) { + if (type == COMPLEX_DATA_TYPE_ID) { + return true; + } + return false; + } + + @Override + public int getAddProofType(int type) { + if (type == COMPLEX_DATA_TYPE_ID) { + return type; + } + throw DbException.get( + ErrorCode.UNKNOWN_DATA_TYPE_1, "type:" + type); + } + + /** Constructs data type instance for complex number type */ + private static DataType createComplex() { + DataType result = new DataType(); + result.type = COMPLEX_DATA_TYPE_ID; + result.name = COMPLEX_DATA_TYPE_NAME; + result.sqlType = Types.JAVA_OBJECT; + return result; + } + } + + /** + * Value type implementation that holds the complex number + */ + public static class ValueComplex extends Value { + + private ComplexNumber val; + + /** + * @param val complex number + */ + public ValueComplex(ComplexNumber val) { + assert val != 
null; + this.val = val; + } + + /** + * Get ValueComplex instance for given ComplexNumber. + * + * @param val complex number + * @return resulting instance + */ + public static ValueComplex get(ComplexNumber val) { + return new ValueComplex(val); + } + + @Override + public String getSQL() { + return val.toString(); + } + + @Override + public int getType() { + return TestOnlyCustomDataTypesHandler.COMPLEX_DATA_TYPE_ID; + } + + @Override + public long getPrecision() { + return 0; + } + + @Override + public int getDisplaySize() { + return 0; + } + + @Override + public String getString() { + return val.toString(); + } + + @Override + public Object getObject() { + return val; + } + + @Override + public void set(PreparedStatement prep, int parameterIndex) throws SQLException { + Object obj = JdbcUtils.deserialize(getBytesNoCopy(), getDataHandler()); + prep.setObject(parameterIndex, obj, Types.JAVA_OBJECT); + } + + @Override + protected int compareSecure(Value v, CompareMode mode) { + return val.compare((ComplexNumber) v.getObject()); + } + + @Override + public int hashCode() { + return val.hashCode(); + } + + @Override + public boolean equals(Object other) { + if (other == null) { + return false; + } + if (!(other instanceof ValueComplex)) { + return false; + } + ValueComplex complex = (ValueComplex)other; + return complex.val.equals(val); + } + + @Override + public Value convertTo(int targetType) { + if (getType() == targetType) { + return this; + } + switch (targetType) { + case Value.BYTES: { + return ValueBytes.getNoCopy(JdbcUtils.serialize(val, null)); + } + case Value.STRING: { + return ValueString.get(val.toString()); + } + case Value.DOUBLE: { + assert val.im == 0; + return ValueDouble.get(val.re); + } + case Value.JAVA_OBJECT: { + return ValueJavaObject.getNoCopy(JdbcUtils.serialize(val, null)); + } + } + + throw DbException.get( + ErrorCode.DATA_CONVERSION_ERROR_1, getString()); + } + + @Override + public Value add(Value value) { + ValueComplex v = 
(ValueComplex)value;
+            return ValueComplex.get(val.add(v.val));
+        }
+    }
+
+    /**
+     * Complex number.
+     */
+    public static class ComplexNumber implements Serializable {
+        /** */
+        private static final long serialVersionUID = 1L;
+
+        /** */
+        public final static DecimalFormat REAL_FMT = new DecimalFormat("###.###");
+
+        /** */
+        public final static DecimalFormat IMG_FMT = new DecimalFormat("+###.###i;-###.###i");
+
+        /**
+         * Real part
+         */
+        double re;
+
+        /**
+         * Imaginary part
+         */
+        double im;
+
+        /**
+         * @param re real part
+         * @param im imaginary part
+         */
+        public ComplexNumber(double re, double im) {
+            this.re = re;
+            this.im = im;
+        }
+
+        /**
+         * Addition
+         * @param other value to add
+         * @return result
+         */
+        public ComplexNumber add(ComplexNumber other) {
+            return new ComplexNumber(re + other.re, im + other.im);
+        }
+
+        /**
+         * Returns the modulus
+         * @return result
+         */
+        public double mod() {
+            return Math.sqrt(re * re + im * im);
+        }
+
+        /**
+         * Compares two complex numbers
+         *
+         * Complex numbers have no natural total ordering,
+         * so we apply lexicographic order instead.
+         *
+         * @param v number to compare this with
+         * @return result of comparison
+         */
+        public int compare(ComplexNumber v) {
+            if (re == v.re && im == v.im) {
+                return 0;
+            }
+            if (re == v.re) {
+                return im > v.im ? 1 : -1;
+            } else if (re > v.re) {
+                return 1;
+            } else {
+                return -1;
+            }
+        }
+
+        @Override
+        public int hashCode() {
+            return (int)re | (int)im;
+        }
+
+        @Override
+        public boolean equals(Object other) {
+            if (other == null) {
+                return false;
+            }
+            if (!(other instanceof ComplexNumber)) {
+                return false;
+            }
+            ComplexNumber complex = (ComplexNumber)other;
+            return (re == complex.re) && (im == complex.im);
+        }
+
+        @Override
+        public String toString() {
+            if (im == 0.0) {
+                return REAL_FMT.format(re);
+            }
+            if (re == 0.0) {
+                return IMG_FMT.format(im);
+            }
+            return REAL_FMT.format(re) + IMG_FMT.format(im);
+        }
+
+        /**
+         * Simple parser for complex numbers. Both the real and imaginary components
+         * must be written in non-scientific notation.
+         * @param s String.
+         * @return {@link ComplexNumber} object.
+         */
+        public static ComplexNumber parseComplexNumber(String s) {
+            if (StringUtils.isNullOrEmpty(s))
+                return null;
+
+            s = s.replaceAll("\\s", "");
+
+            boolean hasIm = (s.charAt(s.length() - 1) == 'i');
+            int signs = 0;
+
+            int pos = 0;
+
+            int maxSignPos = -1;
+
+            while (pos != -1) {
+                pos = s.indexOf('-', pos);
+                if (pos != -1) {
+                    signs++;
+                    maxSignPos = Math.max(maxSignPos, pos++);
+                }
+            }
+            pos = 0;
+
+            while (pos != -1) {
+                pos = s.indexOf('+', pos);
+                if (pos != -1) {
+                    signs++;
+                    maxSignPos = Math.max(maxSignPos, pos++);
+                }
+            }
+
+            if (signs > 2 || (signs == 2 && !hasIm))
+                throw new NumberFormatException();
+            double real;
+            double im;
+
+            if (signs == 0 || (signs == 1 && maxSignPos == 0)) {
+                if (hasIm) {
+                    real = 0;
+                    if (signs == 0 && s.length() == 1) {
+                        im = 1.0;
+                    } else if (signs > 0 && s.length() == 2) {
+                        im = (s.charAt(0) == '-') ? -1.0 : 1.0;
+                    } else {
+                        im = Double.parseDouble(s.substring(0, s.length() - 1));
+                    }
+                } else {
+                    real = Double.parseDouble(s);
+                    im = 0;
+                }
+            } else {
+                real = Double.parseDouble(s.substring(0, maxSignPos));
+                if (s.length() - maxSignPos == 2) {
+                    im = (s.charAt(maxSignPos) == '-') ? -1.0 : 1.0;
+                } else {
+                    im = Double.parseDouble(s.substring(maxSignPos, s.length() - 1));
+                }
+            }
+
+            return new ComplexNumber(real, im);
+        }
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestDatabaseEventListener.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestDatabaseEventListener.java
new file mode 100644
index 0000000000000..f13798e97fead
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestDatabaseEventListener.java
@@ -0,0 +1,302 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Properties; + +import org.h2.Driver; +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests the DatabaseEventListener interface. + */ +public class TestDatabaseEventListener extends TestBase { + + /** + * A flag to mark that the given method was called. + */ + static boolean calledOpened, calledClosingDatabase, calledScan, + calledCreateIndex, calledStatementStart, calledStatementEnd, + calledStatementProgress; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testInit(); + testIndexRebuiltOnce(); + testIndexNotRebuilt(); + testCalled(); + testCloseLog0(false); + testCloseLog0(true); + testCalledForStatement(); + deleteDb("databaseEventListener"); + } + + /** + * Initialize the database after opening. 
+ */ + public static class Init implements DatabaseEventListener { + + private String databaseUrl; + + @Override + public void init(String url) { + databaseUrl = url; + } + + @Override + public void opened() { + try { + // using DriverManager.getConnection could result in a deadlock + // when using the server mode, but within the same process + Properties prop = new Properties(); + prop.setProperty("user", "sa"); + prop.setProperty("password", "sa"); + Connection conn = Driver.load().connect(databaseUrl, prop); + Statement stat = conn.createStatement(); + stat.execute("create table if not exists test(id int)"); + conn.close(); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public void closingDatabase() { + // nothing to do + } + + @Override + public void exceptionThrown(SQLException e, String sql) { + // nothing to do + } + + @Override + public void setProgress(int state, String name, int x, int max) { + // nothing to do + } + + } + + private void testInit() throws SQLException { + if (config.cipher != null || config.memory) { + return; + } + deleteDb("databaseEventListener"); + String url = getURL("databaseEventListener", true); + url += ";DATABASE_EVENT_LISTENER='" + Init.class.getName() + "'"; + Connection conn = DriverManager.getConnection(url, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("select * from test"); + conn.close(); + } + + private void testIndexRebuiltOnce() throws SQLException { + if (config.memory) { + return; + } + deleteDb("databaseEventListener"); + String url = getURL("databaseEventListener", true); + String user = getUser(), password = getPassword(); + Properties p = new Properties(); + p.setProperty("user", user); + p.setProperty("password", password); + Connection conn; + Statement stat; + conn = DriverManager.getConnection(url, p); + stat = conn.createStatement(); + // the old.id index head is at position 0 + stat.execute("create table old(id identity) as select 1"); + // the 
test.id index head is at position 1 + stat.execute("create table test(id identity) as select 1"); + conn.close(); + conn = DriverManager.getConnection(url, p); + stat = conn.createStatement(); + // free up space at position 0 + stat.execute("drop table old"); + stat.execute("insert into test values(2)"); + stat.execute("checkpoint sync"); + stat.execute("shutdown immediately"); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn).close(); + // now the index should be re-built + conn = DriverManager.getConnection(url, p); + conn.close(); + calledCreateIndex = false; + p.put("DATABASE_EVENT_LISTENER", + MyDatabaseEventListener.class.getName()); + conn = org.h2.Driver.load().connect(url, p); + conn.close(); + assertTrue(!calledCreateIndex); + } + + private void testIndexNotRebuilt() throws SQLException { + if (config.memory) { + return; + } + deleteDb("databaseEventListener"); + String url = getURL("databaseEventListener", true); + String user = getUser(), password = getPassword(); + Properties p = new Properties(); + p.setProperty("user", user); + p.setProperty("password", password); + Connection conn = DriverManager.getConnection(url, p); + Statement stat = conn.createStatement(); + // the old.id index head is at position 0 + stat.execute("create table old(id identity) as select 1"); + // the test.id index head is at position 1 + stat.execute("create table test(id identity) as select 1"); + conn.close(); + conn = DriverManager.getConnection(url, p); + stat = conn.createStatement(); + // free up space at position 0 + stat.execute("drop table old"); + // truncate, relocating to position 0 + stat.execute("truncate table test"); + stat.execute("insert into test select 1"); + conn.close(); + calledCreateIndex = false; + p.put("DATABASE_EVENT_LISTENER", + MyDatabaseEventListener.class.getName()); + conn = org.h2.Driver.load().connect(url, p); + conn.close(); + assertTrue(!calledCreateIndex); + } + + private void testCloseLog0(boolean shutdown) throws SQLException { + if 
(config.memory) { + return; + } + deleteDb("databaseEventListener"); + String url = getURL("databaseEventListener", true); + String user = getUser(), password = getPassword(); + Properties p = new Properties(); + p.setProperty("user", user); + p.setProperty("password", password); + Connection conn = DriverManager.getConnection(url, p); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test select x, space(1000) " + + "from system_range(1,1000)"); + if (shutdown) { + stat.execute("shutdown"); + } + conn.close(); + + calledOpened = false; + calledScan = false; + p.put("DATABASE_EVENT_LISTENER", MyDatabaseEventListener.class.getName()); + conn = org.h2.Driver.load().connect(url, p); + conn.close(); + if (calledOpened) { + assertTrue(!calledScan); + } + } + + private void testCalled() throws SQLException { + Properties p = new Properties(); + p.setProperty("user", "sa"); + p.setProperty("password", "sa"); + calledOpened = false; + calledClosingDatabase = false; + p.put("DATABASE_EVENT_LISTENER", MyDatabaseEventListener.class.getName()); + org.h2.Driver.load(); + String url = "jdbc:h2:mem:databaseEventListener"; + Connection conn = org.h2.Driver.load().connect(url, p); + conn.close(); + assertTrue(calledOpened); + assertTrue(calledClosingDatabase); + } + + private void testCalledForStatement() throws SQLException { + Properties p = new Properties(); + p.setProperty("user", "sa"); + p.setProperty("password", "sa"); + calledStatementStart = false; + calledStatementEnd = false; + calledStatementProgress = false; + p.put("DATABASE_EVENT_LISTENER", MyDatabaseEventListener.class.getName()); + org.h2.Driver.load(); + String url = "jdbc:h2:mem:databaseEventListener"; + Connection conn = org.h2.Driver.load().connect(url, p); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("select * from test"); + 
conn.close(); + assertTrue(calledStatementStart); + assertTrue(calledStatementEnd); + assertTrue(calledStatementProgress); + } + + /** + * The database event listener for this test. + */ + public static final class MyDatabaseEventListener implements + DatabaseEventListener { + + @Override + public void closingDatabase() { + calledClosingDatabase = true; + } + + @Override + public void exceptionThrown(SQLException e, String sql) { + // nothing to do + } + + @Override + public void init(String url) { + // nothing to do + } + + @Override + public void opened() { + calledOpened = true; + } + + @Override + public void setProgress(int state, String name, int x, int max) { + if (state == DatabaseEventListener.STATE_SCAN_FILE) { + calledScan = true; + } + if (state == DatabaseEventListener.STATE_CREATE_INDEX) { + if (!name.startsWith("SYS:")) { + calledCreateIndex = true; + } + } + if (state == STATE_STATEMENT_START) { + if (name.equals("select * from test")) { + calledStatementStart = true; + } + } + if (state == STATE_STATEMENT_END) { + if (name.equals("select * from test")) { + calledStatementEnd = true; + } + } + if (state == STATE_STATEMENT_PROGRESS) { + if (name.equals("select * from test")) { + calledStatementProgress = true; + } + } + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestDriver.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestDriver.java new file mode 100644 index 0000000000000..a24856cae47d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestDriver.java @@ -0,0 +1,67 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.Properties; + +import org.h2.Driver; +import org.h2.test.TestBase; + +/** + * Tests the database driver. 
+ */ +public class TestDriver extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testSettingsAsProperties(); + testDriverObject(); + } + + private void testSettingsAsProperties() throws Exception { + Properties prop = new Properties(); + prop.put("user", getUser()); + prop.put("password", getPassword()); + prop.put("max_compact_time", "1234"); + prop.put("unknown", "1234"); + String url = getURL("driver", true); + Connection conn = DriverManager.getConnection(url, prop); + ResultSet rs; + rs = conn.createStatement().executeQuery( + "select * from information_schema.settings where name='MAX_COMPACT_TIME'"); + rs.next(); + assertEquals(1234, rs.getInt(2)); + conn.close(); + } + + private void testDriverObject() throws Exception { + Driver instance = Driver.load(); + assertTrue(DriverManager.getDriver("jdbc:h2:~/test") == instance); + Driver.unload(); + try { + java.sql.Driver d = DriverManager.getDriver("jdbc:h2:~/test"); + fail(d.toString()); + } catch (SQLException e) { + // ignore + } + Driver.load(); + assertTrue(DriverManager.getDriver("jdbc:h2:~/test") == instance); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestGetGeneratedKeys.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestGetGeneratedKeys.java new file mode 100644 index 0000000000000..33a93d46b86de --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestGetGeneratedKeys.java @@ -0,0 +1,1794 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.UUID; + +import org.h2.api.Trigger; +import org.h2.jdbc.JdbcPreparedStatement; +import org.h2.jdbc.JdbcStatement; +import org.h2.test.TestBase; + +/** + * Tests for the {@link Statement#getGeneratedKeys()}. + */ +public class TestGetGeneratedKeys extends TestBase { + + public static class TestGetGeneratedKeysTrigger implements Trigger { + + @Override + public void close() throws SQLException { + } + + @Override + public void fire(Connection conn, Object[] oldRow, Object[] newRow) throws SQLException { + if (newRow[0] == null) { + newRow[0] = UUID.randomUUID(); + } + } + + @Override + public void init(Connection conn, String schemaName, String triggerName, String tableName, boolean before, + int type) throws SQLException { + } + + @Override + public void remove() throws SQLException { + } + + } + + /** + * Run just this test. + * + * @param a + * ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("getGeneratedKeys"); + Connection conn = getConnection("getGeneratedKeys"); + testBatchAndMergeInto(conn); + testCalledSequences(conn); + testInsertWithSelect(conn); + testMergeUsing(conn); + testMultithreaded(conn); + testNameCase(conn); + + testPrepareStatement_Execute(conn); + testPrepareStatement_ExecuteBatch(conn); + testPrepareStatement_ExecuteLargeBatch(conn); + testPrepareStatement_ExecuteLargeUpdate(conn); + testPrepareStatement_ExecuteUpdate(conn); + testPrepareStatement_int_Execute(conn); + testPrepareStatement_int_ExecuteBatch(conn); + testPrepareStatement_int_ExecuteLargeBatch(conn); + testPrepareStatement_int_ExecuteLargeUpdate(conn); + testPrepareStatement_int_ExecuteUpdate(conn); + testPrepareStatement_intArray_Execute(conn); + testPrepareStatement_intArray_ExecuteBatch(conn); + testPrepareStatement_intArray_ExecuteLargeBatch(conn); + testPrepareStatement_intArray_ExecuteLargeUpdate(conn); + testPrepareStatement_intArray_ExecuteUpdate(conn); + testPrepareStatement_StringArray_Execute(conn); + testPrepareStatement_StringArray_ExecuteBatch(conn); + testPrepareStatement_StringArray_ExecuteLargeBatch(conn); + testPrepareStatement_StringArray_ExecuteLargeUpdate(conn); + testPrepareStatement_StringArray_ExecuteUpdate(conn); + + testStatementExecute(conn); + testStatementExecute_int(conn); + testStatementExecute_intArray(conn); + testStatementExecute_StringArray(conn); + testStatementExecuteLargeUpdate(conn); + testStatementExecuteLargeUpdate_int(conn); + testStatementExecuteLargeUpdate_intArray(conn); + testStatementExecuteLargeUpdate_StringArray(conn); + testStatementExecuteUpdate(conn); + testStatementExecuteUpdate_int(conn); + testStatementExecuteUpdate_intArray(conn); + testStatementExecuteUpdate_StringArray(conn); + + testTrigger(conn); + conn.close(); + deleteDb("getGeneratedKeys"); + } + + /** + * Test for batch 
updates and MERGE INTO operator.
+     *
+     * @param conn
+     *            connection
+     * @throws Exception
+     *             on exception
+     */
+    private void testBatchAndMergeInto(Connection conn) throws Exception {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID BIGINT AUTO_INCREMENT, UID UUID DEFAULT RANDOM_UUID(), VALUE INT)");
+        PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (?), (?)",
+                Statement.RETURN_GENERATED_KEYS);
+        prep.setInt(1, 1);
+        prep.setInt(2, 2);
+        prep.addBatch();
+        prep.setInt(1, 3);
+        prep.setInt(2, 4);
+        prep.addBatch();
+        prep.executeBatch();
+        ResultSet rs = prep.getGeneratedKeys();
+        rs.next();
+        assertEquals(1L, rs.getLong(1));
+        UUID u1 = (UUID) rs.getObject(2);
+        assertTrue(u1 != null);
+        rs.next();
+        assertEquals(2L, rs.getLong(1));
+        UUID u2 = (UUID) rs.getObject(2);
+        assertTrue(u2 != null);
+        rs.next();
+        assertEquals(3L, rs.getLong(1));
+        UUID u3 = (UUID) rs.getObject(2);
+        assertTrue(u3 != null);
+        rs.next();
+        assertEquals(4L, rs.getLong(1));
+        UUID u4 = (UUID) rs.getObject(2);
+        assertTrue(u4 != null);
+        assertFalse(rs.next());
+        assertFalse(u1.equals(u2));
+        assertFalse(u2.equals(u3));
+        assertFalse(u3.equals(u4));
+        prep = conn.prepareStatement("MERGE INTO TEST(ID, VALUE) KEY(ID) VALUES (?, ?)",
+                Statement.RETURN_GENERATED_KEYS);
+        prep.setInt(1, 2);
+        prep.setInt(2, 10);
+        prep.execute();
+        rs = prep.getGeneratedKeys();
+        assertFalse(rs.next());
+        prep.setInt(1, 5);
+        prep.executeUpdate();
+        rs = prep.getGeneratedKeys();
+        rs.next();
+        assertEquals(UUID.class, rs.getObject(1).getClass());
+        assertFalse(rs.next());
+        stat.execute("DROP TABLE TEST");
+    }
+
+    /**
+     * Test for keys generated by sequences.
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testCalledSequences(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + + stat.execute("CREATE SEQUENCE SEQ"); + stat.execute("CREATE TABLE TEST(ID INT)"); + PreparedStatement prep; + prep = conn.prepareStatement("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", Statement.RETURN_GENERATED_KEYS); + prep.execute(); + ResultSet rs = prep.getGeneratedKeys(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", Statement.RETURN_GENERATED_KEYS); + prep.execute(); + rs = prep.getGeneratedKeys(); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", new int[] { 1 }); + prep.execute(); + rs = prep.getGeneratedKeys(); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", new String[] { "ID" }); + prep.execute(); + rs = prep.getGeneratedKeys(); + rs.next(); + assertEquals(4, rs.getInt(1)); + assertFalse(rs.next()); + + prep = conn.prepareStatement("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", ResultSet.TYPE_FORWARD_ONLY, + ResultSet.CONCUR_READ_ONLY, ResultSet.HOLD_CURSORS_OVER_COMMIT); + prep.execute(); + rs = prep.getGeneratedKeys(); + rs.next(); + assertFalse(rs.next()); + + stat.execute("DROP TABLE TEST"); + stat.execute("DROP SEQUENCE SEQ"); + + stat.execute("CREATE TABLE TEST(ID BIGINT)"); + stat.execute("CREATE SEQUENCE SEQ"); + prep = conn.prepareStatement("INSERT INTO TEST VALUES (30), (NEXT VALUE FOR SEQ)," + + " (NEXT VALUE FOR SEQ), (NEXT VALUE FOR SEQ), (20)", Statement.RETURN_GENERATED_KEYS); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + rs.next(); + assertEquals(1L, rs.getLong(1)); + rs.next(); + assertEquals(2L, rs.getLong(1)); + rs.next(); 
+        assertEquals(3L, rs.getLong(1));
+        assertFalse(rs.next());
+        stat.execute("DROP TABLE TEST");
+        stat.execute("DROP SEQUENCE SEQ");
+    }
+
+    /**
+     * Test method for INSERT ... SELECT operator.
+     *
+     * @param conn
+     *            connection
+     * @throws Exception
+     *             on exception
+     */
+    private void testInsertWithSelect(Connection conn) throws Exception {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT, VALUE INT NOT NULL)");
+
+        PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) SELECT 10",
+                Statement.RETURN_GENERATED_KEYS);
+        prep.executeUpdate();
+        ResultSet rs = prep.getGeneratedKeys();
+        assertTrue(rs.next());
+        assertEquals(1, rs.getLong(1));
+        rs.close();
+
+        stat.execute("DROP TABLE TEST");
+    }
+
+    /**
+     * Test method for MERGE USING operator.
+     *
+     * @param conn
+     *            connection
+     * @throws Exception
+     *             on exception
+     */
+    private void testMergeUsing(Connection conn) throws Exception {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE SOURCE (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + " UID INT NOT NULL UNIQUE, VALUE INT NOT NULL)");
+        stat.execute("CREATE TABLE DESTINATION (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + " UID INT NOT NULL UNIQUE, VALUE INT NOT NULL)");
+        PreparedStatement ps = conn.prepareStatement("INSERT INTO SOURCE(UID, VALUE) VALUES (?, ?)");
+        for (int i = 1; i <= 100; i++) {
+            ps.setInt(1, i);
+            ps.setInt(2, i * 10 + 5);
+            ps.executeUpdate();
+        }
+        // Insert the first half of the rows with different values
+        ps = conn.prepareStatement("INSERT INTO DESTINATION(UID, VALUE) VALUES (?, ?)");
+        for (int i = 1; i <= 50; i++) {
+            ps.setInt(1, i);
+            ps.setInt(2, i * 10);
+            ps.executeUpdate();
+        }
+        // And merge the second half into it; the first half will be updated with new values
+        ps = conn.prepareStatement(
+                "MERGE INTO DESTINATION USING SOURCE ON (DESTINATION.UID = SOURCE.UID)" + " WHEN MATCHED THEN UPDATE SET VALUE = SOURCE.VALUE" + " WHEN NOT MATCHED
THEN INSERT (UID, VALUE) VALUES (SOURCE.UID, SOURCE.VALUE)", + Statement.RETURN_GENERATED_KEYS); + // All rows should be either updated or inserted + assertEquals(100, ps.executeUpdate()); + ResultSet rs = ps.getGeneratedKeys(); + // Only 50 keys for inserted rows should be generated + for (int i = 1; i <= 50; i++) { + assertTrue(rs.next()); + assertEquals(i + 50, rs.getLong(1)); + } + assertFalse(rs.next()); + rs.close(); + // Check merged data + rs = stat.executeQuery("SELECT ID, UID, VALUE FROM DESTINATION ORDER BY ID"); + for (int i = 1; i <= 100; i++) { + assertTrue(rs.next()); + assertEquals(i, rs.getLong(1)); + assertEquals(i, rs.getInt(2)); + assertEquals(i * 10 + 5, rs.getInt(3)); + } + assertFalse(rs.next()); + stat.execute("DROP TABLE SOURCE"); + stat.execute("DROP TABLE DESTINATION"); + } + + /** + * Test method for shared connection between several statements in different + * threads. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testMultithreaded(final Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + "VALUE INT NOT NULL)"); + final int count = 4, iterations = 10_000; + Thread[] threads = new Thread[count]; + final long[] keys = new long[count * iterations]; + for (int i = 0; i < count; i++) { + final int num = i; + threads[num] = new Thread("getGeneratedKeys-" + num) { + @Override + public void run() { + try { + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (?)", + Statement.RETURN_GENERATED_KEYS); + for (int i = 0; i < iterations; i++) { + int value = iterations * num + i; + prep.setInt(1, value); + prep.execute(); + ResultSet rs = prep.getGeneratedKeys(); + rs.next(); + keys[value] = rs.getLong(1); + rs.close(); + } + } catch (SQLException ex) { + ex.printStackTrace(); + } + } + }; + } + for (int i = 0; i < count; i++) { + threads[i].start(); + } + for (int i = 0; i 
< count; i++) { + threads[i].join(); + } + ResultSet rs = stat.executeQuery("SELECT VALUE, ID FROM TEST ORDER BY VALUE"); + for (int i = 0; i < keys.length; i++) { + assertTrue(rs.next()); + assertEquals(i, rs.getInt(1)); + assertEquals(keys[i], rs.getLong(2)); + } + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for case of letters in column names. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testNameCase(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "\"id\" UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + // Test columns with only difference in case + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", + new String[] { "id", "ID" }); + prep.executeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("id", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(1L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + // Test lower case name of upper case column + stat.execute("ALTER TABLE TEST DROP COLUMN \"id\""); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", new String[] { "id" }); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertFalse(rs.next()); + rs.close(); + // Test upper case name of lower case column + stat.execute("ALTER TABLE TEST ALTER COLUMN ID RENAME TO \"id\""); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", new String[] { "ID" }); + prep.executeUpdate(); + rs = 
prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("id", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(3L, rs.getLong(1)); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String)} + * .{@link PreparedStatement#execute()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_Execute(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)"); + prep.execute(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String)} + * .{@link PreparedStatement#executeBatch()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_ExecuteBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)"); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String)} + * .{@link PreparedStatement#executeLargeBatch()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_ExecuteLargeBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)"); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String)} + * .{@link PreparedStatement#executeLargeUpdate()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_ExecuteLargeUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)"); + prep.executeLargeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String)} + * .{@link PreparedStatement#executeUpdate()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_ExecuteUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)"); + prep.executeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int)} + * .{@link PreparedStatement#execute()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_int_Execute(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", + Statement.NO_GENERATED_KEYS); + prep.execute(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", Statement.RETURN_GENERATED_KEYS); + prep.execute(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int)} + * .{@link PreparedStatement#executeBatch()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_int_ExecuteBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", + Statement.NO_GENERATED_KEYS); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", Statement.RETURN_GENERATED_KEYS); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(3L, rs.getLong(1)); + assertEquals(3L, rs.getLong("ID")); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertEquals(UUID.class, rs.getObject("UID").getClass()); + assertEquals(UUID.class, rs.getObject("UID", UUID.class).getClass()); + assertTrue(rs.next()); + assertEquals(4L, rs.getLong(1)); + assertEquals(4L, rs.getLong("ID")); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertEquals(UUID.class, rs.getObject("UID").getClass()); + assertEquals(UUID.class, rs.getObject("UID", UUID.class).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Connection#prepareStatement(String, int)} + * .{@link PreparedStatement#executeLargeBatch()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_int_ExecuteLargeBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", Statement.NO_GENERATED_KEYS); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", + Statement.RETURN_GENERATED_KEYS); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(3L, rs.getLong(1)); + assertEquals(3L, rs.getLong("ID")); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertEquals(UUID.class, rs.getObject("UID").getClass()); + assertEquals(UUID.class, rs.getObject("UID", UUID.class).getClass()); + assertTrue(rs.next()); + assertEquals(4L, rs.getLong(1)); + assertEquals(4L, rs.getLong("ID")); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertEquals(UUID.class, rs.getObject("UID").getClass()); + assertEquals(UUID.class, rs.getObject("UID", UUID.class).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int)} + * .{@link PreparedStatement#executeLargeUpdate()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_int_ExecuteLargeUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", Statement.NO_GENERATED_KEYS); + prep.executeLargeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", + Statement.RETURN_GENERATED_KEYS); + prep.executeLargeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int)} + * .{@link PreparedStatement#executeUpdate()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_int_ExecuteUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", + Statement.NO_GENERATED_KEYS); + prep.executeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", Statement.RETURN_GENERATED_KEYS); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int[])} + * .{@link PreparedStatement#execute()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_intArray_Execute(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + prep.execute(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", new int[] { 1, 2 }); + prep.execute(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", new int[] { 2, 1 }); + prep.execute(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + prep.execute(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int[])} + * .{@link 
PreparedStatement#executeBatch()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_intArray_ExecuteBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", new int[] { 1, 2 }); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(3L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertTrue(rs.next()); + assertEquals(4L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", new int[] { 2, 1 }); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(5L, rs.getLong(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(6L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + prep.addBatch(); + 
prep.addBatch(); + prep.executeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int[])} + * .{@link PreparedStatement#executeLargeBatch()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_intArray_ExecuteLargeBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", + new int[] { 1, 2 }); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(3L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertTrue(rs.next()); + assertEquals(4L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", + new int[] { 2, 1 }); + prep.addBatch(); + 
prep.addBatch(); + prep.executeLargeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(5L, rs.getLong(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(6L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int[])} + * .{@link PreparedStatement#executeLargeUpdate()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_intArray_ExecuteLargeUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + prep.executeLargeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", + new int[] { 1, 2 }); + prep.executeLargeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", + new int[] { 2, 1 }); + prep.executeLargeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + prep.executeLargeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + 
assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, int[])} + * .{@link PreparedStatement#executeUpdate()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_intArray_ExecuteUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + prep.executeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", new int[] { 1, 2 }); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", new int[] { 2, 1 }); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); 
+ assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, String[])} + * .{@link PreparedStatement#execute()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_StringArray_Execute(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + prep.execute(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", new String[] { "ID", "UID" }); + prep.execute(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", new String[] { "UID", "ID" }); + prep.execute(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new String[] { "UID" }); + prep.execute(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + 
assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, String[])} + * .{@link PreparedStatement#executeBatch()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_StringArray_ExecuteBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", new String[] { "ID", "UID" }); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(3L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertTrue(rs.next()); + assertEquals(4L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", new String[] { "UID", "ID" }); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + 
assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(5L, rs.getLong(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(6L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new String[] { "UID" }); + prep.addBatch(); + prep.addBatch(); + prep.executeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, String[])} + * .{@link PreparedStatement#executeLargeBatch()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_StringArray_ExecuteLargeBatch(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", + new String[] { "ID", "UID" }); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + 
assertEquals(3L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertTrue(rs.next()); + assertEquals(4L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", + new String[] { "UID", "ID" }); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(5L, rs.getLong(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(6L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", + new String[] { "UID" }); + prep.addBatch(); + prep.addBatch(); + prep.executeLargeBatch(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for + * {@link Connection#prepareStatement(String, String[])} + * .{@link PreparedStatement#executeLargeUpdate()}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_StringArray_ExecuteLargeUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + JdbcPreparedStatement prep = (JdbcPreparedStatement) conn + .prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + prep.executeLargeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", + new String[] { "ID", "UID" }); + prep.executeLargeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", + new String[] { "UID", "ID" }); + prep.executeLargeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = (JdbcPreparedStatement) conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", + new String[] { "UID" }); + prep.executeLargeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, 
rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Connection#prepareStatement(String, String[])} + * .{@link PreparedStatement#executeUpdate()}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testPrepareStatement_StringArray_ExecuteUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + prep.executeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (20)", new String[] { "ID", "UID" }); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (30)", new String[] { "UID", "ID" }); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + prep = conn.prepareStatement("INSERT INTO TEST(VALUE) VALUES (40)", new String[] { "UID" }); + prep.executeUpdate(); + rs = prep.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + 
assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#execute(String)}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecute(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.execute("INSERT INTO TEST(VALUE) VALUES (10)"); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#execute(String, int)}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecute_int(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.execute("INSERT INTO TEST(VALUE) VALUES (10)", Statement.NO_GENERATED_KEYS); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("INSERT INTO TEST(VALUE) VALUES (20)", Statement.RETURN_GENERATED_KEYS); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#execute(String, int[])}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecute_intArray(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.execute("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("INSERT INTO TEST(VALUE) VALUES (20)", new int[] { 1, 2 }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("INSERT INTO TEST(VALUE) VALUES (30)", new int[] { 2, 1 }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + stat.execute("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + rs = stat.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#execute(String, String[])}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecute_StringArray(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.execute("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("INSERT INTO TEST(VALUE) VALUES (20)", new String[] { "ID", "UID" }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("INSERT INTO TEST(VALUE) VALUES (30)", new String[] { "UID", "ID" }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + stat.execute("INSERT INTO TEST(VALUE) VALUES (40)", new String[] { "UID" }); + rs = stat.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeLargeUpdate(String)}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteLargeUpdate(Connection conn) throws Exception { + JdbcStatement stat = (JdbcStatement) conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (10)"); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeLargeUpdate(String, int)}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteLargeUpdate_int(Connection conn) throws Exception { + JdbcStatement stat = (JdbcStatement) conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (10)", Statement.NO_GENERATED_KEYS); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (20)", Statement.RETURN_GENERATED_KEYS); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeLargeUpdate(String, int[])}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteLargeUpdate_intArray(Connection conn) throws Exception { + JdbcStatement stat = (JdbcStatement) conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (20)", new int[] { 1, 2 }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (30)", new int[] { 2, 1 }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + rs = stat.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeLargeUpdate(String, String[])}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteLargeUpdate_StringArray(Connection conn) throws Exception { + JdbcStatement stat = (JdbcStatement) conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (20)", new String[] { "ID", "UID" }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (30)", new String[] { "UID", "ID" }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + stat.executeLargeUpdate("INSERT INTO TEST(VALUE) VALUES (40)", new String[] { "UID" }); + rs = stat.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeUpdate(String)}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteUpdate(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (10)"); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeUpdate(String, int)}. + * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteUpdate_int(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (10)", Statement.NO_GENERATED_KEYS); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (20)", Statement.RETURN_GENERATED_KEYS); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeUpdate(String, int[])}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteUpdate_intArray(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (10)", new int[0]); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (20)", new int[] { 1, 2 }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (30)", new int[] { 2, 1 }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (40)", new int[] { 2 }); + rs = stat.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test method for {@link Statement#executeUpdate(String, String[])}. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testStatementExecuteUpdate_StringArray(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST (ID BIGINT PRIMARY KEY AUTO_INCREMENT," + + "UID UUID NOT NULL DEFAULT RANDOM_UUID(), VALUE INT NOT NULL)"); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (10)", new String[0]); + ResultSet rs = stat.getGeneratedKeys(); + assertFalse(rs.next()); + rs.close(); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (20)", new String[] { "ID", "UID" }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("ID", rs.getMetaData().getColumnName(1)); + assertEquals("UID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(2L, rs.getLong(1)); + assertEquals(UUID.class, rs.getObject(2).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (30)", new String[] { "UID", "ID" }); + rs = stat.getGeneratedKeys(); + assertEquals(2, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertEquals("ID", rs.getMetaData().getColumnName(2)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertEquals(3L, rs.getLong(2)); + assertFalse(rs.next()); + rs.close(); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (40)", new String[] { "UID" }); + rs = stat.getGeneratedKeys(); + assertEquals(1, rs.getMetaData().getColumnCount()); + assertEquals("UID", rs.getMetaData().getColumnName(1)); + assertTrue(rs.next()); + assertEquals(UUID.class, rs.getObject(1).getClass()); + assertFalse(rs.next()); + rs.close(); + stat.execute("DROP TABLE TEST"); + } + + /** + * Test for keys generated by trigger. 
+ * + * @param conn + * connection + * @throws Exception + * on exception + */ + private void testTrigger(Connection conn) throws Exception { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID UUID, VALUE INT)"); + stat.execute("CREATE TRIGGER TEST_INSERT BEFORE INSERT ON TEST FOR EACH ROW CALL \"" + + TestGetGeneratedKeysTrigger.class.getName() + '"'); + stat.executeUpdate("INSERT INTO TEST(VALUE) VALUES (10), (20)", Statement.RETURN_GENERATED_KEYS); + ResultSet rs = stat.getGeneratedKeys(); + rs.next(); + UUID u1 = (UUID) rs.getObject(1); + rs.next(); + UUID u2 = (UUID) rs.getObject(1); + assertFalse(rs.next()); + rs = stat.executeQuery("SELECT ID FROM TEST ORDER BY VALUE"); + rs.next(); + assertEquals(u1, rs.getObject(1)); + rs.next(); + assertEquals(u2, rs.getObject(1)); + stat.execute("DROP TRIGGER TEST_INSERT"); + stat.execute("DROP TABLE TEST"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestJavaObject.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestJavaObject.java new file mode 100644 index 0000000000000..0dcc82d099ce2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestJavaObject.java @@ -0,0 +1,173 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.io.Serializable; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.Arrays; +import java.util.UUID; + +import org.h2.engine.SysProperties; +import org.h2.test.TestBase; + +/** + * Tests java object values when SysProperties.SERIALIZE_JAVA_OBJECT property is + * disabled. + * + * @author Sergi Vladykin + */ +public class TestJavaObject extends TestBase { + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = createCaller().init(); + test.config.traceTest = true; + test.config.memory = true; + test.config.networked = true; + test.config.beforeTest(); + test.test(); + test.config.afterTest(); + } + + @Override + public void test() throws Exception { + SysProperties.serializeJavaObject = false; + try { + trace("Test Java Object"); + doTest(new MyObj(1), new MyObj(2), false); + doTest(Arrays.asList(UUID.randomUUID(), null), + Arrays.asList(UUID.randomUUID(), UUID.randomUUID()), true); + // doTest(new Timestamp(System.currentTimeMillis()), + // new Timestamp(System.currentTimeMillis() + 10000), + // false); + doTest(200, 100, false); + doTest(200, 100L, true); + // doTest(new Date(System.currentTimeMillis() + 1000), + // new Date(System.currentTimeMillis()), false); + // doTest(new java.util.Date(System.currentTimeMillis() + 1000), + // new java.util.Date(System.currentTimeMillis()), false); + // doTest(new Time(System.currentTimeMillis() + 1000), + // new Date(System.currentTimeMillis()), false); + // doTest(new Time(System.currentTimeMillis() + 1000), + // new Timestamp(System.currentTimeMillis()), false); + } finally { + SysProperties.serializeJavaObject = true; + } + } + + private void doTest(Object o1, Object o2, boolean hash) throws SQLException { + deleteDb("javaObject"); + Connection conn = getConnection("javaObject"); + Statement stat = conn.createStatement(); + stat.execute("create table t(id identity, val other)"); + + PreparedStatement ins = conn.prepareStatement( + "insert into t(val) values(?)"); + + ins.setObject(1, o1, Types.JAVA_OBJECT); + assertEquals(1, ins.executeUpdate()); + + ins.setObject(1, o2, Types.JAVA_OBJECT); + assertEquals(1, ins.executeUpdate()); + + ResultSet rs = stat.executeQuery( + "select val from t order by val limit 1"); + + assertTrue(rs.next()); + + Object smallest; + if (hash) { + if (o1.getClass() != o2.getClass()) { + smallest 
= o1.getClass().getName().compareTo( + o2.getClass().getName()) < 0 ? o1 : o2; + } else { + assertFalse(o1.hashCode() == o2.hashCode()); + smallest = o1.hashCode() < o2.hashCode() ? o1 : o2; + } + } else { + @SuppressWarnings("unchecked") + int compare = ((Comparable) o1).compareTo(o2); + assertFalse(compare == 0); + smallest = compare < 0 ? o1 : o2; + } + + assertEquals(smallest.toString(), rs.getString(1)); + + Object y = rs.getObject(1); + + assertTrue(smallest.equals(y)); + assertFalse(rs.next()); + rs.close(); + + PreparedStatement prep = conn.prepareStatement( + "select id from t where val = ?"); + + prep.setObject(1, o1, Types.JAVA_OBJECT); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + rs.close(); + + prep.setObject(1, o2, Types.JAVA_OBJECT); + rs = prep.executeQuery(); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + rs.close(); + + stat.close(); + prep.close(); + + conn.close(); + deleteDb("javaObject"); + // trace("ok: " + o1.getClass().getName() + " vs " + + // o2.getClass().getName()); + } + + /** + * A test class. 
+     */
+    public static class MyObj implements Comparable<MyObj>, Serializable {
+
+        private static final long serialVersionUID = 1L;
+        private final int value;
+
+        MyObj(int value) {
+            this.value = value;
+        }
+
+        @Override
+        public String toString() {
+            return "myObj:" + value;
+        }
+
+        @Override
+        public int compareTo(MyObj o) {
+            return value - o.value;
+        }
+
+        @Override
+        public boolean equals(Object o) {
+            return toString().equals(o.toString());
+        }
+
+        @Override
+        public int hashCode() {
+            return -value;
+        }
+
+    }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestJavaObjectSerializer.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestJavaObjectSerializer.java
new file mode 100644
index 0000000000000..6ab844256511a
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestJavaObjectSerializer.java
@@ -0,0 +1,155 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.jdbc;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.sql.Types;
+import org.h2.api.JavaObjectSerializer;
+import org.h2.test.TestBase;
+import org.h2.util.JdbcUtils;
+
+/**
+ * Tests {@link JavaObjectSerializer}.
+ *
+ * @author Sergi Vladykin
+ * @author Davide Cavestro
+ */
+public class TestJavaObjectSerializer extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String...
a) throws Exception { + TestBase test = createCaller().init(); + test.config.traceTest = true; + test.config.memory = true; + test.config.networked = true; + test.config.beforeTest(); + test.test(); + test.config.afterTest(); + } + + @Override + public void test() throws Exception { + deleteDb("javaSerializer"); + testStaticGlobalSerializer(); + testDbLevelJavaObjectSerializer(); + deleteDb("javaSerializer"); + } + + private void testStaticGlobalSerializer() throws Exception { + JdbcUtils.serializer = new JavaObjectSerializer() { + @Override + public byte[] serialize(Object obj) throws Exception { + assertEquals(100500, ((Integer) obj).intValue()); + + return new byte[] { 1, 2, 3 }; + } + + @Override + public Object deserialize(byte[] bytes) throws Exception { + assertEquals(new byte[] { 1, 2, 3 }, bytes); + + return 100500; + } + }; + + try { + deleteDb("javaSerializer"); + Connection conn = getConnection("javaSerializer"); + + Statement stat = conn.createStatement(); + stat.execute("create table t(id identity, val other)"); + + PreparedStatement ins = conn.prepareStatement("insert into t(val) values(?)"); + + ins.setObject(1, 100500, Types.JAVA_OBJECT); + assertEquals(1, ins.executeUpdate()); + + Statement s = conn.createStatement(); + ResultSet rs = s.executeQuery("select val from t"); + + assertTrue(rs.next()); + + assertEquals(100500, ((Integer) rs.getObject(1)).intValue()); + assertEquals(new byte[] { 1, 2, 3 }, rs.getBytes(1)); + + conn.close(); + deleteDb("javaSerializer"); + } finally { + JdbcUtils.serializer = null; + } + } + + /** + * Tests per-database serializer when set through the related SET command. 
+ */ + public void testDbLevelJavaObjectSerializer() throws Exception { + + DbLevelJavaObjectSerializer.testBaseRef = this; + + try { + deleteDb("javaSerializer"); + Connection conn = getConnection("javaSerializer"); + + conn.createStatement().execute("SET JAVA_OBJECT_SERIALIZER '"+ + DbLevelJavaObjectSerializer.class.getName()+"'"); + + Statement stat = conn.createStatement(); + stat.execute("create table t1(id identity, val other)"); + + PreparedStatement ins = conn.prepareStatement("insert into t1(val) values(?)"); + + ins.setObject(1, 100500, Types.JAVA_OBJECT); + assertEquals(1, ins.executeUpdate()); + + Statement s = conn.createStatement(); + ResultSet rs = s.executeQuery("select val from t1"); + + assertTrue(rs.next()); + + assertEquals(100500, ((Integer) rs.getObject(1)).intValue()); + assertEquals(new byte[] { 1, 2, 3 }, rs.getBytes(1)); + + conn.close(); + deleteDb("javaSerializer"); + } finally { + DbLevelJavaObjectSerializer.testBaseRef = null; + } + } + + /** + * The serializer to use for this test. + */ + public static class DbLevelJavaObjectSerializer implements + JavaObjectSerializer { + + /** + * The test. + */ + static TestBase testBaseRef; + + @Override + public byte[] serialize(Object obj) throws Exception { + testBaseRef.assertEquals(100500, ((Integer) obj).intValue()); + + return new byte[] { 1, 2, 3 }; + } + + @Override + public Object deserialize(byte[] bytes) throws Exception { + testBaseRef.assertEquals(new byte[] { 1, 2, 3 }, bytes); + + return 100500; + } + + } +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestLimitUpdates.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestLimitUpdates.java new file mode 100644 index 0000000000000..da1b7405e329e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestLimitUpdates.java @@ -0,0 +1,115 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.jdbc;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import org.h2.test.TestBase;
+
+/**
+ * Test for limit updates.
+ */
+public class TestLimitUpdates extends TestBase {
+
+    private static final String DATABASE_NAME = "limitUpdates";
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws SQLException {
+        testLimitUpdates();
+        deleteDb(DATABASE_NAME);
+    }
+
+    private void testLimitUpdates() throws SQLException {
+        deleteDb(DATABASE_NAME);
+        Connection conn = null;
+        PreparedStatement prep = null;
+
+        try {
+            conn = getConnection(DATABASE_NAME);
+            prep = conn.prepareStatement(
+                    "CREATE TABLE TEST(KEY_ID INT PRIMARY KEY, VALUE_ID INT)");
+            prep.executeUpdate();
+
+            prep.close();
+            prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?)");
+            int numRows = 10;
+            for (int i = 0; i < numRows; ++i) {
+                prep.setInt(1, i);
+                prep.setInt(2, 0);
+                prep.execute();
+            }
+            assertEquals(numRows, countWhere(conn, 0));
+
+            // update all elements
+            prep.close();
+            prep = conn.prepareStatement("UPDATE TEST SET VALUE_ID = ?");
+            prep.setInt(1, 1);
+            prep.execute();
+            assertEquals(numRows, countWhere(conn, 1));
+
+            // update fewer elements than available
+            updateLimit(conn, 2, numRows / 2);
+            assertEquals(numRows / 2, countWhere(conn, 2));
+
+            // update more elements than available
+            updateLimit(conn, 3, numRows * 2);
+            assertEquals(numRows, countWhere(conn, 3));
+
+            // update no elements
+            updateLimit(conn, 4, 0);
+            assertEquals(0, countWhere(conn, 4));
+        } finally {
+            if (prep != null) {
+                prep.close();
+            }
+            if (conn != null) {
+                conn.close();
+            }
+        }
+    }
+
+    private static int countWhere(final Connection conn, final int where)
+            throws SQLException {
+        PreparedStatement prep =
null; + ResultSet rs = null; + try { + prep = conn.prepareStatement( + "SELECT COUNT(*) FROM TEST WHERE VALUE_ID = ?"); + prep.setInt(1, where); + rs = prep.executeQuery(); + rs.next(); + return rs.getInt(1); + } finally { + if (rs != null) { + rs.close(); + } + if (prep != null) { + prep.close(); + } + } + } + + private static void updateLimit(final Connection conn, final int value, + final int limit) throws SQLException { + try (PreparedStatement prep = conn.prepareStatement( + "UPDATE TEST SET VALUE_ID = ? LIMIT ?")) { + prep.setInt(1, value); + prep.setInt(2, limit); + prep.execute(); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestLobApi.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestLobApi.java new file mode 100644 index 0000000000000..4475406dc2ba9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestLobApi.java @@ -0,0 +1,386 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.Reader; +import java.io.StringReader; +import java.io.Writer; +import java.nio.charset.StandardCharsets; +import java.sql.Blob; +import java.sql.Clob; +import java.sql.Connection; +import java.sql.NClob; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import java.util.Random; + +import org.h2.api.ErrorCode; +import org.h2.jdbc.JdbcConnection; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; + +/** + * Test the Blob, Clob, and NClob implementations. + */ +public class TestLobApi extends TestBase { + + private JdbcConnection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb(getTestName()); + testUnsupportedOperations(); + testLobStaysOpenUntilCommitted(); + testInputStreamThrowsException(true); + testInputStreamThrowsException(false); + conn = (JdbcConnection) getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("create table test(id int, x blob)"); + testBlob(0); + testBlob(1); + testBlob(100); + testBlob(100000); + stat.execute("drop table test"); + stat.execute("create table test(id int, x clob)"); + testClob(0); + testClob(1); + testClob(100); + testClob(100000); + stat.execute("drop table test"); + conn.close(); + } + + private void testUnsupportedOperations() throws Exception { + Connection conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("create table test(id int, c clob, b blob)"); + stat.execute("insert into test values(1, 'x', x'00')"); + ResultSet rs = stat.executeQuery("select * from test order by id"); + rs.next(); + Clob clob = rs.getClob(2); + byte[] data = IOUtils.readBytesAndClose(clob.getAsciiStream(), -1); + assertEquals("x", new String(data, StandardCharsets.UTF_8)); + assertTrue(clob.toString().endsWith("'x'")); + clob.free(); + assertTrue(clob.toString().endsWith("null")); + + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, clob). + truncate(0); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, clob). + setAsciiStream(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, clob). + position("", 0); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, clob). + position((Clob) null, 0); + + Blob blob = rs.getBlob(3); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, blob). + truncate(0); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, blob). + position(new byte[1], 0); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, blob). 
+ position((Blob) null, 0); + assertTrue(blob.toString().endsWith("X'00'")); + blob.free(); + assertTrue(blob.toString().endsWith("null")); + + stat.execute("drop table test"); + conn.close(); + } + + /** + * According to the JDBC spec, BLOB and CLOB objects must stay open even if + * the result set is closed (see ResultSet.close). + */ + private void testLobStaysOpenUntilCommitted() throws Exception { + Connection conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("create table test(id identity, c clob, b blob)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(null, ?, ?)"); + prep.setString(1, ""); + prep.setBytes(2, new byte[0]); + prep.execute(); + + Random r = new Random(1); + + char[] charsSmall = new char[20]; + for (int i = 0; i < charsSmall.length; i++) { + charsSmall[i] = (char) r.nextInt(10000); + } + String dSmall = new String(charsSmall); + prep.setCharacterStream(1, new StringReader(dSmall), -1); + byte[] bytesSmall = new byte[20]; + r.nextBytes(bytesSmall); + prep.setBinaryStream(2, new ByteArrayInputStream(bytesSmall), -1); + prep.execute(); + + char[] chars = new char[100000]; + for (int i = 0; i < chars.length; i++) { + chars[i] = (char) r.nextInt(10000); + } + String d = new String(chars); + prep.setCharacterStream(1, new StringReader(d), -1); + byte[] bytes = new byte[100000]; + r.nextBytes(bytes); + prep.setBinaryStream(2, new ByteArrayInputStream(bytes), -1); + prep.execute(); + + conn.setAutoCommit(false); + ResultSet rs = stat.executeQuery("select * from test order by id"); + rs.next(); + Clob c1 = rs.getClob(2); + Blob b1 = rs.getBlob(3); + rs.next(); + Clob c2 = rs.getClob(2); + Blob b2 = rs.getBlob(3); + rs.next(); + Clob c3 = rs.getClob(2); + Blob b3 = rs.getBlob(3); + assertFalse(rs.next()); + // now close + rs.close(); + // but the LOBs must stay open + assertEquals(0, c1.length()); + assertEquals(0, b1.length()); + assertEquals("", c1.getSubString(1, 0)); + 
assertEquals(new byte[0], b1.getBytes(1, 0)); + + assertEquals(charsSmall.length, c2.length()); + assertEquals(bytesSmall.length, b2.length()); + assertEquals(dSmall, c2.getSubString(1, (int) c2.length())); + assertEquals(bytesSmall, b2.getBytes(1, (int) b2.length())); + + assertEquals(chars.length, c3.length()); + assertEquals(bytes.length, b3.length()); + assertEquals(d, c3.getSubString(1, (int) c3.length())); + assertEquals(bytes, b3.getBytes(1, (int) b3.length())); + stat.execute("drop table test"); + conn.close(); + } + + private void testInputStreamThrowsException(final boolean ioException) + throws Exception { + Connection conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("create table test(id identity, c clob, b blob)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(null, ?, ?)"); + + assertThrows(ErrorCode.IO_EXCEPTION_1, prep). + setCharacterStream(1, new Reader() { + int pos; + @Override + public int read(char[] buff, int off, int len) throws IOException { + pos += len; + if (pos > 100001) { + if (ioException) { + throw new IOException(""); + } + throw new IllegalStateException(); + } + return len; + } + @Override + public void close() throws IOException { + // nothing to do + } + }, -1); + + prep.setString(1, new String(new char[10000])); + prep.setBytes(2, new byte[0]); + prep.execute(); + prep.setString(1, ""); + + assertThrows(ErrorCode.IO_EXCEPTION_1, prep). 
+        setBinaryStream(2, new InputStream() {
+            int pos;
+            @Override
+            public int read() throws IOException {
+                pos++;
+                if (pos > 100001) {
+                    if (ioException) {
+                        throw new IOException("");
+                    }
+                    throw new IllegalStateException();
+                }
+                return 0;
+            }
+        }, -1);
+
+        prep.setBytes(2, new byte[10000]);
+        prep.execute();
+        ResultSet rs = stat.executeQuery("select c, b from test order by id");
+        rs.next();
+        assertEquals(new String(new char[10000]), rs.getString(1));
+        assertEquals(new byte[0], rs.getBytes(2));
+        rs.next();
+        assertEquals("", rs.getString(1));
+        assertEquals(new byte[10000], rs.getBytes(2));
+        stat.execute("drop table test");
+        conn.close();
+    }
+
+    private void testBlob(int length) throws Exception {
+        Random r = new Random(length);
+        byte[] data = new byte[length];
+        r.nextBytes(data);
+        Blob b = conn.createBlob();
+        OutputStream out = b.setBinaryStream(1);
+        out.write(data, 0, data.length);
+        out.close();
+        stat.execute("delete from test");
+
+        PreparedStatement prep = conn.prepareStatement("insert into test values(?, ?)");
+        prep.setInt(1, 1);
+        prep.setBlob(2, b);
+        prep.execute();
+
+        prep.setInt(1, 2);
+        b = conn.createBlob();
+        assertEquals(length, b.setBytes(1, data));
+        prep.setBlob(2, b);
+        prep.execute();
+
+        prep.setInt(1, 3);
+        Blob b2 = conn.createBlob();
+        byte[] xdata = new byte[length + 2];
+        System.arraycopy(data, 0, xdata, 1, length);
+        assertEquals(length, b2.setBytes(1, xdata, 1, length));
+        prep.setBlob(2, b2);
+        prep.execute();
+
+        prep.setInt(1, 4);
+        prep.setBlob(2, new ByteArrayInputStream(data));
+        prep.execute();
+
+        prep.setInt(1, 5);
+        prep.setBlob(2, new ByteArrayInputStream(data), -1);
+        prep.execute();
+
+        ResultSet rs;
+        rs = stat.executeQuery("select * from test");
+        rs.next();
+        Blob b3 = rs.getBlob(2);
+        assertEquals(length, b3.length());
+        byte[] bytes = b.getBytes(1, length);
+        byte[] bytes2 = b3.getBytes(1, length);
+        assertEquals(bytes, bytes2);
+        rs.next();
+        b3 = rs.getBlob(2);
+        assertEquals(length, b3.length());
+        bytes2 = b3.getBytes(1, length);
+        assertEquals(bytes, bytes2);
+        rs.next();
+        b3 = rs.getBlob(2);
+        assertEquals(length, b3.length());
+        bytes2 = b3.getBytes(1, length);
+        assertEquals(bytes, bytes2);
+        while (rs.next()) {
+            bytes2 = rs.getBytes(2);
+            assertEquals(bytes, bytes2);
+        }
+    }
+
+    private void testClob(int length) throws Exception {
+        Random r = new Random(length);
+        char[] data = new char[length];
+
+        // Unicode problem:
+        // The UCS code values 0xd800-0xdfff (UTF-16 surrogates)
+        // as well as 0xfffe and 0xffff (UCS non-characters)
+        // should not appear in conforming UTF-8 streams.
+        // (String.getBytes("UTF-8") only returns 1 byte for 0xd800-0xdfff)
+        for (int i = 0; i < length; i++) {
+            char c;
+            do {
+                c = (char) r.nextInt();
+            } while (c >= 0xd800 && c <= 0xdfff);
+            data[i] = c;
+        }
+        Clob c = conn.createClob();
+        Writer out = c.setCharacterStream(1);
+        out.write(data, 0, data.length);
+        out.close();
+        stat.execute("delete from test");
+        PreparedStatement prep = conn.prepareStatement("insert into test values(?, ?)");
+
+        prep.setInt(1, 1);
+        prep.setClob(2, c);
+        prep.execute();
+
+        c = conn.createClob();
+        c.setString(1, new String(data));
+        prep.setInt(1, 2);
+        prep.setClob(2, c);
+        prep.execute();
+
+        prep.setInt(1, 3);
+        prep.setCharacterStream(2, new StringReader(new String(data)));
+        prep.execute();
+
+        prep.setInt(1, 4);
+        prep.setCharacterStream(2, new StringReader(new String(data)), -1);
+        prep.execute();
+
+        NClob nc;
+        nc = conn.createNClob();
+        assertEquals(length, nc.setString(1, new String(data)));
+        prep.setInt(1, 5);
+        prep.setNClob(2, nc);
+        prep.execute();
+
+        nc = conn.createNClob();
+        char[] xdata = new char[length + 2];
+        System.arraycopy(data, 0, xdata, 1, length);
+        assertEquals(length, nc.setString(1, new String(xdata), 1, length));
+        prep.setInt(1, 6);
+        prep.setNClob(2, nc);
+        prep.execute();
+
+        prep.setInt(1, 7);
+        prep.setNClob(2, new StringReader(new String(data)));
+        prep.execute();
+
+        prep.setInt(1, 8);
+        prep.setNClob(2, new StringReader(new String(data)), -1);
+        prep.execute();
+
+        prep.setInt(1, 9);
+        prep.setNString(2, new String(data));
+        prep.execute();
+
+        ResultSet rs;
+        rs = stat.executeQuery("select * from test");
+        rs.next();
+        Clob c2 = rs.getClob(2);
+        assertEquals(length, c2.length());
+        String s = c.getSubString(1, length);
+        String s2 = c2.getSubString(1, length);
+        while (rs.next()) {
+            c2 = rs.getClob(2);
+            assertEquals(length, c2.length());
+            s2 = c2.getSubString(1, length);
+            assertEquals(s, s2);
+        }
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestManyJdbcObjects.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestManyJdbcObjects.java
new file mode 100644
index 0000000000000..8534a395ac85d
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestManyJdbcObjects.java
@@ -0,0 +1,116 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.jdbc;
+
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+import org.h2.test.TestBase;
+
+/**
+ * Tests the server by creating many JDBC objects (result sets and so on).
+ */
+public class TestManyJdbcObjects extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws SQLException {
+        testNestedResultSets();
+        testManyConnections();
+        testOneConnectionPrepare();
+        deleteDb("manyObjects");
+    }
+
+    private void testNestedResultSets() throws SQLException {
+        if (!config.networked) {
+            return;
+        }
+        deleteDb("manyObjects");
+        Connection conn = getConnection("manyObjects");
+        DatabaseMetaData meta = conn.getMetaData();
+        ResultSet rsTables = meta.getColumns(null, null, null, null);
+        while (rsTables.next()) {
+            meta.getExportedKeys(null, null, null);
+            meta.getImportedKeys(null, null, null);
+        }
+        conn.close();
+    }
+
+    private void testManyConnections() throws SQLException {
+        if (!config.networked || config.memory) {
+            return;
+        }
+        // SERVER_CACHED_OBJECTS = 1000: connections = 20 (1250)
+        // SERVER_CACHED_OBJECTS = 500: connections = 40
+        // SERVER_CACHED_OBJECTS = 50: connections = 120
+        deleteDb("manyObjects");
+        int connCount = getSize(4, 40);
+        Connection[] conn = new Connection[connCount];
+        for (int i = 0; i < connCount; i++) {
+            conn[i] = getConnection("manyObjects");
+        }
+        int len = getSize(50, 500);
+        for (int j = 0; j < len; j++) {
+            if ((j % 10) == 0) {
+                trace("j=" + j);
+            }
+            for (int i = 0; i < connCount; i++) {
+                conn[i].getMetaData().getSchemas().close();
+            }
+        }
+        for (int i = 0; i < connCount; i++) {
+            conn[i].close();
+        }
+    }
+
+    private void testOneConnectionPrepare() throws SQLException {
+        deleteDb("manyObjects");
+        Connection conn = getConnection("manyObjects");
+        PreparedStatement prep;
+        Statement stat;
+        int size = getSize(10, 1000);
+        for (int i = 0; i < size; i++) {
+            conn.getMetaData();
+        }
+        for (int i = 0; i < size; i++) {
+            conn.createStatement();
+        }
+        stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)");
+        stat.execute("INSERT INTO TEST VALUES(1, 'Hello')");
+        for (int i = 0; i < size; i++) {
+            stat.executeQuery("SELECT * FROM TEST WHERE 1=0");
+        }
+        for (int i = 0; i < size; i++) {
+            stat.executeQuery("SELECT * FROM TEST");
+        }
+        for (int i = 0; i < size; i++) {
+            conn.prepareStatement("SELECT * FROM TEST");
+        }
+        prep = conn.prepareStatement("SELECT * FROM TEST WHERE 1=0");
+        for (int i = 0; i < size; i++) {
+            prep.executeQuery();
+        }
+        prep = conn.prepareStatement("SELECT * FROM TEST");
+        for (int i = 0; i < size; i++) {
+            prep.executeQuery();
+        }
+        conn.close();
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestMetaData.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestMetaData.java
new file mode 100644
index 0000000000000..164650b3aabe6
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestMetaData.java
@@ -0,0 +1,1340 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.jdbc;
+
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.Driver;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.sql.Types;
+import org.h2.api.ErrorCode;
+import org.h2.engine.Constants;
+import org.h2.engine.SysProperties;
+import org.h2.test.TestBase;
+import org.h2.value.DataType;
+
+/**
+ * Test for the DatabaseMetaData implementation.
+ */
+public class TestMetaData extends TestBase {
+
+    private static final String CATALOG = "METADATA";
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws SQLException {
+        deleteDb("metaData");
+        testUnwrap();
+        testUnsupportedOperations();
+        testTempTable();
+        testColumnResultSetMeta();
+        testColumnLobMeta();
+        testColumnMetaData();
+        testColumnPrecision();
+        testColumnDefault();
+        testCrossReferences();
+        testProcedureColumns();
+        testUDTs();
+        testStatic();
+        testGeneral();
+        testAllowLiteralsNone();
+        testClientInfo();
+        testSessionsUncommitted();
+        testQueryStatistics();
+        testQueryStatisticsLimit();
+    }
+
+    private void testUnwrap() throws SQLException {
+        Connection conn = getConnection("metaData");
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("select 1 as x from dual");
+        ResultSetMetaData meta = rs.getMetaData();
+        assertTrue(meta.isWrapperFor(Object.class));
+        assertTrue(meta.isWrapperFor(ResultSetMetaData.class));
+        assertTrue(meta.isWrapperFor(meta.getClass()));
+        assertTrue(meta == meta.unwrap(Object.class));
+        assertTrue(meta == meta.unwrap(ResultSetMetaData.class));
+        assertTrue(meta == meta.unwrap(meta.getClass()));
+        assertFalse(meta.isWrapperFor(Integer.class));
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).
+                unwrap(Integer.class);
+        conn.close();
+    }
+
+    private void testUnsupportedOperations() throws SQLException {
+        Connection conn = getConnection("metaData");
+        Statement stat = conn.createStatement();
+        ResultSet rs = stat.executeQuery("select 1 as x from dual");
+        ResultSetMetaData meta = rs.getMetaData();
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getColumnLabel(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getColumnName(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getColumnType(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getColumnTypeName(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getSchemaName(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getTableName(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getCatalogName(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isAutoIncrement(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isCaseSensitive(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isSearchable(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isCurrency(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isNullable(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isSigned(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isReadOnly(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isWritable(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).isDefinitelyWritable(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getColumnClassName(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getPrecision(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getScale(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, meta).getColumnDisplaySize(0);
+        conn.close();
+    }
+
+    private void testColumnResultSetMeta() throws SQLException {
+        Connection conn = getConnection("metaData");
+        Statement stat = conn.createStatement();
+        stat.executeUpdate("create table test(data result_set)");
+        stat.execute("create alias x as 'ResultSet x(Connection conn, String sql) " +
+                "throws SQLException { return conn.createStatement(" +
+                "ResultSet.TYPE_SCROLL_INSENSITIVE, " +
+                "ResultSet.CONCUR_READ_ONLY).executeQuery(sql); }'");
+        stat.execute("insert into test values(" +
+                "select x('select x from system_range(1, 2)'))");
+        ResultSet rs = stat.executeQuery("select * from test");
+        ResultSetMetaData rsMeta = rs.getMetaData();
+        assertTrue(rsMeta.toString().endsWith(": columns=1"));
+        assertEquals("java.sql.ResultSet", rsMeta.getColumnClassName(1));
+        assertEquals(DataType.TYPE_RESULT_SET, rsMeta.getColumnType(1));
+        rs.next();
+        assertTrue(rs.getObject(1) instanceof java.sql.ResultSet);
+        assertEquals("org.h2.tools.SimpleResultSet",
+                rs.getObject(1).getClass().getName());
+        stat.executeUpdate("drop alias x");
+
+        rs = stat.executeQuery("select 1 from dual");
+        rs.next();
+        rsMeta = rs.getMetaData();
+        assertTrue(rsMeta.getCatalogName(1) != null);
+        assertEquals("1", rsMeta.getColumnLabel(1));
+        assertEquals("1", rsMeta.getColumnName(1));
+        assertEquals("", rsMeta.getSchemaName(1));
+        assertEquals("", rsMeta.getTableName(1));
+        assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
+        assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, rs.getHoldability());
+        stat.executeUpdate("drop table test");
+        conn.close();
+    }
+
+    private void testColumnLobMeta() throws SQLException {
+        Connection conn = getConnection("metaData");
+        Statement stat = conn.createStatement();
+        stat.executeUpdate("CREATE TABLE t (blob BLOB, clob CLOB)");
+        stat.execute("INSERT INTO t VALUES('', '')");
+        ResultSet rs = stat.executeQuery("SELECT blob,clob FROM t");
+        ResultSetMetaData rsMeta = rs.getMetaData();
+        assertEquals("java.sql.Blob", rsMeta.getColumnClassName(1));
+        assertEquals("java.sql.Clob", rsMeta.getColumnClassName(2));
+        rs.next();
+        assertTrue(rs.getObject(1) instanceof java.sql.Blob);
+        assertTrue(rs.getObject(2) instanceof java.sql.Clob);
+        stat.executeUpdate("DROP TABLE t");
+        conn.close();
+    }
+
+    private void testColumnMetaData() throws SQLException {
+        Connection conn = getConnection("metaData");
+        String sql = "select substring('Hello',0,1)";
+        ResultSet rs = conn.prepareStatement(sql).executeQuery();
+        rs.next();
+        int type = rs.getMetaData().getColumnType(1);
+        assertEquals(Types.VARCHAR, type);
+        rs = conn.createStatement().executeQuery("SELECT COUNT(*) C FROM DUAL");
+        assertEquals("C", rs.getMetaData().getColumnName(1));
+
+        Statement stat = conn.createStatement();
+        stat.execute("create table a(x array)");
+        stat.execute("insert into a values((1, 2))");
+        rs = stat.executeQuery("SELECT x[1] FROM a");
+        ResultSetMetaData rsMeta = rs.getMetaData();
+        assertEquals(Types.VARCHAR, rsMeta.getColumnType(1));
+        rs.next();
+        // assertEquals(String.class.getName(),
+        //         rs.getObject(1).getClass().getName());
+        stat.execute("drop table a");
+        conn.close();
+    }
+
+    private void testColumnPrecision() throws SQLException {
+        int numericType;
+        if (SysProperties.BIG_DECIMAL_IS_DECIMAL) {
+            numericType = Types.DECIMAL;
+        } else {
+            numericType = Types.NUMERIC;
+        }
+        Connection conn = getConnection("metaData");
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE ONE(X NUMBER(12,2), Y FLOAT)");
+        stat.execute("CREATE TABLE TWO AS SELECT * FROM ONE");
+        ResultSet rs;
+        ResultSetMetaData rsMeta;
+        rs = stat.executeQuery("SELECT * FROM ONE");
+        rsMeta = rs.getMetaData();
+        assertEquals(12, rsMeta.getPrecision(1));
+        assertEquals(17, rsMeta.getPrecision(2));
+        assertEquals(numericType, rsMeta.getColumnType(1));
+        assertEquals(Types.DOUBLE, rsMeta.getColumnType(2));
+        rs = stat.executeQuery("SELECT * FROM TWO");
+        rsMeta = rs.getMetaData();
+        assertEquals(12, rsMeta.getPrecision(1));
+        assertEquals(17, rsMeta.getPrecision(2));
+        assertEquals(numericType, rsMeta.getColumnType(1));
+        assertEquals(Types.DOUBLE, rsMeta.getColumnType(2));
+        stat.execute("DROP TABLE ONE, TWO");
+        conn.close();
+    }
+
+    private void testColumnDefault() throws SQLException {
+        Connection conn = getConnection("metaData");
+        DatabaseMetaData meta = conn.getMetaData();
+        ResultSet rs;
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(A INT, B INT DEFAULT NULL)");
+        rs = meta.getColumns(null, null, "TEST", null);
+        rs.next();
+        assertEquals("A", rs.getString("COLUMN_NAME"));
+        assertEquals(null, rs.getString("COLUMN_DEF"));
+        rs.next();
+        assertEquals("B", rs.getString("COLUMN_NAME"));
+        assertEquals("NULL", rs.getString("COLUMN_DEF"));
+        assertFalse(rs.next());
+        stat.execute("DROP TABLE TEST");
+        conn.close();
+    }
+
+    private void testProcedureColumns() throws SQLException {
+        Connection conn = getConnection("metaData");
+        DatabaseMetaData meta = conn.getMetaData();
+        ResultSet rs;
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE ALIAS PROP FOR " +
+                "\"java.lang.System.getProperty(java.lang.String)\"");
+        stat.execute("CREATE ALIAS EXIT FOR \"java.lang.System.exit\"");
+        rs = meta.getProcedures(null, null, "EX%");
+        assertResultSetMeta(rs, 9, new String[] { "PROCEDURE_CAT",
+                "PROCEDURE_SCHEM", "PROCEDURE_NAME", "NUM_INPUT_PARAMS",
+                "NUM_OUTPUT_PARAMS", "NUM_RESULT_SETS", "REMARKS",
+                "PROCEDURE_TYPE", "SPECIFIC_NAME" }, new int[] { Types.VARCHAR,
+                Types.VARCHAR, Types.VARCHAR, Types.INTEGER, Types.INTEGER,
+                Types.INTEGER, Types.VARCHAR, Types.SMALLINT, Types.VARCHAR },
+                null, null);
+        assertResultSetOrdered(rs, new String[][] { { CATALOG,
+                Constants.SCHEMA_MAIN, "EXIT", "1", "0", "0", "",
+                "" + DatabaseMetaData.procedureNoResult } });
+        rs = meta.getProcedureColumns(null, null, null, null);
+        assertResultSetMeta(rs, 20, new String[] { "PROCEDURE_CAT",
+                "PROCEDURE_SCHEM", "PROCEDURE_NAME", "COLUMN_NAME",
+                "COLUMN_TYPE", "DATA_TYPE", "TYPE_NAME", "PRECISION", "LENGTH",
+                "SCALE", "RADIX", "NULLABLE", "REMARKS", "COLUMN_DEF",
+                "SQL_DATA_TYPE", "SQL_DATETIME_SUB", "CHAR_OCTET_LENGTH",
+                "ORDINAL_POSITION", "IS_NULLABLE", "SPECIFIC_NAME" },
+                new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                        Types.VARCHAR, Types.SMALLINT, Types.INTEGER,
+                        Types.VARCHAR, Types.INTEGER, Types.INTEGER,
+                        Types.SMALLINT, Types.SMALLINT, Types.SMALLINT,
+                        Types.VARCHAR, Types.VARCHAR, Types.INTEGER,
+                        Types.INTEGER, Types.INTEGER, Types.INTEGER,
+                        Types.VARCHAR, Types.VARCHAR }, null, null);
+        assertResultSetOrdered(rs, new String[][] {
+                { CATALOG, Constants.SCHEMA_MAIN, "EXIT", "P1",
+                        "" + DatabaseMetaData.procedureColumnIn,
+                        "" + Types.INTEGER, "INTEGER", "10", "10", "0", "10",
+                        "" + DatabaseMetaData.procedureNoNulls },
+                { CATALOG, Constants.SCHEMA_MAIN, "PROP", "P0",
+                        "" + DatabaseMetaData.procedureColumnReturn,
+                        "" + Types.VARCHAR, "VARCHAR", "" + Integer.MAX_VALUE,
+                        "" + Integer.MAX_VALUE, "0", "10",
+                        "" + DatabaseMetaData.procedureNullableUnknown },
+                { CATALOG, Constants.SCHEMA_MAIN, "PROP", "P1",
+                        "" + DatabaseMetaData.procedureColumnIn,
+                        "" + Types.VARCHAR, "VARCHAR", "" + Integer.MAX_VALUE,
+                        "" + Integer.MAX_VALUE, "0", "10",
+                        "" + DatabaseMetaData.procedureNullable }, });
+        stat.execute("DROP ALIAS EXIT");
+        stat.execute("DROP ALIAS PROP");
+        conn.close();
+    }
+
+    private void testUDTs() throws SQLException {
+        Connection conn = getConnection("metaData");
+        DatabaseMetaData meta = conn.getMetaData();
+        ResultSet rs;
+        rs = meta.getUDTs(null, null, null, null);
+        assertResultSetMeta(rs, 7,
+                new String[] { "TYPE_CAT", "TYPE_SCHEM", "TYPE_NAME",
+                        "CLASS_NAME", "DATA_TYPE", "REMARKS", "BASE_TYPE" },
+                new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                        Types.VARCHAR, Types.SMALLINT, Types.VARCHAR,
+                        Types.SMALLINT }, null, null);
+        conn.close();
+    }
+
+    private void testCrossReferences() throws SQLException {
+        Connection conn = getConnection("metaData");
+        DatabaseMetaData meta = conn.getMetaData();
+        ResultSet rs;
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE PARENT(A INT, B INT, PRIMARY KEY(A, B))");
+        stat.execute("CREATE TABLE CHILD(ID INT PRIMARY KEY, PA INT, PB INT, " +
+                "CONSTRAINT AB FOREIGN KEY(PA, PB) REFERENCES PARENT(A, B))");
+        rs = meta.getCrossReference(null, "PUBLIC", "PARENT", null, "PUBLIC", "CHILD");
+        checkCrossRef(rs);
+        rs = meta.getImportedKeys(null, "PUBLIC", "CHILD");
+        checkCrossRef(rs);
+        rs = meta.getExportedKeys(null, "PUBLIC", "PARENT");
+        checkCrossRef(rs);
+        stat.execute("DROP TABLE PARENT");
+        stat.execute("DROP TABLE CHILD");
+        conn.close();
+    }
+
+    private void checkCrossRef(ResultSet rs) throws SQLException {
+        assertResultSetMeta(rs, 14, new String[] { "PKTABLE_CAT",
+                "PKTABLE_SCHEM", "PKTABLE_NAME", "PKCOLUMN_NAME",
+                "FKTABLE_CAT", "FKTABLE_SCHEM", "FKTABLE_NAME",
+                "FKCOLUMN_NAME", "KEY_SEQ", "UPDATE_RULE", "DELETE_RULE",
+                "FK_NAME", "PK_NAME", "DEFERRABILITY" }, new int[] {
+                Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                Types.SMALLINT, Types.SMALLINT, Types.SMALLINT, Types.VARCHAR,
+                Types.VARCHAR, Types.SMALLINT }, null, null);
+        assertResultSetOrdered(rs, new String[][] {
+                { CATALOG, Constants.SCHEMA_MAIN, "PARENT", "A", CATALOG,
+                        Constants.SCHEMA_MAIN, "CHILD", "PA", "1",
+                        "" + DatabaseMetaData.importedKeyRestrict,
+                        "" + DatabaseMetaData.importedKeyRestrict, "AB",
+                        "PRIMARY_KEY_8",
+                        "" + DatabaseMetaData.importedKeyNotDeferrable },
+                { CATALOG, Constants.SCHEMA_MAIN, "PARENT", "B", CATALOG,
+                        Constants.SCHEMA_MAIN, "CHILD", "PB", "2",
+                        "" + DatabaseMetaData.importedKeyRestrict,
+                        "" + DatabaseMetaData.importedKeyRestrict, "AB",
+                        "PRIMARY_KEY_8",
+                        "" + DatabaseMetaData.importedKeyNotDeferrable } });
+    }
+
+    private void testTempTable() throws SQLException {
+        Connection conn = getConnection("metaData");
+        Statement stat = conn.createStatement();
+        stat.execute("DROP TABLE IF EXISTS TEST_TEMP");
+        stat.execute("CREATE TEMP TABLE TEST_TEMP" +
+                "(ID INT PRIMARY KEY, NAME VARCHAR(255))");
+        stat.execute("CREATE INDEX IDX_NAME ON TEST_TEMP(NAME)");
+        stat.execute("ALTER TABLE TEST_TEMP ADD FOREIGN KEY(ID) REFERENCES(ID)");
+        conn.close();
+
+        conn = getConnection("metaData");
+        stat = conn.createStatement();
+        stat.execute("CREATE TEMP TABLE TEST_TEMP" +
+                "(ID INT PRIMARY KEY, NAME VARCHAR(255))");
+        ResultSet rs = stat.executeQuery("SELECT STORAGE_TYPE FROM " +
+                "INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='TEST_TEMP'");
+        rs.next();
+        assertEquals("GLOBAL TEMPORARY", rs.getString("STORAGE_TYPE"));
+        stat.execute("DROP TABLE IF EXISTS TEST_TEMP");
+        conn.close();
+    }
+
+    private void testStatic() throws SQLException {
+        Driver dr = org.h2.Driver.load();
+        Connection conn = getConnection("metaData");
+        DatabaseMetaData meta = conn.getMetaData();
+
+        assertEquals(dr.getMajorVersion(), meta.getDriverMajorVersion());
+        assertEquals(dr.getMinorVersion(), meta.getDriverMinorVersion());
+        assertTrue(dr.jdbcCompliant());
+
+        assertEquals(0, dr.getPropertyInfo(null, null).length);
+        assertTrue(dr.connect("jdbc:test:false", null) == null);
+
+        assertTrue(meta.getNumericFunctions().length() > 0);
+        assertTrue(meta.getStringFunctions().length() > 0);
+        assertTrue(meta.getSystemFunctions().length() > 0);
+        assertTrue(meta.getTimeDateFunctions().length() > 0);
+
+        assertTrue(meta.allProceduresAreCallable());
+        assertTrue(meta.allTablesAreSelectable());
+        assertTrue(meta.dataDefinitionCausesTransactionCommit());
+        assertFalse(meta.dataDefinitionIgnoredInTransactions());
+        assertFalse(meta.deletesAreDetected(ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.deletesAreDetected(ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.deletesAreDetected(ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertFalse(meta.doesMaxRowSizeIncludeBlobs());
+        assertEquals(".", meta.getCatalogSeparator());
+        assertEquals("catalog", meta.getCatalogTerm());
+        assertTrue(meta.getConnection() == conn);
+        String versionStart = meta.getDatabaseMajorVersion() + "." +
+                meta.getDatabaseMinorVersion();
+        assertTrue(meta.getDatabaseProductVersion().startsWith(versionStart));
+        assertEquals(meta.getDatabaseMajorVersion(),
+                meta.getDriverMajorVersion());
+        assertEquals(meta.getDatabaseMinorVersion(),
+                meta.getDriverMinorVersion());
+        int majorVersion = 4;
+        assertEquals(majorVersion, meta.getJDBCMajorVersion());
+        assertEquals(0, meta.getJDBCMinorVersion());
+        assertEquals("H2", meta.getDatabaseProductName());
+        assertEquals(Connection.TRANSACTION_READ_COMMITTED,
+                meta.getDefaultTransactionIsolation());
+        assertEquals("H2 JDBC Driver", meta.getDriverName());
+
+        versionStart = meta.getDriverMajorVersion() + "." +
+                meta.getDriverMinorVersion();
+        assertTrue(meta.getDriverVersion().startsWith(versionStart));
+        assertEquals("", meta.getExtraNameCharacters());
+        assertEquals("\"", meta.getIdentifierQuoteString());
+        assertEquals(0, meta.getMaxBinaryLiteralLength());
+        assertEquals(0, meta.getMaxCatalogNameLength());
+        assertEquals(0, meta.getMaxCharLiteralLength());
+        assertEquals(0, meta.getMaxColumnNameLength());
+        assertEquals(0, meta.getMaxColumnsInGroupBy());
+        assertEquals(0, meta.getMaxColumnsInIndex());
+        assertEquals(0, meta.getMaxColumnsInOrderBy());
+        assertEquals(0, meta.getMaxColumnsInSelect());
+        assertEquals(0, meta.getMaxColumnsInTable());
+        assertEquals(0, meta.getMaxConnections());
+        assertEquals(0, meta.getMaxCursorNameLength());
+        assertEquals(0, meta.getMaxIndexLength());
+        assertEquals(0, meta.getMaxProcedureNameLength());
+        assertEquals(0, meta.getMaxRowSize());
+        assertEquals(0, meta.getMaxSchemaNameLength());
+        assertEquals(0, meta.getMaxStatementLength());
+        assertEquals(0, meta.getMaxStatements());
+        assertEquals(0, meta.getMaxTableNameLength());
+        assertEquals(0, meta.getMaxTablesInSelect());
+        assertEquals(0, meta.getMaxUserNameLength());
+        assertEquals("procedure", meta.getProcedureTerm());
+
+        assertEquals(ResultSet.CLOSE_CURSORS_AT_COMMIT,
+                meta.getResultSetHoldability());
+        assertEquals(DatabaseMetaData.sqlStateSQL99,
+                meta.getSQLStateType());
+        assertFalse(meta.locatorsUpdateCopy());
+
+        assertEquals("schema", meta.getSchemaTerm());
+        assertEquals("\\", meta.getSearchStringEscape());
+        assertEquals("LIMIT,MINUS,OFFSET,ROWNUM,SYSDATE,SYSTIME,SYSTIMESTAMP,TODAY",
+                meta.getSQLKeywords());
+
+        assertTrue(meta.getURL().startsWith("jdbc:h2:"));
+        assertTrue(meta.getUserName().length() > 1);
+        assertFalse(meta.insertsAreDetected(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.insertsAreDetected(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.insertsAreDetected(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertTrue(meta.isCatalogAtStart());
+        assertFalse(meta.isReadOnly());
+        assertTrue(meta.nullPlusNonNullIsNull());
+        assertFalse(meta.nullsAreSortedAtEnd());
+        assertFalse(meta.nullsAreSortedAtStart());
+        assertFalse(meta.nullsAreSortedHigh());
+        assertTrue(meta.nullsAreSortedLow());
+        assertFalse(meta.othersDeletesAreVisible(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.othersDeletesAreVisible(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.othersDeletesAreVisible(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertFalse(meta.othersInsertsAreVisible(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.othersInsertsAreVisible(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.othersInsertsAreVisible(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertFalse(meta.othersUpdatesAreVisible(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.othersUpdatesAreVisible(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.othersUpdatesAreVisible(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertFalse(meta.ownDeletesAreVisible(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.ownDeletesAreVisible(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.ownDeletesAreVisible(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertFalse(meta.ownInsertsAreVisible(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.ownInsertsAreVisible(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.ownInsertsAreVisible(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertTrue(meta.ownUpdatesAreVisible(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertTrue(meta.ownUpdatesAreVisible(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertTrue(meta.ownUpdatesAreVisible(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertFalse(meta.storesLowerCaseIdentifiers());
+        assertFalse(meta.storesLowerCaseQuotedIdentifiers());
+        assertFalse(meta.storesMixedCaseIdentifiers());
+        assertTrue(meta.storesMixedCaseQuotedIdentifiers());
+        assertTrue(meta.storesUpperCaseIdentifiers());
+        assertFalse(meta.storesUpperCaseQuotedIdentifiers());
+        assertTrue(meta.supportsAlterTableWithAddColumn());
+        assertTrue(meta.supportsAlterTableWithDropColumn());
+        assertTrue(meta.supportsANSI92EntryLevelSQL());
+        assertFalse(meta.supportsANSI92IntermediateSQL());
+        assertFalse(meta.supportsANSI92FullSQL());
+        assertTrue(meta.supportsBatchUpdates());
+        assertTrue(meta.supportsCatalogsInDataManipulation());
+        assertTrue(meta.supportsCatalogsInIndexDefinitions());
+        assertTrue(meta.supportsCatalogsInPrivilegeDefinitions());
+        assertFalse(meta.supportsCatalogsInProcedureCalls());
+        assertTrue(meta.supportsCatalogsInTableDefinitions());
+        assertTrue(meta.supportsColumnAliasing());
+        assertTrue(meta.supportsConvert());
+        assertTrue(meta.supportsConvert(Types.INTEGER, Types.VARCHAR));
+        assertTrue(meta.supportsCoreSQLGrammar());
+        assertTrue(meta.supportsCorrelatedSubqueries());
+        assertFalse(meta.supportsDataDefinitionAndDataManipulationTransactions());
+        assertTrue(meta.supportsDataManipulationTransactionsOnly());
+        assertFalse(meta.supportsDifferentTableCorrelationNames());
+        assertTrue(meta.supportsExpressionsInOrderBy());
+        assertFalse(meta.supportsExtendedSQLGrammar());
+        assertFalse(meta.supportsFullOuterJoins());
+
+        assertTrue(meta.supportsGetGeneratedKeys());
+        assertTrue(meta.supportsMultipleOpenResults());
+        assertFalse(meta.supportsNamedParameters());
+
+        assertTrue(meta.supportsGroupBy());
+        assertTrue(meta.supportsGroupByBeyondSelect());
+        assertTrue(meta.supportsGroupByUnrelated());
+        assertTrue(meta.supportsIntegrityEnhancementFacility());
+        assertTrue(meta.supportsLikeEscapeClause());
+        assertTrue(meta.supportsLimitedOuterJoins());
+        assertTrue(meta.supportsMinimumSQLGrammar());
+        assertFalse(meta.supportsMixedCaseIdentifiers());
+        assertTrue(meta.supportsMixedCaseQuotedIdentifiers());
+        assertFalse(meta.supportsMultipleResultSets());
+        assertTrue(meta.supportsMultipleTransactions());
+        assertTrue(meta.supportsNonNullableColumns());
+        assertFalse(meta.supportsOpenCursorsAcrossCommit());
+        assertFalse(meta.supportsOpenCursorsAcrossRollback());
+        assertTrue(meta.supportsOpenStatementsAcrossCommit());
+        assertTrue(meta.supportsOpenStatementsAcrossRollback());
+        assertTrue(meta.supportsOrderByUnrelated());
+        assertTrue(meta.supportsOuterJoins());
+        assertTrue(meta.supportsPositionedDelete());
+        assertTrue(meta.supportsPositionedUpdate());
+        assertTrue(meta.supportsResultSetConcurrency(
+                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY));
+        assertTrue(meta.supportsResultSetConcurrency(
+                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE));
+        assertTrue(meta.supportsResultSetConcurrency(
+                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY));
+        assertTrue(meta.supportsResultSetConcurrency(
+                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE));
+        assertFalse(meta.supportsResultSetConcurrency(
+                ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY));
+        assertFalse(meta.supportsResultSetConcurrency(
+                ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE));
+
+        assertFalse(meta.supportsResultSetHoldability(
+                ResultSet.HOLD_CURSORS_OVER_COMMIT));
+        assertTrue(meta.supportsResultSetHoldability(
+                ResultSet.CLOSE_CURSORS_AT_COMMIT));
+        assertTrue(meta.supportsSavepoints());
+        assertFalse(meta.supportsStatementPooling());
+
+        assertTrue(meta.supportsResultSetType(
+                ResultSet.TYPE_FORWARD_ONLY));
+        assertTrue(meta.supportsResultSetType(
+                ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.supportsResultSetType(
+                ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertTrue(meta.supportsSchemasInDataManipulation());
+        assertTrue(meta.supportsSchemasInIndexDefinitions());
+        assertTrue(meta.supportsSchemasInPrivilegeDefinitions());
+        assertTrue(meta.supportsSchemasInProcedureCalls());
+        assertTrue(meta.supportsSchemasInTableDefinitions());
+        assertTrue(meta.supportsSelectForUpdate());
+        assertFalse(meta.supportsStoredProcedures());
+        assertTrue(meta.supportsSubqueriesInComparisons());
+        assertTrue(meta.supportsSubqueriesInExists());
+        assertTrue(meta.supportsSubqueriesInIns());
+        assertTrue(meta.supportsSubqueriesInQuantifieds());
+        assertTrue(meta.supportsTableCorrelationNames());
+        assertTrue(meta.supportsTransactions());
+        assertTrue(meta.supportsTransactionIsolationLevel(
+                Connection.TRANSACTION_NONE));
+        assertTrue(meta.supportsTransactionIsolationLevel(
+                Connection.TRANSACTION_READ_COMMITTED));
+        if (!config.multiThreaded) {
+            assertTrue(meta.supportsTransactionIsolationLevel(
+                    Connection.TRANSACTION_READ_UNCOMMITTED));
+        }
+        assertTrue(meta.supportsTransactionIsolationLevel(
+                Connection.TRANSACTION_REPEATABLE_READ));
+        assertTrue(meta.supportsTransactionIsolationLevel(
+                Connection.TRANSACTION_SERIALIZABLE));
+        assertTrue(meta.supportsUnion());
+        assertTrue(meta.supportsUnionAll());
+        assertFalse(meta.updatesAreDetected(ResultSet.TYPE_FORWARD_ONLY));
+        assertFalse(meta.updatesAreDetected(ResultSet.TYPE_SCROLL_INSENSITIVE));
+        assertFalse(meta.updatesAreDetected(ResultSet.TYPE_SCROLL_SENSITIVE));
+        assertFalse(meta.usesLocalFilePerTable());
+        assertTrue(meta.usesLocalFiles());
+        conn.close();
+    }
+
+    private void testMore() throws SQLException {
+        int numericType;
+        String numericName;
+        if (SysProperties.BIG_DECIMAL_IS_DECIMAL) {
+            numericType = Types.DECIMAL;
+            numericName = "DECIMAL";
+        } else {
+            numericType = Types.NUMERIC;
+            numericName = "NUMERIC";
+        }
+        Connection conn = getConnection("metaData");
+        DatabaseMetaData meta = conn.getMetaData();
+        Statement stat = conn.createStatement();
+        ResultSet rs;
+
+        conn.setReadOnly(true);
+        conn.setReadOnly(false);
+        assertFalse(conn.isReadOnly());
+        assertTrue(conn.isReadOnly() == meta.isReadOnly());
+
+        assertTrue(conn == meta.getConnection());
+
+        // currently, setCatalog is ignored
+        conn.setCatalog("XYZ");
+        trace(conn.getCatalog());
+
+        String product = meta.getDatabaseProductName();
+        trace("meta.getDatabaseProductName:" + product);
+
+        String version = meta.getDatabaseProductVersion();
+        trace("meta.getDatabaseProductVersion:" + version);
+
+        int major = meta.getDriverMajorVersion();
+        trace("meta.getDriverMajorVersion:" + major);
+
+        int minor = meta.getDriverMinorVersion();
+        trace("meta.getDriverMinorVersion:" + minor);
+
+        String driverName = meta.getDriverName();
+        trace("meta.getDriverName:" + driverName);
+
+        String driverVersion = meta.getDriverVersion();
+        trace("meta.getDriverVersion:" + driverVersion);
+
+        meta.getSearchStringEscape();
+
+        String url = meta.getURL();
+        trace("meta.getURL:" + url);
+
+        String user = meta.getUserName();
+        trace("meta.getUserName:" + user);
+
+        trace("meta.nullsAreSortedHigh:" + meta.nullsAreSortedHigh());
+        trace("meta.nullsAreSortedLow:" + meta.nullsAreSortedLow());
+        trace("meta.nullsAreSortedAtStart:" + meta.nullsAreSortedAtStart());
+        trace("meta.nullsAreSortedAtEnd:" + meta.nullsAreSortedAtEnd());
+        int count = (meta.nullsAreSortedHigh() ? 1 : 0) +
+                (meta.nullsAreSortedLow() ? 1 : 0) +
+                (meta.nullsAreSortedAtStart() ? 1 : 0) +
+                (meta.nullsAreSortedAtEnd() ? 1 : 0);
+        assertTrue(count == 1);
+
+        trace("meta.allProceduresAreCallable:" +
+                meta.allProceduresAreCallable());
+        assertTrue(meta.allProceduresAreCallable());
+
+        trace("meta.allTablesAreSelectable:" + meta.allTablesAreSelectable());
+        assertTrue(meta.allTablesAreSelectable());
+
+        trace("getTables");
+        rs = meta.getTables(null, Constants.SCHEMA_MAIN, null,
+                new String[] { "TABLE" });
+        assertResultSetMeta(rs, 11, new String[] { "TABLE_CAT", "TABLE_SCHEM",
+                "TABLE_NAME", "TABLE_TYPE", "REMARKS", "TYPE_CAT",
+                "TYPE_SCHEM", "TYPE_NAME", "SELF_REFERENCING_COL_NAME",
+                "REF_GENERATION", "SQL" }, new int[] { Types.VARCHAR,
+                Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                Types.VARCHAR, Types.VARCHAR }, null, null);
+        if (rs.next()) {
+            fail("Database is not empty after dropping all tables");
+        }
+        stat.executeUpdate("CREATE TABLE TEST(" + "ID INT PRIMARY KEY," +
+                "TEXT_V VARCHAR(120)," + "DEC_V DECIMAL(12,3)," +
+                "DATE_V DATETIME," + "BLOB_V BLOB," + "CLOB_V CLOB" + ")");
+        rs = meta.getTables(null, Constants.SCHEMA_MAIN, null,
+                new String[] { "TABLE" });
+        assertResultSetOrdered(rs, new String[][] { { CATALOG,
+                Constants.SCHEMA_MAIN, "TEST", "TABLE", "" } });
+        trace("getColumns");
+        rs = meta.getColumns(null, null, "TEST", null);
+        assertResultSetMeta(rs, 24, new String[] { "TABLE_CAT", "TABLE_SCHEM",
+                "TABLE_NAME", "COLUMN_NAME", "DATA_TYPE", "TYPE_NAME",
+                "COLUMN_SIZE", "BUFFER_LENGTH", "DECIMAL_DIGITS",
+                "NUM_PREC_RADIX", "NULLABLE", "REMARKS", "COLUMN_DEF",
+                "SQL_DATA_TYPE", "SQL_DATETIME_SUB", "CHAR_OCTET_LENGTH",
+                "ORDINAL_POSITION", "IS_NULLABLE", "SCOPE_CATALOG",
+                "SCOPE_SCHEMA", "SCOPE_TABLE", "SOURCE_DATA_TYPE",
+                "IS_AUTOINCREMENT", "SCOPE_CATLOG" }, new int[] {
+                Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                Types.INTEGER, Types.VARCHAR, Types.INTEGER, Types.INTEGER,
+                Types.INTEGER, Types.INTEGER, Types.INTEGER, Types.VARCHAR,
+                Types.VARCHAR, Types.INTEGER, Types.INTEGER, Types.INTEGER,
+                Types.INTEGER, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR,
+                Types.VARCHAR, Types.SMALLINT, Types.VARCHAR, Types.VARCHAR },
+                null, null);
+        assertResultSetOrdered(rs, new String[][] {
+                { CATALOG, Constants.SCHEMA_MAIN, "TEST", "ID",
+                        "" + Types.INTEGER, "INTEGER", "10", "10", "0", "10",
+                        "" + DatabaseMetaData.columnNoNulls, "", null,
+                        "" + Types.INTEGER, "0", "10", "1", "NO" },
+                { CATALOG, Constants.SCHEMA_MAIN, "TEST", "TEXT_V",
+                        "" + Types.VARCHAR, "VARCHAR", "120", "120", "0", "10",
+                        "" + DatabaseMetaData.columnNullable, "", null,
+                        "" + Types.VARCHAR, "0", "120", "2", "YES" },
+                { CATALOG, Constants.SCHEMA_MAIN, "TEST", "DEC_V",
+                        "" + numericType, numericName, "12", "12", "3", "10",
+                        "" + DatabaseMetaData.columnNullable, "", null,
+                        "" + numericType, "0", "12", "3", "YES" },
+                { CATALOG, Constants.SCHEMA_MAIN, "TEST", "DATE_V",
+                        "" + Types.TIMESTAMP, "TIMESTAMP", "26", "26", "6",
+                        "10", "" + DatabaseMetaData.columnNullable, "", null,
+                        "" + Types.TIMESTAMP, "0", "26", "4", "YES" },
+                { CATALOG, Constants.SCHEMA_MAIN, "TEST", "BLOB_V",
+                        "" + Types.BLOB, "BLOB", "" + Integer.MAX_VALUE,
+                        "" + Integer.MAX_VALUE, "0", "10",
+                        "" + DatabaseMetaData.columnNullable, "", null,
+                        "" + Types.BLOB, "0", "" + Integer.MAX_VALUE, "5",
+                        "YES" },
+                { CATALOG, Constants.SCHEMA_MAIN, "TEST", "CLOB_V",
+                        "" + Types.CLOB, "CLOB", "" + Integer.MAX_VALUE,
+                        "" + Integer.MAX_VALUE, "0", "10",
+                        "" + DatabaseMetaData.columnNullable, "", null,
+                        "" + Types.CLOB, "0", "" + Integer.MAX_VALUE, "6",
+                        "YES" } });
+        /*
+         * rs=meta.getColumns(null,null,"TEST",null); while(rs.next()) { int
+         * datatype=rs.getInt(5); }
+         */
+        trace("getIndexInfo");
+        stat.executeUpdate("CREATE INDEX IDX_TEXT_DEC ON TEST(TEXT_V,DEC_V)");
+        stat.executeUpdate("CREATE UNIQUE INDEX IDX_DATE ON TEST(DATE_V)");
+        rs = meta.getIndexInfo(null, null, "TEST", false, false);
+        assertResultSetMeta(rs, 14, new String[] { "TABLE_CAT", "TABLE_SCHEM",
"TABLE_NAME", "NON_UNIQUE", "INDEX_QUALIFIER", "INDEX_NAME", + "TYPE", "ORDINAL_POSITION", "COLUMN_NAME", "ASC_OR_DESC", + "CARDINALITY", "PAGES", "FILTER_CONDITION", "SORT_TYPE" }, + new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.BOOLEAN, Types.VARCHAR, Types.VARCHAR, + Types.SMALLINT, Types.SMALLINT, Types.VARCHAR, + Types.VARCHAR, Types.INTEGER, Types.INTEGER, + Types.VARCHAR, Types.INTEGER }, null, null); + assertResultSetOrdered(rs, new String[][] { + { CATALOG, Constants.SCHEMA_MAIN, "TEST", "FALSE", CATALOG, + "IDX_DATE", "" + DatabaseMetaData.tableIndexOther, "1", + "DATE_V", "A", "0", "0", "" }, + { CATALOG, Constants.SCHEMA_MAIN, "TEST", "FALSE", CATALOG, + "PRIMARY_KEY_2", "" + DatabaseMetaData.tableIndexOther, + "1", "ID", "A", "0", "0", "" }, + { CATALOG, Constants.SCHEMA_MAIN, "TEST", "TRUE", CATALOG, + "IDX_TEXT_DEC", "" + DatabaseMetaData.tableIndexOther, + "1", "TEXT_V", "A", "0", "0", "" }, + { CATALOG, Constants.SCHEMA_MAIN, "TEST", "TRUE", CATALOG, + "IDX_TEXT_DEC", "" + DatabaseMetaData.tableIndexOther, + "2", "DEC_V", "A", "0", "0", "" }, }); + stat.executeUpdate("DROP INDEX IDX_TEXT_DEC"); + stat.executeUpdate("DROP INDEX IDX_DATE"); + rs = meta.getIndexInfo(null, null, "TEST", false, false); + assertResultSetMeta(rs, 14, new String[] { "TABLE_CAT", "TABLE_SCHEM", + "TABLE_NAME", "NON_UNIQUE", "INDEX_QUALIFIER", "INDEX_NAME", + "TYPE", "ORDINAL_POSITION", "COLUMN_NAME", "ASC_OR_DESC", + "CARDINALITY", "PAGES", "FILTER_CONDITION", "SORT_TYPE" }, + new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.BOOLEAN, Types.VARCHAR, Types.VARCHAR, + Types.SMALLINT, Types.SMALLINT, Types.VARCHAR, + Types.VARCHAR, Types.INTEGER, Types.INTEGER, + Types.VARCHAR, Types.INTEGER }, null, null); + assertResultSetOrdered(rs, new String[][] { { CATALOG, + Constants.SCHEMA_MAIN, "TEST", "FALSE", CATALOG, + "PRIMARY_KEY_2", "" + DatabaseMetaData.tableIndexOther, "1", + "ID", "A", "0", "0", "" } }); + trace("getPrimaryKeys"); + rs = 
meta.getPrimaryKeys(null, null, "TEST"); + assertResultSetMeta(rs, 6, new String[] { "TABLE_CAT", "TABLE_SCHEM", + "TABLE_NAME", "COLUMN_NAME", "KEY_SEQ", "PK_NAME" }, new int[] { + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.SMALLINT, Types.VARCHAR }, null, null); + assertResultSetOrdered(rs, new String[][] { { CATALOG, + Constants.SCHEMA_MAIN, "TEST", "ID", "1", "CONSTRAINT_2" }, }); + trace("getTables - using a wildcard"); + stat.executeUpdate( + "CREATE TABLE T_2(B INT,A VARCHAR(6),C INT,PRIMARY KEY(C,A,B))"); + stat.executeUpdate( + "CREATE TABLE TX2(B INT,A VARCHAR(6),C INT,PRIMARY KEY(C,A,B))"); + rs = meta.getTables(null, null, "T_2", null); + assertResultSetOrdered(rs, new String[][] { + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "TABLE", "" }, + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "TABLE", "" } }); + trace("getTables - using a quoted _ character"); + rs = meta.getTables(null, null, "T\\_2", null); + assertResultSetOrdered(rs, new String[][] { { CATALOG, + Constants.SCHEMA_MAIN, "T_2", "TABLE", "" } }); + trace("getTables - using the % wildcard"); + rs = meta.getTables(null, Constants.SCHEMA_MAIN, "%", + new String[] { "TABLE" }); + assertResultSetOrdered(rs, new String[][] { + { CATALOG, Constants.SCHEMA_MAIN, "TEST", "TABLE", "" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "TABLE", "" }, + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "TABLE", "" } }); + stat.execute("DROP TABLE TEST"); + + trace("getColumns - using wildcards"); + rs = meta.getColumns(null, null, "___", "B%"); + assertResultSetOrdered(rs, new String[][] { + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "B", + "" + Types.INTEGER, "INTEGER", "10" }, + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "B", + "" + Types.INTEGER, "INTEGER", "10" }, }); + trace("getColumns - using wildcards"); + rs = meta.getColumns(null, null, "_\\__", "%"); + assertResultSetOrdered(rs, new String[][] { + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "B", + "" + Types.INTEGER, "INTEGER", "10" }, + { 
CATALOG, Constants.SCHEMA_MAIN, "T_2", "A", + "" + Types.VARCHAR, "VARCHAR", "6" }, + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "C", + "" + Types.INTEGER, "INTEGER", "10" }, }); + trace("getIndexInfo"); + stat.executeUpdate("CREATE UNIQUE INDEX A_INDEX ON TX2(B,C,A)"); + stat.executeUpdate("CREATE INDEX B_INDEX ON TX2(A,B,C)"); + rs = meta.getIndexInfo(null, null, "TX2", false, false); + assertResultSetOrdered(rs, new String[][] { + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "FALSE", CATALOG, + "A_INDEX", "" + DatabaseMetaData.tableIndexOther, "1", + "B", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "FALSE", CATALOG, + "A_INDEX", "" + DatabaseMetaData.tableIndexOther, "2", + "C", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "FALSE", CATALOG, + "A_INDEX", "" + DatabaseMetaData.tableIndexOther, "3", + "A", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "FALSE", CATALOG, + "PRIMARY_KEY_14", + "" + DatabaseMetaData.tableIndexOther, "1", "C", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "FALSE", CATALOG, + "PRIMARY_KEY_14", + "" + DatabaseMetaData.tableIndexOther, "2", "A", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "FALSE", CATALOG, + "PRIMARY_KEY_14", + "" + DatabaseMetaData.tableIndexOther, "3", "B", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "TRUE", CATALOG, + "B_INDEX", "" + DatabaseMetaData.tableIndexOther, "1", + "A", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "TRUE", CATALOG, + "B_INDEX", "" + DatabaseMetaData.tableIndexOther, "2", + "B", "A" }, + { CATALOG, Constants.SCHEMA_MAIN, "TX2", "TRUE", CATALOG, + "B_INDEX", "" + DatabaseMetaData.tableIndexOther, "3", + "C", "A" }, }); + trace("getPrimaryKeys"); + rs = meta.getPrimaryKeys(null, null, "T_2"); + assertResultSetOrdered(rs, new String[][] { + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "A", "2", "CONSTRAINT_1" }, + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "B", "3", "CONSTRAINT_1" }, + { CATALOG, Constants.SCHEMA_MAIN, "T_2", "C", "1", "CONSTRAINT_1" }, }); + 
stat.executeUpdate("DROP TABLE TX2"); + stat.executeUpdate("DROP TABLE T_2"); + stat.executeUpdate("CREATE TABLE PARENT(ID INT PRIMARY KEY)"); + stat.executeUpdate("CREATE TABLE CHILD(P_ID INT,ID INT," + + "PRIMARY KEY(P_ID,ID),FOREIGN KEY(P_ID) REFERENCES PARENT(ID))"); + + trace("getImportedKeys"); + rs = meta.getImportedKeys(null, null, "CHILD"); + assertResultSetMeta(rs, 14, new String[] { "PKTABLE_CAT", + "PKTABLE_SCHEM", "PKTABLE_NAME", "PKCOLUMN_NAME", + "FKTABLE_CAT", "FKTABLE_SCHEM", "FKTABLE_NAME", + "FKCOLUMN_NAME", "KEY_SEQ", "UPDATE_RULE", "DELETE_RULE", + "FK_NAME", "PK_NAME", "DEFERRABILITY" }, new int[] { + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.SMALLINT, Types.SMALLINT, Types.SMALLINT, Types.VARCHAR, + Types.VARCHAR, Types.SMALLINT }, null, null); + // TODO test + // testResultSetOrdered(rs, new String[][] { { null, null, "PARENT", + // "ID", + // null, null, "CHILD", "P_ID", "1", + // "" + DatabaseMetaData.importedKeyNoAction, + // "" + DatabaseMetaData.importedKeyNoAction, "FK_1", null, + // "" + DatabaseMetaData.importedKeyNotDeferrable}}); + + trace("getExportedKeys"); + rs = meta.getExportedKeys(null, null, "PARENT"); + assertResultSetMeta(rs, 14, new String[] { "PKTABLE_CAT", + "PKTABLE_SCHEM", "PKTABLE_NAME", "PKCOLUMN_NAME", + "FKTABLE_CAT", "FKTABLE_SCHEM", "FKTABLE_NAME", + "FKCOLUMN_NAME", "KEY_SEQ", "UPDATE_RULE", "DELETE_RULE", + "FK_NAME", "PK_NAME", "DEFERRABILITY" }, new int[] { + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.SMALLINT, Types.SMALLINT, Types.SMALLINT, Types.VARCHAR, + Types.VARCHAR, Types.SMALLINT }, null, null); + // TODO test + /* + * testResultSetOrdered(rs, new String[][]{ { null,null,"PARENT","ID", + * null,null,"CHILD","P_ID", + * "1",""+DatabaseMetaData.importedKeyNoAction, + * ""+DatabaseMetaData.importedKeyNoAction, + * 
null,null,""+DatabaseMetaData.importedKeyNotDeferrable } } ); + */ + trace("getCrossReference"); + rs = meta.getCrossReference(null, null, "PARENT", null, null, "CHILD"); + assertResultSetMeta(rs, 14, new String[] { "PKTABLE_CAT", + "PKTABLE_SCHEM", "PKTABLE_NAME", "PKCOLUMN_NAME", + "FKTABLE_CAT", "FKTABLE_SCHEM", "FKTABLE_NAME", + "FKCOLUMN_NAME", "KEY_SEQ", "UPDATE_RULE", "DELETE_RULE", + "FK_NAME", "PK_NAME", "DEFERRABILITY" }, new int[] { + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.SMALLINT, Types.SMALLINT, Types.SMALLINT, Types.VARCHAR, + Types.VARCHAR, Types.SMALLINT }, null, null); + // TODO test + /* + * testResultSetOrdered(rs, new String[][]{ { null,null,"PARENT","ID", + * null,null,"CHILD","P_ID", + * "1",""+DatabaseMetaData.importedKeyNoAction, + * ""+DatabaseMetaData.importedKeyNoAction, + * null,null,""+DatabaseMetaData.importedKeyNotDeferrable } } ); + */ + + rs = meta.getSchemas(); + assertResultSetMeta(rs, 3, new String[] { "TABLE_SCHEM", + "TABLE_CATALOG", "IS_DEFAULT" }, new int[] { Types.VARCHAR, + Types.VARCHAR, Types.BOOLEAN }, null, null); + assertTrue(rs.next()); + assertEquals("INFORMATION_SCHEMA", rs.getString(1)); + assertTrue(rs.next()); + assertEquals("PUBLIC", rs.getString(1)); + assertFalse(rs.next()); + + rs = meta.getSchemas(null, null); + assertResultSetMeta(rs, 3, new String[] { "TABLE_SCHEM", + "TABLE_CATALOG", "IS_DEFAULT" }, new int[] { Types.VARCHAR, + Types.VARCHAR, Types.BOOLEAN }, null, null); + assertTrue(rs.next()); + assertEquals("INFORMATION_SCHEMA", rs.getString(1)); + assertTrue(rs.next()); + assertEquals("PUBLIC", rs.getString(1)); + assertFalse(rs.next()); + + rs = meta.getCatalogs(); + assertResultSetMeta(rs, 1, new String[] { "TABLE_CAT" }, + new int[] { Types.VARCHAR }, null, null); + assertResultSetOrdered(rs, new String[][] { { CATALOG } }); + + rs = meta.getTableTypes(); + assertResultSetMeta(rs, 1, new String[] { 
"TABLE_TYPE" }, + new int[] { Types.VARCHAR }, null, null); + assertResultSetOrdered(rs, new String[][] { + { "EXTERNAL" }, { "SYSTEM TABLE" }, + { "TABLE" }, { "TABLE LINK" }, { "VIEW" } }); + + rs = meta.getTypeInfo(); + assertResultSetMeta(rs, 18, new String[] { "TYPE_NAME", "DATA_TYPE", + "PRECISION", "LITERAL_PREFIX", "LITERAL_SUFFIX", + "CREATE_PARAMS", "NULLABLE", "CASE_SENSITIVE", "SEARCHABLE", + "UNSIGNED_ATTRIBUTE", "FIXED_PREC_SCALE", "AUTO_INCREMENT", + "LOCAL_TYPE_NAME", "MINIMUM_SCALE", "MAXIMUM_SCALE", + "SQL_DATA_TYPE", "SQL_DATETIME_SUB", "NUM_PREC_RADIX" }, + new int[] { Types.VARCHAR, Types.INTEGER, Types.INTEGER, + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.SMALLINT, Types.BOOLEAN, Types.SMALLINT, + Types.BOOLEAN, Types.BOOLEAN, Types.BOOLEAN, + Types.VARCHAR, Types.SMALLINT, Types.SMALLINT, + Types.INTEGER, Types.INTEGER, Types.INTEGER }, null, + null); + + rs = meta.getTablePrivileges(null, null, null); + assertResultSetMeta(rs, 7, + new String[] { "TABLE_CAT", "TABLE_SCHEM", "TABLE_NAME", + "GRANTOR", "GRANTEE", "PRIVILEGE", "IS_GRANTABLE" }, + new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.VARCHAR, Types.VARCHAR }, null, null); + + rs = meta.getColumnPrivileges(null, null, "TEST", null); + assertResultSetMeta(rs, 8, new String[] { "TABLE_CAT", "TABLE_SCHEM", + "TABLE_NAME", "COLUMN_NAME", "GRANTOR", "GRANTEE", "PRIVILEGE", + "IS_GRANTABLE" }, new int[] { Types.VARCHAR, Types.VARCHAR, + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, + Types.VARCHAR, Types.VARCHAR, Types.VARCHAR }, null, null); + + assertTrue(conn.getWarnings() == null); + conn.clearWarnings(); + assertTrue(conn.getWarnings() == null); + conn.close(); + } + + private void testGeneral() throws SQLException { + Connection conn = getConnection("metaData"); + DatabaseMetaData meta = conn.getMetaData(); + + Statement stat = conn.createStatement(); + + stat.execute("CREATE TABLE TEST(ID INT 
PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("CREATE INDEX IDXNAME ON TEST(NAME)"); + + ResultSet rs; + + rs = meta.getCatalogs(); + rs.next(); + assertEquals(CATALOG, rs.getString(1)); + assertFalse(rs.next()); + + rs = meta.getSchemas(); + rs.next(); + assertEquals("INFORMATION_SCHEMA", rs.getString("TABLE_SCHEM")); + rs.next(); + assertEquals("PUBLIC", rs.getString("TABLE_SCHEM")); + assertFalse(rs.next()); + + rs = meta.getSchemas(null, null); + rs.next(); + assertEquals("INFORMATION_SCHEMA", rs.getString("TABLE_SCHEM")); + rs.next(); + assertEquals("PUBLIC", rs.getString("TABLE_SCHEM")); + assertFalse(rs.next()); + + rs = meta.getSchemas(null, "PUBLIC"); + rs.next(); + assertEquals("PUBLIC", rs.getString("TABLE_SCHEM")); + assertFalse(rs.next()); + + rs = meta.getTableTypes(); + rs.next(); + assertEquals("EXTERNAL", rs.getString("TABLE_TYPE")); + rs.next(); + assertEquals("SYSTEM TABLE", rs.getString("TABLE_TYPE")); + rs.next(); + assertEquals("TABLE", rs.getString("TABLE_TYPE")); + rs.next(); + assertEquals("TABLE LINK", rs.getString("TABLE_TYPE")); + rs.next(); + assertEquals("VIEW", rs.getString("TABLE_TYPE")); + assertFalse(rs.next()); + + rs = meta.getTables(null, Constants.SCHEMA_MAIN, + null, new String[] { "TABLE" }); + assertTrue(rs.getStatement() == null); + rs.next(); + assertEquals("TEST", rs.getString("TABLE_NAME")); + assertFalse(rs.next()); + + rs = meta.getTables(null, "INFORMATION_SCHEMA", + null, new String[] { "TABLE", "SYSTEM TABLE" }); + rs.next(); + assertEquals("CATALOGS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("COLLATIONS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("COLUMNS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("COLUMN_PRIVILEGES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("CONSTANTS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("CONSTRAINTS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("CROSS_REFERENCES", rs.getString("TABLE_NAME")); + 
rs.next(); + assertEquals("DOMAINS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("FUNCTION_ALIASES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("FUNCTION_COLUMNS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("HELP", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("INDEXES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("IN_DOUBT", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("KEY_COLUMN_USAGE", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("LOCKS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("QUERY_STATISTICS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("REFERENTIAL_CONSTRAINTS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("RIGHTS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("ROLES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("SCHEMATA", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("SEQUENCES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("SESSIONS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("SESSION_STATE", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("SETTINGS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("SYNONYMS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("TABLES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("TABLE_CONSTRAINTS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("TABLE_PRIVILEGES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("TABLE_TYPES", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("TRIGGERS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("TYPE_INFO", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("USERS", rs.getString("TABLE_NAME")); + rs.next(); + assertEquals("VIEWS", rs.getString("TABLE_NAME")); + assertFalse(rs.next()); + + rs = meta.getColumns(null, null, "TEST", null); + rs.next(); + assertEquals("ID", rs.getString("COLUMN_NAME")); + rs.next(); 
+ assertEquals("NAME", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + + rs = meta.getPrimaryKeys(null, null, "TEST"); + rs.next(); + assertEquals("ID", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + + rs = meta.getBestRowIdentifier(null, null, "TEST", + DatabaseMetaData.bestRowSession, false); + rs.next(); + assertEquals("ID", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + + rs = meta.getIndexInfo(null, null, "TEST", false, false); + rs.next(); + String index = rs.getString("INDEX_NAME"); + assertTrue(index.startsWith("PRIMARY_KEY")); + assertEquals("ID", rs.getString("COLUMN_NAME")); + rs.next(); + assertEquals("IDXNAME", rs.getString("INDEX_NAME")); + assertEquals("NAME", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + + rs = meta.getIndexInfo(null, null, "TEST", true, false); + rs.next(); + index = rs.getString("INDEX_NAME"); + assertTrue(index.startsWith("PRIMARY_KEY")); + assertEquals("ID", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + + rs = meta.getVersionColumns(null, null, "TEST"); + assertFalse(rs.next()); + + stat.execute("DROP TABLE TEST"); + + rs = stat.executeQuery("SELECT * FROM INFORMATION_SCHEMA.SETTINGS"); + while (rs.next()) { + String name = rs.getString("NAME"); + String value = rs.getString("VALUE"); + trace(name + "=" + value); + } + + testMore(); + + // meta.getTablePrivileges() + + // meta.getAttributes() + // meta.getColumnPrivileges() + // meta.getSuperTables() + // meta.getSuperTypes() + // meta.getTypeInfo() + + conn.close(); + + deleteDb("metaData"); + } + + private void testAllowLiteralsNone() throws SQLException { + Connection conn = getConnection("metaData"); + Statement stat = conn.createStatement(); + stat.execute("SET ALLOW_LITERALS NONE"); + DatabaseMetaData meta = conn.getMetaData(); + // meta.getAttributes(null, null, null, null); + meta.getBestRowIdentifier(null, null, null, 0, false); + meta.getCatalogs(); + // meta.getClientInfoProperties(); + 
meta.getColumnPrivileges(null, null, null, null); + meta.getColumns(null, null, null, null); + meta.getCrossReference(null, null, null, null, null, null); + meta.getExportedKeys(null, null, null); + // meta.getFunctionColumns(null, null, null, null); + // meta.getFunctions(null, null, null); + meta.getImportedKeys(null, null, null); + meta.getIndexInfo(null, null, null, false, false); + meta.getPrimaryKeys(null, null, null); + meta.getProcedureColumns(null, null, null, null); + meta.getProcedures(null, null, null); + meta.getSchemas(); + meta.getSchemas(null, null); + meta.getSuperTables(null, null, null); + // meta.getSuperTypes(null, null, null); + meta.getTablePrivileges(null, null, null); + meta.getTables(null, null, null, null); + meta.getTableTypes(); + meta.getTypeInfo(); + meta.getUDTs(null, null, null, null); + meta.getVersionColumns(null, null, null); + conn.close(); + deleteDb("metaData"); + } + + private void testClientInfo() throws SQLException { + Connection conn = getConnection("metaData"); + assertNull(conn.getClientInfo("xxx")); + DatabaseMetaData meta = conn.getMetaData(); + ResultSet rs = meta.getClientInfoProperties(); + int count = 0; + while (rs.next()) { + count++; + } + if (config.networked) { + // server0, numServers + assertEquals(2, count); + } else { + // numServers + assertEquals(1, count); + } + conn.close(); + deleteDb("metaData"); + } + + private void testSessionsUncommitted() throws SQLException { + if (config.mvcc || config.memory) { + return; + } + Connection conn = getConnection("metaData"); + conn.setAutoCommit(false); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("begin transaction"); + for (int i = 0; i < 6; i++) { + stat.execute("insert into test values (1)"); + } + ResultSet rs = stat.executeQuery("select contains_uncommitted " + + "from INFORMATION_SCHEMA.SESSIONS"); + rs.next(); + assertEquals(true, rs.getBoolean(1)); + rs.close(); + stat.execute("commit"); + rs = 
stat.executeQuery("select contains_uncommitted " + + "from INFORMATION_SCHEMA.SESSIONS"); + rs.next(); + assertEquals(false, rs.getBoolean(1)); + conn.close(); + deleteDb("metaData"); + } + + private void testQueryStatistics() throws SQLException { + Connection conn = getConnection("metaData"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar) as " + + "select x, space(1000) from system_range(1, 2000)"); + + ResultSet rs = stat.executeQuery( + "select * from INFORMATION_SCHEMA.QUERY_STATISTICS"); + assertFalse(rs.next()); + rs.close(); + stat.execute("SET QUERY_STATISTICS TRUE"); + int count = 100; + for (int i = 0; i < count; i++) { + execute(stat, "select * from test limit 10"); + } + // The "order by" makes the result set more stable on Windows, where the + // timer resolution is not that great + rs = stat.executeQuery( + "select * from INFORMATION_SCHEMA.QUERY_STATISTICS " + + "ORDER BY EXECUTION_COUNT desc"); + assertTrue(rs.next()); + assertEquals("select * from test limit 10", rs.getString("SQL_STATEMENT")); + assertEquals(count, rs.getInt("EXECUTION_COUNT")); + assertEquals(config.lazy ?
0 : 10 * count, rs.getInt("CUMULATIVE_ROW_COUNT")); + rs.close(); + conn.close(); + deleteDb("metaData"); + } + + private void testQueryStatisticsLimit() throws SQLException { + Connection conn = getConnection("metaData"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar) as " + + "select x, space(1000) from system_range(1, 2000)"); + + ResultSet rs = stat.executeQuery( + "select * from INFORMATION_SCHEMA.QUERY_STATISTICS"); + assertFalse(rs.next()); + rs.close(); + + //first, test setting the limit before activating statistics + int statisticsMaxEntries = 200; + //prevent test limit being less than or equal to default limit + assertTrue(statisticsMaxEntries > Constants.QUERY_STATISTICS_MAX_ENTRIES); + stat.execute("SET QUERY_STATISTICS_MAX_ENTRIES " + statisticsMaxEntries); + stat.execute("SET QUERY_STATISTICS TRUE"); + for (int i = 0; i < statisticsMaxEntries * 2; i++) { + stat.execute("select * from test where id = " + i); + } + rs = stat.executeQuery("select count(*) from INFORMATION_SCHEMA.QUERY_STATISTICS"); + assertTrue(rs.next()); + assertEquals(statisticsMaxEntries, rs.getInt(1)); + rs.close(); + + //second, test changing the limit once statistics is activated + int statisticsMaxEntriesNew = 50; + //prevent new test limit being greater than or equal to default limit + assertTrue(statisticsMaxEntriesNew < Constants.QUERY_STATISTICS_MAX_ENTRIES); + stat.execute("SET QUERY_STATISTICS_MAX_ENTRIES " + statisticsMaxEntriesNew); + for (int i = 0; i < statisticsMaxEntriesNew * 2; i++) { + stat.execute("select * from test where id = " + i); + } + rs = stat.executeQuery("select count(*) from INFORMATION_SCHEMA.QUERY_STATISTICS"); + assertTrue(rs.next()); + assertEquals(statisticsMaxEntriesNew, rs.getInt(1)); + rs.close(); + + conn.close(); + deleteDb("metaData"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestNativeSQL.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestNativeSQL.java
new file mode 100644 index 0000000000000..a32b7fb0e99bf --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestNativeSQL.java @@ -0,0 +1,263 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests the Connection.nativeSQL method. + */ +public class TestNativeSQL extends TestBase { + + private static final String[] PAIRS = { + "CREATE TABLE TEST(ID INT PRIMARY KEY)", + "CREATE TABLE TEST(ID INT PRIMARY KEY)", + + "INSERT INTO TEST VALUES(1)", + "INSERT INTO TEST VALUES(1)", + + "SELECT '{nothing}' FROM TEST", + "SELECT '{nothing}' FROM TEST", + + "SELECT '{fn ABS(1)}' FROM TEST", + "SELECT '{fn ABS(1)}' FROM TEST", + + "SELECT {d '2001-01-01'} FROM TEST", + "SELECT d '2001-01-01' FROM TEST", + + "SELECT {t '20:00:00'} FROM TEST", + "SELECT t '20:00:00' FROM TEST", + + "SELECT {ts '2001-01-01 20:00:00'} FROM TEST", + "SELECT ts '2001-01-01 20:00:00' FROM TEST", + + "SELECT {fn CONCAT('{fn x}','{oj}')} FROM TEST", + "SELECT CONCAT('{fn x}','{oj}') FROM TEST", + + "SELECT * FROM {oj TEST T1 LEFT OUTER JOIN TEST T2 ON T1.ID=T2.ID}", + "SELECT * FROM TEST T1 LEFT OUTER JOIN TEST T2 ON T1.ID=T2.ID ", + + "SELECT * FROM TEST WHERE '{' LIKE '{{' {escape '{'}", + "SELECT * FROM TEST WHERE '{' LIKE '{{' escape '{' ", + + "SELECT * FROM TEST WHERE '}' LIKE '}}' {escape '}'}", + "SELECT * FROM TEST WHERE '}' LIKE '}}' escape '}' ", + + "{call TEST('}')}", " call TEST('}') ", + + "{?= call TEST('}')}", " ?= call TEST('}') ", + + "{? = call TEST('}')}", " ? = call TEST('}') ", + + "{{{{this is a bug}", null, }; + + private Connection conn; + + /** + * Run just this test. 
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws SQLException {
+        deleteDb("nativeSql");
+        conn = getConnection("nativeSql");
+        testPairs();
+        testCases();
+        testRandom();
+        testQuotes();
+        conn.close();
+        assertTrue(conn.isClosed());
+        deleteDb("nativeSql");
+    }
+
+    private void testQuotes() throws SQLException {
+        Statement stat = conn.createStatement();
+        Random random = new Random(1);
+        String s = "'\"$/-* \n";
+        for (int i = 0; i < 200; i++) {
+            StringBuilder buffQuoted = new StringBuilder();
+            StringBuilder buffRaw = new StringBuilder();
+            if (random.nextBoolean()) {
+                buffQuoted.append("'");
+                for (int j = 0; j < 10; j++) {
+                    char c = s.charAt(random.nextInt(s.length()));
+                    if (c == '\'') {
+                        buffQuoted.append('\'');
+                    }
+                    buffQuoted.append(c);
+                    buffRaw.append(c);
+                }
+                buffQuoted.append("'");
+            } else {
+                buffQuoted.append("$$");
+                for (int j = 0; j < 10; j++) {
+                    char c = s.charAt(random.nextInt(s.length()));
+                    buffQuoted.append(c);
+                    buffRaw.append(c);
+                    if (c == '$') {
+                        buffQuoted.append(' ');
+                        buffRaw.append(' ');
+                    }
+                }
+                buffQuoted.append("$$");
+            }
+            String sql = "CALL " + buffQuoted.toString();
+            ResultSet rs = stat.executeQuery(sql);
+            rs.next();
+            String raw = buffRaw.toString();
+            assertEquals(raw, rs.getString(1));
+        }
+    }
+
+    private void testRandom() throws SQLException {
+        Random random = new Random(1);
+        for (int i = 0; i < 100; i++) {
+            StringBuilder buff = new StringBuilder("{oj }");
+            String s = "{}\'\"-/*$ $-";
+            for (int j = random.nextInt(30); j > 0; j--) {
+                buff.append(s.charAt(random.nextInt(s.length())));
+            }
+            String sql = buff.toString();
+            try {
+                conn.nativeSQL(sql);
+            } catch (SQLException e) {
+                assertKnownException(sql, e);
+            }
+        }
+        String smallest = null;
+        for (int i = 0; i < 1000; i++) {
+            StringBuilder buff = new StringBuilder("{oj }");
+            for (int j = random.nextInt(10); j > 0; j--) {
+                String s;
+                switch (random.nextInt(7)) {
+                case 0:
+                    buff.append(" $$");
+                    s = "{}\'\"-/* a\n";
+                    for (int k = random.nextInt(5); k > 0; k--) {
+                        buff.append(s.charAt(random.nextInt(s.length())));
+                    }
+                    buff.append("$$");
+                    break;
+                case 1:
+                    buff.append("'");
+                    s = "{}\"-/*$ a\n";
+                    for (int k = random.nextInt(5); k > 0; k--) {
+                        buff.append(s.charAt(random.nextInt(s.length())));
+                    }
+                    buff.append("'");
+                    break;
+                case 2:
+                    buff.append("\"");
+                    s = "{}'-/*$ a\n";
+                    for (int k = random.nextInt(5); k > 0; k--) {
+                        buff.append(s.charAt(random.nextInt(s.length())));
+                    }
+                    buff.append("\"");
+                    break;
+                case 3:
+                    buff.append("/*");
+                    s = "{}'\"-/$ a\n";
+                    for (int k = random.nextInt(5); k > 0; k--) {
+                        buff.append(s.charAt(random.nextInt(s.length())));
+                    }
+                    buff.append("*/");
+                    break;
+                case 4:
+                    buff.append("--");
+                    s = "{}'\"-/$ a";
+                    for (int k = random.nextInt(5); k > 0; k--) {
+                        buff.append(s.charAt(random.nextInt(s.length())));
+                    }
+                    buff.append("\n");
+                    break;
+                case 5:
+                    buff.append("//");
+                    s = "{}'\"-/$ a";
+                    for (int k = random.nextInt(5); k > 0; k--) {
+                        buff.append(s.charAt(random.nextInt(s.length())));
+                    }
+                    buff.append("\n");
+                    break;
+                case 6:
+                    s = " a\n";
+                    for (int k = random.nextInt(5); k > 0; k--) {
+                        buff.append(s.charAt(random.nextInt(s.length())));
+                    }
+                    break;
+                default:
+                }
+            }
+            String sql = buff.toString();
+            try {
+                conn.nativeSQL(sql);
+            } catch (Exception e) {
+                if (smallest == null || sql.length() < smallest.length()) {
+                    smallest = sql;
+                }
+            }
+        }
+        if (smallest != null) {
+            conn.nativeSQL(smallest);
+        }
+    }
+
+    private void testPairs() {
+        for (int i = 0; i < PAIRS.length; i += 2) {
+            test(PAIRS[i], PAIRS[i + 1]);
+        }
+    }
+
+    private void testCases() throws SQLException {
+        conn.nativeSQL("TEST");
+        conn.nativeSQL("TEST--testing");
+        conn.nativeSQL("TEST--testing{oj }");
+        conn.nativeSQL("TEST/*{fn }*/");
+        conn.nativeSQL("TEST//{fn }");
+        conn.nativeSQL("TEST-TEST/TEST/*TEST*/TEST--\rTEST--{fn }");
+        conn.nativeSQL("TEST-TEST//TEST");
+        conn.nativeSQL("'{}' '' \"1\" \"\"\"\"");
+        conn.nativeSQL("{?= call HELLO{t '10'}}");
+        conn.nativeSQL("TEST 'test'{OJ OUTER JOIN}'test'{oj OUTER JOIN}");
+        conn.nativeSQL("{call {ts '2001-01-10'}}");
+        conn.nativeSQL("call ? { 1: '}' };");
+        conn.nativeSQL("TEST TEST TEST TEST TEST 'TEST' TEST \"TEST\"");
+        conn.nativeSQL("TEST TEST TEST 'TEST' TEST \"TEST\"");
+        Statement stat = conn.createStatement();
+        stat.setEscapeProcessing(true);
+        stat.execute("CALL {d '2001-01-01'}");
+        stat.setEscapeProcessing(false);
+        assertThrows(ErrorCode.SYNTAX_ERROR_2, stat).
+                execute("CALL {d '2001-01-01'} // this is a test");
+        assertFalse(conn.isClosed());
+    }
+
+    private void test(String original, String expected) {
+        trace("original: <" + original + ">");
+        trace("expected: <" + expected + ">");
+        try {
+            String result = conn.nativeSQL(original);
+            trace("result: <" + result + ">");
+            assertEquals(expected, result);
+        } catch (SQLException e) {
+            assertEquals(expected, null);
+            assertKnownException(e);
+            trace("got exception, good");
+        }
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestPreparedStatement.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestPreparedStatement.java
new file mode 100644
index 0000000000000..ffd4b47ca90d5
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestPreparedStatement.java
@@ -0,0 +1,1574 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.jdbc;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.StringReader;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.net.URL;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.ParameterMetaData;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.RowId;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.sql.Timestamp;
+import java.sql.Types;
+import java.util.Calendar;
+import java.util.GregorianCalendar;
+import java.util.UUID;
+
+import org.h2.api.ErrorCode;
+import org.h2.api.Trigger;
+import org.h2.engine.SysProperties;
+import org.h2.test.TestBase;
+import org.h2.util.LocalDateTimeUtils;
+import org.h2.util.Task;
+
+/**
+ * Tests for the PreparedStatement implementation.
+ */
+public class TestPreparedStatement extends TestBase {
+
+    private static final int LOB_SIZE = 4000, LOB_SIZE_BIG = 512 * 1024;
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws Exception {
+        deleteDb("preparedStatement");
+        Connection conn = getConnection("preparedStatement");
+        testUnwrap(conn);
+        testUnsupportedOperations(conn);
+        testChangeType(conn);
+        testCallTablePrepared(conn);
+        testValues(conn);
+        testToString(conn);
+        testExecuteUpdateCall(conn);
+        testPrepareExecute(conn);
+        testEnum(conn);
+        testUUID(conn);
+        testUUIDAsJavaObject(conn);
+        testScopedGeneratedKey(conn);
+        testLobTempFiles(conn);
+        testExecuteErrorTwice(conn);
+        testTempView(conn);
+        testInsertFunction(conn);
+        testPrepareRecompile(conn);
+        testMaxRowsChange(conn);
+        testUnknownDataType(conn);
+        testCancelReuse(conn);
+        testCoalesce(conn);
+        testPreparedStatementMetaData(conn);
+        testDate(conn);
+        testDate8(conn);
+        testTime8(conn);
+        testDateTime8(conn);
+        testOffsetDateTime8(conn);
+        testInstant8(conn);
+        testArray(conn);
+        testSetObject(conn);
+        testPreparedSubquery(conn);
+        testLikeIndex(conn);
+        testCasewhen(conn);
+        testSubquery(conn);
+        testObject(conn);
+        testDataTypes(conn);
+        testGetMoreResults(conn);
+        testBlob(conn);
+        testClob(conn);
+        testParameterMetaData(conn);
+        testColumnMetaDataWithEquals(conn);
+        testColumnMetaDataWithIn(conn);
+        conn.close();
+        testPreparedStatementWithLiteralsNone();
+        testPreparedStatementWithIndexedParameterAndLiteralsNone();
+        testPreparedStatementWithAnyParameter();
+        deleteDb("preparedStatement");
+    }
+
+    private void testUnwrap(Connection conn) throws SQLException {
+        assertTrue(conn.isWrapperFor(Object.class));
+        assertTrue(conn.isWrapperFor(Connection.class));
+        assertTrue(conn.isWrapperFor(conn.getClass()));
+        assertFalse(conn.isWrapperFor(String.class));
+        assertTrue(conn == conn.unwrap(Object.class));
+        assertTrue(conn == conn.unwrap(Connection.class));
+        assertThrows(ErrorCode.INVALID_VALUE_2, conn).
+                unwrap(String.class);
+    }
+
+    @SuppressWarnings("deprecation")
+    private void testUnsupportedOperations(Connection conn) throws Exception {
+        PreparedStatement prep = conn.prepareStatement("select ? from dual");
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                addBatch("select 1");
+
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                executeUpdate("create table test(id int)");
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                executeUpdate("create table test(id int)", new int[0]);
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                executeUpdate("create table test(id int)", new String[0]);
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                executeUpdate("create table test(id int)", Statement.RETURN_GENERATED_KEYS);
+
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                execute("create table test(id int)");
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                execute("create table test(id int)", new int[0]);
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                execute("create table test(id int)", new String[0]);
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                execute("create table test(id int)", Statement.RETURN_GENERATED_KEYS);
+
+        assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_PREPARED_STATEMENT, prep).
+                executeQuery("select * from dual");
+
+        assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, prep).
+                setURL(1, new URL("http://www.acme.com"));
+        assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, prep).
+                setRowId(1, (RowId) null);
+        assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, prep).
+                setUnicodeStream(1, (InputStream) null, 0);
+
+        ParameterMetaData meta = prep.getParameterMetaData();
+        assertTrue(meta.toString(), meta.toString().endsWith("parameterCount=1"));
+        assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, conn).
+                createSQLXML();
+        assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, conn).
+                createStruct("Integer", new Object[0]);
+    }
+
+    private static void testChangeType(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement(
+                "select (? || ? || ?) from dual");
+        prep.setString(1, "a");
+        prep.setString(2, "b");
+        prep.setString(3, "c");
+        prep.executeQuery();
+        prep.setInt(1, 1);
+        prep.setString(2, "ab");
+        prep.setInt(3, 45);
+        prep.executeQuery();
+    }
+
+    private static void testCallTablePrepared(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement("call table(x int = (1))");
+        prep.executeQuery();
+        prep.executeQuery();
+    }
+
+    private void testValues(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement("values(?, ?)");
+        prep.setInt(1, 1);
+        prep.setString(2, "Hello");
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        assertEquals("Hello", rs.getString(2));
+
+        prep = conn.prepareStatement("select * from values(?, ?), (2, 'World!')");
+        prep.setInt(1, 1);
+        prep.setString(2, "Hello");
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        assertEquals("Hello", rs.getString(2));
+        rs.next();
+        assertEquals(2, rs.getInt(1));
+        assertEquals("World!", rs.getString(2));
+
+        prep = conn.prepareStatement("values 1, 2");
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        rs.next();
+        assertEquals(2, rs.getInt(1));
+    }
+
+    private void testToString(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement("call 1");
+        assertTrue(prep.toString().endsWith(": call 1"));
+        prep = conn.prepareStatement("call ?");
+        assertTrue(prep.toString().endsWith(": call ?"));
+        prep.setString(1, "Hello World");
+        assertTrue(prep.toString().endsWith(": call ? {1: 'Hello World'}"));
+    }
+
+    private void testExecuteUpdateCall(Connection conn) throws SQLException {
+        assertThrows(ErrorCode.DATA_CONVERSION_ERROR_1, conn.createStatement()).
+                executeUpdate("CALL HASH('SHA256', STRINGTOUTF8('Password'), 1000)");
+    }
+
+    private void testPrepareExecute(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("prepare test(int, int) as select ?1*?2");
+        ResultSet rs = stat.executeQuery("execute test(3, 2)");
+        rs.next();
+        assertEquals(6, rs.getInt(1));
+        stat.execute("deallocate test");
+    }
+
+    private void testLobTempFiles(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, DATA CLOB)");
+        PreparedStatement prep = conn.prepareStatement(
+                "INSERT INTO TEST VALUES(?, ?)");
+        for (int i = 0; i < 5; i++) {
+            prep.setInt(1, i);
+            if (i % 2 == 0) {
+                prep.setCharacterStream(2, new StringReader(getString(i)), -1);
+            }
+            prep.execute();
+        }
+        ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID");
+        int check = 0;
+        for (int i = 0; i < 5; i++) {
+            assertTrue(rs.next());
+            if (i % 2 == 0) {
+                check = i;
+            }
+            assertEquals(getString(check), rs.getString(2));
+        }
+        assertFalse(rs.next());
+        stat.execute("DELETE FROM TEST");
+        for (int i = 0; i < 3; i++) {
+            prep.setInt(1, i);
+            prep.setCharacterStream(2, new StringReader(getString(i)), -1);
+            prep.addBatch();
+        }
+        prep.executeBatch();
+        rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID");
+        for (int i = 0; i < 3; i++) {
+            assertTrue(rs.next());
+            assertEquals(getString(i), rs.getString(2));
+        }
+        assertFalse(rs.next());
+        stat.execute("DROP TABLE TEST");
+    }
+
+    private static String getString(int i) {
+        return new String(new char[100000]).replace('\0', (char) ('0' + i));
+    }
+
+    private void testExecuteErrorTwice(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement(
+                "CREATE TABLE BAD AS SELECT A");
+        assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, prep).execute();
+        assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, prep).execute();
+    }
+
+    private void testTempView(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        PreparedStatement prep;
+        stat.execute("CREATE TABLE TEST(FIELD INT PRIMARY KEY)");
+        stat.execute("INSERT INTO TEST VALUES(1)");
+        stat.execute("INSERT INTO TEST VALUES(2)");
+        prep = conn.prepareStatement("select FIELD FROM " +
+                "(select FIELD FROM (SELECT FIELD FROM TEST " +
+                "WHERE FIELD = ?) AS T2 " +
+                "WHERE T2.FIELD = ?) AS T3 WHERE T3.FIELD = ?");
+        prep.setInt(1, 1);
+        prep.setInt(2, 1);
+        prep.setInt(3, 1);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        prep.setInt(1, 2);
+        prep.setInt(2, 2);
+        prep.setInt(3, 2);
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals(2, rs.getInt(1));
+        stat.execute("DROP TABLE TEST");
+    }
+
+    private void testInsertFunction(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        PreparedStatement prep;
+        ResultSet rs;
+
+        stat.execute("CREATE TABLE TEST(ID INT, H BINARY)");
+        prep = conn.prepareStatement("INSERT INTO TEST " +
+                "VALUES(?, HASH('SHA256', STRINGTOUTF8(?), 5))");
+        prep.setInt(1, 1);
+        prep.setString(2, "One");
+        prep.execute();
+        prep.setInt(1, 2);
+        prep.setString(2, "Two");
+        prep.execute();
+        rs = stat.executeQuery("SELECT COUNT(DISTINCT H) FROM TEST");
+        rs.next();
+        assertEquals(2, rs.getInt(1));
+
+        stat.execute("DROP TABLE TEST");
+    }
+
+    private void testPrepareRecompile(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        PreparedStatement prep;
+        ResultSet rs;
+
+        prep = conn.prepareStatement("SELECT COUNT(*) " +
+                "FROM DUAL WHERE ? IS NULL");
+        prep.setString(1, null);
+        prep.executeQuery();
+        stat.execute("CREATE TABLE TEST(ID INT)");
+        stat.execute("DROP TABLE TEST");
+        prep.setString(1, null);
+        prep.executeQuery();
+        prep.setString(1, "X");
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals(0, rs.getInt(1));
+
+        stat.execute("CREATE TABLE t1 (c1 INT, c2 VARCHAR(10))");
+        stat.execute("INSERT INTO t1 SELECT X, CONCAT('Test', X) " +
+                "FROM SYSTEM_RANGE(1, 5);");
+        prep = conn.prepareStatement("SELECT c1, c2 FROM t1 WHERE c1 = ?");
+        prep.setInt(1, 1);
+        prep.executeQuery();
+        stat.execute("CREATE TABLE t2 (x int PRIMARY KEY)");
+        prep.setInt(1, 2);
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals(2, rs.getInt(1));
+        prep.setInt(1, 3);
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals(3, rs.getInt(1));
+        stat.execute("DROP TABLE t1, t2");
+
+    }
+
+    private void testMaxRowsChange(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement(
+                "SELECT * FROM SYSTEM_RANGE(1, 100)");
+        ResultSet rs;
+        for (int j = 1; j < 20; j++) {
+            prep.setMaxRows(j);
+            rs = prep.executeQuery();
+            for (int i = 0; i < j; i++) {
+                assertTrue(rs.next());
+            }
+            assertFalse(rs.next());
+        }
+    }
+
+    private void testUnknownDataType(Connection conn) throws SQLException {
+        assertThrows(ErrorCode.UNKNOWN_DATA_TYPE_1, conn).
+                prepareStatement("SELECT * FROM (SELECT ? FROM DUAL)");
+        PreparedStatement prep = conn.prepareStatement("SELECT -?");
+        prep.setInt(1, 1);
+        execute(prep);
+        prep = conn.prepareStatement("SELECT ?-?");
+        prep.setInt(1, 1);
+        prep.setInt(2, 2);
+        execute(prep);
+    }
+
+    private void testCancelReuse(Connection conn) throws Exception {
+        conn.createStatement().execute(
+                "CREATE ALIAS SLEEP FOR \"java.lang.Thread.sleep\"");
+        // sleep for 10 seconds
+        final PreparedStatement prep = conn.prepareStatement(
+                "SELECT SLEEP(?) FROM SYSTEM_RANGE(1, 10000) LIMIT ?");
+        prep.setInt(1, 1);
+        prep.setInt(2, 10000);
+        Task t = new Task() {
+            @Override
+            public void call() throws SQLException {
+                TestPreparedStatement.this.execute(prep);
+            }
+        };
+        t.execute();
+        Thread.sleep(100);
+        prep.cancel();
+        SQLException e = (SQLException) t.getException();
+        assertTrue(e != null);
+        assertEquals(ErrorCode.STATEMENT_WAS_CANCELED, e.getErrorCode());
+        prep.setInt(1, 1);
+        prep.setInt(2, 1);
+        ResultSet rs = prep.executeQuery();
+        assertTrue(rs.next());
+        assertEquals(0, rs.getInt(1));
+        assertFalse(rs.next());
+    }
+
+    private static void testCoalesce(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.executeUpdate("create table test(tm timestamp)");
+        stat.executeUpdate("insert into test values(current_timestamp)");
+        PreparedStatement prep = conn.prepareStatement(
+                "update test set tm = coalesce(?,tm)");
+        prep.setTimestamp(1, new java.sql.Timestamp(System.currentTimeMillis()));
+        prep.executeUpdate();
+        stat.executeUpdate("drop table test");
+    }
+
+    private void testPreparedStatementMetaData(Connection conn)
+            throws SQLException {
+        PreparedStatement prep = conn.prepareStatement(
+                "select * from table(x int = ?, name varchar = ?)");
+        ResultSetMetaData meta = prep.getMetaData();
+        assertEquals(2, meta.getColumnCount());
+        assertEquals("INTEGER", meta.getColumnTypeName(1));
+        assertEquals("VARCHAR", meta.getColumnTypeName(2));
+        prep = conn.prepareStatement("call 1");
+        meta = prep.getMetaData();
+        assertEquals(1, meta.getColumnCount());
+        assertEquals("INTEGER", meta.getColumnTypeName(1));
+    }
+
+    private void testArray(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement(
+                "select * from table(x int = ?) order by x");
+        prep.setObject(1, new Object[] { new BigDecimal("1"), "2" });
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        assertEquals("1", rs.getString(1));
+        rs.next();
+        assertEquals("2", rs.getString(1));
+        assertFalse(rs.next());
+    }
+
+    private void testEnum(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE test_enum(size ENUM('small', 'medium', 'large'))");
+
+        String[] badSizes = new String[]{"green", "smol", "0"};
+        for (int i = 0; i < badSizes.length; i++) {
+            PreparedStatement prep = conn.prepareStatement(
+                    "INSERT INTO test_enum VALUES(?)");
+            prep.setObject(1, badSizes[i]);
+            assertThrows(ErrorCode.ENUM_VALUE_NOT_PERMITTED, prep).execute();
+        }
+
+        String[] goodSizes = new String[]{"small", "medium", "large"};
+        for (int i = 0; i < goodSizes.length; i++) {
+            PreparedStatement prep = conn.prepareStatement(
+                    "INSERT INTO test_enum VALUES(?)");
+            prep.setObject(1, goodSizes[i]);
+            prep.execute();
+            ResultSet rs = stat.executeQuery("SELECT * FROM test_enum");
+            for (int j = 0; j <= i; j++) {
+                rs.next();
+            }
+            assertEquals(goodSizes[i], rs.getString(1));
+            assertEquals(i, rs.getInt(1));
+            Object o = rs.getObject(1);
+            assertEquals(Integer.class, o.getClass());
+        }
+
+        for (int i = 0; i < goodSizes.length; i++) {
+            PreparedStatement prep = conn.prepareStatement("SELECT * FROM test_enum WHERE size = ?");
+            prep.setObject(1, goodSizes[i]);
+            ResultSet rs = prep.executeQuery();
+            rs.next();
+            String s = rs.getString(1);
+            assertTrue(s.equals(goodSizes[i]));
+            assertFalse(rs.next());
+        }
+
+        for (int i = 0; i < badSizes.length; i++) {
+            PreparedStatement prep = conn.prepareStatement("SELECT * FROM test_enum WHERE size = ?");
+            prep.setObject(1, badSizes[i]);
+            if (config.lazy) {
+                ResultSet resultSet = prep.executeQuery();
+                assertThrows(ErrorCode.ENUM_VALUE_NOT_PERMITTED, resultSet).next();
+            } else {
+                assertThrows(ErrorCode.ENUM_VALUE_NOT_PERMITTED, prep).executeQuery();
+            }
+        }
+
+        stat.execute("DROP TABLE test_enum");
+    }
+
+    private void testUUID(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("create table test_uuid(id uuid primary key)");
+        UUID uuid = new UUID(-2, -1);
+        PreparedStatement prep = conn.prepareStatement(
+                "insert into test_uuid values(?)");
+        prep.setObject(1, uuid);
+        prep.execute();
+        ResultSet rs = stat.executeQuery("select * from test_uuid");
+        rs.next();
+        assertEquals("ffffffff-ffff-fffe-ffff-ffffffffffff", rs.getString(1));
+        Object o = rs.getObject(1);
+        assertEquals("java.util.UUID", o.getClass().getName());
+        stat.execute("drop table test_uuid");
+    }
+
+    private void testUUIDAsJavaObject(Connection conn) throws SQLException {
+        String uuidStr = "12345678-1234-4321-8765-123456789012";
+
+        Statement stat = conn.createStatement();
+        stat.execute("create table test_uuid(id uuid primary key)");
+        UUID origUUID = UUID.fromString(uuidStr);
+        PreparedStatement prep = conn.prepareStatement("insert into test_uuid values(?)");
+        prep.setObject(1, origUUID, java.sql.Types.JAVA_OBJECT);
+        prep.execute();
+
+        prep = conn.prepareStatement("select * from test_uuid where id=?");
+        prep.setObject(1, origUUID, java.sql.Types.JAVA_OBJECT);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        Object o = rs.getObject(1);
+        assertTrue(o instanceof UUID);
+        UUID selectedUUID = (UUID) o;
+        assertTrue(selectedUUID.toString().equals(uuidStr));
+        assertTrue(selectedUUID.equals(origUUID));
+        stat.execute("drop table test_uuid");
+    }
+
+    /**
+     * A trigger that creates a sequence value.
+     */
+    public static class SequenceTrigger implements Trigger {
+
+        @Override
+        public void fire(Connection conn, Object[] oldRow, Object[] newRow)
+                throws SQLException {
+            conn.setAutoCommit(false);
+            conn.createStatement().execute("call next value for seq");
+        }
+
+        @Override
+        public void init(Connection conn, String schemaName,
+                String triggerName, String tableName, boolean before, int type) {
+            // ignore
+        }
+
+        @Override
+        public void close() {
+            // ignore
+        }
+
+        @Override
+        public void remove() {
+            // ignore
+        }
+
+    }
+
+    private void testScopedGeneratedKey(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id identity)");
+        stat.execute("create sequence seq start with 1000");
+        stat.execute("create trigger test_ins after insert on test call \"" +
+                SequenceTrigger.class.getName() + "\"");
+        stat.execute("insert into test values(null)", Statement.RETURN_GENERATED_KEYS);
+        ResultSet rs = stat.getGeneratedKeys();
+        rs.next();
+        // Generated key
+        assertEquals(1, rs.getLong(1));
+        stat.execute("insert into test values(100)");
+        rs = stat.getGeneratedKeys();
+        // No generated keys
+        assertFalse(rs.next());
+        // Value from sequence from trigger
+        rs = stat.executeQuery("select scope_identity()");
+        rs.next();
+        assertEquals(100, rs.getLong(1));
+        stat.execute("drop sequence seq");
+        stat.execute("drop table test");
+    }
+
+    private void testSetObject(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(C CHAR(1))");
+        PreparedStatement prep = conn.prepareStatement(
+                "INSERT INTO TEST VALUES(?)");
+        prep.setObject(1, 'x');
+        prep.execute();
+        stat.execute("DROP TABLE TEST");
+        stat.execute("CREATE TABLE TEST(ID INT, DATA BINARY, JAVA OTHER)");
+        prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?, ?)");
+        prep.setInt(1, 1);
+        prep.setObject(2, 11);
+        prep.setObject(3, null);
+        prep.execute();
+        prep.setInt(1, 2);
+        prep.setObject(2, 101, Types.OTHER);
+        prep.setObject(3, 103, Types.OTHER);
+        prep.execute();
+        PreparedStatement p2 = conn.prepareStatement(
+                "SELECT * FROM TEST ORDER BY ID");
+        ResultSet rs = p2.executeQuery();
+        rs.next();
+        Object o = rs.getObject(2);
+        assertTrue(o instanceof byte[]);
+        assertTrue(rs.getObject(3) == null);
+        rs.next();
+        o = rs.getObject(2);
+        assertTrue(o instanceof byte[]);
+        o = rs.getObject(3);
+        assertTrue(o instanceof Integer);
+        assertEquals(103, ((Integer) o).intValue());
+        assertFalse(rs.next());
+        stat.execute("DROP TABLE TEST");
+    }
+
+    private void testDate(Connection conn) throws SQLException {
+        PreparedStatement prep = conn.prepareStatement("SELECT ?");
+        Timestamp ts = Timestamp.valueOf("2001-02-03 04:05:06");
+        prep.setObject(1, new java.util.Date(ts.getTime()));
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        Timestamp ts2 = rs.getTimestamp(1);
+        assertEquals(ts.toString(), ts2.toString());
+    }
+
+    private void testDate8(Connection conn) throws SQLException {
+        if (!LocalDateTimeUtils.isJava8DateApiPresent()) {
+            return;
+        }
+        PreparedStatement prep = conn.prepareStatement("SELECT ?");
+        Object localDate = LocalDateTimeUtils.parseLocalDate("2001-02-03");
+        prep.setObject(1, localDate);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        Object localDate2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE);
+        assertEquals(localDate, localDate2);
+        rs.close();
+        localDate = LocalDateTimeUtils.parseLocalDate("-0509-01-01");
+        prep.setObject(1, localDate);
+        rs = prep.executeQuery();
+        rs.next();
+        localDate2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE);
+        assertEquals(localDate, localDate2);
+        rs.close();
+        /*
+         * Check that date that doesn't exist in proleptic Gregorian calendar can be
+         * read as a next date.
+         */
+        prep.setString(1, "1500-02-29");
+        rs = prep.executeQuery();
+        rs.next();
+        localDate2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE);
+        assertEquals(LocalDateTimeUtils.parseLocalDate("1500-03-01"), localDate2);
+        rs.close();
+        prep.setString(1, "1400-02-29");
+        rs = prep.executeQuery();
+        rs.next();
+        localDate2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE);
+        assertEquals(LocalDateTimeUtils.parseLocalDate("1400-03-01"), localDate2);
+        rs.close();
+        prep.setString(1, "1300-02-29");
+        rs = prep.executeQuery();
+        rs.next();
+        localDate2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE);
+        assertEquals(LocalDateTimeUtils.parseLocalDate("1300-03-01"), localDate2);
+        rs.close();
+        prep.setString(1, "-0100-02-29");
+        rs = prep.executeQuery();
+        rs.next();
+        localDate2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE);
+        assertEquals(LocalDateTimeUtils.parseLocalDate("-0100-03-01"), localDate2);
+        rs.close();
+        /*
+         * Check that date that doesn't exist in traditional calendar can be set and
+         * read with LocalDate and can be read with getDate() as a next date.
+         */
+        localDate = LocalDateTimeUtils.parseLocalDate("1582-10-05");
+        prep.setObject(1, localDate);
+        rs = prep.executeQuery();
+        rs.next();
+        localDate2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE);
+        assertEquals(localDate, localDate2);
+        assertEquals("1582-10-05", rs.getString(1));
+        assertEquals(Date.valueOf("1582-10-15"), rs.getDate(1));
+        /*
+         * Also check that date that doesn't exist in traditional calendar can be read
+         * with getDate() with custom Calendar properly.
+         */
+        GregorianCalendar gc = new GregorianCalendar();
+        gc.setGregorianChange(new java.util.Date(Long.MIN_VALUE));
+        gc.clear();
+        gc.set(Calendar.YEAR, 1582);
+        gc.set(Calendar.MONTH, 9);
+        gc.set(Calendar.DAY_OF_MONTH, 5);
+        Date expected = new Date(gc.getTimeInMillis());
+        gc.clear();
+        assertEquals(expected, rs.getDate(1, gc));
+        rs.close();
+    }
+
+    private void testTime8(Connection conn) throws SQLException {
+        if (!LocalDateTimeUtils.isJava8DateApiPresent()) {
+            return;
+        }
+        PreparedStatement prep = conn.prepareStatement("SELECT ?");
+        Object localTime = LocalDateTimeUtils.parseLocalTime("04:05:06");
+        prep.setObject(1, localTime);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        Object localTime2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_TIME);
+        assertEquals(localTime, localTime2);
+        rs.close();
+        localTime = LocalDateTimeUtils.parseLocalTime("04:05:06.123456789");
+        prep.setObject(1, localTime);
+        rs = prep.executeQuery();
+        rs.next();
+        localTime2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_TIME);
+        assertEquals(localTime, localTime2);
+        rs.close();
+    }
+
+    private void testDateTime8(Connection conn) throws SQLException {
+        if (!LocalDateTimeUtils.isJava8DateApiPresent()) {
+            return;
+        }
+        PreparedStatement prep = conn.prepareStatement("SELECT ?");
+        Object localDateTime = LocalDateTimeUtils.parseLocalDateTime("2001-02-03T04:05:06");
+        prep.setObject(1, localDateTime);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        Object localDateTime2 = rs.getObject(1, LocalDateTimeUtils.LOCAL_DATE_TIME);
+        assertEquals(localDateTime, localDateTime2);
+        rs.close();
+    }
+
+    private void testOffsetDateTime8(Connection conn) throws SQLException {
+        if (!LocalDateTimeUtils.isJava8DateApiPresent()) {
+            return;
+        }
+        PreparedStatement prep = conn.prepareStatement("SELECT ?");
+        Object offsetDateTime = LocalDateTimeUtils
+                .parseOffsetDateTime("2001-02-03T04:05:06+02:30");
+        prep.setObject(1, offsetDateTime);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        Object offsetDateTime2 = rs.getObject(1, LocalDateTimeUtils.OFFSET_DATE_TIME);
+        assertEquals(offsetDateTime, offsetDateTime2);
+        assertFalse(rs.next());
+        rs.close();
+
+        prep.setObject(1, offsetDateTime, 2014); // Types.TIMESTAMP_WITH_TIMEZONE
+        rs = prep.executeQuery();
+        rs.next();
+        offsetDateTime2 = rs.getObject(1, LocalDateTimeUtils.OFFSET_DATE_TIME);
+        assertEquals(offsetDateTime, offsetDateTime2);
+        assertFalse(rs.next());
+        rs.close();
+    }
+
+    private void testInstant8(Connection conn) throws Exception {
+        if (!LocalDateTimeUtils.isJava8DateApiPresent()) {
+            return;
+        }
+        Method timestampToInstant = Timestamp.class.getMethod("toInstant");
+        Method now = LocalDateTimeUtils.INSTANT.getMethod("now");
+        Method parse = LocalDateTimeUtils.INSTANT.getMethod("parse", CharSequence.class);
+
+        PreparedStatement prep = conn.prepareStatement("SELECT ?");
+
+        testInstant8Impl(prep, timestampToInstant, now.invoke(null));
+        testInstant8Impl(prep, timestampToInstant, parse.invoke(null, "2000-01-15T12:13:14.123456789Z"));
+        testInstant8Impl(prep, timestampToInstant, parse.invoke(null, "1500-09-10T23:22:11.123456789Z"));
+    }
+
+    private void testInstant8Impl(PreparedStatement prep, Method timestampToInstant, Object instant)
+            throws SQLException, IllegalAccessException, InvocationTargetException {
+        prep.setObject(1, instant);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        Object instant2 = rs.getObject(1, LocalDateTimeUtils.INSTANT);
+        assertEquals(instant, instant2);
+        Timestamp ts = rs.getTimestamp(1);
+        assertEquals(instant, timestampToInstant.invoke(ts));
+        assertFalse(rs.next());
+        rs.close();
+
+        prep.setTimestamp(1, ts);
+        rs = prep.executeQuery();
+        rs.next();
+        instant2 = rs.getObject(1, LocalDateTimeUtils.INSTANT);
+        assertEquals(instant, instant2);
+        assertFalse(rs.next());
+        rs.close();
+    }
+
+    private void testPreparedSubquery(Connection conn) throws SQLException {
+        Statement s = conn.createStatement();
+        s.executeUpdate("CREATE TABLE TEST(ID IDENTITY, FLAG BIT)");
+        s.executeUpdate("INSERT INTO TEST(ID, FLAG) VALUES(0, FALSE)");
+        s.executeUpdate("INSERT INTO TEST(ID, FLAG) VALUES(1, FALSE)");
+        PreparedStatement u = conn.prepareStatement(
+                "SELECT ID, FLAG FROM TEST ORDER BY ID");
+        PreparedStatement p = conn.prepareStatement(
+                "UPDATE TEST SET FLAG=true WHERE ID=(SELECT ?)");
+        p.clearParameters();
+        p.setLong(1, 0);
+        assertEquals(1, p.executeUpdate());
+        p.clearParameters();
+        p.setLong(1, 1);
+        assertEquals(1, p.executeUpdate());
+        ResultSet rs = u.executeQuery();
+        assertTrue(rs.next());
+        assertEquals(0, rs.getInt(1));
+        assertTrue(rs.getBoolean(2));
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+        assertTrue(rs.getBoolean(2));
+
+        p = conn.prepareStatement("SELECT * FROM TEST " +
+                "WHERE EXISTS(SELECT * FROM TEST WHERE ID=?)");
+        p.setInt(1, -1);
+        rs = p.executeQuery();
+        assertFalse(rs.next());
+        p.setInt(1, 1);
+        rs = p.executeQuery();
+        assertTrue(rs.next());
+
+        s.executeUpdate("DROP TABLE IF EXISTS TEST");
+    }
+
+    private void testParameterMetaData(Connection conn) throws SQLException {
+        int numericType;
+        String numericName;
+        if (SysProperties.BIG_DECIMAL_IS_DECIMAL) {
+            numericType = Types.DECIMAL;
+            numericName = "DECIMAL";
+        } else {
+            numericType = Types.NUMERIC;
+            numericName = "NUMERIC";
+        }
+        PreparedStatement prep = conn.prepareStatement("SELECT ?, ?, ? FROM DUAL");
+        ParameterMetaData pm = prep.getParameterMetaData();
+        assertEquals("java.lang.String", pm.getParameterClassName(1));
+        assertEquals("VARCHAR", pm.getParameterTypeName(1));
+        assertEquals(3, pm.getParameterCount());
+        assertEquals(ParameterMetaData.parameterModeIn, pm.getParameterMode(1));
+        assertEquals(Types.VARCHAR, pm.getParameterType(1));
+        assertEquals(0, pm.getPrecision(1));
+        assertEquals(0, pm.getScale(1));
+        assertEquals(ResultSetMetaData.columnNullableUnknown, pm.isNullable(1));
+        assertEquals(pm.isSigned(1), true);
+        assertThrows(ErrorCode.INVALID_VALUE_2, pm).getPrecision(0);
+        assertThrows(ErrorCode.INVALID_VALUE_2, pm).getPrecision(4);
+        prep.close();
+        assertThrows(ErrorCode.OBJECT_CLOSED, pm).getPrecision(1);
+
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST3(ID INT, " +
+                "NAME VARCHAR(255), DATA DECIMAL(10,2))");
+        PreparedStatement prep1 = conn.prepareStatement(
+                "UPDATE TEST3 SET ID=?, NAME=?, DATA=?");
+        PreparedStatement prep2 = conn.prepareStatement(
+                "INSERT INTO TEST3 VALUES(?, ?, ?)");
+        checkParameter(prep1, 1, "java.lang.Integer", 4, "INTEGER", 10, 0);
+        checkParameter(prep1, 2, "java.lang.String", 12, "VARCHAR", 255, 0);
+        checkParameter(prep1, 3, "java.math.BigDecimal", numericType, numericName, 10, 2);
+        checkParameter(prep2, 1, "java.lang.Integer", 4, "INTEGER", 10, 0);
+        checkParameter(prep2, 2, "java.lang.String", 12, "VARCHAR", 255, 0);
+        checkParameter(prep2, 3, "java.math.BigDecimal", numericType, numericName, 10, 2);
+        PreparedStatement prep3 = conn.prepareStatement(
+                "SELECT * FROM TEST3 WHERE ID=? AND NAME LIKE ? AND ?>DATA");
+        checkParameter(prep3, 1, "java.lang.Integer", 4, "INTEGER", 10, 0);
+        checkParameter(prep3, 2, "java.lang.String", 12, "VARCHAR", 0, 0);
+        checkParameter(prep3, 3, "java.math.BigDecimal", numericType, numericName, 10, 2);
+        stat.execute("DROP TABLE TEST3");
+    }
+
+    private void checkParameter(PreparedStatement prep, int index,
+            String className, int type, String typeName, int precision,
+            int scale) throws SQLException {
+        ParameterMetaData meta = prep.getParameterMetaData();
+        assertEquals(className, meta.getParameterClassName(index));
+        assertEquals(type, meta.getParameterType(index));
+        assertEquals(typeName, meta.getParameterTypeName(index));
+        assertEquals(precision, meta.getPrecision(index));
+        assertEquals(scale, meta.getScale(index));
+    }
+
+    private void testLikeIndex(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))");
+        stat.execute("INSERT INTO TEST VALUES(1, 'Hello')");
+        stat.execute("INSERT INTO TEST VALUES(2, 'World')");
+        stat.execute("create index idxname on test(name);");
+        PreparedStatement prep, prepExe;
+
+        prep = conn.prepareStatement(
+                "EXPLAIN SELECT * FROM TEST WHERE NAME LIKE ?");
+        assertEquals(1, prep.getParameterMetaData().getParameterCount());
+        prepExe = conn.prepareStatement(
+                "SELECT * FROM TEST WHERE NAME LIKE ?");
+        prep.setString(1, "%orld");
+        prepExe.setString(1, "%orld");
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        String plan = rs.getString(1);
+        assertContains(plan, ".tableScan");
+        rs = prepExe.executeQuery();
+        rs.next();
+        assertEquals("World", rs.getString(2));
+        assertFalse(rs.next());
+
+        prep.setString(1, "H%");
+        prepExe.setString(1, "H%");
+        rs = prep.executeQuery();
+        rs.next();
+        String plan1 = rs.getString(1);
+        assertContains(plan1, "IDXNAME");
+        rs = prepExe.executeQuery();
+        rs.next();
+        assertEquals("Hello", rs.getString(2));
+        assertFalse(rs.next());
+
+        stat.execute("DROP TABLE IF EXISTS TEST");
+    }
+
+    private void testCasewhen(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID INT)");
+        stat.execute("INSERT INTO TEST VALUES(1),(2),(3)");
+        PreparedStatement prep;
+        ResultSet rs;
+        prep = conn.prepareStatement("EXPLAIN SELECT COUNT(*) FROM TEST " +
+                "WHERE CASEWHEN(ID=1, ID, ID)=? GROUP BY ID");
+        prep.setInt(1, 1);
+        rs = prep.executeQuery();
+        rs.next();
+        String plan = rs.getString(1);
+        trace(plan);
+        rs.close();
+        prep = conn.prepareStatement("EXPLAIN SELECT COUNT(*) FROM TEST " +
+                "WHERE CASE ID WHEN 1 THEN ID WHEN 2 THEN ID " +
+                "ELSE ID END=? GROUP BY ID");
+        prep.setInt(1, 1);
+        rs = prep.executeQuery();
+        rs.next();
+        plan = rs.getString(1);
+        trace(plan);
+
+        prep = conn.prepareStatement("SELECT COUNT(*) FROM TEST " +
+                "WHERE CASEWHEN(ID=1, ID, ID)=? GROUP BY ID");
+        prep.setInt(1, 1);
+        rs = prep.executeQuery();
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+        assertFalse(rs.next());
+
+        prep = conn.prepareStatement("SELECT COUNT(*) FROM TEST " +
+                "WHERE CASE ID WHEN 1 THEN ID WHEN 2 THEN ID " +
+                "ELSE ID END=? GROUP BY ID");
+        prep.setInt(1, 1);
+        rs = prep.executeQuery();
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+        assertFalse(rs.next());
+
+        prep = conn.prepareStatement("SELECT * FROM TEST WHERE ? IS NULL");
+        prep.setString(1, "Hello");
+        rs = prep.executeQuery();
+        assertFalse(rs.next());
+        assertThrows(ErrorCode.UNKNOWN_DATA_TYPE_1, conn).
+                prepareStatement("select ? from dual union select ? from dual");
+        prep = conn.prepareStatement("select cast(? as varchar) " +
+                "from dual union select ? from dual");
+        assertEquals(2, prep.getParameterMetaData().getParameterCount());
+        prep.setString(1, "a");
+        prep.setString(2, "a");
+        rs = prep.executeQuery();
+        rs.next();
+        assertEquals("a", rs.getString(1));
+        assertEquals("a", rs.getString(1));
+        assertFalse(rs.next());
+
+        stat.execute("DROP TABLE TEST");
+    }
+
+    private void testSubquery(Connection conn) throws SQLException {
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TABLE TEST(ID INT)");
+        stat.execute("INSERT INTO TEST VALUES(1),(2),(3)");
+        PreparedStatement prep = conn.prepareStatement("select x.id, ? from " +
+                "(select * from test where id in(?, ?)) x where x.id*2 <> ?");
+        assertEquals(4, prep.getParameterMetaData().getParameterCount());
+        prep.setInt(1, 0);
+        prep.setInt(2, 1);
+        prep.setInt(3, 2);
+        prep.setInt(4, 4);
+        ResultSet rs = prep.executeQuery();
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        assertEquals(0, rs.getInt(2));
+        assertFalse(rs.next());
+        stat.execute("DROP TABLE TEST");
+    }
+
+    private void testDataTypes(Connection conn) throws SQLException {
+        conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
+                ResultSet.CONCUR_READ_ONLY);
+        conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
+                ResultSet.CONCUR_UPDATABLE);
+        Statement stat = conn.createStatement();
+        PreparedStatement prep;
+        ResultSet rs;
+        trace("Create tables");
+        stat.execute("CREATE TABLE T_INT" +
+                "(ID INT PRIMARY KEY,VALUE INT)");
+        stat.execute("CREATE TABLE T_VARCHAR" +
+                "(ID INT PRIMARY KEY,VALUE VARCHAR(255))");
+        stat.execute("CREATE TABLE T_DECIMAL_0" +
+                "(ID INT PRIMARY KEY,VALUE DECIMAL(30,0))");
+        stat.execute("CREATE TABLE T_DECIMAL_10" +
+                "(ID INT PRIMARY KEY,VALUE DECIMAL(20,10))");
+        stat.execute("CREATE TABLE T_DATETIME" +
+                "(ID INT PRIMARY KEY,VALUE DATETIME)");
+        stat.execute("CREATE TABLE T_BIGINT" +
+                "(ID INT PRIMARY KEY,VALUE DECIMAL(30,0))");
+        prep = conn.prepareStatement("INSERT INTO T_INT VALUES(?,?)",
+                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
+
prep.setInt(1, 1); + prep.setInt(2, 0); + prep.executeUpdate(); + prep.setInt(1, 2); + prep.setInt(2, -1); + prep.executeUpdate(); + prep.setInt(1, 3); + prep.setInt(2, 3); + prep.executeUpdate(); + prep.setInt(1, 4); + prep.setNull(2, Types.INTEGER, "INTEGER"); + prep.setNull(2, Types.INTEGER); + prep.executeUpdate(); + prep.setInt(1, 5); + prep.setBigDecimal(2, new java.math.BigDecimal("0")); + prep.executeUpdate(); + prep.setInt(1, 6); + prep.setString(2, "-1"); + prep.executeUpdate(); + prep.setInt(1, 7); + prep.setObject(2, 3); + prep.executeUpdate(); + prep.setObject(1, "8"); + // should throw an exception + prep.setObject(2, null); + // some databases don't allow calling setObject with null (no data type) + prep.executeUpdate(); + prep.setInt(1, 9); + prep.setObject(2, -4, Types.VARCHAR); + prep.executeUpdate(); + prep.setInt(1, 10); + prep.setObject(2, "5", Types.INTEGER); + prep.executeUpdate(); + prep.setInt(1, 11); + prep.setObject(2, null, Types.INTEGER); + prep.executeUpdate(); + prep.setInt(1, 12); + prep.setBoolean(2, true); + prep.executeUpdate(); + prep.setInt(1, 13); + prep.setBoolean(2, false); + prep.executeUpdate(); + prep.setInt(1, 14); + prep.setByte(2, (byte) -20); + prep.executeUpdate(); + prep.setInt(1, 15); + prep.setByte(2, (byte) 100); + prep.executeUpdate(); + prep.setInt(1, 16); + prep.setShort(2, (short) 30000); + prep.executeUpdate(); + prep.setInt(1, 17); + prep.setShort(2, (short) (-30000)); + prep.executeUpdate(); + prep.setInt(1, 18); + prep.setLong(2, Integer.MAX_VALUE); + prep.executeUpdate(); + prep.setInt(1, 19); + prep.setLong(2, Integer.MIN_VALUE); + prep.executeUpdate(); + + assertTrue(stat.execute("SELECT * FROM T_INT ORDER BY ID")); + rs = stat.getResultSet(); + assertResultSetOrdered(rs, new String[][] { { "1", "0" }, + { "2", "-1" }, { "3", "3" }, { "4", null }, { "5", "0" }, + { "6", "-1" }, { "7", "3" }, { "8", null }, { "9", "-4" }, + { "10", "5" }, { "11", null }, { "12", "1" }, { "13", "0" }, + { "14", "-20" }, { 
"15", "100" }, { "16", "30000" }, + { "17", "-30000" }, { "18", "" + Integer.MAX_VALUE }, + { "19", "" + Integer.MIN_VALUE }, }); + + prep = conn.prepareStatement("INSERT INTO T_DECIMAL_0 VALUES(?,?)"); + prep.setInt(1, 1); + prep.setLong(2, Long.MAX_VALUE); + prep.executeUpdate(); + prep.setInt(1, 2); + prep.setLong(2, Long.MIN_VALUE); + prep.executeUpdate(); + prep.setInt(1, 3); + prep.setFloat(2, 10); + prep.executeUpdate(); + prep.setInt(1, 4); + prep.setFloat(2, -20); + prep.executeUpdate(); + prep.setInt(1, 5); + prep.setFloat(2, 30); + prep.executeUpdate(); + prep.setInt(1, 6); + prep.setFloat(2, -40); + prep.executeUpdate(); + + rs = stat.executeQuery("SELECT VALUE FROM T_DECIMAL_0 ORDER BY ID"); + checkBigDecimal(rs, new String[] { "" + Long.MAX_VALUE, + "" + Long.MIN_VALUE, "10", "-20", "30", "-40" }); + prep = conn.prepareStatement("INSERT INTO T_BIGINT VALUES(?,?)"); + prep.setInt(1, 1); + prep.setObject(2, new BigInteger("" + Long.MAX_VALUE)); + prep.executeUpdate(); + prep.setInt(1, 2); + prep.setObject(2, Long.MIN_VALUE); + prep.executeUpdate(); + prep.setInt(1, 3); + prep.setObject(2, 10); + prep.executeUpdate(); + prep.setInt(1, 4); + prep.setObject(2, -20); + prep.executeUpdate(); + prep.setInt(1, 5); + prep.setObject(2, 30); + prep.executeUpdate(); + prep.setInt(1, 6); + prep.setObject(2, -40); + prep.executeUpdate(); + prep.setInt(1, 7); + prep.setObject(2, new BigInteger("-60")); + prep.executeUpdate(); + + rs = stat.executeQuery("SELECT VALUE FROM T_BIGINT ORDER BY ID"); + checkBigDecimal(rs, new String[] { "" + Long.MAX_VALUE, + "" + Long.MIN_VALUE, "10", "-20", "30", "-40", "-60" }); + } + + private void testGetMoreResults(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + PreparedStatement prep; + ResultSet rs; + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("INSERT INTO TEST VALUES(1)"); + + prep = conn.prepareStatement("SELECT * FROM TEST"); + // just to check if it doesn't throw an exception - 
it may be null + prep.getMetaData(); + assertTrue(prep.execute()); + rs = prep.getResultSet(); + assertFalse(prep.getMoreResults()); + assertEquals(-1, prep.getUpdateCount()); + // supposed to be closed now + assertThrows(ErrorCode.OBJECT_CLOSED, rs).next(); + assertEquals(-1, prep.getUpdateCount()); + + prep = conn.prepareStatement("UPDATE TEST SET ID = 2"); + assertFalse(prep.execute()); + assertEquals(1, prep.getUpdateCount()); + assertFalse(prep.getMoreResults(Statement.CLOSE_CURRENT_RESULT)); + assertEquals(-1, prep.getUpdateCount()); + // supposed to be closed now + assertThrows(ErrorCode.OBJECT_CLOSED, rs).next(); + assertEquals(-1, prep.getUpdateCount()); + + prep = conn.prepareStatement("DELETE FROM TEST"); + prep.executeUpdate(); + assertFalse(prep.getMoreResults()); + assertEquals(-1, prep.getUpdateCount()); + stat.execute("DROP TABLE TEST"); + } + + private void testObject(Connection conn) throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs; + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + PreparedStatement prep = conn.prepareStatement( + "SELECT ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? 
FROM TEST"); + prep.setObject(1, Boolean.TRUE); + prep.setObject(2, "Abc"); + prep.setObject(3, new BigDecimal("10.2")); + prep.setObject(4, (byte) 0xff); + prep.setObject(5, Short.MAX_VALUE); + prep.setObject(6, Integer.MIN_VALUE); + prep.setObject(7, Long.MAX_VALUE); + prep.setObject(8, Float.MAX_VALUE); + prep.setObject(9, Double.MAX_VALUE); + prep.setObject(10, java.sql.Date.valueOf("2001-02-03")); + prep.setObject(11, java.sql.Time.valueOf("04:05:06")); + prep.setObject(12, java.sql.Timestamp.valueOf( + "2001-02-03 04:05:06.123456789")); + prep.setObject(13, new java.util.Date(java.sql.Date.valueOf( + "2001-02-03").getTime())); + prep.setObject(14, new byte[] { 10, 20, 30 }); + prep.setObject(15, 'a', Types.OTHER); + prep.setObject(16, "2001-01-02", Types.DATE); + // converting to null seems strange... + prep.setObject(17, "2001-01-02", Types.NULL); + prep.setObject(18, "3.725", Types.DOUBLE); + prep.setObject(19, "23:22:21", Types.TIME); + prep.setObject(20, new java.math.BigInteger("12345"), Types.OTHER); + prep.setArray(21, conn.createArrayOf("TINYINT", new Object[] {(byte) 1})); + prep.setArray(22, conn.createArrayOf("SMALLINT", new Object[] {(short) -2})); + rs = prep.executeQuery(); + rs.next(); + assertTrue(rs.getObject(1).equals(Boolean.TRUE)); + assertTrue(rs.getObject(2).equals("Abc")); + assertTrue(rs.getObject(3).equals(new BigDecimal("10.2"))); + assertTrue(rs.getObject(4).equals(SysProperties.OLD_RESULT_SET_GET_OBJECT ? + (Object) Byte.valueOf((byte) 0xff) : (Object) Integer.valueOf(-1))); + assertTrue(rs.getObject(5).equals(SysProperties.OLD_RESULT_SET_GET_OBJECT ? 
+ (Object) Short.valueOf(Short.MAX_VALUE) : (Object) Integer.valueOf(Short.MAX_VALUE))); + assertTrue(rs.getObject(6).equals(Integer.MIN_VALUE)); + assertTrue(rs.getObject(7).equals(Long.MAX_VALUE)); + assertTrue(rs.getObject(8).equals(Float.MAX_VALUE)); + assertTrue(rs.getObject(9).equals(Double.MAX_VALUE)); + assertTrue(rs.getObject(10).equals( + java.sql.Date.valueOf("2001-02-03"))); + assertEquals("04:05:06", rs.getObject(11).toString()); + assertTrue(rs.getObject(11).equals( + java.sql.Time.valueOf("04:05:06"))); + assertTrue(rs.getObject(12).equals( + java.sql.Timestamp.valueOf("2001-02-03 04:05:06.123456789"))); + assertTrue(rs.getObject(13).equals( + java.sql.Timestamp.valueOf("2001-02-03 00:00:00"))); + assertEquals(new byte[] { 10, 20, 30 }, (byte[]) rs.getObject(14)); + assertTrue(rs.getObject(15).equals('a')); + assertTrue(rs.getObject(16).equals( + java.sql.Date.valueOf("2001-01-02"))); + assertTrue(rs.getObject(17) == null && rs.wasNull()); + assertTrue(rs.getObject(18).equals(3.725d)); + assertTrue(rs.getObject(19).equals( + java.sql.Time.valueOf("23:22:21"))); + assertTrue(rs.getObject(20).equals( + new java.math.BigInteger("12345"))); + Object[] a = (Object[]) rs.getObject(21); + assertEquals(a[0], SysProperties.OLD_RESULT_SET_GET_OBJECT ? + (Object) Byte.valueOf((byte) 1) : (Object) Integer.valueOf(1)); + a = (Object[]) rs.getObject(22); + assertEquals(a[0], SysProperties.OLD_RESULT_SET_GET_OBJECT ? 
+ (Object) Short.valueOf((short) -2) : (Object) Integer.valueOf(-2)); + + // } else if(x instanceof java.io.Reader) { + // return session.createLob(Value.CLOB, + // TypeConverter.getInputStream((java.io.Reader)x), 0); + // } else if(x instanceof java.io.InputStream) { + // return session.createLob(Value.BLOB, (java.io.InputStream)x, 0); + // } else { + // return ValueBytes.get(TypeConverter.serialize(x)); + + stat.execute("DROP TABLE TEST"); + + } + + private int getLength() { + return getSize(LOB_SIZE, LOB_SIZE_BIG); + } + + private void testBlob(Connection conn) throws SQLException { + trace("testBlob"); + Statement stat = conn.createStatement(); + PreparedStatement prep; + ResultSet rs; + stat.execute("CREATE TABLE T_BLOB(ID INT PRIMARY KEY,V1 BLOB,V2 BLOB)"); + trace("table created"); + prep = conn.prepareStatement("INSERT INTO T_BLOB VALUES(?,?,?)"); + + prep.setInt(1, 1); + prep.setBytes(2, null); + prep.setNull(3, Types.BINARY); + prep.executeUpdate(); + + prep.setInt(1, 2); + prep.setBinaryStream(2, null, 0); + prep.setNull(3, Types.BLOB); + prep.executeUpdate(); + + int length = getLength(); + byte[] big1 = new byte[length]; + byte[] big2 = new byte[length]; + for (int i = 0; i < big1.length; i++) { + big1[i] = (byte) ((i * 11) % 254); + big2[i] = (byte) ((i * 17) % 251); + } + + prep.setInt(1, 3); + prep.setBytes(2, big1); + prep.setBytes(3, big2); + prep.executeUpdate(); + + prep.setInt(1, 4); + ByteArrayInputStream buffer; + buffer = new ByteArrayInputStream(big2); + prep.setBinaryStream(2, buffer, big2.length); + buffer = new ByteArrayInputStream(big1); + prep.setBinaryStream(3, buffer, big1.length); + prep.executeUpdate(); + try { + buffer.close(); + trace("buffer not closed"); + } catch (IOException e) { + trace("buffer closed"); + } + + prep.setInt(1, 5); + buffer = new ByteArrayInputStream(big2); + prep.setObject(2, buffer, Types.BLOB, 0); + buffer = new ByteArrayInputStream(big1); + prep.setObject(3, buffer); + prep.executeUpdate(); + + rs = 
stat.executeQuery("SELECT ID, V1, V2 FROM T_BLOB ORDER BY ID"); + + rs.next(); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.getBytes(2) == null && rs.wasNull()); + assertTrue(rs.getBytes(3) == null && rs.wasNull()); + + rs.next(); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.getBytes(2) == null && rs.wasNull()); + assertTrue(rs.getBytes(3) == null && rs.wasNull()); + + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals(big1, rs.getBytes(2)); + assertEquals(big2, rs.getBytes(3)); + + rs.next(); + assertEquals(4, rs.getInt(1)); + assertEquals(big2, rs.getBytes(2)); + assertEquals(big1, rs.getBytes(3)); + + rs.next(); + assertEquals(5, rs.getInt(1)); + assertEquals(big2, rs.getBytes(2)); + assertEquals(big1, rs.getBytes(3)); + + assertFalse(rs.next()); + } + + private void testClob(Connection conn) throws SQLException { + trace("testClob"); + Statement stat = conn.createStatement(); + PreparedStatement prep; + ResultSet rs; + stat.execute("CREATE TABLE T_CLOB(ID INT PRIMARY KEY,V1 CLOB,V2 CLOB)"); + StringBuilder asciiBuffer = new StringBuilder(); + int len = getLength(); + for (int i = 0; i < len; i++) { + asciiBuffer.append((char) ('a' + (i % 20))); + } + String ascii1 = asciiBuffer.toString(); + String ascii2 = "Number2 " + ascii1; + prep = conn.prepareStatement("INSERT INTO T_CLOB VALUES(?,?,?)"); + + prep.setInt(1, 1); + prep.setString(2, null); + prep.setNull(3, Types.CLOB); + prep.executeUpdate(); + + prep.clearParameters(); + prep.setInt(1, 2); + prep.setAsciiStream(2, null, 0); + prep.setCharacterStream(3, null, 0); + prep.executeUpdate(); + + prep.clearParameters(); + prep.setInt(1, 3); + prep.setCharacterStream(2, + new StringReader(ascii1), ascii1.length()); + prep.setCharacterStream(3, null, 0); + prep.setAsciiStream(3, + new ByteArrayInputStream(ascii2.getBytes()), ascii2.length()); + prep.executeUpdate(); + + prep.clearParameters(); + prep.setInt(1, 4); + prep.setNull(2, Types.CLOB); + prep.setString(2, ascii2); + 
prep.setCharacterStream(3, null, 0); + prep.setNull(3, Types.CLOB); + prep.setString(3, ascii1); + prep.executeUpdate(); + + prep.clearParameters(); + prep.setInt(1, 5); + prep.setObject(2, new StringReader(ascii1)); + prep.setObject(3, new StringReader(ascii2), Types.CLOB, 0); + prep.executeUpdate(); + + rs = stat.executeQuery("SELECT ID, V1, V2 FROM T_CLOB ORDER BY ID"); + + rs.next(); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.getCharacterStream(2) == null && rs.wasNull()); + assertTrue(rs.getAsciiStream(3) == null && rs.wasNull()); + + rs.next(); + assertEquals(2, rs.getInt(1)); + assertTrue(rs.getString(2) == null && rs.wasNull()); + assertTrue(rs.getString(3) == null && rs.wasNull()); + + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals(ascii1, rs.getString(2)); + assertEquals(ascii2, rs.getString(3)); + + rs.next(); + assertEquals(4, rs.getInt(1)); + assertEquals(ascii2, rs.getString(2)); + assertEquals(ascii1, rs.getString(3)); + + rs.next(); + assertEquals(5, rs.getInt(1)); + assertEquals(ascii1, rs.getString(2)); + assertEquals(ascii2, rs.getString(3)); + + assertFalse(rs.next()); + assertTrue(prep.getWarnings() == null); + prep.clearWarnings(); + assertTrue(prep.getWarnings() == null); + assertTrue(conn == prep.getConnection()); + } + + private void testPreparedStatementWithLiteralsNone() throws SQLException { + // make sure that when the analyze table kicks in, + // it works with ALLOW_LITERALS=NONE + deleteDb("preparedStatement"); + Connection conn = getConnection( + "preparedStatement;ANALYZE_AUTO=100"); + conn.createStatement().execute( + "SET ALLOW_LITERALS NONE"); + conn.prepareStatement("CREATE TABLE test (id INT)").execute(); + PreparedStatement ps = conn.prepareStatement( + "INSERT INTO test (id) VALUES (?)"); + for (int i = 0; i < 200; i++) { + ps.setInt(1, i); + ps.executeUpdate(); + } + conn.close(); + deleteDb("preparedStatement"); + } + + private void testPreparedStatementWithIndexedParameterAndLiteralsNone() throws 
SQLException { + // make sure that when the analyze table kicks in, + // it works with ALLOW_LITERALS=NONE + deleteDb("preparedStatement"); + Connection conn = getConnection( + "preparedStatement;ANALYZE_AUTO=100"); + conn.createStatement().execute( + "SET ALLOW_LITERALS NONE"); + conn.prepareStatement("CREATE TABLE test (id INT)").execute(); + PreparedStatement ps = conn.prepareStatement( + "INSERT INTO test (id) VALUES (?1)"); + + ps.setInt(1, 1); + ps.executeUpdate(); + + conn.close(); + deleteDb("preparedStatement"); + } + + private void testPreparedStatementWithAnyParameter() throws SQLException { + deleteDb("preparedStatement"); + Connection conn = getConnection("preparedStatement"); + conn.prepareStatement("CREATE TABLE TEST(ID INT PRIMARY KEY, VALUE INT UNIQUE)").execute(); + PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST(ID, VALUE) VALUES (?, ?)"); + for (int i = 0; i < 10_000; i++) { + ps.setInt(1, i); + ps.setInt(2, i * 10); + ps.executeUpdate(); + } + Object[] values = {-100, 10, 200, 3_000, 40_000, 500_000}; + int[] expected = {1, 20, 300, 4_000}; + // Ensure that other methods return the same results + ps = conn.prepareStatement("SELECT ID FROM TEST WHERE VALUE IN (SELECT * FROM TABLE(X INT=?)) ORDER BY ID"); + anyParameterCheck(ps, values, expected); + ps = conn.prepareStatement("SELECT ID FROM TEST INNER JOIN TABLE(X INT=?) T ON TEST.VALUE = T.X"); + anyParameterCheck(ps, values, expected); + // Test expression = ANY(?) 
+ ps = conn.prepareStatement("SELECT ID FROM TEST WHERE VALUE = ANY(?)"); + assertThrows(ErrorCode.PARAMETER_NOT_SET_1, ps).executeQuery(); + anyParameterCheck(ps, values, expected); + anyParameterCheck(ps, 300, new int[] {30}); + anyParameterCheck(ps, -5, new int[0]); + conn.close(); + deleteDb("preparedStatement"); + } + + private void anyParameterCheck(PreparedStatement ps, Object values, int[] expected) throws SQLException { + ps.setObject(1, values); + try (ResultSet rs = ps.executeQuery()) { + for (int exp : expected) { + assertTrue(rs.next()); + assertEquals(exp, rs.getInt(1)); + } + assertFalse(rs.next()); + } + } + + private void checkBigDecimal(ResultSet rs, String[] value) throws SQLException { + for (String v : value) { + assertTrue(rs.next()); + java.math.BigDecimal x = rs.getBigDecimal(1); + trace("v=" + v + " x=" + x); + if (v == null) { + assertTrue(x == null); + } else { + assertTrue(x.compareTo(new java.math.BigDecimal(v)) == 0); + } + } + assertTrue(!rs.next()); + } + + private void testColumnMetaDataWithEquals(Connection conn) + throws SQLException { + Statement stmt = conn.createStatement(); + stmt.execute("CREATE TABLE TEST( id INT, someColumn INT )"); + PreparedStatement ps = conn + .prepareStatement("INSERT INTO TEST VALUES(?,?)"); + ps.setInt(1, 0); + ps.setInt(2, 999); + ps.execute(); + ps = conn.prepareStatement("SELECT * FROM TEST WHERE someColumn = ?"); + assertEquals(Types.INTEGER, + ps.getParameterMetaData().getParameterType(1)); + stmt.execute("DROP TABLE TEST"); + } + + private void testColumnMetaDataWithIn(Connection conn) throws SQLException { + Statement stmt = conn.createStatement(); + stmt.execute("CREATE TABLE TEST( id INT, someColumn INT )"); + PreparedStatement ps = conn + .prepareStatement("INSERT INTO TEST VALUES( ? , ? 
)"); + ps.setInt(1, 0); + ps.setInt(2, 999); + ps.execute(); + ps = conn + .prepareStatement("SELECT * FROM TEST WHERE someColumn IN (?,?)"); + assertEquals(Types.INTEGER, + ps.getParameterMetaData().getParameterType(1)); + stmt.execute("DROP TABLE TEST"); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestResultSet.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestResultSet.java new file mode 100644 index 0000000000000..303b4ca434615 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestResultSet.java @@ -0,0 +1,1844 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.Writer; +import java.math.BigDecimal; +import java.math.BigInteger; +import java.nio.charset.StandardCharsets; +import java.sql.Array; +import java.sql.Blob; +import java.sql.Clob; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.Date; +import java.sql.NClob; +import java.sql.PreparedStatement; +import java.sql.Ref; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.RowId; +import java.sql.SQLException; +import java.sql.SQLXML; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; +import java.util.Arrays; +import java.util.Calendar; +import java.util.Collections; +import java.util.TimeZone; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.test.TestBase; +import org.h2.util.DateTimeUtils; +import org.h2.util.IOUtils; +import org.h2.util.LocalDateTimeUtils; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; + +/** + * Tests for the ResultSet implementation. 
+ */ +public class TestResultSet extends TestBase { + + private Connection conn; + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("resultSet"); + conn = getConnection("resultSet"); + + stat = conn.createStatement(); + + testUnwrap(); + testReuseSimpleResult(); + testUnsupportedOperations(); + testAmbiguousColumnNames(); + testInsertRowWithUpdatableResultSetDefault(); + testBeforeFirstAfterLast(); + testParseSpecialValues(); + testSubstringPrecision(); + testSubstringDataType(); + testColumnLabelColumnName(); + testAbsolute(); + testFetchSize(); + testOwnUpdates(); + testUpdatePrimaryKey(); + testFindColumn(); + testColumnLength(); + testArray(); + testLimitMaxRows(); + + trace("max rows=" + stat.getMaxRows()); + stat.setMaxRows(6); + trace("max rows after set to 6=" + stat.getMaxRows()); + assertTrue(stat.getMaxRows() == 6); + + testInt(); + testSmallInt(); + testBigInt(); + testVarchar(); + testDecimal(); + testDoubleFloat(); + testDatetime(); + testDatetimeWithCalendar(); + testBlob(); + testClob(); + testAutoIncrement(); + + conn.close(); + deleteDb("resultSet"); + + } + + private void testUnwrap() throws SQLException { + ResultSet rs = stat.executeQuery("select 1"); + assertTrue(rs.isWrapperFor(Object.class)); + assertTrue(rs.isWrapperFor(ResultSet.class)); + assertTrue(rs.isWrapperFor(rs.getClass())); + assertFalse(rs.isWrapperFor(Integer.class)); + assertTrue(rs == rs.unwrap(Object.class)); + assertTrue(rs == rs.unwrap(ResultSet.class)); + assertTrue(rs == rs.unwrap(rs.getClass())); + assertThrows(ErrorCode.INVALID_VALUE_2, rs). 
+ unwrap(Integer.class); + } + + private void testReuseSimpleResult() throws SQLException { + ResultSet rs = stat.executeQuery("select table(x array=((1)))"); + while (rs.next()) { + rs.getString(1); + } + rs.close(); + rs = stat.executeQuery("select table(x array=((1)))"); + while (rs.next()) { + rs.getString(1); + } + rs.close(); + } + + @SuppressWarnings("deprecation") + private void testUnsupportedOperations() throws SQLException { + ResultSet rs = stat.executeQuery("select 1 as x from dual"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getUnicodeStream(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getUnicodeStream("x"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getObject(1, Collections.<String, Class<?>>emptyMap()); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getObject("x", Collections.<String, Class<?>>emptyMap()); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getRef(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getRef("x"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getURL(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getURL("x"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getRowId(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getRowId("x"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getSQLXML(1); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getSQLXML("x"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateRef(1, (Ref) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateRef("x", (Ref) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateArray(1, (Array) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateArray("x", (Array) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateRowId(1, (RowId) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateRowId("x", (RowId) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). 
+ updateNClob(1, (NClob) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateNClob("x", (NClob) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateSQLXML(1, (SQLXML) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + updateSQLXML("x", (SQLXML) null); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + getCursorName(); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, rs). + setFetchDirection(ResultSet.FETCH_REVERSE); + } + + private void testAmbiguousColumnNames() throws SQLException { + stat.execute("create table test(id int)"); + stat.execute("insert into test values(1)"); + ResultSet rs = stat.executeQuery( + "select 1 x, 2 x, 3 x, 4 x, 5 x, 6 x from test"); + rs.next(); + assertEquals(1, rs.getInt("x")); + stat.execute("drop table test"); + } + + private void testInsertRowWithUpdatableResultSetDefault() throws Exception { + stat.execute("create table test(id int primary key, " + + "data varchar(255) default 'Hello')"); + PreparedStatement prep = conn.prepareStatement("select * from test", + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE); + ResultSet rs = prep.executeQuery(); + rs.moveToInsertRow(); + rs.updateInt(1, 1); + rs.insertRow(); + rs.close(); + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + assertEquals("Hello", rs.getString(2)); + assertEquals("Hello", rs.getString("data")); + assertEquals("Hello", rs.getNString(2)); + assertEquals("Hello", rs.getNString("data")); + assertEquals("Hello", IOUtils.readStringAndClose( + rs.getNCharacterStream(2), -1)); + assertEquals("Hello", IOUtils.readStringAndClose( + rs.getNCharacterStream("data"), -1)); + assertEquals("Hello", IOUtils.readStringAndClose( + rs.getNClob(2).getCharacterStream(), -1)); + assertEquals("Hello", IOUtils.readStringAndClose( + rs.getNClob("data").getCharacterStream(), -1)); + + rs = prep.executeQuery(); + + rs.moveToInsertRow(); + rs.updateInt(1, 2); + rs.updateNString(2, "Hello"); + 
rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 3); + rs.updateNString("data", "Hello"); + rs.insertRow(); + + Clob c; + Writer w; + + rs.moveToInsertRow(); + rs.updateInt(1, 4); + c = conn.createClob(); + w = c.setCharacterStream(1); + w.write("Hello"); + w.close(); + rs.updateClob(2, c); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 5); + c = conn.createClob(); + w = c.setCharacterStream(1); + w.write("Hello"); + w.close(); + rs.updateClob("data", c); + rs.insertRow(); + + InputStream in; + + rs.moveToInsertRow(); + rs.updateInt(1, 6); + in = new ByteArrayInputStream("Hello".getBytes(StandardCharsets.UTF_8)); + rs.updateAsciiStream(2, in); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 7); + in = new ByteArrayInputStream("Hello".getBytes(StandardCharsets.UTF_8)); + rs.updateAsciiStream("data", in); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 8); + in = new ByteArrayInputStream("Hello-".getBytes(StandardCharsets.UTF_8)); + rs.updateAsciiStream(2, in, 5); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 9); + in = new ByteArrayInputStream("Hello-".getBytes(StandardCharsets.UTF_8)); + rs.updateAsciiStream("data", in, 5); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 10); + in = new ByteArrayInputStream("Hello-".getBytes(StandardCharsets.UTF_8)); + rs.updateAsciiStream(2, in, 5L); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 11); + in = new ByteArrayInputStream("Hello-".getBytes(StandardCharsets.UTF_8)); + rs.updateAsciiStream("data", in, 5L); + rs.insertRow(); + + rs = stat.executeQuery("select * from test"); + while (rs.next()) { + assertEquals("Hello", rs.getString(2)); + } + + stat.execute("drop table test"); + } + + private void testBeforeFirstAfterLast() throws SQLException { + stat.executeUpdate("create table test(id int)"); + stat.executeUpdate("insert into test values(1)"); + // With a result + ResultSet rs = stat.executeQuery("select * from test"); + 
assertTrue(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + rs.next(); + assertFalse(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + rs.next(); + assertFalse(rs.isBeforeFirst()); + assertTrue(rs.isAfterLast()); + rs.close(); + // With no result + rs = stat.executeQuery("select * from test where 1 = 2"); + assertFalse(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + rs.next(); + assertFalse(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + rs.close(); + stat.execute("drop table test"); + } + + private void testParseSpecialValues() throws SQLException { + for (int i = -10; i < 10; i++) { + testParseSpecialValue("" + ((long) Integer.MIN_VALUE + i)); + testParseSpecialValue("" + ((long) Integer.MAX_VALUE + i)); + BigInteger bi = BigInteger.valueOf(i); + testParseSpecialValue(bi.add(BigInteger.valueOf(Long.MIN_VALUE)).toString()); + testParseSpecialValue(bi.add(BigInteger.valueOf(Long.MAX_VALUE)).toString()); + } + } + + private void testParseSpecialValue(String x) throws SQLException { + Object expected; + expected = new BigDecimal(x); + try { + expected = Long.decode(x); + expected = Integer.decode(x); + } catch (Exception e) { + // ignore + } + ResultSet rs = stat.executeQuery("call " + x); + rs.next(); + Object o = rs.getObject(1); + assertEquals(expected.getClass().getName(), o.getClass().getName()); + assertTrue(expected.equals(o)); + } + + private void testSubstringDataType() throws SQLException { + ResultSet rs = stat.executeQuery("select substr(x, 1, 1) from dual"); + rs.next(); + assertEquals(Types.VARCHAR, rs.getMetaData().getColumnType(1)); + } + + private void testColumnLabelColumnName() throws SQLException { + ResultSet rs = stat.executeQuery("select x as y from dual"); + rs.next(); + rs.getString("x"); + rs.getString("y"); + rs.close(); + rs = conn.getMetaData().getColumns(null, null, null, null); + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + String[] columnName = new 
String[columnCount]; + for (int i = 1; i <= columnCount; i++) { + // columnName[i - 1] = meta.getColumnLabel(i); + columnName[i - 1] = meta.getColumnName(i); + } + while (rs.next()) { + for (int i = 0; i < columnCount; i++) { + rs.getObject(columnName[i]); + } + } + } + + private void testAbsolute() throws SQLException { + // stat.execute("SET MAX_MEMORY_ROWS 90"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)"); + // there was a problem when more than MAX_MEMORY_ROWS were in the + // result set + stat.execute("INSERT INTO TEST SELECT X FROM SYSTEM_RANGE(1, 200)"); + Statement s2 = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); + ResultSet rs = s2.executeQuery("SELECT * FROM TEST ORDER BY ID"); + for (int i = 100; i > 0; i--) { + rs.absolute(i); + assertEquals(i, rs.getInt(1)); + } + stat.execute("DROP TABLE TEST"); + } + + private void testFetchSize() throws SQLException { + if (!config.networked || config.memory) { + return; + } + ResultSet rs = stat.executeQuery("SELECT * FROM SYSTEM_RANGE(1, 100)"); + int a = stat.getFetchSize(); + int b = rs.getFetchSize(); + assertEquals(a, b); + rs.setFetchDirection(ResultSet.FETCH_FORWARD); + rs.setFetchSize(b + 1); + b = rs.getFetchSize(); + assertEquals(a + 1, b); + } + + private void testOwnUpdates() throws SQLException { + DatabaseMetaData meta = conn.getMetaData(); + for (int i = 0; i < 3; i++) { + int type = i == 0 ? ResultSet.TYPE_FORWARD_ONLY : + i == 1 ? 
ResultSet.TYPE_SCROLL_INSENSITIVE : + ResultSet.TYPE_SCROLL_SENSITIVE; + assertTrue(meta.ownUpdatesAreVisible(type)); + assertFalse(meta.ownDeletesAreVisible(type)); + assertFalse(meta.ownInsertsAreVisible(type)); + } + stat = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, + ResultSet.CONCUR_UPDATABLE); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + stat.execute("INSERT INTO TEST VALUES(2, 'World')"); + ResultSet rs; + rs = stat.executeQuery("SELECT ID, NAME FROM TEST ORDER BY ID"); + rs.next(); + rs.next(); + rs.updateString(2, "Hallo"); + rs.updateRow(); + assertEquals("Hallo", rs.getString(2)); + stat.execute("DROP TABLE TEST"); + } + + private void testUpdatePrimaryKey() throws SQLException { + stat = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, + ResultSet.CONCUR_UPDATABLE); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + rs.next(); + rs.updateInt(1, 2); + rs.updateRow(); + rs.updateInt(1, 3); + rs.updateRow(); + stat.execute("DROP TABLE TEST"); + } + + private void checkPrecision(int expected, String sql) throws SQLException { + ResultSetMetaData meta = stat.executeQuery(sql).getMetaData(); + assertEquals(expected, meta.getPrecision(1)); + } + + private void testSubstringPrecision() throws SQLException { + trace("testSubstringPrecision"); + stat.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR(10))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello'), (2, 'WorldPeace')"); + checkPrecision(0, "SELECT SUBSTR(NAME, 12, 4) FROM TEST"); + checkPrecision(9, "SELECT SUBSTR(NAME, 2) FROM TEST"); + checkPrecision(10, "SELECT SUBSTR(NAME, ID) FROM TEST"); + checkPrecision(4, "SELECT SUBSTR(NAME, 2, 4) FROM TEST"); + checkPrecision(3, "SELECT SUBSTR(NAME, 8, 4) FROM TEST"); + checkPrecision(4, "SELECT SUBSTR(NAME, 7, 4) FROM 
TEST"); + checkPrecision(8, "SELECT SUBSTR(NAME, 3, ID*0) FROM TEST"); + stat.execute("DROP TABLE TEST"); + } + + private void testFindColumn() throws SQLException { + trace("testFindColumn"); + ResultSet rs; + stat.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR)"); + rs = stat.executeQuery("SELECT * FROM TEST"); + assertEquals(1, rs.findColumn("ID")); + assertEquals(2, rs.findColumn("NAME")); + assertEquals(1, rs.findColumn("id")); + assertEquals(2, rs.findColumn("name")); + assertEquals(1, rs.findColumn("Id")); + assertEquals(2, rs.findColumn("Name")); + assertEquals(1, rs.findColumn("TEST.ID")); + assertEquals(2, rs.findColumn("TEST.NAME")); + assertEquals(1, rs.findColumn("Test.Id")); + assertEquals(2, rs.findColumn("Test.Name")); + stat.execute("DROP TABLE TEST"); + + stat.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR, DATA VARCHAR)"); + rs = stat.executeQuery("SELECT * FROM TEST"); + assertEquals(1, rs.findColumn("ID")); + assertEquals(2, rs.findColumn("NAME")); + assertEquals(3, rs.findColumn("DATA")); + assertEquals(1, rs.findColumn("id")); + assertEquals(2, rs.findColumn("name")); + assertEquals(3, rs.findColumn("data")); + assertEquals(1, rs.findColumn("Id")); + assertEquals(2, rs.findColumn("Name")); + assertEquals(3, rs.findColumn("Data")); + assertEquals(1, rs.findColumn("TEST.ID")); + assertEquals(2, rs.findColumn("TEST.NAME")); + assertEquals(3, rs.findColumn("TEST.DATA")); + assertEquals(1, rs.findColumn("Test.Id")); + assertEquals(2, rs.findColumn("Test.Name")); + assertEquals(3, rs.findColumn("Test.Data")); + stat.execute("DROP TABLE TEST"); + + } + + private void testColumnLength() throws SQLException { + trace("testColumnDisplayLength"); + ResultSet rs; + ResultSetMetaData meta; + + stat.execute("CREATE TABLE one (ID INT, NAME VARCHAR(255))"); + rs = stat.executeQuery("select * from one"); + meta = rs.getMetaData(); + assertEquals("ID", meta.getColumnLabel(1)); + assertEquals(11, meta.getColumnDisplaySize(1)); + assertEquals("NAME", 
meta.getColumnLabel(2)); + assertEquals(255, meta.getColumnDisplaySize(2)); + stat.execute("DROP TABLE one"); + + rs = stat.executeQuery("select 1, 'Hello' union select 2, 'Hello World!'"); + meta = rs.getMetaData(); + assertEquals(11, meta.getColumnDisplaySize(1)); + assertEquals(12, meta.getColumnDisplaySize(2)); + + rs = stat.executeQuery("explain select * from dual"); + meta = rs.getMetaData(); + assertEquals(Integer.MAX_VALUE, meta.getColumnDisplaySize(1)); + assertEquals(Integer.MAX_VALUE, meta.getPrecision(1)); + + rs = stat.executeQuery("script"); + meta = rs.getMetaData(); + assertEquals(Integer.MAX_VALUE, meta.getColumnDisplaySize(1)); + assertEquals(Integer.MAX_VALUE, meta.getPrecision(1)); + + rs = stat.executeQuery("select group_concat(table_name) " + + "from information_schema.tables"); + rs.next(); + meta = rs.getMetaData(); + assertEquals(Integer.MAX_VALUE, meta.getColumnDisplaySize(1)); + assertEquals(Integer.MAX_VALUE, meta.getPrecision(1)); + + } + + private void testLimitMaxRows() throws SQLException { + trace("Test LimitMaxRows"); + ResultSet rs; + stat.execute("CREATE TABLE one (C CHARACTER(10))"); + rs = stat.executeQuery("SELECT C || C FROM one;"); + ResultSetMetaData md = rs.getMetaData(); + assertEquals(20, md.getPrecision(1)); + ResultSet rs2 = stat.executeQuery("SELECT UPPER (C) FROM one;"); + ResultSetMetaData md2 = rs2.getMetaData(); + assertEquals(10, md2.getPrecision(1)); + rs = stat.executeQuery("SELECT UPPER (C), CHAR(10), " + + "CONCAT(C,C,C), HEXTORAW(C), RAWTOHEX(C) FROM one"); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(10, meta.getPrecision(1)); + assertEquals(1, meta.getPrecision(2)); + assertEquals(30, meta.getPrecision(3)); + assertEquals(3, meta.getPrecision(4)); + assertEquals(40, meta.getPrecision(5)); + stat.execute("DROP TABLE one"); + } + + private void testAutoIncrement() throws SQLException { + trace("Test AutoIncrement"); + stat.execute("DROP TABLE IF EXISTS TEST"); + ResultSet rs; + 
stat.execute("CREATE TABLE TEST(ID IDENTITY NOT NULL, NAME VARCHAR NULL)"); + + stat.execute("INSERT INTO TEST(NAME) VALUES('Hello')", + Statement.RETURN_GENERATED_KEYS); + rs = stat.getGeneratedKeys(); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + + stat.execute("INSERT INTO TEST(NAME) VALUES('World')", + Statement.RETURN_GENERATED_KEYS); + rs = stat.getGeneratedKeys(); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + + rs = stat.executeQuery("SELECT ID AS I, NAME AS N, ID+1 AS IP1 FROM TEST"); + ResultSetMetaData meta = rs.getMetaData(); + assertTrue(meta.isAutoIncrement(1)); + assertFalse(meta.isAutoIncrement(2)); + assertFalse(meta.isAutoIncrement(3)); + assertEquals(ResultSetMetaData.columnNoNulls, meta.isNullable(1)); + assertEquals(ResultSetMetaData.columnNullable, meta.isNullable(2)); + assertEquals(ResultSetMetaData.columnNullableUnknown, meta.isNullable(3)); + assertTrue(rs.next()); + assertTrue(rs.next()); + assertFalse(rs.next()); + + } + + private void testInt() throws SQLException { + trace("Test INT"); + ResultSet rs; + Object o; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE INT)"); + stat.execute("INSERT INTO TEST VALUES(1,-1)"); + stat.execute("INSERT INTO TEST VALUES(2,0)"); + stat.execute("INSERT INTO TEST VALUES(3,1)"); + stat.execute("INSERT INTO TEST VALUES(4," + Integer.MAX_VALUE + ")"); + stat.execute("INSERT INTO TEST VALUES(5," + Integer.MIN_VALUE + ")"); + stat.execute("INSERT INTO TEST VALUES(6,NULL)"); + // this should not be read - maxrows=6 + stat.execute("INSERT INTO TEST VALUES(7,NULL)"); + stat.setMaxRows(6); + + // MySQL compatibility (is this required?) 
+ // rs=stat.executeQuery("SELECT * FROM TEST T ORDER BY ID"); + // check(rs.findColumn("T.ID"), 1); + // check(rs.findColumn("T.NAME"), 2); + + rs = stat.executeQuery("SELECT *, NULL AS N FROM TEST ORDER BY ID"); + + // MySQL compatibility + assertEquals(1, rs.findColumn("TEST.ID")); + assertEquals(2, rs.findColumn("TEST.VALUE")); + + ResultSetMetaData meta = rs.getMetaData(); + assertEquals(3, meta.getColumnCount()); + assertEquals("resultSet".toUpperCase(), meta.getCatalogName(1)); + assertTrue("PUBLIC".equals(meta.getSchemaName(2))); + assertTrue("TEST".equals(meta.getTableName(1))); + assertTrue("ID".equals(meta.getColumnName(1))); + assertTrue("VALUE".equals(meta.getColumnName(2))); + assertTrue(!meta.isAutoIncrement(1)); + assertTrue(meta.isCaseSensitive(1)); + assertTrue(meta.isSearchable(1)); + assertFalse(meta.isCurrency(1)); + assertTrue(meta.getColumnDisplaySize(1) > 0); + assertTrue(meta.isSigned(1)); + assertTrue(meta.isSearchable(2)); + assertEquals(ResultSetMetaData.columnNoNulls, meta.isNullable(1)); + assertFalse(meta.isReadOnly(1)); + assertTrue(meta.isWritable(1)); + assertFalse(meta.isDefinitelyWritable(1)); + assertTrue(meta.getColumnDisplaySize(1) > 0); + assertTrue(meta.getColumnDisplaySize(2) > 0); + assertEquals(null, meta.getColumnClassName(3)); + + assertTrue(rs.getRow() == 0); + assertResultSetMeta(rs, 3, new String[] { "ID", "VALUE", "N" }, + new int[] { Types.INTEGER, Types.INTEGER, + Types.NULL }, new int[] { 10, 10, 1 }, new int[] { 0, 0, 0 }); + rs.next(); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + assertEquals(ResultSet.FETCH_FORWARD, rs.getFetchDirection()); + trace("default fetch size=" + rs.getFetchSize()); + // 0 should be an allowed value (but it's not defined what it actually + // means) + rs.setFetchSize(0); + assertThrows(ErrorCode.INVALID_VALUE_2, rs).setFetchSize(-1); + // fetch size 100 is bigger than maxrows - not allowed + assertThrows(ErrorCode.INVALID_VALUE_2, rs).setFetchSize(100); + 
rs.setFetchSize(6); + + assertTrue(rs.getRow() == 1); + assertEquals(2, rs.findColumn("VALUE")); + assertEquals(2, rs.findColumn("value")); + assertEquals(2, rs.findColumn("Value")); + assertEquals(2, rs.findColumn("Value")); + assertEquals(1, rs.findColumn("ID")); + assertEquals(1, rs.findColumn("id")); + assertEquals(1, rs.findColumn("Id")); + assertEquals(1, rs.findColumn("iD")); + assertTrue(rs.getInt(2) == -1 && !rs.wasNull()); + assertTrue(rs.getInt("VALUE") == -1 && !rs.wasNull()); + assertTrue(rs.getInt("value") == -1 && !rs.wasNull()); + assertTrue(rs.getInt("Value") == -1 && !rs.wasNull()); + assertTrue(rs.getString("Value").equals("-1") && !rs.wasNull()); + + o = rs.getObject("value"); + trace(o.getClass().getName()); + assertTrue(o instanceof Integer); + assertTrue(((Integer) o).intValue() == -1); + o = rs.getObject("value", Integer.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Integer); + assertTrue(((Integer) o).intValue() == -1); + o = rs.getObject(2); + trace(o.getClass().getName()); + assertTrue(o instanceof Integer); + assertTrue(((Integer) o).intValue() == -1); + o = rs.getObject(2, Integer.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Integer); + assertTrue(((Integer) o).intValue() == -1); + assertTrue(rs.getBoolean("Value")); + assertTrue(rs.getByte("Value") == (byte) -1); + assertTrue(rs.getShort("Value") == (short) -1); + assertTrue(rs.getLong("Value") == -1); + assertTrue(rs.getFloat("Value") == -1.0); + assertTrue(rs.getDouble("Value") == -1.0); + + assertTrue(rs.getString("Value").equals("-1") && !rs.wasNull()); + assertTrue(rs.getInt("ID") == 1 && !rs.wasNull()); + assertTrue(rs.getInt("id") == 1 && !rs.wasNull()); + assertTrue(rs.getInt("Id") == 1 && !rs.wasNull()); + assertTrue(rs.getInt(1) == 1 && !rs.wasNull()); + rs.next(); + assertTrue(rs.getRow() == 2); + assertTrue(rs.getInt(2) == 0 && !rs.wasNull()); + assertTrue(!rs.getBoolean(2)); + assertTrue(rs.getByte(2) == 0); + 
assertTrue(rs.getShort(2) == 0); + assertTrue(rs.getLong(2) == 0); + assertTrue(rs.getFloat(2) == 0.0); + assertTrue(rs.getDouble(2) == 0.0); + assertTrue(rs.getString(2).equals("0") && !rs.wasNull()); + assertTrue(rs.getInt(1) == 2 && !rs.wasNull()); + rs.next(); + assertTrue(rs.getRow() == 3); + assertTrue(rs.getInt("ID") == 3 && !rs.wasNull()); + assertTrue(rs.getInt("VALUE") == 1 && !rs.wasNull()); + rs.next(); + assertTrue(rs.getRow() == 4); + assertTrue(rs.getInt("ID") == 4 && !rs.wasNull()); + assertTrue(rs.getInt("VALUE") == Integer.MAX_VALUE && !rs.wasNull()); + rs.next(); + assertTrue(rs.getRow() == 5); + assertTrue(rs.getInt("id") == 5 && !rs.wasNull()); + assertTrue(rs.getInt("value") == Integer.MIN_VALUE && !rs.wasNull()); + assertTrue(rs.getString(1).equals("5") && !rs.wasNull()); + rs.next(); + assertTrue(rs.getRow() == 6); + assertTrue(rs.getInt("id") == 6 && !rs.wasNull()); + assertTrue(rs.getInt("value") == 0 && rs.wasNull()); + assertTrue(rs.getInt(2) == 0 && rs.wasNull()); + assertTrue(rs.getInt(1) == 6 && !rs.wasNull()); + assertTrue(rs.getString(1).equals("6") && !rs.wasNull()); + assertTrue(rs.getString(2) == null && rs.wasNull()); + o = rs.getObject(2); + assertTrue(o == null); + assertTrue(rs.wasNull()); + o = rs.getObject(2, Integer.class); + assertTrue(o == null); + assertTrue(rs.wasNull()); + assertFalse(rs.next()); + assertEquals(0, rs.getRow()); + // there is one more row, but because of setMaxRows we don't get it + + stat.execute("DROP TABLE TEST"); + stat.setMaxRows(0); + } + + private void testSmallInt() throws SQLException { + trace("Test SMALLINT"); + ResultSet rs; + Object o; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE SMALLINT)"); + stat.execute("INSERT INTO TEST VALUES(1,-1)"); + stat.execute("INSERT INTO TEST VALUES(2,0)"); + stat.execute("INSERT INTO TEST VALUES(3,1)"); + stat.execute("INSERT INTO TEST VALUES(4," + Short.MAX_VALUE + ")"); + stat.execute("INSERT INTO TEST VALUES(5," + Short.MIN_VALUE + ")"); + 
stat.execute("INSERT INTO TEST VALUES(6,NULL)"); + + // MySQL compatibility (is this required?) + // rs=stat.executeQuery("SELECT * FROM TEST T ORDER BY ID"); + // check(rs.findColumn("T.ID"), 1); + // check(rs.findColumn("T.NAME"), 2); + + rs = stat.executeQuery("SELECT *, NULL AS N FROM TEST ORDER BY ID"); + + // MySQL compatibility + assertEquals(1, rs.findColumn("TEST.ID")); + assertEquals(2, rs.findColumn("TEST.VALUE")); + + assertTrue(rs.getRow() == 0); + assertResultSetMeta(rs, 3, new String[] { "ID", "VALUE", "N" }, + new int[] { Types.INTEGER, Types.SMALLINT, + Types.NULL }, new int[] { 10, 5, 1 }, new int[] { 0, 0, 0 }); + rs.next(); + + assertTrue(rs.getRow() == 1); + assertEquals(2, rs.findColumn("VALUE")); + assertEquals(2, rs.findColumn("value")); + assertEquals(2, rs.findColumn("Value")); + assertEquals(2, rs.findColumn("Value")); + assertEquals(1, rs.findColumn("ID")); + assertEquals(1, rs.findColumn("id")); + assertEquals(1, rs.findColumn("Id")); + assertEquals(1, rs.findColumn("iD")); + assertTrue(rs.getShort(2) == -1 && !rs.wasNull()); + assertTrue(rs.getShort("VALUE") == -1 && !rs.wasNull()); + assertTrue(rs.getShort("value") == -1 && !rs.wasNull()); + assertTrue(rs.getShort("Value") == -1 && !rs.wasNull()); + assertTrue(rs.getString("Value").equals("-1") && !rs.wasNull()); + + o = rs.getObject("value"); + trace(o.getClass().getName()); + assertTrue(o.getClass() == (SysProperties.OLD_RESULT_SET_GET_OBJECT ? Short.class : Integer.class)); + assertTrue(((Number) o).intValue() == -1); + o = rs.getObject("value", Short.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Short); + assertTrue(((Short) o).shortValue() == -1); + o = rs.getObject(2); + trace(o.getClass().getName()); + assertTrue(o.getClass() == (SysProperties.OLD_RESULT_SET_GET_OBJECT ? 
Short.class : Integer.class)); + assertTrue(((Number) o).intValue() == -1); + o = rs.getObject(2, Short.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Short); + assertTrue(((Short) o).shortValue() == -1); + assertTrue(rs.getBoolean("Value")); + assertTrue(rs.getByte("Value") == (byte) -1); + assertTrue(rs.getInt("Value") == -1); + assertTrue(rs.getLong("Value") == -1); + assertTrue(rs.getFloat("Value") == -1.0); + assertTrue(rs.getDouble("Value") == -1.0); + + assertTrue(rs.getString("Value").equals("-1") && !rs.wasNull()); + assertTrue(rs.getShort("ID") == 1 && !rs.wasNull()); + assertTrue(rs.getShort("id") == 1 && !rs.wasNull()); + assertTrue(rs.getShort("Id") == 1 && !rs.wasNull()); + assertTrue(rs.getShort(1) == 1 && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 2); + assertTrue(rs.getShort(2) == 0 && !rs.wasNull()); + assertTrue(!rs.getBoolean(2)); + assertTrue(rs.getByte(2) == 0); + assertTrue(rs.getInt(2) == 0); + assertTrue(rs.getLong(2) == 0); + assertTrue(rs.getFloat(2) == 0.0); + assertTrue(rs.getDouble(2) == 0.0); + assertTrue(rs.getString(2).equals("0") && !rs.wasNull()); + assertTrue(rs.getShort(1) == 2 && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 3); + assertTrue(rs.getShort("ID") == 3 && !rs.wasNull()); + assertTrue(rs.getShort("VALUE") == 1 && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 4); + assertTrue(rs.getShort("ID") == 4 && !rs.wasNull()); + assertTrue(rs.getShort("VALUE") == Short.MAX_VALUE && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 5); + assertTrue(rs.getShort("id") == 5 && !rs.wasNull()); + assertTrue(rs.getShort("value") == Short.MIN_VALUE && !rs.wasNull()); + assertTrue(rs.getString(1).equals("5") && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 6); + assertTrue(rs.getShort("id") == 6 && !rs.wasNull()); + assertTrue(rs.getShort("value") == 0 && rs.wasNull()); + assertTrue(rs.getShort(2) == 0 && rs.wasNull()); + assertTrue(rs.getShort(1) == 6 
&& !rs.wasNull()); + assertTrue(rs.getString(1).equals("6") && !rs.wasNull()); + assertTrue(rs.getString(2) == null && rs.wasNull()); + o = rs.getObject(2); + assertTrue(o == null); + assertTrue(rs.wasNull()); + o = rs.getObject(2, Short.class); + assertTrue(o == null); + assertTrue(rs.wasNull()); + assertFalse(rs.next()); + assertEquals(0, rs.getRow()); + + stat.execute("DROP TABLE TEST"); + stat.setMaxRows(0); + } + + private void testBigInt() throws SQLException { + trace("Test BIGINT"); + ResultSet rs; + Object o; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE BIGINT)"); + stat.execute("INSERT INTO TEST VALUES(1,-1)"); + stat.execute("INSERT INTO TEST VALUES(2,0)"); + stat.execute("INSERT INTO TEST VALUES(3,1)"); + stat.execute("INSERT INTO TEST VALUES(4," + Long.MAX_VALUE + ")"); + stat.execute("INSERT INTO TEST VALUES(5," + Long.MIN_VALUE + ")"); + stat.execute("INSERT INTO TEST VALUES(6,NULL)"); + + // MySQL compatibility (is this required?) + // rs=stat.executeQuery("SELECT * FROM TEST T ORDER BY ID"); + // check(rs.findColumn("T.ID"), 1); + // check(rs.findColumn("T.NAME"), 2); + + rs = stat.executeQuery("SELECT *, NULL AS N FROM TEST ORDER BY ID"); + + // MySQL compatibility + assertEquals(1, rs.findColumn("TEST.ID")); + assertEquals(2, rs.findColumn("TEST.VALUE")); + + assertTrue(rs.getRow() == 0); + assertResultSetMeta(rs, 3, new String[] { "ID", "VALUE", "N" }, + new int[] { Types.INTEGER, Types.BIGINT, + Types.NULL }, new int[] { 10, 19, 1 }, new int[] { 0, 0, 0 }); + rs.next(); + + assertTrue(rs.getRow() == 1); + assertEquals(2, rs.findColumn("VALUE")); + assertEquals(2, rs.findColumn("value")); + assertEquals(2, rs.findColumn("Value")); + assertEquals(2, rs.findColumn("Value")); + assertEquals(1, rs.findColumn("ID")); + assertEquals(1, rs.findColumn("id")); + assertEquals(1, rs.findColumn("Id")); + assertEquals(1, rs.findColumn("iD")); + assertTrue(rs.getLong(2) == -1 && !rs.wasNull()); + assertTrue(rs.getLong("VALUE") == -1 && 
!rs.wasNull()); + assertTrue(rs.getLong("value") == -1 && !rs.wasNull()); + assertTrue(rs.getLong("Value") == -1 && !rs.wasNull()); + assertTrue(rs.getString("Value").equals("-1") && !rs.wasNull()); + + o = rs.getObject("value"); + trace(o.getClass().getName()); + assertTrue(o instanceof Long); + assertTrue(((Long) o).longValue() == -1); + o = rs.getObject("value", Long.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Long); + assertTrue(((Long) o).longValue() == -1); + o = rs.getObject("value", BigInteger.class); + trace(o.getClass().getName()); + assertTrue(o instanceof BigInteger); + assertTrue(((BigInteger) o).longValue() == -1); + o = rs.getObject(2); + trace(o.getClass().getName()); + assertTrue(o instanceof Long); + assertTrue(((Long) o).longValue() == -1); + o = rs.getObject(2, Long.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Long); + assertTrue(((Long) o).longValue() == -1); + o = rs.getObject(2, BigInteger.class); + trace(o.getClass().getName()); + assertTrue(o instanceof BigInteger); + assertTrue(((BigInteger) o).longValue() == -1); + assertTrue(rs.getBoolean("Value")); + assertTrue(rs.getByte("Value") == (byte) -1); + assertTrue(rs.getShort("Value") == -1); + assertTrue(rs.getInt("Value") == -1); + assertTrue(rs.getFloat("Value") == -1.0); + assertTrue(rs.getDouble("Value") == -1.0); + + assertTrue(rs.getString("Value").equals("-1") && !rs.wasNull()); + assertTrue(rs.getLong("ID") == 1 && !rs.wasNull()); + assertTrue(rs.getLong("id") == 1 && !rs.wasNull()); + assertTrue(rs.getLong("Id") == 1 && !rs.wasNull()); + assertTrue(rs.getLong(1) == 1 && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 2); + assertTrue(rs.getLong(2) == 0 && !rs.wasNull()); + assertTrue(!rs.getBoolean(2)); + assertTrue(rs.getByte(2) == 0); + assertTrue(rs.getShort(2) == 0); + assertTrue(rs.getInt(2) == 0); + assertTrue(rs.getFloat(2) == 0.0); + assertTrue(rs.getDouble(2) == 0.0); + assertTrue(rs.getString(2).equals("0") && 
!rs.wasNull()); + assertTrue(rs.getLong(1) == 2 && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 3); + assertTrue(rs.getLong("ID") == 3 && !rs.wasNull()); + assertTrue(rs.getLong("VALUE") == 1 && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 4); + assertTrue(rs.getLong("ID") == 4 && !rs.wasNull()); + assertTrue(rs.getLong("VALUE") == Long.MAX_VALUE && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 5); + assertTrue(rs.getLong("id") == 5 && !rs.wasNull()); + assertTrue(rs.getLong("value") == Long.MIN_VALUE && !rs.wasNull()); + assertTrue(rs.getString(1).equals("5") && !rs.wasNull()); + rs.next(); + + assertTrue(rs.getRow() == 6); + assertTrue(rs.getLong("id") == 6 && !rs.wasNull()); + assertTrue(rs.getLong("value") == 0 && rs.wasNull()); + assertTrue(rs.getLong(2) == 0 && rs.wasNull()); + assertTrue(rs.getLong(1) == 6 && !rs.wasNull()); + assertTrue(rs.getString(1).equals("6") && !rs.wasNull()); + assertTrue(rs.getString(2) == null && rs.wasNull()); + o = rs.getObject(2); + assertTrue(o == null); + assertTrue(rs.wasNull()); + o = rs.getObject(2, Long.class); + assertTrue(o == null); + assertTrue(rs.wasNull()); + assertFalse(rs.next()); + assertEquals(0, rs.getRow()); + + stat.execute("DROP TABLE TEST"); + stat.setMaxRows(0); + } + + private void testVarchar() throws SQLException { + trace("Test VARCHAR"); + ResultSet rs; + Object o; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1,'')"); + stat.execute("INSERT INTO TEST VALUES(2,' ')"); + stat.execute("INSERT INTO TEST VALUES(3,' ')"); + stat.execute("INSERT INTO TEST VALUES(4,NULL)"); + stat.execute("INSERT INTO TEST VALUES(5,'Hi')"); + stat.execute("INSERT INTO TEST VALUES(6,' Hi ')"); + stat.execute("INSERT INTO TEST VALUES(7,'Joe''s')"); + stat.execute("INSERT INTO TEST VALUES(8,'{escape}')"); + stat.execute("INSERT INTO TEST VALUES(9,'\\n')"); + stat.execute("INSERT INTO TEST VALUES(10,'\\''')"); + 
stat.execute("INSERT INTO TEST VALUES(11,'\\%')"); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 2, new String[] { "ID", "VALUE" }, + new int[] { Types.INTEGER, Types.VARCHAR }, new int[] { + 10, 255 }, new int[] { 0, 0 }); + String value; + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: <>)"); + assertTrue(value != null && value.equals("") && !rs.wasNull()); + assertTrue(rs.getInt(1) == 1 && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: < >)"); + assertTrue(rs.getString(2).equals(" ") && !rs.wasNull()); + assertTrue(rs.getInt(1) == 2 && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: < >)"); + assertTrue(rs.getString(2).equals(" ") && !rs.wasNull()); + assertTrue(rs.getInt(1) == 3 && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: )"); + assertTrue(rs.getString(2) == null && rs.wasNull()); + assertTrue(rs.getInt(1) == 4 && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: )"); + assertTrue(rs.getInt(1) == 5 && !rs.wasNull()); + assertTrue(rs.getString(2).equals("Hi") && !rs.wasNull()); + o = rs.getObject("value"); + trace(o.getClass().getName()); + assertTrue(o instanceof String); + assertTrue(o.toString().equals("Hi")); + o = rs.getObject("value", String.class); + trace(o.getClass().getName()); + assertTrue(o instanceof String); + assertTrue(o.equals("Hi")); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: < Hi >)"); + assertTrue(rs.getInt(1) == 6 && !rs.wasNull()); + assertTrue(rs.getString(2).equals(" Hi ") && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: )"); + assertTrue(rs.getInt(1) == 7 && !rs.wasNull()); + assertTrue(rs.getString(2).equals("Joe's") && !rs.wasNull()); + rs.next(); + value = 
rs.getString(2); + trace("Value: <" + value + "> (should be: <{escape}>)"); + assertTrue(rs.getInt(1) == 8 && !rs.wasNull()); + assertTrue(rs.getString(2).equals("{escape}") && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: <\\n>)"); + assertTrue(rs.getInt(1) == 9 && !rs.wasNull()); + assertTrue(rs.getString(2).equals("\\n") && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: <\\'>)"); + assertTrue(rs.getInt(1) == 10 && !rs.wasNull()); + assertTrue(rs.getString(2).equals("\\'") && !rs.wasNull()); + rs.next(); + value = rs.getString(2); + trace("Value: <" + value + "> (should be: <\\%>)"); + assertTrue(rs.getInt(1) == 11 && !rs.wasNull()); + assertTrue(rs.getString(2).equals("\\%") && !rs.wasNull()); + assertTrue(!rs.next()); + stat.execute("DROP TABLE TEST"); + } + + private void testDecimal() throws SQLException { + int numericType; + if (SysProperties.BIG_DECIMAL_IS_DECIMAL) { + numericType = Types.DECIMAL; + } else { + numericType = Types.NUMERIC; + } + trace("Test DECIMAL"); + ResultSet rs; + Object o; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE DECIMAL(10,2))"); + stat.execute("INSERT INTO TEST VALUES(1,-1)"); + stat.execute("INSERT INTO TEST VALUES(2,.0)"); + stat.execute("INSERT INTO TEST VALUES(3,1.)"); + stat.execute("INSERT INTO TEST VALUES(4,12345678.89)"); + stat.execute("INSERT INTO TEST VALUES(6,99999998.99)"); + stat.execute("INSERT INTO TEST VALUES(7,-99999998.99)"); + stat.execute("INSERT INTO TEST VALUES(8,NULL)"); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 2, new String[] { "ID", "VALUE" }, + new int[] { Types.INTEGER, numericType }, new int[] { + 10, 10 }, new int[] { 0, 2 }); + BigDecimal bd; + + rs.next(); + assertTrue(rs.getInt(1) == 1); + assertTrue(!rs.wasNull()); + assertTrue(rs.getInt(2) == -1); + assertTrue(!rs.wasNull()); + bd = rs.getBigDecimal(2); + 
assertTrue(bd.compareTo(new BigDecimal("-1.00")) == 0); + assertTrue(!rs.wasNull()); + o = rs.getObject(2); + trace(o.getClass().getName()); + assertTrue(o instanceof BigDecimal); + assertTrue(((BigDecimal) o).compareTo(new BigDecimal("-1.00")) == 0); + o = rs.getObject(2, BigDecimal.class); + trace(o.getClass().getName()); + assertTrue(o instanceof BigDecimal); + assertTrue(((BigDecimal) o).compareTo(new BigDecimal("-1.00")) == 0); + + rs.next(); + assertTrue(rs.getInt(1) == 2); + assertTrue(!rs.wasNull()); + assertTrue(rs.getInt(2) == 0); + assertTrue(!rs.wasNull()); + bd = rs.getBigDecimal(2); + assertTrue(bd.compareTo(new BigDecimal("0.00")) == 0); + assertTrue(!rs.wasNull()); + + rs.next(); + checkColumnBigDecimal(rs, 2, 1, "1.00"); + + rs.next(); + checkColumnBigDecimal(rs, 2, 12345679, "12345678.89"); + + rs.next(); + checkColumnBigDecimal(rs, 2, 99999999, "99999998.99"); + + rs.next(); + checkColumnBigDecimal(rs, 2, -99999999, "-99999998.99"); + + rs.next(); + checkColumnBigDecimal(rs, 2, 0, null); + + assertTrue(!rs.next()); + stat.execute("DROP TABLE TEST"); + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE DECIMAL(22,2))"); + stat.execute("INSERT INTO TEST VALUES(1,-12345678909876543210)"); + stat.execute("INSERT INTO TEST VALUES(2,12345678901234567890.12345)"); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(new BigDecimal("-12345678909876543210.00"), rs.getBigDecimal(2)); + assertEquals(new BigInteger("-12345678909876543210"), rs.getObject(2, BigInteger.class)); + rs.next(); + assertEquals(new BigDecimal("12345678901234567890.12"), rs.getBigDecimal(2)); + assertEquals(new BigInteger("12345678901234567890"), rs.getObject(2, BigInteger.class)); + stat.execute("DROP TABLE TEST"); + } + + private void testDoubleFloat() throws SQLException { + trace("Test DOUBLE - FLOAT"); + ResultSet rs; + Object o; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, D DOUBLE, R REAL)"); + stat.execute("INSERT INTO 
TEST VALUES(1, -1, -1)"); + stat.execute("INSERT INTO TEST VALUES(2,.0, .0)"); + stat.execute("INSERT INTO TEST VALUES(3, 1., 1.)"); + stat.execute("INSERT INTO TEST VALUES(4, 12345678.89, 12345678.89)"); + stat.execute("INSERT INTO TEST VALUES(6, 99999999.99, 99999999.99)"); + stat.execute("INSERT INTO TEST VALUES(7, -99999999.99, -99999999.99)"); + stat.execute("INSERT INTO TEST VALUES(8, NULL, NULL)"); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 3, new String[] { "ID", "D", "R" }, + new int[] { Types.INTEGER, Types.DOUBLE, Types.REAL }, + new int[] { 10, 17, 7 }, new int[] { 0, 0, 0 }); + BigDecimal bd; + rs.next(); + assertTrue(rs.getInt(1) == 1); + assertTrue(!rs.wasNull()); + assertTrue(rs.getInt(2) == -1); + assertTrue(rs.getInt(3) == -1); + assertTrue(!rs.wasNull()); + bd = rs.getBigDecimal(2); + assertTrue(bd.compareTo(new BigDecimal("-1.00")) == 0); + assertTrue(!rs.wasNull()); + o = rs.getObject(2); + trace(o.getClass().getName()); + assertTrue(o instanceof Double); + assertTrue(((Double) o).compareTo(-1d) == 0); + o = rs.getObject(2, Double.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Double); + assertTrue(((Double) o).compareTo(-1d) == 0); + o = rs.getObject(3); + trace(o.getClass().getName()); + assertTrue(o instanceof Float); + assertTrue(((Float) o).compareTo(-1f) == 0); + o = rs.getObject(3, Float.class); + trace(o.getClass().getName()); + assertTrue(o instanceof Float); + assertTrue(((Float) o).compareTo(-1f) == 0); + rs.next(); + assertTrue(rs.getInt(1) == 2); + assertTrue(!rs.wasNull()); + assertTrue(rs.getInt(2) == 0); + assertTrue(!rs.wasNull()); + assertTrue(rs.getInt(3) == 0); + assertTrue(!rs.wasNull()); + bd = rs.getBigDecimal(2); + assertTrue(bd.compareTo(new BigDecimal("0.00")) == 0); + assertTrue(!rs.wasNull()); + bd = rs.getBigDecimal(3); + assertTrue(bd.compareTo(new BigDecimal("0.00")) == 0); + assertTrue(!rs.wasNull()); + rs.next(); + assertEquals(1.0, rs.getDouble(2)); 
+ assertEquals(1.0f, rs.getFloat(3)); + rs.next(); + assertEquals(12345678.89, rs.getDouble(2)); + assertEquals(12345678.89f, rs.getFloat(3)); + rs.next(); + assertEquals(99999999.99, rs.getDouble(2)); + assertEquals(99999999.99f, rs.getFloat(3)); + rs.next(); + assertEquals(-99999999.99, rs.getDouble(2)); + assertEquals(-99999999.99f, rs.getFloat(3)); + rs.next(); + checkColumnBigDecimal(rs, 2, 0, null); + checkColumnBigDecimal(rs, 3, 0, null); + assertTrue(!rs.next()); + stat.execute("DROP TABLE TEST"); + } + + private void testDatetime() throws SQLException { + trace("Test DATETIME"); + ResultSet rs; + Object o; + + rs = stat.executeQuery("call date '99999-12-23'"); + rs.next(); + assertEquals("99999-12-23", rs.getString(1)); + rs = stat.executeQuery("call timestamp '99999-12-23 01:02:03.000'"); + rs.next(); + assertEquals("99999-12-23 01:02:03", rs.getString(1)); + rs = stat.executeQuery("call date '-99999-12-23'"); + rs.next(); + assertEquals("-99999-12-23", rs.getString(1)); + rs = stat.executeQuery("call timestamp '-99999-12-23 01:02:03.000'"); + rs.next(); + assertEquals("-99999-12-23 01:02:03", rs.getString(1)); + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE DATETIME)"); + stat.execute("INSERT INTO TEST VALUES(1,DATE '2011-11-11')"); + stat.execute("INSERT INTO TEST VALUES(2,TIMESTAMP '2002-02-02 02:02:02')"); + stat.execute("INSERT INTO TEST VALUES(3,TIMESTAMP '1800-1-1 0:0:0')"); + stat.execute("INSERT INTO TEST VALUES(4,TIMESTAMP '9999-12-31 23:59:59')"); + stat.execute("INSERT INTO TEST VALUES(5,NULL)"); + rs = stat.executeQuery("SELECT 0 ID, " + + "TIMESTAMP '9999-12-31 23:59:59' VALUE FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 2, new String[] { "ID", "VALUE" }, + new int[] { Types.INTEGER, Types.TIMESTAMP }, + new int[] { 10, 29 }, new int[] { 0, 9 }); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 2, new String[] { "ID", "VALUE" }, + new int[] { Types.INTEGER, Types.TIMESTAMP }, + new 
int[] { 10, 26 }, new int[] { 0, 6 }); + rs.next(); + java.sql.Date date; + java.sql.Time time; + java.sql.Timestamp ts; + date = rs.getDate(2); + assertTrue(!rs.wasNull()); + time = rs.getTime(2); + assertTrue(!rs.wasNull()); + ts = rs.getTimestamp(2); + assertTrue(!rs.wasNull()); + trace("Date: " + date.toString() + " Time:" + time.toString() + + " Timestamp:" + ts.toString()); + trace("Date ms: " + date.getTime() + " Time ms:" + time.getTime() + + " Timestamp ms:" + ts.getTime()); + trace("1970 ms: " + java.sql.Timestamp.valueOf( + "1970-01-01 00:00:00.0").getTime()); + assertEquals(java.sql.Timestamp.valueOf( + "2011-11-11 00:00:00.0").getTime(), date.getTime()); + assertEquals(java.sql.Timestamp.valueOf( + "1970-01-01 00:00:00.0").getTime(), time.getTime()); + assertEquals(java.sql.Timestamp.valueOf( + "2011-11-11 00:00:00.0").getTime(), ts.getTime()); + assertTrue(date.equals( + java.sql.Date.valueOf("2011-11-11"))); + assertTrue(time.equals( + java.sql.Time.valueOf("00:00:00"))); + assertTrue(ts.equals( + java.sql.Timestamp.valueOf("2011-11-11 00:00:00.0"))); + assertFalse(rs.wasNull()); + o = rs.getObject(2); + trace(o.getClass().getName()); + assertTrue(o instanceof java.sql.Timestamp); + assertTrue(((java.sql.Timestamp) o).equals( + java.sql.Timestamp.valueOf("2011-11-11 00:00:00.0"))); + assertFalse(rs.wasNull()); + o = rs.getObject(2, java.sql.Timestamp.class); + trace(o.getClass().getName()); + assertTrue(o instanceof java.sql.Timestamp); + assertTrue(((java.sql.Timestamp) o).equals( + java.sql.Timestamp.valueOf("2011-11-11 00:00:00.0"))); + assertFalse(rs.wasNull()); + o = rs.getObject(2, java.util.Date.class); + assertTrue(o.getClass() == java.util.Date.class); + assertEquals(((java.util.Date) o).getTime(), + java.sql.Timestamp.valueOf("2011-11-11 00:00:00.0").getTime()); + o = rs.getObject(2, Calendar.class); + assertTrue(o instanceof Calendar); + assertEquals(((Calendar) o).getTimeInMillis(), + java.sql.Timestamp.valueOf("2011-11-11 
00:00:00.0").getTime()); + rs.next(); + + date = rs.getDate("VALUE"); + assertTrue(!rs.wasNull()); + time = rs.getTime("VALUE"); + assertTrue(!rs.wasNull()); + ts = rs.getTimestamp("VALUE"); + assertTrue(!rs.wasNull()); + trace("Date: " + date.toString() + + " Time:" + time.toString() + " Timestamp:" + ts.toString()); + assertEquals("2002-02-02", date.toString()); + assertEquals("02:02:02", time.toString()); + assertEquals("2002-02-02 02:02:02.0", ts.toString()); + rs.next(); + + assertEquals("1800-01-01", rs.getDate("value").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("1800-01-01", rs.getObject("value", + LocalDateTimeUtils.LOCAL_DATE).toString()); + } + assertEquals("00:00:00", rs.getTime("value").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("00:00", rs.getObject("value", + LocalDateTimeUtils.LOCAL_TIME).toString()); + } + assertEquals("1800-01-01 00:00:00.0", rs.getTimestamp("value").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("1800-01-01T00:00", rs.getObject("value", + LocalDateTimeUtils.LOCAL_DATE_TIME).toString()); + } + rs.next(); + + assertEquals("9999-12-31", rs.getDate("Value").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("9999-12-31", rs.getObject("Value", + LocalDateTimeUtils.LOCAL_DATE).toString()); + } + assertEquals("23:59:59", rs.getTime("Value").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("23:59:59", rs.getObject("Value", + LocalDateTimeUtils.LOCAL_TIME).toString()); + } + assertEquals("9999-12-31 23:59:59.0", rs.getTimestamp("Value").toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("9999-12-31T23:59:59", rs.getObject("Value", + LocalDateTimeUtils.LOCAL_DATE_TIME).toString()); + } + rs.next(); + + assertTrue(rs.getDate("Value") == null && rs.wasNull()); + assertTrue(rs.getTime("vALUe") == null && rs.wasNull()); + assertTrue(rs.getTimestamp(2) 
== null && rs.wasNull()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertTrue(rs.getObject(2, + LocalDateTimeUtils.LOCAL_DATE_TIME) == null && rs.wasNull()); + } + assertTrue(!rs.next()); + + rs = stat.executeQuery("SELECT DATE '2001-02-03' D, " + + "TIME '14:15:16', " + + "TIMESTAMP '2007-08-09 10:11:12.141516171' TS FROM TEST"); + rs.next(); + + date = (Date) rs.getObject(1); + time = (Time) rs.getObject(2); + ts = (Timestamp) rs.getObject(3); + assertEquals("2001-02-03", date.toString()); + assertEquals("14:15:16", time.toString()); + assertEquals("2007-08-09 10:11:12.141516171", ts.toString()); + date = rs.getObject(1, Date.class); + time = rs.getObject(2, Time.class); + ts = rs.getObject(3, Timestamp.class); + assertEquals("2001-02-03", date.toString()); + assertEquals("14:15:16", time.toString()); + assertEquals("2007-08-09 10:11:12.141516171", ts.toString()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2001-02-03", rs.getObject(1, + LocalDateTimeUtils.LOCAL_DATE).toString()); + } + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("14:15:16", rs.getObject(2, + LocalDateTimeUtils.LOCAL_TIME).toString()); + } + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2007-08-09T10:11:12.141516171", + rs.getObject(3, LocalDateTimeUtils.LOCAL_DATE_TIME) + .toString()); + } + + stat.execute("DROP TABLE TEST"); + } + + private void testDatetimeWithCalendar() throws SQLException { + trace("Test DATETIME with Calendar"); + ResultSet rs; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "D DATE, T TIME, TS TIMESTAMP(9))"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?, ?, ?)"); + Calendar regular = DateTimeUtils.createGregorianCalendar(); + Calendar other = null; + // search a locale that has a _different_ raw offset + long testTime = java.sql.Date.valueOf("2001-02-03").getTime(); + for (String s : TimeZone.getAvailableIDs()) { + TimeZone zone = 
TimeZone.getTimeZone(s); + long rawOffsetDiff = regular.getTimeZone().getRawOffset() - + zone.getRawOffset(); + // must not be the same timezone (not 0 h and not 24 h difference + // as for Pacific/Auckland and Etc/GMT+12) + if (rawOffsetDiff != 0 && rawOffsetDiff != 1000 * 60 * 60 * 24) { + if (regular.getTimeZone().getOffset(testTime) != + zone.getOffset(testTime)) { + other = DateTimeUtils.createGregorianCalendar(zone); + break; + } + } + } + + trace("regular offset = " + regular.getTimeZone().getRawOffset() + + " other = " + other.getTimeZone().getRawOffset()); + + prep.setInt(1, 0); + prep.setDate(2, null, regular); + prep.setTime(3, null, regular); + prep.setTimestamp(4, null, regular); + prep.execute(); + + prep.setInt(1, 1); + prep.setDate(2, null, other); + prep.setTime(3, null, other); + prep.setTimestamp(4, null, other); + prep.execute(); + + prep.setInt(1, 2); + prep.setDate(2, java.sql.Date.valueOf("2001-02-03"), regular); + prep.setTime(3, java.sql.Time.valueOf("04:05:06"), regular); + prep.setTimestamp(4, + java.sql.Timestamp.valueOf("2007-08-09 10:11:12.131415"), regular); + prep.execute(); + + prep.setInt(1, 3); + prep.setDate(2, java.sql.Date.valueOf("2101-02-03"), other); + prep.setTime(3, java.sql.Time.valueOf("14:05:06"), other); + prep.setTimestamp(4, + java.sql.Timestamp.valueOf("2107-08-09 10:11:12.131415"), other); + prep.execute(); + + prep.setInt(1, 4); + prep.setDate(2, java.sql.Date.valueOf("2101-02-03")); + prep.setTime(3, java.sql.Time.valueOf("14:05:06")); + prep.setTimestamp(4, + java.sql.Timestamp.valueOf("2107-08-09 10:11:12.131415")); + prep.execute(); + + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 4, + new String[] { "ID", "D", "T", "TS" }, + new int[] { Types.INTEGER, Types.DATE, + Types.TIME, Types.TIMESTAMP }, + new int[] { 10, 10, 8, 29 }, new int[] { 0, 0, 0, 9 }); + + rs.next(); + assertEquals(0, rs.getInt(1)); + assertTrue(rs.getDate(2, regular) == null && rs.wasNull()); + 
assertTrue(rs.getTime(3, regular) == null && rs.wasNull()); + assertTrue(rs.getTimestamp(3, regular) == null && rs.wasNull()); + + rs.next(); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.getDate(2, other) == null && rs.wasNull()); + assertTrue(rs.getTime(3, other) == null && rs.wasNull()); + assertTrue(rs.getTimestamp(3, other) == null && rs.wasNull()); + + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("2001-02-03", rs.getDate(2, regular).toString()); + assertEquals("04:05:06", rs.getTime(3, regular).toString()); + assertFalse(rs.getTime(3, other).toString().equals("04:05:06")); + assertEquals("2007-08-09 10:11:12.131415", + rs.getTimestamp(4, regular).toString()); + assertFalse(rs.getTimestamp(4, other).toString(). + equals("2007-08-09 10:11:12.131415")); + + rs.next(); + assertEquals(3, rs.getInt("ID")); + assertFalse(rs.getTimestamp("TS", regular).toString(). + equals("2107-08-09 10:11:12.131415")); + assertEquals("2107-08-09 10:11:12.131415", + rs.getTimestamp("TS", other).toString()); + assertFalse(rs.getTime("T", regular).toString().equals("14:05:06")); + assertEquals("14:05:06", + rs.getTime("T", other).toString()); + // checkFalse(rs.getDate(2, regular).toString(), "2101-02-03"); + // check(rs.getDate("D", other).toString(), "2101-02-03"); + + rs.next(); + assertEquals(4, rs.getInt("ID")); + assertEquals("2107-08-09 10:11:12.131415", + rs.getTimestamp("TS").toString()); + assertEquals("14:05:06", rs.getTime("T").toString()); + assertEquals("2101-02-03", rs.getDate("D").toString()); + + assertFalse(rs.next()); + stat.execute("DROP TABLE TEST"); + } + + private void testBlob() throws SQLException { + trace("Test BLOB"); + ResultSet rs; + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE BLOB)"); + stat.execute("INSERT INTO TEST VALUES(1,X'01010101')"); + stat.execute("INSERT INTO TEST VALUES(2,X'02020202')"); + stat.execute("INSERT INTO TEST VALUES(3,X'00')"); + stat.execute("INSERT INTO TEST VALUES(4,X'ffffff')"); + 
stat.execute("INSERT INTO TEST VALUES(5,X'0bcec1')"); + stat.execute("INSERT INTO TEST VALUES(6,X'03030303')"); + stat.execute("INSERT INTO TEST VALUES(7,NULL)"); + byte[] random = new byte[0x10000]; + MathUtils.randomBytes(random); + stat.execute("INSERT INTO TEST VALUES(8, X'" + StringUtils.convertBytesToHex(random) + "')"); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 2, new String[] { "ID", "VALUE" }, + new int[] { Types.INTEGER, Types.BLOB }, new int[] { + 10, Integer.MAX_VALUE }, new int[] { 0, 0 }); + rs.next(); + + assertEqualsWithNull(new byte[] { (byte) 0x01, (byte) 0x01, + (byte) 0x01, (byte) 0x01 }, + rs.getBytes(2)); + assertTrue(!rs.wasNull()); + assertEqualsWithNull(new byte[] { (byte) 0x01, (byte) 0x01, + (byte) 0x01, (byte) 0x01 }, + rs.getObject(2, byte[].class)); + assertTrue(!rs.wasNull()); + rs.next(); + + assertEqualsWithNull(new byte[] { (byte) 0x02, (byte) 0x02, + (byte) 0x02, (byte) 0x02 }, + rs.getBytes("value")); + assertTrue(!rs.wasNull()); + assertEqualsWithNull(new byte[] { (byte) 0x02, (byte) 0x02, + (byte) 0x02, (byte) 0x02 }, + rs.getObject("value", byte[].class)); + assertTrue(!rs.wasNull()); + rs.next(); + + assertEqualsWithNull(new byte[] { (byte) 0x00 }, + readAllBytes(rs.getBinaryStream(2))); + assertTrue(!rs.wasNull()); + rs.next(); + + assertEqualsWithNull(new byte[] { (byte) 0xff, (byte) 0xff, (byte) 0xff }, + readAllBytes(rs.getBinaryStream("VaLuE"))); + assertTrue(!rs.wasNull()); + rs.next(); + + InputStream in = rs.getBinaryStream("value"); + byte[] b = readAllBytes(in); + assertEqualsWithNull(new byte[] { (byte) 0x0b, (byte) 0xce, (byte) 0xc1 }, b); + Blob blob = rs.getObject("value", Blob.class); + try { + assertTrue(blob != null); + assertEqualsWithNull(new byte[] { (byte) 0x0b, (byte) 0xce, (byte) 0xc1 }, + readAllBytes(blob.getBinaryStream())); + assertEqualsWithNull(new byte[] { (byte) 0xce, + (byte) 0xc1 }, readAllBytes(blob.getBinaryStream(2, 2))); + 
assertTrue(!rs.wasNull()); + } finally { + blob.free(); + } + assertTrue(!rs.wasNull()); + rs.next(); + + blob = rs.getObject("value", Blob.class); + try { + assertTrue(blob != null); + assertEqualsWithNull(new byte[] { (byte) 0x03, (byte) 0x03, + (byte) 0x03, (byte) 0x03 }, readAllBytes(blob.getBinaryStream())); + assertEqualsWithNull(new byte[] { (byte) 0x03, + (byte) 0x03 }, readAllBytes(blob.getBinaryStream(2, 2))); + assertTrue(!rs.wasNull()); + assertThrows(ErrorCode.INVALID_VALUE_2, blob).getBinaryStream(5, 1); + } finally { + blob.free(); + } + rs.next(); + + assertEqualsWithNull(null, readAllBytes(rs.getBinaryStream("VaLuE"))); + assertTrue(rs.wasNull()); + rs.next(); + + blob = rs.getObject("value", Blob.class); + try { + assertTrue(blob != null); + assertEqualsWithNull(random, readAllBytes(blob.getBinaryStream())); + byte[] expected = Arrays.copyOfRange(random, 100, 50102); + byte[] got = readAllBytes(blob.getBinaryStream(101, 50002)); + assertEqualsWithNull(expected, got); + assertTrue(!rs.wasNull()); + assertThrows(ErrorCode.INVALID_VALUE_2, blob).getBinaryStream(0x10001, 1); + assertThrows(ErrorCode.INVALID_VALUE_2, blob).getBinaryStream(0x10002, 0); + } finally { + blob.free(); + } + + assertTrue(!rs.next()); + stat.execute("DROP TABLE TEST"); + } + + private void testClob() throws SQLException { + trace("Test CLOB"); + ResultSet rs; + String string; + stat = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, + ResultSet.CONCUR_UPDATABLE); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE CLOB)"); + stat.execute("INSERT INTO TEST VALUES(1,'Test')"); + stat.execute("INSERT INTO TEST VALUES(2,'Hello')"); + stat.execute("INSERT INTO TEST VALUES(3,'World!')"); + stat.execute("INSERT INTO TEST VALUES(4,'Hallo')"); + stat.execute("INSERT INTO TEST VALUES(5,'Welt!')"); + stat.execute("INSERT INTO TEST VALUES(6,'Test2')"); + stat.execute("INSERT INTO TEST VALUES(7,NULL)"); + stat.execute("INSERT INTO TEST VALUES(8,NULL)"); + rs = 
stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + assertResultSetMeta(rs, 2, new String[] { "ID", "VALUE" }, + new int[] { Types.INTEGER, Types.CLOB }, new int[] { + 10, Integer.MAX_VALUE }, new int[] { 0, 0 }); + rs.next(); + Object obj = rs.getObject(2); + assertTrue(obj instanceof java.sql.Clob); + string = rs.getString(2); + assertTrue(string != null && string.equals("Test")); + assertTrue(!rs.wasNull()); + rs.next(); + InputStreamReader reader = null; + try { + reader = new InputStreamReader(rs.getAsciiStream(2), StandardCharsets.ISO_8859_1); + } catch (Exception e) { + assertTrue(false); + } + string = readString(reader); + assertTrue(!rs.wasNull()); + trace(string); + assertTrue(string != null && string.equals("Hello")); + rs.next(); + try { + reader = new InputStreamReader(rs.getAsciiStream("value"), StandardCharsets.ISO_8859_1); + } catch (Exception e) { + assertTrue(false); + } + string = readString(reader); + assertTrue(!rs.wasNull()); + trace(string); + assertTrue(string != null && string.equals("World!")); + rs.next(); + + string = readString(rs.getCharacterStream(2)); + assertTrue(!rs.wasNull()); + trace(string); + assertTrue(string != null && string.equals("Hallo")); + Clob clob = rs.getClob(2); + try { + assertEquals("all", readString(clob.getCharacterStream(2, 3))); + assertThrows(ErrorCode.INVALID_VALUE_2, clob).getCharacterStream(6, 1); + assertThrows(ErrorCode.INVALID_VALUE_2, clob).getCharacterStream(7, 0); + } finally { + clob.free(); + } + rs.next(); + + string = readString(rs.getCharacterStream("value")); + assertTrue(!rs.wasNull()); + trace(string); + assertTrue(string != null && string.equals("Welt!")); + rs.next(); + + clob = rs.getObject("value", Clob.class); + try { + assertTrue(clob != null); + string = readString(clob.getCharacterStream()); + assertTrue(string != null && string.equals("Test2")); + assertTrue(!rs.wasNull()); + } finally { + clob.free(); + } + rs.next(); + + assertTrue(rs.getCharacterStream(2) == null); + 
assertTrue(rs.wasNull()); + rs.next(); + + assertTrue(rs.getAsciiStream("Value") == null); + assertTrue(rs.wasNull()); + + assertTrue(rs.getStatement() == stat); + assertTrue(rs.getWarnings() == null); + rs.clearWarnings(); + assertTrue(rs.getWarnings() == null); + assertEquals(ResultSet.FETCH_FORWARD, rs.getFetchDirection()); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs.next(); + stat.execute("DROP TABLE TEST"); + } + + private void testArray() throws SQLException { + trace("Test ARRAY"); + ResultSet rs; + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, VALUE ARRAY)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?)"); + prep.setInt(1, 1); + prep.setObject(2, new Object[] { 1, 2 }); + prep.execute(); + prep.setInt(1, 2); + prep.setObject(2, new Object[] { 11, 12 }); + prep.execute(); + prep.close(); + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(1, rs.getInt(1)); + Object[] list = (Object[]) rs.getObject(2); + assertEquals(1, ((Integer) list[0]).intValue()); + assertEquals(2, ((Integer) list[1]).intValue()); + + Array array = rs.getArray(2); + Object[] list2 = (Object[]) array.getArray(); + assertEquals(1, ((Integer) list2[0]).intValue()); + assertEquals(2, ((Integer) list2[1]).intValue()); + list2 = (Object[]) array.getArray(2, 1); + assertEquals(2, ((Integer) list2[0]).intValue()); + rs.next(); + assertEquals(2, rs.getInt(1)); + list = (Object[]) rs.getObject(2); + assertEquals(11, ((Integer) list[0]).intValue()); + assertEquals(12, ((Integer) list[1]).intValue()); + + array = rs.getArray("VALUE"); + list2 = (Object[]) array.getArray(); + assertEquals(11, ((Integer) list2[0]).intValue()); + assertEquals(12, ((Integer) list2[1]).intValue()); + list2 = (Object[]) array.getArray(2, 1); + assertEquals(12, ((Integer) list2[0]).intValue()); + + list2 = (Object[]) array.getArray(Collections.<String, Class<?>>emptyMap()); + assertEquals(11, ((Integer) list2[0]).intValue()); + + 
assertEquals(Types.NULL, array.getBaseType()); + assertEquals("NULL", array.getBaseTypeName()); + + assertTrue(array.toString().endsWith(": (11, 12)")); + + // free + array.free(); + assertEquals("null", array.toString()); + assertThrows(ErrorCode.OBJECT_CLOSED, array).getBaseType(); + assertThrows(ErrorCode.OBJECT_CLOSED, array).getBaseTypeName(); + assertThrows(ErrorCode.OBJECT_CLOSED, array).getResultSet(); + + assertFalse(rs.next()); + stat.execute("DROP TABLE TEST"); + } + + private byte[] readAllBytes(InputStream in) { + if (in == null) { + return null; + } + ByteArrayOutputStream out = new ByteArrayOutputStream(); + try { + while (true) { + int b = in.read(); + if (b == -1) { + break; + } + out.write(b); + } + return out.toByteArray(); + } catch (IOException e) { + assertTrue(false); + return null; + } + } + + private void assertEqualsWithNull(byte[] expected, byte[] got) { + if (got == null || expected == null) { + assertTrue(got == expected); + } else { + assertEquals(got, expected); + } + } + + private void checkColumnBigDecimal(ResultSet rs, int column, int i, + String bd) throws SQLException { + BigDecimal bd1 = rs.getBigDecimal(column); + int i1 = rs.getInt(column); + if (bd == null) { + trace("should be: null"); + assertTrue(rs.wasNull()); + } else { + trace("BigDecimal i=" + i + " bd=" + bd + " ; i1=" + i1 + " bd1=" + bd1); + assertTrue(!rs.wasNull()); + assertTrue(i1 == i); + assertTrue(bd1.compareTo(new BigDecimal(bd)) == 0); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestStatement.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestStatement.java new file mode 100644 index 0000000000000..63cae2cd67b5c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestStatement.java @@ -0,0 +1,487 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Savepoint; +import java.sql.Statement; +import java.util.Arrays; +import java.util.HashMap; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.jdbc.JdbcPreparedStatementBackwardsCompat; +import org.h2.jdbc.JdbcStatement; +import org.h2.jdbc.JdbcStatementBackwardsCompat; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests for the Statement implementation. + */ +public class TestStatement extends TestBase { + + private Connection conn; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("statement"); + conn = getConnection("statement"); + testUnwrap(); + testUnsupportedOperations(); + testTraceError(); + testSavepoint(); + testConnectionRollback(); + testStatement(); + testPreparedStatement(); + testIdentityMerge(); + testIdentity(); + conn.close(); + deleteDb("statement"); + } + + private void testUnwrap() throws SQLException { + Statement stat = conn.createStatement(); + assertTrue(stat.isWrapperFor(Object.class)); + assertTrue(stat.isWrapperFor(Statement.class)); + assertTrue(stat.isWrapperFor(stat.getClass())); + assertFalse(stat.isWrapperFor(Integer.class)); + assertTrue(stat == stat.unwrap(Object.class)); + assertTrue(stat == stat.unwrap(Statement.class)); + assertTrue(stat == stat.unwrap(stat.getClass())); + assertThrows(ErrorCode.INVALID_VALUE_2, stat). + unwrap(Integer.class); + } + + private void testUnsupportedOperations() throws Exception { + conn.setTypeMap(null); + HashMap<String, Class<?>> map = new HashMap<>(); + conn.setTypeMap(map); + map.put("x", Object.class); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, conn). 
+ setTypeMap(map); + } + + private void testTraceError() throws Exception { + if (config.memory || config.networked || config.traceLevelFile != 0) { + return; + } + Statement stat = conn.createStatement(); + String fileName = getBaseDir() + "/statement.trace.db"; + stat.execute("DROP TABLE TEST IF EXISTS"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)"); + stat.execute("INSERT INTO TEST VALUES(1)"); + try { + stat.execute("ERROR"); + } catch (SQLException e) { + // ignore + } + long lengthBefore = FileUtils.size(fileName); + try { + stat.execute("ERROR"); + } catch (SQLException e) { + // ignore + } + long error = FileUtils.size(fileName); + assertSmaller(lengthBefore, error); + lengthBefore = error; + try { + stat.execute("INSERT INTO TEST VALUES(1)"); + } catch (SQLException e) { + // ignore + } + error = FileUtils.size(fileName); + assertEquals(lengthBefore, error); + stat.execute("DROP TABLE TEST IF EXISTS"); + } + + private void testConnectionRollback() throws SQLException { + Statement stat = conn.createStatement(); + conn.setAutoCommit(false); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + conn.rollback(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + assertFalse(rs.next()); + stat.execute("DROP TABLE TEST"); + conn.setAutoCommit(true); + } + + private void testSavepoint() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + conn.setAutoCommit(false); + stat.execute("INSERT INTO TEST VALUES(0, 'Hi')"); + Savepoint savepoint1 = conn.setSavepoint(); + int id1 = savepoint1.getSavepointId(); + assertThrows(ErrorCode.SAVEPOINT_IS_UNNAMED, savepoint1). 
+ getSavepointName(); + stat.execute("DELETE FROM TEST"); + conn.rollback(savepoint1); + stat.execute("UPDATE TEST SET NAME='Hello'"); + Savepoint savepoint2a = conn.setSavepoint(); + Savepoint savepoint2 = conn.setSavepoint(); + conn.releaseSavepoint(savepoint2a); + assertThrows(ErrorCode.SAVEPOINT_IS_INVALID_1, savepoint2a). + getSavepointId(); + int id2 = savepoint2.getSavepointId(); + assertTrue(id1 != id2); + stat.execute("UPDATE TEST SET NAME='Hallo' WHERE NAME='Hello'"); + Savepoint savepointTest = conn.setSavepoint("Joe's"); + assertTrue(savepointTest.toString().endsWith("name=Joe's")); + stat.execute("DELETE FROM TEST"); + assertEquals(savepointTest.getSavepointName(), "Joe's"); + assertThrows(ErrorCode.SAVEPOINT_IS_NAMED, savepointTest). + getSavepointId(); + conn.rollback(savepointTest); + conn.commit(); + ResultSet rs = stat.executeQuery("SELECT NAME FROM TEST"); + rs.next(); + String name = rs.getString(1); + assertEquals(name, "Hallo"); + assertFalse(rs.next()); + assertThrows(ErrorCode.SAVEPOINT_IS_INVALID_1, conn). 
+ rollback(savepoint2); + stat.execute("DROP TABLE TEST"); + conn.setAutoCommit(true); + } + + private void testStatement() throws SQLException { + + Statement stat = conn.createStatement(); + + assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, + conn.getHoldability()); + conn.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT); + assertEquals(ResultSet.CLOSE_CURSORS_AT_COMMIT, + conn.getHoldability()); + + assertFalse(stat.isPoolable()); + stat.setPoolable(true); + assertFalse(stat.isPoolable()); + + // ignored + stat.setCursorName("x"); + // fixed return value + assertEquals(stat.getFetchDirection(), ResultSet.FETCH_FORWARD); + // ignored + stat.setFetchDirection(ResultSet.FETCH_REVERSE); + // ignored + stat.setMaxFieldSize(100); + + assertEquals(SysProperties.SERVER_RESULT_SET_FETCH_SIZE, + stat.getFetchSize()); + stat.setFetchSize(10); + assertEquals(10, stat.getFetchSize()); + stat.setFetchSize(0); + assertEquals(SysProperties.SERVER_RESULT_SET_FETCH_SIZE, + stat.getFetchSize()); + assertEquals(ResultSet.TYPE_FORWARD_ONLY, + stat.getResultSetType()); + Statement stat2 = conn.createStatement( + ResultSet.TYPE_SCROLL_SENSITIVE, + ResultSet.CONCUR_READ_ONLY, + ResultSet.HOLD_CURSORS_OVER_COMMIT); + assertEquals(ResultSet.TYPE_SCROLL_SENSITIVE, + stat2.getResultSetType()); + assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, + stat2.getResultSetHoldability()); + assertEquals(ResultSet.CONCUR_READ_ONLY, + stat2.getResultSetConcurrency()); + assertEquals(0, stat.getMaxFieldSize()); + assertTrue(!((JdbcStatement) stat2).isClosed()); + stat2.close(); + assertTrue(((JdbcStatement) stat2).isClosed()); + + + ResultSet rs; + int count; + long largeCount; + boolean result; + + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("SELECT * FROM TEST"); + stat.execute("DROP TABLE TEST"); + + conn.getTypeMap(); + + // this method should not throw an exception - if not supported, the + // call is ignored + + assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, + 
stat.getResultSetHoldability()); + assertEquals(ResultSet.CONCUR_READ_ONLY, + stat.getResultSetConcurrency()); + + stat.cancel(); + stat.setQueryTimeout(10); + assertTrue(stat.getQueryTimeout() == 10); + stat.setQueryTimeout(0); + assertTrue(stat.getQueryTimeout() == 0); + assertThrows(ErrorCode.INVALID_VALUE_2, stat).setQueryTimeout(-1); + assertTrue(stat.getQueryTimeout() == 0); + trace("executeUpdate"); + count = stat.executeUpdate( + "CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE VARCHAR(255))"); + assertEquals(0, count); + count = stat.executeUpdate( + "INSERT INTO TEST VALUES(1,'Hello')"); + assertEquals(1, count); + count = stat.executeUpdate( + "INSERT INTO TEST(VALUE,ID) VALUES('JDBC',2)"); + assertEquals(1, count); + count = stat.executeUpdate( + "UPDATE TEST SET VALUE='LDBC' WHERE ID=2 OR ID=1"); + assertEquals(2, count); + count = stat.executeUpdate( + "UPDATE TEST SET VALUE='\\LDBC\\' WHERE VALUE LIKE 'LDBC' "); + assertEquals(2, count); + count = stat.executeUpdate( + "UPDATE TEST SET VALUE='LDBC' WHERE VALUE LIKE '\\\\LDBC\\\\'"); + trace("count:" + count); + assertEquals(2, count); + count = stat.executeUpdate("DELETE FROM TEST WHERE ID=-1"); + assertEquals(0, count); + count = stat.executeUpdate("DELETE FROM TEST WHERE ID=2"); + assertEquals(1, count); + JdbcStatementBackwardsCompat statBC = (JdbcStatementBackwardsCompat) stat; + largeCount = statBC.executeLargeUpdate("DELETE FROM TEST WHERE ID=-1"); + assertEquals(0, largeCount); + assertEquals(0, statBC.getLargeUpdateCount()); + largeCount = statBC.executeLargeUpdate("INSERT INTO TEST(VALUE,ID) VALUES('JDBC',2)"); + assertEquals(1, largeCount); + assertEquals(1, statBC.getLargeUpdateCount()); + largeCount = statBC.executeLargeUpdate("DELETE FROM TEST WHERE ID=2"); + assertEquals(1, largeCount); + assertEquals(1, statBC.getLargeUpdateCount()); + + assertThrows(ErrorCode.METHOD_NOT_ALLOWED_FOR_QUERY, stat). 
+ executeUpdate("SELECT * FROM TEST"); + + count = stat.executeUpdate("DROP TABLE TEST"); + assertTrue(count == 0); + + trace("execute"); + result = stat.execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE VARCHAR(255))"); + assertTrue(!result); + result = stat.execute("INSERT INTO TEST VALUES(1,'Hello')"); + assertTrue(!result); + result = stat.execute("INSERT INTO TEST(VALUE,ID) VALUES('JDBC',2)"); + assertTrue(!result); + result = stat.execute("UPDATE TEST SET VALUE='LDBC' WHERE ID=2"); + assertTrue(!result); + result = stat.execute("DELETE FROM TEST WHERE ID=3"); + assertTrue(!result); + result = stat.execute("SELECT * FROM TEST"); + assertTrue(result); + result = stat.execute("DROP TABLE TEST"); + assertTrue(!result); + + assertThrows(ErrorCode.METHOD_ONLY_ALLOWED_FOR_QUERY, stat). + executeQuery("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE VARCHAR(255))"); + + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY,VALUE VARCHAR(255))"); + + assertThrows(ErrorCode.METHOD_ONLY_ALLOWED_FOR_QUERY, stat). + executeQuery("INSERT INTO TEST VALUES(1,'Hello')"); + + assertThrows(ErrorCode.METHOD_ONLY_ALLOWED_FOR_QUERY, stat). + executeQuery("UPDATE TEST SET VALUE='LDBC' WHERE ID=2"); + + assertThrows(ErrorCode.METHOD_ONLY_ALLOWED_FOR_QUERY, stat). + executeQuery("DELETE FROM TEST WHERE ID=3"); + + stat.executeQuery("SELECT * FROM TEST"); + + assertThrows(ErrorCode.METHOD_ONLY_ALLOWED_FOR_QUERY, stat). 
+ executeQuery("DROP TABLE TEST"); + + // getMoreResults + rs = stat.executeQuery("SELECT * FROM TEST"); + assertFalse(stat.getMoreResults()); + assertThrows(ErrorCode.OBJECT_CLOSED, rs).next(); + assertTrue(stat.getUpdateCount() == -1); + count = stat.executeUpdate("DELETE FROM TEST"); + assertFalse(stat.getMoreResults()); + assertTrue(stat.getUpdateCount() == -1); + + stat.execute("DROP TABLE TEST"); + stat.executeUpdate("DROP TABLE IF EXISTS TEST"); + + assertTrue(stat.getWarnings() == null); + stat.clearWarnings(); + assertTrue(stat.getWarnings() == null); + assertTrue(conn == stat.getConnection()); + + assertEquals("SOME_ID", statBC.enquoteIdentifier("SOME_ID", false)); + assertEquals("\"SOME ID\"", statBC.enquoteIdentifier("SOME ID", false)); + assertEquals("\"SOME_ID\"", statBC.enquoteIdentifier("SOME_ID", true)); + assertEquals("\"FROM\"", statBC.enquoteIdentifier("FROM", false)); + assertEquals("\"Test\"", statBC.enquoteIdentifier("Test", false)); + assertEquals("\"TODAY\"", statBC.enquoteIdentifier("TODAY", false)); + + assertTrue(statBC.isSimpleIdentifier("SOME_ID")); + assertFalse(statBC.isSimpleIdentifier("SOME ID")); + assertFalse(statBC.isSimpleIdentifier("FROM")); + assertFalse(statBC.isSimpleIdentifier("Test")); + assertFalse(statBC.isSimpleIdentifier("TODAY")); + + stat.close(); + } + + private void testIdentityMerge() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("drop table if exists test1"); + stat.execute("create table test1(id identity, x int)"); + stat.execute("drop table if exists test2"); + stat.execute("create table test2(id identity, x int)"); + stat.execute("merge into test1(x) key(x) values(5)", + Statement.RETURN_GENERATED_KEYS); + ResultSet keys; + keys = stat.getGeneratedKeys(); + keys.next(); + assertEquals(1, keys.getInt(1)); + stat.execute("insert into test2(x) values(10), (11), (12)"); + stat.execute("merge into test1(x) key(x) values(5)", + Statement.RETURN_GENERATED_KEYS); + keys = 
stat.getGeneratedKeys(); + assertFalse(keys.next()); + stat.execute("merge into test1(x) key(x) values(6)", + Statement.RETURN_GENERATED_KEYS); + keys = stat.getGeneratedKeys(); + keys.next(); + assertEquals(2, keys.getInt(1)); + stat.execute("drop table test1, test2"); + } + + private void testIdentity() throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("CREATE SEQUENCE SEQ"); + stat.execute("CREATE TABLE TEST(ID INT)"); + stat.execute("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", + Statement.RETURN_GENERATED_KEYS); + ResultSet rs = stat.getGeneratedKeys(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + stat.execute("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", + Statement.RETURN_GENERATED_KEYS); + rs = stat.getGeneratedKeys(); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertFalse(rs.next()); + stat.execute("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", + new int[] { 1 }); + rs = stat.getGeneratedKeys(); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertFalse(rs.next()); + stat.execute("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", + new String[] { "ID" }); + rs = stat.getGeneratedKeys(); + rs.next(); + assertEquals(4, rs.getInt(1)); + assertFalse(rs.next()); + stat.executeUpdate("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", + Statement.RETURN_GENERATED_KEYS); + rs = stat.getGeneratedKeys(); + rs.next(); + assertEquals(5, rs.getInt(1)); + assertFalse(rs.next()); + stat.executeUpdate("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", + new int[] { 1 }); + rs = stat.getGeneratedKeys(); + rs.next(); + assertEquals(6, rs.getInt(1)); + assertFalse(rs.next()); + stat.executeUpdate("INSERT INTO TEST VALUES(NEXT VALUE FOR SEQ)", + new String[] { "ID" }); + rs = stat.getGeneratedKeys(); + rs.next(); + assertEquals(7, rs.getInt(1)); + assertFalse(rs.next()); + + stat.execute("CREATE TABLE TEST2(ID identity primary key)"); + stat.execute("INSERT INTO TEST2 VALUES()"); + stat.execute("SET @X = IDENTITY()"); + 
rs = stat.executeQuery("SELECT @X"); + rs.next(); + assertEquals(1, rs.getInt(1)); + + stat.execute("DROP TABLE TEST"); + stat.execute("DROP TABLE TEST2"); + } + + private void testPreparedStatement() throws SQLException{ + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar(255))"); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("insert into test values(2, 'World')"); + PreparedStatement ps = conn.prepareStatement( + "select name from test where id in (select id from test where name REGEXP ?)"); + ps.setString(1, "Hello"); + ResultSet rs = ps.executeQuery(); + assertTrue(rs.next()); + assertEquals("Hello", rs.getString("name")); + assertFalse(rs.next()); + ps.setString(1, "World"); + rs = ps.executeQuery(); + assertTrue(rs.next()); + assertEquals("World", rs.getString("name")); + assertFalse(rs.next()); + //Changes the table structure + stat.execute("create index t_id on test(name)"); + //Test the prepared statement again to check if the internal cache attributes were reset + ps.setString(1, "Hello"); + rs = ps.executeQuery(); + assertTrue(rs.next()); + assertEquals("Hello", rs.getString("name")); + assertFalse(rs.next()); + ps.setString(1, "World"); + rs = ps.executeQuery(); + assertTrue(rs.next()); + assertEquals("World", rs.getString("name")); + assertFalse(rs.next()); + ps = conn.prepareStatement("insert into test values(?, ?)"); + ps.setInt(1, 3); + ps.setString(2, "v3"); + ps.addBatch(); + ps.setInt(1, 4); + ps.setString(2, "v4"); + ps.addBatch(); + assertTrue(Arrays.equals(new int[] {1, 1}, ps.executeBatch())); + ps.setInt(1, 5); + ps.setString(2, "v5"); + ps.addBatch(); + ps.setInt(1, 6); + ps.setString(2, "v6"); + ps.addBatch(); + assertTrue(Arrays.equals(new long[] {1, 1}, ((JdbcStatementBackwardsCompat) ps).executeLargeBatch())); + ps.setInt(1, 7); + ps.setString(2, "v7"); + assertEquals(1, ps.executeUpdate()); + assertEquals(1, ps.getUpdateCount()); + ps.setInt(1, 8); + 
ps.setString(2, "v8"); + assertEquals(1, ((JdbcPreparedStatementBackwardsCompat) ps).executeLargeUpdate()); + assertEquals(1, ((JdbcStatementBackwardsCompat) ps).getLargeUpdateCount()); + stat.execute("drop table test"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestTransactionIsolation.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestTransactionIsolation.java new file mode 100644 index 0000000000000..c69d268938b6d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestTransactionIsolation.java @@ -0,0 +1,104 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.SQLException; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Transaction isolation level tests. + */ +public class TestTransactionIsolation extends TestBase { + + private Connection conn1, conn2; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + if (config.mvcc || config.mvStore) { + // no tests yet + } else { + testTableLevelLocking(); + } + } + + private void testTableLevelLocking() throws SQLException { + deleteDb("transactionIsolation"); + conn1 = getConnection("transactionIsolation"); + assertEquals(Connection.TRANSACTION_READ_COMMITTED, + conn1.getTransactionIsolation()); + conn1.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); + assertEquals(Connection.TRANSACTION_SERIALIZABLE, + conn1.getTransactionIsolation()); + conn1.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED); + assertEquals(Connection.TRANSACTION_READ_UNCOMMITTED, + conn1.getTransactionIsolation()); + assertSingleValue(conn1.createStatement(), "CALL LOCK_MODE()", 0); + conn1.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); + assertSingleValue(conn1.createStatement(), "CALL LOCK_MODE()", 3); + assertEquals(Connection.TRANSACTION_READ_COMMITTED, + conn1.getTransactionIsolation()); + conn1.createStatement().execute("SET LOCK_MODE 1"); + assertEquals(Connection.TRANSACTION_SERIALIZABLE, + conn1.getTransactionIsolation()); + conn1.createStatement().execute("CREATE TABLE TEST(ID INT)"); + conn1.createStatement().execute("INSERT INTO TEST VALUES(1)"); + conn1.setAutoCommit(false); + + conn2 = getConnection("transactionIsolation"); + conn2.setAutoCommit(false); + + conn1.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); + + // serializable: just reading + assertSingleValue(conn1.createStatement(), "SELECT * FROM TEST", 1); + assertSingleValue(conn2.createStatement(), "SELECT * FROM TEST", 1); + conn1.commit(); + conn2.commit(); + + // serializable: write lock + conn1.createStatement().executeUpdate("UPDATE TEST SET ID=2"); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, conn2.createStatement()). 
+ executeQuery("SELECT * FROM TEST"); + conn1.commit(); + conn2.commit(); + + conn1.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); + + // read-committed: #1 read, #2 update, #1 read again + assertSingleValue(conn1.createStatement(), "SELECT * FROM TEST", 2); + conn2.createStatement().executeUpdate("UPDATE TEST SET ID=3"); + conn2.commit(); + assertSingleValue(conn1.createStatement(), "SELECT * FROM TEST", 3); + conn1.commit(); + + // read-committed: #1 read, #2 read, #2 update, #1 delete + assertSingleValue(conn1.createStatement(), "SELECT * FROM TEST", 3); + assertSingleValue(conn2.createStatement(), "SELECT * FROM TEST", 3); + conn2.createStatement().executeUpdate("UPDATE TEST SET ID=4"); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, conn1.createStatement()). + executeUpdate("DELETE FROM TEST"); + conn2.commit(); + conn1.commit(); + assertSingleValue(conn1.createStatement(), "SELECT * FROM TEST", 4); + assertSingleValue(conn2.createStatement(), "SELECT * FROM TEST", 4); + + conn1.close(); + conn2.close(); + deleteDb("transactionIsolation"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestUpdatableResultSet.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestUpdatableResultSet.java new file mode 100644 index 0000000000000..27dd25668e848 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestUpdatableResultSet.java @@ -0,0 +1,682 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.io.ByteArrayInputStream; +import java.io.OutputStream; +import java.io.StringReader; +import java.math.BigDecimal; +import java.sql.Blob; +import java.sql.Connection; +import java.sql.Date; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Updatable result set tests. + */ +public class TestUpdatableResultSet extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testDetectUpdatable(); + testUpdateLob(); + testScroll(); + testUpdateDeleteInsert(); + testUpdateDataType(); + testUpdateResetRead(); + deleteDb("updatableResultSet"); + } + + private void testDetectUpdatable() throws SQLException { + deleteDb("updatableResultSet"); + Connection conn = getConnection("updatableResultSet"); + Statement stat; + ResultSet rs; + stat = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, + ResultSet.CONCUR_UPDATABLE); + + stat.execute("create table test(id int primary key, name varchar)"); + rs = stat.executeQuery("select * from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + stat.execute("drop table test"); + + stat.execute("create table test(a int, b int, " + + "name varchar, primary key(a, b))"); + rs = stat.executeQuery("select * from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select a, name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, 
rs.getConcurrency()); + rs = stat.executeQuery("select b, name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + rs = stat.executeQuery("select b, name, a from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + stat.execute("drop table test"); + + stat.execute("create table test(a int, b int, name varchar)"); + stat.execute("create unique index on test(b, a)"); + rs = stat.executeQuery("select * from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select a, name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + rs = stat.executeQuery("select b, name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + rs = stat.executeQuery("select b, name, a from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + stat.execute("drop table test"); + + stat.execute("create table test(a int, b int, c int unique, " + + "name varchar, primary key(a, b))"); + rs = stat.executeQuery("select * from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select a, name, c from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select b, a, name, c from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + stat.execute("drop table test"); + + stat.execute("create table test(id int primary key, " + + "a int, b int, i int, j int, k int, name varchar)"); + stat.execute("create unique index on test(b, a)"); + stat.execute("create unique index on test(i, j)"); + stat.execute("create unique index on test(a, j)"); + rs = stat.executeQuery("select * from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select a, name, b from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select a, name, b from test"); + 
assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + rs = stat.executeQuery("select i, b, k, name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + rs = stat.executeQuery("select a, i, name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + rs = stat.executeQuery("select b, i, k, name from test"); + assertEquals(ResultSet.CONCUR_READ_ONLY, rs.getConcurrency()); + rs = stat.executeQuery("select a, k, j, name from test"); + assertEquals(ResultSet.CONCUR_UPDATABLE, rs.getConcurrency()); + stat.execute("drop table test"); + + conn.close(); + } + + private void testUpdateLob() throws SQLException { + deleteDb("updatableResultSet"); + Connection conn = getConnection("updatableResultSet"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE object_index " + + "(id integer primary key, object other, number integer)"); + + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO object_index (id,object) VALUES (1,?)"); + prep.setObject(1, "hello", Types.JAVA_OBJECT); + prep.execute(); + + ResultSet rs = stat.executeQuery( + "SELECT object,id,number FROM object_index WHERE id =1"); + rs.next(); + assertEquals("hello", rs.getObject(1).toString()); + stat = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, + ResultSet.CONCUR_UPDATABLE); + rs = stat.executeQuery("SELECT object,id,number FROM object_index WHERE id =1"); + rs.next(); + assertEquals("hello", rs.getObject(1).toString()); + rs.updateInt(2, 1); + rs.updateRow(); + rs.close(); + stat = conn.createStatement(); + rs = stat.executeQuery("SELECT object,id,number FROM object_index WHERE id =1"); + rs.next(); + assertEquals("hello", rs.getObject(1).toString()); + conn.close(); + } + + private void testUpdateResetRead() throws SQLException { + deleteDb("updatableResultSet"); + Connection conn = getConnection("updatableResultSet"); + Statement stat = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, 
ResultSet.CONCUR_UPDATABLE); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + stat.execute("INSERT INTO TEST VALUES(2, 'World')"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + rs.updateInt(1, 10); + rs.updateRow(); + rs.next(); + + rs.updateString(2, "Welt"); + rs.cancelRowUpdates(); + rs.updateString(2, "Welt"); + + rs.updateRow(); + rs.beforeFirst(); + rs.next(); + assertEquals(10, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("Welt", rs.getString(2)); + + assertFalse(rs.isClosed()); + rs.close(); + assertTrue(rs.isClosed()); + + conn.close(); + } + + private void testScroll() throws SQLException { + deleteDb("updatableResultSet"); + Connection conn = getConnection("updatableResultSet"); + Statement stat = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World'), (3, 'Test')"); + + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + + assertTrue(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + assertEquals(0, rs.getRow()); + + rs.next(); + assertFalse(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + assertEquals(1, rs.getInt(1)); + assertEquals(1, rs.getRow()); + + rs.next(); + assertThrows(ErrorCode.RESULT_SET_READONLY, rs).insertRow(); + assertFalse(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + assertEquals(2, rs.getInt(1)); + assertEquals(2, rs.getRow()); + + rs.next(); + assertFalse(rs.isBeforeFirst()); + assertFalse(rs.isAfterLast()); + assertEquals(3, rs.getInt(1)); + assertEquals(3, rs.getRow()); + + assertFalse(rs.next()); + assertFalse(rs.isBeforeFirst()); + assertTrue(rs.isAfterLast()); + assertEquals(0, rs.getRow()); + + assertTrue(rs.first()); + 
assertEquals(1, rs.getInt(1)); + assertEquals(1, rs.getRow()); + + assertTrue(rs.last()); + assertEquals(3, rs.getInt(1)); + assertEquals(3, rs.getRow()); + + assertTrue(rs.relative(0)); + assertEquals(3, rs.getRow()); + + assertTrue(rs.relative(-1)); + assertEquals(2, rs.getRow()); + + assertTrue(rs.relative(1)); + assertEquals(3, rs.getRow()); + + assertFalse(rs.relative(100)); + assertTrue(rs.isAfterLast()); + + assertFalse(rs.absolute(0)); + assertEquals(0, rs.getRow()); + + assertTrue(rs.absolute(1)); + assertEquals(1, rs.getRow()); + + assertTrue(rs.absolute(2)); + assertEquals(2, rs.getRow()); + + assertTrue(rs.absolute(3)); + assertEquals(3, rs.getRow()); + + assertFalse(rs.absolute(4)); + assertEquals(0, rs.getRow()); + + // allowed for compatibility + assertFalse(rs.absolute(0)); + + assertTrue(rs.absolute(3)); + assertEquals(3, rs.getRow()); + + if (!config.lazy) { + assertTrue(rs.absolute(-1)); + assertEquals(3, rs.getRow()); + + assertTrue(rs.absolute(-2)); + assertEquals(2, rs.getRow()); + } + + assertFalse(rs.absolute(4)); + assertTrue(rs.isAfterLast()); + + assertFalse(rs.absolute(5)); + assertTrue(rs.isAfterLast()); + + assertTrue(rs.previous()); + assertEquals(3, rs.getRow()); + + assertTrue(rs.previous()); + assertEquals(2, rs.getRow()); + + conn.close(); + } + + private void testUpdateDataType() throws Exception { + deleteDb("updatableResultSet"); + Connection conn = getConnection("updatableResultSet"); + Statement stat = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, + ResultSet.CONCUR_UPDATABLE); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255), " + + "DEC DECIMAL(10,2), BOO BIT, BYE TINYINT, BIN BINARY(100), " + + "D DATE, T TIME, TS TIMESTAMP(9), DB DOUBLE, R REAL, L BIGINT, " + + "O_I INT, SH SMALLINT, CL CLOB, BL BLOB)"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + ResultSetMetaData meta = rs.getMetaData(); + assertEquals("java.lang.Integer", meta.getColumnClassName(1)); + 
assertEquals("java.lang.String", meta.getColumnClassName(2)); + assertEquals("java.math.BigDecimal", meta.getColumnClassName(3)); + assertEquals("java.lang.Boolean", meta.getColumnClassName(4)); + assertEquals("java.lang.Byte", meta.getColumnClassName(5)); + assertEquals("[B", meta.getColumnClassName(6)); + assertEquals("java.sql.Date", meta.getColumnClassName(7)); + assertEquals("java.sql.Time", meta.getColumnClassName(8)); + assertEquals("java.sql.Timestamp", meta.getColumnClassName(9)); + assertEquals("java.lang.Double", meta.getColumnClassName(10)); + assertEquals("java.lang.Float", meta.getColumnClassName(11)); + assertEquals("java.lang.Long", meta.getColumnClassName(12)); + assertEquals("java.lang.Integer", meta.getColumnClassName(13)); + assertEquals("java.lang.Short", meta.getColumnClassName(14)); + assertEquals("java.sql.Clob", meta.getColumnClassName(15)); + assertEquals("java.sql.Blob", meta.getColumnClassName(16)); + rs.moveToInsertRow(); + rs.updateInt(1, 0); + rs.updateNull(2); + rs.updateNull("DEC"); + // 'not set' values are set to null + assertThrows(ErrorCode.NO_DATA_AVAILABLE, rs).cancelRowUpdates(); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt(1, 1); + rs.updateString(2, null); + rs.updateBigDecimal(3, null); + rs.updateBoolean(4, false); + rs.updateByte(5, (byte) 0); + rs.updateBytes(6, null); + rs.updateDate(7, null); + rs.updateTime(8, null); + rs.updateTimestamp(9, null); + rs.updateDouble(10, 0.0); + rs.updateFloat(11, (float) 0.0); + rs.updateLong(12, 0L); + rs.updateObject(13, null); + rs.updateShort(14, (short) 0); + rs.updateCharacterStream(15, new StringReader("test"), 0); + rs.updateBinaryStream(16, + new ByteArrayInputStream(new byte[] { (byte) 0xff, 0x00 }), 0); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 2); + rs.updateString("NAME", "+"); + rs.updateBigDecimal("DEC", new BigDecimal("1.2")); + rs.updateBoolean("BOO", true); + rs.updateByte("BYE", (byte) 0xff); + rs.updateBytes("BIN", new byte[] { 
0x00, (byte) 0xff }); + rs.updateDate("D", Date.valueOf("2005-09-21")); + rs.updateTime("T", Time.valueOf("21:46:28")); + rs.updateTimestamp("TS", + Timestamp.valueOf("2005-09-21 21:47:09.567890123")); + rs.updateDouble("DB", 1.725); + rs.updateFloat("R", (float) 2.5); + rs.updateLong("L", Long.MAX_VALUE); + rs.updateObject("O_I", 10); + rs.updateShort("SH", Short.MIN_VALUE); + // auml, ouml, uuml + rs.updateCharacterStream("CL", new StringReader("\u00ef\u00f6\u00fc"), 0); + rs.updateBinaryStream("BL", + new ByteArrayInputStream(new byte[] { (byte) 0xab, 0x12 }), 0); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 3); + rs.updateCharacterStream("CL", new StringReader("\u00ef\u00f6\u00fc")); + rs.updateBinaryStream("BL", + new ByteArrayInputStream(new byte[] { (byte) 0xab, 0x12 })); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 4); + rs.updateCharacterStream(15, new StringReader("\u00ef\u00f6\u00fc")); + rs.updateBinaryStream(16, + new ByteArrayInputStream(new byte[] { (byte) 0xab, 0x12 })); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 5); + rs.updateClob("CL", new StringReader("\u00ef\u00f6\u00fc")); + rs.updateBlob("BL", + new ByteArrayInputStream(new byte[] { (byte) 0xab, 0x12 })); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 6); + rs.updateClob(15, new StringReader("\u00ef\u00f6\u00fc")); + rs.updateBlob(16, + new ByteArrayInputStream(new byte[] { (byte) 0xab, 0x12 })); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 7); + rs.updateNClob("CL", new StringReader("\u00ef\u00f6\u00fc")); + Blob b = conn.createBlob(); + OutputStream out = b.setBinaryStream(1); + out.write(new byte[] { (byte) 0xab, 0x12 }); + out.close(); + rs.updateBlob("BL", b); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 8); + rs.updateNClob(15, new StringReader("\u00ef\u00f6\u00fc")); + rs.updateBlob(16, b); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 9); + 
rs.updateNClob("CL", new StringReader("\u00ef\u00f6\u00fc"), -1); + rs.updateBlob("BL", b); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 10); + rs.updateNClob(15, new StringReader("\u00ef\u00f6\u00fc"), -1); + rs.updateBlob(16, b); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 11); + rs.updateNCharacterStream("CL", + new StringReader("\u00ef\u00f6\u00fc"), -1); + rs.updateBlob("BL", b); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 12); + rs.updateNCharacterStream(15, + new StringReader("\u00ef\u00f6\u00fc"), -1); + rs.updateBlob(16, b); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 13); + rs.updateNCharacterStream("CL", + new StringReader("\u00ef\u00f6\u00fc")); + rs.updateBlob("BL", b); + rs.insertRow(); + + rs.moveToInsertRow(); + rs.updateInt("ID", 14); + rs.updateNCharacterStream(15, + new StringReader("\u00ef\u00f6\u00fc")); + rs.updateBlob(16, b); + rs.insertRow(); + + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID NULLS FIRST"); + rs.next(); + assertTrue(rs.getInt(1) == 0); + assertTrue(rs.getString(2) == null && rs.wasNull()); + assertTrue(rs.getBigDecimal(3) == null && rs.wasNull()); + assertTrue(!rs.getBoolean(4) && rs.wasNull()); + assertTrue(rs.getByte(5) == 0 && rs.wasNull()); + assertTrue(rs.getBytes(6) == null && rs.wasNull()); + assertTrue(rs.getDate(7) == null && rs.wasNull()); + assertTrue(rs.getTime(8) == null && rs.wasNull()); + assertTrue(rs.getTimestamp(9) == null && rs.wasNull()); + assertTrue(rs.getDouble(10) == 0.0 && rs.wasNull()); + assertTrue(rs.getFloat(11) == 0.0 && rs.wasNull()); + assertTrue(rs.getLong(12) == 0 && rs.wasNull()); + assertTrue(rs.getObject(13) == null && rs.wasNull()); + assertTrue(rs.getShort(14) == 0 && rs.wasNull()); + assertTrue(rs.getCharacterStream(15) == null && rs.wasNull()); + assertTrue(rs.getBinaryStream(16) == null && rs.wasNull()); + + rs.next(); + assertTrue(rs.getInt(1) == 1); + assertTrue(rs.getString(2) == null && 
rs.wasNull()); + assertTrue(rs.getBigDecimal(3) == null && rs.wasNull()); + assertTrue(!rs.getBoolean(4) && !rs.wasNull()); + assertTrue(rs.getByte(5) == 0 && !rs.wasNull()); + assertTrue(rs.getBytes(6) == null && rs.wasNull()); + assertTrue(rs.getDate(7) == null && rs.wasNull()); + assertTrue(rs.getTime(8) == null && rs.wasNull()); + assertTrue(rs.getTimestamp(9) == null && rs.wasNull()); + assertTrue(rs.getDouble(10) == 0.0 && !rs.wasNull()); + assertTrue(rs.getFloat(11) == 0.0 && !rs.wasNull()); + assertTrue(rs.getLong(12) == 0 && !rs.wasNull()); + assertTrue(rs.getObject(13) == null && rs.wasNull()); + assertTrue(rs.getShort(14) == 0 && !rs.wasNull()); + assertEquals("test", rs.getString(15)); + assertEquals(new byte[] { (byte) 0xff, 0x00 }, rs.getBytes(16)); + + rs.next(); + assertTrue(rs.getInt(1) == 2); + assertEquals("+", rs.getString(2)); + assertEquals("1.20", rs.getBigDecimal(3).toString()); + assertTrue(rs.getBoolean(4)); + assertTrue((rs.getByte(5) & 0xff) == 0xff); + assertEquals(new byte[] { 0x00, (byte) 0xff }, rs.getBytes(6)); + assertEquals("2005-09-21", rs.getDate(7).toString()); + assertEquals("21:46:28", rs.getTime(8).toString()); + assertEquals("2005-09-21 21:47:09.567890123", rs.getTimestamp(9).toString()); + assertTrue(rs.getDouble(10) == 1.725); + assertTrue(rs.getFloat(11) == (float) 2.5); + assertTrue(rs.getLong(12) == Long.MAX_VALUE); + assertEquals(10, ((Integer) rs.getObject(13)).intValue()); + assertTrue(rs.getShort(14) == Short.MIN_VALUE); + // auml ouml uuml + assertEquals("\u00ef\u00f6\u00fc", rs.getString(15)); + assertEquals(new byte[] { (byte) 0xab, 0x12 }, rs.getBytes(16)); + + for (int i = 3; i <= 14; i++) { + rs.next(); + assertEquals(i, rs.getInt(1)); + assertEquals("\u00ef\u00f6\u00fc", rs.getString(15)); + assertEquals(new byte[] { (byte) 0xab, 0x12 }, rs.getBytes(16)); + } + assertFalse(rs.next()); + + stat.execute("DROP TABLE TEST"); + conn.close(); + } + + private void testUpdateDeleteInsert() throws SQLException { + 
deleteDb("updatableResultSet"); + Connection c1 = getConnection("updatableResultSet"); + Connection c2 = getConnection("updatableResultSet"); + Statement stat = c1.createStatement(ResultSet.TYPE_FORWARD_ONLY, + ResultSet.CONCUR_UPDATABLE); + stat.execute("DROP TABLE IF EXISTS TEST"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + int max = 8; + for (int i = 0; i < max; i++) { + stat.execute("INSERT INTO TEST VALUES(" + i + ", 'Hello" + i + "')"); + } + ResultSet rs; + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(0, rs.getInt(1)); + rs.moveToInsertRow(); + rs.updateInt(1, 100); + rs.moveToCurrentRow(); + assertEquals(0, rs.getInt(1)); + + rs = stat.executeQuery("SELECT * FROM TEST"); + int j = max; + while (rs.next()) { + int id = rs.getInt(1); + if (id % 2 == 0) { + Statement s2 = c2.createStatement(); + s2.execute("UPDATE TEST SET NAME = NAME || '+' WHERE ID = " + rs.getInt(1)); + if (id % 4 == 0) { + rs.refreshRow(); + } + rs.updateString(2, "Updated " + rs.getString(2)); + rs.updateRow(); + } else { + rs.deleteRow(); + } + // the driver does not detect it in any case + assertFalse(rs.rowUpdated()); + assertFalse(rs.rowInserted()); + assertFalse(rs.rowDeleted()); + + rs.moveToInsertRow(); + rs.updateString(2, "Inserted " + j); + rs.updateInt(1, j); + j += 2; + rs.insertRow(); + + // the driver does not detect it in any case + assertFalse(rs.rowUpdated()); + assertFalse(rs.rowInserted()); + assertFalse(rs.rowDeleted()); + + } + rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + while (rs.next()) { + int id = rs.getInt(1); + String name = rs.getString(2); + assertEquals(0, id % 2); + if (id >= max) { + assertEquals("Inserted " + id, rs.getString(2)); + } else { + if (id % 4 == 0) { + assertEquals("Updated Hello" + id + "+", rs.getString(2)); + } else { + assertEquals("Updated Hello" + id, rs.getString(2)); + } + } + trace("id=" + id + " name=" + name); + } + c2.close(); + c1.close(); + 
+ // test scrollable result sets + Connection conn = getConnection("updatableResultSet"); + for (int i = 0; i < 5; i++) { + testScrollable(conn, i); + } + conn.close(); + } + + private void testScrollable(Connection conn, int rows) throws SQLException { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + stat.execute("DELETE FROM TEST"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?)"); + for (int i = 0; i < rows; i++) { + prep.setInt(1, i); + prep.setString(2, "Data " + i); + prep.execute(); + } + Statement regular = conn.createStatement(); + testScrollResultSet(regular, ResultSet.TYPE_FORWARD_ONLY, rows); + Statement scroll = conn.createStatement( + ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); + testScrollResultSet(scroll, ResultSet.TYPE_SCROLL_INSENSITIVE, rows); + } + + private void testScrollResultSet(Statement stat, int type, int rows) + throws SQLException { + boolean error = false; + if (type == ResultSet.TYPE_FORWARD_ONLY) { + error = true; + } + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + assertEquals(type, rs.getType()); + + assertState(rs, rows > 0, false, false, false); + for (int i = 0; i < rows; i++) { + rs.next(); + assertState(rs, rows == 0, i == 0, i == rows - 1, rows == 0 || i == rows); + } + try { + rs.beforeFirst(); + assertState(rs, rows > 0, false, false, false); + } catch (SQLException e) { + if (!error) { + throw e; + } + } + try { + rs.afterLast(); + assertState(rs, false, false, false, rows > 0); + } catch (SQLException e) { + if (!error) { + throw e; + } + } + try { + boolean valid = rs.first(); + assertEquals(rows > 0, valid); + if (valid) { + assertState(rs, false, true, rows == 1, rows == 0); + } + } catch (SQLException e) { + if (!error) { + throw e; + } + } + try { + boolean valid = rs.last(); + assertEquals(rows > 0, valid); + if (valid) { + assertState(rs, false, rows == 1, 
true, rows == 0); + } + } catch (SQLException e) { + if (!error) { + throw e; + } + } + } + + private void assertState(ResultSet rs, boolean beforeFirst, + boolean first, boolean last, boolean afterLast) throws SQLException { + assertEquals(beforeFirst, rs.isBeforeFirst()); + assertEquals(first, rs.isFirst()); + assertEquals(last, rs.isLast()); + assertEquals(afterLast, rs.isAfterLast()); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestUrlJavaObjectSerializer.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestUrlJavaObjectSerializer.java new file mode 100644 index 0000000000000..b1a33ef1bb533 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestUrlJavaObjectSerializer.java @@ -0,0 +1,96 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import java.sql.Types; +import org.h2.api.JavaObjectSerializer; +import org.h2.test.TestBase; + +/** + * Tests per-db {@link JavaObjectSerializer} when set through the JDBC URL. + * + * @author Davide Cavestro + */ +public class TestUrlJavaObjectSerializer extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = createCaller().init(); + test.config.traceTest = true; + test.config.memory = true; + test.config.networked = true; + test.config.beforeTest(); + test.test(); + test.config.afterTest(); + } + + @Override + public void test() throws Exception { + FakeJavaObjectSerializer.testBaseRef = this; + try { + deleteDb("javaSerializer"); + String fqn = FakeJavaObjectSerializer.class.getName(); + Connection conn = getConnection( + "javaSerializer;JAVA_OBJECT_SERIALIZER='"+fqn+"'"); + + Statement stat = conn.createStatement(); + stat.execute("create table t1(id identity, val other)"); + + PreparedStatement ins = conn.prepareStatement( + "insert into t1(val) values(?)"); + + ins.setObject(1, 100500, Types.JAVA_OBJECT); + assertEquals(1, ins.executeUpdate()); + + Statement s = conn.createStatement(); + ResultSet rs = s.executeQuery("select val from t1"); + + assertTrue(rs.next()); + + assertEquals(100500, ((Integer) rs.getObject(1)).intValue()); + assertEquals(new byte[] { 1, 2, 3 }, rs.getBytes(1)); + + conn.close(); + deleteDb("javaSerializer"); + } finally { + FakeJavaObjectSerializer.testBaseRef = null; + } + } + + /** + * The serializer to use for this test. + */ + public static class FakeJavaObjectSerializer implements JavaObjectSerializer { + + /** + * The test. 
+ */ + static TestBase testBaseRef; + + @Override + public byte[] serialize(Object obj) throws Exception { + testBaseRef.assertEquals(100500, ((Integer) obj).intValue()); + + return new byte[] { 1, 2, 3 }; + } + + @Override + public Object deserialize(byte[] bytes) throws Exception { + testBaseRef.assertEquals(new byte[] { 1, 2, 3 }, bytes); + + return 100500; + } + + } +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/TestZloty.java b/modules/h2/src/test/java/org/h2/test/jdbc/TestZloty.java new file mode 100644 index 0000000000000..58ddf4454072b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/TestZloty.java @@ -0,0 +1,120 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbc; + +import java.math.BigDecimal; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests a custom BigDecimal implementation, as well + * as direct modification of a byte in a byte array. + */ +public class TestZloty extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testZloty(); + testModifyBytes(); + deleteDb("zloty"); + } + + /** + * This class overrides BigDecimal and implements some strange comparison + * method. 
+ */ + private static class ZlotyBigDecimal extends BigDecimal { + + private static final long serialVersionUID = 1L; + + public ZlotyBigDecimal(String s) { + super(s); + } + + @Override + public int compareTo(BigDecimal bd) { + return -super.compareTo(bd); + } + + } + + private void testModifyBytes() throws SQLException { + deleteDb("zloty"); + Connection conn = getConnection("zloty"); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT, DATA BINARY)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?)"); + byte[] shared = { 0 }; + prep.setInt(1, 0); + prep.setBytes(2, shared); + prep.execute(); + shared[0] = 1; + prep.setInt(1, 1); + prep.setBytes(2, shared); + prep.execute(); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(0, rs.getInt(1)); + assertEquals(0, rs.getBytes(2)[0]); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals(1, rs.getBytes(2)[0]); + rs.getBytes(2)[0] = 2; + assertEquals(1, rs.getBytes(2)[0]); + assertFalse(rs.next()); + conn.close(); + } + + /** + * H2 destroyer application ;-> + * + * @author Maciej Wegorkiewicz + */ + private void testZloty() throws SQLException { + deleteDb("zloty"); + Connection conn = getConnection("zloty"); + conn.createStatement().execute("CREATE TABLE TEST(ID INT, AMOUNT DECIMAL)"); + PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?)"); + prep.setInt(1, 1); + prep.setBigDecimal(2, new BigDecimal("10.0")); + prep.execute(); + prep.setInt(1, 2); + assertThrows(ErrorCode.INVALID_CLASS_2, prep). + setBigDecimal(2, new ZlotyBigDecimal("11.0")); + + prep.setInt(1, 3); + BigDecimal value = new BigDecimal("12.100000") { + + private static final long serialVersionUID = 1L; + + @Override + public String toString() { + return "12,100000 EURO"; + } + }; + assertThrows(ErrorCode.INVALID_CLASS_2, prep). 
+ setBigDecimal(2, value); + + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbc/package.html b/modules/h2/src/test/java/org/h2/test/jdbc/package.html new file mode 100644 index 0000000000000..eefece587895e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbc/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +JDBC API tests. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/jdbcx/SimpleXid.java b/modules/h2/src/test/java/org/h2/test/jdbcx/SimpleXid.java new file mode 100644 index 0000000000000..c24871c872273 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbcx/SimpleXid.java @@ -0,0 +1,85 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbcx; + +import java.util.Arrays; +import java.util.concurrent.atomic.AtomicInteger; +import javax.transaction.xa.Xid; +import org.h2.util.MathUtils; + +/** + * A simple Xid implementation. + */ +public class SimpleXid implements Xid { + + private static AtomicInteger next = new AtomicInteger(); + + private final int formatId; + private final byte[] branchQualifier; + private final byte[] globalTransactionId; + + private SimpleXid(int formatId, byte[] branchQualifier, + byte[] globalTransactionId) { + this.formatId = formatId; + this.branchQualifier = branchQualifier; + this.globalTransactionId = globalTransactionId; + } + + /** + * Create a new random xid. 
+ * + * @return the new object + */ + public static SimpleXid createRandom() { + int formatId = next.getAndIncrement(); + byte[] bq = new byte[MAXBQUALSIZE]; + MathUtils.randomBytes(bq); + byte[] gt = new byte[MAXGTRIDSIZE]; + MathUtils.randomBytes(gt); + return new SimpleXid(formatId, bq, gt); + } + + @Override + public byte[] getBranchQualifier() { + return branchQualifier; + } + + @Override + public int getFormatId() { + return formatId; + } + + @Override + public byte[] getGlobalTransactionId() { + return globalTransactionId; + } + + @Override + public int hashCode() { + return formatId; + } + + @Override + public boolean equals(Object other) { + if (other instanceof Xid) { + Xid xid = (Xid) other; + if (xid.getFormatId() == formatId) { + if (Arrays.equals(branchQualifier, xid.getBranchQualifier())) { + if (Arrays.equals(globalTransactionId, xid.getGlobalTransactionId())) { + return true; + } + } + } + } + return false; + } + + @Override + public String toString() { + return "xid:" + formatId; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbcx/TestConnectionPool.java b/modules/h2/src/test/java/org/h2/test/jdbcx/TestConnectionPool.java new file mode 100644 index 0000000000000..19c1f78072e12 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbcx/TestConnectionPool.java @@ -0,0 +1,255 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbcx; + +import java.io.PrintWriter; +import java.io.StringWriter; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import javax.sql.DataSource; + +import org.h2.jdbcx.JdbcConnectionPool; +import org.h2.jdbcx.JdbcDataSource; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * This class tests the JdbcConnectionPool. 
+ */ +public class TestConnectionPool extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("connectionPool"); + testShutdown(); + testWrongUrl(); + testTimeout(); + testUncommittedTransaction(); + testPerformance(); + testKeepOpen(); + testConnect(); + testThreads(); + deleteDb("connectionPool"); + deleteDb("connectionPool2"); + } + + private void testShutdown() throws SQLException { + String url = getURL("connectionPool2", true), user = getUser(); + String password = getPassword(); + JdbcConnectionPool cp = JdbcConnectionPool.create(url, user, password); + StringWriter w = new StringWriter(); + cp.setLogWriter(new PrintWriter(w)); + Connection conn1 = cp.getConnection(); + Connection conn2 = cp.getConnection(); + conn1.close(); + conn2.createStatement().execute("shutdown immediately"); + cp.dispose(); + assertTrue(w.toString().length() > 0); + cp.dispose(); + } + + private void testWrongUrl() { + JdbcConnectionPool cp = JdbcConnectionPool.create( + "jdbc:wrong:url", "", ""); + try { + cp.getConnection(); + } catch (SQLException e) { + assertEquals(8001, e.getErrorCode()); + } + cp.dispose(); + } + + private void testTimeout() throws Exception { + String url = getURL("connectionPool", true), user = getUser(); + String password = getPassword(); + final JdbcConnectionPool man = JdbcConnectionPool.create(url, user, password); + man.setLoginTimeout(1); + createClassProxy(man.getClass()); + assertThrows(IllegalArgumentException.class, man). 
+ setMaxConnections(-1); + man.setMaxConnections(2); + // connection 1 (of 2) + Connection conn = man.getConnection(); + Task t = new Task() { + @Override + public void call() { + while (!stop) { + // this calls notifyAll + man.setMaxConnections(1); + man.setMaxConnections(2); + } + } + }; + t.execute(); + long time = System.nanoTime(); + Connection conn2 = null; + try { + // connection 2 (of 1 or 2) may fail + conn2 = man.getConnection(); + // connection 3 (of 1 or 2) must fail + man.getConnection(); + fail(); + } catch (SQLException e) { + if (conn2 != null) { + conn2.close(); + } + assertContains(e.toString().toLowerCase(), "timeout"); + time = System.nanoTime() - time; + assertTrue("timeout after " + TimeUnit.NANOSECONDS.toMillis(time) + + " ms", time > TimeUnit.SECONDS.toNanos(1)); + } finally { + conn.close(); + t.get(); + } + + man.dispose(); + } + + private void testUncommittedTransaction() throws SQLException { + String url = getURL("connectionPool", true), user = getUser(); + String password = getPassword(); + JdbcConnectionPool man = JdbcConnectionPool.create(url, user, password); + + assertEquals(30, man.getLoginTimeout()); + man.setLoginTimeout(1); + assertEquals(1, man.getLoginTimeout()); + man.setLoginTimeout(0); + assertEquals(30, man.getLoginTimeout()); + assertEquals(10, man.getMaxConnections()); + + PrintWriter old = man.getLogWriter(); + PrintWriter pw = new PrintWriter(new StringWriter()); + man.setLogWriter(pw); + assertTrue(pw == man.getLogWriter()); + man.setLogWriter(old); + + Connection conn1 = man.getConnection(); + assertTrue(conn1.getAutoCommit()); + conn1.setAutoCommit(false); + conn1.close(); + assertTrue(conn1.isClosed()); + + Connection conn2 = man.getConnection(); + assertTrue(conn2.getAutoCommit()); + conn2.close(); + + man.dispose(); + } + + private void testPerformance() throws SQLException { + String url = getURL("connectionPool", true), user = getUser(); + String password = getPassword(); + JdbcConnectionPool man = 
JdbcConnectionPool.create(url, user, password); + Connection conn = man.getConnection(); + int len = 1000; + long time = System.nanoTime(); + for (int i = 0; i < len; i++) { + man.getConnection().close(); + } + man.dispose(); + trace((int) TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + time = System.nanoTime(); + for (int i = 0; i < len; i++) { + DriverManager.getConnection(url, user, password).close(); + } + trace((int) TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + conn.close(); + } + + private void testKeepOpen() throws Exception { + JdbcConnectionPool man = getConnectionPool(1); + Connection conn = man.getConnection(); + Statement stat = conn.createStatement(); + stat.execute("create local temporary table test(id int)"); + conn.close(); + conn = man.getConnection(); + stat = conn.createStatement(); + stat.execute("select * from test"); + conn.close(); + man.dispose(); + } + + private void testThreads() throws Exception { + final int len = getSize(4, 20); + final JdbcConnectionPool man = getConnectionPool(len - 2); + final boolean[] stop = { false }; + + /** + * This class gets and returns connections from the pool. 
+ */ + class TestRunner implements Runnable { + @Override + public void run() { + try { + while (!stop[0]) { + Connection conn = man.getConnection(); + if (man.getActiveConnections() >= len + 1) { + throw new Exception("a: " + + man.getActiveConnections() + + " is not smaller than b: " + (len + 1)); + } + Statement stat = conn.createStatement(); + stat.execute("SELECT 1 FROM DUAL"); + conn.close(); + Thread.sleep(100); + } + } catch (Exception e) { + e.printStackTrace(); + } + } + } + Thread[] threads = new Thread[len]; + for (int i = 0; i < len; i++) { + threads[i] = new Thread(new TestRunner()); + threads[i].start(); + } + Thread.sleep(1000); + stop[0] = true; + for (int i = 0; i < len; i++) { + threads[i].join(); + } + assertEquals(0, man.getActiveConnections()); + man.dispose(); + } + + private JdbcConnectionPool getConnectionPool(int poolSize) { + JdbcDataSource ds = new JdbcDataSource(); + ds.setURL(getURL("connectionPool", true)); + ds.setUser(getUser()); + ds.setPassword(getPassword()); + JdbcConnectionPool pool = JdbcConnectionPool.create(ds); + pool.setMaxConnections(poolSize); + return pool; + } + + private void testConnect() throws SQLException { + JdbcConnectionPool pool = getConnectionPool(3); + for (int i = 0; i < 100; i++) { + Connection conn = pool.getConnection(); + conn.close(); + } + pool.dispose(); + DataSource ds = pool; + assertThrows(IllegalStateException.class, ds). + getConnection(); + assertThrows(UnsupportedOperationException.class, ds). + getConnection(null, null); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbcx/TestDataSource.java b/modules/h2/src/test/java/org/h2/test/jdbcx/TestDataSource.java new file mode 100644 index 0000000000000..a6b85799df44a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbcx/TestDataSource.java @@ -0,0 +1,213 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.jdbcx; + +import java.io.PrintWriter; +import java.io.StringWriter; +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Statement; +import javax.naming.Reference; +import javax.naming.StringRefAddr; +import javax.naming.spi.ObjectFactory; +import javax.sql.ConnectionEvent; +import javax.sql.ConnectionEventListener; +import javax.sql.DataSource; +import javax.sql.XAConnection; +import javax.transaction.xa.XAResource; +import javax.transaction.xa.Xid; + +import org.h2.api.ErrorCode; +import org.h2.jdbcx.JdbcDataSource; +import org.h2.jdbcx.JdbcDataSourceFactory; +import org.h2.jdbcx.JdbcXAConnection; +import org.h2.message.TraceSystem; +import org.h2.test.TestBase; + +/** + * Tests DataSource and XAConnection. + */ +public class TestDataSource extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + +// public static void main(String... 
args) throws SQLException { +// +// // first, need to start on the command line: +// // rmiregistry 1099 +// +// // System.setProperty(Context.INITIAL_CONTEXT_FACTORY, +// "com.sun.jndi.ldap.LdapCtxFactory"); +// System.setProperty(Context.INITIAL_CONTEXT_FACTORY, +// "com.sun.jndi.rmi.registry.RegistryContextFactory"); +// System.setProperty(Context.PROVIDER_URL, "rmi://localhost:1099"); +// +// JdbcDataSource ds = new JdbcDataSource(); +// ds.setURL("jdbc:h2:test"); +// ds.setUser("test"); +// ds.setPassword(""); +// +// Context ctx = new InitialContext(); +// ctx.bind("jdbc/test", ds); +// +// DataSource ds2 = (DataSource)ctx.lookup("jdbc/test"); +// Connection conn = ds2.getConnection(); +// conn.close(); +// } + + @Override + public void test() throws Exception { + if (config.traceLevelFile > 0) { + TraceSystem sys = JdbcDataSourceFactory.getTraceSystem(); + sys.setFileName(getBaseDir() + "/test/trace"); + sys.setLevelFile(3); + } + testDataSourceFactory(); + testDataSource(); + testUnwrap(); + testXAConnection(); + deleteDb("dataSource"); + } + + private void testDataSourceFactory() throws Exception { + ObjectFactory factory = new JdbcDataSourceFactory(); + assertTrue(null == factory.getObjectInstance("test", null, null, null)); + Reference ref = new Reference("java.lang.String"); + assertTrue(null == factory.getObjectInstance(ref, null, null, null)); + ref = new Reference(JdbcDataSource.class.getName()); + ref.add(new StringRefAddr("url", "jdbc:h2:mem:")); + ref.add(new StringRefAddr("user", "u")); + ref.add(new StringRefAddr("password", "p")); + ref.add(new StringRefAddr("loginTimeout", "1")); + ref.add(new StringRefAddr("description", "test")); + JdbcDataSource ds = (JdbcDataSource) factory.getObjectInstance( + ref, null, null, null); + assertEquals(1, ds.getLoginTimeout()); + assertEquals("test", ds.getDescription()); + assertEquals("jdbc:h2:mem:", ds.getURL()); + assertEquals("u", ds.getUser()); + assertEquals("p", ds.getPassword()); + Reference ref2 = 
ds.getReference(); + assertEquals(ref.size(), ref2.size()); + assertEquals(ref.get("url").getContent().toString(), + ref2.get("url").getContent().toString()); + assertEquals(ref.get("user").getContent().toString(), + ref2.get("user").getContent().toString()); + assertEquals(ref.get("password").getContent().toString(), + ref2.get("password").getContent().toString()); + assertEquals(ref.get("loginTimeout").getContent().toString(), + ref2.get("loginTimeout").getContent().toString()); + assertEquals(ref.get("description").getContent().toString(), + ref2.get("description").getContent().toString()); + ds.setPasswordChars("abc".toCharArray()); + assertEquals("abc", ds.getPassword()); + } + + private void testXAConnection() throws Exception { + testXAConnection(false); + testXAConnection(true); + } + + private void testXAConnection(boolean userInDataSource) throws Exception { + deleteDb("dataSource"); + JdbcDataSource ds = new JdbcDataSource(); + String url = getURL("dataSource", true); + String user = getUser(); + ds.setURL(url); + if (userInDataSource) { + ds.setUser(user); + ds.setPassword(getPassword()); + } + if (userInDataSource) { + assertEquals("ds" + ds.getTraceId() + ": url=" + url + + " user=" + user, ds.toString()); + } else { + assertEquals("ds" + ds.getTraceId() + ": url=" + url + + " user=", ds.toString()); + } + XAConnection xaConn; + if (userInDataSource) { + xaConn = ds.getXAConnection(); + } else { + xaConn = ds.getXAConnection(user, getPassword()); + } + + int traceId = ((JdbcXAConnection) xaConn).getTraceId(); + assertTrue(xaConn.toString().startsWith("xads" + traceId + ": conn")); + + xaConn.addConnectionEventListener(new ConnectionEventListener() { + @Override + public void connectionClosed(ConnectionEvent event) { + // nothing to do + } + + @Override + public void connectionErrorOccurred(ConnectionEvent event) { + // nothing to do + } + }); + XAResource res = xaConn.getXAResource(); + + assertFalse(res.setTransactionTimeout(1)); + assertEquals(0, 
res.getTransactionTimeout()); + assertTrue(res.isSameRM(res)); + assertFalse(res.isSameRM(null)); + + Connection conn = xaConn.getConnection(); + assertEquals(user.toUpperCase(), conn.getMetaData().getUserName()); + Xid[] list = res.recover(XAResource.TMSTARTRSCAN); + assertEquals(0, list.length); + Statement stat = conn.createStatement(); + stat.execute("SELECT * FROM DUAL"); + conn.close(); + xaConn.close(); + } + + private void testDataSource() throws SQLException { + deleteDb("dataSource"); + JdbcDataSource ds = new JdbcDataSource(); + PrintWriter p = new PrintWriter(new StringWriter()); + ds.setLogWriter(p); + assertTrue(p == ds.getLogWriter()); + ds.setURL(getURL("dataSource", true)); + ds.setUser(getUser()); + ds.setPassword(getPassword()); + Connection conn; + conn = ds.getConnection(); + Statement stat; + stat = conn.createStatement(); + stat.execute("SELECT * FROM DUAL"); + conn.close(); + conn = ds.getConnection(getUser(), getPassword()); + stat = conn.createStatement(); + stat.execute("SELECT * FROM DUAL"); + conn.close(); + } + + private void testUnwrap() throws SQLException { + JdbcDataSource ds = new JdbcDataSource(); + assertTrue(ds.isWrapperFor(Object.class)); + assertTrue(ds.isWrapperFor(DataSource.class)); + assertTrue(ds.isWrapperFor(JdbcDataSource.class)); + assertFalse(ds.isWrapperFor(String.class)); + assertTrue(ds == ds.unwrap(Object.class)); + assertTrue(ds == ds.unwrap(DataSource.class)); + try { + ds.unwrap(String.class); + fail(); + } catch (SQLException ex) { + assertEquals(ErrorCode.INVALID_VALUE_2, ex.getErrorCode()); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbcx/TestXA.java b/modules/h2/src/test/java/org/h2/test/jdbcx/TestXA.java new file mode 100644 index 0000000000000..7b6e324ec997a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbcx/TestXA.java @@ -0,0 +1,423 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: James Devenish + */ +package org.h2.test.jdbcx; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; +import javax.sql.XAConnection; +import javax.sql.XADataSource; +import javax.transaction.xa.XAResource; +import javax.transaction.xa.Xid; +import org.h2.jdbcx.JdbcDataSource; +import org.h2.test.TestBase; +import org.h2.util.JdbcUtils; + +/** + * Basic XA tests. + */ +public class TestXA extends TestBase { + + private static final String DB_NAME1 = "xadb1"; + private static final String DB_NAME2 = "xadb2"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testRollbackWithoutPrepare(); + testRollbackAfterPrepare(); + testXAAutoCommit(); + deleteDb("xa"); + testMixedXaNormal(); + testXA(true); + deleteDb(DB_NAME1); + deleteDb(DB_NAME2); + testXA(false); + deleteDb("xa"); + deleteDb(DB_NAME1); + deleteDb(DB_NAME2); + } + + private void testRollbackWithoutPrepare() throws Exception { + if (config.memory) { + return; + } + Xid xid = new Xid() { + @Override + public int getFormatId() { + return 3145; + } + @Override + public byte[] getGlobalTransactionId() { + return new byte[] { 1, 2, 3, 4, 5, 6, 6, 7, 8 }; + } + @Override + public byte[] getBranchQualifier() { + return new byte[] { 34, 43, 33, 3, 3, 3, 33, 33, 3 }; + } + }; + deleteDb("xa"); + JdbcDataSource ds = new JdbcDataSource(); + ds.setURL(getURL("xa", true)); + ds.setPassword(getPassword()); + Connection dm = ds.getConnection(); + Statement stat = dm.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY, VAL INT)"); + stat.execute("INSERT INTO TEST(ID,VAL) VALUES (1,1)"); + dm.close(); + XAConnection c = ds.getXAConnection(); + XAResource xa = c.getXAResource(); + Connection connection = 
c.getConnection(); + xa.start(xid, XAResource.TMJOIN); + PreparedStatement ps = connection.prepareStatement( + "UPDATE TEST SET VAL=? WHERE ID=?"); + ps.setInt(1, new Random().nextInt()); + ps.setInt(2, 1); + ps.close(); + xa.rollback(xid); + connection.close(); + c.close(); + deleteDb("xa"); + } + + private void testRollbackAfterPrepare() throws Exception { + if (config.memory) { + return; + } + Xid xid = new Xid() { + @Override + public int getFormatId() { + return 3145; + } + @Override + public byte[] getGlobalTransactionId() { + return new byte[] { 1, 2, 3, 4, 5, 6, 6, 7, 8 }; + } + @Override + public byte[] getBranchQualifier() { + return new byte[] { 34, 43, 33, 3, 3, 3, 33, 33, 3 }; + } + }; + deleteDb("xa"); + JdbcDataSource ds = new JdbcDataSource(); + ds.setURL(getURL("xa", true)); + ds.setPassword(getPassword()); + Connection dm = ds.getConnection(); + Statement stat = dm.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY, VAL INT)"); + stat.execute("INSERT INTO TEST(ID,VAL) VALUES (1,1)"); + dm.close(); + XAConnection c = ds.getXAConnection(); + XAResource xa = c.getXAResource(); + Connection connection = c.getConnection(); + xa.start(xid, XAResource.TMJOIN); + PreparedStatement ps = connection.prepareStatement("UPDATE TEST SET VAL=? 
WHERE ID=?"); + ps.setInt(1, new Random().nextInt()); + ps.setInt(2, 1); + ps.close(); + xa.prepare(xid); + xa.rollback(xid); + connection.close(); + c.close(); + deleteDb("xa"); + } + + + private void testMixedXaNormal() throws Exception { + JdbcDataSource ds = new JdbcDataSource(); + ds.setURL("jdbc:h2:mem:test"); + ds.setUser("sa"); + ds.setPassword(""); + XAConnection xa = ds.getXAConnection(); + Connection c = xa.getConnection(); + assertTrue(c.getAutoCommit()); + MyXid xid = new MyXid(); + XAResource res = xa.getXAResource(); + + res.start(xid, XAResource.TMNOFLAGS); + assertTrue(!c.getAutoCommit()); + res.end(xid, XAResource.TMSUCCESS); + res.commit(xid, true); + assertTrue(c.getAutoCommit()); + + res.start(xid, XAResource.TMNOFLAGS); + assertTrue(!c.getAutoCommit()); + res.end(xid, XAResource.TMFAIL); + res.rollback(xid); + assertTrue(c.getAutoCommit()); + + c.close(); + xa.close(); + } + + /** + * A simple Xid implementation. + */ + public static class MyXid implements Xid { + private final byte[] branchQualifier = { 0 }; + private final byte[] globalTransactionId = { 0 }; + @Override + public byte[] getBranchQualifier() { + return branchQualifier; + } + @Override + public int getFormatId() { + return 0; + } + @Override + public byte[] getGlobalTransactionId() { + return globalTransactionId; + } + } + + private void testXAAutoCommit() throws Exception { + JdbcDataSource ds = new JdbcDataSource(); + ds.setURL("jdbc:h2:mem:test"); + ds.setUser("sa"); + ds.setPassword(""); + XAConnection xa = ds.getXAConnection(); + MyXid xid = new MyXid(); + xa.getXAResource().start(xid, + XAResource.TMNOFLAGS); + Connection c = xa.getConnection(); + assertTrue(!c.getAutoCommit()); + c.close(); + xa.close(); + } + + private void testXA(boolean useOneDatabase) throws SQLException { + String url1 = getURL(DB_NAME1, true); + String url2 = getURL(DB_NAME2, true); + + XAConnection xaConn1 = null; + XAConnection xaConn2 = null; + Connection conn1 = null; + Connection conn2 = null; 
+ Statement stat1 = null; + Statement stat2 = null; + try { + trace("xads1 = createXADatasource1()"); + XADataSource xaDs1 = createXADatasource(useOneDatabase, url1); + trace("xads2 = createXADatasource2()"); + XADataSource xaDs2 = createXADatasource(useOneDatabase, url2); + + trace("xacon1 = xads1.getXAConnection()"); + xaConn1 = xaDs1.getXAConnection(); + trace("xacon2 = xads2.getXAConnection()"); + xaConn2 = xaDs2.getXAConnection(); + + trace("xares1 = xacon1.getXAResource()"); + XAResource xares1 = xaConn1.getXAResource(); + trace("xares2 = xacon2.getXAResource()"); + XAResource xares2 = xaConn2.getXAResource(); + + trace("xares1.recover(XAResource.TMSTARTRSCAN)"); + Xid[] xids1 = xares1.recover(XAResource.TMSTARTRSCAN); + if ((xids1 == null) || (xids1.length == 0)) { + trace("xares1.recover(XAResource.TMSTARTRSCAN): 0"); + } else { + trace("xares1.recover(XAResource.TMSTARTRSCAN): " + xids1.length); + } + + trace("xares2.recover(XAResource.TMSTARTRSCAN)"); + Xid[] xids2 = xares2.recover(XAResource.TMSTARTRSCAN); + if ((xids2 == null) || (xids2.length == 0)) { + trace("xares2.recover(XAResource.TMSTARTRSCAN): 0"); + } else { + trace("xares2.recover(XAResource.TMSTARTRSCAN): " + xids2.length); + } + + trace("con1 = xacon1.getConnection()"); + conn1 = xaConn1.getConnection(); + trace("stmt1 = con1.createStatement()"); + stat1 = conn1.createStatement(); + + trace("con2 = xacon2.getConnection()"); + conn2 = xaConn2.getConnection(); + trace("stmt2 = con2.createStatement()"); + stat2 = conn2.createStatement(); + + if (useOneDatabase) { + trace("stmt1.executeUpdate(\"DROP TABLE xatest1\")"); + try { + stat1.executeUpdate("DROP TABLE xatest1"); + } catch (SQLException e) { + // ignore + } + trace("stmt2.executeUpdate(\"DROP TABLE xatest2\")"); + try { + stat2.executeUpdate("DROP TABLE xatest2"); + } catch (SQLException e) { + // ignore + } + } else { + trace("stmt1.executeUpdate(\"DROP TABLE xatest\")"); + try { + stat1.executeUpdate("DROP TABLE xatest"); + } catch 
(SQLException e) { + // ignore + } + trace("stmt2.executeUpdate(\"DROP TABLE xatest\")"); + try { + stat2.executeUpdate("DROP TABLE xatest"); + } catch (SQLException e) { + // ignore + } + } + + if (useOneDatabase) { + trace("stmt1.executeUpdate(\"CREATE TABLE xatest1 " + + "(id INT PRIMARY KEY, value INT)\")"); + stat1.executeUpdate("CREATE TABLE xatest1 " + + "(id INT PRIMARY KEY, value INT)"); + trace("stmt2.executeUpdate(\"CREATE TABLE xatest2 " + + "(id INT PRIMARY KEY, value INT)\")"); + stat2.executeUpdate("CREATE TABLE xatest2 " + + "(id INT PRIMARY KEY, value INT)"); + } else { + trace("stmt1.executeUpdate(\"CREATE TABLE xatest " + + "(id INT PRIMARY KEY, value INT)\")"); + stat1.executeUpdate("CREATE TABLE xatest " + + "(id INT PRIMARY KEY, value INT)"); + trace("stmt2.executeUpdate(\"CREATE TABLE xatest " + + "(id INT PRIMARY KEY, value INT)\")"); + stat2.executeUpdate("CREATE TABLE xatest " + + "(id INT PRIMARY KEY, value INT)"); + } + + if (useOneDatabase) { + trace("stmt1.executeUpdate(\"INSERT INTO xatest1 " + + "VALUES (1, 0)\")"); + stat1.executeUpdate("INSERT INTO xatest1 VALUES (1, 0)"); + trace("stmt2.executeUpdate(\"INSERT INTO xatest2 " + + "VALUES (2, 0)\")"); + stat2.executeUpdate("INSERT INTO xatest2 " + + "VALUES (2, 0)"); + } else { + trace("stmt1.executeUpdate(\"INSERT INTO xatest " + + "VALUES (1, 0)\")"); + stat1.executeUpdate("INSERT INTO xatest " + + "VALUES (1, 0)"); + trace("stmt2.executeUpdate(\"INSERT INTO xatest " + + "VALUES (2, 0)\")"); + stat2.executeUpdate("INSERT INTO xatest " + + "VALUES (2, 0)"); + } + + Xid xid1 = null; + Xid xid2 = null; + + if (useOneDatabase) { + xid1 = SimpleXid.createRandom(); + xid2 = SimpleXid.createRandom(); + } else { + xid1 = SimpleXid.createRandom(); + xid2 = xid1; + } + + if (useOneDatabase) { + trace("xares1.start(xid1, XAResource.TMNOFLAGS)"); + xares1.start(xid1, XAResource.TMNOFLAGS); + trace("xares2.start(xid2, XAResource.TMJOIN)"); + xares2.start(xid2, XAResource.TMJOIN); + } else { + 
trace("xares1.start(xid1, XAResource.TMNOFLAGS)"); + xares1.start(xid1, XAResource.TMNOFLAGS); + trace("xares2.start(xid2, XAResource.TMNOFLAGS)"); + xares2.start(xid2, XAResource.TMNOFLAGS); + } + + if (useOneDatabase) { + trace("stmt1.executeUpdate(\"UPDATE xatest1 " + + "SET value=1 WHERE id=1\")"); + stat1.executeUpdate("UPDATE xatest1 " + + "SET value=1 WHERE id=1"); + trace("stmt2.executeUpdate(\"UPDATE xatest2 " + + "SET value=1 WHERE id=2\")"); + stat2.executeUpdate("UPDATE xatest2 " + + "SET value=1 WHERE id=2"); + } else { + trace("stmt1.executeUpdate(\"UPDATE xatest " + + "SET value=1 WHERE id=1\")"); + stat1.executeUpdate("UPDATE xatest " + + "SET value=1 WHERE id=1"); + trace("stmt2.executeUpdate(\"UPDATE xatest " + + "SET value=1 WHERE id=2\")"); + stat2.executeUpdate("UPDATE xatest " + + "SET value=1 WHERE id=2"); + } + + trace("xares1.end(xid1, XAResource.TMSUCCESS)"); + xares1.end(xid1, XAResource.TMSUCCESS); + trace("xares2.end(xid2, XAResource.TMSUCCESS)"); + xares2.end(xid2, XAResource.TMSUCCESS); + + int ret1; + int ret2; + + trace("ret1 = xares1.prepare(xid1)"); + ret1 = xares1.prepare(xid1); + trace("xares1.prepare(xid1): " + ret1); + trace("ret2 = xares2.prepare(xid2)"); + ret2 = xares2.prepare(xid2); + trace("xares2.prepare(xid2): " + ret2); + + if ((ret1 != XAResource.XA_OK) && (ret1 != XAResource.XA_RDONLY)) { + throw new IllegalStateException( + "xares1.prepare(xid1) must return XA_OK or XA_RDONLY"); + } + if ((ret2 != XAResource.XA_OK) && (ret2 != XAResource.XA_RDONLY)) { + throw new IllegalStateException( + "xares2.prepare(xid2) must return XA_OK or XA_RDONLY"); + } + + if (ret1 == XAResource.XA_OK) { + trace("xares1.commit(xid1, false)"); + xares1.commit(xid1, false); + } + if (ret2 == XAResource.XA_OK) { + trace("xares2.commit(xid2, false)"); + xares2.commit(xid2, false); + } + } catch (Exception e) { + e.printStackTrace(); + } finally { + JdbcUtils.closeSilently(stat1); + JdbcUtils.closeSilently(stat2); + 
JdbcUtils.closeSilently(conn1); + JdbcUtils.closeSilently(conn2); + if (xaConn1 != null) { + xaConn1.close(); + } + if (xaConn2 != null) { + xaConn2.close(); + } + } + } + + private XADataSource createXADatasource(boolean useOneDatabase, String url) { + JdbcDataSource ds = new JdbcDataSource(); + ds.setPassword(getPassword("")); + ds.setUser("sa"); + if (useOneDatabase) { + ds.setURL(getURL("xa", true)); + } else { + ds.setURL(url); + } + return ds; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbcx/TestXASimple.java b/modules/h2/src/test/java/org/h2/test/jdbcx/TestXASimple.java new file mode 100644 index 0000000000000..e03f75c9f5389 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbcx/TestXASimple.java @@ -0,0 +1,160 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.jdbcx; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import javax.sql.XAConnection; +import javax.transaction.xa.XAResource; +import javax.transaction.xa.Xid; +import org.h2.jdbcx.JdbcDataSource; +import org.h2.test.TestBase; +import org.h2.util.JdbcUtils; + +/** + * A simple XA test. + */ +public class TestXASimple extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testTwoPhase(); + testSimple(); + } + + private void testTwoPhase() throws Exception { + if (config.memory || config.networked) { + return; + } + // testTwoPhase(false, true); + // testTwoPhase(false, false); + testTwoPhase("xaSimple2a", true, true); + testTwoPhase("xaSimple2b", true, false); + + } + + private void testTwoPhase(String db, boolean shutdown, boolean commit) + throws Exception { + deleteDb(db); + JdbcDataSource ds = new JdbcDataSource(); + ds.setPassword(getPassword()); + ds.setUser("sa"); + // ds.setURL(getURL("xaSimple", true) + ";trace_level_system_out=3"); + ds.setURL(getURL(db, true)); + + XAConnection xa; + xa = ds.getXAConnection(); + Connection conn; + + conn = xa.getConnection(); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar(255))"); + Xid xid = SimpleXid.createRandom(); + xa.getXAResource().start(xid, XAResource.TMNOFLAGS); + conn.setAutoCommit(false); + stat.execute("insert into test values(1, 'Hello')"); + xa.getXAResource().end(xid, XAResource.TMSUCCESS); + xa.getXAResource().prepare(xid); + if (shutdown) { + shutdown(ds); + } + + xa = ds.getXAConnection(); + Xid[] list = xa.getXAResource().recover(XAResource.TMSTARTRSCAN); + assertEquals(1, list.length); + assertTrue(xid.equals(list[0])); + if (commit) { + xa.getXAResource().commit(list[0], false); + } else { + xa.getXAResource().rollback(list[0]); + } + conn = xa.getConnection(); + conn.createStatement().executeQuery("select * from test"); + if (shutdown) { + shutdown(ds); + } + + xa = ds.getXAConnection(); + list = xa.getXAResource().recover(XAResource.TMSTARTRSCAN); + assertEquals(0, list.length); + conn = xa.getConnection(); + ResultSet rs; + rs = conn.createStatement().executeQuery("select * from test"); + if (commit) { + assertTrue(rs.next()); + } else { + assertFalse(rs.next()); + } + xa.close(); 
+ } + + private static void shutdown(JdbcDataSource ds) throws SQLException { + Connection conn = ds.getConnection(); + conn.createStatement().execute("shutdown immediately"); + JdbcUtils.closeSilently(conn); + } + + private void testSimple() throws SQLException { + + deleteDb("xaSimple1"); + deleteDb("xaSimple2"); + org.h2.Driver.load(); + + // InitialContext context = new InitialContext(); + // context.rebind(USER_TRANSACTION_JNDI_NAME, j.getUserTransaction()); + + JdbcDataSource ds1 = new JdbcDataSource(); + ds1.setPassword(getPassword()); + ds1.setUser("sa"); + ds1.setURL(getURL("xaSimple1", true)); + + JdbcDataSource ds2 = new JdbcDataSource(); + ds2.setPassword(getPassword()); + ds2.setUser("sa"); + ds2.setURL(getURL("xaSimple2", true)); + + // UserTransaction ut = (UserTransaction) + // context.lookup("UserTransaction"); + // ut.begin(); + + XAConnection xa1 = ds1.getXAConnection(); + Connection c1 = xa1.getConnection(); + c1.setAutoCommit(false); + XAConnection xa2 = ds2.getXAConnection(); + Connection c2 = xa2.getConnection(); + c2.setAutoCommit(false); + + c1.createStatement().executeUpdate( + "create table test(id int, test varchar(255))"); + c2.createStatement().executeUpdate( + "create table test(id int, test varchar(255))"); + + // ut.rollback(); + c1.close(); + c2.close(); + + xa1.close(); + xa2.close(); + + // j.stop(); + // System.exit(0); + deleteDb("xaSimple1"); + deleteDb("xaSimple2"); + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/jdbcx/package.html b/modules/h2/src/test/java/org/h2/test/jdbcx/package.html new file mode 100644 index 0000000000000..4384639aa0493 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/jdbcx/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Tests related to distributed transactions. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc1.java b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc1.java new file mode 100644 index 0000000000000..b30104678ad91 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc1.java @@ -0,0 +1,397 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.mvcc; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Basic MVCC (multi version concurrency) test cases. + */ +public class TestMvcc1 extends TestBase { + + private Connection c1, c2; + private Statement s1, s2; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.mvcc = true; + test.test(); + } + + @Override + public void test() throws SQLException { + testCases(); + testSetMode(); + deleteDb("mvcc1"); + } + + private void testSetMode() throws SQLException { + deleteDb("mvcc1"); + c1 = getConnection("mvcc1;MVCC=FALSE"); + Statement stat = c1.createStatement(); + ResultSet rs = stat.executeQuery( + "select * from information_schema.settings where name='MVCC'"); + rs.next(); + assertEquals("FALSE", rs.getString("VALUE")); + assertThrows(ErrorCode.CANNOT_CHANGE_SETTING_WHEN_OPEN_1, stat). 
+ execute("SET MVCC TRUE"); + rs = stat.executeQuery("select * from information_schema.settings " + + "where name='MVCC'"); + rs.next(); + assertEquals("FALSE", rs.getString("VALUE")); + c1.close(); + } + + private void testCases() throws SQLException { + if (!config.mvcc) { + return; + } + ResultSet rs; + + // TODO Prio 1: document: exclusive table lock still used when altering + // tables, adding indexes, select ... for update; table level locks are + // checked + // TODO Prio 2: if MVCC is used, rows of transactions need to fit in + // memory + // TODO Prio 2: getFirst / getLast in MultiVersionIndex + // TODO Prio 2: snapshot isolation (currently read-committed, not + // repeatable read) + + // TODO test: one thread appends, the other + // selects new data (select * from test where id > ?) and deletes + + deleteDb("mvcc1"); + c1 = getConnection("mvcc1;MVCC=TRUE;LOCK_TIMEOUT=10"); + s1 = c1.createStatement(); + c2 = getConnection("mvcc1;MVCC=TRUE;LOCK_TIMEOUT=10"); + s2 = c2.createStatement(); + c1.setAutoCommit(false); + c2.setAutoCommit(false); + + // table rollback problem + assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, s1). 
+ execute("create table b(primary key(x))"); + s1.execute("create table a(id int as 1 unique)"); + s1.execute("drop table a"); + + // update same key problem + s1.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR, PRIMARY KEY(ID))"); + s1.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + c1.commit(); + assertResult("Hello", s2, "SELECT NAME FROM TEST WHERE ID=1"); + s1.execute("UPDATE TEST SET NAME = 'Hallo' WHERE ID=1"); + assertResult("Hello", s2, "SELECT NAME FROM TEST WHERE ID=1"); + assertResult("Hallo", s1, "SELECT NAME FROM TEST WHERE ID=1"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + c2.commit(); + + // referential integrity problem + s1.execute("create table a (id integer identity not null, " + + "code varchar(10) not null, primary key(id))"); + s1.execute("create table b (name varchar(100) not null, a integer, " + + "primary key(name), foreign key(a) references a(id))"); + s1.execute("insert into a(code) values('one')"); + assertThrows(ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, s2). + execute("insert into b values('un B', 1)"); + c2.commit(); + c1.rollback(); + s1.execute("drop table a, b"); + c2.commit(); + + // it should not be possible to drop a table + // when an uncommitted transaction changed something + s1.execute("create table test(id int primary key)"); + s1.execute("insert into test values(1)"); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, s2). 
+ execute("drop table test"); + c1.rollback(); + s2.execute("drop table test"); + c2.rollback(); + + // table scan problem + s1.execute("create table test(id int, name varchar)"); + s1.execute("insert into test values(1, 'A'), (2, 'B')"); + c1.commit(); + assertResult("2", s1, "select count(*) from test where name<>'C'"); + s2.execute("update test set name='B2' where id=2"); + assertResult("2", s1, "select count(*) from test where name<>'C'"); + c2.commit(); + s2.execute("drop table test"); + c2.rollback(); + + // select for update should do an exclusive lock, even with mvcc + s1.execute("create table test(id int primary key, name varchar(255))"); + s1.execute("insert into test values(1, 'y')"); + c1.commit(); + s2.execute("select * from test where id = 1 for update"); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, s1). + execute("delete from test"); + c2.rollback(); + s1.execute("drop table test"); + c1.commit(); + c2.commit(); + + s1.execute("create table test(id int primary key, name varchar(255))"); + s2.execute("insert into test values(4, 'Hello')"); + c2.rollback(); + assertResult("0", s1, "select count(*) from test where name = 'Hello'"); + assertResult("0", s2, "select count(*) from test where name = 'Hello'"); + c1.commit(); + c2.commit(); + s1.execute("DROP TABLE TEST"); + + s1.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + s1.execute("INSERT INTO TEST VALUES(1, 'Test')"); + c1.commit(); + assertResult("1", s1, "select max(id) from test"); + s1.execute("INSERT INTO TEST VALUES(2, 'World')"); + c1.rollback(); + assertResult("1", s1, "select max(id) from test"); + c1.commit(); + c2.commit(); + s1.execute("DROP TABLE TEST"); + + + s1.execute("create table test as select * from table(id int=(1, 2))"); + s1.execute("update test set id=1 where id=1"); + s1.execute("select max(id) from test"); + assertResult("2", s1, "select max(id) from test"); + c1.commit(); + c2.commit(); + s1.execute("DROP TABLE TEST"); + + s1.execute("CREATE TABLE TEST(ID 
INT)"); + s1.execute("INSERT INTO TEST VALUES(1)"); + c1.commit(); + assertResult("1", s2, "SELECT COUNT(*) FROM TEST"); + s1.executeUpdate("DELETE FROM TEST"); + PreparedStatement p2 = c2.prepareStatement("select count(*) from test"); + rs = p2.executeQuery(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertResult("1", s2, "SELECT COUNT(*) FROM TEST"); + assertResult("0", s1, "SELECT COUNT(*) FROM TEST"); + c1.commit(); + assertResult("0", s2, "SELECT COUNT(*) FROM TEST"); + rs = p2.executeQuery(); + rs.next(); + assertEquals(0, rs.getInt(1)); + c1.commit(); + c2.commit(); + s1.execute("DROP TABLE TEST"); + + s1.execute("CREATE TABLE TEST(ID INT)"); + s1.execute("INSERT INTO TEST VALUES(1)"); + c1.commit(); + s1.execute("DELETE FROM TEST"); + assertResult("0", s1, "SELECT COUNT(*) FROM TEST"); + c1.commit(); + assertResult("0", s1, "SELECT COUNT(*) FROM TEST"); + s1.execute("INSERT INTO TEST VALUES(1)"); + s1.execute("DELETE FROM TEST"); + c1.commit(); + assertResult("0", s1, "SELECT COUNT(*) FROM TEST"); + s1.execute("DROP TABLE TEST"); + + s1.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + s1.execute("INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World')"); + assertResult("0", s2, "SELECT COUNT(*) FROM TEST"); + c1.commit(); + assertResult("2", s2, "SELECT COUNT(*) FROM TEST"); + s1.execute("INSERT INTO TEST VALUES(3, '!')"); + c1.rollback(); + assertResult("2", s2, "SELECT COUNT(*) FROM TEST"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + + s1.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + s1.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + s1.execute("DELETE FROM TEST"); + assertResult("0", s2, "SELECT COUNT(*) FROM TEST"); + c1.commit(); + assertResult("0", s2, "SELECT COUNT(*) FROM TEST"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + + s1.execute("CREATE TABLE TEST(ID INT IDENTITY, NAME VARCHAR)"); + s1.execute("INSERT INTO TEST(NAME) VALUES('Ruebezahl')"); + assertResult("0", s2, "SELECT COUNT(*) FROM 
TEST"); + assertResult("1", s1, "SELECT COUNT(*) FROM TEST"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + + s1.execute("CREATE TABLE TEST(ID INT IDENTITY, NAME VARCHAR)"); + s1.execute("INSERT INTO TEST(NAME) VALUES('Ruebezahl')"); + s1.execute("INSERT INTO TEST(NAME) VALUES('Ruebezahl')"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + + s1.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + s1.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + c1.commit(); + s1.execute("DELETE FROM TEST WHERE ID=1"); + c1.rollback(); + s1.execute("DROP TABLE TEST"); + c1.commit(); + + Random random = new Random(1); + s1.execute("CREATE TABLE TEST(ID INT IDENTITY, NAME VARCHAR)"); + Statement s; + Connection c; + for (int i = 0; i < 1000; i++) { + if (random.nextBoolean()) { + s = s1; + c = c1; + } else { + s = s2; + c = c2; + } + switch (random.nextInt(5)) { + case 0: + s.execute("INSERT INTO TEST(NAME) VALUES('Hello')"); + break; + case 1: + s.execute("UPDATE TEST SET NAME=" + i + " WHERE ID=" + random.nextInt(i)); + break; + case 2: + s.execute("DELETE FROM TEST WHERE ID=" + random.nextInt(i)); + break; + case 3: + c.commit(); + break; + case 4: + c.rollback(); + break; + default: + } + s1.execute("SELECT * FROM TEST ORDER BY ID"); + s2.execute("SELECT * FROM TEST ORDER BY ID"); + } + c2.rollback(); + s1.execute("DROP TABLE TEST"); + c1.commit(); + c2.commit(); + + random = new Random(1); + s1.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + for (int i = 0; i < 1000; i++) { + if (random.nextBoolean()) { + s = s1; + c = c1; + } else { + s = s2; + c = c2; + } + switch (random.nextInt(5)) { + case 0: + s.execute("INSERT INTO TEST VALUES(" + i + ", 'Hello')"); + break; + case 1: + try { + s.execute("UPDATE TEST SET NAME=" + i + " WHERE ID=" + random.nextInt(i)); + } catch (SQLException e) { + assertEquals(ErrorCode.CONCURRENT_UPDATE_1, e.getErrorCode()); + } + break; + case 2: + s.execute("DELETE FROM TEST WHERE ID=" + random.nextInt(i)); + 
break; + case 3: + c.commit(); + break; + case 4: + c.rollback(); + break; + default: + } + s1.execute("SELECT * FROM TEST ORDER BY ID"); + s2.execute("SELECT * FROM TEST ORDER BY ID"); + } + c2.rollback(); + s1.execute("DROP TABLE TEST"); + c1.commit(); + c2.commit(); + + s1.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR)"); + s1.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + assertResult("0", s2, "SELECT COUNT(*) FROM TEST WHERE NAME!='X'"); + assertResult("1", s1, "SELECT COUNT(*) FROM TEST WHERE NAME!='X'"); + c1.commit(); + assertResult("1", s2, "SELECT COUNT(*) FROM TEST WHERE NAME!='X'"); + assertResult("1", s2, "SELECT COUNT(*) FROM TEST WHERE NAME!='X'"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + c2.commit(); + + s1.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + s1.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + assertResult("0", s2, "SELECT COUNT(*) FROM TEST WHERE ID<100"); + assertResult("1", s1, "SELECT COUNT(*) FROM TEST WHERE ID<100"); + c1.commit(); + assertResult("1", s2, "SELECT COUNT(*) FROM TEST WHERE ID<100"); + assertResult("1", s2, "SELECT COUNT(*) FROM TEST WHERE ID<100"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + c2.commit(); + + s1.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR, PRIMARY KEY(ID, NAME))"); + s1.execute("INSERT INTO TEST VALUES(1, 'Hello')"); + c1.commit(); + assertResult("Hello", s2, "SELECT NAME FROM TEST WHERE ID=1"); + s1.execute("UPDATE TEST SET NAME = 'Hallo' WHERE ID=1"); + assertResult("Hello", s2, "SELECT NAME FROM TEST WHERE ID=1"); + assertResult("Hallo", s1, "SELECT NAME FROM TEST WHERE ID=1"); + s1.execute("DROP TABLE TEST"); + c1.commit(); + c2.commit(); + + + s1.execute("create table test(id int primary key, name varchar(255))"); + s1.execute("insert into test values(1, 'Hello'), (2, 'World')"); + c1.commit(); + assertThrows(ErrorCode.DUPLICATE_KEY_1, s1). 
+ execute("update test set id=2 where id=1"); + rs = s1.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + assertFalse(rs.next()); + + rs = s2.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + assertFalse(rs.next()); + s1.execute("drop table test"); + c1.commit(); + c2.commit(); + + c1.close(); + c2.close(); + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc2.java b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc2.java new file mode 100644 index 0000000000000..b4d6ab00c2ada --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc2.java @@ -0,0 +1,181 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.mvcc; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.atomic.AtomicBoolean; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Additional MVCC (multi version concurrency) test cases. 
+ */ +public class TestMvcc2 extends TestBase { + + private static final String DROP_TABLE = + "DROP TABLE IF EXISTS EMPLOYEE"; + private static final String CREATE_TABLE = + "CREATE TABLE EMPLOYEE (id BIGINT, version BIGINT, NAME VARCHAR(255))"; + private static final String INSERT = + "INSERT INTO EMPLOYEE (id, version, NAME) VALUES (1, 1, 'Jones')"; + private static final String UPDATE = + "UPDATE EMPLOYEE SET NAME = 'Miller' WHERE version = 1"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.mvcc = true; + test.test(); + } + + @Override + public void test() throws Exception { + if (!config.mvcc) { + return; + } + deleteDb("mvcc2"); + testConcurrentInsert(); + testConcurrentUpdate(); + testSelectForUpdate(); + testInsertUpdateRollback(); + testInsertRollback(); + deleteDb("mvcc2"); + } + + private Connection getConnection() throws SQLException { + return getConnection("mvcc2"); + } + + private void testConcurrentInsert() throws Exception { + Connection conn = getConnection(); + final Connection conn2 = getConnection(); + Statement stat = conn.createStatement(); + final Statement stat2 = conn2.createStatement(); + stat2.execute("set lock_timeout 1000"); + stat.execute("create table test(id int primary key, name varchar)"); + conn.setAutoCommit(false); + final AtomicBoolean committed = new AtomicBoolean(false); + Task t = new Task() { + @Override + public void call() throws SQLException { + try { +//System.out.println("insert2 hallo"); + stat2.execute("insert into test values(0, 'Hallo')"); +//System.out.println("insert2 hallo done"); + } catch (SQLException e) { +//System.out.println("insert2 hallo e " + e); + if (!committed.get()) { + throw e; + } + } + } + }; +//System.out.println("insert hello"); + stat.execute("insert into test values(0, 'Hello')"); + t.execute(); + Thread.sleep(500); +//System.out.println("insert hello 
commit"); + committed.set(true); + conn.commit(); + t.get(); + ResultSet rs; + rs = stat.executeQuery("select name from test"); + rs.next(); + assertEquals("Hello", rs.getString(1)); + stat.execute("drop table test"); + conn2.close(); + conn.close(); + } + + private void testConcurrentUpdate() throws Exception { + Connection conn = getConnection(); + final Connection conn2 = getConnection(); + Statement stat = conn.createStatement(); + final Statement stat2 = conn2.createStatement(); + stat2.execute("set lock_timeout 1000"); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(0, 'Hello')"); + conn.setAutoCommit(false); + Task t = new Task() { + @Override + public void call() throws SQLException { + stat2.execute("update test set name = 'Hallo'"); + } + }; + stat.execute("update test set name = 'Hi'"); + t.execute(); + Thread.sleep(500); + conn.commit(); + t.get(); + ResultSet rs; + rs = stat.executeQuery("select name from test"); + rs.next(); + assertEquals("Hallo", rs.getString(1)); + stat.execute("drop table test"); + conn2.close(); + conn.close(); + } + + private void testSelectForUpdate() throws SQLException { + Connection conn = getConnection("mvcc2;SELECT_FOR_UPDATE_MVCC=true"); + Connection conn2 = getConnection("mvcc2;SELECT_FOR_UPDATE_MVCC=true"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + conn.setAutoCommit(false); + stat.execute("insert into test select x, 'Hello' from system_range(1, 10)"); + stat.execute("select * from test where id = 3 for update"); + conn.commit(); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat). + execute("select sum(id) from test for update"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat). + execute("select distinct id from test for update"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat). 
+ execute("select id from test group by id for update"); + assertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1, stat). + execute("select t1.id from test t1, test t2 for update"); + stat.execute("select * from test where id = 3 for update"); + conn2.setAutoCommit(false); + conn2.createStatement().execute("select * from test where id = 4 for update"); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, conn2.createStatement()). + execute("select * from test where id = 3 for update"); + conn.close(); + conn2.close(); + } + + private void testInsertUpdateRollback() throws SQLException { + Connection conn = getConnection(); + conn.setAutoCommit(false); + Statement stmt = conn.createStatement(); + stmt.execute(DROP_TABLE); + stmt.execute(CREATE_TABLE); + conn.commit(); + stmt.execute(INSERT); + stmt.execute(UPDATE); + conn.rollback(); + conn.close(); + } + + private void testInsertRollback() throws SQLException { + Connection conn = getConnection(); + conn.setAutoCommit(false); + Statement stmt = conn.createStatement(); + stmt.execute(DROP_TABLE); + stmt.execute(CREATE_TABLE); + conn.commit(); + stmt.execute(INSERT); + conn.rollback(); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc3.java b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc3.java new file mode 100644 index 0000000000000..102ad93dee1e1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc3.java @@ -0,0 +1,265 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.mvcc; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Savepoint; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Additional MVCC (multi version concurrency) test cases. + */ +public class TestMvcc3 extends TestBase { + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.mvcc = true; + test.test(); + } + + @Override + public void test() throws SQLException { + testFailedUpdate(); + testConcurrentUpdate(); + testInsertUpdateRollback(); + testCreateTableAsSelect(); + testSequence(); + testDisableAutoCommit(); + testRollback(); + deleteDb("mvcc3"); + } + + private void testFailedUpdate() throws SQLException { + deleteDb("mvcc3"); + Connection conn = getConnection("mvcc3"); + conn.setAutoCommit(false); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, a int unique, b int)"); + stat.execute("insert into test values(1, 1, 1)"); + stat.execute("insert into test values(2, 2, 2)"); + assertThrows(ErrorCode.DUPLICATE_KEY_1, stat). + execute("update test set a = 1 where id = 2"); + ResultSet rs; + rs = stat.executeQuery("select * from test where id = 2"); + assertTrue(rs.next()); + rs = stat.executeQuery("select * from test where a = 2"); + assertTrue(rs.next()); + rs = stat.executeQuery("select * from test where b = 2"); + assertTrue(rs.next()); + conn.close(); + } + + private void testConcurrentUpdate() throws SQLException { + if (!config.mvcc) { + return; + } + deleteDb("mvcc3"); + Connection c1 = getConnection("mvcc3"); + c1.setAutoCommit(false); + Statement s1 = c1.createStatement(); + Connection c2 = getConnection("mvcc3"); + c2.setAutoCommit(false); + Statement s2 = c2.createStatement(); + + s1.execute("create table test(id int primary key, name varchar) as " + + "select x, x from system_range(1, 2)"); + s1.execute("create unique index on test(name)"); + s1.executeUpdate("update test set name = 100 where id = 1"); + + assertThrows(SQLException.class, s2).executeUpdate("update test set name = 100 where id = 2"); + + ResultSet rs = s1.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + 
assertEquals("100", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertEquals("2", rs.getString(2)); + assertFalse(rs.next()); + rs = s2.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("1", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertEquals("2", rs.getString(2)); + assertFalse(rs.next()); + c1.close(); + c2.close(); + } + + private void testInsertUpdateRollback() throws SQLException { + if (!config.mvcc) { + return; + } + + deleteDb("mvcc3"); + Connection c1 = getConnection("mvcc3"); + Statement s1 = c1.createStatement(); + Connection c2 = getConnection("mvcc3"); + Statement s2 = c2.createStatement(); + + s1.execute("create table test(id int primary key, name varchar) " + + "as select 0, 'Hello'"); + c1.setAutoCommit(false); + s1.executeUpdate("update test set name = 'World'"); + printRows("after update", s1, s2); + Savepoint sp1 = c1.setSavepoint(); + s1.executeUpdate("delete from test"); + printRows("after delete", s1, s2); + c1.rollback(sp1); + printRows("after rollback delete", s1, s2); + c1.rollback(); + printRows("after rollback all", s1, s2); + + ResultSet rs = s2.executeQuery("select * from test"); + assertTrue(rs.next()); + assertFalse(rs.next()); + c1.close(); + c2.close(); + } + + private void printRows(String s, Statement s1, Statement s2) + throws SQLException { + trace(s); + ResultSet rs; + rs = s1.executeQuery("select * from test"); + while (rs.next()) { + trace("s1: " + rs.getString(2)); + } + rs = s2.executeQuery("select * from test"); + while (rs.next()) { + trace("s2: " + rs.getString(2)); + } + } + + private void testCreateTableAsSelect() throws SQLException { + if (!config.mvcc) { + return; + } + deleteDb("mvcc3"); + Connection c1 = getConnection("mvcc3"); + Statement s1 = c1.createStatement(); + s1.execute("CREATE TABLE TEST AS SELECT X ID, 'Hello' NAME " + + "FROM SYSTEM_RANGE(1, 3)"); + Connection 
c2 = getConnection("mvcc3"); + Statement s2 = c2.createStatement(); + ResultSet rs = s2.executeQuery("SELECT NAME FROM TEST WHERE ID=1"); + rs.next(); + assertEquals("Hello", rs.getString(1)); + c1.close(); + c2.close(); + } + + private void testRollback() throws SQLException { + if (!config.mvcc) { + return; + } + + deleteDb("mvcc3"); + Connection conn = getConnection("mvcc3"); + Statement stat = conn.createStatement(); + stat.executeUpdate("DROP TABLE IF EXISTS TEST"); + stat.executeUpdate("CREATE TABLE TEST (ID NUMBER(2) PRIMARY KEY, " + + "VAL VARCHAR(10))"); + stat.executeUpdate("INSERT INTO TEST (ID, VAL) VALUES (1, 'Value')"); + stat.executeUpdate("INSERT INTO TEST (ID, VAL) VALUES (2, 'Value')"); + if (!config.memory) { + conn.close(); + conn = getConnection("mvcc3"); + } + conn.setAutoCommit(false); + conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); + + Connection conn2 = getConnection("mvcc3"); + conn2.setAutoCommit(false); + conn2.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); + + conn.createStatement().executeUpdate( + "UPDATE TEST SET VAL='Updated' WHERE ID = 1"); + conn.rollback(); + + ResultSet rs = conn2.createStatement().executeQuery( + "SELECT * FROM TEST"); + assertTrue(rs.next()); + assertEquals("Value", rs.getString(2)); + assertTrue(rs.next()); + assertEquals("Value", rs.getString(2)); + assertFalse(rs.next()); + + conn.createStatement().executeUpdate( + "UPDATE TEST SET VAL='Updated' WHERE ID = 1"); + conn.commit(); + rs = conn2.createStatement().executeQuery( + "SELECT * FROM TEST ORDER BY ID"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Updated", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertEquals("Value", rs.getString(2)); + assertFalse(rs.next()); + + conn.close(); + conn2.close(); + } + + private void testDisableAutoCommit() throws SQLException { + if (!config.mvcc) { + return; + } + deleteDb("mvcc3"); + Connection conn = 
getConnection("mvcc3"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)"); + stat.execute("INSERT INTO TEST VALUES(0)"); + conn.setAutoCommit(false); + stat.execute("INSERT INTO TEST VALUES(1)"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(0, rs.getInt(1)); + rs.next(); + assertEquals(1, rs.getInt(1)); + conn.close(); + } + + private void testSequence() throws SQLException { + if (config.memory) { + return; + } + + deleteDb("mvcc3"); + Connection conn; + ResultSet rs; + + conn = getConnection("mvcc3"); + conn.createStatement().execute("create sequence abc"); + conn.close(); + + conn = getConnection("mvcc3"); + rs = conn.createStatement().executeQuery("call abc.nextval"); + rs.next(); + assertEquals(1, rs.getInt(1)); + conn.close(); + + conn = getConnection("mvcc3"); + rs = conn.createStatement().executeQuery("call abc.currval"); + rs.next(); + assertEquals(1, rs.getInt(1)); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc4.java b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc4.java new file mode 100644 index 0000000000000..5c0fd71c462d5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvcc4.java @@ -0,0 +1,159 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.mvcc; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Timestamp; +import java.util.Map; +import java.util.concurrent.CountDownLatch; +import org.h2.test.TestBase; + +/** + * Additional MVCC (multi version concurrency) test cases. + */ +public class TestMvcc4 extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.mvcc = true; + test.config.lockTimeout = 20000; + test.config.memory = true; + test.test(); + } + + @Override + public void test() throws SQLException { + if (config.networked) { + return; + } + testSelectForUpdateAndUpdateConcurrency(); + } + + private void testSelectForUpdateAndUpdateConcurrency() throws SQLException { + Connection setup = getConnection("mvcc4"); + setup.setAutoCommit(false); + + { + Statement s = setup.createStatement(); + s.executeUpdate("CREATE TABLE test (" + + "entity_id VARCHAR(100) NOT NULL PRIMARY KEY, " + + "lastUpdated TIMESTAMP NOT NULL)"); + + PreparedStatement ps = setup.prepareStatement( + "INSERT INTO test (entity_id, lastUpdated) VALUES (?, ?)"); + for (int i = 0; i < 2; i++) { + String id = "" + i; + ps.setString(1, id); + ps.setTimestamp(2, new Timestamp(System.currentTimeMillis())); + ps.executeUpdate(); + } + setup.commit(); + } + + //Create a connection from thread 1 + Connection c1 = getConnection("mvcc4;LOCK_TIMEOUT=10000"); + c1.setAutoCommit(false); + + //Fire off a concurrent update. + final Thread mainThread = Thread.currentThread(); + final CountDownLatch executedUpdate = new CountDownLatch(1); + new Thread() { + @Override + public void run() { + try { + Connection c2 = getConnection("mvcc4"); + c2.setAutoCommit(false); + + PreparedStatement ps = c2.prepareStatement( + "SELECT * FROM test WHERE entity_id = ? FOR UPDATE"); + ps.setString(1, "1"); + ps.executeQuery().next(); + + executedUpdate.countDown(); + waitForThreadToBlockOnDB(mainThread); + + c2.commit(); + c2.close(); + } catch (SQLException e) { + e.printStackTrace(); + } + } + }.start(); + + //Wait until the concurrent update has executed, but not yet committed + try { + executedUpdate.await(); + } catch (InterruptedException e) { + // ignore + } + + // Execute an update. This should initially fail, and enter the waiting + // for lock case. 
+ PreparedStatement ps = c1.prepareStatement("UPDATE test SET lastUpdated = ?"); + ps.setTimestamp(1, new Timestamp(System.currentTimeMillis())); + ps.executeUpdate(); + + c1.commit(); + c1.close(); + + Connection verify = getConnection("mvcc4"); + + verify.setAutoCommit(false); + ps = verify.prepareStatement("SELECT COUNT(*) FROM test"); + ResultSet rs = ps.executeQuery(); + assertTrue(rs.next()); + assertTrue(rs.getInt(1) == 2); + verify.commit(); + verify.close(); + + setup.close(); + } + + /** + * Wait for the given thread to block on synchronizing on the database + * object. + * + * @param t the thread + */ + void waitForThreadToBlockOnDB(Thread t) { + while (true) { + // sleep the first time through the loop so we give the main thread + // a chance + try { + Thread.sleep(20); + } catch (InterruptedException e1) { + // ignore + } + // TODO must not use getAllStackTraces, as the method names are + // implementation details + Map<Thread, StackTraceElement[]> threadMap = Thread.getAllStackTraces(); + StackTraceElement[] elements = threadMap.get(t); + if (elements != null + && + elements.length > 1 && + (config.multiThreaded ? "sleep".equals(elements[0] + .getMethodName()) : "wait".equals(elements[0] + .getMethodName())) && + "filterConcurrentUpdate" + .equals(elements[1].getMethodName())) { + return; + } + } + } +} + + + + diff --git a/modules/h2/src/test/java/org/h2/test/mvcc/TestMvccMultiThreaded.java b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvccMultiThreaded.java new file mode 100644 index 0000000000000..c2e06c15c7bd0 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvccMultiThreaded.java @@ -0,0 +1,185 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.mvcc; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.concurrent.CountDownLatch; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Multi-threaded MVCC (multi version concurrency) test cases. + */ +public class TestMvccMultiThreaded extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testConcurrentSelectForUpdate(); + testMergeWithUniqueKeyViolation(); + // not supported currently + if (!config.multiThreaded) { + testConcurrentMerge(); + testConcurrentUpdate(); + } + } + + private void testConcurrentSelectForUpdate() throws Exception { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName() + ";MULTI_THREADED=TRUE"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int not null primary key, updated int not null)"); + stat.execute("insert into test(id, updated) values(1, 100)"); + ArrayList<Task> tasks = new ArrayList<>(); + int count = 3; + for (int i = 0; i < count; i++) { + Task task = new Task() { + @Override + public void call() throws Exception { + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + try { + while (!stop) { + try { + stat.execute("select * from test where id=1 for update"); + } catch (SQLException e) { + int errorCode = e.getErrorCode(); + assertTrue(e.getMessage(), + errorCode == ErrorCode.DEADLOCK_1 || + errorCode == ErrorCode.LOCK_TIMEOUT_1); + } + } + } finally { + conn.close(); + } + } + }.execute(); + tasks.add(task); + } + for (int i = 0; i < 10; i++) { + Thread.sleep(100); + ResultSet rs = stat.executeQuery("select * from test"); +
assertTrue(rs.next()); + } + for (Task t : tasks) { + t.get(); + } + conn.close(); + deleteDb(getTestName()); + } + + private void testMergeWithUniqueKeyViolation() throws Exception { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test(x int primary key, y int unique)"); + stat.execute("insert into test values(1, 1)"); + assertThrows(ErrorCode.DUPLICATE_KEY_1, stat). + execute("merge into test values(2, 1)"); + stat.execute("merge into test values(1, 2)"); + conn.close(); + + } + + private void testConcurrentMerge() throws Exception { + deleteDb(getTestName()); + int len = 3; + final Connection[] connList = new Connection[len]; + for (int i = 0; i < len; i++) { + Connection conn = getConnection( + getTestName() + ";MVCC=TRUE;LOCK_TIMEOUT=500"); + connList[i] = conn; + } + Connection conn = connList[0]; + conn.createStatement().execute( + "create table test(id int primary key, name varchar)"); + Task[] tasks = new Task[len]; + final boolean[] stop = { false }; + for (int i = 0; i < len; i++) { + final Connection c = connList[i]; + c.setAutoCommit(false); + tasks[i] = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + c.createStatement().execute( + "merge into test values(1, 'x')"); + c.commit(); + Thread.sleep(1); + } + } + }; + tasks[i].execute(); + } + Thread.sleep(1000); + stop[0] = true; + for (int i = 0; i < len; i++) { + tasks[i].get(); + } + for (int i = 0; i < len; i++) { + connList[i].close(); + } + deleteDb(getTestName()); + } + + private void testConcurrentUpdate() throws Exception { + deleteDb(getTestName()); + int len = 2; + final Connection[] connList = new Connection[len]; + for (int i = 0; i < len; i++) { + connList[i] = getConnection( + getTestName() + ";MVCC=TRUE"); + } + Connection conn = connList[0]; + conn.createStatement().execute( + "create table test(id int primary key, value int)"); + 
conn.createStatement().execute( + "insert into test values(0, 0)"); + final int count = 1000; + Task[] tasks = new Task[len]; + + final CountDownLatch latch = new CountDownLatch(len); + + for (int i = 0; i < len; i++) { + final int x = i; + tasks[i] = new Task() { + @Override + public void call() throws Exception { + for (int a = 0; a < count; a++) { + connList[x].createStatement().execute( + "update test set value=value+1"); + latch.countDown(); + latch.await(); + } + } + }; + tasks[i].execute(); + } + for (int i = 0; i < len; i++) { + tasks[i].get(); + } + ResultSet rs = conn.createStatement().executeQuery("select value from test"); + rs.next(); + assertEquals(count * len, rs.getInt(1)); + for (int i = 0; i < len; i++) { + connList[i].close(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/mvcc/TestMvccMultiThreaded2.java b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvccMultiThreaded2.java new file mode 100644 index 0000000000000..4dc1e49b68ee4 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/mvcc/TestMvccMultiThreaded2.java @@ -0,0 +1,180 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.mvcc; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import org.h2.jdbc.JdbcSQLException; +import org.h2.message.DbException; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; + +/** + * Additional MVCC (multi version concurrency) test cases. 
+ */ +public class TestMvccMultiThreaded2 extends TestBase { + + private static final int TEST_THREAD_COUNT = 100; + private static final int TEST_TIME_SECONDS = 60; + private static final boolean DISPLAY_STATS = false; + + private static final String URL = ";MVCC=TRUE;LOCK_TIMEOUT=120000;MULTI_THREADED=TRUE"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.mvcc = true; + test.config.lockTimeout = 120000; + test.config.memory = true; + test.config.multiThreaded = true; + test.test(); + } + + @Override + public void test() throws SQLException, InterruptedException { + testSelectForUpdateConcurrency(); + } + + private void testSelectForUpdateConcurrency() + throws SQLException, InterruptedException { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName() + URL); + conn.setAutoCommit(false); + + String sql = "CREATE TABLE test (" + + "entity_id INTEGER NOT NULL PRIMARY KEY, " + + "lastUpdated INTEGER NOT NULL)"; + + Statement stmt = conn.createStatement(); + stmt.executeUpdate(sql); + + PreparedStatement ps = conn.prepareStatement( + "INSERT INTO test (entity_id, lastUpdated) VALUES (?, ?)"); + ps.setInt(1, 1); + ps.setInt(2, 100); + ps.executeUpdate(); + ps.setInt(1, 2); + ps.setInt(2, 200); + ps.executeUpdate(); + conn.commit(); + + ArrayList<SelectForUpdate> threads = new ArrayList<>(); + for (int i = 0; i < TEST_THREAD_COUNT; i++) { + SelectForUpdate sfu = new SelectForUpdate(); + sfu.setName("Test SelectForUpdate Thread#"+i); + threads.add(sfu); + sfu.start(); + } + + // give any of the 100 threads a chance to start by yielding the processor to them + Thread.yield(); + + // gather stats on threads after they finished + @SuppressWarnings("unused") + int minProcessed = Integer.MAX_VALUE, maxProcessed = 0, totalProcessed = 0; + + boolean allOk = true; + for (SelectForUpdate sfu : threads) { + // make sure all threads have stopped
by joining with them + sfu.join(); + allOk &= sfu.ok; + totalProcessed += sfu.iterationsProcessed; + if (sfu.iterationsProcessed > maxProcessed) { + maxProcessed = sfu.iterationsProcessed; + } + if (sfu.iterationsProcessed < minProcessed) { + minProcessed = sfu.iterationsProcessed; + } + } + + if (DISPLAY_STATS) { + System.out.println(String.format( + "+ INFO: TestMvccMultiThreaded2 RUN STATS threads=%d, minProcessed=%d, maxProcessed=%d, " + + "totalProcessed=%d, averagePerThread=%d, averagePerThreadPerSecond=%d\n", + TEST_THREAD_COUNT, minProcessed, maxProcessed, totalProcessed, totalProcessed / TEST_THREAD_COUNT, + totalProcessed / (TEST_THREAD_COUNT * TEST_TIME_SECONDS))); + } + + IOUtils.closeSilently(conn); + deleteDb(getTestName()); + + assertTrue(allOk); + } + + /** + * Worker test thread selecting for update + */ + private class SelectForUpdate extends Thread { + + public int iterationsProcessed; + + public boolean ok; + + SelectForUpdate() { + } + + @Override + public void run() { + final long start = System.currentTimeMillis(); + boolean done = false; + Connection conn = null; + try { + conn = getConnection(getTestName() + URL); + conn.setAutoCommit(false); + + // give the other threads a chance to start up before going into our work loop + Thread.yield(); + + while (!done) { + try { + PreparedStatement ps = conn.prepareStatement( + "SELECT * FROM test WHERE entity_id = ? 
FOR UPDATE"); + String id; + int value; + if ((iterationsProcessed & 1) == 0) { + id = "1"; + value = 100; + } else { + id = "2"; + value = 200; + } + ps.setString(1, id); + ResultSet rs = ps.executeQuery(); + + assertTrue(rs.next()); + assertTrue(rs.getInt(2) == value); + + conn.commit(); + iterationsProcessed++; + + long now = System.currentTimeMillis(); + if (now - start > 1000 * TEST_TIME_SECONDS) { + done = true; + } + } catch (JdbcSQLException e1) { + throw e1; + } + } + } catch (SQLException e) { + TestBase.logError("SQL error from thread "+getName(), e); + throw DbException.convert(e); + } catch (Exception e) { + TestBase.logError("General error from thread "+getName(), e); + throw e; + } + IOUtils.closeSilently(conn); + ok = true; + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/mvcc/package.html b/modules/h2/src/test/java/org/h2/test/mvcc/package.html new file mode 100644 index 0000000000000..5ffaa63922265 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/mvcc/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Multi version concurrency tests. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/otherDatabases.txt b/modules/h2/src/test/java/org/h2/test/otherDatabases.txt new file mode 100644 index 0000000000000..9232ea30be992 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/otherDatabases.txt @@ -0,0 +1,75 @@ +MySQL +-------------------------------------------------------------------------------------------------------- +Start: +sudo mysqld_safe --default-storage-engine=innodb +sudo mysqld_safe --default-storage-engine=innodb --wait_timeout=10 + +Stop: +sudo mysqladmin shutdown + +Configuration: +sudo cp /usr/local/mysql/support-files/my-medium.cnf /etc/my.cnf +sudo pico /etc/my.cnf +innodb_flush_log_at_trx_commit = 0 + +Initialization: +sudo mysql +create database test; +create user 'sa'@'localhost' identified by 'sa'; +use test; +grant all on * to 'sa'@'localhost' with grant option; + +'TRADITIONAL' is default; ANSI mode can be set using: +SET GLOBAL sql_mode='ANSI'; +SELECT @@global.sql_mode; + +Non-standard escape mechanism: +select 'Joe''s', 'Joe\'s'; + +Compare with NULL problem: +drop table test; +create table test(id int); +insert into test values(1); +insert into test values(null); +-- 2 rows even in ANSI mode (correct is 1 row): +select * from test where id=id and 1=1; + + +MS SQL Server 2005 +-------------------------------------------------------------------------------------------------------- +Problems when trying to select large objects (even if ResultSet.getBinaryStream is used). 
+The workaround responseBuffering=adaptive doesn't always seem to work +(jdbc:sqlserver://localhost:4220;DatabaseName=test;responseBuffering=adaptive) + + +PostgreSQL +-------------------------------------------------------------------------------------------------------- + +Non-standard escape mechanism: +select 'Joe''s', 'Joe\'s'; + +Configuration: +Mac OS: +/Library/PostgreSQL/8.4/data/postgresql.conf +fsync = off +commit_delay = 1000 + + +HSQLDB +-------------------------------------------------------------------------------------------------------- +To use the same default settings as H2, use: +jdbc:hsqldb:data/test;hsqldb.default_table_type=cached;sql.enforce_size=true +Also, you need to execute the following statement: +SET WRITE_DELAY 1 +No optimization for COUNT(*) + + +Derby +-------------------------------------------------------------------------------------------------------- +To call getFD().sync() (which results in the OS call fsync()), +set the system property derby.storage.fileSyncTransactionLog to true. +See +http://db.apache.org/derby/javadoc/engine/org/apache/derby/iapi/reference/Property.html#FILESYNC_TRANSACTION_LOG +Missing features: +LIMIT OFFSET is not supported. +No optimization for COUNT(*) diff --git a/modules/h2/src/test/java/org/h2/test/package.html b/modules/h2/src/test/java/org/h2/test/package.html new file mode 100644 index 0000000000000..41299cbbf9e79 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +High level test classes. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/poweroff/Listener.java b/modules/h2/src/test/java/org/h2/test/poweroff/Listener.java new file mode 100644 index 0000000000000..6f7b98959451b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/poweroff/Listener.java @@ -0,0 +1,83 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.poweroff; + +import java.io.DataInputStream; +import java.io.IOException; +import java.net.ServerSocket; +import java.net.Socket; +import java.util.concurrent.TimeUnit; + +/** + * The listener application for the power off test. + * The listener runs on a computer that stays on during the whole test. + */ +public class Listener implements Runnable { + + private volatile int maxValue; + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws IOException { + new Listener().test(args); + } + + private void test(String... 
args) throws IOException { + int port = 9099; + for (int i = 0; i < args.length; i++) { + if (args[i].equals("-port")) { + port = Integer.parseInt(args[++i]); + } + } + listen(port); + } + + @Override + public void run() { + while (true) { + try { + Thread.sleep(10000); + } catch (Exception e) { + // ignore + } + System.out.println("Max=" + maxValue); + } + } + + private void listen(int port) throws IOException { + new Thread(this).start(); + ServerSocket serverSocket = new ServerSocket(port); + System.out.println("Listening on " + serverSocket.toString()); + long time; + maxValue = 0; + while (true) { + Socket socket = serverSocket.accept(); + DataInputStream in = new DataInputStream(socket.getInputStream()); + System.out.println("Connected"); + time = System.nanoTime(); + try { + while (true) { + int value = in.readInt(); + if (value < 0) { + break; + } + maxValue = Math.max(maxValue, value); + } + } catch (IOException e) { + System.out.println("Closed with Exception: " + e); + } + time = System.nanoTime() - time; + int operationsPerSecond = (int) (TimeUnit.SECONDS.toNanos(1) * maxValue / time); + System.out.println("Max=" + maxValue + + " operations/sec=" + operationsPerSecond); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/poweroff/Test.java b/modules/h2/src/test/java/org/h2/test/poweroff/Test.java new file mode 100644 index 0000000000000..9848b3a834850 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/poweroff/Test.java @@ -0,0 +1,169 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.poweroff; + +import java.io.DataOutputStream; +import java.io.File; +import java.io.FileDescriptor; +import java.io.IOException; +import java.io.RandomAccessFile; +import java.net.Socket; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +/** + * This application tests the durability / non-durability of file systems and + * databases. Two computers with network connection are required to run this + * test. Before starting this application, the Listener application must be + * started on another computer. + */ +public class Test { + + private String url; + private Connection conn; + private Statement stat; + private PreparedStatement prep; + + private Test() { + // nothing to do + } + + private Test(String driver, String url, String user, String password, + boolean writeDelay0) { + this.url = url; + try { + Class.forName(driver); + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + if (writeDelay0) { + stat.execute("SET WRITE_DELAY 0"); + } + System.out.println(url + " started"); + } catch (Exception e) { + System.out.println(url + ": " + e.toString()); + return; + } + try { + ResultSet rs = stat.executeQuery("SELECT MAX(ID) FROM TEST"); + rs.next(); + System.out.println(url + ": MAX(ID)=" + rs.getInt(1)); + stat.execute("DROP TABLE TEST"); + } catch (SQLException e) { + // ignore + } + try { + stat.execute("CREATE TABLE TEST" + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + prep = conn.prepareStatement("INSERT INTO TEST VALUES(?, ?)"); + } catch (SQLException e) { + System.out.println(url + ": " + e.toString()); + } + } + + private void insert(int id) { + try { + if (prep != null) { + prep.setInt(1, id); + prep.setString(2, "World " + id); + prep.execute(); + } + } catch (SQLException e) { + System.out.println(url + ": " + 
e.toString()); + } + } + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + int port = 9099; + String connect = "192.168.0.3"; + boolean file = false; + for (int i = 0; i < args.length; i++) { + if (args[i].equals("-port")) { + port = Integer.parseInt(args[++i]); + } else if (args[i].equals("-connect")) { + connect = args[++i]; + } else if (args[i].equals("-file")) { + file = true; + } + } + test(connect, port, file); + } + + private static void test(String connect, int port, boolean file) + throws Exception { + Socket socket = new Socket(connect, port); + DataOutputStream out = new DataOutputStream(socket.getOutputStream()); + System.out.println("Connected to " + socket.toString()); + if (file) { + testFile(out); + } else { + testDatabases(out); + } + } + + private static void testFile(DataOutputStream out) throws IOException { + File file = new File("test.txt"); + if (file.exists()) { + file.delete(); + } + RandomAccessFile write = new RandomAccessFile(file, "rws"); + // RandomAccessFile write = new RandomAccessFile(file, "rwd"); + int fileSize = 10 * 1024 * 1024; + write.seek(fileSize - 1); + write.write(0); + write.seek(0); + int i = 0; + FileDescriptor fd = write.getFD(); + while (true) { + if (write.getFilePointer() >= fileSize) { + break; + } + write.writeBytes(i + "\r\n"); + fd.sync(); + out.writeInt(i); + out.flush(); + i++; + } + write.close(); + } + + private static void testDatabases(DataOutputStream out) throws Exception { + Test[] dbs = { + new Test("org.h2.Driver", + "jdbc:h2:test1", "sa", "", true), + new Test("org.h2.Driver", + "jdbc:h2:test2", "sa", "", false), + new Test("org.hsqldb.jdbcDriver", + "jdbc:hsqldb:test4", "sa", "", false), + // new Test("com.mysql.jdbc.Driver", + // "jdbc:mysql://localhost/test", "sa", ""), + new Test("org.postgresql.Driver", + "jdbc:postgresql:test", "sa", "sa", 
false), + new Test("org.apache.derby.jdbc.EmbeddedDriver", + "jdbc:derby:test;create=true", "sa", "", false), + new Test("org.h2.Driver", + "jdbc:h2:test5", "sa", "", true), + new Test("org.h2.Driver", + "jdbc:h2:test6", "sa", "", false), }; + for (int i = 0;; i++) { + for (Test t : dbs) { + t.insert(i); + } + out.writeInt(i); + out.flush(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/poweroff/TestRecover.java b/modules/h2/src/test/java/org/h2/test/poweroff/TestRecover.java new file mode 100644 index 0000000000000..f94fe4b8964bd --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/poweroff/TestRecover.java @@ -0,0 +1,375 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.poweroff; + +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.PrintWriter; +import java.security.SecureRandom; +import java.sql.Connection; +import java.sql.Driver; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Date; +import java.util.List; +import java.util.Random; +import java.util.zip.ZipEntry; +import java.util.zip.ZipOutputStream; +import org.h2.util.IOUtils; +import org.h2.util.New; + +/** + * This standalone test checks if recovery of a database works after power + * failure. 
+ */ +public class TestRecover { + + private static final int MAX_STRING_LENGTH = 10000; + private static final String NODE = System.getProperty("test.node", ""); + private static final String DIR = System.getProperty("test.dir", "/target/temp/db"); + + private static final String TEST_DIRECTORY = DIR + "/data" + NODE; + private static final String BACKUP_DIRECTORY = DIR + "/last"; + private static final String URL = System.getProperty( + "test.url", "jdbc:h2:" + TEST_DIRECTORY + "/test"); + private static final String DRIVER = System.getProperty( + "test.driver", "org.h2.Driver"); + + // private static final String DIR = + // System.getProperty("test.dir", "/temp/derby"); + // private static final String URL = + // System.getProperty("test.url", + // "jdbc:derby:/temp/derby/data/test;create=true"); + // private static final String DRIVER = + // System.getProperty("test.driver", + // "org.apache.derby.jdbc.EmbeddedDriver"); + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + System.out.println("URL=" + URL); + System.out.println("backup..."); + new File(TEST_DIRECTORY).mkdirs(); + File backup = backup(TEST_DIRECTORY, + BACKUP_DIRECTORY, "data", 10, NODE); + System.out.println("check consistency..."); + if (!testConsistency()) { + System.out.println("error! 
renaming file"); + backup.renameTo(new File(backup.getParentFile(), + "error-" + backup.getName())); + } + System.out.println("deleting old run..."); + deleteRecursive(new File(TEST_DIRECTORY)); + System.out.println("testing..."); + testLoop(); + } + + private static File backup(String sourcePath, String targetPath, + String basePath, int max, String node) throws IOException { + File root = new File(targetPath); + if (!root.exists()) { + root.mkdirs(); + } + while (true) { + File oldest = null; + int count = 0; + for (File f : root.listFiles()) { + String name = f.getName(); + if (f.isFile() && name.startsWith("backup") && name.endsWith(".zip")) { + count++; + if (oldest == null || f.lastModified() < oldest.lastModified()) { + oldest = f; + } + } + } + if (count < max) { + break; + } + oldest.delete(); + } + SimpleDateFormat sd = new SimpleDateFormat("yyMMdd-HHmmss"); + String date = sd.format(new Date()); + File zipFile = new File(root, "backup-" + date + "-" + node + ".zip"); + ArrayList<File> list = New.arrayList(); + File base = new File(sourcePath); + listRecursive(list, base); + if (list.size() == 0) { + FileOutputStream out = new FileOutputStream(zipFile); + out.close(); + } else { + OutputStream out = null; + try { + out = new FileOutputStream(zipFile); + ZipOutputStream zipOut = new ZipOutputStream(out); + String baseName = base.getAbsolutePath(); + for (File f : list) { + String fileName = f.getAbsolutePath(); + String entryName = fileName; + if (fileName.startsWith(baseName)) { + entryName = entryName.substring(baseName.length()); + } + if (entryName.startsWith("\\")) { + entryName = entryName.substring(1); + } + if (!entryName.startsWith("/")) { + entryName = "/" + entryName; + } + ZipEntry entry = new ZipEntry(basePath + entryName); + zipOut.putNextEntry(entry); + + try (InputStream in = new FileInputStream(fileName)) { + IOUtils.copyAndCloseInput(in, zipOut); + } + zipOut.closeEntry(); + } + zipOut.closeEntry(); + zipOut.close(); + } finally { +
IOUtils.closeSilently(out); + } + } + return zipFile; + } + + private static void listRecursive(List<File> list, File file) + throws IOException { + File[] l = file.listFiles(); + for (int i = 0; l != null && i < l.length; i++) { + File f = l[i]; + if (f.isDirectory()) { + listRecursive(list, f); + } else { + list.add(f); + } + } + } + + private static void deleteRecursive(File file) throws IOException { + if (file.isDirectory()) { + for (File f : file.listFiles()) { + deleteRecursive(f); + } + } + if (file.exists() && !file.delete()) { + throw new IOException("Could not delete " + file.getAbsolutePath()); + } + } + + private static void testLoop() throws Exception { + Random random = new SecureRandom(); + while (true) { + runOneTest(random.nextInt()); + } + } + + private static Connection openConnection() throws Exception { + Class.forName(DRIVER); + Connection conn = DriverManager.getConnection(URL, "sa", "sa"); + Statement stat = conn.createStatement(); + try { + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "D INT, NAME VARCHAR("+MAX_STRING_LENGTH+"))"); + stat.execute("CREATE INDEX IDX_TEST_D ON TEST(D)"); + } catch (SQLException e) { + // ignore + } + return conn; + } + + private static void closeConnection(Connection conn) { + try { + conn.close(); + } catch (SQLException e) { + // ignore + } + if (DRIVER.startsWith("org.apache.derby")) { + try { + DriverManager.getConnection("jdbc:derby:;shutdown=true"); + } catch (SQLException e) { + // ignore + } + try { + Driver driver = (Driver) Class.forName(DRIVER).newInstance(); + DriverManager.registerDriver(driver); + } catch (Exception e) { + e.printStackTrace(); + } + } + } + + private static void runOneTest(int i) throws Exception { + Random random = new Random(i); + Connection conn = openConnection(); + PreparedStatement prepInsert = null; + PreparedStatement prepDelete = null; + conn.setAutoCommit(false); + for (int id = 0;; id++) { + boolean rollback = random.nextInt(10) == 1; + int len; + if
(random.nextInt(10) == 1) { + len = random.nextInt(100) * 2; + } else { + len = random.nextInt(2) * 2; + } + if (rollback && random.nextBoolean()) { + // make the length odd + len++; + } + // byte[] data = new byte[len]; + // random.nextBytes(data); + int op = random.nextInt(); + if (op % 1000000 == 0) { + closeConnection(conn); + conn = openConnection(); + conn.setAutoCommit(false); + prepInsert = null; + prepDelete = null; + } +// if (random.nextBoolean()) { +// int test; +// if (random.nextBoolean()) { +// conn.createStatement().execute( +// "drop index if exists idx_2"); +// conn.createStatement().execute( +// "create table if not exists test2" + +// "(id int primary key) as select x " + +// "from system_range(1, 1000)"); +// } else { +// conn.createStatement().execute( +// "create index if not exists idx_2 " + +// "on test(d, name, id)"); +// conn.createStatement().execute( +// "drop table if exists test2"); +// } +// } + if (random.nextBoolean()) { + if (prepInsert == null) { + prepInsert = conn.prepareStatement( + "INSERT INTO TEST(ID, D, NAME) VALUES(?, ?, ?)"); + } + prepInsert.setInt(1, id); + prepInsert.setInt(2, random.nextInt(10000)); + StringBuilder buff = new StringBuilder(); + buff.append(len); + switch (random.nextInt(10)) { + case 0: + len = random.nextInt(MAX_STRING_LENGTH); + break; + case 1: + case 2: + case 3: + len = random.nextInt(MAX_STRING_LENGTH / 20); + break; + default: + len = 0; + } + len -= 10; + while (len > 0) { + buff.append('-'); + len--; + } + buff.append("->"); + String s = buff.toString(); + prepInsert.setString(3, s); + prepInsert.execute(); + } else { + ResultSet rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + int count = rs.getInt(1); + rs.close(); + if (count > 1000) { + if (prepDelete == null) { + prepDelete = conn.prepareStatement("DELETE FROM TEST WHERE ROWNUM <= 4"); + } + prepDelete.execute(); + } + } + if (rollback) { + conn.rollback(); + } else { + conn.commit(); + } + } + } + + 
private static boolean testConsistency() { + PrintWriter p = null; + try { + p = new PrintWriter(new FileOutputStream(TEST_DIRECTORY + "/result.txt")); + p.println("Results"); + p.flush(); + } catch (Throwable t) { + t.printStackTrace(); + System.exit(0); + } + Connection conn = null; + try { + conn = openConnection(); + test(conn, ""); + test(conn, "ORDER BY D"); + closeConnection(conn); + return true; + } catch (Throwable t) { + t.printStackTrace(); + t.printStackTrace(p); + return false; + } finally { + if (conn != null) { + try { + closeConnection(conn); + } catch (Throwable t2) { + t2.printStackTrace(); + t2.printStackTrace(p); + } + } + if (p != null) { + p.flush(); + p.close(); + IOUtils.closeSilently(p); + } + } + } + + private static void test(Connection conn, String order) throws Exception { + ResultSet rs; + rs = conn.createStatement().executeQuery("SELECT * FROM TEST " + order); + int max = 0; + int count = 0; + while (rs.next()) { + count++; + int id = rs.getInt("ID"); + String name = rs.getString("NAME"); + if (!name.endsWith(">")) { + throw new Exception("unexpected entry " + id + " value " + name); + } + int idx = name.indexOf('-'); + if (idx < 0) { + throw new Exception("unexpected entry " + id + " value " + name); + } + int value = Integer.parseInt(name.substring(0, idx)); + if (value % 2 != 0) { + throw new Exception("unexpected odd entry " + id + " value " + value); + } + max = Math.max(max, id); + } + rs.close(); + System.out.println("max row id: " + max + " rows: " + count); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/poweroff/TestRecoverKillLoop.java b/modules/h2/src/test/java/org/h2/test/poweroff/TestRecoverKillLoop.java new file mode 100644 index 0000000000000..3b7830584754a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/poweroff/TestRecoverKillLoop.java @@ -0,0 +1,67 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.poweroff; + +import java.io.InputStream; +import java.util.Random; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.synth.OutputCatcher; + +/** + * Run the TestRecover test case in a loop. The process is killed after 10 + * seconds. + */ +public class TestRecoverKillLoop extends TestBase { + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + new TestRecoverKillLoop().runTest(Integer.MAX_VALUE); + } + + @Override + public void test() throws Exception { + runTest(3); + } + + private void runTest(int count) throws Exception { + FileUtils.deleteRecursive("target/data/db", false); + Random random = new Random(1); + for (int i = 0; i < count; i++) { + String[] procDef = { + "java", "-cp", getClassPath(), + "-Dtest.dir=target/data/db", + TestRecover.class.getName() + }; + Process p = Runtime.getRuntime().exec(procDef); + InputStream in = p.getInputStream(); + OutputCatcher catcher = new OutputCatcher(in); + catcher.start(); + while (true) { + String s = catcher.readLine(60 * 1000); + // System.out.println("> " + s); + if (s == null) { + fail("No reply from process"); + } else if (s.startsWith("testing...")) { + int sleep = random.nextInt(10000); + Thread.sleep(sleep); + printTime("killing"); + p.destroy(); + p.waitFor(); + break; + } else if (s.startsWith("error!")) { + fail("Failed: " + s); + } + } + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/poweroff/TestReorderWrites.java b/modules/h2/src/test/java/org/h2/test/poweroff/TestReorderWrites.java new file mode 100644 index 0000000000000..e00faacb23528 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/poweroff/TestReorderWrites.java @@ -0,0 +1,201 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.poweroff; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.util.Arrays; +import java.util.Map; +import java.util.Random; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.MVStoreTool; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.FilePathReorderWrites; + +/** + * Tests that the MVStore recovers from a power failure if the file system or + * disk re-ordered the write operations. + */ +public class TestReorderWrites extends TestBase { + + private static final boolean LOG = false; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testMVStore(); + testFileSystem(); + } + + private void testMVStore() { + FilePathReorderWrites fs = FilePathReorderWrites.register(); + String fileName = "reorder:memFS:test.mv"; + try { + for (int i = 0; i < 1000; i++) { + log(i + " --------------------------------"); + // this test is not interested in power off failures during + // initial creation + fs.setPowerOffCountdown(0, 0); + // release the static data this test generates + FileUtils.delete("memFS:test.mv"); + FileUtils.delete("memFS:test.mv.copy"); + MVStore store = new MVStore.Builder(). + fileName(fileName). 
+ autoCommitDisabled().open(); + // store.setRetentionTime(10); + Map<Integer, byte[]> map = store.openMap("data"); + map.put(-1, new byte[1]); + store.commit(); + store.getFileStore().sync(); + Random r = new Random(i); + int stop = 4 + r.nextInt(20); + log("countdown start"); + fs.setPowerOffCountdown(stop, i); + try { + for (int j = 1; j < 100; j++) { + Map<Integer, Integer> newMap = store.openMap("d" + j); + newMap.put(j, j * 10); + int key = r.nextInt(10); + int len = 10 * r.nextInt(1000); + if (r.nextBoolean()) { + map.remove(key); + } else { + map.put(key, new byte[len]); + } + log("op " + j + ": "); + store.commit(); + switch (r.nextInt(10)) { + case 0: + log("op compact"); + store.compact(100, 10 * 1024); + break; + case 1: + log("op compactMoveChunks"); + store.compactMoveChunks(); + log("op compactMoveChunks done"); + break; + } + } + // write has to fail at some point + fail(); + } catch (IllegalStateException e) { + log("stop " + e + ", cause: " + e.getCause()); + // expected + } + try { + store.close(); + } catch (IllegalStateException e) { + // expected + store.closeImmediately(); + } + log("verify"); + fs.setPowerOffCountdown(100, 0); + if (LOG) { + MVStoreTool.dump(fileName, true); + } + store = new MVStore.Builder(). + fileName(fileName). 
+ autoCommitDisabled().open(); + map = store.openMap("data"); + if (!map.containsKey(-1)) { + fail("key not found, size=" + map.size() + " i=" + i); + } else { + assertEquals("i=" + i, 1, map.get(-1).length); + } + for (int j = 0; j < 100; j++) { + Map newMap = store.openMap("d" + j); + newMap.get(j); + } + map.keySet(); + store.close(); + } + } finally { + // release the static data this test generates + FileUtils.delete("memFS:test.mv"); + FileUtils.delete("memFS:test.mv.copy"); + } + } + + private static void log(String message) { + if (LOG) { + System.out.println(message); + } + } + + private void testFileSystem() throws IOException { + FilePathReorderWrites fs = FilePathReorderWrites.register(); + // disable this for now, still bug(s) in our code + FilePathReorderWrites.setPartialWrites(false); + String fileName = "reorder:memFS:test"; + final ByteBuffer empty = ByteBuffer.allocate(1024); + Random r = new Random(1); + long minSize = Long.MAX_VALUE; + long maxSize = 0; + int minWritten = Integer.MAX_VALUE; + int maxWritten = 0; + for (int i = 0; i < 100; i++) { + fs.setPowerOffCountdown(100, i); + FileUtils.delete(fileName); + FileChannel fc = FilePath.get(fileName).open("rw"); + for (int j = 0; j < 20; j++) { + fc.write(empty, j * 1024); + empty.flip(); + } + fs.setPowerOffCountdown(4 + r.nextInt(20), i); + int lastWritten = 0; + int lastTruncated = 0; + for (int j = 20; j >= 0; j--) { + try { + byte[] bytes = new byte[1024]; + Arrays.fill(bytes, (byte) j); + ByteBuffer data = ByteBuffer.wrap(bytes); + fc.write(data, 0); + lastWritten = j; + } catch (IOException e) { + // expected + break; + } + try { + fc.truncate(j * 1024); + lastTruncated = j * 1024; + } catch (IOException e) { + // expected + break; + } + } + if (lastTruncated <= 0 || lastWritten <= 0) { + fail(); + } + fs.setPowerOffCountdown(100, 0); + fc = FilePath.get(fileName).open("rw"); + ByteBuffer data = ByteBuffer.allocate(1024); + fc.read(data, 0); + data.flip(); + int got = data.get(); + long 
size = fc.size(); + minSize = Math.min(minSize, size); + maxSize = Math.max(maxSize, size); + minWritten = Math.min(minWritten, got); + maxWritten = Math.max(maxWritten, got); + } + assertTrue(minSize < maxSize); + assertTrue(minWritten < maxWritten); + // release the static data this test generates + FileUtils.delete(fileName); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/poweroff/TestWrite.java b/modules/h2/src/test/java/org/h2/test/poweroff/TestWrite.java new file mode 100644 index 0000000000000..f88f92cce16aa --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/poweroff/TestWrite.java @@ -0,0 +1,134 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.poweroff; + +import java.io.File; +import java.io.FileDescriptor; +import java.io.RandomAccessFile; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +/** + * This test shows the raw file access performance using various file modes. + * It also tests databases. + */ +public class TestWrite { + + private TestWrite() { + // utility class + } + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + testFile("rw", false); + testFile("rwd", false); + testFile("rws", false); + testFile("rw", true); + testFile("rwd", true); + testFile("rws", true); + testDatabase("org.h2.Driver", + "jdbc:h2:test", "sa", ""); + testDatabase("org.hsqldb.jdbcDriver", + "jdbc:hsqldb:test4", "sa", ""); + testDatabase("org.apache.derby.jdbc.EmbeddedDriver", + "jdbc:derby:test;create=true", "sa", ""); + testDatabase("com.mysql.jdbc.Driver", + "jdbc:mysql://localhost/test", "sa", "sa"); + testDatabase("org.postgresql.Driver", + "jdbc:postgresql:test", "sa", "sa"); + } + + private static void testFile(String mode, boolean flush) throws Exception { + System.out.println("Testing RandomAccessFile(.., \"" + mode + "\")..."); + if (flush) { + System.out.println(" with FileDescriptor.sync()"); + } + RandomAccessFile file = new RandomAccessFile("test.txt", mode); + file.setLength(0); + FileDescriptor fd = file.getFD(); + long start = System.nanoTime(); + byte[] data = { 0 }; + file.write(data); + int i = 0; + if (flush) { + for (;; i++) { + file.seek(0); + file.write(data); + fd.sync(); + if ((i & 15) == 0) { + long time = System.nanoTime() - start; + if (time > TimeUnit.SECONDS.toNanos(5)) { + break; + } + } + } + } else { + for (;; i++) { + file.seek(0); + file.write(data); + if ((i & 1023) == 0) { + long time = System.nanoTime() - start; + if (time > TimeUnit.SECONDS.toNanos(5)) { + break; + } + } + } + } + long time = System.nanoTime() - start; + System.out.println("Time: " + TimeUnit.NANOSECONDS.toMillis(time)); + System.out.println("Operations: " + i); + System.out.println("Operations/second: " + (i * TimeUnit.SECONDS.toNanos(1) / time)); + System.out.println(); + file.close(); + new File("test.txt").delete(); + } + + private static void testDatabase(String driver, String url, String user, + String password) throws Exception { + Class.forName(driver); + Connection conn = DriverManager.getConnection(url, user, password); + System.out.println("Testing Database, 
URL=" + url); + Statement stat = conn.createStatement(); + try { + stat.execute("DROP TABLE TEST"); + } catch (SQLException e) { + // ignore + } + stat.execute("CREATE TABLE TEST(ID INT)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?)"); + long start = System.nanoTime(); + int i = 0; + for (;; i++) { + prep.setInt(1, i); + // autocommit is on by default, so this commits as well + prep.execute(); + if ((i & 15) == 0) { + long time = System.nanoTime() - start; + if (time > TimeUnit.SECONDS.toNanos(5)) { + break; + } + } + } + long time = System.nanoTime() - start; + System.out.println("Time: " + TimeUnit.NANOSECONDS.toMillis(time)); + System.out.println("Operations: " + i); + System.out.println("Operations/second: " + (i * TimeUnit.SECONDS.toNanos(1) / time)); + System.out.println(); + stat.execute("DROP TABLE TEST"); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/poweroff/package.html b/modules/h2/src/test/java/org/h2/test/poweroff/package.html new file mode 100644 index 0000000000000..5ffaa63922265 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/poweroff/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Power off tests. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/recover/RecoverLobTest.java b/modules/h2/src/test/java/org/h2/test/recover/RecoverLobTest.java new file mode 100644 index 0000000000000..473b3b0b52a2e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/recover/RecoverLobTest.java @@ -0,0 +1,82 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.recover; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.Statement; +import org.h2.test.TestBase; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Recover; + +/** + * Tests BLOB/CLOB recovery. + */ +public class RecoverLobTest extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public TestBase init() throws Exception { + TestBase tb = super.init(); + config.mvStore = false; + return tb; + } + + @Override + public void test() throws Exception { + if (config.mvStore || config.memory) { + return; + } + testRecoverClob(); + } + + private void testRecoverClob() throws Exception { + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, data clob)"); + stat.execute("insert into test values(1, space(10000))"); + stat.execute("insert into test values(2, space(20000))"); + stat.execute("insert into test values(3, space(30000))"); + stat.execute("insert into test values(4, space(40000))"); + stat.execute("insert into test values(5, space(50000))"); + stat.execute("insert into test values(6, space(60000))"); + stat.execute("insert into test values(7, space(70000))"); + stat.execute("insert into test values(8, space(80000))"); + + conn.close(); + 
Recover.main("-dir", getBaseDir(), "-db", "recovery"); + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + conn = getConnection( + "recovery;init=runscript from '" + + getBaseDir() + "/recovery.h2.sql'"); + stat = conn.createStatement(); + + ResultSet rs = stat.executeQuery("select * from test"); + while(rs.next()){ + + int id = rs.getInt(1); + String data = rs.getString(2); + + assertTrue(data != null); + assertTrue(data.length() == 10000 * id); + + } + rs.close(); + conn.close(); + } + + + +} diff --git a/modules/h2/src/test/java/org/h2/test/recover/package.html b/modules/h2/src/test/java/org/h2/test/recover/package.html new file mode 100644 index 0000000000000..d16979fd81a6b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/recover/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Recovery tests. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/rowlock/TestRowLocks.java b/modules/h2/src/test/java/org/h2/test/rowlock/TestRowLocks.java new file mode 100644 index 0000000000000..00a1a43239162 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/rowlock/TestRowLocks.java @@ -0,0 +1,114 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.rowlock; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Row level locking tests. + */ +public class TestRowLocks extends TestBase { + + /** + * The statements used in this test. + */ + Statement s1, s2; + + private Connection c1, c2; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testSetMode(); + testCases(); + deleteDb(getTestName()); + } + + private void testSetMode() throws SQLException { + deleteDb(getTestName()); + c1 = getConnection(getTestName()); + Statement stat = c1.createStatement(); + stat.execute("SET LOCK_MODE 2"); + ResultSet rs = stat.executeQuery("call lock_mode()"); + rs.next(); + assertEquals("2", rs.getString(1)); + c1.close(); + } + + private void testCases() throws Exception { + deleteDb(getTestName()); + c1 = getConnection(getTestName() + ";MVCC=TRUE"); + s1 = c1.createStatement(); + s1.execute("SET LOCK_TIMEOUT 10000"); + s1.execute("CREATE TABLE TEST AS " + + "SELECT X ID, 'Hello' NAME FROM SYSTEM_RANGE(1, 3)"); + c1.commit(); + c1.setAutoCommit(false); + s1.execute("UPDATE TEST SET NAME='Hallo' WHERE ID=1"); + + c2 = getConnection(getTestName()); + c2.setAutoCommit(false); + s2 = c2.createStatement(); + + assertEquals("Hallo", getSingleValue(s1, + "SELECT NAME FROM TEST WHERE ID=1")); + assertEquals("Hello", getSingleValue(s2, + "SELECT NAME FROM TEST WHERE ID=1")); + + s2.execute("UPDATE TEST SET NAME='Hallo' WHERE ID=2"); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, s2). 
+ executeUpdate("UPDATE TEST SET NAME='Hi' WHERE ID=1"); + c1.commit(); + c2.commit(); + + assertEquals("Hallo", getSingleValue(s1, + "SELECT NAME FROM TEST WHERE ID=1")); + assertEquals("Hallo", getSingleValue(s2, + "SELECT NAME FROM TEST WHERE ID=1")); + + s2.execute("UPDATE TEST SET NAME='H1' WHERE ID=1"); + Task task = new Task() { + @Override + public void call() throws SQLException { + s1.execute("UPDATE TEST SET NAME='H2' WHERE ID=1"); + } + }; + task.execute(); + Thread.sleep(100); + c2.commit(); + task.get(); + c1.commit(); + assertEquals("H2", getSingleValue(s1, + "SELECT NAME FROM TEST WHERE ID=1")); + assertEquals("H2", getSingleValue(s2, + "SELECT NAME FROM TEST WHERE ID=1")); + + c1.close(); + c2.close(); + } + + private static String getSingleValue(Statement stat, String sql) + throws SQLException { + ResultSet rs = stat.executeQuery(sql); + return rs.next() ? rs.getString(1) : null; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/rowlock/package.html b/modules/h2/src/test/java/org/h2/test/rowlock/package.html new file mode 100644 index 0000000000000..75f29502a7e1a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/rowlock/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Row level locking tests. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/scripts/TestScript.java b/modules/h2/src/test/java/org/h2/test/scripts/TestScript.java new file mode 100644 index 0000000000000..267850b8b28b3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/TestScript.java @@ -0,0 +1,507 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.scripts; + +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.io.PrintStream; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; + +import org.h2.engine.SysProperties; +import org.h2.test.TestAll; +import org.h2.test.TestBase; +import org.h2.util.New; +import org.h2.util.StringUtils; + +/** + * This test runs a SQL script file and compares the output with the expected + * output. + */ +public class TestScript extends TestBase { + + private static final String BASE_DIR = "org/h2/test/scripts/"; + + /** If set to true, the test will exit at the first failure. */ + private boolean failFast; + private final ArrayList<String> statements = New.arrayList(); + + private boolean reconnectOften; + private Connection conn; + private Statement stat; + private String fileName; + private LineNumberReader in; + private int outputLineNo; + private PrintStream out; + private final ArrayList<String[]> result = New.arrayList(); + private String putBack; + private StringBuilder errors; + + private Random random = new Random(1); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + /** + * Get all SQL statements of this file. + * + * @param conf the configuration + * @return the list of statements + */ + public ArrayList getAllStatements(TestAll conf) throws Exception { + config = conf; + if (statements.isEmpty()) { + test(); + } + return statements; + } + + @Override + public void test() throws Exception { + if (config.networked && config.big) { + return; + } + reconnectOften = !config.memory && config.big; + + testScript("testScript.sql"); + testScript("derived-column-names.sql"); + testScript("information_schema.sql"); + testScript("joins.sql"); + testScript("range_table.sql"); + testScript("altertable-index-reuse.sql"); + testScript("default-and-on_update.sql"); + testScript("query-optimisations.sql"); + String decimal2; + if (SysProperties.BIG_DECIMAL_IS_DECIMAL) { + decimal2 = "decimal_decimal"; + } else { + decimal2 = "decimal_numeric"; + } + + for (String s : new String[] { "array", "bigint", "binary", "blob", + "boolean", "char", "clob", "date", "decimal", decimal2, "double", "enum", + "geometry", "identity", "int", "other", "real", "smallint", + "time", "timestamp-with-timezone", "timestamp", "tinyint", + "uuid", "varchar", "varchar-ignorecase" }) { + testScript("datatypes/" + s + ".sql"); + } + for (String s : new String[] { "alterTableAdd", "alterTableDropColumn", "createView", "createTable", + "dropSchema" }) { + testScript("ddl/" + s + ".sql"); + } + for (String s : new String[] { "insertIgnore", "mergeUsing", "script", "with" }) { + testScript("dml/" + s + ".sql"); + } + for (String s : new String[] { "avg", "bit-and", "bit-or", "count", + "group-concat", "max", "median", "min", "selectivity", "stddev-pop", + "stddev-samp", "sum", "var-pop", "var-samp", "array-agg" }) { + testScript("functions/aggregate/" + s + ".sql"); + } + for (String s : new String[] { "abs", "acos", "asin", "atan", "atan2", + "bitand", "bitget", "bitor", "bitxor", "ceil", "compress", + 
"cos", "cosh", "cot", "decrypt", "degrees", "encrypt", "exp", + "expand", "floor", "hash", "length", "log", "mod", "pi", + "power", "radians", "rand", "random-uuid", "round", + "roundmagic", "secure-rand", "sign", "sin", "sinh", "sqrt", + "tan", "tanh", "truncate", "zero" }) { + testScript("functions/numeric/" + s + ".sql"); + } + for (String s : new String[] { "ascii", "bit-length", "char", "concat", + "concat-ws", "difference", "hextoraw", "insert", "instr", + "left", "length", "locate", "lower", "lpad", "ltrim", + "octet-length", "position", "rawtohex", "regexp-like", + "regex-replace", "repeat", "replace", "right", "rpad", "rtrim", + "soundex", "space", "stringdecode", "stringencode", + "stringtoutf8", "substring", "to-char", "translate", "trim", + "upper", "utf8tostring", "xmlattr", "xmlcdata", "xmlcomment", + "xmlnode", "xmlstartdoc", "xmltext" }) { + testScript("functions/string/" + s + ".sql"); + } + for (String s : new String[] { "array-contains", "array-get", + "array-length", "autocommit", "cancel-session", "casewhen", + "cast", "coalesce", "convert", "csvread", "csvwrite", "currval", + "database-path", "database", "decode", "disk-space-used", + "file-read", "file-write", "greatest", "h2version", "identity", + "ifnull", "least", "link-schema", "lock-mode", "lock-timeout", + "memory-free", "memory-used", "nextval", "nullif", "nvl2", + "readonly", "rownum", "schema", "scope-identity", "session-id", + "set", "table", "transaction-id", "truncate-value", "user" }) { + testScript("functions/system/" + s + ".sql"); + } + for (String s : new String[] { "add_months", "current_date", "current_timestamp", + "current-time", "dateadd", "datediff", "dayname", + "day-of-month", "day-of-week", "day-of-year", "extract", + "formatdatetime", "hour", "minute", "month", "monthname", + "parsedatetime", "quarter", "second", "truncate", "week", "year", "date_trunc" }) { + testScript("functions/timeanddate/" + s + ".sql"); + } + + deleteDb("script"); + System.out.flush(); + } + 
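Every script file listed above follows the same plain-text convention that the reader methods below consume: a statement runs until a line ending in ';', and lines beginning with ">" (or ">> " for a single value) carry the expected output. A loose, hypothetical miniature of that format, reduced to pairing statements with their expectation lines (it does not execute any SQL, and the names are illustrative, not from the patch):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ScriptFormatSketch {

    /** Pairs each statement (terminated by ';') with its ">"-prefixed expectations. */
    static List<Map.Entry<String, List<String>>> parse(List<String> lines) {
        List<Map.Entry<String, List<String>>> pairs = new ArrayList<>();
        StringBuilder stmt = new StringBuilder();
        for (String raw : lines) {
            String line = raw.trim();
            if (line.isEmpty() || line.startsWith("--")) {
                continue; // comments are echoed by the real test, skipped here
            }
            if (line.startsWith(">")) {
                // expected output belongs to the most recent statement
                pairs.get(pairs.size() - 1).getValue()
                        .add(line.replaceFirst("^>+\\s*", ""));
            } else if (line.endsWith(";")) {
                stmt.append(line, 0, line.length() - 1);
                pairs.add(new SimpleEntry<>(stmt.toString(), new ArrayList<>()));
                stmt.setLength(0);
            } else {
                stmt.append(line).append(' '); // statement continues on next line
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<String> script = List.of(
                "-- a comment",
                "CREATE TABLE T(ID INT);",
                "> ok",
                "SELECT COUNT(*)",
                "FROM T;",
                ">> 0");
        parse(script).forEach(e ->
                System.out.println(e.getKey() + " => " + e.getValue()));
    }
}
```

Keeping statements and expectations in one flat file is what lets the real test rewrite the file in place when output changes.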
+ private void testScript(String scriptFileName) throws Exception { + deleteDb("script"); + + // Reset all the state in case there is anything left over from the previous file + // we processed. + conn = null; + stat = null; + fileName = null; + in = null; + outputLineNo = 0; + out = null; + result.clear(); + putBack = null; + errors = null; + + println("Running commands in " + scriptFileName); + final String outFile = "target/test.out.txt"; + conn = getConnection("script"); + stat = conn.createStatement(); + out = new PrintStream(new FileOutputStream(outFile)); + errors = new StringBuilder(); + testFile(BASE_DIR + scriptFileName); + conn.close(); + out.close(); + if (errors.length() > 0) { + throw new Exception("errors in " + scriptFileName + " found"); + } + // new File(outFile).delete(); + } + + private String readLine() throws IOException { + if (putBack != null) { + String s = putBack; + putBack = null; + return s; + } + while (true) { + String s = in.readLine(); + if (s == null) { + return s; + } + s = s.trim(); + if (s.length() > 0) { + return s; + } + } + } + + private void testFile(String inFile) throws Exception { + InputStream is = getClass().getClassLoader().getResourceAsStream(inFile); + if (is == null) { + throw new IOException("could not find " + inFile); + } + fileName = inFile; + in = new LineNumberReader(new InputStreamReader(is, "Cp1252")); + StringBuilder buff = new StringBuilder(); + while (true) { + String sql = readLine(); + if (sql == null) { + break; + } + if (sql.startsWith("--")) { + write(sql); + } else if (sql.startsWith(">")) { + // do nothing + } else if (sql.endsWith(";")) { + write(sql); + buff.append(sql, 0, sql.length() - 1); + sql = buff.toString(); + buff = new StringBuilder(); + process(sql); + } else { + write(sql); + buff.append(sql); + buff.append('\n'); + } + } + } + + private boolean containsTempTables() throws SQLException { + ResultSet rs = conn.getMetaData().getTables(null, null, null, + new String[] { "TABLE" }); + 
while (rs.next()) { + String sql = rs.getString("SQL"); + if (sql != null) { + if (sql.contains("TEMPORARY")) { + return true; + } + } + } + return false; + } + + private void process(String sql) throws Exception { + if (reconnectOften) { + if (!containsTempTables()) { + boolean autocommit = conn.getAutoCommit(); + if (autocommit && random.nextInt(10) < 1) { + // reconnect 10% of the time + conn.close(); + conn = getConnection("script"); + conn.setAutoCommit(autocommit); + stat = conn.createStatement(); + } + } + } + statements.add(sql); + if (sql.indexOf('?') == -1) { + processStatement(sql); + } else { + String param = readLine(); + write(param); + if (!param.equals("{")) { + throw new AssertionError("expected '{', got " + param + " in " + sql); + } + try { + PreparedStatement prep = conn.prepareStatement(sql); + int count = 0; + while (true) { + param = readLine(); + write(param); + if (param.startsWith("}")) { + break; + } + count += processPrepared(sql, prep, param); + } + writeResult(sql, "update count: " + count, null); + } catch (SQLException e) { + writeException(sql, e); + } + } + write(""); + } + + private static void setParameter(PreparedStatement prep, int i, String param) + throws SQLException { + if (param.equalsIgnoreCase("null")) { + param = null; + } + prep.setString(i, param); + } + + private int processPrepared(String sql, PreparedStatement prep, String param) + throws Exception { + try { + StringBuilder buff = new StringBuilder(); + int index = 0; + for (int i = 0; i < param.length(); i++) { + char c = param.charAt(i); + if (c == ',') { + setParameter(prep, ++index, buff.toString()); + buff = new StringBuilder(); + } else if (c == '"') { + while (true) { + c = param.charAt(++i); + if (c == '"') { + break; + } + buff.append(c); + } + } else if (c > ' ') { + buff.append(c); + } + } + if (buff.length() > 0) { + setParameter(prep, ++index, buff.toString()); + } + if (prep.execute()) { + writeResultSet(sql, prep.getResultSet()); + return 0; + } + 
return prep.getUpdateCount(); + } catch (SQLException e) { + writeException(sql, e); + return 0; + } + } + + private int processStatement(String sql) throws Exception { + try { + if (stat.execute(sql)) { + writeResultSet(sql, stat.getResultSet()); + } else { + int count = stat.getUpdateCount(); + writeResult(sql, count < 1 ? "ok" : "update count: " + count, null); + } + } catch (SQLException e) { + writeException(sql, e); + } + return 0; + } + + private static String formatString(String s) { + if (s == null) { + return "null"; + } + s = StringUtils.replaceAll(s, "\r\n", "\n"); + s = s.replace('\n', ' '); + s = StringUtils.replaceAll(s, " ", " "); + while (true) { + String s2 = StringUtils.replaceAll(s, " ", " "); + if (s2.length() == s.length()) { + break; + } + s = s2; + } + return s; + } + + private void writeResultSet(String sql, ResultSet rs) throws Exception { + boolean ordered = StringUtils.toLowerEnglish(sql).contains("order by"); + ResultSetMetaData meta = rs.getMetaData(); + int len = meta.getColumnCount(); + int[] max = new int[len]; + result.clear(); + while (rs.next()) { + String[] row = new String[len]; + for (int i = 0; i < len; i++) { + String data = formatString(rs.getString(i + 1)); + if (max[i] < data.length()) { + max[i] = data.length(); + } + row[i] = data; + } + result.add(row); + } + String[] head = new String[len]; + for (int i = 0; i < len; i++) { + String label = formatString(meta.getColumnLabel(i + 1)); + if (max[i] < label.length()) { + max[i] = label.length(); + } + head[i] = label; + } + rs.close(); + String line = readLine(); + putBack = line; + if (line != null && line.startsWith(">> ")) { + switch (result.size()) { + case 0: + writeResult(sql, "", null, ">> "); + return; + case 1: + String[] row = result.get(0); + if (row.length == 1) { + writeResult(sql, row[0], null, ">> "); + } else { + writeResult(sql, "", null, ">> "); + } + return; + default: + writeResult(sql, "<" + result.size() + " rows>", null, ">> "); + return; + } + } + 
writeResult(sql, format(head, max), null); + writeResult(sql, format(null, max), null); + String[] array = new String[result.size()]; + for (int i = 0; i < result.size(); i++) { + array[i] = format(result.get(i), max); + } + if (!ordered) { + sort(array); + } + int i = 0; + for (; i < array.length; i++) { + writeResult(sql, array[i], null); + } + writeResult(sql, (ordered ? "rows (ordered): " : "rows: ") + i, null); + } + + private static String format(String[] row, int[] max) { + int length = max.length; + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < length; i++) { + if (i > 0) { + buff.append(' '); + } + if (row == null) { + for (int j = 0; j < max[i]; j++) { + buff.append('-'); + } + } else { + int len = row[i].length(); + buff.append(row[i]); + if (i < length - 1) { + for (int j = len; j < max[i]; j++) { + buff.append(' '); + } + } + } + } + return buff.toString(); + } + + private void writeException(String sql, SQLException e) throws Exception { + writeResult(sql, "exception", e); + } + + private void writeResult(String sql, String s, SQLException e) throws Exception { + writeResult(sql, s, e, "> "); + } + + private void writeResult(String sql, String s, SQLException e, String prefix) throws Exception { + assertKnownException(sql, e); + s = (prefix + s).trim(); + String compare = readLine(); + if (compare != null && compare.startsWith(">")) { + if (!compare.equals(s)) { + if (reconnectOften && sql.toUpperCase().startsWith("EXPLAIN")) { + return; + } + errors.append(fileName).append('\n'); + errors.append("line: ").append(outputLineNo).append('\n'); + errors.append("exp: ").append(compare).append('\n'); + errors.append("got: ").append(s).append('\n'); + if (e != null) { + TestBase.logError("script", e); + } + TestBase.logErrorMessage(errors.toString()); + if (failFast) { + conn.close(); + System.exit(1); + } + } + } else { + putBack = compare; + } + write(s); + } + + private void write(String s) { + outputLineNo++; + out.println(s); + } + + 
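writeResultSet() above preserves row order only when the query contains ORDER BY; otherwise the formatted rows are sorted before comparison, so that result sets differing only in row order compare equal. A small hypothetical illustration of that canonicalization idea (names are illustrative, not from the patch):

```java
import java.util.Arrays;

public class UnorderedCompareSketch {

    /** Compares two row sets; without ORDER BY, row order must not matter. */
    static boolean sameRows(String[] expected, String[] actual, boolean ordered) {
        String[] e = expected.clone();
        String[] a = actual.clone();
        if (!ordered) {
            Arrays.sort(e); // canonicalize both sides
            Arrays.sort(a);
        }
        return Arrays.equals(e, a);
    }

    public static void main(String[] args) {
        String[] want = { "a", "b" };
        String[] got = { "b", "a" };
        System.out.println(sameRows(want, got, false)); // true
        System.out.println(sameRows(want, got, true));  // false
    }
}
```

Sorting both sides to a canonical order is what makes the stored expected output stable across databases and storage engines that may return unordered rows differently.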
private static void sort(String[] a) { + for (int i = 1, j, len = a.length; i < len; i++) { + String t = a[i]; + for (j = i - 1; j >= 0 && t.compareTo(a[j]) < 0; j--) { + a[j + 1] = a[j]; + } + a[j + 1] = t; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/scripts/TestScriptSimple.java b/modules/h2/src/test/java/org/h2/test/scripts/TestScriptSimple.java new file mode 100644 index 0000000000000..97d4f39f53d8c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/TestScriptSimple.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.scripts; + +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import org.h2.test.TestBase; +import org.h2.util.ScriptReader; + +/** + * This test runs a simple SQL script file and compares the output with the + * expected output. + */ +public class TestScriptSimple extends TestBase { + + private Connection conn; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.memory || config.big || config.networked) { + return; + } + deleteDb("scriptSimple"); + reconnect(); + String inFile = "org/h2/test/scripts/testSimple.in.txt"; + InputStream is = getClass().getClassLoader().getResourceAsStream(inFile); + LineNumberReader lineReader = new LineNumberReader( + new InputStreamReader(is, "Cp1252")); + try (ScriptReader reader = new ScriptReader(lineReader)) { + while (true) { + String sql = reader.readStatement(); + if (sql == null) { + break; + } + sql = sql.trim(); + try { + if ("@reconnect".equals(sql.toLowerCase())) { + reconnect(); + } else if (sql.length() == 0) { + // ignore + } else if (sql.toLowerCase().startsWith("select")) { + ResultSet rs = conn.createStatement().executeQuery(sql); + while (rs.next()) { + String expected = reader.readStatement().trim(); + String got = "> " + rs.getString(1); + assertEquals(sql, expected, got); + } + } else { + conn.createStatement().execute(sql); + } + } catch (SQLException e) { + System.out.println(sql); + throw e; + } + } + } + conn.close(); + deleteDb("scriptSimple"); + } + + private void reconnect() throws SQLException { + if (conn != null) { + conn.close(); + } + conn = getConnection("scriptSimple"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/scripts/altertable-index-reuse.sql b/modules/h2/src/test/java/org/h2/test/scripts/altertable-index-reuse.sql new file mode 100644 index 0000000000000..77eed9b3ec017 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/altertable-index-reuse.sql @@ -0,0 +1,33 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE TABLE "domains" ("id" bigint NOT NULL auto_increment PRIMARY KEY); +> ok + +CREATE TABLE "users" ("id" bigint NOT NULL auto_increment PRIMARY KEY,"username" varchar_ignorecase(255),"domain" bigint,"desc" varchar_ignorecase(255)); +> ok + +-- adds constraint on (domain,username) and generates unique index domainusername_key_INDEX_xxx +ALTER TABLE "users" ADD CONSTRAINT "domainusername_key" UNIQUE ("domain","username"); +> ok + +-- adds foreign key on domain - if domainusername_key didn't exist it would create a unique index on domain, but it reuses the existing index +ALTER TABLE "users" ADD CONSTRAINT "udomain_fkey" FOREIGN KEY ("domain") REFERENCES "domains"("id") ON DELETE RESTRICT; +> ok + +-- now we drop domainusername_key, but domainusername_key_INDEX_xxx is used by udomain_fkey and was not being dropped +-- this was an issue, because it's a unique index and still enforces the constraint on (domain,username) +ALTER TABLE "users" DROP CONSTRAINT "domainusername_key"; +> ok + +insert into "domains" ("id") VALUES (1); +> update count: 1 + +insert into "users" ("username","domain","desc") VALUES ('test',1,'first user'); +> update count: 1 + +-- should work, because we dropped domainusername_key, but failed: Unique index or primary key violation +INSERT INTO "users" ("username","domain","desc") VALUES ('test',1,'second user'); +> update count: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/array.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/array.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/array.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/bigint.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/bigint.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/bigint.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/binary.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/binary.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/binary.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/blob.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/blob.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/blob.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/boolean.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/boolean.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/boolean.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/char.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/char.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/char.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/clob.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/clob.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/clob.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/date.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/date.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/date.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal_decimal.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal_decimal.sql new file mode 100644 index 0000000000000..0880a24572974 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal_decimal.sql @@ -0,0 +1,47 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- +-- h2.bigDecimalIsDecimal=true +-- + +create memory table orders ( orderid varchar(10), name varchar(20), customer_id varchar(10), completed numeric(1) not null, verified numeric(1) ); +> ok + +select * from information_schema.columns where table_name = 'ORDERS'; +> TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_PRECISION_RADIX NUMERIC_SCALE CHARACTER_SET_NAME COLLATION_NAME TYPE_NAME NULLABLE IS_COMPUTED SELECTIVITY CHECK_CONSTRAINT SEQUENCE_NAME REMARKS SOURCE_DATA_TYPE COLUMN_TYPE COLUMN_ON_UPDATE +> ------------- ------------ ---------- ----------- ---------------- -------------- ----------- --------- ------------------------ ---------------------- ----------------- ----------------------- ------------- ------------------ -------------- --------- -------- ----------- ----------- ---------------- ------------- ------- ---------------- ------------------- ---------------- +> SCRIPT PUBLIC ORDERS COMPLETED 4 null NO 3 1 1 1 10 0 Unicode OFF DECIMAL 0 FALSE 50 null null NUMERIC(1) NOT NULL null +> SCRIPT PUBLIC ORDERS CUSTOMER_ID 3 null YES 12 10 10 10 10 0 Unicode OFF VARCHAR 1 FALSE 50 null null VARCHAR(10) null +> SCRIPT PUBLIC ORDERS NAME 2 null YES 12 20 20 20 10 0 Unicode OFF VARCHAR 1 FALSE 50 null null VARCHAR(20) null +> SCRIPT PUBLIC ORDERS ORDERID 1 null YES 12 10 10 10 10 0 Unicode OFF VARCHAR 1 
FALSE 50 null null VARCHAR(10) null +> SCRIPT PUBLIC ORDERS VERIFIED 5 null YES 3 1 1 1 10 0 Unicode OFF DECIMAL 1 FALSE 50 null null NUMERIC(1) null +> rows: 5 + +drop table orders; +> ok + +CREATE TABLE TEST(ID INT, X1 BIT, XT TINYINT, X_SM SMALLINT, XB BIGINT, XD DECIMAL(10,2), XD2 DOUBLE PRECISION, XR REAL); +> ok + +INSERT INTO TEST VALUES(?, ?, ?, ?, ?, ?, ?, ?); +{ +0,FALSE,0,0,0,0.0,0.0,0.0 +1,TRUE,1,1,1,1.0,1.0,1.0 +4,TRUE,4,4,4,4.0,4.0,4.0 +-1,FALSE,-1,-1,-1,-1.0,-1.0,-1.0 +NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL +}; +> update count: 5 + +SELECT ID, CAST(XT AS NUMBER(10,1)), +CAST(X_SM AS NUMBER(10,1)), CAST(XB AS NUMBER(10,1)), CAST(XD AS NUMBER(10,1)), +CAST(XD2 AS NUMBER(10,1)), CAST(XR AS NUMBER(10,1)) FROM TEST; +> ID CAST(XT AS DECIMAL(10, 1)) CAST(X_SM AS DECIMAL(10, 1)) CAST(XB AS DECIMAL(10, 1)) CAST(XD AS DECIMAL(10, 1)) CAST(XD2 AS DECIMAL(10, 1)) CAST(XR AS DECIMAL(10, 1)) +> ---- -------------------------- ---------------------------- -------------------------- -------------------------- --------------------------- -------------------------- +> -1 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 +> 0 0.0 0.0 0.0 0.0 0.0 0.0 +> 1 1.0 1.0 1.0 1.0 1.0 1.0 +> 4 4.0 4.0 4.0 4.0 4.0 4.0 +> null null null null null null null +> rows: 5 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal_numeric.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal_numeric.sql new file mode 100644 index 0000000000000..7e146de3e05ba --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/decimal_numeric.sql @@ -0,0 +1,47 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- +-- h2.bigDecimalIsDecimal=false +-- + +create memory table orders ( orderid varchar(10), name varchar(20), customer_id varchar(10), completed numeric(1) not null, verified numeric(1) ); +> ok + +select * from information_schema.columns where table_name = 'ORDERS'; +> TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_PRECISION_RADIX NUMERIC_SCALE CHARACTER_SET_NAME COLLATION_NAME TYPE_NAME NULLABLE IS_COMPUTED SELECTIVITY CHECK_CONSTRAINT SEQUENCE_NAME REMARKS SOURCE_DATA_TYPE COLUMN_TYPE COLUMN_ON_UPDATE +> ------------- ------------ ---------- ----------- ---------------- -------------- ----------- --------- ------------------------ ---------------------- ----------------- ----------------------- ------------- ------------------ -------------- --------- -------- ----------- ----------- ---------------- ------------- ------- ---------------- ------------------- ---------------- +> SCRIPT PUBLIC ORDERS COMPLETED 4 null NO 2 1 1 1 10 0 Unicode OFF NUMERIC 0 FALSE 50 null null NUMERIC(1) NOT NULL null +> SCRIPT PUBLIC ORDERS CUSTOMER_ID 3 null YES 12 10 10 10 10 0 Unicode OFF VARCHAR 1 FALSE 50 null null VARCHAR(10) null +> SCRIPT PUBLIC ORDERS NAME 2 null YES 12 20 20 20 10 0 Unicode OFF VARCHAR 1 FALSE 50 null null VARCHAR(20) null +> SCRIPT PUBLIC ORDERS ORDERID 1 null YES 12 10 10 10 10 0 Unicode OFF VARCHAR 1 FALSE 50 null null VARCHAR(10) null +> SCRIPT PUBLIC ORDERS VERIFIED 5 null YES 2 1 1 1 10 0 Unicode OFF NUMERIC 1 FALSE 50 null null NUMERIC(1) null +> rows: 5 + +drop table orders; +> ok + +CREATE TABLE TEST(ID INT, X1 BIT, XT TINYINT, X_SM SMALLINT, XB BIGINT, XD DECIMAL(10,2), XD2 DOUBLE PRECISION, XR REAL); +> ok + +INSERT INTO TEST VALUES(?, ?, ?, ?, ?, ?, ?, ?); +{ +0,FALSE,0,0,0,0.0,0.0,0.0 +1,TRUE,1,1,1,1.0,1.0,1.0 +4,TRUE,4,4,4,4.0,4.0,4.0 +-1,FALSE,-1,-1,-1,-1.0,-1.0,-1.0 
+NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL +}; +> update count: 5 + +SELECT ID, CAST(XT AS NUMBER(10,1)), +CAST(X_SM AS NUMBER(10,1)), CAST(XB AS NUMBER(10,1)), CAST(XD AS NUMBER(10,1)), +CAST(XD2 AS NUMBER(10,1)), CAST(XR AS NUMBER(10,1)) FROM TEST; +> ID CAST(XT AS NUMERIC(10, 1)) CAST(X_SM AS NUMERIC(10, 1)) CAST(XB AS NUMERIC(10, 1)) CAST(XD AS NUMERIC(10, 1)) CAST(XD2 AS NUMERIC(10, 1)) CAST(XR AS NUMERIC(10, 1)) +> ---- -------------------------- ---------------------------- -------------------------- -------------------------- --------------------------- -------------------------- +> -1 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 +> 0 0.0 0.0 0.0 0.0 0.0 0.0 +> 1 1.0 1.0 1.0 1.0 1.0 1.0 +> 4 4.0 4.0 4.0 4.0 4.0 4.0 +> null null null null null null null +> rows: 5 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/double.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/double.sql new file mode 100644 index 0000000000000..e95dffc61f267 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/double.sql @@ -0,0 +1,32 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE MEMORY TABLE TEST(D1 DOUBLE, D2 DOUBLE PRECISION, D3 FLOAT, D4 FLOAT(25), D5 FLOAT(53)); +> ok + +ALTER TABLE TEST ADD COLUMN D6 FLOAT(54); +> exception + +SELECT COLUMN_NAME, DATA_TYPE, TYPE_NAME, COLUMN_TYPE FROM INFORMATION_SCHEMA.COLUMNS + WHERE TABLE_NAME = 'TEST' ORDER BY ORDINAL_POSITION; +> COLUMN_NAME DATA_TYPE TYPE_NAME COLUMN_TYPE +> ----------- --------- --------- ---------------- +> D1 8 DOUBLE DOUBLE +> D2 8 DOUBLE DOUBLE PRECISION +> D3 8 DOUBLE FLOAT +> D4 8 DOUBLE FLOAT(25) +> D5 8 DOUBLE FLOAT(53) +> rows (ordered): 5 + +SCRIPT NODATA NOPASSWORDS NOSETTINGS TABLE TEST; +> SCRIPT +> -------------------------------------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> CREATE MEMORY TABLE PUBLIC.TEST( D1 DOUBLE, D2 DOUBLE PRECISION, D3 FLOAT, D4 FLOAT(25), D5 FLOAT(53) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 3 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/enum.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/enum.sql new file mode 100644 index 0000000000000..3fa34449090f2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/enum.sql @@ -0,0 +1,164 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +---------------- +--- ENUM support +---------------- + +--- ENUM basic operations + +create table card (rank int, suit enum('hearts', 'clubs', 'spades')); +> ok + +insert into card (rank, suit) values (0, 'clubs'), (3, 'hearts'), (4, NULL); +> update count: 3 + +alter table card alter column suit enum('hearts', 'clubs', 'spades', 'diamonds'); +> ok + +select * from card; +> RANK SUIT +> ---- ------ +> 0 clubs +> 3 hearts +> 4 null + +select * from card order by suit; +> RANK SUIT +> ---- ------ +> 4 null +> 3 hearts +> 0 clubs + +insert into card (rank, suit) values (8, 'diamonds'), (10, 'clubs'), (7, 'hearts'); +> update count: 3 + +select suit, count(rank) from card group by suit order by suit, count(rank); +> SUIT COUNT(RANK) +> -------- ----------- +> null 1 +> hearts 2 +> clubs 2 +> diamonds 1 + +select rank from card where suit = 'diamonds'; +> RANK +> ---- +> 8 + +select column_type from information_schema.columns where COLUMN_NAME = 'SUIT'; +> COLUMN_TYPE +> ------------------------------------------ +> ENUM('hearts','clubs','spades','diamonds') +> rows: 1 + +--- ENUM integer-based operations + +select rank from card where suit = 1; +> RANK +> ---- +> 0 +> 10 + +insert into card (rank, suit) values(5, 2); +> update count: 1 + +select * from card where rank = 5; +> RANK SUIT +> ---- ------ +> 5 spades + +--- ENUM edge cases + +insert into card (rank, suit) values(6, ' '); +> exception + +alter table card alter column suit enum('hearts', 'clubs', 'spades', 'diamonds', 'clubs'); +> exception + +alter table card alter column suit enum('hearts', 'clubs', 'spades', 'diamonds', ''); +> exception + +drop table card; +> ok + +--- ENUM as custom user data type + +create type CARD_SUIT as enum('hearts', 'clubs', 'spades', 'diamonds'); +> ok + +create table card (rank int, suit CARD_SUIT); +> ok + +insert into card (rank, suit) values (0, 'clubs'), (3, 'hearts'); +> update count: 2 + +select * from card; +> RANK SUIT +> ---- 
------ +> 0 clubs +> 3 hearts + +drop table card; +> ok + +drop type CARD_SUIT; +> ok + +--- ENUM in primary key with another column +create type CARD_SUIT as enum('hearts', 'clubs', 'spades', 'diamonds'); +> ok + +create table card (rank int, suit CARD_SUIT, primary key(rank, suit)); +> ok + +insert into card (rank, suit) values (0, 'clubs'), (3, 'hearts'), (1, 'clubs'); +> update count: 3 + +insert into card (rank, suit) values (0, 'clubs'); +> exception + +select rank from card where suit = 'clubs'; +> RANK +> ---- +> 0 +> 1 + +drop table card; +> ok + +drop type CARD_SUIT; +> ok + +--- ENUM with index +create type CARD_SUIT as enum('hearts', 'clubs', 'spades', 'diamonds'); +> ok + +create table card (rank int, suit CARD_SUIT, primary key(rank, suit)); +> ok + +insert into card (rank, suit) values (0, 'clubs'), (3, 'hearts'), (1, 'clubs'); +> update count: 3 + +create index idx_card_suite on card(`suit`); + +select rank from card where suit = 'clubs'; +> RANK +> ---- +> 0 +> 1 + +select rank from card where suit in ('clubs'); +> RANK +> ---- +> 0 +> 1 + +drop table card; +> ok + +drop type CARD_SUIT; +> ok + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/geometry.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/geometry.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/geometry.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/identity.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/identity.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/identity.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/int.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/int.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/int.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/other.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/other.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/other.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/real.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/real.sql new file mode 100644 index 0000000000000..d7670015b7ea6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/real.sql @@ -0,0 +1,31 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE MEMORY TABLE TEST(D1 REAL, D2 FLOAT4, D3 FLOAT(0), D4 FLOAT(24)); +> ok + +ALTER TABLE TEST ADD COLUMN D5 FLOAT(-1); +> exception + +SELECT COLUMN_NAME, DATA_TYPE, TYPE_NAME, COLUMN_TYPE FROM INFORMATION_SCHEMA.COLUMNS + WHERE TABLE_NAME = 'TEST' ORDER BY ORDINAL_POSITION; +> COLUMN_NAME DATA_TYPE TYPE_NAME COLUMN_TYPE +> ----------- --------- --------- ----------- +> D1 7 REAL REAL +> D2 7 REAL FLOAT4 +> D3 7 REAL FLOAT(0) +> D4 7 REAL FLOAT(24) +> rows (ordered): 4 + +SCRIPT NODATA NOPASSWORDS NOSETTINGS TABLE TEST; +> SCRIPT +> --------------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> CREATE MEMORY TABLE PUBLIC.TEST( D1 REAL, D2 FLOAT4, D3 FLOAT(0), D4 FLOAT(24) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 3 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/smallint.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/smallint.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/smallint.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/time.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/time.sql new file mode 100644 index 0000000000000..8e64cab6327dd --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/time.sql @@ -0,0 +1,86 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE TABLE TEST(T1 TIME, T2 TIME WITHOUT TIME ZONE); +> ok + +INSERT INTO TEST(T1, T2) VALUES (TIME '10:00:00', TIME WITHOUT TIME ZONE '10:00:00'); +> update count: 1 + +SELECT T1, T2, T1 = T2 FROM TEST; +> T1 T2 T1 = T2 +> -------- -------- ------- +> 10:00:00 10:00:00 TRUE +> rows: 1 + +SELECT COLUMN_NAME, DATA_TYPE, TYPE_NAME, COLUMN_TYPE FROM INFORMATION_SCHEMA.COLUMNS + WHERE TABLE_NAME = 'TEST' ORDER BY ORDINAL_POSITION; +> COLUMN_NAME DATA_TYPE TYPE_NAME COLUMN_TYPE +> ----------- --------- --------- ---------------------- +> T1 92 TIME TIME +> T2 92 TIME TIME WITHOUT TIME ZONE +> rows (ordered): 2 + +ALTER TABLE TEST ADD (T3 TIME(0), T4 TIME(9) WITHOUT TIME ZONE); +> ok + +SELECT COLUMN_NAME, DATA_TYPE, TYPE_NAME, COLUMN_TYPE, NUMERIC_SCALE FROM INFORMATION_SCHEMA.COLUMNS + WHERE TABLE_NAME = 'TEST' ORDER BY ORDINAL_POSITION; +> COLUMN_NAME DATA_TYPE TYPE_NAME COLUMN_TYPE NUMERIC_SCALE +> ----------- --------- --------- ------------------------- ------------- +> T1 92 TIME TIME 0 +> T2 92 TIME TIME WITHOUT TIME ZONE 0 +> T3 92 TIME TIME(0) 0 +> T4 92 TIME TIME(9) WITHOUT TIME ZONE 9 +> rows (ordered): 4 + +ALTER TABLE TEST ADD T5 TIME(10); +> exception + +DROP TABLE TEST; +> ok + +-- Check that TIME is allowed as a column name +CREATE TABLE TEST(TIME TIME); +> ok + +INSERT INTO TEST VALUES (TIME '08:00:00'); +> update count: 1 + +SELECT TIME FROM TEST; +> TIME +> -------- +> 08:00:00 +> rows: 1 + +DROP TABLE TEST; +> ok + +CREATE TABLE TEST(T TIME, T0 TIME(0), T1 TIME(1), T2 TIME(2), T3 TIME(3), T4 TIME(4), T5 TIME(5), T6 TIME(6), + T7 TIME(7), T8 TIME(8), T9 TIME(9)); +> ok + +INSERT INTO TEST VALUES ('08:00:00.123456789', '08:00:00.123456789', '08:00:00.123456789', '08:00:00.123456789', + '08:00:00.123456789', '08:00:00.123456789', '08:00:00.123456789', '08:00:00.123456789', '08:00:00.123456789', + '08:00:00.123456789', '08:00:00.123456789'); +> update count: 1 + +SELECT * FROM TEST; +> T T0 T1 T2 T3 T4 T5 T6 T7 T8 
T9 +> -------- -------- ---------- ----------- ------------ ------------- -------------- --------------- ---------------- ----------------- ------------------ +> 08:00:00 08:00:00 08:00:00.1 08:00:00.12 08:00:00.123 08:00:00.1235 08:00:00.12346 08:00:00.123457 08:00:00.1234568 08:00:00.12345679 08:00:00.123456789 +> rows: 1 + +DELETE FROM TEST; +> update count: 1 + +INSERT INTO TEST(T0) VALUES ('23:59:59.999999999'); +> update count: 1 + +SELECT T0 FROM TEST; +>> 23:59:59.999999999 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/timestamp-with-timezone.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/timestamp-with-timezone.sql new file mode 100644 index 0000000000000..459a4618a1655 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/timestamp-with-timezone.sql @@ -0,0 +1,97 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE TABLE tab_with_timezone(x TIMESTAMP WITH TIME ZONE); +> ok + +INSERT INTO tab_with_timezone(x) VALUES ('2017-01-01'); +> update count: 1 + +SELECT "Query".* FROM (select * from tab_with_timezone where x > '2016-01-01') AS "Query"; +> X +> ---------------------- +> 2017-01-01 00:00:00+00 + +DELETE FROM tab_with_timezone; +> update count: 1 + +INSERT INTO tab_with_timezone VALUES ('2018-03-25 01:59:00 Europe/Berlin'), ('2018-03-25 03:00:00 Europe/Berlin'); +> update count: 2 + +SELECT * FROM tab_with_timezone ORDER BY X; +> X +> ---------------------- +> 2018-03-25 01:59:00+01 +> 2018-03-25 03:00:00+02 +> rows (ordered): 2 + +SELECT TIMESTAMP WITH TIME ZONE '2000-01-10 00:00:00 -02' AS A, + TIMESTAMP WITH TIME ZONE '2000-01-10 00:00:00.000000000 +02:00' AS B, + TIMESTAMP WITH TIME ZONE '2000-01-10 00:00:00.000000000+02:00' AS C, + TIMESTAMP WITH TIME ZONE '2000-01-10T00:00:00.000000000+09:00[Asia/Tokyo]' AS D; +> A B C D +> ---------------------- ---------------------- ---------------------- ---------------------- +> 2000-01-10 00:00:00-02 2000-01-10 00:00:00+02 2000-01-10 00:00:00+02 2000-01-10 00:00:00+09 +> rows: 1 + +CREATE TABLE TEST(T1 TIMESTAMP WITH TIME ZONE, T2 TIMESTAMP(0) WITH TIME ZONE, T3 TIMESTAMP(9) WITH TIME ZONE); +> ok + +SELECT COLUMN_NAME, DATA_TYPE, TYPE_NAME, COLUMN_TYPE, NUMERIC_SCALE FROM INFORMATION_SCHEMA.COLUMNS + WHERE TABLE_NAME = 'TEST' ORDER BY ORDINAL_POSITION; +> COLUMN_NAME DATA_TYPE TYPE_NAME COLUMN_TYPE NUMERIC_SCALE +> ----------- --------- ------------------------ --------------------------- ------------- +> T1 2014 TIMESTAMP WITH TIME ZONE TIMESTAMP WITH TIME ZONE 6 +> T2 2014 TIMESTAMP WITH TIME ZONE TIMESTAMP(0) WITH TIME ZONE 0 +> T3 2014 TIMESTAMP WITH TIME ZONE TIMESTAMP(9) WITH TIME ZONE 9 +> rows (ordered): 3 + +ALTER TABLE TEST ADD T4 TIMESTAMP (10) WITH TIME ZONE; +> exception + +DROP TABLE TEST; +> ok + +CREATE TABLE TEST(T TIMESTAMP WITH TIME ZONE, T0 TIMESTAMP(0) WITH 
TIME ZONE, T1 TIMESTAMP(1) WITH TIME ZONE, + T2 TIMESTAMP(2) WITH TIME ZONE, T3 TIMESTAMP(3) WITH TIME ZONE, T4 TIMESTAMP(4) WITH TIME ZONE, + T5 TIMESTAMP(5) WITH TIME ZONE, T6 TIMESTAMP(6) WITH TIME ZONE, T7 TIMESTAMP(7) WITH TIME ZONE, + T8 TIMESTAMP(8) WITH TIME ZONE, T9 TIMESTAMP(9) WITH TIME ZONE); +> ok + +INSERT INTO TEST VALUES ('2000-01-01 08:00:00.123456789Z', '2000-01-01 08:00:00.123456789Z', + '2000-01-01 08:00:00.123456789Z', '2000-01-01 08:00:00.123456789Z', '2000-01-01 08:00:00.123456789Z', + '2000-01-01 08:00:00.123456789Z', '2000-01-01 08:00:00.123456789Z', '2000-01-01 08:00:00.123456789Z', + '2000-01-01 08:00:00.123456789Z', '2000-01-01 08:00:00.123456789Z', '2000-01-01 08:00:00.123456789Z'); +> update count: 1 + +SELECT T, T0, T1, T2 FROM TEST; +> T T0 T1 T2 +> ----------------------------- ---------------------- ------------------------ ------------------------- +> 2000-01-01 08:00:00.123457+00 2000-01-01 08:00:00+00 2000-01-01 08:00:00.1+00 2000-01-01 08:00:00.12+00 +> rows: 1 + +SELECT T3, T4, T5, T6 FROM TEST; +> T3 T4 T5 T6 +> -------------------------- --------------------------- ---------------------------- ----------------------------- +> 2000-01-01 08:00:00.123+00 2000-01-01 08:00:00.1235+00 2000-01-01 08:00:00.12346+00 2000-01-01 08:00:00.123457+00 +> rows: 1 + +SELECT T7, T8, T9 FROM TEST; +> T7 T8 T9 +> ------------------------------ ------------------------------- -------------------------------- +> 2000-01-01 08:00:00.1234568+00 2000-01-01 08:00:00.12345679+00 2000-01-01 08:00:00.123456789+00 +> rows: 1 + +DELETE FROM TEST; +> update count: 1 + +INSERT INTO TEST(T0) VALUES ('2000-01-01 23:59:59.999999999Z'); +> update count: 1 + +SELECT T0 FROM TEST; +>> 2000-01-02 00:00:00+00 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/timestamp.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/timestamp.sql new file mode 100644 index 0000000000000..b52062db75242 --- /dev/null +++ 
b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/timestamp.sql @@ -0,0 +1,87 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CREATE TABLE TEST(T1 TIMESTAMP, T2 TIMESTAMP WITHOUT TIME ZONE); +> ok + +INSERT INTO TEST(T1, T2) VALUES (TIMESTAMP '2010-01-01 10:00:00', TIMESTAMP WITHOUT TIME ZONE '2010-01-01 10:00:00'); +> update count: 1 + +SELECT T1, T2, T1 = T2 FROM TEST; +> T1 T2 T1 = T2 +> ------------------- ------------------- ------- +> 2010-01-01 10:00:00 2010-01-01 10:00:00 TRUE +> rows: 1 + +ALTER TABLE TEST ADD (T3 TIMESTAMP(0), T4 TIMESTAMP(9) WITHOUT TIME ZONE); + +SELECT COLUMN_NAME, DATA_TYPE, TYPE_NAME, COLUMN_TYPE, NUMERIC_SCALE FROM INFORMATION_SCHEMA.COLUMNS + WHERE TABLE_NAME = 'TEST' ORDER BY ORDINAL_POSITION; +> COLUMN_NAME DATA_TYPE TYPE_NAME COLUMN_TYPE NUMERIC_SCALE +> ----------- --------- --------- ------------------------------ ------------- +> T1 93 TIMESTAMP TIMESTAMP 6 +> T2 93 TIMESTAMP TIMESTAMP WITHOUT TIME ZONE 6 +> T3 93 TIMESTAMP TIMESTAMP(0) 0 +> T4 93 TIMESTAMP TIMESTAMP(9) WITHOUT TIME ZONE 9 +> rows (ordered): 4 + +ALTER TABLE TEST ADD T5 TIMESTAMP(10); +> exception + +DROP TABLE TEST; +> ok + +-- Check that TIMESTAMP is allowed as a column name +CREATE TABLE TEST(TIMESTAMP TIMESTAMP(0)); +> ok + +INSERT INTO TEST VALUES (TIMESTAMP '1999-12-31 08:00:00'); +> update count: 1 + +SELECT TIMESTAMP FROM TEST; +>> 1999-12-31 08:00:00 + +DROP TABLE TEST; +> ok + +CREATE TABLE TEST(T TIMESTAMP, T0 TIMESTAMP(0), T1 TIMESTAMP(1), T2 TIMESTAMP(2), T3 TIMESTAMP(3), T4 TIMESTAMP(4), + T5 TIMESTAMP(5), T6 TIMESTAMP(6), T7 TIMESTAMP(7), T8 TIMESTAMP(8), T9 TIMESTAMP(9)); +> ok + +INSERT INTO TEST VALUES ('2000-01-01 08:00:00.123456789', '2000-01-01 08:00:00.123456789', + '2000-01-01 08:00:00.123456789', '2000-01-01 08:00:00.123456789', '2000-01-01 08:00:00.123456789', + '2000-01-01 08:00:00.123456789', 
'2000-01-01 08:00:00.123456789', '2000-01-01 08:00:00.123456789', + '2000-01-01 08:00:00.123456789', '2000-01-01 08:00:00.123456789', '2000-01-01 08:00:00.123456789'); +> update count: 1 + +SELECT T, T0, T1, T2 FROM TEST; +> T T0 T1 T2 +> -------------------------- ------------------- --------------------- ---------------------- +> 2000-01-01 08:00:00.123457 2000-01-01 08:00:00 2000-01-01 08:00:00.1 2000-01-01 08:00:00.12 +> rows: 1 + +SELECT T3, T4, T5, T6 FROM TEST; +> T3 T4 T5 T6 +> ----------------------- ------------------------ ------------------------- -------------------------- +> 2000-01-01 08:00:00.123 2000-01-01 08:00:00.1235 2000-01-01 08:00:00.12346 2000-01-01 08:00:00.123457 +> rows: 1 + +SELECT T7, T8, T9 FROM TEST; +> T7 T8 T9 +> --------------------------- ---------------------------- ----------------------------- +> 2000-01-01 08:00:00.1234568 2000-01-01 08:00:00.12345679 2000-01-01 08:00:00.123456789 +> rows: 1 + +DELETE FROM TEST; +> update count: 1 + +INSERT INTO TEST(T0) VALUES ('2000-01-01 23:59:59.999999999'); +> update count: 1 + +SELECT T0 FROM TEST; +>> 2000-01-02 00:00:00 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/tinyint.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/tinyint.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/tinyint.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/uuid.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/uuid.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/uuid.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/varchar-ignorecase.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/varchar-ignorecase.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/varchar-ignorecase.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/datatypes/varchar.sql b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/varchar.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/datatypes/varchar.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/ddl/alterTableAdd.sql b/modules/h2/src/test/java/org/h2/test/scripts/ddl/alterTableAdd.sql new file mode 100644 index 0000000000000..5f372d47475b2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/ddl/alterTableAdd.sql @@ -0,0 +1,103 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE TABLE TEST(B INT); +> ok + +ALTER TABLE TEST ADD C INT; +> ok + +ALTER TABLE TEST ADD COLUMN D INT; +> ok + +ALTER TABLE TEST ADD IF NOT EXISTS B INT; +> ok + +ALTER TABLE TEST ADD IF NOT EXISTS E INT; +> ok + +ALTER TABLE IF EXISTS TEST2 ADD COLUMN B INT; +> ok + +ALTER TABLE TEST ADD B1 INT AFTER B; +> ok + +ALTER TABLE TEST ADD B2 INT BEFORE C; +> ok + +ALTER TABLE TEST ADD (C1 INT, C2 INT) AFTER C; +> ok + +ALTER TABLE TEST ADD (C3 INT, C4 INT) BEFORE D; +> ok + +ALTER TABLE TEST ADD A2 INT FIRST; +> ok + +ALTER TABLE TEST ADD (A INT, A1 INT) FIRST; +> ok + +SELECT * FROM TEST; +> A A1 A2 B B1 B2 C C1 C2 C3 C4 D E +> - -- -- - -- -- - -- -- -- -- - - +> rows: 0 + +DROP TABLE TEST; +> ok + +CREATE TABLE TEST(A INT NOT NULL, B INT); +> ok + +-- column B may be null +ALTER TABLE TEST ADD (CONSTRAINT PK_B PRIMARY KEY (B)); +> exception + +ALTER TABLE TEST ADD (CONSTRAINT PK_A PRIMARY KEY (A)); +> ok + +ALTER TABLE TEST ADD (C INT AUTO_INCREMENT UNIQUE, CONSTRAINT U_B UNIQUE (B), D INT UNIQUE); +> ok + +INSERT INTO TEST(A, B, D) VALUES (11, 12, 14); +> update count: 1 + +SELECT * FROM TEST; +> A B C D +> -- -- - -- +> 11 12 1 14 +> rows: 1 + +INSERT INTO TEST VALUES (11, 20, 30, 40); +> exception + +INSERT INTO TEST VALUES (10, 12, 30, 40); +> exception + +INSERT INTO TEST VALUES (10, 20, 1, 40); +> exception + +INSERT INTO TEST VALUES (10, 20, 30, 14); +> exception + +INSERT INTO TEST VALUES (10, 20, 30, 40); +> update count: 1 + +DROP TABLE TEST; +> ok + +CREATE TABLE TEST(); +> ok + +ALTER TABLE TEST ADD A INT CONSTRAINT PK_1 PRIMARY KEY; +> ok + +SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS; +> CONSTRAINT_NAME CONSTRAINT_TYPE +> --------------- --------------- +> PK_1 PRIMARY KEY +> rows: 1 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/ddl/alterTableDropColumn.sql b/modules/h2/src/test/java/org/h2/test/scripts/ddl/alterTableDropColumn.sql new 
file mode 100644 index 0000000000000..e6263c332c85c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/ddl/alterTableDropColumn.sql @@ -0,0 +1,80 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CREATE TABLE TEST(A VARCHAR, B VARCHAR, C VARCHAR AS LOWER(A)); +> ok + +ALTER TABLE TEST DROP COLUMN B; +> ok + +DROP TABLE TEST; +> ok + +ALTER TABLE IF EXISTS TEST DROP COLUMN A; +> ok + +ALTER TABLE TEST DROP COLUMN A; +> exception + +CREATE TABLE TEST(A INT, B INT, C INT, D INT, E INT, F INT, G INT, H INT, I INT, J INT); +> ok + +ALTER TABLE TEST DROP COLUMN IF EXISTS J; +> ok + +ALTER TABLE TEST DROP COLUMN J; +> exception + +ALTER TABLE TEST DROP COLUMN B; +> ok + +ALTER TABLE TEST DROP COLUMN IF EXISTS C; +> ok + +SELECT * FROM TEST; +> A D E F G H I +> - - - - - - - +> rows: 0 + +ALTER TABLE TEST DROP COLUMN B, D; +> exception + +ALTER TABLE TEST DROP COLUMN IF EXISTS B, D; +> ok + +SELECT * FROM TEST; +> A E F G H I +> - - - - - - +> rows: 0 + +ALTER TABLE TEST DROP COLUMN E, F; +> ok + +SELECT * FROM TEST; +> A G H I +> - - - - +> rows: 0 + +ALTER TABLE TEST DROP COLUMN (B, H); +> exception + +ALTER TABLE TEST DROP COLUMN IF EXISTS (B, H); +> ok + +SELECT * FROM TEST; +> A G I +> - - - +> rows: 0 + +ALTER TABLE TEST DROP COLUMN (G, I); +> ok + +SELECT * FROM TEST; +> A +> - +> rows: 0 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/ddl/createTable.sql b/modules/h2/src/test/java/org/h2/test/scripts/ddl/createTable.sql new file mode 100644 index 0000000000000..96b29d66ff45a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/ddl/createTable.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE TABLE TEST(A INT CONSTRAINT PK_1 PRIMARY KEY); +> ok + +SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS; +> CONSTRAINT_NAME CONSTRAINT_TYPE +> --------------- --------------- +> PK_1 PRIMARY KEY +> rows: 1 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/ddl/createView.sql b/modules/h2/src/test/java/org/h2/test/scripts/ddl/createView.sql new file mode 100644 index 0000000000000..7f4f97a346108 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/ddl/createView.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CREATE VIEW TEST_VIEW(A) AS SELECT 'a'; +> ok + +CREATE OR REPLACE VIEW TEST_VIEW(B, C) AS SELECT 'b', 'c'; +> ok + +SELECT * FROM TEST_VIEW; +> B C +> - - +> b c +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/ddl/dropSchema.sql b/modules/h2/src/test/java/org/h2/test/scripts/ddl/dropSchema.sql new file mode 100644 index 0000000000000..dc7cf2398b182 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/ddl/dropSchema.sql @@ -0,0 +1,88 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE SCHEMA TEST_SCHEMA; +> ok + +DROP SCHEMA TEST_SCHEMA RESTRICT; +> ok + +CREATE SCHEMA TEST_SCHEMA; +> ok + +CREATE TABLE TEST_SCHEMA.TEST(); +> ok + +DROP SCHEMA TEST_SCHEMA RESTRICT; +> exception + +DROP SCHEMA TEST_SCHEMA CASCADE; +> ok + +CREATE SCHEMA TEST_SCHEMA; +> ok + +CREATE VIEW TEST_SCHEMA.TEST AS SELECT 1; +> ok + +DROP SCHEMA TEST_SCHEMA RESTRICT; +> exception + +DROP SCHEMA TEST_SCHEMA CASCADE; +> ok + +CREATE TABLE PUBLIC.SRC(); +> ok + +CREATE SCHEMA TEST_SCHEMA; +> ok + +CREATE SYNONYM TEST_SCHEMA.TEST FOR PUBLIC.SRC; +> ok + +DROP SCHEMA TEST_SCHEMA RESTRICT; +> exception + +DROP SCHEMA TEST_SCHEMA CASCADE; +> ok + +DROP TABLE PUBLIC.SRC; +> ok + +CREATE SCHEMA TEST_SCHEMA; +> ok + +CREATE SEQUENCE TEST_SCHEMA.TEST; +> ok + +DROP SCHEMA TEST_SCHEMA RESTRICT; +> exception + +DROP SCHEMA TEST_SCHEMA CASCADE; +> ok + +CREATE SCHEMA TEST_SCHEMA; +> ok + +CREATE CONSTANT TEST_SCHEMA.TEST VALUE 1; +> ok + +DROP SCHEMA TEST_SCHEMA RESTRICT; +> exception + +DROP SCHEMA TEST_SCHEMA CASCADE; +> ok + +CREATE SCHEMA TEST_SCHEMA; +> ok + +CREATE ALIAS TEST_SCHEMA.TEST FOR "java.lang.System.currentTimeMillis"; +> ok + +DROP SCHEMA TEST_SCHEMA RESTRICT; +> exception + +DROP SCHEMA TEST_SCHEMA CASCADE; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/default-and-on_update.sql b/modules/h2/src/test/java/org/h2/test/scripts/default-and-on_update.sql new file mode 100644 index 0000000000000..b3ffa50fccfb4 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/default-and-on_update.sql @@ -0,0 +1,112 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CREATE SEQUENCE SEQ; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, V INT DEFAULT NEXT VALUE FOR SEQ ON UPDATE 1000 * NEXT VALUE FOR SEQ); +> ok + +INSERT INTO TEST(ID) VALUES (1), (2); +> update count: 2 + +SELECT * FROM TEST ORDER BY ID; +> ID V +> -- - +> 1 1 +> 2 2 +> rows (ordered): 2 + +UPDATE TEST SET ID = 3 WHERE ID = 2; +> update count: 1 + +SELECT * FROM TEST ORDER BY ID; +> ID V +> -- ---- +> 1 1 +> 3 3000 +> rows (ordered): 2 + + +UPDATE TEST SET V = 3 WHERE ID = 3; +> update count: 1 + +SELECT * FROM TEST ORDER BY ID; +> ID V +> -- - +> 1 1 +> 3 3 +> rows (ordered): 2 + +ALTER TABLE TEST ADD V2 TIMESTAMP ON UPDATE CURRENT_TIMESTAMP; +> ok + +UPDATE TEST SET V = 4 WHERE ID = 3; +> update count: 1 + +SELECT ID, V, LENGTH(V2) > 18 AS L FROM TEST ORDER BY ID; +> ID V L +> -- - ---- +> 1 1 null +> 3 4 TRUE +> rows (ordered): 2 + +UPDATE TEST SET V = 1 WHERE V = 1; +> update count: 1 + +SELECT ID, V, LENGTH(V2) > 18 AS L FROM TEST ORDER BY ID; +> ID V L +> -- - ---- +> 1 1 null +> 3 4 TRUE +> rows (ordered): 2 + +MERGE INTO TEST(ID, V) KEY(ID) VALUES (1, 1); +> update count: 1 + +SELECT ID, V, LENGTH(V2) > 18 AS L FROM TEST ORDER BY ID; +> ID V L +> -- - ---- +> 1 1 null +> 3 4 TRUE +> rows (ordered): 2 + +MERGE INTO TEST(ID, V) KEY(ID) VALUES (1, 2); +> update count: 1 + +SELECT ID, V, LENGTH(V2) > 18 AS L FROM TEST ORDER BY ID; +> ID V L +> -- - ---- +> 1 2 TRUE +> 3 4 TRUE +> rows (ordered): 2 + +ALTER TABLE TEST ALTER COLUMN V SET ON UPDATE NULL; +> ok + +SELECT COLUMN_NAME, COLUMN_DEFAULT, COLUMN_ON_UPDATE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'TEST' ORDER BY COLUMN_NAME; +> COLUMN_NAME COLUMN_DEFAULT COLUMN_ON_UPDATE +> ----------- --------------------------- ------------------- +> ID null null +> V (NEXT VALUE FOR PUBLIC.SEQ) NULL +> V2 null CURRENT_TIMESTAMP() +> rows (ordered): 3 + +ALTER TABLE TEST ALTER COLUMN V DROP ON UPDATE; +> ok + +SELECT COLUMN_NAME, COLUMN_DEFAULT, COLUMN_ON_UPDATE FROM 
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'TEST' ORDER BY COLUMN_NAME; +> COLUMN_NAME COLUMN_DEFAULT COLUMN_ON_UPDATE +> ----------- --------------------------- ------------------- +> ID null null +> V (NEXT VALUE FOR PUBLIC.SEQ) null +> V2 null CURRENT_TIMESTAMP() +> rows (ordered): 3 + +DROP TABLE TEST; +> ok + +DROP SEQUENCE SEQ; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/derived-column-names.sql b/modules/h2/src/test/java/org/h2/test/scripts/derived-column-names.sql new file mode 100644 index 0000000000000..83ad7cc0bc5a8 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/derived-column-names.sql @@ -0,0 +1,82 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +SELECT * FROM (VALUES(1, 2)); +> C1 C2 +> -- -- +> 1 2 +> rows: 1 + +SELECT * FROM (VALUES(1, 2)) AS T; +> C1 C2 +> -- -- +> 1 2 +> rows: 1 + +SELECT * FROM (VALUES(1, 2)) AS T(A, B); +> A B +> - - +> 1 2 +> rows: 1 + +SELECT A AS A1, B AS B1 FROM (VALUES(1, 2)) AS T(A, B); +> A1 B1 +> -- -- +> 1 2 +> rows: 1 + +SELECT A AS A1, B AS B1 FROM (VALUES(1, 2)) AS T(A, B) WHERE A <> B; +> A1 B1 +> -- -- +> 1 2 +> rows: 1 + +SELECT A AS A1, B AS B1 FROM (VALUES(1, 2)) AS T(A, B) WHERE A1 <> B1; +> exception + +SELECT * FROM (VALUES(1, 2)) AS T(A); +> exception + +SELECT * FROM (VALUES(1, 2)) AS T(A, a); +> exception + +SELECT * FROM (VALUES(1, 2)) AS T(A, B, C); +> exception + +SELECT V AS V1, A AS A1, B AS B1 FROM (VALUES (1)) T1(V) INNER JOIN (VALUES(1, 2)) T2(A, B) ON V = A; +> V1 A1 B1 +> -- -- -- +> 1 1 2 +> rows: 1 + +CREATE TABLE TEST(I INT, J INT); +> ok + +CREATE INDEX TEST_I_IDX ON TEST(I); +> ok + +INSERT INTO TEST VALUES (1, 2); +> update count: 1 + +SELECT * FROM (TEST) AS T(A, B); +> A B +> - - +> 1 2 +> rows: 1 + +SELECT * FROM TEST AS T(A, B); +> A B +> - - +> 1 2 +> rows: 1 + +SELECT * FROM TEST AS T(A, B) USE INDEX (TEST_I_IDX); +> A 
B +> - - +> 1 2 +> rows: 1 + +DROP TABLE TEST; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/dml/insertIgnore.sql b/modules/h2/src/test/java/org/h2/test/scripts/dml/insertIgnore.sql new file mode 100644 index 0000000000000..cd1d7ec6a6a29 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/dml/insertIgnore.sql @@ -0,0 +1,41 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +SET MODE MySQL; +> ok + +CREATE TABLE TEST(ID BIGINT PRIMARY KEY, VALUE INT NOT NULL); +> ok + +INSERT INTO TEST VALUES (1, 10), (2, 20), (3, 30), (4, 40); +> update count: 4 + +INSERT INTO TEST VALUES (3, 31), (5, 51); +> exception + +SELECT * FROM TEST ORDER BY ID; +> ID VALUE +> -- ----- +> 1 10 +> 2 20 +> 3 30 +> 4 40 +> rows (ordered): 4 + +INSERT IGNORE INTO TEST VALUES (3, 32), (5, 52); +> update count: 1 + +INSERT IGNORE INTO TEST VALUES (4, 43); +> ok + +SELECT * FROM TEST ORDER BY ID; +> ID VALUE +> -- ----- +> 1 10 +> 2 20 +> 3 30 +> 4 40 +> 5 52 +> rows (ordered): 5 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/dml/mergeUsing.sql b/modules/h2/src/test/java/org/h2/test/scripts/dml/mergeUsing.sql new file mode 100644 index 0000000000000..5940a3e245f12 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/dml/mergeUsing.sql @@ -0,0 +1,33 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- +CREATE TABLE PARENT(ID INT, NAME VARCHAR, PRIMARY KEY(ID) ); +> ok + +MERGE INTO PARENT AS P + USING (SELECT X AS ID, 'Coco'||X AS NAME FROM SYSTEM_RANGE(1,2) ) AS S + ON (P.ID = S.ID AND 1=1 AND S.ID = P.ID) + WHEN MATCHED THEN + UPDATE SET P.NAME = S.NAME WHERE 2 = 2 WHEN NOT + MATCHED THEN + INSERT (ID, NAME) VALUES (S.ID, S.NAME); +> update count: 2 + +SELECT * FROM PARENT; +> ID NAME +> -- ----- +> 1 Coco1 +> 2 Coco2 + +EXPLAIN PLAN + MERGE INTO PARENT AS P + USING (SELECT X AS ID, 'Coco'||X AS NAME FROM SYSTEM_RANGE(1,2) ) AS S + ON (P.ID = S.ID AND 1=1 AND S.ID = P.ID) + WHEN MATCHED THEN + UPDATE SET P.NAME = S.NAME WHERE 2 = 2 WHEN NOT + MATCHED THEN + INSERT (ID, NAME) VALUES (S.ID, S.NAME); +> PLAN +> --------------------------------------------------------------------------------------------------------------------------------- +> MERGE INTO PUBLIC.PARENT(ID, NAME) KEY(ID) SELECT X AS ID, ('Coco' || X) AS NAME FROM SYSTEM_RANGE(1, 2) /* PUBLIC.RANGE_INDEX */ \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/scripts/dml/script.sql b/modules/h2/src/test/java/org/h2/test/scripts/dml/script.sql new file mode 100644 index 0000000000000..26666870dc361 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/dml/script.sql @@ -0,0 +1,20 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +INSERT INTO TEST VALUES(2, STRINGDECODE('abcsond\344rzeich\344 ') || char(22222) || STRINGDECODE(' \366\344\374\326\304\334\351\350\340\361!')); +> update count: 1 + +script nopasswords nosettings; +> SCRIPT +> ------------------------------------------------------------------------------------------------------------------------------------------------------------- +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID); +> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, NAME VARCHAR(255) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> INSERT INTO PUBLIC.TEST(ID, NAME) VALUES (2, STRINGDECODE('abcsond\u00e4rzeich\u00e4 \u56ce \u00f6\u00e4\u00fc\u00d6\u00c4\u00dc\u00e9\u00e8\u00e0\u00f1!')); +> rows: 5 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/dml/with.sql b/modules/h2/src/test/java/org/h2/test/scripts/dml/with.sql new file mode 100644 index 0000000000000..8ea464fc7db40 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/dml/with.sql @@ -0,0 +1,55 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- +explain with recursive r(n) as ( + (select 1) union all (select n+1 from r where n < 3) +) +select n from r; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> WITH RECURSIVE R(N) AS ( (SELECT 1 FROM SYSTEM_RANGE(1, 1) /* PUBLIC.RANGE_INDEX */) UNION ALL (SELECT (N + 1) FROM PUBLIC.R /* PUBLIC.R.tableScan */ WHERE N < 3) ) SELECT N FROM R R /* null */ +> rows: 1 + +select sum(n) from ( + with recursive r(n) as ( + (select 1) union all (select n+1 from r where n < 3) + ) + select n from r +); +> SUM(N) +> ------ +> 6 +> rows: 1 + +select sum(n) from (select 0) join ( + with recursive r(n) as ( + (select 1) union all (select n+1 from r where n < 3) + ) + select n from r +) on 1=1; +> SUM(N) +> ------ +> 6 +> rows: 1 + +select 0 from ( + select 0 where 0 in ( + with recursive r(n) as ( + (select 1) union all (select n+1 from r where n < 3) + ) + select n from r + ) +); +> 0 +> - +> rows: 0 +with + r0(n,k) as (select -1, 0), + r1(n,k) as ((select 1, 0) union all (select n+1,k+1 from r1 where n <= 3)), + r2(n,k) as ((select 10,0) union all (select n+1,k+1 from r2 where n <= 13)) + select r1.k, r0.n as N0, r1.n AS N1, r2.n AS n2 from r0 inner join r1 ON r1.k= r0.k inner join r2 ON r1.k= r2.k; +> K N0 N1 N2 +> - -- -- -- +> 0 -1 1 10 +> rows: 1 \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/array-agg.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/array-agg.sql new file mode 100644 index 0000000000000..7c64ba48296eb --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/array-agg.sql @@ -0,0 +1,45 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: Alex Nordlund +-- + +-- with filter condition + +create table test(v varchar); +> ok + +insert into test values ('1'), ('2'), ('3'), ('4'), ('5'), ('6'), ('7'), ('8'), ('9'); +> update count: 9 + +select array_agg(v order by v asc), + array_agg(v order by v desc) filter (where v >= '4') + from test where v >= '2'; +> ARRAY_AGG(V ORDER BY V) ARRAY_AGG(V ORDER BY V DESC) FILTER (WHERE (V >= '4')) +> ---------------------------------------------------------------- ------------------------------------------------------------------------------------ +> (2, 3, 4, 5, 6, 7, 8, 9) (9, 8, 7, 6, 5, 4) +> rows (ordered): 1 + +create index test_idx on test(v); + +select ARRAY_AGG(v order by v asc), + ARRAY_AGG(v order by v desc) filter (where v >= '4') + from test where v >= '2'; +> ARRAY_AGG(V ORDER BY V) ARRAY_AGG(V ORDER BY V DESC) FILTER (WHERE (V >= '4')) +> ---------------------------------------------------------------- ------------------------------------------------------------------------------------ +> (2, 3, 4, 5, 6, 7, 8, 9) (9, 8, 7, 6, 5, 4) +> rows (ordered): 1 + +select ARRAY_AGG(v order by v asc), + ARRAY_AGG(v order by v desc) filter (where v >= '4') + from test; +> ARRAY_AGG(V ORDER BY V) ARRAY_AGG(V ORDER BY V DESC) FILTER (WHERE (V >= '4')) +> ------------------------------------------------------------------------ ------------------------------------------------------------------------------------ +> (1, 2, 3, 4, 5, 6, 7, 8, 9) (9, 8, 7, 6, 5, 4) +> rows (ordered): 1 + + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/avg.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/avg.sql new file mode 100644 index 0000000000000..33fba11f40e68 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/avg.sql @@ -0,0 +1,29 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +-- with filter condition + +create table test(v int); +> ok + +insert into test values (10), (20), (30), (40), (50), (60), (70), (80), (90), (100), (110), (120); +> update count: 12 + +select avg(v), avg(v) filter (where v >= 40) from test where v <= 100; +> AVG(V) AVG(V) FILTER (WHERE (V >= 40)) +> ------ ------------------------------- +> 55 70 +> rows: 1 + +create index test_idx on test(v); + +select avg(v), avg(v) filter (where v >= 40) from test where v <= 100; +> AVG(V) AVG(V) FILTER (WHERE (V >= 40)) +> ------ ------------------------------- +> 55 70 +> rows: 1 + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bit-and.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bit-and.sql new file mode 100644 index 0000000000000..e53338cbe6714 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bit-and.sql @@ -0,0 +1,32 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +-- with filter condition + +create table test(v bigint); +> ok + +insert into test values + (0xfffffffffff0), (0xffffffffff0f), (0xfffffffff0ff), (0xffffffff0fff), + (0xfffffff0ffff), (0xffffff0fffff), (0xfffff0ffffff), (0xffff0fffffff), + (0xfff0ffffffff), (0xff0fffffffff), (0xf0ffffffffff), (0x0fffffffffff); +> update count: 12 + +select bit_and(v), bit_and(v) filter (where v <= 0xffffffff0fff) from test where v >= 0xff0fffffffff; +> BIT_AND(V) BIT_AND(V) FILTER (WHERE (V <= 281474976649215)) +> --------------- ------------------------------------------------ +> 280375465082880 280375465086975 +> rows: 1 + +create index test_idx on test(v); + +select bit_and(v), bit_and(v) filter (where v <= 0xffffffff0fff) from test where v >= 0xff0fffffffff; +> BIT_AND(V) BIT_AND(V) FILTER (WHERE (V <= 281474976649215)) +> --------------- ------------------------------------------------ +> 280375465082880 280375465086975 +> rows: 1 + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bit-or.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bit-or.sql new file mode 100644 index 0000000000000..890775eca6e56 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bit-or.sql @@ -0,0 +1,31 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +-- with filter condition + +-- with filter condition + +create table test(v bigint); +> ok + +insert into test values (1), (2), (4), (8), (16), (32), (64), (128), (256), (512), (1024), (2048); +> update count: 12 + +select bit_or(v), bit_or(v) filter (where v >= 8) from test where v <= 512; +> BIT_OR(V) BIT_OR(V) FILTER (WHERE (V >= 8)) +> --------- --------------------------------- +> 1023 1016 +> rows: 1 + +create index test_idx on test(v); + +select bit_or(v), bit_or(v) filter (where v >= 8) from test where v <= 512; +> BIT_OR(V) BIT_OR(V) FILTER (WHERE (V >= 8)) +> --------- --------------------------------- +> 1023 1016 +> rows: 1 + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bool-and.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bool-and.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bool-and.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bool-or.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bool-or.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/bool-or.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/count.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/count.sql new file mode 100644 index 0000000000000..29e3311a3da91 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/count.sql @@ -0,0 +1,35 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +-- with filter condition + +create table test(v int); +> ok + +insert into test values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12); +> update count: 12 + +select count(v), count(v) filter (where v >= 4) from test where v <= 10; +> COUNT(V) COUNT(V) FILTER (WHERE (V >= 4)) +> -------- -------------------------------- +> 10 7 +> rows: 1 + +create index test_idx on test(v); + +select count(v), count(v) filter (where v >= 4) from test where v <= 10; +> COUNT(V) COUNT(V) FILTER (WHERE (V >= 4)) +> -------- -------------------------------- +> 10 7 +> rows: 1 + +select count(v), count(v) filter (where v >= 4) from test; +> COUNT(V) COUNT(V) FILTER (WHERE (V >= 4)) +> -------- -------------------------------- +> 12 9 +> rows: 1 + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/group-concat.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/group-concat.sql new file mode 100644 index 0000000000000..8c70e89a650e3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/group-concat.sql @@ -0,0 +1,42 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +-- with filter condition + +create table test(v varchar); +> ok + +insert into test values ('1'), ('2'), ('3'), ('4'), ('5'), ('6'), ('7'), ('8'), ('9'); +> update count: 9 + +select group_concat(v order by v asc separator '-'), + group_concat(v order by v desc separator '-') filter (where v >= '4') + from test where v >= '2'; +> GROUP_CONCAT(V ORDER BY V SEPARATOR '-') GROUP_CONCAT(V ORDER BY V DESC SEPARATOR '-') FILTER (WHERE (V >= '4')) +> ---------------------------------------- ----------------------------------------------------------------------- +> 2-3-4-5-6-7-8-9 9-8-7-6-5-4 +> rows (ordered): 1 + +create index test_idx on test(v); + +select group_concat(v order by v asc separator '-'), + group_concat(v order by v desc separator '-') filter (where v >= '4') + from test where v >= '2'; +> GROUP_CONCAT(V ORDER BY V SEPARATOR '-') GROUP_CONCAT(V ORDER BY V DESC SEPARATOR '-') FILTER (WHERE (V >= '4')) +> ---------------------------------------- ----------------------------------------------------------------------- +> 2-3-4-5-6-7-8-9 9-8-7-6-5-4 +> rows (ordered): 1 + +select group_concat(v order by v asc separator '-'), + group_concat(v order by v desc separator '-') filter (where v >= '4') + from test; +> GROUP_CONCAT(V ORDER BY V SEPARATOR '-') GROUP_CONCAT(V ORDER BY V DESC SEPARATOR '-') FILTER (WHERE (V >= '4')) +> ---------------------------------------- ----------------------------------------------------------------------- +> 1-2-3-4-5-6-7-8-9 9-8-7-6-5-4 +> rows (ordered): 1 + + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/max.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/max.sql new file mode 100644 index 0000000000000..5ee2d4fb16581 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/max.sql @@ -0,0 +1,35 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +-- with filter condition + +create table test(v int); +> ok + +insert into test values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12); +> update count: 12 + +select max(v), max(v) filter (where v <= 8) from test where v <= 10; +> MAX(V) MAX(V) FILTER (WHERE (V <= 8)) +> ------ ------------------------------ +> 10 8 +> rows: 1 + +create index test_idx on test(v); + +select max(v), max(v) filter (where v <= 8) from test where v <= 10; +> MAX(V) MAX(V) FILTER (WHERE (V <= 8)) +> ------ ------------------------------ +> 10 8 +> rows: 1 + +select max(v), max(v) filter (where v <= 8) from test; +> MAX(V) MAX(V) FILTER (WHERE (V <= 8)) +> ------ ------------------------------ +> 12 8 +> rows: 1 + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/median.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/median.sql new file mode 100644 index 0000000000000..f2a9b2cdffd30 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/median.sql @@ -0,0 +1,653 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group
+--
+
+-- with filter condition
+
+create table test(v int);
+> ok
+
+insert into test values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12);
+> update count: 12
+
+select max(v), max(v) filter (where v <= 8) from test where v <= 10;
+> MAX(V) MAX(V) FILTER (WHERE (V <= 8))
+> ------ ------------------------------
+> 10     8
+> rows: 1
+
+create index test_idx on test(v);
+
+select max(v), max(v) filter (where v <= 8) from test where v <= 10;
+> MAX(V) MAX(V) FILTER (WHERE (V <= 8))
+> ------ ------------------------------
+> 10     8
+> rows: 1
+
+select max(v), max(v) filter (where v <= 8) from test;
+> MAX(V) MAX(V) FILTER (WHERE (V <= 8))
+> ------ ------------------------------
+> 12     8
+> rows: 1
+
+drop table test;
+> ok
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/median.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/median.sql
new file mode 100644
index 0000000000000..f2a9b2cdffd30
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/median.sql
@@ -0,0 +1,653 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+-- ASC
+create table test(v tinyint);
+> ok
+
+create index test_idx on test(v asc);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+-- ASC NULLS FIRST
+create table test(v tinyint);
+> ok
+
+create index test_idx on test(v asc nulls first);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+-- ASC NULLS LAST
+create table test(v tinyint);
+> ok
+
+create index test_idx on test(v asc nulls last);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+-- DESC
+create table test(v tinyint);
+> ok
+
+create index test_idx on test(v desc);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+-- DESC NULLS FIRST
+create table test(v tinyint);
+> ok
+
+create index test_idx on test(v desc nulls first);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+-- DESC NULLS LAST
+create table test(v tinyint);
+> ok
+
+create index test_idx on test(v desc nulls last);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+create table test(v tinyint);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+create table test(v smallint);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+create table test(v int);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+create table test(v bigint);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20
+
+select median(distinct v) from test;
+>> 15
+
+insert into test values (10);
+> update count: 1
+
+select median(v) from test;
+>> 15
+
+drop table test;
+> ok
+
+create table test(v real);
+> ok
+
+insert into test values (2), (2), (1);
+> update count: 3
+
+select median(v) from test;
+>> 2.0
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 2.0
+
+select median(distinct v) from test;
+>> 1.5
+
+insert into test values (1);
+> update count: 1
+
+select median(v) from test;
+>> 1.5
+
+drop table test;
+> ok
+
+create table test(v double);
+> ok
+
+insert into test values (2), (2), (1);
+> update count: 3
+
+select median(v) from test;
+>> 2.0
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 2.0
+
+select median(distinct v) from test;
+>> 1.5
+
+insert into test values (1);
+> update count: 1
+
+select median(v) from test;
+>> 1.5
+
+drop table test;
+> ok
+
+create table test(v numeric(1));
+> ok
+
+insert into test values (2), (2), (1);
+> update count: 3
+
+select median(v) from test;
+>> 2
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 2
+
+select median(distinct v) from test;
+>> 1.5
+
+insert into test values (1);
+> update count: 1
+
+select median(v) from test;
+>> 1.5
+
+drop table test;
+> ok
+
+create table test(v time);
+> ok
+
+insert into test values ('20:00:00'), ('20:00:00'), ('10:00:00');
+> update count: 3
+
+select median(v) from test;
+>> 20:00:00
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 20:00:00
+
+select median(distinct v) from test;
+>> 15:00:00
+
+insert into test values ('10:00:00');
+> update count: 1
+
+select median(v) from test;
+>> 15:00:00
+
+drop table test;
+> ok
+
+create table test(v date);
+> ok
+
+insert into test values ('2000-01-20'), ('2000-01-20'), ('2000-01-10');
+> update count: 3
+
+select median(v) from test;
+>> 2000-01-20
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 2000-01-20
+
+select median(distinct v) from test;
+>> 2000-01-15
+
+insert into test values ('2000-01-10');
+> update count: 1
+
+select median(v) from test;
+>> 2000-01-15
+
+drop table test;
+> ok
+
+create table test(v timestamp);
+> ok
+
+insert into test values ('2000-01-20 20:00:00'), ('2000-01-20 20:00:00'), ('2000-01-10 10:00:00');
+> update count: 3
+
+select median(v) from test;
+>> 2000-01-20 20:00:00
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 2000-01-20 20:00:00
+
+select median(distinct v) from test;
+>> 2000-01-15 15:00:00
+
+insert into test values ('2000-01-10 10:00:00');
+> update count: 1
+
+select median(v) from test;
+>> 2000-01-15 15:00:00
+
+delete from test;
+> update count: 5
+
+insert into test values ('2000-01-20 20:00:00'), ('2000-01-21 20:00:00');
+> update count: 2
+
+select median(v) from test;
+>> 2000-01-21 08:00:00
+
+drop table test;
+> ok
+
+create table test(v timestamp with time zone);
+> ok
+
+insert into test values ('2000-01-20 20:00:00+04'), ('2000-01-20 20:00:00+04'), ('2000-01-10 10:00:00+02');
+> update count: 3
+
+select median(v) from test;
+>> 2000-01-20 20:00:00+04
+
+insert into test values (null);
+> update count: 1
+
+select median(v) from test;
+>> 2000-01-20 20:00:00+04
+
+select median(distinct v) from test;
+>> 2000-01-15 15:00:00+03
+
+insert into test values ('2000-01-10 10:00:00+02');
+> update count: 1
+
+select median(v) from test;
+>> 2000-01-15 15:00:00+03
+
+delete from test;
+> update count: 5
+
+insert into test values ('2000-01-20 20:00:00+10:15'), ('2000-01-21 20:00:00-09');
+> update count: 2
+
+select median(v) from test;
+>> 2000-01-21 08:00:30+00:37
+
+drop table test;
+> ok
+
+-- with group by
+create table test(name varchar, value int);
+> ok
+
+insert into test values ('Group 2A', 10), ('Group 2A', 10), ('Group 2A', 20),
+    ('Group 1X', 40), ('Group 1X', 50), ('Group 3B', null);
+> update count: 6
+
+select name, median(value) from test group by name order by name;
+> NAME     MEDIAN(VALUE)
+> -------- -------------
+> Group 1X 45
+> Group 2A 10
+> Group 3B null
+> rows (ordered): 3
+
+drop table test;
+> ok
+
+-- with filter
+create table test(v int);
+> ok
+
+insert into test values (20), (20), (10);
+> update count: 3
+
+select median(v) from test where v <> 20;
+>> 10
+
+create index test_idx on test(v asc);
+> ok
+
+select median(v) from test where v <> 20;
+>> 10
+
+drop table test;
+> ok
+
+-- two-column index
+create table test(v int, v2 int);
+> ok
+
+create index test_idx on test(v, v2);
+> ok
+
+insert into test values (20, 1), (10, 2), (20, 3);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+drop table test;
+> ok
+
+-- not null column
+create table test (v int not null);
+> ok
+
+create index test_idx on test(v desc);
+> ok
+
+select median(v) from test;
+>> null
+
+insert into test values (10), (20);
+> update count: 2
+
+select median(v) from test;
+>> 15
+
+insert into test values (20), (10), (20);
+> update count: 3
+
+select median(v) from test;
+>> 20
+
+drop table test;
+> ok
+
+-- with filter condition
+
+create table test(v int);
+> ok
+
+insert into test values (10), (20), (30), (40), (50), (60), (70), (80), (90), (100), (110), (120);
+> update count: 12
+
+select median(v), median(v) filter (where v >= 40) from test where v <= 100;
+> MEDIAN(V) MEDIAN(V) FILTER (WHERE (V >= 40))
+> --------- ----------------------------------
+> 55        70
+> rows: 1
+
+create index test_idx on test(v);
+
+select median(v), median(v) filter (where v >= 40) from test where v <= 100;
+> MEDIAN(V) MEDIAN(V) FILTER (WHERE (V >= 40))
+> --------- ----------------------------------
+> 55        70
+> rows: 1
+
+select median(v), median(v) filter (where v >= 40) from test;
+> MEDIAN(V) MEDIAN(V) FILTER (WHERE (V >= 40))
+> --------- ----------------------------------
+> 65        80
+> rows: 1
+
+drop table test;
+> ok
+
+-- with filter and group by
+
+create table test(dept varchar, amount int);
+> ok
+
+insert into test values
+    ('First', 10), ('First', 10), ('First', 20), ('First', 30), ('First', 30),
+    ('Second', 5), ('Second', 4), ('Second', 20), ('Second', 22), ('Second', 300),
+    ('Third', 3), ('Third', 100), ('Third', 150), ('Third', 170), ('Third', 400);
+
+select dept, median(amount) from test group by dept order by dept;
+> DEPT   MEDIAN(AMOUNT)
+> ------ --------------
+> First  20
+> Second 20
+> Third  150
+> rows (ordered): 3
+
+select dept, median(amount) filter (where amount >= 20) from test group by dept order by dept;
+> DEPT   MEDIAN(AMOUNT) FILTER (WHERE (AMOUNT >= 20))
+> ------ --------------------------------------------
+> First  30
+> Second 22
+> Third  160
+> rows (ordered): 3
+
+select dept, median(amount) filter (where amount >= 20) from test
+    where (amount < 200) group by dept order by dept;
+> DEPT   MEDIAN(AMOUNT) FILTER (WHERE (AMOUNT >= 20))
+> ------ --------------------------------------------
+> First  30
+> Second 21
+> Third  150
+> rows (ordered): 3
+
+drop table test;
+> ok
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/min.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/min.sql
new file mode 100644
index 0000000000000..c640abc5210fd
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/min.sql
@@ -0,0 +1,35 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+-- with filter condition
+
+create table test(v int);
+> ok
+
+insert into test values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12);
+> update count: 12
+
+select min(v), min(v) filter (where v >= 4) from test where v >= 2;
+> MIN(V) MIN(V) FILTER (WHERE (V >= 4))
+> ------ ------------------------------
+> 2      4
+> rows: 1
+
+create index test_idx on test(v);
+
+select min(v), min(v) filter (where v >= 4) from test where v >= 2;
+> MIN(V) MIN(V) FILTER (WHERE (V >= 4))
+> ------ ------------------------------
+> 2      4
+> rows: 1
+
+select min(v), min(v) filter (where v >= 4) from test;
+> MIN(V) MIN(V) FILTER (WHERE (V >= 4))
+> ------ ------------------------------
+> 1      4
+> rows: 1
+
+drop table test;
+> ok
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/selectivity.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/selectivity.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/selectivity.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/stddev-pop.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/stddev-pop.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/stddev-pop.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/stddev-samp.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/stddev-samp.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/stddev-samp.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/sum.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/sum.sql
new file mode 100644
index 0000000000000..3ac6474356d8c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/sum.sql
@@ -0,0 +1,29 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+-- with filter condition
+
+create table test(v int);
+> ok
+
+insert into test values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12);
+> update count: 12
+
+select sum(v), sum(v) filter (where v >= 4) from test where v <= 10;
+> SUM(V) SUM(V) FILTER (WHERE (V >= 4))
+> ------ ------------------------------
+> 55     49
+> rows: 1
+
+create index test_idx on test(v);
+
+select sum(v), sum(v) filter (where v >= 4) from test where v <= 10;
+> SUM(V) SUM(V) FILTER (WHERE (V >= 4))
+> ------ ------------------------------
+> 55     49
+> rows: 1
+
+drop table test;
+> ok
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/var-pop.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/var-pop.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/var-pop.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/var-samp.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/var-samp.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/aggregate/var-samp.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/abs.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/abs.sql
new file mode 100644
index 0000000000000..b9212dab8fd6c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/abs.sql
@@ -0,0 +1,32 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select abs(-1) r1, abs(id) r1b from test;
+> R1 R1B
+> -- ---
+> 1  1
+> rows: 1
+
+select abs(sum(id)) from test;
+>> 1
+
+select abs(null) vn, abs(-1) r1, abs(1) r2, abs(0) r3, abs(-0.1) r4, abs(0.1) r5 from test;
+> VN   R1 R2 R3 R4  R5
+> ---- -- -- -- --- ---
+> null 1  1  0  0.1 0.1
+> rows: 1
+
+select * from table(id int=(1, 2), name varchar=('Hello', 'World')) x order by id;
+> ID NAME
+> -- -----
+> 1  Hello
+> 2  World
+> rows (ordered): 2
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/acos.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/acos.sql
new file mode 100644
index 0000000000000..b3b54e1222faa
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/acos.sql
@@ -0,0 +1,17 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select acos(null) vn, acos(-1) r1 from test;
+> VN   R1
+> ---- -----------------
+> null 3.141592653589793
+> rows: 1
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/asin.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/asin.sql
new file mode 100644
index 0000000000000..2c1e1f3d657df
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/asin.sql
@@ -0,0 +1,17 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select asin(null) vn, asin(-1) r1 from test;
+> VN   R1
+> ---- -------------------
+> null -1.5707963267948966
+> rows: 1
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/atan.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/atan.sql
new file mode 100644
index 0000000000000..1097f68084ac2
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/atan.sql
@@ -0,0 +1,17 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select atan(null) vn, atan(-1) r1 from test;
+> VN   R1
+> ---- -------------------
+> null -0.7853981633974483
+> rows: 1
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/atan2.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/atan2.sql
new file mode 100644
index 0000000000000..bd3250ecdaeb5
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/atan2.sql
@@ -0,0 +1,18 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select atan2(null, null) vn, atan2(10, 1) r1 from test;
+> VN   R1
+> ---- ------------------
+> null 1.4711276743037347
+> rows: 1
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitand.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitand.sql
new file mode 100644
index 0000000000000..413f47d24cce7
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitand.sql
@@ -0,0 +1,20 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select bitand(null, 1) vn, bitand(1, null) vn1, bitand(null, null) vn2, bitand(3, 6) e2 from test;
+> VN   VN1  VN2  E2
+> ---- ---- ---- --
+> null null null 2
+> rows: 1
+
+
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitget.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitget.sql
new file mode 100644
index 0000000000000..67280a7523bc1
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitget.sql
@@ -0,0 +1,5 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitor.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitor.sql
new file mode 100644
index 0000000000000..e010b22df452c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitor.sql
@@ -0,0 +1,20 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select bitor(null, 1) vn, bitor(1, null) vn1, bitor(null, null) vn2, bitor(3, 6) e7 from test;
+> VN   VN1  VN2  E7
+> ---- ---- ---- --
+> null null null 7
+> rows: 1
+
+
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitxor.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitxor.sql
new file mode 100644
index 0000000000000..209f02ee696f2
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/bitxor.sql
@@ -0,0 +1,20 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select bitxor(null, 1) vn, bitxor(1, null) vn1, bitxor(null, null) vn2, bitxor(3, 6) e5 from test;
+> VN   VN1  VN2  E5
+> ---- ---- ---- --
+> null null null 5
+> rows: 1
+
+
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/ceil.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/ceil.sql
new file mode 100644
index 0000000000000..39e900cbdddd5
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/ceil.sql
@@ -0,0 +1,21 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select ceil(null) vn, ceil(1) v1, ceiling(1.1) v2, ceil(-1.1) v3, ceiling(1.9) v4, ceiling(-1.9) v5 from test;
+> VN   V1  V2  V3   V4  V5
+> ---- --- --- ---- --- ----
+> null 1.0 2.0 -1.0 2.0 -1.0
+> rows: 1
+
+
+
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/compress.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/compress.sql
new file mode 100644
index 0000000000000..67280a7523bc1
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/compress.sql
@@ -0,0 +1,5 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cos.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cos.sql
new file mode 100644
index 0000000000000..48d8b3fb46d5e
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cos.sql
@@ -0,0 +1,18 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select cos(null) vn, cos(-1) r1 from test;
+> VN   R1
+> ---- ------------------
+> null 0.5403023058681398
+> rows: 1
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cosh.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cosh.sql
new file mode 100644
index 0000000000000..67280a7523bc1
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cosh.sql
@@ -0,0 +1,5 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cot.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cot.sql
new file mode 100644
index 0000000000000..5b92a353ac266
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/cot.sql
@@ -0,0 +1,18 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select cot(null) vn, cot(-1) r1 from test;
+> VN   R1
+> ---- -------------------
+> null -0.6420926159343306
+> rows: 1
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/decrypt.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/decrypt.sql
new file mode 100644
index 0000000000000..583667f5757f3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/decrypt.sql
@@ -0,0 +1,10 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+call utf8tostring(decrypt('AES', '00000000000000000000000000000000', 'dbd42d55d4b923c4b03eba0396fac98e'));
+>> Hello World Test
+
+call utf8tostring(decrypt('AES', hash('sha256', stringtoutf8('Hello'), 1000), encrypt('AES', hash('sha256', stringtoutf8('Hello'), 1000), stringtoutf8('Hello World Test'))));
+>> Hello World Test
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/degrees.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/degrees.sql
new file mode 100644
index 0000000000000..c406956c1862c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/degrees.sql
@@ -0,0 +1,20 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+-- Truncate least significant digits because implementations returns slightly
+-- different results depending on Java version
+select degrees(null) vn, truncate(degrees(1), 10) v1, truncate(degrees(1.1), 10) v2,
+    truncate(degrees(-1.1), 10) v3, truncate(degrees(1.9), 10) v4,
+    truncate(degrees(-1.9), 10) v5 from test;
+> VN   V1           V2            V3             V4             V5
+> ---- ------------ ------------- -------------- -------------- ---------------
+> null 57.295779513 63.0253574643 -63.0253574643 108.8619810748 -108.8619810748
+> rows: 1
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/encrypt.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/encrypt.sql
new file mode 100644
index 0000000000000..4c602362a4327
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/encrypt.sql
@@ -0,0 +1,13 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+call encrypt('AES', '00000000000000000000000000000000', stringtoutf8('Hello World Test'));
+>> dbd42d55d4b923c4b03eba0396fac98e
+
+CALL ENCRYPT('XTEA', '00', STRINGTOUTF8('Test'));
+>> 8bc9a4601b3062692a72a5941072425f
+
+call encrypt('XTEA', '000102030405060708090a0b0c0d0e0f', '4142434445464748');
+>> dea0b0b40966b0669fbae58ab503765f
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/exp.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/exp.sql
new file mode 100644
index 0000000000000..b3720ccb062d9
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/exp.sql
@@ -0,0 +1,19 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select exp(null) vn, left(exp(1), 4) v1, left(exp(1.1), 4) v2, left(exp(-1.1), 4) v3, left(exp(1.9), 4) v4, left(exp(-1.9), 4) v5 from test;
+> VN   V1   V2   V3   V4   V5
+> ---- ---- ---- ---- ---- ----
+> null 2.71 3.00 0.33 6.68 0.14
+> rows: 1
+
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/expand.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/expand.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/expand.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/floor.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/floor.sql
new file mode 100644
index 0000000000000..4d823e7b66772
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/floor.sql
@@ -0,0 +1,22 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select floor(null) vn, floor(1) v1, floor(1.1) v2, floor(-1.1) v3, floor(1.9) v4, floor(-1.9) v5 from test;
+> VN   V1  V2  V3   V4  V5
+> ---- --- --- ---- --- ----
+> null 1.0 1.0 -2.0 1.0 -2.0
+> rows: 1
+
+
+
+
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/hash.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/hash.sql
new file mode 100644
index 0000000000000..4d874705785ba
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/hash.sql
@@ -0,0 +1,10 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+call hash('SHA256', stringtoutf8('Hello'), 1);
+>> 185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969
+
+CALL HASH('SHA256', STRINGTOUTF8('Password'), 1000);
+>> c644a176ce920bde361ac336089b06cc2f1514dfa95ba5aabfe33f9a22d577f0
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/length.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/length.sql
new file mode 100644
index 0000000000000..f60f3a29083b1
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/length.sql
@@ -0,0 +1,45 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select bit_length(null) en, bit_length('') e0, bit_length('ab') e32 from test;
+> EN   E0 E32
+> ---- -- ---
+> null 0  32
+> rows: 1
+
+select length(null) en, length('') e0, length('ab') e2 from test;
+> EN   E0 E2
+> ---- -- --
+> null 0  2
+> rows: 1
+
+select char_length(null) en, char_length('') e0, char_length('ab') e2 from test;
+> EN   E0 E2
+> ---- -- --
+> null 0  2
+> rows: 1
+
+select character_length(null) en, character_length('') e0, character_length('ab') e2 from test;
+> EN   E0 E2
+> ---- -- --
+> null 0  2
+> rows: 1
+
+select octet_length(null) en, octet_length('') e0, octet_length('ab') e4 from test;
+> EN   E0 E4
+> ---- -- --
+> null 0  4
+> rows: 1
+
+
+
+
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/log.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/log.sql
new file mode 100644
index 0000000000000..20a8c5068a0d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/log.sql
@@ -0,0 +1,34 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select log(null) vn, log(1) v1, ln(1.1) v2, log(-1.1) v3, log(1.9) v4, log(-1.9) v5 from test; +> VN V1 V2 V3 V4 V5 +> ---- --- ------------------- --- ------------------ --- +> null 0.0 0.09531017980432493 NaN 0.6418538861723947 NaN +> rows: 1 + +select log10(null) vn, log10(0) v1, log10(10) v2, log10(0.0001) v3, log10(1000000) v4, log10(1) v5 from test; +> VN V1 V2 V3 V4 V5 +> ---- --------- --- ---- --- --- +> null -Infinity 1.0 -4.0 6.0 0.0 +> rows: 1 + +select log(null) vn, log(1) v1, log(1.1) v2, log(-1.1) v3, log(1.9) v4, log(-1.9) v5 from test; +> VN V1 V2 V3 V4 V5 +> ---- --- ------------------- --- ------------------ --- +> null 0.0 0.09531017980432493 NaN 0.6418538861723947 NaN +> rows: 1 + + + + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/mod.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/mod.sql new file mode 100644 index 0000000000000..20921ea7d001e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/mod.sql @@ -0,0 +1,20 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select mod(null, 1) vn, mod(1, null) vn1, mod(null, null) vn2, mod(10, 2) e1 from test; +> VN VN1 VN2 E1 +> ---- ---- ---- -- +> null null null 0 +> rows: 1 + + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/pi.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/pi.sql new file mode 100644 index 0000000000000..292f6f1513e4a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/pi.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select pi() from test; +>> 3.141592653589793 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/power.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/power.sql new file mode 100644 index 0000000000000..6a51fc99ad279 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/power.sql @@ -0,0 +1,20 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select power(null, null) en, power(2, 3) e8, power(16, 0.5) e4 from test; +> EN E8 E4 +> ---- --- --- +> null 8.0 4.0 +> rows: 1 + + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/radians.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/radians.sql new file mode 100644 index 0000000000000..e9657f7ff587f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/radians.sql @@ -0,0 +1,20 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +-- Truncate least significant digits because implementations returns slightly +-- different results depending on Java version +select radians(null) vn, truncate(radians(1), 10) v1, truncate(radians(1.1), 10) v2, + truncate(radians(-1.1), 10) v3, truncate(radians(1.9), 10) v4, + truncate(radians(-1.9), 10) v5 from test; +> VN V1 V2 V3 V4 V5 +> ---- ------------ ------------ ------------- ------------ ------------- +> null 0.0174532925 0.0191986217 -0.0191986217 0.0331612557 -0.0331612557 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/rand.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/rand.sql new file mode 100644 index 0000000000000..4433f4bdf9e3d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/rand.sql @@ -0,0 +1,19 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select rand(1) e, random() f from test; +> E F +> ------------------ ------------------- +> 0.7308781907032909 0.41008081149220166 +> rows: 1 + +select rand() from test; +>> 0.20771484130971707 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/random-uuid.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/random-uuid.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/random-uuid.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/round.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/round.sql new file mode 100644 index 0000000000000..dedfe6ddc2b62 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/round.sql @@ -0,0 +1,31 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select round(null, null) en, round(10.49, 0) e10, round(10.05, 1) e101 from test; +> EN E10 E101 +> ---- ---- ---- +> null 10.0 10.1 +> rows: 1 + +select round(null) en, round(0.6, null) en2, round(1.05) e1, round(-1.51) em2 from test; +> EN EN2 E1 EM2 +> ---- ---- --- ---- +> null null 1.0 -2.0 +> rows: 1 + +select roundmagic(null) en, roundmagic(cast(3.11 as double) - 3.1) e001, roundmagic(3.11-3.1-0.01) e000, roundmagic(2000000000000) e20x from test; +> EN E001 E000 E20X +> ---- ---- ---- ------ +> null 0.01 0.0 2.0E12 +> rows: 1 + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/roundmagic.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/roundmagic.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/roundmagic.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/secure-rand.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/secure-rand.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/secure-rand.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sign.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sign.sql new file mode 100644 index 0000000000000..d3fc866b41c9f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sign.sql @@ -0,0 +1,20 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select sign(null) en, sign(10) e1, sign(0) e0, sign(-0.1) em1 from test; +> EN E1 E0 EM1 +> ---- -- -- --- +> null 1 0 -1 +> rows: 1 + + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sin.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sin.sql new file mode 100644 index 0000000000000..67ab212ea7261 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sin.sql @@ -0,0 +1,18 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select sin(null) vn, sin(-1) r1 from test; +> VN R1 +> ---- ------------------- +> null -0.8414709848078965 +> rows: 1 + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sinh.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sinh.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sinh.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sqrt.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sqrt.sql new file mode 100644 index 0000000000000..c06ea69a91b44 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/sqrt.sql @@ -0,0 +1,19 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select sqrt(null) vn, sqrt(0) e0, sqrt(1) e1, sqrt(4) e2, sqrt(100) e10, sqrt(0.25) e05 from test; +> VN E0 E1 E2 E10 E05 +> ---- --- --- --- ---- --- +> null 0.0 1.0 2.0 10.0 0.5 +> rows: 1 + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/tan.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/tan.sql new file mode 100644 index 0000000000000..9250992d33679 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/tan.sql @@ -0,0 +1,19 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select tan(null) vn, tan(-1) r1 from test; +> VN R1 +> ---- ------------------- +> null -1.5574077246549023 +> rows: 1 + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/tanh.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/tanh.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/tanh.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/truncate.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/truncate.sql new file mode 100644 index 0000000000000..a7b567c6de955 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/truncate.sql @@ -0,0 +1,25 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select truncate(null, null) en, truncate(1.99, 0) e1, truncate(-10.9, 0) em10 from test; +> EN E1 EM10 +> ---- --- ----- +> null 1.0 -10.0 +> rows: 1 + +select trunc(null, null) en, trunc(1.99, 0) e1, trunc(-10.9, 0) em10 from test; +> EN E1 EM10 +> ---- --- ----- +> null 1.0 -10.0 +> rows: 1 + +select trunc(1.3); +>> 1.0 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/zero.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/zero.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/numeric/zero.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/ascii.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/ascii.sql new file mode 100644 index 0000000000000..1c52bb864059b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/ascii.sql @@ -0,0 +1,20 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select ascii(null) en, ascii('') en, ascii('Abc') e65 from test; +> EN EN E65 +> ---- ---- --- +> null null 65 +> rows: 1 + + + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/bit-length.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/bit-length.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/bit-length.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/char.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/char.sql new file mode 100644 index 0000000000000..559233a14c26b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/char.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); + +select char(null) en, char(65) ea from test; +> EN EA +> ---- -- +> null A +> rows: 1 + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/concat-ws.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/concat-ws.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/concat-ws.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/concat.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/concat.sql new file mode 100644 index 0000000000000..3c6effa8b0b4f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/concat.sql @@ -0,0 +1,17 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); + +select concat(null, null) en, concat(null, 'a') ea, concat('b', null) eb, concat('ab', 'c') abc from test; +> EN EA EB ABC +> ---- -- -- --- +> null a b abc +> rows: 1 + +SELECT CONCAT('a', 'b', 'c', 'd'); +>> abcd diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/difference.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/difference.sql new file mode 100644 index 0000000000000..75e6075054188 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/difference.sql @@ -0,0 +1,23 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); + +select difference(null, null) en, difference('a', null) en1, difference(null, 'a') en2 from test; +> EN EN1 EN2 +> ---- ---- ---- +> null null null +> rows: 1 + +select difference('abc', 'abc') e0, difference('Thomas', 'Tom') e1 from test; +> E0 E1 +> -- -- +> 4 3 +> rows: 1 + + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/hextoraw.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/hextoraw.sql new file mode 100644 index 0000000000000..5f2de7141d557 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/hextoraw.sql @@ -0,0 +1,17 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select hextoraw(null) en, rawtohex(null) en1, hextoraw(rawtohex('abc')) abc from test; +> EN EN1 ABC +> ---- ---- --- +> null null abc +> rows: 1 + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/insert.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/insert.sql new file mode 100644 index 0000000000000..c797b869c8b19 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/insert.sql @@ -0,0 +1,23 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select insert(null, null, null, null) en, insert('Rund', 1, 0, 'o') e_round, insert(null, 1, 1, 'a') ea from test; +> EN E_ROUND EA +> ---- ------- -- +> null Rund a +> rows: 1 + +select insert('World', 2, 4, 'e') welt, insert('Hello', 2, 1, 'a') hallo from test; +> WELT HALLO +> ---- ----- +> We Hallo +> rows: 1 + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/instr.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/instr.sql new file mode 100644 index 0000000000000..c63f659db2aac --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/instr.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select instr('Hello World', 'World') e7, instr('abchihihi', 'hi', 2) e3, instr('abcooo', 'o') e2 from test; +> E7 E3 E2 +> -- -- -- +> 7 4 4 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/left.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/left.sql new file mode 100644 index 0000000000000..e636b92eb1ae2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/left.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select left(null, 10) en, left('abc', null) en2, left('boat', 2) e_bo, left('', 1) ee, left('a', -1) ee2 from test; +> EN EN2 E_BO EE EE2 +> ---- ---- ---- -- --- +> null null bo +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/length.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/length.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/length.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/locate.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/locate.sql new file mode 100644 index 0000000000000..d850afbc24320 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/locate.sql @@ -0,0 +1,22 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select locate(null, null) en, locate(null, null, null) en1 from test; +> EN EN1 +> ---- ---- +> null null +> rows: 1 + +select locate('World', 'Hello World') e7, locate('hi', 'abchihihi', 2) e3 from test; +> E7 E3 +> -- -- +> 7 4 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/lower.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/lower.sql new file mode 100644 index 0000000000000..80601fa8c0dfa --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/lower.sql @@ -0,0 +1,22 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select lower(null) en, lower('Hello') hello, lower('ABC') abc from test; +> EN HELLO ABC +> ---- ----- --- +> null hello abc +> rows: 1 + +select lcase(null) en, lcase('Hello') hello, lcase('ABC') abc from test; +> EN HELLO ABC +> ---- ----- --- +> null hello abc +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/lpad.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/lpad.sql new file mode 100644 index 0000000000000..bb35879e4346a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/lpad.sql @@ -0,0 +1,7 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +select lpad('string', 10, '+'); +>> ++++string diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/ltrim.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/ltrim.sql new file mode 100644 index 0000000000000..08b4e5177f586 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/ltrim.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select ltrim(null) en, '>' || ltrim('a') || '<' ea, '>' || ltrim(' a ') || '<' e_as from test; +> EN EA E_AS +> ---- --- ---- +> null >a< >a < +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/octet-length.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/octet-length.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/octet-length.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/position.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/position.sql new file mode 100644 index 0000000000000..9a16fda06236e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/position.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select position(null, null) en, position(null, 'abc') en1, position('World', 'Hello World') e7, position('hi', 'abchihihi') e1 from test; +> EN EN1 E7 E1 +> ---- ---- -- -- +> null null 7 4 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rawtohex.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rawtohex.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rawtohex.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/regex-replace.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/regex-replace.sql new file mode 100644 index 0000000000000..2ce40011a5d64 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/regex-replace.sql @@ -0,0 +1,35 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +call regexp_replace('x', 'x', '\'); +> exception + +CALL REGEXP_REPLACE('abckaboooom', 'o+', 'o'); +>> abckabom + +select regexp_replace('Sylvain', 'S..', 'TOTO', 'mni'); +>> TOTOvain + +set mode oracle; + +select regexp_replace('first last', '(\w+) (\w+)', '\2 \1'); +>> last first + +select regexp_replace('first last', '(\w+) (\w+)', '\\2 \1'); +>> \2 first + +select regexp_replace('first last', '(\w+) (\w+)', '\$2 \1'); +>> $2 first + +select regexp_replace('first last', '(\w+) (\w+)', '$2 $1'); +>> $2 $1 + +set mode regular; + +select regexp_replace('first last', '(\w+) (\w+)', '\2 \1'); +>> 2 1 + +select regexp_replace('first last', '(\w+) (\w+)', '$2 $1'); +>> last first diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/regexp-like.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/regexp-like.sql new file mode 100644 index 0000000000000..b0c01524486e3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/regexp-like.sql @@ -0,0 +1,15 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +select 1 from dual where regexp_like('x', 'x', '\'); +> exception + +select x from dual where REGEXP_LIKE('A', '[a-z]', 'i'); +>> 1 + +select x from dual where REGEXP_LIKE('A', '[a-z]', 'c'); +> X +> - +> rows: 0 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/repeat.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/repeat.sql new file mode 100644 index 0000000000000..9857aa2a5845e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/repeat.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select repeat(null, null) en, repeat('Ho', 2) abcehoho , repeat('abc', 0) ee from test; +> EN ABCEHOHO EE +> ---- -------- -- +> null HoHo +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/replace.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/replace.sql new file mode 100644 index 0000000000000..a2816b5929247 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/replace.sql @@ -0,0 +1,31 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select replace(null, null) en, replace(null, null, null) en1 from test; +> EN EN1 +> ---- ---- +> null null +> rows: 1 + +select replace('abchihihi', 'i', 'o') abcehohoho, replace('that is tom', 'i') abcethstom from test; +> ABCEHOHOHO ABCETHSTOM +> ---------- ---------- +> abchohoho that s tom +> rows: 1 + +set mode oracle; +> ok + +select replace('white space', ' ', '') x, replace('white space', ' ', null) y from dual; +> X Y +> ---------- ---------- +> whitespace whitespace +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/right.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/right.sql new file mode 100644 index 0000000000000..7d1505074fbfc --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/right.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select right(null, 10) en, right('abc', null) en2, right('boat-trip', 2) e_ip, right('', 1) ee, right('a', -1) ee2 from test; +> EN EN2 E_IP EE EE2 +> ---- ---- ---- -- --- +> null null ip +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rpad.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rpad.sql new file mode 100644 index 0000000000000..3f157429d5d7e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rpad.sql @@ -0,0 +1,7 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +select rpad('string', 10, '+'); +>> string++++ diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rtrim.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rtrim.sql new file mode 100644 index 0000000000000..7ffc551fe72b2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/rtrim.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
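As a reading aid for the RIGHT() rows above: NULL in either argument propagates, and a zero or negative count yields the empty string. A small Python sketch of that visible behavior (a hypothetical `right` helper for illustration, not H2's implementation):

```python
def right(s, n):
    # Mirrors the RIGHT() expectations above: NULL (None) propagates,
    # and a non-positive count yields the empty string.
    if s is None or n is None:
        return None
    return s[-n:] if n > 0 else ''

assert right('boat-trip', 2) == 'ip'
assert right('', 1) == ''
assert right('a', -1) == ''
assert right(None, 10) is None
```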
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select rtrim(null) en, '>' || rtrim('a') || '<' ea, '>' || rtrim(' a ') || '<' es from test; +> EN EA ES +> ---- --- ---- +> null >a< > a< +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/soundex.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/soundex.sql new file mode 100644 index 0000000000000..cdc8cda4001b1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/soundex.sql @@ -0,0 +1,26 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); + +select soundex(null) en, soundex('tom') et from test; +> EN ET +> ---- ---- +> null t500 +> rows: 1 + +select +soundex('Washington') W252, soundex('Lee') L000, +soundex('Gutierrez') G362, soundex('Pfister') P236, +soundex('Jackson') J250, soundex('Tymczak') T522, +soundex('VanDeusen') V532, soundex('Ashcraft') A261 from test; +> W252 L000 G362 P236 J250 T522 V532 A261 +> ---- ---- ---- ---- ---- ---- ---- ---- +> W252 L000 G362 P236 J250 T522 V532 A261 +> rows: 1 + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/space.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/space.sql new file mode 100644 index 0000000000000..ac7378d06252d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/space.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
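The soundex expectations above follow classic American Soundex (the Washington/Tymczak/Ashcraft set is the standard reference list), with one quirk visible in the `soundex('tom')` row: the first character keeps its original case. A Python sketch reproducing those codes (an illustrative reimplementation, not H2's code):

```python
def soundex(name):
    # Classic American Soundex, matching the expectations above.
    # Note the 'tom' -> 't500' row: the first character keeps its case.
    codes = {}
    for chars, digit in (('BFPV', '1'), ('CGJKQSXZ', '2'), ('DT', '3'),
                         ('L', '4'), ('MN', '5'), ('R', '6')):
        for ch in chars:
            codes[ch] = digit
    upper = name.upper()
    result = name[0]
    prev = codes.get(upper[0], '')
    for ch in upper[1:]:
        digit = codes.get(ch, '')
        if digit and digit != prev:
            result += digit
            if len(result) == 4:
                break
        if ch not in 'HW':  # H and W keep the previous code alive
            prev = digit
    return (result + '000')[:4]

assert soundex('tom') == 't500'
assert soundex('Pfister') == 'P236'     # Pf collapses to a single code
assert soundex('Tymczak') == 'T522'     # cz collapses; vowel separates k
assert soundex('Ashcraft') == 'A261'    # H does not break the s/c run
```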
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select space(null) en, '>' || space(1) || '<' es, '>' || space(3) || '<' e2 from test; +> EN ES E2 +> ---- --- --- +> null > < > < +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringdecode.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringdecode.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringdecode.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringencode.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringencode.sql new file mode 100644 index 0000000000000..4561290889916 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringencode.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +INSERT INTO TEST VALUES(2, STRINGDECODE('abcsond\344rzeich\344 ') || char(22222) || STRINGDECODE(' \366\344\374\326\304\334\351\350\340\361!')); +> update count: 1 + +call STRINGENCODE(STRINGDECODE('abcsond\344rzeich\344 \u56ce \366\344\374\326\304\334\351\350\340\361!')); +>> abcsond\u00e4rzeich\u00e4 \u56ce \u00f6\u00e4\u00fc\u00d6\u00c4\u00dc\u00e9\u00e8\u00e0\u00f1! 
+ +CALL STRINGENCODE(STRINGDECODE('Lines 1\nLine 2')); +>> Lines 1\nLine 2 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringtoutf8.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringtoutf8.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/stringtoutf8.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/substring.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/substring.sql new file mode 100644 index 0000000000000..ed3290247fca2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/substring.sql @@ -0,0 +1,34 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select substr(null, null) en, substr(null, null, null) e1, substr('bob', 2) e_ob, substr('bob', 2, 1) eo from test; +> EN E1 E_OB EO +> ---- ---- ---- -- +> null null ob o +> rows: 1 + +select substring(null, null) en, substring(null, null, null) e1, substring('bob', 2) e_ob, substring('bob', 2, 1) eo from test; +> EN E1 E_OB EO +> ---- ---- ---- -- +> null null ob o +> rows: 1 + +select substring(null from null) en, substring(null from null for null) e1, substring('bob' from 2) e_ob, substring('bob' from 2 for 1) eo from test; +> EN E1 E_OB EO +> ---- ---- ---- -- +> null null ob o +> rows: 1 + +select substr('[Hello]', 2, 5); +>> Hello + +select substr('Hello World', -5); +>> World diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/to-char.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/to-char.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/to-char.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/translate.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/translate.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/translate.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
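The SUBSTR/SUBSTRING rows above use a 1-based start, an optional length, NULL propagation, and — per the last test — a negative start that counts back from the end of the string. A Python sketch of those rules (a hypothetical `substr` helper for illustration; H2 has more edge cases, e.g. a start of 0, than are modeled here):

```python
def substr(s, start, length=None):
    # 1-based start; negative start counts from the end, so
    # substr('Hello World', -5) picks up the final five characters.
    if s is None or start is None:
        return None
    i = start - 1 if start > 0 else len(s) + start
    end = len(s) if length is None else i + length
    return s[max(i, 0):end]

assert substr('bob', 2) == 'ob'
assert substr('bob', 2, 1) == 'o'
assert substr('[Hello]', 2, 5) == 'Hello'
assert substr('Hello World', -5) == 'World'
```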
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/trim.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/trim.sql new file mode 100644 index 0000000000000..3b5b7a94bacba --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/trim.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select TRIM(BOTH '_' FROM '__A__') A, TRIM(LEADING FROM ' B ') BS, TRIM(TRAILING 'x' FROM 'xAx') XA from test; +> A BS XA +> - -- -- +> A B xA +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/upper.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/upper.sql new file mode 100644 index 0000000000000..86da2a75e07da --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/upper.sql @@ -0,0 +1,22 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select ucase(null) en, ucase('Hello') hello, ucase('ABC') abc from test; +> EN HELLO ABC +> ---- ----- --- +> null HELLO ABC +> rows: 1 + +select upper(null) en, upper('Hello') hello, upper('ABC') abc from test; +> EN HELLO ABC +> ---- ----- --- +> null HELLO ABC +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/utf8tostring.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/utf8tostring.sql new file mode 100644 index 0000000000000..361959095f129 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/utf8tostring.sql @@ -0,0 +1,7 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CALL UTF8TOSTRING(STRINGTOUTF8('This is a test')); +>> This is a test diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlattr.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlattr.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlattr.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlcdata.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlcdata.sql new file mode 100644 index 0000000000000..660627e39e290 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlcdata.sql @@ -0,0 +1,10 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +CALL XMLCDATA('<test>'); +>> <![CDATA[<test>]]> + +CALL XMLCDATA('special text ]]>'); +>> special text ]]&gt; diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlcomment.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlcomment.sql new file mode 100644 index 0000000000000..4e69b80cff84c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlcomment.sql @@ -0,0 +1,10 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CALL XMLCOMMENT('Test'); +>> <!-- Test --> + +CALL XMLCOMMENT('--- test ---'); +>> <!-- - - - test - - - --> diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlnode.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlnode.sql new file mode 100644 index 0000000000000..e40425bb5a66e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlnode.sql @@ -0,0 +1,19 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CALL XMLNODE('a', XMLATTR('href', 'http://h2database.com')); +>> <a href="http://h2database.com"/> + +CALL XMLNODE('br'); +>> <br/>
+ +CALL XMLNODE('p', null, 'Hello World'); +>> <p>Hello World</p> + +SELECT XMLNODE('p', null, 'Hello' || chr(10) || 'World'); +>> <p> Hello World </p> + +SELECT XMLNODE('p', null, 'Hello' || chr(10) || 'World', false); +>> <p>Hello World</p> diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlstartdoc.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlstartdoc.sql new file mode 100644 index 0000000000000..6264db4b491ca --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmlstartdoc.sql @@ -0,0 +1,7 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CALL XMLSTARTDOC(); +>> <?xml version="1.0"?> diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmltext.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmltext.sql new file mode 100644 index 0000000000000..49af6c4606ad8 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/string/xmltext.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CALL XMLTEXT('test'); +>> test + +CALL XMLTEXT('<test>'); +>> &lt;test&gt; + +SELECT XMLTEXT('hello' || chr(10) || 'world'); +>> hello world + +CALL XMLTEXT('hello' || chr(10) || 'world', true); +>> hello&#xa;world diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-contains.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-contains.sql new file mode 100644 index 0000000000000..3d1850eba6d9c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-contains.sql @@ -0,0 +1,31 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +select array_contains((4.0, 2.0, 2.0), 2.0); +>> TRUE + +select array_contains((4.0, 2.0, 2.0), 5.0); +>> FALSE + +select array_contains(('one', 'two'), 'one'); +>> TRUE + +select array_contains(('one', 'two'), 'xxx'); +>> FALSE + +select array_contains(('one', 'two'), null); +>> FALSE + +select array_contains((null, 'two'), null); +>> TRUE + +select array_contains(null, 'one'); +>> FALSE + +select array_contains(((1, 2), (3, 4)), (1, 2)); +>> TRUE + +select array_contains(((1, 2), (3, 4)), (5, 6)); +>> FALSE diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-get.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-get.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-get.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-length.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-length.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/array-length.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/autocommit.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/autocommit.sql new file mode 100644 index 0000000000000..0f62edbeccf1e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/autocommit.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. 
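The ARRAY_CONTAINS rows above pin down the NULL semantics: a NULL array yields FALSE, a NULL search value matches only an actual NULL element, and nested arrays compare element-wise. Python tuples happen to model all of this with the plain `in` operator, so a sketch (illustration only, not H2 code) is tiny:

```python
def array_contains(arr, value):
    # NULL (None) array -> FALSE; a None element matches a None search
    # value; tuples compare element-wise, covering the nested-array rows.
    if arr is None:
        return False
    return value in arr

assert array_contains((4.0, 2.0, 2.0), 2.0) is True
assert array_contains(('one', 'two'), None) is False
assert array_contains((None, 'two'), None) is True
assert array_contains(None, 'one') is False
assert array_contains(((1, 2), (3, 4)), (1, 2)) is True
```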
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select autocommit() from test; +>> TRUE diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/cancel-session.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/cancel-session.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/cancel-session.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/casewhen.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/casewhen.sql new file mode 100644 index 0000000000000..3e2000d32bdf3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/casewhen.sql @@ -0,0 +1,46 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select casewhen(null, '1', '2') xn, casewhen(1>0, 'n', 'y') xy, casewhen(0<1, 'a', 'b') xa from test; +> XN XY XA +> -- -- -- +> 2 n a +> rows: 1 + +select x, case when x=0 then 'zero' else 'not zero' end y from system_range(0, 2); +> X Y +> - -------- +> 0 zero +> 1 not zero +> 2 not zero +> rows: 3 + +select x, case when x=0 then 'zero' end y from system_range(0, 1); +> X Y +> - ---- +> 0 zero +> 1 null +> rows: 2 + +select x, case x when 0 then 'zero' else 'not zero' end y from system_range(0, 1); +> X Y +> - -------- +> 0 zero +> 1 not zero +> rows: 2 + +select x, case x when 0 then 'zero' when 1 then 'one' end y from system_range(0, 2); +> X Y +> - ---- +> 0 zero +> 1 one +> 2 null +> rows: 3 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/cast.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/cast.sql new file mode 100644 index 0000000000000..d970175034f85 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/cast.sql @@ -0,0 +1,101 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + + +select cast(null as varchar(255)) xn, cast(' 10' as int) x10, cast(' 20 ' as int) x20 from test; +> XN X10 X20 +> ---- --- --- +> null 10 20 +> rows: 1 + +select cast(128 as binary); +>> 00000080 + +select cast(65535 as binary); +>> 0000ffff + +select cast(cast('ff' as binary) as tinyint) x; +>> -1 + +select cast(cast('7f' as binary) as tinyint) x; +>> 127 + +select cast(cast('ff' as binary) as smallint) x; +>> 255 + +select cast(cast('ff' as binary) as int) x; +>> 255 + +select cast(cast('ffff' as binary) as long) x; +>> 65535 + +select cast(cast(65535 as long) as binary); +>> 000000000000ffff + +select cast(cast(-1 as tinyint) as binary); +>> ff + +select cast(cast(-1 as smallint) as binary); +>> ffff + +select cast(cast(-1 as int) as binary); +>> ffffffff + +select cast(cast(-1 as long) as binary); +>> ffffffffffffffff + +select cast(cast(1 as tinyint) as binary); +>> 01 + +select cast(cast(1 as smallint) as binary); +>> 0001 + +select cast(cast(1 as int) as binary); +>> 00000001 + +select cast(cast(1 as long) as binary); +>> 0000000000000001 + +select cast(X'ff' as tinyint); +>> -1 + +select cast(X'ffff' as smallint); +>> -1 + +select cast(X'ffffffff' as int); +>> -1 + +select cast(X'ffffffffffffffff' as long); +>> -1 + +select cast(' 011 ' as int); +>> 11 + +select cast(cast(0.1 as real) as decimal); +>> 0.1 + +select cast(cast(95605327.73 as float) as decimal); +>> 95605327.73 + +select cast(cast('01020304-0506-0708-090a-0b0c0d0e0f00' as uuid) as binary); +>> 0102030405060708090a0b0c0d0e0f00 + +call cast('null' as uuid); +> exception + +select cast('12345678123456781234567812345678' as uuid); +>> 12345678-1234-5678-1234-567812345678 + +select cast('000102030405060708090a0b0c0d0e0f' as uuid); +>> 00010203-0405-0607-0809-0a0b0c0d0e0f + +select -cast(0 as double); +>> 0.0 diff --git 
a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/coalesce.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/coalesce.sql new file mode 100644 index 0000000000000..9a8a257b6deec --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/coalesce.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select coalesce(null, null) xn, coalesce(null, 'a') xa, coalesce('1', '2') x1 from test; +> XN XA X1 +> ---- -- -- +> null a 1 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/convert.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/convert.sql new file mode 100644 index 0000000000000..1f232bdd4aff6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/convert.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select convert(null, varchar(255)) xn, convert(' 10', int) x10, convert(' 20 ', int) x20 from test; +> XN X10 X20 +> ---- --- --- +> null 10 20 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/csvread.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/csvread.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/csvread.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. 
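The integer/BINARY CAST tests in cast.sql above are big-endian two's complement: a binary value as wide as the target type is reinterpreted as signed (so `X'ff'` as TINYINT is -1), while a narrower value is zero-extended (so binary `ff` as SMALLINT or INT is 255). The same arithmetic in Python, as a cross-check (illustration only):

```python
# Full-width binary -> signed reinterpretation (X'ff' as TINYINT -> -1):
assert int.from_bytes(bytes.fromhex('ff'), 'big', signed=True) == -1
# Narrower binary than the target -> zero-extension (-> 255):
assert int.from_bytes(bytes.fromhex('ff'), 'big', signed=False) == 255
assert int.from_bytes(bytes.fromhex('ffffffff'), 'big', signed=True) == -1

# Integer -> binary is the big-endian byte image of the value:
assert (128).to_bytes(4, 'big').hex() == '00000080'
assert (-1).to_bytes(8, 'big', signed=True).hex() == 'ffffffffffffffff'
```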
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/csvwrite.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/csvwrite.sql new file mode 100644 index 0000000000000..67280a7523bc1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/csvwrite.sql @@ -0,0 +1,5 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/currval.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/currval.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/currval.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/database-path.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/database-path.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/database-path.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/database.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/database.sql new file mode 100644 index 0000000000000..4feabeb826c8e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/database.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select right(database(), 6) from test; +>> SCRIPT diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/decode.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/decode.sql new file mode 100644 index 0000000000000..f0967dae9de23 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/decode.sql @@ -0,0 +1,31 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +select decode(null, null, 'a'); +>> a + +select decode(1, 1, 'a'); +>> a + +select decode(1, 2, 'a'); +>> null + +select decode(1, 1, 'a', 'else'); +>> a + +select decode(1, 2, 'a', 'else'); +>> else + +select decode(4.0, 2.0, 2.0, 3.0, 3.0); +>> null + +select decode('3', 2.0, 2.0, 3, 3.0); +>> 3.0 + +select decode(4.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0, 9.0); +>> 4.0 + +select decode(1, 1, '1', 1, '11') from dual; +>> 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/disk-space-used.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/disk-space-used.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/disk-space-used.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/file-read.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/file-read.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/file-read.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/file-write.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/file-write.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/file-write.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
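The DECODE rows above establish the rules: the first argument is compared against successive (search, result) pairs, NULL matches NULL, the first hit wins (see the `decode(1, 1, '1', 1, '11')` row), and an optional trailing argument is the default. A Python sketch of those rules (a hypothetical `decode` helper for illustration; the type coercion that lets `'3'` match `3` in the tests is not modeled):

```python
def decode(value, *args):
    # (search, result) pairs with an optional trailing default;
    # NULL (None) matches NULL, and the first matching pair wins.
    pairs, default = (args, None) if len(args) % 2 == 0 else (args[:-1], args[-1])
    for i in range(0, len(pairs), 2):
        if pairs[i] == value or (pairs[i] is None and value is None):
            return pairs[i + 1]
    return default

assert decode(None, None, 'a') == 'a'   # NULL matches NULL
assert decode(1, 2, 'a') is None        # no match, no default -> NULL
assert decode(1, 2, 'a', 'else') == 'else'
assert decode(1, 1, '1', 1, '11') == '1'  # first match wins
```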
+-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/greatest.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/greatest.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/greatest.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/h2version.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/h2version.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/h2version.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/identity.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/identity.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/identity.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/ifnull.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/ifnull.sql new file mode 100644 index 0000000000000..a49ea81163b12 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/ifnull.sql @@ -0,0 +1,23 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select ifnull(null, '1') x1, ifnull(null, null) xn, ifnull('a', 'b') xa from test; +> X1 XN XA +> -- ---- -- +> 1 null a +> rows: 1 + +select isnull(null, '1') x1, isnull(null, null) xn, isnull('a', 'b') xa from test; +> X1 XN XA +> -- ---- -- +> 1 null a +> rows: 1 + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/least.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/least.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/least.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/link-schema.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/link-schema.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/link-schema.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/lock-mode.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/lock-mode.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/lock-mode.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/lock-timeout.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/lock-timeout.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/lock-timeout.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/memory-free.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/memory-free.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/memory-free.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/memory-used.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/memory-used.sql new file mode 100644 index 0000000000000..dc138746018d3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/memory-used.sql @@ -0,0 +1,4 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nextval.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nextval.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nextval.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nullif.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nullif.sql
new file mode 100644
index 0000000000000..3929eeb916348
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nullif.sql
@@ -0,0 +1,16 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select nullif(null, null) xn, nullif('a', 'a') xn, nullif('1', '2') x1 from test;
+> XN   XN   X1
+> ---- ---- --
+> null null 1
+> rows: 1
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nvl2.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nvl2.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/nvl2.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/readonly.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/readonly.sql
new file mode 100644
index 0000000000000..ba395364e2e09
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/readonly.sql
@@ -0,0 +1,13 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select readonly() from test;
+>> FALSE
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/rownum.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/rownum.sql
new file mode 100644
index 0000000000000..19b08f4116727
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/rownum.sql
@@ -0,0 +1,17 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+----- Issue#600 -----
+create table test as (select char(x) as str from system_range(48,90));
+> ok
+
+select row_number() over () as rnum, str from test where str = 'A';
+> RNUM STR
+> ---- ---
+> 1    A
+
+drop table test;
+> ok
+
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/schema.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/schema.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/schema.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/scope-identity.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/scope-identity.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/scope-identity.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/session-id.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/session-id.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/session-id.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/set.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/set.sql
new file mode 100644
index 0000000000000..f67d6ae7266bc
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/set.sql
@@ -0,0 +1,82 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+-- Try a custom column naming rules setup
+
+SET COLUMN_NAME_RULES=MAX_IDENTIFIER_LENGTH = 30;
+> ok
+
+SET COLUMN_NAME_RULES=REGULAR_EXPRESSION_MATCH_ALLOWED = '[A-Za-z0-9_]+';
+> ok
+
+SET COLUMN_NAME_RULES=REGULAR_EXPRESSION_MATCH_DISALLOWED = '[^A-Za-z0-9_]+';
+> ok
+
+SET COLUMN_NAME_RULES=DEFAULT_COLUMN_NAME_PATTERN = 'noName$$';
+> ok
+
+SET COLUMN_NAME_RULES=GENERATE_UNIQUE_COLUMN_NAMES = 1;
+> ok
+
+SELECT 1 AS VERY_VERY_VERY_LONG_ID_VERY_VERY_VERY_LONG_ID, SUM(X)+1 AS _123456789012345, SUM(X)+1 , SUM(X)+1
++47, 'x' , '!!!' , '!!!!' FROM SYSTEM_RANGE(1,2);
+> VERY_VERY_VERY_LONG_ID_VERY_VE _123456789012345 SUMX1 SUMX147 x noName6 noName7
+> ------------------------------ ---------------- ----- ------- - ------- -------
+> 1                              4                4     51      x !!!     !!!!
+
+SET COLUMN_NAME_RULES=EMULATE='Oracle';
+> ok
+
+SELECT 1 AS VERY_VERY_VERY_LONG_ID, SUM(X)+1 AS _123456789012345, SUM(X)+1 , SUM(X)+1
++47, 'x' , '!!!' , '!!!!' FROM SYSTEM_RANGE(1,2);
+> VERY_VERY_VERY_LONG_ID _123456789012345 SUMX1 SUMX147 x _UNNAMED_6 _UNNAMED_7
+> ---------------------- ---------------- ----- ------- - ---------- ----------
+> 1                      4                4     51      x !!!        !!!!
+
+SET COLUMN_NAME_RULES=EMULATE='Oracle';
+> ok
+
+SELECT 1 AS VERY_VERY_VERY_LONG_ID, SUM(X)+1 AS _123456789012345, SUM(X)+1 , SUM(X)+1
++47, 'x' , '!!!' , '!!!!', 'Very Long' AS _23456789012345678901234567890XXX FROM SYSTEM_RANGE(1,2);
+> VERY_VERY_VERY_LONG_ID _123456789012345 SUMX1 SUMX147 x _UNNAMED_6 _UNNAMED_7 _23456789012345678901234567890XXX
+> ---------------------- ---------------- ----- ------- - ---------- ---------- ---------------------------------
+> 1                      4                4     51      x !!!        !!!!       Very Long
+
+SET COLUMN_NAME_RULES=EMULATE='PostgreSQL';
+> ok
+
+SELECT 1 AS VERY_VERY_VERY_LONG_ID, SUM(X)+1 AS _123456789012345, SUM(X)+1 , SUM(X)+1
++47, 'x' , '!!!' , '!!!!', 999 AS "QuotedColumnId" FROM SYSTEM_RANGE(1,2);
+> VERY_VERY_VERY_LONG_ID _123456789012345 SUMX1 SUMX147 x _UNNAMED_6 _UNNAMED_7 QuotedColumnId
+> ---------------------- ---------------- ----- ------- - ---------- ---------- --------------
+> 1                      4                4     51      x !!!        !!!!       999
+
+SET COLUMN_NAME_RULES=DEFAULT;
+> ok
+
+-- Test all MODES of database:
+-- DB2, Derby, MSSQLServer, HSQLDB, MySQL, Oracle, PostgreSQL, Ignite
+SET COLUMN_NAME_RULES=EMULATE='DB2';
+> ok
+
+SET COLUMN_NAME_RULES=EMULATE='Derby';
+> ok
+
+SET COLUMN_NAME_RULES=EMULATE='MSSQLServer';
+> ok
+
+SET COLUMN_NAME_RULES=EMULATE='MySQL';
+> ok
+
+SET COLUMN_NAME_RULES=EMULATE='Oracle';
+> ok
+
+SET COLUMN_NAME_RULES=EMULATE='PostgreSQL';
+> ok
+
+SET COLUMN_NAME_RULES=EMULATE='Ignite';
+> ok
+
+SET COLUMN_NAME_RULES=EMULATE='REGULAR';
+> ok
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/table.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/table.sql
new file mode 100644
index 0000000000000..39b0ff6527d40
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/table.sql
@@ -0,0 +1,45 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+select * from table(a int=(1)), table(b int=(2));
+> A B
+> - -
+> 1 2
+> rows: 1
+
+create table test as select * from table(id int=(1, 2, 3));
+> ok
+
+SELECT * FROM (SELECT * FROM TEST) ORDER BY id;
+> ID
+> --
+> 1
+> 2
+> 3
+> rows (ordered): 3
+
+SELECT * FROM (SELECT * FROM TEST) x ORDER BY id;
+> ID
+> --
+> 1
+> 2
+> 3
+> rows (ordered): 3
+
+drop table test;
+> ok
+
+explain select * from table(id int = (1, 2), name varchar=('Hello', 'World'));
+> PLAN
+> -----------------------------------------------------------------------------------------------------
+> SELECT TABLE.ID, TABLE.NAME FROM TABLE(ID INT=(1, 2), NAME VARCHAR=('Hello', 'World')) /* function */
+> rows: 1
+
+select * from table(id int=(1, 2), name varchar=('Hello', 'World')) x order by id;
+> ID NAME
+> -- -----
+> 1  Hello
+> 2  World
+> rows (ordered): 2
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/transaction-id.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/transaction-id.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/transaction-id.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/truncate-value.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/truncate-value.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/truncate-value.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/system/user.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/user.sql
new file mode 100644
index 0000000000000..db3ade64ac5d7
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/system/user.sql
@@ -0,0 +1,19 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select user() x_sa, current_user() x_sa2 from test;
+> X_SA X_SA2
+> ---- -----
+> SA   SA
+> rows: 1
+
+select current_user() from test;
+>> SA
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/add_months.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/add_months.sql
new file mode 100644
index 0000000000000..3c32baf974795
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/add_months.sql
@@ -0,0 +1,20 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+-- 01-Aug-03 + 3 months = 01-Nov-03
+SELECT ADD_MONTHS('2003-08-01', 3);
+>> 2003-11-01 00:00:00
+
+-- 31-Jan-03 + 1 month = 28-Feb-2003
+SELECT ADD_MONTHS('2003-01-31', 1);
+>> 2003-02-28 00:00:00
+
+-- 21-Aug-2003 - 3 months = 21-May-2003
+SELECT ADD_MONTHS('2003-08-21', -3);
+>> 2003-05-21 00:00:00
+
+-- 21-Aug-2003 00:00:00.333 - 3 months = 21-May-2003 00:00:00.333
+SELECT ADD_MONTHS('2003-08-21 00:00:00.333', -3);
+>> 2003-05-21 00:00:00.333
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current-time.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current-time.sql
new file mode 100644
index 0000000000000..f27d6a73d2800
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current-time.sql
@@ -0,0 +1,23 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select length(curtime())>=8 c1, length(current_time())>=8 c2, substring(curtime(), 3, 1) c3 from test;
+> C1   C2   C3
+> ---- ---- --
+> TRUE TRUE :
+> rows: 1
+
+
+select length(now())>18 c1, length(current_timestamp())>18 c2, length(now(0))>18 c3, length(now(2))>18 c4 from test;
+> C1   C2   C3   C4
+> ---- ---- ---- ----
+> TRUE TRUE TRUE TRUE
+> rows: 1
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current_date.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current_date.sql
new file mode 100644
index 0000000000000..afbcecc08cab6
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current_date.sql
@@ -0,0 +1,16 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+create memory table test(id int primary key, name varchar(255));
+> ok
+
+insert into test values(1, 'Hello');
+> update count: 1
+
+select length(curdate()) c1, length(current_date()) c2, substring(curdate(), 5, 1) c3 from test;
+> C1 C2 C3
+> -- -- --
+> 10 10 -
+> rows: 1
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current_timestamp.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current_timestamp.sql
new file mode 100644
index 0000000000000..dc138746018d3
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/current_timestamp.sql
@@ -0,0 +1,4 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/date_trunc.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/date_trunc.sql
new file mode 100644
index 0000000000000..49f7e0111a137
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/date_trunc.sql
@@ -0,0 +1,1076 @@
+-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+-- and the EPL 1.0 (http://h2database.com/html/license.html).
+-- Initial Developer: H2 Group
+--
+
+--
+-- Test time unit in 'MICROSECONDS'
+--
+SELECT DATE_TRUNC('MICROSECONDS', time '00:00:00.000');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('microseconds', time '00:00:00.000');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('MICROSECONDS', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('microseconds', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('MICROSECONDS', time '15:14:13');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('microseconds', time '15:14:13');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('MICROSECONDS', time '15:14:13.123456789');
+>> 1970-01-01 15:14:13.123456
+
+SELECT DATE_TRUNC('microseconds', time '15:14:13.123456789');
+>> 1970-01-01 15:14:13.123456
+
+SELECT DATE_TRUNC('MICROSECONDS', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('microseconds', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MICROSECONDS', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('microseconds', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('MICROSECONDS', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('microseconds', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('MICROSECONDS', timestamp with time zone '2015-05-29 15:14:13.123456789');
+>> 2015-05-29 15:14:13.123456+00
+
+select DATE_TRUNC('microseconds', timestamp with time zone '2015-05-29 15:14:13.123456789');
+>> 2015-05-29 15:14:13.123456+00
+
+select DATE_TRUNC('MICROSECONDS', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('microseconds', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('MICROSECONDS', timestamp with time zone '2015-05-29 15:14:13.123456789-06');
+>> 2015-05-29 15:14:13.123456-06
+
+select DATE_TRUNC('microseconds', timestamp with time zone '2015-05-29 15:14:13.123456789-06');
+>> 2015-05-29 15:14:13.123456-06
+
+select DATE_TRUNC('MICROSECONDS', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:13+10
+
+select DATE_TRUNC('microseconds', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:13+10
+
+select DATE_TRUNC('MICROSECONDS', timestamp with time zone '2015-05-29 15:14:13.123456789+10');
+>> 2015-05-29 15:14:13.123456+10
+
+select DATE_TRUNC('microseconds', timestamp with time zone '2015-05-29 15:14:13.123456789+10');
+>> 2015-05-29 15:14:13.123456+10
+
+SELECT DATE_TRUNC('microseconds', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('MICROSECONDS', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('microseconds', timestamp '2015-05-29 15:14:13.123456789');
+>> 2015-05-29 15:14:13.123456
+
+SELECT DATE_TRUNC('MICROSECONDS', timestamp '2015-05-29 15:14:13.123456789');
+>> 2015-05-29 15:14:13.123456
+
+SELECT DATE_TRUNC('microseconds', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('MICROSECONDS', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('microseconds', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MICROSECONDS', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('microseconds', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('MICROSECONDS', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('microseconds', '2015-05-29 15:14:13.123456789');
+>> 2015-05-29 15:14:13.123456
+
+SELECT DATE_TRUNC('MICROSECONDS', '2015-05-29 15:14:13.123456789');
+>> 2015-05-29 15:14:13.123456
+
+SELECT DATE_TRUNC('microseconds', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('MICROSECONDS', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('microseconds', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MICROSECONDS', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+--
+-- Test time unit in 'MILLISECONDS'
+--
+SELECT DATE_TRUNC('MILLISECONDS', time '00:00:00.000');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('milliseconds', time '00:00:00.000');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('MILLISECONDS', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('milliseconds', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('MILLISECONDS', time '15:14:13');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('milliseconds', time '15:14:13');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('MILLISECONDS', time '15:14:13.123456');
+>> 1970-01-01 15:14:13.123
+
+SELECT DATE_TRUNC('milliseconds', time '15:14:13.123456');
+>> 1970-01-01 15:14:13.123
+
+SELECT DATE_TRUNC('MILLISECONDS', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('milliseconds', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MILLISECONDS', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('milliseconds', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('MILLISECONDS', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('milliseconds', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('MILLISECONDS', timestamp with time zone '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13.123+00
+
+select DATE_TRUNC('milliseconds', timestamp with time zone '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13.123+00
+
+select DATE_TRUNC('MILLISECONDS', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('milliseconds', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('MILLISECONDS', timestamp with time zone '2015-05-29 15:14:13.123456-06');
+>> 2015-05-29 15:14:13.123-06
+
+select DATE_TRUNC('milliseconds', timestamp with time zone '2015-05-29 15:14:13.123456-06');
+>> 2015-05-29 15:14:13.123-06
+
+select DATE_TRUNC('MILLISECONDS', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:13+10
+
+select DATE_TRUNC('milliseconds', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:13+10
+
+select DATE_TRUNC('MILLISECONDS', timestamp with time zone '2015-05-29 15:14:13.123456+10');
+>> 2015-05-29 15:14:13.123+10
+
+select DATE_TRUNC('milliseconds', timestamp with time zone '2015-05-29 15:14:13.123456+10');
+>> 2015-05-29 15:14:13.123+10
+
+SELECT DATE_TRUNC('milliseconds', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('MILLISECONDS', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('milliseconds', timestamp '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13.123
+
+SELECT DATE_TRUNC('MILLISECONDS', timestamp '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13.123
+
+SELECT DATE_TRUNC('milliseconds', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('MILLISECONDS', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('milliseconds', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MILLISECONDS', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('milliseconds', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('MILLISECONDS', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('milliseconds', '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13.123
+
+SELECT DATE_TRUNC('MILLISECONDS', '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13.123
+
+SELECT DATE_TRUNC('milliseconds', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('MILLISECONDS', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('milliseconds', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MILLISECONDS', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+--
+-- Test time unit 'SECOND'
+--
+SELECT DATE_TRUNC('SECOND', time '00:00:00.000');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('second', time '00:00:00.000');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('SECOND', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('second', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('SECOND', time '15:14:13');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('second', time '15:14:13');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('SECOND', time '15:14:13.123456');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('second', time '15:14:13.123456');
+>> 1970-01-01 15:14:13
+
+SELECT DATE_TRUNC('SECOND', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('second', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('SECOND', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('second', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('SECOND', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('second', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('SECOND', timestamp with time zone '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('second', timestamp with time zone '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13+00
+
+select DATE_TRUNC('SECOND', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('second', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('SECOND', timestamp with time zone '2015-05-29 15:14:13.123456-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('second', timestamp with time zone '2015-05-29 15:14:13.123456-06');
+>> 2015-05-29 15:14:13-06
+
+select DATE_TRUNC('SECOND', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:13+10
+
+select DATE_TRUNC('second', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:13+10
+
+select DATE_TRUNC('SECOND', timestamp with time zone '2015-05-29 15:14:13.123456+10');
+>> 2015-05-29 15:14:13+10
+
+select DATE_TRUNC('second', timestamp with time zone '2015-05-29 15:14:13.123456+10');
+>> 2015-05-29 15:14:13+10
+
+SELECT DATE_TRUNC('second', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('SECOND', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('second', timestamp '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('SECOND', timestamp '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('second', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('SECOND', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('second', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('SECOND', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('second', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('SECOND', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('second', '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('SECOND', '2015-05-29 15:14:13.123456');
+>> 2015-05-29 15:14:13
+
+SELECT DATE_TRUNC('second', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('SECOND', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('second', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('SECOND', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+
+--
+-- Test time unit 'MINUTE'
+--
+SELECT DATE_TRUNC('MINUTE', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('minute', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('MINUTE', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('minute', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('MINUTE', time '15:14:13');
+>> 1970-01-01 15:14:00
+
+SELECT DATE_TRUNC('minute', time '15:14:13');
+>> 1970-01-01 15:14:00
+
+SELECT DATE_TRUNC('MINUTE', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('minute', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MINUTE', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('minute', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('MINUTE', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:00+00
+
+select DATE_TRUNC('minute', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:00+00
+
+select DATE_TRUNC('MINUTE', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:00-06
+
+select DATE_TRUNC('minute', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:14:00-06
+
+select DATE_TRUNC('MINUTE', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:00+10
+
+select DATE_TRUNC('minute', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:14:00+10
+
+SELECT DATE_TRUNC('minute', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:00
+
+SELECT DATE_TRUNC('MINUTE', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:00
+
+SELECT DATE_TRUNC('minute', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('MINUTE', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('minute', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MINUTE', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('minute', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:00
+
+SELECT DATE_TRUNC('MINUTE', '2015-05-29 15:14:13');
+>> 2015-05-29 15:14:00
+
+SELECT DATE_TRUNC('minute', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('MINUTE', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('minute', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('MINUTE', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+--
+-- Test time unit 'HOUR'
+--
+SELECT DATE_TRUNC('HOUR', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('hour', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('HOUR', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('hour', time '15:00:00');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('HOUR', time '15:14:13');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('hour', time '15:14:13');
+>> 1970-01-01 15:00:00
+
+SELECT DATE_TRUNC('HOUR', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('hour', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('HOUR', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+SELECT DATE_TRUNC('hour', date '1970-01-01');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('HOUR', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:00:00+00
+
+select DATE_TRUNC('hour', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 15:00:00+00
+
+select DATE_TRUNC('HOUR', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:00:00-06
+
+select DATE_TRUNC('hour', timestamp with time zone '2015-05-29 15:14:13-06');
+>> 2015-05-29 15:00:00-06
+
+select DATE_TRUNC('HOUR', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:00:00+10
+
+select DATE_TRUNC('hour', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 15:00:00+10
+
+SELECT DATE_TRUNC('hour', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('HOUR', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('hour', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('HOUR', timestamp '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('hour', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('HOUR', timestamp '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('hour', '2015-05-29 15:14:13');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('HOUR', '2015-05-29 15:14:13');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('hour', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('HOUR', '2015-05-29 15:00:00');
+>> 2015-05-29 15:00:00
+
+SELECT DATE_TRUNC('hour', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+SELECT DATE_TRUNC('HOUR', '2015-05-29 00:00:00');
+>> 2015-05-29 00:00:00
+
+--
+-- Test time unit 'DAY'
+--
+select DATE_TRUNC('day', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('DAY', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('day', time '15:14:13');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('DAY', time '15:14:13');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('day', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+select DATE_TRUNC('DAY', date '2015-05-29');
+>> 2015-05-29 00:00:00
+
+select DATE_TRUNC('day', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 00:00:00
+
+select DATE_TRUNC('DAY', timestamp '2015-05-29 15:14:13');
+>> 2015-05-29 00:00:00
+
+select DATE_TRUNC('day', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 00:00:00+00
+
+select DATE_TRUNC('DAY', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-29 00:00:00+00
+
+select DATE_TRUNC('day', timestamp with time zone '2015-05-29 05:14:13-06');
+>> 2015-05-29 00:00:00-06
+
+select DATE_TRUNC('DAY', timestamp with time zone '2015-05-29 05:14:13-06');
+>> 2015-05-29 00:00:00-06
+
+select DATE_TRUNC('day', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 00:00:00+10
+
+select DATE_TRUNC('DAY', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-29 00:00:00+10
+
+select DATE_TRUNC('day', '2015-05-29 15:14:13');
+>> 2015-05-29 00:00:00
+
+select DATE_TRUNC('DAY', '2015-05-29 15:14:13');
+>> 2015-05-29 00:00:00
+
+
+--
+-- Test time unit 'WEEK'
+--
+select DATE_TRUNC('week', time '00:00:00');
+>> 1969-12-29 00:00:00
+
+select DATE_TRUNC('WEEK', time '00:00:00');
+>> 1969-12-29 00:00:00
+
+select DATE_TRUNC('week', time '15:14:13');
+>> 1969-12-29 00:00:00
+
+select DATE_TRUNC('WEEK', time '15:14:13');
+>> 1969-12-29 00:00:00
+
+select DATE_TRUNC('week', date '2015-05-28');
+>> 2015-05-25 00:00:00
+
+select DATE_TRUNC('WEEK', date '2015-05-28');
+>> 2015-05-25 00:00:00
+
+select DATE_TRUNC('week', timestamp '2015-05-29 15:14:13');
+>> 2015-05-25 00:00:00
+
+select DATE_TRUNC('WEEK', timestamp '2015-05-29 15:14:13');
+>> 2015-05-25 00:00:00
+
+select DATE_TRUNC('week', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-25 00:00:00+00
+
+select DATE_TRUNC('WEEK', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-25 00:00:00+00
+
+select DATE_TRUNC('week', timestamp with time zone '2015-05-29 05:14:13-06');
+>> 2015-05-25 00:00:00-06
+
+select DATE_TRUNC('WEEK', timestamp with time zone '2015-05-29 05:14:13-06');
+>> 2015-05-25 00:00:00-06
+
+select DATE_TRUNC('week', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-25 00:00:00+10
+
+select DATE_TRUNC('WEEK', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-25 00:00:00+10
+
+select DATE_TRUNC('week', '2015-05-29 15:14:13');
+>> 2015-05-25 00:00:00
+
+select DATE_TRUNC('WEEK', '2015-05-29 15:14:13');
+>> 2015-05-25 00:00:00
+
+SELECT DATE_TRUNC('WEEK', '2018-03-14 00:00:00.000');
+>> 2018-03-12 00:00:00
+
+SELECT DATE_TRUNC('week', '2018-03-14 00:00:00.000');
+>> 2018-03-12 00:00:00
+
+--
+-- Test time unit 'MONTH'
+--
+select DATE_TRUNC('month', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('MONTH', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('month', time '15:14:13');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('MONTH', time '15:14:13');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('month', date '2015-05-28');
+>> 2015-05-01 00:00:00
+
+select DATE_TRUNC('MONTH', date '2015-05-28');
+>> 2015-05-01 00:00:00
+
+select DATE_TRUNC('month', timestamp '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00
+
+select DATE_TRUNC('MONTH', timestamp '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00
+
+select DATE_TRUNC('month', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00+00
+
+select DATE_TRUNC('MONTH', timestamp with time zone '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00+00
+
+select DATE_TRUNC('month', timestamp with time zone '2015-05-29 05:14:13-06');
+>> 2015-05-01 00:00:00-06
+
+select DATE_TRUNC('MONTH', timestamp with time zone '2015-05-29 05:14:13-06');
+>> 2015-05-01 00:00:00-06
+
+select DATE_TRUNC('month', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-01 00:00:00+10
+
+select DATE_TRUNC('MONTH', timestamp with time zone '2015-05-29 15:14:13+10');
+>> 2015-05-01 00:00:00+10
+
+select DATE_TRUNC('month', '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00
+
+select DATE_TRUNC('MONTH', '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00
+
+SELECT DATE_TRUNC('MONTH', '2018-03-14 00:00:00.000');
+>> 2018-03-01 00:00:00
+
+SELECT DATE_TRUNC('month', '2018-03-14 00:00:00.000');
+>> 2018-03-01 00:00:00
+
+SELECT DATE_TRUNC('month', '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00
+
+SELECT DATE_TRUNC('MONTH', '2015-05-29 15:14:13');
+>> 2015-05-01 00:00:00
+
+SELECT DATE_TRUNC('month', '2015-05-01 15:14:13');
+>> 2015-05-01 00:00:00
+
+SELECT DATE_TRUNC('MONTH', '2015-05-01 15:14:13');
+>> 2015-05-01 00:00:00
+
+--
+-- Test time unit 'QUARTER'
+--
+select DATE_TRUNC('quarter', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('QUARTER', time '00:00:00');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('quarter', time '15:14:13');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('QUARTER', time '15:14:13');
+>> 1970-01-01 00:00:00
+
+select DATE_TRUNC('quarter', date '2015-05-28');
+>> 2015-04-01 00:00:00
+
+select DATE_TRUNC('QUARTER', date '2015-05-28');
+>> 2015-04-01
00:00:00 + +select DATE_TRUNC('quarter', timestamp '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00 + +select DATE_TRUNC('QUARTER', timestamp '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00 + +select DATE_TRUNC('quarter', timestamp with time zone '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00+00 + +select DATE_TRUNC('QUARTER', timestamp with time zone '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00+00 + +select DATE_TRUNC('quarter', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2015-04-01 00:00:00-06 + +select DATE_TRUNC('QUARTER', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2015-04-01 00:00:00-06 + +select DATE_TRUNC('quarter', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2015-04-01 00:00:00+10 + +select DATE_TRUNC('QUARTER', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2015-04-01 00:00:00+10 + +select DATE_TRUNC('quarter', '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00 + +select DATE_TRUNC('QUARTER', '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00 + +SELECT DATE_TRUNC('QUARTER', '2018-03-14 00:00:00.000'); +>> 2018-01-01 00:00:00 + +SELECT DATE_TRUNC('quarter', '2018-03-14 00:00:00.000'); +>> 2018-01-01 00:00:00 + +SELECT DATE_TRUNC('quarter', '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00 + +SELECT DATE_TRUNC('QUARTER', '2015-05-29 15:14:13'); +>> 2015-04-01 00:00:00 + +SELECT DATE_TRUNC('quarter', '2015-05-01 15:14:13'); +>> 2015-04-01 00:00:00 + +SELECT DATE_TRUNC('QUARTER', '2015-05-01 15:14:13'); +>> 2015-04-01 00:00:00 + +SELECT DATE_TRUNC('quarter', '2015-07-29 15:14:13'); +>> 2015-07-01 00:00:00 + +SELECT DATE_TRUNC('QUARTER', '2015-07-29 15:14:13'); +>> 2015-07-01 00:00:00 + +SELECT DATE_TRUNC('quarter', '2015-09-29 15:14:13'); +>> 2015-07-01 00:00:00 + +SELECT DATE_TRUNC('QUARTER', '2015-09-29 15:14:13'); +>> 2015-07-01 00:00:00 + +SELECT DATE_TRUNC('quarter', '2015-10-29 15:14:13'); +>> 2015-10-01 00:00:00 + +SELECT DATE_TRUNC('QUARTER', '2015-10-29 15:14:13'); +>> 2015-10-01 00:00:00 + +SELECT 
DATE_TRUNC('quarter', '2015-12-29 15:14:13'); +>> 2015-10-01 00:00:00 + +SELECT DATE_TRUNC('QUARTER', '2015-12-29 15:14:13'); +>> 2015-10-01 00:00:00 + + +-- +-- Test time unit 'YEAR' +-- +select DATE_TRUNC('year', time '00:00:00'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('YEAR', time '00:00:00'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('year', time '15:14:13'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('YEAR', time '15:14:13'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('year', date '2015-05-28'); +>> 2015-01-01 00:00:00 + +select DATE_TRUNC('YEAR', date '2015-05-28'); +>> 2015-01-01 00:00:00 + +select DATE_TRUNC('year', timestamp '2015-05-29 15:14:13'); +>> 2015-01-01 00:00:00 + +select DATE_TRUNC('YEAR', timestamp '2015-05-29 15:14:13'); +>> 2015-01-01 00:00:00 + +select DATE_TRUNC('year', timestamp with time zone '2015-05-29 15:14:13'); +>> 2015-01-01 00:00:00+00 + +select DATE_TRUNC('YEAR', timestamp with time zone '2015-05-29 15:14:13'); +>> 2015-01-01 00:00:00+00 + +select DATE_TRUNC('year', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2015-01-01 00:00:00-06 + +select DATE_TRUNC('YEAR', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2015-01-01 00:00:00-06 + +select DATE_TRUNC('year', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2015-01-01 00:00:00+10 + +select DATE_TRUNC('YEAR', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2015-01-01 00:00:00+10 + +SELECT DATE_TRUNC('year', '2015-05-29 15:14:13'); +>> 2015-01-01 00:00:00 + +SELECT DATE_TRUNC('YEAR', '2015-05-29 15:14:13'); +>> 2015-01-01 00:00:00 + +-- +-- Test time unit 'DECADE' +-- +select DATE_TRUNC('decade', time '00:00:00'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('DECADE', time '00:00:00'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('decade', time '15:14:13'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('DECADE', time '15:14:13'); +>> 1970-01-01 00:00:00 + +select DATE_TRUNC('decade', date '2015-05-28'); +>> 2010-01-01 00:00:00 + +select 
DATE_TRUNC('DECADE', date '2015-05-28'); +>> 2010-01-01 00:00:00 + +select DATE_TRUNC('decade', timestamp '2015-05-29 15:14:13'); +>> 2010-01-01 00:00:00 + +select DATE_TRUNC('DECADE', timestamp '2015-05-29 15:14:13'); +>> 2010-01-01 00:00:00 + +select DATE_TRUNC('decade', timestamp with time zone '2015-05-29 15:14:13'); +>> 2010-01-01 00:00:00+00 + +select DATE_TRUNC('DECADE', timestamp with time zone '2015-05-29 15:14:13'); +>> 2010-01-01 00:00:00+00 + +select DATE_TRUNC('decade', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2010-01-01 00:00:00-06 + +select DATE_TRUNC('DECADE', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2010-01-01 00:00:00-06 + +select DATE_TRUNC('decade', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2010-01-01 00:00:00+10 + +select DATE_TRUNC('DECADE', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2010-01-01 00:00:00+10 + +SELECT DATE_TRUNC('decade', '2015-05-29 15:14:13'); +>> 2010-01-01 00:00:00 + +SELECT DATE_TRUNC('DECADE', '2015-05-29 15:14:13'); +>> 2010-01-01 00:00:00 + +SELECT DATE_TRUNC('decade', '2010-05-29 15:14:13'); +>> 2010-01-01 00:00:00 + +SELECT DATE_TRUNC('DECADE', '2010-05-29 15:14:13'); +>> 2010-01-01 00:00:00 + +-- +-- Test time unit 'CENTURY' +-- +select DATE_TRUNC('century', time '00:00:00'); +>> 1901-01-01 00:00:00 + +select DATE_TRUNC('CENTURY', time '00:00:00'); +>> 1901-01-01 00:00:00 + +select DATE_TRUNC('century', time '15:14:13'); +>> 1901-01-01 00:00:00 + +select DATE_TRUNC('CENTURY', time '15:14:13'); +>> 1901-01-01 00:00:00 + +select DATE_TRUNC('century', date '2015-05-28'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('CENTURY', date '2015-05-28'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('century', timestamp '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('CENTURY', timestamp '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('century', timestamp with time zone '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00+00 + +select 
DATE_TRUNC('CENTURY', timestamp with time zone '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00+00 + +select DATE_TRUNC('century', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2001-01-01 00:00:00-06 + +select DATE_TRUNC('CENTURY', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2001-01-01 00:00:00-06 + +select DATE_TRUNC('century', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2001-01-01 00:00:00+10 + +select DATE_TRUNC('CENTURY', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2001-01-01 00:00:00+10 + +SELECT DATE_TRUNC('century', '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +SELECT DATE_TRUNC('CENTURY', '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +SELECT DATE_TRUNC('century', '2199-05-29 15:14:13'); +>> 2101-01-01 00:00:00 + +SELECT DATE_TRUNC('CENTURY', '2199-05-29 15:14:13'); +>> 2101-01-01 00:00:00 + +SELECT DATE_TRUNC('century', '2000-05-29 15:14:13'); +>> 1901-01-01 00:00:00 + +SELECT DATE_TRUNC('CENTURY', '2000-05-29 15:14:13'); +>> 1901-01-01 00:00:00 + +SELECT DATE_TRUNC('century', '2001-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +SELECT DATE_TRUNC('CENTURY', '2001-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +-- +-- Test time unit 'MILLENNIUM' +-- +select DATE_TRUNC('millennium', time '00:00:00'); +>> 1001-01-01 00:00:00 + +select DATE_TRUNC('MILLENNIUM', time '00:00:00'); +>> 1001-01-01 00:00:00 + +select DATE_TRUNC('millennium', time '15:14:13'); +>> 1001-01-01 00:00:00 + +select DATE_TRUNC('MILLENNIUM', time '15:14:13'); +>> 1001-01-01 00:00:00 + +select DATE_TRUNC('millennium', date '2015-05-28'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('MILLENNIUM', date '2015-05-28'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('millennium', timestamp '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('MILLENNIUM', timestamp '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +select DATE_TRUNC('millennium', timestamp with time zone '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00+00 + +select 
DATE_TRUNC('MILLENNIUM', timestamp with time zone '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00+00 + +select DATE_TRUNC('millennium', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2001-01-01 00:00:00-06 + +select DATE_TRUNC('MILLENNIUM', timestamp with time zone '2015-05-29 05:14:13-06'); +>> 2001-01-01 00:00:00-06 + +select DATE_TRUNC('millennium', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2001-01-01 00:00:00+10 + +select DATE_TRUNC('MILLENNIUM', timestamp with time zone '2015-05-29 15:14:13+10'); +>> 2001-01-01 00:00:00+10 + +SELECT DATE_TRUNC('millennium', '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +SELECT DATE_TRUNC('MILLENNIUM', '2015-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +SELECT DATE_TRUNC('millennium', '2001-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +SELECT DATE_TRUNC('MILLENNIUM', '2001-05-29 15:14:13'); +>> 2001-01-01 00:00:00 + +SELECT DATE_TRUNC('millennium', '2000-05-29 15:14:13'); +>> 1001-01-01 00:00:00 + +SELECT DATE_TRUNC('MILLENNIUM', '2000-05-29 15:14:13'); +>> 1001-01-01 00:00:00 + +-- +-- Test unhandled time unit and bad date +-- +SELECT DATE_TRUNC('---', '2015-05-29 15:14:13'); +> exception + +SELECT DATE_TRUNC('', '2015-05-29 15:14:13'); +> exception + +SELECT DATE_TRUNC('', ''); +> exception + +SELECT DATE_TRUNC('YEAR', ''); +> exception + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/dateadd.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/dateadd.sql new file mode 100644 index 0000000000000..ed555474d0cce --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/dateadd.sql @@ -0,0 +1,110 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select dateadd('month', 1, timestamp '2003-01-31 10:20:30.012345678') from test; +>> 2003-02-28 10:20:30.012345678 + +select dateadd('year', -1, timestamp '2000-02-29 10:20:30.012345678') from test; +>> 1999-02-28 10:20:30.012345678 + +drop table test; +> ok + +create table test(d date, t time, ts timestamp); +> ok + +insert into test values(date '2001-01-01', time '01:00:00', timestamp '2010-01-01 00:00:00'); +> update count: 1 + +select ts + t from test; +>> 2010-01-01 01:00:00 + +select ts + t + t - t x from test; +>> 2010-01-01 01:00:00 + +select ts + t * 0.5 x from test; +>> 2010-01-01 00:30:00 + +select ts + 0.5 x from test; +>> 2010-01-01 12:00:00 + +select ts - 1.5 x from test; +>> 2009-12-30 12:00:00 + +select ts + 0.5 * t + t - t x from test; +>> 2010-01-01 00:30:00 + +select ts + t / 0.5 x from test; +>> 2010-01-01 02:00:00 + +select d + t, t + d - t x from test; +> T + D X +> ------------------- ------------------- +> 2001-01-01 01:00:00 2001-01-01 00:00:00 +> rows: 1 + +select 1 + d + 1, d - 1, 2 + ts + 2, ts - 2 from test; +> DATEADD('DAY', 1, DATEADD('DAY', 1, D)) DATEADD('DAY', -1, D) DATEADD('DAY', 2, DATEADD('DAY', 2, TS)) DATEADD('DAY', -2, TS) +> --------------------------------------- --------------------- ---------------------------------------- ---------------------- +> 2001-01-03 2000-12-31 2010-01-05 00:00:00 2009-12-30 00:00:00 +> rows: 1 + +select 1 + d + t + 1 from test; +>> 2001-01-03 01:00:00 + +select ts - t - 2 from test; +>> 2009-12-29 23:00:00 + +drop table test; +> ok + +call dateadd('MS', 1, TIMESTAMP '2001-02-03 04:05:06.789001'); +>> 2001-02-03 04:05:06.790001 + +SELECT DATEADD('MICROSECOND', 1, TIME '10:00:01'), DATEADD('MCS', 1, TIMESTAMP '2010-10-20 10:00:01.1'); +> TIME '10:00:01.000001' TIMESTAMP '2010-10-20 10:00:01.100001' +> ---------------------- 
-------------------------------------- +> 10:00:01.000001 2010-10-20 10:00:01.100001 +> rows: 1 + +SELECT DATEADD('NANOSECOND', 1, TIME '10:00:01'), DATEADD('NS', 1, TIMESTAMP '2010-10-20 10:00:01.1'); +> TIME '10:00:01.000000001' TIMESTAMP '2010-10-20 10:00:01.100000001' +> ------------------------- ----------------------------------------- +> 10:00:01.000000001 2010-10-20 10:00:01.100000001 +> rows: 1 + +SELECT DATEADD('HOUR', 1, DATE '2010-01-20'); +>> 2010-01-20 01:00:00 + +SELECT DATEADD('MINUTE', 30, TIME '12:30:55'); +>> 13:00:55 + +SELECT DATEADD('DAY', 1, TIME '12:30:55'); +> exception + +SELECT DATEADD('QUARTER', 1, DATE '2010-11-16'); +>> 2011-02-16 + +SELECT DATEADD('DAY', 10, TIMESTAMP WITH TIME ZONE '2000-01-05 15:00:30.123456789-10'); +>> 2000-01-15 15:00:30.123456789-10 + +SELECT TIMESTAMPADD('DAY', 10, TIMESTAMP '2000-01-05 15:00:30.123456789'); +>> 2000-01-15 15:00:30.123456789 + +SELECT TIMESTAMPADD('TIMEZONE_HOUR', 1, TIMESTAMP WITH TIME ZONE '2010-01-01 10:00:00+07:30'); +>> 2010-01-01 10:00:00+08:30 + +SELECT TIMESTAMPADD('TIMEZONE_MINUTE', -45, TIMESTAMP WITH TIME ZONE '2010-01-01 10:00:00+07:30'); +>> 2010-01-01 10:00:00+06:45 + +SELECT DATEADD(HOUR, 1, TIME '23:00:00'); +>> 00:00:00 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/datediff.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/datediff.sql new file mode 100644 index 0000000000000..6e560b184551e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/datediff.sql @@ -0,0 +1,214 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select datediff('yy', timestamp '2003-12-01 10:20:30.0', timestamp '2004-01-01 10:00:00.0') from test; +>> 1 + +select datediff('year', timestamp '2003-12-01 10:20:30.0', timestamp '2004-01-01 10:00:00.0') from test; +>> 1 + +select datediff('mm', timestamp '2003-11-01 10:20:30.0', timestamp '2004-01-01 10:00:00.0') from test; +>> 2 + +select datediff('month', timestamp '2003-11-01 10:20:30.0', timestamp '2004-01-01 10:00:00.0') from test; +>> 2 + +select datediff('dd', timestamp '2004-01-01 10:20:30.0', timestamp '2004-01-05 10:00:00.0') from test; +>> 4 + +select datediff('day', timestamp '2004-01-01 10:20:30.0', timestamp '2004-01-05 10:00:00.0') from test; +>> 4 + +select datediff('hh', timestamp '2004-01-01 10:20:30.0', timestamp '2004-01-02 10:00:00.0') from test; +>> 24 + +select datediff('hour', timestamp '2004-01-01 10:20:30.0', timestamp '2004-01-02 10:00:00.0') from test; +>> 24 + +select datediff('mi', timestamp '2004-01-01 10:20:30.0', timestamp '2004-01-01 10:00:00.0') from test; +>> -20 + +select datediff('minute', timestamp '2004-01-01 10:20:30.0', timestamp '2004-01-01 10:00:00.0') from test; +>> -20 + +select datediff('ss', timestamp '2004-01-01 10:00:00.5', timestamp '2004-01-01 10:00:01.0') from test; +>> 1 + +select datediff('second', timestamp '2004-01-01 10:00:00.5', timestamp '2004-01-01 10:00:01.0') from test; +>> 1 + +select datediff('ms', timestamp '2004-01-01 10:00:00.5', timestamp '2004-01-01 10:00:01.0') from test; +>> 500 + +select datediff('millisecond', timestamp '2004-01-01 10:00:00.5', timestamp '2004-01-01 10:00:01.0') from test; +>> 500 + +SELECT DATEDIFF('SECOND', '1900-01-01 00:00:00.001', '1900-01-01 00:00:00.002'), DATEDIFF('SECOND', '2000-01-01 00:00:00.001', '2000-01-01 00:00:00.002'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('SECOND', 
'1900-01-01 00:00:00.000', '1900-01-01 00:00:00.001'), DATEDIFF('SECOND', '2000-01-01 00:00:00.000', '2000-01-01 00:00:00.001'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('MINUTE', '1900-01-01 00:00:00.000', '1900-01-01 00:00:01.000'), DATEDIFF('MINUTE', '2000-01-01 00:00:00.000', '2000-01-01 00:00:01.000'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('MINUTE', '1900-01-01 00:00:01.000', '1900-01-01 00:00:02.000'), DATEDIFF('MINUTE', '2000-01-01 00:00:01.000', '2000-01-01 00:00:02.000'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('HOUR', '1900-01-01 00:00:00.000', '1900-01-01 00:00:01.000'), DATEDIFF('HOUR', '2000-01-01 00:00:00.000', '2000-01-01 00:00:01.000'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('HOUR', '1900-01-01 00:00:00.001', '1900-01-01 00:00:01.000'), DATEDIFF('HOUR', '2000-01-01 00:00:00.001', '2000-01-01 00:00:01.000'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('HOUR', '1900-01-01 01:00:00.000', '1900-01-01 01:00:01.000'), DATEDIFF('HOUR', '2000-01-01 01:00:00.000', '2000-01-01 01:00:01.000'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('HOUR', '1900-01-01 01:00:00.001', '1900-01-01 01:00:01.000'), DATEDIFF('HOUR', '2000-01-01 01:00:00.001', '2000-01-01 01:00:01.000'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +select datediff(day, '2015-12-09 23:59:00.0', '2016-01-16 23:59:00.0'), datediff(wk, '2015-12-09 23:59:00.0', '2016-01-16 23:59:00.0'); +> 38 5 +> -- - +> 38 5 +> rows: 1 + +call datediff('MS', TIMESTAMP '2001-02-03 04:05:06.789001', TIMESTAMP '2001-02-03 04:05:06.789002'); +> 0 +> - +> 0 +> rows: 1 + +call datediff('MS', TIMESTAMP '1900-01-01 00:00:01.000', TIMESTAMP '2008-01-01 00:00:00.000'); +>> 3408134399000 + +SELECT DATEDIFF('MICROSECOND', '2006-01-01 00:00:00.0000000', '2006-01-01 00:00:00.123456789'), + DATEDIFF('MCS', '2006-01-01 00:00:00.0000000', '2006-01-01 00:00:00.123456789'), + DATEDIFF('MCS', '2006-01-01 00:00:00.0000000', '2006-01-02 00:00:00.123456789'); +> 123456 123456 86400123456 
+> ------ ------ ----------- +> 123456 123456 86400123456 +> rows: 1 + +SELECT DATEDIFF('NANOSECOND', '2006-01-01 00:00:00.0000000', '2006-01-01 00:00:00.123456789'), + DATEDIFF('NS', '2006-01-01 00:00:00.0000000', '2006-01-01 00:00:00.123456789'), + DATEDIFF('NS', '2006-01-01 00:00:00.0000000', '2006-01-02 00:00:00.123456789'); +> 123456789 123456789 86400123456789 +> --------- --------- -------------- +> 123456789 123456789 86400123456789 +> rows: 1 + +SELECT DATEDIFF('WEEK', DATE '2018-02-02', DATE '2018-02-03'), DATEDIFF('ISO_WEEK', DATE '2018-02-02', DATE '2018-02-03'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('WEEK', DATE '2018-02-03', DATE '2018-02-04'), DATEDIFF('ISO_WEEK', DATE '2018-02-03', DATE '2018-02-04'); +> 1 0 +> - - +> 1 0 +> rows: 1 + +SELECT DATEDIFF('WEEK', DATE '2018-02-04', DATE '2018-02-05'), DATEDIFF('ISO_WEEK', DATE '2018-02-04', DATE '2018-02-05'); +> 0 1 +> - - +> 0 1 +> rows: 1 + +SELECT DATEDIFF('WEEK', DATE '2018-02-05', DATE '2018-02-06'), DATEDIFF('ISO_WEEK', DATE '2018-02-05', DATE '2018-02-06'); +> 0 0 +> - - +> 0 0 +> rows: 1 + +SELECT DATEDIFF('WEEK', DATE '1969-12-27', DATE '1969-12-28'), DATEDIFF('ISO_WEEK', DATE '1969-12-27', DATE '1969-12-28'); +> 1 0 +> - - +> 1 0 +> rows: 1 + +SELECT DATEDIFF('WEEK', DATE '1969-12-28', DATE '1969-12-29'), DATEDIFF('ISO_WEEK', DATE '1969-12-28', DATE '1969-12-29'); +> 0 1 +> - - +> 0 1 +> rows: 1 + +SELECT DATEDIFF('QUARTER', DATE '2009-12-30', DATE '2009-12-31'); +>> 0 + +SELECT DATEDIFF('QUARTER', DATE '2010-01-01', DATE '2009-12-31'); +>> -1 + +SELECT DATEDIFF('QUARTER', DATE '2010-01-01', DATE '2010-01-02'); +>> 0 + +SELECT DATEDIFF('QUARTER', DATE '2010-01-01', DATE '2010-03-31'); +>> 0 + +SELECT DATEDIFF('QUARTER', DATE '-1000-01-01', DATE '2000-01-01'); +>> 12000 + +SELECT DATEDIFF('TIMEZONE_HOUR', TIMESTAMP WITH TIME ZONE '2010-01-01 10:00:00+01', + TIMESTAMP WITH TIME ZONE '2012-02-02 12:00:00+02'); +>> 1 + +SELECT DATEDIFF('TIMEZONE_MINUTE', TIMESTAMP WITH TIME ZONE 
'2010-01-01 10:00:00+01:15', + TIMESTAMP WITH TIME ZONE '2012-02-02 12:00:00+02'); +>> 45 + +select datediff('HOUR', timestamp '2007-01-06 10:00:00Z', '2007-01-06 10:00:00Z'); +>> 0 + +select datediff('HOUR', timestamp '1234-05-06 10:00:00+01:00', '1234-05-06 10:00:00+02:00'); +>> -1 + +select datediff('HOUR', timestamp '1234-05-06 10:00:00+01:00', '1234-05-06 10:00:00-02:00'); +>> 3 + +select timestampdiff(month, '2003-02-01','2003-05-01'); +>> 3 + +select timestampdiff(YEAR,'2002-05-01','2001-01-01'); +>> -1 + +select timestampdiff(YEAR,'2017-01-01','2016-12-31 23:59:59'); +>> -1 + +select timestampdiff(YEAR,'2017-01-01','2017-12-31 23:59:59'); +>> 0 + +select timestampdiff(MINUTE,'2003-02-01','2003-05-01 12:05:55'); +>> 128885 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-month.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-month.sql new file mode 100644 index 0000000000000..20dad613211e1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-month.sql @@ -0,0 +1,32 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select dayofmonth(date '2005-09-12') from test; +>> 12 + +drop table test; +> ok + +create table test(ts timestamp with time zone); +> ok + +insert into test(ts) values ('2010-05-11 00:00:00+10:00'), ('2010-05-11 00:00:00-10:00'); +> update count: 2 + +select dayofmonth(ts) d from test; +> D +> -- +> 11 +> 11 +> rows: 2 + +drop table test; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-week.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-week.sql new file mode 100644 index 0000000000000..de33d9a895db7 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-week.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select dayofweek(date '2005-09-12') from test; +>> 2 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-year.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-year.sql new file mode 100644 index 0000000000000..4b4dceb854cd9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/day-of-year.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select dayofyear(date '2005-01-01') d1 from test; +>> 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/dayname.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/dayname.sql new file mode 100644 index 0000000000000..0e7e846068f00 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/dayname.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select dayname(date '2005-09-12') from test; +>> Monday diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/extract.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/extract.sql new file mode 100644 index 0000000000000..fce7579ea7f5f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/extract.sql @@ -0,0 +1,78 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +SELECT EXTRACT (MICROSECOND FROM TIME '10:00:00.123456789'), + EXTRACT (MCS FROM TIMESTAMP '2015-01-01 11:22:33.987654321'); +> 123456 987654 +> ------ ------ +> 123456 987654 +> rows: 1 + +SELECT EXTRACT (NANOSECOND FROM TIME '10:00:00.123456789'), + EXTRACT (NS FROM TIMESTAMP '2015-01-01 11:22:33.987654321'); +> 123456789 987654321 +> --------- --------- +> 123456789 987654321 +> rows: 1 + +select EXTRACT (EPOCH from time '00:00:00'); +>> 0 + +select EXTRACT (EPOCH from time '10:00:00'); +>> 36000 + +select EXTRACT (EPOCH from time '10:00:00.123456'); +>> 36000.123456 + +select EXTRACT (EPOCH from date '1970-01-01'); +>> 0 + +select EXTRACT (EPOCH from date '2000-01-03'); +>> 946857600 + +select EXTRACT (EPOCH from timestamp '1970-01-01 00:00:00'); +>> 0 + +select EXTRACT (EPOCH from timestamp '1970-01-03 12:00:00.123456'); +>> 216000.123456 + +select EXTRACT (EPOCH from timestamp '2000-01-03 12:00:00.123456'); +>> 946900800.123456 + +select EXTRACT (EPOCH from timestamp '2500-01-03 12:00:00.654321'); +>> 16725441600.654321 + +select EXTRACT (EPOCH from timestamp with time zone '1970-01-01 00:00:00+05'); +>> -18000 + +select EXTRACT (EPOCH from timestamp with time zone '1970-01-03 12:00:00.123456+05'); +>> 198000.123456 + +select EXTRACT (EPOCH from timestamp with time zone '2000-01-03 12:00:00.123456+05'); +>> 946882800.123456 + +select extract(EPOCH from '2001-02-03 14:15:16'); +>> 981209716 + +SELECT EXTRACT(TIMEZONE_HOUR FROM TIMESTAMP WITH TIME ZONE '2010-01-02 5:00:00+07:15'); +>> 7 + +SELECT EXTRACT(TIMEZONE_HOUR FROM TIMESTAMP WITH TIME ZONE '2010-01-02 5:00:00-08:30'); +>> -8 + +SELECT EXTRACT(TIMEZONE_MINUTE FROM TIMESTAMP WITH TIME ZONE '2010-01-02 5:00:00+07:15'); +>> 15 + +SELECT EXTRACT(TIMEZONE_MINUTE FROM TIMESTAMP WITH TIME ZONE '2010-01-02 5:00:00-08:30'); +>> -30 + +select extract(hour from timestamp '2001-02-03 14:15:16'); +>> 14 + +select extract(hour from '2001-02-03 14:15:16'); +>> 14 + +select 
extract(week from timestamp '2001-02-03 14:15:16'); +>> 5 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/formatdatetime.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/formatdatetime.sql new file mode 100644 index 0000000000000..a7efeb9d54195 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/formatdatetime.sql @@ -0,0 +1,25 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CALL FORMATDATETIME(PARSEDATETIME('2001-02-03 04:05:06 GMT', 'yyyy-MM-dd HH:mm:ss z', 'en', 'GMT'), 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT'); +>> Sat, 3 Feb 2001 04:05:06 GMT + +CALL FORMATDATETIME(TIMESTAMP '2001-02-03 04:05:06', 'yyyy-MM-dd HH:mm:ss'); +>> 2001-02-03 04:05:06 + +CALL FORMATDATETIME(TIMESTAMP '2001-02-03 04:05:06', 'MM/dd/yyyy HH:mm:ss'); +>> 02/03/2001 04:05:06 + +CALL FORMATDATETIME(TIMESTAMP '2001-02-03 04:05:06', 'd. MMMM yyyy', 'de'); +>> 3. Februar 2001 + +CALL FORMATDATETIME(PARSEDATETIME('Sat, 3 Feb 2001 04:05:06 GMT', 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT'), 'yyyy-MM-dd HH:mm:ss', 'en', 'GMT'); +>> 2001-02-03 04:05:06 + +SELECT FORMATDATETIME(TIMESTAMP WITH TIME ZONE '2010-05-06 07:08:09.123Z', 'yyyy-MM-dd HH:mm:ss.SSS z'); +>> 2010-05-06 07:08:09.123 UTC + +SELECT FORMATDATETIME(TIMESTAMP WITH TIME ZONE '2010-05-06 07:08:09.123+13:30', 'yyyy-MM-dd HH:mm:ss.SSS z'); +>> 2010-05-06 07:08:09.123 GMT+13:30 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/hour.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/hour.sql new file mode 100644 index 0000000000000..766c796854c8f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/hour.sql @@ -0,0 +1,35 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select hour(time '23:10:59') from test; +>> 23 + +drop table test; +> ok + +create table test(ts timestamp with time zone); +> ok + +insert into test(ts) values ('2010-05-11 05:15:10+10:00'), ('2010-05-11 05:15:10-10:00'); +> update count: 2 + +select hour(ts) h from test; +> H +> - +> 5 +> 5 +> rows: 2 + +drop table test; +> ok + +select hour('2001-02-03 14:15:16'); +>> 14 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/minute.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/minute.sql new file mode 100644 index 0000000000000..e1b22ed557884 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/minute.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select minute(timestamp '2005-01-01 23:10:59') from test; +>> 10 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/month.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/month.sql new file mode 100644 index 0000000000000..32ce2ae6b073f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/month.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select month(date '2005-09-25') from test; +>> 9 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/monthname.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/monthname.sql new file mode 100644 index 0000000000000..15ac7bb6532a0 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/monthname.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select monthname(date '2005-09-12') from test; +>> September diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/parsedatetime.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/parsedatetime.sql new file mode 100644 index 0000000000000..dcf7d26bbcc0b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/parsedatetime.sql @@ -0,0 +1,10 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CALL PARSEDATETIME('3. Februar 2001', 'd. 
MMMM yyyy', 'de'); +>> 2001-02-03 00:00:00 + +CALL PARSEDATETIME('02/03/2001 04:05:06', 'MM/dd/yyyy HH:mm:ss'); +>> 2001-02-03 04:05:06 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/quarter.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/quarter.sql new file mode 100644 index 0000000000000..b657c743471bd --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/quarter.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select quarter(date '2005-09-01') from test; +>> 3 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/second.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/second.sql new file mode 100644 index 0000000000000..e448766c69c78 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/second.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select second(timestamp '2005-01-01 23:10:59') from test; +>> 59 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/truncate.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/truncate.sql new file mode 100644 index 0000000000000..3b7ac775764eb --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/truncate.sql @@ -0,0 +1,16 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +select trunc('2015-05-29 15:00:00'); +>> 2015-05-29 00:00:00 + +select trunc('2015-05-29'); +>> 2015-05-29 00:00:00 + +select trunc(timestamp '2000-01-01 10:20:30.0'); +>> 2000-01-01 00:00:00 + +select trunc(timestamp '2001-01-01 14:00:00.0'); +>> 2001-01-01 00:00:00 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/week.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/week.sql new file mode 100644 index 0000000000000..4587837a54363 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/week.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select week(date '2003-01-09') from test; +>> 2 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/year.sql b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/year.sql new file mode 100644 index 0000000000000..c49b6078ae9ef --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/functions/timeanddate/year.sql @@ -0,0 +1,13 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create memory table test(id int primary key, name varchar(255)); +> ok + +insert into test values(1, 'Hello'); +> update count: 1 + +select year(date '2005-01-01') from test; +>> 2005 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/information_schema.sql b/modules/h2/src/test/java/org/h2/test/scripts/information_schema.sql new file mode 100644 index 0000000000000..97bfe9e217c08 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/information_schema.sql @@ -0,0 +1,93 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +CREATE TABLE T1(C1 INT NOT NULL, C2 INT NOT NULL, C3 INT, C4 INT); +> ok + +ALTER TABLE T1 ADD CONSTRAINT PK_1 PRIMARY KEY(C1, C2); +> ok + +ALTER TABLE T1 ADD CONSTRAINT U_1 UNIQUE(C3, C4); +> ok + +CREATE TABLE T2(C1 INT, C2 INT, C3 INT, C4 INT); +> ok + +ALTER TABLE T2 ADD CONSTRAINT FK_1 FOREIGN KEY (C3, C4) REFERENCES T1(C1, C3) ON DELETE SET NULL; +> ok + +ALTER TABLE T2 ADD CONSTRAINT FK_2 FOREIGN KEY (C3, C4) REFERENCES T1(C4, C3) ON UPDATE CASCADE ON DELETE SET DEFAULT; +> ok + +ALTER TABLE T2 ADD CONSTRAINT CH_1 CHECK C4 > 0; +> ok + +SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS LIMIT 0; +> CONSTRAINT_CATALOG CONSTRAINT_SCHEMA CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_CATALOG TABLE_SCHEMA TABLE_NAME IS_DEFERRABLE INITIALLY_DEFERRED +> ------------------ ----------------- --------------- --------------- ------------- ------------ ---------- ------------- ------------------ +> rows: 0 + +SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, IS_DEFERRABLE, INITIALLY_DEFERRED FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS + WHERE CONSTRAINT_CATALOG = DATABASE() AND CONSTRAINT_SCHEMA = SCHEMA() AND TABLE_CATALOG = DATABASE() AND TABLE_SCHEMA = SCHEMA() + ORDER BY TABLE_NAME, CONSTRAINT_NAME; +> CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME IS_DEFERRABLE INITIALLY_DEFERRED 
+> --------------- --------------- ---------- ------------- ------------------ +> PK_1 PRIMARY KEY T1 NO NO +> U_1 UNIQUE T1 NO NO +> CH_1 CHECK T2 NO NO +> FK_1 FOREIGN KEY T2 NO NO +> FK_2 FOREIGN KEY T2 NO NO +> rows (ordered): 5 + +SELECT * FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE LIMIT 0; +> CONSTRAINT_CATALOG CONSTRAINT_SCHEMA CONSTRAINT_NAME TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION POSITION_IN_UNIQUE_CONSTRAINT +> ------------------ ----------------- --------------- ------------- ------------ ---------- ----------- ---------------- ----------------------------- +> rows: 0 + +SELECT CONSTRAINT_NAME, TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION, POSITION_IN_UNIQUE_CONSTRAINT FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE + WHERE CONSTRAINT_CATALOG = DATABASE() AND CONSTRAINT_SCHEMA = SCHEMA() AND TABLE_CATALOG = DATABASE() AND TABLE_SCHEMA = SCHEMA() + ORDER BY TABLE_NAME, CONSTRAINT_NAME, ORDINAL_POSITION; +> CONSTRAINT_NAME TABLE_NAME COLUMN_NAME ORDINAL_POSITION POSITION_IN_UNIQUE_CONSTRAINT +> --------------- ---------- ----------- ---------------- ----------------------------- +> PK_1 T1 C1 1 null +> PK_1 T1 C2 2 null +> U_1 T1 C3 1 null +> U_1 T1 C4 2 null +> FK_1 T2 C3 1 1 +> FK_1 T2 C4 2 2 +> FK_2 T2 C3 1 2 +> FK_2 T2 C4 2 1 +> rows (ordered): 8 + +SELECT * FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS LIMIT 0; +> CONSTRAINT_CATALOG CONSTRAINT_SCHEMA CONSTRAINT_NAME UNIQUE_CONSTRAINT_CATALOG UNIQUE_CONSTRAINT_SCHEMA UNIQUE_CONSTRAINT_NAME MATCH_OPTION UPDATE_RULE DELETE_RULE +> ------------------ ----------------- --------------- ------------------------- ------------------------ ---------------------- ------------ ----------- ----------- +> rows: 0 + +-- H2 may return name of the index instead of name of the referenced constraint as UNIQUE_CONSTRAINT_NAME +SELECT CONSTRAINT_NAME, SUBSTRING(UNIQUE_CONSTRAINT_NAME, 0, 11) AS UCN_PART, MATCH_OPTION, UPDATE_RULE, DELETE_RULE FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS + WHERE 
CONSTRAINT_CATALOG = DATABASE() AND CONSTRAINT_SCHEMA = SCHEMA() AND UNIQUE_CONSTRAINT_CATALOG = DATABASE() AND UNIQUE_CONSTRAINT_SCHEMA = SCHEMA() + ORDER BY CONSTRAINT_NAME, UNIQUE_CONSTRAINT_NAME; +> CONSTRAINT_NAME UCN_PART MATCH_OPTION UPDATE_RULE DELETE_RULE +> --------------- ----------- ------------ ----------- ----------- +> FK_1 FK_1_INDEX_ NONE RESTRICT SET NULL +> FK_2 U_1 NONE CASCADE SET DEFAULT +> rows (ordered): 2 + +SELECT U1.TABLE_NAME T1, U1.COLUMN_NAME C1, U2.TABLE_NAME T2, U2.COLUMN_NAME C2 + FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE U1 JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS RC ON U1.CONSTRAINT_NAME = RC.CONSTRAINT_NAME + JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE U2 ON RC.UNIQUE_CONSTRAINT_NAME = U2.CONSTRAINT_NAME AND U1.POSITION_IN_UNIQUE_CONSTRAINT = U2.ORDINAL_POSITION + WHERE U1.CONSTRAINT_NAME = 'FK_2' ORDER BY U1.COLUMN_NAME; +> T1 C1 T2 C2 +> -- -- -- -- +> T2 C3 T1 C4 +> T2 C4 T1 C3 +> rows (ordered): 2 + +DROP TABLE T2; +> ok + +DROP TABLE T1; +> ok diff --git a/modules/h2/src/test/java/org/h2/test/scripts/joins.sql b/modules/h2/src/test/java/org/h2/test/scripts/joins.sql new file mode 100644 index 0000000000000..011236d27c7ca --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/joins.sql @@ -0,0 +1,793 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). 
+-- Initial Developer: H2 Group +-- + +create table a(a int) as select 1; +> ok + +create table b(b int) as select 1; +> ok + +create table c(c int) as select x from system_range(1, 2); +> ok + +select * from a inner join b on a=b right outer join c on c=a; +> C A B +> - ---- ---- +> 1 1 1 +> 2 null null +> rows: 2 + +select * from c left outer join (a inner join b on b=a) on c=a; +> C A B +> - ---- ---- +> 1 1 1 +> 2 null null +> rows: 2 + +select * from c left outer join a on c=a inner join b on b=a; +> C A B +> - - - +> 1 1 1 +> rows: 1 + +drop table a, b, c; +> ok + +create table test(a int, b int) as select x, x from system_range(1, 100); +> ok + +-- the table t1 should be processed first +explain select * from test t2, test t1 where t1.a=1 and t1.b = t2.b; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T2.A, T2.B, T1.A, T1.B FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ /* WHERE T1.A = 1 */ INNER JOIN PUBLIC.TEST T2 /* PUBLIC.TEST.tableScan */ ON 1=1 WHERE (T1.A = 1) AND (T1.B = T2.B) +> rows: 1 + +explain select * from test t1, test t2 where t1.a=1 and t1.b = t2.b; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.A, T1.B, T2.A, T2.B FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ /* WHERE T1.A = 1 */ INNER JOIN PUBLIC.TEST T2 /* PUBLIC.TEST.tableScan */ ON 1=1 WHERE (T1.A = 1) AND (T1.B = T2.B) +> rows: 1 + +drop table test; +> ok + +create table test(id identity) as select x from system_range(1, 4); +> ok + +select a.id from test a inner join test b on a.id > b.id and b.id < 3 group by a.id; +> ID +> -- +> 2 +> 3 +> 4 +> rows: 3 + +drop table test; +> ok + +select * from system_range(1, 3) t1 inner join system_range(2, 3) 
t2 inner join system_range(1, 2) t3 on t3.x=t2.x on t1.x=t2.x; +> X X X +> - - - +> 2 2 2 +> rows: 1 + +CREATE TABLE PARENT(ID INT PRIMARY KEY); +> ok + +CREATE TABLE CHILD(ID INT PRIMARY KEY); +> ok + +INSERT INTO PARENT VALUES(1); +> update count: 1 + +SELECT * FROM PARENT P LEFT OUTER JOIN CHILD C ON C.PARENTID=P.ID; +> exception + +DROP TABLE PARENT, CHILD; +> ok + +create table t1 (i int); +> ok + +create table t2 (i int); +> ok + +create table t3 (i int); +> ok + +select a.i from t1 a inner join (select a.i from t2 a inner join (select i from t3) b on a.i=b.i) b on a.i=b.i; +> I +> - +> rows: 0 + +insert into t1 values (1); +> update count: 1 + +insert into t2 values (1); +> update count: 1 + +insert into t3 values (1); +> update count: 1 + +select a.i from t1 a inner join (select a.i from t2 a inner join (select i from t3) b on a.i=b.i) b on a.i=b.i; +> I +> - +> 1 +> rows: 1 + +drop table t1, t2, t3; +> ok + +CREATE TABLE TESTA(ID IDENTITY); +> ok + +CREATE TABLE TESTB(ID IDENTITY); +> ok + +explain SELECT TESTA.ID A, TESTB.ID B FROM TESTA, TESTB ORDER BY TESTA.ID, TESTB.ID; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------ +> SELECT TESTA.ID AS A, TESTB.ID AS B FROM PUBLIC.TESTA /* PUBLIC.TESTA.tableScan */ INNER JOIN PUBLIC.TESTB /* PUBLIC.TESTB.tableScan */ ON 1=1 ORDER BY 1, 2 +> rows (ordered): 1 + +DROP TABLE IF EXISTS TESTA, TESTB; +> ok + +create table one (id int primary key); +> ok + +create table two (id int primary key, val date); +> ok + +insert into one values(0); +> update count: 1 + +insert into one values(1); +> update count: 1 + +insert into one values(2); +> update count: 1 + +insert into two values(0, null); +> update count: 1 + +insert into two values(1, DATE'2006-01-01'); +> update count: 1 + +insert into two values(2, DATE'2006-07-01'); +> update count: 1 + +insert into two values(3, null); +> update count: 1 + +select 
* from one; +> ID +> -- +> 0 +> 1 +> 2 +> rows: 3 + +select * from two; +> ID VAL +> -- ---------- +> 0 null +> 1 2006-01-01 +> 2 2006-07-01 +> 3 null +> rows: 4 + +-- Query #1: should return one row +-- okay +select * from one natural join two left join two three on +one.id=three.id left join one four on two.id=four.id where three.val +is null; +> ID VAL ID VAL ID +> -- ---- -- ---- -- +> 0 null 0 null 0 +> rows: 1 + +-- Query #2: should return one row +-- okay +select * from one natural join two left join two three on +one.id=three.id left join one four on two.id=four.id where +three.val>=DATE'2006-07-01'; +> ID VAL ID VAL ID +> -- ---------- -- ---------- -- +> 2 2006-07-01 2 2006-07-01 2 +> rows: 1 + +-- Query #3: should return the union of #1 and #2 +select * from one natural join two left join two three on +one.id=three.id left join one four on two.id=four.id where three.val +is null or three.val>=DATE'2006-07-01'; +> ID VAL ID VAL ID +> -- ---------- -- ---------- -- +> 0 null 0 null 0 +> 2 2006-07-01 2 2006-07-01 2 +> rows: 2 + +explain select * from one natural join two left join two three on +one.id=three.id left join one four on two.id=four.id where three.val +is null or three.val>=DATE'2006-07-01'; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT ONE.ID, TWO.VAL, THREE.ID, THREE.VAL, FOUR.ID FROM PUBLIC.ONE /* PUBLIC.ONE.tableScan */ INNER JOIN PUBLIC.TWO /* PUBLIC.PRIMARY_KEY_14: ID = PUBLIC.ONE.ID AND ID = PUBLIC.ONE.ID */ ON 1=1 /* WHERE PUBLIC.ONE.ID = 
PUBLIC.TWO.ID */ LEFT OUTER JOIN PUBLIC.TWO THREE /* PUBLIC.PRIMARY_KEY_14: ID = ONE.ID */ ON ONE.ID = THREE.ID LEFT OUTER JOIN PUBLIC.ONE FOUR /* PUBLIC.PRIMARY_KEY_1: ID = TWO.ID */ ON TWO.ID = FOUR.ID WHERE (PUBLIC.ONE.ID = PUBLIC.TWO.ID) AND ((THREE.VAL IS NULL) OR (THREE.VAL >= DATE '2006-07-01')) +> rows: 1 + +-- Query #4: same as #3, but the joins have been manually re-ordered +-- Correct result set, same as expected for #3. +select * from one natural join two left join one four on +two.id=four.id left join two three on one.id=three.id where three.val +is null or three.val>=DATE'2006-07-01'; +> ID VAL ID ID VAL +> -- ---------- -- -- ---------- +> 0 null 0 0 null +> 2 2006-07-01 2 2 2006-07-01 +> rows: 2 + +drop table one; +> ok + +drop table two; +> ok + +create table test1 (id int primary key); +> ok + +create table test2 (id int primary key); +> ok + +create table test3 (id int primary key); +> ok + +insert into test1 values(1); +> update count: 1 + +insert into test2 values(1); +> update count: 1 + +insert into test3 values(1); +> update count: 1 + +select * from test1 +inner join test2 on test1.id=test2.id left +outer join test3 on test2.id=test3.id +where test3.id is null; +> ID ID ID +> -- -- -- +> rows: 0 + +explain select * from test1 +inner join test2 on test1.id=test2.id left +outer join test3 on test2.id=test3.id +where test3.id is null; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST1.ID, TEST2.ID, TEST3.ID FROM PUBLIC.TEST2 /* PUBLIC.TEST2.tableScan */ LEFT OUTER JOIN PUBLIC.TEST3 /* PUBLIC.PRIMARY_KEY_4C0: ID = TEST2.ID */ ON TEST2.ID = TEST3.ID INNER JOIN PUBLIC.TEST1 /* PUBLIC.PRIMARY_KEY_4: ID = TEST2.ID */ ON 1=1 WHERE (TEST3.ID IS NULL) AND (TEST1.ID = TEST2.ID) 
+> rows: 1 + +insert into test1 select x from system_range(2, 1000); +> update count: 999 + +select * from test1 +inner join test2 on test1.id=test2.id +left outer join test3 on test2.id=test3.id +where test3.id is null; +> ID ID ID +> -- -- -- +> rows: 0 + +explain select * from test1 +inner join test2 on test1.id=test2.id +left outer join test3 on test2.id=test3.id +where test3.id is null; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST1.ID, TEST2.ID, TEST3.ID FROM PUBLIC.TEST2 /* PUBLIC.TEST2.tableScan */ LEFT OUTER JOIN PUBLIC.TEST3 /* PUBLIC.PRIMARY_KEY_4C0: ID = TEST2.ID */ ON TEST2.ID = TEST3.ID INNER JOIN PUBLIC.TEST1 /* PUBLIC.PRIMARY_KEY_4: ID = TEST2.ID */ ON 1=1 WHERE (TEST3.ID IS NULL) AND (TEST1.ID = TEST2.ID) +> rows: 1 + +SELECT TEST1.ID, TEST2.ID, TEST3.ID +FROM TEST2 +LEFT OUTER JOIN TEST3 ON TEST2.ID = TEST3.ID +INNER JOIN TEST1 +WHERE TEST3.ID IS NULL AND TEST1.ID = TEST2.ID; +> ID ID ID +> -- -- -- +> rows: 0 + +drop table test1; +> ok + +drop table test2; +> ok + +drop table test3; +> ok + +create table left_hand (id int primary key); +> ok + +create table right_hand (id int primary key); +> ok + +insert into left_hand values(0); +> update count: 1 + +insert into left_hand values(1); +> update count: 1 + +insert into right_hand values(0); +> update count: 1 + +-- h2, postgresql, mysql, derby, hsqldb: 2 +select * from left_hand left outer join right_hand on left_hand.id=right_hand.id; +> ID ID +> -- ---- +> 0 0 +> 1 null +> rows: 2 + +-- h2, postgresql, mysql, derby, hsqldb: 2 +select * from left_hand left join right_hand on left_hand.id=right_hand.id; +> ID ID +> -- ---- +> 0 0 +> 1 null +> rows: 2 + +-- h2: 1 (2 cols); postgresql, mysql: 1 (1 col); derby, hsqldb: no 
natural join +select * from left_hand natural join right_hand; +> ID +> -- +> 0 +> rows: 1 + +-- h2, postgresql, mysql, derby, hsqldb: 1 +select * from left_hand left outer join right_hand on left_hand.id=right_hand.id where left_hand.id=1; +> ID ID +> -- ---- +> 1 null +> rows: 1 + +-- h2, postgresql, mysql, derby, hsqldb: 1 +select * from left_hand left join right_hand on left_hand.id=right_hand.id where left_hand.id=1; +> ID ID +> -- ---- +> 1 null +> rows: 1 + +-- h2: 0 (2 cols); postgresql, mysql: 0 (1 col); derby, hsqldb: no natural join +select * from left_hand natural join right_hand where left_hand.id=1; +> ID +> -- +> rows: 0 + +-- !!! h2: 1; postgresql, mysql, hsqldb: 0; derby: exception +select * from left_hand left outer join right_hand on left_hand.id=right_hand.id where left_hand.id=1 having right_hand.id=2; +> ID ID +> -- -- +> rows: 0 + +-- !!! h2: 1; postgresql, mysql, hsqldb: 0; derby: exception +select * from left_hand left join right_hand on left_hand.id=right_hand.id where left_hand.id=1 having right_hand.id=2; +> ID ID +> -- -- +> rows: 0 + +-- h2: 0 (2 cols); postgresql: 0 (1 col), mysql: exception; derby, hsqldb: no natural join +select * from left_hand natural join right_hand where left_hand.id=1 having right_hand.id=2; +> exception + +-- h2, mysql, hsqldb: 0 rows; postgresql, derby: exception +select * from left_hand left outer join right_hand on left_hand.id=right_hand.id where left_hand.id=1 group by left_hand.id having right_hand.id=2; +> ID ID +> -- -- +> rows: 0 + +-- h2, mysql, hsqldb: 0 rows; postgresql, derby: exception +select * from left_hand left join right_hand on left_hand.id=right_hand.id where left_hand.id=1 group by left_hand.id having right_hand.id=2; +> ID ID +> -- -- +> rows: 0 + +-- h2: 0 rows; postgresql, mysql: exception; derby, hsqldb: no natural join +select * from left_hand natural join right_hand where left_hand.id=1 group by left_hand.id having right_hand.id=2; +> ID +> -- +> rows: 0 + +drop table right_hand; +> 
ok + +drop table left_hand; +> ok + +--- complex join --------------------------------------------------------------------------------------------- +CREATE TABLE T1(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +CREATE TABLE T2(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +CREATE TABLE T3(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO T1 VALUES(1, 'Hello'); +> update count: 1 + +INSERT INTO T1 VALUES(2, 'World'); +> update count: 1 + +INSERT INTO T1 VALUES(3, 'Peace'); +> update count: 1 + +INSERT INTO T2 VALUES(1, 'Hello'); +> update count: 1 + +INSERT INTO T2 VALUES(2, 'World'); +> update count: 1 + +INSERT INTO T3 VALUES(1, 'Hello'); +> update count: 1 + +SELECT * FROM t1 left outer join t2 on t1.id=t2.id; +> ID NAME ID NAME +> -- ----- ---- ----- +> 1 Hello 1 Hello +> 2 World 2 World +> 3 Peace null null +> rows: 3 + +SELECT * FROM t1 left outer join t2 on t1.id=t2.id left outer join t3 on t1.id=t3.id; +> ID NAME ID NAME ID NAME +> -- ----- ---- ----- ---- ----- +> 1 Hello 1 Hello 1 Hello +> 2 World 2 World null null +> 3 Peace null null null null +> rows: 3 + +SELECT * FROM t1 left outer join t2 on t1.id=t2.id inner join t3 on t1.id=t3.id; +> ID NAME ID NAME ID NAME +> -- ----- -- ----- -- ----- +> 1 Hello 1 Hello 1 Hello +> rows: 1 + +drop table t1; +> ok + +drop table t2; +> ok + +drop table t3; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, parent int, sid int); +> ok + +create index idx_p on test(sid); +> ok + +insert into test select x, x, x from system_range(0,20); +> update count: 21 + +select * from test l0 inner join test l1 on l0.sid=l1.sid, test l3 where l0.sid=l3.parent; +> ID PARENT SID ID PARENT SID ID PARENT SID +> -- ------ --- -- ------ --- -- ------ --- +> 0 0 0 0 0 0 0 0 0 +> 1 1 1 1 1 1 1 1 1 +> 10 10 10 10 10 10 10 10 10 +> 11 11 11 11 11 11 11 11 11 +> 12 12 12 12 12 12 12 12 12 +> 13 13 13 13 13 13 13 13 13 +> 14 14 14 14 14 14 14 14 14 +> 15 15 15 15 15 15 15 15 15 +> 16 16 16 16 16 16 16 16 16 +> 17 17 17 17 17 17 17 
17 17 +> 18 18 18 18 18 18 18 18 18 +> 19 19 19 19 19 19 19 19 19 +> 2 2 2 2 2 2 2 2 2 +> 20 20 20 20 20 20 20 20 20 +> 3 3 3 3 3 3 3 3 3 +> 4 4 4 4 4 4 4 4 4 +> 5 5 5 5 5 5 5 5 5 +> 6 6 6 6 6 6 6 6 6 +> 7 7 7 7 7 7 7 7 7 +> 8 8 8 8 8 8 8 8 8 +> 9 9 9 9 9 9 9 9 9 +> rows: 21 + +select * from +test l0 +inner join test l1 on l0.sid=l1.sid +inner join test l2 on l0.sid=l2.id, +test l5 +inner join test l3 on l5.sid=l3.sid +inner join test l4 on l5.sid=l4.id +where l2.id is not null +and l0.sid=l5.parent; +> ID PARENT SID ID PARENT SID ID PARENT SID ID PARENT SID ID PARENT SID ID PARENT SID +> -- ------ --- -- ------ --- -- ------ --- -- ------ --- -- ------ --- -- ------ --- +> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +> 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 +> 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 +> 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 +> 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 +> 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 +> 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 +> 15 15 15 15 15 15 15 15 15 15 15 15 15 15 15 15 15 15 +> 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 +> 17 17 17 17 17 17 17 17 17 17 17 17 17 17 17 17 17 17 +> 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 +> 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 +> 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 +> 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 +> 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 +> 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 +> 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 +> 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 +> 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 +> 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 +> 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 +> rows: 21 + +DROP TABLE IF EXISTS TEST; +> ok + +--- joins ---------------------------------------------------------------------------------------------------- +create table t1(id int, name varchar); +> ok + +insert into t1 values(1, 'hi'), (2, 'world'); +> update count: 2 + +create table t2(id int, 
name varchar); +> ok + +insert into t2 values(1, 'Hallo'), (3, 'Welt'); +> update count: 2 + +select * from t1 join t2 on t1.id=t2.id; +> ID NAME ID NAME +> -- ---- -- ----- +> 1 hi 1 Hallo +> rows: 1 + +select * from t1 left join t2 on t1.id=t2.id; +> ID NAME ID NAME +> -- ----- ---- ----- +> 1 hi 1 Hallo +> 2 world null null +> rows: 2 + +select * from t1 right join t2 on t1.id=t2.id; +> ID NAME ID NAME +> -- ----- ---- ---- +> 1 Hallo 1 hi +> 3 Welt null null +> rows: 2 + +select * from t1 cross join t2; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 hi 1 Hallo +> 1 hi 3 Welt +> 2 world 1 Hallo +> 2 world 3 Welt +> rows: 4 + +select * from t1 natural join t2; +> ID NAME +> -- ---- +> rows: 0 + +explain select * from t1 natural join t2; +> PLAN +> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.T2 /* PUBLIC.T2.tableScan */ INNER JOIN PUBLIC.T1 /* PUBLIC.T1.tableScan */ ON 1=1 WHERE (PUBLIC.T1.ID = PUBLIC.T2.ID) AND (PUBLIC.T1.NAME = PUBLIC.T2.NAME) +> rows: 1 + +drop table t1; +> ok + +drop table t2; +> ok + +create table customer(customerid int, customer_name varchar); +> ok + +insert into customer values(0, 'Acme'); +> update count: 1 + +create table invoice(customerid int, invoiceid int, invoice_text varchar); +> ok + +insert into invoice values(0, 1, 'Soap'), (0, 2, 'More Soap'); +> update count: 2 + +create table INVOICE_LINE(line_id int, invoiceid int, customerid int, line_text varchar); +> ok + +insert into INVOICE_LINE values(10, 1, 0, 'Super Soap'), (20, 1, 0, 'Regular Soap'); +> update count: 2 + +select c.*, i.*, l.* from customer c natural join invoice i natural join INVOICE_LINE l; +> CUSTOMERID CUSTOMER_NAME INVOICEID INVOICE_TEXT LINE_ID LINE_TEXT +> ---------- ------------- --------- ------------ ------- ------------ +> 0 Acme 1 Soap 10 Super Soap +> 0 Acme 1 Soap 20 
Regular Soap +> rows: 2 + +explain select c.*, i.*, l.* from customer c natural join invoice i natural join INVOICE_LINE l; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +> SELECT C.CUSTOMERID, C.CUSTOMER_NAME, I.INVOICEID, I.INVOICE_TEXT, L.LINE_ID, L.LINE_TEXT FROM PUBLIC.INVOICE I /* PUBLIC.INVOICE.tableScan */ INNER JOIN PUBLIC.INVOICE_LINE L /* PUBLIC.INVOICE_LINE.tableScan */ ON 1=1 /* WHERE (PUBLIC.I.CUSTOMERID = PUBLIC.L.CUSTOMERID) AND (PUBLIC.I.INVOICEID = PUBLIC.L.INVOICEID) */ INNER JOIN PUBLIC.CUSTOMER C /* PUBLIC.CUSTOMER.tableScan */ ON 1=1 WHERE (PUBLIC.C.CUSTOMERID = PUBLIC.I.CUSTOMERID) AND ((PUBLIC.I.CUSTOMERID = PUBLIC.L.CUSTOMERID) AND (PUBLIC.I.INVOICEID = PUBLIC.L.INVOICEID)) +> rows: 1 + +drop table customer; +> ok + +drop table invoice; +> ok + +drop table INVOICE_LINE; +> ok + +--- outer joins ---------------------------------------------------------------------------------------------- +CREATE TABLE PARENT(ID INT, NAME VARCHAR(20)); +> ok + +CREATE TABLE CHILD(ID INT, PARENTID INT, NAME VARCHAR(20)); +> ok + +INSERT INTO PARENT VALUES(1, 'Sue'); +> update count: 1 + +INSERT INTO PARENT VALUES(2, 'Joe'); +> update count: 1 + +INSERT INTO CHILD VALUES(100, 1, 'Simon'); +> update count: 1 + +INSERT INTO CHILD VALUES(101, 1, 'Sabine'); +> update count: 1 + +SELECT * FROM PARENT P INNER JOIN CHILD C ON P.ID = C.PARENTID; +> ID NAME ID PARENTID NAME +> -- ---- --- -------- ------ +> 1 Sue 100 1 Simon +> 1 Sue 101 1 Sabine +> rows: 2 + +SELECT * FROM 
PARENT P LEFT OUTER JOIN CHILD C ON P.ID = C.PARENTID; +> ID NAME ID PARENTID NAME +> -- ---- ---- -------- ------ +> 1 Sue 100 1 Simon +> 1 Sue 101 1 Sabine +> 2 Joe null null null +> rows: 3 + +SELECT * FROM CHILD C RIGHT OUTER JOIN PARENT P ON P.ID = C.PARENTID; +> ID NAME ID PARENTID NAME +> -- ---- ---- -------- ------ +> 1 Sue 100 1 Simon +> 1 Sue 101 1 Sabine +> 2 Joe null null null +> rows: 3 + +DROP TABLE PARENT; +> ok + +DROP TABLE CHILD; +> ok + +CREATE TABLE A(A1 INT, A2 INT); +> ok + +INSERT INTO A VALUES (1, 2); +> update count: 1 + +CREATE TABLE B(B1 INT, B2 INT); +> ok + +INSERT INTO B VALUES (1, 2); +> update count: 1 + +CREATE TABLE C(B1 INT, C1 INT); +> ok + +INSERT INTO C VALUES (1, 2); +> update count: 1 + +SELECT * FROM A LEFT JOIN B ON TRUE; +> A1 A2 B1 B2 +> -- -- -- -- +> 1 2 1 2 +> rows: 1 + +SELECT A.A1, A.A2, B.B1, B.B2 FROM A RIGHT JOIN B ON TRUE; +> A1 A2 B1 B2 +> -- -- -- -- +> 1 2 1 2 +> rows: 1 + +-- this syntax without ON or USING in not standard +SELECT * FROM A LEFT JOIN B; +> A1 A2 B1 B2 +> -- -- -- -- +> 1 2 1 2 +> rows: 1 + +-- this syntax without ON or USING in not standard +SELECT A.A1, A.A2, B.B1, B.B2 FROM A RIGHT JOIN B; +> A1 A2 B1 B2 +> -- -- -- -- +> 1 2 1 2 +> rows: 1 + +SELECT * FROM A LEFT JOIN B ON TRUE NATURAL JOIN C; +> A1 A2 B1 B2 C1 +> -- -- -- -- -- +> 1 2 1 2 2 +> rows: 1 + +SELECT A.A1, A.A2, B.B1, B.B2, C.C1 FROM A RIGHT JOIN B ON TRUE NATURAL JOIN C; +> A1 A2 B1 B2 C1 +> -- -- -- -- -- +> 1 2 1 2 2 +> rows: 1 + +-- this syntax without ON or USING in not standard +SELECT * FROM A LEFT JOIN B NATURAL JOIN C; +> A1 A2 B1 B2 C1 +> -- -- -- -- -- +> 1 2 1 2 2 +> rows: 1 + +-- this syntax without ON or USING in not standard +SELECT A.A1, A.A2, B.B1, B.B2, C.C1 FROM A RIGHT JOIN B NATURAL JOIN C; +> A1 A2 B1 B2 C1 +> -- -- -- -- -- +> 1 2 1 2 2 +> rows: 1 + +DROP TABLE A; +> ok + +DROP TABLE B; +> ok + +DROP TABLE C; +> ok + +CREATE TABLE T1(X1 INT); +CREATE TABLE T2(X2 INT); +CREATE TABLE T3(X3 INT); +CREATE 
TABLE T4(X4 INT); +CREATE TABLE T5(X5 INT); + +INSERT INTO T1 VALUES (1); +INSERT INTO T1 VALUES (NULL); +INSERT INTO T2 VALUES (1); +INSERT INTO T2 VALUES (NULL); +INSERT INTO T3 VALUES (1); +INSERT INTO T3 VALUES (NULL); +INSERT INTO T4 VALUES (1); +INSERT INTO T4 VALUES (NULL); +INSERT INTO T5 VALUES (1); +INSERT INTO T5 VALUES (NULL); + +SELECT T1.X1, T2.X2, T3.X3, T4.X4, T5.X5 FROM ( + T1 INNER JOIN ( + T2 LEFT OUTER JOIN ( + T3 INNER JOIN T4 ON T3.X3 = T4.X4 + ) ON T2.X2 = T4.X4 + ) ON T1.X1 = T2.X2 +) INNER JOIN T5 ON T2.X2 = T5.X5; +> X1 X2 X3 X4 X5 +> -- -- -- -- -- +> 1 1 1 1 1 +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/package.html b/modules/h2/src/test/java/org/h2/test/scripts/package.html new file mode 100644 index 0000000000000..3e6a0b04131d8 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Script test files. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/scripts/query-optimisations.sql b/modules/h2/src/test/java/org/h2/test/scripts/query-optimisations.sql new file mode 100644 index 0000000000000..a60455d3d909d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/query-optimisations.sql @@ -0,0 +1,22 @@ +-- Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +create table person(firstname varchar, lastname varchar); +> ok + +create index person_1 on person(firstname, lastname); +> ok + +insert into person select convert(x,varchar) as firstname, (convert(x,varchar) || ' last') as lastname from system_range(1,100); +> update count: 100 + +-- Issue #643: verify that when using an index, we use the IN part of the query, if that part of the query +-- can directly use the index. +-- +explain analyze SELECT * FROM person WHERE firstname IN ('FirstName1', 'FirstName2') AND lastname='LastName1'; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT PERSON.FIRSTNAME, PERSON.LASTNAME FROM PUBLIC.PERSON /* PUBLIC.PERSON_1: FIRSTNAME IN('FirstName1', 'FirstName2') AND LASTNAME = 'LastName1' */ /* scanCount: 1 */ WHERE (FIRSTNAME IN('FirstName1', 'FirstName2')) AND (LASTNAME = 'LastName1') +> rows: 1 diff --git a/modules/h2/src/test/java/org/h2/test/scripts/range_table.sql b/modules/h2/src/test/java/org/h2/test/scripts/range_table.sql new file mode 100644 index 0000000000000..ffa0d835bb3fb --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/range_table.sql @@ -0,0 +1,120 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- + +SELECT * FROM SYSTEM_RANGE(1, 10) ORDER BY 1; +> X +> -- +> 1 +> 2 +> 3 +> 4 +> 5 +> 6 +> 7 +> 8 +> 9 +> 10 +> rows (ordered): 10 + +SELECT COUNT(*) FROM SYSTEM_RANGE(1, 10); +>> 10 + +SELECT * FROM SYSTEM_RANGE(1, 10, 2) ORDER BY 1; +> X +> - +> 1 +> 3 +> 5 +> 7 +> 9 +> rows (ordered): 5 + +SELECT COUNT(*) FROM SYSTEM_RANGE(1, 10, 2); +>> 5 + +SELECT * FROM SYSTEM_RANGE(1, 9, 2) ORDER BY 1; +> X +> - +> 1 +> 3 +> 5 +> 7 +> 9 +> rows (ordered): 5 + +SELECT COUNT(*) FROM SYSTEM_RANGE(1, 9, 2); +>> 5 + +SELECT * FROM SYSTEM_RANGE(10, 1, -2) ORDER BY 1 DESC; +> X +> -- +> 10 +> 8 +> 6 +> 4 +> 2 +> rows (ordered): 5 + +SELECT COUNT(*) FROM SYSTEM_RANGE(10, 1, -2); +>> 5 + +SELECT * FROM SYSTEM_RANGE(10, 2, -2) ORDER BY 1 DESC; +> X +> -- +> 10 +> 8 +> 6 +> 4 +> 2 +> rows (ordered): 5 + +SELECT COUNT(*) FROM SYSTEM_RANGE(10, 2, -2); +>> 5 + +SELECT * FROM SYSTEM_RANGE(1, 1); +> X +> - +> 1 +> rows: 1 + +SELECT COUNT(*) FROM SYSTEM_RANGE(1, 1); +>> 1 + +SELECT * FROM SYSTEM_RANGE(1, 1, -1); +> X +> - +> 1 +> rows: 1 + +SELECT COUNT(*) FROM SYSTEM_RANGE(1, 1, -1); +>> 1 + +SELECT * FROM SYSTEM_RANGE(2, 1); +> X +> - +> rows: 0 + +SELECT COUNT(*) FROM SYSTEM_RANGE(2, 1); +>> 0 + +SELECT * FROM SYSTEM_RANGE(2, 1, 2); +> X +> - +> rows: 0 + +SELECT COUNT(*) FROM SYSTEM_RANGE(2, 1, 2); +>> 0 + +SELECT * FROM SYSTEM_RANGE(1, 2, 0); +> exception + +SELECT COUNT(*) FROM SYSTEM_RANGE(1, 2, 0); +> exception + +SELECT * FROM SYSTEM_RANGE(2, 1, 0); +> exception + +SELECT COUNT(*) FROM SYSTEM_RANGE(2, 1, 0); +> exception diff --git a/modules/h2/src/test/java/org/h2/test/scripts/testScript.sql b/modules/h2/src/test/java/org/h2/test/scripts/testScript.sql new file mode 100644 index 0000000000000..fe0e8b56c1300 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/testScript.sql @@ -0,0 +1,8688 @@ +-- Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, +-- and the EPL 1.0 (http://h2database.com/html/license.html). +-- Initial Developer: H2 Group +-- +--- special grammar and test cases --------------------------------------------------------------------------------------------- +create table test(id int) as select 1; +> ok + +select * from test where id in (select id from test order by 'x'); +> ID +> -- +> 1 +> rows (ordered): 1 + +drop table test; +> ok + +select x, x in(2, 3) i from system_range(1, 2) group by x; +> X I +> - ----- +> 1 FALSE +> 2 TRUE +> rows: 2 + +select * from dual join(select x from dual) on 1=1; +> X X +> - - +> 1 1 +> rows: 1 + +select 0 as x from system_range(1, 2) d group by d.x; +> X +> - +> 0 +> 0 +> rows: 2 + +select 1 "a", count(*) from dual group by "a" order by "a"; +> a COUNT(*) +> - -------- +> 1 1 +> rows (ordered): 1 + +create table results(eventId int, points int, studentId int); +> ok + +insert into results values(1, 10, 1), (2, 20, 1), (3, 5, 1); +> update count: 3 + +insert into results values(1, 10, 2), (2, 20, 2), (3, 5, 2); +> update count: 3 + +insert into results values(1, 10, 3), (2, 20, 3), (3, 5, 3); +> update count: 3 + +SELECT SUM(points) FROM RESULTS +WHERE eventID IN +(SELECT eventID FROM RESULTS +WHERE studentID = 2 +ORDER BY points DESC +LIMIT 2 ) +AND studentID = 2; +> SUM(POINTS) +> ----------- +> 30 +> rows (ordered): 1 + +SELECT eventID X FROM RESULTS +WHERE studentID = 2 +ORDER BY points DESC +LIMIT 2; +> X +> - +> 2 +> 1 +> rows (ordered): 2 + +SELECT SUM(r.points) FROM RESULTS r, +(SELECT eventID FROM RESULTS +WHERE studentID = 2 +ORDER BY points DESC +LIMIT 2 ) r2 +WHERE r2.eventID = r.eventId +AND studentID = 2; +> SUM(R.POINTS) +> ------------- +> 30 +> rows (ordered): 1 + +drop table results; +> ok + +create table test(a int, b int); +> ok + +insert into test values(1, 1); +> update count: 1 + +create index on test(a, b desc); +> ok + +select * from test where a = 1; +> A B +> - - +> 1 1 +> rows: 1 + +drop table 
test; +> ok + +create table test(id int, name varchar) as select 1, 'a'; +> ok + +(select id from test order by id) union (select id from test order by name); +> ID +> -- +> 1 +> rows (ordered): 1 + +drop table test; +> ok + +create sequence seq; +> ok + +select case seq.nextval when 2 then 'two' when 3 then 'three' when 1 then 'one' else 'other' end result from dual; +> RESULT +> ------ +> one +> rows: 1 + +drop sequence seq; +> ok + +create table test(x int); +> ok + +create hash index on test(x); +> ok + +select 1 from test group by x; +> 1 +> - +> rows: 0 + +drop table test; +> ok + +select * from dual where x = x + 1 or x in(2, 0); +> X +> - +> rows: 0 + +select * from system_range(1,1) order by x limit 3 offset 3; +> X +> - +> rows (ordered): 0 + +select * from dual where cast('a' || x as varchar_ignorecase) in ('A1', 'B1'); +> X +> - +> 1 +> rows: 1 + +create sequence seq start with 65 increment by 1; +> ok + +select char(nextval('seq')) as x; +> X +> - +> A +> rows: 1 + +select char(nextval('seq')) as x; +> X +> - +> B +> rows: 1 + +drop sequence seq; +> ok + +create table test(id int, name varchar); +> ok + +insert into test values(5, 'b'), (5, 'b'), (20, 'a'); +> update count: 3 + +select id from test where name in(null, null); +> ID +> -- +> rows: 0 + +select * from (select * from test order by name limit 1) where id < 10; +> ID NAME +> -- ---- +> rows (ordered): 0 + +drop table test; +> ok + +create table test (id int not null, pid int); +> ok + +create index idx_test_pid on test (pid); +> ok + +alter table test add constraint fk_test foreign key (pid) +references test (id) index idx_test_pid; +> ok + +insert into test values (2, null); +> update count: 1 + +update test set pid = 1 where id = 2; +> exception + +drop table test; +> ok + +create table test(name varchar(255)); +> ok + +select * from test union select * from test order by test.name; +> exception + +insert into test values('a'), ('b'), ('c'); +> update count: 3 + +select name from test where 
name > all(select name from test where name<'b'); +> NAME +> ---- +> b +> c +> rows: 2 + +select count(*) from (select name from test where name > all(select name from test where name<'b')) x; +> COUNT(*) +> -------- +> 2 +> rows: 1 + +drop table test; +> ok + +create table test(id int) as select 1; +> ok + +select * from test where id >= all(select id from test where 1=0); +> ID +> -- +> 1 +> rows: 1 + +select * from test where id = all(select id from test where 1=0); +> ID +> -- +> 1 +> rows: 1 + +select * from test where id = all(select id from test union all select id from test); +> ID +> -- +> 1 +> rows: 1 + +select * from test where null >= all(select id from test where 1=0); +> ID +> -- +> 1 +> rows: 1 + +select * from test where null = all(select id from test where 1=0); +> ID +> -- +> 1 +> rows: 1 + +select * from test where null = all(select id from test union all select id from test); +> ID +> -- +> rows: 0 + +select * from test where id >= all(select cast(null as int) from test); +> ID +> -- +> rows: 0 + +select * from test where id = all(select null from test union all select id from test); +> ID +> -- +> rows: 0 + +select * from test where null >= all(select cast(null as int) from test); +> ID +> -- +> rows: 0 + +select * from test where null = all(select null from test union all select id from test); +> ID +> -- +> rows: 0 + +drop table test; +> ok + +select x from dual order by y.x; +> exception + +create table test(id int primary key, name varchar(255), row_number int); +> ok + +insert into test values(1, 'hello', 10), (2, 'world', 20); +> update count: 2 + +select row_number() over(), id, name from test order by id; +> ROWNUM() ID NAME +> -------- -- ----- +> 1 1 hello +> 2 2 world +> rows (ordered): 2 + +select row_number() over(), id, name from test order by name; +> ROWNUM() ID NAME +> -------- -- ----- +> 1 1 hello +> 2 2 world +> rows (ordered): 2 + +select row_number() over(), id, name from test order by name desc; +> ROWNUM() ID NAME +> 
-------- -- ----- +> 2 2 world +> 1 1 hello +> rows (ordered): 2 + +update test set (id)=(id); +> update count: 2 + +drop table test; +> ok + +create table test(x int) as select x from system_range(1, 2); +> ok + +select * from (select rownum r from test) where r in (1, 2); +> R +> - +> 1 +> 2 +> rows: 2 + +select * from (select rownum r from test) where r = 1 or r = 2; +> R +> - +> 1 +> 2 +> rows: 2 + +drop table test; +> ok + +select 2^2; +> exception + +select * from dual where x in (select x from dual group by x order by max(x)); +> X +> - +> 1 +> rows (ordered): 1 + +create table test(d decimal(1, 2)); +> exception + +call truncate_value('Test 123', 4, false); +> 'Test' +> ------ +> Test +> rows: 1 + +call truncate_value(1234567890.123456789, 4, false); +> exception + +call truncate_value(1234567890.123456789, 4, true); +> 1234567890.1234567 +> ------------------ +> 1234567890.1234567 +> rows: 1 + +select * from dual where cast('xx' as varchar_ignorecase(1)) = 'X' and cast('x x ' as char(2)) = 'x'; +> X +> - +> 1 +> rows: 1 + +explain select -cast(0 as real), -cast(0 as double); +> PLAN +> ---------------------------------------------------------------- +> SELECT 0.0, 0.0 FROM SYSTEM_RANGE(1, 1) /* PUBLIC.RANGE_INDEX */ +> rows: 1 + +select () empty; +> EMPTY +> ----- +> () +> rows: 1 + +select (1,) one_element; +> ONE_ELEMENT +> ----------- +> (1) +> rows: 1 + +select (1) one; +> ONE +> --- +> 1 +> rows: 1 + +create table test(id int); +> ok + +insert into test values(1), (2), (4); +> update count: 3 + +select * from test order by id limit -1; +> ID +> -- +> 1 +> 2 +> 4 +> rows (ordered): 3 + +select * from test order by id limit 0; +> ID +> -- +> rows (ordered): 0 + +select * from test order by id limit 1; +> ID +> -- +> 1 +> rows (ordered): 1 + +select * from test order by id limit 1+1; +> ID +> -- +> 1 +> 2 +> rows (ordered): 2 + +select * from test order by id limit null; +> ID +> -- +> 1 +> 2 +> 4 +> rows (ordered): 3 + +select a.id, a.id in(select 4) x 
from test a, test b where a.id in (b.id, b.id - 1); +> ID X +> -- ----- +> 1 FALSE +> 1 FALSE +> 2 FALSE +> 4 TRUE +> rows: 4 + +select a.id, a.id in(select 4) x from test a, test b where a.id in (b.id, b.id - 1) group by a.id; +> ID X +> -- ----- +> 1 FALSE +> 2 FALSE +> 4 TRUE +> rows: 3 + +select a.id, 4 in(select a.id) x from test a, test b where a.id in (b.id, b.id - 1) group by a.id; +> ID X +> -- ----- +> 1 FALSE +> 2 FALSE +> 4 TRUE +> rows: 3 + +delete from test limit 0; +> ok + +delete from test limit 1; +> update count: 1 + +delete from test limit -1; +> update count: 2 + +drop table test; +> ok + +create domain x as int not null; +> ok + +create table test(id x); +> ok + +insert into test values(null); +> exception + +drop table test; +> ok + +drop domain x; +> ok + +create table test(id int primary key); +> ok + +insert into test(id) direct sorted select x from system_range(1, 100); +> update count: 100 + +explain insert into test(id) direct sorted select x from system_range(1, 100); +> PLAN +> ----------------------------------------------------------------------------------------------------- +> INSERT INTO PUBLIC.TEST(ID) DIRECT SORTED SELECT X FROM SYSTEM_RANGE(1, 100) /* PUBLIC.RANGE_INDEX */ +> rows: 1 + +explain select * from test limit 10 sample_size 10; +> PLAN +> ----------------------------------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ LIMIT 10 SAMPLE_SIZE 10 +> rows: 1 + +drop table test; +> ok + +create table test(id int primary key); +> ok + +insert into test values(1), (2), (3), (4); +> update count: 4 + +explain analyze select * from test where id is null; +> PLAN +> ---------------------------------------------------------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID IS NULL */ /* scanCount: 1 */ WHERE ID IS NULL +> rows: 1 + +drop table test; +> ok + +explain analyze select 1; +> PLAN +> 
---------------------------------------------------------------------------- +> SELECT 1 FROM SYSTEM_RANGE(1, 1) /* PUBLIC.RANGE_INDEX */ /* scanCount: 2 */ +> rows: 1 + +create table folder(id int primary key, name varchar(255), parent int); +> ok + +insert into folder values(1, null, null), (2, 'bin', 1), (3, 'docs', 1), (4, 'html', 3), (5, 'javadoc', 3), (6, 'ext', 1), (7, 'service', 1), (8, 'src', 1), (9, 'docsrc', 8), (10, 'installer', 8), (11, 'main', 8), (12, 'META-INF', 11), (13, 'org', 11), (14, 'h2', 13), (15, 'test', 8), (16, 'tools', 8); +> update count: 16 + +with link(id, name, level) as (select id, name, 0 from folder where parent is null union all select folder.id, ifnull(link.name || '/', '') || folder.name, level + 1 from link inner join folder on link.id = folder.parent) select name from link where name is not null order by cast(id as int); +> NAME +> ----------------- +> bin +> docs +> docs/html +> docs/javadoc +> ext +> service +> src +> src/docsrc +> src/installer +> src/main +> src/main/META-INF +> src/main/org +> src/main/org/h2 +> src/test +> src/tools +> rows (ordered): 15 + +drop table folder; +> ok + +create table test(id int); +> ok + +create view x as select * from test; +> ok + +drop table test restrict; +> exception + +drop table test cascade; +> ok + +select 1, 2 from (select * from dual) union all select 3, 4 from dual; +> 1 2 +> - - +> 1 2 +> 3 4 +> rows: 2 + +select 3 from (select * from dual) union all select 2 from dual; +> 3 +> - +> 2 +> 3 +> rows: 2 + +create table a(x int, y int); +> ok + +create unique index a_xy on a(x, y); +> ok + +create table b(x int, y int, foreign key(x, y) references a(x, y)); +> ok + +insert into a values(null, null), (null, 0), (0, null), (0, 0); +> update count: 4 + +insert into b values(null, null), (null, 0), (0, null), (0, 0); +> update count: 4 + +delete from a where x is null and y is null; +> update count: 1 + +delete from a where x is null and y = 0; +> update count: 1 + +delete from a 
where x = 0 and y is null; +> update count: 1 + +delete from a where x = 0 and y = 0; +> exception + +drop table b; +> ok + +drop table a; +> ok + +select * from (select null as x) where x=1; +> X +> - +> rows: 0 + +create table test(a int primary key, b int references(a)); +> ok + +merge into test values(1, 2); +> exception + +drop table test; +> ok + +create table test(id int primary key, d int); +> ok + +insert into test values(1,1), (2, 1); +> update count: 2 + +select id from test where id in (1, 2) and d = 1; +> ID +> -- +> 1 +> 2 +> rows: 2 + +drop table test; +> ok + +create table test(id decimal(10, 2) primary key) as select 0; +> ok + +select * from test where id = 0.00; +> ID +> ---- +> 0.00 +> rows: 1 + +select * from test where id = 0.0; +> ID +> ---- +> 0.00 +> rows: 1 + +drop table test; +> ok + +select count(*) from (select 1 union (select 2 intersect select 2)) x; +> COUNT(*) +> -------- +> 2 +> rows: 1 + +create table test(id varchar(1) primary key) as select 'X'; +> ok + +select count(*) from (select 1 from dual where x in ((select 1 union select 1))) a; +> COUNT(*) +> -------- +> 1 +> rows: 1 + +insert into test ((select 1 union select 2) union select 3); +> update count: 3 + +select count(*) from test where id = 'X1'; +> COUNT(*) +> -------- +> 0 +> rows: 1 + +drop table test; +> ok + +create table test(id int primary key, name varchar(255), x int); +> ok + +create unique index idx_name1 on test(name); +> ok + +create unique index idx_name2 on test(name); +> ok + +show columns from test; +> FIELD TYPE NULL KEY DEFAULT +> ----- ------------ ---- --- ------- +> ID INTEGER(10) NO PRI NULL +> NAME VARCHAR(255) YES UNI NULL +> X INTEGER(10) YES NULL +> rows: 3 + +show columns from catalogs from information_schema; +> FIELD TYPE NULL KEY DEFAULT +> ------------ ------------------- ---- --- ------- +> CATALOG_NAME VARCHAR(2147483647) YES NULL +> rows: 1 + +show columns from information_schema.catalogs; +> FIELD TYPE NULL KEY DEFAULT +> ------------ 
------------------- ---- --- ------- +> CATALOG_NAME VARCHAR(2147483647) YES NULL +> rows: 1 + +drop table test; +> ok + +create table test(id int, constraint pk primary key(id), constraint x unique(id)); +> ok + +select constraint_name from information_schema.indexes where table_name = 'TEST'; +> CONSTRAINT_NAME +> --------------- +> PK +> rows: 1 + +drop table test; +> ok + +create table parent(id int primary key); +> ok + +create table child(id int, parent_id int, constraint child_parent foreign key (parent_id) references parent(id)); +> ok + +select constraint_name from information_schema.indexes where table_name = 'CHILD'; +> CONSTRAINT_NAME +> --------------- +> CHILD_PARENT +> rows: 1 + +drop table parent, child; +> ok + +create table test(id int, name varchar(max)); +> ok + +alter table test alter column id identity; +> ok + +drop table test; +> ok + +create table test(id int primary key, name varchar); +> ok + +alter table test alter column id int auto_increment; +> ok + +create table otherTest(id int primary key, name varchar); +> ok + +alter table otherTest add constraint fk foreign key(id) references test(id); +> ok + +alter table otherTest drop foreign key fk; +> ok + +create unique index idx on otherTest(name); +> ok + +alter table otherTest drop index idx; +> ok + +drop table otherTest; +> ok + +insert into test(id) values(1); +> update count: 1 + +alter table test change column id id2 int; +> ok + +select id2 from test; +> ID2 +> --- +> 1 +> rows: 1 + +drop table test; +> ok + +create table test(id identity); +> ok + +set password test; +> exception + +alter user sa set password test; +> exception + +comment on table test is test; +> exception + +select 1 from test a where 1 in(select 1 from test b where b.id in(select 1 from test c where c.id=a.id)); +> 1 +> - +> rows: 0 + +drop table test; +> ok + +select @n := case when x = 1 then 1 else @n * x end f from system_range(1, 4); +> F +> -- +> 1 +> 2 +> 24 +> 6 +> rows: 4 + +select * from (select "x" 
from dual); +> exception + +select * from(select 1 from system_range(1, 2) group by sin(x) order by sin(x)); +> 1 +> - +> 1 +> 1 +> rows (ordered): 2 + +create table parent as select 1 id, 2 x; +> ok + +create table child(id int references parent(id)) as select 1; +> ok + +delete from parent; +> exception + +drop table parent, child; +> ok + +create domain integer as varchar; +> exception + +create domain int as varchar; +> ok + +create memory table test(id int); +> ok + +script nodata nopasswords nosettings; +> SCRIPT +> ----------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> CREATE DOMAIN INT AS VARCHAR; +> CREATE MEMORY TABLE PUBLIC.TEST( ID VARCHAR ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 4 + +drop table test; +> ok + +drop domain int; +> ok + +create table test(id identity, parent bigint, foreign key(parent) references(id)); +> ok + +insert into test values(0, 0), (1, NULL), (2, 1), (3, 3), (4, 3); +> update count: 5 + +delete from test where id = 3; +> exception + +delete from test where id = 0; +> update count: 1 + +delete from test where id = 1; +> exception + +drop table test; +> ok + +select iso_week('2006-12-31') w, iso_year('2007-12-31') y, iso_day_of_week('2007-12-31') w; +> W Y W +> -- ---- - +> 52 2008 1 +> rows: 1 + +create schema a; +> ok + +set autocommit false; +> ok + +set schema a; +> ok + +create table t1 ( k int, v varchar(10) ); +> ok + +insert into t1 values ( 1, 't1' ); +> update count: 1 + +create table t2 ( k int, v varchar(10) ); +> ok + +insert into t2 values ( 2, 't2' ); +> update count: 1 + +create view v_test(a, b, c, d) as select t1.*, t2.* from t1 join t2 on ( t1.k = t2.k ); +> ok + +select * from v_test; +> A B C D +> - - - - +> rows: 0 + +set schema public; +> ok + +drop schema a cascade; +> ok + +set autocommit true; +> ok + +select x/3 as a, count(*) c from system_range(1, 10) group by a having c>2; +> A C +> - - +> 1 3 +> 2 3 +> rows: 2 + +create table test(id 
int); +> ok + +insert into test values(1), (2); +> update count: 2 + +select id+1 as x, count(*) from test group by x; +> X COUNT(*) +> - -------- +> 2 1 +> 3 1 +> rows: 2 + +select 1 as id, id as b, count(*) from test group by id; +> ID B COUNT(*) +> -- - -------- +> 1 1 1 +> 1 2 1 +> rows: 2 + +select id+1 as x, count(*) from test group by -x; +> exception + +select id+1 as x, count(*) from test group by x having x>2; +> exception + +select id+1 as x, count(*) from test group by 1; +> exception + +drop table test; +> ok + +create table test(t0 timestamp(0), t1 timestamp(1), t4 timestamp(4)); +> ok + +select column_name, numeric_scale from information_schema.columns c where c.table_name = 'TEST' order by column_name; +> COLUMN_NAME NUMERIC_SCALE +> ----------- ------------- +> T0 0 +> T1 1 +> T4 4 +> rows (ordered): 3 + +drop table test; +> ok + +create table test(id int); +> ok + +insert into test values(null), (1); +> update count: 2 + +select * from test where id not in (select id from test where 1=0); +> ID +> ---- +> 1 +> null +> rows: 2 + +select * from test where null not in (select id from test where 1=0); +> ID +> ---- +> 1 +> null +> rows: 2 + +select * from test where not (id in (select id from test where 1=0)); +> ID +> ---- +> 1 +> null +> rows: 2 + +select * from test where not (null in (select id from test where 1=0)); +> ID +> ---- +> 1 +> null +> rows: 2 + +drop table test; +> ok + +create table test(a int); +> ok + +insert into test values(1), (2); +> update count: 2 + +select -test.a a from test order by test.a; +> A +> -- +> -1 +> -2 +> rows (ordered): 2 + +select -test.a from test order by test.a; +> - TEST.A +> -------- +> -1 +> -2 +> rows (ordered): 2 + +select -test.a aa from test order by a; +> AA +> -- +> -1 +> -2 +> rows (ordered): 2 + +select -test.a aa from test order by aa; +> AA +> -- +> -2 +> -1 +> rows (ordered): 2 + +select -test.a a from test order by a; +> A +> -- +> -2 +> -1 +> rows (ordered): 2 + +drop table test; +> ok + 
+CREATE TABLE table_a(a_id INT PRIMARY KEY, left_id INT, right_id INT); +> ok + +CREATE TABLE table_b(b_id INT PRIMARY KEY, a_id INT); +> ok + +CREATE TABLE table_c(left_id INT, right_id INT, center_id INT); +> ok + +CREATE VIEW view_a AS +SELECT table_c.center_id, table_a.a_id, table_b.b_id +FROM table_c +INNER JOIN table_a ON table_c.left_id = table_a.left_id +AND table_c.right_id = table_a.right_id +LEFT JOIN table_b ON table_b.a_id = table_a.a_id; +> ok + +SELECT * FROM table_c INNER JOIN view_a +ON table_c.center_id = view_a.center_id; +> LEFT_ID RIGHT_ID CENTER_ID CENTER_ID A_ID B_ID +> ------- -------- --------- --------- ---- ---- +> rows: 0 + +drop view view_a; +> ok + +drop table table_a, table_b, table_c; +> ok + +create table t (pk int primary key, attr int); +> ok + +insert into t values (1, 5), (5, 1); +> update count: 2 + +select t1.pk from t t1, t t2 where t1.pk = t2.attr order by t1.pk; +> PK +> -- +> 1 +> 5 +> rows (ordered): 2 + +drop table t; +> ok + +CREATE ROLE TEST_A; +> ok + +GRANT TEST_A TO TEST_A; +> exception + +CREATE ROLE TEST_B; +> ok + +GRANT TEST_A TO TEST_B; +> ok + +GRANT TEST_B TO TEST_A; +> exception + +DROP ROLE TEST_A; +> ok + +DROP ROLE TEST_B; +> ok + +CREATE ROLE PUBLIC2; +> ok + +GRANT PUBLIC2 TO SA; +> ok + +GRANT PUBLIC2 TO SA; +> ok + +REVOKE PUBLIC2 FROM SA; +> ok + +REVOKE PUBLIC2 FROM SA; +> ok + +DROP ROLE PUBLIC2; +> ok + +create table test(id int primary key, lastname varchar, firstname varchar, parent int references(id)); +> ok + +alter table test add constraint name unique (lastname, firstname); +> ok + +SELECT CONSTRAINT_NAME, UNIQUE_INDEX_NAME, COLUMN_LIST FROM INFORMATION_SCHEMA.CONSTRAINTS ; +> CONSTRAINT_NAME UNIQUE_INDEX_NAME COLUMN_LIST +> --------------- ----------------- ------------------ +> CONSTRAINT_2 PRIMARY_KEY_2 ID +> CONSTRAINT_27 PRIMARY_KEY_2 PARENT +> NAME NAME_INDEX_2 LASTNAME,FIRSTNAME +> rows: 3 + +drop table test; +> ok + +alter table information_schema.help rename to 
information_schema.help2; +> exception + +help abc; +> ID SECTION TOPIC SYNTAX TEXT +> -- ------- ----- ------ ---- +> rows: 0 + +CREATE TABLE test (id int(25) NOT NULL auto_increment, name varchar NOT NULL, PRIMARY KEY (id,name)); +> ok + +drop table test; +> ok + +CREATE TABLE test (id bigserial NOT NULL primary key); +> ok + +drop table test; +> ok + +CREATE TABLE test (id serial NOT NULL primary key); +> ok + +drop table test; +> ok + +CREATE MEMORY TABLE TEST(ID INT, D DOUBLE, F FLOAT); +> ok + +insert into test values(0, POWER(0, -1), POWER(0, -1)), (1, -POWER(0, -1), -POWER(0, -1)), (2, SQRT(-1), SQRT(-1)); +> update count: 3 + +select * from test order by id; +> ID D F +> -- --------- --------- +> 0 Infinity Infinity +> 1 -Infinity -Infinity +> 2 NaN NaN +> rows (ordered): 3 + +script nopasswords nosettings; +> SCRIPT +> ----------------------------------------------------------------------------------------------------------------------------------------- +> -- 3 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> CREATE MEMORY TABLE PUBLIC.TEST( ID INT, D DOUBLE, F FLOAT ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> INSERT INTO PUBLIC.TEST(ID, D, F) VALUES (0, POWER(0, -1), POWER(0, -1)), (1, (-POWER(0, -1)), (-POWER(0, -1))), (2, SQRT(-1), SQRT(-1)); +> rows: 4 + +DROP TABLE TEST; +> ok + +create schema a; +> ok + +create table a.x(ax int); +> ok + +create schema b; +> ok + +create table b.x(bx int); +> ok + +select * from a.x, b.x; +> AX BX +> -- -- +> rows: 0 + +drop schema a cascade; +> ok + +drop schema b cascade; +> ok + +create table t1 (id int primary key); +> ok + +create table t2 (id int primary key); +> ok + +insert into t1 select x from system_range(1, 1000); +> update count: 1000 + +insert into t2 select x from system_range(1, 1000); +> update count: 1000 + +explain select count(*) from t1 where t1.id in ( select t2.id from t2 ); +> PLAN +> 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +> SELECT COUNT(*) FROM PUBLIC.T1 /* PUBLIC.PRIMARY_KEY_A: ID IN(SELECT T2.ID FROM PUBLIC.T2 /++ PUBLIC.T2.tableScan ++/) */ WHERE T1.ID IN( SELECT T2.ID FROM PUBLIC.T2 /* PUBLIC.T2.tableScan */) +> rows: 1 + +select count(*) from t1 where t1.id in ( select t2.id from t2 ); +> COUNT(*) +> -------- +> 1000 +> rows: 1 + +drop table t1, t2; +> ok + +CREATE TABLE p(d date); +> ok + +INSERT INTO p VALUES('-1-01-01'), ('0-01-01'), ('0001-01-01'); +> update count: 3 + +select d, year(d), extract(year from d), cast(d as timestamp) from p; +> D YEAR(D) EXTRACT(YEAR FROM D) CAST(D AS TIMESTAMP) +> ---------- ------- -------------------- -------------------- +> -1-01-01 -1 -1 -1-01-01 00:00:00 +> 0-01-01 0 0 0-01-01 00:00:00 +> 0001-01-01 1 1 0001-01-01 00:00:00 +> rows: 3 + +drop table p; +> ok + +(SELECT X FROM DUAL ORDER BY X+2) UNION SELECT X FROM DUAL; +> X +> - +> 1 +> rows (ordered): 1 + +create table test(a int, b int default 1); +> ok + +insert into test values(1, default), (2, 2), (3, null); +> update count: 3 + +select * from test; +> A B +> - ---- +> 1 1 +> 2 2 +> 3 null +> rows: 3 + +update test set b = default where a = 2; +> update count: 1 + +explain update test set b = default where a = 2; +> PLAN +> -------------------------------------------------------------------------- +> UPDATE PUBLIC.TEST /* PUBLIC.TEST.tableScan */ SET B = DEFAULT WHERE A = 2 +> rows: 1 + +select * from test; +> A B +> - ---- +> 1 1 +> 2 1 +> 3 null +> rows: 3 + +update test set a=default; +> update count: 3 + +drop table test; +> ok + +CREATE ROLE X; +> ok + +GRANT X TO X; +> exception + +CREATE ROLE Y; +> ok + +GRANT Y TO X; +> ok + +DROP ROLE Y; +> ok + +DROP ROLE X; +> ok + +select top sum(1) 0 from dual; +> exception + +create table test(id int primary key, name varchar) as select 1, 'Hello 
World'; +> ok + +select * from test; +> ID NAME +> -- ----------- +> 1 Hello World +> rows: 1 + +drop table test; +> ok + +select rtrim() from dual; +> exception + +CREATE TABLE COUNT(X INT); +> ok + +CREATE FORCE TRIGGER T_COUNT BEFORE INSERT ON COUNT CALL "com.Unknown"; +> ok + +INSERT INTO COUNT VALUES(NULL); +> exception + +DROP TRIGGER T_COUNT; +> ok + +CREATE TABLE ITEMS(ID INT CHECK ID < SELECT MAX(ID) FROM COUNT); +> ok + +insert into items values(DEFAULT); +> update count: 1 + +DROP TABLE COUNT; +> exception + +insert into items values(DEFAULT); +> update count: 1 + +drop table items, count; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, LABEL CHAR(20), LOOKUP CHAR(30)); +> ok + +INSERT INTO TEST VALUES (1, 'Mouse', 'MOUSE'), (2, 'MOUSE', 'Mouse'); +> update count: 2 + +SELECT * FROM TEST; +> ID LABEL LOOKUP +> -- ----- ------ +> 1 Mouse MOUSE +> 2 MOUSE Mouse +> rows: 2 + +DROP TABLE TEST; +> ok + +call 'a' regexp 'Ho.*\'; +> exception + +set @t = 0; +> ok + +call set(1, 2); +> exception + +select x, set(@t, ifnull(@t, 0) + x) from system_range(1, 3); +> X SET(@T, (IFNULL(@T, 0) + X)) +> - ---------------------------- +> 1 1 +> 2 3 +> 3 6 +> rows: 3 + +select * from system_range(1, 2) a, +(select * from system_range(1, 2) union select * from system_range(1, 2) +union select * from system_range(1, 1)) v where a.x = v.x; +> X X +> - - +> 1 1 +> 2 2 +> rows: 2 + +create table test(id int); +> ok + +select * from ((select * from test) union (select * from test)) where id = 0; +> ID +> -- +> rows: 0 + +select * from ((test d1 inner join test d2 on d1.id = d2.id) inner join test d3 on d1.id = d3.id) inner join test d4 on d4.id = d1.id; +> ID ID ID ID +> -- -- -- -- +> rows: 0 + +drop table test; +> ok + +select count(*) from system_range(1, 2) where x in(1, 1, 1); +> COUNT(*) +> -------- +> 1 +> rows: 1 + +create table person(id bigint auto_increment, name varchar(100)); +> ok + +insert into person(name) values ('a'), ('b'), ('c'); +> update count: 3 + +select 
* from person order by id; +> ID NAME +> -- ---- +> 1 a +> 2 b +> 3 c +> rows (ordered): 3 + +select * from person order by id limit 2; +> ID NAME +> -- ---- +> 1 a +> 2 b +> rows (ordered): 2 + +select * from person order by id limit 2 offset 1; +> ID NAME +> -- ---- +> 2 b +> 3 c +> rows (ordered): 2 + +select * from person order by id limit 2147483647 offset 1; +> ID NAME +> -- ---- +> 2 b +> 3 c +> rows (ordered): 2 + +select * from person order by id limit 2147483647-1 offset 1; +> ID NAME +> -- ---- +> 2 b +> 3 c +> rows (ordered): 2 + +select * from person order by id limit 2147483647-1 offset 2; +> ID NAME +> -- ---- +> 3 c +> rows (ordered): 1 + +select * from person order by id limit 2147483647-2 offset 2; +> ID NAME +> -- ---- +> 3 c +> rows (ordered): 1 + +drop table person; +> ok + +CREATE TABLE TEST(ID INTEGER NOT NULL, ID2 INTEGER DEFAULT 0); +> ok + +ALTER TABLE test ALTER COLUMN ID2 RENAME TO ID; +> exception + +drop table test; +> ok + +create table test(id int primary key, data array); +> ok + +insert into test values(1, (1, 1)), (2, (1, 2)), (3, (1, 1, 1)); +> update count: 3 + +select * from test order by data; +> ID DATA +> -- --------- +> 1 (1, 1) +> 3 (1, 1, 1) +> 2 (1, 2) +> rows (ordered): 3 + +drop table test; +> ok + +CREATE TABLE FOO (A CHAR(10)); +> ok + +CREATE TABLE BAR AS SELECT * FROM FOO; +> ok + +select table_name, numeric_precision from information_schema.columns where column_name = 'A'; +> TABLE_NAME NUMERIC_PRECISION +> ---------- ----------------- +> BAR 10 +> FOO 10 +> rows: 2 + +DROP TABLE FOO, BAR; +> ok + +create table multi_pages(dir_num int, bh_id int); +> ok + +insert into multi_pages values(1, 1), (2, 2), (3, 3); +> update count: 3 + +create table b_holding(id int primary key, site varchar(255)); +> ok + +insert into b_holding values(1, 'Hello'), (2, 'Hello'), (3, 'Hello'); +> update count: 3 + +select * from (select dir_num, count(*) as cnt from multi_pages t, b_holding bh +where t.bh_id=bh.id and bh.site='Hello' 
group by dir_num) as x +where cnt < 1000 order by dir_num asc; +> DIR_NUM CNT +> ------- --- +> 1 1 +> 2 1 +> 3 1 +> rows (ordered): 3 + +explain select * from (select dir_num, count(*) as cnt from multi_pages t, b_holding bh +where t.bh_id=bh.id and bh.site='Hello' group by dir_num) as x +where cnt < 1000 order by dir_num asc; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +> SELECT X.DIR_NUM, X.CNT FROM ( SELECT DIR_NUM, COUNT(*) AS CNT FROM PUBLIC.MULTI_PAGES T INNER JOIN PUBLIC.B_HOLDING BH ON 1=1 WHERE (BH.SITE = 'Hello') AND (T.BH_ID = BH.ID) GROUP BY DIR_NUM ) X /* SELECT DIR_NUM, COUNT(*) AS CNT FROM PUBLIC.MULTI_PAGES T /++ PUBLIC.MULTI_PAGES.tableScan ++/ INNER JOIN PUBLIC.B_HOLDING BH /++ PUBLIC.PRIMARY_KEY_3: ID = T.BH_ID ++/ ON 1=1 WHERE (BH.SITE = 'Hello') AND (T.BH_ID = BH.ID) GROUP BY DIR_NUM HAVING COUNT(*) <= ?1: CNT < 1000 */ WHERE CNT < 1000 ORDER BY 1 +> rows (ordered): 1 + +select dir_num, count(*) as cnt from multi_pages t, b_holding bh +where t.bh_id=bh.id and bh.site='Hello' group by dir_num +having count(*) < 1000 order by dir_num asc; +> DIR_NUM CNT +> ------- --- +> 1 1 +> 2 1 +> 3 1 +> rows (ordered): 3 + +drop table multi_pages, b_holding; +> ok + +select * from dual where x = 1000000000000000000000; +> X +> - +> rows: 0 + +select * from dual where x = 'Hello'; +> exception + +create table test(id smallint primary key); +> ok + +insert into test values(1), (2), (3); +> update count: 3 + +explain select * from test where id = 1; +> PLAN +> 
------------------------------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ WHERE ID = 1 +> rows: 1 + +EXPLAIN SELECT * FROM TEST WHERE ID = (SELECT MAX(ID) FROM TEST); +> PLAN +> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = (SELECT MAX(ID) FROM PUBLIC.TEST /++ PUBLIC.TEST.tableScan ++/ /++ direct lookup ++/) */ WHERE ID = (SELECT MAX(ID) FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ /* direct lookup */) +> rows: 1 + +drop table test; +> ok + +create table test(id tinyint primary key); +> ok + +insert into test values(1), (2), (3); +> update count: 3 + +explain select * from test where id = 3; +> PLAN +> ------------------------------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 3 */ WHERE ID = 3 +> rows: 1 + +explain select * from test where id = 255; +> PLAN +> ----------------------------------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 255 */ WHERE ID = 255 +> rows: 1 + +drop table test; +> ok + +create table test(id int primary key); +> ok + +insert into test values(1), (2), (3); +> update count: 3 + +explain select * from test where id in(1, 2, null); +> PLAN +> ----------------------------------------------------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID IN(1, 2, NULL) */ WHERE ID IN(1, 2, NULL) +> rows: 1 + +drop table test; +> ok + +create alias "SYSDATE" for "java.lang.Integer.parseInt(java.lang.String)"; +> exception + +create alias "MIN" for "java.lang.Integer.parseInt(java.lang.String)"; +> exception + +create alias 
"CAST" for "java.lang.Integer.parseInt(java.lang.String)"; +> exception + +CREATE TABLE PARENT(A INT, B INT, PRIMARY KEY(A, B)); +> ok + +CREATE TABLE CHILD(A INT, B INT, CONSTRAINT CP FOREIGN KEY(A, B) REFERENCES PARENT(A, B)); +> ok + +INSERT INTO PARENT VALUES(1, 2); +> update count: 1 + +INSERT INTO CHILD VALUES(2, NULL), (NULL, 3), (NULL, NULL), (1, 2); +> update count: 4 + +set autocommit false; +> ok + +ALTER TABLE CHILD SET REFERENTIAL_INTEGRITY FALSE; +> ok + +ALTER TABLE CHILD SET REFERENTIAL_INTEGRITY TRUE CHECK; +> ok + +set autocommit true; +> ok + +DROP TABLE CHILD, PARENT; +> ok + +CREATE TABLE TEST(BIRTH TIMESTAMP); +> ok + +INSERT INTO TEST VALUES('2006-04-03 10:20:30'), ('2006-04-03 10:20:31'), ('2006-05-05 00:00:00'), ('2006-07-03 22:30:00'), ('2006-07-03 22:31:00'); +> update count: 5 + +SELECT * FROM (SELECT CAST(BIRTH AS DATE) B +FROM TEST GROUP BY CAST(BIRTH AS DATE)) A +WHERE A.B >= '2006-05-05'; +> B +> ---------- +> 2006-05-05 +> 2006-07-03 +> rows: 2 + +DROP TABLE TEST; +> ok + +CREATE TABLE Parent(ID INT PRIMARY KEY, Name VARCHAR); +> ok + +CREATE TABLE Child(ID INT); +> ok + +ALTER TABLE Child ADD FOREIGN KEY(ID) REFERENCES Parent(ID); +> ok + +INSERT INTO Parent VALUES(1, '0'), (2, '0'), (3, '0'); +> update count: 3 + +INSERT INTO Child VALUES(1); +> update count: 1 + +ALTER TABLE Parent ALTER COLUMN Name BOOLEAN NULL; +> ok + +DELETE FROM Parent WHERE ID=3; +> update count: 1 + +DROP TABLE Parent, Child; +> ok + +set autocommit false; +> ok + +CREATE TABLE A(ID INT PRIMARY KEY, SK INT); +> ok + +ALTER TABLE A ADD CONSTRAINT AC FOREIGN KEY(SK) REFERENCES A(ID); +> ok + +INSERT INTO A VALUES(1, 1); +> update count: 1 + +INSERT INTO A VALUES(-2, NULL); +> update count: 1 + +ALTER TABLE A SET REFERENTIAL_INTEGRITY FALSE; +> ok + +ALTER TABLE A SET REFERENTIAL_INTEGRITY TRUE CHECK; +> ok + +ALTER TABLE A SET REFERENTIAL_INTEGRITY FALSE; +> ok + +INSERT INTO A VALUES(2, 3); +> update count: 1 + +ALTER TABLE A SET REFERENTIAL_INTEGRITY TRUE; 
+> ok + +ALTER TABLE A SET REFERENTIAL_INTEGRITY FALSE; +> ok + +ALTER TABLE A SET REFERENTIAL_INTEGRITY TRUE CHECK; +> exception + +DROP TABLE A; +> ok + +set autocommit true; +> ok + +CREATE TABLE PARENT(ID INT); +> ok + +CREATE TABLE CHILD(PID INT); +> ok + +INSERT INTO PARENT VALUES(1); +> update count: 1 + +INSERT INTO CHILD VALUES(2); +> update count: 1 + +ALTER TABLE CHILD ADD CONSTRAINT CP FOREIGN KEY(PID) REFERENCES PARENT(ID); +> exception + +UPDATE CHILD SET PID=1; +> update count: 1 + +ALTER TABLE CHILD ADD CONSTRAINT CP FOREIGN KEY(PID) REFERENCES PARENT(ID); +> ok + +DROP TABLE CHILD, PARENT; +> ok + +CREATE TABLE A(ID INT PRIMARY KEY, SK INT); +> ok + +INSERT INTO A VALUES(1, 2); +> update count: 1 + +ALTER TABLE A ADD CONSTRAINT AC FOREIGN KEY(SK) REFERENCES A(ID); +> exception + +DROP TABLE A; +> ok + +CREATE TABLE TEST(ID INT); +> ok + +INSERT INTO TEST VALUES(0), (1), (100); +> update count: 3 + +ALTER TABLE TEST ADD CONSTRAINT T CHECK ID<100; +> exception + +UPDATE TEST SET ID=20 WHERE ID=100; +> update count: 1 + +ALTER TABLE TEST ADD CONSTRAINT T CHECK ID<100; +> ok + +DROP TABLE TEST; +> ok + +create table test(id int); +> ok + +set autocommit false; +> ok + +insert into test values(1); +> update count: 1 + +prepare commit tx1; +> ok + +commit transaction tx1; +> ok + +rollback; +> ok + +select * from test; +> ID +> -- +> 1 +> rows: 1 + +drop table test; +> ok + +set autocommit true; +> ok + +SELECT 'Hello' ~ 'He.*' T1, 'HELLO' ~ 'He.*' F2, CAST('HELLO' AS VARCHAR_IGNORECASE) ~ 'He.*' T3; +> T1 F2 T3 +> ---- ----- ---- +> TRUE FALSE TRUE +> rows: 1 + +SELECT 'Hello' ~* 'He.*' T1, 'HELLO' ~* 'He.*' T2, 'hallo' ~* 'He.*' F3; +> T1 T2 F3 +> ---- ---- ----- +> TRUE TRUE FALSE +> rows: 1 + +SELECT 'Hello' !~* 'Ho.*' T1, 'HELLO' !~* 'He.*' F2, 'hallo' !~* 'Ha.*' F3; +> T1 F2 F3 +> ---- ----- ----- +> TRUE FALSE FALSE +> rows: 1 + +create table test(parent int primary key, child int, foreign key(child) references (parent)); +> ok + +insert into test 
values(1, 1); +> update count: 1 + +insert into test values(2, 3); +> exception + +set autocommit false; +> ok + +set referential_integrity false; +> ok + +insert into test values(4, 4); +> update count: 1 + +insert into test values(5, 6); +> update count: 1 + +set referential_integrity true; +> ok + +insert into test values(7, 7), (8, 9); +> exception + +set autocommit true; +> ok + +drop table test; +> ok + +create table test as select 1, space(10) from dual where 1=0 union all select x, cast(space(100) as varchar(101)) d from system_range(1, 100); +> ok + +drop table test; +> ok + +explain select * from system_range(1, 2) where x=x+1 and x=1; +> PLAN +> --------------------------------------------------------------------------------------------------------------------------------- +> SELECT SYSTEM_RANGE.X FROM SYSTEM_RANGE(1, 2) /* PUBLIC.RANGE_INDEX: X = 1 */ WHERE ((X = 1) AND (X = (X + 1))) AND (1 = (X + 1)) +> rows: 1 + +explain select * from system_range(1, 2) where not (x = 1 and x*2 = 2); +> PLAN +> ------------------------------------------------------------------------------------------------------- +> SELECT SYSTEM_RANGE.X FROM SYSTEM_RANGE(1, 2) /* PUBLIC.RANGE_INDEX */ WHERE (X <> 1) OR ((X * 2) <> 2) +> rows: 1 + +explain select * from system_range(1, 10) where (NOT x >= 5); +> PLAN +> ------------------------------------------------------------------------------------------ +> SELECT SYSTEM_RANGE.X FROM SYSTEM_RANGE(1, 10) /* PUBLIC.RANGE_INDEX: X < 5 */ WHERE X < 5 +> rows: 1 + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'), (-1, '-1'); +> update count: 2 + +select * from test where name = -1 and name = id; +> ID NAME +> -- ---- +> -1 -1 +> rows: 1 + +explain select * from test where name = -1 and name = id; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST 
/* PUBLIC.PRIMARY_KEY_2: ID = -1 */ WHERE ((NAME = -1) AND (NAME = ID)) AND (ID = -1) +> rows: 1 + +DROP TABLE TEST; +> ok + +select * from system_range(1, 2) where x=x+1 and x=1; +> X +> - +> rows: 0 + +CREATE TABLE A as select 6 a; +> ok + +CREATE TABLE B(B INT PRIMARY KEY); +> ok + +CREATE VIEW V(V) AS (SELECT A FROM A UNION SELECT B FROM B); +> ok + +create table C as select * from table(c int = (0,6)); +> ok + +select * from V, C where V.V = C.C; +> V C +> - - +> 6 6 +> rows: 1 + +drop table A, B, C, V cascade; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, FLAG BOOLEAN, NAME VARCHAR); +> ok + +CREATE INDEX IDX_FLAG ON TEST(FLAG, NAME); +> ok + +INSERT INTO TEST VALUES(1, TRUE, 'Hello'), (2, FALSE, 'World'); +> update count: 2 + +EXPLAIN SELECT * FROM TEST WHERE FLAG; +> PLAN +> --------------------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.FLAG, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.IDX_FLAG: FLAG = TRUE */ WHERE FLAG +> rows: 1 + +EXPLAIN SELECT * FROM TEST WHERE FLAG AND NAME>'I'; +> PLAN +> ----------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.FLAG, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.IDX_FLAG: FLAG = TRUE AND NAME > 'I' */ WHERE FLAG AND (NAME > 'I') +> rows: 1 + +DROP TABLE TEST; +> ok + +CREATE TABLE test_table (first_col varchar(20), second_col integer); +> ok + +insert into test_table values('a', 10), ('a', 4), ('b', 30), ('b', 3); +> update count: 4 + +CREATE VIEW test_view AS SELECT first_col AS renamed_col, MIN(second_col) AS also_renamed FROM test_table GROUP BY first_col; +> ok + +SELECT * FROM test_view WHERE renamed_col = 'a'; +> RENAMED_COL ALSO_RENAMED +> ----------- ------------ +> a 4 +> rows: 1 + +drop view test_view; +> ok + +drop table test_table; +> ok + +create table test(id int); +> ok + +explain select id+1 a from test group by id+1; +> PLAN +> 
--------------------------------------------------------------------------------- +> SELECT (ID + 1) AS A FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ GROUP BY ID + 1 +> rows: 1 + +drop table test; +> ok + +set autocommit off; +> ok + +set search_path = public, information_schema; +> ok + +select table_name from tables where 1=0; +> TABLE_NAME +> ---------- +> rows: 0 + +set search_path = public; +> ok + +set autocommit on; +> ok + +create table script.public.x(a int); +> ok + +select * from script.PUBLIC.x; +> A +> - +> rows: 0 + +create index script.public.idx on script.public.x(a); +> ok + +drop table script.public.x; +> ok + +create table d(d double, r real); +> ok + +insert into d(d, d, r) values(1.1234567890123456789, 1.1234567890123456789, 3); +> exception + +insert into d values(1.1234567890123456789, 1.1234567890123456789); +> update count: 1 + +select r+d, r+r, d+d from d; +> R + D R + R D + D +> ----------------- --------- ------------------ +> 2.246913624759111 2.2469137 2.2469135780246914 +> rows: 1 + +drop table d; +> ok + +create table test(id int, c char(5), v varchar(5)); +> ok + +insert into test set id = 1, c = 'a', v = 'a'; +> update count: 1 + +insert into test set id = 2, c = 'a ', v = 'a '; +> update count: 1 + +insert into test set id = 3, c = 'abcde ', v = 'abcde'; +> update count: 1 + +select distinct length(c) from test order by length(c); +> LENGTH(C) +> --------- +> 1 +> 5 +> rows (ordered): 2 + +select id, c, v, length(c), length(v) from test order by id; +> ID C V LENGTH(C) LENGTH(V) +> -- ----- ----- --------- --------- +> 1 a a 1 1 +> 2 a a 1 2 +> 3 abcde abcde 5 5 +> rows (ordered): 3 + +select id from test where c='a' order by id; +> ID +> -- +> 1 +> 2 +> rows (ordered): 2 + +select id from test where c='a ' order by id; +> ID +> -- +> 1 +> 2 +> rows (ordered): 2 + +select id from test where c=v order by id; +> ID +> -- +> 1 +> 2 +> 3 +> rows (ordered): 3 + +drop table test; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME 
VARCHAR(255), C INT); +> ok + +INSERT INTO TEST VALUES(1, '10', NULL), (2, '0', NULL); +> update count: 2 + +SELECT LEAST(ID, C, NAME), GREATEST(ID, C, NAME), LEAST(NULL, C), GREATEST(NULL, NULL), ID FROM TEST ORDER BY ID; +> LEAST(ID, C, NAME) GREATEST(ID, C, NAME) LEAST(NULL, C) NULL ID +> ------------------ --------------------- -------------- ---- -- +> 1 10 null null 1 +> 0 2 null null 2 +> rows (ordered): 2 + +DROP TABLE IF EXISTS TEST; +> ok + +create table people (family varchar(1) not null, person varchar(1) not null); +> ok + +create table cars (family varchar(1) not null, car varchar(1) not null); +> ok + +insert into people values(1, 1), (2, 1), (2, 2), (3, 1), (5, 1); +> update count: 5 + +insert into cars values(2, 1), (2, 2), (3, 1), (3, 2), (3, 3), (4, 1); +> update count: 6 + +select family, (select count(car) from cars where cars.family = people.family) as x +from people group by family order by family; +> FAMILY X +> ------ - +> 1 0 +> 2 2 +> 3 3 +> 5 0 +> rows (ordered): 4 + +drop table people, cars; +> ok + +select (1, 2); +> 1, 2 +> ------ +> (1, 2) +> rows: 1 + +create table array_test(x array); +> ok + +insert into array_test values((1, 2, 3)), ((2, 3, 4)); +> update count: 2 + +select * from array_test where x = (1, 2, 3); +> X +> --------- +> (1, 2, 3) +> rows: 1 + +drop table array_test; +> ok + +select * from (select 1), (select 2); +> 1 2 +> - - +> 1 2 +> rows: 1 + +create table t1(c1 int, c2 int); +> ok + +create table t2(c1 int, c2 int); +> ok + +insert into t1 values(1, null), (2, 2), (3, 3); +> update count: 3 + +insert into t2 values(1, 1), (1, 2), (2, null), (3, 3); +> update count: 4 + +select * from t2 where c1 not in(select c2 from t1); +> C1 C2 +> -- -- +> rows: 0 + +select * from t2 where c1 not in(null, 2, 3); +> C1 C2 +> -- -- +> rows: 0 + +select * from t1 where c2 not in(select c1 from t2); +> C1 C2 +> -- -- +> rows: 0 + +select * from t1 where not exists(select * from t2 where t1.c2=t2.c1); +> C1 C2 +> -- ---- +> 1 null 
+> rows: 1 + +drop table t1; +> ok + +drop table t2; +> ok + +create constant abc value 1; +> ok + +call abc; +> 1 +> - +> 1 +> rows: 1 + +drop all objects; +> ok + +call abc; +> exception + +create table FOO(id integer primary key); +> ok + +create table BAR(fooId integer); +> ok + +alter table bar add foreign key (fooId) references foo (id); +> ok + +truncate table bar; +> ok + +truncate table foo; +> exception + +drop table bar, foo; +> ok + +CREATE TABLE test (family_name VARCHAR_IGNORECASE(63) NOT NULL); +> ok + +INSERT INTO test VALUES('Smith'), ('de Smith'), ('el Smith'), ('von Smith'); +> update count: 4 + +SELECT * FROM test WHERE family_name IN ('de Smith', 'Smith'); +> FAMILY_NAME +> ----------- +> Smith +> de Smith +> rows: 2 + +SELECT * FROM test WHERE family_name BETWEEN 'D' AND 'T'; +> FAMILY_NAME +> ----------- +> Smith +> de Smith +> el Smith +> rows: 3 + +CREATE INDEX family_name ON test(family_name); +> ok + +SELECT * FROM test WHERE family_name IN ('de Smith', 'Smith'); +> FAMILY_NAME +> ----------- +> Smith +> de Smith +> rows: 2 + +drop table test; +> ok + +create memory table test(id int primary key, data clob); +> ok + +insert into test values(1, 'abc' || space(20)); +> update count: 1 + +script nopasswords nosettings blocksize 10; +> SCRIPT +> -------------------------------------------------------------------------------------------------------------- +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID); +> CALL SYSTEM_COMBINE_BLOB(-1); +> CREATE ALIAS IF NOT EXISTS SYSTEM_COMBINE_BLOB FOR "org.h2.command.dml.ScriptCommand.combineBlob"; +> CREATE ALIAS IF NOT EXISTS SYSTEM_COMBINE_CLOB FOR "org.h2.command.dml.ScriptCommand.combineClob"; +> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, DATA CLOB ); +> CREATE PRIMARY KEY SYSTEM_LOB_STREAM_PRIMARY_KEY ON SYSTEM_LOB_STREAM(ID, PART); +> CREATE TABLE IF NOT EXISTS SYSTEM_LOB_STREAM(ID INT NOT NULL, PART INT NOT NULL, 
CDATA VARCHAR, BDATA BINARY); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> DROP ALIAS IF EXISTS SYSTEM_COMBINE_BLOB; +> DROP ALIAS IF EXISTS SYSTEM_COMBINE_CLOB; +> DROP TABLE IF EXISTS SYSTEM_LOB_STREAM; +> INSERT INTO PUBLIC.TEST(ID, DATA) VALUES (1, SYSTEM_COMBINE_CLOB(0)); +> INSERT INTO SYSTEM_LOB_STREAM VALUES(0, 0, 'abc ', NULL); +> INSERT INTO SYSTEM_LOB_STREAM VALUES(0, 1, ' ', NULL); +> INSERT INTO SYSTEM_LOB_STREAM VALUES(0, 2, ' ', NULL); +> rows: 16 + +drop table test; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World'); +> update count: 2 + +SELECT DISTINCT * FROM TEST ORDER BY ID; +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows (ordered): 2 + +DROP TABLE TEST; +> ok + +create table Foo (A varchar(20), B integer); +> ok + +insert into Foo (A, B) values ('abcd', 1), ('abcd', 2); +> update count: 2 + +select * from Foo where A like 'abc%' escape '\' AND B=1; +> A B +> ---- - +> abcd 1 +> rows: 1 + +drop table Foo; +> ok + +create table test(id int, d timestamp); +> ok + +insert into test values(1, '2006-01-01 12:00:00.000'); +> update count: 1 + +insert into test values(1, '1999-12-01 23:59:00.000'); +> update count: 1 + +select * from test where d= '1999-12-01 23:59:00.000'; +> ID D +> -- ------------------- +> 1 1999-12-01 23:59:00 +> rows: 1 + +select * from test where d= timestamp '2006-01-01 12:00:00.000'; +> ID D +> -- ------------------- +> 1 2006-01-01 12:00:00 +> rows: 1 + +drop table test; +> ok + +create table test(id int, b binary); +> ok + +insert into test values(1, 'face'); +> update count: 1 + +select * from test where b = 'FaCe'; +> ID B +> -- ---- +> 1 face +> rows: 1 + +drop table test; +> ok + +create sequence main_seq; +> ok + +create schema "TestSchema"; +> ok + +create sequence "TestSchema"."TestSeq"; +> ok + +create sequence "TestSchema"."ABC"; +> ok + +select currval('main_seq'), currval('TestSchema', 'TestSeq'), nextval('TestSchema', 'ABC'); 
+> CURRVAL('main_seq') CURRVAL('TestSchema', 'TestSeq') NEXTVAL('TestSchema', 'ABC') +> ------------------- -------------------------------- ---------------------------- +> 0 0 1 +> rows: 1 + +set autocommit off; +> ok + +set schema "TestSchema"; +> ok + +select nextval('abc'), currval('Abc'), nextval('TestSchema', 'ABC'); +> NEXTVAL('abc') CURRVAL('Abc') NEXTVAL('TestSchema', 'ABC') +> -------------- -------------- ---------------------------- +> 2 2 3 +> rows: 1 + +set schema public; +> ok + +drop schema "TestSchema" cascade; +> ok + +drop sequence main_seq; +> ok + +create sequence "test"; +> ok + +select nextval('test'); +> NEXTVAL('test') +> --------------- +> 1 +> rows: 1 + +drop sequence "test"; +> ok + +set autocommit on; +> ok + +CREATE TABLE parent(id int PRIMARY KEY); +> ok + +CREATE TABLE child(parentid int REFERENCES parent); +> ok + +select * from INFORMATION_SCHEMA.CROSS_REFERENCES; +> PKTABLE_CATALOG PKTABLE_SCHEMA PKTABLE_NAME PKCOLUMN_NAME FKTABLE_CATALOG FKTABLE_SCHEMA FKTABLE_NAME FKCOLUMN_NAME ORDINAL_POSITION UPDATE_RULE DELETE_RULE FK_NAME PK_NAME DEFERRABILITY +> --------------- -------------- ------------ ------------- --------------- -------------- ------------ ------------- ---------------- ----------- ----------- ------------ ------------- ------------- +> SCRIPT PUBLIC PARENT ID SCRIPT PUBLIC CHILD PARENTID 1 1 1 CONSTRAINT_3 PRIMARY_KEY_8 7 +> rows: 1 + +ALTER TABLE parent ADD COLUMN name varchar; +> ok + +select * from INFORMATION_SCHEMA.CROSS_REFERENCES; +> PKTABLE_CATALOG PKTABLE_SCHEMA PKTABLE_NAME PKCOLUMN_NAME FKTABLE_CATALOG FKTABLE_SCHEMA FKTABLE_NAME FKCOLUMN_NAME ORDINAL_POSITION UPDATE_RULE DELETE_RULE FK_NAME PK_NAME DEFERRABILITY +> --------------- -------------- ------------ ------------- --------------- -------------- ------------ ------------- ---------------- ----------- ----------- ------------ -------------- ------------- +> SCRIPT PUBLIC PARENT ID SCRIPT PUBLIC CHILD PARENTID 1 1 1 CONSTRAINT_3 PRIMARY_KEY_82 7 +> 
rows: 1 + +drop table parent, child; +> ok + +create table test(id int); +> ok + +create schema TEST_SCHEMA; +> ok + +set autocommit false; +> ok + +set schema TEST_SCHEMA; +> ok + +create table test(id int, name varchar); +> ok + +explain select * from test; +> PLAN +> -------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.NAME FROM TEST_SCHEMA.TEST /* TEST_SCHEMA.TEST.tableScan */ +> rows: 1 + +explain select * from public.test; +> PLAN +> ----------------------------------------------------------- +> SELECT TEST.ID FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ +> rows: 1 + +drop schema TEST_SCHEMA cascade; +> ok + +set autocommit true; +> ok + +set schema public; +> ok + +select * from test; +> ID +> -- +> rows: 0 + +drop table test; +> ok + +create table content(thread_id int, parent_id int); +> ok + +alter table content add constraint content_parent_id check (parent_id = thread_id) or (parent_id is null) or ( parent_id in (select thread_id from content)); +> ok + +create index content_thread_id ON content(thread_id); +> ok + +insert into content values(0, 0), (0, 0); +> update count: 2 + +insert into content values(0, 1); +> exception + +insert into content values(1, 1), (2, 2); +> update count: 2 + +insert into content values(2, 1); +> update count: 1 + +insert into content values(2, 3); +> exception + +drop table content; +> ok + +select x/10 y from system_range(1, 100) group by x/10; +> Y +> -- +> 0 +> 1 +> 10 +> 2 +> 3 +> 4 +> 5 +> 6 +> 7 +> 8 +> 9 +> rows: 11 + +select timestamp '2001-02-03T10:30:33'; +> TIMESTAMP '2001-02-03 10:30:33' +> ------------------------------- +> 2001-02-03 10:30:33 +> rows: 1 + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World'); +> update count: 2 + +select * from test where id in (select id from test); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +select * from test where id in ((select id from 
test)); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +select * from test where id in (((select id from test))); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +DROP TABLE TEST; +> ok + +create table test(id int); +> ok + +insert into test (select x from system_range(1, 100)); +> update count: 100 + +select id/1000 from test group by id/1000; +> ID / 1000 +> --------- +> 0 +> rows: 1 + +select id/(10*100) from test group by id/(10*100); +> ID / 1000 +> --------- +> 0 +> rows: 1 + +select id/1000 from test group by id/100; +> exception + +drop table test; +> ok + +select (x/10000) from system_range(10, 20) group by (x/10000); +> X / 10000 +> --------- +> 0 +> rows: 1 + +select sum(x), (x/10) from system_range(10, 100) group by (x/10); +> SUM(X) X / 10 +> ------ ------ +> 100 10 +> 145 1 +> 245 2 +> 345 3 +> 445 4 +> 545 5 +> 645 6 +> 745 7 +> 845 8 +> 945 9 +> rows: 10 + +CREATE FORCE VIEW ADDRESS_VIEW AS SELECT * FROM ADDRESS; +> ok + +CREATE memory TABLE ADDRESS(ID INT); +> ok + +alter view address_view recompile; +> ok + +alter view if exists address_view recompile; +> ok + +alter view if exists does_not_exist recompile; +> ok + +select * from ADDRESS_VIEW; +> ID +> -- +> rows: 0 + +drop view address_view; +> ok + +drop table address; +> ok + +CREATE ALIAS PARSE_INT2 FOR "java.lang.Integer.parseInt(java.lang.String, int)"; +> ok + +select min(SUBSTRING(random_uuid(), 15,1)='4') from system_range(1, 10); +> MIN(SUBSTRING(RANDOM_UUID(), 15, 1) = '4') +> ------------------------------------------ +> TRUE +> rows: 1 + +select min(8=bitand(12, PARSE_INT2(SUBSTRING(random_uuid(), 20,1), 16))) from system_range(1, 10); +> MIN(8 = BITAND(12, PUBLIC.PARSE_INT2(SUBSTRING(RANDOM_UUID(), 20, 1), 16))) +> --------------------------------------------------------------------------- +> TRUE +> rows: 1 + +select BITGET(x, 0) AS IS_SET from system_range(1, 2); +> IS_SET +> ------ +> FALSE +> TRUE +> rows: 2 + +drop alias PARSE_INT2; +> ok + +create memory 
table test(name varchar check(name = upper(name))); +> ok + +insert into test values(null); +> update count: 1 + +insert into test values('aa'); +> exception + +insert into test values('AA'); +> update count: 1 + +script nodata nopasswords nosettings; +> SCRIPT +> --------------------------------------------------------------------------- +> -- 2 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> CREATE MEMORY TABLE PUBLIC.TEST( NAME VARCHAR CHECK (NAME = UPPER(NAME)) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 3 + +drop table test; +> ok + +create domain email as varchar(200) check (position('@' in value) > 1); +> ok + +create domain gmail as email default '@gmail.com' check (position('gmail' in value) > 1); +> ok + +create memory table address(id int primary key, name email, name2 gmail); +> ok + +insert into address(id, name, name2) values(1, 'test@abc', 'test@gmail.com'); +> update count: 1 + +insert into address(id, name, name2) values(2, 'test@abc', 'test@acme'); +> exception + +insert into address(id, name, name2) values(3, 'test_abc', 'test@gmail'); +> exception + +insert into address2(name) values('test@abc'); +> exception + +CREATE DOMAIN STRING AS VARCHAR(255) DEFAULT '' NOT NULL; +> ok + +CREATE DOMAIN IF NOT EXISTS STRING AS VARCHAR(255) DEFAULT '' NOT NULL; +> ok + +CREATE DOMAIN STRING1 AS VARCHAR NULL; +> ok + +CREATE DOMAIN STRING2 AS VARCHAR NOT NULL; +> ok + +CREATE DOMAIN STRING3 AS VARCHAR DEFAULT ''; +> ok + +create domain string_x as string3; +> ok + +create memory table test(a string, b string1, c string2, d string3); +> ok + +insert into test(c) values('x'); +> update count: 1 + +select * from test; +> A B C D +> - ---- - ------- +> null x +> rows: 1 + +select DOMAIN_NAME, COLUMN_DEFAULT, IS_NULLABLE, DATA_TYPE, PRECISION, SCALE, TYPE_NAME, SELECTIVITY, CHECK_CONSTRAINT, REMARKS, SQL from information_schema.domains; +> DOMAIN_NAME COLUMN_DEFAULT IS_NULLABLE DATA_TYPE PRECISION SCALE TYPE_NAME SELECTIVITY CHECK_CONSTRAINT REMARKS 
SQL +> ----------- -------------- ----------- --------- ---------- ----- --------- ----------- --------------------------------------------------------------- ------- ------------------------------------------------------------------------------------------------------------------------------ +> EMAIL null YES 12 200 0 VARCHAR 50 (POSITION('@', VALUE) > 1) CREATE DOMAIN EMAIL AS VARCHAR(200) CHECK (POSITION('@', VALUE) > 1) +> GMAIL '@gmail.com' YES 12 200 0 VARCHAR 50 ((POSITION('@', VALUE) > 1) AND (POSITION('gmail', VALUE) > 1)) CREATE DOMAIN GMAIL AS VARCHAR(200) DEFAULT '@gmail.com' CHECK ((POSITION('@', VALUE) > 1) AND (POSITION('gmail', VALUE) > 1)) +> STRING '' NO 12 255 0 VARCHAR 50 CREATE DOMAIN STRING AS VARCHAR(255) DEFAULT '' NOT NULL +> STRING1 null YES 12 2147483647 0 VARCHAR 50 CREATE DOMAIN STRING1 AS VARCHAR +> STRING2 null NO 12 2147483647 0 VARCHAR 50 CREATE DOMAIN STRING2 AS VARCHAR NOT NULL +> STRING3 '' YES 12 2147483647 0 VARCHAR 50 CREATE DOMAIN STRING3 AS VARCHAR DEFAULT '' +> STRING_X '' YES 12 2147483647 0 VARCHAR 50 CREATE DOMAIN STRING_X AS VARCHAR DEFAULT '' +> rows: 7 + +script nodata nopasswords nosettings; +> SCRIPT +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.ADDRESS; +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.ADDRESS ADD CONSTRAINT PUBLIC.CONSTRAINT_E PRIMARY KEY(ID); +> CREATE DOMAIN EMAIL AS VARCHAR(200) CHECK (POSITION('@', VALUE) > 1); +> CREATE DOMAIN GMAIL AS VARCHAR(200) DEFAULT '@gmail.com' CHECK ((POSITION('@', VALUE) > 1) AND (POSITION('gmail', VALUE) > 1)); +> CREATE DOMAIN STRING AS VARCHAR(255) DEFAULT '' NOT NULL; +> CREATE DOMAIN STRING1 AS VARCHAR; +> CREATE DOMAIN STRING2 AS VARCHAR NOT NULL; +> CREATE DOMAIN STRING3 AS VARCHAR DEFAULT ''; +> CREATE DOMAIN STRING_X AS VARCHAR 
DEFAULT ''; +> CREATE MEMORY TABLE PUBLIC.ADDRESS( ID INT NOT NULL, NAME VARCHAR(200) CHECK (POSITION('@', NAME) > 1), NAME2 VARCHAR(200) DEFAULT '@gmail.com' CHECK ((POSITION('@', NAME2) > 1) AND (POSITION('gmail', NAME2) > 1)) ); +> CREATE MEMORY TABLE PUBLIC.TEST( A VARCHAR(255) DEFAULT '' NOT NULL, B VARCHAR, C VARCHAR NOT NULL, D VARCHAR DEFAULT '' ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 13 + +drop table test; +> ok + +drop domain string; +> ok + +drop domain string1; +> ok + +drop domain string2; +> ok + +drop domain string3; +> ok + +drop domain string_x; +> ok + +drop table address; +> ok + +drop domain email; +> ok + +drop domain gmail; +> ok + +create force view address_view as select * from address; +> ok + +create table address(id identity, name varchar check instr(value, '@') > 1); +> exception + +create table address(id identity, name varchar check instr(name, '@') > 1); +> ok + +drop view if exists address_view; +> ok + +drop table address; +> ok + +create memory table a(k10 blob(10k), m20 blob(20m), g30 clob(30g)); +> ok + +script NODATA NOPASSWORDS NOSETTINGS drop; +> SCRIPT +> ------------------------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.A; +> CREATE MEMORY TABLE PUBLIC.A( K10 BLOB(10240), M20 BLOB(20971520), G30 CLOB(32212254720) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> DROP TABLE IF EXISTS PUBLIC.A CASCADE; +> rows: 4 + +create table b(); +> ok + +create table c(); +> ok + +drop table information_schema.columns; +> exception + +create table columns as select * from information_schema.columns; +> ok + +create table tables as select * from information_schema.tables where false; +> ok + +create table dual2 as select 1 from dual; +> ok + +select * from dual2; +> 1 +> - +> 1 +> rows: 1 + +drop table dual2, columns, tables; +> ok + +drop table a, a; +> ok + +drop table b, c; +> ok + +CREATE SCHEMA CONST; +> ok + +CREATE CONSTANT IF NOT EXISTS 
ONE VALUE 1; +> ok + +COMMENT ON CONSTANT ONE IS 'Eins'; +> ok + +CREATE CONSTANT IF NOT EXISTS ONE VALUE 1; +> ok + +CREATE CONSTANT CONST.ONE VALUE 1; +> ok + +SELECT CONSTANT_SCHEMA, CONSTANT_NAME, DATA_TYPE, REMARKS, SQL FROM INFORMATION_SCHEMA.CONSTANTS; +> CONSTANT_SCHEMA CONSTANT_NAME DATA_TYPE REMARKS SQL +> --------------- ------------- --------- ------- --- +> CONST ONE 4 1 +> PUBLIC ONE 4 Eins 1 +> rows: 2 + +SELECT ONE, CONST.ONE FROM DUAL; +> 1 1 +> - - +> 1 1 +> rows: 1 + +COMMENT ON CONSTANT ONE IS NULL; +> ok + +DROP SCHEMA CONST CASCADE; +> ok + +SELECT CONSTANT_SCHEMA, CONSTANT_NAME, DATA_TYPE, REMARKS, SQL FROM INFORMATION_SCHEMA.CONSTANTS; +> CONSTANT_SCHEMA CONSTANT_NAME DATA_TYPE REMARKS SQL +> --------------- ------------- --------- ------- --- +> PUBLIC ONE 4 1 +> rows: 1 + +DROP CONSTANT ONE; +> ok + +DROP CONSTANT IF EXISTS ONE; +> ok + +DROP CONSTANT IF EXISTS ONE; +> ok + +CREATE TABLE A (ID_A int primary key); +> ok + +CREATE TABLE B (ID_B int primary key); +> ok + +CREATE TABLE C (ID_C int primary key); +> ok + +insert into A values (1); +> update count: 1 + +insert into A values (2); +> update count: 1 + +insert into B values (1); +> update count: 1 + +insert into C values (1); +> update count: 1 + +SELECT * FROM C WHERE NOT EXISTS ((SELECT ID_A FROM A) EXCEPT (SELECT ID_B FROM B)); +> ID_C +> ---- +> rows: 0 + +(SELECT ID_A FROM A) EXCEPT (SELECT ID_B FROM B); +> ID_A +> ---- +> 2 +> rows: 1 + +drop table a; +> ok + +drop table b; +> ok + +drop table c; +> ok + +CREATE TABLE X (ID INTEGER PRIMARY KEY); +> ok + +insert into x values(0), (1), (10); +> update count: 3 + +SELECT t1.ID, (SELECT t1.id || ':' || AVG(t2.ID) FROM X t2) FROM X t1; +> ID SELECT ((T1.ID || ':') || AVG(T2.ID)) FROM PUBLIC.X T2 /* PUBLIC.X.tableScan */ /* scanCount: 4 */ +> -- -------------------------------------------------------------------------------------------------- +> 0 0:3 +> 1 1:3 +> 10 10:3 +> rows: 3 + +drop table x; +> ok + +select (select t1.x from 
system_range(1,1) t2) from system_range(1,1) t1; +> SELECT T1.X FROM SYSTEM_RANGE(1, 1) T2 /* PUBLIC.RANGE_INDEX */ /* scanCount: 2 */ +> ---------------------------------------------------------------------------------- +> 1 +> rows: 1 + +create table test(id int primary key, name varchar); +> ok + +insert into test values(rownum, '11'), (rownum, '22'), (rownum, '33'); +> update count: 3 + +select * from test order by id; +> ID NAME +> -- ---- +> 1 11 +> 2 22 +> 3 33 +> rows (ordered): 3 + +select rownum, (select count(*) from test), rownum from test; +> ROWNUM() SELECT COUNT(*) FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ /* direct lookup */ ROWNUM() +> -------- -------------------------------------------------------------------------------- -------- +> 1 3 1 +> 2 3 2 +> 3 3 3 +> rows: 3 + +delete from test t0 where rownum<2; +> update count: 1 + +select rownum, * from (select * from test where id>1 order by id desc); +> ROWNUM() ID NAME +> -------- -- ---- +> 1 3 33 +> 2 2 22 +> rows (ordered): 2 + +update test set name='x' where rownum<2; +> update count: 1 + +select * from test; +> ID NAME +> -- ---- +> 2 x +> 3 33 +> rows: 2 + +merge into test values(2, 'r' || rownum), (10, rownum), (11, rownum); +> update count: 3 + +select * from test; +> ID NAME +> -- ---- +> 10 2 +> 11 3 +> 2 r1 +> 3 33 +> rows: 4 + +call rownum; +> ROWNUM() +> -------- +> 1 +> rows: 1 + +drop table test; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +create index idx_test_name on test(name); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'); +> update count: 1 + +INSERT INTO TEST VALUES(2, 'World'); +> update count: 1 + +set ignorecase true; +> ok + +CREATE TABLE TEST2(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +create unique index idx_test2_name on test2(name); +> ok + +INSERT INTO TEST2 VALUES(1, 'HElLo'); +> update count: 1 + +INSERT INTO TEST2 VALUES(2, 'World'); +> update count: 1 + +INSERT INTO TEST2 VALUES(3, 'WoRlD'); +> exception + +drop index 
idx_test2_name; +> ok + +select * from test where name='HELLO'; +> ID NAME +> -- ---- +> rows: 0 + +select * from test2 where name='HELLO'; +> ID NAME +> -- ----- +> 1 HElLo +> rows: 1 + +select * from test where name like 'HELLO'; +> ID NAME +> -- ---- +> rows: 0 + +select * from test2 where name like 'HELLO'; +> ID NAME +> -- ----- +> 1 HElLo +> rows: 1 + +explain plan for select * from test2, test where test2.name = test.name; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST2.ID, TEST2.NAME, TEST.ID, TEST.NAME FROM PUBLIC.TEST2 /* PUBLIC.TEST2.tableScan */ INNER JOIN PUBLIC.TEST /* PUBLIC.IDX_TEST_NAME: NAME = TEST2.NAME */ ON 1=1 WHERE TEST2.NAME = TEST.NAME +> rows: 1 + +select * from test2, test where test2.name = test.name; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 HElLo 1 Hello +> 2 World 2 World +> rows: 2 + +explain plan for select * from test, test2 where test2.name = test.name; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.NAME, TEST2.ID, TEST2.NAME FROM PUBLIC.TEST2 /* PUBLIC.TEST2.tableScan */ INNER JOIN PUBLIC.TEST /* PUBLIC.IDX_TEST_NAME: NAME = TEST2.NAME */ ON 1=1 WHERE TEST2.NAME = TEST.NAME +> rows: 1 + +select * from test, test2 where test2.name = test.name; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 Hello 1 HElLo +> 2 World 2 World +> rows: 2 + +create index idx_test2_name on test2(name); +> ok + +explain plan for select * from test2, test where test2.name = test.name; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> 
SELECT TEST2.ID, TEST2.NAME, TEST.ID, TEST.NAME FROM PUBLIC.TEST2 /* PUBLIC.TEST2.tableScan */ INNER JOIN PUBLIC.TEST /* PUBLIC.IDX_TEST_NAME: NAME = TEST2.NAME */ ON 1=1 WHERE TEST2.NAME = TEST.NAME +> rows: 1 + +select * from test2, test where test2.name = test.name; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 HElLo 1 Hello +> 2 World 2 World +> rows: 2 + +explain plan for select * from test, test2 where test2.name = test.name; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +> SELECT TEST.ID, TEST.NAME, TEST2.ID, TEST2.NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ INNER JOIN PUBLIC.TEST2 /* PUBLIC.IDX_TEST2_NAME: NAME = TEST.NAME */ ON 1=1 WHERE TEST2.NAME = TEST.NAME +> rows: 1 + +select * from test, test2 where test2.name = test.name; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 Hello 1 HElLo +> 2 World 2 World +> rows: 2 + +DROP TABLE IF EXISTS TEST; +> ok + +DROP TABLE IF EXISTS TEST2; +> ok + +set ignorecase false; +> ok + +create table test(f1 varchar, f2 varchar); +> ok + +insert into test values('abc','222'); +> update count: 1 + +insert into test values('abc','111'); +> update count: 1 + +insert into test values('abc','333'); +> update count: 1 + +SELECT t.f1, t.f2 FROM test t ORDER BY t.f2; +> F1 F2 +> --- --- +> abc 111 +> abc 222 +> abc 333 +> rows (ordered): 3 + +SELECT t1.f1, t1.f2, t2.f1, t2.f2 FROM test t1, test t2 ORDER BY t2.f2; +> F1 F2 F1 F2 +> --- --- --- --- +> abc 222 abc 111 +> abc 111 abc 111 +> abc 333 abc 111 +> abc 222 abc 222 +> abc 111 abc 222 +> abc 333 abc 222 +> abc 222 abc 333 +> abc 111 abc 333 +> abc 333 abc 333 +> rows (ordered): 9 + +drop table if exists test; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'); +> update count: 1 + +explain select t0.id, t1.id from test t0, test t1 order by t0.id, 
t1.id; +> PLAN +> ---------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T0.ID, T1.ID FROM PUBLIC.TEST T0 /* PUBLIC.TEST.tableScan */ INNER JOIN PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ ON 1=1 ORDER BY 1, 2 +> rows (ordered): 1 + +INSERT INTO TEST VALUES(2, 'World'); +> update count: 1 + +SELECT id, sum(id) FROM test GROUP BY id ORDER BY id*sum(id); +> ID SUM(ID) +> -- ------- +> 1 1 +> 2 2 +> rows (ordered): 2 + +select * +from test t1 +inner join test t2 on t2.id=t1.id +inner join test t3 on t3.id=t2.id +where exists (select 1 from test t4 where t2.id=t4.id); +> ID NAME ID NAME ID NAME +> -- ----- -- ----- -- ----- +> 1 Hello 1 Hello 1 Hello +> 2 World 2 World 2 World +> rows: 2 + +explain select * from test t1 where id in(select id from test t2 where t1.id=t2.id); +> PLAN +> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ WHERE ID IN( SELECT ID FROM PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID = T1.ID */ WHERE T1.ID = T2.ID) +> rows: 1 + +select * from test t1 where id in(select id from test t2 where t1.id=t2.id); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +explain select * from test t1 where id in(id, id+1); +> PLAN +> ----------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ WHERE ID IN(ID, (ID + 1)) +> rows: 1 + +select * from test t1 where id in(id, id+1); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +explain select * from test t1 where id in(id); +> PLAN +> ----------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ WHERE ID = ID +> 
rows: 1 + +select * from test t1 where id in(id); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +explain select * from test t1 where id in(select id from test); +> PLAN +> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID IN(SELECT ID FROM PUBLIC.TEST /++ PUBLIC.TEST.tableScan ++/) */ WHERE ID IN( SELECT ID FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */) +> rows: 1 + +select * from test t1 where id in(select id from test); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +explain select * from test t1 where id in(1, select max(id) from test); +> PLAN +> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID IN(1, (SELECT MAX(ID) FROM PUBLIC.TEST /++ PUBLIC.TEST.tableScan ++/ /++ direct lookup ++/)) */ WHERE ID IN(1, (SELECT MAX(ID) FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ /* direct lookup */)) +> rows: 1 + +select * from test t1 where id in(1, select max(id) from test); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +explain select * from test t1 where id in(1, select max(id) from test t2 where t1.id=t2.id); +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ WHERE ID IN(1, (SELECT MAX(ID) FROM PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID = T1.ID */ WHERE T1.ID = T2.ID)) +> rows: 1 + +select * from test t1 where id in(1, 
select max(id) from test t2 where t1.id=t2.id); +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows: 2 + +DROP TABLE TEST; +> ok + +create force view t1 as select * from t1; +> ok + +select * from t1; +> exception + +drop table t1; +> ok + +CREATE TABLE TEST(id INT PRIMARY KEY, foo BIGINT); +> ok + +INSERT INTO TEST VALUES(1, 100); +> update count: 1 + +INSERT INTO TEST VALUES(2, 123456789012345678); +> update count: 1 + +SELECT * FROM TEST WHERE foo = 123456789014567; +> ID FOO +> -- --- +> rows: 0 + +DROP TABLE IF EXISTS TEST; +> ok + +create table test(v boolean); +> ok + +insert into test values(null), (true), (false); +> update count: 3 + +SELECT CASE WHEN NOT (false IN (null)) THEN false END; +> NULL +> ---- +> null +> rows: 1 + +select a.v as av, b.v as bv, a.v IN (b.v), not a.v IN (b.v) from test a, test b; +> AV BV A.V = B.V NOT (A.V = B.V) +> ----- ----- --------- --------------- +> FALSE FALSE TRUE FALSE +> FALSE TRUE FALSE TRUE +> FALSE null null null +> TRUE FALSE FALSE TRUE +> TRUE TRUE TRUE FALSE +> TRUE null null null +> null FALSE null null +> null TRUE null null +> null null null null +> rows: 9 + +select a.v as av, b.v as bv, a.v IN (b.v, null), not a.v IN (b.v, null) from test a, test b; +> AV BV A.V IN(B.V, NULL) NOT (A.V IN(B.V, NULL)) +> ----- ----- ----------------- ----------------------- +> FALSE FALSE TRUE FALSE +> FALSE TRUE null null +> FALSE null null null +> TRUE FALSE null null +> TRUE TRUE TRUE FALSE +> TRUE null null null +> null FALSE null null +> null TRUE null null +> null null null null +> rows: 9 + +drop table test; +> ok + +SELECT CASE WHEN NOT (false IN (null)) THEN false END; +> NULL +> ---- +> null +> rows: 1 + +create table test(id int); +> ok + +insert into test values(1), (2), (3), (4); +> update count: 4 + +(select * from test a, test b) minus (select * from test a, test b); +> ID ID +> -- -- +> rows: 0 + +drop table test; +> ok + +call select 1.0/3.0*3.0, 100.0/2.0, -25.0/100.0, 0.0/3.0, 6.9/2.0, 
0.72179425150347250912311550800000 / 5314251955.21; +> SELECT 0.999999999999999999999999990, 50, -0.25, 0, 3.45, 1.35822361752313607260107721120531135706133161972E-10 FROM SYSTEM_RANGE(1, 1) /* PUBLIC.RANGE_INDEX */ /* scanCount: 2 */ +> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> (0.999999999999999999999999990, 50, -0.25, 0, 3.45, 1.35822361752313607260107721120531135706133161972E-10) +> rows: 1 + +CALL 1 /* comment */ ;; +> 1 +> - +> 1 +> rows: 1 + +CALL 1 /* comment */ ; +> 1 +> - +> 1 +> rows: 1 + +call /* remark * / * /* ** // end */ 1; +> 1 +> - +> 1 +> rows: 1 + +call (select x from dual where x is null); +> SELECT X FROM SYSTEM_RANGE(1, 1) /* PUBLIC.RANGE_INDEX: X IS NULL */ /* scanCount: 1 */ WHERE X IS NULL +> ------------------------------------------------------------------------------------------------------- +> null +> rows: 1 + +create sequence test_seq; +> ok + +create table test(id int primary key, parent int); +> ok + +create index ni on test(parent); +> ok + +alter table test add constraint nu unique(parent); +> ok + +alter table test add constraint fk foreign key(parent) references(id); +> ok + +select TABLE_NAME, NON_UNIQUE, INDEX_NAME, ORDINAL_POSITION, COLUMN_NAME, CARDINALITY, PRIMARY_KEY from INFORMATION_SCHEMA.INDEXES; +> TABLE_NAME NON_UNIQUE INDEX_NAME ORDINAL_POSITION COLUMN_NAME CARDINALITY PRIMARY_KEY +> ---------- ---------- ------------- ---------------- ----------- ----------- ----------- +> TEST FALSE NU_INDEX_2 1 PARENT 0 FALSE +> TEST FALSE PRIMARY_KEY_2 1 ID 0 TRUE +> TEST TRUE NI 1 PARENT 0 FALSE +> rows: 3 + +select SEQUENCE_NAME, CURRENT_VALUE, INCREMENT, IS_GENERATED, REMARKS from INFORMATION_SCHEMA.SEQUENCES; +> SEQUENCE_NAME CURRENT_VALUE INCREMENT IS_GENERATED REMARKS +> ------------- ------------- --------- ------------ ------- +> TEST_SEQ 0 1 FALSE +> rows: 1 + +drop table test; +> 
ok + +drop sequence test_seq; +> ok + +create table test(id int); +> ok + +insert into test values(1), (2); +> update count: 2 + +select count(*) from test where id in ((select id from test where 1=0)); +> COUNT(*) +> -------- +> 0 +> rows: 1 + +select count(*) from test where id = ((select id from test where 1=0)+1); +> COUNT(*) +> -------- +> 0 +> rows: 1 + +select count(*) from test where id = (select id from test where 1=0); +> COUNT(*) +> -------- +> 0 +> rows: 1 + +select count(*) from test where id in ((select id from test)); +> COUNT(*) +> -------- +> 2 +> rows: 1 + +select count(*) from test where id = ((select id from test)); +> exception + +select count(*) from test where id = ((select id from test), 1); +> exception + +select (select id from test where 1=0) from test; +> SELECT ID FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan: FALSE */ WHERE FALSE +> ------------------------------------------------------------------------- +> null +> null +> rows: 2 + +drop table test; +> ok + +select TRIM(' ' FROM ' abc ') from dual; +> 'abc' +> ----- +> abc +> rows: 1 + +create table test(id int primary key, a boolean); +> ok + +insert into test values(1, 'Y'); +> update count: 1 + +call select a from test order by id; +> SELECT A FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2 */ /* scanCount: 2 */ ORDER BY =ID /* index sorted */ +> ------------------------------------------------------------------------------------------------------- +> TRUE +> rows (ordered): 1 + +select select a from test order by id; +> SELECT A FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2 */ /* scanCount: 2 */ ORDER BY =ID /* index sorted */ +> ------------------------------------------------------------------------------------------------------- +> TRUE +> rows (ordered): 1 + +insert into test values(2, 'N'); +> update count: 1 + +insert into test values(3, '1'); +> update count: 1 + +insert into test values(4, '0'); +> update count: 1 + +insert into test values(5, 'T'); +> update count: 1 + +insert into test 
values(6, 'F'); +> update count: 1 + +select max(id) from test where id = max(id) group by id; +> exception + +select * from test where a=TRUE=a; +> ID A +> -- ----- +> 1 TRUE +> 2 FALSE +> 3 TRUE +> 4 FALSE +> 5 TRUE +> 6 FALSE +> rows: 6 + +drop table test; +> ok + +CREATE memory TABLE TEST(ID INT PRIMARY KEY, PARENT INT REFERENCES TEST); +> ok + +CREATE memory TABLE s(S_NO VARCHAR(5) PRIMARY KEY, name VARCHAR(16), city VARCHAR(16)); +> ok + +CREATE memory TABLE p(p_no VARCHAR(5) PRIMARY KEY, descr VARCHAR(16), color VARCHAR(8)); +> ok + +CREATE memory TABLE sp1(S_NO VARCHAR(5) REFERENCES s, p_no VARCHAR(5) REFERENCES p, qty INT, PRIMARY KEY (S_NO, p_no)); +> ok + +CREATE memory TABLE sp2(S_NO VARCHAR(5), p_no VARCHAR(5), qty INT, constraint c1 FOREIGN KEY (S_NO) references s, PRIMARY KEY (S_NO, p_no)); +> ok + +script NOPASSWORDS NOSETTINGS; +> SCRIPT +> ------------------------------------------------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.P; +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.S; +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.SP1; +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.SP2; +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.P ADD CONSTRAINT PUBLIC.CONSTRAINT_50_0 PRIMARY KEY(P_NO); +> ALTER TABLE PUBLIC.S ADD CONSTRAINT PUBLIC.CONSTRAINT_5 PRIMARY KEY(S_NO); +> ALTER TABLE PUBLIC.SP1 ADD CONSTRAINT PUBLIC.CONSTRAINT_1 FOREIGN KEY(S_NO) REFERENCES PUBLIC.S(S_NO) NOCHECK; +> ALTER TABLE PUBLIC.SP1 ADD CONSTRAINT PUBLIC.CONSTRAINT_14 FOREIGN KEY(P_NO) REFERENCES PUBLIC.P(P_NO) NOCHECK; +> ALTER TABLE PUBLIC.SP1 ADD CONSTRAINT PUBLIC.CONSTRAINT_141 PRIMARY KEY(S_NO, P_NO); +> ALTER TABLE PUBLIC.SP2 ADD CONSTRAINT PUBLIC.C1 FOREIGN KEY(S_NO) REFERENCES PUBLIC.S(S_NO) NOCHECK; +> ALTER TABLE PUBLIC.SP2 ADD CONSTRAINT PUBLIC.CONSTRAINT_1417 PRIMARY KEY(S_NO, P_NO); +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID); +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT 
PUBLIC.CONSTRAINT_27 FOREIGN KEY(PARENT) REFERENCES PUBLIC.TEST(ID) NOCHECK; +> CREATE MEMORY TABLE PUBLIC.P( P_NO VARCHAR(5) NOT NULL, DESCR VARCHAR(16), COLOR VARCHAR(8) ); +> CREATE MEMORY TABLE PUBLIC.S( S_NO VARCHAR(5) NOT NULL, NAME VARCHAR(16), CITY VARCHAR(16) ); +> CREATE MEMORY TABLE PUBLIC.SP1( S_NO VARCHAR(5) NOT NULL, P_NO VARCHAR(5) NOT NULL, QTY INT ); +> CREATE MEMORY TABLE PUBLIC.SP2( S_NO VARCHAR(5) NOT NULL, P_NO VARCHAR(5) NOT NULL, QTY INT ); +> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, PARENT INT ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 20 + +drop table test; +> ok + +drop table sp1; +> ok + +drop table sp2; +> ok + +drop table s; +> ok + +drop table p; +> ok + +create table test (id identity, value int not null); +> ok + +create primary key on test(id); +> exception + +alter table test drop primary key; +> ok + +alter table test drop primary key; +> exception + +create primary key on test(id, id, id); +> ok + +alter table test drop primary key; +> ok + +drop table test; +> ok + +set autocommit off; +> ok + +create local temporary table test (id identity, b int, foreign key(b) references(id)); +> ok + +drop table test; +> ok + +script NOPASSWORDS NOSETTINGS drop; +> SCRIPT +> ----------------------------------------------- +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 1 + +create local temporary table test1 (id identity); +> ok + +create local temporary table test2 (id identity); +> ok + +alter table test2 add constraint test2_test1 foreign key (id) references test1; +> ok + +drop table test1; +> ok + +drop table test2; +> ok + +create local temporary table test1 (id identity); +> ok + +create local temporary table test2 (id identity); +> ok + +alter table test2 add constraint test2_test1 foreign key (id) references test1; +> ok + +drop table test1; +> ok + +drop table test2; +> ok + +set autocommit on; +> ok + +create table test(id int primary key, ref int, foreign key(ref) references(id)); +> ok + 
+insert into test values(1, 1), (2, 2); +> update count: 2 + +update test set ref=3-ref; +> update count: 2 + +alter table test add column dummy int; +> ok + +insert into test values(4, 4, null); +> update count: 1 + +drop table test; +> ok + +create table test(id int primary key); +> ok + +explain select * from test a inner join test b left outer join test c on c.id = a.id; +> PLAN +> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT A.ID, C.ID, B.ID FROM PUBLIC.TEST A /* PUBLIC.TEST.tableScan */ LEFT OUTER JOIN PUBLIC.TEST C /* PUBLIC.PRIMARY_KEY_2: ID = A.ID */ ON C.ID = A.ID INNER JOIN PUBLIC.TEST B /* PUBLIC.TEST.tableScan */ ON 1=1 +> rows: 1 + +SELECT T.ID FROM TEST "T"; +> ID +> -- +> rows: 0 + +SELECT T."ID" FROM TEST "T"; +> ID +> -- +> rows: 0 + +SELECT "T".ID FROM TEST "T"; +> ID +> -- +> rows: 0 + +SELECT "T"."ID" FROM TEST "T"; +> ID +> -- +> rows: 0 + +SELECT T.ID FROM "TEST" T; +> ID +> -- +> rows: 0 + +SELECT T."ID" FROM "TEST" T; +> ID +> -- +> rows: 0 + +SELECT "T".ID FROM "TEST" T; +> ID +> -- +> rows: 0 + +SELECT "T"."ID" FROM "TEST" T; +> ID +> -- +> rows: 0 + +SELECT T.ID FROM "TEST" "T"; +> ID +> -- +> rows: 0 + +SELECT T."ID" FROM "TEST" "T"; +> ID +> -- +> rows: 0 + +SELECT "T".ID FROM "TEST" "T"; +> ID +> -- +> rows: 0 + +SELECT "T"."ID" FROM "TEST" "T"; +> ID +> -- +> rows: 0 + +select "TEST".id from test; +> ID +> -- +> rows: 0 + +select test."ID" from test; +> ID +> -- +> rows: 0 + +select test."id" from test; +> exception + +select "TEST"."ID" from test; +> ID +> -- +> rows: 0 + +select "test"."ID" from test; +> exception + +select public."TEST".id from test; +> ID +> -- +> rows: 0 + +select public.test."ID" from test; +> ID +> -- +> rows: 0 + +select public."TEST"."ID" from test; +> ID +> -- +> rows: 0 + +select public."test"."ID" from test; +> exception + 
+select "PUBLIC"."TEST".id from test; +> ID +> -- +> rows: 0 + +select "PUBLIC".test."ID" from test; +> ID +> -- +> rows: 0 + +select public."TEST"."ID" from test; +> ID +> -- +> rows: 0 + +select "public"."TEST"."ID" from test; +> exception + +drop table test; +> ok + +create schema s authorization sa; +> ok + +create memory table s.test(id int); +> ok + +create index if not exists idx_id on s.test(id); +> ok + +create index if not exists idx_id on s.test(id); +> ok + +alter index s.idx_id rename to s.x; +> ok + +alter index if exists s.idx_id rename to s.x; +> ok + +alter index if exists s.x rename to s.index_id; +> ok + +alter sequence if exists s.seq restart with 10; +> ok + +create sequence s.seq cache 0; +> ok + +alter sequence if exists s.seq restart with 3; +> ok + +select s.seq.nextval as x; +> X +> - +> 3 +> rows: 1 + +drop sequence s.seq; +> ok + +create sequence s.seq cache 0; +> ok + +alter sequence s.seq restart with 10; +> ok + +alter table s.test add constraint cu_id unique(id); +> ok + +alter table s.test add name varchar; +> ok + +alter table s.test drop column name; +> ok + +alter table s.test drop constraint cu_id; +> ok + +alter table s.test rename to testtab; +> ok + +alter table s.testtab rename to test; +> ok + +create trigger test_trigger before insert on s.test call "org.h2.test.db.TestTriggersConstraints"; +> ok + +script NOPASSWORDS NOSETTINGS drop; +> SCRIPT +> --------------------------------------------------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM S.TEST; +> CREATE FORCE TRIGGER S.TEST_TRIGGER BEFORE INSERT ON S.TEST QUEUE 1024 CALL "org.h2.test.db.TestTriggersConstraints"; +> CREATE INDEX S.INDEX_ID ON S.TEST(ID); +> CREATE MEMORY TABLE S.TEST( ID INT ); +> CREATE SCHEMA IF NOT EXISTS S AUTHORIZATION SA; +> CREATE SEQUENCE S.SEQ START WITH 10 CACHE 1; +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> DROP SEQUENCE IF EXISTS S.SEQ; +> DROP TABLE IF EXISTS S.TEST CASCADE; 
+> rows: 9 + +drop trigger s.test_trigger; +> ok + +drop schema s cascade; +> ok + +CREATE MEMORY TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255), y int as id+1); +> ok + +INSERT INTO TEST(id, name) VALUES(1, 'Hello'); +> update count: 1 + +create index idx_n_id on test(name, id); +> ok + +alter table test add constraint abc foreign key(id) references (id); +> ok + +alter table test rename column id to i; +> ok + +script NOPASSWORDS NOSETTINGS drop; +> SCRIPT +> --------------------------------------------------------------------------------------------------- +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.ABC FOREIGN KEY(I) REFERENCES PUBLIC.TEST(I) NOCHECK; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(I); +> CREATE INDEX PUBLIC.IDX_N_ID ON PUBLIC.TEST(NAME, I); +> CREATE MEMORY TABLE PUBLIC.TEST( I INT NOT NULL, NAME VARCHAR(255), Y INT AS (I + 1) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> DROP TABLE IF EXISTS PUBLIC.TEST CASCADE; +> INSERT INTO PUBLIC.TEST(I, NAME, Y) VALUES (1, 'Hello', 2); +> rows: 8 + +INSERT INTO TEST(i, name) VALUES(2, 'World'); +> update count: 1 + +SELECT * FROM TEST ORDER BY I; +> I NAME Y +> - ----- - +> 1 Hello 2 +> 2 World 3 +> rows (ordered): 2 + +UPDATE TEST SET NAME='Hi' WHERE I=1; +> update count: 1 + +DELETE FROM TEST t0 WHERE t0.I=2; +> update count: 1 + +drop table test; +> ok + +create table test(current int); +> ok + +select current from test; +> CURRENT +> ------- +> rows: 0 + +drop table test; +> ok + +CREATE table my_table(my_int integer, my_char varchar); +> ok + +INSERT INTO my_table VALUES(1, 'Testing'); +> update count: 1 + +ALTER TABLE my_table ALTER COLUMN my_int RENAME to my_new_int; +> ok + +SELECT my_new_int FROM my_table; +> MY_NEW_INT +> ---------- +> 1 +> rows: 1 + +UPDATE my_table SET my_new_int = 33; +> update count: 1 + +SELECT * FROM my_table; +> MY_NEW_INT MY_CHAR +> ---------- ------- +> 33 Testing +> rows: 1 + 
+DROP TABLE my_table; +> ok + +create sequence seq1; +> ok + +create table test(ID INT default next value for seq1); +> ok + +drop sequence seq1; +> exception + +alter table test add column name varchar; +> ok + +insert into test(name) values('Hello'); +> update count: 1 + +select * from test; +> ID NAME +> -- ----- +> 1 Hello +> rows: 1 + +drop table test; +> ok + +drop sequence seq1; +> ok + +create table test(a int primary key, b int, c int); +> ok + +create unique index idx_ba on test(b, a); +> ok + +alter table test add constraint abc foreign key(c, a) references test(b, a); +> ok + +insert into test values(1, 1, null); +> update count: 1 + +drop table test; +> ok + +create table ADDRESS (ADDRESS_ID int primary key, ADDRESS_TYPE int not null, SERVER_ID int not null); +> ok + +create unique index idx_a on address(ADDRESS_TYPE, SERVER_ID); +> ok + +create table SERVER (SERVER_ID int primary key, SERVER_TYPE int not null, ADDRESS_TYPE int); +> ok + +alter table ADDRESS add constraint addr foreign key (SERVER_ID) references SERVER; +> ok + +alter table SERVER add constraint server_const foreign key (ADDRESS_TYPE, SERVER_ID) references ADDRESS (ADDRESS_TYPE, SERVER_ID); +> ok + +insert into SERVER (SERVER_ID, SERVER_TYPE) values (1, 1); +> update count: 1 + +drop table address; +> ok + +drop table server; +> ok + +CREATE TABLE PlanElements(id int primary key, name varchar, parent_id int, foreign key(parent_id) references(id) on delete cascade); +> ok + +INSERT INTO PlanElements(id,name,parent_id) VALUES(1, '#1', null), (2, '#1-A', 1), (3, '#1-A-1', 2), (4, '#1-A-2', 2); +> update count: 4 + +INSERT INTO PlanElements(id,name,parent_id) VALUES(5, '#1-B', 1), (6, '#1-B-1', 5), (7, '#1-B-2', 5); +> update count: 3 + +INSERT INTO PlanElements(id,name,parent_id) VALUES(8, '#1-C', 1), (9, '#1-C-1', 8), (10, '#1-C-2', 8); +> update count: 3 + +INSERT INTO PlanElements(id,name,parent_id) VALUES(11, '#1-D', 1), (12, '#1-D-1', 11), (13, '#1-D-2', 11), (14, '#1-D-3', 11); +> 
update count: 4
+
+INSERT INTO PlanElements(id,name,parent_id) VALUES(15, '#1-E', 1), (16, '#1-E-1', 15), (17, '#1-E-2', 15), (18, '#1-E-3', 15), (19, '#1-E-4', 15);
+> update count: 5
+
+DELETE FROM PlanElements WHERE id = 1;
+> update count: 1
+
+SELECT * FROM PlanElements;
+> ID NAME PARENT_ID
+> -- ---- ---------
+> rows: 0
+
+DROP TABLE PlanElements;
+> ok
+
+CREATE TABLE PARENT(ID INT PRIMARY KEY, NAME VARCHAR(255));
+> ok
+
+CREATE TABLE CHILD(ID INT PRIMARY KEY, NAME VARCHAR(255), FOREIGN KEY(NAME) REFERENCES PARENT(ID));
+> ok
+
+INSERT INTO PARENT VALUES(1, '1');
+> update count: 1
+
+INSERT INTO CHILD VALUES(1, '1');
+> update count: 1
+
+INSERT INTO CHILD VALUES(2, 'Hello');
+> exception
+
+DROP TABLE IF EXISTS CHILD;
+> ok
+
+DROP TABLE IF EXISTS PARENT;
+> ok
+
+(SELECT * FROM DUAL) UNION ALL (SELECT * FROM DUAL);
+> X
+> -
+> 1
+> 1
+> rows: 2
+
+DECLARE GLOBAL TEMPORARY TABLE TEST(ID INT PRIMARY KEY);
+> ok
+
+SELECT * FROM TEST;
+> ID
+> --
+> rows: 0
+
+SELECT GROUP_CONCAT(ID) FROM TEST;
+> GROUP_CONCAT(ID)
+> ----------------
+> null
+> rows: 1
+
+SELECT * FROM SESSION.TEST;
+> ID
+> --
+> rows: 0
+
+DROP TABLE TEST;
+> ok
+
+VALUES(1, 2);
+> C1 C2
+> -- --
+> 1 2
+> rows: 1
+
+DROP TABLE IF EXISTS TEST;
+> ok
+
+CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255));
+> ok
+
+INSERT INTO TEST VALUES(1, 'Hello');
+> update count: 1
+
+INSERT INTO TEST VALUES(2, 'World');
+> update count: 1
+
+SELECT group_concat(name) FROM TEST group by id;
+> GROUP_CONCAT(NAME)
+> ------------------
+> Hello
+> World
+> rows: 2
+
+drop table test;
+> ok
+
+create table test(a int primary key, b int invisible, c int);
+> ok
+
+select * from test;
+> A C
+> - -
+> rows: 0
+
+select a, b, c from test;
+> A B C
+> - - -
+> rows: 0
+
+drop table test;
+> ok
+
+--- script drop ---------------------------------------------------------------------------------------------
+create memory table test (id int primary key, im_ie varchar(10));
+> ok
+
+create sequence test_seq;
+> ok
+
+script NODATA NOPASSWORDS NOSETTINGS drop;
+> SCRIPT
+> ---------------------------------------------------------------------------
+> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST;
+> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID);
+> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, IM_IE VARCHAR(10) );
+> CREATE SEQUENCE PUBLIC.TEST_SEQ START WITH 1;
+> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN;
+> DROP SEQUENCE IF EXISTS PUBLIC.TEST_SEQ;
+> DROP TABLE IF EXISTS PUBLIC.TEST CASCADE;
+> rows: 7
+
+drop sequence test_seq;
+> ok
+
+drop table test;
+> ok
+
+--- constraints ---------------------------------------------------------------------------------------------
+CREATE MEMORY TABLE TEST(ID IDENTITY(100, 10), NAME VARCHAR);
+> ok
+
+INSERT INTO TEST(NAME) VALUES('Hello'), ('World');
+> update count: 2
+
+SELECT * FROM TEST;
+> ID NAME
+> --- -----
+> 100 Hello
+> 110 World
+> rows: 2
+
+DROP TABLE TEST;
+> ok
+
+CREATE MEMORY TABLE TEST(ID BIGINT NOT NULL IDENTITY(10, 5), NAME VARCHAR);
+> ok
+
+INSERT INTO TEST(NAME) VALUES('Hello'), ('World');
+> update count: 2
+
+SELECT * FROM TEST;
+> ID NAME
+> -- -----
+> 10 Hello
+> 15 World
+> rows: 2
+
+DROP TABLE TEST;
+> ok
+
+CREATE CACHED TABLE account(
+id INTEGER NOT NULL IDENTITY,
+name VARCHAR NOT NULL,
+mail_address VARCHAR NOT NULL,
+UNIQUE(name),
+PRIMARY KEY(id)
+);
+> ok
+
+CREATE CACHED TABLE label(
+id INTEGER NOT NULL IDENTITY,
+parent_id INTEGER NOT NULL,
+account_id INTEGER NOT NULL,
+name VARCHAR NOT NULL,
+PRIMARY KEY(id),
+UNIQUE(parent_id, name),
+UNIQUE(id, account_id),
+FOREIGN KEY(account_id) REFERENCES account (id),
+FOREIGN KEY(parent_id, account_id) REFERENCES label (id, account_id)
+);
+> ok
+
+INSERT INTO account VALUES (0, 'example', 'example@example.com');
+> update count: 1
+
+INSERT INTO label VALUES ( 0, 0, 0, 'TEST');
+> update count: 1
+
+INSERT INTO label VALUES ( 1, 0, 0, 'TEST');
+> exception
+
+INSERT INTO label VALUES ( 1, 0, 0, 'TEST1');
+> update count: 1
+
+INSERT INTO label VALUES ( 2, 2, 1, 'TEST');
+> exception
+
+drop table label;
+> ok
+
+drop table account;
+> ok
+
+--- constraints and alter table add column ---------------------------------------------------------------------------------------------
+CREATE TABLE TEST(ID INT, PARENTID INT, FOREIGN KEY(PARENTID) REFERENCES(ID));
+> ok
+
+INSERT INTO TEST VALUES(0, 0);
+> update count: 1
+
+ALTER TABLE TEST ADD COLUMN CHILD_ID INT;
+> ok
+
+ALTER TABLE TEST ALTER COLUMN CHILD_ID VARCHAR;
+> ok
+
+ALTER TABLE TEST ALTER COLUMN PARENTID VARCHAR;
+> ok
+
+ALTER TABLE TEST DROP COLUMN PARENTID;
+> ok
+
+ALTER TABLE TEST DROP COLUMN CHILD_ID;
+> ok
+
+SELECT * FROM TEST;
+> ID
+> --
+> 0
+> rows: 1
+
+DROP TABLE TEST;
+> ok
+
+CREATE MEMORY TABLE A(X INT);
+> ok
+
+CREATE MEMORY TABLE B(XX INT, CONSTRAINT B2A FOREIGN KEY(XX) REFERENCES A(X));
+> ok
+
+CREATE MEMORY TABLE C(X_MASTER INT);
+> ok
+
+ALTER TABLE A ADD CONSTRAINT A2C FOREIGN KEY(X) REFERENCES C(X_MASTER);
+> ok
+
+insert into c values(1);
+> update count: 1
+
+insert into a values(1);
+> update count: 1
+
+insert into b values(1);
+> update count: 1
+
+ALTER TABLE A ADD COLUMN Y INT;
+> ok
+
+insert into c values(2);
+> update count: 1
+
+insert into a values(2, 2);
+> update count: 1
+
+insert into b values(2);
+> update count: 1
+
+DROP TABLE IF EXISTS A;
+> ok
+
+DROP TABLE IF EXISTS B;
+> ok
+
+DROP TABLE IF EXISTS C;
+> ok
+
+--- quoted keywords ---------------------------------------------------------------------------------------------
+CREATE TABLE "CREATE"("SELECT" INT, "PRIMARY" INT, "KEY" INT, "INDEX" INT, "ROWNUM" INT, "NEXTVAL" INT, "FROM" INT);
+> ok
+
+INSERT INTO "CREATE" default values;
+> update count: 1
+
+INSERT INTO "CREATE" default values;
+> update count: 1
+
+SELECT "ROWNUM", ROWNUM, "SELECT" "AS", "PRIMARY" AS "X", "KEY", "NEXTVAL", "INDEX", "SELECT" "FROM" FROM "CREATE";
+> ROWNUM ROWNUM() AS X KEY NEXTVAL INDEX FROM
+> ------ -------- ---- ---- ---- ------- ----- ----
+> null 1 null null null null null null
+> null 2 null null null null null null
+> rows: 2
+
+DROP TABLE "CREATE";
+> ok
+
+--- truncate table ---------------------------------------------------------------------------------------------
+CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR);
+> ok
+
+INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World');
+> update count: 2
+
+TRUNCATE TABLE TEST;
+> ok
+
+SELECT * FROM TEST;
+> ID NAME
+> -- ----
+> rows: 0
+
+DROP TABLE TEST;
+> ok
+
+CREATE TABLE PARENT(ID INT PRIMARY KEY, NAME VARCHAR);
+> ok
+
+CREATE TABLE CHILD(PARENTID INT, FOREIGN KEY(PARENTID) REFERENCES PARENT(ID), NAME VARCHAR);
+> ok
+
+TRUNCATE TABLE CHILD;
+> ok
+
+TRUNCATE TABLE PARENT;
+> exception
+
+DROP TABLE CHILD;
+> ok
+
+DROP TABLE PARENT;
+> ok
+
+--- test case for number like string ---------------------------------------------------------------------------------------------
+CREATE TABLE test (one bigint primary key, two bigint, three bigint);
+> ok
+
+CREATE INDEX two ON test(two);
+> ok
+
+INSERT INTO TEST VALUES(1, 2, 3), (10, 20, 30), (100, 200, 300);
+> update count: 3
+
+INSERT INTO TEST VALUES(2, 6, 9), (20, 60, 90), (200, 600, 900);
+> update count: 3
+
+SELECT * FROM test WHERE one LIKE '2%';
+> ONE TWO THREE
+> --- --- -----
+> 2 6 9
+> 20 60 90
+> 200 600 900
+> rows: 3
+
+SELECT * FROM test WHERE two LIKE '2%';
+> ONE TWO THREE
+> --- --- -----
+> 1 2 3
+> 10 20 30
+> 100 200 300
+> rows: 3
+
+SELECT * FROM test WHERE three LIKE '2%';
+> ONE TWO THREE
+> --- --- -----
+> rows: 0
+
+DROP TABLE TEST;
+> ok
+
+--- merge (upsert) ---------------------------------------------------------------------------------------------
+CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255));
+> ok
+
+EXPLAIN SELECT * FROM TEST WHERE ID=1;
+> PLAN
+> ------------------------------------------------------------------------------------------
+> SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ WHERE ID = 1
+> rows: 1
+
+EXPLAIN MERGE INTO TEST VALUES(1, 'Hello');
+> PLAN
+> ------------------------------------------------------------
+> MERGE INTO PUBLIC.TEST(ID, NAME) KEY(ID) VALUES (1, 'Hello')
+> rows: 1
+
+MERGE INTO TEST VALUES(1, 'Hello');
+> update count: 1
+
+MERGE INTO TEST VALUES(1, 'Hi');
+> update count: 1
+
+MERGE INTO TEST VALUES(2, 'World');
+> update count: 1
+
+MERGE INTO TEST VALUES(2, 'World!');
+> update count: 1
+
+MERGE INTO TEST(ID, NAME) VALUES(3, 'How are you');
+> update count: 1
+
+EXPLAIN MERGE INTO TEST(ID, NAME) VALUES(3, 'How are you');
+> PLAN
+> ------------------------------------------------------------------
+> MERGE INTO PUBLIC.TEST(ID, NAME) KEY(ID) VALUES (3, 'How are you')
+> rows: 1
+
+MERGE INTO TEST(ID, NAME) KEY(ID) VALUES(3, 'How do you do');
+> update count: 1
+
+EXPLAIN MERGE INTO TEST(ID, NAME) KEY(ID) VALUES(3, 'How do you do');
+> PLAN
+> --------------------------------------------------------------------
+> MERGE INTO PUBLIC.TEST(ID, NAME) KEY(ID) VALUES (3, 'How do you do')
+> rows: 1
+
+MERGE INTO TEST(ID, NAME) KEY(NAME) VALUES(3, 'Fine');
+> exception
+
+MERGE INTO TEST(ID, NAME) KEY(NAME) VALUES(4, 'Fine!');
+> update count: 1
+
+MERGE INTO TEST(ID, NAME) KEY(NAME) VALUES(4, 'Fine! And you');
+> exception
+
+MERGE INTO TEST(ID, NAME) KEY(NAME, ID) VALUES(5, 'I''m ok');
+> update count: 1
+
+MERGE INTO TEST(ID, NAME) KEY(NAME, ID) VALUES(5, 'Oh, fine');
+> exception
+
+MERGE INTO TEST(ID, NAME) VALUES(6, 'Oh, fine.');
+> update count: 1
+
+SELECT * FROM TEST;
+> ID NAME
+> -- -------------
+> 1 Hi
+> 2 World!
+> 3 How do you do
+> 4 Fine!
+> 5 I'm ok
+> 6 Oh, fine.
+> rows: 6
+
+MERGE INTO TEST SELECT ID+4, NAME FROM TEST;
+> update count: 6
+
+SELECT * FROM TEST;
+> ID NAME
+> -- -------------
+> 1 Hi
+> 10 Oh, fine.
+> 2 World!
+> 3 How do you do
+> 4 Fine!
+> 5 Hi
+> 6 World!
+> 7 How do you do
+> 8 Fine!
+> 9 I'm ok
+> rows: 10
+
+DROP TABLE TEST;
+> ok
+
+CREATE TABLE PARENT(ID INT, NAME VARCHAR);
+> ok
+
+CREATE TABLE CHILD(ID INT, PARENTID INT, FOREIGN KEY(PARENTID) REFERENCES PARENT(ID));
+> ok
+
+INSERT INTO PARENT VALUES(1, 'Mary'), (2, 'John');
+> update count: 2
+
+INSERT INTO CHILD VALUES(10, 1), (11, 1), (20, 2), (21, 2);
+> update count: 4
+
+MERGE INTO PARENT KEY(ID) VALUES(1, 'Marcy');
+> update count: 1
+
+SELECT * FROM PARENT;
+> ID NAME
+> -- -----
+> 1 Marcy
+> 2 John
+> rows: 2
+
+SELECT * FROM CHILD;
+> ID PARENTID
+> -- --------
+> 10 1
+> 11 1
+> 20 2
+> 21 2
+> rows: 4
+
+DROP TABLE PARENT;
+> ok
+
+DROP TABLE CHILD;
+> ok
+
+---
+create table STRING_TEST(label varchar(31), label2 varchar(255));
+> ok
+
+create table STRING_TEST_ic(label varchar_ignorecase(31), label2
+varchar_ignorecase(255));
+> ok
+
+insert into STRING_TEST values('HELLO','Bye');
+> update count: 1
+
+insert into STRING_TEST values('HELLO','Hello');
+> update count: 1
+
+insert into STRING_TEST_ic select * from STRING_TEST;
+> update count: 2
+
+-- Expect rows of STRING_TEST_ic and STRING_TEST to be identical
+select * from STRING_TEST;
+> LABEL LABEL2
+> ----- ------
+> HELLO Bye
+> HELLO Hello
+> rows: 2
+
+-- correct
+select * from STRING_TEST_ic;
+> LABEL LABEL2
+> ----- ------
+> HELLO Bye
+> HELLO Hello
+> rows: 2
+
+drop table STRING_TEST;
+> ok
+
+drop table STRING_TEST_ic;
+> ok
+
+CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR_IGNORECASE);
+> ok
+
+INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World'), (3, 'hallo'), (4, 'hoi');
+> update count: 4
+
+SELECT * FROM TEST WHERE NAME = 'HELLO';
+> ID NAME
+> -- -----
+> 1 Hello
+> rows: 1
+
+SELECT * FROM TEST WHERE NAME = 'HE11O';
+> ID NAME
+> -- ----
+> rows: 0
+
+SELECT * FROM TEST ORDER BY NAME;
+> ID NAME
+> -- -----
+> 3 hallo
+> 1 Hello
+> 4 hoi
+> 2 World
+> rows (ordered): 4
+
+DROP TABLE IF EXISTS TEST;
+> ok
+
+--- update with list ---------------------------------------------------------------------------------------------
+CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255));
+> ok
+
+INSERT INTO TEST VALUES(1, 'Hello');
+> update count: 1
+
+INSERT INTO TEST VALUES(2, 'World');
+> update count: 1
+
+SELECT * FROM TEST ORDER BY ID;
+> ID NAME
+> -- -----
+> 1 Hello
+> 2 World
+> rows (ordered): 2
+
+UPDATE TEST t0 SET t0.NAME='Hi' WHERE t0.ID=1;
+> update count: 1
+
+update test set (id, name)=(id+1, name || 'Hi');
+> update count: 2
+
+update test set (id, name)=(select id+1, name || 'Ho' from test t1 where test.id=t1.id);
+> update count: 2
+
+explain update test set (id, name)=(id+1, name || 'Hi');
+> PLAN
+> -------------------------------------------------------------------------------------------------------------------------------------------------
+> UPDATE PUBLIC.TEST /* PUBLIC.TEST.tableScan */ SET ID = ARRAY_GET(((ID + 1), (NAME || 'Hi')), 1), NAME = ARRAY_GET(((ID + 1), (NAME || 'Hi')), 2)
+> rows: 1
+
+explain update test set (id, name)=(select id+1, name || 'Ho' from test t1 where test.id=t1.id);
+> PLAN
+> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> UPDATE PUBLIC.TEST /* PUBLIC.TEST.tableScan */ SET ID = ARRAY_GET((SELECT (ID + 1), (NAME || 'Ho') FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID = TEST.ID */ WHERE TEST.ID = T1.ID), 1), NAME = ARRAY_GET((SELECT (ID + 1), (NAME || 'Ho') FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID = TEST.ID */ WHERE TEST.ID = T1.ID), 2)
+> rows: 1
+
+select * from test;
+> ID NAME
+> -- ---------
+> 3 HiHiHo
+> 4 WorldHiHo
+> rows: 2
+
+DROP TABLE IF EXISTS TEST;
+> ok
+
+--- script ---------------------------------------------------------------------------------------------
+create memory table test(id int primary key, c clob, b blob);
+> ok
+
+insert into test values(0, null, null);
+> update count: 1
+
+insert into test values(1, '', '');
+> update count: 1
+
+insert into test values(2, 'Cafe', X'cafe');
+> update count: 1
+
+script simple nopasswords nosettings;
+> SCRIPT
+> ---------------------------------------------------------------------------
+> -- 3 +/- SELECT COUNT(*) FROM PUBLIC.TEST;
+> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID);
+> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, C CLOB, B BLOB );
+> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN;
+> INSERT INTO PUBLIC.TEST(ID, C, B) VALUES(0, NULL, NULL);
+> INSERT INTO PUBLIC.TEST(ID, C, B) VALUES(1, '', X'');
+> INSERT INTO PUBLIC.TEST(ID, C, B) VALUES(2, 'Cafe', X'cafe');
+> rows: 7
+
+drop table test;
+> ok
+
+--- optimizer ---------------------------------------------------------------------------------------------
+create table b(id int primary key, p int);
+> ok
+
+create index bp on b(p);
+> ok
+
+insert into b values(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9);
+> update count: 10
+
+insert into b select id+10, p+10 from b;
+> update count: 10
+
+explain select * from b b0, b b1, b b2 where b1.p = b0.id and b2.p = b1.id and b0.id=10;
+> PLAN
+> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT B0.ID, B0.P, B1.ID, B1.P, B2.ID, B2.P FROM PUBLIC.B B0 /* PUBLIC.PRIMARY_KEY_4: ID = 10 */ /* WHERE B0.ID = 10 */ INNER JOIN PUBLIC.B B1 /* PUBLIC.BP: P = B0.ID */ ON 1=1 /* WHERE B1.P = B0.ID */ INNER JOIN PUBLIC.B B2 /* PUBLIC.BP: P = B1.ID */ ON 1=1 WHERE (B0.ID = 10) AND ((B1.P = B0.ID) AND (B2.P = B1.ID))
+> rows: 1
+
+explain select * from b b0, b b1, b b2, b b3 where b1.p = b0.id and b2.p = b1.id and b3.p = b2.id and b0.id=10;
+> PLAN
+> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT B0.ID, B0.P, B1.ID, B1.P, B2.ID, B2.P, B3.ID, B3.P FROM PUBLIC.B B0 /* PUBLIC.PRIMARY_KEY_4: ID = 10 */ /* WHERE B0.ID = 10 */ INNER JOIN PUBLIC.B B1 /* PUBLIC.BP: P = B0.ID */ ON 1=1 /* WHERE B1.P = B0.ID */ INNER JOIN PUBLIC.B B2 /* PUBLIC.BP: P = B1.ID */ ON 1=1 /* WHERE B2.P = B1.ID */ INNER JOIN PUBLIC.B B3 /* PUBLIC.BP: P = B2.ID */ ON 1=1 WHERE (B0.ID = 10) AND ((B3.P = B2.ID) AND ((B1.P = B0.ID) AND (B2.P = B1.ID)))
+> rows: 1
+
+explain select * from b b0, b b1, b b2, b b3, b b4 where b1.p = b0.id and b2.p = b1.id and b3.p = b2.id and b4.p = b3.id and b0.id=10;
+> PLAN
+> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT B0.ID, B0.P, B1.ID, B1.P, B2.ID, B2.P, B3.ID, B3.P, B4.ID, B4.P FROM PUBLIC.B B0 /* PUBLIC.PRIMARY_KEY_4: ID = 10 */ /* WHERE B0.ID = 10 */ INNER JOIN PUBLIC.B B1 /* PUBLIC.BP: P = B0.ID */ ON 1=1 /* WHERE B1.P = B0.ID */ INNER JOIN PUBLIC.B B2 /* PUBLIC.BP: P = B1.ID */ ON 1=1 /* WHERE B2.P = B1.ID */ INNER JOIN PUBLIC.B B3 /* PUBLIC.BP: P = B2.ID */ ON 1=1 /* WHERE B3.P = B2.ID */ INNER JOIN PUBLIC.B B4 /* PUBLIC.BP: P = B3.ID */ ON 1=1 WHERE (B0.ID = 10) AND ((B4.P = B3.ID) AND ((B3.P = B2.ID) AND ((B1.P = B0.ID) AND (B2.P = B1.ID))))
+> rows: 1
+
+analyze;
+> ok
+
+explain select * from b b0, b b1, b b2, b b3, b b4 where b1.p = b0.id and b2.p = b1.id and b3.p = b2.id and b4.p = b3.id and b0.id=10;
+> PLAN
+> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT B0.ID, B0.P, B1.ID, B1.P, B2.ID, B2.P, B3.ID, B3.P, B4.ID, B4.P FROM PUBLIC.B B0 /* PUBLIC.PRIMARY_KEY_4: ID = 10 */ /* WHERE B0.ID = 10 */ INNER JOIN PUBLIC.B B1 /* PUBLIC.BP: P = B0.ID */ ON 1=1 /* WHERE B1.P = B0.ID */ INNER JOIN PUBLIC.B B2 /* PUBLIC.BP: P = B1.ID */ ON 1=1 /* WHERE B2.P = B1.ID */ INNER JOIN PUBLIC.B B3 /* PUBLIC.BP: P = B2.ID */ ON 1=1 /* WHERE B3.P = B2.ID */ INNER JOIN PUBLIC.B B4 /* PUBLIC.BP: P = B3.ID */ ON 1=1 WHERE (B0.ID = 10) AND ((B4.P = B3.ID) AND ((B3.P = B2.ID) AND ((B1.P = B0.ID) AND (B2.P = B1.ID))))
+> rows: 1
+
+drop table if exists b;
+> ok
+
+create table test(id int primary key, first_name varchar, name varchar, state int);
+> ok
+
+create index idx_first_name on test(first_name);
+> ok
+
+create index idx_name on test(name);
+> ok
+
+create index idx_state on test(state);
+> ok
+
+insert into test values
+(0, 'Anne', 'Smith', 0), (1, 'Tom', 'Smith', 0),
+(2, 'Tom', 'Jones', 0), (3, 'Steve', 'Johnson', 0),
+(4, 'Steve', 'Martin', 0), (5, 'Jon', 'Jones', 0),
+(6, 'Marc', 'Scott', 0), (7, 'Marc', 'Miller', 0),
+(8, 'Susan', 'Wood', 0), (9, 'Jon', 'Bennet', 0);
+> update count: 10
+
+EXPLAIN SELECT * FROM TEST WHERE ID = 3;
+> PLAN
+> -----------------------------------------------------------------------------------------------------------------------
+> SELECT TEST.ID, TEST.FIRST_NAME, TEST.NAME, TEST.STATE FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 3 */ WHERE ID = 3
+> rows: 1
+
+SELECT SELECTIVITY(ID), SELECTIVITY(FIRST_NAME),
+SELECTIVITY(NAME), SELECTIVITY(STATE)
+FROM TEST WHERE ROWNUM()<100000;
+> SELECTIVITY(ID) SELECTIVITY(FIRST_NAME) SELECTIVITY(NAME) SELECTIVITY(STATE)
+> --------------- ----------------------- ----------------- ------------------
+> 100 60 80 10
+> rows: 1
+
+explain select * from test where name='Smith' and first_name='Tom' and state=0;
+> PLAN
+> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT TEST.ID, TEST.FIRST_NAME, TEST.NAME, TEST.STATE FROM PUBLIC.TEST /* PUBLIC.IDX_FIRST_NAME: FIRST_NAME = 'Tom' */ WHERE (STATE = 0) AND ((NAME = 'Smith') AND (FIRST_NAME = 'Tom'))
+> rows: 1
+
+alter table test alter column name selectivity 100;
+> ok
+
+explain select * from test where name='Smith' and first_name='Tom' and state=0;
+> PLAN
+> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT TEST.ID, TEST.FIRST_NAME, TEST.NAME, TEST.STATE FROM PUBLIC.TEST /* PUBLIC.IDX_NAME: NAME = 'Smith' */ WHERE (STATE = 0) AND ((NAME = 'Smith') AND (FIRST_NAME = 'Tom'))
+> rows: 1
+
+drop table test;
+> ok
+
+CREATE TABLE O(X INT PRIMARY KEY, Y INT);
+> ok
+
+INSERT INTO O SELECT X, X+1 FROM SYSTEM_RANGE(1, 1000);
+> update count: 1000
+
+EXPLAIN SELECT A.X FROM O B, O A, O F, O D, O C, O E, O G, O H, O I, O J
+WHERE 1=J.X and J.Y=I.X AND I.Y=H.X AND H.Y=G.X AND G.Y=F.X AND F.Y=E.X
+AND E.Y=D.X AND D.Y=C.X AND C.Y=B.X AND B.Y=A.X;
+> PLAN
+> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT A.X FROM PUBLIC.O J /* PUBLIC.PRIMARY_KEY_4: X = 1 */ /* WHERE J.X = 1 */ INNER JOIN PUBLIC.O I /* PUBLIC.PRIMARY_KEY_4: X = J.Y */ ON 1=1 /* WHERE J.Y = I.X */ INNER JOIN PUBLIC.O H /* PUBLIC.PRIMARY_KEY_4: X = I.Y */ ON 1=1 /* WHERE I.Y = H.X */ INNER JOIN PUBLIC.O G /* PUBLIC.PRIMARY_KEY_4: X = H.Y */ ON 1=1 /* WHERE H.Y = G.X */ INNER JOIN PUBLIC.O F /* PUBLIC.PRIMARY_KEY_4: X = G.Y */ ON 1=1 /* WHERE G.Y = F.X */ INNER JOIN PUBLIC.O E /* PUBLIC.PRIMARY_KEY_4: X = F.Y */ ON 1=1 /* WHERE F.Y = E.X */ INNER JOIN PUBLIC.O D /* PUBLIC.PRIMARY_KEY_4: X = E.Y */ ON 1=1 /* WHERE E.Y = D.X */ INNER JOIN PUBLIC.O C /* PUBLIC.PRIMARY_KEY_4: X = D.Y */ ON 1=1 /* WHERE D.Y = C.X */ INNER JOIN PUBLIC.O B /* PUBLIC.PRIMARY_KEY_4: X = C.Y */ ON 1=1 /* WHERE C.Y = B.X */ INNER JOIN PUBLIC.O A /* PUBLIC.PRIMARY_KEY_4: X = B.Y */ ON 1=1 WHERE (B.Y = A.X) AND ((C.Y = B.X) AND ((D.Y = C.X) AND ((E.Y = D.X) AND ((F.Y = E.X) AND ((G.Y = F.X) AND ((H.Y = G.X) AND ((I.Y = H.X) AND ((J.X = 1) AND (J.Y = I.X)))))))))
+> rows: 1
+
+DROP TABLE O;
+> ok
+
+CREATE TABLE PARENT(ID INT PRIMARY KEY, AID INT, BID INT, CID INT, DID INT, EID INT, FID INT, GID INT, HID INT);
+> ok
+
+CREATE TABLE CHILD(ID INT PRIMARY KEY);
+> ok
+
+INSERT INTO PARENT SELECT X, 1, 2, 1, 2, 1, 2, 1, 2 FROM SYSTEM_RANGE(0, 1000);
+> update count: 1001
+
+INSERT INTO CHILD SELECT X FROM SYSTEM_RANGE(0, 1000);
+> update count: 1001
+
+SELECT COUNT(*) FROM PARENT, CHILD A, CHILD B, CHILD C, CHILD D, CHILD E, CHILD F, CHILD G, CHILD H
+WHERE AID=A.ID AND BID=B.ID AND CID=C.ID
+AND DID=D.ID AND EID=E.ID AND FID=F.ID AND GID=G.ID AND HID=H.ID;
+> COUNT(*)
+> --------
+> 1001
+> rows: 1
+
+EXPLAIN SELECT COUNT(*) FROM PARENT, CHILD A, CHILD B, CHILD C, CHILD D, CHILD E, CHILD F, CHILD G, CHILD H
+WHERE AID=A.ID AND BID=B.ID AND CID=C.ID
+AND DID=D.ID AND EID=E.ID AND FID=F.ID AND GID=G.ID AND HID=H.ID;
+> PLAN
+> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT COUNT(*) FROM PUBLIC.PARENT /* PUBLIC.PARENT.tableScan */ INNER JOIN PUBLIC.CHILD A /* PUBLIC.PRIMARY_KEY_3: ID = AID */ ON 1=1 /* WHERE AID = A.ID */ INNER JOIN PUBLIC.CHILD B /* PUBLIC.PRIMARY_KEY_3: ID = BID */ ON 1=1 /* WHERE BID = B.ID */ INNER JOIN PUBLIC.CHILD C /* PUBLIC.PRIMARY_KEY_3: ID = CID */ ON 1=1 /* WHERE CID = C.ID */ INNER JOIN PUBLIC.CHILD D /* PUBLIC.PRIMARY_KEY_3: ID = DID */ ON 1=1 /* WHERE DID = D.ID */ INNER JOIN PUBLIC.CHILD E /* PUBLIC.PRIMARY_KEY_3: ID = EID */ ON 1=1 /* WHERE EID = E.ID */ INNER JOIN PUBLIC.CHILD F /* PUBLIC.PRIMARY_KEY_3: ID = FID */ ON 1=1 /* WHERE FID = F.ID */ INNER JOIN PUBLIC.CHILD G /* PUBLIC.PRIMARY_KEY_3: ID = GID */ ON 1=1 /* WHERE GID = G.ID */ INNER JOIN PUBLIC.CHILD H /* PUBLIC.PRIMARY_KEY_3: ID = HID */ ON 1=1 WHERE (HID = H.ID) AND ((GID = G.ID) AND ((FID = F.ID) AND ((EID = E.ID) AND ((DID = D.ID) AND ((CID = C.ID) AND ((AID = A.ID) AND (BID = B.ID)))))))
+> rows: 1
+
+CREATE TABLE FAMILY(ID INT PRIMARY KEY, PARENTID INT);
+> ok
+
+INSERT INTO FAMILY SELECT X, X-1 FROM SYSTEM_RANGE(0, 1000);
+> update count: 1001
+
+EXPLAIN SELECT COUNT(*) FROM CHILD A, CHILD B, FAMILY, CHILD C, CHILD D, PARENT, CHILD E, CHILD F, CHILD G
+WHERE FAMILY.ID=1 AND FAMILY.PARENTID=PARENT.ID
+AND AID=A.ID AND BID=B.ID AND CID=C.ID AND DID=D.ID AND EID=E.ID AND FID=F.ID AND GID=G.ID;
+> PLAN
+> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+> SELECT COUNT(*) FROM PUBLIC.FAMILY /* PUBLIC.PRIMARY_KEY_7: ID = 1 */ /* WHERE FAMILY.ID = 1 */ INNER JOIN PUBLIC.PARENT /* PUBLIC.PRIMARY_KEY_8: ID = FAMILY.PARENTID */ ON 1=1 /* WHERE FAMILY.PARENTID = PARENT.ID */ INNER JOIN PUBLIC.CHILD A /* PUBLIC.PRIMARY_KEY_3: ID = AID */ ON 1=1 /* WHERE AID = A.ID */ INNER JOIN PUBLIC.CHILD B /* PUBLIC.PRIMARY_KEY_3: ID = BID */ ON 1=1 /* WHERE BID = B.ID */ INNER JOIN PUBLIC.CHILD C /* PUBLIC.PRIMARY_KEY_3: ID = CID */ ON 1=1 /* WHERE CID = C.ID */ INNER JOIN PUBLIC.CHILD D /* PUBLIC.PRIMARY_KEY_3: ID = DID */ ON 1=1 /* WHERE DID = D.ID */ INNER JOIN PUBLIC.CHILD E /* PUBLIC.PRIMARY_KEY_3: ID = EID */ ON 1=1 /* WHERE EID = E.ID */ INNER JOIN PUBLIC.CHILD F /* PUBLIC.PRIMARY_KEY_3: ID = FID */ ON 1=1 /* WHERE FID = F.ID */ INNER JOIN PUBLIC.CHILD G /* PUBLIC.PRIMARY_KEY_3: ID = GID */ ON 1=1 WHERE (GID = G.ID) AND ((FID = F.ID) AND ((EID = E.ID) AND ((DID = D.ID) AND ((CID = C.ID) AND ((BID = B.ID) AND ((AID = A.ID) AND ((FAMILY.ID = 1) AND (FAMILY.PARENTID = PARENT.ID))))))))
+> rows: 1
+
+DROP TABLE FAMILY;
+> ok
+
+DROP TABLE PARENT;
+> ok
+
+DROP TABLE CHILD;
+> ok
+
+--- is null / not is null ---------------------------------------------------------------------------------------------
+CREATE TABLE TEST(ID INT UNIQUE, NAME VARCHAR CHECK LENGTH(NAME)>3);
+> ok
+
+DROP TABLE TEST;
+> ok
+
+CREATE TABLE TEST(ID INT, NAME VARCHAR(255), B INT);
+> ok
+
+CREATE UNIQUE INDEX IDXNAME ON TEST(NAME);
+> ok
+
+CREATE UNIQUE INDEX IDX_NAME_B ON TEST(NAME, B);
+> ok
+
+INSERT INTO TEST(ID, NAME, B) VALUES (0, NULL, NULL);
+> update count: 1
+
+INSERT INTO TEST(ID, NAME, B) VALUES (1, 'Hello', NULL);
+> update count: 1
+
+INSERT INTO TEST(ID, NAME, B) VALUES (2, NULL, NULL);
+> update count: 1
+
+INSERT INTO TEST(ID, NAME, B) VALUES (3, 'World', NULL);
+> update count: 1
+
+select * from test;
+> ID NAME B
+> -- ----- ----
+> 0 null null
+> 1 Hello null
+> 2 null null
+> 3 World null
+> rows: 4
+
+UPDATE test SET name='Hi';
+> exception
+
+select * from test;
+> ID NAME B
+> -- ----- ----
+> 0 null null
+> 1 Hello null
+> 2 null null
+> 3 World null
+> rows: 4
+
+UPDATE test SET name=NULL;
+> update count: 4
+
+UPDATE test SET B=1;
+> update count: 4
+
+DROP TABLE TEST;
+> ok
+
+CREATE TABLE TEST(ID INT, NAME VARCHAR);
+> ok
+
+INSERT INTO TEST VALUES(NULL, NULL), (0, 'Hello'), (1, 'World');
+> update count: 3
+
+SELECT * FROM TEST WHERE NOT (1=1);
+> ID NAME
+> -- ----
+> rows: 0
+
+DROP TABLE TEST;
+> ok
+
+create table test_null(a int, b int);
+> ok
+
+insert into test_null values(0, 0);
+> update count: 1
+
+insert into test_null values(0, null);
+> update count: 1
+
+insert into test_null values(null, null);
+> update count: 1
+
+insert into test_null values(null, 0);
+> update count: 1
+
+select * from test_null where a=0;
+> A B
+> - ----
+> 0 0
+> 0 null
+> rows: 2
+
+select * from test_null where not a=0;
+> A B
+> - -
+> rows: 0
+
+select * from test_null where (a=0 or b=0);
+> A B
+> ---- ----
+> 0 0
+> 0 null
+> null 0
+> rows: 3
+
+select * from test_null where not (a=0 or b=0);
+> A B
+> - -
+> rows: 0
+
+select * from test_null where (a=1 or b=0);
+> A B
+> ---- -
+> 0 0
+> null 0
+> rows: 2
+
+select * from test_null where not( a=1 or b=0);
+> A B
+> - -
+> rows: 0
+
+select * from test_null where not(not( a=1 or b=0));
+> A B
+> ---- -
+> 0 0
+> null 0
+> rows: 2
+
+select * from test_null where a=0 or b=0;
+> A B
+> ---- ----
+> 0 0
+> 0 null
+> null 0
+> rows: 3
+
+SELECT count(*) FROM test_null WHERE not ('X'=null and 1=0);
+> COUNT(*)
+> --------
+> 4
+> rows: 1
+
+drop table if exists test_null;
+> ok
+
+--- function alias ---------------------------------------------------------------------------------------------
+CREATE ALIAS MY_SQRT FOR "java.lang.Math.sqrt";
+> ok
+
+SELECT MY_SQRT(2.0) MS, SQRT(2.0);
+> MS 1.4142135623730951
+> ------------------ ------------------
+> 1.4142135623730951 1.4142135623730951
+> rows: 1
+
+SELECT MY_SQRT(SUM(X)), SUM(X), MY_SQRT(55) FROM SYSTEM_RANGE(1, 10);
+> PUBLIC.MY_SQRT(SUM(X)) SUM(X) PUBLIC.MY_SQRT(55)
+> ---------------------- ------ ------------------
+> 7.416198487095663 55 7.416198487095663
+> rows: 1
+
+SELECT MY_SQRT(-1.0) MS, SQRT(NULL) S;
+> MS S
+> --- ----
+> NaN null
+> rows: 1
+
+SCRIPT NOPASSWORDS NOSETTINGS;
+> SCRIPT
+> ------------------------------------------------------------
+> CREATE FORCE ALIAS PUBLIC.MY_SQRT FOR "java.lang.Math.sqrt";
+> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN;
+> rows: 2
+
+SELECT ALIAS_NAME, JAVA_CLASS, JAVA_METHOD, DATA_TYPE, COLUMN_COUNT, RETURNS_RESULT, REMARKS FROM INFORMATION_SCHEMA.FUNCTION_ALIASES;
+> ALIAS_NAME JAVA_CLASS JAVA_METHOD DATA_TYPE COLUMN_COUNT RETURNS_RESULT REMARKS
+> ---------- -------------- ----------- --------- ------------ -------------- -------
+> MY_SQRT java.lang.Math sqrt 8 1 2
+> rows: 1
+
+DROP ALIAS MY_SQRT;
+> ok
+
+--- schema ----------------------------------------------------------------------------------------------
+SELECT DISTINCT TABLE_SCHEMA, TABLE_CATALOG FROM INFORMATION_SCHEMA.TABLES ORDER BY TABLE_SCHEMA;
+> TABLE_SCHEMA TABLE_CATALOG
+> ------------------ -------------
+> INFORMATION_SCHEMA SCRIPT
+> rows (ordered): 1
+
+SELECT * FROM INFORMATION_SCHEMA.SCHEMATA;
+> CATALOG_NAME SCHEMA_NAME SCHEMA_OWNER DEFAULT_CHARACTER_SET_NAME DEFAULT_COLLATION_NAME IS_DEFAULT REMARKS ID
+> ------------ ------------------ ------------ -------------------------- ---------------------- ---------- ------- --
+> SCRIPT INFORMATION_SCHEMA SA Unicode OFF FALSE -1
+> SCRIPT PUBLIC SA Unicode OFF TRUE 0
+> rows: 2
+
+SELECT * FROM INFORMATION_SCHEMA.CATALOGS;
+> CATALOG_NAME
+> ------------
+> SCRIPT
+> rows: 1
+
+SELECT INFORMATION_SCHEMA.SCHEMATA.SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA;
+> SCHEMA_NAME
+> ------------------
+> INFORMATION_SCHEMA
+> PUBLIC
+> rows: 2
+
+SELECT INFORMATION_SCHEMA.SCHEMATA.* FROM INFORMATION_SCHEMA.SCHEMATA;
+> CATALOG_NAME SCHEMA_NAME SCHEMA_OWNER DEFAULT_CHARACTER_SET_NAME DEFAULT_COLLATION_NAME IS_DEFAULT REMARKS ID
+> ------------ ------------------ ------------ -------------------------- ---------------------- ---------- ------- --
+> SCRIPT INFORMATION_SCHEMA SA Unicode OFF FALSE -1
+> SCRIPT PUBLIC SA Unicode OFF TRUE 0
+> rows: 2
+
+CREATE SCHEMA TEST_SCHEMA AUTHORIZATION SA;
+> ok
+
+DROP SCHEMA TEST_SCHEMA RESTRICT;
+> ok
+
+create schema Contact_Schema AUTHORIZATION SA;
+> ok
+
+CREATE TABLE Contact_Schema.Address (
+address_id BIGINT NOT NULL
+CONSTRAINT address_id_check
+CHECK (address_id > 0),
+address_type VARCHAR(20) NOT NULL
+CONSTRAINT address_type
+CHECK (address_type in ('postal','email','web')),
+CONSTRAINT X_PKAddress
+PRIMARY KEY (address_id)
+);
+> ok
+
+create schema ClientServer_Schema AUTHORIZATION SA;
+> ok
+
+CREATE TABLE ClientServer_Schema.PrimaryKey_Seq (
+sequence_name VARCHAR(100) NOT NULL,
+seq_number BIGINT NOT NULL,
+CONSTRAINT X_PKPrimaryKey_Seq
+PRIMARY KEY (sequence_name)
+);
+> ok
+
+alter table Contact_Schema.Address add constraint abc foreign key(address_id)
+references ClientServer_Schema.PrimaryKey_Seq(seq_number);
+> ok
+
+drop table ClientServer_Schema.PrimaryKey_Seq;
+> ok
+
+drop table Contact_Schema.Address;
+> ok
+
+drop schema Contact_Schema restrict;
+> ok
+
+drop schema ClientServer_Schema restrict;
+> ok
+
+--- alter table add / drop / rename column ----------------------------------------------------------------------------------------------
+CREATE MEMORY TABLE TEST(ID INT PRIMARY KEY);
+> ok
+
+SCRIPT NOPASSWORDS NOSETTINGS;
+> SCRIPT
+> ---------------------------------------------------------------------------
+> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST;
+> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID);
+> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL );
+> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN;
+> rows: 4
+
+ALTER TABLE TEST ADD CREATEDATE VARCHAR(255) DEFAULT '2001-01-01' NOT NULL;
+> ok
+
+ALTER TABLE TEST ADD NAME VARCHAR(255) NULL BEFORE CREATEDATE;
+> ok
+
+CREATE INDEX IDXNAME ON TEST(NAME);
+> ok
+
+INSERT INTO TEST(ID, NAME) VALUES(1, 'Hi');
+> update count: 1
+
+ALTER TABLE TEST ALTER COLUMN NAME SET NOT NULL;
+> ok
+
+ALTER TABLE TEST ALTER COLUMN NAME SET NOT NULL;
+> ok
+
+ALTER TABLE TEST ALTER COLUMN NAME SET NULL;
+> ok
+
+ALTER TABLE TEST ALTER COLUMN NAME SET NULL;
+> ok
+
+ALTER TABLE TEST ALTER COLUMN NAME SET DEFAULT 1;
+> ok
+
+SELECT * FROM TEST;
+> ID NAME CREATEDATE
+> -- ---- ----------
+> 1 Hi 2001-01-01
+> rows: 1
+
+ALTER TABLE TEST ADD MODIFY_DATE TIMESTAMP;
+> ok
+
+CREATE MEMORY TABLE TEST_SEQ(ID INT, NAME VARCHAR);
+> ok
+
+INSERT INTO TEST_SEQ VALUES(-1, '-1');
+> update count: 1
+
+ALTER TABLE TEST_SEQ ALTER COLUMN ID IDENTITY;
+> ok
+
+INSERT INTO TEST_SEQ VALUES(NULL, '1');
+> update count: 1
+
+ALTER TABLE TEST_SEQ ALTER COLUMN ID RESTART WITH 10;
+> ok
+
+INSERT INTO TEST_SEQ VALUES(NULL, '10');
+> update count: 1
+
+alter table test_seq drop primary key;
+> ok
+
+ALTER TABLE TEST_SEQ ALTER COLUMN ID INT DEFAULT 20;
+> ok
+
+INSERT INTO TEST_SEQ VALUES(DEFAULT, '20');
+> update count: 1
+
+ALTER TABLE TEST_SEQ ALTER COLUMN NAME RENAME TO DATA;
+> ok
+
+SELECT * FROM TEST_SEQ ORDER BY ID;
+> ID DATA
+> -- ----
+> -1 -1
+> 1 1
+> 10 10
+> 20 20
+> rows (ordered): 4
+
+SCRIPT SIMPLE NOPASSWORDS NOSETTINGS;
+> SCRIPT
+> --------------------------------------------------------------------------------------------------------------------------------------------------------------
+> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.TEST;
+> -- 4 +/- SELECT COUNT(*) FROM PUBLIC.TEST_SEQ;
+> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID);
+> CREATE INDEX PUBLIC.IDXNAME ON PUBLIC.TEST(NAME);
+> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, NAME VARCHAR(255) DEFAULT 1, CREATEDATE VARCHAR(255) DEFAULT '2001-01-01' NOT NULL, MODIFY_DATE TIMESTAMP );
+> CREATE MEMORY TABLE PUBLIC.TEST_SEQ( ID INT DEFAULT 20 NOT NULL, DATA VARCHAR );
+> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN;
+> INSERT INTO PUBLIC.TEST(ID, NAME, CREATEDATE, MODIFY_DATE) VALUES(1, 'Hi', '2001-01-01', NULL);
+> INSERT INTO PUBLIC.TEST_SEQ(ID, DATA) VALUES(-1, '-1');
+> INSERT INTO PUBLIC.TEST_SEQ(ID, DATA) VALUES(1, '1');
+> INSERT INTO PUBLIC.TEST_SEQ(ID, DATA) VALUES(10, '10');
+> INSERT INTO PUBLIC.TEST_SEQ(ID, DATA) VALUES(20, '20');
+> rows: 12
+
+CREATE UNIQUE INDEX IDX_NAME_ID ON TEST(ID, NAME);
+> ok
+
+ALTER TABLE TEST DROP COLUMN NAME;
+> exception
+
+DROP INDEX IDX_NAME_ID;
+> ok
+
+DROP INDEX IDX_NAME_ID IF EXISTS;
+> ok
+
+ALTER TABLE TEST DROP NAME;
+> ok
+
+DROP TABLE TEST_SEQ;
+> ok
+
+SCRIPT NOPASSWORDS NOSETTINGS;
+> SCRIPT
+> ---------------------------------------------------------------------------------------------------------------------------------
+> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.TEST;
+> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID);
+> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, CREATEDATE VARCHAR(255) DEFAULT '2001-01-01' NOT NULL, MODIFY_DATE TIMESTAMP );
+> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN;
+> INSERT INTO PUBLIC.TEST(ID, CREATEDATE, MODIFY_DATE) VALUES (1, '2001-01-01', NULL);
+> rows: 5
+
+ALTER TABLE TEST ADD NAME VARCHAR(255) NULL BEFORE CREATEDATE;
+> ok
+
+SCRIPT NOPASSWORDS NOSETTINGS;
+> SCRIPT
+> ----------------------------------------------------------------------------------------------------------------------------------------------------
+> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.TEST;
+> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID);
+> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, NAME VARCHAR(255), CREATEDATE VARCHAR(255) DEFAULT '2001-01-01' NOT NULL, MODIFY_DATE TIMESTAMP );
+> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN;
+> INSERT INTO PUBLIC.TEST(ID, NAME, CREATEDATE, MODIFY_DATE) VALUES (1, NULL, '2001-01-01', NULL);
+> rows: 5
+
+UPDATE TEST SET NAME = 'Hi';
+> update count: 1
+
+INSERT INTO TEST VALUES(2, 'Hello', DEFAULT, DEFAULT);
+> update count: 1
+
+SELECT * FROM TEST;
+> ID NAME CREATEDATE MODIFY_DATE
+> -- ----- ---------- -----------
+> 1 Hi 2001-01-01 null
+> 2 Hello 2001-01-01 null
+> rows: 2
+
+DROP TABLE TEST;
+> ok
+
+create table test(id int, name varchar invisible);
+> ok
+
+select * from test;
+> ID
+> --
+> rows: 0
+
+alter table test alter column name set visible;
+> ok
+
+select * from test;
+> ID NAME
+> -- ----
+> rows: 0
+
+alter table test add modify_date timestamp invisible before name;
+> ok
+
+select * from test;
+> ID NAME
+> -- ----
+> rows: 0
+
+alter table test alter column modify_date timestamp visible;
+> ok
+
+select * from test; +> ID MODIFY_DATE NAME +> -- ----------- ---- +> rows: 0 + +alter table test alter column modify_date set invisible; +> ok + +select * from test; +> ID NAME +> -- ---- +> rows: 0 + +drop table test; +> ok + +--- autoIncrement ---------------------------------------------------------------------------------------------- +CREATE MEMORY TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR); +> ok + +SCRIPT NOPASSWORDS NOSETTINGS; +> SCRIPT +> --------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID); +> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, NAME VARCHAR ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 4 + +INSERT INTO TEST(ID, NAME) VALUES(1, 'Hi'), (2, 'World'); +> update count: 2 + +SELECT * FROM TEST; +> ID NAME +> -- ----- +> 1 Hi +> 2 World +> rows: 2 + +SELECT * FROM TEST WHERE ? IS NULL; +{ +Hello +> ID NAME +> -- ---- +> rows: 0 +}; +> update count: 0 + +DROP TABLE TEST; +> ok + +--- limit/offset ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'), (2, 'World'), (3, 'with'), (4, 'limited'), (5, 'resources'); +> update count: 5 + +SELECT TOP 2 * FROM TEST ORDER BY ID; +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows (ordered): 2 + +SELECT LIMIT (0+0) (2+0) * FROM TEST ORDER BY ID; +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows (ordered): 2 + +SELECT LIMIT (1+0) (2+0) NAME, -ID, ID _ID_ FROM TEST ORDER BY _ID_; +> NAME - ID _ID_ +> ----- ---- ---- +> World -2 2 +> with -3 3 +> rows (ordered): 2 + +EXPLAIN SELECT LIMIT (1+0) (2+0) * FROM TEST ORDER BY ID; +> PLAN +> -------------------------------------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* 
PUBLIC.PRIMARY_KEY_2 */ ORDER BY 1 LIMIT 2 OFFSET 1 /* index sorted */ +> rows (ordered): 1 + +SELECT * FROM TEST ORDER BY ID LIMIT 2+0 OFFSET 1+0; +> ID NAME +> -- ----- +> 2 World +> 3 with +> rows (ordered): 2 + +SELECT * FROM TEST UNION ALL SELECT * FROM TEST ORDER BY ID LIMIT 2+0 OFFSET 1+0; +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> rows (ordered): 2 + +SELECT * FROM TEST ORDER BY ID OFFSET 4; +> ID NAME +> -- --------- +> 5 resources +> rows (ordered): 1 + +SELECT ID FROM TEST GROUP BY ID UNION ALL SELECT ID FROM TEST GROUP BY ID; +> ID +> -- +> 1 +> 1 +> 2 +> 2 +> 3 +> 3 +> 4 +> 4 +> 5 +> 5 +> rows: 10 + +SELECT * FROM (SELECT ID FROM TEST GROUP BY ID); +> ID +> -- +> 1 +> 2 +> 3 +> 4 +> 5 +> rows: 5 + +EXPLAIN SELECT * FROM TEST UNION ALL SELECT * FROM TEST ORDER BY ID LIMIT 2+0 OFFSET 1+0; +> PLAN +> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> (SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */) UNION ALL (SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */) ORDER BY 1 LIMIT 2 OFFSET 1 +> rows (ordered): 1 + +EXPLAIN DELETE FROM TEST WHERE ID=1; +> PLAN +> ----------------------------------------------------------------------- +> DELETE FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ WHERE ID = 1 +> rows: 1 + +DROP TABLE TEST; +> ok + +CREATE TABLE TEST2COL(A INT, B INT, C VARCHAR(255), PRIMARY KEY(A, B)); +> ok + +INSERT INTO TEST2COL VALUES(0, 0, 'Hallo'), (0, 1, 'Welt'), (1, 0, 'Hello'), (1, 1, 'World'); +> update count: 4 + +SELECT * FROM TEST2COL WHERE A=0 AND B=0; +> A B C +> - - ----- +> 0 0 Hallo +> rows: 1 + +EXPLAIN SELECT * FROM TEST2COL WHERE A=0 AND B=0; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST2COL.A, TEST2COL.B, TEST2COL.C 
FROM PUBLIC.TEST2COL /* PUBLIC.PRIMARY_KEY_E: A = 0 AND B = 0 */ WHERE ((A = 0) AND (B = 0)) AND (A = B) +> rows: 1 + +SELECT * FROM TEST2COL WHERE A=0; +> A B C +> - - ----- +> 0 0 Hallo +> 0 1 Welt +> rows: 2 + +EXPLAIN SELECT * FROM TEST2COL WHERE A=0; +> PLAN +> ------------------------------------------------------------------------------------------------------------ +> SELECT TEST2COL.A, TEST2COL.B, TEST2COL.C FROM PUBLIC.TEST2COL /* PUBLIC.PRIMARY_KEY_E: A = 0 */ WHERE A = 0 +> rows: 1 + +SELECT * FROM TEST2COL WHERE B=0; +> A B C +> - - ----- +> 0 0 Hallo +> 1 0 Hello +> rows: 2 + +EXPLAIN SELECT * FROM TEST2COL WHERE B=0; +> PLAN +> ---------------------------------------------------------------------------------------------------------- +> SELECT TEST2COL.A, TEST2COL.B, TEST2COL.C FROM PUBLIC.TEST2COL /* PUBLIC.TEST2COL.tableScan */ WHERE B = 0 +> rows: 1 + +DROP TABLE TEST2COL; +> ok + +--- testCases ---------------------------------------------------------------------------------------------- +CREATE TABLE t_1 (ch CHARACTER(10), dec DECIMAL(10,2), do DOUBLE, lo BIGINT, "IN" INTEGER, sm SMALLINT, ty TINYINT, +da DATE DEFAULT CURRENT_DATE, ti TIME DEFAULT CURRENT_TIME, ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); +> ok + +INSERT INTO T_1 (ch, dec, do) VALUES ('name', 10.23, 0); +> update count: 1 + +SELECT COUNT(*) FROM T_1; +> COUNT(*) +> -------- +> 1 +> rows: 1 + +DROP TABLE T_1; +> ok + +--- rights ---------------------------------------------------------------------------------------------- +CREATE USER TEST_USER PASSWORD '123'; +> ok + +CREATE TABLE TEST(ID INT); +> ok + +CREATE ROLE TEST_ROLE; +> ok + +CREATE ROLE IF NOT EXISTS TEST_ROLE; +> ok + +GRANT SELECT, INSERT ON TEST TO TEST_USER; +> ok + +GRANT UPDATE ON TEST TO TEST_ROLE; +> ok + +GRANT TEST_ROLE TO TEST_USER; +> ok + +SELECT NAME FROM INFORMATION_SCHEMA.ROLES; +> NAME +> --------- +> PUBLIC +> TEST_ROLE +> rows: 2 + +SELECT GRANTEE, GRANTEETYPE, GRANTEDROLE, RIGHTS, TABLE_SCHEMA, 
TABLE_NAME FROM INFORMATION_SCHEMA.RIGHTS; +> GRANTEE GRANTEETYPE GRANTEDROLE RIGHTS TABLE_SCHEMA TABLE_NAME +> --------- ----------- ----------- -------------- ------------ ---------- +> TEST_ROLE ROLE UPDATE PUBLIC TEST +> TEST_USER USER SELECT, INSERT PUBLIC TEST +> TEST_USER USER TEST_ROLE +> rows: 3 + +SELECT * FROM INFORMATION_SCHEMA.TABLE_PRIVILEGES; +> GRANTOR GRANTEE TABLE_CATALOG TABLE_SCHEMA TABLE_NAME PRIVILEGE_TYPE IS_GRANTABLE +> ------- --------- ------------- ------------ ---------- -------------- ------------ +> null TEST_ROLE SCRIPT PUBLIC TEST UPDATE NO +> null TEST_USER SCRIPT PUBLIC TEST INSERT NO +> null TEST_USER SCRIPT PUBLIC TEST SELECT NO +> rows: 3 + +SELECT * FROM INFORMATION_SCHEMA.COLUMN_PRIVILEGES; +> GRANTOR GRANTEE TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME PRIVILEGE_TYPE IS_GRANTABLE +> ------- --------- ------------- ------------ ---------- ----------- -------------- ------------ +> null TEST_ROLE SCRIPT PUBLIC TEST ID UPDATE NO +> null TEST_USER SCRIPT PUBLIC TEST ID INSERT NO +> null TEST_USER SCRIPT PUBLIC TEST ID SELECT NO +> rows: 3 + +REVOKE INSERT ON TEST FROM TEST_USER; +> ok + +REVOKE TEST_ROLE FROM TEST_USER; +> ok + +SELECT GRANTEE, GRANTEETYPE, GRANTEDROLE, RIGHTS, TABLE_NAME FROM INFORMATION_SCHEMA.RIGHTS; +> GRANTEE GRANTEETYPE GRANTEDROLE RIGHTS TABLE_NAME +> --------- ----------- ----------- ------ ---------- +> TEST_ROLE ROLE UPDATE TEST +> TEST_USER USER SELECT TEST +> rows: 2 + +SELECT * FROM INFORMATION_SCHEMA.TABLE_PRIVILEGES; +> GRANTOR GRANTEE TABLE_CATALOG TABLE_SCHEMA TABLE_NAME PRIVILEGE_TYPE IS_GRANTABLE +> ------- --------- ------------- ------------ ---------- -------------- ------------ +> null TEST_ROLE SCRIPT PUBLIC TEST UPDATE NO +> null TEST_USER SCRIPT PUBLIC TEST SELECT NO +> rows: 2 + +DROP USER TEST_USER; +> ok + +DROP TABLE TEST; +> ok + +DROP ROLE TEST_ROLE; +> ok + +SELECT * FROM INFORMATION_SCHEMA.ROLES; +> NAME REMARKS ID +> ------ ------- -- +> PUBLIC 0 +> rows: 1 + +SELECT * FROM 
INFORMATION_SCHEMA.RIGHTS; +> GRANTEE GRANTEETYPE GRANTEDROLE RIGHTS TABLE_SCHEMA TABLE_NAME ID +> ------- ----------- ----------- ------ ------------ ---------- -- +> rows: 0 + +--- plan ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(?, ?); +{ +1, Hello +2, World +3, Peace +}; +> update count: 3 + +EXPLAIN INSERT INTO TEST VALUES(1, 'Test'); +> PLAN +> ---------------------------------------------------- +> INSERT INTO PUBLIC.TEST(ID, NAME) VALUES (1, 'Test') +> rows: 1 + +EXPLAIN INSERT INTO TEST VALUES(1, 'Test'), (2, 'World'); +> PLAN +> ------------------------------------------------------------------ +> INSERT INTO PUBLIC.TEST(ID, NAME) VALUES (1, 'Test'), (2, 'World') +> rows: 1 + +EXPLAIN INSERT INTO TEST SELECT DISTINCT ID+1, NAME FROM TEST; +> PLAN +> ------------------------------------------------------------------------------------------------------------- +> INSERT INTO PUBLIC.TEST(ID, NAME) SELECT DISTINCT (ID + 1), NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ +> rows: 1 + +EXPLAIN SELECT DISTINCT ID + 1, NAME FROM TEST; +> PLAN +> --------------------------------------------------------------------------- +> SELECT DISTINCT (ID + 1), NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ +> rows: 1 + +EXPLAIN SELECT * FROM TEST WHERE 1=0; +> PLAN +> ----------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan: FALSE */ WHERE FALSE +> rows: 1 + +EXPLAIN SELECT TOP 1 * FROM TEST FOR UPDATE; +> PLAN +> ----------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ LIMIT 1 FOR UPDATE +> rows: 1 + +EXPLAIN SELECT COUNT(NAME) FROM TEST WHERE ID=1; +> PLAN +> 
----------------------------------------------------------------------------------- +> SELECT COUNT(NAME) FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ WHERE ID = 1 +> rows: 1 + +EXPLAIN SELECT * FROM TEST WHERE (ID>=1 AND ID<=2) OR (ID>0 AND ID<3) AND (ID<>6) ORDER BY NAME NULLS FIRST, 1 NULLS LAST, (1+1) DESC; +> PLAN +> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ WHERE ((ID >= 1) AND (ID <= 2)) OR ((ID <> 6) AND ((ID > 0) AND (ID < 3))) ORDER BY 2 NULLS FIRST, 1 NULLS LAST, =2 DESC +> rows (ordered): 1 + +EXPLAIN SELECT * FROM TEST WHERE ID=1 GROUP BY NAME, ID; +> PLAN +> ------------------------------------------------------------------------------------------------------------ +> SELECT TEST.ID, TEST.NAME FROM PUBLIC.TEST /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ WHERE ID = 1 GROUP BY NAME, ID +> rows: 1 + +EXPLAIN PLAN FOR UPDATE TEST SET NAME='Hello', ID=1 WHERE NAME LIKE 'T%' ESCAPE 'x'; +> PLAN +> --------------------------------------------------------------------------------------------------------- +> UPDATE PUBLIC.TEST /* PUBLIC.TEST.tableScan */ SET NAME = 'Hello', ID = 1 WHERE NAME LIKE 'T%' ESCAPE 'x' +> rows: 1 + +EXPLAIN PLAN FOR DELETE FROM TEST; +> PLAN +> --------------------------------------------------- +> DELETE FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ +> rows: 1 + +EXPLAIN PLAN FOR SELECT NAME, COUNT(*) FROM TEST GROUP BY NAME HAVING COUNT(*) > 1; +> PLAN +> ---------------------------------------------------------------------------------------------------- +> SELECT NAME, COUNT(*) FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ GROUP BY NAME HAVING COUNT(*) > 1 +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM test t1 inner join test t2 on t1.id=t2.id and t2.name is not null where t1.id=1; +> PLAN +> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME, T2.ID, T2.NAME FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ /* WHERE T1.ID = 1 */ INNER JOIN PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID = T1.ID AND ID = T1.ID */ ON 1=1 WHERE (T1.ID = 1) AND ((T2.NAME IS NOT NULL) AND (T1.ID = T2.ID)) +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM test t1 left outer join test t2 on t1.id=t2.id and t2.name is not null where t1.id=1; +> PLAN +> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME, T2.ID, T2.NAME FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ /* WHERE T1.ID = 1 */ LEFT OUTER JOIN PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID = T1.ID */ ON (T2.NAME IS NOT NULL) AND (T1.ID = T2.ID) WHERE T1.ID = 1 +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM test t1 left outer join test t2 on t1.id=t2.id and t2.name is null where t1.id=1; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME, T2.ID, T2.NAME FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID = 1 */ /* WHERE T1.ID = 1 */ LEFT OUTER JOIN PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID = T1.ID */ ON (T2.NAME IS NULL) AND (T1.ID = T2.ID) WHERE T1.ID = 1 +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM TEST T1 WHERE EXISTS(SELECT * FROM TEST T2 WHERE T1.ID-1 = T2.ID); +> PLAN +> 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ WHERE EXISTS( SELECT T2.ID, T2.NAME FROM PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID = (T1.ID - 1) */ WHERE (T1.ID - 1) = T2.ID) +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM TEST T1 WHERE ID IN(1, 2); +> PLAN +> --------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID IN(1, 2) */ WHERE ID IN(1, 2) +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM TEST T1 WHERE ID IN(SELECT ID FROM TEST); +> PLAN +> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.PRIMARY_KEY_2: ID IN(SELECT ID FROM PUBLIC.TEST /++ PUBLIC.TEST.tableScan ++/) */ WHERE ID IN( SELECT ID FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */) +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM TEST T1 WHERE ID NOT IN(SELECT ID FROM TEST); +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------ +> SELECT T1.ID, T1.NAME FROM PUBLIC.TEST T1 /* PUBLIC.TEST.tableScan */ WHERE NOT (ID IN( SELECT ID FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */)) +> rows: 1 + +EXPLAIN PLAN FOR SELECT CAST(ID AS VARCHAR(255)) FROM TEST; +> PLAN +> ---------------------------------------------------------------------------- +> SELECT CAST(ID AS VARCHAR(255)) FROM PUBLIC.TEST /* PUBLIC.TEST.tableScan */ +> rows: 1 + +EXPLAIN PLAN FOR SELECT LEFT(NAME, 2) FROM TEST; +> PLAN +> ----------------------------------------------------------------- +> SELECT LEFT(NAME, 2) FROM 
PUBLIC.TEST /* PUBLIC.TEST.tableScan */ +> rows: 1 + +EXPLAIN PLAN FOR SELECT * FROM SYSTEM_RANGE(1, 20); +> PLAN +> ----------------------------------------------------------------------- +> SELECT SYSTEM_RANGE.X FROM SYSTEM_RANGE(1, 20) /* PUBLIC.RANGE_INDEX */ +> rows: 1 + +SELECT * FROM test t1 inner join test t2 on t1.id=t2.id and t2.name is not null where t1.id=1; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 Hello 1 Hello +> rows: 1 + +SELECT * FROM test t1 left outer join test t2 on t1.id=t2.id and t2.name is not null where t1.id=1; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 Hello 1 Hello +> rows: 1 + +SELECT * FROM test t1 left outer join test t2 on t1.id=t2.id and t2.name is null where t1.id=1; +> ID NAME ID NAME +> -- ----- ---- ---- +> 1 Hello null null +> rows: 1 + +DROP TABLE TEST; +> ok + +--- union ---------------------------------------------------------------------------------------------- +SELECT * FROM SYSTEM_RANGE(1,2) UNION ALL SELECT * FROM SYSTEM_RANGE(1,2) ORDER BY 1; +> X +> - +> 1 +> 1 +> 2 +> 2 +> rows (ordered): 4 + +EXPLAIN (SELECT * FROM SYSTEM_RANGE(1,2) UNION ALL SELECT * FROM SYSTEM_RANGE(1,2) ORDER BY 1); +> PLAN +> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> (SELECT SYSTEM_RANGE.X FROM SYSTEM_RANGE(1, 2) /* PUBLIC.RANGE_INDEX */) UNION ALL (SELECT SYSTEM_RANGE.X FROM SYSTEM_RANGE(1, 2) /* PUBLIC.RANGE_INDEX */) ORDER BY 1 +> rows (ordered): 1 + +CREATE TABLE CHILDREN(ID INT PRIMARY KEY, NAME VARCHAR(255), CLASS INT); +> ok + +CREATE TABLE CLASSES(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO CHILDREN VALUES(?, ?, ?); +{ +0, Joe, 0 +1, Anne, 1 +2, Joerg, 1 +3, Petra, 2 +}; +> update count: 4 + +INSERT INTO CLASSES VALUES(?, ?); +{ +0, Kindergarden +1, Class 1 +2, Class 2 +3, Class 3 +4, Class 4 +}; +> update count: 5 + +SELECT * FROM CHILDREN UNION ALL SELECT * FROM CHILDREN ORDER BY ID, 
NAME FOR UPDATE; +> ID NAME CLASS +> -- ----- ----- +> 0 Joe 0 +> 0 Joe 0 +> 1 Anne 1 +> 1 Anne 1 +> 2 Joerg 1 +> 2 Joerg 1 +> 3 Petra 2 +> 3 Petra 2 +> rows (ordered): 8 + +EXPLAIN SELECT * FROM CHILDREN UNION ALL SELECT * FROM CHILDREN ORDER BY ID, NAME FOR UPDATE; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */ FOR UPDATE) UNION ALL (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */ FOR UPDATE) ORDER BY 1, 2 FOR UPDATE +> rows (ordered): 1 + +SELECT 'Child', ID, NAME FROM CHILDREN UNION SELECT 'Class', ID, NAME FROM CLASSES; +> 'Child' ID NAME +> ------- -- ------------ +> Child 0 Joe +> Child 1 Anne +> Child 2 Joerg +> Child 3 Petra +> Class 0 Kindergarden +> Class 1 Class1 +> Class 2 Class2 +> Class 3 Class3 +> Class 4 Class4 +> rows: 9 + +EXPLAIN SELECT 'Child', ID, NAME FROM CHILDREN UNION SELECT 'Class', ID, NAME FROM CLASSES; +> PLAN +> ------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> (SELECT 'Child', ID, NAME FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */) UNION (SELECT 'Class', ID, NAME FROM PUBLIC.CLASSES /* PUBLIC.CLASSES.tableScan */) +> rows: 1 + +SELECT * FROM CHILDREN EXCEPT SELECT * FROM CHILDREN WHERE CLASS=0; +> ID NAME CLASS +> -- ----- ----- +> 1 Anne 1 +> 2 Joerg 1 +> 3 Petra 2 +> rows: 3 + +EXPLAIN SELECT * FROM CHILDREN EXCEPT SELECT * FROM CHILDREN WHERE CLASS=0; +> PLAN +> 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */) EXCEPT (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */ WHERE CLASS = 0) +> rows: 1 + +EXPLAIN SELECT CLASS FROM CHILDREN INTERSECT SELECT ID FROM CLASSES; +> PLAN +> -------------------------------------------------------------------------------------------------------------------------------------------- +> (SELECT CLASS FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */) INTERSECT (SELECT ID FROM PUBLIC.CLASSES /* PUBLIC.CLASSES.tableScan */) +> rows: 1 + +SELECT CLASS FROM CHILDREN INTERSECT SELECT ID FROM CLASSES; +> CLASS +> ----- +> 0 +> 1 +> 2 +> rows: 3 + +EXPLAIN SELECT * FROM CHILDREN EXCEPT SELECT * FROM CHILDREN WHERE CLASS=0; +> PLAN +> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */) EXCEPT (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /* PUBLIC.CHILDREN.tableScan */ WHERE CLASS = 0) +> rows: 1 + +SELECT * FROM CHILDREN CH, CLASSES CL WHERE CH.CLASS = CL.ID; +> ID NAME CLASS ID NAME +> -- ----- ----- -- ------------ +> 0 Joe 0 0 Kindergarden +> 1 Anne 1 1 Class1 +> 2 Joerg 1 1 Class1 +> 3 Petra 2 2 Class2 +> rows: 4 + +SELECT CH.ID CH_ID, CH.NAME CH_NAME, CL.ID CL_ID, CL.NAME CL_NAME FROM CHILDREN CH, CLASSES CL WHERE CH.CLASS = CL.ID; +> CH_ID CH_NAME CL_ID CL_NAME +> ----- ------- ----- ------------ +> 0 Joe 0 Kindergarden +> 1 Anne 1 Class1 +> 2 
Joerg 1 Class1 +> 3 Petra 2 Class2 +> rows: 4 + +CREATE VIEW CHILDREN_CLASSES(CH_ID, CH_NAME, CL_ID, CL_NAME) AS +SELECT CH.ID CH_ID1, CH.NAME CH_NAME2, CL.ID CL_ID3, CL.NAME CL_NAME4 +FROM CHILDREN CH, CLASSES CL WHERE CH.CLASS = CL.ID; +> ok + +SELECT * FROM CHILDREN_CLASSES WHERE CH_NAME <> 'X'; +> CH_ID CH_NAME CL_ID CL_NAME +> ----- ------- ----- ------------ +> 0 Joe 0 Kindergarden +> 1 Anne 1 Class1 +> 2 Joerg 1 Class1 +> 3 Petra 2 Class2 +> rows: 4 + +CREATE VIEW CHILDREN_CLASS1 AS SELECT * FROM CHILDREN_CLASSES WHERE CL_ID=1; +> ok + +SELECT * FROM CHILDREN_CLASS1; +> CH_ID CH_NAME CL_ID CL_NAME +> ----- ------- ----- ------- +> 1 Anne 1 Class1 +> 2 Joerg 1 Class1 +> rows: 2 + +CREATE VIEW CHILDREN_CLASS2 AS SELECT * FROM CHILDREN_CLASSES WHERE CL_ID=2; +> ok + +SELECT * FROM CHILDREN_CLASS2; +> CH_ID CH_NAME CL_ID CL_NAME +> ----- ------- ----- ------- +> 3 Petra 2 Class2 +> rows: 1 + +CREATE VIEW CHILDREN_CLASS12 AS SELECT * FROM CHILDREN_CLASS1 UNION ALL SELECT * FROM CHILDREN_CLASS1; +> ok + +SELECT * FROM CHILDREN_CLASS12; +> CH_ID CH_NAME CL_ID CL_NAME +> ----- ------- ----- ------- +> 1 Anne 1 Class1 +> 1 Anne 1 Class1 +> 2 Joerg 1 Class1 +> 2 Joerg 1 Class1 +> rows: 4 + +DROP VIEW CHILDREN_CLASS2; +> ok + +DROP VIEW CHILDREN_CLASS1 cascade; +> ok + +DROP VIEW CHILDREN_CLASSES; +> ok + +DROP VIEW CHILDREN_CLASS12; +> exception + +CREATE VIEW V_UNION AS SELECT * FROM CHILDREN UNION ALL SELECT * FROM CHILDREN; +> ok + +SELECT * FROM V_UNION WHERE ID=1; +> ID NAME CLASS +> -- ---- ----- +> 1 Anne 1 +> 1 Anne 1 +> rows: 2 + +EXPLAIN SELECT * FROM V_UNION WHERE ID=1; +> PLAN +> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT V_UNION.ID, V_UNION.NAME, V_UNION.CLASS FROM PUBLIC.V_UNION /* (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /++ PUBLIC.PRIMARY_KEY_9: ID IS ?1 ++/ /++ scanCount: 2 ++/ WHERE CHILDREN.ID IS ?1) UNION ALL (SELECT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /++ PUBLIC.PRIMARY_KEY_9: ID IS ?1 ++/ /++ scanCount: 2 ++/ WHERE CHILDREN.ID IS ?1): ID = 1 */ WHERE ID = 1 +> rows: 1 + +CREATE VIEW V_EXCEPT AS SELECT * FROM CHILDREN EXCEPT SELECT * FROM CHILDREN WHERE ID=2; +> ok + +SELECT * FROM V_EXCEPT WHERE ID=1; +> ID NAME CLASS +> -- ---- ----- +> 1 Anne 1 +> rows: 1 + +EXPLAIN SELECT * FROM V_EXCEPT WHERE ID=1; +> PLAN +> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT V_EXCEPT.ID, V_EXCEPT.NAME, V_EXCEPT.CLASS FROM PUBLIC.V_EXCEPT /* (SELECT DISTINCT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /++ PUBLIC.PRIMARY_KEY_9: ID IS ?1 ++/ /++ scanCount: 2 ++/ WHERE CHILDREN.ID IS ?1) EXCEPT (SELECT DISTINCT CHILDREN.ID, CHILDREN.NAME, CHILDREN.CLASS FROM PUBLIC.CHILDREN /++ PUBLIC.PRIMARY_KEY_9: ID = 2 ++/ /++ scanCount: 2 ++/ WHERE ID = 2): ID = 1 */ WHERE ID = 1 +> rows: 1 + +CREATE VIEW V_INTERSECT AS SELECT 
ID, NAME FROM CHILDREN INTERSECT SELECT * FROM CLASSES; +> ok + +SELECT * FROM V_INTERSECT WHERE ID=1; +> ID NAME +> -- ---- +> rows: 0 + +EXPLAIN SELECT * FROM V_INTERSECT WHERE ID=1; +> PLAN +> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> SELECT V_INTERSECT.ID, V_INTERSECT.NAME FROM PUBLIC.V_INTERSECT /* (SELECT DISTINCT ID, NAME FROM PUBLIC.CHILDREN /++ PUBLIC.PRIMARY_KEY_9: ID IS ?1 ++/ /++ scanCount: 2 ++/ WHERE ID IS ?1) INTERSECT (SELECT DISTINCT CLASSES.ID, CLASSES.NAME FROM PUBLIC.CLASSES /++ PUBLIC.PRIMARY_KEY_5: ID IS ?1 ++/ /++ scanCount: 2 ++/ WHERE CLASSES.ID IS ?1): ID = 1 */ WHERE ID = 1 +> rows: 1 + +DROP VIEW V_UNION; +> ok + +DROP VIEW V_EXCEPT; +> ok + +DROP VIEW V_INTERSECT; +> ok + +DROP TABLE CHILDREN; +> ok + +DROP TABLE CLASSES; +> ok + +--- view ---------------------------------------------------------------------------------------------- +CREATE CACHED TABLE TEST_A(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +CREATE CACHED TABLE TEST_B(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +SELECT A.ID AID, A.NAME A_NAME, B.ID BID, B.NAME B_NAME FROM TEST_A A INNER JOIN TEST_B B WHERE A.ID = B.ID; +> AID A_NAME BID B_NAME +> --- ------ --- ------ +> rows: 0 + +INSERT INTO TEST_B VALUES(1, 'Hallo'), (2, 'Welt'), (3, 'Rekord'); +> update count: 3 + +CREATE VIEW IF NOT EXISTS TEST_ALL AS SELECT A.ID AID, A.NAME A_NAME, B.ID BID, B.NAME B_NAME FROM TEST_A A, TEST_B B WHERE A.ID = B.ID; +> ok + +SELECT COUNT(*) FROM TEST_ALL; +> COUNT(*) +> -------- +> 0 +> rows: 1 + +CREATE VIEW IF NOT EXISTS TEST_ALL AS +SELECT * FROM TEST_A; +> ok + +INSERT INTO TEST_A VALUES(1, 'Hello'), (2, 'World'), (3, 'Record'); +> update count: 3 + 
+SELECT * FROM TEST_ALL; +> AID A_NAME BID B_NAME +> --- ------ --- ------ +> 1 Hello 1 Hallo +> 2 World 2 Welt +> 3 Record 3 Rekord +> rows: 3 + +SELECT * FROM TEST_ALL WHERE AID=1; +> AID A_NAME BID B_NAME +> --- ------ --- ------ +> 1 Hello 1 Hallo +> rows: 1 + +SELECT * FROM TEST_ALL WHERE AID>0; +> AID A_NAME BID B_NAME +> --- ------ --- ------ +> 1 Hello 1 Hallo +> 2 World 2 Welt +> 3 Record 3 Rekord +> rows: 3 + +SELECT * FROM TEST_ALL WHERE AID<2; +> AID A_NAME BID B_NAME +> --- ------ --- ------ +> 1 Hello 1 Hallo +> rows: 1 + +SELECT * FROM TEST_ALL WHERE AID<=2; +> AID A_NAME BID B_NAME +> --- ------ --- ------ +> 1 Hello 1 Hallo +> 2 World 2 Welt +> rows: 2 + +SELECT * FROM TEST_ALL WHERE AID>=2; +> AID A_NAME BID B_NAME +> --- ------ --- ------ +> 2 World 2 Welt +> 3 Record 3 Rekord +> rows: 2 + +CREATE VIEW TEST_A_SUB AS SELECT * FROM TEST_A WHERE ID < 2; +> ok + +SELECT TABLE_NAME, SQL FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='VIEW'; +> TABLE_NAME SQL +> ---------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> TEST_ALL CREATE FORCE VIEW PUBLIC.TEST_ALL(AID, A_NAME, BID, B_NAME) AS SELECT A.ID AS AID, A.NAME AS A_NAME, B.ID AS BID, B.NAME AS B_NAME FROM PUBLIC.TEST_A A INNER JOIN PUBLIC.TEST_B B ON 1=1 WHERE A.ID = B.ID +> TEST_A_SUB CREATE FORCE VIEW PUBLIC.TEST_A_SUB(ID, NAME) AS SELECT TEST_A.ID, TEST_A.NAME FROM PUBLIC.TEST_A WHERE ID < 2 +> rows: 2 + +SELECT * FROM TEST_A_SUB WHERE NAME IS NOT NULL; +> ID NAME +> -- ----- +> 1 Hello +> rows: 1 + +DROP VIEW TEST_A_SUB; +> ok + +DROP TABLE TEST_A cascade; +> ok + +DROP TABLE TEST_B cascade; +> ok + +DROP VIEW TEST_ALL; +> exception + +DROP VIEW IF EXISTS TEST_ALL; +> ok + +--- commit/rollback ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME 
VARCHAR(255)); +> ok + +SET AUTOCOMMIT FALSE; +> ok + +INSERT INTO TEST VALUES(1, 'Test'); +> update count: 1 + +ROLLBACK; +> ok + +SELECT * FROM TEST; +> ID NAME +> -- ---- +> rows: 0 + +INSERT INTO TEST VALUES(1, 'Test2'); +> update count: 1 + +SAVEPOINT TEST; +> ok + +INSERT INTO TEST VALUES(2, 'World'); +> update count: 1 + +ROLLBACK TO SAVEPOINT NOT_EXISTING; +> exception + +ROLLBACK TO SAVEPOINT TEST; +> ok + +SELECT * FROM TEST; +> ID NAME +> -- ----- +> 1 Test2 +> rows: 1 + +ROLLBACK WORK; +> ok + +SELECT * FROM TEST; +> ID NAME +> -- ---- +> rows: 0 + +INSERT INTO TEST VALUES(1, 'Test3'); +> update count: 1 + +SAVEPOINT TEST3; +> ok + +INSERT INTO TEST VALUES(2, 'World2'); +> update count: 1 + +ROLLBACK TO SAVEPOINT TEST3; +> ok + +COMMIT WORK; +> ok + +SELECT * FROM TEST; +> ID NAME +> -- ----- +> 1 Test3 +> rows: 1 + +SET AUTOCOMMIT TRUE; +> ok + +DROP TABLE TEST; +> ok + +--- insert..select ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(0, 'Hello'); +> update count: 1 + +INSERT INTO TEST SELECT ID+1, NAME||'+' FROM TEST; +> update count: 1 + +INSERT INTO TEST SELECT ID+2, NAME||'+' FROM TEST; +> update count: 2 + +INSERT INTO TEST SELECT ID+4, NAME||'+' FROM TEST; +> update count: 4 + +SELECT * FROM TEST; +> ID NAME +> -- -------- +> 0 Hello +> 1 Hello+ +> 2 Hello+ +> 3 Hello++ +> 4 Hello+ +> 5 Hello++ +> 6 Hello++ +> 7 Hello+++ +> rows: 8 + +DROP TABLE TEST; +> ok + +--- range ---------------------------------------------------------------------------------------------- +--import java.math.*; +--int s=0;for(int i=2;i<=1000;i++) +--s+=BigInteger.valueOf(i).isProbablePrime(10000)?i:0;s; +select sum(x) from system_range(2, 1000) r where +not exists(select * from system_range(2, 32) r2 where r.x>r2.x and mod(r.x, r2.x)=0); +> SUM(X) +> ------ +> 76127 +> rows: 1 + +SELECT COUNT(*) FROM SYSTEM_RANGE(0, 2111222333); +> 
COUNT(*) +> ---------- +> 2111222334 +> rows: 1 + +select * from system_range(2, 100) r where +not exists(select * from system_range(2, 11) r2 where r.x>r2.x and mod(r.x, r2.x)=0); +> X +> -- +> 11 +> 13 +> 17 +> 19 +> 2 +> 23 +> 29 +> 3 +> 31 +> 37 +> 41 +> 43 +> 47 +> 5 +> 53 +> 59 +> 61 +> 67 +> 7 +> 71 +> 73 +> 79 +> 83 +> 89 +> 97 +> rows: 25 + +--- syntax errors ---------------------------------------------------------------------------------------------- +CREATE SOMETHING STRANGE; +> exception + +SELECT T1.* T2; +> exception + +select replace('abchihihi', 'i', 'o') abcehohoho, replace('this is tom', 'i') 1e_th_st_om from test; +> exception + +select monthname(date )'005-0E9-12') d_set fm test; +> exception + +call substring('bob', 2, -1); +> '' +> -- +> +> rows: 1 + +--- like ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(0, NULL); +> update count: 1 + +INSERT INTO TEST VALUES(1, 'Hello'); +> update count: 1 + +INSERT INTO TEST VALUES(2, 'World'); +> update count: 1 + +INSERT INTO TEST VALUES(3, 'Word'); +> update count: 1 + +INSERT INTO TEST VALUES(4, 'Wo%'); +> update count: 1 + +SELECT * FROM TEST WHERE NAME IS NULL; +> ID NAME +> -- ---- +> 0 null +> rows: 1 + +SELECT * FROM TEST WHERE NAME IS NOT NULL; +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> 3 Word +> 4 Wo% +> rows: 4 + +SELECT * FROM TEST WHERE NAME BETWEEN 'H' AND 'Word'; +> ID NAME +> -- ----- +> 1 Hello +> 3 Word +> 4 Wo% +> rows: 3 + +SELECT * FROM TEST WHERE ID >= 2 AND ID <= 3 AND ID <> 2; +> ID NAME +> -- ---- +> 3 Word +> rows: 1 + +SELECT * FROM TEST WHERE ID>0 AND ID<4 AND ID!=2; +> ID NAME +> -- ----- +> 1 Hello +> 3 Word +> rows: 2 + +SELECT * FROM TEST WHERE 'Hello' LIKE '_el%'; +> ID NAME +> -- ----- +> 0 null +> 1 Hello +> 2 World +> 3 Word +> 4 Wo% +> rows: 5 + +SELECT * FROM TEST WHERE NAME LIKE 'Hello%'; +> ID NAME +> -- ----- +> 1 Hello 
+> rows: 1 + +SELECT * FROM TEST WHERE NAME ILIKE 'hello%'; +> ID NAME +> -- ----- +> 1 Hello +> rows: 1 + +SELECT * FROM TEST WHERE NAME ILIKE 'xxx%'; +> ID NAME +> -- ---- +> rows: 0 + +SELECT * FROM TEST WHERE NAME LIKE 'Wo%'; +> ID NAME +> -- ----- +> 2 World +> 3 Word +> 4 Wo% +> rows: 3 + +SELECT * FROM TEST WHERE NAME LIKE 'Wo\%'; +> ID NAME +> -- ---- +> 4 Wo% +> rows: 1 + +SELECT * FROM TEST WHERE NAME LIKE 'WoX%' ESCAPE 'X'; +> ID NAME +> -- ---- +> 4 Wo% +> rows: 1 + +SELECT * FROM TEST WHERE NAME LIKE 'Word_'; +> ID NAME +> -- ---- +> rows: 0 + +SELECT * FROM TEST WHERE NAME LIKE '%Hello%'; +> ID NAME +> -- ----- +> 1 Hello +> rows: 1 + +SELECT * FROM TEST WHERE 'Hello' LIKE NAME; +> ID NAME +> -- ----- +> 1 Hello +> rows: 1 + +SELECT T1.*, T2.* FROM TEST AS T1, TEST AS T2 WHERE T1.ID = T2.ID AND T1.NAME LIKE T2.NAME || '%'; +> ID NAME ID NAME +> -- ----- -- ----- +> 1 Hello 1 Hello +> 2 World 2 World +> 3 Word 3 Word +> 4 Wo% 4 Wo% +> rows: 4 + +SELECT ID, MAX(NAME) FROM TEST GROUP BY ID HAVING MAX(NAME) = 'World'; +> ID MAX(NAME) +> -- --------- +> 2 World +> rows: 1 + +SELECT ID, MAX(NAME) FROM TEST GROUP BY ID HAVING MAX(NAME) LIKE 'World%'; +> ID MAX(NAME) +> -- --------- +> 2 World +> rows: 1 + +DROP TABLE TEST; +> ok + +--- remarks/comments/syntax ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST( +ID INT PRIMARY KEY, -- this is the primary key, type {integer} +NAME VARCHAR(255) -- this is a string +); +> ok + +INSERT INTO TEST VALUES( +1 /* ID */, +'Hello' // NAME +); +> update count: 1 + +SELECT * FROM TEST; +> ID NAME +> -- ----- +> 1 Hello +> rows: 1 + +DROP_ TABLE_ TEST_T; +> exception + +DROP TABLE TEST /*; +> exception + +DROP TABLE TEST; +> ok + +--- exists ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(0, NULL); +> update count: 1 + 
+INSERT INTO TEST VALUES(1, 'Hello'); +> update count: 1 + +INSERT INTO TEST VALUES(2, 'World'); +> update count: 1 + +SELECT * FROM TEST T WHERE NOT EXISTS( +SELECT * FROM TEST T2 WHERE T.ID > T2.ID); +> ID NAME +> -- ---- +> 0 null +> rows: 1 + +DROP TABLE TEST; +> ok + +--- subquery ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +INSERT INTO TEST VALUES(0, NULL); +> update count: 1 + +INSERT INTO TEST VALUES(1, 'Hello'); +> update count: 1 + +select * from test where (select max(t1.id) from test t1) between 0 and 100; +> ID NAME +> -- ----- +> 0 null +> 1 Hello +> rows: 2 + +INSERT INTO TEST VALUES(2, 'World'); +> update count: 1 + +SELECT * FROM TEST T WHERE T.ID = (SELECT T2.ID FROM TEST T2 WHERE T2.ID=T.ID); +> ID NAME +> -- ----- +> 0 null +> 1 Hello +> 2 World +> rows: 3 + +SELECT (SELECT T2.NAME FROM TEST T2 WHERE T2.ID=T.ID), T.NAME FROM TEST T; +> SELECT T2.NAME FROM PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID = T.ID */ /* scanCount: 2 */ WHERE T2.ID = T.ID NAME +> -------------------------------------------------------------------------------------------------------------- ----- +> Hello Hello +> World World +> null null +> rows: 3 + +SELECT (SELECT SUM(T2.ID) FROM TEST T2 WHERE T2.ID>T.ID), T.ID FROM TEST T; +> SELECT SUM(T2.ID) FROM PUBLIC.TEST T2 /* PUBLIC.PRIMARY_KEY_2: ID > T.ID */ /* scanCount: 2 */ WHERE T2.ID > T.ID ID +> ----------------------------------------------------------------------------------------------------------------- -- +> 2 1 +> 3 0 +> null 2 +> rows: 3 + +select * from test t where t.id+1 in (select id from test); +> ID NAME +> -- ----- +> 0 null +> 1 Hello +> rows: 2 + +select * from test t where t.id in (select id from test where id=t.id); +> ID NAME +> -- ----- +> 0 null +> 1 Hello +> 2 World +> rows: 3 + +select 1 from test, test where 1 in (select 1 from test where id=1); +> 1 +> - +> 1 +> 1 +> 1 +> 1 +> 1 +> 
1 +> 1 +> 1 +> 1 +> rows: 9 + +select * from test, test where id=id; +> exception + +select 1 from test, test where id=id; +> exception + +select 1 from test where id in (select id from test, test); +> exception + +DROP TABLE TEST; +> ok + +--- group by ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(A INT, B INT, VALUE INT, UNIQUE(A, B)); +> ok + +INSERT INTO TEST VALUES(?, ?, ?); +{ +NULL, NULL, NULL +NULL, 0, 0 +NULL, 1, 10 +0, 0, -1 +0, 1, 100 +1, 0, 200 +1, 1, 300 +}; +> update count: 7 + +SELECT A, B, COUNT(*) CAL, COUNT(A) CA, COUNT(B) CB, MIN(VALUE) MI, MAX(VALUE) MA, SUM(VALUE) S FROM TEST GROUP BY A, B; +> A B CAL CA CB MI MA S +> ---- ---- --- -- -- ---- ---- ---- +> 0 0 1 1 1 -1 -1 -1 +> 0 1 1 1 1 100 100 100 +> 1 0 1 1 1 200 200 200 +> 1 1 1 1 1 300 300 300 +> null 0 1 0 1 0 0 0 +> null 1 1 0 1 10 10 10 +> null null 1 0 0 null null null +> rows: 7 + +DROP TABLE TEST; +> ok + +--- data types (blob, clob, varchar_ignorecase) ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT, XB BINARY, XBL BLOB, XO OTHER, XCL CLOB, XVI VARCHAR_IGNORECASE); +> ok + +INSERT INTO TEST VALUES(0, X '', '', '', '', ''); +> update count: 1 + +INSERT INTO TEST VALUES(1, X '0101', '0101', '0101', 'abc', 'aa'); +> update count: 1 + +INSERT INTO TEST VALUES(2, X '0AFF', '08FE', 'F0F1', 'AbCdEfG', 'ZzAaBb'); +> update count: 1 + +INSERT INTO TEST VALUES(3, X '112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff', 
'112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff', '112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff', 'AbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYz', 'AbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYz'); +> update count: 1 + +INSERT INTO TEST VALUES(4, NULL, NULL, NULL, NULL, NULL); +> update count: 1 + +SELECT * FROM TEST; +> ID XB XBL XO XCL XVI +> -- 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> 0 +> 1 0101 0101 0101 abc aa +> 2 0aff 08fe f0f1 
AbCdEfG ZzAaBb +> 3 112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff 112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff 112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff112233445566778899aabbccddeeff AbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYz AbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYzAbCdEfGhIjKlMnOpQrStUvWxYz +> 4 null null null null null +> 
rows: 5 + +SELECT ID FROM TEST WHERE XCL = XCL; +> ID +> -- +> 0 +> 1 +> 2 +> 3 +> rows: 4 + +SELECT ID FROM TEST WHERE XCL LIKE 'abc%'; +> ID +> -- +> 1 +> rows: 1 + +SELECT ID FROM TEST WHERE XVI LIKE 'abc%'; +> ID +> -- +> 3 +> rows: 1 + +SELECT 'abc', 'Papa Joe''s', CAST(-1 AS SMALLINT), CAST(2 AS BIGINT), CAST(0 AS DOUBLE), CAST('0a0f' AS BINARY), CAST(125 AS TINYINT), TRUE, FALSE FROM TEST WHERE ID=1; +> 'abc' 'Papa Joe''s' -1 2 0.0 X'0a0f' 125 TRUE FALSE +> ----- ------------- -- - --- ------- --- ---- ----- +> abc Papa Joe's -1 2 0.0 0a0f 125 TRUE FALSE +> rows: 1 + +SELECT CAST('abcd' AS VARCHAR(255)), CAST('ef_gh' AS VARCHAR(3)); +> 'abcd' 'ef_' +> ------ ----- +> abcd ef_ +> rows: 1 + +DROP TABLE TEST; +> ok + +--- data types (date and time) ---------------------------------------------------------------------------------------------- +CREATE MEMORY TABLE TEST(ID INT, XT TIME, XD DATE, XTS TIMESTAMP(9)); +> ok + +INSERT INTO TEST VALUES(0, '0:0:0','1-2-3','2-3-4 0:0:0'); +> update count: 1 + +INSERT INTO TEST VALUES(1, '01:02:03','2001-02-03','2001-02-29 0:0:0'); +> exception + +INSERT INTO TEST VALUES(1, '24:62:03','2001-02-03','2001-02-01 0:0:0'); +> exception + +INSERT INTO TEST VALUES(1, '23:02:03','2001-04-31','2001-02-01 0:0:0'); +> exception + +INSERT INTO TEST VALUES(1,'1:2:3','4-5-6','7-8-9 0:1:2'); +> update count: 1 + +INSERT INTO TEST VALUES(2,'23:59:59','1999-12-31','1999-12-31 23:59:59.123456789'); +> update count: 1 + +INSERT INTO TEST VALUES(NULL,NULL,NULL,NULL); +> update count: 1 + +SELECT * FROM TEST; +> ID XT XD XTS +> ---- -------- ---------- ----------------------------- +> 0 00:00:00 0001-02-03 0002-03-04 00:00:00 +> 1 01:02:03 0004-05-06 0007-08-09 00:01:02 +> 2 23:59:59 1999-12-31 1999-12-31 23:59:59.123456789 +> null null null null +> rows: 4 + +SELECT XD+1, XD-1, XD-XD FROM TEST; +> DATEADD('DAY', 1, XD) DATEADD('DAY', -1, XD) DATEDIFF('DAY', XD, XD) +> --------------------- ---------------------- ----------------------- +> 
0001-02-04 0001-02-02 0 +> 0004-05-07 0004-05-05 0 +> 2000-01-01 1999-12-30 0 +> null null null +> rows: 4 + +SELECT ID, CAST(XT AS DATE) T2D, CAST(XTS AS DATE) TS2D, +CAST(XD AS TIME) D2T, CAST(XTS AS TIME(9)) TS2T, +CAST(XT AS TIMESTAMP) D2TS, CAST(XD AS TIMESTAMP) D2TS FROM TEST; +> ID T2D TS2D D2T TS2T D2TS D2TS +> ---- ---------- ---------- -------- ------------------ ------------------- ------------------- +> 0 1970-01-01 0002-03-04 00:00:00 00:00:00 1970-01-01 00:00:00 0001-02-03 00:00:00 +> 1 1970-01-01 0007-08-09 00:00:00 00:01:02 1970-01-01 01:02:03 0004-05-06 00:00:00 +> 2 1970-01-01 1999-12-31 00:00:00 23:59:59.123456789 1970-01-01 23:59:59 1999-12-31 00:00:00 +> null null null null null null null +> rows: 4 + +SCRIPT SIMPLE NOPASSWORDS NOSETTINGS; +> SCRIPT +> ---------------------------------------------------------------------------------------------------------------------------------- +> -- 4 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> CREATE MEMORY TABLE PUBLIC.TEST( ID INT, XT TIME, XD DATE, XTS TIMESTAMP(9) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> INSERT INTO PUBLIC.TEST(ID, XT, XD, XTS) VALUES(0, TIME '00:00:00', DATE '0001-02-03', TIMESTAMP '0002-03-04 00:00:00'); +> INSERT INTO PUBLIC.TEST(ID, XT, XD, XTS) VALUES(1, TIME '01:02:03', DATE '0004-05-06', TIMESTAMP '0007-08-09 00:01:02'); +> INSERT INTO PUBLIC.TEST(ID, XT, XD, XTS) VALUES(2, TIME '23:59:59', DATE '1999-12-31', TIMESTAMP '1999-12-31 23:59:59.123456789'); +> INSERT INTO PUBLIC.TEST(ID, XT, XD, XTS) VALUES(NULL, NULL, NULL, NULL); +> rows: 7 + +DROP TABLE TEST; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, t0 timestamp(23, 0), t1 timestamp(23, 1), t2 timestamp(23, 2), t5 timestamp(23, 5)); +> ok + +INSERT INTO TEST VALUES(1, '2001-01-01 12:34:56.789123', '2001-01-01 12:34:56.789123', '2001-01-01 12:34:56.789123', '2001-01-01 12:34:56.789123'); +> update count: 1 + +select * from test; +> ID T0 T1 T2 T5 +> -- ------------------- --------------------- 
---------------------- ------------------------- +> 1 2001-01-01 12:34:57 2001-01-01 12:34:56.8 2001-01-01 12:34:56.79 2001-01-01 12:34:56.78912 +> rows: 1 + +DROP TABLE IF EXISTS TEST; +> ok + +--- data types (decimal) ---------------------------------------------------------------------------------------------- +CALL 1.2E10+1; +> 12000000001 +> ----------- +> 12000000001 +> rows: 1 + +CALL -1.2E-10-1; +> -1.00000000012 +> -------------- +> -1.00000000012 +> rows: 1 + +CALL 1E-1; +> 0.1 +> --- +> 0.1 +> rows: 1 + +CREATE TABLE TEST(ID INT, X1 BIT, XT TINYINT, X_SM SMALLINT, XB BIGINT, XD DECIMAL(10,2), XD2 DOUBLE PRECISION, XR REAL); +> ok + +INSERT INTO TEST VALUES(?, ?, ?, ?, ?, ?, ?, ?); +{ +0,FALSE,0,0,0,0.0,0.0,0.0 +1,TRUE,1,1,1,1.0,1.0,1.0 +4,TRUE,4,4,4,4.0,4.0,4.0 +-1,FALSE,-1,-1,-1,-1.0,-1.0,-1.0 +NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL +}; +> update count: 5 + +SELECT *, 0xFF, -0x1234567890abcd FROM TEST; +> ID X1 XT X_SM XB XD XD2 XR 255 -5124095575370701 +> ---- ----- ---- ---- ---- ----- ---- ---- --- ----------------- +> -1 FALSE -1 -1 -1 -1.00 -1.0 -1.0 255 -5124095575370701 +> 0 FALSE 0 0 0 0.00 0.0 0.0 255 -5124095575370701 +> 1 TRUE 1 1 1 1.00 1.0 1.0 255 -5124095575370701 +> 4 TRUE 4 4 4 4.00 4.0 4.0 255 -5124095575370701 +> null null null null null null null null 255 -5124095575370701 +> rows: 5 + +SELECT XD, CAST(XD AS DECIMAL(10,1)) D2DE, CAST(XD2 AS DECIMAL(4, 3)) DO2DE, CAST(XR AS DECIMAL(20,3)) R2DE FROM TEST; +> XD D2DE DO2DE R2DE +> ----- ---- ------ ------ +> -1.00 -1.0 -1.000 -1.000 +> 0.00 0.0 0.000 0.000 +> 1.00 1.0 1.000 1.000 +> 4.00 4.0 4.000 4.000 +> null null null null +> rows: 5 + +SELECT ID, CAST(XB AS DOUBLE) L2D, CAST(X_SM AS DOUBLE) S2D, CAST(XT AS DOUBLE) X2D FROM TEST; +> ID L2D S2D X2D +> ---- ---- ---- ---- +> -1 -1.0 -1.0 -1.0 +> 0 0.0 0.0 0.0 +> 1 1.0 1.0 1.0 +> 4 4.0 4.0 4.0 +> null null null null +> rows: 5 + +SELECT ID, CAST(XB AS REAL) L2D, CAST(X_SM AS REAL) S2D, CAST(XT AS REAL) T2R FROM TEST; +> ID L2D S2D T2R 
+> ---- ---- ---- ---- +> -1 -1.0 -1.0 -1.0 +> 0 0.0 0.0 0.0 +> 1 1.0 1.0 1.0 +> 4 4.0 4.0 4.0 +> null null null null +> rows: 5 + +SELECT ID, CAST(X_SM AS BIGINT) S2L, CAST(XT AS BIGINT) B2L, CAST(XD2 AS BIGINT) D2L, CAST(XR AS BIGINT) R2L FROM TEST; +> ID S2L B2L D2L R2L +> ---- ---- ---- ---- ---- +> -1 -1 -1 -1 -1 +> 0 0 0 0 0 +> 1 1 1 1 1 +> 4 4 4 4 4 +> null null null null null +> rows: 5 + +SELECT ID, CAST(XB AS INT) L2I, CAST(XD2 AS INT) D2I, CAST(XD2 AS SMALLINT) DO2I, CAST(XR AS SMALLINT) R2I FROM TEST; +> ID L2I D2I DO2I R2I +> ---- ---- ---- ---- ---- +> -1 -1 -1 -1 -1 +> 0 0 0 0 0 +> 1 1 1 1 1 +> 4 4 4 4 4 +> null null null null null +> rows: 5 + +SELECT ID, CAST(XD AS SMALLINT) D2S, CAST(XB AS SMALLINT) L2S, CAST(XT AS SMALLINT) B2S FROM TEST; +> ID D2S L2S B2S +> ---- ---- ---- ---- +> -1 -1 -1 -1 +> 0 0 0 0 +> 1 1 1 1 +> 4 4 4 4 +> null null null null +> rows: 5 + +SELECT ID, CAST(XD2 AS TINYINT) D2B, CAST(XD AS TINYINT) DE2B, CAST(XB AS TINYINT) L2B, CAST(X_SM AS TINYINT) S2B FROM TEST; +> ID D2B DE2B L2B S2B +> ---- ---- ---- ---- ---- +> -1 -1 -1 -1 -1 +> 0 0 0 0 0 +> 1 1 1 1 1 +> 4 4 4 4 4 +> null null null null null +> rows: 5 + +SELECT ID, CAST(XD2 AS BIT) D2B, CAST(XD AS BIT) DE2B, CAST(XB AS BIT) L2B, CAST(X_SM AS BIT) S2B FROM TEST; +> ID D2B DE2B L2B S2B +> ---- ----- ----- ----- ----- +> -1 TRUE TRUE TRUE TRUE +> 0 FALSE FALSE FALSE FALSE +> 1 TRUE TRUE TRUE TRUE +> 4 TRUE TRUE TRUE TRUE +> null null null null null +> rows: 5 + +SELECT CAST('TRUE' AS BIT) NT, CAST('1.0' AS BIT) N1, CAST('0.0' AS BIT) N0; +> NT N1 N0 +> ---- ---- ----- +> TRUE TRUE FALSE +> rows: 1 + +SELECT ID, ID+X1, ID+XT, ID+X_SM, ID+XB, ID+XD, ID+XD2, ID+XR FROM TEST; +> ID ID + X1 ID + XT ID + X_SM ID + XB ID + XD ID + XD2 ID + XR +> ---- ------- ------- --------- ------- ------- -------- ------- +> -1 -1 -2 -2 -2 -2.00 -2.0 -2.0 +> 0 0 0 0 0 0.00 0.0 0.0 +> 1 2 2 2 2 2.00 2.0 2.0 +> 4 5 8 8 8 8.00 8.0 8.0 +> null null null null null null null null +> rows: 5 + 
+SELECT ID, 10-X1, 10-XT, 10-X_SM, 10-XB, 10-XD, 10-XD2, 10-XR FROM TEST; +> ID 10 - X1 10 - XT 10 - X_SM 10 - XB 10 - XD 10 - XD2 10 - XR +> ---- ------- ------- --------- ------- ------- -------- ------- +> -1 10 11 11 11 11.00 11.0 11.0 +> 0 10 10 10 10 10.00 10.0 10.0 +> 1 9 9 9 9 9.00 9.0 9.0 +> 4 9 6 6 6 6.00 6.0 6.0 +> null null null null null null null null +> rows: 5 + +SELECT ID, 10*X1, 10*XT, 10*X_SM, 10*XB, 10*XD, 10*XD2, 10*XR FROM TEST; +> ID 10 * X1 10 * XT 10 * X_SM 10 * XB 10 * XD 10 * XD2 10 * XR +> ---- ------- ------- --------- ------- ------- -------- ------- +> -1 0 -10 -10 -10 -10.00 -10.0 -10.0 +> 0 0 0 0 0 0.00 0.0 0.0 +> 1 10 10 10 10 10.00 10.0 10.0 +> 4 10 40 40 40 40.00 40.0 40.0 +> null null null null null null null null +> rows: 5 + +SELECT ID, SIGN(XT), SIGN(X_SM), SIGN(XB), SIGN(XD), SIGN(XD2), SIGN(XR) FROM TEST; +> ID SIGN(XT) SIGN(X_SM) SIGN(XB) SIGN(XD) SIGN(XD2) SIGN(XR) +> ---- -------- ---------- -------- -------- --------- -------- +> -1 -1 -1 -1 -1 -1 -1 +> 0 0 0 0 0 0 0 +> 1 1 1 1 1 1 1 +> 4 1 1 1 1 1 1 +> null null null null null null null +> rows: 5 + +SELECT ID, XT-XT-XT, X_SM-X_SM-X_SM, XB-XB-XB, XD-XD-XD, XD2-XD2-XD2, XR-XR-XR FROM TEST; +> ID (XT - XT) - XT (X_SM - X_SM) - X_SM (XB - XB) - XB (XD - XD) - XD (XD2 - XD2) - XD2 (XR - XR) - XR +> ---- -------------- -------------------- -------------- -------------- ----------------- -------------- +> -1 1 1 1 1.00 1.0 1.0 +> 0 0 0 0 0.00 0.0 0.0 +> 1 -1 -1 -1 -1.00 -1.0 -1.0 +> 4 -4 -4 -4 -4.00 -4.0 -4.0 +> null null null null null null null +> rows: 5 + +SELECT ID, XT+XT, X_SM+X_SM, XB+XB, XD+XD, XD2+XD2, XR+XR FROM TEST; +> ID XT + XT X_SM + X_SM XB + XB XD + XD XD2 + XD2 XR + XR +> ---- ------- ----------- ------- ------- --------- ------- +> -1 -2 -2 -2 -2.00 -2.0 -2.0 +> 0 0 0 0 0.00 0.0 0.0 +> 1 2 2 2 2.00 2.0 2.0 +> 4 8 8 8 8.00 8.0 8.0 +> null null null null null null null +> rows: 5 + +SELECT ID, XT*XT, X_SM*X_SM, XB*XB, XD*XD, XD2*XD2, XR*XR FROM TEST; +> ID 
XT * XT X_SM * X_SM XB * XB XD * XD XD2 * XD2 XR * XR +> ---- ------- ----------- ------- ------- --------- ------- +> -1 1 1 1 1.0000 1.0 1.0 +> 0 0 0 0 0.0000 0.0 0.0 +> 1 1 1 1 1.0000 1.0 1.0 +> 4 16 16 16 16.0000 16.0 16.0 +> null null null null null null null +> rows: 5 + +SELECT 2/3 FROM TEST WHERE ID=1; +> 0 +> - +> 0 +> rows: 1 + +SELECT ID/ID FROM TEST; +> exception + +SELECT XT/XT FROM TEST; +> exception + +SELECT X_SM/X_SM FROM TEST; +> exception + +SELECT XB/XB FROM TEST; +> exception + +SELECT XD/XD FROM TEST; +> exception + +SELECT XD2/XD2 FROM TEST; +> exception + +SELECT XR/XR FROM TEST; +> exception + +SELECT ID++0, -X1, -XT, -X_SM, -XB, -XD, -XD2, -XR FROM TEST; +> ID + 0 - X1 - XT - X_SM - XB - XD - XD2 - XR +> ------ ----- ---- ------ ---- ----- ----- ---- +> -1 TRUE 1 1 1 1.00 1.0 1.0 +> 0 TRUE 0 0 0 0.00 0.0 0.0 +> 1 FALSE -1 -1 -1 -1.00 -1.0 -1.0 +> 4 FALSE -4 -4 -4 -4.00 -4.0 -4.0 +> null null null null null null null null +> rows: 5 + +SELECT ID, X1||'!', XT||'!', X_SM||'!', XB||'!', XD||'!', XD2||'!', XR||'!' FROM TEST; +> ID X1 || '!' XT || '!' X_SM || '!' XB || '!' XD || '!' XD2 || '!' XR || '!' +> ---- --------- --------- ----------- --------- --------- ---------- --------- +> -1 FALSE! -1! -1! -1! -1.00! -1.0! -1.0! +> 0 FALSE! 0! 0! 0! 0.00! 0.0! 0.0! +> 1 TRUE! 1! 1! 1! 1.00! 1.0! 1.0! +> 4 TRUE! 4! 4! 4! 4.00! 4.0! 4.0! 
+> null null null null null null null null +> rows: 5 + +DROP TABLE TEST; +> ok + +--- in ---------------------------------------------------------------------------------------------- +CREATE TABLE CUSTOMER(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +CREATE TABLE INVOICE(ID INT, CUSTOMER_ID INT, PRIMARY KEY(CUSTOMER_ID, ID), VALUE DECIMAL(10,2)); +> ok + +INSERT INTO CUSTOMER VALUES(?, ?); +{ +1,Lehmann +2,Meier +3,Scott +4,NULL +}; +> update count: 4 + +INSERT INTO INVOICE VALUES(?, ?, ?); +{ +10,1,100.10 +11,1,10.01 +12,1,1.001 +20,2,22.2 +21,2,200.02 +}; +> update count: 5 + +SELECT * FROM CUSTOMER WHERE ID IN(1,2,4,-1); +> ID NAME +> -- ------- +> 1 Lehmann +> 2 Meier +> 4 null +> rows: 3 + +SELECT * FROM CUSTOMER WHERE ID NOT IN(3,4,5,'1'); +> ID NAME +> -- ----- +> 2 Meier +> rows: 1 + +SELECT * FROM CUSTOMER WHERE ID NOT IN(SELECT CUSTOMER_ID FROM INVOICE); +> ID NAME +> -- ----- +> 3 Scott +> 4 null +> rows: 2 + +SELECT * FROM INVOICE WHERE CUSTOMER_ID IN(SELECT C.ID FROM CUSTOMER C); +> ID CUSTOMER_ID VALUE +> -- ----------- ------ +> 10 1 100.10 +> 11 1 10.01 +> 12 1 1.00 +> 20 2 22.20 +> 21 2 200.02 +> rows: 5 + +SELECT * FROM CUSTOMER WHERE NAME IN('Lehmann', 20); +> ID NAME +> -- ------- +> 1 Lehmann +> rows: 1 + +SELECT * FROM CUSTOMER WHERE NAME NOT IN('Scott'); +> ID NAME +> -- ------- +> 1 Lehmann +> 2 Meier +> rows: 2 + +SELECT * FROM CUSTOMER WHERE NAME IN(SELECT NAME FROM CUSTOMER); +> ID NAME +> -- ------- +> 1 Lehmann +> 2 Meier +> 3 Scott +> rows: 3 + +SELECT * FROM CUSTOMER WHERE NAME NOT IN(SELECT NAME FROM CUSTOMER); +> ID NAME +> -- ---- +> rows: 0 + +SELECT * FROM CUSTOMER WHERE NAME = ANY(SELECT NAME FROM CUSTOMER); +> ID NAME +> -- ------- +> 1 Lehmann +> 2 Meier +> 3 Scott +> rows: 3 + +SELECT * FROM CUSTOMER WHERE NAME = ALL(SELECT NAME FROM CUSTOMER); +> ID NAME +> -- ---- +> rows: 0 + +SELECT * FROM CUSTOMER WHERE NAME > ALL(SELECT NAME FROM CUSTOMER); +> ID NAME +> -- ---- +> rows: 0 + +SELECT * FROM CUSTOMER WHERE NAME > 
ANY(SELECT NAME FROM CUSTOMER); +> ID NAME +> -- ----- +> 2 Meier +> 3 Scott +> rows: 2 + +SELECT * FROM CUSTOMER WHERE NAME < ANY(SELECT NAME FROM CUSTOMER); +> ID NAME +> -- ------- +> 1 Lehmann +> 2 Meier +> rows: 2 + +DROP TABLE INVOICE; +> ok + +DROP TABLE CUSTOMER; +> ok + +--- aggregates ---------------------------------------------------------------------------------------------- +drop table if exists t; +> ok + +create table t(x double precision, y double precision); +> ok + +create view s as +select stddev_pop(x) s_px, stddev_samp(x) s_sx, var_pop(x) v_px, var_samp(x) v_sx, +stddev_pop(y) s_py, stddev_samp(y) s_sy, var_pop(y) v_py, var_samp(y) v_sy from t; +> ok + +select var(100000000.1) z from system_range(1, 1000000); +> Z +> --- +> 0.0 +> rows: 1 + +select * from s; +> S_PX S_SX V_PX V_SX S_PY S_SY V_PY V_SY +> ---- ---- ---- ---- ---- ---- ---- ---- +> null null null null null null null null +> rows: 1 + +select some(y>10), every(y>10), min(y), max(y) from t; +> BOOL_OR(Y > 10.0) BOOL_AND(Y > 10.0) MIN(Y) MAX(Y) +> ----------------- ------------------ ------ ------ +> null null null null +> rows: 1 + +insert into t values(1000000004, 4); +> update count: 1 + +select * from s; +> S_PX S_SX V_PX V_SX S_PY S_SY V_PY V_SY +> ---- ---- ---- ---- ---- ---- ---- ---- +> 0.0 null 0.0 null 0.0 null 0.0 null +> rows: 1 + +insert into t values(1000000007, 7); +> update count: 1 + +select * from s; +> S_PX S_SX V_PX V_SX S_PY S_SY V_PY V_SY +> ---- ------------------ ---- ---- ---- ------------------ ---- ---- +> 1.5 2.1213203435596424 2.25 4.5 1.5 2.1213203435596424 2.25 4.5 +> rows: 1 + +insert into t values(1000000013, 13); +> update count: 1 + +select * from s; +> S_PX S_SX V_PX V_SX S_PY S_SY V_PY V_SY +> ------------------ ---------------- ---- ---- ------------------ ---------------- ---- ---- +> 3.7416573867739413 4.58257569495584 14.0 21.0 3.7416573867739413 4.58257569495584 14.0 21.0 +> rows: 1 + +insert into t values(1000000016, 16); +> update count: 
1 + +select * from s; +> S_PX S_SX V_PX V_SX S_PY S_SY V_PY V_SY +> ----------------- ----------------- ---- ---- ----------------- ----------------- ---- ---- +> 4.743416490252569 5.477225575051661 22.5 30.0 4.743416490252569 5.477225575051661 22.5 30.0 +> rows: 1 + +insert into t values(1000000016, 16); +> update count: 1 + +select * from s; +> S_PX S_SX V_PX V_SX S_PY S_SY V_PY V_SY +> ----------------- ----------------- ----------------- ------------------ ----------------- ----------------- ----- ------------------ +> 4.874423036912116 5.449770630813229 23.75999994277954 29.699999928474426 4.874423042781577 5.449770637375485 23.76 29.700000000000003 +> rows: 1 + +select stddev_pop(distinct x) s_px, stddev_samp(distinct x) s_sx, var_pop(distinct x) v_px, var_samp(distinct x) v_sx, +stddev_pop(distinct y) s_py, stddev_samp(distinct y) s_sy, var_pop(distinct y) v_py, var_samp(distinct y) V_SY from t; +> S_PX S_SX V_PX V_SX S_PY S_SY V_PY V_SY +> ----------------- ----------------- ---- ---- ----------------- ----------------- ---- ---- +> 4.743416490252569 5.477225575051661 22.5 30.0 4.743416490252569 5.477225575051661 22.5 30.0 +> rows: 1 + +select some(y>10), every(y>10), min(y), max(y) from t; +> BOOL_OR(Y > 10.0) BOOL_AND(Y > 10.0) MIN(Y) MAX(Y) +> ----------------- ------------------ ------ ------ +> TRUE FALSE 4.0 16.0 +> rows: 1 + +drop view s; +> ok + +drop table t; +> ok + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255), VALUE DECIMAL(10,2)); +> ok + +INSERT INTO TEST VALUES(?, ?, ?); +{ +1,Apples,1.20 +2,Oranges,2.05 +3,Cherries,5.10 +4,Apples,1.50 +5,Apples,1.10 +6,Oranges,1.80 +7,Bananas,2.50 +8,NULL,3.10 +9,NULL,-10.0 +}; +> update count: 9 + +SELECT IFNULL(NAME, '') || ': ' || GROUP_CONCAT(VALUE ORDER BY NAME, VALUE DESC SEPARATOR ', ') FROM TEST GROUP BY NAME; +> (IFNULL(NAME, '') || ': ') || GROUP_CONCAT(VALUE ORDER BY NAME, VALUE DESC SEPARATOR ', ') +> 
------------------------------------------------------------------------------------------ +> Apples: 1.50, 1.20, 1.10 +> Oranges: 2.05, 1.80 +> Bananas: 2.50 +> Cherries: 5.10 +> : 3.10, -10.00 +> rows (ordered): 5 + +SELECT GROUP_CONCAT(ID ORDER BY ID) FROM TEST; +> GROUP_CONCAT(ID ORDER BY ID) +> ---------------------------- +> 1,2,3,4,5,6,7,8,9 +> rows (ordered): 1 + +SELECT STRING_AGG(ID,';') FROM TEST; +> GROUP_CONCAT(ID SEPARATOR ';') +> ------------------------------ +> 1;2;3;4;5;6;7;8;9 +> rows: 1 + +SELECT DISTINCT NAME FROM TEST; +> NAME +> -------- +> Apples +> Bananas +> Cherries +> Oranges +> null +> rows: 5 + +SELECT DISTINCT NAME FROM TEST ORDER BY NAME DESC NULLS LAST; +> NAME +> -------- +> Oranges +> Cherries +> Bananas +> Apples +> null +> rows (ordered): 5 + +SELECT DISTINCT NAME FROM TEST ORDER BY NAME DESC NULLS LAST LIMIT 2 OFFSET 1; +> NAME +> -------- +> Cherries +> Bananas +> rows (ordered): 2 + +SELECT NAME, COUNT(*), SUM(VALUE), MAX(VALUE), MIN(VALUE), AVG(VALUE), COUNT(DISTINCT VALUE) FROM TEST GROUP BY NAME; +> NAME COUNT(*) SUM(VALUE) MAX(VALUE) MIN(VALUE) AVG(VALUE) COUNT(DISTINCT VALUE) +> -------- -------- ---------- ---------- ---------- ----------------------------- --------------------- +> Apples 3 3.80 1.50 1.10 1.266666666666666666666666667 3 +> Bananas 1 2.50 2.50 2.50 2.5 1 +> Cherries 1 5.10 5.10 5.10 5.1 1 +> Oranges 2 3.85 2.05 1.80 1.925 2 +> null 2 -6.90 3.10 -10.00 -3.45 2 +> rows: 5 + +SELECT NAME, MAX(VALUE), MIN(VALUE), MAX(VALUE+1)*MIN(VALUE+1) FROM TEST GROUP BY NAME; +> NAME MAX(VALUE) MIN(VALUE) MAX(VALUE + 1) * MIN(VALUE + 1) +> -------- ---------- ---------- ------------------------------- +> Apples 1.50 1.10 5.2500 +> Bananas 2.50 2.50 12.2500 +> Cherries 5.10 5.10 37.2100 +> Oranges 2.05 1.80 8.5400 +> null 3.10 -10.00 -36.9000 +> rows: 5 + +DROP TABLE TEST; +> ok + +--- order by ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY 
KEY, NAME VARCHAR(255)); +> ok + +CREATE UNIQUE INDEX IDXNAME ON TEST(NAME); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'); +> update count: 1 + +INSERT INTO TEST VALUES(2, 'World'); +> update count: 1 + +INSERT INTO TEST VALUES(3, NULL); +> update count: 1 + +SELECT * FROM TEST ORDER BY NAME; +> ID NAME +> -- ----- +> 3 null +> 1 Hello +> 2 World +> rows (ordered): 3 + +SELECT * FROM TEST ORDER BY NAME DESC; +> ID NAME +> -- ----- +> 2 World +> 1 Hello +> 3 null +> rows (ordered): 3 + +SELECT * FROM TEST ORDER BY NAME NULLS FIRST; +> ID NAME +> -- ----- +> 3 null +> 1 Hello +> 2 World +> rows (ordered): 3 + +SELECT * FROM TEST ORDER BY NAME DESC NULLS FIRST; +> ID NAME +> -- ----- +> 3 null +> 2 World +> 1 Hello +> rows (ordered): 3 + +SELECT * FROM TEST ORDER BY NAME NULLS LAST; +> ID NAME +> -- ----- +> 1 Hello +> 2 World +> 3 null +> rows (ordered): 3 + +SELECT * FROM TEST ORDER BY NAME DESC NULLS LAST; +> ID NAME +> -- ----- +> 2 World +> 1 Hello +> 3 null +> rows (ordered): 3 + +SELECT ID, '=', NAME FROM TEST ORDER BY 2 FOR UPDATE; +> ID '=' NAME +> -- --- ----- +> 1 = Hello +> 2 = World +> 3 = null +> rows (ordered): 3 + +DROP TABLE TEST; +> ok + +--- having ---------------------------------------------------------------------------------------------- +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +CREATE INDEX IDXNAME ON TEST(NAME); +> ok + +INSERT INTO TEST VALUES(1, 'Hello'); +> update count: 1 + +INSERT INTO TEST VALUES(2, 'Hello'); +> update count: 1 + +INSERT INTO TEST VALUES(3, 'World'); +> update count: 1 + +INSERT INTO TEST VALUES(4, 'World'); +> update count: 1 + +INSERT INTO TEST VALUES(5, 'Orange'); +> update count: 1 + +SELECT NAME, SUM(ID) FROM TEST GROUP BY NAME HAVING COUNT(*)>1 ORDER BY NAME; +> NAME SUM(ID) +> ----- ------- +> Hello 3 +> World 7 +> rows (ordered): 2 + +DROP INDEX IF EXISTS IDXNAME; +> ok + +DROP TABLE TEST; +> ok + +--- help 
---------------------------------------------------------------------------------------------- +HELP ABCDE EF_GH; +> ID SECTION TOPIC SYNTAX TEXT +> -- ------- ----- ------ ---- +> rows: 0 + +--- sequence ---------------------------------------------------------------------------------------------- +CREATE CACHED TABLE TEST(ID INT PRIMARY KEY); +> ok + +CREATE CACHED TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY); +> ok + +CREATE SEQUENCE IF NOT EXISTS TEST_SEQ START WITH 10; +> ok + +CREATE SEQUENCE IF NOT EXISTS TEST_SEQ START WITH 20; +> ok + +INSERT INTO TEST VALUES(NEXT VALUE FOR TEST_SEQ); +> update count: 1 + +CALL CURRVAL('test_seq'); +> CURRVAL('test_seq') +> ------------------- +> 10 +> rows: 1 + +INSERT INTO TEST VALUES(NEXT VALUE FOR TEST_SEQ); +> update count: 1 + +CALL NEXT VALUE FOR TEST_SEQ; +> NEXT VALUE FOR PUBLIC.TEST_SEQ +> ------------------------------ +> 12 +> rows: 1 + +INSERT INTO TEST VALUES(NEXT VALUE FOR TEST_SEQ); +> update count: 1 + +SELECT * FROM TEST; +> ID +> -- +> 10 +> 11 +> 13 +> rows: 3 + +SELECT TOP 2 * FROM TEST; +> ID +> -- +> 10 +> 11 +> rows: 2 + +SELECT TOP 2 * FROM TEST ORDER BY ID DESC; +> ID +> -- +> 13 +> 11 +> rows (ordered): 2 + +ALTER SEQUENCE TEST_SEQ RESTART WITH 20 INCREMENT BY -1; +> ok + +INSERT INTO TEST VALUES(NEXT VALUE FOR TEST_SEQ); +> update count: 1 + +INSERT INTO TEST VALUES(NEXT VALUE FOR TEST_SEQ); +> update count: 1 + +SELECT * FROM TEST ORDER BY ID ASC; +> ID +> -- +> 10 +> 11 +> 13 +> 19 +> 20 +> rows (ordered): 5 + +CALL NEXTVAL('test_seq'); +> NEXTVAL('test_seq') +> ------------------- +> 18 +> rows: 1 + +DROP SEQUENCE IF EXISTS TEST_SEQ; +> ok + +DROP SEQUENCE IF EXISTS TEST_SEQ; +> ok + +CREATE SEQUENCE TEST_LONG START WITH 90123456789012345 MAXVALUE 90123456789012345 INCREMENT BY -1; +> ok + +SET AUTOCOMMIT FALSE; +> ok + +CALL NEXT VALUE FOR TEST_LONG; +> NEXT VALUE FOR PUBLIC.TEST_LONG +> ------------------------------- +> 90123456789012345 +> rows: 1 + +CALL IDENTITY(); +> IDENTITY() +> 
----------------- +> 90123456789012345 +> rows: 1 + +SELECT SEQUENCE_NAME, CURRENT_VALUE, INCREMENT FROM INFORMATION_SCHEMA.SEQUENCES; +> SEQUENCE_NAME CURRENT_VALUE INCREMENT +> ------------- ----------------- --------- +> TEST_LONG 90123456789012345 -1 +> rows: 1 + +SET AUTOCOMMIT TRUE; +> ok + +DROP SEQUENCE TEST_LONG; +> ok + +DROP TABLE TEST; +> ok + +--- call ---------------------------------------------------------------------------------------------- +CALL PI(); +> 3.141592653589793 +> ----------------- +> 3.141592653589793 +> rows: 1 + +CALL 1+1; +> 2 +> - +> 2 +> rows: 1 + +--- constraints ---------------------------------------------------------------------------------------------- +CREATE TABLE PARENT(A INT, B INT, PRIMARY KEY(A, B)); +> ok + +CREATE TABLE CHILD(ID INT PRIMARY KEY, PA INT, PB INT, CONSTRAINT AB FOREIGN KEY(PA, PB) REFERENCES PARENT(A, B)); +> ok + +SELECT * FROM INFORMATION_SCHEMA.CROSS_REFERENCES; +> PKTABLE_CATALOG PKTABLE_SCHEMA PKTABLE_NAME PKCOLUMN_NAME FKTABLE_CATALOG FKTABLE_SCHEMA FKTABLE_NAME FKCOLUMN_NAME ORDINAL_POSITION UPDATE_RULE DELETE_RULE FK_NAME PK_NAME DEFERRABILITY +> --------------- -------------- ------------ ------------- --------------- -------------- ------------ ------------- ---------------- ----------- ----------- ------- ------------- ------------- +> SCRIPT PUBLIC PARENT A SCRIPT PUBLIC CHILD PA 1 1 1 AB PRIMARY_KEY_8 7 +> SCRIPT PUBLIC PARENT B SCRIPT PUBLIC CHILD PB 2 1 1 AB PRIMARY_KEY_8 7 +> rows: 2 + +DROP TABLE PARENT; +> ok + +DROP TABLE CHILD; +> ok + +drop table if exists test; +> ok + +create table test(id int primary key, parent int, foreign key(id) references test(parent)); +> ok + +insert into test values(1, 1); +> update count: 1 + +delete from test; +> update count: 1 + +drop table test; +> ok + +drop table if exists child; +> ok + +drop table if exists parent; +> ok + +create table child(a int, id int); +> ok + +create table parent(id int primary key); +> ok + +alter table child add foreign 
key(id) references parent; +> ok + +insert into parent values(1); +> update count: 1 + +delete from parent; +> update count: 1 + +drop table if exists child; +> ok + +drop table if exists parent; +> ok + +CREATE MEMORY TABLE PARENT(ID INT PRIMARY KEY); +> ok + +CREATE MEMORY TABLE CHILD(ID INT, PARENT_ID INT, FOREIGN KEY(PARENT_ID) REFERENCES PARENT); +> ok + +SCRIPT NOPASSWORDS NOSETTINGS; +> SCRIPT +> ------------------------------------------------------------------------------------------------------------------------ +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.CHILD; +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.PARENT; +> ALTER TABLE PUBLIC.CHILD ADD CONSTRAINT PUBLIC.CONSTRAINT_3 FOREIGN KEY(PARENT_ID) REFERENCES PUBLIC.PARENT(ID) NOCHECK; +> ALTER TABLE PUBLIC.PARENT ADD CONSTRAINT PUBLIC.CONSTRAINT_8 PRIMARY KEY(ID); +> CREATE MEMORY TABLE PUBLIC.CHILD( ID INT, PARENT_ID INT ); +> CREATE MEMORY TABLE PUBLIC.PARENT( ID INT NOT NULL ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 7 + +DROP TABLE PARENT; +> ok + +DROP TABLE CHILD; +> ok + +CREATE TABLE TEST(ID INT, CONSTRAINT PK PRIMARY KEY(ID), NAME VARCHAR, PARENT INT, CONSTRAINT P FOREIGN KEY(PARENT) REFERENCES(ID)); +> ok + +ALTER TABLE TEST DROP PRIMARY KEY; +> exception + +ALTER TABLE TEST DROP CONSTRAINT PK; +> ok + +INSERT INTO TEST VALUES(1, 'Frank', 1); +> update count: 1 + +INSERT INTO TEST VALUES(2, 'Sue', 1); +> update count: 1 + +INSERT INTO TEST VALUES(3, 'Karin', 2); +> update count: 1 + +INSERT INTO TEST VALUES(4, 'Joe', 5); +> exception + +INSERT INTO TEST VALUES(4, 'Joe', 3); +> update count: 1 + +DROP TABLE TEST; +> ok + +CREATE MEMORY TABLE TEST(A_INT INT NOT NULL, B_INT INT NOT NULL, PRIMARY KEY(A_INT, B_INT)); +> ok + +ALTER TABLE TEST ADD CONSTRAINT A_UNIQUE UNIQUE(A_INT); +> ok + +ALTER TABLE TEST DROP PRIMARY KEY; +> ok + +ALTER TABLE TEST DROP PRIMARY KEY; +> exception + +ALTER TABLE TEST DROP CONSTRAINT A_UNIQUE; +> ok + +ALTER TABLE TEST ADD CONSTRAINT C1 FOREIGN KEY(A_INT) 
REFERENCES TEST(B_INT); +> ok + +SCRIPT NOPASSWORDS NOSETTINGS; +> SCRIPT +> ---------------------------------------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.C1 FOREIGN KEY(A_INT) REFERENCES PUBLIC.TEST(B_INT) NOCHECK; +> CREATE MEMORY TABLE PUBLIC.TEST( A_INT INT NOT NULL, B_INT INT NOT NULL ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 4 + +ALTER TABLE TEST DROP CONSTRAINT C1; +> ok + +ALTER TABLE TEST DROP CONSTRAINT C1; +> exception + +DROP TABLE TEST; +> ok + +CREATE MEMORY TABLE A_TEST(A_INT INT NOT NULL, A_VARCHAR VARCHAR(255) DEFAULT 'x', A_DATE DATE, A_DECIMAL DECIMAL(10,2)); +> ok + +ALTER TABLE A_TEST ADD PRIMARY KEY(A_INT); +> ok + +ALTER TABLE A_TEST ADD CONSTRAINT MIN_LENGTH CHECK LENGTH(A_VARCHAR)>1; +> ok + +ALTER TABLE A_TEST ADD CONSTRAINT DATE_UNIQUE UNIQUE(A_DATE); +> ok + +ALTER TABLE A_TEST ADD CONSTRAINT DATE_UNIQUE_2 UNIQUE(A_DATE); +> ok + +INSERT INTO A_TEST VALUES(NULL, NULL, NULL, NULL); +> exception + +INSERT INTO A_TEST VALUES(1, 'A', NULL, NULL); +> exception + +INSERT INTO A_TEST VALUES(1, 'AB', NULL, NULL); +> update count: 1 + +INSERT INTO A_TEST VALUES(1, 'AB', NULL, NULL); +> exception + +INSERT INTO A_TEST VALUES(2, 'AB', NULL, NULL); +> update count: 1 + +INSERT INTO A_TEST VALUES(3, 'AB', '2004-01-01', NULL); +> update count: 1 + +INSERT INTO A_TEST VALUES(4, 'AB', '2004-01-01', NULL); +> exception + +INSERT INTO A_TEST VALUES(5, 'ABC', '2004-01-02', NULL); +> update count: 1 + +CREATE MEMORY TABLE B_TEST(B_INT INT DEFAULT -1 NOT NULL , B_VARCHAR VARCHAR(255) DEFAULT NULL NULL, CONSTRAINT B_UNIQUE UNIQUE(B_INT)); +> ok + +ALTER TABLE B_TEST ADD CHECK LENGTH(B_VARCHAR)>1; +> ok + +ALTER TABLE B_TEST ADD CONSTRAINT C1 FOREIGN KEY(B_INT) REFERENCES A_TEST(A_INT) ON DELETE CASCADE ON UPDATE CASCADE; +> ok + +ALTER TABLE B_TEST ADD PRIMARY KEY(B_INT); +> ok + +INSERT INTO B_TEST VALUES(10, 'X'); 
+> exception + +INSERT INTO B_TEST VALUES(1, 'X'); +> exception + +INSERT INTO B_TEST VALUES(1, 'XX'); +> update count: 1 + +SELECT * FROM B_TEST; +> B_INT B_VARCHAR +> ----- --------- +> 1 XX +> rows: 1 + +UPDATE A_TEST SET A_INT = A_INT*10; +> update count: 4 + +SELECT * FROM B_TEST; +> B_INT B_VARCHAR +> ----- --------- +> 10 XX +> rows: 1 + +ALTER TABLE B_TEST DROP CONSTRAINT C1; +> ok + +ALTER TABLE B_TEST ADD CONSTRAINT C2 FOREIGN KEY(B_INT) REFERENCES A_TEST(A_INT) ON DELETE SET NULL ON UPDATE SET NULL; +> ok + +UPDATE A_TEST SET A_INT = A_INT*10; +> exception + +SELECT * FROM B_TEST; +> B_INT B_VARCHAR +> ----- --------- +> 10 XX +> rows: 1 + +ALTER TABLE B_TEST DROP CONSTRAINT C2; +> ok + +UPDATE B_TEST SET B_INT = 20; +> update count: 1 + +SELECT A_INT FROM A_TEST; +> A_INT +> ----- +> 10 +> 20 +> 30 +> 50 +> rows: 4 + +ALTER TABLE B_TEST ADD CONSTRAINT C3 FOREIGN KEY(B_INT) REFERENCES A_TEST(A_INT) ON DELETE SET DEFAULT ON UPDATE SET DEFAULT; +> ok + +UPDATE A_TEST SET A_INT = A_INT*10; +> update count: 4 + +SELECT * FROM B_TEST; +> B_INT B_VARCHAR +> ----- --------- +> -1 XX +> rows: 1 + +DELETE FROM A_TEST; +> update count: 4 + +SELECT * FROM B_TEST; +> B_INT B_VARCHAR +> ----- --------- +> -1 XX +> rows: 1 + +SCRIPT NOPASSWORDS NOSETTINGS; +> SCRIPT +> ---------------------------------------------------------------------------------------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.A_TEST; +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.B_TEST; +> ALTER TABLE PUBLIC.A_TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_7 PRIMARY KEY(A_INT); +> ALTER TABLE PUBLIC.A_TEST ADD CONSTRAINT PUBLIC.DATE_UNIQUE UNIQUE(A_DATE); +> ALTER TABLE PUBLIC.A_TEST ADD CONSTRAINT PUBLIC.DATE_UNIQUE_2 UNIQUE(A_DATE); +> ALTER TABLE PUBLIC.A_TEST ADD CONSTRAINT PUBLIC.MIN_LENGTH CHECK(LENGTH(A_VARCHAR) > 1) NOCHECK; +> ALTER TABLE PUBLIC.B_TEST ADD CONSTRAINT PUBLIC.B_UNIQUE UNIQUE(B_INT); +> ALTER TABLE PUBLIC.B_TEST ADD 
CONSTRAINT PUBLIC.C3 FOREIGN KEY(B_INT) REFERENCES PUBLIC.A_TEST(A_INT) ON DELETE SET DEFAULT ON UPDATE SET DEFAULT NOCHECK; +> ALTER TABLE PUBLIC.B_TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_76 CHECK(LENGTH(B_VARCHAR) > 1) NOCHECK; +> ALTER TABLE PUBLIC.B_TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_760 PRIMARY KEY(B_INT); +> CREATE MEMORY TABLE PUBLIC.A_TEST( A_INT INT NOT NULL, A_VARCHAR VARCHAR(255) DEFAULT 'x', A_DATE DATE, A_DECIMAL DECIMAL(10, 2) ); +> CREATE MEMORY TABLE PUBLIC.B_TEST( B_INT INT DEFAULT -1 NOT NULL, B_VARCHAR VARCHAR(255) DEFAULT NULL ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> INSERT INTO PUBLIC.B_TEST(B_INT, B_VARCHAR) VALUES (-1, 'XX'); +> rows: 14 + +DROP TABLE A_TEST; +> ok + +DROP TABLE B_TEST; +> ok + +CREATE MEMORY TABLE FAMILY(ID INT, NAME VARCHAR(20)); +> ok + +CREATE INDEX FAMILY_ID_NAME ON FAMILY(ID, NAME); +> ok + +CREATE MEMORY TABLE PARENT(ID INT, FAMILY_ID INT, NAME VARCHAR(20)); +> ok + +ALTER TABLE PARENT ADD CONSTRAINT PARENT_FAMILY FOREIGN KEY(FAMILY_ID) +REFERENCES FAMILY(ID); +> ok + +CREATE MEMORY TABLE CHILD( +ID INT, +PARENTID INT, +FAMILY_ID INT, +UNIQUE(ID, PARENTID), +CONSTRAINT PARENT_CHILD FOREIGN KEY(PARENTID, FAMILY_ID) +REFERENCES PARENT(ID, FAMILY_ID) +ON UPDATE CASCADE +ON DELETE SET NULL, +NAME VARCHAR(20)); +> ok + +INSERT INTO FAMILY VALUES(1, 'Capone'); +> update count: 1 + +INSERT INTO CHILD VALUES(100, 1, 1, 'early'); +> exception + +INSERT INTO PARENT VALUES(1, 1, 'Sue'); +> update count: 1 + +INSERT INTO PARENT VALUES(2, 1, 'Joe'); +> update count: 1 + +INSERT INTO CHILD VALUES(100, 1, 1, 'Simon'); +> update count: 1 + +INSERT INTO CHILD VALUES(101, 1, 1, 'Sabine'); +> update count: 1 + +INSERT INTO CHILD VALUES(200, 2, 1, 'Jim'); +> update count: 1 + +INSERT INTO CHILD VALUES(201, 2, 1, 'Johann'); +> update count: 1 + +UPDATE PARENT SET ID=3 WHERE ID=1; +> update count: 1 + +SELECT * FROM CHILD; +> ID PARENTID FAMILY_ID NAME +> --- -------- --------- ------ +> 100 3 1 Simon +> 101 3 1 Sabine +> 
200 2 1 Jim +> 201 2 1 Johann +> rows: 4 + +UPDATE CHILD SET PARENTID=-1 WHERE PARENTID IS NOT NULL; +> exception + +DELETE FROM PARENT WHERE ID=2; +> update count: 1 + +SELECT * FROM CHILD; +> ID PARENTID FAMILY_ID NAME +> --- -------- --------- ------ +> 100 3 1 Simon +> 101 3 1 Sabine +> 200 null null Jim +> 201 null null Johann +> rows: 4 + +SCRIPT SIMPLE NOPASSWORDS NOSETTINGS; +> SCRIPT +> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.FAMILY; +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.PARENT; +> -- 4 +/- SELECT COUNT(*) FROM PUBLIC.CHILD; +> ALTER TABLE PUBLIC.CHILD ADD CONSTRAINT PUBLIC.CONSTRAINT_3 UNIQUE(ID, PARENTID); +> ALTER TABLE PUBLIC.CHILD ADD CONSTRAINT PUBLIC.PARENT_CHILD FOREIGN KEY(PARENTID, FAMILY_ID) REFERENCES PUBLIC.PARENT(ID, FAMILY_ID) ON DELETE SET NULL ON UPDATE CASCADE NOCHECK; +> ALTER TABLE PUBLIC.PARENT ADD CONSTRAINT PUBLIC.PARENT_FAMILY FOREIGN KEY(FAMILY_ID) REFERENCES PUBLIC.FAMILY(ID) NOCHECK; +> CREATE INDEX PUBLIC.FAMILY_ID_NAME ON PUBLIC.FAMILY(ID, NAME); +> CREATE MEMORY TABLE PUBLIC.CHILD( ID INT, PARENTID INT, FAMILY_ID INT, NAME VARCHAR(20) ); +> CREATE MEMORY TABLE PUBLIC.FAMILY( ID INT, NAME VARCHAR(20) ); +> CREATE MEMORY TABLE PUBLIC.PARENT( ID INT, FAMILY_ID INT, NAME VARCHAR(20) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(100, 3, 1, 'Simon'); +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(101, 3, 1, 'Sabine'); +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(200, NULL, NULL, 'Jim'); +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(201, NULL, NULL, 'Johann'); +> INSERT INTO PUBLIC.FAMILY(ID, NAME) VALUES(1, 'Capone'); +> INSERT INTO PUBLIC.PARENT(ID, FAMILY_ID, NAME) VALUES(3, 1, 'Sue'); +> rows: 17 + +ALTER TABLE CHILD 
DROP CONSTRAINT PARENT_CHILD; +> ok + +SCRIPT SIMPLE NOPASSWORDS NOSETTINGS; +> SCRIPT +> -------------------------------------------------------------------------------------------------------------------------- +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.FAMILY; +> -- 1 +/- SELECT COUNT(*) FROM PUBLIC.PARENT; +> -- 4 +/- SELECT COUNT(*) FROM PUBLIC.CHILD; +> ALTER TABLE PUBLIC.CHILD ADD CONSTRAINT PUBLIC.CONSTRAINT_3 UNIQUE(ID, PARENTID); +> ALTER TABLE PUBLIC.PARENT ADD CONSTRAINT PUBLIC.PARENT_FAMILY FOREIGN KEY(FAMILY_ID) REFERENCES PUBLIC.FAMILY(ID) NOCHECK; +> CREATE INDEX PUBLIC.FAMILY_ID_NAME ON PUBLIC.FAMILY(ID, NAME); +> CREATE MEMORY TABLE PUBLIC.CHILD( ID INT, PARENTID INT, FAMILY_ID INT, NAME VARCHAR(20) ); +> CREATE MEMORY TABLE PUBLIC.FAMILY( ID INT, NAME VARCHAR(20) ); +> CREATE MEMORY TABLE PUBLIC.PARENT( ID INT, FAMILY_ID INT, NAME VARCHAR(20) ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(100, 3, 1, 'Simon'); +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(101, 3, 1, 'Sabine'); +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(200, NULL, NULL, 'Jim'); +> INSERT INTO PUBLIC.CHILD(ID, PARENTID, FAMILY_ID, NAME) VALUES(201, NULL, NULL, 'Johann'); +> INSERT INTO PUBLIC.FAMILY(ID, NAME) VALUES(1, 'Capone'); +> INSERT INTO PUBLIC.PARENT(ID, FAMILY_ID, NAME) VALUES(3, 1, 'Sue'); +> rows: 16 + +DELETE FROM PARENT; +> update count: 1 + +SELECT * FROM CHILD; +> ID PARENTID FAMILY_ID NAME +> --- -------- --------- ------ +> 100 3 1 Simon +> 101 3 1 Sabine +> 200 null null Jim +> 201 null null Johann +> rows: 4 + +DROP TABLE PARENT; +> ok + +DROP TABLE CHILD; +> ok + +DROP TABLE FAMILY; +> ok + +CREATE TABLE INVOICE(CUSTOMER_ID INT, ID INT, TOTAL_AMOUNT DECIMAL(10,2), PRIMARY KEY(CUSTOMER_ID, ID)); +> ok + +CREATE TABLE INVOICE_LINE(CUSTOMER_ID INT, INVOICE_ID INT, LINE_ID INT, TEXT VARCHAR, AMOUNT DECIMAL(10,2)); +> ok + +CREATE INDEX ON 
INVOICE_LINE(CUSTOMER_ID); +> ok + +ALTER TABLE INVOICE_LINE ADD FOREIGN KEY(CUSTOMER_ID, INVOICE_ID) REFERENCES INVOICE(CUSTOMER_ID, ID) ON DELETE CASCADE; +> ok + +INSERT INTO INVOICE VALUES(1, 100, NULL), (1, 101, NULL); +> update count: 2 + +INSERT INTO INVOICE_LINE VALUES(1, 100, 10, 'Apples', 20.35), (1, 100, 20, 'Paper', 10.05), (1, 101, 10, 'Pencil', 1.10), (1, 101, 20, 'Chair', 540.40); +> update count: 4 + +INSERT INTO INVOICE_LINE VALUES(1, 102, 20, 'Nothing', 30.00); +> exception + +DELETE FROM INVOICE WHERE ID = 100; +> update count: 1 + +SELECT * FROM INVOICE_LINE; +> CUSTOMER_ID INVOICE_ID LINE_ID TEXT AMOUNT +> ----------- ---------- ------- ------ ------ +> 1 101 10 Pencil 1.10 +> 1 101 20 Chair 540.40 +> rows: 2 + +DROP TABLE INVOICE; +> ok + +DROP TABLE INVOICE_LINE; +> ok + +CREATE MEMORY TABLE TEST(A INT, B INT, FOREIGN KEY (B) REFERENCES(A) ON UPDATE RESTRICT ON DELETE NO ACTION); +> ok + +SCRIPT NOPASSWORDS NOSETTINGS; +> SCRIPT +> ------------------------------------------------------------------------------------------------------------ +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 FOREIGN KEY(B) REFERENCES PUBLIC.TEST(A) NOCHECK; +> CREATE MEMORY TABLE PUBLIC.TEST( A INT, B INT ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> rows: 4 + +DROP TABLE TEST; +> ok + +--- users ---------------------------------------------------------------------------------------------- +CREATE USER TEST PASSWORD 'abc'; +> ok + +CREATE USER TEST_ADMIN_X PASSWORD 'def' ADMIN; +> ok + +ALTER USER TEST_ADMIN_X RENAME TO TEST_ADMIN; +> ok + +ALTER USER TEST_ADMIN ADMIN TRUE; +> ok + +CREATE USER TEST2 PASSWORD '123' ADMIN; +> ok + +ALTER USER TEST2 SET PASSWORD 'abc'; +> ok + +ALTER USER TEST2 ADMIN FALSE; +> ok + +CREATE MEMORY TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +> ok + +CREATE MEMORY TABLE TEST2_X(ID INT); +> ok + +CREATE INDEX IDX_ID ON TEST2_X(ID); +> ok + +ALTER TABLE TEST2_X 
RENAME TO TEST2; +> ok + +ALTER INDEX IDX_ID RENAME TO IDX_ID2; +> ok + +SCRIPT NOPASSWORDS NOSETTINGS; +> SCRIPT +> --------------------------------------------------------------------------- +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST2; +> -- 0 +/- SELECT COUNT(*) FROM PUBLIC.TEST; +> ALTER TABLE PUBLIC.TEST ADD CONSTRAINT PUBLIC.CONSTRAINT_2 PRIMARY KEY(ID); +> CREATE INDEX PUBLIC.IDX_ID2 ON PUBLIC.TEST2(ID); +> CREATE MEMORY TABLE PUBLIC.TEST( ID INT NOT NULL, NAME VARCHAR(255) ); +> CREATE MEMORY TABLE PUBLIC.TEST2( ID INT ); +> CREATE USER IF NOT EXISTS SA PASSWORD '' ADMIN; +> CREATE USER IF NOT EXISTS TEST PASSWORD ''; +> CREATE USER IF NOT EXISTS TEST2 PASSWORD ''; +> CREATE USER IF NOT EXISTS TEST_ADMIN PASSWORD '' ADMIN; +> rows: 10 + +SELECT NAME, ADMIN FROM INFORMATION_SCHEMA.USERS; +> NAME ADMIN +> ---------- ----- +> SA true +> TEST false +> TEST2 false +> TEST_ADMIN true +> rows: 4 + +DROP TABLE TEST2; +> ok + +DROP TABLE TEST; +> ok + +DROP USER TEST; +> ok + +DROP USER IF EXISTS TEST; +> ok + +DROP USER IF EXISTS TEST2; +> ok + +DROP USER TEST_ADMIN; +> ok + +SET AUTOCOMMIT FALSE; +> ok + +SET SALT '' HASH ''; +> ok + +CREATE USER SECURE SALT '001122' HASH '1122334455'; +> ok + +ALTER USER SECURE SET SALT '112233' HASH '2233445566'; +> ok + +SCRIPT NOSETTINGS; +> SCRIPT +> ----------------------------------------------------------------- +> CREATE USER IF NOT EXISTS SA SALT '' HASH '' ADMIN; +> CREATE USER IF NOT EXISTS SECURE SALT '112233' HASH '2233445566'; +> rows: 2 + +SET PASSWORD '123'; +> ok + +SET AUTOCOMMIT TRUE; +> ok + +DROP USER SECURE; +> ok + +--- sequence with manual value ------------------ +drop table if exists test; +> ok + +CREATE TABLE TEST(ID bigint generated by default as identity (start with 1), name varchar); +> ok + +SET AUTOCOMMIT FALSE; +> ok + +insert into test(name) values('Hello'); +> update count: 1 + +insert into test(name) values('World'); +> update count: 1 + +call identity(); +> IDENTITY() +> ---------- +> 2 +> 
rows: 1 + +insert into test(id, name) values(1234567890123456, 'World'); +> update count: 1 + +call identity(); +> IDENTITY() +> ---------------- +> 1234567890123456 +> rows: 1 + +insert into test(name) values('World'); +> update count: 1 + +call identity(); +> IDENTITY() +> ---------------- +> 1234567890123457 +> rows: 1 + +select * from test order by id; +> ID NAME +> ---------------- ----- +> 1 Hello +> 2 World +> 1234567890123456 World +> 1234567890123457 World +> rows (ordered): 4 + +SET AUTOCOMMIT TRUE; +> ok + +drop table if exists test; +> ok + +CREATE TABLE TEST(ID bigint generated by default as identity (start with 1), name varchar); +> ok + +SET AUTOCOMMIT FALSE; +> ok + +insert into test(name) values('Hello'); +> update count: 1 + +insert into test(name) values('World'); +> update count: 1 + +call identity(); +> IDENTITY() +> ---------- +> 2 +> rows: 1 + +insert into test(id, name) values(1234567890123456, 'World'); +> update count: 1 + +call identity(); +> IDENTITY() +> ---------------- +> 1234567890123456 +> rows: 1 + +insert into test(name) values('World'); +> update count: 1 + +call identity(); +> IDENTITY() +> ---------------- +> 1234567890123457 +> rows: 1 + +select * from test order by id; +> ID NAME +> ---------------- ----- +> 1 Hello +> 2 World +> 1234567890123456 World +> 1234567890123457 World +> rows (ordered): 4 + +SET AUTOCOMMIT TRUE; +> ok + +drop table test; +> ok + +--- test cases --------------------------------------------------------------------------------------------- +create memory table word(word_id integer, name varchar); +> ok + +alter table word alter column word_id integer(10) auto_increment; +> ok + +insert into word(name) values('Hello'); +> update count: 1 + +alter table word alter column word_id restart with 30872; +> ok + +insert into word(name) values('World'); +> update count: 1 + +select * from word; +> WORD_ID NAME +> ------- ----- +> 1 Hello +> 30872 World +> rows: 2 + +drop table word; +> ok + +create table 
test(id int, name varchar); +> ok + +insert into test values(5, 'b'), (5, 'b'), (20, 'a'); +> update count: 3 + +drop table test; +> ok + +select 0 from (( + select 0 as f from dual u1 where null in (?, ?, ?, ?, ?) +) union all ( + select u2.f from ( + select 0 as f from ( + select 0 from dual u2f1f1 where now() = ? + ) u2f1 + ) u2 +)) where f = 12345; +{ +11, 22, 33, 44, 55, null +> 0 +> - +> rows: 0 +}; +> update count: 0 + +create table x(id int not null); +> ok + +alter table if exists y add column a varchar; +> ok + +alter table if exists x add column a varchar; +> ok + +alter table if exists x add column a varchar; +> exception + +alter table if exists y alter column a rename to b; +> ok + +alter table if exists x alter column a rename to b; +> ok + +alter table if exists x alter column a rename to b; +> exception + +alter table if exists y alter column b set default 'a'; +> ok + +alter table if exists x alter column b set default 'a'; +> ok + +insert into x(id) values(1); +> update count: 1 + +select b from x; +> B +> - +> a +> rows: 1 + +delete from x; +> update count: 1 + +alter table if exists y alter column b drop default; +> ok + +alter table if exists x alter column b drop default; +> ok + +alter table if exists y alter column b set not null; +> ok + +alter table if exists x alter column b set not null; +> ok + +insert into x(id) values(1); +> exception + +alter table if exists y alter column b drop not null; +> ok + +alter table if exists x alter column b drop not null; +> ok + +insert into x(id) values(1); +> update count: 1 + +select b from x; +> B +> ---- +> null +> rows: 1 + +delete from x; +> update count: 1 + +alter table if exists y add constraint x_pk primary key (id); +> ok + +alter table if exists x add constraint x_pk primary key (id); +> ok + +alter table if exists x add constraint x_pk primary key (id); +> exception + +insert into x(id) values(1); +> update count: 1 + +insert into x(id) values(1); +> exception + +delete from x; +> update 
count: 1 + +alter table if exists y add constraint x_check check (b = 'a'); +> ok + +alter table if exists x add constraint x_check check (b = 'a'); +> ok + +alter table if exists x add constraint x_check check (b = 'a'); +> exception + +insert into x(id, b) values(1, 'b'); +> exception + +alter table if exists y rename constraint x_check to x_check1; +> ok + +alter table if exists x rename constraint x_check to x_check1; +> ok + +alter table if exists x rename constraint x_check to x_check1; +> exception + +alter table if exists y drop constraint x_check1; +> ok + +alter table if exists x drop constraint x_check1; +> ok + +alter table if exists y rename to z; +> ok + +alter table if exists x rename to z; +> ok + +alter table if exists x rename to z; +> ok + +insert into z(id, b) values(1, 'b'); +> update count: 1 + +delete from z; +> update count: 1 + +alter table if exists y add constraint z_uk unique (b); +> ok + +alter table if exists z add constraint z_uk unique (b); +> ok + +alter table if exists z add constraint z_uk unique (b); +> exception + +insert into z(id, b) values(1, 'b'); +> update count: 1 + +insert into z(id, b) values(1, 'b'); +> exception + +delete from z; +> update count: 1 + +alter table if exists y drop column b; +> ok + +alter table if exists z drop column b; +> ok + +alter table if exists z drop column b; +> exception + +alter table if exists y drop primary key; +> ok + +alter table if exists z drop primary key; +> ok + +alter table if exists z drop primary key; +> exception + +create table x (id int not null primary key); +> ok + +alter table if exists y add constraint z_fk foreign key (id) references x (id); +> ok + +alter table if exists z add constraint z_fk foreign key (id) references x (id); +> ok + +alter table if exists z add constraint z_fk foreign key (id) references x (id); +> exception + +insert into z (id) values (1); +> exception + +alter table if exists y drop foreign key z_fk; +> ok + +alter table if exists z drop foreign 
key z_fk; +> ok + +alter table if exists z drop foreign key z_fk; +> exception + +insert into z (id) values (1); +> update count: 1 + +delete from z; +> update count: 1 + +drop table x; +> ok + +drop table z; +> ok + +create schema x; +> ok + +alter schema if exists y rename to z; +> ok + +alter schema if exists x rename to z; +> ok + +alter schema if exists x rename to z; +> ok + +create table z.z (id int); +> ok + +drop schema z cascade; +> ok + +----- Issue#493 ----- +create table test (year int, action varchar(10)); +> ok + +insert into test values (2015, 'order'), (2016, 'order'), (2014, 'order'); +> update count: 3 + +insert into test values (2014, 'execution'), (2015, 'execution'), (2016, 'execution'); +> update count: 3 + +select * from test where year in (select distinct year from test order by year desc limit 1 offset 0); +> YEAR ACTION +> ---- --------- +> 2016 order +> 2016 execution + +drop table test; +> ok + diff --git a/modules/h2/src/test/java/org/h2/test/scripts/testSimple.in.txt b/modules/h2/src/test/java/org/h2/test/scripts/testSimple.in.txt new file mode 100644 index 0000000000000..bc635fe3f5698 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/scripts/testSimple.in.txt @@ -0,0 +1,699 @@ +select 1000L / 10; +> 100; +select * from (select x as y from dual order by y); +> 1; +select a.x from dual a, dual b order by x; +> 1; +select 1 from(select 2 from(select 1) a right join dual b) c; +> 1; +select 1.00 / 3 * 0.00; +> 0.00000000000000000000000000000; +select 1.00000 / 3 * 0.0000; +> 0.0000000000000000000000000000000000; +select 1.0000000 / 3 * 0.00000; +> 0.0000000000000000000000000000000000000; +select 1.0000000 / 3 * 0.000000; +> 0E-38; +create table test(id null); +drop table test; +select * from (select group_concat(distinct 1) from system_range(1, 3)); +> 1; +select sum(mod(x, 2) = 1) from system_range(1, 10); +> 5; +create table a(x int); +create table b(x int); +select count(*) from (select b.x from a left join b); +> 0; +drop 
table a, b; +select count(distinct now()) c from system_range(1, 100), system_range(1, 1000); +> 1; +select {fn TIMESTAMPADD(SQL_TSI_DAY, 1, {ts '2011-10-20 20:30:40.001'})}; +> 2011-10-21 20:30:40.001; +select {fn TIMESTAMPADD(SQL_TSI_SECOND, 1, cast('2011-10-20 20:30:40.001' as timestamp))}; +> 2011-10-20 20:30:41.001; +select N'test'; +> test; +select E'test\\test'; +> test\test; +create table a(id int) as select null; +create table b(id int references a(id)) as select null; +delete from a; +drop table a, b; +create table test(a int, b int) as select 2, 0; +create index idx on test(b, a); +select count(*) from test where a in(2, 10) and b in(0, null); +> 1; +drop table test; +create table test(a int, b int) as select 1, 0; +create index idx on test(b, a); +select count(*) from test where b in(null, 0) and a in(1, null); +> 1; +drop table test; +create cached temp table test(id identity) not persistent; +drop table test; +create table test(a int, b int, unique(a, b)); +insert into test values(1,1), (1,2); +select count(*) from test where a in(1,2) and b in(1,2); +> 2; +drop table test; +create table test(id int); +alter table test alter column id set default 'x'; +select column_default from information_schema.columns c where c.table_name = 'TEST' and c.column_name = 'ID'; +> 'x'; +alter table test alter column id set not null; +select is_nullable from information_schema.columns c where c.table_name = 'TEST' and c.column_name = 'ID'; +> NO; +alter table test alter column id set data type varchar; +select type_name from information_schema.columns c where c.table_name = 'TEST' and c.column_name = 'ID'; +> VARCHAR; +alter table test alter column id type int; +select type_name from information_schema.columns c where c.table_name = 'TEST' and c.column_name = 'ID'; +> INTEGER; +alter table test alter column id drop default; +select column_default from information_schema.columns c where c.table_name = 'TEST' and c.column_name = 'ID'; +> null; +alter table test alter 
column id drop not null; +select is_nullable from information_schema.columns c where c.table_name = 'TEST' and c.column_name = 'ID'; +> YES; +drop table test; +select x from (select *, rownum as r from system_range(1, 3)) where r=2; +> 2; +create table test(name varchar(255)) as select 'Hello+World+'; +select count(*) from test where name like 'Hello++World++' escape '+'; +> 1; +select count(*) from test where name like '+H+e+l+l+o++World++' escape '+'; +> 1; +select count(*) from test where name like 'Hello+World++' escape '+'; +> 0; +select count(*) from test where name like 'Hello++World+' escape '+'; +> 0; +drop table test; + +select count(*) from system_range(1, 1); +> 1; +select count(*) from system_range(1, -1); +> 0; + +select 1 from dual where '\' like '\' escape ''; +> 1; +select left(timestamp '2001-02-03 08:20:31+04', 4); +> 2001; + +create table t1$2(id int); +drop table t1$2; + +create table test(id int primary key) as select x from system_range(1, 200); +delete from test; +insert into test(id) values(1); +select * from test order by id; +> 1; +drop table test; + +create memory table test(id int) not persistent as select 1 from dual; +insert into test values(1); +select count(1) from test; +> 2; +@reconnect; +select count(1) from test; +> 0; +drop table test; +create table test(t clob) as select 1; +select distinct t from test; +> 1; +drop table test; +create table test(id int unique not null); +drop table test; +create table test(id int not null unique); +drop table test; +select count(*)from((select 1 from dual limit 1)union(select 2 from dual limit 1)); +> 2; +select sum(cast(x as int)) from system_range(2147483547, 2147483637); +> 195421006872; +select sum(x) from system_range(9223372036854775707, 9223372036854775797); +> 839326855353784593432; +select sum(cast(100 as tinyint)) from system_range(1, 1000); +> 100000; +select sum(cast(100 as smallint)) from system_range(1, 1000); +> 100000; +select avg(cast(x as int)) from system_range(2147483547, 
2147483637); +> 2147483592; +select avg(x) from system_range(9223372036854775707, 9223372036854775797); +> 9223372036854775752; +select avg(cast(100 as tinyint)) from system_range(1, 1000); +> 100; +select avg(cast(100 as smallint)) from system_range(1, 1000); +> 100; +select datediff(yyyy, now(), now()); +> 0; +create table t(d date) as select '2008-11-01' union select '2008-11-02'; +select 1 from t group by year(d) order by year(d); +> 1; +drop table t; +create table t(d int) as select 2001 union select 2002; +select 1 from t group by d/10 order by d/10; +> 1; +drop table t; + +create schema test; +create sequence test.report_id_seq; +select nextval('"test".REPORT_ID_SEQ'); +> 1; +select nextval('"test"."report_id_seq"'); +> 2; +select nextval('test.report_id_seq'); +> 3; +drop schema test cascade; + +create table master(id int primary key); +create table detail(id int primary key, x bigint, foreign key(x) references master(id) on delete cascade); +alter table detail alter column x bigint; +insert into master values(0); +insert into detail values(0,0); +delete from master; +drop table master, detail; + +drop all objects; +create table test(id int, parent int references test(id) on delete cascade); +insert into test values(0, 0); +alter table test rename to test2; +delete from test2; +drop table test2; + +SELECT X FROM dual GROUP BY X HAVING X=AVG(X); +> 1; +create view test_view(id,) as select * from dual; +drop view test_view; +create table test(id int,); +insert into test(id,) values(1,); +merge into test(id,) key(id,) values(1,); +drop table test; + +SET MODE DB2; +SELECT * FROM SYSTEM_RANGE(1, 100) OFFSET 99 ROWS; +> 100; +SELECT * FROM SYSTEM_RANGE(1, 100) OFFSET 50 ROWS FETCH FIRST 1 ROW ONLY; +> 51; +SELECT * FROM SYSTEM_RANGE(1, 100) FETCH FIRST 1 ROWS ONLY; +> 1; +SELECT * FROM SYSTEM_RANGE(1, 100) FETCH FIRST ROW ONLY; +> 1; +SET MODE REGULAR; + +create domain email as varchar comment 'e-mail'; +create table test(e email); +select remarks from 
INFORMATION_SCHEMA.COLUMNS where table_name='TEST'; +> e-mail; +drop table test; +drop domain email; + +create table test$test(id int); +drop table test$test; +create table test$$test(id int); +drop table test$$test; +create table test (id varchar(36) as random_uuid() primary key); +insert into test() values(); +delete from test where id = select id from test; +drop table test; +create table test (id varchar(36) as now() primary key); +insert into test() values(); +delete from test where id = select id from test; +drop table test; +SELECT SOME(X>4) FROM SYSTEM_RANGE(1,6); +> TRUE; +SELECT EVERY(X>4) FROM SYSTEM_RANGE(1,6); +> FALSE; +SELECT BOOL_OR(X>4) FROM SYSTEM_RANGE(1,6); +> TRUE; +SELECT BOOL_AND(X>4) FROM SYSTEM_RANGE(1,6); +> FALSE; +SELECT BIT_OR(X) FROM SYSTEM_RANGE(1,6); +> 7; +SELECT BIT_AND(X) FROM SYSTEM_RANGE(1,6); +> 0; +SELECT BIT_AND(X) FROM SYSTEM_RANGE(1,1); +> 1; +CREATE TABLE TEST(ID IDENTITY); +ALTER TABLE TEST ALTER COLUMN ID RESTART WITH ? {1:10}; +INSERT INTO TEST VALUES(NULL); +SELECT * FROM TEST; +> 10; +DROP TABLE TEST; +CREATE SEQUENCE TEST_SEQ; +ALTER SEQUENCE TEST_SEQ RESTART WITH ? INCREMENT BY ? 
{1:20, 2: 3}; +SELECT NEXT VALUE FOR TEST_SEQ; +> 20; +SELECT NEXT VALUE FOR TEST_SEQ; +> 23; +DROP SEQUENCE TEST_SEQ; + +create schema Contact; +CREATE TABLE Account (id BIGINT); +CREATE TABLE Person (id BIGINT, FOREIGN KEY (id) REFERENCES Account(id)); +CREATE TABLE Contact.Contact (id BIGINT, FOREIGN KEY (id) REFERENCES public.Person(id)); +drop schema contact cascade; +drop table account, person; + +create schema Contact; +CREATE TABLE Account (id BIGINT primary key); +CREATE TABLE Person (id BIGINT primary key, FOREIGN KEY (id) REFERENCES Account); +CREATE TABLE Contact.Contact (id BIGINT primary key, FOREIGN KEY (id) REFERENCES public.Person); +drop schema contact cascade; +drop table account, person; + +CREATE TABLE TEST(A int NOT NULL, B int NOT NULL, C int) ; +ALTER TABLE TEST ADD CONSTRAINT CON UNIQUE(A,B); +ALTER TABLE TEST DROP C; +ALTER TABLE TEST DROP CONSTRAINT CON; +ALTER TABLE TEST DROP B; +DROP TABLE TEST; + +select count(d.*) from dual d group by d.x; +> 1; + +create table test(id int); +select count(*) from (select * from ((select * from test) union (select * from test)) a) b where id = 0; +> 0; +select count(*) from (select * from ((select * from test) union select * from test) a) b where id = 0; +> 0; +select count(*) from (select * from (select * from test union select * from test) a) b where id = 0; +> 0; +select 1 from ((test d1 inner join test d2 on d1.id = d2.id) inner join test d3 on d1.id = d3.id) inner join test d4 on d4.id = d1.id; +drop table test; + +select lpad('string', 10); +> string; + +select count(*) from (select * from dual union select * from dual) where x = 0; +> 0; +select count(*) from (select * from (select * from dual union select * from dual)) where x = 0; +> 0; + +select instr('abcisj','s', -1) from dual; +> 5; +CREATE TABLE TEST(ID INT); +INSERT INTO TEST VALUES(1), (2), (3); +create index idx_desc on test(id desc); +select * from test where id between 0 and 1; +> 1; +select * from test where id between 3 and 4; +> 
3; +drop table test; + +CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR(255)); +INSERT INTO TEST VALUES(1, 'Hello'), (2, 'HelloWorld'), (3, 'HelloWorldWorld'); +SELECT COUNT(*) FROM TEST WHERE NAME REGEXP 'World'; +> 2; +SELECT NAME FROM TEST WHERE NAME REGEXP 'WorldW'; +> HelloWorldWorld; +drop table test; + +select * from (select x from (select x from dual)) where 1=x; +> 1; +CREATE VIEW TEST_VIEW AS SELECT X FROM (SELECT X FROM DUAL); +SELECT * FROM TEST_VIEW; +> 1; +SELECT * FROM TEST_VIEW; +> 1; +DROP VIEW TEST_VIEW; + +SELECT X FROM (SELECT X, X AS "XY" FROM DUAL) WHERE X=1; +> 1; +SELECT X FROM (SELECT X, X AS "X Y" FROM DUAL) WHERE X=1; +> 1; +SELECT X FROM (SELECT X, X AS "X Y" FROM DUAL AS "D Z") WHERE X=1; +> 1; + +select * from (select x from dual union select convert(x, int) from dual) where x=0; +create table test(id int); +insert into scriptSimple.public.test(id) values(1), (2); +update test t set t.id=t.id+1; +update public.test set public.test.id=1; +select count(scriptSimple.public.test.id) from scriptSimple.public.test; +> 2; +update scriptSimple.public.test set scriptSimple.public.test.id=1; +drop table scriptSimple.public.test; + +select year(timestamp '2007-07-26T18:44:26.109000+02:00'); +> 2007; + +create table test(id int primary key); +begin; +insert into test values(1); +rollback; +insert into test values(2); +rollback; +begin; +insert into test values(3); +commit; +insert into test values(4); +rollback; +select group_concat(id order by id) from test; +> 2,3,4; +drop table test; + +create table test(); +insert into test values(); +ALTER TABLE TEST ADD ID INTEGER; +select count(*) from test; +> 1; +drop table test; + +select * from dual where 'a_z' like '%=_%' escape '='; +> 1; + +create table test as select 1 from dual union all select 2 from dual; +drop table test; + +create table test_table(column_a integer); +insert into test_table values(1); +create view test_view AS SELECT * FROM (SELECT DISTINCT * FROM test_table) AS subquery; 
+select * FROM test_view; +> 1; +drop view test_view; +drop table test_table; + +CREATE TABLE TEST(ID INT); +INSERT INTO TEST VALUES(1); +CREATE VIEW TEST_VIEW AS SELECT COUNT(ID) X FROM TEST; +explain SELECT * FROM TEST_VIEW WHERE X>1; +DROP VIEW TEST_VIEW; +DROP TABLE TEST; + +create table test1(id int); +insert into test1 values(1), (1), (2), (3); +select sum(C0) from (select count(*) AS C0 from (select distinct * from test1) as temp); +> 3; +drop table test1; + +create table test(id int primary key check id>1); +drop table test; +create table table1(f1 int not null primary key); +create table table2(f2 int not null references table1(f1) on delete cascade); +drop table table2; +drop table table1; +create table table1(f1 int not null primary key); +create table table2(f2 int not null primary key references table1(f1)); +drop table table1; +drop table table2; + +select case when 1=null then 1 else 2 end; +> 2; + +select case (1) when 1 then 1 else 2 end; +> 1; + +create table test(id int); +insert into test values(1); +select distinct id from test a order by a.id; +> 1; +drop table test; + +create table FOO (ID int, A number(18, 2)); +insert into FOO (ID, A) values (1, 10.0), (2, 20.0); +select SUM (CASE when ID=1 then 0 ELSE A END) col0 from Foo; +> 20.00; +drop table FOO; + +select (SELECT true)+1 GROUP BY 1; +> 2; +create table FOO (ID int, A number(18, 2)); +insert into FOO (ID, A) values (1, 10.0), (2, 20.0); +select SUM (CASE when ID=1 then A ELSE 0 END) col0 from Foo; +> 10.00; +drop table FOO; + +create table A ( ID integer, a1 varchar(20) ); +create table B ( ID integer, AID integer, b1 varchar(20)); +create table C ( ID integer, BId integer, c1 varchar(20)); +insert into A (ID, a1) values (1, 'a1'); +insert into A (ID, a1) values (2, 'a2'); +select count(*) from A left outer join (B inner join C on C.BID=B.ID ) on B.AID=A.ID where A.id=1; +> 1; +select count(*) from A left outer join (B left join C on C.BID=B.ID ) on B.AID=A.ID where A.id=1; +> 1; 
+select count(*) from A left outer join B on B.AID=A.ID inner join C on C.BID=B.ID where A.id=1; +> 0; +select count(*) from (A left outer join B on B.AID=A.ID) inner join C on C.BID=B.ID where A.id=1; +> 0; +drop table a, b, c; + +create schema a; +create table a.test(id int); +insert into a.test values(1); +create schema b; +create table b.test(id int); +insert into b.test values(2); +select a.test.id + b.test.id from a.test, b.test; +> 3; +drop schema a cascade; +drop schema b cascade; + +select date '+0011-01-01'; +> 0011-01-01; +select date'-0010-01-01'; +> -10-01-01; + +create schema TEST_SCHEMA; +create table TEST_SCHEMA.test(id int); +create sequence TEST_SCHEMA.TEST_SEQ; +select TEST_SCHEMA.TEST_SEQ.CURRVAL; +> 0; +select TEST_SCHEMA.TEST_SEQ.nextval; +> 1; +drop schema TEST_SCHEMA cascade; + +create table test(id int); +create trigger TEST_TRIGGER before insert on test call "org.h2.test.db.TestTriggersConstraints"; +comment on trigger TEST_TRIGGER is 'just testing'; +select remarks from information_schema.triggers where trigger_name = 'TEST_TRIGGER'; +> just testing; +@reconnect; +select remarks from information_schema.triggers where trigger_name = 'TEST_TRIGGER'; +> just testing; +drop trigger TEST_TRIGGER; +@reconnect; + +create alias parse_long for "java.lang.Long.parseLong(java.lang.String)"; +comment on alias parse_long is 'Parse a long with base'; +select remarks from information_schema.function_aliases where alias_name = 'PARSE_LONG'; +> Parse a long with base; +@reconnect; +select remarks from information_schema.function_aliases where alias_name = 'PARSE_LONG'; +> Parse a long with base; +drop alias parse_long; +@reconnect; + +create role hr; +comment on role hr is 'Human Resources'; +select remarks from information_schema.roles where name = 'HR'; +> Human Resources; +@reconnect; +select remarks from information_schema.roles where name = 'HR'; +> Human Resources; +create user abc password 'x'; +grant hr to abc; +drop role hr; +@reconnect; +drop 
user abc; + +create domain email as varchar(100) check instr(value, '@') > 0; +comment on domain email is 'must contain @'; +select remarks from information_schema.domains where domain_name = 'EMAIL'; +> must contain @; +@reconnect; +select remarks from information_schema.domains where domain_name = 'EMAIL'; +> must contain @; +drop domain email; +@reconnect; + +create schema tests; +set schema tests; +create sequence walk; +comment on schema tests is 'Test Schema'; +comment on sequence walk is 'Walker'; +select remarks from information_schema.schemata where schema_name = 'TESTS'; +> Test Schema; +select remarks from information_schema.sequences where sequence_name = 'WALK'; +> Walker; +@reconnect; +select remarks from information_schema.schemata where schema_name = 'TESTS'; +> Test Schema; +select remarks from information_schema.sequences where sequence_name = 'WALK'; +> Walker; +drop schema tests cascade; +@reconnect; + +create constant abc value 1; +comment on constant abc is 'One'; +select remarks from information_schema.constants where constant_name = 'ABC'; +> One; +@reconnect; +select remarks from information_schema.constants where constant_name = 'ABC'; +> One; +drop constant abc; +drop table test; +@reconnect; + +create table test(id int); +alter table test add constraint const1 unique(id); +create index IDX_ID on test(id); +comment on constraint const1 is 'unique id'; +comment on index IDX_ID is 'id_index'; +select remarks from information_schema.constraints where constraint_name = 'CONST1'; +> unique id; +select remarks from information_schema.indexes where index_name = 'IDX_ID'; +> id_index; +@reconnect; +select remarks from information_schema.constraints where constraint_name = 'CONST1'; +> unique id; +select remarks from information_schema.indexes where index_name = 'IDX_ID'; +> id_index; +drop table test; +@reconnect; + +create user sales password '1'; +comment on user sales is 'mr. 
money'; +select remarks from information_schema.users where name = 'SALES'; +> mr. money; +@reconnect; +select remarks from information_schema.users where name = 'SALES'; +> mr. money; +alter user sales rename to SALES_USER; +select remarks from information_schema.users where name = 'SALES_USER'; +> mr. money; +@reconnect; +select remarks from information_schema.users where name = 'SALES_USER'; +> mr. money; + +create table test(id int); +create linked table test_link('org.h2.Driver', 'jdbc:h2:mem:', 'sa', 'sa', 'DUAL'); +comment on table test_link is '123'; +select remarks from information_schema.tables where table_name = 'TEST_LINK'; +> 123; +@reconnect; +select remarks from information_schema.tables where table_name = 'TEST_LINK'; +> 123; +comment on table test_link is 'xyz'; +select remarks from information_schema.tables where table_name = 'TEST_LINK'; +> xyz; +alter table test_link rename to test_l; +select remarks from information_schema.tables where table_name = 'TEST_L'; +> xyz; +@reconnect; +select remarks from information_schema.tables where table_name = 'TEST_L'; +> xyz; +drop table test; +@reconnect; + +create table test(id int); +create view test_v as select * from test; +comment on table test_v is 'abc'; +select remarks from information_schema.tables where table_name = 'TEST_V'; +> abc; +@reconnect; +select remarks from information_schema.tables where table_name = 'TEST_V'; +> abc; +alter table test_v rename to TEST_VIEW; +select remarks from information_schema.tables where table_name = 'TEST_VIEW'; +> abc; +@reconnect; +select remarks from information_schema.tables where table_name = 'TEST_VIEW'; +> abc; +drop table test cascade; +@reconnect; + +create table test(a int); +comment on table test is 'hi'; +select remarks from information_schema.tables where table_name = 'TEST'; +> hi; +alter table test add column b int; +select remarks from information_schema.tables where table_name = 'TEST'; +> hi; +alter table test rename to test1; +select remarks 
from information_schema.tables where table_name = 'TEST1'; +> hi; +@reconnect; +select remarks from information_schema.tables where table_name = 'TEST1'; +> hi; +comment on table test1 is 'ho'; +@reconnect; +select remarks from information_schema.tables where table_name = 'TEST1'; +> ho; +drop table test1; + +create table test(a int, b int); +comment on column test.b is 'test'; +select remarks from information_schema.columns where table_name = 'TEST' and column_name = 'B'; +> test; +@reconnect; +select remarks from information_schema.columns where table_name = 'TEST' and column_name = 'B'; +> test; +alter table test drop column b; +@reconnect; +comment on column test.a is 'ho'; +select remarks from information_schema.columns where table_name = 'TEST' and column_name = 'A'; +> ho; +@reconnect; +select remarks from information_schema.columns where table_name = 'TEST' and column_name = 'A'; +> ho; +drop table test; +@reconnect; + +create table test(a int); +comment on column test.a is 'test'; +alter table test rename to test2; +@reconnect; +select remarks from information_schema.columns where table_name = 'TEST2'; +> test; +@reconnect; +select remarks from information_schema.columns where table_name = 'TEST2'; +> test; +drop table test2; +@reconnect; + +create table test1 (a varchar(10)); +create hash index x1 on test1(a); +insert into test1 values ('abcaaaa'),('abcbbbb'),('abccccc'),('abcdddd'); +insert into test1 values ('abcaaaa'),('abcbbbb'),('abccccc'),('abcdddd'); +insert into test1 values ('abcaaaa'),('abcbbbb'),('abccccc'),('abcdddd'); +insert into test1 values ('abcaaaa'),('abcbbbb'),('abccccc'),('abcdddd'); +select count(*) from test1 where a='abcaaaa'; +> 4; +select count(*) from test1 where a='abcbbbb'; +> 4; +@reconnect; +select count(*) from test1 where a='abccccc'; +> 4; +select count(*) from test1 where a='abcdddd'; +> 4; +update test1 set a='abccccc' where a='abcdddd'; +select count(*) from test1 where a='abccccc'; +> 8; +select count(*) from test1 
where a='abcdddd'; +> 0; +delete from test1 where a='abccccc'; +select count(*) from test1 where a='abccccc'; +> 0; +truncate table test1; +insert into test1 values ('abcaaaa'); +insert into test1 values ('abcaaaa'); +delete from test1; +drop table test1; +@reconnect; + +drop table if exists test; +create table if not exists test(col1 int primary key); +insert into test values(1); +insert into test values(2); +insert into test values(3); +select count(*) from test; +> 3; +select max(col1) from test; +> 3; +update test set col1 = col1 + 1 order by col1 asc limit 100; +select count(*) from test; +> 3; +select max(col1) from test; +> 4; +drop table if exists test; + diff --git a/modules/h2/src/test/java/org/h2/test/server/TestAutoServer.java b/modules/h2/src/test/java/org/h2/test/server/TestAutoServer.java new file mode 100644 index 0000000000000..de7e0f465a5d5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/server/TestAutoServer.java @@ -0,0 +1,179 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.server; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; +import org.h2.util.SortedProperties; + +/** + * Tests automatic embedded/server mode. + */ +public class TestAutoServer extends TestBase { + + /** + * The number of iterations. + */ + static final int ITERATIONS = 30; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testUnsupportedCombinations(); + testAutoServer(false); + if (!config.big) { + testAutoServer(true); + } + testLinkedLocalTablesWithAutoServerReconnect(); + } + + private void testUnsupportedCombinations() throws SQLException { + String[] urls = { + "jdbc:h2:" + getTestName() + ";file_lock=no;auto_server=true", + "jdbc:h2:" + getTestName() + ";file_lock=serialized;auto_server=true", + "jdbc:h2:" + getTestName() + ";access_mode_data=r;auto_server=true", + "jdbc:h2:mem:" + getTestName() + ";auto_server=true" + }; + for (String url : urls) { + assertThrows(SQLException.class, this).getConnection(url); + try { + getConnection(url); + fail(url); + } catch (SQLException e) { + assertKnownException(e); + } + } + } + + private void testAutoServer(boolean port) throws Exception { + if (config.memory || config.networked) { + return; + } + deleteDb(getTestName()); + String url = getURL(getTestName() + ";AUTO_SERVER=TRUE", true); + if (port) { + url += ";AUTO_SERVER_PORT=11111"; + } + String user = getUser(), password = getPassword(); + Connection connServer = getConnection(url + ";OPEN_NEW=TRUE", + user, password); + + int i = ITERATIONS; + for (; i > 0; i--) { + Thread.sleep(100); + SortedProperties prop = SortedProperties.loadProperties( + getBaseDir() + "/" + getTestName() + ".lock.db"); + String key = prop.getProperty("id"); + String server = prop.getProperty("server"); + if (server != null) { + String u2 = url.substring(url.indexOf(';')); + u2 = "jdbc:h2:tcp://" + server + "/" + key + u2; + Connection conn = DriverManager.getConnection(u2, user, password); + conn.close(); + int gotPort = Integer.parseInt(server.substring(server.lastIndexOf(':') + 1)); + if (port) { + assertEquals(11111, gotPort); + } + break; + } + } + if (i <= 0) { + fail(); + } + Connection conn = getConnection(url + ";OPEN_NEW=TRUE"); + Statement stat = conn.createStatement(); + 
if (config.big) { + try { + stat.execute("SHUTDOWN"); + } catch (SQLException e) { + assertKnownException(e); + // the connection is closed + } + } + conn.close(); + connServer.close(); + deleteDb("autoServer"); + } + + /** + * Tests recreation of temporary linked tables on reconnect + */ + private void testLinkedLocalTablesWithAutoServerReconnect() + throws SQLException { + if (config.memory || config.networked) { + return; + } + deleteDb(getTestName() + "1"); + deleteDb(getTestName() + "2"); + String url = getURL(getTestName() + "1;AUTO_SERVER=TRUE", true); + String urlLinked = getURL(getTestName() + "2", true); + String user = getUser(), password = getPassword(); + + Connection connLinked = getConnection(urlLinked, user, password); + Statement statLinked = connLinked.createStatement(); + statLinked.execute("CREATE TABLE TEST(ID VARCHAR)"); + + // Server is connection 1 + Connection connAutoServer1 = getConnection( + url + ";OPEN_NEW=TRUE", user, password); + Statement statAutoServer1 = connAutoServer1.createStatement(); + statAutoServer1.execute("CREATE LOCAL TEMPORARY LINKED TABLE T('', '" + + urlLinked + "', '" + user + "', '" + password + "', 'TEST')"); + + // Connection 2 connects + Connection connAutoServer2 = getConnection( + url + ";OPEN_NEW=TRUE", user, password); + Statement statAutoServer2 = connAutoServer2.createStatement(); + statAutoServer2.execute("CREATE LOCAL TEMPORARY LINKED TABLE T('', '" + + urlLinked + "', '" + user + "', '" + password + "', 'TEST')"); + + // Server 1 closes the connection => connection 2 will be the server + // => the "force create local temporary linked..." 
must be reissued + statAutoServer1.execute("shutdown immediately"); + try { + connAutoServer1.close(); + } catch (SQLException e) { + // ignore + } + + // Now test insert + statAutoServer2.execute("INSERT INTO T (ID) VALUES('abc')"); + statAutoServer2.execute("drop table t"); + connAutoServer2.close(); + + // this will also close the linked connection from statAutoServer1 + connLinked.createStatement().execute("shutdown immediately"); + try { + connLinked.close(); + } catch (SQLException e) { + // ignore + } + + deleteDb(getTestName() + "1"); + deleteDb(getTestName() + "2"); + } + + /** + * This method is called via reflection from the database. + * + * @param exitValue the exit value + */ + public static void halt(int exitValue) { + Runtime.getRuntime().halt(exitValue); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/server/TestInit.java b/modules/h2/src/test/java/org/h2/test/server/TestInit.java new file mode 100644 index 0000000000000..60a81928a1134 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/server/TestInit.java @@ -0,0 +1,79 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.server; + +import java.io.OutputStreamWriter; +import java.io.PrintWriter; +import java.io.Writer; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.Statement; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests INIT command within embedded/server mode. + */ +public class TestInit extends TestBase { + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String[] a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + + String init1 = getBaseDir() + "/test-init-1.sql"; + String init2 = getBaseDir() + "/test-init-2.sql"; + + // Create two scripts that we will run via "INIT" + FileUtils.createDirectories(FileUtils.getParent(init1)); + + Writer w = new OutputStreamWriter(FileUtils.newOutputStream(init1, false)); + + PrintWriter writer = new PrintWriter(w); + writer.println("create table test(id int identity, name varchar);"); + writer.println("insert into test(name) values('cat');"); + writer.close(); + + w = new OutputStreamWriter(FileUtils.newOutputStream(init2, false)); + writer = new PrintWriter(w); + writer.println("insert into test(name) values('dog');"); + writer.close(); + + // Make the database connection, and run the two scripts + deleteDb("initDb"); + Connection conn = getConnection("initDb;" + + "INIT=" + + "RUNSCRIPT FROM '" + init1 + "'\\;" + + "RUNSCRIPT FROM '" + init2 + "'"); + + Statement stat = conn.createStatement(); + + // Confirm our scripts have run by loading the data they inserted + ResultSet rs = stat.executeQuery("select name from test order by name"); + + assertTrue(rs.next()); + assertEquals("cat", rs.getString(1)); + + assertTrue(rs.next()); + assertEquals("dog", rs.getString(1)); + + assertFalse(rs.next()); + + conn.close(); + deleteDb("initDb"); + + FileUtils.delete(init1); + FileUtils.delete(init2); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/server/TestNestedLoop.java b/modules/h2/src/test/java/org/h2/test/server/TestNestedLoop.java new file mode 100644 index 0000000000000..5bd6480914f81 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/server/TestNestedLoop.java @@ -0,0 +1,67 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.server; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Tests remote JDBC access with nested loops. + * This is not allowed in some databases. + */ +public class TestNestedLoop extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("nestedLoop"); + Connection conn = getConnection("nestedLoop"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int identity, name varchar)"); + int len = getSize(1010, 10000); + for (int i = 0; i < len; i++) { + stat.execute("insert into test(name) values('Hello World')"); + } + ResultSet rs = stat.executeQuery("select id from test"); + stat.executeQuery("select id from test"); + assertThrows(ErrorCode.OBJECT_CLOSED, rs).next(); + rs = stat.executeQuery("select id from test"); + stat.close(); + assertThrows(ErrorCode.OBJECT_CLOSED, rs).next(); + stat = conn.createStatement(); + rs = stat.executeQuery("select id from test"); + Statement stat2 = conn.createStatement(); + while (rs.next()) { + int id = rs.getInt(1); + ResultSet rs2 = stat2.executeQuery("select * from test where id=" + id); + while (rs2.next()) { + assertEquals(id, rs2.getInt(1)); + assertEquals("Hello World", rs2.getString(2)); + } + rs2 = stat2.executeQuery("select * from test where id=" + id); + while (rs2.next()) { + assertEquals(id, rs2.getInt(1)); + assertEquals("Hello World", rs2.getString(2)); + } + } + conn.close(); + deleteDb("nestedLoop"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/server/TestWeb.java b/modules/h2/src/test/java/org/h2/test/server/TestWeb.java new file mode 100644 index 0000000000000..b3f5147bd83a2 --- 
/dev/null +++ b/modules/h2/src/test/java/org/h2/test/server/TestWeb.java @@ -0,0 +1,1194 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.server; + +import java.io.BufferedReader; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.PrintStream; +import java.io.PrintWriter; +import java.io.UnsupportedEncodingException; +import java.net.ConnectException; +import java.nio.charset.StandardCharsets; +import java.security.Principal; +import java.sql.Connection; +import java.sql.SQLException; +import java.util.Enumeration; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; +import java.util.Vector; + +import javax.servlet.AsyncContext; +import javax.servlet.DispatcherType; +import javax.servlet.RequestDispatcher; +import javax.servlet.ServletConfig; +import javax.servlet.ServletInputStream; +import javax.servlet.ServletOutputStream; +import javax.servlet.ServletRequest; +import javax.servlet.ServletResponse; +import javax.servlet.WriteListener; +import javax.servlet.http.Cookie; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; +import javax.servlet.http.HttpSession; +import javax.servlet.http.HttpUpgradeHandler; +import javax.servlet.http.Part; +import javax.servlet.ServletContext; +import javax.servlet.ServletException; + +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.server.web.WebServlet; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.AssertThrows; +import org.h2.tools.Server; +import org.h2.util.StringUtils; +import org.h2.util.Task; + +/** + * Tests the H2 Console application. + */ +public class TestWeb extends TestBase { + + private static volatile String lastUrl; + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testServlet(); + testWrongParameters(); + testTools(); + testAlreadyRunning(); + testStartWebServerWithConnection(); + testServer(); + testWebApp(); + testIfExists(); + } + + private void testServlet() throws Exception { + WebServlet servlet = new WebServlet(); + final HashMap configMap = new HashMap<>(); + configMap.put("ifExists", ""); + configMap.put("", ""); + configMap.put("", ""); + configMap.put("", ""); + ServletConfig config = new ServletConfig() { + + @Override + public String getServletName() { + return "H2Console"; + } + + @Override + public Enumeration getInitParameterNames() { + return new Vector<>(configMap.keySet()).elements(); + } + + @Override + public String getInitParameter(String name) { + return configMap.get(name); + } + + @Override + public ServletContext getServletContext() { + return null; + } + + }; + servlet.init(config); + + + TestHttpServletRequest request = new TestHttpServletRequest(); + request.setPathInfo("/"); + TestHttpServletResponse response = new TestHttpServletResponse(); + TestServletOutputStream out = new TestServletOutputStream(); + response.setServletOutputStream(out); + servlet.doGet(request, response); + assertContains(out.toString(), "location.href = 'login.jsp"); + servlet.destroy(); + } + + private static void testWrongParameters() { + new AssertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1) { + @Override + public void test() throws SQLException { + Server.createPgServer("-pgPort 8182"); + }}; + new AssertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1) { + @Override + public void test() throws SQLException { + Server.createTcpServer("-tcpPort 8182"); + }}; + new AssertThrows(ErrorCode.FEATURE_NOT_SUPPORTED_1) { + @Override + public void test() throws SQLException { + Server.createWebServer("-webPort=8182"); + }}; + } + + private void 
testAlreadyRunning() throws Exception { + Server server = Server.createWebServer( + "-webPort", "8182", "-properties", "null"); + server.start(); + assertContains(server.getStatus(), "server running"); + Server server2 = Server.createWebServer( + "-webPort", "8182", "-properties", "null"); + assertEquals("Not started", server2.getStatus()); + try { + server2.start(); + fail(); + } catch (Exception e) { + assertContains(e.toString(), "port may be in use"); + assertContains(server2.getStatus(), + "could not be started"); + } + server.stop(); + } + + private void testTools() throws Exception { + if (config.memory || config.cipher != null) { + return; + } + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + conn.createStatement().execute( + "create table test(id int) as select 1"); + conn.close(); + Server server = new Server(); + server.setOut(new PrintStream(new ByteArrayOutputStream())); + server.runTool("-web", "-webPort", "8182", + "-properties", "null", "-tcp", "-tcpPort", "9101"); + try { + String url = "http://localhost:8182"; + WebClient client; + String result; + client = new WebClient(); + result = client.get(url); + client.readSessionId(result); + result = client.get(url, "tools.jsp"); + FileUtils.delete(getBaseDir() + "/backup.zip"); + result = client.get(url, "tools.do?tool=Backup&args=-dir," + + getBaseDir() + ",-db," + getTestName() + ",-file," + + getBaseDir() + "/backup.zip"); + deleteDb(getTestName()); + assertTrue(FileUtils.exists(getBaseDir() + "/backup.zip")); + result = client.get(url, + "tools.do?tool=DeleteDbFiles&args=-dir," + + getBaseDir() + ",-db," + getTestName()); + String fn = getBaseDir() + "/" + getTestName(); + if (config.mvStore) { + fn += Constants.SUFFIX_MV_FILE; + } else { + fn += Constants.SUFFIX_PAGE_FILE; + } + assertFalse(FileUtils.exists(fn)); + result = client.get(url, "tools.do?tool=Restore&args=-dir," + + getBaseDir() + ",-db," + getTestName() +",-file," + getBaseDir() + + "/backup.zip"); + 
assertTrue(FileUtils.exists(fn)); + FileUtils.delete(getBaseDir() + "/web.h2.sql"); + FileUtils.delete(getBaseDir() + "/backup.zip"); + result = client.get(url, "tools.do?tool=Recover&args=-dir," + + getBaseDir() + ",-db," + getTestName()); + assertTrue(FileUtils.exists(getBaseDir() + "/" + getTestName() + ".h2.sql")); + FileUtils.delete(getBaseDir() + "/web.h2.sql"); + result = client.get(url, "tools.do?tool=RunScript&args=-script," + + getBaseDir() + "/" + getTestName() + ".h2.sql,-url," + + getURL(getTestName(), true) + + ",-user," + getUser() + ",-password," + getPassword()); + FileUtils.delete(getBaseDir() + "/" + getTestName() + ".h2.sql"); + assertTrue(FileUtils.exists(fn)); + deleteDb(getTestName()); + } finally { + server.shutdown(); + } + } + + private void testServer() throws Exception { + Server server = new Server(); + server.setOut(new PrintStream(new ByteArrayOutputStream())); + server.runTool("-web", "-webPort", "8182", "-properties", + "null", "-tcp", "-tcpPort", "9101"); + try { + String url = "http://localhost:8182"; + WebClient client; + String result; + client = new WebClient(); + client.setAcceptLanguage("de-de,de;q=0.5"); + result = client.get(url); + client.readSessionId(result); + result = client.get(url, "login.jsp"); + assertEquals("text/html", client.getContentType()); + assertContains(result, "Einstellung"); + client.get(url, "favicon.ico"); + assertEquals("image/x-icon", client.getContentType()); + client.get(url, "ico_ok.gif"); + assertEquals("image/gif", client.getContentType()); + client.get(url, "tree.js"); + assertEquals("text/javascript", client.getContentType()); + client.get(url, "stylesheet.css"); + assertEquals("text/css", client.getContentType()); + client.get(url, "admin.do"); + try { + client.get(url, "adminShutdown.do"); + } catch (IOException e) { + // expected + Thread.sleep(1000); + } + } finally { + server.shutdown(); + } + // it should be stopped now + server = Server.createTcpServer("-tcpPort", "9101"); + 
server.start(); + server.stop(); + } + + private void testIfExists() throws Exception { + Connection conn = getConnection("jdbc:h2:mem:" + getTestName(), + getUser(), getPassword()); + Server server = new Server(); + server.setOut(new PrintStream(new ByteArrayOutputStream())); + server.runTool("-ifExists", "-web", "-webPort", "8182", + "-properties", "null", "-tcp", "-tcpPort", "9101"); + try { + String url = "http://localhost:8182"; + WebClient client; + String result; + client = new WebClient(); + result = client.get(url); + client.readSessionId(result); + result = client.get(url, "login.jsp"); + result = client.get(url, "test.do?driver=org.h2.Driver" + + "&url=jdbc:h2:mem:" + getTestName() + + "&user=" + getUser() + "&password=" + + getPassword() + "&name=_test_"); + assertTrue(result.indexOf("Exception") < 0); + result = client.get(url, "test.do?driver=org.h2.Driver" + + "&url=jdbc:h2:mem:" + getTestName() + "Wrong" + + "&user=" + getUser() + "&password=" + + getPassword() + "&name=_test_"); + assertContains(result, "Exception"); + } finally { + server.shutdown(); + conn.close(); + } + } + + private void testWebApp() throws Exception { + Server server = new Server(); + server.setOut(new PrintStream(new ByteArrayOutputStream())); + server.runTool("-web", "-webPort", "8182", + "-properties", "null", "-tcp", "-tcpPort", "9101"); + try { + String url = "http://localhost:8182"; + WebClient client; + String result; + client = new WebClient(); + result = client.get(url); + client.readSessionId(result); + client.get(url, "login.jsp"); + client.get(url, "adminSave.do"); + result = client.get(url, "index.do?language=de"); + result = client.get(url, "login.jsp"); + assertContains(result, "Einstellung"); + result = client.get(url, "index.do?language=en"); + result = client.get(url, "login.jsp"); + assertTrue(result.indexOf("Einstellung") < 0); + result = client.get(url, "test.do?driver=abc" + + "&url=jdbc:abc:mem: " + getTestName() + + "&user=sa&password=sa&name=_test_"); 
+ assertContains(result, "Exception"); + result = client.get(url, "test.do?driver=org.h2.Driver" + + "&url=jdbc:h2:mem:" + getTestName() + + "&user=sa&password=sa&name=_test_"); + assertTrue(result.indexOf("Exception") < 0); + result = client.get(url, "login.do?driver=org.h2.Driver" + + "&url=jdbc:h2:mem:" + getTestName() + + "&user=sa&password=sa&name=_test_"); + result = client.get(url, "header.jsp"); + result = client.get(url, "query.do?sql=" + + "create table test(id int primary key, name varchar);" + + "insert into test values(1, 'Hello')"); + result = client.get(url, "query.do?sql=create sequence test_sequence"); + result = client.get(url, "query.do?sql=create schema test_schema"); + result = client.get(url, "query.do?sql=" + + "create view test_view as select * from test"); + result = client.get(url, "tables.do"); + result = client.get(url, "query.jsp"); + result = client.get(url, "query.do?sql=select * from test"); + assertContains(result, "Hello"); + result = client.get(url, "query.do?sql=select * from test"); + result = client.get(url, "query.do?sql=@META select * from test"); + assertContains(result, "typeName"); + result = client.get(url, "query.do?sql=delete from test"); + result = client.get(url, "query.do?sql=@LOOP 1000 " + + "insert into test values(?, 'Hello ' || ?/*RND*/)"); + assertContains(result, "1000 * (Prepared)"); + result = client.get(url, "query.do?sql=select * from test"); + result = client.get(url, "query.do?sql=@list select * from test"); + assertContains(result, "Row #"); + result = client.get(url, "query.do?sql=@parameter_meta " + + "select * from test where id = ?"); + assertContains(result, "INTEGER"); + result = client.get(url, "query.do?sql=@edit select * from test"); + assertContains(result, "editResult.do"); + result = client.get(url, "query.do?sql=" + + StringUtils.urlEncode("select space(100001) a, 1 b")); + assertContains(result, "..."); + result = client.get(url, "query.do?sql=" + + StringUtils.urlEncode("call '<&>'")); + 
assertContains(result, "<&>"); + result = client.get(url, "query.do?sql=@HISTORY"); + result = client.get(url, "getHistory.do?id=4"); + assertContains(result, "select * from test"); + result = client.get(url, "query.do?sql=delete from test"); + // op 1 (row -1: insert, otherwise update): ok, + // 2: delete 3: cancel, + result = client.get(url, "editResult.do?sql=@edit " + + "select * from test&op=1&row=-1&r-1c1=1&r-1c2=Hello"); + assertContains(result, "1"); + assertContains(result, "Hello"); + result = client.get(url, "editResult.do?sql=@edit " + + "select * from test&op=1&row=1&r1c1=1&r1c2=Hallo"); + assertContains(result, "1"); + assertContains(result, "Hallo"); + result = client.get(url, "query.do?sql=select * from test"); + assertContains(result, "1"); + assertContains(result, "Hallo"); + result = client.get(url, "editResult.do?sql=@edit " + + "select * from test&op=2&row=1"); + result = client.get(url, "query.do?sql=select * from test"); + assertContains(result, "no rows"); + + // autoComplete + result = client.get(url, "autoCompleteList.do?query=select 'abc"); + assertContains(StringUtils.urlDecode(result), "'"); + result = client.get(url, "autoCompleteList.do?query=select 'abc''"); + assertContains(StringUtils.urlDecode(result), "'"); + result = client.get(url, "autoCompleteList.do?query=select 'abc' "); + assertContains(StringUtils.urlDecode(result), "||"); + result = client.get(url, "autoCompleteList.do?query=select 'abc' |"); + assertContains(StringUtils.urlDecode(result), "|"); + result = client.get(url, "autoCompleteList.do?query=select 'abc' || "); + assertContains(StringUtils.urlDecode(result), "'"); + result = client.get(url, "autoCompleteList.do?query=call timestamp '2"); + assertContains(result, "20"); + result = client.get(url, "autoCompleteList.do?query=call time '1"); + assertContains(StringUtils.urlDecode(result), "12:00:00"); + result = client.get(url, "autoCompleteList.do?query=" + + "call timestamp '2001-01-01 12:00:00."); + 
assertContains(result, "nanoseconds"); + result = client.get(url, "autoCompleteList.do?query=" + + "call timestamp '2001-01-01 12:00:00.00"); + assertContains(result, "nanoseconds"); + result = client.get(url, "autoCompleteList.do?query=" + + "call $$ hello world"); + assertContains(StringUtils.urlDecode(result), "$$"); + result = client.get(url, "autoCompleteList.do?query=alter index "); + assertContains(StringUtils.urlDecode(result), "character"); + result = client.get(url, "autoCompleteList.do?query=alter index idx"); + assertContains(StringUtils.urlDecode(result), "character"); + result = client.get(url, "autoCompleteList.do?query=alter index \"IDX_"); + assertContains(StringUtils.urlDecode(result), "\""); + result = client.get(url, "autoCompleteList.do?query=alter index \"IDX_\"\""); + assertContains(StringUtils.urlDecode(result), "\""); + result = client.get(url, "autoCompleteList.do?query=help "); + assertContains(result, "anything"); + result = client.get(url, "autoCompleteList.do?query=help select"); + assertContains(result, "anything"); + result = client.get(url, "autoCompleteList.do?query=call "); + assertContains(result, "0x"); + result = client.get(url, "autoCompleteList.do?query=call 0"); + assertContains(result, "."); + result = client.get(url, "autoCompleteList.do?query=se"); + assertContains(result, "select"); + assertContains(result, "set"); + result = client.get(url, "tables.do"); + assertContains(result, "TEST"); + result = client.get(url, "autoCompleteList.do?query=" + + "select * from "); + assertContains(result, "test"); + result = client.get(url, "autoCompleteList.do?query=" + + "select * from test t where t."); + assertContains(result, "id"); + result = client.get(url, "autoCompleteList.do?query=" + + "select id x from test te where t"); + assertContains(result, "te"); + result = client.get(url, "autoCompleteList.do?query=" + + "select * from test where name = '"); + assertContains(StringUtils.urlDecode(result), "'"); + result = 
client.get(url, "autoCompleteList.do?query=" + + "select * from information_schema.columns where columns."); + assertContains(result, "column_name"); + + result = client.get(url, "query.do?sql=delete from test"); + + // special commands + result = client.get(url, "query.do?sql=@autocommit_true"); + assertContains(result, "Auto commit is now ON"); + result = client.get(url, "query.do?sql=@autocommit_false"); + assertContains(result, "Auto commit is now OFF"); + result = client.get(url, "query.do?sql=@cancel"); + assertContains(result, "There is currently no running statement"); + result = client.get(url, + "query.do?sql=@generated insert into test(id) values(test_sequence.nextval)"); + assertContains(result, "ID1"); + result = client.get(url, "query.do?sql=@maxrows 2000"); + assertContains(result, "Max rowcount is set"); + result = client.get(url, "query.do?sql=@password_hash user password"); + assertContains(result, + "501cf5c163c184c26e62e76d25d441979f8f25dfd7a683484995b4a43a112fdf"); + result = client.get(url, "query.do?sql=@sleep 1"); + assertContains(result, "Ok"); + result = client.get(url, "query.do?sql=@catalogs"); + assertContains(result, "PUBLIC"); + result = client.get(url, + "query.do?sql=@column_privileges null null null TEST null"); + assertContains(result, "PRIVILEGE"); + result = client.get(url, + "query.do?sql=@cross_references null null null TEST"); + assertContains(result, "PKTABLE_NAME"); + result = client.get(url, + "query.do?sql=@exported_keys null null null TEST"); + assertContains(result, "PKTABLE_NAME"); + result = client.get(url, + "query.do?sql=@imported_keys null null null TEST"); + assertContains(result, "PKTABLE_NAME"); + result = client.get(url, + "query.do?sql=@primary_keys null null null TEST"); + assertContains(result, "PK_NAME"); + result = client.get(url, "query.do?sql=@procedures null null null"); + assertContains(result, "PROCEDURE_NAME"); + result = client.get(url, "query.do?sql=@procedure_columns"); + assertContains(result, 
"PROCEDURE_NAME"); + result = client.get(url, "query.do?sql=@schemas"); + assertContains(result, "PUBLIC"); + result = client.get(url, "query.do?sql=@table_privileges"); + assertContains(result, "PRIVILEGE"); + result = client.get(url, "query.do?sql=@table_types"); + assertContains(result, "SYSTEM TABLE"); + result = client.get(url, "query.do?sql=@type_info"); + assertContains(result, "CLOB"); + result = client.get(url, "query.do?sql=@version_columns"); + assertContains(result, "PSEUDO_COLUMN"); + result = client.get(url, "query.do?sql=@attributes"); + assertContains(result, "Feature not supported: \"attributes\""); + result = client.get(url, "query.do?sql=@super_tables"); + assertContains(result, "SUPERTABLE_NAME"); + result = client.get(url, "query.do?sql=@super_types"); + assertContains(result, "Feature not supported: \"superTypes\""); + result = client.get(url, "query.do?sql=@prof_start"); + assertContains(result, "Ok"); + result = client.get(url, "query.do?sql=@prof_stop"); + assertContains(result, "Top Stack Trace(s)"); + result = client.get(url, + "query.do?sql=@best_row_identifier null null TEST"); + assertContains(result, "SCOPE"); + assertContains(result, "COLUMN_NAME"); + assertContains(result, "ID"); + result = client.get(url, "query.do?sql=@udts"); + assertContains(result, "CLASS_NAME"); + result = client.get(url, "query.do?sql=@udts null null null 1,2,3"); + assertContains(result, "CLASS_NAME"); + result = client.get(url, "query.do?sql=@LOOP 10 " + + "@STATEMENT insert into test values(?, 'Hello')"); + result = client.get(url, "query.do?sql=select * from test"); + assertContains(result, "8"); + result = client.get(url, "query.do?sql=@EDIT select * from test"); + assertContains(result, "editRow"); + + result = client.get(url, "query.do?sql=@AUTOCOMMIT TRUE"); + result = client.get(url, "query.do?sql=@AUTOCOMMIT FALSE"); + result = client.get(url, "query.do?sql=@TRANSACTION_ISOLATION"); + result = client.get(url, "query.do?sql=@SET MAXROWS 1"); + result =
client.get(url, "query.do?sql=select * from test order by id"); + result = client.get(url, "query.do?sql=@SET MAXROWS 1000"); + result = client.get(url, "query.do?sql=@TABLES"); + assertContains(result, "TEST"); + result = client.get(url, "query.do?sql=@COLUMNS null null TEST"); + assertContains(result, "ID"); + result = client.get(url, "query.do?sql=@INDEX_INFO null null TEST"); + assertContains(result, "PRIMARY"); + result = client.get(url, "query.do?sql=@CATALOG"); + assertContains(result, "PUBLIC"); + result = client.get(url, "query.do?sql=@MEMORY"); + assertContains(result, "Used"); + + result = client.get(url, "query.do?sql=@INFO"); + assertContains(result, "getCatalog"); + + result = client.get(url, "logout.do"); + result = client.get(url, "login.do?driver=org.h2.Driver&" + + "url=jdbc:h2:mem:" + getTestName() + + "&user=sa&password=sa&name=_test_"); + + result = client.get(url, "logout.do"); + result = client.get(url, "settingRemove.do?name=_test_"); + + client.get(url, "admin.do"); + } finally { + server.shutdown(); + } + } + + private void testStartWebServerWithConnection() throws Exception { + String old = System.getProperty(SysProperties.H2_BROWSER); + try { + System.setProperty(SysProperties.H2_BROWSER, + "call:" + TestWeb.class.getName() + ".openBrowser"); + Server.openBrowser("testUrl"); + assertEquals("testUrl", lastUrl); + String oldUrl = lastUrl; + final Connection conn = getConnection(getTestName()); + Task t = new Task() { + @Override + public void call() throws Exception { + Server.startWebServer(conn, true); + } + }; + t.execute(); + for (int i = 0; lastUrl == oldUrl; i++) { + if (i > 100) { + throw new Exception("Browser not started"); + } + Thread.sleep(100); + } + String url = lastUrl; + WebClient client = new WebClient(); + client.readSessionId(url); + url = client.getBaseUrl(url); + try { + client.get(url, "logout.do"); + } catch (ConnectException e) { + // the server stops on logout + } + t.get(); + conn.close(); + } finally { + if (old 
!= null) { + System.setProperty(SysProperties.H2_BROWSER, old); + } else { + System.clearProperty(SysProperties.H2_BROWSER); + } + } + } + + /** + * This method is called via reflection. + * + * @param url the browser url + */ + public static void openBrowser(String url) { + lastUrl = url; + } + + /** + * A HTTP servlet request for testing. + */ + static class TestHttpServletRequest implements HttpServletRequest { + + private String pathInfo; + + void setPathInfo(String pathInfo) { + this.pathInfo = pathInfo; + } + + @Override + public Object getAttribute(String name) { + return null; + } + + @Override + public Enumeration getAttributeNames() { + return new Vector().elements(); + } + + @Override + public String getCharacterEncoding() { + return null; + } + + @Override + public int getContentLength() { + return 0; + } + + @Override + public String getContentType() { + return null; + } + + @Override + public ServletInputStream getInputStream() throws IOException { + return null; + } + + @Override + public String getLocalAddr() { + return null; + } + + @Override + public String getLocalName() { + return null; + } + + @Override + public int getLocalPort() { + return 0; + } + + @Override + public Locale getLocale() { + return null; + } + + @Override + public Enumeration getLocales() { + return null; + } + + @Override + public String getParameter(String name) { + return null; + } + + @Override + public Map getParameterMap() { + return null; + } + + @Override + public Enumeration getParameterNames() { + return new Vector().elements(); + } + + @Override + public String[] getParameterValues(String name) { + return null; + } + + @Override + public String getProtocol() { + return null; + } + + @Override + public BufferedReader getReader() throws IOException { + return null; + } + + @Override + @Deprecated + public String getRealPath(String path) { + return null; + } + + @Override + public String getRemoteAddr() { + return null; + } + + @Override + public String 
getRemoteHost() { + return null; + } + + @Override + public int getRemotePort() { + return 0; + } + + @Override + public RequestDispatcher getRequestDispatcher(String name) { + return null; + } + + @Override + public String getScheme() { + return null; + } + + @Override + public String getServerName() { + return null; + } + + @Override + public int getServerPort() { + return 0; + } + + @Override + public boolean isSecure() { + return false; + } + + @Override + public void removeAttribute(String name) { + // ignore + } + + @Override + public void setAttribute(String name, Object value) { + // ignore + } + + @Override + public void setCharacterEncoding(String encoding) + throws UnsupportedEncodingException { + // ignore + } + + @Override + public String getAuthType() { + return null; + } + + @Override + public String getContextPath() { + return null; + } + + @Override + public Cookie[] getCookies() { + return null; + } + + @Override + public long getDateHeader(String x) { + return 0; + } + + @Override + public String getHeader(String name) { + return null; + } + + @Override + public Enumeration getHeaderNames() { + return null; + } + + @Override + public Enumeration getHeaders(String name) { + return null; + } + + @Override + public int getIntHeader(String name) { + return 0; + } + + @Override + public String getMethod() { + return null; + } + + @Override + public String getPathInfo() { + return pathInfo; + } + + @Override + public String getPathTranslated() { + return null; + } + + @Override + public String getQueryString() { + return null; + } + + @Override + public String getRemoteUser() { + return null; + } + + @Override + public String getRequestURI() { + return null; + } + + @Override + public StringBuffer getRequestURL() { + return null; + } + + @Override + public String getRequestedSessionId() { + return null; + } + + @Override + public String getServletPath() { + return null; + } + + @Override + public HttpSession getSession() { + return null; + } + + 
@Override + public HttpSession getSession(boolean x) { + return null; + } + + @Override + public Principal getUserPrincipal() { + return null; + } + + @Override + public boolean isRequestedSessionIdFromCookie() { + return false; + } + + @Override + public boolean isRequestedSessionIdFromURL() { + return false; + } + + @Override + @Deprecated + public boolean isRequestedSessionIdFromUrl() { + return false; + } + + @Override + public boolean isRequestedSessionIdValid() { + return false; + } + + @Override + public boolean isUserInRole(String x) { + return false; + } + + @Override + public java.util.Collection<Part> getParts() { + return null; + } + + @Override + public Part getPart(String name) { + return null; + } + + @Override + public boolean authenticate(HttpServletResponse response) { + return false; + } + + @Override + public void login(String username, String password) { + // ignore + } + + @Override + public void logout() { + // ignore + } + + @Override + public ServletContext getServletContext() { + return null; + } + + @Override + public AsyncContext startAsync() { + return null; + } + + @Override + public AsyncContext startAsync( + ServletRequest servletRequest, + ServletResponse servletResponse) { + return null; + } + + @Override + public boolean isAsyncStarted() { + return false; + } + + @Override + public boolean isAsyncSupported() { + return false; + } + + @Override + public AsyncContext getAsyncContext() { + return null; + } + + @Override + public DispatcherType getDispatcherType() { + return null; + } + + @Override + public long getContentLengthLong() { + return 0; + } + + @Override + public String changeSessionId() { + return null; + } + + @Override + public <T extends HttpUpgradeHandler> T upgrade(Class<T> handlerClass) + throws IOException, ServletException { + return null; + } + + } + + /** + * A HTTP servlet response for testing.
+ */ + static class TestHttpServletResponse implements HttpServletResponse { + + ServletOutputStream servletOutputStream; + + void setServletOutputStream(ServletOutputStream servletOutputStream) { + this.servletOutputStream = servletOutputStream; + } + + @Override + public void flushBuffer() throws IOException { + // ignore + } + + @Override + public int getBufferSize() { + return 0; + } + + @Override + public String getCharacterEncoding() { + return null; + } + + @Override + public String getContentType() { + return null; + } + + @Override + public Locale getLocale() { + return null; + } + + @Override + public ServletOutputStream getOutputStream() throws IOException { + return servletOutputStream; + } + + @Override + public PrintWriter getWriter() throws IOException { + return null; + } + + @Override + public boolean isCommitted() { + return false; + } + + @Override + public void reset() { + // ignore + } + + @Override + public void resetBuffer() { + // ignore + } + + @Override + public void setBufferSize(int arg0) { + // ignore + } + + @Override + public void setCharacterEncoding(String arg0) { + // ignore + } + + @Override + public void setContentLength(int arg0) { + // ignore + } + + @Override + public void setContentLengthLong(long arg0) { + // ignore + } + + @Override + public void setContentType(String arg0) { + // ignore + } + + @Override + public void setLocale(Locale arg0) { + // ignore + } + + @Override + public void addCookie(Cookie arg0) { + // ignore + } + + @Override + public void addDateHeader(String arg0, long arg1) { + // ignore + } + + @Override + public void addHeader(String arg0, String arg1) { + // ignore + } + + @Override + public void addIntHeader(String arg0, int arg1) { + // ignore + } + + @Override + public boolean containsHeader(String arg0) { + return false; + } + + @Override + public String encodeRedirectURL(String arg0) { + return null; + } + + @Override + @Deprecated + public String encodeRedirectUrl(String arg0) { + return null; + } 
+ + @Override + public String encodeURL(String arg0) { + return null; + } + + @Override + @Deprecated + public String encodeUrl(String arg0) { + return null; + } + + @Override + public void sendError(int arg0) throws IOException { + // ignore + } + + @Override + public void sendError(int arg0, String arg1) throws IOException { + // ignore + } + + @Override + public void sendRedirect(String arg0) throws IOException { + // ignore + } + + @Override + public void setDateHeader(String arg0, long arg1) { + // ignore + } + + @Override + public void setHeader(String arg0, String arg1) { + // ignore + } + + @Override + public void setIntHeader(String arg0, int arg1) { + // ignore + } + + @Override + public void setStatus(int arg0) { + // ignore + } + + @Override + @Deprecated + public void setStatus(int arg0, String arg1) { + // ignore + } + + @Override + public int getStatus() { + return 0; + } + + @Override + public String getHeader(String name) { + return null; + } + + @Override + public java.util.Collection<String> getHeaders(String name) { + return null; + } + + @Override + public java.util.Collection<String> getHeaderNames() { + return null; + } + + } + + /** + * A servlet output stream for testing.
+ */ + static class TestServletOutputStream extends ServletOutputStream { + + private final ByteArrayOutputStream buff = new ByteArrayOutputStream(); + + @Override + public void write(int b) throws IOException { + buff.write(b); + } + + @Override + public String toString() { + return new String(buff.toByteArray(), StandardCharsets.UTF_8); + } + + @Override + public boolean isReady() { + return true; + } + + @Override + public void setWriteListener(WriteListener writeListener) { + // ignore + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/server/WebClient.java b/modules/h2/src/test/java/org/h2/test/server/WebClient.java new file mode 100644 index 0000000000000..ba19f5c3807e3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/server/WebClient.java @@ -0,0 +1,153 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.server; + +import java.io.DataOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.net.HttpURLConnection; +import java.net.URL; +import java.util.UUID; +import org.h2.util.IOUtils; + +/** + * A simple web browser simulator. + */ +public class WebClient { + + private String sessionId; + private String acceptLanguage; + private String contentType; + + /** + * Open an URL and get the HTML data. 
+ * + * @param url the HTTP URL + * @return the HTML as a string + */ + String get(String url) throws IOException { + HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection(); + conn.setRequestMethod("GET"); + conn.setInstanceFollowRedirects(true); + if (acceptLanguage != null) { + conn.setRequestProperty("accept-language", acceptLanguage); + } + conn.connect(); + int code = conn.getResponseCode(); + contentType = conn.getContentType(); + if (code != HttpURLConnection.HTTP_OK) { + throw new IOException("Result code: " + code); + } + InputStream in = conn.getInputStream(); + String result = IOUtils.readStringAndClose(new InputStreamReader(in), -1); + conn.disconnect(); + return result; + } + + /** + * Upload a file. + * + * @param url the target URL + * @param fileName the file name to post + * @param in the input stream + * @return the result + */ + String upload(String url, String fileName, InputStream in) throws IOException { + HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection(); + conn.setDoOutput(true); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setRequestMethod("POST"); + conn.setRequestProperty("Connection", "Keep-Alive"); + String boundary = UUID.randomUUID().toString(); + conn.setRequestProperty("Content-Type", + "multipart/form-data;boundary="+boundary); + conn.connect(); + DataOutputStream out = new DataOutputStream(conn.getOutputStream()); + out.writeBytes("--" + boundary + "--\r\n"); + out.writeBytes("Content-Disposition: form-data; name=\"upload\";" + + " filename=\"" + fileName +"\"\r\n\r\n"); + IOUtils.copyAndCloseInput(in, out); + out.writeBytes("\r\n--" + boundary + "--\r\n"); + out.close(); + int code = conn.getResponseCode(); + if (code != HttpURLConnection.HTTP_OK) { + throw new IOException("Result code: " + code); + } + in = conn.getInputStream(); + String result = IOUtils.readStringAndClose(new InputStreamReader(in), -1); + conn.disconnect(); + return result; + } + + void 
setAcceptLanguage(String acceptLanguage) { + this.acceptLanguage = acceptLanguage; + } + + String getContentType() { + return contentType; + } + + /** + * Read the session ID from a URL. + * + * @param url the URL + * @return the session id + */ + String readSessionId(String url) { + int idx = url.indexOf("jsessionid="); + String id = url.substring(idx + "jsessionid=".length()); + for (int i = 0; i < id.length(); i++) { + char ch = id.charAt(i); + if (!Character.isLetterOrDigit(ch)) { + id = id.substring(0, i); + break; + } + } + this.sessionId = id; + return id; + } + + /** + * Read the specified HTML page. + * + * @param url the base URL + * @param page the page to read + * @return the HTML page + */ + String get(String url, String page) throws IOException { + if (sessionId != null) { + if (page.indexOf('?') < 0) { + page += "?"; + } else { + page += "&"; + } + page += "jsessionid=" + sessionId; + } + if (!url.endsWith("/")) { + url += "/"; + } + url += page; + return get(url); + } + + /** + * Get the base URL (the host name and port). + * + * @param url the complete URL + * @return the host name and port + */ + String getBaseUrl(String url) { + int idx = url.indexOf("//"); + idx = url.indexOf('/', idx + 2); + if (idx >= 0) { + return url.substring(0, idx); + } + return url; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/server/package.html b/modules/h2/src/test/java/org/h2/test/server/package.html new file mode 100644 index 0000000000000..aae4552977856 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/server/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

+
+This package contains server tests.
+
+

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/store/CalculateHashConstant.java b/modules/h2/src/test/java/org/h2/test/store/CalculateHashConstant.java new file mode 100644 index 0000000000000..6948f0b2f2f98 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/CalculateHashConstant.java @@ -0,0 +1,554 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.io.File; +import java.io.FileOutputStream; +import java.math.BigInteger; +import java.util.Arrays; +import java.util.BitSet; +import java.util.Collections; +import java.util.HashSet; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.security.AES; + +/** + * Calculate the constant for the secondary / supplemental hash function, so + * that the hash function mixes the input bits as much as possible. + */ +public class CalculateHashConstant implements Runnable { + + private static BitSet primeNumbers = new BitSet(); + private static int[] randomValues; + private static AtomicInteger high = new AtomicInteger(0x20); + private static Set candidates = + Collections.synchronizedSet(new HashSet()); + + private int constant; + private int[] fromTo = new int[32 * 32]; + + private final AES aes = new AES(); + { + aes.setKey("Hello Welt Hallo Welt".getBytes()); + } + private final byte[] data = new byte[16]; + + /** + * Run just this test. + * + * @param args ignored + */ + public static void main(String... 
args) throws Exception { + for (int i = 0x0; i < 0x10000; i++) { + if (BigInteger.valueOf(i).isProbablePrime(20)) { + primeNumbers.set(i); + } + } + randomValues = getRandomValues(1000, 1); + Random r = new Random(1); + for (int i = 0; i < randomValues.length; i++) { + randomValues[i] = r.nextInt(); + } + Thread[] threads = new Thread[8]; + for (int i = 0; i < 8; i++) { + threads[i] = new Thread(new CalculateHashConstant()); + threads[i].start(); + } + for (int i = 0; i < 8; i++) { + threads[i].join(); + } + int finalCount = 10000; + int[] randomValues = getRandomValues(finalCount, 10); + + System.out.println(); + System.out.println("AES:"); + CalculateHashConstant test = new CalculateHashConstant(); + int[] minMax; + int av = 0; + test = new CalculateHashConstant() { + @Override + public int hash(int x) { + return secureHash(x); + } + }; + minMax = test.getDependencies(test, randomValues); + System.out.println("Dependencies: " + minMax[0] + ".." + minMax[1]); + av = 0; + for (int j = 0; j < 100; j++) { + av += test.getAvalanche(test, randomValues[j]); + } + System.out.println("AvalancheSum: " + av); + minMax = test.getEffect(test, finalCount * 10, 11); + System.out.println("Effect: " + minMax[0] + ".." 
+ minMax[1]); + + test = new CalculateHashConstant(); + int best = 0; + int dist = Integer.MAX_VALUE; + for (int i : new int[] { + 0x10b5383, 0x10b65f3, 0x1170d8b, 0x118e97b, + 0x1190d8b, 0x11ab37d, 0x11c65f3, 0x1228357, 0x122a837, + 0x122a907, 0x12b24f7, 0x12c4d05, 0x131a907, 0x131afa3, + 0x131b683, 0x132927d, 0x13298fb, 0x134a837, 0x13698fb, + 0x136afa3, 0x138da0b, 0x138f563, 0x13957c5, 0x1470b1b, + 0x148e97b, 0x14b5827, 0x150a837, 0x151c97d, 0x151ed3d, + 0x1525707, 0x1534b07, 0x1570b9b, 0x158d283, 0x15933eb, + 0x15947cb, 0x15b5f33, 0x166da7d, 0x16af4a3, 0x16b0b47, + 0x16ca907, 0x16ee585, 0x17609a3, 0x1770b23, 0x17a2507, + 0x1855383, 0x18b5383, 0x18d9f3b, 0x18db37d, 0x1922507, + 0x1960d3d, 0x1990d4f, 0x1991a7b, 0x19b4b07, 0x19b5383, + 0x19d9f3b, 0x1a2b683, 0x1a40d8b, 0x1a4b08d, 0x1a698fb, + 0x1a6cf9b, 0x1a714bd, 0x1a89283, 0x1aa86c3, 0x1b14e0b, + 0x1b196fb, 0x1b1f2bd, 0x1b30b1b, 0x1b32507, 0x1b44f75, + 0x1b50ce3, 0x1b5927d, 0x1b6afa3, 0x1b8a6f7, 0x1baa907, + 0x1bb2507, 0x1c2a6f7, 0x1c48357, 0x1c957c5, 0x1cb8357, + 0x1cc4d05, 0x1cd9283, 0x1ce2683, 0x1d198fb, 0x1d45383, + 0x1d4ed3d, 0x1d598fb, 0x1d8d283, 0x1d95f3b, 0x1db24f7, + 0x1db5383, 0x1db997d, 0x1e465f3, 0x1f198fb, 0x1f2b683, + 0x1f4a837, 0x20998fb, 0x20eb683, 0x214a907, 0x2152ec3, + 0x2169f3b, 0x216ed3d, 0x218a6f7, 0x2194b07, 0x21c5707, + 0x22158f9, 0x2250d4f, 0x2252507, 0x2297d63, 0x22aed3d, + 0x22b5383, 0x22ca7d7, 0x23596fb, 0x23633eb, 0x23957c5, + 0x23b24f7, 0x2476e7b, 0x24a57c5, 0x24b5383, 0x252ebb7, + 0x254f547, 0x258d283, 0x2595707, 0x25957c5, 0x25c5707, + 0x262a837, 0x2638357, 0x2645827, 0x265f2bd, 0x266f2bd, + 0x268a6f7, 0x26a0b23, 0x26cce7b, 0x2730b47, 0x2750b23, + 0x275b683, 0x28465f3, 0x2850d4f, 0x28d0b47, 0x291c97d, + 0x2922507, 0x2941a7b, 0x294a907, 0x294b683, 0x295f2bd, + 0x2969f3b, 0x296cfb5, 0x2976e7b, 0x2989f3b, 0x29933eb, + 0x29a907, 0x29b2683, 0x29c8357, 0x29d9f3b, 0x2a1a907, + 0x2a45383, 0x2a52f75, 0x2a85383, 0x2aacf9b, 0x2ac5f33, + 0x2ad1a7b, 0x2ad2507, 0x2b2b683, 0x2b3af8b, 0x2b63e65, + 
0x2b8da0b, 0x2b9416b, 0x2bb24f7, 0x2c4b37d, 0x2c6cf9b, + 0x2ca0d13, 0x2cb2507, 0x2cb2983, 0x2cce97b, 0x2ce305b, + 0x2ceb683, 0x2d14e0b, 0x2d1a7b, 0x2d2507, 0x2d2af8b, 0x2d41a7b, + 0x2d7467b, 0x2d8d283, 0x2d960bb, 0x2dab683, 0x2db5f33, + 0x2dd5f3b, 0x2e2ebb7, 0x2e32e85, 0x2e6a7c9, 0x2e85383, + 0x2e8e585, 0x2e960bb, 0x2f3afa3, 0x2f62fa5, 0x30e8639, + 0x3132983, 0x3150d4f, 0x315cf9b, 0x3162fa5, 0x316ce7b, + 0x31914bd, 0x31a927d, 0x31ea83b, 0x31f1a7b, 0x3285383, + 0x3289f3b, 0x32933eb, 0x329afa3, 0x32a5707, 0x32a6f7, + 0x32ae585, 0x32b1a7b, 0x32b4b07, 0x32b5383, 0x32b5827, + 0x32ee97b, 0x330be5b, 0x3314e0b, 0x33317a5, 0x333af8b, + 0x335afa3, 0x335b37d, 0x3371a7b, 0x3393e65, 0x339a907, + 0x33d1a7b, 0x3425707, 0x34606d3, 0x347b37d, 0x349305b, + 0x34b2683, 0x34b683, 0x34dafa3, 0x34ec97d, 0x3512683, + 0x3515383, 0x3515707, 0x352e585, 0x353af75, 0x354b683, + 0x355ed3d, 0x3562fa5, 0x356f2bd, 0x3574135, 0x359afa3, + 0x35ad283, 0x35b2683, 0x35d9f3b, 0x361ed3d, 0x3671a7b, + 0x3672fa5, 0x36cbe5b, 0x37598fb, 0x375c97d, 0x37ca907, + 0x389a907, 0x38c65f3, 0x38cf547, 0x38e33eb, 0x3931a7b, + 0x39598fb, 0x3979283, 0x398ce7b, 0x39933eb, 0x39960bb, + 0x39b1a87, 0x39b5f33, 0x39c8333, 0x39d2507, 0x3a55827, + 0x3a89f3b, 0x3a9f2bd, 0x3ab2983, 0x3aba7d7, 0x3adafa3, + 0x3b196fb, 0x3b29f3b, 0x3b32e85, 0x3b4e97b, 0x3b9260b, + 0x3bb5383, 0x3c4a907, 0x3c4c97d, 0x3c6cf9b, 0x3c95707, + 0x3ca57c5, 0x3caa907, 0x3cb2683, 0x3cb2983, 0x3ce305b, + 0x3d158f9, 0x3d15f65, 0x3d2c685, 0x3d34b07, 0x3d76e7b, + 0x3d8d283, 0x3d8f563, 0x3dae585, 0x3dd60bb, 0x3e5c97d, + 0x3eb2683, 0x3eb467b, 0x3f5927d, 0x3f596fb, 0x414e585, + 0x424b37d, 0x425afa3, 0x42e5383, 0x4315f65, 0x4325707, + 0x434b683, 0x4485383, 0x448e97b, 0x44996fb, 0x44b3e65, + 0x44eafa3, 0x4515707, 0x4532983, 0x4533e65, 0x453e585, + 0x454b683, 0x454e97b, 0x45598fb, 0x455a907, 0x456f2bd, + 0x4595f3b, 0x45d9f3b, 0x461a907, 0x462a837, 0x4645827, + 0x46a2fa5, 0x46aa213, 0x46acf9b, 0x46b2507, 0x46ba837, + 0x4715707, 0x472f547, 0x475c97d, 0x4799283, 0x47a0d3d, 
+ 0x47ca907, 0x482a907, 0x4835707, 0x4844b07, 0x48933eb, + 0x48ab683, 0x48b5827, 0x491d0e7, 0x4922507, 0x4929f3b, + 0x492e97b, 0x4933eb, 0x4934b07, 0x49598fb, 0x495f2bd, + 0x4962fa5, 0x496b683, 0x499da7d, 0x499ed3d, 0x49ab683, + 0x4a2f547, 0x4a32e85, 0x4a5f2bd, 0x4a696fb, 0x4aacf9b, + 0x4ab5383, 0x4aba837, 0x4aeb683, 0x4aeda7d, 0x4b0f69b, + 0x4b12fa5, 0x4b13e65, 0x4b5a907, 0x4b5afa3, 0x4b8afa3, + 0x4b8e585, 0x4b8f58d, 0x4ba5707, 0x4bc5707, 0x4c2a837, + 0x4c5b37d, 0x4c5ed3d, 0x4c67d8d, 0x4c8f58d, 0x4c933eb, + 0x4caec5d, 0x4cd17a5, 0x4ce305b, 0x4cf179b, 0x4cfc979, + 0x4d0be5b, 0x4d1ed3d, 0x4d2507, 0x4d30d8b, 0x4d32983, + 0x4d40ae5, 0x4d40d8b, 0x4d4f697, 0x4d8e97b, 0x4d9f2bd, + 0x4da0d3d, 0x4dce585, 0x4dd1a7b, 0x4df467b, 0x4e25707, + 0x4e4c97d, 0x4e63833, 0x4eb1a7b, 0x4eb2683, 0x4eb683, + 0x4ece97b, 0x4eee585, 0x4f13e65, 0x4f6afa3, 0x504b37d, + 0x50e2683, 0x50eb683, 0x5132e85, 0x514b08d, 0x516e97b, + 0x5198fb, 0x519f2bd, 0x51ab37d, 0x51b2683, 0x522a837, + 0x5232983, 0x52465f3, 0x52660bb, 0x528b3ed, 0x5294193, + 0x5294e0b, 0x52a57c5, 0x52eaf8b, 0x52eda7d, 0x52ee585, + 0x530be93, 0x5314e0b, 0x532e97b, 0x5340b1b, 0x535279d, + 0x53598fb, 0x5429283, 0x54a0b23, 0x54b24f7, 0x54d17a5, + 0x54eb683, 0x55198fb, 0x5523e65, 0x55357c5, 0x553e585, + 0x555cf9b, 0x55a24f7, 0x55d2507, 0x55eda7d, 0x5645f33, + 0x567ecdd, 0x56a0d3d, 0x571305b, 0x5714e0b, 0x574a837, + 0x57a5707, 0x57b65f3, 0x57c65f3, 0x5879283, 0x58ba837, + 0x58ded3d, 0x59598fb, 0x5989f3b, 0x598a83b, 0x59957c5, + 0x599f2bd, 0x59bd37b, 0x59ca907, 0x59d9f3b, 0x59e60bb, + 0x5a4b37d, 0x5a64133, 0x5a6cf9b, 0x5a89f3b, 0x5a927d, + 0x5a94165, 0x5a94193, 0x5a958f9, 0x5a960bb, 0x5aac3d3, + 0x5ad98fb, 0x5ae98fb, 0x5b198fb, 0x5b2e97b, 0x5b40b5d, + 0x5b5c97d, 0x5b75f33, 0x5b8afa3, 0x5b94e0b, 0x5ba24f7, + 0x5cab683, 0x5cb0b47, 0x5cb0ce5, 0x5cba7d7, 0x5d0be93, + 0x5d12683, 0x5d20cc7, 0x5d3e585, 0x5e2a907, 0x5e3467b, + 0x5e5c97d, 0x5e89f3b, 0x5eb2683, 0x5f598fb, 0x5f5a907, + 0x676e7b, 0x695705, 0x6b2983, 0x8998fb, 0x8ab683, 0x94a837, + 
0x95f2bd, 0x991a7b, 0x995705, 0xa714bd, 0xa90a63, 0xa933eb, + 0xad98fb, 0xb365f3, 0xb4a907, 0xb598fb, 0xb5c97d, 0xb5ed3d, + 0xb698fb, 0xbd279d, 0xc55383, 0xc7b37d, 0xc8da0b, 0xca0b23, + 0xca96fb, 0xcacf9b, 0xcb2683, 0xcd1a7b, 0xd45383, 0xd4e585, + 0xd6afa3, 0xd94e0b, 0xdaf547, 0xdb1a7b, 0xdca907, 0xdd2e85, + 0xe6da7d, 0xe94e0b, 0xe9a907, 0xeca7d7, 0xf4a837 + }) { + // for(int i : candidates) { + test.constant = i; + System.out.println(); + System.out.println("Constant: 0x" + Integer.toHexString(i)); + minMax = test.getDependencies(test, randomValues); + System.out.println("Dependencies: " + minMax[0] + ".." + minMax[1]); + int d = minMax[1] - minMax[0]; + av = 0; + for (int j = 0; j < 100; j++) { + av += test.getAvalanche(test, randomValues[j]); + } + System.out.println("AvalancheSum: " + av); + minMax = test.getEffect(test, finalCount * 10, 11); + System.out.println("Effect: " + minMax[0] + ".." + minMax[1]); + d += minMax[1] - minMax[0]; + if (d < dist) { + dist = d; + best = i; + } + } + System.out.println(); + System.out.println("Best constant: 0x" + Integer.toHexString(best)); + test.constant = best; + long collisions = test.getCollisionCount(); + System.out.println("Collisions: " + collisions); + } + + /** + * Calculate the multiplicative inverse of a value (int). + * + * @param a the value + * @return the multiplicative inverse + */ + static long calcMultiplicativeInverse(long a) { + return BigInteger.valueOf(a).modPow( + BigInteger.valueOf((1 << 31) - 1), BigInteger.valueOf(1L << 32)).longValue(); + } + + /** + * Calculate the multiplicative inverse of a value (long). 
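The `calcMultiplicativeInverse` helper above relies on the fact that for an odd value a, a^(2^31 - 1) ≡ a^(-1) (mod 2^32), because the order of every odd element modulo 2^32 divides 2^30. A minimal standalone sketch (class and method names are illustrative, not part of the patch) cross-checks that formula against `BigInteger.modInverse`:

```java
import java.math.BigInteger;

// Illustrative sketch (not part of the patch): the multiplicative inverse of
// an odd 32-bit value modulo 2^32 can be computed either directly with
// BigInteger.modInverse, or -- as calcMultiplicativeInverse above does -- as
// a^(2^31 - 1) mod 2^32, since a^(2^30) == 1 for every odd a.
class InverseDemo {

    static long inverseByPow(long a) {
        // same formula as calcMultiplicativeInverse in the patch
        return BigInteger.valueOf(a).modPow(
                BigInteger.valueOf((1L << 31) - 1),
                BigInteger.ONE.shiftLeft(32)).longValue();
    }

    static long inverseByModInverse(long a) {
        return BigInteger.valueOf(a)
                .modInverse(BigInteger.ONE.shiftLeft(32)).longValue();
    }

    public static void main(String[] args) {
        long a = 0x45d9f3bL; // any odd constant works
        long inv = inverseByPow(a);
        // the product must be congruent to 1 modulo 2^32
        System.out.println(((a * inv) & 0xffffffffL) == 1L); // prints "true"
        System.out.println(inv == inverseByModInverse(a));   // prints "true"
    }
}
```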
+ * + * @param a the value + * @return the multiplicative inverse + */ + static long calcMultiplicativeInverseLong(long a) { + BigInteger oneShift64 = BigInteger.valueOf(1).shiftLeft(64); + BigInteger oneShift63 = BigInteger.valueOf(1).shiftLeft(63); + return BigInteger.valueOf(a).modPow( + oneShift63.subtract(BigInteger.ONE), + oneShift64).longValue(); + } + /** + * Store a random file to be analyzed by the Diehard test. + */ + void storeRandomFile() throws Exception { + File f = new File(System.getProperty("user.home") + "/temp/rand.txt"); + FileOutputStream out = new FileOutputStream(f); + CalculateHashConstant test = new CalculateHashConstant(); + // Random r = new Random(1); + byte[] buff = new byte[4]; + // tt.constant = 0x29a907; + for (int i = 0; i < 10000000 / 8; i++) { + int y = test.hash(i); + // int y = r.nextInt(); + writeInt(buff, 0, y); + out.write(buff); + } + out.close(); + } + + private static int[] getRandomValues(int count, int seed) { + int[] values = new int[count]; + Random r = new Random(seed); + for (int i = 0; i < count; i++) { + values[i] = r.nextInt(); + } + return values; + } + + @Override + public void run() { + while (true) { + int currentHigh = high.getAndIncrement(); + // if (currentHigh > 0x2d) { + if (currentHigh > 0xffff) { + break; + } + System.out.println("testing " + Integer.toHexString(currentHigh) + "...."); + addCandidates(currentHigh); + } + } + + private void addCandidates(int currentHigh) { + for (int low = 0; low <= 0xffff; low++) { + // the lower 16 bits don't have to be a prime number + // but it seems that's a good restriction + if (!primeNumbers.get(low)) { + continue; + } + int i = (currentHigh << 16) | low; + constant = i; + // after one bit changes in the input, + // on average 16 bits of the output change + int av = getAvalanche(this, 0); + if (Math.abs(av - 16000) > 130) { + continue; + } + av = getAvalanche(this, 0xffffffff); + if (Math.abs(av - 16000) > 130) { + continue; + } + long es = getEffectSquare(this, 
randomValues); + if (es > 259000) { + continue; + } + int[] minMax = getEffect(this, 10000, 1); + if (!isWithin(4800, 5200, minMax)) { + continue; + } + minMax = getDependencies(this, randomValues); + // aes: 7788..8183 + if (!isWithin(7720, 8240, minMax)) { + continue; + } + minMax = getEffect(this, 100000, 3); + if (!isWithin(49000, 51000, minMax)) { + continue; + } + System.out.println(Integer.toHexString(i) + + " hit av:" + av + " bits:" + Integer.bitCount(i) + " es:" + es + " prime:" + + BigInteger.valueOf(i).isProbablePrime(15)); + candidates.add(i); + } + } + + long getCollisionCount() { + BitSet set = new BitSet(); + BitSet neg = new BitSet(); + long collisions = 0; + long t = System.nanoTime(); + for (int i = Integer.MIN_VALUE; i != Integer.MAX_VALUE; i++) { + int x = hash(i); + if (x >= 0) { + if (set.get(x)) { + collisions++; + } else { + set.set(x); + } + } else { + x = -(x + 1); + if (neg.get(x)) { + collisions++; + } else { + neg.set(x); + } + } + if ((i & 0xfffff) == 0) { + long n = System.nanoTime(); + if (n - t > TimeUnit.SECONDS.toNanos(5)) { + System.out.println(Integer.toHexString(constant) + " " + + Integer.toHexString(i) + " collisions: " + collisions); + t = n; + } + } + } + return collisions; + } + + private static boolean isWithin(int min, int max, int[] range) { + return range[0] >= min && range[1] <= max; + } + + /** + * Calculate how much the bit changes (output bits that change if an input + * bit is changed) are independent of each other. 
+ * + * @param h the hash object + * @param values the values to test with + * @return the minimum and maximum number of output bits that are changed in + * combination with another output bit + */ + int[] getDependencies(CalculateHashConstant h, int[] values) { + Arrays.fill(fromTo, 0); + for (int x : values) { + for (int shift = 0; shift < 32; shift++) { + int x1 = h.hash(x); + int x2 = h.hash(x ^ (1 << shift)); + int x3 = x1 ^ x2; + for (int s = 0; s < 32; s++) { + if ((x3 & (1 << s)) != 0) { + for (int s2 = 0; s2 < 32; s2++) { + if (s == s2) { + continue; + } + if ((x3 & (1 << s2)) != 0) { + fromTo[s * 32 + s2]++; + } + } + } + } + } + } + int a = Integer.MAX_VALUE, b = Integer.MIN_VALUE; + for (int x : fromTo) { + if (x == 0) { + continue; + } + if (x < a) { + a = x; + } + if (x > b) { + b = x; + } + } + return new int[] {a, b}; + } + + /** + * Calculate the number of bits that change if a single bit is changed + * multiplied by 1000 (expected: 16000 +/- 5%). + * + * @param h the hash object + * @param value the base value + * @return the number of bit changes multiplied by 1000 + */ + int getAvalanche(CalculateHashConstant h, int value) { + int changedBitsSum = 0; + for (int i = 0; i < 32; i++) { + int x = value ^ (1 << i); + for (int shift = 0; shift < 32; shift++) { + int x1 = h.hash(x); + int x2 = h.hash(x ^ (1 << shift)); + int x3 = x1 ^ x2; + changedBitsSum += Integer.bitCount(x3); + } + } + return changedBitsSum * 1000 / 32 / 32; + } + + /** + * Calculate the sum of the square of the distance to the expected + * probability that an output bit changes if an input bit is changed. The + * lower the value, the better. 
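The avalanche metric used throughout can be reproduced standalone. The sketch below (class name is illustrative; the constant 0x45d9f3b is one example of an odd mixing constant of the shape being searched for) flips each input bit, counts the flipped output bits, and should land near the ideal 16000 (16 of 32 bits, times 1000):

```java
// Illustrative sketch (not from the patch): measure avalanche the same way
// getAvalanche above does -- flip each of the 32 input bits and count how
// many output bits change, averaged and scaled by 1000.
class AvalancheDemo {

    static int mix(int x) { // same shape as the hash() under test
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        return (x >>> 16) ^ x;
    }

    /** Average changed output bits, times 1000 (ideal: 16000 for 32 bits). */
    static int avalanche(int value) {
        int changed = 0;
        for (int i = 0; i < 32; i++) {
            int x = value ^ (1 << i);
            for (int shift = 0; shift < 32; shift++) {
                changed += Integer.bitCount(mix(x) ^ mix(x ^ (1 << shift)));
            }
        }
        return changed * 1000 / 32 / 32;
    }

    public static void main(String[] args) {
        // a good mixer stays close to 16000; the search above rejects
        // candidates that deviate by more than 130 at this base value
        System.out.println(avalanche(0));
    }
}
```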
+ * + * @param h the hash object + * @param values the values to test with + * @return sum(distance^2) + */ + long getEffectSquare(CalculateHashConstant h, int[] values) { + Arrays.fill(fromTo, 0); + int total = 0; + for (int x : values) { + for (int shift = 0; shift < 32; shift++) { + int x1 = h.hash(x); + int x2 = h.hash(x ^ (1 << shift)); + int x3 = x1 ^ x2; + for (int s = 0; s < 32; s++) { + if ((x3 & (1 << s)) != 0) { + fromTo[shift * 32 + s]++; + total++; + } + } + } + } + long sqDist = 0; + int expected = total / 32 / 32; + for (int x : fromTo) { + int dist = Math.abs(x - expected); + sqDist += dist * dist; + } + return sqDist; + } + + /** + * Calculate if the all bit changes (that an output bit changes if an input + * bit is changed) are within a certain range. + * + * @param h the hash object + * @param count the number of values to test + * @param seed the random seed + * @return the minimum and maximum value of all input-to-output bit changes + */ + int[] getEffect(CalculateHashConstant h, int count, int seed) { + Random r = new Random(); + r.setSeed(seed); + Arrays.fill(fromTo, 0); + for (int i = 0; i < count; i++) { + int x = r.nextInt(); + for (int shift = 0; shift < 32; shift++) { + int x1 = h.hash(x); + int x2 = h.hash(x ^ (1 << shift)); + int x3 = x1 ^ x2; + for (int s = 0; s < 32; s++) { + if ((x3 & (1 << s)) != 0) { + fromTo[shift * 32 + s]++; + } + } + } + } + int a = Integer.MAX_VALUE, b = Integer.MIN_VALUE; + for (int x : fromTo) { + if (x < a) { + a = x; + } + if (x > b) { + b = x; + } + } + return new int[] {a, b}; + } + + /** + * The hash method. + * + * @param x the input + * @return the output + */ + int hash(int x) { + // return secureHash(x); + x = ((x >>> 16) ^ x) * constant; + x = ((x >>> 16) ^ x) * constant; + x = (x >>> 16) ^ x; + return x; + } + + /** + * Calculate a hash using AES. 
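The `hash` method above is a bijection on 32-bit ints: `x ^ (x >>> 16)` is invertible (self-inverse, since the shift is at least half the word size), and multiplication by an odd constant is invertible modulo 2^32. A sketch (names and the example constant are illustrative) that undoes the mix step by step:

```java
// Illustrative sketch (not from the patch): invert the two-round
// xorshift-multiply mixer by applying the inverse of each step in reverse
// order. The multiplicative inverse is found by Newton iteration.
class UnmixDemo {

    static final int K = 0x45d9f3b; // example odd constant, as in hash()

    static int mix(int x) {
        x = ((x >>> 16) ^ x) * K;
        x = ((x >>> 16) ^ x) * K;
        return (x >>> 16) ^ x;
    }

    // multiplicative inverse of an odd int modulo 2^32
    static int inverse(int k) {
        int inv = k; // correct modulo 8 for any odd k
        for (int i = 0; i < 5; i++) {
            inv *= 2 - k * inv; // each step doubles the number of valid bits
        }
        return inv;
    }

    static int unmix(int x) {
        int kInv = inverse(K);
        x = (x >>> 16) ^ x; // "x ^ (x >>> 16)" with shift >= 16 is self-inverse
        x *= kInv;
        x = (x >>> 16) ^ x;
        x *= kInv;
        return (x >>> 16) ^ x;
    }

    public static void main(String[] args) {
        for (int x : new int[] {0, 1, -1, 123456789}) {
            if (unmix(mix(x)) != x) {
                throw new AssertionError("round-trip failed for " + x);
            }
        }
        System.out.println("round-trip ok");
    }
}
```

Bijectivity is also why `getCollisionCount` above is meaningful: a perfect mixing constant yields zero collisions over the full 32-bit range.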
+ * + * @param x the input + * @return the output + */ + int secureHash(int x) { + Arrays.fill(data, (byte) 0); + writeInt(data, 0, x); + aes.encrypt(data, 0, 16); + return readInt(data, 0); + } + + private static void writeInt(byte[] buff, int pos, int x) { + buff[pos++] = (byte) (x >> 24); + buff[pos++] = (byte) (x >> 16); + buff[pos++] = (byte) (x >> 8); + buff[pos++] = (byte) x; + } + + private static int readInt(byte[] buff, int pos) { + return (buff[pos++] << 24) + ((buff[pos++] & 0xff) << 16) + + ((buff[pos++] & 0xff) << 8) + (buff[pos] & 0xff); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/CalculateHashConstantLong.java b/modules/h2/src/test/java/org/h2/test/store/CalculateHashConstantLong.java new file mode 100644 index 0000000000000..ec81d388788d7 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/CalculateHashConstantLong.java @@ -0,0 +1,435 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.io.File; +import java.io.FileOutputStream; +import java.math.BigInteger; +import java.util.Arrays; +import java.util.BitSet; +import java.util.Collections; +import java.util.HashSet; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.security.AES; + +/** + * Calculate the constant for the secondary hash function, so that the hash + * function mixes the input bits as much as possible. 
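The `writeInt`/`readInt` helpers above encode an int big-endian into 4 bytes. A self-contained round-trip check (class name is illustrative):

```java
// Illustrative sketch (not from the patch): big-endian int encode/decode,
// mirroring the writeInt/readInt helpers; the two operations are symmetric.
class ByteCodecDemo {

    static void writeInt(byte[] buff, int pos, int x) {
        buff[pos] = (byte) (x >> 24);     // most significant byte first
        buff[pos + 1] = (byte) (x >> 16);
        buff[pos + 2] = (byte) (x >> 8);
        buff[pos + 3] = (byte) x;
    }

    static int readInt(byte[] buff, int pos) {
        // the top byte may sign-extend freely: the shift discards those bits
        return (buff[pos] << 24) + ((buff[pos + 1] & 0xff) << 16)
                + ((buff[pos + 2] & 0xff) << 8) + (buff[pos + 3] & 0xff);
    }

    public static void main(String[] args) {
        byte[] buff = new byte[4];
        for (int x : new int[] {0, -1, 0x12345678, Integer.MIN_VALUE}) {
            writeInt(buff, 0, x);
            if (readInt(buff, 0) != x) {
                throw new AssertionError("round-trip failed for " + x);
            }
        }
        System.out.println("round-trip ok");
    }
}
```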
+ */ +public class CalculateHashConstantLong implements Runnable { + + private static BitSet primeNumbers = new BitSet(); + private static long[] randomValues; + private static AtomicInteger high = new AtomicInteger(0x20); + private static Set candidates = + Collections.synchronizedSet(new HashSet()); + + private long constant; + private int[] fromTo = new int[64 * 64]; + + private final AES aes = new AES(); + { + aes.setKey("Hello World Hallo Welt".getBytes()); + } + private final byte[] data = new byte[16]; + + /** + * Run just this test. + * + * @param args ignored + */ + public static void main(String... args) throws Exception { + for (int i = 0x0; i < 0x10000; i++) { + if (BigInteger.valueOf(i).isProbablePrime(20)) { + primeNumbers.set(i); + } + } + randomValues = getRandomValues(1000, 1); + Random r = new Random(1); + for (int i = 0; i < randomValues.length; i++) { + randomValues[i] = r.nextInt(); + } + printQuality(new CalculateHashConstantLong() { + @Override + public long hash(long x) { + return secureHash(x); + } + @Override + public String toString() { + return "AES"; + } + }, randomValues); + // Quality of AES + // Dependencies: 15715..16364 + // Avalanche: 31998 + // AvalancheSum: 3199841 + // Effect: 49456..50584 + + printQuality(new CalculateHashConstantLong() { + @Override + public long hash(long x) { + x = (x ^ (x >>> 30)) * 0xbf58476d1ce4e5b9L; + x = (x ^ (x >>> 27)) * 0x94d049bb133111ebL; + return x ^ (x >>> 31); + } + @Override + public String toString() { + return "Test"; + } + }, randomValues); + // Quality of Test + // Dependencies: 14693..16502 + // Avalanche: 31996 + // AvalancheSum: 3199679 + // Effect: 49437..50537 + + Thread[] threads = new Thread[8]; + for (int i = 0; i < 8; i++) { + threads[i] = new Thread(new CalculateHashConstantLong()); + threads[i].start(); + } + for (int i = 0; i < 8; i++) { + threads[i].join(); + } + + int finalCount = 10000; + long[] randomValues = getRandomValues(finalCount, 10); + + CalculateHashConstantLong 
test; + int[] minMax; + test = new CalculateHashConstantLong(); + long best = 0; + int dist = Integer.MAX_VALUE; + for (long i : candidates) { + test.constant = i; + System.out.println(); + System.out.println("Constant: 0x" + Long.toHexString(i)); + minMax = test.getDependencies(test, randomValues); + System.out.println("Dependencies: " + minMax[0] + ".." + minMax[1]); + int d = minMax[1] - minMax[0]; + int av = 0; + for (int j = 0; j < 100; j++) { + av += test.getAvalanche(test, randomValues[j]); + } + System.out.println("AvalancheSum: " + av); + minMax = test.getEffect(test, finalCount * 10, 11); + System.out.println("Effect: " + minMax[0] + ".." + minMax[1]); + d += minMax[1] - minMax[0]; + if (d < dist) { + dist = d; + best = i; + } + } + System.out.println(); + System.out.println("Best constant: 0x" + Long.toHexString(best)); + test.constant = best; + long collisions = test.getCollisionCount(); + System.out.println("Collisions: " + collisions); + } + + private static void printQuality(CalculateHashConstantLong test, long[] randomValues) { + int finalCount = randomValues.length * 10; + System.out.println("Quality of " + test); + int[] minMax; + int av = 0; + minMax = test.getDependencies(test, randomValues); + System.out.println("Dependencies: " + minMax[0] + ".." + minMax[1]); + av = 0; + for (int j = 0; j < 100; j++) { + av += test.getAvalanche(test, randomValues[j]); + } + System.out.println("Avalanche: " + (av / 100)); + System.out.println("AvalancheSum: " + av); + minMax = test.getEffect(test, finalCount * 10, 11); + System.out.println("Effect: " + minMax[0] + ".." + minMax[1]); + System.out.println("ok=" + test.testCandidate()); + } + + /** + * Store a random file to be analyzed by the Diehard test. 
+ */ + void storeRandomFile() throws Exception { + File f = new File(System.getProperty("user.home") + "/temp/rand.txt"); + FileOutputStream out = new FileOutputStream(f); + CalculateHashConstantLong test = new CalculateHashConstantLong(); + // Random r = new Random(1); + byte[] buff = new byte[8]; + // tt.constant = 0x29a907; + for (int i = 0; i < 10000000 / 8; i++) { + long y = test.hash(i); + // int y = r.nextInt(); + writeLong(buff, 0, y); + out.write(buff); + } + out.close(); + } + + private static long[] getRandomValues(int count, int seed) { + long[] values = new long[count]; + Random r = new Random(seed); + for (int i = 0; i < count; i++) { + values[i] = r.nextLong(); + } + return values; + } + + @Override + public void run() { + while (true) { + int currentHigh = high.getAndIncrement(); + // if (currentHigh > 0x2d) { + if (currentHigh > 0xffff) { + break; + } + System.out.println("testing " + Integer.toHexString(currentHigh) + "...."); + addCandidates(currentHigh); + } + } + + private void addCandidates(long currentHigh) { + for (int low = 0; low <= 0xffff; low++) { + // the lower 16 bits don't have to be a prime number + // but it seems that's a good restriction + if (!primeNumbers.get(low)) { + continue; + } + long i = (currentHigh << 48) | ((long) low << 32) | (currentHigh << 16) | low; + constant = i; + if (!testCandidate()) { + continue; + } + System.out.println(Long.toHexString(i) + + " hit " + i); + candidates.add(i); + } + } + + private boolean testCandidate() { + // after one bit changes in the input, + // on average 32 bits of the output change + int av = getAvalanche(this, 0); + if (Math.abs(av - 32000) > 1000) { + return false; + } + av = getAvalanche(this, 0xffffffffffffffffL); + if (Math.abs(av - 32000) > 1000) { + return false; + } + long es = getEffectSquare(this, randomValues); + if (es > 1100000) { + System.out.println("fail at a " + es); + return false; + } + int[] minMax = getEffect(this, 10000, 1); + if (!isWithin(4700, 5300, minMax)) 
{ + System.out.println("fail at b " + minMax[0] + " " + minMax[1]); + return false; + } + minMax = getDependencies(this, randomValues); + if (!isWithin(14500, 17000, minMax)) { + System.out.println("fail at c " + minMax[0] + " " + minMax[1]); + return false; + } + return true; + } + + long getCollisionCount() { + // TODO need a way to check this + return 0; + } + + private static boolean isWithin(int min, int max, int[] range) { + return range[0] >= min && range[1] <= max; + } + + /** + * Calculate how much the bit changes (output bits that change if an input + * bit is changed) are independent of each other. + * + * @param h the hash object + * @param values the values to test with + * @return the minimum and maximum number of output bits that are changed in + * combination with another output bit + */ + int[] getDependencies(CalculateHashConstantLong h, long[] values) { + Arrays.fill(fromTo, 0); + for (long x : values) { + for (int shift = 0; shift < 64; shift++) { + long x1 = h.hash(x); + long x2 = h.hash(x ^ (1L << shift)); + long x3 = x1 ^ x2; + for (int s = 0; s < 64; s++) { + if ((x3 & (1L << s)) != 0) { + for (int s2 = 0; s2 < 64; s2++) { + if (s == s2) { + continue; + } + if ((x3 & (1L << s2)) != 0) { + fromTo[s * 64 + s2]++; + } + } + } + } + } + } + int a = Integer.MAX_VALUE, b = Integer.MIN_VALUE; + for (int x : fromTo) { + if (x == 0) { + continue; + } + if (x < a) { + a = x; + } + if (x > b) { + b = x; + } + } + return new int[] {a, b}; + } + + /** + * Calculate the number of bits that change if a single bit is changed + * multiplied by 1000 (expected: 16000 +/- 5%). 
+ * + * @param h the hash object + * @param value the base value + * @return the number of bit changes multiplied by 1000 + */ + int getAvalanche(CalculateHashConstantLong h, long value) { + int changedBitsSum = 0; + for (int i = 0; i < 64; i++) { + long x = value ^ (1L << i); + for (int shift = 0; shift < 64; shift++) { + long x1 = h.hash(x); + long x2 = h.hash(x ^ (1L << shift)); + long x3 = x1 ^ x2; + changedBitsSum += Long.bitCount(x3); + } + } + return changedBitsSum * 1000 / 64 / 64; + } + + /** + * Calculate the sum of the square of the distance to the expected + * probability that an output bit changes if an input bit is changed. The + * lower the value, the better. + * + * @param h the hash object + * @param values the values to test with + * @return sum(distance^2) + */ + long getEffectSquare(CalculateHashConstantLong h, long[] values) { + Arrays.fill(fromTo, 0); + int total = 0; + for (long x : values) { + for (int shift = 0; shift < 64; shift++) { + long x1 = h.hash(x); + long x2 = h.hash(x ^ (1L << shift)); + long x3 = x1 ^ x2; + for (int s = 0; s < 64; s++) { + if ((x3 & (1L << s)) != 0) { + fromTo[shift * 64 + s]++; + total++; + } + } + } + } + long sqDist = 0; + int expected = total / 64 / 64; + for (int x : fromTo) { + int dist = Math.abs(x - expected); + sqDist += dist * dist; + } + return sqDist; + } + + /** + * Calculate if the bit changes (that an output bit changes if an input + * bit is changed) are within a certain range. 
+ * + * @param h the hash object + * @param count the number of values to test + * @param seed the random seed + * @return the minimum and maximum value of all input-to-output bit changes + */ + int[] getEffect(CalculateHashConstantLong h, int count, int seed) { + Random r = new Random(); + r.setSeed(seed); + Arrays.fill(fromTo, 0); + for (int i = 0; i < count; i++) { + long x = r.nextLong(); + for (int shift = 0; shift < 64; shift++) { + long x1 = h.hash(x); + long x2 = h.hash(x ^ (1L << shift)); + long x3 = x1 ^ x2; + for (int s = 0; s < 64; s++) { + if ((x3 & (1L << s)) != 0) { + fromTo[shift * 64 + s]++; + } + } + } + } + int a = Integer.MAX_VALUE, b = Integer.MIN_VALUE; + for (int x : fromTo) { + if (x < a) { + a = x; + } + if (x > b) { + b = x; + } + } + return new int[] {a, b}; + } + + /** + * The hash method. + * + * @param x the input + * @return the output + */ + long hash(long x) { + x = ((x >>> 32) ^ x) * constant; + x = ((x >>> 32) ^ x) * constant; + x = (x >>> 32) ^ x; + return x; + } + + /** + * Calculate a hash using AES. 
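For comparison with the single-constant 64-bit `hash` above: the "Test" mixer shown earlier in this patch (shifts 30/27/31 with constants 0xbf58476d1ce4e5b9L and 0x94d049bb133111ebL) is the well-known SplitMix64 finalizer (Stafford's Mix13). A minimal sketch (class and method names are illustrative):

```java
// Illustrative sketch (not from the patch): the SplitMix64 finalizer, the
// reference 64-bit mixer the patch's "Test" hash quality numbers come from.
class Mix64Demo {

    static long splitmix64Finalizer(long x) {
        x = (x ^ (x >>> 30)) * 0xbf58476d1ce4e5b9L;
        x = (x ^ (x >>> 27)) * 0x94d049bb133111ebL;
        return x ^ (x >>> 31);
    }

    public static void main(String[] args) {
        // zero is a fixed point, and distinct inputs always map to distinct
        // outputs because each step is a bijection on 64-bit values
        System.out.println(splitmix64Finalizer(0) == 0L); // prints "true"
        System.out.println(Long.bitCount(
                splitmix64Finalizer(1) ^ splitmix64Finalizer(2)));
    }
}
```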
+ * + * @param x the input + * @return the output + */ + long secureHash(long x) { + writeLong(data, 0, x); + aes.encrypt(data, 0, 16); + return readLong(data, 0); + } + + private static void writeLong(byte[] buff, int pos, long x) { + writeInt(buff, pos, (int) (x >>> 32)); + writeInt(buff, pos + 4, (int) x); + } + + private static void writeInt(byte[] buff, int pos, int x) { + buff[pos++] = (byte) (x >> 24); + buff[pos++] = (byte) (x >> 16); + buff[pos++] = (byte) (x >> 8); + buff[pos++] = (byte) x; + } + + private static long readLong(byte[] buff, int pos) { + return (((long) readInt(buff, pos)) << 32) | (readInt(buff, pos + 4) & 0xffffffffL); + } + + private static int readInt(byte[] buff, int pos) { + return (buff[pos++] << 24) + ((buff[pos++] & 0xff) << 16) + + ((buff[pos++] & 0xff) << 8) + (buff[pos] & 0xff); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/FreeSpaceList.java b/modules/h2/src/test/java/org/h2/test/store/FreeSpaceList.java new file mode 100644 index 0000000000000..3b143b6b0a462 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/FreeSpaceList.java @@ -0,0 +1,217 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.ArrayList; +import java.util.List; + +import org.h2.mvstore.DataUtils; +import org.h2.util.MathUtils; + +/** + * A list that maintains ranges of free space (in blocks). + */ +public class FreeSpaceList { + + /** + * The first usable block. + */ + private final int firstFreeBlock; + + /** + * The block size in bytes. 
+ */ + private final int blockSize; + + private List freeSpaceList = new ArrayList<>(); + + public FreeSpaceList(int firstFreeBlock, int blockSize) { + this.firstFreeBlock = firstFreeBlock; + if (Integer.bitCount(blockSize) != 1) { + throw DataUtils.newIllegalArgumentException("Block size is not a power of 2"); + } + this.blockSize = blockSize; + clear(); + } + + /** + * Reset the list. + */ + public synchronized void clear() { + freeSpaceList.clear(); + freeSpaceList.add(new BlockRange(firstFreeBlock, + Integer.MAX_VALUE - firstFreeBlock)); + } + + /** + * Allocate a number of blocks and mark them as used. + * + * @param length the number of bytes to allocate + * @return the start position in bytes + */ + public synchronized long allocate(int length) { + int required = getBlockCount(length); + for (BlockRange pr : freeSpaceList) { + if (pr.length >= required) { + int result = pr.start; + this.markUsed(pr.start * blockSize, length); + return result * blockSize; + } + } + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, + "Could not find a free page to allocate"); + } + + /** + * Mark the space as in use. 
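The `allocate` method above converts a byte length into a block count by rounding up to the block size (via `MathUtils.roundUpInt`). With a power-of-two block size this reduces to plain bit arithmetic, sketched here (class and method names are illustrative):

```java
// Illustrative sketch (not from the patch): round a byte length up to whole
// blocks, assuming blockSize is a power of two -- the same precondition the
// FreeSpaceList constructor enforces with Integer.bitCount(blockSize) == 1.
class BlockCountDemo {

    static int getBlockCount(int length, int blockSize) {
        if (length <= 0 || Integer.bitCount(blockSize) != 1) {
            throw new IllegalArgumentException();
        }
        // round length up to the next multiple of blockSize, then divide
        return ((length + blockSize - 1) & -blockSize) / blockSize;
    }

    public static void main(String[] args) {
        System.out.println(getBlockCount(1, 4096));    // 1
        System.out.println(getBlockCount(4096, 4096)); // 1
        System.out.println(getBlockCount(4097, 4096)); // 2
    }
}
```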
+ * + * @param pos the position in bytes + * @param length the number of bytes + */ + public synchronized void markUsed(long pos, int length) { + int start = (int) (pos / blockSize); + int required = getBlockCount(length); + BlockRange found = null; + int i = 0; + for (BlockRange pr : freeSpaceList) { + if (start >= pr.start && start < (pr.start + pr.length)) { + found = pr; + break; + } + i++; + } + if (found == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, + "Cannot find spot to mark as used in free list"); + } + if (start + required > found.start + found.length) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, + "Runs over edge of free space"); + } + if (found.start == start) { + // if the used space is at the beginning of a free-space-range + found.start += required; + found.length -= required; + if (found.length == 0) { + // if the free-space-range is now empty, remove it + freeSpaceList.remove(i); + } + } else if (found.start + found.length == start + required) { + // if the used space is at the end of a free-space-range + found.length -= required; + } else { + // it's in the middle, so split the existing entry + int length1 = start - found.start; + int start2 = start + required; + int length2 = found.start + found.length - start - required; + + found.length = length1; + BlockRange newRange = new BlockRange(start2, length2); + freeSpaceList.add(i + 1, newRange); + } + } + + /** + * Mark the space as free. 
+ * + * @param pos the position in bytes + * @param length the number of bytes + */ + public synchronized void free(long pos, int length) { + int start = (int) (pos / blockSize); + int required = getBlockCount(length); + BlockRange found = null; + int i = 0; + for (BlockRange pr : freeSpaceList) { + if (pr.start > start) { + found = pr; + break; + } + i++; + } + if (found == null) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, + "Cannot find spot to mark as unused in free list"); + } + if (start + required == found.start) { + // if the used space is adjacent to the beginning of a + // free-space-range + found.start = start; + found.length += required; + // compact: merge the previous entry into this one if + // they are now adjacent + if (i > 0) { + BlockRange previous = freeSpaceList.get(i - 1); + if (previous.start + previous.length == found.start) { + previous.length += found.length; + freeSpaceList.remove(i); + } + } + return; + } + if (i > 0) { + // if the used space is adjacent to the end of a free-space-range + BlockRange previous = freeSpaceList.get(i - 1); + if (previous.start + previous.length == start) { + previous.length += required; + return; + } + } + + // it is between 2 entries, so add a new one + BlockRange newRange = new BlockRange(start, required); + freeSpaceList.add(i, newRange); + } + + private int getBlockCount(int length) { + if (length <= 0) { + throw DataUtils.newIllegalStateException( + DataUtils.ERROR_INTERNAL, "Free space invalid length"); + } + return MathUtils.roundUpInt(length, blockSize) / blockSize; + } + + @Override + public String toString() { + return freeSpaceList.toString(); + } + + /** + * A range of free blocks. + */ + private static final class BlockRange { + + /** + * The starting point, in blocks. + */ + int start; + + /** + * The length, in blocks. 
+         */
+        int length;
+
+        public BlockRange(int start, int length) {
+            this.start = start;
+            this.length = length;
+        }
+
+        @Override
+        public String toString() {
+            if (start + length == Integer.MAX_VALUE) {
+                return Integer.toHexString(start) + "-";
+            }
+            return Integer.toHexString(start) + "-" +
+                    Integer.toHexString(start + length - 1);
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/store/FreeSpaceTree.java b/modules/h2/src/test/java/org/h2/test/store/FreeSpaceTree.java
new file mode 100644
index 0000000000000..b0d937ee26a8c
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/store/FreeSpaceTree.java
@@ -0,0 +1,206 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.store;
+
+import java.util.TreeSet;
+
+import org.h2.mvstore.DataUtils;
+import org.h2.util.MathUtils;
+
+/**
+ * A list that maintains ranges of free space (in blocks) in a file.
+ */
+public class FreeSpaceTree {
+
+    /**
+     * The first usable block.
+     */
+    private final int firstFreeBlock;
+
+    /**
+     * The block size in bytes.
+     */
+    private final int blockSize;
+
+    /**
+     * The list of free space.
+     */
+    private TreeSet<BlockRange> freeSpace = new TreeSet<>();
+
+    public FreeSpaceTree(int firstFreeBlock, int blockSize) {
+        this.firstFreeBlock = firstFreeBlock;
+        if (Integer.bitCount(blockSize) != 1) {
+            throw DataUtils.newIllegalArgumentException("Block size is not a power of 2");
+        }
+        this.blockSize = blockSize;
+        clear();
+    }
+
+    /**
+     * Reset the list.
+     */
+    public synchronized void clear() {
+        freeSpace.clear();
+        freeSpace.add(new BlockRange(firstFreeBlock,
+                Integer.MAX_VALUE - firstFreeBlock));
+    }
+
+    /**
+     * Allocate a number of blocks and mark them as used.
+     *
+     * @param length the number of bytes to allocate
+     * @return the start position in bytes
+     */
+    public synchronized long allocate(int length) {
+        int blocks = getBlockCount(length);
+        BlockRange x = null;
+        for (BlockRange b : freeSpace) {
+            if (b.blocks >= blocks) {
+                x = b;
+                break;
+            }
+        }
+        long pos = getPos(x.start);
+        if (x.blocks == blocks) {
+            freeSpace.remove(x);
+        } else {
+            x.start += blocks;
+            x.blocks -= blocks;
+        }
+        return pos;
+    }
+
+    /**
+     * Mark the space as in use.
+     *
+     * @param pos the position in bytes
+     * @param length the number of bytes
+     */
+    public synchronized void markUsed(long pos, int length) {
+        int start = getBlock(pos);
+        int blocks = getBlockCount(length);
+        BlockRange x = new BlockRange(start, blocks);
+        BlockRange prev = freeSpace.floor(x);
+        if (prev == null) {
+            throw DataUtils.newIllegalStateException(
+                    DataUtils.ERROR_INTERNAL, "Free space already marked");
+        }
+        if (prev.start == start) {
+            if (prev.blocks == blocks) {
+                // match
+                freeSpace.remove(prev);
+            } else {
+                // cut the front
+                prev.start += blocks;
+                prev.blocks -= blocks;
+            }
+        } else if (prev.start + prev.blocks == start + blocks) {
+            // cut the end
+            prev.blocks -= blocks;
+        } else {
+            // insert an entry
+            x.start = start + blocks;
+            x.blocks = prev.start + prev.blocks - x.start;
+            freeSpace.add(x);
+            prev.blocks = start - prev.start;
+        }
+    }
+
+    /**
+     * Mark the space as free.
+     *
+     * @param pos the position in bytes
+     * @param length the number of bytes
+     */
+    public synchronized void free(long pos, int length) {
+        int start = getBlock(pos);
+        int blocks = getBlockCount(length);
+        BlockRange x = new BlockRange(start, blocks);
+        BlockRange next = freeSpace.ceiling(x);
+        if (next == null) {
+            throw DataUtils.newIllegalStateException(
+                    DataUtils.ERROR_INTERNAL, "Free space sentinel is missing");
+        }
+        BlockRange prev = freeSpace.lower(x);
+        if (prev != null) {
+            if (prev.start + prev.blocks == start) {
+                // extend the previous entry
+                prev.blocks += blocks;
+                if (prev.start + prev.blocks == next.start) {
+                    // merge with the next entry
+                    prev.blocks += next.blocks;
+                    freeSpace.remove(next);
+                }
+                return;
+            }
+        }
+        if (start + blocks == next.start) {
+            // extend the next entry
+            next.start -= blocks;
+            next.blocks += blocks;
+            return;
+        }
+        freeSpace.add(x);
+    }
+
+    private long getPos(int block) {
+        return (long) block * (long) blockSize;
+    }
+
+    private int getBlock(long pos) {
+        return (int) (pos / blockSize);
+    }
+
+    private int getBlockCount(int length) {
+        if (length <= 0) {
+            throw DataUtils.newIllegalStateException(
+                    DataUtils.ERROR_INTERNAL, "Free space invalid length");
+        }
+        return MathUtils.roundUpInt(length, blockSize) / blockSize;
+    }
+
+    @Override
+    public String toString() {
+        return freeSpace.toString();
+    }
+
+    /**
+     * A range of free blocks.
+     */
+    private static final class BlockRange implements Comparable<BlockRange> {
+
+        /**
+         * The starting point (the block number).
+         */
+        public int start;
+
+        /**
+         * The length, in blocks.
+         */
+        public int blocks;
+
+        public BlockRange(int start, int blocks) {
+            this.start = start;
+            this.blocks = blocks;
+        }
+
+        @Override
+        public int compareTo(BlockRange o) {
+            return Integer.compare(start, o.start);
+        }
+
+        @Override
+        public String toString() {
+            if (blocks + start == Integer.MAX_VALUE) {
+                return Integer.toHexString(start) + "-";
+            }
+            return Integer.toHexString(start) + "-" +
+                    Integer.toHexString(start + blocks - 1);
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/store/RowDataType.java b/modules/h2/src/test/java/org/h2/test/store/RowDataType.java
new file mode 100644
index 0000000000000..6e1c1074cb9b8
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/store/RowDataType.java
@@ -0,0 +1,95 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.store;
+
+import java.nio.ByteBuffer;
+import org.h2.mvstore.DataUtils;
+import org.h2.mvstore.WriteBuffer;
+import org.h2.mvstore.type.DataType;
+
+/**
+ * A row type.
+ */
+public class RowDataType implements DataType {
+
+    static final String PREFIX = "org.h2.test.store.row";
+
+    private final DataType[] types;
+
+    RowDataType(DataType[] types) {
+        this.types = types;
+    }
+
+    @Override
+    public int compare(Object a, Object b) {
+        if (a == b) {
+            return 0;
+        }
+        Object[] ax = (Object[]) a;
+        Object[] bx = (Object[]) b;
+        int al = ax.length;
+        int bl = bx.length;
+        int len = Math.min(al, bl);
+        for (int i = 0; i < len; i++) {
+            int comp = types[i].compare(ax[i], bx[i]);
+            if (comp != 0) {
+                return comp;
+            }
+        }
+        if (len < al) {
+            return -1;
+        } else if (len < bl) {
+            return 1;
+        }
+        return 0;
+    }
+
+    @Override
+    public int getMemory(Object obj) {
+        Object[] x = (Object[]) obj;
+        int len = x.length;
+        int memory = 0;
+        for (int i = 0; i < len; i++) {
+            memory += types[i].getMemory(x[i]);
+        }
+        return memory;
+    }
+
+    @Override
+    public void read(ByteBuffer buff, Object[] obj, int len, boolean key) {
+        for (int i = 0; i < len; i++) {
+            obj[i] = read(buff);
+        }
+    }
+
+    @Override
+    public void write(WriteBuffer buff, Object[] obj, int len, boolean key) {
+        for (int i = 0; i < len; i++) {
+            write(buff, obj[i]);
+        }
+    }
+
+    @Override
+    public Object[] read(ByteBuffer buff) {
+        int len = DataUtils.readVarInt(buff);
+        Object[] x = new Object[len];
+        for (int i = 0; i < len; i++) {
+            x[i] = types[i].read(buff);
+        }
+        return x;
+    }
+
+    @Override
+    public void write(WriteBuffer buff, Object obj) {
+        Object[] x = (Object[]) obj;
+        int len = x.length;
+        buff.putVarInt(len);
+        for (int i = 0; i < len; i++) {
+            types[i].write(buff, x[i]);
+        }
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/store/SequenceMap.java b/modules/h2/src/test/java/org/h2/test/store/SequenceMap.java
new file mode 100644
index 0000000000000..45bc2e5a2cc49
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/store/SequenceMap.java
@@ -0,0 +1,86 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.store;
+
+import java.util.AbstractSet;
+import java.util.Iterator;
+import java.util.Set;
+import org.h2.mvstore.MVMap;
+
+/**
+ * A custom map returning the keys and values 1 .. 10.
+ */
+public class SequenceMap extends MVMap<Long, Long> {
+
+    /**
+     * The minimum value.
+     */
+    int min = 1;
+
+    /**
+     * The maximum value.
+     */
+    int max = 10;
+
+    public SequenceMap() {
+        super(null, null);
+    }
+
+    @Override
+    public Set<Long> keySet() {
+        return new AbstractSet<Long>() {
+
+            @Override
+            public Iterator<Long> iterator() {
+                return new Iterator<Long>() {
+
+                    long x = min;
+
+                    @Override
+                    public boolean hasNext() {
+                        return x <= max;
+                    }
+
+                    @Override
+                    public Long next() {
+                        return Long.valueOf(x++);
+                    }
+
+                    @Override
+                    public void remove() {
+                        throw new UnsupportedOperationException();
+                    }
+
+                };
+            }
+
+            @Override
+            public int size() {
+                return max - min + 1;
+            }
+        };
+    }
+
+    /**
+     * A builder for this class.
+     */
+    public static class Builder implements MapBuilder<SequenceMap, Long, Long> {
+
+        /**
+         * Create a new builder.
+         */
+        public Builder() {
+            // ignore
+        }
+
+        @Override
+        public SequenceMap create() {
+            return new SequenceMap();
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/store/TestBenchmark.java b/modules/h2/src/test/java/org/h2/test/store/TestBenchmark.java
new file mode 100644
index 0000000000000..f1ec2be18b958
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/store/TestBenchmark.java
@@ -0,0 +1,272 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.store;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.Statement;
+import java.util.Random;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.h2.mvstore.MVMap;
+import org.h2.mvstore.MVStore;
+import org.h2.store.FileLister;
+import org.h2.store.fs.FileUtils;
+import org.h2.test.TestBase;
+import org.h2.util.Task;
+
+/**
+ * Tests performance and helps analyze bottlenecks.
+ */
+public class TestBenchmark extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws Exception {
+        testConcurrency();
+
+        // TODO this test is currently disabled
+
+        test(true);
+        test(false);
+        test(true);
+        test(false);
+        test(true);
+        test(false);
+    }
+
+    private void testConcurrency() throws Exception {
+        // String fileName = getBaseDir() + "/" + getTestName();
+        String fileName = "nioMemFS:/" + getTestName();
+        FileUtils.delete(fileName);
+        MVStore store = new MVStore.Builder().cacheSize(16).
+                fileName(fileName).open();
+        MVMap<Integer, byte[]> map = store.openMap("test");
+        byte[] data = new byte[1024];
+        int count = 1000000;
+        for (int i = 0; i < count; i++) {
+            map.put(i, data);
+        }
+        store.close();
+        for (int concurrency = 1024; concurrency > 0; concurrency /= 2) {
+            testConcurrency(fileName, concurrency, count);
+            testConcurrency(fileName, concurrency, count);
+            testConcurrency(fileName, concurrency, count);
+        }
+        FileUtils.delete(fileName);
+    }
+
+    private void testConcurrency(String fileName,
+            int concurrency, final int count) throws Exception {
+        Thread.sleep(1000);
+        final MVStore store = new MVStore.Builder().cacheSize(256).
+                cacheConcurrency(concurrency).
+                fileName(fileName).open();
+        int threadCount = 128;
+        final CountDownLatch wait = new CountDownLatch(1);
+        final AtomicInteger counter = new AtomicInteger();
+        final AtomicBoolean stopped = new AtomicBoolean();
+        Task[] tasks = new Task[threadCount];
+        // Profiler prof = new Profiler().startCollecting();
+        for (int i = 0; i < threadCount; i++) {
+            final int x = i;
+            Task t = new Task() {
+                @Override
+                public void call() throws Exception {
+                    MVMap<Integer, byte[]> map = store.openMap("test");
+                    Random random = new Random(x);
+                    wait.await();
+                    while (!stopped.get()) {
+                        int key = random.nextInt(count);
+                        byte[] data = map.get(key);
+                        if (data.length > 1) {
+                            counter.incrementAndGet();
+                        }
+                    }
+                }
+            };
+            t.execute("t" + i);
+            tasks[i] = t;
+        }
+        wait.countDown();
+        try {
+            Thread.sleep(3000);
+        } catch (InterruptedException e) {
+            e.printStackTrace();
+        }
+        stopped.set(true);
+        for (Task t : tasks) {
+            t.get();
+        }
+        // System.out.println(prof.getTop(5));
+        String msg = "concurrency " + concurrency +
+                " threads " + threadCount + " requests: " + counter;
+        System.out.println(msg);
+        trace(msg);
+        store.close();
+    }
+
+    private void test(boolean mvStore) throws Exception {
+        // testInsertSelect(mvStore);
+        // testBinary(mvStore);
+        testCreateIndex(mvStore);
+    }
+
+    private void testCreateIndex(boolean mvStore) throws Exception {
+        FileUtils.deleteRecursive(getBaseDir(), true);
+        Connection conn;
+        Statement stat;
+        String url = "mvstore";
+        if (mvStore) {
+            // ;COMPRESS=TRUE";
+            url += ";MV_STORE=TRUE";
+        }
+
+        url = getURL(url, true);
+        conn = getConnection(url);
+        stat = conn.createStatement();
+        stat.execute("create table test(id bigint primary key, data bigint)");
+        conn.setAutoCommit(false);
+        PreparedStatement prep = conn
+                .prepareStatement("insert into test values(?, ?)");
+
+//        int rowCount = 10000000;
+        int rowCount = 1000000;
+
+        Random r = new Random(1);
+
+        for (int i = 0; i < rowCount; i++) {
+            prep.setInt(1, i);
+            // prep.setInt(2, i);
+            prep.setInt(2, r.nextInt());
+            prep.execute();
+            if (i % 10000 == 0) {
+                conn.commit();
+            }
+        }
+
+        long start = System.nanoTime();
+        // Profiler prof = new Profiler().startCollecting();
+        stat.execute("create index on test(data)");
+        // System.out.println(prof.getTop(5));
+
+        System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start) + " " +
+                (mvStore ? "mvstore" : "default"));
+        conn.createStatement().execute("shutdown compact");
+        conn.close();
+        for (String f : FileLister.getDatabaseFiles(getBaseDir(), "mvstore", true)) {
+            System.out.println(" " + f + " " + FileUtils.size(f));
+        }
+    }
+
+    private void testBinary(boolean mvStore) throws Exception {
+        FileUtils.deleteRecursive(getBaseDir(), true);
+        Connection conn;
+        Statement stat;
+        String url = "mvstore";
+        if (mvStore) {
+            url += ";MV_STORE=TRUE";
+        }
+
+        url = getURL(url, true);
+        conn = getConnection(url);
+        stat = conn.createStatement();
+        stat.execute("create table test(id bigint primary key, data blob)");
+        conn.setAutoCommit(false);
+        PreparedStatement prep = conn
+                .prepareStatement("insert into test values(?, ?)");
+        byte[] data = new byte[1024 * 1024];
+
+        int rowCount = 100;
+        int readCount = 20 * rowCount;
+
+        long start = System.nanoTime();
+
+        for (int i = 0; i < rowCount; i++) {
+            prep.setInt(1, i);
+            randomize(data, i);
+            prep.setBytes(2, data);
+            prep.execute();
+            if (i % 100 == 0) {
+                conn.commit();
+            }
+        }
+
+        prep = conn.prepareStatement("select * from test where id = ?");
+        for (int i = 0; i < readCount; i++) {
+            prep.setInt(1, i % rowCount);
+            prep.executeQuery();
+        }
+
+        System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start) + " " +
+                (mvStore ? "mvstore" : "default"));
+        conn.close();
+    }
+
+    private static void randomize(byte[] data, int i) {
+        Random r = new Random(i);
+        r.nextBytes(data);
+    }
+
+    private void testInsertSelect(boolean mvStore) throws Exception {
+
+        FileUtils.deleteRecursive(getBaseDir(), true);
+        Connection conn;
+        Statement stat;
+        String url = "mvstore";
+        if (mvStore) {
+            url += ";MV_STORE=TRUE;LOG=0;COMPRESS=TRUE";
+        }
+
+        url = getURL(url, true);
+        conn = getConnection(url);
+        stat = conn.createStatement();
+        stat.execute("create table test(id bigint primary key, name varchar)");
+        conn.setAutoCommit(false);
+        PreparedStatement prep = conn
+                .prepareStatement("insert into test values(?, ?)");
+        String data = "Hello World";
+
+        int rowCount = 100000;
+        int readCount = 20 * rowCount;
+
+        for (int i = 0; i < rowCount; i++) {
+            prep.setInt(1, i);
+            prep.setString(2, data);
+            prep.execute();
+            if (i % 100 == 0) {
+                conn.commit();
+            }
+        }
+        long start = System.nanoTime();
+
+        prep = conn.prepareStatement("select * from test where id = ?");
+        for (int i = 0; i < readCount; i++) {
+            prep.setInt(1, i % rowCount);
+            prep.executeQuery();
+        }
+
+        System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start) + " " +
+                (mvStore ? "mvstore" : "default"));
+        conn.createStatement().execute("shutdown compact");
+        conn.close();
+        for (String f : FileLister.getDatabaseFiles(getBaseDir(), "mvstore", true)) {
+            System.out.println(" " + f + " " + FileUtils.size(f));
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/store/TestCacheConcurrentLIRS.java b/modules/h2/src/test/java/org/h2/test/store/TestCacheConcurrentLIRS.java
new file mode 100644
index 0000000000000..df542ea597bfb
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/store/TestCacheConcurrentLIRS.java
@@ -0,0 +1,94 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.store;
+
+import java.util.Random;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.atomic.AtomicBoolean;
+import org.h2.mvstore.cache.CacheLongKeyLIRS;
+import org.h2.test.TestBase;
+import org.h2.util.Task;
+
+/**
+ * Tests the cache algorithm.
+ */
+public class TestCacheConcurrentLIRS extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws Exception {
+        testConcurrent();
+    }
+
+    private void testConcurrent() {
+        CacheLongKeyLIRS.Config cc = new CacheLongKeyLIRS.Config();
+        cc.maxMemory = 100;
+        final CacheLongKeyLIRS<Integer> test = new CacheLongKeyLIRS<>(cc);
+        int threadCount = 8;
+        final CountDownLatch wait = new CountDownLatch(1);
+        final AtomicBoolean stopped = new AtomicBoolean();
+        Task[] tasks = new Task[threadCount];
+        final int[] getCounts = new int[threadCount];
+        final int offset = 1000000;
+        for (int i = 0; i < 100; i++) {
+            test.put(offset + i, i);
+        }
+        final int[] keys = new int[1000];
+        Random random = new Random(1);
+        for (int i = 0; i < keys.length; i++) {
+            int key;
+            do {
+                key = (int) Math.abs(random.nextGaussian() * 50);
+            } while (key > 100);
+            keys[i] = key;
+        }
+        for (int i = 0; i < threadCount; i++) {
+            final int x = i;
+            Task t = new Task() {
+                @Override
+                public void call() throws Exception {
+                    Random random = new Random(x);
+                    wait.await();
+                    int i = 0;
+                    for (; !stopped.get(); i++) {
+                        int key = keys[random.nextInt(keys.length)];
+                        test.get(offset + key);
+                        if ((i & 127) == 0) {
+                            test.put(offset + random.nextInt(100), random.nextInt());
+                        }
+                    }
+                    getCounts[x] = i;
+                }
+            };
+            t.execute("t" + i);
+            tasks[i] = t;
+        }
+        wait.countDown();
+        try {
+            Thread.sleep(2000);
+        } catch (InterruptedException e) {
+            e.printStackTrace();
+        }
+        stopped.set(true);
+        for (Task t : tasks) {
+            t.get();
+        }
+        int totalCount = 0;
+        for (int x : getCounts) {
+            totalCount += x;
+        }
+        trace("requests: " + totalCount);
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/store/TestCacheLIRS.java b/modules/h2/src/test/java/org/h2/test/store/TestCacheLIRS.java
new file mode 100644
index 0000000000000..86e2ed279cc90
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/store/TestCacheLIRS.java
@@ -0,0 +1,574 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.store;
+
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.Random;
+import org.h2.dev.cache.CacheLIRS;
+import org.h2.test.TestBase;
+
+/**
+ * Tests the cache algorithm.
+ */
+public class TestCacheLIRS extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws Exception {
+        testCache();
+    }
+
+    private void testCache() {
+        testRandomSmallCache();
+        testEdgeCases();
+        testSize();
+        testClear();
+        testGetPutPeekRemove();
+        testPruneStack();
+        testLimitHot();
+        testLimitNonResident();
+        testBadHashMethod();
+        testLimitMemory();
+        testScanResistance();
+        testRandomOperations();
+    }
+
+    private static void testRandomSmallCache() {
+        Random r = new Random(1);
+        for (int i = 0; i < 10000; i++) {
+            int j = 0;
+            StringBuilder buff = new StringBuilder();
+            CacheLIRS<Integer, Integer> test = createCache(1 + r.nextInt(10));
+            for (; j < 30; j++) {
+                int key = r.nextInt(5);
+                switch (r.nextInt(3)) {
+                case 0:
+                    int memory = r.nextInt(5) + 1;
+                    buff.append("add ").append(key).append(' ').
+                            append(memory).append('\n');
+                    test.put(key, j, memory);
+                    break;
+                case 1:
+                    buff.append("remove ").append(key).append('\n');
+                    test.remove(key);
+                    break;
+                case 2:
+                    buff.append("get ").append(key).append('\n');
+                    test.get(key);
+                }
+            }
+        }
+    }
+
+    private void testEdgeCases() {
+        CacheLIRS<Integer, Integer> test = createCache(1);
+        test.put(1, 10, 100);
+        assertEquals(0, test.size());
+        try {
+            test.put(null, 10, 100);
+            fail();
+        } catch (NullPointerException e) {
+            // expected
+        }
+        try {
+            test.put(1, null, 100);
+            fail();
+        } catch (NullPointerException e) {
+            // expected
+        }
+        try {
+            test.setMaxMemory(0);
+            fail();
+        } catch (IllegalArgumentException e) {
+            // expected
+        }
+    }
+
+    private void testSize() {
+        verifyMapSize(7, 16);
+        verifyMapSize(13, 32);
+        verifyMapSize(25, 64);
+        verifyMapSize(49, 128);
+        verifyMapSize(97, 256);
+        verifyMapSize(193, 512);
+        verifyMapSize(385, 1024);
+        verifyMapSize(769, 2048);
+
+        CacheLIRS<Integer, Integer> test;
+
+        test = createCache(1000);
+        for (int j = 0; j < 2000; j++) {
+            test.put(j, j);
+        }
+        // for a cache of size 1000,
+        // there are 62 cold entries (about 6.25%).
+        assertEquals(62, test.size() - test.sizeHot());
+        // at most as many non-resident elements
+        // as there are entries in the stack
+        assertEquals(968, test.sizeNonResident());
+    }
+
+    private void verifyMapSize(int elements, int expectedMapSize) {
+        CacheLIRS<Integer, Integer> test;
+        test = createCache(elements - 1);
+        for (int i = 0; i < elements - 1; i++) {
+            test.put(i, i * 10);
+        }
+        assertTrue(test.sizeMapArray() + "<" + expectedMapSize,
+                test.sizeMapArray() < expectedMapSize);
+        test = createCache(elements);
+        for (int i = 0; i < elements + 1; i++) {
+            test.put(i, i * 10);
+        }
+        assertEquals(expectedMapSize, test.sizeMapArray());
+        test = createCache(elements * 2);
+        for (int i = 0; i < elements * 2; i++) {
+            test.put(i, i * 10);
+        }
+        assertTrue(test.sizeMapArray() + ">" + expectedMapSize,
+                test.sizeMapArray() > expectedMapSize);
+    }
+
+    private void testGetPutPeekRemove() {
+        CacheLIRS<Integer, Integer> test = createCache(4);
+        test.put(1, 10);
+        test.put(2, 20);
+        test.put(3, 30);
+        assertNull(test.peek(4));
+        assertNull(test.get(4));
+        test.put(4, 40);
+        verify(test, "mem: 4 stack: 4 3 2 1 cold: non-resident:");
+        // move middle to front
+        assertEquals(30, test.get(3).intValue());
+        assertEquals(20, test.get(2).intValue());
+        assertEquals(20, test.peek(2).intValue());
+        // already at the front (an optimization)
+        assertEquals(20, test.get(2).intValue());
+        assertEquals(10, test.peek(1).intValue());
+        assertEquals(10, test.get(1).intValue());
+        verify(test, "mem: 4 stack: 1 2 3 4 cold: non-resident:");
+        test.put(3, 30);
+        verify(test, "mem: 4 stack: 3 1 2 4 cold: non-resident:");
+        // 5 is cold; will make 4 non-resident
+        test.put(5, 50);
+        verify(test, "mem: 4 stack: 5 3 1 2 cold: 5 non-resident: 4");
+        assertEquals(1, test.getMemory(1));
+        assertEquals(1, test.getMemory(5));
+        assertEquals(0, test.getMemory(4));
+        assertEquals(0, test.getMemory(100));
+        assertNull(test.peek(4));
+        assertNull(test.get(4));
+        assertEquals(10, test.get(1).intValue());
+        assertEquals(20, test.get(2).intValue());
+        assertEquals(30, test.get(3).intValue());
+        verify(test, "mem: 4 stack: 3 2 1 cold: 5 non-resident: 4");
+        assertEquals(50, test.get(5).intValue());
+        verify(test, "mem: 4 stack: 5 3 2 1 cold: 5 non-resident: 4");
+        assertEquals(50, test.get(5).intValue());
+        verify(test, "mem: 4 stack: 5 3 2 cold: 1 non-resident: 4");
+
+        // remove
+        assertEquals(50, test.remove(5).intValue());
+        assertNull(test.remove(5));
+        verify(test, "mem: 3 stack: 3 2 1 cold: non-resident: 4");
+        assertNull(test.remove(4));
+        verify(test, "mem: 3 stack: 3 2 1 cold: non-resident:");
+        assertNull(test.remove(4));
+        verify(test, "mem: 3 stack: 3 2 1 cold: non-resident:");
+        test.put(4, 40);
+        test.put(5, 50);
+        verify(test, "mem: 4 stack: 5 4 3 2 cold: 5 non-resident: 1");
+        test.get(5);
+        test.get(2);
+        test.get(3);
+        test.get(4);
+        verify(test, "mem: 4 stack: 4 3 2 5 cold: 2 non-resident: 1");
+        assertEquals(50, test.remove(5).intValue());
+        verify(test, "mem: 3 stack: 4 3 2 cold: non-resident: 1");
+        assertEquals(20, test.remove(2).intValue());
+        assertFalse(test.containsKey(1));
+        assertNull(test.remove(1));
+        assertFalse(test.containsKey(1));
+        verify(test, "mem: 2 stack: 4 3 cold: non-resident:");
+        test.put(1, 10);
+        test.put(2, 20);
+        verify(test, "mem: 4 stack: 2 1 4 3 cold: non-resident:");
+        test.get(1);
+        test.get(3);
+        test.get(4);
+        verify(test, "mem: 4 stack: 4 3 1 2 cold: non-resident:");
+        assertEquals(10, test.remove(1).intValue());
+        verify(test, "mem: 3 stack: 4 3 2 cold: non-resident:");
+        test.remove(2);
+        test.remove(3);
+        test.remove(4);
+
+        // test clear
+        test.clear();
+        verify(test, "mem: 0 stack: cold: non-resident:");
+
+        // strange situation where there is only a non-resident entry
+        test.put(1, 10);
+        test.put(2, 20);
+        test.put(3, 30);
+        test.put(4, 40);
+        test.put(5, 50);
+        assertTrue(test.containsValue(50));
+        verify(test, "mem: 4 stack: 5 4 3 2 cold: 5 non-resident: 1");
+        test.put(1, 10);
+        verify(test, "mem: 4 stack: 1 5 4 3 2 cold: 1 non-resident: 5");
+        assertFalse(test.containsValue(50));
+        test.remove(2);
+        test.remove(3);
+        test.remove(4);
+        verify(test, "mem: 1 stack: 1 cold: non-resident: 5");
+        assertTrue(test.containsKey(1));
+        test.remove(1);
+        assertFalse(test.containsKey(1));
+        verify(test, "mem: 0 stack: cold: non-resident: 5");
+        assertFalse(test.containsKey(5));
+        assertTrue(test.isEmpty());
+
+        // verify that converting a hot to cold entry will prune the stack
+        test.clear();
+        test.put(1, 10);
+        test.put(2, 20);
+        test.put(3, 30);
+        test.put(4, 40);
+        test.put(5, 50);
+        test.get(4);
+        test.get(3);
+        verify(test, "mem: 4 stack: 3 4 5 2 cold: 5 non-resident: 1");
+        test.put(6, 60);
+        verify(test, "mem: 4 stack: 6 3 4 5 2 cold: 6 non-resident: 5 1");
+        // this will prune the stack (remove entry 5 as entry 2 becomes cold)
+        test.get(6);
+        verify(test, "mem: 4 stack: 6 3 4 cold: 2 non-resident: 5 1");
+    }
+
+    private void testPruneStack() {
+        CacheLIRS<Integer, Integer> test = createCache(5);
+        for (int i = 0; i < 7; i++) {
+            test.put(i, i * 10);
+        }
+        verify(test, "mem: 5 stack: 6 5 4 3 2 1 cold: 6 non-resident: 5 0");
+        test.get(4);
+        test.get(3);
+        test.get(2);
+        verify(test, "mem: 5 stack: 2 3 4 6 5 1 cold: 6 non-resident: 5 0");
+        // this call needs to prune the stack
+        test.remove(1);
+        verify(test, "mem: 4 stack: 2 3 4 6 cold: non-resident: 5 0");
+        test.put(0, 0);
+        test.put(1, 10);
+        // if the stack was not pruned, the following will fail
+        verify(test, "mem: 5 stack: 1 0 2 3 4 cold: 1 non-resident: 6 5");
+    }
+
+    private void testClear() {
+        CacheLIRS<Integer, Integer> test = createCache(40);
+        for (int i = 0; i < 5; i++) {
+            test.put(i, 10 * i, 9);
+        }
+        verify(test, "mem: 36 stack: 4 3 2 1 cold: 4 non-resident: 0");
+        for (Entry<Integer, Integer> e : test.entrySet()) {
+            assertTrue(e.getKey() >= 1 && e.getKey() <= 4);
+            assertTrue(e.getValue() >= 10 && e.getValue() <= 40);
+        }
+        for (int x : test.values()) {
+            assertTrue(x >= 10 && x <= 40);
+        }
+        for (int x : test.keySet()) {
+            assertTrue(x >= 1 && x <= 4);
+        }
+        assertEquals(40, test.getMaxMemory());
+        assertEquals(36, test.getUsedMemory());
+        assertEquals(4, test.size());
+        assertEquals(3, test.sizeHot());
+        assertEquals(1, test.sizeNonResident());
+        assertFalse(test.isEmpty());
+
+        // changing the limit is not supposed to modify the map
+        test.setMaxMemory(10);
+        assertEquals(10, test.getMaxMemory());
+        test.setMaxMemory(40);
+        verify(test, "mem: 36 stack: 4 3 2 1 cold: 4 non-resident: 0");
+
+        test.putAll(test);
+        verify(test, "mem: 4 stack: 4 3 2 1 cold: non-resident: 0");
+
+        test.clear();
+        verify(test, "mem: 0 stack: cold: non-resident:");
+
+        assertEquals(40, test.getMaxMemory());
+        assertEquals(0, test.getUsedMemory());
+        assertEquals(0, test.size());
+        assertEquals(0, test.sizeHot());
+        assertEquals(0, test.sizeNonResident());
+        assertTrue(test.isEmpty());
+    }
+
+    private void testLimitHot() {
+        CacheLIRS<Integer, Integer> test = createCache(100);
+        for (int i = 0; i < 300; i++) {
+            test.put(i, 10 * i);
+        }
+        assertEquals(100, test.size());
+        assertEquals(99, test.sizeNonResident());
+        assertEquals(93, test.sizeHot());
+    }
+
+    private void testLimitNonResident() {
+        CacheLIRS<Integer, Integer> test = createCache(4);
+        for (int i = 0; i < 20; i++) {
+            test.put(i, 10 * i);
+        }
+        verify(test, "mem: 4 stack: 19 18 17 16 3 2 1 " +
+                "cold: 19 non-resident: 18 17 16");
+    }
+
+    private void testBadHashMethod() {
+        // ensure a 2^n cache size
+        final int size = 4;
+
+        /**
+         * A class with a bad hashCode implementation.
+         */
+        class BadHash {
+            int x;
+
+            BadHash(int x) {
+                this.x = x;
+            }
+
+            @Override
+            public int hashCode() {
+                return (x & 1) * size * 2;
+            }
+
+            @Override
+            public boolean equals(Object o) {
+                return ((BadHash) o).x == x;
+            }
+
+            @Override
+            public String toString() {
+                return "" + x;
+            }
+
+        }
+
+        CacheLIRS<BadHash, Integer> test = createCache(size * 2);
+        for (int i = 0; i < size; i++) {
+            test.put(new BadHash(i), i);
+        }
+        for (int i = 0; i < size; i++) {
+            if (i % 3 == 0) {
+                assertEquals(i, test.remove(new BadHash(i)).intValue());
+                assertNull(test.remove(new BadHash(i)));
+            }
+        }
+        for (int i = 0; i < size; i++) {
+            if (i % 3 == 0) {
+                assertNull(test.get(new BadHash(i)));
+            } else {
+                assertEquals(i, test.get(new BadHash(i)).intValue());
+            }
+        }
+        for (int i = 0; i < size; i++) {
+            test.put(new BadHash(i), i);
+        }
+        for (int i = 0; i < size; i++) {
+            if (i % 3 == 0) {
+                assertEquals(i, test.remove(new BadHash(i)).intValue());
+                assertNull(test.remove(new BadHash(i)));
+            }
+        }
+        for (int i = 0; i < size; i++) {
+            if (i % 3 == 0) {
+                assertNull(test.get(new BadHash(i)));
+            } else {
+                assertEquals(i, test.get(new BadHash(i)).intValue());
+            }
+        }
+    }
+
+    private void testLimitMemory() {
+        CacheLIRS<Integer, Integer> test = createCache(4);
+        for (int i = 0; i < 5; i++) {
+            test.put(i, 10 * i, 1);
+        }
+        verify(test, "mem: 4 stack: 4 3 2 1 cold: 4 non-resident: 0");
+        assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4);
+        test.put(6, 60, 3);
+        verify(test, "mem: 4 stack: 6 3 cold: 6 non-resident:");
+        assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4);
+        test.put(7, 70, 3);
+        verify(test, "mem: 4 stack: 7 6 3 cold: 7 non-resident: 6");
+        assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4);
+        test.put(8, 80, 4);
+        verify(test, "mem: 4 stack: 8 cold: non-resident:");
+        assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4);
+    }
+
+    private void testScanResistance() {
+        boolean log = false;
+        int size = 20;
+        // cache size 11 (10 hot, 1 cold)
+        CacheLIRS<Integer, Integer> test = createCache(size / 2 + 1);
+        // init the cache with some dummy entries
+        for (int i = 0; i < size; i++) {
+            test.put(-i, -i * 10);
+        }
+        verify(test, null);
+        // init with 0..9, ensure those are hot entries
+        for (int i = 0; i < size / 2; i++) {
+            test.put(i, i * 10);
+            test.get(i);
+            if (log) {
+                System.out.println("get " + i + " -> " + test);
+            }
+        }
+        verify(test, null);
+        // read 0..9, add 10..19 (cold)
+        for (int i = 0; i < size; i++) {
+            Integer x = test.get(i);
+            Integer y = test.peek(i);
+            if (i < size / 2) {
+                assertTrue("i: " + i, x != null);
+                assertTrue("i: " + i, y != null);
+                assertEquals(i * 10, x.intValue());
+                assertEquals(i * 10, y.intValue());
+            } else {
+                assertNull(x);
+                assertNull(y);
+                test.put(i, i * 10);
+                // peek should have no effect
+                assertEquals(i * 10, test.peek(i).intValue());
+            }
+            if (log) {
+                System.out.println("get " + i + " -> " + test);
+            }
+            verify(test, null);
+        }
+        // ensure 0..9 are hot, 10..18 are not resident, 19 is cold
+        for (int i = 0; i < size; i++) {
+            Integer x = test.get(i);
+            if (i < size / 2 || i == size - 1) {
+                assertTrue("i: " + i, x != null);
+                assertEquals(i * 10, x.intValue());
+            } else {
+                assertNull(x);
+            }
+            verify(test, null);
+        }
+    }
+
+    private void testRandomOperations() {
+        boolean log = false;
+        int size = 10;
+        Random r = new Random(1);
+        for (int j = 0; j < 100; j++) {
+            CacheLIRS<Integer, Integer> test = createCache(size / 2);
+            HashMap<Integer, Integer> good = new HashMap<>();
+            for (int i = 0; i < 10000; i++) {
+                int key = r.nextInt(size);
+                int value = r.nextInt();
+                switch (r.nextInt(3)) {
+                case 0:
+                    if (log) {
+                        System.out.println(i + " put " + key + " " + value);
+                    }
+                    good.put(key, value);
+                    test.put(key, value);
+                    break;
+                case 1:
+                    if (log) {
+                        System.out.println(i + " get " + key);
+                    }
+                    Integer a = good.get(key);
+                    Integer b = test.get(key);
+                    if (a == null) {
+                        assertNull(b);
+                    } else if (b != null) {
+                        assertEquals(a, b);
+                    }
+                    break;
+                case 2:
+                    if (log) {
+                        System.out.println(i + " remove " + key);
+ } + good.remove(key); + test.remove(key); + break; + } + if (log) { + System.out.println(" -> " + toString(test)); + } + } + verify(test, null); + } + } + + private static String toString(CacheLIRS cache) { + StringBuilder buff = new StringBuilder(); + buff.append("mem: " + cache.getUsedMemory()); + buff.append(" stack:"); + for (K k : cache.keys(false, false)) { + buff.append(' ').append(k); + } + buff.append(" cold:"); + for (K k : cache.keys(true, false)) { + buff.append(' ').append(k); + } + buff.append(" non-resident:"); + for (K k : cache.keys(true, true)) { + buff.append(' ').append(k); + } + return buff.toString(); + } + + private void verify(CacheLIRS cache, String expected) { + if (expected != null) { + String got = toString(cache); + assertEquals(expected, got); + } + int mem = 0; + for (K k : cache.keySet()) { + mem += cache.getMemory(k); + } + assertEquals(mem, cache.getUsedMemory()); + List stack = cache.keys(false, false); + List cold = cache.keys(true, false); + List nonResident = cache.keys(true, true); + assertEquals(nonResident.size(), cache.sizeNonResident()); + HashSet hot = new HashSet<>(stack); + hot.removeAll(cold); + hot.removeAll(nonResident); + assertEquals(hot.size(), cache.sizeHot()); + assertEquals(hot.size() + cold.size(), cache.size()); + if (stack.size() > 0) { + K lastStack = stack.get(stack.size() - 1); + assertTrue(hot.contains(lastStack)); + } + } + + private static CacheLIRS createCache(int maxSize) { + return new CacheLIRS<>(maxSize, 1, 0); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestCacheLongKeyLIRS.java b/modules/h2/src/test/java/org/h2/test/store/TestCacheLongKeyLIRS.java new file mode 100644 index 0000000000000..d9942e26b4bf5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestCacheLongKeyLIRS.java @@ -0,0 +1,507 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map.Entry; +import java.util.Random; +import org.h2.mvstore.cache.CacheLongKeyLIRS; +import org.h2.test.TestBase; + +/** + * Tests the cache algorithm. + */ +public class TestCacheLongKeyLIRS extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testCache(); + } + + private void testCache() { + testRandomSmallCache(); + testEdgeCases(); + testSize(); + testClear(); + testGetPutPeekRemove(); + testPruneStack(); + testLimitHot(); + testLimitNonResident(); + testLimitMemory(); + testScanResistance(); + testRandomOperations(); + } + + private static void testRandomSmallCache() { + Random r = new Random(1); + for (int i = 0; i < 10000; i++) { + int j = 0; + StringBuilder buff = new StringBuilder(); + CacheLongKeyLIRS test = createCache(1 + r.nextInt(10)); + for (; j < 30; j++) { + int key = r.nextInt(5); + switch (r.nextInt(3)) { + case 0: + int memory = r.nextInt(5) + 1; + buff.append("add ").append(key).append(' '). 
+ append(memory).append('\n'); + test.put(key, j, memory); + break; + case 1: + buff.append("remove ").append(key).append('\n'); + test.remove(key); + break; + case 2: + buff.append("get ").append(key).append('\n'); + test.get(key); + } + } + } + } + + private void testEdgeCases() { + CacheLongKeyLIRS test = createCache(1); + test.put(1, 10, 100); + assertEquals(0, test.size()); + try { + test.put(1, null, 100); + fail(); + } catch (IllegalArgumentException e) { + // expected + } + try { + test.setMaxMemory(0); + fail(); + } catch (IllegalArgumentException e) { + // expected + } + } + + private void testSize() { + verifyMapSize(7, 16); + verifyMapSize(13, 32); + verifyMapSize(25, 64); + verifyMapSize(49, 128); + verifyMapSize(97, 256); + verifyMapSize(193, 512); + verifyMapSize(385, 1024); + verifyMapSize(769, 2048); + + CacheLongKeyLIRS test; + + test = createCache(1000); + for (int j = 0; j < 2000; j++) { + test.put(j, j); + } + // for a cache of size 1000, + // there are 63 cold entries (about 6.25%). 
+ assertEquals(63, test.size() - test.sizeHot()); + // at most as many non-resident elements + // as there are entries in the stack + assertEquals(1000, test.size()); + assertEquals(1000, test.sizeNonResident()); + } + + private void verifyMapSize(int elements, int expectedMapSize) { + CacheLongKeyLIRS test; + test = createCache(elements - 1); + for (int i = 0; i < elements - 1; i++) { + test.put(i, i * 10); + } + assertTrue(test.sizeMapArray() + "<" + expectedMapSize, + test.sizeMapArray() < expectedMapSize); + test = createCache(elements); + for (int i = 0; i < elements + 1; i++) { + test.put(i, i * 10); + } + assertEquals(expectedMapSize, test.sizeMapArray()); + test = createCache(elements * 2); + for (int i = 0; i < elements * 2; i++) { + test.put(i, i * 10); + } + assertTrue(test.sizeMapArray() + ">" + expectedMapSize, + test.sizeMapArray() > expectedMapSize); + } + + private void testGetPutPeekRemove() { + CacheLongKeyLIRS test = createCache(4); + test.put(1, 10); + test.put(2, 20); + test.put(3, 30); + assertNull(test.peek(4)); + assertNull(test.get(4)); + test.put(4, 40); + verify(test, "mem: 4 stack: 4 3 2 1 cold: non-resident:"); + // move middle to front + assertEquals(30, test.get(3).intValue()); + assertEquals(20, test.get(2).intValue()); + assertEquals(20, test.peek(2).intValue()); + // already on (an optimization) + assertEquals(20, test.get(2).intValue()); + assertEquals(10, test.peek(1).intValue()); + assertEquals(10, test.get(1).intValue()); + verify(test, "mem: 4 stack: 1 2 3 4 cold: non-resident:"); + test.put(3, 30); + verify(test, "mem: 4 stack: 3 1 2 4 cold: non-resident:"); + // 5 is cold; will make 4 non-resident + test.put(5, 50); + verify(test, "mem: 4 stack: 5 3 1 2 cold: 5 non-resident: 4"); + assertEquals(1, test.getMemory(1)); + assertEquals(1, test.getMemory(5)); + assertEquals(0, test.getMemory(4)); + assertEquals(0, test.getMemory(100)); + assertNull(test.peek(4)); + assertNull(test.get(4)); + assertEquals(10, 
test.get(1).intValue()); + assertEquals(20, test.get(2).intValue()); + assertEquals(30, test.get(3).intValue()); + verify(test, "mem: 4 stack: 3 2 1 cold: 5 non-resident: 4"); + assertEquals(50, test.get(5).intValue()); + verify(test, "mem: 4 stack: 5 3 2 1 cold: 5 non-resident: 4"); + assertEquals(50, test.get(5).intValue()); + verify(test, "mem: 4 stack: 5 3 2 cold: 1 non-resident: 4"); + + // remove + assertEquals(50, test.remove(5).intValue()); + assertNull(test.remove(5)); + verify(test, "mem: 3 stack: 3 2 1 cold: non-resident: 4"); + assertNull(test.remove(4)); + verify(test, "mem: 3 stack: 3 2 1 cold: non-resident:"); + assertNull(test.remove(4)); + verify(test, "mem: 3 stack: 3 2 1 cold: non-resident:"); + test.put(4, 40); + test.put(5, 50); + verify(test, "mem: 4 stack: 5 4 3 2 cold: 5 non-resident: 1"); + test.get(5); + test.get(2); + test.get(3); + test.get(4); + verify(test, "mem: 4 stack: 4 3 2 5 cold: 2 non-resident: 1"); + assertEquals(50, test.remove(5).intValue()); + verify(test, "mem: 3 stack: 4 3 2 cold: non-resident: 1"); + assertEquals(20, test.remove(2).intValue()); + assertFalse(test.containsKey(1)); + assertNull(test.remove(1)); + assertFalse(test.containsKey(1)); + verify(test, "mem: 2 stack: 4 3 cold: non-resident:"); + test.put(1, 10); + test.put(2, 20); + verify(test, "mem: 4 stack: 2 1 4 3 cold: non-resident:"); + test.get(1); + test.get(3); + test.get(4); + verify(test, "mem: 4 stack: 4 3 1 2 cold: non-resident:"); + assertEquals(10, test.remove(1).intValue()); + verify(test, "mem: 3 stack: 4 3 2 cold: non-resident:"); + test.remove(2); + test.remove(3); + test.remove(4); + + // test clear + test.clear(); + verify(test, "mem: 0 stack: cold: non-resident:"); + + // strange situation where there is only a non-resident entry + test.put(1, 10); + test.put(2, 20); + test.put(3, 30); + test.put(4, 40); + test.put(5, 50); + assertTrue(test.containsValue(50)); + verify(test, "mem: 4 stack: 5 4 3 2 cold: 5 non-resident: 1"); + // 1 was 
non-resident, so this should make it hot + test.put(1, 10); + verify(test, "mem: 4 stack: 1 5 4 3 cold: 2 non-resident: 5"); + assertFalse(test.containsValue(50)); + test.remove(2); + test.remove(3); + test.remove(4); + verify(test, "mem: 1 stack: 1 cold: non-resident: 5"); + assertTrue(test.containsKey(1)); + test.remove(1); + assertFalse(test.containsKey(1)); + verify(test, "mem: 0 stack: cold: non-resident: 5"); + assertFalse(test.containsKey(5)); + assertTrue(test.isEmpty()); + + // verify that converting a hot to cold entry will prune the stack + test.clear(); + test.put(1, 10); + test.put(2, 20); + test.put(3, 30); + test.put(4, 40); + test.put(5, 50); + test.get(4); + test.get(3); + verify(test, "mem: 4 stack: 3 4 5 2 cold: 5 non-resident: 1"); + test.put(6, 60); + verify(test, "mem: 4 stack: 6 3 4 5 2 cold: 6 non-resident: 5 1"); + // this will prune the stack (remove entry 5 as entry 2 becomes cold) + test.get(6); + verify(test, "mem: 4 stack: 6 3 4 cold: 2 non-resident: 5 1"); + } + + private void testPruneStack() { + CacheLongKeyLIRS test = createCache(5); + for (int i = 0; i < 7; i++) { + test.put(i, i * 10); + } + verify(test, "mem: 5 stack: 6 5 4 3 2 1 cold: 6 non-resident: 5 0"); + test.get(4); + test.get(3); + test.get(2); + verify(test, "mem: 5 stack: 2 3 4 6 5 1 cold: 6 non-resident: 5 0"); + // this call needs to prune the stack + test.remove(1); + verify(test, "mem: 4 stack: 2 3 4 6 cold: non-resident: 5 0"); + test.put(0, 0); + test.put(1, 10); + // if the stack was not pruned, the following will fail + verify(test, "mem: 5 stack: 1 0 2 3 4 cold: 1 non-resident: 6 5"); + } + + private void testClear() { + CacheLongKeyLIRS test = createCache(40); + for (int i = 0; i < 5; i++) { + test.put(i, 10 * i, 9); + } + verify(test, "mem: 36 stack: 4 3 2 1 cold: 4 non-resident: 0"); + for (Entry e : test.entrySet()) { + assertTrue(e.getKey() >= 1 && e.getKey() <= 4); + assertTrue(e.getValue() >= 10 && e.getValue() <= 40); + } + for (int x : test.values()) 
{ + assertTrue(x >= 10 && x <= 40); + } + for (long x : test.keySet()) { + assertTrue(x >= 1 && x <= 4); + } + assertEquals(40, test.getMaxMemory()); + assertEquals(36, test.getUsedMemory()); + assertEquals(4, test.size()); + assertEquals(3, test.sizeHot()); + assertEquals(1, test.sizeNonResident()); + assertFalse(test.isEmpty()); + + // changing the limit is not supposed to modify the map + test.setMaxMemory(10); + assertEquals(10, test.getMaxMemory()); + test.setMaxMemory(40); + verify(test, "mem: 36 stack: 4 3 2 1 cold: 4 non-resident: 0"); + + test.putAll(test.getMap()); + verify(test, "mem: 4 stack: 4 3 2 1 cold: non-resident: 0"); + + test.clear(); + verify(test, "mem: 0 stack: cold: non-resident:"); + + assertEquals(40, test.getMaxMemory()); + assertEquals(0, test.getUsedMemory()); + assertEquals(0, test.size()); + assertEquals(0, test.sizeHot()); + assertEquals(0, test.sizeNonResident()); + assertTrue(test.isEmpty()); + } + + private void testLimitHot() { + CacheLongKeyLIRS test = createCache(100); + for (int i = 0; i < 300; i++) { + test.put(i, 10 * i); + } + assertEquals(100, test.size()); + assertEquals(200, test.sizeNonResident()); + assertEquals(90, test.sizeHot()); + } + + private void testLimitNonResident() { + CacheLongKeyLIRS test = createCache(4); + for (int i = 0; i < 20; i++) { + test.put(i, 10 * i); + } + verify(test, "mem: 4 stack: 19 18 17 16 15 14 13 12 11 10 3 2 1 " + + "cold: 19 non-resident: 18 17 16 15 14 13 12 11 10"); + } + + private void testLimitMemory() { + CacheLongKeyLIRS test = createCache(4); + for (int i = 0; i < 5; i++) { + test.put(i, 10 * i, 1); + } + verify(test, "mem: 4 stack: 4 3 2 1 cold: 4 non-resident: 0"); + assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4); + test.put(6, 60, 3); + verify(test, "mem: 4 stack: 6 4 3 cold: 6 non-resident: 2 1 4"); + assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4); + test.put(7, 70, 3); + verify(test, "mem: 4 stack: 7 6 3 cold: 7 non-resident: 6 2 1"); + 
assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4); + test.put(8, 80, 4); + verify(test, "mem: 4 stack: 8 cold: non-resident:"); + assertTrue("" + test.getUsedMemory(), test.getUsedMemory() <= 4); + } + + private void testScanResistance() { + boolean log = false; + int size = 20; + // cache size 11 (10 hot, 2 cold) + CacheLongKeyLIRS test = createCache(size / 2 + 2); + // init the cache with some dummy entries + for (int i = 0; i < size; i++) { + test.put(-i, -i * 10); + } + verify(test, null); + // init with 0..9, ensure those are hot entries + for (int i = 0; i < size / 2; i++) { + test.put(i, i * 10); + test.get(i); + if (log) { + System.out.println("get " + i + " -> " + test); + } + } + verify(test, null); + // read 0..9, add 10..19 (cold) + for (int i = 0; i < size; i++) { + Integer x = test.get(i); + Integer y = test.peek(i); + if (i < size / 2) { + assertTrue("i: " + i, x != null); + assertTrue("i: " + i, y != null); + assertEquals(i * 10, x.intValue()); + assertEquals(i * 10, y.intValue()); + } else { + assertNull(x); + assertNull(y); + test.put(i, i * 10); + // peek should have no effect + assertEquals(i * 10, test.peek(i).intValue()); + } + if (log) { + System.out.println("get " + i + " -> " + test); + } + verify(test, null); + } + // ensure 0..9 are hot, 10..17 are not resident, 18..19 are cold + for (int i = 0; i < size; i++) { + Integer x = test.get(i); + if (i < size / 2 || i == size - 1 || i == size - 2) { + assertTrue("i: " + i, x != null); + assertEquals(i * 10, x.intValue()); + } else { + assertNull(x); + } + verify(test, null); + } + } + + private void testRandomOperations() { + boolean log = false; + int size = 10; + Random r = new Random(1); + for (int j = 0; j < 100; j++) { + CacheLongKeyLIRS test = createCache(size / 2); + HashMap good = new HashMap<>(); + for (int i = 0; i < 10000; i++) { + int key = r.nextInt(size); + int value = r.nextInt(); + switch (r.nextInt(3)) { + case 0: + if (log) { + System.out.println(i + " put " + 
key + " " + value); + } + good.put(key, value); + test.put(key, value); + break; + case 1: + if (log) { + System.out.println(i + " get " + key); + } + Integer a = good.get(key); + Integer b = test.get(key); + if (a == null) { + assertNull(b); + } else if (b != null) { + assertEquals(a, b); + } + break; + case 2: + if (log) { + System.out.println(i + " remove " + key); + } + good.remove(key); + test.remove(key); + break; + } + if (log) { + System.out.println(" -> " + toString(test)); + } + } + verify(test, null); + } + } + + private static String toString(CacheLongKeyLIRS cache) { + StringBuilder buff = new StringBuilder(); + buff.append("mem: " + cache.getUsedMemory()); + buff.append(" stack:"); + for (long k : cache.keys(false, false)) { + buff.append(' ').append(k); + } + buff.append(" cold:"); + for (long k : cache.keys(true, false)) { + buff.append(' ').append(k); + } + buff.append(" non-resident:"); + for (long k : cache.keys(true, true)) { + buff.append(' ').append(k); + } + return buff.toString(); + } + + private void verify(CacheLongKeyLIRS cache, String expected) { + if (expected != null) { + String got = toString(cache); + assertEquals(expected, got); + } + int mem = 0; + for (long k : cache.keySet()) { + mem += cache.getMemory(k); + } + assertEquals(mem, cache.getUsedMemory()); + List stack = cache.keys(false, false); + List cold = cache.keys(true, false); + List nonResident = cache.keys(true, true); + assertEquals(nonResident.size(), cache.sizeNonResident()); + HashSet hot = new HashSet<>(stack); + hot.removeAll(cold); + hot.removeAll(nonResident); + assertEquals(hot.size(), cache.sizeHot()); + assertEquals(hot.size() + cold.size(), cache.size()); + if (stack.size() > 0) { + long lastStack = stack.get(stack.size() - 1); + assertTrue(hot.contains(lastStack)); + } + } + + private static CacheLongKeyLIRS createCache(int maxSize) { + CacheLongKeyLIRS.Config cc = new CacheLongKeyLIRS.Config(); + cc.maxMemory = maxSize; + cc.segmentCount = 1; + 
cc.stackMoveDistance = 0; + return new CacheLongKeyLIRS<>(cc); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestConcurrent.java b/modules/h2/src/test/java/org/h2/test/store/TestConcurrent.java new file mode 100644 index 0000000000000..6c37da554a258 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestConcurrent.java @@ -0,0 +1,805 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.FileOutputStream; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.channels.FileChannel; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Comparator; +import java.util.ConcurrentModificationException; +import java.util.Iterator; +import java.util.Map; +import java.util.Random; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.WriteBuffer; +import org.h2.mvstore.type.ObjectDataType; +import org.h2.store.fs.FileChannelInputStream; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.New; +import org.h2.util.Task; + +/** + * Tests concurrently accessing a tree map store. + */ +public class TestConcurrent extends TestMVStore { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + FileUtils.createDirectories(getBaseDir()); + testInterruptReopen(); + testConcurrentSaveCompact(); + testConcurrentDataType(); + testConcurrentAutoCommitAndChange(); + testConcurrentReplaceAndRead(); + testConcurrentChangeAndCompact(); + testConcurrentChangeAndGetVersion(); + testConcurrentFree(); + testConcurrentStoreAndRemoveMap(); + testConcurrentStoreAndClose(); + testConcurrentOnlineBackup(); + testConcurrentMap(); + testConcurrentIterate(); + testConcurrentWrite(); + testConcurrentRead(); + } + + private void testInterruptReopen() throws Exception { + String fileName = "retry:nio:" + getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + final MVStore s = new MVStore.Builder(). + fileName(fileName). + cacheSize(0). + open(); + final Thread mainThread = Thread.currentThread(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + mainThread.interrupt(); + Thread.sleep(10); + } + } + }; + try { + MVMap map = s.openMap("data"); + task.execute(); + for (int i = 0; i < 1000 && !task.isFinished(); i++) { + map.get(i % 1000); + map.put(i % 1000, new byte[1024]); + s.commit(); + } + } finally { + task.get(); + s.close(); + } + } + + private void testConcurrentSaveCompact() throws Exception { + String fileName = "memFS:" + getTestName(); + FileUtils.delete(fileName); + final MVStore s = new MVStore.Builder(). + fileName(fileName). + cacheSize(0). 
+ open(); + try { + s.setRetentionTime(0); + final MVMap dataMap = s.openMap("data"); + Task task = new Task() { + @Override + public void call() throws Exception { + int i = 0; + while (!stop) { + s.compact(100, 1024 * 1024); + dataMap.put(i % 1000, i * 10); + s.commit(); + i++; + } + } + }; + task.execute(); + for (int i = 0; i < 1000 && !task.isFinished(); i++) { + s.compact(100, 1024 * 1024); + dataMap.put(i % 1000, i * 10); + s.commit(); + } + task.get(); + } finally { + s.close(); + } + } + + private void testConcurrentDataType() throws InterruptedException { + final ObjectDataType type = new ObjectDataType(); + final Object[] data = new Object[]{ + null, + -1, + 1, + 10, + "Hello", + new Object[]{ new byte[]{(byte) -1, (byte) 1}, null}, + new Object[]{ new byte[]{(byte) 1, (byte) -1}, 10}, + new Object[]{ new byte[]{(byte) -1, (byte) 1}, 20L}, + new Object[]{ new byte[]{(byte) 1, (byte) -1}, 5}, + }; + Arrays.sort(data, new Comparator() { + @Override + public int compare(Object o1, Object o2) { + return type.compare(o1, o2); + } + }); + Task[] tasks = new Task[2]; + for (int i = 0; i < tasks.length; i++) { + tasks[i] = new Task() { + @Override + public void call() throws Exception { + Random r = new Random(); + WriteBuffer buff = new WriteBuffer(); + while (!stop) { + int a = r.nextInt(data.length); + int b = r.nextInt(data.length); + int comp; + if (r.nextBoolean()) { + comp = type.compare(a, b); + } else { + comp = -type.compare(b, a); + } + buff.clear(); + type.write(buff, a); + buff.clear(); + type.write(buff, b); + if (a == b) { + assertEquals(0, comp); + } else { + assertEquals(a > b ? 1 : -1, comp); + } + } + } + }; + tasks[i].execute(); + } + try { + Thread.sleep(100); + } finally { + for (Task t : tasks) { + t.get(); + } + } + } + + private void testConcurrentAutoCommitAndChange() throws InterruptedException { + String fileName = "memFS:" + getTestName(); + FileUtils.delete(fileName); + final MVStore s = new MVStore.Builder(). 
+ fileName(fileName).pageSplitSize(1000). + open(); + try { + s.setRetentionTime(1000); + s.setAutoCommitDelay(1); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + s.compact(100, 1024 * 1024); + } + } + }; + final MVMap dataMap = s.openMap("data"); + final MVMap dataSmallMap = s.openMap("dataSmall"); + s.openMap("emptyMap"); + final AtomicInteger counter = new AtomicInteger(); + Task task2 = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + int i = counter.getAndIncrement(); + dataMap.put(i, i * 10); + dataSmallMap.put(i % 100, i * 10); + if (i % 100 == 0) { + dataSmallMap.clear(); + } + } + } + }; + task.execute(); + task2.execute(); + Thread.sleep(1); + for (int i = 0; !task.isFinished() && !task2.isFinished() && i < 1000; i++) { + MVMap map = s.openMap("d" + (i % 3)); + map.put(0, i); + s.commit(); + } + task.get(); + task2.get(); + for (int i = 0; i < counter.get(); i++) { + assertEquals(10 * i, dataMap.get(i).intValue()); + } + } finally { + s.close(); + } + } + + private void testConcurrentReplaceAndRead() throws InterruptedException { + final MVStore s = new MVStore.Builder().open(); + final MVMap map = s.openMap("data"); + for (int i = 0; i < 100; i++) { + map.put(i, i % 100); + } + Task task = new Task() { + @Override + public void call() throws Exception { + int i = 0; + while (!stop) { + map.put(i % 100, i % 100); + i++; + if (i % 1000 == 0) { + s.commit(); + } + } + } + }; + task.execute(); + try { + Thread.sleep(1); + for (int i = 0; !task.isFinished() && i < 1000000; i++) { + assertEquals(i % 100, map.get(i % 100).intValue()); + } + } finally { + task.get(); + } + s.close(); + } + + private void testConcurrentChangeAndCompact() throws InterruptedException { + String fileName = "memFS:" + getTestName(); + FileUtils.delete(fileName); + final MVStore s = new MVStore.Builder().fileName( + fileName). + pageSplitSize(10). 
+ autoCommitDisabled().open(); + s.setRetentionTime(10000); + try { + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + s.compact(100, 1024 * 1024); + } + } + }; + task.execute(); + Task task2 = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + s.compact(100, 1024 * 1024); + } + } + }; + task2.execute(); + Thread.sleep(1); + for (int i = 0; !task.isFinished() && !task2.isFinished() && i < 1000; i++) { + MVMap map = s.openMap("d" + (i % 3)); + // MVMap map = s.openMap("d" + (i % 3), + // new MVMapConcurrent.Builder()); + map.put(0, i); + map.get(0); + s.commit(); + } + task.get(); + task2.get(); + } finally { + s.close(); + } + } + + private static void testConcurrentChangeAndGetVersion() throws InterruptedException { + for (int test = 0; test < 10; test++) { + final MVStore s = new MVStore.Builder(). + autoCommitDisabled().open(); + try { + s.setVersionsToKeep(10); + final MVMap m = s.openMap("data"); + m.put(1, 1); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + m.put(1, 1); + s.commit(); + } + } + }; + task.execute(); + Thread.sleep(1); + for (int i = 0; i < 10000; i++) { + if (task.isFinished()) { + break; + } + for (int j = 0; j < 20; j++) { + m.put(1, 1); + s.commit(); + } + s.setVersionsToKeep(15); + long version = s.getCurrentVersion() - 1; + try { + m.openVersion(version); + } catch (IllegalArgumentException e) { + // ignore + } + s.setVersionsToKeep(20); + } + task.get(); + s.commit(); + } finally { + s.close(); + } + } + } + + private void testConcurrentFree() throws InterruptedException { + String fileName = "memFS:" + getTestName(); + for (int test = 0; test < 10; test++) { + FileUtils.delete(fileName); + final MVStore s1 = new MVStore.Builder(). 
+ fileName(fileName).autoCommitDisabled().open(); + s1.setRetentionTime(0); + final int count = 200; + for (int i = 0; i < count; i++) { + MVMap m = s1.openMap("d" + i); + m.put(1, 1); + if (i % 2 == 0) { + s1.commit(); + } + } + s1.close(); + final MVStore s = new MVStore.Builder(). + fileName(fileName).autoCommitDisabled().open(); + try { + s.setRetentionTime(0); + final ArrayList> list = New.arrayList(); + for (int i = 0; i < count; i++) { + MVMap m = s.openMap("d" + i); + list.add(m); + } + + final AtomicInteger counter = new AtomicInteger(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + int x = counter.getAndIncrement(); + if (x >= count) { + break; + } + MVMap m = list.get(x); + m.clear(); + s.removeMap(m); + } + } + }; + task.execute(); + Thread.sleep(1); + while (true) { + int x = counter.getAndIncrement(); + if (x >= count) { + break; + } + MVMap m = list.get(x); + m.clear(); + s.removeMap(m); + if (x % 5 == 0) { + s.commit(); + } + } + task.get(); + // this will mark old chunks as unused, + // but not remove (and overwrite) them yet + s.commit(); + // this will remove them, so we end up with + // one unused one, and one active one + MVMap m = s.openMap("dummy"); + m.put(1, 1); + s.commit(); + m.put(2, 2); + s.commit(); + + MVMap meta = s.getMetaMap(); + int chunkCount = 0; + for (String k : meta.keyList()) { + if (k.startsWith("chunk.")) { + chunkCount++; + } + } + assertTrue("" + chunkCount, chunkCount < 3); + } finally { + s.close(); + } + } + } + + private void testConcurrentStoreAndRemoveMap() throws InterruptedException { + String fileName = "memFS:" + getTestName(); + FileUtils.delete(fileName); + final MVStore s = openStore(fileName); + try { + int count = 200; + for (int i = 0; i < count; i++) { + MVMap m = s.openMap("d" + i); + m.put(1, 1); + } + final AtomicInteger counter = new AtomicInteger(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + 
counter.incrementAndGet(); + s.commit(); + } + } + }; + task.execute(); + Thread.sleep(1); + for (int i = 0; i < count || counter.get() < count; i++) { + MVMap m = s.openMap("d" + i); + m.put(1, 10); + s.removeMap(m); + if (task.isFinished()) { + break; + } + } + task.get(); + } finally { + s.close(); + } + } + + private void testConcurrentStoreAndClose() throws InterruptedException { + String fileName = "memFS:" + getTestName(); + for (int i = 0; i < 10; i++) { + FileUtils.delete(fileName); + final MVStore s = openStore(fileName); + try { + final AtomicInteger counter = new AtomicInteger(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + s.setStoreVersion(counter.incrementAndGet()); + s.commit(); + } + } + }; + task.execute(); + while (counter.get() < 5) { + Thread.sleep(1); + } + try { + s.close(); + // sometimes closing works, in which case + // storing must fail at some point (not necessarily + // immediately) + for (int x = counter.get(), y = x; x <= y + 2; x++) { + Thread.sleep(1); + } + Exception e = task.getException(); + assertEquals(DataUtils.ERROR_CLOSED, + DataUtils.getErrorCode(e.getMessage())); + } catch (IllegalStateException e) { + // sometimes storing works, in which case + // closing must fail + assertEquals(DataUtils.ERROR_WRITING_FAILED, + DataUtils.getErrorCode(e.getMessage())); + task.get(); + } + } finally { + s.close(); + } + } + } + + /** + * Test the concurrent map implementation. 
+ */ + private static void testConcurrentMap() throws InterruptedException { + final MVStore s = openStore(null); + final MVMap m = s.openMap("data"); + try { + final int size = 20; + final Random rand = new Random(1); + Task task = new Task() { + @Override + public void call() throws Exception { + try { + while (!stop) { + if (rand.nextBoolean()) { + m.put(rand.nextInt(size), 1); + } else { + m.remove(rand.nextInt(size)); + } + m.get(rand.nextInt(size)); + m.firstKey(); + m.lastKey(); + m.ceilingKey(5); + m.floorKey(5); + m.higherKey(5); + m.lowerKey(5); + for (Iterator it = m.keyIterator(null); + it.hasNext();) { + it.next(); + } + } + } catch (Exception e) { + e.printStackTrace(); + } + } + }; + task.execute(); + Thread.sleep(1); + for (int j = 0; j < 100; j++) { + for (int i = 0; i < 100; i++) { + if (rand.nextBoolean()) { + m.put(rand.nextInt(size), 2); + } else { + m.remove(rand.nextInt(size)); + } + m.get(rand.nextInt(size)); + } + s.commit(); + Thread.sleep(1); + } + task.get(); + } finally { + s.close(); + } + } + + private void testConcurrentOnlineBackup() throws Exception { + String fileName = getBaseDir() + "/" + getTestName(); + String fileNameRestore = getBaseDir() + "/" + getTestName() + "2"; + final MVStore s = openStore(fileName); + final MVMap map = s.openMap("test"); + final Random r = new Random(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + for (int i = 0; i < 10; i++) { + map.put(i, new byte[100 * r.nextInt(100)]); + } + s.commit(); + map.clear(); + s.commit(); + long len = s.getFileStore().size(); + if (len > 1024 * 1024) { + // slow down writing a lot + Thread.sleep(200); + } else if (len > 20 * 1024) { + // slow down writing + Thread.sleep(20); + } + } + } + }; + task.execute(); + try { + for (int i = 0; i < 10; i++) { + // System.out.println("test " + i); + s.setReuseSpace(false); + OutputStream out = new BufferedOutputStream( + new FileOutputStream(fileNameRestore)); + long len = 
s.getFileStore().size(); + copyFileSlowly(s.getFileStore().getFile(), + len, out); + out.close(); + s.setReuseSpace(true); + MVStore s2 = openStore(fileNameRestore); + MVMap test = s2.openMap("test"); + for (Integer k : test.keySet()) { + test.get(k); + } + s2.close(); + // let it compact + Thread.sleep(10); + } + } finally { + task.get(); + } + s.close(); + } + + private static void copyFileSlowly(FileChannel file, long length, OutputStream out) + throws Exception { + file.position(0); + InputStream in = new BufferedInputStream(new FileChannelInputStream( + file, false)); + for (int j = 0; j < length; j++) { + int x = in.read(); + if (x < 0) { + break; + } + out.write(x); + } + in.close(); + } + + private static void testConcurrentIterate() { + MVStore s = new MVStore.Builder().pageSplitSize(3).open(); + s.setVersionsToKeep(100); + final MVMap map = s.openMap("test"); + final int len = 10; + final Random r = new Random(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + int x = r.nextInt(len); + if (r.nextBoolean()) { + map.remove(x); + } else { + map.put(x, r.nextInt(100)); + } + } + } + }; + task.execute(); + try { + for (int k = 0; k < 10000; k++) { + Iterator it = map.keyIterator(r.nextInt(len)); + long old = s.getCurrentVersion(); + s.commit(); + while (map.getVersion() == old) { + Thread.yield(); + } + while (it.hasNext()) { + it.next(); + } + } + } finally { + task.get(); + } + s.close(); + } + + + /** + * Test what happens on concurrent write. Concurrent write may corrupt the + * map, so that keys and values may become null. 
+ */ + private void testConcurrentWrite() throws InterruptedException { + final AtomicInteger detected = new AtomicInteger(); + final AtomicInteger notDetected = new AtomicInteger(); + for (int i = 0; i < 10; i++) { + testConcurrentWrite(detected, notDetected); + } + // in most cases, it should be detected + assertTrue(notDetected.get() * 10 <= detected.get()); + } + + private static void testConcurrentWrite(final AtomicInteger detected, + final AtomicInteger notDetected) throws InterruptedException { + final MVStore s = openStore(null); + final MVMap m = s.openMap("data"); + final int size = 20; + final Random rand = new Random(1); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + try { + if (rand.nextBoolean()) { + m.put(rand.nextInt(size), 1); + } else { + m.remove(rand.nextInt(size)); + } + m.get(rand.nextInt(size)); + } catch (ConcurrentModificationException e) { + detected.incrementAndGet(); + } catch (NegativeArraySizeException e) { + notDetected.incrementAndGet(); + } catch (ArrayIndexOutOfBoundsException e) { + notDetected.incrementAndGet(); + } catch (IllegalArgumentException e) { + notDetected.incrementAndGet(); + } catch (NullPointerException e) { + notDetected.incrementAndGet(); + } + } + } + }; + task.execute(); + try { + Thread.sleep(1); + for (int j = 0; j < 10; j++) { + for (int i = 0; i < 10; i++) { + try { + if (rand.nextBoolean()) { + m.put(rand.nextInt(size), 2); + } else { + m.remove(rand.nextInt(size)); + } + m.get(rand.nextInt(size)); + } catch (ConcurrentModificationException e) { + detected.incrementAndGet(); + } catch (NegativeArraySizeException e) { + notDetected.incrementAndGet(); + } catch (ArrayIndexOutOfBoundsException e) { + notDetected.incrementAndGet(); + } catch (IllegalArgumentException e) { + notDetected.incrementAndGet(); + } catch (NullPointerException e) { + notDetected.incrementAndGet(); + } + } + s.commit(); + Thread.sleep(1); + } + } finally { + task.get(); + } + s.close(); 
+ } + + private static void testConcurrentRead() throws InterruptedException { + final MVStore s = openStore(null); + final MVMap m = s.openMap("data"); + final int size = 3; + int x = (int) s.getCurrentVersion(); + for (int i = 0; i < size; i++) { + m.put(i, x); + } + s.commit(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + long v = s.getCurrentVersion() - 1; + Map old = m.openVersion(v); + for (int i = 0; i < size; i++) { + Integer x = old.get(i); + if (x == null || (int) v != x) { + Map old2 = m.openVersion(v); + throw new AssertionError(x + "<>" + v + " at " + i + " " + old2); + } + } + } + } + }; + task.execute(); + try { + Thread.sleep(1); + for (int j = 0; j < 100; j++) { + x = (int) s.getCurrentVersion(); + for (int i = 0; i < size; i++) { + m.put(i, x); + } + s.commit(); + Thread.sleep(1); + } + } finally { + task.get(); + } + s.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestConcurrentLinkedList.java b/modules/h2/src/test/java/org/h2/test/store/TestConcurrentLinkedList.java new file mode 100644 index 0000000000000..1c6c806416499 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestConcurrentLinkedList.java @@ -0,0 +1,207 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.Iterator; +import java.util.LinkedList; +import java.util.Random; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.mvstore.ConcurrentArrayList; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Test the concurrent linked list. + */ +public class TestConcurrentLinkedList extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestConcurrentLinkedList test = (TestConcurrentLinkedList) TestBase.createCaller().init(); + test.test(); + testPerformance(); + } + + @Override + public void test() throws Exception { + testRandomized(); + testConcurrent(); + } + + private static void testPerformance() { + testPerformance(true); + testPerformance(false); + testPerformance(true); + testPerformance(false); + testPerformance(true); + testPerformance(false); + } + + private static void testPerformance(final boolean stock) { + System.out.print(stock ? "stock " : "custom "); + long start = System.nanoTime(); + // final ConcurrentLinkedList test = + // new ConcurrentLinkedList(); + final ConcurrentArrayList test = + new ConcurrentArrayList<>(); + final LinkedList x = new LinkedList<>(); + final AtomicInteger counter = new AtomicInteger(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + if (stock) { + synchronized (x) { + Integer y = x.peekFirst(); + if (y == null) { + counter.incrementAndGet(); + } + } + } else { + Integer y = test.peekFirst(); + if (y == null) { + counter.incrementAndGet(); + } + } + } + } + }; + task.execute(); + test.add(-1); + x.add(-1); + for (int i = 0; i < 2000000; i++) { + Integer value = Integer.valueOf(i & 63); + if (stock) { + synchronized (x) { + Integer f = x.peekLast(); + if (f != value) { + x.add(i); + } + } + Math.sin(i); + synchronized (x) { + if (x.peekFirst() != x.peekLast()) { + x.removeFirst(); + } + } + } else { + Integer f = test.peekLast(); + if (f != value) { + test.add(i); + } + Math.sin(i); + f = test.peekFirst(); + if (f != test.peekLast()) { + test.removeFirst(f); + } + } + } + task.get(); + System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start)); + } + + private void testConcurrent() { + final ConcurrentArrayList test = new ConcurrentArrayList<>(); + // final ConcurrentRing test = new ConcurrentRing(); + final AtomicInteger counter = new AtomicInteger(); + final 
AtomicInteger size = new AtomicInteger(); + Task task = new Task() { + @Override + public void call() { + while (!stop) { + Thread.yield(); + if (size.get() < 10) { + test.add(counter.getAndIncrement()); + size.getAndIncrement(); + } + } + } + }; + task.execute(); + for (int i = 0; i < 1000000;) { + Thread.yield(); + Integer x = test.peekFirst(); + if (x == null) { + continue; + } + assertEquals(i, x.intValue()); + if (test.removeFirst(x)) { + size.getAndDecrement(); + i++; + } + } + task.get(); + } + + private void testRandomized() { + Random r = new Random(0); + for (int i = 0; i < 100; i++) { + ConcurrentArrayList test = new ConcurrentArrayList<>(); + // ConcurrentRing test = new ConcurrentRing(); + LinkedList x = new LinkedList<>(); + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < 10000; j++) { + buff.append("[" + j + "] "); + int opType = r.nextInt(3); + switch (opType) { + case 0: { + int value = r.nextInt(100); + buff.append("add " + value + "\n"); + test.add(value); + x.add(value); + break; + } + case 1: { + Integer value = x.peek(); + if (value != null && (x.size() > 5 || r.nextBoolean())) { + buff.append("removeFirst\n"); + x.removeFirst(); + test.removeFirst(value); + } else { + buff.append("removeFirst -1\n"); + test.removeFirst(-1); + } + break; + } + case 2: { + Integer value = x.peekLast(); + if (value != null && (x.size() > 5 || r.nextBoolean())) { + buff.append("removeLast\n"); + x.removeLast(); + test.removeLast(value); + } else { + buff.append("removeLast -1\n"); + test.removeLast(-1); + } + break; + } + } + assertEquals(toString(x.iterator()), toString(test.iterator())); + if (x.isEmpty()) { + assertNull(test.peekFirst()); + assertNull(test.peekLast()); + } else { + assertEquals(x.peekFirst().intValue(), test.peekFirst().intValue()); + assertEquals(x.peekLast().intValue(), test.peekLast().intValue()); + } + } + } + } + + private static String toString(Iterator it) { + StringBuilder buff = new StringBuilder(); + while 
(it.hasNext()) { + buff.append(' ').append(it.next()); + } + return buff.toString(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestDataUtils.java b/modules/h2/src/test/java/org/h2/test/store/TestDataUtils.java new file mode 100644 index 0000000000000..65ef8de3a8ad2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestDataUtils.java @@ -0,0 +1,360 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Random; + +import org.h2.mvstore.Chunk; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.WriteBuffer; +import org.h2.test.TestBase; + +/** + * Test utility classes. + */ +public class TestDataUtils extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testParse(); + testWriteBuffer(); + testEncodeLength(); + testFletcher(); + testMap(); + testMapRandomized(); + testMaxShortVarIntVarLong(); + testVarIntVarLong(); + testCheckValue(); + testPagePos(); + } + + private static void testWriteBuffer() { + WriteBuffer buff = new WriteBuffer(); + buff.put(new byte[1500000]); + buff.put(new byte[1900000]); + } + + private void testFletcher() { + byte[] data = new byte[10000]; + for (int i = 0; i < 10000; i += 1000) { + assertEquals(-1, DataUtils.getFletcher32(data, 0, i)); + } + Arrays.fill(data, (byte) 255); + for (int i = 0; i < 10000; i += 1000) { + assertEquals(-1, DataUtils.getFletcher32(data, 0, i)); + } + for (int i = 0; i < 1000; i++) { + for (int j = 0; j < 255; j++) { + Arrays.fill(data, 0, i, (byte) j); + data[i] = 0; + int a = DataUtils.getFletcher32(data, 0, i); + if (i % 2 == 1) { + // add length: same as appending a 0 + int b = DataUtils.getFletcher32(data, 0, i + 1); + assertEquals(a, b); + } + data[i] = 10; + int c = DataUtils.getFletcher32(data, 0, i); + assertEquals(a, c); + } + } + long last = 0; + for (int i = 1; i < 255; i++) { + Arrays.fill(data, (byte) i); + for (int j = 0; j < 10; j += 2) { + int x = DataUtils.getFletcher32(data, 0, j); + assertTrue(x != last); + last = x; + } + } + Arrays.fill(data, (byte) 10); + assertEquals(0x1e1e1414, + DataUtils.getFletcher32(data, 0, 10000)); + assertEquals(0x1e3fa7ed, + DataUtils.getFletcher32("Fletcher32".getBytes(), 0, 10)); + assertEquals(0x1e3fa7ed, + DataUtils.getFletcher32("XFletcher32".getBytes(), 1, 10)); + } + + private void testMap() { + StringBuilder buff = new StringBuilder(); + DataUtils.appendMap(buff, "", ""); + DataUtils.appendMap(buff, "a", "1"); + DataUtils.appendMap(buff, "b", ","); + DataUtils.appendMap(buff, "c", "1,2"); + DataUtils.appendMap(buff, "d", "\"test\""); + DataUtils.appendMap(buff, "e", "}"); + DataUtils.appendMap(buff, 
"name", "1:1\","); + String encoded = buff.toString(); + assertEquals(":,a:1,b:\",\",c:\"1,2\",d:\"\\\"test\\\"\",e:},name:\"1:1\\\",\"", encoded); + + HashMap m = DataUtils.parseMap(encoded); + assertEquals(7, m.size()); + assertEquals("", m.get("")); + assertEquals("1", m.get("a")); + assertEquals(",", m.get("b")); + assertEquals("1,2", m.get("c")); + assertEquals("\"test\"", m.get("d")); + assertEquals("}", m.get("e")); + assertEquals("1:1\",", m.get("name")); + assertEquals("1:1\",", DataUtils.getMapName(encoded)); + + buff.setLength(0); + DataUtils.appendMap(buff, "1", "1"); + DataUtils.appendMap(buff, "name", "2"); + DataUtils.appendMap(buff, "3", "3"); + encoded = buff.toString(); + assertEquals("2", DataUtils.parseMap(encoded).get("name")); + assertEquals("2", DataUtils.getMapName(encoded)); + + buff.setLength(0); + DataUtils.appendMap(buff, "name", "xx"); + encoded = buff.toString(); + assertEquals("xx", DataUtils.parseMap(encoded).get("name")); + assertEquals("xx", DataUtils.getMapName(encoded)); + } + + private void testMapRandomized() { + Random r = new Random(1); + String chars = "a_1,\\\":"; + for (int i = 0; i < 1000; i++) { + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < 20; j++) { + buff.append(chars.charAt(r.nextInt(chars.length()))); + } + try { + HashMap map = DataUtils.parseMap(buff.toString()); + assertFalse(map == null); + // ok + } catch (IllegalStateException e) { + // ok - but not another exception + } + } + } + + private void testMaxShortVarIntVarLong() { + ByteBuffer buff = ByteBuffer.allocate(100); + DataUtils.writeVarInt(buff, DataUtils.COMPRESSED_VAR_INT_MAX); + assertEquals(3, buff.position()); + buff.rewind(); + DataUtils.writeVarInt(buff, DataUtils.COMPRESSED_VAR_INT_MAX + 1); + assertEquals(4, buff.position()); + buff.rewind(); + DataUtils.writeVarLong(buff, DataUtils.COMPRESSED_VAR_LONG_MAX); + assertEquals(7, buff.position()); + buff.rewind(); + DataUtils.writeVarLong(buff, DataUtils.COMPRESSED_VAR_LONG_MAX + 
1); + assertEquals(8, buff.position()); + buff.rewind(); + } + + private void testVarIntVarLong() { + ByteBuffer buff = ByteBuffer.allocate(100); + for (long x = 0; x < 1000; x++) { + testVarIntVarLong(buff, x); + testVarIntVarLong(buff, -x); + } + for (long x = Long.MIN_VALUE, i = 0; i < 1000; x++, i++) { + testVarIntVarLong(buff, x); + } + for (long x = Long.MAX_VALUE, i = 0; i < 1000; x--, i++) { + testVarIntVarLong(buff, x); + } + for (int shift = 0; shift < 64; shift++) { + for (long x = 250; x < 260; x++) { + testVarIntVarLong(buff, x << shift); + testVarIntVarLong(buff, -(x << shift)); + } + } + // invalid varInt / varLong + // should work, but not read far too much + for (int i = 0; i < 50; i++) { + buff.put((byte) 255); + } + buff.flip(); + assertEquals(-1, DataUtils.readVarInt(buff)); + assertEquals(5, buff.position()); + buff.rewind(); + assertEquals(-1, DataUtils.readVarLong(buff)); + assertEquals(10, buff.position()); + + buff.clear(); + testVarIntVarLong(buff, DataUtils.COMPRESSED_VAR_INT_MAX); + testVarIntVarLong(buff, DataUtils.COMPRESSED_VAR_INT_MAX + 1); + testVarIntVarLong(buff, DataUtils.COMPRESSED_VAR_LONG_MAX); + testVarIntVarLong(buff, DataUtils.COMPRESSED_VAR_LONG_MAX + 1); + } + + private void testVarIntVarLong(ByteBuffer buff, long x) { + int len; + byte[] data; + try { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + DataUtils.writeVarLong(out, x); + data = out.toByteArray(); + } catch (IOException e) { + throw new RuntimeException(e); + } + DataUtils.writeVarLong(buff, x); + len = buff.position(); + assertEquals(data.length, len); + byte[] data2 = new byte[len]; + buff.position(0); + buff.get(data2); + assertEquals(data2, data); + buff.flip(); + long y = DataUtils.readVarLong(buff); + assertEquals(y, x); + assertEquals(len, buff.position()); + assertEquals(len, DataUtils.getVarLongLen(x)); + buff.clear(); + + int intX = (int) x; + try { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + DataUtils.writeVarInt(out, 
intX); + data = out.toByteArray(); + } catch (IOException e) { + throw new RuntimeException(e); + } + DataUtils.writeVarInt(buff, intX); + len = buff.position(); + assertEquals(data.length, len); + data2 = new byte[len]; + buff.position(0); + buff.get(data2); + assertEquals(data2, data); + buff.flip(); + int intY = DataUtils.readVarInt(buff); + assertEquals(intY, intX); + assertEquals(len, buff.position()); + assertEquals(len, DataUtils.getVarIntLen(intX)); + buff.clear(); + } + + private void testCheckValue() { + // 0 xor 0 = 0 + assertEquals(0, DataUtils.getCheckValue(0)); + // 1111... xor 1111... = 0 + assertEquals(0, DataUtils.getCheckValue(-1)); + // 0 xor 1111... = 1111... + assertEquals((short) -1, DataUtils.getCheckValue(-1 >>> 16)); + // 1111... xor 0 = 1111... + assertEquals((short) -1, DataUtils.getCheckValue(-1 << 16)); + // 0 xor 1000... = 1000... + assertEquals((short) (1 << 15), DataUtils.getCheckValue(1 << 15)); + // 1000... xor 0 = 1000... + assertEquals((short) (1 << 15), DataUtils.getCheckValue(1 << 31)); + } + + private void testParse() { + for (long i = -1; i != 0; i >>>= 1) { + String x = Long.toHexString(i); + assertEquals(i, DataUtils.parseHexLong(x)); + x = Long.toHexString(-i); + assertEquals(-i, DataUtils.parseHexLong(x)); + int j = (int) i; + x = Integer.toHexString(j); + assertEquals(j, DataUtils.parseHexInt(x)); + j = (int) -i; + x = Integer.toHexString(j); + assertEquals(j, DataUtils.parseHexInt(x)); + } + } + + private void testPagePos() { + assertEquals(0, DataUtils.PAGE_TYPE_LEAF); + assertEquals(1, DataUtils.PAGE_TYPE_NODE); + + long max = DataUtils.getPagePos(Chunk.MAX_ID, Integer.MAX_VALUE, + Integer.MAX_VALUE, DataUtils.PAGE_TYPE_NODE); + String hex = Long.toHexString(max); + assertEquals(max, DataUtils.parseHexLong(hex)); + assertEquals(Chunk.MAX_ID, DataUtils.getPageChunkId(max)); + assertEquals(Integer.MAX_VALUE, DataUtils.getPageOffset(max)); + assertEquals(DataUtils.PAGE_LARGE, DataUtils.getPageMaxLength(max)); + 
assertEquals(DataUtils.PAGE_TYPE_NODE, DataUtils.getPageType(max)); + + long overflow = DataUtils.getPagePos(Chunk.MAX_ID + 1, + Integer.MAX_VALUE, Integer.MAX_VALUE, DataUtils.PAGE_TYPE_NODE); + assertTrue(Chunk.MAX_ID + 1 != DataUtils.getPageChunkId(overflow)); + + for (int i = 0; i < Chunk.MAX_ID; i++) { + long pos = DataUtils.getPagePos(i, 3, 128, 1); + assertEquals(i, DataUtils.getPageChunkId(pos)); + assertEquals(3, DataUtils.getPageOffset(pos)); + assertEquals(128, DataUtils.getPageMaxLength(pos)); + assertEquals(1, DataUtils.getPageType(pos)); + } + for (int type = 0; type <= 1; type++) { + for (int chunkId = 0; chunkId < Chunk.MAX_ID; + chunkId += Chunk.MAX_ID / 100) { + for (long offset = 0; offset < Integer.MAX_VALUE; + offset += Integer.MAX_VALUE / 100) { + for (int length = 0; length < 2000000; length += 200000) { + long pos = DataUtils.getPagePos( + chunkId, (int) offset, length, type); + assertEquals(chunkId, DataUtils.getPageChunkId(pos)); + assertEquals(offset, DataUtils.getPageOffset(pos)); + assertTrue(DataUtils.getPageMaxLength(pos) >= length); + assertTrue(DataUtils.getPageType(pos) == type); + } + } + } + } + } + + private void testEncodeLength() { + int lastCode = 0; + assertEquals(0, DataUtils.encodeLength(32)); + assertEquals(1, DataUtils.encodeLength(33)); + assertEquals(1, DataUtils.encodeLength(48)); + assertEquals(2, DataUtils.encodeLength(49)); + assertEquals(2, DataUtils.encodeLength(64)); + assertEquals(3, DataUtils.encodeLength(65)); + assertEquals(30, DataUtils.encodeLength(1024 * 1024)); + assertEquals(31, DataUtils.encodeLength(1024 * 1024 + 1)); + assertEquals(31, DataUtils.encodeLength(Integer.MAX_VALUE)); + int[] maxLengthForIndex = {32, 48, 64, 96, 128, 192, 256, + 384, 512, 768, 1024, 1536, 2048, 3072, 4096, 6144, + 8192, 12288, 16384, 24576, 32768, 49152, 65536, + 98304, 131072, 196608, 262144, 393216, 524288, + 786432, 1048576}; + for (int i = 0; i < maxLengthForIndex.length; i++) { + assertEquals(i, 
DataUtils.encodeLength(maxLengthForIndex[i])); + assertEquals(i + 1, DataUtils.encodeLength(maxLengthForIndex[i] + 1)); + } + for (int i = 1024 * 1024 + 1; i < 100 * 1024 * 1024; i += 1024) { + int code = DataUtils.encodeLength(i); + assertEquals(31, code); + } + for (int i = 0; i < 2 * 1024 * 1024; i++) { + int code = DataUtils.encodeLength(i); + assertTrue(code <= 31 && code >= 0); + assertTrue(code >= lastCode); + if (code > lastCode) { + lastCode = code; + } + int max = DataUtils.getPageMaxLength(code << 1); + assertTrue(max >= i && max >= 32); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestFreeSpace.java b/modules/h2/src/test/java/org/h2/test/store/TestFreeSpace.java new file mode 100644 index 0000000000000..e9296e885c4d4 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestFreeSpace.java @@ -0,0 +1,145 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.mvstore.FreeSpaceBitSet; +import org.h2.test.TestBase; +import org.h2.util.Utils; + +/** + * Tests the free space list. + */ +public class TestFreeSpace extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + testMemoryUsage(); + testPerformance(); + } + + @Override + public void test() throws Exception { + testSimple(); + testRandomized(); + } + + private static void testPerformance() { + for (int i = 0; i < 10; i++) { + long t = System.nanoTime(); + + FreeSpaceBitSet f = new FreeSpaceBitSet(0, 4096); + // 75 ms + + // FreeSpaceList f = new FreeSpaceList(0, 4096); + // 13868 ms + + // FreeSpaceTree f = new FreeSpaceTree(0, 4096); + // 56 ms + + for (int j = 0; j < 100000; j++) { + f.markUsed(j * 2 * 4096, 4096); + } + for (int j = 0; j < 100000; j++) { + f.free(j * 2 * 4096, 4096); + } + for (int j = 0; j < 100000; j++) { + f.allocate(4096 * 2); + } + System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t)); + } + } + + private static void testMemoryUsage() { + + // 16 GB file size + long size = 16L * 1024 * 1024 * 1024; + System.gc(); + System.gc(); + long first = Utils.getMemoryUsed(); + + FreeSpaceBitSet f = new FreeSpaceBitSet(0, 4096); + // 512 KB + + // FreeSpaceTree f = new FreeSpaceTree(0, 4096); + // 64 MB + + // FreeSpaceList f = new FreeSpaceList(0, 4096); + // too slow + + for (long j = size; j > 0; j -= 4 * 4096) { + f.markUsed(j, 4096); + } + + System.gc(); + System.gc(); + long mem = Utils.getMemoryUsed() - first; + System.out.println("Memory used: " + mem); + System.out.println("f: " + f.toString().length()); + + } + + private void testSimple() { + FreeSpaceBitSet f1 = new FreeSpaceBitSet(2, 1024); + FreeSpaceList f2 = new FreeSpaceList(2, 1024); + FreeSpaceTree f3 = new FreeSpaceTree(2, 1024); + assertEquals(f1.toString(), f2.toString()); + assertEquals(f1.toString(), f3.toString()); + assertEquals(2 * 1024, f1.allocate(10240)); + assertEquals(2 * 1024, f2.allocate(10240)); + assertEquals(2 * 1024, f3.allocate(10240)); + assertEquals(f1.toString(), f2.toString()); + assertEquals(f1.toString(), f3.toString()); + f1.markUsed(20480, 1024); + f2.markUsed(20480, 1024); + 
f3.markUsed(20480, 1024); + assertEquals(f1.toString(), f2.toString()); + assertEquals(f1.toString(), f3.toString()); + } + + private void testRandomized() { + FreeSpaceBitSet f1 = new FreeSpaceBitSet(2, 8); + FreeSpaceList f2 = new FreeSpaceList(2, 8); + Random r = new Random(1); + StringBuilder log = new StringBuilder(); + for (int i = 0; i < 100000; i++) { + long pos = r.nextInt(1024); + int length = 1 + r.nextInt(8 * 128); + switch (r.nextInt(3)) { + case 0: { + log.append("allocate(" + length + ");\n"); + long a = f1.allocate(length); + long b = f2.allocate(length); + assertEquals(a, b); + break; + } + case 1: + if (f1.isUsed(pos, length)) { + log.append("free(" + pos + ", " + length + ");\n"); + f1.free(pos, length); + f2.free(pos, length); + } + break; + case 2: + if (f1.isFree(pos, length)) { + log.append("markUsed(" + pos + ", " + length + ");\n"); + f1.markUsed(pos, length); + f2.markUsed(pos, length); + } + break; + } + assertEquals(f1.toString(), f2.toString()); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestImmutableArray.java b/modules/h2/src/test/java/org/h2/test/store/TestImmutableArray.java new file mode 100644 index 0000000000000..9ab0832a99e3a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestImmutableArray.java @@ -0,0 +1,162 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.ArrayList; +import java.util.Iterator; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.dev.util.ImmutableArray2; +import org.h2.test.TestBase; + +/** + * Test the immutable array. + */ +public class TestImmutableArray extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestImmutableArray test = (TestImmutableArray) TestBase.createCaller().init(); + test.test(); + testPerformance(); + } + + @Override + public void test() throws Exception { + testRandomized(); + } + + private static void testPerformance() { + testPerformance(true); + testPerformance(false); + testPerformance(true); + testPerformance(false); + testPerformance(true); + testPerformance(false); + } + + private static void testPerformance(final boolean immutable) { +// immutable time 2068 dummy: 60000000 +// immutable time 1140 dummy: 60000000 +// ArrayList time 361 dummy: 60000000 + + System.out.print(immutable ? "immutable" : "ArrayList"); + long start = System.nanoTime(); + int count = 20000000; + Integer x = 1; + int sum = 0; + if (immutable) { + ImmutableArray2 test = ImmutableArray2.empty(); + for (int i = 0; i < count; i++) { + if (i % 10 != 0) { + test = test.insert(test.length(), x); + } else { + test = test.insert(i % 30, x); + } + if (test.length() > 100) { + while (test.length() > 30) { + if (i % 10 != 0) { + test = test.remove(test.length() - 1); + } else { + test = test.remove(0); + } + } + } + sum += test.get(0); + sum += test.get(test.length() - 1); + sum += test.get(test.length() / 2); + if (i % 10 == 0) { + test = test.set(0, x); + } + } + } else { + ArrayList test = new ArrayList<>(); + for (int i = 0; i < count; i++) { + if (i % 10 != 0) { + test.add(test.size(), x); + } else { + test.add(i % 30, x); + } + if (test.size() > 100) { + while (test.size() > 30) { + if (i % 2 == 0) { + test.remove(test.size() - 1); + } else { + test.remove(0); + } + } + } + sum += test.get(0); + sum += test.get(test.size() - 1); + sum += test.get(test.size() / 2); + if (i % 10 == 0) { + test.set(0, x); + } + } + } + System.out.println(" time " + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start) + + " dummy: " + sum); + } + + private void testRandomized() { + Random r = new Random(0); + for (int i = 0; i < 100; i++) { + ImmutableArray2 test = 
ImmutableArray2.empty(); + // ConcurrentRing test = new ConcurrentRing(); + ArrayList x = new ArrayList<>(); + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < 1000; j++) { + buff.append("[" + j + "] "); + int opType = r.nextInt(3); + switch (opType) { + case 0: { + int index = test.length() == 0 ? 0 : r.nextInt(test.length()); + int value = r.nextInt(100); + buff.append("insert " + index + " " + value + "\n"); + test = test.insert(index, value); + x.add(index, value); + break; + } + case 1: { + if (test.length() > 0) { + int index = r.nextInt(test.length()); + int value = r.nextInt(100); + buff.append("set " + index + " " + value + "\n"); + x.set(index, value); + test = test.set(index, value); + } + break; + } + case 2: { + if (test.length() > 0) { + int index = r.nextInt(test.length()); + buff.append("remove " + index + "\n"); + x.remove(index); + test = test.remove(index); + } + break; + } + } + assertEquals(x.size(), test.length()); + assertEquals(toString(x.iterator()), toString(test.iterator())); + } + } + } + + private static String toString(Iterator it) { + StringBuilder buff = new StringBuilder(); + while (it.hasNext()) { + buff.append(' ').append(it.next()); + } + return buff.toString(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestKillProcessWhileWriting.java b/modules/h2/src/test/java/org/h2/test/store/TestKillProcessWhileWriting.java new file mode 100644 index 0000000000000..468c179d0de94 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestKillProcessWhileWriting.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.Random; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.FilePathUnstable; + +/** + * Tests the MVStore. + */ +public class TestKillProcessWhileWriting extends TestBase { + + private String fileName; + private int seed; + private FilePathUnstable fs; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.big = true; + test.test(); + } + + @Override + public void test() throws Exception { + fs = FilePathUnstable.register(); + fs.setPartialWrites(false); + test("unstable:memFS:killProcess.h3"); + + if (config.big) { + try { + fs.setPartialWrites(true); + test("unstable:memFS:killProcess.h3"); + } finally { + fs.setPartialWrites(false); + } + } + FileUtils.delete("unstable:memFS:killProcess.h3"); + } + + private void test(String fileName) throws Exception { + for (seed = 0; seed < 10; seed++) { + this.fileName = fileName; + FileUtils.delete(fileName); + test(Integer.MAX_VALUE, seed); + int max = Integer.MAX_VALUE - fs.getDiskFullCount() + 10; + assertTrue("" + (max - 10), max > 0); + for (int i = 0; i < max; i++) { + test(i, seed); + } + } + } + + private void test(int x, int seed) throws Exception { + FileUtils.delete(fileName); + fs.setDiskFullCount(x, seed); + try { + write(); + verify(); + } catch (Exception e) { + if (x == Integer.MAX_VALUE) { + throw e; + } + fs.setDiskFullCount(0, seed); + verify(); + } + } + + private int write() { + MVStore s; + MVMap m; + + s = new MVStore.Builder(). + fileName(fileName). + pageSplitSize(50). + autoCommitDisabled(). 
+ open(); + m = s.openMap("data"); + Random r = new Random(seed); + int op = 0; + try { + for (; op < 100; op++) { + int k = r.nextInt(100); + byte[] v = new byte[r.nextInt(100) * 100]; + int type = r.nextInt(10); + switch (type) { + case 0: + case 1: + case 2: + case 3: + m.put(k, v); + break; + case 4: + case 5: + m.remove(k); + break; + case 6: + s.commit(); + break; + case 7: + s.compact(80, 1024); + break; + case 8: + m.clear(); + break; + case 9: + s.close(); + s = new MVStore.Builder(). + fileName(fileName). + pageSplitSize(50). + autoCommitDisabled(). + open(); + m = s.openMap("data"); + break; + } + } + s.close(); + return 0; + } catch (Exception e) { + s.closeImmediately(); + return op; + } + } + + private void verify() { + + MVStore s; + MVMap m; + + FileUtils.delete(fileName); + s = new MVStore.Builder(). + fileName(fileName).open(); + m = s.openMap("data"); + for (int i = 0; i < 100; i++) { + byte[] x = m.get(i); + if (x == null) { + break; + } + assertEquals(i * 100, x.length); + } + s.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestMVRTree.java b/modules/h2/src/test/java/org/h2/test/store/TestMVRTree.java new file mode 100644 index 0000000000000..b983f1782e109 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestMVRTree.java @@ -0,0 +1,446 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.store;
+
+import java.awt.AlphaComposite;
+import java.awt.Color;
+import java.awt.Graphics2D;
+import java.awt.RenderingHints;
+import java.awt.image.BufferedImage;
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Random;
+import javax.imageio.ImageIO;
+import javax.imageio.ImageWriter;
+import javax.imageio.stream.FileImageOutputStream;
+import org.h2.mvstore.MVStore;
+import org.h2.mvstore.rtree.MVRTreeMap;
+import org.h2.mvstore.rtree.SpatialKey;
+import org.h2.mvstore.type.StringDataType;
+import org.h2.store.fs.FileUtils;
+import org.h2.test.TestBase;
+import org.h2.util.New;
+
+/**
+ * Tests the r-tree.
+ */
+public class TestMVRTree extends TestMVStore {
+
+ /**
+ * Run just this test.
+ *
+ * @param a ignored
+ */
+ public static void main(String... a) throws Exception {
+ TestBase.createCaller().init().test();
+ }
+
+ @Override
+ public void test() {
+ testRemoveAll();
+ testRandomInsert();
+ testSpatialKey();
+ testExample();
+ testMany();
+ testSimple();
+ testRandom();
+ testRandomFind();
+ }
+
+ private void testRemoveAll() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ s = new MVStore.Builder().fileName(fileName).
+ pageSplitSize(100).open();
+ MVRTreeMap<String> map = s.openMap("data",
+ new MVRTreeMap.Builder<String>());
+ Random r = new Random(1);
+ for (int i = 0; i < 1000; i++) {
+ float x = r.nextFloat() * 50, y = r.nextFloat() * 50;
+ SpatialKey k = new SpatialKey(i % 100, x, x + 2, y, y + 1);
+ map.put(k, "i:" + i);
+ }
+ s.commit();
+ map.clear();
+ s.close();
+ }
+
+ private void testRandomInsert() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ s = new MVStore.Builder().fileName(fileName).
+ pageSplitSize(100).open();
+ MVRTreeMap<String> map = s.openMap("data",
+ new MVRTreeMap.Builder<String>());
+ Random r = new Random(1);
+ for (int i = 0; i < 1000; i++) {
+ if (i % 100 == 0) {
+ r.setSeed(1);
+ }
+ float x = r.nextFloat() * 50, y = r.nextFloat() * 50;
+ SpatialKey k = new SpatialKey(i % 100, x, x + 2, y, y + 1);
+ map.put(k, "i:" + i);
+ if (i % 10 == 0) {
+ s.commit();
+ }
+ }
+ s.close();
+ }
+
+ private void testSpatialKey() {
+ SpatialKey a0 = new SpatialKey(0, 1, 2, 3, 4);
+ SpatialKey a1 = new SpatialKey(0, 1, 2, 3, 4);
+ SpatialKey b0 = new SpatialKey(1, 1, 2, 3, 4);
+ SpatialKey c0 = new SpatialKey(1, 1.1f, 2.2f, 3.3f, 4.4f);
+ assertEquals(0, a0.hashCode());
+ assertEquals(1, b0.hashCode());
+ assertTrue(a0.equals(a0));
+ assertTrue(a0.equals(a1));
+ assertFalse(a0.equals(b0));
+ assertTrue(a0.equalsIgnoringId(b0));
+ assertFalse(b0.equals(c0));
+ assertFalse(b0.equalsIgnoringId(c0));
+ assertEquals("0: (1.0/2.0, 3.0/4.0)", a0.toString());
+ assertEquals("1: (1.0/2.0, 3.0/4.0)", b0.toString());
+ assertEquals("1: (1.1/2.2, 3.3/4.4)", c0.toString());
+ }
+
+ private void testExample() {
+ // create an in-memory store
+ MVStore s = MVStore.open(null);
+
+ // open an R-tree map
+ MVRTreeMap<String> r = s.openMap("data",
+ new MVRTreeMap.Builder<String>());
+
+ // add two key-value pairs
+ // the first value is the key id (to make the key unique)
+ // then the min x, max x, min y, max y
+ r.add(new SpatialKey(0, -3f, -2f, 2f, 3f), "left");
+ r.add(new SpatialKey(1, 3f, 4f, 4f, 5f), "right");
+
+ // iterate over the intersecting keys
+ Iterator<SpatialKey> it = r.findIntersectingKeys(
+ new SpatialKey(0, 0f, 9f, 3f, 6f));
+ for (SpatialKey k; it.hasNext();) {
+ k = it.next();
+ // System.out.println(k + ": " + r.get(k));
+ assertTrue(k != null);
+ }
+ s.close();
+ }
+
+ private void testMany() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ s = openStore(fileName);
+ // s.setMaxPageSize(50);
+ MVRTreeMap<String> r = s.openMap("data",
+
new MVRTreeMap.Builder<String>().dimensions(2).
+ valueType(StringDataType.INSTANCE));
+ // r.setQuadraticSplit(true);
+ Random rand = new Random(1);
+ int len = 1000;
+ // long t = System.nanoTime();
+ // Profiler prof = new Profiler();
+ // prof.startCollecting();
+ for (int i = 0; i < len; i++) {
+ float x = rand.nextFloat(), y = rand.nextFloat();
+ float p = (float) (rand.nextFloat() * 0.000001);
+ SpatialKey k = new SpatialKey(i, x - p, x + p, y - p, y + p);
+ r.add(k, "" + i);
+ if (i > 0 && (i % len / 10) == 0) {
+ s.commit();
+ }
+ if (i > 0 && (i % 10000) == 0) {
+ render(r, getBaseDir() + "/test.png");
+ }
+ }
+ s.close();
+ s = openStore(fileName);
+ r = s.openMap("data",
+ new MVRTreeMap.Builder<String>().dimensions(2).
+ valueType(StringDataType.INSTANCE));
+ rand = new Random(1);
+ for (int i = 0; i < len; i++) {
+ float x = rand.nextFloat(), y = rand.nextFloat();
+ float p = (float) (rand.nextFloat() * 0.000001);
+ SpatialKey k = new SpatialKey(i, x - p, x + p, y - p, y + p);
+ assertEquals("" + i, r.get(k));
+ }
+ assertEquals(len, r.size());
+ int count = 0;
+ for (SpatialKey k : r.keySet()) {
+ assertTrue(r.get(k) != null);
+ count++;
+ }
+ assertEquals(len, count);
+ rand = new Random(1);
+ for (int i = 0; i < len; i++) {
+ float x = rand.nextFloat(), y = rand.nextFloat();
+ float p = (float) (rand.nextFloat() * 0.000001);
+ SpatialKey k = new SpatialKey(i, x - p, x + p, y - p, y + p);
+ r.remove(k);
+ }
+ assertEquals(0, r.size());
+ s.close();
+ }
+
+ private void testSimple() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ s = openStore(fileName);
+ MVRTreeMap<String> r = s.openMap("data",
+ new MVRTreeMap.Builder<String>().dimensions(2).
+ valueType(StringDataType.INSTANCE));
+
+ add(r, "Bern", key(0, 46.57, 7.27, 124381));
+ add(r, "Basel", key(1, 47.34, 7.36, 170903));
+ add(r, "Zurich", key(2, 47.22, 8.33, 376008));
+ add(r, "Lucerne", key(3, 47.03, 8.18, 77491));
+ add(r, "Geneva", key(4, 46.12, 6.09, 191803));
+ add(r, "Lausanne", key(5, 46.31, 6.38, 127821));
+ add(r, "Winterthur", key(6, 47.30, 8.45, 102966));
+ add(r, "St. Gallen", key(7, 47.25, 9.22, 73500));
+ add(r, "Biel/Bienne", key(8, 47.08, 7.15, 51203));
+ add(r, "Lugano", key(9, 46.00, 8.57, 54667));
+ add(r, "Thun", key(10, 46.46, 7.38, 42623));
+ add(r, "Bellinzona", key(11, 46.12, 9.01, 17373));
+ add(r, "Chur", key(12, 46.51, 9.32, 33756));
+ // render(r, getBaseDir() + "/test.png");
+ ArrayList<String> list = New.arrayList();
+ for (SpatialKey x : r.keySet()) {
+ list.add(r.get(x));
+ }
+ Collections.sort(list);
+ assertEquals("[Basel, Bellinzona, Bern, Biel/Bienne, Chur, Geneva, " +
+ "Lausanne, Lucerne, Lugano, St. Gallen, Thun, Winterthur, Zurich]",
+ list.toString());
+
+ SpatialKey k;
+
+ // intersection
+ list.clear();
+ k = key(0, 47.34, 7.36, 0);
+ for (Iterator<SpatialKey> it = r.findIntersectingKeys(k); it.hasNext();) {
+ list.add(r.get(it.next()));
+ }
+ Collections.sort(list);
+ assertEquals("[Basel]", list.toString());
+
+ // contains
+ list.clear();
+ k = key(0, 47.34, 7.36, 0);
+ for (Iterator<SpatialKey> it = r.findContainedKeys(k); it.hasNext();) {
+ list.add(r.get(it.next()));
+ }
+ assertEquals(0, list.size());
+ k = key(0, 47.34, 7.36, 171000);
+ for (Iterator<SpatialKey> it = r.findContainedKeys(k); it.hasNext();) {
+ list.add(r.get(it.next()));
+ }
+ assertEquals("[Basel]", list.toString());
+
+ s.close();
+ }
+
+ private static void add(MVRTreeMap<String> r, String name, SpatialKey k) {
+ r.put(k, name);
+ }
+
+ private static SpatialKey key(int id, double y, double x, int population) {
+ float a = (float) ((int) x + (x - (int) x) * 5 / 3);
+ float b = 50 - (float) ((int) y + (y - (int) y) * 5 / 3);
+ float s = (float) Math.sqrt(population / 10000000.);
+
SpatialKey k = new SpatialKey(id, a - s, a + s, b - s, b + s);
+ return k;
+ }
+
+ private static void render(MVRTreeMap<String> r, String fileName) {
+ int width = 1000, height = 500;
+ BufferedImage img = new BufferedImage(width, height,
+ BufferedImage.TYPE_INT_ARGB);
+ Graphics2D g2d = (Graphics2D) img.getGraphics();
+ g2d.setBackground(Color.WHITE);
+ g2d.setColor(Color.WHITE);
+ g2d.fillRect(0, 0, width, height);
+ g2d.setComposite(AlphaComposite.SrcOver.derive(0.5f));
+ g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
+ RenderingHints.VALUE_ANTIALIAS_ON);
+ g2d.setColor(Color.BLACK);
+ SpatialKey b = new SpatialKey(0, Float.MAX_VALUE, Float.MIN_VALUE,
+ Float.MAX_VALUE, Float.MIN_VALUE);
+ for (SpatialKey x : r.keySet()) {
+ b.setMin(0, Math.min(b.min(0), x.min(0)));
+ b.setMin(1, Math.min(b.min(1), x.min(1)));
+ b.setMax(0, Math.max(b.max(0), x.max(0)));
+ b.setMax(1, Math.max(b.max(1), x.max(1)));
+ }
+ // System.out.println(b);
+ for (SpatialKey x : r.keySet()) {
+ int[] rect = scale(b, x, width, height);
+ g2d.drawRect(rect[0], rect[1], rect[2] - rect[0], rect[3] - rect[1]);
+ String s = r.get(x);
+ g2d.drawChars(s.toCharArray(), 0, s.length(), rect[0], rect[1] - 4);
+ }
+ g2d.setColor(Color.red);
+ ArrayList<SpatialKey> list = New.arrayList();
+ r.addNodeKeys(list, r.getRoot());
+ for (SpatialKey x : list) {
+ int[] rect = scale(b, x, width, height);
+ g2d.drawRect(rect[0], rect[1], rect[2] - rect[0], rect[3] - rect[1]);
+ }
+ ImageWriter out = ImageIO.getImageWritersByFormatName("png").next();
+ try {
+ out.setOutput(new FileImageOutputStream(new File(fileName)));
+ out.write(img);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ private static int[] scale(SpatialKey b, SpatialKey x, int width, int height) {
+ int[] rect = {
+ (int) ((x.min(0) - b.min(0)) * (width * 0.9) /
+ (b.max(0) - b.min(0)) + width * 0.05),
+ (int) ((x.min(1) - b.min(1)) * (height * 0.9) /
+ (b.max(1) - b.min(1)) + height * 0.05),
+ (int) ((x.max(0) - b.min(0)) * (width
* 0.9) /
+ (b.max(0) - b.min(0)) + width * 0.05),
+ (int) ((x.max(1) - b.min(1)) * (height * 0.9) /
+ (b.max(1) - b.min(1)) + height * 0.05),
+ };
+ return rect;
+ }
+
+ private void testRandom() {
+ testRandom(true);
+ testRandom(false);
+ }
+
+ private void testRandomFind() {
+ MVStore s = openStore(null);
+ MVRTreeMap<Integer> m = s.openMap("data",
+ new MVRTreeMap.Builder<Integer>());
+ int max = 100;
+ for (int x = 0; x < max; x++) {
+ for (int y = 0; y < max; y++) {
+ int id = x * max + y;
+ SpatialKey k = new SpatialKey(id, x, x, y, y);
+ m.put(k, id);
+ }
+ }
+ Random rand = new Random(1);
+ int operationCount = 1000;
+ for (int i = 0; i < operationCount; i++) {
+ int x1 = rand.nextInt(max), y1 = rand.nextInt(10);
+ int x2 = rand.nextInt(10), y2 = rand.nextInt(10);
+ int intersecting = Math.max(0, x2 - x1 + 1) * Math.max(0, y2 - y1 + 1);
+ int contained = Math.max(0, x2 - x1 - 1) * Math.max(0, y2 - y1 - 1);
+ SpatialKey k = new SpatialKey(0, x1, x2, y1, y2);
+ Iterator<SpatialKey> it = m.findContainedKeys(k);
+ int count = 0;
+ while (it.hasNext()) {
+ SpatialKey t = it.next();
+ assertTrue(t.min(0) > x1);
+ assertTrue(t.min(1) > y1);
+ assertTrue(t.max(0) < x2);
+ assertTrue(t.max(1) < y2);
+ count++;
+ }
+ assertEquals(contained, count);
+ it = m.findIntersectingKeys(k);
+ count = 0;
+ while (it.hasNext()) {
+ SpatialKey t = it.next();
+ assertTrue(t.min(0) >= x1);
+ assertTrue(t.min(1) >= y1);
+ assertTrue(t.max(0) <= x2);
+ assertTrue(t.max(1) <= y2);
+ count++;
+ }
+ assertEquals(intersecting, count);
+ }
+ }
+
+ private void testRandom(boolean quadraticSplit) {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = openStore(fileName);
+
+ MVRTreeMap<String> m = s.openMap("data",
+ new MVRTreeMap.Builder<String>());
+
+ m.setQuadraticSplit(quadraticSplit);
+ HashMap<SpatialKey, String> map = new HashMap<>();
+ Random rand = new Random(1);
+ int operationCount = 10000;
+ int maxValue = 300;
+ for (int i = 0; i < operationCount; i++) {
+ int key = rand.nextInt(maxValue);
+
Random rk = new Random(key);
+ float x = rk.nextFloat(), y = rk.nextFloat();
+ float p = (float) (rk.nextFloat() * 0.000001);
+ SpatialKey k = new SpatialKey(key, x - p, x + p, y - p, y + p);
+ String v = "" + rand.nextInt();
+ Iterator<SpatialKey> it;
+ switch (rand.nextInt(5)) {
+ case 0:
+ log(i + ": put " + k + " = " + v + " " + m.size());
+ m.put(k, v);
+ map.put(k, v);
+ break;
+ case 1:
+ log(i + ": remove " + k + " " + m.size());
+ m.remove(k);
+ map.remove(k);
+ break;
+ case 2: {
+ p = (float) (rk.nextFloat() * 0.01);
+ k = new SpatialKey(key, x - p, x + p, y - p, y + p);
+ it = m.findIntersectingKeys(k);
+ while (it.hasNext()) {
+ SpatialKey n = it.next();
+ String a = map.get(n);
+ assertFalse(a == null);
+ }
+ break;
+ }
+ case 3: {
+ p = (float) (rk.nextFloat() * 0.01);
+ k = new SpatialKey(key, x - p, x + p, y - p, y + p);
+ it = m.findContainedKeys(k);
+ while (it.hasNext()) {
+ SpatialKey n = it.next();
+ String a = map.get(n);
+ assertFalse(a == null);
+ }
+ break;
+ }
+ default:
+ String a = map.get(k);
+ String b = m.get(k);
+ if (a == null || b == null) {
+ assertTrue(a == b);
+ } else {
+ assertEquals(a, b);
+ }
+ break;
+ }
+ assertEquals(map.size(), m.size());
+ }
+ s.close();
+ }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/store/TestMVStore.java b/modules/h2/src/test/java/org/h2/test/store/TestMVStore.java
new file mode 100644
index 0000000000000..df40f362f9226
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/store/TestMVStore.java
@@ -0,0 +1,2080 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.lang.Thread.UncaughtExceptionHandler; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.charset.StandardCharsets; +import java.util.Iterator; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Random; +import java.util.TreeMap; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; +import org.h2.mvstore.Chunk; +import org.h2.mvstore.Cursor; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.FileStore; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.OffHeapStore; +import org.h2.mvstore.type.DataType; +import org.h2.mvstore.type.ObjectDataType; +import org.h2.mvstore.type.StringDataType; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.AssertThrows; + +/** + * Tests the MVStore. + */ +public class TestMVStore extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.config.big = true; + test.test(); + } + + @Override + public void test() throws Exception { + testRemoveMapRollback(); + testProvidedFileStoreNotOpenedAndClosed(); + testVolatileMap(); + testEntrySet(); + testCompressEmptyPage(); + testCompressed(); + testFileFormatExample(); + testMaxChunkLength(); + testCacheInfo(); + testRollback(); + testVersionsToKeep(); + testVersionsToKeep2(); + testRemoveMap(); + testIsEmpty(); + testOffHeapStorage(); + testNewerWriteVersion(); + testCompactFully(); + testBackgroundExceptionListener(); + testOldVersion(); + testAtomicOperations(); + testWriteBuffer(); + testWriteDelay(); + testEncryptedFile(); + testFileFormatChange(); + testRecreateMap(); + testRenameMapRollback(); + testCustomMapType(); + testCacheSize(); + testConcurrentOpen(); + testFileHeader(); + testFileHeaderCorruption(); + testIndexSkip(); + testMinMaxNextKey(); + testStoreVersion(); + testIterateOldVersion(); + testObjects(); + testExample(); + testExampleMvcc(); + testOpenStoreCloseLoop(); + testVersion(); + testTruncateFile(); + testFastDelete(); + testRollbackInMemory(); + testRollbackStored(); + testMeta(); + testInMemory(); + testLargeImport(); + testBtreeStore(); + testCompact(); + testCompactMapNotOpen(); + testReuseSpace(); + testRandom(); + testKeyValueClasses(); + testIterate(); + testCloseTwice(); + testSimple(); + + // longer running tests + testLargerThan2G(); + } + + private void testRemoveMapRollback() { + MVStore store = new MVStore.Builder(). 
+ open();
+ MVMap<String, String> map = store.openMap("test");
+ map.put("1", "Hello");
+ store.commit();
+ store.removeMap(map);
+ store.rollback();
+ assertTrue(store.hasMap("test"));
+ map = store.openMap("test");
+ // TODO the data should get back alive
+ assertNull(map.get("1"));
+ store.close();
+
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ store = new MVStore.Builder().
+ autoCommitDisabled().
+ fileName(fileName).
+ open();
+ map = store.openMap("test");
+ map.put("1", "Hello");
+ store.commit();
+ store.removeMap(map);
+ store.rollback();
+ assertTrue(store.hasMap("test"));
+ map = store.openMap("test");
+ // the data will get back alive
+ assertEquals("Hello", map.get("1"));
+ store.close();
+ }
+
+ private void testProvidedFileStoreNotOpenedAndClosed() {
+ final AtomicInteger openClose = new AtomicInteger();
+ FileStore fileStore = new OffHeapStore() {
+
+ @Override
+ public void open(String fileName, boolean readOnly, char[] encryptionKey) {
+ openClose.incrementAndGet();
+ super.open(fileName, readOnly, encryptionKey);
+ }
+
+ @Override
+ public void close() {
+ openClose.incrementAndGet();
+ super.close();
+ }
+ };
+ MVStore store = new MVStore.Builder().
+ fileStore(fileStore).
+ open();
+ store.close();
+ assertEquals(0, openClose.get());
+ }
+
+ private void testVolatileMap() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore store = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ MVMap<String, String> map = store.openMap("test");
+ assertFalse(map.isVolatile());
+ map.setVolatile(true);
+ assertTrue(map.isVolatile());
+ map.put("1", "Hello");
+ assertEquals("Hello", map.get("1"));
+ assertEquals(1, map.size());
+ store.close();
+ store = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ assertTrue(store.hasMap("test"));
+ map = store.openMap("test");
+ assertEquals(0, map.size());
+ store.close();
+ }
+
+ private void testEntrySet() {
+ MVStore s = new MVStore.Builder().open();
+ MVMap<Integer, Integer> map;
+ map = s.openMap("data");
+ for (int i = 0; i < 20; i++) {
+ map.put(i, i * 10);
+ }
+ int next = 0;
+ for (Entry<Integer, Integer> e : map.entrySet()) {
+ assertEquals(next, e.getKey().intValue());
+ assertEquals(next * 10, e.getValue().intValue());
+ next++;
+ }
+ }
+
+ private void testCompressEmptyPage() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore store = new MVStore.Builder().
+ cacheSize(100).fileName(fileName).
+ compress().
+ autoCommitBufferSize(10 * 1024).
+ open();
+ MVMap<Integer, Integer> map = store.openMap("test");
+ store.removeMap(map);
+ store.commit();
+ store.close();
+ store = new MVStore.Builder().
+ compress().
+ open();
+ store.close();
+ }
+
+ private void testCompressed() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ long lastSize = 0;
+ for (int level = 0; level <= 2; level++) {
+ FileUtils.delete(fileName);
+ MVStore.Builder builder = new MVStore.Builder().fileName(fileName);
+ if (level == 1) {
+ builder.compress();
+ } else if (level == 2) {
+ builder.compressHigh();
+ }
+ MVStore s = builder.open();
+ MVMap<String, String> map = s.openMap("data");
+ String data = new String(new char[1000]).replace((char) 0, 'x');
+ for (int i = 0; i < 400; i++) {
+ map.put(data + i, data);
+ }
+ s.close();
+ long size = FileUtils.size(fileName);
+ if (level > 0) {
+ assertTrue(size < lastSize);
+ }
+ lastSize = size;
+ s = new MVStore.Builder().fileName(fileName).open();
+ map = s.openMap("data");
+ for (int i = 0; i < 400; i++) {
+ assertEquals(data, map.get(data + i));
+ }
+ s.close();
+ }
+ }
+
+ private void testFileFormatExample() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = MVStore.open(fileName);
+ MVMap<Integer, String> map =
s.openMap("data");
+ for (int i = 0; i < 400; i++) {
+ map.put(i, "Hello");
+ }
+ s.commit();
+ for (int i = 0; i < 100; i++) {
+ map.put(0, "Hi");
+ }
+ s.commit();
+ s.close();
+ // ;MVStoreTool.dump(fileName);
+ }
+
+ private void testMaxChunkLength() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = new MVStore.Builder().fileName(fileName).open();
+ MVMap<Integer, byte[]> map = s.openMap("data");
+ map.put(0, new byte[2 * 1024 * 1024]);
+ s.commit();
+ map.put(1, new byte[10 * 1024]);
+ s.commit();
+ MVMap<String, String> meta = s.getMetaMap();
+ Chunk c = Chunk.fromString(meta.get("chunk.1"));
+ assertTrue(c.maxLen < Integer.MAX_VALUE);
+ assertTrue(c.maxLenLive < Integer.MAX_VALUE);
+ s.close();
+ }
+
+ private void testCacheInfo() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = new MVStore.Builder().fileName(fileName).cacheSize(2).open();
+ assertEquals(2, s.getCacheSize());
+ MVMap<Integer, byte[]> map;
+ map = s.openMap("data");
+ byte[] data = new byte[1024];
+ for (int i = 0; i < 1000; i++) {
+ map.put(i, data);
+ s.commit();
+ if (i < 50) {
+ assertEquals(0, s.getCacheSizeUsed());
+ } else if (i > 300) {
+ assertTrue(s.getCacheSizeUsed() >= 1);
+ }
+ }
+ s.close();
+ s = new MVStore.Builder().open();
+ assertEquals(0, s.getCacheSize());
+ assertEquals(0, s.getCacheSizeUsed());
+ s.close();
+ }
+
+ private void testVersionsToKeep() throws Exception {
+ MVStore s = new MVStore.Builder().open();
+ MVMap<Integer, Integer> map;
+ map = s.openMap("data");
+ for (int i = 0; i < 20; i++) {
+ long version = s.getCurrentVersion();
+ map.put(i, i);
+ s.commit();
+ if (version >= 6) {
+ map.openVersion(version - 5);
+ try {
+ map.openVersion(version - 6);
+ fail();
+ } catch (IllegalArgumentException e) {
+ // expected
+ }
+ }
+ }
+ }
+
+ private void testVersionsToKeep2() {
+ MVStore s = new MVStore.Builder().autoCommitDisabled().open();
+ s.setVersionsToKeep(2);
+ final MVMap<Integer, String> m = s.openMap("data");
+ s.commit();
+
assertEquals(1, s.getCurrentVersion());
+ m.put(1, "version 1");
+ s.commit();
+ assertEquals(2, s.getCurrentVersion());
+ m.put(1, "version 2");
+ s.commit();
+ assertEquals(3, s.getCurrentVersion());
+ m.put(1, "version 3");
+ s.commit();
+ m.put(1, "version 4");
+ assertEquals("version 4", m.openVersion(4).get(1));
+ assertEquals("version 3", m.openVersion(3).get(1));
+ assertEquals("version 2", m.openVersion(2).get(1));
+ new AssertThrows(IllegalArgumentException.class) {
+ @Override
+ public void test() throws Exception {
+ m.openVersion(1);
+ }
+ };
+ s.close();
+ }
+
+ private void testRemoveMap() throws Exception {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ MVMap<Integer, Integer> map;
+
+ map = s.openMap("data");
+ map.put(1, 1);
+ assertEquals(1, map.get(1).intValue());
+ s.commit();
+
+ s.removeMap(map);
+ s.commit();
+
+ map = s.openMap("data");
+ assertTrue(map.isEmpty());
+ map.put(2, 2);
+
+ s.close();
+ }
+
+ private void testIsEmpty() throws Exception {
+ MVStore s = new MVStore.Builder().
+ pageSplitSize(50).
+ open();
+ Map<Integer, byte[]> m = s.openMap("data");
+ m.put(1, new byte[50]);
+ m.put(2, new byte[50]);
+ m.put(3, new byte[50]);
+ m.remove(1);
+ m.remove(2);
+ m.remove(3);
+ assertEquals(0, m.size());
+ assertTrue(m.isEmpty());
+ s.close();
+ }
+
+ private void testOffHeapStorage() throws Exception {
+ OffHeapStore offHeap = new OffHeapStore();
+ MVStore s = new MVStore.Builder().
+ fileStore(offHeap).
+ open();
+ int count = 1000;
+ Map<Integer, String> map = s.openMap("data");
+ for (int i = 0; i < count; i++) {
+ map.put(i, "Hello " + i);
+ s.commit();
+ }
+ assertTrue(offHeap.getWriteCount() > count);
+ s.close();
+
+ s = new MVStore.Builder().
+ fileStore(offHeap).
+ open();
+ map = s.openMap("data");
+ for (int i = 0; i < count; i++) {
+ assertEquals("Hello " + i, map.get(i));
+ }
+ s.close();
+ }
+
+ private void testNewerWriteVersion() throws Exception {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = new MVStore.Builder().
+ encryptionKey("007".toCharArray()).
+ fileName(fileName).
+ open();
+ s.setRetentionTime(Integer.MAX_VALUE);
+ Map<String, Object> header = s.getStoreHeader();
+ assertEquals("1", header.get("format").toString());
+ header.put("formatRead", "1");
+ header.put("format", "2");
+ forceWriteStoreHeader(s);
+ MVMap<Integer, String> m = s.openMap("data");
+ forceWriteStoreHeader(s);
+ m.put(0, "Hello World");
+ s.close();
+ try {
+ s = new MVStore.Builder().
+ encryptionKey("007".toCharArray()).
+ fileName(fileName).
+ open();
+ header = s.getStoreHeader();
+ fail(header.toString());
+ } catch (IllegalStateException e) {
+ assertEquals(DataUtils.ERROR_UNSUPPORTED_FORMAT,
+ DataUtils.getErrorCode(e.getMessage()));
+ }
+ s = new MVStore.Builder().
+ encryptionKey("007".toCharArray()).
+ readOnly().
+ fileName(fileName).
+ open();
+ assertTrue(s.getFileStore().isReadOnly());
+ m = s.openMap("data");
+ assertEquals("Hello World", m.get(0));
+ s.close();
+ /* FileUtils.setReadOnly(fileName);
+ s = new MVStore.Builder().
+ encryptionKey("007".toCharArray()).
+ fileName(fileName).
+ open();
+ assertTrue(s.getFileStore().isReadOnly());
+ m = s.openMap("data");
+ assertEquals("Hello World", m.get(0));
+ s.close();*/
+
+ }
+
+ private void testCompactFully() throws Exception {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = new MVStore.Builder().
+ fileName(fileName).
+ autoCommitDisabled().
+ open();
+ MVMap<Integer, String> m;
+ for (int i = 0; i < 100; i++) {
+ m = s.openMap("data" + i);
+ m.put(0, "Hello World");
+ s.commit();
+ }
+ for (int i = 0; i < 100; i += 2) {
+ m = s.openMap("data" + i);
+ s.removeMap(m);
+ s.commit();
+ }
+ long sizeOld = s.getFileStore().size();
+ s.compactMoveChunks();
+ long sizeNew = s.getFileStore().size();
+ assertTrue("old: " + sizeOld + " new: " + sizeNew, sizeNew < sizeOld);
+ s.close();
+ }
+
+ private void testBackgroundExceptionListener() throws Exception {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ final AtomicReference<Throwable> exRef =
+ new AtomicReference<>();
+ s = new MVStore.Builder().
+ fileName(fileName).
+ backgroundExceptionHandler(new UncaughtExceptionHandler() {
+
+ @Override
+ public void uncaughtException(Thread t, Throwable e) {
+ exRef.set(e);
+ }
+
+ }).
+ open();
+ s.setAutoCommitDelay(10);
+ MVMap<Integer, String> m;
+ m = s.openMap("data");
+ s.getFileStore().getFile().close();
+ try {
+ m.put(1, "Hello");
+ for (int i = 0; i < 200; i++) {
+ if (exRef.get() != null) {
+ break;
+ }
+ sleep(10);
+ }
+ Throwable e = exRef.get();
+ assertTrue(e != null);
+ assertEquals(DataUtils.ERROR_WRITING_FAILED,
+ DataUtils.getErrorCode(e.getMessage()));
+ } catch (IllegalStateException e) {
+ // sometimes it is detected right away
+ assertEquals(DataUtils.ERROR_CLOSED,
+ DataUtils.getErrorCode(e.getMessage()));
+ }
+
+ s.closeImmediately();
+ FileUtils.delete(fileName);
+ }
+
+ private void testAtomicOperations() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ MVMap<Integer, byte[]> m;
+ s = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ m = s.openMap("data");
+
+ // putIfAbsent
+ assertNull(m.putIfAbsent(1, new byte[1]));
+ assertEquals(1, m.putIfAbsent(1, new byte[2]).length);
+ assertEquals(1, m.get(1).length);
+
+ // replace
+ assertNull(m.replace(2, new byte[2]));
+ assertNull(m.get(2));
+ assertEquals(1, m.replace(1, new byte[2]).length);
+ assertEquals(2, m.replace(1, new byte[3]).length);
+ assertEquals(3, m.replace(1, new byte[1]).length);
+
+ // replace with oldValue
+ assertFalse(m.replace(1, new byte[2], new byte[10]));
+ assertTrue(m.replace(1, new byte[1], new byte[2]));
+ assertTrue(m.replace(1, new byte[2], new byte[1]));
+
+ // remove
+ assertFalse(m.remove(1, new byte[2]));
+ assertTrue(m.remove(1, new byte[1]));
+
+ s.close();
+ FileUtils.delete(fileName);
+ }
+
+ private void testWriteBuffer() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ MVMap<Integer, byte[]> m;
+ byte[] data = new byte[1000];
+ long lastSize = 0;
+ int len = 1000;
+ for (int bs = 0; bs <= 1; bs++) {
+ s = new MVStore.Builder().
+ fileName(fileName).
+ autoCommitBufferSize(bs).
+ open();
+ m = s.openMap("data");
+ for (int i = 0; i < len; i++) {
+ m.put(i, data);
+ }
+ long size = s.getFileStore().size();
+ assertTrue("last:" + lastSize + " now: " + size, size > lastSize);
+ lastSize = size;
+ s.close();
+ }
+
+ s = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ m = s.openMap("data");
+ assertTrue(m.containsKey(1));
+
+ m.put(-1, data);
+ s.commit();
+ m.put(-2, data);
+ s.close();
+
+ s = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ m = s.openMap("data");
+ assertTrue(m.containsKey(-1));
+ assertTrue(m.containsKey(-2));
+
+ s.close();
+ FileUtils.delete(fileName);
+ }
+
+ private void testWriteDelay() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ MVMap<Integer, String> m;
+
+ FileUtils.delete(fileName);
+ s = new MVStore.Builder().
+ autoCommitDisabled().
+ fileName(fileName).open();
+ m = s.openMap("data");
+ m.put(1, "1");
+ s.commit();
+ s.close();
+ s = new MVStore.Builder().
+ autoCommitDisabled().
+ fileName(fileName).open();
+ m = s.openMap("data");
+ assertEquals(1, m.size());
+ s.close();
+
+ FileUtils.delete(fileName);
+ s = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ m = s.openMap("data");
+ m.put(1, "Hello");
+ m.put(2, "World.");
+ s.commit();
+ s.close();
+
+ s = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ s.setAutoCommitDelay(2);
+ m = s.openMap("data");
+ assertEquals("World.", m.get(2));
+ m.put(2, "World");
+ s.commit();
+ long v = s.getCurrentVersion();
+ long time = System.nanoTime();
+ m.put(3, "!");
+
+ for (int i = 200; i > 0; i--) {
+ if (s.getCurrentVersion() > v) {
+ break;
+ }
+ long diff = System.nanoTime() - time;
+ if (diff > TimeUnit.SECONDS.toNanos(1)) {
+ fail("diff=" + TimeUnit.NANOSECONDS.toMillis(diff));
+ }
+ sleep(10);
+ }
+ s.closeImmediately();
+
+ s = new MVStore.Builder().
+ fileName(fileName).
+ open();
+ m = s.openMap("data");
+ assertEquals("Hello", m.get(1));
+ assertEquals("World", m.get(2));
+ assertEquals("!", m.get(3));
+ s.close();
+
+ FileUtils.delete(fileName);
+ }
+
+ private void testEncryptedFile() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ MVMap<Integer, String> m;
+
+ char[] passwordChars = "007".toCharArray();
+ s = new MVStore.Builder().
+ fileName(fileName).
+ encryptionKey(passwordChars).
+ open();
+ assertEquals(0, passwordChars[0]);
+ assertEquals(0, passwordChars[1]);
+ assertEquals(0, passwordChars[2]);
+ assertTrue(FileUtils.exists(fileName));
+ m = s.openMap("test");
+ m.put(1, "Hello");
+ assertEquals("Hello", m.get(1));
+ s.close();
+
+ passwordChars = "008".toCharArray();
+ try {
+ s = new MVStore.Builder().
+ fileName(fileName).
+ encryptionKey(passwordChars).open();
+ fail();
+ } catch (IllegalStateException e) {
+ assertEquals(DataUtils.ERROR_FILE_CORRUPT,
+ DataUtils.getErrorCode(e.getMessage()));
+ }
+ assertEquals(0, passwordChars[0]);
+ assertEquals(0, passwordChars[1]);
+ assertEquals(0, passwordChars[2]);
+
+ passwordChars = "007".toCharArray();
+ s = new MVStore.Builder().
+ fileName(fileName).
+ encryptionKey(passwordChars).open();
+ assertEquals(0, passwordChars[0]);
+ assertEquals(0, passwordChars[1]);
+ assertEquals(0, passwordChars[2]);
+ m = s.openMap("test");
+ assertEquals("Hello", m.get(1));
+ s.close();
+
+ /* FileUtils.setReadOnly(fileName);
+ passwordChars = "007".toCharArray();
+ s = new MVStore.Builder().
+ fileName(fileName).
+ encryptionKey(passwordChars).open();
+ assertTrue(s.getFileStore().isReadOnly());
+ s.close();
+*/
+ FileUtils.delete(fileName);
+ assertFalse(FileUtils.exists(fileName));
+ }
+
+ private void testFileFormatChange() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s;
+ MVMap<Integer, Integer> m;
+ s = openStore(fileName);
+ s.setRetentionTime(Integer.MAX_VALUE);
+ m = s.openMap("test");
+ m.put(1, 1);
+ Map<String, Object> header = s.getStoreHeader();
+ int format = Integer.parseInt(header.get("format").toString());
+ assertEquals(1, format);
+ header.put("format", Integer.toString(format + 1));
+ forceWriteStoreHeader(s);
+ s.close();
+ try {
+ openStore(fileName).close();
+ fail();
+ } catch (IllegalStateException e) {
+ assertEquals(DataUtils.ERROR_UNSUPPORTED_FORMAT,
+ DataUtils.getErrorCode(e.getMessage()));
+ }
+ FileUtils.delete(fileName);
+ }
+
+ private void testRecreateMap() {
+ String fileName = getBaseDir() + "/" + getTestName();
+ FileUtils.delete(fileName);
+ MVStore s = openStore(fileName);
+ MVMap<Integer, Integer> m = s.openMap("test");
+ m.put(1, 1);
+ s.commit();
+ s.removeMap(m);
+ s.close();
+ s = openStore(fileName);
+ m = s.openMap("test");
+ assertNull(m.get(1));
+ s.close();
+ }
+
+ private void testRenameMapRollback() {
+ MVStore s = openStore(null); + MVMap map; + map = s.openMap("hello"); + map.put(1, 10); + long old = s.commit(); + s.renameMap(map, "world"); + map.put(2, 20); + assertEquals("world", map.getName()); + s.rollbackTo(old); + assertEquals("hello", map.getName()); + s.rollbackTo(0); + assertTrue(map.isClosed()); + s.close(); + } + + private void testCustomMapType() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + SequenceMap seq = s.openMap("data", new SequenceMap.Builder()); + StringBuilder buff = new StringBuilder(); + for (long x : seq.keySet()) { + buff.append(x).append(';'); + } + assertEquals("1;2;3;4;5;6;7;8;9;10;", buff.toString()); + s.close(); + } + + private void testCacheSize() { + if (config.memory) { + return; + } + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s; + MVMap map; + s = new MVStore.Builder(). + fileName(fileName). + autoCommitDisabled(). + compress().open(); + map = s.openMap("test"); + // add 10 MB of data + for (int i = 0; i < 1024; i++) { + map.put(i, new String(new char[10240])); + } + s.close(); + int[] expectedReadsForCacheSize = { + 3407, 2590, 1924, 1440, 1330, 956, 918 + }; + for (int cacheSize = 0; cacheSize <= 6; cacheSize += 4) { + int cacheMB = 1 + 3 * cacheSize; + s = new MVStore.Builder(). + fileName(fileName). 
+ cacheSize(cacheMB).open(); + assertEquals(cacheMB, s.getCacheSize()); + map = s.openMap("test"); + for (int i = 0; i < 1024; i += 128) { + for (int j = 0; j < i; j++) { + String x = map.get(j); + assertEquals(10240, x.length()); + } + } + long readCount = s.getFileStore().getReadCount(); + int expected = expectedReadsForCacheSize[cacheSize]; + assertTrue("reads: " + readCount + " expected: " + expected, + Math.abs(100 - (100 * expected / readCount)) < 5); + s.close(); + } + + } + + private void testConcurrentOpen() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = new MVStore.Builder().fileName(fileName).open(); + try { + MVStore s1 = new MVStore.Builder().fileName(fileName).open(); + s1.close(); + fail(); + } catch (IllegalStateException e) { + // expected + } + try { + MVStore s1 = new MVStore.Builder().fileName(fileName).readOnly().open(); + s1.close(); + fail(); + } catch (IllegalStateException e) { + // expected + } + assertFalse(s.getFileStore().isReadOnly()); + s.close(); + s = new MVStore.Builder().fileName(fileName).readOnly().open(); + assertTrue(s.getFileStore().isReadOnly()); + s.close(); + } + + private void testFileHeader() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + s.setRetentionTime(Integer.MAX_VALUE); + long time = System.currentTimeMillis(); + Map m = s.getStoreHeader(); + assertEquals("1", m.get("format").toString()); + long creationTime = (Long) m.get("created"); + assertTrue(Math.abs(time - creationTime) < 100); + m.put("test", "123"); + forceWriteStoreHeader(s); + s.close(); + s = openStore(fileName); + Object test = s.getStoreHeader().get("test"); + assertFalse(test == null); + assertEquals("123", test.toString()); + s.close(); + } + + private static void forceWriteStoreHeader(MVStore s) { + MVMap map = s.openMap("dummy"); + map.put(10, 100); + // this is to ensure the file header is overwritten + // the 
header is written at least every 20 commits + for (int i = 0; i < 30; i++) { + if (i > 5) { + s.setRetentionTime(0); + // ensure that the next save time is different, + // so that blocks can be reclaimed + // (on Windows, resolution is 10 ms) + sleep(1); + } + map.put(10, 110); + s.commit(); + } + s.removeMap(map); + s.commit(); + } + + private static void sleep(long ms) { + // on Windows, need to sleep in some cases, + // mainly because the milliseconds resolution of + // System.currentTimeMillis is 10 ms. + try { + Thread.sleep(ms); + } catch (InterruptedException e) { + // ignore + } + } + + private void testFileHeaderCorruption() throws Exception { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = new MVStore.Builder(). + fileName(fileName).pageSplitSize(1000).autoCommitDisabled().open(); + s.setRetentionTime(0); + MVMap map; + map = s.openMap("test"); + map.put(0, new byte[100]); + for (int i = 0; i < 10; i++) { + map = s.openMap("test" + i); + map.put(0, new byte[1000]); + s.commit(); + } + FileStore fs = s.getFileStore(); + long size = fs.getFile().size(); + for (int i = 0; i < 100; i++) { + map = s.openMap("test" + i); + s.removeMap(map); + s.commit(); + s.compact(100, 1); + if (fs.getFile().size() <= size) { + break; + } + } + // the last chunk is at the end + s.setReuseSpace(false); + map = s.openMap("test2"); + map.put(1, new byte[1000]); + s.close(); + FilePath f = FilePath.get(fileName); + int blockSize = 4 * 1024; + // test corrupt file headers + for (int i = 0; i <= blockSize; i += blockSize) { + FileChannel fc = f.open("rw"); + if (i == 0) { + // corrupt the last block (the end header) + fc.write(ByteBuffer.allocate(256), fc.size() - 256); + } + ByteBuffer buff = ByteBuffer.allocate(4 * 1024); + fc.read(buff, i); + String h = new String(buff.array(), StandardCharsets.UTF_8).trim(); + int idx = h.indexOf("fletcher:"); + int old = Character.digit(h.charAt(idx + "fletcher:".length()), 16); + int bad = 
(old + 1) & 15; + buff.put(idx + "fletcher:".length(), + (byte) Character.forDigit(bad, 16)); + buff.rewind(); + fc.write(buff, i); + fc.close(); + + if (i == 0) { + // if the first header is corrupt, the second + // header should be used + s = openStore(fileName); + map = s.openMap("test"); + assertEquals(100, map.get(0).length); + map = s.openMap("test2"); + assertFalse(map.containsKey(1)); + s.close(); + } else { + // both headers are corrupt + try { + s = openStore(fileName); + fail(); + } catch (Exception e) { + // expected + } + } + } + } + + private void testIndexSkip() { + MVStore s = openStore(null, 4); + MVMap map = s.openMap("test"); + for (int i = 0; i < 100; i += 2) { + map.put(i, 10 * i); + } + + Cursor c = map.cursor(50); + // skip must reset the root of the cursor + c.skip(10); + for (int i = 70; i < 100; i += 2) { + assertTrue(c.hasNext()); + assertEquals(i, c.next().intValue()); + } + assertFalse(c.hasNext()); + + for (int i = -1; i < 100; i++) { + long index = map.getKeyIndex(i); + if (i < 0 || (i % 2) != 0) { + assertEquals(i < 0 ? 
-1 : -(i / 2) - 2, index); + } else { + assertEquals(i / 2, index); + } + } + for (int i = -1; i < 60; i++) { + Integer k = map.getKey(i); + if (i < 0 || i >= 50) { + assertNull(k); + } else { + assertEquals(i * 2, k.intValue()); + } + } + // skip + c = map.cursor(0); + assertTrue(c.hasNext()); + assertEquals(0, c.next().intValue()); + c.skip(0); + assertEquals(2, c.next().intValue()); + c.skip(1); + assertEquals(6, c.next().intValue()); + c.skip(20); + assertEquals(48, c.next().intValue()); + + c = map.cursor(0); + c.skip(20); + assertEquals(40, c.next().intValue()); + + c = map.cursor(0); + assertEquals(0, c.next().intValue()); + + assertEquals(12, map.keyList().indexOf(24)); + assertEquals(24, map.keyList().get(12).intValue()); + assertEquals(-14, map.keyList().indexOf(25)); + assertEquals(map.size(), map.keyList().size()); + } + + private void testMinMaxNextKey() { + MVStore s = openStore(null); + MVMap map = s.openMap("test"); + map.put(10, 100); + map.put(20, 200); + + assertEquals(10, map.firstKey().intValue()); + assertEquals(20, map.lastKey().intValue()); + + assertEquals(20, map.ceilingKey(15).intValue()); + assertEquals(20, map.ceilingKey(20).intValue()); + assertEquals(10, map.floorKey(15).intValue()); + assertEquals(10, map.floorKey(10).intValue()); + assertEquals(20, map.higherKey(10).intValue()); + assertEquals(10, map.lowerKey(20).intValue()); + + final MVMap m = map; + assertEquals(10, m.ceilingKey(null).intValue()); + assertEquals(10, m.higherKey(null).intValue()); + assertNull(m.lowerKey(null)); + assertNull(m.floorKey(null)); + + for (int i = 3; i < 20; i++) { + s = openStore(null, 4); + map = s.openMap("test"); + for (int j = 3; j < i; j++) { + map.put(j * 2, j * 20); + } + if (i == 3) { + assertNull(map.firstKey()); + assertNull(map.lastKey()); + } else { + assertEquals(6, map.firstKey().intValue()); + int max = (i - 1) * 2; + assertEquals(max, map.lastKey().intValue()); + + for (int j = 0; j < i * 2 + 2; j++) { + if (j > max) { + 
assertNull(map.ceilingKey(j)); + } else { + int ceiling = Math.max((j + 1) / 2 * 2, 6); + assertEquals(ceiling, map.ceilingKey(j).intValue()); + } + + int floor = Math.min(max, Math.max(j / 2 * 2, 4)); + if (floor < 6) { + assertNull(map.floorKey(j)); + } else { + map.floorKey(j); + } + + int lower = Math.min(max, Math.max((j - 1) / 2 * 2, 4)); + if (lower < 6) { + assertNull(map.lowerKey(j)); + } else { + assertEquals(lower, map.lowerKey(j).intValue()); + } + + int higher = Math.max((j + 2) / 2 * 2, 6); + if (higher > max) { + assertNull(map.higherKey(j)); + } else { + assertEquals(higher, map.higherKey(j).intValue()); + } + } + } + } + } + + private void testStoreVersion() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = MVStore.open(fileName); + assertEquals(0, s.getCurrentVersion()); + assertEquals(0, s.getStoreVersion()); + s.setStoreVersion(0); + s.commit(); + s.setStoreVersion(1); + s.closeImmediately(); + s = MVStore.open(fileName); + assertEquals(1, s.getCurrentVersion()); + assertEquals(0, s.getStoreVersion()); + s.setStoreVersion(1); + s.close(); + s = MVStore.open(fileName); + assertEquals(2, s.getCurrentVersion()); + assertEquals(1, s.getStoreVersion()); + s.close(); + } + + private void testIterateOldVersion() { + MVStore s; + Map map; + s = new MVStore.Builder().open(); + map = s.openMap("test"); + int len = 100; + for (int i = 0; i < len; i++) { + map.put(i, 10 * i); + } + Iterator it = map.keySet().iterator(); + s.commit(); + for (int i = 0; i < len; i += 2) { + map.remove(i); + } + int count = 0; + while (it.hasNext()) { + it.next(); + count++; + } + assertEquals(len, count); + s.close(); + } + + private void testObjects() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s; + Map map; + s = new MVStore.Builder().fileName(fileName).open(); + map = s.openMap("test"); + map.put(1, "Hello"); + map.put("2", 200); + map.put(new Object[1], new 
Object[]{1, "2"});
+        s.close();
+
+        s = new MVStore.Builder().fileName(fileName).open();
+        map = s.openMap("test");
+        assertEquals("Hello", map.get(1).toString());
+        assertEquals(200, ((Integer) map.get("2")).intValue());
+        Object[] x = (Object[]) map.get(new Object[1]);
+        assertEquals(2, x.length);
+        assertEquals(1, ((Integer) x[0]).intValue());
+        assertEquals("2", (String) x[1]);
+        s.close();
+    }
+
+    private void testExample() {
+        String fileName = getBaseDir() + "/" + getTestName();
+        FileUtils.delete(fileName);
+
+        // open the store (in-memory if fileName is null)
+        MVStore s = MVStore.open(fileName);
+
+        // create/get the map named "data"
+        MVMap<Integer, String> map = s.openMap("data");
+
+        // add and read some data
+        map.put(1, "Hello World");
+        // System.out.println(map.get(1));
+
+        // close the store (this will persist changes)
+        s.close();
+
+        s = MVStore.open(fileName);
+        map = s.openMap("data");
+        assertEquals("Hello World", map.get(1));
+        s.close();
+    }
+
+    private void testExampleMvcc() {
+        String fileName = getBaseDir() + "/" + getTestName();
+        FileUtils.delete(fileName);
+
+        // open the store (in-memory if fileName is null)
+        MVStore s = MVStore.open(fileName);
+
+        // create/get the map named "data"
+        MVMap<Integer, String> map = s.openMap("data");
+
+        // add some data
+        map.put(1, "Hello");
+        map.put(2, "World");
+
+        // get the current version, for later use
+        long oldVersion = s.getCurrentVersion();
+
+        // from now on, the old version is read-only
+        s.commit();
+
+        // more changes, in the new version
+        // changes can be rolled back if required
+        // changes always go into "head" (the newest version)
+        map.put(1, "Hi");
+        map.remove(2);
+
+        // access the old data (before the commit)
+        MVMap<Integer, String> oldMap =
+                map.openVersion(oldVersion);
+
+        // print the old version (can be done
+        // concurrently with further modifications)
+        // this will print "Hello" and "World":
+        // System.out.println(oldMap.get(1));
+        assertEquals("Hello", oldMap.get(1));
+        // System.out.println(oldMap.get(2));
+        assertEquals("World", oldMap.get(2));
+
+        // print the newest version ("Hi")
+        // System.out.println(map.get(1));
+        assertEquals("Hi", map.get(1));
+
+        // close the store
+        s.close();
+    }
+
+    private void testOpenStoreCloseLoop() {
+        String fileName = getBaseDir() + "/" + getTestName();
+        FileUtils.delete(fileName);
+        for (int k = 0; k < 1; k++) {
+            // long t = System.nanoTime();
+            for (int j = 0; j < 3; j++) {
+                MVStore s = openStore(fileName);
+                Map<String, Integer> m = s.openMap("data");
+                for (int i = 0; i < 3; i++) {
+                    Integer x = m.get("value");
+                    m.put("value", x == null ? 0 : x + 1);
+                    s.commit();
+                }
+                s.close();
+            }
+            // System.out.println("open/close: " +
+            //         TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t));
+            // System.out.println("size: " + FileUtils.size(fileName));
+        }
+    }
+
+    private void testOldVersion() {
+        MVStore s;
+        for (int op = 0; op <= 1; op++) {
+            for (int i = 0; i < 5; i++) {
+                s = openStore(null);
+                s.setVersionsToKeep(Integer.MAX_VALUE);
+                MVMap<String, String> m;
+                m = s.openMap("data");
+                for (int j = 0; j < 5; j++) {
+                    if (op == 1) {
+                        m.put("1", "" + s.getCurrentVersion());
+                    }
+                    s.commit();
+                }
+                for (int j = 0; j < s.getCurrentVersion(); j++) {
+                    MVMap<String, String> old = m.openVersion(j);
+                    if (op == 1) {
+                        assertEquals("" + j, old.get("1"));
+                    }
+                }
+                s.close();
+            }
+        }
+    }
+
+    private void testVersion() {
+        String fileName = getBaseDir() + "/" + getTestName();
+        FileUtils.delete(fileName);
+        MVStore s;
+        s = openStore(fileName);
+        s.setVersionsToKeep(100);
+        s.setAutoCommitDelay(0);
+        s.setRetentionTime(Integer.MAX_VALUE);
+        MVMap<String, String> m = s.openMap("data");
+        s.commit();
+        long first = s.getCurrentVersion();
+        m.put("0", "test");
+        s.commit();
+        m.put("1", "Hello");
+        m.put("2", "World");
+        for (int i = 10; i < 20; i++) {
+            m.put("" + i, "data");
+        }
+        long old = s.getCurrentVersion();
+        s.commit();
+        m.put("1", "Hallo");
+        m.put("2", "Welt");
+        MVMap<String, String> mFirst;
+        mFirst = m.openVersion(first);
+        assertEquals(0, mFirst.size());
+        MVMap<String, String> mOld;
+        assertEquals("Hallo",
m.get("1")); + assertEquals("Welt", m.get("2")); + mOld = m.openVersion(old); + assertEquals("Hello", mOld.get("1")); + assertEquals("World", mOld.get("2")); + assertTrue(mOld.isReadOnly()); + s.getCurrentVersion(); + long old3 = s.commit(); + + // the old version is still available + assertEquals("Hello", mOld.get("1")); + assertEquals("World", mOld.get("2")); + + mOld = m.openVersion(old3); + assertEquals("Hallo", mOld.get("1")); + assertEquals("Welt", mOld.get("2")); + + m.put("1", "Hi"); + assertEquals("Welt", m.remove("2")); + s.close(); + + s = openStore(fileName); + m = s.openMap("data"); + assertEquals("Hi", m.get("1")); + assertEquals(null, m.get("2")); + + mOld = m.openVersion(old3); + assertEquals("Hallo", mOld.get("1")); + assertEquals("Welt", mOld.get("2")); + + try { + m.openVersion(-3); + fail(); + } catch (IllegalArgumentException e) { + // expected + } + s.close(); + } + + private void testTruncateFile() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s; + MVMap m; + s = openStore(fileName); + m = s.openMap("data"); + String data = new String(new char[10000]).replace((char) 0, 'x'); + for (int i = 1; i < 10; i++) { + m.put(i, data); + s.commit(); + } + s.close(); + long len = FileUtils.size(fileName); + s = openStore(fileName); + s.setRetentionTime(0); + // remove 75% + m = s.openMap("data"); + for (int i = 0; i < 10; i++) { + if (i % 4 != 0) { + sleep(2); + m.remove(i); + s.commit(); + } + } + assertTrue(s.compact(100, 50 * 1024)); + s.close(); + long len2 = FileUtils.size(fileName); + assertTrue("len2: " + len2 + " len: " + len, len2 < len); + } + + private void testFastDelete() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s; + MVMap m; + s = openStore(fileName, 700); + m = s.openMap("data"); + for (int i = 0; i < 1000; i++) { + m.put(i, "Hello World"); + assertEquals(i + 1, m.size()); + } + assertEquals(1000, m.size()); + // previously (131896) 
we fail to account for initial root page for every map + // there are two of them here (meta and "data"), hence lack of 256 bytes + assertEquals(132152, s.getUnsavedMemory()); + s.commit(); + assertEquals(2, s.getFileStore().getWriteCount()); + s.close(); + + s = openStore(fileName); + m = s.openMap("data"); + m.clear(); + assertEquals(0, m.size()); + s.commit(); + // ensure only nodes are read, but not leaves + assertEquals(45, s.getFileStore().getReadCount()); + assertTrue(s.getFileStore().getWriteCount() < 5); + s.close(); + } + + private void testRollback() { + MVStore s = MVStore.open(null); + MVMap m = s.openMap("m"); + m.put(1, -1); + s.commit(); + for (int i = 0; i < 10; i++) { + m.put(1, i); + s.rollback(); + assertEquals(i - 1, m.get(1).intValue()); + m.put(1, i); + s.commit(); + } + } + + private void testRollbackStored() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVMap meta; + MVStore s = openStore(fileName); + assertEquals(45000, s.getRetentionTime()); + s.setRetentionTime(0); + assertEquals(0, s.getRetentionTime()); + s.setRetentionTime(45000); + assertEquals(45000, s.getRetentionTime()); + assertEquals(0, s.getCurrentVersion()); + assertFalse(s.hasUnsavedChanges()); + MVMap m = s.openMap("data"); + assertTrue(s.hasUnsavedChanges()); + MVMap m0 = s.openMap("data0"); + m.put("1", "Hello"); + assertEquals(1, s.commit()); + s.rollbackTo(1); + assertEquals(1, s.getCurrentVersion()); + assertEquals("Hello", m.get("1")); + // so a new version is created + m.put("1", "Hello"); + + long v2 = s.commit(); + assertEquals(2, v2); + assertEquals(2, s.getCurrentVersion()); + assertFalse(s.hasUnsavedChanges()); + assertEquals("Hello", m.get("1")); + s.close(); + + s = openStore(fileName); + s.setRetentionTime(45000); + assertEquals(2, s.getCurrentVersion()); + meta = s.getMetaMap(); + m = s.openMap("data"); + assertFalse(s.hasUnsavedChanges()); + assertEquals("Hello", m.get("1")); + m0 = s.openMap("data0"); + MVMap m1 = 
s.openMap("data1"); + m.put("1", "Hallo"); + m0.put("1", "Hallo"); + m1.put("1", "Hallo"); + assertEquals("Hallo", m.get("1")); + assertEquals("Hallo", m1.get("1")); + assertTrue(s.hasUnsavedChanges()); + s.rollbackTo(v2); + assertFalse(s.hasUnsavedChanges()); + assertNull(meta.get("name.data1")); + assertNull(m0.get("1")); + assertEquals("Hello", m.get("1")); + assertEquals(2, s.commit()); + s.close(); + + s = openStore(fileName); + s.setRetentionTime(45000); + assertEquals(2, s.getCurrentVersion()); + meta = s.getMetaMap(); + assertTrue(meta.get("name.data") != null); + assertTrue(meta.get("name.data0") != null); + assertNull(meta.get("name.data1")); + m = s.openMap("data"); + m0 = s.openMap("data0"); + assertNull(m0.get("1")); + assertEquals("Hello", m.get("1")); + assertFalse(m0.isReadOnly()); + m.put("1", "Hallo"); + s.commit(); + long v3 = s.getCurrentVersion(); + assertEquals(3, v3); + s.close(); + + s = openStore(fileName); + s.setRetentionTime(45000); + assertEquals(3, s.getCurrentVersion()); + m = s.openMap("data"); + m.put("1", "Hi"); + s.close(); + + s = openStore(fileName); + s.setRetentionTime(45000); + m = s.openMap("data"); + assertEquals("Hi", m.get("1")); + s.rollbackTo(v3); + assertEquals("Hallo", m.get("1")); + s.close(); + + s = openStore(fileName); + s.setRetentionTime(45000); + m = s.openMap("data"); + assertEquals("Hallo", m.get("1")); + s.close(); + } + + private void testRollbackInMemory() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName, 5); + s.setAutoCommitDelay(0); + assertEquals(0, s.getCurrentVersion()); + MVMap m = s.openMap("data"); + s.rollbackTo(0); + assertTrue(m.isClosed()); + assertEquals(0, s.getCurrentVersion()); + m = s.openMap("data"); + + MVMap m0 = s.openMap("data0"); + MVMap m2 = s.openMap("data2"); + m.put("1", "Hello"); + for (int i = 0; i < 10; i++) { + m2.put("" + i, "Test"); + } + long v1 = s.commit(); + assertEquals(1, v1); + assertEquals(1, 
s.getCurrentVersion()); + MVMap m1 = s.openMap("data1"); + assertEquals("Test", m2.get("1")); + m.put("1", "Hallo"); + m0.put("1", "Hallo"); + m1.put("1", "Hallo"); + m2.clear(); + assertEquals("Hallo", m.get("1")); + assertEquals("Hallo", m1.get("1")); + s.rollbackTo(v1); + assertEquals(1, s.getCurrentVersion()); + for (int i = 0; i < 10; i++) { + assertEquals("Test", m2.get("" + i)); + } + assertEquals("Hello", m.get("1")); + assertNull(m0.get("1")); + assertTrue(m1.isClosed()); + assertFalse(m0.isReadOnly()); + s.close(); + } + + private void testMeta() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + s.setRetentionTime(Integer.MAX_VALUE); + MVMap m = s.getMetaMap(); + assertEquals("[]", s.getMapNames().toString()); + MVMap data = s.openMap("data"); + data.put("1", "Hello"); + data.put("2", "World"); + s.commit(); + assertEquals(1, s.getCurrentVersion()); + + assertEquals("[data]", s.getMapNames().toString()); + assertEquals("data", s.getMapName(data.getId())); + assertNull(s.getMapName(s.getMetaMap().getId())); + assertNull(s.getMapName(data.getId() + 1)); + + String id = s.getMetaMap().get("name.data"); + assertEquals("name:data", m.get("map." + id)); + assertEquals("Hello", data.put("1", "Hallo")); + s.commit(); + assertEquals("name:data", m.get("map." 
+ id)); + assertTrue(m.get("root.1").length() > 0); + assertTrue(m.containsKey("chunk.1")); + + assertEquals(2, s.getCurrentVersion()); + + s.rollbackTo(1); + assertEquals("Hello", data.get("1")); + assertEquals("World", data.get("2")); + + s.close(); + } + + private void testInMemory() { + for (int j = 0; j < 1; j++) { + MVStore s = openStore(null); + // s.setMaxPageSize(10); + int len = 100; + // TreeMap m = new TreeMap(); + // HashMap m = New.hashMap(); + MVMap m = s.openMap("data"); + for (int i = 0; i < len; i++) { + assertNull(m.put(i, "Hello World")); + } + for (int i = 0; i < len; i++) { + assertEquals("Hello World", m.get(i)); + } + for (int i = 0; i < len; i++) { + assertEquals("Hello World", m.remove(i)); + } + assertEquals(null, m.get(0)); + assertEquals(0, m.size()); + s.close(); + } + } + + private void testLargeImport() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + int len = 1000; + for (int j = 0; j < 5; j++) { + FileUtils.delete(fileName); + MVStore s = openStore(fileName, 40); + MVMap m = s.openMap("data", + new MVMap.Builder() + .valueType(new RowDataType(new DataType[] { + new ObjectDataType(), + StringDataType.INSTANCE, + StringDataType.INSTANCE }))); + + // Profiler prof = new Profiler(); + // prof.startCollecting(); + // long t = System.nanoTime(); + for (int i = 0; i < len;) { + Object[] o = new Object[3]; + o[0] = i; + o[1] = "Hello World"; + o[2] = "World"; + m.put(i, o); + i++; + if (i % 10000 == 0) { + s.commit(); + } + } + s.close(); + // System.out.println(prof.getTop(5)); + // System.out.println("store time " + + // TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t)); + // System.out.println("store size " + + // FileUtils.size(fileName)); + } + } + + private void testBtreeStore() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + s.close(); + + s = openStore(fileName); + MVMap m = s.openMap("data"); + int count = 
2000; + for (int i = 0; i < count; i++) { + assertNull(m.put(i, "hello " + i)); + assertEquals("hello " + i, m.get(i)); + } + s.commit(); + assertEquals("hello 0", m.remove(0)); + assertNull(m.get(0)); + for (int i = 1; i < count; i++) { + assertEquals("hello " + i, m.get(i)); + } + s.close(); + + s = openStore(fileName); + m = s.openMap("data"); + assertNull(m.get(0)); + for (int i = 1; i < count; i++) { + assertEquals("hello " + i, m.get(i)); + } + for (int i = 1; i < count; i++) { + m.remove(i); + } + s.commit(); + assertNull(m.get(0)); + for (int i = 0; i < count; i++) { + assertNull(m.get(i)); + } + s.close(); + } + + private void testCompactMapNotOpen() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName, 1000); + MVMap m = s.openMap("data"); + int factor = 100; + for (int j = 0; j < 10; j++) { + for (int i = j * factor; i < 10 * factor; i++) { + m.put(i, "Hello" + j); + } + s.commit(); + } + s.close(); + + s = openStore(fileName); + s.setRetentionTime(0); + + Map meta = s.getMetaMap(); + int chunkCount1 = 0; + for (String k : meta.keySet()) { + if (k.startsWith("chunk.")) { + chunkCount1++; + } + } + s.compact(80, 1); + s.compact(80, 1); + + int chunkCount2 = 0; + for (String k : meta.keySet()) { + if (k.startsWith("chunk.")) { + chunkCount2++; + } + } + assertTrue(chunkCount2 >= chunkCount1); + + m = s.openMap("data"); + for (int i = 0; i < 10; i++) { + sleep(1); + boolean result = s.compact(50, 50 * 1024); + if (!result) { + break; + } + } + assertFalse(s.compact(50, 1024)); + + int chunkCount3 = 0; + for (String k : meta.keySet()) { + if (k.startsWith("chunk.")) { + chunkCount3++; + } + } + + assertTrue(chunkCount1 + ">" + chunkCount2 + ">" + chunkCount3, + chunkCount3 < chunkCount1); + + for (int i = 0; i < 10 * factor; i++) { + assertEquals("x" + i, "Hello" + (i / factor), m.get(i)); + } + s.close(); + } + + private void testCompact() { + String fileName = getBaseDir() + "/" + 
getTestName(); + FileUtils.delete(fileName); + long initialLength = 0; + for (int j = 0; j < 20; j++) { + sleep(2); + MVStore s = openStore(fileName); + s.setRetentionTime(0); + MVMap m = s.openMap("data"); + for (int i = 0; i < 100; i++) { + m.put(j + i, "Hello " + j); + } + s.compact(80, 1024); + s.close(); + long len = FileUtils.size(fileName); + // System.out.println(" len:" + len); + if (initialLength == 0) { + initialLength = len; + } else { + assertTrue("initial: " + initialLength + " len: " + len, + len <= initialLength * 3); + } + } + // long len = FileUtils.size(fileName); + // System.out.println("len0: " + len); + MVStore s = openStore(fileName); + MVMap m = s.openMap("data"); + for (int i = 0; i < 100; i++) { + m.remove(i); + } + s.compact(80, 1024); + s.close(); + // len = FileUtils.size(fileName); + // System.out.println("len1: " + len); + s = openStore(fileName); + m = s.openMap("data"); + s.compact(80, 1024); + s.close(); + // len = FileUtils.size(fileName); + // System.out.println("len2: " + len); + } + + private void testReuseSpace() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + long initialLength = 0; + for (int j = 0; j < 20; j++) { + sleep(2); + MVStore s = openStore(fileName); + s.setRetentionTime(0); + MVMap m = s.openMap("data"); + for (int i = 0; i < 10; i++) { + m.put(i, "Hello"); + } + s.commit(); + for (int i = 0; i < 10; i++) { + assertEquals("Hello", m.get(i)); + assertEquals("Hello", m.remove(i)); + } + s.close(); + long len = FileUtils.size(fileName); + if (initialLength == 0) { + initialLength = len; + } else { + assertTrue("len: " + len + " initial: " + initialLength + " j: " + j, + len <= initialLength * 5); + } + } + } + + private void testRandom() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + MVMap m = s.openMap("data"); + TreeMap map = new TreeMap<>(); + Random r = new Random(1); + int operationCount = 
1000; + int maxValue = 30; + Integer expected, got; + for (int i = 0; i < operationCount; i++) { + int k = r.nextInt(maxValue); + int v = r.nextInt(); + boolean compareAll; + switch (r.nextInt(3)) { + case 0: + log(i + ": put " + k + " = " + v); + expected = map.put(k, v); + got = m.put(k, v); + if (expected == null) { + assertNull(got); + } else { + assertEquals(expected, got); + } + compareAll = true; + break; + case 1: + log(i + ": remove " + k); + expected = map.remove(k); + got = m.remove(k); + if (expected == null) { + assertNull(got); + } else { + assertEquals(expected, got); + } + compareAll = true; + break; + default: + Integer a = map.get(k); + Integer b = m.get(k); + if (a == null || b == null) { + assertTrue(a == b); + } else { + assertEquals(a.intValue(), b.intValue()); + } + compareAll = false; + break; + } + if (compareAll) { + Iterator it = m.keyIterator(null); + Iterator itExpected = map.keySet().iterator(); + while (itExpected.hasNext()) { + assertTrue(it.hasNext()); + expected = itExpected.next(); + got = it.next(); + assertEquals(expected, got); + } + assertFalse(it.hasNext()); + } + } + s.close(); + } + + private void testKeyValueClasses() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + MVMap is = s.openMap("intString"); + is.put(1, "Hello"); + MVMap ii = s.openMap("intInt"); + ii.put(1, 10); + MVMap si = s.openMap("stringInt"); + si.put("Test", 10); + MVMap ss = s.openMap("stringString"); + ss.put("Hello", "World"); + s.close(); + s = openStore(fileName); + is = s.openMap("intString"); + assertEquals("Hello", is.get(1)); + ii = s.openMap("intInt"); + assertEquals(10, ii.get(1).intValue()); + si = s.openMap("stringInt"); + assertEquals(10, si.get("Test").intValue()); + ss = s.openMap("stringString"); + assertEquals("World", ss.get("Hello")); + s.close(); + } + + private void testIterate() { + String fileName = getBaseDir() + "/" + getTestName(); + 
FileUtils.delete(fileName); + MVStore s = openStore(fileName); + MVMap m = s.openMap("data"); + Iterator it = m.keyIterator(null); + assertFalse(it.hasNext()); + for (int i = 0; i < 10; i++) { + m.put(i, "hello " + i); + } + s.commit(); + it = m.keyIterator(null); + it.next(); + assertThrows(UnsupportedOperationException.class, it).remove(); + + it = m.keyIterator(null); + for (int i = 0; i < 10; i++) { + assertTrue(it.hasNext()); + assertEquals(i, it.next().intValue()); + } + assertFalse(it.hasNext()); + assertNull(it.next()); + for (int j = 0; j < 10; j++) { + it = m.keyIterator(j); + for (int i = j; i < 10; i++) { + assertTrue(it.hasNext()); + assertEquals(i, it.next().intValue()); + } + assertFalse(it.hasNext()); + } + s.close(); + } + + private void testCloseTwice() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + MVMap m = s.openMap("data"); + for (int i = 0; i < 3; i++) { + m.put(i, "hello " + i); + } + // closing twice should be fine + s.close(); + s.close(); + } + + private void testSimple() { + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + MVMap m = s.openMap("data"); + for (int i = 0; i < 3; i++) { + m.put(i, "hello " + i); + } + s.commit(); + assertEquals("hello 0", m.remove(0)); + + assertNull(m.get(0)); + for (int i = 1; i < 3; i++) { + assertEquals("hello " + i, m.get(i)); + } + s.close(); + + s = openStore(fileName); + m = s.openMap("data"); + assertNull(m.get(0)); + for (int i = 1; i < 3; i++) { + assertEquals("hello " + i, m.get(i)); + } + s.close(); + } + + private void testLargerThan2G() { + if (!config.big) { + return; + } + String fileName = getBaseDir() + "/" + getTestName(); + FileUtils.delete(fileName); + MVStore store = new MVStore.Builder().cacheSize(16). 
+ fileName(fileName).open(); + try { + MVMap map = store.openMap("test"); + long last = System.nanoTime(); + String data = new String(new char[2500]).replace((char) 0, 'x'); + for (int i = 0;; i++) { + map.put(i, data); + if (i % 10000 == 0) { + store.commit(); + long time = System.nanoTime(); + if (time - last > TimeUnit.SECONDS.toNanos(2)) { + long mb = store.getFileStore().size() / 1024 / 1024; + trace(mb + "/4500"); + if (mb > 4500) { + break; + } + last = time; + } + } + } + store.commit(); + store.close(); + } finally { + store.closeImmediately(); + } + FileUtils.delete(fileName); + } + + /** + * Open a store for the given file name, using a small page size. + * + * @param fileName the file name (null for in-memory) + * @return the store + */ + protected static MVStore openStore(String fileName) { + return openStore(fileName, 1000); + } + + /** + * Open a store for the given file name, using a small page size. + * + * @param fileName the file name (null for in-memory) + * @param pageSplitSize the page split size + * @return the store + */ + protected static MVStore openStore(String fileName, int pageSplitSize) { + MVStore store = new MVStore.Builder(). + fileName(fileName).pageSplitSize(pageSplitSize).open(); + return store; + } + + /** + * Log the message. + * + * @param msg the message + */ + @SuppressWarnings("unused") + protected static void log(String msg) { + // System.out.println(msg); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestMVStoreBenchmark.java b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreBenchmark.java new file mode 100644 index 0000000000000..c5892e23ad5f8 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreBenchmark.java @@ -0,0 +1,191 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.TreeMap; +import java.util.concurrent.TimeUnit; +import org.h2.mvstore.MVStore; +import org.h2.test.TestBase; +import org.h2.util.New; + +/** + * Tests the performance and memory usage claims in the documentation. + */ +public class TestMVStoreBenchmark extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.config.big = true; + test.test(); + } + + @Override + public void test() throws Exception { + if (!config.big) { + return; + } + if (config.codeCoverage) { + // run only when _not_ using a code coverage tool, + // because the tool might instrument our code but not + // java.util.* + return; + } + + testPerformanceComparison(); + testMemoryUsageComparison(); + } + + private void testMemoryUsageComparison() { + long[] mem; + long hash, tree, mv; + String msg; + + mem = getMemoryUsed(10000, 10); + hash = mem[0]; + tree = mem[1]; + mv = mem[2]; + msg = Arrays.toString(mem); + assertTrue(msg, hash < mv); + assertTrue(msg, tree < mv); + + mem = getMemoryUsed(10000, 30); + hash = mem[0]; + tree = mem[1]; + mv = mem[2]; + msg = Arrays.toString(mem); + assertTrue(msg, mv < hash); + assertTrue(msg, mv < tree); + + } + + private long[] getMemoryUsed(int count, int size) { + long hash, tree, mv; + ArrayList<Map<Integer, String>> mapList; + long mem; + + mapList = New.arrayList(); + mem = getMemory(); + for (int i = 0; i < count; i++) { + mapList.add(new HashMap<Integer, String>(size)); + } + addEntries(mapList, size); + hash = getMemory() - mem; + mapList.size(); + + mapList = New.arrayList(); + mem = getMemory(); + for (int i = 0; i < count; i++) { + mapList.add(new TreeMap<Integer, String>()); + } + addEntries(mapList, size); + tree = getMemory() 
- mem; + mapList.size(); + + mapList = New.arrayList(); + mem = getMemory(); + MVStore store = MVStore.open(null); + for (int i = 0; i < count; i++) { + Map<Integer, String> map = store.openMap("t" + i); + mapList.add(map); + } + addEntries(mapList, size); + mv = getMemory() - mem; + mapList.size(); + + trace("hash: " + hash / 1024 / 1024 + " mb"); + trace("tree: " + tree / 1024 / 1024 + " mb"); + trace("mv: " + mv / 1024 / 1024 + " mb"); + + return new long[]{hash, tree, mv}; + } + + private static void addEntries(List<Map<Integer, String>> mapList, int size) { + for (Map<Integer, String> map : mapList) { + for (int i = 0; i < size; i++) { + map.put(i, "Hello World"); + } + } + } + + static long getMemory() { + for (int i = 0; i < 16; i++) { + System.gc(); + try { + Thread.sleep(10); + } catch (InterruptedException e) { + // ignore + } + } + return getMemoryUsedBytes(); + } + + private void testPerformanceComparison() { + if (!config.big) { + return; + } + // -mx12g -agentlib:hprof=heap=sites,depth=8 + // int size = 1000; + int size = 1000000; + long hash = 0, tree = 0, mv = 0; + for (int i = 0; i < 5; i++) { + Map<Integer, String> map; + MVStore store = MVStore.open(null); + map = store.openMap("test"); + mv = testPerformance(map, size); + map = new HashMap<>(size); + // map = new ConcurrentHashMap(size); + hash = testPerformance(map, size); + map = new TreeMap<>(); + // map = new ConcurrentSkipListMap(); + tree = testPerformance(map, size); + if (hash < tree && mv < tree * 1.5) { + break; + } + } + String msg = "mv " + mv + " tree " + tree + " hash " + hash; + assertTrue(msg, hash < tree); + // assertTrue(msg, hash < mv); + assertTrue(msg, mv < tree * 2); + } + + private long testPerformance(Map<Integer, String> map, int size) { + System.gc(); + long time = 0; + for (int t = 0; t < 3; t++) { + time = System.nanoTime(); + for (int b = 0; b < 3; b++) { + for (int i = 0; i < size; i++) { + map.put(i, "Hello World"); + } + for (int a = 0; a < 5; a++) { + for (int i = 0; i < size; i++) { + String x = map.get(i); + assertTrue(x != null); + } + } + for 
(int i = 0; i < size; i++) { + map.remove(i); + } + assertEquals(0, map.size()); + } + time = System.nanoTime() - time; + } + trace(map.getClass().getName() + ": " + TimeUnit.NANOSECONDS.toMillis(time)); + return time; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestMVStoreCachePerformance.java b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreCachePerformance.java new file mode 100644 index 0000000000000..5969525bc0565 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreCachePerformance.java @@ -0,0 +1,97 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.Random; +import java.util.concurrent.atomic.AtomicInteger; + +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Tests the MVStore cache. + */ +public class TestMVStoreCachePerformance extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.test(); + } + + @Override + public void test() throws Exception { + testCache(1, ""); + testCache(1, "cache:"); + testCache(10, ""); + testCache(10, "cache:"); + testCache(100, ""); + testCache(100, "cache:"); + } + + private void testCache(int threadCount, String fileNamePrefix) { + String fileName = getBaseDir() + "/" + getTestName(); + fileName = fileNamePrefix + fileName; + FileUtils.delete(fileName); + MVStore store = new MVStore.Builder(). + fileName(fileName). + // cacheSize(1024). 
+ open(); + final MVMap<Integer, byte[]> map = store.openMap("test"); + final AtomicInteger counter = new AtomicInteger(); + byte[] data = new byte[8 * 1024]; + final int count = 10000; + for (int i = 0; i < count; i++) { + map.put(i, data); + store.commit(); + if (i % 1000 == 0) { + // System.out.println("add " + i); + } + } + Task[] tasks = new Task[threadCount]; + for (int i = 0; i < threadCount; i++) { + tasks[i] = new Task() { + + @Override + public void call() throws Exception { + Random r = new Random(); + do { + int id = r.nextInt(count); + map.get(id); + counter.incrementAndGet(); + } while (!stop); + } + + }; + tasks[i].execute(); + } + for (int i = 0; i < 4; i++) { + // Profiler prof = new Profiler().startCollecting(); + try { + Thread.sleep(1000); + } catch (InterruptedException e) { + // ignore + } + // System.out.println(prof.getTop(5)); + // System.out.println(" " + counter.get() / (i + 1) + " op/s"); + } + // long time = System.nanoTime(); + for (Task t : tasks) { + t.get(); + } + store.close(); + System.out.println(counter.get() / 10000 + " ops/ms; " + + threadCount + " thread(s); " + fileNamePrefix); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestMVStoreStopCompact.java b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreStopCompact.java new file mode 100644 index 0000000000000..235eb80701cbb --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreStopCompact.java @@ -0,0 +1,73 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.Random; + +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Test that the MVStore eventually stops optimizing (does not excessively optimize). + */ +public class TestMVStoreStopCompact extends TestBase { + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.big = true; + test.test(); + } + + @Override + public void test() throws Exception { + if (!config.big) { + return; + } + for(int retentionTime = 10; retentionTime < 1000; retentionTime *= 10) { + for(int timeout = 100; timeout <= 1000; timeout *= 10) { + testStopCompact(retentionTime, timeout); + } + } + } + + private void testStopCompact(int retentionTime, int timeout) throws InterruptedException { + String fileName = getBaseDir() + "/testStopCompact.h3"; + FileUtils.createDirectories(getBaseDir()); + FileUtils.delete(fileName); + // store with a very small page size, to make sure + // there are many leaf pages + MVStore s = new MVStore.Builder(). + fileName(fileName).open(); + s.setRetentionTime(retentionTime); + MVMap map = s.openMap("data"); + long start = System.currentTimeMillis(); + Random r = new Random(1); + for (int i = 0; i < 4000000; i++) { + long time = System.currentTimeMillis() - start; + if (time > timeout) { + break; + } + int x = r.nextInt(10000000); + map.put(x, "Hello World " + i * 10); + } + s.setAutoCommitDelay(100); + long oldWriteCount = s.getFileStore().getWriteCount(); + // expect background write to stop after 5 seconds + Thread.sleep(5000); + long newWriteCount = s.getFileStore().getWriteCount(); + // expect that compaction didn't cause many writes + assertTrue(newWriteCount - oldWriteCount < 30); + s.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestMVStoreTool.java b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreTool.java new file mode 100644 index 0000000000000..47eb1367443ab --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestMVStoreTool.java @@ -0,0 +1,129 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.Map.Entry; +import java.util.Random; + +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.MVStoreTool; +import org.h2.mvstore.rtree.MVRTreeMap; +import org.h2.mvstore.rtree.SpatialKey; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests the MVStoreTool class. + */ +public class TestMVStoreTool extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.config.big = true; + test.test(); + } + + @Override + public void test() throws Exception { + testCompact(); + } + + private void testCompact() { + String fileName = getBaseDir() + "/testCompact.h3"; + FileUtils.createDirectories(getBaseDir()); + FileUtils.delete(fileName); + // store with a very small page size, to make sure + // there are many leaf pages + MVStore s = new MVStore.Builder(). + pageSplitSize(1000). 
+ fileName(fileName).autoCommitDisabled().open(); + MVMap map = s.openMap("data"); + for (int i = 0; i < 10; i++) { + map.put(i, "Hello World " + i * 10); + if (i % 3 == 0) { + s.commit(); + } + } + for (int i = 0; i < 20; i++) { + map = s.openMap("data" + i); + for (int j = 0; j < i * i; j++) { + map.put(j, "Hello World " + j * 10); + } + s.commit(); + } + MVRTreeMap rTreeMap = s.openMap("rtree", new MVRTreeMap.Builder()); + Random r = new Random(1); + for (int i = 0; i < 10; i++) { + float x = r.nextFloat(); + float y = r.nextFloat(); + float width = r.nextFloat() / 10; + float height = r.nextFloat() / 10; + SpatialKey k = new SpatialKey(i, x, x + width, y, y + height); + rTreeMap.put(k, "Hello World " + i * 10); + if (i % 3 == 0) { + s.commit(); + } + } + s.close(); + + MVStoreTool.compact(fileName, fileName + ".new", false); + MVStoreTool.compact(fileName, fileName + ".new.compress", true); + MVStore s1 = new MVStore.Builder(). + fileName(fileName).readOnly().open(); + MVStore s2 = new MVStore.Builder(). + fileName(fileName + ".new").readOnly().open(); + MVStore s3 = new MVStore.Builder(). 
+ fileName(fileName + ".new.compress").readOnly().open(); + assertEquals(s1, s2); + assertEquals(s1, s3); + s1.close(); + s2.close(); + s3.close(); + long size1 = FileUtils.size(fileName); + long size2 = FileUtils.size(fileName + ".new"); + long size3 = FileUtils.size(fileName + ".new.compress"); + assertTrue("size1: " + size1 + " size2: " + size2 + " size3: " + size3, + size2 < size1 && size3 < size2); + MVStoreTool.compact(fileName, false); + assertEquals(size2, FileUtils.size(fileName)); + MVStoreTool.compact(fileName, true); + assertEquals(size3, FileUtils.size(fileName)); + } + + private void assertEquals(MVStore a, MVStore b) { + assertEquals(a.getMapNames().size(), b.getMapNames().size()); + for (String mapName : a.getMapNames()) { + if (mapName.startsWith("rtree")) { + MVRTreeMap ma = a.openMap( + mapName, new MVRTreeMap.Builder()); + MVRTreeMap mb = b.openMap( + mapName, new MVRTreeMap.Builder()); + assertEquals(ma.sizeAsLong(), mb.sizeAsLong()); + for (Entry e : ma.entrySet()) { + Object x = mb.get(e.getKey()); + assertEquals(e.getValue().toString(), x.toString()); + } + + } else { + MVMap ma = a.openMap(mapName); + // open the map from the second store, so the two stores + // are actually compared against each other + MVMap mb = b.openMap(mapName); + assertEquals(ma.sizeAsLong(), mb.sizeAsLong()); + for (Entry e : ma.entrySet()) { + Object x = mb.get(e.getKey()); + assertEquals(e.getValue().toString(), x.toString()); + } + } + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestMVTableEngine.java b/modules/h2/src/test/java/org/h2/test/store/TestMVTableEngine.java new file mode 100644 index 0000000000000..2481793761ce6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestMVTableEngine.java @@ -0,0 +1,1514 @}} @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.StringReader; +import java.math.BigDecimal; +import java.nio.channels.FileChannel; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Savepoint; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.jdbc.JdbcConnection; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.db.TransactionStore; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Recover; +import org.h2.tools.Restore; +import org.h2.util.JdbcUtils; +import org.h2.util.Task; + +/** + * Tests the MVStore in a database. + */ +public class TestMVTableEngine extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testLobCopy(); + testLobReuse(); + testShutdownDuringLobCreation(); + testLobCreationThenShutdown(); + testManyTransactions(); +// testAppendOnly(); + testLowRetentionTime(); + testOldAndNew(); + testTemporaryTables(); + testUniqueIndex(); + testSecondaryIndex(); + testGarbageCollectionForLOB(); + testSpatial(); + testCount(); + testMinMaxWithNull(); + testTimeout(); + testExplainAnalyze(); + testTransactionLogUsuallyNotStored(); + testShrinkDatabaseFile(); + testTwoPhaseCommit(); + testRecover(); + testSeparateKey(); + testRollback(); + testRollbackAfterCrash(); + testReferentialIntegrity(); + testWriteDelay(); + testAutoCommit(); + testReopen(); + testBlob(); + testExclusiveLock(); + testEncryption(); + testReadOnly(); + testReuseDiskSpace(); + testDataTypes(); + testLocking(); + testSimple(); + if (!config.travis) { + testReverseDeletePerformance(); + } + } + + private void testLobCopy() throws Exception { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data clob)"); + stat = conn.createStatement(); + stat.execute("insert into test(id, data) values(2, space(300))"); + stat.execute("insert into test(id, data) values(1, space(300))"); + stat.execute("alter table test add column x int"); + if (!config.memory) { + conn.close(); + conn = getConnection(getTestName()); + } + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select data from test"); + while (rs.next()) { + rs.getString(1); + } + conn.close(); + } + + private void testLobReuse() throws Exception { + deleteDb(getTestName()); + Connection conn1 = getConnection(getTestName()); + Statement stat = conn1.createStatement(); + stat.execute("create table test(id identity primary key, lob clob)"); + byte[] buffer = new byte[8192]; + for (int i = 0; i < 
20; i++) { + Connection conn2 = getConnection(getTestName()); + stat = conn2.createStatement(); + stat.execute("insert into test(lob) select space(1025) from system_range(1, 10)"); + stat.execute("delete from test where random() > 0.5"); + ResultSet rs = conn2.createStatement().executeQuery( + "select lob from test"); + while (rs.next()) { + InputStream is = rs.getBinaryStream(1); + while (is.read(buffer) != -1) { + // ignore + } + } + conn2.close(); + } + conn1.close(); + } + + private void testShutdownDuringLobCreation() throws Exception { + if (config.memory) { + return; + } + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test(data clob) as select space(10000)"); + final PreparedStatement prep = conn + .prepareStatement("set @lob = ?"); + final AtomicBoolean end = new AtomicBoolean(); + Task t = new Task() { + + @Override + public void call() throws Exception { + prep.setBinaryStream(1, new InputStream() { + + int len; + + @Override + public int read() throws IOException { + if (len++ < 1024 * 1024 * 4) { + return 0; + } + end.set(true); + while (!stop) { + try { + Thread.sleep(1); + } catch (InterruptedException e) { + // ignore + } + } + return -1; + } + } , -1); + } + }; + t.execute(); + while (!end.get()) { + Thread.sleep(1); + } + stat.execute("checkpoint"); + stat.execute("shutdown immediately"); + Exception ex = t.getException(); + assertTrue(ex != null); + try { + conn.close(); + } catch (Exception e) { + // ignore + } + conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("shutdown defrag"); + try { + conn.close(); + } catch (Exception e) { + // ignore + } + conn = getConnection(getTestName()); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * " + + "from information_schema.settings " + + "where name = 'info.PAGE_COUNT'"); + rs.next(); + int pages = rs.getInt(2); + // only one lob should remain 
(but it is small and compressed) + assertTrue("p:" + pages, pages < 4); + conn.close(); + } + + private void testLobCreationThenShutdown() throws Exception { + if (config.memory) { + return; + } + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity, data clob)"); + PreparedStatement prep = conn + .prepareStatement("insert into test values(?, ?)"); + for (int i = 0; i < 9; i++) { + prep.setInt(1, i); + int size = i * i * i * i * 1024; + prep.setCharacterStream(2, new StringReader(new String( + new char[size]))); + prep.execute(); + } + stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (Exception e) { + // ignore + } + conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("drop all objects"); + stat.execute("shutdown defrag"); + try { + conn.close(); + } catch (Exception e) { + // ignore + } + conn = getConnection(getTestName()); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * " + + "from information_schema.settings " + + "where name = 'info.PAGE_COUNT'"); + rs.next(); + int pages = rs.getInt(2); + // no lobs should remain + assertTrue("p:" + pages, pages < 4); + conn.close(); + } + + private void testManyTransactions() throws Exception { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test()"); + conn.setAutoCommit(false); + stat.execute("insert into test values()"); + + Connection conn2 = getConnection(getTestName()); + Statement stat2 = conn2.createStatement(); + for (long i = 0; i < 100000; i++) { + stat2.execute("insert into test values()"); + } + conn2.close(); + conn.close(); + } + + private void testAppendOnly() throws Exception { + if (config.memory) { + return; + } + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = 
conn.createStatement(); + stat.execute("set retention_time 0"); + for (int i = 0; i < 10; i++) { + stat.execute("create table dummy" + i + + " as select x, space(100) from system_range(1, 1000)"); + stat.execute("checkpoint"); + } + stat.execute("create table test as select x from system_range(1, 1000)"); + conn.close(); + String fileName = getBaseDir() + "/" + getTestName() + Constants.SUFFIX_MV_FILE; + long fileSize = FileUtils.size(fileName); + + conn = getConnection( + getTestName() + ";reuse_space=false"); + stat = conn.createStatement(); + stat.execute("set retention_time 0"); + for (int i = 0; i < 10; i++) { + stat.execute("drop table dummy" + i); + stat.execute("checkpoint"); + } + stat.execute("alter table test alter column x rename to y"); + stat.execute("select y from test where 1 = 0"); + stat.execute("create table test2 as select x from system_range(1, 1000)"); + conn.close(); + + FileChannel fc = FileUtils.open(fileName, "rw"); + // undo all changes + fc.truncate(fileSize); + fc.close(); + + conn = getConnection(getTestName()); + stat = conn.createStatement(); + stat.execute("select * from dummy0 where 1 = 0"); + stat.execute("select * from dummy9 where 1 = 0"); + stat.execute("select x from test where 1 = 0"); + conn.close(); + } + + private void testLowRetentionTime() throws SQLException { + deleteDb(getTestName()); + Connection conn = getConnection( + getTestName() + ";RETENTION_TIME=10;WRITE_DELAY=10"); + Statement stat = conn.createStatement(); + Connection conn2 = getConnection(getTestName()); + Statement stat2 = conn2.createStatement(); + stat.execute("create alias sleep as " + + "$$void sleep(int ms) throws Exception { Thread.sleep(ms); }$$"); + stat.execute("create table test(id identity, name varchar) " + + "as select x, 'Init' from system_range(0, 1999)"); + for (int i = 0; i < 10; i++) { + stat.execute("insert into test values(null, 'Hello')"); + // create and delete a large table: this will force compaction + stat.execute("create table 
temp(id identity, name varchar) as " + + "select x, space(1000000) from system_range(0, 10)"); + stat.execute("drop table temp"); + } + ResultSet rs = stat2 + .executeQuery("select *, sleep(1) from test order by id"); + for (int i = 0; i < 2000 + 10; i++) { + assertTrue(rs.next()); + assertEquals(i, rs.getInt(1)); + } + assertFalse(rs.next()); + conn2.close(); + conn.close(); + } + + private void testOldAndNew() throws SQLException { + if (config.memory) { + return; + } + Connection conn; + deleteDb(getTestName()); + String urlOld = getURL(getTestName() + ";MV_STORE=FALSE", true); + String urlNew = getURL(getTestName() + ";MV_STORE=TRUE", true); + String url = getURL(getTestName(), true); + + conn = getConnection(urlOld); + conn.createStatement().execute("create table test_old(id int)"); + conn.close(); + conn = getConnection(url); + conn.createStatement().execute("select * from test_old"); + conn.close(); + conn = getConnection(urlNew); + conn.createStatement().execute("create table test_new(id int)"); + conn.close(); + conn = getConnection(url); + conn.createStatement().execute("select * from test_new"); + conn.close(); + conn = getConnection(urlOld); + conn.createStatement().execute("select * from test_old"); + conn.close(); + conn = getConnection(urlNew); + conn.createStatement().execute("select * from test_new"); + conn.close(); + } + + private void testTemporaryTables() throws SQLException { + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("set max_memory_rows 100"); + stat.execute("create table t1 as select x from system_range(1, 200)"); + stat.execute("create table t2 as select x from system_range(1, 200)"); + for (int i = 0; i < 20; i++) { + // this will create temporary results that + // internally use temporary tables, which are not all closed + stat.execute("select count(*) from t1 where 
t1.x in (select t2.x from t2)"); + } + conn.close(); + conn = getConnection(url); + stat = conn.createStatement(); + for (int i = 0; i < 20; i++) { + stat.execute("create table a" + i + "(id int primary key)"); + ResultSet rs = stat.executeQuery("select count(*) from a" + i); + rs.next(); + assertEquals(0, rs.getInt(1)); + } + conn.close(); + } + + private void testUniqueIndex() throws SQLException { + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test as select x, 0 from system_range(1, 5000)"); + stat.execute("create unique index on test(x)"); + ResultSet rs = stat.executeQuery("select * from test where x=1"); + assertTrue(rs.next()); + assertFalse(rs.next()); + conn.close(); + } + + private void testSecondaryIndex() throws SQLException { + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + int size = 8 * 1024; + stat.execute("insert into test select mod(x * 111, " + size + ") " + + "from system_range(1, " + size + ")"); + stat.execute("create index on test(id)"); + ResultSet rs = stat.executeQuery( + "select count(*) from test inner join " + + "system_range(1, " + size + ") where " + + "id = mod(x * 111, " + size + ")"); + rs.next(); + assertEquals(size, rs.getInt(1)); + conn.close(); + } + + private void testGarbageCollectionForLOB() throws SQLException { + if (config.memory) { + return; + } + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int, data blob)"); + stat.execute("insert into test select x, 
repeat('0', 10000) " + + "from system_range(1, 10)"); + stat.execute("drop table test"); + stat.execute("create table test2(id int, data blob)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test2 values(?, ?)"); + prep.setInt(1, 1); + assertThrows(ErrorCode.IO_EXCEPTION_1, prep). + setBinaryStream(1, createFailingStream(new IOException())); + prep.setInt(1, 2); + assertThrows(ErrorCode.IO_EXCEPTION_1, prep). + setBinaryStream(1, createFailingStream(new IllegalStateException())); + conn.close(); + MVStore s = MVStore.open(getBaseDir()+ "/" + getTestName() + ".mv.db"); + assertTrue(s.hasMap("lobData")); + MVMap lobData = s.openMap("lobData"); + assertEquals(0, lobData.sizeAsLong()); + assertTrue(s.hasMap("lobMap")); + MVMap lobMap = s.openMap("lobMap"); + assertEquals(0, lobMap.sizeAsLong()); + assertTrue(s.hasMap("lobRef")); + MVMap lobRef = s.openMap("lobRef"); + assertEquals(0, lobRef.sizeAsLong()); + s.close(); + } + + private void testSpatial() throws SQLException { + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("call rand(1)"); + stat.execute("create table coordinates as select rand()*50 x, " + + "rand()*50 y from system_range(1, 5000)"); + stat.execute("create table test(id identity, data geometry)"); + stat.execute("create spatial index on test(data)"); + stat.execute("insert into test(data) select 'polygon(('||" + + "(1+x)||' '||(1+y)||', '||(2+x)||' '||(2+y)||', "+ + "'||(3+x)||' '||(1+y)||', '||(1+x)||' '||(1+y)||'))' from coordinates;"); + conn.close(); + } + + private void testCount() throws Exception { + if (config.memory) { + return; + } + + Connection conn; + Connection conn2; + Statement stat; + Statement stat2; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE;MVCC=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = 
conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("create table test2(id int)"); + stat.execute("insert into test select x from system_range(1, 10000)"); + conn.close(); + + ResultSet rs; + String plan; + + conn2 = getConnection(url); + stat2 = conn2.createStatement(); + rs = stat2.executeQuery("explain analyze select count(*) from test"); + rs.next(); + plan = rs.getString(1); + assertTrue(plan, plan.indexOf("reads:") < 0); + + conn = getConnection(url); + stat = conn.createStatement(); + conn.setAutoCommit(false); + stat.execute("insert into test select x from system_range(1, 1000)"); + rs = stat.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(11000, rs.getInt(1)); + + // not yet committed + rs = stat2.executeQuery("explain analyze select count(*) from test"); + rs.next(); + plan = rs.getString(1); + // transaction log is small, so no need to read the table + assertTrue(plan, plan.indexOf("reads:") < 0); + rs = stat2.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(10000, rs.getInt(1)); + + stat.execute("insert into test2 select x from system_range(1, 11000)"); + rs = stat2.executeQuery("explain analyze select count(*) from test"); + rs.next(); + plan = rs.getString(1); + // transaction log is larger than the table, so read the table + assertContains(plan, "reads:"); + rs = stat2.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(10000, rs.getInt(1)); + + conn2.close(); + conn.close(); + } + + private void testMinMaxWithNull() throws Exception { + Connection conn; + Connection conn2; + Statement stat; + Statement stat2; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE;MVCC=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(data int)"); + stat.execute("create index on test(data)"); + stat.execute("insert into test values(null), (2)"); + conn2 = getConnection(url); 
+ stat2 = conn2.createStatement(); + conn.setAutoCommit(false); + conn2.setAutoCommit(false); + stat.execute("insert into test values(1)"); + ResultSet rs; + rs = stat.executeQuery("select min(data) from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + rs = stat2.executeQuery("select min(data) from test"); + rs.next(); + // not yet committed + assertEquals(2, rs.getInt(1)); + conn2.close(); + conn.close(); + } + + private void testTimeout() throws Exception { + Connection conn; + Connection conn2; + Statement stat; + Statement stat2; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE;MVCC=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id identity, name varchar)"); + conn2 = getConnection(url); + stat2 = conn2.createStatement(); + conn.setAutoCommit(false); + conn2.setAutoCommit(false); + stat.execute("insert into test values(1, 'Hello')"); + assertThrows(ErrorCode.LOCK_TIMEOUT_1, stat2). 
+ execute("insert into test values(1, 'Hello')"); + conn2.close(); + conn.close(); + } + + private void testExplainAnalyze() throws Exception { + if (config.memory) { + return; + } + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id identity, name varchar) as " + + "select x, space(1000) from system_range(1, 1000)"); + ResultSet rs; + conn.close(); + conn = getConnection(url); + stat = conn.createStatement(); + rs = stat.executeQuery("explain analyze select * from test"); + rs.next(); + String plan = rs.getString(1); + // expect about 1000 reads + String readCount = plan.substring(plan.indexOf("reads: ")); + readCount = readCount.substring("reads: ".length(), readCount.indexOf('\n')); + int rc = Integer.parseInt(readCount); + assertTrue(plan, rc >= 1000 && rc <= 1200); + conn.close(); + } + + private void testTransactionLogUsuallyNotStored() throws Exception { + Connection conn; + Statement stat; + // we expect the transaction log is empty in at least some of the cases + for (int test = 0; test < 5; test++) { + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id identity, name varchar)"); + conn.setAutoCommit(false); + PreparedStatement prep = conn.prepareStatement( + "insert into test(name) values(space(10000))"); + for (int j = 0; j < 100; j++) { + for (int i = 0; i < 100; i++) { + prep.execute(); + } + conn.commit(); + } + stat.execute("shutdown immediately"); + JdbcUtils.closeSilently(conn); + + String file = getBaseDir() + "/" + getTestName() + + Constants.SUFFIX_MV_FILE; + + MVStore store = MVStore.open(file); + TransactionStore t = new TransactionStore(store); + t.init(); + int openTransactions = t.getOpenTransactions().size(); + 
store.close(); + if (openTransactions == 0) { + return; + } + } + fail("transaction log was never empty"); + } + + private void testShrinkDatabaseFile() throws Exception { + if (config.memory) { + return; + } + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + Connection conn; + Statement stat; + long maxSize = 0; + // by default, the database does not shrink for 45 seconds + int retentionTime = 45000; + for (int i = 0; i < 20; i++) { + // the first 10 times, keep the default retention time + // then switch to 0, at which point the database file + // should stop to grow + conn = getConnection(dbName); + stat = conn.createStatement(); + if (i == 10) { + stat.execute("set retention_time 0"); + retentionTime = 0; + } + ResultSet rs = stat.executeQuery( + "select value from information_schema.settings " + + "where name='RETENTION_TIME'"); + assertTrue(rs.next()); + assertEquals(retentionTime, rs.getInt(1)); + stat.execute("create table test(id int primary key, data varchar)"); + stat.execute("insert into test select x, space(100) " + + "from system_range(1, 1000)"); + // this table is kept + if (i < 10) { + stat.execute("create table test" + i + + "(id int primary key, data varchar) " + + "as select x, space(10) from system_range(1, 100)"); + } + // force writing the chunk + stat.execute("checkpoint"); + // drop the table - but the chunk is still used + stat.execute("drop table test"); + stat.execute("checkpoint"); + stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (Exception e) { + // ignore + } + String fileName = getBaseDir() + "/" + getTestName() + + Constants.SUFFIX_MV_FILE; + long size = FileUtils.size(fileName); + if (i < 10) { + maxSize = (int) (Math.max(size, maxSize) * 1.2); + } else if (size > maxSize) { + fail(i + " size: " + size + " max: " + maxSize); + } + } + long sizeOld = FileUtils.size(getBaseDir() + "/" + getTestName() + + Constants.SUFFIX_MV_FILE); + conn = getConnection(dbName); + stat = 
conn.createStatement(); + stat.execute("shutdown compact"); + conn.close(); + long sizeNew = FileUtils.size(getBaseDir() + "/" + getTestName() + + Constants.SUFFIX_MV_FILE); + assertTrue("new: " + sizeNew + " old: " + sizeOld, sizeNew < sizeOld); + } + + private void testTwoPhaseCommit() throws Exception { + if (config.memory) { + return; + } + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("set write_delay 0"); + conn.setAutoCommit(false); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("prepare commit test_tx"); + stat.execute("shutdown immediately"); + JdbcUtils.closeSilently(conn); + + conn = getConnection(url); + stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from information_schema.in_doubt"); + assertTrue(rs.next()); + stat.execute("commit transaction test_tx"); + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + conn.close(); + } + + private void testRecover() throws Exception { + if (config.memory) { + return; + } + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + url = getURL(url, true); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("create table test2(name varchar)"); + stat.execute("insert into test2 values('Hello World')"); + conn.close(); + + Recover.execute(getBaseDir(), getTestName()); + deleteDb(getTestName()); + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("runscript from '" + getBaseDir() + "/" + getTestName()+ ".h2.sql'"); + ResultSet rs; + rs = stat.executeQuery("select * from test"); 
+ assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + rs = stat.executeQuery("select * from test2"); + assertTrue(rs.next()); + assertEquals("Hello World", rs.getString(1)); + conn.close(); + } + + private void testRollback() throws Exception { + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id identity)"); + conn.setAutoCommit(false); + stat.execute("insert into test values(1)"); + stat.execute("delete from test"); + conn.rollback(); + conn.close(); + } + + private void testSeparateKey() throws Exception { + if (config.memory) { + return; + } + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table a(id int)"); + stat.execute("insert into a values(1)"); + stat.execute("insert into a values(1)"); + + stat.execute("create table test(id int not null) as select 100"); + stat.execute("create primary key on test(id)"); + ResultSet rs = stat.executeQuery("select * from test where id = 100"); + assertTrue(rs.next()); + conn.close(); + + conn = getConnection(url); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test where id = 100"); + assertTrue(rs.next()); + conn.close(); + } + + private void testRollbackAfterCrash() throws Exception { + if (config.memory) { + return; + } + Connection conn; + Statement stat; + deleteDb(getTestName()); + String url = getTestName() + ";MV_STORE=TRUE"; + String url2 = getTestName() + "2;MV_STORE=TRUE"; + + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("insert into test values(0)"); + stat.execute("set write_delay 0"); + conn.setAutoCommit(false); + stat.execute("insert into test values(1)"); 
+ stat.execute("shutdown immediately"); + JdbcUtils.closeSilently(conn); + + conn = getConnection(url); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select row_count_estimate " + + "from information_schema.tables where table_name='TEST'"); + rs.next(); + assertEquals(1, rs.getLong(1)); + stat.execute("drop table test"); + + stat.execute("create table test(id int primary key, data clob)"); + stat.execute("insert into test values(1, space(10000))"); + conn.setAutoCommit(false); + stat.execute("delete from test"); + stat.execute("checkpoint"); + stat.execute("shutdown immediately"); + JdbcUtils.closeSilently(conn); + + conn = getConnection(url); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + stat.execute("drop all objects delete files"); + conn.close(); + + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("create index idx_name on test(name, id)"); + stat.execute("insert into test select x, x || space(200 * x) " + + "from system_range(1, 10)"); + conn.setAutoCommit(false); + stat.execute("delete from test where id > 5"); + stat.execute("backup to '" + getBaseDir() + "/" + getTestName() + ".zip'"); + conn.rollback(); + Restore.execute(getBaseDir() + "/" +getTestName() + ".zip", + getBaseDir(), getTestName() + "2"); + Connection conn2; + conn2 = getConnection(url2); + conn.close(); + conn2.close(); + + } + + private void testReferentialIntegrity() throws Exception { + Connection conn; + Statement stat; + deleteDb(getTestName()); + conn = getConnection(getTestName() + ";MV_STORE=TRUE"); + + stat = conn.createStatement(); + stat.execute("create table test(id int, parent int " + + "references test(id) on delete cascade)"); + stat.execute("insert into test values(0, 0)"); + stat.execute("delete from test"); + stat.execute("drop table test"); + + stat.execute("create table parent(id int, name 
varchar)"); + stat.execute("create table child(id int, parentid int, " + + "foreign key(parentid) references parent(id))"); + stat.execute("insert into parent values(1, 'mary'), (2, 'john')"); + stat.execute("insert into child values(10, 1), (11, 1), (20, 2), (21, 2)"); + stat.execute("update parent set name = 'marc' where id = 1"); + stat.execute("merge into parent key(id) values(1, 'marcy')"); + stat.execute("drop table parent, child"); + + stat.execute("create table test(id identity, parent bigint, " + + "foreign key(parent) references(id))"); + stat.execute("insert into test values(0, 0), (1, NULL), " + + "(2, 1), (3, 3), (4, 3)"); + stat.execute("drop table test"); + + stat.execute("create table parent(id int)"); + stat.execute("create table child(pid int)"); + stat.execute("insert into parent values(1)"); + stat.execute("insert into child values(2)"); + try { + stat.execute("alter table child add constraint cp " + + "foreign key(pid) references parent(id)"); + fail(); + } catch (SQLException e) { + assertEquals( + ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, + e.getErrorCode()); + } + stat.execute("update child set pid=1"); + stat.execute("drop table child, parent"); + + stat.execute("create table parent(id int)"); + stat.execute("create table child(pid int)"); + stat.execute("insert into parent values(1)"); + stat.execute("insert into child values(2)"); + try { + stat.execute("alter table child add constraint cp " + + "foreign key(pid) references parent(id)"); + fail(); + } catch (SQLException e) { + assertEquals( + ErrorCode.REFERENTIAL_INTEGRITY_VIOLATED_PARENT_MISSING_1, + e.getErrorCode()); + } + stat.execute("drop table child, parent"); + + stat.execute("create table test(id identity, parent bigint, " + + "foreign key(parent) references(id))"); + stat.execute("insert into test values(0, 0), (1, NULL), " + + "(2, 1), (3, 3), (4, 3)"); + stat.execute("drop table test"); + + stat.execute("create table parent(id int, x int)"); + 
stat.execute("insert into parent values(1, 2)"); + stat.execute("create table child(id int references parent(id)) as select 1"); + + conn.close(); + } + + private void testWriteDelay() throws Exception { + if (config.memory) { + return; + } + Connection conn; + Statement stat; + ResultSet rs; + deleteDb(getTestName()); + conn = getConnection(getTestName() + ";MV_STORE=TRUE"); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("set write_delay 0"); + stat.execute("insert into test values(1)"); + stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (Exception e) { + // ignore + } + conn = getConnection(getTestName() + ";MV_STORE=TRUE"); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + conn.close(); + } + + private void testAutoCommit() throws SQLException { + Connection conn; + Statement stat; + ResultSet rs; + deleteDb(getTestName()); + conn = getConnection(getTestName() + ";MV_STORE=TRUE"); + for (int i = 0; i < 2; i++) { + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("create index on test(name)"); + conn.setAutoCommit(false); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("insert into test values(2, 'World')"); + rs = stat.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(2, rs.getInt(1)); + conn.rollback(); + rs = stat.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(0, rs.getInt(1)); + + stat.execute("insert into test values(1, 'Hello')"); + Savepoint sp = conn.setSavepoint(); + stat.execute("insert into test values(2, 'World')"); + conn.rollback(sp); + rs = stat.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + stat.execute("drop table test"); + } + + conn.close(); + } + + private void testReopen() throws SQLException { + if (config.memory) { + return; + } + 
Connection conn; + Statement stat; + deleteDb(getTestName()); + conn = getConnection(getTestName() + ";MV_STORE=TRUE"); + stat = conn.createStatement(); + stat.execute("create table test(id int, name varchar)"); + conn.close(); + conn = getConnection(getTestName() + ";MV_STORE=TRUE"); + stat = conn.createStatement(); + stat.execute("drop table test"); + conn.close(); + } + + private void testBlob() throws SQLException, IOException { + if (config.memory) { + return; + } + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + Connection conn; + Statement stat; + conn = getConnection(dbName); + stat = conn.createStatement(); + stat.execute("create table test(id int, name blob)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(1, ?)"); + prep.setBinaryStream(1, new ByteArrayInputStream(new byte[129])); + prep.execute(); + conn.close(); + conn = getConnection(dbName); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * from test"); + while (rs.next()) { + InputStream in = rs.getBinaryStream(2); + int len = 0; + while (in.read() >= 0) { + len++; + } + assertEquals(129, len); + } + conn.close(); + } + + private void testEncryption() throws Exception { + if (config.memory) { + return; + } + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + Connection conn; + Statement stat; + String url = getURL(dbName + ";CIPHER=AES", true); + String user = "sa"; + String password = "123 123"; + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key)"); + conn.close(); + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + stat.execute("select * from test"); + stat.execute("drop table test"); + conn.close(); + } + + private void testExclusiveLock() throws Exception { + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE;MVCC=FALSE"; 
+ Connection conn, conn2; + Statement stat, stat2; + conn = getConnection(dbName); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("insert into test values(1)"); + conn.setAutoCommit(false); + // stat.execute("update test set id = 2"); + stat.executeQuery("select * from test for update"); + conn2 = getConnection(dbName); + stat2 = conn2.createStatement(); + ResultSet rs2 = stat2.executeQuery( + "select * from information_schema.locks"); + assertTrue(rs2.next()); + assertEquals("TEST", rs2.getString("table_name")); + assertEquals("WRITE", rs2.getString("lock_type")); + conn2.close(); + conn.close(); + } + + private void testReadOnly() throws Exception { + if (config.memory) { + return; + } + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + Connection conn; + Statement stat; + conn = getConnection(dbName); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + conn.close(); + FileUtils.setReadOnly(getBaseDir() + "/" + getTestName() + + Constants.SUFFIX_MV_FILE); + conn = getConnection(dbName); + Database db = (Database) ((JdbcConnection) conn).getSession() + .getDataHandler(); + assertTrue(db.getMvStore().getStore().getFileStore().isReadOnly()); + conn.close(); + } + + private void testReuseDiskSpace() throws Exception { + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + Connection conn; + Statement stat; + long maxSize = 0; + for (int i = 0; i < 20; i++) { + conn = getConnection(dbName); + Database db = (Database) ((JdbcConnection) conn). 
+ getSession().getDataHandler(); + db.getMvStore().getStore().setRetentionTime(0); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data varchar)"); + stat.execute("insert into test select x, space(1000) " + + "from system_range(1, 1000)"); + stat.execute("drop table test"); + conn.close(); + long size = FileUtils.size(getBaseDir() + "/" + getTestName() + + Constants.SUFFIX_MV_FILE); + if (i < 10) { + maxSize = (int) (Math.max(size, maxSize) * 1.1); + } else if (size > maxSize) { + fail(i + " size: " + size + " max: " + maxSize); + } + } + } + + private void testDataTypes() throws Exception { + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + Connection conn = getConnection(dbName); + Statement stat = conn.createStatement(); + + stat.execute("create table test(id int primary key, " + + "vc varchar," + + "ch char(10)," + + "bo boolean," + + "by tinyint," + + "sm smallint," + + "bi bigint," + + "de decimal," + + "re real,"+ + "do double," + + "ti time," + + "da date," + + "ts timestamp," + + "bin binary," + + "uu uuid," + + "bl blob," + + "cl clob)"); + stat.execute("insert into test values(1000, '', '', null, 0, 0, 0, " + + "9, 2, 3, '10:00:00', '2001-01-01', " + + "'2010-10-10 10:10:10', x'00', 0, x'b1', 'clob')"); + stat.execute("insert into test values(1, 'vc', 'ch', true, 8, 16, 64, " + + "123.00, 64.0, 32.0, '10:00:00', '2001-01-01', " + + "'2010-10-10 10:10:10', x'00', 0, x'b1', 'clob')"); + stat.execute("insert into test values(-1, " + + "'quite a long string \u1234 \u00ff', 'ch', false, -8, -16, -64, " + + "0, 0, 0, '10:00:00', '2001-01-01', " + + "'2010-10-10 10:10:10', SECURE_RAND(100), 0, x'b1', 'clob')"); + stat.execute("insert into test values(-1000, space(1000), 'ch', " + + "false, -8, -16, -64, " + + "1, 1, 1, '10:00:00', '2001-01-01', " + + "'2010-10-10 10:10:10', SECURE_RAND(100), 0, x'b1', 'clob')"); + if (!config.memory) { + conn.close(); + conn = getConnection(dbName); + stat = 
conn.createStatement(); + } + ResultSet rs; + rs = stat.executeQuery("select * from test order by id desc"); + rs.next(); + assertEquals(1000, rs.getInt(1)); + assertEquals("", rs.getString(2)); + assertEquals("", rs.getString(3)); + assertFalse(rs.getBoolean(4)); + assertEquals(0, rs.getByte(5)); + assertEquals(0, rs.getShort(6)); + assertEquals(0, rs.getLong(7)); + assertEquals("9", rs.getBigDecimal(8).toString()); + assertEquals(2d, rs.getDouble(9)); + assertEquals(3d, rs.getFloat(10)); + assertEquals("10:00:00", rs.getString(11)); + assertEquals("2001-01-01", rs.getString(12)); + assertEquals("2010-10-10 10:10:10", rs.getString(13)); + assertEquals(1, rs.getBytes(14).length); + assertEquals("00000000-0000-0000-0000-000000000000", + rs.getString(15)); + assertEquals(1, rs.getBytes(16).length); + assertEquals("clob", rs.getString(17)); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("vc", rs.getString(2)); + assertEquals("ch", rs.getString(3)); + assertTrue(rs.getBoolean(4)); + assertEquals(8, rs.getByte(5)); + assertEquals(16, rs.getShort(6)); + assertEquals(64, rs.getLong(7)); + assertEquals("123.00", rs.getBigDecimal(8).toString()); + assertEquals(64d, rs.getDouble(9)); + assertEquals(32d, rs.getFloat(10)); + assertEquals("10:00:00", rs.getString(11)); + assertEquals("2001-01-01", rs.getString(12)); + assertEquals("2010-10-10 10:10:10", rs.getString(13)); + assertEquals(1, rs.getBytes(14).length); + assertEquals("00000000-0000-0000-0000-000000000000", + rs.getString(15)); + assertEquals(1, rs.getBytes(16).length); + assertEquals("clob", rs.getString(17)); + rs.next(); + assertEquals(-1, rs.getInt(1)); + assertEquals("quite a long string \u1234 \u00ff", + rs.getString(2)); + assertEquals("ch", rs.getString(3)); + assertFalse(rs.getBoolean(4)); + assertEquals(-8, rs.getByte(5)); + assertEquals(-16, rs.getShort(6)); + assertEquals(-64, rs.getLong(7)); + assertEquals("0", rs.getBigDecimal(8).toString()); + assertEquals(0.0d, rs.getDouble(9)); + 
assertEquals(0.0d, rs.getFloat(10)); + assertEquals("10:00:00", rs.getString(11)); + assertEquals("2001-01-01", rs.getString(12)); + assertEquals("2010-10-10 10:10:10", rs.getString(13)); + assertEquals(100, rs.getBytes(14).length); + assertEquals("00000000-0000-0000-0000-000000000000", + rs.getString(15)); + assertEquals(1, rs.getBytes(16).length); + assertEquals("clob", rs.getString(17)); + rs.next(); + assertEquals(-1000, rs.getInt(1)); + assertEquals(1000, rs.getString(2).length()); + assertEquals("ch", rs.getString(3)); + assertFalse(rs.getBoolean(4)); + assertEquals(-8, rs.getByte(5)); + assertEquals(-16, rs.getShort(6)); + assertEquals(-64, rs.getLong(7)); + assertEquals("1", rs.getBigDecimal(8).toString()); + assertEquals(1.0d, rs.getDouble(9)); + assertEquals(1.0d, rs.getFloat(10)); + assertEquals("10:00:00", rs.getString(11)); + assertEquals("2001-01-01", rs.getString(12)); + assertEquals("2010-10-10 10:10:10", rs.getString(13)); + assertEquals(100, rs.getBytes(14).length); + assertEquals("00000000-0000-0000-0000-000000000000", + rs.getString(15)); + assertEquals(1, rs.getBytes(16).length); + assertEquals("clob", rs.getString(17)); + + stat.execute("drop table test"); + + stat.execute("create table test(id int, obj object, " + + "rs result_set, arr array, ig varchar_ignorecase)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(?, ?, ?, ?, ?)"); + prep.setInt(1, 1); + prep.setObject(2, new java.lang.AssertionError()); + prep.setObject(3, stat.executeQuery("select 1 from dual")); + prep.setObject(4, new Object[]{1, 2}); + prep.setObject(5, "test"); + prep.execute(); + prep.setInt(1, 1); + prep.setObject(2, new java.lang.AssertionError()); + prep.setObject(3, stat.executeQuery("select 1 from dual")); + prep.setObject(4, new Object[]{ + new BigDecimal(new String( + new char[1000]).replace((char) 0, '1'))}); + prep.setObject(5, "test"); + prep.execute(); + if (!config.memory) { + conn.close(); + conn = getConnection(dbName); + 
stat = conn.createStatement(); + } + stat.execute("select * from test"); + + rs = stat.executeQuery("script"); + int count = 0; + while (rs.next()) { + count++; + } + assertTrue(count < 10); + + stat.execute("drop table test"); + conn.close(); + } + + private void testLocking() throws Exception { + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE;MVCC=FALSE"; + Connection conn = getConnection(dbName); + Statement stat = conn.createStatement(); + stat.execute("set lock_timeout 1000"); + + stat.execute("create table a(id int primary key, name varchar)"); + stat.execute("create table b(id int primary key, name varchar)"); + + Connection conn1 = getConnection(dbName); + final Statement stat1 = conn1.createStatement(); + stat1.execute("set lock_timeout 1000"); + + conn.setAutoCommit(false); + conn1.setAutoCommit(false); + stat.execute("insert into a values(1, 'Hello')"); + stat1.execute("insert into b values(1, 'Hello')"); + Task t = new Task() { + @Override + public void call() throws Exception { + stat1.execute("insert into a values(2, 'World')"); + } + }; + t.execute(); + try { + stat.execute("insert into b values(2, 'World')"); + throw t.getException(); + } catch (SQLException e) { + assertEquals(e.toString(), ErrorCode.DEADLOCK_1, e.getErrorCode()); + } + + conn1.close(); + conn.close(); + } + + private void testSimple() throws Exception { + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + Connection conn = getConnection(dbName); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello'), (2, 'World')"); + ResultSet rs = stat.executeQuery("select *, _rowid_ from test"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertEquals(1, rs.getInt(3)); + + stat.execute("update test set name = 'Hello' where id = 1"); + + if (!config.memory) { + conn.close(); 
+ conn = getConnection(dbName); + stat = conn.createStatement(); + } + + rs = stat.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertEquals("World", rs.getString(2)); + assertFalse(rs.next()); + + stat.execute("create unique index idx_name on test(name)"); + rs = stat.executeQuery("select * from test " + + "where name = 'Hello' order by name"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + + try { + stat.execute("insert into test(id, name) values(10, 'Hello')"); + fail(); + } catch (SQLException e) { + assertEquals(e.toString(), ErrorCode.DUPLICATE_KEY_1, e.getErrorCode()); + } + + rs = stat.executeQuery("select min(id), max(id), " + + "min(name), max(name) from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals(2, rs.getInt(2)); + assertEquals("Hello", rs.getString(3)); + assertEquals("World", rs.getString(4)); + assertFalse(rs.next()); + + stat.execute("delete from test where id = 2"); + rs = stat.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + + stat.execute("alter table test add column firstName varchar"); + rs = stat.executeQuery("select * from test where name = 'Hello'"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + + if (!config.memory) { + conn.close(); + conn = getConnection(dbName); + stat = conn.createStatement(); + } + + rs = stat.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + + stat.execute("truncate table test"); + rs = stat.executeQuery("select * from test 
order by id"); + assertFalse(rs.next()); + + rs = stat.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(0, rs.getInt(1)); + stat.execute("insert into test(id) select x from system_range(1, 3000)"); + rs = stat.executeQuery("select count(*) from test"); + rs.next(); + assertEquals(3000, rs.getInt(1)); + try { + stat.execute("insert into test(id) values(1)"); + fail(); + } catch (SQLException e) { + assertEquals(ErrorCode.DUPLICATE_KEY_1, e.getErrorCode()); + } + stat.execute("delete from test"); + stat.execute("insert into test(id, name) values(-1, 'Hello')"); + rs = stat.executeQuery("select count(*) from test where id = -1"); + rs.next(); + assertEquals(1, rs.getInt(1)); + rs = stat.executeQuery("select count(*) from test where name = 'Hello'"); + rs.next(); + assertEquals(1, rs.getInt(1)); + conn.close(); + } + + private void testReverseDeletePerformance() throws Exception { + long direct = 0; + long reverse = 0; + for (int i = 0; i < 5; i++) { + reverse += testReverseDeletePerformance(true); + direct += testReverseDeletePerformance(false); + } + assertTrue("direct: " + direct + ", reverse: " + reverse, 2 * Math.abs(reverse - direct) < reverse + direct); + } + + private long testReverseDeletePerformance(boolean reverse) throws Exception { + deleteDb(getTestName()); + String dbName = getTestName() + ";MV_STORE=TRUE"; + try (Connection conn = getConnection(dbName)) { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE test(id INT PRIMARY KEY, name VARCHAR) AS " + + "SELECT x, x || space(1024) || x FROM system_range(1, 1000)"); + conn.setAutoCommit(false); + PreparedStatement prep = conn.prepareStatement("DELETE FROM test WHERE id = ?"); + long start = System.nanoTime(); + for (int i = 0; i < 1000; i++) { + prep.setInt(1, reverse ? 
1000 - i : i); + prep.execute(); + } + long end = System.nanoTime(); + conn.commit(); + return TimeUnit.NANOSECONDS.toMillis(end - start); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestObjectDataType.java b/modules/h2/src/test/java/org/h2/test/store/TestObjectDataType.java new file mode 100644 index 0000000000000..16bfed3dba628 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestObjectDataType.java @@ -0,0 +1,186 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.math.BigDecimal; +import java.math.BigInteger; +import java.nio.ByteBuffer; +import java.sql.Timestamp; +import java.util.Arrays; +import java.util.Random; +import java.util.UUID; + +import org.h2.mvstore.WriteBuffer; +import org.h2.mvstore.type.ObjectDataType; +import org.h2.test.TestBase; + +/** + * Test the ObjectType class. + */ +public class TestObjectDataType extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testCommonValues(); + } + + private void testCommonValues() { + BigInteger largeBigInt = BigInteger.probablePrime(200, new Random(1)); + ObjectDataType ot = new ObjectDataType(); + Object[] array = { + false, true, + Byte.MIN_VALUE, (byte) -1, (byte) 0, (byte) 1, Byte.MAX_VALUE, + Short.MIN_VALUE, (short) -1, (short) 0, (short) 1, Short.MAX_VALUE, + Integer.MIN_VALUE, Integer.MIN_VALUE + 1, + -1000, -100, -1, 0, 1, 2, 14, + 15, 16, 17, 100, Integer.MAX_VALUE - 1, Integer.MAX_VALUE, + Long.MIN_VALUE, Long.MIN_VALUE + 1, -1000L, -1L, 0L, 1L, 2L, 14L, + 15L, 16L, 17L, 100L, Long.MAX_VALUE - 1, Long.MAX_VALUE, + largeBigInt.negate(), BigInteger.valueOf(-1), BigInteger.ZERO, + BigInteger.ONE, BigInteger.TEN, largeBigInt, + Float.NEGATIVE_INFINITY, -Float.MAX_VALUE, -1f, -0f, 0f, + Float.MIN_VALUE, 1f, Float.MAX_VALUE, + Float.POSITIVE_INFINITY, Float.NaN, + Double.NEGATIVE_INFINITY, -Double.MAX_VALUE, -1d, -0d, 0d, + Double.MIN_VALUE, 1d, Double.MAX_VALUE, + Double.POSITIVE_INFINITY, Double.NaN, + BigDecimal.valueOf(Double.MAX_VALUE).negate(), + new BigDecimal(largeBigInt).negate(), + BigDecimal.valueOf(-100.0), BigDecimal.ZERO, BigDecimal.ONE, + BigDecimal.TEN, BigDecimal.valueOf(Long.MAX_VALUE), + new BigDecimal(largeBigInt), + BigDecimal.valueOf(Double.MAX_VALUE), + Character.MIN_VALUE, '0', 'a', Character.MAX_VALUE, + "", " ", " ", "123456789012345", "1234567890123456", + new String(new char[100]).replace((char) 0, 'x'), + new String(new char[100000]).replace((char) 0, 'x'), "y", + "\u1234", "\u2345", "\u6789", "\uffff", + new UUID(Long.MIN_VALUE, Long.MIN_VALUE), + new UUID(Long.MIN_VALUE, 0), new UUID(0, 0), + new UUID(Long.MAX_VALUE, Long.MAX_VALUE), + new java.util.Date(0), new java.util.Date(1000), + new java.util.Date(4000), new java.util.Date(5000), + new boolean[0], new boolean[] { false, false }, + new boolean[] { true }, + new byte[0], new byte[1], new 
byte[15], new byte[16], + new byte[10000], new byte[] { (byte) 1 }, + new byte[] { (byte) 0xff }, + new short[0], new short[] { -1 }, new short[] { 1 }, + new char[0], new char[1], new char[10000], + new char[] { (char) 1 }, + new int[0], new int[1], new int[15], new int[16], + new int[10000], new int[] { (byte) 1 }, + new long[0], new long[1], new long[15], new long[16], + new long[10000], new long[] { (byte) 1 }, + new float[0], new float[]{Float.NEGATIVE_INFINITY}, + new float[1], new float[]{Float.POSITIVE_INFINITY}, + new double[0], new double[]{Double.NEGATIVE_INFINITY}, + new double[1], new double[]{Double.POSITIVE_INFINITY}, + new Object[0], + new Object[100], + new Object[] { 1 }, + new Object[] { 0.0, "Hello", null, Double.NaN }, + new String[] { "Hello", null }, + new String[] { "World" }, + new java.sql.Date[] { }, + new Timestamp[] { }, + new Timestamp[] { null }, + new Timestamp(2000), new Timestamp(3000), + }; + Object otherType = false; + Object last = null; + for (Object x : array) { + test(otherType, x); + if (last != null) { + int comp = ot.compare(x, last); + if (comp <= 0) { + ot.compare(x, last); + fail(x.getClass().getSimpleName() + ": " + + x.toString() + " " + comp); + } + assertTrue(x.toString(), ot.compare(last, x) < 0); + } + if (last != null && last.getClass() != x.getClass()) { + otherType = last; + } + last = x; + } + Random r = new Random(1); + for (int i = 0; i < 1000; i++) { + Object x = array[r.nextInt(array.length)]; + Object y = array[r.nextInt(array.length)]; + int comp = ot.compare(x, y); + if (comp != 0) { + assertEquals("x:" + x + " y:" + y, -comp, ot.compare(y, x)); + } + } + } + + private void test(Object last, Object x) { + ObjectDataType ot = new ObjectDataType(); + + // switch to the last type before every operation, + // to test switching types + ot.getMemory(last); + assertTrue(ot.getMemory(x) >= 0); + + ot.getMemory(last); + assertTrue(ot.getMemory(x) >= 0); + + ot.getMemory(last); + assertEquals(0, ot.compare(x, 
x)); + WriteBuffer buff = new WriteBuffer(); + + ot.getMemory(last); + ot.write(buff, x); + buff.put((byte) 123); + ByteBuffer bb = buff.getBuffer(); + bb.flip(); + + ot.getMemory(last); + Object y = ot.read(bb); + assertEquals(123, bb.get()); + assertEquals(0, bb.remaining()); + assertEquals(x.getClass().getName(), y.getClass().getName()); + + ot.getMemory(last); + assertEquals(0, ot.compare(x, y)); + if (x.getClass().isArray()) { + if (x instanceof byte[]) { + assertTrue(Arrays.equals((byte[]) x, (byte[]) y)); + } else if (x instanceof boolean[]) { + assertTrue(Arrays.equals((boolean[]) x, (boolean[]) y)); + } else if (x instanceof short[]) { + assertTrue(Arrays.equals((short[]) x, (short[]) y)); + } else if (x instanceof float[]) { + assertTrue(Arrays.equals((float[]) x, (float[]) y)); + } else if (x instanceof double[]) { + assertTrue(Arrays.equals((double[]) x, (double[]) y)); + } else if (x instanceof char[]) { + assertTrue(Arrays.equals((char[]) x, (char[]) y)); + } else if (x instanceof int[]) { + assertTrue(Arrays.equals((int[]) x, (int[]) y)); + } else if (x instanceof long[]) { + assertTrue(Arrays.equals((long[]) x, (long[]) y)); + } else { + assertTrue(Arrays.equals((Object[]) x, (Object[]) y)); + } + } else { + assertEquals(x.hashCode(), y.hashCode()); + assertTrue(x.equals(y)); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestRandomMapOps.java b/modules/h2/src/test/java/org/h2/test/store/TestRandomMapOps.java new file mode 100644 index 0000000000000..9ba9f0984b7de --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestRandomMapOps.java @@ -0,0 +1,189 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
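The per-value check in test(Object, Object) above writes the value, appends a sentinel byte (123), reads the value back, and then verifies the sentinel and that the buffer is fully consumed, which proves the reader consumed exactly the bytes the writer produced. A minimal sketch of that round-trip pattern with plain java.nio (the class name RoundTrip is ours, not part of H2):

```java
import java.nio.ByteBuffer;

class RoundTrip {

    // Write a payload, append a sentinel byte, read both back, and
    // check the reader consumed exactly the bytes that were written.
    static boolean roundTrip(int value) {
        ByteBuffer buff = ByteBuffer.allocate(16);
        buff.putInt(value);        // the payload
        buff.put((byte) 123);      // sentinel guards against over- or under-reads
        buff.flip();
        int read = buff.getInt();  // read the payload back
        byte sentinel = buff.get();
        return read == value && sentinel == 123 && buff.remaining() == 0;
    }
}
```

The sentinel-plus-remaining() check is what catches a serializer that writes more (or fewer) bytes than its reader expects.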
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.text.MessageFormat; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Random; +import java.util.TreeMap; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests the MVStore. + */ +public class TestRandomMapOps extends TestBase { + + private static final boolean LOG = false; + private int op; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.big = true; + test.test(); + } + + @Override + public void test() throws Exception { + testMap("memFS:randomOps.h3"); + FileUtils.delete("memFS:randomOps.h3"); + } + + private void testMap(String fileName) { + int best = Integer.MAX_VALUE; + int bestSeed = 0; + Throwable failException = null; + int size = getSize(100, 1000); + for (int seed = 0; seed < 100; seed++) { + FileUtils.delete(fileName); + Throwable ex = null; + try { + testOps(fileName, size, seed); + continue; + } catch (Exception e) { + ex = e; + } catch (AssertionError e) { + ex = e; + } + if (op < best) { + trace(seed); + bestSeed = seed; + best = op; + size = best; + failException = ex; + // System.out.println("seed:" + seed + " op:" + op + " " + ex); + } + } + if (failException != null) { + throw (AssertionError) new AssertionError("seed = " + bestSeed + + " op = " + best).initCause(failException); + } + } + + private void testOps(String fileName, int size, int seed) { + FileUtils.delete(fileName); + MVStore s = openStore(fileName); + MVMap m = s.openMap("data"); + Random r = new Random(seed); + op = 0; + TreeMap map = new TreeMap<>(); + for (; op < size; op++) { + int k = r.nextInt(100); + byte[] v = new byte[r.nextInt(10) * 10]; + int type = r.nextInt(12); + switch (type) { + case 0: + case 1: + case 2: + case 3: + log(op, k, v, 
"m.put({0}, {1})"); + m.put(k, v); + map.put(k, v); + break; + case 4: + case 5: + log(op, k, v, "m.remove({0})"); + m.remove(k); + map.remove(k); + break; + case 6: + log(op, k, v, "s.compact(90, 1024)"); + s.compact(90, 1024); + break; + case 7: + log(op, k, v, "m.clear()"); + m.clear(); + map.clear(); + break; + case 8: + log(op, k, v, "s.commit()"); + s.commit(); + break; + case 9: + log(op, k, v, "s.commit()"); + s.commit(); + log(op, k, v, "s.close()"); + s.close(); + log(op, k, v, "s = openStore(fileName)"); + s = openStore(fileName); + log(op, k, v, "m = s.openMap(\"data\")"); + m = s.openMap("data"); + break; + case 10: + log(op, k, v, "s.commit()"); + s.commit(); + log(op, k, v, "s.compactMoveChunks()"); + s.compactMoveChunks(); + break; + case 11: + log(op, k, v, "m.getKeyIndex({0})"); + ArrayList keyList = new ArrayList<>(map.keySet()); + int index = Collections.binarySearch(keyList, k, null); + int index2 = (int) m.getKeyIndex(k); + assertEquals(index, index2); + if (index >= 0) { + int k2 = m.getKey(index); + assertEquals(k2, k); + } + break; + } + assertEqualsMapValues(map.get(k), m.get(k)); + assertEquals(map.ceilingKey(k), m.ceilingKey(k)); + assertEquals(map.floorKey(k), m.floorKey(k)); + assertEquals(map.higherKey(k), m.higherKey(k)); + assertEquals(map.lowerKey(k), m.lowerKey(k)); + assertEquals(map.isEmpty(), m.isEmpty()); + assertEquals(map.size(), m.size()); + if (!map.isEmpty()) { + assertEquals(map.firstKey(), m.firstKey()); + assertEquals(map.lastKey(), m.lastKey()); + } + } + s.close(); + } + + private static MVStore openStore(String fileName) { + MVStore s = new MVStore.Builder().fileName(fileName). 
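testOps drives the MVMap and a java.util.TreeMap through the same random operation sequence and compares them after every step, so any divergence is reproducible from the seed and operation number alone. A stripped-down sketch of that model-based loop (HashMap stands in for the map under test; the class name and op mix details are ours):

```java
import java.util.HashMap;
import java.util.Random;
import java.util.TreeMap;

class RandomOpsSketch {

    // Model-based fuzzing: apply the same seeded random put/remove
    // sequence to the map under test and a TreeMap reference model,
    // checking agreement after every step. Returns the op count on
    // success; on mismatch, the seed and op number pinpoint the bug.
    static int run(int size, int seed) {
        HashMap<Integer, Integer> underTest = new HashMap<>(); // stand-in for MVMap
        TreeMap<Integer, Integer> model = new TreeMap<>();
        Random r = new Random(seed);
        for (int op = 0; op < size; op++) {
            int k = r.nextInt(100);
            if (r.nextInt(6) < 4) {          // puts outnumber removes, as in testOps
                underTest.put(k, op);
                model.put(k, op);
            } else {
                underTest.remove(k);
                model.remove(k);
            }
            if (!underTest.equals(model)) {
                throw new AssertionError("seed=" + seed + " op=" + op);
            }
        }
        return size;
    }
}
```

testMap's outer loop then shrinks a failure by remembering the seed that failed at the lowest op count.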
+ pageSplitSize(50).autoCommitDisabled().open(); + s.setRetentionTime(1000); + return s; + } + + private void assertEqualsMapValues(byte[] x, byte[] y) { + if (x == null || y == null) { + if (x != y) { + assertTrue(x == y); + } + } else { + assertEquals(x.length, y.length); + } + } + + /** + * Log the operation + * + * @param op the operation id + * @param k the key + * @param v the value + * @param msg the message + */ + private static void log(int op, int k, byte[] v, String msg) { + if (LOG) { + msg = MessageFormat.format(msg, k, + v == null ? null : "new byte[" + v.length + "]"); + System.out.println(msg + "; // op " + op); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestShardedMap.java b/modules/h2/src/test/java/org/h2/test/store/TestShardedMap.java new file mode 100644 index 0000000000000..2f48a013bac50 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestShardedMap.java @@ -0,0 +1,99 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.util.TreeMap; + +import org.h2.dev.cluster.ShardedMap; +import org.h2.test.TestBase; + +/** + * Test sharded maps. + */ +public class TestShardedMap extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testLinearSplit(); + testReplication(); + testOverlap(); + } + + private void testLinearSplit() { + ShardedMap map = new ShardedMap<>(); + TreeMap a = new TreeMap<>(); + TreeMap b = new TreeMap<>(); + map.addMap(a, null, 5); + map.addMap(b, 5, null); + for (int i = 0; i < 10; i++) { + map.put(i, i * 10); + } + assertEquals(10, map.size()); + for (int i = 0; i < 10; i++) { + assertEquals(i * 10, map.get(i).intValue()); + } + assertEquals("[0, 1, 2, 3, 4]", + a.keySet().toString()); + assertEquals("[5, 6, 7, 8, 9]", + b.keySet().toString()); + assertEquals("[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]", + map.keySet().toString()); + assertEquals(10, map.sizeAsLong()); + } + + private void testReplication() { + ShardedMap map = new ShardedMap<>(); + TreeMap a = new TreeMap<>(); + TreeMap b = new TreeMap<>(); + map.addMap(a, null, null); + map.addMap(b, null, null); + for (int i = 0; i < 10; i++) { + map.put(i, i * 10); + } + assertEquals(10, map.size()); + for (int i = 0; i < 10; i++) { + assertEquals(i * 10, map.get(i).intValue()); + } + assertEquals("[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]", + a.keySet().toString()); + assertEquals("[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]", + b.keySet().toString()); + assertEquals("[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]", + map.keySet().toString()); + assertEquals(10, map.sizeAsLong()); + } + + private void testOverlap() { + ShardedMap map = new ShardedMap<>(); + TreeMap a = new TreeMap<>(); + TreeMap b = new TreeMap<>(); + map.addMap(a, null, 10); + map.addMap(b, 5, null); + for (int i = 0; i < 20; i++) { + map.put(i, i * 10); + } + // overlap: size is unknown + assertEquals(-1, map.size()); + for (int i = 0; i < 20; i++) { + assertEquals(i * 10, map.get(i).intValue()); + } + assertEquals("[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]", + a.keySet().toString()); + assertEquals("[5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]", + b.keySet().toString()); + assertEquals(-1, 
map.sizeAsLong()); + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/store/TestSpinLock.java b/modules/h2/src/test/java/org/h2/test/store/TestSpinLock.java new file mode 100644 index 0000000000000..5ad8387c04ae8 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestSpinLock.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import org.h2.test.TestBase; + +/** + * Test using volatile fields to ensure we don't read from a version that is + * concurrently written to. + */ +public class TestSpinLock extends TestBase { + + /** + * The version to use for writing. + */ + volatile int writeVersion; + + /** + * The current data object. + */ + volatile Data data = new Data(0, null); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + final TestSpinLock obj = new TestSpinLock(); + Thread t = new Thread() { + @Override + public void run() { + while (!isInterrupted()) { + for (int i = 0; i < 10000; i++) { + Data d = obj.copyOnWrite(); + obj.data = d; + d.write(i); + d.writing = false; + } + } + } + }; + t.start(); + try { + for (int i = 0; i < 100000; i++) { + Data d = obj.getImmutable(); + int z = d.x + d.y; + if (z != 0) { + String error = i + " result: " + z + " now: " + d.x + " " + + d.y; + System.out.println(error); + throw new Exception(error); + } + } + } finally { + t.interrupt(); + t.join(); + } + } + + /** + * Clone the data object if necessary (if the write version is newer than + * the current version). 
+ * + * @return the data object + */ + Data copyOnWrite() { + Data d = data; + d.writing = true; + int w = writeVersion; + if (w <= data.version) { + return d; + } + Data d2 = new Data(w, data); + d2.writing = true; + d.writing = false; + return d2; + } + + /** + * Get an immutable copy of the data object. + * + * @return the immutable object + */ + private Data getImmutable() { + Data d = data; + ++writeVersion; + // wait until writing is done, + // but only for the current write operation: + // a bit like a spin lock + while (d.writing) { + // Thread.yield() is not strictly required, especially + // on multi-core systems, but getImmutable() + // does not need to be fast here + Thread.yield(); + } + return d; + } + + /** + * The data class - represents the root page. + */ + static class Data { + + /** + * The version. + */ + final int version; + + /** + * The values. + */ + int x, y; + + /** + * Whether a write operation is in progress. + */ + volatile boolean writing; + + /** + * Create a copy of the data. + * + * @param version the new version + * @param old the old data or null + */ + Data(int version, Data old) { + this.version = version; + if (old != null) { + this.x = old.x; + this.y = old.y; + } + } + + /** + * Write to the fields in an unsynchronized way. + * + * @param value the new value + */ + void write(int value) { + this.x = value; + this.y = -value; + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestStreamStore.java b/modules/h2/src/test/java/org/h2/test/store/TestStreamStore.java new file mode 100644 index 0000000000000..8613aee21464e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestStreamStore.java @@ -0,0 +1,483 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
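TestSpinLock publishes a pair of fields behind a volatile writing flag: the writer raises the flag, updates the pair, then clears it, and readers spin until the flag is down. A single-threaded sketch of just that flag protocol (the class name is ours; the real test layers copy-on-write versioning on top, which this sketch omits):

```java
class SpinLockSketch {

    static final class Data {
        int x, y;                 // written as a pair; invariant: x + y == 0
        volatile boolean writing; // raised while the pair is being updated
    }

    final Data data = new Data();

    void write(int value) {
        data.writing = true;   // announce the write, as in TestSpinLock
        data.x = value;
        data.y = -value;
        data.writing = false;  // the volatile store publishes x and y
    }

    int readSum() {
        Data d = data;
        while (d.writing) {    // spin until the current write finishes
            Thread.yield();
        }
        return d.x + d.y;      // 0 whenever the pair was read consistently
    }
}
```

Single-threaded this trivially holds; the point of the full test is that the volatile flag plus copy-on-write keeps the invariant under a concurrent writer too.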
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Map; +import java.util.Random; +import java.util.TreeMap; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.StreamStore; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; +import org.h2.util.StringUtils; + +/** + * Test the stream store. + */ +public class TestStreamStore extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws IOException { + testMaxBlockKey(); + testIOException(); + testSaveCount(); + testExceptionDuringStore(); + testReadCount(); + testLarge(); + testDetectIllegalId(); + testTreeStructure(); + testFormat(); + testWithExistingData(); + testWithFullMap(); + testLoop(); + } + + private void testMaxBlockKey() throws IOException { + TreeMap map = new TreeMap<>(); + StreamStore s = new StreamStore(map); + s.setMaxBlockSize(128); + s.setMinBlockSize(64); + map.clear(); + for (int len = 1; len < 1024 * 1024; len *= 2) { + byte[] id = s.put(new ByteArrayInputStream(new byte[len])); + long max = s.getMaxBlockKey(id); + if (max == -1) { + assertTrue(map.isEmpty()); + } else { + assertEquals(map.lastKey(), (Long) max); + } + } + } + + private void testIOException() throws IOException { + HashMap map = new HashMap<>(); + StreamStore s = new StreamStore(map); + byte[] id = s.put(new ByteArrayInputStream(new byte[1024 * 1024])); + InputStream in = s.get(id); + map.clear(); + try { + while (true) { + if (in.read() < 0) { + break; + } + } + fail(); + } catch 
(IOException e) { + assertEquals(DataUtils.ERROR_BLOCK_NOT_FOUND, + DataUtils.getErrorCode(e.getMessage())); + } + } + + private void testSaveCount() throws IOException { + String fileName = getBaseDir() + "/testSaveCount.h3"; + FileUtils.delete(fileName); + MVStore s = new MVStore.Builder(). + fileName(fileName). + open(); + MVMap map = s.openMap("data"); + StreamStore streamStore = new StreamStore(map); + int blockSize = 256 * 1024; + assertEquals(blockSize, streamStore.getMaxBlockSize()); + for (int i = 0; i < 8 * 16; i++) { + streamStore.put(new RandomStream(blockSize, i)); + } + long writeCount = s.getFileStore().getWriteCount(); + assertTrue(writeCount > 2); + s.close(); + } + + private void testExceptionDuringStore() throws IOException { + // test that if there is an IOException while storing + // the data, the entries in the map are "rolled back" + HashMap map = new HashMap<>(); + StreamStore s = new StreamStore(map); + s.setMaxBlockSize(1024); + assertThrows(IOException.class, s). + put(createFailingStream(new IOException())); + assertEquals(0, map.size()); + // the runtime exception is converted to an IOException + assertThrows(IOException.class, s). + put(createFailingStream(new IllegalStateException())); + assertEquals(0, map.size()); + } + + private void testReadCount() throws IOException { + String fileName = getBaseDir() + "/testReadCount.h3"; + FileUtils.delete(fileName); + MVStore s = new MVStore.Builder(). + fileName(fileName). + open(); + s.setCacheSize(1); + StreamStore streamStore = getAutoCommitStreamStore(s); + long size = s.getPageSplitSize() * 2; + for (int i = 0; i < 100; i++) { + streamStore.put(new RandomStream(size, i)); + } + s.commit(); + MVMap map = s.openMap("data"); + assertTrue("size: " + map.size(), map.sizeAsLong() >= 100); + s.close(); + + s = new MVStore.Builder(). + fileName(fileName). 
+ open(); + streamStore = getAutoCommitStreamStore(s); + for (int i = 0; i < 100; i++) { + streamStore.put(new RandomStream(size, -i)); + } + s.commit(); + long readCount = s.getFileStore().getReadCount(); + // the read count should be low because new blocks + // are appended at the end (not between existing blocks) + assertTrue("rc: " + readCount, readCount < 15); + map = s.openMap("data"); + assertTrue("size: " + map.size(), map.sizeAsLong() >= 200); + s.close(); + } + + private static StreamStore getAutoCommitStreamStore(final MVStore s) { + MVMap map = s.openMap("data"); + return new StreamStore(map) { + @Override + protected void onStore(int len) { + if (s.getUnsavedMemory() > s.getAutoCommitMemory() / 2) { + s.commit(); + } + } + }; + } + + private void testLarge() throws IOException { + String fileName = getBaseDir() + "/testVeryLarge.h3"; + FileUtils.delete(fileName); + final MVStore s = new MVStore.Builder(). + fileName(fileName). + open(); + MVMap map = s.openMap("data"); + final AtomicInteger count = new AtomicInteger(); + StreamStore streamStore = new StreamStore(map) { + @Override + protected void onStore(int len) { + count.incrementAndGet(); + s.commit(); + } + }; + long size = 1 * 1024 * 1024; + streamStore.put(new RandomStream(size, 0)); + s.close(); + assertEquals(4, count.get()); + } + + /** + * A stream of incompressible data. + */ + static class RandomStream extends InputStream { + + private long pos, size; + private int seed; + + RandomStream(long size, int seed) { + this.size = size; + this.seed = seed; + } + + @Override + public int read() { + byte[] data = new byte[1]; + int len = read(data, 0, 1); + return len <= 0 ? 
len : data[0] & 255; + } + + @Override + public int read(byte[] b, int off, int len) { + if (pos >= size) { + return -1; + } + len = (int) Math.min(size - pos, len); + int x = seed, end = off + len; + // a fast and very simple pseudo-random number generator + // with a period length of 4 GB + // also good: x * 9 + 1, shift 6; x * 11 + 1, shift 7 + while (off < end) { + x = (x << 4) + x + 1; + b[off++] = (byte) (x >> 8); + } + seed = x; + pos += len; + return len; + } + + } + + private void testDetectIllegalId() throws IOException { + Map map = new HashMap<>(); + StreamStore store = new StreamStore(map); + try { + store.length(new byte[]{3, 0, 0}); + fail(); + } catch (IllegalArgumentException e) { + // expected + } + try { + store.remove(new byte[]{3, 0, 0}); + fail(); + } catch (IllegalArgumentException e) { + // expected + } + map.put(0L, new byte[]{3, 0, 0}); + InputStream in = store.get(new byte[]{2, 1, 0}); + try { + in.read(); + fail(); + } catch (IllegalArgumentException e) { + // expected + } + } + + private void testTreeStructure() throws IOException { + + final AtomicInteger reads = new AtomicInteger(); + Map map = new HashMap() { + + private static final long serialVersionUID = 1L; + + @Override + public byte[] get(Object k) { + reads.incrementAndGet(); + return super.get(k); + } + + }; + + StreamStore store = new StreamStore(map); + store.setMinBlockSize(10); + store.setMaxBlockSize(100); + byte[] id = store.put(new ByteArrayInputStream(new byte[10000])); + InputStream in = store.get(id); + assertEquals(0, in.read(new byte[0])); + assertEquals(0, in.read()); + assertEquals(3, reads.get()); + } + + private void testFormat() throws IOException { + Map map = new HashMap<>(); + StreamStore store = new StreamStore(map); + store.setMinBlockSize(10); + store.setMaxBlockSize(20); + store.setNextKey(123); + + byte[] id; + + id = store.put(new ByteArrayInputStream(new byte[200])); + assertEquals(200, store.length(id)); + assertEquals("02c8018801", 
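RandomStream's generator above is x = (x << 4) + x + 1, i.e. x * 17 + 1, emitting bits 8..15 of each state as one output byte; the source comment claims a period of 4 GB. A standalone sketch (the class name is ours) showing the stream is fully determined by its seed:

```java
class TinyPrng {

    // The generator used by RandomStream: x = x * 17 + 1,
    // written as (x << 4) + x + 1, taking bits 8..15 of each
    // state as the next output byte.
    static byte[] bytes(int seed, int len) {
        byte[] b = new byte[len];
        int x = seed;
        for (int i = 0; i < len; i++) {
            x = (x << 4) + x + 1;
            b[i] = (byte) (x >> 8);
        }
        return b;
    }
}
```

Determinism per seed is what lets testReadCount and testLarge regenerate the exact same "incompressible" stream without storing it.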
StringUtils.convertBytesToHex(id)); + + id = store.put(new ByteArrayInputStream(new byte[0])); + assertEquals("", StringUtils.convertBytesToHex(id)); + + id = store.put(new ByteArrayInputStream(new byte[1])); + assertEquals("000100", StringUtils.convertBytesToHex(id)); + + id = store.put(new ByteArrayInputStream(new byte[3])); + assertEquals("0003000000", StringUtils.convertBytesToHex(id)); + + id = store.put(new ByteArrayInputStream(new byte[10])); + assertEquals("010a8901", StringUtils.convertBytesToHex(id)); + + byte[] combined = StringUtils.convertHexToBytes("0001aa0002bbcc"); + assertEquals(3, store.length(combined)); + InputStream in = store.get(combined); + assertEquals(1, in.skip(1)); + assertEquals(0xbb, in.read()); + assertEquals(1, in.skip(1)); + } + + private void testWithExistingData() throws IOException { + + final AtomicInteger tests = new AtomicInteger(); + Map map = new HashMap() { + + private static final long serialVersionUID = 1L; + + @Override + public boolean containsKey(Object k) { + tests.incrementAndGet(); + return super.containsKey(k); + } + + }; + StreamStore store = new StreamStore(map); + store.setMinBlockSize(10); + store.setMaxBlockSize(20); + store.setNextKey(0); + for (int i = 0; i < 10; i++) { + store.put(new ByteArrayInputStream(new byte[20])); + } + assertEquals(10, map.size()); + assertEquals(10, tests.get()); + for (int i = 0; i < 10; i++) { + map.containsKey((long)i); + } + assertEquals(20, tests.get()); + store = new StreamStore(map); + store.setMinBlockSize(10); + store.setMaxBlockSize(20); + store.setNextKey(0); + assertEquals(0, store.getNextKey()); + for (int i = 0; i < 5; i++) { + store.put(new ByteArrayInputStream(new byte[20])); + } + assertEquals(88, tests.get()); + assertEquals(15, store.getNextKey()); + assertEquals(15, map.size()); + for (int i = 0; i < 15; i++) { + map.containsKey((long)i); + } + } + + private void testWithFullMap() throws IOException { + final AtomicInteger tests = new AtomicInteger(); + Map map 
= new HashMap() { + + private static final long serialVersionUID = 1L; + + @Override + public boolean containsKey(Object k) { + tests.incrementAndGet(); + if (((Long) k) < Long.MAX_VALUE / 2) { + // simulate a *very* full map + return true; + } + return super.containsKey(k); + } + + }; + StreamStore store = new StreamStore(map); + store.setMinBlockSize(20); + store.setMaxBlockSize(100); + store.setNextKey(0); + store.put(new ByteArrayInputStream(new byte[100])); + assertEquals(1, map.size()); + assertEquals(64, tests.get()); + assertEquals(Long.MAX_VALUE / 2 + 1, store.getNextKey()); + } + + private void testLoop() throws IOException { + Map map = new HashMap<>(); + StreamStore store = new StreamStore(map); + assertEquals(256 * 1024, store.getMaxBlockSize()); + assertEquals(256, store.getMinBlockSize()); + store.setNextKey(0); + assertEquals(0, store.getNextKey()); + test(store, 10, 20, 1000); + for (int i = 0; i < 20; i++) { + test(store, 0, 128, i); + test(store, 10, 128, i); + } + for (int i = 20; i < 200; i += 10) { + test(store, 0, 128, i); + test(store, 10, 128, i); + } + } + + private void test(StreamStore store, int minBlockSize, int maxBlockSize, + int length) throws IOException { + store.setMinBlockSize(minBlockSize); + assertEquals(minBlockSize, store.getMinBlockSize()); + store.setMaxBlockSize(maxBlockSize); + assertEquals(maxBlockSize, store.getMaxBlockSize()); + long next = store.getNextKey(); + Random r = new Random(length); + byte[] data = new byte[length]; + r.nextBytes(data); + byte[] id = store.put(new ByteArrayInputStream(data)); + if (length > 0 && length >= minBlockSize) { + assertFalse(store.isInPlace(id)); + } else { + assertTrue(store.isInPlace(id)); + } + long next2 = store.getNextKey(); + if (length > 0 && length >= minBlockSize) { + assertTrue(next2 > next); + } else { + assertEquals(next, next2); + } + if (length == 0) { + assertEquals(0, id.length); + } + + assertEquals(length, store.length(id)); + + InputStream in = store.get(id); + 
ByteArrayOutputStream out = new ByteArrayOutputStream(); + IOUtils.copy(in, out); + assertTrue(Arrays.equals(data, out.toByteArray())); + + in = store.get(id); + in.close(); + assertEquals(-1, in.read()); + + in = store.get(id); + assertEquals(0, in.skip(0)); + if (length > 0) { + assertEquals(1, in.skip(1)); + if (length > 1) { + assertEquals(data[1] & 255, in.read()); + if (length > 2) { + assertEquals(1, in.skip(1)); + if (length > 3) { + assertEquals(data[3] & 255, in.read()); + } + } else { + assertEquals(0, in.skip(1)); + } + } else { + assertEquals(-1, in.read()); + } + } else { + assertEquals(0, in.skip(1)); + } + + if (length > 12) { + in = store.get(id); + assertEquals(12, in.skip(12)); + assertEquals(data[12] & 255, in.read()); + long skipped = 0; + while (true) { + long s = in.skip(Integer.MAX_VALUE); + if (s == 0) { + break; + } + skipped += s; + } + assertEquals(length - 13, skipped); + assertEquals(-1, in.read()); + } + + store.remove(id); + assertEquals(0, store.getMap().size()); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/TestTransactionStore.java b/modules/h2/src/test/java/org/h2/test/store/TestTransactionStore.java new file mode 100644 index 0000000000000..f92bc5cbb3d87 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/TestTransactionStore.java @@ -0,0 +1,1092 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.store; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Iterator; +import java.util.List; +import java.util.Random; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.mvstore.db.TransactionStore; +import org.h2.mvstore.db.TransactionStore.Change; +import org.h2.mvstore.db.TransactionStore.Transaction; +import org.h2.mvstore.db.TransactionStore.TransactionMap; +import org.h2.mvstore.type.ObjectDataType; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.New; +import org.h2.util.Task; + +/** + * Test concurrent transactions. + */ +public class TestTransactionStore extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + FileUtils.createDirectories(getBaseDir()); + testHCLFKey(); + testConcurrentAddRemove(); + testConcurrentAdd(); + testCountWithOpenTransactions(); + testConcurrentUpdate(); + testRepeatedChange(); + testTransactionAge(); + testStopWhileCommitting(); + testGetModifiedMaps(); + testKeyIterator(); + testMultiStatement(); + testTwoPhaseCommit(); + testSavepoint(); + testConcurrentTransactionsReadCommitted(); + testSingleConnection(); + testCompareWithPostgreSQL(); + testStoreMultiThreadedReads(); + } + + private void testHCLFKey() { + MVStore s = MVStore.open(null); + final TransactionStore ts = new TransactionStore(s); + ts.init(); + Transaction t = ts.begin(); + ObjectDataType keyType = new ObjectDataType(); + TransactionMap map = t.openMap("test", keyType, keyType); + // firstKey() + assertNull(map.firstKey()); + // lastKey() + assertNull(map.lastKey()); + map.put(10L, 100L); + map.put(20L, 200L); + map.put(30L, 300L); + map.put(40L, 400L); + t.commit(); + t = ts.begin(); + map = t.openMap("test", keyType, keyType); + map.put(15L, 150L); + // The same transaction + assertEquals((Object) 15L, map.higherKey(10L)); + t = ts.begin(); + map = t.openMap("test", keyType, keyType); + // Another transaction + // higherKey() + assertEquals((Object) 20L, map.higherKey(10L)); + assertEquals((Object) 20L, map.higherKey(15L)); + assertNull(map.higherKey(40L)); + // ceilingKey() + assertEquals((Object) 10L, map.ceilingKey(10L)); + assertEquals((Object) 20L, map.ceilingKey(15L)); + assertEquals((Object) 40L, map.ceilingKey(40L)); + assertNull(map.higherKey(45L)); + // lowerKey() + assertNull(map.lowerKey(10L)); + assertEquals((Object) 10L, map.lowerKey(15L)); + assertEquals((Object) 10L, map.lowerKey(20L)); + assertEquals((Object) 20L, map.lowerKey(25L)); + // floorKey() + assertNull(map.floorKey(5L)); + assertEquals((Object) 10L, map.floorKey(10L)); + 
assertEquals((Object) 10L, map.floorKey(15L)); + assertEquals((Object) 30L, map.floorKey(35L)); + s.close(); + } + + private static void testConcurrentAddRemove() throws InterruptedException { + MVStore s = MVStore.open(null); + int threadCount = 3; + final int keyCount = 2; + final TransactionStore ts = new TransactionStore(s); + ts.init(); + + final Random r = new Random(1); + + Task[] tasks = new Task[threadCount]; + for (int i = 0; i < threadCount; i++) { + Task task = new Task() { + + @Override + public void call() throws Exception { + TransactionMap map = null; + while (!stop) { + Transaction tx = ts.begin(); + map = tx.openMap("data"); + int k = r.nextInt(keyCount); + try { + map.remove(k); + map.put(k, r.nextInt()); + } catch (IllegalStateException e) { + // ignore and retry + } + tx.commit(); + } + } + + }; + task.execute(); + tasks[i] = task; + } + Thread.sleep(1000); + for (Task t : tasks) { + t.get(); + } + s.close(); + } + + private void testConcurrentAdd() { + MVStore s; + s = MVStore.open(null); + final TransactionStore ts = new TransactionStore(s); + ts.init(); + + final Random r = new Random(1); + + final AtomicInteger key = new AtomicInteger(); + final AtomicInteger failCount = new AtomicInteger(); + + Task task = new Task() { + + @Override + public void call() throws Exception { + Transaction tx = null; + TransactionMap map = null; + while (!stop) { + int k = key.get(); + tx = ts.begin(); + map = tx.openMap("data"); + try { + map.put(k, r.nextInt()); + } catch (IllegalStateException e) { + failCount.incrementAndGet(); + // ignore and retry + } + tx.commit(); + } + } + + }; + task.execute(); + Transaction tx = null; + int count = 100000; + TransactionMap map = null; + for (int i = 0; i < count; i++) { + int k = i; + key.set(k); + tx = ts.begin(); + map = tx.openMap("data"); + try { + map.put(k, r.nextInt()); + } catch (IllegalStateException e) { + failCount.incrementAndGet(); + // ignore and retry + } + tx.commit(); + if (failCount.get() > 0 && i 
> 4000) { + // stop earlier, if possible + count = i; + break; + } + } + // we expect at least 10% the operations were successful + assertTrue(failCount.toString() + " >= " + (count * 0.9), + failCount.get() < count * 0.9); + // we expect at least a few failures + assertTrue(failCount.toString(), failCount.get() > 0); + s.close(); + } + + private void testCountWithOpenTransactions() { + MVStore s; + TransactionStore ts; + s = MVStore.open(null); + ts = new TransactionStore(s); + ts.init(); + + Transaction tx1 = ts.begin(); + TransactionMap map1 = tx1.openMap("data"); + int size = 150; + for (int i = 0; i < size; i++) { + map1.put(i, i * 10); + } + tx1.commit(); + tx1 = ts.begin(); + map1 = tx1.openMap("data"); + + Transaction tx2 = ts.begin(); + TransactionMap map2 = tx2.openMap("data"); + + Random r = new Random(1); + for (int i = 0; i < size * 3; i++) { + assertEquals("op: " + i, size, (int) map1.sizeAsLong()); + // keep the first 10%, and add 10% + int k = size / 10 + r.nextInt(size); + if (r.nextBoolean()) { + map2.remove(k); + } else { + map2.put(k, i); + } + } + s.close(); + } + + private void testConcurrentUpdate() { + MVStore s; + TransactionStore ts; + s = MVStore.open(null); + ts = new TransactionStore(s); + ts.init(); + + Transaction tx1 = ts.begin(); + TransactionMap map1 = tx1.openMap("data"); + map1.put(1, 10); + + Transaction tx2 = ts.begin(); + TransactionMap map2 = tx2.openMap("data"); + try { + map2.put(1, 20); + fail(); + } catch (IllegalStateException e) { + assertEquals(DataUtils.ERROR_TRANSACTION_LOCKED, + DataUtils.getErrorCode(e.getMessage())); + } + assertEquals(10, map1.get(1).intValue()); + assertNull(map2.get(1)); + tx1.commit(); + assertEquals(10, map2.get(1).intValue()); + + s.close(); + } + + private void testRepeatedChange() { + MVStore s; + TransactionStore ts; + s = MVStore.open(null); + ts = new TransactionStore(s); + ts.init(); + + Transaction tx0 = ts.begin(); + TransactionMap map0 = tx0.openMap("data"); + map0.put(1, -1); + 
tx0.commit(); + + Transaction tx = ts.begin(); + TransactionMap map = tx.openMap("data"); + for (int i = 0; i < 2000; i++) { + map.put(1, i); + } + + Transaction tx2 = ts.begin(); + TransactionMap map2 = tx2.openMap("data"); + assertEquals(-1, map2.get(1).intValue()); + + s.close(); + } + + private void testTransactionAge() throws Exception { + MVStore s; + TransactionStore ts; + s = MVStore.open(null); + ts = new TransactionStore(s); + ts.init(); + ts.setMaxTransactionId(16); + ArrayList openList = new ArrayList<>(); + for (int i = 0, j = 1; i < 64; i++) { + Transaction t = ts.begin(); + openList.add(t); + assertEquals(j, t.getId()); + j++; + if (j > 16) { + j = 1; + } + if (openList.size() >= 16) { + t = openList.remove(0); + t.commit(); + } + } + + s = MVStore.open(null); + ts = new TransactionStore(s); + ts.init(); + ts.setMaxTransactionId(16); + ArrayList fifo = New.arrayList(); + int open = 0; + for (int i = 0; i < 64; i++) { + Transaction t = null; + if (open >= 16) { + try { + t = ts.begin(); + fail(); + } catch (IllegalStateException e) { + // expected - too many open + } + Transaction first = fifo.remove(0); + first.commit(); + open--; + } + t = ts.begin(); + t.openMap("data").put(i, i); + fifo.add(t); + open++; + } + s.close(); + } + + private void testStopWhileCommitting() throws Exception { + String fileName = getBaseDir() + "/testStopWhileCommitting.h3"; + FileUtils.delete(fileName); + Random r = new Random(0); + + for (int i = 0; i < 10;) { + MVStore s; + TransactionStore ts; + Transaction tx; + TransactionMap m; + + s = MVStore.open(fileName); + ts = new TransactionStore(s); + ts.init(); + tx = ts.begin(); + s.setReuseSpace(false); + m = tx.openMap("test"); + final String value = "x" + i; + for (int j = 0; j < 1000; j++) { + m.put(j, value); + } + final AtomicInteger state = new AtomicInteger(); + final MVStore store = s; + final MVMap other = s.openMap("other"); + Task task = new Task() { + + @Override + public void call() throws Exception { + for 
(int i = 0; !stop; i++) { + state.set(i); + other.put(i, value); + store.commit(); + } + } + }; + task.execute(); + // wait for the task to start + while (state.get() < 1) { + Thread.yield(); + } + // commit while writing in the task + tx.commit(); + // wait for the task to stop + task.get(); + store.close(); + s = MVStore.open(fileName); + // roll back a bit, until we have some undo log entries + assertTrue(s.hasMap("undoLog")); + for (int back = 0; back < 100; back++) { + int minus = r.nextInt(10); + s.rollbackTo(Math.max(0, s.getCurrentVersion() - minus)); + MVMap undo = s.openMap("undoLog"); + if (undo.size() > 0) { + break; + } + } + // re-open the store, because we have opened + // the undoLog map with the wrong data type + s.close(); + s = MVStore.open(fileName); + ts = new TransactionStore(s); + List list = ts.getOpenTransactions(); + if (list.size() != 0) { + tx = list.get(0); + if (tx.getStatus() == Transaction.STATUS_COMMITTING) { + i++; + } + } + s.close(); + FileUtils.delete(fileName); + assertFalse(FileUtils.exists(fileName)); + } + } + + private void testGetModifiedMaps() { + MVStore s = MVStore.open(null); + TransactionStore ts = new TransactionStore(s); + ts.init(); + Transaction tx; + TransactionMap m1, m2, m3; + long sp; + + tx = ts.begin(); + m1 = tx.openMap("m1"); + m2 = tx.openMap("m2"); + m3 = tx.openMap("m3"); + assertFalse(tx.getChanges(0).hasNext()); + tx.commit(); + + tx = ts.begin(); + m1 = tx.openMap("m1"); + m2 = tx.openMap("m2"); + m3 = tx.openMap("m3"); + m1.put("1", "100"); + sp = tx.setSavepoint(); + m2.put("1", "100"); + m3.put("1", "100"); + Iterator it = tx.getChanges(sp); + assertTrue(it.hasNext()); + Change c; + c = it.next(); + assertEquals("m3", c.mapName); + assertEquals("1", c.key.toString()); + assertNull(c.value); + assertTrue(it.hasNext()); + c = it.next(); + assertEquals("m2", c.mapName); + assertEquals("1", c.key.toString()); + assertNull(c.value); + assertFalse(it.hasNext()); + + it = tx.getChanges(0); + 
assertTrue(it.hasNext()); + c = it.next(); + assertEquals("m3", c.mapName); + assertEquals("1", c.key.toString()); + assertNull(c.value); + assertTrue(it.hasNext()); + c = it.next(); + assertEquals("m2", c.mapName); + assertEquals("1", c.key.toString()); + assertNull(c.value); + assertTrue(it.hasNext()); + c = it.next(); + assertEquals("m1", c.mapName); + assertEquals("1", c.key.toString()); + assertNull(c.value); + assertFalse(it.hasNext()); + + tx.rollbackToSavepoint(sp); + + it = tx.getChanges(0); + assertTrue(it.hasNext()); + c = it.next(); + assertEquals("m1", c.mapName); + assertEquals("1", c.key.toString()); + assertNull(c.value); + assertFalse(it.hasNext()); + + tx.commit(); + + s.close(); + } + + private void testKeyIterator() { + MVStore s = MVStore.open(null); + TransactionStore ts = new TransactionStore(s); + ts.init(); + Transaction tx, tx2; + TransactionMap m, m2; + Iterator it, it2; + + tx = ts.begin(); + m = tx.openMap("test"); + m.put("1", "Hello"); + m.put("2", "World"); + m.put("3", "."); + tx.commit(); + + tx2 = ts.begin(); + m2 = tx2.openMap("test"); + m2.remove("2"); + m2.put("3", "!"); + m2.put("4", "?"); + + tx = ts.begin(); + m = tx.openMap("test"); + it = m.keyIterator(null); + assertTrue(it.hasNext()); + assertEquals("1", it.next()); + assertTrue(it.hasNext()); + assertEquals("2", it.next()); + assertTrue(it.hasNext()); + assertEquals("3", it.next()); + assertFalse(it.hasNext()); + + it2 = m2.keyIterator(null); + assertTrue(it2.hasNext()); + assertEquals("1", it2.next()); + assertTrue(it2.hasNext()); + assertEquals("3", it2.next()); + assertTrue(it2.hasNext()); + assertEquals("4", it2.next()); + assertFalse(it2.hasNext()); + + s.close(); + } + + /** + * Tests behavior when used for a sequence of SQL statements. Each statement + * uses a savepoint. Within a statement, changes by the statement itself are + * not seen; the change is only seen when the statement finished. + *

    + * Update statements that change the key of multiple rows may use delete/add + * pairs to do so (they don't need to first delete all entries and then + * re-add them). Trying to add multiple values for the same key is not + * allowed (an update statement that would result in a duplicate key). + */ + private void testMultiStatement() { + MVStore s = MVStore.open(null); + TransactionStore ts = new TransactionStore(s); + ts.init(); + + Transaction tx; + TransactionMap m; + long startUpdate; + + tx = ts.begin(); + + // start of statement + // create table test + startUpdate = tx.setSavepoint(); + m = tx.openMap("test"); + m.setSavepoint(startUpdate); + + // start of statement + // insert into test(id, name) values(1, 'Hello'), (2, 'World') + startUpdate = tx.setSavepoint(); + m.setSavepoint(startUpdate); + assertTrue(m.trySet("1", "Hello", true)); + assertTrue(m.trySet("2", "World", true)); + // not seen yet (within the same statement) + assertNull(m.get("1")); + assertNull(m.get("2")); + + // start of statement + startUpdate = tx.setSavepoint(); + // now we see the newest version + m.setSavepoint(startUpdate); + assertEquals("Hello", m.get("1")); + assertEquals("World", m.get("2")); + // update test set primaryKey = primaryKey + 1 + // (this is usually a tricky case) + assertEquals("Hello", m.get("1")); + assertTrue(m.trySet("1", null, true)); + assertTrue(m.trySet("2", "Hello", true)); + assertEquals("World", m.get("2")); + // already updated by this statement, so it has no effect + // but still returns true because it was changed by this transaction + assertTrue(m.trySet("2", null, true)); + + assertTrue(m.trySet("3", "World", true)); + // not seen within this statement + assertEquals("Hello", m.get("1")); + assertEquals("World", m.get("2")); + assertNull(m.get("3")); + + // start of statement + startUpdate = tx.setSavepoint(); + m.setSavepoint(startUpdate); + // select * from test + assertNull(m.get("1")); + assertEquals("Hello", m.get("2")); + 
assertEquals("World", m.get("3")); + + // start of statement + startUpdate = tx.setSavepoint(); + m.setSavepoint(startUpdate); + // update test set id = 1 + // should fail: duplicate key + assertTrue(m.trySet("2", null, true)); + assertTrue(m.trySet("1", "Hello", true)); + assertTrue(m.trySet("3", null, true)); + assertFalse(m.trySet("1", "World", true)); + tx.rollbackToSavepoint(startUpdate); + + startUpdate = tx.setSavepoint(); + m.setSavepoint(startUpdate); + assertNull(m.get("1")); + assertEquals("Hello", m.get("2")); + assertEquals("World", m.get("3")); + + tx.commit(); + + ts.close(); + s.close(); + } + + private void testTwoPhaseCommit() { + String fileName = getBaseDir() + "/testTwoPhaseCommit.h3"; + FileUtils.delete(fileName); + + MVStore s; + TransactionStore ts; + Transaction tx; + Transaction txOld; + TransactionMap m; + List list; + + s = MVStore.open(fileName); + ts = new TransactionStore(s); + ts.init(); + tx = ts.begin(); + assertEquals(null, tx.getName()); + tx.setName("first transaction"); + assertEquals("first transaction", tx.getName()); + assertEquals(1, tx.getId()); + assertEquals(Transaction.STATUS_OPEN, tx.getStatus()); + m = tx.openMap("test"); + m.put("1", "Hello"); + list = ts.getOpenTransactions(); + assertEquals(1, list.size()); + txOld = list.get(0); + assertTrue(tx.getId() == txOld.getId()); + assertEquals("first transaction", txOld.getName()); + s.commit(); + ts.close(); + s.close(); + + s = MVStore.open(fileName); + ts = new TransactionStore(s); + ts.init(); + tx = ts.begin(); + assertEquals(2, tx.getId()); + m = tx.openMap("test"); + assertEquals(null, m.get("1")); + m.put("2", "Hello"); + list = ts.getOpenTransactions(); + assertEquals(2, list.size()); + txOld = list.get(0); + assertEquals(1, txOld.getId()); + assertEquals(Transaction.STATUS_OPEN, txOld.getStatus()); + assertEquals("first transaction", txOld.getName()); + txOld.prepare(); + assertEquals(Transaction.STATUS_PREPARED, txOld.getStatus()); + txOld = list.get(1); + 
txOld.commit(); + s.commit(); + s.close(); + + s = MVStore.open(fileName); + ts = new TransactionStore(s); + ts.init(); + tx = ts.begin(); + m = tx.openMap("test"); + m.put("3", "Test"); + assertEquals(2, tx.getId()); + list = ts.getOpenTransactions(); + assertEquals(2, list.size()); + txOld = list.get(1); + assertEquals(2, txOld.getId()); + assertEquals(Transaction.STATUS_OPEN, txOld.getStatus()); + assertEquals(null, txOld.getName()); + txOld.rollback(); + txOld = list.get(0); + assertEquals(1, txOld.getId()); + assertEquals(Transaction.STATUS_PREPARED, txOld.getStatus()); + assertEquals("first transaction", txOld.getName()); + txOld.commit(); + assertEquals("Hello", m.get("1")); + s.close(); + + FileUtils.delete(fileName); + } + + private void testSavepoint() { + MVStore s = MVStore.open(null); + TransactionStore ts = new TransactionStore(s); + ts.init(); + Transaction tx; + TransactionMap m; + + tx = ts.begin(); + m = tx.openMap("test"); + m.put("1", "Hello"); + m.put("2", "World"); + m.put("1", "Hallo"); + m.remove("2"); + m.put("3", "!"); + long logId = tx.setSavepoint(); + m.put("1", "Hi"); + m.put("2", "."); + m.remove("3"); + tx.rollbackToSavepoint(logId); + assertEquals("Hallo", m.get("1")); + assertNull(m.get("2")); + assertEquals("!", m.get("3")); + tx.rollback(); + + tx = ts.begin(); + m = tx.openMap("test"); + assertNull(m.get("1")); + assertNull(m.get("2")); + assertNull(m.get("3")); + + ts.close(); + s.close(); + } + + private void testCompareWithPostgreSQL() throws Exception { + ArrayList statements = New.arrayList(); + ArrayList transactions = New.arrayList(); + ArrayList> maps = New.arrayList(); + int connectionCount = 3, opCount = 1000, rowCount = 10; + try { + Class.forName("org.postgresql.Driver"); + for (int i = 0; i < connectionCount; i++) { + Connection conn = DriverManager.getConnection( + "jdbc:postgresql:test?loggerLevel=OFF", "sa", "sa"); + statements.add(conn.createStatement()); + } + } catch (Exception e) { + // database not installed 
- ok + return; + } + statements.get(0).execute( + "drop table if exists test cascade"); + statements.get(0).execute( + "create table test(id int primary key, name varchar(255))"); + + MVStore s = MVStore.open(null); + TransactionStore ts = new TransactionStore(s); + ts.init(); + for (int i = 0; i < connectionCount; i++) { + Statement stat = statements.get(i); + // 100 ms to avoid blocking (the test is single threaded) + stat.execute("set statement_timeout to 100"); + Connection c = stat.getConnection(); + c.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); + c.setAutoCommit(false); + Transaction transaction = ts.begin(); + transactions.add(transaction); + TransactionMap map; + map = transaction.openMap("test"); + maps.add(map); + } + StringBuilder buff = new StringBuilder(); + + Random r = new Random(1); + try { + for (int i = 0; i < opCount; i++) { + int connIndex = r.nextInt(connectionCount); + Statement stat = statements.get(connIndex); + Transaction transaction = transactions.get(connIndex); + TransactionMap map = maps.get(connIndex); + if (transaction == null) { + transaction = ts.begin(); + map = transaction.openMap("test"); + transactions.set(connIndex, transaction); + maps.set(connIndex, map); + + // read all data, to get a snapshot + ResultSet rs = stat.executeQuery( + "select * from test order by id"); + buff.append(i).append(": [" + connIndex + "]="); + int size = 0; + while (rs.next()) { + buff.append(' '); + int k = rs.getInt(1); + String v = rs.getString(2); + buff.append(k).append(':').append(v); + assertEquals(v, map.get(k)); + size++; + } + buff.append('\n'); + if (size != map.sizeAsLong()) { + assertEquals(size, map.sizeAsLong()); + } + } + int x = r.nextInt(rowCount); + int y = r.nextInt(rowCount); + buff.append(i).append(": [" + connIndex + "]: "); + ResultSet rs = null; + switch (r.nextInt(7)) { + case 0: + buff.append("commit"); + stat.getConnection().commit(); + transaction.commit(); + transactions.set(connIndex, null); + break; 
+ case 1: + buff.append("rollback"); + stat.getConnection().rollback(); + transaction.rollback(); + transactions.set(connIndex, null); + break; + case 2: + // insert or update + String old = map.get(x); + if (old == null) { + buff.append("insert " + x + "=" + y); + if (map.tryPut(x, "" + y)) { + stat.execute("insert into test values(" + x + ", '" + y + "')"); + } else { + buff.append(" -> row was locked"); + // the statement would time out in PostgreSQL + // TODO test sometimes if timeout occurs + } + } else { + buff.append("update " + x + "=" + y + " (old:" + old + ")"); + if (map.tryPut(x, "" + y)) { + int c = stat.executeUpdate("update test set name = '" + y + + "' where id = " + x); + assertEquals(1, c); + } else { + buff.append(" -> row was locked"); + // the statement would time out in PostgreSQL + // TODO test sometimes if timeout occurs + } + } + break; + case 3: + buff.append("delete " + x); + try { + int c = stat.executeUpdate("delete from test where id = " + x); + if (c == 1) { + map.remove(x); + } else { + assertNull(map.get(x)); + } + } catch (SQLException e) { + assertTrue(map.get(x) != null); + assertFalse(map.tryRemove(x)); + // PostgreSQL needs to rollback + buff.append(" -> rollback"); + stat.getConnection().rollback(); + transaction.rollback(); + transactions.set(connIndex, null); + } + break; + case 4: + case 5: + case 6: + rs = stat.executeQuery("select * from test where id = " + x); + String expected = rs.next() ? 
rs.getString(2) : null; + buff.append("select " + x + "=" + expected); + assertEquals("i:" + i, expected, map.get(x)); + break; + } + buff.append('\n'); + } + } catch (Exception e) { + e.printStackTrace(); + fail(buff.toString()); + } + for (Statement stat : statements) { + stat.getConnection().close(); + } + ts.close(); + s.close(); + } + + private void testConcurrentTransactionsReadCommitted() { + MVStore s = MVStore.open(null); + + TransactionStore ts = new TransactionStore(s); + ts.init(); + + Transaction tx1, tx2; + TransactionMap m1, m2; + + tx1 = ts.begin(); + m1 = tx1.openMap("test"); + m1.put("1", "Hi"); + m1.put("3", "."); + tx1.commit(); + + tx1 = ts.begin(); + m1 = tx1.openMap("test"); + m1.put("1", "Hello"); + m1.put("2", "World"); + m1.remove("3"); + tx1.commit(); + + // start new transaction to read old data + tx2 = ts.begin(); + m2 = tx2.openMap("test"); + + // start transaction tx1, update/delete/add + tx1 = ts.begin(); + m1 = tx1.openMap("test"); + m1.put("1", "Hallo"); + m1.remove("2"); + m1.put("3", "!"); + + assertEquals("Hello", m2.get("1")); + assertEquals("World", m2.get("2")); + assertNull(m2.get("3")); + + tx1.commit(); + + assertEquals("Hallo", m2.get("1")); + assertNull(m2.get("2")); + assertEquals("!", m2.get("3")); + + tx1 = ts.begin(); + m1 = tx1.openMap("test"); + m1.put("2", "World"); + + assertNull(m2.get("2")); + assertFalse(m2.tryRemove("2")); + assertFalse(m2.tryPut("2", "Welt")); + + tx2 = ts.begin(); + m2 = tx2.openMap("test"); + assertNull(m2.get("2")); + m1.remove("2"); + assertNull(m2.get("2")); + tx1.commit(); + + tx1 = ts.begin(); + m1 = tx1.openMap("test"); + assertNull(m1.get("2")); + m1.put("2", "World"); + m1.put("2", "Welt"); + tx1.rollback(); + + tx1 = ts.begin(); + m1 = tx1.openMap("test"); + assertNull(m1.get("2")); + + ts.close(); + s.close(); + } + + private void testSingleConnection() { + MVStore s = MVStore.open(null); + + TransactionStore ts = new TransactionStore(s); + ts.init(); + + Transaction tx; + 
TransactionMap m; + + // add, rollback + tx = ts.begin(); + m = tx.openMap("test"); + m.put("1", "Hello"); + assertEquals("Hello", m.get("1")); + m.put("2", "World"); + assertEquals("World", m.get("2")); + tx.rollback(); + tx = ts.begin(); + m = tx.openMap("test"); + assertNull(m.get("1")); + assertNull(m.get("2")); + + // add, commit + tx = ts.begin(); + m = tx.openMap("test"); + m.put("1", "Hello"); + m.put("2", "World"); + assertEquals("Hello", m.get("1")); + assertEquals("World", m.get("2")); + tx.commit(); + tx = ts.begin(); + m = tx.openMap("test"); + assertEquals("Hello", m.get("1")); + assertEquals("World", m.get("2")); + + // update+delete+insert, rollback + tx = ts.begin(); + m = tx.openMap("test"); + m.put("1", "Hallo"); + m.remove("2"); + m.put("3", "!"); + assertEquals("Hallo", m.get("1")); + assertNull(m.get("2")); + assertEquals("!", m.get("3")); + tx.rollback(); + tx = ts.begin(); + m = tx.openMap("test"); + assertEquals("Hello", m.get("1")); + assertEquals("World", m.get("2")); + assertNull(m.get("3")); + + // update+delete+insert, commit + tx = ts.begin(); + m = tx.openMap("test"); + m.put("1", "Hallo"); + m.remove("2"); + m.put("3", "!"); + assertEquals("Hallo", m.get("1")); + assertNull(m.get("2")); + assertEquals("!", m.get("3")); + tx.commit(); + tx = ts.begin(); + m = tx.openMap("test"); + assertEquals("Hallo", m.get("1")); + assertNull(m.get("2")); + assertEquals("!", m.get("3")); + + ts.close(); + s.close(); + } + + private static void testStoreMultiThreadedReads() throws Exception { + MVStore s = MVStore.open(null); + final TransactionStore ts = new TransactionStore(s); + + ts.init(); + Transaction t = ts.begin(); + TransactionMap mapA = t.openMap("a"); + mapA.put(1, 0); + t.commit(); + + Task task = new Task() { + @Override + public void call() throws Exception { + for (int i = 0; !stop; i++) { + Transaction tx = ts.begin(); + TransactionMap mapA = tx.openMap("a"); + while (!mapA.tryPut(1, i)) { + // repeat + } + tx.commit(); + + // map B 
transaction + // the other thread will get a map A uncommitted value, + // but by the time it tries to walk back to the committed + // value, the undoLog has changed + tx = ts.begin(); + TransactionMap mapB = tx.openMap("b"); + // put a new value to the map; this will cause a map B + // undoLog entry to be created with a null pre-image value + mapB.tryPut(i, -i); + // this is where the real race condition occurs: + // some other thread might get the B log entry + // for this transaction rather than the uncommitted A log + // entry it is expecting + tx.commit(); + } + } + }; + task.execute(); + try { + for (int i = 0; i < 10000; i++) { + Transaction tx = ts.begin(); + mapA = tx.openMap("a"); + if (mapA.get(1) == null) { + throw new AssertionError("key not found"); + } + tx.commit(); + } + } finally { + task.get(); + } + ts.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/store/package.html b/modules/h2/src/test/java/org/h2/test/store/package.html new file mode 100644 index 0000000000000..37b326d97243f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/store/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +This package contains tests for the map store. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/synth/BnfRandom.java b/modules/h2/src/test/java/org/h2/test/synth/BnfRandom.java new file mode 100644 index 0000000000000..7b26d7c310ca3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/BnfRandom.java @@ -0,0 +1,203 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.util.ArrayList; +import java.util.Random; +import org.h2.bnf.Bnf; +import org.h2.bnf.BnfVisitor; +import org.h2.bnf.Rule; +import org.h2.bnf.RuleFixed; +import org.h2.bnf.RuleHead; +import org.h2.util.New; + +/** + * A BNF visitor that generates a random SQL statement. + */ +public class BnfRandom implements BnfVisitor { + + private static final boolean SHOW_SYNTAX = false; + + private final Random random = new Random(); + private final ArrayList statements = New.arrayList(); + + private int level; + private String sql; + + public BnfRandom() throws Exception { + Bnf config = Bnf.getInstance(null); + config.addAlias("procedure", "@func@"); + config.linkStatements(); + + ArrayList all = config.getStatements(); + + // go backwards so we can append at the end + for (int i = all.size() - 1; i >= 0; i--) { + RuleHead r = all.get(i); + String topic = r.getTopic().toLowerCase(); + int weight = 0; + if (topic.equals("select")) { + weight = 10; + } else if (topic.equals("create table")) { + weight = 20; + } else if (topic.equals("insert")) { + weight = 5; + } else if (topic.startsWith("update")) { + weight = 3; + } else if (topic.startsWith("delete")) { + weight = 3; + } else if (topic.startsWith("drop")) { + weight = 2; + } + if (SHOW_SYNTAX) { + System.out.println(r.getTopic()); + } + for (int j = 0; j < weight; j++) { + statements.add(r); + } + } + } + + public String getRandomSQL() { + int sid = random.nextInt(statements.size()); + + RuleHead r = 
statements.get(sid); + level = 0; + r.getRule().accept(this); + sql = sql.trim(); + + if (sql.length() > 0) { + if (sql.indexOf("TRACE_LEVEL_") < 0 + && sql.indexOf("COLLATION") < 0 + && sql.indexOf("SCRIPT ") < 0 + && sql.indexOf("CSVWRITE") < 0 + && sql.indexOf("BACKUP") < 0 + && sql.indexOf("DB_CLOSE_DELAY") < 0) { + if (SHOW_SYNTAX) { + System.out.println(" " + sql); + } + return sql; + } + } + return null; + } + + @Override + public void visitRuleElement(boolean keyword, String name, Rule link) { + if (keyword) { + if (name.startsWith(";")) { + sql = ""; + } else { + sql = name.length() > 1 ? " " + name + " " : name; + } + } else if (link != null) { + level++; + link.accept(this); + level--; + } else { + throw new AssertionError(name); + } + } + + @Override + public void visitRuleFixed(int type) { + sql = getRandomFixed(type); + } + + private String getRandomFixed(int type) { + Random r = random; + switch (type) { + case RuleFixed.YMD: + return (1800 + r.nextInt(200)) + "-" + + (1 + r.nextInt(12)) + "-" + (1 + r.nextInt(31)); + case RuleFixed.HMS: + return (r.nextInt(24)) + "-" + (r.nextInt(60)) + "-" + (r.nextInt(60)); + case RuleFixed.NANOS: + return "" + (r.nextInt(100000) + r.nextInt(10000)); + case RuleFixed.ANY_UNTIL_EOL: + case RuleFixed.ANY_EXCEPT_SINGLE_QUOTE: + case RuleFixed.ANY_EXCEPT_DOUBLE_QUOTE: + case RuleFixed.ANY_WORD: + case RuleFixed.ANY_EXCEPT_2_DOLLAR: + case RuleFixed.ANY_UNTIL_END: { + StringBuilder buff = new StringBuilder(); + int len = r.nextBoolean() ? 
1 : r.nextInt(5); + for (int i = 0; i < len; i++) { + buff.append((char) ('A' + r.nextInt('C' - 'A'))); + } + return buff.toString(); + } + case RuleFixed.HEX_START: + return "0x"; + case RuleFixed.CONCAT: + return "||"; + case RuleFixed.AZ_UNDERSCORE: + return "" + (char) ('A' + r.nextInt('C' - 'A')); + case RuleFixed.AF: + return "" + (char) ('A' + r.nextInt('F' - 'A')); + case RuleFixed.DIGIT: + return "" + (char) ('0' + r.nextInt(10)); + case RuleFixed.OPEN_BRACKET: + return "["; + case RuleFixed.CLOSE_BRACKET: + return "]"; + default: + throw new AssertionError("type="+type); + } + } + + @Override + public void visitRuleList(boolean or, ArrayList list) { + if (or) { + if (level > 10) { + if (level > 1000) { + // better than stack overflow + throw new AssertionError(); + } + list.get(0).accept(this); + return; + } + int idx = random.nextInt(list.size()); + level++; + list.get(idx).accept(this); + level--; + return; + } + StringBuilder buff = new StringBuilder(); + level++; + for (Rule r : list) { + r.accept(this); + buff.append(sql); + } + level--; + sql = buff.toString(); + } + + @Override + public void visitRuleOptional(Rule rule) { + if (level > 10 ? random.nextInt(level) == 1 : random.nextInt(4) == 1) { + level++; + rule.accept(this); + level--; + return; + } + sql = ""; + } + + @Override + public void visitRuleRepeat(boolean comma, Rule rule) { + rule.accept(this); + } + + public void setSeed(int seed) { + random.setSeed(seed); + } + + public int getStatementCount() { + return statements.size(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/OutputCatcher.java b/modules/h2/src/test/java/org/h2/test/synth/OutputCatcher.java new file mode 100644 index 0000000000000..18973c76b9536 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/OutputCatcher.java @@ -0,0 +1,97 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.IOException; +import java.io.InputStream; +import java.util.LinkedList; +import java.util.concurrent.TimeUnit; +import org.h2.util.IOUtils; + +/** + * Catches the output of another process. + */ +public class OutputCatcher extends Thread { + private final InputStream in; + private final LinkedList list = new LinkedList<>(); + private boolean done; + + public OutputCatcher(InputStream in) { + this.in = in; + } + + /** + * Read a line from the output. + * + * @param wait the maximum number of milliseconds to wait + * @return the line + */ + public String readLine(long wait) { + long start = System.nanoTime(); + while (true) { + synchronized (list) { + if (list.size() > 0) { + return list.removeFirst(); + } + if (done) + return null; + try { + list.wait(30 * 1000); + } catch (InterruptedException e) { + // ignore + } + long time = System.nanoTime() - start; + if (time >= TimeUnit.MILLISECONDS.toNanos(wait)) { + return null; + } + } + } + } + + @Override + public void run() { + final StringBuilder buff = new StringBuilder(); + try { + while (true) { + try { + int x = in.read(); + if (x < 0) { + break; + } + if (x < ' ') { + if (buff.length() > 0) { + String s = buff.toString(); + buff.setLength(0); + synchronized (list) { + list.add(s); + list.notifyAll(); + } + } + } else { + buff.append((char) x); + } + } catch (IOException e) { + break; + } + } + } finally { + IOUtils.closeSilently(in); + + // just in case something goes wrong, make sure we store any partial output we got + if (buff.length() > 0) { + synchronized (list) { + list.add(buff.toString()); + list.notifyAll(); + } + } + + synchronized (list){ + done = true; + list.notifyAll(); + } + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestBtreeIndex.java b/modules/h2/src/test/java/org/h2/test/synth/TestBtreeIndex.java new file mode 100644 index 0000000000000..592aca415e250 --- /dev/null +++ 
b/modules/h2/src/test/java/org/h2/test/synth/TestBtreeIndex.java @@ -0,0 +1,200 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; +import org.h2.test.TestBase; +import org.h2.tools.DeleteDbFiles; + +/** + * A b-tree index test. + */ +public class TestBtreeIndex extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.big = true; + test.test(); + test.test(); + test.test(); + } + + @Override + public void test() throws SQLException { + Random random = new Random(); + for (int i = 0; i < getSize(1, 4); i++) { + testAddDelete(); + int seed = random.nextInt(); + testCase(seed); + } + } + + private void testAddDelete() throws SQLException { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + try { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID bigint primary key)"); + int count = 1000; + stat.execute( + "insert into test select x from system_range(1, " + + count + ")"); + if (!config.memory) { + conn.close(); + conn = getConnection(getTestName()); + stat = conn.createStatement(); + } + for (int i = 1; i < count; i++) { + ResultSet rs = stat.executeQuery("select * from test order by id"); + for (int j = i; rs.next(); j++) { + assertEquals(j, rs.getInt(1)); + } + stat.execute("delete from test where id =" + i); + } + stat.execute("drop all objects delete files"); + } finally { + conn.close(); + } + deleteDb(getTestName()); + } + + private void testCase(int seed) throws SQLException { + testOne(seed); + } + + private void testOne(int seed) 
throws SQLException { + org.h2.Driver.load(); + deleteDb(getTestName()); + printTime("testIndex " + seed); + Random random = new Random(seed); + int distinct, prefixLength; + if (random.nextBoolean()) { + distinct = random.nextInt(8000) + 1; + prefixLength = random.nextInt(8000) + 1; + } else if (random.nextBoolean()) { + distinct = random.nextInt(16000) + 1; + prefixLength = random.nextInt(100) + 1; + } else { + distinct = random.nextInt(10) + 1; + prefixLength = random.nextInt(10) + 1; + } + boolean delete = random.nextBoolean(); + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < prefixLength; j++) { + buff.append("x"); + if (buff.length() % 10 == 0) { + buff.append(buff.length()); + } + } + String prefix = buff.toString().substring(0, prefixLength); + DeleteDbFiles.execute(getBaseDir() + "/" + getTestName(), null, true); + try (Connection conn = getConnection(getTestName())) { + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE a(text VARCHAR PRIMARY KEY)"); + PreparedStatement prepInsert = conn.prepareStatement( + "INSERT INTO a VALUES(?)"); + PreparedStatement prepDelete = conn.prepareStatement( + "DELETE FROM a WHERE text=?"); + PreparedStatement prepDeleteAllButOne = conn.prepareStatement( + "DELETE FROM a WHERE text <> ?"); + int count = 0; + for (int i = 0; i < 1000; i++) { + int y = random.nextInt(distinct); + try { + prepInsert.setString(1, prefix + y); + prepInsert.executeUpdate(); + count++; + } catch (SQLException e) { + if (e.getSQLState().equals("23505")) { + // ignore + } else { + TestBase.logError("error", e); + break; + } + } + if (delete && random.nextInt(10) == 1) { + if (random.nextInt(4) == 1) { + try { + prepDeleteAllButOne.setString(1, prefix + y); + int deleted = prepDeleteAllButOne.executeUpdate(); + if (deleted < count - 1) { + printError(seed, "deleted:" + deleted + " i:" + i); + } + count -= deleted; + } catch (SQLException e) { + TestBase.logError("error", e); + break; + } + } else { + try { + 
prepDelete.setString(1, prefix + y); + int deleted = prepDelete.executeUpdate(); + if (deleted > 1) { + printError(seed, "deleted:" + deleted + " i:" + i); + } + count -= deleted; + } catch (SQLException e) { + TestBase.logError("error", e); + break; + } + } + } + } + int testCount; + testCount = 0; + ResultSet rs = stat.executeQuery( + "SELECT text FROM a ORDER BY text"); + ResultSet rs2 = conn.createStatement().executeQuery( + "SELECT text FROM a ORDER BY 'x' || text"); + +//System.out.println("-----------"); +//while(rs.next()) { +// System.out.println(rs.getString(1)); +//} +//System.out.println("-----------"); +//while(rs2.next()) { +// System.out.println(rs2.getString(1)); +//} +//if (true) throw new AssertionError("stop"); +// + testCount = 0; + while (rs.next() && rs2.next()) { + if (!rs.getString(1).equals(rs2.getString(1))) { + assertEquals("" + testCount, rs.getString(1), rs2.getString(1)); + } + testCount++; + } + assertFalse(rs.next()); + assertFalse(rs2.next()); + if (testCount != count) { + printError(seed, "count:" + count + " testCount:" + testCount); + } + rs = stat.executeQuery("SELECT text, count(*) FROM a " + + "GROUP BY text HAVING COUNT(*)>1"); + if (rs.next()) { + printError(seed, "testCount:" + testCount + " " + rs.getString(1)); + } + } + deleteDb(getTestName()); + } + + private void printError(int seed, String message) { + TestBase.logError("new TestBtreeIndex().init(test).testCase(" + + seed + "); // " + message, null); + fail(message); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestConcurrentUpdate.java b/modules/h2/src/test/java/org/h2/test/synth/TestConcurrentUpdate.java new file mode 100644 index 0000000000000..2f580f369b594 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestConcurrentUpdate.java @@ -0,0 +1,133 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * A concurrent test. + */ +public class TestConcurrentUpdate extends TestBase { + + private static final int THREADS = 3; + private static final int ROW_COUNT = 10; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase t = TestBase.createCaller().init(); + t.config.memory = true; + t.test(); + } + + @Override + public void test() throws Exception { + deleteDb("concurrent"); + final String url = getURL("concurrent;MULTI_THREADED=TRUE", true); + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + + Task[] tasks = new Task[THREADS]; + for (int i = 0; i < THREADS; i++) { + final int threadId = i; + Task t = new Task() { + @Override + public void call() throws Exception { + Random r = new Random(threadId); + Connection conn = getConnection(url); + PreparedStatement insert = conn.prepareStatement( + "insert into test values(?, ?)"); + PreparedStatement update = conn.prepareStatement( + "update test set name = ? 
where id = ?"); + PreparedStatement delete = conn.prepareStatement( + "delete from test where id = ?"); + PreparedStatement select = conn.prepareStatement( + "select * from test where id = ?"); + while (!stop) { + try { + int x = r.nextInt(ROW_COUNT); + String data = "x" + r.nextInt(ROW_COUNT); + switch (r.nextInt(4)) { + case 0: + insert.setInt(1, x); + insert.setString(2, data); + insert.execute(); + break; + case 1: + update.setString(1, data); + update.setInt(2, x); + update.execute(); + break; + case 2: + delete.setInt(1, x); + delete.execute(); + break; + case 3: + select.setInt(1, x); + ResultSet rs = select.executeQuery(); + while (rs.next()) { + rs.getString(2); + } + break; + } + } catch (SQLException e) { + handleException(e); + } + } + conn.close(); + } + + }; + tasks[i] = t; + t.execute(); + } + // test 2 seconds + for (int i = 0; i < 200; i++) { + Thread.sleep(10); + for (Task t : tasks) { + if (t.isFinished()) { + i = 1000; + break; + } + } + } + for (Task t : tasks) { + t.get(); + } + conn.close(); + } + + /** + * Handle or ignore the exception. + * + * @param e the exception + */ + void handleException(SQLException e) throws SQLException { + switch (e.getErrorCode()) { + case ErrorCode.CONCURRENT_UPDATE_1: + case ErrorCode.DUPLICATE_KEY_1: + case ErrorCode.ROW_NOT_FOUND_WHEN_DELETING_1: + case ErrorCode.LOCK_TIMEOUT_1: + break; + default: + throw e; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestCrashAPI.java b/modules/h2/src/test/java/org/h2/test/synth/TestCrashAPI.java new file mode 100644 index 0000000000000..4854ef8399d28 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestCrashAPI.java @@ -0,0 +1,535 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.File; +import java.io.PrintWriter; +import java.io.StringWriter; +import java.lang.reflect.Array; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.sql.BatchUpdateException; +import java.sql.Blob; +import java.sql.CallableStatement; +import java.sql.Clob; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ParameterMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.SQLFeatureNotSupportedException; +import java.sql.Savepoint; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Calendar; +import java.util.Comparator; +import java.util.HashMap; +import java.util.Map; +import org.h2.api.ErrorCode; +import org.h2.jdbc.JdbcConnection; +import org.h2.store.FileLister; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestAll; +import org.h2.test.TestBase; +import org.h2.test.scripts.TestScript; +import org.h2.test.synth.sql.RandomGen; +import org.h2.tools.Backup; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Restore; +import org.h2.util.DateTimeUtils; +import org.h2.util.MathUtils; +import org.h2.util.New; + +/** + * A test that calls random methods with random parameters from JDBC objects. + * This is sometimes called 'Fuzz Testing'. 
+ */ +public class TestCrashAPI extends TestBase implements Runnable { + + private static final boolean RECOVER_ALL = false; + + private static final Class<?>[] INTERFACES = { Connection.class, + PreparedStatement.class, Statement.class, ResultSet.class, + ResultSetMetaData.class, Savepoint.class, ParameterMetaData.class, + Clob.class, Blob.class, Array.class, CallableStatement.class }; + + private static final String DIR = "synth"; + + private final ArrayList<Object> objects = New.arrayList(); + private final HashMap<Class<?>, ArrayList<Method>> classMethods = + new HashMap<>(); + private RandomGen random = new RandomGen(); + private final ArrayList<String> statements = New.arrayList(); + private int openCount; + private long callCount; + private volatile long maxWait = 60; + private volatile boolean stopped; + private volatile boolean running; + private Thread mainThread; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + System.setProperty("h2.delayWrongPasswordMin", "0"); + System.setProperty("h2.delayWrongPasswordMax", "0"); + TestBase.createCaller().init().test(); + } + + @Override + @SuppressWarnings("deprecation") + public void run() { + while (--maxWait > 0) { + try { + Thread.sleep(1000); + } catch (InterruptedException e) { + maxWait++; + // ignore + } + if (maxWait > 0 && maxWait <= 10) { + println("stopping..."); + stopped = true; + } + } + if (maxWait == 0 && running) { + objects.clear(); + if (running) { + println("stopping (force)..."); + for (StackTraceElement e : mainThread.getStackTrace()) { + System.out.println(e.toString()); + } + mainThread.stop(new SQLException("stop")); + } + } + } + + private static void recoverAll() { + org.h2.Driver.load(); + File[] files = new File("target/temp/backup").listFiles(); + Arrays.sort(files, new Comparator<File>() { + @Override + public int compare(File o1, File o2) { + return o1.getName().compareTo(o2.getName()); + } + }); + for (File f : files) { + if
(!f.getName().startsWith("db-")) { + continue; + } + DeleteDbFiles.execute("target/data", null, true); + try { + Restore.execute(f.getAbsolutePath(), "target/data", null); + } catch (Exception e) { + System.out.println(f.getName() + " restore error " + e); + // ignore + } + ArrayList<String> dbFiles = FileLister.getDatabaseFiles("target/data", null, false); + for (String name: dbFiles) { + if (!name.endsWith(".h2.db")) { + continue; + } + name = name.substring(0, name.length() - 6); + try { + DriverManager.getConnection("jdbc:h2:target/data/" + name, "sa", "").close(); + System.out.println(f.getName() + " OK"); + } catch (SQLException e) { + System.out.println(f.getName() + " " + e); + } + } + } + } + + @Override + public void test() throws Exception { + if (RECOVER_ALL) { + recoverAll(); + return; + } + if (config.mvcc || config.networked) { + return; + } + int len = getSize(2, 6); + Thread t = new Thread(this); + try { + mainThread = Thread.currentThread(); + t.start(); + running = true; + for (int i = 0; i < len && !stopped; i++) { + int seed = MathUtils.randomInt(Integer.MAX_VALUE); + testCase(seed); + deleteDb(); + } + } finally { + running = false; + deleteDb(); + maxWait = -1; + t.join(); + } + } + + private void deleteDb() { + try { + deleteDb(getBaseDir() + "/" + DIR, null); + } catch (Exception e) { + // ignore + } + } + + private Connection getConnection(int seed, boolean delete) throws SQLException { + openCount++; + if (delete) { + deleteDb(); + } + // can not use FILE_LOCK=NO, otherwise something could be written into + // the database in the finalize method + + String add = ";MAX_QUERY_TIMEOUT=10000"; + +// int testing; +// if(openCount >= 32) { +// int test; +// Runtime.getRuntime().halt(0); +// System.exit(1); +// } + // System.out.println("now open " + openCount); + // add += ";TRACE_LEVEL_FILE=3"; + // config.logMode = 2; + // } + + String dbName = "crashApi" + seed; + String url = getURL(DIR + "/" + dbName, true) + add; + +// int test; +// url +=
";DB_CLOSE_ON_EXIT=FALSE"; +// int test; +// url += ";TRACE_LEVEL_FILE=3"; + + Connection conn = null; + String fileName = "target/temp/backup/db-" + uniqueId++ + ".zip"; + Backup.execute(fileName, getBaseDir() + "/" + DIR, dbName, true); + // close databases earlier + System.gc(); + try { + conn = DriverManager.getConnection(url, "sa", getPassword("")); + // delete the backup if opening was successful + FileUtils.delete(fileName); + } catch (SQLException e) { + if (e.getErrorCode() == ErrorCode.WRONG_USER_OR_PASSWORD) { + // delete if the password changed + FileUtils.delete(fileName); + } + throw e; + } + int len = random.getInt(50); + int first = random.getInt(statements.size() - len); + int end = first + len; + Statement stat = conn.createStatement(); + stat.execute("SET LOCK_TIMEOUT 10"); + stat.execute("SET WRITE_DELAY 0"); + if (random.nextBoolean()) { + if (random.nextBoolean()) { + double g = random.nextGaussian(); + int size = (int) Math.abs(10000 * g * g); + stat.execute("SET CACHE_SIZE " + size); + } else { + stat.execute("SET CACHE_SIZE 0"); + } + } + stat.execute("SCRIPT NOPASSWORDS NOSETTINGS"); + for (int i = first; i < end && i < statements.size() && !stopped; i++) { + try { + stat.execute("SELECT * FROM TEST WHERE ID=1"); + } catch (Throwable t) { + printIfBad(seed, -i, -1, t); + } + try { + stat.execute("SELECT * FROM TEST WHERE ID=1 OR ID=1"); + } catch (Throwable t) { + printIfBad(seed, -i, -1, t); + } + + String sql = statements.get(i); + try { +// if(openCount == 32) { +// int test; +// System.out.println("stop!"); +// } + stat.execute(sql); + } catch (Throwable t) { + printIfBad(seed, -i, -1, t); + } + } + if (random.nextBoolean()) { + try { + conn.commit(); + } catch (Throwable t) { + printIfBad(seed, 0, -1, t); + } + } + return conn; + } + + private void testCase(int seed) throws SQLException { + printTime("seed: " + seed); + callCount = 0; + openCount = 0; + random = new RandomGen(); + random.setSeed(seed); + Connection c1 = 
getConnection(seed, true); + Connection conn = null; + for (int i = 0; i < 2000 && !stopped; i++) { + // if(i % 10 == 0) { + // for(int j=0; j in = getJdbcInterface(o); + ArrayList<Method> methods = classMethods.get(in); + Method m = methods.get(random.getInt(methods.size())); + Object o2 = callRandom(seed, i, objectId, o, m); + if (o2 != null) { + objects.add(o2); + } + } + try { + if (conn != null) { + conn.close(); + } + c1.close(); + } catch (Throwable t) { + printIfBad(seed, -101010, -1, t); + try { + deleteDb(); + } catch (Throwable t2) { + printIfBad(seed, -101010, -1, t2); + } + } + objects.clear(); + } + + private void printError(int seed, int id, Throwable t) { + StringWriter writer = new StringWriter(); + t.printStackTrace(new PrintWriter(writer)); + String s = writer.toString(); + TestBase.logError("new TestCrashAPI().init(test).testCase(" + + seed + "); // Bug " + s.hashCode() + " id=" + id + + " callCount=" + callCount + " openCount=" + openCount + + " " + t.getMessage(), t); + throw new RuntimeException(t); + } + + private Object callRandom(int seed, int id, int objectId, Object o, Method m) { + Class<?>[] paramClasses = m.getParameterTypes(); + Object[] params = new Object[paramClasses.length]; + for (int i = 0; i < params.length; i++) { + params[i] = getRandomParam(paramClasses[i]); + } + Object result = null; + try { + callCount++; + result = m.invoke(o, params); + } catch (IllegalArgumentException e) { + TestBase.logError("error", e); + } catch (IllegalAccessException e) { + TestBase.logError("error", e); + } catch (InvocationTargetException e) { + Throwable t = e.getTargetException(); + printIfBad(seed, id, objectId, t); + } + if (result == null) { + return null; + } + Class<?> in = getJdbcInterface(result); + if (in == null) { + return null; + } + return result; + } + + private void printIfBad(int seed, int id, int objectId, Throwable t) { + if (t instanceof BatchUpdateException) { + // do nothing + } else if
(t.getClass().getName().contains("SQLClientInfoException")) { + // do nothing + } else if (t instanceof UnsupportedOperationException) { + // do nothing - new Java8/9 stuff + } else if (t instanceof SQLFeatureNotSupportedException) { + // do nothing + } else if (t instanceof SQLException) { + SQLException s = (SQLException) t; + int errorCode = s.getErrorCode(); + if (errorCode == 0) { + printError(seed, id, s); + } else if (errorCode == ErrorCode.OBJECT_CLOSED) { + if (objectId >= 0 && objects.size() > 0) { + // TODO at least call a few more times after close - maybe + // there is still an error + objects.remove(objectId); + } + } else if (errorCode == ErrorCode.GENERAL_ERROR_1) { + // General error [HY000] + printError(seed, id, s); + } + } else { + printError(seed, id, t); + } + } + + private Object getRandomParam(Class<?> type) { + if (type == int.class) { + return random.getRandomInt(); + } else if (type == byte.class) { + return (byte) random.getRandomInt(); + } else if (type == short.class) { + return (short) random.getRandomInt(); + } else if (type == long.class) { + return random.getRandomLong(); + } else if (type == float.class) { + return (float) random.getRandomDouble(); + } else if (type == boolean.class) { + return random.nextBoolean(); + } else if (type == double.class) { + return random.getRandomDouble(); + } else if (type == String.class) { + if (random.getInt(10) == 0) { + return null; + } + int randomId = random.getInt(statements.size()); + String sql = statements.get(randomId); + if (random.getInt(10) == 0) { + sql = random.modify(sql); + } + return sql; + } else if (type == int[].class) { + // TODO test with 'shared' arrays (make sure database creates a + // copy) + return random.getIntArray(); + } else if (type == java.io.Reader.class) { + return null; + } else if (type == java.sql.Array.class) { + return null; + } else if (type == byte[].class) { + // TODO test with 'shared' arrays (make sure database creates a + // copy) + return
random.getByteArray(); + } else if (type == Map.class) { + return null; + } else if (type == Object.class) { + return null; + } else if (type == java.sql.Date.class) { + return random.randomDate(); + } else if (type == java.sql.Time.class) { + return random.randomTime(); + } else if (type == java.sql.Timestamp.class) { + return random.randomTimestamp(); + } else if (type == java.io.InputStream.class) { + return null; + } else if (type == String[].class) { + return null; + } else if (type == java.sql.Clob.class) { + return null; + } else if (type == java.sql.Blob.class) { + return null; + } else if (type == Savepoint.class) { + // TODO should use generated savepoints + return null; + } else if (type == Calendar.class) { + return DateTimeUtils.createGregorianCalendar(); + } else if (type == java.net.URL.class) { + return null; + } else if (type == java.math.BigDecimal.class) { + return new java.math.BigDecimal("" + random.getRandomDouble()); + } else if (type == java.sql.Ref.class) { + return null; + } + return null; + } + + private Class<?> getJdbcInterface(Object o) { + for (Class<?> in : o.getClass().getInterfaces()) { + if (classMethods.get(in) != null) { + return in; + } + } + return null; + } + + private void initMethods() { + for (Class<?> inter : INTERFACES) { + classMethods.put(inter, new ArrayList<Method>()); + } + for (Class<?> inter : INTERFACES) { + ArrayList<Method> list = classMethods.get(inter); + for (Method m : inter.getMethods()) { + list.add(m); + } + } + } + + @Override + public TestBase init(TestAll conf) throws Exception { + super.init(conf); + if (config.mvcc || config.networked) { + return this; + } + startServerIfRequired(); + TestScript script = new TestScript(); + ArrayList<String> add = script.getAllStatements(config); + initMethods(); + org.h2.Driver.load(); + statements.addAll(add); + return this; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestDiskFull.java b/modules/h2/src/test/java/org/h2/test/synth/TestDiskFull.java new file mode 100644 index
0000000000000..fe36cf8451b5a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestDiskFull.java @@ -0,0 +1,138 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.ErrorCode; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.FilePathUnstable; + +/** + * Test simulated disk full problems. + */ +public class TestDiskFull extends TestBase { + + private FilePathUnstable fs; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + fs = FilePathUnstable.register(); + if (config.mvStore) { + fs.setPartialWrites(true); + } else { + fs.setPartialWrites(false); + } + try { + test(Integer.MAX_VALUE); + int max = Integer.MAX_VALUE - fs.getDiskFullCount() + 10; + for (int i = 0; i < max; i++) { + test(i); + } + } finally { + fs.setPartialWrites(false); + } + } + + private boolean test(int x) throws SQLException { + deleteDb("memFS:", null); + fs.setDiskFullCount(x, 0); + String url = "jdbc:h2:unstable:memFS:diskFull" + x + + ";FILE_LOCK=NO;TRACE_LEVEL_FILE=0;WRITE_DELAY=10;" + + "LOCK_TIMEOUT=100;CACHE_SIZE=4096;MAX_COMPACT_TIME=10"; + url = getURL(url, true); + Connection conn = null; + Statement stat = null; + boolean opened = false; + try { + conn = DriverManager.getConnection(url); + stat = conn.createStatement(); + opened = true; + for (int j = 0; j < 5; j++) { + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("create index idx_name on test(name)"); + stat.execute("insert into 
test values(2, 'World')"); + stat.execute("update test set name='Hallo' where id=1"); + stat.execute("delete from test where id=2"); + stat.execute("checkpoint"); + stat.execute("insert into test values(3, space(10000))"); + stat.execute("update test set name='Hallo' where id=3"); + stat.execute("drop table test"); + } + conn.close(); + conn = null; + return fs.getDiskFullCount() > 0; + } catch (SQLException e) { + if (e.getErrorCode() == ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1) { + throw e; + } + if (stat != null) { + try { + fs.setDiskFullCount(0, 0); + stat.execute("create table if not exists test" + + "(id int primary key, name varchar)"); + stat.execute("insert into test values(4, space(10000))"); + stat.execute("update test set name='Hallo' where id=3"); + conn.close(); + } catch (SQLException e2) { + if (e2.getErrorCode() != ErrorCode.IO_EXCEPTION_1 + && e2.getErrorCode() != ErrorCode.IO_EXCEPTION_2 + && e2.getErrorCode() != ErrorCode.DATABASE_IS_CLOSED + && e2.getErrorCode() != ErrorCode.OBJECT_CLOSED) { + throw e2; + } + } + } + } finally { + if (conn != null) { + try { + if (stat != null) { + stat.execute("shutdown immediately"); + } + } catch (Exception e2) { + // ignore + } + try { + conn.close(); + } catch (Exception e2) { + // ignore + } + } + } + fs.setDiskFullCount(0, 0); + try { + conn = null; + conn = DriverManager.getConnection(url); + } catch (SQLException e) { + if (!opened) { + return false; + } + throw e; + } + stat = conn.createStatement(); + stat.execute("script to 'memFS:test.sql'"); + conn.close(); + + deleteDb("memFS:", null); + FileUtils.delete("memFS:test.sql"); + + return false; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestFuzzOptimizations.java b/modules/h2/src/test/java/org/h2/test/synth/TestFuzzOptimizations.java new file mode 100644 index 0000000000000..f7d1b0f33ac16 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestFuzzOptimizations.java @@ -0,0 +1,274 @@ +/* + * Copyright 2004-2018 H2 
Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import java.util.Random; + +import org.h2.test.TestBase; +import org.h2.test.db.Db; +import org.h2.test.db.Db.Prepared; +import org.h2.util.New; + +/** + * This test executes random SQL statements to test if optimizations are working + * correctly. + */ +public class TestFuzzOptimizations extends TestBase { + + private Connection conn; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb(getTestName()); + conn = getConnection(getTestName()); + if (!config.diskResult) { + testIn(); + } + testGroupSorted(); + testInSelect(); + conn.close(); + deleteDb(getTestName()); + } + + /* + drop table test0; + drop table test1; + create table test0(a int, b int, c int); + create index idx_1 on test0(a); + create index idx_2 on test0(b, a); + create table test1(a int, b int, c int); + insert into test0 select x / 100, + mod(x / 10, 10), mod(x, 10) + from system_range(0, 999); + update test0 set a = null where a = 9; + update test0 set b = null where b = 9; + update test0 set c = null where c = 9; + insert into test1 select * from test0; + + select * from test0 where + b in(null, 0) and a in(2, null, null) + order by 1, 2, 3; + + select * from test1 where + b in(null, 0) and a in(2, null, null) + order by 1, 2, 3; + */ + private void testIn() throws SQLException { + Db db = new Db(conn); + db.execute("create table test0(a int, b int, c int)"); + db.execute("create index idx_1 on test0(a)"); + db.execute("create 
index idx_2 on test0(b, a)"); + db.execute("create table test1(a int, b int, c int)"); + db.execute("insert into test0 select x / 100, " + + "mod(x / 10, 10), mod(x, 10) from system_range(0, 999)"); + db.execute("update test0 set a = null where a = 9"); + db.execute("update test0 set b = null where b = 9"); + db.execute("update test0 set c = null where c = 9"); + db.execute("insert into test1 select * from test0"); + + // this failed at some point + Prepared p = db.prepare("select * from test0 where b in(" + + "select a from test1 where a a) " + + "and c in(0, 10) and c in(10, 0, 0) order by 1, 2, 3"); + p.set(1); + p.execute(); + + Random seedGenerator = new Random(); + String[] columns = new String[] { "a", "b", "c" }; + String[] values = new String[] { null, "0", "0", "1", "2", "10", "a", "?" }; + String[] compares = new String[] { "in(", "not in(", "=", "=", ">", + "<", ">=", "<=", "<>", "in(select", "not in(select" }; + int size = getSize(100, 1000); + for (int i = 0; i < size; i++) { + long seed = seedGenerator.nextLong(); + println("seed: " + seed); + Random random = new Random(seed); + ArrayList<String> params = New.arrayList(); + String condition = getRandomCondition(random, params, columns, + compares, values); + String message = "seed: " + seed + " " + condition; + executeAndCompare(condition, params, message); + if (params.size() > 0) { + for (int j = 0; j < params.size(); j++) { + String value = values[random.nextInt(values.length - 2)]; + params.set(j, value); + } + executeAndCompare(condition, params, message); + } + } + executeAndCompare("a >=0 and b in(?, 2) and a in(1, ?, null)", Arrays.asList("10", "2"), + "seed=-6191135606105920350L"); + db.execute("drop table test0, test1"); + } + + private void executeAndCompare(String condition, List<String> params, String message) throws SQLException { + PreparedStatement prep0 = conn.prepareStatement( + "select * from test0 where " + condition + + " order by 1, 2, 3"); + PreparedStatement prep1 = conn.prepareStatement( +
"select * from test1 where " + condition + + " order by 1, 2, 3"); + for (int j = 0; j < params.size(); j++) { + prep0.setString(j + 1, params.get(j)); + prep1.setString(j + 1, params.get(j)); + } + ResultSet rs0 = prep0.executeQuery(); + ResultSet rs1 = prep1.executeQuery(); + assertEquals(message, rs0, rs1); + } + + private String getRandomCondition(Random random, ArrayList<String> params, + String[] columns, String[] compares, String[] values) { + int comp = 1 + random.nextInt(4); + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < comp; j++) { + if (j > 0) { + buff.append(random.nextBoolean() ? " and " : " or "); + } + String column = columns[random.nextInt(columns.length)]; + String compare = compares[random.nextInt(compares.length)]; + buff.append(column).append(' ').append(compare); + if (compare.endsWith("in(")) { + int len = 1 + random.nextInt(3); + for (int k = 0; k < len; k++) { + if (k > 0) { + buff.append(", "); + } + String value = values[random.nextInt(values.length)]; + buff.append(value); + if ("?".equals(value)) { + value = values[random.nextInt(values.length - 2)]; + params.add(value); + } + } + buff.append(")"); + } else if (compare.endsWith("(select")) { + String col = columns[random.nextInt(columns.length)]; + buff.append(" ").append(col).append(" from test1 where "); + String condition = getRandomCondition( + random, params, columns, compares, values); + buff.append(condition); + buff.append(")"); + } else { + String value = values[random.nextInt(values.length)]; + buff.append(value); + if ("?".equals(value)) { + value = values[random.nextInt(values.length - 2)]; + params.add(value); + } + } + } + return buff.toString(); + } + + private void testInSelect() { + Db db = new Db(conn); + db.execute("CREATE TABLE TEST(A INT, B INT)"); + db.execute("CREATE INDEX IDX ON TEST(A)"); + db.execute("INSERT INTO TEST SELECT X/4, MOD(X, 4) " + + "FROM SYSTEM_RANGE(1, 16)"); + db.execute("UPDATE TEST SET A = NULL WHERE A = 0"); + db.execute("UPDATE TEST
SET B = NULL WHERE B = 0"); + Random random = new Random(); + long seed = random.nextLong(); + println("seed: " + seed); + for (int i = 0; i < 100; i++) { + String column = random.nextBoolean() ? "A" : "B"; + String value = new String[] { "NULL", "0", "A", "B" }[random.nextInt(4)]; + String compare = random.nextBoolean() ? "A" : "B"; + int x = random.nextInt(3); + String sql1 = "SELECT * FROM TEST T WHERE " + column + "+0 " + + "IN(SELECT " + value + + " FROM TEST I WHERE I." + compare + "=?) ORDER BY 1, 2"; + String sql2 = "SELECT * FROM TEST T WHERE " + column + " " + + "IN(SELECT " + value + + " FROM TEST I WHERE I." + compare + "=?) ORDER BY 1, 2"; + List<Map<String, Object>> a = db.prepare(sql1).set(x).query(); + List<Map<String, Object>> b = db.prepare(sql2).set(x).query(); + assertTrue("seed: " + seed + " sql: " + sql1 + + " a: " + a + " b: " + b, a.equals(b)); + } + db.execute("DROP TABLE TEST"); + } + + private void testGroupSorted() { + Db db = new Db(conn); + db.execute("CREATE TABLE TEST(A INT, B INT, C INT)"); + Random random = new Random(); + long seed = random.nextLong(); + println("seed: " + seed); + for (int i = 0; i < 100; i++) { + Prepared p = db.prepare("INSERT INTO TEST VALUES(?, ?, ?)"); + p.set(new String[] { null, "0", "1", "2" }[random.nextInt(4)]); + p.set(new String[] { null, "0", "1", "2" }[random.nextInt(4)]); + p.set(new String[] { null, "0", "1", "2" }[random.nextInt(4)]); + p.execute(); + } + int len = getSize(1000, 3000); + for (int i = 0; i < len / 10; i++) { + db.execute("CREATE TABLE TEST_INDEXED AS SELECT * FROM TEST"); + int jLen = 1 + random.nextInt(2); + for (int j = 0; j < jLen; j++) { + String x = "CREATE INDEX IDX" + j + " ON TEST_INDEXED("; + int kLen = 1 + random.nextInt(2); + for (int k = 0; k < kLen; k++) { + if (k > 0) { + x += ","; + } + x += new String[] { "A", "B", "C" }[random.nextInt(3)]; + } + db.execute(x + ")"); + } + for (int j = 0; j < 10; j++) { + String x = "SELECT "; + for (int k = 0; k < 3; k++) { + if (k > 0) { + x += ","; + } + x += new
String[] { "SUM(A)", "MAX(B)", "AVG(C)", + "COUNT(B)" }[random.nextInt(4)]; + x += " S" + k; + } + x += " FROM "; + String group = " GROUP BY "; + int kLen = 1 + random.nextInt(2); + for (int k = 0; k < kLen; k++) { + if (k > 0) { + group += ","; + } + group += new String[] { "A", "B", "C" }[random.nextInt(3)]; + } + group += " ORDER BY 1, 2, 3"; + List<Map<String, Object>> a = db.query(x + "TEST" + group); + List<Map<String, Object>> b = db.query(x + "TEST_INDEXED" + group); + assertEquals(a.toString(), b.toString()); + assertTrue(a.equals(b)); + } + db.execute("DROP TABLE TEST_INDEXED"); + } + db.execute("DROP TABLE TEST"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestHalt.java b/modules/h2/src/test/java/org/h2/test/synth/TestHalt.java new file mode 100644 index 0000000000000..0c4772e9b56da --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestHalt.java @@ -0,0 +1,354 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.File; +import java.io.FileWriter; +import java.io.IOException; +import java.io.InputStream; +import java.io.PrintWriter; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.text.SimpleDateFormat; +import java.util.Date; +import java.util.Random; +import org.h2.test.TestAll; +import org.h2.test.TestBase; +import org.h2.test.utils.SelfDestructor; +import org.h2.tools.Backup; +import org.h2.tools.DeleteDbFiles; +import org.h2.util.StringUtils; + +/** + * Tests database recovery by destroying a process that writes to the database. + */ +public abstract class TestHalt extends TestBase { + + /** + * This bit flag means insert operations should be performed. + */ + protected static final int OP_INSERT = 1; + + /** + * This bit flag means delete operations should be performed.
+ */ + protected static final int OP_DELETE = 2; + + /** + * This bit flag means update operations should be performed. + */ + protected static final int OP_UPDATE = 4; + + /** + * This bit flag means select operations should be performed. + */ + protected static final int OP_SELECT = 8; + + /** + * This bit flag means operations should be written to the transaction log + * immediately. + */ + protected static final int FLAG_NO_DELAY = 1; + + /** + * This bit flag means the test should use LOB values. + */ + protected static final int FLAG_LOBS = 2; + + private static final String DATABASE_NAME = "halt"; + private static final String TRACE_FILE_NAME = "haltTrace.trace.db"; + + /** + * The current operations bit mask. + */ + protected int operations; + + /** + * The current flags bit mask. + */ + protected int flags; + + /** + * The current test value, for example the number of rows. + */ + protected int value; + + /** + * The database connection. + */ + protected Connection conn; + + /** + * The pseudo random number generator used for this test. + */ + protected Random random = new Random(); + + private final SimpleDateFormat dateFormat = + new SimpleDateFormat("MM-dd HH:mm:ss "); + private int errorId; + private int sequenceId; + + /** + * Initialize the test. + */ + abstract void controllerInit() throws SQLException; + + /** + * Check if the database is consistent after a simulated database crash. + */ + abstract void controllerCheckAfterCrash() throws SQLException; + + /** + * Wait for some time after the application has been started. + */ + abstract void controllerWaitAfterAppStart() throws Exception; + + /** + * Start the application. + */ + abstract void processAppStart() throws SQLException; + + /** + * Run the application. 
+ */ + abstract void processAppRun() throws SQLException; + + @Override + public void test() { + for (int i = 0;; i++) { + operations = OP_INSERT | i; + flags = i >> 4; + // flags |= FLAG_NO_DELAY; // | FLAG_LOBS; + try { + controllerTest(); + } catch (Throwable t) { + System.out.println("Error: " + t); + t.printStackTrace(); + } + } + } + + Connection getConnection() throws SQLException { + org.h2.Driver.load(); + String url = "jdbc:h2:" + getBaseDir() + "/halt"; + // String url = "jdbc:h2:" + baseDir + "/halt;TRACE_LEVEL_FILE=3"; + return DriverManager.getConnection(url, "sa", "sa"); + } + + /** + * The second process starts the application and executes random operations. + */ + void processRunRandom() throws SQLException { + connect(); + try { + traceOperation("connected, operations:" + + operations + " flags:" + flags + " value:" + value); + processAppStart(); + System.out.println("READY"); + System.out.println("READY"); + System.out.println("READY"); + processAppRun(); + traceOperation("done"); + } catch (Exception e) { + traceOperation("run", e); + } + disconnect(); + } + + private void connect() throws SQLException { + try { + traceOperation("connecting"); + conn = getConnection(); + } catch (SQLException e) { + traceOperation("connect", e); + e.printStackTrace(); + throw e; + } + } + + /** + * Print a trace message to the trace file. + * + * @param s the message + */ + protected void traceOperation(String s) { + traceOperation(s, null); + } + + /** + * Print a trace message to the trace file. 
+ * + * @param s the message + * @param e the exception or null + */ + protected void traceOperation(String s, Exception e) { + File f = new File(getBaseDir() + "/" + TRACE_FILE_NAME); + f.getParentFile().mkdirs(); + try (FileWriter writer = new FileWriter(f, true)) { + PrintWriter w = new PrintWriter(writer); + s = dateFormat.format(new Date()) + ": " + s; + w.println(s); + if (e != null) { + e.printStackTrace(w); + } + } catch (IOException e2) { + e2.printStackTrace(); + } + } + + /** + * Run one test. The controller starts the process, waits, kills the + * process, and checks if everything is ok. + */ + void controllerTest() throws Exception { + traceOperation("delete database -----------------------------"); + DeleteDbFiles.execute(getBaseDir(), DATABASE_NAME, true); + new File(getBaseDir() + "/" + TRACE_FILE_NAME).delete(); + + connect(); + controllerInit(); + disconnect(); + for (int i = 0; i < 10; i++) { + traceOperation("backing up " + sequenceId); + Backup.execute(getBaseDir() + "/haltSeq" + + sequenceId + ".zip", getBaseDir(), null, true); + sequenceId++; + // int operations = OP_INSERT; + // OP_DELETE = 1, OP_UPDATE = 2, OP_SELECT = 4; + // int flags = FLAG_NODELAY; + // FLAG_NO_DELAY = 1, FLAG_AUTO_COMMIT = 2, FLAG_SMALL_CACHE = 4; + int testValue = random.nextInt(1000); + // for Derby and HSQLDB + // String classPath = "-cp + // .;D:/data/java/hsqldb.jar;D:/data/java/derby.jar"; + String selfDestruct = SelfDestructor.getPropertyString(60); + String[] procDef = { "java", selfDestruct, + "-cp", getClassPath(), + getClass().getName(), "" + operations, "" + flags, "" + testValue}; + traceOperation("start: " + StringUtils.arrayCombine(procDef, ' ')); + Process p = Runtime.getRuntime().exec(procDef); + InputStream in = p.getInputStream(); + OutputCatcher catcher = new OutputCatcher(in); + catcher.start(); + String s = catcher.readLine(5 * 60 * 1000); + if (s == null) { + throw new IOException( + "No reply from process, command: " + + 
StringUtils.arrayCombine(procDef, ' ')); + } else if (s.startsWith("READY")) { + traceOperation("got reply: " + s); + } + controllerWaitAfterAppStart(); + p.destroy(); + p.waitFor(); + try { + traceOperation("backing up " + sequenceId); + Backup.execute(getBaseDir() + "/haltSeq" + + sequenceId + ".zip", getBaseDir(), null, true); + // new File(BASE_DIR + "/haltSeq" + (sequenceId-20) + + // ".zip").delete(); + connect(); + controllerCheckAfterCrash(); + } catch (Exception e) { + File zip = new File(getBaseDir() + "/haltSeq" + + sequenceId + ".zip"); + File zipId = new File(getBaseDir() + "/haltSeq" + + sequenceId + "-" + errorId + ".zip"); + zip.renameTo(zipId); + printTime("ERROR: " + sequenceId + " " + errorId + " " + e.toString()); + e.printStackTrace(); + errorId++; + } finally { + sequenceId++; + disconnect(); + } + } + } + + /** + * Close the database connection normally. + */ + protected void disconnect() { + try { + traceOperation("disconnect"); + conn.close(); + } catch (Exception e) { + traceOperation("disconnect", e); + } + } + +// public Connection getConnectionHSQLDB() throws SQLException { +// File lock = new File("test.lck"); +// while (lock.exists()) { +// lock.delete(); +// System.gc(); +// } +// Class.forName("org.hsqldb.jdbcDriver"); +// return DriverManager.getConnection("jdbc:hsqldb:test", "sa", ""); +// } + +// public Connection getConnectionDerby() throws SQLException { +// File lock = new File("test3/db.lck"); +// while (lock.exists()) { +// lock.delete(); +// System.gc(); +// } +// Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance(); +// try { +// return DriverManager.getConnection( +// "jdbc:derby:test3;create=true", "sa", "sa"); +// } catch (SQLException e) { +// Exception e2 = e; +// do { +// e.printStackTrace(); +// e = e.getNextException(); +// } while (e != null); +// throw e2; +// } +// } + +// void disconnectHSQLDB() { +// try { +// conn.createStatement().execute("SHUTDOWN"); +// } catch (Exception e) { +// // ignore 
+// } +// // super.disconnect(); +// } + +// void disconnectDerby() { +// // super.disconnect(); +// try { +// Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); +// DriverManager.getConnection( +// "jdbc:derby:;shutdown=true", "sa", "sa"); +// } catch (Exception e) { +// // ignore +// } +// } + + /** + * Create a random string with the specified length. + * + * @param len the number of characters + * @return the random string + */ + protected String getRandomString(int len) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < len; i++) { + buff.append((char) ('a' + random.nextInt(20))); + } + return buff.toString(); + } + + @Override + public TestBase init(TestAll conf) throws Exception { + super.init(conf); + return this; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestHaltApp.java b/modules/h2/src/test/java/org/h2/test/synth/TestHaltApp.java new file mode 100644 index 0000000000000..0f1af2e365f3e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestHaltApp.java @@ -0,0 +1,183 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.utils.SelfDestructor; + +/** + * The application code for the {@link TestHalt} application. + */ +public class TestHaltApp extends TestHalt { + + private int rowCount; + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + SelfDestructor.startCountdown(60); + TestHaltApp app = new TestHaltApp(); + if (args.length == 0) { + app.controllerTest(); + } else { + app.operations = Integer.parseInt(args[0]); + app.flags = Integer.parseInt(args[1]); + app.value = Integer.parseInt(args[2]); + app.processRunRandom(); + } + } + + @Override + protected void execute(Statement stat, String sql) throws SQLException { + traceOperation("execute: " + sql); + super.execute(stat, sql); + } + + /** + * Initialize the database. + */ + @Override + protected void controllerInit() throws SQLException { + Statement stat = conn.createStatement(); + // stat.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR(255))"); + for (int i = 0; i < 20; i++) { + execute(stat, "DROP TABLE IF EXISTS TEST" + i); + execute(stat, "CREATE TABLE TEST" + i + + "(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + } + for (int i = 0; i < 20; i += 2) { + execute(stat, "DROP TABLE TEST" + i); + } + execute(stat, "DROP TABLE IF EXISTS TEST"); + execute(stat, "CREATE TABLE TEST" + + "(ID BIGINT GENERATED BY DEFAULT AS IDENTITY, " + + "NAME VARCHAR(255), DATA CLOB)"); + } + + /** + * Wait after the application has been started. + */ + @Override + protected void controllerWaitAfterAppStart() throws Exception { + int sleep = 10 + random.nextInt(300); + if ((flags & FLAG_NO_DELAY) == 0) { + sleep += 1000; + } + Thread.sleep(sleep); + } + + /** + * This method is called after a simulated crash. The method should check if + * the data is transactionally consistent and throw an exception if not. + * + * @throws SQLException if the data is not consistent. 
+ */ + @Override + protected void controllerCheckAfterCrash() throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + int count = rs.getInt(1); + System.out.println("count: " + count); + if (count % 2 != 0) { + traceOperation("row count: " + count); + throw new SQLException("Unexpected odd row count: " + count); + } + } + + /** + * Initialize the application. + */ + @Override + protected void processAppStart() throws SQLException { + Statement stat = conn.createStatement(); + if ((flags & FLAG_NO_DELAY) != 0) { + execute(stat, "SET WRITE_DELAY 0"); + execute(stat, "SET MAX_LOG_SIZE 1"); + } + ResultSet rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + rowCount = rs.getInt(1); + traceOperation("rows: " + rowCount, null); + } + + /** + * Run the application code. + */ + @Override + protected void processAppRun() throws SQLException { + conn.setAutoCommit(false); + traceOperation("setAutoCommit false"); + int rows = 10000 + value; + PreparedStatement prepInsert = conn.prepareStatement( + "INSERT INTO TEST(NAME, DATA) VALUES('Hello World', ?)"); + PreparedStatement prepUpdate = conn.prepareStatement( + "UPDATE TEST SET NAME = 'Hallo Welt', DATA = ? 
WHERE ID = ?"); + for (int i = 0; i < rows; i++) { + Statement stat = conn.createStatement(); + if ((operations & OP_INSERT) != 0) { + if ((flags & FLAG_LOBS) != 0) { + String s = getRandomString(random.nextInt(200)); + prepInsert.setString(1, s); + traceOperation("insert " + s); + prepInsert.execute(); + } else { + execute(stat, "INSERT INTO TEST(NAME) " + + "VALUES('Hello World')"); + } + ResultSet rs = stat.getGeneratedKeys(); + rs.next(); + int key = rs.getInt(1); + traceOperation("inserted key: " + key); + rowCount++; + } + if ((operations & OP_UPDATE) != 0) { + if ((flags & FLAG_LOBS) != 0) { + String s = getRandomString(random.nextInt(200)); + prepUpdate.setString(1, s); + int x = random.nextInt(rowCount + 1); + prepUpdate.setInt(2, x); + traceOperation("update " + s + " " + x); + prepUpdate.execute(); + } else { + int x = random.nextInt(rowCount + 1); + execute(stat, "UPDATE TEST SET NAME = 'Hallo Welt' " + + "WHERE ID = " + x); + } + } + if ((operations & OP_DELETE) != 0) { + int x = random.nextInt(rowCount + 1); + traceOperation("deleting " + x); + int uc = stat.executeUpdate("DELETE FROM TEST " + + "WHERE ID = " + x); + traceOperation("updated: " + uc); + rowCount -= uc; + } + traceOperation("rowCount " + rowCount); + traceOperation("rows now: " + rowCount, null); + if (rowCount % 2 == 0) { + traceOperation("commit " + rowCount); + conn.commit(); + traceOperation("committed: " + rowCount, null); + } + if ((flags & FLAG_NO_DELAY) != 0) { + if (random.nextInt(10) == 0 && (rowCount % 2 == 0)) { + execute(stat, "CHECKPOINT"); + } + } + } + traceOperation("rollback"); + conn.rollback(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestJoin.java b/modules/h2/src/test/java/org/h2/test/synth/TestJoin.java new file mode 100644 index 0000000000000..a60f439803aff --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestJoin.java @@ -0,0 +1,309 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.test.TestBase; +import org.h2.util.New; +import org.h2.util.StringUtils; + +/** + * A test that runs random join statements against two databases and compares + * the results. + */ +public class TestJoin extends TestBase { + + private final ArrayList<Connection> connections = New.arrayList(); + private Random random; + private int paramCount; + private StringBuilder buff; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testJoin(); + } + + private void testJoin() throws Exception { + deleteDb("join"); + String shortestFailed = null; + + Connection c1 = getConnection("join"); + connections.add(c1); + + Class.forName("org.postgresql.Driver"); + Connection c2 = DriverManager.getConnection("jdbc:postgresql:test", "sa", "sa"); + connections.add(c2); + + // Class.forName("com.mysql.jdbc.Driver"); + // Connection c2 = + // DriverManager.getConnection("jdbc:mysql://localhost/test", "sa", + // "sa"); + // connections.add(c2); + + // Class.forName("org.hsqldb.jdbcDriver"); + // Connection c2 = DriverManager.getConnection("jdbc:hsqldb:join", "sa", + // ""); + // connections.add(c2); + + /* + DROP TABLE ONE; + DROP TABLE TWO; + CREATE TABLE ONE(A INT PRIMARY KEY, B INT); + INSERT INTO ONE VALUES(0, NULL); + INSERT INTO ONE VALUES(1, 0); + INSERT INTO ONE VALUES(2, 1); + INSERT INTO ONE VALUES(3, 4); + 
CREATE TABLE TWO(A INT PRIMARY KEY, B INT); + INSERT INTO TWO VALUES(0, NULL); + INSERT INTO TWO VALUES(1, 0); + INSERT INTO TWO VALUES(2, 2); + INSERT INTO TWO VALUES(3, 3); + INSERT INTO TWO VALUES(4, NULL); + */ + + execute("DROP TABLE ONE", null, true); + execute("DROP TABLE TWO", null, true); + execute("CREATE TABLE ONE(A INT PRIMARY KEY, B INT)", null); + execute("INSERT INTO ONE VALUES(0, NULL)", null); + execute("INSERT INTO ONE VALUES(1, 0)", null); + execute("INSERT INTO ONE VALUES(2, 1)", null); + execute("INSERT INTO ONE VALUES(3, 4)", null); + execute("CREATE TABLE TWO(A INT PRIMARY KEY, B INT)", null); + execute("INSERT INTO TWO VALUES(0, NULL)", null); + execute("INSERT INTO TWO VALUES(1, 0)", null); + execute("INSERT INTO TWO VALUES(2, 2)", null); + execute("INSERT INTO TWO VALUES(3, 3)", null); + execute("INSERT INTO TWO VALUES(4, NULL)", null); + random = new Random(); + long startTime = System.nanoTime(); + for (int i = 0;; i++) { + paramCount = 0; + buff = new StringBuilder(); + long time = System.nanoTime(); + if (time - startTime > TimeUnit.SECONDS.toNanos(5)) { + printTime("i:" + i); + startTime = time; + } + buff.append("SELECT "); + int tables = 1 + random.nextInt(5); + for (int j = 0; j < tables; j++) { + if (j > 0) { + buff.append(", "); + } + buff.append("T" + (char) ('0' + j) + ".A"); + } + buff.append(" FROM "); + appendRandomTable(); + buff.append(" T0 "); + for (int j = 1; j < tables; j++) { + if (random.nextBoolean()) { + buff.append("INNER"); + } else { + // if(random.nextInt(4)==1) { + // buff.append("RIGHT"); + // } else { + buff.append("LEFT"); + // } + } + buff.append(" JOIN "); + appendRandomTable(); + buff.append(" T"); + buff.append((char) ('0' + j)); + buff.append(" ON "); + appendRandomCondition(j); + } + if (random.nextBoolean()) { + buff.append("WHERE "); + appendRandomCondition(tables - 1); + } + String sql = buff.toString(); + Object[] params = new Object[paramCount]; + for (int j = 0; j < paramCount; j++) { + 
params[j] = random.nextInt(4) == 1 ? null : random.nextInt(10) - 3; + } + try { + execute(sql, params); + } catch (Exception e) { + if (shortestFailed == null || shortestFailed.length() > sql.length()) { + TestBase.logError("/*SHORT*/ " + sql, null); + shortestFailed = sql; + } + } + } + // c1.close(); + // c2.close(); + } + + private void appendRandomTable() { + if (random.nextBoolean()) { + buff.append("ONE"); + } else { + buff.append("TWO"); + } + } + + private void appendRandomCondition(int j) { + if (random.nextInt(10) == 1) { + buff.append("NOT "); + appendRandomCondition(j); + } else if (random.nextInt(5) == 1) { + buff.append("("); + appendRandomCondition(j); + if (random.nextBoolean()) { + buff.append(") OR ("); + } else { + buff.append(") AND ("); + } + appendRandomCondition(j); + buff.append(")"); + } else { + if (j > 0 && random.nextBoolean()) { + buff.append("T" + (char) ('0' + j - 1) + ".A=T" + (char) ('0' + j) + ".A "); + } else { + appendRandomConditionPart(j); + } + } + } + + private void appendRandomConditionPart(int j) { + int t1 = j <= 1 ? 0 : random.nextInt(j + 1); + int t2 = j <= 1 ? 0 : random.nextInt(j + 1); + String c1 = random.nextBoolean() ? "A" : "B"; + String c2 = random.nextBoolean() ? "A" : "B"; + buff.append("T" + (char) ('0' + t1)); + buff.append("." + c1); + if (random.nextInt(4) == 1) { + if (random.nextInt(5) == 1) { + buff.append(" IS NOT NULL"); + } else { + buff.append(" IS NULL"); + } + } else { + if (random.nextInt(5) == 1) { + switch (random.nextInt(5)) { + case 0: + buff.append(">"); + break; + case 1: + buff.append("<"); + break; + case 2: + buff.append("<="); + break; + case 3: + buff.append(">="); + break; + case 4: + buff.append("<>"); + break; + default: + } + } else { + buff.append("="); + } + if (random.nextBoolean()) { + buff.append("T" + (char) ('0' + t2)); + buff.append("." 
+ c2); + } else { + buff.append(random.nextInt(5) - 1); + } + } + buff.append(" "); + } + + private void execute(String sql, Object[] params) { + execute(sql, params, false); + } + + private void execute(String sql, Object[] params, boolean ignoreDifference) { + String first = null; + for (int i = 0; i < connections.size(); i++) { + Connection conn = connections.get(i); + String s; + try { + Statement stat; + boolean result; + if (params == null || params.length == 0) { + stat = conn.createStatement(); + result = stat.execute(sql); + } else { + PreparedStatement prep = conn.prepareStatement(sql); + stat = prep; + for (int j = 0; j < params.length; j++) { + prep.setObject(j + 1, params[j]); + } + result = prep.execute(); + } + if (result) { + ResultSet rs = stat.getResultSet(); + s = "rs: " + readResult(rs); + } else { + s = "updateCount: " + stat.getUpdateCount(); + } + } catch (SQLException e) { + s = "exception"; + } + if (i == 0) { + first = s; + } else { + if (!ignoreDifference && !s.equals(first)) { + fail("FAIL s:" + s + " first:" + first + " sql:" + sql); + } + } + } + } + + private static String readResult(ResultSet rs) throws SQLException { + StringBuilder b = new StringBuilder(); + ResultSetMetaData meta = rs.getMetaData(); + int columnCount = meta.getColumnCount(); + for (int i = 0; i < columnCount; i++) { + if (i > 0) { + b.append(","); + } + b.append(StringUtils.toUpperEnglish(meta.getColumnLabel(i + 1))); + } + b.append(":\n"); + String result = b.toString(); + ArrayList list = New.arrayList(); + while (rs.next()) { + b = new StringBuilder(); + for (int i = 0; i < columnCount; i++) { + if (i > 0) { + b.append(","); + } + b.append(rs.getString(i + 1)); + } + list.add(b.toString()); + } + Collections.sort(list); + for (int i = 0; i < list.size(); i++) { + result += list.get(i) + "\n"; + } + return result; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestKill.java b/modules/h2/src/test/java/org/h2/test/synth/TestKill.java new file 
mode 100644 index 0000000000000..906547dcf996b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestKill.java @@ -0,0 +1,165 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.test.TestBase; +import org.h2.test.utils.SelfDestructor; + +/** + * A random recovery test. This test starts a process that executes random + * operations against a database, then kills this process. Afterwards recovery + * is tested. + */ +public class TestKill extends TestBase { + + private static final String DIR = TestBase.getTestDir("kill"); + + private static final int ACCOUNTS = 10; + + private Connection conn; + private final Random random = new Random(1); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + String connect = ""; + + connect = ";MAX_LOG_SIZE=10;THROTTLE=80"; + + String url = getURL(DIR + "/kill" + connect, true); + String user = getUser(); + String password = getPassword(); + String selfDestruct = SelfDestructor.getPropertyString(60); + String[] procDef = { + "java", selfDestruct, + "-cp", getClassPath(), + "org.h2.test.synth.TestKillProcess", url, user, + password, getBaseDir(), "" + ACCOUNTS }; + + for (int i = 0;; i++) { + printTime("TestKill " + i); + if (i % 10 == 0) { + trace("deleting db..."); + deleteDb("kill"); + } + conn = getConnection(url); + createTables(); + checkData(); + initData(); + conn.close(); + Process proc = Runtime.getRuntime().exec(procDef); + // while(true) { + // int ch = proc.getErrorStream().read(); + // if(ch < 0) { + // break; + // } + // System.out.print((char)ch); + // } + int runtime = random.nextInt(10000); + trace("running..."); + Thread.sleep(runtime); + trace("stopping..."); + proc.destroy(); + proc.waitFor(); + trace("stopped"); + } + } + + private void createTables() throws SQLException { + trace("createTables..."); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS ACCOUNT" + + "(ID INT PRIMARY KEY, SUM INT)"); + stat.execute("CREATE TABLE IF NOT EXISTS LOG(" + + "ID IDENTITY, ACCOUNTID INT, AMOUNT INT, " + + "FOREIGN KEY(ACCOUNTID) REFERENCES ACCOUNT(ID))"); + stat.execute("CREATE TABLE IF NOT EXISTS TEST_A" + + "(ID INT PRIMARY KEY, DATA VARCHAR)"); + stat.execute("CREATE TABLE IF NOT EXISTS TEST_B" + + "(ID INT PRIMARY KEY, DATA VARCHAR)"); + } + + private void initData() throws SQLException { + trace("initData..."); + conn.createStatement().execute("DROP TABLE LOG"); + conn.createStatement().execute("DROP TABLE ACCOUNT"); + conn.createStatement().execute("DROP TABLE TEST_A"); + conn.createStatement().execute("DROP TABLE TEST_B"); + createTables(); + 
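Editorial note: TestKill's schema ties each LOG row to its ACCOUNT via a foreign key, and the writer process commits the log insert and the matching balance update in one transaction, so the controller can assert after a kill that every ACCOUNT.SUM equals SUM(AMOUNT) over that account's LOG rows. The pure-Java model below is an illustrative sketch of that invariant, not part of the H2 suite; the class and map names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

/**
 * Models why the post-crash check can hold: each transaction stages a
 * log entry and the matching balance update, and only a commit publishes
 * both. A crash before commit loses the staged pair together.
 */
class LedgerInvariantSketch {

    static final Map<Integer, Integer> balance = new HashMap<>(); // ACCOUNT.SUM
    static final Map<Integer, Integer> logSum = new HashMap<>();  // SUM(LOG.AMOUNT)

    public static void main(String[] args) {
        Random random = new Random();
        int accounts = 10;
        for (int i = 0; i < accounts; i++) {
            balance.put(i, 0);
            logSum.put(i, 0);
        }
        for (int i = 0; i < 1000; i++) {
            int account = random.nextInt(accounts);
            int amount = random.nextInt(100);
            // stage both writes of the transaction
            int stagedLog = logSum.get(account) + amount;
            int stagedBalance = balance.get(account) + amount;
            if (random.nextInt(20) == 0) {
                // simulated kill before commit: the staged pair is lost
                // together, so neither "table" sees half a transaction
                break;
            }
            // commit publishes both writes atomically
            logSum.put(account, stagedLog);
            balance.put(account, stagedBalance);
        }
        // recovery check, analogous to checkData(): balances match the log
        for (int i = 0; i < accounts; i++) {
            if (!balance.get(i).equals(logSum.get(i))) {
                throw new AssertionError("account " + i + " inconsistent");
            }
        }
        System.out.println("ledger consistent after simulated crash");
    }
}
```

Under the assumption that the database's commit is atomic across both tables, a kill at any point can never leave a half-applied transfer visible, which is exactly what checkData() verifies against SUM(AMOUNT).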
PreparedStatement prep = conn.prepareStatement( + "INSERT INTO ACCOUNT VALUES(?, 0)"); + for (int i = 0; i < ACCOUNTS; i++) { + prep.setInt(1, i); + prep.execute(); + } + PreparedStatement p1 = conn.prepareStatement( + "INSERT INTO TEST_A VALUES(?, '')"); + PreparedStatement p2 = conn.prepareStatement( + "INSERT INTO TEST_B VALUES(?, '')"); + for (int i = 0; i < ACCOUNTS; i++) { + p1.setInt(1, i); + p2.setInt(1, i); + p1.execute(); + p2.execute(); + } + } + + private void checkData() throws SQLException { + trace("checkData..."); + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM ACCOUNT ORDER BY ID"); + PreparedStatement prep = conn.prepareStatement( + "SELECT SUM(AMOUNT) FROM LOG WHERE ACCOUNTID=?"); + while (rs.next()) { + int account = rs.getInt(1); + int sum = rs.getInt(2); + prep.setInt(1, account); + ResultSet rs2 = prep.executeQuery(); + rs2.next(); + int sumLog = rs2.getInt(1); + assertEquals(sum, sumLog); + trace("account=" + account + " sum=" + sum); + } + PreparedStatement p1 = conn.prepareStatement( + "SELECT * FROM TEST_A WHERE ID=?"); + PreparedStatement p2 = conn.prepareStatement( + "SELECT * FROM TEST_B WHERE ID=?"); + for (int i = 0; i < ACCOUNTS; i++) { + p1.setInt(1, i); + p2.setInt(1, i); + ResultSet r1 = p1.executeQuery(); + ResultSet r2 = p2.executeQuery(); + boolean hasData = r1.next(); + assertEquals(r2.next(), hasData); + if (hasData) { + String d1 = r1.getString("DATA"); + String d2 = r2.getString("DATA"); + assertEquals(d1, d2); + assertFalse(r1.next()); + assertFalse(r2.next()); + trace("test: data=" + d1); + } else { + trace("test: empty"); + } + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestKillProcess.java b/modules/h2/src/test/java/org/h2/test/synth/TestKillProcess.java new file mode 100644 index 0000000000000..4db2e210943b1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestKillProcess.java @@ -0,0 +1,92 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.util.ArrayList; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.store.FileLister; +import org.h2.test.TestBase; +import org.h2.test.utils.SelfDestructor; + +/** + * Test application for TestKill. + */ +public class TestKillProcess { + + private TestKillProcess() { + // utility class + } + + /** + * This method is called when executing this application. + * + * @param args the command line parameters + */ + public static void main(String... args) { + SelfDestructor.startCountdown(60); + try { + Class.forName("org.h2.Driver"); + String url = args[0], user = args[1], password = args[2]; + String baseDir = args[3]; + int accounts = Integer.parseInt(args[4]); + + Random random = new Random(); + Connection conn1 = DriverManager.getConnection( + url, user, password); + + PreparedStatement prep1a = conn1.prepareStatement( + "INSERT INTO LOG(ACCOUNTID, AMOUNT) VALUES(?, ?)"); + PreparedStatement prep1b = conn1.prepareStatement( + "UPDATE ACCOUNT SET SUM=SUM+? WHERE ID=?"); + conn1.setAutoCommit(false); + long time = System.nanoTime(); + String d = null; + for (int i = 0;; i++) { + long t = System.nanoTime(); + if (t > time + TimeUnit.SECONDS.toNanos(1)) { + ArrayList list = FileLister.getDatabaseFiles( + baseDir, "kill", true); + System.out.println("inserting... 
i:" + i + " d:" + d + + " files:" + list.size()); + time = t; + } + if (i > 10000) { + // System.out.println("halt"); + // Runtime.getRuntime().halt(0); + // conn.createStatement().execute("SHUTDOWN IMMEDIATELY"); + // System.exit(0); + } + int account = random.nextInt(accounts); + int value = random.nextInt(100); + prep1a.setInt(1, account); + prep1a.setInt(2, value); + prep1a.execute(); + prep1b.setInt(1, value); + prep1b.setInt(2, account); + prep1b.execute(); + conn1.commit(); + if (random.nextInt(100) < 2) { + d = "D" + random.nextInt(1000); + account = random.nextInt(accounts); + conn1.createStatement().execute( + "UPDATE TEST_A SET DATA='" + d + + "' WHERE ID=" + account); + conn1.createStatement().execute( + "UPDATE TEST_B SET DATA='" + d + + "' WHERE ID=" + account); + } + } + } catch (Throwable e) { + TestBase.logError("error", e); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestKillRestart.java b/modules/h2/src/test/java/org/h2/test/synth/TestKillRestart.java new file mode 100644 index 0000000000000..caabad2b84e09 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestKillRestart.java @@ -0,0 +1,227 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.IOException; +import java.io.InputStream; +import java.lang.reflect.Field; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import java.util.Random; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.h2.test.TestBase; +import org.h2.test.utils.SelfDestructor; + +/** + * Standalone recovery test. A new process is started and then killed while it + * executes random statements. 
+ */ +public class TestKillRestart extends TestBase { + + @Override + public void test() throws Exception { + if (config.networked) { + return; + } + if (getBaseDir().indexOf(':') > 0) { + return; + } + deleteDb("killRestart"); + String url = getURL("killRestart", true); + // String url = getURL( + // "killRestart;CACHE_SIZE=2048;WRITE_DELAY=0", true); + String user = getUser(), password = getPassword(); + String selfDestruct = SelfDestructor.getPropertyString(60); + String[] procDef = { "java", selfDestruct, + "-cp", getClassPath(), "-ea", + getClass().getName(), "-url", url, "-user", user, + "-password", password }; + + int len = getSize(2, 15); + for (int i = 0; i < len; i++) { + Process p = new ProcessBuilder().redirectErrorStream(true).command(procDef).start(); + InputStream in = p.getInputStream(); + OutputCatcher catcher = new OutputCatcher(in); + catcher.start(); + while (true) { + String s = catcher.readLine(60 * 1000); + // System.out.println("> " + s); + if (s == null) { + fail("No reply from process"); + } else if (!s.startsWith("#")) { + continue; + } else if (s.startsWith("#Running")) { + Thread.sleep(100); + printTime("killing: " + i); + p.destroy(); + waitForTimeout(p); + break; + } else if (s.startsWith("#Fail")) { + fail("Failed: " + s); + } + } + } + deleteDb("killRestart"); + } + + /** + * Wait for a subprocess with timeout. 
+ */ + private static void waitForTimeout(final Process p) + throws InterruptedException, IOException { + final long pid = getPidOfProcess(p); + if (pid == -1) { + p.waitFor(); + } + // when we hit Java8 we can use the waitFor(1,TimeUnit.MINUTES) method + final CountDownLatch latch = new CountDownLatch(1); + new Thread("waitForTimeout") { + @Override + public void run() { + try { + p.waitFor(); + latch.countDown(); + } catch (InterruptedException ex) { + ex.printStackTrace(); + } + } + }.start(); + if (!latch.await(2, TimeUnit.MINUTES)) { + String[] procDef = { "jstack", "-F", "-m", "-l", "" + pid }; + new ProcessBuilder().redirectErrorStream(true).command(procDef) + .start(); + OutputCatcher catcher = new OutputCatcher(p.getInputStream()); + catcher.start(); + Thread.sleep(500); + throw new IOException("timed out waiting for subprocess to die"); + } + } + + /** + * Get the PID of a subprocess. Only works on Linux and OSX. + */ + private static long getPidOfProcess(Process p) { + // When we hit Java9 we can call getPid() on Process. + long pid = -1; + try { + if (p.getClass().getName().equals("java.lang.UNIXProcess")) { + Field f = p.getClass().getDeclaredField("pid"); + f.setAccessible(true); + pid = f.getLong(p); + f.setAccessible(false); + } + } catch (Exception e) { + pid = -1; + } + return pid; + } + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) { + SelfDestructor.startCountdown(60); + String driver = "org.h2.Driver"; + String url = "jdbc:h2:mem:test", user = "sa", password = "sa"; + for (int i = 0; i < args.length; i++) { + if ("-url".equals(args[i])) { + url = args[++i]; + } else if ("-driver".equals(args[i])) { + driver = args[++i]; + } else if ("-user".equals(args[i])) { + user = args[++i]; + } else if ("-password".equals(args[i])) { + password = args[++i]; + } + } + System.out.println("#Started; driver: " + driver + " url: " + url + + " user: " + user + " password: " + password); + try { + Class.forName(driver); + System.out.println("#Opening..."); + Connection conn = DriverManager.getConnection(url, user, password); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS TEST" + + "(ID IDENTITY, NAME VARCHAR)"); + stat.execute("CREATE TABLE IF NOT EXISTS TEST2" + + "(ID IDENTITY, NAME VARCHAR)"); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST"); + while (rs.next()) { + rs.getLong("ID"); + rs.getString("NAME"); + } + rs = stat.executeQuery("SELECT * FROM TEST2"); + while (rs.next()) { + rs.getLong("ID"); + rs.getString("NAME"); + } + stat.execute("DROP ALL OBJECTS DELETE FILES"); + System.out.println("#Closing with delete..."); + conn.close(); + System.out.println("#Starting..."); + conn = DriverManager.getConnection(url, user, password); + stat = conn.createStatement(); + stat.execute("DROP ALL OBJECTS"); + stat.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR)"); + stat.execute("CREATE TABLE TEST2(ID IDENTITY, NAME VARCHAR)"); + stat.execute("CREATE TABLE TEST_META(ID INT)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST(NAME) VALUES(?)"); + PreparedStatement prep2 = conn.prepareStatement( + "INSERT INTO TEST2(NAME) VALUES(?)"); + Random r = new Random(0); +// Runnable stopper = new Runnable() { +// public void run() { +// try { +// Thread.sleep(500); +// } catch (InterruptedException e) { +// } +// 
System.out.println("#Halt..."); +// Runtime.getRuntime().halt(0); +// } +// }; +// new Thread(stopper).start(); + for (int i = 0; i < 2000; i++) { + if (i == 100) { + System.out.println("#Running..."); + } + if (r.nextInt(100) < 10) { + conn.createStatement().execute( + "ALTER TABLE TEST_META " + + "ALTER COLUMN ID INT DEFAULT 10"); + } + if (r.nextBoolean()) { + if (r.nextBoolean()) { + prep.setString(1, new String(new char[r.nextInt(30) * 10])); + prep.execute(); + } else { + prep2.setString(1, new String(new char[r.nextInt(30) * 10])); + prep2.execute(); + } + } else { + if (r.nextBoolean()) { + conn.createStatement().execute( + "UPDATE TEST SET NAME = NULL"); + } else { + conn.createStatement().execute( + "UPDATE TEST2 SET NAME = NULL"); + } + } + } + } catch (Throwable e) { + e.printStackTrace(System.out); + System.out.println("#Fail: " + e.toString()); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestKillRestartMulti.java b/modules/h2/src/test/java/org/h2/test/synth/TestKillRestartMulti.java new file mode 100644 index 0000000000000..a89ce1c546fbe --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestKillRestartMulti.java @@ -0,0 +1,324 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.InputStream; +import java.lang.ProcessBuilder.Redirect; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; +import org.h2.api.ErrorCode; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.SelfDestructor; +import org.h2.tools.Backup; +import org.h2.util.New; + +/** + * Standalone recovery test. 
A new process is started and then killed while it + executes random statements using multiple connections. + */ +public class TestKillRestartMulti extends TestBase { + + /** + * We want self-destruct to occur before the read times out and we kill the + * child process. + */ + private static final int CHILD_READ_TIMEOUT_MS = 7 * 60 * 1000; // 7 minutes + private static final int CHILD_SELFDESTRUCT_TIMEOUT_MINS = 5; + + private String driver = "org.h2.Driver"; + private String url; + private String user = "sa"; + private String password = "sa"; + private final ArrayList<Connection> connections = New.arrayList(); + private final ArrayList<String> tables = New.arrayList(); + private int openCount; + + /** + * This method is called when executing this application from the command + * line. + * + * Note that this entry point is used in two different ways, either + * (a) to run just this test, or + * (b) when this test invokes itself in a child process + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + if (args != null && args.length > 0) { + // the child process case + SelfDestructor.startCountdown(CHILD_SELFDESTRUCT_TIMEOUT_MINS); + new TestKillRestartMulti().test(args); + } else { + // the standalone test case + TestBase.createCaller().init().test(); + } + } + + @Override + public void test() throws Exception { + if (config.networked) { + return; + } + if (getBaseDir().indexOf(':') > 0) { + return; + } + deleteDb("killRestartMulti"); + url = getURL("killRestartMulti", true); + user = getUser(); + password = getPassword(); + String selfDestruct = SelfDestructor.getPropertyString(60); + // Inherit error so that the stacktraces reported from SelfDestructor + // show up in our log.
+ ProcessBuilder pb = new ProcessBuilder().redirectError(Redirect.INHERIT) + .command("java", selfDestruct, "-cp", getClassPath(), + "-ea", + getClass().getName(), "-url", url, "-user", user, + "-password", password); + deleteDb("killRestartMulti"); + int len = getSize(3, 10); + Random random = new Random(); + for (int i = 0; i < len; i++) { + Process p = pb.start(); + InputStream in = p.getInputStream(); + OutputCatcher catcher = new OutputCatcher(in); + catcher.start(); + while (true) { + String s = catcher.readLine(CHILD_READ_TIMEOUT_MS); + // System.out.println("> " + s); + if (s == null) { + fail("No reply from process"); + } else if (!s.startsWith("#")) { + // System.out.println(s); + fail("Expected: #..., got: " + s); + } else if (s.startsWith("#Running")) { + int sleep = 10 + random.nextInt(100); + Thread.sleep(sleep); + printTime("killing: " + i); + p.destroy(); + printTime("killing, waiting for: " + i); + p.waitFor(); + printTime("killing, dead: " + i); + break; + } else if (s.startsWith("#Info")) { + // System.out.println("info: " + s); + } else if (s.startsWith("#Fail")) { + System.err.println(s); + while (true) { + String a = catcher.readLine(CHILD_READ_TIMEOUT_MS); + if (a == null || "#End".endsWith(a)) { + break; + } + System.err.println(" " + a); + } + fail("Failed: " + s); + } + } + String backup = getBaseDir() + "/killRestartMulti-" + + System.currentTimeMillis() + ".zip"; + try { + Backup.execute(backup, getBaseDir(), "killRestartMulti", true); + Connection conn = null; + for (int j = 0;; j++) { + try { + conn = openConnection(); + break; + } catch (SQLException e2) { + if (e2.getErrorCode() == ErrorCode.DATABASE_ALREADY_OPEN_1 + && j < 3) { + Thread.sleep(100); + } else { + throw e2; + } + } + } + testConsistent(conn); + Statement stat = conn.createStatement(); + stat.execute("DROP ALL OBJECTS"); + conn.close(); + conn = openConnection(); + conn.close(); + FileUtils.delete(backup); + } catch (SQLException e) { + FileUtils.move(backup, backup + 
".error"); + throw e; + } + } + deleteDb("killRestartMulti"); + } + + private void test(String... args) { + for (int i = 0; i < args.length; i++) { + if ("-url".equals(args[i])) { + url = args[++i]; + } else if ("-driver".equals(args[i])) { + driver = args[++i]; + } else if ("-user".equals(args[i])) { + user = args[++i]; + } else if ("-password".equals(args[i])) { + password = args[++i]; + } + } + System.out.println("#Started; driver: " + driver + " url: " + url + + " user: " + user + " password: " + password); + try { + System.out.println("#Starting..."); + Random random = new Random(); + boolean wasRunning = false; + for (int i = 0; i < 3000; i++) { + if (i > 1000 && connections.size() > 1 && tables.size() > 1) { + System.out.println("#Running connections: " + + connections.size() + " tables: " + tables.size()); + wasRunning = true; + } + if (connections.size() < 1) { + openConnection(); + } + if (tables.size() < 1) { + createTable(random); + } + int p = random.nextInt(100); + if ((p -= 2) <= 0) { + // 2%: open new connection + if (connections.size() < 5) { + openConnection(); + } + } else if ((p -= 1) <= 0) { + // 1%: close connection + if (connections.size() > 1) { + Connection conn = connections.remove( + random.nextInt(connections.size())); + if (random.nextBoolean()) { + conn.close(); + } + } + } else if ((p -= 10) <= 0) { + // 10% create table + createTable(random); + } else if ((p -= 20) <= 0) { + // 20% large insert, delete, or update + if (tables.size() > 0) { + Connection conn = connections.get( + random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + if (random.nextBoolean()) { + // 10% insert + stat.execute("INSERT INTO " + table + + "(NAME) SELECT 'Hello ' || X FROM SYSTEM_RANGE(0, 20)"); + } else if (random.nextBoolean()) { + // 5% update + stat.execute("UPDATE " + table + " SET NAME='Hallo Welt'"); + } else { + // 5% delete + stat.execute("DELETE FROM " + table); 
+ } + } + } else if ((p -= 5) < 0) { + // 5% truncate or drop table + if (tables.size() > 0) { + Connection conn = connections.get(random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + if (random.nextBoolean()) { + stat.execute("TRUNCATE TABLE " + table); + } else { + stat.execute("DROP TABLE " + table); + System.out.println("#Info table dropped: " + table); + tables.remove(table); + } + } + } else if ((p -= 30) <= 0) { + // 30% insert + if (tables.size() > 0) { + Connection conn = connections.get(random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + stat.execute("INSERT INTO " + table + "(NAME) VALUES('Hello World')"); + } + } else { + // 32% delete + if (tables.size() > 0) { + Connection conn = connections.get(random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + stat.execute("DELETE FROM " + table + + " WHERE ID = SELECT MIN(ID) FROM " + table); + } + } + } + System.out.println("#Fail: end " + wasRunning); + System.out.println("#End"); + } catch (Throwable e) { + System.out.println("#Fail: openCount=" + + openCount + " url=" + url + " " + e.toString()); + e.printStackTrace(System.out); + System.out.println("#End"); + } + } + + private Connection openConnection() throws Exception { + Class.forName(driver); + openCount++; + Connection conn = DriverManager.getConnection(url, user, password); + connections.add(conn); + return conn; + } + + private void createTable(Random random) throws SQLException { + Connection conn = connections.get(random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = "TEST" + random.nextInt(10); + try { + stat.execute("CREATE TABLE " + table + "(ID IDENTITY, NAME VARCHAR)"); + System.out.println("#Info table created: " + table); + 
tables.add(table); + } catch (SQLException e) { + if (e.getErrorCode() == ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1) { + System.out.println("#Info table already exists: " + table); + if (!tables.contains(table)) { + tables.add(table); + } + // ok + } else { + throw e; + } + } + } + + private static void testConsistent(Connection conn) throws SQLException { + for (int i = 0; i < 20; i++) { + Statement stat = conn.createStatement(); + try { + ResultSet rs = stat.executeQuery("SELECT * FROM TEST" + i); + while (rs.next()) { + rs.getLong("ID"); + rs.getString("NAME"); + } + rs = stat.executeQuery("SELECT * FROM TEST" + i + " ORDER BY ID"); + while (rs.next()) { + rs.getLong("ID"); + rs.getString("NAME"); + } + } catch (SQLException e) { + if (e.getErrorCode() == ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1) { + // ok + } else { + throw e; + } + } + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestLimit.java b/modules/h2/src/test/java/org/h2/test/synth/TestLimit.java new file mode 100644 index 0000000000000..f7e35c20eab82 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestLimit.java @@ -0,0 +1,92 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.test.TestBase; + +/** + * The LIMIT, OFFSET, maxRows. + */ +public class TestLimit extends TestBase { + + private Statement stat; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = TestBase.createCaller().init(); + // test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws Exception { + deleteDb("limit"); + Connection conn = getConnection("limit"); + stat = conn.createStatement(); + stat.execute("create table test(id int) as " + + "select x from system_range(1, 10)"); + for (int maxRows = 0; maxRows < 12; maxRows++) { + stat.setMaxRows(maxRows); + for (int limit = -2; limit < 12; limit++) { + for (int offset = -2; offset < 12; offset++) { + int l = limit < 0 ? 10 : Math.min(10, limit); + for (int d = 0; d < 2; d++) { + int m = maxRows <= 0 ? 10 : Math.min(10, maxRows); + int expected = Math.min(m, l); + if (offset > 0) { + expected = Math.max(0, Math.min(10 - offset, expected)); + } + String s = "select " + (d == 1 ? "distinct " : "") + + " * from test limit " + (limit == -2 ? "null" : limit) + + " offset " + (offset == -2 ? "null" : offset); + assertRow(expected, s); + String union = "(" + s + ") union (" + s + ")"; + assertRow(expected, union); + m = maxRows <= 0 ? 20 : Math.min(20, maxRows); + if (offset > 0) { + l = Math.max(0, Math.min(10 - offset, l)); + } + expected = Math.min(m, l * 2); + union = "(" + s + ") union all (" + s + ")"; + assertRow(expected, union); + for (int unionLimit = -2; unionLimit < 5; unionLimit++) { + int e = unionLimit < 0 ? 20 : Math.min(20, unionLimit); + e = Math.min(expected, e); + String u = union + " limit " + + (unionLimit == -2 ? 
"null" : unionLimit); + assertRow(e, u); + } + } + } + } + } + assertEquals(0, stat.executeUpdate("delete from test limit 0")); + assertEquals(1, stat.executeUpdate("delete from test limit 1")); + assertEquals(2, stat.executeUpdate("delete from test limit 2")); + assertEquals(7, stat.executeUpdate("delete from test limit null")); + stat.execute("insert into test select x from system_range(1, 10)"); + assertEquals(10, stat.executeUpdate("delete from test limit -1")); + conn.close(); + deleteDb("limit"); + } + + private void assertRow(int expected, String sql) throws SQLException { + try { + assertResultRowCount(expected, stat.executeQuery(sql)); + } catch (AssertionError e) { + stat.executeQuery(sql + " -- cache killer"); + throw e; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestMultiThreaded.java b/modules/h2/src/test/java/org/h2/test/synth/TestMultiThreaded.java new file mode 100644 index 0000000000000..d348082016a3a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestMultiThreaded.java @@ -0,0 +1,181 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.test.TestBase; + +/** + * Tests the multi-threaded mode. + */ +public class TestMultiThreaded extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + /** + * Processes random operations. 
+ */ + private class Processor extends Thread { + private final int id; + private final Statement stat; + private final Random random; + private volatile Throwable exception; + private boolean stop; + + Processor(Connection conn, int id) throws SQLException { + this.id = id; + stat = conn.createStatement(); + random = new Random(id); + } + public Throwable getException() { + return exception; + } + @Override + public void run() { + int count = 0; + ResultSet rs; + try { + while (!stop) { + switch (random.nextInt(6)) { + case 0: + // insert a row for this connection + traceThread("insert " + id + " count: " + count); + stat.execute("INSERT INTO TEST(NAME) VALUES('"+ id +"')"); + traceThread("insert done"); + count++; + break; + case 1: + // delete a row for this connection + if (count > 0) { + traceThread("delete " + id + " count: " + count); + int updateCount = stat.executeUpdate( + "DELETE FROM TEST " + + "WHERE NAME = '"+ id +"' AND ROWNUM()<2"); + traceThread("delete done"); + if (updateCount != 1) { + throw new AssertionError( + "Expected: 1 Deleted: " + updateCount); + } + count--; + } + break; + case 2: + // select the number of rows of this connection + traceThread("select " + id + " count: " + count); + rs = stat.executeQuery("SELECT COUNT(*) " + + "FROM TEST WHERE NAME = '"+ id +"'"); + traceThread("select done"); + rs.next(); + int got = rs.getInt(1); + if (got != count) { + throw new AssertionError("Expected: " + count + " got: " + got); + } + break; + case 3: + traceThread("insert"); + stat.execute("INSERT INTO TEST(NAME) VALUES(NULL)"); + traceThread("insert done"); + break; + case 4: + traceThread("delete"); + stat.execute("DELETE FROM TEST WHERE NAME IS NULL"); + traceThread("delete done"); + break; + case 5: + traceThread("select"); + rs = stat.executeQuery("SELECT * FROM TEST WHERE NAME IS NULL"); + traceThread("select done"); + while (rs.next()) { + rs.getString(1); + } + break; + } + } + } catch (Throwable e) { + exception = e; + } + } + + 
private void traceThread(String s) { + if (config.traceTest) { + trace(id + " " + s); + } + } + public void stopNow() { + this.stop = true; + } + } + + @Override + public void test() throws Exception { + if (config.mvcc) { + return; + } + deleteDb("multiThreaded"); + int size = getSize(2, 4); + Connection[] connList = new Connection[size]; + for (int i = 0; i < size; i++) { + connList[i] = getConnection("multiThreaded;MULTI_THREADED=1;" + + "TRACE_LEVEL_SYSTEM_OUT=1"); + } + Connection conn = connList[0]; + Statement stat = conn.createStatement(); + stat.execute("CREATE SEQUENCE TEST_SEQ"); + stat.execute("CREATE TABLE TEST" + + "(ID BIGINT DEFAULT NEXT VALUE FOR TEST_SEQ, NAME VARCHAR)"); + // stat.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR)"); + // stat.execute("CREATE INDEX IDX_TEST_NAME ON TEST(NAME)"); + trace("init done"); + Processor[] processors = new Processor[size]; + for (int i = 0; i < size; i++) { + conn = connList[i]; + conn.createStatement().execute("SET LOCK_TIMEOUT 1000"); + processors[i] = new Processor(conn, i); + processors[i].start(); + trace("started " + i); + Thread.sleep(100); + } + for (int t = 0; t < 2; t++) { + Thread.sleep(1000); + for (int i = 0; i < size; i++) { + Processor p = processors[i]; + if (p.getException() != null) { + throw new Exception("" + i, p.getException()); + } + } + } + trace("stopping"); + for (int i = 0; i < size; i++) { + Processor p = processors[i]; + p.stopNow(); + } + for (int i = 0; i < size; i++) { + Processor p = processors[i]; + p.join(100); + if (p.getException() != null) { + throw new Exception(p.getException()); + } + } + trace("close"); + for (int i = 0; i < size; i++) { + connList[i].close(); + } + deleteDb("multiThreaded"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestNestedJoins.java b/modules/h2/src/test/java/org/h2/test/synth/TestNestedJoins.java new file mode 100644 index 0000000000000..7ef76406b6168 --- /dev/null +++ 
b/modules/h2/src/test/java/org/h2/test/synth/TestNestedJoins.java @@ -0,0 +1,653 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.File; +import java.io.StringReader; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Random; +import org.h2.api.ErrorCode; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.New; +import org.h2.util.ScriptReader; + +/** + * Tests nested joins and right outer joins. + */ +public class TestNestedJoins extends TestBase { + + private final ArrayList<Statement> dbs = New.arrayList(); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + // test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws Exception { + deleteDb("nestedJoins"); + // testCases2(); + testCases(); + testRandom(); + deleteDb("nestedJoins"); + } + + private void testRandom() throws Exception { + Connection conn = getConnection("nestedJoins"); + dbs.add(conn.createStatement()); + + try { + Class.forName("org.postgresql.Driver"); + Connection c2 = DriverManager.getConnection("jdbc:postgresql:test?loggerLevel=OFF", "sa", "sa"); + dbs.add(c2.createStatement()); + } catch (Exception e) { + // database not installed - ok + } + + // Derby doesn't work currently + // deleteDerby(); + // try { + // Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); + // Connection c2 = DriverManager.getConnection( + // "jdbc:derby:" + getBaseDir() + + // "/derby/test;create=true", "sa", "sa"); + // dbs.add(c2.createStatement()); + // } catch (Exception e) { + // // database not
installed - ok + // } + String shortest = null; + Throwable shortestEx = null; + for (int i = 0; i < 10; i++) { + try { + execute("drop table t" + i); + } catch (Exception e) { + // ignore + } + String sql = "create table t" + i + "(x int)"; + trace(sql + ";"); + execute(sql); + if (i >= 4) { + for (int j = 0; j < i; j++) { + sql = "insert into t" + i + " values(" + j + ")"; + trace(sql + ";"); + execute(sql); + } + } + } + // the first 4 tables: all combinations + for (int i = 0; i < 16; i++) { + for (int j = 0; j < 4; j++) { + if ((i & (1 << j)) != 0) { + String sql = "insert into t" + j + " values(" + i + ")"; + trace(sql + ";"); + execute(sql); + } + } + } + Random random = new Random(1); + int size = getSize(1000, 10000); + for (int i = 0; i < size; i++) { + StringBuilder buff = new StringBuilder(); + int t = 1 + random.nextInt(9); + buff.append("select "); + for (int j = 0; j < t; j++) { + if (j > 0) { + buff.append(", "); + } + buff.append("t" + j + ".x "); + } + buff.append("from "); + appendRandomJoin(random, buff, 0, t - 1); + String sql = buff.toString(); + try { + execute(sql); + } catch (Throwable e) { + if (e instanceof SQLException) { + trace(sql); + fail(sql); + // SQLException se = (SQLException) e; + // System.out.println(se); + // System.out.println(" " + sql); + } + if (shortest == null || sql.length() < shortest.length()) { + shortest = sql; + shortestEx = e; + } + } + } + if (shortest != null) { + shortestEx.printStackTrace(); + fail(shortest + " " + shortestEx); + } + for (int i = 0; i < 10; i++) { + try { + execute("drop table t" + i); + } catch (Exception e) { + // ignore + } + } + for (Statement s : dbs) { + s.getConnection().close(); + } + deleteDerby(); + deleteDb("nestedJoins"); + } + + private void deleteDerby() { + try { + new File("derby.log").delete(); + try { + DriverManager.getConnection("jdbc:derby:" + + getBaseDir() + "/derby/test;shutdown=true", "sa", "sa"); + } catch (Exception e) { + // ignore + } + 
FileUtils.deleteRecursive(getBaseDir() + "/derby", false); + } catch (Exception e) { + e.printStackTrace(); + // database not installed - ok + } + } + + private void appendRandomJoin(Random random, StringBuilder buff, int min, + int max) { + if (min == max) { + buff.append("t" + min); + return; + } + buff.append("("); + int m = min + random.nextInt(max - min); + int left = min + (m == min ? 0 : random.nextInt(m - min)); + appendRandomJoin(random, buff, min, m); + switch (random.nextInt(3)) { + case 0: + buff.append(" inner join "); + break; + case 1: + buff.append(" left outer join "); + break; + case 2: + buff.append(" right outer join "); + break; + } + m++; + int right = m + (m == max ? 0 : random.nextInt(max - m)); + appendRandomJoin(random, buff, m, max); + buff.append(" on t" + left + ".x = t" + right + ".x "); + buff.append(")"); + } + + private void execute(String sql) throws SQLException { + String expected = null; + SQLException e = null; + for (Statement s : dbs) { + try { + boolean result = s.execute(sql); + if (result) { + String data = getResult(s.getResultSet()); + if (expected == null) { + expected = data; + } else { + assertEquals(sql, expected, data); + } + } + } catch (SQLException e2) { + // ignore now, throw at the end + e = e2; + } + } + if (e != null) { + throw e; + } + } + + private static String getResult(ResultSet rs) throws SQLException { + ArrayList<String> list = New.arrayList(); + while (rs.next()) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) { + if (i > 0) { + buff.append(" "); + } + buff.append(rs.getString(i + 1)); + } + list.add(buff.toString()); + } + Collections.sort(list); + return list.toString(); + } + + private void testCases() throws Exception { + + Connection conn = getConnection("nestedJoins"); + Statement stat = conn.createStatement(); + ResultSet rs; + String sql; + + // issue 288 + assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, stat).
+ execute("select 1 from dual a right outer join " + + "(select b.x from dual b) c on unknown.x = c.x, dual d"); + + // issue 288 + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test values(1)"); + // this threw the exception Column "T.ID" must be in the GROUP BY list + stat.execute("select * from test t right outer join " + + "(select t2.id, count(*) c from test t2 group by t2.id) x on x.id = t.id " + + "where t.id = 1"); + + // the query plan of queries with subqueries + // that contain nested joins was wrong + stat.execute("select 1 from (select 2 from ((test t1 inner join test t2 " + + "on t1.id=t2.id) inner join test t3 on t3.id=t1.id)) x"); + + stat.execute("drop table test"); + + // issue 288 + /* + create table test(id int); + select 1 from test a right outer join test b on a.id = 1, test c; + drop table test; + */ + stat.execute("create table test(id int)"); + stat.execute("select 1 from test a " + + "right outer join test b on a.id = 1, test c"); + stat.execute("drop table test"); + + /* + create table a(id int); + create table b(id int); + create table c(id int); + select * from a inner join b inner join c on c.id = b.id on b.id = a.id; + drop table a, b, c; + */ + stat.execute("create table a(id int)"); + stat.execute("create table b(id int)"); + stat.execute("create table c(id int)"); + rs = stat.executeQuery("explain select * from a inner join b " + + "inner join c on c.id = b.id on b.id = a.id"); + assertTrue(rs.next()); + sql = rs.getString(1); + assertContains(sql, "("); + stat.execute("drop table a, b, c"); + + // see roadmap, tag: swapInnerJoinTables + /* + create table test(id int primary key, x int) + as select x, x from system_range(1, 10); + create index on test(x); + create table o(id int primary key) + as select x from system_range(1, 10); + explain select * from test a inner join test b + on a.id=b.id left outer join o on o.id=a.id where b.x=1; + -- expected: no tableScan + explain select * from test a 
inner join test b + on a.id=b.id inner join o on o.id=a.id where b.x=1; + -- expected: no tableScan + drop table test; + drop table o; + */ + stat.execute("create table test(id int primary key, x int) " + + "as select x, x from system_range(1, 10)"); + stat.execute("create index on test(x)"); + stat.execute("create table o(id int primary key) " + + "as select x from system_range(1, 10)"); + rs = stat.executeQuery("explain select * from test a inner join " + + "test b on a.id=b.id inner join o on o.id=a.id where b.x=1"); + assertTrue(rs.next()); + sql = rs.getString(1); + assertTrue("using table scan", sql.indexOf("tableScan") < 0); + rs = stat.executeQuery("explain select * from test a inner join " + + "test b on a.id=b.id left outer join o on o.id=a.id where b.x=1"); + assertTrue(rs.next()); + sql = rs.getString(1); + // TODO support optimizing queries with both inner and outer joins + // assertTrue("using table scan", sql.indexOf("tableScan") < 0); + stat.execute("drop table test"); + stat.execute("drop table o"); + + /* + create table test(id int primary key); + insert into test values(1); + select b.id from test a left outer join test b on a.id = b.id + and not exists (select * from test c where c.id = b.id); + -- expected: null + */ + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test values(1)"); + rs = stat.executeQuery("select b.id from test a left outer join " + + "test b on a.id = b.id and not exists " + + "(select * from test c where c.id = b.id)"); + assertTrue(rs.next()); + sql = rs.getString(1); + assertEquals(null, sql); + stat.execute("drop table test"); + + /* + create table test(id int primary key); + explain select * from test a left outer join (test c) on a.id = c.id; + -- expected: uses the primary key index + */ + stat.execute("create table test(id int primary key)"); + rs = stat.executeQuery("explain select * from test a " + + "left outer join (test c) on a.id = c.id"); + assertTrue(rs.next()); + sql = 
rs.getString(1); + assertContains(sql, "PRIMARY_KEY"); + stat.execute("drop table test"); + + /* + create table t1(a int, b int); + create table t2(a int, b int); + create table t3(a int, b int); + create table t4(a int, b int); + insert into t1 values(1,1), (2,2), (3,3); + insert into t2 values(1,1), (2,2); + insert into t3 values(1,1), (3,3); + insert into t4 values(1,1), (2,2), (3,3), (4,4); + select distinct t1.a, t2.a, t3.a from t1 + right outer join t3 on t1.b=t3.a right outer join t2 on t2.b=t1.a; + drop table t1, t2, t3, t4; + */ + stat.execute("create table t1(a int, b int)"); + stat.execute("create table t2(a int, b int)"); + stat.execute("create table t3(a int, b int)"); + stat.execute("create table t4(a int, b int)"); + stat.execute("insert into t1 values(1,1), (2,2), (3,3)"); + stat.execute("insert into t2 values(1,1), (2,2)"); + stat.execute("insert into t3 values(1,1), (3,3)"); + stat.execute("insert into t4 values(1,1), (2,2), (3,3), (4,4)"); + rs = stat.executeQuery( + "explain select distinct t1.a, t2.a, t3.a from t1 " + + "right outer join t3 on t1.b=t3.a right outer join t2 on t2.b=t1.a"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT DISTINCT T1.A, T2.A, T3.A FROM PUBLIC.T2 " + + "LEFT OUTER JOIN ( PUBLIC.T3 LEFT OUTER JOIN PUBLIC.T1 " + + "ON T1.B = T3.A ) ON T2.B = T1.A", sql); + rs = stat.executeQuery("select distinct t1.a, t2.a, t3.a from t1 " + + "right outer join t3 on t1.b=t3.a " + + "right outer join t2 on t2.b=t1.a"); + // expected: 1 1 1; null 2 null + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("1", rs.getString(2)); + assertEquals("1", rs.getString(3)); + assertTrue(rs.next()); + assertEquals(null, rs.getString(1)); + assertEquals("2", rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + stat.execute("drop table t1, t2, t3, t4"); + + /* + create table a(x int); + create table b(x int); + create table c(x int); + insert into a 
values(1); + insert into b values(1); + insert into c values(1), (2); + select a.x, b.x, c.x from a inner join b on a.x = b.x + right outer join c on c.x = a.x; + drop table a, b, c; + */ + stat.execute("create table a(x int)"); + stat.execute("create table b(x int)"); + stat.execute("create table c(x int)"); + stat.execute("insert into a values(1)"); + stat.execute("insert into b values(1)"); + stat.execute("insert into c values(1), (2)"); + rs = stat.executeQuery("explain select a.x, b.x, c.x from a " + + "inner join b on a.x = b.x right outer join c on c.x = a.x"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.X, B.X, C.X FROM PUBLIC.C LEFT OUTER JOIN " + + "( PUBLIC.A INNER JOIN PUBLIC.B ON A.X = B.X ) ON C.X = A.X", sql); + rs = stat.executeQuery("select a.x, b.x, c.x from a " + + "inner join b on a.x = b.x " + + "right outer join c on c.x = a.x"); + // expected result: 1 1 1; null null 2 + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("1", rs.getString(2)); + assertEquals("1", rs.getString(3)); + assertTrue(rs.next()); + assertEquals(null, rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals("2", rs.getString(3)); + assertFalse(rs.next()); + stat.execute("drop table a, b, c"); + + /* + drop table a, b, c; + create table a(x int); + create table b(x int); + create table c(x int, y int); + insert into a values(1), (2); + insert into b values(3); + insert into c values(1, 3); + insert into c values(4, 5); + explain select * from a left outer join + (b left outer join c on b.x = c.y) on a.x = c.x; + select * from a left outer join + (b left outer join c on b.x = c.y) on a.x = c.x; + */ + stat.execute("create table a(x int)"); + stat.execute("create table b(x int)"); + stat.execute("create table c(x int, y int)"); + stat.execute("insert into a values(1), (2)"); + stat.execute("insert into b values(3)"); + stat.execute("insert into c values(1, 3)"); + stat.execute("insert 
into c values(4, 5)"); + rs = stat.executeQuery("explain select * from a " + + "left outer join (b " + + "left outer join c " + + "on b.x = c.y) " + + "on a.x = c.x"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.X, B.X, C.X, C.Y FROM PUBLIC.A " + + "LEFT OUTER JOIN ( PUBLIC.B " + + "LEFT OUTER JOIN PUBLIC.C " + + "ON B.X = C.Y ) " + + "ON A.X = C.X", sql); + rs = stat.executeQuery("select * from a " + + "left outer join (b " + + "left outer join c " + + "on b.x = c.y) " + + "on a.x = c.x"); + // expected result: 1 3 1 3; 2 null null null + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("3", rs.getString(2)); + assertEquals("1", rs.getString(3)); + assertEquals("3", rs.getString(4)); + assertTrue(rs.next()); + assertEquals("2", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertEquals(null, rs.getString(4)); + assertFalse(rs.next()); + stat.execute("drop table a, b, c"); + + stat.execute("create table a(x int primary key)"); + stat.execute("insert into a values(0), (1)"); + stat.execute("create table b(x int primary key)"); + stat.execute("insert into b values(0)"); + stat.execute("create table c(x int primary key)"); + rs = stat.executeQuery("select a.*, b.*, c.* from a " + + "left outer join (b " + + "inner join c " + + "on b.x = c.x) " + + "on a.x = b.x"); + // expected result: 0, null, null; 1, null, null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + rs = stat.executeQuery("select * from a " + + "left outer join b on a.x = b.x " + + "inner join c on b.x = c.x"); + // expected result: - + assertFalse(rs.next()); + rs = stat.executeQuery("select * from a " + + "left outer join b 
on a.x = b.x " + + "left outer join c on b.x = c.x"); + // expected result: 0 0 null; 1 null null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals("0", rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + + rs = stat.executeQuery("select * from a " + + "left outer join (b " + + "inner join c on b.x = c.x) on a.x = b.x"); + // expected result: 0 null null; 1 null null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + rs = stat.executeQuery("explain select * from a " + + "left outer join (b " + + "inner join c on c.x = 1) on a.x = b.x"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.X, B.X, C.X FROM PUBLIC.A " + + "LEFT OUTER JOIN ( PUBLIC.B " + + "INNER JOIN PUBLIC.C ON C.X = 1 ) ON A.X = B.X", sql); + stat.execute("drop table a, b, c"); + + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test values(0), (1), (2)"); + rs = stat.executeQuery("select * from test a " + + "left outer join (test b " + + "inner join test c on b.id = c.id - 2) on a.id = b.id + 1"); + // drop table test; + // create table test(id int primary key); + // insert into test values(0), (1), (2); + // select * from test a left outer join + // (test b inner join test c on b.id = c.id - 2) on a.id = b.id + 1; + // expected result: 0 null null; 1 0 2; 2 null null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + 
assertEquals("0", rs.getString(2)); + assertEquals("2", rs.getString(3)); + assertTrue(rs.next()); + assertEquals("2", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + stat.execute("drop table test"); + + stat.execute("create table a(pk int, val varchar(255))"); + stat.execute("create table b(pk int, val varchar(255))"); + stat.execute("create table base(pk int, deleted int)"); + stat.execute("insert into base values(1, 0)"); + stat.execute("insert into base values(2, 1)"); + stat.execute("insert into base values(3, 0)"); + stat.execute("insert into a values(1, 'a')"); + stat.execute("insert into b values(2, 'a')"); + stat.execute("insert into b values(3, 'a')"); + rs = stat.executeQuery( + "explain select a.pk, a_base.pk, b.pk, b_base.pk " + + "from a " + + "inner join base a_base on a.pk = a_base.pk " + + "left outer join (b inner join base b_base " + + "on b.pk = b_base.pk and b_base.deleted = 0) on 1=1"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.PK, A_BASE.PK, B.PK, B_BASE.PK " + + "FROM PUBLIC.BASE A_BASE " + + "LEFT OUTER JOIN ( PUBLIC.B " + + "INNER JOIN PUBLIC.BASE B_BASE " + + "ON (B_BASE.DELETED = 0) AND (B.PK = B_BASE.PK) ) " + + "ON TRUE INNER JOIN PUBLIC.A ON 1=1 " + + "WHERE A.PK = A_BASE.PK", sql); + rs = stat.executeQuery( + "select a.pk, a_base.pk, b.pk, b_base.pk from a " + + "inner join base a_base on a.pk = a_base.pk " + + "left outer join (b inner join base b_base " + + "on b.pk = b_base.pk and b_base.deleted = 0) on 1=1"); + // expected: 1 1 3 3 + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("1", rs.getString(2)); + assertEquals("3", rs.getString(3)); + assertEquals("3", rs.getString(4)); + assertFalse(rs.next()); + stat.execute("drop table a, b, base"); + + // while (rs.next()) { + // for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) { + // System.out.print(rs.getString(i + 1) + " 
"); + // } + // System.out.println(); + // } + + conn.close(); + deleteDb("nestedJoins"); + } + + private static String cleanRemarks(String sql) { + ScriptReader r = new ScriptReader(new StringReader(sql)); + r.setSkipRemarks(true); + sql = r.readStatement(); + sql = sql.replaceAll("\\n", " "); + while (sql.contains("  ")) { + sql = sql.replaceAll("  ", " "); + } + return sql; + } + + private void testCases2() throws Exception { + Connection conn = getConnection("nestedJoins"); + Statement stat = conn.createStatement(); + stat.execute("create table a(id int primary key)"); + stat.execute("create table b(id int primary key)"); + stat.execute("create table c(id int primary key)"); + stat.execute("insert into a(id) values(1)"); + stat.execute("insert into c(id) values(1)"); + stat.execute("insert into b(id) values(1)"); + stat.executeQuery("select 1 from a left outer join " + + "(a t0 join b t1 on 1 = 1) on t1.id = 1, c"); + conn.close(); + deleteDb("nestedJoins"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestOuterJoins.java b/modules/h2/src/test/java/org/h2/test/synth/TestOuterJoins.java new file mode 100644 index 0000000000000..bf2728427c5cd --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestOuterJoins.java @@ -0,0 +1,588 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.File; +import java.io.StringReader; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Random; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.New; +import org.h2.util.ScriptReader; + +/** + * Tests nested joins and right outer joins. 
+ */ +public class TestOuterJoins extends TestBase { + + private final ArrayList<Statement> dbs = New.arrayList(); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws Exception { + deleteDb("outerJoins"); + testCases(); + testRandom(); + deleteDb("outerJoins"); + } + + private void testRandom() throws Exception { + Connection conn = getConnection("outerJoins"); + dbs.add(conn.createStatement()); + + try { + Class.forName("org.postgresql.Driver"); + Connection c2 = DriverManager.getConnection( + "jdbc:postgresql:test?loggerLevel=OFF", "sa", "sa"); + dbs.add(c2.createStatement()); + } catch (Exception e) { + // database not installed - ok + } + deleteDerby(); + try { + Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); + Connection c2 = DriverManager.getConnection( + "jdbc:derby:" + getBaseDir() + + "/derby/test;create=true", "sa", "sa"); + dbs.add(c2.createStatement()); + } catch (Exception e) { + // database not installed - ok + } + String shortest = null; + Throwable shortestEx = null; + for (int i = 0; i < 4; i++) { + try { + executeAndLog("drop table t" + i); + } catch (Exception e) { + // ignore + } + } + executeAndLog("create table t0(x int primary key)"); + executeAndLog("create table t1(x int)"); + // for H2, this will ensure it's not using a clustered index + executeAndLog("create table t2(x real primary key)"); + executeAndLog("create table t3(x int)"); + executeAndLog("create index idx_t3_x on t3(x)"); + for (int i = 0; i < 16; i++) { + for (int j = 0; j < 4; j++) { + if ((i & (1 << j)) != 0) { + executeAndLog("insert into t" + j + " values(" + i + ")"); + } + } + } + Random random = new Random(); + int len = getSize(500, 5000); + for (int i = 0; i < len; i++) { + StringBuilder buff = new StringBuilder(); + int t = 1 + random.nextInt(3); + 
buff.append("select "); + for (int j = 0; j < t; j++) { + if (j > 0) { + buff.append(", "); + } + buff.append("t" + j + ".x "); + } + buff.append("from "); + appendRandomJoin(random, buff, 0, t - 1); + appendRandomCondition(random, buff, t); + String sql = buff.toString(); + try { + execute(sql); + } catch (Throwable e) { + if (e instanceof SQLException) { + trace(sql); + fail(sql); + // SQLException se = (SQLException) e; + // System.out.println(se); + } + if (shortest == null || sql.length() < shortest.length()) { + shortest = sql; + shortestEx = e; + } + } + } + if (shortest != null) { + shortestEx.printStackTrace(); + fail(shortest + " " + shortestEx); + } + for (int i = 0; i < 4; i++) { + try { + execute("drop table t" + i); + } catch (Exception e) { + // ignore + } + } + for (Statement s : dbs) { + s.getConnection().close(); + } + deleteDerby(); + deleteDb("outerJoins"); + } + + private void deleteDerby() { + try { + new File("derby.log").delete(); + try { + DriverManager.getConnection("jdbc:derby:" + + getBaseDir() + "/derby/test;shutdown=true", "sa", "sa"); + } catch (Exception e) { + // ignore + } + FileUtils.deleteRecursive(getBaseDir() + "/derby", false); + } catch (Exception e) { + e.printStackTrace(); + // database not installed - ok + } + } + + private void appendRandomJoin(Random random, StringBuilder buff, int min, + int max) { + if (min == max) { + buff.append("t" + min); + return; + } + buff.append("("); + int m = min + random.nextInt(max - min); + int left = min + (m == min ? 0 : random.nextInt(m - min)); + appendRandomJoin(random, buff, min, m); + switch (random.nextInt(3)) { + case 0: + buff.append(" inner join "); + break; + case 1: + buff.append(" left outer join "); + break; + case 2: + buff.append(" right outer join "); + break; + } + m++; + int right = m + (m == max ? 
0 : random.nextInt(max - m)); + appendRandomJoin(random, buff, m, max); + buff.append(" on t" + left + ".x = t" + right + ".x "); + buff.append(")"); + } + + private static void appendRandomCondition(Random random, + StringBuilder buff, int max) { + if (max > 0 && random.nextInt(4) == 0) { + return; + } + buff.append(" where "); + int count = 1 + random.nextInt(3); + for (int i = 0; i < count; i++) { + if (i > 0) { + buff.append(random.nextBoolean() ? " and " : " or "); + } + buff.append("t" + random.nextInt(max) + ".x"); + switch (random.nextInt(8)) { + case 0: + buff.append("="); + appendRandomValueOrColumn(random, buff, max); + break; + case 1: + buff.append(">="); + appendRandomValueOrColumn(random, buff, max); + break; + case 2: + buff.append("<="); + appendRandomValueOrColumn(random, buff, max); + break; + case 3: + buff.append("<"); + appendRandomValueOrColumn(random, buff, max); + break; + case 4: + buff.append(">"); + appendRandomValueOrColumn(random, buff, max); + break; + case 5: + buff.append("<>"); + appendRandomValueOrColumn(random, buff, max); + break; + case 6: + buff.append(" is not null"); + break; + case 7: + buff.append(" is null"); + break; + } + } + } + + private static void appendRandomValueOrColumn(Random random, + StringBuilder buff, int max) { + if (random.nextBoolean()) { + buff.append(random.nextInt(8) - 2); + } else { + buff.append("t" + random.nextInt(max) + ".x"); + } + } + + private void executeAndLog(String sql) throws SQLException { + trace(sql + ";"); + execute(sql); + } + + private void execute(String sql) throws SQLException { + String expected = null; + SQLException e = null; + for (Statement s : dbs) { + try { + boolean result = s.execute(sql); + if (result) { + String data = getResult(s.getResultSet()); + if (expected == null) { + expected = data; + } else { + assertEquals(sql, expected, data); + } + } + } catch (AssertionError e2) { + e = new SQLException(e2.getMessage()); + } catch (SQLException e2) { + // ignore now, throw 
at the end + e = e2; + } + } + if (e != null) { + throw e; + } + } + + private static String getResult(ResultSet rs) throws SQLException { + ArrayList<String> list = New.arrayList(); + while (rs.next()) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) { + if (i > 0) { + buff.append(" "); + } + int x = rs.getInt(i + 1); + buff.append(rs.wasNull() ? "null" : x); + } + list.add(buff.toString()); + } + Collections.sort(list); + return list.toString(); + } + + private void testCases() throws Exception { + + Connection conn = getConnection("outerJoins"); + Statement stat = conn.createStatement(); + ResultSet rs; + String sql; + + /* + create table test(id int primary key); + explain select * from test a left outer join (test c) on a.id = c.id; + -- expected: uses the primary key index + */ + stat.execute("create table test(id int primary key)"); + rs = stat.executeQuery("explain select * from test a " + + "left outer join (test c) on a.id = c.id"); + assertTrue(rs.next()); + sql = rs.getString(1); + assertContains(sql, "PRIMARY_KEY"); + stat.execute("drop table test"); + + /* + create table t1(a int, b int); + create table t2(a int, b int); + create table t3(a int, b int); + create table t4(a int, b int); + insert into t1 values(1,1), (2,2), (3,3); + insert into t2 values(1,1), (2,2); + insert into t3 values(1,1), (3,3); + insert into t4 values(1,1), (2,2), (3,3), (4,4); + select distinct t1.a, t2.a, t3.a from t1 + right outer join t3 on t1.b=t3.a right outer join t2 on t2.b=t1.a; + drop table t1, t2, t3, t4; + */ + stat.execute("create table t1(a int, b int)"); + stat.execute("create table t2(a int, b int)"); + stat.execute("create table t3(a int, b int)"); + stat.execute("create table t4(a int, b int)"); + stat.execute("insert into t1 values(1,1), (2,2), (3,3)"); + stat.execute("insert into t2 values(1,1), (2,2)"); + stat.execute("insert into t3 values(1,1), (3,3)"); + stat.execute("insert into t4 values(1,1), (2,2), 
(3,3), (4,4)"); + rs = stat.executeQuery("explain select distinct t1.a, t2.a, t3.a from t1 " + + "right outer join t3 on t1.b=t3.a right outer join t2 on t2.b=t1.a"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT DISTINCT T1.A, T2.A, T3.A FROM PUBLIC.T2 " + + "LEFT OUTER JOIN ( PUBLIC.T3 " + + "LEFT OUTER JOIN PUBLIC.T1 ON T1.B = T3.A ) " + + "ON T2.B = T1.A", sql); + rs = stat.executeQuery("select distinct t1.a, t2.a, t3.a from t1 " + + "right outer join t3 on t1.b=t3.a right outer join t2 on t2.b=t1.a"); + // expected: 1 1 1; null 2 null + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("1", rs.getString(2)); + assertEquals("1", rs.getString(3)); + assertTrue(rs.next()); + assertEquals(null, rs.getString(1)); + assertEquals("2", rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + stat.execute("drop table t1, t2, t3, t4"); + + /* + create table a(x int); + create table b(x int); + create table c(x int); + insert into a values(1); + insert into b values(1); + insert into c values(1), (2); + select a.x, b.x, c.x from a inner join b on a.x = b.x + right outer join c on c.x = a.x; + drop table a, b, c; + */ + stat.execute("create table a(x int)"); + stat.execute("create table b(x int)"); + stat.execute("create table c(x int)"); + stat.execute("insert into a values(1)"); + stat.execute("insert into b values(1)"); + stat.execute("insert into c values(1), (2)"); + rs = stat.executeQuery("explain select a.x, b.x, c.x from a " + + "inner join b on a.x = b.x right outer join c on c.x = a.x"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.X, B.X, C.X FROM PUBLIC.C LEFT OUTER JOIN " + + "( PUBLIC.A INNER JOIN PUBLIC.B ON A.X = B.X ) ON C.X = A.X", sql); + rs = stat.executeQuery("select a.x, b.x, c.x from a " + + "inner join b on a.x = b.x " + + "right outer join c on c.x = a.x"); + // expected result: 1 1 1; null null 2 + 
assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("1", rs.getString(2)); + assertEquals("1", rs.getString(3)); + assertTrue(rs.next()); + assertEquals(null, rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals("2", rs.getString(3)); + assertFalse(rs.next()); + stat.execute("drop table a, b, c"); + + /* + drop table a, b, c; + create table a(x int); + create table b(x int); + create table c(x int, y int); + insert into a values(1), (2); + insert into b values(3); + insert into c values(1, 3); + insert into c values(4, 5); + explain select * from a left outer join + (b left outer join c on b.x = c.y) on a.x = c.x; + select * from a left outer join + (b left outer join c on b.x = c.y) on a.x = c.x; + */ + stat.execute("create table a(x int)"); + stat.execute("create table b(x int)"); + stat.execute("create table c(x int, y int)"); + stat.execute("insert into a values(1), (2)"); + stat.execute("insert into b values(3)"); + stat.execute("insert into c values(1, 3)"); + stat.execute("insert into c values(4, 5)"); + rs = stat.executeQuery("explain select * from a " + + "left outer join (b " + + "left outer join c " + + "on b.x = c.y) " + + "on a.x = c.x"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.X, B.X, C.X, C.Y FROM PUBLIC.A " + + "LEFT OUTER JOIN ( PUBLIC.B " + + "LEFT OUTER JOIN PUBLIC.C " + + "ON B.X = C.Y ) " + + "ON A.X = C.X", sql); + rs = stat.executeQuery("select * from a " + + "left outer join (b " + + "left outer join c " + + "on b.x = c.y) " + + "on a.x = c.x"); + // expected result: 1 3 1 3; 2 null null null + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("3", rs.getString(2)); + assertEquals("1", rs.getString(3)); + assertEquals("3", rs.getString(4)); + assertTrue(rs.next()); + assertEquals("2", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertEquals(null, rs.getString(4)); + 
assertFalse(rs.next()); + stat.execute("drop table a, b, c"); + + stat.execute("create table a(x int primary key)"); + stat.execute("insert into a values(0), (1)"); + stat.execute("create table b(x int primary key)"); + stat.execute("insert into b values(0)"); + stat.execute("create table c(x int primary key)"); + rs = stat.executeQuery("select a.*, b.*, c.* from a " + + "left outer join (b " + + "inner join c " + + "on b.x = c.x) " + + "on a.x = b.x"); + // expected result: 0, null, null; 1, null, null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + rs = stat.executeQuery("select * from a " + + "left outer join b on a.x = b.x " + + "inner join c on b.x = c.x"); + // expected result: - + assertFalse(rs.next()); + rs = stat.executeQuery("select * from a " + + "left outer join b on a.x = b.x " + + "left outer join c on b.x = c.x"); + // expected result: 0 0 null; 1 null null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals("0", rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + + rs = stat.executeQuery("select * from a " + + "left outer join (b " + + "inner join c on b.x = c.x) on a.x = b.x"); + // expected result: 0 null null; 1 null null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + rs = stat.executeQuery("explain select * from a " + + "left outer 
join (b " + + "inner join c on c.x = 1) on a.x = b.x"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.X, B.X, C.X FROM PUBLIC.A " + + "LEFT OUTER JOIN ( PUBLIC.B " + + "INNER JOIN PUBLIC.C ON C.X = 1 ) ON A.X = B.X", sql); + stat.execute("drop table a, b, c"); + + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test values(0), (1), (2)"); + rs = stat.executeQuery("select * from test a " + + "left outer join (test b " + + "inner join test c on b.id = c.id - 2) on a.id = b.id + 1"); + // drop table test; + // create table test(id int primary key); + // insert into test values(0), (1), (2); + // select * from test a left outer join + // (test b inner join test c on b.id = c.id - 2) on a.id = b.id + 1; + // expected result: 0 null null; 1 0 2; 2 null null + assertTrue(rs.next()); + assertEquals("0", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("0", rs.getString(2)); + assertEquals("2", rs.getString(3)); + assertTrue(rs.next()); + assertEquals("2", rs.getString(1)); + assertEquals(null, rs.getString(2)); + assertEquals(null, rs.getString(3)); + assertFalse(rs.next()); + stat.execute("drop table test"); + + stat.execute("create table a(pk int, val varchar(255))"); + stat.execute("create table b(pk int, val varchar(255))"); + stat.execute("create table base(pk int, deleted int)"); + stat.execute("insert into base values(1, 0)"); + stat.execute("insert into base values(2, 1)"); + stat.execute("insert into base values(3, 0)"); + stat.execute("insert into a values(1, 'a')"); + stat.execute("insert into b values(2, 'a')"); + stat.execute("insert into b values(3, 'a')"); + rs = stat.executeQuery("explain select a.pk, a_base.pk, b.pk, b_base.pk " + + "from a " + + "inner join base a_base on a.pk = a_base.pk " + + "left outer join (b inner join base b_base " + + "on b.pk = 
b_base.pk and b_base.deleted = 0) on 1=1"); + assertTrue(rs.next()); + sql = cleanRemarks(rs.getString(1)); + assertEquals("SELECT A.PK, A_BASE.PK, B.PK, B_BASE.PK " + + "FROM PUBLIC.BASE A_BASE " + + "LEFT OUTER JOIN ( PUBLIC.B " + + "INNER JOIN PUBLIC.BASE B_BASE " + + "ON (B_BASE.DELETED = 0) AND (B.PK = B_BASE.PK) ) " + + "ON TRUE INNER JOIN PUBLIC.A ON 1=1 WHERE A.PK = A_BASE.PK", sql); + rs = stat.executeQuery("select a.pk, a_base.pk, b.pk, b_base.pk from a " + + "inner join base a_base on a.pk = a_base.pk " + + "left outer join (b inner join base b_base " + + "on b.pk = b_base.pk and b_base.deleted = 0) on 1=1"); + // expected: 1 1 3 3 + assertTrue(rs.next()); + assertEquals("1", rs.getString(1)); + assertEquals("1", rs.getString(2)); + assertEquals("3", rs.getString(3)); + assertEquals("3", rs.getString(4)); + assertFalse(rs.next()); + stat.execute("drop table a, b, base"); + + // while (rs.next()) { + // for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) { + // System.out.print(rs.getString(i + 1) + " "); + // } + // System.out.println(); + // } + + conn.close(); + deleteDb("outerJoins"); + } + + private static String cleanRemarks(String sql) { + ScriptReader r = new ScriptReader(new StringReader(sql)); + r.setSkipRemarks(true); + sql = r.readStatement(); + sql = sql.replaceAll("\\n", " "); + while (sql.contains("  ")) { + sql = sql.replaceAll("  ", " "); + } + return sql; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestPowerOffFs.java b/modules/h2/src/test/java/org/h2/test/synth/TestPowerOffFs.java new file mode 100644 index 0000000000000..895958f74b0fe --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestPowerOffFs.java @@ -0,0 +1,101 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.ErrorCode; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.FilePathDebug; + +/** + * Tests that use the debug file system to simulate power failure. + */ +public class TestPowerOffFs extends TestBase { + + private FilePathDebug fs; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + fs = FilePathDebug.register(); + test(Integer.MAX_VALUE); + System.out.println(Integer.MAX_VALUE - fs.getPowerOffCount()); + System.out.println("done"); + for (int i = 0;; i++) { + boolean end = test(i); + if (end) { + break; + } + } + deleteDb("memFS:", null); + } + + private boolean test(int x) throws SQLException { + deleteDb("memFS:", null); + fs.setPowerOffCount(x); + String url = "jdbc:h2:debug:memFS:powerOffFs;" + + "FILE_LOCK=NO;TRACE_LEVEL_FILE=0;" + + "WRITE_DELAY=0;CACHE_SIZE=4096"; + Connection conn = null; + Statement stat = null; + try { + conn = DriverManager.getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("create index idx_name on test(name)"); + stat.execute("insert into test values(2, 'World')"); + stat.execute("update test set name='Hallo' where id=1"); + stat.execute("delete from test where name=2"); + stat.execute("insert into test values(3, space(10000))"); + stat.execute("update test set name='Hallo' where id=3"); + stat.execute("drop table test"); + conn.close(); + conn = null; + return fs.getPowerOffCount() > 0; + } catch (SQLException e) { + if (e.getErrorCode() == ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1) { 
+ throw e; + } + // ignore + } finally { + if (conn != null) { + try { + if (stat != null) { + stat.execute("shutdown immediately"); + } + } catch (Exception e2) { + // ignore + } + try { + conn.close(); + } catch (Exception e2) { + // ignore + } + } + } + fs.setPowerOffCount(0); + conn = DriverManager.getConnection(url); + stat = conn.createStatement(); + stat.execute("script to 'memFS:test.sql'"); + conn.close(); + FileUtils.delete("memFS:test.sql"); + return false; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestPowerOffFs2.java b/modules/h2/src/test/java/org/h2/test/synth/TestPowerOffFs2.java new file mode 100644 index 0000000000000..e75daf73352e6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestPowerOffFs2.java @@ -0,0 +1,229 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.test.utils.FilePathDebug; +import org.h2.util.New; + +/** + * Tests that use the debug file system to simulate power failure. + * This test runs many random operations and stops after some time. + */ +public class TestPowerOffFs2 extends TestBase { + + private static final String USER = "sa"; + private static final String PASSWORD = "sa"; + + private FilePathDebug fs; + + private String url; + private final ArrayList<Connection> connections = New.arrayList(); + private final ArrayList<String> tables = New.arrayList(); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + fs = FilePathDebug.register(); + url = "jdbc:h2:debug:memFS:powerOffFs;FILE_LOCK=NO;" + + "TRACE_LEVEL_FILE=0;WRITE_DELAY=0;CACHE_SIZE=32"; + for (int i = 0;; i++) { + test(i); + } + } + + private void test(int x) throws SQLException { + System.out.println("x:" + x); + deleteDb("memFS:", null); + try { + testCrash(x); + fail(); + } catch (SQLException e) { + if (e.toString().indexOf("Simulated") < 0) { + throw e; + } + for (Connection c : connections) { + try { + Statement stat = c.createStatement(); + stat.execute("shutdown immediately"); + } catch (Exception e2) { + // ignore + } + try { + c.close(); + } catch (Exception e2) { + // ignore + } + } + } + fs.setPowerOffCount(0); + Connection conn; + conn = openConnection(); + testConsistent(conn); + conn.close(); + } + + private void testCrash(int x) throws SQLException { + connections.clear(); + tables.clear(); + Random random = new Random(x); + for (int i = 0;; i++) { + if (i > 200 && connections.size() > 1 && tables.size() > 1) { + fs.setPowerOffCount(100); + } + if (connections.size() < 1) { + openConnection(); + } + if (tables.size() < 1) { + createTable(random); + } + int p = random.nextInt(100); + if ((p -= 2) <= 0) { + // 2%: open new connection + if (connections.size() < 5) { + openConnection(); + } + } else if ((p -= 1) <= 0) { + // 1%: close connection + if (connections.size() > 1) { + Connection conn = connections.remove( + random.nextInt(connections.size())); + conn.close(); + } + } else if ((p -= 10) <= 0) { + // 10% create table + createTable(random); + } else if ((p -= 20) <= 0) { + // 20% large insert, delete, or update + if (tables.size() > 0) { + Connection conn = connections.get( + random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + if (random.nextBoolean()) { + // 10% insert + 
stat.execute("INSERT INTO " + table + + "(NAME) SELECT 'Hello ' || X FROM SYSTEM_RANGE(0, 20)"); + } else if (random.nextBoolean()) { + // 5% update + stat.execute("UPDATE " + table + " SET NAME='Hallo Welt'"); + } else { + // 5% delete + stat.execute("DELETE FROM " + table); + } + } + } else if ((p -= 5) < 0) { + // 5% truncate or drop table + if (tables.size() > 0) { + Connection conn = connections.get( + random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + if (random.nextBoolean()) { + stat.execute("TRUNCATE TABLE " + table); + } else { + stat.execute("DROP TABLE " + table); + tables.remove(table); + } + } + } else if ((p -= 30) <= 0) { + // 30% insert + if (tables.size() > 0) { + Connection conn = connections.get( + random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + int spaces = random.nextInt(4) * 30; + if (random.nextInt(15) == 2) { + spaces *= 100; + } + int name = random.nextInt(20); + stat.execute("INSERT INTO " + table + + "(NAME) VALUES('" + name + "' || space( " + spaces + " ))"); + } + } else { + // 32% delete + if (tables.size() > 0) { + Connection conn = connections.get(random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = tables.get(random.nextInt(tables.size())); + stat.execute("DELETE FROM " + table + + " WHERE ID = SELECT MIN(ID) FROM " + table); + } + } + } + } + + private Connection openConnection() throws SQLException { + Connection conn = DriverManager.getConnection(url, USER, PASSWORD); + connections.add(conn); + return conn; + } + + private void createTable(Random random) throws SQLException { + Connection conn = connections.get(random.nextInt(connections.size())); + Statement stat = conn.createStatement(); + String table = "TEST" + random.nextInt(10); + try { + stat.execute("CREATE TABLE " + table + "(ID IDENTITY, NAME 
VARCHAR)");
+            if (random.nextBoolean()) {
+                stat.execute("CREATE INDEX IDX_" + table + " ON " + table + "(NAME)");
+            }
+            tables.add(table);
+        } catch (SQLException e) {
+            if (e.getErrorCode() == ErrorCode.TABLE_OR_VIEW_ALREADY_EXISTS_1) {
+                if (!tables.contains(table)) {
+                    tables.add(table);
+                }
+                // ok
+            } else {
+                throw e;
+            }
+        }
+    }
+
+    private static void testConsistent(Connection conn) throws SQLException {
+        for (int i = 0; i < 20; i++) {
+            Statement stat = conn.createStatement();
+            try {
+                ResultSet rs = stat.executeQuery("SELECT * FROM TEST" + i);
+                while (rs.next()) {
+                    rs.getLong("ID");
+                    rs.getString("NAME");
+                }
+                rs = stat.executeQuery("SELECT * FROM TEST" + i + " ORDER BY ID");
+                while (rs.next()) {
+                    rs.getLong("ID");
+                    rs.getString("NAME");
+                }
+            } catch (SQLException e) {
+                if (e.getErrorCode() == ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1) {
+                    // ok
+                } else {
+                    throw e;
+                }
+            }
+        }
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestRandomCompare.java b/modules/h2/src/test/java/org/h2/test/synth/TestRandomCompare.java
new file mode 100644
index 0000000000000..c294aa3231a57
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/synth/TestRandomCompare.java
@@ -0,0 +1,307 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.synth;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Random;
+import org.h2.test.TestBase;
+import org.h2.util.New;
+
+/**
+ * Tests random compare operations.
+ */
+public class TestRandomCompare extends TestBase {
+
+    private final ArrayList<Statement> dbs = New.arrayList();
+    private int aliasId;
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String...
a) throws Exception { + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws Exception { + deleteDb("randomCompare"); + testCases(); + testRandom(); + deleteDb("randomCompare"); + } + + private void testRandom() throws Exception { + Connection conn = getConnection("randomCompare"); + dbs.add(conn.createStatement()); + + try { + Class.forName("org.postgresql.Driver"); + Connection c2 = DriverManager.getConnection( + "jdbc:postgresql:test?loggerLevel=OFF", "sa", "sa"); + dbs.add(c2.createStatement()); + } catch (Exception e) { + // database not installed - ok + } + + String shortest = null; + Throwable shortestEx = null; + /* + drop table test; + create table test(x0 int, x1 int); + create index idx_test_x0 on test(x0); + insert into test values(null, null); + insert into test values(null, 1); + insert into test values(null, 2); + insert into test values(1, null); + insert into test values(1, 1); + insert into test values(1, 2); + insert into test values(2, null); + insert into test values(2, 1); + insert into test values(2, 2); + */ + try { + execute("drop table test"); + } catch (Exception e) { + // ignore + } + try { + execute("drop table test cascade"); + } catch (Exception e) { + // ignore + } + String sql = "create table test(x0 int, x1 int)"; + trace(sql + ";"); + execute(sql); + sql = "create index idx_test_x0 on test(x0)"; + trace(sql + ";"); + execute(sql); + for (int x0 = 0; x0 < 3; x0++) { + for (int x1 = 0; x1 < 3; x1++) { + sql = "insert into test values(" + (x0 == 0 ? "null" : x0) + + ", " + (x1 == 0 ? 
"null" : x1) + ")"; + trace(sql + ";"); + execute(sql); + } + } + Random random = new Random(1); + for (int i = 0; i < 1000; i++) { + StringBuilder buff = new StringBuilder(); + appendRandomCompare(random, buff); + sql = buff.toString(); + try { + execute(sql); + } catch (Throwable e) { + if (e instanceof SQLException) { + trace(sql); + fail(sql); + // SQLException se = (SQLException) e; + // System.out.println(se); + // System.out.println(" " + sql); + } + if (shortest == null || sql.length() < shortest.length()) { + shortest = sql; + shortestEx = e; + } + } + } + if (shortest != null) { + shortestEx.printStackTrace(); + fail(shortest + " " + shortestEx); + } + for (int i = 0; i < 10; i++) { + try { + execute("drop table t" + i); + } catch (Exception e) { + // ignore + } + } + for (Statement s : dbs) { + s.getConnection().close(); + } + deleteDb("randomCompare"); + } + + private void appendRandomCompare(Random random, StringBuilder buff) { + buff.append("select * from "); + int alias = aliasId++; + if (random.nextBoolean()) { + buff.append("("); + appendRandomCompare(random, buff); + buff.append(")"); + } else { + buff.append("test"); + } + buff.append(" as t").append(alias); + if (random.nextInt(10) == 0) { + return; + } + buff.append(" where "); + int count = 1 + random.nextInt(3); + for (int i = 0; i < count; i++) { + if (i > 0) { + buff.append(random.nextBoolean() ? 
" or " : " and ");
+            }
+            if (random.nextInt(10) == 0) {
+                buff.append("not ");
+            }
+            appendRandomValue(random, buff);
+            switch (random.nextInt(8)) {
+            case 0:
+                buff.append("=");
+                appendRandomValue(random, buff);
+                break;
+            case 1:
+                buff.append("<");
+                appendRandomValue(random, buff);
+                break;
+            case 2:
+                buff.append(">");
+                appendRandomValue(random, buff);
+                break;
+            case 3:
+                buff.append("<=");
+                appendRandomValue(random, buff);
+                break;
+            case 4:
+                buff.append(">=");
+                appendRandomValue(random, buff);
+                break;
+            case 5:
+                buff.append("<>");
+                appendRandomValue(random, buff);
+                break;
+            case 6:
+                buff.append(" is distinct from ");
+                appendRandomValue(random, buff);
+                break;
+            case 7:
+                buff.append(" is not distinct from ");
+                appendRandomValue(random, buff);
+                break;
+            }
+        }
+    }
+
+    private static void appendRandomValue(Random random, StringBuilder buff) {
+        switch (random.nextInt(7)) {
+        case 0:
+            buff.append("null");
+            break;
+        case 1:
+            buff.append(1);
+            break;
+        case 2:
+            buff.append(2);
+            break;
+        case 3:
+            buff.append(3);
+            break;
+        case 4:
+            buff.append(-1);
+            break;
+        case 5:
+            buff.append("x0");
+            break;
+        case 6:
+            buff.append("x1");
+            break;
+        }
+    }
+
+    private void execute(String sql) throws SQLException {
+        String expected = null;
+        SQLException e = null;
+        for (Statement s : dbs) {
+            try {
+                boolean result = s.execute(sql);
+                if (result) {
+                    String data = getResult(s.getResultSet());
+                    if (expected == null) {
+                        expected = data;
+                    } else {
+                        assertEquals(sql, expected, data);
+                    }
+                }
+            } catch (SQLException e2) {
+                // ignore now, throw at the end
+                e = e2;
+            }
+        }
+        if (e != null) {
+            throw e;
+        }
+    }
+
+    private static String getResult(ResultSet rs) throws SQLException {
+        ArrayList<String> list = New.arrayList();
+        while (rs.next()) {
+            StringBuilder buff = new StringBuilder();
+            for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) {
+                if (i > 0) {
+                    buff.append(" ");
+                }
+                buff.append(rs.getString(i + 1));
+            }
+            list.add(buff.toString());
+        }
+
Collections.sort(list); + return list.toString(); + } + + private void testCases() throws Exception { + + Connection conn = getConnection("randomCompare"); + Statement stat = conn.createStatement(); + ResultSet rs; + + /* + create table test(x int); + insert into test values(null); + select * from (select x from test + union all select x from test) where x is null; + select * from (select x from test) where x is null; + */ + stat.execute("create table test(x int)"); + stat.execute("insert into test values(null)"); + rs = stat.executeQuery("select * from (select x from test " + + "union all select x from test) where x is null"); + assertTrue(rs.next()); + rs = stat.executeQuery( + "select * from (select x from test) where x is null"); + assertTrue(rs.next()); + rs = stat.executeQuery("select * from (select x from test " + + "union all select x from test) where x is null"); + assertTrue(rs.next()); + assertTrue(rs.next()); + + Connection conn2 = DriverManager.getConnection("jdbc:h2:mem:temp"); + conn2.createStatement().execute("create table test(x int) as select null"); + stat.execute("drop table test"); + stat.execute("create linked table test" + + "(null, 'jdbc:h2:mem:temp', null, null, 'TEST')"); + rs = stat.executeQuery("select * from (select x from test) where x is null"); + assertTrue(rs.next()); + rs = stat.executeQuery("select * from (select x from test " + + "union all select x from test) where x is null"); + assertTrue(rs.next()); + assertTrue(rs.next()); + conn2.close(); + + conn.close(); + deleteDb("randomCompare"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestRandomSQL.java b/modules/h2/src/test/java/org/h2/test/synth/TestRandomSQL.java new file mode 100644 index 0000000000000..65a02f1661dbe --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestRandomSQL.java @@ -0,0 +1,117 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.engine.SysProperties; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.MathUtils; + +/** + * This test executes random SQL statements generated using the BNF tool. + */ +public class TestRandomSQL extends TestBase { + + private int success, total; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.networked) { + return; + } + config.memory = true; + int len = getSize(2, 6); + for (int a = 0; a < len; a++) { + int s = MathUtils.randomInt(Integer.MAX_VALUE); + testCase(s); + } + } + + private void testWithSeed(int seed) throws Exception { + Connection conn = null; + try { + conn = getConnection(getDatabaseName(seed)); + } catch (SQLException e) { + if (e.getSQLState().equals("HY000")) { + TestBase.logError("new TestRandomSQL().init(test).testCase(" + seed + "); " + + "// FAIL: " + e.toString() + " sql: " + "connect", e); + } + conn = getConnection(getDatabaseName(seed)); + } + Statement stat = conn.createStatement(); + + BnfRandom bnfRandom = new BnfRandom(); + bnfRandom.setSeed(seed); + for (int i = 0; i < bnfRandom.getStatementCount(); i++) { + String sql = bnfRandom.getRandomSQL(); + if (sql != null) { + try { + Thread.yield(); + total++; + if (total % 100 == 0) { + printTime("total: " + total + " success: " + + (100 * success / total) + "%"); + } + stat.execute(sql); + success++; + } catch (SQLException e) { + if (e.getSQLState().equals("HY000")) { + TestBase.logError( + "new TestRandomSQL().init(test).testCase(" + + seed + "); " + "// FAIL: " + + e.toString() + " sql: " + sql, e); + } + } + } + } + try { + conn.close(); + conn = getConnection(getDatabaseName(seed)); + 
conn.createStatement().execute("shutdown immediately"); + conn.close(); + } catch (SQLException e) { + if (e.getSQLState().equals("HY000")) { + TestBase.logError("new TestRandomSQL().init(test).testCase(" + seed + "); " + + "// FAIL: " + e.toString() + " sql: " + "conn.close", e); + } + } + } + + private void testCase(int seed) throws Exception { + String old = SysProperties.getScriptDirectory(); + try { + System.setProperty(SysProperties.H2_SCRIPT_DIRECTORY, + getBaseDir() + "/" + getTestName()); + printTime("seed: " + seed); + deleteDb(seed); + testWithSeed(seed); + } finally { + System.setProperty(SysProperties.H2_SCRIPT_DIRECTORY, old); + } + deleteDb(seed); + } + + private String getDatabaseName(int seed) { + return getTestName() + "/db" + seed; + } + + private void deleteDb(int seed) { + FileUtils.delete(getDatabaseName(seed)); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestReleaseSelectLock.java b/modules/h2/src/test/java/org/h2/test/synth/TestReleaseSelectLock.java new file mode 100644 index 0000000000000..fc1d53d525ff1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestReleaseSelectLock.java @@ -0,0 +1,85 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import org.h2.test.TestBase; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import java.util.concurrent.CountDownLatch; + +/** + * Tests lock releasing for concurrent select statements + */ +public class TestReleaseSelectLock extends TestBase { + + private static final String TEST_DB_NAME = "releaseSelectLock"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + config.mvStore = false; + config.mvcc = false; + config.multiThreaded = true; + + deleteDb(TEST_DB_NAME); + + Connection conn = getConnection(TEST_DB_NAME); + final Statement statement = conn.createStatement(); + statement.execute("create table test(id int primary key)"); + + runConcurrentSelects(); + + // check that all locks have been released by dropping the test table + statement.execute("drop table test"); + + statement.close(); + conn.close(); + deleteDb(TEST_DB_NAME); + } + + private void runConcurrentSelects() throws InterruptedException { + int tryCount = 500; + int threadsCount = getSize(2, 4); + for (int tryNumber = 0; tryNumber < tryCount; tryNumber++) { + final CountDownLatch allFinished = new CountDownLatch(threadsCount); + + for (int i = 0; i < threadsCount; i++) { + new Thread(new Runnable() { + @Override + public void run() { + try { + Connection conn = getConnection(TEST_DB_NAME); + PreparedStatement stmt = conn.prepareStatement("select id from test"); + ResultSet rs = stmt.executeQuery(); + while (rs.next()) { + rs.getInt(1); + } + stmt.close(); + conn.close(); + } catch (Exception e) { + throw new RuntimeException(e); + } finally { + allFinished.countDown(); + } + } + }).start(); + } + + allFinished.await(); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestSimpleIndex.java b/modules/h2/src/test/java/org/h2/test/synth/TestSimpleIndex.java new file mode 100644 index 0000000000000..13f385b5a5c5f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestSimpleIndex.java @@ -0,0 +1,176 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.test.TestBase; +import org.h2.test.synth.sql.RandomGen; + +/** + * A test that runs random operations against a table to test the various index + * implementations. + */ +public class TestSimpleIndex extends TestBase { + + private Connection conn; + private Statement stat; + private RandomGen random; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb("simpleIndex"); + conn = getConnection("simpleIndex"); + random = new RandomGen(); + stat = conn.createStatement(); + for (int i = 0; i < 10000; i++) { + testIndex(i); + } + } + + private void testIndex(int seed) throws SQLException { + random.setSeed(seed); + String unique = random.nextBoolean() ? 
"UNIQUE " : "";
+        int len = random.getInt(2) + 1;
+        StringBuilder buff = new StringBuilder();
+        for (int i = 0; i < len; i++) {
+            if (i > 0) {
+                buff.append(", ");
+            }
+            buff.append((char) ('A' + random.getInt(3)));
+        }
+        String cols = buff.toString();
+        execute("CREATE MEMORY TABLE TEST_M(A INT, B INT, C INT, DATA VARCHAR(255))");
+        execute("CREATE CACHED TABLE TEST_D(A INT, B INT, C INT, DATA VARCHAR(255))");
+        execute("CREATE MEMORY TABLE TEST_MI(A INT, B INT, C INT, DATA VARCHAR(255))");
+        execute("CREATE CACHED TABLE TEST_DI(A INT, B INT, C INT, DATA VARCHAR(255))");
+        execute("CREATE " + unique + "INDEX M ON TEST_MI(" + cols + ")");
+        execute("CREATE " + unique + "INDEX D ON TEST_DI(" + cols + ")");
+        for (int i = 0; i < 100; i++) {
+            println("i=" + i);
+            testRows();
+        }
+        execute("DROP INDEX M");
+        execute("DROP INDEX D");
+        execute("DROP TABLE TEST_M");
+        execute("DROP TABLE TEST_D");
+        execute("DROP TABLE TEST_MI");
+        execute("DROP TABLE TEST_DI");
+    }
+
+    private void testRows() throws SQLException {
+        String a = randomValue(), b = randomValue(), c = randomValue();
+        String data = a + "/" + b + "/" + c;
+        String sql = "VALUES(" + a + ", " + b + ", " + c + ", '" + data + "')";
+        boolean em, ed;
+        // if(id==73) {
+        // print("halt");
+        // }
+        try {
+            execute("INSERT INTO TEST_MI " + sql);
+            em = false;
+        } catch (SQLException e) {
+            em = true;
+        }
+        try {
+            execute("INSERT INTO TEST_DI " + sql);
+            ed = false;
+        } catch (SQLException e) {
+            ed = true;
+        }
+        if (em != ed) {
+            fail("different result: ");
+        }
+        if (!em) {
+            execute("INSERT INTO TEST_M " + sql);
+            execute("INSERT INTO TEST_D " + sql);
+        }
+        StringBuilder buff = new StringBuilder("WHERE 1=1");
+        int len = random.getLog(10);
+        for (int i = 0; i < len; i++) {
+            buff.append(" AND ");
+            buff.append((char) ('A' + random.getInt(3)));
+            switch (random.getInt(10)) {
+            case 0:
+                buff.append("<");
+                buff.append(random.getInt(100) - 50);
+                break;
+            case 1:
+                buff.append("<=");
+                buff.append(random.getInt(100) - 50);
+ break; + case 2: + buff.append(">"); + buff.append(random.getInt(100) - 50); + break; + case 3: + buff.append(">="); + buff.append(random.getInt(100) - 50); + break; + case 4: + buff.append("<>"); + buff.append(random.getInt(100) - 50); + break; + case 5: + buff.append(" IS NULL"); + break; + case 6: + buff.append(" IS NOT NULL"); + break; + default: + buff.append("="); + buff.append(random.getInt(100) - 50); + } + } + String where = buff.toString(); + String r1 = getResult("SELECT DATA FROM TEST_M " + where + " ORDER BY DATA"); + String r2 = getResult("SELECT DATA FROM TEST_D " + where + " ORDER BY DATA"); + String r3 = getResult("SELECT DATA FROM TEST_MI " + where + " ORDER BY DATA"); + String r4 = getResult("SELECT DATA FROM TEST_DI " + where + " ORDER BY DATA"); + assertEquals(r1, r2); + assertEquals(r1, r3); + assertEquals(r1, r4); + } + + private String getResult(String sql) throws SQLException { + ResultSet rs = stat.executeQuery(sql); + StringBuilder buff = new StringBuilder(); + while (rs.next()) { + buff.append(rs.getString(1)); + buff.append("; "); + } + rs.close(); + return buff.toString(); + } + + private String randomValue() { + return random.getInt(10) == 0 ? "NULL" : "" + (random.getInt(100) - 50); + } + + private void execute(String sql) throws SQLException { + try { + println(sql + ";"); + stat.execute(sql); + println("> update count: 1"); + } catch (SQLException e) { + println("> exception"); + throw e; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestStringAggCompatibility.java b/modules/h2/src/test/java/org/h2/test/synth/TestStringAggCompatibility.java new file mode 100644 index 0000000000000..624d717b8ac30 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestStringAggCompatibility.java @@ -0,0 +1,78 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import org.h2.test.TestBase; + +/** + * Test for check compatibility with PostgreSQL function string_agg() + */ +public class TestStringAggCompatibility extends TestBase { + + private Connection conn; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb(getTestName()); + conn = getConnection(getTestName()); + prepareDb(); + testWhenOrderByMissing(); + testWithOrderBy(); + conn.close(); + } + + private void testWithOrderBy() throws SQLException { + ResultSet result = query( + "select string_agg(b, ', ' order by b desc) from stringAgg group by a; "); + + assertTrue(result.next()); + assertEquals("3, 2, 1", result.getString(1)); + } + + private void testWhenOrderByMissing() throws SQLException { + ResultSet result = query("select string_agg(b, ', ') from stringAgg group by a; "); + + assertTrue(result.next()); + assertEquals("1, 2, 3", result.getString(1)); + } + + private ResultSet query(String q) throws SQLException { + PreparedStatement st = conn.prepareStatement(q); + + st.execute(); + + return st.getResultSet(); + } + + private void prepareDb() throws SQLException { + exec("create table stringAgg(\n" + + " a int not null,\n" + + " b varchar(50) not null\n" + + ");"); + + exec("insert into stringAgg values(1, '1')"); + exec("insert into stringAgg values(1, '2')"); + exec("insert into stringAgg values(1, '3')"); + + } + + private void exec(String sql) throws SQLException { + conn.prepareStatement(sql).execute(); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestThreads.java b/modules/h2/src/test/java/org/h2/test/synth/TestThreads.java new file mode 100644 index 
0000000000000..623bc1f81100c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestThreads.java @@ -0,0 +1,174 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.test.TestBase; + +/** + * This test starts multiple threads and executes random operations in each + * thread. + */ +public class TestThreads extends TestBase implements Runnable { + + private static final int INSERT = 0, UPDATE = 1, DELETE = 2; + private static final int SELECT_ONE = 3, SELECT_ALL = 4; + private static final int CHECKPOINT = 5, RECONNECT = 6; + private static final int OP_TYPES = RECONNECT + 1; + + private int maxId = 1; + + private volatile boolean stop; + private TestThreads master; + private int type; + private String table; + private final Random random = new Random(); + + public TestThreads() { + // nothing to do + } + + TestThreads(TestThreads master, int type, String table) { + this.master = master; + this.type = type; + this.table = table; + } + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("threads"); + Connection conn = getConnection("threads;MAX_LOG_SIZE=1"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST_A(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("CREATE TABLE TEST_B(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("CREATE TABLE TEST_C(ID INT PRIMARY KEY, NAME VARCHAR)"); + int len = 1000; + insertRows(conn, "TEST_A", len); + insertRows(conn, "TEST_B", len); + insertRows(conn, "TEST_C", len); + maxId = len; + int threadCount = 4; + Thread[] threads = new Thread[threadCount]; + for (int i = 0; i < threadCount; i++) { + String t = random.nextBoolean() ? null : getRandomTable(); + int op = random.nextInt(OP_TYPES); + op = i % 2 == 0 ? RECONNECT : CHECKPOINT; + threads[i] = new Thread(new TestThreads(this, op, t)); + } + for (int i = 0; i < threadCount; i++) { + threads[i].start(); + } + Thread.sleep(10000); + stop = true; + for (int i = 0; i < threadCount; i++) { + threads[i].join(); + } + conn.close(); + conn = getConnection("threads"); + checkTable(conn, "TEST_A"); + checkTable(conn, "TEST_B"); + checkTable(conn, "TEST_C"); + conn.close(); + } + + private static void insertRows(Connection conn, String tableName, int len) + throws SQLException { + PreparedStatement prep = conn.prepareStatement("INSERT INTO " + + tableName + " VALUES(?, 'Hi')"); + for (int i = 0; i < len; i++) { + prep.setInt(1, i); + prep.execute(); + } + } + + private static void checkTable(Connection conn, String tableName) + throws SQLException { + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM " + tableName + " ORDER BY ID"); + while (rs.next()) { + int id = rs.getInt(1); + String name = rs.getString(2); + System.out.println("id=" + id + " name=" + name); + } + } + + private int getMaxId() { + return maxId; + } + + private synchronized int incrementMaxId() { + 
return maxId++; + } + + private String getRandomTable() { + return "TEST_" + (char) ('A' + random.nextInt(3)); + } + + @Override + public void run() { + try { + String t = table == null ? getRandomTable() : table; + Connection conn = master.getConnection("threads"); + Statement stat = conn.createStatement(); + ResultSet rs; + int max = master.getMaxId(); + int rid = random.nextInt(max); + while (!master.stop) { + switch (type) { + case INSERT: + max = master.incrementMaxId(); + stat.execute("INSERT INTO " + t + "(ID, NAME) VALUES(" + max + ", 'Hello')"); + break; + case UPDATE: + stat.execute("UPDATE " + t + " SET NAME='World " + rid + "' WHERE ID=" + rid); + break; + case DELETE: + stat.execute("DELETE FROM " + t + " WHERE ID=" + rid); + break; + case SELECT_ALL: + rs = stat.executeQuery("SELECT * FROM " + t + " ORDER BY ID"); + while (rs.next()) { + // nothing + } + break; + case SELECT_ONE: + rs = stat.executeQuery("SELECT * FROM " + t + " WHERE ID=" + rid); + while (rs.next()) { + // nothing + } + break; + case CHECKPOINT: + stat.execute("CHECKPOINT"); + break; + case RECONNECT: + conn.close(); + conn = master.getConnection("threads"); + break; + default: + } + } + conn.close(); + } catch (Exception e) { + TestBase.logError("error", e); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/TestTimer.java b/modules/h2/src/test/java/org/h2/test/synth/TestTimer.java new file mode 100644 index 0000000000000..3164fbc8cb859 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/TestTimer.java @@ -0,0 +1,144 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth; + +import java.io.File; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.test.TestBase; +import org.h2.tools.Backup; +import org.h2.tools.DeleteDbFiles; + +/** + * A recovery test that checks the consistency of a database (if it exists), + * then deletes everything and runs in an endless loop executing random + * operations. This loop is usually stopped by switching off the computer. + */ +public class TestTimer extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + validateOld(); + DeleteDbFiles.execute(getBaseDir(), "timer", true); + loop(); + } + + private void loop() throws SQLException { + println("loop"); + Connection conn = getConnection("timer"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID IDENTITY, NAME VARCHAR)"); + Random random = new Random(); + int max = 0; + int count = 0; + long startTime = System.nanoTime(); + while (true) { + int action = random.nextInt(10); + int x = max == 0 ? 
0 : random.nextInt(max); + switch (action) { + case 0: + case 1: + case 2: + stat.execute("INSERT INTO TEST VALUES(NULL, 'Hello')"); + ResultSet rs = stat.getGeneratedKeys(); + rs.next(); + int i = rs.getInt(1); + max = i; + count++; + break; + case 3: + case 4: + if (count == 0) { + break; + } + stat.execute("UPDATE TEST SET NAME=NAME||'+' WHERE ID=" + x); + break; + case 5: + case 6: + if (count == 0) { + break; + } + count -= stat.executeUpdate("DELETE FROM TEST WHERE ID=" + x); + break; + case 7: + rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + int c = rs.getInt(1); + assertEquals(count, c); + long time = System.nanoTime(); + if (time > startTime + TimeUnit.SECONDS.toNanos(5)) { + println("rows: " + count); + startTime = time; + } + break; + default: + } + } + } + + private void validateOld() { + println("validate"); + try { + Connection conn = getConnection("timer"); + // TODO validate transactions + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE IF NOT EXISTS TEST(ID IDENTITY, NAME VARCHAR)"); + ResultSet rs = stat.executeQuery("SELECT COUNT(*) FROM TEST"); + rs.next(); + int count = rs.getInt(1); + println("row count: " + count); + int real = 0; + rs = stat.executeQuery("SELECT * FROM TEST"); + while (rs.next()) { + real++; + } + if (real != count) { + println("real count: " + real); + throw new AssertionError("COUNT(*)=" + count + " SELECT=" + real); + } + rs = stat.executeQuery("SCRIPT"); + while (rs.next()) { + rs.getString(1); + } + conn.close(); + } catch (Throwable e) { + logError("validate", e); + backup(); + } + } + + private void backup() { + println("backup"); + for (int i = 0;; i++) { + String s = "timer." 
+ i + ".zip"; + File f = new File(s); + if (f.exists()) { + continue; + } + try { + Backup.execute(s, getBaseDir(), "timer", true); + } catch (SQLException e) { + logError("backup", e); + } + break; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/package.html b/modules/h2/src/test/java/org/h2/test/synth/package.html new file mode 100644 index 0000000000000..a8608f61be395 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Synthetic tests using random operations or statements. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Column.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Column.java new file mode 100644 index 0000000000000..6226d7d0d95be --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Column.java @@ -0,0 +1,228 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Types; + +/** + * A column of a table. + */ +class Column { + + private static final int[] TYPES = { Types.INTEGER, Types.VARCHAR, + Types.DECIMAL, Types.DATE, Types.TIME, Types.TIMESTAMP, + Types.BOOLEAN, Types.BINARY, Types.VARBINARY, Types.CLOB, + Types.BLOB, Types.DOUBLE, Types.BIGINT, Types.TIMESTAMP, Types.BIT, }; + + private TestSynth config; + private String name; + private int type; + private int precision; + private int scale; + private boolean isNullable; + private boolean isPrimaryKey; + // TODO test isAutoincrement; + + private Column(TestSynth config) { + this.config = config; + } + + Column(ResultSetMetaData meta, int index) throws SQLException { + name = meta.getColumnLabel(index); + type = meta.getColumnType(index); + switch (type) { + case Types.DECIMAL: + precision = meta.getPrecision(index); + scale = meta.getScale(index); + break; + case Types.BLOB: + case Types.BINARY: + case Types.VARBINARY: + case Types.CLOB: + case Types.LONGVARCHAR: + case Types.DATE: + case Types.TIME: + case Types.INTEGER: + case Types.VARCHAR: + case Types.CHAR: + case Types.BIGINT: + case Types.NUMERIC: + case Types.TIMESTAMP: + case Types.NULL: + case Types.LONGVARBINARY: + case Types.DOUBLE: + case Types.REAL: + case Types.OTHER: + case Types.BIT: + case Types.BOOLEAN: + break; + default: + throw new AssertionError("type=" + type); + } + } + + /** + * Check if this 
data type supports comparisons for this database. + * + * @param config the configuration + * @param type the SQL type + * @return true if the value can be used in conditions + */ + static boolean isConditionType(TestSynth config, int type) { + switch (config.getMode()) { + case TestSynth.H2: + case TestSynth.H2_MEM: + return true; + case TestSynth.MYSQL: + case TestSynth.HSQLDB: + case TestSynth.POSTGRESQL: + switch (type) { + case Types.INTEGER: + case Types.VARCHAR: + case Types.DECIMAL: + case Types.DATE: + case Types.TIME: + case Types.TIMESTAMP: + case Types.DOUBLE: + case Types.BIGINT: + case Types.BOOLEAN: + case Types.BIT: + return true; + case Types.BINARY: + case Types.VARBINARY: + case Types.BLOB: + case Types.CLOB: + case Types.LONGVARCHAR: + case Types.LONGVARBINARY: + return false; + default: + throw new AssertionError("type=" + type); + } + default: + throw new AssertionError("type=" + type); + } + } + + private String getTypeName() { + switch (type) { + case Types.INTEGER: + return "INT"; + case Types.VARCHAR: + return "VARCHAR(" + precision + ")"; + case Types.DECIMAL: + return "NUMERIC(" + precision + ", " + scale + ")"; + case Types.DATE: + return "DATE"; + case Types.TIME: + return "TIME"; + case Types.TIMESTAMP: + return "TIMESTAMP"; + case Types.BINARY: + case Types.VARBINARY: + if (config.is(TestSynth.POSTGRESQL)) { + return "BYTEA"; + } + return "BINARY(" + precision + ")"; + case Types.CLOB: { + if (config.is(TestSynth.HSQLDB)) { + return "LONGVARCHAR"; + } else if (config.is(TestSynth.POSTGRESQL)) { + return "TEXT"; + } + return "CLOB"; + } + case Types.BLOB: { + if (config.is(TestSynth.HSQLDB)) { + return "LONGVARBINARY"; + } + return "BLOB"; + } + case Types.DOUBLE: + if (config.is(TestSynth.POSTGRESQL)) { + return "DOUBLE PRECISION"; + } + return "DOUBLE"; + case Types.BIGINT: + return "BIGINT"; + case Types.BOOLEAN: + case Types.BIT: + return "BOOLEAN"; + default: + throw new AssertionError("type=" + type); + } + } + + String 
getCreateSQL() { + String sql = name + " " + getTypeName(); + if (!isNullable) { + sql += " NOT NULL"; + } + return sql; + } + + String getName() { + return name; + } + + Value getRandomValue() { + return Value.getRandom(config, type, precision, scale, isNullable); + } + +// Value getRandomValueNotNull() { +// return Value.getRandom(config, type, precision, scale, false); +// } + + /** + * Generate a random column. + * + * @param config the configuration + * @return the column + */ + static Column getRandomColumn(TestSynth config) { + Column column = new Column(config); + column.name = "C_" + config.randomIdentifier(); + int randomType; + while (true) { + randomType = TYPES[config.random().getLog(TYPES.length)]; + if (config.is(TestSynth.POSTGRESQL) + && (randomType == Types.BINARY || + randomType == Types.VARBINARY || + randomType == Types.BLOB)) { + continue; + } + break; + } + column.type = randomType; + column.precision = config.random().getInt(20) + 2; + column.scale = config.random().getInt(column.precision); + column.isNullable = config.random().getBoolean(50); + return column; + } + + boolean getPrimaryKey() { + return isPrimaryKey; + } + + void setPrimaryKey(boolean b) { + isPrimaryKey = b; + } + + void setNullable(boolean b) { + isNullable = b; + } + + /** + * Get the column type. + * + * @return the type + */ + int getType() { + return type; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Command.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Command.java new file mode 100644 index 0000000000000..e034ec6226fec --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Command.java @@ -0,0 +1,427 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.sql.SQLException; +import java.util.HashMap; +import org.h2.util.StatementBuilder; + +/** + * Represents a statement. + */ +class Command { + private static final int CONNECT = 0, RESET = 1, DISCONNECT = 2, + CREATE_TABLE = 3, INSERT = 4, DROP_TABLE = 5, SELECT = 6, + DELETE = 7, UPDATE = 8, COMMIT = 9, ROLLBACK = 10, + AUTOCOMMIT_ON = 11, AUTOCOMMIT_OFF = 12, + CREATE_INDEX = 13, DROP_INDEX = 14, END = 15; + + /** + * The select list. + */ + String[] selectList; + + private TestSynth config; + private final int type; + private Table table; + private HashMap tables; + private Index index; + private Column[] columns; + private Value[] values; + private String condition; + // private int nextAlias; + private String order; + private final String join = ""; + private Result result; + + private Command(TestSynth config, int type) { + this.config = config; + this.type = type; + } + + private Command(TestSynth config, int type, Table table) { + this.config = config; + this.type = type; + this.table = table; + } + + private Command(TestSynth config, int type, Table table, String alias) { + this.config = config; + this.type = type; + this.table = table; + this.tables = new HashMap<>(); + this.tables.put(alias, table); + } + + private Command(TestSynth config, int type, Index index) { + this.config = config; + this.type = type; + this.index = index; + } + + Command(int type, String alias, Table table) { + this.type = type; + if (alias == null) { + alias = table.getName(); + } + addSubqueryTable(alias, table); + this.table = table; + } + +// static Command getDropTable(TestSynth config, Table table) { +// return new Command(config, Command.DROP_TABLE, table); +// } + +// Command getCommit(TestSynth config) { +// return new Command(config, Command.COMMIT); +// } + +// Command getRollback(TestSynth config) { +// return new Command(config, Command.ROLLBACK); +// } + +// Command 
getSetAutoCommit(TestSynth config, boolean auto) { +// int type = auto ? Command.AUTOCOMMIT_ON : Command.AUTOCOMMIT_OFF; +// return new Command(config, type); +// } + + /** + * Create a connect command. + * + * @param config the configuration + * @return the command + */ + static Command getConnect(TestSynth config) { + return new Command(config, CONNECT); + } + + /** + * Create a reset command. + * + * @param config the configuration + * @return the command + */ + static Command getReset(TestSynth config) { + return new Command(config, RESET); + } + + /** + * Create a disconnect command. + * + * @param config the configuration + * @return the command + */ + static Command getDisconnect(TestSynth config) { + return new Command(config, DISCONNECT); + } + + /** + * Create an end command. + * + * @param config the configuration + * @return the command + */ + static Command getEnd(TestSynth config) { + return new Command(config, END); + } + + /** + * Create a create table command. + * + * @param config the configuration + * @param table the table + * @return the command + */ + static Command getCreateTable(TestSynth config, Table table) { + return new Command(config, CREATE_TABLE, table); + } + + /** + * Create a create index command. + * + * @param config the configuration + * @param index the index + * @return the command + */ + static Command getCreateIndex(TestSynth config, Index index) { + return new Command(config, CREATE_INDEX, index); + } + + /** + * Create a random select command. 
+ * + * @param config the configuration + * @param table the table + * @return the command + */ + static Command getRandomSelect(TestSynth config, Table table) { + Command command = new Command(config, Command.SELECT, table, "M"); + command.selectList = Expression.getRandomSelectList(config, command); + // TODO group by, having, joins + command.condition = Expression.getRandomCondition(config, command).getSQL(); + command.order = Expression.getRandomOrder(config, command); + return command; + } + +// static Command getRandomSelectJoin(TestSynth config, Table table) { +// Command command = new Command(config, Command.SELECT, table, "M"); +// int len = config.random().getLog(5) + 1; +// String globalJoinCondition = ""; +// for (int i = 0; i < len; i++) { +// Table t2 = config.randomTable(); +// String alias = "J" + i; +// command.addSubqueryTable(alias, t2); +// Expression joinOn = +// Expression.getRandomJoinOn(config, command, alias); +// if (config.random().getBoolean(50)) { +// // regular join +// if (globalJoinCondition.length() > 0) { +// globalJoinCondition += " AND "; +// +// } +// globalJoinCondition += " (" + joinOn.getSQL() + ") "; +// command.addJoin(", " + t2.getName() + " " + alias); +// } else { +// String join = " JOIN " + t2.getName() + +// " " + alias + " ON " + joinOn.getSQL(); +// if (config.random().getBoolean(20)) { +// command.addJoin(" LEFT OUTER" + join); +// } else { +// command.addJoin(" INNER" + join); +// } +// } +// } +// command.selectList = +// Expression.getRandomSelectList(config, command); +// // TODO group by, having +// String cond = Expression.getRandomCondition(config, command).getSQL(); +// if (globalJoinCondition.length() > 0) { +// if (cond != null) { +// cond = "(" + globalJoinCondition + " ) AND (" + cond + ")"; +// } else { +// cond = globalJoinCondition; +// } +// } +// command.condition = cond; +// command.order = Expression.getRandomOrder(config, command); +// return command; +// } + + /** + * Create a random delete 
command. + * + * @param config the configuration + * @param table the table + * @return the command + */ + static Command getRandomDelete(TestSynth config, Table table) { + Command command = new Command(config, Command.DELETE, table); + command.condition = Expression.getRandomCondition(config, command).getSQL(); + return command; + } + + /** + * Create a random update command. + * + * @param config the configuration + * @param table the table + * @return the command + */ + static Command getRandomUpdate(TestSynth config, Table table) { + Command command = new Command(config, Command.UPDATE, table); + command.prepareUpdate(); + return command; + } + + /** + * Create a random insert command. + * + * @param config the configuration + * @param table the table + * @return the command + */ + static Command getRandomInsert(TestSynth config, Table table) { + Command command = new Command(config, Command.INSERT, table); + command.prepareInsert(); + return command; + } + + /** + * Add a subquery table to the command. 
+ * + * @param alias the table alias + * @param t the table + */ + void addSubqueryTable(String alias, Table t) { + tables.put(alias, t); + } + +// void removeSubqueryTable(String alias) { +// tables.remove(alias); +// } + + private void prepareInsert() { + Column[] c; + if (config.random().getBoolean(70)) { + c = table.getColumns(); + } else { + int len = config.random().getInt(table.getColumnCount() - 1) + 1; + c = columns = table.getRandomColumns(len); + } + values = new Value[c.length]; + for (int i = 0; i < c.length; i++) { + values[i] = c[i].getRandomValue(); + } + } + + private void prepareUpdate() { + int len = config.random().getLog(table.getColumnCount() - 1) + 1; + Column[] c = columns = table.getRandomColumns(len); + values = new Value[c.length]; + for (int i = 0; i < c.length; i++) { + values[i] = c[i].getRandomValue(); + } + condition = Expression.getRandomCondition(config, this).getSQL(); + } + + private Result select(DbInterface db) throws SQLException { + StatementBuilder buff = new StatementBuilder("SELECT "); + for (String s : selectList) { + buff.appendExceptFirst(", "); + buff.append(s); + } + buff.append(" FROM ").append(table.getName()).append(" M"). + append(' ').append(join); + if (condition != null) { + buff.append(" WHERE ").append(condition); + } + if (order.trim().length() > 0) { + buff.append(" ORDER BY ").append(order); + } + return db.select(buff.toString()); + } + + /** + * Run the command against the specified database. 
+ * + * @param db the database + * @return the result + */ + Result run(DbInterface db) throws Exception { + try { + switch (type) { + case CONNECT: + db.connect(); + result = new Result("connect"); + break; + case RESET: + db.reset(); + result = new Result("reset"); + break; + case DISCONNECT: + db.disconnect(); + result = new Result("disconnect"); + break; + case END: + db.end(); + result = new Result("disconnect"); + break; + case CREATE_TABLE: + db.createTable(table); + result = new Result("createTable"); + break; + case DROP_TABLE: + db.dropTable(table); + result = new Result("dropTable"); + break; + case CREATE_INDEX: + db.createIndex(index); + result = new Result("createIndex"); + break; + case DROP_INDEX: + db.dropIndex(index); + result = new Result("dropIndex"); + break; + case INSERT: + result = db.insert(table, columns, values); + break; + case SELECT: + result = select(db); + break; + case DELETE: + result = db.delete(table, condition); + break; + case UPDATE: + result = db.update(table, columns, values, condition); + break; + case AUTOCOMMIT_ON: + db.setAutoCommit(true); + result = new Result("setAutoCommit true"); + break; + case AUTOCOMMIT_OFF: + db.setAutoCommit(false); + result = new Result("setAutoCommit false"); + break; + case COMMIT: + db.commit(); + result = new Result("commit"); + break; + case ROLLBACK: + db.rollback(); + result = new Result("rollback"); + break; + default: + throw new AssertionError("type=" + type); + } + } catch (SQLException e) { + result = new Result("", e); + } + return result; + } + +// public String getNextTableAlias() { +// return "S" + nextAlias++; +// } + + /** + * Get a random table alias name. + * + * @return the alias name + */ + String getRandomTableAlias() { + if (tables == null) { + return null; + } + Object[] list = tables.keySet().toArray(); + int i = config.random().getInt(list.length); + return (String) list[i]; + } + + /** + * Get the table with the specified alias. 
+ * + * @param alias the alias or null if there is only one table + * @return the table + */ + Table getTable(String alias) { + if (alias == null) { + return table; + } + return tables.get(alias); + } + +// public void addJoin(String string) { +// join += string; +// } + +// static Command getSelectAll(TestSynth config, Table table) { +// Command command = new Command(config, Command.SELECT, table, "M"); +// command.selectList = new String[] { "*" }; +// command.order = ""; +// return command; +// } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/DbConnection.java b/modules/h2/src/test/java/org/h2/test/synth/sql/DbConnection.java new file mode 100644 index 0000000000000..c7d9ac802545a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/DbConnection.java @@ -0,0 +1,210 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import org.h2.util.New; + +/** + * Represents a connection to a real database. 
+ */ +class DbConnection implements DbInterface { + private final TestSynth config; + private final int id; + private final String driver; + private final String url; + private final String user; + private final String password; + private Connection conn; + private Connection sentinel; + private final boolean useSentinel; + + DbConnection(TestSynth config, String driver, String url, String user, + String password, int id, boolean useSentinel) { + this.config = config; + this.driver = driver; + this.url = url; + this.user = user; + this.password = password; + this.id = id; + this.useSentinel = useSentinel; + log("url=" + url); + } + + @Override + public void reset() throws SQLException { + log("reset;"); + DatabaseMetaData meta = conn.getMetaData(); + Statement stat = conn.createStatement(); + ArrayList tables = New.arrayList(); + ResultSet rs = meta.getTables(null, null, null, new String[] { "TABLE" }); + while (rs.next()) { + String schemaName = rs.getString("TABLE_SCHEM"); + if (!"INFORMATION_SCHEMA".equals(schemaName)) { + tables.add(rs.getString("TABLE_NAME")); + } + } + while (tables.size() > 0) { + int dropped = 0; + for (int i = 0; i < tables.size(); i++) { + try { + String table = tables.get(i); + stat.execute("DROP TABLE " + table); + dropped++; + tables.remove(i); + i--; + } catch (SQLException e) { + // maybe a referential integrity + } + } + // could not drop any table and still tables to drop + if (dropped == 0 && tables.size() > 0) { + throw new AssertionError("Cannot drop " + tables); + } + } + } + + @Override + public void connect() throws Exception { + if (useSentinel && sentinel == null) { + sentinel = getConnection(); + } + log("connect to " + url + ";"); + conn = getConnection(); + } + + private Connection getConnection() throws Exception { + log("(getConnection to " + url + ");"); + if (driver == null) { + return config.getConnection("synth"); + } + Class.forName(driver); + return DriverManager.getConnection(url, user, password); + } + + 
@Override + public void disconnect() throws SQLException { + log("disconnect " + url + ";"); + conn.close(); + } + + @Override + public void end() throws SQLException { + log("end " + url + ";"); + if (sentinel != null) { + sentinel.close(); + sentinel = null; + } + } + + @Override + public void createTable(Table table) throws SQLException { + execute(table.getCreateSQL()); + } + + @Override + public void dropTable(Table table) throws SQLException { + execute(table.getDropSQL()); + } + + @Override + public void createIndex(Index index) throws SQLException { + execute(index.getCreateSQL()); + index.getTable().addIndex(index); + } + + @Override + public void dropIndex(Index index) throws SQLException { + execute(index.getDropSQL()); + index.getTable().removeIndex(index); + } + + @Override + public Result insert(Table table, Column[] c, Value[] v) + throws SQLException { + String sql = table.getInsertSQL(c, v); + execute(sql); + return new Result(sql, 1); + } + + private void execute(String sql) throws SQLException { + log(sql + ";"); + conn.createStatement().execute(sql); + } + + @Override + public Result select(String sql) throws SQLException { + log(sql + ";"); + Statement stat = conn.createStatement(); + Result result = new Result(config, sql, stat.executeQuery(sql)); + return result; + } + + @Override + public Result delete(Table table, String condition) throws SQLException { + String sql = "DELETE FROM " + table.getName(); + if (condition != null) { + sql += " WHERE " + condition; + } + log(sql + ";"); + Statement stat = conn.createStatement(); + Result result = new Result(sql, stat.executeUpdate(sql)); + return result; + } + + @Override + public Result update(Table table, Column[] columns, Value[] values, + String condition) throws SQLException { + String sql = "UPDATE " + table.getName() + " SET "; + for (int i = 0; i < columns.length; i++) { + if (i > 0) { + sql += ", "; + } + sql += columns[i].getName() + "=" + values[i].getSQL(); + } + if (condition != 
null) { + sql += " WHERE " + condition; + } + log(sql + ";"); + Statement stat = conn.createStatement(); + Result result = new Result(sql, stat.executeUpdate(sql)); + return result; + } + + @Override + public void setAutoCommit(boolean b) throws SQLException { + log("set autoCommit " + b + ";"); + conn.setAutoCommit(b); + } + + @Override + public void commit() throws SQLException { + log("commit;"); + conn.commit(); + } + + @Override + public void rollback() throws SQLException { + log("rollback;"); + conn.rollback(); + } + + private void log(String s) { + config.log(id, s); + } + + @Override + public String toString() { + return url; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/DbInterface.java b/modules/h2/src/test/java/org/h2/test/synth/sql/DbInterface.java new file mode 100644 index 0000000000000..609ace63b46f2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/DbInterface.java @@ -0,0 +1,118 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.sql.SQLException; + +/** + * Represents a connection to a (real or simulated) database. + */ +public interface DbInterface { + + /** + * Drop all objects in the database. + */ + void reset() throws SQLException; + + /** + * Connect to the database. + */ + void connect() throws Exception; + + /** + * Disconnect from the database. + */ + void disconnect() throws SQLException; + + /** + * Close the connection and the database. + */ + void end() throws SQLException; + + /** + * Create the specified table. + * + * @param table the table to create + */ + void createTable(Table table) throws SQLException; + + /** + * Drop the specified table. + * + * @param table the table to drop + */ + void dropTable(Table table) throws SQLException; + + /** + * Create an index. 
+ * + * @param index the index to create + */ + void createIndex(Index index) throws SQLException; + + /** + * Drop an index. + * + * @param index the index to drop + */ + void dropIndex(Index index) throws SQLException; + + /** + * Insert a row into a table. + * + * @param table the table + * @param c the column list + * @param v the values + * @return the result + */ + Result insert(Table table, Column[] c, Value[] v) throws SQLException; + + /** + * Execute a query. + * + * @param sql the SQL statement + * @return the result + */ + Result select(String sql) throws SQLException; + + /** + * Delete a number of rows. + * + * @param table the table + * @param condition the condition + * @return the result + */ + Result delete(Table table, String condition) throws SQLException; + + /** + * Update the given table with the new values. + * + * @param table the table + * @param columns the columns to update + * @param values the new values + * @param condition the condition + * @return the result of the update + */ + Result update(Table table, Column[] columns, Value[] values, + String condition) throws SQLException; + + /** + * Enable or disable autocommit. + * + * @param b the new value + */ + void setAutoCommit(boolean b) throws SQLException; + + /** + * Commit a pending transaction. + */ + void commit() throws SQLException; + + /** + * Roll back a pending transaction. + */ + void rollback() throws SQLException; +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/DbState.java b/modules/h2/src/test/java/org/h2/test/synth/sql/DbState.java new file mode 100644 index 0000000000000..44eaf8d0f879d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/DbState.java @@ -0,0 +1,121 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.util.ArrayList; +import org.h2.util.New; + +/** + * Represents a connection to a simulated database. + */ +public class DbState implements DbInterface { + + private boolean connected; + private boolean autoCommit; + private final TestSynth config; + private ArrayList tables = New.arrayList(); + private ArrayList indexes = New.arrayList(); + + DbState(TestSynth config) { + this.config = config; + } + + @Override + public void reset() { + tables = New.arrayList(); + indexes = New.arrayList(); + } + + @Override + public void connect() { + connected = true; + } + + @Override + public void disconnect() { + connected = false; + } + + @Override + public void createTable(Table table) { + tables.add(table); + } + + @Override + public void dropTable(Table table) { + tables.remove(table); + } + + @Override + public void createIndex(Index index) { + indexes.add(index); + } + + @Override + public void dropIndex(Index index) { + indexes.remove(index); + } + + @Override + public Result insert(Table table, Column[] c, Value[] v) { + return null; + } + + @Override + public Result select(String sql) { + return null; + } + + @Override + public Result delete(Table table, String condition) { + return null; + } + + @Override + public Result update(Table table, Column[] columns, Value[] values, + String condition) { + return null; + } + + @Override + public void setAutoCommit(boolean b) { + autoCommit = b; + } + + @Override + public void commit() { + // nothing to do + } + + @Override + public void rollback() { + // nothing to do + } + + /** + * Get a random table. 
+ * + * @return the table + */ + Table randomTable() { + if (tables.size() == 0) { + return null; + } + int i = config.random().getInt(tables.size()); + return tables.get(i); + } + + @Override + public void end() { + // nothing to do + } + + @Override + public String toString() { + return "autocommit: " + autoCommit + " connected: " + connected; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Expression.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Expression.java new file mode 100644 index 0000000000000..94d1bc00ef41e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Expression.java @@ -0,0 +1,380 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.sql.Types; +import java.util.ArrayList; +import org.h2.util.New; + +/** + * Represents an expression. + */ +public class Expression { + + private String sql; + private final TestSynth config; + private final Command command; + + private Expression(TestSynth config, Command command) { + this.config = config; + this.command = command; + sql = ""; + } + + /** + * Create a random select list. + * + * @param config the configuration + * @param command the command + * @return the select list + */ + static String[] getRandomSelectList(TestSynth config, Command command) { + if (config.random().getBoolean(30)) { + return new String[] { "*" }; + } + ArrayList exp = New.arrayList(); + String sql = ""; + if (config.random().getBoolean(10)) { + sql += "DISTINCT "; + } + int len = config.random().getLog(8) + 1; + for (int i = 0; i < len; i++) { + sql += getRandomExpression(config, command).getSQL(); + sql += " AS A" + i + " "; + exp.add(sql); + sql = ""; + } + return exp.toArray(new String[0]); + } + + /** + * Generate a random condition. 
+ * + * @param config the configuration + * @param command the command + * @return the random condition expression + */ + static Expression getRandomCondition(TestSynth config, Command command) { + Expression condition = new Expression(config, command); + if (config.random().getBoolean(50)) { + condition.create(); + } + return condition; + } + + private static Expression getRandomExpression(TestSynth config, + Command command) { + Expression expression = new Expression(config, command); + String alias = command.getRandomTableAlias(); + Column column = command.getTable(alias).getRandomConditionColumn(); + if (column == null) { + expression.createValue(); + } else { + expression.createExpression(alias, column); + } + return expression; + } + + private void createValue() { + Value v = Column.getRandomColumn(config).getRandomValue(); + sql = v.getSQL(); + } + + /** + * Generate a random join condition. + * + * @param config the configuration + * @param command the command + * @param alias the alias name + * @return the join condition + */ + static Expression getRandomJoinOn(TestSynth config, Command command, + String alias) { + Expression expression = new Expression(config, command); + expression.createJoinComparison(alias); + return expression; + } + + /** + * Generate a random sort order list. 
+ * + * @param config the configuration + * @param command the command + * @return the ORDER BY list + */ + static String getRandomOrder(TestSynth config, Command command) { + int len = config.random().getLog(6); + String sql = ""; + for (int i = 0; i < len; i++) { + if (i > 0) { + sql += ", "; + } + int max = command.selectList.length; + int idx = config.random().getInt(max); + // sql += getRandomExpression(command).getSQL(); + // if (max > 1 && config.random().getBoolean(50)) { + sql += "A" + idx; + // } else { + // sql += String.valueOf(idx + 1); + // } + if (config.random().getBoolean(50)) { + if (config.random().getBoolean(10)) { + sql += " ASC"; + } else { + sql += " DESC"; + } + } + } + return sql; + } + + /** + * Get the SQL snippet of this expression. + * + * @return the SQL snippet + */ + String getSQL() { + return sql.trim().length() == 0 ? null : sql.trim(); + } + + private boolean is(int percent) { + return config.random().getBoolean(percent); + } + + private String oneOf(String[] list) { + int i = config.random().getInt(list.length); + if (!sql.endsWith(" ")) { + sql += " "; + } + sql += list[i] + " "; + return list[i]; + } + + private static String getColumnName(String alias, Column column) { + if (alias == null) { + return column.getName(); + } + return alias + "." 
+ column.getName(); + } + + private void createJoinComparison(String alias) { + int len = config.random().getLog(5) + 1; + for (int i = 0; i < len; i++) { + if (i > 0) { + sql += "AND "; + } + Column column = command.getTable(alias).getRandomConditionColumn(); + if (column == null) { + sql += "1=1"; + return; + } + sql += getColumnName(alias, column); + sql += "="; + String a2; + do { + a2 = command.getRandomTableAlias(); + } while (a2.equals(alias)); + Table t2 = command.getTable(a2); + Column c2 = t2.getRandomColumnOfType(column.getType()); + if (c2 == null) { + sql += column.getRandomValue().getSQL(); + } else { + sql += getColumnName(a2, c2); + } + sql += " "; + } + } + + private void create() { + createComparison(); + while (is(50)) { + oneOf(new String[] { "AND", "OR" }); + createComparison(); + } + } + + // private void createSubquery() { + // // String alias = command.getRandomTableAlias(); + // // Table t1 = command.getTable(alias); + // Database db = command.getDatabase(); + // Table t2 = db.getRandomTable(); + // String a2 = command.getNextTableAlias(); + // sql += "SELECT * FROM " + t2.getName() + " " + a2 + " WHERE "; + // command.addSubqueryTable(a2, t2); + // createComparison(); + // command.removeSubqueryTable(a2); + // } + + private void createComparison() { + if (is(5)) { + sql += " NOT( "; + createComparisonSub(); + sql += ")"; + } else { + createComparisonSub(); + } + } + + private void createComparisonSub() { + /* + * if (is(10)) { sql += " EXISTS("; createSubquery(); sql += ")"; + * return; } else + */ + if (is(10)) { + sql += "("; + create(); + sql += ")"; + return; + } + String alias = command.getRandomTableAlias(); + Column column = command.getTable(alias).getRandomConditionColumn(); + if (column == null) { + if (is(50)) { + sql += "1=1"; + } else { + sql += "1=0"; + } + return; + } + boolean columnFirst = is(90); + if (columnFirst) { + sql += getColumnName(alias, column); + } else { + Value v = column.getRandomValue(); + sql += v.getSQL(); 
+ } + if (is(10)) { + oneOf(new String[] { "IS NULL", "IS NOT NULL" }); + } else if (is(10)) { + oneOf(new String[] { "BETWEEN", "NOT BETWEEN" }); + Value v = column.getRandomValue(); + sql += v.getSQL(); + sql += " AND "; + v = column.getRandomValue(); + sql += v.getSQL(); + // } else if (is(10)) { + // // oneOf(new String[] { "IN", "NOT IN" }); + // sql += " IN "; + // sql += "("; + // int len = config.random().getInt(8) + 1; + // for (int i = 0; i < len; i++) { + // if (i > 0) { + // sql += ", "; + // } + // sql += column.getRandomValueNotNull().getSQL(); + // } + // sql += ")"; + } else { + if (column.getType() == Types.VARCHAR) { + oneOf(new String[] { "=", "=", "=", "<", ">", + "<=", ">=", "<>", "LIKE", "NOT LIKE" }); + } else { + oneOf(new String[] { "=", "=", "=", "<", ">", + "<=", ">=", "<>" }); + } + if (columnFirst) { + Value v = column.getRandomValue(); + sql += v.getSQL(); + } else { + sql += getColumnName(alias, column); + } + } + } + + private void createExpression(String alias, Column type) { + boolean op = is(20); + // no null values if there is an operation + boolean allowNull = !op; + // boolean allowNull =true; + + createTerm(alias, type, true); + if (op) { + switch (type.getType()) { + case Types.INTEGER: + if (config.is(TestSynth.POSTGRESQL)) { + oneOf(new String[] { "+", "-", "/" }); + } else { + oneOf(new String[] { "+", "-", "*", "/" }); + } + createTerm(alias, type, allowNull); + break; + case Types.DECIMAL: + oneOf(new String[] { "+", "-", "*" }); + createTerm(alias, type, allowNull); + break; + case Types.VARCHAR: + sql += " || "; + createTerm(alias, type, allowNull); + break; + case Types.BLOB: + case Types.CLOB: + case Types.DATE: + break; + default: + } + } + } + + private void createTerm(String alias, Column type, boolean allowNull) { + int dt = type.getType(); + // negate only numeric terms, in 5% of the cases + if (is(5) && (dt == Types.INTEGER || dt == Types.DECIMAL)) { + sql += " - "; + allowNull = false; + } + if (is(10)) { + sql += "("; + createTerm(alias, type, allowNull); + 
sql += ")"; + return; + } + if (is(20)) { + // if (is(10)) { + // sql += "CAST("; + // // TODO cast + // Column c = Column.getRandomColumn(config); + // createTerm(alias, c, allowNull); + // sql += " AS "; + // sql += type.getTypeName(); + // sql += ")"; + // return; + // } + switch (dt) { + // case Types.INTEGER: + // String function = oneOf(new String[] { "LENGTH" /*, "MOD" */ }); + // sql += "("; + // createTerm(alias, type, allowNull); + // sql += ")"; + // break; + case Types.VARCHAR: + oneOf(new String[] { "LOWER", "UPPER" }); + sql += "("; + createTerm(alias, type, allowNull); + sql += ")"; + break; + default: + createTerm(alias, type, allowNull); + } + return; + } + if (is(60)) { + String a2 = command.getRandomTableAlias(); + Column column = command.getTable(a2).getRandomColumnOfType(dt); + if (column != null) { + sql += getColumnName(a2, column); + return; + } + } + + Value v = Value.getRandom(config, dt, 20, 2, allowNull); + sql += v.getSQL(); + } + + @Override + public String toString() { + throw new AssertionError(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Index.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Index.java new file mode 100644 index 0000000000000..44b251ed3cca3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Index.java @@ -0,0 +1,52 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +/** + * Represents an index. 
+ */ +public class Index { + private final Table table; + private final String name; + private final Column[] columns; + private final boolean unique; + + Index(Table table, String name, Column[] columns, boolean unique) { + this.table = table; + this.name = name; + this.columns = columns; + this.unique = unique; + } + + String getName() { + return name; + } + + String getCreateSQL() { + String sql = "CREATE "; + if (unique) { + sql += "UNIQUE "; + } + sql += "INDEX " + name + " ON " + table.getName() + "("; + for (int i = 0; i < columns.length; i++) { + if (i > 0) { + sql += ", "; + } + sql += columns[i].getName(); + } + sql += ")"; + return sql; + } + + String getDropSQL() { + return "DROP INDEX " + name; + } + + Table getTable() { + return table; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/RandomGen.java b/modules/h2/src/test/java/org/h2/test/synth/sql/RandomGen.java new file mode 100644 index 0000000000000..0ca1575569124 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/RandomGen.java @@ -0,0 +1,345 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.sql.Date; +import java.sql.Time; +import java.sql.Timestamp; +import java.util.Random; + +/** + * A random data generator class. + */ +public class RandomGen { + private final Random random = new Random(); + + /** + * Create a new random instance with a fixed seed value. + */ + public RandomGen() { + random.setSeed(12); + } + + /** + * Get the next integer that is smaller than max. + * + * @param max the upper limit (exclusive) + * @return the random value + */ + public int getInt(int max) { + return max == 0 ? 0 : random.nextInt(max); + } + + /** + * Get the next gaussian value. 
+ * + * @return the value + */ + public double nextGaussian() { + return random.nextGaussian(); + } + + /** + * Get the next random value that is at most max but probably much lower. + * + * @param max the maximum value + * @return the value + */ + public int getLog(int max) { + if (max == 0) { + return 0; + } + while (true) { + int d = Math.abs((int) (random.nextGaussian() / 2. * max)); + if (d < max) { + return d; + } + } + } + + /** + * Get a number of random bytes. + * + * @param data the target buffer + */ + public void getBytes(byte[] data) { + random.nextBytes(data); + } + + /** + * Get a boolean that is true with the given probability in percent. + * + * @param percent the probability + * @return the boolean value + */ + public boolean getBoolean(int percent) { + return random.nextInt(100) < percent; + } + + /** + * Get a random string with the given length. + * + * @param len the length + * @return the string + */ + public String randomString(int len) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < len; i++) { + String from = (i % 2 == 0) ? "bdfghklmnpqrst" : "aeiou"; + buff.append(from.charAt(getInt(from.length()))); + } + return buff.toString(); + } + + /** + * Get a random integer. In 10% of the cases, Integer.MAX_VALUE is returned, + * in 10% of the cases Integer.MIN_VALUE. In another 10% of the cases, a + * random value between those extremes is returned. In 20% of the cases, + * this method returns 0. In 10% of the cases, a gaussian value in the range + * -200 and +1800 is returned. In all other cases, a gaussian value in the + * range -5 and +20 is returned. 
+ * + * @return the random value + */ + public int getRandomInt() { + switch (random.nextInt(10)) { + case 0: + return Integer.MAX_VALUE; + case 1: + return Integer.MIN_VALUE; + case 2: + return random.nextInt(); + case 3: + case 4: + return 0; + case 5: + return (int) (random.nextGaussian() * 2000) - 200; + default: + return (int) (random.nextGaussian() * 20) - 5; + } + } + + /** + * Get a random long. The algorithm is similar to a random int. + * + * @return the random value + */ + public long getRandomLong() { + switch (random.nextInt(10)) { + case 0: + return Long.MAX_VALUE; + case 1: + return Long.MIN_VALUE; + case 2: + return random.nextLong(); + case 3: + case 4: + return 0; + case 5: + return (int) (random.nextGaussian() * 20000) - 2000; + default: + return (int) (random.nextGaussian() * 200) - 50; + } + } + + /** + * Get a random double. The algorithm is similar to a random int. + * + * @return the random value + */ + public double getRandomDouble() { + switch (random.nextInt(10)) { + case 0: + return Double.MIN_VALUE; + case 1: + return Double.MAX_VALUE; + case 2: + return Float.MIN_VALUE; + case 3: + return Float.MAX_VALUE; + case 4: + return random.nextDouble(); + case 5: + case 6: + return 0; + case 7: + return random.nextGaussian() * 20000. - 2000.; + default: + return random.nextGaussian() * 200. - 50.; + } + } + + /** + * Get a random boolean. + * + * @return the random value + */ + public boolean nextBoolean() { + return random.nextBoolean(); + } + + /** + * Get a random integer array. In 10% of the cases, null is returned. + * + * @return the array + */ + public int[] getIntArray() { + switch (random.nextInt(10)) { + case 0: + return null; + default: + int len = getInt(100); + int[] list = new int[len]; + for (int i = 0; i < len; i++) { + list[i] = getRandomInt(); + } + return list; + } + } + + /** + * Get a random byte array. In 10% of the cases, null is returned. 
+ * + * @return the array + */ + public byte[] getByteArray() { + switch (random.nextInt(10)) { + case 0: + return null; + default: + int len = getInt(100); + byte[] list = new byte[len]; + random.nextBytes(list); + return list; + } + } + + /** + * Get a random time value. In 10% of the cases, null is returned. + * + * @return the value + */ + public Time randomTime() { + if (random.nextInt(10) == 0) { + return null; + } + StringBuilder buff = new StringBuilder(); + buff.append(getInt(24)); + buff.append(':'); + buff.append(getInt(24)); + buff.append(':'); + buff.append(getInt(24)); + return Time.valueOf(buff.toString()); + + } + + /** + * Get a random timestamp value. In 10% of the cases, null is returned. + * + * @return the value + */ + public Timestamp randomTimestamp() { + if (random.nextInt(10) == 0) { + return null; + } + StringBuilder buff = new StringBuilder(); + buff.append(getInt(10) + 2000); + buff.append('-'); + int month = getInt(12) + 1; + if (month < 10) { + buff.append('0'); + } + buff.append(month); + buff.append('-'); + int day = getInt(28) + 1; + if (day < 10) { + buff.append('0'); + } + buff.append(day); + buff.append(' '); + int hour = getInt(24); + if (hour < 10) { + buff.append('0'); + } + buff.append(hour); + buff.append(':'); + int minute = getInt(60); + if (minute < 10) { + buff.append('0'); + } + buff.append(minute); + buff.append(':'); + int second = getInt(60); + if (second < 10) { + buff.append('0'); + } + buff.append(second); + // TODO test timestamp nanos + return Timestamp.valueOf(buff.toString()); + } + + /** + * Get a random date value. In 10% of the cases, null is returned. 
+ * + * @return the value + */ + public Date randomDate() { + if (random.nextInt(10) == 0) { + return null; + } + StringBuilder buff = new StringBuilder(); + buff.append(getInt(10) + 2000); + buff.append('-'); + int month = getInt(12) + 1; + if (month < 10) { + buff.append('0'); + } + buff.append(month); + buff.append('-'); + int day = getInt(28) + 1; + if (day < 10) { + buff.append('0'); + } + buff.append(day); + return Date.valueOf(buff.toString()); + } + + /** + * Randomly modify a SQL statement. + * + * @param sql the original SQL statement + * @return the modified statement + */ + public String modify(String sql) { + int len = getLog(10); + for (int i = 0; i < len; i++) { + int pos = getInt(sql.length()); + if (getBoolean(50)) { + String badChars = "abcABCDEF\u00ef\u00f6\u00fcC1230=<>+\"\\*%&/()=?$_-.:,;{}[]"; + // auml, ouml, uuml + char bad = badChars.charAt(getInt(badChars.length())); + sql = sql.substring(0, pos) + bad + sql.substring(pos); + } else { + if (pos >= sql.length()) { + sql = sql.substring(0, pos); + } else { + sql = sql.substring(0, pos) + sql.substring(pos + 1); + } + } + } + return sql; + } + + /** + * Set the seed value. + * + * @param seed the new seed value + */ + public void setSeed(int seed) { + random.setSeed(seed); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Result.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Result.java new file mode 100644 index 0000000000000..1c4c6518c5aab --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Result.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.io.PrintWriter; +import java.io.StringWriter; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collections; + +import org.h2.test.TestBase; +import org.h2.util.New; + +/** + * Represents an in-memory result. + */ +class Result implements Comparable<Result> { + static final int SUCCESS = 0, BOOLEAN = 1, INT = 2, EXCEPTION = 3, + RESULT_SET = 4; + + String sql; + + private final int type; + private boolean bool; + private int intValue; + private SQLException exception; + private ArrayList<Row> rows; + private ArrayList<Column> header; + + Result(String sql) { + this.sql = sql; + type = SUCCESS; + } + + Result(String sql, SQLException e) { + this.sql = sql; + type = EXCEPTION; + exception = e; + } + + Result(String sql, boolean b) { + this.sql = sql; + type = BOOLEAN; + this.bool = b; + } + + Result(String sql, int i) { + this.sql = sql; + type = INT; + this.intValue = i; + } + + Result(TestSynth config, String sql, ResultSet rs) { + this.sql = sql; + type = RESULT_SET; + try { + rows = New.arrayList(); + header = New.arrayList(); + ResultSetMetaData meta = rs.getMetaData(); + int len = meta.getColumnCount(); + Column[] cols = new Column[len]; + for (int i = 0; i < len; i++) { + cols[i] = new Column(meta, i + 1); + } + while (rs.next()) { + Row row = new Row(config, rs, len); + rows.add(row); + } + Collections.sort(rows); + } catch (SQLException e) { + // type = EXCEPTION; + // exception = e; + TestBase.logError("error reading result set", e); + } + } + + @Override + public String toString() { + switch (type) { + case SUCCESS: + return "success"; + case BOOLEAN: + return "boolean: " + this.bool; + case INT: + return "int: " + this.intValue; + case EXCEPTION: { + StringWriter w = new StringWriter(); + exception.printStackTrace(new PrintWriter(w)); + return "exception: " + exception.getSQLState() + ": " + + 
exception.getMessage() + "\r\n" + w.toString(); + } + case RESULT_SET: + String result = "ResultSet { // size=" + rows.size() + "\r\n "; + for (Column column : header) { + result += column.toString() + "; "; + } + result += "} = {\r\n"; + for (Row row : rows) { + result += " { " + row.toString() + "};\r\n"; + } + return result + "}"; + default: + throw new AssertionError("type=" + type); + } + } + + @Override + public int compareTo(Result r) { + switch (type) { + case EXCEPTION: + if (r.type != EXCEPTION) { + return 1; + } + return 0; + // return + // exception.getSQLState().compareTo(r.exception.getSQLState()); + case BOOLEAN: + case INT: + case SUCCESS: + case RESULT_SET: + return toString().compareTo(r.toString()); + default: + throw new AssertionError("type=" + type); + } + } + +// public void log() { +// switch (type) { +// case SUCCESS: +// System.out.println("> ok"); +// break; +// case EXCEPTION: +// System.out.println("> exception"); +// break; +// case INT: +// if (intValue == 0) { +// System.out.println("> ok"); +// } else { +// System.out.println("> update count: " + intValue); +// } +// break; +// case RESULT_SET: +// System.out.println("> rs " + rows.size()); +// break; +// default: +// } +// System.out.println(); +// } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Row.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Row.java new file mode 100644 index 0000000000000..6cad9087ee3bd --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Row.java @@ -0,0 +1,51 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.sql.ResultSet; +import java.sql.SQLException; + +/** + * Represents a row. 
+ */ +class Row implements Comparable<Row> { + private final Value[] data; + + public Row(TestSynth config, ResultSet rs, int len) throws SQLException { + data = new Value[len]; + for (int i = 0; i < len; i++) { + data[i] = Value.read(config, rs, i + 1); + } + } + + @Override + public String toString() { + String s = ""; + for (Object o : data) { + s += o == null ? "NULL" : o.toString(); + s += "; "; + } + return s; + } + + @Override + public int compareTo(Row r2) { + int result = 0; + for (int i = 0; i < data.length && result == 0; i++) { + Object o1 = data[i]; + Object o2 = r2.data[i]; + if (o1 == null) { + result = (o2 == null) ? 0 : -1; + } else if (o2 == null) { + result = 1; + } else { + result = o1.toString().compareTo(o2.toString()); + } + } + return result; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Table.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Table.java new file mode 100644 index 0000000000000..899036bb436ed --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Table.java @@ -0,0 +1,265 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.util.ArrayList; +import org.h2.util.New; + +/** + * Represents a table. + */ +class Table { + private final TestSynth config; + private String name; + private boolean temporary; + private boolean globalTemporary; + private Column[] columns; + private Column[] primaryKeys; + private final ArrayList<Index> indexes = New.arrayList(); + + Table(TestSynth config) { + this.config = config; + } + + /** + * Create a new random table. 
+ * + * @param config the configuration + * @return the table + */ + static Table newRandomTable(TestSynth config) { + Table table = new Table(config); + table.name = "T_" + config.randomIdentifier(); + + // there is a difference between local temp tables for persistent and + // in-memory mode + // table.temporary = config.random().getBoolean(10); + // if(table.temporary) { + // if(config.getMode() == TestSynth.H2_MEM) { + // table.globalTemporary = false; + // } else { + // table.globalTemporary = config.random().getBoolean(50); + // } + // } + + int len = config.random().getLog(10) + 1; + table.columns = new Column[len]; + for (int i = 0; i < len; i++) { + Column col = Column.getRandomColumn(config); + table.columns[i] = col; + } + if (config.random().getBoolean(90)) { + int pkLen = config.random().getLog(len); + table.primaryKeys = new Column[pkLen]; + for (int i = 0; i < pkLen; i++) { + Column pk = null; + do { + pk = table.columns[config.random().getInt(len)]; + } while (pk.getPrimaryKey()); + table.primaryKeys[i] = pk; + pk.setPrimaryKey(true); + pk.setNullable(false); + } + } + return table; + } + + /** + * Create a new random index. + * + * @return the index + */ + Index newRandomIndex() { + String indexName = "I_" + config.randomIdentifier(); + int len = config.random().getLog(getColumnCount() - 1) + 1; + boolean unique = config.random().getBoolean(50); + Column[] cols = getRandomColumns(len); + Index index = new Index(this, indexName, cols, unique); + return index; + } + + /** + * Get the DROP TABLE statement for this table. + * + * @return the SQL statement + */ + String getDropSQL() { + return "DROP TABLE " + name; + } + + /** + * Get the CREATE TABLE statement for this table. 
+ * + * @return the SQL statement + */ + String getCreateSQL() { + String sql = "CREATE "; + if (temporary) { + if (globalTemporary) { + sql += "GLOBAL "; + } else { + sql += "LOCAL "; + } + sql += "TEMPORARY "; + } + sql += "TABLE " + name + "("; + for (int i = 0; i < columns.length; i++) { + if (i > 0) { + sql += ", "; + } + Column column = columns[i]; + sql += column.getCreateSQL(); + if (primaryKeys != null && primaryKeys.length == 1 && + primaryKeys[0] == column) { + sql += " PRIMARY KEY"; + } + } + if (primaryKeys != null && primaryKeys.length > 1) { + sql += ", "; + sql += "PRIMARY KEY("; + for (int i = 0; i < primaryKeys.length; i++) { + if (i > 0) { + sql += ", "; + } + Column column = primaryKeys[i]; + sql += column.getName(); + } + sql += ")"; + } + sql += ")"; + return sql; + } + + /** + * Get the INSERT statement for this table. + * + * @param c the column list + * @param v the value list + * @return the SQL statement + */ + String getInsertSQL(Column[] c, Value[] v) { + String sql = "INSERT INTO " + name; + if (c != null) { + sql += "("; + for (int i = 0; i < c.length; i++) { + if (i > 0) { + sql += ", "; + } + sql += c[i].getName(); + } + sql += ")"; + } + sql += " VALUES("; + for (int i = 0; i < v.length; i++) { + if (i > 0) { + sql += ", "; + } + sql += v[i].getSQL(); + } + sql += ")"; + return sql; + } + + /** + * Get the table name. + * + * @return the name + */ + String getName() { + return name; + } + + /** + * Get a random column that can be used in a condition. 
+ * + * @return the column + */ + Column getRandomConditionColumn() { + ArrayList<Column> list = New.arrayList(); + for (Column col : columns) { + if (Column.isConditionType(config, col.getType())) { + list.add(col); + } + } + if (list.size() == 0) { + return null; + } + return list.get(config.random().getInt(list.size())); + } + + Column getRandomColumn() { + return columns[config.random().getInt(columns.length)]; + } + + int getColumnCount() { + return columns.length; + } + + /** + * Get a random column of the specified type. + * + * @param type the type + * @return the column or null if no such column was found + */ + Column getRandomColumnOfType(int type) { + ArrayList<Column> list = New.arrayList(); + for (Column col : columns) { + if (col.getType() == type) { + list.add(col); + } + } + if (list.size() == 0) { + return null; + } + return list.get(config.random().getInt(list.size())); + } + + /** + * Get a number of random columns from this table. + * + * @param len the column count + * @return the columns + */ + Column[] getRandomColumns(int len) { + int[] index = new int[columns.length]; + for (int i = 0; i < columns.length; i++) { + index[i] = i; + } + for (int i = 0; i < columns.length; i++) { + int temp = index[i]; + int r = index[config.random().getInt(columns.length)]; + index[i] = index[r]; + index[r] = temp; + } + Column[] c = new Column[len]; + for (int i = 0; i < len; i++) { + c[i] = columns[index[i]]; + } + return c; + } + + Column[] getColumns() { + return columns; + } + + /** + * Add this index to the table. + * + * @param index the index to add + */ + void addIndex(Index index) { + indexes.add(index); + } + + /** + * Remove an index from the table. 
+ * + * @param index the index to remove + */ + void removeIndex(Index index) { + indexes.remove(index); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/TestSynth.java b/modules/h2/src/test/java/org/h2/test/synth/sql/TestSynth.java new file mode 100644 index 0000000000000..12f53d14724c9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/TestSynth.java @@ -0,0 +1,346 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.util.ArrayList; +import org.h2.test.TestAll; +import org.h2.test.TestBase; +import org.h2.util.MathUtils; +import org.h2.util.New; + +/** + * A test that generates random SQL statements against a number of databases + * and compares the results. + */ +public class TestSynth extends TestBase { + + // TODO hsqldb: call 1||null should return 1 but returns null + // TODO hsqldb: call mod(1) should return invalid parameter count + + /** + * An H2 database connection. + */ + static final int H2 = 0; + + /** + * An in-memory H2 database connection. + */ + static final int H2_MEM = 1; + + /** + * An HSQLDB database connection. + */ + static final int HSQLDB = 2; + + /** + * A MySQL database connection. + */ + static final int MYSQL = 3; + + /** + * A PostgreSQL database connection. + */ + static final int POSTGRESQL = 4; + + private final DbState dbState = new DbState(this); + private ArrayList<DbInterface> databases; + private ArrayList<Command> commands; + private final RandomGen random = new RandomGen(); + private boolean showError, showLog; + private boolean stopImmediately; + private int mode; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + /** + * Check whether this database is of the specified type. 
+ * + * @param isType the database type + * @return true if it is + */ + boolean is(int isType) { + return mode == isType; + } + + /** + * Get the random number generator. + * + * @return the random number generator + */ + RandomGen random() { + return random; + } + + /** + * Get a random identifier. + * + * @return the random identifier + */ + String randomIdentifier() { + int len = random.getLog(8) + 2; + while (true) { + return random.randomString(len); + } + } + + private void add(Command command) throws Exception { + command.run(dbState); + commands.add(command); + } + + private void addRandomCommands() throws Exception { + switch (random.getInt(20)) { + case 0: { + add(Command.getDisconnect(this)); + add(Command.getConnect(this)); + break; + } + case 1: { + Table table = Table.newRandomTable(this); + add(Command.getCreateTable(this, table)); + break; + } + case 2: { + Table table = randomTable(); + add(Command.getCreateIndex(this, table.newRandomIndex())); + break; + } + case 3: + case 4: + case 5: { + Table table = randomTable(); + add(Command.getRandomInsert(this, table)); + break; + } + case 6: + case 7: + case 8: { + Table table = randomTable(); + add(Command.getRandomUpdate(this, table)); + break; + } + case 9: + case 10: { + Table table = randomTable(); + add(Command.getRandomDelete(this, table)); + break; + } + default: { + Table table = randomTable(); + add(Command.getRandomSelect(this, table)); + } + } + } + + private void testRun(int seed) throws Exception { + random.setSeed(seed); + commands = New.arrayList(); + add(Command.getConnect(this)); + add(Command.getReset(this)); + + for (int i = 0; i < 1; i++) { + Table table = Table.newRandomTable(this); + add(Command.getCreateTable(this, table)); + add(Command.getCreateIndex(this, table.newRandomIndex())); + } + for (int i = 0; i < 2000; i++) { + addRandomCommands(); + } + // for (int i = 0; i < 20; i++) { + // Table table = randomTable(); + // add(Command.getRandomInsert(this, table)); + // } + // for 
(int i = 0; i < 100; i++) { + // Table table = randomTable(); + // add(Command.getRandomSelect(this, table)); + // } + // for (int i = 0; i < 10; i++) { + // Table table = randomTable(); + // add(Command.getRandomUpdate(this, table)); + // } + // for (int i = 0; i < 30; i++) { + // Table table = randomTable(); + // add(Command.getRandomSelect(this, table)); + // } + // for (int i = 0; i < 50; i++) { + // Table table = randomTable(); + // add(Command.getRandomDelete(this, table)); + // } + // for (int i = 0; i < 10; i++) { + // Table table = randomTable(); + // add(Command.getRandomSelect(this, table)); + // } + // while(true) { + // Table table = randomTable(); + // if(table == null) { + // break; + // } + // add(Command.getDropTable(this, table)); + // } + add(Command.getDisconnect(this)); + add(Command.getEnd(this)); + + for (int i = 0; i < commands.size(); i++) { + Command command = commands.get(i); + boolean stop = process(seed, i, command); + if (stop) { + break; + } + } + } + + private boolean process(int seed, int id, Command command) throws Exception { + try { + + ArrayList<Result> results = New.arrayList(); + for (int i = 0; i < databases.size(); i++) { + DbInterface db = databases.get(i); + Result result = command.run(db); + results.add(result); + if (showError && i == 0) { + // result.log(); + } + } + compareResults(results); + + } catch (Error e) { + if (showError) { + TestBase.logError("synth", e); + } + System.out.println("new TestSynth().init(test).testCase(" + seed + + "); // id=" + id + " " + e.toString()); + if (stopImmediately) { + System.exit(0); + } + return true; + } + return false; + } + + private void compareResults(ArrayList<Result> results) { + Result original = results.get(0); + for (int i = 1; i < results.size(); i++) { + Result copy = results.get(i); + if (original.compareTo(copy) != 0) { + if (showError) { + throw new AssertionError( + "Results don't match: original (0): \r\n" + + original + "\r\n" + "other:\r\n" + copy); + } + throw new 
AssertionError("Results don't match"); + } + } + } + + /** + * Get a random table. + * + * @return the table + */ + Table randomTable() { + return dbState.randomTable(); + } + + /** + * Print this message if the log is enabled. + * + * @param id the id + * @param s the message + */ + void log(int id, String s) { + if (showLog && id == 0) { + System.out.println(s); + } + } + + int getMode() { + return mode; + } + + private void addDatabase(String className, String url, String user, + String password, boolean useSentinel) { + DbConnection db = new DbConnection(this, className, url, user, + password, databases.size(), useSentinel); + databases.add(db); + } + + @Override + public TestBase init(TestAll conf) throws Exception { + super.init(conf); + deleteDb("synth/synth"); + databases = New.arrayList(); + + // mode = HSQLDB; + // addDatabase("org.hsqldb.jdbcDriver", "jdbc:hsqldb:test", "sa", "" ); + // addDatabase("org.h2.Driver", "jdbc:h2:synth;mode=hsqldb", "sa", ""); + + // mode = POSTGRESQL; + // addDatabase("org.postgresql.Driver", "jdbc:postgresql:test", "sa", + // "sa"); + // addDatabase("org.h2.Driver", "jdbc:h2:synth;mode=postgresql", "sa", + // ""); + + mode = H2_MEM; + org.h2.Driver.load(); + addDatabase("org.h2.Driver", "jdbc:h2:mem:synth", "sa", "", true); + addDatabase("org.h2.Driver", "jdbc:h2:" + + getBaseDir() + "/synth/synth", "sa", "", false); + + // addDatabase("com.mysql.jdbc.Driver", "jdbc:mysql://localhost/test", + // "sa", ""); + // addDatabase("org.h2.Driver", "jdbc:h2:synth;mode=mysql", "sa", ""); + + // addDatabase("com.mysql.jdbc.Driver", "jdbc:mysql://localhost/test", + // "sa", ""); + // addDatabase("org.ldbc.jdbc.jdbcDriver", + // "jdbc:ldbc:mysql://localhost/test", "sa", ""); + // addDatabase("org.h2.Driver", "jdbc:h2:memFS:synth", "sa", ""); + + // MySQL: NOT is bound to column: NOT ID = 1 means (NOT ID) = 1 instead + // of NOT (ID=1) + for (int i = 0; i < databases.size(); i++) { + DbConnection conn = (DbConnection) databases.get(i); + 
System.out.println(i + " = " + conn.toString()); + } + showError = true; + showLog = false; + + // stopImmediately = true; + // showLog = true; + // testRun(110600); // id=27 java.lang.Error: results don't match: + // original (0): + // System.exit(0); + + return this; + } + + private void testCase(int seed) throws Exception { + deleteDb("synth/synth"); + try { + printTime("TestSynth " + seed); + testRun(seed); + } catch (Error e) { + TestBase.logError("error", e); + System.exit(0); + } + } + + @Override + public void test() throws Exception { + while (true) { + int seed = MathUtils.randomInt(Integer.MAX_VALUE); + testCase(seed); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/Value.java b/modules/h2/src/test/java/org/h2/test/synth/sql/Value.java new file mode 100644 index 0000000000000..4a881259bea7e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/Value.java @@ -0,0 +1,338 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.sql; + +import java.math.BigDecimal; +import java.sql.Date; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; + +/** + * Represents a simple value. + */ +public class Value { + private final int type; + private final Object data; + private final TestSynth config; + + private Value(TestSynth config, int type, Object data) { + this.config = config; + this.type = type; + this.data = data; + } + + /** + * Convert the value to a SQL string. 
+ * + * @return the SQL string + */ + String getSQL() { + if (data == null) { + return "NULL"; + } + switch (type) { + case Types.DECIMAL: + case Types.NUMERIC: + case Types.BIGINT: + case Types.INTEGER: + case Types.DOUBLE: + case Types.REAL: + return data.toString(); + case Types.CLOB: + case Types.VARCHAR: + case Types.CHAR: + case Types.OTHER: + case Types.LONGVARCHAR: + return "'" + data.toString() + "'"; + case Types.BLOB: + case Types.BINARY: + case Types.VARBINARY: + case Types.LONGVARBINARY: + return getBlobSQL(); + case Types.DATE: + return getDateSQL((Date) data); + case Types.TIME: + return getTimeSQL((Time) data); + case Types.TIMESTAMP: + return getTimestampSQL((Timestamp) data); + case Types.BOOLEAN: + case Types.BIT: + return (String) data; + default: + throw new AssertionError("type=" + type); + } + } + + private static Date randomDate(TestSynth config) { + return config.random().randomDate(); + + } + + private static Double randomDouble(TestSynth config) { + return config.random().getInt(100) / 10.; + } + + private static Long randomLong(TestSynth config) { + return Long.valueOf(config.random().getInt(1000)); + } + + private static Time randomTime(TestSynth config) { + return config.random().randomTime(); + } + + private static Timestamp randomTimestamp(TestSynth config) { + return config.random().randomTimestamp(); + } + + private String getTimestampSQL(Timestamp ts) { + String s = "'" + ts.toString() + "'"; + if (config.getMode() != TestSynth.HSQLDB) { + s = "TIMESTAMP " + s; + } + return s; + } + + private String getDateSQL(Date date) { + String s = "'" + date.toString() + "'"; + if (config.getMode() != TestSynth.HSQLDB) { + s = "DATE " + s; + } + return s; + } + + private String getTimeSQL(Time time) { + String s = "'" + time.toString() + "'"; + if (config.getMode() != TestSynth.HSQLDB) { + s = "TIME " + s; + } + return s; + } + + private String getBlobSQL() { + byte[] bytes = (byte[]) data; + // StringBuilder buff = new StringBuilder("X'"); + 
StringBuilder buff = new StringBuilder("'"); + for (byte b : bytes) { + int c = b & 0xff; + buff.append(Integer.toHexString(c >> 4 & 0xf)); + buff.append(Integer.toHexString(c & 0xf)); + + } + buff.append("'"); + return buff.toString(); + } + + /** + * Read a value from a result set. + * + * @param config the configuration + * @param rs the result set + * @param index the column index + * @return the value + */ + static Value read(TestSynth config, ResultSet rs, int index) + throws SQLException { + ResultSetMetaData meta = rs.getMetaData(); + Object data; + int type = meta.getColumnType(index); + switch (type) { + case Types.REAL: + case Types.DOUBLE: + data = rs.getDouble(index); + break; + case Types.BIGINT: + data = rs.getLong(index); + break; + case Types.DECIMAL: + case Types.NUMERIC: + data = rs.getBigDecimal(index); + break; + case Types.BLOB: + case Types.BINARY: + case Types.VARBINARY: + case Types.LONGVARBINARY: + data = rs.getBytes(index); + break; + case Types.OTHER: + case Types.CLOB: + case Types.VARCHAR: + case Types.LONGVARCHAR: + case Types.CHAR: + data = rs.getString(index); + break; + case Types.DATE: + data = rs.getDate(index); + break; + case Types.TIME: + data = rs.getTime(index); + break; + case Types.TIMESTAMP: + data = rs.getTimestamp(index); + break; + case Types.INTEGER: + data = rs.getInt(index); + break; + case Types.NULL: + data = null; + break; + case Types.BOOLEAN: + case Types.BIT: + data = rs.getBoolean(index) ? "TRUE" : "FALSE"; + break; + default: + throw new AssertionError("type=" + type); + } + if (rs.wasNull()) { + data = null; + } + return new Value(config, type, data); + } + + /** + * Generate a random value. 
+ * + * @param config the configuration + * @param type the value type + * @param precision the precision + * @param scale the scale + * @param mayBeNull if the value may be null or not + * @return the value + */ + static Value getRandom(TestSynth config, int type, int precision, + int scale, boolean mayBeNull) { + Object data; + if (mayBeNull && config.random().getBoolean(20)) { + return new Value(config, type, null); + } + switch (type) { + case Types.BIGINT: + data = randomLong(config); + break; + case Types.DOUBLE: + data = randomDouble(config); + break; + case Types.DECIMAL: + data = randomDecimal(config, precision, scale); + break; + case Types.VARBINARY: + case Types.BINARY: + case Types.BLOB: + data = randomBytes(config, precision); + break; + case Types.CLOB: + case Types.VARCHAR: + data = config.random().randomString(config.random().getInt(precision)); + break; + case Types.DATE: + data = randomDate(config); + break; + case Types.TIME: + data = randomTime(config); + break; + case Types.TIMESTAMP: + data = randomTimestamp(config); + break; + case Types.INTEGER: + data = randomInt(config); + break; + case Types.BOOLEAN: + case Types.BIT: + data = config.random().getBoolean(50) ? 
"TRUE" : "FALSE"; + break; + default: + throw new AssertionError("type=" + type); + } + return new Value(config, type, data); + } + + private static Object randomInt(TestSynth config) { + int value; + if (config.is(TestSynth.POSTGRESQL)) { + value = config.random().getInt(1000000); + } else { + value = config.random().getRandomInt(); + } + return value; + } + + private static byte[] randomBytes(TestSynth config, int max) { + int len = config.random().getLog(max); + byte[] data = new byte[len]; + config.random().getBytes(data); + return data; + } + + private static BigDecimal randomDecimal(TestSynth config, int precision, + int scale) { + int len = config.random().getLog(precision - scale) + scale; + if (len == 0) { + len++; + } + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < len; i++) { + buff.append((char) ('0' + config.random().getInt(10))); + } + buff.insert(len - scale, '.'); + if (config.random().getBoolean(20)) { + buff.insert(0, '-'); + } + return new BigDecimal(buff.toString()); + } + +// private int compareTo(Object o) { +// Value v = (Value) o; +// if (type != v.type) { +// throw new AssertionError("compare " + type + +// " " + v.type + " " + data + " " + v.data); +// } +// if (data == null) { +// return (v.data == null) ? 
0 : -1; +// } else if (v.data == null) { +// return 1; +// } +// switch (type) { +// case Types.DECIMAL: +// return ((BigDecimal) data).compareTo((BigDecimal) v.data); +// case Types.BLOB: +// case Types.VARBINARY: +// case Types.BINARY: +// return compareBytes((byte[]) data, (byte[]) v.data); +// case Types.CLOB: +// case Types.VARCHAR: +// return data.toString().compareTo(v.data.toString()); +// case Types.DATE: +// return ((Date) data).compareTo((Date) v.data); +// case Types.INTEGER: +// return ((Integer) data).compareTo((Integer) v.data); +// default: +// throw new AssertionError("type=" + type); +// } +// } + +// private static int compareBytes(byte[] a, byte[] b) { +// int al = a.length, bl = b.length; +// int len = Math.min(al, bl); +// for (int i = 0; i < len; i++) { +// int x = a[i] & 0xff; +// int y = b[i] & 0xff; +// if (x == y) { +// continue; +// } +// return x > y ? 1 : -1; +// } +// return al == bl ? 0 : al > bl ? 1 : -1; +// } + + @Override + public String toString() { + return getSQL(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/sql/package.html b/modules/h2/src/test/java/org/h2/test/synth/sql/package.html new file mode 100644 index 0000000000000..015da2c22e798 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/sql/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A synthetic test using random SQL statements executed against multiple databases. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/synth/thread/TestMulti.java b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMulti.java new file mode 100644 index 0000000000000..f6a0319f2ef3e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMulti.java @@ -0,0 +1,63 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.thread; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; + +import org.h2.test.TestBase; + +/** + * Starts multiple threads and performs random operations on each thread. + */ +public class TestMulti extends TestBase { + + /** + * If set, the test should stop. + */ + public volatile boolean stop; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + org.h2.Driver.load(); + deleteDb("openClose"); + + // int len = getSize(5, 100); + int len = 10; + TestMultiThread[] threads = new TestMultiThread[len]; + for (int i = 0; i < len; i++) { + threads[i] = new TestMultiNews(this); + } + threads[0].first(); + for (int i = 0; i < len; i++) { + threads[i].start(); + } + Thread.sleep(10000); + this.stop = true; + for (int i = 0; i < len; i++) { + threads[i].join(); + } + threads[0].finalTest(); + } + + Connection getConnection() throws SQLException { + final String url = "jdbc:h2:" + getBaseDir() + + "/openClose;LOCK_MODE=3;DB_CLOSE_DELAY=-1"; + Connection conn = DriverManager.getConnection(url, "sa", ""); + return conn; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiNews.java b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiNews.java new file mode 100644 index 0000000000000..5b5cc917ff45f --- 
/dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiNews.java @@ -0,0 +1,134 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.thread; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +/** + * The operation part of {@link TestMulti}. + * Queries and updates a table. + */ +public class TestMultiNews extends TestMultiThread { + + private static final String PREFIX_URL = + "http://feeds.wizbangblog.com/WizbangFullFeed?m="; + + private static final int LEN = 10000; + private Connection conn; + + TestMultiNews(TestMulti base) throws SQLException { + super(base); + conn = base.getConnection(); + } + + @Override + void operation() throws SQLException { + if (random.nextInt(10) == 0) { + conn.close(); + conn = base.getConnection(); + } else if (random.nextInt(10) == 0) { + if (random.nextBoolean()) { + conn.commit(); + } else { + conn.rollback(); + } + } else if (random.nextInt(10) == 0) { + conn.setAutoCommit(random.nextBoolean()); + } else { + if (random.nextBoolean()) { + PreparedStatement prep; + if (random.nextBoolean()) { + prep = conn.prepareStatement( + "SELECT * FROM NEWS WHERE LINK = ?"); + } else { + prep = conn.prepareStatement( + "SELECT * FROM NEWS WHERE VALUE = ?"); + } + prep.setString(1, PREFIX_URL + random.nextInt(LEN)); + ResultSet rs = prep.executeQuery(); + if (!rs.next()) { + throw new SQLException("expected one row, got none"); + } + if (rs.next()) { + throw new SQLException("expected one row, got more"); + } + } else { + PreparedStatement prep = conn.prepareStatement( + "UPDATE NEWS SET STATE = ? 
WHERE FID = ?"); + prep.setInt(1, random.nextInt(100)); + prep.setInt(2, random.nextInt(LEN)); + int count = prep.executeUpdate(); + if (count != 1) { + throw new SQLException("expected one row, got " + count); + } + } + } + } + + @Override + void begin() { + // nothing to do + } + + @Override + void end() throws SQLException { + conn.close(); + } + + @Override + void finalTest() { + // nothing to do + } + + @Override + void first() throws SQLException { + Connection c = base.getConnection(); + Statement stat = c.createStatement(); + stat.execute("CREATE TABLE TEST (ID IDENTITY, NAME VARCHAR)"); + stat.execute("CREATE TABLE NEWS" + + "(FID NUMERIC(19) PRIMARY KEY, COMMENTS LONGVARCHAR, " + + "LINK VARCHAR(255), STATE INTEGER, VALUE VARCHAR(255))"); + stat.execute("CREATE INDEX IF NOT EXISTS " + + "NEWS_GUID_VALUE_INDEX ON NEWS(VALUE)"); + stat.execute("CREATE INDEX IF NOT EXISTS " + + "NEWS_LINK_INDEX ON NEWS(LINK)"); + stat.execute("CREATE INDEX IF NOT EXISTS " + + "NEWS_STATE_INDEX ON NEWS(STATE)"); + PreparedStatement prep = c.prepareStatement( + "INSERT INTO NEWS (FID, COMMENTS, LINK, STATE, VALUE) " + + "VALUES (?, ?, ?, ?, ?) 
"); + PreparedStatement prep2 = c.prepareStatement( + "INSERT INTO TEST (NAME) VALUES (?)"); + for (int i = 0; i < LEN; i++) { + int x = random.nextInt(10) * 128; + StringBuilder buff = new StringBuilder(); + while (buff.length() < x) { + buff.append("Test "); + buff.append(buff.length()); + buff.append(' '); + } + String comment = buff.toString(); + // FID + prep.setInt(1, i); + // COMMENTS + prep.setString(2, comment); + // LINK + prep.setString(3, PREFIX_URL + i); + // STATE + prep.setInt(4, 0); + // VALUE + prep.setString(5, PREFIX_URL + i); + prep.execute(); + prep2.setString(1, comment); + prep2.execute(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiNewsSimple.java b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiNewsSimple.java new file mode 100644 index 0000000000000..0acbe3e4a1da5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiNewsSimple.java @@ -0,0 +1,95 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.thread; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; + +/** + * The operation part of {@link TestMulti}. + * Executes simple queries and updates in a table. 
+ */ +public class TestMultiNewsSimple extends TestMultiThread { + + private static int newsCount = 10000; + + private final Connection conn; + + TestMultiNewsSimple(TestMulti base) throws SQLException { + super(base); + conn = base.getConnection(); + } + + private static int getNewsCount() { + return newsCount; + } + + @Override + void first() throws SQLException { + Connection c = base.getConnection(); + c.createStatement().execute("create table news" + + "(id identity, state int default 0, text varchar default '')"); + PreparedStatement prep = c.prepareStatement( + "insert into news() values()"); + for (int i = 0; i < newsCount; i++) { + prep.executeUpdate(); + } + c.createStatement().execute("update news set text = 'Text' || id"); + c.close(); + } + + @Override + void begin() { + // nothing to do + } + + @Override + void end() throws SQLException { + conn.close(); + } + + @Override + void operation() throws SQLException { + if (random.nextInt(10) == 0) { + conn.setAutoCommit(random.nextBoolean()); + } else if (random.nextInt(10) == 0) { + if (random.nextBoolean()) { + conn.commit(); + } else { + // conn.rollback(); + } + } else { + if (random.nextBoolean()) { + PreparedStatement prep = conn.prepareStatement( + "update news set state = ? 
where id = ?"); + prep.setInt(1, random.nextInt(getNewsCount())); + prep.setInt(2, random.nextInt(10)); + prep.execute(); + } else { + PreparedStatement prep = conn.prepareStatement( + "select * from news where id = ?"); + prep.setInt(1, random.nextInt(getNewsCount())); + ResultSet rs = prep.executeQuery(); + if (!rs.next()) { + System.out.println("No row found"); + // throw new AssertionError("No row found"); + } + if (rs.next()) { + System.out.println("Multiple rows found"); + // throw new AssertionError("Multiple rows found"); + } + } + } + } + + @Override + void finalTest() { + // nothing to do + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiOrder.java b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiOrder.java new file mode 100644 index 0000000000000..ef1968814cf11 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiOrder.java @@ -0,0 +1,167 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.thread; + +import java.math.BigDecimal; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.Random; + +/** + * The operation part of {@link TestMulti}. + * Queries and updates two tables. 
+ */ +public class TestMultiOrder extends TestMultiThread { + + private static int customerCount; + private static int orderCount; + private static int orderLineCount; + + private static final String[] ITEMS = { "Apples", "Oranges", + "Bananas", "Coffee" }; + + private Connection conn; + private PreparedStatement insertLine; + + TestMultiOrder(TestMulti base) throws SQLException { + super(base); + conn = base.getConnection(); + } + + @Override + void begin() throws SQLException { + insertLine = conn.prepareStatement("insert into orderLine" + + "(order_id, line_id, text, amount) values(?, ?, ?, ?)"); + insertCustomer(); + } + + @Override + void end() throws SQLException { + conn.close(); + } + + @Override + void operation() throws SQLException { + if (random.nextInt(10) == 0) { + insertCustomer(); + } else { + insertOrder(); + } + } + + private void insertOrder() throws SQLException { + PreparedStatement prep = conn.prepareStatement( + "insert into orders(customer_id , total) values(?, ?)"); + prep.setInt(1, random.nextInt(getCustomerCount())); + BigDecimal total = new BigDecimal("0"); + prep.setBigDecimal(2, total); + prep.executeUpdate(); + ResultSet rs = prep.getGeneratedKeys(); + rs.next(); + int orderId = rs.getInt(1); + int lines = random.nextInt(20); + for (int i = 0; i < lines; i++) { + insertLine.setInt(1, orderId); + insertLine.setInt(2, i); + insertLine.setString(3, ITEMS[random.nextInt(ITEMS.length)]); + BigDecimal amount = new BigDecimal( + random.nextInt(100) + "." + random.nextInt(10)); + insertLine.setBigDecimal(4, amount); + total = total.add(amount); + insertLine.addBatch(); + } + insertLine.executeBatch(); + increaseOrderLines(lines); + prep = conn.prepareStatement( + "update orders set total = ? 
where id = ?"); + prep.setBigDecimal(1, total); + prep.setInt(2, orderId); + increaseOrders(); + prep.execute(); + } + + private void insertCustomer() throws SQLException { + PreparedStatement prep = conn.prepareStatement( + "insert into customer(id, name) values(?, ?)"); + int customerId = getNextCustomerId(); + prep.setInt(1, customerId); + prep.setString(2, getString(customerId)); + prep.execute(); + } + + private static String getString(int id) { + StringBuilder buff = new StringBuilder(); + Random rnd = new Random(id); + int len = rnd.nextInt(40); + for (int i = 0; i < len; i++) { + String s = "bcdfghklmnprstwz"; + char c = s.charAt(rnd.nextInt(s.length())); + buff.append(i == 0 ? Character.toUpperCase(c) : c); + s = "aeiou "; + + buff.append(s.charAt(rnd.nextInt(s.length()))); + } + return buff.toString(); + } + + private static synchronized int getNextCustomerId() { + return customerCount++; + } + + private static synchronized int increaseOrders() { + return orderCount++; + } + + private static synchronized int increaseOrderLines(int count) { + return orderLineCount += count; + } + + private static int getCustomerCount() { + return customerCount; + } + + @Override + void first() throws SQLException { + Connection c = base.getConnection(); + c.createStatement().execute("drop table customer if exists"); + c.createStatement().execute("drop table orders if exists"); + c.createStatement().execute("drop table orderLine if exists"); + c.createStatement().execute("create table customer(" + + "id int primary key, name varchar, account decimal)"); + c.createStatement().execute("create table orders(" + + "id int identity primary key, customer_id int, total decimal)"); + c.createStatement().execute("create table orderLine(" + + "order_id int, line_id int, text varchar, " + + "amount decimal, primary key(order_id, line_id))"); + c.close(); + } + + @Override + void finalTest() throws SQLException { + conn = base.getConnection(); + ResultSet rs = 
conn.createStatement().executeQuery( + "select count(*) from customer"); + rs.next(); + base.assertEquals(customerCount, rs.getInt(1)); + // System.out.println("customers: " + rs.getInt(1)); + + rs = conn.createStatement().executeQuery( + "select count(*) from orders"); + rs.next(); + base.assertEquals(orderCount, rs.getInt(1)); + // System.out.println("orders: " + rs.getInt(1)); + + rs = conn.createStatement().executeQuery( + "select count(*) from orderLine"); + rs.next(); + base.assertEquals(orderLineCount, rs.getInt(1)); + // System.out.println("orderLines: " + rs.getInt(1)); + + conn.close(); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiThread.java b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiThread.java new file mode 100644 index 0000000000000..d4d9799c248d0 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/thread/TestMultiThread.java @@ -0,0 +1,72 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.synth.thread; + +import java.sql.SQLException; +import java.util.Random; + +import org.h2.test.TestBase; + +/** + * This is an abstract operation for {@link TestMulti}. + */ +abstract class TestMultiThread extends Thread { + + /** + * The base object. + */ + TestMulti base; + + /** + * The random number generator. + */ + Random random = new Random(); + + TestMultiThread(TestMulti base) { + this.base = base; + } + + /** + * Execute statements that need to be executed before starting the thread. + * This includes CREATE TABLE statements. + */ + abstract void first() throws SQLException; + + /** + * The main operation to perform. This method is called in a loop. + */ + abstract void operation() throws SQLException; + + /** + * Execute statements before entering the loop, but after starting the + * thread. 
+ */ + abstract void begin() throws SQLException; + + /** + * This method is called once after the test is stopped. + */ + abstract void end() throws SQLException; + + /** + * This method is called once after all threads have been stopped. + */ + abstract void finalTest() throws SQLException; + + @Override + public void run() { + try { + begin(); + while (!base.stop) { + operation(); + } + end(); + } catch (Throwable e) { + TestBase.logError("error", e); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/synth/thread/package.html b/modules/h2/src/test/java/org/h2/test/synth/thread/package.html new file mode 100644 index 0000000000000..88b15cc638555 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/synth/thread/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Synthetic tests using random operations in multiple threads. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/todo/TestDiskSpaceLeak.java b/modules/h2/src/test/java/org/h2/test/todo/TestDiskSpaceLeak.java new file mode 100644 index 0000000000000..0ae7d597f0081 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/TestDiskSpaceLeak.java @@ -0,0 +1,80 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.todo; + +import java.io.File; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.ResultSet; +import java.sql.SQLException; +import org.h2.jdbc.JdbcConnection; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Recover; +import org.h2.util.JdbcUtils; + +/** + * A test to detect disk space leaks when killing a process. + */ +public class TestDiskSpaceLeak { + + /** + * Run just this test. + * + * @param args ignored + */ + public static void main(String... args) throws Exception { + DeleteDbFiles.execute("data", null, true); + Class.forName("org.h2.Driver"); + Connection conn; + long before = 0; + for (int i = 0; i < 10; i++) { + conn = DriverManager.getConnection("jdbc:h2:data/test"); + ResultSet rs; + rs = conn.createStatement().executeQuery( + "select count(*) from information_schema.lobs"); + rs.next(); + System.out.println("lobs: " + rs.getInt(1)); + rs = conn.createStatement().executeQuery( + "select count(*) from information_schema.lob_map"); + rs.next(); + System.out.println("lob_map: " + rs.getInt(1)); + rs = conn.createStatement().executeQuery( + "select count(*) from information_schema.lob_data"); + rs.next(); + System.out.println("lob_data: " + rs.getInt(1)); + conn.close(); + Recover.execute("data", "test"); + new File("data/test.h2.sql").renameTo(new File("data/test." 
+ i + ".sql")); + conn = DriverManager.getConnection("jdbc:h2:data/test"); + // ((JdbcConnection) conn).setPowerOffCount(i); + ((JdbcConnection) conn).setPowerOffCount(28); + String last = "connect"; + try { + conn.createStatement().execute("drop table test if exists"); + last = "drop"; + conn.createStatement().execute("create table test(id identity, b blob)"); + last = "create"; + conn.createStatement().execute("insert into test values(1, space(10000))"); + last = "insert"; + conn.createStatement().execute("delete from test"); + last = "delete"; + conn.createStatement().execute("insert into test values(1, space(10000))"); + last = "insert2"; + conn.createStatement().execute("delete from test"); + last = "delete2"; + } catch (SQLException e) { + // ignore + } finally { + JdbcUtils.closeSilently(conn); + } + long now = new File("data/test.h2.db").length(); + long diff = now - before; + before = now; + System.out.println(now + " " + diff + " " + i + " " + last); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/todo/TestDropTableLarge.java b/modules/h2/src/test/java/org/h2/test/todo/TestDropTableLarge.java new file mode 100644 index 0000000000000..b7490723e10e2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/TestDropTableLarge.java @@ -0,0 +1,58 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.todo; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.tools.DeleteDbFiles; +import org.h2.util.Profiler; + +/** + * Test the performance of dropping large tables + */ +public class TestDropTableLarge { + + /** + * Run just this test. + * + * @param args ignored + */ + public static void main(String... 
args) throws Exception { + // System.setProperty("h2.largeTransactions", "true"); + TestDropTableLarge.test(); + } + + private static void test() throws SQLException { + DeleteDbFiles.execute("data", "test", true); + Connection conn = DriverManager.getConnection("jdbc:h2:data/test"); + Statement stat = conn.createStatement(); + stat.execute("create table test1(id identity, name varchar)"); + stat.execute("create table test2(id identity, name varchar)"); + conn.setAutoCommit(true); + // use two tables to make sure the data stored on disk is not too simple + PreparedStatement prep1 = conn.prepareStatement( + "insert into test1(name) values(space(255))"); + PreparedStatement prep2 = conn.prepareStatement( + "insert into test2(name) values(space(255))"); + for (int i = 0; i < 50000; i++) { + if (i % 7 != 0) { + prep1.execute(); + } else { + prep2.execute(); + } + } + Profiler prof = new Profiler(); + prof.startCollecting(); + stat.execute("DROP TABLE test1"); + prof.stopCollecting(); + System.out.println(prof.getTop(3)); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/todo/TestLinkedTableFullCondition.java b/modules/h2/src/test/java/org/h2/test/todo/TestLinkedTableFullCondition.java new file mode 100644 index 0000000000000..bf3646525eeb3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/TestLinkedTableFullCondition.java @@ -0,0 +1,44 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.todo; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.Statement; +import org.h2.tools.DeleteDbFiles; + +/** + * The complete condition should be sent to a linked table, not just the index + * condition. + */ +public class TestLinkedTableFullCondition { + + /** + * Run just this test. + * + * @param args ignored + */ + public static void main(String... 
args) throws Exception { + DeleteDbFiles.execute("data", null, true); + Class.forName("org.h2.Driver"); + Connection conn; + conn = DriverManager.getConnection("jdbc:h2:data/test"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello')"); + stat.execute("insert into test values(2, 'World')"); + stat.execute("create linked table test_link" + + "('', 'jdbc:h2:data/test', '', '', 'TEST')"); + stat.execute("set trace_level_system_out 2"); + // the query sent to the linked database is + // SELECT * FROM PUBLIC.TEST T WHERE ID>=? AND ID<=? {1: 1, 2: 1}; + // it should also include AND NAME='Hello' + stat.execute("select * from test_link " + + "where id = 1 and name = 'Hello'"); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/todo/TestTempTableCrash.java b/modules/h2/src/test/java/org/h2/test/todo/TestTempTableCrash.java new file mode 100644 index 0000000000000..fc657da2001e7 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/TestTempTableCrash.java @@ -0,0 +1,83 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.todo; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.Statement; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.store.fs.FilePathRec; +import org.h2.test.unit.TestReopen; +import org.h2.tools.DeleteDbFiles; + +/** + * Test crashing a database by creating a lot of temporary tables. + */ +public class TestTempTableCrash { + + /** + * Run just this test. 
+ * + * @param args ignored + */ + public static void main(String[] args) throws Exception { + TestTempTableCrash.test(); + } + + private static void test() throws Exception { + Connection conn; + Statement stat; + + System.setProperty("h2.delayWrongPasswordMin", "0"); + System.setProperty("h2.check2", "false"); + FilePathRec.register(); + System.setProperty("reopenShift", "4"); + TestReopen reopen = new TestReopen(); + FilePathRec.setRecorder(reopen); + + String url = "jdbc:h2:rec:memFS:data;PAGE_SIZE=64;ANALYZE_AUTO=100"; + // String url = "jdbc:h2:" + RecordingFileSystem.PREFIX + + // "data/test;PAGE_SIZE=64"; + + Class.forName("org.h2.Driver"); + DeleteDbFiles.execute("data", "test", true); + conn = DriverManager.getConnection(url, "sa", "sa"); + stat = conn.createStatement(); + + Random random = new Random(1); + long start = System.nanoTime(); + for (int i = 0; i < 10000; i++) { + long now = System.nanoTime(); + if (now > start + TimeUnit.SECONDS.toNanos(1)) { + System.out.println("i: " + i); + start = now; + } + int x; + x = random.nextInt(100); + stat.execute("drop table if exists test" + x); + String type = random.nextBoolean() ? 
"temp" : ""; + // String type = ""; + stat.execute("create " + type + " table test" + x + + "(id int primary key, name varchar)"); + if (random.nextBoolean()) { + stat.execute("create index idx_" + x + " on test" + x + "(name, id)"); + } + if (random.nextBoolean()) { + stat.execute("insert into test" + x + " select x, x " + + "from system_range(1, " + random.nextInt(100) + ")"); + } + if (random.nextInt(10) == 1) { + conn.close(); + conn = DriverManager.getConnection(url, "sa", "sa"); + stat = conn.createStatement(); + } + } + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/todo/TestUndoLogLarge.java b/modules/h2/src/test/java/org/h2/test/todo/TestUndoLogLarge.java new file mode 100644 index 0000000000000..b868b5c27899d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/TestUndoLogLarge.java @@ -0,0 +1,55 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.todo; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.concurrent.TimeUnit; + +import org.h2.tools.DeleteDbFiles; + +/** + * A test with an undo log size of 2 GB. + */ +public class TestUndoLogLarge { + + /** + * Run just this test. + * + * @param args ignored + */ + public static void main(String... 
args) throws Exception { + // System.setProperty("h2.largeTransactions", "true"); + TestUndoLogLarge.test(); + } + + private static void test() throws SQLException { + DeleteDbFiles.execute("target/data", "test", true); + Connection conn = DriverManager.getConnection("jdbc:h2:target/data/test"); + Statement stat = conn.createStatement(); + stat.execute("set max_operation_memory 100"); + stat.execute("set max_memory_undo 100"); + stat.execute("create table test(id identity, name varchar)"); + conn.setAutoCommit(false); + PreparedStatement prep = conn.prepareStatement( + "insert into test(name) values(space(1024*1024))"); + long time = System.nanoTime(); + for (int i = 0; i < 2500; i++) { + prep.execute(); + long now = System.nanoTime(); + if (now > time + TimeUnit.SECONDS.toNanos(5)) { + System.out.println(i); + time = now + TimeUnit.SECONDS.toNanos(5); + } + } + conn.rollback(); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/todo/TestUndoLogMemory.java b/modules/h2/src/test/java/org/h2/test/todo/TestUndoLogMemory.java new file mode 100644 index 0000000000000..826ae7909423f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/TestUndoLogMemory.java @@ -0,0 +1,83 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.todo; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.tools.DeleteDbFiles; + +/** + * A test to reproduce out of memory using a large operation. + */ +public class TestUndoLogMemory { + + /** + * Run just this test. + * + * @param args ignored + */ + public static void main(String... 
args) throws Exception { + TestUndoLogMemory.test(10, "null"); + TestUndoLogMemory.test(100, "space(100000)"); + // new TestUndoLogMemory().test(100000, "null"); + // new TestUndoLogMemory().test(1000, "space(100000)"); + } + + private static void test(int count, String defaultValue) throws SQLException { + + // -Xmx1m -XX:+HeapDumpOnOutOfMemoryError + DeleteDbFiles.execute("data", "test", true); + Connection conn = DriverManager.getConnection( + "jdbc:h2:data/test;large_transactions=true"); + Statement stat = conn.createStatement(); + stat.execute("set cache_size 32"); + stat.execute("SET max_operation_memory 100"); + stat.execute("SET max_memory_undo 100"); + conn.setAutoCommit(false); + + // also a problem: tables without unique index + System.out.println("create--- " + count + " " + defaultValue); + stat.execute("create table test(id int, name varchar default " + + defaultValue + " )"); + System.out.println("insert---"); + stat.execute("insert into test(id) select x from system_range(1, " + + count + ")"); + System.out.println("rollback---"); + conn.rollback(); + + System.out.println("drop---"); + stat.execute("drop table test"); + System.out.println("create---"); + stat.execute("create table test" + + "(id int primary key, name varchar default " + + defaultValue + " )"); + + // INSERT problem + System.out.println("insert---"); + stat.execute( + "insert into test(id) select x from system_range(1, "+count+")"); + System.out.println("delete---"); + stat.execute("delete from test"); + + // DELETE problem + System.out.println("insert---"); + PreparedStatement prep = conn.prepareStatement( + "insert into test(id) values(?)"); + for (int i = 0; i < count; i++) { + prep.setInt(1, i); + prep.execute(); + } + System.out.println("delete---"); + stat.execute("delete from test"); + + System.out.println("close---"); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/todo/columnAlias.txt b/modules/h2/src/test/java/org/h2/test/todo/columnAlias.txt new 
file mode 100644 index 0000000000000..0c66332f69b9d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/columnAlias.txt @@ -0,0 +1,31 @@ +DROP TABLE TEST; +CREATE TABLE TEST(ID INT); +INSERT INTO TEST VALUES(1); +INSERT INTO TEST VALUES(2); + +SELECT ID AS A FROM TEST WHERE A>0; +-- Yes: HSQLDB +-- Fail: Oracle, MS SQL Server, PostgreSQL, MySQL, H2, Derby + +SELECT ID AS A FROM TEST ORDER BY A; +-- Yes: Oracle, MS SQL Server, PostgreSQL, MySQL, H2, Derby, HSQLDB + +SELECT ID AS A FROM TEST ORDER BY -A; +-- Yes: Oracle, MySQL, HSQLDB +-- Fail: MS SQL Server, PostgreSQL, H2, Derby + +SELECT ID AS A FROM TEST GROUP BY A; +-- Yes: PostgreSQL, MySQL, HSQLDB +-- Fail: Oracle, MS SQL Server, H2, Derby + +SELECT ID AS A FROM TEST GROUP BY -A; +-- Yes: MySQL, HSQLDB +-- Fail: Oracle, MS SQL Server, PostgreSQL, H2, Derby + +SELECT ID AS A FROM TEST GROUP BY ID HAVING A>0; +-- Yes: MySQL, HSQLDB +-- Fail: Oracle, MS SQL Server, PostgreSQL, H2, Derby + +SELECT COUNT(*) AS A FROM TEST GROUP BY ID HAVING A>0; +-- Yes: MySQL, HSQLDB +-- Fail: Oracle, MS SQL Server, PostgreSQL, H2, Derby diff --git a/modules/h2/src/test/java/org/h2/test/todo/dateFunctions.txt b/modules/h2/src/test/java/org/h2/test/todo/dateFunctions.txt new file mode 100644 index 0000000000000..4f13eab11a85a --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/dateFunctions.txt @@ -0,0 +1,10 @@ +h2 +update FOO set a = dateadd('second', 4320000, a); +ms sql server +update FOO set a = dateadd(s, 4320000, a); +mysql +update FOO set a = date_add(a, interval 4320000 second); +postgresql +update FOO set a = a + interval '4320000 s'; +oracle +update FOO set a = a + INTERVAL '4320000' SECOND; diff --git a/modules/h2/src/test/java/org/h2/test/todo/package.html b/modules/h2/src/test/java/org/h2/test/todo/package.html new file mode 100644 index 0000000000000..0b7c8e40c4600 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Documentation and tests for open issues. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/todo/recursiveQueries.txt b/modules/h2/src/test/java/org/h2/test/todo/recursiveQueries.txt new file mode 100644 index 0000000000000..da21c32a6ca74 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/recursiveQueries.txt @@ -0,0 +1,130 @@ +WITH DirectReports (ManagerID, EmployeeID, Title, DeptID, Level) +AS +( +-- Anchor member definition + SELECT e.ManagerID, e.EmployeeID, e.Title, edh.DepartmentID, + 0 AS Level + FROM Employee AS e + INNER JOIN EmployeeDepartmentHistory AS edh + ON e.EmployeeID = edh.EmployeeID AND edh.EndDate IS NULL + WHERE ManagerID IS NULL + UNION ALL +-- Recursive member definition + SELECT e.ManagerID, e.EmployeeID, e.Title, edh.DepartmentID, + Level + 1 + FROM Employee AS e + INNER JOIN EmployeeDepartmentHistory AS edh + ON e.EmployeeID = edh.EmployeeID AND edh.EndDate IS NULL + INNER JOIN DirectReports AS d + ON e.ManagerID = d.EmployeeID +) +-- Statement that executes the CTE +SELECT ManagerID, EmployeeID, Title, Level +FROM DirectReports +INNER JOIN Department AS dp + ON DirectReports.DeptID = dp.DepartmentID +WHERE dp.GroupName = N'Research and Development' OR Level = 0; +GO + + DROP VIEW IF EXISTS TEST_REC; + DROP VIEW IF EXISTS TEST_2; + DROP TABLE IF EXISTS TEST; + + CREATE TABLE TEST(ID INT PRIMARY KEY, PARENT INT, NAME VARCHAR(255)); + INSERT INTO TEST VALUES(1, NULL, 'Root'); + INSERT INTO TEST VALUES(2, 1, 'Plant'); + INSERT INTO TEST VALUES(3, 1, 'Animal'); + INSERT INTO TEST VALUES(4, 2, 'Tree'); + INSERT INTO TEST VALUES(5, 2, 'Flower'); + INSERT INTO TEST VALUES(6, 3, 'Elephant'); + INSERT INTO TEST VALUES(7, 3, 'Dog'); + + CREATE FORCE VIEW TEST_2(ID, PARENT, NAME) AS SELECT ID, PARENT, NAME FROM TEST_REC; + + CREATE FORCE VIEW TEST_REC(ID, PARENT, NAME) AS + SELECT ID, PARENT, NAME FROM TEST T + WHERE PARENT IS NULL + UNION ALL + SELECT T.ID, T.PARENT, T.NAME + FROM TEST T, TEST_2 R + WHERE T.PARENT=R.ID; + + SELECT * FROM TEST_REC; 
+ +------------ + + DROP VIEW IF EXISTS TEST_REC; + DROP VIEW IF EXISTS TEST_2; + DROP TABLE IF EXISTS TEST; + + CREATE TABLE TEST(ID INT PRIMARY KEY, PARENT INT, NAME VARCHAR(255)); + INSERT INTO TEST VALUES(1, NULL, 'Root'); + INSERT INTO TEST VALUES(2, 1, 'Plant'); + INSERT INTO TEST VALUES(3, 1, 'Animal'); + INSERT INTO TEST VALUES(4, 2, 'Tree'); + INSERT INTO TEST VALUES(5, 2, 'Flower'); + INSERT INTO TEST VALUES(6, 3, 'Elephant'); + INSERT INTO TEST VALUES(7, 3, 'Dog'); + + CREATE VIEW RECURSIVE TEST_REC(ID, PARENT, NAME) AS + SELECT ID, PARENT, NAME FROM TEST T + WHERE PARENT IS NULL + UNION ALL + SELECT T.ID, T.PARENT, T.NAME + FROM TEST T, TEST_REC R + WHERE T.PARENT=R.ID; + + SELECT * FROM TEST_REC; + +---------------- + + +CREATE LOCAL TEMPORARY TABLE test (family_name VARCHAR_IGNORECASE(63) NOT NULL); +INSERT INTO test VALUES('Smith'); +INSERT INTO test VALUES('de Smith'); +INSERT INTO test VALUES('el Smith'); +INSERT INTO test VALUES('von Smith'); +SELECT * FROM test WHERE family_name IN ('de Smith', 'Smith'); +-- okay IN(...) with TABLE_SCAN + +SELECT * FROM test WHERE family_name BETWEEN 'd' AND 'T'; +-- okay, ignorecase honoured + +SELECT * FROM test WHERE family_name BETWEEN 'D' AND 'T'; +-- okay, ignorecase honoured + +CREATE INDEX family_name ON test(family_name); +SELECT * FROM test WHERE family_name IN ('de Smith', 'Smith'); +-- OOPS, the comparison's operands are sorted incorrectly for ignorecase! + +EXPLAIN SELECT * FROM test WHERE family_name IN ('de Smith', 'Smith'); + + +------------------- + + +complete recursive views: + +drop all objects; +create table parent(id int primary key, parent int); +insert into parent values(1, null), (2, 1), (3, 1); + +with test_view(id, parent) as +select id, parent from parent where id = ? 
+union all +select parent.id, parent.parent from test_view, parent +where parent.parent = test_view.id +select * from test_view {1: 1}; + +drop view test_view; + +with test_view(id, parent) as +select id, parent from parent where id = 1 +union all +select parent.id, parent.parent from test_view, parent +where parent.parent = test_view.id +select * from test_view; + +drop view test_view; + +drop table parent; diff --git a/modules/h2/src/test/java/org/h2/test/todo/supportTemplates.txt b/modules/h2/src/test/java/org/h2/test/todo/supportTemplates.txt new file mode 100644 index 0000000000000..76bb9b6eaf244 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/supportTemplates.txt @@ -0,0 +1,251 @@ +Old issue tracking + +Please send a question to the H2 Google Group or StackOverflow first, +and only then, once you are completely sure it is an issue, submit it here. +The reason is that only very few people actively monitor the issue tracker. + +Before submitting a bug, please also check the FAQ: +http://www.h2database.com/html/faq.html + +What steps will reproduce the problem? +(simple SQL scripts or simple standalone applications are preferred) +1. +2. +3. + +What is the expected output? What do you see instead? + + +What version of the product are you using? On what operating system, file +system, and virtual machine? + + +Do you know a workaround? + +What is your use case, meaning why do you need this feature? + +How important/urgent is the problem for you? + +Please provide any additional information below. + +------------------ +Benchmark: + +Hi, + +If you are running benchmarks, could you profile them please? To do that use java -Xrunhprof:cpu=samples,depth=8 ... - and then upload/post the file java.hprof.txt + +If the benchmark application is not a self contained application, another way is to generate 'full thread dumps' (maybe 30 or so) while running the benchmark, and upload them? 
To get a full thread dump, you can use "kill -QUIT <pid>", or (Windows) press Ctrl+Pause, or (Java 1.6) use jps -l to get the <pid>, and then use jstack <pid>. + +Regards, +Thomas + +----------------- + +Can not reproduce: + +Hi, + +Sorry I can not reproduce this problem. Could you post a simple, standalone test case that reproduces the problem? It would be great if the test case does not have any dependencies except the H2 jar file (that is, a simple SQL script that can be run in the H2 Console, or a Java class uses the JDBC API and is run using a static main method). If the test case does have dependencies it would help a lot if you upload them somewhere (for example Rapidshare.com). Please include any initialization code (CREATE TABLE, INSERT and so on) in the Java class or in a .sql script file. + +Regards, +Thomas + +----------------- + +Defect report: + +What steps will reproduce the problem? +(simple SQL scripts or simple standalone applications are preferred) +1. +2. +3. + +What is the expected output? What do you see instead? + + +What version of the product are you using? On what operating system, file system, and virtual machine? + + +Do you know a workaround? + +How important/urgent is the problem for you? + +In your view, is this a defect or a feature request? + +Please provide any additional information below. + +----------------- +Hi, + +This looks like a corrupt database. To recover the data, use the tool org.h2.tools.Recover to create the SQL script file, and then re-create the database using this script. Does it work when you do this? + +With version 1.3.171 and older: when using local temporary tables and not dropping them manually before closing the session, and then killing the process could result in a database that couldn't be opened. + +With version 1.3.162 and older: on out of disk space, the database can get corrupt sometimes, if later write operations succeed. 
The same problem happens on other kinds of I/O exceptions (where one or some of the writes fail, but subsequent writes succeed). Now the file is closed on the first unsuccessful write operation, so that later requests fail consistently. + +Important corruption problems were fixed in version 1.2.135 and version 1.2.140 (see the change log). Known causes for corrupt databases are: if the database was created or used with a version older than 1.2.135, and the process was killed while the database was closing or writing a checkpoint. Using the transaction isolation level READ_UNCOMMITTED (LOCK_MODE 0) while at the same time using multiple connections. Disabling database file protection using (setting FILE_LOCK to NO in the database URL). Some other areas that are not fully tested are: Platforms other than Windows XP, Linux, Mac OS X, or JVMs other than Sun 1.5 or 1.6; the feature MULTI_THREADED; the features AUTO_SERVER and AUTO_RECONNECT; the file locking method 'Serialized'. + +I am very interested in analyzing and solving this problem. Corruption problems have top priority for me. I have a few questions: + +- What is your database URL? +- Did you use LOG=0 or LOG=1? Did you read the FAQ about it? +- Did the system ever run out of disk space? +- Could you send the full stack trace of the exception including message text? +- Did you use SHUTDOWN DEFRAG or the database setting DEFRAG_ALWAYS with H2 version 1.3.159 or older? +- How many connections does your application use concurrently? +- Do you use temporary tables? +- With which version of H2 was this database created? + You can find it out using: + select * from information_schema.settings where name='CREATE_BUILD' + or have a look in the SQL script created by the recover tool. +- Did the application run out of memory (once, or multiple times)? +- Do you use any settings or special features (for example cache settings, + two phase commit, linked tables)? +- Do you use any H2-specific system properties? 
+- Is the application multi-threaded? +- What operating system, file system, and virtual machine + (java -version) do you use? +- How did you start the Java process (java -Xmx... and so on)? +- Is it (or was it at some point) a networked file system? +- How big is the database (file sizes)? +- How much heap memory does the Java process have? +- Is the database usually closed normally, or is process terminated + forcefully or the computer switched off? +- Is it possible to reproduce this problem using a fresh database + (sometimes, or always)? +- Are there any other exceptions (maybe in the .trace.db file)? + Could you send them please? +- Do you still have any .trace.db files, and if yes could you send them? +- Could you send the .h2.db file where this exception occurs? + +Regards, +Thomas + +----------------- + + +This looks like a serious problem. I have a few questions: + +- Could you send the full stack trace of the exception including message text? +- What is your database URL? +- Did you use multiple connections? +- Do you use temporary tables? +- A workarounds is: use the tool org.h2.tools.Recover to create + the SQL script file, and then re-create the database using this script. + Does it work when you do this? +- With which version of H2 was this database created? + You can find it out using: + select * from information_schema.settings where name='CREATE_BUILD' + or have a look in the SQL script created by the recover tool. +- Did the application run out of memory (once, or multiple times)? +- Do you use any settings or special features (for example cache settings, + two phase commit, linked tables)? +- Do you use any H2-specific system properties? +- Is the application multi-threaded? +- What operating system, file system, and virtual machine + (java -version) do you use? +- How did you start the Java process (java -Xmx... and so on)? +- Is it (or was it at some point) a networked file system? +- How big is the database (file sizes)? 
+- How much heap memory does the Java process have? +- Is the database usually closed normally, or is process terminated + forcefully or the computer switched off? +- Is it possible to reproduce this problem using a fresh database + (sometimes, or always)? +- Are there any other exceptions (maybe in the .trace.db file)? + Could you send them please? +- Do you still have any .trace.db files, and if yes could you send them? +- Could you send the .h2.db file where this exception occurs? + +----------------- + + +Hi, + +I have a few questions: + +- The database URL? +- With which version of H2 was this database created? + You can find it out using: + select * from information_schema.settings where name='CREATE_BUILD' +- Did the application run out of memory (once, or multiple times)? +- Did you use multiple connections? +- Do you use any settings or special features (for example, the setting + LOG=0, or two phase commit, linked tables, cache settings)? +- Is the application multi-threaded? +- What operating system, file system, and virtual machine + (java -version) do you use? +- How big is the database (file sizes)? +- Is the database usually closed normally, or is process terminated + forcefully or the computer switched off? +- Are there any other exceptions (maybe in the .trace.db file)? + Could you send them please? +- Do you still have any .trace.db files, and if yes could you send them? +- Could you send the .h2.db file where this exception occurs? + + + + +Corrupted database + +I am sorry to say that, but it looks like a corruption problem. I am very interested in analyzing and solving this problem. Corruption problems have top priority for me. I have a few questions: + +- Could you send the full stack trace of the exception including message text? +- What is your database URL? +- Do you use Tomcat or another web server? + Do you unload or reload the web application? 
+- You can find out if the database is corrupted when running + SCRIPT TO 'test.sql' +- What version H2 are you using? +- Did you use multiple connections? +- The first workarounds is: append ;RECOVER=1 to the database URL. + Does it work when you do this? +- The second workarounds is: delete the index.db file (it is re-created + automatically) and try again. Does it work when you do this? +- The third workarounds is: use the tool org.h2.tools.Recover to create + the SQL script file, and then re-create the database using this script. + Does it work when you do this? +- With which version of H2 was this database created? + You can find it out using: + select * from information_schema.settings where name='CREATE_BUILD' +- Did the application run out of memory (once, or multiple times)? +- Do you use any settings or special features (for example, the setting + LOG=0, or two phase commit, linked tables, cache settings)? +- Is the application multi-threaded? +- What operating system, file system, and virtual machine + (java -version) do you use? +- Is it (or was it at some point) a networked file system? +- How big is the database (file sizes)? +- Is the database usually closed normally, or is process terminated + forcefully or the computer switched off? +- Is it possible to reproduce this problem using a fresh database + (sometimes, or always)? +- Are there any other exceptions (maybe in the .trace.db file)? + Could you send them please? +- Do you still have any .trace.db files, and if yes could you send them? +- Could you send the .data.db file where this exception occurs? + + + +Hi, + +I have a few questions: + +- The database URL? +- With which version of H2 was this database created? + You can find it out using: + select * from information_schema.settings where name='CREATE_BUILD' +- Did you use multiple connections? +- Do you use any settings or special features (for example, the setting + LOG=0, or two phase commit, linked tables, cache settings)? 
+- Is the application multi-threaded? +- What operating system, file system, and virtual machine + (java -version) do you use? +- How big is the database (file sizes)? +- Is the database usually closed normally, or is process terminated + forcefully or the computer switched off? +- Are there any other exceptions (maybe in the .trace.db file)? + Could you send them please? +- Do you still have any .trace.db files, and if yes could you send them? +- Could you send the .h2.db file where this exception occurs? diff --git a/modules/h2/src/test/java/org/h2/test/todo/todo.txt b/modules/h2/src/test/java/org/h2/test/todo/todo.txt new file mode 100644 index 0000000000000..f306fd728bdc4 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/todo.txt @@ -0,0 +1,94 @@ +TIMESTAMPDIFF +----------------- +Test cases: +select timestampdiff(year, '2000-01-01', '2001-01-01'); +select timestampdiff(year, '2000-03-01', '2001-03-01'); +select timestampdiff(day, '2000-03-01', '2001-03-01'); +select timestampdiff(year, '2000-02-29', '2001-02-28'); +select timestampdiff(day, '2000-02-29', '2001-02-28'); +select timestampdiff(month, '2000-02-29', '2001-02-28'); +select timestampadd(year, 1, '2000-02-29'); + + +Auto Upgrade +----------------- +file conversion should be done automatically when the new engine connects. 
+ +auto-upgrade application: +check if new version is available +(option: digital signature) +if yes download new version +(option: http, https, ftp) +backup database to SQL script +(option: list of databases, use recovery mechanism) +install new version + +ftp client +task to download new version from another HTTP / HTTPS / FTP server +multi-task + + +Direct Lookup +----------------- +drop table test; +create table test(id int, version int, idx int); +@LOOP 1000 insert into test values(1, 1, ?); +@LOOP 1000 insert into test values(1, 2, ?); +@LOOP 1000 insert into test values(2, 1, ?); +create index idx_test on test(id, version, idx); +@LOOP 1000 select max(id)+1 from test; +@LOOP 1000 select max(idx)+1 from test where id=1 and version=2; +@LOOP 1000 select max(id)+1 from test; +@LOOP 1000 select max(idx)+1 from test where id=1 and version=2; +@LOOP 1000 select max(id)+1 from test; +@LOOP 1000 select max(idx)+1 from test where id=1 and version=2; +-- should be direct query + + +Update Multiple Tables with Merge +----------------- +drop table statisticlog; +create table statisticlog(id int primary key, datatext varchar, moment int); +@LOOP 20000 insert into statisticlog values(?, ?, ?); +merge into statisticlog(id, datatext) key(id) +select id, 'data1' from statisticlog order by moment limit 5; +select * from statisticlog where id < 10; +UPDATE statisticlog SET datatext = 'data2' +WHERE id IN (SELECT id FROM statisticlog ORDER BY moment LIMIT 5); +select * from statisticlog where id < 10; + +Auto-Reconnect +----------------- +Implemented: +- auto_server includes auto_reconnect +- works with server mode +- works with auto_server mode +- keep temporary linked tables, variables on client +- statements +- prepared statements +- small result sets (up to fetch size) +- throws an error when autocommit is false +- an error is thrown when the connection is lost + while looping over large result sets (larger than fetch size) +Not implemented / not tested: +- batch updates +- 
ignored in embedded mode +- keep temporary tables (including data) on client +- keep identity, getGeneratedKeys on client +- throw error when in cluster mode + +Support Model +----------------- +Check JBoss and Spring support models +http://wiki.bonita.ow2.org/xwiki/bin/view/Main/BullOffer +- starting 2500 euros / year +- unlimited support requests +- 2 named contacts +- optional half days of technical aid by remote services + +Durability +----------------- +Improve documentation. +You can't make a system that will not lose data, you can only make +a system that knows the last save point of 100% integrity. There are +too many variables and too much randomness on a cold hard power failure. diff --git a/modules/h2/src/test/java/org/h2/test/todo/tools.sql b/modules/h2/src/test/java/org/h2/test/todo/tools.sql new file mode 100644 index 0000000000000..e627e57e7cbbe --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/tools.sql @@ -0,0 +1,83 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ + +-- TO_DATE +create alias TO_DATE as $$ +java.util.Date toDate(String s) throws Exception { + return new java.text.SimpleDateFormat("yyyy.MM.dd").parse(s); +} +$$; +call TO_DATE('1990.02.03') + +-- TO_CHAR +drop alias if exists TO_CHAR; +create alias TO_CHAR as $$ +String toChar(BigDecimal x, String pattern) throws Exception { + return new java.text.DecimalFormat(pattern).format(x); +} +$$; +call TO_CHAR(123456789.12, '###,###,###,###.##'); + +-- update all rows in all tables +select 'update ' || table_schema || '.' 
|| table_name || ' set ' || column_name || '=' || column_name || ';' +from information_schema.columns where ORDINAL_POSITION = 1 and table_schema <> 'INFORMATION_SCHEMA'; + +-- read the first few bytes from a BLOB +drop table test; +drop alias first_bytes; +create alias first_bytes as $$ +import java.io.*; +@CODE +byte[] firstBytes(InputStream in, int len) throws IOException { + try { + byte[] data = new byte[len]; + DataInputStream dIn = new DataInputStream(in); + dIn.readFully(data, 0, len); + return data; + } finally { + in.close(); + } +} +$$; +create table test(data blob); +insert into test values('010203040506070809'); +select first_bytes(data, 3) from test; + +-- encrypt and decrypt strings +CALL CAST(ENCRYPT('AES', HASH('SHA256', STRINGTOUTF8('key'), 1), STRINGTOUTF8('Hello')) AS VARCHAR); +CALL TRIM(CHAR(0) FROM UTF8TOSTRING(DECRYPT('AES', HASH('SHA256', STRINGTOUTF8('key'), 1), '16e44604230717eec9f5fa6058e77e83'))); +DROP ALIAS ENC; +DROP ALIAS DEC; +CREATE ALIAS ENC AS $$ +import org.h2.security.*; +import org.h2.util.*; +@CODE +String encrypt(String data, String key) throws Exception { + byte[] k = new SHA256().getHash(key.getBytes("UTF-8"), false); + byte[] b1 = data.getBytes("UTF-8"); + byte[] buff = new byte[(b1.length + 15) / 16 * 16]; + System.arraycopy(b1, 0, buff, 0, b1.length); + BlockCipher bc = CipherFactory.getBlockCipher("AES"); + bc.setKey(k); + bc.encrypt(buff, 0, buff.length); + return ByteUtils.convertBytesToString(buff); +} +$$; +CREATE ALIAS DEC AS $$ +import org.h2.security.*; +import org.h2.util.*; +@CODE +String decrypt(String data, String key) throws Exception { + byte[] k = new SHA256().getHash(key.getBytes("UTF-8"), false); + byte[] buff = ByteUtils.convertStringToBytes(data); + BlockCipher bc = CipherFactory.getBlockCipher("AES"); + bc.setKey(k); + bc.decrypt(buff, 0, buff.length); + return StringUtils.trim(new String(buff, "UTF-8"), false, true, "\u0000"); +} +$$; +CALL ENC('Hello', 'key'); +CALL DEC(ENC('Hello', 'key'), 
'key'); diff --git a/modules/h2/src/test/java/org/h2/test/todo/versionlist.txt b/modules/h2/src/test/java/org/h2/test/todo/versionlist.txt new file mode 100644 index 0000000000000..b8db3428b99fd --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/todo/versionlist.txt @@ -0,0 +1,65 @@ +1.3.162 2011-11-26 +1.3.161 2011-10-28 +1.3.160 2011-09-11 +1.3.159 2011-08-13 +1.3.158 2011-07-17 +1.3.157 2011-06-25 +1.3.156 2011-06-17 +1.3.155 2011-05-27 +1.3.154 2011-04-04 +1.3.153 2011-03-14 +1.3.152 2011-03-01 Beta +1.3.151 2011-02-12 Beta +1.3.150 2011-01-28 Beta +1.3.149 2011-01-07 Beta +1.3.148 2010-12-12 Beta +1.2.147 2010-11-21 +1.3.146 2010-11-08 Beta +1.2.145 2010-11-02 + +1.0.78 2008-08-28 +1.0.77 2008-08-16 +1.0.76 2008-07-27 +1.0.75 2008-07-14 +1.0.74 2008-06-21 +1.0.73 2008-05-31 +1.0.72 2008-05-10 +1.0.71 2008-04-25 +1.0.70 2008-04-20 +1.0.69 2008-03-29 +1.0.68 2008-03-15 +1.0.67 2008-02-22 +1.0.66 2008-02-02 +1.0.65 2008-01-18 +1.0.64 2007-12-27 +1.0.63 2007-12-02 +1.0.62 2007-11-25 +1.0.61 2007-11-10 +1.0.60 2007-10-20 +1.0.59 2007-10-03 +1.0.58 2007-09-15 +1.0.57 2007-08-25 + +1.0 2007-08-02 +1.0 2007-07-12 + +build 50 2007-06-17 +build 46 2007-04-29 +build 41 2007-01-30 +build 2007-01-17 +build 34 2006-12-17 +build 2006-12-03 +build 2006-11-20 +build 2006-11-03 +build 2006-10-10 +build 2006-09-24 +build 2006-09-10 +build 2006-08-31 +build 2006-08-28 +build 2006-08-23 +build 2006-08-14 +build 2006-07-29 +build 2006-07-14 +build 2006-07-01 +build 10 2006-06-02 +alpha 2005-11-17 \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/trace/Arg.java b/modules/h2/src/test/java/org/h2/test/trace/Arg.java new file mode 100644 index 0000000000000..ed55604fd4ac6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/trace/Arg.java @@ -0,0 +1,88 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ */ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.h2.test.trace; + +import java.math.BigDecimal; + +import org.h2.util.StringUtils; + +/** + * An argument of a statement. + */ +class Arg { + private Class clazz; + private Object obj; + private Statement stat; + + Arg(Class clazz, Object obj) { + this.clazz = clazz; + this.obj = obj; + } + + Arg(Statement stat) { + this.stat = stat; + } + + @Override + public String toString() { + if (stat != null) { + return stat.toString(); + } + return quote(clazz, getValue()); + } + + /** + * Calculate the value if this is a statement. 
+ */ + void execute() throws Exception { + if (stat != null) { + obj = stat.execute(); + clazz = stat.getReturnClass(); + stat = null; + } + } + + Class getValueClass() { + return clazz; + } + + Object getValue() { + return obj; + } + + private static String quote(Class valueClass, Object value) { + if (value == null) { + return null; + } else if (valueClass == String.class) { + return StringUtils.quoteJavaString(value.toString()); + } else if (valueClass == BigDecimal.class) { + return "new BigDecimal(\"" + value.toString() + "\")"; + } else if (valueClass.isArray()) { + if (valueClass == String[].class) { + return StringUtils.quoteJavaStringArray((String[]) value); + } else if (valueClass == int[].class) { + return StringUtils.quoteJavaIntArray((int[]) value); + } + } + return value.toString(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/trace/Parser.java b/modules/h2/src/test/java/org/h2/test/trace/Parser.java new file mode 100644 index 0000000000000..7bfd683f16896 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/trace/Parser.java @@ -0,0 +1,280 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + */ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.h2.test.trace; + +import java.math.BigDecimal; +import java.util.ArrayList; + +import org.h2.util.New; +import org.h2.util.StringUtils; + +/** + * Parses an entry in a Java-style log file. + */ +class Parser { + private static final int STRING = 0, NAME = 1, NUMBER = 2, SPECIAL = 3; + private final Player player; + private Statement stat; + private final String line; + private String token; + private int tokenType; + private int pos; + + private Parser(Player player, String line) { + this.player = player; + this.line = line; + read(); + } + + /** + * Parse a Java statement. + * + * @param player the player + * @param line the statement text + * @return the statement + */ + static Statement parseStatement(Player player, String line) { + Parser p = new Parser(player, line); + p.parseStatement(); + return p.stat; + } + + private Statement parseStatement() { + stat = new Statement(player); + String name = readToken(); + Object o = player.getObject(name); + if (o == null) { + if (readIf(".")) { + // example: java.lang.System.exit(0); + parseStaticCall(name); + } else { + // example: Statement s1 = ... + stat.setAssign(name, readToken()); + read("="); + name = readToken(); + o = player.getObject(name); + if (o != null) { + // example: ... = s1.executeQuery(); + read("."); + parseCall(name, o, readToken()); + } else if (readIf(".")) { + // ... 
= x.y.z("..."); + parseStaticCall(name); + } + } + } else { + // example: s1.execute() + read("."); + String methodName = readToken(); + parseCall(name, o, methodName); + } + return stat; + } + + private void read() { + while (line.charAt(pos) == ' ') { + pos++; + } + int start = pos; + char ch = line.charAt(pos); + switch (ch) { + case '\"': + tokenType = STRING; + pos++; + while (pos < line.length()) { + ch = line.charAt(pos); + if (ch == '\\') { + pos += 2; + } else if (ch == '\"') { + pos++; + break; + } else { + pos++; + } + } + break; + case '.': + case ',': + case '(': + case ')': + case ';': + case '{': + case '}': + case '[': + case ']': + case '=': + tokenType = SPECIAL; + pos++; + break; + default: + if (Character.isLetter(ch) || ch == '_') { + tokenType = NAME; + pos++; + while (true) { + ch = line.charAt(pos); + if (Character.isLetterOrDigit(ch) || ch == '_') { + pos++; + } else { + break; + } + } + } else if (ch == '-' || Character.isDigit(ch)) { + tokenType = NUMBER; + pos++; + while (true) { + ch = line.charAt(pos); + if (Character.isDigit(ch) + || ".+-eElLxabcdefABCDEF".indexOf(ch) >= 0) { + pos++; + } else { + break; + } + } + } + } + token = line.substring(start, pos); + } + + private boolean readIf(String s) { + if (token.equals(s)) { + read(); + return true; + } + return false; + } + + private String readToken() { + String s = token; + read(); + return s; + } + + private void read(String s) { + if (!readIf(s)) { + throw new RuntimeException("Expected: " + s + " got: " + token + " in " + + line); + } + } + + private Arg parseValue() { + if (tokenType == STRING) { + String s = readToken(); + s = StringUtils.javaDecode(s.substring(1, s.length() - 1)); + return new Arg(String.class, s); + } else if (tokenType == NUMBER) { + String number = readToken().toLowerCase(); + if (number.endsWith("f")) { + Float v = Float.parseFloat(number); + return new Arg(float.class, v); + } else if (number.endsWith("d") || + number.indexOf('e') >= 0 || + 
number.indexOf('.') >= 0) {
+ Double v = Double.parseDouble(number);
+ return new Arg(double.class, v);
+ } else if (number.endsWith("L") || number.endsWith("l")) {
+ Long v = Long.parseLong(number.substring(0, number.length() - 1));
+ return new Arg(long.class, v);
+ } else {
+ Integer v = Integer.parseInt(number);
+ return new Arg(int.class, v);
+ }
+ } else if (tokenType == NAME) {
+ if (readIf("true")) {
+ return new Arg(boolean.class, Boolean.TRUE);
+ } else if (readIf("false")) {
+ return new Arg(boolean.class, Boolean.FALSE);
+ } else if (readIf("null")) {
+ throw new RuntimeException(
+ "Null: class not specified. Example: (java.lang.String)null");
+ } else if (readIf("new")) {
+ if (readIf("String")) {
+ read("[");
+ read("]");
+ read("{");
+ ArrayList<Object> values = New.arrayList();
+ do {
+ values.add(parseValue().getValue());
+ } while (readIf(","));
+ read("}");
+ String[] list = values.toArray(new String[0]);
+ return new Arg(String[].class, list);
+ } else if (readIf("BigDecimal")) {
+ read("(");
+ BigDecimal value = new BigDecimal((String) parseValue().getValue());
+ read(")");
+ return new Arg(BigDecimal.class, value);
+ } else {
+ throw new RuntimeException("Unsupported constructor: " + readToken());
+ }
+ }
+ String name = readToken();
+ Object obj = player.getObject(name);
+ if (obj != null) {
+ return new Arg(obj.getClass(), obj);
+ }
+ read(".");
+ Statement outer = stat;
+ stat = new Statement(player);
+ parseStaticCall(name);
+ Arg s = new Arg(stat);
+ stat = outer;
+ return s;
+ } else if (readIf("(")) {
+ read("short");
+ read(")");
+ String number = readToken();
+ return new Arg(short.class, Short.parseShort(number));
+ } else {
+ throw new RuntimeException("Value expected, got: " + readToken() + " in "
+ + line);
+ }
+ }
+
+ private void parseCall(String objectName, Object o, String methodName) {
+ stat.setMethodCall(objectName, o, methodName);
+ ArrayList<Arg> args = New.arrayList();
+ read("(");
+ while (true) {
+ if (readIf(")")) {
+ break;
+ }
+ Arg p = parseValue(); + args.add(p); + if (readIf(")")) { + break; + } + read(","); + } + stat.setArgs(args); + } + + private void parseStaticCall(String clazz) { + String last = readToken(); + while (readIf(".")) { + clazz += last == null ? "" : "." + last; + last = readToken(); + } + String methodName = last; + stat.setStaticCall(clazz); + parseCall(null, null, methodName); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/trace/Player.java b/modules/h2/src/test/java/org/h2/test/trace/Player.java new file mode 100644 index 0000000000000..ef58a310d917f --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/trace/Player.java @@ -0,0 +1,179 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + */ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.h2.test.trace; + +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.util.HashMap; +import org.h2.store.fs.FileUtils; + +/** + * This tool can re-run Java style log files. There is no size limit. 
+ */
+public class Player {
+
+ // TODO support InputStream;
+ // TODO support Reader;
+ // TODO support int[];
+ // TODO support Blob and Clob;
+ // TODO support Calendar
+ // TODO support Object
+ // TODO support Object[]
+ // TODO support URL
+ // TODO support Array
+ // TODO support Ref
+ // TODO support SQLInput, SQLOutput
+ // TODO support Properties
+ // TODO support Map
+ // TODO support SQLXML
+
+ private static final String[] IMPORTED_PACKAGES = {
+ "", "java.lang.", "java.sql.", "javax.sql." };
+ private boolean trace;
+ private final HashMap<String, Object> objects = new HashMap<>();
+
+ /**
+ * Execute a trace file using the command line. The log file name to execute
+ * (replayed) must be specified as the last parameter. The following
+ * optional command line parameters are supported:
+ * <ul>
+ * <li><code>-log</code> to enable logging the executed statement to
+ * System.out
+ * </ul>
    + * + * @param args the arguments of the application + */ + public static void main(String... args) throws IOException { + new Player().run(args); + } + + /** + * Execute a trace file. + * + * @param fileName the file name + * @param trace print debug information + */ + public static void execute(String fileName, boolean trace) throws IOException { + Player player = new Player(); + player.trace = trace; + player.runFile(fileName); + } + + private void run(String... args) throws IOException { + String fileName = "test.log.db"; + try { + fileName = args[args.length - 1]; + for (int i = 0; i < args.length - 1; i++) { + if ("-trace".equals(args[i])) { + trace = true; + } else { + throw new RuntimeException("Unknown setting: " + args[i]); + } + } + } catch (Exception e) { + e.printStackTrace(); + System.out.println("Usage: java " + getClass().getName() + + " [-trace] "); + return; + } + runFile(fileName); + } + + private void runFile(String fileName) throws IOException { + LineNumberReader reader = new LineNumberReader(new BufferedReader( + new InputStreamReader(FileUtils.newInputStream(fileName)))); + while (true) { + String line = reader.readLine(); + if (line == null) { + break; + } + runLine(line.trim()); + } + reader.close(); + } + + /** + * Write trace information if trace is enabled. + * + * @param s the message to write + */ + void trace(String s) { + if (trace) { + System.out.println(s); + } + } + + private void runLine(String line) { + if (!line.startsWith("/**/")) { + return; + } + line = line.substring("/**/".length()) + ";"; + Statement s = Parser.parseStatement(this, line); + trace("> " + s.toString()); + try { + s.execute(); + } catch (Exception e) { + e.printStackTrace(); + trace("error: " + e.toString()); + } + } + + /** + * Get the class for the given class name. + * Only a limited set of classes is supported. 
+ * + * @param className the class name + * @return the class + */ + static Class getClass(String className) { + for (String s : IMPORTED_PACKAGES) { + try { + return Class.forName(s + className); + } catch (ClassNotFoundException e) { + // ignore + } + } + throw new RuntimeException("Class not found: " + className); + } + + /** + * Assign an object to a variable. + * + * @param variableName the variable name + * @param obj the object + */ + void assign(String variableName, Object obj) { + objects.put(variableName, obj); + } + + /** + * Get an object. + * + * @param name the variable name + * @return the object + */ + Object getObject(String name) { + return objects.get(name); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/trace/Statement.java b/modules/h2/src/test/java/org/h2/test/trace/Statement.java new file mode 100644 index 0000000000000..ad0b8a6276ad3 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/trace/Statement.java @@ -0,0 +1,167 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + */ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.h2.test.trace; + +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.util.ArrayList; + +/** + * A statement in a Java-style log file. + */ +class Statement { + private final Player player; + private boolean assignment; + private boolean staticCall; + private String assignClass; + private String assignVariable; + private String staticCallClass; + private String objectName; + private Object object; + private String methodName; + private Arg[] args; + private Class returnClass; + + Statement(Player player) { + this.player = player; + } + + /** + * Execute the statement. + * + * @return the object returned if this was a method call + */ + Object execute() throws Exception { + if (object == player) { + // there was an exception previously + player.trace("> " + assignVariable + " not set"); + if (assignment) { + player.assign(assignVariable, player); + } + return null; + } + Class clazz; + if (staticCall) { + clazz = Player.getClass(staticCallClass); + } else { + clazz = object.getClass(); + } + Class[] parameterTypes = new Class[args.length]; + Object[] parameters = new Object[args.length]; + for (int i = 0; i < args.length; i++) { + Arg arg = args[i]; + arg.execute(); + parameterTypes[i] = arg.getValueClass(); + parameters[i] = arg.getValue(); + } + Method method = clazz.getMethod(methodName, parameterTypes); + returnClass = method.getReturnType(); + try { + Object obj = method.invoke(object, parameters); + if (assignment) { + player.assign(assignVariable, obj); + } + return obj; + } catch (IllegalArgumentException e) { + e.printStackTrace(); + } catch (IllegalAccessException e) { + e.printStackTrace(); + } catch (InvocationTargetException e) { + Throwable t = e.getTargetException(); + player.trace("> " + t.toString()); + if (assignment) { + player.assign(assignVariable, player); + } + } + return null; + } + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + if 
(assignment) {
+ buff.append(assignClass);
+ buff.append(' ');
+ buff.append(assignVariable);
+ buff.append('=');
+ }
+ if (staticCall) {
+ buff.append(staticCallClass);
+ } else {
+ buff.append(objectName);
+ }
+ buff.append('.');
+ buff.append(methodName);
+ buff.append('(');
+ for (int i = 0; args != null && i < args.length; i++) {
+ if (i > 0) {
+ buff.append(", ");
+ }
+ buff.append(args[i].toString());
+ }
+ buff.append(");");
+ return buff.toString();
+ }
+
+ Class getReturnClass() {
+ return returnClass;
+ }
+
+ /**
+ * This statement is an assignment.
+ *
+ * @param className the class of the variable
+ * @param variableName the variable name
+ */
+ void setAssign(String className, String variableName) {
+ this.assignment = true;
+ this.assignClass = className;
+ this.assignVariable = variableName;
+ }
+
+ /**
+ * This statement is a static method call.
+ *
+ * @param className the class name
+ */
+ void setStaticCall(String className) {
+ this.staticCall = true;
+ this.staticCallClass = className;
+ }
+
+ /**
+ * This statement is a method call, and the result is assigned to a
+ * variable.
+ *
+ * @param variableName the variable name
+ * @param object the object
+ * @param methodName the method name
+ */
+ void setMethodCall(String variableName, Object object, String methodName) {
+ this.objectName = variableName;
+ this.object = object;
+ this.methodName = methodName;
+ }
+
+ public void setArgs(ArrayList<Arg> list) {
+ args = list.toArray(new Arg[0]);
+ }
+}
diff --git a/modules/h2/src/test/java/org/h2/test/trace/package.html b/modules/h2/src/test/java/org/h2/test/trace/package.html
new file mode 100644
index 0000000000000..ee0ad04ad9fbb
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/trace/package.html
@@ -0,0 +1,14 @@
+
+
+
+
+Javadoc package documentation
+

    + +A player to interpret and execute Java statements in a trace file. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestAnsCompression.java b/modules/h2/src/test/java/org/h2/test/unit/TestAnsCompression.java new file mode 100644 index 0000000000000..ddf46e182d52d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestAnsCompression.java @@ -0,0 +1,110 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.util.Arrays; +import java.util.Random; + +import org.h2.dev.util.AnsCompression; +import org.h2.dev.util.BinaryArithmeticStream; +import org.h2.dev.util.BitStream; +import org.h2.test.TestBase; + +/** + * Tests the ANS (Asymmetric Numeral Systems) compression tool. + */ +public class TestAnsCompression extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testScaleFrequencies(); + testRandomized(); + testCompressionRate(); + } + + private void testCompressionRate() throws IOException { + byte[] data = new byte[1024 * 1024]; + Random r = new Random(1); + for (int i = 0; i < data.length; i++) { + data[i] = (byte) (r.nextInt(4) * r.nextInt(4)); + } + int[] freq = new int[256]; + AnsCompression.countFrequencies(freq, data); + int lenAns = AnsCompression.encode(freq, data).length; + BitStream.Huffman huff = new BitStream.Huffman(freq); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + BitStream.Out o = new BitStream.Out(out); + for (byte x : data) { + huff.write(o, x & 255); + } + o.flush(); + int lenHuff = out.toByteArray().length; + BinaryArithmeticStream.Huffman aHuff = new BinaryArithmeticStream.Huffman( + freq); + out = new ByteArrayOutputStream(); + BinaryArithmeticStream.Out o2 = new BinaryArithmeticStream.Out(out); + for (byte x : data) { + aHuff.write(o2, x & 255); + } + o2.flush(); + int lenArithmetic = out.toByteArray().length; + + assertTrue(lenAns < lenArithmetic); + assertTrue(lenArithmetic < lenHuff); + assertTrue(lenHuff < data.length); + } + + private void testScaleFrequencies() { + Random r = new Random(1); + for (int j = 0; j < 100; j++) { + int symbolCount = r.nextInt(200) + 1; + int[] freq = new int[symbolCount]; + for (int total = symbolCount * 2; total < 10000; total *= 2) { + for (int i = 0; i < freq.length; i++) { + freq[i] = r.nextInt(1000) + 1; + } + AnsCompression.scaleFrequencies(freq, total); + } + } + int[] freq = new int[]{0, 1, 1, 1000}; + AnsCompression.scaleFrequencies(freq, 100); + assertEquals("[0, 1, 1, 98]", Arrays.toString(freq)); + } + + private void testRandomized() { + Random r = new Random(1); + int symbolCount = r.nextInt(200) + 1; + int[] freq = new int[symbolCount]; + for (int i = 0; i < freq.length; i++) { + freq[i] = r.nextInt(1000) + 1; + } 
+ int seed = r.nextInt(); + r.setSeed(seed); + int len = 10000; + byte[] data = new byte[len]; + r.nextBytes(data); + freq = new int[256]; + AnsCompression.countFrequencies(freq, data); + byte[] encoded = AnsCompression.encode(freq, data); + byte[] decoded = AnsCompression.decode(freq, encoded, data.length); + for (int i = 0; i < len; i++) { + int expected = data[i]; + assertEquals(expected, decoded[i]); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestAutoReconnect.java b/modules/h2/src/test/java/org/h2/test/unit/TestAutoReconnect.java new file mode 100644 index 0000000000000..d2dd1f6c18f81 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestAutoReconnect.java @@ -0,0 +1,217 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.tools.Server; + +/** + * Tests automatic embedded/server mode. + */ +public class TestAutoReconnect extends TestBase { + + private String url; + private boolean autoServer; + private Server server; + private Connection connServer; + private Connection conn; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + private void restart() throws SQLException, InterruptedException { + if (autoServer) { + if (connServer != null) { + connServer.createStatement().execute("SHUTDOWN"); + connServer.close(); + } + org.h2.Driver.load(); + connServer = getConnection(url); + } else { + server.stop(); + Thread.sleep(100); // try to prevent "port may be in use" error + server.start(); + } + } + + @Override + public void test() throws Exception { + testWrongUrl(); + autoServer = true; + testReconnect(); + autoServer = false; + testReconnect(); + deleteDb(getTestName()); + } + + private void testWrongUrl() throws Exception { + deleteDb(getTestName()); + Server tcp = Server.createTcpServer().start(); + try { + conn = getConnection("jdbc:h2:" + getBaseDir() + + "/" + getTestName() + ";AUTO_SERVER=TRUE"); + assertThrows(ErrorCode.DATABASE_ALREADY_OPEN_1, this). + getConnection("jdbc:h2:" + getBaseDir() + + "/" + getTestName() + ";OPEN_NEW=TRUE"); + assertThrows(ErrorCode.DATABASE_ALREADY_OPEN_1, this). + getConnection("jdbc:h2:" + getBaseDir() + + "/" + getTestName() + ";OPEN_NEW=TRUE"); + conn.close(); + + conn = getConnection("jdbc:h2:tcp://localhost/" + getBaseDir() + + "/" + getTestName()); + assertThrows(ErrorCode.DATABASE_ALREADY_OPEN_1, this). 
+ getConnection("jdbc:h2:" + getBaseDir() + + "/" + getTestName() + ";AUTO_SERVER=TRUE;OPEN_NEW=TRUE"); + conn.close(); + } finally { + tcp.stop(); + } + } + + private void testReconnect() throws Exception { + deleteDb(getTestName()); + if (autoServer) { + url = "jdbc:h2:" + getBaseDir() + "/" + getTestName() + ";" + + "FILE_LOCK=SOCKET;" + + "AUTO_SERVER=TRUE;OPEN_NEW=TRUE"; + restart(); + } else { + server = Server.createTcpServer("-ifNotExists").start(); + int port = server.getPort(); + url = "jdbc:h2:tcp://localhost:" + port + "/" + getBaseDir() + "/" + getTestName() + ";" + + "FILE_LOCK=SOCKET;AUTO_RECONNECT=TRUE"; + } + + // test the database event listener + conn = getConnection(url + ";DATABASE_EVENT_LISTENER='" + + MyDatabaseEventListener.class.getName() + "'"); + conn.close(); + + Statement stat; + + conn = getConnection(url); + restart(); + stat = conn.createStatement(); + restart(); + stat.execute("create table test(id identity, name varchar)"); + restart(); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(null, ?)"); + restart(); + prep.setString(1, "Hello"); + restart(); + prep.execute(); + restart(); + prep.setString(1, "World"); + restart(); + prep.execute(); + restart(); + ResultSet rs = stat.executeQuery("select * from test order by id"); + restart(); + assertTrue(rs.next()); + restart(); + assertEquals(1, rs.getInt(1)); + restart(); + assertEquals("Hello", rs.getString(2)); + restart(); + assertTrue(rs.next()); + restart(); + assertEquals(2, rs.getInt(1)); + restart(); + assertEquals("World", rs.getString(2)); + restart(); + assertFalse(rs.next()); + restart(); + stat.execute("SET @TEST 10"); + restart(); + rs = stat.executeQuery("CALL @TEST"); + rs.next(); + assertEquals(10, rs.getInt(1)); + stat.setFetchSize(10); + restart(); + rs = stat.executeQuery("select * from system_range(1, 20)"); + restart(); + for (int i = 0;; i++) { + try { + boolean more = rs.next(); + if (!more) { + assertEquals(i, 20); + break; + } + 
restart(); + int x = rs.getInt(1); + assertEquals(x, i + 1); + if (i > 10) { + fail(); + } + } catch (SQLException e) { + if (i < 10) { + throw e; + } + } + } + restart(); + rs.close(); + + conn.setAutoCommit(false); + restart(); + assertThrows(ErrorCode.CONNECTION_BROKEN_1, conn.createStatement()). + execute("select * from test"); + + conn.close(); + if (autoServer) { + connServer.close(); + } else { + server.stop(); + } + } + + /** + * A database event listener used in this test. + */ + public static final class MyDatabaseEventListener implements + DatabaseEventListener { + + @Override + public void closingDatabase() { + // ignore + } + + @Override + public void exceptionThrown(SQLException e, String sql) { + // ignore + } + + @Override + public void init(String u) { + // ignore + } + + @Override + public void opened() { + // ignore + } + + @Override + public void setProgress(int state, String name, int x, int max) { + // ignore + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestBinaryArithmeticStream.java b/modules/h2/src/test/java/org/h2/test/unit/TestBinaryArithmeticStream.java new file mode 100644 index 0000000000000..8e2b28fd83810 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestBinaryArithmeticStream.java @@ -0,0 +1,172 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.util.Random; + +import org.h2.dev.util.BinaryArithmeticStream; +import org.h2.dev.util.BinaryArithmeticStream.Huffman; +import org.h2.dev.util.BinaryArithmeticStream.In; +import org.h2.dev.util.BinaryArithmeticStream.Out; +import org.h2.dev.util.BitStream; +import org.h2.test.TestBase; + +/** + * Test the binary arithmetic stream utility. 
+ */ +public class TestBinaryArithmeticStream extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testCompareWithHuffman(); + testHuffmanRandomized(); + testCompressionRatio(); + testRandomized(); + testPerformance(); + } + + private void testCompareWithHuffman() throws IOException { + Random r = new Random(1); + for (int test = 0; test < 10; test++) { + int[] freq = new int[4]; + for (int i = 0; i < freq.length; i++) { + freq[i] = 0 + r.nextInt(1000); + } + BinaryArithmeticStream.Huffman ah = new BinaryArithmeticStream.Huffman( + freq); + BitStream.Huffman hh = new BitStream.Huffman(freq); + ByteArrayOutputStream hbOut = new ByteArrayOutputStream(); + ByteArrayOutputStream abOut = new ByteArrayOutputStream(); + BitStream.Out bOut = new BitStream.Out(hbOut); + BinaryArithmeticStream.Out aOut = new BinaryArithmeticStream.Out(abOut); + for (int i = 0; i < freq.length; i++) { + for (int j = 0; j < freq[i]; j++) { + int x = i; + hh.write(bOut, x); + ah.write(aOut, x); + } + } + assertTrue(hbOut.toByteArray().length >= abOut.toByteArray().length); + } + } + + private void testHuffmanRandomized() throws IOException { + Random r = new Random(1); + int[] freq = new int[r.nextInt(200) + 1]; + for (int i = 0; i < freq.length; i++) { + freq[i] = r.nextInt(1000) + 1; + } + int seed = r.nextInt(); + r.setSeed(seed); + Huffman huff = new Huffman(freq); + ByteArrayOutputStream byteOut = new ByteArrayOutputStream(); + Out out = new Out(byteOut); + for (int i = 0; i < 10000; i++) { + huff.write(out, r.nextInt(freq.length)); + } + out.flush(); + In in = new In(new ByteArrayInputStream(byteOut.toByteArray())); + r.setSeed(seed); + for (int i = 0; i < 10000; i++) { + int expected = r.nextInt(freq.length); + int got = huff.read(in); + assertEquals(expected, got); + } + } + + private void 
testPerformance() throws IOException { + Random r = new Random(); + // long time = System.nanoTime(); + // Profiler prof = new Profiler().startCollecting(); + for (int seed = 0; seed < 10000; seed++) { + r.setSeed(seed); + ByteArrayOutputStream byteOut = new ByteArrayOutputStream(); + Out out = new Out(byteOut); + int len = 100; + for (int i = 0; i < len; i++) { + boolean v = r.nextBoolean(); + int prob = r.nextInt(BinaryArithmeticStream.MAX_PROBABILITY); + out.writeBit(v, prob); + } + out.flush(); + r.setSeed(seed); + ByteArrayInputStream byteIn = new ByteArrayInputStream( + byteOut.toByteArray()); + In in = new In(byteIn); + for (int i = 0; i < len; i++) { + boolean expected = r.nextBoolean(); + int prob = r.nextInt(BinaryArithmeticStream.MAX_PROBABILITY); + assertEquals(expected, in.readBit(prob)); + } + } + // time = System.nanoTime() - time; + // System.out.println("time: " + TimeUnit.NANOSECONDS.toMillis(time)); + // System.out.println(prof.getTop(5)); + } + + private void testCompressionRatio() throws IOException { + ByteArrayOutputStream byteOut = new ByteArrayOutputStream(); + Out out = new Out(byteOut); + int prob = 1000; + int len = 1024; + for (int i = 0; i < len; i++) { + out.writeBit(true, prob); + } + out.flush(); + ByteArrayInputStream byteIn = new ByteArrayInputStream( + byteOut.toByteArray()); + In in = new In(byteIn); + for (int i = 0; i < len; i++) { + assertTrue(in.readBit(prob)); + } + // System.out.println(len / 8 + " comp: " + + // byteOut.toByteArray().length); + } + + private void testRandomized() throws IOException { + for (int i = 0; i < 10000; i = (int) ((i + 10) * 1.1)) { + testRandomized(i); + } + } + + private void testRandomized(int len) throws IOException { + Random r = new Random(); + int seed = r.nextInt(); + r.setSeed(seed); + ByteArrayOutputStream byteOut = new ByteArrayOutputStream(); + Out out = new Out(byteOut); + for (int i = 0; i < len; i++) { + int prob = r.nextInt(BinaryArithmeticStream.MAX_PROBABILITY); + 
out.writeBit(r.nextBoolean(), prob); + } + out.flush(); + byteOut.write(r.nextInt(255)); + ByteArrayInputStream byteIn = new ByteArrayInputStream( + byteOut.toByteArray()); + In in = new In(byteIn); + r.setSeed(seed); + for (int i = 0; i < len; i++) { + int prob = r.nextInt(BinaryArithmeticStream.MAX_PROBABILITY); + boolean expected = r.nextBoolean(); + boolean got = in.readBit(prob); + assertEquals(expected, got); + } + assertEquals(r.nextInt(255), byteIn.read()); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestBitField.java b/modules/h2/src/test/java/org/h2/test/unit/TestBitField.java new file mode 100644 index 0000000000000..a3ff447c7d7fc --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestBitField.java @@ -0,0 +1,158 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.util.BitSet; +import java.util.Random; +import org.h2.test.TestBase; +import org.h2.util.BitField; + +/** + * A unit test for bit fields. + */ +public class TestBitField extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testNextClearBit(); + testByteOperations(); + testRandom(); + testGetSet(); + testRandomSetRange(); + } + + private void testNextClearBit() { + BitSet set = new BitSet(); + BitField field = new BitField(); + set.set(0, 640); + field.set(0, 640, true); + assertEquals(set.nextClearBit(0), field.nextClearBit(0)); + + Random random = new Random(1); + field = new BitField(); + field.set(0, 500, true); + for (int i = 0; i < 100000; i++) { + int a = random.nextInt(120); + int b = a + 1 + random.nextInt(200); + field.clear(a); + field.clear(b); + assertEquals(b, field.nextClearBit(a + 1)); + field.set(a); + field.set(b); + } + } + + private void testByteOperations() { + BitField used = new BitField(); + testSetFast(used, false); + testSetFast(used, true); + } + + private void testSetFast(BitField used, boolean init) { + int len = 10000; + Random random = new Random(1); + for (int i = 0, x = 0; i < len / 8; i++) { + int mask = random.nextInt() & 255; + if (init) { + assertEquals(mask, used.getByte(x)); + x += 8; + // for (int j = 0; j < 8; j++, x++) { + // if (used.get(x) != ((mask & (1 << j)) != 0)) { + // throw Message.getInternalError( + // "Redo failure, block: " + x + + // " expected in-use bit: " + used.get(x)); + // } + // } + } else { + used.setByte(x, mask); + x += 8; + // for (int j = 0; j < 8; j++, x++) { + // if ((mask & (1 << j)) != 0) { + // used.set(x); + // } + // } + } + } + } + + private void testRandom() { + BitField bits = new BitField(); + BitSet set = new BitSet(); + int max = 300; + int count = 100000; + Random random = new Random(1); + for (int i = 0; i < count; i++) { + int idx = random.nextInt(max); + if (random.nextBoolean()) { + if (random.nextBoolean()) { + bits.set(idx); + set.set(idx); + } else { + bits.clear(idx); + set.clear(idx); + } + } else { + assertEquals(set.get(idx), bits.get(idx)); + assertEquals(set.nextClearBit(idx), 
bits.nextClearBit(idx)); + assertEquals(set.length(), bits.length()); + } + } + } + + private void testGetSet() { + BitField bits = new BitField(); + for (int i = 0; i < 10000; i++) { + bits.set(i); + if (!bits.get(i)) { + fail("not set: " + i); + } + if (bits.get(i + 1)) { + fail("set: " + i); + } + } + for (int i = 0; i < 10000; i++) { + if (!bits.get(i)) { + fail("not set: " + i); + } + } + for (int i = 0; i < 1000; i++) { + int k = bits.nextClearBit(0); + if (k != 10000) { + fail("" + k); + } + } + } + + private void testRandomSetRange() { + BitField bits = new BitField(); + BitSet set = new BitSet(); + Random random = new Random(1); + int maxOffset = 500; + int maxLen = 500; + int total = maxOffset + maxLen; + int count = 10000; + for (int i = 0; i < count; i++) { + int offset = random.nextInt(maxOffset); + int len = random.nextInt(maxLen); + boolean val = random.nextBoolean(); + set.set(offset, offset + len, val); + bits.set(offset, offset + len, val); + for (int j = 0; j < total; j++) { + assertEquals(set.get(j), bits.get(j)); + } + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestBitStream.java b/modules/h2/src/test/java/org/h2/test/unit/TestBitStream.java new file mode 100644 index 0000000000000..6edaf2d718317 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestBitStream.java @@ -0,0 +1,160 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.util.Random; + +import org.h2.dev.util.BitStream; +import org.h2.dev.util.BitStream.In; +import org.h2.dev.util.BitStream.Out; +import org.h2.test.TestBase; + +/** + * Test the bit stream (Golomb code and Huffman code) utility. + */ +public class TestBitStream extends TestBase { + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testHuffmanRandomized(); + testHuffman(); + testBitStream(); + testGolomb("11110010", 10, 42); + testGolomb("00", 3, 0); + testGolomb("010", 3, 1); + testGolomb("011", 3, 2); + testGolomb("100", 3, 3); + testGolomb("1010", 3, 4); + testGolombRandomized(); + } + + private void testHuffmanRandomized() { + Random r = new Random(1); + int[] freq = new int[r.nextInt(200) + 1]; + for (int i = 0; i < freq.length; i++) { + freq[i] = r.nextInt(1000) + 1; + } + int seed = r.nextInt(); + r.setSeed(seed); + BitStream.Huffman huff = new BitStream.Huffman(freq); + ByteArrayOutputStream byteOut = new ByteArrayOutputStream(); + BitStream.Out out = new BitStream.Out(byteOut); + for (int i = 0; i < 10000; i++) { + huff.write(out, r.nextInt(freq.length)); + } + out.close(); + BitStream.In in = new BitStream.In(new ByteArrayInputStream(byteOut.toByteArray())); + r.setSeed(seed); + for (int i = 0; i < 10000; i++) { + int expected = r.nextInt(freq.length); + assertEquals(expected, huff.read(in)); + } + } + + private void testHuffman() { + int[] freq = { 36, 18, 12, 9, 7, 6, 5, 4 }; + BitStream.Huffman huff = new BitStream.Huffman(freq); + final StringBuilder buff = new StringBuilder(); + Out o = new Out(null) { + @Override + public void writeBit(int bit) { + buff.append(bit == 0 ? '0' : '1'); + } + }; + for (int i = 0; i < freq.length; i++) { + buff.append(i + ": "); + huff.write(o, i); + buff.append("\n"); + } + assertEquals( + "0: 0\n" + + "1: 110\n" + + "2: 100\n" + + "3: 1110\n" + + "4: 1011\n" + + "5: 1010\n" + + "6: 11111\n" + + "7: 11110\n", buff.toString()); + } + + private void testGolomb(String expected, int div, int value) { + final StringBuilder buff = new StringBuilder(); + Out o = new Out(null) { + @Override + public void writeBit(int bit) { + buff.append(bit == 0 ? 
'0' : '1'); + } + }; + o.writeGolomb(div, value); + int size = Out.getGolombSize(div, value); + String got = buff.toString(); + assertEquals(size, got.length()); + assertEquals(expected, got); + } + + private void testGolombRandomized() { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + Out bitOut = new Out(out); + Random r = new Random(1); + int len = 1000; + for (int i = 0; i < len; i++) { + int div = r.nextInt(100) + 1; + int value = r.nextInt(1000000); + bitOut.writeGolomb(div, value); + } + bitOut.flush(); + bitOut.close(); + byte[] data = out.toByteArray(); + ByteArrayInputStream in = new ByteArrayInputStream(data); + In bitIn = new In(in); + r.setSeed(1); + for (int i = 0; i < len; i++) { + int div = r.nextInt(100) + 1; + int value = r.nextInt(1000000); + int v = bitIn.readGolomb(div); + assertEquals("i=" + i + " div=" + div, value, v); + } + } + + private void testBitStream() { + Random r = new Random(); + for (int test = 0; test < 10000; test++) { + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + int len = r.nextInt(40); + Out out = new Out(buff); + long seed = r.nextLong(); + Random r2 = new Random(seed); + for (int i = 0; i < len; i++) { + out.writeBit(r2.nextBoolean() ? 1 : 0); + } + out.close(); + In in = new In(new ByteArrayInputStream( + buff.toByteArray())); + r2 = new Random(seed); + int i = 0; + for (; i < len; i++) { + int expected = r2.nextBoolean() ? 1 : 0; + assertEquals(expected, in.readBit()); + } + for (; i % 8 != 0; i++) { + assertEquals(0, in.readBit()); + } + assertEquals(-1, in.readBit()); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestBnf.java b/modules/h2/src/test/java/org/h2/test/unit/TestBnf.java new file mode 100644 index 0000000000000..752ac126c87dc --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestBnf.java @@ -0,0 +1,177 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; +import org.h2.bnf.Bnf; +import org.h2.bnf.context.DbContents; +import org.h2.bnf.context.DbContextRule; +import org.h2.bnf.context.DbProcedure; +import org.h2.bnf.context.DbSchema; +import org.h2.test.TestBase; + +/** + * Test Bnf Sql parser + * @author Nicolas Fortin + */ +public class TestBnf extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("bnf"); + try (Connection conn = getConnection("bnf")) { + testModes(conn); + testProcedures(conn, false); + } + try (Connection conn = getConnection("bnf;mode=mysql")) { + testProcedures(conn, true); + } + } + + private void testModes(Connection conn) throws Exception { + DbContents dbContents; + dbContents = new DbContents(); + dbContents.readContents("jdbc:h2:test", conn); + assertTrue(dbContents.isH2()); + dbContents = new DbContents(); + dbContents.readContents("jdbc:derby:test", conn); + assertTrue(dbContents.isDerby()); + dbContents = new DbContents(); + dbContents.readContents("jdbc:firebirdsql:test", conn); + assertTrue(dbContents.isFirebird()); + dbContents = new DbContents(); + dbContents.readContents("jdbc:sqlserver:test", conn); + assertTrue(dbContents.isMSSQLServer()); + dbContents = new DbContents(); + dbContents.readContents("jdbc:mysql:test", conn); + assertTrue(dbContents.isMySQL()); + dbContents = new DbContents(); + dbContents.readContents("jdbc:oracle:test", conn); + assertTrue(dbContents.isOracle()); + dbContents = new DbContents(); + dbContents.readContents("jdbc:postgresql:test", conn); + assertTrue(dbContents.isPostgreSQL()); + dbContents = new DbContents(); + dbContents.readContents("jdbc:sqlite:test", conn); + 
assertTrue(dbContents.isSQLite()); + } + + private void testProcedures(Connection conn, boolean isMySQLMode) + throws Exception { + // Register a procedure and check if it is present in DbContents + conn.createStatement().execute( + "DROP ALIAS IF EXISTS CUSTOM_PRINT"); + conn.createStatement().execute( + "CREATE ALIAS CUSTOM_PRINT " + + "AS $$ void print(String s) { System.out.println(s); } $$"); + conn.createStatement().execute( + "DROP TABLE IF EXISTS " + + "TABLE_WITH_STRING_FIELD"); + conn.createStatement().execute( + "CREATE TABLE " + + "TABLE_WITH_STRING_FIELD (STRING_FIELD VARCHAR(50), INT_FIELD integer)"); + DbContents dbContents = new DbContents(); + dbContents.readContents("jdbc:h2:test", conn); + assertTrue(dbContents.isH2()); + assertFalse(dbContents.isDerby()); + assertFalse(dbContents.isFirebird()); + assertEquals(null, dbContents.quoteIdentifier(null)); + if (isMySQLMode) { + assertTrue(dbContents.isH2ModeMySQL()); + assertEquals("TEST", dbContents.quoteIdentifier("TEST")); + assertEquals("TEST", dbContents.quoteIdentifier("Test")); + assertEquals("TEST", dbContents.quoteIdentifier("test")); + } else { + assertFalse(dbContents.isH2ModeMySQL()); + assertEquals("TEST", dbContents.quoteIdentifier("TEST")); + assertEquals("\"Test\"", dbContents.quoteIdentifier("Test")); + assertEquals("\"test\"", dbContents.quoteIdentifier("test")); + } + assertFalse(dbContents.isMSSQLServer()); + assertFalse(dbContents.isMySQL()); + assertFalse(dbContents.isOracle()); + assertFalse(dbContents.isPostgreSQL()); + assertFalse(dbContents.isSQLite()); + DbSchema defaultSchema = dbContents.getDefaultSchema(); + DbProcedure[] procedures = defaultSchema.getProcedures(); + Set<String> procedureName = new HashSet<>(procedures.length); + for (DbProcedure procedure : procedures) { + assertTrue(defaultSchema == procedure.getSchema()); + procedureName.add(procedure.getName()); + } + if (isMySQLMode) { + assertTrue(procedureName.contains("custom_print")); + } else { +
assertTrue(procedureName.contains("CUSTOM_PRINT")); + } + + if (isMySQLMode) { + return; + } + + // Test completion + Bnf bnf = Bnf.getInstance(null); + DbContextRule columnRule = new + DbContextRule(dbContents, DbContextRule.COLUMN); + bnf.updateTopic("column_name", columnRule); + bnf.updateTopic("user_defined_function_name", new + DbContextRule(dbContents, DbContextRule.PROCEDURE)); + bnf.linkStatements(); + // Test partial + Map<String, String> tokens; + tokens = bnf.getNextTokenList("SELECT CUSTOM_PR"); + assertTrue(tokens.values().contains("INT")); + + // Test identifiers are working + tokens = bnf.getNextTokenList("create table \"test\" as s" + "el"); + assertTrue(tokens.values().contains("E" + "CT")); + + tokens = bnf.getNextTokenList("create table test as s" + "el"); + assertTrue(tokens.values().contains("E" + "CT")); + + // Test || with and without spaces + tokens = bnf.getNextTokenList("select 1||f"); + assertFalse(tokens.values().contains("R" + "OM")); + tokens = bnf.getNextTokenList("select 1 || f"); + assertFalse(tokens.values().contains("R" + "OM")); + tokens = bnf.getNextTokenList("select 1 || 2 "); + assertTrue(tokens.values().contains("FROM")); + tokens = bnf.getNextTokenList("select 1||2"); + assertTrue(tokens.values().contains("FROM")); + tokens = bnf.getNextTokenList("select 1 || 2"); + assertTrue(tokens.values().contains("FROM")); + + // Test keyword + tokens = bnf.getNextTokenList("SELECT LE" + "AS"); + assertTrue(tokens.values().contains("T")); + + // Test parameters + tokens = bnf.getNextTokenList("SELECT CUSTOM_PRINT("); + assertTrue(tokens.values().contains("STRING_FIELD")); + assertFalse(tokens.values().contains("INT_FIELD")); + + // Test parameters with spaces + tokens = bnf.getNextTokenList("SELECT CUSTOM_PRINT ( "); + assertTrue(tokens.values().contains("STRING_FIELD")); + assertFalse(tokens.values().contains("INT_FIELD")); + + // Test parameters with close bracket + tokens = bnf.getNextTokenList("SELECT CUSTOM_PRINT ( STRING_FIELD"); +
assertTrue(tokens.values().contains(")")); + } +} \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestCache.java b/modules/h2/src/test/java/org/h2/test/unit/TestCache.java new file mode 100644 index 0000000000000..b02f149ac9cd6 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestCache.java @@ -0,0 +1,292 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Random; + +import org.h2.message.Trace; +import org.h2.test.TestBase; +import org.h2.util.Cache; +import org.h2.util.CacheLRU; +import org.h2.util.CacheObject; +import org.h2.util.CacheWriter; +import org.h2.util.StringUtils; +import org.h2.util.Utils; +import org.h2.value.Value; + +/** + * Tests the cache. + */ +public class TestCache extends TestBase implements CacheWriter { + + private String out; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = TestBase.createCaller().init(); +// test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws Exception { + if (!config.mvStore) { + testTQ(); + } + testMemoryUsage(); + testCache(); + testCacheDb(false); + testCacheDb(true); + } + + private void testTQ() throws Exception { + if (config.memory || config.reopen) { + return; + } + deleteDb("cache"); + Connection conn = getConnection( + "cache;LOG=0;UNDO_LOG=0"); + Statement stat = conn.createStatement(); + stat.execute("create table if not exists lob" + + "(id int primary key, data blob)"); + PreparedStatement prep = conn.prepareStatement( + "insert into lob values(?, ?)"); + Random r = new Random(1); + byte[] buff = new byte[2 * 1024 * 1024]; + for (int i = 0; i < 10; i++) { + prep.setInt(1, i); + r.nextBytes(buff); + prep.setBinaryStream(2, new ByteArrayInputStream(buff), -1); + prep.execute(); + } + stat.execute("create table if not exists test" + + "(id int primary key, data varchar)"); + prep = conn.prepareStatement("insert into test values(?, ?)"); + for (int i = 0; i < 20000; i++) { + prep.setInt(1, i); + prep.setString(2, "Hello"); + prep.execute(); + } + conn.close(); + testTQ("LRU", false); + testTQ("TQ", true); + } + + private void testTQ(String cacheType, boolean scanResistant) throws Exception { + Connection conn = getConnection( + "cache;CACHE_TYPE=" + cacheType + ";CACHE_SIZE=4096"); + Statement stat = conn.createStatement(); + PreparedStatement prep; + for (int k = 0; k < 10; k++) { + int rc; + prep = conn.prepareStatement( + "select * from test where id = ?"); + rc = getReadCount(stat); + for (int x = 0; x < 2; x++) { + for (int i = 0; i < 15000; i++) { + prep.setInt(1, i); + prep.executeQuery(); + } + } + int rcData = getReadCount(stat) - rc; + if (scanResistant && k > 0) { + // TQ is expected to keep the data rows in the cache + // even if the LOB is read once in a while + assertEquals(0, rcData); + } else { + 
assertTrue(rcData > 0); + } + rc = getReadCount(stat); + ResultSet rs = stat.executeQuery( + "select * from lob where id = " + k); + rs.next(); + InputStream in = rs.getBinaryStream(2); + while (in.read() >= 0) { + // ignore + } + in.close(); + int rcLob = getReadCount(stat) - rc; + assertTrue(rcLob > 0); + } + conn.close(); + } + + private static int getReadCount(Statement stat) throws Exception { + ResultSet rs; + rs = stat.executeQuery( + "select value from information_schema.settings " + + "where name = 'info.FILE_READ'"); + rs.next(); + return rs.getInt(1); + } + + private void testMemoryUsage() throws SQLException { + if (!config.traceTest) { + return; + } + if (config.memory) { + return; + } + deleteDb("cache"); + Connection conn; + Statement stat; + ResultSet rs; + conn = getConnection("cache;CACHE_SIZE=16384"); + stat = conn.createStatement(); + // test DataOverflow + stat.execute("create table test(id int primary key, data varchar)"); + stat.execute("set max_memory_undo 10000"); + conn.close(); + stat = null; + conn = null; + long before = getRealMemory(); + + conn = getConnection("cache;CACHE_SIZE=16384;DB_CLOSE_ON_EXIT=FALSE"); + stat = conn.createStatement(); + + // -XX:+HeapDumpOnOutOfMemoryError + + stat.execute( + "insert into test select x, random_uuid() || space(1) " + + "from system_range(1, 10000)"); + + // stat.execute("create index idx_test_n on test(data)"); + // stat.execute("select data from test where data >= ''"); + + rs = stat.executeQuery( + "select value from information_schema.settings " + + "where name = 'info.CACHE_SIZE'"); + rs.next(); + int calculated = rs.getInt(1); + rs = null; + long afterInsert = getRealMemory(); + + conn.close(); + stat = null; + conn = null; + long afterClose = getRealMemory(); + trace("Used memory: " + (afterInsert - afterClose) + + " calculated cache size: " + calculated); + trace("Before: " + before + " after: " + afterInsert + + " after closing: " + afterClose); + } + + private int getRealMemory() { + 
StringUtils.clearCache(); + Value.clearCache(); + eatMemory(100); + freeMemory(); + System.gc(); + return Utils.getMemoryUsed(); + } + + private void testCache() { + out = ""; + Cache c = CacheLRU.getCache(this, "LRU", 16); + for (int i = 0; i < 20; i++) { + c.put(new Obj(i)); + } + assertEquals("flush 0 flush 1 flush 2 flush 3 ", out); + } + + /** + * A simple cache object + */ + static class Obj extends CacheObject { + + Obj(int pos) { + setPos(pos); + } + + @Override + public int getMemory() { + return 1024; + } + + @Override + public boolean canRemove() { + return true; + } + + @Override + public boolean isChanged() { + return true; + } + + @Override + public String toString() { + return "[" + getPos() + "]"; + } + + } + + @Override + public void flushLog() { + out += "flush "; + } + + @Override + public Trace getTrace() { + return null; + } + + @Override + public void writeBack(CacheObject entry) { + out += entry.getPos() + " "; + } + + private void testCacheDb(boolean lru) throws SQLException { + if (config.memory) { + return; + } + deleteDb("cache"); + Connection conn = getConnection( + "cache;CACHE_TYPE=" + (lru ? 
"LRU" : "SOFT_LRU")); + Statement stat = conn.createStatement(); + stat.execute("SET CACHE_SIZE 1024"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + stat.execute("CREATE TABLE MAIN(ID INT PRIMARY KEY, NAME VARCHAR)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?)"); + PreparedStatement prep2 = conn.prepareStatement( + "INSERT INTO MAIN VALUES(?, ?)"); + int max = 10000; + for (int i = 0; i < max; i++) { + prep.setInt(1, i); + prep.setString(2, "Hello " + i); + prep.execute(); + prep2.setInt(1, i); + prep2.setString(2, "World " + i); + prep2.execute(); + } + conn.close(); + conn = getConnection("cache"); + stat = conn.createStatement(); + stat.execute("SET CACHE_SIZE 1024"); + Random random = new Random(1); + for (int i = 0; i < 100; i++) { + stat.executeQuery("SELECT * FROM MAIN WHERE ID BETWEEN 40 AND 50"); + stat.executeQuery("SELECT * FROM MAIN WHERE ID = " + random.nextInt(max)); + if ((i % 10) == 0) { + stat.executeQuery("SELECT * FROM TEST"); + } + } + conn.close(); + deleteDb("cache"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestCharsetCollator.java b/modules/h2/src/test/java/org/h2/test/unit/TestCharsetCollator.java new file mode 100644 index 0000000000000..ace12a52f6149 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestCharsetCollator.java @@ -0,0 +1,70 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; +import java.nio.charset.UnsupportedCharsetException; +import java.text.Collator; +import org.h2.test.TestBase; +import org.h2.value.CharsetCollator; +import org.h2.value.CompareMode; + +/** + * Unittest for org.h2.value.CharsetCollator + */ +public class TestCharsetCollator extends TestBase { + private CharsetCollator cp500Collator = new CharsetCollator(Charset.forName("cp500")); + private CharsetCollator utf8Collator = new CharsetCollator(StandardCharsets.UTF_8); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + + @Override + public void test() throws Exception { + testBasicComparison(); + testNumberToCharacterComparison(); + testLengthComparison(); + testCreationFromCompareMode(); + testCreationFromCompareModeWithInvalidCharset(); + } + + private void testCreationFromCompareModeWithInvalidCharset() { + try { + CompareMode.getCollator("CHARSET_INVALID"); + fail(); + } catch (UnsupportedCharsetException e) { + // expected + } + } + + private void testCreationFromCompareMode() { + Collator utf8Col = CompareMode.getCollator("CHARSET_UTF-8"); + assertTrue(utf8Col instanceof CharsetCollator); + assertEquals(((CharsetCollator) utf8Col).getCharset(), StandardCharsets.UTF_8); + } + + private void testBasicComparison() { + assertTrue(cp500Collator.compare("A", "B") < 0); + assertTrue(cp500Collator.compare("AA", "AB") < 0); + } + + private void testLengthComparison() { + assertTrue(utf8Collator.compare("AA", "A") > 0); + } + + private void testNumberToCharacterComparison() { + assertTrue(cp500Collator.compare("A", "1") < 0); + assertTrue(utf8Collator.compare("A", "1") > 0); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestClassLoaderLeak.java 
b/modules/h2/src/test/java/org/h2/test/unit/TestClassLoaderLeak.java new file mode 100644 index 0000000000000..982900e8373ec --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestClassLoaderLeak.java @@ -0,0 +1,130 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.lang.ref.WeakReference; +import java.lang.reflect.Method; +import java.net.URLClassLoader; +import java.sql.Connection; +import java.sql.Driver; +import java.sql.DriverManager; +import java.util.ArrayList; +import org.h2.test.TestBase; +import org.h2.util.New; + +/** + * Test that static references within the database engine don't reference the + * class itself. For example, there is a leak if a class contains a static + * reference to a stack trace. This was the case using the following + * declaration: static EOFException EOF = new EOFException(). The way to solve + * the problem is to not use such references, or to not fill in the stack trace + * (which indirectly references the class loader). + * + * @author Erik Karlsson + * @author Thomas Mueller + */ +public class TestClassLoaderLeak extends TestBase { + + /** + * The name of this class (used by reflection). + */ + static final String CLASS_NAME = TestClassLoaderLeak.class.getName(); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + WeakReference<ClassLoader> ref = createClassLoader(); + for (int i = 0; i < 10; i++) { + System.gc(); + Thread.sleep(10); + } + ClassLoader cl = ref.get(); + assertTrue(cl == null); + // fill the memory, so a heap dump is created + // using -XX:+HeapDumpOnOutOfMemoryError + // which can be analyzed using EclipseMAT + // (check incoming references to TestClassLoader) + boolean fillMemory = false; + if (fillMemory) { + ArrayList<byte[]> memory = New.arrayList(); + for (int i = 0; i < Integer.MAX_VALUE; i++) { + memory.add(new byte[1024]); + } + } + DriverManager.registerDriver((Driver) + Class.forName("org.h2.Driver").newInstance()); + DriverManager.registerDriver((Driver) + Class.forName("org.h2.upgrade.v1_1.Driver").newInstance()); + } + + private static WeakReference<ClassLoader> createClassLoader() throws Exception { + ClassLoader cl = new TestClassLoader(); + Class<?> h2ConnectionTestClass = Class.forName(CLASS_NAME, true, cl); + Method testMethod = h2ConnectionTestClass.getDeclaredMethod("runTest"); + testMethod.setAccessible(true); + testMethod.invoke(null); + return new WeakReference<>(cl); + } + + /** + * This method is called using reflection. + */ + static void runTest() throws Exception { + Class.forName("org.h2.Driver"); + Class.forName("org.h2.upgrade.v1_1.Driver"); + Driver d1 = DriverManager.getDriver("jdbc:h2:mem:test"); + Driver d2 = DriverManager.getDriver("jdbc:h2v1_1:mem:test"); + Connection connection; + connection = DriverManager.getConnection("jdbc:h2:mem:test"); + DriverManager.deregisterDriver(d1); + DriverManager.deregisterDriver(d2); + connection.close(); + connection = null; + } + + /** + * The application class loader.
+ */ + private static class TestClassLoader extends URLClassLoader { + + public TestClassLoader() { + super(((URLClassLoader) TestClassLoader.class.getClassLoader()) + .getURLs(), ClassLoader.getSystemClassLoader()); + } + + // allows delegation of H2 to the AppClassLoader + @Override + public synchronized Class<?> loadClass(String name, boolean resolve) + throws ClassNotFoundException { + if (!name.contains(CLASS_NAME) && !name.startsWith("org.h2.")) { + return super.loadClass(name, resolve); + } + Class<?> c = findLoadedClass(name); + if (c == null) { + try { + c = findClass(name); + } catch (SecurityException e) { + return super.loadClass(name, resolve); + } catch (ClassNotFoundException e) { + return super.loadClass(name, resolve); + } + if (resolve) { + resolveClass(c); + } + } + return c; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestClearReferences.java b/modules/h2/src/test/java/org/h2/test/unit/TestClearReferences.java new file mode 100644 index 0000000000000..7c2280749fa05 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestClearReferences.java @@ -0,0 +1,228 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.File; +import java.lang.reflect.Field; +import java.lang.reflect.Modifier; +import java.util.ArrayList; +import org.h2.test.TestBase; +import org.h2.util.MathUtils; +import org.h2.util.New; +import org.h2.value.ValueInt; + +/** + * Tests if Tomcat would clear static fields when re-loading a web application.
+ * See also + * http://svn.apache.org/repos/asf/tomcat/trunk/java/org/apache/catalina + * /loader/WebappClassLoader.java + */ +public class TestClearReferences extends TestBase { + + private static final String[] KNOWN_REFRESHED = { + "org.h2.compress.CompressLZF.cachedHashTable", + "org.h2.engine.DbSettings.defaultSettings", + "org.h2.engine.SessionRemote.sessionFactory", + "org.h2.jdbcx.JdbcDataSourceFactory.cachedTraceSystem", + "org.h2.store.RecoverTester.instance", + "org.h2.store.fs.FilePath.defaultProvider", + "org.h2.store.fs.FilePath.providers", + "org.h2.store.fs.FilePath.tempRandom", + "org.h2.store.fs.FilePathRec.recorder", + "org.h2.store.fs.FileMemData.data", + "org.h2.tools.CompressTool.cachedBuffer", + "org.h2.util.CloseWatcher.queue", + "org.h2.util.CloseWatcher.refs", + "org.h2.util.DateTimeFunctions.MONTHS_AND_WEEKS", + "org.h2.util.DateTimeUtils.timeZone", + "org.h2.util.MathUtils.cachedSecureRandom", + "org.h2.util.NetUtils.cachedLocalAddress", + "org.h2.util.StringUtils.softCache", + "org.h2.util.JdbcUtils.allowedClassNames", + "org.h2.util.JdbcUtils.allowedClassNamePrefixes", + "org.h2.util.JdbcUtils.userClassFactories", + "org.h2.util.Task.counter", + "org.h2.util.ToChar.NAMES", + "org.h2.value.CompareMode.lastUsed", + "org.h2.value.Value.softCache", + }; + + /** + * Path to main sources. In IDE project may be located either in the root + * directory of repository or in the h2 subdirectory. + */ + private final String SOURCE_PATH = new File("h2/src/main/org/h2/Driver.java").exists() + ? "h2/src/main/" : "src/main/"; + + private boolean hasError; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + // initialize the known classes + MathUtils.secureRandomLong(); + ValueInt.get(1); + Class.forName("org.h2.store.fs.FileMemData"); + + clear(); + + if (hasError) { + fail("Tomcat may clear the field above when reloading the web app"); + } + for (String s : KNOWN_REFRESHED) { + String className = s.substring(0, s.lastIndexOf('.')); + String fieldName = s.substring(s.lastIndexOf('.') + 1); + Class<?> clazz = Class.forName(className); + try { + clazz.getDeclaredField(fieldName); + } catch (Exception e) { + fail(s); + } + } + } + + private void clear() throws Exception { + ArrayList<Class<?>> classes = New.arrayList(); + findClasses(classes, new File("bin/org/h2")); + findClasses(classes, new File("temp/org/h2")); + for (Class<?> clazz : classes) { + clearClass(clazz); + } + } + + private void findClasses(ArrayList<Class<?>> classes, File file) { + String name = file.getName(); + if (file.isDirectory()) { + if (name.equals("CVS") || name.equals(".svn")) { + return; + } + for (File f : file.listFiles()) { + findClasses(classes, f); + } + } else { + if (!name.endsWith(".class")) { + return; + } + if (name.indexOf('$') >= 0) { + return; + } + String className = file.getAbsolutePath().replace('\\', '/'); + className = className.substring(className.lastIndexOf("org/h2")); + String packageName = className.substring(0, className.lastIndexOf('/')); + if (!new File(SOURCE_PATH + packageName).exists()) { + return; + } + className = className.replace('/', '.'); + className = className.substring(0, className.length() - ".class".length()); + Class<?> clazz = null; + try { + clazz = Class.forName(className); + } catch (NoClassDefFoundError e) { + if (e.toString().contains("lucene")) { + // Lucene is not in the classpath, OK + } + } catch (ClassNotFoundException e) { + fail("Could not load " + className + ": " + e.toString()); + } + if (clazz != null) { + classes.add(clazz); + } + } + } +
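The field-clearing behavior this test guards against can be sketched in isolation: on web-app reload, a container such as Tomcat walks a class's declared fields via reflection and nulls out non-final static reference fields, so any static state H2 keeps must either survive that or be safe to lose. Below is a minimal, self-contained sketch of that mechanism; the `StaticHolder` class and its field names are hypothetical stand-ins, not part of H2 or Tomcat:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Hypothetical class standing in for one with static caches (not part of H2).
class StaticHolder {
    static Object CACHE = new Object();  // non-final static reference: gets nulled
    static final String NAME = "holder"; // static final: skipped by the sweep
}

public class ClearStaticsSketch {

    /**
     * Nulls every non-final, non-primitive static field of the given class,
     * roughly as a web container might do when unloading an application.
     */
    static void clearStatics(Class<?> clazz) throws Exception {
        for (Field field : clazz.getDeclaredFields()) {
            int mod = field.getModifiers();
            if (!Modifier.isStatic(mod) || Modifier.isFinal(mod)
                    || field.getType().isPrimitive()) {
                continue;
            }
            field.setAccessible(true);
            field.set(null, null); // drop the static reference
        }
    }

    public static void main(String[] args) throws Exception {
        clearStatics(StaticHolder.class);
        System.out.println(StaticHolder.CACHE); // nulled by the sweep
        System.out.println(StaticHolder.NAME);  // final field untouched
    }
}
```

The test above inverts this: it whitelists the fields known to be safely re-initialized (`KNOWN_REFRESHED`) and fails the build if any other static field would be swept.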
+ /** + * This is how Tomcat resets the fields as of 2009-01-30. + * + * @param clazz the class to clear + */ + private void clearClass(Class<?> clazz) throws Exception { + Field[] fields; + try { + fields = clazz.getDeclaredFields(); + } catch (NoClassDefFoundError e) { + if (e.toString().contains("lucene")) { + // Lucene is not in the classpath, OK + return; + } else if (e.toString().contains("jts")) { + // JTS is not in the classpath, OK + return; + } else if (e.toString().contains("slf4j")) { + // slf4j is not in the classpath, OK + return; + } + throw e; + } + for (Field field : fields) { + if (field.getType().isPrimitive() || field.getName().contains("$")) { + continue; + } + int modifiers = field.getModifiers(); + if (!Modifier.isStatic(modifiers)) { + continue; + } + field.setAccessible(true); + Object o = field.get(null); + if (o == null) { + continue; + } + if (Modifier.isFinal(modifiers)) { + if (field.getType().getName().startsWith("java.")) { + continue; + } + if (field.getType().getName().startsWith("javax.")) { + continue; + } + clearInstance(o); + } else { + clearField(clazz.getName() + "." + field.getName() + " = " + o); + } + } + } + + private void clearInstance(Object instance) throws Exception { + for (Field field : instance.getClass().getDeclaredFields()) { + if (field.getType().isPrimitive() || field.getName().contains("$")) { + continue; + } + int modifiers = field.getModifiers(); + if (Modifier.isStatic(modifiers) && Modifier.isFinal(modifiers)) { + continue; + } + field.setAccessible(true); + Object o = field.get(instance); + if (o == null) { + continue; + } + // loadedByThisOrChild + if (o.getClass().getName().startsWith("java.lang.")) { + continue; + } + if (o.getClass().isArray() && o.getClass().getComponentType().isPrimitive()) { + continue; + } + clearField(instance.getClass().getName() + "."
+ field.getName() + " = " + o); + } + } + + private void clearField(String s) { + for (String k : KNOWN_REFRESHED) { + if (s.startsWith(k)) { + return; + } + } + hasError = true; + System.out.println(s); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestCollation.java b/modules/h2/src/test/java/org/h2/test/unit/TestCollation.java new file mode 100644 index 0000000000000..eeb4ab6206d72 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestCollation.java @@ -0,0 +1,52 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; + +/** + * Test the ICU4J collator. + */ +public class TestCollation extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + deleteDb("collation"); + Connection conn = getConnection("collation"); + Statement stat = conn.createStatement(); + assertThrows(ErrorCode.INVALID_VALUE_2, stat). + execute("set collation xyz"); + stat.execute("set collation en"); + stat.execute("set collation default_en"); + assertThrows(ErrorCode.CLASS_NOT_FOUND_1, stat). + execute("set collation icu4j_en"); + + stat.execute("set collation ge"); + stat.execute("create table test(id int)"); + // the same as the current - ok + stat.execute("set collation ge"); + // not allowed to change now + assertThrows(ErrorCode.COLLATION_CHANGE_WITH_DATA_TABLE_1, stat). 
+ execute("set collation en"); + + conn.close(); + deleteDb("collation"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestCompress.java b/modules/h2/src/test/java/org/h2/test/unit/TestCompress.java new file mode 100644 index 0000000000000..7c801c6a138aa --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestCompress.java @@ -0,0 +1,329 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.compress.CompressLZF; +import org.h2.compress.Compressor; +import org.h2.engine.Constants; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.CompressTool; +import org.h2.util.IOUtils; +import org.h2.util.New; +import org.h2.util.Task; + +/** + * Data compression tests. + */ +public class TestCompress extends TestBase { + + private boolean testPerformance; + private final byte[] buff = new byte[10]; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (testPerformance) { + testDatabase(); + System.exit(0); + return; + } + testVariableSizeInt(); + testMultiThreaded(); + if (config.big) { + for (int i = 0; i < 100; i++) { + test(i); + } + for (int i = 100; i < 10000; i += i + i + 1) { + test(i); + } + } else { + test(0); + test(1); + test(7); + test(50); + test(200); + } + test(4000000); + testVariableEnd(); + } + + private void testVariableSizeInt() { + assertEquals(1, CompressTool.getVariableIntLength(0)); + assertEquals(2, CompressTool.getVariableIntLength(0x80)); + assertEquals(3, CompressTool.getVariableIntLength(0x4000)); + assertEquals(4, CompressTool.getVariableIntLength(0x200000)); + assertEquals(5, CompressTool.getVariableIntLength(0x10000000)); + assertEquals(5, CompressTool.getVariableIntLength(-1)); + for (int x = 0; x < 0x20000; x++) { + testVar(x); + testVar(Integer.MIN_VALUE + x); + testVar(Integer.MAX_VALUE - x); + testVar(0x200000 + x - 100); + testVar(0x10000000 + x - 100); + } + } + + private void testVar(int x) { + int len = CompressTool.getVariableIntLength(x); + int l2 = CompressTool.writeVariableInt(buff, 0, x); + assertEquals(len, l2); + int x2 = CompressTool.readVariableInt(buff, 0); + assertEquals(x2, x); + } + + private void testMultiThreaded() throws Exception { + Task[] tasks = new Task[3]; + for (int i = 0; i < tasks.length; i++) { + Task t = new Task() { + @Override + public void call() { + CompressTool tool = CompressTool.getInstance(); + byte[] b = new byte[1024]; + Random r = new Random(); + while (!stop) { + r.nextBytes(b); + byte[] test = tool.expand(tool.compress(b, "LZF")); + assertEquals(b, test); + } + } + }; + tasks[i] = t; + t.execute(); + } + Thread.sleep(1000); + for (Task t : tasks) { + t.get(); + } + } + + private void testVariableEnd() { + CompressTool utils = CompressTool.getInstance(); + StringBuilder b = new StringBuilder(); + for (int i = 
0; i < 90; i++) { + b.append('0'); + } + String prefix = b.toString(); + for (int i = 0; i < 100; i++) { + b = new StringBuilder(prefix); + for (int j = 0; j < i; j++) { + b.append((char) ('1' + j)); + } + String test = b.toString(); + byte[] in = test.getBytes(); + assertEquals(in, utils.expand(utils.compress(in, "LZF"))); + } + } + + private void testDatabase() throws Exception { + deleteDb("memFS:compress"); + Connection conn = getConnection("memFS:compress"); + Statement stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select table_name from information_schema.tables"); + Statement stat2 = conn.createStatement(); + while (rs.next()) { + String table = rs.getString(1); + if (!"COLLATIONS".equals(table)) { + stat2.execute("create table " + table + + " as select * from information_schema." + table); + } + } + conn.close(); + Compressor compress = new CompressLZF(); + int pageSize = Constants.DEFAULT_PAGE_SIZE; + byte[] buff2 = new byte[pageSize]; + byte[] test = new byte[2 * pageSize]; + compress.compress(buff2, pageSize, test, 0); + for (int j = 0; j < 4; j++) { + long time = System.nanoTime(); + for (int i = 0; i < 1000; i++) { + InputStream in = FileUtils.newInputStream("memFS:compress.h2.db"); + while (true) { + int len = in.read(buff2); + if (len < 0) { + break; + } + compress.compress(buff2, pageSize, test, 0); + } + in.close(); + } + System.out.println("compress: " + + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time) + + " ms"); + } + + for (int j = 0; j < 4; j++) { + ArrayList<byte[]> comp = New.arrayList(); + InputStream in = FileUtils.newInputStream("memFS:compress.h2.db"); + while (true) { + int len = in.read(buff2); + if (len < 0) { + break; + } + int b = compress.compress(buff2, pageSize, test, 0); + byte[] data = Arrays.copyOf(test, b); + comp.add(data); + } + in.close(); + byte[] result = new byte[pageSize]; + long time = System.nanoTime(); + for (int i = 0; i < 1000; i++) { + for (int k = 0; k < comp.size(); k++) { + byte[] 
data = comp.get(k); + compress.expand(data, 0, data.length, result, 0, pageSize); + } + } + System.out.println("expand: " + + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time) + + " ms"); + } + } + + private void test(int len) throws IOException { + testByteArray(len); + testByteBuffer(len); + } + + private void testByteArray(int len) throws IOException { + Random r = new Random(len); + for (int pattern = 0; pattern < 4; pattern++) { + byte[] b = new byte[len]; + switch (pattern) { + case 0: + // leave empty + break; + case 1: { + r.nextBytes(b); + break; + } + case 2: { + for (int x = 0; x < len; x++) { + b[x] = (byte) (x & 10); + } + break; + } + case 3: { + for (int x = 0; x < len; x++) { + b[x] = (byte) (x / 10); + } + break; + } + default: + } + if (r.nextInt(2) < 1) { + for (int x = 0; x < len; x++) { + if (r.nextInt(20) < 1) { + b[x] = (byte) (r.nextInt(255)); + } + } + } + CompressTool utils = CompressTool.getInstance(); + // level 9 is highest, strategy 2 is huffman only + for (String a : new String[] { "LZF", "No", + "Deflate", "Deflate level 9 strategy 2" }) { + long time = System.nanoTime(); + byte[] out = utils.compress(b, a); + byte[] test = utils.expand(out); + if (testPerformance) { + System.out.println("p:" + + pattern + + " len: " + + out.length + + " time: " + + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - + time) + " " + a); + } + assertEquals(b.length, test.length); + assertEquals(b, test); + Arrays.fill(test, (byte) 0); + CompressTool.expand(out, test, 0); + assertEquals(b, test); + } + for (String a : new String[] { null, "LZF", "DEFLATE", "ZIP", "GZIP" }) { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + OutputStream out2 = CompressTool.wrapOutputStream(out, a, "test"); + IOUtils.copy(new ByteArrayInputStream(b), out2); + out2.close(); + InputStream in = new ByteArrayInputStream(out.toByteArray()); + in = CompressTool.wrapInputStream(in, a, "test"); + out.reset(); + IOUtils.copy(in, out); + assertEquals(b, 
out.toByteArray()); + } + } + } + + private void testByteBuffer(int len) { + if (len < 4) { + return; + } + Random r = new Random(len); + CompressLZF comp = new CompressLZF(); + for (int pattern = 0; pattern < 4; pattern++) { + byte[] b = new byte[len]; + switch (pattern) { + case 0: + // leave empty + break; + case 1: { + r.nextBytes(b); + break; + } + case 2: { + for (int x = 0; x < len; x++) { + b[x] = (byte) (x & 10); + } + break; + } + case 3: { + for (int x = 0; x < len; x++) { + b[x] = (byte) (x / 10); + } + break; + } + default: + } + if (r.nextInt(2) < 1) { + for (int x = 0; x < len; x++) { + if (r.nextInt(20) < 1) { + b[x] = (byte) (r.nextInt(255)); + } + } + } + ByteBuffer buff = ByteBuffer.wrap(b); + byte[] temp = new byte[100 + b.length * 2]; + int compLen = comp.compress(buff, 0, temp, 0); + ByteBuffer test = ByteBuffer.wrap(temp, 0, compLen); + byte[] exp = new byte[b.length]; + CompressLZF.expand(test, ByteBuffer.wrap(exp)); + assertEquals(b, exp); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestConcurrent.java b/modules/h2/src/test/java/org/h2/test/unit/TestConcurrent.java new file mode 100644 index 0000000000000..c512479871d4c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestConcurrent.java @@ -0,0 +1,91 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.util.Task; + +/** + * Test concurrent access to JDBC objects. + */ +public class TestConcurrent extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + String url = "jdbc:h2:mem:"; + for (int i = 0; i < 50; i++) { + final int x = i % 4; + final Connection conn = DriverManager.getConnection(url); + final Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key)"); + String sql = ""; + switch (x % 6) { + case 0: + sql = "select 1"; + break; + case 1: + case 2: + sql = "delete from test"; + break; + } + final PreparedStatement prep = conn.prepareStatement(sql); + Task t = new Task() { + @Override + public void call() throws SQLException { + while (!conn.isClosed()) { + switch (x % 6) { + case 0: + prep.executeQuery(); + break; + case 1: + prep.execute(); + break; + case 2: + prep.executeUpdate(); + break; + case 3: + stat.executeQuery("select 1"); + break; + case 4: + stat.execute("select 1"); + break; + case 5: + stat.execute("delete from test"); + break; + } + } + } + }; + t.execute(); + Thread.sleep(100); + conn.close(); + SQLException e = (SQLException) t.getException(); + if (e != null) { + if (ErrorCode.OBJECT_CLOSED != e.getErrorCode() && + ErrorCode.STATEMENT_WAS_CANCELED != e.getErrorCode()) { + throw e; + } + } + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestConnectionInfo.java b/modules/h2/src/test/java/org/h2/test/unit/TestConnectionInfo.java new file mode 100644 index 0000000000000..5ea665fd08b22 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestConnectionInfo.java @@ -0,0 +1,96 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.File; +import java.util.Properties; + +import org.h2.api.ErrorCode; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.SysProperties; +import org.h2.test.TestBase; +import org.h2.tools.DeleteDbFiles; + +/** + * Test the ConnectionInfo class. + * + * @author Kerry Sainsbury + * @author Thomas Mueller Graf + */ +public class TestConnectionInfo extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String[] a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testImplicitRelativePath(); + testConnectInitError(); + testConnectionInfo(); + testName(); + } + + private void testImplicitRelativePath() throws Exception { + if (SysProperties.IMPLICIT_RELATIVE_PATH) { + return; + } + assertThrows(ErrorCode.URL_RELATIVE_TO_CWD, this). + getConnection("jdbc:h2:" + getTestName()); + assertThrows(ErrorCode.URL_RELATIVE_TO_CWD, this). + getConnection("jdbc:h2:data/" + getTestName()); + + getConnection("jdbc:h2:./data/" + getTestName()).close(); + DeleteDbFiles.execute("data", getTestName(), true); + } + + private void testConnectInitError() throws Exception { + assertThrows(ErrorCode.SYNTAX_ERROR_2, this). + getConnection("jdbc:h2:mem:;init=error"); + assertThrows(ErrorCode.IO_EXCEPTION_2, this). + getConnection("jdbc:h2:mem:;init=runscript from 'wrong.file'"); + } + + private void testConnectionInfo() { + Properties info = new Properties(); + ConnectionInfo connectionInfo = new ConnectionInfo( + "jdbc:h2:mem:" + getTestName() + + ";LOG=2" + + ";ACCESS_MODE_DATA=rws" + + ";INIT=CREATE this...\\;INSERT that..." 
+ + ";IFEXISTS=TRUE", + info); + + assertEquals("jdbc:h2:mem:" + getTestName(), + connectionInfo.getURL()); + + assertEquals("2", + connectionInfo.getProperty("LOG", "")); + assertEquals("rws", + connectionInfo.getProperty("ACCESS_MODE_DATA", "")); + assertEquals("CREATE this...;INSERT that...", + connectionInfo.getProperty("INIT", "")); + assertEquals("TRUE", + connectionInfo.getProperty("IFEXISTS", "")); + assertEquals("undefined", + connectionInfo.getProperty("CACHE_TYPE", "undefined")); + } + + private void testName() throws Exception { + char differentFileSeparator = File.separatorChar == '/' ? '\\' : '/'; + ConnectionInfo connectionInfo = new ConnectionInfo("./test" + + differentFileSeparator + "subDir"); + File file = new File("test" + File.separatorChar + "subDir"); + assertEquals(file.getCanonicalPath().replace('\\', '/'), + connectionInfo.getName()); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestDataPage.java b/modules/h2/src/test/java/org/h2/test/unit/TestDataPage.java new file mode 100644 index 0000000000000..8781feb2e6c76 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestDataPage.java @@ -0,0 +1,353 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.math.BigDecimal; +import java.sql.Date; +import java.sql.Time; +import java.sql.Types; +import java.util.concurrent.TimeUnit; + +import org.h2.api.JavaObjectSerializer; +import org.h2.store.Data; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.store.LobStorageBackend; +import org.h2.test.TestBase; +import org.h2.tools.SimpleResultSet; +import org.h2.util.SmallLRUCache; +import org.h2.util.TempFileDeleter; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueByte; +import org.h2.value.ValueBytes; +import org.h2.value.ValueDate; +import org.h2.value.ValueDecimal; +import org.h2.value.ValueDouble; +import org.h2.value.ValueFloat; +import org.h2.value.ValueInt; +import org.h2.value.ValueJavaObject; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueResultSet; +import org.h2.value.ValueShort; +import org.h2.value.ValueString; +import org.h2.value.ValueStringFixed; +import org.h2.value.ValueStringIgnoreCase; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; +import org.h2.value.ValueUuid; + +/** + * Data page tests. + */ +public class TestDataPage extends TestBase implements DataHandler { + + private boolean testPerformance; + private final CompareMode compareMode = CompareMode.getInstance(null, 0); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + if (testPerformance) { + testPerformance(); + System.exit(0); + return; + } + testValues(); + testAll(); + } + + private static void testPerformance() { + Data data = Data.create(null, 1024); + for (int j = 0; j < 4; j++) { + long time = System.nanoTime(); + for (int i = 0; i < 100000; i++) { + data.reset(); + for (int k = 0; k < 30; k++) { + data.writeString("Hello World"); + } + } + // for (int i = 0; i < 5000000; i++) { + // data.reset(); + // for (int k = 0; k < 100; k++) { + // data.writeInt(k * k); + // } + // } + // for (int i = 0; i < 200000; i++) { + // data.reset(); + // for (int k = 0; k < 100; k++) { + // data.writeVarInt(k * k); + // } + // } + System.out.println("write: " + + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time) + + " ms"); + } + for (int j = 0; j < 4; j++) { + long time = System.nanoTime(); + for (int i = 0; i < 1000000; i++) { + data.reset(); + for (int k = 0; k < 30; k++) { + data.readString(); + } + } + // for (int i = 0; i < 3000000; i++) { + // data.reset(); + // for (int k = 0; k < 100; k++) { + // data.readVarInt(); + // } + // } + // for (int i = 0; i < 50000000; i++) { + // data.reset(); + // for (int k = 0; k < 100; k++) { + // data.readInt(); + // } + // } + System.out.println("read: " + + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time) + + " ms"); + } + } + + private void testValues() { + testValue(ValueNull.INSTANCE); + testValue(ValueBoolean.FALSE); + testValue(ValueBoolean.TRUE); + for (int i = 0; i < 256; i++) { + testValue(ValueByte.get((byte) i)); + } + for (int i = 0; i < 256 * 256; i += 10) { + testValue(ValueShort.get((short) i)); + } + for (int i = 0; i < 256 * 256; i += 10) { + testValue(ValueInt.get(i)); + testValue(ValueInt.get(-i)); + testValue(ValueLong.get(i)); + testValue(ValueLong.get(-i)); + } + testValue(ValueInt.get(Integer.MAX_VALUE)); + testValue(ValueInt.get(Integer.MIN_VALUE)); + for 
(long i = 0; i < Integer.MAX_VALUE; i += 10 + i / 4) { + testValue(ValueInt.get((int) i)); + testValue(ValueInt.get((int) -i)); + } + testValue(ValueLong.get(Long.MAX_VALUE)); + testValue(ValueLong.get(Long.MIN_VALUE)); + for (long i = 0; i >= 0; i += 10 + i / 4) { + testValue(ValueLong.get(i)); + testValue(ValueLong.get(-i)); + } + testValue(ValueDecimal.get(BigDecimal.ZERO)); + testValue(ValueDecimal.get(BigDecimal.ONE)); + testValue(ValueDecimal.get(BigDecimal.TEN)); + testValue(ValueDecimal.get(BigDecimal.ONE.negate())); + testValue(ValueDecimal.get(BigDecimal.TEN.negate())); + for (long i = 0; i >= 0; i += 10 + i / 4) { + testValue(ValueDecimal.get(new BigDecimal(i))); + testValue(ValueDecimal.get(new BigDecimal(-i))); + for (int j = 0; j < 200; j += 50) { + testValue(ValueDecimal.get(new BigDecimal(i).setScale(j))); + testValue(ValueDecimal.get(new BigDecimal(i * i).setScale(j))); + } + testValue(ValueDecimal.get(new BigDecimal(i * i))); + } + testValue(ValueDate.get(new Date(System.currentTimeMillis()))); + testValue(ValueDate.get(new Date(0))); + testValue(ValueTime.get(new Time(System.currentTimeMillis()))); + testValue(ValueTime.get(new Time(0))); + testValue(ValueTimestamp.fromMillis(System.currentTimeMillis())); + testValue(ValueTimestamp.fromMillis(0)); + testValue(ValueTimestampTimeZone.parse("2000-01-01 10:00:00")); + testValue(ValueJavaObject.getNoCopy(null, new byte[0], this)); + testValue(ValueJavaObject.getNoCopy(null, new byte[100], this)); + for (int i = 0; i < 300; i++) { + testValue(ValueBytes.getNoCopy(new byte[i])); + } + for (int i = 0; i < 65000; i += 10 + i) { + testValue(ValueBytes.getNoCopy(new byte[i])); + } + testValue(ValueUuid.getNewRandom()); + for (int i = 0; i < 100; i++) { + testValue(ValueString.get(new String(new char[i]))); + } + for (int i = 0; i < 65000; i += 10 + i) { + testValue(ValueString.get(new String(new char[i]))); + testValue(ValueStringFixed.get(new String(new char[i]))); + testValue(ValueStringIgnoreCase.get(new 
String(new char[i]))); + } + testValue(ValueFloat.get(0f)); + testValue(ValueFloat.get(1f)); + testValue(ValueFloat.get(-1f)); + testValue(ValueDouble.get(0)); + testValue(ValueDouble.get(1)); + testValue(ValueDouble.get(-1)); + for (int i = 0; i < 65000; i += 10 + i) { + for (double j = 0.1; j < 65000; j += 10 + j) { + testValue(ValueFloat.get((float) (i / j))); + testValue(ValueDouble.get(i / j)); + testValue(ValueFloat.get((float) -(i / j))); + testValue(ValueDouble.get(-(i / j))); + } + } + testValue(ValueArray.get(new Value[0])); + testValue(ValueArray.get(new Value[] { ValueBoolean.TRUE, + ValueInt.get(10) })); + + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + rs.addColumn("ID", Types.INTEGER, 0, 0); + rs.addColumn("NAME", Types.VARCHAR, 255, 0); + rs.addRow(1, "Hello"); + rs.addRow(2, "World"); + rs.addRow(3, "Peace"); + testValue(ValueResultSet.get(rs)); + } + + private void testValue(Value v) { + Data data = Data.create(null, 1024); + data.checkCapacity((int) v.getPrecision()); + data.writeValue(v); + data.writeInt(123); + data.reset(); + Value v2 = data.readValue(); + assertEquals(v.getType(), v2.getType()); + assertEquals(0, v.compareTo(v2, compareMode)); + assertEquals(123, data.readInt()); + } + + private void testAll() { + Data page = Data.create(this, 128); + + char[] data = new char[0x10000]; + for (int i = 0; i < data.length; i++) { + data[i] = (char) i; + } + String s = new String(data); + page.checkCapacity(s.length() * 4); + page.writeString(s); + int len = page.length(); + assertEquals(len, Data.getStringLen(s)); + page.reset(); + assertEquals(s, page.readString()); + page.reset(); + + page.writeString("H\u1111!"); + page.writeString("John\tBrack's \"how are you\" M\u1111ller"); + page.writeValue(ValueInt.get(10)); + page.writeValue(ValueString.get("test")); + page.writeValue(ValueFloat.get(-2.25f)); + page.writeValue(ValueDouble.get(10.40)); + page.writeValue(ValueNull.INSTANCE); + trace(new String(page.getBytes())); 
+ page.reset(); + + trace(page.readString()); + trace(page.readString()); + trace(page.readValue().getInt()); + trace(page.readValue().getString()); + trace("" + page.readValue().getFloat()); + trace("" + page.readValue().getDouble()); + trace(page.readValue().toString()); + page.reset(); + + page.writeInt(0); + page.writeInt(Integer.MAX_VALUE); + page.writeInt(Integer.MIN_VALUE); + page.writeInt(1); + page.writeInt(-1); + page.writeInt(1234567890); + page.writeInt(54321); + trace(new String(page.getBytes())); + page.reset(); + trace(page.readInt()); + trace(page.readInt()); + trace(page.readInt()); + trace(page.readInt()); + trace(page.readInt()); + trace(page.readInt()); + trace(page.readInt()); + + page = null; + } + + @Override + public String getDatabasePath() { + return null; + } + + @Override + public FileStore openFile(String name, String mode, boolean mustExist) { + return null; + } + + @Override + public void checkPowerOff() { + // nothing to do + } + + @Override + public void checkWritingAllowed() { + // ok + } + + @Override + public int getMaxLengthInplaceLob() { + throw new AssertionError(); + } + + @Override + public String getLobCompressionAlgorithm(int type) { + throw new AssertionError(); + } + + @Override + public Object getLobSyncObject() { + return this; + } + + @Override + public SmallLRUCache<String, String[]> getLobFileListCache() { + return null; + } + + @Override + public TempFileDeleter getTempFileDeleter() { + return TempFileDeleter.getInstance(); + } + + @Override + public LobStorageBackend getLobStorage() { + return null; + } + + @Override + public int readLob(long lobId, byte[] hmac, long offset, byte[] buff, + int off, int length) { + return -1; + } + + @Override + public JavaObjectSerializer getJavaObjectSerializer() { + return null; + } + + @Override + public CompareMode getCompareMode() { + return compareMode; + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestDate.java b/modules/h2/src/test/java/org/h2/test/unit/TestDate.java new 
file mode 100644 index 0000000000000..ec9e5f4bcce9c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestDate.java @@ -0,0 +1,521 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Date; +import java.sql.SQLException; +import java.sql.Time; +import java.sql.Timestamp; +import java.util.ArrayList; +import java.util.Calendar; +import java.util.GregorianCalendar; +import java.util.TimeZone; +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.test.TestBase; +import org.h2.test.utils.AssertThrows; +import org.h2.util.DateTimeUtils; +import org.h2.util.New; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueDouble; +import org.h2.value.ValueInt; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; + +/** + * Tests the date parsing. The problem is that some dates are not allowed + * because of the summer time change. Most countries change at 2 o'clock in the + * morning to 3 o'clock, but some (for example Chile) change at midnight. + * Non-lenient parsing would not work in this case. + */ +public class TestDate extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testValueDate(); + testValueTime(); + testValueTimestamp(); + testValueTimestampWithTimezone(); + testValidDate(); + testAbsoluteDay(); + testCalculateLocalMillis(); + testDateTimeUtils(); + } + + private void testValueDate() { + assertEquals("2000-01-01", + ValueDate.get(Date.valueOf("2000-01-01")).getString()); + assertEquals("0-00-00", + ValueDate.fromDateValue(0).getString()); + assertEquals("9999-12-31", + ValueDate.parse("9999-12-31").getString()); + assertEquals("-9999-12-31", + ValueDate.parse("-9999-12-31").getString()); + assertEquals(Integer.MAX_VALUE + "-12-31", + ValueDate.parse(Integer.MAX_VALUE + "-12-31").getString()); + assertEquals(Integer.MIN_VALUE + "-12-31", + ValueDate.parse(Integer.MIN_VALUE + "-12-31").getString()); + ValueDate d1 = ValueDate.parse("2001-01-01"); + assertEquals("2001-01-01", d1.getDate().toString()); + assertEquals("DATE '2001-01-01'", d1.getSQL()); + assertEquals("DATE '2001-01-01'", d1.toString()); + assertEquals(Value.DATE, d1.getType()); + long dv = d1.getDateValue(); + assertEquals((int) ((dv >>> 32) ^ dv), d1.hashCode()); + assertEquals(d1.getString().length(), d1.getDisplaySize()); + assertEquals(ValueDate.PRECISION, d1.getPrecision()); + assertEquals("java.sql.Date", d1.getObject().getClass().getName()); + ValueDate d1b = ValueDate.parse("2001-01-01"); + assertTrue(d1 == d1b); + Value.clearCache(); + d1b = ValueDate.parse("2001-01-01"); + assertFalse(d1 == d1b); + assertTrue(d1.equals(d1)); + assertTrue(d1.equals(d1b)); + assertTrue(d1b.equals(d1)); + assertEquals(0, d1.compareTo(d1b, null)); + assertEquals(0, d1b.compareTo(d1, null)); + ValueDate d2 = ValueDate.parse("2002-02-02"); + assertFalse(d1.equals(d2)); + assertFalse(d2.equals(d1)); + assertEquals(-1, d1.compareTo(d2, null)); + assertEquals(1, d2.compareTo(d1, null)); + + // can't convert using java.util.Date + assertEquals( + 
Integer.MAX_VALUE + "-12-31 00:00:00", + ValueDate.parse(Integer.MAX_VALUE + "-12-31"). + convertTo(Value.TIMESTAMP).getString()); + assertEquals( + Integer.MIN_VALUE + "-12-31 00:00:00", + ValueDate.parse(Integer.MIN_VALUE + "-12-31"). + convertTo(Value.TIMESTAMP).getString()); + assertEquals( + "00:00:00", + ValueDate.parse(Integer.MAX_VALUE + "-12-31"). + convertTo(Value.TIME).getString()); + assertEquals( + "00:00:00", + ValueDate.parse(Integer.MIN_VALUE + "-12-31"). + convertTo(Value.TIME).getString()); + } + + private void testValueTime() { + assertEquals("10:20:30", ValueTime.get(Time.valueOf("10:20:30")).getString()); + assertEquals("00:00:00", ValueTime.fromNanos(0).getString()); + assertEquals("23:59:59", ValueTime.parse("23:59:59").getString()); + assertEquals("11:22:33.444555666", ValueTime.parse("11:22:33.444555666").getString()); + if (SysProperties.UNLIMITED_TIME_RANGE) { + assertEquals("99:59:59", ValueTime.parse("99:59:59").getString()); + assertEquals("-00:10:10", ValueTime.parse("-00:10:10").getString()); + assertEquals("-99:02:03.001002003", + ValueTime.parse("-99:02:03.001002003").getString()); + assertEquals("-99:02:03.001002", + ValueTime.parse("-99:02:03.001002000").getString()); + assertEquals("-99:02:03", + ValueTime.parse("-99:02:03.0000000000001").getString()); + assertEquals("1999999:59:59.999999999", + ValueTime.parse("1999999:59:59.999999999").getString()); + assertEquals("-1999999:59:59.999999999", + ValueTime.parse("-1999999:59:59.999999999").getString()); + assertEquals("2562047:47:16.854775807", + ValueTime.fromNanos(Long.MAX_VALUE).getString()); + assertEquals("-2562047:47:16.854775808", + ValueTime.fromNanos(Long.MIN_VALUE).getString()); + } else { + try { + ValueTime.parse("-00:00:00.000000001"); + fail(); + } catch (DbException ex) { + assertEquals(ErrorCode.INVALID_DATETIME_CONSTANT_2, ex.getErrorCode()); + } + try { + ValueTime.parse("24:00:00"); + fail(); + } catch (DbException ex) { + 
assertEquals(ErrorCode.INVALID_DATETIME_CONSTANT_2, ex.getErrorCode()); + } + } + ValueTime t1 = ValueTime.parse("11:11:11"); + assertEquals("11:11:11", t1.getTime().toString()); + assertEquals("1970-01-01", t1.getDate().toString()); + assertEquals("TIME '11:11:11'", t1.getSQL()); + assertEquals("TIME '11:11:11'", t1.toString()); + assertEquals(1, t1.getSignum()); + assertEquals(0, t1.multiply(ValueInt.get(0)).getSignum()); + assertEquals(0, t1.subtract(t1).getSignum()); + assertEquals("05:35:35.5", t1.multiply(ValueDouble.get(0.5)).getString()); + assertEquals("22:22:22", t1.divide(ValueDouble.get(0.5)).getString()); + assertEquals(Value.TIME, t1.getType()); + long nanos = t1.getNanos(); + assertEquals((int) ((nanos >>> 32) ^ nanos), t1.hashCode()); + // Literals return maximum precision + assertEquals(ValueTime.MAXIMUM_PRECISION, t1.getDisplaySize()); + assertEquals(ValueTime.MAXIMUM_PRECISION, t1.getPrecision()); + assertEquals("java.sql.Time", t1.getObject().getClass().getName()); + ValueTime t1b = ValueTime.parse("11:11:11"); + assertTrue(t1 == t1b); + Value.clearCache(); + t1b = ValueTime.parse("11:11:11"); + assertFalse(t1 == t1b); + assertTrue(t1.equals(t1)); + assertTrue(t1.equals(t1b)); + assertTrue(t1b.equals(t1)); + assertEquals(0, t1.compareTo(t1b, null)); + assertEquals(0, t1b.compareTo(t1, null)); + ValueTime t2 = ValueTime.parse("22:22:22"); + assertFalse(t1.equals(t2)); + assertFalse(t2.equals(t1)); + assertEquals(-1, t1.compareTo(t2, null)); + assertEquals(1, t2.compareTo(t1, null)); + + if (SysProperties.UNLIMITED_TIME_RANGE) { + assertEquals(-1, t1.negate().getSignum()); + assertEquals("-11:11:11", t1.negate().getString()); + assertEquals("11:11:11", t1.negate().negate().getString()); + assertEquals("33:33:33", t1.add(t2).getString()); + assertEquals("33:33:33", t1.multiply(ValueInt.get(4)).subtract(t1).getString()); + + // can't convert using java.util.Date + assertEquals( + "1969-12-31 23:00:00.0", + ValueTime.parse("-1:00:00"). 
+ convertTo(Value.TIMESTAMP).getString()); + assertEquals( + "1970-01-01", + ValueTime.parse("-1:00:00"). + convertTo(Value.DATE).getString()); + } + } + + private void testValueTimestampWithTimezone() { + for (int m = 1; m <= 12; m++) { + for (int d = 1; d <= 28; d++) { + for (int h = 0; h <= 23; h++) { + String s = "2011-" + (m < 10 ? "0" : "") + m + + "-" + (d < 10 ? "0" : "") + d + " " + + (h < 10 ? "0" : "") + h + ":00:00"; + ValueTimestamp ts = ValueTimestamp.parse(s + "Z"); + String s2 = ts.getString(); + ValueTimestamp ts2 = ValueTimestamp.parse(s2); + assertEquals(ts.getString(), ts2.getString()); + } + } + } + } + + @SuppressWarnings("unlikely-arg-type") + private void testValueTimestamp() { + assertEquals( + "2001-02-03 04:05:06", ValueTimestamp.get( + Timestamp.valueOf( + "2001-02-03 04:05:06")).getString()); + assertEquals( + "2001-02-03 04:05:06.001002003", ValueTimestamp.get( + Timestamp.valueOf( + "2001-02-03 04:05:06.001002003")).getString()); + assertEquals( + "0-00-00 00:00:00", ValueTimestamp.fromDateValueAndNanos(0, 0).getString()); + assertEquals( + "9999-12-31 23:59:59", + ValueTimestamp.parse( + "9999-12-31 23:59:59").getString()); + + assertEquals( + Integer.MAX_VALUE + + "-12-31 01:02:03.04050607", + ValueTimestamp.parse(Integer.MAX_VALUE + + "-12-31 01:02:03.0405060708").getString()); + assertEquals( + Integer.MIN_VALUE + + "-12-31 01:02:03.04050607", + ValueTimestamp.parse(Integer.MIN_VALUE + + "-12-31 01:02:03.0405060708").getString()); + + ValueTimestamp t1 = ValueTimestamp.parse("2001-01-01 01:01:01.111"); + assertEquals("2001-01-01 01:01:01.111", t1.getTimestamp().toString()); + assertEquals("2001-01-01", t1.getDate().toString()); + assertEquals("01:01:01", t1.getTime().toString()); + assertEquals("TIMESTAMP '2001-01-01 01:01:01.111'", t1.getSQL()); + assertEquals("TIMESTAMP '2001-01-01 01:01:01.111'", t1.toString()); + assertEquals(Value.TIMESTAMP, t1.getType()); + long dateValue = t1.getDateValue(); + long nanos = 
t1.getTimeNanos(); + assertEquals((int) ((dateValue >>> 32) ^ dateValue ^ + (nanos >>> 32) ^ nanos), + t1.hashCode()); + // Literals return maximum precision + assertEquals(ValueTimestamp.MAXIMUM_PRECISION, t1.getDisplaySize()); + assertEquals(ValueTimestamp.MAXIMUM_PRECISION, t1.getPrecision()); + assertEquals(9, t1.getScale()); + assertEquals("java.sql.Timestamp", t1.getObject().getClass().getName()); + ValueTimestamp t1b = ValueTimestamp.parse("2001-01-01 01:01:01.111"); + assertTrue(t1 == t1b); + Value.clearCache(); + t1b = ValueTimestamp.parse("2001-01-01 01:01:01.111"); + assertFalse(t1 == t1b); + assertTrue(t1.equals(t1)); + assertTrue(t1.equals(t1b)); + assertTrue(t1b.equals(t1)); + assertEquals(0, t1.compareTo(t1b, null)); + assertEquals(0, t1b.compareTo(t1, null)); + ValueTimestamp t2 = ValueTimestamp.parse("2002-02-02 02:02:02.222"); + assertFalse(t1.equals(t2)); + assertFalse(t2.equals(t1)); + assertEquals(-1, t1.compareTo(t2, null)); + assertEquals(1, t2.compareTo(t1, null)); + t1 = ValueTimestamp.parse("2001-01-01 01:01:01.123456789"); + assertEquals("2001-01-01 01:01:01.123456789", + t1.getString()); + assertEquals("2001-01-01 01:01:01.123456789", + t1.convertScale(true, 10).getString()); + assertEquals("2001-01-01 01:01:01.123456789", + t1.convertScale(true, 9).getString()); + assertEquals("2001-01-01 01:01:01.12345679", + t1.convertScale(true, 8).getString()); + assertEquals("2001-01-01 01:01:01.1234568", + t1.convertScale(true, 7).getString()); + assertEquals("2001-01-01 01:01:01.123457", + t1.convertScale(true, 6).getString()); + assertEquals("2001-01-01 01:01:01.12346", + t1.convertScale(true, 5).getString()); + assertEquals("2001-01-01 01:01:01.1235", + t1.convertScale(true, 4).getString()); + assertEquals("2001-01-01 01:01:01.123", + t1.convertScale(true, 3).getString()); + assertEquals("2001-01-01 01:01:01.12", + t1.convertScale(true, 2).getString()); + assertEquals("2001-01-01 01:01:01.1", + t1.convertScale(true, 1).getString()); + 
assertEquals("2001-01-01 01:01:01", + t1.convertScale(true, 0).getString()); + t1 = ValueTimestamp.parse("-2001-01-01 01:01:01.123456789"); + assertEquals("-2001-01-01 01:01:01.123457", + t1.convertScale(true, 6).getString()); + // classes do not match + assertFalse(ValueTimestamp.parse("2001-01-01"). + equals(ValueDate.parse("2001-01-01"))); + + assertEquals("2001-01-01 01:01:01", + ValueTimestamp.parse("2001-01-01").add( + ValueTime.parse("01:01:01")).getString()); + assertEquals("1010-10-10 00:00:00", + ValueTimestamp.parse("1010-10-10 10:10:10").subtract( + ValueTime.parse("10:10:10")).getString()); + assertEquals("-2001-01-01 01:01:01", + ValueTimestamp.parse("-2001-01-01").add( + ValueTime.parse("01:01:01")).getString()); + assertEquals("-1010-10-10 00:00:00", + ValueTimestamp.parse("-1010-10-10 10:10:10").subtract( + ValueTime.parse("10:10:10")).getString()); + + if (SysProperties.UNLIMITED_TIME_RANGE) { + assertEquals("2001-01-02 01:01:01", + ValueTimestamp.parse("2001-01-01").add( + ValueTime.parse("25:01:01")).getString()); + assertEquals("1010-10-10 10:00:00", + ValueTimestamp.parse("1010-10-11 10:10:10").subtract( + ValueTime.parse("24:10:10")).getString()); + } + + assertEquals(0, DateTimeUtils.absoluteDayFromDateValue( + ValueTimestamp.parse("1970-01-01").getDateValue())); + assertEquals(0, ValueTimestamp.parse( + "1970-01-01").getTimeNanos()); + assertEquals(0, ValueTimestamp.parse( + "1970-01-01 00:00:00.000 UTC").getTimestamp().getTime()); + assertEquals(0, ValueTimestamp.parse( + "+1970-01-01T00:00:00.000Z").getTimestamp().getTime()); + assertEquals(0, ValueTimestamp.parse( + "1970-01-01T00:00:00.000+00:00").getTimestamp().getTime()); + assertEquals(0, ValueTimestamp.parse( + "1970-01-01T00:00:00.000-00:00").getTimestamp().getTime()); + new AssertThrows(ErrorCode.INVALID_DATETIME_CONSTANT_2) { + @Override + public void test() { + ValueTimestamp.parse("1970-01-01 00:00:00.000 ABC"); + } + }; + new AssertThrows(ErrorCode.INVALID_DATETIME_CONSTANT_2) 
{ + @Override + public void test() { + ValueTimestamp.parse("1970-01-01T00:00:00.000+ABC"); + } + }; + } + + private void testAbsoluteDay() { + long next = Long.MIN_VALUE; + for (int y = -2000; y < 3000; y++) { + for (int m = -3; m <= 14; m++) { + for (int d = -2; d <= 35; d++) { + if (!DateTimeUtils.isValidDate(y, m, d)) { + continue; + } + long date = DateTimeUtils.dateValue(y, m, d); + long abs = DateTimeUtils.absoluteDayFromDateValue(date); + if (abs != next && next != Long.MIN_VALUE) { + assertEquals(abs, next); + } + if (m == 1 && d == 1) { + assertEquals(abs, DateTimeUtils.absoluteDayFromYear(y)); + } + next = abs + 1; + long d2 = DateTimeUtils.dateValueFromAbsoluteDay(abs); + assertEquals(date, d2); + assertEquals(y, DateTimeUtils.yearFromDateValue(date)); + assertEquals(m, DateTimeUtils.monthFromDateValue(date)); + assertEquals(d, DateTimeUtils.dayFromDateValue(date)); + long nextDateValue = DateTimeUtils.dateValueFromAbsoluteDay(next); + assertEquals(nextDateValue, DateTimeUtils.incrementDateValue(date)); + assertEquals(date, DateTimeUtils.decrementDateValue(nextDateValue)); + } + } + } + } + + private void testValidDate() { + Calendar c = DateTimeUtils.createGregorianCalendar(DateTimeUtils.UTC); + c.setLenient(false); + for (int y = -2000; y < 3000; y++) { + for (int m = -3; m <= 14; m++) { + for (int d = -2; d <= 35; d++) { + boolean valid = DateTimeUtils.isValidDate(y, m, d); + if (m < 1 || m > 12) { + assertFalse(valid); + } else if (d < 1 || d > 31) { + assertFalse(valid); + } else if (y != 1582 && d >= 1 && d <= 27) { + assertTrue(valid); + } else { + if (y <= 0) { + c.set(Calendar.ERA, GregorianCalendar.BC); + c.set(Calendar.YEAR, 1 - y); + } else { + c.set(Calendar.ERA, GregorianCalendar.AD); + c.set(Calendar.YEAR, y); + } + c.set(Calendar.MONTH, m - 1); + c.set(Calendar.DAY_OF_MONTH, d); + boolean expected = true; + try { + c.getTimeInMillis(); + } catch (Exception e) { + expected = false; + } + if (expected != valid) { + fail(y + "-" + m + "-" + 
d + + " expected: " + expected + " got: " + valid); + } + } + } + } + } + } + + private static void testCalculateLocalMillis() { + TimeZone defaultTimeZone = TimeZone.getDefault(); + try { + for (TimeZone tz : TestDate.getDistinctTimeZones()) { + TimeZone.setDefault(tz); + for (int y = 1900; y < 2039; y += 10) { + if (y == 1993) { + // timezone change in Kwajalein + } else if (y == 1995) { + // timezone change in Enderbury and Kiritimati + } + for (int m = 1; m <= 12; m++) { + if (m != 3 && m != 4 && m != 10 && m != 11) { + // only test daylight saving time transitions + continue; + } + for (int day = 1; day < 29; day++) { + testDate(y, m, day); + } + } + } + } + } finally { + TimeZone.setDefault(defaultTimeZone); + } + } + + private static void testDate(int y, int m, int day) { + long millis = DateTimeUtils.getMillis( + TimeZone.getDefault(), y, m, day, 0, 0, 0, 0); + String st = new java.sql.Date(millis).toString(); + int y2 = Integer.parseInt(st.substring(0, 4)); + int m2 = Integer.parseInt(st.substring(5, 7)); + int d2 = Integer.parseInt(st.substring(8, 10)); + if (y != y2 || m != m2 || day != d2) { + String s = y + "-" + (m < 10 ? "0" + m : m) + + "-" + (day < 10 ? "0" + day : day); + System.out.println(s + "<>" + st + " " + TimeZone.getDefault().getID()); + } + } + + /** + * Get the list of timezones with distinct rules. 
+ * + * @return the list + */ + public static ArrayList<TimeZone> getDistinctTimeZones() { + ArrayList<TimeZone> distinct = New.arrayList(); + for (String id : TimeZone.getAvailableIDs()) { + TimeZone t = TimeZone.getTimeZone(id); + for (TimeZone d : distinct) { + if (t.hasSameRules(d)) { + t = null; + break; + } + } + if (t != null) { + distinct.add(t); + } + } + return distinct; + } + + private void testDateTimeUtils() { + ValueTimestamp ts1 = ValueTimestamp.parse("-999-08-07 13:14:15.16"); + ValueTimestamp ts2 = ValueTimestamp.parse("19999-08-07 13:14:15.16"); + ValueTime t1 = (ValueTime) ts1.convertTo(Value.TIME); + ValueTime t2 = (ValueTime) ts2.convertTo(Value.TIME); + ValueDate d1 = (ValueDate) ts1.convertTo(Value.DATE); + ValueDate d2 = (ValueDate) ts2.convertTo(Value.DATE); + assertEquals("-999-08-07 13:14:15.16", ts1.getString()); + assertEquals("-999-08-07", d1.getString()); + assertEquals("13:14:15.16", t1.getString()); + assertEquals("19999-08-07 13:14:15.16", ts2.getString()); + assertEquals("19999-08-07", d2.getString()); + assertEquals("13:14:15.16", t2.getString()); + ValueTimestamp ts1a = DateTimeUtils.convertTimestamp( + ts1.getTimestamp(), DateTimeUtils.createGregorianCalendar()); + ValueTimestamp ts2a = DateTimeUtils.convertTimestamp( + ts2.getTimestamp(), DateTimeUtils.createGregorianCalendar()); + assertEquals("-999-08-07 13:14:15.16", ts1a.getString()); + assertEquals("19999-08-07 13:14:15.16", ts2a.getString()); + + // test for bug on Java 1.8.0_60 in "Europe/Moscow" timezone.
+ // Doesn't affect most other timezones + long millis = 1407437460000L; + long result1 = DateTimeUtils.nanosFromDate(DateTimeUtils.getTimeUTCWithoutDst(millis)); + long result2 = DateTimeUtils.nanosFromDate(DateTimeUtils.getTimeUTCWithoutDst(millis)); + assertEquals(result1, result2); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestDateIso8601.java b/modules/h2/src/test/java/org/h2/test/unit/TestDateIso8601.java new file mode 100644 index 0000000000000..9e927826afaef --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestDateIso8601.java @@ -0,0 +1,282 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Robert Rathsack (firstName dot lastName at gmx dot de) + */ +package org.h2.test.unit; + +import static org.h2.util.DateTimeUtils.getIsoDayOfWeek; +import static org.h2.util.DateTimeUtils.getIsoWeekOfYear; +import static org.h2.util.DateTimeUtils.getIsoWeekYear; + +import org.h2.test.TestBase; +import org.h2.value.ValueDate; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + * Test cases for DateTimeIso8601Utils. + */ +public class TestDateIso8601 extends TestBase { + + private enum Type { + DATE, TIMESTAMP, TIMESTAMP_TIMEZONE_0, TIMESTAMP_TIMEZONE_PLUS_18, TIMESTAMP_TIMEZONE_MINUS_18; + } + + private static Type type; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + private static long parse(String s) { + if (type == null) { + throw new IllegalStateException(); + } + switch (type) { + case DATE: + return ValueDate.parse(s).getDateValue(); + case TIMESTAMP: + return ValueTimestamp.parse(s).getDateValue(); + case TIMESTAMP_TIMEZONE_0: + return ValueTimestampTimeZone.parse(s + " 00:00:00.0Z").getDateValue(); + case TIMESTAMP_TIMEZONE_PLUS_18: + return ValueTimestampTimeZone.parse(s + " 00:00:00+18:00").getDateValue(); + case TIMESTAMP_TIMEZONE_MINUS_18: + return ValueTimestampTimeZone.parse(s + " 00:00:00-18:00").getDateValue(); + default: + throw new IllegalStateException(); + } + } + + @Override + public void test() throws Exception { + type = Type.DATE; + doTest(); + type = Type.TIMESTAMP; + doTest(); + type = Type.TIMESTAMP_TIMEZONE_0; + doTest(); + type = Type.TIMESTAMP_TIMEZONE_PLUS_18; + doTest(); + type = Type.TIMESTAMP_TIMEZONE_MINUS_18; + doTest(); + } + + private void doTest() throws Exception { + testIsoDayOfWeek(); + testIsoWeekJanuary1thMonday(); + testIsoWeekJanuary1thTuesday(); + testIsoWeekJanuary1thWednesday(); + testIsoWeekJanuary1thThursday(); + testIsoWeekJanuary1thFriday(); + testIsoWeekJanuary1thSaturday(); + testIsoWeekJanuary1thSunday(); + testIsoYearJanuary1thMonday(); + testIsoYearJanuary1thTuesday(); + testIsoYearJanuary1thWednesday(); + testIsoYearJanuary1thThursday(); + testIsoYearJanuary1thFriday(); + testIsoYearJanuary1thSaturday(); + testIsoYearJanuary1thSunday(); + } + + /** + * Test if day of week is returned as Monday = 1 to Sunday = 7. 
+ */ + private void testIsoDayOfWeek() throws Exception { + assertEquals(1, getIsoDayOfWeek(parse("2008-09-29"))); + assertEquals(2, getIsoDayOfWeek(parse("2008-09-30"))); + assertEquals(3, getIsoDayOfWeek(parse("2008-10-01"))); + assertEquals(4, getIsoDayOfWeek(parse("2008-10-02"))); + assertEquals(5, getIsoDayOfWeek(parse("2008-10-03"))); + assertEquals(6, getIsoDayOfWeek(parse("2008-10-04"))); + assertEquals(7, getIsoDayOfWeek(parse("2008-10-05"))); + } + + /** + * January 1st is a Monday therefore the week belongs to the next year. + */ + private void testIsoWeekJanuary1thMonday() throws Exception { + assertEquals(52, getIsoWeekOfYear(parse("2006-12-31"))); + assertEquals(1, getIsoWeekOfYear(parse("2007-01-01"))); + assertEquals(1, getIsoWeekOfYear(parse("2007-01-07"))); + assertEquals(2, getIsoWeekOfYear(parse("2007-01-08"))); + } + + /** + * January 1st is a Tuesday therefore the week belongs to the next year. + */ + private void testIsoWeekJanuary1thTuesday() throws Exception { + assertEquals(52, getIsoWeekOfYear(parse("2007-12-30"))); + assertEquals(1, getIsoWeekOfYear(parse("2007-12-31"))); + assertEquals(1, getIsoWeekOfYear(parse("2008-01-01"))); + assertEquals(1, getIsoWeekOfYear(parse("2008-01-06"))); + assertEquals(2, getIsoWeekOfYear(parse("2008-01-07"))); + } + + /** + * January 1st is a Wednesday therefore the week belongs to the next year. + */ + private void testIsoWeekJanuary1thWednesday() throws Exception { + assertEquals(52, getIsoWeekOfYear(parse("2002-12-28"))); + assertEquals(52, getIsoWeekOfYear(parse("2002-12-29"))); + assertEquals(1, getIsoWeekOfYear(parse("2002-12-30"))); + assertEquals(1, getIsoWeekOfYear(parse("2002-12-31"))); + assertEquals(1, getIsoWeekOfYear(parse("2003-01-01"))); + assertEquals(1, getIsoWeekOfYear(parse("2003-01-05"))); + assertEquals(2, getIsoWeekOfYear(parse("2003-01-06"))); + } + + /** + * January 1st is a Thursday therefore the week belongs to the next year.
+ */ + private void testIsoWeekJanuary1thThursday() throws Exception { + assertEquals(52, getIsoWeekOfYear(parse("2008-12-28"))); + assertEquals(1, getIsoWeekOfYear(parse("2008-12-29"))); + assertEquals(1, getIsoWeekOfYear(parse("2008-12-30"))); + assertEquals(1, getIsoWeekOfYear(parse("2008-12-31"))); + assertEquals(1, getIsoWeekOfYear(parse("2009-01-01"))); + assertEquals(1, getIsoWeekOfYear(parse("2009-01-04"))); + assertEquals(2, getIsoWeekOfYear(parse("2009-01-09"))); + } + + /** + * January 1st is a Friday therefore the week belongs to the previous year. + */ + private void testIsoWeekJanuary1thFriday() throws Exception { + assertEquals(53, getIsoWeekOfYear(parse("2009-12-31"))); + assertEquals(53, getIsoWeekOfYear(parse("2010-01-01"))); + assertEquals(53, getIsoWeekOfYear(parse("2010-01-03"))); + assertEquals(1, getIsoWeekOfYear(parse("2010-01-04"))); + } + + /** + * January 1st is a Saturday therefore the week belongs to the previous + * year. + */ + private void testIsoWeekJanuary1thSaturday() throws Exception { + assertEquals(52, getIsoWeekOfYear(parse("2010-12-31"))); + assertEquals(52, getIsoWeekOfYear(parse("2011-01-01"))); + assertEquals(52, getIsoWeekOfYear(parse("2011-01-02"))); + assertEquals(1, getIsoWeekOfYear(parse("2011-01-03"))); + } + + /** + * January 1st is a Sunday therefore the week belongs to the previous year. + */ + private void testIsoWeekJanuary1thSunday() throws Exception { + assertEquals(52, getIsoWeekOfYear(parse("2011-12-31"))); + assertEquals(52, getIsoWeekOfYear(parse("2012-01-01"))); + assertEquals(1, getIsoWeekOfYear(parse("2012-01-02"))); + assertEquals(1, getIsoWeekOfYear(parse("2012-01-08"))); + assertEquals(2, getIsoWeekOfYear(parse("2012-01-09"))); + } + + /** + * January 1st is a Monday therefore year is equal to isoYear. 
+ */ + private void testIsoYearJanuary1thMonday() throws Exception { + assertEquals(2006, getIsoWeekYear(parse("2006-12-28"))); + assertEquals(2006, getIsoWeekYear(parse("2006-12-29"))); + assertEquals(2006, getIsoWeekYear(parse("2006-12-30"))); + assertEquals(2006, getIsoWeekYear(parse("2006-12-31"))); + assertEquals(2007, getIsoWeekYear(parse("2007-01-01"))); + assertEquals(2007, getIsoWeekYear(parse("2007-01-02"))); + assertEquals(2007, getIsoWeekYear(parse("2007-01-03"))); + } + + /** + * January 1st is a Tuesday therefore 31st of December belongs to the next + * year. + */ + private void testIsoYearJanuary1thTuesday() throws Exception { + assertEquals(2007, getIsoWeekYear(parse("2007-12-28"))); + assertEquals(2007, getIsoWeekYear(parse("2007-12-29"))); + assertEquals(2007, getIsoWeekYear(parse("2007-12-30"))); + assertEquals(2008, getIsoWeekYear(parse("2007-12-31"))); + assertEquals(2008, getIsoWeekYear(parse("2008-01-01"))); + assertEquals(2008, getIsoWeekYear(parse("2008-01-02"))); + assertEquals(2008, getIsoWeekYear(parse("2008-01-03"))); + assertEquals(2008, getIsoWeekYear(parse("2008-01-04"))); + } + + /** + * January 1st is a Wednesday therefore 30th and 31st of December belong to + * the next year. + */ + private void testIsoYearJanuary1thWednesday() throws Exception { + assertEquals(2002, getIsoWeekYear(parse("2002-12-28"))); + assertEquals(2002, getIsoWeekYear(parse("2002-12-29"))); + assertEquals(2003, getIsoWeekYear(parse("2002-12-30"))); + assertEquals(2003, getIsoWeekYear(parse("2002-12-31"))); + assertEquals(2003, getIsoWeekYear(parse("2003-01-01"))); + assertEquals(2003, getIsoWeekYear(parse("2003-01-02"))); + assertEquals(2003, getIsoWeekYear(parse("2003-12-02"))); + } + + /** + * January 1st is a Thursday therefore 29th to 31st of December belong to the + * next year.
+ */ + private void testIsoYearJanuary1thThursday() throws Exception { + assertEquals(2008, getIsoWeekYear(parse("2008-12-28"))); + assertEquals(2009, getIsoWeekYear(parse("2008-12-29"))); + assertEquals(2009, getIsoWeekYear(parse("2008-12-30"))); + assertEquals(2009, getIsoWeekYear(parse("2008-12-31"))); + assertEquals(2009, getIsoWeekYear(parse("2009-01-01"))); + assertEquals(2009, getIsoWeekYear(parse("2009-01-02"))); + assertEquals(2009, getIsoWeekYear(parse("2009-01-03"))); + assertEquals(2009, getIsoWeekYear(parse("2009-01-04"))); + } + + /** + * January 1st is a Friday therefore 1st - 3rd of January belong to the + * previous year. + */ + private void testIsoYearJanuary1thFriday() throws Exception { + assertEquals(2009, getIsoWeekYear(parse("2009-12-28"))); + assertEquals(2009, getIsoWeekYear(parse("2009-12-29"))); + assertEquals(2009, getIsoWeekYear(parse("2009-12-30"))); + assertEquals(2009, getIsoWeekYear(parse("2009-12-31"))); + assertEquals(2009, getIsoWeekYear(parse("2010-01-01"))); + assertEquals(2009, getIsoWeekYear(parse("2010-01-02"))); + assertEquals(2009, getIsoWeekYear(parse("2010-01-03"))); + assertEquals(2010, getIsoWeekYear(parse("2010-01-04"))); + } + + /** + * January 1st is a Saturday therefore 1st and 2nd of January belong to the + * previous year. + */ + private void testIsoYearJanuary1thSaturday() throws Exception { + assertEquals(2010, getIsoWeekYear(parse("2010-12-28"))); + assertEquals(2010, getIsoWeekYear(parse("2010-12-29"))); + assertEquals(2010, getIsoWeekYear(parse("2010-12-30"))); + assertEquals(2010, getIsoWeekYear(parse("2010-12-31"))); + assertEquals(2010, getIsoWeekYear(parse("2011-01-01"))); + assertEquals(2010, getIsoWeekYear(parse("2011-01-02"))); + assertEquals(2011, getIsoWeekYear(parse("2011-01-03"))); + assertEquals(2011, getIsoWeekYear(parse("2011-01-04"))); + } + + /** + * January 1st is a Sunday therefore this day belongs to the previous year.
+ */ + private void testIsoYearJanuary1thSunday() throws Exception { + assertEquals(2011, getIsoWeekYear(parse("2011-12-28"))); + assertEquals(2011, getIsoWeekYear(parse("2011-12-29"))); + assertEquals(2011, getIsoWeekYear(parse("2011-12-30"))); + assertEquals(2011, getIsoWeekYear(parse("2011-12-31"))); + assertEquals(2011, getIsoWeekYear(parse("2012-01-01"))); + assertEquals(2012, getIsoWeekYear(parse("2012-01-02"))); + assertEquals(2012, getIsoWeekYear(parse("2012-01-03"))); + assertEquals(2012, getIsoWeekYear(parse("2012-01-04"))); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestDateTimeUtils.java b/modules/h2/src/test/java/org/h2/test/unit/TestDateTimeUtils.java new file mode 100644 index 0000000000000..0b8cdd82324f9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestDateTimeUtils.java @@ -0,0 +1,197 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import static org.h2.util.DateTimeUtils.dateValue; + +import java.sql.Timestamp; +import java.util.Calendar; +import java.util.GregorianCalendar; +import java.util.TimeZone; + +import org.h2.test.TestBase; +import org.h2.util.DateTimeUtils; +import org.h2.value.ValueTimestamp; + +/** + * Unit tests for the DateTimeUtils class + */ +public class TestDateTimeUtils extends TestBase { + + /** + * Run just this test. + * + * @param a + * if {@code "testUtc2Value"} only {@link #testUTC2Value(boolean)} + * will be executed with all time zones (slow). Otherwise all tests + * in this test unit will be executed with local time zone. + */ + public static void main(String... 
a) throws Exception { + if (a.length == 1) { + if ("testUtc2Value".equals(a[0])) { + new TestDateTimeUtils().testUTC2Value(true); + return; + } + } + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testParseTimeNanosDB2Format(); + testDayOfWeek(); + testWeekOfYear(); + testDateValueFromDenormalizedDate(); + testUTC2Value(false); + testConvertScale(); + } + + private void testParseTimeNanosDB2Format() { + assertEquals(3723004000000L, DateTimeUtils.parseTimeNanos("01:02:03.004", 0, 12, true)); + assertEquals(3723004000000L, DateTimeUtils.parseTimeNanos("01.02.03.004", 0, 12, true)); + + assertEquals(3723000000000L, DateTimeUtils.parseTimeNanos("01:02:03", 0, 8, true)); + assertEquals(3723000000000L, DateTimeUtils.parseTimeNanos("01.02.03", 0, 8, true)); + } + + /** + * Test for {@link DateTimeUtils#getSundayDayOfWeek(long)} and + * {@link DateTimeUtils#getIsoDayOfWeek(long)}. + */ + private void testDayOfWeek() { + GregorianCalendar gc = DateTimeUtils.createGregorianCalendar(DateTimeUtils.UTC); + for (int i = -1_000_000; i <= 1_000_000; i++) { + gc.clear(); + gc.setTimeInMillis(i * 86400000L); + int year = gc.get(Calendar.YEAR); + if (gc.get(Calendar.ERA) == GregorianCalendar.BC) { + year = 1 - year; + } + long expectedDateValue = dateValue(year, gc.get(Calendar.MONTH) + 1, + gc.get(Calendar.DAY_OF_MONTH)); + long dateValue = DateTimeUtils.dateValueFromAbsoluteDay(i); + assertEquals(expectedDateValue, dateValue); + assertEquals(i, DateTimeUtils.absoluteDayFromDateValue(dateValue)); + int dow = gc.get(Calendar.DAY_OF_WEEK); + assertEquals(dow, DateTimeUtils.getSundayDayOfWeek(dateValue)); + int isoDow = (dow + 5) % 7 + 1; + assertEquals(isoDow, DateTimeUtils.getIsoDayOfWeek(dateValue)); + assertEquals(gc.get(Calendar.WEEK_OF_YEAR), + DateTimeUtils.getWeekOfYear(dateValue, gc.getFirstDayOfWeek() - 1, + gc.getMinimalDaysInFirstWeek())); + } + } + + /** + * Test for {@link DateTimeUtils#getDayOfYear(long)}, + * {@link 
DateTimeUtils#getWeekOfYear(long, int, int)} and + * {@link DateTimeUtils#getWeekYear(long, int, int)}. + */ + private void testWeekOfYear() { + GregorianCalendar gc = new GregorianCalendar(DateTimeUtils.UTC); + for (int firstDay = 1; firstDay <= 7; firstDay++) { + gc.setFirstDayOfWeek(firstDay); + for (int minimalDays = 1; minimalDays <= 7; minimalDays++) { + gc.setMinimalDaysInFirstWeek(minimalDays); + for (int i = 0; i < 150_000; i++) { + long dateValue = DateTimeUtils.dateValueFromAbsoluteDay(i); + gc.clear(); + gc.setTimeInMillis(i * 86400000L); + assertEquals(gc.get(Calendar.DAY_OF_YEAR), DateTimeUtils.getDayOfYear(dateValue)); + assertEquals(gc.get(Calendar.WEEK_OF_YEAR), + DateTimeUtils.getWeekOfYear(dateValue, firstDay - 1, minimalDays)); + assertEquals(gc.getWeekYear(), DateTimeUtils.getWeekYear(dateValue, firstDay - 1, minimalDays)); + } + } + } + } + + /** + * Test for {@link DateTimeUtils#dateValueFromDenormalizedDate(long, long, int)}. + */ + private void testDateValueFromDenormalizedDate() { + assertEquals(dateValue(2017, 1, 1), DateTimeUtils.dateValueFromDenormalizedDate(2018, -11, 0)); + assertEquals(dateValue(2001, 2, 28), DateTimeUtils.dateValueFromDenormalizedDate(2000, 14, 29)); + assertEquals(dateValue(1999, 8, 1), DateTimeUtils.dateValueFromDenormalizedDate(2000, -4, -100)); + assertEquals(dateValue(2100, 12, 31), DateTimeUtils.dateValueFromDenormalizedDate(2100, 12, 2000)); + assertEquals(dateValue(-100, 2, 29), DateTimeUtils.dateValueFromDenormalizedDate(-100, 2, 30)); + } + + private void testUTC2Value(boolean allTimeZones) { + TimeZone def = TimeZone.getDefault(); + GregorianCalendar gc = new GregorianCalendar(); + if (allTimeZones) { + try { + for (String id : TimeZone.getAvailableIDs()) { + System.out.println(id); + TimeZone tz = TimeZone.getTimeZone(id); + TimeZone.setDefault(tz); + DateTimeUtils.resetCalendar(); + testUTC2ValueImpl(tz, gc); + } + } finally { + TimeZone.setDefault(def); + DateTimeUtils.resetCalendar(); + } + } else { + 
testUTC2ValueImpl(def, gc); + } + } + + private void testUTC2ValueImpl(TimeZone tz, GregorianCalendar gc) { + gc.setTimeZone(tz); + gc.set(Calendar.MILLISECOND, 0); + long absoluteStart = DateTimeUtils.absoluteDayFromDateValue(DateTimeUtils.dateValue(1950, 01, 01)); + long absoluteEnd = DateTimeUtils.absoluteDayFromDateValue(DateTimeUtils.dateValue(2050, 01, 01)); + for (long i = absoluteStart; i < absoluteEnd; i++) { + long dateValue = DateTimeUtils.dateValueFromAbsoluteDay(i); + int year = DateTimeUtils.yearFromDateValue(dateValue); + int month = DateTimeUtils.monthFromDateValue(dateValue); + int day = DateTimeUtils.dayFromDateValue(dateValue); + for (int j = 0; j < 48; j++) { + gc.set(year, month - 1, day, j / 2, (j & 1) * 30, 0); + long timeMillis = gc.getTimeInMillis(); + ValueTimestamp ts = DateTimeUtils.convertTimestamp(new Timestamp(timeMillis), gc); + assertEquals(ts.getDateValue(), DateTimeUtils.dateValueFromDate(timeMillis)); + assertEquals(ts.getTimeNanos(), DateTimeUtils.nanosFromDate(timeMillis)); + } + } + } + + private void testConvertScale() { + assertEquals(555_555_555_555L, DateTimeUtils.convertScale(555_555_555_555L, 9)); + assertEquals(555_555_555_550L, DateTimeUtils.convertScale(555_555_555_554L, 8)); + assertEquals(555_555_555_500L, DateTimeUtils.convertScale(555_555_555_549L, 7)); + assertEquals(555_555_555_000L, DateTimeUtils.convertScale(555_555_555_499L, 6)); + assertEquals(555_555_550_000L, DateTimeUtils.convertScale(555_555_554_999L, 5)); + assertEquals(555_555_500_000L, DateTimeUtils.convertScale(555_555_549_999L, 4)); + assertEquals(555_555_000_000L, DateTimeUtils.convertScale(555_555_499_999L, 3)); + assertEquals(555_550_000_000L, DateTimeUtils.convertScale(555_554_999_999L, 2)); + assertEquals(555_500_000_000L, DateTimeUtils.convertScale(555_549_999_999L, 1)); + assertEquals(555_000_000_000L, DateTimeUtils.convertScale(555_499_999_999L, 0)); + assertEquals(555_555_555_555L, DateTimeUtils.convertScale(555_555_555_555L, 9)); + 
assertEquals(555_555_555_560L, DateTimeUtils.convertScale(555_555_555_555L, 8)); + assertEquals(555_555_555_600L, DateTimeUtils.convertScale(555_555_555_550L, 7)); + assertEquals(555_555_556_000L, DateTimeUtils.convertScale(555_555_555_500L, 6)); + assertEquals(555_555_560_000L, DateTimeUtils.convertScale(555_555_555_000L, 5)); + assertEquals(555_555_600_000L, DateTimeUtils.convertScale(555_555_550_000L, 4)); + assertEquals(555_556_000_000L, DateTimeUtils.convertScale(555_555_500_000L, 3)); + assertEquals(555_560_000_000L, DateTimeUtils.convertScale(555_555_000_000L, 2)); + assertEquals(555_600_000_000L, DateTimeUtils.convertScale(555_550_000_000L, 1)); + assertEquals(556_000_000_000L, DateTimeUtils.convertScale(555_500_000_000L, 0)); + assertEquals(100_999_999_999L, DateTimeUtils.convertScale(100_999_999_999L, 9)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 8)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 7)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 6)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 5)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 4)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 3)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 2)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 1)); + assertEquals(101_000_000_000L, DateTimeUtils.convertScale(100_999_999_999L, 0)); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestExit.java b/modules/h2/src/test/java/org/h2/test/unit/TestExit.java new file mode 100644 index 0000000000000..0147ba466db63 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestExit.java @@ -0,0 +1,164 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.File; +import java.io.IOException; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; + +import org.h2.api.DatabaseEventListener; +import org.h2.test.TestBase; +import org.h2.test.utils.SelfDestructor; + +/** + * Tests the flag db_close_on_exit. A new process is started. + */ +public class TestExit extends TestBase { + + private static Connection conn; + + private static final int OPEN_WITH_CLOSE_ON_EXIT = 1, + OPEN_WITHOUT_CLOSE_ON_EXIT = 2; + + @Override + public void test() throws Exception { + if (config.codeCoverage || config.networked) { + return; + } + if (getBaseDir().indexOf(':') > 0) { + return; + } + deleteDb("exit"); + String url = getURL(OPEN_WITH_CLOSE_ON_EXIT); + String selfDestruct = SelfDestructor.getPropertyString(60); + String[] procDef = { "java", selfDestruct, "-cp", getClassPath(), + getClass().getName(), url }; + Process proc = Runtime.getRuntime().exec(procDef); + while (true) { + int ch = proc.getErrorStream().read(); + if (ch < 0) { + break; + } + System.out.print((char) ch); + } + while (true) { + int ch = proc.getInputStream().read(); + if (ch < 0) { + break; + } + System.out.print((char) ch); + } + proc.waitFor(); + Thread.sleep(100); + if (!getClosedFile().exists()) { + fail("did not close database"); + } + url = getURL(OPEN_WITHOUT_CLOSE_ON_EXIT); + procDef = new String[] { "java", "-cp", getClassPath(), + getClass().getName(), url }; + proc = Runtime.getRuntime().exec(procDef); + proc.waitFor(); + Thread.sleep(100); + if (getClosedFile().exists()) { + fail("closed database"); + } + deleteDb("exit"); + } + + private String getURL(int action) { + String url = ""; + switch (action) { + case OPEN_WITH_CLOSE_ON_EXIT: + url = "jdbc:h2:" + getBaseDir() + + "/exit;database_event_listener='" + + MyDatabaseEventListener.class.getName() + + "';db_close_on_exit=true"; + break; + case OPEN_WITHOUT_CLOSE_ON_EXIT: + url = 
"jdbc:h2:" + getBaseDir() + + "/exit;database_event_listener='" + + MyDatabaseEventListener.class.getName() + + "';db_close_on_exit=false"; + break; + default: + } + url = getURL(url, true); + return url; + } + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws SQLException { + SelfDestructor.startCountdown(60); + if (args.length == 0) { + System.exit(1); + } + String url = args[0]; + TestExit.execute(url); + } + + private static void execute(String url) throws SQLException { + org.h2.Driver.load(); + conn = open(url); + Connection conn2 = open(url); + conn2.close(); + // do not close + conn.isClosed(); + } + + private static Connection open(String url) throws SQLException { + getClosedFile().delete(); + return DriverManager.getConnection(url, "sa", ""); + } + + static File getClosedFile() { + return new File(TestBase.BASE_TEST_DIR + "/closed.txt"); + } + + /** + * A database event listener used in this test. + */ + public static final class MyDatabaseEventListener implements + DatabaseEventListener { + + @Override + public void exceptionThrown(SQLException e, String sql) { + // nothing to do + } + + @Override + public void closingDatabase() { + try { + getClosedFile().createNewFile(); + } catch (IOException e) { + TestBase.logError("error", e); + } + } + + @Override + public void setProgress(int state, String name, int x, int max) { + // nothing to do + } + + @Override + public void init(String url) { + // nothing to do + } + + @Override + public void opened() { + // nothing to do + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestFile.java b/modules/h2/src/test/java/org/h2/test/unit/TestFile.java new file mode 100644 index 0000000000000..c2d4b7e791055 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestFile.java @@ -0,0 +1,204 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.util.Random; +import org.h2.api.JavaObjectSerializer; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.store.LobStorageBackend; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.SmallLRUCache; +import org.h2.util.TempFileDeleter; +import org.h2.value.CompareMode; + +/** + * Tests the in-memory file store. + */ +public class TestFile extends TestBase implements DataHandler { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + doTest(false, false); + doTest(false, true); + doTest(true, false); + doTest(true, true); + } + + private void doTest(boolean nioMem, boolean compress) { + int len = getSize(1000, 10000); + Random random = new Random(); + FileStore mem = null, file = null; + byte[] buffMem = null; + byte[] buffFile = null; + String prefix = nioMem ? (compress ? "nioMemLZF:" : "nioMemFS:") + : (compress ? 
"memLZF:" : "memFS:"); + FileUtils.delete(prefix + "test"); + FileUtils.delete("~/testFile"); + + // config.traceTest = true; + + for (int i = 0; i < len; i++) { + if (buffMem == null) { + int l = 1 + random.nextInt(1000); + buffMem = new byte[l]; + buffFile = new byte[l]; + } + if (file == null) { + mem = FileStore.open(this, prefix + "test", "rw"); + file = FileStore.open(this, "~/testFile", "rw"); + } + assertEquals(file.getFilePointer(), mem.getFilePointer()); + assertEquals(file.length(), mem.length()); + int x = random.nextInt(100); + if ((x -= 20) < 0) { + if (file.length() > 0) { + long pos = random.nextInt((int) (file.length() / 16)) * 16; + trace("seek " + pos); + mem.seek(pos); + file.seek(pos); + } + } else if ((x -= 20) < 0) { + trace("close"); + mem.close(); + file.close(); + mem = null; + file = null; + } else if ((x -= 20) < 0) { + if (buffFile.length > 16) { + random.nextBytes(buffFile); + System.arraycopy(buffFile, 0, buffMem, 0, buffFile.length); + int off = random.nextInt(buffFile.length - 16); + int l = random.nextInt((buffFile.length - off) / 16) * 16; + trace("write " + off + " " + l); + mem.write(buffMem, off, l); + file.write(buffFile, off, l); + } + } else if ((x -= 20) < 0) { + if (buffFile.length > 16) { + int off = random.nextInt(buffFile.length - 16); + int l = random.nextInt((buffFile.length - off) / 16) * 16; + l = (int) Math + .min(l, file.length() - file.getFilePointer()); + trace("read " + off + " " + l); + Exception a = null, b = null; + try { + file.readFully(buffFile, off, l); + } catch (Exception e) { + a = e; + } + try { + mem.readFully(buffMem, off, l); + } catch (Exception e) { + b = e; + } + if (a != b) { + if (a == null || b == null) { + fail("only one threw an exception"); + } + } + assertEquals(buffMem, buffFile); + } + } else if ((x -= 10) < 0) { + trace("reset buffers"); + buffMem = null; + buffFile = null; + } else { + int l = random.nextInt(10000) * 16; + long p = file.getFilePointer(); + file.setLength(l); + 
mem.setLength(l); + trace("setLength " + l); + if (p > l) { + file.seek(l); + mem.seek(l); + } + } + } + if (mem != null) { + mem.close(); + file.close(); + } + FileUtils.delete(prefix + "test"); + FileUtils.delete("~/testFile"); + } + + @Override + public void checkPowerOff() { + // nothing to do + } + + @Override + public void checkWritingAllowed() { + // nothing to do + } + + @Override + public String getDatabasePath() { + return null; + } + + @Override + public String getLobCompressionAlgorithm(int type) { + return null; + } + + @Override + public Object getLobSyncObject() { + return null; + } + + @Override + public int getMaxLengthInplaceLob() { + return 0; + } + + @Override + public FileStore openFile(String name, String mode, boolean mustExist) { + return null; + } + + @Override + public SmallLRUCache getLobFileListCache() { + return null; + } + + @Override + public TempFileDeleter getTempFileDeleter() { + return TempFileDeleter.getInstance(); + } + + @Override + public LobStorageBackend getLobStorage() { + return null; + } + + @Override + public int readLob(long lobId, byte[] hmac, long offset, byte[] buff, + int off, int length) { + return -1; + } + + @Override + public JavaObjectSerializer getJavaObjectSerializer() { + return null; + } + + @Override + public CompareMode getCompareMode() { + return CompareMode.getInstance(null, 0); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestFileLock.java b/modules/h2/src/test/java/org/h2/test/unit/TestFileLock.java new file mode 100644 index 0000000000000..6d4d6c26a585d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestFileLock.java @@ -0,0 +1,157 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.File; +import java.sql.Connection; +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.message.TraceSystem; +import org.h2.store.FileLock; +import org.h2.store.FileLockMethod; +import org.h2.test.TestBase; + +/** + * Tests the database file locking facility. Both lock-file and socket + * locking are tested. + */ +public class TestFileLock extends TestBase implements Runnable { + + private static volatile int locks; + private static volatile boolean stop; + private TestBase base; + private int wait; + private boolean allowSockets; + + public TestFileLock() { + // nothing to do + } + + TestFileLock(TestBase base, boolean allowSockets) { + this.base = base; + this.allowSockets = allowSockets; + } + + private String getFile() { + return getBaseDir() + "/test.lock"; + } + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (!getFile().startsWith(TestBase.BASE_TEST_DIR)) { + return; + } + testFsFileLock(); + testFutureModificationDate(); + testSimple(); + test(false); + test(true); + } + + private void testFsFileLock() throws Exception { + deleteDb("fileLock"); + String url = "jdbc:h2:" + getBaseDir() + + "/fileLock;FILE_LOCK=FS;OPEN_NEW=TRUE"; + Connection conn = getConnection(url); + assertThrows(ErrorCode.DATABASE_ALREADY_OPEN_1, this) + .getConnection(url); + conn.close(); + } + + private void testFutureModificationDate() throws Exception { + File f = new File(getFile()); + f.delete(); + f.createNewFile(); + f.setLastModified(System.currentTimeMillis() + 10000); + FileLock lock = new FileLock(new TraceSystem(null), getFile(), + Constants.LOCK_SLEEP); + lock.lock(FileLockMethod.FILE); + lock.unlock(); + } + + private void testSimple() { + FileLock lock1 = new FileLock(new TraceSystem(null),
getFile(), + Constants.LOCK_SLEEP); + FileLock lock2 = new FileLock(new TraceSystem(null), getFile(), + Constants.LOCK_SLEEP); + lock1.lock(FileLockMethod.FILE); + createClassProxy(FileLock.class); + assertThrows(ErrorCode.DATABASE_ALREADY_OPEN_1, lock2).lock( + FileLockMethod.FILE); + lock1.unlock(); + lock2 = new FileLock(new TraceSystem(null), getFile(), + Constants.LOCK_SLEEP); + lock2.lock(FileLockMethod.FILE); + lock2.unlock(); + } + + private void test(boolean allowSocketsLock) throws Exception { + int threadCount = getSize(3, 5); + wait = getSize(20, 200); + Thread[] threads = new Thread[threadCount]; + new File(getFile()).delete(); + for (int i = 0; i < threadCount; i++) { + threads[i] = new Thread(new TestFileLock(this, allowSocketsLock)); + threads[i].start(); + Thread.sleep(wait + (int) (Math.random() * wait)); + } + trace("wait"); + Thread.sleep(500); + stop = true; + trace("STOP file"); + for (int i = 0; i < threadCount; i++) { + threads[i].join(); + } + assertEquals(0, locks); + } + + @Override + public void run() { + FileLock lock = null; + while (!stop) { + lock = new FileLock(new TraceSystem(null), getFile(), 100); + try { + lock.lock(allowSockets ? FileLockMethod.SOCKET + : FileLockMethod.FILE); + base.trace(lock + " locked"); + locks++; + if (locks > 1) { + System.err.println("ERROR! LOCKS=" + locks + " sockets=" + + allowSockets); + stop = true; + } + Thread.sleep(wait + (int) (Math.random() * wait)); + locks--; + base.trace(lock + " unlock"); + lock.unlock(); + if (locks < 0) { + System.err.println("ERROR! 
LOCKS=" + locks); + stop = true; + } + } catch (Exception e) { + // log(id+" cannot lock: " + e); + } + try { + Thread.sleep(wait + (int) (Math.random() * wait)); + } catch (InterruptedException e1) { + // ignore + } + } + if (lock != null) { + lock.unlock(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestFileLockProcess.java b/modules/h2/src/test/java/org/h2/test/unit/TestFileLockProcess.java new file mode 100644 index 0000000000000..2224ce7058dfe --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestFileLockProcess.java @@ -0,0 +1,122 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.ArrayList; +import org.h2.test.TestBase; +import org.h2.test.utils.SelfDestructor; +import org.h2.util.New; + +/** + * Tests database file locking. + * A new process is started. + */ +public class TestFileLockProcess extends TestBase { + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... 
args) throws Exception { + SelfDestructor.startCountdown(60); + if (args.length == 0) { + TestBase.createCaller().init().test(); + return; + } + String url = args[0]; + execute(url); + } + + private static void execute(String url) { + org.h2.Driver.load(); + try { + Class.forName("org.h2.Driver"); + Connection conn = DriverManager.getConnection(url); + System.out.println("!"); + conn.close(); + } catch (Exception e) { + // failed - expected + } + } + + @Override + public void test() throws Exception { + if (config.codeCoverage || config.networked) { + return; + } + if (getBaseDir().indexOf(':') > 0) { + return; + } + deleteDb("lock"); + String url = "jdbc:h2:"+getBaseDir()+"/lock"; + + println("socket"); + test(4, url + ";file_lock=socket"); + + println("fs"); + test(4, url + ";file_lock=fs"); + + println("default"); + test(50, url); + + deleteDb("lock"); + } + + private void test(int count, String url) throws Exception { + url = getURL(url, true); + Connection conn = getConnection(url); + String selfDestruct = SelfDestructor.getPropertyString(60); + String[] procDef = { "java", selfDestruct, + "-cp", getClassPath(), + getClass().getName(), url }; + ArrayList processes = New.arrayList(); + for (int i = 0; i < count; i++) { + Thread.sleep(100); + if (i % 10 == 0) { + println(i + "/" + count); + } + Process proc = Runtime.getRuntime().exec(procDef); + processes.add(proc); + } + for (int i = 0; i < count; i++) { + Process proc = processes.get(i); + StringBuilder buff = new StringBuilder(); + while (true) { + int ch = proc.getErrorStream().read(); + if (ch < 0) { + break; + } + System.out.print((char) ch); + buff.append((char) ch); + } + while (true) { + int ch = proc.getInputStream().read(); + if (ch < 0) { + break; + } + System.out.print((char) ch); + buff.append((char) ch); + } + proc.waitFor(); + + // The travis build somehow generates messages like this from javac. + // No idea where it is coming from. 
+ String processOutput = buff.toString(); + processOutput = processOutput.replaceAll("Picked up _JAVA_OPTIONS: -Xmx2048m -Xms512m", "").trim(); + processOutput = processOutput.replaceAll("Picked up JAVA_TOOL_OPTIONS:", "").trim(); + + assertEquals(0, proc.exitValue()); + assertTrue(i + ": " + buff.toString(), processOutput.isEmpty()); + } + Thread.sleep(100); + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestFileLockSerialized.java b/modules/h2/src/test/java/org/h2/test/unit/TestFileLockSerialized.java new file mode 100644 index 0000000000000..ea3025b5cb3a7 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestFileLockSerialized.java @@ -0,0 +1,698 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.OutputStream; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.List; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; + +import org.h2.api.ErrorCode; +import org.h2.jdbc.JdbcConnection; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.SortedProperties; +import org.h2.util.Task; + +/** + * Test the serialized (server-less) mode. + */ +public class TestFileLockSerialized extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.mvStore) { + return; + } + println("testSequence"); + testSequence(); + println("testAutoIncrement"); + testAutoIncrement(); + println("testSequenceFlush"); + testSequenceFlush(); + println("testLeftLogFiles"); + testLeftLogFiles(); + println("testWrongDatabaseInstanceOnReconnect"); + testWrongDatabaseInstanceOnReconnect(); + println("testCache()"); + testCache(); + println("testBigDatabase(false)"); + testBigDatabase(false); + println("testBigDatabase(true)"); + testBigDatabase(true); + println("testCheckpointInUpdateRaceCondition"); + testCheckpointInUpdateRaceCondition(); + println("testConcurrentUpdates"); + testConcurrentUpdates(); + println("testThreeMostlyReaders true"); + testThreeMostlyReaders(true); + println("testThreeMostlyReaders false"); + testThreeMostlyReaders(false); + println("testTwoReaders"); + testTwoReaders(); + println("testTwoWriters"); + testTwoWriters(); + println("testPendingWrite"); + testPendingWrite(); + println("testKillWriter"); + testKillWriter(); + println("testConcurrentReadWrite"); + testConcurrentReadWrite(); + deleteDb("fileLockSerialized"); + } + + private void testSequence() throws Exception { + deleteDb("fileLockSerialized"); + String url = "jdbc:h2:" + getBaseDir() + "/fileLockSerialized" + + ";FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE;RECONNECT_CHECK_DELAY=10"; + ResultSet rs; + Connection conn1 = getConnection(url); + Statement stat1 = conn1.createStatement(); + stat1.execute("create sequence seq"); + // 5 times RECONNECT_CHECK_DELAY + Thread.sleep(100); + rs = stat1.executeQuery("call seq.nextval"); + rs.next(); + conn1.close(); + } + + private void testSequenceFlush() throws Exception { + deleteDb("fileLockSerialized"); + String url = "jdbc:h2:" + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE"; + ResultSet rs; + Connection conn1 = getConnection(url); + Statement stat1 
= conn1.createStatement(); + stat1.execute("create sequence seq"); + rs = stat1.executeQuery("call seq.nextval"); + rs.next(); + assertEquals(1, rs.getInt(1)); + Connection conn2 = getConnection(url); + Statement stat2 = conn2.createStatement(); + rs = stat2.executeQuery("call seq.nextval"); + rs.next(); + assertEquals(2, rs.getInt(1)); + conn1.close(); + conn2.close(); + } + + private void testThreeMostlyReaders(final boolean write) throws Exception { + boolean longRun = false; + deleteDb("fileLockSerialized"); + final String url = "jdbc:h2:" + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE"; + + Connection conn = getConnection(url); + conn.createStatement().execute("create table test(id int) as select 1"); + conn.close(); + + final int len = 10; + final Exception[] ex = { null }; + final boolean[] stop = { false }; + Thread[] threads = new Thread[len]; + for (int i = 0; i < len; i++) { + Thread t = new Thread(new Runnable() { + @Override + public void run() { + try { + Connection c = getConnection(url); + PreparedStatement p = c + .prepareStatement("select * from test where id = ?"); + while (!stop[0]) { + Thread.sleep(100); + if (write) { + if (Math.random() > 0.9) { + c.createStatement().execute( + "update test set id = id"); + } + } + p.setInt(1, 1); + p.executeQuery(); + p.clearParameters(); + } + c.close(); + } catch (Exception e) { + ex[0] = e; + } + } + }); + t.start(); + threads[i] = t; + } + if (longRun) { + Thread.sleep(40000); + } else { + Thread.sleep(1000); + } + stop[0] = true; + for (int i = 0; i < len; i++) { + threads[i].join(); + } + if (ex[0] != null) { + throw ex[0]; + } + getConnection(url).close(); + } + + private void testTwoReaders() throws Exception { + deleteDb("fileLockSerialized"); + String url = "jdbc:h2:" + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE"; + Connection conn1 = getConnection(url); + conn1.createStatement().execute("create table test(id int)"); + Connection conn2 = 
getConnection(url); + Statement stat2 = conn2.createStatement(); + stat2.execute("drop table test"); + stat2.execute("create table test(id identity) as select 1"); + conn2.close(); + conn1.close(); + getConnection(url).close(); + } + + private void testTwoWriters() throws Exception { + deleteDb("fileLockSerialized"); + String url = "jdbc:h2:" + getBaseDir() + "/fileLockSerialized"; + final String writeUrl = url + ";FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE"; + Connection conn = getConnection(writeUrl, "sa", "sa"); + conn.createStatement() + .execute( + "create table test(id identity) as " + + "select x from system_range(1, 100)"); + conn.close(); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + Thread.sleep(10); + Connection c = getConnection(writeUrl, "sa", "sa"); + c.createStatement().execute("select * from test"); + c.close(); + } + } + }.execute(); + Thread.sleep(20); + for (int i = 0; i < 2; i++) { + conn = getConnection(writeUrl, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("drop table test"); + stat.execute("create table test(id identity) as " + + "select x from system_range(1, 100)"); + conn.createStatement().execute("select * from test"); + conn.close(); + } + Thread.sleep(100); + conn = getConnection(writeUrl, "sa", "sa"); + conn.createStatement().execute("select * from test"); + conn.close(); + task.get(); + } + + private void testPendingWrite() throws Exception { + deleteDb("fileLockSerialized"); + String url = "jdbc:h2:" + getBaseDir() + "/fileLockSerialized"; + String writeUrl = url + + ";FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE;WRITE_DELAY=0"; + + Connection conn = getConnection(writeUrl, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key)"); + Thread.sleep(100); + String propFile = getBaseDir() + "/fileLockSerialized.lock.db"; + SortedProperties p = SortedProperties.loadProperties(propFile); + p.setProperty("changePending", 
"true"); + p.setProperty("modificationDataId", "1000"); + try (OutputStream out = FileUtils.newOutputStream(propFile, false)) { + p.store(out, "test"); + } + Thread.sleep(100); + stat.execute("select * from test"); + conn.close(); + } + + private void testKillWriter() throws Exception { + deleteDb("fileLockSerialized"); + String url = "jdbc:h2:" + getBaseDir() + "/fileLockSerialized"; + String writeUrl = url + + ";FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE;WRITE_DELAY=0"; + + Connection conn = getConnection(writeUrl, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key)"); + ((JdbcConnection) conn).setPowerOffCount(1); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, stat).execute( + "insert into test values(1)"); + + Connection conn2 = getConnection(writeUrl, "sa", "sa"); + Statement stat2 = conn2.createStatement(); + stat2.execute("insert into test values(1)"); + printResult(stat2, "select * from test"); + + conn2.close(); + + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn).close(); + } + + private void testConcurrentReadWrite() throws Exception { + deleteDb("fileLockSerialized"); + + String url = "jdbc:h2:" + getBaseDir() + "/fileLockSerialized"; + String writeUrl = url + ";FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE"; + // ;TRACE_LEVEL_SYSTEM_OUT=3 + // String readUrl = writeUrl + ";ACCESS_MODE_DATA=R"; + + trace(" create database"); + Class.forName("org.h2.Driver"); + Connection conn = getConnection(writeUrl, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key)"); + + Connection conn3 = getConnection(writeUrl, "sa", "sa"); + PreparedStatement prep3 = conn3 + .prepareStatement("insert into test values(?)"); + + Connection conn2 = getConnection(writeUrl, "sa", "sa"); + Statement stat2 = conn2.createStatement(); + printResult(stat2, "select * from test"); + + stat2.execute("create local temporary table temp(name varchar) not persistent"); + printResult(stat2, "select * 
from temp"); + + trace(" insert row 1"); + stat.execute("insert into test values(1)"); + trace(" insert row 2"); + prep3.setInt(1, 2); + prep3.execute(); + printResult(stat2, "select * from test"); + printResult(stat2, "select * from temp"); + + conn.close(); + conn2.close(); + conn3.close(); + } + + private void printResult(Statement stat, String sql) throws SQLException { + trace(" query: " + sql); + ResultSet rs = stat.executeQuery(sql); + int rowCount = 0; + while (rs.next()) { + trace(" " + rs.getString(1)); + rowCount++; + } + trace(" " + rowCount + " row(s)"); + } + + private void testConcurrentUpdates() throws Exception { + boolean longRun = false; + if (longRun) { + for (int waitTime = 100; waitTime < 10000; waitTime += 20) { + for (int howManyThreads = 1; howManyThreads < 10; howManyThreads++) { + testConcurrentUpdates(waitTime, howManyThreads, waitTime * + howManyThreads * 10); + } + } + } else { + testConcurrentUpdates(100, 4, 2000); + } + } + + private void testAutoIncrement() throws Exception { + boolean longRun = false; + if (longRun) { + for (int waitTime = 100; waitTime < 10000; waitTime += 20) { + for (int howManyThreads = 1; howManyThreads < 10; howManyThreads++) { + testAutoIncrement(waitTime, howManyThreads, 2000); + } + } + } else { + testAutoIncrement(400, 2, 2000); + } + } + + private void testAutoIncrement(final int waitTime, int howManyThreads, + int runTime) throws Exception { + println("testAutoIncrement waitTime: " + waitTime + + " howManyThreads: " + howManyThreads + " runTime: " + runTime); + deleteDb("fileLockSerialized"); + final String url = "jdbc:h2:" + + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE;" + + "AUTO_RECONNECT=TRUE;MAX_LENGTH_INPLACE_LOB=8192;" + + "COMPRESS_LOB=DEFLATE;CACHE_SIZE=65536"; + + Connection conn = getConnection(url); + conn.createStatement().execute( + "create table test(id int auto_increment, id2 int)"); + conn.close(); + + final long endTime = System.nanoTime() + 
TimeUnit.MILLISECONDS.toNanos(runTime); + final Exception[] ex = { null }; + final Connection[] connList = new Connection[howManyThreads]; + final boolean[] stop = { false }; + final int[] nextInt = { 0 }; + Thread[] threads = new Thread[howManyThreads]; + for (int i = 0; i < howManyThreads; i++) { + final int finalNrOfConnection = i; + Thread t = new Thread(new Runnable() { + @Override + public void run() { + try { + Connection c = getConnection(url); + connList[finalNrOfConnection] = c; + while (!stop[0]) { + synchronized (nextInt) { + ResultSet rs = c.createStatement() + .executeQuery( + "select id, id2 from test"); + while (rs.next()) { + if (rs.getInt(1) != rs.getInt(2)) { + throw new Exception(Thread + .currentThread().getId() + + " nextInt: " + + nextInt[0] + + " rs.getInt(1): " + + rs.getInt(1) + + " rs.getInt(2): " + + rs.getInt(2)); + } + } + nextInt[0]++; + Statement stat = c.createStatement(); + stat.execute("insert into test (id2) values(" + + nextInt[0] + ")"); + ResultSet rsKeys = stat.getGeneratedKeys(); + while (rsKeys.next()) { + assertEquals(nextInt[0], rsKeys.getInt(1)); + } + rsKeys.close(); + } + Thread.sleep(waitTime); + } + c.close(); + } catch (Exception e) { + e.printStackTrace(); + ex[0] = e; + } + } + }); + t.start(); + threads[i] = t; + } + while ((ex[0] == null) && (System.nanoTime() < endTime)) { + Thread.sleep(10); + } + + stop[0] = true; + for (int i = 0; i < howManyThreads; i++) { + threads[i].join(); + } + if (ex[0] != null) { + throw ex[0]; + } + getConnection(url).close(); + deleteDb("fileLockSerialized"); + } + + private void testConcurrentUpdates(final int waitTime, int howManyThreads, + int runTime) throws Exception { + println("testConcurrentUpdates waitTime: " + waitTime + + " howManyThreads: " + howManyThreads + " runTime: " + runTime); + deleteDb("fileLockSerialized"); + final String url = "jdbc:h2:" + + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE;" + + 
"AUTO_RECONNECT=TRUE;MAX_LENGTH_INPLACE_LOB=8192;" + + "COMPRESS_LOB=DEFLATE;CACHE_SIZE=65536"; + + Connection conn = getConnection(url); + conn.createStatement().execute("create table test(id int)"); + conn.createStatement().execute("insert into test values(1)"); + conn.close(); + + final long endTime = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(runTime); + final Exception[] ex = { null }; + final Connection[] connList = new Connection[howManyThreads]; + final boolean[] stop = { false }; + final int[] lastInt = { 1 }; + Thread[] threads = new Thread[howManyThreads]; + for (int i = 0; i < howManyThreads; i++) { + final int finalNrOfConnection = i; + Thread t = new Thread(new Runnable() { + @Override + public void run() { + try { + Connection c = getConnection(url); + connList[finalNrOfConnection] = c; + while (!stop[0]) { + ResultSet rs = c.createStatement().executeQuery( + "select * from test"); + rs.next(); + if (rs.getInt(1) != lastInt[0]) { + throw new Exception(finalNrOfConnection + + " Expected: " + lastInt[0] + " got " + + rs.getInt(1)); + } + Thread.sleep(waitTime); + if (Math.random() > 0.7) { + int newLastInt = (int) (Math.random() * 1000); + c.createStatement().execute( + "update test set id = " + newLastInt); + lastInt[0] = newLastInt; + } + } + c.close(); + } catch (Exception e) { + e.printStackTrace(); + ex[0] = e; + } + } + }); + t.start(); + threads[i] = t; + } + while ((ex[0] == null) && (System.nanoTime() < endTime)) { + Thread.sleep(10); + } + + stop[0] = true; + for (int i = 0; i < howManyThreads; i++) { + threads[i].join(); + } + if (ex[0] != null) { + throw ex[0]; + } + getConnection(url).close(); + deleteDb("fileLockSerialized"); + } + + /** + * If a checkpoint occurs between beforeWriting and checkWritingAllowed then + * the result of checkWritingAllowed is READ_ONLY, which is wrong. 
+ * + * Also, if a checkpoint started before beforeWriting, and ends + * between beforeWriting and checkWritingAllowed, then the same error + * occurs. + */ + private void testCheckpointInUpdateRaceCondition() throws Exception { + boolean longRun = false; + deleteDb("fileLockSerialized"); + String url = "jdbc:h2:" + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED;OPEN_NEW=TRUE"; + + Connection conn = getConnection(url); + conn.createStatement().execute("create table test(id int)"); + conn.createStatement().execute("insert into test values(1)"); + for (int i = 0; i < (longRun ? 10000 : 5); i++) { + Thread.sleep(402); + conn.createStatement().execute("update test set id = " + i); + } + conn.close(); + deleteDb("fileLockSerialized"); + } + + /** + * Caches must be cleared. Session.reconnect only closes the DiskFile (which + * is associated with the cache) if there is one session. + */ + private void testCache() throws Exception { + deleteDb("fileLockSerialized"); + + String urlShared = "jdbc:h2:" + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED"; + + Connection connShared1 = getConnection(urlShared); + Statement statement1 = connShared1.createStatement(); + Connection connShared2 = getConnection(urlShared); + Statement statement2 = connShared2.createStatement(); + + statement1.execute("create table test1(id int)"); + statement1.execute("insert into test1 values(1)"); + + ResultSet rs = statement1.executeQuery("select id from test1"); + rs.close(); + rs = statement2.executeQuery("select id from test1"); + rs.close(); + + statement1.execute("update test1 set id=2"); + Thread.sleep(500); + + rs = statement2.executeQuery("select id from test1"); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + rs.close(); + + connShared1.close(); + connShared2.close(); + deleteDb("fileLockSerialized"); + } + + private void testWrongDatabaseInstanceOnReconnect() throws Exception { + deleteDb("fileLockSerialized"); + + String urlShared = "jdbc:h2:" +
getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED"; + String urlForNew = urlShared + ";OPEN_NEW=TRUE"; + + Connection connShared1 = getConnection(urlShared); + Statement statement1 = connShared1.createStatement(); + Connection connShared2 = getConnection(urlShared); + Connection connNew = getConnection(urlForNew); + statement1.execute("create table test1(id int)"); + connShared1.close(); + connShared2.close(); + connNew.close(); + deleteDb("fileLockSerialized"); + } + + private void testBigDatabase(boolean withCache) { + boolean longRun = false; + final int howMuchRows = longRun ? 2000000 : 500000; + deleteDb("fileLockSerialized"); + int cacheSizeKb = withCache ? 5000 : 0; + + final CountDownLatch importFinishedLatch = new CountDownLatch(1); + final CountDownLatch select1FinishedLatch = new CountDownLatch(1); + + final String url = "jdbc:h2:" + getBaseDir() + "/fileLockSerialized" + + ";FILE_LOCK=SERIALIZED" + ";OPEN_NEW=TRUE" + ";CACHE_SIZE=" + + cacheSizeKb; + final Task importUpdateTask = new Task() { + @Override + public void call() throws Exception { + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, id2 int)"); + for (int i = 0; i < howMuchRows; i++) { + stat.execute("insert into test values(" + i + ", " + i + + ")"); + } + importFinishedLatch.countDown(); + + select1FinishedLatch.await(); + + stat.execute("update test set id2=999 where id=500"); + conn.close(); + } + }; + importUpdateTask.execute(); + + Task selectTask = new Task() { + @Override + public void call() throws Exception { + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + importFinishedLatch.await(); + + ResultSet rs = stat + .executeQuery("select id2 from test where id=500"); + assertTrue(rs.next()); + assertEquals(500, rs.getInt(1)); + rs.close(); + select1FinishedLatch.countDown(); + + // wait until the other task finished + importUpdateTask.get(); + + // can't use the exact 
same query, otherwise it would use + // the query cache + rs = stat.executeQuery("select id2 from test where id=500+0"); + assertTrue(rs.next()); + assertEquals(999, rs.getInt(1)); + rs.close(); + conn.close(); + } + }; + selectTask.execute(); + + importUpdateTask.get(); + selectTask.get(); + deleteDb("fileLockSerialized"); + } + + private void testLeftLogFiles() throws Exception { + deleteDb("fileLockSerialized"); + + // without serialized + String url; + url = "jdbc:h2:" + getBaseDir() + "/fileLockSerialized"; + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + stat.execute("insert into test values(0)"); + conn.close(); + + List<String> filesWithoutSerialized = FileUtils + .newDirectoryStream(getBaseDir()); + deleteDb("fileLockSerialized"); + + // with serialized + url = "jdbc:h2:" + getBaseDir() + + "/fileLockSerialized;FILE_LOCK=SERIALIZED"; + conn = getConnection(url); + stat = conn.createStatement(); + stat.execute("create table test(id int)"); + Thread.sleep(500); + stat.execute("insert into test values(0)"); + conn.close(); + + List<String> filesWithSerialized = FileUtils + .newDirectoryStream(getBaseDir()); + if (filesWithoutSerialized.size() != filesWithSerialized.size()) { + for (int i = 0; i < filesWithoutSerialized.size(); i++) { + if (!filesWithSerialized + .contains(filesWithoutSerialized.get(i))) { + System.out + .println("File left from 'without serialized' mode: " + + filesWithoutSerialized.get(i)); + } + } + for (int i = 0; i < filesWithSerialized.size(); i++) { + if (!filesWithoutSerialized + .contains(filesWithSerialized.get(i))) { + System.out + .println("File left from 'with serialized' mode: " + + filesWithSerialized.get(i)); + } + } + fail("With serialized it must create the same files as without serialized"); + } + deleteDb("fileLockSerialized"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestFileSystem.java
b/modules/h2/src/test/java/org/h2/test/unit/TestFileSystem.java new file mode 100644 index 0000000000000..15000cc01acab --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestFileSystem.java @@ -0,0 +1,872 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.RandomAccessFile; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileChannel.MapMode; +import java.nio.channels.FileLock; +import java.nio.channels.NonWritableChannelException; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.List; +import java.util.Random; +import java.util.zip.ZipEntry; +import java.util.zip.ZipOutputStream; +import org.h2.dev.fs.FilePathZip2; +import org.h2.message.DbException; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.cache.FilePathCache; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathEncrypt; +import org.h2.store.fs.FilePathRec; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.utils.AssertThrows; +import org.h2.test.utils.FilePathDebug; +import org.h2.tools.Backup; +import org.h2.tools.DeleteDbFiles; +import org.h2.util.IOUtils; +import org.h2.util.Task; + +/** + * Tests various file system. + */ +public class TestFileSystem extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase test = TestBase.createCaller().init(); + // test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws Exception { + testFileSystem(getBaseDir() + "/fs"); + + testAbsoluteRelative(); + testDirectories(getBaseDir()); + testMoveTo(getBaseDir()); + testUnsupportedFeatures(getBaseDir()); + FilePathZip2.register(); + FilePath.register(new FilePathCache()); + FilePathRec.register(); + testZipFileSystem("zip:"); + testZipFileSystem("cache:zip:"); + testZipFileSystem("zip2:"); + testZipFileSystem("cache:zip2:"); + testMemFsDir(); + testClasspath(); + FilePathDebug.register().setTrace(true); + FilePathEncrypt.register(); + testSimpleExpandTruncateSize(); +// testSplitDatabaseInZip(); + testDatabaseInMemFileSys(); + testDatabaseInJar(); + // set default part size to 1 << 10 + String f = "split:10:" + getBaseDir() + "/fs"; + FileUtils.toRealPath(f); + testFileSystem(getBaseDir() + "/fs"); + testFileSystem("memFS:"); + testFileSystem("memLZF:"); + testFileSystem("nioMemFS:"); + testFileSystem("nioMemLZF:1:"); + // 12% compressLaterCache + testFileSystem("nioMemLZF:12:"); + testFileSystem("rec:memFS:"); + testUserHome(); + try { + testFileSystem("nio:" + getBaseDir() + "/fs"); + testFileSystem("cache:nio:" + getBaseDir() + "/fs"); + testFileSystem("nioMapped:" + getBaseDir() + "/fs"); + testFileSystem("encrypt:0007:" + getBaseDir() + "/fs"); + testFileSystem("cache:encrypt:0007:" + getBaseDir() + "/fs"); + if (!config.splitFileSystem) { + testFileSystem("split:" + getBaseDir() + "/fs"); + testFileSystem("split:nioMapped:" + getBaseDir() + "/fs"); + } + } catch (Exception e) { + e.printStackTrace(); + throw e; + } catch (Error e) { + e.printStackTrace(); + throw e; + } finally { + FileUtils.delete(getBaseDir() + "/fs"); + } + } + + private void testZipFileSystem(String prefix) throws IOException { + Random r = new Random(1); + for (int i = 0; i < 5; i++) { + testZipFileSystem(prefix, r); + } + } + + private void 
testZipFileSystem(String prefix, Random r) throws IOException { + byte[] data = new byte[r.nextInt(16 * 1024)]; + long x = r.nextLong(); + FilePath file = FilePath.get(getBaseDir() + "/fs/readonly" + x + ".zip"); + r.nextBytes(data); + OutputStream out = file.newOutputStream(false); + ZipOutputStream zipOut = new ZipOutputStream(out); + ZipEntry entry = new ZipEntry("data"); + zipOut.putNextEntry(entry); + zipOut.write(data); + zipOut.closeEntry(); + zipOut.close(); + out.close(); + FilePath fp = FilePath.get( + prefix + getBaseDir() + "/fs/readonly" + x + ".zip!data"); + FileChannel fc = fp.open("r"); + StringBuilder buff = new StringBuilder(); + try { + int pos = 0; + for (int i = 0; i < 100; i++) { + trace("op " + i); + switch (r.nextInt(5)) { + case 0: { + int p = r.nextInt(data.length); + trace("seek " + p); + buff.append("seek " + p + "\n"); + fc.position(p); + pos = p; + break; + } + case 1: { + int len = r.nextInt(1000); + int offset = r.nextInt(100); + int arrayLen = len + offset; + len = Math.min(len, data.length - pos); + byte[] b1 = new byte[arrayLen]; + byte[] b2 = new byte[arrayLen]; + trace("readFully " + len); + buff.append("readFully " + len + "\n"); + System.arraycopy(data, pos, b1, offset, len); + ByteBuffer byteBuff = createSlicedBuffer(b2, offset, len); + FileUtils.readFully(fc, byteBuff); + assertEquals(b1, b2); + pos += len; + break; + } + case 2: { + int len = r.nextInt(1000); + int offset = r.nextInt(100); + int arrayLen = len + offset; + int p = r.nextInt(data.length); + len = Math.min(len, data.length - p); + byte[] b1 = new byte[arrayLen]; + byte[] b2 = new byte[arrayLen]; + trace("readFully " + p + " " + len); + buff.append("readFully " + p + " " + len + "\n"); + System.arraycopy(data, p, b1, offset, len); + ByteBuffer byteBuff = createSlicedBuffer(b2, offset, len); + DataUtils.readFully(fc, p, byteBuff); + assertEquals(b1, b2); + break; + } + case 3: { + trace("getFilePointer"); + buff.append("getFilePointer\n"); + assertEquals(pos, 
fc.position()); + break; + } + case 4: { + trace("length " + data.length); + buff.append("length " + data.length + "\n"); + assertEquals(data.length, fc.size()); + break; + } + default: + } + } + fc.close(); + file.delete(); + } catch (Throwable e) { + e.printStackTrace(); + fail("Exception: " + e + "\n"+ buff.toString()); + } + } + + private void testAbsoluteRelative() { + assertFalse(FileUtils.isAbsolute("test/abc")); + assertTrue(FileUtils.isAbsolute("~/test/abc")); + } + + private void testMemFsDir() throws IOException { + FileUtils.newOutputStream("memFS:data/test/a.txt", false).close(); + assertEquals(FileUtils.newDirectoryStream("memFS:data/test").toString(), + 1, FileUtils.newDirectoryStream("memFS:data/test").size()); + FileUtils.deleteRecursive("memFS:", false); + } + + private void testClasspath() throws IOException { + String resource = "org/h2/test/scripts/testSimple.in.txt"; + InputStream in; + in = getClass().getResourceAsStream("/" + resource); + assertTrue(in != null); + in.close(); + in = getClass().getClassLoader().getResourceAsStream(resource); + assertTrue(in != null); + in.close(); + in = FileUtils.newInputStream("classpath:" + resource); + assertTrue(in != null); + in.close(); + in = FileUtils.newInputStream("classpath:/" + resource); + assertTrue(in != null); + in.close(); + } + + private void testSimpleExpandTruncateSize() throws Exception { + String f = "memFS:" + getBaseDir() + "/fs/test.data"; + FileUtils.createDirectories("memFS:" + getBaseDir() + "/fs"); + FileChannel c = FileUtils.open(f, "rw"); + c.position(4000); + c.write(ByteBuffer.wrap(new byte[1])); + FileLock lock = c.tryLock(); + c.truncate(0); + if (lock != null) { + lock.release(); + } + c.close(); + FileUtils.deleteRecursive("memFS:", false); + } + + private void testSplitDatabaseInZip() throws SQLException { + String dir = getBaseDir() + "/fs"; + FileUtils.deleteRecursive(dir, false); + Connection conn; + Statement stat; + conn = 
getConnection("jdbc:h2:split:18:"+dir+"/test"); + stat = conn.createStatement(); + stat.execute( + "create table test(id int primary key, name varchar) " + + "as select x, space(10000) from system_range(1, 100)"); + // stat.execute("shutdown defrag"); + conn.close(); + Backup.execute(dir + "/test.zip", dir, "", true); + DeleteDbFiles.execute("split:" + dir, "test", true); + conn = getConnection( + "jdbc:h2:split:zip:"+dir+"/test.zip!/test"); + conn.createStatement().execute("select * from test where id=1"); + conn.close(); + FileUtils.deleteRecursive(dir, false); + } + + private void testDatabaseInMemFileSys() throws SQLException { + org.h2.Driver.load(); + deleteDb("fsMem"); + String url = "jdbc:h2:" + getBaseDir() + "/fsMem"; + Connection conn = getConnection(url, "sa", "sa"); + conn.createStatement().execute( + "CREATE TABLE TEST AS SELECT * FROM DUAL"); + conn.createStatement().execute( + "BACKUP TO '" + getBaseDir() + "/fsMem.zip'"); + conn.close(); + org.h2.tools.Restore.main("-file", getBaseDir() + "/fsMem.zip", "-dir", + "memFS:"); + conn = getConnection("jdbc:h2:memFS:fsMem", "sa", "sa"); + ResultSet rs = conn.createStatement() + .executeQuery("SELECT * FROM TEST"); + rs.close(); + conn.close(); + deleteDb("fsMem"); + FileUtils.delete(getBaseDir() + "/fsMem.zip"); + FileUtils.delete("memFS:fsMem.mv.db"); + } + + private void testDatabaseInJar() throws Exception { + if (getBaseDir().indexOf(':') > 0) { + return; + } + if (config.networked) { + return; + } + org.h2.Driver.load(); + String url = "jdbc:h2:" + getBaseDir() + "/fsJar"; + Connection conn = getConnection(url, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, " + + "name varchar, b blob, c clob)"); + stat.execute("insert into test values(1, 'Hello', " + + "SECURE_RAND(2000), space(2000))"); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + rs.next(); + byte[] b1 = rs.getBytes(3); + String s1 = rs.getString(4); + 
conn.close(); + conn = getConnection(url, "sa", "sa"); + stat = conn.createStatement(); + stat.execute("backup to '" + getBaseDir() + "/fsJar.zip'"); + conn.close(); + + deleteDb("fsJar"); + for (String f : FileUtils.newDirectoryStream( + "zip:" + getBaseDir() + "/fsJar.zip")) { + assertFalse(FileUtils.isAbsolute(f)); + assertTrue(!FileUtils.isDirectory(f)); + assertTrue(FileUtils.size(f) > 0); + assertTrue(f.endsWith(FileUtils.getName(f))); + assertEquals(0, FileUtils.lastModified(f)); + FileUtils.setReadOnly(f); + assertFalse(FileUtils.canWrite(f)); + InputStream in = FileUtils.newInputStream(f); + int len = 0; + while (in.read() >= 0) { + len++; + } + assertEquals(len, FileUtils.size(f)); + testReadOnly(f); + } + String urlJar = "jdbc:h2:zip:" + getBaseDir() + "/fsJar.zip!/fsJar"; + conn = getConnection(urlJar, "sa", "sa"); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + byte[] b2 = rs.getBytes(3); + String s2 = rs.getString(4); + assertEquals(2000, b2.length); + assertEquals(2000, s2.length()); + assertEquals(b1, b2); + assertEquals(s1, s2); + assertFalse(rs.next()); + conn.close(); + FileUtils.delete(getBaseDir() + "/fsJar.zip"); + } + + private void testReadOnly(final String f) throws IOException { + new AssertThrows(IOException.class) { + @Override + public void test() throws IOException { + FileUtils.newOutputStream(f, false); + }}; + new AssertThrows(DbException.class) { + @Override + public void test() { + FileUtils.move(f, f); + }}; + new AssertThrows(DbException.class) { + @Override + public void test() { + FileUtils.move(f, f); + }}; + new AssertThrows(IOException.class) { + @Override + public void test() throws IOException { + FileUtils.createTempFile(f, ".tmp", false, false); + }}; + final FileChannel channel = FileUtils.open(f, "r"); + new AssertThrows(IOException.class) { + @Override + public void test() throws IOException { + 
channel.write(ByteBuffer.allocate(1)); + }}; + new AssertThrows(IOException.class) { + @Override + public void test() throws IOException { + channel.truncate(0); + }}; + assertTrue(null == channel.tryLock()); + channel.force(false); + channel.close(); + } + + private void testUserHome() { + String userDir = System.getProperty("user.home").replace('\\', '/'); + assertTrue(FileUtils.toRealPath("~/test").startsWith(userDir)); + assertTrue(FileUtils.toRealPath("file:~/test").startsWith(userDir)); + } + + private void testFileSystem(String fsBase) throws Exception { + testConcurrent(fsBase); + testRootExists(fsBase); + testPositionedReadWrite(fsBase); + testSetReadOnly(fsBase); + testParentEventuallyReturnsNull(fsBase); + testSimple(fsBase); + testTempFile(fsBase); + testRandomAccess(fsBase); + } + + private void testRootExists(String fsBase) { + String fileName = fsBase + "/testFile"; + FilePath p = FilePath.get(fileName); + while (p.getParent() != null) { + p = p.getParent(); + } + assertTrue(p.exists()); + } + + private void testSetReadOnly(String fsBase) { + String fileName = fsBase + "/testFile"; + if (FileUtils.exists(fileName)) { + FileUtils.delete(fileName); + } + if (FileUtils.createFile(fileName)) { + FileUtils.setReadOnly(fileName); +// assertFalse(FileUtils.canWrite(fileName)); + FileUtils.delete(fileName); + } + } + + private static void testDirectories(String fsBase) { + final String fileName = fsBase + "/testFile"; + if (FileUtils.exists(fileName)) { + FileUtils.delete(fileName); + } + if (FileUtils.createFile(fileName)) { + new AssertThrows(DbException.class) { + @Override + public void test() { + FileUtils.createDirectory(fileName); + }}; + new AssertThrows(DbException.class) { + @Override + public void test() { + FileUtils.createDirectories(fileName + "/test"); + }}; + FileUtils.delete(fileName); + } + } + + private static void testMoveTo(String fsBase) { + final String fileName = fsBase + "/testFile"; + final String fileName2 = fsBase + "/testFile2"; 
+ if (FileUtils.exists(fileName)) { + FileUtils.delete(fileName); + } + if (FileUtils.createFile(fileName)) { + FileUtils.move(fileName, fileName2); + FileUtils.createFile(fileName); + new AssertThrows(DbException.class) { + @Override + public void test() { + FileUtils.move(fileName2, fileName); + }}; + FileUtils.delete(fileName); + FileUtils.delete(fileName2); + new AssertThrows(DbException.class) { + @Override + public void test() { + FileUtils.move(fileName, fileName2); + }}; + } + } + + private static void testUnsupportedFeatures(String fsBase) throws IOException { + final String fileName = fsBase + "/testFile"; + if (FileUtils.exists(fileName)) { + FileUtils.delete(fileName); + } + if (FileUtils.createFile(fileName)) { + final FileChannel channel = FileUtils.open(fileName, "rw"); + new AssertThrows(UnsupportedOperationException.class) { + @Override + public void test() throws IOException { + channel.map(MapMode.PRIVATE, 0, channel.size()); + }}; + new AssertThrows(UnsupportedOperationException.class) { + @Override + public void test() throws IOException { + channel.read(new ByteBuffer[]{ByteBuffer.allocate(10)}, 0, 0); + }}; + new AssertThrows(UnsupportedOperationException.class) { + @Override + public void test() throws IOException { + channel.write(new ByteBuffer[]{ByteBuffer.allocate(10)}, 0, 0); + }}; + new AssertThrows(UnsupportedOperationException.class) { + @Override + public void test() throws IOException { + channel.transferFrom(channel, 0, 0); + }}; + new AssertThrows(UnsupportedOperationException.class) { + @Override + public void test() throws IOException { + channel.transferTo(0, 0, channel); + }}; + new AssertThrows(UnsupportedOperationException.class) { + @Override + public void test() throws IOException { + channel.lock(); + }}; + channel.close(); + FileUtils.delete(fileName); + } + } + + private void testParentEventuallyReturnsNull(String fsBase) { + FilePath p = FilePath.get(fsBase + "/testFile"); + assertTrue(p.getScheme().length() > 0); + 
for (int i = 0; i < 100; i++) { + if (p == null) { + return; + } + p = p.getParent(); + } + fail("Parent is not null: " + p); + String path = fsBase + "/testFile"; + for (int i = 0; i < 100; i++) { + if (path == null) { + return; + } + path = FileUtils.getParent(path); + } + fail("Parent is not null: " + path); + } + + private void testSimple(final String fsBase) throws Exception { + long time = System.currentTimeMillis(); + for (String s : FileUtils.newDirectoryStream(fsBase)) { + FileUtils.delete(s); + } + FileUtils.createDirectories(fsBase + "/test"); + assertTrue(FileUtils.exists(fsBase)); + FileUtils.delete(fsBase + "/test"); + FileUtils.delete(fsBase + "/test2"); + assertTrue(FileUtils.createFile(fsBase + "/test")); + List p = FilePath.get(fsBase).newDirectoryStream(); + assertEquals(1, p.size()); + String can = FilePath.get(fsBase + "/test").toRealPath().toString(); + assertEquals(can, p.get(0).toString()); + assertTrue(FileUtils.canWrite(fsBase + "/test")); + FileChannel channel = FileUtils.open(fsBase + "/test", "rw"); + byte[] buffer = new byte[10000]; + Random random = new Random(1); + random.nextBytes(buffer); + channel.write(ByteBuffer.wrap(buffer)); + assertEquals(10000, channel.size()); + channel.position(20000); + assertEquals(20000, channel.position()); + assertEquals(-1, channel.read(ByteBuffer.wrap(buffer, 0, 1))); + String path = fsBase + "/test"; + assertEquals("test", FileUtils.getName(path)); + can = FilePath.get(fsBase).toRealPath().toString(); + String can2 = FileUtils.toRealPath(FileUtils.getParent(path)); + assertEquals(can, can2); + FileLock lock = channel.tryLock(); + if (lock != null) { + lock.release(); + } + assertEquals(10000, channel.size()); + channel.close(); + assertEquals(10000, FileUtils.size(fsBase + "/test")); + channel = FileUtils.open(fsBase + "/test", "r"); + final byte[] test = new byte[10000]; + FileUtils.readFully(channel, ByteBuffer.wrap(test, 0, 10000)); + assertEquals(buffer, test); + final FileChannel fc = channel; 
+ new AssertThrows(IOException.class) { + @Override + public void test() throws Exception { + fc.write(ByteBuffer.wrap(test, 0, 10)); + } + }; + new AssertThrows(NonWritableChannelException.class) { + @Override + public void test() throws Exception { + fc.truncate(10); + } + }; + channel.close(); + long lastMod = FileUtils.lastModified(fsBase + "/test"); + if (lastMod < time - 1999) { + // at most 2 seconds difference + assertEquals(time, lastMod); + } + assertEquals(10000, FileUtils.size(fsBase + "/test")); + List list = FileUtils.newDirectoryStream(fsBase); + assertEquals(1, list.size()); + assertTrue(list.get(0).endsWith("test")); + IOUtils.copyFiles(fsBase + "/test", fsBase + "/test3"); + FileUtils.move(fsBase + "/test3", fsBase + "/test2"); + FileUtils.move(fsBase + "/test2", fsBase + "/test2"); + assertTrue(!FileUtils.exists(fsBase + "/test3")); + assertTrue(FileUtils.exists(fsBase + "/test2")); + assertEquals(10000, FileUtils.size(fsBase + "/test2")); + byte[] buffer2 = new byte[10000]; + InputStream in = FileUtils.newInputStream(fsBase + "/test2"); + int pos = 0; + while (true) { + int l = in.read(buffer2, pos, Math.min(10000 - pos, 1000)); + if (l <= 0) { + break; + } + pos += l; + } + in.close(); + assertEquals(10000, pos); + assertEquals(buffer, buffer2); + + assertTrue(FileUtils.tryDelete(fsBase + "/test2")); + FileUtils.delete(fsBase + "/test"); + if (fsBase.indexOf("memFS:") < 0 && fsBase.indexOf("memLZF:") < 0 + && fsBase.indexOf("nioMemFS:") < 0 && fsBase.indexOf("nioMemLZF:") < 0) { + FileUtils.createDirectories(fsBase + "/testDir"); + assertTrue(FileUtils.isDirectory(fsBase + "/testDir")); + if (!fsBase.startsWith("jdbc:")) { + FileUtils.deleteRecursive(fsBase + "/testDir", false); + assertTrue(!FileUtils.exists(fsBase + "/testDir")); + } + } + } + + private void testPositionedReadWrite(String fsBase) throws IOException { + FileUtils.deleteRecursive(fsBase + "/testFile", false); + FileUtils.delete(fsBase + "/testFile"); + 
FileUtils.createDirectories(fsBase); + assertTrue(FileUtils.createFile(fsBase + "/testFile")); + FileChannel fc = FilePath.get(fsBase + "/testFile").open("rw"); + ByteBuffer buff = ByteBuffer.allocate(4000); + for (int i = 0; i < 4000; i++) { + buff.put((byte) i); + } + buff.flip(); + fc.write(buff, 96); + assertEquals(0, fc.position()); + assertEquals(4096, fc.size()); + buff = ByteBuffer.allocate(4000); + assertEquals(4000, fc.read(buff, 96)); + assertEquals(0, fc.position()); + buff.flip(); + for (int i = 0; i < 4000; i++) { + assertEquals((byte) i, buff.get()); + } + buff = ByteBuffer.allocate(0); + assertTrue(fc.read(buff, 8000) <= 0); + assertEquals(0, fc.position()); + assertTrue(fc.read(buff, 4000) <= 0); + assertEquals(0, fc.position()); + assertTrue(fc.read(buff, 2000) <= 0); + assertEquals(0, fc.position()); + buff = ByteBuffer.allocate(1); + assertEquals(-1, fc.read(buff, 8000)); + assertEquals(1, fc.read(buff, 4000)); + buff.flip(); + assertEquals(1, fc.read(buff, 2000)); + fc.close(); + } + + private void testRandomAccess(String fsBase) throws Exception { + testRandomAccess(fsBase, 1); + } + + private void testRandomAccess(String fsBase, int seed) throws Exception { + StringBuilder buff = new StringBuilder(); + String s = FileUtils.createTempFile(fsBase + "/tmp", ".tmp", false, false); + File file = new File(TestBase.BASE_TEST_DIR + "/tmp"); + file.getParentFile().mkdirs(); + file.delete(); + RandomAccessFile ra = new RandomAccessFile(file, "rw"); + FileUtils.delete(s); + FileChannel f = FileUtils.open(s, "rw"); + assertEquals(s, f.toString()); + assertEquals(-1, f.read(ByteBuffer.wrap(new byte[1]))); + f.force(true); + Random random = new Random(seed); + int size = getSize(100, 500); + try { + for (int i = 0; i < size; i++) { + trace("op " + i); + int pos = random.nextInt(10000); + switch (random.nextInt(7)) { + case 0: { + pos = (int) Math.min(pos, ra.length()); + trace("seek " + pos); + buff.append("seek " + pos + "\n"); + f.position(pos); + 
ra.seek(pos); + break; + } + case 1: { + int arrayLen = random.nextInt(1000); + int offset = arrayLen / 10; + offset = offset == 0 ? 0 : random.nextInt(offset); + int len = arrayLen == 0 ? 0 : random.nextInt(arrayLen - offset); + byte[] buffer = new byte[arrayLen]; + ByteBuffer byteBuff = createSlicedBuffer(buffer, offset, len); + random.nextBytes(buffer); + trace("write " + offset + " len " + len); + buff.append("write " + offset + " " + len + "\n"); + f.write(byteBuff); + ra.write(buffer, offset, len); + break; + } + case 2: { + trace("truncate " + pos); + buff.append("truncate " + pos + "\n"); + f.truncate(pos); + if (pos < ra.length()) { + // truncate is supposed to have no effect if the + // position is larger than the current size + ra.setLength(pos); + } + assertEquals(ra.getFilePointer(), f.position()); + break; + } + case 3: { + int len = random.nextInt(1000); + int offset = random.nextInt(100); + int arrayLen = len + offset; + len = (int) Math.min(len, ra.length() - ra.getFilePointer()); + byte[] b1 = new byte[arrayLen]; + byte[] b2 = new byte[arrayLen]; + trace("readFully " + len); + buff.append("readFully " + len + "\n"); + ra.readFully(b1, offset, len); + ByteBuffer byteBuff = createSlicedBuffer(b2, offset, len); + FileUtils.readFully(f, byteBuff); + assertEquals(b1, b2); + break; + } + case 4: { + trace("getFilePointer"); + buff.append("getFilePointer\n"); + assertEquals(ra.getFilePointer(), f.position()); + break; + } + case 5: { + trace("length " + ra.length()); + buff.append("length " + ra.length() + "\n"); + assertEquals(ra.length(), f.size()); + break; + } + case 6: { + trace("reopen"); + buff.append("reopen\n"); + f.close(); + ra.close(); + ra = new RandomAccessFile(file, "rw"); + f = FileUtils.open(s, "rw"); + assertEquals(ra.length(), f.size()); + break; + } + default: + } + } + } catch (Throwable e) { + e.printStackTrace(); + fail("Exception: " + e + "\n"+ buff.toString()); + } finally { + f.close(); + ra.close(); + file.delete(); + 
FileUtils.delete(s); + } + } + + private static ByteBuffer createSlicedBuffer(byte[] buffer, int offset, + int len) { + ByteBuffer byteBuff = ByteBuffer.wrap(buffer); + byteBuff.position(offset); + // force the arrayOffset to be non-0 + byteBuff = byteBuff.slice(); + byteBuff.limit(len); + return byteBuff; + } + + private void testTempFile(String fsBase) throws Exception { + int len = 10000; + String s = FileUtils.createTempFile(fsBase + "/tmp", ".tmp", false, false); + OutputStream out = FileUtils.newOutputStream(s, false); + byte[] buffer = new byte[len]; + out.write(buffer); + out.close(); + out = FileUtils.newOutputStream(s, true); + out.write(1); + out.close(); + InputStream in = FileUtils.newInputStream(s); + for (int i = 0; i < len; i++) { + assertEquals(0, in.read()); + } + assertEquals(1, in.read()); + assertEquals(-1, in.read()); + in.close(); + out.close(); + FileUtils.delete(s); + } + + private void testConcurrent(String fsBase) throws Exception { + String s = FileUtils.createTempFile(fsBase + "/tmp", ".tmp", false, false); + File file = new File(TestBase.BASE_TEST_DIR + "/tmp"); + file.getParentFile().mkdirs(); + file.delete(); + RandomAccessFile ra = new RandomAccessFile(file, "rw"); + FileUtils.delete(s); + final FileChannel f = FileUtils.open(s, "rw"); + final int size = getSize(10, 50); + f.write(ByteBuffer.allocate(size * 64 * 1024)); + Random random = new Random(1); + System.gc(); + Task task = new Task() { + @Override + public void call() throws Exception { + ByteBuffer byteBuff = ByteBuffer.allocate(16); + while (!stop) { + for (int pos = 0; pos < size; pos++) { + byteBuff.clear(); + f.read(byteBuff, pos * 64 * 1024); + byteBuff.position(0); + int x = byteBuff.getInt(); + int y = byteBuff.getInt(); + assertEquals(x, y); + Thread.yield(); + } + } + } + }; + task.execute(); + int[] data = new int[size]; + try { + ByteBuffer byteBuff = ByteBuffer.allocate(16); + int operations = 10000; + for (int i = 0; i < operations; i++) { + 
byteBuff.position(0); + byteBuff.putInt(i); + byteBuff.putInt(i); + byteBuff.flip(); + int pos = random.nextInt(size); + f.write(byteBuff, pos * 64 * 1024); + data[pos] = i; + pos = random.nextInt(size); + byteBuff.clear(); + f.read(byteBuff, pos * 64 * 1024); + byteBuff.limit(16); + byteBuff.position(0); + int x = byteBuff.getInt(); + int y = byteBuff.getInt(); + assertEquals(x, y); + assertEquals(data[pos], x); + } + } catch (Throwable e) { + e.printStackTrace(); + fail("Exception: " + e); + } finally { + task.get(); + f.close(); + ra.close(); + file.delete(); + FileUtils.delete(s); + System.gc(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestFtp.java b/modules/h2/src/test/java/org/h2/test/unit/TestFtp.java new file mode 100644 index 0000000000000..74516b0f83971 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestFtp.java @@ -0,0 +1,77 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import org.h2.dev.ftp.FtpClient; +import org.h2.dev.ftp.server.FtpEvent; +import org.h2.dev.ftp.server.FtpEventListener; +import org.h2.dev.ftp.server.FtpServer; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Server; + +/** + * Tests the FTP server tool. + */ +public class TestFtp extends TestBase implements FtpEventListener { + + private FtpEvent lastEvent; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (getBaseDir().indexOf(':') > 0) { + return; + } + FileUtils.delete(getBaseDir() + "/ftp"); + test(getBaseDir()); + FileUtils.delete(getBaseDir() + "/ftp"); + } + + private void test(String dir) throws Exception { + Server server = FtpServer.createFtpServer( + "-ftpDir", dir, "-ftpPort", "8121").start(); + FtpServer ftp = (FtpServer) server.getService(); + ftp.setEventListener(this); + FtpClient client = FtpClient.open("localhost:8121"); + client.login("sa", "sa"); + client.makeDirectory("ftp"); + client.changeWorkingDirectory("ftp"); + assertEquals("CWD", lastEvent.getCommand()); + client.makeDirectory("hello"); + client.changeWorkingDirectory("hello"); + client.changeDirectoryUp(); + assertEquals("CDUP", lastEvent.getCommand()); + client.nameList("hello"); + client.removeDirectory("hello"); + client.close(); + server.stop(); + } + + @Override + public void beforeCommand(FtpEvent event) { + lastEvent = event; + } + + @Override + public void afterCommand(FtpEvent event) { + lastEvent = event; + } + + @Override + public void onUnsupportedCommand(FtpEvent event) { + lastEvent = event; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestIntArray.java b/modules/h2/src/test/java/org/h2/test/unit/TestIntArray.java new file mode 100644 index 0000000000000..4533eb91ae68b --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestIntArray.java @@ -0,0 +1,112 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.util.Random; +import org.h2.test.TestBase; +import org.h2.util.IntArray; + +/** + * Tests the IntArray class. + */ +public class TestIntArray extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testInit(); + testRandom(); + testRemoveRange(); + } + + private void testRemoveRange() { + IntArray array = new IntArray(new int[] {1, 2, 3, 4, 5}); + array.removeRange(1, 3); + assertEquals(3, array.size()); + assertEquals(1, array.get(0)); + assertEquals(4, array.get(1)); + assertEquals(5, array.get(2)); + } + + private static void testInit() { + IntArray array = new IntArray(new int[0]); + array.add(10); + } + + private void testRandom() { + IntArray array = new IntArray(); + int[] test = {}; + Random random = new Random(1); + for (int i = 0; i < 10000; i++) { + int idx = test.length == 0 ? 0 : random.nextInt(test.length); + int v = random.nextInt(100); + int op = random.nextInt(4); + switch (op) { + case 0: + array.add(v); + test = add(test, v); + break; + case 1: + if (test.length > idx) { + assertEquals(get(test, idx), array.get(idx)); + } + break; + case 2: + if (test.length > 0) { + array.remove(idx); + test = remove(test, idx); + } + break; + case 3: + assertEquals(test.length, array.size()); + break; + default: + } + assertEquals(test.length, array.size()); + for (int j = 0; j < test.length; j++) { + assertEquals(test[j], array.get(j)); + } + + } + } + + private static int[] add(int[] array, int i, int value) { + int[] a2 = new int[array.length + 1]; + System.arraycopy(array, 0, a2, 0, array.length); + if (i < array.length) { + System.arraycopy(a2, i, a2, i + 1, a2.length - i - 1); + } + array = a2; + array[i] = value; + return array; + } + + private static int[] add(int[] array, int value) { + return add(array, array.length, value); + } + + private static int get(int[] array, int i) { + return array[i]; + } + + private static int[] remove(int[] array, int i) { + int[] a2 = new int[array.length - 1]; + System.arraycopy(array, 0, a2, 0, i); + if (i < a2.length) { + System.arraycopy(array, i + 1, a2, i, array.length - i - 1); + } + return a2; + } + +} 
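The `add`/`remove` helpers in TestIntArray above shadow `IntArray` with a plain `int[]`, and the same `System.arraycopy` idiom underlies `removeRange(from, to)`, which `testRemoveRange()` exercises: indices `[from, to)` are dropped. A minimal standalone sketch of that semantics (hypothetical class name, not part of `org.h2.util`):

```java
// RemoveRangeSketch: illustrates the removeRange(from, to) semantics that
// testRemoveRange() asserts -- elements at indices [from, to) are dropped.
// Hypothetical helper, not the real org.h2.util.IntArray.
public class RemoveRangeSketch {

    static int[] removeRange(int[] a, int from, int to) {
        int[] r = new int[a.length - (to - from)];
        // keep the prefix [0, from)
        System.arraycopy(a, 0, r, 0, from);
        // keep the suffix [to, a.length)
        System.arraycopy(a, to, r, from, a.length - to);
        return r;
    }

    public static void main(String[] args) {
        int[] r = removeRange(new int[] {1, 2, 3, 4, 5}, 1, 3);
        // same result the test asserts: elements 1, 4, 5
        System.out.println(java.util.Arrays.toString(r)); // [1, 4, 5]
    }
}
```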
diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestIntIntHashMap.java b/modules/h2/src/test/java/org/h2/test/unit/TestIntIntHashMap.java new file mode 100644 index 0000000000000..71b8bd3535a06 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestIntIntHashMap.java @@ -0,0 +1,79 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.util.Random; + +import org.h2.test.TestBase; +import org.h2.util.IntIntHashMap; + +/** + * Tests the IntHashMap class. + */ +public class TestIntIntHashMap extends TestBase { + + private final Random rand = new Random(); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + IntIntHashMap map = new IntIntHashMap(); + map.put(1, 1); + map.put(1, 2); + assertEquals(1, map.size()); + map.put(0, 1); + map.put(0, 2); + assertEquals(2, map.size()); + rand.setSeed(10); + test(true); + test(false); + } + + private void test(boolean random) { + int len = 2000; + int[] x = new int[len]; + for (int i = 0; i < len; i++) { + int key = random ? 
rand.nextInt() : i; + x[i] = key; + } + IntIntHashMap map = new IntIntHashMap(); + for (int i = 0; i < len; i++) { + map.put(x[i], i); + } + for (int i = 0; i < len; i++) { + if (map.get(x[i]) != i) { + throw new AssertionError("get " + x[i] + " = " + map.get(i) + + " should be " + i); + } + } + for (int i = 1; i < len; i += 2) { + map.remove(x[i]); + } + for (int i = 1; i < len; i += 2) { + if (map.get(x[i]) != -1) { + throw new AssertionError("get " + x[i] + " = " + map.get(i) + + " should be <=0"); + } + } + for (int i = 1; i < len; i += 2) { + map.put(x[i], i); + } + for (int i = 0; i < len; i++) { + if (map.get(x[i]) != i) { + throw new AssertionError("get " + x[i] + " = " + map.get(i) + + " should be " + i); + } + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestIntPerfectHash.java b/modules/h2/src/test/java/org/h2/test/unit/TestIntPerfectHash.java new file mode 100644 index 0000000000000..29d05a284ad87 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestIntPerfectHash.java @@ -0,0 +1,109 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.util.ArrayList; +import java.util.BitSet; +import java.util.HashSet; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.h2.dev.hash.IntPerfectHash; +import org.h2.dev.hash.IntPerfectHash.BitArray; +import org.h2.test.TestBase; + +/** + * Tests the perfect hash tool. + */ +public class TestIntPerfectHash extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestIntPerfectHash test = (TestIntPerfectHash) TestBase.createCaller().init(); + test.measure(); + test.test(); + test.measure(); + } + + /** + * Measure the hash functions. 
+ */ + public void measure() { + int size = 10000; + test(size / 10); + int s; + long time = System.nanoTime(); + s = test(size); + time = System.nanoTime() - time; + System.out.println((double) s / size + " bits/key in " + + TimeUnit.NANOSECONDS.toMillis(time) + " ms"); + + } + + @Override + public void test() { + testBitArray(); + for (int i = 0; i < 100; i++) { + test(i); + } + for (int i = 100; i <= 10000; i *= 10) { + test(i); + } + } + + private void testBitArray() { + byte[] data = new byte[0]; + BitSet set = new BitSet(); + for (int i = 100; i >= 0; i--) { + data = BitArray.setBit(data, i, true); + set.set(i); + } + Random r = new Random(1); + for (int i = 0; i < 10000; i++) { + int pos = r.nextInt(100); + boolean s = r.nextBoolean(); + data = BitArray.setBit(data, pos, s); + set.set(pos, s); + pos = r.nextInt(100); + assertTrue(BitArray.getBit(data, pos) == set.get(pos)); + } + assertTrue(BitArray.countBits(data) == set.cardinality()); + } + + private int test(int size) { + Random r = new Random(size); + HashSet set = new HashSet<>(); + while (set.size() < size) { + set.add(r.nextInt()); + } + ArrayList list = new ArrayList<>(set); + byte[] desc = IntPerfectHash.generate(list); + int max = test(desc, set); + assertEquals(size - 1, max); + return desc.length * 8; + } + + private int test(byte[] desc, Set set) { + int max = -1; + HashSet test = new HashSet<>(); + IntPerfectHash hash = new IntPerfectHash(desc); + for (int x : set) { + int h = hash.get(x); + assertTrue(h >= 0); + assertTrue(h <= set.size() * 3); + max = Math.max(max, h); + assertFalse(test.contains(h)); + test.add(h); + } + return max; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestJmx.java b/modules/h2/src/test/java/org/h2/test/unit/TestJmx.java new file mode 100644 index 0000000000000..164d3ee667d78 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestJmx.java @@ -0,0 +1,187 @@ +/* + * Copyright 2004-2018 H2 Group. 
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.lang.management.ManagementFactory; +import java.sql.Connection; +import java.sql.Statement; +import java.util.HashMap; +import java.util.Set; +import javax.management.Attribute; +import javax.management.MBeanAttributeInfo; +import javax.management.MBeanInfo; +import javax.management.MBeanOperationInfo; +import javax.management.MBeanServer; +import javax.management.ObjectName; +import org.h2.test.TestBase; + +/** + * Tests the JMX feature. + */ +public class TestJmx extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + HashMap attrMap; + HashMap opMap; + String result; + MBeanInfo info; + ObjectName name; + Connection conn; + Statement stat; + + MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer(); + + conn = getConnection("mem:jmx;jmx=true"); + stat = conn.createStatement(); + + name = new ObjectName("org.h2:name=JMX,path=mem_jmx"); + info = mbeanServer.getMBeanInfo(name); + assertEquals("0", mbeanServer. + getAttribute(name, "CacheSizeMax").toString()); + // cache size is ignored for in-memory databases + mbeanServer.setAttribute(name, new Attribute("CacheSizeMax", 1)); + assertEquals("0", mbeanServer. + getAttribute(name, "CacheSizeMax").toString()); + assertEquals("0", mbeanServer. + getAttribute(name, "CacheSize").toString()); + assertEquals("false", mbeanServer. + getAttribute(name, "Exclusive").toString()); + assertEquals("0", mbeanServer. + getAttribute(name, "FileSize").toString()); + assertEquals("0", mbeanServer. + getAttribute(name, "FileReadCount").toString()); + assertEquals("0", mbeanServer. + getAttribute(name, "FileWriteCount").toString()); + assertEquals("0", mbeanServer. 
+ getAttribute(name, "FileWriteCountTotal").toString()); + if (config.mvStore) { + assertEquals("1", mbeanServer. + getAttribute(name, "LogMode").toString()); + mbeanServer.setAttribute(name, new Attribute("LogMode", 2)); + assertEquals("2", mbeanServer. + getAttribute(name, "LogMode").toString()); + } + assertEquals("REGULAR", mbeanServer. + getAttribute(name, "Mode").toString()); + if (config.multiThreaded) { + assertEquals("true", mbeanServer. + getAttribute(name, "MultiThreaded").toString()); + } else { + assertEquals("false", mbeanServer. + getAttribute(name, "MultiThreaded").toString()); + } + if (config.mvStore) { + assertEquals("true", mbeanServer. + getAttribute(name, "Mvcc").toString()); + } else { + assertEquals("false", mbeanServer. + getAttribute(name, "Mvcc").toString()); + } + assertEquals("false", mbeanServer. + getAttribute(name, "ReadOnly").toString()); + assertEquals("1", mbeanServer. + getAttribute(name, "TraceLevel").toString()); + mbeanServer.setAttribute(name, new Attribute("TraceLevel", 0)); + assertEquals("0", mbeanServer. + getAttribute(name, "TraceLevel").toString()); + assertTrue(mbeanServer. 
+ getAttribute(name, "Version").toString().startsWith("1.")); + assertEquals(14, info.getAttributes().length); + result = mbeanServer.invoke(name, "listSettings", null, null).toString(); + assertContains(result, "ANALYZE_AUTO"); + + conn.setAutoCommit(false); + stat.execute("create table test(id int)"); + stat.execute("insert into test values(1)"); + + result = mbeanServer.invoke(name, "listSessions", null, null).toString(); + assertContains(result, "session id"); + if (config.mvcc || config.mvStore) { + assertContains(result, "read lock"); + } else { + assertContains(result, "write lock"); + } + + assertEquals(2, info.getOperations().length); + assertContains(info.getDescription(), "database"); + attrMap = new HashMap<>(); + for (MBeanAttributeInfo a : info.getAttributes()) { + attrMap.put(a.getName(), a); + } + assertContains(attrMap.get("CacheSize").getDescription(), "KB"); + opMap = new HashMap<>(); + for (MBeanOperationInfo o : info.getOperations()) { + opMap.put(o.getName(), o); + } + assertContains(opMap.get("listSessions").getDescription(), "lock"); + assertEquals(MBeanOperationInfo.INFO, opMap.get("listSessions").getImpact()); + + conn.close(); + + conn = getConnection("jmx;jmx=true"); + conn.close(); + conn = getConnection("jmx;jmx=true"); + + name = new ObjectName("org.h2:name=JMX,*"); + @SuppressWarnings("rawtypes") + Set set = mbeanServer.queryNames(name, null); + name = (ObjectName) set.iterator().next(); + + if (config.memory) { + assertEquals("0", mbeanServer. + getAttribute(name, "CacheSizeMax").toString()); + } else { + assertEquals("16384", mbeanServer. + getAttribute(name, "CacheSizeMax").toString()); + } + mbeanServer.setAttribute(name, new Attribute("CacheSizeMax", 1)); + if (config.memory) { + assertEquals("0", mbeanServer. + getAttribute(name, "CacheSizeMax").toString()); + } else if (config.mvStore) { + assertEquals("1024", mbeanServer. + getAttribute(name, "CacheSizeMax").toString()); + assertEquals("0", mbeanServer. 
+ getAttribute(name, "CacheSize").toString()); + assertTrue(0 < (Long) mbeanServer. + getAttribute(name, "FileReadCount")); + assertTrue(0 < (Long) mbeanServer. + getAttribute(name, "FileWriteCount")); + assertEquals("0", mbeanServer. + getAttribute(name, "FileWriteCountTotal").toString()); + } else { + assertEquals("1", mbeanServer. + getAttribute(name, "CacheSizeMax").toString()); + assertTrue(0 < (Integer) mbeanServer. + getAttribute(name, "CacheSize")); + assertTrue(0 < (Long) mbeanServer. + getAttribute(name, "FileSize")); + assertTrue(0 < (Long) mbeanServer. + getAttribute(name, "FileReadCount")); + assertTrue(0 < (Long) mbeanServer. + getAttribute(name, "FileWriteCount")); + assertTrue(0 < (Long) mbeanServer. + getAttribute(name, "FileWriteCountTotal")); + } + mbeanServer.setAttribute(name, new Attribute("LogMode", 0)); + assertEquals("0", mbeanServer. + getAttribute(name, "LogMode").toString()); + + conn.close(); + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestLocale.java b/modules/h2/src/test/java/org/h2/test/unit/TestLocale.java new file mode 100644 index 0000000000000..3d56e5191fdbc --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestLocale.java @@ -0,0 +1,87 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Locale; +import org.h2.test.TestBase; + +/** + * Tests that change the default locale. + */ +public class TestLocale extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testSpecialLocale(); + testDatesInJapanLocale(); + } + + private void testSpecialLocale() throws SQLException { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + Locale old = Locale.getDefault(); + try { + // when using Turkish as the default locale, "i".toUpperCase() is + // not "I" + Locale.setDefault(new Locale("tr")); + stat.execute("create table test(I1 int, i2 int, b int, c int, d int) " + + "as select 1, 1, 1, 1, 1"); + ResultSet rs = stat.executeQuery("select * from test"); + rs.next(); + rs.getString("I1"); + rs.getString("i1"); + rs.getString("I2"); + rs.getString("i2"); + stat.execute("drop table test"); + } finally { + Locale.setDefault(old); + } + conn.close(); + } + + private void testDatesInJapanLocale() throws SQLException { + deleteDb(getTestName()); + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + Locale old = Locale.getDefault(); + try { + // when using Japanese as the default locale, the default calendar is + // the imperial japanese calendar + Locale.setDefault(new Locale("ja", "JP", "JP")); + stat.execute("CREATE TABLE test(d TIMESTAMP, dz TIMESTAMP WITH TIME ZONE) " + + "as select '2017-12-03T00:00:00Z', '2017-12-03T00:00:00Z'"); + ResultSet rs = stat.executeQuery("select YEAR(d) y, YEAR(dz) yz from test"); + rs.next(); + assertEquals(2017, rs.getInt("y")); + assertEquals(2017, rs.getInt("yz")); + stat.execute("drop table test"); + + rs = stat.executeQuery( + "CALL FORMATDATETIME(TIMESTAMP '2001-02-03 04:05:06', 'yyyy-MM-dd HH:mm:ss', 'en')"); + rs.next(); + assertEquals("2001-02-03 04:05:06", rs.getString(1)); + + } finally { + Locale.setDefault(old); + } + conn.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestMathUtils.java 
b/modules/h2/src/test/java/org/h2/test/unit/TestMathUtils.java new file mode 100644 index 0000000000000..f26b05344454c --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestMathUtils.java @@ -0,0 +1,65 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import org.h2.test.TestBase; +import org.h2.util.MathUtils; + +/** + * Tests math utility methods. + */ +public class TestMathUtils extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testRandom(); + testNextPowerOf2Int(); + } + + private void testRandom() { + int bits = 0; + for (int i = 0; i < 1000; i++) { + bits |= 1 << MathUtils.randomInt(8); + } + assertEquals(255, bits); + bits = 0; + for (int i = 0; i < 1000; i++) { + bits |= 1 << MathUtils.secureRandomInt(8); + } + assertEquals(255, bits); + bits = 0; + for (int i = 0; i < 1000; i++) { + bits |= 1 << (MathUtils.secureRandomLong() & 7); + } + assertEquals(255, bits); + // just verify the method doesn't throw an exception + byte[] data = MathUtils.generateAlternativeSeed(); + assertTrue(data.length > 10); + } + + private void testNextPowerOf2Int() { + // the largest power of two that fits into an integer + final int largestPower2 = 0x40000000; + int[] testValues = { 0, 1, 2, 3, 4, 12, 17, 500, 1023, + largestPower2 - 500, largestPower2 }; + int[] resultValues = { 1, 1, 2, 4, 4, 16, 32, 512, 1024, + largestPower2, largestPower2 }; + + for (int i = 0; i < testValues.length; i++) { + assertEquals(resultValues[i], MathUtils.nextPowerOf2(testValues[i])); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestMode.java b/modules/h2/src/test/java/org/h2/test/unit/TestMode.java new file mode 100644 index 
0000000000000..16b5581d0d7b8 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestMode.java @@ -0,0 +1,90 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import org.h2.engine.Mode; +import org.h2.test.TestBase; + +/** + * Unit test for the Mode class. + */ +public class TestMode extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String[] a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testDb2ClientInfo(); + testDerbyClientInfo(); + testHsqlDbClientInfo(); + testMsSqlServerClientInfo(); + testMySqlClientInfo(); + testOracleClientInfo(); + testPostgresqlClientInfo(); + } + + private void testDb2ClientInfo() { + Mode db2Mode = Mode.getInstance("DB2"); + + assertTrue(db2Mode.supportedClientInfoPropertiesRegEx.matcher( + "ApplicationName").matches()); + assertTrue(db2Mode.supportedClientInfoPropertiesRegEx.matcher( + "ClientAccountingInformation").matches()); + assertTrue(db2Mode.supportedClientInfoPropertiesRegEx.matcher( + "ClientUser").matches()); + assertTrue(db2Mode.supportedClientInfoPropertiesRegEx.matcher( + "ClientCorrelationToken").matches()); + + assertFalse(db2Mode.supportedClientInfoPropertiesRegEx.matcher( + "AnyOtherValue").matches()); + } + + private void testDerbyClientInfo() { + Mode derbyMode = Mode.getInstance("Derby"); + assertNull(derbyMode.supportedClientInfoPropertiesRegEx); + } + + private void testHsqlDbClientInfo() { + Mode hsqlMode = Mode.getInstance("HSQLDB"); + assertNull(hsqlMode.supportedClientInfoPropertiesRegEx); + } + + private void testMsSqlServerClientInfo() { + Mode msSqlMode = Mode.getInstance("MSSQLServer"); + assertNull(msSqlMode.supportedClientInfoPropertiesRegEx); + } + + private void testMySqlClientInfo() { + Mode mySqlMode = 
Mode.getInstance("MySQL"); + assertTrue(mySqlMode.supportedClientInfoPropertiesRegEx.matcher( + "AnyString").matches()); + } + + private void testOracleClientInfo() { + Mode oracleMode = Mode.getInstance("Oracle"); + assertTrue(oracleMode.supportedClientInfoPropertiesRegEx.matcher( + "anythingContaining.aDot").matches()); + assertFalse(oracleMode.supportedClientInfoPropertiesRegEx.matcher( + "anythingContainingNoDot").matches()); + } + + + private void testPostgresqlClientInfo() { + Mode postgresqlMode = Mode.getInstance("PostgreSQL"); + assertTrue(postgresqlMode.supportedClientInfoPropertiesRegEx.matcher( + "ApplicationName").matches()); + assertFalse(postgresqlMode.supportedClientInfoPropertiesRegEx.matcher( + "AnyOtherValue").matches()); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestModifyOnWrite.java b/modules/h2/src/test/java/org/h2/test/unit/TestModifyOnWrite.java new file mode 100644 index 0000000000000..d79116e7da961 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestModifyOnWrite.java @@ -0,0 +1,72 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.Statement; + +import org.h2.engine.SysProperties; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; +import org.h2.util.Utils; + +/** + * Test that the database file is only modified when writing to the database. + */ +public class TestModifyOnWrite extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + System.setProperty("h2.modifyOnWrite", "true"); + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (!SysProperties.MODIFY_ON_WRITE) { + return; + } + deleteDb("modifyOnWrite"); + String dbFile = getBaseDir() + "/modifyOnWrite.h2.db"; + assertFalse(FileUtils.exists(dbFile)); + Connection conn = getConnection("modifyOnWrite"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int)"); + conn.close(); + byte[] test = IOUtils.readBytesAndClose(FileUtils.newInputStream(dbFile), -1); + + conn = getConnection("modifyOnWrite"); + stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + assertFalse(rs.next()); + conn.close(); + assertTrue(FileUtils.exists(dbFile)); + byte[] test2 = IOUtils.readBytesAndClose(FileUtils.newInputStream(dbFile), -1); + assertEquals(test, test2); + + conn = getConnection("modifyOnWrite"); + stat = conn.createStatement(); + stat.execute("insert into test values(1)"); + conn.close(); + + conn = getConnection("modifyOnWrite"); + stat = conn.createStatement(); + rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + conn.close(); + + test2 = IOUtils.readBytesAndClose(FileUtils.newInputStream(dbFile), -1); + assertFalse(Utils.compareSecure(test, test2)); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestMultiThreadedKernel.java b/modules/h2/src/test/java/org/h2/test/unit/TestMultiThreadedKernel.java new file mode 100644 index 0000000000000..d968d2b774511 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestMultiThreadedKernel.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.SQLException; + +import org.h2.test.TestBase; + +/** + * Tests the multi-threaded kernel feature. + */ +public class TestMultiThreadedKernel extends TestBase implements Runnable { + + private String url, user, password; + private int id; + private TestMultiThreadedKernel master; + private volatile boolean stop; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.networked || config.mvcc) { + return; + } + deleteDb("multiThreadedKernel"); + int count = getSize(2, 5); + Thread[] list = new Thread[count]; + for (int i = 0; i < count; i++) { + TestMultiThreadedKernel r = new TestMultiThreadedKernel(); + r.url = getURL("multiThreadedKernel;MULTI_THREADED=1", true); + r.user = getUser(); + r.password = getPassword(); + r.master = this; + r.id = i; + Thread thread = new Thread(r); + thread.setName("Thread " + i); + thread.start(); + list[i] = thread; + } + Thread.sleep(getSize(2000, 5000)); + stop = true; + for (int i = 0; i < count; i++) { + list[i].join(); + } + deleteDb("multiThreadedKernel"); + } + + @Override + public void run() { + try { + org.h2.Driver.load(); + Connection conn = DriverManager.getConnection(url + + ";MULTI_THREADED=1;LOCK_MODE=3;WRITE_DELAY=0", + user, password); + conn.createStatement().execute( + "CREATE TABLE TEST" + id + + "(COL1 BIGINT AUTO_INCREMENT PRIMARY KEY, COL2 BIGINT)"); + PreparedStatement prep = conn.prepareStatement( + "insert into TEST" + id + "(col2) values (?)"); + for (int i = 0; !master.stop; i++) { + prep.setLong(1, i); + prep.execute(); + } + conn.close(); + } catch (SQLException e) { + e.printStackTrace(); + } + } + +} diff --git 
a/modules/h2/src/test/java/org/h2/test/unit/TestNetUtils.java b/modules/h2/src/test/java/org/h2/test/unit/TestNetUtils.java new file mode 100644 index 0000000000000..301f423c97c25 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestNetUtils.java @@ -0,0 +1,268 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Sergi Vladykin + */ +package org.h2.test.unit; + +import java.io.IOException; +import java.net.ServerSocket; +import java.net.Socket; +import java.util.HashSet; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; +import javax.net.ssl.SSLContext; +import javax.net.ssl.SSLServerSocket; +import javax.net.ssl.SSLSession; +import javax.net.ssl.SSLSocket; +import org.h2.engine.SysProperties; +import org.h2.test.TestBase; +import org.h2.util.NetUtils; +import org.h2.util.Task; + +/** + * Test the network utilities from {@link NetUtils}. + * + * @author Sergi Vladykin + * @author Tomas Pospichal + */ +public class TestNetUtils extends TestBase { + + private static final int WORKER_COUNT = 10; + private static final int PORT = 9111; + private static final int WAIT_MILLIS = 100; + private static final int WAIT_LONGER_MILLIS = 2 * WAIT_MILLIS; + private static final String TASK_PREFIX = "ServerSocketThread-"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.travis) + return; + + testAnonymousTlsSession(); + testTlsSessionWithServerSideAnonymousDisabled(); + testFrequentConnections(true, 100); + testFrequentConnections(false, 1000); + } + + /** + * With default settings, H2 client SSL socket should be able to connect + * to an H2 server SSL socket using an anonymous cipher suite + * (no SSL certificate is needed). 
+ */ + private void testAnonymousTlsSession() throws Exception { + assertTrue("Failed assumption: the default value of ENABLE_ANONYMOUS_TLS" + + " property should be true", SysProperties.ENABLE_ANONYMOUS_TLS); + boolean ssl = true; + Task task = null; + ServerSocket serverSocket = null; + Socket socket = null; + + try { + serverSocket = NetUtils.createServerSocket(PORT, ssl); + serverSocket.setSoTimeout(WAIT_LONGER_MILLIS); + task = createServerSocketTask(serverSocket); + task.execute(TASK_PREFIX + "AnonEnabled"); + Thread.sleep(WAIT_MILLIS); + socket = NetUtils.createLoopbackSocket(PORT, ssl); + assertTrue("loopback anon socket should be connected", socket.isConnected()); + SSLSession session = ((SSLSocket) socket).getSession(); + assertTrue("TLS session should be valid when anonymous TLS is enabled", + session.isValid()); + // in case of handshake failure: + // the cipher suite is the pre-handshake SSL_NULL_WITH_NULL_NULL + assertContains(session.getCipherSuite(), "_anon_"); + } finally { + closeSilently(socket); + closeSilently(serverSocket); + if (task != null) { + // SSL server socket should succeed using an anonymous cipher + // suite, and not throw javax.net.ssl.SSLHandshakeException + assertNull(task.getException()); + task.join(); + } + } + } + + /** + * TLS connections (without trusted certificates) should fail if the server + * does not allow anonymous TLS. + * The global property ENABLE_ANONYMOUS_TLS cannot be modified for the test; + * instead, the server socket is altered. 
+ */ + private void testTlsSessionWithServerSideAnonymousDisabled() throws Exception { + boolean ssl = true; + Task task = null; + ServerSocket serverSocket = null; + Socket socket = null; + try { + serverSocket = NetUtils.createServerSocket(PORT, ssl); + serverSocket.setSoTimeout(WAIT_LONGER_MILLIS); + // emulate the situation ENABLE_ANONYMOUS_TLS=false on server side + String[] defaultCipherSuites = SSLContext.getDefault().getServerSocketFactory() + .getDefaultCipherSuites(); + ((SSLServerSocket) serverSocket).setEnabledCipherSuites(defaultCipherSuites); + task = createServerSocketTask(serverSocket); + task.execute(TASK_PREFIX + "AnonDisabled"); + Thread.sleep(WAIT_MILLIS); + socket = NetUtils.createLoopbackSocket(PORT, ssl); + assertTrue("loopback socket should be connected", socket.isConnected()); + // Java 6 API does not have getHandshakeSession() which could + // reveal the actual cipher selected in the attempted handshake + SSLSession session = ((SSLSocket) socket).getSession(); + assertFalse("TLS session should be invalid when the server" + + "disables anonymous TLS", session.isValid()); + // the SSL handshake should fail, because non-anon ciphers require + // a trusted certificate + assertEquals("SSL_NULL_WITH_NULL_NULL", session.getCipherSuite()); + } finally { + closeSilently(socket); + closeSilently(serverSocket); + if (task != null) { + assertTrue(task.getException() != null); + assertEquals(javax.net.ssl.SSLHandshakeException.class.getName(), + task.getException().getClass().getName()); + assertContains(task.getException().getMessage(), "certificate_unknown"); + task.join(); + } + } + } + + private Task createServerSocketTask(final ServerSocket serverSocket) { + Task task = new Task() { + + @Override + public void call() throws Exception { + Socket ss = null; + try { + ss = serverSocket.accept(); + ss.getOutputStream().write(123); + } finally { + closeSilently(ss); + } + } + }; + return task; + } + + /** + * Close a socket, ignoring errors + * + * 
@param socket the socket + */ + void closeSilently(Socket socket) { + try { + if (socket != null) { + socket.close(); + } + } catch (Exception e) { + // ignore + } + } + + /** + * Close a server socket, ignoring errors + * + * @param socket the server socket + */ + void closeSilently(ServerSocket socket) { + try { + socket.close(); + } catch (Exception e) { + // ignore + } + } + + private static void testFrequentConnections(boolean ssl, int count) throws Exception { + final ServerSocket serverSocket = NetUtils.createServerSocket(PORT, ssl); + final AtomicInteger counter = new AtomicInteger(count); + Task serverThread = new Task() { + @Override + public void call() { + while (!stop) { + try { + Socket socket = serverSocket.accept(); + // System.out.println("opened " + counter); + socket.close(); + } catch (Exception e) { + // ignore + } + } + // System.out.println("stopped "); + + } + }; + serverThread.execute(); + try { + Set workers = new HashSet<>(); + for (int i = 0; i < WORKER_COUNT; i++) { + workers.add(new ConnectWorker(ssl, counter)); + } + // ensure the server is started + Thread.sleep(100); + for (ConnectWorker worker : workers) { + worker.start(); + } + for (ConnectWorker worker : workers) { + worker.join(); + Exception e = worker.getException(); + if (e != null) { + e.printStackTrace(); + } + } + } finally { + try { + serverSocket.close(); + } catch (Exception e) { + // ignore + } + serverThread.get(); + } + } + + /** + * A worker thread to test connecting. 
+ */ + private static class ConnectWorker extends Thread { + + private final boolean ssl; + private final AtomicInteger counter; + private Exception exception; + + ConnectWorker(boolean ssl, AtomicInteger counter) { + this.ssl = ssl; + this.counter = counter; + } + + @Override + public void run() { + try { + while (!isInterrupted() && counter.decrementAndGet() > 0) { + Socket socket = NetUtils.createLoopbackSocket(PORT, ssl); + try { + socket.close(); + } catch (IOException e) { + // ignore + } + } + } catch (Exception e) { + exception = new Exception("count: " + counter, e); + } + } + + public Exception getException() { + return exception; + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestObjectDeserialization.java b/modules/h2/src/test/java/org/h2/test/unit/TestObjectDeserialization.java new file mode 100644 index 0000000000000..8055996edc71d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestObjectDeserialization.java @@ -0,0 +1,76 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: Noah Fontes + */ +package org.h2.test.unit; + +import org.h2.message.DbException; +import org.h2.test.TestBase; +import org.h2.util.JdbcUtils; +import org.h2.util.StringUtils; + +/** + * Tests the ability to deserialize objects that are not part of the system + * class-loading scope. + */ +public class TestObjectDeserialization extends TestBase { + + private static final String CLAZZ = "org.h2.test.unit.SampleObject"; + private static final String OBJECT = + "aced00057372001d6f72672e68322e746573742e756" + + "e69742e53616d706c654f626a65637400000000000000010200007870"; + + /** + * The thread context class loader was used. + */ + protected boolean usesThreadContextClassLoader; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + System.setProperty("h2.useThreadContextClassLoader", "true"); + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testThreadContextClassLoader(); + } + + private void testThreadContextClassLoader() { + usesThreadContextClassLoader = false; + Thread.currentThread().setContextClassLoader(new TestClassLoader()); + try { + JdbcUtils.deserialize(StringUtils.convertHexToBytes(OBJECT), null); + fail(); + } catch (DbException e) { + // expected + } + assertTrue(usesThreadContextClassLoader); + } + + /** + * A special class loader. + */ + private class TestClassLoader extends ClassLoader { + + public TestClassLoader() { + super(); + } + + @Override + protected synchronized Class loadClass(String name, boolean resolve) + throws ClassNotFoundException { + if (name.equals(CLAZZ)) { + usesThreadContextClassLoader = true; + } + return super.loadClass(name, resolve); + } + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestOldVersion.java b/modules/h2/src/test/java/org/h2/test/unit/TestOldVersion.java new file mode 100644 index 0000000000000..7f1c5b3d21364 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestOldVersion.java @@ -0,0 +1,159 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.lang.reflect.Method; +import java.net.URL; +import java.net.URLClassLoader; +import java.sql.Connection; +import java.sql.Driver; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.Properties; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.tools.Server; + +/** + * Tests the compatibility with older versions + */ +public class TestOldVersion extends TestBase { + + private ClassLoader cl; + private Driver driver; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.mvStore) { + return; + } + cl = getClassLoader("file:ext/h2-1.2.127.jar"); + driver = getDriver(cl); + if (driver == null) { + println("not found: ext/h2-1.2.127.jar - test skipped"); + return; + } + Connection conn = driver.connect("jdbc:h2:mem:", null); + assertEquals("1.2.127 (2010-01-15)", conn.getMetaData() + .getDatabaseProductVersion()); + conn.close(); + testLobInFiles(); + testOldClientNewServer(); + } + + private void testLobInFiles() throws Exception { + deleteDb("oldVersion"); + Connection conn; + Statement stat; + conn = driver.connect("jdbc:h2:" + getBaseDir() + "/oldVersion", null); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, b blob, c clob)"); + PreparedStatement prep = conn + .prepareStatement("insert into test values(?, ?, ?)"); + prep.setInt(1, 0); + prep.setNull(2, Types.BLOB); + prep.setNull(3, Types.CLOB); + prep.execute(); + prep.setInt(1, 1); + prep.setBytes(2, new byte[0]); + prep.setString(3, ""); + prep.execute(); + prep.setInt(1, 2); + prep.setBytes(2, new byte[5]); + prep.setString(3, 
"\u1234\u1234\u1234\u1234\u1234"); + prep.execute(); + prep.setInt(1, 3); + prep.setBytes(2, new byte[100000]); + prep.setString(3, new String(new char[100000])); + prep.execute(); + conn.close(); + conn = DriverManager.getConnection("jdbc:h2:" + getBaseDir() + + "/oldVersion", new Properties()); + stat = conn.createStatement(); + checkResult(stat.executeQuery("select * from test order by id")); + stat.execute("create table test2 as select * from test"); + checkResult(stat.executeQuery("select * from test2 order by id")); + stat.execute("delete from test"); + conn.close(); + } + + private void checkResult(ResultSet rs) throws SQLException { + rs.next(); + assertEquals(0, rs.getInt(1)); + assertEquals(null, rs.getBytes(2)); + assertEquals(null, rs.getString(3)); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals(new byte[0], rs.getBytes(2)); + assertEquals("", rs.getString(3)); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals(new byte[5], rs.getBytes(2)); + assertEquals("\u1234\u1234\u1234\u1234\u1234", rs.getString(3)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals(new byte[100000], rs.getBytes(2)); + assertEquals(new String(new char[100000]), rs.getString(3)); + } + + private void testOldClientNewServer() throws Exception { + Server server = org.h2.tools.Server.createTcpServer(); + server.start(); + int port = server.getPort(); + assertThrows(ErrorCode.DRIVER_VERSION_ERROR_2, driver).connect( + "jdbc:h2:tcp://localhost:" + port + "/mem:test", null); + server.stop(); + + Class serverClass = cl.loadClass("org.h2.tools.Server"); + Method m; + m = serverClass.getMethod("createTcpServer", String[].class); + Object serverOld = m.invoke(null, new Object[] { new String[] { + "-tcpPort", "" + port } }); + m = serverOld.getClass().getMethod("start"); + m.invoke(serverOld); + Connection conn; + conn = org.h2.Driver.load().connect("jdbc:h2:mem:", null); + Statement stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("call 
1");
+        rs.next();
+        assertEquals(1, rs.getInt(1));
+        conn.close();
+        m = serverOld.getClass().getMethod("stop");
+        m.invoke(serverOld);
+    }
+
+    private static ClassLoader getClassLoader(String jarFile) throws Exception {
+        URL[] urls = { new URL(jarFile) };
+        return new URLClassLoader(urls, null);
+    }
+
+    private static Driver getDriver(ClassLoader cl) throws Exception {
+        Class<?> driverClass;
+        try {
+            driverClass = cl.loadClass("org.h2.Driver");
+        } catch (ClassNotFoundException e) {
+            return null;
+        }
+        Method m = driverClass.getMethod("load");
+        Driver driver = (Driver) m.invoke(null);
+        return driver;
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestOverflow.java b/modules/h2/src/test/java/org/h2/test/unit/TestOverflow.java
new file mode 100644
index 0000000000000..ac447fe98a1e2
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/unit/TestOverflow.java
@@ -0,0 +1,130 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.unit;
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.Random;
+import org.h2.test.TestBase;
+import org.h2.util.New;
+import org.h2.value.Value;
+import org.h2.value.ValueString;
+
+/**
+ * Tests numeric overflow on various data types.
+ * Unlike in Java, overflow is detected and an exception is thrown.
+ */
+public class TestOverflow extends TestBase {
+
+    private ArrayList<Value> values;
+    private int dataType;
+    private BigInteger min, max;
+    private boolean successExpected;
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String...
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + test(Value.BYTE, Byte.MIN_VALUE, Byte.MAX_VALUE); + test(Value.INT, Integer.MIN_VALUE, Integer.MAX_VALUE); + test(Value.LONG, Long.MIN_VALUE, Long.MAX_VALUE); + test(Value.SHORT, Short.MIN_VALUE, Short.MAX_VALUE); + } + + private void test(int type, long minValue, long maxValue) { + values = New.arrayList(); + this.dataType = type; + this.min = new BigInteger("" + minValue); + this.max = new BigInteger("" + maxValue); + add(0); + add(minValue); + add(maxValue); + add(maxValue - 1); + add(minValue + 1); + add(1); + add(-1); + Random random = new Random(1); + for (int i = 0; i < 40; i++) { + if (maxValue > Integer.MAX_VALUE) { + add(random.nextLong()); + } else { + add((random.nextBoolean() ? 1 : -1) * random.nextInt((int) maxValue)); + } + } + for (Value va : values) { + for (Value vb : values) { + testValues(va, vb); + } + } + } + + private void checkIfExpected(String a, String b) { + if (successExpected) { + assertEquals(a, b); + } + } + + private void onSuccess() { + if (!successExpected) { + fail(); + } + } + + private void onError() { + if (successExpected) { + fail(); + } + } + + private void testValues(Value va, Value vb) { + BigInteger a = new BigInteger(va.getString()); + BigInteger b = new BigInteger(vb.getString()); + successExpected = inRange(a.negate()); + try { + checkIfExpected(va.negate().getString(), a.negate().toString()); + onSuccess(); + } catch (Exception e) { + onError(); + } + successExpected = inRange(a.add(b)); + try { + checkIfExpected(va.add(vb).getString(), a.add(b).toString()); + onSuccess(); + } catch (Exception e) { + onError(); + } + successExpected = inRange(a.subtract(b)); + try { + checkIfExpected(va.subtract(vb).getString(), a.subtract(b).toString()); + onSuccess(); + } catch (Exception e) { + onError(); + } + successExpected = inRange(a.multiply(b)); + try { + checkIfExpected(va.multiply(vb).getString(), 
a.multiply(b).toString()); + onSuccess(); + } catch (Exception e) { + onError(); + } + } + + private boolean inRange(BigInteger v) { + return v.compareTo(min) >= 0 && v.compareTo(max) <= 0; + } + + private void add(long l) { + values.add(ValueString.get("" + l).convertTo(dataType)); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestPageStore.java b/modules/h2/src/test/java/org/h2/test/unit/TestPageStore.java new file mode 100644 index 0000000000000..dca57a04874ca --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestPageStore.java @@ -0,0 +1,906 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.InputStream; +import java.io.InputStreamReader; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Random; +import java.util.Set; +import java.util.TreeSet; +import java.util.concurrent.TimeUnit; + +import org.h2.api.DatabaseEventListener; +import org.h2.api.ErrorCode; +import org.h2.result.Row; +import org.h2.result.RowImpl; +import org.h2.store.Page; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; +import org.h2.util.JdbcUtils; +import org.h2.util.New; + +/** + * Test the page store. + */ +public class TestPageStore extends TestBase { + + /** + * The events log. + */ + static StringBuilder eventBuffer = new StringBuilder(); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + System.setProperty("h2.check2", "true"); + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.memory) { + return; + } + deleteDb(null); + testDropTempTable(); + testLogLimitFalsePositive(); + testLogLimit(); + testRecoverLobInDatabase(); + testWriteTransactionLogBeforeData(); + testDefrag(); + testInsertReverse(); + testInsertDelete(); + testCheckpoint(); + testDropRecreate(); + testDropAll(); + testCloseTempTable(); + testDuplicateKey(); + testUpdateOverflow(); + testTruncateReconnect(); + testReverseIndex(); + testLargeUpdates(); + testLargeInserts(); + testLargeDatabaseFastOpen(); + testUniqueIndexReopen(); + testLargeRows(); + testRecoverDropIndex(); + testDropPk(); + testCreatePkLater(); + testTruncate(); + testLargeIndex(); + testUniqueIndex(); + testCreateIndexLater(); + testFuzzOperations(); + deleteDb(null); + } + + private void testDropTempTable() throws SQLException { + deleteDb("pageStoreDropTemp"); + Connection c1 = getConnection("pageStoreDropTemp"); + Connection c2 = getConnection("pageStoreDropTemp"); + c1.setAutoCommit(false); + c2.setAutoCommit(false); + Statement s1 = c1.createStatement(); + Statement s2 = c2.createStatement(); + s1.execute("create local temporary table a(id int primary key)"); + s1.execute("insert into a values(1)"); + c1.commit(); + c1.close(); + s2.execute("create table b(id int primary key)"); + s2.execute("insert into b values(1)"); + c2.commit(); + s2.execute("checkpoint sync"); + s2.execute("shutdown immediately"); + try { + c2.close(); + } catch (SQLException e) { + // ignore + } + c1 = getConnection("pageStoreDropTemp"); + c1.close(); + deleteDb("pageStoreDropTemp"); + } + + private void testLogLimit() throws Exception { + if (config.mvStore) { + return; + } + deleteDb("pageStoreLogLimit"); + Connection conn, conn2; + Statement stat, stat2; + String url = "pageStoreLogLimit;TRACE_LEVEL_FILE=2"; + conn = getConnection(url); + stat = 
conn.createStatement(); + stat.execute("create table test(id int primary key)"); + conn.setAutoCommit(false); + stat.execute("insert into test values(1)"); + + conn2 = getConnection(url); + stat2 = conn2.createStatement(); + stat2.execute("create table t2(id identity, name varchar)"); + stat2.execute("set max_log_size 1"); + for (int i = 0; i < 10; i++) { + stat2.execute("insert into t2(name) " + + "select space(100) from system_range(1, 1000)"); + } + InputStream in = FileUtils.newInputStream(getBaseDir() + + "/pageStoreLogLimit.trace.db"); + String s = IOUtils.readStringAndClose(new InputStreamReader(in), -1); + assertContains(s, "Transaction log could not be truncated"); + conn.commit(); + ResultSet rs = stat2.executeQuery("select * from test"); + assertTrue(rs.next()); + conn2.close(); + conn.close(); + } + + private void testLogLimitFalsePositive() throws Exception { + deleteDb("pageStoreLogLimitFalsePositive"); + String url = "pageStoreLogLimitFalsePositive;TRACE_LEVEL_FILE=2"; + Connection conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("set max_log_size 1"); + stat.execute("create table test(x varchar)"); + for (int i = 0; i < 1000; ++i) { + stat.execute("insert into test values (space(2000))"); + } + stat.execute("checkpoint"); + InputStream in = FileUtils.newInputStream(getBaseDir() + + "/pageStoreLogLimitFalsePositive.trace.db"); + String s = IOUtils.readStringAndClose(new InputStreamReader(in), -1); + assertFalse(s.indexOf("Transaction log could not be truncated") > 0); + conn.close(); + } + + private void testRecoverLobInDatabase() throws SQLException { + deleteDb("pageStoreRecoverLobInDatabase"); + String url = getURL("pageStoreRecoverLobInDatabase;" + + "MVCC=TRUE;CACHE_SIZE=1", true); + Connection conn; + Statement stat; + conn = getConnection(url, getUser(), getPassword()); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name clob)"); + stat.execute("create index idx_id on 
test(id)");
+        stat.execute("insert into test " +
+                "select x, space(1100+x) from system_range(1, 100)");
+        Random r = new Random(1);
+        ArrayList<Connection> list = New.arrayList();
+        for (int i = 0; i < 10; i++) {
+            Connection conn2 = getConnection(url, getUser(), getPassword());
+            list.add(conn2);
+            Statement stat2 = conn2.createStatement();
+            conn2.setAutoCommit(false);
+            if (r.nextBoolean()) {
+                stat2.execute("update test set id = id where id = " + r.nextInt(100));
+            } else {
+                stat2.execute("delete from test where id = " + r.nextInt(100));
+            }
+        }
+        stat.execute("shutdown immediately");
+        JdbcUtils.closeSilently(conn);
+        for (Connection c : list) {
+            JdbcUtils.closeSilently(c);
+        }
+        conn = getConnection(url, getUser(), getPassword());
+        conn.close();
+    }
+
+    private void testWriteTransactionLogBeforeData() throws SQLException {
+        deleteDb("pageStoreWriteTransactionLogBeforeData");
+        String url = getURL("pageStoreWriteTransactionLogBeforeData;" +
+                "CACHE_SIZE=16;WRITE_DELAY=1000000", true);
+        Connection conn;
+        Statement stat;
+        conn = getConnection(url, getUser(), getPassword());
+        stat = conn.createStatement();
+        stat.execute("create table test(name varchar) as select space(100000)");
+        for (int i = 0; i < 100; i++) {
+            stat.execute("create table test" + i + "(id int) " +
+                    "as select x from system_range(1, 1000)");
+        }
+        conn.close();
+        conn = getConnection(url, getUser(), getPassword());
+        stat = conn.createStatement();
+        stat.execute("drop table test0");
+        stat.execute("select * from test");
+        stat.execute("shutdown immediately");
+        try {
+            conn.close();
+        } catch (Exception e) {
+            // ignore
+        }
+        conn = getConnection(url, getUser(), getPassword());
+        stat = conn.createStatement();
+        for (int i = 1; i < 100; i++) {
+            stat.execute("select * from test" + i);
+        }
+        conn.close();
+    }
+
+    private void testDefrag() throws SQLException {
+        if (config.reopen || config.multiThreaded) {
+            return;
+        }
+        deleteDb("pageStoreDefrag");
+        Connection conn = getConnection(
"pageStoreDefrag;LOG=0;UNDO_LOG=0;LOCK_MODE=0"); + Statement stat = conn.createStatement(); + int tableCount = 10; + int rowCount = getSize(1000, 100000); + for (int i = 0; i < tableCount; i++) { + stat.execute("create table test" + i + "(id int primary key, " + + "string1 varchar, string2 varchar, string3 varchar)"); + } + for (int j = 0; j < tableCount; j++) { + PreparedStatement prep = conn.prepareStatement( + "insert into test" + j + " values(?, ?, ?, ?)"); + for (int i = 0; i < rowCount; i++) { + prep.setInt(1, i); + prep.setInt(2, i); + prep.setInt(3, i); + prep.setInt(4, i); + prep.execute(); + } + } + stat.executeUpdate("shutdown defrag"); + conn.close(); + } + + private void testInsertReverse() throws SQLException { + deleteDb("pageStoreInsertReverse"); + Connection conn; + conn = getConnection("pageStoreInsertReverse"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data varchar)"); + stat.execute("insert into test select -x, space(100) " + + "from system_range(1, 1000)"); + stat.execute("drop table test"); + stat.execute("create table test(id int primary key, data varchar)"); + stat.execute("insert into test select -x, space(2048) " + + "from system_range(1, 1000)"); + conn.close(); + } + + private void testInsertDelete() { + Row[] x = new Row[0]; + Row r = new RowImpl(null, 0); + x = Page.insert(x, 0, 0, r); + assertTrue(x[0] == r); + Row r2 = new RowImpl(null, 0); + x = Page.insert(x, 1, 0, r2); + assertTrue(x[0] == r2); + assertTrue(x[1] == r); + Row r3 = new RowImpl(null, 0); + x = Page.insert(x, 2, 1, r3); + assertTrue(x[0] == r2); + assertTrue(x[1] == r3); + assertTrue(x[2] == r); + + x = Page.remove(x, 3, 1); + assertTrue(x[0] == r2); + assertTrue(x[1] == r); + x = Page.remove(x, 2, 0); + assertTrue(x[0] == r); + x = Page.remove(x, 1, 0); + } + + private void testCheckpoint() throws SQLException { + deleteDb("pageStoreCheckpoint"); + Connection conn; + conn = getConnection("pageStoreCheckpoint"); + 
Statement stat = conn.createStatement();
+        stat.execute("create table test(data varchar)");
+        stat.execute("create sequence seq");
+        stat.execute("set max_log_size 1");
+        conn.setAutoCommit(false);
+        stat.execute("insert into test select space(1000) from system_range(1, 1000)");
+        long before = System.nanoTime();
+        stat.execute("select nextval('SEQ') from system_range(1, 100000)");
+        long after = System.nanoTime();
+        // it's hard to test - basically it shouldn't checkpoint too often
+        if (after - before > TimeUnit.SECONDS.toNanos(20)) {
+            if (!config.reopen) {
+                fail("Checkpoint took " + TimeUnit.NANOSECONDS.toMillis(after - before) + " ms");
+            }
+        }
+        stat.execute("drop table test");
+        stat.execute("drop sequence seq");
+        conn.close();
+    }
+
+    private void testDropRecreate() throws SQLException {
+        if (config.memory) {
+            return;
+        }
+        deleteDb("pageStoreDropRecreate");
+        Connection conn;
+        conn = getConnection("pageStoreDropRecreate");
+        Statement stat = conn.createStatement();
+        stat.execute("create table test(id int)");
+        stat.execute("create index idx_test on test(id)");
+        stat.execute("create table test2(id int)");
+        stat.execute("drop table test");
+        // this will re-use the object id of the test table,
+        // which is lower than the object id of test2
+        stat.execute("create index idx_test on test2(id)");
+        conn.close();
+        conn = getConnection("pageStoreDropRecreate");
+        conn.close();
+    }
+
+    private void testDropAll() throws SQLException {
+        deleteDb("pageStoreDropAll");
+        Connection conn;
+        String url = "pageStoreDropAll";
+        conn = getConnection(url);
+        Statement stat = conn.createStatement();
+        stat.execute("CREATE TEMP TABLE A(A INT)");
+        stat.execute("CREATE TABLE B(A VARCHAR IDENTITY)");
+        stat.execute("CREATE TEMP TABLE C(A INT)");
+        conn.close();
+        conn = getConnection(url);
+        stat = conn.createStatement();
+        stat.execute("DROP ALL OBJECTS");
+        conn.close();
+    }
+
+    private void testCloseTempTable() throws SQLException {
deleteDb("pageStoreCloseTempTable"); + Connection conn; + String url = "pageStoreCloseTempTable;CACHE_SIZE=0"; + conn = getConnection(url); + Statement stat = conn.createStatement(); + stat.execute("create local temporary table test(id int)"); + conn.rollback(); + Connection conn2 = getConnection(url); + Statement stat2 = conn2.createStatement(); + stat2.execute("create table test2 as select x from system_range(1, 5000)"); + stat2.execute("shutdown immediately"); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn).close(); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn2).close(); + } + + private void testDuplicateKey() throws SQLException { + deleteDb("pageStoreDuplicateKey"); + Connection conn; + conn = getConnection("pageStoreDuplicateKey"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(0, space(3000))"); + try { + stat.execute("insert into test values(0, space(3000))"); + } catch (SQLException e) { + // ignore + } + stat.execute("select * from test"); + conn.close(); + } + + private void testTruncateReconnect() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreTruncateReconnect"); + Connection conn; + conn = getConnection("pageStoreTruncateReconnect"); + conn.createStatement().execute( + "create table test(id int primary key, name varchar)"); + conn.createStatement().execute( + "insert into test(id) select x from system_range(1, 390)"); + conn.createStatement().execute("checkpoint"); + conn.createStatement().execute("shutdown immediately"); + JdbcUtils.closeSilently(conn); + conn = getConnection("pageStoreTruncateReconnect"); + conn.createStatement().execute("truncate table test"); + conn.createStatement().execute( + "insert into test(id) select x from system_range(1, 390)"); + conn.createStatement().execute("shutdown immediately"); + JdbcUtils.closeSilently(conn); + conn = getConnection("pageStoreTruncateReconnect"); + 
conn.close(); + } + + private void testUpdateOverflow() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreUpdateOverflow"); + Connection conn; + conn = getConnection("pageStoreUpdateOverflow"); + conn.createStatement().execute("create table test" + + "(id int primary key, name varchar)"); + conn.createStatement().execute( + "insert into test values(0, space(3000))"); + conn.createStatement().execute("checkpoint"); + conn.createStatement().execute("shutdown immediately"); + + JdbcUtils.closeSilently(conn); + conn = getConnection("pageStoreUpdateOverflow"); + conn.createStatement().execute("update test set id = 1"); + conn.createStatement().execute("shutdown immediately"); + + JdbcUtils.closeSilently(conn); + conn = getConnection("pageStoreUpdateOverflow"); + conn.close(); + } + + private void testReverseIndex() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreReverseIndex"); + Connection conn = getConnection("pageStoreReverseIndex"); + Statement stat = conn.createStatement(); + stat.execute("create table test(x int, y varchar default space(200))"); + for (int i = 30; i < 100; i++) { + stat.execute("insert into test(x) select null from system_range(1, " + i + ")"); + stat.execute("insert into test(x) select x from system_range(1, " + i + ")"); + stat.execute("create index idx on test(x desc, y)"); + ResultSet rs = stat.executeQuery("select min(x) from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + stat.execute("drop index idx"); + stat.execute("truncate table test"); + } + conn.close(); + } + + private void testLargeUpdates() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreLargeUpdates"); + Connection conn; + conn = getConnection("pageStoreLargeUpdates"); + Statement stat = conn.createStatement(); + int size = 1500; + stat.execute("call rand(1)"); + stat.execute( + "create table test(id int primary key, data varchar, test int) as " + + "select x, '', 123 from 
system_range(1, " + size + ")"); + Random random = new Random(1); + PreparedStatement prep = conn.prepareStatement( + "update test set data=space(?) where id=?"); + for (int i = 0; i < 2500; i++) { + int id = random.nextInt(size); + int newSize = random.nextInt(6000); + prep.setInt(1, newSize); + prep.setInt(2, id); + prep.execute(); + } + conn.close(); + conn = getConnection("pageStoreLargeUpdates"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * from test where test<>123"); + assertFalse(rs.next()); + conn.close(); + } + + private void testLargeInserts() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreLargeInserts"); + Connection conn; + conn = getConnection("pageStoreLargeInserts"); + Statement stat = conn.createStatement(); + stat.execute("create table test(data varchar)"); + stat.execute("insert into test values(space(1024 * 1024))"); + stat.execute("insert into test values(space(1024 * 1024))"); + conn.close(); + } + + private void testLargeDatabaseFastOpen() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreLargeDatabaseFastOpen"); + Connection conn; + String url = "pageStoreLargeDatabaseFastOpen"; + conn = getConnection(url); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + conn.createStatement().execute( + "create unique index idx_test_name on test(name)"); + conn.createStatement().execute( + "INSERT INTO TEST " + + "SELECT X, X || space(10) FROM SYSTEM_RANGE(1, 1000)"); + conn.close(); + conn = getConnection(url); + conn.createStatement().execute("DELETE FROM TEST WHERE ID=1"); + conn.createStatement().execute("CHECKPOINT"); + conn.createStatement().execute("SHUTDOWN IMMEDIATELY"); + try { + conn.close(); + } catch (SQLException e) { + // ignore + } + eventBuffer.setLength(0); + conn = getConnection(url + ";DATABASE_EVENT_LISTENER='" + + MyDatabaseEventListener.class.getName() + "'"); + assertEquals("init;opened;", 
eventBuffer.toString()); + conn.close(); + } + + private void testUniqueIndexReopen() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreUniqueIndexReopen"); + Connection conn; + String url = "pageStoreUniqueIndexReopen"; + conn = getConnection(url); + conn.createStatement().execute( + "CREATE TABLE test(ID INT PRIMARY KEY, NAME VARCHAR(255))"); + conn.createStatement().execute( + "create unique index idx_test_name on test(name)"); + conn.createStatement().execute("INSERT INTO TEST VALUES(1, 'Hello')"); + conn.close(); + conn = getConnection(url); + assertThrows(ErrorCode.DUPLICATE_KEY_1, conn.createStatement()) + .execute("INSERT INTO TEST VALUES(2, 'Hello')"); + conn.close(); + } + + private void testLargeRows() throws Exception { + if (config.memory) { + return; + } + for (int i = 0; i < 10; i++) { + testLargeRows(i); + } + } + + private void testLargeRows(int seed) throws Exception { + deleteDb("pageStoreLargeRows"); + String url = getURL("pageStoreLargeRows;CACHE_SIZE=16", true); + Connection conn = null; + Statement stat = null; + int count = 0; + try { + Class.forName("org.h2.Driver"); + conn = DriverManager.getConnection(url); + stat = conn.createStatement(); + int tableCount = 1; + PreparedStatement[] insert = new PreparedStatement[tableCount]; + PreparedStatement[] deleteMany = new PreparedStatement[tableCount]; + PreparedStatement[] updateMany = new PreparedStatement[tableCount]; + for (int i = 0; i < tableCount; i++) { + stat.execute("create table test" + i + + "(id int primary key, name varchar)"); + stat.execute("create index idx_test" + i + " on test" + i + + "(name)"); + insert[i] = conn.prepareStatement("insert into test" + i + + " values(?, ? || space(?))"); + deleteMany[i] = conn.prepareStatement("delete from test" + i + + " where id between ? and ?"); + updateMany[i] = conn.prepareStatement("update test" + i + + " set name=? || space(?) where id between ? 
and ?"); + } + Random random = new Random(seed); + for (int i = 0; i < 1000; i++) { + count = i; + PreparedStatement p; + if (random.nextInt(100) < 95) { + p = insert[random.nextInt(tableCount)]; + p.setInt(1, i); + p.setInt(2, i); + if (random.nextInt(30) == 5) { + p.setInt(3, 3000); + } else { + p.setInt(3, random.nextInt(100)); + } + p.execute(); + } else if (random.nextInt(100) < 90) { + p = updateMany[random.nextInt(tableCount)]; + p.setInt(1, i); + p.setInt(2, random.nextInt(50)); + int first = random.nextInt(1 + i); + p.setInt(3, first); + p.setInt(4, first + random.nextInt(50)); + p.executeUpdate(); + } else { + p = deleteMany[random.nextInt(tableCount)]; + int first = random.nextInt(1 + i); + p.setInt(1, first); + p.setInt(2, first + random.nextInt(100)); + p.executeUpdate(); + } + } + conn.close(); + conn = DriverManager.getConnection(url); + conn.close(); + conn = DriverManager.getConnection(url); + stat = conn.createStatement(); + stat.execute("script to '" + getBaseDir() + "/pageStoreLargeRows.sql'"); + conn.close(); + FileUtils.delete(getBaseDir() + "/pageStoreLargeRows.sql"); + } catch (Exception e) { + if (stat != null) { + try { + stat.execute("shutdown immediately"); + } catch (SQLException e2) { + // ignore + } + } + if (conn != null) { + try { + conn.close(); + } catch (SQLException e2) { + // ignore + } + } + throw new RuntimeException("count: " + count, e); + } + } + + private void testRecoverDropIndex() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreRecoverDropIndex"); + Connection conn = getConnection("pageStoreRecoverDropIndex"); + Statement stat = conn.createStatement(); + stat.execute("set write_delay 0"); + stat.execute("create table test(id int, name varchar) " + + "as select x, x from system_range(1, 1400)"); + stat.execute("create index idx_name on test(name)"); + conn.close(); + conn = getConnection("pageStoreRecoverDropIndex"); + stat = conn.createStatement(); + stat.execute("drop index idx_name"); + 
stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (SQLException e) { + // ignore + } + conn = getConnection("pageStoreRecoverDropIndex;cache_size=1"); + conn.close(); + } + + private void testDropPk() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreDropPk"); + Connection conn; + Statement stat; + conn = getConnection("pageStoreDropPk"); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key)"); + stat.execute("insert into test values(" + Integer.MIN_VALUE + "), (" + + Integer.MAX_VALUE + ")"); + stat.execute("alter table test drop primary key"); + conn.close(); + conn = getConnection("pageStoreDropPk"); + stat = conn.createStatement(); + stat.execute("insert into test values(" + Integer.MIN_VALUE + "), (" + + Integer.MAX_VALUE + ")"); + conn.close(); + } + + private void testCreatePkLater() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreCreatePkLater"); + Connection conn; + Statement stat; + conn = getConnection("pageStoreCreatePkLater"); + stat = conn.createStatement(); + stat.execute("create table test(id int not null) as select 100"); + stat.execute("create primary key on test(id)"); + conn.close(); + conn = getConnection("pageStoreCreatePkLater"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("select * from test where id = 100"); + assertTrue(rs.next()); + conn.close(); + } + + private void testTruncate() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreTruncate"); + Connection conn = getConnection("pageStoreTruncate"); + Statement stat = conn.createStatement(); + stat.execute("set write_delay 0"); + stat.execute("create table test(id int) as select 1"); + stat.execute("truncate table test"); + stat.execute("insert into test values(1)"); + stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (SQLException e) { + // ignore + } + conn = getConnection("pageStoreTruncate"); + 
conn.close(); + } + + private void testLargeIndex() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreLargeIndex"); + Connection conn = getConnection("pageStoreLargeIndex"); + conn.createStatement().execute( + "create table test(id varchar primary key, d varchar)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(?, space(500))"); + for (int i = 0; i < 20000; i++) { + prep.setString(1, "" + i); + prep.executeUpdate(); + } + conn.close(); + } + + private void testUniqueIndex() throws SQLException { + if (config.memory) { + return; + } + deleteDb("pageStoreUniqueIndex"); + Connection conn = getConnection("pageStoreUniqueIndex"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT UNIQUE)"); + stat.execute("INSERT INTO TEST VALUES(1)"); + conn.close(); + conn = getConnection("pageStoreUniqueIndex"); + assertThrows(ErrorCode.DUPLICATE_KEY_1, + conn.createStatement()).execute("INSERT INTO TEST VALUES(1)"); + conn.close(); + } + + private void testCreateIndexLater() throws SQLException { + deleteDb("pageStoreCreateIndexLater"); + Connection conn = getConnection("pageStoreCreateIndexLater"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(NAME VARCHAR) AS SELECT 1"); + stat.execute("CREATE INDEX IDX_N ON TEST(NAME)"); + stat.execute("INSERT INTO TEST SELECT X FROM SYSTEM_RANGE(20, 100)"); + stat.execute("INSERT INTO TEST SELECT X FROM SYSTEM_RANGE(1000, 1100)"); + stat.execute("SHUTDOWN IMMEDIATELY"); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn).close(); + conn = getConnection("pageStoreCreateIndexLater"); + conn.close(); + } + + private void testFuzzOperations() throws Exception { + int best = Integer.MAX_VALUE; + for (int i = 0; i < 10; i++) { + int x = testFuzzOperationsSeed(i, 10); + if (x >= 0 && x < best) { + best = x; + fail("op:" + x + " seed:" + i); + } + } + } + + private int testFuzzOperationsSeed(int seed, int len) throws SQLException { 
+ deleteDb("pageStoreFuzz"); + Connection conn = getConnection("pageStoreFuzz"); + Statement stat = conn.createStatement(); + log("DROP TABLE IF EXISTS TEST;"); + stat.execute("DROP TABLE IF EXISTS TEST"); + log("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "NAME VARCHAR DEFAULT 'Hello World');"); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, " + + "NAME VARCHAR DEFAULT 'Hello World')"); + Set<Integer> rows = new TreeSet<>(); + Random random = new Random(seed); + for (int i = 0; i < len; i++) { + int op = random.nextInt(3); + Integer x = random.nextInt(100); + switch (op) { + case 0: + if (!rows.contains(x)) { + log("insert into test(id) values(" + x + ");"); + stat.execute("INSERT INTO TEST(ID) VALUES(" + x + ");"); + rows.add(x); + } + break; + case 1: + if (rows.contains(x)) { + log("delete from test where id=" + x + ";"); + stat.execute("DELETE FROM TEST WHERE ID=" + x); + rows.remove(x); + } + break; + case 2: + conn.close(); + conn = getConnection("pageStoreFuzz"); + stat = conn.createStatement(); + ResultSet rs = stat.executeQuery("SELECT * FROM TEST ORDER BY ID"); + log("--reconnect"); + for (int test : rows) { + if (!rs.next()) { + log("error: expected next"); + conn.close(); + return i; + } + int y = rs.getInt(1); + // System.out.println(" " + x); + if (y != test) { + log("error: " + y + " <> " + test); + conn.close(); + return i; + } + } + if (rs.next()) { + log("error: unexpected next"); + conn.close(); + return i; + } + } + } + conn.close(); + return -1; + } + + private void log(String m) { + trace(" " + m); + } + + /** + * A database event listener used in this test. 
+ */ + public static final class MyDatabaseEventListener implements + DatabaseEventListener { + + @Override + public void closingDatabase() { + event("closing"); + } + + @Override + public void exceptionThrown(SQLException e, String sql) { + event("exceptionThrown " + e + " " + sql); + } + + @Override + public void init(String url) { + event("init"); + } + + @Override + public void opened() { + event("opened"); + } + + @Override + public void setProgress(int state, String name, int x, int max) { + if (name.startsWith("SYS:SYS_ID")) { + // ignore + return; + } + switch (state) { + case DatabaseEventListener.STATE_STATEMENT_START: + case DatabaseEventListener.STATE_STATEMENT_END: + case DatabaseEventListener.STATE_STATEMENT_PROGRESS: + return; + } + event("setProgress " + state + " " + name + " " + x + " " + max); + } + + private static void event(String s) { + eventBuffer.append(s).append(';'); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestPageStoreCoverage.java b/modules/h2/src/test/java/org/h2/test/unit/TestPageStoreCoverage.java new file mode 100644 index 0000000000000..1431899cf9330 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestPageStoreCoverage.java @@ -0,0 +1,264 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.nio.channels.FileChannel; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +import org.h2.api.ErrorCode; +import org.h2.engine.Constants; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.Restore; + +/** + * Test the page store. + */ +public class TestPageStoreCoverage extends TestBase { + + private static final String URL = "pageStoreCoverage;" + + "PAGE_SIZE=64;CACHE_SIZE=16;MAX_LOG_SIZE=1"; + + /** + * Run just this test. 
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + // TODO mvcc, 2-phase commit + if (config.memory) { + return; + } + deleteDb("pageStoreCoverage"); + testMoveRoot(); + testBasic(); + testReadOnly(); + testIncompleteCreate(); + testBackupRestore(); + testTrim(); + testLongTransaction(); + testRecoverTemp(); + deleteDb("pageStoreCoverage"); + } + + private void testMoveRoot() throws SQLException { + Connection conn; + + conn = getConnection(URL); + Statement stat = conn.createStatement(); + stat.execute("create memory table test(id int primary key) " + + "as select x from system_range(1, 20)"); + for (int i = 0; i < 10; i++) { + stat.execute("create memory table test" + i + + "(id int primary key) as select x from system_range(1, 2)"); + } + stat.execute("drop table test"); + conn.close(); + + conn = getConnection(URL); + stat = conn.createStatement(); + stat.execute("drop all objects delete files"); + conn.close(); + + conn = getConnection(URL); + stat = conn.createStatement(); + stat.execute("create table test(id int primary key) " + + "as select x from system_range(1, 100)"); + for (int i = 0; i < 10; i++) { + stat.execute("create table test" + i + "(id int primary key) " + + "as select x from system_range(1, 2)"); + } + stat.execute("drop table test"); + conn.close(); + + conn = getConnection(URL); + stat = conn.createStatement(); + for (int i = 0; i < 10; i++) { + ResultSet rs = stat.executeQuery("select * from test" + i); + while (rs.next()) { + // ignore + } + } + stat.execute("drop all objects delete files"); + conn.close(); + } + + private void testRecoverTemp() throws SQLException { + Connection conn; + conn = getConnection(URL); + Statement stat = conn.createStatement(); + stat.execute("create cached temporary table test(id identity, name varchar)"); + stat.execute("create index idx_test_name on test(name)"); + 
stat.execute("create index idx_test_name2 on test(name, id)"); + stat.execute("create table test2(id identity, name varchar)"); + stat.execute("create index idx_test2_name on test2(name desc)"); + stat.execute("create index idx_test2_name2 on test2(name, id)"); + stat.execute("insert into test2 " + + "select null, space(10) from system_range(1, 10)"); + stat.execute("create table test3(id identity, name varchar)"); + stat.execute("checkpoint"); + conn.setAutoCommit(false); + stat.execute("create table test4(id identity, name varchar)"); + stat.execute("create index idx_test4_name2 on test(name, id)"); + stat.execute("insert into test " + + "select null, space(10) from system_range(1, 10)"); + stat.execute("insert into test3 " + + "select null, space(10) from system_range(1, 10)"); + stat.execute("insert into test4 " + + "select null, space(10) from system_range(1, 10)"); + stat.execute("truncate table test2"); + stat.execute("drop index idx_test_name"); + stat.execute("drop index idx_test2_name"); + stat.execute("drop table test2"); + stat.execute("insert into test " + + "select null, space(10) from system_range(1, 10)"); + stat.execute("shutdown immediately"); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn).close(); + conn = getConnection(URL); + stat = conn.createStatement(); + stat.execute("drop all objects"); + // re-allocate index root pages + for (int i = 0; i < 10; i++) { + stat.execute("create table test" + i + "(id identity, name varchar)"); + } + stat.execute("checkpoint"); + for (int i = 0; i < 10; i++) { + stat.execute("drop table test" + i); + } + for (int i = 0; i < 10; i++) { + stat.execute("create table test" + i + "(id identity, name varchar)"); + } + stat.execute("shutdown immediately"); + assertThrows(ErrorCode.DATABASE_IS_CLOSED, conn).close(); + conn = getConnection(URL); + conn.createStatement().execute("drop all objects"); + conn.close(); + } + + private void testLongTransaction() throws SQLException { + Connection conn; + conn = 
getConnection(URL); + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity, name varchar)"); + conn.setAutoCommit(false); + stat.execute("insert into test " + + "select null, space(10) from system_range(1, 10)"); + Connection conn2; + conn2 = getConnection(URL); + Statement stat2 = conn2.createStatement(); + stat2.execute("checkpoint"); + // large transaction + stat2.execute("create table test2(id identity, name varchar)"); + stat2.execute("create index idx_test2_name on test2(name)"); + stat2.execute("insert into test2 " + + "select null, x || space(10000) from system_range(1, 100)"); + stat2.execute("drop table test2"); + conn2.close(); + stat.execute("drop table test"); + conn.close(); + } + + private void testTrim() throws SQLException { + Connection conn; + conn = getConnection(URL); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("create index idx_name on test(name, id)"); + stat.execute("insert into test " + + "select x, x || space(10) from system_range(1, 20)"); + stat.execute("create table test2(id int primary key, name varchar)"); + stat.execute("create index idx_test2_name on test2(name, id)"); + stat.execute("insert into test2 " + + "select x, x || space(10) from system_range(1, 20)"); + stat.execute("create table test3(id int primary key, name varchar)"); + stat.execute("create index idx_test3_name on test3(name, id)"); + stat.execute("insert into test3 " + + "select x, x || space(3) from system_range(1, 3)"); + stat.execute("delete from test"); + stat.execute("checkpoint"); + stat.execute("checkpoint sync"); + stat.execute("shutdown compact"); + conn.close(); + conn = getConnection(URL); + conn.createStatement().execute("drop all objects"); + conn.close(); + } + + private void testBasic() throws Exception { + Connection conn; + conn = getConnection(URL); + conn.close(); + conn = getConnection(URL); + conn.close(); + + } + + private void 
testReadOnly() throws Exception { + Connection conn; + conn = getConnection(URL); + conn.createStatement().execute("shutdown compact"); + conn.close(); + conn = getConnection(URL + ";access_mode_data=r"); + conn.close(); + } + + private void testBackupRestore() throws Exception { + Connection conn; + conn = getConnection(URL); + Statement stat = conn.createStatement(); + stat.execute( + "create table test(id int primary key, name varchar)"); + stat.execute( + "create index idx_name on test(name, id)"); + stat.execute( + "insert into test select x, x || space(200 * x) from system_range(1, 10)"); + conn.setAutoCommit(false); + stat.execute("delete from test where id > 5"); + stat.execute("backup to '" + getBaseDir() + "/backup.zip'"); + conn.rollback(); + Restore.execute(getBaseDir() + "/backup.zip", getBaseDir(), + "pageStore2"); + Connection conn2; + conn2 = getConnection("pageStore2"); + Statement stat2 = conn2.createStatement(); + assertEqualDatabases(stat, stat2); + conn.createStatement().execute("drop table test"); + conn2.close(); + conn.close(); + FileUtils.delete(getBaseDir() + "/backup.zip"); + deleteDb("pageStore2"); + } + + private void testIncompleteCreate() throws Exception { + deleteDb("pageStoreCoverage"); + Connection conn; + String fileName = getBaseDir() + "/pageStore" + Constants.SUFFIX_PAGE_FILE; + conn = getConnection("pageStoreCoverage"); + Statement stat = conn.createStatement(); + stat.execute("drop table if exists INFORMATION_SCHEMA.LOB_DATA"); + stat.execute("drop table if exists INFORMATION_SCHEMA.LOB_MAP"); + conn.close(); + FileChannel f = FileUtils.open(fileName, "rw"); + // create a new database + conn = getConnection("pageStoreCoverage"); + conn.close(); + f = FileUtils.open(fileName, "rw"); + f.truncate(16); + // create a new database + conn = getConnection("pageStoreCoverage"); + conn.close(); + deleteDb("pageStoreCoverage"); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestPattern.java 
b/modules/h2/src/test/java/org/h2/test/unit/TestPattern.java new file mode 100644 index 0000000000000..cb74d75895bfb --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestPattern.java @@ -0,0 +1,124 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.text.Collator; +import org.h2.expression.CompareLike; +import org.h2.test.TestBase; +import org.h2.value.CompareMode; + +/** + * Tests LIKE pattern matching. + */ +public class TestPattern extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testCompareModeReuse(); + testPattern(); + } + + private void testCompareModeReuse() { + CompareMode mode1, mode2; + mode1 = CompareMode.getInstance(null, 0); + mode2 = CompareMode.getInstance(null, 0); + assertTrue(mode1 == mode2); + + mode1 = CompareMode.getInstance("DE", Collator.SECONDARY); + assertFalse(mode1 == mode2); + mode2 = CompareMode.getInstance("DE", Collator.SECONDARY); + assertTrue(mode1 == mode2); + } + + private void testPattern() { + CompareMode mode = CompareMode.getInstance(null, 0); + CompareLike comp = new CompareLike(mode, "\\", null, null, null, false); + test(comp, "B", "%_"); + test(comp, "A", "A%"); + test(comp, "A", "A%%"); + test(comp, "A_A", "%\\_%"); + + for (int i = 0; i < 10000; i++) { + String pattern = getRandomPattern(); + String value = getRandomValue(); + test(comp, value, pattern); + } + } + + private void test(CompareLike comp, String value, String pattern) { + String regexp = initPatternRegexp(pattern, '\\'); + boolean resultRegexp = value.matches(regexp); + boolean result = comp.test(pattern, value, '\\'); + if (result != resultRegexp) { + fail("Error: >" + value + "< LIKE >" + pattern + "< 
result=" + + result + " resultReg=" + resultRegexp); + } + } + + private static String getRandomValue() { + StringBuilder buff = new StringBuilder(); + int len = (int) (Math.random() * 10); + String s = "AB_%\\"; + for (int i = 0; i < len; i++) { + buff.append(s.charAt((int) (Math.random() * s.length()))); + } + return buff.toString(); + } + + private static String getRandomPattern() { + StringBuilder buff = new StringBuilder(); + int len = (int) (Math.random() * 4); + String s = "A%_\\"; + for (int i = 0; i < len; i++) { + char c = s.charAt((int) (Math.random() * s.length())); + if ((c == '_' || c == '%') && Math.random() > 0.5) { + buff.append('\\'); + } else if (c == '\\') { + buff.append(c); + } + buff.append(c); + } + return buff.toString(); + } + + private String initPatternRegexp(String pattern, char escape) { + int len = pattern.length(); + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < len; i++) { + char c = pattern.charAt(i); + if (escape == c) { + if (i >= len) { + fail("escape can't be last char"); + } + c = pattern.charAt(++i); + buff.append('\\'); + buff.append(c); + } else if (c == '%') { + buff.append(".*"); + } else if (c == '_') { + buff.append('.'); + } else if (c == '\\') { + buff.append("\\\\"); + } else { + buff.append(c); + } + // TODO regexp: there are other chars that need escaping + } + String regexp = buff.toString(); + // System.out.println("regexp = " + regexp); + return regexp; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestPerfectHash.java b/modules/h2/src/test/java/org/h2/test/unit/TestPerfectHash.java new file mode 100644 index 0000000000000..9f30b48c2f25d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestPerfectHash.java @@ -0,0 +1,329 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.File; +import java.io.IOException; +import java.io.RandomAccessFile; +import java.util.BitSet; +import java.util.HashSet; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.h2.dev.hash.MinimalPerfectHash; +import org.h2.dev.hash.MinimalPerfectHash.LongHash; +import org.h2.dev.hash.MinimalPerfectHash.StringHash; +import org.h2.dev.hash.MinimalPerfectHash.UniversalHash; +import org.h2.dev.hash.PerfectHash; +import org.h2.test.TestBase; + +/** + * Tests the perfect hash tool. + */ +public class TestPerfectHash extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestPerfectHash test = (TestPerfectHash) TestBase.createCaller().init(); + test.measure(); + largeFile(); + test.test(); + test.measure(); + } + + private static void largeFile() throws IOException { + largeFile("sequence.txt"); + for (int i = 1; i <= 4; i++) { + largeFile("unique" + i + ".txt"); + } + largeFile("enwiki-20140811-all-titles.txt"); + } + + private static void largeFile(String s) throws IOException { + String fileName = System.getProperty("user.home") + "/temp/" + s; + if (!new File(fileName).exists()) { + System.out.println("not found: " + fileName); + return; + } + RandomAccessFile f = new RandomAccessFile(fileName, "r"); + byte[] data = new byte[(int) f.length()]; + f.readFully(data); + UniversalHash<Text> hf = new UniversalHash<Text>() { + + @Override + public int hashCode(Text o, int index, int seed) { + return o.hashCode(index, seed); + } + + }; + f.close(); + HashSet<Text> set = new HashSet<>(); + Text t = new Text(data, 0); + while (true) { + set.add(t); + int end = t.getEnd(); + if (end >= data.length - 1) { + break; + } + t = new Text(data, end + 1); + if (set.size() % 1000000 == 0) { + System.out.println("size: " + set.size()); + } + } + System.out.println("file: " + s); + 
System.out.println("size: " + set.size()); + long time = System.nanoTime(); + byte[] desc = MinimalPerfectHash.generate(set, hf); + time = System.nanoTime() - time; + System.out.println("millis: " + TimeUnit.NANOSECONDS.toMillis(time)); + System.out.println("len: " + desc.length); + int bits = desc.length * 8; + System.out.println(((double) bits / set.size()) + " bits/key"); + } + + /** + * Measure the hash functions. + */ + public void measure() { + int size = 1000000; + testMinimal(size / 10); + int s; + long time = System.nanoTime(); + s = testMinimal(size); + time = System.nanoTime() - time; + System.out.println((double) s / size + " bits/key (minimal) in " + + TimeUnit.NANOSECONDS.toMillis(time) + " ms"); + + time = System.nanoTime(); + s = testMinimalWithString(size); + time = System.nanoTime() - time; + System.out.println((double) s / size + + " bits/key (minimal; String keys) in " + + TimeUnit.NANOSECONDS.toMillis(time) + " ms"); + + time = System.nanoTime(); + s = test(size, true); + time = System.nanoTime() - time; + System.out.println((double) s / size + " bits/key (minimal old) in " + + TimeUnit.NANOSECONDS.toMillis(time) + " ms"); + time = System.nanoTime(); + s = test(size, false); + time = System.nanoTime() - time; + System.out.println((double) s / size + " bits/key (not minimal) in " + + TimeUnit.NANOSECONDS.toMillis(time) + " ms"); + } + + @Override + public void test() { + testBrokenHashFunction(); + for (int i = 0; i < 100; i++) { + testMinimal(i); + } + for (int i = 100; i <= 100000; i *= 10) { + testMinimal(i); + } + for (int i = 0; i < 100; i++) { + test(i, true); + test(i, false); + } + for (int i = 100; i <= 100000; i *= 10) { + test(i, true); + test(i, false); + } + } + + private void testBrokenHashFunction() { + int size = 10000; + Random r = new Random(10000); + HashSet<String> set = new HashSet<>(size); + while (set.size() < size) { + set.add("x " + r.nextDouble()); + } + for (int test = 1; test < 10; test++) { + final int badUntilLevel = test; 
+ UniversalHash<String> badHash = new UniversalHash<String>() { + + @Override + public int hashCode(String o, int index, int seed) { + if (index < badUntilLevel) { + return 0; + } + return StringHash.getFastHash(o, index, seed); + } + + }; + byte[] desc = MinimalPerfectHash.generate(set, badHash); + testMinimal(desc, set, badHash); + } + } + + private int test(int size, boolean minimal) { + Random r = new Random(size); + HashSet<Integer> set = new HashSet<>(); + while (set.size() < size) { + set.add(r.nextInt()); + } + byte[] desc = PerfectHash.generate(set, minimal); + int max = test(desc, set); + if (minimal) { + assertEquals(size - 1, max); + } else { + if (size > 10) { + assertTrue(max < 1.5 * size); + } + } + return desc.length * 8; + } + + private int test(byte[] desc, Set<Integer> set) { + int max = -1; + HashSet<Integer> test = new HashSet<>(); + PerfectHash hash = new PerfectHash(desc); + for (int x : set) { + int h = hash.get(x); + assertTrue(h >= 0); + assertTrue(h <= set.size() * 3); + max = Math.max(max, h); + assertFalse(test.contains(h)); + test.add(h); + } + return max; + } + + private int testMinimal(int size) { + Random r = new Random(size); + HashSet<Long> set = new HashSet<>(size); + while (set.size() < size) { + set.add((long) r.nextInt()); + } + LongHash hf = new LongHash(); + byte[] desc = MinimalPerfectHash.generate(set, hf); + int max = testMinimal(desc, set, hf); + assertEquals(size - 1, max); + return desc.length * 8; + } + + private int testMinimalWithString(int size) { + Random r = new Random(size); + HashSet<String> set = new HashSet<>(size); + while (set.size() < size) { + set.add("x " + r.nextDouble()); + } + StringHash hf = new StringHash(); + byte[] desc = MinimalPerfectHash.generate(set, hf); + int max = testMinimal(desc, set, hf); + assertEquals(size - 1, max); + return desc.length * 8; + } + + private <K> int testMinimal(byte[] desc, Set<K> set, UniversalHash<K> hf) { + int max = -1; + BitSet test = new BitSet(); + MinimalPerfectHash<K> hash = new MinimalPerfectHash<>(desc, hf); + for (K x : set) 
{ + int h = hash.get(x); + assertTrue(h >= 0); + assertTrue(h <= set.size() * 3); + max = Math.max(max, h); + assertFalse(test.get(h)); + test.set(h); + } + return max; + } + + /** + * A text. + */ + static class Text { + + /** + * The byte data (may be shared, so must not be modified). + */ + final byte[] data; + + /** + * The start location. + */ + final int start; + + Text(byte[] data, int start) { + this.data = data; + this.start = start; + } + + /** + * The hash code (using a universal hash function). + * + * @param index the hash function index + * @param seed the random seed + * @return the hash code + */ + public int hashCode(int index, int seed) { + if (index < 8) { + int x = (index * 0x9f3b) ^ seed; + int result = seed; + int p = start; + while (true) { + int c = data[p++] & 255; + if (c == '\n') { + break; + } + x = 31 + x * 0x9f3b; + result ^= x * (1 + c); + } + return result; + } + int end = getEnd(); + return StringHash.getSipHash24(data, start, end, index, seed); + } + + int getEnd() { + int end = start; + while (data[end] != '\n') { + end++; + } + return end; + } + + @Override + public int hashCode() { + return hashCode(0, 0); + } + + @Override + public boolean equals(Object other) { + if (other == this) { + return true; + } else if (!(other instanceof Text)) { + return false; + } + Text o = (Text) other; + int end = getEnd(); + int s2 = o.start; + int e2 = o.getEnd(); + if (e2 - s2 != end - start) { + return false; + } + for (int s1 = start; s1 < end; s1++, s2++) { + if (data[s1] != o.data[s2]) { + return false; + } + } + return true; + } + + @Override + public String toString() { + return new String(data, start, getEnd() - start); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestPgServer.java b/modules/h2/src/test/java/org/h2/test/unit/TestPgServer.java new file mode 100644 index 0000000000000..b90c670a8491d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestPgServer.java @@ -0,0 +1,572 @@ +/* + * Copyright 
2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.math.BigDecimal; +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.Date; +import java.sql.DriverManager; +import java.sql.ParameterMetaData; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; +import java.util.Properties; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.Future; + +import org.h2.api.ErrorCode; +import org.h2.test.TestBase; +import org.h2.tools.Server; + +/** + * Tests the PostgreSQL server protocol compliant implementation. + */ +public class TestPgServer extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + config.multiThreaded = true; + config.memory = true; + config.mvStore = true; + config.mvcc = true; + // testPgAdapter() starts server by itself without a wait so run it first + testPgAdapter(); + testLowerCaseIdentifiers(); + testKeyAlias(); + testCancelQuery(); + testBinaryTypes(); + testDateTime(); + testPrepareWithUnspecifiedType(); + } + + private void testLowerCaseIdentifiers() throws SQLException { + if (!getPgJdbcDriver()) { + return; + } + deleteDb("pgserver"); + Connection conn = getConnection( + "mem:pgserver;DATABASE_TO_UPPER=false", "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, name varchar(255))"); + Server server = createPgServer("-baseDir", getBaseDir(), + "-pgPort", "5535", "-pgDaemon", "-key", "pgserver", + "mem:pgserver"); + try { + Connection conn2; + conn2 = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", "sa", "sa"); + stat = conn2.createStatement(); + stat.execute("select * from test"); + conn2.close(); + } finally { + server.stop(); + } + conn.close(); + deleteDb("pgserver"); + } + + private boolean getPgJdbcDriver() { + try { + Class.forName("org.postgresql.Driver"); + return true; + } catch (ClassNotFoundException e) { + println("PostgreSQL JDBC driver not found - PgServer not tested"); + return false; + } + } + + private Server createPgServer(String... 
args) throws SQLException { + Server server = Server.createPgServer(args); + int failures = 0; + for (;;) { + try { + server.start(); + return server; + } catch (SQLException e) { + // the sleeps are to mitigate "port in use" exceptions on Jenkins + if (e.getErrorCode() != ErrorCode.EXCEPTION_OPENING_PORT_2 || ++failures > 10) { + throw e; + } + println("Sleeping"); + try { + Thread.sleep(100); + } catch (InterruptedException e2) { + throw new RuntimeException(e2); + } + } + } + } + + private void testPgAdapter() throws SQLException { + deleteDb("pgserver"); + Server server = Server.createPgServer( + "-baseDir", getBaseDir(), "-pgPort", "5535", "-pgDaemon"); + assertEquals(5535, server.getPort()); + assertEquals("Not started", server.getStatus()); + server.start(); + assertStartsWith(server.getStatus(), "PG server running at pg://"); + try { + if (getPgJdbcDriver()) { + testPgClient(); + } + } finally { + server.stop(); + } + } + + private void testCancelQuery() throws Exception { + if (!getPgJdbcDriver()) { + return; + } + + Server server = createPgServer( + "-pgPort", "5535", "-pgDaemon", "-key", "pgserver", "mem:pgserver"); + + ExecutorService executor = Executors.newSingleThreadExecutor(); + try { + Connection conn = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", "sa", "sa"); + final Statement stat = conn.createStatement(); + stat.execute("create alias sleep for \"java.lang.Thread.sleep\""); + + // create a table with 200 rows (cancel interval is 127) + stat.execute("create table test(id int)"); + for (int i = 0; i < 200; i++) { + stat.execute("insert into test (id) values (rand())"); + } + + Future<Boolean> future = executor.submit(new Callable<Boolean>() { + @Override + public Boolean call() throws SQLException { + return stat.execute("select id, sleep(5) from test"); + } + }); + + // give it a little time to start and then cancel it + Thread.sleep(100); + stat.cancel(); + + try { + future.get(); + throw new IllegalStateException(); + } catch 
(ExecutionException e) { + assertStartsWith(e.getCause().getMessage(), + "ERROR: canceling statement due to user request"); + } finally { + conn.close(); + } + } finally { + server.stop(); + executor.shutdown(); + } + deleteDb("pgserver"); + } + + private void testPgClient() throws SQLException { + Connection conn = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", "sa", "sa"); + Statement stat = conn.createStatement(); + assertThrows(SQLException.class, stat). + execute("select ***"); + stat.execute("create user test password 'test'"); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("create index idx_test_name on test(name, id)"); + stat.execute("grant all on test to test"); + stat.close(); + conn.close(); + + conn = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", "test", "test"); + stat = conn.createStatement(); + ResultSet rs; + + stat.execute("prepare test(int, int) as select ?1*?2"); + rs = stat.executeQuery("execute test(3, 2)"); + rs.next(); + assertEquals(6, rs.getInt(1)); + stat.execute("deallocate test"); + + PreparedStatement prep; + prep = conn.prepareStatement("select * from test where name = ?"); + prep.setNull(1, Types.VARCHAR); + rs = prep.executeQuery(); + assertFalse(rs.next()); + + prep = conn.prepareStatement("insert into test values(?, ?)"); + ParameterMetaData meta = prep.getParameterMetaData(); + assertEquals(2, meta.getParameterCount()); + prep.setInt(1, 1); + prep.setString(2, "Hello"); + prep.execute(); + rs = stat.executeQuery("select * from test"); + rs.next(); + + ResultSetMetaData rsMeta = rs.getMetaData(); + assertEquals(Types.INTEGER, rsMeta.getColumnType(1)); + assertEquals(Types.VARCHAR, rsMeta.getColumnType(2)); + + prep.close(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + prep = conn.prepareStatement( + "select * from test " + + "where id = ? 
and name = ?"); + prep.setInt(1, 1); + prep.setString(2, "Hello"); + rs = prep.executeQuery(); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + rs.close(); + DatabaseMetaData dbMeta = conn.getMetaData(); + rs = dbMeta.getTables(null, null, "TEST", null); + rs.next(); + assertEquals("TEST", rs.getString("TABLE_NAME")); + assertFalse(rs.next()); + rs = dbMeta.getColumns(null, null, "TEST", null); + rs.next(); + assertEquals("ID", rs.getString("COLUMN_NAME")); + rs.next(); + assertEquals("NAME", rs.getString("COLUMN_NAME")); + assertFalse(rs.next()); + rs = dbMeta.getIndexInfo(null, null, "TEST", false, false); + // index info is currently disabled + // rs.next(); + // assertEquals("TEST", rs.getString("TABLE_NAME")); + // rs.next(); + // assertEquals("TEST", rs.getString("TABLE_NAME")); + assertFalse(rs.next()); + rs = stat.executeQuery( + "select version(), pg_postmaster_start_time(), current_schema()"); + rs.next(); + String s = rs.getString(1); + assertContains(s, "H2"); + assertContains(s, "PostgreSQL"); + s = rs.getString(2); + s = rs.getString(3); + assertEquals(s, "PUBLIC"); + assertFalse(rs.next()); + + conn.setAutoCommit(false); + stat.execute("delete from test"); + conn.rollback(); + stat.execute("update test set name = 'Hallo'"); + conn.commit(); + rs = stat.executeQuery("select * from test order by id"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hallo", rs.getString(2)); + assertFalse(rs.next()); + + rs = stat.executeQuery("select id, name, pg_get_userbyid(id) " + + "from information_schema.users order by id"); + rs.next(); + assertEquals(rs.getString(2), rs.getString(3)); + assertFalse(rs.next()); + rs.close(); + + rs = stat.executeQuery("select currTid2('x', 1)"); + rs.next(); + assertEquals(1, rs.getInt(1)); + + rs = stat.executeQuery("select has_table_privilege('TEST', 'READ')"); + rs.next(); + assertTrue(rs.getBoolean(1)); + + rs = stat.executeQuery("select 
has_database_privilege(1, 'READ')"); + rs.next(); + assertTrue(rs.getBoolean(1)); + + + rs = stat.executeQuery("select pg_get_userbyid(-1)"); + rs.next(); + assertEquals(null, rs.getString(1)); + + rs = stat.executeQuery("select pg_encoding_to_char(0)"); + rs.next(); + assertEquals("SQL_ASCII", rs.getString(1)); + + rs = stat.executeQuery("select pg_encoding_to_char(6)"); + rs.next(); + assertEquals("UTF8", rs.getString(1)); + + rs = stat.executeQuery("select pg_encoding_to_char(8)"); + rs.next(); + assertEquals("LATIN1", rs.getString(1)); + + rs = stat.executeQuery("select pg_encoding_to_char(20)"); + rs.next(); + assertEquals("UTF8", rs.getString(1)); + + rs = stat.executeQuery("select pg_encoding_to_char(40)"); + rs.next(); + assertEquals("", rs.getString(1)); + + rs = stat.executeQuery("select pg_get_oid('\"WRONG\"')"); + rs.next(); + assertEquals(0, rs.getInt(1)); + + rs = stat.executeQuery("select pg_get_oid('TEST')"); + rs.next(); + assertTrue(rs.getInt(1) > 0); + + rs = stat.executeQuery("select pg_get_indexdef(0, 0, false)"); + rs.next(); + assertEquals("", rs.getString(1)); + + rs = stat.executeQuery("select id from information_schema.indexes " + + "where index_name='IDX_TEST_NAME'"); + rs.next(); + int indexId = rs.getInt(1); + + rs = stat.executeQuery("select pg_get_indexdef("+indexId+", 0, false)"); + rs.next(); + assertEquals( + "CREATE INDEX PUBLIC.IDX_TEST_NAME ON PUBLIC.TEST(NAME, ID)", + rs.getString(1)); + rs = stat.executeQuery("select pg_get_indexdef("+indexId+", null, false)"); + rs.next(); + assertEquals( + "CREATE INDEX PUBLIC.IDX_TEST_NAME ON PUBLIC.TEST(NAME, ID)", + rs.getString(1)); + rs = stat.executeQuery("select pg_get_indexdef("+indexId+", 1, false)"); + rs.next(); + assertEquals("NAME", rs.getString(1)); + rs = stat.executeQuery("select pg_get_indexdef("+indexId+", 2, false)"); + rs.next(); + assertEquals("ID", rs.getString(1)); + + conn.close(); + } + + private void testKeyAlias() throws SQLException { + if (!getPgJdbcDriver()) { + 
return; + } + Server server = createPgServer( + "-pgPort", "5535", "-pgDaemon", "-key", "pgserver", "mem:pgserver"); + try { + Connection conn = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", "sa", "sa"); + Statement stat = conn.createStatement(); + + // confirm that we've got the in memory implementation + // by creating a table and checking flags + stat.execute("create table test(id int primary key, name varchar)"); + ResultSet rs = stat.executeQuery( + "select storage_type from information_schema.tables " + + "where table_name = 'TEST'"); + assertTrue(rs.next()); + assertEquals("MEMORY", rs.getString(1)); + + conn.close(); + } finally { + server.stop(); + } + } + + private void testBinaryTypes() throws SQLException { + if (!getPgJdbcDriver()) { + return; + } + + Server server = createPgServer( + "-pgPort", "5535", "-pgDaemon", "-key", "pgserver", "mem:pgserver"); + try { + Properties props = new Properties(); + props.setProperty("user", "sa"); + props.setProperty("password", "sa"); + // force binary + props.setProperty("prepareThreshold", "-1"); + + Connection conn = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", props); + Statement stat = conn.createStatement(); + + stat.execute( + "create table test(x1 varchar, x2 int, " + + "x3 smallint, x4 bigint, x5 double, x6 float, " + + "x7 real, x8 boolean, x9 char, x10 bytea, " + + "x11 date, x12 time, x13 timestamp, x14 numeric)"); + + PreparedStatement ps = conn.prepareStatement( + "insert into test values (?,?,?,?,?,?,?,?,?,?,?,?,?,?)"); + ps.setString(1, "test"); + ps.setInt(2, 12345678); + ps.setShort(3, (short) 12345); + ps.setLong(4, 1234567890123L); + ps.setDouble(5, 123.456); + ps.setFloat(6, 123.456f); + ps.setFloat(7, 123.456f); + ps.setBoolean(8, true); + ps.setByte(9, (byte) 0xfe); + ps.setBytes(10, new byte[] { 'a', (byte) 0xfe, '\127' }); + ps.setDate(11, Date.valueOf("2015-01-31")); + ps.setTime(12, Time.valueOf("20:11:15")); + 
ps.setTimestamp(13, Timestamp.valueOf("2001-10-30 14:16:10.111")); + ps.setBigDecimal(14, new BigDecimal("12345678901234567890.12345")); + ps.execute(); + for (int i = 1; i <= 14; i++) { + ps.setNull(i, Types.NULL); + } + ps.execute(); + + ResultSet rs = stat.executeQuery("select * from test"); + assertTrue(rs.next()); + assertEquals("test", rs.getString(1)); + assertEquals(12345678, rs.getInt(2)); + assertEquals((short) 12345, rs.getShort(3)); + assertEquals(1234567890123L, rs.getLong(4)); + assertEquals(123.456, rs.getDouble(5)); + assertEquals(123.456f, rs.getFloat(6)); + assertEquals(123.456f, rs.getFloat(7)); + assertEquals(true, rs.getBoolean(8)); + assertEquals((byte) 0xfe, rs.getByte(9)); + assertEquals(new byte[] { 'a', (byte) 0xfe, '\127' }, + rs.getBytes(10)); + assertEquals(Date.valueOf("2015-01-31"), rs.getDate(11)); + assertEquals(Time.valueOf("20:11:15"), rs.getTime(12)); + assertEquals(Timestamp.valueOf("2001-10-30 14:16:10.111"), rs.getTimestamp(13)); + assertEquals(new BigDecimal("12345678901234567890.12345"), rs.getBigDecimal(14)); + assertTrue(rs.next()); + for (int i = 1; i <= 14; i++) { + assertNull(rs.getObject(i)); + } + assertFalse(rs.next()); + + conn.close(); + } finally { + server.stop(); + } + } + + private void testDateTime() throws SQLException { + if (!getPgJdbcDriver()) { + return; + } + + Server server = createPgServer( + "-pgPort", "5535", "-pgDaemon", "-key", "pgserver", "mem:pgserver"); + try { + Properties props = new Properties(); + props.setProperty("user", "sa"); + props.setProperty("password", "sa"); + // force binary + props.setProperty("prepareThreshold", "-1"); + + Connection conn = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", props); + Statement stat = conn.createStatement(); + + stat.execute( + "create table test(x1 date, x2 time, x3 timestamp)"); + + Date[] dates = { null, Date.valueOf("2017-02-20"), + Date.valueOf("1970-01-01"), Date.valueOf("1969-12-31"), + Date.valueOf("1940-01-10"), 
Date.valueOf("1950-11-10"), + Date.valueOf("1500-01-01")}; + Time[] times = { null, Time.valueOf("14:15:16"), + Time.valueOf("00:00:00"), Time.valueOf("23:59:59"), + Time.valueOf("00:10:59"), Time.valueOf("08:30:42"), + Time.valueOf("10:00:00")}; + Timestamp[] timestamps = { null, Timestamp.valueOf("2017-02-20 14:15:16.763"), + Timestamp.valueOf("1970-01-01 00:00:00"), Timestamp.valueOf("1969-12-31 23:59:59"), + Timestamp.valueOf("1940-01-10 00:10:59"), Timestamp.valueOf("1950-11-10 08:30:42.12"), + Timestamp.valueOf("1500-01-01 10:00:10")}; + int count = dates.length; + + PreparedStatement ps = conn.prepareStatement( + "insert into test values (?,?,?)"); + for (int i = 0; i < count; i++) { + ps.setDate(1, dates[i]); + ps.setTime(2, times[i]); + ps.setTimestamp(3, timestamps[i]); + ps.execute(); + } + + ResultSet rs = stat.executeQuery("select * from test"); + for (int i = 0; i < count; i++) { + assertTrue(rs.next()); + assertEquals(dates[i], rs.getDate(1)); + assertEquals(times[i], rs.getTime(2)); + assertEquals(timestamps[i], rs.getTimestamp(3)); + } + assertFalse(rs.next()); + + conn.close(); + } finally { + server.stop(); + } + } + + private void testPrepareWithUnspecifiedType() throws Exception { + if (!getPgJdbcDriver()) { + return; + } + + Server server = createPgServer( + "-pgPort", "5535", "-pgDaemon", "-key", "pgserver", "mem:pgserver"); + try { + Properties props = new Properties(); + + props.setProperty("user", "sa"); + props.setProperty("password", "sa"); + // force server side prepare + props.setProperty("prepareThreshold", "1"); + + Connection conn = DriverManager.getConnection( + "jdbc:postgresql://localhost:5535/pgserver", props); + + Statement stmt = conn.createStatement(); + stmt.executeUpdate("create table t1 (id integer, value timestamp)"); + stmt.close(); + + PreparedStatement pstmt = conn.prepareStatement("insert into t1 values(100500, ?)"); + // assertTrue(((PGStatement) pstmt).isUseServerPrepare()); + assertEquals(Types.TIMESTAMP, 
pstmt.getParameterMetaData().getParameterType(1)); + + Timestamp t = new Timestamp(System.currentTimeMillis()); + pstmt.setObject(1, t); + assertEquals(1, pstmt.executeUpdate()); + pstmt.close(); + + pstmt = conn.prepareStatement("SELECT * FROM t1 WHERE value = ?"); + assertEquals(Types.TIMESTAMP, pstmt.getParameterMetaData().getParameterType(1)); + + pstmt.setObject(1, t); + ResultSet rs = pstmt.executeQuery(); + assertTrue(rs.next()); + assertEquals(100500, rs.getInt(1)); + rs.close(); + pstmt.close(); + + conn.close(); + } finally { + server.stop(); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestReader.java b/modules/h2/src/test/java/org/h2/test/unit/TestReader.java new file mode 100644 index 0000000000000..bf94a71253095 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestReader.java @@ -0,0 +1,43 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.io.Reader; +import java.io.StringReader; + +import org.h2.dev.util.ReaderInputStream; +import org.h2.test.TestBase; +import org.h2.util.IOUtils; + +/** + * Tests the stream to UTF-8 reader conversion. + */ +public class TestReader extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + String s = "\u00ef\u00f6\u00fc"; + StringReader r = new StringReader(s); + InputStream in = new ReaderInputStream(r); + byte[] buff = IOUtils.readBytesAndClose(in, 0); + InputStream in2 = new ByteArrayInputStream(buff); + Reader r2 = IOUtils.getBufferedReader(in2); + String s2 = IOUtils.readStringAndClose(r2, Integer.MAX_VALUE); + assertEquals(s, s2); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestRecovery.java b/modules/h2/src/test/java/org/h2/test/unit/TestRecovery.java new file mode 100644 index 0000000000000..3cf633ec72ea9 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestRecovery.java @@ -0,0 +1,321 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayOutputStream; +import java.io.InputStreamReader; +import java.io.PrintStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import org.h2.engine.Constants; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Recover; +import org.h2.util.IOUtils; + +/** + * Tests database recovery. + */ +public class TestRecovery extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.memory) { + return; + } + if (!config.mvStore) { + testRecoverTestMode(); + } + testRecoverClob(); + testRecoverFulltext(); + testRedoTransactions(); + testCorrupt(); + testWithTransactionLog(); + testCompressedAndUncompressed(); + testRunScript(); + } + + private void testRecoverTestMode() throws Exception { + String recoverTestLog = getBaseDir() + "/recovery.h2.db.log"; + FileUtils.delete(recoverTestLog); + deleteDb("recovery"); + Connection conn = getConnection("recovery;RECOVER_TEST=1"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, name varchar)"); + stat.execute("drop all objects delete files"); + conn.close(); + assertTrue(FileUtils.exists(recoverTestLog)); + } + + private void testRecoverClob() throws Exception { + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, data clob)"); + stat.execute("insert into test values(1, space(100000))"); + conn.close(); + Recover.main("-dir", getBaseDir(), "-db", "recovery"); + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + conn = getConnection( + "recovery;init=runscript from '" + + getBaseDir() + "/recovery.h2.sql'"); + stat = conn.createStatement(); + stat.execute("select * from test"); + conn.close(); + } + + private void testRecoverFulltext() throws Exception { + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("CREATE ALIAS IF NOT EXISTS FTL_INIT " + + "FOR \"org.h2.fulltext.FullTextLucene.init\""); + stat.execute("CALL FTL_INIT()"); + stat.execute("create table test(id int primary key, name varchar) as " + + "select 1, 'Hello'"); + stat.execute("CALL FTL_CREATE_INDEX('PUBLIC', 'TEST', 
'NAME')"); + conn.close(); + Recover.main("-dir", getBaseDir(), "-db", "recovery"); + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + conn = getConnection( + "recovery;init=runscript from '" + + getBaseDir() + "/recovery.h2.sql'"); + conn.close(); + } + + private void testRedoTransactions() throws Exception { + if (config.mvStore) { + // not needed for MV_STORE=TRUE + return; + } + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("set write_delay 0"); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test select x, 'Hello' from system_range(1, 5)"); + stat.execute("create table test2(id int primary key)"); + stat.execute("drop table test2"); + stat.execute("update test set name = 'Hallo' where id < 3"); + stat.execute("delete from test where id = 1"); + stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (Exception e) { + // ignore + } + Recover.main("-dir", getBaseDir(), "-db", "recovery", "-transactionLog"); + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + conn = getConnection("recovery;init=runscript from '" + + getBaseDir() + "/recovery.h2.sql'"); + stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from test order by id"); + assertTrue(rs.next()); + assertEquals(2, rs.getInt(1)); + assertEquals("Hallo", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(3, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(4, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertTrue(rs.next()); + assertEquals(5, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + assertFalse(rs.next()); + conn.close(); + } + + private void testCorrupt() throws Exception { + if (config.mvStore) { + // not needed for MV_STORE=TRUE + return; + } + DeleteDbFiles.execute(getBaseDir(), "recovery", 
true); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int, name varchar) as " + + "select 1, 'Hello World1'"); + conn.close(); + FileChannel f = FileUtils.open(getBaseDir() + "/recovery.h2.db", "rw"); + byte[] buff = new byte[Constants.DEFAULT_PAGE_SIZE]; + while (f.position() < f.size()) { + FileUtils.readFully(f, ByteBuffer.wrap(buff)); + if (new String(buff).contains("Hello World1")) { + buff[buff.length - 1]++; + f.position(f.position() - buff.length); + f.write(ByteBuffer.wrap(buff)); + } + } + f.close(); + Recover.main("-dir", getBaseDir(), "-db", "recovery"); + String script = IOUtils.readStringAndClose( + new InputStreamReader( + FileUtils.newInputStream(getBaseDir() + "/recovery.h2.sql")), -1); + assertContains(script, "checksum mismatch"); + assertContains(script, "dump:"); + assertContains(script, "Hello World2"); + } + + private void testWithTransactionLog() throws SQLException { + if (config.mvStore) { + // not needed for MV_STORE=TRUE + return; + } + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("create table truncate(id int primary key) as " + + "select x from system_range(1, 1000)"); + stat.execute("create table test(id int primary key, data int, text varchar)"); + stat.execute("create index on test(data, id)"); + stat.execute("insert into test direct select x, 0, null " + + "from system_range(1, 1000)"); + stat.execute("insert into test values(-1, -1, space(10000))"); + stat.execute("checkpoint"); + stat.execute("delete from test where id = -1"); + stat.execute("truncate table truncate"); + conn.setAutoCommit(false); + long base = 0; + while (true) { + ResultSet rs = stat.executeQuery( + "select value from information_schema.settings " + + "where name = 'info.FILE_WRITE'"); + rs.next(); + long count = rs.getLong(1); + if (base == 0) { + base = count; + } 
else if (count > base + 10) { + break; + } + stat.execute("update test set data=0"); + stat.execute("update test set text=space(10000) where id = 0"); + stat.execute("update test set data=1, text = null"); + conn.commit(); + } + stat.execute("shutdown immediately"); + try { + conn.close(); + } catch (Exception e) { + // expected + } + Recover.main("-dir", getBaseDir(), "-db", "recovery"); + conn = getConnection("recovery"); + conn.close(); + Recover.main("-dir", getBaseDir(), "-db", "recovery", "-removePassword"); + conn = getConnection("recovery", getUser(), ""); + conn.close(); + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + } + + private void testCompressedAndUncompressed() throws SQLException { + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + DeleteDbFiles.execute(getBaseDir(), "recovery2", true); + org.h2.Driver.load(); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, data clob)"); + stat.execute("insert into test values(1, space(10000))"); + stat.execute("set compress_lob lzf"); + stat.execute("insert into test values(2, space(10000))"); + conn.close(); + Recover rec = new Recover(); + rec.runTool("-dir", getBaseDir(), "-db", "recovery"); + Connection conn2 = getConnection("recovery2"); + Statement stat2 = conn2.createStatement(); + String name = "recovery.h2.sql"; + stat2.execute("runscript from '" + getBaseDir() + "/" + name + "'"); + stat2.execute("select * from test"); + conn2.close(); + + conn = getConnection("recovery"); + stat = conn.createStatement(); + conn2 = getConnection("recovery2"); + stat2 = conn2.createStatement(); + + assertEqualDatabases(stat, stat2); + conn.close(); + conn2.close(); + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + DeleteDbFiles.execute(getBaseDir(), "recovery2", true); + } + + private void testRunScript() throws SQLException { + DeleteDbFiles.execute(getBaseDir(), "recovery", true); + 
DeleteDbFiles.execute(getBaseDir(), "recovery2", true); + org.h2.Driver.load(); + Connection conn = getConnection("recovery"); + Statement stat = conn.createStatement(); + stat.execute("create table \"Joe\"\"s Table\" as " + + "select 1"); + stat.execute("create table test as " + + "select * from system_range(1, 100)"); + stat.execute("create view \"TEST VIEW OF TABLE TEST\" as " + + "select * from test"); + stat.execute("create table a(id int primary key) as " + + "select * from system_range(1, 100)"); + stat.execute("create table b(id int references a(id)) as " + + "select * from system_range(1, 100)"); + stat.execute("create table lob(c clob, b blob) as " + + "select space(10000) || 'end', SECURE_RAND(10000)"); + stat.execute("create table d(d varchar) as " + + "select space(10000) || 'end'"); + stat.execute("alter table a add foreign key(id) references b(id)"); + // all rows have the same value - so that SCRIPT can't re-order the rows + stat.execute("create table e(id varchar) as " + + "select space(10) from system_range(1, 1000)"); + stat.execute("create index idx_e_id on e(id)"); + conn.close(); + + Recover rec = new Recover(); + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + rec.setOut(new PrintStream(buff)); + rec.runTool("-dir", getBaseDir(), "-db", "recovery", "-trace"); + String out = new String(buff.toByteArray()); + assertContains(out, "Created file"); + + Connection conn2 = getConnection("recovery2"); + Statement stat2 = conn2.createStatement(); + String name = "recovery.h2.sql"; + + stat2.execute("runscript from '" + getBaseDir() + "/" + name + "'"); + stat2.execute("select * from test"); + conn2.close(); + + conn = getConnection("recovery"); + stat = conn.createStatement(); + conn2 = getConnection("recovery2"); + stat2 = conn2.createStatement(); + + assertEqualDatabases(stat, stat2); + conn.close(); + conn2.close(); + + Recover.execute(getBaseDir(), "recovery"); + + deleteDb("recovery"); + deleteDb("recovery2"); + 
FileUtils.delete(getBaseDir() + "/recovery.h2.sql"); + String dir = getBaseDir() + "/recovery.lobs.db"; + FileUtils.deleteRecursive(dir, false); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestReopen.java b/modules/h2/src/test/java/org/h2/test/unit/TestReopen.java new file mode 100644 index 0000000000000..a718f3e0c6b38 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestReopen.java @@ -0,0 +1,201 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.SQLException; +import java.util.HashSet; +import java.util.Properties; +import java.util.concurrent.TimeUnit; + +import org.h2.api.ErrorCode; +import org.h2.engine.ConnectionInfo; +import org.h2.engine.Constants; +import org.h2.engine.Database; +import org.h2.engine.Session; +import org.h2.message.DbException; +import org.h2.store.fs.FilePathRec; +import org.h2.store.fs.FileUtils; +import org.h2.store.fs.Recorder; +import org.h2.test.TestBase; +import org.h2.tools.Recover; +import org.h2.util.IOUtils; +import org.h2.util.Profiler; +import org.h2.util.Utils; + +/** + * A test that calls another test, and after each write operation to the + * database file, it copies the file, and tries to reopen it. + */ +public class TestReopen extends TestBase implements Recorder { + + // TODO this is largely a copy of org.h2.util.RecoverTester + + private String testDatabase = "memFS:" + TestBase.BASE_TEST_DIR + "/reopen"; + private int writeCount = Utils.getProperty("h2.reopenOffset", 0); + private final int testEvery = 1 << Utils.getProperty("h2.reopenShift", 6); + private final long maxFileSize = Utils.getProperty("h2.reopenMaxFileSize", + Integer.MAX_VALUE) * 1024L * 1024; + private int verifyCount; + private final HashSet<String> knownErrors = new HashSet<>(); + private volatile boolean testing; + + /** + * Run just this test.
+ * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + System.setProperty("h2.delayWrongPasswordMin", "0"); + FilePathRec.register(); + FilePathRec.setRecorder(this); + config.reopen = true; + + long time = System.nanoTime(); + Profiler p = new Profiler(); + p.startCollecting(); + new TestPageStoreCoverage().init(config).test(); + System.out.println(p.getTop(3)); + System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - time)); + System.out.println("counter: " + writeCount); + } + + @Override + public void log(int op, String fileName, byte[] data, long x) { + if (op != Recorder.WRITE && op != Recorder.TRUNCATE) { + return; + } + if (!fileName.endsWith(Constants.SUFFIX_PAGE_FILE) && + !fileName.endsWith(Constants.SUFFIX_MV_FILE)) { + return; + } + if (testing) { + // avoid deadlocks + return; + } + testing = true; + try { + logDb(fileName); + } finally { + testing = false; + } + } + + private synchronized void logDb(String fileName) { + writeCount++; + if ((writeCount & (testEvery - 1)) != 0) { + return; + } + if (FileUtils.size(fileName) > maxFileSize) { + // System.out.println(fileName + " " + IOUtils.length(fileName)); + return; + } + System.out.println("+ write #" + writeCount + " verify #" + verifyCount); + + try { + if (fileName.endsWith(Constants.SUFFIX_PAGE_FILE)) { + IOUtils.copyFiles(fileName, testDatabase + + Constants.SUFFIX_PAGE_FILE); + } else { + IOUtils.copyFiles(fileName, testDatabase + + Constants.SUFFIX_MV_FILE); + } + verifyCount++; + // avoid using the Engine class to avoid deadlocks + Properties p = new Properties(); + String userName = getUser(); + p.setProperty("user", userName); + p.setProperty("password", getPassword()); + String url = "jdbc:h2:" + testDatabase + + ";FILE_LOCK=NO;TRACE_LEVEL_FILE=0"; + ConnectionInfo ci = new ConnectionInfo(url, p); + Database database = new Database(ci, null); + 
// close the database + Session session = database.getSystemSession(); + session.prepare("script to '" + testDatabase + ".sql'").query(0); + session.prepare("shutdown immediately").update(); + database.removeSession(null); + // everything OK - return + return; + } catch (DbException e) { + SQLException e2 = DbException.toSQLException(e); + int errorCode = e2.getErrorCode(); + if (errorCode == ErrorCode.WRONG_USER_OR_PASSWORD) { + return; + } else if (errorCode == ErrorCode.FILE_ENCRYPTION_ERROR_1) { + return; + } + e.printStackTrace(System.out); + throw e; + } catch (Exception e) { + // failed + int errorCode = 0; + if (e instanceof SQLException) { + errorCode = ((SQLException) e).getErrorCode(); + } + if (errorCode == ErrorCode.WRONG_USER_OR_PASSWORD) { + return; + } else if (errorCode == ErrorCode.FILE_ENCRYPTION_ERROR_1) { + return; + } + e.printStackTrace(System.out); + } + System.out.println( + "begin ------------------------------ " + writeCount); + try { + Recover.execute(fileName.substring(0, fileName.lastIndexOf('/')), null); + } catch (SQLException e) { + // ignore + } + testDatabase += "X"; + try { + if (fileName.endsWith(Constants.SUFFIX_PAGE_FILE)) { + IOUtils.copyFiles(fileName, testDatabase + + Constants.SUFFIX_PAGE_FILE); + } else { + IOUtils.copyFiles(fileName, testDatabase + + Constants.SUFFIX_MV_FILE); + } + // avoid using the Engine class to avoid deadlocks + Properties p = new Properties(); + String url = "jdbc:h2:" + testDatabase + ";FILE_LOCK=NO"; + ConnectionInfo ci = new ConnectionInfo(url, p); + Database database = new Database(ci, null); + // close the database + database.removeSession(null); + } catch (Exception e) { + int errorCode = 0; + if (e instanceof DbException) { + e = ((DbException) e).getSQLException(); + errorCode = ((SQLException) e).getErrorCode(); + } + if (errorCode == ErrorCode.WRONG_USER_OR_PASSWORD) { + return; + } else if (errorCode == ErrorCode.FILE_ENCRYPTION_ERROR_1) { + return; + } + StringBuilder buff = new 
StringBuilder(); + StackTraceElement[] list = e.getStackTrace(); + for (int i = 0; i < 10 && i < list.length; i++) { + buff.append(list[i].toString()).append('\n'); + } + String s = buff.toString(); + if (!knownErrors.contains(s)) { + System.out.println(writeCount + " code: " + errorCode + " " + + e.toString()); + e.printStackTrace(System.out); + knownErrors.add(s); + } else { + System.out.println(writeCount + " code: " + errorCode); + } + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestSampleApps.java b/modules/h2/src/test/java/org/h2/test/unit/TestSampleApps.java new file mode 100644 index 0000000000000..b8f763aed3479 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestSampleApps.java @@ -0,0 +1,148 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayOutputStream; +import java.io.File; +import java.io.FileOutputStream; +import java.io.InputStream; +import java.io.PrintStream; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.nio.charset.StandardCharsets; + +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.tools.DeleteDbFiles; +import org.h2.util.IOUtils; +import org.h2.util.StringUtils; + +/** + * Tests the sample apps. + */ +public class TestSampleApps extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (!getBaseDir().startsWith(TestBase.BASE_TEST_DIR)) { + return; + } + deleteDb(getTestName()); + InputStream in = getClass().getClassLoader().getResourceAsStream( + "org/h2/samples/optimizations.sql"); + new File(getBaseDir()).mkdirs(); + FileOutputStream out = new FileOutputStream(getBaseDir() + + "/optimizations.sql"); + IOUtils.copyAndClose(in, out); + String url = "jdbc:h2:" + getBaseDir() + "/" + getTestName(); + testApp("", org.h2.tools.RunScript.class, "-url", url, "-user", "sa", + "-password", "sa", "-script", getBaseDir() + + "/optimizations.sql", "-checkResults"); + deleteDb(getTestName()); + testApp("Compacting...\nDone.", org.h2.samples.Compact.class); + testApp("NAME: Bob Meier\n" + + "EMAIL: bob.meier@abcde.abc\n" + + "PHONE: +41123456789\n\n" + + "NAME: John Jones\n" + + "EMAIL: john.jones@abcde.abc\n" + + "PHONE: +41976543210\n", + org.h2.samples.CsvSample.class); + testApp("", + org.h2.samples.CachedPreparedStatements.class); + testApp("2 is prime\n" + + "3 is prime\n" + + "5 is prime\n" + + "7 is prime\n" + + "11 is prime\n" + + "13 is prime\n" + + "17 is prime\n" + + "19 is prime\n" + + "30\n" + + "20\n" + + "0/0\n" + + "0/1\n" + + "1/0\n" + + "1/1\n" + + "10", + org.h2.samples.Function.class); + // Not compatible with PostgreSQL JDBC driver (throws a + // NullPointerException): + // testApp(org.h2.samples.SecurePassword.class, null, "Joe"); + // TODO test ShowProgress (percent numbers are hardware specific) + // TODO test ShutdownServer (server needs to be started in a separate + // process) + testApp("The sum is 20.00", org.h2.samples.TriggerSample.class); + testApp("Hello: 1\nWorld: 2", org.h2.samples.TriggerPassData.class); + testApp("table test:\n" + + "1 Hallo\n\n" + + "test_view:\n" + + "1 Hallo", + org.h2.samples.UpdatableView.class); + testApp( + "adding test data...\n" + + "defrag to reduce random access...\n" + + 
"create the zip file...\n" + + "open the database from the zip file...", + org.h2.samples.ReadOnlyDatabaseInZip.class); + testApp( + "a: 1/Hello!\n" + + "b: 1/Hallo!\n" + + "1/A/Hello!\n" + + "1/B/Hallo!", + org.h2.samples.RowAccessRights.class); + + // tools + testApp("Allows changing the database file encryption password or algorithm*", + org.h2.tools.ChangeFileEncryption.class, "-help"); + testApp("Allows changing the database file encryption password or algorithm*", + org.h2.tools.ChangeFileEncryption.class); + testApp("Deletes all files belonging to a database.*", + org.h2.tools.DeleteDbFiles.class, "-help"); + FileUtils.delete(getBaseDir() + "/optimizations.sql"); + } + + private void testApp(String expected, Class clazz, String... args) + throws Exception { + DeleteDbFiles.execute("data", "test", true); + Method m = clazz.getMethod("main", String[].class); + PrintStream oldOut = System.out, oldErr = System.err; + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + PrintStream out = new PrintStream(buff, false, "UTF-8"); + System.setOut(out); + System.setErr(out); + try { + m.invoke(null, new Object[] { args }); + } catch (InvocationTargetException e) { + TestBase.logError("error", e.getTargetException()); + } catch (Throwable e) { + TestBase.logError("error", e); + } + out.flush(); + System.setOut(oldOut); + System.setErr(oldErr); + String s = new String(buff.toByteArray(), StandardCharsets.UTF_8); + s = StringUtils.replaceAll(s, "\r\n", "\n"); + s = s.trim(); + expected = expected.trim(); + if (expected.endsWith("*")) { + expected = expected.substring(0, expected.length() - 1); + if (!s.startsWith(expected)) { + assertEquals(expected.trim(), s.trim()); + } + } else { + assertEquals(expected.trim(), s.trim()); + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestScriptReader.java b/modules/h2/src/test/java/org/h2/test/unit/TestScriptReader.java new file mode 100644 index 0000000000000..8521c803ff104 --- /dev/null +++ 
b/modules/h2/src/test/java/org/h2/test/unit/TestScriptReader.java @@ -0,0 +1,199 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.StringReader; +import java.util.Random; +import org.h2.test.TestBase; +import org.h2.util.ScriptReader; + +/** + * Tests the script reader tool that breaks up SQL scripts in statements. + */ +public class TestScriptReader extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testCommon(); + testRandom(); + } + + private void testRandom() { + int len = getSize(1000, 10000); + Random random = new Random(10); + for (int i = 0; i < len; i++) { + int l = random.nextInt(10); + String[] sql = new String[l]; + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < l; j++) { + sql[j] = randomStatement(random); + buff.append(sql[j]); + if (j < l - 1) { + buff.append(";"); + } + } + String s = buff.toString(); + StringReader reader = new StringReader(s); + try (ScriptReader source = new ScriptReader(reader)) { + for (int j = 0; j < l; j++) { + String e = source.readStatement(); + String c = sql[j]; + if (c.length() == 0 && j == l - 1) { + c = null; + } + assertEquals(c, e); + } + assertEquals(null, source.readStatement()); + } + } + } + + private static String randomStatement(Random random) { + StringBuilder buff = new StringBuilder(); + int len = random.nextInt(5); + for (int i = 0; i < len; i++) { + switch (random.nextInt(10)) { + case 0: { + int l = random.nextInt(4); + String[] ch = { "\n", "\r", " ", "*", "a", "0", "$ " }; + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + break; + } + case 1: { + buff.append('\''); + int l = random.nextInt(4); + String[] ch = { 
";", "\n", "\r", "--", "//", "/", "-", "*", + "/*", "*/", "\"", "$ " }; + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + buff.append('\''); + break; + } + case 2: { + buff.append('"'); + int l = random.nextInt(4); + String[] ch = { ";", "\n", "\r", "--", "//", "/", "-", "*", + "/*", "*/", "\'", "$" }; + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + buff.append('"'); + break; + } + case 3: { + buff.append('-'); + if (random.nextBoolean()) { + String[] ch = { "\n", "\r", "*", "a", " ", "$ " }; + int l = 1 + random.nextInt(4); + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + } else { + buff.append('-'); + String[] ch = { ";", "-", "//", "/*", "*/", "a", "$" }; + int l = random.nextInt(4); + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + buff.append('\n'); + } + break; + } + case 4: { + buff.append('/'); + if (random.nextBoolean()) { + String[] ch = { "\n", "\r", "a", " ", "- ", "$ " }; + int l = 1 + random.nextInt(4); + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + } else { + buff.append('*'); + String[] ch = { ";", "-", "//", "/* ", "--", "\n", "\r", "a", "$" }; + int l = random.nextInt(4); + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + buff.append("*/"); + } + break; + } + case 5: { + if (buff.length() > 0) { + buff.append(" "); + } + buff.append("$"); + if (random.nextBoolean()) { + String[] ch = { "\n", "\r", "a", " ", "- ", "/ " }; + int l = 1 + random.nextInt(4); + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + } else { + buff.append("$"); + String[] ch = { ";", "-", "//", "/* ", "--", "\n", "\r", "a", "$ " }; + int l = random.nextInt(4); + for (int j = 0; j < l; j++) { + buff.append(ch[random.nextInt(ch.length)]); + } + buff.append("$$"); + } + break; + } + default: + } + } + return buff.toString(); + } + + private 
void testCommon() { + String s; + ScriptReader source; + + s = "$$;$$;"; + source = new ScriptReader(new StringReader(s)); + assertEquals("$$;$$", source.readStatement()); + assertEquals(null, source.readStatement()); + source.close(); + + s = "a;';';\";\";--;\n;/*;\n*/;//;\na;"; + source = new ScriptReader(new StringReader(s)); + assertEquals("a", source.readStatement()); + assertEquals("';'", source.readStatement()); + assertEquals("\";\"", source.readStatement()); + assertEquals("--;\n", source.readStatement()); + assertEquals("/*;\n*/", source.readStatement()); + assertEquals("//;\na", source.readStatement()); + assertEquals(null, source.readStatement()); + source.close(); + + s = "/\n$ \n\n $';$$a$$ $\n;'"; + source = new ScriptReader(new StringReader(s)); + assertEquals("/\n$ \n\n $';$$a$$ $\n;'", source.readStatement()); + assertEquals(null, source.readStatement()); + source.close(); + + // check handling of unclosed block comments + s = "/*xxx"; + source = new ScriptReader(new StringReader(s)); + assertEquals("/*xxx", source.readStatement()); + assertTrue(source.isBlockRemark()); + source.close(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestSecurity.java b/modules/h2/src/test/java/org/h2/test/unit/TestSecurity.java new file mode 100644 index 0000000000000..ff0aa263420cc --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestSecurity.java @@ -0,0 +1,296 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.util.Arrays; + +import org.h2.security.BlockCipher; +import org.h2.security.CipherFactory; +import org.h2.security.SHA256; +import org.h2.test.TestBase; +import org.h2.util.StringUtils; + +/** + * Tests various security primitives. 
+ */ +public class TestSecurity extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + testConnectWithHash(); + testSHA(); + testAES(); + testBlockCiphers(); + testRemoveAnonFromLegacyAlgorithms(); + // testResetLegacyAlgorithms(); + } + + private static void testConnectWithHash() throws SQLException { + Connection conn = DriverManager.getConnection( + "jdbc:h2:mem:test", "sa", "sa"); + String pwd = StringUtils.convertBytesToHex( + SHA256.getKeyPasswordHash("SA", "sa".toCharArray())); + Connection conn2 = DriverManager.getConnection( + "jdbc:h2:mem:test;PASSWORD_HASH=TRUE", "sa", pwd); + conn.close(); + conn2.close(); + } + + private void testSHA() { + testPBKDF2(); + testHMAC(); + testOneSHA(); + } + + private void testPBKDF2() { + // test vectors from StackOverflow (PBKDF2-HMAC-SHA2) + assertEquals( + "120fb6cffcf8b32c43e7225256c4f837a86548c92ccc35480805987cb70be17b", + StringUtils.convertBytesToHex( + SHA256.getPBKDF2( + "password".getBytes(), + "salt".getBytes(), 1, 32))); + assertEquals( + "ae4d0c95af6b46d32d0adff928f06dd02a303f8ef3c251dfd6e2d85a95474c43", + StringUtils.convertBytesToHex( + SHA256.getPBKDF2( + "password".getBytes(), + "salt".getBytes(), 2, 32))); + assertEquals( + "c5e478d59288c841aa530db6845c4c8d962893a001ce4e11a4963873aa98134a", + StringUtils.convertBytesToHex( + SHA256.getPBKDF2( + "password".getBytes(), + "salt".getBytes(), 4096, 32))); + // take a very long time to calculate + // assertEquals( + // "cf81c66fe8cfc04d1f31ecb65dab4089f7f179e" + + // "89b3b0bcb17ad10e3ac6eba46", + // StringUtils.convertBytesToHex( + // SHA256.getPBKDF2( + // "password".getBytes(), + // "salt".getBytes(), 16777216, 32))); + assertEquals( + "348c89dbcbd32b2f32d814b8116e84cf2b17347e" + + "bc1800181c4e2a1fb8dd53e1c635518c7dac47e9", + StringUtils.convertBytesToHex( + 
SHA256.getPBKDF2( + ("password" + "PASSWORD" + "password").getBytes(), + ("salt"+ "SALT"+ "salt"+ "SALT"+ "salt"+ + "SALT"+ "salt"+ "SALT"+ "salt").getBytes(), 4096, 40))); + assertEquals( + "89b69d0516f829893c696226650a8687", + StringUtils.convertBytesToHex( + SHA256.getPBKDF2( + "pass\0word".getBytes(), + "sa\0lt".getBytes(), 4096, 16))); + + // the password is filled with zeroes + byte[] password = "Test".getBytes(); + SHA256.getPBKDF2(password, "".getBytes(), 1, 16); + assertEquals(new byte[4], password); + } + + private void testHMAC() { + // from Wikipedia + assertEquals( + "b613679a0814d9ec772f95d778c35fc5ff1697c493715653c6c712144292c5ad", + StringUtils.convertBytesToHex( + SHA256.getHMAC(new byte[0], new byte[0]))); + assertEquals( + "f7bc83f430538424b13298e6aa6fb143ef4d59a14946175997479dbc2d1a3cd8", + StringUtils.convertBytesToHex( + SHA256.getHMAC( + "key".getBytes(), + "The quick brown fox jumps over the lazy dog".getBytes()))); + } + + private String getHashString(byte[] data) { + byte[] result = SHA256.getHash(data, true); + if (data.length > 0) { + assertEquals(0, data[0]); + } + return StringUtils.convertBytesToHex(result); + } + + private void testOneSHA() { + assertEquals( + "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + getHashString(new byte[] {})); + assertEquals( + "68aa2e2ee5dff96e3355e6c7ee373e3d6a4e17f75f9518d843709c0c9bc3e3d4", + getHashString(new byte[] { 0x19 })); + assertEquals( + "175ee69b02ba9b58e2b0a5fd13819cea573f3940a94f825128cf4209beabb4e8", + getHashString( + new byte[] { (byte) 0xe3, (byte) 0xd7, 0x25, + 0x70, (byte) 0xdc, (byte) 0xdd, 0x78, 0x7c, + (byte) 0xe3, (byte) 0x88, 0x7a, (byte) 0xb2, + (byte) 0xcd, 0x68, 0x46, 0x52 })); + checkSHA256( + "", + "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855"); + checkSHA256( + "a", + "CA978112CA1BBDCAFAC231B39A23DC4DA786EFF8147C4E72B9807785AFEE48BB"); + checkSHA256( + "abc", + "BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD"); + 
checkSHA256( + "message digest", + "F7846F55CF23E14EEBEAB5B4E1550CAD5B509E3348FBC4EFA3A1413D393CB650"); + checkSHA256( + "abcdefghijklmnopqrstuvwxyz", + "71C480DF93D6AE2F1EFAD1447C66C9525E316218CF51FC8D9ED832F2DAF18B73"); + checkSHA256( + "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq", + "248D6A61D20638B8E5C026930C3E6039A33CE45964FF2167F6ECEDD419DB06C1"); + checkSHA256( + "123456789012345678901234567890" + + "12345678901234567890" + + "123456789012345678901234567890", + "F371BC4A311F2B009EEF952DD83CA80E2B60026C8E935592D0F9C308453C813E"); + StringBuilder buff = new StringBuilder(1000000); + buff.append('a'); + checkSHA256(buff.toString(), + "CA978112CA1BBDCAFAC231B39A23DC4DA786EFF8147C4E72B9807785AFEE48BB"); + } + + private void checkSHA256(String message, String expected) { + String hash = StringUtils.convertBytesToHex( + SHA256.getHash(message.getBytes(), true)).toUpperCase(); + assertEquals(expected, hash); + } + + private void testBlockCiphers() { + for (String algorithm : new String[] { "AES", "FOG" }) { + byte[] test = new byte[4096]; + BlockCipher cipher = CipherFactory.getBlockCipher(algorithm); + cipher.setKey("abcdefghijklmnop".getBytes()); + for (int i = 0; i < 10; i++) { + cipher.encrypt(test, 0, test.length); + } + assertFalse(isCompressible(test)); + for (int i = 0; i < 10; i++) { + cipher.decrypt(test, 0, test.length); + } + assertEquals(new byte[test.length], test); + assertTrue(isCompressible(test)); + } + } + + private void testAES() { + BlockCipher test = CipherFactory.getBlockCipher("AES"); + + String r; + byte[] data; + + // test vector from + // http://csrc.nist.gov/groups/STM/cavp/documents/aes/KAT_AES.zip + // ECBVarTxt128e.txt + // COUNT = 0 + test.setKey(StringUtils.convertHexToBytes("00000000000000000000000000000000")); + data = StringUtils.convertHexToBytes("80000000000000000000000000000000"); + test.encrypt(data, 0, data.length); + r = StringUtils.convertBytesToHex(data); + assertEquals("3ad78e726c1ec02b7ebfe92b23d9ec34", r); 
+ + // COUNT = 127 + test.setKey(StringUtils.convertHexToBytes("00000000000000000000000000000000")); + data = StringUtils.convertHexToBytes("ffffffffffffffffffffffffffffffff"); + test.encrypt(data, 0, data.length); + r = StringUtils.convertBytesToHex(data); + assertEquals("3f5b8cc9ea855a0afa7347d23e8d664e", r); + + // test vector + test.setKey(StringUtils.convertHexToBytes("2b7e151628aed2a6abf7158809cf4f3c")); + data = StringUtils.convertHexToBytes("6bc1bee22e409f96e93d7e117393172a"); + test.encrypt(data, 0, data.length); + r = StringUtils.convertBytesToHex(data); + assertEquals("3ad77bb40d7a3660a89ecaf32466ef97", r); + + test.setKey(StringUtils.convertHexToBytes("000102030405060708090A0B0C0D0E0F")); + byte[] in = new byte[128]; + byte[] enc = new byte[128]; + test.encrypt(enc, 0, 128); + test.decrypt(enc, 0, 128); + if (!Arrays.equals(in, enc)) { + throw new AssertionError(); + } + + for (int i = 0; i < 10; i++) { + test.encrypt(in, 0, 128); + test.decrypt(enc, 0, 128); + } + } + + private static boolean isCompressible(byte[] data) { + int len = data.length; + int[] sum = new int[16]; + for (int i = 0; i < len; i++) { + int x = (data[i] & 255) >> 4; + sum[x]++; + } + int r = 0; + for (int x : sum) { + long v = ((long) x << 32) / len; + r += 63 - Long.numberOfLeadingZeros(v + 1); + } + return len * r < len * 120; + } + + private void testRemoveAnonFromLegacyAlgorithms() { + String legacyAlgorithms = "K_NULL, C_NULL, M_NULL, DHE_DSS_EXPORT" + + ", DHE_RSA_EXPORT, DH_anon_EXPORT, DH_DSS_EXPORT, DH_RSA_EXPORT, RSA_EXPORT" + + ", DH_anon, ECDH_anon, RC4_128, RC4_40, DES_CBC, DES40_CBC"; + String expectedLegacyWithoutDhAnon = "K_NULL, C_NULL, M_NULL, DHE_DSS_EXPORT" + + ", DHE_RSA_EXPORT, DH_anon_EXPORT, DH_DSS_EXPORT, DH_RSA_EXPORT, RSA_EXPORT" + + ", RC4_128, RC4_40, DES_CBC, DES40_CBC"; + assertEquals(expectedLegacyWithoutDhAnon, + CipherFactory.removeDhAnonFromCommaSeparatedList(legacyAlgorithms)); + + legacyAlgorithms = "ECDH_anon, DH_anon_EXPORT, DH_anon"; + 
expectedLegacyWithoutDhAnon = "DH_anon_EXPORT"; + assertEquals(expectedLegacyWithoutDhAnon, + CipherFactory.removeDhAnonFromCommaSeparatedList(legacyAlgorithms)); + + legacyAlgorithms = null; + assertNull(CipherFactory.removeDhAnonFromCommaSeparatedList(legacyAlgorithms)); + } + + /** + * This test is meaningful when run in isolation. However, tests of server + * sockets or ssl connections may modify the global state given by the + * jdk.tls.legacyAlgorithms security property (for a good reason). + * It is best to avoid running it in test suites, as it could itself lead + * to a modification of the global state with hard-to-track consequences. + */ + @SuppressWarnings("unused") + private void testResetLegacyAlgorithms() { + String legacyAlgorithmsBefore = CipherFactory.getLegacyAlgorithmsSilently(); + assertEquals("Failed assumption: jdk.tls.legacyAlgorithms" + + " has been modified from its initial setting", + CipherFactory.DEFAULT_LEGACY_ALGORITHMS, legacyAlgorithmsBefore); + CipherFactory.removeAnonFromLegacyAlgorithms(); + CipherFactory.resetDefaultLegacyAlgorithms(); + String legacyAlgorithmsAfter = CipherFactory.getLegacyAlgorithmsSilently(); + assertEquals(CipherFactory.DEFAULT_LEGACY_ALGORITHMS, legacyAlgorithmsAfter); + } + + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestServlet.java b/modules/h2/src/test/java/org/h2/test/unit/TestServlet.java new file mode 100644 index 0000000000000..b1105b5915b46 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestServlet.java @@ -0,0 +1,396 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.InputStream; +import java.net.URL; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Enumeration; +import java.util.EventListener; +import java.util.HashMap; +import java.util.Map; +import java.util.Properties; +import java.util.Set; +import javax.servlet.Filter; +import javax.servlet.FilterRegistration; +import javax.servlet.FilterRegistration.Dynamic; +import javax.servlet.RequestDispatcher; +import javax.servlet.Servlet; +import javax.servlet.ServletContext; +import javax.servlet.ServletContextEvent; +import javax.servlet.ServletException; +import javax.servlet.ServletRegistration; +import javax.servlet.SessionCookieConfig; +import javax.servlet.SessionTrackingMode; +import javax.servlet.descriptor.JspConfigDescriptor; +import org.h2.api.ErrorCode; +import org.h2.server.web.DbStarter; +import org.h2.test.TestBase; + +/** + * Tests the DbStarter servlet. + * This test simulates a minimum servlet container environment. + */ +public class TestServlet extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + /** + * Minimum ServletContext implementation. + * Most methods are not implemented. 
+ */ + static class TestServletContext implements ServletContext { + + private final Properties initParams = new Properties(); + private final HashMap<String, Object> attributes = new HashMap<>(); + + @Override + public void setAttribute(String key, Object value) { + attributes.put(key, value); + } + + @Override + public Object getAttribute(String key) { + return attributes.get(key); + } + + @Override + public boolean setInitParameter(String key, String value) { + initParams.setProperty(key, value); + return true; + } + + @Override + public String getInitParameter(String key) { + return initParams.getProperty(key); + } + + @Override + public Enumeration<String> getAttributeNames() { + throw new UnsupportedOperationException(); + } + + @Override + public ServletContext getContext(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public Enumeration<String> getInitParameterNames() { + throw new UnsupportedOperationException(); + } + + @Override + public int getMajorVersion() { + throw new UnsupportedOperationException(); + } + + @Override + public String getMimeType(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public int getMinorVersion() { + throw new UnsupportedOperationException(); + } + + @Override + public RequestDispatcher getNamedDispatcher(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public String getRealPath(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public RequestDispatcher getRequestDispatcher(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public URL getResource(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public InputStream getResourceAsStream(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public Set<String> getResourcePaths(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public String getServerInfo() { + throw new
UnsupportedOperationException(); + } + + /** + * @deprecated as of servlet API 2.1 + */ + @Override + @Deprecated + public Servlet getServlet(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public String getServletContextName() { + throw new UnsupportedOperationException(); + } + + /** + * @deprecated as of servlet API 2.1 + */ + @Deprecated + @Override + public Enumeration<String> getServletNames() { + throw new UnsupportedOperationException(); + } + + /** + * @deprecated as of servlet API 2.0 + */ + @Deprecated + @Override + public Enumeration<Servlet> getServlets() { + throw new UnsupportedOperationException(); + } + + @Override + public void log(String string) { + throw new UnsupportedOperationException(); + } + + /** + * @deprecated as of servlet API 2.1 + */ + @Deprecated + @Override + public void log(Exception exception, String string) { + throw new UnsupportedOperationException(); + } + + @Override + public void log(String string, Throwable throwable) { + throw new UnsupportedOperationException(); + } + + @Override + public void removeAttribute(String string) { + throw new UnsupportedOperationException(); + } + + @Override + public Dynamic addFilter(String arg0, String arg1) { + throw new UnsupportedOperationException(); + } + + @Override + public Dynamic addFilter(String arg0, Filter arg1) { + throw new UnsupportedOperationException(); + } + + @Override + public Dynamic addFilter(String arg0, Class<? extends Filter> arg1) { + throw new UnsupportedOperationException(); + } + + @Override + public void addListener(String arg0) { + throw new UnsupportedOperationException(); + } + + @Override + public <T extends EventListener> void addListener(T arg0) { + throw new UnsupportedOperationException(); + } + + @Override + public void addListener(Class<? extends EventListener> arg0) { + throw new UnsupportedOperationException(); + } + + @Override + public javax.servlet.ServletRegistration.Dynamic addServlet( + String arg0, String arg1) { + throw new UnsupportedOperationException(); + } + + @Override + public
javax.servlet.ServletRegistration.Dynamic addServlet( + String arg0, Servlet arg1) { + throw new UnsupportedOperationException(); + } + + @Override + public javax.servlet.ServletRegistration.Dynamic addServlet( + String arg0, Class<? extends Servlet> arg1) { + throw new UnsupportedOperationException(); + } + + @Override + public <T extends Filter> T createFilter(Class<T> arg0) + throws ServletException { + throw new UnsupportedOperationException(); + } + + @Override + public <T extends EventListener> T createListener(Class<T> arg0) + throws ServletException { + throw new UnsupportedOperationException(); + } + + @Override + public <T extends Servlet> T createServlet(Class<T> arg0) + throws ServletException { + throw new UnsupportedOperationException(); + } + + @Override + public void declareRoles(String... arg0) { + throw new UnsupportedOperationException(); + } + + @Override + public ClassLoader getClassLoader() { + throw new UnsupportedOperationException(); + } + + @Override + public String getContextPath() { + throw new UnsupportedOperationException(); + } + + @Override + public Set<SessionTrackingMode> getDefaultSessionTrackingModes() { + throw new UnsupportedOperationException(); + } + + @Override + public int getEffectiveMajorVersion() { + throw new UnsupportedOperationException(); + } + + @Override + public int getEffectiveMinorVersion() { + throw new UnsupportedOperationException(); + } + + @Override + public Set<SessionTrackingMode> getEffectiveSessionTrackingModes() { + throw new UnsupportedOperationException(); + } + + @Override + public FilterRegistration getFilterRegistration(String arg0) { + throw new UnsupportedOperationException(); + } + + @Override + public Map<String, ? extends FilterRegistration> getFilterRegistrations() { + throw new UnsupportedOperationException(); + } + + @Override + public JspConfigDescriptor getJspConfigDescriptor() { + throw new UnsupportedOperationException(); + } + + @Override + public ServletRegistration getServletRegistration(String arg0) { + throw new UnsupportedOperationException(); + } + + @Override + public Map<String, ? extends ServletRegistration> getServletRegistrations() { + throw new UnsupportedOperationException(); + }
+ + @Override + public SessionCookieConfig getSessionCookieConfig() { + throw new UnsupportedOperationException(); + } + + + @Override + public void setSessionTrackingModes(Set<SessionTrackingMode> arg0) { + throw new UnsupportedOperationException(); + } + + @Override + public String getVirtualServerName() { + throw new UnsupportedOperationException(); + } + + } + + @Override + public void test() throws SQLException { + if (config.networked || config.memory) { + return; + } + DbStarter listener = new DbStarter(); + + TestServletContext context = new TestServletContext(); + String url = getURL("servlet", true); + context.setInitParameter("db.url", url); + context.setInitParameter("db.user", getUser()); + context.setInitParameter("db.password", getPassword()); + context.setInitParameter("db.tcpServer", "-tcpPort 8888"); + + ServletContextEvent event = new ServletContextEvent(context); + listener.contextInitialized(event); + + Connection conn1 = listener.getConnection(); + Connection conn1a = (Connection) context.getAttribute("connection"); + assertTrue(conn1 == conn1a); + Statement stat1 = conn1.createStatement(); + stat1.execute("CREATE TABLE T(ID INT)"); + + String u2 = url.substring(url.indexOf("servlet")); + u2 = "jdbc:h2:tcp://localhost:8888/" + getBaseDir() + "/" + u2; + Connection conn2 = DriverManager.getConnection( + u2, getUser(), getPassword()); + Statement stat2 = conn2.createStatement(); + stat2.execute("SELECT * FROM T"); + stat2.execute("DROP TABLE T"); + + assertThrows(ErrorCode.TABLE_OR_VIEW_NOT_FOUND_1, stat1). + execute("SELECT * FROM T"); + conn2.close(); + + listener.contextDestroyed(event); + + // listener must be stopped + assertThrows(ErrorCode.CONNECTION_BROKEN_1, this).getConnection( + "jdbc:h2:tcp://localhost:8888/" + getBaseDir() + "/servlet", + getUser(), getPassword()); + + // connection must be closed + assertThrows(ErrorCode.OBJECT_CLOSED, stat1).
+ execute("SELECT * FROM DUAL"); + + deleteDb("servlet"); + + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestShell.java b/modules/h2/src/test/java/org/h2/test/unit/TestShell.java new file mode 100644 index 0000000000000..0f939725936c7 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestShell.java @@ -0,0 +1,238 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.io.PipedInputStream; +import java.io.PipedOutputStream; +import java.io.PrintStream; +import org.h2.test.TestBase; +import org.h2.tools.Shell; +import org.h2.util.Task; + +/** + * Test the shell tool. + */ +public class TestShell extends TestBase { + + /** + * The output stream of the tool. + */ + PrintStream toolOut; + + /** + * The input stream of the tool. + */ + InputStream toolIn; + + private LineNumberReader lineReader; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + Shell shell = new Shell(); + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + shell.setOut(new PrintStream(buff)); + shell.runTool("-url", "jdbc:h2:mem:", "-driver", "org.h2.Driver", + "-user", "sa", "-password", "sa", "-properties", "null", + "-sql", "select 'Hello ' || 'World' as hi"); + String s = new String(buff.toByteArray()); + assertContains(s, "HI"); + assertContains(s, "Hello World"); + assertContains(s, "(1 row, "); + + shell = new Shell(); + buff = new ByteArrayOutputStream(); + shell.setOut(new PrintStream(buff)); + shell.runTool("-help"); + s = new String(buff.toByteArray()); + assertContains(s, + "Interactive command line tool to access a database using JDBC."); + + test(true); + test(false); + } + + private void test(final boolean commandLineArgs) throws IOException { + PipedInputStream testIn = new PipedInputStream(); + PipedOutputStream out = new PipedOutputStream(testIn); + toolOut = new PrintStream(out, true); + out = new PipedOutputStream(); + PrintStream testOut = new PrintStream(out, true); + toolIn = new PipedInputStream(out); + Task task = new Task() { + @Override + public void call() throws Exception { + try { + Shell shell = new Shell(); + shell.setIn(toolIn); + shell.setOut(toolOut); + shell.setErr(toolOut); + if (commandLineArgs) { + shell.runTool("-url", "jdbc:h2:mem:", + "-user", "sa", "-password", "sa"); + } else { + shell.runTool(); + } + } finally { + toolOut.close(); + } + } + }; + task.execute(); + InputStreamReader reader = new InputStreamReader(testIn); + lineReader = new LineNumberReader(reader); + read(""); + read("Welcome to H2 Shell"); + read("Exit with"); + if (!commandLineArgs) { + read("[Enter]"); + testOut.println("jdbc:h2:mem:"); + read("URL"); + testOut.println(""); + read("Driver"); + testOut.println("sa"); + read("User"); + testOut.println("sa"); + read("Password"); + } + 
read("Commands are case insensitive"); + read("help or ?"); + read("list"); + read("maxwidth"); + read("autocommit"); + read("history"); + read("quit or exit"); + read(""); + testOut.println("history"); + read("sql> No history"); + testOut.println("1"); + read("sql> Not found"); + testOut.println("select 1 a;"); + read("sql> A"); + read("1"); + read("(1 row,"); + testOut.println("history"); + read("sql> #1: select 1 a"); + read("To re-run a statement, type the number and press and enter"); + testOut.println("1"); + read("sql> select 1 a"); + read("A"); + read("1"); + read("(1 row,"); + + testOut.println("select 'x' || space(1000) large, 'y' small;"); + read("sql> LARGE"); + read("x"); + read("(data is partially truncated)"); + read("(1 row,"); + + testOut.println("select x, 's' s from system_range(0, 10001);"); + read("sql> X | S"); + for (int i = 0; i < 10000; i++) { + read((i + " ").substring(0, 4) + " | s"); + } + for (int i = 10000; i <= 10001; i++) { + read((i + " ").substring(0, 5) + " | s"); + } + read("(10002 rows,"); + + testOut.println("select error;"); + read("sql> Error:"); + if (read("").startsWith("Column \"ERROR\" not found")) { + read(""); + } + testOut.println("create table test(id int primary key, name varchar)\n;"); + read("sql> ...>"); + testOut.println("insert into test values(1, 'Hello');"); + read("sql>"); + testOut.println("select null n, * from test;"); + read("sql> N | ID | NAME"); + read("null | 1 | Hello"); + read("(1 row,"); + + // test history + for (int i = 0; i < 30; i++) { + testOut.println("select " + i + " ID from test;"); + read("sql> ID"); + read("" + i); + read("(1 row,"); + } + testOut.println("20"); + read("sql> select 10 ID from test"); + read("ID"); + read("10"); + read("(1 row,"); + + testOut.println("maxwidth"); + read("sql> Usage: maxwidth "); + read("Maximum column width is now 100"); + testOut.println("maxwidth 80"); + read("sql> Maximum column width is now 80"); + testOut.println("autocommit"); + read("sql> Usage: 
autocommit [true|false]"); + read("Autocommit is now true"); + testOut.println("autocommit false"); + read("sql> Autocommit is now false"); + testOut.println("autocommit true"); + read("sql> Autocommit is now true"); + testOut.println("list"); + read("sql> Result list mode is now on"); + + testOut.println("select 1 first, 2 second;"); + read("sql> FIRST : 1"); + read("SECOND: 2"); + read("(1 row, "); + + testOut.println("select x from system_range(1, 3);"); + read("sql> X: 1"); + read(""); + read("X: 2"); + read(""); + read("X: 3"); + read("(3 rows, "); + + testOut.println("select x, 2 as y from system_range(1, 3) where 1 = 0;"); + read("sql> X"); + read("Y"); + read("(0 rows, "); + + testOut.println("list"); + read("sql> Result list mode is now off"); + testOut.println("help"); + read("sql> Commands are case insensitive"); + read("help or ?"); + read("list"); + read("maxwidth"); + read("autocommit"); + read("history"); + read("quit or exit"); + read(""); + testOut.println("exit"); + read("sql>"); + task.get(); + } + + private String read(String expectedStart) throws IOException { + String line = lineReader.readLine(); + // System.out.println(": " + line); + assertStartsWith(line, expectedStart); + return line; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestSort.java b/modules/h2/src/test/java/org/h2/test/unit/TestSort.java new file mode 100644 index 0000000000000..55dd8ed3cef58 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestSort.java @@ -0,0 +1,166 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.util.Arrays; +import java.util.Comparator; +import java.util.Random; +import java.util.concurrent.atomic.AtomicInteger; +import org.h2.dev.sort.InPlaceStableMergeSort; +import org.h2.dev.sort.InPlaceStableQuicksort; +import org.h2.test.TestBase; + +/** + * Tests the stable in-place sorting implementations. + */ +public class TestSort extends TestBase { + + /** + * The number of times the compare method was called. + */ + AtomicInteger compareCount = new AtomicInteger(); + + /** + * The comparison object used in this test. + */ + Comparator<Long> comp = new Comparator<Long>() { + @Override + public int compare(Long o1, Long o2) { + compareCount.incrementAndGet(); + return Long.compare(o1 >> 32, o2 >> 32); + } + }; + + private final Long[] array = new Long[100000]; + private Class<?> clazz; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + test(InPlaceStableMergeSort.class); + test(InPlaceStableQuicksort.class); + test(Arrays.class); + } + + private void test(Class<?> c) throws Exception { + this.clazz = c; + ordered(array); + shuffle(array); + stabilize(array); + test("random"); + ordered(array); + stabilize(array); + test("ordered"); + ordered(array); + reverse(array); + stabilize(array); + test("reverse"); + ordered(array); + stretch(array); + shuffle(array); + stabilize(array); + test("few random"); + ordered(array); + stretch(array); + stabilize(array); + test("few ordered"); + ordered(array); + reverse(array); + stretch(array); + stabilize(array); + test("few reverse"); + // System.out.println(); + } + + /** + * Sort the array and verify the result. 
+ * + * @param type the type of data + */ + private void test(@SuppressWarnings("unused") String type) throws Exception { + compareCount.set(0); + + // long t = System.nanoTime(); + + clazz.getMethod("sort", Object[].class, Comparator.class).invoke(null, + array, comp); + + // System.out.printf( + // "%4d ms; %10d comparisons order: %s data: %s\n", + // TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t), + // compareCount.get(), clazz, type); + + verify(array); + + } + + private static void verify(Long[] array) { + long last = Long.MIN_VALUE; + int len = array.length; + for (int i = 0; i < len; i++) { + long x = array[i]; + long x1 = x >> 32, x2 = x - (x1 << 32); + long last1 = last >> 32, last2 = last - (last1 << 32); + if (x1 < last1) { + if (array.length < 1000) { + System.out.println(Arrays.toString(array)); + } + throw new RuntimeException("" + x); + } else if (x1 == last1 && x2 < last2) { + if (array.length < 1000) { + System.out.println(Arrays.toString(array)); + } + throw new RuntimeException("" + x); + } + last = x; + } + } + + private static void ordered(Long[] array) { + for (int i = 0; i < array.length; i++) { + array[i] = (long) i; + } + } + + private static void stretch(Long[] array) { + for (int i = array.length - 1; i >= 0; i--) { + array[i] = array[i / 4]; + } + } + + private static void reverse(Long[] array) { + for (int i = 0; i < array.length / 2; i++) { + long temp = array[i]; + array[i] = array[array.length - i - 1]; + array[array.length - i - 1] = temp; + } + } + + private static void shuffle(Long[] array) { + Random r = new Random(1); + for (int i = 0; i < array.length; i++) { + long temp = array[i]; + int j = r.nextInt(array.length); + array[j] = array[i]; + array[i] = temp; + } + } + + private static void stabilize(Long[] array) { + for (int i = 0; i < array.length; i++) { + array[i] = (array[i] << 32) + i; + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestStreams.java 
b/modules/h2/src/test/java/org/h2/test/unit/TestStreams.java new file mode 100644 index 0000000000000..0fbd6fdfd4727 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestStreams.java @@ -0,0 +1,121 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.Random; +import org.h2.compress.LZFInputStream; +import org.h2.compress.LZFOutputStream; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests the LZF stream. + */ +public class TestStreams extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws IOException { + testLZFStreams(); + testLZFStreamClose(); + } + + private static byte[] getRandomBytes(Random random) { + int[] sizes = { 0, 1, random.nextInt(1000), random.nextInt(100000), + random.nextInt(1000000) }; + int size = sizes[random.nextInt(sizes.length)]; + byte[] buffer = new byte[size]; + if (random.nextInt(5) == 1) { + random.nextBytes(buffer); + } else if (random.nextBoolean()) { + int patternLen = random.nextInt(100) + 1; + for (int j = 0; j < size; j++) { + buffer[j] = (byte) (j % patternLen); + } + } + return buffer; + } + + private void testLZFStreamClose() throws IOException { + String fileName = getBaseDir() + "/temp"; + FileUtils.createDirectories(FileUtils.getParent(fileName)); + OutputStream fo = FileUtils.newOutputStream(fileName, false); + LZFOutputStream out = new LZFOutputStream(fo); + out.write("Hello".getBytes()); + out.close(); + InputStream fi = FileUtils.newInputStream(fileName); + LZFInputStream in = new 
LZFInputStream(fi); + byte[] buff = new byte[100]; + assertEquals(5, in.read(buff)); + in.read(); + in.close(); + FileUtils.delete(getBaseDir() + "/temp"); + } + + private void testLZFStreams() throws IOException { + Random random = new Random(1); + int max = getSize(100, 1000); + for (int i = 0; i < max; i += 3) { + byte[] buffer = getRandomBytes(random); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + LZFOutputStream comp = new LZFOutputStream(out); + if (random.nextInt(10) == 1) { + comp.write(buffer); + } else { + for (int j = 0; j < buffer.length;) { + int[] sizes = { 0, 1, random.nextInt(100), random.nextInt(100000) }; + int size = sizes[random.nextInt(sizes.length)]; + size = Math.min(size, buffer.length - j); + if (size == 1) { + comp.write(buffer[j]); + } else { + comp.write(buffer, j, size); + } + j += size; + } + } + comp.close(); + byte[] compressed = out.toByteArray(); + ByteArrayInputStream in = new ByteArrayInputStream(compressed); + LZFInputStream decompress = new LZFInputStream(in); + byte[] test = new byte[buffer.length]; + for (int j = 0; j < buffer.length;) { + int[] sizes = { 0, 1, random.nextInt(100), random.nextInt(100000) }; + int size = sizes[random.nextInt(sizes.length)]; + if (size == 1) { + int x = decompress.read(); + if (x < 0) { + break; + } + test[j++] = (byte) x; + } else { + size = Math.min(size, test.length - j); + int l = decompress.read(test, j, size); + if (l < 0) { + break; + } + j += l; + } + } + decompress.close(); + assertEquals(buffer, test); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestStringCache.java b/modules/h2/src/test/java/org/h2/test/unit/TestStringCache.java new file mode 100644 index 0000000000000..24db66a9c83b1 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestStringCache.java @@ -0,0 +1,180 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, and the + * EPL 1.0 (http://h2database.com/html/license.html). 
Initial Developer: H2 + * Group + */ +package org.h2.test.unit; + +import java.util.Locale; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +import org.h2.test.TestBase; +import org.h2.util.StringUtils; + +/** + * Tests the string cache facility. + */ +public class TestStringCache extends TestBase { + + /** + * Flag to indicate the test should stop. + */ + volatile boolean stop; + private final Random random = new Random(1); + private final String[] some = { null, "", "ABC", + "this is a medium sized string", "1", "2" }; + private boolean useIntern; + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + TestBase.createCaller().init().test(); + new TestStringCache().runBenchmark(); + } + + @Override + public void test() throws InterruptedException { + testToUpperToLower(); + StringUtils.clearCache(); + testSingleThread(getSize(5000, 20000)); + testMultiThreads(); + } + + private void testToUpperCache() { + Random r = new Random(); + String[] test = new String[50]; + for (int i = 0; i < test.length; i++) { + StringBuilder buff = new StringBuilder(); + for (int a = 0; a < 50; a++) { + buff.append((char) r.nextInt()); + } + String a = buff.toString(); + test[i] = a; + } + int repeat = 100000; + int testLen = 0; + long time = System.nanoTime(); + for (int a = 0; a < repeat; a++) { + for (String x : test) { + String y = StringUtils.toUpperEnglish(x); + testLen += y.length(); + } + } + time = System.nanoTime() - time; + System.out.println("cache " + TimeUnit.NANOSECONDS.toMillis(time)); + time = System.nanoTime(); + for (int a = 0; a < repeat; a++) { + for (String x : test) { + String y = x.toUpperCase(Locale.ENGLISH); + testLen -= y.length(); + } + } + time = System.nanoTime() - time; + System.out.println("toUpperCase " + TimeUnit.NANOSECONDS.toMillis(time)); + assertEquals(0, testLen); + } + + 
private void testToUpperToLower() { + Random r = new Random(); + for (int i = 0; i < 1000; i++) { + StringBuilder buff = new StringBuilder(); + for (int a = 0; a < 100; a++) { + buff.append((char) r.nextInt()); + } + String a = buff.toString(); + String b = StringUtils.toUpperEnglish(a); + String c = a.toUpperCase(Locale.ENGLISH); + assertEquals(c, b); + String d = StringUtils.toLowerEnglish(a); + String e = a.toLowerCase(Locale.ENGLISH); + assertEquals(e, d); + } + } + + private void runBenchmark() { + testToUpperCache(); + testToUpperCache(); + testToUpperCache(); + for (int i = 0; i < 6; i++) { + useIntern = (i % 2) == 0; + long time = System.nanoTime(); + testSingleThread(100000); + time = System.nanoTime() - time; + System.out.println(TimeUnit.NANOSECONDS.toMillis(time) + + " ms (useIntern=" + useIntern + ")"); + } + + } + + private String randomString() { + if (random.nextBoolean()) { + String s = some[random.nextInt(some.length)]; + if (s != null && random.nextBoolean()) { + s = new String(s); + } + return s; + } + int len = random.nextBoolean() ? random.nextInt(1000) + : random.nextInt(10); + StringBuilder buff = new StringBuilder(len); + for (int i = 0; i < len; i++) { + buff.append(random.nextInt(0xfff)); + } + return buff.toString(); + } + + /** + * Test one string operation using the string cache. + */ + void testString() { + String a = randomString(); + String b; + if (useIntern) { + b = a == null ? 
null : a.intern(); + } else { + b = StringUtils.cache(a); + } + try { + assertEquals(a, b); + } catch (Exception e) { + TestBase.logError("error", e); + } + } + + private void testSingleThread(int len) { + for (int i = 0; i < len; i++) { + testString(); + } + } + + private void testMultiThreads() throws InterruptedException { + int threadCount = getSize(3, 100); + Thread[] threads = new Thread[threadCount]; + for (int i = 0; i < threadCount; i++) { + Thread t = new Thread(new Runnable() { + @Override + public void run() { + while (!stop) { + testString(); + } + } + }); + threads[i] = t; + } + for (int i = 0; i < threadCount; i++) { + threads[i].start(); + } + int wait = getSize(200, 2000); + Thread.sleep(wait); + stop = true; + for (int i = 0; i < threadCount; i++) { + threads[i].join(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestStringUtils.java b/modules/h2/src/test/java/org/h2/test/unit/TestStringUtils.java new file mode 100644 index 0000000000000..30b6c762758f2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestStringUtils.java @@ -0,0 +1,256 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.UnsupportedEncodingException; +import java.net.URLDecoder; +import java.net.URLEncoder; +import java.util.Date; +import java.util.Random; +import org.h2.message.DbException; +import org.h2.test.TestBase; +import org.h2.test.utils.AssertThrows; +import org.h2.util.DateTimeFunctions; +import org.h2.util.StringUtils; + +/** + * Tests string utility methods. + */ +public class TestStringUtils extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testHex(); + testXML(); + testSplit(); + testJavaString(); + testURL(); + testPad(); + testReplaceAll(); + testTrim(); + } + + private void testHex() { + assertEquals("face", + StringUtils.convertBytesToHex(new byte[] + { (byte) 0xfa, (byte) 0xce })); + assertEquals(new byte[] { (byte) 0xfa, (byte) 0xce }, + StringUtils.convertHexToBytes("face")); + assertEquals(new byte[] { (byte) 0xfa, (byte) 0xce }, + StringUtils.convertHexToBytes("fAcE")); + assertEquals(new byte[] { (byte) 0xfa, (byte) 0xce }, + StringUtils.convertHexToBytes("FaCe")); + new AssertThrows(DbException.class) { @Override + public void test() { + StringUtils.convertHexToBytes("120"); + }}; + new AssertThrows(DbException.class) { @Override + public void test() { + StringUtils.convertHexToBytes("fast"); + }}; + new AssertThrows(DbException.class) { @Override + public void test() { + StringUtils.convertHexToBytes("012=abcf"); + }}; + } + + private void testPad() { + assertEquals("large", StringUtils.pad("larger text", 5, null, true)); + assertEquals("large", StringUtils.pad("larger text", 5, null, false)); + assertEquals("short+++++", StringUtils.pad("short", 10, "+", true)); + assertEquals("+++++short", StringUtils.pad("short", 10, "+", false)); + } + + private void testXML() { + assertEquals("<!-- - - - - - -abc- - - - - - -->\n", + StringUtils.xmlComment("------abc------")); + assertEquals("<test/>\n", + StringUtils.xmlNode("test", null, null)); + assertEquals("<test>Grübel</test>\n", + StringUtils.xmlNode("test", null, + StringUtils.xmlText("Gr\u00fcbel"))); + assertEquals("Rand&amp;Blue", + StringUtils.xmlText("Rand&Blue")); + assertEquals("<![CDATA[<<[[[]]]>>]]>", + StringUtils.xmlCData("<<[[[]]]>>")); + Date dt = DateTimeFunctions.parseDateTime( + "2001-02-03 04:05:06 GMT", + "yyyy-MM-dd HH:mm:ss z", "en", "GMT"); + String s = StringUtils.xmlStartDoc() + + StringUtils.xmlComment("Test Comment") + + StringUtils.xmlNode("rss", + 
StringUtils.xmlAttr("version", "2.0"), + StringUtils.xmlComment("Test Comment\nZeile2") + + StringUtils.xmlNode("channel", null, + StringUtils.xmlNode("title", null, "H2 Database Engine") + + StringUtils.xmlNode("link", null, "http://www.h2database.com") + + StringUtils.xmlNode("description", null, "H2 Database Engine") + + StringUtils.xmlNode("language", null, "en-us") + + StringUtils.xmlNode("pubDate", null, + DateTimeFunctions.formatDateTime(dt, + "EEE, d MMM yyyy HH:mm:ss z", "en", "GMT")) + + StringUtils.xmlNode("lastBuildDate", null, + DateTimeFunctions.formatDateTime(dt, + "EEE, d MMM yyyy HH:mm:ss z", "en", "GMT")) + + StringUtils.xmlNode("item", null, + StringUtils.xmlNode("title", null, + "New Version 0.9.9.9.9") + + StringUtils.xmlNode("link", null, "http://www.h2database.com") + + StringUtils.xmlNode("description", null, + StringUtils.xmlCData("\nNew Features\nTest\n"))))); + assertEquals( + s, + "\n" + + "\n" + + "\n" + + " \n" + + " \n" + + " H2 Database Engine\n" + + " http://www.h2database.com\n" + + " H2 Database Engine\n" + + " en-us\n" + + " Sat, 3 Feb 2001 04:05:06 GMT\n" + + " Sat, 3 Feb 2001 04:05:06 GMT\n" + + " \n" + + " New Version 0.9.9.9.9\n" + + " http://www.h2database.com\n" + + " \n" + + " \n" + + " \n" + " \n" + + " \n" + "\n"); + } + + private void testURL() throws UnsupportedEncodingException { + Random random = new Random(1); + for (int i = 0; i < 100; i++) { + int len = random.nextInt(10); + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < len; j++) { + if (random.nextBoolean()) { + buff.append((char) random.nextInt(0x3000)); + } else { + buff.append((char) random.nextInt(255)); + } + } + String a = buff.toString(); + String b = URLEncoder.encode(a, "UTF-8"); + String c = URLDecoder.decode(b, "UTF-8"); + assertEquals(a, c); + String d = StringUtils.urlDecode(b); + assertEquals(d, c); + } + } + + private void testJavaString() { + assertEquals("a\"b", StringUtils.javaDecode("a\"b")); + Random random = new Random(1); + 
for (int i = 0; i < 1000; i++) { + int len = random.nextInt(10); + StringBuilder buff = new StringBuilder(); + for (int j = 0; j < len; j++) { + if (random.nextBoolean()) { + buff.append((char) random.nextInt(0x3000)); + } else { + buff.append((char) random.nextInt(255)); + } + } + String a = buff.toString(); + String b = StringUtils.javaEncode(a); + String c = StringUtils.javaDecode(b); + assertEquals(a, c); + } + } + + private void testSplit() { + assertEquals(3, + StringUtils.arraySplit("ABC,DEF,G\\,HI", ',', false).length); + assertEquals( + StringUtils.arrayCombine(new String[] { "", " ", "," }, ','), + ", ,\\,"); + Random random = new Random(1); + for (int i = 0; i < 100; i++) { + int len = random.nextInt(10); + StringBuilder buff = new StringBuilder(); + String select = "abcd,"; + for (int j = 0; j < len; j++) { + char c = select.charAt(random.nextInt(select.length())); + if (c == 'a') { + buff.append("\\\\"); + } else if (c == 'b') { + buff.append("\\,"); + } else { + buff.append(c); + } + } + String a = buff.toString(); + String[] b = StringUtils.arraySplit(a, ',', false); + String c = StringUtils.arrayCombine(b, ','); + assertEquals(a, c); + } + } + + private void testReplaceAll() { + assertEquals("def", + StringUtils.replaceAll("abc def", "abc ", "")); + assertEquals("af", + StringUtils.replaceAll("abc def", "bc de", "")); + assertEquals("abc def", + StringUtils.replaceAll("abc def", "bc ", "bc ")); + assertEquals("abc ", + StringUtils.replaceAll("abc def", "def", "")); + assertEquals(" ", + StringUtils.replaceAll("abc abc", "abc", "")); + assertEquals("xyz xyz", + StringUtils.replaceAll("abc abc", "abc", "xyz")); + assertEquals("abc def", + StringUtils.replaceAll("abc def", "xyz", "abc")); + assertEquals("", + StringUtils.replaceAll("abcabcabc", "abc", "")); + assertEquals("abcabcabc", + StringUtils.replaceAll("abcabcabc", "aBc", "")); + assertEquals("abcabcabc", + StringUtils.replaceAll("abcabcabc", "", "abc")); + } + + private void testTrim() { + 
assertEquals("a a", + StringUtils.trim("a a", true, true, null)); + assertEquals(" a a ", + StringUtils.trim(" a a ", false, false, null)); + assertEquals(" a a", + StringUtils.trim(" a a ", false, true, null)); + assertEquals("a a ", + StringUtils.trim(" a a ", true, false, null)); + assertEquals("a a", + StringUtils.trim(" a a ", true, true, null)); + assertEquals("a a", + StringUtils.trim(" a a ", true, true, "")); + assertEquals("zzbbzz", + StringUtils.trim("zzbbzz", false, false, "z")); + assertEquals("zzbb", + StringUtils.trim("zzbbzz", false, true, "z")); + assertEquals("bbzz", + StringUtils.trim("zzbbzz", true, false, "z")); + assertEquals("bb", + StringUtils.trim("zzbbzz", true, true, "z")); + } + + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestTimeStampWithTimeZone.java b/modules/h2/src/test/java/org/h2/test/unit/TestTimeStampWithTimeZone.java new file mode 100644 index 0000000000000..fd8ffbcea0889 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestTimeStampWithTimeZone.java @@ -0,0 +1,230 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, and the + * EPL 1.0 (http://h2database.com/html/license.html). Initial Developer: H2 + * Group + */ +package org.h2.test.unit; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.TimeZone; + +import org.h2.api.TimestampWithTimeZone; +import org.h2.test.TestBase; +import org.h2.util.DateTimeUtils; +import org.h2.util.LocalDateTimeUtils; +import org.h2.value.Value; +import org.h2.value.ValueDate; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; + +/** + */ +public class TestTimeStampWithTimeZone extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws SQLException { + deleteDb(getTestName()); + test1(); + test2(); + test3(); + test4(); + test5(); + testOrder(); + testConversions(); + deleteDb(getTestName()); + } + + private void test1() throws SQLException { + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test(id identity, t1 timestamp(9) with time zone)"); + stat.execute("insert into test(t1) values('1970-01-01 12:00:00.00+00:15')"); + // verify NanosSinceMidnight is in local time and not UTC + stat.execute("insert into test(t1) values('2016-09-24 00:00:00.000000001+00:01')"); + stat.execute("insert into test(t1) values('2016-09-24 00:00:00.000000001-00:01')"); + // verify year month day is in local time and not UTC + stat.execute("insert into test(t1) values('2016-01-01 05:00:00.00+10:00')"); + stat.execute("insert into test(t1) values('2015-12-31 19:00:00.00-10:00')"); + ResultSet rs = stat.executeQuery("select t1 from test"); + rs.next(); + assertEquals("1970-01-01 12:00:00+00:15", rs.getString(1)); + TimestampWithTimeZone ts = (TimestampWithTimeZone) rs.getObject(1); + assertEquals(1970, ts.getYear()); + assertEquals(1, ts.getMonth()); + assertEquals(1, ts.getDay()); + assertEquals(15, ts.getTimeZoneOffsetMins()); + assertEquals(new TimestampWithTimeZone(1008673L, 43200000000000L, (short) 15), ts); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("1970-01-01T12:00+00:15", rs.getObject(1, + LocalDateTimeUtils.OFFSET_DATE_TIME).toString()); + } + rs.next(); + ts = (TimestampWithTimeZone) rs.getObject(1); + assertEquals(2016, ts.getYear()); + assertEquals(9, ts.getMonth()); + assertEquals(24, ts.getDay()); + assertEquals(1, ts.getTimeZoneOffsetMins()); + assertEquals(1L, ts.getNanosSinceMidnight()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2016-09-24T00:00:00.000000001+00:01", 
rs.getObject(1, + LocalDateTimeUtils.OFFSET_DATE_TIME).toString()); + } + rs.next(); + ts = (TimestampWithTimeZone) rs.getObject(1); + assertEquals(2016, ts.getYear()); + assertEquals(9, ts.getMonth()); + assertEquals(24, ts.getDay()); + assertEquals(-1, ts.getTimeZoneOffsetMins()); + assertEquals(1L, ts.getNanosSinceMidnight()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2016-09-24T00:00:00.000000001-00:01", rs.getObject(1, + LocalDateTimeUtils.OFFSET_DATE_TIME).toString()); + } + rs.next(); + ts = (TimestampWithTimeZone) rs.getObject(1); + assertEquals(2016, ts.getYear()); + assertEquals(1, ts.getMonth()); + assertEquals(1, ts.getDay()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2016-01-01T05:00+10:00", rs.getObject(1, + LocalDateTimeUtils.OFFSET_DATE_TIME).toString()); + } + rs.next(); + ts = (TimestampWithTimeZone) rs.getObject(1); + assertEquals(2015, ts.getYear()); + assertEquals(12, ts.getMonth()); + assertEquals(31, ts.getDay()); + if (LocalDateTimeUtils.isJava8DateApiPresent()) { + assertEquals("2015-12-31T19:00-10:00", rs.getObject(1, + LocalDateTimeUtils.OFFSET_DATE_TIME).toString()); + } + + ResultSetMetaData metaData = rs.getMetaData(); + int columnType = metaData.getColumnType(1); + // 2014 is the value of Types.TIMESTAMP_WITH_TIMEZONE + // use the value instead of the reference because the code has to + // compile (on Java 1.7). Can be replaced with + // Types.TIMESTAMP_WITH_TIMEZONE + // once Java 1.8 is required. 
+ assertEquals(2014, columnType); + + rs.close(); + stat.close(); + conn.close(); + } + + private void test2() { + ValueTimestampTimeZone a = ValueTimestampTimeZone.parse("1970-01-01 12:00:00.00+00:15"); + ValueTimestampTimeZone b = ValueTimestampTimeZone.parse("1970-01-01 12:00:01.00+01:15"); + int c = a.compareTo(b, null); + assertEquals(1, c); + c = b.compareTo(a, null); + assertEquals(-1, c); + } + + private void test3() { + ValueTimestampTimeZone a = ValueTimestampTimeZone.parse("1970-01-02 00:00:02.00+01:15"); + ValueTimestampTimeZone b = ValueTimestampTimeZone.parse("1970-01-01 23:00:01.00+00:15"); + int c = a.compareTo(b, null); + assertEquals(1, c); + c = b.compareTo(a, null); + assertEquals(-1, c); + } + + private void test4() { + ValueTimestampTimeZone a = ValueTimestampTimeZone.parse("1970-01-02 00:00:01.00+01:15"); + ValueTimestampTimeZone b = ValueTimestampTimeZone.parse("1970-01-01 23:00:01.00+00:15"); + int c = a.compareTo(b, null); + assertEquals(0, c); + c = b.compareTo(a, null); + assertEquals(0, c); + } + + private void test5() throws SQLException { + Connection conn = getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test5(id identity, t1 timestamp with time zone)"); + stat.execute("insert into test5(t1) values('2016-09-24 00:00:00.000000001+00:01')"); + stat.execute("insert into test5(t1) values('2017-04-20 00:00:00.000000001+00:01')"); + + PreparedStatement preparedStatement = conn.prepareStatement("select id" + + " from test5" + + " where (t1 < ?)"); + Value value = ValueTimestampTimeZone.parse("2016-12-24 00:00:00.000000001+00:01"); + preparedStatement.setObject(1, value.getObject()); + + ResultSet rs = preparedStatement.executeQuery(); + + assertTrue(rs.next()); + assertEquals(1, rs.getInt(1)); + assertFalse(rs.next()); + + rs.close(); + preparedStatement.close(); + stat.close(); + conn.close(); + } + + private void testOrder() throws SQLException { + Connection conn = 
getConnection(getTestName()); + Statement stat = conn.createStatement(); + stat.execute("create table test_order(id identity, t1 timestamp with time zone)"); + stat.execute("insert into test_order(t1) values('1970-01-01 12:00:00.00+00:15')"); + stat.execute("insert into test_order(t1) values('1970-01-01 12:00:01.00+01:15')"); + ResultSet rs = stat.executeQuery("select t1 from test_order order by t1"); + rs.next(); + assertEquals("1970-01-01 12:00:01+01:15", rs.getString(1)); + conn.close(); + } + + private void testConversionsImpl(String timeStr, boolean testReverse) { + ValueTimestamp ts = ValueTimestamp.parse(timeStr); + ValueDate d = (ValueDate) ts.convertTo(Value.DATE); + ValueTime t = (ValueTime) ts.convertTo(Value.TIME); + ValueTimestampTimeZone tstz = ValueTimestampTimeZone.parse(timeStr); + assertEquals(ts, tstz.convertTo(Value.TIMESTAMP)); + assertEquals(d, tstz.convertTo(Value.DATE)); + assertEquals(t, tstz.convertTo(Value.TIME)); + assertEquals(ts.getTimestamp(), tstz.getTimestamp()); + if (testReverse) { + assertEquals(0, tstz.compareTo(ts.convertTo(Value.TIMESTAMP_TZ), null)); + assertEquals(d.convertTo(Value.TIMESTAMP).convertTo(Value.TIMESTAMP_TZ), + d.convertTo(Value.TIMESTAMP_TZ)); + assertEquals(t.convertTo(Value.TIMESTAMP).convertTo(Value.TIMESTAMP_TZ), + t.convertTo(Value.TIMESTAMP_TZ)); + } + } + + private void testConversions() { + TimeZone current = TimeZone.getDefault(); + try { + for (String id : TimeZone.getAvailableIDs()) { + TimeZone.setDefault(TimeZone.getTimeZone(id)); + DateTimeUtils.resetCalendar(); + testConversionsImpl("2017-12-05 23:59:30.987654321-12:00", true); + testConversionsImpl("2000-01-02 10:20:30.123456789+07:30", true); + boolean testReverse = !"Africa/Monrovia".equals(id); + testConversionsImpl("1960-04-06 12:13:14.777666555+12:00", testReverse); + } + } finally { + TimeZone.setDefault(current); + DateTimeUtils.resetCalendar(); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestTools.java 
b/modules/h2/src/test/java/org/h2/test/unit/TestTools.java new file mode 100644 index 0000000000000..be020866431a4 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestTools.java @@ -0,0 +1,1340 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.awt.Button; +import java.awt.HeadlessException; +import java.awt.event.ActionEvent; +import java.awt.event.MouseEvent; +import java.io.ByteArrayOutputStream; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.PrintStream; +import java.io.Reader; +import java.io.Writer; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.math.BigDecimal; +import java.math.BigInteger; +import java.net.ServerSocket; +import java.net.Socket; +import java.nio.charset.StandardCharsets; +import java.sql.Blob; +import java.sql.Clob; +import java.sql.Connection; +import java.sql.Date; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; +import java.util.ArrayList; +import java.util.List; +import java.util.Random; +import java.util.UUID; + +import org.h2.api.ErrorCode; +import org.h2.engine.SysProperties; +import org.h2.store.FileLister; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; +import org.h2.test.trace.Player; +import org.h2.test.utils.AssertThrows; +import org.h2.tools.Backup; +import org.h2.tools.ChangeFileEncryption; +import org.h2.tools.Console; +import org.h2.tools.ConvertTraceFile; +import org.h2.tools.DeleteDbFiles; +import org.h2.tools.Recover; +import org.h2.tools.Restore; +import org.h2.tools.RunScript; +import org.h2.tools.Script; +import org.h2.tools.Server; +import 
org.h2.tools.SimpleResultSet; +import org.h2.tools.SimpleResultSet.SimpleArray; +import org.h2.util.JdbcUtils; +import org.h2.util.Task; +import org.h2.value.ValueUuid; + +/** + * Tests the database tools. + */ +public class TestTools extends TestBase { + + private static String lastUrl; + private Server server; + private List<Server> remainingServers = new ArrayList<>(3); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + if (config.networked) { + return; + } + DeleteDbFiles.execute(getBaseDir(), null, true); + org.h2.Driver.load(); + testSimpleResultSet(); + testTcpServerWithoutPort(); + testConsole(); + testJdbcDriverUtils(); + testWrongServer(); + testDeleteFiles(); + testScriptRunscriptLob(); + testServerMain(); + testRemove(); + testConvertTraceFile(); + testManagementDb(); + testChangeFileEncryption(false); + if (!config.splitFileSystem) { + testChangeFileEncryption(true); + } + testChangeFileEncryptionWithWrongPassword(); + testServer(); + testScriptRunscript(); + testBackupRestore(); + testRecover(); + FileUtils.delete(getBaseDir() + "/b2.sql"); + FileUtils.delete(getBaseDir() + "/b2.sql.txt"); + FileUtils.delete(getBaseDir() + "/b2.zip"); + } + + private void testTcpServerWithoutPort() throws Exception { + Server s1 = Server.createTcpServer().start(); + Server s2 = Server.createTcpServer().start(); + assertTrue(s1.getPort() != s2.getPort()); + s1.stop(); + s2.stop(); + s1 = Server.createTcpServer("-tcpPort", "9123").start(); + assertEquals(9123, s1.getPort()); + createClassProxy(Server.class); + assertThrows(ErrorCode.EXCEPTION_OPENING_PORT_2, + Server.createTcpServer("-tcpPort", "9123")).start(); + s1.stop(); + } + + private void testConsole() throws Exception { + String old = System.getProperty(SysProperties.H2_BROWSER); + Console c = new Console(); + c.setOut(new PrintStream(new
ByteArrayOutputStream())); + try { + + // start including browser + lastUrl = "-"; + System.setProperty(SysProperties.H2_BROWSER, "call:" + + TestTools.class.getName() + ".openBrowser"); + c.runTool("-web", "-webPort", "9002", "-tool", "-browser", "-tcp", + "-tcpPort", "9003", "-pg", "-pgPort", "9004"); + assertContains(lastUrl, ":9002"); + shutdownConsole(c); + + // check if starting the browser works + c.runTool("-web", "-webPort", "9002", "-tool"); + lastUrl = "-"; + c.actionPerformed(new ActionEvent(this, 0, "console")); + assertContains(lastUrl, ":9002"); + lastUrl = "-"; + // double-click prevention is 100 ms + Thread.sleep(200); + try { + MouseEvent me = new MouseEvent(new Button(), 0, 0, 0, 0, 0, 0, + false, MouseEvent.BUTTON1); + c.mouseClicked(me); + assertContains(lastUrl, ":9002"); + lastUrl = "-"; + // no delay - ignore because it looks like a double click + c.mouseClicked(me); + assertEquals("-", lastUrl); + // open the window + c.actionPerformed(new ActionEvent(this, 0, "status")); + c.actionPerformed(new ActionEvent(this, 0, "exit")); + + // check if the service was stopped + c.runTool("-webPort", "9002"); + + } catch (HeadlessException e) { + // ignore + } + + shutdownConsole(c); + + // trying to use the same port for two services should fail, + // but also stop the first service + createClassProxy(c.getClass()); + assertThrows(ErrorCode.EXCEPTION_OPENING_PORT_2, c).runTool("-web", + "-webPort", "9002", "-tcp", "-tcpPort", "9002"); + c.runTool("-web", "-webPort", "9002"); + + } finally { + if (old != null) { + System.setProperty(SysProperties.H2_BROWSER, old); + } else { + System.clearProperty(SysProperties.H2_BROWSER); + } + shutdownConsole(c); + } + } + + private static void shutdownConsole(Console c) { + c.shutdown(); + if (Thread.currentThread().isInterrupted()) { + // Clear interrupted state so test can continue its work safely + try { + Thread.sleep(1); + } catch (InterruptedException e) { + // Ignore + } + } + } + + /** + * This method is 
called via reflection. + * + * @param url the browser url + */ + public static void openBrowser(String url) { + lastUrl = url; + } + + private void testSimpleResultSet() throws Exception { + + SimpleResultSet rs; + rs = new SimpleResultSet(); + rs.addColumn(null, 0, 0, 0); + rs.addRow(1); + createClassProxy(rs.getClass()); + assertThrows(IllegalStateException.class, rs). + addColumn(null, 0, 0, 0); + assertEquals(ResultSet.TYPE_FORWARD_ONLY, rs.getType()); + + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("1", rs.getString(1)); + assertEquals("1", rs.getString("C1")); + assertFalse(rs.wasNull()); + assertEquals("C1", rs.getMetaData().getColumnLabel(1)); + assertEquals("C1", rs.getColumnName(1)); + assertEquals( + ResultSetMetaData.columnNullableUnknown, + rs.getMetaData().isNullable(1)); + assertFalse(rs.getMetaData().isAutoIncrement(1)); + assertTrue(rs.getMetaData().isCaseSensitive(1)); + assertFalse(rs.getMetaData().isCurrency(1)); + assertFalse(rs.getMetaData().isDefinitelyWritable(1)); + assertTrue(rs.getMetaData().isReadOnly(1)); + assertTrue(rs.getMetaData().isSearchable(1)); + assertTrue(rs.getMetaData().isSigned(1)); + assertFalse(rs.getMetaData().isWritable(1)); + assertEquals(null, rs.getMetaData().getCatalogName(1)); + assertEquals(null, rs.getMetaData().getColumnClassName(1)); + assertEquals("NULL", rs.getMetaData().getColumnTypeName(1)); + assertEquals(null, rs.getMetaData().getSchemaName(1)); + assertEquals(null, rs.getMetaData().getTableName(1)); + assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, rs.getHoldability()); + assertEquals(1, rs.getColumnCount()); + + rs = new SimpleResultSet(); + rs.setAutoClose(false); + + rs.addColumn("a", Types.BIGINT, 0, 0); + rs.addColumn("b", Types.BINARY, 0, 0); + rs.addColumn("c", Types.BOOLEAN, 0, 0); + rs.addColumn("d", Types.DATE, 0, 0); + rs.addColumn("e", Types.DECIMAL, 0, 0); + rs.addColumn("f", Types.FLOAT, 0, 0); + rs.addColumn("g", Types.VARCHAR, 0, 0); + rs.addColumn("h", Types.ARRAY, 0, 0); 
+ rs.addColumn("i", Types.TIME, 0, 0); + rs.addColumn("j", Types.TIMESTAMP, 0, 0); + rs.addColumn("k", Types.CLOB, 0, 0); + rs.addColumn("l", Types.BLOB, 0, 0); + + Date d = Date.valueOf("2001-02-03"); + byte[] b = {(byte) 0xab}; + Object[] a = {1, 2}; + Time t = Time.valueOf("10:20:30"); + Timestamp ts = Timestamp.valueOf("2002-03-04 10:20:30"); + Clob clob = new SimpleClob("Hello World"); + Blob blob = new SimpleBlob(new byte[]{(byte) 1, (byte) 2}); + rs.addRow(1, b, true, d, "10.3", Math.PI, "-3", a, t, ts, clob, blob); + rs.addRow(BigInteger.ONE, null, true, null, BigDecimal.ONE, 1d, null, null, null, null, null); + rs.addRow(BigInteger.ZERO, null, false, null, BigDecimal.ZERO, 0d, null, null, null, null, null); + rs.addRow(null, null, null, null, null, null, null, null, null, null, null); + + rs.next(); + + assertEquals(1, rs.getLong(1)); + assertEquals((byte) 1, rs.getByte(1)); + assertEquals((short) 1, rs.getShort(1)); + assertEquals(1, rs.getLong("a")); + assertEquals((byte) 1, rs.getByte("a")); + assertEquals(1, rs.getInt("a")); + assertEquals((short) 1, rs.getShort("a")); + assertTrue(rs.getObject(1).getClass() == Integer.class); + assertTrue(rs.getObject("a").getClass() == Integer.class); + assertTrue(rs.getBoolean(1)); + + assertEquals(b, rs.getBytes(2)); + assertEquals(b, rs.getBytes("b")); + + assertTrue(rs.getBoolean(3)); + assertTrue(rs.getBoolean("c")); + assertEquals(d.getTime(), rs.getDate(4).getTime()); + assertEquals(d.getTime(), rs.getDate("d").getTime()); + + assertTrue(new BigDecimal("10.3").equals(rs.getBigDecimal(5))); + assertTrue(new BigDecimal("10.3").equals(rs.getBigDecimal("e"))); + assertEquals(10.3, rs.getDouble(5)); + assertEquals((float) 10.3, rs.getFloat(5)); + + assertTrue(Math.PI == rs.getDouble(6)); + assertTrue(Math.PI == rs.getDouble("f")); + assertTrue((float) Math.PI == rs.getFloat(6)); + assertTrue((float) Math.PI == rs.getFloat("f")); + assertTrue(rs.getBoolean(6)); + + assertEquals(-3, rs.getInt(7)); + assertEquals(-3, 
rs.getByte(7)); + assertEquals(-3, rs.getShort(7)); + assertEquals(-3, rs.getLong(7)); + + Object[] a2 = (Object[]) rs.getArray(8).getArray(); + assertEquals(2, a2.length); + assertTrue(a == a2); + SimpleArray array = (SimpleArray) rs.getArray("h"); + assertEquals(Types.NULL, array.getBaseType()); + assertEquals("NULL", array.getBaseTypeName()); + a2 = (Object[]) array.getArray(); + array.free(); + assertEquals(2, a2.length); + assertTrue(a == a2); + + assertTrue(t == rs.getTime("i")); + assertTrue(t == rs.getTime(9)); + + assertTrue(ts == rs.getTimestamp("j")); + assertTrue(ts == rs.getTimestamp(10)); + + assertTrue(clob == rs.getClob("k")); + assertTrue(clob == rs.getClob(11)); + assertEquals("Hello World", rs.getString("k")); + assertEquals("Hello World", rs.getString(11)); + + assertTrue(blob == rs.getBlob("l")); + assertTrue(blob == rs.getBlob(12)); + + assertThrows(ErrorCode.INVALID_VALUE_2, (ResultSet) rs). + getString(13); + assertThrows(ErrorCode.COLUMN_NOT_FOUND_1, (ResultSet) rs). 
+ getString("NOT_FOUND"); + + rs.next(); + + assertTrue(rs.getBoolean(1)); + assertTrue(rs.getBoolean(3)); + assertTrue(rs.getBoolean(5)); + assertTrue(rs.getBoolean(6)); + + rs.next(); + + assertFalse(rs.getBoolean(1)); + assertFalse(rs.getBoolean(3)); + assertFalse(rs.getBoolean(5)); + assertFalse(rs.getBoolean(6)); + + rs.next(); + + assertEquals(0, rs.getLong(1)); + assertTrue(rs.wasNull()); + assertEquals(null, rs.getBytes(2)); + assertTrue(rs.wasNull()); + assertFalse(rs.getBoolean(3)); + assertTrue(rs.wasNull()); + assertNull(rs.getDate(4)); + assertTrue(rs.wasNull()); + assertNull(rs.getBigDecimal(5)); + assertTrue(rs.wasNull()); + assertEquals(0.0, rs.getDouble(5)); + assertTrue(rs.wasNull()); + assertEquals(0.0, rs.getDouble(6)); + assertTrue(rs.wasNull()); + assertEquals(0.0, rs.getFloat(6)); + assertTrue(rs.wasNull()); + assertEquals(0, rs.getInt(7)); + assertTrue(rs.wasNull()); + assertNull(rs.getArray(8)); + assertTrue(rs.wasNull()); + assertNull(rs.getTime(9)); + assertTrue(rs.wasNull()); + assertNull(rs.getTimestamp(10)); + assertTrue(rs.wasNull()); + assertNull(rs.getClob(11)); + assertTrue(rs.wasNull()); + assertNull(rs.getCharacterStream(11)); + assertTrue(rs.wasNull()); + assertNull(rs.getBlob(12)); + assertTrue(rs.wasNull()); + assertNull(rs.getBinaryStream(12)); + assertTrue(rs.wasNull()); + + // all updateX methods + for (Method m: rs.getClass().getMethods()) { + if (m.getName().startsWith("update")) { + if (m.getName().equals("updateRow")) { + continue; + } + int len = m.getParameterTypes().length; + if (m.getName().equals("updateObject") && m.getParameterTypes().length > 2) { + Class<?> p3 = m.getParameterTypes()[2]; + if (p3.toString().indexOf("SQLType") >= 0) { + continue; + } + } + Object[] params = new Object[len]; + int i = 0; + String expectedValue = null; + for (Class<?> type : m.getParameterTypes()) { + Object o; + String e = null; + if (type == int.class) { + o = 1; + e = "1"; + } else if (type == byte.class) { + o = (byte) 2; + e = "2";
+ } else if (type == double.class) { + o = (double) 3; + e = "3.0"; + } else if (type == float.class) { + o = (float) 4; + e = "4.0"; + } else if (type == long.class) { + o = (long) 5; + e = "5"; + } else if (type == short.class) { + o = (short) 6; + e = "6"; + } else if (type == boolean.class) { + o = false; + e = "false"; + } else if (type == String.class) { + // columnName or value + o = "a"; + e = "a"; + } else { + o = null; + } + if (i == 1) { + expectedValue = e; + } + params[i] = o; + i++; + } + m.invoke(rs, params); + if (params.length == 1) { + // updateNull + assertEquals(null, rs.getString(1)); + } else { + assertEquals(expectedValue, rs.getString(1)); + } + // invalid column name / index + Object invalidColumn; + if (m.getParameterTypes()[0] == String.class) { + invalidColumn = "x"; + } else { + invalidColumn = 0; + } + params[0] = invalidColumn; + try { + m.invoke(rs, params); + fail(); + } catch (InvocationTargetException e) { + SQLException e2 = (SQLException) e.getTargetException(); + if (invalidColumn instanceof String) { + assertEquals(ErrorCode.COLUMN_NOT_FOUND_1, + e2.getErrorCode()); + } else { + assertEquals(ErrorCode.INVALID_VALUE_2, + e2.getErrorCode()); + } + } + } + } + assertEquals(ResultSet.FETCH_FORWARD, rs.getFetchDirection()); + assertEquals(0, rs.getFetchSize()); + assertEquals(ResultSet.TYPE_SCROLL_INSENSITIVE, rs.getType()); + assertTrue(rs.getStatement() == null); + assertFalse(rs.isClosed()); + + rs.beforeFirst(); + assertEquals(0, rs.getRow()); + assertTrue(rs.next()); + assertFalse(rs.isClosed()); + assertEquals(1, rs.getRow()); + assertTrue(rs.next()); + assertTrue(rs.next()); + assertTrue(rs.next()); + assertFalse(rs.next()); + assertThrows(ErrorCode.NO_DATA_AVAILABLE, (ResultSet) rs). 
+ getInt(1); + assertEquals(0, rs.getRow()); + assertFalse(rs.isClosed()); + rs.close(); + assertTrue(rs.isClosed()); + rs = new SimpleResultSet(); + rs.addColumn("TEST", Types.BINARY, 0, 0); + UUID uuid = UUID.randomUUID(); + rs.addRow(uuid); + rs.next(); + assertEquals(uuid, rs.getObject(1)); + assertEquals(uuid, ValueUuid.get(rs.getBytes(1)).getObject()); + } + + private void testJdbcDriverUtils() { + assertEquals("org.h2.Driver", + JdbcUtils.getDriver("jdbc:h2:~/test")); + assertEquals("org.postgresql.Driver", + JdbcUtils.getDriver("jdbc:postgresql:test")); + assertEquals(null, + JdbcUtils.getDriver("jdbc:unknown:test")); + } + + private void testWrongServer() throws Exception { + // try to connect when the server is not running + assertThrows(ErrorCode.CONNECTION_BROKEN_1, this). + getConnection("jdbc:h2:tcp://localhost:9001/test"); + final ServerSocket serverSocket = new ServerSocket(9001); + Task task = new Task() { + @Override + public void call() throws Exception { + while (!stop) { + Socket socket = serverSocket.accept(); + byte[] data = new byte[1024]; + data[0] = 'x'; + OutputStream out = socket.getOutputStream(); + out.write(data); + out.close(); + socket.close(); + } + } + }; + try { + task.execute(); + Thread.sleep(100); + try { + getConnection("jdbc:h2:tcp://localhost:9001/test"); + fail(); + } catch (SQLException e) { + assertEquals(ErrorCode.CONNECTION_BROKEN_1, e.getErrorCode()); + } + } finally { + serverSocket.close(); + task.getException(); + } + } + + private void testDeleteFiles() throws SQLException { + if (config.memory) { + return; + } + deleteDb("testDeleteFiles"); + Connection conn = getConnection("testDeleteFiles"); + Statement stat = conn.createStatement(); + stat.execute("create table test(c clob) as select space(10000) from dual"); + conn.close(); + // the name starts with the same string, but does not match it + DeleteDbFiles.execute(getBaseDir(), "testDelete", true); + conn = getConnection("testDeleteFiles"); + stat = 
conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + rs.next(); + rs.getString(1); + conn.close(); + deleteDb("testDeleteFiles"); + } + + private void testServerMain() throws SQLException { + testNonSSL(); + if (!config.travis) { + testSSL(); + } + } + + private void testNonSSL() throws SQLException { + String result; + Connection conn; + + try { + result = runServer(0, new String[]{"-?"}); + assertContains(result, "Starts the H2 Console"); + assertTrue(result.indexOf("Unknown option") < 0); + + result = runServer(1, new String[]{"-xy"}); + assertContains(result, "Starts the H2 Console"); + assertContains(result, "Feature not supported"); + result = runServer(0, new String[]{"-ifNotExists", "-tcp", + "-tcpPort", "9001", "-tcpPassword", "abc"}); + assertContains(result, "tcp://"); + assertContains(result, ":9001"); + assertContains(result, "only local"); + assertTrue(result.indexOf("Starts the H2 Console") < 0); + conn = getConnection("jdbc:h2:tcp://localhost:9001/mem:", "sa", "sa"); + conn.close(); + result = runServer(0, new String[]{"-tcpShutdown", + "tcp://localhost:9001", "-tcpPassword", "abc", "-tcpShutdownForce"}); + assertContains(result, "Shutting down"); + } finally { + shutdownServers(); + } + } + + private void testSSL() throws SQLException { + String result; + Connection conn; + + try { + result = runServer(0, new String[]{"-ifNotExists", "-tcp", + "-tcpAllowOthers", "-tcpPort", "9001", "-tcpPassword", "abcdef", "-tcpSSL"}); + assertContains(result, "ssl://"); + assertContains(result, ":9001"); + assertContains(result, "others can"); + assertTrue(result.indexOf("Starts the H2 Console") < 0); + conn = getConnection("jdbc:h2:ssl://localhost:9001/mem:", "sa", "sa"); + conn.close(); + + result = runServer(0, new String[]{"-tcpShutdown", + "ssl://localhost:9001", "-tcpPassword", "abcdef"}); + assertContains(result, "Shutting down"); + assertThrows(ErrorCode.CONNECTION_BROKEN_1, this). 
+ getConnection("jdbc:h2:ssl://localhost:9001/mem:", "sa", "sa"); + + result = runServer(0, new String[]{ + "-ifNotExists", "-web", "-webPort", "9002", "-webAllowOthers", "-webSSL", + "-pg", "-pgAllowOthers", "-pgPort", "9003", + "-tcp", "-tcpAllowOthers", "-tcpPort", "9006", "-tcpPassword", "abc"}); + Server stop = server; + assertContains(result, "https://"); + assertContains(result, ":9002"); + assertContains(result, "pg://"); + assertContains(result, ":9003"); + assertContains(result, "others can"); + assertTrue(result.indexOf("only local") < 0); + assertContains(result, "tcp://"); + assertContains(result, ":9006"); + + conn = getConnection("jdbc:h2:tcp://localhost:9006/mem:", "sa", "sa"); + conn.close(); + + result = runServer(0, new String[]{"-tcpShutdown", + "tcp://localhost:9006", "-tcpPassword", "abc", "-tcpShutdownForce"}); + assertContains(result, "Shutting down"); + stop.shutdown(); + assertThrows(ErrorCode.CONNECTION_BROKEN_1, this). + getConnection("jdbc:h2:tcp://localhost:9006/mem:", "sa", "sa"); + } finally { + shutdownServers(); + } + } + + private String runServer(int exitCode, String... 
args) { + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + PrintStream ps = new PrintStream(buff); + if (server != null) { + remainingServers.add(server); + } + server = new Server(); + server.setOut(ps); + int result = 0; + try { + server.runTool(args); + } catch (SQLException e) { + result = 1; + e.printStackTrace(ps); + } + assertEquals(exitCode, result); + ps.flush(); + String s = new String(buff.toByteArray()); + return s; + } + + private void shutdownServers() { + for (Server remainingServer : remainingServers) { + if (remainingServer != null) { + remainingServer.shutdown(); + } + } + remainingServers.clear(); + if (server != null) { + server.shutdown(); + } + } + + private void testConvertTraceFile() throws Exception { + deleteDb("toolsConvertTraceFile"); + org.h2.Driver.load(); + String url = "jdbc:h2:" + getBaseDir() + "/toolsConvertTraceFile"; + url = getURL(url, true); + Connection conn = getConnection(url + ";TRACE_LEVEL_FILE=3", "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute( + "create table test(id int primary key, name varchar, amount decimal)"); + PreparedStatement prep = conn.prepareStatement( + "insert into test values(?, ?, ?)"); + prep.setInt(1, 1); + prep.setString(2, "Hello \\'Joe\n\\'"); + prep.setBigDecimal(3, new BigDecimal("10.20")); + prep.executeUpdate(); + stat.execute("create table test2(id int primary key,\n" + + "a real, b double, c bigint,\n" + + "d smallint, e boolean, f binary, g date, h time, i timestamp)", + Statement.NO_GENERATED_KEYS); + prep = conn.prepareStatement( + "insert into test2 values(1, ?, ?, ?, ?, ?, ?, ?, ?, ?)"); + prep.setFloat(1, Float.MIN_VALUE); + prep.setDouble(2, Double.MIN_VALUE); + prep.setLong(3, Long.MIN_VALUE); + prep.setShort(4, Short.MIN_VALUE); + prep.setBoolean(5, false); + prep.setBytes(6, new byte[] { (byte) 10, (byte) 20 }); + prep.setDate(7, java.sql.Date.valueOf("2007-12-31")); + prep.setTime(8, java.sql.Time.valueOf("23:59:59")); + prep.setTimestamp(9, 
java.sql.Timestamp.valueOf("2007-12-31 23:59:59")); + prep.executeUpdate(); + conn.close(); + + ConvertTraceFile.main("-traceFile", getBaseDir() + + "/toolsConvertTraceFile.trace.db", "-javaClass", getBaseDir() + + "/Test", "-script", getBaseDir() + "/test.sql"); + FileUtils.delete(getBaseDir() + "/Test.java"); + + String trace = getBaseDir() + "/toolsConvertTraceFile.trace.db"; + assertTrue(FileUtils.exists(trace)); + String newTrace = getBaseDir() + "/test.trace.db"; + FileUtils.delete(newTrace); + assertFalse(FileUtils.exists(newTrace)); + FileUtils.move(trace, newTrace); + deleteDb("toolsConvertTraceFile"); + Player.main(getBaseDir() + "/test.trace.db"); + testTraceFile(url); + + deleteDb("toolsConvertTraceFile"); + RunScript.main("-url", url, "-user", "sa", "-script", getBaseDir() + + "/test.sql"); + testTraceFile(url); + + deleteDb("toolsConvertTraceFile"); + FileUtils.delete(getBaseDir() + "/toolsConvertTraceFile.h2.sql"); + FileUtils.delete(getBaseDir() + "/test.sql"); + } + + private void testTraceFile(String url) throws SQLException { + Connection conn; + Recover.main("-removePassword", "-dir", getBaseDir(), "-db", + "toolsConvertTraceFile"); + conn = getConnection(url, "sa", ""); + Statement stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello \\'Joe\n\\'", rs.getString(2)); + assertEquals("10.20", rs.getBigDecimal(3).toString()); + assertFalse(rs.next()); + rs = stat.executeQuery("select * from test2"); + rs.next(); + assertEquals(Float.MIN_VALUE, rs.getFloat("a")); + assertEquals(Double.MIN_VALUE, rs.getDouble("b")); + assertEquals(Long.MIN_VALUE, rs.getLong("c")); + assertEquals(Short.MIN_VALUE, rs.getShort("d")); + assertTrue(!rs.getBoolean("e")); + assertEquals(new byte[] { (byte) 10, (byte) 20 }, rs.getBytes("f")); + assertEquals("2007-12-31", rs.getString("g")); + assertEquals("23:59:59", rs.getString("h")); + assertEquals("2007-12-31 
23:59:59", rs.getString("i")); + assertFalse(rs.next()); + conn.close(); + } + + private void testRemove() throws SQLException { + if (config.mvStore) { + return; + } + deleteDb("toolsRemove"); + org.h2.Driver.load(); + String url = "jdbc:h2:" + getBaseDir() + "/toolsRemove"; + Connection conn = getConnection(url, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, name varchar)"); + stat.execute("insert into test values(1, 'Hello')"); + conn.close(); + Recover.main("-dir", getBaseDir(), "-db", "toolsRemove", + "-removePassword"); + conn = getConnection(url, "sa", ""); + stat = conn.createStatement(); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + conn.close(); + deleteDb("toolsRemove"); + FileUtils.delete(getBaseDir() + "/toolsRemove.h2.sql"); + } + + private void testRecover() throws SQLException { + if (config.memory) { + return; + } + deleteDb("toolsRecover"); + org.h2.Driver.load(); + String url = getURL("toolsRecover", true); + Connection conn = getConnection(url, "sa", "sa"); + Statement stat = conn.createStatement(); + stat.execute("create table test(id int primary key, " + + "name varchar, b blob, c clob)"); + stat.execute("create table \"test 2\"(id int primary key, name varchar)"); + stat.execute("comment on table test is ';-)'"); + stat.execute("insert into test values" + + "(1, 'Hello', SECURE_RAND(4100), '\u00e4' || space(4100))"); + ResultSet rs; + rs = stat.executeQuery("select * from test"); + rs.next(); + byte[] b1 = rs.getBytes(3); + String s1 = rs.getString(4); + + conn.close(); + Recover.main("-dir", getBaseDir(), "-db", "toolsRecover"); + + // deleteDb would delete the .lob.db directory as well + // deleteDb("toolsRecover"); + ArrayList<String> list = FileLister.getDatabaseFiles(getBaseDir(), + "toolsRecover", true); + for (String fileName : list) { + if (!FileUtils.isDirectory(fileName))
{ + FileUtils.delete(fileName); + } + } + + conn = getConnection(url); + stat = conn.createStatement(); + String suffix = ".h2.sql"; + stat.execute("runscript from '" + getBaseDir() + "/toolsRecover" + + suffix + "'"); + rs = stat.executeQuery("select * from \"test 2\""); + assertFalse(rs.next()); + rs = stat.executeQuery("select * from test"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertEquals("Hello", rs.getString(2)); + byte[] b2 = rs.getBytes(3); + String s2 = rs.getString(4); + assertEquals("\u00e4 ", s2.substring(0, 2)); + assertEquals(4100, b2.length); + assertEquals(4101, s2.length()); + assertEquals(b1, b2); + assertEquals(s1, s2); + assertFalse(rs.next()); + conn.close(); + deleteDb("toolsRecover"); + FileUtils.delete(getBaseDir() + "/toolsRecover.h2.sql"); + String dir = getBaseDir() + "/toolsRecover.lobs.db"; + FileUtils.deleteRecursive(dir, false); + } + + private void testManagementDb() throws SQLException { + int count = getSize(2, 10); + for (int i = 0; i < count; i++) { + Server tcpServer = Server. 
+ createTcpServer().start(); + tcpServer.stop(); + tcpServer = Server.createTcpServer("-tcpPassword", "abc").start(); + tcpServer.stop(); + } + } + + private void testScriptRunscriptLob() throws Exception { + org.h2.Driver.load(); + String url = getURL("jdbc:h2:" + getBaseDir() + + "/testScriptRunscriptLob", true); + String user = "sa", password = "abc"; + String fileName = getBaseDir() + "/b2.sql"; + Connection conn = getConnection(url, user, password); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, BDATA BLOB, CDATA CLOB)"); + PreparedStatement prep = conn.prepareStatement( + "INSERT INTO TEST VALUES(?, ?, ?)"); + + prep.setInt(1, 1); + prep.setNull(2, Types.BLOB); + prep.setNull(3, Types.CLOB); + prep.execute(); + + prep.setInt(1, 2); + prep.setString(2, "face"); + prep.setString(3, "face"); + prep.execute(); + + Random random = new Random(1); + prep.setInt(1, 3); + byte[] large = new byte[getSize(10 * 1024, 100 * 1024)]; + random.nextBytes(large); + prep.setBytes(2, large); + String largeText = new String(large, StandardCharsets.ISO_8859_1); + prep.setString(3, largeText); + prep.execute(); + + for (int i = 0; i < 2; i++) { + ResultSet rs = conn.createStatement().executeQuery( + "SELECT * FROM TEST ORDER BY ID"); + rs.next(); + assertEquals(1, rs.getInt(1)); + assertTrue(rs.getString(2) == null); + assertTrue(rs.getString(3) == null); + rs.next(); + assertEquals(2, rs.getInt(1)); + assertEquals("face", rs.getString(2)); + assertEquals("face", rs.getString(3)); + rs.next(); + assertEquals(3, rs.getInt(1)); + assertEquals(large, rs.getBytes(2)); + assertEquals(largeText, rs.getString(3)); + assertFalse(rs.next()); + + conn.close(); + Script.main("-url", url, "-user", user, "-password", password, + "-script", fileName); + DeleteDbFiles.main("-dir", getBaseDir(), "-db", + "testScriptRunscriptLob", "-quiet"); + RunScript.main("-url", url, "-user", user, "-password", password, + "-script", fileName); + conn = getConnection("jdbc:h2:" + 
getBaseDir() + + "/testScriptRunscriptLob", "sa", "abc"); + } + conn.close(); + + } + + private void testScriptRunscript() throws SQLException { + org.h2.Driver.load(); + String url = getURL("jdbc:h2:" + getBaseDir() + "/testScriptRunscript", + true); + String user = "sa", password = "abc"; + String fileName = getBaseDir() + "/b2.sql"; + DeleteDbFiles.main("-dir", getBaseDir(), "-db", "testScriptRunscript", + "-quiet"); + Connection conn = getConnection(url, user, password); + conn.createStatement().execute("CREATE TABLE \u00f6()"); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + conn.createStatement().execute("INSERT INTO TEST VALUES(1, 'Hello')"); + conn.close(); + Script.main("-url", url, "-user", user, "-password", password, + "-script", fileName, "-options", "nodata", "compression", + "lzf", "cipher", "aes", "password", "'123'", "charset", + "'utf-8'"); + Script.main("-url", url, "-user", user, "-password", password, + "-script", fileName + ".txt"); + DeleteDbFiles.main("-dir", getBaseDir(), "-db", "testScriptRunscript", + "-quiet"); + RunScript.main("-url", url, "-user", user, "-password", password, + "-script", fileName, "-options", "compression", "lzf", + "cipher", "aes", "password", "'123'", "charset", "'utf-8'"); + conn = getConnection( + "jdbc:h2:" + getBaseDir() + "/testScriptRunscript", "sa", "abc"); + ResultSet rs = conn.createStatement() + .executeQuery("SELECT * FROM TEST"); + assertFalse(rs.next()); + rs = conn.createStatement().executeQuery("SELECT * FROM \u00f6"); + assertFalse(rs.next()); + conn.close(); + + DeleteDbFiles.main("-dir", getBaseDir(), "-db", "testScriptRunscript", + "-quiet"); + RunScript tool = new RunScript(); + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + tool.setOut(new PrintStream(buff)); + tool.runTool("-url", url, "-user", user, "-password", password, + "-script", fileName + ".txt", "-showResults"); + assertContains(buff.toString(), "Hello"); + + + // test parsing 
of BLOCKSIZE option + DeleteDbFiles.main("-dir", getBaseDir(), "-db", "testScriptRunscript", + "-quiet"); + conn = getConnection(url, user, password); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + conn.close(); + Script.main("-url", url, "-user", user, "-password", password, + "-script", fileName, "-options", "simple", "blocksize", + "8192"); + } + + private void testBackupRestore() throws SQLException { + org.h2.Driver.load(); + String url = "jdbc:h2:" + getBaseDir() + "/testBackupRestore"; + String user = "sa", password = "abc"; + final String fileName = getBaseDir() + "/b2.zip"; + DeleteDbFiles.main("-dir", getBaseDir(), "-db", "testBackupRestore", + "-quiet"); + Connection conn = getConnection(url, user, password); + conn.createStatement().execute( + "CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)"); + conn.createStatement().execute("INSERT INTO TEST VALUES(1, 'Hello')"); + conn.close(); + Backup.main("-file", fileName, "-dir", getBaseDir(), "-db", + "testBackupRestore", "-quiet"); + DeleteDbFiles.main("-dir", getBaseDir(), "-db", "testBackupRestore", + "-quiet"); + Restore.main("-file", fileName, "-dir", getBaseDir(), "-db", + "testBackupRestore", "-quiet"); + conn = getConnection("jdbc:h2:" + getBaseDir() + "/testBackupRestore", + "sa", "abc"); + ResultSet rs = conn.createStatement() + .executeQuery("SELECT * FROM TEST"); + assertTrue(rs.next()); + assertFalse(rs.next()); + new AssertThrows(ErrorCode.CANNOT_CHANGE_SETTING_WHEN_OPEN_1) { + @Override + public void test() throws SQLException { + // must fail when the database is in use + Backup.main("-file", fileName, "-dir", getBaseDir(), "-db", + "testBackupRestore"); + } + }; + conn.close(); + DeleteDbFiles.main("-dir", getBaseDir(), "-db", "testBackupRestore", + "-quiet"); + } + + private void testChangeFileEncryption(boolean split) throws SQLException { + org.h2.Driver.load(); + final String dir = (split ? 
"split:19:" : "") + getBaseDir(); + String url = "jdbc:h2:" + dir + "/testChangeFileEncryption;CIPHER=AES"; + DeleteDbFiles.execute(dir, "testChangeFileEncryption", true); + Connection conn = getConnection(url, "sa", "abc 123"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, DATA CLOB) " + + "AS SELECT X, SPACE(3000) FROM SYSTEM_RANGE(1, 300)"); + conn.close(); + String[] args = { "-dir", dir, "-db", "testChangeFileEncryption", + "-cipher", "AES", "-decrypt", "abc", "-quiet" }; + ChangeFileEncryption.main(args); + args = new String[] { "-dir", dir, "-db", "testChangeFileEncryption", + "-cipher", "AES", "-encrypt", "def", "-quiet" }; + ChangeFileEncryption.main(args); + conn = getConnection(url, "sa", "def 123"); + stat = conn.createStatement(); + stat.execute("SELECT * FROM TEST"); + new AssertThrows(ErrorCode.CANNOT_CHANGE_SETTING_WHEN_OPEN_1) { + @Override + public void test() throws SQLException { + ChangeFileEncryption.main(new String[] { "-dir", dir, "-db", + "testChangeFileEncryption", "-cipher", "AES", + "-decrypt", "def", "-quiet" }); + } + }; + conn.close(); + args = new String[] { "-dir", dir, "-db", "testChangeFileEncryption", + "-quiet" }; + DeleteDbFiles.main(args); + } + + private void testChangeFileEncryptionWithWrongPassword() throws SQLException { + if (config.mvStore) { + // the file system encryption abstraction used by the MVStore + // doesn't detect wrong passwords + return; + } + org.h2.Driver.load(); + final String dir = getBaseDir(); + // TODO: this doesn't seem to work in MVSTORE mode yet + String url = "jdbc:h2:" + dir + "/testChangeFileEncryption;CIPHER=AES"; + DeleteDbFiles.execute(dir, "testChangeFileEncryption", true); + Connection conn = getConnection(url, "sa", "abc 123"); + Statement stat = conn.createStatement(); + stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, DATA CLOB) " + + "AS SELECT X, SPACE(3000) FROM SYSTEM_RANGE(1, 300)"); + conn.close(); + // try with wrong password, 
this used to have a bug where it kept the + // file handle open + new AssertThrows(SQLException.class) { + @Override + public void test() throws SQLException { + ChangeFileEncryption.execute(dir, "testChangeFileEncryption", + "AES", "wrong".toCharArray(), + "def".toCharArray(), true); + } + }; + ChangeFileEncryption.execute(dir, "testChangeFileEncryption", + "AES", "abc".toCharArray(), "def".toCharArray(), + true); + + conn = getConnection(url, "sa", "def 123"); + stat = conn.createStatement(); + stat.execute("SELECT * FROM TEST"); + conn.close(); + String[] args = new String[] { "-dir", dir, "-db", "testChangeFileEncryption", "-quiet" }; + DeleteDbFiles.main(args); + } + + private void testServer() throws SQLException { + Connection conn; + try { + deleteDb("test"); + Server tcpServer = Server.createTcpServer("-ifNotExists", + "-baseDir", getBaseDir(), + "-tcpAllowOthers").start(); + remainingServers.add(tcpServer); + final int port = tcpServer.getPort(); + conn = getConnection("jdbc:h2:tcp://localhost:"+ port +"/test", "sa", ""); + conn.close(); + // must not be able to use a different base dir + new AssertThrows(ErrorCode.IO_EXCEPTION_1) { + @Override + public void test() throws SQLException { + getConnection("jdbc:h2:tcp://localhost:"+ port +"/../test", "sa", ""); + }}; + new AssertThrows(ErrorCode.IO_EXCEPTION_1) { + @Override + public void test() throws SQLException { + getConnection("jdbc:h2:tcp://localhost:"+port+"/../test2/test", "sa", ""); + }}; + tcpServer.stop(); + Server tcpServerWithPassword = Server.createTcpServer( + "-ifExists", + "-tcpPassword", "abc", + "-baseDir", getBaseDir()).start(); + final int prt = tcpServerWithPassword.getPort(); + remainingServers.add(tcpServerWithPassword); + // must not be able to create new db + new AssertThrows(ErrorCode.DATABASE_NOT_FOUND_1) { + @Override + public void test() throws SQLException { + getConnection("jdbc:h2:tcp://localhost:"+prt+"/test2", "sa", ""); + }}; + new 
AssertThrows(ErrorCode.DATABASE_NOT_FOUND_1) { + @Override + public void test() throws SQLException { + getConnection("jdbc:h2:tcp://localhost:"+prt+"/test2;ifexists=false", "sa", ""); + }}; + conn = getConnection("jdbc:h2:tcp://localhost:"+prt+"/test", "sa", ""); + conn.close(); + new AssertThrows(ErrorCode.WRONG_USER_OR_PASSWORD) { + @Override + public void test() throws SQLException { + Server.shutdownTcpServer("tcp://localhost:"+prt, "", true, false); + }}; + conn = getConnection("jdbc:h2:tcp://localhost:"+prt+"/test", "sa", ""); + // conn.close(); + Server.shutdownTcpServer("tcp://localhost:"+prt, "abc", true, false); + // check that the database is closed + deleteDb("test"); + // server must have been closed + assertThrows(ErrorCode.CONNECTION_BROKEN_1, this). + getConnection("jdbc:h2:tcp://localhost:"+prt+"/test", "sa", ""); + JdbcUtils.closeSilently(conn); + // Test filesystem prefix and escape from baseDir + deleteDb("testSplit"); + server = Server.createTcpServer("-ifNotExists", + "-baseDir", getBaseDir(), + "-tcpAllowOthers").start(); + final int p = server.getPort(); + conn = getConnection("jdbc:h2:tcp://localhost:"+p+"/split:testSplit", "sa", ""); + conn.close(); + + assertThrows(ErrorCode.IO_EXCEPTION_1, this). + getConnection("jdbc:h2:tcp://localhost:"+p+"/../test", "sa", ""); + + server.stop(); + deleteDb("testSplit"); + } finally { + shutdownServers(); + } + } + + /** + * A simple Clob implementation. + */ + class SimpleClob implements Clob { + + private final String data; + + SimpleClob(String data) { + this.data = data; + } + + /** + * Free the clob. + */ + @Override + public void free() throws SQLException { + // ignore + } + + @Override + public InputStream getAsciiStream() throws SQLException { + throw new UnsupportedOperationException(); + } + + @Override + public Reader getCharacterStream() throws SQLException { + throw new UnsupportedOperationException(); + } + + /** + * Get the reader. 
+         *
+         * @param pos the position
+         * @param length the length
+         * @return the reader
+         */
+        @Override
+        public Reader getCharacterStream(long pos, long length)
+                throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public String getSubString(long pos, int length) throws SQLException {
+            return data;
+        }
+
+        @Override
+        public long length() throws SQLException {
+            return data.length();
+        }
+
+        @Override
+        public long position(String search, long start) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public long position(Clob search, long start) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public OutputStream setAsciiStream(long pos) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public Writer setCharacterStream(long pos) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public int setString(long pos, String str) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public int setString(long pos, String str, int offset, int len)
+                throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public void truncate(long len) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+    }
+
+    /**
+     * A simple Blob implementation.
+     */
+    class SimpleBlob implements Blob {
+
+        private final byte[] data;
+
+        SimpleBlob(byte[] data) {
+            this.data = data;
+        }
+
+        /**
+         * Free the blob.
+         */
+        @Override
+        public void free() throws SQLException {
+            // ignore
+        }
+
+        @Override
+        public InputStream getBinaryStream() throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        /**
+         * Get the binary stream.
+         *
+         * @param pos the position
+         * @param length the length
+         * @return the input stream
+         */
+        @Override
+        public InputStream getBinaryStream(long pos, long length)
+                throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public byte[] getBytes(long pos, int length) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public long length() throws SQLException {
+            return data.length;
+        }
+
+        @Override
+        public long position(byte[] pattern, long start) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public long position(Blob pattern, long start) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public OutputStream setBinaryStream(long pos) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public int setBytes(long pos, byte[] bytes) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public int setBytes(long pos, byte[] bytes, int offset, int len)
+                throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public void truncate(long len) throws SQLException {
+            throw new UnsupportedOperationException();
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestTraceSystem.java b/modules/h2/src/test/java/org/h2/test/unit/TestTraceSystem.java
new file mode 100644
index 0000000000000..5671f5f4bd571
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/unit/TestTraceSystem.java
@@ -0,0 +1,76 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayOutputStream; +import java.io.PrintStream; +import org.h2.message.TraceSystem; +import org.h2.store.fs.FileUtils; +import org.h2.test.TestBase; + +/** + * Tests the trace system + */ +public class TestTraceSystem extends TestBase { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testTraceDebug(); + testReadOnly(); + testAdapter(); + } + + private void testAdapter() { + TraceSystem ts = new TraceSystem(null); + ts.setName("test"); + ts.setLevelFile(TraceSystem.ADAPTER); + ts.getTrace("test").debug("test"); + ts.getTrace("test").info("test"); + ts.getTrace("test").error(new Exception(), "test"); + + // The used SLF4J-nop logger has all log levels disabled, + // so this should be reflected in the trace system. +// assertFalse(ts.isEnabled(TraceSystem.INFO)); +// assertFalse(ts.getTrace("test").isInfoEnabled()); + + ts.close(); + } + + private void testTraceDebug() { + TraceSystem ts = new TraceSystem(null); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + ts.setSysOut(new PrintStream(out)); + ts.setLevelSystemOut(TraceSystem.DEBUG); + ts.getTrace("test").debug(new Exception("error"), "test"); + ts.close(); + String outString = new String(out.toByteArray()); + assertContains(outString, "error"); + assertContains(outString, "Exception"); + assertContains(outString, "test"); + } + + private void testReadOnly() throws Exception { + String readOnlyFile = getBaseDir() + "/readOnly.log"; + FileUtils.delete(readOnlyFile); + FileUtils.newOutputStream(readOnlyFile, false).close(); + FileUtils.setReadOnly(readOnlyFile); + TraceSystem ts = new TraceSystem(readOnlyFile); + ts.setLevelFile(TraceSystem.INFO); + ts.getTrace("test").info("test"); + FileUtils.delete(readOnlyFile); + ts.close(); + } + +} diff --git 
a/modules/h2/src/test/java/org/h2/test/unit/TestUtils.java b/modules/h2/src/test/java/org/h2/test/unit/TestUtils.java new file mode 100644 index 0000000000000..ee0c3fcc60e1e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestUtils.java @@ -0,0 +1,285 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.Reader; +import java.io.StringReader; +import java.math.BigInteger; +import java.sql.Timestamp; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Comparator; +import java.util.Date; +import java.util.Random; +import org.h2.test.TestBase; +import org.h2.util.Bits; +import org.h2.util.IOUtils; +import org.h2.util.Utils; + +/** + * Tests reflection utilities. + */ +public class TestUtils extends TestBase { + + /** + * Dummy field + */ + public final String testField = "abc"; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() throws Exception { + testIOUtils(); + testSortTopN(); + testSortTopNRandom(); + testWriteReadLong(); + testGetNonPrimitiveClass(); + testGetNonPrimitiveClass(); + testGetNonPrimitiveClass(); + testReflectionUtils(); + testParseBoolean(); + } + + private void testIOUtils() throws IOException { + for (int i = 0; i < 20; i++) { + byte[] data = new byte[i]; + InputStream in = new ByteArrayInputStream(data); + byte[] buffer = new byte[i]; + assertEquals(0, IOUtils.readFully(in, buffer, -2)); + assertEquals(0, IOUtils.readFully(in, buffer, -1)); + assertEquals(0, IOUtils.readFully(in, buffer, 0)); + for (int j = 1, off = 0;; j += 1) { + int read = Math.max(0, Math.min(i - off, j)); + int l = IOUtils.readFully(in, buffer, j); + assertEquals(read, l); + off += l; + if (l == 0) { + break; + } + } + assertEquals(0, IOUtils.readFully(in, buffer, 1)); + } + for (int i = 0; i < 10; i++) { + char[] data = new char[i]; + Reader in = new StringReader(new String(data)); + char[] buffer = new char[i]; + assertEquals(0, IOUtils.readFully(in, buffer, -2)); + assertEquals(0, IOUtils.readFully(in, buffer, -1)); + assertEquals(0, IOUtils.readFully(in, buffer, 0)); + for (int j = 1, off = 0;; j += 1) { + int read = Math.max(0, Math.min(i - off, j)); + int l = IOUtils.readFully(in, buffer, j); + assertEquals(read, l); + off += l; + if (l == 0) { + break; + } + } + assertEquals(0, IOUtils.readFully(in, buffer, 1)); + } + } + + private void testWriteReadLong() { + byte[] buff = new byte[8]; + for (long x : new long[]{Long.MIN_VALUE, Long.MAX_VALUE, 0, 1, -1, + Integer.MIN_VALUE, Integer.MAX_VALUE}) { + Bits.writeLong(buff, 0, x); + long y = Bits.readLong(buff, 0); + assertEquals(x, y); + } + Random r = new Random(1); + for (int i = 0; i < 1000; i++) { + long x = r.nextLong(); + Bits.writeLong(buff, 0, x); + long y = Bits.readLong(buff, 0); + assertEquals(x, y); + } + } + + private void 
testSortTopN() { + Comparator comp = new Comparator() { + @Override + public int compare(Integer o1, Integer o2) { + return o1.compareTo(o2); + } + }; + Integer[] arr = new Integer[] {}; + Utils.sortTopN(arr, 0, 5, comp); + + arr = new Integer[] { 1 }; + Utils.sortTopN(arr, 0, 5, comp); + + arr = new Integer[] { 3, 5, 1, 4, 2 }; + Utils.sortTopN(arr, 0, 2, comp); + assertEquals(arr[0].intValue(), 1); + assertEquals(arr[1].intValue(), 2); + } + + private void testSortTopNRandom() { + Random rnd = new Random(); + Comparator comp = new Comparator() { + @Override + public int compare(Integer o1, Integer o2) { + return o1.compareTo(o2); + } + }; + for (int z = 0; z < 10000; z++) { + Integer[] arr = new Integer[1 + rnd.nextInt(500)]; + for (int i = 0; i < arr.length; i++) { + arr[i] = rnd.nextInt(50); + } + Integer[] arr2 = Arrays.copyOf(arr, arr.length); + int offset = rnd.nextInt(arr.length); + int limit = rnd.nextInt(arr.length); + Utils.sortTopN(arr, offset, limit, comp); + Arrays.sort(arr2, comp); + for (int i = offset, end = Math.min(offset + limit, arr.length); i < end; i++) { + if (!arr[i].equals(arr2[i])) { + fail(offset + " " + end + "\n" + Arrays.toString(arr) + + "\n" + Arrays.toString(arr2)); + } + } + } + } + + private void testGetNonPrimitiveClass() { + testGetNonPrimitiveClass(BigInteger.class, BigInteger.class); + testGetNonPrimitiveClass(Boolean.class, boolean.class); + testGetNonPrimitiveClass(Byte.class, byte.class); + testGetNonPrimitiveClass(Character.class, char.class); + testGetNonPrimitiveClass(Byte.class, byte.class); + testGetNonPrimitiveClass(Double.class, double.class); + testGetNonPrimitiveClass(Float.class, float.class); + testGetNonPrimitiveClass(Integer.class, int.class); + testGetNonPrimitiveClass(Long.class, long.class); + testGetNonPrimitiveClass(Short.class, short.class); + testGetNonPrimitiveClass(Void.class, void.class); + } + + private void testGetNonPrimitiveClass(Class expected, Class p) { + assertEquals(expected.getName(), 
Utils.getNonPrimitiveClass(p).getName()); + } + + private void testReflectionUtils() throws Exception { + // Static method call + long currentTimeNanos1 = System.nanoTime(); + long currentTimeNanos2 = (Long) Utils.callStaticMethod( + "java.lang.System.nanoTime"); + assertTrue(currentTimeNanos1 <= currentTimeNanos2); + // New Instance + Object instance = Utils.newInstance("java.lang.StringBuilder"); + // New Instance with int parameter + instance = Utils.newInstance("java.lang.StringBuilder", 10); + // StringBuilder.append or length don't work on JDK 5 due to + // http://bugs.sun.com/view_bug.do?bug_id=4283544 + instance = Utils.newInstance("java.lang.Integer", 10); + // Instance methods + long x = (Long) Utils.callMethod(instance, "longValue"); + assertEquals(10, x); + // Static fields + String pathSeparator = (String) Utils + .getStaticField("java.io.File.pathSeparator"); + assertEquals(File.pathSeparator, pathSeparator); + // Instance fields + String test = (String) Utils.getField(this, "testField"); + assertEquals(this.testField, test); + // Class present? 
+        assertFalse(Utils.isClassPresent("abc"));
+        assertTrue(Utils.isClassPresent(getClass().getName()));
+        Utils.callStaticMethod("java.lang.String.valueOf", "a");
+        Utils.callStaticMethod("java.awt.AWTKeyStroke.getAWTKeyStroke",
+                'x', java.awt.event.InputEvent.SHIFT_DOWN_MASK);
+        // Common comparable superclass
+        assertFalse(Utils.haveCommonComparableSuperclass(
+                Integer.class,
+                Long.class));
+        assertTrue(Utils.haveCommonComparableSuperclass(
+                Integer.class,
+                Integer.class));
+        assertTrue(Utils.haveCommonComparableSuperclass(
+                Timestamp.class,
+                Date.class));
+        assertFalse(Utils.haveCommonComparableSuperclass(
+                ArrayList.class,
+                Long.class));
+        assertFalse(Utils.haveCommonComparableSuperclass(
+                Integer.class,
+                ArrayList.class));
+    }
+
+    private void testParseBooleanCheckFalse(String value) {
+        assertFalse(Utils.parseBoolean(value, false, false));
+        assertFalse(Utils.parseBoolean(value, false, true));
+        assertFalse(Utils.parseBoolean(value, true, false));
+        assertFalse(Utils.parseBoolean(value, true, true));
+    }
+
+    private void testParseBooleanCheckTrue(String value) {
+        assertTrue(Utils.parseBoolean(value, false, false));
+        assertTrue(Utils.parseBoolean(value, false, true));
+        assertTrue(Utils.parseBoolean(value, true, false));
+        assertTrue(Utils.parseBoolean(value, true, true));
+    }
+
+    private void testParseBoolean() {
+        // Test for default value in case of null
+        assertFalse(Utils.parseBoolean(null, false, false));
+        assertFalse(Utils.parseBoolean(null, false, true));
+        assertTrue(Utils.parseBoolean(null, true, false));
+        assertTrue(Utils.parseBoolean(null, true, true));
+        // Test assorted valid strings
+        testParseBooleanCheckFalse("0");
+        testParseBooleanCheckFalse("f");
+        testParseBooleanCheckFalse("F");
+        testParseBooleanCheckFalse("n");
+        testParseBooleanCheckFalse("N");
+        testParseBooleanCheckFalse("no");
+        testParseBooleanCheckFalse("No");
+        testParseBooleanCheckFalse("NO");
+        testParseBooleanCheckFalse("false");
+        testParseBooleanCheckFalse("False");
+        testParseBooleanCheckFalse("FALSE");
+        testParseBooleanCheckTrue("1");
+        testParseBooleanCheckTrue("t");
+        testParseBooleanCheckTrue("T");
+        testParseBooleanCheckTrue("y");
+        testParseBooleanCheckTrue("Y");
+        testParseBooleanCheckTrue("yes");
+        testParseBooleanCheckTrue("Yes");
+        testParseBooleanCheckTrue("YES");
+        testParseBooleanCheckTrue("true");
+        testParseBooleanCheckTrue("True");
+        testParseBooleanCheckTrue("TRUE");
+        // Test other values
+        assertFalse(Utils.parseBoolean("BAD", false, false));
+        assertTrue(Utils.parseBoolean("BAD", true, false));
+        try {
+            Utils.parseBoolean("BAD", false, true);
+            fail();
+        } catch (IllegalArgumentException e) {
+            // OK
+        }
+        try {
+            Utils.parseBoolean("BAD", true, true);
+            fail();
+        } catch (IllegalArgumentException e) {
+            // OK
+        }
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestValue.java b/modules/h2/src/test/java/org/h2/test/unit/TestValue.java
new file mode 100644
index 0000000000000..e6e1196ebc733
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/unit/TestValue.java
@@ -0,0 +1,394 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.unit;
+
+import java.math.BigDecimal;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.sql.Types;
+import java.util.Arrays;
+import java.util.Calendar;
+import java.util.TimeZone;
+import java.util.UUID;
+
+import org.h2.api.ErrorCode;
+import org.h2.message.DbException;
+import org.h2.test.TestBase;
+import org.h2.test.utils.AssertThrows;
+import org.h2.tools.SimpleResultSet;
+import org.h2.util.Bits;
+import org.h2.value.DataType;
+import org.h2.value.Value;
+import org.h2.value.ValueArray;
+import org.h2.value.ValueBytes;
+import org.h2.value.ValueDecimal;
+import org.h2.value.ValueDouble;
+import org.h2.value.ValueFloat;
+import org.h2.value.ValueJavaObject;
+import org.h2.value.ValueLobDb;
+import org.h2.value.ValueNull;
+import org.h2.value.ValueResultSet;
+import org.h2.value.ValueString;
+import org.h2.value.ValueTimestamp;
+import org.h2.value.ValueUuid;
+
+/**
+ * Tests features of values.
+ */
+public class TestValue extends TestBase {
+
+    /**
+     * Run just this test.
+     *
+     * @param a ignored
+     */
+    public static void main(String... a) throws Exception {
+        TestBase.createCaller().init().test();
+    }
+
+    @Override
+    public void test() throws SQLException {
+        testResultSetOperations();
+        testBinaryAndUuid();
+        testCastTrim();
+        testValueResultSet();
+        testDataType();
+        testUUID();
+        testDouble(false);
+        testDouble(true);
+        testTimestamp();
+        testModulusDouble();
+        testModulusDecimal();
+        testModulusOperator();
+    }
+
+    private void testResultSetOperations() throws SQLException {
+        SimpleResultSet rs = new SimpleResultSet();
+        rs.setAutoClose(false);
+        rs.addColumn("X", Types.INTEGER, 10, 0);
+        rs.addRow(new Object[]{null});
+        rs.next();
+        for (int type = Value.NULL; type < Value.TYPE_COUNT; type++) {
+            if (type == 23) {
+                // a defunct experimental type
+            } else {
+                Value v = DataType.readValue(null, rs, 1, type);
+                assertTrue(v == ValueNull.INSTANCE);
+            }
+        }
+        testResultSetOperation(new byte[0]);
+        testResultSetOperation(1);
+        testResultSetOperation(Boolean.TRUE);
+        testResultSetOperation((byte) 1);
+        testResultSetOperation((short) 2);
+        testResultSetOperation((long) 3);
+        testResultSetOperation(4.0f);
+        testResultSetOperation(5.0d);
+        testResultSetOperation(new Date(6));
+        testResultSetOperation(new Time(7));
+        testResultSetOperation(new Timestamp(8));
+        testResultSetOperation(new BigDecimal("9"));
+        testResultSetOperation(UUID.randomUUID());
+
+        SimpleResultSet rs2 = new SimpleResultSet();
+        rs2.setAutoClose(false);
+        rs2.addColumn("X", Types.INTEGER, 10, 0);
+        rs2.addRow(new Object[]{1});
+        rs2.next();
+        testResultSetOperation(rs2);
+
+    }
+
+    private void testResultSetOperation(Object obj) throws SQLException {
+        SimpleResultSet rs = new SimpleResultSet();
+        rs.setAutoClose(false);
+        int valueType = DataType.getTypeFromClass(obj.getClass());
+        int sqlType = DataType.convertTypeToSQLType(valueType);
+        rs.addColumn("X", sqlType, 10, 0);
+        rs.addRow(new Object[]{obj});
+        rs.next();
+        Value v = DataType.readValue(null, rs, 1, valueType);
+        Value v2 = DataType.convertToValue(null, obj, valueType);
+ if (v.getType() == Value.RESULT_SET) { + assertEquals(v.toString(), v2.toString()); + } else { + assertTrue(v.equals(v2)); + } + } + + private void testBinaryAndUuid() throws SQLException { + try (Connection conn = getConnection("binaryAndUuid")) { + UUID uuid = UUID.randomUUID(); + PreparedStatement prep; + ResultSet rs; + // Check conversion to byte[] + prep = conn.prepareStatement("SELECT * FROM TABLE(X BINARY=?)"); + prep.setObject(1, new Object[] { uuid }); + rs = prep.executeQuery(); + rs.next(); + assertTrue(Arrays.equals(Bits.uuidToBytes(uuid), (byte[]) rs.getObject(1))); + // Check that type is not changed + prep = conn.prepareStatement("SELECT * FROM TABLE(X UUID=?)"); + prep.setObject(1, new Object[] { uuid }); + rs = prep.executeQuery(); + rs.next(); + assertEquals(uuid, rs.getObject(1)); + } finally { + deleteDb("binaryAndUuid"); + } + } + + private void testCastTrim() { + Value v; + String spaces = new String(new char[100]).replace((char) 0, ' '); + + v = ValueArray.get(new Value[] { ValueString.get("hello"), + ValueString.get("world") }); + assertEquals(10, v.getPrecision()); + assertEquals(5, v.convertPrecision(5, true).getPrecision()); + v = ValueArray.get(new Value[]{ValueString.get(""), ValueString.get("")}); + assertEquals(0, v.getPrecision()); + assertEquals("('')", v.convertPrecision(1, true).toString()); + + v = ValueBytes.get(spaces.getBytes()); + assertEquals(100, v.getPrecision()); + assertEquals(10, v.convertPrecision(10, false).getPrecision()); + assertEquals(10, v.convertPrecision(10, false).getBytes().length); + assertEquals(32, v.convertPrecision(10, false).getBytes()[9]); + assertEquals(10, v.convertPrecision(10, true).getPrecision()); + + final Value vd = ValueDecimal.get(new BigDecimal("1234567890.123456789")); + assertEquals(19, vd.getPrecision()); + assertEquals("1234567890.1234567", vd.convertPrecision(10, true).getString()); + new AssertThrows(ErrorCode.NUMERIC_VALUE_OUT_OF_RANGE_1) { + @Override + public void test() { + 
vd.convertPrecision(10, false); + } + }; + + v = ValueLobDb.createSmallLob(Value.CLOB, spaces.getBytes(), 100); + assertEquals(100, v.getPrecision()); + assertEquals(10, v.convertPrecision(10, false).getPrecision()); + assertEquals(10, v.convertPrecision(10, false).getString().length()); + assertEquals(" ", v.convertPrecision(10, false).getString()); + assertEquals(10, v.convertPrecision(10, true).getPrecision()); + + v = ValueLobDb.createSmallLob(Value.BLOB, spaces.getBytes(), 100); + assertEquals(100, v.getPrecision()); + assertEquals(10, v.convertPrecision(10, false).getPrecision()); + assertEquals(10, v.convertPrecision(10, false).getBytes().length); + assertEquals(32, v.convertPrecision(10, false).getBytes()[9]); + assertEquals(10, v.convertPrecision(10, true).getPrecision()); + + ResultSet rs = new SimpleResultSet(); + v = ValueResultSet.get(rs); + assertEquals(Integer.MAX_VALUE, v.getPrecision()); + assertEquals(Integer.MAX_VALUE, v.convertPrecision(10, false).getPrecision()); + assertTrue(rs == v.convertPrecision(10, false).getObject()); + assertFalse(rs == v.convertPrecision(10, true).getObject()); + assertEquals(Integer.MAX_VALUE, v.convertPrecision(10, true).getPrecision()); + + v = ValueString.get(spaces); + assertEquals(100, v.getPrecision()); + assertEquals(10, v.convertPrecision(10, false).getPrecision()); + assertEquals(" ", v.convertPrecision(10, false).getString()); + assertEquals(" ", v.convertPrecision(10, true).getString()); + + } + + private void testValueResultSet() throws SQLException { + SimpleResultSet rs = new SimpleResultSet(); + rs.setAutoClose(false); + rs.addColumn("ID", Types.INTEGER, 0, 0); + rs.addColumn("NAME", Types.VARCHAR, 255, 0); + rs.addRow(1, "Hello"); + rs.addRow(2, "World"); + rs.addRow(3, "Peace"); + + ValueResultSet v; + v = ValueResultSet.get(rs); + assertTrue(rs == v.getObject()); + + v = ValueResultSet.getCopy(rs, 2); + assertEquals(0, v.hashCode()); + assertEquals(Integer.MAX_VALUE, v.getDisplaySize()); + 
assertEquals(Integer.MAX_VALUE, v.getPrecision()); + assertEquals(0, v.getScale()); + assertEquals("", v.getSQL()); + assertEquals(Value.RESULT_SET, v.getType()); + assertEquals("((1, Hello), (2, World))", v.getString()); + rs.beforeFirst(); + ValueResultSet v2 = ValueResultSet.getCopy(rs, 2); + assertTrue(v.equals(v)); + assertFalse(v.equals(v2)); + rs.beforeFirst(); + + ResultSet rs2 = v.getResultSet(); + rs2.next(); + rs.next(); + assertEquals(rs.getInt(1), rs2.getInt(1)); + assertEquals(rs.getString(2), rs2.getString(2)); + rs2.next(); + rs.next(); + assertEquals(rs.getInt(1), rs2.getInt(1)); + assertEquals(rs.getString(2), rs2.getString(2)); + assertFalse(rs2.next()); + assertTrue(rs.next()); + } + + private void testDataType() { + testDataType(Value.NULL, null); + testDataType(Value.NULL, Void.class); + testDataType(Value.NULL, void.class); + testDataType(Value.ARRAY, String[].class); + testDataType(Value.STRING, String.class); + testDataType(Value.INT, Integer.class); + testDataType(Value.LONG, Long.class); + testDataType(Value.BOOLEAN, Boolean.class); + testDataType(Value.DOUBLE, Double.class); + testDataType(Value.BYTE, Byte.class); + testDataType(Value.SHORT, Short.class); + testDataType(Value.FLOAT, Float.class); + testDataType(Value.BYTES, byte[].class); + testDataType(Value.UUID, UUID.class); + testDataType(Value.NULL, Void.class); + testDataType(Value.DECIMAL, BigDecimal.class); + testDataType(Value.RESULT_SET, ResultSet.class); + testDataType(Value.BLOB, Value.ValueBlob.class); + testDataType(Value.CLOB, Value.ValueClob.class); + testDataType(Value.DATE, Date.class); + testDataType(Value.TIME, Time.class); + testDataType(Value.TIMESTAMP, Timestamp.class); + testDataType(Value.TIMESTAMP, java.util.Date.class); + testDataType(Value.CLOB, java.io.Reader.class); + testDataType(Value.CLOB, java.sql.Clob.class); + testDataType(Value.BLOB, java.io.InputStream.class); + testDataType(Value.BLOB, java.sql.Blob.class); + testDataType(Value.ARRAY, 
Object[].class); + testDataType(Value.JAVA_OBJECT, StringBuffer.class); + } + + private void testDataType(int type, Class clazz) { + assertEquals(type, DataType.getTypeFromClass(clazz)); + } + + private void testDouble(boolean useFloat) { + double[] d = { + Double.NEGATIVE_INFINITY, + -1, + 0, + 1, + Double.POSITIVE_INFINITY, + Double.NaN + }; + Value[] values = new Value[d.length]; + for (int i = 0; i < d.length; i++) { + Value v = useFloat ? (Value) ValueFloat.get((float) d[i]) + : (Value) ValueDouble.get(d[i]); + values[i] = v; + assertTrue(values[i].compareTypeSafe(values[i], null) == 0); + assertTrue(v.equals(v)); + assertEquals(Integer.compare(i, 2), v.getSignum()); + } + for (int i = 0; i < d.length - 1; i++) { + assertTrue(values[i].compareTypeSafe(values[i+1], null) < 0); + assertTrue(values[i + 1].compareTypeSafe(values[i], null) > 0); + assertTrue(!values[i].equals(values[i+1])); + } + } + + private void testTimestamp() { + ValueTimestamp valueTs = ValueTimestamp.parse("2000-01-15 10:20:30.333222111"); + Timestamp ts = Timestamp.valueOf("2000-01-15 10:20:30.333222111"); + assertEquals(ts.toString(), valueTs.getString()); + assertEquals(ts, valueTs.getTimestamp()); + Calendar c = Calendar.getInstance(TimeZone.getTimeZone("Europe/Berlin")); + c.set(2018, 02, 25, 1, 59, 00); + c.set(Calendar.MILLISECOND, 123); + long expected = c.getTimeInMillis(); + ts = ValueTimestamp.parse("2018-03-25 01:59:00.123123123 Europe/Berlin").getTimestamp(); + assertEquals(expected, ts.getTime()); + assertEquals(123123123, ts.getNanos()); + ts = ValueTimestamp.parse("2018-03-25 01:59:00.123123123+01").getTimestamp(); + assertEquals(expected, ts.getTime()); + assertEquals(123123123, ts.getNanos()); + expected += 60000; // 1 minute + ts = ValueTimestamp.parse("2018-03-25 03:00:00.123123123 Europe/Berlin").getTimestamp(); + assertEquals(expected, ts.getTime()); + assertEquals(123123123, ts.getNanos()); + ts = ValueTimestamp.parse("2018-03-25 03:00:00.123123123+02").getTimestamp(); 
+ assertEquals(expected, ts.getTime()); + assertEquals(123123123, ts.getNanos()); + } + + private void testUUID() { + long maxHigh = 0, maxLow = 0, minHigh = -1L, minLow = -1L; + for (int i = 0; i < 100; i++) { + ValueUuid uuid = ValueUuid.getNewRandom(); + maxHigh |= uuid.getHigh(); + maxLow |= uuid.getLow(); + minHigh &= uuid.getHigh(); + minLow &= uuid.getLow(); + } + ValueUuid max = ValueUuid.get(maxHigh, maxLow); + assertEquals("ffffffff-ffff-4fff-bfff-ffffffffffff", max.getString()); + ValueUuid min = ValueUuid.get(minHigh, minLow); + assertEquals("00000000-0000-4000-8000-000000000000", min.getString()); + + // Test conversion from ValueJavaObject to ValueUuid + String uuidStr = "12345678-1234-4321-8765-123456789012"; + + UUID origUUID = UUID.fromString(uuidStr); + ValueJavaObject valObj = ValueJavaObject.getNoCopy(origUUID, null, null); + Value valUUID = valObj.convertTo(Value.UUID); + assertTrue(valUUID instanceof ValueUuid); + assertTrue(valUUID.getString().equals(uuidStr)); + assertTrue(valUUID.getObject().equals(origUUID)); + + ValueJavaObject voString = ValueJavaObject.getNoCopy( + new String("This is not a ValueUuid object"), null, null); + assertThrows(DbException.class, voString).convertTo(Value.UUID); + } + + private void testModulusDouble() { + final ValueDouble vd1 = ValueDouble.get(12); + new AssertThrows(ErrorCode.DIVISION_BY_ZERO_1) { @Override + public void test() { + vd1.modulus(ValueDouble.get(0)); + }}; + ValueDouble vd2 = ValueDouble.get(10); + ValueDouble vd3 = vd1.modulus(vd2); + assertEquals(2, vd3.getDouble()); + } + + private void testModulusDecimal() { + final ValueDecimal vd1 = ValueDecimal.get(new BigDecimal(12)); + new AssertThrows(ErrorCode.DIVISION_BY_ZERO_1) { @Override + public void test() { + vd1.modulus(ValueDecimal.get(new BigDecimal(0))); + }}; + ValueDecimal vd2 = ValueDecimal.get(new BigDecimal(10)); + ValueDecimal vd3 = vd1.modulus(vd2); + assertEquals(2, vd3.getDouble()); + } + + private void testModulusOperator() 
throws SQLException { + try (Connection conn = getConnection("modulus")) { + ResultSet rs = conn.createStatement().executeQuery("CALL 12 % 10"); + rs.next(); + assertEquals(2, rs.getInt(1)); + } finally { + deleteDb("modulus"); + } + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestValueHashMap.java b/modules/h2/src/test/java/org/h2/test/unit/TestValueHashMap.java new file mode 100644 index 0000000000000..39c2f70a5a525 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/TestValueHashMap.java @@ -0,0 +1,178 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.Random; + +import org.h2.api.JavaObjectSerializer; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.store.LobStorageBackend; +import org.h2.test.TestBase; +import org.h2.util.SmallLRUCache; +import org.h2.util.TempFileDeleter; +import org.h2.util.ValueHashMap; +import org.h2.value.CompareMode; +import org.h2.value.Value; +import org.h2.value.ValueDouble; +import org.h2.value.ValueInt; + +/** + * Tests the value hash map. + */ +public class TestValueHashMap extends TestBase implements DataHandler { + + CompareMode compareMode = CompareMode.getInstance(null, 0); + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) throws Exception { + TestBase.createCaller().init().test(); + } + + @Override + public void test() { + testNotANumber(); + testRandomized(); + } + + private void testNotANumber() { + ValueHashMap map = ValueHashMap.newInstance(); + for (int i = 1; i < 100; i++) { + double d = Double.longBitsToDouble(0x7ff0000000000000L | i); + ValueDouble v = ValueDouble.get(d); + map.put(v, null); + assertEquals(1, map.size()); + } + } + + private void testRandomized() { + ValueHashMap map = ValueHashMap.newInstance(); + HashMap hash = new HashMap<>(); + Random random = new Random(1); + Comparator vc = new Comparator() { + @Override + public int compare(Value v1, Value v2) { + return v1.compareTo(v2, compareMode); + } + }; + for (int i = 0; i < 10000; i++) { + int op = random.nextInt(10); + Value key = ValueInt.get(random.nextInt(100)); + Value value = ValueInt.get(random.nextInt(100)); + switch (op) { + case 0: + map.put(key, value); + hash.put(key, value); + break; + case 1: + map.remove(key); + hash.remove(key); + break; + case 2: + Value v1 = map.get(key); + Value v2 = hash.get(key); + assertTrue(v1 == null ? 
v2 == null : v1.equals(v2)); + break; + case 3: { + ArrayList a1 = map.keys(); + ArrayList a2 = new ArrayList<>(hash.keySet()); + assertEquals(a1.size(), a2.size()); + Collections.sort(a1, vc); + Collections.sort(a2, vc); + for (int j = 0; j < a1.size(); j++) { + assertTrue(a1.get(j).equals(a2.get(j))); + } + break; + } + case 4: + ArrayList a1 = map.values(); + ArrayList a2 = new ArrayList<>(hash.values()); + assertEquals(a1.size(), a2.size()); + Collections.sort(a1, vc); + Collections.sort(a2, vc); + for (int j = 0; j < a1.size(); j++) { + assertTrue(a1.get(j).equals(a2.get(j))); + } + break; + default: + } + } + } + + @Override + public String getDatabasePath() { + return null; + } + + @Override + public FileStore openFile(String name, String mode, boolean mustExist) { + return null; + } + + @Override + public void checkPowerOff() { + // nothing to do + } + + @Override + public void checkWritingAllowed() { + // nothing to do + } + + @Override + public int getMaxLengthInplaceLob() { + return 0; + } + + @Override + public String getLobCompressionAlgorithm(int type) { + return null; + } + + @Override + public Object getLobSyncObject() { + return this; + } + + @Override + public SmallLRUCache getLobFileListCache() { + return null; + } + + @Override + public TempFileDeleter getTempFileDeleter() { + return TempFileDeleter.getInstance(); + } + + @Override + public LobStorageBackend getLobStorage() { + return null; + } + + @Override + public int readLob(long lobId, byte[] hmac, long offset, byte[] buff, + int off, int length) { + return -1; + } + + @Override + public JavaObjectSerializer getJavaObjectSerializer() { + return null; + } + + @Override + public CompareMode getCompareMode() { + return compareMode; + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/TestValueMemory.java b/modules/h2/src/test/java/org/h2/test/unit/TestValueMemory.java new file mode 100644 index 0000000000000..441c0151803d3 --- /dev/null +++ 
b/modules/h2/src/test/java/org/h2/test/unit/TestValueMemory.java @@ -0,0 +1,315 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.unit; + +import java.io.ByteArrayInputStream; +import java.io.StringReader; +import java.math.BigDecimal; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.IdentityHashMap; +import java.util.Random; +import org.h2.api.JavaObjectSerializer; +import org.h2.engine.Constants; +import org.h2.store.DataHandler; +import org.h2.store.FileStore; +import org.h2.store.LobStorageFrontend; +import org.h2.test.TestBase; +import org.h2.test.utils.MemoryFootprint; +import org.h2.tools.SimpleResultSet; +import org.h2.util.SmallLRUCache; +import org.h2.util.TempFileDeleter; +import org.h2.util.Utils; +import org.h2.value.CompareMode; +import org.h2.value.DataType; +import org.h2.value.Value; +import org.h2.value.ValueArray; +import org.h2.value.ValueBoolean; +import org.h2.value.ValueByte; +import org.h2.value.ValueBytes; +import org.h2.value.ValueDate; +import org.h2.value.ValueDecimal; +import org.h2.value.ValueDouble; +import org.h2.value.ValueFloat; +import org.h2.value.ValueGeometry; +import org.h2.value.ValueInt; +import org.h2.value.ValueJavaObject; +import org.h2.value.ValueLong; +import org.h2.value.ValueNull; +import org.h2.value.ValueResultSet; +import org.h2.value.ValueShort; +import org.h2.value.ValueString; +import org.h2.value.ValueStringFixed; +import org.h2.value.ValueStringIgnoreCase; +import org.h2.value.ValueTime; +import org.h2.value.ValueTimestamp; +import org.h2.value.ValueTimestampTimeZone; +import org.h2.value.ValueUuid; + +/** + * Tests the memory consumption of values. Values can estimate how much memory + * they occupy, and this tests if this estimation is correct. 
+ */ +public class TestValueMemory extends TestBase implements DataHandler { + + private final Random random = new Random(1); + private final SmallLRUCache lobFileListCache = SmallLRUCache + .newInstance(128); + private LobStorageFrontend lobStorage; + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... a) throws Exception { + // run using -javaagent:ext/h2-1.2.139.jar + TestBase test = TestBase.createCaller().init(); + test.config.traceTest = true; + test.test(); + } + + @Override + public void test() throws SQLException { + testCompare(); + for (int i = 0; i < Value.TYPE_COUNT; i++) { + if (i == 23) { + // this used to be "TIMESTAMP UTC", which was a short-lived + // experiment + continue; + } + Value v = create(i); + String s = "type: " + v.getType() + + " calculated: " + v.getMemory() + + " real: " + MemoryFootprint.getObjectSize(v) + " " + + v.getClass().getName() + ": " + v.toString(); + trace(s); + } + for (int i = 0; i < Value.TYPE_COUNT; i++) { + if (i == 23) { + // this used to be "TIMESTAMP UTC", which was a short-lived + // experiment + continue; + } + Value v = create(i); + if (v == ValueNull.INSTANCE && i == Value.GEOMETRY) { + // jts not in the classpath, OK + continue; + } + assertEquals(i, v.getType()); + testType(i); + } + } + + private void testCompare() { + ValueDecimal a = ValueDecimal.get(new BigDecimal("0.0")); + ValueDecimal b = ValueDecimal.get(new BigDecimal("-0.00")); + assertTrue(a.hashCode() != b.hashCode()); + assertFalse(a.equals(b)); + } + + private void testType(int type) throws SQLException { + System.gc(); + System.gc(); + long first = Utils.getMemoryUsed(); + ArrayList list = new ArrayList<>(); + long memory = 0; + while (memory < 1000000) { + Value v = create(type); + memory += v.getMemory() + Constants.MEMORY_POINTER; + list.add(v); + } + Object[] array = list.toArray(); + IdentityHashMap map = new IdentityHashMap<>(); + for (Object a : array) { + map.put(a, a); + } + int size = 
map.size(); + map.clear(); + map = null; + list = null; + System.gc(); + System.gc(); + long used = Utils.getMemoryUsed() - first; + memory /= 1024; + if (config.traceTest || used > memory * 3) { + String msg = "Type: " + type + " Used memory: " + used + + " calculated: " + memory + " length: " + array.length + " size: " + size; + if (config.traceTest) { + trace(msg); + } + if (used > memory * 3) { + fail(msg); + } + } + } + private Value create(int type) throws SQLException { + switch (type) { + case Value.NULL: + return ValueNull.INSTANCE; + case Value.BOOLEAN: + return ValueBoolean.FALSE; + case Value.BYTE: + return ValueByte.get((byte) random.nextInt()); + case Value.SHORT: + return ValueShort.get((short) random.nextInt()); + case Value.INT: + return ValueInt.get(random.nextInt()); + case Value.LONG: + return ValueLong.get(random.nextLong()); + case Value.DECIMAL: + return ValueDecimal.get(new BigDecimal(random.nextInt())); + // + "12123344563456345634565234523451312312" + case Value.DOUBLE: + return ValueDouble.get(random.nextDouble()); + case Value.FLOAT: + return ValueFloat.get(random.nextFloat()); + case Value.TIME: + return ValueTime.get(new java.sql.Time(random.nextLong())); + case Value.DATE: + return ValueDate.get(new java.sql.Date(random.nextLong())); + case Value.TIMESTAMP: + return ValueTimestamp.fromMillis(random.nextLong()); + case Value.TIMESTAMP_TZ: + // clamp to max legal value + long nanos = Math.max(Math.min(random.nextLong(), + 24L * 60 * 60 * 1000 * 1000 * 1000 - 1), 0); + int timeZoneOffsetMins = (int) (random.nextFloat() * (24 * 60)) + - (12 * 60); + return ValueTimestampTimeZone.fromDateValueAndNanos( + random.nextLong(), nanos, (short) timeZoneOffsetMins); + case Value.BYTES: + return ValueBytes.get(randomBytes(random.nextInt(1000))); + case Value.STRING: + return ValueString.get(randomString(random.nextInt(100))); + case Value.STRING_IGNORECASE: + return ValueStringIgnoreCase.get(randomString(random.nextInt(100))); + case Value.BLOB: { 
+ int len = (int) Math.abs(random.nextGaussian() * 10); + byte[] data = randomBytes(len); + return getLobStorage().createBlob(new ByteArrayInputStream(data), len); + } + case Value.CLOB: { + int len = (int) Math.abs(random.nextGaussian() * 10); + String s = randomString(len); + return getLobStorage().createClob(new StringReader(s), len); + } + case Value.ARRAY: { + int len = random.nextInt(20); + Value[] list = new Value[len]; + for (int i = 0; i < list.length; i++) { + list[i] = create(Value.STRING); + } + return ValueArray.get(list); + } + case Value.RESULT_SET: + return ValueResultSet.get(new SimpleResultSet()); + case Value.JAVA_OBJECT: + return ValueJavaObject.getNoCopy(null, randomBytes(random.nextInt(100)), this); + case Value.UUID: + return ValueUuid.get(random.nextLong(), random.nextLong()); + case Value.STRING_FIXED: + return ValueStringFixed.get(randomString(random.nextInt(100))); + case Value.GEOMETRY: + if (DataType.GEOMETRY_CLASS == null) { + return ValueNull.INSTANCE; + } + return ValueGeometry.get("POINT (" + random.nextInt(100) + " " + + random.nextInt(100) + ")"); + default: + throw new AssertionError("type=" + type); + } + } + + private byte[] randomBytes(int len) { + byte[] data = new byte[len]; + if (random.nextBoolean()) { + // don't initialize always (compression) + random.nextBytes(data); + } + return data; + } + + private String randomString(int len) { + char[] chars = new char[len]; + if (random.nextBoolean()) { + // don't initialize always (compression) + for (int i = 0; i < chars.length; i++) { + chars[i] = (char) (random.nextGaussian() * 100); + } + } + return new String(chars); + } + + @Override + public void checkPowerOff() { + // nothing to do + } + + @Override + public void checkWritingAllowed() { + // nothing to do + } + + @Override + public String getDatabasePath() { + return getBaseDir() + "/valueMemory"; + } + + @Override + public String getLobCompressionAlgorithm(int type) { + return "LZF"; + } + + @Override + public Object 
getLobSyncObject() { + return this; + } + + @Override + public int getMaxLengthInplaceLob() { + return 100; + } + + @Override + public FileStore openFile(String name, String mode, boolean mustExist) { + return FileStore.open(this, name, mode); + } + + @Override + public SmallLRUCache getLobFileListCache() { + return lobFileListCache; + } + + @Override + public TempFileDeleter getTempFileDeleter() { + return TempFileDeleter.getInstance(); + } + + @Override + public LobStorageFrontend getLobStorage() { + if (lobStorage == null) { + lobStorage = new LobStorageFrontend(this); + } + return lobStorage; + } + + @Override + public int readLob(long lobId, byte[] hmac, long offset, byte[] buff, + int off, int length) { + return -1; + } + + @Override + public JavaObjectSerializer getJavaObjectSerializer() { + return null; + } + + @Override + public CompareMode getCompareMode() { + return CompareMode.getInstance(null, 0); + } +} diff --git a/modules/h2/src/test/java/org/h2/test/unit/package.html b/modules/h2/src/test/java/org/h2/test/unit/package.html new file mode 100644 index 0000000000000..fa15e2da4177e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/unit/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Unit tests that don't start the database (in most cases). + +

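The TestValueMemory diff above checks `Value.getMemory()` estimates against measured heap growth: it accumulates values until the estimated total crosses a budget, pins them in an `IdentityHashMap` so they stay reachable, then compares against sampled heap usage. The accumulation half of that pattern can be sketched with plain JDK types; `SizedValue`, `SizedString`, and the constants below are illustrative stand-ins for this sketch, not H2 APIs:

```java
import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;

// Minimal sketch of the estimation-accumulation pattern used by
// TestValueMemory.testType(). SizedValue mirrors the role of
// org.h2.value.Value#getMemory(); the numbers are assumptions.
class MemoryEstimateSketch {

    // assumed per-reference overhead, like Constants.MEMORY_POINTER in H2
    static final int MEMORY_POINTER = 8;

    interface SizedValue {
        int getMemory(); // estimated footprint in bytes
    }

    static final class SizedString implements SizedValue {
        final String s;
        SizedString(String s) { this.s = s; }
        // rough estimate: object header plus two bytes per char
        @Override public int getMemory() { return 24 + 2 * s.length(); }
    }

    /** Accumulate values until the estimated total reaches the budget. */
    static long fillToBudget(List<SizedValue> out, long budgetBytes) {
        long estimated = 0;
        int i = 0;
        while (estimated < budgetBytes) {
            SizedValue v = new SizedString("value-" + i++);
            out.add(v);
            estimated += v.getMemory() + MEMORY_POINTER;
        }
        return estimated;
    }

    public static void main(String[] args) {
        List<SizedValue> list = new ArrayList<>();
        long estimated = fillToBudget(list, 100_000);
        // an IdentityHashMap keeps each instance strongly reachable exactly
        // once, which is what TestValueMemory does before sampling heap usage
        IdentityHashMap<SizedValue, SizedValue> map = new IdentityHashMap<>();
        for (SizedValue v : list) {
            map.put(v, v);
        }
        System.out.println(map.size() == list.size());
        System.out.println(estimated >= 100_000);
    }
}
```

In the real test the measured side comes from `Utils.getMemoryUsed()` after two forced GCs, and the check allows a 3x slack between measured and estimated totals before failing.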
    \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/utils/AssertThrows.java b/modules/h2/src/test/java/org/h2/test/utils/AssertThrows.java new file mode 100644 index 0000000000000..3884de68cf2a5 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/AssertThrows.java @@ -0,0 +1,121 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.utils; + +import java.lang.reflect.Method; +import java.sql.SQLException; +import org.h2.message.DbException; + +/** + * Helper class to simplify negative testing. Usage: + *
+ * <pre>
+ * new AssertThrows() { public void test() {
+ *     Integer.parseInt("not a number");
+ * }};
+ * </pre>
    + */ +public abstract class AssertThrows { + + /** + * Create a new assertion object, and call the test method to verify the + * expected exception is thrown. + * + * @param expectedExceptionClass the expected exception class + */ + public AssertThrows(final Class expectedExceptionClass) { + this(new ResultVerifier() { + @Override + public boolean verify(Object returnValue, Throwable t, Method m, + Object... args) { + if (t == null) { + throw new AssertionError("Expected an exception of type " + + expectedExceptionClass.getSimpleName() + + " to be thrown, but the method returned successfully"); + } + if (!expectedExceptionClass.isAssignableFrom(t.getClass())) { + AssertionError ae = new AssertionError( + "Expected an exception of type\n" + + expectedExceptionClass.getSimpleName() + + " to be thrown, but the method under test " + + "threw an exception of type\n" + + t.getClass().getSimpleName() + + " (see in the 'Caused by' for the exception " + + "that was thrown)"); + ae.initCause(t); + throw ae; + } + return false; + } + }); + } + + /** + * Create a new assertion object, and call the test method to verify the + * expected exception is thrown. + */ + public AssertThrows() { + this(new ResultVerifier() { + @Override + public boolean verify(Object returnValue, Throwable t, Method m, + Object... args) { + if (t != null) { + throw new AssertionError("Expected an exception " + + "to be thrown, but the method returned successfully"); + } + // all exceptions are fine + return false; + } + }); + } + + /** + * Create a new assertion object, and call the test method to verify the + * expected exception is thrown. + * + * @param expectedErrorCode the error code of the exception + */ + public AssertThrows(final int expectedErrorCode) { + this(new ResultVerifier() { + @Override + public boolean verify(Object returnValue, Throwable t, Method m, + Object... 
args) { + int errorCode; + if (t instanceof DbException) { + errorCode = ((DbException) t).getErrorCode(); + } else if (t instanceof SQLException) { + errorCode = ((SQLException) t).getErrorCode(); + } else { + errorCode = 0; + } + if (errorCode != expectedErrorCode) { + AssertionError ae = new AssertionError( + "Expected an SQLException or DbException with error code " + + expectedErrorCode); + ae.initCause(t); + throw ae; + } + return false; + } + }); + } + + private AssertThrows(ResultVerifier verifier) { + try { + test(); + verifier.verify(null, null, null); + } catch (Exception e) { + verifier.verify(null, e, null); + } + } + + /** + * The test method that is called. + * + * @throws Exception the exception + */ + public abstract void test() throws Exception; + +} diff --git a/modules/h2/src/test/java/org/h2/test/utils/FilePathDebug.java b/modules/h2/src/test/java/org/h2/test/utils/FilePathDebug.java new file mode 100644 index 0000000000000..e3d15145abee4 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/FilePathDebug.java @@ -0,0 +1,337 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.utils; + +import java.io.FilterInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.util.List; +import org.h2.store.fs.FileBase; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathWrapper; + +/** + * A debugging file system that logs all operations. 
+ */ +public class FilePathDebug extends FilePathWrapper { + + private static final FilePathDebug INSTANCE = new FilePathDebug(); + + private static final IOException POWER_OFF = new IOException( + "Simulated power failure"); + + private int powerOffCount; + private boolean trace; + + /** + * Register the file system. + * + * @return the instance + */ + public static FilePathDebug register() { + FilePath.register(INSTANCE); + return INSTANCE; + } + + /** + * Check if the simulated power failure occurred. + * This call will decrement the countdown. + * + * @throws IOException if the simulated power failure occurred + */ + void checkPowerOff() throws IOException { + if (powerOffCount == 0) { + return; + } + if (powerOffCount > 1) { + powerOffCount--; + return; + } + powerOffCount = -1; + // throw new IOException("Simulated power failure"); + throw POWER_OFF; + } + + @Override + public void createDirectory() { + trace(name, "createDirectory"); + super.createDirectory(); + } + + @Override + public boolean createFile() { + trace(name, "createFile"); + return super.createFile(); + } + + @Override + public void delete() { + trace(name, "fileName"); + super.delete(); + } + + @Override + public boolean exists() { + trace(name, "exists"); + return super.exists(); + } + + @Override + public String getName() { + trace(name, "getName"); + return super.getName(); + } + + @Override + public long lastModified() { + trace(name, "lastModified"); + return super.lastModified(); + } + + @Override + public FilePath getParent() { + trace(name, "getParent"); + return super.getParent(); + } + + @Override + public boolean isAbsolute() { + trace(name, "isAbsolute"); + return super.isAbsolute(); + } + + @Override + public boolean isDirectory() { + trace(name, "isDirectory"); + return super.isDirectory(); + } + + @Override + public boolean canWrite() { + trace(name, "canWrite"); + return super.canWrite(); + } + + @Override + public boolean setReadOnly() { + trace(name, "setReadOnly"); + return 
super.setReadOnly(); + } + + @Override + public long size() { + trace(name, "size"); + return super.size(); + } + + @Override + public List newDirectoryStream() { + trace(name, "newDirectoryStream"); + return super.newDirectoryStream(); + } + + @Override + public FilePath toRealPath() { + trace(name, "toRealPath"); + return super.toRealPath(); + } + + @Override + public InputStream newInputStream() throws IOException { + trace(name, "newInputStream"); + InputStream in = super.newInputStream(); + if (!isTrace()) { + return in; + } + final String fileName = name; + return new FilterInputStream(in) { + @Override + public int read(byte[] b) throws IOException { + trace(fileName, "in.read(b)"); + return super.read(b); + } + + @Override + public int read(byte[] b, int off, int len) throws IOException { + trace(fileName, "in.read(b)", "in.read(b, " + off + ", " + len + ")"); + return super.read(b, off, len); + } + + @Override + public long skip(long n) throws IOException { + trace(fileName, "in.read(b)", "in.skip(" + n + ")"); + return super.skip(n); + } + }; + } + + @Override + public FileChannel open(String mode) throws IOException { + trace(name, "open", mode); + return new FileDebug(this, super.open(mode), name); + } + + @Override + public OutputStream newOutputStream(boolean append) throws IOException { + trace(name, "newOutputStream", append); + return super.newOutputStream(append); + } + + @Override + public void moveTo(FilePath newName, boolean atomicReplace) { + trace(name, "moveTo", unwrap(((FilePathDebug) newName).name)); + super.moveTo(newName, atomicReplace); + } + + @Override + public FilePath createTempFile(String suffix, boolean deleteOnExit, + boolean inTempDir) throws IOException { + trace(name, "createTempFile", suffix, deleteOnExit, inTempDir); + return super.createTempFile(suffix, deleteOnExit, inTempDir); + } + + /** + * Print a debug message. 
+ * + * @param fileName the (wrapped) file name + * @param method the method name + * @param params parameters if any + */ + void trace(String fileName, String method, Object... params) { + if (isTrace()) { + StringBuilder buff = new StringBuilder(" "); + buff.append(unwrap(fileName)).append(' ').append(method); + for (Object s : params) { + buff.append(' ').append(s); + } + System.out.println(buff); + } + } + + public void setPowerOffCount(int count) { + this.powerOffCount = count; + } + + public int getPowerOffCount() { + return powerOffCount; + } + + public boolean isTrace() { + return INSTANCE.trace; + } + + public void setTrace(boolean trace) { + INSTANCE.trace = trace; + } + + @Override + public String getScheme() { + return "debug"; + } + +} + +/** + * A debugging file that logs all operations. + */ +class FileDebug extends FileBase { + + private final FilePathDebug debug; + private final FileChannel channel; + private final String name; + + FileDebug(FilePathDebug debug, FileChannel channel, String name) { + this.debug = debug; + this.channel = channel; + this.name = name; + } + + @Override + public void implCloseChannel() throws IOException { + debug("close"); + channel.close(); + } + + @Override + public long position() throws IOException { + debug("getFilePointer"); + return channel.position(); + } + + @Override + public long size() throws IOException { + debug("length"); + return channel.size(); + } + + @Override + public int read(ByteBuffer dst) throws IOException { + debug("read", channel.position(), dst.position(), dst.remaining()); + return channel.read(dst); + } + + @Override + public FileChannel position(long pos) throws IOException { + debug("seek", pos); + channel.position(pos); + return this; + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + checkPowerOff(); + debug("truncate", newLength); + channel.truncate(newLength); + return this; + } + + @Override + public void force(boolean metaData) throws IOException 
{ + debug("force"); + channel.force(metaData); + } + + @Override + public int write(ByteBuffer src) throws IOException { + checkPowerOff(); + debug("write", channel.position(), src.position(), src.remaining()); + return channel.write(src); + } + + private void debug(String method, Object... params) { + debug.trace(name, method, params); + } + + private void checkPowerOff() throws IOException { + try { + debug.checkPowerOff(); + } catch (IOException e) { + try { + channel.close(); + } catch (IOException e2) { + // ignore + } + throw e; + } + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + debug("tryLock"); + return channel.tryLock(position, size, shared); + } + + @Override + public String toString() { + return name; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/utils/FilePathReorderWrites.java b/modules/h2/src/test/java/org/h2/test/utils/FilePathReorderWrites.java new file mode 100644 index 0000000000000..60be6b7bbe438 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/FilePathReorderWrites.java @@ -0,0 +1,401 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.utils; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.util.ArrayList; +import java.util.Random; +import org.h2.store.fs.FileBase; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathWrapper; +import org.h2.util.IOUtils; + +/** + * An unstable file system. It is used to simulate file system problems (for + * example out of disk space). + */ +public class FilePathReorderWrites extends FilePathWrapper { + + /** + * Whether trace output of all method calls is enabled. 
+ */ + static final boolean TRACE = false; + + private static final FilePathReorderWrites INSTANCE = new FilePathReorderWrites(); + + private static final IOException POWER_FAILURE = new IOException("Power Failure"); + + private static int powerFailureCountdown; + + private static boolean partialWrites; + + private static Random random = new Random(1); + + /** + * Register the file system. + * + * @return the instance + */ + public static FilePathReorderWrites register() { + FilePath.register(INSTANCE); + return INSTANCE; + } + + /** + * Set the number of write operations before a simulated power failure, and + * the random seed (for partial writes). + * + * @param count the number of write operations (0 to never fail, + * Integer.MAX_VALUE to count the operations) + * @param seed the new seed + */ + public void setPowerOffCountdown(int count, int seed) { + powerFailureCountdown = count; + random.setSeed(seed); + } + + public int getPowerOffCountdown() { + return powerFailureCountdown; + } + + /** + * Whether partial writes are possible (writing only part of the data). + * + * @param b true to enable + */ + public static void setPartialWrites(boolean b) { + partialWrites = b; + } + + static boolean isPartialWrites() { + return partialWrites; + } + + /** + * Get a buffer with a subset (the head) of the data of the source buffer. + * + * @param src the source buffer + * @return a buffer with a subset of the data + */ + ByteBuffer getRandomSubset(ByteBuffer src) { + int len = src.remaining(); + len = Math.min(4096, Math.min(len, 1 + random.nextInt(len))); + ByteBuffer temp = ByteBuffer.allocate(len); + src.get(temp.array()); + return temp; + } + + Random getRandom() { + return random; + } + + /** + * Check if the simulated problem occurred. + * This call will decrement the countdown. 
+ * + * @throws IOException if the simulated power failure occurred + */ + void checkError() throws IOException { + if (powerFailureCountdown == 0) { + return; + } + if (powerFailureCountdown < 0) { + throw POWER_FAILURE; + } + powerFailureCountdown--; + if (powerFailureCountdown == 0) { + powerFailureCountdown--; + throw POWER_FAILURE; + } + } + + @Override + public FileChannel open(String mode) throws IOException { + InputStream in = newInputStream(); + FilePath copy = FilePath.get(getBase().toString() + ".copy"); + OutputStream out = copy.newOutputStream(false); + IOUtils.copy(in, out); + in.close(); + out.close(); + FileChannel base = getBase().open(mode); + FileChannel readBase = copy.open(mode); + return new FileReorderWrites(this, base, readBase); + } + + @Override + public String getScheme() { + return "reorder"; + } + + public long getMaxAge() { + // TODO implement, configurable + return 45000; + } + + @Override + public void delete() { + super.delete(); + FilePath.get(getBase().toString() + ".copy").delete(); + } +} + +/** + * A write-reordering file implementation. + */ +class FileReorderWrites extends FileBase { + + private final FilePathReorderWrites file; + /** + * The base channel, where not all operations are immediately applied. + */ + private final FileChannel base; + + /** + * The base channel that is used for reading, where all operations are + * immediately applied to get a consistent view before a power failure. + */ + private final FileChannel readBase; + + private boolean closed; + + /** + * The list of not yet applied to the base channel. It is sorted by time. 
+ */ + private ArrayList notAppliedList = new ArrayList<>(); + + private int id; + + FileReorderWrites(FilePathReorderWrites file, FileChannel base, FileChannel readBase) { + this.file = file; + this.base = base; + this.readBase = readBase; + } + + @Override + public void implCloseChannel() throws IOException { + base.close(); + readBase.close(); + closed = true; + } + + @Override + public long position() throws IOException { + return readBase.position(); + } + + @Override + public long size() throws IOException { + return readBase.size(); + } + + @Override + public int read(ByteBuffer dst) throws IOException { + return readBase.read(dst); + } + + @Override + public int read(ByteBuffer dst, long pos) throws IOException { + return readBase.read(dst, pos); + } + + @Override + public FileChannel position(long pos) throws IOException { + readBase.position(pos); + return this; + } + + @Override + public FileChannel truncate(long newSize) throws IOException { + long oldSize = readBase.size(); + if (oldSize <= newSize) { + return this; + } + addOperation(new FileWriteOperation(id++, newSize, null)); + return this; + } + + private int addOperation(FileWriteOperation op) throws IOException { + trace("op " + op); + checkError(); + notAppliedList.add(op); + long now = op.getTime(); + for (int i = 0; i < notAppliedList.size() - 1; i++) { + FileWriteOperation old = notAppliedList.get(i); + boolean applyOld = false; + // String reason = ""; + if (old.getTime() + 45000 < now) { + // reason = "old"; + applyOld = true; + } else if (old.overlaps(op)) { + // reason = "overlap"; + applyOld = true; + } else if (file.getRandom().nextInt(100) < 10) { + // reason = "random"; + applyOld = true; + } + if (applyOld) { + trace("op apply " + op); + old.apply(base); + notAppliedList.remove(i); + i--; + } + } + return op.apply(readBase); + } + + private void applyAll() throws IOException { + trace("applyAll"); + for (FileWriteOperation op : notAppliedList) { + op.apply(base); + } + 
notAppliedList.clear(); + } + + @Override + public void force(boolean metaData) throws IOException { + checkError(); + readBase.force(metaData); + applyAll(); + } + + @Override + public int write(ByteBuffer src) throws IOException { + return write(src, readBase.position()); + } + + @Override + public int write(ByteBuffer src, long position) throws IOException { + if (FilePathReorderWrites.isPartialWrites() && src.remaining() > 2) { + ByteBuffer buf1 = src.slice(); + ByteBuffer buf2 = src.slice(); + int len1 = src.remaining() / 2; + int len2 = src.remaining() - len1; + buf1.limit(buf1.limit() - len2); + buf2.position(buf2.position() + len1); + int x = addOperation(new FileWriteOperation(id++, position, buf1)); + x += addOperation( + new FileWriteOperation(id++, position + len1, buf2)); + src.position( src.position() + x ); + return x; + } + return addOperation(new FileWriteOperation(id++, position, src)); + } + + private void checkError() throws IOException { + if (closed) { + throw new IOException("Closed"); + } + file.checkError(); + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + return readBase.tryLock(position, size, shared); + } + + @Override + public String toString() { + return file.getScheme() + ":" + file.toString(); + } + + private static void trace(String message) { + if (FilePathReorderWrites.TRACE) { + System.out.println(message); + } + } + + /** + * A file operation (that might be re-ordered with other operations, or not + * be applied on power failure). 
+ */ + static class FileWriteOperation { + private final int id; + private final long time; + private final ByteBuffer buffer; + private final long position; + + FileWriteOperation(int id, long position, ByteBuffer src) { + this.id = id; + this.time = System.currentTimeMillis(); + if (src == null) { + buffer = null; + } else { + int len = src.limit() - src.position(); + this.buffer = ByteBuffer.allocate(len); + buffer.put(src); + buffer.flip(); + } + this.position = position; + } + + public long getTime() { + return time; + } + + /** + * Check whether the file region of this operation overlaps with + * another operation. + * + * @param other the other operation + * @return if there is an overlap + */ + boolean overlaps(FileWriteOperation other) { + if (isTruncate() && other.isTruncate()) { + // we just keep the latest truncate operation + return true; + } + if (isTruncate()) { + return position < other.getEndPosition(); + } else if (other.isTruncate()) { + return getEndPosition() > other.position; + } + return position < other.getEndPosition() && + getEndPosition() > other.position; + } + + private boolean isTruncate() { + return buffer == null; + } + + private long getEndPosition() { + return position + getLength(); + } + + private int getLength() { + return buffer == null ? 0 : buffer.limit() - buffer.position(); + } + + /** + * Apply the operation to the channel. + * + * @param channel the channel + * @return the return value of the operation + */ + int apply(FileChannel channel) throws IOException { + if (isTruncate()) { + channel.truncate(position); + return -1; + } + int len = channel.write(buffer, position); + buffer.flip(); + return len; + } + + @Override + public String toString() { + String s = "[" + id + "]: @" + position + ( + isTruncate() ? 
"-truncate" : ("+" + getLength())); + return s; + } + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/utils/FilePathUnstable.java b/modules/h2/src/test/java/org/h2/test/utils/FilePathUnstable.java new file mode 100644 index 0000000000000..b3ed78c7daa99 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/FilePathUnstable.java @@ -0,0 +1,308 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.utils; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.util.List; +import java.util.Random; + +import org.h2.store.fs.FileBase; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathWrapper; + +/** + * An unstable file system. It is used to simulate file system problems (for + * example out of disk space). + */ +public class FilePathUnstable extends FilePathWrapper { + + private static final FilePathUnstable INSTANCE = new FilePathUnstable(); + + private static final IOException DISK_FULL = new IOException("Disk full"); + + private static int diskFullOffCount; + + private static boolean partialWrites; + + private static Random random = new Random(1); + + /** + * Register the file system. + * + * @return the instance + */ + public static FilePathUnstable register() { + FilePath.register(INSTANCE); + return INSTANCE; + } + + /** + * Set the number of write operations before the disk is full, and the + * random seed (for partial writes). 
+ * + * @param count the number of write operations (0 to never fail, + * Integer.MAX_VALUE to count the operations) + * @param seed the new seed + */ + public void setDiskFullCount(int count, int seed) { + diskFullOffCount = count; + random.setSeed(seed); + } + + public int getDiskFullCount() { + return diskFullOffCount; + } + + /** + * Whether partial writes are possible (writing only part of the data). + * + * @param partialWrites true to enable + */ + public void setPartialWrites(boolean partialWrites) { + FilePathUnstable.partialWrites = partialWrites; + } + + boolean getPartialWrites() { + return partialWrites; + } + + /** + * Get a buffer with a subset (the head) of the data of the source buffer. + * + * @param src the source buffer + * @return a buffer with a subset of the data + */ + ByteBuffer getRandomSubset(ByteBuffer src) { + int len = src.remaining(); + len = Math.min(4096, Math.min(len, 1 + random.nextInt(len))); + ByteBuffer temp = ByteBuffer.allocate(len); + src.get(temp.array()); + return temp; + } + + /** + * Check if the simulated problem occurred. + * This call will decrement the countdown. 
+     *
+     * @throws IOException if the simulated power failure occurred
+     */
+    void checkError() throws IOException {
+        if (diskFullOffCount == 0) {
+            return;
+        }
+        if (--diskFullOffCount > 0) {
+            return;
+        }
+        if (diskFullOffCount >= -1) {
+            diskFullOffCount--;
+            throw DISK_FULL;
+        }
+    }
+
+    @Override
+    public void createDirectory() {
+        super.createDirectory();
+    }
+
+    @Override
+    public boolean createFile() {
+        return super.createFile();
+    }
+
+    @Override
+    public void delete() {
+        super.delete();
+    }
+
+    @Override
+    public boolean exists() {
+        return super.exists();
+    }
+
+    @Override
+    public String getName() {
+        return super.getName();
+    }
+
+    @Override
+    public long lastModified() {
+        return super.lastModified();
+    }
+
+    @Override
+    public FilePath getParent() {
+        return super.getParent();
+    }
+
+    @Override
+    public boolean isAbsolute() {
+        return super.isAbsolute();
+    }
+
+    @Override
+    public boolean isDirectory() {
+        return super.isDirectory();
+    }
+
+    @Override
+    public boolean canWrite() {
+        return super.canWrite();
+    }
+
+    @Override
+    public boolean setReadOnly() {
+        return super.setReadOnly();
+    }
+
+    @Override
+    public long size() {
+        return super.size();
+    }
+
+    @Override
+    public List<FilePath> newDirectoryStream() {
+        return super.newDirectoryStream();
+    }
+
+    @Override
+    public FilePath toRealPath() {
+        return super.toRealPath();
+    }
+
+    @Override
+    public InputStream newInputStream() throws IOException {
+        return super.newInputStream();
+    }
+
+    @Override
+    public FileChannel open(String mode) throws IOException {
+        return new FileUnstable(this, super.open(mode));
+    }
+
+    @Override
+    public OutputStream newOutputStream(boolean append) throws IOException {
+        return super.newOutputStream(append);
+    }
+
+    @Override
+    public void moveTo(FilePath newName, boolean atomicReplace) {
+        super.moveTo(newName, atomicReplace);
+    }
+
+    @Override
+    public FilePath createTempFile(String suffix, boolean deleteOnExit,
+            boolean inTempDir) throws IOException {
+        return super.createTempFile(suffix, deleteOnExit, inTempDir);
+    }
+
+    @Override
+    public String getScheme() {
+        return "unstable";
+    }
+
+}
+
+/**
+ * A file that checks for errors before each write operation.
+ */
+class FileUnstable extends FileBase {
+
+    private final FilePathUnstable file;
+    private final FileChannel channel;
+    private boolean closed;
+
+    FileUnstable(FilePathUnstable file, FileChannel channel) {
+        this.file = file;
+        this.channel = channel;
+    }
+
+    @Override
+    public void implCloseChannel() throws IOException {
+        channel.close();
+        closed = true;
+    }
+
+    @Override
+    public long position() throws IOException {
+        return channel.position();
+    }
+
+    @Override
+    public long size() throws IOException {
+        return channel.size();
+    }
+
+    @Override
+    public int read(ByteBuffer dst) throws IOException {
+        return channel.read(dst);
+    }
+
+    @Override
+    public int read(ByteBuffer dst, long pos) throws IOException {
+        return channel.read(dst, pos);
+    }
+
+    @Override
+    public FileChannel position(long pos) throws IOException {
+        channel.position(pos);
+        return this;
+    }
+
+    @Override
+    public FileChannel truncate(long newLength) throws IOException {
+        checkError();
+        channel.truncate(newLength);
+        return this;
+    }
+
+    @Override
+    public void force(boolean metaData) throws IOException {
+        checkError();
+        channel.force(metaData);
+    }
+
+    @Override
+    public int write(ByteBuffer src) throws IOException {
+        checkError();
+        if (file.getPartialWrites()) {
+            return channel.write(file.getRandomSubset(src));
+        }
+        return channel.write(src);
+    }
+
+    @Override
+    public int write(ByteBuffer src, long position) throws IOException {
+        checkError();
+        if (file.getPartialWrites()) {
+            return channel.write(file.getRandomSubset(src), position);
+        }
+        return channel.write(src, position);
+    }
+
+    private void checkError() throws IOException {
+        if (closed) {
+            throw new IOException("Closed");
+        }
+        file.checkError();
+    }
+
+    @Override
+    public synchronized FileLock tryLock(long
position, long size, + boolean shared) throws IOException { + return channel.tryLock(position, size, shared); + } + + @Override + public String toString() { + return "unstable:" + file.toString(); + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/java/org/h2/test/utils/MemoryFootprint.java b/modules/h2/src/test/java/org/h2/test/utils/MemoryFootprint.java new file mode 100644 index 0000000000000..9f068c0874801 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/MemoryFootprint.java @@ -0,0 +1,84 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.utils; + +import java.lang.instrument.Instrumentation; +import java.math.BigDecimal; +import java.math.BigInteger; +import org.h2.engine.Constants; +import org.h2.result.RowImpl; +import org.h2.store.Data; +import org.h2.util.Profiler; +import org.h2.value.Value; + +/** + * Calculate the memory footprint of various objects. + */ +public class MemoryFootprint { + + /** + * Run just this test. + * + * @param a ignored + */ + public static void main(String... 
a) { + // System.getProperties().store(System.out, ""); + print("Object", new Object()); + print("Timestamp", new java.sql.Timestamp(0)); + print("Date", new java.sql.Date(0)); + print("Time", new java.sql.Time(0)); + print("BigDecimal", new BigDecimal("0")); + print("BigInteger", new BigInteger("0")); + print("String", new String("Hello")); + print("Data", Data.create(null, 10)); + print("Row", new RowImpl(new Value[0], 0)); + System.out.println(); + for (int i = 1; i < 128; i += i) { + + System.out.println(getArraySize(1, i) + " bytes per p1[]"); + print("boolean[" + i +"]", new boolean[i]); + + System.out.println(getArraySize(2, i) + " bytes per p2[]"); + print("char[" + i +"]", new char[i]); + print("short[" + i +"]", new short[i]); + + System.out.println(getArraySize(4, i) + " bytes per p4[]"); + print("int[" + i +"]", new int[i]); + print("float[" + i +"]", new float[i]); + + System.out.println(getArraySize(8, i) + " bytes per p8[]"); + print("long[" + i +"]", new long[i]); + print("double[" + i +"]", new double[i]); + + System.out.println(getArraySize(Constants.MEMORY_POINTER, i) + + " bytes per obj[]"); + print("Object[" + i +"]", new Object[i]); + + System.out.println(); + } + } + + private static int getArraySize(int type, int length) { + return ((Constants.MEMORY_OBJECT + length * type) + 7) / 8 * 8; + } + + private static void print(String type, Object o) { + System.out.println(getObjectSize(o) + " bytes per " + type); + } + + /** + * Get the number of bytes required for the given object. + * This method only works if the agent is set. + * + * @param o the object + * @return the number of bytes required + */ + public static long getObjectSize(Object o) { + Instrumentation inst = Profiler.getInstrumentation(); + return inst == null ? 
0 : inst.getObjectSize(o);
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/utils/OutputCatcher.java b/modules/h2/src/test/java/org/h2/test/utils/OutputCatcher.java
new file mode 100644
index 0000000000000..d9960786351a9
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/utils/OutputCatcher.java
@@ -0,0 +1,218 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.utils;
+
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.PrintStream;
+import java.io.PrintWriter;
+
+/**
+ * A tool to capture the output of System.out and System.err. The regular
+ * output still occurs, but it is additionally available as a String.
+ */
+public class OutputCatcher {
+
+    /**
+     * The HTML text will contain this string if something was written to
+     * System.err.
+     */
+    public static final String START_ERROR = "<span style=\"color:red\">";
+
+    private final ByteArrayOutputStream buff = new ByteArrayOutputStream();
+    private final DualOutputStream out, err;
+    private String output;
+
+    private OutputCatcher() {
+        HtmlOutputStream html = new HtmlOutputStream(buff);
+        out = new DualOutputStream(html, System.out, false);
+        err = new DualOutputStream(html, System.err, true);
+        System.setOut(new PrintStream(out, true));
+        System.setErr(new PrintStream(err, true));
+    }
+
+    /**
+     * Stop catching output.
+     */
+    public void stop() {
+        System.out.flush();
+        System.setOut(out.print);
+        System.err.flush();
+        System.setErr(err.print);
+        output = new String(buff.toByteArray());
+    }
+
+    /**
+     * Write the output to an HTML file.
+     *
+     * @param title the title
+     * @param fileName the file name
+     */
+    public void writeTo(String title, String fileName) throws IOException {
+        File file = new File(fileName);
+        file.getParentFile().mkdirs();
+        PrintWriter writer = new PrintWriter(new FileOutputStream(file));
+        writer.write("<html><head>\n");
+        writer.write("<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\" />\n");
+        writer.write("<title>\n");
+        writer.print(title);
+        writer.print("</title>\n");
+        writer.print("</head><body>\n");
+        writer.print("<h1>" + title + "</h1><br />\n");
+        writer.print(output);
+        writer.write("\n</body></html>");
+        writer.close();
+    }
+
+    /**
+     * Create a new output catcher and start it.
+     *
+     * @return the output catcher
+     */
+    public static OutputCatcher start() {
+        return new OutputCatcher();
+    }
+
+    /**
+     * An output stream that writes to both an HTML stream and a print stream.
+     */
+    static class DualOutputStream extends FilterOutputStream {
+
+        /**
+         * The original print stream.
+         */
+        final PrintStream print;
+
+        private final HtmlOutputStream htmlOut;
+        private final boolean error;
+
+        DualOutputStream(HtmlOutputStream out, PrintStream print, boolean error) {
+            super(out);
+            this.htmlOut = out;
+            this.print = print;
+            this.error = error;
+        }
+
+        @Override
+        public void close() throws IOException {
+            print.close();
+            super.close();
+        }
+
+        @Override
+        public void flush() throws IOException {
+            print.flush();
+            super.flush();
+        }
+
+        @Override
+        public void write(int b) throws IOException {
+            print.write(b);
+            htmlOut.write(error, b);
+        }
+    }
+
+    /**
+     * An output stream that has two modes: error mode and regular mode.
+     */
+    static class HtmlOutputStream extends FilterOutputStream {
+
+        private static final byte[] START = START_ERROR.getBytes();
+        private static final byte[] END = "</span>".getBytes();
+        private static final byte[] BR = "<br />\n".getBytes();
+        private static final byte[] NBSP = "&nbsp;".getBytes();
+        private static final byte[] LT = "&lt;".getBytes();
+        private static final byte[] GT = "&gt;".getBytes();
+        private static final byte[] AMP = "&amp;".getBytes();
+        private boolean error;
+        private boolean hasError;
+        private boolean convertSpace;
+
+        HtmlOutputStream(OutputStream out) {
+            super(out);
+        }
+
+        /**
+         * Check if the error mode was used.
+         *
+         * @return true if it was
+         */
+        boolean hasError() {
+            return hasError;
+        }
+
+        /**
+         * Enable or disable the error mode.
+         *
+         * @param error the flag
+         */
+        void setError(boolean error) throws IOException {
+            if (error != this.error) {
+                if (error) {
+                    hasError = true;
+                    super.write(START);
+                } else {
+                    super.write(END);
+                }
+                this.error = error;
+            }
+        }
+
+        /**
+         * Write a character.
+         *
+         * @param errorStream if the character comes from the error stream
+         * @param b the character
+         */
+        void write(boolean errorStream, int b) throws IOException {
+            setError(errorStream);
+            switch (b) {
+            case '\n':
+                super.write(BR);
+                convertSpace = true;
+                break;
+            case '\t':
+                super.write(NBSP);
+                super.write(NBSP);
+                break;
+            case ' ':
+                if (convertSpace) {
+                    super.write(NBSP);
+                } else {
+                    super.write(b);
+                }
+                break;
+            case '<':
+                super.write(LT);
+                break;
+            case '>':
+                super.write(GT);
+                break;
+            case '&':
+                super.write(AMP);
+                break;
+            default:
+                if (b >= 128) {
+                    super.write(("&#" + b + ";").getBytes());
+                } else {
+                    super.write(b);
+                }
+                convertSpace = false;
+            }
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/java/org/h2/test/utils/ProxyCodeGenerator.java b/modules/h2/src/test/java/org/h2/test/utils/ProxyCodeGenerator.java
new file mode 100644
index 0000000000000..86ef964252f79
--- /dev/null
+++ b/modules/h2/src/test/java/org/h2/test/utils/ProxyCodeGenerator.java
@@ -0,0 +1,360 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.test.utils;
+
+import java.io.PrintWriter;
+import java.io.StringWriter;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
+import java.util.HashMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import org.h2.util.SourceCompiler;
+
+/**
+ * A code generator for class proxies.
+ */
+public class ProxyCodeGenerator {
+
+    private static SourceCompiler compiler = new SourceCompiler();
+    private static HashMap<Class<?>, Class<?>> proxyMap = new HashMap<>();
+
+    private final TreeSet<String> imports = new TreeSet<>();
+    private final TreeMap<String, Method> methods = new TreeMap<>();
+    private String packageName;
+    private String className;
+    private Class<?> extendsClass;
+    private Constructor<?> constructor;
+
+    /**
+     * Check whether there is already a proxy class generated.
+     *
+     * @param c the class
+     * @return true if yes
+     */
+    public static boolean isGenerated(Class<?> c) {
+        return proxyMap.containsKey(c);
+    }
+
+    /**
+     * Generate a proxy class. The returned class extends the given class.
+     *
+     * @param c the class to extend
+     * @return the proxy class
+     */
+    public static Class<?> getClassProxy(Class<?> c) throws ClassNotFoundException {
+        Class<?> p = proxyMap.get(c);
+        if (p != null) {
+            return p;
+        }
+        // TODO how to extend a class with private constructor
+        // TODO call right constructor
+        // TODO use the right package
+        ProxyCodeGenerator cg = new ProxyCodeGenerator();
+        cg.setPackageName("bytecode");
+        cg.generateClassProxy(c);
+        StringWriter sw = new StringWriter();
+        cg.write(new PrintWriter(sw));
+        String code = sw.toString();
+        String proxy = "bytecode." + c.getSimpleName() + "Proxy";
+        compiler.setJavaSystemCompiler(false);
+        compiler.setSource(proxy, code);
+        // System.out.println(code);
+        Class<?> px = compiler.getClass(proxy);
+        proxyMap.put(c, px);
+        return px;
+    }
+
+    private void setPackageName(String packageName) {
+        this.packageName = packageName;
+    }
+
+    /**
+     * Generate a class that implements all static methods of the given class,
+     * but as non-static.
+     *
+     * @param clazz the class to extend
+     */
+    void generateStaticProxy(Class<?> clazz) {
+        imports.clear();
+        addImport(InvocationHandler.class);
+        addImport(Method.class);
+        addImport(clazz);
+        className = getClassName(clazz) + "Proxy";
+        for (Method m : clazz.getDeclaredMethods()) {
+            if (Modifier.isStatic(m.getModifiers())) {
+                if (!Modifier.isPrivate(m.getModifiers())) {
+                    addMethod(m);
+                }
+            }
+        }
+    }
+
+    private void generateClassProxy(Class<?> clazz) {
+        imports.clear();
+        addImport(InvocationHandler.class);
+        addImport(Method.class);
+        addImport(clazz);
+        className = getClassName(clazz) + "Proxy";
+        extendsClass = clazz;
+        int doNotOverride = Modifier.FINAL | Modifier.STATIC |
+                Modifier.PRIVATE | Modifier.ABSTRACT | Modifier.VOLATILE;
+        Class<?> dc = clazz;
+        while (dc != null) {
+            addImport(dc);
+            for (Method m : dc.getDeclaredMethods()) {
+                if ((m.getModifiers() & doNotOverride) == 0) {
+                    addMethod(m);
+                }
+            }
+            dc = dc.getSuperclass();
+        }
+        for (Constructor<?> c : clazz.getDeclaredConstructors()) {
+            if (Modifier.isPrivate(c.getModifiers())) {
+                continue;
+            }
+            if (constructor == null) {
+                constructor = c;
+            } else if (c.getParameterTypes().length <
+                    constructor.getParameterTypes().length) {
+                constructor = c;
+            }
+        }
+    }
+
+    private void addMethod(Method m) {
+        if (methods.containsKey(getMethodName(m))) {
+            // already declared in a subclass
+            return;
+        }
+        addImport(m.getReturnType());
+        for (Class<?> c : m.getParameterTypes()) {
+            addImport(c);
+        }
+        for (Class<?> c : m.getExceptionTypes()) {
+            addImport(c);
+        }
+        methods.put(getMethodName(m), m);
+    }
+
+    private static String getMethodName(Method m) {
+        StringBuilder buff = new StringBuilder();
+        buff.append(m.getReturnType()).append(' ');
+        buff.append(m.getName());
+        for (Class<?> p : m.getParameterTypes()) {
+            buff.append(' ');
+            buff.append(p.getName());
+        }
+        return buff.toString();
+    }
+
+    private void addImport(Class<?> c) {
+        while (c.isArray()) {
+            c = c.getComponentType();
+        }
+        if (!c.isPrimitive()) {
+            if (!"java.lang".equals(c.getPackage().getName())) {
+                imports.add(c.getName());
+            }
+        }
+    }
+
+    private static String getClassName(Class<?> c) {
+        return getClassName(c, false);
+    }
+
+    private static String getClassName(Class<?> c, boolean varArg) {
+        if (varArg) {
+            c = c.getComponentType();
+        }
+        String s = c.getSimpleName();
+        while (true) {
+            c = c.getEnclosingClass();
+            if (c == null) {
+                break;
+            }
+            s = c.getSimpleName() + "." + s;
+        }
+        if (varArg) {
+            return s + "...";
+        }
+        return s;
+    }
+
+    private void write(PrintWriter writer) {
+        if (packageName != null) {
+            writer.println("package " + packageName + ";");
+        }
+        for (String imp : imports) {
+            writer.println("import " + imp + ";");
+        }
+        writer.print("public class " + className);
+        if (extendsClass != null) {
+            writer.print(" extends " + getClassName(extendsClass));
+        }
+        writer.println(" {");
+        writer.println("    private final InvocationHandler ih;");
+        writer.println("    public " + className + "() {");
+        writer.println("        this(new InvocationHandler() {");
+        writer.println("            public Object invoke(Object proxy,");
+        writer.println("                    Method method, Object[] args) " +
+                "throws Throwable {");
+        writer.println("                return method.invoke(proxy, args);");
+        writer.println("            }});");
+        writer.println("    }");
+        writer.println("    public " + className + "(InvocationHandler ih) {");
+        if (constructor != null) {
+            writer.print("        super(");
+            int i = 0;
+            for (Class<?> p : constructor.getParameterTypes()) {
+                if (i > 0) {
+                    writer.print(", ");
+                }
+                if (p.isPrimitive()) {
+                    if (p == boolean.class) {
+                        writer.print("false");
+                    } else if (p == byte.class) {
+                        writer.print("(byte) 0");
+                    } else if (p == char.class) {
+                        writer.print("(char) 0");
+                    } else if (p == short.class) {
+                        writer.print("(short) 0");
+                    } else if (p == int.class) {
+                        writer.print("0");
+                    } else if (p == long.class) {
+                        writer.print("0L");
+                    } else if (p == float.class) {
+                        writer.print("0F");
+                    } else if (p == double.class) {
+                        writer.print("0D");
+                    }
+                } else {
+                    writer.print("null");
+                }
+                i++;
+            }
+            writer.println(");");
+        }
+        writer.println("        this.ih = ih;");
+        writer.println("    }");
+        writer.println("    @SuppressWarnings(\"unchecked\")");
+        writer.println("    private static <T extends RuntimeException> " +
+                "T convertException(Throwable e) {");
+        writer.println("        if (e instanceof Error) {");
+        writer.println("            throw (Error) e;");
+        writer.println("        }");
+        writer.println("        return (T) e;");
+        writer.println("    }");
+        for (Method m : methods.values()) {
+            Class<?> retClass = m.getReturnType();
+            writer.print("    ");
+            if (Modifier.isProtected(m.getModifiers())) {
+                // 'public' would also work
+                writer.print("protected ");
+            } else {
+                writer.print("public ");
+            }
+            writer.print(getClassName(retClass) +
+                    " " + m.getName() + "(");
+            Class<?>[] pc = m.getParameterTypes();
+            for (int i = 0; i < pc.length; i++) {
+                Class<?> p = pc[i];
+                if (i > 0) {
+                    writer.print(", ");
+                }
+                boolean varArg = i == pc.length - 1 && m.isVarArgs();
+                writer.print(getClassName(p, varArg) + " p" + i);
+            }
+            writer.print(")");
+            Class<?>[] ec = m.getExceptionTypes();
+            writer.print(" throws RuntimeException");
+            if (ec.length > 0) {
+                for (Class<?> e : ec) {
+                    writer.print(", ");
+                    writer.print(getClassName(e));
+                }
+            }
+            writer.println(" {");
+            writer.println("        try {");
+            writer.print("            ");
+            if (retClass != void.class) {
+                writer.print("return (");
                if (retClass == boolean.class) {
+                    writer.print("Boolean");
+                } else if (retClass == byte.class) {
+                    writer.print("Byte");
+                } else if (retClass == char.class) {
+                    writer.print("Character");
+                } else if (retClass == short.class) {
+                    writer.print("Short");
+                } else if (retClass == int.class) {
+                    writer.print("Integer");
+                } else if (retClass == long.class) {
+                    writer.print("Long");
+                } else if (retClass == float.class) {
+                    writer.print("Float");
+                } else if (retClass == double.class) {
+                    writer.print("Double");
+                } else {
+                    writer.print(getClassName(retClass));
+                }
+                writer.print(") ");
+            }
+            writer.print("ih.invoke(this, ");
+            writer.println(getClassName(m.getDeclaringClass()) +
+                    ".class.getDeclaredMethod(\"" + m.getName() +
+                    "\",");
+            writer.print("        new Class[] {");
+            int i = 0;
+            for (Class<?> p : m.getParameterTypes()) {
+                if (i > 0) {
+                    writer.print(", ");
+                }
+                writer.print(getClassName(p) + ".class");
+                i++;
+            }
+            writer.println("}),");
+            writer.print("        new Object[] {");
+            for (i = 0; i < m.getParameterTypes().length; i++) {
+                if (i > 0) {
+                    writer.print(", ");
+                }
+
writer.print("p" + i); + } + writer.println("});"); + writer.println(" } catch (Throwable e) {"); + writer.println(" throw convertException(e);"); + writer.println(" }"); + writer.println(" }"); + } + writer.println("}"); + writer.flush(); + } + + /** + * Format a method call, including arguments, for an exception message. + * + * @param m the method + * @param args the arguments + * @return the formatted string + */ + public static String formatMethodCall(Method m, Object... args) { + StringBuilder buff = new StringBuilder(); + buff.append(m.getName()).append('('); + for (int i = 0; i < args.length; i++) { + Object a = args[i]; + if (i > 0) { + buff.append(", "); + } + buff.append(a == null ? "null" : a.toString()); + } + buff.append(")"); + return buff.toString(); + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/utils/ResultVerifier.java b/modules/h2/src/test/java/org/h2/test/utils/ResultVerifier.java new file mode 100644 index 0000000000000..5cfcb7ca5b7c2 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/ResultVerifier.java @@ -0,0 +1,26 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.utils; + +import java.lang.reflect.Method; + +/** + * This handler is called after a method returned. + */ +public interface ResultVerifier { + + /** + * Verify the result or exception. + * + * @param returnValue the returned value or null + * @param t the exception / error or null if the method returned normally + * @param m the method or null if unknown + * @param args the arguments or null if unknown + * @return true if the method should be called again + */ + boolean verify(Object returnValue, Throwable t, Method m, Object... 
args); + +} diff --git a/modules/h2/src/test/java/org/h2/test/utils/SelfDestructor.java b/modules/h2/src/test/java/org/h2/test/utils/SelfDestructor.java new file mode 100644 index 0000000000000..04c73b76fc237 --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/SelfDestructor.java @@ -0,0 +1,108 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.test.utils; + +import java.sql.Timestamp; +import org.h2.util.ThreadDeadlockDetector; + +/** + * This is a self-destructor class to kill a long running process automatically + * after a pre-defined time. The class reads the number of minutes from the + * system property 'h2.selfDestruct' and starts a countdown thread to kill the + * virtual machine if it still runs then. + */ +public class SelfDestructor { + private static final String PROPERTY_NAME = "h2.selfDestruct"; + + /** + * Start the countdown. If the self-destruct system property is set, this + * value is used, otherwise the given default value is used. + * + * @param defaultMinutes the default number of minutes after which the + * current process is killed. 
+ */ + public static void startCountdown(int defaultMinutes) { + final int minutes = Integer.parseInt( + System.getProperty(PROPERTY_NAME, "" + defaultMinutes)); + if (minutes == 0) { + return; + } + Thread thread = new Thread() { + @Override + public void run() { + for (int i = minutes; i >= 0; i--) { + while (true) { + try { + String name = "SelfDestructor " + i + " min"; + setName(name); + break; + } catch (OutOfMemoryError e) { + // ignore + } + } + try { + Thread.sleep(60 * 1000); + } catch (InterruptedException e) { + // ignore + } + } + try { + String time = new Timestamp( + System.currentTimeMillis()).toString(); + System.out.println(time + " Killing the process after " + + minutes + " minute(s)"); + try { + ThreadDeadlockDetector.dumpAllThreadsAndLocks( + "SelfDestructor timed out", System.err); + try { + Thread.sleep(1000); + } catch (Exception e) { + // ignore + } + int activeCount = Thread.activeCount(); + Thread[] threads = new Thread[activeCount + 100]; + int len = Thread.enumerate(threads); + for (int i = 0; i < len; i++) { + Thread t = threads[i]; + if (t != Thread.currentThread()) { + t.interrupt(); + } + } + } catch (Throwable t) { + t.printStackTrace(); + // ignore + } + try { + Thread.sleep(1000); + } catch (Exception e) { + // ignore + } + System.out.println("Killing the process now"); + } catch (Throwable t) { + try { + t.printStackTrace(System.out); + } catch (Throwable t2) { + // ignore (out of memory) + } + } + Runtime.getRuntime().halt(1); + } + }; + thread.setDaemon(true); + thread.start(); + } + + /** + * Get the string to be added when starting the Java process. 
+ * + * @param minutes the countdown time in minutes + * @return the setting + */ + public static String getPropertyString(int minutes) { + return "-D" + PROPERTY_NAME + "=" + minutes; + } + +} diff --git a/modules/h2/src/test/java/org/h2/test/utils/TestColumnNamer.java b/modules/h2/src/test/java/org/h2/test/utils/TestColumnNamer.java new file mode 100644 index 0000000000000..a9a0ecf7e006d --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/TestColumnNamer.java @@ -0,0 +1,57 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + */ +package org.h2.test.utils; + +import org.h2.expression.Expression; +import org.h2.expression.ValueExpression; +import org.h2.test.TestBase; +import org.h2.util.ColumnNamer; + +/** + * Tests the column name factory. + */ +public class TestColumnNamer extends TestBase { + + private String[] ids = new String[] { "ABC", "123", "a\n2", "a$c%d#e@f!.", null, + "VERYVERYVERYVERYVERYVERYLONGVERYVERYVERYVERYVERYVERYLONGVERYVERYVERYVERYVERYVERYLONG", "'!!!'", "'!!!!'", + "3.1415", "\r", "col1", "col1", "col1", + "col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2", + "col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2col2" }; + + private String[] expectedColumnName = { "ABC", "123", "a2", "acdef", "colName6", "VERYVERYVERYVERYVERYVERYLONGVE", + "colName8", "colName9", "31415", "colName11", "col1", "col1_2", "col1_3", "col2col2col2col2col2col2col2co", + "col2col2col2col2col2col2col2_2" }; + + /** + * This method is called when executing this application from the command + * line. 
+ * + * @param args the command line parameters + */ + public static void main(String[] args) { + new TestColumnNamer().test(); + } + + @Override + public void test() { + ColumnNamer columnNamer = new ColumnNamer(null); + columnNamer.getConfiguration().configure("MAX_IDENTIFIER_LENGTH = 30"); + columnNamer.getConfiguration().configure("REGULAR_EXPRESSION_MATCH_ALLOWED = '[A-Za-z0-9_]+'"); + columnNamer.getConfiguration().configure("REGULAR_EXPRESSION_MATCH_DISALLOWED = '[^A-Za-z0-9_]+'"); + columnNamer.getConfiguration().configure("DEFAULT_COLUMN_NAME_PATTERN = 'colName$$'"); + columnNamer.getConfiguration().configure("GENERATE_UNIQUE_COLUMN_NAMES = 1"); + + int index = 0; + for (String id : ids) { + Expression columnExp = ValueExpression.getDefault(); + String newColumnName = columnNamer.getColumnName(columnExp, index + 1, id); + assertTrue(newColumnName != null); + assertTrue(newColumnName.length() <= 30); + assertTrue(newColumnName.length() >= 1); + assertEquals(newColumnName, expectedColumnName[index]); + index++; + } + } +} diff --git a/modules/h2/src/test/java/org/h2/test/utils/package.html b/modules/h2/src/test/java/org/h2/test/utils/package.html new file mode 100644 index 0000000000000..608e2a29cb33e --- /dev/null +++ b/modules/h2/src/test/java/org/h2/test/utils/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Utility classes used by the tests. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/WEB-INF/console.html b/modules/h2/src/test/tools/WEB-INF/console.html new file mode 100644 index 0000000000000..41b25142bc327 --- /dev/null +++ b/modules/h2/src/test/tools/WEB-INF/console.html @@ -0,0 +1,15 @@ + + + + + H2 Console + + + + H2 Console + + diff --git a/modules/h2/src/test/tools/WEB-INF/web.xml b/modules/h2/src/test/tools/WEB-INF/web.xml new file mode 100644 index 0000000000000..1b1b5c87ed419 --- /dev/null +++ b/modules/h2/src/test/tools/WEB-INF/web.xml @@ -0,0 +1,67 @@ + + + + + H2 Console Web Application + + A web application that includes the H2 Console servlet. + + + + H2Console + org.h2.server.web.WebServlet + + 1 + + + + H2Console + /console/* + + + + /console.html + + + + + + + diff --git a/modules/h2/src/test/tools/org/h2/dev/cache/CacheLIRS.java b/modules/h2/src/test/tools/org/h2/dev/cache/CacheLIRS.java new file mode 100644 index 0000000000000..fd3135323980d --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/cache/CacheLIRS.java @@ -0,0 +1,1081 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.cache; + +import java.util.AbstractMap; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +/** + * A scan resistant cache. It is meant to cache objects that are relatively + * costly to acquire, for example file content. + *

+ * This implementation is thread-safe and supports concurrent access. + Null keys or null values are not allowed. The map fill factor is at most 75%. + *

+ * Each entry is assigned a distinct memory size, and the cache will try to use + at most the specified amount of memory. The memory unit is not relevant; + however, it is suggested to use bytes as the unit. + *

+ * This class implements an approximation of the LIRS replacement algorithm + invented by Xiaodong Zhang and Song Jiang as described in + http://www.cse.ohio-state.edu/~zhang/lirs-sigmetrics-02.html with a few + smaller changes: An additional queue for non-resident entries is used, to + prevent unbound memory usage. The maximum size of this queue is at most the + size of the rest of the stack. About 6.25% of the mapped entries are cold. + *

    + * Internally, the cache is split into a number of segments, and each segment is + * an individual LIRS cache. + *

    + * Accessed entries are only moved to the top of the stack if at least a number + * of other entries have been moved to the front (8 per segment by default). + * Write access and moving entries to the top of the stack is synchronized per + * segment. + * + * @author Thomas Mueller + * @param the key type + * @param the value type + */ +public class CacheLIRS extends AbstractMap { + + /** + * The maximum memory this cache should use. + */ + private long maxMemory; + + private final Segment[] segments; + + private final int segmentCount; + private final int segmentShift; + private final int segmentMask; + private final int stackMoveDistance; + + + /** + * Create a new cache with the given number of entries, and the default + * settings (16 segments, and stack move distance of 8. + * + * @param maxMemory the maximum memory to use (1 or larger) + */ + public CacheLIRS(long maxMemory) { + this(maxMemory, 16, 8); + } + + /** + * Create a new cache with the given memory size. + * + * @param maxMemory the maximum memory to use (1 or larger) + * @param segmentCount the number of cache segments (must be a power of 2) + * @param stackMoveDistance how many other item are to be moved to the top + * of the stack before the current item is moved + */ + @SuppressWarnings("unchecked") + public CacheLIRS(long maxMemory, int segmentCount, + int stackMoveDistance) { + setMaxMemory(maxMemory); + if (Integer.bitCount(segmentCount) != 1) { + throw new IllegalArgumentException( + "The segment count must be a power of 2, is " + + segmentCount); + } + this.segmentCount = segmentCount; + this.segmentMask = segmentCount - 1; + this.stackMoveDistance = stackMoveDistance; + segments = new Segment[segmentCount]; + clear(); + // use the high bits for the segment + this.segmentShift = 32 - Integer.bitCount(segmentMask); + } + + /** + * Remove all entries. 
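LIRS exists because plain LRU collapses under sequential scans: one pass over cold data evicts the entire hot working set. For contrast, the minimal LRU baseline that LIRS improves on is a one-liner over LinkedHashMap (a generic sketch, not H2 code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Baseline LRU cache: access-ordered LinkedHashMap evicting the eldest entry.
// Unlike CacheLIRS, a single scan of cold keys flushes every hot entry.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, not insertion order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```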
+ */ + @Override + public void clear() { + long max = Math.max(1, maxMemory / segmentCount); + for (int i = 0; i < segmentCount; i++) { + segments[i] = new Segment<>( + this, max, stackMoveDistance, 8); + } + } + + private Entry find(Object key) { + int hash = getHash(key); + return getSegment(hash).find(key, hash); + } + + /** + * Check whether there is a resident entry for the given key. This + * method does not adjust the internal state of the cache. + * + * @param key the key (may not be null) + * @return true if there is a resident entry + */ + @Override + public boolean containsKey(Object key) { + int hash = getHash(key); + return getSegment(hash).containsKey(key, hash); + } + + /** + * Get the value for the given key if the entry is cached. This method does + * not modify the internal state. + * + * @param key the key (may not be null) + * @return the value, or null if there is no resident entry + */ + public V peek(K key) { + Entry e = find(key); + return e == null ? null : e.value; + } + + /** + * Add an entry to the cache. The entry may or may not exist in the + * cache yet. This method will usually mark unknown entries as cold and + * known entries as hot. 
+ * + * @param key the key (may not be null) + * @param value the value (may not be null) + * @param memory the memory used for the given entry + * @return the old value, or null if there was no resident entry + */ + public V put(K key, V value, int memory) { + int hash = getHash(key); + int segmentIndex = getSegmentIndex(hash); + Segment s = segments[segmentIndex]; + // check whether resize is required: synchronize on s, to avoid + // concurrent resizes (concurrent reads read + // from the old segment) + synchronized (s) { + s = resizeIfNeeded(s, segmentIndex); + return s.put(key, hash, value, memory); + } + } + + private Segment resizeIfNeeded(Segment s, int segmentIndex) { + int newLen = s.getNewMapLen(); + if (newLen == 0) { + return s; + } + // another thread might have resized + // (as we retrieved the segment before synchronizing on it) + Segment s2 = segments[segmentIndex]; + if (s == s2) { + // no other thread resized, so we do + s = new Segment<>(s, newLen); + segments[segmentIndex] = s; + } + return s; + } + + /** + * Add an entry to the cache using a memory size of 1. + * + * @param key the key (may not be null) + * @param value the value (may not be null) + * @return the old value, or null if there was no resident entry + */ + @Override + public V put(K key, V value) { + return put(key, value, sizeOf(key, value)); + } + + /** + * Get the size of the given value. The default implementation returns 1. + * + * @param key the key + * @param value the value + * @return the size + */ + @SuppressWarnings("unused") + protected int sizeOf(K key, V value) { + return 1; + } + + /** + * This method is called after the value for the given key was removed. + * It is not called on clear or put when replacing a value. + * + * @param key the key + */ + protected void onRemove(@SuppressWarnings("unused") K key) { + // do nothing + } + + /** + * Remove an entry. Both resident and non-resident entries can be + * removed. 
+ * + * @param key the key (may not be null) + * @return the old value, or null if there was no resident entry + */ + @Override + public V remove(Object key) { + int hash = getHash(key); + int segmentIndex = getSegmentIndex(hash); + Segment s = segments[segmentIndex]; + // check whether resize is required: synchronize on s, to avoid + // concurrent resizes (concurrent reads read + // from the old segment) + synchronized (s) { + s = resizeIfNeeded(s, segmentIndex); + return s.remove(key, hash); + } + } + + /** + * Get the memory used for the given key. + * + * @param key the key (may not be null) + * @return the memory, or 0 if there is no resident entry + */ + public int getMemory(K key) { + int hash = getHash(key); + return getSegment(hash).getMemory(key, hash); + } + + /** + * Get the value for the given key if the entry is cached. This method + * adjusts the internal state of the cache sometimes, to ensure commonly + * used entries stay in the cache. + * + * @param key the key (may not be null) + * @return the value, or null if there is no resident entry + */ + @Override + public V get(Object key) { + int hash = getHash(key); + return getSegment(hash).get(key, hash); + } + + private Segment getSegment(int hash) { + return segments[getSegmentIndex(hash)]; + } + + private int getSegmentIndex(int hash) { + return (hash >>> segmentShift) & segmentMask; + } + + /** + * Get the hash code for the given key. The hash code is + * further enhanced to spread the values more evenly. + * + * @param key the key + * @return the hash code + */ + static int getHash(Object key) { + int hash = key.hashCode(); + // a supplemental secondary hash function + // to protect against hash codes that don't differ much + hash = ((hash >>> 16) ^ hash) * 0x45d9f3b; + hash = ((hash >>> 16) ^ hash) * 0x45d9f3b; + hash = (hash >>> 16) ^ hash; + return hash; + } + + /** + * Get the currently used memory. 
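getHash above applies a supplemental avalanche step (two multiply-xor rounds with the constant 0x45d9f3b, in the style of well-known integer-hash finalizers) so that nearby hashCode values land in different segments and buckets. Isolated for illustration:

```java
// The same two multiply-xor rounds used by CacheLIRS.getHash to spread weak hash codes.
final class HashMix {
    static int mix(int hash) {
        hash = ((hash >>> 16) ^ hash) * 0x45d9f3b;
        hash = ((hash >>> 16) ^ hash) * 0x45d9f3b;
        return (hash >>> 16) ^ hash;
    }

    // With power-of-two sizes, the high bits select the segment and the low bits the bucket.
    static int segmentIndex(int hash, int segmentShift, int segmentMask) {
        return (hash >>> segmentShift) & segmentMask;
    }
}
```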
+ * + * @return the used memory + */ + public long getUsedMemory() { + long x = 0; + for (Segment s : segments) { + x += s.usedMemory; + } + return x; + } + + /** + * Set the maximum memory this cache should use. This will not + * immediately cause entries to get removed however; it will only change + * the limit. To resize the internal array, call the clear method. + * + * @param maxMemory the maximum size (1 or larger) + */ + public void setMaxMemory(long maxMemory) { + if (maxMemory <= 0) { + throw new IllegalArgumentException("Max memory must be larger than 0"); + } + this.maxMemory = maxMemory; + if (segments != null) { + long max = 1 + maxMemory / segments.length; + for (Segment s : segments) { + s.setMaxMemory(max); + } + } + } + + /** + * Get the maximum memory to use. + * + * @return the maximum memory + */ + public long getMaxMemory() { + return maxMemory; + } + + /** + * Get the entry set for all resident entries. + * + * @return the entry set + */ + @Override + public Set> entrySet() { + HashMap map = new HashMap<>(); + for (K k : keySet()) { + map.put(k, find(k).value); + } + return map.entrySet(); + } + + /** + * Get the set of keys for resident entries. + * + * @return the set of keys + */ + @Override + public Set keySet() { + HashSet set = new HashSet<>(); + for (Segment s : segments) { + set.addAll(s.keySet()); + } + return set; + } + + /** + * Get the number of non-resident entries in the cache. + * + * @return the number of non-resident entries + */ + public int sizeNonResident() { + int x = 0; + for (Segment s : segments) { + x += s.queue2Size; + } + return x; + } + + /** + * Get the length of the internal map array. + * + * @return the size of the array + */ + public int sizeMapArray() { + int x = 0; + for (Segment s : segments) { + x += s.entries.length; + } + return x; + } + + /** + * Get the number of hot entries in the cache. 
+ * + * @return the number of hot entries + */ + public int sizeHot() { + int x = 0; + for (Segment s : segments) { + x += s.mapSize - s.queueSize - s.queue2Size; + } + return x; + } + + /** + * Get the number of resident entries. + * + * @return the number of entries + */ + @Override + public int size() { + int x = 0; + for (Segment s : segments) { + x += s.mapSize - s.queue2Size; + } + return x; + } + + /** + * Get the list of keys. This method allows to read the internal state of + * the cache. + * + * @param cold if true, only keys for the cold entries are returned + * @param nonResident true for non-resident entries + * @return the key list + */ + public List keys(boolean cold, boolean nonResident) { + ArrayList keys = new ArrayList<>(); + for (Segment s : segments) { + keys.addAll(s.keys(cold, nonResident)); + } + return keys; + } + + /** + * A cache segment + * + * @param the key type + * @param the value type + */ + private static class Segment { + + /** + * The number of (hot, cold, and non-resident) entries in the map. + */ + int mapSize; + + /** + * The size of the LIRS queue for resident cold entries. + */ + int queueSize; + + /** + * The size of the LIRS queue for non-resident cold entries. + */ + int queue2Size; + + /** + * The map array. The size is always a power of 2. + */ + final Entry[] entries; + + /** + * The currently used memory. + */ + long usedMemory; + + private final CacheLIRS cache; + + /** + * How many other item are to be moved to the top of the stack before + * the current item is moved. + */ + private final int stackMoveDistance; + + /** + * The maximum memory this cache should use. + */ + private long maxMemory; + + /** + * The bit mask that is applied to the key hash code to get the index in + * the map array. The mask is the length of the array minus one. + */ + private final int mask; + + /** + * The stack of recently referenced elements. This includes all hot + * entries, and the recently referenced cold entries. 
Resident cold + * entries that were not recently referenced, as well as non-resident + * cold entries, are not in the stack. + *

    + * There is always at least one entry: the head entry. + */ + private final Entry stack; + + /** + * The number of entries in the stack. + */ + private int stackSize; + + /** + * The queue of resident cold entries. + *

    + * There is always at least one entry: the head entry. + */ + private final Entry queue; + + /** + * The queue of non-resident cold entries. + *

    + * There is always at least one entry: the head entry. + */ + private final Entry queue2; + + /** + * The number of times any item was moved to the top of the stack. + */ + private int stackMoveCounter; + + /** + * Create a new cache segment. + * + * @param cache the cache + * @param maxMemory the maximum memory to use + * @param stackMoveDistance the number of other entries to be moved to + * the top of the stack before moving an entry to the top + * @param len the number of hash table buckets (must be a power of 2) + */ + Segment(CacheLIRS cache, long maxMemory, + int stackMoveDistance, int len) { + this.cache = cache; + setMaxMemory(maxMemory); + this.stackMoveDistance = stackMoveDistance; + + // the bit mask has all bits set + mask = len - 1; + + // initialize the stack and queue heads + stack = new Entry<>(); + stack.stackPrev = stack.stackNext = stack; + queue = new Entry<>(); + queue.queuePrev = queue.queueNext = queue; + queue2 = new Entry<>(); + queue2.queuePrev = queue2.queueNext = queue2; + + @SuppressWarnings("unchecked") + Entry[] e = new Entry[len]; + entries = e; + } + + /** + * Create a new cache segment from an existing one. + * The caller must synchronize on the old segment, to avoid + * concurrent modifications. 
+ * + * @param old the old segment + * @param len the number of hash table buckets (must be a power of 2) + */ + Segment(Segment old, int len) { + this(old.cache, old.maxMemory, old.stackMoveDistance, len); + Entry s = old.stack.stackPrev; + while (s != old.stack) { + Entry e = copy(s); + addToMap(e); + addToStack(e); + s = s.stackPrev; + } + s = old.queue.queuePrev; + while (s != old.queue) { + Entry e = find(s.key, getHash(s.key)); + if (e == null) { + e = copy(s); + addToMap(e); + } + addToQueue(queue, e); + s = s.queuePrev; + } + s = old.queue2.queuePrev; + while (s != old.queue2) { + Entry e = find(s.key, getHash(s.key)); + if (e == null) { + e = copy(s); + addToMap(e); + } + addToQueue(queue2, e); + s = s.queuePrev; + } + } + + /** + * Calculate the new number of hash table buckets if the internal map + * should be re-sized. + * + * @return 0 if no resizing is needed, or the new length + */ + int getNewMapLen() { + int len = mask + 1; + if (len * 3 < mapSize * 4 && len < (1 << 28)) { + // more than 75% usage + return len * 2; + } else if (len > 32 && len / 8 > mapSize) { + // less than 12% usage + return len / 2; + } + return 0; + } + + private void addToMap(Entry e) { + int index = getHash(e.key) & mask; + e.mapNext = entries[index]; + entries[index] = e; + usedMemory += e.memory; + mapSize++; + } + + private static Entry copy(Entry old) { + Entry e = new Entry<>(); + e.key = old.key; + e.value = old.value; + e.memory = old.memory; + e.topMove = old.topMove; + return e; + } + + /** + * Get the memory used for the given key. + * + * @param key the key (may not be null) + * @param hash the hash + * @return the memory, or 0 if there is no resident entry + */ + int getMemory(K key, int hash) { + Entry e = find(key, hash); + return e == null ? 0 : e.memory; + } + + /** + * Get the value for the given key if the entry is cached. This method + * adjusts the internal state of the cache sometimes, to ensure commonly + * used entries stay in the cache. 
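getNewMapLen above grows a segment's bucket array past 75% occupancy and shrinks it below 12.5% occupancy (the len/8 test), returning 0 when no resize is needed. The same thresholds in isolation:

```java
// The per-segment grow/shrink heuristic from Segment.getNewMapLen (same thresholds).
final class SegmentSizing {
    static int newLength(int len, int mapSize) {
        if (len * 3 < mapSize * 4 && len < (1 << 28)) {
            return len * 2;      // over 75% full: double the bucket array
        } else if (len > 32 && len / 8 > mapSize) {
            return len / 2;      // under 12.5% full: halve it
        }
        return 0;                // 0 means: keep the current length
    }
}
```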
+ * + * @param key the key (may not be null) + * @param hash the hash + * @return the value, or null if there is no resident entry + */ + V get(Object key, int hash) { + Entry e = find(key, hash); + if (e == null) { + // the entry was not found + return null; + } + V value = e.value; + if (value == null) { + // it was a non-resident entry + return null; + } + if (e.isHot()) { + if (e != stack.stackNext) { + if (stackMoveDistance == 0 || + stackMoveCounter - e.topMove > stackMoveDistance) { + access(key, hash); + } + } + } else { + access(key, hash); + } + return value; + } + + /** + * Access an item, moving the entry to the top of the stack or front of + * the queue if found. + * + * @param key the key + */ + private synchronized void access(Object key, int hash) { + Entry e = find(key, hash); + if (e == null || e.value == null) { + return; + } + if (e.isHot()) { + if (e != stack.stackNext) { + if (stackMoveDistance == 0 || + stackMoveCounter - e.topMove > stackMoveDistance) { + // move a hot entry to the top of the stack + // unless it is already there + boolean wasEnd = e == stack.stackPrev; + removeFromStack(e); + if (wasEnd) { + // if moving the last entry, the last entry + // could now be cold, which is not allowed + pruneStack(); + } + addToStack(e); + } + } + } else { + removeFromQueue(e); + if (e.stackNext != null) { + // resident cold entries become hot + // if they are on the stack + removeFromStack(e); + // which means a hot entry needs to become cold + // (this entry is cold, that means there is at least one + // more entry in the stack, which must be hot) + convertOldestHotToCold(); + } else { + // cold entries that are not on the stack + // move to the front of the queue + addToQueue(queue, e); + } + // in any case, the cold entry is moved to the top of the stack + addToStack(e); + } + } + + /** + * Add an entry to the cache. The entry may or may not exist in the + * cache yet. 
This method will usually mark unknown entries as cold and + * known entries as hot. + * + * @param key the key (may not be null) + * @param hash the hash + * @param value the value (may not be null) + * @param memory the memory used for the given entry + * @return the old value, or null if there was no resident entry + */ + synchronized V put(K key, int hash, V value, int memory) { + if (value == null) { + throw new NullPointerException("The value may not be null"); + } + V old; + Entry e = find(key, hash); + if (e == null) { + old = null; + } else { + old = e.value; + remove(key, hash); + } + if (memory > maxMemory) { + // the new entry is too big to fit + return old; + } + e = new Entry<>(); + e.key = key; + e.value = value; + e.memory = memory; + int index = hash & mask; + e.mapNext = entries[index]; + entries[index] = e; + usedMemory += memory; + if (usedMemory > maxMemory) { + // old entries needs to be removed + evict(); + // if the cache is full, the new entry is + // cold if possible + if (stackSize > 0) { + // the new cold entry is at the top of the queue + addToQueue(queue, e); + } + } + mapSize++; + // added entries are always added to the stack + addToStack(e); + return old; + } + + /** + * Remove an entry. Both resident and non-resident entries can be + * removed. 
+ * + * @param key the key (may not be null) + * @param hash the hash + * @return the old value, or null if there was no resident entry + */ + synchronized V remove(Object key, int hash) { + int index = hash & mask; + Entry e = entries[index]; + if (e == null) { + return null; + } + V old; + if (e.key.equals(key)) { + old = e.value; + entries[index] = e.mapNext; + } else { + Entry last; + do { + last = e; + e = e.mapNext; + if (e == null) { + return null; + } + } while (!e.key.equals(key)); + old = e.value; + last.mapNext = e.mapNext; + } + mapSize--; + usedMemory -= e.memory; + if (e.stackNext != null) { + removeFromStack(e); + } + if (e.isHot()) { + // when removing a hot entry, the newest cold entry gets hot, + // so the number of hot entries does not change + e = queue.queueNext; + if (e != queue) { + removeFromQueue(e); + if (e.stackNext == null) { + addToStackBottom(e); + } + } + } else { + removeFromQueue(e); + } + pruneStack(); + return old; + } + + /** + * Evict cold entries (resident and non-resident) until the memory limit + * is reached. The new entry is added as a cold entry, except if it is + * the only entry. 
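Segment.remove above first unlinks the entry from its singly linked bucket chain (a head-removal case versus a walk with a trailing predecessor) before fixing up the stack and queues. The unlink step alone, on a toy node type:

```java
// Unlinking a key from a singly linked hash bucket, as in the first phase of Segment.remove.
final class Chain {
    static final class Node {
        final String key;
        Node next;
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    // Returns the (possibly new) chain head with the matching node spliced out, if present.
    static Node remove(Node head, String key) {
        if (head == null) {
            return null;
        }
        if (head.key.equals(key)) {
            return head.next;           // removing the head: the bucket points past it
        }
        Node last = head;
        Node e = head.next;
        while (e != null && !e.key.equals(key)) {
            last = e;                   // walk with a trailing predecessor pointer
            e = e.next;
        }
        if (e != null) {
            last.next = e.next;         // splice the match out of the chain
        }
        return head;
    }
}
```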
+ */ + private void evict() { + do { + evictBlock(); + } while (usedMemory > maxMemory); + } + + private void evictBlock() { + // ensure there are not too many hot entries: right shift of 5 is + // division by 32, that means if there are only 1/32 (3.125%) or + // less cold entries, a hot entry needs to become cold + while (queueSize <= (mapSize >>> 5) && stackSize > 0) { + convertOldestHotToCold(); + } + // the oldest resident cold entries become non-resident + while (usedMemory > maxMemory && queueSize > 0) { + Entry e = queue.queuePrev; + usedMemory -= e.memory; + removeFromQueue(e); + cache.onRemove(e.key); + e.value = null; + e.memory = 0; + addToQueue(queue2, e); + // the size of the non-resident-cold entries needs to be limited + while (queue2Size + queue2Size > stackSize) { + e = queue2.queuePrev; + int hash = getHash(e.key); + remove(e.key, hash); + } + } + } + + private void convertOldestHotToCold() { + // the last entry of the stack is known to be hot + Entry last = stack.stackPrev; + if (last == stack) { + // never remove the stack head itself (this would mean the + // internal structure of the cache is corrupt) + throw new IllegalStateException(); + } + // remove from stack - which is done anyway in the stack pruning, + // but we can do it here as well + removeFromStack(last); + // adding an entry to the queue will make it cold + addToQueue(queue, last); + pruneStack(); + } + + /** + * Ensure the last entry of the stack is cold. + */ + private void pruneStack() { + while (true) { + Entry last = stack.stackPrev; + // must stop at a hot entry or the stack head, + // but the stack head itself is also hot, so we + // don't have to test it + if (last.isHot()) { + break; + } + // the cold entry is still in the queue + removeFromStack(last); + } + } + + /** + * Try to find an entry in the map. 
+ * + * @param key the key + * @param hash the hash + * @return the entry (might be a non-resident) + */ + Entry find(Object key, int hash) { + int index = hash & mask; + Entry e = entries[index]; + while (e != null && !e.key.equals(key)) { + e = e.mapNext; + } + return e; + } + + private void addToStack(Entry e) { + e.stackPrev = stack; + e.stackNext = stack.stackNext; + e.stackNext.stackPrev = e; + stack.stackNext = e; + stackSize++; + e.topMove = stackMoveCounter++; + } + + private void addToStackBottom(Entry e) { + e.stackNext = stack; + e.stackPrev = stack.stackPrev; + e.stackPrev.stackNext = e; + stack.stackPrev = e; + stackSize++; + } + + /** + * Remove the entry from the stack. The head itself must not be removed. + * + * @param e the entry + */ + private void removeFromStack(Entry e) { + e.stackPrev.stackNext = e.stackNext; + e.stackNext.stackPrev = e.stackPrev; + e.stackPrev = e.stackNext = null; + stackSize--; + } + + private void addToQueue(Entry q, Entry e) { + e.queuePrev = q; + e.queueNext = q.queueNext; + e.queueNext.queuePrev = e; + q.queueNext = e; + if (e.value != null) { + queueSize++; + } else { + queue2Size++; + } + } + + private void removeFromQueue(Entry e) { + e.queuePrev.queueNext = e.queueNext; + e.queueNext.queuePrev = e.queuePrev; + e.queuePrev = e.queueNext = null; + if (e.value != null) { + queueSize--; + } else { + queue2Size--; + } + } + + /** + * Get the list of keys. This method allows to read the internal state + * of the cache. + * + * @param cold if true, only keys for the cold entries are returned + * @param nonResident true for non-resident entries + * @return the key list + */ + synchronized List keys(boolean cold, boolean nonResident) { + ArrayList keys = new ArrayList<>(); + if (cold) { + Entry start = nonResident ? 
queue2 : queue; + for (Entry e = start.queueNext; e != start; + e = e.queueNext) { + keys.add(e.key); + } + } else { + for (Entry e = stack.stackNext; e != stack; + e = e.stackNext) { + keys.add(e.key); + } + } + return keys; + } + + /** + * Check whether there is a resident entry for the given key. This + * method does not adjust the internal state of the cache. + * + * @param key the key (may not be null) + * @param hash the hash + * @return true if there is a resident entry + */ + boolean containsKey(Object key, int hash) { + Entry e = find(key, hash); + return e != null && e.value != null; + } + + /** + * Get the set of keys for resident entries. + * + * @return the set of keys + */ + synchronized Set keySet() { + HashSet set = new HashSet<>(); + for (Entry e = stack.stackNext; e != stack; e = e.stackNext) { + set.add(e.key); + } + for (Entry e = queue.queueNext; e != queue; e = e.queueNext) { + set.add(e.key); + } + return set; + } + + /** + * Set the maximum memory this cache should use. This will not + * immediately cause entries to get removed however; it will only change + * the limit. To resize the internal array, call the clear method. + * + * @param maxMemory the maximum size (1 or larger) + */ + void setMaxMemory(long maxMemory) { + this.maxMemory = maxMemory; + } + + } + + /** + * A cache entry. Each entry is either hot (low inter-reference recency; + * LIR), cold (high inter-reference recency; HIR), or non-resident-cold. Hot + * entries are in the stack only. Cold entries are in the queue, and may be + * in the stack. Non-resident-cold entries have their value set to null and + * are in the stack and in the non-resident queue. + * + * @param the key type + * @param the value type + */ + static class Entry { + + /** + * The key. + */ + K key; + + /** + * The value. Set to null for non-resident-cold entries. + */ + V value; + + /** + * The estimated memory used. + */ + int memory; + + /** + * When the item was last moved to the top of the stack. 
+ */ + int topMove; + + /** + * The next entry in the stack. + */ + Entry stackNext; + + /** + * The previous entry in the stack. + */ + Entry stackPrev; + + /** + * The next entry in the queue (either the resident queue or the + * non-resident queue). + */ + Entry queueNext; + + /** + * The previous entry in the queue. + */ + Entry queuePrev; + + /** + * The next entry in the map (the chained entry). + */ + Entry mapNext; + + /** + * Whether this entry is hot. Cold entries are in one of the two queues. + * + * @return whether the entry is hot + */ + boolean isHot() { + return queueNext == null; + } + + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/cache/package.html b/modules/h2/src/test/tools/org/h2/dev/cache/package.html new file mode 100644 index 0000000000000..f2de5f88c5285 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/cache/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A LIRS cache implementation. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/cluster/ShardedMap.java b/modules/h2/src/test/tools/org/h2/dev/cluster/ShardedMap.java new file mode 100644 index 0000000000000..e5dbb92d91c0b --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/cluster/ShardedMap.java @@ -0,0 +1,335 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.cluster; + +import java.util.AbstractMap; +import java.util.AbstractSet; +import java.util.Arrays; +import java.util.Iterator; +import java.util.Map; +import java.util.NoSuchElementException; +import java.util.Set; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.type.DataType; +import org.h2.mvstore.type.ObjectDataType; + +/** + * A sharded map. It is typically split into multiple sub-maps that don't have + * overlapping keys. + * + * @param the key type + * @param the value type + */ +public class ShardedMap extends AbstractMap { + + private final DataType keyType; + + /** + * The shards. Each shard has a minimum and a maximum key (null for no + * limit). Key ranges are ascending but can overlap, in which case entries + * may be stored in multiple maps. If that is the case, for read operations, + * the entry in the first map is used, and for write operations, the changes + * are applied to all maps within the range. + */ + private Shard[] shards; + + public ShardedMap() { + this(new ObjectDataType()); + } + + @SuppressWarnings("unchecked") + public ShardedMap(DataType keyType) { + this.keyType = keyType; + shards = new Shard[0]; + } + + /** + * Get the size of the map. + * + * @param map the map + * @return the size + */ + static long getSize(Map map) { + if (map instanceof LargeMap) { + return ((LargeMap) map).sizeAsLong(); + } + return map.size(); + } + + /** + * Add the given shard. 
+ * + * @param map the map + * @param min the lowest key, or null if no limit + * @param max the highest key, or null if no limit + */ + public void addMap(Map map, K min, K max) { + if (min != null && max != null && keyType.compare(min, max) > 0) { + throw DataUtils.newIllegalArgumentException("Invalid range: {0} .. {1}", min, max); + } + int len = shards.length + 1; + Shard[] newShards = Arrays.copyOf(shards, len); + Shard newShard = new Shard<>(); + newShard.map = map; + newShard.minIncluding = min; + newShard.maxExcluding = max; + newShards[len - 1] = newShard; + shards = newShards; + } + + private boolean isInRange(K key, Shard shard) { + if (shard.minIncluding != null) { + if (keyType.compare(key, shard.minIncluding) < 0) { + return false; + } + } + if (shard.maxExcluding != null) { + if (keyType.compare(key, shard.maxExcluding) >= 0) { + return false; + } + } + return true; + } + + @Override + public int size() { + long size = sizeAsLong(); + return size > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) size; + } + + /** + * The size of the map.
+ * + * @return the size + */ + public long sizeAsLong() { + Shard[] copy = shards; + for (Shard s : copy) { + if (s.minIncluding == null && s.maxExcluding == null) { + return getSize(s.map); + } + } + if (isSimpleSplit(copy)) { + long size = 0; + for (Shard s : copy) { + size += getSize(s.map); + } + return size; + } + return -1; + } + + private boolean isSimpleSplit(Shard[] shards) { + K last = null; + for (int i = 0; i < shards.length; i++) { + Shard s = shards[i]; + if (last == null) { + if (s.minIncluding != null) { + return false; + } + } else if (keyType.compare(last, s.minIncluding) != 0) { + return false; + } + if (s.maxExcluding == null) { + return i == shards.length - 1; + } + last = s.maxExcluding; + } + return last == null; + } + + @Override + public V put(K key, V value) { + V result = null; + Shard[] copy = shards; + for (Shard s : copy) { + if (isInRange(key, s)) { + V r = s.map.put(key, value); + if (result == null) { + result = r; + } + } + } + return result; + } + + @Override + public V get(Object key) { + @SuppressWarnings("unchecked") + K k = (K) key; + Shard[] copy = shards; + for (Shard s : copy) { + if (isInRange(k, s)) { + return s.map.get(k); + } + } + return null; + } + + @Override + public Set> entrySet() { + Shard[] copy = shards; + for (Shard s : copy) { + if (s.minIncluding == null && s.maxExcluding == null) { + return s.map.entrySet(); + } + } + if (isSimpleSplit(copy)) { + return new CombinedSet<>(size(), copy); + } + return null; + } + + /** + * A subset of a map. + * + * @param the key type + * @param the value type + */ + static class Shard { + + /** + * The lowest key, or null if no limit. + */ + K minIncluding; + + /** + * A key higher than the highest key, or null if no limit. + */ + K maxExcluding; + + /** + * The backing map. 
+ */ + Map map; + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + if (minIncluding != null) { + buff.append('[').append(minIncluding); + } + buff.append(".."); + if (maxExcluding != null) { + buff.append(maxExcluding).append(')'); + } + return buff.toString(); + } + + } + + /** + * A combination of multiple sets. + * + * @param the key type + * @param the value type + */ + private static class CombinedSet extends AbstractSet> { + + final int size; + final Shard[] shards; + + CombinedSet(int size, Shard[] shards) { + this.size = size; + this.shards = shards; + } + + @Override + public Iterator> iterator() { + return new Iterator>() { + + boolean init; + Entry current; + Iterator> currentIterator; + int shardIndex; + + private void fetchNext() { + while (currentIterator == null || !currentIterator.hasNext()) { + if (shardIndex >= shards.length) { + current = null; + return; + } + currentIterator = shards[shardIndex++].map.entrySet().iterator(); + } + current = currentIterator.next(); + } + + @Override + public boolean hasNext() { + if (!init) { + fetchNext(); + init = true; + } + return current != null; + } + + @Override + public Entry next() { + if (!hasNext()) { + throw new NoSuchElementException(); + } + Entry e = current; + fetchNext(); + return e; + } + + @Override + public void remove() { + throw new UnsupportedOperationException(); + } + + }; + } + + @Override + public int size() { + return size; + } + + } + + /** + * A large map. + */ + public interface LargeMap { + + /** + * The size of the map. + * + * @return the size + */ + long sizeAsLong(); + } + + /** + * A map that can efficiently return the index of a key, and the key at a + * given index. + */ + public interface CountedMap { + + /** + * Get the key at the given index. + * + * @param index the index + * @return the key + */ + K getKey(long index); + + /** + * Get the index of the given key in the map. + *
    + * If the key was found, the returned value is the index in the key + * array. If not found, the returned value is negative, where -1 means + * the provided key is smaller than any keys. See also + * Arrays.binarySearch. + * + * @param key the key + * @return the index + */ + long getKeyIndex(K key); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/cluster/package.html b/modules/h2/src/test/tools/org/h2/dev/cluster/package.html new file mode 100644 index 0000000000000..8bd8809892bc9 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/cluster/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A clustering implementation. + +

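The ShardedMap class added above routes reads to the first shard whose [min, max) key range contains the key, while writes go to every shard whose range matches, so overlapping ranges store an entry more than once. A minimal standalone sketch of that routing rule (hypothetical class and field names, not taken from the H2 sources):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Sketch of ShardedMap-style routing: each shard covers a half-open
// range [min, max); null means "no limit" on that side. Ranges may
// overlap, in which case writes hit all matching shards and reads
// take the first match.
public class RangeRouting {

    static class Shard {
        final Integer min; // inclusive lower bound, null = no limit
        final Integer max; // exclusive upper bound, null = no limit
        final TreeMap<Integer, String> map = new TreeMap<>();

        Shard(Integer min, Integer max) {
            this.min = min;
            this.max = max;
        }

        boolean inRange(int key) {
            return (min == null || key >= min) && (max == null || key < max);
        }
    }

    final List<Shard> shards = new ArrayList<>();

    void put(int key, String value) {
        // write operations are applied to all shards within the range
        for (Shard s : shards) {
            if (s.inRange(key)) {
                s.map.put(key, value);
            }
        }
    }

    String get(int key) {
        // read operations use the first matching shard
        for (Shard s : shards) {
            if (s.inRange(key)) {
                return s.map.get(key);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        RangeRouting r = new RangeRouting();
        r.shards.add(new Shard(null, 100)); // keys below 100
        r.shards.add(new Shard(50, null));  // keys from 50 up, overlaps [50, 100)
        r.put(60, "x");                     // stored in both shards
        System.out.println(r.get(60));      // read from the first matching shard
    }
}
```

With this layout, a key in the overlap such as 60 is written to both backing maps, which mirrors why ShardedMap.sizeAsLong() can only sum shard sizes when the ranges form a simple non-overlapping split.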
    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/fs/ArchiveTool.java b/modules/h2/src/test/tools/org/h2/dev/fs/ArchiveTool.java new file mode 100644 index 0000000000000..6905cf5614bdb --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/fs/ArchiveTool.java @@ -0,0 +1,1204 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.fs; + +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.DataInputStream; +import java.io.DataOutputStream; +import java.io.EOFException; +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Iterator; +import java.util.List; +import java.util.TreeMap; +import java.util.TreeSet; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.zip.Deflater; +import java.util.zip.DeflaterOutputStream; +import java.util.zip.Inflater; +import java.util.zip.InflaterInputStream; + +/** + * A standalone archive tool to compress directories. It does not have any + * dependencies except for the Java libraries. + *

    + * Unlike other compression tools, it splits the data into chunks and sorts the + * chunks, so that large directories or files that contain duplicate data are + * compressed much better. + */ +public class ArchiveTool { + + /** + * The file header. + */ + private static final byte[] HEADER = {'H', '2', 'A', '1'}; + + /** + * The number of bytes per megabyte (used for the output). + */ + private static final int MB = 1000 * 1000; + + /** + * Run the tool. + * + * @param args the command line arguments + */ + public static void main(String... args) throws Exception { + Log log = new Log(); + int level = Integer.getInteger("level", Deflater.BEST_SPEED); + if (args.length == 1) { + File f = new File(args[0]); + if (f.exists()) { + if (f.isDirectory()) { + String fromDir = f.getAbsolutePath(); + String toFile = fromDir + ".at"; + compress(fromDir, toFile, level); + return; + } + String fromFile = f.getAbsolutePath(); + int dot = fromFile.lastIndexOf('.'); + if (dot > 0 && dot > fromFile.replace('\\', '/').lastIndexOf('/')) { + String toDir = fromFile.substring(0, dot); + extract(fromFile, toDir); + return; + } + } + } + String arg = args.length != 3 ? 
null : args[0]; + if ("-compress".equals(arg)) { + String toFile = args[1]; + String fromDir = args[2]; + compress(fromDir, toFile, level); + } else if ("-extract".equals(arg)) { + String fromFile = args[1]; + String toDir = args[2]; + extract(fromFile, toDir); + } else { + log.println("An archive tool to efficiently compress large directories"); + log.println("Command line options:"); + log.println(""); + log.println(""); + log.println("-compress "); + log.println("-extract "); + } + } + + private static void compress(String fromDir, String toFile, int level) throws IOException { + final Log log = new Log(); + final long start = System.nanoTime(); + final long startMs = System.currentTimeMillis(); + final AtomicBoolean title = new AtomicBoolean(); + long size = getSize(new File(fromDir), new Runnable() { + int count; + long lastTime = start; + @Override + public void run() { + count++; + if (count % 1000 == 0) { + long now = System.nanoTime(); + if (now - lastTime > TimeUnit.SECONDS.toNanos(3)) { + if (!title.getAndSet(true)) { + log.println("Counting files"); + } + log.print(count + " "); + lastTime = now; + } + } + } + }); + if (title.get()) { + log.println(); + } + log.println("Compressing " + size / MB + " MB at " + + new java.sql.Time(startMs).toString()); + InputStream in = getDirectoryInputStream(fromDir); + String temp = toFile + ".temp"; + OutputStream out = + new BufferedOutputStream( + new FileOutputStream(toFile), 1024 * 1024); + Deflater def = new Deflater(); + def.setLevel(level); + out = new BufferedOutputStream( + new DeflaterOutputStream(out, def), 1024 * 1024); + sort(log, in, out, temp, size); + in.close(); + out.close(); + log.println(); + log.println("Compressed to " + + new File(toFile).length() / MB + " MB in " + + TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - start) + + " seconds"); + log.println(); + } + + private static void extract(String fromFile, String toDir) throws IOException { + Log log = new Log(); + long start = 
System.nanoTime(); + long startMs = System.currentTimeMillis(); + long size = new File(fromFile).length(); + log.println("Extracting " + size / MB + " MB at " + new java.sql.Time(startMs).toString()); + InputStream in = + new BufferedInputStream( + new FileInputStream(fromFile), 1024 * 1024); + String temp = fromFile + ".temp"; + Inflater inflater = new Inflater(); + in = new InflaterInputStream(in, inflater, 1024 * 1024); + OutputStream out = getDirectoryOutputStream(toDir); + combine(log, in, out, temp); + inflater.end(); + in.close(); + out.close(); + log.println(); + log.println("Extracted in " + + TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - start) + + " seconds"); + } + + private static long getSize(File f, Runnable r) { + // assume a metadata entry is 40 bytes + long size = 40; + if (f.isDirectory()) { + File[] list = f.listFiles(); + if (list != null) { + for (File c : list) { + size += getSize(c, r); + } + } + } else { + size += f.length(); + } + r.run(); + return size; + } + + private static InputStream getDirectoryInputStream(final String dir) { + + File f = new File(dir); + if (!f.isDirectory() || !f.exists()) { + throw new IllegalArgumentException("Not an existing directory: " + dir); + } + + return new InputStream() { + + private final String baseDir; + private final ArrayList files = new ArrayList<>(); + private String current; + private ByteArrayInputStream meta; + private DataInputStream fileIn; + private long remaining; + + { + File f = new File(dir); + baseDir = f.getAbsolutePath(); + addDirectory(f); + } + + private void addDirectory(File f) { + File[] list = f.listFiles(); + if (list != null) { + // first all directories, then all files + for (File c : list) { + if (c.isDirectory()) { + files.add(c.getAbsolutePath()); + } + } + for (File c : list) { + if (c.isFile()) { + files.add(c.getAbsolutePath()); + } + } + } + } + + // int: metadata length + // byte: 0: directory, 1: file + // varLong: lastModified + // byte: 0: read-write, 1: 
read-only + // (file only) varLong: file length + // utf-8: file name + + @Override + public int read() throws IOException { + if (meta != null) { + // read from the metadata + int x = meta.read(); + if (x >= 0) { + return x; + } + meta = null; + } + if (fileIn != null) { + if (remaining > 0) { + // read from the file + int x = fileIn.read(); + remaining--; + if (x < 0) { + throw new EOFException(); + } + return x; + } + fileIn.close(); + fileIn = null; + } + if (files.size() == 0) { + // EOF + return -1; + } + // breadth-first traversal + // first all files, then all directories + current = files.remove(files.size() - 1); + File f = new File(current); + if (f.isDirectory()) { + addDirectory(f); + } + ByteArrayOutputStream metaOut = new ByteArrayOutputStream(); + DataOutputStream out = new DataOutputStream(metaOut); + boolean isFile = f.isFile(); + out.writeInt(0); + out.write(isFile ? 1 : 0); + out.write(!f.canWrite() ? 1 : 0); + writeVarLong(out, f.lastModified()); + if (isFile) { + remaining = f.length(); + writeVarLong(out, remaining); + fileIn = new DataInputStream(new BufferedInputStream( + new FileInputStream(current), 1024 * 1024)); + } + if (!current.startsWith(baseDir)) { + throw new IOException("File " + current + " does not start with " + baseDir); + } + String n = current.substring(baseDir.length() + 1); + out.writeUTF(n); + out.writeInt(metaOut.size()); + out.flush(); + byte[] bytes = metaOut.toByteArray(); + // copy metadata length to beginning + System.arraycopy(bytes, bytes.length - 4, bytes, 0, 4); + // cut the length + bytes = Arrays.copyOf(bytes, bytes.length - 4); + meta = new ByteArrayInputStream(bytes); + return meta.read(); + } + + @Override + public int read(byte[] buff, int offset, int length) throws IOException { + if (meta != null || fileIn == null || remaining == 0) { + return super.read(buff, offset, length); + } + int l = (int) Math.min(length, remaining); + fileIn.readFully(buff, offset, l); + remaining -= l; + return l; + } + + }; + 
+ } + + private static OutputStream getDirectoryOutputStream(final String dir) { + new File(dir).mkdirs(); + return new OutputStream() { + + private ByteArrayOutputStream meta = new ByteArrayOutputStream(); + private OutputStream fileOut; + private File file; + private long remaining = 4; + private long modified; + private boolean readOnly; + + @Override + public void write(byte[] buff, int offset, int length) throws IOException { + while (length > 0) { + if (fileOut == null || remaining <= 1) { + write(buff[offset] & 255); + offset++; + length--; + } else { + int l = (int) Math.min(length, remaining - 1); + fileOut.write(buff, offset, l); + remaining -= l; + offset += l; + length -= l; + } + } + } + + @Override + public void write(int b) throws IOException { + if (fileOut != null) { + fileOut.write(b); + if (--remaining > 0) { + return; + } + // this can be slow, but I don't know a way to avoid it + fileOut.close(); + fileOut = null; + file.setLastModified(modified); + if (readOnly) { + file.setReadOnly(); + } + remaining = 4; + return; + } + meta.write(b); + if (--remaining > 0) { + return; + } + DataInputStream in = new DataInputStream( + new ByteArrayInputStream(meta.toByteArray())); + if (meta.size() == 4) { + // metadata is next + remaining = in.readInt() - 4; + if (remaining > 16 * 1024) { + throw new IOException("Illegal directory stream"); + } + return; + } + // read and ignore the length + in.readInt(); + boolean isFile = in.read() == 1; + readOnly = in.read() == 1; + modified = readVarLong(in); + if (isFile) { + remaining = readVarLong(in); + } else { + remaining = 4; + } + String name = dir + "/" + in.readUTF(); + file = new File(name); + if (isFile) { + if (remaining == 0) { + new File(name).createNewFile(); + remaining = 4; + } else { + fileOut = new BufferedOutputStream( + new FileOutputStream(name), 1024 * 1024); + } + } else { + file.mkdirs(); + file.setLastModified(modified); + if (readOnly) { + file.setReadOnly(); + } + } + meta.reset(); + } + }; 
+ } + + private static void sort(Log log, InputStream in, OutputStream out, + String tempFileName, long size) throws IOException { + int bufferSize = 32 * 1024 * 1024; + DataOutputStream tempOut = new DataOutputStream(new BufferedOutputStream( + new FileOutputStream(tempFileName), 1024 * 1024)); + byte[] bytes = new byte[bufferSize]; + List segmentStart = new ArrayList<>(); + long outPos = 0; + long id = 1; + + // Temp file: segment* 0 + // Segment: chunk* 0 + // Chunk: pos* 0 sortKey data + + log.setRange(0, 30, size); + while (true) { + int len = readFully(in, bytes, bytes.length); + if (len == 0) { + break; + } + log.printProgress(len); + TreeMap map = new TreeMap<>(); + for (int pos = 0; pos < len;) { + int[] key = getKey(bytes, pos, len); + int l = key[3]; + byte[] buff = Arrays.copyOfRange(bytes, pos, pos + l); + pos += l; + Chunk c = new Chunk(null, key, buff); + Chunk old = map.get(c); + if (old == null) { + // new entry + c.idList = new ArrayList<>(); + c.idList.add(id); + map.put(c, c); + } else { + old.idList.add(id); + } + id++; + } + segmentStart.add(outPos); + for (Chunk c : map.keySet()) { + outPos += c.write(tempOut, true); + } + // end of segment + outPos += writeVarLong(tempOut, 0); + } + tempOut.close(); + long tempSize = new File(tempFileName).length(); + + // merge blocks if needed + int blockSize = 64; + boolean merge = false; + while (segmentStart.size() > blockSize) { + merge = true; + log.setRange(30, 50, tempSize); + log.println(); + log.println("Merging " + segmentStart.size() + " segments " + blockSize + ":1"); + ArrayList segmentStart2 = new ArrayList<>(); + outPos = 0; + DataOutputStream tempOut2 = new DataOutputStream(new BufferedOutputStream( + new FileOutputStream(tempFileName + ".b"), 1024 * 1024)); + while (segmentStart.size() > 0) { + segmentStart2.add(outPos); + int s = Math.min(segmentStart.size(), blockSize); + List start = segmentStart.subList(0, s); + TreeSet segmentIn = new TreeSet<>(); + long read = openSegments(start, 
segmentIn, tempFileName, true); + log.printProgress(read); + Chunk last = null; + Iterator it = merge(segmentIn, log); + while (it.hasNext()) { + Chunk c = it.next(); + if (last == null) { + last = c; + } else if (last.compareTo(c) == 0) { + for (long x : c.idList) { + last.idList.add(x); + } + } else { + outPos += last.write(tempOut2, true); + last = c; + } + } + if (last != null) { + outPos += last.write(tempOut2, true); + } + // end of segment + outPos += writeVarLong(tempOut2, 0); + segmentStart = segmentStart.subList(s, segmentStart.size()); + } + segmentStart = segmentStart2; + tempOut2.close(); + tempSize = new File(tempFileName).length(); + new File(tempFileName).delete(); + tempFileName += ".b"; + } + if (merge) { + log.println(); + log.println("Combining " + segmentStart.size() + " segments"); + } + + TreeSet segmentIn = new TreeSet<>(); + long read = openSegments(segmentStart, segmentIn, tempFileName, true); + log.printProgress(read); + + DataOutputStream dataOut = new DataOutputStream(out); + dataOut.write(HEADER); + writeVarLong(dataOut, size); + + // File: header length chunk* 0 + // chunk: pos* 0 data + log.setRange(50, 100, tempSize); + Chunk last = null; + Iterator it = merge(segmentIn, log); + while (it.hasNext()) { + Chunk c = it.next(); + if (last == null) { + last = c; + } else if (last.compareTo(c) == 0) { + for (long x : c.idList) { + last.idList.add(x); + } + } else { + last.write(dataOut, false); + last = c; + } + } + if (last != null) { + last.write(dataOut, false); + } + new File(tempFileName).delete(); + writeVarLong(dataOut, 0); + dataOut.flush(); + } + + private static long openSegments(List segmentStart, TreeSet segmentIn, + String tempFileName, boolean readKey) throws IOException { + long inPos = 0; + int bufferTotal = 64 * 1024 * 1024; + int bufferPerStream = bufferTotal / segmentStart.size(); + // FileChannel fc = new RandomAccessFile(tempFileName, "r"). 
+ // getChannel(); + for (int i = 0; i < segmentStart.size(); i++) { + // long end = i < segmentStart.size() - 1 ? + // segmentStart.get(i+1) : fc.size(); + // InputStream in = + // new SharedInputStream(fc, segmentStart.get(i), end); + InputStream in = new FileInputStream(tempFileName); + in.skip(segmentStart.get(i)); + ChunkStream s = new ChunkStream(i); + s.readKey = readKey; + s.in = new DataInputStream(new BufferedInputStream(in, bufferPerStream)); + inPos += s.readNext(); + if (s.current != null) { + segmentIn.add(s); + } + } + return inPos; + } + + private static Iterator merge(final TreeSet segmentIn, final Log log) { + return new Iterator() { + + @Override + public boolean hasNext() { + return segmentIn.size() > 0; + } + + @Override + public Chunk next() { + ChunkStream s = segmentIn.first(); + segmentIn.remove(s); + Chunk c = s.current; + int len = s.readNext(); + log.printProgress(len); + if (s.current != null) { + segmentIn.add(s); + } + return c; + } + + @Override + public void remove() { + throw new UnsupportedOperationException(); + } + + }; + } + + /** + * Read a number of bytes. This method repeats reading until + * either the bytes have been read, or EOF. + * + * @param in the input stream + * @param buffer the target buffer + * @param max the number of bytes to read + * @return the number of bytes read (max unless EOF has been reached) + */ + private static int readFully(InputStream in, byte[] buffer, int max) + throws IOException { + int result = 0, len = Math.min(max, buffer.length); + while (len > 0) { + int l = in.read(buffer, result, len); + if (l < 0) { + break; + } + result += l; + len -= l; + } + return result; + } + + /** + * Get the sort key and length of a chunk. 
+ */ + private static int[] getKey(byte[] data, int start, int maxPos) { + int minLen = 4 * 1024; + int mask = 4 * 1024 - 1; + long min = Long.MAX_VALUE; + int pos = start; + for (int j = 0; pos < maxPos; pos++, j++) { + if (pos <= start + 10) { + continue; + } + long hash = getSipHash24(data, pos - 10, pos, 111, 11224); + if (hash < min) { + min = hash; + } + if (j > minLen) { + if ((hash & mask) == 1) { + break; + } + if (j > minLen * 4 && (hash & (mask >> 1)) == 1) { + break; + } + if (j > minLen * 16) { + break; + } + } + } + int len = pos - start; + int[] counts = new int[8]; + for (int i = start; i < pos; i++) { + int x = data[i] & 0xff; + counts[x >> 5]++; + } + int cs = 0; + for (int i = 0; i < 8; i++) { + cs *= 2; + if (counts[i] > (len / 32)) { + cs += 1; + } + } + int[] key = new int[4]; + // TODO test if cs makes a difference + key[0] = (int) (min >>> 32); + key[1] = (int) min; + key[2] = cs; + key[3] = len; + return key; + } + + private static long getSipHash24(byte[] b, int start, int end, long k0, + long k1) { + long v0 = k0 ^ 0x736f6d6570736575L; + long v1 = k1 ^ 0x646f72616e646f6dL; + long v2 = k0 ^ 0x6c7967656e657261L; + long v3 = k1 ^ 0x7465646279746573L; + int repeat; + for (int off = start; off <= end + 8; off += 8) { + long m; + if (off <= end) { + m = 0; + int i = 0; + for (; i < 8 && off + i < end; i++) { + m |= ((long) b[off + i] & 255) << (8 * i); + } + if (i < 8) { + m |= ((long) end - start) << 56; + } + v3 ^= m; + repeat = 2; + } else { + m = 0; + v2 ^= 0xff; + repeat = 4; + } + for (int i = 0; i < repeat; i++) { + v0 += v1; + v2 += v3; + v1 = Long.rotateLeft(v1, 13); + v3 = Long.rotateLeft(v3, 16); + v1 ^= v0; + v3 ^= v2; + v0 = Long.rotateLeft(v0, 32); + v2 += v1; + v0 += v3; + v1 = Long.rotateLeft(v1, 17); + v3 = Long.rotateLeft(v3, 21); + v1 ^= v2; + v3 ^= v0; + v2 = Long.rotateLeft(v2, 32); + } + v0 ^= m; + } + return v0 ^ v1 ^ v2 ^ v3; + } + + /** + * Get the sort key and length of a chunk. 
+ */ + private static int[] getKeyOld(byte[] data, int start, int maxPos) { + int minLen = 4 * 1024; + int mask = 4 * 1024 - 1; + int min = Integer.MAX_VALUE; + int max = Integer.MIN_VALUE; + int pos = start; + long bytes = 0; + for (int j = 0; pos < maxPos; pos++, j++) { + bytes = (bytes << 8) | (data[pos] & 255); + int hash = getHash(bytes); + if (hash < min) { + min = hash; + } + if (hash > max) { + max = hash; + } + if (j > minLen) { + if ((hash & mask) == 1) { + break; + } + if (j > minLen * 4 && (hash & (mask >> 1)) == 1) { + break; + } + if (j > minLen * 16) { + break; + } + } + } + int len = pos - start; + int[] counts = new int[8]; + for (int i = start; i < pos; i++) { + int x = data[i] & 0xff; + counts[x >> 5]++; + } + int cs = 0; + for (int i = 0; i < 8; i++) { + cs *= 2; + if (counts[i] > (len / 32)) { + cs += 1; + } + } + int[] key = new int[4]; + key[0] = cs; + key[1] = min; + key[2] = max; + key[3] = len; + return key; + } + + private static int getHash(long key) { + int hash = (int) ((key >>> 32) ^ key); + hash = ((hash >>> 16) ^ hash) * 0x45d9f3b; + hash = ((hash >>> 16) ^ hash) * 0x45d9f3b; + hash = (hash >>> 16) ^ hash; + return hash; + } + + private static void combine(Log log, InputStream in, OutputStream out, + String tempFileName) throws IOException { + int bufferSize = 16 * 1024 * 1024; + DataOutputStream tempOut = + new DataOutputStream( + new BufferedOutputStream( + new FileOutputStream(tempFileName), 1024 * 1024)); + + // File: header length chunk* 0 + // chunk: pos* 0 data + + DataInputStream dataIn = new DataInputStream(in); + byte[] header = new byte[4]; + dataIn.readFully(header); + if (!Arrays.equals(header, HEADER)) { + tempOut.close(); + throw new IOException("Invalid header"); + } + long size = readVarLong(dataIn); + long outPos = 0; + List segmentStart = new ArrayList<>(); + boolean end = false; + + // Temp file: segment* 0 + // Segment: chunk* 0 + // Chunk: pos* 0 data + log.setRange(0, 30, size); + while (!end) { + int 
segmentSize = 0; + TreeMap map = new TreeMap<>(); + while (segmentSize < bufferSize) { + Chunk c = Chunk.read(dataIn, false); + if (c == null) { + end = true; + break; + } + int length = c.value.length; + log.printProgress(length); + segmentSize += length; + for (long x : c.idList) { + map.put(x, c.value); + } + } + if (map.size() == 0) { + break; + } + segmentStart.add(outPos); + for (Long x : map.keySet()) { + outPos += writeVarLong(tempOut, x); + outPos += writeVarLong(tempOut, 0); + byte[] v = map.get(x); + outPos += writeVarLong(tempOut, v.length); + tempOut.write(v); + outPos += v.length; + } + outPos += writeVarLong(tempOut, 0); + } + tempOut.close(); + long tempSize = new File(tempFileName).length(); + size = outPos; + + // merge blocks if needed + int blockSize = 64; + boolean merge = false; + while (segmentStart.size() > blockSize) { + merge = true; + log.setRange(30, 50, tempSize); + log.println(); + log.println("Merging " + segmentStart.size() + " segments " + blockSize + ":1"); + ArrayList segmentStart2 = new ArrayList<>(); + outPos = 0; + DataOutputStream tempOut2 = new DataOutputStream(new BufferedOutputStream( + new FileOutputStream(tempFileName + ".b"), 1024 * 1024)); + while (segmentStart.size() > 0) { + segmentStart2.add(outPos); + int s = Math.min(segmentStart.size(), blockSize); + List start = segmentStart.subList(0, s); + TreeSet segmentIn = new TreeSet<>(); + long read = openSegments(start, segmentIn, tempFileName, false); + log.printProgress(read); + + Iterator it = merge(segmentIn, log); + while (it.hasNext()) { + Chunk c = it.next(); + outPos += writeVarLong(tempOut2, c.idList.get(0)); + outPos += writeVarLong(tempOut2, 0); + outPos += writeVarLong(tempOut2, c.value.length); + tempOut2.write(c.value); + outPos += c.value.length; + } + outPos += writeVarLong(tempOut2, 0); + + segmentStart = segmentStart.subList(s, segmentStart.size()); + } + segmentStart = segmentStart2; + tempOut2.close(); + tempSize = new File(tempFileName).length(); + 
new File(tempFileName).delete(); + tempFileName += ".b"; + } + if (merge) { + log.println(); + log.println("Combining " + segmentStart.size() + " segments"); + } + + TreeSet segmentIn = new TreeSet<>(); + DataOutputStream dataOut = new DataOutputStream(out); + log.setRange(50, 100, size); + + long read = openSegments(segmentStart, segmentIn, tempFileName, false); + log.printProgress(read); + + Iterator it = merge(segmentIn, log); + while (it.hasNext()) { + dataOut.write(it.next().value); + } + new File(tempFileName).delete(); + dataOut.flush(); + } + + /** + * A stream of chunks. + */ + static class ChunkStream implements Comparable { + final int id; + Chunk current; + DataInputStream in; + boolean readKey; + + ChunkStream(int id) { + this.id = id; + } + + /** + * Read the next chunk. + * + * @return the number of bytes read + */ + int readNext() { + current = null; + current = Chunk.read(in, readKey); + if (current == null) { + return 0; + } + return current.value.length; + } + + @Override + public int compareTo(ChunkStream o) { + int comp = current.compareTo(o.current); + if (comp != 0) { + return comp; + } + return Integer.signum(id - o.id); + } + } + + /** + * A chunk of data. + */ + static class Chunk implements Comparable { + ArrayList idList; + final byte[] value; + private final int[] sortKey; + + Chunk(ArrayList idList, int[] sortKey, byte[] value) { + this.idList = idList; + this.sortKey = sortKey; + this.value = value; + } + + /** + * Read a chunk. 
+ * + * @param in the input stream + * @param readKey whether to read the sort key + * @return the chunk, or null if 0 has been read + */ + public static Chunk read(DataInputStream in, boolean readKey) { + try { + ArrayList idList = new ArrayList<>(); + while (true) { + long x = readVarLong(in); + if (x == 0) { + break; + } + idList.add(x); + } + if (idList.size() == 0) { + // eof + in.close(); + return null; + } + int[] key = null; + if (readKey) { + key = new int[4]; + for (int i = 0; i < key.length; i++) { + key[i] = in.readInt(); + } + } + int len = (int) readVarLong(in); + byte[] value = new byte[len]; + in.readFully(value); + return new Chunk(idList, key, value); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + + /** + * Write a chunk. + * + * @param out the output stream + * @param writeKey whether to write the sort key + * @return the number of bytes written + */ + int write(DataOutputStream out, boolean writeKey) throws IOException { + int len = 0; + for (long x : idList) { + len += writeVarLong(out, x); + } + len += writeVarLong(out, 0); + if (writeKey) { + for (int i = 0; i < sortKey.length; i++) { + out.writeInt(sortKey[i]); + len += 4; + } + } + len += writeVarLong(out, value.length); + out.write(value); + len += value.length; + return len; + } + + @Override + public int compareTo(Chunk o) { + if (sortKey == null) { + // sort by id + long a = idList.get(0); + long b = o.idList.get(0); + if (a < b) { + return -1; + } else if (a > b) { + return 1; + } + return 0; + } + for (int i = 0; i < sortKey.length; i++) { + if (sortKey[i] < o.sortKey[i]) { + return -1; + } else if (sortKey[i] > o.sortKey[i]) { + return 1; + } + } + if (value.length < o.value.length) { + return -1; + } else if (value.length > o.value.length) { + return 1; + } + for (int i = 0; i < value.length; i++) { + int a = value[i] & 255; + int b = o.value[i] & 255; + if (a < b) { + return -1; + } else if (a > b) { + return 1; + } + } + return 0; + } + } + + /** + * A 
logger, including context. + */ + static class Log { + + private long lastTime; + private long current; + private int pos; + private int low; + private int high; + private long total; + + /** + * Print an empty line. + */ + void println() { + System.out.println(); + pos = 0; + } + + /** + * Print a message. + * + * @param msg the message + */ + void print(String msg) { + System.out.print(msg); + } + + /** + * Print a message. + * + * @param msg the message + */ + void println(String msg) { + System.out.println(msg); + pos = 0; + } + + /** + * Set the range. + * + * @param low the percent value if current = 0 + * @param high the percent value if current = total + * @param total the maximum value + */ + void setRange(int low, int high, long total) { + this.low = low; + this.high = high; + this.current = 0; + this.total = total; + } + + /** + * Print the progress. + * + * @param offset the offset since the last operation + */ + void printProgress(long offset) { + current += offset; + long now = System.nanoTime(); + if (now - lastTime > TimeUnit.SECONDS.toNanos(3)) { + String msg = (low + (high - low) * current / total) + "% "; + if (pos > 80) { + System.out.println(); + pos = 0; + } + System.out.print(msg); + pos += msg.length(); + lastTime = now; + } + } + + } + + /** + * Write a variable size long value. + * + * @param out the output stream + * @param x the value + * @return the number of bytes written + */ + static int writeVarLong(OutputStream out, long x) + throws IOException { + int len = 0; + while ((x & ~0x7f) != 0) { + out.write((byte) (0x80 | (x & 0x7f))); + x >>>= 7; + len++; + } + out.write((byte) x); + return ++len; + } + + /** + * Read a variable size long value. 
+ * + * @param in the input stream + * @return the value + */ + static long readVarLong(InputStream in) throws IOException { + long x = in.read(); + if (x < 0) { + throw new EOFException(); + } + x = (byte) x; + if (x >= 0) { + return x; + } + x &= 0x7f; + for (int s = 7; s < 64; s += 7) { + long b = in.read(); + if (b < 0) { + throw new EOFException(); + } + b = (byte) b; + x |= (b & 0x7f) << s; + if (b >= 0) { + break; + } + } + return x; + } + + /** + * An input stream that uses a shared file channel. + */ + static class SharedInputStream extends InputStream { + private final FileChannel channel; + private final long endPosition; + private long position; + + SharedInputStream(FileChannel channel, long position, long endPosition) { + this.channel = channel; + this.position = position; + this.endPosition = endPosition; + } + + @Override + public int read() { + throw new UnsupportedOperationException(); + } + + @Override + public int read(byte[] b, int off, int len) throws IOException { + if (len == 0) { + return 0; + } + len = (int) Math.min(len, endPosition - position); + if (len <= 0) { + return -1; + } + ByteBuffer buff = ByteBuffer.wrap(b, off, len); + len = channel.read(buff, position); + position += len; + return len; + } + + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/fs/ArchiveToolStore.java b/modules/h2/src/test/tools/org/h2/dev/fs/ArchiveToolStore.java new file mode 100644 index 0000000000000..543e86a9b98aa --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/fs/ArchiveToolStore.java @@ -0,0 +1,532 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.dev.fs; + +import java.io.BufferedOutputStream; +import java.io.FileOutputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.Map.Entry; +import java.util.concurrent.TimeUnit; +import java.util.Random; +import org.h2.mvstore.Cursor; +import org.h2.mvstore.DataUtils; +import org.h2.mvstore.MVMap; +import org.h2.mvstore.MVStore; +import org.h2.store.fs.FileUtils; +import org.h2.util.New; + +/** + * An archive tool to compress directories, using the MVStore backend. + */ +public class ArchiveToolStore { + + private static final int[] RANDOM = new int[256]; + private static final int MB = 1000 * 1000; + private long lastTime; + private long start; + private int bucket; + private String fileName; + + static { + Random r = new Random(1); + for (int i = 0; i < RANDOM.length; i++) { + RANDOM[i] = r.nextInt(); + } + } + + /** + * Run the tool. + * + * @param args the command line arguments + */ + public static void main(String... args) throws Exception { + ArchiveToolStore app = new ArchiveToolStore(); + String arg = args.length != 3 ? 
null : args[0]; + if ("-compress".equals(arg)) { + app.fileName = args[1]; + app.compress(args[2]); + } else if ("-extract".equals(arg)) { + app.fileName = args[1]; + app.expand(args[2]); + } else { + System.out.println("Command line options:"); + System.out.println("-compress "); + System.out.println("-extract "); + } + } + + private void compress(String sourceDir) throws Exception { + start(); + long tempSize = 8 * 1024 * 1024; + String tempFileName = fileName + ".temp"; + ArrayList fileNames = New.arrayList(); + + System.out.println("Reading the file list"); + long totalSize = addFiles(sourceDir, fileNames); + System.out.println("Compressing " + totalSize / MB + " MB"); + + FileUtils.delete(tempFileName); + FileUtils.delete(fileName); + MVStore storeTemp = new MVStore.Builder(). + fileName(tempFileName). + autoCommitDisabled(). + open(); + final MVStore store = new MVStore.Builder(). + fileName(fileName). + pageSplitSize(2 * 1024 * 1024). + compressHigh(). + autoCommitDisabled(). + open(); + MVMap filesTemp = storeTemp.openMap("files"); + long currentSize = 0; + int segmentId = 1; + int segmentLength = 0; + ByteBuffer buff = ByteBuffer.allocate(1024 * 1024); + for (String s : fileNames) { + String name = s.substring(sourceDir.length() + 1); + if (FileUtils.isDirectory(s)) { + // directory + filesTemp.put(name, new int[1]); + continue; + } + buff.clear(); + buff.flip(); + ArrayList posList = new ArrayList<>(); + try (FileChannel fc = FileUtils.open(s, "r")) { + boolean eof = false; + while (true) { + while (!eof && buff.remaining() < 512 * 1024) { + int remaining = buff.remaining(); + buff.compact(); + buff.position(remaining); + int l = fc.read(buff); + if (l < 0) { + eof = true; + } + buff.flip(); + } + if (buff.remaining() == 0) { + break; + } + int position = buff.position(); + int c = getChunkLength(buff.array(), position, + buff.limit()) - position; + byte[] bytes = Arrays.copyOfRange(buff.array(), position, position + c); + buff.position(position + c); + 
int[] key = getKey(bucket, bytes); + key[3] = segmentId; + while (true) { + MVMap data = storeTemp. + openMap("data" + segmentId); + byte[] old = data.get(key); + if (old == null) { + // new + data.put(key, bytes); + break; + } + if (Arrays.equals(old, bytes)) { + // duplicate + break; + } + // same checksum: change checksum + key[2]++; + } + for (int i = 0; i < key.length; i++) { + posList.add(key[i]); + } + segmentLength += c; + currentSize += c; + if (segmentLength > tempSize) { + storeTemp.commit(); + segmentId++; + segmentLength = 0; + } + printProgress(0, 50, currentSize, totalSize); + } + } + int[] posArray = new int[posList.size()]; + for (int i = 0; i < posList.size(); i++) { + posArray[i] = posList.get(i); + } + filesTemp.put(name, posArray); + } + storeTemp.commit(); + ArrayList> list = New.arrayList(); + totalSize = 0; + for (int i = 1; i <= segmentId; i++) { + MVMap data = storeTemp.openMap("data" + i); + totalSize += data.sizeAsLong(); + Cursor c = data.cursor(null); + if (c.hasNext()) { + c.next(); + list.add(c); + } + } + segmentId = 1; + segmentLength = 0; + currentSize = 0; + MVMap data = store.openMap("data" + segmentId); + MVMap keepSegment = storeTemp.openMap("keep"); + while (list.size() > 0) { + Collections.sort(list, new Comparator>() { + + @Override + public int compare(Cursor o1, + Cursor o2) { + int[] k1 = o1.getKey(); + int[] k2 = o2.getKey(); + int comp = 0; + for (int i = 0; i < k1.length - 1; i++) { + long x1 = k1[i]; + long x2 = k2[i]; + if (x1 > x2) { + comp = 1; + break; + } else if (x1 < x2) { + comp = -1; + break; + } + } + return comp; + } + + }); + Cursor top = list.get(0); + int[] key = top.getKey(); + byte[] bytes = top.getValue(); + int[] k2 = Arrays.copyOf(key, key.length); + k2[key.length - 1] = 0; + // TODO this lookup can be avoided + // if we remember the last entry with k[..] 
= 0 + byte[] old = data.get(k2); + if (old == null) { + if (segmentLength > tempSize) { + // switch only for new entries + // where segmentId is 0, + // so that entries with the same + // key but different segmentId + // are in the same segment + store.commit(); + segmentLength = 0; + segmentId++; + data = store.openMap("data" + segmentId); + } + key = k2; + // new entry + data.put(key, bytes); + segmentLength += bytes.length; + } else if (Arrays.equals(old, bytes)) { + // duplicate + } else { + // almost a duplicate: + // keep segment id + keepSegment.put(key, Boolean.TRUE); + data.put(key, bytes); + segmentLength += bytes.length; + } + if (!top.hasNext()) { + list.remove(0); + } else { + top.next(); + } + currentSize++; + printProgress(50, 100, currentSize, totalSize); + } + MVMap files = store.openMap("files"); + for (Entry e : filesTemp.entrySet()) { + String k = e.getKey(); + int[] ids = e.getValue(); + if (ids.length == 1) { + // directory + files.put(k, ids); + continue; + } + int[] newIds = Arrays.copyOf(ids, ids.length); + for (int i = 0; i < ids.length; i += 4) { + int[] id = new int[4]; + id[0] = ids[i]; + id[1] = ids[i + 1]; + id[2] = ids[i + 2]; + id[3] = ids[i + 3]; + if (!keepSegment.containsKey(id)) { + newIds[i + 3] = 0; + } + } + files.put(k, newIds); + } + store.commit(); + storeTemp.close(); + FileUtils.delete(tempFileName); + store.close(); + System.out.println(); + System.out.println("Compressed to " + + FileUtils.size(fileName) / MB + " MB"); + printDone(); + } + + private void start() { + this.start = System.nanoTime(); + this.lastTime = start; + } + + private void printProgress(int low, int high, long current, long total) { + long now = System.nanoTime(); + if (now - lastTime > TimeUnit.SECONDS.toNanos(5)) { + System.out.print((low + (high - low) * current / total) + "% "); + lastTime = now; + } + } + + private void printDone() { + System.out.println("Done in " + + TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - start) + + " seconds"); + 
} + + private static long addFiles(String dir, ArrayList list) { + long size = 0; + for (String s : FileUtils.newDirectoryStream(dir)) { + if (FileUtils.isDirectory(s)) { + size += addFiles(s, list); + } else { + size += FileUtils.size(s); + } + list.add(s); + } + return size; + } + + private void expand(String targetDir) throws Exception { + start(); + long tempSize = 8 * 1024 * 1024; + String tempFileName = fileName + ".temp"; + FileUtils.createDirectories(targetDir); + MVStore store = new MVStore.Builder(). + fileName(fileName).open(); + MVMap files = store.openMap("files"); + System.out.println("Extracting " + files.size() + " files"); + MVStore storeTemp = null; + FileUtils.delete(tempFileName); + long totalSize = 0; + int lastSegment = 0; + for (int i = 1;; i++) { + if (!store.hasMap("data" + i)) { + lastSegment = i - 1; + break; + } + } + + storeTemp = new MVStore.Builder(). + fileName(tempFileName). + autoCommitDisabled(). + open(); + + MVMap fileNames = storeTemp.openMap("fileNames"); + + MVMap filesTemp = storeTemp.openMap("files"); + int fileId = 0; + for (Entry e : files.entrySet()) { + fileNames.put(fileId++, e.getKey()); + filesTemp.put(e.getKey(), e.getValue()); + totalSize += e.getValue().length / 4; + } + storeTemp.commit(); + + files = filesTemp; + long currentSize = 0; + int chunkSize = 0; + for (int s = 1; s <= lastSegment; s++) { + MVMap segmentData = store.openMap("data" + s); + // key: fileId, blockId; value: data + MVMap fileData = storeTemp.openMap("fileData" + s); + fileId = 0; + for (Entry e : files.entrySet()) { + int[] keys = e.getValue(); + if (keys.length == 1) { + fileId++; + continue; + } + for (int i = 0; i < keys.length; i += 4) { + int[] dk = new int[4]; + dk[0] = keys[i]; + dk[1] = keys[i + 1]; + dk[2] = keys[i + 2]; + dk[3] = keys[i + 3]; + byte[] bytes = segmentData.get(dk); + if (bytes != null) { + int[] k = new int[] { fileId, i / 4 }; + fileData.put(k, bytes); + chunkSize += bytes.length; + if (chunkSize > tempSize) { + 
storeTemp.commit(); + chunkSize = 0; + } + currentSize++; + printProgress(0, 50, currentSize, totalSize); + } + } + fileId++; + } + storeTemp.commit(); + } + + ArrayList> list = New.arrayList(); + totalSize = 0; + currentSize = 0; + for (int i = 1; i <= lastSegment; i++) { + MVMap fileData = storeTemp.openMap("fileData" + i); + totalSize += fileData.sizeAsLong(); + Cursor c = fileData.cursor(null); + if (c.hasNext()) { + c.next(); + list.add(c); + } + } + String lastFileName = null; + OutputStream file = null; + int[] lastKey = null; + while (list.size() > 0) { + Collections.sort(list, new Comparator>() { + + @Override + public int compare(Cursor o1, + Cursor o2) { + int[] k1 = o1.getKey(); + int[] k2 = o2.getKey(); + int comp = 0; + for (int i = 0; i < k1.length; i++) { + long x1 = k1[i]; + long x2 = k2[i]; + if (x1 > x2) { + comp = 1; + break; + } else if (x1 < x2) { + comp = -1; + break; + } + } + return comp; + } + + }); + Cursor top = list.get(0); + int[] key = top.getKey(); + byte[] bytes = top.getValue(); + String f = targetDir + "/" + fileNames.get(key[0]); + if (!f.equals(lastFileName)) { + if (file != null) { + file.close(); + } + String p = FileUtils.getParent(f); + if (p != null) { + FileUtils.createDirectories(p); + } + file = new BufferedOutputStream(new FileOutputStream(f)); + lastFileName = f; + } else { + if (key[0] != lastKey[0] || key[1] != lastKey[1] + 1) { + System.out.println("missing entry after " + Arrays.toString(lastKey)); + } + } + lastKey = key; + file.write(bytes); + if (!top.hasNext()) { + list.remove(0); + } else { + top.next(); + } + currentSize++; + printProgress(50, 100, currentSize, totalSize); + } + for (Entry e : files.entrySet()) { + String f = targetDir + "/" + e.getKey(); + int[] keys = e.getValue(); + if (keys.length == 1) { + FileUtils.createDirectories(f); + } else if (keys.length == 0) { + // empty file + String p = FileUtils.getParent(f); + if (p != null) { + FileUtils.createDirectories(p); + } + new 
FileOutputStream(f).close(); + } + } + if (file != null) { + file.close(); + } + store.close(); + + storeTemp.close(); + FileUtils.delete(tempFileName); + + System.out.println(); + printDone(); + } + + private int getChunkLength(byte[] data, int start, int maxPos) { + int minLen = 4 * 1024; + int mask = 4 * 1024 - 1; + int factor = 31; + int hash = 0, mul = 1, offset = 8; + int min = Integer.MAX_VALUE; + int max = Integer.MIN_VALUE; + int i = start; + int[] rand = RANDOM; + for (int j = 0; i < maxPos; i++, j++) { + hash = hash * factor + rand[data[i] & 255]; + if (j >= offset) { + hash -= mul * rand[data[i - offset] & 255]; + } else { + mul *= factor; + } + if (hash < min) { + min = hash; + } + if (hash > max) { + max = hash; + } + if (j > minLen) { + if (j > minLen * 4) { + break; + } + if ((hash & mask) == 1) { + break; + } + } + } + bucket = min; + return i; + } + + private static int[] getKey(int bucket, byte[] buff) { + int[] key = new int[4]; + int[] counts = new int[8]; + int len = buff.length; + for (int i = 0; i < len; i++) { + int x = buff[i] & 0xff; + counts[x >> 5]++; + } + int cs = 0; + for (int i = 0; i < 8; i++) { + cs *= 2; + if (counts[i] > (len / 32)) { + cs += 1; + } + } + key[0] = cs; + key[1] = bucket; + key[2] = DataUtils.getFletcher32(buff, 0, buff.length); + return key; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/fs/FilePathZip2.java b/modules/h2/src/test/tools/org/h2/dev/fs/FilePathZip2.java new file mode 100644 index 0000000000000..2f7f45b816798 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/fs/FilePathZip2.java @@ -0,0 +1,450 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.dev.fs; + +import java.io.FileNotFoundException; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.ByteBuffer; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.util.ArrayList; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; +import org.h2.engine.Constants; +import org.h2.message.DbException; +import org.h2.store.fs.FakeFileChannel; +import org.h2.store.fs.FileBase; +import org.h2.store.fs.FileChannelInputStream; +import org.h2.store.fs.FilePath; +import org.h2.store.fs.FilePathDisk; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.New; + +/** + * This is a read-only file system that allows to access databases stored in a + * .zip or .jar file. The problem of this file system is that data is always + * accessed as a stream. This implementation allows to stack file systems. + */ +public class FilePathZip2 extends FilePath { + + /** + * Register the file system. 
+ * + * @return the instance + */ + public static FilePathZip2 register() { + FilePathZip2 instance = new FilePathZip2(); + FilePath.register(instance); + return instance; + } + + @Override + public FilePathZip2 getPath(String path) { + FilePathZip2 p = new FilePathZip2(); + p.name = path; + return p; + } + + @Override + public void createDirectory() { + // ignore + } + + @Override + public boolean createFile() { + throw DbException.getUnsupportedException("write"); + } + + @Override + public FilePath createTempFile(String suffix, boolean deleteOnExit, + boolean inTempDir) throws IOException { + if (!inTempDir) { + throw new IOException("File system is read-only"); + } + return new FilePathDisk().getPath(name).createTempFile(suffix, + deleteOnExit, true); + } + + @Override + public void delete() { + throw DbException.getUnsupportedException("write"); + } + + @Override + public boolean exists() { + try { + String entryName = getEntryName(); + if (entryName.length() == 0) { + return true; + } + ZipInputStream file = openZip(); + boolean result = false; + while (true) { + ZipEntry entry = file.getNextEntry(); + if (entry == null) { + break; + } + if (entry.getName().equals(entryName)) { + result = true; + break; + } + file.closeEntry(); + } + file.close(); + return result; + } catch (IOException e) { + return false; + } + } + + @Override + public long lastModified() { + return 0; + } + + @Override + public FilePath getParent() { + int idx = name.lastIndexOf('/'); + return idx < 0 ? 
null : getPath(name.substring(0, idx)); + } + + @Override + public boolean isAbsolute() { + String fileName = translateFileName(name); + return FilePath.get(fileName).isAbsolute(); + } + + @Override + public FilePath unwrap() { + return FilePath.get(name.substring(getScheme().length() + 1)); + } + + @Override + public boolean isDirectory() { + try { + String entryName = getEntryName(); + if (entryName.length() == 0) { + return true; + } + ZipInputStream file = openZip(); + boolean result = false; + while (true) { + ZipEntry entry = file.getNextEntry(); + if (entry == null) { + break; + } + String n = entry.getName(); + if (n.equals(entryName)) { + result = entry.isDirectory(); + break; + } else if (n.startsWith(entryName)) { + if (n.length() == entryName.length() + 1) { + if (n.equals(entryName + "/")) { + result = true; + break; + } + } + } + file.closeEntry(); + } + file.close(); + return result; + } catch (IOException e) { + return false; + } + } + + @Override + public boolean canWrite() { + return false; + } + + @Override + public boolean setReadOnly() { + return true; + } + + @Override + public long size() { + try { + String entryName = getEntryName(); + ZipInputStream file = openZip(); + long result = 0; + while (true) { + ZipEntry entry = file.getNextEntry(); + if (entry == null) { + break; + } + if (entry.getName().equals(entryName)) { + result = entry.getSize(); + if (result == -1) { + result = 0; + while (true) { + long x = file.skip(16 * Constants.IO_BUFFER_SIZE); + if (x == 0) { + break; + } + result += x; + } + } + break; + } + file.closeEntry(); + } + file.close(); + return result; + } catch (IOException e) { + return 0; + } + } + + @Override + public ArrayList newDirectoryStream() { + String path = name; + try { + if (path.indexOf('!') < 0) { + path += "!"; + } + if (!path.endsWith("/")) { + path += "/"; + } + ZipInputStream file = openZip(); + String dirName = getEntryName(); + String prefix = path.substring(0, path.length() - dirName.length()); + 
ArrayList list = New.arrayList(); + while (true) { + ZipEntry entry = file.getNextEntry(); + if (entry == null) { + break; + } + String entryName = entry.getName(); + if (entryName.startsWith(dirName) && entryName.length() > dirName.length()) { + int idx = entryName.indexOf('/', dirName.length()); + if (idx < 0 || idx >= entryName.length() - 1) { + list.add(getPath(prefix + entryName)); + } + } + file.closeEntry(); + } + file.close(); + return list; + } catch (IOException e) { + throw DbException.convertIOException(e, "listFiles " + path); + } + } + + @Override + public FilePath toRealPath() { + return this; + } + + @Override + public InputStream newInputStream() throws IOException { + return new FileChannelInputStream(open("r"), true); + } + + @Override + public FileChannel open(String mode) throws IOException { + String entryName = getEntryName(); + if (entryName.length() == 0) { + throw new FileNotFoundException(); + } + ZipInputStream in = openZip(); + while (true) { + ZipEntry entry = in.getNextEntry(); + if (entry == null) { + break; + } + if (entry.getName().equals(entryName)) { + return new FileZip2(name, entryName, in, size()); + } + in.closeEntry(); + } + in.close(); + throw new FileNotFoundException(name); + } + + @Override + public OutputStream newOutputStream(boolean append) { + throw DbException.getUnsupportedException("write"); + } + + @Override + public void moveTo(FilePath newName, boolean atomicReplace) { + throw DbException.getUnsupportedException("write"); + } + + private String getEntryName() { + int idx = name.indexOf('!'); + String fileName; + if (idx <= 0) { + fileName = ""; + } else { + fileName = name.substring(idx + 1); + } + fileName = fileName.replace('\\', '/'); + if (fileName.startsWith("/")) { + fileName = fileName.substring(1); + } + return fileName; + } + + private ZipInputStream openZip() throws IOException { + String fileName = translateFileName(name); + return new ZipInputStream(FileUtils.newInputStream(fileName)); + } + + 
private static String translateFileName(String fileName) { + if (fileName.startsWith("zip2:")) { + fileName = fileName.substring("zip2:".length()); + } + int idx = fileName.indexOf('!'); + if (idx >= 0) { + fileName = fileName.substring(0, idx); + } + return FilePathDisk.expandUserHomeDirectory(fileName); + } + + @Override + public String getScheme() { + return "zip2"; + } + +} + +/** + * The file is read from a stream. When reading from start to end, the same + * input stream is re-used, however when reading from end to start, a new input + * stream is opened for each request. + */ +class FileZip2 extends FileBase { + + private static final byte[] SKIP_BUFFER = new byte[1024]; + + private final String fullName; + private final String name; + private final long length; + private long pos; + private InputStream in; + private long inPos; + private boolean skipUsingRead; + + FileZip2(String fullName, String name, ZipInputStream in, long length) { + this.fullName = fullName; + this.name = name; + this.length = length; + this.in = in; + } + + @Override + public void implCloseChannel() throws IOException { + in.close(); + } + + @Override + public long position() { + return pos; + } + + @Override + public long size() { + return length; + } + + @Override + public int read(ByteBuffer dst) throws IOException { + seek(); + int len = in.read(dst.array(), dst.arrayOffset() + dst.position(), + dst.remaining()); + if (len > 0) { + dst.position(dst.position() + len); + pos += len; + inPos += len; + } + return len; + } + + private void seek() throws IOException { + if (inPos > pos) { + if (in != null) { + in.close(); + } + in = null; + } + if (in == null) { + in = FileUtils.newInputStream(fullName); + inPos = 0; + } + if (inPos < pos) { + long skip = pos - inPos; + if (!skipUsingRead) { + try { + IOUtils.skipFully(in, skip); + } catch (NullPointerException e) { + // workaround for Android + skipUsingRead = true; + } + } + if (skipUsingRead) { + while (skip > 0) { + int s = (int) 
Math.min(SKIP_BUFFER.length, skip); + s = in.read(SKIP_BUFFER, 0, s); + skip -= s; + } + } + inPos = pos; + } + } + + @Override + public FileChannel position(long newPos) { + this.pos = newPos; + return this; + } + + @Override + public FileChannel truncate(long newLength) throws IOException { + throw new IOException("File is read-only"); + } + + @Override + public void force(boolean metaData) throws IOException { + // nothing to do + } + + @Override + public int write(ByteBuffer src) throws IOException { + throw new IOException("File is read-only"); + } + + @Override + public synchronized FileLock tryLock(long position, long size, + boolean shared) throws IOException { + if (shared) { + return new FileLock(new FakeFileChannel(), position, size, shared) { + + @Override + public boolean isValid() { + return true; + } + + @Override + public void release() throws IOException { + // ignore + }}; + } + return null; + } + + @Override + public String toString() { + return "zip2:" + name; + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/fs/FileShell.java b/modules/h2/src/test/tools/org/h2/dev/fs/FileShell.java new file mode 100644 index 0000000000000..9297451991cd1 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/fs/FileShell.java @@ -0,0 +1,516 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.dev.fs; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileNotFoundException; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.PrintStream; +import java.nio.channels.FileChannel; +import java.sql.SQLException; +import java.sql.Timestamp; +import java.util.ArrayList; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; +import java.util.zip.ZipOutputStream; +import org.h2.command.dml.BackupCommand; +import org.h2.engine.Constants; +import org.h2.engine.SysProperties; +import org.h2.message.DbException; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; +import org.h2.util.New; +import org.h2.util.StringUtils; +import org.h2.util.Tool; + +/** + * A shell tool that allows to list and manipulate files. + */ +public class FileShell extends Tool { + + private boolean verbose; + private BufferedReader reader; + private PrintStream err = System.err; + private InputStream in = System.in; + private String currentWorkingDirectory; + + /** + * Options are case sensitive. Supported options are: + *
<table>
+ * <tr><td>[-help] or [-?]</td>
+ * <td>Print the list of options</td></tr>
+ * <tr><td>[-verbose]</td>
+ * <td>Print stack traces</td></tr>
+ * <tr><td>[-run ...]</td>
+ * <td>Execute the given commands and exit</td></tr>
+ * </table>
    + * Multiple commands may be executed if separated by ; + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new FileShell().runTool(args); + } + + /** + * Sets the standard error stream. + * + * @param err the new standard error stream + */ + public void setErr(PrintStream err) { + this.err = err; + } + + /** + * Redirects the standard input. By default, System.in is used. + * + * @param in the input stream to use + */ + public void setIn(InputStream in) { + this.in = in; + } + + /** + * Redirects the standard input. By default, System.in is used. + * + * @param reader the input stream reader to use + */ + public void setInReader(BufferedReader reader) { + this.reader = reader; + } + + /** + * Run the shell tool with the given command line settings. + * + * @param args the command line settings + */ + @Override + public void runTool(String... args) throws SQLException { + try { + currentWorkingDirectory = new File(".").getCanonicalPath(); + } catch (IOException e) { + throw DbException.convertIOException(e, "cwd"); + } + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-run")) { + try { + execute(args[++i]); + } catch (Exception e) { + throw DbException.convert(e); + } + } else if (arg.equals("-verbose")) { + verbose = true; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + promptLoop(); + } + + private void promptLoop() { + println(""); + println("Welcome to H2 File Shell " + Constants.getFullVersion()); + println("Exit with Ctrl+C"); + showHelp(); + if (reader == null) { + reader = new BufferedReader(new InputStreamReader(in)); + } + println(FileUtils.toRealPath(currentWorkingDirectory)); + while (true) { + try { + print("> "); + String line = readLine(); + if (line == null) { + break; + } + line = line.trim(); + if (line.length() == 0) 
{ + continue; + } + try { + execute(line); + } catch (Exception e) { + error(e); + } + } catch (Exception e) { + error(e); + break; + } + } + } + + private void execute(String line) throws IOException { + String[] commands = StringUtils.arraySplit(line, ';', true); + for (String command : commands) { + String[] list = StringUtils.arraySplit(command, ' ', true); + if (!execute(list)) { + break; + } + } + } + + private boolean execute(String[] list) throws IOException { + // TODO unit tests for everything (multiple commands, errors, ...) + // TODO less (support large files) + // TODO hex dump + int i = 0; + String c = list[i++]; + if ("exit".equals(c) || "quit".equals(c)) { + end(list, i); + return false; + } else if ("help".equals(c) || "?".equals(c)) { + showHelp(); + end(list, i); + } else if ("cat".equals(c)) { + String file = getFile(list[i++]); + end(list, i); + cat(file, Long.MAX_VALUE); + } else if ("cd".equals(c)) { + String dir = getFile(list[i++]); + end(list, i); + if (FileUtils.isDirectory(dir)) { + currentWorkingDirectory = dir; + println(dir); + } else { + println("Not a directory: " + dir); + } + } else if ("chmod".equals(c)) { + String mode = list[i++]; + String file = getFile(list[i++]); + end(list, i); + if ("-w".equals(mode)) { + boolean success = FileUtils.setReadOnly(file); + println(success ? "Success" : "Failed"); + } else { + println("Unsupported mode: " + mode); + } + } else if ("cp".equals(c)) { + String source = getFile(list[i++]); + String target = getFile(list[i++]); + end(list, i); + IOUtils.copyFiles(source, target); + } else if ("head".equals(c)) { + String file = getFile(list[i++]); + end(list, i); + cat(file, 1024); + } else if ("ls".equals(c)) { + String dir = currentWorkingDirectory; + if (i < list.length) { + dir = getFile(list[i++]); + } + end(list, i); + println(dir); + for (String file : FileUtils.newDirectoryStream(dir)) { + StringBuilder buff = new StringBuilder(); + buff.append(FileUtils.isDirectory(file) ? 
"d" : "-"); + buff.append(FileUtils.canWrite(file) ? "rw" : "r-"); + buff.append(' '); + buff.append(String.format("%10d", FileUtils.size(file))); + buff.append(' '); + long lastMod = FileUtils.lastModified(file); + buff.append(new Timestamp(lastMod).toString()); + buff.append(' '); + buff.append(FileUtils.getName(file)); + println(buff.toString()); + } + } else if ("mkdir".equals(c)) { + String dir = getFile(list[i++]); + end(list, i); + FileUtils.createDirectories(dir); + } else if ("mv".equals(c)) { + String source = getFile(list[i++]); + String target = getFile(list[i++]); + end(list, i); + FileUtils.move(source, target); + } else if ("pwd".equals(c)) { + end(list, i); + println(FileUtils.toRealPath(currentWorkingDirectory)); + } else if ("rm".equals(c)) { + if ("-r".equals(list[i])) { + i++; + String dir = getFile(list[i++]); + end(list, i); + FileUtils.deleteRecursive(dir, true); + } else if ("-rf".equals(list[i])) { + i++; + String dir = getFile(list[i++]); + end(list, i); + FileUtils.deleteRecursive(dir, false); + } else { + String file = getFile(list[i++]); + end(list, i); + FileUtils.delete(file); + } + } else if ("touch".equals(c)) { + String file = getFile(list[i++]); + end(list, i); + truncate(file, FileUtils.size(file)); + } else if ("truncate".equals(c)) { + if ("-s".equals(list[i])) { + i++; + long length = Long.decode(list[i++]); + String file = getFile(list[i++]); + end(list, i); + truncate(file, length); + } else { + println("Unsupported option"); + } + } else if ("unzip".equals(c)) { + String file = getFile(list[i++]); + end(list, i); + unzip(file, currentWorkingDirectory); + } else if ("zip".equals(c)) { + boolean recursive = false; + if ("-r".equals(list[i])) { + i++; + recursive = true; + } + String target = getFile(list[i++]); + ArrayList source = New.arrayList(); + readFileList(list, i, source, recursive); + zip(target, currentWorkingDirectory, source); + } + return true; + } + + private static void end(String[] list, int index) throws 
IOException { + if (list.length != index) { + throw new IOException("End of command expected, got: " + list[index]); + } + } + + private void cat(String fileName, long length) { + if (!FileUtils.exists(fileName)) { + print("No such file: " + fileName); + } + if (FileUtils.isDirectory(fileName)) { + print("Is a directory: " + fileName); + } + InputStream inFile = null; + try { + inFile = FileUtils.newInputStream(fileName); + IOUtils.copy(inFile, out, length); + } catch (IOException e) { + error(e); + } finally { + IOUtils.closeSilently(inFile); + } + println(""); + } + + private void truncate(String fileName, long length) { + FileChannel f = null; + try { + f = FileUtils.open(fileName, "rw"); + f.truncate(length); + } catch (IOException e) { + error(e); + } finally { + try { + f.close(); + } catch (IOException e) { + error(e); + } + } + } + + private void error(Exception e) { + println("Exception: " + e.getMessage()); + if (verbose) { + e.printStackTrace(err); + } + } + + private static void zip(String zipFileName, String base, + ArrayList source) { + FileUtils.delete(zipFileName); + OutputStream fileOut = null; + try { + fileOut = FileUtils.newOutputStream(zipFileName, false); + ZipOutputStream zipOut = new ZipOutputStream(fileOut); + for (String fileName : source) { + String f = FileUtils.toRealPath(fileName); + if (!f.startsWith(base)) { + DbException.throwInternalError(f + " does not start with " + base); + } + if (f.endsWith(zipFileName)) { + continue; + } + if (FileUtils.isDirectory(fileName)) { + continue; + } + f = f.substring(base.length()); + f = BackupCommand.correctFileName(f); + ZipEntry entry = new ZipEntry(f); + zipOut.putNextEntry(entry); + InputStream in = null; + try { + in = FileUtils.newInputStream(fileName); + IOUtils.copyAndCloseInput(in, zipOut); + } catch (FileNotFoundException e) { + // the file could have been deleted in the meantime + // ignore this (in this case an empty file is created) + } finally { + IOUtils.closeSilently(in); + } + 
zipOut.closeEntry(); + } + zipOut.closeEntry(); + zipOut.close(); + } catch (IOException e) { + throw DbException.convertIOException(e, zipFileName); + } finally { + IOUtils.closeSilently(fileOut); + } + } + + private void unzip(String zipFileName, String targetDir) { + InputStream inFile = null; + try { + inFile = FileUtils.newInputStream(zipFileName); + ZipInputStream zipIn = new ZipInputStream(inFile); + while (true) { + ZipEntry entry = zipIn.getNextEntry(); + if (entry == null) { + break; + } + String fileName = entry.getName(); + // restoring windows backups on linux and vice versa + fileName = fileName.replace('\\', + SysProperties.FILE_SEPARATOR.charAt(0)); + fileName = fileName.replace('/', + SysProperties.FILE_SEPARATOR.charAt(0)); + if (fileName.startsWith(SysProperties.FILE_SEPARATOR)) { + fileName = fileName.substring(1); + } + OutputStream o = null; + try { + o = FileUtils.newOutputStream(targetDir + + SysProperties.FILE_SEPARATOR + fileName, false); + IOUtils.copy(zipIn, o); + o.close(); + } finally { + IOUtils.closeSilently(o); + } + zipIn.closeEntry(); + } + zipIn.closeEntry(); + zipIn.close(); + } catch (IOException e) { + error(e); + } finally { + IOUtils.closeSilently(inFile); + } + } + + private int readFileList(String[] list, int i, ArrayList<String> target, + boolean recursive) throws IOException { + while (i < list.length) { + String c = list[i++]; + if (";".equals(c)) { + break; + } + c = getFile(c); + if (!FileUtils.exists(c)) { + throw new IOException("File not found: " + c); + } + if (recursive) { + addFilesRecursive(c, target); + } else { + target.add(c); + } + } + return i; + } + + private void addFilesRecursive(String f, ArrayList<String> target) { + if (FileUtils.isDirectory(f)) { + for (String c : FileUtils.newDirectoryStream(f)) { + addFilesRecursive(c, target); + } + } else { + target.add(getFile(f)); + } + } + + private String getFile(String f) { + if (FileUtils.isAbsolute(f)) { + return f; + } + String unwrapped = FileUtils.unwrap(f); + String 
prefix = f.substring(0, f.length() - unwrapped.length()); + f = prefix + currentWorkingDirectory + SysProperties.FILE_SEPARATOR + unwrapped; + return FileUtils.toRealPath(f); + } + + private void showHelp() { + println("Commands are case sensitive"); + println("? or help " + + "Display this help"); + println("cat <file> " + + "Print the contents of the file"); + println("cd <dir> " + + "Change the directory"); + println("chmod -w <file> " + + "Make the file read-only"); + println("cp <source> <target> " + + "Copy a file"); + println("head <file> " + + "Print the first few lines of the contents"); + println("ls [<dir>] " + + "Print the directory contents"); + println("mkdir <dir> " + + "Create a directory (including parent directories)"); + println("mv <source> <target> " + + "Rename a file or directory"); + println("pwd " + + "Print the current working directory"); + println("rm <file> " + + "Remove a file"); + println("rm -r <dir> " + + "Remove a directory, recursively"); + println("rm -rf <dir> " + + "Remove a directory, recursively; force"); + println("touch <file> " + + "Update the last modified date (creates the file)"); + println("truncate -s <size> <file> " + + "Set the file length"); + println("unzip <zip> " + + "Extract all files from the zip file"); + println("zip [-r] <zip> <files> " + + "Create a zip file (-r to recurse directories)"); + println("exit Exit"); + println(""); + } + + private String readLine() throws IOException { + String line = reader.readLine(); + if (line == null) { + throw new IOException("Aborted"); + } + return line; + } + + private void print(String s) { + out.print(s); + out.flush(); + } + + private void println(String s) { + out.println(s); + out.flush(); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/fs/package.html b/modules/h2/src/test/tools/org/h2/dev/fs/package.html new file mode 100644 index 0000000000000..804d950229f5c --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/fs/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +An encrypting file system. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/FtpClient.java b/modules/h2/src/test/tools/org/h2/dev/ftp/FtpClient.java new file mode 100644 index 0000000000000..c8667690c4711 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/FtpClient.java @@ -0,0 +1,476 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.ftp; + +import java.io.BufferedReader; +import java.io.ByteArrayOutputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.OutputStreamWriter; +import java.io.PrintWriter; +import java.net.InetAddress; +import java.net.Socket; +import java.nio.charset.StandardCharsets; + +import org.h2.util.IOUtils; +import org.h2.util.NetUtils; +import org.h2.util.StatementBuilder; +import org.h2.util.StringUtils; + +/** + * A simple standalone FTP client. + */ +public class FtpClient { + private Socket socket; + private BufferedReader reader; + private PrintWriter writer; + private int code; + private String message; + private InputStream inData; + private OutputStream outData; + + private FtpClient() { + // don't allow construction + } + + /** + * Open an FTP connection. 
+ * + * @param url the FTP URL + * @return the ftp client object + */ + public static FtpClient open(String url) throws IOException { + FtpClient client = new FtpClient(); + client.connect(url); + return client; + } + + private void connect(String url) throws IOException { + socket = NetUtils.createSocket(url, 21, false); + InputStream in = socket.getInputStream(); + OutputStream out = socket.getOutputStream(); + reader = new BufferedReader(new InputStreamReader(in)); + writer = new PrintWriter(new OutputStreamWriter(out, StandardCharsets.UTF_8)); + readCode(220); + } + + private void readLine() throws IOException { + while (true) { + message = reader.readLine(); + if (message != null) { + int idxSpace = message.indexOf(' '); + int idxMinus = message.indexOf('-'); + int idx = idxSpace < 0 ? idxMinus : idxMinus < 0 ? idxSpace + : Math.min(idxSpace, idxMinus); + if (idx < 0) { + code = 0; + } else { + code = Integer.parseInt(message.substring(0, idx)); + message = message.substring(idx + 1); + } + } + break; + } + } + + private void readCode(int optional, int expected) throws IOException { + readLine(); + if (code == optional) { + readLine(); + } + if (code != expected) { + throw new IOException("Expected: " + expected + " got: " + code + + " " + message); + } + } + + private void readCode(int expected) throws IOException { + readCode(-1, expected); + } + + private void send(String command) { + writer.println(command); + writer.flush(); + } + + /** + * Login to this FTP server (USER, PASS, SYST, SITE, STRU F, TYPE I). + * + * @param userName the user name + * @param password the password + */ + public void login(String userName, String password) throws IOException { + send("USER " + userName); + readCode(331); + send("PASS " + password); + readCode(230); + send("SYST"); + readCode(215); + send("SITE"); + readCode(500); + send("STRU F"); + readCode(200); + send("TYPE I"); + readCode(200); + } + + /** + * Close the connection (QUIT). 
+ */ + public void close() throws IOException { + if (socket != null) { + send("QUIT"); + readCode(221); + socket.close(); + } + } + + /** + * Change the working directory (CWD). + * + * @param dir the new directory + */ + public void changeWorkingDirectory(String dir) throws IOException { + send("CWD " + dir); + readCode(250); + } + + /** + * Change to the parent directory (CDUP). + */ + public void changeDirectoryUp() throws IOException { + send("CDUP"); + readCode(250); + } + + /** + * Delete a file (DELE). + * + * @param fileName the name of the file to delete + */ + void delete(String fileName) throws IOException { + send("DELE " + fileName); + readCode(226, 250); + } + + /** + * Create a directory (MKD). + * + * @param dir the directory to create + */ + public void makeDirectory(String dir) throws IOException { + send("MKD " + dir); + readCode(226, 257); + } + + /** + * Change the transfer mode (MODE). + * + * @param mode the mode + */ + void mode(String mode) throws IOException { + send("MODE " + mode); + readCode(200); + } + + /** + * Change the modified time of a file (MDTM). + * + * @param fileName the file name + */ + void modificationTime(String fileName) throws IOException { + send("MDTM " + fileName); + readCode(213); + } + + /** + * Issue a no-operation statement (NOOP). + */ + void noOperation() throws IOException { + send("NOOP"); + readCode(200); + } + + /** + * Print the working directory (PWD). 
+ * + * @return the working directory + */ + String printWorkingDirectory() throws IOException { + send("PWD"); + readCode(257); + return removeQuotes(); + } + + private String removeQuotes() { + int first = message.indexOf('"') + 1; + int last = message.lastIndexOf('"'); + StringBuilder buff = new StringBuilder(); + for (int i = first; i < last; i++) { + char ch = message.charAt(i); + buff.append(ch); + if (ch == '\"') { + i++; + } + } + return buff.toString(); + } + + private void passive() throws IOException { + send("PASV"); + readCode(226, 227); + int first = message.indexOf('(') + 1; + int last = message.indexOf(')'); + String[] address = StringUtils.arraySplit( + message.substring(first, last), ',', true); + StatementBuilder buff = new StatementBuilder(); + for (int i = 0; i < 4; i++) { + buff.appendExceptFirst("."); + buff.append(address[i]); + } + String ip = buff.toString(); + InetAddress addr = InetAddress.getByName(ip); + int port = (Integer.parseInt(address[4]) << 8) | Integer.parseInt(address[5]); + Socket socketData = NetUtils.createSocket(addr, port, false); + inData = socketData.getInputStream(); + outData = socketData.getOutputStream(); + } + + /** + * Rename a file (RNFR / RNTO). + * + * @param fromFileName the old file name + * @param toFileName the new file name + */ + void rename(String fromFileName, String toFileName) throws IOException { + send("RNFR " + fromFileName); + readCode(350); + send("RNTO " + toFileName); + readCode(250); + } + + /** + * Read a file. + * + * @param fileName the file name + * @return the content, null if the file doesn't exist + */ + public byte[] retrieve(String fileName) throws IOException { + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + retrieve(fileName, buff, 0); + return buff.toByteArray(); + } + + /** + * Read a file ([REST] RETR). + * + * @param fileName the file name + * @param out the output stream + * @param restartAt restart at the given position (0 if no restart is + * required). 
+ */ + void retrieve(String fileName, OutputStream out, long restartAt) + throws IOException { + passive(); + if (restartAt > 0) { + send("REST " + restartAt); + readCode(350); + } + send("RETR " + fileName); + IOUtils.copyAndClose(inData, out); + readCode(150, 226); + } + + /** + * Remove a directory (RMD). + * + * @param dir the directory to remove + */ + public void removeDirectory(String dir) throws IOException { + send("RMD " + dir); + readCode(226, 250); + } + + /** + * Remove all files and directory in a directory, and then delete the + * directory itself. + * + * @param dir the directory to remove + */ + public void removeDirectoryRecursive(String dir) throws IOException { + for (File f : listFiles(dir)) { + String name = f.getName(); + if (f.isDirectory()) { + if (!name.equals(".") && !name.equals("..")) { + removeDirectoryRecursive(dir + "/" + name); + } + } else { + delete(dir + "/" + name); + } + } + removeDirectory(dir); + } + + /** + * Get the size of a file (SIZE). + * + * @param fileName the file name + * @return the size + */ + long size(String fileName) throws IOException { + send("SIZE " + fileName); + readCode(250); + long size = Long.parseLong(message); + return size; + } + + /** + * Store a file (STOR). + * + * @param fileName the file name + * @param in the input stream + */ + public void store(String fileName, InputStream in) throws IOException { + passive(); + send("STOR " + fileName); + readCode(150); + IOUtils.copyAndClose(in, outData); + readCode(226); + } + + /** + * Copy a local file or directory to the FTP server, recursively. 
+ * + * @param file the file to copy + */ + public void storeRecursive(File file) throws IOException { + if (file.isDirectory()) { + makeDirectory(file.getName()); + changeWorkingDirectory(file.getName()); + for (File f : file.listFiles()) { + storeRecursive(f); + } + changeWorkingDirectory(".."); + } else { + InputStream in = new FileInputStream(file); + store(file.getName(), in); + } + } + + /** + * Get the directory listing (NLST). + * + * @param dir the directory + * @return the listing + */ + public String nameList(String dir) throws IOException { + passive(); + send("NLST " + dir); + readCode(150); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + IOUtils.copyAndClose(inData, out); + readCode(226); + byte[] data = out.toByteArray(); + return new String(data); + } + + /** + * Get the directory listing (LIST). + * + * @param dir the directory + * @return the listing + */ + public String list(String dir) throws IOException { + passive(); + send("LIST " + dir); + readCode(150); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + IOUtils.copyAndClose(inData, out); + readCode(226); + byte[] data = out.toByteArray(); + return new String(data); + } + + /** + * A file on an FTP server. + */ + static class FtpFile extends File { + private static final long serialVersionUID = 1L; + private final boolean dir; + private final long length; + FtpFile(String name, boolean dir, long length) { + super(name); + this.dir = dir; + this.length = length; + } + @Override + public long length() { + return length; + } + @Override + public boolean isFile() { + return !dir; + } + @Override + public boolean isDirectory() { + return dir; + } + @Override + public boolean exists() { + return true; + } + } + + /** + * Check if a file exists on the FTP server. 
+ * + * @param dir the directory + * @param name the directory or file name + * @return true if it exists + */ + public boolean exists(String dir, String name) throws IOException { + for (File f : listFiles(dir)) { + if (f.getName().equals(name)) { + return true; + } + } + return false; + } + + /** + * List the files on the FTP server. + * + * @param dir the directory + * @return the list of files + */ + public File[] listFiles(String dir) throws IOException { + String content = list(dir); + String[] list = StringUtils.arraySplit(content.trim(), '\n', true); + File[] files = new File[list.length]; + for (int i = 0; i < files.length; i++) { + String s = list[i]; + while (true) { + String s2 = StringUtils.replaceAll(s, " ", " "); + if (s2.equals(s)) { + break; + } + s = s2; + } + String[] tokens = StringUtils.arraySplit(s, ' ', true); + boolean directory = tokens[0].charAt(0) == 'd'; + long length = Long.parseLong(tokens[4]); + String name = tokens[8]; + File f = new FtpFile(name, directory, length); + files[i] = f; + } + return files; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/package.html b/modules/h2/src/test/tools/org/h2/dev/ftp/package.html new file mode 100644 index 0000000000000..8c7d69fa4abfd --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A simple FTP client. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpControl.java b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpControl.java new file mode 100644 index 0000000000000..7810f7fbe5ac0 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpControl.java @@ -0,0 +1,420 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.ftp.server; + +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStreamReader; +import java.io.OutputStreamWriter; +import java.io.PrintWriter; +import java.net.InetAddress; +import java.net.ServerSocket; +import java.net.Socket; +import java.nio.charset.StandardCharsets; + +import org.h2.store.fs.FileUtils; +import org.h2.util.StringUtils; + +/** + * The implementation of the control channel of the FTP server. + */ +public class FtpControl extends Thread { + + private static final String SERVER_NAME = "Small FTP Server"; + + private final FtpServer server; + private final Socket control; + private FtpData data; + private PrintWriter output; + private String userName; + private boolean connected, readonly; + private String currentDir = "/"; + private String serverIpAddress; + private boolean stop; + private String renameFrom; + private boolean replied; + private long restart; + + FtpControl(Socket control, FtpServer server, boolean stop) { + this.server = server; + this.control = control; + this.stop = stop; + } + + @Override + public void run() { + try { + output = new PrintWriter(new OutputStreamWriter( + control.getOutputStream(), StandardCharsets.UTF_8)); + if (stop) { + reply(421, "Too many users"); + } else { + reply(220, SERVER_NAME); + // TODO need option to configure the serverIpAddress? 
+ serverIpAddress = control.getLocalAddress().getHostAddress().replace('.', ','); + BufferedReader input = new BufferedReader( + new InputStreamReader(control.getInputStream())); + while (!stop) { + String command = null; + try { + command = input.readLine(); + } catch (IOException e) { + // ignore + } + if (command == null) { + break; + } + process(command); + } + if (data != null) { + data.close(); + } + } + } catch (Throwable t) { + server.traceError(t); + } + server.closeConnection(); + } + + private void process(String command) throws IOException { + int idx = command.indexOf(' '); + String param = ""; + if (idx >= 0) { + param = command.substring(idx).trim(); + command = command.substring(0, idx); + } + command = StringUtils.toUpperEnglish(command); + if (command.length() == 0) { + reply(506, "No command"); + return; + } + server.trace(">" + command); + FtpEventListener listener = server.getEventListener(); + FtpEvent event = null; + if (listener != null) { + event = new FtpEvent(this, command, param); + listener.beforeCommand(event); + } + replied = false; + if (connected) { + processConnected(command, param); + } + if (!replied) { + if ("USER".equals(command)) { + userName = param; + reply(331, "Need password"); + } else if ("QUIT".equals(command)) { + reply(221, "Bye"); + stop = true; + } else if ("PASS".equals(command)) { + if (userName == null) { + reply(332, "Need username"); + } else if (server.checkUserPasswordWrite(userName, param)) { + reply(230, "Ok"); + readonly = false; + connected = true; + } else if (server.checkUserPasswordReadOnly(userName)) { + reply(230, "Ok, readonly"); + readonly = true; + connected = true; + } else { + reply(431, "Wrong user/password"); + } + } else if ("REIN".equals(command)) { + userName = null; + connected = false; + currentDir = "/"; + reply(200, "Ok"); + } else if ("HELP".equals(command)) { + reply(214, SERVER_NAME); + } + } + if (!replied) { + if (listener != null) { + listener.onUnsupportedCommand(event); + } + 
reply(506, "Invalid command"); + } + if (listener != null) { + listener.afterCommand(event); + } + } + + private void processConnected(String command, String param) throws IOException { + switch (command.charAt(0)) { + case 'C': + if ("CWD".equals(command)) { + String path = getPath(param); + String fileName = getFileName(path); + if (FileUtils.exists(fileName) && FileUtils.isDirectory(fileName)) { + if (!path.endsWith("/")) { + path += "/"; + } + currentDir = path; + reply(250, "Ok"); + } else { + reply(550, "Failed"); + } + } else if ("CDUP".equals(command)) { + if (currentDir.length() > 1) { + int idx = currentDir.lastIndexOf('/', currentDir.length() - 2); + currentDir = currentDir.substring(0, idx + 1); + reply(250, "Ok"); + } else { + reply(550, "Failed"); + } + } + break; + case 'D': + if ("DELE".equals(command)) { + String fileName = getFileName(param); + if (!readonly && FileUtils.exists(fileName) + && !FileUtils.isDirectory(fileName) + && FileUtils.tryDelete(fileName)) { + if (server.getAllowTask() && fileName.endsWith(FtpServer.TASK_SUFFIX)) { + server.stopTask(fileName); + } + reply(250, "Ok"); + } else { + reply(500, "Delete failed"); + } + } + break; + case 'L': + if ("LIST".equals(command)) { + processList(param, true); + } + break; + case 'M': + if ("MKD".equals(command)) { + processMakeDir(param); + } else if ("MODE".equals(command)) { + if ("S".equals(StringUtils.toUpperEnglish(param))) { + reply(200, "Ok"); + } else { + reply(504, "Invalid"); + } + } else if ("MDTM".equals(command)) { + String fileName = getFileName(param); + if (FileUtils.exists(fileName) && !FileUtils.isDirectory(fileName)) { + reply(213, server.formatLastModified(fileName)); + } else { + reply(550, "Failed"); + } + } + break; + case 'N': + if ("NLST".equals(command)) { + processList(param, false); + } else if ("NOOP".equals(command)) { + reply(200, "Ok"); + } + break; + case 'P': + if ("PWD".equals(command)) { + reply(257, StringUtils.quoteIdentifier(currentDir) + " 
directory"); + } else if ("PASV".equals(command)) { + ServerSocket dataSocket = FtpServer.createDataSocket(); + data = new FtpData(server, control.getInetAddress(), dataSocket); + data.start(); + int port = dataSocket.getLocalPort(); + reply(227, "Passive Mode (" + serverIpAddress + "," + + (port >> 8) + "," + (port & 255) + ")"); + } else if ("PORT".equals(command)) { + String[] list = StringUtils.arraySplit(param, ',', true); + String host = list[0] + "." + list[1] + "." + list[2] + "." + list[3]; + int port = (Integer.parseInt(list[4]) << 8) | Integer.parseInt(list[5]); + InetAddress address = InetAddress.getByName(host); + if (address.equals(control.getInetAddress())) { + data = new FtpData(server, address, port); + reply(200, "Ok"); + } else { + server.trace("Port REJECTED:" + address + " expected:" + + control.getInetAddress()); + reply(550, "Failed"); + } + } + break; + case 'R': + if ("RNFR".equals(command)) { + String fileName = getFileName(param); + if (FileUtils.exists(fileName)) { + renameFrom = fileName; + reply(350, "Ok"); + } else { + reply(450, "Not found"); + } + } else if ("RNTO".equals(command)) { + if (renameFrom == null) { + reply(503, "RNFR required"); + } else { + String fileOld = renameFrom; + String fileNew = getFileName(param); + boolean ok = false; + if (!readonly) { + try { + FileUtils.move(fileOld, fileNew); + reply(250, "Ok"); + ok = true; + } catch (Exception e) { + server.traceError(e); + } + } + if (!ok) { + reply(550, "Failed"); + } + } + } else if ("RETR".equals(command)) { + String fileName = getFileName(param); + if (FileUtils.exists(fileName) && !FileUtils.isDirectory(fileName)) { + reply(150, "Starting transfer"); + try { + data.send(fileName, restart); + reply(226, "Ok"); + } catch (IOException e) { + server.traceError(e); + reply(426, "Failed"); + } + restart = 0; + } else { + // Firefox compatibility + // (still not good) + processList(param, true); + // reply(426, "Not a file"); + } + } else if ("RMD".equals(command)) { + 
processRemoveDir(param); + } else if ("REST".equals(command)) { + try { + restart = Integer.parseInt(param); + reply(350, "Ok"); + } catch (NumberFormatException e) { + reply(500, "Invalid"); + } + } + break; + case 'S': + if ("SYST".equals(command)) { + reply(215, "UNIX Type: L8"); + } else if ("SITE".equals(command)) { + reply(500, "Not understood"); + } else if ("SIZE".equals(command)) { + param = getFileName(param); + if (FileUtils.exists(param) && !FileUtils.isDirectory(param)) { + reply(250, String.valueOf(FileUtils.size(param))); + } else { + reply(500, "Failed"); + } + } else if ("STOR".equals(command)) { + String fileName = getFileName(param); + if (!readonly && !FileUtils.exists(fileName) + || !FileUtils.isDirectory(fileName)) { + reply(150, "Starting transfer"); + try { + data.receive(fileName); + if (server.getAllowTask() && param.endsWith(FtpServer.TASK_SUFFIX)) { + server.startTask(fileName); + } + reply(226, "Ok"); + } catch (Exception e) { + server.traceError(e); + reply(426, "Failed"); + } + } else { + reply(550, "Failed"); + } + } else if ("STRU".equals(command)) { + if ("F".equals(StringUtils.toUpperEnglish(param))) { + reply(200, "Ok"); + } else { + reply(504, "Invalid"); + } + } + break; + case 'T': + if ("TYPE".equals(command)) { + param = StringUtils.toUpperEnglish(param); + if ("A".equals(param) || "A N".equals(param)) { + reply(200, "Ok"); + } else if ("I".equals(param) || "L 8".equals(param)) { + reply(200, "Ok"); + } else { + reply(500, "Invalid"); + } + } + break; + case 'X': + if ("XMKD".equals(command)) { + processMakeDir(param); + } else if ("XRMD".equals(command)) { + processRemoveDir(param); + } + break; + } + } + + private void processMakeDir(String param) { + String fileName = getFileName(param); + boolean ok = false; + if (!readonly) { + try { + FileUtils.createDirectories(fileName); + reply(257, StringUtils.quoteIdentifier(param) + " directory"); + ok = true; + } catch (Exception e) { + server.traceError(e); + } + } + if (!ok) { 
+ reply(500, "Failed"); + } + } + + private void processRemoveDir(String param) { + String fileName = getFileName(param); + if (!readonly && FileUtils.exists(fileName) + && FileUtils.isDirectory(fileName) + && FileUtils.tryDelete(fileName)) { + reply(250, "Ok"); + } else { + reply(500, "Failed"); + } + } + + private String getFileName(String file) { + return server.getFileName(file.startsWith("/") ? file : currentDir + file); + } + + private String getPath(String path) { + return path.startsWith("/") ? path : currentDir + path; + } + + private void processList(String param, boolean directories) throws IOException { + String directory = getFileName(param); + if (!FileUtils.exists(directory)) { + reply(450, "Directory does not exist"); + return; + } else if (!FileUtils.isDirectory(directory)) { + reply(450, "Not a directory"); + return; + } + String list = server.getDirectoryListing(directory, directories); + reply(150, "Starting transfer"); + server.trace(list); + // need to use the current locale (UTF-8 would be wrong for the Windows + // Explorer) + data.send(list.getBytes()); + reply(226, "Done"); + } + + private void reply(int code, String message) { + server.trace(code + " " + message); + output.print(code + " " + message + "\r\n"); + output.flush(); + replied = true; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpData.java b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpData.java new file mode 100644 index 0000000000000..4f00f31d2b133 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpData.java @@ -0,0 +1,145 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.dev.ftp.server; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.InetAddress; +import java.net.ServerSocket; +import java.net.Socket; +import org.h2.store.fs.FileUtils; +import org.h2.util.IOUtils; + +/** + * The implementation of the data channel of the FTP server. + */ +public class FtpData extends Thread { + + private final FtpServer server; + private final InetAddress address; + private ServerSocket serverSocket; + private volatile Socket socket; + private final boolean active; + private final int port; + + FtpData(FtpServer server, InetAddress address, ServerSocket serverSocket) { + this.server = server; + this.address = address; + this.serverSocket = serverSocket; + this.port = 0; + this.active = false; + } + + FtpData(FtpServer server, InetAddress address, int port) { + this.server = server; + this.address = address; + this.port = port; + this.active = true; + } + + @Override + public void run() { + try { + synchronized (this) { + Socket s = serverSocket.accept(); + if (s.getInetAddress().equals(address)) { + server.trace("Data connected:" + s.getInetAddress() + " expected:" + address); + socket = s; + notifyAll(); + } else { + server.trace("Data REJECTED:" + s.getInetAddress() + " expected:" + address); + close(); + } + } + } catch (IOException e) { + e.printStackTrace(); + } + } + + private void connect() throws IOException { + if (active) { + socket = new Socket(address, port); + } else { + waitUntilConnected(); + } + } + + private void waitUntilConnected() { + while (serverSocket != null && socket == null) { + try { + wait(); + } catch (InterruptedException e) { + // ignore + } + } + server.trace("connected"); + } + + /** + * Close the socket. + */ + void close() { + serverSocket = null; + socket = null; + } + + /** + * Read a file from a client. 
+ * + * @param fileName the target file name + */ + synchronized void receive(String fileName) throws IOException { + connect(); + try { + InputStream in = socket.getInputStream(); + OutputStream out = FileUtils.newOutputStream(fileName, false); + IOUtils.copy(in, out); + out.close(); + } finally { + socket.close(); + } + server.trace("closed"); + } + + /** + * Send a file to the client. This method waits until the client has + * connected. + * + * @param fileName the source file name + * @param skip the number of bytes to skip + */ + synchronized void send(String fileName, long skip) throws IOException { + connect(); + try { + OutputStream out = socket.getOutputStream(); + InputStream in = FileUtils.newInputStream(fileName); + IOUtils.skipFully(in, skip); + IOUtils.copy(in, out); + in.close(); + } finally { + socket.close(); + } + server.trace("closed"); + } + + /** + * Wait until the client has connected, and then send the data to it. + * + * @param data the data to send + */ + synchronized void send(byte[] data) throws IOException { + connect(); + try { + OutputStream out = socket.getOutputStream(); + out.write(data); + } finally { + socket.close(); + } + server.trace("closed"); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpEvent.java b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpEvent.java new file mode 100644 index 0000000000000..8c14dec56de74 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpEvent.java @@ -0,0 +1,48 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.ftp.server; + +/** + * Describes an FTP event. This class is used by the FtpEventListener. 
+ */ +public class FtpEvent { + private final FtpControl control; + private final String command; + private final String param; + + FtpEvent(FtpControl control, String command, String param) { + this.control = control; + this.command = command; + this.param = param; + } + + /** + * Get the FTP command. Example: RETR + * + * @return the command + */ + public String getCommand() { + return command; + } + + /** + * Get the FTP control object. + * + * @return the control object + */ + public FtpControl getControl() { + return control; + } + + /** + * Get the parameter of the FTP command (if any). + * + * @return the parameter + */ + public String getParam() { + return param; + } +} diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpEventListener.java b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpEventListener.java new file mode 100644 index 0000000000000..004499f76d5de --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpEventListener.java @@ -0,0 +1,34 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.ftp.server; + +/** + * Event listener for the FTP Server. + */ +public interface FtpEventListener { + + /** + * Called before the given command is processed. + * + * @param event the event + */ + void beforeCommand(FtpEvent event); + + /** + * Called after the command has been processed. + * + * @param event the event + */ + void afterCommand(FtpEvent event); + + /** + * Called when an unsupported command is processed. + * This method is called after beforeCommand. 
+ * + * @param event the event + */ + void onUnsupportedCommand(FtpEvent event); +} diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpServer.java b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpServer.java new file mode 100644 index 0000000000000..2554be0481797 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/server/FtpServer.java @@ -0,0 +1,572 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.ftp.server; + +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.ServerSocket; +import java.net.Socket; +import java.sql.SQLException; +import java.text.SimpleDateFormat; +import java.util.Date; +import java.util.HashMap; +import java.util.Locale; +import java.util.Properties; +import org.h2.server.Service; +import org.h2.store.fs.FileUtils; +import org.h2.tools.Server; +import org.h2.util.IOUtils; +import org.h2.util.NetUtils; +import org.h2.util.SortedProperties; +import org.h2.util.Tool; + +/** + * Small FTP Server. Intended for ad-hoc networks in a secure environment. + * Remote connections are possible. + * See also http://cr.yp.to/ftp.html http://www.ftpguide.com/ + */ +public class FtpServer extends Tool implements Service { + + /** + * The default port to use for the FTP server. + * This value is also in the documentation and in the Server javadoc. + */ + public static final int DEFAULT_PORT = 8021; + + /** + * The default root directory name used by the FTP server. + * This value is also in the documentation and in the Server javadoc. + */ + public static final String DEFAULT_ROOT = "ftp"; + + /** + * The default user name that is allowed to read data. + * This value is also in the documentation and in the Server javadoc. 
+ */ + public static final String DEFAULT_READ = "guest"; + + /** + * The default user name that is allowed to read and write data. + * This value is also in the documentation and in the Server javadoc. + */ + public static final String DEFAULT_WRITE = "sa"; + + /** + * The default password of the user that is allowed to read and write data. + * This value is also in the documentation and in the Server javadoc. + */ + public static final String DEFAULT_WRITE_PASSWORD = "sa"; + + static final String TASK_SUFFIX = ".task"; + + private static final int MAX_CONNECTION_COUNT = 100; + + private ServerSocket serverSocket; + private int port = DEFAULT_PORT; + private int openConnectionCount; + + private final SimpleDateFormat dateFormatNew = new SimpleDateFormat( + "MMM dd HH:mm", Locale.ENGLISH); + private final SimpleDateFormat dateFormatOld = new SimpleDateFormat( + "MMM dd yyyy", Locale.ENGLISH); + private final SimpleDateFormat dateFormat = new SimpleDateFormat( + "yyyyMMddHHmmss"); + + private String root = DEFAULT_ROOT; + private String writeUserName = DEFAULT_WRITE, + writePassword = DEFAULT_WRITE_PASSWORD; + private String readUserName = DEFAULT_READ; + private final HashMap tasks = new HashMap<>(); + + private boolean trace; + private boolean allowTask; + + private FtpEventListener eventListener; + + + /** + * When running without options, -tcp, -web, -browser, + * and -pg are started.
+ * Options are case sensitive. Supported options are:
+ * <table>
+ * <tr><td>[-help] or [-?]</td><td>Print the list of options</td></tr>
+ * <tr><td>[-web]</td><td>Start the web server with the H2 Console</td></tr>
+ * <tr><td>[-webAllowOthers]</td><td>Allow other computers to connect</td></tr>
+ * <tr><td>[-webPort &lt;port&gt;]</td><td>The port (default: 8082)</td></tr>
+ * <tr><td>[-webSSL]</td><td>Use encrypted (HTTPS) connections</td></tr>
+ * <tr><td>[-browser]</td><td>Start a browser and open a page to login to the web server</td></tr>
+ * <tr><td>[-tcp]</td><td>Start the TCP server</td></tr>
+ * <tr><td>[-tcpAllowOthers]</td><td>Allow other computers to connect</td></tr>
+ * <tr><td>[-tcpPort &lt;port&gt;]</td><td>The port (default: 9092)</td></tr>
+ * <tr><td>[-tcpSSL]</td><td>Use encrypted (SSL) connections</td></tr>
+ * <tr><td>[-tcpPassword &lt;pwd&gt;]</td><td>The password for shutting down a TCP server</td></tr>
+ * <tr><td>[-tcpShutdown "&lt;url&gt;"]</td><td>Stop the TCP server; example: tcp://localhost:9094</td></tr>
+ * <tr><td>[-tcpShutdownForce]</td><td>Do not wait until all connections are closed</td></tr>
+ * <tr><td>[-pg]</td><td>Start the PG server</td></tr>
+ * <tr><td>[-pgAllowOthers]</td><td>Allow other computers to connect</td></tr>
+ * <tr><td>[-pgPort &lt;port&gt;]</td><td>The port (default: 5435)</td></tr>
+ * <tr><td>[-ftp]</td><td>Start the FTP server</td></tr>
+ * <tr><td>[-ftpPort &lt;port&gt;]</td><td>The port (default: 8021)</td></tr>
+ * <tr><td>[-ftpDir &lt;dir&gt;]</td><td>The base directory (default: ftp)</td></tr>
+ * <tr><td>[-ftpRead &lt;user&gt;]</td><td>The user name for reading (default: guest)</td></tr>
+ * <tr><td>[-ftpWrite &lt;user&gt;]</td><td>The user name for writing (default: sa)</td></tr>
+ * <tr><td>[-ftpWritePassword &lt;p&gt;]</td><td>The write password (default: sa)</td></tr>
+ * <tr><td>[-baseDir &lt;dir&gt;]</td><td>The base directory for H2 databases; for all servers</td></tr>
+ * <tr><td>[-ifExists]</td><td>Only existing databases may be opened; for all servers</td></tr>
+ * <tr><td>[-trace]</td><td>Print additional trace information; for all servers</td></tr>
+ * </table>
    + * @h2.resource + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new FtpServer().runTool(args); + } + + @Override + public void runTool(String... args) throws SQLException { + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg == null) { + continue; + } else if ("-?".equals(arg) || "-help".equals(arg)) { + showUsage(); + return; + } else if (arg.startsWith("-ftp")) { + if ("-ftpPort".equals(arg)) { + i++; + } else if ("-ftpDir".equals(arg)) { + i++; + } else if ("-ftpRead".equals(arg)) { + i++; + } else if ("-ftpWrite".equals(arg)) { + i++; + } else if ("-ftpWritePassword".equals(arg)) { + i++; + } else if ("-ftpTask".equals(arg)) { + // no parameters + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } else if ("-trace".equals(arg)) { + // no parameters + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + Server server = new Server(this, args); + server.start(); + out.println(server.getStatus()); + } + + @Override + public void listen() { + try { + while (serverSocket != null) { + Socket s = serverSocket.accept(); + boolean stop; + synchronized (this) { + openConnectionCount++; + stop = openConnectionCount > MAX_CONNECTION_COUNT; + } + FtpControl c = new FtpControl(s, this, stop); + c.start(); + } + } catch (Exception e) { + traceError(e); + } + } + + /** + * Close a connection. The open connection count will be decremented. + */ + void closeConnection() { + synchronized (this) { + openConnectionCount--; + } + } + + /** + * Create a socket to listen for incoming data connections. + * + * @return the server socket + */ + static ServerSocket createDataSocket() { + return NetUtils.createServerSocket(0, false); + } + + private void appendFile(StringBuilder buff, String fileName) { + buff.append(FileUtils.isDirectory(fileName) ? 'd' : '-'); + buff.append('r'); + buff.append(FileUtils.canWrite(fileName) ? 
'w' : '-'); + buff.append("------- 1 owner group "); + String size = String.valueOf(FileUtils.size(fileName)); + for (int i = size.length(); i < 15; i++) { + buff.append(' '); + } + buff.append(size); + buff.append(' '); + Date now = new Date(), mod = new Date(FileUtils.lastModified(fileName)); + String date; + if (mod.after(now) + || Math.abs((now.getTime() - mod.getTime()) / + 1000 / 60 / 60 / 24) > 180) { + synchronized (dateFormatOld) { + date = dateFormatOld.format(mod); + } + } else { + synchronized (dateFormatNew) { + date = dateFormatNew.format(mod); + } + } + buff.append(date); + buff.append(' '); + buff.append(FileUtils.getName(fileName)); + buff.append("\r\n"); + } + + /** + * Get the last modified date of a date and format it as required by the FTP + * protocol. + * + * @param fileName the file name + * @return the last modified date of this file + */ + String formatLastModified(String fileName) { + synchronized (dateFormat) { + return dateFormat.format(new Date(FileUtils.lastModified(fileName))); + } + } + + /** + * Get the full file name of this relative path. + * + * @param path the relative path + * @return the file name + */ + String getFileName(String path) { + return root + getPath(path); + } + + private String getPath(String path) { + if (path.indexOf("..") > 0) { + path = "/"; + } + while (path.startsWith("/") && root.endsWith("/")) { + path = path.substring(1); + } + while (path.endsWith("/")) { + path = path.substring(0, path.length() - 1); + } + trace("path: " + path); + return path; + } + + /** + * Get the directory listing for this directory. 
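The `appendFile` listing above switches between two date formats: files modified within roughly the last 180 days show the time of day, while older (or future-dated) files show the year, mimicking Unix `ls -l`. A stand-alone sketch of that cutoff rule (class and method names are illustrative, not H2's):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class ListDate {

    private static final SimpleDateFormat RECENT =
            new SimpleDateFormat("MMM dd HH:mm", Locale.ENGLISH);
    private static final SimpleDateFormat OLD =
            new SimpleDateFormat("MMM dd yyyy", Locale.ENGLISH);

    // Recent files show time of day; files older than ~180 days, or dated
    // in the future, show the year instead, as Unix `ls -l` does.
    static String format(long modified, long now) {
        long ageDays = Math.abs(now - modified) / 1000 / 60 / 60 / 24;
        Date mod = new Date(modified);
        return (modified > now || ageDays > 180)
                ? OLD.format(mod) : RECENT.format(mod);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long yearAgo = now - 365L * 24 * 60 * 60 * 1000;
        System.out.println(format(now, now));      // e.g. "Jun 01 12:30"
        System.out.println(format(yearAgo, now));  // e.g. "Jun 01 2024"
    }
}
```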
+ * + * @param directory the directory to list + * @param listDirectories if sub-directories should be listed + * @return the list + */ + String getDirectoryListing(String directory, boolean listDirectories) { + StringBuilder buff = new StringBuilder(); + for (String fileName : FileUtils.newDirectoryStream(directory)) { + if (!FileUtils.isDirectory(fileName) + || (FileUtils.isDirectory(fileName) && listDirectories)) { + appendFile(buff, fileName); + } + } + return buff.toString(); + } + + /** + * Check if this user name is allowed to write. + * + * @param userName the user name + * @param password the password + * @return true if this user may write + */ + boolean checkUserPasswordWrite(String userName, String password) { + return userName.equals(this.writeUserName) + && password.equals(this.writePassword); + } + + /** + * Check if this user name is allowed to read. + * + * @param userName the user name + * @return true if this user may read + */ + boolean checkUserPasswordReadOnly(String userName) { + return userName.equals(this.readUserName); + } + + @Override + public void init(String... 
args) { + for (int i = 0; args != null && i < args.length; i++) { + String a = args[i]; + if ("-ftpPort".equals(a)) { + port = Integer.decode(args[++i]); + } else if ("-ftpDir".equals(a)) { + root = FileUtils.toRealPath(args[++i]); + } else if ("-ftpRead".equals(a)) { + readUserName = args[++i]; + } else if ("-ftpWrite".equals(a)) { + writeUserName = args[++i]; + } else if ("-ftpWritePassword".equals(a)) { + writePassword = args[++i]; + } else if ("-trace".equals(a)) { + trace = true; + } else if ("-ftpTask".equals(a)) { + allowTask = true; + } + } + } + + @Override + public String getURL() { + return "ftp://" + NetUtils.getLocalAddress() + ":" + port; + } + + @Override + public int getPort() { + return port; + } + + @Override + public void start() { + root = FileUtils.toRealPath(root); + FileUtils.createDirectories(root); + serverSocket = NetUtils.createServerSocket(port, false); + port = serverSocket.getLocalPort(); + } + + @Override + public void stop() { + if (serverSocket == null) { + return; + } + try { + serverSocket.close(); + } catch (IOException e) { + traceError(e); + } + serverSocket = null; + } + + @Override + public boolean isRunning(boolean traceError) { + if (serverSocket == null) { + return false; + } + try { + Socket s = NetUtils.createLoopbackSocket(port, false); + s.close(); + return true; + } catch (IOException e) { + if (traceError) { + traceError(e); + } + return false; + } + } + + @Override + public boolean getAllowOthers() { + return true; + } + + @Override + public String getType() { + return "FTP"; + } + + @Override + public String getName() { + return "H2 FTP Server"; + } + + /** + * Write trace information if trace is enabled. + * + * @param s the message to write + */ + void trace(String s) { + if (trace) { + System.out.println(s); + } + } + + /** + * Write the stack trace if trace is enabled. 
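The `isRunning` check above probes the server by opening a loopback socket to its own port and closing it again. The same pattern, reduced to plain JDK sockets (names here are illustrative; H2 wraps this in its `NetUtils` helper):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class PortCheck {

    // Probe a local TCP port: try to open a loopback connection and
    // close it immediately; failure to connect means nothing is listening.
    static boolean isListening(int port) {
        try (Socket s = new Socket(InetAddress.getLoopbackAddress(), port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {  // 0 = any free port
            System.out.println(isListening(server.getLocalPort()));
        }
    }
}
```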
+ * + * @param e the exception + */ + void traceError(Throwable e) { + if (trace) { + e.printStackTrace(); + } + } + + boolean getAllowTask() { + return allowTask; + } + + /** + * Start a task. + * + * @param path the name of the task file + */ + void startTask(String path) throws IOException { + stopTask(path); + if (path.endsWith(".zip.task")) { + trace("expand: " + path); + Process p = Runtime.getRuntime().exec("jar -xf " + path, null, new File(root)); + new StreamRedirect(path, p.getInputStream(), null).start(); + return; + } + Properties prop = SortedProperties.loadProperties(path); + String command = prop.getProperty("command"); + String outFile = path.substring(0, path.length() - TASK_SUFFIX.length()); + String errorFile = root + "/" + + prop.getProperty("error", outFile + ".err.txt"); + String outputFile = root + "/" + + prop.getProperty("output", outFile + ".out.txt"); + trace("start process: " + path + " / " + command); + Process p = Runtime.getRuntime().exec(command, null, new File(root)); + new StreamRedirect(path, p.getErrorStream(), errorFile).start(); + new StreamRedirect(path, p.getInputStream(), outputFile).start(); + tasks.put(path, p); + } + + /** + * This class re-directs an input stream to a file. 
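`startTask` above reads a `.task` file as an ordinary properties file: a mandatory `command` key plus optional `error` and `output` keys whose defaults are derived from the task name. A small sketch of that lookup-with-defaults pattern (the task name and command are made up for illustration):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class TaskFile {

    public static void main(String[] args) throws IOException {
        // A .task file is a plain java.util.Properties file; the `error`
        // and `output` keys fall back to names derived from the task name.
        Properties prop = new Properties();
        prop.load(new StringReader("command=jar -tf archive.zip\n"));
        String taskName = "backup";  // the file name without ".task"
        System.out.println(prop.getProperty("command"));
        System.out.println(prop.getProperty("error", taskName + ".err.txt"));
        System.out.println(prop.getProperty("output", taskName + ".out.txt"));
    }
}
```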
+ */ + private static class StreamRedirect extends Thread { + private final InputStream in; + private OutputStream out; + private String outFile; + private final String processFile; + + StreamRedirect(String processFile, InputStream in, String outFile) { + this.processFile = processFile; + this.in = in; + this.outFile = outFile; + } + + private void openOutput() { + if (outFile != null) { + try { + this.out = FileUtils.newOutputStream(outFile, false); + } catch (Exception e) { + // ignore + } + outFile = null; + } + } + + @Override + public void run() { + while (true) { + try { + int x = in.read(); + if (x < 0) { + break; + } + openOutput(); + if (out != null) { + out.write(x); + } + } catch (IOException e) { + // ignore + } + } + IOUtils.closeSilently(out); + IOUtils.closeSilently(in); + new File(processFile).delete(); + } + } + + /** + * Stop a running task. + * + * @param processName the task name + */ + void stopTask(String processName) { + trace("kill process: " + processName); + Process p = tasks.remove(processName); + if (p == null) { + return; + } + p.destroy(); + } + + /** + * Set the event listener. Only one listener can be registered. + * + * @param eventListener the new listener, or null to de-register + */ + public void setEventListener(FtpEventListener eventListener) { + this.eventListener = eventListener; + } + + /** + * Get the registered event listener. + * + * @return the event listener, or null if non is registered + */ + FtpEventListener getEventListener() { + return eventListener; + } + + /** + * Create a new FTP server, but does not start it yet. Example: + * + *
    +     * Server server = FtpServer.createFtpServer(null).start();
    +     * 
    + * + * @param args the argument list + * @return the server + */ + public static Server createFtpServer(String... args) throws SQLException { + return new Server(new FtpServer(), args); + } + + @Override + public boolean isDaemon() { + return false; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/ftp/server/package.html b/modules/h2/src/test/tools/org/h2/dev/ftp/server/package.html new file mode 100644 index 0000000000000..21bef4f4458b3 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/ftp/server/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A simple FTP server. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/hash/IntPerfectHash.java b/modules/h2/src/test/tools/org/h2/dev/hash/IntPerfectHash.java new file mode 100644 index 0000000000000..9449fe777ac48 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/hash/IntPerfectHash.java @@ -0,0 +1,409 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.hash; + +import java.util.ArrayList; +import java.util.Arrays; + +/** + * A minimum perfect hash function tool. It needs about 2.2 bits per key. + */ +public class IntPerfectHash { + + /** + * Large buckets are typically divided into buckets of this size. + */ + private static final int DIVIDE = 6; + + /** + * The maximum size of a small bucket (one that is not further split if + * possible). + */ + private static final int MAX_SIZE = 12; + + /** + * The maximum offset for hash functions of small buckets. At most that many + * hash functions are tried for the given size. + */ + private static final int[] MAX_OFFSETS = { 0, 0, 8, 18, 47, 123, 319, 831, 2162, + 5622, 14617, 38006, 98815 }; + + /** + * The output value to split the bucket into many (more than 2) smaller + * buckets. + */ + private static final int SPLIT_MANY = 3; + + /** + * The minimum output value for a small bucket of a given size. + */ + private static final int[] SIZE_OFFSETS = new int[MAX_OFFSETS.length + 1]; + + static { + int last = SPLIT_MANY + 1; + for (int i = 0; i < MAX_OFFSETS.length; i++) { + SIZE_OFFSETS[i] = last; + last += MAX_OFFSETS[i]; + } + SIZE_OFFSETS[SIZE_OFFSETS.length - 1] = last; + } + + /** + * The description of the hash function. Used for calculating the hash of a + * key. + */ + private final byte[] data; + + /** + * Create a hash object to convert keys to hashes. 
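The `SIZE_OFFSETS` table turns each (bucket size, hash offset) pair into a single integer: the static initializer accumulates the `MAX_OFFSETS` counts, so `SIZE_OFFSETS[size] + offset` is unique and invertible. A worked round trip using the constants from this class:

```java
public class SizeOffset {

    // Constants copied from IntPerfectHash: at most MAX_OFFSETS[size]
    // hash functions are tried for a small bucket of that size.
    static final int SPLIT_MANY = 3;
    static final int[] MAX_OFFSETS = { 0, 0, 8, 18, 47, 123, 319, 831, 2162,
            5622, 14617, 38006, 98815 };
    static final int[] SIZE_OFFSETS = new int[MAX_OFFSETS.length + 1];

    static {
        int last = SPLIT_MANY + 1;
        for (int i = 0; i < MAX_OFFSETS.length; i++) {
            SIZE_OFFSETS[i] = last;     // cumulative start for this size
            last += MAX_OFFSETS[i];
        }
        SIZE_OFFSETS[SIZE_OFFSETS.length - 1] = last;
    }

    // Encode a (size, offset) pair into one integer, as the generator does.
    static int encode(int size, int offset) {
        return SIZE_OFFSETS[size] + offset;
    }

    // Recover the bucket size: the largest i with SIZE_OFFSETS[i] <= n.
    static int decodeSize(int n) {
        for (int i = 0; i < SIZE_OFFSETS.length; i++) {
            if (n < SIZE_OFFSETS[i]) {
                return i - 1;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        int n = encode(3, 5);           // bucket of 3 keys, 6th hash offset
        System.out.println(n);
        System.out.println(decodeSize(n));
        System.out.println(n - SIZE_OFFSETS[decodeSize(n)]);
    }
}
```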
+ * + * @param data the data returned by the generate method + */ + public IntPerfectHash(byte[] data) { + this.data = data; + } + + /** + * Get the hash function description. + * + * @return the data + */ + public byte[] getData() { + return data; + } + + /** + * Calculate the hash value for the given key. + * + * @param x the key + * @return the hash value + */ + public int get(int x) { + return get(0, x, 0); + } + + /** + * Get the hash value for the given key, starting at a certain position and + * level. + * + * @param pos the start position + * @param x the key + * @param level the level + * @return the hash value + */ + private int get(int pos, int x, int level) { + int n = readVarInt(data, pos); + if (n < 2) { + return 0; + } else if (n > SPLIT_MANY) { + int size = getSize(n); + int offset = getOffset(n, size); + return hash(x, level, offset, size); + } + pos++; + int split; + if (n == SPLIT_MANY) { + split = readVarInt(data, pos); + pos += getVarIntLength(data, pos); + } else { + split = n; + } + int h = hash(x, level, 0, split); + int s; + int start = pos; + for (int i = 0; i < h; i++) { + pos = getNextPos(pos); + } + s = getSizeSum(start, pos); + return s + get(pos, x, level + 1); + } + + /** + * Get the position of the next sibling. + * + * @param pos the position of this branch + * @return the position of the next sibling + */ + private int getNextPos(int pos) { + int n = readVarInt(data, pos); + pos += getVarIntLength(data, pos); + if (n < 2 || n > SPLIT_MANY) { + return pos; + } + int split; + if (n == SPLIT_MANY) { + split = readVarInt(data, pos); + pos += getVarIntLength(data, pos); + } else { + split = n; + } + for (int i = 0; i < split; i++) { + pos = getNextPos(pos); + } + return pos; + } + + /** + * The sum of the sizes between the start and end position. 
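Both small-bucket probing and bucket splitting rely on the supplemental hash defined further down in this class: a multiply-xorshift mixer over the key, level, and offset, reduced modulo the bucket size. Extracted as a stand-alone sketch, showing how varying the offset yields different functions from the family:

```java
public class SupplementalHash {

    // Mix key, level and offset, then reduce modulo the bucket size.
    // Changing `offset` selects a different function from the family.
    static int hash(int x, int level, int offset, int size) {
        x += level + offset * 32;
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        x = (x >>> 16) ^ x;
        return Math.abs(x % size);
    }

    public static void main(String[] args) {
        StringBuilder b = new StringBuilder();
        for (int offset = 0; offset < 4; offset++) {
            b.append(hash(42, 0, offset, 6)).append(' ');
        }
        System.out.println(b.toString().trim());
    }
}
```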
+ * + * @param start the start position + * @param end the end position (excluding) + * @return the sizes + */ + private int getSizeSum(int start, int end) { + int s = 0; + for (int pos = start; pos < end;) { + int n = readVarInt(data, pos); + pos += getVarIntLength(data, pos); + if (n < 2) { + s += n; + } else if (n > SPLIT_MANY) { + s += getSize(n); + } else if (n == SPLIT_MANY) { + pos += getVarIntLength(data, pos); + } + } + return s; + } + + private static void writeSizeOffset(ByteStream out, int size, + int offset) { + writeVarInt(out, SIZE_OFFSETS[size] + offset); + } + + private static int getOffset(int n, int size) { + return n - SIZE_OFFSETS[size]; + } + + private static int getSize(int n) { + for (int i = 0; i < SIZE_OFFSETS.length; i++) { + if (n < SIZE_OFFSETS[i]) { + return i - 1; + } + } + return 0; + } + + /** + * Generate the minimal perfect hash function data from the given list. + * + * @param list the data + * @return the hash function description + */ + public static byte[] generate(ArrayList list) { + ByteStream out = new ByteStream(); + generate(list, 0, out); + return out.toByteArray(); + } + + private static void generate(ArrayList list, int level, ByteStream out) { + int size = list.size(); + if (size <= 1) { + out.write((byte) size); + return; + } + if (level > 32) { + throw new IllegalStateException("Too many recursions; " + + " incorrect universal hash function?"); + } + if (size <= MAX_SIZE) { + int maxOffset = MAX_OFFSETS[size]; + int testSize = size; + nextOffset: + for (int offset = 0; offset < maxOffset; offset++) { + int bits = 0; + for (int i = 0; i < size; i++) { + int x = list.get(i); + int h = hash(x, level, offset, testSize); + if ((bits & (1 << h)) != 0) { + continue nextOffset; + } + bits |= 1 << h; + } + writeSizeOffset(out, size, offset); + return; + } + } + int split; + if (size > 57 * DIVIDE) { + split = size / (36 * DIVIDE); + } else { + split = (size - 47) / DIVIDE; + } + split = Math.max(2, split); + ArrayList> lists 
= new ArrayList<>(split); + for (int i = 0; i < split; i++) { + lists.add(new ArrayList(size / split)); + } + for (int x : list) { + ArrayList l = lists.get(hash(x, level, 0, split)); + l.add(x); + } + if (split >= SPLIT_MANY) { + out.write((byte) SPLIT_MANY); + } + writeVarInt(out, split); + list.clear(); + list.trimToSize(); + for (ArrayList s2 : lists) { + generate(s2, level + 1, out); + } + } + + private static int hash(int x, int level, int offset, int size) { + x += level + offset * 32; + x = ((x >>> 16) ^ x) * 0x45d9f3b; + x = ((x >>> 16) ^ x) * 0x45d9f3b; + x = (x >>> 16) ^ x; + return Math.abs(x % size); + } + + private static int writeVarInt(ByteStream out, int x) { + int len = 0; + while ((x & ~0x7f) != 0) { + out.write((byte) (0x80 | (x & 0x7f))); + x >>>= 7; + len++; + } + out.write((byte) x); + return ++len; + } + + private static int readVarInt(byte[] d, int pos) { + int x = d[pos++]; + if (x >= 0) { + return x; + } + x &= 0x7f; + for (int s = 7; s < 64; s += 7) { + int b = d[pos++]; + x |= (b & 0x7f) << s; + if (b >= 0) { + break; + } + } + return x; + } + + private static int getVarIntLength(byte[] d, int pos) { + int x = d[pos++]; + if (x >= 0) { + return 1; + } + int len = 2; + for (int s = 7; s < 64; s += 7) { + int b = d[pos++]; + if (b >= 0) { + break; + } + len++; + } + return len; + } + + /** + * A stream of bytes. + */ + static class ByteStream { + + private byte[] data; + private int pos; + + ByteStream() { + this.data = new byte[16]; + } + + ByteStream(byte[] data) { + this.data = data; + } + + /** + * Read a byte. + * + * @return the byte, or -1. + */ + int read() { + return pos < data.length ? (data[pos++] & 255) : -1; + } + + /** + * Write a byte. + * + * @param value the byte + */ + void write(byte value) { + if (pos >= data.length) { + data = Arrays.copyOf(data, data.length * 2); + } + data[pos++] = value; + } + + /** + * Get the byte array. 
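The hash description is serialized with the variable-length integer coding shown above: the low seven bits of each byte carry data and the high bit flags a continuation. A self-contained encode/decode round trip of that scheme:

```java
import java.io.ByteArrayOutputStream;

public class VarInt {

    // 7 bits of payload per byte; the high bit means "more bytes follow".
    static void write(ByteArrayOutputStream out, int x) {
        while ((x & ~0x7f) != 0) {
            out.write(0x80 | (x & 0x7f));
            x >>>= 7;
        }
        out.write(x);
    }

    static int read(byte[] d, int pos) {
        int x = d[pos++];
        if (x >= 0) {
            return x;                   // single-byte value (0..127)
        }
        x &= 0x7f;
        for (int s = 7; s < 32; s += 7) {
            int b = d[pos++];
            x |= (b & 0x7f) << s;
            if (b >= 0) {
                break;                  // continuation bit not set: done
            }
        }
        return x;
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        write(out, 300);
        byte[] d = out.toByteArray();
        System.out.println(d.length);   // 2 (values below 128 take one byte)
        System.out.println(read(d, 0)); // 300
    }
}
```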
+ * + * @return the byte array + */ + byte[] toByteArray() { + return Arrays.copyOf(data, pos); + } + + } + + /** + * A helper class for bit arrays. + */ + public static class BitArray { + + /** + * Set a bit in the array. + * + * @param data the array + * @param x the bit index + * @param value the new value + * @return the bit array (if the passed one was too small) + */ + public static byte[] setBit(byte[] data, int x, boolean value) { + int pos = x / 8; + if (pos >= data.length) { + data = Arrays.copyOf(data, pos + 1); + } + if (value) { + data[pos] |= 1 << (x & 7); + } else { + data[pos] &= 255 - (1 << (x & 7)); + } + return data; + } + + /** + * Get a bit in a bit array. + * + * @param data the array + * @param x the bit index + * @return the value + */ + public static boolean getBit(byte[] data, int x) { + return (data[x / 8] & (1 << (x & 7))) != 0; + } + + /** + * Count the number of set bits. + * + * @param data the array + * @return the number of set bits + */ + public static int countBits(byte[] data) { + int count = 0; + for (byte x : data) { + count += Integer.bitCount(x & 255); + } + return count; + } + + + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/hash/MinimalPerfectHash.java b/modules/h2/src/test/tools/org/h2/dev/hash/MinimalPerfectHash.java new file mode 100644 index 0000000000000..d1eb96f58d94d --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/hash/MinimalPerfectHash.java @@ -0,0 +1,785 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
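The `BitArray` helpers above implement a grow-on-write bit set over a plain `byte[]`. A compact usage sketch of the same set/count logic:

```java
import java.util.Arrays;

public class Bits {

    // Set bit x, growing the backing array on demand (as BitArray.setBit does).
    static byte[] setBit(byte[] data, int x, boolean value) {
        int pos = x / 8;
        if (pos >= data.length) {
            data = Arrays.copyOf(data, pos + 1);
        }
        if (value) {
            data[pos] |= 1 << (x & 7);
        } else {
            data[pos] &= ~(1 << (x & 7));
        }
        return data;
    }

    // Population count over the whole array.
    static int countBits(byte[] data) {
        int count = 0;
        for (byte b : data) {
            count += Integer.bitCount(b & 255);
        }
        return count;
    }

    public static void main(String[] args) {
        byte[] data = new byte[0];
        data = setBit(data, 3, true);        // grows to 1 byte
        data = setBit(data, 20, true);       // grows to 3 bytes
        System.out.println(data.length);     // 3 (bit 20 lives in byte 2)
        System.out.println(countBits(data)); // 2
    }
}
```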
+ * Initial Developer: H2 Group + */ +package org.h2.dev.hash; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.security.SecureRandom; +import java.util.ArrayList; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; +import java.util.zip.Deflater; +import java.util.zip.Inflater; + +/** + * A minimal perfect hash function tool. It needs about 1.98 bits per key. + *
+ * <p>
    + * The algorithm is recursive: sets that contain no or only one entry are not + * processed as no conflicts are possible. For sets that contain between 2 and + * 12 entries, a number of hash functions are tested to check if they can store + * the data without conflict. If no function was found, and for larger sets, the + * set is split into a (possibly high) number of smaller set, which are + * processed recursively. The average size of a top-level bucket is about 216 + * entries, and the maximum recursion level is typically 5. + *
+ * <p>
    + * At the end of the generation process, the data is compressed using a general + * purpose compression tool (Deflate / Huffman coding) down to 2.0 bits per key. + * The uncompressed data is around 2.2 bits per key. With arithmetic coding, + * about 1.9 bits per key are needed. Generating the hash function takes about + * 2.5 seconds per million keys with 8 cores (multithreaded). The algorithm + * automatically scales with the number of available CPUs (using as many threads + * as there are processors). At the expense of processing time, a lower number + * of bits per key would be possible (for example 1.84 bits per key with 100000 + * keys, using 32 seconds generation time, with Huffman coding). + *
+ * <p>
    + * The memory usage to efficiently calculate hash values is around 2.5 bits per + * key (the space needed for the uncompressed description, plus 8 bytes for + * every top-level bucket). + *
+ * <p>
    + * At each level, only one user defined hash function per object is called + * (about 3 hash functions per key). The result is further processed using a + * supplemental hash function, so that the default user defined hash function + * doesn't need to be sophisticated (it doesn't need to be non-linear, have a + * good avalanche effect, or generate random looking data; it just should + * produce few conflicts if possible). + *
+ * <p>
    + * To protect against hash flooding and similar attacks, a secure random seed + * per hash table is used. For further protection, cryptographically secure + * functions such as SipHash or SHA-256 can be used. However, such (slower) + * functions only need to be used if regular hash functions produce too many + * conflicts. This case is detected when generating the perfect hash function, + * by checking if there are too many conflicts (more than 2160 entries in one + * top-level bucket). In this case, the next hash function is used. That way, in + * the normal case, where no attack is happening, only fast, but less secure, + * hash functions are called. It is fine to use the regular hashCode method as + * the level 0 hash function. However, just relying on the regular hashCode + * method does not work if the key has more than 32 bits, because the risk of + * collisions is too high. Incorrect universal hash functions are detected (an + * exception is thrown if there are more than 32 recursion levels). + *
+ * <p>
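The seed-and-level scheme described above implies a family of user-supplied hash functions: the generator later in this file calls `hash.hashCode(key, level, seed)`. A minimal sketch of such a family for strings — the interface body below is assumed from those call sites, and the mixing strategy is an illustrative choice, not H2's implementation:

```java
public class StringHashDemo {

    // Shape inferred from the calls made by the generator:
    // a family of hash functions selected by (index, seed).
    interface UniversalHash<K> {
        int hashCode(K o, int index, int seed);
    }

    // A simple universal-style string hash: vary the multiplier with the
    // function index and fold in the per-table random seed.
    static final UniversalHash<String> HASH = (s, index, seed) -> {
        int h = seed ^ index;
        for (int i = 0; i < s.length(); i++) {
            h = h * (31 + 2 * index + 1) + s.charAt(i);
        }
        return h;
    };

    public static void main(String[] args) {
        int seed = 12345;  // per-table secure random seed in the real code
        // Different indexes give independent-looking values for the same key:
        System.out.println(
                HASH.hashCode("ftp", 0, seed) != HASH.hashCode("ftp", 1, seed));
    }
}
```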
    + * In-place updating of the hash table is not implemented but possible in + * theory, by patching the hash function description. With a small change, + * non-minimal perfect hash functions can be calculated (for example 1.22 bits + * per key at a fill rate of 81%). + * + * @param the key type + */ +public class MinimalPerfectHash { + + /** + * Large buckets are typically divided into buckets of this size. + */ + private static final int DIVIDE = 6; + + /** + * For sets larger than this, instead of trying to map then uniquely to a + * set of the same size, the size of the set is incremented by one. This + * reduces the time to find a mapping, but the index of the hole also needs + * to be stored, which increases the space usage. + */ + private static final int SPEEDUP = 11; + + /** + * The maximum size of a small bucket (one that is not further split if + * possible). + */ + private static final int MAX_SIZE = 14; + + /** + * The maximum offset for hash functions of small buckets. At most that many + * hash functions are tried for the given size. + */ + private static final int[] MAX_OFFSETS = { 0, 0, 8, 18, 47, 123, 319, 831, 2162, + 5622, 14617, 38006, 98815, 256920, 667993 }; + + /** + * The output value to split the bucket into many (more than 2) smaller + * buckets. + */ + private static final int SPLIT_MANY = 3; + + /** + * The minimum output value for a small bucket of a given size. + */ + private static final int[] SIZE_OFFSETS = new int[MAX_OFFSETS.length + 1]; + + /** + * A secure random generator. + */ + private static final SecureRandom RANDOM = new SecureRandom(); + + static { + for (int i = SPEEDUP; i < MAX_OFFSETS.length; i++) { + MAX_OFFSETS[i] = (int) (MAX_OFFSETS[i] * 2.5); + } + int last = SPLIT_MANY + 1; + for (int i = 0; i < MAX_OFFSETS.length; i++) { + SIZE_OFFSETS[i] = last; + last += MAX_OFFSETS[i]; + } + SIZE_OFFSETS[SIZE_OFFSETS.length - 1] = last; + } + + /** + * The universal hash function. 
+ */ + private final UniversalHash hash; + + /** + * The description of the hash function. Used for calculating the hash of a + * key. + */ + private final byte[] data; + + /** + * The random seed. + */ + private final int seed; + + /** + * The size up to the given root-level bucket in the data array. Used to + * speed up calculating the hash of a key. + */ + private final int[] rootSize; + + /** + * The position of the given root-level bucket in the data array. Used to + * speed up calculating the hash of a key. + */ + private final int[] rootPos; + + /** + * The hash function level at the root of the tree. Typically 0, except if + * the hash function at that level didn't split the entries as expected + * (which can be due to a bad hash function, or due to an attack). + */ + private final int rootLevel; + + /** + * Create a hash object to convert keys to hashes. + * + * @param desc the data returned by the generate method + * @param hash the universal hash function + */ + public MinimalPerfectHash(byte[] desc, UniversalHash hash) { + this.hash = hash; + byte[] b = data = expand(desc); + seed = ((b[0] & 255) << 24) | + ((b[1] & 255) << 16) | + ((b[2] & 255) << 8) | + (b[3] & 255); + if (b[4] == SPLIT_MANY) { + rootLevel = b[b.length - 1] & 255; + int split = readVarInt(b, 5); + rootSize = new int[split]; + rootPos = new int[split]; + int pos = 5 + getVarIntLength(b, 5); + int sizeSum = 0; + for (int i = 0; i < split; i++) { + rootSize[i] = sizeSum; + rootPos[i] = pos; + int start = pos; + pos = getNextPos(pos); + sizeSum += getSizeSum(start, pos); + } + } else { + rootLevel = 0; + rootSize = null; + rootPos = null; + } + } + + /** + * Calculate the hash value for the given key. + * + * @param x the key + * @return the hash value + */ + public int get(K x) { + return get(4, x, true, rootLevel); + } + + /** + * Get the hash value for the given key, starting at a certain position and + * level. 
+ * + * @param pos the start position + * @param x the key + * @param isRoot whether this is the root of the tree + * @param level the level + * @return the hash value + */ + private int get(int pos, K x, boolean isRoot, int level) { + int n = readVarInt(data, pos); + if (n < 2) { + return 0; + } else if (n > SPLIT_MANY) { + int size = getSize(n); + int offset = getOffset(n, size); + if (size >= SPEEDUP) { + int p = offset % (size + 1); + offset = offset / (size + 1); + int result = hash(x, hash, level, seed, offset, size + 1); + if (result >= p) { + result--; + } + return result; + } + return hash(x, hash, level, seed, offset, size); + } + pos++; + int split; + if (n == SPLIT_MANY) { + split = readVarInt(data, pos); + pos += getVarIntLength(data, pos); + } else { + split = n; + } + int h = hash(x, hash, level, seed, 0, split); + int s; + if (isRoot && rootPos != null) { + s = rootSize[h]; + pos = rootPos[h]; + } else { + int start = pos; + for (int i = 0; i < h; i++) { + pos = getNextPos(pos); + } + s = getSizeSum(start, pos); + } + return s + get(pos, x, false, level + 1); + } + + /** + * Get the position of the next sibling. + * + * @param pos the position of this branch + * @return the position of the next sibling + */ + private int getNextPos(int pos) { + int n = readVarInt(data, pos); + pos += getVarIntLength(data, pos); + if (n < 2 || n > SPLIT_MANY) { + return pos; + } + int split; + if (n == SPLIT_MANY) { + split = readVarInt(data, pos); + pos += getVarIntLength(data, pos); + } else { + split = n; + } + for (int i = 0; i < split; i++) { + pos = getNextPos(pos); + } + return pos; + } + + /** + * The sum of the sizes between the start and end position. 
+     *
+     * @param start the start position
+     * @param end the end position (excluding)
+     * @return the sizes
+     */
+    private int getSizeSum(int start, int end) {
+        int s = 0;
+        for (int pos = start; pos < end;) {
+            int n = readVarInt(data, pos);
+            pos += getVarIntLength(data, pos);
+            if (n < 2) {
+                s += n;
+            } else if (n > SPLIT_MANY) {
+                s += getSize(n);
+            } else if (n == SPLIT_MANY) {
+                pos += getVarIntLength(data, pos);
+            }
+        }
+        return s;
+    }
+
+    private static void writeSizeOffset(ByteArrayOutputStream out, int size,
+            int offset) {
+        writeVarInt(out, SIZE_OFFSETS[size] + offset);
+    }
+
+    private static int getOffset(int n, int size) {
+        return n - SIZE_OFFSETS[size];
+    }
+
+    private static int getSize(int n) {
+        for (int i = 0; i < SIZE_OFFSETS.length; i++) {
+            if (n < SIZE_OFFSETS[i]) {
+                return i - 1;
+            }
+        }
+        return 0;
+    }
+
+    /**
+     * Generate the minimal perfect hash function data from the given set of
+     * keys.
+     *
+     * @param set the data
+     * @param hash the universal hash function
+     * @return the hash function description
+     */
+    public static <K> byte[] generate(Set<K> set, UniversalHash<K> hash) {
+        ArrayList<K> list = new ArrayList<>(set);
+        ByteArrayOutputStream out = new ByteArrayOutputStream();
+        int seed = RANDOM.nextInt();
+        out.write(seed >>> 24);
+        out.write(seed >>> 16);
+        out.write(seed >>> 8);
+        out.write(seed);
+        generate(list, hash, 0, seed, out);
+        return compress(out.toByteArray());
+    }
+
+    /**
+     * Generate the perfect hash function data from the given list of keys.
+     *
+     * @param list the data, in the form of a list
+     * @param hash the universal hash function
+     * @param level the recursion level
+     * @param seed the random seed
+     * @param out the output stream
+     */
+    static <K> void generate(ArrayList<K> list, UniversalHash<K> hash,
+            int level, int seed, ByteArrayOutputStream out) {
+        int size = list.size();
+        if (size <= 1) {
+            out.write(size);
+            return;
+        }
+        if (level > 32) {
+            throw new IllegalStateException("Too many recursions; " +
+                    "incorrect universal hash function?");
+        }
+        if (size <= MAX_SIZE) {
+            int maxOffset = MAX_OFFSETS[size];
+            // get the hash codes - we could stop early
+            // if we detect that two keys have the same hash
+            int[] hashes = new int[size];
+            for (int i = 0; i < size; i++) {
+                hashes[i] = hash.hashCode(list.get(i), level, seed);
+            }
+            // use the supplemental hash function to find a way
+            // to make the hash code unique within this group -
+            // there might be a much faster way than that, by
+            // checking which bits of the hash code matter most
+            int testSize = size;
+            if (size >= SPEEDUP) {
+                testSize++;
+                maxOffset /= testSize;
+            }
+            nextOffset:
+            for (int offset = 0; offset < maxOffset; offset++) {
+                int bits = 0;
+                for (int i = 0; i < size; i++) {
+                    int x = hashes[i];
+                    int h = hash(x, level, offset, testSize);
+                    if ((bits & (1 << h)) != 0) {
+                        continue nextOffset;
+                    }
+                    bits |= 1 << h;
+                }
+                if (size >= SPEEDUP) {
+                    int pos = Integer.numberOfTrailingZeros(~bits);
+                    writeSizeOffset(out, size, offset * (size + 1) + pos);
+                } else {
+                    writeSizeOffset(out, size, offset);
+                }
+                return;
+            }
+        }
+        int split;
+        if (size > 57 * DIVIDE) {
+            split = size / (36 * DIVIDE);
+        } else {
+            split = (size - 47) / DIVIDE;
+        }
+        split = Math.max(2, split);
+        boolean isRoot = level == 0;
+        ArrayList<ArrayList<K>> lists;
+        do {
+            lists = new ArrayList<>(split);
+            for (int i = 0; i < split; i++) {
+                lists.add(new ArrayList<K>(size / split));
+            }
+            for (int i = 0; i < size; i++) {
+                K x = list.get(i);
+                ArrayList<K> l = lists.get(hash(x, hash,
level, seed, 0, split));
+                l.add(x);
+                if (isRoot && split >= SPLIT_MANY &&
+                        l.size() > 36 * DIVIDE * 10) {
+                    // a bad hash function or attack was detected
+                    level++;
+                    lists = null;
+                    break;
+                }
+            }
+        } while (lists == null);
+        if (split >= SPLIT_MANY) {
+            out.write(SPLIT_MANY);
+        }
+        writeVarInt(out, split);
+        boolean multiThreaded = isRoot && list.size() > 1000;
+        list.clear();
+        list.trimToSize();
+        if (multiThreaded) {
+            generateMultiThreaded(lists, hash, level, seed, out);
+        } else {
+            for (ArrayList<K> s2 : lists) {
+                generate(s2, hash, level + 1, seed, out);
+            }
+        }
+        if (isRoot && split >= SPLIT_MANY) {
+            out.write(level);
+        }
+    }
+
+    private static <K> void generateMultiThreaded(
+            final ArrayList<ArrayList<K>> lists,
+            final UniversalHash<K> hash,
+            final int level,
+            final int seed,
+            ByteArrayOutputStream out) {
+        final ArrayList<ByteArrayOutputStream> outList =
+                new ArrayList<>();
+        int processors = Runtime.getRuntime().availableProcessors();
+        Thread[] threads = new Thread[processors];
+        final AtomicInteger success = new AtomicInteger();
+        final AtomicReference<Exception> failure = new AtomicReference<>();
+        for (int i = 0; i < processors; i++) {
+            threads[i] = new Thread() {
+                @Override
+                public void run() {
+                    try {
+                        while (true) {
+                            ArrayList<K> list;
+                            ByteArrayOutputStream temp =
+                                    new ByteArrayOutputStream();
+                            synchronized (lists) {
+                                if (lists.isEmpty()) {
+                                    break;
+                                }
+                                list = lists.remove(0);
+                                outList.add(temp);
+                            }
+                            generate(list, hash, level + 1, seed, temp);
+                        }
+                    } catch (Exception e) {
+                        failure.set(e);
+                        return;
+                    }
+                    success.incrementAndGet();
+                }
+            };
+        }
+        for (Thread t : threads) {
+            t.start();
+        }
+        try {
+            for (Thread t : threads) {
+                t.join();
+            }
+            if (success.get() != threads.length) {
+                Exception e = failure.get();
+                if (e != null) {
+                    throw new RuntimeException(e);
+                }
+                throw new RuntimeException("Unknown failure in one thread");
+            }
+            for (ByteArrayOutputStream temp : outList) {
+                out.write(temp.toByteArray());
+            }
+        } catch (InterruptedException e) {
+            throw new
RuntimeException(e);
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * Calculate the hash of a key. The result depends on the key, the recursion
+     * level, and the offset.
+     *
+     * @param o the key
+     * @param hash the universal hash function
+     * @param level the recursion level
+     * @param seed the random seed
+     * @param offset the index of the hash function
+     * @param size the size of the bucket
+     * @return the hash (a value between 0, including, and the size, excluding)
+     */
+    private static <K> int hash(K o, UniversalHash<K> hash, int level,
+            int seed, int offset, int size) {
+        int x = hash.hashCode(o, level, seed);
+        x += level + offset * 32;
+        x = ((x >>> 16) ^ x) * 0x45d9f3b;
+        x = ((x >>> 16) ^ x) * 0x45d9f3b;
+        x = (x >>> 16) ^ x;
+        return (x & (-1 >>> 1)) % size;
+    }
+
+    private static int hash(int x, int level, int offset, int size) {
+        x += level + offset * 32;
+        x = ((x >>> 16) ^ x) * 0x45d9f3b;
+        x = ((x >>> 16) ^ x) * 0x45d9f3b;
+        x = (x >>> 16) ^ x;
+        return (x & (-1 >>> 1)) % size;
+    }
+
+    private static int writeVarInt(ByteArrayOutputStream out, int x) {
+        int len = 0;
+        while ((x & ~0x7f) != 0) {
+            out.write((byte) (0x80 | (x & 0x7f)));
+            x >>>= 7;
+            len++;
+        }
+        out.write((byte) x);
+        return ++len;
+    }
+
+    private static int readVarInt(byte[] d, int pos) {
+        int x = d[pos++];
+        if (x >= 0) {
+            return x;
+        }
+        x &= 0x7f;
+        for (int s = 7; s < 64; s += 7) {
+            int b = d[pos++];
+            x |= (b & 0x7f) << s;
+            if (b >= 0) {
+                break;
+            }
+        }
+        return x;
+    }
+
+    private static int getVarIntLength(byte[] d, int pos) {
+        int x = d[pos++];
+        if (x >= 0) {
+            return 1;
+        }
+        int len = 2;
+        for (int s = 7; s < 64; s += 7) {
+            int b = d[pos++];
+            if (b >= 0) {
+                break;
+            }
+            len++;
+        }
+        return len;
+    }
+
+    /**
+     * Compress the hash description using a Huffman coding.
+     *
+     * @param d the data
+     * @return the compressed data
+     */
+    private static byte[] compress(byte[] d) {
+        Deflater deflater = new Deflater();
+        deflater.setStrategy(Deflater.HUFFMAN_ONLY);
+        deflater.setInput(d);
+        deflater.finish();
+        ByteArrayOutputStream out2 = new ByteArrayOutputStream(d.length);
+        byte[] buffer = new byte[1024];
+        while (!deflater.finished()) {
+            int count = deflater.deflate(buffer);
+            out2.write(buffer, 0, count);
+        }
+        deflater.end();
+        return out2.toByteArray();
+    }
+
+    /**
+     * Decompress the hash description using a Huffman coding.
+     *
+     * @param d the data
+     * @return the decompressed data
+     */
+    private static byte[] expand(byte[] d) {
+        Inflater inflater = new Inflater();
+        inflater.setInput(d);
+        ByteArrayOutputStream out = new ByteArrayOutputStream(d.length);
+        byte[] buffer = new byte[1024];
+        try {
+            while (!inflater.finished()) {
+                int count = inflater.inflate(buffer);
+                out.write(buffer, 0, count);
+            }
+            inflater.end();
+        } catch (Exception e) {
+            throw new IllegalArgumentException(e);
+        }
+        return out.toByteArray();
+    }
+
+    /**
+     * An interface that can calculate multiple hash values for an object. The
+     * returned hash value of two distinct objects may be the same for a given
+     * hash function index, but as more hash function indexes are tried for
+     * those objects, the returned value must eventually be different.
+     * <p>

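The contract described in the comment above (two distinct keys may collide at one index, but some higher index must eventually separate them) can be sketched in isolation. Everything in this sketch — the class name, the lambda implementation, and its mixing constants — is an illustrative assumption, not part of the patch:

```java
// Standalone sketch of the UniversalHash contract: a family of hash
// functions selected by an index. All names here are illustrative.
public class UniversalHashSketch {

    interface UniversalHash<T> {
        int hashCode(T o, int index, int seed);
    }

    // An assumed integer-key implementation: mix the key with the
    // function index and seed, so different indexes behave like
    // different hash functions.
    static final UniversalHash<Integer> INT_HASH = (o, index, seed) -> {
        int x = o ^ seed ^ (index * 0x9E3779B9);
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        return (x >>> 16) ^ x;
    };

    public static void main(String[] args) {
        int seed = 12345;
        // two distinct keys should be separated once enough indexes
        // are tried, even if they collide at some particular index
        boolean separated = false;
        for (int index = 0; index < 8 && !separated; index++) {
            separated = INT_HASH.hashCode(3, index, seed)
                    != INT_HASH.hashCode(7, index, seed);
        }
        System.out.println("separated: " + separated);
    }
}
```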
+     * The returned value does not need to be uniformly distributed.
+     *
+     * @param <T> the type
+     */
+    public interface UniversalHash<T> {
+
+        /**
+         * Calculate the hash of the given object.
+         *
+         * @param o the object
+         * @param index the hash function index (index 0 is used first, so the
+         *            method should be very fast with index 0; index 1 and so on
+         *            are only called when really needed)
+         * @param seed the random seed (always the same for a hash table)
+         * @return the hash value
+         */
+        int hashCode(T o, int index, int seed);
+
+    }
+
+    /**
+     * A sample hash implementation for long keys.
+     */
+    public static class LongHash implements UniversalHash<Long> {
+
+        @Override
+        public int hashCode(Long o, int index, int seed) {
+            if (index == 0) {
+                return o.hashCode();
+            } else if (index < 8) {
+                long x = o.longValue();
+                x += index;
+                x = ((x >>> 32) ^ x) * 0x45d9f3b;
+                x = ((x >>> 32) ^ x) * 0x45d9f3b;
+                return (int) (x ^ (x >>> 32));
+            }
+            // get the lower or higher 32 bit depending on the index
+            int shift = (index & 1) * 32;
+            return (int) (o.longValue() >>> shift);
+        }
+
+    }
+
+    /**
+     * A sample hash implementation for string keys.
+     */
+    public static class StringHash implements UniversalHash<String> {
+
+        @Override
+        public int hashCode(String o, int index, int seed) {
+            if (index == 0) {
+                // use the default hash of a string, which might already be
+                // available
+                return o.hashCode();
+            } else if (index < 8) {
+                // use a different hash function, which is fast but not
+                // necessarily universal, and not cryptographically secure
+                return getFastHash(o, index, seed);
+            }
+            // this method is supposed to be cryptographically secure;
+            // we could use SHA-256 for higher indexes
+            return getSipHash24(o, index, seed);
+        }
+
+        /**
+         * A cryptographically weak hash function. It is supposed to be fast.
+         *
+         * @param o the string
+         * @param index the hash function index
+         * @param seed the seed
+         * @return the hash value
+         */
+        public static int getFastHash(String o, int index, int seed) {
+            int x = (index * 0x9f3b) ^ seed;
+            int result = seed + o.length();
+            for (int i = 0; i < o.length(); i++) {
+                x = 31 + x * 0x9f3b;
+                result ^= x * (1 + o.charAt(i));
+            }
+            return result;
+        }
+
+        /**
+         * A cryptographically relatively secure hash function. It is supposed
+         * to protect against hash-flooding denial-of-service attacks.
+         *
+         * @param o the string
+         * @param k0 key 0
+         * @param k1 key 1
+         * @return the hash value
+         */
+        public static int getSipHash24(String o, long k0, long k1) {
+            byte[] b = o.getBytes(StandardCharsets.UTF_8);
+            return getSipHash24(b, 0, b.length, k0, k1);
+        }
+
+        /**
+         * A cryptographically relatively secure hash function. It is supposed
+         * to protect against hash-flooding denial-of-service attacks.
+         *
+         * @param b the data
+         * @param start the start position
+         * @param end the end position plus one
+         * @param k0 key 0
+         * @param k1 key 1
+         * @return the hash value
+         */
+        public static int getSipHash24(byte[] b, int start, int end, long k0, long k1) {
+            long v0 = k0 ^ 0x736f6d6570736575L;
+            long v1 = k1 ^ 0x646f72616e646f6dL;
+            long v2 = k0 ^ 0x6c7967656e657261L;
+            long v3 = k1 ^ 0x7465646279746573L;
+            int repeat;
+            for (int off = start; off <= end + 8; off += 8) {
+                long m;
+                if (off <= end) {
+                    m = 0;
+                    int i = 0;
+                    for (; i < 8 && off + i < end; i++) {
+                        m |= ((long) b[off + i] & 255) << (8 * i);
+                    }
+                    if (i < 8) {
+                        m |= ((long) end - start) << 56;
+                    }
+                    v3 ^= m;
+                    repeat = 2;
+                } else {
+                    m = 0;
+                    v2 ^= 0xff;
+                    repeat = 4;
+                }
+                for (int i = 0; i < repeat; i++) {
+                    v0 += v1;
+                    v2 += v3;
+                    v1 = Long.rotateLeft(v1, 13);
+                    v3 = Long.rotateLeft(v3, 16);
+                    v1 ^= v0;
+                    v3 ^= v2;
+                    v0 = Long.rotateLeft(v0, 32);
+                    v2 += v1;
+                    v0 += v3;
+                    v1 = Long.rotateLeft(v1, 17);
+                    v3 = Long.rotateLeft(v3, 21);
+                    v1 ^= v2;
+                    v3 ^= v0;
+                    v2 =
Long.rotateLeft(v2, 32);
+                }
+                v0 ^= m;
+            }
+            return (int) (v0 ^ v1 ^ v2 ^ v3);
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/tools/org/h2/dev/hash/PerfectHash.java b/modules/h2/src/test/tools/org/h2/dev/hash/PerfectHash.java
new file mode 100644
index 0000000000000..2c9712a4e12fb
--- /dev/null
+++ b/modules/h2/src/test/tools/org/h2/dev/hash/PerfectHash.java
@@ -0,0 +1,258 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.dev.hash;
+
+import java.io.ByteArrayOutputStream;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Set;
+import java.util.zip.Deflater;
+import java.util.zip.Inflater;
+
+/**
+ * A perfect hash function tool. It needs about 1.4 bits per key, and the
+ * resulting hash table is about 79% full. The minimal perfect hash function
+ * needs about 2.3 bits per key.
+ * <p>

+ * Generating the hash function takes about 1 second per million keys
+ * for both perfect hash and minimal perfect hash.
+ * <p>

+ * The algorithm is recursive: sets that contain no or only one entry are not
+ * processed, as no conflicts are possible. For sets that contain between 2 and
+ * 16 entries, up to 16 hash functions are tested to check if they can store the
+ * data without conflict. If no function was found, the same is tested on a
+ * larger bucket (except for the minimal perfect hash). If no hash function was
+ * found, and for larger buckets, the bucket is split into a number of smaller
+ * buckets (up to 32).
+ * <p>

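The conflict search described in the paragraph above (try hash function after hash function until one maps every key of a small bucket to a distinct slot) can be sketched standalone. The sketch mirrors the private `hash(int, int, int, int)` mixer defined further down in this file; the class and method names are assumed, and unlike the real generator, which gives up after `OFFSETS` tries and splits the bucket, this sketch just keeps searching:

```java
public class SupplementalHashSketch {

    private static final int OFFSETS = 16; // matches PerfectHash.OFFSETS

    // Mirrors the private PerfectHash.hash(int, int, int, int): mix the
    // key with the recursion level and the hash function index
    // ("offset"), then reduce the result into [0, size).
    static int hash(int x, int level, int offset, int size) {
        x += level * OFFSETS + offset;
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        x = ((x >>> 16) ^ x) * 0x45d9f3b;
        x = (x >>> 16) ^ x;
        return Math.abs(x % size);
    }

    // Search for an offset that maps every key of a small bucket to a
    // distinct slot - the conflict test at the heart of generation.
    static int findPerfectOffset(int[] bucket, int size) {
        nextOffset:
        for (int offset = 0; ; offset++) {
            int bits = 0;
            for (int key : bucket) {
                int h = hash(key, 0, offset, size);
                if ((bits & (1 << h)) != 0) {
                    continue nextOffset; // conflict, try the next offset
                }
                bits |= 1 << h;
            }
            return offset;
        }
    }

    public static void main(String[] args) {
        int[] bucket = {10, 20, 30, 40};
        System.out.println("conflict-free offset: "
                + findPerfectOffset(bucket, bucket.length));
    }
}
```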
+ * At the end of the generation process, the data is compressed using a general
+ * purpose compression tool (Deflate / Huffman coding). The uncompressed data is
+ * around 1.52 bits per key (perfect hash) and 3.72 (minimal perfect hash).
+ * <p>

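The compression step mentioned above can be exercised on its own. This sketch mirrors the `compress`/`expand` pair defined later in this file (Deflate restricted to Huffman coding, i.e. no LZ77 matching); the class name is an assumption for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class HuffmanRoundTrip {

    // Same approach as the compress method below: Deflate with the
    // HUFFMAN_ONLY strategy, so only entropy coding is applied.
    static byte[] compress(byte[] d) {
        Deflater deflater = new Deflater();
        deflater.setStrategy(Deflater.HUFFMAN_ONLY);
        deflater.setInput(d);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(d.length);
        byte[] buffer = new byte[1024];
        while (!deflater.finished()) {
            out.write(buffer, 0, deflater.deflate(buffer));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Same approach as the expand method below.
    static byte[] expand(byte[] d) {
        Inflater inflater = new Inflater();
        inflater.setInput(d);
        ByteArrayOutputStream out = new ByteArrayOutputStream(d.length);
        byte[] buffer = new byte[1024];
        try {
            while (!inflater.finished()) {
                out.write(buffer, 0, inflater.inflate(buffer));
            }
        } catch (Exception e) {
            throw new IllegalArgumentException(e);
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = new byte[1000];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) (i % 4); // skewed distribution compresses well
        }
        byte[] packed = compress(data);
        byte[] unpacked = expand(packed);
        System.out.println(data.length + " -> " + packed.length
                + " bytes, round trip ok: "
                + java.util.Arrays.equals(data, unpacked));
    }
}
```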
+ * Please also note the MinimalPerfectHash class, which uses less space per key.
+ */
+public class PerfectHash {
+
+    /**
+     * The maximum size of a bucket.
+     */
+    private static final int MAX_SIZE = 16;
+
+    /**
+     * The maximum number of hash functions to test.
+     */
+    private static final int OFFSETS = 16;
+
+    /**
+     * The maximum number of buckets to split the set into.
+     */
+    private static final int MAX_SPLIT = 32;
+
+    /**
+     * The description of the hash function. Used for calculating the hash of a
+     * key.
+     */
+    private final byte[] data;
+
+    /**
+     * The offset of the result of the hash function at the given offset within
+     * the data array. Used for calculating the hash of a key.
+     */
+    private final int[] plus;
+
+    /**
+     * The position of the next bucket in the data array (in case this bucket
+     * needs to be skipped). Used for calculating the hash of a key.
+     */
+    private final int[] next;
+
+    /**
+     * Create a hash object to convert keys to hashes.
+     *
+     * @param data the data returned by the generate method
+     */
+    public PerfectHash(byte[] data) {
+        this.data = data = expand(data);
+        plus = new int[data.length];
+        next = new int[data.length];
+        for (int i = 0, p = 0; i < data.length; i++) {
+            plus[i] = p;
+            int n = data[i] & 255;
+            p += n < 2 ? n : n >= MAX_SPLIT ? (n / OFFSETS) : 0;
+        }
+    }
+
+    /**
+     * Calculate the hash from the key.
+     *
+     * @param x the key
+     * @return the hash
+     */
+    public int get(int x) {
+        return get(0, x, 0);
+    }
+
+    private int get(int pos, int x, int level) {
+        int n = data[pos] & 255;
+        if (n < 2) {
+            return plus[pos];
+        } else if (n >= MAX_SPLIT) {
+            return plus[pos] + hash(x, level, n % OFFSETS, n / OFFSETS);
+        }
+        pos++;
+        int h = hash(x, level, 0, n);
+        for (int i = 0; i < h; i++) {
+            pos = read(pos);
+        }
+        return get(pos, x, level + 1);
+    }
+
+    private int read(int pos) {
+        int p = next[pos];
+        if (p == 0) {
+            int n = data[pos] & 255;
+            if (n < 2 || n >= MAX_SPLIT) {
+                return pos + 1;
+            }
+            int start = pos++;
+            for (int i = 0; i < n; i++) {
+                pos = read(pos);
+            }
+            next[start] = p = pos;
+        }
+        return p;
+    }
+
+    /**
+     * Generate the perfect hash function data from the given set of integers.
+     *
+     * @param list the set
+     * @param minimal whether the perfect hash function needs to be minimal
+     * @return the data
+     */
+    public static byte[] generate(Set<Integer> list, boolean minimal) {
+        ByteArrayOutputStream out = new ByteArrayOutputStream();
+        generate(list, 0, minimal, out);
+        return compress(out.toByteArray());
+    }
+
+    private static void generate(Collection<Integer> set, int level,
+            boolean minimal, ByteArrayOutputStream out) {
+        int size = set.size();
+        if (size <= 1) {
+            out.write(size);
+            return;
+        }
+        if (size < MAX_SIZE) {
+            int max = minimal ? size : Math.min(MAX_SIZE - 1, size * 2);
+            for (int s = size; s <= max; s++) {
+                // Try a few hash functions ("offset" is basically the hash
+                // function index). We could try fewer hash functions, and
+                // instead use a larger size and remember the position of the
+                // hole (especially for the minimal perfect case), but that's
+                // more complicated.
+                nextOffset:
+                for (int offset = 0; offset < OFFSETS; offset++) {
+                    int bits = 0;
+                    for (int x : set) {
+                        int h = hash(x, level, offset, s);
+                        if ((bits & (1 << h)) != 0) {
+                            continue nextOffset;
+                        }
+                        bits |= 1 << h;
+                    }
+                    out.write(s * OFFSETS + offset);
+                    return;
+                }
+            }
+        }
+        // Split the set into multiple smaller sets. We could try to split more
+        // evenly by trying out multiple hash functions, but that's more
+        // complicated.
+        int split;
+        if (minimal) {
+            split = size > 150 ? size / 83 : (size + 3) / 4;
+        } else {
+            split = size > 265 ? size / 142 : (size + 5) / 7;
+        }
+        split = Math.min(MAX_SPLIT - 1, Math.max(2, split));
+        out.write(split);
+        List<List<Integer>> lists = new ArrayList<>(split);
+        for (int i = 0; i < split; i++) {
+            lists.add(new ArrayList<Integer>(size / split));
+        }
+        for (int x : set) {
+            lists.get(hash(x, level, 0, split)).add(x);
+        }
+        for (List<Integer> s2 : lists) {
+            generate(s2, level + 1, minimal, out);
+        }
+    }
+
+    /**
+     * Calculate the hash of a key. The result depends on the key, the recursion
+     * level, and the offset.
+     *
+     * @param x the key
+     * @param level the recursion level
+     * @param offset the index of the hash function
+     * @param size the size of the bucket
+     * @return the hash (a value between 0, including, and the size, excluding)
+     */
+    private static int hash(int x, int level, int offset, int size) {
+        x += level * OFFSETS + offset;
+        x = ((x >>> 16) ^ x) * 0x45d9f3b;
+        x = ((x >>> 16) ^ x) * 0x45d9f3b;
+        x = (x >>> 16) ^ x;
+        return Math.abs(x % size);
+    }
+
+    /**
+     * Compress the hash description using a Huffman coding.
+ * + * @param d the data + * @return the compressed data + */ + private static byte[] compress(byte[] d) { + Deflater deflater = new Deflater(); + deflater.setStrategy(Deflater.HUFFMAN_ONLY); + deflater.setInput(d); + deflater.finish(); + ByteArrayOutputStream out2 = new ByteArrayOutputStream(d.length); + byte[] buffer = new byte[1024]; + while (!deflater.finished()) { + int count = deflater.deflate(buffer); + out2.write(buffer, 0, count); + } + deflater.end(); + return out2.toByteArray(); + } + + /** + * Decompress the hash description using a Huffman coding. + * + * @param d the data + * @return the decompressed data + */ + private static byte[] expand(byte[] d) { + Inflater inflater = new Inflater(); + inflater.setInput(d); + ByteArrayOutputStream out = new ByteArrayOutputStream(d.length); + byte[] buffer = new byte[1024]; + try { + while (!inflater.finished()) { + int count = inflater.inflate(buffer); + out.write(buffer, 0, count); + } + inflater.end(); + } catch (Exception e) { + throw new IllegalArgumentException(e); + } + return out.toByteArray(); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/hash/package.html b/modules/h2/src/test/tools/org/h2/dev/hash/package.html new file mode 100644 index 0000000000000..f49445cb9d000 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/hash/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A perfect hash function tool. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/mail/SendMail.java.txt b/modules/h2/src/test/tools/org/h2/dev/mail/SendMail.java.txt new file mode 100644 index 0000000000000..811b3abd5211f --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/mail/SendMail.java.txt @@ -0,0 +1,45 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.mail; +import java.util.Properties; +import javax.mail.Session; +import javax.mail.Transport; +import javax.mail.Message.RecipientType; +import javax.mail.internet.MimeMessage; + +/** + * Utility class to send a mail over a fixed gmail account. + */ +public class SendMail { + // http://repo2.maven.org/maven2/javax/mail/mail/1.4.1/mail-1.4.1.jar + // http://repo2.maven.org/maven2/javax/activation/activation/1.1/activation-1.1.jar + + public static void main(String[] args) throws Exception { + String to = "thomas.tom.mueller" + "@" + "gmail.com"; + sendMailOverGmail("", to, "Test", "Test Mail"); + } + + static void sendMailOverGmail(String password, String to, String subject, String body) throws Exception { + String username = "testing1212123" + "@" + "gmail.com"; + String host = "smtp.gmail.com"; + Properties prop = new Properties(); + prop.put("mail.smtps.auth", "true"); + Session session = Session.getDefaultInstance(prop); + session.setProtocolForAddress("rfc822", "smtps"); + session.setDebug(true); + MimeMessage msg = new MimeMessage(session); + msg.setRecipients(RecipientType.TO, to); + msg.setSubject(subject); + msg.setText(body); + Transport t = session.getTransport("smtps"); + try { + t.connect(host, username, password); + t.sendMessage(msg, msg.getAllRecipients()); + } finally { + t.close(); + } + } +} diff --git a/modules/h2/src/test/tools/org/h2/dev/net/PgTcpRedirect.java b/modules/h2/src/test/tools/org/h2/dev/net/PgTcpRedirect.java new file mode 
100644 index 0000000000000..a8332d18b644c --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/net/PgTcpRedirect.java @@ -0,0 +1,559 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.net; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.DataInputStream; +import java.io.DataOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.ServerSocket; +import java.net.Socket; + +/** + * This class helps debug the PostgreSQL network protocol. + * It listens on one port, and sends the exact same data to another port. + */ +public class PgTcpRedirect { + + private static final boolean DEBUG = false; + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + new PgTcpRedirect().loop(args); + } + + private void loop(String... 
args) throws Exception { + // MySQL protocol: + // http://www.redferni.uklinux.net/mysql/MySQL-Protocol.html + // PostgreSQL protocol: + // http://developer.postgresql.org/pgdocs/postgres/protocol.html + // int portServer = 9083, portClient = 9084; + // int portServer = 3306, portClient = 3307; + // H2 PgServer + // int portServer = 5435, portClient = 5433; + // PostgreSQL + int portServer = 5432, portClient = 5433; + + for (int i = 0; i < args.length; i++) { + if ("-client".equals(args[i])) { + portClient = Integer.parseInt(args[++i]); + } else if ("-server".equals(args[i])) { + portServer = Integer.parseInt(args[++i]); + } + } + ServerSocket listener = new ServerSocket(portClient); + while (true) { + Socket client = listener.accept(); + Socket server = new Socket("localhost", portServer); + TcpRedirectThread c = new TcpRedirectThread(client, server, true); + TcpRedirectThread s = new TcpRedirectThread(server, client, false); + new Thread(c).start(); + new Thread(s).start(); + } + } + + /** + * This is the working thread of the TCP redirector. + */ + private class TcpRedirectThread implements Runnable { + + private static final int STATE_INIT_CLIENT = 0, STATE_REGULAR = 1; + private final Socket read, write; + private int state; + private final boolean client; + + TcpRedirectThread(Socket read, Socket write, boolean client) { + this.read = read; + this.write = write; + this.client = client; + state = client ? 
STATE_INIT_CLIENT : STATE_REGULAR; + } + + String readStringNull(InputStream in) throws IOException { + StringBuilder buff = new StringBuilder(); + while (true) { + int x = in.read(); + if (x <= 0) { + break; + } + buff.append((char) x); + } + return buff.toString(); + } + + private void println(String s) { + if (DEBUG) { + System.out.println(s); + } + } + + private boolean processClient(InputStream inStream, + OutputStream outStream) throws IOException { + DataInputStream dataIn = new DataInputStream(inStream); + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + DataOutputStream dataOut = new DataOutputStream(buff); + if (state == STATE_INIT_CLIENT) { + state = STATE_REGULAR; + int len = dataIn.readInt(); + dataOut.writeInt(len); + len -= 4; + byte[] data = new byte[len]; + dataIn.readFully(data, 0, len); + dataOut.write(data); + dataIn = new DataInputStream(new ByteArrayInputStream(data, 0, len)); + int version = dataIn.readInt(); + if (version == 80877102) { + println("CancelRequest"); + println(" pid: " + dataIn.readInt()); + println(" key: " + dataIn.readInt()); + } else if (version == 80877103) { + println("SSLRequest"); + } else { + println("StartupMessage"); + println(" version " + version + " (" + (version >> 16) + + "." 
+ (version & 0xff) + ")"); + while (true) { + String param = readStringNull(dataIn); + if (param.length() == 0) { + break; + } + String value = readStringNull(dataIn); + println(" param " + param + "=" + value); + } + } + } else { + int x = dataIn.read(); + if (x < 0) { + println("end"); + return false; + } + // System.out.println(" x=" + (char)x+" " +x); + dataOut.write(x); + int len = dataIn.readInt(); + dataOut.writeInt(len); + len -= 4; + byte[] data = new byte[len]; + dataIn.readFully(data, 0, len); + dataOut.write(data); + dataIn = new DataInputStream(new ByteArrayInputStream(data, 0, len)); + switch (x) { + case 'B': { + println("Bind"); + println(" destPortal: " + readStringNull(dataIn)); + println(" prepName: " + readStringNull(dataIn)); + int formatCodesCount = dataIn.readShort(); + for (int i = 0; i < formatCodesCount; i++) { + println(" formatCode[" + i + "]=" + dataIn.readShort()); + } + int paramCount = dataIn.readShort(); + for (int i = 0; i < paramCount; i++) { + int paramLen = dataIn.readInt(); + println(" length[" + i + "]=" + paramLen); + byte[] d2 = new byte[paramLen]; + dataIn.readFully(d2); + } + int resultCodeCount = dataIn.readShort(); + for (int i = 0; i < resultCodeCount; i++) { + println(" resultCodeCount[" + i + "]=" + dataIn.readShort()); + } + break; + } + case 'C': { + println("Close"); + println(" type: (S:prepared statement, P:portal): " + dataIn.read()); + break; + } + case 'd': { + println("CopyData"); + break; + } + case 'c': { + println("CopyDone"); + break; + } + case 'f': { + println("CopyFail"); + println(" message: " + readStringNull(dataIn)); + break; + } + case 'D': { + println("Describe"); + println(" type (S=prepared statement, P=portal): " + (char) dataIn.readByte()); + println(" name: " + readStringNull(dataIn)); + break; + } + case 'E': { + println("Execute"); + println(" name: " + readStringNull(dataIn)); + println(" maxRows: " + dataIn.readShort()); + break; + } + case 'H': { + println("Flush"); + break; + } + case 
'F': { + println("FunctionCall"); + println(" objectId:" + dataIn.readInt()); + int columns = dataIn.readShort(); + for (int i = 0; i < columns; i++) { + println(" formatCode[" + i + "]: " + dataIn.readShort()); + } + int count = dataIn.readShort(); + for (int i = 0; i < count; i++) { + int l = dataIn.readInt(); + println(" len[" + i + "]: " + l); + if (l >= 0) { + for (int j = 0; j < l; j++) { + dataIn.readByte(); + } + } + } + println(" resultFormat: " + dataIn.readShort()); + break; + } + case 'P': { + println("Parse"); + println(" name:" + readStringNull(dataIn)); + println(" query:" + readStringNull(dataIn)); + int count = dataIn.readShort(); + for (int i = 0; i < count; i++) { + println(" [" + i + "]: " + dataIn.readInt()); + } + break; + } + case 'p': { + println("PasswordMessage"); + println(" password: " + readStringNull(dataIn)); + break; + } + case 'Q': { + println("Query"); + println(" sql : " + readStringNull(dataIn)); + break; + } + case 'S': { + println("Sync"); + break; + } + case 'X': { + println("Terminate"); + break; + } + default: + println("############## UNSUPPORTED: " + (char) x); + } + } + dataOut.flush(); + byte[] buffer = buff.toByteArray(); + printData(buffer, buffer.length); + try { + outStream.write(buffer, 0, buffer.length); + outStream.flush(); + } catch (IOException e) { + e.printStackTrace(); + } + return true; + } + + private boolean processServer(InputStream inStream, + OutputStream outStream) throws IOException { + DataInputStream dataIn = new DataInputStream(inStream); + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + DataOutputStream dataOut = new DataOutputStream(buff); + int x = dataIn.read(); + if (x < 0) { + println("end"); + return false; + } + // System.out.println(" x=" + (char)x+" " +x); + dataOut.write(x); + int len = dataIn.readInt(); + dataOut.writeInt(len); + len -= 4; + byte[] data = new byte[len]; + dataIn.readFully(data, 0, len); + dataOut.write(data); + dataIn = new DataInputStream(new 
ByteArrayInputStream(data, 0, len)); + switch (x) { + case 'R': { + println("Authentication"); + int value = dataIn.readInt(); + if (value == 0) { + println(" Ok"); + } else if (value == 2) { + println(" KerberosV5"); + } else if (value == 3) { + println(" CleartextPassword"); + } else if (value == 4) { + println(" CryptPassword"); + byte b1 = dataIn.readByte(); + byte b2 = dataIn.readByte(); + println(" salt1=" + b1 + " salt2=" + b2); + } else if (value == 5) { + println(" MD5Password"); + byte b1 = dataIn.readByte(); + byte b2 = dataIn.readByte(); + byte b3 = dataIn.readByte(); + byte b4 = dataIn.readByte(); + println(" salt1=" + b1 + " salt2=" + b2 + " 3=" + b3 + " 4=" + b4); + } else if (value == 6) { + println(" SCMCredential"); + } + break; + } + case 'K': { + println("BackendKeyData"); + println(" process ID " + dataIn.readInt()); + println(" key " + dataIn.readInt()); + break; + } + case '2': { + println("BindComplete"); + break; + } + case '3': { + println("CloseComplete"); + break; + } + case 'C': { + println("CommandComplete"); + println(" command tag: " + readStringNull(dataIn)); + break; + } + case 'd': { + println("CopyData"); + break; + } + case 'c': { + println("CopyDone"); + break; + } + case 'G': { + println("CopyInResponse"); + println(" format: " + dataIn.readByte()); + int columns = dataIn.readShort(); + for (int i = 0; i < columns; i++) { + println(" formatCode[" + i + "]: " + dataIn.readShort()); + } + break; + } + case 'H': { + println("CopyOutResponse"); + println(" format: " + dataIn.readByte()); + int columns = dataIn.readShort(); + for (int i = 0; i < columns; i++) { + println(" formatCode[" + i + "]: " + dataIn.readShort()); + } + break; + } + case 'D': { + println("DataRow"); + int columns = dataIn.readShort(); + println(" columns : " + columns); + for (int i = 0; i < columns; i++) { + int l = dataIn.readInt(); + if (l > 0) { + for (int j = 0; j < l; j++) { + dataIn.readByte(); + } + } + // println(" ["+i+"] len: " + l); + } + break; + 
} + case 'I': { + println("EmptyQueryResponse"); + break; + } + case 'E': { + println("ErrorResponse"); + while (true) { + int fieldType = dataIn.readByte(); + if (fieldType == 0) { + break; + } + String msg = readStringNull(dataIn); + // http://developer.postgresql.org/pgdocs/postgres/protocol-error-fields.html + // S Severity + // C Code: the SQLSTATE code + // M Message + // D Detail + // H Hint + // P Position + // p Internal position + // q Internal query + // W Where + // F File + // L Line + // R Routine + println(" fieldType: " + fieldType + " msg: " + msg); + } + break; + } + case 'V': { + println("FunctionCallResponse"); + int resultLen = dataIn.readInt(); + println(" len: " + resultLen); + break; + } + case 'n': { + println("NoData"); + break; + } + case 'N': { + println("NoticeResponse"); + while (true) { + int fieldType = dataIn.readByte(); + if (fieldType == 0) { + break; + } + String msg = readStringNull(dataIn); + // http://developer.postgresql.org/pgdocs/postgres/protocol-error-fields.html + // S Severity + // C Code: the SQLSTATE code + // M Message + // D Detail + // H Hint + // P Position + // p Internal position + // q Internal query + // W Where + // F File + // L Line + // R Routine + println(" fieldType: " + fieldType + " msg: " + msg); + } + break; + } + case 'A': { + println("NotificationResponse"); + println(" processID: " + dataIn.readInt()); + println(" condition: " + readStringNull(dataIn)); + println(" information: " + readStringNull(dataIn)); + break; + } + case 't': { + println("ParameterDescription"); + // ParameterDescription carries no process ID: just an Int16 count + // followed by one Int32 type OID per parameter + int count = dataIn.readShort(); + for (int i = 0; i < count; i++) { + println(" [" + i + "] objectId: " + dataIn.readInt()); + } + break; + } + case 'S': { + println("ParameterStatus"); + println(" parameter " + readStringNull(dataIn) + " = " + + readStringNull(dataIn)); + break; + } + case '1': { + println("ParseComplete"); + break; + } + case 's': { + println("PortalSuspended"); + break
+ } + case 'Z': { + println("ReadyForQuery"); + println(" status (I:idle, T:transaction, E:failed): " + + (char) dataIn.readByte()); + break; + } + case 'T': { + println("RowDescription"); + int columns = dataIn.readShort(); + println(" columns : " + columns); + for (int i = 0; i < columns; i++) { + println(" [" + i + "]"); + println(" name:" + readStringNull(dataIn)); + println(" tableId:" + dataIn.readInt()); + println(" columnId:" + dataIn.readShort()); + println(" dataTypeId:" + dataIn.readInt()); + println(" dataTypeSize (pg_type.typlen):" + dataIn.readShort()); + println(" modifier (pg_attribute.atttypmod):" + dataIn.readInt()); + println(" format code:" + dataIn.readShort()); + } + break; + } + default: + println("############## UNSUPPORTED: " + (char) x); + } + dataOut.flush(); + byte[] buffer = buff.toByteArray(); + printData(buffer, buffer.length); + try { + outStream.write(buffer, 0, buffer.length); + outStream.flush(); + } catch (IOException e) { + e.printStackTrace(); + } + return true; + } + + @Override + public void run() { + try { + OutputStream out = write.getOutputStream(); + InputStream in = read.getInputStream(); + while (true) { + boolean more; + if (client) { + more = processClient(in, out); + } else { + more = processServer(in, out); + } + if (!more) { + break; + } + } + try { + read.close(); + } catch (IOException e) { + // ignore + } + try { + write.close(); + } catch (IOException e) { + // ignore + } + } catch (Throwable e) { + e.printStackTrace(); + } + } + } + + /** + * Print the uninterpreted byte array. 
+ * + * @param buffer the byte array + * @param len the length + */ + static synchronized void printData(byte[] buffer, int len) { + if (DEBUG) { + System.out.print(" "); + for (int i = 0; i < len; i++) { + int c = buffer[i] & 255; + if (c >= ' ' && c <= 127 && c != '[' & c != ']') { + System.out.print((char) c); + } else { + System.out.print("[" + Integer.toHexString(c) + "]"); + } + } + System.out.println(); + } + } +} diff --git a/modules/h2/src/test/tools/org/h2/dev/net/package.html b/modules/h2/src/test/tools/org/h2/dev/net/package.html new file mode 100644 index 0000000000000..63296ee5571eb --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/net/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A tool to redirect and interpret PostgreSQL network protocol packets. + +
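The tracer above dispatches on PostgreSQL backend messages, each framed as a one-byte type code followed by a four-byte big-endian length that includes the length field itself but not the type byte. A minimal standalone sketch of that framing (class and method names are illustrative, not part of H2 or the PostgreSQL driver):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/** Minimal sketch of PostgreSQL v3 backend message framing. */
public class PgFrame {

    /** Builds a backend message: type byte + int32 length (self-inclusive) + payload. */
    public static byte[] frame(char type, byte[] payload) throws IOException {
        ByteArrayOutputStream buff = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buff);
        out.writeByte(type);
        out.writeInt(4 + payload.length); // length covers itself, not the type byte
        out.write(payload);
        return buff.toByteArray();
    }

    /** Reads one message and returns its payload (the type byte is left to the caller). */
    public static byte[] payload(byte[] message) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(message));
        in.readByte(); // the type byte, e.g. 'Z' for ReadyForQuery
        int len = in.readInt() - 4; // remaining payload bytes
        byte[] data = new byte[len];
        in.readFully(data);
        return data;
    }

    public static void main(String... args) throws IOException {
        // ReadyForQuery ('Z') carries a single status byte: 'I' = idle
        byte[] msg = frame('Z', new byte[] { 'I' });
        System.out.println((char) msg[0] + " payload=" + (char) payload(msg)[0]); // Z payload=I
    }
}
```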

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/security/SecureKeyStoreBuilder.java b/modules/h2/src/test/tools/org/h2/dev/security/SecureKeyStoreBuilder.java new file mode 100644 index 0000000000000..3128c1178a732 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/security/SecureKeyStoreBuilder.java @@ -0,0 +1,106 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.security; + +import java.security.Key; +import java.security.KeyStore; +import java.security.KeyStoreException; +import java.security.NoSuchAlgorithmException; +import java.security.UnrecoverableKeyException; +import java.security.cert.Certificate; +import java.security.cert.CertificateEncodingException; +import java.util.Enumeration; + +import org.h2.security.CipherFactory; +import org.h2.util.StringUtils; + +/** + * Tool to generate source code for the SecureSocketFactory. First, create a + * keystore using: + *
    + * keytool -genkey -alias h2 -keyalg RSA -dname "cn=H2"
    + *     -validity 25000 -keypass h2pass -keystore h2.keystore
    + *     -storepass h2pass
    + * 
+ * + * Then run this application to generate the source code. Then replace the code + * in the function SecureSocketFactory.getKeyStore as specified + */ +public class SecureKeyStoreBuilder { + + private SecureKeyStoreBuilder() { + // utility class + } + + /** + * This method is called when executing this application from the command + * line. + * + * @param args the command line parameters + */ + public static void main(String... args) throws Exception { + String password = CipherFactory.KEYSTORE_PASSWORD; + KeyStore store = CipherFactory.getKeyStore(password); + printKeystore(store, password); + } + + private static void printKeystore(KeyStore store, String password) + throws KeyStoreException, NoSuchAlgorithmException, + UnrecoverableKeyException, CertificateEncodingException { + System.out.println("KeyStore store = KeyStore.getInstance(\"" + + store.getType() + "\");"); + System.out.println("store.load(null, password.toCharArray());"); + // System.out.println("keystore provider=" + + // store.getProvider().getName()); + Enumeration<String> en = store.aliases(); + while (en.hasMoreElements()) { + String alias = en.nextElement(); + Key key = store.getKey(alias, password.toCharArray()); + System.out.println( + "KeyFactory keyFactory = KeyFactory.getInstance(\"" + + key.getAlgorithm() + "\");"); + System.out.println("store.load(null, password.toCharArray());"); + String pkFormat = key.getFormat(); + String encoded = StringUtils.convertBytesToHex(key.getEncoded()); + System.out.println( + pkFormat + "EncodedKeySpec keySpec = new " + + pkFormat + "EncodedKeySpec(getBytes(\"" + + encoded + "\"));"); + System.out.println( + "PrivateKey privateKey = keyFactory.generatePrivate(keySpec);"); + System.out.println("Certificate[] certs = {"); + for (Certificate cert : store.getCertificateChain(alias)) { + System.out.println( + " CertificateFactory.getInstance(\""+cert.getType()+"\")."); + String enc = StringUtils.convertBytesToHex(cert.getEncoded()); + System.out.println( + "
generateCertificate(new ByteArrayInputStream(getBytes(\"" + + enc + "\"))),"); + // PublicKey pubKey = cert.getPublicKey(); + // System.out.println(" pubKey algorithm=" + + // pubKey.getAlgorithm()); + // System.out.println(" pubKey format=" + + // pubKey.getFormat()); + // System.out.println(" pubKey format="+ + // Utils.convertBytesToString(pubKey.getEncoded())); + } + System.out.println("};"); + System.out.println("store.setKeyEntry(\"" + alias + + "\", privateKey, password.toCharArray(), certs);"); + } + } + +// private void listCipherSuites(SSLServerSocketFactory f) { +// String[] def = f.getDefaultCipherSuites(); +// for (int i = 0; i < def.length; i++) { +// System.out.println("default = " + def[i]); +// } +// String[] sup = f.getSupportedCipherSuites(); +// for (int i = 0; i < sup.length; i++) { +// System.out.println("supported = " + sup[i]); +// } +// } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/security/package.html b/modules/h2/src/test/tools/org/h2/dev/security/package.html new file mode 100644 index 0000000000000..12570b445dfa4 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/security/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Security tools. + +
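The SecureKeyStoreBuilder above walks a keystore's aliases and dumps each key and certificate chain as source code. The same alias-iteration pattern can be sketched on an empty in-memory keystore with nothing but the JDK (the PKCS12 type and password here are arbitrary choices for the example, not H2's):

```java
import java.security.KeyStore;
import java.util.Enumeration;

public class KeyStoreWalk {
    /** Creates an empty in-memory keystore and counts its aliases. */
    public static int countAliases() throws Exception {
        char[] password = "changeit".toCharArray(); // illustrative password only
        KeyStore store = KeyStore.getInstance("PKCS12");
        store.load(null, password); // null stream = start with an empty keystore
        int n = 0;
        for (Enumeration<String> en = store.aliases(); en.hasMoreElements();) {
            en.nextElement();
            n++;
        }
        return n;
    }

    public static void main(String... args) throws Exception {
        System.out.println("aliases: " + countAliases()); // aliases: 0
    }
}
```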

\ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/sort/InPlaceStableMergeSort.java b/modules/h2/src/test/tools/org/h2/dev/sort/InPlaceStableMergeSort.java new file mode 100644 index 0000000000000..764b0ecf2de5c --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/sort/InPlaceStableMergeSort.java @@ -0,0 +1,322 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.sort; + +import java.util.Comparator; + +/** + * A stable merge sort implementation that uses at most O(log(n)) memory + * and O(n*log(n)*log(n)) time. + * + * @param <T> the element type + */ +public class InPlaceStableMergeSort<T> { + + /** + * The minimum size of the temporary array. It is used to speed up sorting + * small blocks. + */ + private static final int TEMP_SIZE = 1024; + + /** + * Blocks smaller than this number are sorted using binary insertion sort. + * This usually speeds up sorting. + */ + private static final int INSERTION_SORT_SIZE = 16; + + /** + * The data array to sort. + */ + private T[] data; + + /** + * The comparator. + */ + private Comparator<T> comp; + + /** + * The temporary array. + */ + private T[] temp; + + /** + * Sort an array using the given comparator. + * + * @param data the data array to sort + * @param comp the comparator + */ + public static <T> void sort(T[] data, Comparator<T> comp) { + new InPlaceStableMergeSort<T>().sortArray(data, comp); + } + + /** + * Sort an array using the given comparator.
+ * + * @param d the data array to sort + * @param c the comparator + */ + public void sortArray(T[] d, Comparator<T> c) { + this.data = d; + this.comp = c; + int len = Math.max((int) (100 * Math.log(d.length)), TEMP_SIZE); + len = Math.min(d.length, len); + @SuppressWarnings("unchecked") + T[] t = (T[]) new Object[len]; + this.temp = t; + mergeSort(0, d.length - 1); + } + + /** + * Sort a block recursively using merge sort. + * + * @param from the index of the first entry to sort + * @param to the index of the last entry to sort + */ + void mergeSort(int from, int to) { + if (to - from < INSERTION_SORT_SIZE) { + binaryInsertionSort(from, to); + return; + } + int m = (from + to) >>> 1; + mergeSort(from, m); + mergeSort(m + 1, to); + merge(from, m + 1, to); + } + + /** + * Sort a block using the binary insertion sort algorithm. + * + * @param from the index of the first entry to sort + * @param to the index of the last entry to sort + */ + private void binaryInsertionSort(int from, int to) { + for (int i = from + 1; i <= to; i++) { + T x = data[i]; + int ins = binarySearch(x, from, i - 1); + for (int j = i - 1; j >= ins; j--) { + data[j + 1] = data[j]; + } + data[ins] = x; + } + } + + /** + * Find the index of the element that is larger than x. + * + * @param x the element to search + * @param from the index of the first entry + * @param to the index of the last entry + * @return the position + */ + private int binarySearch(T x, int from, int to) { + while (from <= to) { + int m = (from + to) >>> 1; + if (comp.compare(x, data[m]) >= 0) { + from = m + 1; + } else { + to = m - 1; + } + } + return from; + } + + /** + * Merge two arrays.
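The binary insertion sort used above for small blocks locates the insertion point with a binary search and then shifts the tail right by one. The same idea can be sketched standalone (an Integer example with natural ordering, not the exact H2 method):

```java
import java.util.Arrays;
import java.util.Comparator;

public class BinaryInsertion {
    /** Stable insertion sort of d[from..to], using binary search for the insert position. */
    public static <T> void sort(T[] d, int from, int to, Comparator<T> comp) {
        for (int i = from + 1; i <= to; i++) {
            T x = d[i];
            int lo = from, hi = i - 1;
            while (lo <= hi) {                  // find the first element greater than x
                int m = (lo + hi) >>> 1;
                if (comp.compare(x, d[m]) >= 0) {
                    lo = m + 1;                 // >= keeps equal elements stable
                } else {
                    hi = m - 1;
                }
            }
            for (int j = i - 1; j >= lo; j--) { // shift the tail right by one
                d[j + 1] = d[j];
            }
            d[lo] = x;
        }
    }

    public static void main(String... args) {
        Integer[] a = { 5, 1, 4, 1, 3 };
        sort(a, 0, a.length - 1, Comparator.naturalOrder());
        System.out.println(Arrays.toString(a)); // [1, 1, 3, 4, 5]
    }
}
```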
+ * + * @param from the start of the first range + * @param second start of the second range + * @param to the last element of the second range + */ + private void merge(int from, int second, int to) { + int len1 = second - from, len2 = to - second + 1; + if (len1 == 0 || len2 == 0) { + return; + } + if (len1 + len2 == 2) { + if (comp.compare(data[second], data[from]) < 0) { + swap(data, second, from); + } + return; + } + if (len1 <= temp.length) { + System.arraycopy(data, from, temp, 0, len1); + mergeSmall(data, from, temp, 0, len1 - 1, data, second, to); + return; + } else if (len2 <= temp.length) { + System.arraycopy(data, second, temp, 0, len2); + System.arraycopy(data, from, data, to - len1 + 1, len1); + mergeSmall(data, from, data, to - len1 + 1, to, temp, 0, len2 - 1); + return; + } + mergeBig(from, second, to); + } + + /** + * Merge two (large) arrays. This is done recursively by merging the + * beginning of both arrays, and then the end of both arrays. + * + * @param from the start of the first range + * @param second start of the second range + * @param to the last element of the second range + */ + private void mergeBig(int from, int second, int to) { + int len1 = second - from, len2 = to - second + 1; + int firstCut, secondCut, newSecond; + if (len1 > len2) { + firstCut = from + len1 / 2; + secondCut = findLower(data[firstCut], second, to); + int len = secondCut - second; + newSecond = firstCut + len; + } else { + int len = len2 / 2; + secondCut = second + len; + firstCut = findUpper(data[secondCut], from, second - 1); + newSecond = firstCut + len; + } + swapBlocks(firstCut, second, secondCut - 1); + merge(from, firstCut, newSecond - 1); + merge(newSecond, secondCut, to); + } + + /** + * Merge two (small) arrays using the temporary array. This is done to speed + * up merging. 
+ * + * @param target the target array + * @param pos the position of the first element in the target array + * @param s1 the first source array + * @param from1 the index of the first element in the first source array + * @param to1 the index of the last element in the first source array + * @param s2 the second source array + * @param from2 the index of the first element in the second source array + * @param to2 the index of the last element in the second source array + */ + private void mergeSmall(T[] target, int pos, T[] s1, int from1, int to1, + T[] s2, int from2, int to2) { + T x1 = s1[from1], x2 = s2[from2]; + while (true) { + if (comp.compare(x1, x2) <= 0) { + target[pos++] = x1; + if (++from1 > to1) { + System.arraycopy(s2, from2, target, pos, to2 - from2 + 1); + break; + } + x1 = s1[from1]; + } else { + target[pos++] = x2; + if (++from2 > to2) { + System.arraycopy(s1, from1, target, pos, to1 - from1 + 1); + break; + } + x2 = s2[from2]; + } + } + } + + /** + * Find the largest element in the sorted array that is smaller than x. + * + * @param x the element to search + * @param from the index of the first entry + * @param to the index of the last entry + * @return the index of the resulting element + */ + private int findLower(T x, int from, int to) { + int len = to - from + 1, half; + while (len > 0) { + half = len / 2; + int m = from + half; + if (comp.compare(data[m], x) < 0) { + from = m + 1; + len = len - half - 1; + } else { + len = half; + } + } + return from; + } + + /** + * Find the smallest element in the sorted array that is larger than or + * equal to x. 
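The mergeSmall step above is the classic two-pointer merge of two sorted runs. Detached from the in-place machinery it looks like this (simplified to merge into a fresh array; names are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

public class TwoWayMerge {
    /** Stable merge of two sorted arrays; ties take the element from s1 first. */
    public static <T> T[] merge(T[] s1, T[] s2, Comparator<T> comp) {
        T[] target = Arrays.copyOf(s1, s1.length + s2.length);
        int i = 0, j = 0, pos = 0;
        while (i < s1.length && j < s2.length) {
            // <= keeps equal elements from s1 first, which preserves stability
            target[pos++] = comp.compare(s1[i], s2[j]) <= 0 ? s1[i++] : s2[j++];
        }
        while (i < s1.length) {
            target[pos++] = s1[i++]; // drain the remainder of the first run
        }
        while (j < s2.length) {
            target[pos++] = s2[j++]; // drain the remainder of the second run
        }
        return target;
    }

    public static void main(String... args) {
        Integer[] r = merge(new Integer[] { 1, 3, 5 }, new Integer[] { 2, 3, 4 },
                Comparator.naturalOrder());
        System.out.println(Arrays.toString(r)); // [1, 2, 3, 3, 4, 5]
    }
}
```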
+ * + * @param x the element to search + * @param from the index of the first entry + * @param to the index of the last entry + * @return the index of the resulting element + */ + private int findUpper(T x, int from, int to) { + int len = to - from + 1, half; + while (len > 0) { + half = len / 2; + int m = from + half; + if (comp.compare(data[m], x) <= 0) { + from = m + 1; + len = len - half - 1; + } else { + len = half; + } + } + return from; + } + + /** + * Swap the elements of two blocks in the data array. Both blocks are next + * to each other (the second block starts just after the first block ends). + * + * @param from the index of the first element in the first block + * @param second the index of the first element in the second block + * @param to the index of the last element in the second block + */ + private void swapBlocks(int from, int second, int to) { + int len1 = second - from, len2 = to - second + 1; + if (len1 == 0 || len2 == 0) { + return; + } + if (len1 < temp.length) { + System.arraycopy(data, from, temp, 0, len1); + System.arraycopy(data, second, data, from, len2); + System.arraycopy(temp, 0, data, from + len2, len1); + return; + } else if (len2 < temp.length) { + System.arraycopy(data, second, temp, 0, len2); + System.arraycopy(data, from, data, from + len2, len1); + System.arraycopy(temp, 0, data, from, len2); + return; + } + reverseBlock(from, second - 1); + reverseBlock(second, to); + reverseBlock(from, to); + } + + /** + * Reverse all elements in a block. + * + * @param from the index of the first element + * @param to the index of the last element + */ + private void reverseBlock(int from, int to) { + while (from < to) { + T old = data[from]; + data[from++] = data[to]; + data[to--] = old; + } + } + + /** + * Swap two elements in the array. 
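When neither block fits the temporary array, swapBlocks above falls back to the triple-reversal trick: reversing each block and then the whole range rotates two adjacent blocks in place with O(1) extra memory. A standalone sketch:

```java
import java.util.Arrays;

public class BlockSwap {
    /** Exchanges d[from..second-1] with d[second..to] in place via three reversals. */
    public static <T> void swapBlocks(T[] d, int from, int second, int to) {
        reverse(d, from, second - 1); // reverse the first block
        reverse(d, second, to);       // reverse the second block
        reverse(d, from, to);         // reverse the whole range
    }

    private static <T> void reverse(T[] d, int from, int to) {
        while (from < to) {
            T t = d[from];
            d[from++] = d[to];
            d[to--] = t;
        }
    }

    public static void main(String... args) {
        Integer[] a = { 1, 2, 3, 7, 8 };
        swapBlocks(a, 0, 3, 4); // move {7, 8} in front of {1, 2, 3}
        System.out.println(Arrays.toString(a)); // [7, 8, 1, 2, 3]
    }
}
```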
+ * + * @param d the array + * @param a the index of the first element + * @param b the index of the second element + */ + private void swap(T[] d, int a, int b) { + T t = d[a]; + d[a] = d[b]; + d[b] = t; + } + +} \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/sort/InPlaceStableQuicksort.java b/modules/h2/src/test/tools/org/h2/dev/sort/InPlaceStableQuicksort.java new file mode 100644 index 0000000000000..7959425c37d48 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/sort/InPlaceStableQuicksort.java @@ -0,0 +1,310 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.sort; + +import java.util.Comparator; + +/** + * A stable quicksort implementation that uses O(log(n)) memory. It normally + * runs in O(n*log(n)*log(n)), but at most in O(n^2). + * + * @param <T> the element type + */ +public class InPlaceStableQuicksort<T> { + + /** + * The minimum size of the temporary array. It is used to speed up sorting + * small blocks. + */ + private static final int TEMP_SIZE = 1024; + + /** + * Blocks smaller than this number are sorted using binary insertion sort. + * This usually speeds up sorting. + */ + private static final int INSERTION_SORT_SIZE = 16; + + /** + * The data array to sort. + */ + private T[] data; + + /** + * The comparator. + */ + private Comparator<T> comp; + + /** + * The temporary array. + */ + private T[] temp; + + /** + * Sort an array using the given comparator. + * + * @param data the data array to sort + * @param comp the comparator + */ + public static <T> void sort(T[] data, Comparator<T> comp) { + new InPlaceStableQuicksort<T>().sortArray(data, comp); + } + + /** + * Sort an array using the given comparator.
+ * + * @param d the data array to sort + * @param c the comparator + */ + public void sortArray(T[] d, Comparator<T> c) { + this.data = d; + this.comp = c; + int len = Math.max((int) (100 * Math.log(d.length)), TEMP_SIZE); + len = Math.min(d.length, len); + @SuppressWarnings("unchecked") + T[] t = (T[]) new Object[len]; + this.temp = t; + quicksort(0, d.length - 1); + } + + /** + * Sort a block using the quicksort algorithm. + * + * @param from the index of the first entry to sort + * @param to the index of the last entry to sort + */ + private void quicksort(int from, int to) { + while (to > from) { + if (to - from < INSERTION_SORT_SIZE) { + binaryInsertionSort(from, to); + return; + } + T pivot = selectPivot(from, to); + int second = partition(pivot, from, to); + if (second > to) { + // all elements were smaller or equal: retry with the last element + pivot = data[to]; + second = partition(pivot, from, to); + if (second > to) { + second--; + } + } + quicksort(from, second - 1); + from = second; + } + } + + /** + * Sort a block using the binary insertion sort algorithm. + * + * @param from the index of the first entry to sort + * @param to the index of the last entry to sort + */ + private void binaryInsertionSort(int from, int to) { + for (int i = from + 1; i <= to; i++) { + T x = data[i]; + int ins = binarySearch(x, from, i - 1); + for (int j = i - 1; j >= ins; j--) { + data[j + 1] = data[j]; + } + data[ins] = x; + } + } + + /** + * Find the index of the element that is larger than x. + * + * @param x the element to search + * @param from the index of the first entry + * @param to the index of the last entry + * @return the position + */ + private int binarySearch(T x, int from, int to) { + while (from <= to) { + int m = (from + to) >>> 1; + if (comp.compare(x, data[m]) >= 0) { + from = m + 1; + } else { + to = m - 1; + } + } + return from; + } + + /** + * Move all elements that are bigger than the pivot to the end of the list, + * and return the partitioning index.
The partitioning index is the start + * index of the range where all elements are larger than the pivot. If the + * partitioning index is larger than the 'to' index, then all elements are + * smaller or equal to the pivot. + * + * @param pivot the pivot + * @param from the index of the first element + * @param to the index of the last element + * @return the first element of the second partition + */ + private int partition(T pivot, int from, int to) { + if (to - from < temp.length) { + return partitionSmall(pivot, from, to); + } + int m = (from + to + 1) / 2; + int m1 = partition(pivot, from, m - 1); + int m2 = partition(pivot, m, to); + swapBlocks(m1, m, m2 - 1); + return m1 + m2 - m; + } + + /** + * Partition a small block using the temporary array. This will speed up + * partitioning. + * + * @param pivot the pivot + * @param from the index of the first element + * @param to the index of the last element + * @return the first element of the second partition + */ + private int partitionSmall(T pivot, int from, int to) { + int tempIndex = 0, dataIndex = from; + for (int i = from; i <= to; i++) { + T x = data[i]; + if (comp.compare(x, pivot) <= 0) { + if (tempIndex > 0) { + data[dataIndex] = x; + } + dataIndex++; + } else { + temp[tempIndex++] = x; + } + } + if (tempIndex > 0) { + System.arraycopy(temp, 0, data, dataIndex, tempIndex); + } + return dataIndex; + } + + /** + * Swap the elements of two blocks in the data array. Both blocks are next + * to each other (the second block starts just after the first block ends).
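partitionSmall above keeps the partition stable by streaming elements greater than the pivot into the temporary buffer and copying them back after the run of smaller-or-equal elements. The same idea in isolation (illustrative names, whole-array version):

```java
import java.util.Arrays;
import java.util.Comparator;

public class StablePartition {
    /**
     * Moves elements greater than the pivot to the end of d, preserving the
     * relative order inside both groups; returns the start of the second group.
     */
    public static <T> int partition(T[] d, T pivot, Comparator<T> comp) {
        Object[] temp = new Object[d.length];
        int tempIndex = 0, dataIndex = 0;
        for (T x : d) {
            if (comp.compare(x, pivot) <= 0) {
                d[dataIndex++] = x;    // keep in place, in original order
            } else {
                temp[tempIndex++] = x; // buffer the larger elements
            }
        }
        // append the buffered larger elements after the smaller-or-equal run
        System.arraycopy(temp, 0, d, dataIndex, tempIndex);
        return dataIndex;
    }

    public static void main(String... args) {
        Integer[] a = { 5, 1, 8, 3, 9, 2 };
        int second = partition(a, 3, Comparator.<Integer>naturalOrder());
        System.out.println(second + ": " + Arrays.toString(a)); // 3: [1, 3, 2, 5, 8, 9]
    }
}
```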
+ * + * @param from the index of the first element in the first block + * @param second the index of the first element in the second block + * @param to the index of the last element in the second block + */ + private void swapBlocks(int from, int second, int to) { + int len1 = second - from, len2 = to - second + 1; + if (len1 == 0 || len2 == 0) { + return; + } + if (len1 < temp.length) { + System.arraycopy(data, from, temp, 0, len1); + System.arraycopy(data, second, data, from, len2); + System.arraycopy(temp, 0, data, from + len2, len1); + return; + } else if (len2 < temp.length) { + System.arraycopy(data, second, temp, 0, len2); + System.arraycopy(data, from, data, from + len2, len1); + System.arraycopy(temp, 0, data, from, len2); + return; + } + reverseBlock(from, second - 1); + reverseBlock(second, to); + reverseBlock(from, to); + } + + /** + * Reverse all elements in a block. + * + * @param from the index of the first element + * @param to the index of the last element + */ + private void reverseBlock(int from, int to) { + while (from < to) { + T old = data[from]; + data[from++] = data[to]; + data[to--] = old; + } + } + + /** + * Select a pivot. To ensure a good pivot is selected, the median element of + * a sample of the data is calculated. + * + * @param from the index of the first element + * @param to the index of the last element + * @return the pivot + */ + private T selectPivot(int from, int to) { + int count = (int) (6 * Math.log10(to - from)); + count = Math.min(count, temp.length); + int step = (to - from) / count; + for (int i = from, j = 0; i < to; i += step, j++) { + temp[j] = data[i]; + } + T pivot = select(temp, 0, count - 1, count / 2); + return pivot; + } + + /** + * Select the specified element.
+ * + * @param d the array + * @param from the index of the first element + * @param to the index of the last element + * @param k which element to return (1 means the lowest) + * @return the specified element + */ + private T select(T[] d, int from, int to, int k) { + while (true) { + int pivotIndex = (to + from) >>> 1; + int pivotNewIndex = selectPartition(d, from, to, pivotIndex); + int pivotDist = pivotNewIndex - from + 1; + if (pivotDist == k) { + return d[pivotNewIndex]; + } else if (k < pivotDist) { + to = pivotNewIndex - 1; + } else { + k = k - pivotDist; + from = pivotNewIndex + 1; + } + } + } + + /** + * Partition the elements to select an element. + * + * @param d the array + * @param from the index of the first element + * @param to the index of the last element + * @param pivotIndex the index of the pivot + * @return the new index + */ + private int selectPartition(T[] d, int from, int to, int pivotIndex) { + T pivotValue = d[pivotIndex]; + swap(d, pivotIndex, to); + int storeIndex = from; + for (int i = from; i <= to; i++) { + if (comp.compare(d[i], pivotValue) < 0) { + swap(d, storeIndex, i); + storeIndex++; + } + } + swap(d, to, storeIndex); + return storeIndex; + } + + /** + * Swap two elements in the array. + * + * @param d the array + * @param a the index of the first element + * @param b the index of the second element + */ + private void swap(T[] d, int a, int b) { + T t = d[a]; + d[a] = d[b]; + d[b] = t; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/sort/package.html b/modules/h2/src/test/tools/org/h2/dev/sort/package.html new file mode 100644 index 0000000000000..9c12afdff0414 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/sort/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Sorting utilities. + +
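The select/selectPartition pair in InPlaceStableQuicksort is quickselect: repeatedly partition around a pivot and recurse into only the side containing the k-th element. The algorithm on its own, with k counted from 1 as in the javadoc (standalone sketch, not the H2 methods):

```java
import java.util.Comparator;

public class Quickselect {
    /** Returns the k-th smallest element of d (k = 1 means the lowest); reorders d. */
    public static <T> T select(T[] d, int k, Comparator<T> comp) {
        int from = 0, to = d.length - 1;
        while (true) {
            int pivotIndex = (from + to) >>> 1;
            int p = partition(d, from, to, pivotIndex, comp);
            int dist = p - from + 1; // rank of the pivot within d[from..to]
            if (dist == k) {
                return d[p];
            } else if (k < dist) {
                to = p - 1;          // the answer is in the left part
            } else {
                k -= dist;           // skip the left part and the pivot
                from = p + 1;
            }
        }
    }

    /** Lomuto partition: moves the pivot to its final position and returns it. */
    private static <T> int partition(T[] d, int from, int to, int pivotIndex,
            Comparator<T> comp) {
        T pivot = d[pivotIndex];
        swap(d, pivotIndex, to);
        int store = from;
        for (int i = from; i < to; i++) {
            if (comp.compare(d[i], pivot) < 0) {
                swap(d, store++, i);
            }
        }
        swap(d, store, to);
        return store;
    }

    private static <T> void swap(T[] d, int a, int b) {
        T t = d[a];
        d[a] = d[b];
        d[b] = t;
    }

    public static void main(String... args) {
        // the 3rd smallest of {7, 2, 9, 4, 1} is 4
        System.out.println(select(new Integer[] { 7, 2, 9, 4, 1 }, 3,
                Comparator.<Integer>naturalOrder())); // 4
    }
}
```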

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/dev/util/AnsCompression.java b/modules/h2/src/test/tools/org/h2/dev/util/AnsCompression.java new file mode 100644 index 0000000000000..ae0fdf692202e --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/AnsCompression.java @@ -0,0 +1,188 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.nio.ByteBuffer; +import java.util.Arrays; + +/** + * An ANS (Asymmetric Numeral Systems) compression tool. + * It uses the range variant. + */ +public class AnsCompression { + + private static final long TOP = 1L << 24; + private static final int SHIFT = 12; + private static final int MASK = (1 << SHIFT) - 1; + private static final long MAX = (TOP >> SHIFT) << 32; + + private AnsCompression() { + // a utility class + } + + /** + * Count the frequencies of codes in the data, and increment the target + * frequency table. + * + * @param freq the target frequency table + * @param data the data + */ + public static void countFrequencies(int[] freq, byte[] data) { + for (byte x : data) { + freq[x & 0xff]++; + } + } + + /** + * Scale the frequencies to a new total. Frequencies of 0 are kept as 0; + * larger frequencies result in at least 1. 
+ * + * @param freq the (source and target) frequency table + * @param total the target total (sum of all frequencies) + */ + public static void scaleFrequencies(int[] freq, int total) { + int len = freq.length, sum = 0; + for (int x : freq) { + sum += x; + } + // the list of: (error << 8) + index + int[] errors = new int[len]; + int totalError = -total; + for (int i = 0; i < len; i++) { + int old = freq[i]; + if (old == 0) { + continue; + } + int ideal = (int) (old * total * 256L / sum); + // 1 too high so we can decrement if needed + int x = 1 + ideal / 256; + freq[i] = x; + totalError += x; + errors[i] = ((x * 256 - ideal) << 8) + i; + } + // we don't need to sort, we could just calculate + // which one is the nth element - but sorting is simpler + Arrays.sort(errors); + if (totalError < 0) { + // integer overflow + throw new IllegalArgumentException(); + } + while (totalError > 0) { + for (int i = 0; totalError > 0 && i < len; i++) { + int index = errors[i] & 0xff; + if (freq[index] > 1) { + freq[index]--; + totalError--; + } + } + } + } + + /** + * Generate the cumulative frequency table. + * + * @param freq the source frequency table + * @return the cumulative table, with one entry more + */ + static int[] generateCumulativeFrequencies(int[] freq) { + int len = freq.length; + int[] cumulativeFreq = new int[len + 1]; + for (int i = 0, x = 0; i < len; i++) { + x += freq[i]; + cumulativeFreq[i + 1] = x; + } + return cumulativeFreq; + } + + /** + * Generate the frequency-to-code table. + * + * @param cumulativeFreq the cumulative frequency table + * @return the result + */ + private static byte[] generateFrequencyToCode(int[] cumulativeFreq) { + byte[] freqToCode = new byte[1 << SHIFT]; + int x = 0; + byte s = -1; + for (int i : cumulativeFreq) { + while (x < i) { + freqToCode[x++] = s; + } + s++; + } + return freqToCode; + } + + /** + * Encode the data. 
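The ANS coder relies on the cumulative frequency table: entry i holds the sum of all frequencies below symbol i, with one extra entry for the grand total. That construction on its own:

```java
import java.util.Arrays;

public class CumulativeFreq {
    /** Returns c where c[i] = freq[0] + ... + freq[i-1]; c has one extra entry. */
    public static int[] cumulative(int[] freq) {
        int[] c = new int[freq.length + 1];
        for (int i = 0, sum = 0; i < freq.length; i++) {
            sum += freq[i];
            c[i + 1] = sum; // c[0] stays 0: nothing is below the first symbol
        }
        return c;
    }

    public static void main(String... args) {
        // symbol frequencies 3, 1, 0, 4 -> cumulative 0, 3, 4, 4, 8
        System.out.println(Arrays.toString(cumulative(new int[] { 3, 1, 0, 4 })));
    }
}
```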
+ * + * @param freq the frequency table (will be scaled) + * @param data the source data (uncompressed) + * @return the compressed data + */ + public static byte[] encode(int[] freq, byte[] data) { + scaleFrequencies(freq, 1 << SHIFT); + int[] cumulativeFreq = generateCumulativeFrequencies(freq); + ByteBuffer buff = ByteBuffer.allocate(data.length * 2); + buff = encode(data, freq, cumulativeFreq, buff); + return Arrays.copyOfRange(buff.array(), + buff.arrayOffset() + buff.position(), buff.arrayOffset() + buff.limit()); + } + + private static ByteBuffer encode(byte[] data, int[] freq, + int[] cumulativeFreq, ByteBuffer buff) { + long state = TOP; + // encoding happens backwards + int b = buff.limit(); + for (int p = data.length - 1; p >= 0; p--) { + int x = data[p] & 0xff; + int f = freq[x]; + while (state >= MAX * f) { + b -= 4; + buff.putInt(b, (int) state); + state >>>= 32; + } + state = ((state / f) << SHIFT) + (state % f) + cumulativeFreq[x]; + } + b -= 8; + buff.putLong(b, state); + buff.position(b); + return buff.slice(); + } + + /** + * Decode the data. 
+ * + * @param freq the frequency table (will be scaled) + * @param data the compressed data + * @param length the target length + * @return the uncompressed result + */ + public static byte[] decode(int[] freq, byte[] data, int length) { + scaleFrequencies(freq, 1 << SHIFT); + int[] cumulativeFreq = generateCumulativeFrequencies(freq); + byte[] freqToCode = generateFrequencyToCode(cumulativeFreq); + byte[] out = new byte[length]; + decode(data, freq, cumulativeFreq, freqToCode, out); + return out; + } + + private static void decode(byte[] data, int[] freq, int[] cumulativeFreq, + byte[] freqToCode, byte[] out) { + ByteBuffer buff = ByteBuffer.wrap(data); + long state = buff.getLong(); + for (int i = 0, size = out.length; i < size; i++) { + int x = (int) state & MASK; + int c = freqToCode[x] & 0xff; + out[i] = (byte) c; + state = (freq[c] * (state >> SHIFT)) + x - cumulativeFreq[c]; + while (state < TOP) { + state = (state << 32) | (buff.getInt() & 0xffffffffL); + } + } + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ArrayUtils.java b/modules/h2/src/test/tools/org/h2/dev/util/ArrayUtils.java new file mode 100644 index 0000000000000..68a016c8b18fc --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ArrayUtils.java @@ -0,0 +1,65 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.util.Comparator; + +/** + * Array utility methods. 
+ */
+public class ArrayUtils {
+
+    /**
+     * Sort an array using binary insertion sort.
+     *
+     * @param <T> the type
+     * @param d the data
+     * @param left the index of the leftmost element
+     * @param right the index of the rightmost element
+     * @param comp the comparator
+     */
+    public static <T> void binaryInsertionSort(T[] d, int left, int right,
+            Comparator<? super T> comp) {
+        for (int i = left + 1; i <= right; i++) {
+            T t = d[i];
+            int l = left;
+            for (int r = i; l < r;) {
+                int m = (l + r) >>> 1;
+                if (comp.compare(t, d[m]) >= 0) {
+                    l = m + 1;
+                } else {
+                    r = m;
+                }
+            }
+            for (int n = i - l; n > 0;) {
+                d[l + n--] = d[l + n];
+            }
+            d[l] = t;
+        }
+    }
+
+    /**
+     * Sort an array using insertion sort.
+     *
+     * @param <T> the type
+     * @param d the data
+     * @param left the index of the leftmost element
+     * @param right the index of the rightmost element
+     * @param comp the comparator
+     */
+    public static <T> void insertionSort(T[] d, int left, int right,
+            Comparator<? super T> comp) {
+        for (int i = left + 1, j; i <= right; i++) {
+            T t = d[i];
+            for (j = i - 1; j >= left && comp.compare(d[j], t) > 0; j--) {
+                d[j + 1] = d[j];
+            }
+            d[j + 1] = t;
+        }
+    }
+
+}
diff --git a/modules/h2/src/test/tools/org/h2/dev/util/Base64.java b/modules/h2/src/test/tools/org/h2/dev/util/Base64.java
new file mode 100644
index 0000000000000..9198aac3a6d12
--- /dev/null
+++ b/modules/h2/src/test/tools/org/h2/dev/util/Base64.java
@@ -0,0 +1,234 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.dev.util;
+
+import java.util.Random;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * This class converts binary to base64 and vice versa.
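The binary insertion sort above finds each element's insertion point with a binary search before shifting the tail. A self-contained sketch of the same idea on `int[]` (the patch's version is generic over `T` with a `Comparator`):

```java
// Standalone sketch of the binary insertion sort shown above, on int[]
// to keep it self-contained.
class BinaryInsertionSortDemo {
    static void sort(int[] d) {
        for (int i = 1; i < d.length; i++) {
            int t = d[i];
            int l = 0, r = i;
            while (l < r) {              // binary search for the insertion point
                int m = (l + r) >>> 1;   // unsigned shift avoids int overflow
                if (t >= d[m]) {
                    l = m + 1;
                } else {
                    r = m;
                }
            }
            System.arraycopy(d, l, d, l + 1, i - l); // shift the tail right
            d[l] = t;
        }
    }
}
```

The `t >= d[m]` comparison keeps equal elements in their original order, so the sort is stable, matching the `comp.compare(t, d[m]) >= 0` branch above.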
+ */ +public class Base64 { + + private static final byte[] CODE = new byte[64]; + private static final byte[] REV = new byte[256]; + + private Base64() { + // utility class + } + + static { + for (int i = 'A'; i <= 'Z'; i++) { + CODE[i - 'A'] = (byte) i; + CODE[i - 'A' + 26] = (byte) (i + 'a' - 'A'); + } + for (int i = 0; i < 10; i++) { + CODE[i + 2 * 26] = (byte) ('0' + i); + } + CODE[62] = (byte) '+'; + CODE[63] = (byte) '/'; + for (int i = 0; i < 255; i++) { + REV[i] = -1; + } + for (int i = 0; i < 64; i++) { + REV[CODE[i]] = (byte) i; + } + } + + private static void check(String a, String b) { + if (!a.equals(b)) { + throw new RuntimeException("mismatch: " + a + " <> " + b); + } + } + + /** + * Run the tests. + * + * @param args the command line parameters + */ + public static void main(String... args) { + check(new String(encode(new byte[] {})), ""); + check(new String(encode("A".getBytes())), "QQ=="); + check(new String(encode("AB".getBytes())), "QUI="); + check(new String(encode("ABC".getBytes())), "QUJD"); + check(new String(encode("ABCD".getBytes())), "QUJDRA=="); + check(new String(decode(new byte[] {})), ""); + check(new String(decode("QQ==".getBytes())), "A"); + check(new String(decode("QUI=".getBytes())), "AB"); + check(new String(decode("QUJD".getBytes())), "ABC"); + check(new String(decode("QUJDRA==".getBytes())), "ABCD"); + int len = 10000; + test(false, len); + test(true, len); + test(false, len); + test(true, len); + } + + private static void test(boolean fast, int len) { + Random random = new Random(10); + long time = System.nanoTime(); + byte[] bin = new byte[len]; + random.nextBytes(bin); + for (int i = 0; i < len; i++) { + byte[] dec; + if (fast) { + byte[] enc = encodeFast(bin); + dec = decodeFast(enc); + } else { + byte[] enc = encode(bin); + dec = decode(enc); + } + test(bin, dec); + } + time = System.nanoTime() - time; + System.out.println("fast=" + fast + " time=" + TimeUnit.NANOSECONDS.toMillis(time)); + } + + private static void 
test(byte[] in, byte[] out) { + if (in.length != out.length) { + throw new RuntimeException("Length error"); + } + for (int i = 0; i < in.length; i++) { + if (in[i] != out[i]) { + throw new RuntimeException("Error at " + i); + } + } + } + + private static byte[] encode(byte[] bin) { + byte[] code = CODE; + int size = bin.length; + int len = ((size + 2) / 3) * 4; + byte[] enc = new byte[len]; + int fast = size / 3 * 3, i = 0, j = 0; + for (; i < fast; i += 3, j += 4) { + int a = ((bin[i] & 255) << 16) + ((bin[i + 1] & 255) << 8) + (bin[i + 2] & 255); + enc[j] = code[a >> 18]; + enc[j + 1] = code[(a >> 12) & 63]; + enc[j + 2] = code[(a >> 6) & 63]; + enc[j + 3] = code[a & 63]; + } + if (i < size) { + int a = (bin[i++] & 255) << 16; + enc[j] = code[a >> 18]; + if (i < size) { + a += (bin[i] & 255) << 8; + enc[j + 2] = code[(a >> 6) & 63]; + } else { + enc[j + 2] = (byte) '='; + } + enc[j + 1] = code[(a >> 12) & 63]; + enc[j + 3] = (byte) '='; + } + return enc; + } + + private static byte[] encodeFast(byte[] bin) { + byte[] code = CODE; + int size = bin.length; + int len = ((size * 4) + 2) / 3; + byte[] enc = new byte[len]; + int fast = size / 3 * 3, i = 0, j = 0; + for (; i < fast; i += 3, j += 4) { + int a = ((bin[i] & 255) << 16) + ((bin[i + 1] & 255) << 8) + (bin[i + 2] & 255); + enc[j] = code[a >> 18]; + enc[j + 1] = code[(a >> 12) & 63]; + enc[j + 2] = code[(a >> 6) & 63]; + enc[j + 3] = code[a & 63]; + } + if (i < size) { + int a = (bin[i++] & 255) << 16; + enc[j] = code[a >> 18]; + if (i < size) { + a += (bin[i] & 255) << 8; + enc[j + 2] = code[(a >> 6) & 63]; + } + enc[j + 1] = code[(a >> 12) & 63]; + } + return enc; + } + + private static byte[] trim(byte[] enc) { + byte[] rev = REV; + int j = 0, size = enc.length; + if (size > 1 && enc[size - 2] == '=') { + size--; + } + if (size > 0 && enc[size - 1] == '=') { + size--; + } + for (int i = 0; i < size; i++) { + if (rev[enc[i] & 255] < 0) { + j++; + } + } + if (j == 0) { + return enc; + } + byte[] buff = new 
byte[size - j]; + for (int i = 0, k = 0; i < size; i++) { + int x = enc[i] & 255; + if (rev[x] >= 0) { + buff[k++] = (byte) x; + } + } + return buff; + } + + private static byte[] decode(byte[] enc) { + enc = trim(enc); + byte[] rev = REV; + int len = enc.length, size = (len * 3) / 4; + if (len > 0 && enc[len - 1] == '=') { + size--; + if (len > 1 && enc[len - 2] == '=') { + size--; + } + } + byte[] bin = new byte[size]; + int fast = size / 3 * 3, i = 0, j = 0; + for (; i < fast; i += 3, j += 4) { + int a = (rev[enc[j] & 255] << 18) + (rev[enc[j + 1] & 255] << 12) + + (rev[enc[j + 2] & 255] << 6) + rev[enc[j + 3] & 255]; + bin[i] = (byte) (a >> 16); + bin[i + 1] = (byte) (a >> 8); + bin[i + 2] = (byte) a; + } + if (i < size) { + int a = (rev[enc[j] & 255] << 10) + (rev[enc[j + 1] & 255] << 4); + bin[i++] = (byte) (a >> 8); + if (i < size) { + a += rev[enc[j + 2] & 255] >> 2; + bin[i] = (byte) a; + } + } + return bin; + } + + private static byte[] decodeFast(byte[] enc) { + byte[] rev = REV; + int len = enc.length, size = (len * 3) / 4; + byte[] bin = new byte[size]; + int fast = size / 3 * 3, i = 0, j = 0; + for (; i < fast; i += 3, j += 4) { + int a = (rev[enc[j] & 255] << 18) + (rev[enc[j + 1] & 255] << 12) + + (rev[enc[j + 2] & 255] << 6) + rev[enc[j + 3] & 255]; + bin[i] = (byte) (a >> 16); + bin[i + 1] = (byte) (a >> 8); + bin[i + 2] = (byte) a; + } + if (i < size) { + int a = (rev[enc[j] & 255] << 10) + (rev[enc[j + 1] & 255] << 4); + bin[i++] = (byte) (a >> 8); + if (i < size) { + a += rev[enc[j + 2] & 255] >> 2; + bin[i] = (byte) a; + } + } + return bin; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/BinaryArithmeticStream.java b/modules/h2/src/test/tools/org/h2/dev/util/BinaryArithmeticStream.java new file mode 100644 index 0000000000000..32cee8d1cefc0 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/BinaryArithmeticStream.java @@ -0,0 +1,269 @@ +/* + * Copyright 2004-2018 H2 Group. 
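The encoder above packs each 3-byte group into a 24-bit value and emits four 6-bit digits from the `CODE` alphabet. A sketch of one group, which can be cross-checked against the JDK's `java.util.Base64` and against the patch's own self-test (`"ABC"` → `"QUJD"`):

```java
// One 3-byte Base64 group, matching the shifts in encode() above.
class Base64GroupDemo {
    static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    static String encodeGroup(byte b0, byte b1, byte b2) {
        // pack three bytes into a 24-bit value
        int a = ((b0 & 255) << 16) + ((b1 & 255) << 8) + (b2 & 255);
        // emit four 6-bit digits, most significant first
        return "" + ALPHABET.charAt(a >> 18) + ALPHABET.charAt((a >> 12) & 63)
                  + ALPHABET.charAt((a >> 6) & 63) + ALPHABET.charAt(a & 63);
    }
}
```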
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.PriorityQueue; + +/** + * A binary arithmetic stream. + */ +public class BinaryArithmeticStream { + + /** + * The maximum probability. + */ + public static final int MAX_PROBABILITY = (1 << 12) - 1; + + /** + * The low marker. + */ + protected int low; + + /** + * The high marker. + */ + protected int high = 0xffffffff; + + /** + * A binary arithmetic input stream. + */ + public static class In extends BinaryArithmeticStream { + + private final InputStream in; + private int data; + + public In(InputStream in) throws IOException { + this.in = in; + data = ((in.read() & 0xff) << 24) | + ((in.read() & 0xff) << 16) | + ((in.read() & 0xff) << 8) | + (in.read() & 0xff); + } + + /** + * Read a bit. + * + * @param probability the probability that the value is true + * @return the value + */ + public boolean readBit(int probability) throws IOException { + int split = low + probability * ((high - low) >>> 12); + boolean value; + // compare unsigned + if (data + Integer.MIN_VALUE > split + Integer.MIN_VALUE) { + low = split + 1; + value = false; + } else { + high = split; + value = true; + } + while (low >>> 24 == high >>> 24) { + data = (data << 8) | (in.read() & 0xff); + low <<= 8; + high = (high << 8) | 0xff; + } + return value; + } + + /** + * Read a value that is stored as a Golomb code. + * + * @param divisor the divisor + * @return the value + */ + public int readGolomb(int divisor) throws IOException { + int q = 0; + while (readBit(MAX_PROBABILITY / 2)) { + q++; + } + int bit = 31 - Integer.numberOfLeadingZeros(divisor - 1); + int r = 0; + if (bit >= 0) { + int cutOff = (2 << bit) - divisor; + for (; bit > 0; bit--) { + r = (r << 1) + (readBit(MAX_PROBABILITY / 2) ? 
1 : 0); + } + if (r >= cutOff) { + r = (r << 1) + (readBit(MAX_PROBABILITY / 2) ? 1 : 0) - cutOff; + } + } + return q * divisor + r; + } + + } + + /** + * A binary arithmetic output stream. + */ + public static class Out extends BinaryArithmeticStream { + + private final OutputStream out; + + public Out(OutputStream out) { + this.out = out; + } + + /** + * Write a bit. + * + * @param value the value + * @param probability the probability that the value is true + */ + public void writeBit(boolean value, int probability) throws IOException { + int split = low + probability * ((high - low) >>> 12); + if (value) { + high = split; + } else { + low = split + 1; + } + while (low >>> 24 == high >>> 24) { + out.write(high >> 24); + low <<= 8; + high = (high << 8) | 0xff; + } + } + + /** + * Flush the stream. + */ + public void flush() throws IOException { + out.write(high >> 24); + out.write(high >> 16); + out.write(high >> 8); + out.write(high); + } + + /** + * Write the Golomb code of a value. + * + * @param divisor the divisor + * @param value the value + */ + public void writeGolomb(int divisor, int value) throws IOException { + int q = value / divisor; + for (int i = 0; i < q; i++) { + writeBit(true, MAX_PROBABILITY / 2); + } + writeBit(false, MAX_PROBABILITY / 2); + int r = value - q * divisor; + int bit = 31 - Integer.numberOfLeadingZeros(divisor - 1); + if (r < ((2 << bit) - divisor)) { + bit--; + } else { + r += (2 << bit) - divisor; + } + for (; bit >= 0; bit--) { + writeBit(((r >>> bit) & 1) == 1, MAX_PROBABILITY / 2); + } + } + + } + + /** + * A Huffman code table / tree. 
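`writeGolomb()` above emits a unary quotient, a `0` terminator, and then the remainder in truncated binary (shorter codes for small remainders when the divisor is not a power of two). A sketch that builds the same bit pattern as a string, without the arithmetic-coder plumbing:

```java
// The Golomb code emitted by writeGolomb() above, rendered as a bit string.
class GolombDemo {
    static String encode(int divisor, int value) {
        StringBuilder s = new StringBuilder();
        int q = value / divisor;
        for (int i = 0; i < q; i++) {
            s.append('1');          // unary quotient
        }
        s.append('0');              // terminator
        int r = value - q * divisor;
        int bit = 31 - Integer.numberOfLeadingZeros(divisor - 1);
        if (r < ((2 << bit) - divisor)) {
            bit--;                  // small remainders get one bit fewer
        } else {
            r += (2 << bit) - divisor;
        }
        for (; bit >= 0; bit--) {
            s.append((r >>> bit) & 1);
        }
        return s.toString();
    }
}
```

For divisor 3 this gives the classic Golomb-3 codes: 0 → `00`, 2 → `011`, 4 → `1010`, consistent with `getGolombSize()` in the BitStream version later in this patch.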
+ */ + public static class Huffman { + + private final int[] codes; + private final Node tree; + + public Huffman(int[] frequencies) { + PriorityQueue queue = new PriorityQueue<>(); + for (int i = 0; i < frequencies.length; i++) { + int f = frequencies[i]; + if (f > 0) { + queue.offer(new Node(i, f)); + } + } + while (queue.size() > 1) { + queue.offer(new Node(queue.poll(), queue.poll())); + } + codes = new int[frequencies.length]; + tree = queue.poll(); + if (tree != null) { + tree.initCodes(codes, 1); + } + } + + /** + * Write a value. + * + * @param out the output stream + * @param value the value to write + */ + public void write(Out out, int value) throws IOException { + int code = codes[value]; + int bitCount = 30 - Integer.numberOfLeadingZeros(code); + Node n = tree; + for (int i = bitCount; i >= 0; i--) { + boolean goRight = ((code >> i) & 1) == 1; + int prob = (int) ((long) MAX_PROBABILITY * + n.right.frequency / n.frequency); + out.writeBit(goRight, prob); + n = goRight ? n.right : n.left; + } + } + + /** + * Read a value. + * + * @param in the input stream + * @return the value + */ + public int read(In in) throws IOException { + Node n = tree; + while (n.left != null) { + int prob = (int) ((long) MAX_PROBABILITY * + n.right.frequency / n.frequency); + boolean goRight = in.readBit(prob); + n = goRight ? n.right : n.left; + } + return n.value; + } + + } + + /** + * A Huffman code node. 
+ */ + private static class Node implements Comparable { + + int value; + Node left; + Node right; + final int frequency; + + Node(int value, int frequency) { + this.frequency = frequency; + this.value = value; + } + + Node(Node left, Node right) { + this.left = left; + this.right = right; + this.frequency = left.frequency + right.frequency; + } + + @Override + public int compareTo(Node o) { + return frequency - o.frequency; + } + + void initCodes(int[] codes, int bits) { + if (left == null) { + codes[value] = bits; + } else { + left.initCodes(codes, bits << 1); + right.initCodes(codes, (bits << 1) + 1); + } + } + + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/BitStream.java b/modules/h2/src/test/tools/org/h2/dev/util/BitStream.java new file mode 100644 index 0000000000000..bffe8f727c5cf --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/BitStream.java @@ -0,0 +1,298 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.PriorityQueue; + +/** + * A stream that supports Golomb and Huffman coding. + */ +public class BitStream { + + private BitStream() { + // a utility class + } + + /** + * A bit input stream. + */ + public static class In { + + private final InputStream in; + private int current = 0x10000; + + public In(InputStream in) { + this.in = in; + } + + /** + * Read a value that is stored as a Golomb code. 
+ * + * @param divisor the divisor + * @return the value + */ + public int readGolomb(int divisor) { + int q = 0; + while (readBit() == 1) { + q++; + } + int bit = 31 - Integer.numberOfLeadingZeros(divisor - 1); + int r = 0; + if (bit >= 0) { + int cutOff = (2 << bit) - divisor; + for (; bit > 0; bit--) { + r = (r << 1) + readBit(); + } + if (r >= cutOff) { + r = (r << 1) + readBit() - cutOff; + } + } + return q * divisor + r; + } + + /** + * Read a bit. + * + * @return the bit (0 or 1) + */ + public int readBit() { + if (current >= 0x10000) { + try { + current = 0x100 | in.read(); + if (current < 0) { + return -1; + } + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + int bit = (current >>> 7) & 1; + current <<= 1; + return bit; + } + + /** + * Close the stream. This will also close the underlying stream. + */ + public void close() { + try { + in.close(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + } + + /** + * A bit output stream. + */ + public static class Out { + + private final OutputStream out; + private int current = 1; + + public Out(OutputStream out) { + this.out = out; + } + + /** + * Write the Golomb code of a value. + * + * @param divisor the divisor + * @param value the value + */ + public void writeGolomb(int divisor, int value) { + int q = value / divisor; + for (int i = 0; i < q; i++) { + writeBit(1); + } + writeBit(0); + int r = value - q * divisor; + int bit = 31 - Integer.numberOfLeadingZeros(divisor - 1); + if (r < ((2 << bit) - divisor)) { + bit--; + } else { + r += (2 << bit) - divisor; + } + for (; bit >= 0; bit--) { + writeBit((r >>> bit) & 1); + } + } + + /** + * Get the size of the Golomb code for this value. 
+ * + * @param divisor the divisor + * @param value the value + * @return the number of bits + */ + public static int getGolombSize(int divisor, int value) { + int q = value / divisor; + int r = value - q * divisor; + int bit = 31 - Integer.numberOfLeadingZeros(divisor - 1); + if (r < ((2 << bit) - divisor)) { + bit--; + } + return bit + q + 2; + } + + /** + * Write a bit. + * + * @param bit the bit (0 or 1) + */ + public void writeBit(int bit) { + current = (current << 1) + bit; + if (current > 0xff) { + try { + out.write(current & 0xff); + } catch (IOException e) { + throw new IllegalStateException(e); + } + current = 1; + } + } + + /** + * Flush the stream. This will at write at most 7 '0' bits. + * This will also flush the underlying stream. + */ + public void flush() { + while (current > 1) { + writeBit(0); + } + try { + out.flush(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + /** + * Flush and close the stream. + * This will also close the underlying stream. + */ + public void close() { + flush(); + try { + out.close(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + + } + + } + + /** + * A Huffman code. + */ + public static class Huffman { + + private final int[] codes; + private final Node tree; + + public Huffman(int[] frequencies) { + PriorityQueue queue = new PriorityQueue<>(); + for (int i = 0; i < frequencies.length; i++) { + int f = frequencies[i]; + if (f > 0) { + queue.offer(new Node(i, f)); + } + } + while (queue.size() > 1) { + queue.offer(new Node(queue.poll(), queue.poll())); + } + codes = new int[frequencies.length]; + tree = queue.poll(); + if (tree != null) { + tree.initCodes(codes, 1); + } + } + + /** + * Write a value. 
+ * + * @param out the output stream + * @param value the value to write + */ + public void write(BitStream.Out out, int value) { + int code = codes[value]; + int bitCount = 30 - Integer.numberOfLeadingZeros(code); + for (int i = bitCount; i >= 0; i--) { + out.writeBit((code >> i) & 1); + } + } + + /** + * Read a value. + * + * @param in the input stream + * @return the value + */ + public int read(BitStream.In in) { + Node n = tree; + while (n.left != null) { + n = in.readBit() == 1 ? n.right : n.left; + } + return n.value; + } + + /** + * Get the number of bits of the Huffman code for this value. + * + * @param value the value + * @return the number of bits + */ + public int getBitCount(int value) { + int code = codes[value]; + return 30 - Integer.numberOfLeadingZeros(code); + } + + } + + /** + * A Huffman code node. + */ + private static class Node implements Comparable { + + int value; + Node left; + Node right; + private final int frequency; + + Node(int value, int frequency) { + this.frequency = frequency; + this.value = value; + } + + Node(Node left, Node right) { + this.left = left; + this.right = right; + this.frequency = left.frequency + right.frequency; + } + + @Override + public int compareTo(Node o) { + return frequency - o.frequency; + } + + void initCodes(int[] codes, int bits) { + if (left == null) { + codes[value] = bits; + } else { + left.initCodes(codes, bits << 1); + right.initCodes(codes, (bits << 1) + 1); + } + } + + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentLinkedList.java b/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentLinkedList.java new file mode 100644 index 0000000000000..df7e8cb38fec2 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentLinkedList.java @@ -0,0 +1,153 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
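Both `Huffman` classes in this patch build the tree the same way: repeatedly merge the two least-frequent nodes from a priority queue, then record each leaf's root-to-leaf path with a leading `1` as a length sentinel (that is why the bit count is derived from `numberOfLeadingZeros`). A compact sketch of that construction:

```java
import java.util.PriorityQueue;

// Sketch of the Huffman construction used above: merge the two
// least-frequent nodes until one tree remains; codes[i] holds the path
// bits for symbol i with a leading 1 sentinel.
class HuffmanDemo {
    static final class Node implements Comparable<Node> {
        final int value, frequency;
        final Node left, right;

        Node(int value, int frequency) {
            this.value = value;
            this.frequency = frequency;
            this.left = null;
            this.right = null;
        }

        Node(Node left, Node right) {
            this.value = -1;
            this.frequency = left.frequency + right.frequency;
            this.left = left;
            this.right = right;
        }

        @Override
        public int compareTo(Node o) {
            return frequency - o.frequency;
        }
    }

    static int[] buildCodes(int[] freq) {
        PriorityQueue<Node> queue = new PriorityQueue<>();
        for (int i = 0; i < freq.length; i++) {
            if (freq[i] > 0) {
                queue.offer(new Node(i, freq[i]));
            }
        }
        while (queue.size() > 1) {
            queue.offer(new Node(queue.poll(), queue.poll()));
        }
        int[] codes = new int[freq.length];
        init(queue.poll(), codes, 1);
        return codes;
    }

    private static void init(Node n, int[] codes, int bits) {
        if (n == null) {
            return;
        }
        if (n.left == null) {
            codes[n.value] = bits;
        } else {
            init(n.left, codes, bits << 1);
            init(n.right, codes, (bits << 1) + 1);
        }
    }

    // Code length in bits, excluding the sentinel bit.
    static int bitCount(int code) {
        return 31 - Integer.numberOfLeadingZeros(code);
    }
}
```

With frequencies `{1, 1, 4}`, the frequent symbol gets a 1-bit code and the rare ones 2-bit codes, as expected.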
+ * Initial Developer: H2 Group
+ */
+package org.h2.dev.util;
+
+import java.util.Iterator;
+
+import org.h2.mvstore.DataUtils;
+
+
+/**
+ * A very simple linked list that supports concurrent access.
+ * Internally, it uses immutable objects.
+ * It uses recursion and is not meant for long lists.
+ *
+ * @param <K> the key type
+ */
+public class ConcurrentLinkedList<K> {
+
+    /**
+     * The sentinel entry.
+     */
+    static final Entry<?> NULL = new Entry<>(null, null);
+
+    /**
+     * The head entry.
+     */
+    @SuppressWarnings("unchecked")
+    volatile Entry<K> head = (Entry<K>) NULL;
+
+    /**
+     * Get the first element, or null if none.
+     *
+     * @return the first element
+     */
+    public K peekFirst() {
+        Entry<K> x = head;
+        return x.obj;
+    }
+
+    /**
+     * Get the last element, or null if none.
+     *
+     * @return the last element
+     */
+    public K peekLast() {
+        Entry<K> x = head;
+        while (x != NULL && x.next != NULL) {
+            x = x.next;
+        }
+        return x.obj;
+    }
+
+    /**
+     * Add an element at the end.
+     *
+     * @param obj the element
+     */
+    public synchronized void add(K obj) {
+        head = Entry.append(head, obj);
+    }
+
+    /**
+     * Remove the first element, if it matches.
+     *
+     * @param obj the element to remove
+     * @return true if the element matched and was removed
+     */
+    public synchronized boolean removeFirst(K obj) {
+        if (head.obj != obj) {
+            return false;
+        }
+        head = head.next;
+        return true;
+    }
+
+    /**
+     * Remove the last element, if it matches.
+     *
+     * @param obj the element to remove
+     * @return true if the element matched and was removed
+     */
+    public synchronized boolean removeLast(K obj) {
+        if (peekLast() != obj) {
+            return false;
+        }
+        head = Entry.removeLast(head);
+        return true;
+    }
+
+    /**
+     * Get an iterator over all entries.
+     *
+     * @return the iterator
+     */
+    public Iterator<K> iterator() {
+        return new Iterator<K>() {
+
+            Entry<K> current = head;
+
+            @Override
+            public boolean hasNext() {
+                return current != NULL;
+            }
+
+            @Override
+            public K next() {
+                K x = current.obj;
+                current = current.next;
+                return x;
+            }
+
+            @Override
+            public void remove() {
+                throw DataUtils.newUnsupportedOperationException("remove");
+            }
+
+        };
+    }
+
+    /**
+     * An entry in the linked list.
+     */
+    private static class Entry<K> {
+        final K obj;
+        Entry<K> next;
+
+        Entry(K obj, Entry<K> next) {
+            this.obj = obj;
+            this.next = next;
+        }
+
+        @SuppressWarnings("unchecked")
+        static <K> Entry<K> append(Entry<K> list, K obj) {
+            if (list == NULL) {
+                return new Entry<>(obj, (Entry<K>) NULL);
+            }
+            return new Entry<>(list.obj, append(list.next, obj));
+        }
+
+        @SuppressWarnings("unchecked")
+        static <K> Entry<K> removeLast(Entry<K> list) {
+            if (list == NULL || list.next == NULL) {
+                return (Entry<K>) NULL;
+            }
+            return new Entry<>(list.obj, removeLast(list.next));
+        }
+
+    }
+
+}
diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentLinkedListWithTail.java b/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentLinkedListWithTail.java
new file mode 100644
index 0000000000000..c043208d2d569
--- /dev/null
+++ b/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentLinkedListWithTail.java
@@ -0,0 +1,155 @@
+/*
+ * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0,
+ * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.h2.dev.util;
+
+import java.util.Iterator;
+
+import org.h2.mvstore.DataUtils;
+
+/**
+ * A very simple linked list that supports concurrent access.
+ *
+ * @param <K> the key type
+ */
+public class ConcurrentLinkedListWithTail<K> {
+
+    /**
+     * The first entry (if any).
+     */
+    volatile Entry<K> head;
+
+    /**
+     * The last entry (if any).
+     */
+    private volatile Entry<K> tail;
+
+    /**
+     * Get the first element, or null if none.
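Because `ConcurrentLinkedList.add()` rebuilds the list from immutable entries, a reader that captured the head keeps seeing a consistent snapshot while writers append. A tiny sketch of that copy-on-write property (simplified to `int` payloads and a plain `null` terminator instead of the sentinel):

```java
// Sketch of the snapshot property of the immutable linked list above:
// add() allocates a fresh chain, so an old head is never mutated.
class SnapshotListDemo {
    static final class Entry {
        final int obj;
        final Entry next;

        Entry(int obj, Entry next) {
            this.obj = obj;
            this.next = next;
        }
    }

    volatile Entry head;

    synchronized void add(int obj) {
        head = append(head, obj);
    }

    static Entry append(Entry list, int obj) {
        // copy every entry; O(n) per add, fine for short lists
        return list == null
                ? new Entry(obj, null)
                : new Entry(list.obj, append(list.next, obj));
    }

    static int sum(Entry e) {            // walk a captured snapshot
        int s = 0;
        for (; e != null; e = e.next) {
            s += e.obj;
        }
        return s;
    }
}
```

This is the trade-off the class javadoc hints at: reads are lock-free against an immutable chain, at the cost of recursive O(n) copies on every write.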
+ * + * @return the first element + */ + public K peekFirst() { + Entry x = head; + return x == null ? null : x.obj; + } + + /** + * Get the last element, or null if none. + * + * @return the last element + */ + public K peekLast() { + Entry x = tail; + return x == null ? null : x.obj; + } + + /** + * Add an element at the end. + * + * @param obj the element + */ + public void add(K obj) { + Entry x = new Entry<>(obj); + Entry t = tail; + if (t != null) { + t.next = x; + } + tail = x; + if (head == null) { + head = x; + } + } + + /** + * Remove the first element, if it matches. + * + * @param obj the element to remove + * @return true if the element matched and was removed + */ + public boolean removeFirst(K obj) { + Entry x = head; + if (x == null || x.obj != obj) { + return false; + } + if (head == tail) { + tail = x.next; + } + head = x.next; + return true; + } + + /** + * Remove the last element, if it matches. + * + * @param obj the element to remove + * @return true if the element matched and was removed + */ + public boolean removeLast(K obj) { + Entry x = head; + if (x == null) { + return false; + } + Entry prev = null; + while (x.next != null) { + prev = x; + x = x.next; + } + if (x.obj != obj) { + return false; + } + if (prev != null) { + prev.next = null; + } + if (head == tail) { + head = prev; + } + tail = prev; + return true; + } + + /** + * Get an iterator over all entries. + * + * @return the iterator + */ + public Iterator iterator() { + return new Iterator() { + + Entry current = head; + + @Override + public boolean hasNext() { + return current != null; + } + + @Override + public K next() { + K x = current.obj; + current = current.next; + return x; + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException("remove"); + } + + }; + } + + /** + * An entry in the linked list. 
+ */ + private static class Entry { + final K obj; + Entry next; + + Entry(K obj) { + this.obj = obj; + } + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentRing.java b/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentRing.java new file mode 100644 index 0000000000000..1dfc4c023b086 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ConcurrentRing.java @@ -0,0 +1,155 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.util.Iterator; + +import org.h2.mvstore.DataUtils; + +/** + * A ring buffer that supports concurrent access. + * + * @param the key type + */ +public class ConcurrentRing { + + /** + * The ring buffer. + */ + K[] buffer; + + /** + * The read position. + */ + volatile int readPos; + + /** + * The write position. + */ + volatile int writePos; + + @SuppressWarnings("unchecked") + public ConcurrentRing() { + buffer = (K[]) new Object[4]; + } + + /** + * Get the first element, or null if none. + * + * @return the first element + */ + public K peekFirst() { + return buffer[getIndex(readPos)]; + } + + /** + * Get the last element, or null if none. + * + * @return the last element + */ + public K peekLast() { + return buffer[getIndex(writePos - 1)]; + } + + /** + * Add an element at the end. + * + * @param obj the element + */ + public void add(K obj) { + buffer[getIndex(writePos)] = obj; + writePos++; + if (writePos - readPos >= buffer.length) { + // double the capacity + @SuppressWarnings("unchecked") + K[] b2 = (K[]) new Object[buffer.length * 2]; + for (int i = readPos; i < writePos; i++) { + K x = buffer[getIndex(i)]; + int i2 = i & b2.length - 1; + b2[i2] = x; + } + buffer = b2; + } + } + + /** + * Remove the first element, if it matches. 
+ * + * @param obj the element to remove + * @return true if the element matched and was removed + */ + public boolean removeFirst(K obj) { + int p = readPos; + int idx = getIndex(p); + if (buffer[idx] != obj) { + return false; + } + buffer[idx] = null; + readPos = p + 1; + return true; + } + + /** + * Remove the last element, if it matches. + * + * @param obj the element to remove + * @return true if the element matched and was removed + */ + public boolean removeLast(K obj) { + int p = writePos; + int idx = getIndex(p - 1); + if (buffer[idx] != obj) { + return false; + } + buffer[idx] = null; + writePos = p - 1; + return true; + } + + /** + * Get the index in the array of the given position. + * + * @param pos the position + * @return the index + */ + int getIndex(int pos) { + return pos & (buffer.length - 1); + } + + /** + * Get an iterator over all entries. + * + * @return the iterator + */ + public Iterator iterator() { + return new Iterator() { + + int offset; + + @Override + public boolean hasNext() { + return readPos + offset < writePos; + } + + @Override + public K next() { + if (buffer[getIndex(readPos + offset)] == null) { + System.out.println("" + readPos); + System.out.println("" + getIndex(readPos + offset)); + System.out.println("null?"); + } + return buffer[getIndex(readPos + offset++)]; + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException("remove"); + } + + }; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/FileContentHash.java b/modules/h2/src/test/tools/org/h2/dev/util/FileContentHash.java new file mode 100644 index 0000000000000..a54e9b242a49c --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/FileContentHash.java @@ -0,0 +1,171 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
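`getIndex()` in `ConcurrentRing` relies on the buffer length being a power of two: `pos & (length - 1)` equals `pos % length` for non-negative positions, and, unlike Java's `%` (which yields negative results for negative operands), it also wraps correctly once the `int` positions overflow. A minimal sketch:

```java
// The index mask used by ConcurrentRing.getIndex() above.
// capacity must be a power of two for the mask to equal pos mod capacity.
class RingIndexDemo {
    static int index(int pos, int capacity) {
        return pos & (capacity - 1);
    }
}
```

This is also why the grow path doubles the buffer (`buffer.length * 2`): doubling preserves the power-of-two invariant the mask depends on.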
+ * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.IOException; +import java.io.InputStream; +import java.nio.charset.StandardCharsets; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; + +import org.h2.store.fs.FileUtils; +import org.h2.util.SortedProperties; +import org.h2.util.StringUtils; + +/** + * A utility to calculate the content hash of files. It should help detect + * duplicate files and differences between directories. + */ +public class FileContentHash { + + // find empty directories: + // find . -type d -empty + // find . -name .hash.prop -delete + + private static final boolean WRITE_HASH_INDEX = true; + private static final String HASH_INDEX = ".hash.prop"; + private static final int MIN_SIZE = 0; + private final HashMap hashes = new HashMap<>(); + private long nextLog; + + /** + * Run the viewer. + * + * @param args the command line arguments + */ + public static void main(String... args) throws IOException { + new FileContentHash().runTool(args); + } + + private void runTool(String... 
args) throws IOException { + if (args.length == 0) { + System.out.println("Usage: java " + getClass().getName() + " "); + return; + } + for (int i = 0; i < args.length; i++) { + Info info = hash(args[i]); + System.out.println("size: " + info.size); + } + } + + private static MessageDigest createMessageDigest() { + try { + return MessageDigest.getInstance("SHA-256"); + } catch (NoSuchAlgorithmException e) { + throw new RuntimeException(e); + } + } + + private Info hash(String path) throws IOException { + if (FileUtils.isDirectory(path)) { + long totalSize = 0; + SortedProperties propOld; + SortedProperties propNew = new SortedProperties(); + String hashFileName = path + "/" + HASH_INDEX; + if (FileUtils.exists(hashFileName)) { + propOld = SortedProperties.loadProperties(hashFileName); + } else { + propOld = new SortedProperties(); + } + List list = FileUtils.newDirectoryStream(path); + Collections.sort(list); + MessageDigest mdDir = createMessageDigest(); + for (String f : list) { + String name = FileUtils.getName(f); + if (name.equals(HASH_INDEX)) { + continue; + } + long length = FileUtils.size(f); + String entry = "name_" + name + + "-mod_" + FileUtils.lastModified(f) + + "-size_" + length; + String hash = propOld.getProperty(entry); + if (hash == null || FileUtils.isDirectory(f)) { + Info info = hash(f); + byte[] b = info.hash; + hash = StringUtils.convertBytesToHex(b); + totalSize += info.size; + entry = "name_" + name + + "-mod_" + FileUtils.lastModified(f) + + "-size_" + info.size; + } else { + totalSize += length; + checkCollision(f, length, StringUtils.convertHexToBytes(hash)); + } + propNew.put(entry, hash); + mdDir.update(entry.getBytes(StandardCharsets.UTF_8)); + mdDir.update(hash.getBytes(StandardCharsets.UTF_8)); + } + String oldFile = propOld.toString(); + String newFile = propNew.toString(); + if (!oldFile.equals(newFile)) { + if (WRITE_HASH_INDEX) { + propNew.store(path + "/" + HASH_INDEX); + } + } + Info info = new Info(); + info.hash = 
mdDir.digest(); + info.size = totalSize; + return info; + } + MessageDigest md = createMessageDigest(); + InputStream in = FileUtils.newInputStream(path); + long length = FileUtils.size(path); + byte[] buff = new byte[1024 * 1024]; + while (true) { + int len = in.read(buff); + if (len < 0) { + break; + } + md.update(buff, 0, len); + long t = System.nanoTime(); + if (nextLog == 0 || t > nextLog) { + System.out.println("Checking " + path); + nextLog = t + 5000 * 1000000L; + } + } + in.close(); + byte[] b = md.digest(); + checkCollision(path, length, b); + Info info = new Info(); + info.hash = b; + info.size = length; + return info; + } + + private void checkCollision(String path, long length, byte[] hash) { + if (length < MIN_SIZE) { + return; + } + String s = StringUtils.convertBytesToHex(hash); + String old = hashes.get(s); + if (old != null) { + System.out.println("Collision: " + old + "\n" + path + "\n"); + } else { + hashes.put(s, path); + } + } + + /** + * The info for a file. + */ + static class Info { + + /** + * The content hash. + */ + byte[] hash; + + /** + * The size in bytes. + */ + long size; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/FileViewer.java b/modules/h2/src/test/tools/org/h2/dev/util/FileViewer.java new file mode 100644 index 0000000000000..1bca45aa83414 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/FileViewer.java @@ -0,0 +1,214 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.RandomAccessFile; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.concurrent.TimeUnit; + +import org.h2.message.DbException; +import org.h2.util.Tool; + +/** + * A text file viewer that supports very large files. 
+ */ +public class FileViewer extends Tool { + + /** + * Run the viewer. + * + * @param args the command line arguments + */ + public static void main(String... args) throws SQLException { + new FileViewer().runTool(args); + } + + @Override + protected void showUsage() { + out.println("A text file viewer that supports very large files."); + out.println("java "+getClass().getName() + "\n" + + " -file <file> The name of the file to view\n" + + " [-find <text>] Find a string and display the next lines\n" + + " [-start <x>] Start at the given position\n" + + " [-head] Display the first lines\n" + + " [-tail] Display the last lines\n" + + " [-lines <x>] Display only x lines (default: 30)\n" + + " [-quiet] Do not print progress information"); + } + + @Override + public void runTool(String... args) throws SQLException { + String file = null; + String find = null; + boolean head = false, tail = false; + int lines = 30; + boolean quiet = false; + long start = 0; + for (int i = 0; args != null && i < args.length; i++) { + String arg = args[i]; + if (arg.equals("-file")) { + file = args[++i]; + } else if (arg.equals("-find")) { + find = args[++i]; + } else if (arg.equals("-start")) { + start = Long.decode(args[++i]).longValue(); + } else if (arg.equals("-head")) { + head = true; + } else if (arg.equals("-tail")) { + tail = true; + } else if (arg.equals("-lines")) { + lines = Integer.decode(args[++i]).intValue(); + } else if (arg.equals("-quiet")) { + quiet = true; + } else if (arg.equals("-help") || arg.equals("-?")) { + showUsage(); + return; + } else { + showUsageAndThrowUnsupportedOption(arg); + } + } + if (file == null) { + showUsage(); + return; + } + if (!head && !tail && find == null) { + head = true; + } + try { + process(file, find, head, tail, start, lines, quiet); + } catch (IOException e) { + throw DbException.toSQLException(e); + } + } + + private static void process(String fileName, String find, + boolean head, boolean tail, long start, int lines, + boolean quiet) throws 
IOException { + RandomAccessFile file = new RandomAccessFile(fileName, "r"); + long length = file.length(); + if (head) { + file.seek(start); + list(start, "Head", readLines(file, lines)); + } + if (find != null) { + file.seek(start); + long pos = find(file, find.getBytes(), quiet); + if (pos >= 0) { + file.seek(pos); + list(pos, "Found " + find, readLines(file, lines)); + } + } + if (tail) { + long pos = length - 100L * lines; + ArrayList list = null; + while (pos > 0) { + file.seek(pos); + list = readLines(file, Integer.MAX_VALUE); + if (list.size() > lines) { + break; + } + pos -= 100L * lines; + } + // remove the first (maybe partial) line + list.remove(0); + while (list.size() > lines) { + list.remove(0); + } + list(pos, "Tail", list); + } + } + + private static long find(RandomAccessFile file, byte[] find, boolean quiet) + throws IOException { + long pos = file.getFilePointer(); + long length = file.length(); + int bufferSize = 4 * 1024; + byte[] data = new byte[bufferSize * 2]; + long last = System.nanoTime(); + while (pos < length) { + System.arraycopy(data, bufferSize, data, 0, bufferSize); + if (pos + bufferSize > length) { + file.readFully(data, bufferSize, (int) (length - pos)); + return find(data, find, (int) (bufferSize + length - pos - find.length)); + } + if (!quiet) { + long now = System.nanoTime(); + if (now > last + TimeUnit.SECONDS.toNanos(5)) { + System.out.println((100 * pos / length) + "%"); + last = now; + } + } + file.readFully(data, bufferSize, bufferSize); + int f = find(data, find, bufferSize); + if (f >= 0) { + return f + pos - bufferSize; + } + pos += bufferSize; + } + return -1; + } + + private static int find(byte[] data, byte[] find, int max) { + outer: + for (int i = 0; i < max; i++) { + for (int j = 0; j < find.length; j++) { + if (data[i + j] != find[j]) { + continue outer; + } + } + return i; + } + return -1; + } + + private static void list(long pos, String header, ArrayList list) { + 
System.out.println("-----------------------------------------------"); + System.out.println("[" + pos + "]: " + header); + System.out.println("-----------------------------------------------"); + for (String l : list) { + System.out.println(l); + } + System.out.println("-----------------------------------------------"); + } + + private static ArrayList<String> readLines(RandomAccessFile file, + int maxLines) throws IOException { + ArrayList<String> lines = new ArrayList<>(); + ByteArrayOutputStream buff = new ByteArrayOutputStream(100); + boolean lastNewline = false; + while (maxLines > 0) { + int x = file.read(); + if (x < 0) { + break; + } + if (x == '\r' || x == '\n') { + if (!lastNewline) { + maxLines--; + lastNewline = true; + byte[] data = buff.toByteArray(); + String s = new String(data); + lines.add(s); + buff.reset(); + } + continue; + } + if (lastNewline) { + lastNewline = false; + } + buff.write(x); + } + byte[] data = buff.toByteArray(); + if (data.length > 0) { + String s = new String(data); + lines.add(s); + } + return lines; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray.java b/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray.java new file mode 100644 index 0000000000000..362d5d7197746 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray.java @@ -0,0 +1,177 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.util.Arrays; +import java.util.Iterator; +import org.h2.mvstore.DataUtils; + +/** + * An immutable array. + * + * @param <K> the type + */ +public final class ImmutableArray<K> implements Iterable<K> { + + private static final ImmutableArray<?> EMPTY = new ImmutableArray<>( + new Object[0]); + + /** + * The array. 
+ */ + private final K[] array; + + private ImmutableArray(K[] array) { + this.array = array; + } + + /** + * Get the entry at this index. + * + * @param index the index + * @return the entry + */ + public K get(int index) { + return array[index]; + } + + /** + * Get the length. + * + * @return the length + */ + public int length() { + return array.length; + } + + /** + * Set the entry at this index. + * + * @param index the index + * @param obj the object + * @return the new immutable array + */ + public ImmutableArray set(int index, K obj) { + K[] array = this.array.clone(); + array[index] = obj; + return new ImmutableArray<>(array); + } + + /** + * Insert an entry at this index. + * + * @param index the index + * @param obj the object + * @return the new immutable array + */ + public ImmutableArray insert(int index, K obj) { + int len = array.length + 1; + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[len]; + DataUtils.copyWithGap(this.array, array, this.array.length, index); + array[index] = obj; + return new ImmutableArray<>(array); + } + + /** + * Remove the entry at this index. + * + * @param index the index + * @return the new immutable array + */ + public ImmutableArray remove(int index) { + int len = array.length - 1; + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[len]; + DataUtils.copyExcept(this.array, array, this.array.length, index); + return new ImmutableArray<>(array); + } + + /** + * Get a sub-array. + * + * @param fromIndex the index of the first entry + * @param toIndex the end index, plus one + * @return the new immutable array + */ + public ImmutableArray subArray(int fromIndex, int toIndex) { + return new ImmutableArray<>(Arrays.copyOfRange(array, fromIndex, toIndex)); + } + + /** + * Create an immutable array. + * + * @param array the data + * @return the new immutable array + */ + @SafeVarargs + public static ImmutableArray create(K... 
array) { + return new ImmutableArray<>(array); + } + + /** + * Get the data. + * + * @return the data + */ + public K[] array() { + return array; + } + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + for (K obj : this) { + buff.append(' ').append(obj); + } + return buff.toString(); + } + + /** + * Get an empty immutable array. + * + * @param <K> the key type + * @return the array + */ + @SuppressWarnings("unchecked") + public static <K> ImmutableArray<K> empty() { + return (ImmutableArray<K>) EMPTY; + } + + /** + * Get an iterator over all entries. + * + * @return the iterator + */ + @Override + public Iterator<K> iterator() { + return new Iterator<K>() { + + ImmutableArray<K> a = ImmutableArray.this; + int index; + + @Override + public boolean hasNext() { + return index < a.length(); + } + + @Override + public K next() { + return a.get(index++); + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException("remove"); + } + + }; + } + +} + + + diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray2.java b/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray2.java new file mode 100644 index 0000000000000..9ce1e8729c266 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray2.java @@ -0,0 +1,215 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.util.Arrays; +import java.util.Iterator; +import java.util.concurrent.atomic.AtomicBoolean; +import org.h2.mvstore.DataUtils; + +/** + * An immutable array. + * + * @param <K> the type + */ +public final class ImmutableArray2<K> implements Iterable<K> { + + private static final ImmutableArray2<?> EMPTY = new ImmutableArray2<>( + new Object[0], 0); + + /** + * The array. 
+ */ + private final K[] array; + private final int length; + private AtomicBoolean canExtend; + + private ImmutableArray2(K[] array, int len) { + this.array = array; + this.length = len; + } + + private ImmutableArray2(K[] array, int len, boolean canExtend) { + this.array = array; + this.length = len; + if (canExtend) { + this.canExtend = new AtomicBoolean(true); + } + } + + /** + * Get the entry at this index. + * + * @param index the index + * @return the entry + */ + public K get(int index) { + if (index >= length) { + throw new IndexOutOfBoundsException(); + } + return array[index]; + } + + /** + * Get the length. + * + * @return the length + */ + public int length() { + return length; + } + + /** + * Set the entry at this index. + * + * @param index the index + * @param obj the object + * @return the new immutable array + */ + public ImmutableArray2 set(int index, K obj) { + K[] a2 = Arrays.copyOf(array, length); + a2[index] = obj; + return new ImmutableArray2<>(a2, length); + } + + /** + * Insert an entry at this index. + * + * @param index the index + * @param obj the object + * @return the new immutable array + */ + public ImmutableArray2 insert(int index, K obj) { + int len = length + 1; + int newLen = len; + boolean extendable; + if (index == len - 1) { + AtomicBoolean x = canExtend; + if (x != null) { + // can set it to null early - we anyway + // reset the flag, so it is no longer useful + canExtend = null; + if (array.length > index && x.getAndSet(false)) { + array[index] = obj; + return new ImmutableArray2<>(array, len, true); + } + } + extendable = true; + newLen = len + 4; + } else { + extendable = false; + } + @SuppressWarnings("unchecked") + K[] a2 = (K[]) new Object[newLen]; + DataUtils.copyWithGap(array, a2, length, index); + a2[index] = obj; + return new ImmutableArray2<>(a2, len, extendable); + } + + /** + * Remove the entry at this index. 
+ * + * @param index the index + * @return the new immutable array + */ + public ImmutableArray2 remove(int index) { + int len = length - 1; + if (index == len) { + return new ImmutableArray2<>(array, len); + } + @SuppressWarnings("unchecked") + K[] a2 = (K[]) new Object[len]; + DataUtils.copyExcept(array, a2, length, index); + return new ImmutableArray2<>(a2, len); + } + + /** + * Get a sub-array. + * + * @param fromIndex the index of the first entry + * @param toIndex the end index, plus one + * @return the new immutable array + */ + public ImmutableArray2 subArray(int fromIndex, int toIndex) { + int len = toIndex - fromIndex; + if (fromIndex == 0) { + return new ImmutableArray2<>(array, len); + } + return new ImmutableArray2<>(Arrays.copyOfRange(array, fromIndex, toIndex), len); + } + + /** + * Create an immutable array. + * + * @param array the data + * @return the new immutable array + */ + @SafeVarargs + public static ImmutableArray2 create(K... array) { + return new ImmutableArray2<>(array, array.length); + } + + /** + * Get the data. + * + * @return the data + */ + public K[] array() { + return array; + } + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + for (K obj : this) { + buff.append(' ').append(obj); + } + return buff.toString(); + } + + /** + * Get an empty immutable array. + * + * @param the key type + * @return the array + */ + @SuppressWarnings("unchecked") + public static ImmutableArray2 empty() { + return (ImmutableArray2) EMPTY; + } + + /** + * Get an iterator over all entries. 
+ * + * @return the iterator + */ + @Override + public Iterator<K> iterator() { + return new Iterator<K>() { + + ImmutableArray2<K> a = ImmutableArray2.this; + int index; + + @Override + public boolean hasNext() { + return index < a.length(); + } + + @Override + public K next() { + return a.get(index++); + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException("remove"); + } + + }; + } + +} + diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray3.java b/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray3.java new file mode 100644 index 0000000000000..e01e0b67abb8c --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ImmutableArray3.java @@ -0,0 +1,457 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.util.Iterator; +import org.h2.mvstore.DataUtils; + +/** + * An immutable array. + * + * @param <K> the type + */ +public abstract class ImmutableArray3<K> implements Iterable<K> { + + private static final int MAX_LEVEL = 4; + + private static final ImmutableArray3<?> EMPTY = new Plain<>(new Object[0]); + + /** + * Get the length. + * + * @return the length + */ + public abstract int length(); + + /** + * Get the entry at this index. + * + * @param index the index + * @return the entry + */ + public abstract K get(int index); + + /** + * Set the entry at this index. + * + * @param index the index + * @param obj the object + * @return the new immutable array + */ + public abstract ImmutableArray3<K> set(int index, K obj); + + /** + * Insert an entry at this index. + * + * @param index the index + * @param obj the object + * @return the new immutable array + */ + public abstract ImmutableArray3<K> insert(int index, K obj); + + /** + * Remove the entry at this index. 
+ * + * @param index the index + * @return the new immutable array + */ + public abstract ImmutableArray3 remove(int index); + + /** + * Get a sub-array. + * + * @param fromIndex the index of the first entry + * @param toIndex the end index, plus one + * @return the new immutable array + */ + public ImmutableArray3 subArray(int fromIndex, int toIndex) { + int len = toIndex - fromIndex; + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[len]; + for (int i = 0; i < len; i++) { + array[i] = get(fromIndex + i); + } + return new Plain<>(array); + } + + /** + * Create an immutable array. + * + * @param array the data + * @return the new immutable array + */ + @SafeVarargs + public static ImmutableArray3 create(K... array) { + return new Plain<>(array); + } + + /** + * Get the data. + * + * @return the data + */ + public K[] array() { + int len = length(); + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[len]; + for (int i = 0; i < len; i++) { + array[i] = get(i); + } + return array; + } + + /** + * Get the level of "abstraction". + * + * @return the level + */ + abstract int level(); + + @Override + public String toString() { + StringBuilder buff = new StringBuilder(); + for (K obj : this) { + buff.append(' ').append(obj); + } + return buff.toString(); + } + + /** + * Get an empty immutable array. + * + * @param the key type + * @return the array + */ + @SuppressWarnings("unchecked") + public static ImmutableArray3 empty() { + return (ImmutableArray3) EMPTY; + } + + /** + * Get an iterator over all entries. 
+ * + * @return the iterator + */ + @Override + public Iterator iterator() { + return new Iterator() { + + ImmutableArray3 a = ImmutableArray3.this; + int index; + + @Override + public boolean hasNext() { + return index < a.length(); + } + + @Override + public K next() { + return a.get(index++); + } + + @Override + public void remove() { + throw DataUtils.newUnsupportedOperationException("remove"); + } + + }; + } + + + /** + * An immutable array backed by an array. + * + * @param the type + */ + static class Plain extends ImmutableArray3 { + + /** + * The array. + */ + private final K[] array; + + public Plain(K[] array) { + this.array = array; + } + + @Override + public K get(int index) { + return array[index]; + } + + @Override + public int length() { + return array.length; + } + + @Override + public ImmutableArray3 set(int index, K obj) { + return new Set<>(this, index, obj); + } + + @Override + public ImmutableArray3 insert(int index, K obj) { + return new Insert<>(this, index, obj); + } + + @Override + public ImmutableArray3 remove(int index) { + return new Remove<>(this, index); + } + + /** + * Get a plain array with the given entry updated. + * + * @param the type + * @param base the base type + * @param index the index + * @param obj the object + * @return the immutable array + */ + static ImmutableArray3 set(ImmutableArray3 base, int index, K obj) { + int len = base.length(); + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[len]; + for (int i = 0; i < len; i++) { + array[i] = i == index ? obj : base.get(i); + } + return new Plain<>(array); + } + + /** + * Get a plain array with the given entry inserted. 
+ * + * @param the type + * @param base the base type + * @param index the index + * @param obj the object + * @return the immutable array + */ + static ImmutableArray3 insert(ImmutableArray3 base, int index, K obj) { + int len = base.length() + 1; + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[len]; + for (int i = 0; i < len; i++) { + array[i] = i == index ? obj : i < index ? base.get(i) : base.get(i - 1); + } + return new Plain<>(array); + } + + /** + * Get a plain array with the given entry removed. + * + * @param the type + * @param base the base type + * @param index the index + * @return the immutable array + */ + static ImmutableArray3 remove(ImmutableArray3 base, int index) { + int len = base.length() - 1; + @SuppressWarnings("unchecked") + K[] array = (K[]) new Object[len]; + for (int i = 0; i < len; i++) { + array[i] = i < index ? base.get(i) : base.get(i + 1); + } + return new Plain<>(array); + } + + @Override + int level() { + return 0; + } + + } + + /** + * An immutable array backed by another immutable array, with one element + * changed. + * + * @param the type + */ + static class Set extends ImmutableArray3 { + + private final int index; + private final ImmutableArray3 base; + private final K obj; + + Set(ImmutableArray3 base, int index, K obj) { + this.base = base; + this.index = index; + this.obj = obj; + } + + @Override + public int length() { + return base.length(); + } + + @Override + public K get(int index) { + return this.index == index ? 
obj : base.get(index); + } + + @Override + public ImmutableArray3 set(int index, K obj) { + if (index == this.index) { + return new Set<>(base, index, obj); + } else if (level() < MAX_LEVEL) { + return new Set<>(this, index, obj); + } + return Plain.set(this, index, obj); + } + + @Override + public ImmutableArray3 insert(int index, K obj) { + if (level() < MAX_LEVEL) { + return new Insert<>(this, index, obj); + } + return Plain.insert(this, index, obj); + } + + @Override + public ImmutableArray3 remove(int index) { + if (level() < MAX_LEVEL) { + return new Remove<>(this, index); + } + return Plain.remove(this, index); + } + + @Override + int level() { + return base.level() + 1; + } + + } + + /** + * An immutable array backed by another immutable array, with one element + * added. + * + * @param the type + */ + static class Insert extends ImmutableArray3 { + + private final int index; + private final ImmutableArray3 base; + private final K obj; + + Insert(ImmutableArray3 base, int index, K obj) { + this.base = base; + this.index = index; + this.obj = obj; + } + + @Override + public ImmutableArray3 set(int index, K obj) { + if (level() < MAX_LEVEL) { + return new Set<>(this, index, obj); + } + return Plain.set(this, index, obj); + } + + @Override + public ImmutableArray3 insert(int index, K obj) { + if (level() < MAX_LEVEL) { + return new Insert<>(this, index, obj); + } + return Plain.insert(this, index, obj); + } + + @Override + public ImmutableArray3 remove(int index) { + if (index == this.index) { + return base; + } else if (level() < MAX_LEVEL) { + return new Remove<>(this, index); + } + return Plain.remove(this, index); + } + + @Override + public int length() { + return base.length() + 1; + } + + @Override + public K get(int index) { + if (index == this.index) { + return obj; + } else if (index < this.index) { + return base.get(index); + } + return base.get(index - 1); + } + + @Override + int level() { + return base.level() + 1; + } + + } + + /** + * An 
immutable array backed by another immutable array, with one element + * removed. + * + * @param <K> the type + */ + static class Remove<K> extends ImmutableArray3<K> { + + private final int index; + private final ImmutableArray3<K> base; + + Remove(ImmutableArray3<K> base, int index) { + this.base = base; + this.index = index; + } + + @Override + public ImmutableArray3<K> set(int index, K obj) { + if (level() < MAX_LEVEL) { + return new Set<>(this, index, obj); + } + return Plain.set(this, index, obj); + } + + @Override + public ImmutableArray3<K> insert(int index, K obj) { + if (index == this.index) { + return base.set(index, obj); + } else if (level() < MAX_LEVEL) { + return new Insert<>(this, index, obj); + } + return Plain.insert(this, index, obj); + } + + @Override + public ImmutableArray3<K> remove(int index) { + if (level() < MAX_LEVEL) { + return new Remove<>(this, index); + } + return Plain.remove(this, index); + } + + @Override + public int length() { + return base.length() - 1; + } + + @Override + public K get(int index) { + if (index < this.index) { + return base.get(index); + } + return base.get(index + 1); + } + + @Override + int level() { + return base.level() + 1; + } + + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/JavaProcessKiller.java b/modules/h2/src/test/tools/org/h2/dev/util/JavaProcessKiller.java new file mode 100644 index 0000000000000..67c9a1aced30a --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/JavaProcessKiller.java @@ -0,0 +1,126 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.ByteArrayOutputStream; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.charset.StandardCharsets; +import java.util.Map.Entry; +import java.util.TreeMap; + +/** + * Allows killing a certain Java process. 
+ */ +public class JavaProcessKiller { + + /** + * Kill a certain Java process. The JDK (jps) needs to be in the path. + * + * @param args the Java process name as listed by jps -l. If not set, the + * Java processes are listed + */ + + public static void main(String... args) { + new JavaProcessKiller().run(args); + } + + private void run(String... args) { + TreeMap<Integer, String> map = getProcesses(); + System.out.println("Processes:"); + System.out.println(map); + if (args.length == 0) { + System.out.println("Kill a Java process"); + System.out.println("Usage: java " + getClass().getName() + " <process name>"); + return; + } + String processName = args[0]; + int killCount = 0; + for (Entry<Integer, String> e : map.entrySet()) { + String name = e.getValue(); + if (name.equals(processName)) { + int pid = e.getKey(); + System.out.println("Killing pid " + pid + "..."); + // Windows + try { + exec("taskkill", "/pid", "" + pid, "/f"); + } catch (Exception e2) { + // ignore + } + // Unix + try { + exec("kill", "-9", "" + pid); + } catch (Exception e2) { + // ignore + } + System.out.println("done."); + killCount++; + } + } + if (killCount == 0) { + System.out.println("Process " + processName + " not found"); + } + map = getProcesses(); + System.out.println("Processes now:"); + System.out.println(map); + } + + private static TreeMap<Integer, String> getProcesses() { + String processList = exec("jps", "-l"); + String[] processes = processList.split("\n"); + TreeMap<Integer, String> map = new TreeMap<>(); + for (int i = 0; i < processes.length; i++) { + String p = processes[i].trim(); + int idx = p.indexOf(' '); + if (idx > 0) { + int pid = Integer.parseInt(p.substring(0, idx)); + String n = p.substring(idx + 1); + map.put(pid, n); + } + } + return map; + } + + private static String exec(String... 
args) { + ByteArrayOutputStream err = new ByteArrayOutputStream(); + ByteArrayOutputStream out = new ByteArrayOutputStream(); + try { + Process p = Runtime.getRuntime().exec(args); + copyInThread(p.getInputStream(), out); + copyInThread(p.getErrorStream(), err); + p.waitFor(); + String e = new String(err.toByteArray(), StandardCharsets.UTF_8); + if (e.length() > 0) { + throw new RuntimeException(e); + } + String output = new String(out.toByteArray(), StandardCharsets.UTF_8); + return output; + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + private static void copyInThread(final InputStream in, final OutputStream out) { + new Thread("Stream copy") { + @Override + public void run() { + byte[] buffer = new byte[4096]; + try { + while (true) { + int len = in.read(buffer, 0, buffer.length); + if (len < 0) { + break; + } + out.write(buffer, 0, len); + } + } catch (Exception e) { + throw new RuntimeException(e); + } + } + }.start(); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/Migrate.java b/modules/h2/src/test/tools/org/h2/dev/util/Migrate.java new file mode 100644 index 0000000000000..1a7a98b16561a --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/Migrate.java @@ -0,0 +1,249 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.BufferedInputStream; +import java.io.ByteArrayOutputStream; +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.io.PrintStream; +import java.io.RandomAccessFile; +import java.net.URL; +import java.nio.charset.StandardCharsets; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.concurrent.TimeUnit; + +import org.h2.engine.Constants; +import org.h2.tools.RunScript; + +/** + * Migrate an H2 database version 1.1.x (page store not enabled) to 1.2.x (page + * store format). This will download the H2 jar file version 1.2.127 from + * maven.org if it doesn't exist, execute the Script tool (using Runtime.exec) + * to create a backup.sql script, rename the old database file to *.backup, + * create a new database (using the H2 jar file in the class path) using the + * RunScript tool, and then delete the backup.sql file. Most utility methods are + * copied from h2/src/tools/org/h2/build/BuildBase.java. + */ +public class Migrate { + + private static final String USER = "sa"; + private static final String PASSWORD = "sa"; + private static final File OLD_H2_FILE = new File("./h2-1.2.127.jar"); + private static final String DOWNLOAD_URL = + "http://repo2.maven.org/maven2/com/h2database/h2/1.2.127/h2-1.2.127.jar"; + private static final String CHECKSUM = + "056e784c7cf009483366ab9cd8d21d02fe47031a"; + private static final String TEMP_SCRIPT = "backup.sql"; + private final PrintStream sysOut = System.out; + private boolean quiet; + + /** + * Migrate databases. The user name and password are both "sa". + * + * @param args the path (default is the current directory) + * @throws Exception if conversion fails + */ + public static void main(String... args) throws Exception { + new Migrate().execute(new File(args.length == 1 ? 
args[0] : "."), true, + USER, PASSWORD, false); + } + + /** + * Migrate a database. + * + * @param file the database file (must end with .data.db) or directory + * @param recursive if the file parameter is in fact a directory (in which + * case the directory is scanned recursively) + * @param user the user name of the database + * @param password the password + * @param runQuiet to run in quiet mode + * @throws Exception if conversion fails + */ + public void execute(File file, boolean recursive, String user, + String password, boolean runQuiet) throws Exception { + String pathToJavaExe = getJavaExecutablePath(); + this.quiet = runQuiet; + if (file.isDirectory() && recursive) { + for (File f : file.listFiles()) { + execute(f, recursive, user, password, runQuiet); + } + return; + } + if (!file.getName().endsWith(Constants.SUFFIX_OLD_DATABASE_FILE)) { + return; + } + println("Migrating " + file.getName()); + if (!OLD_H2_FILE.exists()) { + download(OLD_H2_FILE.getAbsolutePath(), DOWNLOAD_URL, CHECKSUM); + } + String url = "jdbc:h2:" + file.getAbsolutePath(); + url = url.substring(0, url.length() - Constants.SUFFIX_OLD_DATABASE_FILE.length()); + exec(new String[] { + pathToJavaExe, + "-Xmx128m", + "-cp", OLD_H2_FILE.getAbsolutePath(), + "org.h2.tools.Script", + "-script", TEMP_SCRIPT, + "-url", url, + "-user", user, + "-password", password + }); + file.renameTo(new File(file.getAbsoluteFile() + ".backup")); + RunScript.execute(url, user, password, TEMP_SCRIPT, StandardCharsets.UTF_8, true); + new File(TEMP_SCRIPT).delete(); + } + + private static String getJavaExecutablePath() { + String pathToJava; + if (File.separator.equals("\\")) { + pathToJava = System.getProperty("java.home") + File.separator + + "bin" + File.separator + "java.exe"; + } else { + pathToJava = System.getProperty("java.home") + File.separator + + "bin" + File.separator + "java"; + } + if (!new File(pathToJava).exists()) { + // Fallback to old behaviour + pathToJava = "java"; + } + return pathToJava; 
+ } + + private void download(String target, String fileURL, String sha1Checksum) { + File targetFile = new File(target); + if (targetFile.exists()) { + return; + } + mkdirs(targetFile.getAbsoluteFile().getParentFile()); + ByteArrayOutputStream buff = new ByteArrayOutputStream(); + try { + println("Downloading " + fileURL); + URL url = new URL(fileURL); + InputStream in = new BufferedInputStream(url.openStream()); + long last = System.nanoTime(); + int len = 0; + while (true) { + long now = System.nanoTime(); + if (now > last + TimeUnit.SECONDS.toNanos(1)) { + println("Downloaded " + len + " bytes"); + last = now; + } + int x = in.read(); + len++; + if (x < 0) { + break; + } + buff.write(x); + } + in.close(); + } catch (IOException e) { + throw new RuntimeException("Error downloading", e); + } + byte[] data = buff.toByteArray(); + String got = getSHA1(data); + if (sha1Checksum == null) { + println("SHA1 checksum: " + got); + } else { + if (!got.equals(sha1Checksum)) { + throw new RuntimeException("SHA1 checksum mismatch; got: " + got); + } + } + writeFile(targetFile, data); + } + + private static void mkdirs(File f) { + if (!f.exists()) { + if (!f.mkdirs()) { + throw new RuntimeException("Can not create directory " + f.getAbsolutePath()); + } + } + } + + private void println(String s) { + if (!quiet) { + sysOut.println(s); + } + } + + private void print(String s) { + if (!quiet) { + sysOut.print(s); + } + } + + private static String getSHA1(byte[] data) { + MessageDigest md; + try { + md = MessageDigest.getInstance("SHA-1"); + return convertBytesToString(md.digest(data)); + } catch (NoSuchAlgorithmException e) { + throw new RuntimeException(e); + } + } + + private static String convertBytesToString(byte[] value) { + StringBuilder buff = new StringBuilder(value.length * 2); + for (byte c : value) { + int x = c & 0xff; + buff.append(Integer.toString(x >> 4, 16)). 
+ append(Integer.toString(x & 0xf, 16)); + } + return buff.toString(); + } + + private static void writeFile(File file, byte[] data) { + try { + RandomAccessFile ra = new RandomAccessFile(file, "rw"); + ra.write(data); + ra.setLength(data.length); + ra.close(); + } catch (IOException e) { + throw new RuntimeException("Error writing to file " + file, e); + } + } + + private int exec(String[] command) { + try { + for (String c : command) { + print(c + " "); + } + println(""); + Process p = Runtime.getRuntime().exec(command); + copyInThread(p.getInputStream(), quiet ? null : sysOut); + copyInThread(p.getErrorStream(), quiet ? null : sysOut); + p.waitFor(); + return p.exitValue(); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + private static void copyInThread(final InputStream in, final OutputStream out) { + new Thread() { + @Override + public void run() { + try { + while (true) { + int x = in.read(); + if (x < 0) { + return; + } + if (out != null) { + out.write(x); + } + } + } catch (Exception e) { + throw new RuntimeException(e); + } + } + }.start(); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ReaderInputStream.java b/modules/h2/src/test/tools/org/h2/dev/util/ReaderInputStream.java new file mode 100644 index 0000000000000..c621c82c36f87 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ReaderInputStream.java @@ -0,0 +1,67 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.BufferedWriter; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStreamWriter; +import java.io.Reader; +import java.io.Writer; +import java.nio.charset.StandardCharsets; + +import org.h2.engine.Constants; + +/** + * The reader input stream wraps a reader and converts the characters to the UTF-8 format.
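The `convertBytesToString` helper above hex-encodes a SHA-1 digest by splitting each byte into its two nibbles. A minimal standalone sketch of the same conversion (the class name `BytesToHex` is illustrative, not part of the patch):

```java
// BytesToHex.java - standalone sketch of the nibble-splitting hex
// conversion used by the checksum helper in the Upgrade tool above.
public class BytesToHex {

    static String convert(byte[] value) {
        StringBuilder buff = new StringBuilder(value.length * 2);
        for (byte c : value) {
            int x = c & 0xff;                            // widen to unsigned 0..255
            buff.append(Integer.toString(x >> 4, 16))    // high nibble
                .append(Integer.toString(x & 0xf, 16));  // low nibble
        }
        return buff.toString();
    }

    public static void main(String[] args) {
        // 0xAB -> "ab", 0x05 -> "05": the per-nibble append keeps leading zeros
        System.out.println(convert(new byte[] { (byte) 0xAB, 0x05 }));  // prints "ab05"
    }
}
```

Appending nibble by nibble avoids the leading-zero problem of `Integer.toHexString` on small byte values.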
+ */ +public class ReaderInputStream extends InputStream { + + private final Reader reader; + private final char[] chars; + private final ByteArrayOutputStream out; + private final Writer writer; + private int pos; + private int remaining; + private byte[] buffer; + + public ReaderInputStream(Reader reader) { + chars = new char[Constants.IO_BUFFER_SIZE]; + this.reader = reader; + out = new ByteArrayOutputStream(Constants.IO_BUFFER_SIZE); + writer = new BufferedWriter(new OutputStreamWriter(out, StandardCharsets.UTF_8)); + } + + private void fillBuffer() throws IOException { + if (remaining == 0) { + pos = 0; + remaining = reader.read(chars, 0, Constants.IO_BUFFER_SIZE); + if (remaining < 0) { + return; + } + writer.write(chars, 0, remaining); + writer.flush(); + buffer = out.toByteArray(); + remaining = buffer.length; + out.reset(); + } + } + + @Override + public int read() throws IOException { + if (remaining == 0) { + fillBuffer(); + } + if (remaining < 0) { + return -1; + } + remaining--; + return buffer[pos++] & 0xff; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/RemovePasswords.java b/modules/h2/src/test/tools/org/h2/dev/util/RemovePasswords.java new file mode 100644 index 0000000000000..38665833737d4 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/RemovePasswords.java @@ -0,0 +1,85 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.IOException; +import java.io.RandomAccessFile; +import java.nio.MappedByteBuffer; +import java.nio.channels.FileChannel.MapMode; +import java.nio.charset.StandardCharsets; + +import org.h2.engine.Constants; +import org.h2.security.SHA256; +import org.h2.store.fs.FileUtils; +import org.h2.util.MathUtils; +import org.h2.util.StringUtils; +import org.h2.util.Utils; + +/** + * A tool that removes passwords from an unencrypted database. 
+ */ +public class RemovePasswords { + + /** + * Run the tool. + * + * @param args the command line arguments + */ + public static void main(String... args) throws Exception { + execute(args[0]); + } + + private static void execute(String fileName) throws IOException { + fileName = FileUtils.toRealPath(fileName); + RandomAccessFile f = new RandomAccessFile(fileName, "rw"); + long length = f.length(); + MappedByteBuffer buff = f.getChannel() + .map(MapMode.READ_WRITE, 0, length); + byte[] data = new byte[200]; + for (int i = 0; i < length - 200; i++) { + if (buff.get(i) != 'C' || buff.get(i + 1) != 'R' || + buff.get(i + 7) != 'U' || buff.get(i + 8) != 'S') { + continue; + } + buff.position(i); + buff.get(data); + String s = new String(data, StandardCharsets.UTF_8); + if (!s.startsWith("CREATE USER ")) { + continue; + } + int saltIndex = Utils.indexOf(s.getBytes(), "SALT ".getBytes(), 0); + if (saltIndex < 0) { + continue; + } + String userName = s.substring("CREATE USER ".length(), + s.indexOf("SALT ") - 1); + if (userName.startsWith("IF NOT EXISTS ")) { + userName = userName.substring("IF NOT EXISTS ".length()); + } + if (userName.startsWith("\"")) { + // TODO doesn't work for all cases ("" inside + // user name) + userName = userName.substring(1, userName.length() - 1); + } + System.out.println("User: " + userName); + byte[] userPasswordHash = SHA256.getKeyPasswordHash(userName, + "".toCharArray()); + byte[] salt = MathUtils.secureRandomBytes(Constants.SALT_LEN); + byte[] passwordHash = SHA256 + .getHashWithSalt(userPasswordHash, salt); + StringBuilder b = new StringBuilder(); + b.append("SALT '").append(StringUtils.convertBytesToHex(salt)) + .append("' HASH '") + .append(StringUtils.convertBytesToHex(passwordHash)) + .append('\''); + byte[] replacement = b.toString().getBytes(); + buff.position(i + saltIndex); + buff.put(replacement, 0, replacement.length); + } + f.close(); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpCleaner.java 
b/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpCleaner.java new file mode 100644 index 0000000000000..22b14ea166904 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpCleaner.java @@ -0,0 +1,144 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.FileReader; +import java.io.FileWriter; +import java.io.IOException; +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.io.PrintWriter; +import java.io.Reader; +import java.util.ArrayList; +import java.util.regex.Pattern; + +/** + * A tool that removes uninteresting lines from stack traces. + */ +public class ThreadDumpCleaner { + + private static final String[] PATTERN = { + "\"Concurrent Mark-Sweep GC Thread\".*\n", + + "\"Exception Catcher Thread\".*\n", + + "JNI global references:.*\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\n", + + "\".*?\".*\n\n", + + "\\$\\$YJP\\$\\$", + + "\"(Attach|Service|VM|GC|DestroyJavaVM|Signal|AWT|AppKit|C2 |Low Mem|" + + "process reaper|YJPAgent-).*?\"(?s).*?\n\n", + + " Locked ownable synchronizers:(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State: (TIMED_)?WAITING(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at sun.nio.ch.KQueueArrayWrapper.kevent0(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at java.io.FileInputStream.readBytes(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at sun.nio.ch.ServerSocketChannelImpl.accept(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at java.net.DualStackPlainSocketImpl.accept0(?s).*\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at sun.nio.ch.EPollArrayWrapper.epollWait(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at 
java.lang.Object.wait(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at java.net.PlainSocketImpl.socketAccept(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at java.net.SocketInputStream.socketRead0(?s).*?\n\n", + + "\".*?\".*?\n java.lang.Thread.State:.*\n\t" + + "at sun.nio.ch.WindowsSelectorImpl\\$SubSelector.poll0(?s).*?\n\n", + + }; + + private final ArrayList<Pattern> patterns = new ArrayList<>(); + + { + for (String s : PATTERN) { + patterns.add(Pattern.compile(s)); + } + } + + /** + * Run the tool. + * + * @param args the command line arguments + */ + public static void main(String... args) throws IOException { + String inFile = null, outFile = null; + for (int i = 0; i < args.length; i++) { + if (args[i].equals("-in")) { + inFile = args[++i]; + } else if (args[i].equals("-out")) { + outFile = args[++i]; + } + } + if (args.length == 0) { + outFile = "-"; + } + if (outFile == null) { + outFile = inFile + ".clean.txt"; + } + PrintWriter writer; + if ("-".equals(outFile)) { + writer = new PrintWriter(System.out); + } else { + writer = new PrintWriter(new BufferedWriter(new FileWriter(outFile))); + } + Reader r; + if (inFile != null) { + r = new FileReader(inFile); + } else { + r = new InputStreamReader(System.in); + } + new ThreadDumpCleaner().run( + new LineNumberReader(new BufferedReader(r)), + writer); + writer.close(); + r.close(); + } + + private void run(LineNumberReader reader, PrintWriter writer) throws IOException { + StringBuilder buff = new StringBuilder(); + while (true) { + String line = reader.readLine(); + if (line == null) { + break; + } + buff.append(line).append('\n'); + if (line.trim().length() == 0) { + writer.print(filter(buff.toString())); + buff = new StringBuilder(); + } + } + writer.println(filter(buff.toString())); + } + + private String filter(String s) { + for (Pattern p : patterns) { + s = p.matcher(s).replaceAll(""); + } + return s; + } + +} diff --git
a/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpFilter.java b/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpFilter.java new file mode 100644 index 0000000000000..7d5550dc9fa11 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpFilter.java @@ -0,0 +1,42 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.FileReader; +import java.io.FileWriter; +import java.io.LineNumberReader; +import java.io.PrintWriter; + +/** + * Filter full thread dumps from a log file. + */ +public class ThreadDumpFilter { + + /** + * Usage: java ThreadDumpFilter <log.txt >threadDump.txt + * + * @param a the file name + */ + public static void main(String... a) throws Exception { + String fileName = a[0]; + LineNumberReader in = new LineNumberReader( + new BufferedReader(new FileReader(fileName))); + PrintWriter writer = new PrintWriter(new BufferedWriter( + new FileWriter(fileName + ".filtered.txt"))); + for (String s; (s = in.readLine()) != null;) { + if (s.startsWith("Full thread")) { + do { + writer.println(s); + s = in.readLine(); + } while(s != null && (s.length() == 0 || " \t\"".indexOf(s.charAt(0)) >= 0)); + } + } + writer.close(); + in.close(); + } +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpInliner.java b/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpInliner.java new file mode 100644 index 0000000000000..9eda5b40b1400 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/ThreadDumpInliner.java @@ -0,0 +1,55 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.dev.util; + +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.FileReader; +import java.io.FileWriter; +import java.io.LineNumberReader; +import java.io.PrintWriter; + +/** + * Convert a list of thread dumps into one line per thread. + */ +public class ThreadDumpInliner { + + /** + * Usage: java ThreadDumpInliner threadDump.txt + * + * @param a the file name + */ + public static void main(String... a) throws Exception { + String fileName = a[0]; + LineNumberReader in = new LineNumberReader( + new BufferedReader(new FileReader(fileName))); + PrintWriter writer = new PrintWriter(new BufferedWriter( + new FileWriter(fileName + ".lines.txt"))); + + StringBuilder buff = new StringBuilder(); + for (String s; (s = in.readLine()) != null;) { + if (s.trim().length() == 0) { + continue; + } + if (s.startsWith(" ") || s.startsWith("\t")) { + buff.append('\t').append(s.trim()); + } else { + printNonEmpty(writer, buff.toString()); + buff = new StringBuilder(s); + } + } + printNonEmpty(writer, buff.toString()); + in.close(); + writer.close(); + } + + private static void printNonEmpty(PrintWriter writer, String s) { + s = s.trim(); + if (!s.isEmpty()) { + writer.println(s); + } + } +} diff --git a/modules/h2/src/test/tools/org/h2/dev/util/package.html b/modules/h2/src/test/tools/org/h2/dev/util/package.html new file mode 100644 index 0000000000000..ea11f5acfa21d --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/dev/util/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +Utility classes that are currently not used in the database engine. + +

\ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/java/ClassObj.java b/modules/h2/src/test/tools/org/h2/java/ClassObj.java new file mode 100644 index 0000000000000..48a1dfe796577 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/ClassObj.java @@ -0,0 +1,463 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java; + +import java.util.ArrayList; +import java.util.LinkedHashMap; + +/** + * A class or interface. + */ +public class ClassObj { + + /** + * The super class (null for java.lang.Object or primitive types). + */ + String superClassName; + + /** + * The list of interfaces that this class implements. + */ + ArrayList<String> interfaceNames = new ArrayList<>(); + + + /** + * The fully qualified class name. + */ + String className; + + /** + * Whether this is an interface. + */ + boolean isInterface; + + /** + * Whether this class is public. + */ + boolean isPublic; + + /** + * Whether this is a primitive class (int, char,...) + */ + boolean isPrimitive; + + /** + * The primitive type (higher types are more complex) + */ + int primitiveType; + + /** + * The imported classes. + */ + ArrayList<ClassObj> imports = new ArrayList<>(); + + /** + * The per-instance fields. + */ + LinkedHashMap<String, FieldObj> instanceFields = + new LinkedHashMap<>(); + + /** + * The static fields of this class. + */ + LinkedHashMap<String, FieldObj> staticFields = + new LinkedHashMap<>(); + + /** + * The methods. + */ + LinkedHashMap<String, ArrayList<MethodObj>> methods = + new LinkedHashMap<>(); + + /** + * The list of native statements. + */ + ArrayList<Statement> nativeCode = new ArrayList<>(); + + /** + * The class number. + */ + int id; + + /** + * Get the base type of this class. + */ + Type baseType; + + ClassObj() { + baseType = new Type(); + baseType.classObj = this; + } + + /** + * Add a method.
+ * + * @param method the method + */ + void addMethod(MethodObj method) { + ArrayList<MethodObj> list = methods.get(method.name); + if (list == null) { + list = new ArrayList<>(); + methods.put(method.name, list); + } else { + // for overloaded methods + // method.name = method.name + "_" + (list.size() + 1); + } + list.add(method); + } + + /** + * Add an instance field. + * + * @param field the field + */ + void addInstanceField(FieldObj field) { + instanceFields.put(field.name, field); + } + + /** + * Add a static field. + * + * @param field the field + */ + void addStaticField(FieldObj field) { + staticFields.put(field.name, field); + } + + @Override + public String toString() { + if (isPrimitive) { + return "j" + className; + } + return JavaParser.toC(className); + } + + /** + * Get the method. + * + * @param find the method name in the source code + * @param args the parameters + * @return the method + */ + MethodObj getMethod(String find, ArrayList<Expr> args) { + ArrayList<MethodObj> list = methods.get(find); + if (list == null) { + throw new RuntimeException("Method not found: " + className + " " + find); + } + if (list.size() == 1) { + return list.get(0); + } + for (MethodObj m : list) { + if (!m.isVarArgs && m.parameters.size() != args.size()) { + continue; + } + boolean match = true; + int i = 0; + for (FieldObj f : m.parameters.values()) { + Expr a = args.get(i++); + Type t = a.getType(); + if (!t.equals(f.type)) { + match = false; + break; + } + } + if (match) { + return m; + } + } + throw new RuntimeException("Method not found: " + className); + } + + /** + * Get the field with the given name.
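`ClassObj.getMethod` above resolves an overload by walking the same-named candidates and comparing argument types position by position. A simplified standalone sketch of that loop, modeling types as plain strings purely for illustration (all names hypothetical):

```java
import java.util.Arrays;
import java.util.List;

// OverloadSketch.java - simplified version of the overload-resolution loop in
// ClassObj.getMethod: among same-named methods, pick the one whose parameter
// types match the argument types position by position.
public class OverloadSketch {

    static int resolve(List<List<String>> overloads, List<String> argTypes) {
        for (int i = 0; i < overloads.size(); i++) {
            if (overloads.get(i).equals(argTypes)) {
                return i;  // first exact positional match wins
            }
        }
        throw new RuntimeException("Method not found");
    }

    public static void main(String[] args) {
        List<List<String>> overloads = Arrays.asList(
                Arrays.asList("int"),            // method(int)
                Arrays.asList("int", "String")); // method(int, String)
        System.out.println(resolve(overloads, Arrays.asList("int", "String")));  // prints 1
    }
}
```

The real implementation additionally short-circuits when there is only one candidate and tolerates var-args methods; this sketch keeps only the positional type comparison.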
+ * + * @param name the field name + * @return the field + */ + FieldObj getField(String name) { + return instanceFields.get(name); + } + + @Override + public int hashCode() { + return className.hashCode(); + } + + @Override + public boolean equals(Object other) { + if (other instanceof ClassObj) { + ClassObj c = (ClassObj) other; + return c.className.equals(className); + } + return false; + } + +} + +/** + * A method. + */ +class MethodObj { + + /** + * Whether the last parameter is a var args parameter. + */ + boolean isVarArgs; + + /** + * Whether this method is static. + */ + boolean isStatic; + + /** + * Whether this method is private. + */ + boolean isPrivate; + + /** + * Whether this method is overridden. + */ + boolean isVirtual; + + /** + * Whether this method is to be ignored (using the Ignore annotation). + */ + boolean isIgnore; + + /** + * The name. + */ + String name; + + /** + * The statement block (if any). + */ + Statement block; + + /** + * The return type. + */ + Type returnType; + + /** + * The parameter list. + */ + LinkedHashMap<String, FieldObj> parameters = + new LinkedHashMap<>(); + + /** + * Whether this method is final. + */ + boolean isFinal; + + /** + * Whether this method is public. + */ + boolean isPublic; + + /** + * Whether this method is native. + */ + boolean isNative; + + /** + * Whether this is a constructor. + */ + boolean isConstructor; + + @Override + public String toString() { + return name; + } + +} + +/** + * A field. + */ +class FieldObj { + + /** + * The type. + */ + Type type; + + /** + * Whether this is a variable or parameter. + */ + boolean isVariable; + + /** + * Whether this is a local field (not separately garbage collected). + */ + boolean isLocalField; + + /** + * The field name. + */ + String name; + + /** + * Whether this field is static. + */ + boolean isStatic; + + /** + * Whether this field is final. + */ + boolean isFinal; + + /** + * Whether this field is private.
+ */ + boolean isPrivate; + + /** + * Whether this field is public. + */ + boolean isPublic; + + /** + * Whether this method is to be ignored (using the Ignore annotation). + */ + boolean isIgnore; + + /** + * The initial value expression (may be null). + */ + Expr value; + + /** + * The class where this field is declared. + */ + ClassObj declaredClass; + + @Override + public String toString() { + return name; + } + +} + +/** + * A type. + */ +class Type { + + /** + * The class. + */ + ClassObj classObj; + + /** + * The array nesting level. 0 if not an array. + */ + int arrayLevel; + + /** + * Whether this is a var args parameter. + */ + boolean isVarArgs; + + /** + * Use ref-counting. + */ + boolean refCount = JavaParser.REF_COUNT; + + /** + * Whether this is an array or a non-primitive type. + * + * @return true if yes + */ + public boolean isObject() { + return arrayLevel > 0 || !classObj.isPrimitive; + } + + @Override + public String toString() { + return asString(); + } + + /** + * Get the C++ code.
+ * + * @return the C++ code + */ + public String asString() { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < arrayLevel; i++) { + if (refCount) { + buff.append("ptr< "); + } + buff.append("array< "); + } + if (refCount) { + if (!classObj.isPrimitive) { + buff.append("ptr< "); + } + } + buff.append(classObj.toString()); + if (refCount) { + if (!classObj.isPrimitive) { + buff.append(" >"); + } + } + for (int i = 0; i < arrayLevel; i++) { + if (refCount) { + buff.append(" >"); + } else { + if (!classObj.isPrimitive) { + buff.append("*"); + } + } + buff.append(" >"); + } + if (!refCount) { + if (isObject()) { + buff.append("*"); + } + } + return buff.toString(); + } + + @Override + public int hashCode() { + return toString().hashCode(); + } + + @Override + public boolean equals(Object other) { + if (other instanceof Type) { + Type t = (Type) other; + return t.classObj.equals(classObj) && t.arrayLevel == arrayLevel + && t.isVarArgs == isVarArgs; + } + return false; + } + + /** + * Get the default value, for primitive types (0 usually). + * + * @param context the context + * @return the expression + */ + public Expr getDefaultValue(JavaParser context) { + if (classObj.isPrimitive) { + LiteralExpr literal = new LiteralExpr(context, classObj.className); + literal.literal = "0"; + CastExpr cast = new CastExpr(); + cast.type = this; + cast.expr = literal; + cast.type = this; + return cast; + } + LiteralExpr literal = new LiteralExpr(context, classObj.className); + literal.literal = "null"; + return literal; + } + +} + diff --git a/modules/h2/src/test/tools/org/h2/java/Expr.java b/modules/h2/src/test/tools/org/h2/java/Expr.java new file mode 100644 index 0000000000000..5fb69dba09c91 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/Expr.java @@ -0,0 +1,736 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). 
+ * Initial Developer: H2 Group + */ +package org.h2.java; + +import java.util.ArrayList; +import java.util.Iterator; + +/** + * An expression. + */ +public interface Expr { + + /** + * Get the C++ code. + * + * @return the C++ code + */ + String asString(); + + Type getType(); + void setType(Type type); + +} + +/** + * The base expression class. + */ +abstract class ExprBase implements Expr { + @Override + public final String toString() { + return "_" + asString() + "_"; + } +} + +/** + * A method call. + */ +class CallExpr extends ExprBase { + + /** + * The parameters. + */ + final ArrayList<Expr> args = new ArrayList<>(); + + private final JavaParser context; + private final String className; + private final String name; + private Expr expr; + private ClassObj classObj; + private MethodObj method; + private Type type; + + CallExpr(JavaParser context, Expr expr, String className, String name) { + this.context = context; + this.expr = expr; + this.className = className; + this.name = name; + } + + private void initMethod() { + if (method != null) { + return; + } + if (className != null) { + classObj = context.getClassObj(className); + } else { + classObj = expr.getType().classObj; + } + method = classObj.getMethod(name, args); + if (method.isStatic) { + expr = null; + } + } + + @Override + public String asString() { + StringBuilder buff = new StringBuilder(); + initMethod(); + if (method.isIgnore) { + if (args.size() == 0) { + // ignore + } else if (args.size() == 1) { + buff.append(args.get(0)); + } else { + throw new IllegalArgumentException( + "Cannot ignore method with multiple arguments: " + method); + } + } else { + if (expr == null) { + // static method + buff.append(JavaParser.toC(classObj.toString() + "."
+ method.name)); + } else { + buff.append(expr.asString()).append("->"); + buff.append(method.name); + } + buff.append("("); + int i = 0; + Iterator<FieldObj> paramIt = method.parameters.values().iterator(); + for (Expr a : args) { + if (i > 0) { + buff.append(", "); + } + FieldObj f = paramIt.next(); + i++; + a.setType(f.type); + buff.append(a.asString()); + } + buff.append(")"); + } + return buff.toString(); + } + + @Override + public Type getType() { + initMethod(); + return method.returnType; + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * An assignment expression. + */ +class AssignExpr extends ExprBase { + + /** + * The target variable or field. + */ + Expr left; + + /** + * The operation (=, +=,...). + */ + String op; + + /** + * The expression. + */ + Expr right; + + /** + * The type. + */ + Type type; + + @Override + public String asString() { + right.setType(left.getType()); + return left.asString() + " " + op + " " + right.asString(); + } + + @Override + public Type getType() { + return left.getType(); + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * A conditional expression. + */ +class ConditionalExpr extends ExprBase { + + /** + * The condition. + */ + Expr condition; + + /** + * The 'true' expression. + */ + Expr ifTrue; + + /** + * The 'false' expression. + */ + Expr ifFalse; + + @Override + public String asString() { + return condition.asString() + " ? " + ifTrue.asString() + " : " + + ifFalse.asString(); + } + + @Override + public Type getType() { + return ifTrue.getType(); + } + + @Override + public void setType(Type type) { + ifTrue.setType(type); + ifFalse.setType(type); + } + +} + +/** + * A literal. + */ +class LiteralExpr extends ExprBase { + + /** + * The literal expression.
+ */ + String literal; + + private final JavaParser context; + private final String className; + private Type type; + + public LiteralExpr(JavaParser context, String className) { + this.context = context; + this.className = className; + } + + @Override + public String asString() { + if ("null".equals(literal)) { + Type t = getType(); + if (t.isObject()) { + return "(" + getType().asString() + ") 0"; + } + return t.asString() + "()"; + } + return literal; + } + + @Override + public Type getType() { + if (type == null) { + type = new Type(); + type.classObj = context.getClassObj(className); + } + return type; + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * An operation. + */ +class OpExpr extends ExprBase { + + /** + * The left hand side. + */ + Expr left; + + /** + * The operation. + */ + String op; + + /** + * The right hand side. + */ + Expr right; + + private final JavaParser context; + private Type type; + + OpExpr(JavaParser context) { + this.context = context; + } + + @Override + public String asString() { + if (left == null) { + return op + right.asString(); + } else if (right == null) { + return left.asString() + op; + } + if (op.equals(">>>")) { + // ujint / ujlong + return "(((u" + left.getType() + ") " + left + ") >> " + right + ")"; + } else if (op.equals("+")) { + if (left.getType().isObject() || right.getType().isObject()) { + // TODO convert primitive to String, call toString + StringBuilder buff = new StringBuilder(); + if (type.refCount) { + buff.append("ptr(new java_lang_StringBuilder("); + } else { + buff.append("(new java_lang_StringBuilder("); + } + buff.append(convertToString(left)); + buff.append("))->append("); + buff.append(convertToString(right)); + buff.append(")->toString()"); + return buff.toString(); + } + } + return "(" + left.asString() + " " + op + " " + right.asString() + ")"; + } + + private String convertToString(Expr e) { + Type t = e.getType(); + if (t.arrayLevel > 0) { + return
e.toString() + "->toString()"; + } + if (t.classObj.isPrimitive) { + ClassObj wrapper = context.getWrapper(t.classObj); + return JavaParser.toC(wrapper + ".toString") + "(" + e.asString() + ")"; + } else if (e.getType().asString().equals("java_lang_String*")) { + return e.asString(); + } + return e.asString() + "->toString()"; + } + + private static boolean isComparison(String op) { + return op.equals("==") || op.equals(">") || op.equals("<") || + op.equals(">=") || op.equals("<=") || op.equals("!="); + } + + @Override + public Type getType() { + if (left == null) { + return right.getType(); + } + if (right == null) { + return left.getType(); + } + if (isComparison(op)) { + Type t = new Type(); + t.classObj = JavaParser.getBuiltInClass("boolean"); + return t; + } + if (op.equals("+")) { + if (left.getType().isObject() || right.getType().isObject()) { + Type t = new Type(); + t.classObj = context.getClassObj("java.lang.String"); + return t; + } + } + Type lt = left.getType(); + Type rt = right.getType(); + if (lt.classObj.primitiveType < rt.classObj.primitiveType) { + return rt; + } + return lt; + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * A "new" expression. + */ +class NewExpr extends ExprBase { + + /** + * The class. + */ + ClassObj classObj; + + /** + * The constructor parameters (for objects). + */ + final ArrayList<Expr> args = new ArrayList<>(); + + /** + * The array bounds (for arrays). + */ + final ArrayList<Expr> arrayInitExpr = new ArrayList<>(); + + /** + * The type.
+ */ + Type type; + + @Override + public String asString() { + boolean refCount = type.refCount; + StringBuilder buff = new StringBuilder(); + if (arrayInitExpr.size() > 0) { + if (refCount) { + if (classObj.isPrimitive) { + buff.append("ptr< array< " + classObj + " > >"); + } else { + buff.append("ptr< array< ptr< " + classObj + " > > >"); + } + } + if (classObj.isPrimitive) { + buff.append("(new array< " + classObj + " >(1 "); + } else { + if (refCount) { + buff.append("(new array< ptr< " + classObj + " > >(1 "); + } else { + buff.append("(new array< " + classObj + "* >(1 "); + } + } + for (Expr e : arrayInitExpr) { + buff.append("* ").append(e.asString()); + } + buff.append("))"); + } else { + if (refCount) { + buff.append("ptr< " + classObj + " >"); + } + buff.append("(new " + classObj); + buff.append("("); + int i = 0; + for (Expr a : args) { + if (i++ > 0) { + buff.append(", "); + } + buff.append(a.asString()); + } + buff.append("))"); + } + return buff.toString(); + } + + @Override + public Type getType() { + Type t = new Type(); + t.classObj = classObj; + t.arrayLevel = arrayInitExpr.size(); + return t; + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * A String literal. + */ +class StringExpr extends ExprBase { + + /** + * The constant name. + */ + String constantName; + + /** + * The literal. + */ + String text; + + private final JavaParser context; + private Type type; + + StringExpr(JavaParser context) { + this.context = context; + } + + @Override + public String asString() { + return constantName; + } + + @Override + public Type getType() { + if (type == null) { + type = new Type(); + type.classObj = context.getClassObj("java.lang.String"); + } + return type; + } + + /** + * Encode the String to Java syntax. 
+ * + * @param s the string + * @return the encoded string + */ + static String javaEncode(String s) { + StringBuilder buff = new StringBuilder(s.length()); + for (int i = 0; i < s.length(); i++) { + char c = s.charAt(i); + switch (c) { + case '\t': + // HT horizontal tab + buff.append("\\t"); + break; + case '\n': + // LF linefeed + buff.append("\\n"); + break; + case '\f': + // FF form feed + buff.append("\\f"); + break; + case '\r': + // CR carriage return + buff.append("\\r"); + break; + case '"': + // double quote + buff.append("\\\""); + break; + case '\\': + // backslash + buff.append("\\\\"); + break; + default: + int ch = c & 0xffff; + if (ch >= ' ' && (ch < 0x80)) { + buff.append(c); + // not supported in properties files + // } else if(ch < 0xff) { + // buff.append("\\"); + // // make sure it's three characters (0x200 is octal 1000) + // buff.append(Integer.toOctalString(0x200 | ch).substring(1)); + } else { + buff.append("\\u"); + // make sure it's four characters + buff.append(Integer.toHexString(0x10000 | ch).substring(1)); + } + } + } + return buff.toString(); + } + + @Override + public void setType(Type type) { + // ignore + } + +} + +/** + * A variable. + */ +class VariableExpr extends ExprBase { + + /** + * The variable name. + */ + String name; + + /** + * The base expression (the first element in a.b variables). + */ + Expr base; + + /** + * The field. + */ + FieldObj field; + + private Type type; + private final JavaParser context; + + VariableExpr(JavaParser context) { + this.context = context; + } + + @Override + public String asString() { + init(); + StringBuilder buff = new StringBuilder(); + if (base != null) { + buff.append(base.asString()).append("->"); + } + if (field != null) { + if (field.isStatic) { + buff.append(JavaParser.toC(field.declaredClass + "." 
+ field.name)); + } else if (field.name != null) { + buff.append(field.name); + } else if ("length".equals(name) && base.getType().arrayLevel > 0) { + buff.append("length()"); + } + } else { + buff.append(JavaParser.toC(name)); + } + return buff.toString(); + } + + private void init() { + if (field == null) { + Type t = base.getType(); + if (t.arrayLevel > 0) { + if ("length".equals(name)) { + field = new FieldObj(); + field.type = context.getClassObj("int").baseType; + } else { + throw new IllegalArgumentException("Unknown array method: " + name); + } + } else { + field = t.classObj.getField(name); + } + } + } + + @Override + public Type getType() { + init(); + return field.type; + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * An array initializer expression. + */ +class ArrayInitExpr extends ExprBase { + + /** + * The expression list. + */ + final ArrayList<Expr> list = new ArrayList<>(); + + /** + * The type. + */ + Type type; + + @Override + public Type getType() { + return type; + } + + @Override + public String asString() { + StringBuilder buff = new StringBuilder("{ "); + int i = 0; + for (Expr e : list) { + if (i++ > 0) { + buff.append(", "); + } + buff.append(e.toString()); + } + buff.append(" }"); + return buff.toString(); + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * A type cast expression. + */ +class CastExpr extends ExprBase { + + /** + * The expression. + */ + Expr expr; + + /** + * The cast type. + */ + Type type; + + @Override + public Type getType() { + return type; + } + + @Override + public String asString() { + return "(" + type.asString() + ") " + expr.asString(); + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} + +/** + * An array access expression (get or set). + */ +class ArrayAccessExpr extends ExprBase { + + /** + * The base expression. + */ + Expr base; + + /** + * The index. + */ + Expr index; + + /** + * The type. 
+ */ + Type type; + + @Override + public Type getType() { + Type t = new Type(); + t.classObj = base.getType().classObj; + t.arrayLevel = base.getType().arrayLevel - 1; + return t; + } + + @Override + public String asString() { + return base.asString() + "->at(" + index.asString() + ")"; + } + + @Override + public void setType(Type type) { + this.type = type; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/Ignore.java b/modules/h2/src/test/tools/org/h2/java/Ignore.java new file mode 100644 index 0000000000000..61c140752fe49 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/Ignore.java @@ -0,0 +1,13 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java; + +/** + * This annotation marks methods that are only needed for testing. + */ +public @interface Ignore { + // empty +} diff --git a/modules/h2/src/test/tools/org/h2/java/JavaParser.java b/modules/h2/src/test/tools/org/h2/java/JavaParser.java new file mode 100644 index 0000000000000..b013b21406041 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/JavaParser.java @@ -0,0 +1,1849 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java; + +import java.io.IOException; +import java.io.PrintWriter; +import java.io.RandomAccessFile; +import java.nio.charset.StandardCharsets; +import java.text.ParseException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedHashMap; +import org.h2.util.New; + +/** + * Converts Java to C. + */ +public class JavaParser { + + /** + * Whether ref-counting is used. + */ + public static final boolean REF_COUNT = false; + + /** + * Whether ref-counting is used for constants. 
+ */ + public static final boolean REF_COUNT_STATIC = false; + + private static final HashMap<String, ClassObj> BUILT_IN_CLASSES = new HashMap<>(); + + private static final int TOKEN_LITERAL_CHAR = 0; + private static final int TOKEN_LITERAL_STRING = 1; + private static final int TOKEN_LITERAL_NUMBER = 2; + private static final int TOKEN_RESERVED = 3; + private static final int TOKEN_IDENTIFIER = 4; + private static final int TOKEN_OTHER = 5; + + private static final HashSet<String> RESERVED = new HashSet<>(); + private static final HashMap<String, String> JAVA_IMPORT_MAP = new HashMap<>(); + + private final ArrayList<ClassObj> allClasses = New.arrayList(); + + private String source; + + private ParseState current = new ParseState(); + + private String packageName; + private ClassObj classObj; + private int nextClassId; + private MethodObj method; + private FieldObj thisPointer; + private final HashMap<String, String> importMap = new HashMap<>(); + private final HashMap<String, ClassObj> classes = new HashMap<>(); + private final LinkedHashMap<String, FieldObj> localVars = + new LinkedHashMap<>(); + private final HashMap<String, MethodObj> allMethodsMap = new HashMap<>(); + private final ArrayList<Statement> nativeHeaders = New.arrayList(); + private final HashMap<String, String> stringToStringConstantMap = new HashMap<>(); + private final HashMap<String, String> stringConstantToStringMap = new HashMap<>(); + + public JavaParser() { + addBuiltInTypes(); + } + + private void addBuiltInTypes() { + String[] list = { "abstract", "continue", "for", "new", "switch", + "assert", "default", "if", "package", "synchronized", + "boolean", "do", "goto", "private", "this", "break", "double", + "implements", "protected", "throw", "byte", "else", "import", + "public", "throws", "case", "enum", "instanceof", "return", + "transient", "catch", "extends", "int", "short", "try", "char", + "final", "interface", "static", "void", "class", "finally", + "long", "strictfp", "volatile", "const", "float", "native", + "super", "while", "true", "false", "null" }; + for (String s : list) { + RESERVED.add(s); + } + int id = 0; + addBuiltInType(id++, true, 0, 
"void"); + addBuiltInType(id++, true, 1, "boolean"); + addBuiltInType(id++, true, 2, "byte"); + addBuiltInType(id++, true, 3, "short"); + addBuiltInType(id++, true, 4, "char"); + addBuiltInType(id++, true, 5, "int"); + addBuiltInType(id++, true, 6, "long"); + addBuiltInType(id++, true, 7, "float"); + addBuiltInType(id++, true, 8, "double"); + String[] java = { "Boolean", "Byte", "Character", "Class", + "ClassLoader", "Double", "Float", "Integer", "Long", "Math", + "Number", "Object", "Runtime", "Short", "String", + "StringBuffer", "StringBuilder", "System", "Thread", + "ThreadGroup", "ThreadLocal", "Throwable", "Void" }; + for (String s : java) { + JAVA_IMPORT_MAP.put(s, "java.lang." + s); + addBuiltInType(id++, false, 0, "java.lang." + s); + } + nextClassId = id; + } + + /** + * Get the wrapper class for the given primitive class. + * + * @param c the class + * @return the wrapper class + */ + ClassObj getWrapper(ClassObj c) { + switch (c.id) { + case 1: + return getClass("java.lang.Boolean"); + case 2: + return getClass("java.lang.Byte"); + case 3: + return getClass("java.lang.Short"); + case 4: + return getClass("java.lang.Character"); + case 5: + return getClass("java.lang.Integer"); + case 6: + return getClass("java.lang.Long"); + case 7: + return getClass("java.lang.Float"); + case 8: + return getClass("java.lang.Double"); + } + throw new RuntimeException("not a primitive type: " + classObj); + } + + private void addBuiltInType(int id, boolean primitive, int primitiveType, + String type) { + ClassObj c = new ClassObj(); + c.id = id; + c.className = type; + c.isPrimitive = primitive; + c.primitiveType = primitiveType; + BUILT_IN_CLASSES.put(type, c); + addClass(c); + } + + private void addClass(ClassObj c) { + int id = c.id; + while (id >= allClasses.size()) { + allClasses.add(null); + } + allClasses.set(id, c); + } + + /** + * Parse the source code. 
+ * + * @param baseDir the base directory + * @param className the fully qualified name of the class to parse + */ + void parse(String baseDir, String className) { + String fileName = baseDir + "/" + className.replace('.', '/') + ".java"; + current = new ParseState(); + try { + RandomAccessFile file = new RandomAccessFile(fileName, "r"); + byte[] buff = new byte[(int) file.length()]; + file.readFully(buff); + source = new String(buff, StandardCharsets.UTF_8); + file.close(); + } catch (IOException e) { + throw new RuntimeException(e); + } + source = replaceUnicode(source); + source = removeRemarks(source); + try { + readToken(); + parseCompilationUnit(); + } catch (Exception e) { + throw new RuntimeException(source.substring(0, current.index) + + "[*]" + source.substring(current.index), e); + } + } + + private static String cleanPackageName(String name) { + if (name.startsWith("org.h2.java.lang") + || name.startsWith("org.h2.java.io")) { + return name.substring("org.h2.".length()); + } + return name; + } + + private void parseCompilationUnit() { + if (readIf("package")) { + packageName = cleanPackageName(readQualifiedIdentifier()); + read(";"); + } + while (readIf("import")) { + String importPackageName = cleanPackageName(readQualifiedIdentifier()); + String importClass = importPackageName.substring(importPackageName + .lastIndexOf('.') + 1); + importMap.put(importClass, importPackageName); + read(";"); + } + while (true) { + Statement s = readNativeStatementIf(); + if (s == null) { + break; + } + nativeHeaders.add(s); + } + while (true) { + boolean isPublic = readIf("public"); + boolean isInterface; + if (readIf("class")) { + isInterface = false; + } else { + read("interface"); + isInterface = true; + } + String name = readIdentifier(); + classObj = BUILT_IN_CLASSES.get(packageName + "." 
+ name); + if (classObj == null) { + classObj = new ClassObj(); + classObj.id = nextClassId++; + } + classObj.isPublic = isPublic; + classObj.isInterface = isInterface; + classObj.className = packageName == null ? "" : (packageName + ".") + + name; + // import this class + importMap.put(name, classObj.className); + addClass(classObj); + classes.put(classObj.className, classObj); + if (readIf("extends")) { + classObj.superClassName = readQualifiedIdentifier(); + } + if (readIf("implements")) { + while (true) { + classObj.interfaceNames.add(readQualifiedIdentifier()); + if (!readIf(",")) { + break; + } + } + } + parseClassBody(); + if (current.token == null) { + break; + } + } + } + + private boolean isTypeOrIdentifier() { + if (BUILT_IN_CLASSES.containsKey(current.token)) { + return true; + } + return current.type == TOKEN_IDENTIFIER; + } + + private ClassObj getClass(String type) { + ClassObj c = getClassIf(type); + if (c == null) { + throw new RuntimeException("Unknown type: " + type); + } + return c; + } + + /** + * Get the class for a built-in type. 
+ * + * @param type the type + * @return the class or null if not found + */ + static ClassObj getBuiltInClass(String type) { + return BUILT_IN_CLASSES.get(type); + } + + private ClassObj getClassIf(String type) { + ClassObj c = BUILT_IN_CLASSES.get(type); + if (c != null) { + return c; + } + c = classes.get(type); + if (c != null) { + return c; + } + String mappedType = importMap.get(type); + if (mappedType == null) { + mappedType = JAVA_IMPORT_MAP.get(type); + if (mappedType == null) { + return null; + } + } + c = classes.get(mappedType); + if (c == null) { + c = BUILT_IN_CLASSES.get(mappedType); + if (c == null) { + throw new RuntimeException("Unknown class: " + mappedType); + } + } + return c; + } + + private void parseClassBody() { + read("{"); + localVars.clear(); + while (true) { + if (readIf("}")) { + break; + } + thisPointer = null; + while (true) { + Statement s = readNativeStatementIf(); + if (s == null) { + break; + } + classObj.nativeCode.add(s); + } + thisPointer = null; + HashSet<String> annotations = new HashSet<>(); + while (readIf("@")) { + String annotation = readIdentifier(); + annotations.add(annotation); + } + boolean isIgnore = annotations.contains("Ignore"); + boolean isLocalField = annotations.contains("Local"); + boolean isStatic = false; + boolean isFinal = false; + boolean isPrivate = false; + boolean isPublic = false; + boolean isNative = false; + while (true) { + if (readIf("static")) { + isStatic = true; + } else if (readIf("final")) { + isFinal = true; + } else if (readIf("native")) { + isNative = true; + } else if (readIf("private")) { + isPrivate = true; + } else if (readIf("public")) { + isPublic = true; + } else { + break; + } + } + if (readIf("{")) { + method = new MethodObj(); + method.isIgnore = isIgnore; + method.name = isStatic ? 
"cl_init_obj" : ""; + method.isStatic = isStatic; + localVars.clear(); + if (!isStatic) { + initThisPointer(); + } + method.block = readStatement(); + classObj.addMethod(method); + } else { + String typeName = readTypeOrIdentifier(); + Type type = readType(typeName); + method = new MethodObj(); + method.isIgnore = isIgnore; + method.returnType = type; + method.isStatic = isStatic; + method.isFinal = isFinal; + method.isPublic = isPublic; + method.isPrivate = isPrivate; + method.isNative = isNative; + localVars.clear(); + if (!isStatic) { + initThisPointer(); + } + if (readIf("(")) { + if (type.classObj != classObj) { + throw getSyntaxException("Constructor of wrong type: " + + type); + } + method.name = ""; + method.isConstructor = true; + parseFormalParameters(method); + if (!readIf(";")) { + method.block = readStatement(); + } + classObj.addMethod(method); + addMethod(method); + } else { + String name = readIdentifier(); + if (name.endsWith("Method")) { + name = name.substring(0, + name.length() - "Method".length()); + } + method.name = name; + if (readIf("(")) { + parseFormalParameters(method); + if (!readIf(";")) { + method.block = readStatement(); + } + classObj.addMethod(method); + addMethod(method); + } else { + FieldObj field = new FieldObj(); + field.isIgnore = isIgnore; + field.isLocalField = isLocalField; + field.type = type; + field.name = name; + field.isStatic = isStatic; + field.isFinal = isFinal; + field.isPublic = isPublic; + field.isPrivate = isPrivate; + field.declaredClass = classObj; + if (readIf("=")) { + if (field.type.arrayLevel > 0 && readIf("{")) { + field.value = readArrayInit(field.type); + } else { + field.value = readExpr(); + } + } else { + field.value = field.type.getDefaultValue(this); + } + read(";"); + if (isStatic) { + classObj.addStaticField(field); + } else { + classObj.addInstanceField(field); + } + } + } + } + } + } + + private void addMethod(MethodObj m) { + if (m.isStatic) { + return; + } + MethodObj old = 
allMethodsMap.get(m.name); + if (old != null) { + old.isVirtual = true; + m.isVirtual = true; + } else { + allMethodsMap.put(m.name, m); + } + } + + private Expr readArrayInit(Type type) { + ArrayInitExpr expr = new ArrayInitExpr(); + expr.type = new Type(); + expr.type.classObj = type.classObj; + expr.type.arrayLevel = type.arrayLevel - 1; + if (!readIf("}")) { + while (true) { + expr.list.add(readExpr()); + if (readIf("}")) { + break; + } + read(","); + if (readIf("}")) { + break; + } + } + } + return expr; + } + + private void initThisPointer() { + thisPointer = new FieldObj(); + thisPointer.isVariable = true; + thisPointer.name = "this"; + thisPointer.type = new Type(); + thisPointer.type.classObj = classObj; + } + + private Type readType(String name) { + Type type = new Type(); + type.classObj = getClass(name); + while (readIf("[")) { + read("]"); + type.arrayLevel++; + } + if (readIf("...")) { + type.arrayLevel++; + type.isVarArgs = true; + } + return type; + } + + private void parseFormalParameters(MethodObj methodObj) { + if (readIf(")")) { + return; + } + while (true) { + FieldObj field = new FieldObj(); + field.isVariable = true; + String typeName = readTypeOrIdentifier(); + field.type = readType(typeName); + if (field.type.isVarArgs) { + methodObj.isVarArgs = true; + } + field.name = readIdentifier(); + methodObj.parameters.put(field.name, field); + if (readIf(")")) { + break; + } + read(","); + } + } + + private String readTypeOrIdentifier() { + if (current.type == TOKEN_RESERVED) { + if (BUILT_IN_CLASSES.containsKey(current.token)) { + return read(); + } + } + String s = readIdentifier(); + while (readIf(".")) { + s += "." 
+ readIdentifier(); + } + return s; + } + + private Statement readNativeStatementIf() { + if (readIf("//")) { + boolean isC = readIdentifierIf("c"); + int start = current.index; + while (source.charAt(current.index) != '\n') { + current.index++; + } + String s = source.substring(start, current.index).trim(); + StatementNative stat = new StatementNative(s); + read(); + return isC ? stat : null; + } else if (readIf("/*")) { + boolean isC = readIdentifierIf("c"); + int start = current.index; + while (source.charAt(current.index) != '*' + || source.charAt(current.index + 1) != '/') { + current.index++; + } + String s = source.substring(start, current.index).trim(); + StatementNative stat = new StatementNative(s); + current.index += 2; + read(); + return isC ? stat : null; + } + return null; + } + + private Statement readStatement() { + Statement s = readNativeStatementIf(); + if (s != null) { + return s; + } + if (readIf(";")) { + return new EmptyStatement(); + } else if (readIf("{")) { + StatementBlock stat = new StatementBlock(); + while (true) { + if (readIf("}")) { + break; + } + stat.instructions.add(readStatement()); + } + return stat; + } else if (readIf("if")) { + IfStatement ifStat = new IfStatement(); + read("("); + ifStat.condition = readExpr(); + read(")"); + ifStat.block = readStatement(); + if (readIf("else")) { + ifStat.elseBlock = readStatement(); + } + return ifStat; + } else if (readIf("while")) { + WhileStatement whileStat = new WhileStatement(); + read("("); + whileStat.condition = readExpr(); + read(")"); + whileStat.block = readStatement(); + return whileStat; + } else if (readIf("break")) { + read(";"); + return new BreakStatement(); + } else if (readIf("continue")) { + read(";"); + return new ContinueStatement(); + } else if (readIf("switch")) { + + read("("); + SwitchStatement switchStat = new SwitchStatement(readExpr()); + read(")"); + read("{"); + while (true) { + if (readIf("default")) { + read(":"); + StatementBlock block = new 
StatementBlock(); + switchStat.setDefaultBlock(block); + while (true) { + block.instructions.add(readStatement()); + if (current.token.equals("case") + || current.token.equals("default") + || current.token.equals("}")) { + break; + } + } + } else if (readIf("case")) { + Expr expr = readExpr(); + read(":"); + StatementBlock block = new StatementBlock(); + while (true) { + block.instructions.add(readStatement()); + if (current.token.equals("case") + || current.token.equals("default") + || current.token.equals("}")) { + break; + } + } + switchStat.addCase(expr, block); + } else if (readIf("}")) { + break; + } + } + return switchStat; + } else if (readIf("for")) { + ForStatement forStat = new ForStatement(); + read("("); + ParseState back = copyParseState(); + try { + String typeName = readTypeOrIdentifier(); + Type type = readType(typeName); + String name = readIdentifier(); + FieldObj f = new FieldObj(); + f.name = name; + f.type = type; + f.isVariable = true; + localVars.put(name, f); + read(":"); + forStat.iterableType = type; + forStat.iterableVariable = name; + forStat.iterable = readExpr(); + } catch (Exception e) { + current = back; + forStat.init = readStatement(); + forStat.condition = readExpr(); + read(";"); + do { + forStat.updates.add(readExpr()); + } while (readIf(",")); + } + read(")"); + forStat.block = readStatement(); + return forStat; + } else if (readIf("do")) { + DoWhileStatement doWhileStat = new DoWhileStatement(); + doWhileStat.block = readStatement(); + read("while"); + read("("); + doWhileStat.condition = readExpr(); + read(")"); + read(";"); + return doWhileStat; + } else if (readIf("return")) { + ReturnStatement returnStat = new ReturnStatement(); + if (!readIf(";")) { + returnStat.expr = readExpr(); + read(";"); + } + return returnStat; + } else { + if (isTypeOrIdentifier()) { + ParseState start = copyParseState(); + String name = readTypeOrIdentifier(); + ClassObj c = getClassIf(name); + if (c != null) { + VarDecStatement dec = new 
VarDecStatement(); + dec.type = readType(name); + while (true) { + String varName = readIdentifier(); + Expr value = null; + if (readIf("=")) { + if (dec.type.arrayLevel > 0 && readIf("{")) { + value = readArrayInit(dec.type); + } else { + value = readExpr(); + } + } + FieldObj f = new FieldObj(); + f.isVariable = true; + f.type = dec.type; + f.name = varName; + localVars.put(varName, f); + dec.addVariable(varName, value); + if (readIf(";")) { + break; + } + read(","); + } + return dec; + } + current = start; + // ExprStatement + } + ExprStatement stat = new ExprStatement(readExpr()); + read(";"); + return stat; + } + } + + private ParseState copyParseState() { + ParseState state = new ParseState(); + state.index = current.index; + state.line = current.line; + state.token = current.token; + state.type = current.type; + return state; + } + + private Expr readExpr() { + Expr expr = readExpr1(); + String assign = current.token; + if (readIf("=") || readIf("+=") || readIf("-=") || readIf("*=") + || readIf("/=") || readIf("&=") || readIf("|=") || readIf("^=") + || readIf("%=") || readIf("<<=") || readIf(">>=") + || readIf(">>>=")) { + AssignExpr assignOp = new AssignExpr(); + assignOp.left = expr; + assignOp.op = assign; + assignOp.right = readExpr1(); + expr = assignOp; + } + return expr; + } + + private Expr readExpr1() { + Expr expr = readExpr2(); + if (readIf("?")) { + ConditionalExpr ce = new ConditionalExpr(); + ce.condition = expr; + ce.ifTrue = readExpr(); + read(":"); + ce.ifFalse = readExpr(); + return ce; + } + return expr; + } + + private Expr readExpr2() { + Expr expr = readExpr2a(); + while (true) { + String infixOp = current.token; + if (readIf("||")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2a(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2a() { + Expr expr = readExpr2b(); + while (true) { + String infixOp = current.token; + if (readIf("&&")) 
{ + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2b(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2b() { + Expr expr = readExpr2c(); + while (true) { + String infixOp = current.token; + if (readIf("|")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2c(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2c() { + Expr expr = readExpr2d(); + while (true) { + String infixOp = current.token; + if (readIf("^")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2d(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2d() { + Expr expr = readExpr2e(); + while (true) { + String infixOp = current.token; + if (readIf("&")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2e(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2e() { + Expr expr = readExpr2f(); + while (true) { + String infixOp = current.token; + if (readIf("==") || readIf("!=")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2f(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2f() { + Expr expr = readExpr2g(); + while (true) { + String infixOp = current.token; + if (readIf("<") || readIf(">") || readIf("<=") || readIf(">=")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2g(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2g() { + Expr expr = readExpr2h(); + while (true) { + String infixOp = current.token; + if (readIf("<<") || readIf(">>") || readIf(">>>")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = 
expr; + opExpr.op = infixOp; + opExpr.right = readExpr2h(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2h() { + Expr expr = readExpr2i(); + while (true) { + String infixOp = current.token; + if (readIf("+") || readIf("-")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr2i(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr2i() { + Expr expr = readExpr3(); + while (true) { + String infixOp = current.token; + if (readIf("*") || readIf("/") || readIf("%")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = infixOp; + opExpr.right = readExpr3(); + expr = opExpr; + } else { + break; + } + } + return expr; + } + + private Expr readExpr3() { + if (readIf("(")) { + if (isTypeOrIdentifier()) { + ParseState start = copyParseState(); + String name = readTypeOrIdentifier(); + ClassObj c = getClassIf(name); + if (c != null) { + read(")"); + CastExpr expr = new CastExpr(); + expr.type = new Type(); + expr.type.classObj = c; + expr.expr = readExpr(); + return expr; + } + current = start; + } + Expr expr = readExpr(); + read(")"); + return expr; + } + String prefix = current.token; + if (readIf("++") || readIf("--") || readIf("!") || readIf("~") + || readIf("+") || readIf("-")) { + OpExpr expr = new OpExpr(this); + expr.op = prefix; + expr.right = readExpr3(); + return expr; + } + Expr expr = readExpr4(); + String suffix = current.token; + if (readIf("++") || readIf("--")) { + OpExpr opExpr = new OpExpr(this); + opExpr.left = expr; + opExpr.op = suffix; + expr = opExpr; + } + return expr; + } + + private Expr readExpr4() { + if (readIf("false")) { + LiteralExpr expr = new LiteralExpr(this, "boolean"); + expr.literal = "false"; + return expr; + } else if (readIf("true")) { + LiteralExpr expr = new LiteralExpr(this, "boolean"); + expr.literal = "true"; + return expr; + } else if (readIf("null")) { + LiteralExpr 
expr = new LiteralExpr(this, "java.lang.Object"); + expr.literal = "null"; + return expr; + } else if (current.type == TOKEN_LITERAL_NUMBER) { + // TODO or long, float, double + LiteralExpr expr = new LiteralExpr(this, "int"); + expr.literal = current.token.substring(1); + readToken(); + return expr; + } else if (current.type == TOKEN_LITERAL_CHAR) { + LiteralExpr expr = new LiteralExpr(this, "char"); + expr.literal = current.token + "'"; + readToken(); + return expr; + } else if (current.type == TOKEN_LITERAL_STRING) { + String text = current.token.substring(1); + StringExpr expr = getStringConstant(text); + readToken(); + return expr; + } + Expr expr; + expr = readExpr5(); + while (true) { + if (readIf(".")) { + String n = readIdentifier(); + if (readIf("(")) { + CallExpr e2 = new CallExpr(this, expr, null, n); + if (!readIf(")")) { + while (true) { + e2.args.add(readExpr()); + if (!readIf(",")) { + read(")"); + break; + } + } + } + expr = e2; + } else { + VariableExpr e2 = new VariableExpr(this); + e2.base = expr; + expr = e2; + e2.name = n; + } + } else if (readIf("[")) { + ArrayAccessExpr arrayExpr = new ArrayAccessExpr(); + arrayExpr.base = expr; + arrayExpr.index = readExpr(); + read("]"); + return arrayExpr; + } else { + break; + } + } + return expr; + } + + private StringExpr getStringConstant(String s) { + String c = stringToStringConstantMap.get(s); + if (c == null) { + StringBuilder buff = new StringBuilder(); + for (int i = 0; i < s.length() && i < 16; i++) { + char ch = s.charAt(i); + if (ch >= 'a' && ch <= 'z') { + // don't use Character.toUpperCase + // to avoid locale problems + // (the uppercase of 'i' is not always 'I') + buff.append((char) (ch + 'A' - 'a')); + } else if (ch >= 'A' && ch <= 'Z') { + buff.append(ch); + } else if (ch == '_' || ch == ' ') { + buff.append('_'); + } + } + c = buff.toString(); + if (c.length() == 0 || stringConstantToStringMap.containsKey(c)) { + if (c.length() == 0) { + c = "X"; + } + int i = 2; + for (;; i++) { + 
String c2 = c + "_" + i; + if (!stringConstantToStringMap.containsKey(c2)) { + c = c2; + break; + } + } + } + c = "STRING_" + c; + stringToStringConstantMap.put(s, c); + stringConstantToStringMap.put(c, s); + } + StringExpr expr = new StringExpr(this); + expr.text = s; + expr.constantName = c; + return expr; + } + + private Expr readExpr5() { + if (readIf("new")) { + NewExpr expr = new NewExpr(); + String typeName = readTypeOrIdentifier(); + expr.classObj = getClass(typeName); + if (readIf("(")) { + if (!readIf(")")) { + while (true) { + expr.args.add(readExpr()); + if (!readIf(",")) { + read(")"); + break; + } + } + } + } else { + while (readIf("[")) { + expr.arrayInitExpr.add(readExpr()); + read("]"); + } + } + return expr; + } + if (readIf("this")) { + VariableExpr expr = new VariableExpr(this); + if (thisPointer == null) { + throw getSyntaxException("'this' used in a static context"); + } + expr.field = thisPointer; + return expr; + } + String name = readIdentifier(); + if (readIf("(")) { + VariableExpr t; + if (thisPointer == null) { + // static method calling another static method + t = null; + } else { + // non-static method calling a static or non-static method + t = new VariableExpr(this); + t.field = thisPointer; + } + CallExpr expr = new CallExpr(this, t, classObj.className, name); + if (!readIf(")")) { + while (true) { + expr.args.add(readExpr()); + if (!readIf(",")) { + read(")"); + break; + } + } + } + return expr; + } + VariableExpr expr = new VariableExpr(this); + FieldObj f = localVars.get(name); + if (f == null) { + f = method.parameters.get(name); + } + if (f == null) { + f = classObj.staticFields.get(name); + } + if (f == null) { + f = classObj.instanceFields.get(name); + } + if (f == null) { + String imp = importMap.get(name); + if (imp == null) { + imp = JAVA_IMPORT_MAP.get(name); + } + if (imp != null) { + name = imp; + if (readIf(".")) { + String n = readIdentifier(); + if (readIf("(")) { + CallExpr e2 = new CallExpr(this, null, imp, n); + 
if (!readIf(")")) { + while (true) { + e2.args.add(readExpr()); + if (!readIf(",")) { + read(")"); + break; + } + } + } + return e2; + } + VariableExpr e2 = new VariableExpr(this); + // static member variable + e2.name = imp + "." + n; + ClassObj c = classes.get(imp); + FieldObj sf = c.staticFields.get(n); + e2.field = sf; + return e2; + } + // TODO static field or method of a class + } + } + expr.field = f; + if (f != null && (!f.isVariable && !f.isStatic)) { + VariableExpr ve = new VariableExpr(this); + ve.field = thisPointer; + expr.base = ve; + if (thisPointer == null) { + throw getSyntaxException("'this' used in a static context"); + } + } + expr.name = name; + return expr; + } + + private void read(String string) { + if (!readIf(string)) { + throw getSyntaxException(string + " expected, got " + current.token); + } + } + + private String readQualifiedIdentifier() { + String id = readIdentifier(); + if (localVars.containsKey(id)) { + return id; + } + if (classObj != null) { + if (classObj.staticFields.containsKey(id)) { + return id; + } + if (classObj.instanceFields.containsKey(id)) { + return id; + } + } + String fullName = importMap.get(id); + if (fullName != null) { + return fullName; + } + while (readIf(".")) { + id += "." 
+ readIdentifier(); + } + return id; + } + + private String readIdentifier() { + if (current.type != TOKEN_IDENTIFIER) { + throw getSyntaxException("identifier expected, got " + + current.token); + } + String result = current.token; + readToken(); + return result; + } + + private boolean readIdentifierIf(String token) { + if (current.type == TOKEN_IDENTIFIER && token.equals(current.token)) { + readToken(); + return true; + } + return false; + } + + private boolean readIf(String token) { + if (current.type != TOKEN_IDENTIFIER && token.equals(current.token)) { + readToken(); + return true; + } + return false; + } + + private String read() { + String token = current.token; + readToken(); + return token; + } + + private RuntimeException getSyntaxException(String message) { + return new RuntimeException(message, new ParseException(source, + current.index)); + } + + /** + * Replace all Unicode escapes. + * + * @param s the text + * @return the cleaned text + */ + static String replaceUnicode(String s) { + if (s.indexOf("\\u") < 0) { + return s; + } + StringBuilder buff = new StringBuilder(s.length()); + for (int i = 0; i < s.length(); i++) { + if (s.substring(i).startsWith("\\\\")) { + buff.append("\\\\"); + i++; + } else if (s.substring(i).startsWith("\\u")) { + i += 2; + while (s.charAt(i) == 'u') { + i++; + } + String c = s.substring(i, i + 4); + buff.append((char) Integer.parseInt(c, 16)); + i += 4; + } else { + buff.append(s.charAt(i)); + } + } + return buff.toString(); + } + + /** + * Replace all Unicode escapes and remove all remarks. 
+ * + * @param s the source code + * @return the cleaned source code + */ + static String removeRemarks(String s) { + char[] chars = s.toCharArray(); + for (int i = 0; i >= 0 && i < s.length(); i++) { + if (s.charAt(i) == '\'') { + i++; + while (true) { + if (s.charAt(i) == '\\') { + i++; + } else if (s.charAt(i) == '\'') { + break; + } + i++; + } + continue; + } else if (s.charAt(i) == '\"') { + i++; + while (true) { + if (s.charAt(i) == '\\') { + i++; + } else if (s.charAt(i) == '\"') { + break; + } + i++; + } + continue; + } + String sub = s.substring(i); + if (sub.startsWith("/*") && !sub.startsWith("/* c:")) { + int j = i; + i = s.indexOf("*/", i + 2) + 2; + for (; j < i; j++) { + if (chars[j] > ' ') { + chars[j] = ' '; + } + } + } else if (sub.startsWith("//") && !sub.startsWith("// c:")) { + int j = i; + i = s.indexOf('\n', i); + while (j < i) { + chars[j++] = ' '; + } + } + } + return new String(chars) + " "; + } + + private void readToken() { + int ch; + while (true) { + if (current.index >= source.length()) { + current.token = null; + return; + } + ch = source.charAt(current.index); + if (ch == '\n') { + current.line++; + } else if (ch > ' ') { + break; + } + current.index++; + } + int start = current.index; + if (Character.isJavaIdentifierStart(ch)) { + while (Character.isJavaIdentifierPart(source.charAt(current.index))) { + current.index++; + } + current.token = source.substring(start, current.index); + if (RESERVED.contains(current.token)) { + current.type = TOKEN_RESERVED; + } else { + current.type = TOKEN_IDENTIFIER; + } + return; + } else if (Character.isDigit(ch) + || (ch == '.' 
&& Character.isDigit(source + .charAt(current.index + 1)))) { + String s = source.substring(current.index); + current.token = "0" + readNumber(s); + current.index += current.token.length() - 1; + current.type = TOKEN_LITERAL_NUMBER; + return; + } + current.index++; + switch (ch) { + case '\'': { + while (true) { + if (source.charAt(current.index) == '\\') { + current.index++; + } else if (source.charAt(current.index) == '\'') { + break; + } + current.index++; + } + current.index++; + current.token = source.substring(start + 1, current.index); + current.token = "\'" + javaDecode(current.token, '\''); + current.type = TOKEN_LITERAL_CHAR; + return; + } + case '\"': { + while (true) { + if (source.charAt(current.index) == '\\') { + current.index++; + } else if (source.charAt(current.index) == '\"') { + break; + } + current.index++; + } + current.index++; + current.token = source.substring(start + 1, current.index); + current.token = "\"" + javaDecode(current.token, '\"'); + current.type = TOKEN_LITERAL_STRING; + return; + } + case '(': + case ')': + case '[': + case ']': + case '{': + case '}': + case ';': + case ',': + case '?': + case ':': + case '@': + break; + case '.': + if (source.charAt(current.index) == '.' 
+ && source.charAt(current.index + 1) == '.') { + current.index += 2; + } + break; + case '+': + if (source.charAt(current.index) == '=' + || source.charAt(current.index) == '+') { + current.index++; + } + break; + case '-': + if (source.charAt(current.index) == '=' + || source.charAt(current.index) == '-') { + current.index++; + } + break; + case '>': + if (source.charAt(current.index) == '>') { + current.index++; + if (source.charAt(current.index) == '>') { + current.index++; + } + } + if (source.charAt(current.index) == '=') { + current.index++; + } + break; + case '<': + if (source.charAt(current.index) == '<') { + current.index++; + } + if (source.charAt(current.index) == '=') { + current.index++; + } + break; + case '/': + if (source.charAt(current.index) == '*' + || source.charAt(current.index) == '/' + || source.charAt(current.index) == '=') { + current.index++; + } + break; + case '*': + case '~': + case '!': + case '=': + case '%': + case '^': + if (source.charAt(current.index) == '=') { + current.index++; + } + break; + case '&': + if (source.charAt(current.index) == '&') { + current.index++; + } else if (source.charAt(current.index) == '=') { + current.index++; + } + break; + case '|': + if (source.charAt(current.index) == '|') { + current.index++; + } else if (source.charAt(current.index) == '=') { + current.index++; + } + break; + } + current.type = TOKEN_OTHER; + current.token = source.substring(start, current.index); + } + + /** + * Parse a number literal and returns it. 
+ * + * @param s the source code + * @return the number + */ + static String readNumber(String s) { + int i = 0; + if (s.startsWith("0x") || s.startsWith("0X")) { + i = 2; + while (true) { + char ch = s.charAt(i); + if ((ch < '0' || ch > '9') && (ch < 'a' || ch > 'f') + && (ch < 'A' || ch > 'F')) { + break; + } + i++; + } + if (s.charAt(i) == 'l' || s.charAt(i) == 'L') { + i++; + } + } else { + while (true) { + char ch = s.charAt(i); + if ((ch < '0' || ch > '9') && ch != '.') { + break; + } + i++; + } + if (s.charAt(i) == 'e' || s.charAt(i) == 'E') { + i++; + if (s.charAt(i) == '-' || s.charAt(i) == '+') { + i++; + } + while (Character.isDigit(s.charAt(i))) { + i++; + } + } + if (s.charAt(i) == 'f' || s.charAt(i) == 'F' || s.charAt(i) == 'd' + || s.charAt(i) == 'D' || s.charAt(i) == 'L' + || s.charAt(i) == 'l') { + i++; + } + } + return s.substring(0, i); + } + + private static RuntimeException getFormatException(String s, int i) { + return new RuntimeException(new ParseException(s, i)); + } + + private static String javaDecode(String s, char end) { + StringBuilder buff = new StringBuilder(s.length()); + for (int i = 0; i < s.length(); i++) { + char c = s.charAt(i); + if (c == end) { + break; + } else if (c == '\\') { + if (i >= s.length()) { + throw getFormatException(s, s.length() - 1); + } + c = s.charAt(++i); + switch (c) { + case 't': + buff.append('\t'); + break; + case 'r': + buff.append('\r'); + break; + case 'n': + buff.append('\n'); + break; + case 'b': + buff.append('\b'); + break; + case 'f': + buff.append('\f'); + break; + case '"': + buff.append('"'); + break; + case '\'': + buff.append('\''); + break; + case '\\': + buff.append('\\'); + break; + case 'u': { + try { + c = (char) (Integer.parseInt(s.substring(i + 1, i + 5), + 16)); + } catch (NumberFormatException e) { + throw getFormatException(s, i); + } + i += 4; + buff.append(c); + break; + } + default: + if (c >= '0' && c <= '9') { + try { + c = (char) (Integer.parseInt(s.substring(i, i + 3), + 
8)); + } catch (NumberFormatException e) { + throw getFormatException(s, i); + } + i += 2; + buff.append(c); + } else { + throw getFormatException(s, i); + } + } + } else { + buff.append(c); + } + } + return buff.toString(); + } + + /** + * Write the C++ header. + * + * @param out the output writer + */ + void writeHeader(PrintWriter out) { + for (Statement s : nativeHeaders) { + out.println(s.asString()); + } + if (JavaParser.REF_COUNT_STATIC) { + out.println("#define STRING(s) STRING_REF(s)"); + } else { + out.println("#define STRING(s) STRING_PTR(s)"); + } + out.println(); + for (ClassObj c : classes.values()) { + out.println("class " + toC(c.className) + ";"); + } + for (ClassObj c : classes.values()) { + for (FieldObj f : c.staticFields.values()) { + StringBuilder buff = new StringBuilder(); + buff.append("extern "); + if (f.isFinal) { + buff.append("const "); + } + buff.append(f.type.asString()); + buff.append(" ").append(toC(c.className + "." + f.name)); + buff.append(";"); + out.println(buff.toString()); + } + for (ArrayList<MethodObj> list : c.methods.values()) { + for (MethodObj m : list) { + if (m.isIgnore) { + continue; + } + if (m.isStatic) { + out.print(m.returnType.asString()); + out.print(" " + toC(c.className + "_" + m.name) + "("); + int i = 0; + for (FieldObj p : m.parameters.values()) { + if (i > 0) { + out.print(", "); + } + out.print(p.type.asString() + " " + p.name); + i++; + } + out.println(");"); + } + } + } + out.print("class " + toC(c.className) + " : public "); + if (c.superClassName == null) { + if (c.className.equals("java.lang.Object")) { + out.print("RefBase"); + } else { + out.print("java_lang_Object"); + } + } else { + out.print(toC(c.superClassName)); + } + out.println(" {"); + out.println("public:"); + for (FieldObj f : c.instanceFields.values()) { + out.print(" "); + out.print(f.type.asString() + " " + f.name); + out.println(";"); + } + out.println("public:"); + for (ArrayList<MethodObj> list : c.methods.values()) { + for (MethodObj m : list) { + if
(m.isIgnore) { + continue; + } + if (m.isStatic) { + continue; + } + if (m.isConstructor) { + out.print(" " + toC(c.className) + "("); + } else { + out.print(" " + m.returnType.asString() + " " + + m.name + "("); + } + int i = 0; + for (FieldObj p : m.parameters.values()) { + if (i > 0) { + out.print(", "); + } + out.print(p.type.asString()); + out.print(" " + p.name); + i++; + } + out.println(");"); + } + } + out.println("};"); + } + ArrayList<String> constantNames = new ArrayList<>(stringConstantToStringMap.keySet()); + Collections.sort(constantNames); + for (String c : constantNames) { + String s = stringConstantToStringMap.get(c); + if (JavaParser.REF_COUNT_STATIC) { + out.println("ptr<java_lang_String> " + c + " = STRING(L\"" + s + + "\");"); + } else { + out.println("java_lang_String* " + c + " = STRING(L\"" + s + + "\");"); + } + } + } + + /** + * Write the C++ source code. + * + * @param out the output writer + */ + void writeSource(PrintWriter out) { + for (ClassObj c : classes.values()) { + out.println("/* " + c.className + " */"); + for (Statement s : c.nativeCode) { + out.println(s.asString()); + } + for (FieldObj f : c.staticFields.values()) { + StringBuilder buff = new StringBuilder(); + if (f.isFinal) { + buff.append("const "); + } + buff.append(f.type.asString()); + buff.append(" ").append(toC(c.className + "."
+ f.name)); + if (f.value != null) { + buff.append(" = ").append(f.value.asString()); + } + buff.append(";"); + out.println(buff.toString()); + } + for (ArrayList<MethodObj> list : c.methods.values()) { + for (MethodObj m : list) { + if (m.isIgnore) { + continue; + } + if (m.isStatic) { + out.print(m.returnType.asString() + " " + + toC(c.className + "_" + m.name) + "("); + } else if (m.isConstructor) { + out.print(toC(c.className) + "::" + toC(c.className) + + "("); + } else { + out.print(m.returnType.asString() + " " + + toC(c.className) + "::" + m.name + "("); + } + int i = 0; + for (FieldObj p : m.parameters.values()) { + if (i > 0) { + out.print(", "); + } + out.print(p.type.asString() + " " + p.name); + i++; + } + out.println(") {"); + if (m.isConstructor) { + for (FieldObj f : c.instanceFields.values()) { + out.print(" "); + out.print("this->" + f.name); + out.print(" = " + f.value.asString()); + out.println(";"); + } + } + if (m.block != null) { + m.block.setMethod(m); + out.print(m.block.asString()); + } + out.println("}"); + out.println(); + } + } + } + } + + private static String indent(String s, int spaces) { + StringBuilder buff = new StringBuilder(s.length() + spaces); + for (int i = 0; i < s.length();) { + for (int j = 0; j < spaces; j++) { + buff.append(' '); + } + int n = s.indexOf('\n', i); + n = n < 0 ? s.length() : n + 1; + buff.append(s.substring(i, n)); + i = n; + } + if (!s.endsWith("\n")) { + buff.append('\n'); + } + return buff.toString(); + } + + /** + * Move the source code 4 levels to the right. + * + * @param o the source code + * @return the indented code + */ + static String indent(String o) { + return indent(o, 4); + } + + /** + * Get the C++ representation of this identifier. + * + * @param identifier the identifier + * @return the C representation + */ + static String toC(String identifier) { + return identifier.replace('.', '_'); + } + + ClassObj getClassObj() { + return classObj; + } + + /** + * Get the class of the given name.
+ * + * @param className the name + * @return the class + */ + ClassObj getClassObj(String className) { + ClassObj c = BUILT_IN_CLASSES.get(className); + if (c == null) { + c = classes.get(className); + } + return c; + } + +} + +/** + * The parse state. + */ +class ParseState { + + /** + * The parse index. + */ + int index; + + /** + * The token type + */ + int type; + + /** + * The token text. + */ + String token; + + /** + * The line number. + */ + int line; +} \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/java/Local.java b/modules/h2/src/test/tools/org/h2/java/Local.java new file mode 100644 index 0000000000000..be5dbb24883a4 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/Local.java @@ -0,0 +1,14 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java; + +/** + * This annotation marks fields that are not shared and therefore don't need to + * be garbage collected separately. + */ +public @interface Local { + // empty +} diff --git a/modules/h2/src/test/tools/org/h2/java/Statement.java b/modules/h2/src/test/tools/org/h2/java/Statement.java new file mode 100644 index 0000000000000..7206cf30eaca9 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/Statement.java @@ -0,0 +1,504 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java; + +import java.util.ArrayList; + +/** + * A statement. + */ +public interface Statement { + + void setMethod(MethodObj method); + boolean isEnd(); + + /** + * Get the C++ code. + * + * @return the C++ code + */ + String asString(); + +} + +/** + * The base class for statements. 
+ */ +abstract class StatementBase implements Statement { + + @Override + public boolean isEnd() { + return false; + } + +} + +/** + * A "return" statement. + */ +class ReturnStatement extends StatementBase { + + /** + * The return expression. + */ + Expr expr; + + private MethodObj method; + + @Override + public void setMethod(MethodObj method) { + this.method = method; + } + + @Override + public String asString() { + if (expr == null) { + return "return;"; + } + Type returnType = method.returnType; + expr.setType(returnType); + if (!expr.getType().isObject()) { + return "return " + expr.asString() + ";"; + } + if (returnType.refCount) { + return "return " + expr.getType().asString() + "(" + expr.asString() + ");"; + } + return "return " + expr.asString() + ";"; + } + +} + +/** + * A "do .. while" statement. + */ +class DoWhileStatement extends StatementBase { + + /** + * The condition. + */ + Expr condition; + + /** + * The execution block. + */ + Statement block; + + @Override + public void setMethod(MethodObj method) { + block.setMethod(method); + } + + @Override + public String asString() { + return "do {\n" + block + "} while (" + condition.asString() + ");"; + } + +} + +/** + * A "continue" statement. + */ +class ContinueStatement extends StatementBase { + + @Override + public void setMethod(MethodObj method) { + // ignore + } + + @Override + public String asString() { + return "continue;"; + } + +} + +/** + * A "break" statement. + */ +class BreakStatement extends StatementBase { + + @Override + public void setMethod(MethodObj method) { + // ignore + } + + @Override + public String asString() { + return "break;"; + } + +} + +/** + * An empty statement. + */ +class EmptyStatement extends StatementBase { + + @Override + public void setMethod(MethodObj method) { + // ignore + } + + @Override + public String asString() { + return ";"; + } + +} + +/** + * A "switch" statement. 
+ */ +class SwitchStatement extends StatementBase { + + private StatementBlock defaultBlock; + private final ArrayList<Expr> cases = new ArrayList<>(); + private final ArrayList<StatementBlock> blocks = + new ArrayList<>(); + private final Expr expr; + + public SwitchStatement(Expr expr) { + this.expr = expr; + } + + @Override + public void setMethod(MethodObj method) { + defaultBlock.setMethod(method); + for (StatementBlock b : blocks) { + b.setMethod(method); + } + } + + @Override + public String asString() { + StringBuilder buff = new StringBuilder(); + buff.append("switch (").append(expr.asString()).append(") {\n"); + for (int i = 0; i < cases.size(); i++) { + buff.append("case " + cases.get(i).asString() + ":\n"); + buff.append(blocks.get(i).toString()); + } + if (defaultBlock != null) { + buff.append("default:\n"); + buff.append(defaultBlock.toString()); + } + buff.append("}"); + return buff.toString(); + } + + public void setDefaultBlock(StatementBlock block) { + this.defaultBlock = block; + } + + /** + * Add a case. + * + * @param expr the case expression + * @param block the execution block + */ + public void addCase(Expr expr, StatementBlock block) { + cases.add(expr); + blocks.add(block); + } + +} + +/** + * An expression statement. + */ +class ExprStatement extends StatementBase { + + private final Expr expr; + + public ExprStatement(Expr expr) { + this.expr = expr; + } + + @Override + public void setMethod(MethodObj method) { + // ignore + } + + @Override + public String asString() { + return expr.asString() + ";"; + } + +} + +/** + * A "while" statement. + */ +class WhileStatement extends StatementBase { + + /** + * The condition. + */ + Expr condition; + + /** + * The execution block.
+ */ + Statement block; + + @Override + public void setMethod(MethodObj method) { + block.setMethod(method); + } + + @Override + public String asString() { + String w = "while (" + condition.asString() + ")"; + String s = block.toString(); + return w + "\n" + s; + } + +} + +/** + * An "if" statement. + */ +class IfStatement extends StatementBase { + + /** + * The condition. + */ + Expr condition; + + /** + * The execution block. + */ + Statement block; + + /** + * The else block. + */ + Statement elseBlock; + + @Override + public void setMethod(MethodObj method) { + block.setMethod(method); + if (elseBlock != null) { + elseBlock.setMethod(method); + } + } + + @Override + public String asString() { + String w = "if (" + condition.asString() + ") {\n"; + String s = block.asString(); + if (elseBlock != null) { + s += "} else {\n" + elseBlock.asString(); + } + return w + s + "}"; + } + +} + +/** + * A "for" statement. + */ +class ForStatement extends StatementBase { + + /** + * The init block. + */ + Statement init; + + /** + * The condition. + */ + Expr condition; + + /** + * The main loop block. + */ + Statement block; + + /** + * The update list. + */ + ArrayList<Expr> updates = new ArrayList<>(); + + /** + * The type of the iterable. + */ + Type iterableType; + + /** + * The iterable variable name. + */ + String iterableVariable; + + /** + * The iterable expression.
+ */ + Expr iterable; + + @Override + public void setMethod(MethodObj method) { + block.setMethod(method); + } + + @Override + public String asString() { + StringBuilder buff = new StringBuilder(); + buff.append("for ("); + if (iterableType != null) { + Type it = iterable.getType(); + if (it != null && it.arrayLevel > 0) { + String idx = "i_" + iterableVariable; + buff.append("int " + idx + " = 0; " + + idx + " < " + iterable.asString() + "->length(); " + + idx + "++"); + buff.append(") {\n"); + buff.append(JavaParser.indent(iterableType + + " " + iterableVariable + " = " + + iterable.asString() + "->at("+ idx +");\n")); + buff.append(block.toString()).append("}"); + } else { + // TODO iterate over a collection + buff.append(iterableType).append(' '); + buff.append(iterableVariable).append(": "); + buff.append(iterable); + buff.append(") {\n"); + buff.append(block.toString()).append("}"); + } + } else { + buff.append(init.asString()); + buff.append(" ").append(condition.asString()).append("; "); + for (int i = 0; i < updates.size(); i++) { + if (i > 0) { + buff.append(", "); + } + buff.append(updates.get(i).asString()); + } + buff.append(") {\n"); + buff.append(block.asString()).append("}"); + } + return buff.toString(); + } + +} + +/** + * A statement block. + */ +class StatementBlock extends StatementBase { + + /** + * The list of instructions. + */ + final ArrayList<Statement> instructions = new ArrayList<>(); + + @Override + public void setMethod(MethodObj method) { + for (Statement s : instructions) { + s.setMethod(method); + } + } + + @Override + public String asString() { + StringBuilder buff = new StringBuilder(); + for (Statement s : instructions) { + if (s.isEnd()) { + break; + } + buff.append(JavaParser.indent(s.asString())); + } + return buff.toString(); + } + +} + +/** + * A variable declaration. + */ +class VarDecStatement extends StatementBase { + + /** + * The type.
+ */ + Type type; + + private final ArrayList<String> variables = new ArrayList<>(); + private final ArrayList<Expr> values = new ArrayList<>(); + + @Override + public void setMethod(MethodObj method) { + // ignore + } + + @Override + public String asString() { + StringBuilder buff = new StringBuilder(); + buff.append(type.asString()).append(' '); + StringBuilder assign = new StringBuilder(); + for (int i = 0; i < variables.size(); i++) { + if (i > 0) { + buff.append(", "); + } + String varName = variables.get(i); + buff.append(varName); + Expr value = values.get(i); + if (value != null) { + if (!value.getType().isObject()) { + buff.append(" = ").append(value.asString()); + } else { + value.setType(type); + assign.append(varName).append(" = ").append(value.asString()).append(";\n"); + } + } + } + buff.append(";"); + if (assign.length() > 0) { + buff.append("\n"); + buff.append(assign); + } + return buff.toString(); + } + + /** + * Add a variable. + * + * @param name the variable name + * @param value the init value + */ + public void addVariable(String name, Expr value) { + variables.add(name); + values.add(value); + } + +} + +/** + * A native statement. + */ +class StatementNative extends StatementBase { + + private final String code; + + StatementNative(String code) { + this.code = code; + } + + @Override + public void setMethod(MethodObj method) { + // ignore + } + + @Override + public String asString() { + return code; + } + + @Override + public boolean isEnd() { + return code.equals("return;"); + } + +} + diff --git a/modules/h2/src/test/tools/org/h2/java/Test.java b/modules/h2/src/test/tools/org/h2/java/Test.java new file mode 100644 index 0000000000000..ff84e72b187ec --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/Test.java @@ -0,0 +1,92 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group + */ +package org.h2.java; + +import java.io.FileWriter; +import java.io.IOException; +import java.io.PrintWriter; +import org.h2.test.TestBase; + +/** + * A test for the Java parser. + */ +public class Test extends TestBase { + + /** + * Start the task with the given arguments. + * + * @param args the arguments, or null + */ + public static void main(String... args) throws IOException { + new Test().test(); + } + + @Override + public void test() throws IOException { + // g++ -o test test.cpp + // chmod +x test + // ./test + + // TODO initialize fields + + // include files: + // /usr/include/c++/4.2.1/tr1/stdio.h + // /usr/include/stdio.h + // inttypes.h + + // not supported yet: + // exceptions + // HexadecimalFloatingPointLiteral + // int x()[] { return null; } + // import static + // import * + // initializer blocks + // access to static fields with instance variable + // final variables (within blocks, parameter list) + // Identifier : (labels) + // ClassOrInterfaceDeclaration within blocks + // (or any other nested classes) + // assert + + assertEquals("\\\\" + "u0000", JavaParser.replaceUnicode("\\\\" + "u0000")); + assertEquals("\u0000", JavaParser.replaceUnicode("\\" + "u0000")); + assertEquals("\u0000", JavaParser.replaceUnicode("\\" + "uu0000")); + assertEquals("\\\\" + "\u0000", JavaParser.replaceUnicode("\\\\\\" + "u0000")); + + assertEquals("0", JavaParser.readNumber("0a")); + assertEquals("0l", JavaParser.readNumber("0l")); + assertEquals("0xFFL", JavaParser.readNumber("0xFFLx")); + assertEquals("0xDadaCafe", JavaParser.readNumber("0xDadaCafex")); + assertEquals("1.40e-45f", JavaParser.readNumber("1.40e-45fx")); + assertEquals("1e1f", JavaParser.readNumber("1e1fx")); + assertEquals("2.f", JavaParser.readNumber("2.fx")); + assertEquals(".3d", JavaParser.readNumber(".3dx")); + assertEquals("6.022137e+23f", JavaParser.readNumber("6.022137e+23f+1")); + + JavaParser parser = new JavaParser(); + 
parser.parse("src/tools/org/h2", "java.lang.Object"); + parser.parse("src/tools/org/h2", "java.lang.String"); + parser.parse("src/tools/org/h2", "java.lang.Math"); + parser.parse("src/tools/org/h2", "java.lang.Integer"); + parser.parse("src/tools/org/h2", "java.lang.Long"); + parser.parse("src/tools/org/h2", "java.lang.StringBuilder"); + parser.parse("src/tools/org/h2", "java.io.PrintStream"); + parser.parse("src/tools/org/h2", "java.lang.System"); + parser.parse("src/tools/org/h2", "java.util.Arrays"); + parser.parse("src/tools", "org.h2.java.TestApp"); + + PrintWriter w = new PrintWriter(System.out); + parser.writeHeader(w); + parser.writeSource(w); + w.flush(); + w = new PrintWriter(new FileWriter("bin/test.cpp")); + parser.writeHeader(w); + parser.writeSource(w); + w.close(); + + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/TestApp.java b/modules/h2/src/test/tools/org/h2/java/TestApp.java new file mode 100644 index 0000000000000..4daa6ee1f7f2b --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/TestApp.java @@ -0,0 +1,58 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java; + +/** + * A test application. + */ +public class TestApp { + +/* c: + +int main(int argc, char** argv) { +// org_h2_java_TestApp_main(0); + org_h2_java_TestApp_main(ptr< array< ptr<java_lang_String> > >()); +} + +*/ + + /** + * Run this application. + * + * @param args the command line arguments + */ + public static void main(String...
args) { + String[] list = new String[1000]; + for (int i = 0; i < 1000; i++) { + list[i] = "Hello " + i; + } + + // time:29244000 mac g++ -O3 without array bound checks + // time:30673000 mac java + // time:32449000 mac g++ -O3 + // time:69692000 mac g++ -O3 ref counted + // time:1200000000 raspberry g++ -O3 + // time:1720000000 raspberry g++ -O3 ref counted + // time:1980469000 raspberry java IcedTea6 1.8.13 Cacao VM + // time:12962645810 raspberry java IcedTea6 1.8.13 Zero VM + // java -XXaltjvm=cacao + + for (int k = 0; k < 4; k++) { + long t = System.nanoTime(); + long h = 0; + for (int j = 0; j < 10000; j++) { + for (int i = 0; i < 1000; i++) { + String s = list[i]; + h = (h * 7) ^ s.hashCode(); + } + } + System.out.println("hash: " + h); + t = System.nanoTime() - t; + System.out.println("time:" + t); + } + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/io/PrintStream.java b/modules/h2/src/test/tools/org/h2/java/io/PrintStream.java new file mode 100644 index 0000000000000..efac90b6f9c69 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/io/PrintStream.java @@ -0,0 +1,24 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.io; + +/** + * A print stream. + */ +public class PrintStream { + + /** + * Print the given string. + * + * @param s the string + */ + @SuppressWarnings("unused") + public void println(String s) { + // c: int x = s->chars->length(); + // c: printf("%.*S\n", x, s->chars->getPointer()); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/io/package.html b/modules/h2/src/test/tools/org/h2/java/io/package.html new file mode 100644 index 0000000000000..2eb12c32769d9 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/io/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A simple implementation of the java.lang.* package for the Java parser. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/java/lang/Integer.java b/modules/h2/src/test/tools/org/h2/java/lang/Integer.java new file mode 100644 index 0000000000000..eb52e9728e699 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/Integer.java @@ -0,0 +1,61 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.lang; + +/** + * A java.lang.Integer implementation. + */ +public class Integer { + + /** + * The smallest possible value. + */ + public static final int MIN_VALUE = 1 << 31; + + /** + * The largest possible value. + */ + public static final int MAX_VALUE = (int) ((1L << 31) - 1); + + /** + * Convert a value to a String. + * + * @param x the value + * @return the String + */ + public static String toString(int x) { + // c: wchar_t ch[20]; + // c: swprintf(ch, 20, L"%" PRId32, x); + // c: return STRING(ch); + // c: return; + if (x == MIN_VALUE) { + return String.wrap("-2147483648"); + } + char[] ch = new char[20]; + int i = 20 - 1, count = 0; + boolean negative; + if (x < 0) { + negative = true; + x = -x; + } else { + negative = false; + } + for (; i >= 0; i--) { + ch[i] = (char) ('0' + (x % 10)); + x /= 10; + count++; + if (x == 0) { + break; + } + } + if (negative) { + ch[--i] = '-'; + count++; + } + return new String(ch, i, count); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/lang/Long.java b/modules/h2/src/test/tools/org/h2/java/lang/Long.java new file mode 100644 index 0000000000000..69df8b041ee7e --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/Long.java @@ -0,0 +1,62 @@ +/* +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.lang; + +/** + * A java.lang.Long implementation. 
+ */ +public class Long { + + /** + * The smallest possible value. + */ + public static final long MIN_VALUE = 1L << 63; + + /** + * The largest possible value. + */ + public static final long MAX_VALUE = (1L << 63) - 1; + + /** + * Convert a value to a String. + * + * @param x the value + * @return the String + */ + public static String toString(long x) { + // c: wchar_t ch[30]; + // c: swprintf(ch, 30, L"%" PRId64, x); + // c: return STRING(ch); + // c: return; + if (x == MIN_VALUE) { + return String.wrap("-9223372036854775808"); + } + char[] ch = new char[30]; + int i = 30 - 1, count = 0; + boolean negative; + if (x < 0) { + negative = true; + x = -x; + } else { + negative = false; + } + for (; i >= 0; i--) { + ch[i] = (char) ('0' + (x % 10)); + x /= 10; + count++; + if (x == 0) { + break; + } + } + if (negative) { + ch[--i] = '-'; + count++; + } + return new String(ch, i, count); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/lang/Math.java b/modules/h2/src/test/tools/org/h2/java/lang/Math.java new file mode 100644 index 0000000000000..b6e39a2ccf6e0 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/Math.java @@ -0,0 +1,24 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.lang; + +/** + * A java.lang.Math implementation. + */ +public class Math { + + /** + * Get the larger of both values. + * + * @param a the first value + * @param b the second value + * @return the larger + */ + public static int max(int a, int b) { + return a > b ? a : b; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/lang/Object.java b/modules/h2/src/test/tools/org/h2/java/lang/Object.java new file mode 100644 index 0000000000000..6853cd2cdb07e --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/Object.java @@ -0,0 +1,27 @@ +/* + * Copyright 2004-2018 H2 Group.
Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.lang; + +/** + * A java.lang.Object implementation. + */ +public class Object { + + @Override + public int hashCode() { + return 0; + } + + public boolean equals(Object other) { + return other == this; + } + + @Override + public java.lang.String toString() { + return "?"; + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/lang/String.java b/modules/h2/src/test/tools/org/h2/java/lang/String.java new file mode 100644 index 0000000000000..86a76e1fe4f55 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/String.java @@ -0,0 +1,222 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.lang; + +import org.h2.java.Ignore; +import org.h2.java.Local; + +/* c: + +#include +#include +#include +#include +#include +#include +#define __STDC_FORMAT_MACROS +#include + +#define jvoid void +#define jboolean int8_t +#define jbyte int8_t +#define jchar wchar_t +#define jint int32_t +#define jlong int64_t +#define jfloat float +#define jdouble double +#define ujint uint32_t +#define ujlong uint64_t +#define true 1 +#define false 0 +#define null 0 + +#define STRING_REF(s) ptr \ + (new java_lang_String(ptr< array > \ + (new array(s, (jint) wcslen(s))))); + +#define STRING_PTR(s) new java_lang_String \ + (new array(s, (jint) wcslen(s))); + +class RefBase { +protected: + jint refCount; +public: + RefBase() { + refCount = 0; + } + void reference() { + refCount++; + } + void release() { + if (--refCount == 0) { + delete this; + } + } + virtual ~RefBase() { + } +}; +template class ptr { + T* pointer; +public: + explicit ptr(T* p=0) : pointer(p) { + if (p != 0) { + ((RefBase*)p)->reference(); + } + } + ptr(const ptr& p) : pointer(p.pointer) { + if (p.pointer != 0) { + 
((RefBase*)p.pointer)->reference(); + } + } + ~ptr() { + if (pointer != 0) { + ((RefBase*)pointer)->release(); + } + } + ptr& operator= (const ptr& p) { + if (this != &p && pointer != p.pointer) { + if (pointer != 0) { + ((RefBase*)pointer)->release(); + } + pointer = p.pointer; + if (pointer != 0) { + ((RefBase*)pointer)->reference(); + } + } + return *this; + } + T& operator*() { + return *pointer; + } + T* getPointer() { + return pointer; + } + T* operator->() { + return pointer; + } + jboolean operator==(const ptr& p) { + return pointer == p->pointer; + } + jboolean operator==(const RefBase* t) { + return pointer == t; + } +}; +template class array : RefBase { + jint len; + T* data; +public: + array(const T* d, jint len) { + this->len = len; + data = new T[len]; + memcpy(data, d, sizeof(T) * len); + } + array(jint len) { + this->len = len; + data = new T[len]; + } + ~array() { + delete[] data; + } + T* getPointer() { + return data; + } + jint length() { + return len; + } + T& operator[](jint index) { + if (index < 0 || index >= len) { + throw "index set"; + } + return data[index]; + } + T& at(jint index) { + if (index < 0 || index >= len) { + throw "index set"; + } + return data[index]; + } +}; + +*/ + +/** + * A java.lang.String implementation. + */ +public class String { + + /** + * The character array. + */ + @Local + char[] chars; + + private int hash; + + public String(char[] chars) { + this.chars = new char[chars.length]; + System.arraycopy(chars, 0, this.chars, 0, chars.length); + } + + public String(char[] chars, int offset, int count) { + this.chars = new char[count]; + System.arraycopy(chars, offset, this.chars, 0, count); + } + + @Override + public int hashCode() { + int h = hash; + if (h == 0) { + int size = chars.length; + if (size != 0) { + for (int i = 0; i < size; i++) { + h = h * 31 + chars[i]; + } + hash = h; + } + } + return h; + } + + /** + * Get the length of the string. 
+ * + * @return the length + */ + public int length() { + return chars.length; + } + + /** + * The toString method. + * + * @return the string + */ + public String toStringMethod() { + return this; + } + + /** + * Get the java.lang.String. + * + * @return the string + */ + @Ignore + public java.lang.String asString() { + return new java.lang.String(chars); + } + + /** + * Wrap a java.lang.String. + * + * @param x the string + * @return the object + */ + @Ignore + public static String wrap(java.lang.String x) { + return new String(x.toCharArray()); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/lang/StringBuilder.java b/modules/h2/src/test/tools/org/h2/java/lang/StringBuilder.java new file mode 100644 index 0000000000000..2d6014ee7ae13 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/StringBuilder.java @@ -0,0 +1,66 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.lang; + +/** + * A java.lang.StringBuilder implementation. + */ +public class StringBuilder { + + private int length; + private char[] buffer; + + public StringBuilder(String s) { + char[] chars = s.chars; + int len = chars.length; + buffer = new char[len]; + System.arraycopy(chars, 0, buffer, 0, len); + this.length = len; + } + + public StringBuilder() { + buffer = new char[10]; + } + + /** + * Append the given value. + * + * @param x the value + * @return this + */ + public StringBuilder append(String x) { + int l = x.length(); + ensureCapacity(l); + System.arraycopy(x.chars, 0, buffer, length, l); + length += l; + return this; + } + + /** + * Append the given value.
+ * + * @param x the value + * @return this + */ + public StringBuilder append(int x) { + append(Integer.toString(x)); + return this; + } + + @Override + public java.lang.String toString() { + return new java.lang.String(buffer, 0, length); + } + + private void ensureCapacity(int plus) { + if (buffer.length < length + plus) { + char[] b = new char[Math.max(length + plus, buffer.length * 2)]; + System.arraycopy(buffer, 0, b, 0, length); + buffer = b; + } + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/lang/System.java b/modules/h2/src/test/tools/org/h2/java/lang/System.java new file mode 100644 index 0000000000000..ec83d57e38851 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/System.java @@ -0,0 +1,77 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.lang; + +import java.io.PrintStream; + +/** + * A simple java.lang.System implementation. + */ +public class System { + + /** + * The stdout stream. + */ + public static PrintStream out; + + /** + * Copy data from the source to the target. + * Source and target may overlap. + * + * @param src the source array + * @param srcPos the first element in the source array + * @param dest the destination + * @param destPos the first element in the destination + * @param length the number of element to copy + */ + public static void arraycopy(char[] src, int srcPos, char[] dest, + int destPos, int length) { + /* c: + memmove(((jchar*)dest->getPointer()) + destPos, + ((jchar*)src->getPointer()) + srcPos, sizeof(jchar) * length); + */ + // c: return; + java.lang.System.arraycopy(src, srcPos, dest, destPos, length); + } + + /** + * Copy data from the source to the target. + * Source and target may overlap. 
    + * + * @param src the source array + * @param srcPos the first element in the source array + * @param dest the destination + * @param destPos the first element in the destination + * @param length the number of elements to copy + */ + public static void arraycopy(byte[] src, int srcPos, byte[] dest, + int destPos, int length) { + /* c: + memmove(((jbyte*)dest->getPointer()) + destPos, + ((jbyte*)src->getPointer()) + srcPos, sizeof(jbyte) * length); + */ + // c: return; + java.lang.System.arraycopy(src, srcPos, dest, destPos, length); + } + + /** + * Get the current time in nanoseconds (from an arbitrary origin, not since 1970-01-01). + * + * @return the nanoseconds + */ + public static long nanoTime() { + /* c: + #if CLOCKS_PER_SEC == 1000000 + return (jlong) clock() * 1000; + #else + return (jlong) clock() * 1000000 / CLOCKS_PER_SEC; + #endif + */ + // c: return; + return java.lang.System.nanoTime(); + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/lang/package.html b/modules/h2/src/test/tools/org/h2/java/lang/package.html new file mode 100644 index 0000000000000..2eb12c32769d9 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/lang/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A simple implementation of the java.lang.* package for the Java parser. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/java/package.html b/modules/h2/src/test/tools/org/h2/java/package.html new file mode 100644 index 0000000000000..61ea7ac1e78cf --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A Java parser implementation. + +

    \ No newline at end of file diff --git a/modules/h2/src/test/tools/org/h2/java/util/Arrays.java b/modules/h2/src/test/tools/org/h2/java/util/Arrays.java new file mode 100644 index 0000000000000..33ad92621ccbf --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/util/Arrays.java @@ -0,0 +1,74 @@ +/* + * Copyright 2004-2018 H2 Group. Multiple-Licensed under the MPL 2.0, + * and the EPL 1.0 (http://h2database.com/html/license.html). + * Initial Developer: H2 Group + */ +package org.h2.java.util; + +/** + * A simple implementation of java.util.Arrays + */ +public class Arrays { + + /** + * Fill an array with the given value. + * + * @param array the array + * @param x the value + */ + public static void fill(char[] array, char x) { + for (int i = 0, size = array.length; i < size; i++) { + array[i] = x; + } + } + + /** + * Fill an array with the given value. + * + * @param array the array + * @param x the value + */ + public static void fill(byte[] array, byte x) { + for (int i = 0; i < array.length; i++) { + array[i] = x; + } + } + + /** + * Fill an array with the given value. + * + * @param array the array + * @param x the value + */ + public static void fill(int[] array, int x) { + for (int i = 0; i < array.length; i++) { + array[i] = x; + } + } + + + /** + * Fill an array with the given value. + * + * @param array the array + * @param x the value + */ + public static void fillByte(byte[] array, byte x) { + for (int i = 0; i < array.length; i++) { + array[i] = x; + } + } + + /** + * Fill an array with the given value.
+ * + * @param array the array + * @param x the value + */ + public static void fillInt(int[] array, int x) { + for (int i = 0; i < array.length; i++) { + array[i] = x; + } + } + +} diff --git a/modules/h2/src/test/tools/org/h2/java/util/package.html b/modules/h2/src/test/tools/org/h2/java/util/package.html new file mode 100644 index 0000000000000..2eb12c32769d9 --- /dev/null +++ b/modules/h2/src/test/tools/org/h2/java/util/package.html @@ -0,0 +1,14 @@ + + + + +Javadoc package documentation +

    + +A simple implementation of the java.util.* package for the Java parser. + +

    \ No newline at end of file diff --git a/modules/hadoop/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelperImpl.java b/modules/hadoop/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelperImpl.java index 0e86529eed796..6da79b24f9633 100644 --- a/modules/hadoop/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelperImpl.java +++ b/modules/hadoop/src/main/java/org/apache/ignite/internal/processors/hadoop/HadoopHelperImpl.java @@ -35,7 +35,7 @@ */ public class HadoopHelperImpl implements HadoopHelper { /** Kernal context. */ - private final GridKernalContext ctx; + private GridKernalContext ctx; /** Common class loader. */ private volatile HadoopClassLoader ldr; @@ -130,4 +130,10 @@ public HadoopHelperImpl(GridKernalContext ctx) { throw new IgniteException("Failed to resolve Ignite work directory.", e); } } + + /** {@inheritDoc} */ + @Override public void close() { + // Force drop KernalContext link, because HadoopHelper leaks in some tests. 
+ ctx = null; + } } diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopAbstractMapReduceTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopAbstractMapReduceTest.java index fc6d7f8712869..fa224b262c0a1 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopAbstractMapReduceTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopAbstractMapReduceTest.java @@ -25,6 +25,7 @@ import java.util.SortedMap; import java.util.TreeMap; import java.util.UUID; +import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.Text; @@ -55,6 +56,7 @@ import org.apache.ignite.internal.processors.hadoop.counter.HadoopPerformanceCounter; import org.apache.ignite.internal.processors.hadoop.impl.examples.HadoopWordCount1; import org.apache.ignite.internal.processors.hadoop.impl.examples.HadoopWordCount2; +import org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopFileSystemsUtils; import org.apache.ignite.internal.processors.igfs.IgfsEx; import org.apache.ignite.internal.processors.igfs.IgfsUtils; import org.apache.ignite.internal.util.lang.GridAbsPredicate; @@ -344,6 +346,22 @@ private void checkJobStatistics(HadoopJobId jobId) throws IgniteCheckedException super.beforeTest(); } + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + igniteSecondary = null; + secondaryFs = null; + igfs = null; + + HadoopFileSystemsUtils.clearFileSystemCache(); + FileSystem.clearStatistics(); + + Map stat = GridTestUtils.getFieldValue(FileSystem.class, FileSystem.class, "statisticsTable"); + + stat.clear(); + } + /** * Start grid with IGFS. 
* diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopCommandLineTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopCommandLineTest.java index 2bcc15c6839dd..39afab3c209ca 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopCommandLineTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopCommandLineTest.java @@ -28,6 +28,7 @@ import java.nio.file.Files; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collection; import java.util.Collections; import java.util.List; import org.apache.ignite.IgniteSystemProperties; @@ -37,6 +38,7 @@ import org.apache.ignite.igfs.IgfsInputStream; import org.apache.ignite.igfs.IgfsPath; import org.apache.ignite.internal.IgnitionEx; +import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.hadoop.HadoopCommonUtils; import org.apache.ignite.internal.processors.hadoop.HadoopJobEx; import org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker; @@ -45,11 +47,17 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; +import org.apache.ignite.marshaller.Marshaller; +import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test of integration with Hadoop client via command line interface. */ +@RunWith(JUnit4.class) public class HadoopCommandLineTest extends GridCommonAbstractTest { /** IGFS instance. 
*/ private IgfsEx igfs; @@ -250,12 +258,34 @@ private ProcessBuilder createProcessBuilder() { res.environment().put("HADOOP_HOME", hadoopHome); res.environment().put("HADOOP_CLASSPATH", ggClsPath); res.environment().put("HADOOP_CONF_DIR", testWorkDir.getAbsolutePath()); + res.environment().put("HADOOP_OPTS", filteredJvmArgs()); res.redirectErrorStream(true); return res; } + /** + * Creates list of JVM arguments to be used to start hadoop process. + * + * @return JVM arguments. + */ + private String filteredJvmArgs() { + StringBuilder filteredJvmArgs = new StringBuilder(); + + filteredJvmArgs.append("-ea"); + + for (String arg : U.jvmArgs()) { + if (arg.startsWith("--add-opens") || arg.startsWith("--add-exports") || arg.startsWith("--add-modules") || + arg.startsWith("--patch-module") || arg.startsWith("--add-reads") || + arg.startsWith("-XX:+IgnoreUnrecognizedVMOptions")) + filteredJvmArgs.append(' ').append(arg); + } + + return filteredJvmArgs.toString(); + } + + /** * Waits for process exit and prints the its output. * @@ -332,6 +362,7 @@ private int executeHiveQuery(String qry) throws Exception { /** * Tests Hadoop command line integration. */ + @Test public void testHadoopCommandLine() throws Exception { assertEquals(0, executeHadoopCmd("fs", "-ls", "/")); @@ -408,6 +439,7 @@ private void checkQuery(String expRes, String qry) throws Exception { /** * Tests Hive integration. 
*/ + @Test public void testHiveCommandLine() throws Exception { assertEquals(0, executeHiveQuery( "create table table_a (" + @@ -469,4 +501,4 @@ public void testHiveCommandLine() throws Exception { checkQuery("1000\n", "select count(b.id_b) from table_a a inner join table_b b on a.id_b = b.id_b"); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopFileSystemsTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopFileSystemsTest.java index 76806902b4a77..4a4ea6f3a5079 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopFileSystemsTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopFileSystemsTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopFileSystemsUtils; import org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLocalFileSystemV1; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test file systems for the working directory multi-threading support. */ +@RunWith(JUnit4.class) public class HadoopFileSystemsTest extends HadoopAbstractSelfTest { /** the number of threads */ private static final int THREAD_COUNT = 3; @@ -158,7 +162,8 @@ private void testFileSystem(final URI uri) throws Exception { * * @throws Exception If fails. 
*/ + @Test public void testLocal() throws Exception { testFileSystem(URI.create("file:///")); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopGroupingTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopGroupingTest.java index d27a234323f3b..8ca1b86d80824 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopGroupingTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopGroupingTest.java @@ -47,12 +47,16 @@ import java.util.Random; import java.util.Set; import java.util.UUID; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.createJobInfo; /** * Grouping test. */ +@RunWith(JUnit4.class) public class HadoopGroupingTest extends HadoopAbstractSelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -60,7 +64,7 @@ public class HadoopGroupingTest extends HadoopAbstractSelfTest { } /** {@inheritDoc} */ - protected boolean igfsEnabled() { + @Override protected boolean igfsEnabled() { return false; } @@ -87,6 +91,7 @@ protected boolean igfsEnabled() { /** * @throws Exception If failed. */ + @Test public void testGroupingReducer() throws Exception { doTestGrouping(false); } @@ -94,6 +99,7 @@ public void testGroupingReducer() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testGroupingCombiner() throws Exception { doTestGrouping(true); } @@ -299,4 +305,4 @@ public static class OutFormat extends OutputFormat { return null; } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopJobTrackerSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopJobTrackerSelfTest.java index 7f94def19eab3..5d69f4155be3d 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopJobTrackerSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopJobTrackerSelfTest.java @@ -43,6 +43,9 @@ import org.apache.ignite.internal.processors.hadoop.HadoopJobId; import org.apache.ignite.internal.processors.hadoop.HadoopJobStatus; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.createJobInfo; import static org.apache.ignite.internal.processors.hadoop.state.HadoopJobTrackerSelfTestState.combineExecCnt; @@ -53,6 +56,7 @@ /** * Job tracker self test. */ +@RunWith(JUnit4.class) public class HadoopJobTrackerSelfTest extends HadoopAbstractSelfTest { /** */ private static final String PATH_OUTPUT = "/test-out"; @@ -99,6 +103,7 @@ public class HadoopJobTrackerSelfTest extends HadoopAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testSimpleTaskSubmit() throws Exception { try { UUID globalId = UUID.randomUUID(); @@ -145,6 +150,7 @@ public void testSimpleTaskSubmit() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTaskWithCombinerPerMap() throws Exception { try { UUID globalId = UUID.randomUUID(); @@ -323,4 +329,4 @@ private static class TestCombiner extends Reducer { System.out.println("Completed task: " + ctx.getTaskAttemptID().getTaskID().getId()); } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceEmbeddedSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceEmbeddedSelfTest.java index 21b7ee28c1a3a..9d750c9cf25f2 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceEmbeddedSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceEmbeddedSelfTest.java @@ -37,6 +37,9 @@ import org.apache.ignite.internal.processors.hadoop.HadoopJobProperty; import org.apache.ignite.internal.processors.hadoop.impl.examples.HadoopWordCount1; import org.apache.ignite.internal.processors.hadoop.impl.examples.HadoopWordCount2; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.createJobInfo; import static org.apache.ignite.internal.processors.hadoop.state.HadoopMapReduceEmbeddedSelfTestState.flags; @@ -44,6 +47,7 @@ /** * Tests map-reduce execution with embedded mode. */ +@RunWith(JUnit4.class) public class HadoopMapReduceEmbeddedSelfTest extends HadoopMapReduceTest { /** {@inheritDoc} */ @Override public HadoopConfiguration hadoopConfiguration(String igniteInstanceName) { @@ -58,6 +62,7 @@ public class HadoopMapReduceEmbeddedSelfTest extends HadoopMapReduceTest { /** * @throws Exception If fails. 
*/ + @Test public void testMultiReducerWholeMapReduceExecution() throws Exception { checkMultiReducerWholeMapReduceExecution(false); } @@ -65,6 +70,7 @@ public void testMultiReducerWholeMapReduceExecution() throws Exception { /** * @throws Exception If fails. */ + @Test public void testMultiReducerWholeMapReduceExecutionStriped() throws Exception { checkMultiReducerWholeMapReduceExecution(true); } @@ -268,4 +274,4 @@ private static class CustomV1OutputFormat extends org.apache.hadoop.mapred.TextO flags.put("outputFormatWasConfigured", true); } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceErrorResilienceTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceErrorResilienceTest.java index afd6f26d92e69..afd636d1177d1 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceErrorResilienceTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceErrorResilienceTest.java @@ -19,6 +19,9 @@ import org.apache.ignite.igfs.IgfsPath; import org.apache.ignite.internal.processors.hadoop.impl.examples.HadoopWordCount2; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test of error resiliency after an error in a map-reduce job execution. @@ -27,12 +30,14 @@ * x { unchecked exception, checked exception, error } * x { phase where the error happens }. */ +@RunWith(JUnit4.class) public class HadoopMapReduceErrorResilienceTest extends HadoopAbstractMapReduceTest { /** * Tests recovery. * * @throws Exception If failed. */ + @Test public void testRecoveryAfterAnError0_Runtime() throws Exception { doTestRecoveryAfterAnError(0, HadoopErrorSimulator.Kind.Runtime); } @@ -42,6 +47,7 @@ public void testRecoveryAfterAnError0_Runtime() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testRecoveryAfterAnError0_IOException() throws Exception { doTestRecoveryAfterAnError(0, HadoopErrorSimulator.Kind.IOException); } @@ -51,6 +57,7 @@ public void testRecoveryAfterAnError0_IOException() throws Exception { * * @throws Exception If failed. */ + @Test public void testRecoveryAfterAnError0_Error() throws Exception { doTestRecoveryAfterAnError(0, HadoopErrorSimulator.Kind.Error); } @@ -60,6 +67,7 @@ public void testRecoveryAfterAnError0_Error() throws Exception { * * @throws Exception If failed. */ + @Test public void testRecoveryAfterAnError7_Runtime() throws Exception { doTestRecoveryAfterAnError(7, HadoopErrorSimulator.Kind.Runtime); } @@ -68,6 +76,7 @@ public void testRecoveryAfterAnError7_Runtime() throws Exception { * * @throws Exception If failed. */ + @Test public void testRecoveryAfterAnError7_IOException() throws Exception { doTestRecoveryAfterAnError(7, HadoopErrorSimulator.Kind.IOException); } @@ -76,6 +85,7 @@ public void testRecoveryAfterAnError7_IOException() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testRecoveryAfterAnError7_Error() throws Exception { doTestRecoveryAfterAnError(7, HadoopErrorSimulator.Kind.Error); } @@ -151,4 +161,4 @@ private void doTestWithErrorSimulator(HadoopErrorSimulator sim, IgfsPath inFile, // Expect success there: doTest(inFile, useNewMapper, useNewCombiner, useNewReducer); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceTest.java index feccb59193001..04091d03bf7b4 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopMapReduceTest.java @@ -19,15 +19,20 @@ import org.apache.ignite.igfs.IgfsPath; import org.apache.ignite.internal.processors.hadoop.impl.examples.HadoopWordCount2; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test of whole cycle of map-reduce processing via Job tracker. */ +@RunWith(JUnit4.class) public class HadoopMapReduceTest extends HadoopAbstractMapReduceTest { /** * Tests whole job execution with all phases in all combination of new and old versions of API. * @throws Exception If fails. 
*/ + @Test public void testWholeMapReduceExecution() throws Exception { IgfsPath inDir = new IgfsPath(PATH_INPUT); @@ -63,4 +68,4 @@ protected boolean[][] getApiModes() { { true, true, true }, }; } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopNoHadoopMapReduceTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopNoHadoopMapReduceTest.java index 382631db45a4e..e59f9dd403273 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopNoHadoopMapReduceTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopNoHadoopMapReduceTest.java @@ -18,10 +18,14 @@ package org.apache.ignite.internal.processors.hadoop.impl; import org.apache.ignite.configuration.IgniteConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test attempt to execute a map-reduce task while no Hadoop processor available. 
*/ +@RunWith(JUnit4.class) public class HadoopNoHadoopMapReduceTest extends HadoopMapReduceTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -34,6 +38,7 @@ public class HadoopNoHadoopMapReduceTest extends HadoopMapReduceTest { } /** {@inheritDoc} */ + @Test @Override public void testWholeMapReduceExecution() throws Exception { try { super.testWholeMapReduceExecution(); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSerializationWrapperSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSerializationWrapperSelfTest.java index 5ccc8cea50115..b1f16e0d4bd2d 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSerializationWrapperSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSerializationWrapperSelfTest.java @@ -30,15 +30,20 @@ import org.apache.ignite.internal.processors.hadoop.HadoopSerialization; import org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopSerializationWrapper; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test of wrapper of the native serialization. */ +@RunWith(JUnit4.class) public class HadoopSerializationWrapperSelfTest extends GridCommonAbstractTest { /** * Tests read/write of IntWritable via native WritableSerialization. * @throws Exception If fails. */ + @Test public void testIntWritableSerialization() throws Exception { HadoopSerialization ser = new HadoopSerializationWrapper(new WritableSerialization(), IntWritable.class); @@ -61,6 +66,7 @@ public void testIntWritableSerialization() throws Exception { * Tests read/write of Integer via native JavaleSerialization. * @throws Exception If fails. 
*/ + @Test public void testIntJavaSerialization() throws Exception { HadoopSerialization ser = new HadoopSerializationWrapper(new JavaSerialization(), Integer.class); @@ -77,4 +83,4 @@ public void testIntJavaSerialization() throws Exception { assertEquals(3, ((Integer)ser.read(in, null)).intValue()); assertEquals(-5, ((Integer)ser.read(in, null)).intValue()); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSnappyTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSnappyTest.java index 80ff7547e2988..6958520cd807f 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSnappyTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSnappyTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests isolated Hadoop Snappy codec usage. */ +@RunWith(JUnit4.class) public class HadoopSnappyTest extends GridCommonAbstractTest { /** Length of data. */ private static final int BYTE_SIZE = 1024 * 50; @@ -45,6 +49,7 @@ public class HadoopSnappyTest extends GridCommonAbstractTest { * * @throws Exception On error. 
*/ + @Test public void testSnappy() throws Throwable { // Run Snappy test in default class loader: checkSnappy(); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSortingTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSortingTest.java index bb11ccb77eeb1..dc84913d6a070 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSortingTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSortingTest.java @@ -50,12 +50,16 @@ import org.apache.ignite.configuration.HadoopConfiguration; import org.apache.ignite.internal.processors.hadoop.HadoopJobId; import org.apache.ignite.internal.util.typedef.X; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.createJobInfo; /** * Tests correct sorting. */ +@RunWith(JUnit4.class) public class HadoopSortingTest extends HadoopAbstractSelfTest { /** */ private static final String PATH_INPUT = "/test-in"; @@ -98,6 +102,7 @@ public class HadoopSortingTest extends HadoopAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testSortSimple() throws Exception { // Generate test data. 
Job job = Job.getInstance(); @@ -301,4 +306,4 @@ public FakeSplit() { len = in.readInt(); } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSplitWrapperSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSplitWrapperSelfTest.java index be2bfc2413727..4c661a6755f67 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSplitWrapperSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopSplitWrapperSelfTest.java @@ -29,15 +29,20 @@ import org.apache.hadoop.mapreduce.lib.input.FileSplit; import org.apache.ignite.internal.processors.hadoop.HadoopSplitWrapper; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Self test of {@link HadoopSplitWrapper}. */ +@RunWith(JUnit4.class) public class HadoopSplitWrapperSelfTest extends HadoopAbstractSelfTest { /** * Tests serialization of wrapper and the wrapped native split. * @throws Exception If fails. 
*/ + @Test public void testSerialization() throws Exception { FileSplit nativeSplit = new FileSplit(new Path("/path/to/file"), 100, 500, new String[]{"host1", "host2"}); @@ -69,4 +74,4 @@ public void testSerialization() throws Exception { } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopTaskExecutionSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopTaskExecutionSelfTest.java index bc59c764ed8e2..afd5ac783e3b5 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopTaskExecutionSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopTaskExecutionSelfTest.java @@ -52,6 +52,9 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.hadoop.state.HadoopTaskExecutionSelfTestValues.cancelledTasks; import static org.apache.ignite.internal.processors.hadoop.state.HadoopTaskExecutionSelfTestValues.executedTasks; @@ -64,6 +67,7 @@ /** * Tests map-reduce task execution basics. */ +@RunWith(JUnit4.class) public class HadoopTaskExecutionSelfTest extends HadoopAbstractSelfTest { /** Test param. */ private static final String MAP_WRITE = "test.map.write"; @@ -107,6 +111,7 @@ public class HadoopTaskExecutionSelfTest extends HadoopAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testMapRun() throws Exception { int lineCnt = 10000; String fileName = "/testFile"; @@ -148,6 +153,7 @@ public void testMapRun() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMapCombineRun() throws Exception { int lineCnt = 10001; String fileName = "/testFile"; @@ -196,6 +202,7 @@ public void testMapCombineRun() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMapperException() throws Exception { prepareFile("/testFile", 1000); @@ -301,6 +308,7 @@ private static class InFormat extends TextInputFormat { /** * @throws Exception If failed. */ + @Test public void testTaskCancelling() throws Exception { Configuration cfg = prepareJobForCancelling(); @@ -345,6 +353,7 @@ public void testTaskCancelling() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJobKill() throws Exception { Configuration cfg = prepareJobForCancelling(); @@ -541,4 +550,4 @@ private static class TestReducer extends Reducer hadoopCache = getSystemCache(ignite, CU.SYS_CACHE_HADOOP_MR); @@ -39,4 +44,4 @@ public void testSystemCacheTx() throws Exception { checkImplicitTxSuccess(hadoopCache); checkStartTxSuccess(hadoopCache); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopUserLibsSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopUserLibsSelfTest.java index 0e4a0ef1d1a82..cf4ef5e88661e 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopUserLibsSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopUserLibsSelfTest.java @@ -25,10 +25,14 @@ import java.util.Collection; import java.util.Collections; import java.util.HashSet; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for user libs parsing. */ +@RunWith(JUnit4.class) public class HadoopUserLibsSelfTest extends GridCommonAbstractTest { /** Directory 1. 
*/ private static final File DIR_1 = HadoopTestUtils.testDir("dir1"); @@ -89,6 +93,7 @@ public class HadoopUserLibsSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testNullOrEmptyUserLibs() throws Exception { assert parse(null).isEmpty(); assert parse("").isEmpty(); @@ -99,6 +104,7 @@ public void testNullOrEmptyUserLibs() throws Exception { * * @throws Exception If failed. */ + @Test public void testSingle() throws Exception { Collection res = parse(single(FILE_1_1)); @@ -115,6 +121,7 @@ public void testSingle() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultiple() throws Exception { Collection res = parse(merge(single(FILE_1_1), single(FILE_1_2), single(FILE_2_1), single(FILE_2_2), single(MISSING_FILE))); @@ -131,6 +138,7 @@ public void testMultiple() throws Exception { * * @throws Exception If failed. */ + @Test public void testSingleWildcard() throws Exception { Collection res = parse(wildcard(DIR_1)); @@ -148,6 +156,7 @@ public void testSingleWildcard() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultipleWildcards() throws Exception { Collection res = parse(merge(wildcard(DIR_1), wildcard(DIR_2), wildcard(MISSING_DIR))); @@ -163,6 +172,7 @@ public void testMultipleWildcards() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMixed() throws Exception { String str = merge( single(FILE_1_1), diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopV2JobSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopV2JobSelfTest.java index 041f0bc7c4634..5eefdf91621a9 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopV2JobSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopV2JobSelfTest.java @@ -40,12 +40,16 @@ import org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl; import org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopSerializationWrapper; import org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.createJobInfo; /** * Self test of {@link org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job}. */ +@RunWith(JUnit4.class) public class HadoopV2JobSelfTest extends HadoopAbstractSelfTest { /** */ private static final String TEST_SERIALIZED_VALUE = "Test serialized value"; @@ -73,6 +77,7 @@ private static class CustomSerialization extends WritableSerialization { * * @throws IgniteCheckedException If fails. 
*/ + @Test public void testCustomSerializationApplying() throws IgniteCheckedException { JobConf cfg = new JobConf(); @@ -105,4 +110,4 @@ public void testCustomSerializationApplying() throws IgniteCheckedException { assertEquals(TEST_SERIALIZED_VALUE, ser.read(in, null).toString()); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopValidationSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopValidationSelfTest.java index ef16762191e6f..58f7f5ebf022d 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopValidationSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopValidationSelfTest.java @@ -18,10 +18,14 @@ package org.apache.ignite.internal.processors.hadoop.impl; import org.apache.ignite.configuration.IgniteConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Configuration validation tests. */ +@RunWith(JUnit4.class) public class HadoopValidationSelfTest extends HadoopAbstractSelfTest { /** Peer class loading enabled flag. */ public boolean peerClassLoading; @@ -47,7 +51,8 @@ public class HadoopValidationSelfTest extends HadoopAbstractSelfTest { * * @throws Exception If failed. 
*/ + @Test public void testValid() throws Exception { startGrids(1); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopWeightedMapReducePlannerTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopWeightedMapReducePlannerTest.java index 6dcd9980c05b7..c55940aa2bcef 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopWeightedMapReducePlannerTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/HadoopWeightedMapReducePlannerTest.java @@ -45,10 +45,14 @@ import java.util.Set; import java.util.TreeMap; import java.util.UUID; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for weighted map-reduce planned. */ +@RunWith(JUnit4.class) public class HadoopWeightedMapReducePlannerTest extends GridCommonAbstractTest { /** ID 1. */ private static final UUID ID_1 = new UUID(0, 1); @@ -111,6 +115,7 @@ public class HadoopWeightedMapReducePlannerTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testOneIgfsSplitAffinity() throws Exception { IgfsMock igfs = LocationsBuilder.create().add(0, NODE_1).add(50, NODE_2).add(100, NODE_3).buildIgfs(); @@ -139,6 +144,7 @@ public void testOneIgfsSplitAffinity() throws Exception { * * @throws Exception If failed. */ + @Test public void testHdfsSplitsAffinity() throws Exception { IgfsMock igfs = LocationsBuilder.create().add(0, NODE_1).add(50, NODE_2).add(100, NODE_3).buildIgfs(); @@ -171,6 +177,7 @@ public void testHdfsSplitsAffinity() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testHdfsSplitsReplication() throws Exception { IgfsMock igfs = LocationsBuilder.create().add(0, NODE_1).add(50, NODE_2).add(100, NODE_3).buildIgfs(); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolMultipleServersSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolMultipleServersSelfTest.java index 0e5ad03559257..dfebd3fbae6f2 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolMultipleServersSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolMultipleServersSelfTest.java @@ -50,11 +50,14 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Hadoop client protocol configured with multiple ignite servers tests. */ -@SuppressWarnings("ResultOfMethodCallIgnored") +@RunWith(JUnit4.class) public class HadoopClientProtocolMultipleServersSelfTest extends HadoopAbstractSelfTest { /** Input path. */ private static final String PATH_INPUT = "/input"; @@ -147,7 +150,7 @@ private void checkJobSubmit(Configuration conf) throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("ConstantConditions") + @Test public void testMultipleAddresses() throws Exception { restPort = REST_PORT; @@ -163,7 +166,7 @@ public void testMultipleAddresses() throws Exception { /** * @throws Exception If failed. 
*/ - @SuppressWarnings({"ConstantConditions", "ThrowableResultOfMethodCallIgnored"}) + @Test public void testSingleAddress() throws Exception { try { // Don't use REST_PORT to test connection fails if the only this port is configured @@ -189,7 +192,7 @@ public void testSingleAddress() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("ConstantConditions") + @Test public void testMixedAddrs() throws Exception { restPort = REST_PORT; @@ -312,4 +315,4 @@ public static class OutFormat extends OutputFormat { return null; } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolSelfTest.java index 0027b3e4781a4..ac88082b07826 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/client/HadoopClientProtocolSelfTest.java @@ -56,11 +56,15 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Hadoop client protocol tests in external process mode. */ @SuppressWarnings("ResultOfMethodCallIgnored") +@RunWith(JUnit4.class) public class HadoopClientProtocolSelfTest extends HadoopAbstractSelfTest { /** Input path. */ private static final String PATH_INPUT = "/input"; @@ -160,6 +164,7 @@ private void tstNextJobId() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testJobCounters() throws Exception { IgniteFileSystem igfs = grid(0).fileSystem(HadoopAbstractSelfTest.igfsName); @@ -663,4 +668,4 @@ public static class TestReducer extends Reducer() { @Override public Object call() throws Exception { @@ -461,6 +468,7 @@ public void testCreateCheckParameters() throws Exception { } /** @throws Exception If failed. */ + @Test public void testCreateBase() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -487,6 +495,7 @@ public void testCreateBase() throws Exception { } /** @throws Exception If failed. */ + @Test public void testCreateCheckOverwrite() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -517,6 +526,7 @@ public void testCreateCheckOverwrite() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteIfNoSuchPath() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -527,6 +537,7 @@ public void testDeleteIfNoSuchPath() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteSuccessfulIfPathIsOpenedToRead() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "myFile"); @@ -560,6 +571,7 @@ public void testDeleteSuccessfulIfPathIsOpenedToRead() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteIfFilePathExists() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "myFile"); @@ -575,6 +587,7 @@ public void testDeleteIfFilePathExists() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testDeleteIfDirectoryPathExists() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -590,6 +603,7 @@ public void testDeleteIfDirectoryPathExists() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteFailsIfNonRecursive() throws Exception { Path fsHome = new Path(primaryFsUri); Path someDir3 = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -614,6 +628,7 @@ public void testDeleteFailsIfNonRecursive() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteRecursively() throws Exception { Path fsHome = new Path(primaryFsUri); Path someDir3 = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -632,6 +647,7 @@ public void testDeleteRecursively() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteRecursivelyFromRoot() throws Exception { Path fsHome = new Path(primaryFsUri); Path someDir3 = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -654,6 +670,7 @@ public void testDeleteRecursivelyFromRoot() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetPermissionCheckDefaultPermission() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -670,6 +687,7 @@ public void testSetPermissionCheckDefaultPermission() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetPermissionCheckNonRecursiveness() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -691,6 +709,7 @@ public void testSetPermissionCheckNonRecursiveness() throws Exception { /** @throws Exception If failed. 
*/ @SuppressWarnings("OctalInteger") + @Test public void testSetPermission() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -710,6 +729,7 @@ public void testSetPermission() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetPermissionIfOutputStreamIsNotClosed() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "myFile"); @@ -727,6 +747,7 @@ public void testSetPermissionIfOutputStreamIsNotClosed() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerCheckParametersPathIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -746,6 +767,7 @@ public void testSetOwnerCheckParametersPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerCheckParametersUserIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -765,6 +787,7 @@ public void testSetOwnerCheckParametersUserIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerCheckParametersGroupIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -784,6 +807,7 @@ public void testSetOwnerCheckParametersGroupIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwner() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -802,6 +826,7 @@ public void testSetOwner() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerIfOutputStreamIsNotClosed() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "myFile"); @@ -818,6 +843,7 @@ public void testSetOwnerIfOutputStreamIsNotClosed() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testSetOwnerCheckNonRecursiveness() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -840,6 +866,7 @@ public void testSetOwnerCheckNonRecursiveness() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpenCheckParametersPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -849,6 +876,7 @@ public void testOpenCheckParametersPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpenNoSuchPath() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -861,6 +889,7 @@ public void testOpenNoSuchPath() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpenIfPathIsAlreadyOpened() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "someFile"); @@ -878,6 +907,7 @@ public void testOpenIfPathIsAlreadyOpened() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpen() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "someFile"); @@ -901,6 +931,7 @@ public void testOpen() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAppendIfPathPointsToDirectory() throws Exception { final Path fsHome = new Path(primaryFsUri); final Path dir = new Path(fsHome, "/tmp"); @@ -920,6 +951,7 @@ public void testAppendIfPathPointsToDirectory() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAppendIfFileIsAlreadyBeingOpenedToWrite() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -943,6 +975,7 @@ public void testAppendIfFileIsAlreadyBeingOpenedToWrite() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testAppend() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "someFile"); @@ -974,6 +1007,7 @@ public void testAppend() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameCheckParametersSrcPathIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -988,6 +1022,7 @@ public void testRenameCheckParametersSrcPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameCheckParametersDstPathIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -1006,6 +1041,7 @@ public Object call() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameIfSrcPathDoesNotExist() throws Exception { Path fsHome = new Path(primaryFsUri); final Path srcFile = new Path(fsHome, "srcFile"); @@ -1025,6 +1061,7 @@ public void testRenameIfSrcPathDoesNotExist() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameIfSrcPathIsAlreadyBeingOpenedToWrite() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "srcFile"); @@ -1061,6 +1098,7 @@ public void testRenameIfSrcPathIsAlreadyBeingOpenedToWrite() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameFileIfDstPathExists() throws Exception { Path fsHome = new Path(primaryFsUri); final Path srcFile = new Path(fsHome, "srcFile"); @@ -1089,6 +1127,7 @@ public void testRenameFileIfDstPathExists() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameFile() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "/tmp/srcFile"); @@ -1106,6 +1145,7 @@ public void testRenameFile() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testRenameIfSrcPathIsAlreadyBeingOpenedToRead() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "srcFile"); @@ -1141,6 +1181,7 @@ public void testRenameIfSrcPathIsAlreadyBeingOpenedToRead() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRenameDirectoryIfDstPathExists() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcDir = new Path(fsHome, "/tmp/"); @@ -1174,6 +1215,7 @@ public void testRenameDirectoryIfDstPathExists() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameDirectory() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/tmp/"); @@ -1191,6 +1233,7 @@ public void testRenameDirectory() throws Exception { } /** @throws Exception If failed. */ + @Test public void testListStatusIfPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1200,6 +1243,7 @@ public void testListStatusIfPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testListStatusIfPathDoesNotExist() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1213,6 +1257,7 @@ public void testListStatusIfPathDoesNotExist() throws Exception { * * @throws Exception If failed. */ + @Test public void testListStatus() throws Exception { Path igfsHome = new Path(primaryFsUri); @@ -1256,6 +1301,7 @@ public void testListStatus() throws Exception { } /** @throws Exception If failed. */ + @Test public void testMkdirsIfPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1267,6 +1313,7 @@ public void testMkdirsIfPathIsNull() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testMkdirsIfPermissionIsNull() throws Exception { Path dir = new Path("/tmp"); @@ -1277,6 +1324,7 @@ public void testMkdirsIfPermissionIsNull() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("OctalInteger") + @Test public void testMkdirs() throws Exception { Path fsHome = new Path(primaryFileSystemUriPath()); Path dir = new Path(fsHome, "/tmp/staging"); @@ -1296,6 +1344,7 @@ public void testMkdirs() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileStatusIfPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1305,6 +1354,7 @@ public void testGetFileStatusIfPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileStatusIfPathDoesNotExist() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1314,6 +1364,7 @@ public void testGetFileStatusIfPathDoesNotExist() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileBlockLocationsIfFileStatusIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1324,6 +1375,7 @@ public void testGetFileBlockLocationsIfFileStatusIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileBlockLocationsIfFileStatusReferenceNotExistingPath() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1333,6 +1385,7 @@ public void testGetFileBlockLocationsIfFileStatusReferenceNotExistingPath() thro } /** @throws Exception If failed. */ + @Test public void testGetFileBlockLocations() throws Exception { Path igfsHome = new Path(primaryFsUri); @@ -1371,6 +1424,7 @@ public void testGetFileBlockLocations() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testZeroReplicationFactor() throws Exception { // This test doesn't make sense for any mode except of PRIMARY. if (mode == PRIMARY) { @@ -1404,6 +1458,7 @@ public void testZeroReplicationFactor() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultithreadedCreate() throws Exception { Path dir = new Path(new Path(primaryFsUri), "/dir"); @@ -1479,6 +1534,7 @@ public void testMultithreadedCreate() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultithreadedAppend() throws Exception { Path dir = new Path(new Path(primaryFsUri), "/dir"); @@ -1555,6 +1611,7 @@ public void testMultithreadedAppend() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultithreadedOpen() throws Exception { final byte[] dataChunk = new byte[256]; @@ -1627,6 +1684,7 @@ public void run() { * * @throws Exception If failed. */ + @Test public void testMultithreadedMkdirs() throws Exception { final Path dir = new Path(new Path("igfs:///"), "/dir"); @@ -1707,6 +1765,7 @@ public void testMultithreadedMkdirs() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("TooBroadScope") + @Test public void testMultithreadedDelete() throws Exception { final Path dir = new Path(new Path(primaryFsUri), "/dir"); @@ -1780,6 +1839,7 @@ public void testMultithreadedDelete() throws Exception { } /** @throws Exception If failed. */ + @Test public void testConsistency() throws Exception { // Default buffers values checkConsistency(-1, 1, -1, -1, 1, -1); @@ -1805,6 +1865,7 @@ public void testConsistency() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testClientReconnect() throws Exception { final Path igfsHome = new Path(primaryFsUri); @@ -1850,6 +1911,7 @@ public void testClientReconnect() throws Exception { * * @throws Exception If error occurs. 
*/ + @Test public void testClientReconnectMultithreaded() throws Exception { final ConcurrentLinkedQueue q = new ConcurrentLinkedQueue<>(); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopIgfsDualAbstractSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopIgfsDualAbstractSelfTest.java index 3ffdf23cbb0fd..1c450f3bc9500 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopIgfsDualAbstractSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopIgfsDualAbstractSelfTest.java @@ -51,6 +51,9 @@ import java.io.IOException; import java.net.URI; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -66,6 +69,7 @@ /** * Tests for IGFS working in mode when remote file system exists: DUAL_SYNC, DUAL_ASYNC. */ +@RunWith(JUnit4.class) public abstract class HadoopIgfsDualAbstractSelfTest extends IgfsCommonAbstractTest { /** IGFS block size. */ protected static final int IGFS_BLOCK_SIZE = 512 * 1024; @@ -240,6 +244,7 @@ protected IgfsPath[] paths(IgfsPath... paths) { * * @throws Exception IF failed. 
*/ + @Test public void testOpenPrefetchOverride() throws Exception { create(igfsSecondary, paths(DIR, SUBDIR), paths(FILE)); @@ -324,4 +329,4 @@ public void testOpenPrefetchOverride() throws Exception { }, IOException.class, "Failed to read data due to secondary file system exception: /dir/subdir/file"); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopSecondaryFileSystemConfigurationTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopSecondaryFileSystemConfigurationTest.java index 04eaf93b5f542..77320f81c31e4 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopSecondaryFileSystemConfigurationTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/HadoopSecondaryFileSystemConfigurationTest.java @@ -53,6 +53,9 @@ import java.io.IOException; import java.net.URI; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -67,6 +70,7 @@ /** * Tests secondary file system configuration. */ +@RunWith(JUnit4.class) public class HadoopSecondaryFileSystemConfigurationTest extends IgfsCommonAbstractTest { /** IGFS scheme */ static final String IGFS_SCHEME = "igfs"; @@ -420,6 +424,7 @@ private CommunicationSpi communicationSpi() { * * @throws Exception On failure. */ + @Test public void testFsConfigurationOnly() throws Exception { primaryCfgScheme = IGFS_SCHEME; primaryCfgAuthority = PRIMARY_AUTHORITY; @@ -441,6 +446,7 @@ public void testFsConfigurationOnly() throws Exception { * * @throws Exception On failure. 
*/ + @Test public void testFsUriOverridesUriInConfiguration() throws Exception { // wrong primary URI in the configuration: primaryCfgScheme = "foo"; @@ -599,4 +605,4 @@ static String mkUri(String scheme, String authority) { static String mkUri(String scheme, String authority, String path) { return scheme + "://" + authority + "/" + path; } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsEventsTestSuite.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsEventsTestSuite.java index e3dac864a6157..b50d9bf35a41c 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsEventsTestSuite.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsEventsTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.internal.processors.hadoop.impl.igfs; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; @@ -31,6 +32,8 @@ import org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint; import org.apache.ignite.internal.util.typedef.G; import org.jetbrains.annotations.Nullable; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import static org.apache.ignite.igfs.IgfsMode.DUAL_ASYNC; import static org.apache.ignite.igfs.IgfsMode.DUAL_SYNC; @@ -40,7 +43,8 @@ * Test suite for IGFS event tests. */ @SuppressWarnings("PublicInnerClass") -public class IgfsEventsTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgfsEventsTestSuite { /** * @return Test suite. * @throws Exception Thrown in case of the failure. 
@@ -50,13 +54,13 @@ public static TestSuite suite() throws Exception { TestSuite suite = new TestSuite("Ignite FS Events Test Suite"); - suite.addTest(new TestSuite(ldr.loadClass(ShmemPrimary.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(ShmemDualSync.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(ShmemDualAsync.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(ShmemPrimary.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(ShmemDualSync.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(ShmemDualAsync.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(LoopbackPrimary.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(LoopbackDualSync.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(LoopbackDualAsync.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(LoopbackPrimary.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(LoopbackDualSync.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(LoopbackDualAsync.class.getName()))); return suite; } @@ -70,9 +74,9 @@ public static TestSuite suiteNoarchOnly() throws Exception { TestSuite suite = new TestSuite("Ignite IGFS Events Test Suite Noarch Only"); - suite.addTest(new TestSuite(ldr.loadClass(LoopbackPrimary.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(LoopbackDualSync.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(LoopbackDualAsync.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(LoopbackPrimary.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(LoopbackDualSync.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(LoopbackDualAsync.class.getName()))); return suite; } @@ -284,4 +288,4 @@ public static class LoopbackDualAsync extends LoopbackPrimarySecondaryTest { return igfsCfg; } } -} \ No newline at end of file +} diff --git 
a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsNearOnlyMultiNodeSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsNearOnlyMultiNodeSelfTest.java index 20699f195d613..55914274426cf 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsNearOnlyMultiNodeSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgfsNearOnlyMultiNodeSelfTest.java @@ -39,10 +39,10 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -53,6 +53,7 @@ /** * Test hadoop file system implementation. */ +@RunWith(JUnit4.class) public class IgfsNearOnlyMultiNodeSelfTest extends GridCommonAbstractTest { /** Path to the default hadoop configuration. */ public static final String HADOOP_FS_CFG = "examples/config/filesystem/core-site.xml"; @@ -60,9 +61,6 @@ public class IgfsNearOnlyMultiNodeSelfTest extends GridCommonAbstractTest { /** Group size. */ public static final int GRP_SIZE = 128; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Node count. 
*/ private int cnt; @@ -80,8 +78,6 @@ public class IgfsNearOnlyMultiNodeSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER).setForceServerMode(true)); - FileSystemConfiguration igfsCfg = new FileSystemConfiguration(); igfsCfg.setName("igfs"); @@ -164,6 +160,7 @@ protected URI getFileSystemURI(int grid) { } /** @throws Exception If failed. */ + @Test public void testContentsConsistency() throws Exception { try (FileSystem fs = FileSystem.get(getFileSystemURI(0), getFileSystemConfig())) { Collection> files = F.asList( diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemAbstractSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemAbstractSelfTest.java index a73367a4abfc2..bd1cf3c265886 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemAbstractSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemAbstractSelfTest.java @@ -86,6 +86,9 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -100,6 +103,7 @@ * Test hadoop file system implementation. */ @SuppressWarnings("all") +@RunWith(JUnit4.class) public abstract class IgniteHadoopFileSystemAbstractSelfTest extends IgfsCommonAbstractTest { /** Primary file system authority. 
*/ private static final String PRIMARY_AUTHORITY = "igfs@"; @@ -445,6 +449,7 @@ protected FileSystemConfiguration igfsConfiguration(String igniteInstanceName) t } /** @throws Exception If failed. */ + @Test public void testGetUriIfFSIsNotInitialized() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -456,6 +461,7 @@ public void testGetUriIfFSIsNotInitialized() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("NullableProblems") + @Test public void testInitializeCheckParametersNameIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -468,6 +474,7 @@ public void testInitializeCheckParametersNameIsNull() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("NullableProblems") + @Test public void testInitializeCheckParametersCfgIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -479,6 +486,7 @@ public void testInitializeCheckParametersCfgIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testInitialize() throws Exception { final IgniteHadoopFileSystem fs = new IgniteHadoopFileSystem(); @@ -506,6 +514,7 @@ public void testInitialize() throws Exception { * * @throws Exception If failed. */ + @Test public void testIpcCache() throws Exception { HadoopIgfsEx hadoop = GridTestUtils.getFieldValue(fs, "rmtClient", "delegateRef", "value", "hadoop"); @@ -570,6 +579,7 @@ public void testIpcCache() throws Exception { } /** @throws Exception If failed. */ + @Test public void testCloseIfNotInitialized() throws Exception { final FileSystem fs = new IgniteHadoopFileSystem(); @@ -578,6 +588,7 @@ public void testCloseIfNotInitialized() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testClose() throws Exception { final Path path = new Path("dir"); @@ -666,6 +677,7 @@ public void testClose() throws Exception { } /** @throws Exception If failed. */ + @Test public void testCreateCheckParameters() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -676,6 +688,7 @@ public void testCreateCheckParameters() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testCreateBase() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -702,6 +715,7 @@ public void testCreateBase() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testCreateCheckOverwrite() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -730,6 +744,7 @@ public void testCreateCheckOverwrite() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteIfNoSuchPath() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -740,6 +755,7 @@ public void testDeleteIfNoSuchPath() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteSuccessfulIfPathIsOpenedToRead() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "myFile"); @@ -766,6 +782,7 @@ public void testDeleteSuccessfulIfPathIsOpenedToRead() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteIfFilePathExists() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "myFile"); @@ -780,6 +797,7 @@ public void testDeleteIfFilePathExists() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testDeleteIfDirectoryPathExists() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -794,6 +812,7 @@ public void testDeleteIfDirectoryPathExists() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteFailsIfNonRecursive() throws Exception { Path fsHome = new Path(primaryFsUri); Path someDir3 = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -809,6 +828,7 @@ public void testDeleteFailsIfNonRecursive() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteRecursively() throws Exception { Path fsHome = new Path(primaryFsUri); Path someDir3 = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -826,6 +846,7 @@ public void testDeleteRecursively() throws Exception { } /** @throws Exception If failed. */ + @Test public void testDeleteRecursivelyFromRoot() throws Exception { Path fsHome = new Path(primaryFsUri); Path someDir3 = new Path(fsHome, "/someDir1/someDir2/someDir3"); @@ -847,6 +868,7 @@ public void testDeleteRecursivelyFromRoot() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testSetPermissionCheckDefaultPermission() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -864,6 +886,7 @@ public void testSetPermissionCheckDefaultPermission() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testSetPermissionCheckNonRecursiveness() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -885,6 +908,7 @@ public void testSetPermissionCheckNonRecursiveness() throws Exception { /** @throws Exception If failed. 
*/ @SuppressWarnings("OctalInteger") + @Test public void testSetPermission() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -903,6 +927,7 @@ public void testSetPermission() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetPermissionIfOutputStreamIsNotClosed() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "myFile"); @@ -919,6 +944,7 @@ public void testSetPermissionIfOutputStreamIsNotClosed() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerCheckParametersPathIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -937,6 +963,7 @@ public void testSetOwnerCheckParametersPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerCheckParametersUserIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -955,6 +982,7 @@ public void testSetOwnerCheckParametersUserIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerCheckParametersGroupIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -973,6 +1001,7 @@ public void testSetOwnerCheckParametersGroupIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwner() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/tmp/my"); @@ -992,6 +1021,7 @@ public void testSetOwner() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSetTimes() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "/heartbeatTs"); @@ -1035,6 +1065,7 @@ public void testSetTimes() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSetOwnerIfOutputStreamIsNotClosed() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "myFile"); @@ -1050,6 +1081,7 @@ public void testSetOwnerIfOutputStreamIsNotClosed() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetOwnerCheckNonRecursiveness() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "/tmp/my"); @@ -1071,6 +1103,7 @@ public void testSetOwnerCheckNonRecursiveness() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpenCheckParametersPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1080,6 +1113,7 @@ public void testOpenCheckParametersPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpenNoSuchPath() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -1092,6 +1126,7 @@ public void testOpenNoSuchPath() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpenIfPathIsAlreadyOpened() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "someFile"); @@ -1108,6 +1143,7 @@ public void testOpenIfPathIsAlreadyOpened() throws Exception { } /** @throws Exception If failed. */ + @Test public void testOpen() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "someFile"); @@ -1130,6 +1166,7 @@ public void testOpen() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAppendCheckParametersPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1139,6 +1176,7 @@ public void testAppendCheckParametersPathIsNull() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testAppendIfPathPointsToDirectory() throws Exception { final Path fsHome = new Path(primaryFsUri); final Path dir = new Path(fsHome, "/tmp"); @@ -1156,6 +1194,7 @@ public void testAppendIfPathPointsToDirectory() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAppendIfFileIsAlreadyBeingOpenedToWrite() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -1176,6 +1215,7 @@ public void testAppendIfFileIsAlreadyBeingOpenedToWrite() throws Exception { } /** @throws Exception If failed. */ + @Test public void testAppend() throws Exception { Path fsHome = new Path(primaryFsUri); Path file = new Path(fsHome, "someFile"); @@ -1205,6 +1245,7 @@ public void testAppend() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameCheckParametersSrcPathIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -1217,6 +1258,7 @@ public void testRenameCheckParametersSrcPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameCheckParametersDstPathIsNull() throws Exception { Path fsHome = new Path(primaryFsUri); final Path file = new Path(fsHome, "someFile"); @@ -1230,6 +1272,7 @@ public Object call() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameIfSrcPathDoesNotExist() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "srcFile"); @@ -1243,6 +1286,7 @@ public void testRenameIfSrcPathDoesNotExist() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameIfSrcPathIsAlreadyBeingOpenedToWrite() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "srcFile"); @@ -1277,6 +1321,7 @@ public void testRenameIfSrcPathIsAlreadyBeingOpenedToWrite() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testRenameFileIfDstPathExists() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "srcFile"); @@ -1297,6 +1342,7 @@ public void testRenameFileIfDstPathExists() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameFile() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "/tmp/srcFile"); @@ -1313,6 +1359,7 @@ public void testRenameFile() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameIfSrcPathIsAlreadyBeingOpenedToRead() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcFile = new Path(fsHome, "srcFile"); @@ -1345,6 +1392,7 @@ public void testRenameIfSrcPathIsAlreadyBeingOpenedToRead() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameDirectoryIfDstPathExists() throws Exception { Path fsHome = new Path(primaryFsUri); Path srcDir = new Path(fsHome, "/tmp/"); @@ -1366,6 +1414,7 @@ public void testRenameDirectoryIfDstPathExists() throws Exception { } /** @throws Exception If failed. */ + @Test public void testRenameDirectory() throws Exception { Path fsHome = new Path(primaryFsUri); Path dir = new Path(fsHome, "/tmp/"); @@ -1382,6 +1431,7 @@ public void testRenameDirectory() throws Exception { } /** @throws Exception If failed. */ + @Test public void testListStatusIfPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1391,6 +1441,7 @@ public void testListStatusIfPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testListStatusIfPathDoesNotExist() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1404,6 +1455,7 @@ public void testListStatusIfPathDoesNotExist() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testListStatus() throws Exception { Path igfsHome = new Path(PRIMARY_URI); @@ -1446,6 +1498,7 @@ public void testListStatus() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetWorkingDirectoryIfPathIsNull() throws Exception { fs.setWorkingDirectory(null); @@ -1460,12 +1513,14 @@ public void testSetWorkingDirectoryIfPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testSetWorkingDirectoryIfPathDoesNotExist() throws Exception { // Should not throw any exceptions. fs.setWorkingDirectory(new Path("/someDir")); } /** @throws Exception If failed. */ + @Test public void testSetWorkingDirectory() throws Exception { Path dir = new Path("/tmp/nested/dir"); Path file = new Path("file"); @@ -1483,6 +1538,7 @@ public void testSetWorkingDirectory() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetWorkingDirectoryIfDefault() throws Exception { String path = fs.getWorkingDirectory().toString(); @@ -1490,6 +1546,7 @@ public void testGetWorkingDirectoryIfDefault() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetWorkingDirectory() throws Exception { Path dir = new Path("/tmp/some/dir"); @@ -1503,6 +1560,7 @@ public void testGetWorkingDirectory() throws Exception { } /** @throws Exception If failed. */ + @Test public void testMkdirsIfPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1512,6 +1570,7 @@ public void testMkdirsIfPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testMkdirsIfPermissionIsNull() throws Exception { Path dir = new Path("/tmp"); @@ -1522,6 +1581,7 @@ public void testMkdirsIfPermissionIsNull() throws Exception { /** @throws Exception If failed. 
*/ @SuppressWarnings("OctalInteger") + @Test public void testMkdirs() throws Exception { Path fsHome = new Path(PRIMARY_URI); final Path dir = new Path(fsHome, "/tmp/staging"); @@ -1541,6 +1601,7 @@ public void testMkdirs() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileStatusIfPathIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1550,6 +1611,7 @@ public void testGetFileStatusIfPathIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileStatusIfPathDoesNotExist() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1559,6 +1621,7 @@ public void testGetFileStatusIfPathDoesNotExist() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileBlockLocationsIfFileStatusIsNull() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -1569,6 +1632,7 @@ public void testGetFileBlockLocationsIfFileStatusIsNull() throws Exception { } /** @throws Exception If failed. */ + @Test public void testGetFileBlockLocationsIfFileStatusReferenceNotExistingPath() throws Exception { Path path = new Path("someFile"); @@ -1584,6 +1648,7 @@ public void testGetFileBlockLocationsIfFileStatusReferenceNotExistingPath() thro } /** @throws Exception If failed. */ + @Test public void testGetFileBlockLocations() throws Exception { Path igfsHome = new Path(PRIMARY_URI); @@ -1622,11 +1687,13 @@ public void testGetFileBlockLocations() throws Exception { /** @throws Exception If failed. */ @SuppressWarnings("deprecation") + @Test public void testGetDefaultBlockSize() throws Exception { assertEquals(1L << 26, fs.getDefaultBlockSize()); } /** @throws Exception If failed. 
*/ + @Test public void testZeroReplicationFactor() throws Exception { // This test doesn't make sense for any mode except of PRIMARY. if (mode == PRIMARY) { @@ -1661,6 +1728,7 @@ public void testZeroReplicationFactor() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultithreadedCreate() throws Exception { Path dir = new Path(new Path(PRIMARY_URI), "/dir"); @@ -1745,6 +1813,7 @@ public void run() { * * @throws Exception If failed. */ + @Test public void testMultithreadedAppend() throws Exception { Path dir = new Path(new Path(PRIMARY_URI), "/dir"); @@ -1829,6 +1898,7 @@ public void testMultithreadedAppend() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultithreadedOpen() throws Exception { final byte[] dataChunk = new byte[256]; @@ -1900,6 +1970,7 @@ public void run() { * * @throws Exception If failed. */ + @Test public void testMultithreadedMkdirs() throws Exception { final Path dir = new Path(new Path(PRIMARY_URI), "/dir"); @@ -1979,6 +2050,7 @@ public void testMultithreadedMkdirs() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("TooBroadScope") + @Test public void testMultithreadedDelete() throws Exception { final Path dir = new Path(new Path(PRIMARY_URI), "/dir"); @@ -2052,6 +2124,7 @@ public void testMultithreadedDelete() throws Exception { } /** @throws Exception If failed. */ + @Test public void testConsistency() throws Exception { // Default buffers values checkConsistency(-1, 1, -1, -1, 1, -1); @@ -2077,6 +2150,7 @@ public void testConsistency() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testClientReconnect() throws Exception { Path filePath = new Path(PRIMARY_URI, "file1"); @@ -2111,6 +2185,7 @@ public void testClientReconnect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testModeResolver() throws Exception { IgfsModeResolver mr = ((IgniteHadoopFileSystem)fs).getModeResolver(); @@ -2122,6 +2197,7 @@ public void testModeResolver() throws Exception { * * @throws Exception If error occurs. */ + @Test public void testClientReconnectMultithreaded() throws Exception { final ConcurrentLinkedQueue q = new ConcurrentLinkedQueue<>(); @@ -2500,4 +2576,4 @@ protected Configuration configurationSecondary(String authority) { return cfg; } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedAbstractSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedAbstractSelfTest.java index 8198cd3243b7b..eb743476d5ef9 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedAbstractSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedAbstractSelfTest.java @@ -31,10 +31,14 @@ import org.apache.ignite.igfs.IgfsMode; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * IGFS Hadoop file system Ignite client -based self test. */ +@RunWith(JUnit4.class) public abstract class IgniteHadoopFileSystemClientBasedAbstractSelfTest extends IgniteHadoopFileSystemAbstractSelfTest { /** Alive node index. 
*/ private static final int ALIVE_NODE_IDX = GRID_COUNT - 1; @@ -88,6 +92,7 @@ public abstract class IgniteHadoopFileSystemClientBasedAbstractSelfTest extends } /** {@inheritDoc} */ + @Test @Override public void testClientReconnect() throws Exception { Path filePath = new Path(PRIMARY_URI, "file1"); @@ -115,6 +120,7 @@ public abstract class IgniteHadoopFileSystemClientBasedAbstractSelfTest extends * * @throws Exception If error occurs. */ + @Test @Override public void testClientReconnectMultithreaded() throws Exception { final ConcurrentLinkedQueue q = new ConcurrentLinkedQueue<>(); @@ -190,4 +196,4 @@ private void startAllNodesExcept(int nodeIdx) throws Exception { if (i != nodeIdx) startGrid(i); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedOpenTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedOpenTest.java index 932f4d81dfb22..f41884edfba02 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedOpenTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientBasedOpenTest.java @@ -33,10 +33,14 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * IGFS Hadoop file system Ignite client -based self test for PROXY mode. */ +@RunWith(JUnit4.class) public class IgniteHadoopFileSystemClientBasedOpenTest extends GridCommonAbstractTest { /** Config root path. */ private static final String [] CONFIGS = { @@ -165,6 +169,7 @@ private static String authority(int idx) { /** * @throws Exception If failed. 
*/ + @Test public void testFsOpenMultithreaded() throws Exception { skipInProc = false; @@ -176,7 +181,7 @@ public void testFsOpenMultithreaded() throws Exception { */ private void checkFsOpenWithAllNodesTypes() throws Exception { for (int i = 0; i < nodesTypes.length; ++i) { - log.info("Begin test case for nodes: " + S.arrayToString(nodesTypes[i])); + log.info("Begin test case for nodes: " + S.arrayToString(NodeType.class, nodesTypes[i])); startNodes(nodesTypes[i]); @@ -194,6 +199,7 @@ private void checkFsOpenWithAllNodesTypes() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFsOpenMultithreadedSkipInProc() throws Exception { skipInProc = true; @@ -203,6 +209,7 @@ public void testFsOpenMultithreadedSkipInProc() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIgniteClientWithIgfsMisconfigure() throws Exception { startNodes(new NodeType[] {NodeType.REMOTE, NodeType.REMOTE, NodeType.REMOTE}); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientSelfTest.java index 93f1d05ffb6db..5ee95b0726ab8 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemClientSelfTest.java @@ -45,6 +45,9 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -55,6 +58,7 @@ 
/** * Test interaction between an IGFS client and an IGFS server. */ +@RunWith(JUnit4.class) public class IgniteHadoopFileSystemClientSelfTest extends IgfsCommonAbstractTest { /** Logger. */ private static final Log LOG = LogFactory.getLog(IgniteHadoopFileSystemClientSelfTest.class); @@ -139,7 +143,7 @@ protected CacheConfiguration metaCacheConfiguration() { * * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testOutputStreamDeferredException() throws Exception { final byte[] data = "test".getBytes(); @@ -182,7 +186,6 @@ public void testOutputStreamDeferredException() throws Exception { * @param flag Flag state. * @throws Exception If failed. */ - @SuppressWarnings("ConstantConditions") private void switchHandlerErrorFlag(boolean flag) throws Exception { IgfsProcessorAdapter igfsProc = ((IgniteKernal)grid(0)).context().igfs(); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemHandshakeSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemHandshakeSelfTest.java index 02c708bfaafb5..42a1235ca63df 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemHandshakeSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemHandshakeSelfTest.java @@ -40,6 +40,9 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -54,6 +57,7 @@ /** * Tests for IGFS file system handshake.
*/ +@RunWith(JUnit4.class) public class IgniteHadoopFileSystemHandshakeSelfTest extends IgfsCommonAbstractTest { /** IP finder. */ private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); @@ -83,6 +87,7 @@ public class IgniteHadoopFileSystemHandshakeSelfTest extends IgfsCommonAbstractT * * @throws Exception If failed. */ + @Test public void testHandshake() throws Exception { startUp(false, false); @@ -111,6 +116,7 @@ public void testHandshake() throws Exception { * * @throws Exception If failed. */ + @Test public void testHandshakeDefaultGrid() throws Exception { startUp(true, false); @@ -280,4 +286,4 @@ private static Configuration configuration(String authority, boolean tcp) { return cfg; } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemIpcCacheSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemIpcCacheSelfTest.java index e9de332511638..a40896438b0e4 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemIpcCacheSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemIpcCacheSelfTest.java @@ -35,9 +35,9 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -49,10 +49,8 
@@ /** * IPC cache test. */ +@RunWith(JUnit4.class) public class IgniteHadoopFileSystemIpcCacheSelfTest extends IgfsCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Path to test hadoop configuration. */ private static final String HADOOP_FS_CFG = "modules/core/src/test/config/hadoop/core-site.xml"; @@ -66,11 +64,6 @@ public class IgniteHadoopFileSystemIpcCacheSelfTest extends IgfsCommonAbstractTe @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - FileSystemConfiguration igfsCfg = new FileSystemConfiguration(); igfsCfg.setName("igfs"); @@ -155,6 +148,7 @@ private CacheConfiguration metaCacheConfiguration() { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testIpcCache() throws Exception { Field cacheField = HadoopIgfsIpcIo.class.getDeclaredField("ipcCache"); @@ -221,4 +215,4 @@ public void testIpcCache() throws Exception { assert (Boolean)stopField.get(io); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerSelfTest.java index 6de033f274419..22c66b14859db 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerSelfTest.java @@ -30,6 +30,9 @@ import java.io.InputStreamReader; import java.util.ArrayList; import java.util.List; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.igfs.common.IgfsLogger.DELIM_FIELD; import static org.apache.ignite.internal.igfs.common.IgfsLogger.DELIM_FIELD_VAL; @@ -51,6 +54,7 @@ /** * Grid IGFS client logger test. */ +@RunWith(JUnit4.class) public class IgniteHadoopFileSystemLoggerSelfTest extends IgfsCommonAbstractTest { /** Path string. */ private static final String PATH_STR = "/dir1/dir2/file;test"; @@ -107,6 +111,7 @@ private void removeLogs() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateDelete() throws Exception { IgfsLogger log = IgfsLogger.logger(ENDPOINT, IGFS_NAME, LOG_DIR, 10); @@ -162,6 +167,7 @@ public void testCreateDelete() throws Exception { * * @throws Exception If failed. */ + @Test public void testLogRead() throws Exception { IgfsLogger log = IgfsLogger.logger(ENDPOINT, IGFS_NAME, LOG_DIR, 10); @@ -192,6 +198,7 @@ public void testLogRead() throws Exception { * * @throws Exception If failed. */ + @Test public void testLogWrite() throws Exception { IgfsLogger log = IgfsLogger.logger(ENDPOINT, IGFS_NAME, LOG_DIR, 10); @@ -217,6 +224,7 @@ public void testLogWrite() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("TooBroadScope") + @Test public void testLogMisc() throws Exception { IgfsLogger log = IgfsLogger.logger(ENDPOINT, IGFS_NAME, LOG_DIR, 10); @@ -296,4 +304,4 @@ private String d(int cnt) { return buf.toString(); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerStateSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerStateSelfTest.java index 9eeff33f3f2af..1eadea1b75afe 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerStateSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemLoggerStateSelfTest.java @@ -38,6 +38,9 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -49,6 +52,7 @@ /** * Ensures that sampling is really turned on/off. */ +@RunWith(JUnit4.class) public class IgniteHadoopFileSystemLoggerStateSelfTest extends IgfsCommonAbstractTest { /** IGFS. */ private IgfsEx igfs; @@ -142,6 +146,7 @@ private void startUp() throws Exception { * * @throws Exception If failed. */ + @Test public void testLoggingDisabledSamplingNotSet() throws Exception { startUp(); @@ -153,6 +158,7 @@ public void testLoggingDisabledSamplingNotSet() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testLoggingEnabledSamplingNotSet() throws Exception { logging = true; @@ -166,6 +172,7 @@ public void testLoggingEnabledSamplingNotSet() throws Exception { * * @throws Exception If failed. */ + @Test public void testLoggingDisabledSamplingDisabled() throws Exception { sampling = false; @@ -179,6 +186,7 @@ public void testLoggingDisabledSamplingDisabled() throws Exception { * * @throws Exception If failed. */ + @Test public void testLoggingEnabledSamplingDisabled() throws Exception { logging = true; sampling = false; @@ -193,6 +201,7 @@ public void testLoggingEnabledSamplingDisabled() throws Exception { * * @throws Exception If failed. */ + @Test public void testLoggingDisabledSamplingEnabled() throws Exception { sampling = true; @@ -206,6 +215,7 @@ public void testLoggingDisabledSamplingEnabled() throws Exception { * * @throws Exception If failed. */ + @Test public void testLoggingEnabledSamplingEnabled() throws Exception { logging = true; sampling = true; @@ -220,6 +230,7 @@ public void testLoggingEnabledSamplingEnabled() throws Exception { * * @throws Exception If failed. */ + @Test public void testSamplingChange() throws Exception { // Start with sampling not set. startUp(); @@ -286,6 +297,7 @@ public void testSamplingChange() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings("ConstantConditions") + @Test public void testLogDirectory() throws Exception { startUp(); @@ -329,4 +341,4 @@ private boolean logEnabled() throws Exception { return ((IgfsLogger)field.get(fs)).isLogEnabled(); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemAbstractSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemAbstractSelfTest.java index b5cf7beef80a7..6850e720cbf8c 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemAbstractSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemAbstractSelfTest.java @@ -28,12 +28,16 @@ import org.apache.ignite.internal.util.ipc.IpcEndpointFactory; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint.DFLT_IPC_PORT; /** * IGFS Hadoop file system IPC self test. */ +@RunWith(JUnit4.class) public abstract class IgniteHadoopFileSystemShmemAbstractSelfTest extends IgniteHadoopFileSystemAbstractSelfTest { /** * Constructor. @@ -61,7 +65,7 @@ protected IgniteHadoopFileSystemShmemAbstractSelfTest(IgfsMode mode, boolean ski * * @throws Exception If error occurred. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testOutOfResources() throws Exception { final Collection eps = new LinkedList<>(); @@ -91,4 +95,4 @@ public void testOutOfResources() throws Exception { ep.close(); } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemExternalToClientAbstractSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemExternalToClientAbstractSelfTest.java index fa64ed7fe56e7..24bab6b2817ac 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemExternalToClientAbstractSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/igfs/IgniteHadoopFileSystemShmemExternalToClientAbstractSelfTest.java @@ -29,12 +29,16 @@ import org.apache.ignite.internal.util.ipc.IpcEndpointFactory; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint.DFLT_IPC_PORT; /** * IGFS Hadoop file system IPC self test. */ +@RunWith(JUnit4.class) public abstract class IgniteHadoopFileSystemShmemExternalToClientAbstractSelfTest extends IgniteHadoopFileSystemAbstractSelfTest { /** @@ -73,7 +77,7 @@ protected IgniteHadoopFileSystemShmemExternalToClientAbstractSelfTest(IgfsMode m * * @throws Exception If error occurred. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testOutOfResources() throws Exception { final Collection eps = new LinkedList<>(); @@ -103,4 +107,4 @@ public void testOutOfResources() throws Exception { ep.close(); } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopConcurrentHashMultimapSelftest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopConcurrentHashMultimapSelftest.java index 7862d6eff3595..82cf90e63a4c2 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopConcurrentHashMultimapSelftest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopConcurrentHashMultimapSelftest.java @@ -43,12 +43,17 @@ import org.apache.ignite.internal.util.io.GridUnsafeDataInput; import org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMemory; import org.apache.ignite.internal.util.typedef.X; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class HadoopConcurrentHashMultimapSelftest extends HadoopAbstractMapTest { /** */ + @Test public void testMapSimple() throws Exception { GridUnsafeMemory mem = new GridUnsafeMemory(0); @@ -185,6 +190,7 @@ private void read(long ptr, int size, Writable w) { /** * @throws Exception if failed. 
*/ + @Test public void testMultiThreaded() throws Exception { GridUnsafeMemory mem = new GridUnsafeMemory(0); @@ -277,4 +283,4 @@ public void testMultiThreaded() throws Exception { assertEquals(0, mem.allocatedSize()); } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopHashMapSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopHashMapSelfTest.java index 195bcbbfcb752..9adb4961a925c 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopHashMapSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopHashMapSelfTest.java @@ -33,16 +33,21 @@ import java.util.Iterator; import java.util.Map; import java.util.Random; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class HadoopHashMapSelfTest extends HadoopAbstractMapTest { /** * Test simple map. * * @throws Exception If failed. 
*/ + @Test public void testMapSimple() throws Exception { GridUnsafeMemory mem = new GridUnsafeMemory(0); @@ -130,4 +135,4 @@ private GridLongList sorted(Collection col) { return lst.sort(); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopSkipListSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopSkipListSelfTest.java index 21575c545bdb4..720a18782d139 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopSkipListSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/collections/HadoopSkipListSelfTest.java @@ -43,14 +43,19 @@ import org.apache.ignite.internal.util.io.GridUnsafeDataInput; import org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMemory; import org.apache.ignite.internal.util.typedef.X; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Skip list tests. */ +@RunWith(JUnit4.class) public class HadoopSkipListSelfTest extends HadoopAbstractMapTest { /** * @throws Exception On error. */ + @Test public void testMapSimple() throws Exception { GridUnsafeMemory mem = new GridUnsafeMemory(0); @@ -200,6 +205,7 @@ private void read(long ptr, int size, Writable w) { /** * @throws Exception if failed. 
*/ + @Test public void testMultiThreaded() throws Exception { GridUnsafeMemory mem = new GridUnsafeMemory(0); @@ -292,4 +298,4 @@ public void testMultiThreaded() throws Exception { assertEquals(0, mem.allocatedSize()); } } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/streams/HadoopDataStreamSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/streams/HadoopDataStreamSelfTest.java index c7d4dce0fdec1..eb011ef3ef8d5 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/streams/HadoopDataStreamSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/shuffle/streams/HadoopDataStreamSelfTest.java @@ -34,16 +34,21 @@ import org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMemory; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class HadoopDataStreamSelfTest extends GridCommonAbstractTest { private static final int BUFF_SIZE = 4 * 1024; /** * @throws IOException If failed. */ + @Test public void testStreams() throws IOException { GridUnsafeMemory mem = new GridUnsafeMemory(0); @@ -65,6 +70,7 @@ public void testStreams() throws IOException { /** * @throws IOException If failed. */ + @Test public void testDirectStreams() throws IOException { HadoopDirectDataOutput out = new HadoopDirectDataOutput(BUFF_SIZE); @@ -80,6 +86,7 @@ public void testDirectStreams() throws IOException { /** * @throws IOException If failed. 
*/ + @Test public void testReadline() throws IOException { checkReadLine("String1\rString2\r\nString3\nString4"); checkReadLine("String1\rString2\r\nString3\nString4\r\n"); @@ -295,4 +302,4 @@ private void checkRead(DataInput in) throws IOException { assertEquals("mom washes rum", in.readUTF()); } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/HadoopExecutorServiceTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/HadoopExecutorServiceTest.java index 3486a14e70a85..c1b7b99783232 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/HadoopExecutorServiceTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/HadoopExecutorServiceTest.java @@ -17,21 +17,25 @@ package org.apache.ignite.internal.processors.hadoop.impl.taskexecutor; -import java.util.concurrent.Callable; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.LongAdder; -import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import java.util.concurrent.Callable; +import java.util.concurrent.atomic.LongAdder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + /** * */ +@RunWith(JUnit4.class) public class HadoopExecutorServiceTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testExecutesAll() throws Exception { final HadoopExecutorService exec = new HadoopExecutorService(log, "_GRID_NAME_", 10, 5); @@ -70,49 +74,4 @@ public void testExecutesAll() throws Exception { assertTrue(exec.shutdown(0)); } - - /** - * @throws Exception If failed. - */ - public void testShutdown() throws Exception { - for (int i = 0; i < 5; i++) { - final HadoopExecutorService exec = new HadoopExecutorService(log, "_GRID_NAME_", 10, 5); - - final LongAdder sum = new LongAdder(); - - final AtomicBoolean finish = new AtomicBoolean(); - - IgniteInternalFuture fut = multithreadedAsync(new Callable() { - @Override public Object call() throws Exception { - while (!finish.get()) { - exec.submit(new Callable() { - @Override public Void call() throws Exception { - sum.increment(); - - return null; - } - }); - } - - return null; - } - }, 19); - - Thread.sleep(200); - - assertTrue(exec.shutdown(50)); - - long res = sum.sum(); - - assertTrue(res > 0); - - finish.set(true); - - fut.get(); - - assertEquals(res, sum.sum()); // Nothing was executed after shutdown. 
- - X.println("_ ok"); - } - } -} \ No newline at end of file +} diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/external/HadoopExternalTaskExecutionSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/external/HadoopExternalTaskExecutionSelfTest.java index 12460787702df..84fb7fc16c5ca 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/external/HadoopExternalTaskExecutionSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/taskexecutor/external/HadoopExternalTaskExecutionSelfTest.java @@ -41,12 +41,16 @@ import org.apache.ignite.internal.processors.hadoop.HadoopJobId; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.marshaller.jdk.JdkMarshaller; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.createJobInfo; /** * Job tracker self test. */ +@RunWith(JUnit4.class) public class HadoopExternalTaskExecutionSelfTest extends HadoopAbstractSelfTest { /** {@inheritDoc} */ @Override protected boolean igfsEnabled() { @@ -89,6 +93,7 @@ public class HadoopExternalTaskExecutionSelfTest extends HadoopAbstractSelfTest /** * @throws Exception If failed. */ + @Test public void testSimpleTaskSubmit() throws Exception { String testInputFile = "/test"; @@ -125,6 +130,7 @@ public void testSimpleTaskSubmit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMapperException() throws Exception { String testInputFile = "/test"; @@ -231,4 +237,4 @@ private static class TestReducer extends Reducer()); } @@ -83,6 +89,7 @@ private void checkNullOrEmptyMappings(@Nullable Map map) throws * * @throws Exception If failed. 
*/ + @Test public void testMappings() throws Exception { Map map = new HashMap<>(); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/ChainedUserNameMapperSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/ChainedUserNameMapperSelfTest.java index a9d295f861d6e..a839654a1a493 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/ChainedUserNameMapperSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/ChainedUserNameMapperSelfTest.java @@ -28,10 +28,14 @@ import java.util.Collections; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for chained user name mapper. */ +@RunWith(JUnit4.class) public class ChainedUserNameMapperSelfTest extends GridCommonAbstractTest { /** Test instance. */ private static final String INSTANCE = "test_instance"; @@ -44,7 +48,7 @@ public class ChainedUserNameMapperSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testNullMappers() throws Exception { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { @@ -60,7 +64,7 @@ public void testNullMappers() throws Exception { * * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testNullMapperElement() throws Exception { GridTestUtils.assertThrows(null, new Callable() { @Override public Void call() throws Exception { @@ -76,6 +80,7 @@ public void testNullMapperElement() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testChaining() throws Exception { BasicUserNameMapper mapper1 = new BasicUserNameMapper(); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/KerberosUserNameMapperSelfTest.java b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/KerberosUserNameMapperSelfTest.java index bd76b51a1685b..a32245e5b0ba1 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/KerberosUserNameMapperSelfTest.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/internal/processors/hadoop/impl/util/KerberosUserNameMapperSelfTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.internal.processors.igfs.IgfsUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for Kerberos name mapper. */ +@RunWith(JUnit4.class) public class KerberosUserNameMapperSelfTest extends GridCommonAbstractTest { /** Test instance. */ private static final String INSTANCE = "test_instance"; @@ -37,6 +41,7 @@ public class KerberosUserNameMapperSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testMapper() throws Exception { KerberosUserNameMapper mapper = create(null, null); @@ -49,6 +54,7 @@ public void testMapper() throws Exception { * * @throws Exception If failed. */ + @Test public void testMapperInstance() throws Exception { KerberosUserNameMapper mapper = create(INSTANCE, null); @@ -61,6 +67,7 @@ public void testMapperInstance() throws Exception { * * @throws Exception If failed. */ + @Test public void testMapperRealm() throws Exception { KerberosUserNameMapper mapper = create(null, REALM); @@ -73,6 +80,7 @@ public void testMapperRealm() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMapperInstanceAndRealm() throws Exception { KerberosUserNameMapper mapper = create(INSTANCE, REALM); diff --git a/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteHadoopTestSuite.java b/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteHadoopTestSuite.java index 199fa9623bb4e..c4cebbf1dfffd 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteHadoopTestSuite.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteHadoopTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.commons.compress.archivers.tar.TarArchiveEntry; import org.apache.commons.compress.archivers.tar.TarArchiveInputStream; @@ -103,13 +104,16 @@ import java.nio.file.Path; import java.nio.file.Paths; import java.util.List; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import static org.apache.ignite.testframework.GridTestUtils.modeToPermissionSet; /** * Test suite for Hadoop Map Reduce engine. */ -public class IgniteHadoopTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteHadoopTestSuite { /** * @return Test suite. * @throws Exception Thrown in case of the failure. 
@@ -122,109 +126,109 @@ public static TestSuite suite() throws Exception { TestSuite suite = new TestSuite("Ignite Hadoop MR Test Suite"); - suite.addTest(new TestSuite(ldr.loadClass(HadoopUserLibsSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopUserLibsSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopWeightedMapReducePlannerTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopWeightedMapReducePlannerTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(BasicUserNameMapperSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(KerberosUserNameMapperSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(ChainedUserNameMapperSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(BasicUserNameMapperSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(KerberosUserNameMapperSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(ChainedUserNameMapperSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(KerberosHadoopFileSystemFactorySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(KerberosHadoopFileSystemFactorySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopTeraSortTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopTeraSortTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopSnappyTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopSnappyFullMapReduceTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopSnappyTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopSnappyFullMapReduceTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopIgfs20FileSystemLoopbackPrimarySelfTest.class.getName()))); + suite.addTest(new 
JUnit4TestAdapter(ldr.loadClass(HadoopIgfs20FileSystemLoopbackPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopIgfsDualSyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopIgfsDualAsyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopIgfsDualSyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopIgfsDualAsyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(Hadoop1OverIgfsDualSyncTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(Hadoop1OverIgfsDualAsyncTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(Hadoop1OverIgfsProxyTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(Hadoop1OverIgfsDualSyncTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(Hadoop1OverIgfsDualAsyncTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(Hadoop1OverIgfsProxyTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopFIleSystemFactorySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopFIleSystemFactorySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalSecondarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalDualSyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalDualAsyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedSecondarySelfTest.class.getName()))); - suite.addTest(new 
TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedDualSyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedDualAsyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemClientBasedPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemClientBasedDualAsyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemClientBasedDualSyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemClientBasedProxySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientDualSyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientDualAsyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientProxySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalPrimarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalSecondarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalDualSyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalDualAsyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedPrimarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedSecondarySelfTest.class.getName()))); + suite.addTest(new 
JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedDualSyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackEmbeddedDualAsyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemClientBasedPrimarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemClientBasedDualAsyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemClientBasedDualSyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemClientBasedProxySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientPrimarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientDualSyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientDualAsyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoopbackExternalToClientProxySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemClientSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemClientSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoggerStateSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemLoggerSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoggerStateSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemLoggerSelfTest.class.getName()))); - suite.addTest(new 
TestSuite(ldr.loadClass(IgniteHadoopFileSystemHandshakeSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemHandshakeSelfTest.class.getName()))); suite.addTest(IgfsEventsTestSuite.suiteNoarchOnly()); - suite.addTest(new TestSuite(ldr.loadClass(HadoopFileSystemsTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopFileSystemsTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopExecutorServiceTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopExecutorServiceTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopValidationSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopValidationSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopJobTrackerSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopJobTrackerSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopHashMapSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopDataStreamSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopConcurrentHashMultimapSelftest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopHashMapSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopDataStreamSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopConcurrentHashMultimapSelftest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopSkipListSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopSkipListSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopTaskExecutionSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopTaskExecutionSelfTest.class.getName()))); - suite.addTest(new 
TestSuite(ldr.loadClass(HadoopV2JobSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopV2JobSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopSerializationWrapperSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopSplitWrapperSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopSerializationWrapperSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopSplitWrapperSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopTasksV1Test.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopTasksV2Test.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopTasksV1Test.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopTasksV2Test.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopMapReduceTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopWeightedPlannerMapReduceTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopNoHadoopMapReduceTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopMapReduceErrorResilienceTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopMapReduceTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopWeightedPlannerMapReduceTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopNoHadoopMapReduceTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopMapReduceErrorResilienceTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopMapReduceEmbeddedSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopMapReduceEmbeddedSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopSortingTest.class.getName()))); + suite.addTest(new 
JUnit4TestAdapter(ldr.loadClass(HadoopSortingTest.class.getName()))); // TODO https://issues.apache.org/jira/browse/IGNITE-3167 -// suite.addTest(new TestSuite(ldr.loadClass(HadoopExternalTaskExecutionSelfTest.class.getName()))); -// suite.addTest(new TestSuite(ldr.loadClass(HadoopExternalCommunicationSelfTest.class.getName()))); -// suite.addTest(new TestSuite(ldr.loadClass(HadoopSortingExternalTest.class.getName()))); +// suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopExternalTaskExecutionSelfTest.class.getName()))); +// suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopExternalCommunicationSelfTest.class.getName()))); +// suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopSortingExternalTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopGroupingTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopGroupingTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopClientProtocolSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopClientProtocolEmbeddedSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopClientProtocolMultipleServersSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopClientProtocolSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopClientProtocolEmbeddedSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopClientProtocolMultipleServersSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopCommandLineTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopCommandLineTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopSecondaryFileSystemConfigurationTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopSecondaryFileSystemConfigurationTest.class.getName()))); - suite.addTest(new 
TestSuite(ldr.loadClass(HadoopTxConfigCacheTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopTxConfigCacheTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemClientBasedOpenTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemClientBasedOpenTest.class.getName()))); return suite; } diff --git a/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteIgfsLinuxAndMacOSTestSuite.java b/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteIgfsLinuxAndMacOSTestSuite.java index 7d1b55dc788d3..ede1f181cb1cd 100644 --- a/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteIgfsLinuxAndMacOSTestSuite.java +++ b/modules/hadoop/src/test/java/org/apache/ignite/testsuites/IgniteIgfsLinuxAndMacOSTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.hadoop.HadoopTestClassLoader; import org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfs20FileSystemShmemPrimarySelfTest; @@ -32,6 +33,8 @@ import org.apache.ignite.internal.processors.hadoop.impl.igfs.IgniteHadoopFileSystemShmemExternalToClientPrimarySelfTest; import org.apache.ignite.internal.processors.hadoop.impl.igfs.IgniteHadoopFileSystemShmemExternalToClientProxySelfTest; import org.apache.ignite.internal.processors.igfs.IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; import static org.apache.ignite.testsuites.IgniteHadoopTestSuite.downloadHadoop; @@ -39,7 +42,8 @@ * Test suite for Hadoop file system over Ignite cache. * Contains tests which works on Linux and Mac OS platform only. */ -public class IgniteIgfsLinuxAndMacOSTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteIgfsLinuxAndMacOSTestSuite { /** * @return Test suite. 
* @throws Exception Thrown in case of the failure. @@ -51,22 +55,22 @@ public static TestSuite suite() throws Exception { TestSuite suite = new TestSuite("Ignite IGFS Test Suite For Linux And Mac OS"); - suite.addTest(new TestSuite(ldr.loadClass(IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgfsServerManagerIpcEndpointRegistrationOnLinuxAndMacSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalSecondarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalDualSyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalDualAsyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientDualAsyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientDualSyncSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientProxySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalPrimarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalSecondarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalDualSyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalDualAsyncSelfTest.class.getName()))); + suite.addTest(new 
JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientPrimarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientDualAsyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientDualSyncSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemShmemExternalToClientProxySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgniteHadoopFileSystemIpcCacheSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgniteHadoopFileSystemIpcCacheSelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(HadoopIgfs20FileSystemShmemPrimarySelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(HadoopIgfs20FileSystemShmemPrimarySelfTest.class.getName()))); - suite.addTest(new TestSuite(ldr.loadClass(IgfsNearOnlyMultiNodeSelfTest.class.getName()))); + suite.addTest(new JUnit4TestAdapter(ldr.loadClass(IgfsNearOnlyMultiNodeSelfTest.class.getName()))); suite.addTest(IgfsEventsTestSuite.suite()); diff --git a/modules/hibernate-4.2/pom.xml b/modules/hibernate-4.2/pom.xml index 41ac917666add..510f90698cd86 100644 --- a/modules/hibernate-4.2/pom.xml +++ b/modules/hibernate-4.2/pom.xml @@ -63,7 +63,7 @@ org.ow2.jotm jotm-core - 2.1.9 + ${jotm.version} test @@ -75,9 +75,9 @@ - com.h2database - h2 - ${h2.version} + org.apache.ignite + ignite-h2 + ${project.version} test @@ -130,6 +130,23 @@ 1.4.8 test + + + + + org.jboss.spec.javax.rmi + jboss-rmi-api_1.0_spec + ${jboss.rmi.version} + test + + + + javax.xml.bind + jaxb-api + ${jaxb.api.version} + test + + diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java index 
a28c5da31a257..eb09c59faa9ca 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java @@ -40,6 +40,9 @@ import org.hibernate.cache.spi.access.AccessType; import org.hibernate.cfg.Configuration; import org.hibernate.service.ServiceRegistryBuilder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -57,6 +60,7 @@ /** * Tests Hibernate L2 cache configuration. */ +@RunWith(JUnit4.class) public class HibernateL2CacheConfigurationSelfTest extends GridCommonAbstractTest { /** */ public static final String ENTITY1_NAME = Entity1.class.getName(); @@ -168,6 +172,7 @@ protected Configuration hibernateConfiguration(String igniteInstanceName) { /** * Tests property {@link HibernateAccessStrategyFactory#REGION_CACHE_PROPERTY}. */ + @Test public void testPerRegionCacheProperty() { testCacheUsage(1, 1, 0, 1, 1); } @@ -175,6 +180,7 @@ public void testPerRegionCacheProperty() { /** * Tests property {@link HibernateAccessStrategyFactory#DFLT_CACHE_NAME_PROPERTY}. 
*/ + @Test public void testDefaultCache() { dfltCache = true; @@ -401,4 +407,4 @@ public void setId(int id) { this.id = id; } } -} \ No newline at end of file +} diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java index ed092aa248333..43c54b24abaa7 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java @@ -36,6 +36,9 @@ import org.hibernate.annotations.CacheConcurrencyStrategy; import org.hibernate.cfg.Configuration; import org.hibernate.service.ServiceRegistryBuilder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.cache.hibernate.HibernateAccessStrategyFactory.DFLT_CACHE_NAME_PROPERTY; @@ -46,6 +49,7 @@ /** * */ +@RunWith(JUnit4.class) public class HibernateL2CacheMultiJvmTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "hibernateCache"; @@ -89,6 +93,7 @@ public class HibernateL2CacheMultiJvmTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testL2Cache() throws Exception { Ignite srv = ignite(0); diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java index d69c61a10584d..3e60370def2f0 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java @@ -34,9 +34,6 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.hibernate.ObjectNotFoundException; @@ -55,6 +52,9 @@ import org.hibernate.service.ServiceRegistryBuilder; import org.hibernate.stat.NaturalIdCacheStatistics; import org.hibernate.stat.SecondLevelCacheStatistics; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -74,10 +74,8 @@ * * Tests Hibernate L2 cache. 
*/ +@RunWith(JUnit4.class) public class HibernateL2CacheSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ public static final String CONNECTION_URL = "jdbc:h2:mem:example;DB_CLOSE_DELAY=-1"; @@ -411,12 +409,6 @@ public void setVersion(long ver) { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCacheConfiguration(generalRegionConfiguration("org.hibernate.cache.spi.UpdateTimestampsCache"), generalRegionConfiguration("org.hibernate.cache.internal.StandardQueryCache"), transactionalRegionConfiguration(ENTITY_NAME), @@ -535,6 +527,7 @@ protected AccessType[] accessTypes() { /** * @throws Exception If failed. */ + @Test public void testCollectionCache() throws Exception { for (AccessType accessType : accessTypes()) testCollectionCache(accessType); @@ -655,6 +648,7 @@ private void testCollectionCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntityCache() throws Exception { for (AccessType accessType : accessTypes()) testEntityCache(accessType); @@ -815,6 +809,7 @@ private void testEntityCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testTwoEntitiesSameCache() throws Exception { for (AccessType accessType : accessTypes()) testTwoEntitiesSameCache(accessType); @@ -1018,6 +1013,7 @@ private void testTwoEntitiesSameCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testVersionedEntity() throws Exception { for (AccessType accessType : accessTypes()) testVersionedEntity(accessType); @@ -1128,6 +1124,7 @@ private void testVersionedEntity(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testNaturalIdCache() throws Exception { for (AccessType accessType : accessTypes()) testNaturalIdCache(accessType); @@ -1282,6 +1279,7 @@ private void testNaturalIdCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntityCacheTransactionFails() throws Exception { for (AccessType accessType : accessTypes()) testEntityCacheTransactionFails(accessType); @@ -1456,6 +1454,7 @@ private void testEntityCacheTransactionFails(AccessType accessType) throws Excep /** * @throws Exception If failed. */ + @Test public void testQueryCache() throws Exception { for (AccessType accessType : accessTypes()) testQueryCache(accessType); @@ -1620,6 +1619,7 @@ private void testQueryCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRegionClear() throws Exception { for (AccessType accessType : accessTypes()) testRegionClear(accessType); @@ -1945,4 +1945,4 @@ static Map hibernateProperties(String igniteInstanceName, String return map; } -} \ No newline at end of file +} diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java index 5e1fd32fb10b9..44d77011c2821 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java @@ -36,6 +36,9 @@ import org.hibernate.cache.spi.access.AccessType; import org.hibernate.cfg.Configuration; import org.hibernate.service.ServiceRegistryBuilder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -55,6 +58,7 @@ * Tests Hibernate L2 cache configuration. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class HibernateL2CacheStrategySelfTest extends GridCommonAbstractTest { /** */ private static final String ENTITY1_NAME = Entity1.class.getName(); @@ -168,6 +172,7 @@ private Configuration hibernateConfiguration(AccessType accessType, String ignit /** * @throws Exception If failed. 
*/ + @Test public void testEntityCacheReadWrite() throws Exception { for (AccessType accessType : new AccessType[]{AccessType.READ_WRITE, AccessType.NONSTRICT_READ_WRITE}) testEntityCacheReadWrite(accessType); @@ -589,4 +594,4 @@ private void cleanup() throws Exception { for (IgniteCacheProxy cache : ((IgniteKernal)grid(0)).caches()) cache.clear(); } -} \ No newline at end of file +} diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java index 77025a1ea294c..27e8b53f820a6 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java @@ -24,6 +24,8 @@ import org.hibernate.FlushMode; import org.hibernate.Session; import org.hibernate.Transaction; +import org.junit.Ignore; +import org.junit.Test; /** * Cache store test. @@ -100,6 +102,7 @@ public void testConfigurationByFile() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConfigurationByResource() throws Exception { store.setHibernateConfigurationPath("/org/apache/ignite/cache/store/hibernate/hibernate.cfg.xml"); @@ -107,7 +110,10 @@ public void testConfigurationByResource() throws Exception { store.load("key"); } + /** */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-1757") + @Test @Override public void testSimpleMultithreading() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-1757"); + // No-op. 
} } diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java index db0e1bebe8026..28fb05d87f53e 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java @@ -43,10 +43,15 @@ import org.hibernate.metadata.ClassMetadata; import org.hibernate.metadata.CollectionMetadata; import org.hibernate.stat.Statistics; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for Cache jdbc blob store factory. */ +@RunWith(JUnit4.class) public class CacheHibernateStoreFactorySelfTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "test"; @@ -57,6 +62,7 @@ public class CacheHibernateStoreFactorySelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCacheConfiguration() throws Exception { try (Ignite ignite1 = startGrid(0)) { IgniteCache cache1 = ignite1.getOrCreateCache(cacheConfiguration()); @@ -68,6 +74,7 @@ public void testCacheConfiguration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testXmlConfiguration() throws Exception { try (Ignite ignite = Ignition.start(MODULE_PATH + "/src/test/config/factory-cache.xml")) { try(Ignite ignite1 = Ignition.start(MODULE_PATH + "/src/test/config/factory-cache1.xml")) { @@ -82,8 +89,10 @@ public void testXmlConfiguration() throws Exception { /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-1094") + @Test public void testIncorrectBeanConfiguration() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-1094"); + fail("https://issues.apache.org/jira/browse/IGNITE-10723"); GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -287,4 +296,4 @@ public static class DummySessionFactory implements SessionFactory { return null; } } -} \ No newline at end of file +} diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernateTestSuite.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernateTestSuite.java index 3791baed9c93c..84f2f7bf5c8df 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernateTestSuite.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernateTestSuite.java @@ -20,16 +20,18 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteBinaryHibernateTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteBinaryHibernateTestSuite { /** * @return Test suite. - * @throws Exception If failed. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { GridTestProperties.setProperty(GridTestProperties.MARSH_CLASS_NAME, BinaryMarshaller.class.getName()); return IgniteHibernateTestSuite.suite(); diff --git a/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteHibernateTestSuite.java b/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteHibernateTestSuite.java index 8d45dea48d8e2..436203a797567 100644 --- a/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteHibernateTestSuite.java +++ b/modules/hibernate-4.2/src/test/java/org/apache/ignite/testsuites/IgniteHibernateTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.cache.hibernate.HibernateL2CacheConfigurationSelfTest; import org.apache.ignite.cache.hibernate.HibernateL2CacheMultiJvmTest; @@ -28,33 +29,35 @@ import org.apache.ignite.cache.store.hibernate.CacheHibernateBlobStoreSelfTest; import org.apache.ignite.cache.store.hibernate.CacheHibernateStoreFactorySelfTest; import org.apache.ignite.cache.store.hibernate.CacheHibernateStoreSessionListenerSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Hibernate integration tests. */ -public class IgniteHibernateTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteHibernateTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Hibernate Integration Test Suite"); // Hibernate L2 cache. 
- suite.addTestSuite(HibernateL2CacheSelfTest.class); - suite.addTestSuite(HibernateL2CacheTransactionalSelfTest.class); - suite.addTestSuite(HibernateL2CacheTransactionalUseSyncSelfTest.class); - suite.addTestSuite(HibernateL2CacheConfigurationSelfTest.class); - suite.addTestSuite(HibernateL2CacheStrategySelfTest.class); - suite.addTestSuite(HibernateL2CacheMultiJvmTest.class); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheTransactionalSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheTransactionalUseSyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheConfigurationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheStrategySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheMultiJvmTest.class)); - suite.addTestSuite(CacheHibernateBlobStoreSelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateBlobStoreSelfTest.class)); - suite.addTestSuite(CacheHibernateBlobStoreNodeRestartTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateBlobStoreNodeRestartTest.class)); - suite.addTestSuite(CacheHibernateStoreSessionListenerSelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateStoreSessionListenerSelfTest.class)); - suite.addTestSuite(CacheHibernateStoreFactorySelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateStoreFactorySelfTest.class)); return suite; } diff --git a/modules/hibernate-5.1/pom.xml b/modules/hibernate-5.1/pom.xml index ad3012461ed06..8d80d6b7125ca 100644 --- a/modules/hibernate-5.1/pom.xml +++ b/modules/hibernate-5.1/pom.xml @@ -63,7 +63,7 @@ org.ow2.jotm jotm-core - 2.1.9 + ${jotm.version} test @@ -75,9 +75,9 @@ - com.h2database - h2 - ${h2.version} + org.apache.ignite + ignite-h2 + ${project.version} test @@ -130,6 +130,36 @@ 1.4.8 test + + + + + org.jboss.spec.javax.rmi + jboss-rmi-api_1.0_spec + ${jboss.rmi.version} + test + + + + 
javax.xml.bind + jaxb-api + ${jaxb.api.version} + test + + + + com.sun.xml.bind + jaxb-core + ${jaxb.impl.version} + test + + + + com.sun.xml.bind + jaxb-impl + ${jaxb.impl.version} + test + diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java index 2268b06886518..f828d04c83753 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheConfigurationSelfTest.java @@ -40,6 +40,9 @@ import org.hibernate.boot.registry.StandardServiceRegistryBuilder; import org.hibernate.cache.spi.access.AccessType; import org.hibernate.cfg.Configuration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -55,6 +58,7 @@ /** * Tests Hibernate L2 cache configuration. */ +@RunWith(JUnit4.class) public class HibernateL2CacheConfigurationSelfTest extends GridCommonAbstractTest { /** */ public static final String ENTITY1_NAME = Entity1.class.getName(); @@ -165,6 +169,7 @@ protected Configuration hibernateConfiguration(String igniteInstanceName) { /** * Tests property {@link HibernateAccessStrategyFactory#REGION_CACHE_PROPERTY}. */ + @Test public void testPerRegionCacheProperty() { testCacheUsage(1, 1, 0, 1, 1); } @@ -172,6 +177,7 @@ public void testPerRegionCacheProperty() { /** * Tests property {@link HibernateAccessStrategyFactory#DFLT_CACHE_NAME_PROPERTY}. 
*/ + @Test public void testDefaultCache() { dfltCache = true; @@ -399,4 +405,4 @@ public void setId(int id) { this.id = id; } } -} \ No newline at end of file +} diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java index 268ea8ccf04b2..b3da8bbfaf59d 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheMultiJvmTest.java @@ -36,6 +36,9 @@ import org.hibernate.annotations.CacheConcurrencyStrategy; import org.hibernate.boot.MetadataSources; import org.hibernate.boot.registry.StandardServiceRegistryBuilder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.cache.hibernate.HibernateAccessStrategyFactory.DFLT_CACHE_NAME_PROPERTY; @@ -46,6 +49,7 @@ /** * */ +@RunWith(JUnit4.class) public class HibernateL2CacheMultiJvmTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "hibernateCache"; @@ -89,6 +93,7 @@ public class HibernateL2CacheMultiJvmTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testL2Cache() throws Exception { Ignite srv = ignite(0); diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java index 798f73d43b057..b1b57f05e11c4 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheSelfTest.java @@ -34,9 +34,6 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.IgniteCacheProxy; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.hibernate.ObjectNotFoundException; @@ -60,6 +57,9 @@ import org.hibernate.mapping.RootClass; import org.hibernate.stat.NaturalIdCacheStatistics; import org.hibernate.stat.SecondLevelCacheStatistics; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -80,10 +80,8 @@ * * Tests Hibernate L2 cache. 
*/ +@RunWith(JUnit4.class) public class HibernateL2CacheSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ public static final String CONNECTION_URL = "jdbc:h2:mem:example;DB_CLOSE_DELAY=-1"; @@ -420,12 +418,6 @@ public void setVersion(long ver) { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setCacheConfiguration(generalRegionConfiguration("org.hibernate.cache.spi.UpdateTimestampsCache"), generalRegionConfiguration("org.hibernate.cache.internal.StandardQueryCache"), transactionalRegionConfiguration(ENTITY_NAME), @@ -515,6 +507,7 @@ protected AccessType[] accessTypes() { /** * @throws Exception If failed. */ + @Test public void testCollectionCache() throws Exception { for (AccessType accessType : accessTypes()) testCollectionCache(accessType); @@ -635,6 +628,7 @@ private void testCollectionCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntityCache() throws Exception { for (AccessType accessType : accessTypes()) testEntityCache(accessType); @@ -795,6 +789,7 @@ private void testEntityCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testTwoEntitiesSameCache() throws Exception { for (AccessType accessType : accessTypes()) testTwoEntitiesSameCache(accessType); @@ -998,6 +993,7 @@ private void testTwoEntitiesSameCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testVersionedEntity() throws Exception { for (AccessType accessType : accessTypes()) testVersionedEntity(accessType); @@ -1108,6 +1104,7 @@ private void testVersionedEntity(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testNaturalIdCache() throws Exception { for (AccessType accessType : accessTypes()) testNaturalIdCache(accessType); @@ -1262,6 +1259,7 @@ private void testNaturalIdCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. */ + @Test public void testEntityCacheTransactionFails() throws Exception { for (AccessType accessType : accessTypes()) testEntityCacheTransactionFails(accessType); @@ -1436,6 +1434,7 @@ private void testEntityCacheTransactionFails(AccessType accessType) throws Excep /** * @throws Exception If failed. */ + @Test public void testQueryCache() throws Exception { for (AccessType accessType : accessTypes()) testQueryCache(accessType); @@ -1600,6 +1599,7 @@ private void testQueryCache(AccessType accessType) throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRegionClear() throws Exception { for (AccessType accessType : accessTypes()) testRegionClear(accessType); @@ -1951,4 +1951,4 @@ static Map hibernateProperties(String igniteInstanceName, String return map; } -} \ No newline at end of file +} diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java index 8bd80da5c2ece..0efb815d68648 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/hibernate/HibernateL2CacheStrategySelfTest.java @@ -40,6 +40,9 @@ import org.hibernate.cache.spi.access.AccessType; import org.hibernate.mapping.PersistentClass; import org.hibernate.mapping.RootClass; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -51,6 +54,7 @@ * Tests Hibernate L2 cache configuration. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class HibernateL2CacheStrategySelfTest extends GridCommonAbstractTest { /** */ private static final String ENTITY1_NAME = Entity1.class.getName(); @@ -120,6 +124,7 @@ private CacheConfiguration cacheConfiguration(String cacheName) { /** * @throws Exception If failed. 
*/ + @Test public void testEntityCacheReadWrite() throws Exception { for (AccessType accessType : new AccessType[]{AccessType.READ_WRITE, AccessType.NONSTRICT_READ_WRITE}) testEntityCacheReadWrite(accessType); @@ -561,4 +566,4 @@ private void cleanup() throws Exception { for (IgniteCacheProxy cache : ((IgniteKernal)grid(0)).caches()) cache.clear(); } -} \ No newline at end of file +} diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java index c62db4ab619c2..37fd9f8250a89 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateBlobStoreSelfTest.java @@ -25,10 +25,15 @@ import org.hibernate.Session; import org.hibernate.Transaction; import org.hibernate.resource.transaction.spi.TransactionStatus; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Cache store test. */ +@RunWith(JUnit4.class) public class CacheHibernateBlobStoreSelfTest extends GridAbstractCacheStoreSelfTest> { /** @@ -69,6 +74,7 @@ public CacheHibernateBlobStoreSelfTest() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConfigurationByUrl() throws Exception { URL url = U.resolveIgniteUrl(CacheHibernateStoreFactorySelfTest.MODULE_PATH + "/src/test/java/org/apache/ignite/cache/store/hibernate/hibernate.cfg.xml"); @@ -84,6 +90,7 @@ public void testConfigurationByUrl() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConfigurationByFile() throws Exception { URL url = U.resolveIgniteUrl(CacheHibernateStoreFactorySelfTest.MODULE_PATH + "/src/test/java/org/apache/ignite/cache/store/hibernate/hibernate.cfg.xml"); @@ -101,6 +108,7 @@ public void testConfigurationByFile() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConfigurationByResource() throws Exception { store.setHibernateConfigurationPath("/org/apache/ignite/cache/store/hibernate/hibernate.cfg.xml"); @@ -108,7 +116,10 @@ public void testConfigurationByResource() throws Exception { store.load("key"); } + /** */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-1757") + @Test @Override public void testSimpleMultithreading() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-1757"); + // No-op. } } diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java index 0ffe52ef1176f..ee3d395e0b6d1 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/cache/store/hibernate/CacheHibernateStoreFactorySelfTest.java @@ -43,10 +43,15 @@ import org.hibernate.metadata.ClassMetadata; import org.hibernate.metadata.CollectionMetadata; import org.hibernate.stat.Statistics; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for Cache jdbc blob store factory. */ +@RunWith(JUnit4.class) public class CacheHibernateStoreFactorySelfTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "test"; @@ -57,6 +62,7 @@ public class CacheHibernateStoreFactorySelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testCacheConfiguration() throws Exception { try (Ignite ignite1 = startGrid(0)) { IgniteCache cache1 = ignite1.getOrCreateCache(cacheConfiguration()); @@ -68,6 +74,7 @@ public void testCacheConfiguration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testXmlConfiguration() throws Exception { try (Ignite ignite = Ignition.start(MODULE_PATH + "/src/test/config/factory-cache.xml")) { try(Ignite ignite1 = Ignition.start(MODULE_PATH + "/src/test/config/factory-cache1.xml")) { @@ -82,8 +89,10 @@ public void testXmlConfiguration() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-1094") + @Test public void testIncorrectBeanConfiguration() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-1094"); + fail("https://issues.apache.org/jira/browse/IGNITE-10723"); GridTestUtils.assertThrows(log, new Callable() { @Override public Object call() throws Exception { @@ -255,4 +264,4 @@ public static class DummySessionFactory implements SessionFactory { return null; } } -} \ No newline at end of file +} diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernate5TestSuite.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernate5TestSuite.java index d5395111e84b4..40ee896a848f0 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernate5TestSuite.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteBinaryHibernate5TestSuite.java @@ -20,16 +20,18 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * */ -public class IgniteBinaryHibernate5TestSuite extends TestSuite { +@RunWith(AllTests.class) +public class 
IgniteBinaryHibernate5TestSuite { /** * @return Test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { GridTestProperties.setProperty(GridTestProperties.MARSH_CLASS_NAME, BinaryMarshaller.class.getName()); return IgniteHibernate5TestSuite.suite(); diff --git a/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteHibernate5TestSuite.java b/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteHibernate5TestSuite.java index b5715997eaaed..f87e676bf681c 100644 --- a/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteHibernate5TestSuite.java +++ b/modules/hibernate-5.1/src/test/java/org/apache/ignite/testsuites/IgniteHibernate5TestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.cache.hibernate.HibernateL2CacheConfigurationSelfTest; import org.apache.ignite.cache.hibernate.HibernateL2CacheMultiJvmTest; @@ -28,33 +29,35 @@ import org.apache.ignite.cache.store.hibernate.CacheHibernateBlobStoreSelfTest; import org.apache.ignite.cache.store.hibernate.CacheHibernateStoreFactorySelfTest; import org.apache.ignite.cache.store.hibernate.CacheHibernateStoreSessionListenerSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Hibernate integration tests. */ -public class IgniteHibernate5TestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteHibernate5TestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Hibernate5 Integration Test Suite"); // Hibernate L2 cache. 
- suite.addTestSuite(HibernateL2CacheSelfTest.class); - suite.addTestSuite(HibernateL2CacheTransactionalSelfTest.class); - suite.addTestSuite(HibernateL2CacheTransactionalUseSyncSelfTest.class); - suite.addTestSuite(HibernateL2CacheConfigurationSelfTest.class); - suite.addTestSuite(HibernateL2CacheStrategySelfTest.class); - suite.addTestSuite(HibernateL2CacheMultiJvmTest.class); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheTransactionalSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheTransactionalUseSyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheConfigurationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheStrategySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(HibernateL2CacheMultiJvmTest.class)); - suite.addTestSuite(CacheHibernateBlobStoreSelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateBlobStoreSelfTest.class)); - suite.addTestSuite(CacheHibernateBlobStoreNodeRestartTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateBlobStoreNodeRestartTest.class)); - suite.addTestSuite(CacheHibernateStoreSessionListenerSelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateStoreSessionListenerSelfTest.class)); - suite.addTestSuite(CacheHibernateStoreFactorySelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheHibernateStoreFactorySelfTest.class)); return suite; } diff --git a/modules/hibernate-core/src/main/java/org/apache/ignite/cache/hibernate/HibernateCacheProxy.java b/modules/hibernate-core/src/main/java/org/apache/ignite/cache/hibernate/HibernateCacheProxy.java index fdb87f0c20201..d4efb5fd697c9 100644 --- a/modules/hibernate-core/src/main/java/org/apache/ignite/cache/hibernate/HibernateCacheProxy.java +++ b/modules/hibernate-core/src/main/java/org/apache/ignite/cache/hibernate/HibernateCacheProxy.java @@ -39,7 +39,6 @@ import 
org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheEntryPredicate; import org.apache.ignite.internal.processors.cache.GridCacheContext; -import org.apache.ignite.internal.processors.cache.IgniteCacheExpiryPolicy; import org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; import org.apache.ignite.lang.IgniteBiPredicate; @@ -124,10 +123,9 @@ public HibernateKeyTransformer keyTransformer(){ /** {@inheritDoc} */ @Nullable @Override public Object localPeek( Object key, - CachePeekMode[] peekModes, - @Nullable IgniteCacheExpiryPolicy plc + CachePeekMode[] peekModes ) throws IgniteCheckedException { - return delegate.localPeek(keyTransformer.transform(key), peekModes, plc); + return delegate.localPeek(keyTransformer.transform(key), peekModes); } /** {@inheritDoc} */ @@ -647,6 +645,21 @@ public HibernateKeyTransformer keyTransformer(){ return delegate.lostPartitions(); } + /** {@inheritDoc} */ + @Override public void preloadPartition(int part) throws IgniteCheckedException { + delegate.preloadPartition(part); + } + + /** {@inheritDoc} */ + @Override public IgniteInternalFuture preloadPartitionAsync(int part) throws IgniteCheckedException { + return delegate.preloadPartitionAsync(part); + } + + /** {@inheritDoc} */ + @Override public boolean localPreloadPartition(int part) throws IgniteCheckedException { + return delegate.localPreloadPartition(part); + } + /** {@inheritDoc} */ @Nullable @Override public EntryProcessorResult invoke( @Nullable AffinityTopologyVersion topVer, diff --git a/modules/ignored-tests/pom.xml b/modules/ignored-tests/pom.xml index b0b4972b34e80..342d50b239078 100644 --- a/modules/ignored-tests/pom.xml +++ b/modules/ignored-tests/pom.xml @@ -227,7 +227,7 @@ org.ow2.jotm jotm-core - 2.1.9 + ${jotm.version} test diff --git a/modules/indexing/pom.xml b/modules/indexing/pom.xml index 
19b481fe3847e..3644349e9b30c 100644 --- a/modules/indexing/pom.xml +++ b/modules/indexing/pom.xml @@ -66,9 +66,9 @@ - com.h2database - h2 - ${h2.version} + org.apache.ignite + ignite-h2 + ${project.version} diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/cache/query/QueryTable.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/cache/query/QueryTable.java index 54f5f03f556b1..ca6343c9c7e4d 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/cache/query/QueryTable.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/cache/query/QueryTable.java @@ -93,6 +93,7 @@ public String table() { return false; writer.incrementState(); + } return true; @@ -121,6 +122,7 @@ public String table() { return false; reader.incrementState(); + } return reader.afterMessageRead(QueryTable.class); diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/DmlStatementsProcessor.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/DmlStatementsProcessor.java index 31715f1927b57..6ce0c4f5a9aa7 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/DmlStatementsProcessor.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/DmlStatementsProcessor.java @@ -42,6 +42,7 @@ import org.apache.ignite.cache.query.BulkLoadContextCursor; import org.apache.ignite.cache.query.FieldsQueryCursor; import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; @@ -87,6 +88,7 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteInClosure; +import 
org.apache.ignite.lang.IgniteProductVersion; import org.apache.ignite.spi.indexing.IndexingQueryFilter; import org.h2.command.Prepared; import org.h2.command.dml.Delete; @@ -111,6 +113,10 @@ public class DmlStatementsProcessor { /** Default number of attempts to re-run DELETE and UPDATE queries in case of concurrent modifications of values. */ private static final int DFLT_DML_RERUN_ATTEMPTS = 4; + /** The version which changed the anonymous class position of REMOVE closure. */ + private static final IgniteProductVersion RMV_ANON_CLS_POS_CHANGED_SINCE = + IgniteProductVersion.fromString("2.7.0"); + /** Indexing. */ private IgniteH2Indexing idx; @@ -787,7 +793,13 @@ private UpdateResult doDelete(GridCacheContext cctx, Iterable> cursor, i continue; } - sender.add(row.get(0), new ModifyingEntryProcessor(row.get(1), RMV), 0); + Object key = row.get(0); + + ClusterNode node = sender.primaryNodeByKey(key); + + IgniteInClosure> rmvC = getRemoveClosure(node, key); + + sender.add(key, new ModifyingEntryProcessor(row.get(1), rmvC), 0); } sender.flush(); @@ -1097,6 +1109,41 @@ UpdateResult mapDistributedUpdate(String schemaName, PreparedStatement stmt, Sql return updateSqlFields(schemaName, c, GridSqlQueryParser.prepared(stmt), fldsQry, local, filter, cancel); } + /** Remove updater for compatibility with < 2.7.0. Must not be moved around to keep at anonymous position 4. */ + private static final IgniteInClosure> RMV_OLD = + new IgniteInClosure>() { + @Override public void apply(MutableEntry e) { + e.remove(); + } + }; + + /** Remove updater. Must not be moved around to keep at anonymous position 5. */ + private static final IgniteInClosure> RMV = + new IgniteInClosure>() { + @Override public void apply(MutableEntry e) { + e.remove(); + } + }; + + /** + * Returns the remove closure based on the version of the primary node. + * + * @param node Primary node. + * @param key Key. + * @return Remove closure. 
+ */ + protected static IgniteInClosure> getRemoveClosure(ClusterNode node, Object key) { + assert node != null; + assert key != null; + + IgniteInClosure> rmvC = RMV; + + if (node.version().compareTo(RMV_ANON_CLS_POS_CHANGED_SINCE) < 0) + rmvC = RMV_OLD; + + return rmvC; + } + /** * @param schema Schema name. * @param conn Connection. @@ -1301,14 +1348,6 @@ private ModifyingEntryProcessor(Object val, IgniteInClosure> RMV = new IgniteInClosure>() { - /** {@inheritDoc} */ - @Override public void apply(MutableEntry e) { - e.remove(); - } - }; - /** * */ diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/H2TableDescriptor.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/H2TableDescriptor.java index 899bdda0b457a..988ec2f49a9d7 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/H2TableDescriptor.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/H2TableDescriptor.java @@ -203,6 +203,7 @@ H2RowFactory rowFactory(GridH2RowDescriptor rowDesc) { "_key_PK", tbl, true, + false, H2Utils.treeIndexColumns(desc, new ArrayList(2), keyCol, affCol), -1 ); @@ -250,7 +251,7 @@ H2RowFactory rowFactory(GridH2RowDescriptor rowDesc) { // Add explicit affinity key index if nothing alike was found. 
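The `getRemoveClosure` change above keeps two byte-identical remove closures because a serialized anonymous class is identified partly by its position inside the enclosing class: a node older than the version that shifted that position can only deserialize the closure still sitting at the old position. Below is a plain-Java sketch of that version-gating idea; `POS_CHANGED_SINCE`, the integer version encoding, and the `Consumer`-based closures are simplified stand-ins for `IgniteProductVersion`, `ClusterNode`, and `IgniteInClosure`, not the actual Ignite types.

```java
import java.util.function.Consumer;

// Sketch of the version-gated closure selection in
// DmlStatementsProcessor.getRemoveClosure(...): pick the closure instance
// that the primary node's software version is able to deserialize.
public class RemoveClosureSketch {
    /** Stand-in for RMV_ANON_CLS_POS_CHANGED_SINCE (2.7.0), encoded as an int. */
    static final int POS_CHANGED_SINCE = 270;

    /** Closure kept for pre-2.7.0 nodes (old anonymous-class position). */
    static final Consumer<StringBuilder> RMV_OLD = sb -> sb.setLength(0);

    /** Functionally identical closure for 2.7.0+ nodes (new position). */
    static final Consumer<StringBuilder> RMV = sb -> sb.setLength(0);

    /** Choose the closure a primary node of the given version can handle. */
    static Consumer<StringBuilder> removeClosure(int nodeVer) {
        return nodeVer < POS_CHANGED_SINCE ? RMV_OLD : RMV;
    }

    public static void main(String[] args) {
        // A 2.6.x primary gets the legacy instance, a 2.7.0+ primary the new one.
        System.out.println(removeClosure(260) == RMV_OLD);
        System.out.println(removeClosure(270) == RMV);
    }
}
```

In the real patch the comparison uses `node.version().compareTo(RMV_ANON_CLS_POS_CHANGED_SINCE) < 0`, and the chosen closure is then wrapped in a `ModifyingEntryProcessor` and sent to the primary node by the streamer.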
if (affCol != null && !affIdxFound) { - idxs.add(idx.createSortedIndex("AFFINITY_KEY", tbl, false, + idxs.add(idx.createSortedIndex("AFFINITY_KEY", tbl, false, true, H2Utils.treeIndexColumns(desc, new ArrayList(2), affCol, keyCol), -1)); } @@ -300,7 +301,7 @@ public GridH2IndexBase createUserIndex(GridQueryIndexDescriptor idxDesc) { if (idxDesc.type() == QueryIndexType.SORTED) { cols = H2Utils.treeIndexColumns(desc, cols, keyCol, affCol); - return idx.createSortedIndex(idxDesc.name(), tbl, false, cols, idxDesc.inlineSize()); + return idx.createSortedIndex(idxDesc.name(), tbl, false, false, cols, idxDesc.inlineSize()); } else if (idxDesc.type() == QueryIndexType.GEOSPATIAL) return H2Utils.createSpatialIndex(tbl, idxDesc.name(), cols.toArray(new IndexColumn[cols.size()])); diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IgniteH2Indexing.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IgniteH2Indexing.java index 4b855a75d79d5..c0c20297394f3 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IgniteH2Indexing.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IgniteH2Indexing.java @@ -44,7 +44,6 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import java.util.regex.Pattern; -import javax.cache.Cache; import javax.cache.CacheException; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteDataStreamer; @@ -53,7 +52,6 @@ import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.query.FieldsQueryCursor; import org.apache.ignite.cache.query.QueryCancelledException; -import org.apache.ignite.cache.query.QueryCursor; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.cache.query.annotations.QuerySqlFunction; @@ -61,10 +59,10 @@ import 
org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.GridTopic; import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.pagemem.store.IgnitePageStoreManager; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.CacheEntryImpl; -import org.apache.ignite.internal.processors.cache.CacheObjectUtils; import org.apache.ignite.internal.processors.cache.CacheObjectValueContext; +import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.QueryCursorImpl; @@ -78,13 +76,13 @@ import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO; import org.apache.ignite.internal.processors.cache.query.CacheQueryPartitionInfo; -import org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager; import org.apache.ignite.internal.processors.cache.query.GridCacheQueryMarshallable; import org.apache.ignite.internal.processors.cache.query.GridCacheTwoStepQuery; import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.cache.query.QueryTable; import org.apache.ignite.internal.processors.cache.query.SqlFieldsQueryEx; import org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter; +import org.apache.ignite.internal.processors.odbc.SqlStateCode; import org.apache.ignite.internal.processors.query.CacheQueryObjectValueContext; import org.apache.ignite.internal.processors.query.GridQueryCacheObjectsIterator; import org.apache.ignite.internal.processors.query.GridQueryCancel; @@ -118,7 +116,6 @@ import org.apache.ignite.internal.processors.query.h2.opt.GridH2IndexBase; 
import org.apache.ignite.internal.processors.query.h2.opt.GridH2PlainRowFactory; import org.apache.ignite.internal.processors.query.h2.opt.GridH2QueryContext; -import org.apache.ignite.internal.processors.query.h2.opt.GridH2Row; import org.apache.ignite.internal.processors.query.h2.opt.GridH2RowDescriptor; import org.apache.ignite.internal.processors.query.h2.opt.GridH2Table; import org.apache.ignite.internal.processors.query.h2.sql.GridSqlAlias; @@ -131,12 +128,14 @@ import org.apache.ignite.internal.processors.query.h2.sys.SqlSystemTableEngine; import org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemView; import org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewBaselineNodes; +import org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewCaches; import org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewNodeAttributes; import org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewNodeMetrics; import org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewNodes; import org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor; import org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor; import org.apache.ignite.internal.processors.query.h2.twostep.MapQueryLazyWorker; +import org.apache.ignite.internal.processors.query.h2.twostep.PartitionReservationManager; import org.apache.ignite.internal.processors.query.h2.twostep.msg.GridH2QueryRequest; import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitor; import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorClosure; @@ -168,6 +167,8 @@ import org.apache.ignite.internal.util.typedef.internal.LT; import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.internal.util.worker.GridWorker; +import org.apache.ignite.internal.util.worker.GridWorkerFuture; 
import org.apache.ignite.lang.IgniteBiClosure; import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.lang.IgniteFuture; @@ -176,6 +177,7 @@ import org.apache.ignite.marshaller.Marshaller; import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.plugin.security.SecurityPermission; import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.spi.indexing.IndexingQueryFilter; import org.apache.ignite.spi.indexing.IndexingQueryFilterImpl; @@ -204,7 +206,6 @@ import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.mvccEnabled; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.txStart; -import static org.apache.ignite.internal.processors.cache.query.GridCacheQueryType.SQL; import static org.apache.ignite.internal.processors.cache.query.GridCacheQueryType.SQL_FIELDS; import static org.apache.ignite.internal.processors.cache.query.GridCacheQueryType.TEXT; import static org.apache.ignite.internal.processors.query.QueryUtils.KEY_FIELD_NAME; @@ -245,6 +246,7 @@ public class IgniteH2Indexing implements GridQueryIndexing { System.setProperty("h2.serializeJavaObject", "false"); System.setProperty("h2.objectCacheMaxPerElementSize", "0"); // Avoid ValueJavaObject caching. System.setProperty("h2.optimizeTwoEquals", "false"); // Makes splitter fail on subqueries in WHERE. + System.setProperty("h2.dropRestrict", "false"); // Drop schema with cascade semantics. } /** Default DB options. */ @@ -377,6 +379,9 @@ public class IgniteH2Indexing implements GridQueryIndexing { /** */ private final ConcurrentMap dataTables = new ConcurrentHashMap<>(); + /** Partition reservation manager. 
*/ + private PartitionReservationManager partReservationMgr; + /** */ private volatile GridBoundedConcurrentLinkedHashMap twoStepCache = new GridBoundedConcurrentLinkedHashMap<>(TWO_STEP_QRY_CACHE_SIZE); @@ -886,15 +891,7 @@ private void addInitialUserIndex(String schemaName, H2TableDescriptor desc, Grid // Populate index with existing cache data. final GridH2RowDescriptor rowDesc = h2Tbl.rowDescriptor(); - SchemaIndexCacheVisitorClosure clo = new SchemaIndexCacheVisitorClosure() { - @Override public void apply(CacheDataRow row) throws IgniteCheckedException { - GridH2Row h2Row = rowDesc.createRow(row); - - h2Idx.putx(h2Row); - } - }; - - cacheVisitor.visit(clo); + cacheVisitor.visit(new IndexBuildClosure(rowDesc, h2Idx)); // At this point index is in consistent state, promote it through H2 SQL statement, so that cached // prepared statements are re-built. @@ -987,11 +984,13 @@ private void executeSql(String schemaName, String sql) throws IgniteCheckedExcep * @param name Index name, * @param tbl Table. * @param pk Primary key flag. + * @param affinityKey Affinity key flag. * @param cols Columns. * @param inlineSize Index inline size. * @return Index. 
*/ - GridH2IndexBase createSortedIndex(String name, GridH2Table tbl, boolean pk, List cols, + GridH2IndexBase createSortedIndex(String name, GridH2Table tbl, boolean pk, boolean affinityKey, + List cols, int inlineSize) { try { GridCacheContext cctx = tbl.cache(); @@ -1003,7 +1002,7 @@ GridH2IndexBase createSortedIndex(String name, GridH2Table tbl, boolean pk, List H2RowCache cache = rowCache.forGroup(cctx.groupId()); - return new H2TreeIndex(cctx, cache, tbl, name, pk, cols, inlineSize, segments); + return new H2TreeIndex(cctx, cache, tbl, name, pk, affinityKey, cols, inlineSize, segments); } catch (IgniteCheckedException e) { throw new IgniteException(e); @@ -1118,6 +1117,14 @@ else if (DdlStatementsProcessor.isDdlStatement(p)) { throw new IgniteSQLException("SELECT FOR UPDATE query requires transactional " + "cache with MVCC enabled.", IgniteQueryErrorCode.UNSUPPORTED_OPERATION); + if (this.ctx.security().enabled()) { + GridSqlQueryParser parser = new GridSqlQueryParser(false); + + parser.parse(p); + + checkSecurity(parser.cacheIds()); + } + GridNearTxSelectForUpdateFuture sfuFut = null; int opTimeout = qryTimeout; @@ -1556,111 +1563,6 @@ public void bindParameters(PreparedStatement stmt, return cursor; } - /** {@inheritDoc} */ - @SuppressWarnings("unchecked") - @Override public QueryCursor> queryLocalSql(String schemaName, String cacheName, - final SqlQuery qry, final IndexingQueryFilter filter, final boolean keepBinary) throws IgniteCheckedException { - String type = qry.getType(); - String sqlQry = qry.getSql(); - String alias = qry.getAlias(); - Object[] params = qry.getArgs(); - - GridQueryCancel cancel = new GridQueryCancel(); - - final GridCloseableIterator> i = queryLocalSql(schemaName, cacheName, sqlQry, alias, - F.asList(params), type, filter, cancel); - - return new QueryCursorImpl<>(new Iterable>() { - @SuppressWarnings("NullableProblems") - @Override public Iterator> iterator() { - return new ClIter>() { - @Override public void close() throws 
Exception { - i.close(); - } - - @Override public boolean hasNext() { - return i.hasNext(); - } - - @Override public Cache.Entry next() { - IgniteBiTuple t = i.next(); - - K key = (K)CacheObjectUtils.unwrapBinaryIfNeeded(objectContext(), t.get1(), keepBinary, false); - V val = (V)CacheObjectUtils.unwrapBinaryIfNeeded(objectContext(), t.get2(), keepBinary, false); - - return new CacheEntryImpl<>(key, val); - } - - @Override public void remove() { - throw new UnsupportedOperationException(); - } - }; - } - }, cancel); - } - - /** - * Executes regular query. - * - * @param schemaName Schema name. - * @param cacheName Cache name. - * @param qry Query. - * @param alias Table alias. - * @param params Query parameters. - * @param type Query return type. - * @param filter Cache name and key filter. - * @param cancel Cancel object. - * @return Queried rows. - * @throws IgniteCheckedException If failed. - */ - @SuppressWarnings("unchecked") - GridCloseableIterator> queryLocalSql(String schemaName, String cacheName, - final String qry, String alias, @Nullable final Collection params, String type, - final IndexingQueryFilter filter, GridQueryCancel cancel) throws IgniteCheckedException { - final H2TableDescriptor tbl = tableDescriptor(schemaName, cacheName, type); - - if (tbl == null) - throw new IgniteSQLException("Failed to find SQL table for type: " + type, - IgniteQueryErrorCode.TABLE_NOT_FOUND); - - String sql = generateQuery(qry, alias, tbl); - - Connection conn = connectionForThread(tbl.schemaName()); - - H2Utils.setupConnection(conn, false, false); - - GridH2QueryContext qctx = new GridH2QueryContext(nodeId, nodeId, 0, LOCAL).filter(filter) - .distributedJoinMode(OFF); - - PreparedStatement stmt = preparedStatementWithParams(conn, sql, params, true); - - MvccQueryTracker mvccTracker = mvccTracker(stmt, false); - - if (mvccTracker != null) - qctx.mvccSnapshot(mvccTracker.snapshot()); - - GridH2QueryContext.set(qctx); - - GridRunningQueryInfo run = new 
GridRunningQueryInfo(qryIdGen.incrementAndGet(), qry, SQL, schemaName, - U.currentTimeMillis(), null, true); - - runs.put(run.id(), run); - - try { - ResultSet rs = executeSqlQueryWithTimer(stmt, conn, sql, params, 0, cancel); - - return new H2KeyValueIterator(rs); - } - finally { - GridH2QueryContext.clearThreadLocal(); - - if (mvccTracker != null) - mvccTracker.onDone(); - - runs.remove(run.id()); - } - } - /** * Initialises MVCC filter and returns MVCC query tracker if needed. * @param stmt Prepared statement. @@ -1742,8 +1644,6 @@ else if (mvccEnabled != cctx.mvccEnabled()) PreparedStatementEx stmtEx = stmt.unwrap(PreparedStatementEx.class); if (mvccEnabled) { - assert mvccCacheId != null; - stmtEx.putMeta(MVCC_CACHE_ID, mvccCacheId); stmtEx.putMeta(MVCC_STATE, Boolean.TRUE); } @@ -1786,7 +1686,7 @@ private Iterable> runQueryTwoStep( final MvccQueryTracker tracker = mvccTracker == null && qry.mvccEnabled() ? MvccUtils.mvccTracker(ctx.cache().context().cacheContext(qry.cacheIds().get(0)), startTx) : mvccTracker; - GridNearTxLocal tx = tx(ctx); + GridNearTxLocal tx = tracker != null ? 
tx(ctx) : null; if (qry.forUpdate()) qry.forUpdate(checkActive(tx) != null); @@ -1796,8 +1696,16 @@ private Iterable> runQueryTwoStep( return new Iterable>() { @SuppressWarnings("NullableProblems") @Override public Iterator> iterator() { - return rdcQryExec.query(schemaName, qry, keepCacheObj, enforceJoinOrder, opTimeout, - cancel, params, parts, lazy, tracker); + try { + return rdcQryExec.query(schemaName, qry, keepCacheObj, enforceJoinOrder, opTimeout, + cancel, params, parts, lazy, tracker); + } + catch (Throwable e) { + if (tracker != null) + tracker.onDone(); + + throw e; + } } }; } @@ -1828,9 +1736,9 @@ UpdateResult runDistributedUpdate( } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") - @Override public QueryCursor> queryDistributedSql(String schemaName, String cacheName, - SqlQuery qry, boolean keepBinary) { + @Override public SqlFieldsQuery generateFieldsQuery(String cacheName, SqlQuery qry) { + String schemaName = schema(cacheName); + String type = qry.getType(); H2TableDescriptor tblDesc = tableDescriptor(schemaName, cacheName, type); @@ -1848,72 +1756,101 @@ UpdateResult runDistributedUpdate( throw new IgniteException(e); } - SqlFieldsQuery fqry = new SqlFieldsQuery(sql); + SqlFieldsQuery res = new SqlFieldsQuery(sql); - fqry.setArgs(qry.getArgs()); - fqry.setPageSize(qry.getPageSize()); - fqry.setDistributedJoins(qry.isDistributedJoins()); - fqry.setPartitions(qry.getPartitions()); - fqry.setLocal(qry.isLocal()); + res.setArgs(qry.getArgs()); + res.setDistributedJoins(qry.isDistributedJoins()); + res.setLocal(qry.isLocal()); + res.setPageSize(qry.getPageSize()); + res.setPartitions(qry.getPartitions()); + res.setReplicatedOnly(qry.isReplicatedOnly()); + res.setSchema(schemaName); + res.setSql(sql); if (qry.getTimeout() > 0) - fqry.setTimeout(qry.getTimeout(), TimeUnit.MILLISECONDS); + res.setTimeout(qry.getTimeout(), TimeUnit.MILLISECONDS); - final QueryCursor> res = - querySqlFields(schemaName, fqry, null, keepBinary, true, null, null).get(0); 
+ return res; + } - final Iterable> converted = new Iterable>() { - @Override public Iterator> iterator() { - final Iterator> iter0 = res.iterator(); + /** + * Prepares statement for query. + * + * @param qry Query string. + * @param tableAlias table alias. + * @param tbl Table to use. + * @return Prepared statement. + * @throws IgniteCheckedException In case of error. + */ + private static String generateQuery(String qry, String tableAlias, H2TableDescriptor tbl) + throws IgniteCheckedException { + assert tbl != null; - return new Iterator>() { - @Override public boolean hasNext() { - return iter0.hasNext(); - } + final String qry0 = qry; - @Override public Cache.Entry next() { - List l = iter0.next(); + String t = tbl.fullTableName(); - return new CacheEntryImpl<>((K)l.get(0), (V)l.get(1)); - } + String from = " "; - @Override public void remove() { - throw new UnsupportedOperationException(); - } - }; - } - }; + qry = qry.trim(); - // No metadata for SQL queries. - return new QueryCursorImpl>(converted) { - @Override public void close() { - res.close(); + String upper = qry.toUpperCase(); + + if (upper.startsWith("SELECT")) { + qry = qry.substring(6).trim(); + + final int star = qry.indexOf('*'); + + if (star == 0) + qry = qry.substring(1).trim(); + else if (star > 0) { + if (F.eq('.', qry.charAt(star - 1))) { + t = qry.substring(0, star - 1); + + qry = qry.substring(star + 1).trim(); + } + else + throw new IgniteCheckedException("Invalid query (missing alias before asterisk): " + qry0); } - }; + else + throw new IgniteCheckedException("Only queries starting with 'SELECT *' and 'SELECT alias.*' " + + "are supported (rewrite your query or use SqlFieldsQuery instead): " + qry0); + + upper = qry.toUpperCase(); + } + + if (!upper.startsWith("FROM")) + from = " FROM " + t + (tableAlias != null ? " as " + tableAlias : "") + + (upper.startsWith("WHERE") || upper.startsWith("ORDER") || upper.startsWith("LIMIT") ? 
+ " : " WHERE "); + + if(tableAlias != null) + t = tableAlias; + + qry = "SELECT " + t + "." + KEY_FIELD_NAME + ", " + t + "." + VAL_FIELD_NAME + from + qry; + + return qry; } /** - * Try executing query using native facilities. + * Determines if a passed query can be executed natively. * * @param schemaName Schema name. * @param qry Query. - * @param cliCtx Client context, or {@code null} if not applicable. - * @return Result or {@code null} if cannot parse/process this query. + * @return Command or {@code null} if cannot parse this query. */ - @SuppressWarnings({"ConstantConditions", "StatementWithEmptyBody"}) - private List>> tryQueryDistributedSqlFieldsNative(String schemaName, SqlFieldsQuery qry, - @Nullable SqlClientContext cliCtx) { + @Nullable private SqlCommand parseQueryNative(String schemaName, SqlFieldsQuery qry) { // Heuristic check for fast return. if (!INTERNAL_CMD_RE.matcher(qry.getSql().trim()).find()) return null; - // Parse. - SqlCommand cmd; - try { SqlParser parser = new SqlParser(schemaName, qry.getSql()); - cmd = parser.nextCommand(); + SqlCommand cmd = parser.nextCommand(); + + // TODO: Support transaction commands in multi-statement queries. + // https://issues.apache.org/jira/browse/IGNITE-10063 // No support for multiple commands for now. if (parser.nextCommand() != null) @@ -1931,6 +1868,8 @@ private List>> tryQueryDistributedSqlFieldsNative(Stri || cmd instanceof SqlAlterUserCommand || cmd instanceof SqlDropUserCommand)) return null; + + return cmd; } catch (SqlStrictParseException e) { throw new IgniteSQLException(e.getMessage(), IgniteQueryErrorCode.PARSING, e); @@ -1951,7 +1890,18 @@ private List>> tryQueryDistributedSqlFieldsNative(Stri throw new IgniteSQLException("Failed to parse DDL statement: " + qry.getSql() + ": " + e.getMessage(), code, e); } + } + /** + * Executes a query natively. + * + * @param qry Query. + * @param cmd Parsed command corresponding to query. + * @param cliCtx Client context, or {@code null} if not applicable. 
+ * @return Result cursors. + */ + private List>> queryDistributedSqlFieldsNative(SqlFieldsQuery qry, SqlCommand cmd, + @Nullable SqlClientContext cliCtx) { // Execute. try { if (cmd instanceof SqlCreateIndexCommand @@ -1981,7 +1931,8 @@ else if (cmd instanceof SqlSetStreamingCommand) { return Collections.singletonList(H2Utils.zeroCursor()); } catch (IgniteCheckedException e) { - throw new IgniteSQLException("Failed to execute DDL statement [stmt=" + qry.getSql() + ']', e); + throw new IgniteSQLException("Failed to execute DDL statement [stmt=" + qry.getSql() + + ", err=" + e.getMessage() + ']', e); } } @@ -2006,16 +1957,16 @@ private void checkQueryType(SqlFieldsQuery qry, boolean isQry) { * @throws IgniteCheckedException if failed. */ private void processTxCommand(SqlCommand cmd, SqlFieldsQuery qry) throws IgniteCheckedException { - if (!mvccEnabled(ctx)) - throw new IgniteSQLException("MVCC must be enabled in order to invoke transactional operation: " + - qry.getSql(), IgniteQueryErrorCode.MVCC_DISABLED); - NestedTxMode nestedTxMode = qry instanceof SqlFieldsQueryEx ? ((SqlFieldsQueryEx)qry).getNestedTxMode() : NestedTxMode.DEFAULT; GridNearTxLocal tx = tx(ctx); if (cmd instanceof SqlBeginTransactionCommand) { + if (!mvccEnabled(ctx)) + throw new IgniteSQLException("MVCC must be enabled in order to start transaction.", + IgniteQueryErrorCode.MVCC_DISABLED); + if (tx != null) { if (nestedTxMode == null) nestedTxMode = NestedTxMode.DEFAULT; @@ -2067,7 +2018,8 @@ else if (cmd instanceof SqlCommitTransactionCommand) { @SuppressWarnings("ThrowFromFinallyBlock") private void doCommit(@NotNull GridNearTxLocal tx) throws IgniteCheckedException { try { - if (!tx.isRollbackOnly()) + // TODO: Why checking for rollback only? 
+ //if (!tx.isRollbackOnly()) tx.commit(); } finally { @@ -2112,10 +2064,19 @@ private void closeTx(@NotNull GridNearTxLocal tx) throws IgniteCheckedException boolean mvccEnabled = mvccEnabled(ctx), startTx = autoStartTx(qry); try { - List>> res = tryQueryDistributedSqlFieldsNative(schemaName, qry, cliCtx); + SqlCommand nativeCmd = parseQueryNative(schemaName, qry); - if (res != null) - return res; + if (!(nativeCmd instanceof SqlCommitTransactionCommand || nativeCmd instanceof SqlRollbackTransactionCommand) + && !ctx.state().publicApiActiveState(true)) { + throw new IgniteException("Can not perform the operation because the cluster is inactive. Note, that " + + "the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes " + + "join the cluster. To activate the cluster call Ignite.active(true)."); + } + + if (nativeCmd != null) + return queryDistributedSqlFieldsNative(qry, nativeCmd, cliCtx); + + List>> res; { // First, let's check if we already have a two-step query for this statement... @@ -2184,8 +2145,9 @@ private void closeTx(@NotNull GridNearTxLocal tx) throws IgniteCheckedException res.addAll(doRunPrepared(schemaName, prepared, newQry, twoStepQry, meta, keepBinary, startTx, tracker, cancel)); + // We cannot cache two-step query for multiple statements query except the last statement if (parseRes.twoStepQuery() != null && parseRes.twoStepQueryKey() != null && - !parseRes.twoStepQuery().explain()) + !parseRes.twoStepQuery().explain() && remainingSql == null) twoStepCache.putIfAbsent(parseRes.twoStepQueryKey(), new H2TwoStepCachedQuery(meta, twoStepQry.copy())); } @@ -2195,8 +2157,12 @@ private void closeTx(@NotNull GridNearTxLocal tx) throws IgniteCheckedException catch (RuntimeException | Error e) { GridNearTxLocal tx; - if (mvccEnabled && (tx = tx(ctx)) != null) + if (mvccEnabled && (tx = tx(ctx)) != null && + (!(e instanceof IgniteSQLException) || /* Parsing errors should not rollback Tx. 
*/ + ((IgniteSQLException)e).sqlState() != SqlStateCode.PARSING_EXCEPTION) ) { + tx.setRollbackOnly(); + } throw e; } @@ -2283,6 +2249,9 @@ private List>> doRunPrepared(String schemaNa checkQueryType(qry, true); + if (ctx.security().enabled()) + checkSecurity(twoStepQry.cacheIds()); + return Collections.singletonList(doRunDistributedQuery(schemaName, qry, twoStepQry, meta, keepBinary, startTx, tracker, cancel)); } @@ -2297,6 +2266,23 @@ private List>> doRunPrepared(String schemaNa } } + /** + * Check security access for caches. + * + * @param cacheIds Cache IDs. + */ + private void checkSecurity(Collection cacheIds) { + if (F.isEmpty(cacheIds)) + return; + + for (Integer cacheId : cacheIds) { + DynamicCacheDescriptor desc = ctx.cache().cacheDescriptor(cacheId); + + if (desc != null) + ctx.security().authorize(desc.cacheName(), SecurityPermission.CACHE_READ, null); + } + } + /** * Parse and split query if needed, cache either two-step query or statement. * @param schemaName Schema name. @@ -2368,7 +2354,7 @@ private ParsingResult parseAndSplit(String schemaName, SqlFieldsQuery qry, int f // Legit assertion - we have H2 query flag above. assert parsedStmt instanceof GridSqlQuery; - loc = parser.isLocalQuery(qry.isReplicatedOnly()); + loc = parser.isLocalQuery(); } if (loc) { @@ -2698,64 +2684,6 @@ else if (cctx.config().getQueryParallelism() != expectedParallelism) { } } - /** - * Prepares statement for query. - * - * @param qry Query string. - * @param tableAlias table alias. - * @param tbl Table to use. - * @return Prepared statement. - * @throws IgniteCheckedException In case of error. 
- */ - private String generateQuery(String qry, String tableAlias, H2TableDescriptor tbl) throws IgniteCheckedException { - assert tbl != null; - - final String qry0 = qry; - - String t = tbl.fullTableName(); - - String from = " "; - - qry = qry.trim(); - - String upper = qry.toUpperCase(); - - if (upper.startsWith("SELECT")) { - qry = qry.substring(6).trim(); - - final int star = qry.indexOf('*'); - - if (star == 0) - qry = qry.substring(1).trim(); - else if (star > 0) { - if (F.eq('.', qry.charAt(star - 1))) { - t = qry.substring(0, star - 1); - - qry = qry.substring(star + 1).trim(); - } - else - throw new IgniteCheckedException("Invalid query (missing alias before asterisk): " + qry0); - } - else - throw new IgniteCheckedException("Only queries starting with 'SELECT *' and 'SELECT alias.*' " + - "are supported (rewrite your query or use SqlFieldsQuery instead): " + qry0); - - upper = qry.toUpperCase(); - } - - if (!upper.startsWith("FROM")) - from = " FROM " + t + (tableAlias != null ? " as " + tableAlias : "") + - (upper.startsWith("WHERE") || upper.startsWith("ORDER") || upper.startsWith("LIMIT") ? - " " : " WHERE "); - - if(tableAlias != null) - t = tableAlias; - - qry = "SELECT " + t + "." + KEY_FIELD_NAME + ", " + t + "." + VAL_FIELD_NAME + from + qry; - - return qry; - } - - /** * Registers new class description. * @@ -3065,33 +2993,97 @@ public ThreadLocalObjectPool.Reusable detach() { return reusableConnection; } + /** {@inheritDoc} */ + @Override public IgniteInternalFuture rebuildIndexesFromHash(GridCacheContext cctx) { + // No data in fresh in-memory cache. + if (!cctx.group().persistenceEnabled()) + return null; + + IgnitePageStoreManager pageStore = cctx.shared().pageStore(); + + assert pageStore != null; + + SchemaIndexCacheVisitorClosure clo; + + if (!pageStore.hasIndexStore(cctx.groupId())) { + // If there is no index store, rebuild all indexes. 
+ clo = new IndexRebuildFullClosure(cctx.queries(), cctx.mvccEnabled()); + } + else { + // Otherwise iterate over tables looking for missing indexes. + IndexRebuildPartialClosure clo0 = new IndexRebuildPartialClosure(); + + for (H2TableDescriptor tblDesc : tables(cctx.name())) { + assert tblDesc.table() != null; + + tblDesc.table().collectIndexesForPartialRebuild(clo0); + } + + if (clo0.hasIndexes()) + clo = clo0; + else + return null; + } + + // Closure prepared, do rebuild. + final GridWorkerFuture fut = new GridWorkerFuture<>(); + + markIndexRebuild(cctx.name(), true); + + GridWorker worker = new GridWorker(ctx.igniteInstanceName(), "index-rebuild-worker-" + cctx.name(), log) { + @Override protected void body() { + try { + rebuildIndexesFromHash0(cctx, clo); + + markIndexRebuild(cctx.name(), false); + + fut.onDone(); + } + catch (Exception e) { + fut.onDone(e); + } + catch (Throwable e) { + U.error(log, "Failed to rebuild indexes for cache: " + cctx.name(), e); + + fut.onDone(e); + + throw e; + } + } + }; + + fut.setWorker(worker); + + ctx.getExecutorService().execute(worker); + + return fut; + } + /** - * Rebuild indexes from hash index. + * Do index rebuild. * - * @param cacheName Cache name. + * @param cctx Cache context. + * @param clo Closure. * @throws IgniteCheckedException If failed. 
*/ - @Override public void rebuildIndexesFromHash(String cacheName) throws IgniteCheckedException { - int cacheId = CU.cacheId(cacheName); - - GridCacheContext cctx = ctx.cache().context().cacheContext(cacheId); - - final GridCacheQueryManager qryMgr = cctx.queries(); - + protected void rebuildIndexesFromHash0(GridCacheContext cctx, SchemaIndexCacheVisitorClosure clo) + throws IgniteCheckedException { SchemaIndexCacheVisitor visitor = new SchemaIndexCacheVisitorImpl(cctx); - visitor.visit(new RebuildIndexFromHashClosure(qryMgr, cctx.mvccEnabled())); - - for (H2TableDescriptor tblDesc : tables(cacheName)) - tblDesc.table().markRebuildFromHashInProgress(false); + visitor.visit(clo); } - /** {@inheritDoc} */ - @Override public void markForRebuildFromHash(String cacheName) { + /** + * Mark tables for index rebuild, so that their indexes are not used. + * + * @param cacheName Cache name. + * @param val Value. + */ + private void markIndexRebuild(String cacheName, boolean val) { for (H2TableDescriptor tblDesc : tables(cacheName)) { assert tblDesc.table() != null; - tblDesc.table().markRebuildFromHashInProgress(true); + tblDesc.table().markRebuildFromHashInProgress(val); } } @@ -3138,6 +3130,8 @@ public GridReduceQueryExecutor reduceQueryExecutor() { org.h2.Driver.load(); + partReservationMgr = new PartitionReservationManager(ctx); + try { if (getString(IGNITE_H2_DEBUG_CONSOLE) != null) { Connection c = DriverManager.getConnection(dbUrl); @@ -3266,6 +3260,7 @@ public Collection systemViews(GridKernalContext ctx) { views.add(new SqlSystemViewNodeAttributes(ctx)); views.add(new SqlSystemViewBaselineNodes(ctx)); views.add(new SqlSystemViewNodeMetrics(ctx)); + views.add(new SqlSystemViewCaches(ctx)); return views; } @@ -3499,10 +3494,11 @@ private void createSqlFunctions(String schema, Class[] clss) throws IgniteChe String schemaName = schema(cacheName); + partReservationMgr.onCacheStop(cacheName); + H2Schema schema = schemas.get(schemaName); if (schema != null) { - 
mapQryExec.onCacheStop(cacheName); dmlProc.onCacheStop(cacheName); // Remove this mapping only after callback to DML proc - it needs that mapping internally @@ -3712,6 +3708,13 @@ private int bindPartitionInfoParameter(CacheQueryPartitionInfo partInfo, Object[ return conns; } + /** + * @return Partition reservation manager. + */ + public PartitionReservationManager partitionReservationManager() { + return partReservationMgr; + } + /** * Collect cache identifiers from two-step query. * @@ -3764,11 +3767,4 @@ private boolean hasSystemViews(GridCacheTwoStepQuery twoStepQry) { return false; } - - /** - * Closeable iterator. - */ - private interface ClIter extends AutoCloseable, Iterator { - // No-op. - } } diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexBuildClosure.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexBuildClosure.java new file mode 100644 index 0000000000000..8d1923f2a2a0c --- /dev/null +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexBuildClosure.java @@ -0,0 +1,54 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.query.h2; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; +import org.apache.ignite.internal.processors.query.h2.opt.GridH2IndexBase; +import org.apache.ignite.internal.processors.query.h2.opt.GridH2Row; +import org.apache.ignite.internal.processors.query.h2.opt.GridH2RowDescriptor; +import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorClosure; + +/** + * Index build closure. + */ +public class IndexBuildClosure implements SchemaIndexCacheVisitorClosure { + /** Row descriptor. */ + private final GridH2RowDescriptor rowDesc; + + /** Index. */ + private final GridH2IndexBase idx; + + /** + * Constructor. + * + * @param rowDesc Row descriptor. + * @param idx Target index. + */ + public IndexBuildClosure(GridH2RowDescriptor rowDesc, GridH2IndexBase idx) { + this.rowDesc = rowDesc; + this.idx = idx; + } + + /** {@inheritDoc} */ + @Override public void apply(CacheDataRow row) throws IgniteCheckedException { + GridH2Row row0 = rowDesc.createRow(row); + + idx.putx(row0); + } +} diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/RebuildIndexFromHashClosure.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexRebuildFullClosure.java similarity index 89% rename from modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/RebuildIndexFromHashClosure.java rename to modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexRebuildFullClosure.java index b635eaca8892b..8018839a6ce7b 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/RebuildIndexFromHashClosure.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexRebuildFullClosure.java @@ -22,8 +22,10 @@ import 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager; import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorClosure; -/** */ -class RebuildIndexFromHashClosure implements SchemaIndexCacheVisitorClosure { +/** + * Closure to rebuild all indexes. + */ +public class IndexRebuildFullClosure implements SchemaIndexCacheVisitorClosure { /** */ private final GridCacheQueryManager qryMgr; @@ -34,7 +36,7 @@ class RebuildIndexFromHashClosure implements SchemaIndexCacheVisitorClosure { * @param qryMgr Query manager. * @param mvccEnabled MVCC status flag. */ - RebuildIndexFromHashClosure(GridCacheQueryManager qryMgr, boolean mvccEnabled) { + public IndexRebuildFullClosure(GridCacheQueryManager qryMgr, boolean mvccEnabled) { this.qryMgr = qryMgr; this.mvccEnabled = mvccEnabled; } diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexRebuildPartialClosure.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexRebuildPartialClosure.java new file mode 100644 index 0000000000000..2672f0627cdb8 --- /dev/null +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IndexRebuildPartialClosure.java @@ -0,0 +1,76 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.query.h2; + +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; +import org.apache.ignite.internal.processors.query.h2.opt.GridH2IndexBase; +import org.apache.ignite.internal.processors.query.h2.opt.GridH2Row; +import org.apache.ignite.internal.processors.query.h2.opt.GridH2Table; +import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorClosure; + +import java.util.Collection; +import java.util.Collections; +import java.util.IdentityHashMap; +import java.util.Map; + +/** + * Closure to rebuild some cache indexes. + */ +public class IndexRebuildPartialClosure implements SchemaIndexCacheVisitorClosure { + /** Indexes. */ + private final Map> tblIdxs = new IdentityHashMap<>(); + + /** {@inheritDoc} */ + @Override public void apply(CacheDataRow row) throws IgniteCheckedException { + assert hasIndexes(); + + for (Map.Entry> tblIdxEntry : tblIdxs.entrySet()) { + GridH2Table tbl = tblIdxEntry.getKey(); + + GridH2Row row0 = tbl.rowDescriptor().createRow(row); + + for (GridH2IndexBase idx : tblIdxEntry.getValue()) + idx.putx(row0); + } + } + + /** + * @param idx Index to be rebuilt. + */ + public void addIndex(GridH2Table tbl, GridH2IndexBase idx) { + Collection idxs = tblIdxs.get(tbl); + + if (idxs == null) { + idxs = Collections.newSetFromMap(new IdentityHashMap<>()); + + idxs.add(idx); + + tblIdxs.put(tbl, idxs); + } + + idxs.add(idx); + } + + /** + * @return {@code True} if there is at least one index to rebuild. 
+ */ + public boolean hasIndexes() { + return !tblIdxs.isEmpty(); + } +} diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2Tree.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2Tree.java index ce40df0039c7f..34b7fa39cf8b9 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2Tree.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2Tree.java @@ -17,10 +17,15 @@ package org.apache.ignite.internal.processors.query.h2.database; +import java.util.ArrayList; import java.util.Comparator; import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; +import java.util.stream.Collectors; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.internal.pagemem.PageMemory; import org.apache.ignite.internal.pagemem.wal.IgniteWriteAheadLogManager; import org.apache.ignite.internal.processors.cache.mvcc.MvccUtils; @@ -43,6 +48,8 @@ import org.h2.value.Value; import org.jetbrains.annotations.Nullable; +import static org.apache.ignite.internal.processors.query.h2.database.InlineIndexHelper.CANT_BE_COMPARE; + /** */ public abstract class H2Tree extends BPlusTree<GridH2SearchRow, GridH2Row> { @@ -64,6 +71,21 @@ public abstract class H2Tree extends BPlusTree<GridH2SearchRow, GridH2Row> { /** */ private final boolean mvccEnabled; + /** */ + private final boolean pk; + + /** */ + private final boolean affinityKey; + + /** */ + private final String cacheName; + + /** */ + private final String tblName; + + /** */ + private final String idxName; + /** */ private final Comparator<Value> comp = new Comparator<Value>() { @Override public int compare(Value o1, Value o2) { @@ -74,10 +96,28 @@ public abstract class H2Tree extends BPlusTree<GridH2SearchRow, GridH2Row> { /** Row cache.
*/ private final H2RowCache rowCache; + /** Real inline size calculation is performed only once per this many invocations; the rest are skipped. */ + private static final int THROTTLE_INLINE_SIZE_CALCULATION = 1_000; + + /** Counter of inline size calculation invocations, used to throttle real calculations. */ + private final ThreadLocal<Long> inlineSizeCalculationCntr = ThreadLocal.withInitial(() -> 0L); + + /** Maximum calculated inline size for the current index. */ + private final AtomicInteger maxCalculatedInlineSize; + + /** */ + private final IgniteLogger log; + + /** Whether index was created from scratch during owning node lifecycle. */ + private final boolean created; + /** * Constructor. * * @param name Tree name. + * @param idxName Name of index. + * @param cacheName Cache name. + * @param tblName Table name. * @param reuseList Reuse list. * @param grpId Cache group ID. * @param pageMem Page memory. @@ -86,12 +126,18 @@ public abstract class H2Tree extends BPlusTree<GridH2SearchRow, GridH2Row> { * @param metaPageId Meta page ID. * @param initNew Initialize new index. * @param rowCache Row cache. + * @param pk {@code true} for primary key. + * @param affinityKey {@code true} for affinity key. * @param mvccEnabled Mvcc flag. * @param failureProcessor if the tree is corrupted. + * @param log Logger. * @throws IgniteCheckedException If failed.
*/ protected H2Tree( String name, + String idxName, + String cacheName, + String tblName, ReuseList reuseList, int grpId, PageMemory pageMem, @@ -103,9 +149,13 @@ protected H2Tree( IndexColumn[] cols, List<InlineIndexHelper> inlineIdxs, int inlineSize, + AtomicInteger maxCalculatedInlineSize, + boolean pk, + boolean affinityKey, boolean mvccEnabled, @Nullable H2RowCache rowCache, - @Nullable FailureProcessor failureProcessor + @Nullable FailureProcessor failureProcessor, + IgniteLogger log ) throws IgniteCheckedException { super(name, grpId, pageMem, wal, globalRmvId, metaPageId, reuseList, failureProcessor); @@ -114,7 +164,16 @@ protected H2Tree( inlineSize = getMetaInlineSize(); } + this.idxName = idxName; + this.cacheName = cacheName; + this.tblName = tblName; + this.inlineSize = inlineSize; + this.maxCalculatedInlineSize = maxCalculatedInlineSize; + + this.pk = pk; + this.affinityKey = affinityKey; + this.mvccEnabled = mvccEnabled; assert rowStore != null; @@ -132,7 +191,11 @@ protected H2Tree( this.rowCache = rowCache; + this.log = log; + initTree(initNew, inlineSize); + + this.created = initNew; } /** @@ -163,7 +226,7 @@ public GridH2Row createRowFromLink(long link) throws IgniteCheckedException { * Create row from link. * * @param link Link. - * @param mvccOpCntr + * @param mvccOpCntr MVCC operation counter. * @return Row. * @throws IgniteCheckedException if failed.
*/ @@ -247,7 +310,7 @@ private int getMetaInlineSize() throws IgniteCheckedException { int c = inlineIdx.compare(pageAddr, off + fieldOff, inlineSize() - fieldOff, v2, comp); - if (c == -2) + if (c == CANT_BE_COMPARE) break; lastIdxUsed++; @@ -264,6 +327,8 @@ private int getMetaInlineSize() throws IgniteCheckedException { if (lastIdxUsed == cols.length) return mvccCompare((H2RowLinkIO)io, pageAddr, idx, row); + inlineSizeRecommendation(row); + SearchRow rowData = getRow(io, pageAddr, idx); for (int i = lastIdxUsed, len = cols.length; i < len; i++) { @@ -361,6 +426,85 @@ private int mvccCompare(GridH2SearchRow r1, GridH2SearchRow r2) { return -Long.compare(r1.mvccCounter(), r2.mvccCounter()); } + /** + * Calculate the aggregate inline size for the given indexed values and log a recommendation if the calculated + * size exceeds the current inline size. + * + * @param row Grid H2 row related to given inline indexes. + */ + private void inlineSizeRecommendation(SearchRow row) { + // Do the check only for put operations. + if (!(row instanceof GridH2KeyValueRowOnheap)) + return; + + Long invokeCnt = inlineSizeCalculationCntr.get(); + + inlineSizeCalculationCntr.set(++invokeCnt); + + boolean throttle = invokeCnt % THROTTLE_INLINE_SIZE_CALCULATION != 0; + + if (throttle) + return; + + int newSize = 0; + + InlineIndexHelper idx; + + List<String> colNames = new ArrayList<>(); + + for (InlineIndexHelper index : inlineIdxs) { + idx = index; + + newSize += idx.inlineSizeOf(row.getValue(idx.columnIndex())); + + colNames.add(index.colName()); + } + + if (newSize > inlineSize()) { + int oldSize; + + while (true) { + oldSize = maxCalculatedInlineSize.get(); + + if (oldSize >= newSize) + return; + + if (maxCalculatedInlineSize.compareAndSet(oldSize, newSize)) + break; + } + + String cols = colNames.stream().collect(Collectors.joining(", ", "(", ")")); + + String idxType = pk ? "PRIMARY KEY" : affinityKey ?
"AFFINITY KEY (implicit)" : "SECONDARY"; + + String recommendation; + + if (pk || affinityKey) { + recommendation = "set system property " + + IgniteSystemProperties.IGNITE_MAX_INDEX_PAYLOAD_SIZE + " with recommended size " + + "(be aware it will be used by default for all indexes without explicit inline size)"; + } + else { + recommendation = "use INLINE_SIZE option for CREATE INDEX command, " + + "QuerySqlField.inlineSize for annotated classes, or QueryIndex.inlineSize for explicit " + + "QueryEntity configuration"; + } + + String warn = "Indexed columns of a row cannot be fully inlined into index " + + "which may lead to slowdown due to additional data page reads, increase index inline size if needed " + + "(" + recommendation + ") " + + "[cacheName=" + cacheName + + ", tableName=" + tblName + + ", idxName=" + idxName + + ", idxCols=" + cols + + ", idxType=" + idxType + + ", curSize=" + inlineSize() + + ", recommendedInlineSize=" + newSize + "]"; + + U.warn(log, warn); + } + } + /** * @param v1 First value. * @param v2 Second value. @@ -368,6 +512,14 @@ private int mvccCompare(GridH2SearchRow r1, GridH2SearchRow r2) { */ public abstract int compareValues(Value v1, Value v2); + /** + * @return {@code True} if index was created during current node's lifetime, {@code False} if it was restored from + * disk.
+ */ + public boolean created() { + return created; + } + /** {@inheritDoc} */ @Override public String toString() { return S.toString(H2Tree.class, this, "super", super.toString()); diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2TreeFilterClosure.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2TreeFilterClosure.java index e583546fe8d24..99d08944ccbc5 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2TreeFilterClosure.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/H2TreeFilterClosure.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.query.h2.database; import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; import org.apache.ignite.internal.pagemem.PageIdUtils; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; @@ -27,6 +28,7 @@ import org.apache.ignite.internal.processors.query.h2.opt.GridH2Row; import org.apache.ignite.internal.processors.query.h2.opt.GridH2SearchRow; import org.apache.ignite.internal.transactions.IgniteTxMvccVersionCheckedException; +import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.spi.indexing.IndexingQueryCacheFilter; @@ -47,17 +49,22 @@ public class H2TreeFilterClosure implements H2Tree.TreeRowClosure cctx; + /** Table name. */ + private final String tblName; + + /** */ + private final boolean pk; + + /** */ + private final boolean affinityKey; + + /** */ + private final String idxName; + + /** Tree name. */ + private final String treeName; + + /** */ + private final IgniteLogger log; + /** * @param cctx Cache context. * @param tbl Table. - * @param name Index name. + * @param idxName Index name. * @param pk Primary key.
+ * @param affinityKey {@code true} for affinity key. * @param colsList Index columns. * @param inlineSize Inline size. * @throws IgniteCheckedException If failed. @@ -86,8 +108,9 @@ public H2TreeIndex( GridCacheContext cctx, @Nullable H2RowCache rowCache, GridH2Table tbl, - String name, + String idxName, boolean pk, + boolean affinityKey, List<IndexColumn> colsList, int inlineSize, int segmentsCnt @@ -96,20 +119,26 @@ public H2TreeIndex( this.cctx = cctx; + this.log = cctx.logger(getClass().getName()); + + this.pk = pk; + this.affinityKey = affinityKey; + + this.tblName = tbl.getName(); + this.idxName = idxName; + IndexColumn[] cols = colsList.toArray(new IndexColumn[colsList.size()]); IndexColumn.mapColumns(cols, tbl); - initBaseIndex(tbl, 0, name, cols, + initBaseIndex(tbl, 0, idxName, cols, pk ? IndexType.createPrimaryKey(false, false) : IndexType.createNonUnique(false, false, false)); GridQueryTypeDescriptor typeDesc = tbl.rowDescriptor().type(); int typeId = cctx.binaryMarshaller() ? typeDesc.typeId() : typeDesc.valueClass().hashCode(); - name = (tbl.rowDescriptor() == null ? "" : typeId + "_") + name; - - name = BPlusTree.treeName(name, "H2Tree"); + treeName = BPlusTree.treeName((tbl.rowDescriptor() == null ?
"" : typeId + "_") + idxName, "H2Tree"); if (cctx.affinityNode()) { inlineIdxs = getAvailableInlineColumns(cols); @@ -118,15 +147,20 @@ public H2TreeIndex( IgniteCacheDatabaseSharedManager db = cctx.shared().database(); + AtomicInteger maxCalculatedInlineSize = new AtomicInteger(); + for (int i = 0; i < segments.length; i++) { db.checkpointReadLock(); - try { - RootPage page = getMetaPage(name, i); + try { + RootPage page = getMetaPage(i); segments[i] = new H2Tree( - name, - cctx.offheap().reuseListForIndex(name), + treeName, + idxName, + tbl.cacheName(), + tblName, + cctx.offheap().reuseListForIndex(treeName), cctx.groupId(), cctx.dataRegion().pageMemory(), cctx.shared().wal(), @@ -137,9 +171,13 @@ public H2TreeIndex( cols, inlineIdxs, computeInlineSize(inlineIdxs, inlineSize), + maxCalculatedInlineSize, + pk, + affinityKey, cctx.mvccEnabled(), rowCache, - cctx.kernalContext().failure()) { + cctx.kernalContext().failure(), + log) { @Override public int compareValues(Value v1, Value v2) { return v1 == v2 ? 0 : table.compareTypeSafe(v1, v2); } @@ -159,6 +197,30 @@ public H2TreeIndex( initDistributedJoinMessaging(tbl); } + /** + * Check if index requires rebuild, i.e. at least one of its segment trees was created from scratch + * rather than restored from disk. + * + * @return {@code True} if rebuild is required. + */ + public boolean rebuildRequired() { + assert segments != null; + + for (int i = 0; i < segments.length; i++) { + try { + H2Tree segment = segments[i]; + + if (segment.created()) + return true; + } + catch (Exception e) { + throw new IgniteException("Failed to check index tree root page existence [cacheName=" + cctx.name() + + ", tblName=" + tblName + ", idxName=" + idxName + ", segment=" + i + ']', e); + } + } + + return false; + } + /** * @param cols Columns array. * @return List of {@link InlineIndexHelper} objects.
@@ -167,10 +229,24 @@ private List<InlineIndexHelper> getAvailableInlineColumns(IndexColumn[] cols) { List<InlineIndexHelper> res = new ArrayList<>(); for (IndexColumn col : cols) { - if (!InlineIndexHelper.AVAILABLE_TYPES.contains(col.column.getType())) + if (!InlineIndexHelper.AVAILABLE_TYPES.contains(col.column.getType())) { + String idxType = pk ? "PRIMARY KEY" : affinityKey ? "AFFINITY KEY (implicit)" : "SECONDARY"; + + U.warn(log, "Column cannot be inlined into the index because its type doesn't support inlining, " + + "index access may be slow due to additional page reads (change column type if possible) " + + "[cacheName=" + cctx.name() + + ", tableName=" + tblName + + ", idxName=" + idxName + + ", idxType=" + idxType + + ", colName=" + col.columnName + + ", columnType=" + InlineIndexHelper.nameTypeBycode(col.column.getType()) + ']' + ); + break; + } InlineIndexHelper idx = new InlineIndexHelper( + col.columnName, col.column.getType(), col.column.getColumnId(), col.sortType, @@ -362,7 +438,7 @@ private List<InlineIndexHelper> getAvailableInlineColumns(IndexColumn[] cols) { tree.destroy(); - dropMetaPage(tree.getName(), i); + dropMetaPage(i); } } } @@ -415,7 +491,7 @@ private List<InlineIndexHelper> getAvailableInlineColumns(IndexColumn[] cols) { if(p == null && v == null) return null; - return new H2TreeFilterClosure(p, v, cctx, log); + return new H2TreeFilterClosure(p, v, cctx, log); } /** @@ -457,22 +533,20 @@ private int computeInlineSize(List<InlineIndexHelper> inlineIdxs, int cfgInlineS } /** - * @param name Name. * @param segIdx Segment index. * @return RootPage for meta page. * @throws IgniteCheckedException If failed. */ - private RootPage getMetaPage(String name, int segIdx) throws IgniteCheckedException { - return cctx.offheap().rootPageForIndex(cctx.cacheId(), name + "%" + segIdx); + private RootPage getMetaPage(int segIdx) throws IgniteCheckedException { - return cctx.offheap().rootPageForIndex(cctx.cacheId(), treeName, segIdx); } /** - * @param name Name. * @param segIdx Segment index. * @throws IgniteCheckedException If failed.
*/ - private void dropMetaPage(String name, int segIdx) throws IgniteCheckedException { - cctx.offheap().dropRootPageForIndex(cctx.cacheId(), name + "%" + segIdx); + private void dropMetaPage(int segIdx) throws IgniteCheckedException { + cctx.offheap().dropRootPageForIndex(cctx.cacheId(), treeName, segIdx); } /** {@inheritDoc} */ diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelper.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelper.java index 0299033fce399..2f885582f2f2f 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelper.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelper.java @@ -50,6 +50,9 @@ * Helper class for in-page indexes. */ public class InlineIndexHelper { + /** Value for comparison meaning 'Not enough information to compare'. */ + public static final int CANT_BE_COMPARE = -2; + private static final Charset CHARSET = StandardCharsets.UTF_8; /** PageContext for use in IO's */ @@ -75,6 +78,9 @@ public class InlineIndexHelper { Value.JAVA_OBJECT ); + /** */ + private final String colName; + /** */ private final int type; @@ -94,11 +100,16 @@ public class InlineIndexHelper { private final boolean compareStringsOptimized; /** + * @param colName Column name. * @param type Index type (see {@link Value}). * @param colIdx Index column index. * @param sortType Column sort type (see {@link IndexColumn#sortType}). + * @param compareMode Compare mode. 
*/ - public InlineIndexHelper(int type, int colIdx, int sortType, CompareMode compareMode) { + public InlineIndexHelper(String colName, int type, int colIdx, int sortType, + CompareMode compareMode) { + + this.colName = colName; this.type = type; this.colIdx = colIdx; this.sortType = sortType; @@ -164,6 +175,13 @@ public InlineIndexHelper(int type, int colIdx, int sortType, CompareMode compare } } + /** + * @return Column name + */ + public String colName() { + return colName; + } + /** * @return Index type. */ @@ -347,7 +365,7 @@ protected boolean isValueFull(long pageAddr, int off) { * @param maxSize Maximum size to read. * @param v Value to compare. * @param comp Comparator. - * @return Compare result (-2 means we can't compare). + * @return Compare result ( {@code CANT_BE_COMPARE} means we can't compare). */ public int compare(long pageAddr, int off, int maxSize, Value v, Comparator comp) { int c = tryCompareOptimized(pageAddr, off, maxSize, v); @@ -358,7 +376,7 @@ public int compare(long pageAddr, int off, int maxSize, Value v, Comparator 0 && size + 1 > maxSize) || maxSize < 1 || (type = PageUtils.getByte(pageAddr, off)) == Value.UNKNOWN) - return -2; + return CANT_BE_COMPARE; if (type == Value.NULL) return Integer.MIN_VALUE; @@ -568,7 +586,7 @@ private int compareAsPrimitive(long pageAddr, int off, Value v, int type) { * @param pageAddr Page address. * @param off Offset. * @param v Value to compare. - * @return Compare result ({@code -2} means we can't compare). + * @return Compare result ({@code CANT_BE_COMPARE} means we can't compare). */ private int compareAsBytes(long pageAddr, int off, Value v) { byte[] bytes = v.getBytesNoCopy(); @@ -620,7 +638,7 @@ private int compareAsBytes(long pageAddr, int off, Value v) { // b) Even truncated current value is longer, so that it's bigger. return fixSort(1, sortType()); - return -2; + return CANT_BE_COMPARE; } /** @@ -628,7 +646,7 @@ private int compareAsBytes(long pageAddr, int off, Value v) { * @param off Offset. 
* @param v Value to compare. * @param ignoreCase {@code True} if a case-insensitive comparison should be used. - * @return Compare result ({@code -2} means we can't compare). + * @return Compare result ({@code CANT_BE_COMPARE} means we can't compare). */ private int compareAsString(long pageAddr, int off, Value v, boolean ignoreCase) { String s = v.getString(); @@ -795,7 +813,48 @@ private int compareAsString(long pageAddr, int off, Value v, boolean ignoreCase) // b) Even truncated current value is longer, so that it's bigger. return fixSort(1, sortType()); - return -2; + return CANT_BE_COMPARE; + } + + /** + * Calculate size to inline given value. + * + * @param val Value to calculate inline size. + * @return Calculated inline size for given value. + */ + public int inlineSizeOf(Value val){ + if (val.getType() == Value.NULL) + return 1; + + if (val.getType() != type) + throw new UnsupportedOperationException("value type doesn't match"); + + switch (type) { + case Value.BOOLEAN: + case Value.BYTE: + case Value.SHORT: + case Value.INT: + case Value.LONG: + case Value.FLOAT: + case Value.DOUBLE: + case Value.TIME: + case Value.DATE: + case Value.TIMESTAMP: + case Value.UUID: + return size + 1; + + case Value.STRING: + case Value.STRING_FIXED: + case Value.STRING_IGNORECASE: + return val.getString().getBytes(CHARSET).length + 3; + + case Value.BYTES: + case Value.JAVA_OBJECT: + return val.getBytes().length + 3; + + default: + throw new UnsupportedOperationException("no get operation for fast index type " + type); + } } /** @@ -931,7 +990,6 @@ public int put(long pageAddr, int off, Value val, int maxSize) { return maxSize; } - } default: @@ -1015,4 +1073,67 @@ protected boolean canRelyOnCompare(int c, Value shortVal, Value v2) { public static int fixSort(int c, int sortType) { return sortType == SortOrder.ASCENDING ? c : -c; } + + /** + * @param typeCode Type code. + * @return Name. 
+ */ + public static String nameTypeBycode(int typeCode) { + switch (typeCode) { + case Value.UNKNOWN: + return "UNKNOWN"; + case Value.NULL: + return "NULL"; + case Value.BOOLEAN: + return "BOOLEAN"; + case Value.BYTE: + return "BYTE"; + case Value.SHORT: + return "SHORT"; + case Value.INT: + return "INT"; + case Value.LONG: + return "LONG"; + case Value.DECIMAL: + return "DECIMAL"; + case Value.DOUBLE: + return "DOUBLE"; + case Value.FLOAT: + return "FLOAT"; + case Value.TIME: + return "TIME"; + case Value.DATE: + return "DATE"; + case Value.TIMESTAMP: + return "TIMESTAMP"; + case Value.BYTES: + return "BYTES"; + case Value.STRING: + return "STRING"; + case Value.STRING_IGNORECASE: + return "STRING_IGNORECASE"; + case Value.BLOB: + return "BLOB"; + case Value.CLOB: + return "CLOB"; + case Value.ARRAY: + return "ARRAY"; + case Value.RESULT_SET: + return "RESULT_SET"; + case Value.JAVA_OBJECT: + return "JAVA_OBJECT"; + case Value.UUID: + return "UUID"; + case Value.STRING_FIXED: + return "STRING_FIXED"; + case Value.GEOMETRY: + return "GEOMETRY"; + case Value.TIMESTAMP_TZ: + return "TIMESTAMP_TZ"; + case Value.ENUM: + return "ENUM"; + default: + return "UNKNOWN type " + typeCode; + } + } } diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/ddl/DdlStatementsProcessor.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/ddl/DdlStatementsProcessor.java index 8688c4fbd965a..94e39ef2870c0 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/ddl/DdlStatementsProcessor.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/ddl/DdlStatementsProcessor.java @@ -72,7 +72,6 @@ import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.lang.IgniteBiTuple; import 
org.apache.ignite.plugin.security.SecurityPermission; import org.h2.command.Prepared; import org.h2.command.ddl.AlterTableAlterColumn; @@ -329,7 +328,7 @@ else if (stmt0 instanceof GridSqlDropIndex) { } } else if (stmt0 instanceof GridSqlCreateTable) { - ctx.security().authorize(null, SecurityPermission.CACHE_CREATE, SecurityContextHolder.get()); + ctx.security().authorize(null, SecurityPermission.CACHE_CREATE, null); GridSqlCreateTable cmd = (GridSqlCreateTable)stmt0; @@ -358,11 +357,11 @@ else if (stmt0 instanceof GridSqlCreateTable) { ctx.query().dynamicTableCreate(cmd.schemaName(), e, cmd.templateName(), cmd.cacheName(), cmd.cacheGroup(), cmd.dataRegionName(), cmd.affinityKey(), cmd.atomicityMode(), - cmd.writeSynchronizationMode(), cmd.backups(), cmd.ifNotExists()); + cmd.writeSynchronizationMode(), cmd.backups(), cmd.ifNotExists(), cmd.encrypted()); } } else if (stmt0 instanceof GridSqlDropTable) { - ctx.security().authorize(null, SecurityPermission.CACHE_DESTROY, SecurityContextHolder.get()); + ctx.security().authorize(null, SecurityPermission.CACHE_DESTROY, null); GridSqlDropTable cmd = (GridSqlDropTable)stmt0; diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/dml/DmlBatchSender.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/dml/DmlBatchSender.java index 7c2a4228c95d0..88c19c3d907bb 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/dml/DmlBatchSender.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/dml/DmlBatchSender.java @@ -98,14 +98,10 @@ public void add(Object key, EntryProcessor proc, int ro throws IgniteCheckedException { assert key != null; assert proc != null; - - ClusterNode node = cctx.affinity().primaryByKey(key, AffinityTopologyVersion.NONE); - - if (node == null) - throw new IgniteCheckedException("Failed to map key to node."); - assert rowNum < cntPerRow.length; + ClusterNode node = 
primaryNodeByKey(key); + UUID nodeId = node.id(); Batch batch = batches.get(nodeId); @@ -126,6 +122,20 @@ public void add(Object key, EntryProcessor proc, int ro sendBatch(batch); } + /** + * @param key Key. + * @return Primary node for given key. + * @throws IgniteCheckedException If primary node is not found. + */ + public ClusterNode primaryNodeByKey(Object key) throws IgniteCheckedException { + ClusterNode node = cctx.affinity().primaryByKey(key, AffinityTopologyVersion.NONE); + + if (node == null) + throw new IgniteCheckedException("Failed to map key to node."); + + return node; + } + /** * Flush any remaining entries. */ diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2QueryContext.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2QueryContext.java index f12c0f30170fd..af70d821cc6c0 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2QueryContext.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2QueryContext.java @@ -28,6 +28,7 @@ import org.apache.ignite.internal.processors.cache.distributed.dht.GridReservable; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; import org.apache.ignite.internal.processors.query.h2.twostep.MapQueryLazyWorker; +import org.apache.ignite.internal.processors.query.h2.twostep.PartitionReservation; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.spi.indexing.IndexingQueryFilter; @@ -53,7 +54,7 @@ public class GridH2QueryContext { private volatile boolean cleared; /** */ - private List<GridReservable> reservations; + private PartitionReservation reserved; /** Range streams for indexes. */ private Map streams; @@ -190,11 +191,11 @@ public DistributedJoinMode distributedJoinMode() { } /** - * @param reservations Reserved partitions or group reservations.
+ * @param reserved Reserved partitions or group reservations. + * @return {@code this}. */ - public GridH2QueryContext reservations(List<GridReservable> reservations) { - this.reservations = reservations; + public GridH2QueryContext reservations(PartitionReservation reserved) { + this.reserved = reserved; return this; } @@ -416,12 +417,8 @@ private static boolean doClear(Key key, boolean nodeStop) { public void clearContext(boolean nodeStop) { cleared = true; - List<GridReservable> r = reservations; - - if (!nodeStop && !F.isEmpty(r)) { - for (int i = 0; i < r.size(); i++) - r.get(i).release(); - } + if (!nodeStop && reserved != null) + reserved.release(); } /** diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Row.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Row.java index 6246aa5e7d018..7fb896f0ee92e 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Row.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Row.java @@ -126,11 +126,6 @@ public byte mvccTxState() { return row.newMvccTxState(); } - /** {@inheritDoc} */ - @Override public boolean isKeyAbsentBefore() { - return row.isKeyAbsentBefore(); - } - /** {@inheritDoc} */ @Override public boolean indexSearchRow() { return false; diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Table.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Table.java index a612b637980ad..ca5c3e97de814 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Table.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2Table.java @@ -26,7 +26,6 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.LongAdder; import java.util.concurrent.locks.Lock; -import
java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteInterruptedException; @@ -35,6 +34,7 @@ import org.apache.ignite.internal.processors.cache.query.QueryTable; import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.internal.processors.query.QueryField; +import org.apache.ignite.internal.processors.query.h2.IndexRebuildPartialClosure; import org.apache.ignite.internal.processors.query.h2.database.H2RowFactory; import org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex; import org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor; @@ -88,7 +88,7 @@ public class GridH2Table extends TableBase { private final Map tmpIdxs = new HashMap<>(); /** */ - private final ReadWriteLock lock; + private final ReentrantReadWriteLock lock; /** */ private boolean destroyed; @@ -126,6 +126,7 @@ public class GridH2Table extends TableBase { * @param idxsFactory Indexes factory. * @param cctx Cache context. */ + @SuppressWarnings("ConstantConditions") public GridH2Table(CreateTableData createTblData, GridH2RowDescriptor desc, H2RowFactory rowFactory, GridH2SystemIndexFactory idxsFactory, GridCacheContext cctx) { super(createTblData); @@ -174,11 +175,14 @@ public GridH2Table(CreateTableData createTblData, GridH2RowDescriptor desc, H2Ro assert idxs != null; List<Index> clones = new ArrayList<>(idxs.size()); + for (Index index : idxs) { Index clone = createDuplicateIndexIfNeeded(index); + if (clone != null) clones.add(clone); } + idxs.addAll(clones); boolean hasHashIndex = idxs.size() >= 2 && index(0).getIndexType().isHash(); @@ -551,12 +555,39 @@ private void addToIndex(GridH2IndexBase idx, GridH2Row row, GridH2Row prevRow) { } /** + * Collect indexes for rebuild. * + * @param clo Closure.
+ */ + public void collectIndexesForPartialRebuild(IndexRebuildPartialClosure clo) { + for (int i = sysIdxsCnt; i < idxs.size(); i++) { + Index idx = idxs.get(i); + + if (idx instanceof H2TreeIndex) { + H2TreeIndex idx0 = (H2TreeIndex)idx; + + if (idx0.rebuildRequired()) + clo.addIndex(this, idx0); + } + } + } + + /** + * Mark or unmark index rebuild state. */ public void markRebuildFromHashInProgress(boolean value) { assert !value || (idxs.size() >= 2 && index(1).getIndexType().isHash()) : "Table has no hash index."; rebuildFromHashInProgress = value; + + lock.writeLock().lock(); + + try { + incrementModificationCounter(); + } + finally { + lock.writeLock().unlock(); + } } /** @@ -637,7 +668,7 @@ private Index commitUserIndex(Session ses, String idxName) { if (cloneIdx != null) database.addSchemaObject(ses, cloneIdx); - setModified(); + incrementModificationCounter(); return idx; } @@ -941,7 +972,7 @@ public void addColumns(List<QueryField> cols, boolean ifNotExists) { desc.refreshMetadataFromTypeDescriptor(); - setModified(); + incrementModificationCounter(); } finally { unlock(true); @@ -949,9 +980,10 @@ } /** + * Drop columns. + * - * @param cols - * @param ifExists + * @param cols Columns. + * @param ifExists IF EXISTS flag. */ public void dropColumns(List<String> cols, boolean ifExists) { assert !ifExists || cols.size() == 1; @@ -1003,7 +1035,7 @@ public void dropColumns(List<String> cols, boolean ifExists) { ((GridH2IndexBase)idx).refreshColumnIds(); } - setModified(); + incrementModificationCounter(); } finally { unlock(true); @@ -1031,6 +1063,15 @@ public void dropColumns(List<String> cols, boolean ifExists) { return columns; } + /** + * Increment modification counter to force recompilation of existing prepared statements. + */ + private void incrementModificationCounter() { + assert lock.isWriteLockedByCurrentThread(); + + setModified(); + } + /** * Set insert hack flag.
* diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlCreateTable.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlCreateTable.java index de86d6aa71e8c..0da77bbfb0e5e 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlCreateTable.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlCreateTable.java @@ -84,6 +84,9 @@ public class GridSqlCreateTable extends GridSqlStatement { /** Extra WITH-params. */ private List params; + /** Encrypted flag. */ + private boolean encrypted; + /** * @return Cache name upon which new cache configuration for this table must be based. */ @@ -336,6 +339,20 @@ public void params(List params) { this.params = params; } + /** + * @return Encrypted flag. + */ + public boolean encrypted() { + return encrypted; + } + + /** + * @param encrypted Encrypted flag. + */ + public void encrypted(boolean encrypted) { + this.encrypted = encrypted; + } + /** {@inheritDoc} */ @Override public String getSQL() { return null; diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java index a653e7f90390e..c042acc1d5bc3 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java @@ -21,6 +21,7 @@ import java.sql.PreparedStatement; import java.sql.SQLException; import java.util.ArrayList; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.IdentityHashMap; @@ -34,7 +35,6 @@ import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cache.QueryIndex; import 
org.apache.ignite.cache.QueryIndexType; -import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.query.IgniteSQLException; @@ -513,6 +513,9 @@ public class GridSqlQueryParser { /** Data region name. */ public static final String PARAM_DATA_REGION = "DATA_REGION"; + /** */ + private static final String PARAM_ENCRYPTED = "ENCRYPTED"; + /** */ private final IdentityHashMap h2ObjToGridObj = new IdentityHashMap<>(); @@ -526,6 +529,9 @@ public class GridSqlQueryParser { */ private int parsingSubQryExpression; + /** Whether this is SELECT FOR UPDATE. */ + private boolean selectForUpdate; + /** * @param useOptimizedSubqry If we have to find correct order for table filters in FROM clause. * Relies on uniqueness of table filter aliases. @@ -1610,6 +1616,11 @@ else if (CacheWriteSynchronizationMode.PRIMARY_SYNC.name().equalsIgnoreCase(val) break; + case PARAM_ENCRYPTED: + res.encrypted(F.isEmpty(val) || Boolean.parseBoolean(val)); + + break; + default: throw new IgniteSQLException("Unsupported parameter: " + name, IgniteQueryErrorCode.PARSING); } @@ -1702,13 +1713,13 @@ else if (stmt.getClass() == Update.class) /** * Check if query may be run locally on all caches mentioned in the query. - * @param replicatedOnlyQry replicated-only query flag from original {@link SqlFieldsQuery}. + * * @return {@code true} if query may be run locally on all caches mentioned in the query, i.e. there's no need * to run distributed query. 
- * @see SqlFieldsQuery#isReplicatedOnly() */ - public boolean isLocalQuery(boolean replicatedOnlyQry) { - boolean hasCaches = false; + public boolean isLocalQuery() { + if (selectForUpdate) + return false; for (Object o : h2ObjToGridObj.values()) { if (o instanceof GridSqlAlias) @@ -1718,19 +1729,21 @@ public boolean isLocalQuery(boolean replicatedOnlyQry) { GridH2Table tbl = ((GridSqlTable)o).dataTable(); if (tbl != null) { - hasCaches = true; - GridCacheContext cctx = tbl.cache(); - if (!cctx.isLocal() && !(replicatedOnlyQry && cctx.isReplicatedAffinityNode())) + if (cctx.mvccEnabled()) + return false; + + if (cctx.isPartitioned()) + return false; + + if (cctx.isReplicated() && !cctx.isReplicatedAffinityNode()) return false; } } } - // For consistency with old logic, let's not force locality in absence of caches - - // if there are no caches, original SqlFieldsQuery's isLocal flag will be used. - return hasCaches; + return true; } /** @@ -1755,6 +1768,27 @@ public GridCacheContext getFirstPartitionedCache() { return null; } + /** + * @return All known cache IDs. + */ + public Collection cacheIds() { + ArrayList res = new ArrayList<>(1); + + for (Object o : h2ObjToGridObj.values()) { + if (o instanceof GridSqlAlias) + o = GridSqlAlias.unwrap((GridSqlAst)o); + + if (o instanceof GridSqlTable) { + GridH2Table tbl = ((GridSqlTable)o).dataTable(); + + if (tbl != null) + res.add(tbl.cacheId()); + } + } + + return res; + } + /** * @param stmt Prepared statement. * @return Parsed AST. 
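The rewritten isLocalQuery() above drops the replicatedOnlyQry parameter and derives locality purely from the query shape and the referenced caches. A minimal standalone sketch of that decision follows; the CacheInfo record is a hypothetical stand-in for GridCacheContext, and the method takes an explicit list where the real code walks h2ObjToGridObj:

```java
import java.util.List;

// Hypothetical stand-in for the per-cache facts the parser reads off GridCacheContext.
record CacheInfo(boolean mvccEnabled, boolean partitioned, boolean replicated, boolean replicatedAffinityNode) {}

final class QueryLocality {
    private QueryLocality() {
        // Utility class.
    }

    /**
     * Mirrors the decision order of the rewritten isLocalQuery(): SELECT FOR UPDATE,
     * MVCC-enabled caches and partitioned caches always force a distributed query;
     * a replicated cache only stays local on one of its affinity nodes.
     */
    static boolean isLocalQuery(boolean selectForUpdate, List<CacheInfo> caches) {
        if (selectForUpdate)
            return false;

        for (CacheInfo cache : caches) {
            if (cache.mvccEnabled() || cache.partitioned())
                return false;

            if (cache.replicated() && !cache.replicatedAffinityNode())
                return false;
        }

        // LOCAL caches only (or no caches at all): safe to run on this node.
        return true;
    }
}
```

Note the behavior change the patch calls out: with no caches in the query, the sketch (like the new code) returns true, instead of falling back to the original SqlFieldsQuery's local flag as the old logic did.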
@@ -1764,6 +1798,8 @@ public final GridSqlStatement parse(Prepared stmt) { if (optimizedTableFilterOrder != null) collectOptimizedTableFiltersOrder((Query)stmt); + selectForUpdate = isForUpdateQuery(stmt); + return parseQuery((Query)stmt); } diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuerySplitter.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuerySplitter.java index ca9c5bb064cb3..7fa426fc6ba6d 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuerySplitter.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuerySplitter.java @@ -58,6 +58,7 @@ import org.jetbrains.annotations.Nullable; import static org.apache.ignite.internal.processors.query.h2.opt.GridH2CollocationModel.isCollocated; +import static org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap.DEFAULT_COLUMNS_COUNT; import static org.apache.ignite.internal.processors.query.h2.sql.GridSqlConst.TRUE; import static org.apache.ignite.internal.processors.query.h2.sql.GridSqlFunctionType.AVG; import static org.apache.ignite.internal.processors.query.h2.sql.GridSqlFunctionType.CAST; @@ -201,7 +202,7 @@ public static GridCacheTwoStepQuery split( // subqueries because we do not have unique FROM aliases yet. 
GridSqlQuery qry = parse(prepared, false); - String originalSql = qry.getSQL(); + String originalSql = prepared.getSQL(); // debug("ORIGINAL", originalSql); @@ -2142,16 +2143,23 @@ private void splitAggregate( case SUM: // SUM( SUM(x) ) or SUM(DISTINCT x) case MAX: // MAX( MAX(x) ) or MAX(DISTINCT x) case MIN: // MIN( MIN(x) ) or MIN(DISTINCT x) + GridSqlElement rdcAgg0; + if (hasDistinctAggregate) /* and has no collocated group by */ { mapAgg = agg.child(); - rdcAgg = aggregate(agg.distinct(), agg.type()).addChild(column(mapAggAlias.alias())); + rdcAgg0 = aggregate(agg.distinct(), agg.type()).addChild(column(mapAggAlias.alias())); } else { mapAgg = aggregate(agg.distinct(), agg.type()).resultType(agg.resultType()).addChild(agg.child()); - rdcAgg = aggregate(agg.distinct(), agg.type()).addChild(column(mapAggAlias.alias())); + + rdcAgg0 = function(CAST).resultType(agg.resultType()) + .addChild(aggregate(agg.distinct(), agg.type()).addChild(column(mapAggAlias.alias()))); } + // Avoid second type upcast on reducer (e.g. Int -> (map) -> Long -> (reduce) -> BigDecimal). 
+ rdcAgg = function(CAST).resultType(agg.resultType()).addChild(rdcAgg0); + break; case COUNT_ALL: // CAST(SUM( COUNT(*) ) AS BIGINT) @@ -2375,6 +2383,9 @@ private static CacheQueryPartitionInfo extractPartitionFromEquality(GridSqlOpera GridH2Table tbl = (GridH2Table) column.column().getTable(); + if (!isAffinityKey(column.column().getColumnId(), tbl)) + return null; + GridH2RowDescriptor desc = tbl.rowDescriptor(); IndexColumn affKeyCol = tbl.getAffinityKeyColumn(); @@ -2397,6 +2408,27 @@ private static CacheQueryPartitionInfo extractPartitionFromEquality(GridSqlOpera column.column().getType(), param.index()); } + /** + * Check whether the given column is the table's affinity key column. + * @param colId Column ID to check. + * @param tbl H2 table. + * @return {@code true} if the column is the affinity key column. + */ + private static boolean isAffinityKey(int colId, GridH2Table tbl) { + GridH2RowDescriptor desc = tbl.rowDescriptor(); + + if (desc.isKeyColumn(colId)) + return true; + + IndexColumn affKeyCol = tbl.getAffinityKeyColumn(); + + try { + return affKeyCol != null && colId >= DEFAULT_COLUMNS_COUNT && desc.isColumnKeyProperty(colId - DEFAULT_COLUMNS_COUNT) && colId == affKeyCol.column.getColumnId(); + } catch (IllegalStateException e) { + return false; + } + } + /** * Merges two partition info arrays, removing duplicates * diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlAbstractLocalSystemView.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlAbstractLocalSystemView.java index d692dbac3dafa..d028406a122c9 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlAbstractLocalSystemView.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlAbstractLocalSystemView.java @@ -37,7 +37,7 @@ public abstract class SqlAbstractLocalSystemView extends SqlAbstractSystemView { * @param tblName Table name. * @param desc Description. * @param ctx Context.
- * @param indexes Indexed columns. + * @param indexes Indexes. * @param cols Columns. */ public SqlAbstractLocalSystemView(String tblName, String desc, GridKernalContext ctx, String[] indexes, @@ -49,6 +49,17 @@ public SqlAbstractLocalSystemView(String tblName, String desc, GridKernalContext assert indexes != null; } + /** + * @param tblName Table name. + * @param desc Description. + * @param ctx Context. + * @param indexedCols Indexed columns. + * @param cols Columns. + */ + public SqlAbstractLocalSystemView(String tblName, String desc, GridKernalContext ctx, String indexedCols, Column... cols) { + this(tblName, desc, ctx, new String[] {indexedCols}, cols); + } + /** * @param tblName Table name. * @param desc Description. diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewCaches.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewCaches.java new file mode 100644 index 0000000000000..ff9ef32115121 --- /dev/null +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewCaches.java @@ -0,0 +1,197 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.query.h2.sys.view; + +import java.util.Collection; +import java.util.Collections; +import java.util.Iterator; +import java.util.concurrent.atomic.AtomicLong; +import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; +import org.apache.ignite.internal.util.typedef.F; +import org.h2.engine.Session; +import org.h2.result.Row; +import org.h2.result.SearchRow; +import org.h2.value.Value; + +/** + * System view: caches. + */ +public class SqlSystemViewCaches extends SqlAbstractLocalSystemView { + /** + * @param ctx Grid context. + */ + public SqlSystemViewCaches(GridKernalContext ctx) { + super("CACHES", "Ignite caches", ctx, "NAME", + newColumn("NAME"), + newColumn("CACHE_ID", Value.INT), + newColumn("CACHE_TYPE"), + newColumn("GROUP_ID", Value.INT), + newColumn("GROUP_NAME"), + newColumn("CACHE_MODE"), + newColumn("ATOMICITY_MODE"), + newColumn("IS_ONHEAP_CACHE_ENABLED", Value.BOOLEAN), + newColumn("IS_COPY_ON_READ", Value.BOOLEAN), + newColumn("IS_LOAD_PREVIOUS_VALUE", Value.BOOLEAN), + newColumn("IS_READ_FROM_BACKUP", Value.BOOLEAN), + newColumn("PARTITION_LOSS_POLICY"), + newColumn("NODE_FILTER"), + newColumn("TOPOLOGY_VALIDATOR"), + newColumn("IS_EAGER_TTL", Value.BOOLEAN), + newColumn("WRITE_SYNCHRONIZATION_MODE"), + newColumn("IS_INVALIDATE", Value.BOOLEAN), + newColumn("IS_EVENTS_DISABLED", Value.BOOLEAN), + newColumn("IS_STATISTICS_ENABLED", Value.BOOLEAN), + newColumn("IS_MANAGEMENT_ENABLED", Value.BOOLEAN), + newColumn("BACKUPS", Value.INT), + newColumn("AFFINITY"), + newColumn("AFFINITY_MAPPER"), + newColumn("REBALANCE_MODE"), + newColumn("REBALANCE_BATCH_SIZE", Value.INT), + newColumn("REBALANCE_TIMEOUT", Value.LONG), + newColumn("REBALANCE_DELAY", Value.LONG), + newColumn("REBALANCE_THROTTLE", Value.LONG), + newColumn("REBALANCE_BATCHES_PREFETCH_COUNT", Value.LONG), + newColumn("REBALANCE_ORDER", Value.INT), + 
newColumn("EVICTION_FILTER"), + newColumn("EVICTION_POLICY_FACTORY"), + newColumn("IS_NEAR_CACHE_ENABLED", Value.BOOLEAN), + newColumn("NEAR_CACHE_EVICTION_POLICY_FACTORY"), + newColumn("NEAR_CACHE_START_SIZE", Value.INT), + newColumn("DEFAULT_LOCK_TIMEOUT", Value.LONG), + newColumn("CACHE_INTERCEPTOR"), + newColumn("CACHE_STORE_FACTORY"), + newColumn("IS_STORE_KEEP_BINARY", Value.BOOLEAN), + newColumn("IS_READ_THROUGH", Value.BOOLEAN), + newColumn("IS_WRITE_THROUGH", Value.BOOLEAN), + newColumn("IS_WRITE_BEHIND_ENABLED", Value.BOOLEAN), + newColumn("WRITE_BEHIND_COALESCING", Value.BOOLEAN), + newColumn("WRITE_BEHIND_FLUSH_SIZE", Value.INT), + newColumn("WRITE_BEHIND_FLUSH_FREQUENCY", Value.LONG), + newColumn("WRITE_BEHIND_FLUSH_THREAD_COUNT", Value.INT), + newColumn("WRITE_BEHIND_FLUSH_BATCH_SIZE", Value.INT), + newColumn("MAX_CONCURRENT_ASYNC_OPERATIONS", Value.INT), + newColumn("CACHE_LOADER_FACTORY"), + newColumn("CACHE_WRITER_FACTORY"), + newColumn("EXPIRY_POLICY_FACTORY"), + newColumn("IS_SQL_ESCAPE_ALL", Value.BOOLEAN), + newColumn("SQL_SCHEMA"), + newColumn("SQL_INDEX_MAX_INLINE_SIZE", Value.INT), + newColumn("IS_SQL_ONHEAP_CACHE_ENABLED", Value.BOOLEAN), + newColumn("SQL_ONHEAP_CACHE_MAX_SIZE", Value.INT), + newColumn("QUERY_DETAILS_METRICS_SIZE", Value.INT), + newColumn("QUERY_PARALLELISM", Value.INT), + newColumn("MAX_QUERY_ITERATORS_COUNT", Value.INT), + newColumn("DATA_REGION_NAME") + ); + } + + /** {@inheritDoc} */ + @SuppressWarnings("unchecked") + @Override public Iterator getRows(Session ses, SearchRow first, SearchRow last) { + SqlSystemViewColumnCondition nameCond = conditionForColumn("NAME", first, last); + + Collection caches; + + if (nameCond.isEquality()) { + DynamicCacheDescriptor cache = ctx.cache().cacheDescriptor(nameCond.valueForEquality().getString()); + + caches = cache == null ? 
Collections.emptySet() : Collections.singleton(cache); + } + else + caches = ctx.cache().cacheDescriptors().values(); + + AtomicLong rowKey = new AtomicLong(); + + return F.iterator(caches, + cache -> createRow(ses, rowKey.incrementAndGet(), + cache.cacheName(), + cache.cacheId(), + cache.cacheType(), + cache.groupId(), + cache.groupDescriptor().groupName(), + cache.cacheConfiguration().getCacheMode(), + cache.cacheConfiguration().getAtomicityMode(), + cache.cacheConfiguration().isOnheapCacheEnabled(), + cache.cacheConfiguration().isCopyOnRead(), + cache.cacheConfiguration().isLoadPreviousValue(), + cache.cacheConfiguration().isReadFromBackup(), + cache.cacheConfiguration().getPartitionLossPolicy(), + cache.cacheConfiguration().getNodeFilter(), + cache.cacheConfiguration().getTopologyValidator(), + cache.cacheConfiguration().isEagerTtl(), + cache.cacheConfiguration().getWriteSynchronizationMode(), + cache.cacheConfiguration().isInvalidate(), + cache.cacheConfiguration().isEventsDisabled(), + cache.cacheConfiguration().isStatisticsEnabled(), + cache.cacheConfiguration().isManagementEnabled(), + cache.cacheConfiguration().getBackups(), + cache.cacheConfiguration().getAffinity(), + cache.cacheConfiguration().getAffinityMapper(), + cache.cacheConfiguration().getRebalanceMode(), + cache.cacheConfiguration().getRebalanceBatchSize(), + cache.cacheConfiguration().getRebalanceTimeout(), + cache.cacheConfiguration().getRebalanceDelay(), + cache.cacheConfiguration().getRebalanceThrottle(), + cache.cacheConfiguration().getRebalanceBatchesPrefetchCount(), + cache.cacheConfiguration().getRebalanceOrder(), + cache.cacheConfiguration().getEvictionFilter(), + cache.cacheConfiguration().getEvictionPolicyFactory(), + cache.cacheConfiguration().getNearConfiguration() != null, + cache.cacheConfiguration().getNearConfiguration() != null ? 
+ cache.cacheConfiguration().getNearConfiguration().getNearEvictionPolicyFactory() : null, + cache.cacheConfiguration().getNearConfiguration() != null ? + cache.cacheConfiguration().getNearConfiguration().getNearStartSize() : null, + cache.cacheConfiguration().getDefaultLockTimeout(), + cache.cacheConfiguration().getInterceptor(), + cache.cacheConfiguration().getCacheStoreFactory(), + cache.cacheConfiguration().isStoreKeepBinary(), + cache.cacheConfiguration().isReadThrough(), + cache.cacheConfiguration().isWriteThrough(), + cache.cacheConfiguration().isWriteBehindEnabled(), + cache.cacheConfiguration().getWriteBehindCoalescing(), + cache.cacheConfiguration().getWriteBehindFlushSize(), + cache.cacheConfiguration().getWriteBehindFlushFrequency(), + cache.cacheConfiguration().getWriteBehindFlushThreadCount(), + cache.cacheConfiguration().getWriteBehindBatchSize(), + cache.cacheConfiguration().getMaxConcurrentAsyncOperations(), + cache.cacheConfiguration().getCacheLoaderFactory(), + cache.cacheConfiguration().getCacheWriterFactory(), + cache.cacheConfiguration().getExpiryPolicyFactory(), + cache.cacheConfiguration().isSqlEscapeAll(), + cache.cacheConfiguration().getSqlSchema(), + cache.cacheConfiguration().getSqlIndexMaxInlineSize(), + cache.cacheConfiguration().isSqlOnheapCacheEnabled(), + cache.cacheConfiguration().getSqlOnheapCacheMaxSize(), + cache.cacheConfiguration().getQueryDetailMetricsSize(), + cache.cacheConfiguration().getQueryParallelism(), + cache.cacheConfiguration().getMaxQueryIteratorsCount(), + cache.cacheConfiguration().getDataRegionName() + ), true); + } + + /** {@inheritDoc} */ + @Override public boolean canGetRowCount() { + return true; + } + + /** {@inheritDoc} */ + @Override public long getRowCount() { + return ctx.cache().cacheDescriptors().size(); + } +} diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodeMetrics.java 
b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodeMetrics.java index 01b4e976f0cae..d3921aaaa0490 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodeMetrics.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodeMetrics.java @@ -40,7 +40,7 @@ public class SqlSystemViewNodeMetrics extends SqlAbstractLocalSystemView { * @param ctx Grid context. */ public SqlSystemViewNodeMetrics(GridKernalContext ctx) { - super("NODE_METRICS", "Node metrics", ctx, new String[] {"NODE_ID"}, + super("NODE_METRICS", "Node metrics", ctx, "NODE_ID", newColumn("NODE_ID", Value.UUID), newColumn("LAST_UPDATE_TIME", Value.TIMESTAMP), newColumn("MAX_ACTIVE_JOBS", Value.INT), diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodes.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodes.java index 514f92e9708ee..d8720310aef08 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodes.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sys/view/SqlSystemViewNodes.java @@ -39,7 +39,7 @@ public class SqlSystemViewNodes extends SqlAbstractLocalSystemView { * @param ctx Grid context. 
*/ public SqlSystemViewNodes(GridKernalContext ctx) { - super("NODES", "Topology nodes", ctx, new String[] {"ID"}, + super("NODES", "Topology nodes", ctx, "ID", newColumn("ID", Value.UUID), newColumn("CONSISTENT_ID"), newColumn("VERSION"), diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridMapQueryExecutor.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridMapQueryExecutor.java index f2281116eb5fe..76ea110b16cd0 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridMapQueryExecutor.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridMapQueryExecutor.java @@ -21,12 +21,10 @@ import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; -import java.util.AbstractCollection; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; -import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.UUID; @@ -41,7 +39,6 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.IgniteSystemProperties; -import org.apache.ignite.cache.PartitionLossPolicy; import org.apache.ignite.cache.query.QueryCancelledException; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cluster.ClusterNode; @@ -55,7 +52,6 @@ import org.apache.ignite.internal.managers.communication.GridMessageListener; import org.apache.ignite.internal.managers.eventstorage.GridLocalEventListener; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; -import org.apache.ignite.internal.processors.cache.CacheInvalidStateException; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.distributed.dht.CompoundLockFuture; import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; @@ -88,7 +84,6 @@ import org.apache.ignite.internal.processors.query.h2.twostep.msg.GridH2QueryRequest; import org.apache.ignite.internal.processors.query.h2.twostep.msg.GridH2SelectForUpdateTxDetails; import org.apache.ignite.internal.util.GridSpinBusyLock; -import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.CU; @@ -104,13 +99,8 @@ import org.jetbrains.annotations.Nullable; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SQL_FORCE_LAZY_RESULT_SET; -import static org.apache.ignite.cache.PartitionLossPolicy.READ_ONLY_SAFE; -import static org.apache.ignite.cache.PartitionLossPolicy.READ_WRITE_SAFE; import static org.apache.ignite.events.EventType.EVT_CACHE_QUERY_EXECUTED; import static org.apache.ignite.internal.managers.communication.GridIoPolicy.QUERY_POOL; -import static org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion.NONE; -import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.LOST; -import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; import static org.apache.ignite.internal.processors.query.h2.opt.DistributedJoinMode.OFF; import static org.apache.ignite.internal.processors.query.h2.opt.DistributedJoinMode.distributedJoinMode; import static org.apache.ignite.internal.processors.query.h2.opt.GridH2QueryType.MAP; @@ -140,9 +130,6 @@ public class GridMapQueryExecutor { /** */ private final GridSpinBusyLock busyLock; - /** */ - private final ConcurrentMap reservations = new ConcurrentHashMap<>(); - /** Lazy workers. 
*/ private final ConcurrentHashMap lazyWorkers = new ConcurrentHashMap<>(); @@ -301,256 +288,6 @@ private MapNodeResults resultsForNode(UUID nodeId) { return nodeRess; } - /** - * @param cctx Cache context. - * @param p Partition ID. - * @return Partition. - */ - private GridDhtLocalPartition partition(GridCacheContext cctx, int p) { - return cctx.topology().localPartition(p, NONE, false); - } - - /** - * @param cacheIds Cache IDs. - * @param topVer Topology version. - * @param explicitParts Explicit partitions list. - * @param reserved Reserved list. - * @param nodeId Node ID. - * @param reqId Request ID. - * @return String which is null in case of success or with causeMessage if failed - * @throws IgniteCheckedException If failed. - */ - private String reservePartitions( - @Nullable List cacheIds, - AffinityTopologyVersion topVer, - final int[] explicitParts, - List reserved, - UUID nodeId, - long reqId - ) throws IgniteCheckedException { - assert topVer != null; - - if (F.isEmpty(cacheIds)) - return null; - - Collection partIds = wrap(explicitParts); - - for (int i = 0; i < cacheIds.size(); i++) { - GridCacheContext cctx = ctx.cache().context().cacheContext(cacheIds.get(i)); - - // Cache was not found, probably was not deployed yet. - if (cctx == null) { - return String.format("Failed to reserve partitions for query (cache is not found on " + - "local node) [localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s]", - ctx.localNodeId(), nodeId, reqId, topVer, cacheIds.get(i)); - } - - if (cctx.isLocal() || !cctx.rebalanceEnabled()) - continue; - - // For replicated cache topology version does not make sense. - final MapReservationKey grpKey = new MapReservationKey(cctx.name(), cctx.isReplicated() ? null : topVer); - - GridReservable r = reservations.get(grpKey); - - if (explicitParts == null && r != null) { // Try to reserve group partition if any and no explicits. 
- if (r != MapReplicatedReservation.INSTANCE) { - if (!r.reserve()) - return String.format("Failed to reserve partitions for query (group " + - "reservation failed) [localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + - "cacheName=%s]",ctx.localNodeId(), nodeId, reqId, topVer, cacheIds.get(i), cctx.name()); - - reserved.add(r); - } - } - else { // Try to reserve partitions one by one. - int partsCnt = cctx.affinity().partitions(); - - if (cctx.isReplicated()) { // Check all the partitions are in owning state for replicated cache. - if (r == null) { // Check only once. - for (int p = 0; p < partsCnt; p++) { - GridDhtLocalPartition part = partition(cctx, p); - - // We don't need to reserve partitions because they will not be evicted in replicated caches. - GridDhtPartitionState partState = part != null ? part.state() : null; - - if (partState != OWNING) - return String.format("Failed to reserve partitions for query " + - "(partition of REPLICATED cache is not in OWNING state) [" + - "localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, cacheName=%s, " + - "part=%s, partFound=%s, partState=%s]", - ctx.localNodeId(), - nodeId, - reqId, - topVer, - cacheIds.get(i), - cctx.name(), - p, - (part != null), - partState - ); - } - - // Mark that we checked this replicated cache. - reservations.putIfAbsent(grpKey, MapReplicatedReservation.INSTANCE); - } - } - else { // Reserve primary partitions for partitioned cache (if no explicit given). - if (explicitParts == null) - partIds = cctx.affinity().primaryPartitions(ctx.localNodeId(), topVer); - - int reservedCnt = 0; - - for (int partId : partIds) { - GridDhtLocalPartition part = partition(cctx, partId); - - GridDhtPartitionState partState = part != null ? 
part.state() : null; - - if (partState != OWNING) { - if (partState == LOST) - ignoreLostPartitionIfPossible(cctx, part); - else { - return String.format("Failed to reserve partitions for query " + - "(partition of PARTITIONED cache is not found or not in OWNING state) [" + - "localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + - "cacheName=%s, part=%s, partFound=%s, partState=%s]", - ctx.localNodeId(), - nodeId, - reqId, - topVer, - cacheIds.get(i), - cctx.name(), - partId, - (part != null), - partState - ); - } - } - - if (!part.reserve()) { - return String.format("Failed to reserve partitions for query " + - "(partition of PARTITIONED cache cannot be reserved) [" + - "localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + - "cacheName=%s, part=%s, partFound=%s, partState=%s]", - ctx.localNodeId(), - nodeId, - reqId, - topVer, - cacheIds.get(i), - cctx.name(), - partId, - true, - partState - ); - } - - reserved.add(part); - - reservedCnt++; - - // Double check that we are still in owning state and partition contents are not cleared. - partState = part.state(); - - if (partState != OWNING) { - if (partState == LOST) - ignoreLostPartitionIfPossible(cctx, part); - else { - return String.format("Failed to reserve partitions for query " + - "(partition of PARTITIONED cache is not in OWNING state after reservation) [" + - "localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + - "cacheName=%s, part=%s, partState=%s]", - ctx.localNodeId(), - nodeId, - reqId, - topVer, - cacheIds.get(i), - cctx.name(), - partId, - partState - ); - } - } - } - - if (explicitParts == null && reservedCnt > 0) { - // We reserved all the primary partitions for cache, attempt to add group reservation. 
- GridDhtPartitionsReservation grp = new GridDhtPartitionsReservation(topVer, cctx, "SQL"); - - if (grp.register(reserved.subList(reserved.size() - reservedCnt, reserved.size()))) { - if (reservations.putIfAbsent(grpKey, grp) != null) - throw new IllegalStateException("Reservation already exists."); - - grp.onPublish(new CI1() { - @Override public void apply(GridDhtPartitionsReservation r) { - reservations.remove(grpKey, r); - } - }); - } - } - } - } - } - - return null; - } - - /** - * Decide whether to ignore or proceed with lost partition. - * - * @param cctx Cache context. - * @param part Partition. - * @throws IgniteCheckedException If failed. - */ - private static void ignoreLostPartitionIfPossible(GridCacheContext cctx, GridDhtLocalPartition part) - throws IgniteCheckedException { - PartitionLossPolicy plc = cctx.config().getPartitionLossPolicy(); - - if (plc != null) { - if (plc == READ_ONLY_SAFE || plc == READ_WRITE_SAFE) { - throw new CacheInvalidStateException("Failed to execute query because cache partition has been " + - "lost [cacheName=" + cctx.name() + ", part=" + part + ']'); - } - } - } - - /** - * @param ints Integers. - * @return Collection wrapper. - */ - private static Collection wrap(final int[] ints) { - if (ints == null) - return null; - - if (ints.length == 0) - return Collections.emptySet(); - - return new AbstractCollection() { - @SuppressWarnings("NullableProblems") - @Override public Iterator iterator() { - return new Iterator() { - /** */ - private int i = 0; - - @Override public boolean hasNext() { - return i < ints.length; - } - - @Override public Integer next() { - return ints[i++]; - } - - @Override public void remove() { - throw new UnsupportedOperationException(); - } - }; - } - - @Override public int size() { - return ints.length; - } - }; - } - /** * @param node Node. * @param req Query request. 
@@ -835,21 +572,27 @@ private void onQueryRequest0( MapQueryResults qr = null; - List reserved = new ArrayList<>(); + PartitionReservation reserved = null; try { // We want to reserve only in not SELECT FOR UPDATE case - // otherwise, their state is protected by locked topology. if (topVer != null && txDetails == null) { // Reserve primary for topology version or explicit partitions. - String err = reservePartitions(cacheIds, topVer, parts, reserved, node.id(), reqId); + reserved = h2.partitionReservationManager().reservePartitions( + cacheIds, + topVer, + parts, + node.id(), + reqId + ); - if (!F.isEmpty(err)) { + if (reserved.failed()) { // Unregister lazy worker because re-try may never reach this node again. if (lazy) stopAndUnregisterCurrentLazyWorker(); - sendRetry(node, reqId, segmentId, err); + sendRetry(node, reqId, segmentId, reserved.error()); return; } @@ -1055,11 +798,8 @@ private void onQueryRequest0( } } finally { - if (reserved != null) { - // Release reserved partitions. - for (int i = 0; i < reserved.size(); i++) - reserved.get(i).release(); - } + if (reserved != null) + reserved.release(); } } @@ -1098,20 +838,26 @@ private void onDmlRequest(final ClusterNode node, final GridH2DmlRequest req) th AffinityTopologyVersion topVer = req.topologyVersion(); - List reserved = new ArrayList<>(); + PartitionReservation reserved = null; MapNodeResults nodeResults = resultsForNode(node.id()); try { - String err = reservePartitions(cacheIds, topVer, parts, reserved, node.id(), reqId); + reserved = h2.partitionReservationManager().reservePartitions( + cacheIds, + topVer, + parts, + node.id(), + reqId + ); - if (!F.isEmpty(err)) { + if (reserved.failed()) { U.error(log, "Failed to reserve partitions for DML request. 
[localNodeId=" + ctx.localNodeId() + ", nodeId=" + node.id() + ", reqId=" + req.requestId() + ", cacheIds=" + cacheIds + ", topVer=" + topVer + ", parts=" + Arrays.toString(parts) + ']'); sendUpdateResponse(node, reqId, null, - "Failed to reserve partitions for DML request. " + err); + "Failed to reserve partitions for DML request. " + reserved.error()); return; } @@ -1173,11 +919,8 @@ private void onDmlRequest(final ClusterNode node, final GridH2DmlRequest req) th sendUpdateResponse(node, reqId, null, e.getMessage()); } finally { - if (!F.isEmpty(reserved)) { - // Release reserved partitions. - for (int i = 0; i < reserved.size(); i++) - reserved.get(i).release(); - } + if (reserved != null) + reserved.release(); nodeResults.removeUpdate(reqId); } @@ -1389,17 +1132,6 @@ private void sendRetry(ClusterNode node, long reqId, int segmentId, String retry } } - /** - * @param cacheName Cache name. - */ - public void onCacheStop(String cacheName) { - // Drop group reservations. - for (MapReservationKey grpKey : reservations.keySet()) { - if (F.eq(grpKey.cacheName(), cacheName)) - reservations.remove(grpKey); - } - } - /** * Unregister lazy worker if needed (i.e. if we are currently in lazy worker thread). 
*/ diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridReduceQueryExecutor.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridReduceQueryExecutor.java index 62c5c78a843ce..74c3005bccc1f 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridReduceQueryExecutor.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridReduceQueryExecutor.java @@ -47,6 +47,7 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteLogger; import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.cache.PartitionLossPolicy; import org.apache.ignite.cache.query.QueryCancelledException; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.events.DiscoveryEvent; @@ -112,6 +113,8 @@ import static java.util.Collections.singletonList; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SQL_RETRY_TIMEOUT; +import static org.apache.ignite.cache.PartitionLossPolicy.READ_ONLY_SAFE; +import static org.apache.ignite.cache.PartitionLossPolicy.READ_WRITE_SAFE; import static org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion.NONE; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.checkActive; import static org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.mvccEnabled; @@ -397,7 +400,13 @@ private boolean hasMovingPartitions(GridCacheContext cctx) { * @return Cache context. */ private GridCacheContext cacheContext(Integer cacheId) { - return ctx.cache().context().cacheContext(cacheId); + GridCacheContext cctx = ctx.cache().context().cacheContext(cacheId); + + if (cctx == null) + throw new CacheException(String.format("Cache not found on local node (was concurrently destroyed?) 
" + + "[cacheId=%d]", cacheId)); + + return cctx; } /** @@ -1202,9 +1211,6 @@ private boolean wasCancelled(CacheException e) { */ public void releaseRemoteResources(Collection nodes, ReduceQueryRun r, long qryReqId, boolean distributedJoins, MvccQueryTracker mvccTracker) { - if (mvccTracker != null) - mvccTracker.onDone(); - // For distributedJoins need always send cancel request to cleanup resources. if (distributedJoins) send(nodes, new GridQueryCancelRequest(qryReqId), null, false); @@ -1220,6 +1226,8 @@ public void releaseRemoteResources(Collection nodes, ReduceQueryRun if (!runs.remove(qryReqId, r)) U.warn(log, "Query run was already removed: " + qryReqId); + else if (mvccTracker != null) + mvccTracker.onDone(); } /** @@ -1694,6 +1702,24 @@ private NodesForPartitionsResult nodesForPartitions(List cacheIds, Affi Map partsMap = null; Map qryMap = null; + for (int cacheId : cacheIds) { + GridCacheContext cctx = cacheContext(cacheId); + + PartitionLossPolicy plc = cctx.config().getPartitionLossPolicy(); + + if (plc != READ_ONLY_SAFE && plc != READ_WRITE_SAFE) + continue; + + Collection lostParts = cctx.topology().lostPartitions(); + + for (int part : lostParts) { + if (parts == null || Arrays.binarySearch(parts, part) >= 0) { + throw new CacheException("Failed to execute query because cache partition has been " + + "lost [cacheName=" + cctx.name() + ", part=" + part + ']'); + } + } + } + if (isPreloadingActive(cacheIds)) { if (isReplicatedOnly) nodes = replicatedUnstableDataNodes(cacheIds, qryId); diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservation.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservation.java new file mode 100644 index 0000000000000..8af5b65e133de --- /dev/null +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservation.java @@ -0,0 +1,85 @@ +/* + * Licensed to the Apache 
Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.query.h2.twostep; + +import org.apache.ignite.internal.processors.cache.distributed.dht.GridReservable; +import org.jetbrains.annotations.Nullable; + +import java.util.List; +import java.util.concurrent.atomic.AtomicBoolean; + +/** + * Partition reservation for a specific query. + */ +public class PartitionReservation { + /** Reserved partitions. */ + private final List reserved; + + /** Error message. */ + private final String err; + + /** Release guard. */ + private final AtomicBoolean releaseGuard = new AtomicBoolean(); + + /** + * Constructor for successful reservation. + * + * @param reserved Reserved partitions. + */ + public PartitionReservation(List reserved) { + this(reserved, null); + } + + /** + * Base constructor. + * + * @param reserved Reserved partitions. + * @param err Error message. + */ + public PartitionReservation(@Nullable List reserved, @Nullable String err) { + this.reserved = reserved; + this.err = err; + } + + /** + * @return Error message (if any). + */ + @Nullable public String error() { + return err; + } + + /** + * @return {@code True} if reservation failed.
+ */ + public boolean failed() { + return err != null; + } + + /** + * Release partitions. + */ + public void release() { + if (!releaseGuard.compareAndSet(false, true)) + return; + + if (reserved != null) { + for (int i = 0; i < reserved.size(); i++) + reserved.get(i).release(); + } + } +} diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/MapReservationKey.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservationKey.java similarity index 77% rename from modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/MapReservationKey.java rename to modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservationKey.java index 9d2d7ba696849..0fad2c4dd0926 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/MapReservationKey.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservationKey.java @@ -19,11 +19,12 @@ import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.S; /** - * Mapper reservation key. + * Partition reservation key. */ -public class MapReservationKey { +public class PartitionReservationKey { /** Cache name. */ private final String cacheName; @@ -36,7 +37,7 @@ public class MapReservationKey { * @param cacheName Cache name. * @param topVer Topology version. */ - public MapReservationKey(String cacheName, AffinityTopologyVersion topVer) { + public PartitionReservationKey(String cacheName, AffinityTopologyVersion topVer) { this.cacheName = cacheName; this.topVer = topVer; } @@ -48,6 +49,13 @@ public String cacheName() { return cacheName; } + /** + * @return Topology version of reservation. 
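The `release()` method of `PartitionReservation` above is made idempotent by a compare-and-set on `releaseGuard`, so competing `finally` blocks cannot double-release the reserved partitions. A minimal standalone sketch of that idempotent-release pattern (the class and the counter are illustrative, not Ignite API):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative resource whose cleanup must run at most once,
// mirroring the releaseGuard CAS in PartitionReservation.release().
public class IdempotentRelease {
    /** Release guard: flips to true exactly once. */
    private final AtomicBoolean releaseGuard = new AtomicBoolean();

    /** Counts how many times cleanup actually ran (stand-in for releasing partitions). */
    private final AtomicInteger releaseCount = new AtomicInteger();

    /** Releases the resource; safe to call repeatedly from multiple code paths. */
    public void release() {
        // Only the first caller wins the CAS; all later calls are no-ops.
        if (!releaseGuard.compareAndSet(false, true))
            return;

        releaseCount.incrementAndGet();
    }

    /** @return Number of times the cleanup body executed. */
    public int releaseCount() {
        return releaseCount.get();
    }
}
```

The same shape appears in both `onQueryRequest0` and `onDmlRequest` above: the `finally` block can call `release()` unconditionally because repeated calls are harmless.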
+ */ + public AffinityTopologyVersion topologyVersion() { + return topVer; + } + /** {@inheritDoc} */ @Override public boolean equals(Object o) { if (this == o) @@ -56,7 +64,7 @@ public String cacheName() { if (o == null || getClass() != o.getClass()) return false; - MapReservationKey other = (MapReservationKey)o; + PartitionReservationKey other = (PartitionReservationKey)o; return F.eq(cacheName, other.cacheName) && F.eq(topVer, other.topVer); @@ -70,4 +78,9 @@ public String cacheName() { return res; } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(PartitionReservationKey.class, this); + } } diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservationManager.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservationManager.java new file mode 100644 index 0000000000000..932c270e4634e --- /dev/null +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/PartitionReservationManager.java @@ -0,0 +1,364 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.query.h2.twostep; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.UUID; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteLogger; +import org.apache.ignite.cache.PartitionLossPolicy; +import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.managers.communication.GridIoPolicy; +import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; +import org.apache.ignite.internal.processors.cache.CacheInvalidStateException; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridReservable; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.PartitionsExchangeAware; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionsReservation; +import org.apache.ignite.internal.util.typedef.CI1; +import org.apache.ignite.internal.util.typedef.F; +import org.jetbrains.annotations.Nullable; + +import static org.apache.ignite.cache.PartitionLossPolicy.READ_ONLY_SAFE; +import static org.apache.ignite.cache.PartitionLossPolicy.READ_WRITE_SAFE; +import static org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion.NONE; +import static org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.LOST; +import static 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState.OWNING; + +/** + * Class responsible for partition reservation for queries executed on the local node. Prevents partitions from being + * evicted from the node during query execution. + */ +public class PartitionReservationManager implements PartitionsExchangeAware { + /** Special instance of reservable object for REPLICATED caches. */ + private static final ReplicatedReservable REPLICATED_RESERVABLE = new ReplicatedReservable(); + + /** Kernal context. */ + private final GridKernalContext ctx; + + /** Group reservations cache. When the affinity version has not changed and all primary partitions must be reserved, + * we take the group reservation from this map instead of creating a new reservation group. + */ + private final ConcurrentMap reservations = new ConcurrentHashMap<>(); + + /** Logger. */ + private final IgniteLogger log; + + /** + * Constructor. + * + * @param ctx Context. + */ + public PartitionReservationManager(GridKernalContext ctx) { + this.ctx = ctx; + + log = ctx.log(PartitionReservationManager.class); + + ctx.cache().context().exchange().registerExchangeAwareComponent(this); + } + + /** + * @param cacheIds Cache IDs. + * @param reqTopVer Topology version from request. + * @param explicitParts Explicit partitions list. + * @param nodeId Node ID. + * @param reqId Request ID. + * @return Reservation object; if reservation failed, {@code failed()} returns {@code true} and {@code error()} contains the cause message. + * @throws IgniteCheckedException If failed.
+ */ + public PartitionReservation reservePartitions( + @Nullable List cacheIds, + AffinityTopologyVersion reqTopVer, + final int[] explicitParts, + UUID nodeId, + long reqId + ) throws IgniteCheckedException { + assert reqTopVer != null; + + AffinityTopologyVersion topVer = ctx.cache().context().exchange().lastAffinityChangedTopologyVersion(reqTopVer); + + if (F.isEmpty(cacheIds)) + return new PartitionReservation(Collections.emptyList()); + + Collection partIds; + + if (explicitParts == null) + partIds = null; + else if (explicitParts.length == 0) + partIds = Collections.emptyList(); + else { + partIds = new ArrayList<>(explicitParts.length); + + for (int explicitPart : explicitParts) + partIds.add(explicitPart); + } + + List reserved = new ArrayList<>(); + + for (int i = 0; i < cacheIds.size(); i++) { + GridCacheContext cctx = ctx.cache().context().cacheContext(cacheIds.get(i)); + + // Cache was not found, probably was not deployed yet. + if (cctx == null) { + return new PartitionReservation(reserved, + String.format("Failed to reserve partitions for query (cache is not " + + "found on local node) [localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s]", + ctx.localNodeId(), nodeId, reqId, topVer, cacheIds.get(i))); + } + + if (cctx.isLocal() || !cctx.rebalanceEnabled()) + continue; + + // For replicated cache topology version does not make sense. + final PartitionReservationKey grpKey = new PartitionReservationKey(cctx.name(), cctx.isReplicated() ? null : topVer); + + GridReservable r = reservations.get(grpKey); + + if (explicitParts == null && r != null) { // Try to reserve group partition if any and no explicits. 
+ if (r != REPLICATED_RESERVABLE) { + if (!r.reserve()) + return new PartitionReservation(reserved, + String.format("Failed to reserve partitions for query (group " + + "reservation failed) [localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + + "cacheName=%s]",ctx.localNodeId(), nodeId, reqId, topVer, cacheIds.get(i), cctx.name())); + + reserved.add(r); + } + } + else { // Try to reserve partitions one by one. + int partsCnt = cctx.affinity().partitions(); + + if (cctx.isReplicated()) { // Check all the partitions are in owning state for replicated cache. + if (r == null) { // Check only once. + for (int p = 0; p < partsCnt; p++) { + GridDhtLocalPartition part = partition(cctx, p); + + // We don't need to reserve partitions because they will not be evicted in replicated caches. + GridDhtPartitionState partState = part != null ? part.state() : null; + + if (partState != OWNING) + return new PartitionReservation(reserved, + String.format("Failed to reserve partitions for " + + "query (partition of REPLICATED cache is not in OWNING state) [" + + "localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + + "cacheName=%s, part=%s, partFound=%s, partState=%s]", + ctx.localNodeId(), + nodeId, + reqId, + topVer, + cacheIds.get(i), + cctx.name(), + p, + (part != null), + partState + )); + } + + // Mark that we checked this replicated cache. + reservations.putIfAbsent(grpKey, REPLICATED_RESERVABLE); + } + } + else { // Reserve primary partitions for partitioned cache (if no explicit given). + if (explicitParts == null) + partIds = cctx.affinity().primaryPartitions(ctx.localNodeId(), topVer); + + int reservedCnt = 0; + + for (int partId : partIds) { + GridDhtLocalPartition part = partition(cctx, partId); + + GridDhtPartitionState partState = part != null ? 
part.state() : null; + + if (partState != OWNING) { + if (partState == LOST) + ignoreLostPartitionIfPossible(cctx, part); + else { + return new PartitionReservation(reserved, + String.format("Failed to reserve partitions " + + "for query (partition of PARTITIONED cache is not found or not in OWNING " + + "state) [localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + + "cacheName=%s, part=%s, partFound=%s, partState=%s]", + ctx.localNodeId(), + nodeId, + reqId, + topVer, + cacheIds.get(i), + cctx.name(), + partId, + (part != null), + partState + )); + } + } + + if (!part.reserve()) { + return new PartitionReservation(reserved, + String.format("Failed to reserve partitions for query " + + "(partition of PARTITIONED cache cannot be reserved) [" + + "localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, cacheId=%s, " + + "cacheName=%s, part=%s, partFound=%s, partState=%s]", + ctx.localNodeId(), + nodeId, + reqId, + topVer, + cacheIds.get(i), + cctx.name(), + partId, + true, + partState + )); + } + + reserved.add(part); + + reservedCnt++; + + // Double check that we are still in owning state and partition contents are not cleared. + partState = part.state(); + + if (partState != OWNING) { + if (partState == LOST) + ignoreLostPartitionIfPossible(cctx, part); + else { + return new PartitionReservation(reserved, + String.format("Failed to reserve partitions for " + + "query (partition of PARTITIONED cache is not in OWNING state after " + + "reservation) [localNodeId=%s, rmtNodeId=%s, reqId=%s, affTopVer=%s, " + + "cacheId=%s, cacheName=%s, part=%s, partState=%s]", + ctx.localNodeId(), + nodeId, + reqId, + topVer, + cacheIds.get(i), + cctx.name(), + partId, + partState + )); + } + } + } + + if (explicitParts == null && reservedCnt > 0) { + // We reserved all the primary partitions for cache, attempt to add group reservation. 
+ GridDhtPartitionsReservation grp = new GridDhtPartitionsReservation(topVer, cctx, "SQL"); + + if (grp.register(reserved.subList(reserved.size() - reservedCnt, reserved.size()))) { + if (reservations.putIfAbsent(grpKey, grp) != null) + throw new IllegalStateException("Reservation already exists."); + + grp.onPublish(new CI1() { + @Override public void apply(GridDhtPartitionsReservation r) { + reservations.remove(grpKey, r); + } + }); + } + } + } + } + } + + return new PartitionReservation(reserved); + } + + /** + * @param cacheName Cache name. + */ + public void onCacheStop(String cacheName) { + // Drop group reservations. + for (PartitionReservationKey grpKey : reservations.keySet()) { + if (F.eq(grpKey.cacheName(), cacheName)) + reservations.remove(grpKey); + } + } + + /** + * Decide whether to ignore or proceed with lost partition. + * + * @param cctx Cache context. + * @param part Partition. + * @throws IgniteCheckedException If failed. + */ + private static void ignoreLostPartitionIfPossible(GridCacheContext cctx, GridDhtLocalPartition part) + throws IgniteCheckedException { + PartitionLossPolicy plc = cctx.config().getPartitionLossPolicy(); + + if (plc != null) { + if (plc == READ_ONLY_SAFE || plc == READ_WRITE_SAFE) { + throw new CacheInvalidStateException("Failed to execute query because cache partition has been " + + "lost [cacheName=" + cctx.name() + ", part=" + part + ']'); + } + } + } + + /** + * @param cctx Cache context. + * @param p Partition ID. + * @return Partition. + */ + private static GridDhtLocalPartition partition(GridCacheContext cctx, int p) { + return cctx.topology().localPartition(p, NONE, false); + } + + /** + * Cleanup group reservations cache on change affinity version. + */ + @Override public void onDoneAfterTopologyUnlock(final GridDhtPartitionsExchangeFuture fut) { + try { + // Must not do anything at the exchange thread. Dispatch to the management thread pool. 
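The group-reservation bookkeeping above pairs `ConcurrentMap.putIfAbsent` with a publish callback that removes the entry when the reservation is invalidated, so stale groups never linger in the map. A simplified sketch of that self-unregistering registry (the `String` key and the nested `Reservation` type are illustrative stand-ins for `PartitionReservationKey` and `GridDhtPartitionsReservation`):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Consumer;

// Illustrative registry mirroring the reservations map above: a group
// reservation is published under a key and removes itself on invalidation.
public class GroupReservationRegistry {
    /** Stand-in for a group reservation with an invalidation callback. */
    public static class Reservation {
        private Consumer<Reservation> onInvalidate = r -> {};

        void onPublish(Consumer<Reservation> cb) { onInvalidate = cb; }

        void invalidate() { onInvalidate.accept(this); }
    }

    private final ConcurrentMap<String, Reservation> reservations = new ConcurrentHashMap<>();

    /** Registers a reservation; fails if one already exists for the key. */
    public void register(String key, Reservation r) {
        if (reservations.putIfAbsent(key, r) != null)
            throw new IllegalStateException("Reservation already exists.");

        // On invalidation, drop exactly this entry, like
        // reservations.remove(grpKey, r) in the Ignite code above.
        r.onPublish(res -> reservations.remove(key, res));
    }

    /** @return {@code true} if a reservation is registered for the key. */
    public boolean contains(String key) { return reservations.containsKey(key); }
}
```

Using the two-argument `remove(key, value)` form matters: it only removes the entry if it still maps to the same reservation object, so a concurrently registered replacement is left untouched.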
+ ctx.closure().runLocal(() -> { + AffinityTopologyVersion topVer = ctx.cache().context().exchange() + .lastAffinityChangedTopologyVersion(fut.topologyVersion()); + + reservations.forEach((key, r) -> { + if (r != REPLICATED_RESERVABLE && !F.eq(key.topologyVersion(), topVer)) { + assert r instanceof GridDhtPartitionsReservation; + + ((GridDhtPartitionsReservation)r).invalidate(); + } + }); + }, + GridIoPolicy.MANAGEMENT_POOL); + } + catch (Throwable e) { + log.error("Unexpected exception on start reservations cleanup", e); + } + } + + /** + * Mapper fake reservation object for replicated caches. + */ + private static class ReplicatedReservable implements GridReservable { + /** {@inheritDoc} */ + @Override public boolean reserve() { + throw new IllegalStateException(); + } + + /** {@inheritDoc} */ + @Override public void release() { + throw new IllegalStateException(); + } + } +} diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeRequest.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeRequest.java index 702488487d9c0..186104ad44b54 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeRequest.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeRequest.java @@ -21,6 +21,7 @@ import java.util.List; import java.util.UUID; import org.apache.ignite.internal.GridDirectCollection; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType; @@ -30,6 +31,7 @@ /** * Range request. 
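The `onDoneAfterTopologyUnlock` listener above deliberately does no work on the exchange thread: invalidation of stale group reservations is dispatched to the management pool. A simplified sketch of that dispatch-and-invalidate pattern (the executor, `long` version keys, and `Runnable` reservations are stand-ins for the Ignite types):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative cleanup mirroring the exchange-aware listener: reservations
// keyed by topology version are invalidated on a pool thread, never on the
// thread that delivered the topology event.
public class StaleReservationCleaner {
    private final ConcurrentMap<Long, Runnable> reservationsByVer = new ConcurrentHashMap<>();

    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    /** Registers an invalidation action for a reservation made at the given version. */
    public void register(long topVer, Runnable invalidate) {
        reservationsByVer.put(topVer, invalidate);
    }

    /** Called from the exchange thread; only schedules work, never blocks. */
    public void onTopologyChanged(long newTopVer) {
        pool.execute(() ->
            reservationsByVer.forEach((ver, invalidate) -> {
                if (ver != newTopVer) {
                    invalidate.run(); // e.g. GridDhtPartitionsReservation.invalidate()

                    reservationsByVer.remove(ver, invalidate);
                }
            }));
    }

    /** Drains the pool so tests can observe the cleanup result. */
    public boolean awaitQuiescence() {
        pool.shutdown();

        try {
            return pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();

            return false;
        }
    }

    /** @return Number of live reservations. */
    public int size() { return reservationsByVer.size(); }
}
```

Keeping the exchange callback cheap is the point of the design: a slow invalidation must not delay partition map exchange for the whole cluster.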
*/ +@IgniteCodeGeneratingFail public class GridH2IndexRangeRequest implements Message { /** */ private UUID originNodeId; diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeResponse.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeResponse.java index 4fe660c0677a1..18814bb3f71a7 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeResponse.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2IndexRangeResponse.java @@ -21,6 +21,7 @@ import java.util.List; import java.util.UUID; import org.apache.ignite.internal.GridDirectCollection; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageCollectionItemType; @@ -30,6 +31,7 @@ /** * Range response message. 
*/ +@IgniteCodeGeneratingFail public class GridH2IndexRangeResponse implements Message { /** */ public static final byte STATUS_OK = 0; diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2QueryRequest.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2QueryRequest.java index 0bec66e56be42..cca366af7ce9f 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2QueryRequest.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2QueryRequest.java @@ -29,6 +29,7 @@ import org.apache.ignite.internal.GridDirectMap; import org.apache.ignite.internal.GridDirectTransient; import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot; @@ -50,6 +51,7 @@ /** * Query request. 
*/ +@IgniteCodeGeneratingFail public class GridH2QueryRequest implements Message, GridCacheQueryMarshallable { /** */ private static final long serialVersionUID = 0L; @@ -518,7 +520,7 @@ public void txDetails(GridH2SelectForUpdateTxDetails txDetails) { writer.incrementState(); case 9: - if (!writer.writeMessage("topVer", topVer)) + if (!writer.writeAffinityTopologyVersion("topVer", topVer)) return false; writer.incrementState(); @@ -633,7 +635,7 @@ public void txDetails(GridH2SelectForUpdateTxDetails txDetails) { reader.incrementState(); case 9: - topVer = reader.readMessage("topVer"); + topVer = reader.readAffinityTopologyVersion("topVer"); if (!reader.isLastRead()) return false; diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2ValueMessage.java b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2ValueMessage.java index 18f88803bd218..fd5b5243a7be8 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2ValueMessage.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/msg/GridH2ValueMessage.java @@ -20,6 +20,7 @@ import java.nio.ByteBuffer; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.internal.GridKernalContext; +import org.apache.ignite.internal.IgniteCodeGeneratingFail; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.plugin.extensions.communication.MessageReader; import org.apache.ignite.plugin.extensions.communication.MessageWriter; @@ -28,6 +29,7 @@ /** * Abstract message wrapper for H2 values. */ +@IgniteCodeGeneratingFail public abstract class GridH2ValueMessage implements Message { /** * Gets H2 value. 
diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/ValidateIndexesClosure.java b/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/ValidateIndexesClosure.java index 503b57c494c70..b6909e3191e7f 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/ValidateIndexesClosure.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/ValidateIndexesClosure.java @@ -18,6 +18,8 @@ import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; +import java.nio.ByteBuffer; +import java.nio.ByteOrder; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; @@ -37,22 +39,29 @@ import org.apache.ignite.IgniteException; import org.apache.ignite.IgniteInterruptedException; import org.apache.ignite.IgniteLogger; +import org.apache.ignite.internal.GridKernalContext; import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.pagemem.PageIdAllocator; +import org.apache.ignite.internal.pagemem.PageIdUtils; +import org.apache.ignite.internal.pagemem.store.PageStore; import org.apache.ignite.internal.processors.cache.CacheGroupContext; import org.apache.ignite.internal.processors.cache.CacheObject; import org.apache.ignite.internal.processors.cache.CacheObjectContext; import org.apache.ignite.internal.processors.cache.CacheObjectUtils; import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.GridCacheContext; +import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; import org.apache.ignite.internal.processors.cache.KeyCacheObject; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionState; import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow; +import 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import org.apache.ignite.internal.processors.cache.verify.PartitionKey; import org.apache.ignite.internal.processors.query.GridQueryProcessor; import org.apache.ignite.internal.processors.query.GridQueryTypeDescriptor; import org.apache.ignite.internal.processors.query.QueryTypeDescriptorImpl; import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; +import org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex; import org.apache.ignite.internal.processors.query.h2.opt.GridH2Row; import org.apache.ignite.internal.processors.query.h2.opt.GridH2RowDescriptor; import org.apache.ignite.internal.processors.query.h2.opt.GridH2Table; @@ -104,9 +113,15 @@ public class ValidateIndexesClosure implements IgniteCallable> partArgs = new ArrayList<>(); List> idxArgs = new ArrayList<>(); + totalCacheGrps = grpIds.size(); + + Map integrityCheckResults = integrityCheckIndexesPartitions(grpIds); + for (Integer grpId : grpIds) { CacheGroupContext grpCtx = ignite.context().cache().cacheGroup(grpId); - if (grpCtx == null) + if (grpCtx == null || integrityCheckResults.containsKey(grpId)) continue; List parts = grpCtx.topology().localPartitions(); @@ -210,7 +229,8 @@ private VisorValidateIndexesJobResult call0() throws Exception { ArrayList indexes = gridH2Tbl.getIndexes(); for (Index idx : indexes) - idxArgs.add(new T2<>(ctx, idx)); + if (idx instanceof H2TreeIndex) + idxArgs.add(new T2<>(ctx, idx)); } } } @@ -220,15 +240,15 @@ private VisorValidateIndexesJobResult call0() throws Exception { Collections.shuffle(partArgs); Collections.shuffle(idxArgs); + totalPartitions = partArgs.size(); + totalIndexes = idxArgs.size(); + for (T2 t2 : partArgs) procPartFutures.add(processPartitionAsync(t2.get1(), t2.get2())); for (T2 t2 : idxArgs) procIdxFutures.add(processIndexAsync(t2.get1(), t2.get2())); - totalPartitions = procPartFutures.size(); - totalIndexes = procIdxFutures.size(); - 
Map partResults = new HashMap<>(); Map idxResults = new HashMap<>(); @@ -240,7 +260,8 @@ private VisorValidateIndexesJobResult call0() throws Exception { Map partRes = fut.get(); - partResults.putAll(partRes); + if (!partRes.isEmpty() && partRes.entrySet().stream().anyMatch(e -> !e.getValue().issues().isEmpty())) + partResults.putAll(partRes); } for (; curIdx < procIdxFutures.size(); curIdx++) { @@ -248,8 +269,12 @@ private VisorValidateIndexesJobResult call0() throws Exception { Map idxRes = fut.get(); - idxResults.putAll(idxRes); + if (!idxRes.isEmpty() && idxRes.entrySet().stream().anyMatch(e -> !e.getValue().issues().isEmpty())) + idxResults.putAll(idxRes); } + + log.warning("ValidateIndexesClosure finished: processed " + totalPartitions + " partitions and " + + totalIndexes + " indexes."); } catch (InterruptedException | ExecutionException e) { for (int j = curPart; j < procPartFutures.size(); j++) @@ -258,15 +283,102 @@ private VisorValidateIndexesJobResult call0() throws Exception { for (int j = curIdx; j < procIdxFutures.size(); j++) procIdxFutures.get(j).cancel(false); - if (e instanceof InterruptedException) - throw new IgniteInterruptedException((InterruptedException)e); - else if (e.getCause() instanceof IgniteException) - throw (IgniteException)e.getCause(); - else - throw new IgniteException(e.getCause()); + throw unwrapFutureException(e); + } + + return new VisorValidateIndexesJobResult(partResults, idxResults, integrityCheckResults.values()); + } + + /** + * @param grpIds Group ids. 
+ */ + private Map integrityCheckIndexesPartitions(Set grpIds) { + List>> integrityCheckFutures = new ArrayList<>(grpIds.size()); + + for (Integer grpId: grpIds) { + final CacheGroupContext grpCtx = ignite.context().cache().cacheGroup(grpId); + + if (grpCtx == null || !grpCtx.persistenceEnabled()) { + integrityCheckedIndexes.incrementAndGet(); + + continue; + } + + Future> checkFut = + calcExecutor.submit(new Callable>() { + @Override public T2 call() throws Exception { + IndexIntegrityCheckIssue issue = integrityCheckIndexPartition(grpCtx); + + return new T2<>(grpCtx.groupId(), issue); + } + }); + + integrityCheckFutures.add(checkFut); + } + + Map integrityCheckResults = new HashMap<>(); + + int curFut = 0; + try { + for (Future> fut : integrityCheckFutures) { + T2 res = fut.get(); + + if (res.getValue() != null) + integrityCheckResults.put(res.getKey(), res.getValue()); + } + } + catch (InterruptedException | ExecutionException e) { + for (int j = curFut; j < integrityCheckFutures.size(); j++) + integrityCheckFutures.get(j).cancel(false); + + throw unwrapFutureException(e); + } + + return integrityCheckResults; + } + + /** + * @param gctx Cache group context. 
+ */ + private IndexIntegrityCheckIssue integrityCheckIndexPartition(CacheGroupContext gctx) { + GridKernalContext ctx = ignite.context(); + GridCacheSharedContext cctx = ctx.cache().context(); + + try { + FilePageStoreManager pageStoreMgr = (FilePageStoreManager)cctx.pageStore(); + + if (pageStoreMgr == null) + return null; + + int pageSz = gctx.dataRegion().pageMemory().pageSize(); + + PageStore pageStore = pageStoreMgr.getStore(gctx.groupId(), PageIdAllocator.INDEX_PARTITION); + + long pageId = PageIdUtils.pageId(PageIdAllocator.INDEX_PARTITION, PageIdAllocator.FLAG_IDX, 0); + + ByteBuffer buf = ByteBuffer.allocateDirect(pageSz); + + buf.order(ByteOrder.nativeOrder()); + + for (int pageNo = 0; pageNo < pageStore.pages(); pageId++, pageNo++) { + buf.clear(); + + pageStore.read(pageId, buf, true); + } + + return null; } + catch (Throwable t) { + log.error("Integrity check of index partition of cache group " + gctx.cacheOrGroupName() + " failed", t); + + return new IndexIntegrityCheckIssue(gctx.cacheOrGroupName(), t); + } + finally { + integrityCheckedIndexes.incrementAndGet(); - return new VisorValidateIndexesJobResult(partResults, idxResults); + printProgressIfNeeded("Current progress of ValidateIndexesClosure: checked integrity of " + + integrityCheckedIndexes.get() + " index partitions of " + totalCacheGrps + " cache groups"); + } } /** @@ -371,6 +483,19 @@ else if (current++ % checkThrough > 0) if (cacheCtx == null) throw new IgniteException("Unknown cacheId of CacheDataRow: " + cacheId); + if (row.link() == 0L) { + String errMsg = "Invalid partition row, possibly deleted"; + + log.error(errMsg); + + IndexValidationIssue is = new IndexValidationIssue(null, cacheCtx.name(), null, + new IgniteCheckedException(errMsg)); + + enoughIssues |= partRes.reportIssue(is); + + continue; + } + try { QueryTypeDescriptorImpl res = (QueryTypeDescriptorImpl)m.invoke( qryProcessor, cacheCtx.name(), cacheCtx.cacheObjectContext(), row.key(), row.value(), true); @@ -392,6 +517,9 
@@ else if (current++ % checkThrough > 0) ArrayList indexes = gridH2Tbl.getIndexes(); for (Index idx : indexes) { + if (!(idx instanceof H2TreeIndex)) + continue; + try { Cursor cursor = idx.find((Session) null, h2Row, h2Row); @@ -434,7 +562,7 @@ else if (current++ % checkThrough > 0) finally { part.release(); - printProgressIfNeeded(); + printProgressOfIndexValidationIfNeeded(); } PartitionKey partKey = new PartitionKey(grpCtx.groupId(), part.id(), grpCtx.cacheOrGroupName()); @@ -447,16 +575,21 @@ else if (current++ % checkThrough > 0) /** * */ - private void printProgressIfNeeded() { - long curTs = U.currentTimeMillis(); + private void printProgressOfIndexValidationIfNeeded() { + printProgressIfNeeded("Current progress of ValidateIndexesClosure: processed " + + processedPartitions.get() + " of " + totalPartitions + " partitions, " + + processedIndexes.get() + " of " + totalIndexes + " SQL indexes"); + } + /** + * + */ + private void printProgressIfNeeded(String msg) { + long curTs = U.currentTimeMillis(); long lastTs = lastProgressPrintTs.get(); - if (curTs - lastTs >= 60_000 && lastProgressPrintTs.compareAndSet(lastTs, curTs)) { - log.warning("Current progress of ValidateIndexesClosure: processed " + - processedPartitions.get() + " of " + totalPartitions + " partitions, " + - processedIndexes.get() + " of " + totalIndexes + " SQL indexes"); - } + if (curTs - lastTs >= 60_000 && lastProgressPrintTs.compareAndSet(lastTs, curTs)) + log.warning(msg); } /** @@ -546,12 +679,14 @@ else if (current++ % checkThrough > 0) h2key = h2Row.key(); - CacheDataRow cacheDataStoreRow = ctx.group().offheap().read(ctx, h2key); + if (h2Row.link() != 0L) { + CacheDataRow cacheDataStoreRow = ctx.group().offheap().read(ctx, h2key); - if (cacheDataStoreRow == null) - throw new IgniteCheckedException("Key is present in SQL index, but can't be found in CacheDataTree."); - - previousKey = h2key; + if (cacheDataStoreRow == null) + throw new IgniteCheckedException("Key is present in SQL 
index, but can't be found in CacheDataTree."); + } + else + throw new IgniteCheckedException("Invalid index row, possibly deleted " + h2Row); } catch (Throwable t) { Object o = CacheObjectUtils.unwrapBinaryIfNeeded( @@ -564,14 +699,34 @@ else if (current++ % checkThrough > 0) enoughIssues |= idxValidationRes.reportIssue(is); } + finally { + previousKey = h2key; + } } String uniqueIdxName = "[cache=" + ctx.name() + ", idx=" + idx.getName() + "]"; processedIndexes.incrementAndGet(); - printProgressIfNeeded(); + printProgressOfIndexValidationIfNeeded(); return Collections.singletonMap(uniqueIdxName, idxValidationRes); } + + /** + * @param e Future result exception. + * @return Unwrapped exception. + */ + private IgniteException unwrapFutureException(Exception e) { + assert e instanceof InterruptedException || e instanceof ExecutionException : "Expecting either InterruptedException " + + "or ExecutionException"; + + if (e instanceof InterruptedException) + return new IgniteInterruptedException((InterruptedException)e); + else if (e.getCause() instanceof IgniteException) + return (IgniteException)e.getCause(); + else + return new IgniteException(e.getCause()); + } + } diff --git a/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTask.java b/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTask.java index abb7f7ee55ec1..922c53e15ed97 100644 --- a/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTask.java +++ b/modules/indexing/src/main/java/org/apache/ignite/internal/visor/verify/VisorValidateIndexesTask.java @@ -17,16 +17,22 @@ package org.apache.ignite.internal.visor.verify; +import java.util.ArrayList; +import java.util.Collection; import java.util.HashMap; +import java.util.HashSet; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.UUID; import org.apache.ignite.IgniteException; +import 
org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.compute.ComputeJobResult; import org.apache.ignite.internal.processors.task.GridInternal; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.visor.VisorJob; import org.apache.ignite.internal.visor.VisorMultiNodeTask; +import org.apache.ignite.internal.visor.VisorTaskArgument; import org.jetbrains.annotations.Nullable; /** @@ -58,6 +64,29 @@ public class VisorValidateIndexesTask extends VisorMultiNodeTask jobNodes(VisorTaskArgument arg) { + Collection srvNodes = ignite.cluster().forServers().nodes(); + Collection ret = new ArrayList<>(srvNodes.size()); + + VisorValidateIndexesTaskArg taskArg = arg.getArgument(); + + Set nodeIds = taskArg.getNodes() != null ? new HashSet<>(taskArg.getNodes()) : null; + + if (nodeIds == null) { + for (ClusterNode node : srvNodes) + ret.add(node.id()); + } + else { + for (ClusterNode node : srvNodes) { + if (nodeIds.contains(node.id())) + ret.add(node.id()); + } + } + + return ret; + } + /** * */ diff --git a/modules/indexing/src/test/java/org/apache/ignite/client/ClientTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/client/ClientTestSuite.java index 623a19ebe66a5..a9d5c8a97f98f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/client/ClientTestSuite.java +++ b/modules/indexing/src/test/java/org/apache/ignite/client/ClientTestSuite.java @@ -1,12 +1,11 @@ /* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at + * Copyright 2019 GridGain Systems, Inc. and Contributors. 
* - * http://www.apache.org/licenses/LICENSE-2.0 + * Licensed under the GridGain Community Edition License (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -34,7 +33,9 @@ SecurityTest.class, FunctionalQueryTest.class, IgniteBinaryQueryTest.class, - SslParametersTest.class + SslParametersTest.class, + ConnectToStartingNodeTest.class, + AsyncChannelTest.class }) public class ClientTestSuite { // No-op. diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/AffinityKeyNameAndValueFieldNameConflictTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/AffinityKeyNameAndValueFieldNameConflictTest.java new file mode 100644 index 0000000000000..ff147e7ba7a76 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/AffinityKeyNameAndValueFieldNameConflictTest.java @@ -0,0 +1,269 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import java.io.Serializable; +import java.util.List; +import java.util.concurrent.Callable; +import java.util.function.BiFunction; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import javax.cache.CacheException; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheKeyConfiguration; +import org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.cache.affinity.AffinityKeyMapped; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.cache.query.annotations.QuerySqlField; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.internal.S; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * IGNITE-7793 SQL does not work if the value has a SQL field whose name equals the affinity key name. + */ +@RunWith(JUnit4.class) +public class AffinityKeyNameAndValueFieldNameConflictTest extends GridCommonAbstractTest { + /** */ + private static final String PERSON_CACHE = "person"; + + /** */ + private Class keyCls; + + /** */ + private BiFunction keyProducer; + + /** */ + private boolean qryEntityCfg; + + /** */ + private boolean keyFieldSpecified; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + CacheConfiguration ccfg = new CacheConfiguration(PERSON_CACHE); + + if (qryEntityCfg) { + 
CacheKeyConfiguration keyCfg = new CacheKeyConfiguration(keyCls.getName(), "name"); + cfg.setCacheKeyConfiguration(keyCfg); + + QueryEntity entity = new QueryEntity(); + entity.setKeyType(keyCls.getName()); + entity.setValueType(Person.class.getName()); + if (keyFieldSpecified) + entity.setKeyFields(Stream.of("name").collect(Collectors.toSet())); + + entity.addQueryField("id", Integer.class.getName(), null); + entity.addQueryField("name", String.class.getName(), null); + + ccfg.setQueryEntities(F.asList(entity)); + } else { + CacheKeyConfiguration keyCfg = new CacheKeyConfiguration(keyCls); + cfg.setCacheKeyConfiguration(keyCfg); + + ccfg.setIndexedTypes(keyCls, Person.class); + } + + cfg.setCacheConfiguration(ccfg); + + return cfg; + } + + /** + * @throws Exception If failed. + */ + @Test + public void testQueryEntityConfig() throws Exception { + qryEntityCfg = true; + keyCls = PersonKey1.class; + keyProducer = PersonKey1::new; + checkQuery(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testQueryEntityConfigKeySpecified() throws Exception { + qryEntityCfg = true; + keyFieldSpecified = true; + keyCls = PersonKey1.class; + keyProducer = PersonKey1::new; + checkQuery(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testAnnotationConfig() throws Exception { + keyCls = PersonKey1.class; + keyProducer = PersonKey1::new; + checkQuery(); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testAnnotationConfigCollision() throws Exception { + keyCls = PersonKey2.class; + keyProducer = PersonKey2::new; + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Object call() throws Exception { + checkQuery(); + + return null; + } + }, CacheException.class, "Property with name 'name' already exists."); + } + + /** + * @throws Exception If failed. 
+ */ + private void checkQuery() throws Exception { + startGrid(2); + + Ignite g = grid(2); + + IgniteCache personCache = g.cache(PERSON_CACHE); + + personCache.put(keyProducer.apply(1, "o1"), new Person("p1")); + + SqlFieldsQuery query = new SqlFieldsQuery("select * from \"" + PERSON_CACHE + "\"." + Person.class.getSimpleName() + " it where it.name=?"); + + List> result = personCache.query(query.setArgs(keyFieldSpecified ? "o1" : "p1")).getAll(); + + assertEquals(1, result.size()); + + stopAllGrids(); + } + + /** + * + */ + public static class PersonKey1 { + /** */ + @QuerySqlField + private int id; + + /** */ + @AffinityKeyMapped + private String name; + + /** + * @param id Key. + * @param name Affinity key. + */ + public PersonKey1(int id, String name) { + this.id = id; + this.name = name; + } + + /** {@inheritDoc} */ + @Override public boolean equals(Object o) { + if (this == o) + return true; + + if (o == null || getClass() != o.getClass()) + return false; + + PersonKey1 other = (PersonKey1)o; + + return id == other.id; + } + + /** {@inheritDoc} */ + @Override public int hashCode() { + return id; + } + } + + /** + * + */ + public static class PersonKey2 { + /** */ + @QuerySqlField + private int id; + + /** */ + @QuerySqlField + @AffinityKeyMapped + private String name; + + /** + * @param id Key. + * @param name Affinity key. + */ + public PersonKey2(int id, String name) { + this.id = id; + this.name = name; + } + + /** {@inheritDoc} */ + @Override public boolean equals(Object o) { + if (this == o) + return true; + + if (o == null || getClass() != o.getClass()) + return false; + + PersonKey2 other = (PersonKey2)o; + + return id == other.id; + } + + /** {@inheritDoc} */ + @Override public int hashCode() { + return id; + } + } + + /** + * + */ + private static class Person implements Serializable { + + /** */ + @QuerySqlField + String name; + + /** + * @param name Name. 
+ */ + public Person(String name) { + this.name = name; + } + + /** {@inheritDoc} */ + @Override public String toString() { + return S.toString(Person.class, this); + } + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BigEntryQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BigEntryQueryTest.java index 3b5562a3035b8..4e443f9b113cb 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BigEntryQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BigEntryQueryTest.java @@ -39,10 +39,14 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.events.EventType; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * This is a specific test for IGNITE-8900. */ +@RunWith(JUnit4.class) public class BigEntryQueryTest extends GridCommonAbstractTest { /** */ public static final String CACHE = "cache"; @@ -55,6 +59,7 @@ public class BigEntryQueryTest extends GridCommonAbstractTest { /** * @throws Exception if failed. */ + @Test public void testBigEntriesSelect() throws Exception { startGrids(2); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinaryMetadataConcurrentUpdateWithIndexesTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinaryMetadataConcurrentUpdateWithIndexesTest.java new file mode 100644 index 0000000000000..861f16a1e4517 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinaryMetadataConcurrentUpdateWithIndexesTest.java @@ -0,0 +1,440 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import java.io.IOException; +import java.io.OutputStream; +import java.lang.reflect.Field; +import java.net.Socket; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.LinkedHashMap; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.IgniteSystemProperties; +import org.apache.ignite.binary.BinaryObject; +import org.apache.ignite.binary.BinaryObjectBuilder; +import org.apache.ignite.binary.BinaryObjectException; +import org.apache.ignite.binary.BinaryType; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.CacheWriteSynchronizationMode; +import org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.cache.QueryIndex; +import org.apache.ignite.cache.QueryIndexType; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import 
org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.events.EventType; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.binary.BinaryMetadata; +import org.apache.ignite.internal.managers.discovery.CustomMessageWrapper; +import org.apache.ignite.internal.managers.discovery.DiscoveryCustomMessage; +import org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl; +import org.apache.ignite.internal.processors.cache.binary.MetadataUpdateProposedMessage; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteBiClosure; +import org.apache.ignite.spi.discovery.DiscoverySpiCustomMessage; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryAbstractMessage; +import org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static java.util.concurrent.TimeUnit.MILLISECONDS; +import static org.apache.ignite.testframework.GridTestUtils.runAsync; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Tests scenario for too early metadata update completion in case of multiple concurrent updates for the same schema. + *
 + * <p>
 + * Scenario is the following:
 + *
 + * <ul>
 + *     <li>Start 4 nodes, connect client to node 2 in topology order (starting from 1).</li>
 + *     <li>Start two concurrent transactions from client node producing same schema update.</li>
 + *     <li>Delay second update until first update will return to client with stamped propose message and writes new
 + *     schema to local metadata cache.</li>
 + *     <li>Unblock second update. It should correctly wait until the metadata is applied on all
 + *     nodes or tx will fail on commit.</li>
 + * </ul>
    + */ +@RunWith(JUnit4.class) +public class BinaryMetadataConcurrentUpdateWithIndexesTest extends GridCommonAbstractTest { + /** */ + private static final int FIELDS = 2; + + /** */ + private static final int MB = 1024 * 1024; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + cfg.setIncludeEventTypes(EventType.EVTS_DISCOVERY); + + BlockTcpDiscoverySpi spi = new BlockTcpDiscoverySpi(); + + Field rndAddrsField = U.findField(BlockTcpDiscoverySpi.class, "skipAddrsRandomization"); + + assertNotNull(rndAddrsField); + + rndAddrsField.set(spi, true); + + cfg.setDiscoverySpi(spi.setIpFinder(sharedStaticIpFinder)); + + cfg.setClientMode(igniteInstanceName.startsWith("client")); + + QueryEntity qryEntity = new QueryEntity("java.lang.Integer", "Value"); + + LinkedHashMap fields = new LinkedHashMap<>(); + + Collection indexes = new ArrayList<>(FIELDS); + + for (int i = 0; i < FIELDS; i++) { + String name = "s" + i; + + fields.put(name, "java.lang.String"); + + indexes.add(new QueryIndex(name, QueryIndexType.SORTED)); + } + + qryEntity.setFields(fields); + + qryEntity.setIndexes(indexes); + + cfg.setDataStorageConfiguration(new DataStorageConfiguration(). + setDefaultDataRegionConfiguration(new DataRegionConfiguration().setMaxSize(50 * MB))); + + cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME). + setBackups(0). + setQueryEntities(Collections.singleton(qryEntity)). + setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL). + setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC). + setCacheMode(CacheMode.PARTITIONED)); + + return cfg; + } + + /** Flag to start syncing metadata requests. Should skip on exchange. */ + private volatile boolean syncMeta; + + /** Metadata init latch. Both threads must request initial metadata. 
*/ + private CountDownLatch initMetaReq = new CountDownLatch(2); + + /** Thread local flag for need of waiting local metadata update. */ + private ThreadLocal delayMetadataUpdateThreadLoc = new ThreadLocal<>(); + + /** Latch for waiting local metadata update. */ + public static final CountDownLatch localMetaUpdatedLatch = new CountDownLatch(1); + + /** */ + @Test + public void testMissingSchemaUpdate() throws Exception { + // Start order is important. + Ignite node0 = startGrid("node0"); + + Ignite node1 = startGrid("node1"); + + IgniteEx client0 = startGrid("client0"); + + CacheObjectBinaryProcessorImpl.TestBinaryContext clientCtx = + (CacheObjectBinaryProcessorImpl.TestBinaryContext)((CacheObjectBinaryProcessorImpl)client0.context(). + cacheObjects()).binaryContext(); + + clientCtx.addListener(new CacheObjectBinaryProcessorImpl.TestBinaryContext.TestBinaryContextListener() { + @Override public void onAfterMetadataRequest(int typeId, BinaryType type) { + if (syncMeta) { + try { + initMetaReq.countDown(); + + initMetaReq.await(); + } + catch (Exception e) { + throw new BinaryObjectException(e); + } + } + } + + @Override public void onBeforeMetadataUpdate(int typeId, BinaryMetadata metadata) { + // Delay one of updates until schema is locally updated on propose message. 
+ if (delayMetadataUpdateThreadLoc.get() != null) + await(localMetaUpdatedLatch, 5000); + } + }); + + Ignite node2 = startGrid("node2"); + + Ignite node3 = startGrid("node3"); + + startGrid("node4"); + + node0.cluster().active(true); + + awaitPartitionMapExchange(); + + syncMeta = true; + + CountDownLatch clientProposeMsgBlockedLatch = new CountDownLatch(1); + + AtomicBoolean clientWait = new AtomicBoolean(); + final Object clientMux = new Object(); + + AtomicBoolean srvWait = new AtomicBoolean(); + final Object srvMux = new Object(); + + ((BlockTcpDiscoverySpi)node1.configuration().getDiscoverySpi()).setClosure((snd, msg) -> { + if (msg instanceof MetadataUpdateProposedMessage) { + if (Thread.currentThread().getName().contains("client")) { + log.info("Block custom message to client0: [locNode=" + snd + ", msg=" + msg + ']'); + + clientProposeMsgBlockedLatch.countDown(); + + // Message to client + synchronized (clientMux) { + while (!clientWait.get()) + try { + clientMux.wait(); + } + catch (InterruptedException e) { + fail(); + } + } + } + } + + return null; + }); + + ((BlockTcpDiscoverySpi)node2.configuration().getDiscoverySpi()).setClosure((snd, msg) -> { + if (msg instanceof MetadataUpdateProposedMessage) { + MetadataUpdateProposedMessage msg0 = (MetadataUpdateProposedMessage)msg; + + int pendingVer = U.field(msg0, "pendingVer"); + + // Should not block propose messages until they reach coordinator. 
+ if (pendingVer == 0) + return null; + + log.info("Block custom message to next server: [locNode=" + snd + ", msg=" + msg + ']'); + + // Message to client + synchronized (srvMux) { + while (!srvWait.get()) + try { + srvMux.wait(); + } + catch (InterruptedException e) { + fail(); + } + } + } + + return null; + }); + + Integer key = primaryKey(node3.cache(DEFAULT_CACHE_NAME)); + + IgniteInternalFuture fut0 = runAsync(() -> { + try (Transaction tx = client0.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + client0.cache(DEFAULT_CACHE_NAME).put(key, build(client0, "val", 0)); + + tx.commit(); + } + catch (Throwable t) { + log.error("err", t); + } + + }); + + // Implements test logic. + IgniteInternalFuture fut1 = runAsync(() -> { + // Wait for initial metadata received. It should be initial version: pending=0, accepted=0 + await(initMetaReq, 5000); + + // Wait for blocking proposal message to client node. + await(clientProposeMsgBlockedLatch, 5000); + + // Unblock proposal message to client. + clientWait.set(true); + + synchronized (clientMux) { + clientMux.notify(); + } + + // Give some time to apply update. + doSleep(3000); + + // Unblock second metadata update. + localMetaUpdatedLatch.countDown(); + + // Give some time for tx to complete (success or fail). fut2 will throw an error if tx has failed on commit. + doSleep(3000); + + // Unblock metadata message and allow for correct version acceptance. + srvWait.set(true); + + synchronized (srvMux) { + srvMux.notify(); + } + }); + + IgniteInternalFuture fut2 = runAsync(() -> { + delayMetadataUpdateThreadLoc.set(true); + + try (Transaction tx = client0.transactions(). + txStart(PESSIMISTIC, REPEATABLE_READ, 0, 1)) { + client0.cache(DEFAULT_CACHE_NAME).put(key, build(client0, "val", 0)); + + tx.commit(); + } + }); + + fut0.get(); + fut1.get(); + fut2.get(); + } + + /** + * @param latch Latch. + * @param timeout Timeout. 
+ */ + private void await(CountDownLatch latch, long timeout) { + try { + latch.await(timeout, MILLISECONDS); + } + catch (InterruptedException e) { + Thread.currentThread().interrupt(); + } + + long cnt = latch.getCount(); + + if (cnt != 0) + throw new RuntimeException("Invalid latch count after wait: " + cnt); + } + + /** + * @param ignite Ignite. + * @param prefix Value prefix. + * @param fields Fields. + */ + protected BinaryObject build(Ignite ignite, String prefix, int... fields) { + BinaryObjectBuilder builder = ignite.binary().builder("Value"); + + for (int field : fields) { + assertTrue(field < FIELDS); + + builder.setField("i" + field, field); + builder.setField("s" + field, prefix + field); + } + + return builder.build(); + } + + /** + * Discovery SPI which can simulate network split. + */ + protected class BlockTcpDiscoverySpi extends TcpDiscoverySpi { + /** Closure. */ + private volatile IgniteBiClosure clo; + + /** + * @param clo Closure. + */ + public void setClosure(IgniteBiClosure clo) { + this.clo = clo; + } + + /** + * @param addr Address. + * @param msg Message. 
+ */ + private synchronized void apply(ClusterNode addr, TcpDiscoveryAbstractMessage msg) { + if (!(msg instanceof TcpDiscoveryCustomEventMessage)) + return; + + TcpDiscoveryCustomEventMessage cm = (TcpDiscoveryCustomEventMessage)msg; + + DiscoveryCustomMessage delegate; + + try { + DiscoverySpiCustomMessage custMsg = cm.message(marshaller(), U.resolveClassLoader(ignite().configuration())); + + assertNotNull(custMsg); + + delegate = ((CustomMessageWrapper)custMsg).delegate(); + + } + catch (Throwable throwable) { + throw new RuntimeException(throwable); + } + + if (clo != null) + clo.apply(addr, delegate); + } + + /** {@inheritDoc} */ + @Override protected void writeToSocket( + Socket sock, + TcpDiscoveryAbstractMessage msg, + byte[] data, + long timeout + ) throws IOException { + if (spiCtx != null) + apply(spiCtx.localNode(), msg); + + super.writeToSocket(sock, msg, data, timeout); + } + + /** {@inheritDoc} */ + @Override protected void writeToSocket(Socket sock, + OutputStream out, + TcpDiscoveryAbstractMessage msg, + long timeout) throws IOException, IgniteCheckedException { + if (spiCtx != null) + apply(spiCtx.localNode(), msg); + + super.writeToSocket(sock, out, msg, timeout); + } + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + CacheObjectBinaryProcessorImpl.useTestBinaryCtx = true; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + CacheObjectBinaryProcessorImpl.useTestBinaryCtx = false; + + stopAllGrids(); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinarySerializationQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinarySerializationQuerySelfTest.java index 4b3c881bf217d..74c834a25e803 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinarySerializationQuerySelfTest.java +++ 
b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinarySerializationQuerySelfTest.java @@ -56,10 +56,14 @@ import java.util.ArrayList; import java.util.Iterator; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for query with BinaryMarshaller and different serialization modes. */ +@RunWith(JUnit4.class) public class BinarySerializationQuerySelfTest extends GridCommonAbstractTest { /** Ignite instance. */ private Ignite ignite; @@ -159,6 +163,7 @@ private static QueryEntity entityForClass(Class cls) { * * @throws Exception If failed. */ + @Test public void testPlain() throws Exception { check(EntityPlain.class); } @@ -168,6 +173,7 @@ public void testPlain() throws Exception { * * @throws Exception If failed. */ + @Test public void testSerializable() throws Exception { check(EntitySerializable.class); } @@ -177,6 +183,7 @@ public void testSerializable() throws Exception { * * @throws Exception If failed. */ + @Test public void testExternalizable() throws Exception { check(EntityExternalizable.class); } @@ -186,6 +193,7 @@ public void testExternalizable() throws Exception { * * @throws Exception If failed. */ + @Test public void testBinarylizable() throws Exception { check(EntityBinarylizable.class); } @@ -195,6 +203,7 @@ public void testBinarylizable() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testWriteReadObject() throws Exception { check(EntityWriteReadObject.class); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinaryTypeMismatchLoggingTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinaryTypeMismatchLoggingTest.java index 7a41a6ab726b6..bf9796da3c23e 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinaryTypeMismatchLoggingTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/BinaryTypeMismatchLoggingTest.java @@ -34,11 +34,14 @@ import org.apache.ignite.testframework.GridStringLogger; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests of binary type mismatch logging. */ +@RunWith(JUnit4.class) public class BinaryTypeMismatchLoggingTest extends GridCommonAbstractTest { /** */ public static final String MESSAGE_PAYLOAD_VALUE = "expValType=Payload, actualValType=o.a.i.i.processors.cache.BinaryTypeMismatchLoggingTest$Payload"; @@ -49,6 +52,7 @@ public class BinaryTypeMismatchLoggingTest extends GridCommonAbstractTest { /** * @throws Exception In case of an error. */ + @Test public void testValueReadCreateTable() throws Exception { Ignite ignite = startGrid(0); @@ -74,6 +78,7 @@ public void testValueReadCreateTable() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void testValueReadQueryEntities() throws Exception { Ignite ignite = startGrid(0); @@ -102,6 +107,7 @@ public void testValueReadQueryEntities() throws Exception { /** * @throws Exception In case of an error. 
*/ + @Test public void testEntryReadCreateTable() throws Exception { Ignite ignite = startGrid(0); @@ -127,6 +133,7 @@ public void testEntryReadCreateTable() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void testEntryReadQueryEntities() throws Exception { Ignite ignite = startGrid(0); @@ -156,6 +163,7 @@ public void testEntryReadQueryEntities() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void testValueWriteCreateTable() throws Exception { Ignite ignite = startGridWithLogCapture(); @@ -185,6 +193,7 @@ public void testValueWriteCreateTable() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void testValueWriteQueryEntities() throws Exception { Ignite ignite = startGridWithLogCapture(); @@ -209,6 +218,7 @@ public void testValueWriteQueryEntities() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void testEntryWriteCreateTable() throws Exception { Ignite ignite = startGridWithLogCapture(); @@ -242,6 +252,7 @@ public void testEntryWriteCreateTable() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void testEntryWriteQueryEntities() throws Exception { Ignite ignite = startGridWithLogCapture(); @@ -277,6 +288,7 @@ public void testEntryWriteQueryEntities() throws Exception { /** * @throws Exception In case of an error. */ + @Test public void testEntryWriteCacheIsolation() throws Exception { Ignite ignite = startGridWithLogCapture(); @@ -309,6 +321,7 @@ public void testEntryWriteCacheIsolation() throws Exception { /** * @throws Exception In case of an error. 
*/ + @Test public void testValueWriteMultipleQueryEntities() throws Exception { Ignite ignite = startGridWithLogCapture(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryDetailMetricsSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryDetailMetricsSelfTest.java index 837de6544a246..151fcea70a6f1 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryDetailMetricsSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryDetailMetricsSelfTest.java @@ -36,12 +36,16 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * Tests for cache query details metrics. */ +@RunWith(JUnit4.class) public abstract class CacheAbstractQueryDetailMetricsSelfTest extends GridCommonAbstractTest { /** */ private static final int QRY_DETAIL_METRICS_SIZE = 3; @@ -110,6 +114,7 @@ private CacheConfiguration configureCache(String cacheName) { * * @throws Exception In case of error. */ + @Test public void testSqlFieldsQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -123,6 +128,7 @@ public void testSqlFieldsQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlFieldsQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -137,6 +143,7 @@ public void testSqlFieldsQueryNotFullyFetchedMetrics() throws Exception { * * @throws Exception In case of error. 
*/ + @Test public void testSqlFieldsQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -150,6 +157,7 @@ public void testSqlFieldsQueryFailedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testQueryMetricsEviction() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -186,7 +194,7 @@ public void testQueryMetricsEviction() throws Exception { assertTrue(lastMetrics.contains("SQL_FIELDS select * from String limit 2;")); assertTrue(lastMetrics.contains("SCAN A;")); - assertTrue(lastMetrics.contains("SQL from String;")); + assertTrue(lastMetrics.contains("SELECT \"A\".\"STRING\"._KEY, \"A\".\"STRING\"._VAL from String;")); cache = grid(0).context().cache().jcache("B"); @@ -258,6 +266,7 @@ private static class Worker extends Thread { * * @throws Exception In case of error. */ + @Test public void testQueryMetricsMultithreaded() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -286,6 +295,7 @@ public void testQueryMetricsMultithreaded() throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -299,6 +309,7 @@ public void testScanQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -313,6 +324,7 @@ public void testScanQueryNotFullyFetchedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -326,6 +338,7 @@ public void testScanQueryFailedMetrics() throws Exception { * * @throws Exception In case of error. 
*/ + @Test public void testSqlQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -339,6 +352,7 @@ public void testSqlQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -348,24 +362,12 @@ public void testSqlQueryNotFullyFetchedMetrics() throws Exception { checkQueryNotFullyFetchedMetrics(cache, qry, true); } - /** - * Test metrics for failed Scan queries. - * - * @throws Exception In case of error. - */ - public void testSqlQueryFailedMetrics() throws Exception { - IgniteCache cache = grid(0).context().cache().jcache("A"); - - SqlQuery qry = new SqlQuery<>("Long", "from Long"); - - checkQueryFailedMetrics(cache, qry); - } - /** * Test metrics for Sql queries. * * @throws Exception In case of error. */ + @Test public void testTextQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -379,6 +381,7 @@ public void testTextQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testTextQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -393,6 +396,7 @@ public void testTextQueryNotFullyFetchedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testTextQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -406,6 +410,7 @@ public void testTextQueryFailedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlFieldsCrossCacheQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -419,6 +424,7 @@ public void testSqlFieldsCrossCacheQueryMetrics() throws Exception { * * @throws Exception In case of error. 
*/ + @Test public void testSqlFieldsCrossCacheQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -433,6 +439,7 @@ public void testSqlFieldsCrossCacheQueryNotFullyFetchedMetrics() throws Exceptio * * @throws Exception In case of error. */ + @Test public void testSqlFieldsCrossCacheQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryMetricsSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryMetricsSelfTest.java index eb3c8d6d3ccbe..27e61e56bc2a3 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryMetricsSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheAbstractQueryMetricsSelfTest.java @@ -36,12 +36,16 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * Tests for cache query metrics. */ +@RunWith(JUnit4.class) public abstract class CacheAbstractQueryMetricsSelfTest extends GridCommonAbstractTest { /** Grid count. */ protected int gridCnt; @@ -106,6 +110,7 @@ public abstract class CacheAbstractQueryMetricsSelfTest extends GridCommonAbstra * * @throws Exception In case of error. */ + @Test public void testSqlFieldsQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -119,6 +124,7 @@ public void testSqlFieldsQueryMetrics() throws Exception { * * @throws Exception In case of error. 
*/ + @Test public void testSqlFieldsQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -133,6 +139,7 @@ public void testSqlFieldsQueryNotFullyFetchedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlFieldsQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -146,6 +153,7 @@ public void testSqlFieldsQueryFailedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -159,6 +167,7 @@ public void testScanQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -173,6 +182,7 @@ public void testScanQueryNotFullyFetchedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -186,6 +196,7 @@ public void testScanQueryFailedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -199,6 +210,7 @@ public void testSqlQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -208,24 +220,12 @@ public void testSqlQueryNotFullyFetchedMetrics() throws Exception { checkQueryNotFullyFetchedMetrics(cache, qry, true); } - /** - * Test metrics for failed Scan queries. - * - * @throws Exception In case of error. 
- */ - public void testSqlQueryFailedMetrics() throws Exception { - IgniteCache cache = grid(0).context().cache().jcache("A"); - - SqlQuery qry = new SqlQuery<>("Long", "from Long"); - - checkQueryFailedMetrics(cache, qry); - } - /** * Test metrics for Sql queries. * * @throws Exception In case of error. */ + @Test public void testTextQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -239,6 +239,7 @@ public void testTextQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testTextQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -253,6 +254,7 @@ public void testTextQueryNotFullyFetchedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testTextQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -266,6 +268,7 @@ public void testTextQueryFailedMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlFieldsCrossCacheQueryMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -279,6 +282,7 @@ public void testSqlFieldsCrossCacheQueryMetrics() throws Exception { * * @throws Exception In case of error. */ + @Test public void testSqlFieldsCrossCacheQueryNotFullyFetchedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -293,6 +297,7 @@ public void testSqlFieldsCrossCacheQueryNotFullyFetchedMetrics() throws Exceptio * * @throws Exception In case of error. */ + @Test public void testSqlFieldsCrossCacheQueryFailedMetrics() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); @@ -326,6 +331,7 @@ private static class Worker extends Thread { * * @throws Exception In case of error. 
*/ + @Test public void testQueryMetricsMultithreaded() throws Exception { IgniteCache cache = grid(0).context().cache().jcache("A"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheBinaryKeyConcurrentQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheBinaryKeyConcurrentQueryTest.java index 0582132acedb3..cdf585fa44a9f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheBinaryKeyConcurrentQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheBinaryKeyConcurrentQueryTest.java @@ -38,11 +38,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -53,10 +53,8 @@ * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheBinaryKeyConcurrentQueryTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 3; @@ -67,8 +65,6 @@ public class CacheBinaryKeyConcurrentQueryTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - 
((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setMarshaller(null); return cfg; @@ -84,6 +80,7 @@ public class CacheBinaryKeyConcurrentQueryTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testPutAndQueries() throws Exception { Ignite ignite = ignite(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationP2PTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationP2PTest.java index 3777154780487..765e3f73d03ca 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationP2PTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheConfigurationP2PTest.java @@ -33,12 +33,16 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; /** * */ +@RunWith(JUnit4.class) public class CacheConfigurationP2PTest extends GridCommonAbstractTest { /** */ public static final String NODE_START_MSG = "Test external node started"; @@ -75,6 +79,7 @@ static IgniteConfiguration createConfiguration() { /** * @throws Exception If failed. 
*/ + @Test public void testCacheConfigurationP2P() throws Exception { fail("Enable when IGNITE-537 is fixed."); @@ -181,4 +186,4 @@ public void testCacheConfigurationP2P() throws Exception { } } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIndexStreamerTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIndexStreamerTest.java index dda7eedc30c23..65e239ec04a4c 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIndexStreamerTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIndexStreamerTest.java @@ -25,14 +25,13 @@ import org.apache.ignite.IgniteDataStreamer; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -41,22 +40,12 @@ /** * */ +@RunWith(JUnit4.class) public class CacheIndexStreamerTest extends GridCommonAbstractTest { - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - 
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** * @throws Exception If failed. */ + @Test public void testStreamerAtomic() throws Exception { checkStreamer(ATOMIC); } @@ -64,6 +53,7 @@ public void testStreamerAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStreamerTx() throws Exception { checkStreamer(TRANSACTIONAL); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIteratorScanQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIteratorScanQueryTest.java index c6cd87bf067cf..6f7c84d3a4a59 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIteratorScanQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheIteratorScanQueryTest.java @@ -31,6 +31,9 @@ import java.util.Collections; import java.util.Comparator; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -38,6 +41,7 @@ /** * Node filter test. */ +@RunWith(JUnit4.class) public class CacheIteratorScanQueryTest extends GridCommonAbstractTest { /** Client mode. */ private boolean client = false; @@ -75,6 +79,7 @@ public CacheIteratorScanQueryTest() { /** * @throws Exception If failed. */ + @Test public void testScanQuery() throws Exception { Ignite server = startGrid(0); @@ -108,6 +113,7 @@ public void testScanQuery() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueryGetAllClientSide() throws Exception { Ignite server = startGrid(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingBaseTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingBaseTest.java index f456c56433b6c..ce1f86a1daacf 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingBaseTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingBaseTest.java @@ -25,9 +25,6 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -37,9 +34,6 @@ * Tests various cache operations with indexing enabled. 
*/ public abstract class CacheOffheapBatchIndexingBaseTest extends GridCommonAbstractTest { - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * Load data into cache * @@ -62,8 +56,6 @@ protected void preload(String name) { cfg.setPeerClassLoadingEnabled(false); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - return cfg; } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingMultiTypeTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingMultiTypeTest.java index 87d10a14f66e8..f0a68ad10741a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingMultiTypeTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingMultiTypeTest.java @@ -21,16 +21,20 @@ import java.util.TreeMap; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; -import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests various cache operations with indexing enabled. * Cache contain multiple types. */ +@RunWith(JUnit4.class) public class CacheOffheapBatchIndexingMultiTypeTest extends CacheOffheapBatchIndexingBaseTest { /** * Tests putAll with multiple indexed entities and streamer pre-loading with low off-heap cache size. */ + @Test public void testPutAllMultupleEntitiesAndStreamer() { doStreamerBatchTest(50, 1_000, new Class[] { Integer.class, CacheOffheapBatchIndexingBaseTest.Person.class, @@ -41,6 +45,7 @@ public void testPutAllMultupleEntitiesAndStreamer() { /** * Tests putAll after with streamer batch load with one entity. 
*/ + @Test public void testPuAllSingleEntity() { doStreamerBatchTest(50, 1_000, @@ -90,4 +95,4 @@ private void doStreamerBatchTest(int iterations, cache.destroy(); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingSingleTypeTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingSingleTypeTest.java index acf33dc380db0..f07e041dfddfb 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingSingleTypeTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOffheapBatchIndexingSingleTypeTest.java @@ -27,17 +27,22 @@ import org.apache.ignite.cache.query.QueryCursor; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests various cache operations with indexing enabled. * Cache contains single type. */ +@RunWith(JUnit4.class) public class CacheOffheapBatchIndexingSingleTypeTest extends CacheOffheapBatchIndexingBaseTest { /** * Tests removal using EntryProcessor. * * @throws Exception If failed. 
*/ + @Test public void testBatchRemove() throws Exception { Ignite ignite = grid(0); @@ -84,6 +89,7 @@ public void testBatchRemove() throws Exception { /** * */ + @Test public void testPutAllAndStreamer() { doStreamerBatchTest(50, 1_000, @@ -94,6 +100,7 @@ public void testPutAllAndStreamer() { /** * */ + @Test public void testPuAllSingleEntity() { doStreamerBatchTest(50, 1_000, @@ -144,4 +151,4 @@ private void doStreamerBatchTest(int iterations, cache.destroy(); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOperationsWithExpirationTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOperationsWithExpirationTest.java index 6a04126d37ebf..7167fcfd206b7 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOperationsWithExpirationTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheOperationsWithExpirationTest.java @@ -36,6 +36,9 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -44,6 +47,7 @@ /** * */ +@RunWith(JUnit4.class) public class CacheOperationsWithExpirationTest extends GridCommonAbstractTest { /** */ private static final int KEYS = 10_000; @@ -78,6 +82,7 @@ private CacheConfiguration cacheConfiguration(CacheAtom /** * @throws Exception If failed. */ + @Test public void testAtomicIndexEnabled() throws Exception { concurrentPutGetRemoveExpireAndQuery(cacheConfiguration(ATOMIC, true)); } @@ -85,6 +90,7 @@ public void testAtomicIndexEnabled() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAtomic() throws Exception { concurrentPutGetRemoveExpireAndQuery(cacheConfiguration(ATOMIC, false)); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryBuildValueTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryBuildValueTest.java index 3bbb007e776d2..e97366cfb77e0 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryBuildValueTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryBuildValueTest.java @@ -34,26 +34,22 @@ import org.apache.ignite.configuration.BinaryConfiguration; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheQueryBuildValueTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); cfg.setMarshaller(null); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); QueryEntity entity = new QueryEntity(); @@ -99,6 +95,7 @@ public class CacheQueryBuildValueTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testBuilderAndQuery() throws Exception { Ignite node = ignite(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryEvictDataLostTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryEvictDataLostTest.java index 254b59ce8bf0c..99bb20be940c0 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryEvictDataLostTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryEvictDataLostTest.java @@ -31,10 +31,14 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheQueryEvictDataLostTest extends GridCommonAbstractTest { /** */ private static final int KEYS = 100_000; @@ -71,6 +75,7 @@ public CacheQueryEvictDataLostTest() { /** * @throws Exception If failed. 
*/ + @Test public void testQueryDataLost() throws Exception { final long stopTime = U.currentTimeMillis() + 30_000; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryFilterExpiredTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryFilterExpiredTest.java index 47c87fa1cddc2..f9fed9fe39c90 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryFilterExpiredTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryFilterExpiredTest.java @@ -25,13 +25,12 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.lang.GridAbsPredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -39,22 +38,12 @@ /** * */ +@RunWith(JUnit4.class) public class CacheQueryFilterExpiredTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(gridName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - - return cfg; - } - /** * @throws Exception If failed. */ + @Test public void testFilterExpired() throws Exception { try (Ignite ignite = startGrid(0)) { checkFilterExpired(ignite, ATOMIC, false); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryNewClientSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryNewClientSelfTest.java index d33084026a72e..f5047bc013896 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryNewClientSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheQueryNewClientSelfTest.java @@ -23,28 +23,16 @@ import org.apache.ignite.Ignition; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for the case when client is started after the cache is already created. 
*/ +@RunWith(JUnit4.class) public class CacheQueryNewClientSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -53,6 +41,7 @@ public class CacheQueryNewClientSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testQueryFromNewClient() throws Exception { Ignite srv = startGrid("server"); @@ -88,6 +77,7 @@ public void testQueryFromNewClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryFromNewClientCustomSchemaName() throws Exception { Ignite srv = startGrid("server"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheRandomOperationsMultithreadedTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheRandomOperationsMultithreadedTest.java index c2f31f73f060d..c450fd992bfa5 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheRandomOperationsMultithreadedTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheRandomOperationsMultithreadedTest.java @@ -40,12 +40,12 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgniteInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -55,10 +55,8 @@ /** * */ +@RunWith(JUnit4.class) public class CacheRandomOperationsMultithreadedTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int KEYS = 1000; @@ -72,8 +70,6 @@ public class CacheRandomOperationsMultithreadedTest extends GridCommonAbstractTe @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); return cfg; @@ -93,6 +89,7 @@ public class CacheRandomOperationsMultithreadedTest extends GridCommonAbstractTe /** * @throws Exception If failed. */ + @Test public void testAtomicOffheapEviction() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, ATOMIC, @@ -105,6 +102,7 @@ public void testAtomicOffheapEviction() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAtomicOffheapEvictionIndexing() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, ATOMIC, @@ -117,6 +115,7 @@ public void testAtomicOffheapEvictionIndexing() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxOffheapEviction() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, TRANSACTIONAL, @@ -129,6 +128,7 @@ public void testTxOffheapEviction() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxOffheapEvictionIndexing() throws Exception { CacheConfiguration ccfg = cacheConfiguration(PARTITIONED, TRANSACTIONAL, diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheScanPartitionQueryFallbackSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheScanPartitionQueryFallbackSelfTest.java index 2e66b71457d21..ab2fa1fad6204 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheScanPartitionQueryFallbackSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheScanPartitionQueryFallbackSelfTest.java @@ -57,13 +57,16 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests partition scan query fallback. */ +@RunWith(JUnit4.class) public class CacheScanPartitionQueryFallbackSelfTest extends GridCommonAbstractTest { /** Grid count. */ private static final int GRID_CNT = 3; @@ -71,9 +74,6 @@ public class CacheScanPartitionQueryFallbackSelfTest extends GridCommonAbstractT /** Keys count. */ private static final int KEYS_CNT = 50 * RendezvousAffinityFunction.DFLT_PARTITION_COUNT; - /** Ip finder. */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Backups. 
*/ private int backups; @@ -101,10 +101,7 @@ public class CacheScanPartitionQueryFallbackSelfTest extends GridCommonAbstractT cfg.setClientMode(clientMode); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - discoSpi.setIpFinder(IP_FINDER); - discoSpi.setForceServerMode(true); - cfg.setDiscoverySpi(discoSpi); + ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setForceServerMode(true); cfg.setCommunicationSpi(commSpiFactory.create()); @@ -128,6 +125,7 @@ public class CacheScanPartitionQueryFallbackSelfTest extends GridCommonAbstractT * * @throws Exception If failed. */ + @Test public void testScanLocal() throws Exception { cacheMode = CacheMode.PARTITIONED; backups = 0; @@ -155,6 +153,7 @@ public void testScanLocal() throws Exception { * * @throws Exception If failed. */ + @Test public void testScanLocalExplicit() throws Exception { cacheMode = CacheMode.PARTITIONED; backups = 0; @@ -190,6 +189,7 @@ public void testScanLocalExplicit() throws Exception { * * @throws Exception If failed. */ + @Test public void testScanLocalExplicitNoPart() throws Exception { cacheMode = CacheMode.PARTITIONED; backups = 0; @@ -215,6 +215,7 @@ public void testScanLocalExplicitNoPart() throws Exception { * * @throws Exception If failed. */ + @Test public void testScanRemote() throws Exception { cacheMode = CacheMode.PARTITIONED; backups = 0; @@ -244,6 +245,7 @@ public void testScanRemote() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testScanFallbackOnRebalancing() throws Exception { scanFallbackOnRebalancing(false); } @@ -341,6 +343,7 @@ private void scanFallbackOnRebalancing(final boolean cur) throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanFallbackOnRebalancingCursor1() throws Exception { cacheMode = CacheMode.PARTITIONED; clientMode = false; @@ -408,6 +411,7 @@ public void testScanFallbackOnRebalancingCursor1() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testScanFallbackOnRebalancingCursor2() throws Exception { scanFallbackOnRebalancing(true); } @@ -577,4 +581,4 @@ private static class TestFallbackOnRebalancingCommunicationSpiFactory implements return new TcpCommunicationSpi(); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheSqlQueryValueCopySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheSqlQueryValueCopySelfTest.java index 5b92726f85a11..8f188d297179e 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheSqlQueryValueCopySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/CacheSqlQueryValueCopySelfTest.java @@ -34,18 +34,16 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.query.GridQueryProcessor; import org.apache.ignite.internal.processors.query.GridRunningQueryInfo; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests modification of values returned by query iterators with enabled copy on read. 
*/ +@RunWith(JUnit4.class) public class CacheSqlQueryValueCopySelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int KEYS = 100; @@ -56,8 +54,6 @@ public class CacheSqlQueryValueCopySelfTest extends GridCommonAbstractTest { if ("client".equals(cfg.getIgniteInstanceName())) cfg.setClientMode(true); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration cc = new CacheConfiguration<>(DEFAULT_CACHE_NAME); cc.setCopyOnRead(true); @@ -98,6 +94,7 @@ public class CacheSqlQueryValueCopySelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testTwoStepSqlClientQuery() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(DEFAULT_CACHE_NAME); @@ -128,6 +125,7 @@ public void testTwoStepSqlClientQuery() throws Exception { /** * Test two step query without local reduce phase. */ + @Test public void testTwoStepSkipReduceSqlQuery() { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -145,6 +143,7 @@ public void testTwoStepSkipReduceSqlQuery() { /** * Test two step query value copy. */ + @Test public void testTwoStepReduceSqlQuery() { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -163,6 +162,7 @@ public void testTwoStepReduceSqlQuery() { /** * Tests local sql query. */ + @Test public void testLocalSqlQuery() { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -182,6 +182,7 @@ public void testLocalSqlQuery() { /** * Tests local sql query. */ + @Test public void testLocalSqlFieldsQuery() { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -224,6 +225,7 @@ private IgniteInternalFuture runQueryAsync(final Query qry) throws Excepti * * @throws Exception If failed. 
*/ + @Test public void testRunningSqlFieldsQuery() throws Exception { IgniteInternalFuture fut = runQueryAsync(new SqlFieldsQuery("select _val, sleep(1000) from Value limit 3")); @@ -264,6 +266,7 @@ public void testRunningSqlFieldsQuery() throws Exception { * * @throws Exception If failed. */ + @Test public void testRunningSqlQuery() throws Exception { IgniteInternalFuture fut = runQueryAsync(new SqlQuery(Value.class, "id > sleep(100)")); @@ -304,6 +307,7 @@ public void testRunningSqlQuery() throws Exception { * * @throws Exception If failed. */ + @Test public void testCancelingSqlFieldsQuery() throws Exception { runQueryAsync(new SqlFieldsQuery("select * from (select _val, sleep(100) from Value limit 50)")); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ClientReconnectAfterClusterRestartTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ClientReconnectAfterClusterRestartTest.java index 90121615d673d..ff1cc82749d65 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ClientReconnectAfterClusterRestartTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ClientReconnectAfterClusterRestartTest.java @@ -39,9 +39,13 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class ClientReconnectAfterClusterRestartTest extends GridCommonAbstractTest { /** Server id. 
*/ private static final int SERVER_ID = 0; @@ -113,6 +117,7 @@ public class ClientReconnectAfterClusterRestartTest extends GridCommonAbstractTe } /** */ + @Test public void testReconnectClient() throws Exception { try { startGrid(SERVER_ID); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeSqlTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeSqlTest.java new file mode 100644 index 0000000000000..9a91fe215cb2b --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ClusterReadOnlyModeSqlTest.java @@ -0,0 +1,99 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache; + +import java.util.Random; +import javax.cache.CacheException; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.query.FieldsQueryCursor; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.internal.util.typedef.G; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Tests SQL queries in read-only cluster mode. 
+ */ +@RunWith(JUnit4.class) +public class ClusterReadOnlyModeSqlTest extends ClusterReadOnlyModeAbstractTest { + /** + * + */ + @Test + public void testSqlReadOnly() { + assertSqlReadOnlyMode(false); + + changeClusterReadOnlyMode(true); + + assertSqlReadOnlyMode(true); + + changeClusterReadOnlyMode(false); + + assertSqlReadOnlyMode(false); + } + + /** + * @param readOnly If {@code true} then data modification SQL queries must fail, else succeed. + */ + private void assertSqlReadOnlyMode(boolean readOnly) { + Random rnd = new Random(); + + for (Ignite ignite : G.allGrids()) { + for (String cacheName : CACHE_NAMES) { + IgniteCache cache = ignite.cache(cacheName); + + try (FieldsQueryCursor cur = cache.query(new SqlFieldsQuery("SELECT * FROM Integer"))) { + cur.getAll(); + } + + boolean failed = false; + + try (FieldsQueryCursor cur = cache.query(new SqlFieldsQuery("DELETE FROM Integer"))) { + cur.getAll(); + } + catch (CacheException ex) { + if (!readOnly) + log.error("Failed to delete data", ex); + + failed = true; + } + + if (failed != readOnly) + fail("SQL delete from " + cacheName + " must " + (readOnly ? "fail" : "succeed")); + + failed = false; + + try (FieldsQueryCursor cur = cache.query(new SqlFieldsQuery( + "INSERT INTO Integer(_KEY, _VAL) VALUES (?, ?)").setArgs(rnd.nextInt(1000), rnd.nextInt()))) { + cur.getAll(); + } + catch (CacheException ex) { + if (!readOnly) + log.error("Failed to insert data", ex); + + failed = true; + } + + if (failed != readOnly) + fail("SQL insert into " + cacheName + " must " + (readOnly ? 
"fail" : "succeed")); + } + } + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/DdlTransactionSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/DdlTransactionSelfTest.java index 6652559a15060..9fec0b70e6695 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/DdlTransactionSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/DdlTransactionSelfTest.java @@ -26,22 +26,20 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.TransactionConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; import org.apache.ignite.transactions.TransactionState; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class DdlTransactionSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -59,11 +57,6 @@ public class DdlTransactionSelfTest extends GridCommonAbstractTest { .setDefaultTxConcurrency(TransactionConcurrency.PESSIMISTIC) .setDefaultTxTimeout(5000)); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); cfg.setCacheConfiguration(getCacheConfiguration()); cfg.setClientMode(client); @@ -84,6 +77,7 @@ private CacheConfiguration 
getCacheConfiguration() { /** * @throws Exception If failed. */ + @Test public void testTxIsCommittedOnDdlRequestMultinodeClient() throws Exception { startGridsMultiThreaded(4, false); @@ -132,6 +126,7 @@ public void testTxIsCommittedOnDdlRequestMultinodeClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxIsCommittedOnDdlRequestMultinode() throws Exception { Ignite node = startGridsMultiThreaded(4); @@ -174,6 +169,7 @@ public void testTxIsCommittedOnDdlRequestMultinode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxIsCommittedOnDdlRequest() throws Exception { Ignite node = startGrid(); @@ -216,6 +212,7 @@ public void testTxIsCommittedOnDdlRequest() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDdlRequestWithoutTxMultinodeClient() throws Exception { startGridsMultiThreaded(4, false); @@ -260,6 +257,7 @@ public void testDdlRequestWithoutTxMultinodeClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDdlRequestWithoutTxMultinode() throws Exception { Ignite node = startGridsMultiThreaded(4); @@ -298,6 +296,7 @@ public void testDdlRequestWithoutTxMultinode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDdlRequestWithoutTx() throws Exception { Ignite node = startGrid(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheCrossCacheQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheCrossCacheQuerySelfTest.java index cf8bb2ebe6b09..8238e36468c08 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheCrossCacheQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheCrossCacheQuerySelfTest.java @@ -40,11 +40,11 @@ import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -52,6 +52,7 @@ /** * Tests cross cache queries. 
*/ +@RunWith(JUnit4.class) public class GridCacheCrossCacheQuerySelfTest extends GridCommonAbstractTest { /** */ private static final String PART_CACHE_NAME = "partitioned"; @@ -62,9 +63,6 @@ public class GridCacheCrossCacheQuerySelfTest extends GridCommonAbstractTest { /** */ private static final String REPL_STORE_CACHE_NAME = "replicated-store"; - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private Ignite ignite; @@ -72,12 +70,6 @@ public class GridCacheCrossCacheQuerySelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - c.setCacheConfiguration( createCache(PART_CACHE_NAME, CacheMode.PARTITIONED, Integer.class, FactPurchase.class), createCache(REPL_PROD_CACHE_NAME, CacheMode.REPLICATED, Integer.class, DimProduct.class), @@ -128,6 +120,7 @@ private static CacheConfiguration createCache(String name, CacheMode mode, Class /** * @throws Exception If failed. */ + @Test public void testTwoStepGroupAndAggregates() throws Exception { IgniteInternalCache cache = ((IgniteKernal)ignite).getCache(PART_CACHE_NAME); @@ -228,6 +221,7 @@ public void testTwoStepGroupAndAggregates() throws Exception { /** * @throws Exception If failed. */ + @Test public void testApiQueries() throws Exception { IgniteCache c = ignite.cache(PART_CACHE_NAME); @@ -243,6 +237,7 @@ public void testApiQueries() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultiStatement() throws Exception { final IgniteInternalCache cache = ((IgniteKernal)ignite).getCache(PART_CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQuerySelfTest.java index 14ad39a5c7736..db420085de330 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQuerySelfTest.java @@ -43,10 +43,10 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -54,6 +54,7 @@ /** * Full text queries test. 
*/ private static final int MAX_ITEM_COUNT = 100; @@ -61,19 +62,10 @@ public class GridCacheFullTextQuerySelfTest extends GridCommonAbstractTest { /** Cache name */ private static final String PERSON_CACHE = "Person"; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setIncludeEventTypes(); cfg.setConnectorConfiguration(null); @@ -102,6 +94,7 @@ public class GridCacheFullTextQuerySelfTest extends GridCommonAbstractTest { /** * @throws Exception In case of error. */ + @Test public void testTextQueryWithField() throws Exception { checkTextQuery("name:1*", false, false); } @@ -109,6 +102,7 @@ public void testTextQueryWithField() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testLocalTextQueryWithKeepBinary() throws Exception { checkTextQuery(true, true); } @@ -116,6 +110,7 @@ public void testLocalTextQueryWithKeepBinary() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testLocalTextQuery() throws Exception { checkTextQuery(true, false); } @@ -123,6 +118,7 @@ public void testLocalTextQuery() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testTextQueryWithKeepBinary() throws Exception { checkTextQuery(false, true); } @@ -130,6 +126,7 @@ public void testTextQueryWithKeepBinary() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testTextQuery() throws Exception { checkTextQuery(false, false); } @@ -371,4 +368,4 @@ public Person(String name, int age) { birthday = cal.getTime(); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLazyQueryPartitionsReleaseTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLazyQueryPartitionsReleaseTest.java index a11296966c938..e934989275dc6 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLazyQueryPartitionsReleaseTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheLazyQueryPartitionsReleaseTest.java @@ -34,18 +34,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that lazy query partitions are not released too early. 
*/ +@RunWith(JUnit4.class) public class GridCacheLazyQueryPartitionsReleaseTest extends GridCommonAbstractTest { - /** IP finder */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Cache name */ private static final String PERSON_CACHE = "person"; @@ -65,12 +63,6 @@ public class GridCacheLazyQueryPartitionsReleaseTest extends GridCommonAbstractT cfg.setCacheConfiguration(ccfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -84,6 +76,7 @@ public class GridCacheLazyQueryPartitionsReleaseTest extends GridCommonAbstractT * * @throws Exception If failed. */ + @Test public void testLazyQueryPartitionsRelease() throws Exception { Ignite node1 = startGrid(0); @@ -131,6 +124,7 @@ public void testLazyQueryPartitionsRelease() throws Exception { * * @throws Exception If failed. */ + @Test public void testLazyQueryPartitionsReleaseOnClose() throws Exception { Ignite node1 = startGrid(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapSelfTest.java index e06f6a6a3b660..caabc407b5369 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffHeapSelfTest.java @@ -26,10 +26,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -38,23 +38,15 @@ /** * Test for cache swap. */ +@RunWith(JUnit4.class) public class GridCacheOffHeapSelfTest extends GridCommonAbstractTest { /** Saved versions. */ private final Map versions = new HashMap<>(); - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setNetworkTimeout(2000); CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -78,6 +70,7 @@ public class GridCacheOffHeapSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testOffHeapIterator() throws Exception { try { startGrids(1); @@ -142,4 +135,4 @@ public int value() { return S.toString(CacheValue.class, this); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexEntryEvictTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexEntryEvictTest.java index e14da4a22918c..071c68d69da05 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexEntryEvictTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexEntryEvictTest.java @@ -29,10 +29,10 @@ import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -41,20 +41,12 @@ /** * */ +@RunWith(JUnit4.class) public class GridCacheOffheapIndexEntryEvictTest extends GridCommonAbstractTest { - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - 
cfg.setNetworkTimeout(2000); CacheConfiguration cacheCfg = defaultCacheConfiguration(); @@ -80,6 +72,7 @@ public class GridCacheOffheapIndexEntryEvictTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testQueryWhenLocked() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -112,6 +105,7 @@ public void testQueryWhenLocked() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdates() throws Exception { final int ENTRIES = 500; @@ -189,4 +183,4 @@ public TestValue(int val) { val = in.readInt(); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexGetSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexGetSelfTest.java index 321a201297374..029cc628ae4a6 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexGetSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheOffheapIndexGetSelfTest.java @@ -29,13 +29,13 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import 
static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -45,20 +45,12 @@ /** * Tests off-heap storage when both off-heap and swapped entries exist. */ +@RunWith(JUnit4.class) public class GridCacheOffheapIndexGetSelfTest extends GridCommonAbstractTest { - /** */ - private final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setNetworkTimeout(2000); cfg.setDeploymentMode(SHARED); @@ -100,6 +92,7 @@ protected CacheConfiguration cacheConfiguration() { * * @throws Exception If failed. */ + @Test public void testGet() throws Exception { IgniteCache cache = jcache(grid(0), cacheConfiguration(), Long.class, Long.class); @@ -124,6 +117,7 @@ public void testGet() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPutGet() throws Exception { IgniteCache cache = jcache(grid(0), cacheConfiguration(), Object.class, Object.class); @@ -151,6 +145,7 @@ public void testPutGet() throws Exception { /** * @throws Exception If failed.
*/ + @Test public void testWithExpiryPolicy() throws Exception { IgniteCache cache = jcache(grid(0), cacheConfiguration(), Long.class, Long.class); @@ -223,4 +218,4 @@ public void setValue(String val) { this.val = val; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexDisabledSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexDisabledSelfTest.java index 525b6882c3651..5c20e195300b5 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexDisabledSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQueryIndexDisabledSelfTest.java @@ -30,17 +30,15 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteBiPredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class GridCacheQueryIndexDisabledSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ public GridCacheQueryIndexDisabledSelfTest() { super(true); @@ -57,18 +55,13 @@ public GridCacheQueryIndexDisabledSelfTest() { cfg.setCacheConfiguration(ccfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - return cfg; } /** * @throws Exception If failed. 
*/ + @Test public void testSqlQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(SqlValue.class.getSimpleName()); @@ -88,6 +81,7 @@ public void testSqlQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlFieldsQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(SqlValue.class.getSimpleName()); @@ -119,6 +113,7 @@ public void testSqlFieldsQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFullTextQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(String.class.getSimpleName()); @@ -138,6 +133,7 @@ public void testFullTextQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanLocalQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(String.class.getSimpleName()); @@ -155,6 +151,7 @@ public void testScanLocalQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlLocalQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(SqlValue.class.getSimpleName()); @@ -174,6 +171,7 @@ public void testSqlLocalQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlLocalFieldsQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(SqlValue.class.getSimpleName()); @@ -193,6 +191,7 @@ public void testSqlLocalFieldsQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFullTextLocalQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(String.class.getSimpleName()); @@ -212,6 +211,7 @@ public void testFullTextLocalQuery() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testScanQuery() throws Exception { IgniteCache cache = grid().getOrCreateCache(String.class.getSimpleName()); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySerializationSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySerializationSelfTest.java index 85fd6db977b03..73d542d89df5a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySerializationSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheQuerySerializationSelfTest.java @@ -28,10 +28,10 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.resources.IgniteInstanceResource; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -39,6 +39,7 @@ /** * Tests for cache query results serialization. 
*/ +@RunWith(JUnit4.class) public class GridCacheQuerySerializationSelfTest extends GridCommonAbstractTest { /** */ private static final int GRID_CNT = 2; @@ -49,9 +50,6 @@ public class GridCacheQuerySerializationSelfTest extends GridCommonAbstractTest /** */ private static final CacheMode CACHE_MODE = PARTITIONED; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { startGridsMultiThreaded(GRID_CNT); @@ -66,12 +64,6 @@ public class GridCacheQuerySerializationSelfTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setName(CACHE_NAME); @@ -102,6 +94,7 @@ private GridCacheQueryTestValue value(String f1, int f2, long f3) { * * @throws Exception In case of error. */ + @Test public void testSerialization() throws Exception { IgniteEx g0 = grid(0); @@ -142,4 +135,4 @@ private static class QueryCallable implements IgniteCallable ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setName("offheap-cache"); @@ -95,6 +87,7 @@ public class GridCacheQuerySimpleBenchmark extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testPerformance() throws Exception { Random rnd = new GridRandom(); @@ -211,4 +204,4 @@ public Person() { name = U.readString(in); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridIndexingWithNoopSwapSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridIndexingWithNoopSwapSelfTest.java index 570a1b078ed17..c1d9d5f815388 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridIndexingWithNoopSwapSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridIndexingWithNoopSwapSelfTest.java @@ -27,10 +27,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractQuerySelfTest.ObjectValue; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -39,10 +39,8 @@ /** * GG-4368 */ +@RunWith(JUnit4.class) public class GridIndexingWithNoopSwapSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ protected Ignite ignite; @@ -50,12 +48,6 @@ public class GridIndexingWithNoopSwapSelfTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = 
super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); @@ -92,6 +84,7 @@ public class GridIndexingWithNoopSwapSelfTest extends GridCommonAbstractTest { } /** @throws Exception If failed. */ + @Test public void testQuery() throws Exception { IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME); @@ -111,4 +104,4 @@ public void testQuery() throws Exception { assertEquals(10, cache.query(qry.setArgs(0)).getAll().size()); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectFieldsQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectFieldsQuerySelfTest.java index 21e4852c30bae..4f6fa8fd95d49 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectFieldsQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectFieldsQuerySelfTest.java @@ -32,22 +32,20 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiPredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests that server nodes do not need class definitions to execute queries. 
*/ +@RunWith(JUnit4.class) public class IgniteBinaryObjectFieldsQuerySelfTest extends GridCommonAbstractTest { /** */ public static final String PERSON_KEY_CLS_NAME = "org.apache.ignite.tests.p2p.cache.PersonKey"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Grid count. */ public static final int GRID_CNT = 4; @@ -68,12 +66,6 @@ protected String getPersonClassName(){ cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi discoSpi = new TcpDiscoverySpi(); - - discoSpi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(discoSpi); - cfg.setMarshaller(null); if (getTestIgniteInstanceName(3).equals(igniteInstanceName)) { @@ -121,6 +113,7 @@ protected void initExtClassLoader() { /** * @throws Exception If failed. */ + @Test public void testQueryPartitionedAtomic() throws Exception { checkQuery(CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC); } @@ -128,6 +121,7 @@ public void testQueryPartitionedAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReplicatedAtomic() throws Exception { checkQuery(CacheMode.REPLICATED, CacheAtomicityMode.ATOMIC); } @@ -135,6 +129,7 @@ public void testQueryReplicatedAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryPartitionedTransactional() throws Exception { checkQuery(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL); } @@ -142,6 +137,7 @@ public void testQueryPartitionedTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReplicatedTransactional() throws Exception { checkQuery(CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL); } @@ -149,6 +145,7 @@ public void testQueryReplicatedTransactional() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFieldsQueryPartitionedAtomic() throws Exception { checkFieldsQuery(CacheMode.PARTITIONED, CacheAtomicityMode.ATOMIC); } @@ -156,6 +153,7 @@ public void testFieldsQueryPartitionedAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFieldsQueryReplicatedAtomic() throws Exception { checkFieldsQuery(CacheMode.REPLICATED, CacheAtomicityMode.ATOMIC); } @@ -163,6 +161,7 @@ public void testFieldsQueryReplicatedAtomic() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFieldsQueryPartitionedTransactional() throws Exception { checkFieldsQuery(CacheMode.PARTITIONED, CacheAtomicityMode.TRANSACTIONAL); } @@ -170,6 +169,7 @@ public void testFieldsQueryPartitionedTransactional() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFieldsQueryReplicatedTransactional() throws Exception { checkFieldsQuery(CacheMode.REPLICATED, CacheAtomicityMode.TRANSACTIONAL); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectQueryArgumentsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectQueryArgumentsTest.java index e23935324b221..9b3b7707d2076 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectQueryArgumentsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryObjectQueryArgumentsTest.java @@ -37,20 +37,18 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * */ +@RunWith(JUnit4.class) public class IgniteBinaryObjectQueryArgumentsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 3; @@ -85,8 +83,6 @@ public class IgniteBinaryObjectQueryArgumentsTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setCacheConfiguration(getCacheConfigurations()); cfg.setMarshaller(null); @@ -185,6 +181,7 @@ private CacheConfiguration getCacheConfiguration(final String cacheName, final C /** * @throws Exception If failed. */ + @Test public void testObjectArgument() throws Exception { testKeyQuery(OBJECT_CACHE, new TestKey(1), new TestKey(2)); } @@ -192,6 +189,7 @@ public void testObjectArgument() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimitiveObjectArgument() throws Exception { testKeyValQuery(PRIM_CACHE, 1, 2); } @@ -199,6 +197,7 @@ public void testPrimitiveObjectArgument() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStringObjectArgument() throws Exception { testKeyValQuery(STR_CACHE, "str1", "str2"); } @@ -206,6 +205,7 @@ public void testStringObjectArgument() throws Exception { /** * @throws Exception If failed. */ + @Test public void testEnumObjectArgument() throws Exception { testKeyValQuery(ENUM_CACHE, EnumKey.KEY1, EnumKey.KEY2); } @@ -213,6 +213,7 @@ public void testEnumObjectArgument() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testUuidObjectArgument() throws Exception { final UUID uuid1 = UUID.randomUUID(); UUID uuid2 = UUID.randomUUID(); @@ -226,6 +227,7 @@ public void testUuidObjectArgument() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDateObjectArgument() throws Exception { testKeyValQuery(DATE_CACHE, new Date(0), new Date(1)); } @@ -233,6 +235,7 @@ public void testDateObjectArgument() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimestampArgument() throws Exception { testKeyValQuery(TIMESTAMP_CACHE, new Timestamp(0), new Timestamp(1)); } @@ -241,6 +244,7 @@ public void testTimestampArgument() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBigDecimalArgument() throws Exception { final ThreadLocalRandom rnd = ThreadLocalRandom.current(); @@ -353,6 +357,7 @@ private void testValQuery(final String cacheName, final T val1, final T val2 /** * @throws Exception If failed. */ + @Test public void testFieldSearch() throws Exception { final IgniteCache cache = ignite(0).cache(FIELD_CACHE); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryWrappedObjectFieldsQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryWrappedObjectFieldsQuerySelfTest.java index bff725c2eadd6..8313e7432b802 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryWrappedObjectFieldsQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteBinaryWrappedObjectFieldsQuerySelfTest.java @@ -23,7 +23,7 @@ public class IgniteBinaryWrappedObjectFieldsQuerySelfTest extends IgniteBinaryObjectFieldsQuerySelfTest { /** {@inheritDoc} */ - protected String getPersonClassName() { + @Override protected String getPersonClassName() { return "org.apache.ignite.tests.p2p.cache.PersonWrapper$Person"; } } diff --git 
a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractFieldsQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractFieldsQuerySelfTest.java index ce5c95e45c070..3f4d984ec5137 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractFieldsQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractFieldsQuerySelfTest.java @@ -55,12 +55,11 @@ import org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.spi.discovery.DiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -69,10 +68,8 @@ /** * Tests for fields queries. */ +@RunWith(JUnit4.class) public abstract class IgniteCacheAbstractFieldsQuerySelfTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static IgniteCache orgCache; @@ -100,8 +97,6 @@ public abstract class IgniteCacheAbstractFieldsQuerySelfTest extends GridCommonA cfg.setPeerClassLoadingEnabled(false); - cfg.setDiscoverySpi(discovery()); - if (hasCache) cfg.setCacheConfiguration(cacheConfiguration()); else @@ -127,15 +122,6 @@ protected CacheConfiguration cacheConfiguration() { return ccfg; } - /** @return Discovery SPI. */ - private DiscoverySpi discovery() { - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - return spi; - } - /** * @param clsK Class k. * @param clsV Class v. @@ -221,6 +207,7 @@ protected CacheAtomicityMode atomicityMode() { protected abstract int gridCount(); /** @throws Exception If failed. */ + @Test public void testCacheMetaData() throws Exception { // Put internal key to test filtering of internal objects. @@ -364,6 +351,7 @@ else if (!"cacheWithCustomKeyPrecision".equalsIgnoreCase(meta.cacheName())) /** * */ + @Test public void testExplain() { List> res = grid(0).cache(personCache.getName()).query(sqlFieldsQuery( String.format("explain select p.age, p.name, o.name " + @@ -373,7 +361,7 @@ public void testExplain() { for (List row : res) X.println("____ : " + row); - if (cacheMode() == PARTITIONED || (cacheMode() == REPLICATED && !isReplicatedOnly())) { + if (cacheMode() == PARTITIONED) { assertEquals(2, res.size()); assertTrue(((String)res.get(1).get(0)).contains(GridSqlQuerySplitter.mergeTableIdentifier(0))); @@ -384,6 +372,7 @@ public void testExplain() { /** @throws Exception If failed. 
*/ @SuppressWarnings("unchecked") + @Test public void testExecuteWithMetaDataAndPrecision() throws Exception { QueryEntity qeWithPrecision = new QueryEntity() .setKeyType("java.lang.Long") @@ -422,6 +411,7 @@ public void testExecuteWithMetaDataAndPrecision() throws Exception { } } + @Test public void testExecuteWithMetaDataAndCustomKeyPrecision() throws Exception { QueryEntity qeWithPrecision = new QueryEntity() .setKeyType("java.lang.String") @@ -479,6 +469,7 @@ public void testExecuteWithMetaDataAndCustomKeyPrecision() throws Exception { } /** @throws Exception If failed. */ + @Test public void testExecuteWithMetaData() throws Exception { QueryCursorImpl> cursor = (QueryCursorImpl>)personCache.query(sqlFieldsQuery( String.format("select p._KEY, p.name, p.age, o.name " + @@ -579,11 +570,13 @@ else if (cnt == 1) { } /** @throws Exception If failed. */ + @Test public void testExecute() throws Exception { doTestExecute(personCache, sqlFieldsQuery("select _KEY, name, age from Person")); } /** @throws Exception If failed. */ + @Test public void testExecuteNoOpCache() throws Exception { doTestExecute(noOpCache, sqlFieldsQuery("select _KEY, name, age from \"AffinityKey-Person\".Person")); } @@ -639,6 +632,7 @@ else if (cnt == 1) { } /** @throws Exception If failed. */ + @Test public void testExecuteWithArguments() throws Exception { QueryCursor> qry = personCache .query(sqlFieldsQuery("select _KEY, name, age from Person where age > ?").setArgs(30)); @@ -691,6 +685,7 @@ private SqlFieldsQuery sqlFieldsQuery(String sql) { } /** @throws Exception If failed. */ + @Test public void testSelectAllJoined() throws Exception { QueryCursor> qry = personCache.query(sqlFieldsQuery( @@ -755,6 +750,7 @@ else if (cnt == 1) { } /** @throws Exception If failed. 
*/ + @Test public void testEmptyResult() throws Exception { QueryCursor> qry = personCache.query(sqlFieldsQuery("select name from Person where age = 0")); @@ -771,6 +767,7 @@ public void testEmptyResult() throws Exception { * * @throws Exception If failed. */ + @Test public void testSingleResultUsesFindOne() throws Exception { QueryCursor> qry = intCache.query(sqlFieldsQuery("select _val from Integer where _key = 25")); @@ -790,6 +787,7 @@ public void testSingleResultUsesFindOne() throws Exception { * * @throws Exception If failed. */ + @Test public void testEmptyResultUsesFindOne() throws Exception { QueryCursor> qry = intCache.query(sqlFieldsQuery("select _val from Integer where _key = -10")); @@ -801,6 +799,7 @@ public void testEmptyResultUsesFindOne() throws Exception { } /** @throws Exception If failed. */ + @Test public void testQueryString() throws Exception { QueryCursor> qry = strCache.query(sqlFieldsQuery("select * from String")); @@ -818,6 +817,7 @@ public void testQueryString() throws Exception { } /** @throws Exception If failed. */ + @Test public void testQueryIntegersWithJoin() throws Exception { QueryCursor> qry = intCache.query(sqlFieldsQuery( "select i._KEY, i._VAL, j._KEY, j._VAL from Integer i join Integer j where i._VAL >= 100")); @@ -840,6 +840,7 @@ public void testQueryIntegersWithJoin() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPagination() throws Exception { // Query with page size 20. QueryCursor> qry = @@ -862,6 +863,7 @@ public void testPagination() throws Exception { } /** @throws Exception If failed. */ + @Test public void testNamedCache() throws Exception { try { IgniteCache cache = jcache("tmp_int", Integer.class, Integer.class); @@ -882,6 +884,7 @@ public void testNamedCache() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testNoPrimitives() throws Exception { try { final IgniteCache cache = grid(0).getOrCreateCache("tmp_without_index"); @@ -900,6 +903,7 @@ public void testNoPrimitives() throws Exception { } /** @throws Exception If failed. */ + @Test public void testComplexKeys() throws Exception { IgniteCache cache = jcache(PersonKey.class, Person.class); @@ -941,6 +945,7 @@ else if (cnt == 5) /** * @throws Exception If failed. */ + @Test public void testPaginationIterator() throws Exception { QueryCursor> qry = intCache.query(sqlFieldsQuery("select _key, _val from Integer").setPageSize(10)); @@ -961,6 +966,7 @@ public void testPaginationIterator() throws Exception { } /** @throws Exception If failed. */ + @Test public void testPaginationIteratorKeepAll() throws Exception { QueryCursor> qry = intCache.query(sqlFieldsQuery("select _key, _val from Integer").setPageSize(10)); @@ -1002,6 +1008,7 @@ public void testPaginationIteratorKeepAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPaginationGet() throws Exception { QueryCursor> qry = intCache.query(sqlFieldsQuery("select _key, _val from Integer").setPageSize(10)); @@ -1025,6 +1032,7 @@ public void testPaginationGet() throws Exception { } /** @throws Exception If failed. 
*/ + @Test public void testEmptyGrid() throws Exception { QueryCursor> qry = personCache .query(sqlFieldsQuery("select name, age from Person where age = 25")); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractInsertSqlQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractInsertSqlQuerySelfTest.java index c811cb53d6fd1..ca9c26b673d98 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractInsertSqlQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractInsertSqlQuerySelfTest.java @@ -39,9 +39,6 @@ import org.apache.ignite.internal.processors.query.QueryUtils; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.marshaller.Marshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; @@ -52,9 +49,6 @@ */ @SuppressWarnings("unchecked") public abstract class IgniteCacheAbstractInsertSqlQuerySelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ protected final Marshaller marsh; @@ -96,12 +90,6 @@ boolean isBinaryMarshaller() { cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - return cfg; } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractQuerySelfTest.java 
b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractQuerySelfTest.java index ac9de6f8101aa..59bcc0fb20d82 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractQuerySelfTest.java @@ -82,10 +82,9 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -110,9 +109,6 @@ public abstract class IgniteCacheAbstractQuerySelfTest extends GridCommonAbstrac /** Cache store. */ private static TestStore store = new TestStore(); - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * @return Grid count. 
*/ @@ -142,7 +138,7 @@ protected NearCacheConfiguration nearCacheConfiguration() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - c.setDiscoverySpi(new TcpDiscoverySpi().setForceServerMode(true).setIpFinder(ipFinder)); + ((TcpDiscoverySpi)c.getDiscoverySpi()).setForceServerMode(true); if (igniteInstanceName.startsWith("client")) { c.setClientMode(true); @@ -266,7 +262,7 @@ protected Ignite ignite() { super.afterTest(); for(String cacheName : ignite().cacheNames()) - ignite().cache(cacheName).removeAll(); + ignite().cache(cacheName).destroy(); } /** {@inheritDoc} */ @@ -286,21 +282,13 @@ protected Ignite ignite() { * * @throws Exception In case of error. */ - public void _testDifferentKeyTypes() throws Exception { - fail("http://atlassian.gridgain.com/jira/browse/GG-11216"); - + @Test + public void testDifferentKeyTypes() throws Exception { final IgniteCache cache = jcache(Object.class, Object.class); cache.put(1, "value"); - try { - cache.put("key", "value"); - - fail(); - } - catch (CacheException ignored) { - // No-op. - } + cache.put("key", "value"); } /** @@ -308,6 +296,7 @@ public void _testDifferentKeyTypes() throws Exception { * * @throws Exception In case of error. */ + @Test public void testDifferentValueTypes() throws Exception { IgniteCache cache = jcache(Integer.class, Object.class); @@ -323,6 +312,7 @@ public void testDifferentValueTypes() throws Exception { * * @throws Exception In case of error. */ + @Test public void testStringType() throws Exception { IgniteCache cache = jcache(Integer.class, String.class); @@ -342,6 +332,7 @@ public void testStringType() throws Exception { * * @throws Exception In case of error. 
*/ + @Test public void testIntegerType() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -367,6 +358,7 @@ public void testIntegerType() throws Exception { * * @throws Exception In case of error. */ + @Test public void testTableAliasInSqlQuery() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -404,6 +396,7 @@ public void testTableAliasInSqlQuery() throws Exception { * * @throws IgniteCheckedException If failed. */ + @Test public void testUserDefinedFunction() throws IgniteCheckedException { // Without alias. final IgniteCache cache = jcache(Object.class, Object.class); @@ -451,6 +444,7 @@ public void testUserDefinedFunction() throws IgniteCheckedException { * * @throws Exception If failed. */ + @Test public void testExpiration() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -483,6 +477,7 @@ public void testExpiration() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIllegalBounds() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -500,6 +495,7 @@ public void testIllegalBounds() throws Exception { * * @throws Exception In case of error. */ + @Test public void testComplexType() throws Exception { IgniteCache cache = jcache(Key.class, GridCacheQueryTestValue.class); @@ -521,7 +517,7 @@ public void testComplexType() throws Exception { QueryCursor> qry = cache .query(new SqlQuery(GridCacheQueryTestValue.class, - "fieldName='field1' and field2=1 and field3=1 and id=100500 and embeddedField2=11 and x=3")); + "fieldName='field1' and field2=1 and field3=1 and id=100500 and embeddedField2=11 and x=3")); Cache.Entry entry = F.first(qry.getAll()); @@ -533,6 +529,7 @@ public void testComplexType() throws Exception { /** * @throws Exception In case of error. 
*/ + @Test public void testComplexTypeKeepBinary() throws Exception { if (ignite().configuration().getMarshaller() == null || ignite().configuration().getMarshaller() instanceof BinaryMarshaller) { IgniteCache cache = jcache(Key.class, GridCacheQueryTestValue.class); @@ -605,6 +602,7 @@ private Key(long id) { * * @throws Exception In case of error. */ + @Test public void testSelectQuery() throws Exception { IgniteCache cache = jcache(Integer.class, String.class); @@ -621,11 +619,17 @@ public void testSelectQuery() throws Exception { /** * JUnit. - * - * @throws Exception In case of error. */ - public void testSimpleCustomTableName() throws Exception { - final IgniteCache cache = ignite().cache(DEFAULT_CACHE_NAME); + @Test + public void testSimpleCustomTableName() { + CacheConfiguration cacheConf = new CacheConfiguration(cacheConfiguration()) + .setName(DEFAULT_CACHE_NAME) + .setQueryEntities(Arrays.asList( + new QueryEntity(Integer.class, Type1.class), + new QueryEntity(Integer.class, Type2.class) + )); + + final IgniteCache cache = ignite().getOrCreateCache(cacheConf); cache.put(10, new Type1(1, "Type1 record #1")); cache.put(20, new Type1(2, "Type1 record #2")); @@ -658,6 +662,7 @@ public void testSimpleCustomTableName() throws Exception { * * @throws Exception In case of error. */ + @Test public void testMixedCustomTableName() throws Exception { final IgniteCache cache = jcache(Integer.class, Object.class); @@ -704,6 +709,7 @@ public void testMixedCustomTableName() throws Exception { * * @throws Exception In case of error. */ + @Test public void testDistributedJoinCustomTableName() throws Exception { IgniteCache cache = jcache(Integer.class, Object.class); @@ -731,6 +737,7 @@ public void testDistributedJoinCustomTableName() throws Exception { * * @throws Exception In case of error. 
*/ + @Test public void testObjectQuery() throws Exception { IgniteCache cache = jcache(Integer.class, ObjectValue.class); @@ -769,6 +776,7 @@ public void testObjectQuery() throws Exception { * * @throws Exception In case of error. */ + @Test public void testObjectWithString() throws Exception { IgniteCache cache = jcache(Integer.class, ObjectValue2.class); @@ -803,6 +811,7 @@ public void testObjectWithString() throws Exception { * * @throws Exception In case of error. */ + @Test public void testEnumObjectQuery() throws Exception { final IgniteCache cache = jcache(Long.class, EnumObject.class); @@ -937,13 +946,14 @@ private void assertEnumType(final List> enumObject /** * JUnit. - * - * @throws Exception In case of error. */ - public void _testObjectQueryWithSwap() throws Exception { - fail("http://atlassian.gridgain.com/jira/browse/GG-11216"); + @Test + public void testObjectQueryWithSwap() { + CacheConfiguration config = new CacheConfiguration(cacheConfiguration()); - IgniteCache cache = jcache(Integer.class, ObjectValue.class); + config.setOnheapCacheEnabled(true); + + IgniteCache cache = jcache(ignite(), config, Integer.class, ObjectValue.class); boolean partitioned = cache.getConfiguration(CacheConfiguration.class).getCacheMode() == PARTITIONED; @@ -956,16 +966,14 @@ public void _testObjectQueryWithSwap() throws Exception { IgniteCache c = g.cache(cache.getName()); for (int i = 0; i < cnt; i++) { - if (i % 2 == 0) { - assertNotNull(c.localPeek(i, CachePeekMode.ONHEAP)); + assertNotNull(c.localPeek(i, CachePeekMode.ONHEAP)); - c.localEvict(Collections.singleton(i)); // Swap. + c.localEvict(Collections.singleton(i)); // Swap. 
- if (!partitioned || g.affinity(cache.getName()).mapKeyToNode(i).isLocal()) { - ObjectValue peekVal = c.localPeek(i, CachePeekMode.ONHEAP); + if (!partitioned || g.affinity(cache.getName()).mapKeyToNode(i).isLocal()) { + ObjectValue peekVal = c.localPeek(i, CachePeekMode.ONHEAP); - assertNull("Non-null value for peek [key=" + i + ", val=" + peekVal + ']', peekVal); - } + assertNull("Non-null value for peek [key=" + i + ", val=" + peekVal + ']', peekVal); } } } @@ -1035,6 +1043,7 @@ public void _testObjectQueryWithSwap() throws Exception { * * @throws Exception In case of error. */ + @Test public void testFullTextSearch() throws Exception { IgniteCache cache = jcache(Integer.class, ObjectValue.class); @@ -1081,6 +1090,7 @@ public void testFullTextSearch() throws Exception { * * @throws Exception In case of error. */ + @Test public void testScanQuery() throws Exception { IgniteCache c1 = jcache(Integer.class, String.class); @@ -1119,6 +1129,7 @@ public void testScanQuery() throws Exception { /** * @throws Exception In case of error. */ + @Test public void testScanPartitionQuery() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -1160,13 +1171,17 @@ public void testScanPartitionQuery() throws Exception { /** * JUnit. - * - * @throws Exception In case of error. 
*/ - public void _testTwoObjectsTextSearch() throws Exception { - fail("http://atlassian.gridgain.com/jira/browse/GG-11216"); + @Test + public void testTwoObjectsTextSearch() { + CacheConfiguration conf = new CacheConfiguration<>(cacheConfiguration()); - IgniteCache c = jcache(Object.class, Object.class); + conf.setQueryEntities(Arrays.asList( + new QueryEntity(Integer.class, ObjectValue.class), + new QueryEntity(String.class, ObjectValueOther.class) + )); + + IgniteCache c = jcache(ignite(), conf, Object.class, Object.class); c.put(1, new ObjectValue("ObjectValue str", 1)); c.put("key", new ObjectValueOther("ObjectValueOther str")); @@ -1188,6 +1203,7 @@ public void _testTwoObjectsTextSearch() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPrimitiveType() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); cache.put(1, 1); @@ -1209,6 +1225,7 @@ public void testPrimitiveType() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPaginationIteratorDefaultCache() throws Exception { testPaginationIterator(jcache(ignite(), cacheConfiguration(), DEFAULT_CACHE_NAME, Integer.class, Integer.class)); } @@ -1216,6 +1233,7 @@ public void testPaginationIteratorDefaultCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPaginationIteratorNamedCache() throws Exception { testPaginationIterator(jcache(ignite(), cacheConfiguration(), Integer.class, Integer.class)); } @@ -1249,6 +1267,7 @@ private void testPaginationIterator(IgniteCache cache) throws /** * @throws Exception If failed. */ + @Test public void testPaginationGetDefaultCache() throws Exception { testPaginationGet(jcache(ignite(), cacheConfiguration(), DEFAULT_CACHE_NAME, Integer.class, Integer.class)); } @@ -1256,6 +1275,7 @@ public void testPaginationGetDefaultCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testPaginationGetNamedCache() throws Exception { testPaginationGet(jcache(ignite(), cacheConfiguration(), Integer.class, Integer.class)); } @@ -1290,6 +1310,7 @@ private void testPaginationGet(IgniteCache cache) throws Excep /** * @throws Exception If failed. */ + @Test public void testScanFilters() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -1326,6 +1347,7 @@ public void testScanFilters() throws Exception { /** * @throws IgniteCheckedException if failed. */ + @Test public void testBadHashObjectKey() throws IgniteCheckedException { IgniteCache cache = jcache(BadHashKeyObject.class, Byte.class); @@ -1340,6 +1362,7 @@ public void testBadHashObjectKey() throws IgniteCheckedException { /** * @throws IgniteCheckedException if failed. */ + @Test public void testTextIndexedKey() throws IgniteCheckedException { IgniteCache cache = jcache(ObjectValue.class, Long.class); @@ -1355,6 +1378,7 @@ public void testTextIndexedKey() throws IgniteCheckedException { /** * @throws Exception If failed. */ + @Test public void testOrderByOnly() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -1385,6 +1409,7 @@ public void testOrderByOnly() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLimitOnly() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -1413,6 +1438,7 @@ public void testLimitOnly() throws Exception { /** * @throws Exception If failed. */ + @Test public void testArray() throws Exception { IgniteCache cache = jcache(Integer.class, ArrayObject.class); @@ -1436,6 +1462,7 @@ public void testArray() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlQueryEvents() throws Exception { checkSqlQueryEvents(); } @@ -1443,6 +1470,7 @@ public void testSqlQueryEvents() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFieldsQueryMetadata() throws Exception { IgniteCache cache = jcache(UUID.class, Person.class); @@ -1478,6 +1506,8 @@ private void checkSqlQueryEvents() throws Exception { @Override public boolean apply(Event evt) { assert evt instanceof CacheQueryExecutedEvent; + System.out.println(">>> EVENT"); + if (evtsDisabled) fail("Cache events are disabled"); @@ -1520,6 +1550,7 @@ private void checkSqlQueryEvents() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryEvents() throws Exception { final Map map = new ConcurrentHashMap<>(); final IgniteCache cache = jcache(Integer.class, Integer.class); @@ -1624,6 +1655,7 @@ public void testScanQueryEvents() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTextQueryEvents() throws Exception { final Map map = new ConcurrentHashMap<>(); final IgniteCache cache = jcache(UUID.class, Person.class); @@ -1727,6 +1759,7 @@ public void testTextQueryEvents() throws Exception { /** * @throws Exception If failed. */ + @Test public void testFieldsQueryEvents() throws Exception { final IgniteCache cache = jcache(UUID.class, Person.class); final boolean evtsDisabled = cache.getConfiguration(CacheConfiguration.class).isEventsDisabled(); @@ -1781,6 +1814,7 @@ public void testFieldsQueryEvents() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalSqlQueryFromClient() throws Exception { try { Ignite g = startGrid("client"); @@ -1804,6 +1838,7 @@ public void testLocalSqlQueryFromClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLocalSqlFieldsQueryFromClient() throws Exception { try { Ignite g = startGrid("client"); @@ -2223,40 +2258,6 @@ public String value() { } } - /** - * Empty test object. - */ - @SuppressWarnings("UnusedDeclaration") - private static class EmptyObject { - /** */ - private int val; - - /** - * @param val Value. 
- */ - private EmptyObject(int val) { - this.val = val; - } - - /** {@inheritDoc} */ - @Override public int hashCode() { - return val; - } - - /** {@inheritDoc} */ - @Override public boolean equals(Object o) { - if (this == o) - return true; - - if (!(o instanceof EmptyObject)) - return false; - - EmptyObject that = (EmptyObject)o; - - return val == that.val; - } - } - /** * */ @@ -2397,9 +2398,9 @@ public EnumObject(long id, EnumType type) { */ @Override public String toString() { return "EnumObject{" + - "id=" + id + - ", type=" + type + - '}'; + "id=" + id + + ", type=" + type + + '}'; } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractSqlDmlQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractSqlDmlQuerySelfTest.java index a57a867526b4a..168b1b7f9c3a6 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractSqlDmlQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractSqlDmlQuerySelfTest.java @@ -33,9 +33,6 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.marshaller.Marshaller; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.IgniteTestResources; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; @@ -44,9 +41,6 @@ */ @SuppressWarnings("unchecked") public abstract class IgniteCacheAbstractSqlDmlQuerySelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ protected final Marshaller marsh; @@ -75,12 +69,6 @@ 
protected boolean isBinaryMarshaller() { cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - return cfg; } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCollocatedQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCollocatedQuerySelfTest.java index 2b3076c143bc8..643ae72684df3 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCollocatedQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCollocatedQuerySelfTest.java @@ -30,16 +30,17 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.GridRandom; import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** */ +@RunWith(JUnit4.class) public class IgniteCacheCollocatedQuerySelfTest extends GridCommonAbstractTest { /** */ private static final String QRY = @@ -61,19 +62,10 @@ public class IgniteCacheCollocatedQuerySelfTest extends GridCommonAbstractTest { /** */ private static final long SEED = ThreadLocalRandom.current().nextLong(); - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { 
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -113,6 +105,7 @@ private static List> query(IgniteCache c, boolean /** * Correct affinity. */ + @Test public void testColocatedQueryRight() { IgniteCache c = ignite(0).cache(DEFAULT_CACHE_NAME); @@ -140,6 +133,7 @@ public void testColocatedQueryRight() { /** * Correct affinity. */ + @Test public void testColocatedQueryWrong() { IgniteCache c = ignite(0).cache(DEFAULT_CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsQueryTest.java index 4e6af25e4cf72..7b6a07f8d3343 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigVariationsQueryTest.java @@ -38,6 +38,9 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.IgniteCacheConfigVariationsAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -49,6 +52,7 @@ /** * Config Variations query tests. */ +@RunWith(JUnit4.class) public class IgniteCacheConfigVariationsQueryTest extends IgniteCacheConfigVariationsAbstractTest { /** */ public static final int CNT = 50; @@ -75,6 +79,7 @@ public class IgniteCacheConfigVariationsQueryTest extends IgniteCacheConfigVaria * @throws Exception If failed. 
*/ @SuppressWarnings("serial") + @Test public void testScanQuery() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -106,6 +111,7 @@ public void testScanQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanPartitionQuery() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -164,7 +170,7 @@ public void testScanPartitionQuery() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("SubtractionInCompareTo") + @Test public void testScanFilters() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -208,7 +214,7 @@ public void testScanFilters() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("SubtractionInCompareTo") + @Test public void testLocalScanQuery() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -246,7 +252,7 @@ public void testLocalScanQuery() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("SubtractionInCompareTo") + @Test public void testScanQueryLocalFilter() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { @@ -293,7 +299,7 @@ public void testScanQueryLocalFilter() throws Exception { /** * @throws Exception If failed. 
*/ - @SuppressWarnings("SubtractionInCompareTo") + @Test public void testScanQueryPartitionFilter() throws Exception { runInAllDataModes(new TestRunnable() { @Override public void run() throws Exception { diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationPrimitiveTypesSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationPrimitiveTypesSelfTest.java index 4ea537bb133c8..c1939de06b286 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationPrimitiveTypesSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheConfigurationPrimitiveTypesSelfTest.java @@ -21,41 +21,25 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ -@SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheConfigurationPrimitiveTypesSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); } - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - 
disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - - return cfg; - } - /** * @throws Exception If failed. */ + @Test public void testPrimitiveTypes() throws Exception { Ignite ignite = startGrid(1); @@ -96,4 +80,4 @@ public void testPrimitiveTypes() throws Exception { assertEquals(cacheDouble.query(new SqlQuery<>(Double.class, "1 = 1")).getAll().size(), 1); assertEquals(cacheBoolean.query(new SqlQuery<>(Boolean.class, "1 = 1")).getAll().size(), 1); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCrossCacheJoinRandomTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCrossCacheJoinRandomTest.java index 3ebb3b4bb894c..29274903faac6 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCrossCacheJoinRandomTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheCrossCacheJoinRandomTest.java @@ -38,10 +38,10 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -51,10 +51,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheCrossCacheJoinRandomTest extends AbstractH2CompareQueryTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -107,10 +105,6 @@ public class 
IgniteCacheCrossCacheJoinRandomTest extends AbstractH2CompareQueryT @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = ((TcpDiscoverySpi)cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -195,6 +189,7 @@ private CacheConfiguration configuration(String name, CacheMode cacheMode, int b /** * @throws Exception If failed. */ + @Test public void testJoin2Caches() throws Exception { testJoin(2, MODES_1); } @@ -202,6 +197,7 @@ public void testJoin2Caches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoin3Caches() throws Exception { testJoin(3, MODES_1); } @@ -209,6 +205,7 @@ public void testJoin3Caches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoin4Caches() throws Exception { testJoin(4, MODES_2); } @@ -216,6 +213,7 @@ public void testJoin4Caches() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJoin5Caches() throws Exception { testJoin(5, MODES_2); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDeleteSqlQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDeleteSqlQuerySelfTest.java index 92c40b8a8ed29..1baedf81bd951 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDeleteSqlQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDeleteSqlQuerySelfTest.java @@ -22,15 +22,20 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.query.QueryCursor; import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheDeleteSqlQuerySelfTest extends IgniteCacheAbstractSqlDmlQuerySelfTest { /** * */ + @Test public void testDeleteSimple() { IgniteCache p = cache(); @@ -55,6 +60,7 @@ public void testDeleteSimple() { /** * */ + @Test public void testDeleteSingle() { IgniteCache p = cache(); @@ -83,6 +89,7 @@ public void testDeleteSingle() { * In binary mode, this test checks that inner forcing of keepBinary works - without it, EntryProcessors * inside DML engine would compare binary and non-binary objects with the same keys and thus fail. 
*/ + @Test public void testDeleteSimpleWithoutKeepBinary() { IgniteCache p = ignite(0).cache("S2P"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCollocatedAndNotTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCollocatedAndNotTest.java index f53f2633cd832..7a6c3b80755fb 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCollocatedAndNotTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCollocatedAndNotTest.java @@ -35,10 +35,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -46,10 +46,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedJoinCollocatedAndNotTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE = "person"; @@ -70,10 +68,6 @@ public class IgniteCacheDistributedJoinCollocatedAndNotTest extends GridCommonAb cfg.setCacheKeyConfiguration(keyCfg); - TcpDiscoverySpi spi = ((TcpDiscoverySpi)cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - List ccfgs = new ArrayList<>(); { @@ -156,6 +150,7 @@ 
private CacheConfiguration configuration(String name) { /** * @throws Exception If failed. */ + @Test public void testJoin() throws Exception { Ignite client = grid(2); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCustomAffinityMapper.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCustomAffinityMapper.java index b8aa4cb985861..34fcfeac94a09 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCustomAffinityMapper.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinCustomAffinityMapper.java @@ -32,11 +32,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -45,10 +45,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedJoinCustomAffinityMapper extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE = "person"; @@ -65,8 +63,6 @@ public class IgniteCacheDistributedJoinCustomAffinityMapper extends GridCommonAb @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws 
Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - List ccfgs = new ArrayList<>(); { @@ -156,6 +152,7 @@ private CacheConfiguration configuration(String name) { /** * @throws Exception If failed. */ + @Test public void testJoinCustomAffinityMapper() throws Exception { Ignite ignite = ignite(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinNoIndexTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinNoIndexTest.java index f8bc0888792d0..e105fcdb9128e 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinNoIndexTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinNoIndexTest.java @@ -34,11 +34,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -46,10 +46,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedJoinNoIndexTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE = 
"person"; @@ -63,10 +61,6 @@ public class IgniteCacheDistributedJoinNoIndexTest extends GridCommonAbstractTes @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = ((TcpDiscoverySpi)cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - List ccfgs = new ArrayList<>(); { @@ -132,6 +126,7 @@ private CacheConfiguration configuration(String name) { /** * @throws Exception If failed. */ + @Test public void testJoin() throws Exception { Ignite client = grid(2); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinPartitionedAndReplicatedTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinPartitionedAndReplicatedTest.java index 462a0082050a1..f1f43a692ed13 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinPartitionedAndReplicatedTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinPartitionedAndReplicatedTest.java @@ -34,10 +34,11 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -47,10 +48,8 @@ /** * */ +@RunWith(JUnit4.class) public 
class IgniteCacheDistributedJoinPartitionedAndReplicatedTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE = "person"; @@ -67,10 +66,6 @@ public class IgniteCacheDistributedJoinPartitionedAndReplicatedTest extends Grid @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = ((TcpDiscoverySpi)cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -179,6 +174,7 @@ private List caches(boolean idx, /** * @throws Exception If failed. */ + @Test public void testJoin1() throws Exception { join(true, REPLICATED, PARTITIONED, PARTITIONED); } @@ -186,15 +182,16 @@ public void testJoin1() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-5956") + @Test public void testJoin2() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-5956"); - join(true, PARTITIONED, REPLICATED, PARTITIONED); } /** * @throws Exception If failed. 
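The `testJoin2` hunk above shows the second recurring change: a leading `fail("https://issues.apache.org/jira/browse/IGNITE-...")` call is replaced by a declarative `@Ignore` with the same URL. The practical difference is how the runner reports the test, which this small sketch demonstrates (class names are illustrative; the JIRA URL is the one from the hunk above):

```java
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// With fail("JIRA url") the muted test counted as a *failure* on every
// run; with @Ignore the runner skips it and reports it as ignored, so
// known-broken tests no longer pollute failure statistics.
public class IgnoreVsFailDemo {
    public static class Muted {
        /** Muted declaratively; the body is kept intact for when the issue is fixed. */
        @Ignore("https://issues.apache.org/jira/browse/IGNITE-5956")
        @Test
        public void knownBroken() {
            throw new AssertionError("would fail if ever run");
        }

        /** A healthy test that should still run and pass. */
        @Test
        public void healthy() {
            // passes
        }
    }

    /** Runs the demo suite and returns the runner's summary. */
    public static Result run() {
        return JUnitCore.runClasses(Muted.class);
    }
}
```

Running `IgnoreVsFailDemo.run()` yields one executed test, zero failures, and one ignored test, which is exactly the reporting change the patch is after.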
*/ + @Test public void testJoin3() throws Exception { join(true, PARTITIONED, PARTITIONED, REPLICATED); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinQueryConditionsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinQueryConditionsTest.java index 6f20923b0f69b..820c7ae6a419b 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinQueryConditionsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinQueryConditionsTest.java @@ -32,10 +32,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -43,10 +43,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedJoinQueryConditionsTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE = "person"; @@ -63,10 +61,6 @@ public class IgniteCacheDistributedJoinQueryConditionsTest extends GridCommonAbs @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = 
((TcpDiscoverySpi) cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -86,6 +80,7 @@ public class IgniteCacheDistributedJoinQueryConditionsTest extends GridCommonAbs /** * @throws Exception If failed. */ + @Test public void testJoinQuery1() throws Exception { joinQuery1(true); } @@ -173,6 +168,7 @@ private void joinQuery1(boolean idx) throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery2() throws Exception { Ignite client = grid(2); @@ -283,6 +279,7 @@ public void _testJoinQuery3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery4() throws Exception { Ignite client = grid(2); @@ -334,6 +331,7 @@ public void testJoinQuery4() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery5() throws Exception { Ignite client = grid(2); @@ -375,6 +373,7 @@ public void testJoinQuery5() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJoinQuery6() throws Exception { Ignite client = grid(2); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinTest.java index dee8e64e82c44..40e261db79e05 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDistributedJoinTest.java @@ -35,17 +35,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.GridRandom; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedJoinTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static Connection conn; @@ -53,10 +51,6 @@ public class IgniteCacheDistributedJoinTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = ((TcpDiscoverySpi)cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - CacheConfiguration ccfga = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfga.setName("a"); @@ -200,6 +194,7 @@ private static Z insert(Statement s, Z z) throws SQLException { /** * @throws Exception If 
failed. */ + @Test public void testJoins() throws Exception { Ignite ignite = ignite(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDuplicateEntityConfigurationSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDuplicateEntityConfigurationSelfTest.java index d5c0f0a08785d..e82f78960e3ba 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDuplicateEntityConfigurationSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheDuplicateEntityConfigurationSelfTest.java @@ -21,32 +21,16 @@ import org.apache.ignite.cache.QueryEntity; import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheDuplicateEntityConfigurationSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - - return c; - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrid(0); @@ -55,6 +39,7 @@ public class 
IgniteCacheDuplicateEntityConfigurationSelfTest extends GridCommonA /** * @throws Exception If failed. */ + @Test public void testClassDuplicatesQueryEntity() throws Exception { String cacheName = "duplicate"; @@ -86,6 +71,7 @@ public void testClassDuplicatesQueryEntity() throws Exception { /** * @throws Exception If failed. */ + @Test public void testClassDuplicatesQueryReverse() throws Exception { String cacheName = "duplicate"; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFieldsQueryNoDataSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFieldsQueryNoDataSelfTest.java index 75c0cd4966a5a..bcc032558a6cb 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFieldsQueryNoDataSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFieldsQueryNoDataSelfTest.java @@ -22,10 +22,10 @@ import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -33,10 +33,8 @@ /** * Test for local query on partitioned cache without data. */ +@RunWith(JUnit4.class) public class IgniteCacheFieldsQueryNoDataSelfTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -52,12 +50,6 @@ public class IgniteCacheFieldsQueryNoDataSelfTest extends GridCommonAbstractTest cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -69,6 +61,7 @@ public class IgniteCacheFieldsQueryNoDataSelfTest extends GridCommonAbstractTest /** * @throws Exception If failed. */ + @Test public void testQuery() throws Exception { Collection> res = grid().cache(DEFAULT_CACHE_NAME) .query(new SqlQuery("Integer", "from Integer")).getAll(); @@ -76,4 +69,4 @@ public void testQuery() throws Exception { assert res != null; assert res.isEmpty(); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFullTextQueryNodeJoiningSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFullTextQueryNodeJoiningSelfTest.java index 162b1e5bacaaa..3ee152da3c553 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFullTextQueryNodeJoiningSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheFullTextQueryNodeJoiningSelfTest.java @@ -32,10 +32,11 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -44,10 +45,9 @@ /** * Tests cache in-place modification logic with iterative value increment. */ +@RunWith(JUnit4.class) +@Ignore("https://issues.apache.org/jira/browse/IGNITE-2229") public class IgniteCacheFullTextQueryNodeJoiningSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Number of nodes to test on. */ private static final int GRID_CNT = 3; @@ -80,18 +80,12 @@ public class IgniteCacheFullTextQueryNodeJoiningSelfTest extends GridCommonAbstr cfg.setCacheConfiguration(cache); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - TcpCommunicationSpi commSpi = new TcpCommunicationSpi(); commSpi.setSharedMemoryPort(-1); cfg.setCommunicationSpi(commSpi); - cfg.setDiscoverySpi(disco); - return cfg; } @@ -105,9 +99,8 @@ protected CacheAtomicityMode atomicityMode() { /** * @throws Exception If failed. 
*/ + @Test public void testFullTextQueryNodeJoin() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-2229"); - for (int r = 0; r < 5; r++) { startGrids(GRID_CNT); @@ -145,4 +138,4 @@ private IndexedEntity(String val) { this.val = val; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsSqlTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsSqlTest.java index 617909db3027a..7fe1156a7b129 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsSqlTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheGroupsSqlTest.java @@ -30,16 +30,15 @@ import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.AffinityKey; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -50,25 +49,14 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheGroupsSqlTest extends GridCommonAbstractTest { - /** */ - private static 
final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String GROUP1 = "grp1"; /** */ private static final String GROUP2 = "grp2"; - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { - IgniteConfiguration cfg = super.getConfiguration(gridName); - - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - - return cfg; - } - /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { super.beforeTest(); @@ -86,6 +74,7 @@ public class IgniteCacheGroupsSqlTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testSqlQuery() throws Exception { Ignite node = ignite(0); @@ -112,6 +101,7 @@ public void testSqlQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery1() throws Exception { joinQuery(GROUP1, GROUP2, REPLICATED, PARTITIONED, TRANSACTIONAL, TRANSACTIONAL); } @@ -119,6 +109,7 @@ public void testJoinQuery1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery2() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Void call() throws Exception { @@ -131,6 +122,7 @@ public void testJoinQuery2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery3() throws Exception { joinQuery(GROUP1, GROUP1, PARTITIONED, PARTITIONED, TRANSACTIONAL, ATOMIC); } @@ -138,6 +130,7 @@ public void testJoinQuery3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery4() throws Exception { joinQuery(GROUP1, GROUP1, REPLICATED, REPLICATED, ATOMIC, TRANSACTIONAL); } @@ -145,6 +138,7 @@ public void testJoinQuery4() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJoinQuery5() throws Exception { joinQuery(GROUP1, null, REPLICATED, PARTITIONED, TRANSACTIONAL, TRANSACTIONAL); } @@ -152,6 +146,7 @@ public void testJoinQuery5() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQuery6() throws Exception { joinQuery(GROUP1, null, PARTITIONED, PARTITIONED, TRANSACTIONAL, ATOMIC); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInsertSqlQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInsertSqlQuerySelfTest.java index 0f72883507129..e14a02577a408 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInsertSqlQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheInsertSqlQuerySelfTest.java @@ -25,10 +25,10 @@ import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.IgniteCacheUpdateSqlQuerySelfTest.AllTypes; @@ -36,28 +36,21 @@ * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheInsertSqlQuerySelfTest extends IgniteCacheAbstractInsertSqlQuerySelfTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = 
super.getConfiguration(igniteInstanceName); cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - return cfg; } /** * */ + @Test public void testInsertWithExplicitKey() { IgniteCache p = ignite(0).cache("S2P").withKeepBinary(); @@ -72,6 +65,7 @@ public void testInsertWithExplicitKey() { /** * */ + @Test public void testInsertFromSubquery() { IgniteCache p = ignite(0).cache("S2P").withKeepBinary(); @@ -92,6 +86,7 @@ public void testInsertFromSubquery() { /** * */ + @Test public void testInsertWithExplicitPrimitiveKey() { IgniteCache p = ignite(0).cache("I2P").withKeepBinary(); @@ -107,6 +102,7 @@ public void testInsertWithExplicitPrimitiveKey() { /** * */ + @Test public void testInsertWithDynamicKeyInstantiation() { IgniteCache p = ignite(0).cache("K2P").withKeepBinary(); @@ -121,6 +117,7 @@ public void testInsertWithDynamicKeyInstantiation() { /** * Test insert with implicit column names. 
*/ + @Test public void testImplicitColumnNames() { IgniteCache p = ignite(0).cache("K2P").withKeepBinary(); @@ -139,6 +136,7 @@ public void testImplicitColumnNames() { /** * */ + @Test public void testFieldsCaseSensitivity() { IgniteCache p = ignite(0).cache("K22P").withKeepBinary(); @@ -153,6 +151,7 @@ public void testFieldsCaseSensitivity() { /** * */ + @Test public void testPrimitives() { IgniteCache p = ignite(0).cache("I2I"); @@ -167,7 +166,7 @@ public void testPrimitives() { /** * */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testDuplicateKeysException() { final IgniteCache p = ignite(0).cache("I2I"); @@ -193,6 +192,7 @@ public void testDuplicateKeysException() { /** * */ + @Test public void testUuidHandling() { IgniteCache p = ignite(0).cache("U2I"); @@ -206,6 +206,7 @@ public void testUuidHandling() { /** * */ + @Test public void testNestedFieldsHandling() { IgniteCache p = ignite(0).cache("I2AT"); @@ -231,6 +232,7 @@ public void testNestedFieldsHandling() { /** * Check that few sequential start-stops of the cache do not affect work of DML. 
*/ + @Test public void testCacheRestartHandling() { for (int i = 0; i < 4; i++) { IgniteCache p = diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedCollocationTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedCollocationTest.java index 3d3f27b07c573..694ddf18d4996 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedCollocationTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedCollocationTest.java @@ -34,9 +34,9 @@ import org.apache.ignite.internal.processors.query.h2.sql.AbstractH2CompareQueryTest; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -45,10 +45,8 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheJoinPartitionedAndReplicatedCollocationTest extends AbstractH2CompareQueryTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE = "person"; @@ -80,10 +78,6 @@ public class IgniteCacheJoinPartitionedAndReplicatedCollocationTest extends Abst @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = 
((TcpDiscoverySpi)cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -176,6 +170,7 @@ private CacheConfiguration configuration(String name, int backups) { /** * @throws Exception If failed. */ + @Test public void testJoin() throws Exception { Ignite client = grid(SRVS); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedTest.java index 4f080e6c7545b..9d655828758ae 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinPartitionedAndReplicatedTest.java @@ -26,9 +26,6 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; @@ -37,6 +34,10 @@ import java.util.ArrayList; import java.util.List; import java.util.concurrent.Callable; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -46,10 +47,9 @@ /** * */ +@RunWith(JUnit4.class) +@Ignore("https://issues.apache.org/jira/browse/IGNITE-5016") public class IgniteCacheJoinPartitionedAndReplicatedTest extends GridCommonAbstractTest { - /** */ - private 
static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE = "person"; @@ -66,10 +66,6 @@ public class IgniteCacheJoinPartitionedAndReplicatedTest extends GridCommonAbstr @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = ((TcpDiscoverySpi)cfg.getDiscoverySpi()); - - spi.setIpFinder(IP_FINDER); - List ccfgs = new ArrayList<>(); { @@ -171,9 +167,9 @@ private CacheConfiguration configuration(String name) { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-5016") + @Test public void testJoin() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-5016"); - Ignite client = grid(2); IgniteCache personCache = client.cache(PERSON_CACHE); @@ -224,9 +220,9 @@ public void testJoin() throws Exception { /** */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-5016") + @Test public void testReplicatedToPartitionedLeftJoin() { - fail("https://issues.apache.org/jira/browse/IGNITE-5016"); - Ignite client = grid(2); IgniteCache personCache = client.cache(PERSON_CACHE); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinQueryWithAffinityKeyTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinQueryWithAffinityKeyTest.java index be111531320a0..05e71b096e21c 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinQueryWithAffinityKeyTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheJoinQueryWithAffinityKeyTest.java @@ -37,10 +37,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.F; -import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -50,10 +50,8 @@ * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheJoinQueryWithAffinityKeyTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES = 5; @@ -67,8 +65,6 @@ public class IgniteCacheJoinQueryWithAffinityKeyTest extends GridCommonAbstractT @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - CacheKeyConfiguration keyCfg = new CacheKeyConfiguration(); keyCfg.setTypeName(TestKeyWithAffinity.class.getName()); @@ -95,6 +91,7 @@ public class IgniteCacheJoinQueryWithAffinityKeyTest extends GridCommonAbstractT /** * @throws Exception If failed. */ + @Test public void testJoinQuery() throws Exception { testJoinQuery(PARTITIONED, 0, false, true); @@ -106,6 +103,7 @@ public void testJoinQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQueryEscapeAll() throws Exception { escape = true; @@ -115,6 +113,7 @@ public void testJoinQueryEscapeAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJoinQueryWithAffinityKey() throws Exception { testJoinQuery(PARTITIONED, 0, true, true); @@ -126,6 +125,7 @@ public void testJoinQueryWithAffinityKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQueryWithAffinityKeyEscapeAll() throws Exception { escape = true; @@ -135,6 +135,7 @@ public void testJoinQueryWithAffinityKeyEscapeAll() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQueryWithAffinityKeyNotQueryField() throws Exception { testJoinQuery(PARTITIONED, 0, true, false); @@ -146,6 +147,7 @@ public void testJoinQueryWithAffinityKeyNotQueryField() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJoinQueryWithAffinityKeyNotQueryFieldEscapeAll() throws Exception { escape = true; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLargeResultSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLargeResultSelfTest.java index bac8d1d177273..4d0c19fdb7b5c 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLargeResultSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLargeResultSelfTest.java @@ -25,30 +25,22 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import 
static org.apache.ignite.cache.CacheMode.PARTITIONED; /** */ +@RunWith(JUnit4.class) public class IgniteCacheLargeResultSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -71,6 +63,7 @@ public class IgniteCacheLargeResultSelfTest extends GridCommonAbstractTest { /** */ + @Test public void testLargeResult() { // Fill cache. try (IgniteDataStreamer streamer = ignite(0).dataStreamer(DEFAULT_CACHE_NAME)) { @@ -103,4 +96,4 @@ public void testLargeResult() { // Currently we have no ways to do multiple passes through a merge table. 
} -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest.java index a4f398fceff18..c9002f94169ea 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest.java @@ -34,10 +34,14 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test to validate https://issues.apache.org/jira/browse/IGNITE-2310 */ +@RunWith(JUnit4.class) public class IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest extends IgniteCacheLockPartitionOnAffinityRunAbstractTest { /** Atomic cache. */ private static final String ATOMIC_CACHE = "atomic"; @@ -137,6 +141,7 @@ protected void destroyCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotReservedAtomicCacheOp() throws Exception { notReservedCacheOp(ATOMIC_CACHE); } @@ -144,6 +149,7 @@ public void testNotReservedAtomicCacheOp() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotReservedTxCacheOp() throws Exception { notReservedCacheOp(TRANSACT_CACHE); } @@ -202,6 +208,7 @@ private void notReservedCacheOp(final String cacheName) throws Exception { /** * @throws Exception If failed. */ + @Test public void testReservedPartitionCacheOp() throws Exception { // Workaround for initial update job metadata. 
grid(0).cache(Person.class.getSimpleName()).clear(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunTest.java index 89ef607b86fa6..3825480d73559 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunTest.java @@ -45,10 +45,15 @@ import org.apache.ignite.resources.IgniteInstanceResource; import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test to validate https://issues.apache.org/jira/browse/IGNITE-2310 */ +@RunWith(JUnit4.class) public class IgniteCacheLockPartitionOnAffinityRunTest extends IgniteCacheLockPartitionOnAffinityRunAbstractTest { /** * @param ignite Ignite. @@ -290,7 +295,11 @@ private static int getPersonsCountMultipleCache(final IgniteEx ignite, IgniteLog /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7692") + @Test public void testSingleCache() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-7692"); + final PersonsCountGetter personsCntGetter = new PersonsCountGetter() { @Override public int getPersonsCount(IgniteEx ignite, IgniteLogger log, int orgId) throws Exception { return getPersonsCountSingleCache(ignite, log, orgId); @@ -344,6 +353,7 @@ public void testSingleCache() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testMultipleCaches() throws Exception { final PersonsCountGetter personsCntGetter = new PersonsCountGetter() { @Override public int getPersonsCount(IgniteEx ignite, IgniteLogger log, int orgId) throws Exception { @@ -402,6 +412,7 @@ public void testMultipleCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCheckReservePartitionException() throws Exception { int orgId = primaryKey(grid(1).cache(Organization.class.getSimpleName())); @@ -424,6 +435,7 @@ public void testCheckReservePartitionException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReleasePartitionJobCompletesNormally() throws Exception { final int orgId = primaryKey(grid(1).cache(Organization.class.getSimpleName())); @@ -472,6 +484,7 @@ public void testReleasePartitionJobCompletesNormally() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReleasePartitionJobThrowsException() throws Exception { final int orgId = primaryKey(grid(1).cache(Organization.class.getSimpleName())); @@ -532,6 +545,7 @@ public void testReleasePartitionJobThrowsException() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReleasePartitionJobThrowsError() throws Exception { final int orgId = primaryKey(grid(1).cache(Organization.class.getSimpleName())); @@ -591,6 +605,7 @@ public void testReleasePartitionJobThrowsError() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReleasePartitionJobUnmarshalingFails() throws Exception { final int orgId = primaryKey(grid(1).cache(Organization.class.getSimpleName())); @@ -609,6 +624,7 @@ public void testReleasePartitionJobUnmarshalingFails() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReleasePartitionJobMasterLeave() throws Exception { final int orgId = primaryKey(grid(0).cache(Organization.class.getSimpleName())); @@ -698,6 +714,7 @@ public void testReleasePartitionJobMasterLeave() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReleasePartitionJobImplementMasterLeave() throws Exception { final int orgId = primaryKey(grid(0).cache(Organization.class.getSimpleName())); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest.java index a05e10ec77cc2..73b467a612927 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest.java @@ -35,10 +35,14 @@ import org.apache.ignite.spi.collision.CollisionJobContext; import org.apache.ignite.spi.collision.CollisionSpi; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test to validate https://issues.apache.org/jira/browse/IGNITE-2310 */ +@RunWith(JUnit4.class) public class IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest extends IgniteCacheLockPartitionOnAffinityRunAbstractTest { @@ -58,6 +62,7 @@ public class IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest /** * @throws Exception If failed. */ + @Test public void testPartitionReservation() throws Exception { int orgId = 0; cancelAllJobs = true; @@ -81,6 +86,7 @@ public void testPartitionReservation() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testJobFinishing() throws Exception { final AtomicInteger jobNum = new AtomicInteger(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMergeSqlQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMergeSqlQuerySelfTest.java index c92c7dcf1f004..cab011990434f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMergeSqlQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMergeSqlQuerySelfTest.java @@ -21,6 +21,9 @@ import java.util.Arrays; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.IgniteCacheUpdateSqlQuerySelfTest.AllTypes; @@ -28,10 +31,12 @@ * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheMergeSqlQuerySelfTest extends IgniteCacheAbstractInsertSqlQuerySelfTest { /** * */ + @Test public void testMergeWithExplicitKey() { IgniteCache p = ignite(0).cache("S2P").withKeepBinary(); @@ -46,6 +51,7 @@ public void testMergeWithExplicitKey() { /** * */ + @Test public void testMergeFromSubquery() { IgniteCache p = ignite(0).cache("S2P").withKeepBinary(); @@ -66,6 +72,7 @@ public void testMergeFromSubquery() { /** * */ + @Test public void testMergeWithExplicitPrimitiveKey() { IgniteCache p = ignite(0).cache("I2P").withKeepBinary(); @@ -81,6 +88,7 @@ public void testMergeWithExplicitPrimitiveKey() { /** * */ + @Test public void testMergeWithDynamicKeyInstantiation() { IgniteCache p = ignite(0).cache("K2P").withKeepBinary(); @@ -95,6 +103,7 @@ public void testMergeWithDynamicKeyInstantiation() { /** * */ + @Test public void testFieldsCaseSensitivity() { IgniteCache p = ignite(0).cache("K22P").withKeepBinary(); @@ -109,6 
+118,7 @@ public void testFieldsCaseSensitivity() { /** * */ + @Test public void testPrimitives() { IgniteCache p = ignite(0).cache("I2I").withKeepBinary(); @@ -123,6 +133,7 @@ public void testPrimitives() { /** * */ + @Test public void testNestedFieldsHandling() { IgniteCache p = ignite(0).cache("I2AT"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMultipleIndexedTypesTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMultipleIndexedTypesTest.java index 0614a050a9048..b3d410f750570 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMultipleIndexedTypesTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheMultipleIndexedTypesTest.java @@ -29,10 +29,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheMultipleIndexedTypesTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTest() throws Exception { @@ -49,6 +53,7 @@ public class IgniteCacheMultipleIndexedTypesTest extends GridCommonAbstractTest /** * @throws Exception If failed. 
*/ + @Test public void testMultipleIndexedTypes() throws Exception { CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoClassQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoClassQuerySelfTest.java index 2eb66eecfa6a2..5bf1834e3810d 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoClassQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheNoClassQuerySelfTest.java @@ -26,9 +26,10 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.marshaller.jdk.JdkMarshaller; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; @@ -37,16 +38,14 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteCacheNoClassQuerySelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @SuppressWarnings({"unchecked", "deprecation"}) @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - c.setDiscoverySpi(new TcpDiscoverySpi().setForceServerMode(true).setIpFinder(ipFinder)); + ((TcpDiscoverySpi)c.getDiscoverySpi()).setForceServerMode(true); CacheConfiguration cc = defaultCacheConfiguration(); @@ -91,6 
+90,7 @@ public class IgniteCacheNoClassQuerySelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNoClass() throws Exception { try { startGrid(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectKeyIndexingSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectKeyIndexingSelfTest.java index 2865767fb04d4..e9766f1aeeb82 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectKeyIndexingSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheObjectKeyIndexingSelfTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test index behavior when key is of plain Object type per indexing settings. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheObjectKeyIndexingSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -52,6 +56,7 @@ protected static CacheConfiguration cacheCfg() { } /** */ + @Test public void testObjectKeyHandling() throws Exception { Ignite ignite = grid(); @@ -92,7 +97,7 @@ public void testObjectKeyHandling() throws Exception { Arrays.asList(uid, "C") ) ); - + cache.remove(uid); // Removal has worked for both keys although the table was the same and keys were of different type @@ -107,7 +112,7 @@ private void assertItemsNumber(long num) { assertEquals(num, grid().cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("select count(*) from TestObject")).getAll() .get(0).get(0)); } - + /** */ private static class TestObject { /** */ diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapEvictQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapEvictQueryTest.java index 6a22f052f2bff..a7ee406efcfd2 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapEvictQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapEvictQueryTest.java @@ -35,30 +35,22 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.LT; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; 
import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** */ +@RunWith(JUnit4.class) public class IgniteCacheOffheapEvictQueryTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = defaultCacheConfiguration(); cacheCfg.setCacheMode(PARTITIONED); @@ -87,6 +79,7 @@ public class IgniteCacheOffheapEvictQueryTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testEvictAndRemove() throws Exception { final int KEYS_CNT = 3000; final int THREADS_CNT = 250; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapIndexScanTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapIndexScanTest.java index c0cc46dfc5a02..4cf7d77a39be9 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapIndexScanTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheOffheapIndexScanTest.java @@ -26,20 +26,18 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import 
org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Based scanCount with offheap index issue. */ +@RunWith(JUnit4.class) public class IgniteCacheOffheapIndexScanTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static IgniteCache cache; @@ -47,12 +45,6 @@ public class IgniteCacheOffheapIndexScanTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - CacheConfiguration cacheCfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); cacheCfg.setCacheMode(LOCAL); @@ -75,6 +67,7 @@ public class IgniteCacheOffheapIndexScanTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testQueryPlan() throws Exception { for (int i = 0 ; i < 1000; i++) cache.put(i, new Person(i, "firstName" + i, "lastName" + i, i % 100)); @@ -184,4 +177,4 @@ public Person(int id, int orgId, String firstName, String lastName, double salar ']'; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingQueryErrorTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingQueryErrorTest.java index a2f1d1dedf52c..fbf5adb720863 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingQueryErrorTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheP2pUnmarshallingQueryErrorTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.configuration.IgniteConfiguration; import 
org.apache.ignite.lang.IgniteBiPredicate; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Checks behavior on exception while unmarshalling key. */ +@RunWith(JUnit4.class) public class IgniteCacheP2pUnmarshallingQueryErrorTest extends IgniteCacheP2pUnmarshallingErrorTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -41,6 +45,7 @@ public class IgniteCacheP2pUnmarshallingQueryErrorTest extends IgniteCacheP2pUnm } /** {@inheritDoc} */ + @Test @Override public void testResponseMessageOnUnmarshallingFailed() { readCnt.set(Integer.MAX_VALUE); @@ -62,6 +67,7 @@ public class IgniteCacheP2pUnmarshallingQueryErrorTest extends IgniteCacheP2pUnm /** * @throws Exception If failed. */ + @Test public void testResponseMessageOnRequestUnmarshallingFailed() throws Exception { readCnt.set(Integer.MAX_VALUE); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionedQueryMultiThreadedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionedQueryMultiThreadedSelfTest.java index dc8f8d3696828..aa7cb75406f52 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionedQueryMultiThreadedSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePartitionedQueryMultiThreadedSelfTest.java @@ -40,11 +40,11 @@ import org.apache.ignite.internal.util.typedef.CAX; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -52,6 +52,7 @@ /** * Tests for partitioned cache queries. */ +@RunWith(JUnit4.class) public class IgniteCachePartitionedQueryMultiThreadedSelfTest extends GridCommonAbstractTest { /** */ private static final boolean TEST_INFO = true; @@ -59,9 +60,6 @@ public class IgniteCachePartitionedQueryMultiThreadedSelfTest extends GridCommon /** Number of test grids (nodes). Should not be less than 2. */ private static final int GRID_CNT = 3; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** Don't start grid by default. */ public IgniteCachePartitionedQueryMultiThreadedSelfTest() { super(false); @@ -71,12 +69,6 @@ public IgniteCachePartitionedQueryMultiThreadedSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); @@ -123,6 +115,7 @@ public IgniteCachePartitionedQueryMultiThreadedSelfTest() { * @throws Exception If failed. 
      */
     @SuppressWarnings({"TooBroadScope"})
+    @Test
     public void testLuceneAndSqlMultithreaded() throws Exception {
         // ---------- Test parameters ---------- //
         int luceneThreads = 10;
@@ -292,4 +285,4 @@ String degree() {
         return S.toString(PersonObj.class, this);
     }
 }
-}
\ No newline at end of file
+}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePrimitiveFieldsQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePrimitiveFieldsQuerySelfTest.java
index cf098b1ea403a..f8a759bc698e4 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePrimitiveFieldsQuerySelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCachePrimitiveFieldsQuerySelfTest.java
@@ -24,21 +24,19 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.util.typedef.F;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

 import java.util.LinkedHashMap;
 import java.util.List;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCachePrimitiveFieldsQuerySelfTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Cache name. */
     private static final String CACHE_NAME = "cache";

@@ -46,12 +44,6 @@ public class IgniteCachePrimitiveFieldsQuerySelfTest extends GridCommonAbstractT
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-
-        discoSpi.setIpFinder(IP_FINDER);
-
-        cfg.setDiscoverySpi(discoSpi);
-
         cfg.setCacheConfiguration(cacheConfiguration(CACHE_NAME));

         // Force BinaryMarshaller.
@@ -93,6 +85,7 @@ private CacheConfiguration cacheConfiguration(String cache
     /**
      * @throws Exception if failed.
      */
+    @Test
     public void testStaticCache() throws Exception {
         checkCache(ignite(0).cache(CACHE_NAME));
     }
@@ -112,7 +105,7 @@ private void checkCache(IgniteCache cache) throws Exceptio
     }

     /**
-     * 
+     *
      */
     @SuppressWarnings("unused")
     private static class IndexedType {
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueriesLoadTest1.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueriesLoadTest1.java
index e5d0e2c84a835..07e18a7f05a35 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueriesLoadTest1.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueriesLoadTest1.java
@@ -50,11 +50,11 @@
 import org.apache.ignite.lang.IgniteCallable;
 import org.apache.ignite.lang.IgniteRunnable;
 import org.apache.ignite.resources.IgniteInstanceResource;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -66,6 +66,7 @@
  *
  */
 @SuppressWarnings("unchecked")
+@RunWith(JUnit4.class)
 public class IgniteCacheQueriesLoadTest1 extends GridCommonAbstractTest {
     /** Operation. */
     private static final String OPERATION = "Operation";
@@ -130,9 +131,6 @@ public class IgniteCacheQueriesLoadTest1 extends GridCommonAbstractTest {
     private static final String FIND_DEPOSIT_SQL = "SELECT _key FROM \"" + DEPOSIT_CACHE + "\"." + DEPOSIT +
         " WHERE " + TRADER_ID + "=?";

-    /** */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final int NODES = 5;
@@ -150,8 +148,6 @@ public class IgniteCacheQueriesLoadTest1 extends GridCommonAbstractTest {
         cfg.setMarshaller(null);

-        ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);
-
         RendezvousAffinityFunction aff = new RendezvousAffinityFunction();

         aff.setPartitions(3000);

@@ -174,6 +170,7 @@ public class IgniteCacheQueriesLoadTest1 extends GridCommonAbstractTest {
     /**
      * @throws Exception If failed.
*/ + @Test public void testQueries() throws Exception { runQueries(1, true, 10_000); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryH2IndexingLeakTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryH2IndexingLeakTest.java index 59be13878aebd..e29b068625d4b 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryH2IndexingLeakTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryH2IndexingLeakTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_H2_INDEXING_CACHE_CLEANUP_PERIOD; import static org.apache.ignite.IgniteSystemProperties.IGNITE_H2_INDEXING_CACHE_THREAD_USAGE_TIMEOUT; @@ -44,6 +47,7 @@ /** * Tests leaks at the IgniteH2Indexing */ +@RunWith(JUnit4.class) public class IgniteCacheQueryH2IndexingLeakTest extends GridCommonAbstractTest { /** */ private static final long TEST_TIMEOUT = 2 * 60 * 1000; @@ -135,6 +139,7 @@ private static int getStatementCacheSize(GridQueryProcessor qryProcessor) { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testLeaksInIgniteH2IndexingOnTerminatedThread() throws Exception { final IgniteCache c = grid(0).cache(DEFAULT_CACHE_NAME); @@ -187,6 +192,7 @@ public void testLeaksInIgniteH2IndexingOnTerminatedThread() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testLeaksInIgniteH2IndexingOnUnusedThread() throws Exception { final IgniteCache c = grid(0).cache(DEFAULT_CACHE_NAME); @@ -219,4 +225,4 @@ public void testLeaksInIgniteH2IndexingOnUnusedThread() throws Exception { fut.get(); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryIndexSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryIndexSelfTest.java index f9916aeb1e3c3..73b8eb0893445 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryIndexSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryIndexSelfTest.java @@ -27,6 +27,9 @@ import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.internal.util.typedef.internal.S; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CachePeekMode.ALL; @@ -34,6 +37,7 @@ /** * Tests for cache query index. */ +@RunWith(JUnit4.class) public class IgniteCacheQueryIndexSelfTest extends GridCacheAbstractSelfTest { /** Grid count. */ private static final int GRID_CNT = 2; @@ -54,6 +58,7 @@ public class IgniteCacheQueryIndexSelfTest extends GridCacheAbstractSelfTest { /** * @throws Exception If failed. */ + @Test public void testWithoutStoreLoad() throws Exception { IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -72,6 +77,7 @@ public void testWithoutStoreLoad() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testWithStoreLoad() throws Exception { for (int i = 0; i < ENTRY_CNT; i++) storeStgy.putToStore(i, new CacheValue(i)); @@ -133,4 +139,4 @@ int value() { return S.toString(CacheValue.class, this); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryLoadSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryLoadSelfTest.java index 70f350cd332b7..76b6f3a81e083 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryLoadSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryLoadSelfTest.java @@ -36,11 +36,11 @@ import org.apache.ignite.internal.util.typedef.P2; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteBiInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -48,10 +48,8 @@ /** * Test that entries are indexed on load/reload methods. */ +@RunWith(JUnit4.class) public class IgniteCacheQueryLoadSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Puts count. 
*/ private static final int PUT_CNT = 10; @@ -82,12 +80,6 @@ public IgniteCacheQueryLoadSelfTest() { cfg.setCacheConfiguration(ccfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -117,6 +109,7 @@ private long size(Class cls) throws IgniteCheckedException { /** * @throws Exception If failed. */ + @Test public void testLoadCache() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -135,6 +128,7 @@ public void testLoadCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheAsync() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -153,6 +147,7 @@ public void testLoadCacheAsync() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLoadCacheFiltered() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -176,6 +171,7 @@ public boolean apply(Integer key, ValueObject val) { /** * @throws Exception If failed. */ + @Test public void testLoadCacheAsyncFiltered() throws Exception { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); @@ -199,6 +195,7 @@ public boolean apply(Integer key, ValueObject val) { /** * @throws Exception If failed. */ + @Test public void testReloadAsync() throws Exception { STORE_MAP.put(1, new ValueObject(1)); @@ -219,6 +216,7 @@ public void testReloadAsync() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReloadAll() throws Exception { for (int i = 0; i < PUT_CNT; i++) STORE_MAP.put(i, new ValueObject(i)); @@ -328,4 +326,4 @@ int value() { return S.toString(ValueObject.class, this); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryMultiThreadedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryMultiThreadedSelfTest.java index eb926a10c15fc..d3d53972e4773 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryMultiThreadedSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheQueryMultiThreadedSelfTest.java @@ -42,16 +42,12 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteKernal; import org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager; -import org.apache.ignite.internal.processors.query.GridQueryProcessor; -import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; import org.apache.ignite.internal.util.typedef.CAX; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import org.jetbrains.annotations.NotNull; -import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -60,7 +56,7 @@ /** * Multi-threaded tests for cache queries. 
*/ -@SuppressWarnings("StatementWithEmptyBody") +@RunWith(JUnit4.class) public class IgniteCacheQueryMultiThreadedSelfTest extends GridCommonAbstractTest { /** */ private static final boolean TEST_INFO = true; @@ -68,9 +64,6 @@ public class IgniteCacheQueryMultiThreadedSelfTest extends GridCommonAbstractTes /** Number of test grids (nodes). Should not be less than 2. */ private static final int GRID_CNT = 3; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private static AtomicInteger idxSwapCnt = new AtomicInteger(); @@ -89,12 +82,6 @@ public IgniteCacheQueryMultiThreadedSelfTest() { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration(cacheConfiguration()); return cfg; @@ -224,6 +211,7 @@ private Set affinityNodes(Iterable> entries, * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testMultiThreadedSwapUnswapString() throws Exception { int threadCnt = 50; final int keyCnt = 2000; @@ -294,6 +282,7 @@ public void testMultiThreadedSwapUnswapString() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testMultiThreadedSwapUnswapLong() throws Exception { int threadCnt = 50; final int keyCnt = 2000; @@ -365,9 +354,8 @@ public void testMultiThreadedSwapUnswapLong() throws Exception { * @throws Exception If failed. 
*/ @SuppressWarnings({"TooBroadScope"}) - public void _testMultiThreadedSwapUnswapLongString() throws Exception { - fail("http://atlassian.gridgain.com/jira/browse/GG-11216"); - + @Test + public void testMultiThreadedSwapUnswapLongString() throws Exception { int threadCnt = 50; final int keyCnt = 2000; final int valCnt = 10000; @@ -438,6 +426,7 @@ public void _testMultiThreadedSwapUnswapLongString() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testMultiThreadedSwapUnswapObject() throws Exception { int threadCnt = 50; final int keyCnt = 4000; @@ -510,6 +499,7 @@ public void testMultiThreadedSwapUnswapObject() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testMultiThreadedSameQuery() throws Exception { int threadCnt = 50; final int keyCnt = 10; @@ -573,6 +563,7 @@ public void testMultiThreadedSameQuery() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testMultiThreadedNewQueries() throws Exception { int threadCnt = 50; final int keyCnt = 10; @@ -632,6 +623,7 @@ public void testMultiThreadedNewQueries() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testMultiThreadedScanQuery() throws Exception { int threadCnt = 50; final int keyCnt = 500; @@ -690,6 +682,7 @@ public void testMultiThreadedScanQuery() throws Exception { * @throws Exception If failed. 
      */
     @SuppressWarnings({"TooBroadScope"})
+    @Test
     public void testMultiThreadedSqlFieldsQuery() throws Throwable {
         int threadCnt = 16;
         final int keyCnt = 1100; // set resultSet size bigger than page size
@@ -763,4 +756,4 @@ public int value() {
             return val;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryErrorSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryErrorSelfTest.java
index 09790854b2d03..a89ed29db1f97 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryErrorSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryErrorSelfTest.java
@@ -21,10 +21,14 @@
 import javax.cache.CacheException;
 import org.apache.ignite.cache.query.SqlFieldsQuery;
 import org.apache.ignite.testframework.GridTestUtils;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Java API query error messages test.
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheSqlQueryErrorSelfTest extends GridCacheAbstractSelfTest {
     /** {@inheritDoc} */
     @Override protected int gridCount() {
@@ -36,6 +40,7 @@ public class IgniteCacheSqlQueryErrorSelfTest extends GridCacheAbstractSelfTest
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSelectWrongTable() throws Exception {
         checkSqlErrorMessage("select from wrong",
             "Failed to parse query. Table \"WRONG\" not found");
@@ -46,6 +51,7 @@ public void testSelectWrongTable() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSelectWrongColumnName() throws Exception {
         checkSqlErrorMessage("select wrong from test",
             "Failed to parse query. Column \"WRONG\" not found");
@@ -56,6 +62,7 @@ public void testSelectWrongColumnName() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSelectWrongSyntax() throws Exception {
         checkSqlErrorMessage("select from test where",
             "Failed to parse query. Syntax error in SQL statement \"SELECT FROM TEST WHERE[*]");
@@ -66,6 +73,7 @@ public void testSelectWrongSyntax() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDmlWrongTable() throws Exception {
         checkSqlErrorMessage("insert into wrong (id, val) values (3, 'val3')",
             "Failed to parse query. Table \"WRONG\" not found");
@@ -85,6 +93,7 @@ public void testDmlWrongTable() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDmlWrongColumnName() throws Exception {
         checkSqlErrorMessage("insert into test (id, wrong) values (3, 'val3')",
             "Failed to parse query. Column \"WRONG\" not found");
@@ -104,6 +113,7 @@ public void testDmlWrongColumnName() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDmlWrongSyntax() throws Exception {
         checkSqlErrorMessage("insert test (id, val) values (3, 'val3')",
             "Failed to parse query. Syntax error in SQL statement \"INSERT TEST[*] (ID, VAL)");
@@ -123,6 +133,7 @@ public void testDmlWrongSyntax() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDdlWrongTable() throws Exception {
         checkSqlErrorMessage("create table test (id int primary key, val varchar)",
             "Table already exists: TEST");
@@ -145,6 +156,7 @@ public void testDdlWrongTable() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDdlWrongColumnName() throws Exception {
         checkSqlErrorMessage("create index idx1 on test (wrong)",
             "Column doesn't exist: WRONG");
@@ -158,6 +170,7 @@ public void testDdlWrongColumnName() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDdlWrongSyntax() throws Exception {
         checkSqlErrorMessage("create table wrong (id int wrong key, val varchar)",
             "Failed to parse query. Syntax error in SQL statement \"CREATE TABLE WRONG (ID INT WRONG[*]");
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryMultiThreadedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryMultiThreadedSelfTest.java
index 7241196b502ee..7388150e89ca2 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryMultiThreadedSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheSqlQueryMultiThreadedSelfTest.java
@@ -33,11 +33,11 @@
 import org.apache.ignite.internal.IgniteInternalFuture;
 import org.apache.ignite.internal.util.GridRandom;
 import org.apache.ignite.internal.util.typedef.X;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
 import static org.apache.ignite.cache.CacheMode.PARTITIONED;
@@ -45,20 +45,12 @@
 /**
  *
  */
+@RunWith(JUnit4.class)
 public class IgniteCacheSqlQueryMultiThreadedSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(igniteInstanceName);

-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        c.setDiscoverySpi(disco);
-
         CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
ccfg.setCacheMode(PARTITIONED); @@ -86,6 +78,7 @@ public class IgniteCacheSqlQueryMultiThreadedSelfTest extends GridCommonAbstract /** * @throws Exception If failed. */ + @Test public void testQuery() throws Exception { final IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -117,6 +110,7 @@ public void testQuery() throws Exception { * Test put and parallel query. * @throws Exception If failed. */ + @Test public void testQueryPut() throws Exception { final IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -189,4 +183,4 @@ public int age() { return age; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStarvationOnRebalanceTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStarvationOnRebalanceTest.java index 621d10d14f8d4..2f525ca570f6d 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStarvationOnRebalanceTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheStarvationOnRebalanceTest.java @@ -29,6 +29,9 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -36,6 +39,7 @@ /** * Test to reproduce https://issues.apache.org/jira/browse/IGNITE-3073. */ +@RunWith(JUnit4.class) public class IgniteCacheStarvationOnRebalanceTest extends GridCacheAbstractSelfTest { /** Grid count. */ private static final int GRID_CNT = 4; @@ -86,6 +90,7 @@ public class IgniteCacheStarvationOnRebalanceTest extends GridCacheAbstractSelfT /** * @throws Exception If failed. 
*/ + @Test public void testLoadSystemWithPutAndStartRebalancing() throws Exception { final IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); @@ -163,4 +168,4 @@ int value() { return S.toString(CacheValue.class, this); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUnionDuplicatesTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUnionDuplicatesTest.java index 45c9f31aed653..dfb65f5f71f8f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUnionDuplicatesTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUnionDuplicatesTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.processors.query.h2.sql.AbstractH2CompareQueryTest; import org.apache.ignite.internal.processors.query.h2.sql.BaseH2CompareQueryTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class IgniteCacheUnionDuplicatesTest extends AbstractH2CompareQueryTest { /** */ private static IgniteCache pCache; @@ -40,14 +44,14 @@ public class IgniteCacheUnionDuplicatesTest extends AbstractH2CompareQueryTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - cfg.setCacheConfiguration(cacheConfiguration("orgCache", PARTITIONED, Integer.class, Organization.class)); + cfg.setCacheConfiguration(cacheConfiguration("part", PARTITIONED, Integer.class, Organization.class)); return cfg; } /** {@inheritDoc} */ @Override protected void createCaches() { - pCache = ignite.cache("orgCache"); + pCache = ignite.cache("part"); } /** {@inheritDoc} */ @@ -73,6 +77,7 @@ public class 
IgniteCacheUnionDuplicatesTest extends AbstractH2CompareQueryTest { /** * @throws Exception If failed. */ + @Test public void testUnionDuplicateFilter() throws Exception { compareQueryRes0(pCache, "select name from \"part\".Organization " + "union " + @@ -83,6 +88,8 @@ public void testUnionDuplicateFilter() throws Exception { @Override protected Statement initializeH2Schema() throws SQLException { Statement st = super.initializeH2Schema(); + st.executeUpdate("CREATE SCHEMA \"part\";"); + st.execute("create table \"part\".ORGANIZATION" + " (_key int not null," + " _val other not null," + diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUpdateSqlQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUpdateSqlQuerySelfTest.java index 20cbe3a3bce96..7445d8324dfbd 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUpdateSqlQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheUpdateSqlQuerySelfTest.java @@ -32,11 +32,15 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteCacheUpdateSqlQuerySelfTest extends IgniteCacheAbstractSqlDmlQuerySelfTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -65,6 +69,7 @@ private static CacheConfiguration createAllTypesCacheConfig() { /** * */ + @Test public void testUpdateSimple() { IgniteCache p = cache(); @@ -95,6 +100,7 @@ public void testUpdateSimple() { /** * */ + @Test public void testUpdateSingle() { IgniteCache p = cache(); @@ -125,6 +131,7 @@ public void testUpdateSingle() { /** * */ 
+ @Test public void testUpdateValueAndFields() { IgniteCache p = cache(); @@ -155,6 +162,7 @@ public void testUpdateValueAndFields() { /** * */ + @Test public void testDefault() { IgniteCache p = cache(); @@ -183,6 +191,7 @@ public void testDefault() { } /** */ + @Test public void testTypeConversions() throws ParseException { IgniteCache cache = ignite(0).cache("L2AT"); @@ -227,6 +236,7 @@ public void testTypeConversions() throws ParseException { } /** */ + @Test public void testSingleInnerFieldUpdate() throws ParseException { IgniteCache cache = ignite(0).cache("L2AT"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCheckClusterStateBeforeExecuteQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCheckClusterStateBeforeExecuteQueryTest.java index 7f155ed61a117..a2b258e054769 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCheckClusterStateBeforeExecuteQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCheckClusterStateBeforeExecuteQueryTest.java @@ -24,27 +24,22 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.GridTestUtils.assertThrows; /** * */ +@RunWith(JUnit4.class) public class IgniteCheckClusterStateBeforeExecuteQueryTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new 
TcpDiscoveryVmIpFinder(true); - - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - DataStorageConfiguration pCfg = new DataStorageConfiguration(); pCfg.setDefaultDataRegionConfiguration(new DataRegionConfiguration() @@ -73,6 +68,7 @@ public class IgniteCheckClusterStateBeforeExecuteQueryTest extends GridCommonAbs /** * @throws Exception On failed. */ + @Test public void testDynamicSchemaChangesPersistence() throws Exception { final IgniteEx ig = startGrid(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectCacheQueriesFailoverTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectCacheQueriesFailoverTest.java index 39634cb5228f7..b415d8d41dd6d 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectCacheQueriesFailoverTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectCacheQueriesFailoverTest.java @@ -36,12 +36,16 @@ import org.apache.ignite.internal.IgniteClientReconnectFailoverAbstractTest; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteBiPredicate; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectCacheQueriesFailoverTest extends IgniteClientReconnectFailoverAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -73,6 +77,7 @@ public class IgniteClientReconnectCacheQueriesFailoverTest extends IgniteClientR /** * @throws Exception If failed. 
*/ + @Test public void testReconnectCacheQueries() throws Exception { final Ignite client = grid(serverCount()); @@ -120,6 +125,7 @@ public void testReconnectCacheQueries() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReconnectScanQuery() throws Exception { final Ignite client = grid(serverCount()); @@ -229,4 +235,4 @@ public void setName(String name) { return S.toString(Person.class, this); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectQueriesTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectQueriesTest.java index 5f45d80eb317a..1f72741e7c756 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectQueriesTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteClientReconnectQueriesTest.java @@ -37,6 +37,9 @@ import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; @@ -45,6 +48,7 @@ /** * */ +@RunWith(JUnit4.class) public class IgniteClientReconnectQueriesTest extends IgniteClientReconnectAbstractTest { /** */ public static final String QUERY_CACHE = "query"; @@ -84,6 +88,7 @@ public class IgniteClientReconnectQueriesTest extends IgniteClientReconnectAbstr /** * @throws Exception If failed. */ + @Test public void testQueryReconnect() throws Exception { Ignite cln = grid(serverCount()); @@ -128,6 +133,7 @@ public void testQueryReconnect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReconnectQueryInProgress() throws Exception { Ignite cln = grid(serverCount()); @@ -187,6 +193,7 @@ public void testReconnectQueryInProgress() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryReconnect() throws Exception { Ignite cln = grid(serverCount()); @@ -244,6 +251,7 @@ public void testScanQueryReconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryReconnectInProgress1() throws Exception { scanQueryReconnectInProgress(false); } @@ -251,6 +259,7 @@ public void testScanQueryReconnectInProgress1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testScanQueryReconnectInProgress2() throws Exception { scanQueryReconnectInProgress(true); } @@ -437,4 +446,4 @@ public void setSurname(String surname) { return S.toString(Person.class, this); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCrossCachesJoinsQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCrossCachesJoinsQueryTest.java index c80ae693fee84..cbe3b579ccc5c 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCrossCachesJoinsQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCrossCachesJoinsQueryTest.java @@ -48,9 +48,9 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.SB; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -60,10 +60,8 @@ * */ @SuppressWarnings({"unchecked", "PackageVisibleField", "serial"}) +@RunWith(JUnit4.class) public class IgniteCrossCachesJoinsQueryTest extends AbstractH2CompareQueryTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final String PERSON_CACHE_NAME = "person"; @@ -98,8 +96,6 @@ public class IgniteCrossCachesJoinsQueryTest extends AbstractH2CompareQueryTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setClientMode(client); return cfg; @@ -317,6 +313,7 @@ private boolean useCollocatedData() { /** * @throws Exception If failed. */ + @Test public void testDistributedJoins1() throws Exception { distributedJoins = true; @@ -326,6 +323,7 @@ public void testDistributedJoins1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedJoins2() throws Exception { distributedJoins = true; @@ -335,6 +333,7 @@ public void testDistributedJoins2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedJoins3() throws Exception { distributedJoins = true; @@ -344,6 +343,7 @@ public void testDistributedJoins3() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollocatedJoins1() throws Exception { distributedJoins = false; @@ -353,6 +353,7 @@ public void testCollocatedJoins1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCollocatedJoins2() throws Exception { distributedJoins = false; @@ -362,6 +363,7 @@ public void testCollocatedJoins2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCollocatedJoins3() throws Exception { distributedJoins = false; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicSqlRestoreTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicSqlRestoreTest.java index f7dc7b41ba6de..1c2aa20507016 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicSqlRestoreTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteDynamicSqlRestoreTest.java @@ -18,12 +18,20 @@ package org.apache.ignite.internal.processors.cache; import java.io.Serializable; +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.Statement; +import java.sql.Timestamp; import java.util.Arrays; import java.util.Collections; import java.util.LinkedHashMap; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.Ignition; import org.apache.ignite.binary.BinaryObject; import org.apache.ignite.cache.QueryEntity; import org.apache.ignite.cache.query.SqlFieldsQuery; @@ -33,10 +41,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.spi.IgniteSpiException; +import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.hamcrest.CoreMatchers.containsString; import static 
org.hamcrest.MatcherAssert.assertThat; @@ -44,6 +57,8 @@ /** * */ +@SuppressWarnings("Duplicates") +@RunWith(JUnit4.class) public class IgniteDynamicSqlRestoreTest extends GridCommonAbstractTest implements Serializable { public static final String TEST_CACHE_NAME = "test"; @@ -84,6 +99,7 @@ public class IgniteDynamicSqlRestoreTest extends GridCommonAbstractTest implemen /** * @throws Exception if failed. */ + @Test public void testMergeChangedConfigOnCoordinator() throws Exception { { //given: two started nodes with test table @@ -117,7 +133,7 @@ public void testMergeChangedConfigOnCoordinator() throws Exception { //and: change data try (IgniteDataStreamer s = ig.dataStreamer(TEST_CACHE_NAME)) { s.allowOverwrite(true); - for (int i = 0; i < 5_000; i++) + for (int i = 0; i < 50; i++) s.addData(i, null); } @@ -134,7 +150,7 @@ public void testMergeChangedConfigOnCoordinator() throws Exception { //then: everything is ok try (IgniteDataStreamer s = ig1.dataStreamer(TEST_CACHE_NAME)) { s.allowOverwrite(true); - for (int i = 0; i < 50_000; i++) { + for (int i = 0; i < 50; i++) { BinaryObject bo = ig1.binary().builder(TEST_INDEX_OBJECT) .setField("a", i, Object.class) .setField("b", String.valueOf(i), Object.class) @@ -147,14 +163,136 @@ public void testMergeChangedConfigOnCoordinator() throws Exception { IgniteCache cache = ig1.cache(TEST_CACHE_NAME); - assertThat(doExplainPlan(cache, "explain select * from TestIndexObject where a > 5"), containsString("myindexa")); + assertIndexUsed(cache, "explain select * from TestIndexObject where a > 5", "myindexa"); assertFalse(cache.query(new SqlFieldsQuery("SELECT a,b,c FROM TestIndexObject limit 1")).getAll().isEmpty()); } } + /** + * @throws Exception If failed. + */ + @SuppressWarnings("AssertWithSideEffects") + @Test + public void testIndexCreationWhenNodeStopped() throws Exception { + // Start topology. 
+ startGrid(0); + Ignite srv2 = startGrid(1); + Ignite cli; + + Ignition.setClientMode(true); + + try { + cli = startGrid(2); + } + finally { + Ignition.setClientMode(false); + } + + cli.cluster().active(true); + + // Create table, add some data. + int entryCnt = 50; + + try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10802")) { + executeJdbc(conn, + " CREATE TABLE PERSON (\n" + + " FIRST_NAME VARCHAR,\n" + + " LAST_NAME VARCHAR,\n" + + " ADDRESS VARCHAR,\n" + + " LANG VARCHAR,\n" + + " BIRTH_DATE TIMESTAMP,\n" + + " CONSTRAINT PK_PESON PRIMARY KEY (FIRST_NAME,LAST_NAME,ADDRESS,LANG)\n" + + " ) WITH \"key_type=PersonKeyType, CACHE_NAME=PersonCache, value_type=PersonValueType, AFFINITY_KEY=FIRST_NAME,template=PARTITIONED,backups=1\""); + + try (PreparedStatement stmt = conn.prepareStatement( + "insert into Person(LANG, FIRST_NAME, ADDRESS, LAST_NAME, BIRTH_DATE) values(?,?,?,?,?)")) { + for (int i = 0; i < entryCnt; i++) { + String s = String.valueOf(i); + + stmt.setString(1, s); + stmt.setString(2, s); + stmt.setString(3, s); + stmt.setString(4, s); + stmt.setTimestamp(5, new Timestamp(System.currentTimeMillis())); + + stmt.executeUpdate(); + } + } + } + + // Stop second node. + srv2.close(); + + // Create an index on remaining node. + try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10802")) { + executeJdbc(conn, "create index PERSON_FIRST_NAME_IDX on PERSON(FIRST_NAME)"); + } + + // Restart second node. + startGrid(1); + + // Await for index rebuild on started node. 
+ assert GridTestUtils.waitForCondition(new PA() { + @Override public boolean apply() { + try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10801")) { + try (PreparedStatement stmt = conn.prepareStatement( + "EXPLAIN SELECT * FROM Person USE INDEX(PERSON_FIRST_NAME_IDX) WHERE FIRST_NAME=?")) { + stmt.setString(1, String.valueOf(1)); + + StringBuilder fullPlan = new StringBuilder(); + + try (ResultSet rs = stmt.executeQuery()) { + while (rs.next()) + fullPlan.append(rs.getString(1)).append("; "); + } + + System.out.println("PLAN: " + fullPlan); + + return fullPlan.toString().contains("PUBLIC.PERSON_FIRST_NAME_IDX"); + } + } + catch (Exception e) { + throw new RuntimeException("Query failed.", e); + } + } + }, 5_000); + + // Make sure that data could be queried. + try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10802")) { + try (PreparedStatement stmt = conn.prepareStatement( + "SELECT COUNT(*) FROM Person USE INDEX(PERSON_FIRST_NAME_IDX) WHERE FIRST_NAME=?")) { + for (int i = 0; i < entryCnt; i ++) { + stmt.setString(1, String.valueOf(i)); + + try (ResultSet rs = stmt.executeQuery()) { + rs.next(); + + long cnt = rs.getLong(1); + + assertEquals(1L, cnt); + } + } + } + } + } + + /** + * Execute a statement through JDBC connection. + * + * @param conn Connection. + * @param sql Statement. + * @throws Exception If failed. + */ + private static void executeJdbc(Connection conn, String sql) throws Exception { + try (Statement stmt = conn.createStatement()) { + stmt.execute(sql); + } + } + /** * @throws Exception if failed. 
*/ + @Test public void testTakeConfigFromJoiningNodeOnInactiveGrid() throws Exception { { //given: two started nodes with test table @@ -186,7 +324,7 @@ public void testTakeConfigFromJoiningNodeOnInactiveGrid() throws Exception { //then: config for cache was applying successful IgniteCache cache = ig.cache(TEST_CACHE_NAME); - assertThat(doExplainPlan(cache, "explain select * from TestIndexObject where a > 5"), containsString("myindexa")); + assertIndexUsed(cache, "explain select * from TestIndexObject where a > 5", "myindexa"); assertFalse(cache.query(new SqlFieldsQuery("SELECT a,b,c FROM TestIndexObject limit 1")).getAll().isEmpty()); } } @@ -194,6 +332,7 @@ public void testTakeConfigFromJoiningNodeOnInactiveGrid() throws Exception { /** * @throws Exception if failed. */ + @Test public void testResaveConfigAfterMerge() throws Exception { { //given: two started nodes with test table @@ -233,7 +372,7 @@ public void testResaveConfigAfterMerge() throws Exception { IgniteCache cache = ig.cache(TEST_CACHE_NAME); - assertThat(doExplainPlan(cache, "explain select * from TestIndexObject where a > 5"), containsString("myindexa")); + assertIndexUsed(cache, "explain select * from TestIndexObject where a > 5", "myindexa"); assertFalse(cache.query(new SqlFieldsQuery("SELECT a,b,c FROM TestIndexObject limit 1")).getAll().isEmpty()); } } @@ -241,6 +380,8 @@ public void testResaveConfigAfterMerge() throws Exception { /** * @throws Exception if failed. 
*/ + @SuppressWarnings("ArraysAsListWithZeroOrOneArgument") + @Test public void testMergeChangedConfigOnInactiveGrid() throws Exception { { //given: two started nodes with test table @@ -288,7 +429,7 @@ public void testMergeChangedConfigOnInactiveGrid() throws Exception { //then: config should be merged try (IgniteDataStreamer s = ig1.dataStreamer(TEST_CACHE_NAME)) { s.allowOverwrite(true); - for (int i = 0; i < 5_000; i++) { + for (int i = 0; i < 50; i++) { BinaryObject bo = ig1.binary().builder("TestIndexObject") .setField("a", i, Object.class) .setField("b", String.valueOf(i), Object.class) @@ -300,15 +441,35 @@ public void testMergeChangedConfigOnInactiveGrid() throws Exception { IgniteCache cache = ig1.cache(TEST_CACHE_NAME); //then: index "myindexa" and column "b" restored from node "1" - assertThat(doExplainPlan(cache, "explain select * from TestIndexObject where a > 5"), containsString("myindexa")); - assertThat(doExplainPlan(cache, "explain select * from TestIndexObject where b > 5"), containsString("myindexb")); + assertIndexUsed(cache, "explain select * from TestIndexObject where a > 5", "myindexa"); + assertIndexUsed(cache, "explain select * from TestIndexObject where b > 5", "myindexb"); assertFalse(cache.query(new SqlFieldsQuery("SELECT a,b FROM TestIndexObject limit 1")).getAll().isEmpty()); } } + /** + * Make sure that index is used for the given statement. + * + * @param cache Cache. + * @param sql Statement. + * @param idx Index. + * @throws IgniteCheckedException If failed. + */ + private void assertIndexUsed(IgniteCache cache, String sql, String idx) + throws IgniteCheckedException { + assert GridTestUtils.waitForCondition(new PA() { + @Override public boolean apply() { + String plan = doExplainPlan(cache, sql); + + return plan.contains(idx); + } + }, 10_000); + } + /** * @throws Exception if failed. 
*/ + @Test public void testTakeChangedConfigOnActiveGrid() throws Exception { { //given: two started nodes with test table @@ -341,7 +502,7 @@ public void testTakeChangedConfigOnActiveGrid() throws Exception { //then: config should be merged try (IgniteDataStreamer s = ig.dataStreamer(TEST_CACHE_NAME)) { s.allowOverwrite(true); - for (int i = 0; i < 5_000; i++) { + for (int i = 0; i < 50; i++) { BinaryObject bo = ig.binary().builder("TestIndexObject") .setField("a", i, Object.class) .setField("b", String.valueOf(i), Object.class) @@ -355,7 +516,7 @@ public void testTakeChangedConfigOnActiveGrid() throws Exception { cache.indexReadyFuture().get(); - assertThat(doExplainPlan(cache, "explain select * from TestIndexObject where a > 5"), containsString("myindexa")); + assertIndexUsed(cache, "explain select * from TestIndexObject where a > 5", "myindexa"); assertFalse(cache.query(new SqlFieldsQuery("SELECT a,b,c FROM TestIndexObject limit 1")).getAll().isEmpty()); } } @@ -363,6 +524,8 @@ public void testTakeChangedConfigOnActiveGrid() throws Exception { /** * @throws Exception if failed. */ + @SuppressWarnings("ConstantConditions") + @Test public void testFailJoiningNodeBecauseDifferentSql() throws Exception { { //given: two started nodes with test table @@ -407,6 +570,8 @@ public void testFailJoiningNodeBecauseDifferentSql() throws Exception { /** * @throws Exception if failed. 
*/ + @SuppressWarnings("ConstantConditions") + @Test public void testFailJoiningNodeBecauseFieldInlineSizeIsDifferent() throws Exception { { //given: two started nodes with test table @@ -417,13 +582,13 @@ public void testFailJoiningNodeBecauseFieldInlineSizeIsDifferent() throws Except IgniteCache cache = ig.getOrCreateCache(getTestTableConfiguration()); - cache.query(new SqlFieldsQuery("create index myindexa on TestIndexObject(a) INLINE_SIZE 1000")).getAll(); + cache.query(new SqlFieldsQuery("create index myindexa on TestIndexObject(a) INLINE_SIZE 100")).getAll(); //stop one node and create index on other node stopGrid(1); cache.query(new SqlFieldsQuery("drop index myindexa")).getAll(); - cache.query(new SqlFieldsQuery("create index myindexa on TestIndexObject(a) INLINE_SIZE 2000")).getAll(); + cache.query(new SqlFieldsQuery("create index myindexa on TestIndexObject(a) INLINE_SIZE 200")).getAll(); //and: stopped all grid stopAllGrids(); @@ -447,6 +612,8 @@ public void testFailJoiningNodeBecauseFieldInlineSizeIsDifferent() throws Except /** * @throws Exception if failed. 
*/ + @SuppressWarnings("ConstantConditions") + @Test public void testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid() throws Exception { { startGrid(0); @@ -497,7 +664,7 @@ public void testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid() throws Exce */ private void fillTestData(Ignite ig) { try (IgniteDataStreamer s = ig.dataStreamer(TEST_CACHE_NAME)) { - for (int i = 0; i < 50_000; i++) { + for (int i = 0; i < 500; i++) { BinaryObject bo = ig.binary().builder("TestIndexObject") .setField("a", i, Object.class) .setField("b", String.valueOf(i), Object.class) diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteErrorOnRebalanceTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteErrorOnRebalanceTest.java index 98aa4aabb1115..8fd4968976afc 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteErrorOnRebalanceTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteErrorOnRebalanceTest.java @@ -33,14 +33,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.spi.IgniteSpiAdapter; import org.apache.ignite.spi.IgniteSpiException; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.indexing.IndexingQueryFilter; import org.apache.ignite.spi.indexing.IndexingSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -50,16 +50,12 @@ /** * */ 
+@RunWith(JUnit4.class) public class IgniteErrorOnRebalanceTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - cfg.setConsistentId(gridName); DataStorageConfiguration memCfg = new DataStorageConfiguration() @@ -97,6 +93,7 @@ public class IgniteErrorOnRebalanceTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testErrorOnRebalance() throws Exception { Ignite srv0 = startGrid(0); @@ -125,6 +122,10 @@ public void testErrorOnRebalance() throws Exception { awaitPartitionMapExchange(); + srv1.cluster().setBaselineTopology(srv1.cluster().topologyVersion()); + + awaitPartitionMapExchange(); + IgniteCache cache0 = srv0.cache(DEFAULT_CACHE_NAME); IgniteCache cache1 = srv1.cache(DEFAULT_CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IncorrectQueryEntityTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IncorrectQueryEntityTest.java index 0dd237d8e8d3a..4cfd05cea0cad 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IncorrectQueryEntityTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IncorrectQueryEntityTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.internal.processors.query.QueryUtils; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * A test for {@link QueryEntity} initialization with incorrect query field name */ +@RunWith(JUnit4.class) public class 
IncorrectQueryEntityTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { @@ -63,6 +67,7 @@ public class IncorrectQueryEntityTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testIncorrectQueryField() throws Exception { try { startGrid(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IndexingCachePartitionLossPolicySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IndexingCachePartitionLossPolicySelfTest.java index f2085994b0cd0..7007499a84552 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IndexingCachePartitionLossPolicySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IndexingCachePartitionLossPolicySelfTest.java @@ -23,8 +23,6 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.distributed.IgniteCachePartitionLossPolicySelfTest; -import java.util.Collection; - /** * Partition loss policy test with enabled indexing. */ @@ -39,80 +37,27 @@ public class IndexingCachePartitionLossPolicySelfTest extends IgniteCachePartiti } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") - @Override protected void validateQuery(boolean safe, int part, Ignite node) { - // Get node lost and remaining partitions. - IgniteCache cache = node.cache(CACHE_NAME); - - Collection lostParts = cache.lostPartitions(); - - Integer remainingPart = null; - - for (int i = 0; i < node.affinity(CACHE_NAME).partitions(); i++) { - if (lostParts.contains(i)) - continue; - - remainingPart = i; - - break; - } - - // Determine whether local query should be executed on that node. 
- boolean execLocQry = false; - - for (int nodePrimaryPart : node.affinity(CACHE_NAME).primaryPartitions(node.cluster().localNode())) { - if (part == nodePrimaryPart) { - execLocQry = true; - - break; - } - } - - // 1. Check query against all partitions. - validateQuery0(safe, node, false); - - // TODO: https://issues.apache.org/jira/browse/IGNITE-7039 -// if (execLocQry) -// validateQuery0(safe, node, true); - - // 2. Check query against LOST partition. - validateQuery0(safe, node, false, part); + protected void checkQueryPasses(Ignite node, boolean loc, int... parts) { + executeQuery(node, loc, parts); + } - // TODO: https://issues.apache.org/jira/browse/IGNITE-7039 -// if (execLocQry) -// validateQuery0(safe, node, true, part); + /** {@inheritDoc} */ + protected void checkQueryFails(Ignite node, boolean loc, int... parts) { + // TODO: Local queries ignore partition loss, see https://issues.apache.org/jira/browse/IGNITE-7039. + if (loc) + return; - // 3. Check query on remaining partition. - if (remainingPart != null) { - executeQuery(node, false, remainingPart); + try { + executeQuery(node, loc, parts); - // 4. Check query over two partitions - normal and LOST. - validateQuery0(safe, node, false, part, remainingPart); + fail("Exception is not thrown."); } - } - - /** - * Query validation routine. - * - * @param safe Safe flag. - * @param node Node. - * @param loc Local flag. - * @param parts Partitions. - */ - private void validateQuery0(boolean safe, Ignite node, boolean loc, int... 
parts) { - if (safe) { - try { - executeQuery(node, loc, parts); + catch (Exception e) { + boolean exp = e.getMessage() != null && + e.getMessage().contains("Failed to execute query because cache partition has been lost"); - fail("Exception is not thrown."); - } - catch (Exception e) { - assertTrue(e.getMessage(), e.getMessage() != null && - e.getMessage().contains("Failed to execute query because cache partition has been lost")); - } - } - else { - executeQuery(node, loc, parts); + if (!exp) + throw e; } } @@ -124,7 +69,7 @@ private void validateQuery0(boolean safe, Ignite node, boolean loc, int... parts * @param loc Local flag. */ private static void executeQuery(Ignite node, boolean loc, int... parts) { - IgniteCache cache = node.cache(CACHE_NAME); + IgniteCache cache = node.cache(DEFAULT_CACHE_NAME); SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM Integer"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryEntityCaseMismatchTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryEntityCaseMismatchTest.java index 1b0546af2a437..25357d55b476a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryEntityCaseMismatchTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryEntityCaseMismatchTest.java @@ -32,12 +32,16 @@ import java.util.HashSet; import java.util.LinkedHashMap; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test reveals issue of null values in SQL query resultset columns that correspond to compound key. * That happens when QueryEntity.keyFields has wrong register compared to QueryEntity.fields. * Issue only manifests for BinaryMarshaller case. Otherwise the keyFields aren't taken into account. 
*/ +@RunWith(JUnit4.class) public class QueryEntityCaseMismatchTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { @@ -94,7 +98,7 @@ public class QueryEntityCaseMismatchTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCacheInitializationFailure() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Void call() throws Exception { diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryJoinWithDifferentNodeFiltersTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryJoinWithDifferentNodeFiltersTest.java index 47666b93cc04b..4702995be4684 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryJoinWithDifferentNodeFiltersTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/QueryJoinWithDifferentNodeFiltersTest.java @@ -30,10 +30,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class QueryJoinWithDifferentNodeFiltersTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache"; @@ -89,6 +93,7 @@ public class QueryJoinWithDifferentNodeFiltersTest extends GridCommonAbstractTes /** * @throws Exception if failed. 
*/ + @Test public void testSize() throws Exception { startGrids(NODE_COUNT); @@ -160,4 +165,4 @@ private static class TestFilter implements IgnitePredicate { return clusterNode.attribute("DATA") != null; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/SqlFieldsQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/SqlFieldsQuerySelfTest.java index f68484171f7bb..4741cbc53a26f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/SqlFieldsQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/SqlFieldsQuerySelfTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class SqlFieldsQuerySelfTest extends GridCommonAbstractTest { /** INSERT statement. */ private final static String INSERT = "insert into Person(_key, name) values (5, 'x')"; @@ -44,6 +48,7 @@ public class SqlFieldsQuerySelfTest extends GridCommonAbstractTest { /** * @throws Exception If error. */ + @Test public void testSqlFieldsQuery() throws Exception { startGrids(2); @@ -55,6 +60,7 @@ public void testSqlFieldsQuery() throws Exception { /** * @throws Exception If error. */ + @Test public void testSqlFieldsQueryWithTopologyChanges() throws Exception { startGrid(0); @@ -68,6 +74,7 @@ public void testSqlFieldsQueryWithTopologyChanges() throws Exception { /** * @throws Exception If error. 
*/ + @Test public void testQueryCaching() throws Exception { startGrid(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/StartCachesInParallelTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/StartCachesInParallelTest.java new file mode 100644 index 0000000000000..30bb018330344 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/StartCachesInParallelTest.java @@ -0,0 +1,155 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache; + +import org.apache.ignite.Ignite; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.failure.FailureContext; +import org.apache.ignite.failure.StopNodeFailureHandler; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.IgniteSystemProperties.IGNITE_ALLOW_START_CACHES_IN_PARALLEL; + +/** + * Tests that the cluster can start and activate with all possible values of IGNITE_ALLOW_START_CACHES_IN_PARALLEL. + */ +@RunWith(JUnit4.class) +public class StartCachesInParallelTest extends GridCommonAbstractTest { + /** IGNITE_ALLOW_START_CACHES_IN_PARALLEL option value before tests. */ + private String allowParallel; + + /** Test failure handler. 
*/ + private TestStopNodeFailureHandler failureHnd; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + cfg.setDataStorageConfiguration( + new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration().setPersistenceEnabled(true))); + + cfg.setCacheConfiguration( + new CacheConfiguration<>() + .setName(DEFAULT_CACHE_NAME) + .setIndexedTypes(Integer.class, Integer.class)); + + failureHnd = new TestStopNodeFailureHandler(); + + cfg.setFailureHandler(failureHnd); + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTestsStarted() throws Exception { + super.beforeTestsStarted(); + + allowParallel = System.getProperty(IGNITE_ALLOW_START_CACHES_IN_PARALLEL); + } + + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + super.afterTestsStopped(); + + if (allowParallel != null) + System.setProperty(IGNITE_ALLOW_START_CACHES_IN_PARALLEL, allowParallel); + else + System.clearProperty(IGNITE_ALLOW_START_CACHES_IN_PARALLEL); + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + + /** */ + @Test + public void testWithEnabledOption() throws Exception { + doTest("true"); + } + + /** */ + @Test + public void testWithDisabledOption() throws Exception { + doTest("false"); + } + + /** */ + @Test + public void testWithoutOption() throws Exception { + doTest(null); + } + + /** + * Test routine. + * + * @param optionVal IGNITE_ALLOW_START_CACHES_IN_PARALLEL value. + * @throws Exception If failed. 
+ */ + private void doTest(String optionVal) throws Exception { + if (optionVal == null) + System.clearProperty(IGNITE_ALLOW_START_CACHES_IN_PARALLEL); + else { + Boolean.parseBoolean(optionVal); + + System.setProperty(IGNITE_ALLOW_START_CACHES_IN_PARALLEL, optionVal); + } + + assertEquals("Property wasn't set", optionVal, System.getProperty(IGNITE_ALLOW_START_CACHES_IN_PARALLEL)); + + IgniteEx node = startGrid(0); + + node.cluster().active(true); + + assertNull("Node failed with " + failureHnd.lastFailureCtx, failureHnd.lastFailureCtx); + + assertTrue(node.cluster().active()); + } + + /** */ + private static class TestStopNodeFailureHandler extends StopNodeFailureHandler { + /** Last failure context. */ + private volatile FailureContext lastFailureCtx; + + /** {@inheritDoc} */ + @Override public boolean handle(Ignite ignite, FailureContext failureCtx) { + lastFailureCtx = failureCtx; + + return super.handle(ignite, failureCtx); + } + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/authentication/SqlUserCommandSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/authentication/SqlUserCommandSelfTest.java index 64819f3173449..15e635bcf48bf 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/authentication/SqlUserCommandSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/authentication/SqlUserCommandSelfTest.java @@ -28,19 +28,17 @@ import org.apache.ignite.internal.processors.authentication.User; import org.apache.ignite.internal.processors.authentication.UserManagementException; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; 
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for leaks JdbcConnection on SqlFieldsQuery execute. */ +@RunWith(JUnit4.class) public class SqlUserCommandSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Nodes count. */ private static final int NODES_COUNT = 3; @@ -57,12 +55,6 @@ public class SqlUserCommandSelfTest extends GridCommonAbstractTest { if (getTestIgniteInstanceIndex(igniteInstanceName) == CLI_NODE) cfg.setClientMode(true); - TcpDiscoverySpi spi = new TcpDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - cfg.setAuthenticationEnabled(true); cfg.setDataStorageConfiguration(new DataStorageConfiguration() @@ -101,6 +93,7 @@ public class SqlUserCommandSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCreateUpdateDropUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -128,6 +121,7 @@ public void testCreateUpdateDropUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCreateWithAlreadyExistUser() throws Exception { AuthorizationContext.context(actxDflt); userSql(0, "CREATE USER test WITH PASSWORD 'test'"); @@ -148,6 +142,7 @@ public void testCreateWithAlreadyExistUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAlterDropNotExistUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -175,6 +170,7 @@ public void testAlterDropNotExistUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNotAuthenticateOperation() throws Exception { for (int i = 0; i < NODES_COUNT; ++i) { final int idx = i; @@ -208,6 +204,7 @@ public void testNotAuthenticateOperation() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNotAuthorizedOperation() throws Exception { AuthorizationContext.context(actxDflt); @@ -250,6 +247,7 @@ public void testNotAuthorizedOperation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDropDefaultUser() throws Exception { AuthorizationContext.context(actxDflt); @@ -269,6 +267,7 @@ public void testDropDefaultUser() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQuotedUsername() throws Exception { AuthorizationContext.context(actxDflt); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheClientQueryReplicatedNodeRestartSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheClientQueryReplicatedNodeRestartSelfTest.java index 996d6a7c583de..aef93bc113a78 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheClientQueryReplicatedNodeRestartSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheClientQueryReplicatedNodeRestartSelfTest.java @@ -44,10 +44,10 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.P1; import org.apache.ignite.internal.util.typedef.X; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -57,6 +57,7 @@ /** * Test for distributed queries with replicated client cache and node restarts. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheClientQueryReplicatedNodeRestartSelfTest extends GridCommonAbstractTest { /** */ private static final String QRY = "select co.id, count(*) cnt\n" + @@ -91,21 +92,12 @@ public class IgniteCacheClientQueryReplicatedNodeRestartSelfTest extends GridCom /** */ private static final int PRODUCT_CNT = 100; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { X.println("Ignite instance name: " + igniteInstanceName); IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - int i = 0; CacheConfiguration[] ccs = new CacheConfiguration[4]; @@ -208,6 +200,7 @@ private void assertClient(IgniteCache c, boolean client) { /** * @throws Exception If failed. */ + @Test public void testRestarts() throws Exception { int duration = 90 * 1000; int qryThreadNum = 5; @@ -437,4 +430,4 @@ private static class Product implements Serializable { this.companyId = companyId; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryAbstractSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryAbstractSelfTest.java index 7a83014e1cd47..591ac9dc6af8f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryAbstractSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryAbstractSelfTest.java @@ -52,8 +52,6 @@ import org.apache.ignite.internal.util.typedef.F; import 
org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.util.AttributeNodeFilter; @@ -75,9 +73,6 @@ public abstract class IgniteCacheDistributedPartitionQueryAbstractSelfTest exten /** Grids count. */ protected static final int GRIDS_COUNT = 10; - /** IP finder. */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Partitions per region distribution. */ protected static final int[] PARTS_PER_REGION = new int[] {10, 20, 30, 40, 24}; @@ -142,11 +137,6 @@ public abstract class IgniteCacheDistributedPartitionQueryAbstractSelfTest exten cfg.setDataStorageConfiguration(memCfg); - TcpDiscoverySpi spi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - spi.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(spi); - /** Clients cache */ CacheConfiguration clientCfg = new CacheConfiguration<>(); clientCfg.setName("cl"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryConfigurationSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryConfigurationSelfTest.java index 0253fe873ed12..af0ddd62cc9a6 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryConfigurationSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryConfigurationSelfTest.java @@ -20,12 +20,17 @@ import java.util.Arrays; import org.apache.ignite.cache.query.SqlFieldsQuery; import 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests cache query configuration. */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedPartitionQueryConfigurationSelfTest extends GridCommonAbstractTest { /** Tests partition validation. */ + @Test public void testPartitions() { final SqlFieldsQuery qry = new SqlFieldsQuery("select 1"); @@ -89,4 +94,4 @@ private void failIfNotThrown(Runnable r) { // No-op. } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest.java index e6a4e7377a388..379bd50110d65 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest.java @@ -29,10 +29,14 @@ import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests distributed queries over set of partitions on unstable topology. */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest extends IgniteCacheDistributedPartitionQueryAbstractSelfTest { /** {@inheritDoc} */ @@ -53,6 +57,7 @@ public class IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest extends /** * Tests join query within region on unstable topology. 
*/ + @Test public void testJoinQueryUnstableTopology() throws Exception { final AtomicBoolean stop = new AtomicBoolean(); @@ -128,4 +133,4 @@ public void testJoinQueryUnstableTopology() throws Exception { restartStats.get(i)); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQuerySelfTest.java index 00c384870e580..b4e656d9046ea 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedPartitionQuerySelfTest.java @@ -24,42 +24,53 @@ import org.apache.ignite.cache.affinity.Affinity; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.SqlQuery; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests distributed queries over set of partitions on stable topology. */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedPartitionQuerySelfTest extends IgniteCacheDistributedPartitionQueryAbstractSelfTest { /** Tests query within region. */ + @Test public void testRegionQuery() { doTestRegionQuery(grid(0)); } /** Tests query within region (client). */ + @Test public void testRegionQueryClient() throws Exception { doTestRegionQuery(grid("client")); } /** Test query within partitions. */ + @Test public void testPartitionsQuery() { doTestPartitionsQuery(grid(0)); } /** Test query within partitions (client). */ + @Test public void testPartitionsQueryClient() throws Exception { doTestPartitionsQuery(grid("client")); } /** Tests join query within region. 
*/ + @Test public void testJoinQuery() { doTestJoinQuery(grid(0)); } /** Tests join query within region. */ + @Test public void testJoinQueryClient() throws Exception { doTestJoinQuery(grid("client")); } /** Tests local query over partitions. */ + @Test public void testLocalQuery() { Affinity affinity = grid(0).affinity("cl"); @@ -87,4 +98,4 @@ public void testLocalQuery() { for (List row : rows) assertEquals("Incorrect partition", parts[0], affinity.partition(row.get(0))); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryCancelSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryCancelSelfTest.java index d5ee0e978f874..88e44aa480381 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryCancelSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryCancelSelfTest.java @@ -30,20 +30,19 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.G; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests distributed SQL query cancel related scenarios. */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedQueryCancelSelfTest extends GridCommonAbstractTest { /** Grids count. */ private static final int GRIDS_COUNT = 3; - /** IP finder. 
*/ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache size. */ public static final int CACHE_SIZE = 10_000; @@ -63,8 +62,6 @@ public class IgniteCacheDistributedQueryCancelSelfTest extends GridCommonAbstrac /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - spi.setIpFinder(IP_FINDER); CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setIndexedTypes(Integer.class, String.class); @@ -85,6 +82,7 @@ public class IgniteCacheDistributedQueryCancelSelfTest extends GridCommonAbstrac } /** */ + @Test public void testQueryCancelsOnGridShutdown() throws Exception { try (Ignite client = startGrid("client")) { @@ -138,6 +136,7 @@ public void testQueryCancelsOnGridShutdown() throws Exception { } /** */ + @Test public void testQueryResponseFailCode() throws Exception { try (Ignite client = startGrid("client")) { diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest.java index 56fd7b8d13b75..5de6351ebd54a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest.java @@ -37,20 +37,19 @@ import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.U; -import 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests distributed SQL queries cancel by user or timeout. */ +@RunWith(JUnit4.class) public class IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest extends GridCommonAbstractTest { /** Grids count. */ private static final int GRIDS_CNT = 3; - /** IP finder. */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache size. */ public static final int CACHE_SIZE = 10_000; @@ -76,8 +75,6 @@ public class IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest extends Gr /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi spi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - spi.setIpFinder(IP_FINDER); CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setIndexedTypes(Integer.class, String.class); @@ -99,76 +96,91 @@ public class IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest extends Gr } /** */ + @Test public void testRemoteQueryExecutionTimeout() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_1, 500, TimeUnit.MILLISECONDS, true); } /** */ + @Test public void testRemoteQueryWithMergeTableTimeout() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_2, 500, TimeUnit.MILLISECONDS, true); } /** */ + @Test public void testRemoteQueryExecutionCancel0() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_1, 1, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testRemoteQueryExecutionCancel1() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_1, 500, TimeUnit.MILLISECONDS, false); } 
/** */ + @Test public void testRemoteQueryExecutionCancel2() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_1, 1, TimeUnit.SECONDS, false); } /** */ + @Test public void testRemoteQueryExecutionCancel3() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_1, 3, TimeUnit.SECONDS, false); } /** */ + @Test public void testRemoteQueryWithMergeTableCancel0() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_2, 1, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testRemoteQueryWithMergeTableCancel1() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_2, 500, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testRemoteQueryWithMergeTableCancel2() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_2, 1_500, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testRemoteQueryWithMergeTableCancel3() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_2, 3, TimeUnit.SECONDS, false); } /** */ + @Test public void testRemoteQueryWithoutMergeTableCancel0() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_3, 1, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testRemoteQueryWithoutMergeTableCancel1() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_3, 500, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testRemoteQueryWithoutMergeTableCancel2() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_3, 1_000, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testRemoteQueryWithoutMergeTableCancel3() throws Exception { testQueryCancel(CACHE_SIZE, VAL_SIZE, QRY_3, 3, TimeUnit.SECONDS, false); } /** */ + @Test public void testRemoteQueryAlreadyFinishedStop() throws Exception { testQueryCancel(100, VAL_SIZE, QRY_3, 3, TimeUnit.SECONDS, false); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedFieldsQuerySelfTest.java 
b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedFieldsQuerySelfTest.java index 7f9989ddeaefc..3a478ed2e3888 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedFieldsQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedFieldsQuerySelfTest.java @@ -27,12 +27,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractFieldsQuerySelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Tests for fields queries. */ +@RunWith(JUnit4.class) public class IgniteCachePartitionedFieldsQuerySelfTest extends IgniteCacheAbstractFieldsQuerySelfTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -62,11 +66,13 @@ protected NearCacheConfiguration nearConfiguration() { } /** @throws Exception If failed. */ + @Test public void testLocalQuery() throws Exception { doTestLocalQuery(intCache, new SqlFieldsQuery("select _key, _val from Integer")); } /** @throws Exception If failed. 
*/ + @Test public void testLocalQueryNoOpCache() throws Exception { doTestLocalQuery(noOpCache, new SqlFieldsQuery("select _key, _val from \"Integer-Integer\".Integer")); } @@ -90,4 +96,4 @@ private void doTestLocalQuery(IgniteCache cache, SqlFieldsQuery fldsQry) t assertEquals(exp, qry.getAll().size()); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedQuerySelfTest.java index 72709acde4682..d5ca183899752 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCachePartitionedQuerySelfTest.java @@ -41,6 +41,7 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.communication.CommunicationSpi; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; +import org.junit.Test; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CachePeekMode.ALL; @@ -67,6 +68,7 @@ public class IgniteCachePartitionedQuerySelfTest extends IgniteCacheAbstractQuer /** * @throws Exception If failed. */ + @Test public void testFieldsQuery() throws Exception { Person p1 = new Person("Jon", 1500); Person p2 = new Person("Jane", 2000); @@ -106,6 +108,7 @@ public void testFieldsQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleNodesQuery() throws Exception { Person p1 = new Person("Jon", 1500); Person p2 = new Person("Jane", 2000); @@ -154,6 +157,7 @@ private void checkResult(Iterable> entries, Person... /** * @throws Exception If failed. 
*/ + @Test public void testScanQueryPagination() throws Exception { final int pageSize = 5; @@ -218,4 +222,4 @@ private static class TestTcpCommunicationSpi extends TcpCommunicationSpi { super.sendMessage(node, msg, ackC); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryAbstractDistributedJoinSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryAbstractDistributedJoinSelfTest.java index 7e23c8881cd6a..bcb3841e451f1 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryAbstractDistributedJoinSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryAbstractDistributedJoinSelfTest.java @@ -29,9 +29,6 @@ import org.apache.ignite.internal.util.GridRandom; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -82,9 +79,6 @@ public class IgniteCacheQueryAbstractDistributedJoinSelfTest extends GridCommonA /** */ private static final int PRODUCT_CNT = 100; - /** */ - private static TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -92,12 +86,6 @@ public class IgniteCacheQueryAbstractDistributedJoinSelfTest extends GridCommonA if 
("client".equals(igniteInstanceName)) c.setClientMode(true); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - int i = 0; CacheConfiguration[] ccs = new CacheConfiguration[4]; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNoRebalanceSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNoRebalanceSelfTest.java index 7b4d400778aa8..07e558df71fe1 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNoRebalanceSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNoRebalanceSelfTest.java @@ -24,18 +24,16 @@ import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test added to check for https://issues.apache.org/jira/browse/IGNITE-3326. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheQueryNoRebalanceSelfTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** * */ @@ -47,8 +45,6 @@ public IgniteCacheQueryNoRebalanceSelfTest() { @Override protected IgniteConfiguration getConfiguration() throws Exception { IgniteConfiguration cfg = super.getConfiguration(); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setBackups(0); ccfg.setIndexedTypes(Integer.class, Integer.class); @@ -62,6 +58,7 @@ public IgniteCacheQueryNoRebalanceSelfTest() { /** * Tests correct query execution with disabled re-balancing. */ + @Test public void testQueryNoRebalance() { IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeFailTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeFailTest.java index 6e47399b46854..6b244d5149953 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeFailTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeFailTest.java @@ -27,19 +27,17 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import 
org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test added to check for https://issues.apache.org/jira/browse/IGNITE-2542. */ +@RunWith(JUnit4.class) public class IgniteCacheQueryNodeFailTest extends GridCommonAbstractTest { - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private boolean client; @@ -47,8 +45,6 @@ public class IgniteCacheQueryNodeFailTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder); - cfg.setClientMode(client); CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); @@ -76,6 +72,7 @@ public class IgniteCacheQueryNodeFailTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testNodeFailedSimpleQuery()throws Exception { checkNodeFailed("select _key from Integer"); } @@ -83,6 +80,7 @@ public void testNodeFailedSimpleQuery()throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNodeFailedReduceQuery()throws Exception { checkNodeFailed("select avg(_key) from Integer"); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartDistributedJoinSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartDistributedJoinSelfTest.java index bad53030d26dd..634af06a8f775 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartDistributedJoinSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartDistributedJoinSelfTest.java @@ -33,10 +33,14 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicIntegerArray; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for distributed queries with node restarts. */ +@RunWith(JUnit4.class) public class IgniteCacheQueryNodeRestartDistributedJoinSelfTest extends IgniteCacheQueryAbstractDistributedJoinSelfTest { /** Total nodes. */ private int totalNodes = 6; @@ -64,6 +68,7 @@ public class IgniteCacheQueryNodeRestartDistributedJoinSelfTest extends IgniteCa /** * @throws Exception If failed. */ + @Test public void testRestarts() throws Exception { restarts(false); } @@ -71,6 +76,7 @@ public void testRestarts() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRestartsBroadcast() throws Exception { restarts(true); } @@ -278,4 +284,4 @@ private void restarts(final boolean broadcastQry) throws Exception { info("Stopped."); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest.java index dd495cfb216a9..57d8d7c136aec 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest.java @@ -38,9 +38,9 @@ import org.apache.ignite.internal.util.typedef.CAX; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -49,6 +49,7 @@ /** * Test for distributed queries with node restarts. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheQueryNodeRestartSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int GRID_CNT = 3; @@ -56,9 +57,6 @@ public class IgniteCacheQueryNodeRestartSelfTest extends GridCacheAbstractSelfTe /** */ private static final int KEY_CNT = 1000; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected int gridCount() { return GRID_CNT; @@ -75,12 +73,6 @@ public class IgniteCacheQueryNodeRestartSelfTest extends GridCacheAbstractSelfTe c.setConsistentId(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - CacheConfiguration cc = defaultCacheConfiguration(); cc.setCacheMode(PARTITIONED); @@ -104,6 +96,7 @@ public class IgniteCacheQueryNodeRestartSelfTest extends GridCacheAbstractSelfTe * @throws Exception If failed. */ @SuppressWarnings({"TooBroadScope"}) + @Test public void testRestarts() throws Exception { int duration = 60 * 1000; int qryThreadNum = 10; @@ -259,4 +252,4 @@ public synchronized boolean awaitEvents(int cnt, long timeout) throws Interrupte return false; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest2.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest2.java index 834a7324ef066..c72554f5c4277 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest2.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryNodeRestartSelfTest2.java @@ -45,12 +45,12 @@ import org.apache.ignite.internal.util.typedef.CAX; import org.apache.ignite.internal.util.typedef.F; import 
org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.TransactionException; import org.apache.ignite.transactions.TransactionTimeoutException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -61,6 +61,7 @@ /** * Test for distributed queries with node restarts. */ +@RunWith(JUnit4.class) public class IgniteCacheQueryNodeRestartSelfTest2 extends GridCommonAbstractTest { /** */ private static final String PARTITIONED_QRY = "select co.id, count(*) cnt\n" + @@ -89,9 +90,6 @@ public class IgniteCacheQueryNodeRestartSelfTest2 extends GridCommonAbstractTest /** */ private static final int PRODUCT_CNT = 100; - /** */ - private static TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); @@ -101,12 +99,6 @@ public class IgniteCacheQueryNodeRestartSelfTest2 extends GridCommonAbstractTest c.setDataStorageConfiguration(memCfg); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - int i = 0; CacheConfiguration[] ccs = new CacheConfiguration[4]; @@ -199,6 +191,7 @@ private void fillCaches() { /** * @throws Exception If failed. 
*/ + @Test public void testRestarts() throws Exception { int duration = 90 * 1000; int qryThreadNum = 4; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest.java index 03a8d499adbc5..c6dd5cf6493a7 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest.java @@ -33,47 +33,59 @@ import org.apache.ignite.internal.processors.GridProcessor; import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for cancel of query containing distributed joins. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest extends IgniteCacheQueryAbstractDistributedJoinSelfTest { /** */ + @Test public void testCancel1() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 1, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testCancel2() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 50, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testCancel3() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 100, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testCancel4() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 500, TimeUnit.MILLISECONDS, false); } /** */ + @Test public void testTimeout1() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 1, TimeUnit.MILLISECONDS, true); } /** */ + @Test public void testTimeout2() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 50, TimeUnit.MILLISECONDS, true); } /** */ + @Test public void testTimeout3() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 100, TimeUnit.MILLISECONDS, true); } /** */ + @Test public void testTimeout4() throws Exception { testQueryCancel(grid(0), "pe", QRY_0, 500, TimeUnit.MILLISECONDS, true); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteSqlQueryWithBaselineTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteSqlQueryWithBaselineTest.java index 203a319a3c6b7..81429c535f3a5 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteSqlQueryWithBaselineTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/near/IgniteSqlQueryWithBaselineTest.java @@ -27,20 +27,18 @@ import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import 
org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import javax.cache.Cache; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class IgniteSqlQueryWithBaselineTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -54,12 +52,6 @@ public class IgniteSqlQueryWithBaselineTest extends GridCommonAbstractTest { ) ); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -110,6 +102,7 @@ public static class C2 implements Serializable { /** * @throws Exception If failed. */ + @Test public void testQueryWithNodeNotInBLT() throws Exception { startGrids(2); @@ -123,6 +116,7 @@ public void testQueryWithNodeNotInBLT() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryWithoutBLTNode() throws Exception { startGrids(2); @@ -137,6 +131,7 @@ public void testQueryWithoutBLTNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueryFromNotBLTNode() throws Exception { startGrid(1); @@ -181,4 +176,4 @@ private void doQuery() { log.info("result size: " + res.size()); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest.java index a5b3706105112..d97e50b487a2f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.internal.processors.query.QueryUtils; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -36,6 +39,7 @@ * Tests non-collocated join with REPLICATED cache and no primary partitions for that cache on some nodes. */ @SuppressWarnings("unused") +@RunWith(JUnit4.class) public class IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest extends GridCommonAbstractTest { /** Client node name. */ public static final String NODE_CLI = "client"; @@ -103,6 +107,7 @@ public class IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest ext * * @throws Exception If failed. 
*/ + @Test public void testJoinNonCollocated() throws Exception { SqlFieldsQuery qry = new SqlFieldsQuery("SELECT COUNT(*) FROM PartValue p, RepValue r WHERE p.repId=r.id"); @@ -148,4 +153,4 @@ public RepValue(int id) { this.id = id; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQuerySelfTest.java index 953e5fafa05a9..c4b5c054f208c 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedFieldsQuerySelfTest.java @@ -27,12 +27,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractFieldsQuerySelfTest; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; /** * Tests for fields queries. */ +@RunWith(JUnit4.class) public class IgniteCacheReplicatedFieldsQuerySelfTest extends IgniteCacheAbstractFieldsQuerySelfTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -47,6 +51,7 @@ public class IgniteCacheReplicatedFieldsQuerySelfTest extends IgniteCacheAbstrac /** * @throws Exception If failed. 
*/ + @Test public void testLostIterator() throws Exception { IgniteCache cache = intCache; @@ -77,4 +82,4 @@ public void testLostIterator() throws Exception { } }, IgniteException.class, null); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedQuerySelfTest.java index 13942c2ede3d4..52fd0347144b7 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/IgniteCacheReplicatedQuerySelfTest.java @@ -46,16 +46,20 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testsuites.IgniteIgnore; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.REPLICATED; import static org.apache.ignite.cache.CachePeekMode.ALL; +import static org.apache.ignite.events.EventType.EVT_NODE_FAILED; import static org.apache.ignite.events.EventType.EVT_NODE_LEFT; /** * Tests replicated query. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheReplicatedQuerySelfTest extends IgniteCacheAbstractQuerySelfTest { /** */ private static final boolean TEST_DEBUG = false; @@ -101,8 +105,8 @@ public class IgniteCacheReplicatedQuerySelfTest extends IgniteCacheAbstractQuery } /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); + @Override protected void beforeTest() throws Exception { + super.beforeTest(); ignite1 = grid(0); ignite2 = grid(1); @@ -113,22 +117,10 @@ public class IgniteCacheReplicatedQuerySelfTest extends IgniteCacheAbstractQuery cache3 = jcache(ignite3, CacheKey.class, CacheValue.class); } - /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { - super.afterTestsStopped(); - - ignite1 = null; - ignite2 = null; - ignite3 = null; - - cache1 = null; - cache2 = null; - cache3 = null; - } - /** * @throws Exception If failed. */ + @Test public void testClientOnlyNode() throws Exception { try { Ignite g = startGrid("client"); @@ -165,6 +157,7 @@ public void testClientOnlyNode() throws Exception { * * @throws Exception If failed. */ + @Test public void testIterator() throws Exception { int keyCnt = 100; @@ -196,6 +189,7 @@ public void testIterator() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testLocalQueryWithExplicitFlag() throws Exception { doTestLocalQuery(true); } @@ -203,6 +197,7 @@ public void testLocalQueryWithExplicitFlag() throws Exception { /** * @throws Exception If test failed. */ + @Test public void testLocalQueryWithoutExplicitFlag() throws Exception { doTestLocalQuery(false); } @@ -240,6 +235,7 @@ private void doTestLocalQuery(boolean loc) throws Exception { /** * @throws Exception If test failed. */ + @Test public void testDistributedQuery() throws Exception { final int keyCnt = 4; @@ -292,6 +288,7 @@ public void testDistributedQuery() throws Exception { /** * @throws Exception If test failed. 
*/ + @Test public void testToString() throws Exception { int keyCnt = 4; @@ -309,6 +306,7 @@ public void testToString() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLostIterator() throws Exception { IgniteCache cache = jcache(Integer.class, Integer.class); @@ -346,14 +344,14 @@ public void testLostIterator() throws Exception { /** * @throws Exception If failed. */ - @IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-613", forceFailure = true) + @Test public void testNodeLeft() throws Exception { Ignite g = startGrid("client"); try { assertTrue(g.configuration().isClientMode()); - IgniteCache cache = jcache(Integer.class, Integer.class); + IgniteCache cache = jcache(g, Integer.class, Integer.class); for (int i = 0; i < 1000; i++) cache.put(i, i); @@ -384,7 +382,7 @@ public void testNodeLeft() throws Exception { return true; } - }, EVT_NODE_LEFT); + }, EVT_NODE_LEFT, EVT_NODE_FAILED); stopGrid("client"); @@ -434,6 +432,15 @@ private void checkLocalQueryResults(IgniteCache cache, boo assert !iter.hasNext(); } + /** + * @throws Exception If failed. + */ + @Override public void testSqlQueryEvents() throws Exception { + fail("https://issues.apache.org/jira/browse/IGNITE-8765"); + + super.testSqlQueryEvents(); + } + /** * Cache key. */ diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/encryption/EncryptedSqlTableTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/encryption/EncryptedSqlTableTest.java new file mode 100644 index 0000000000000..4e0e3c35f7e4f --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/encryption/EncryptedSqlTableTest.java @@ -0,0 +1,69 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.encryption; + +import java.util.List; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.encryption.EncryptedCacheRestartTest; +import org.apache.ignite.internal.util.IgniteUtils; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** */ +public class EncryptedSqlTableTest extends EncryptedCacheRestartTest { + /** {@inheritDoc} */ + @Override protected void createEncryptedCache(IgniteEx grid0, @Nullable IgniteEx grid1, String cacheName, + String cacheGroup, boolean putData) { + + executeSql(grid0, "CREATE TABLE encrypted(ID BIGINT, NAME VARCHAR(10), PRIMARY KEY (ID)) " + + "WITH \"ENCRYPTED=true\""); + executeSql(grid0, "CREATE INDEX enc0 ON encrypted(NAME)"); + + if (putData) { + for (int i=0; i<100; i++) + executeSql(grid0, "INSERT INTO encrypted(ID, NAME) VALUES(?, ?)", i, "" + i); + } + } + + /** {@inheritDoc} */ + @Override protected void checkData(IgniteEx grid0) { + for (int i=0; i<100; i++) { + List<List<?>> res = executeSql(grid0, "SELECT NAME FROM encrypted WHERE ID = ?", i); + + assertEquals(1, res.size()); + assertEquals("" + i, res.get(0).get(0)); + } + } + + /** */ + private List<List<?>> executeSql(IgniteEx grid, String qry, Object...args) { + return grid.context().query().querySqlFields( + new 
SqlFieldsQuery(qry).setSchema("PUBLIC").setArgs(args), true).getAll(); + } + + /** {@inheritDoc} */ + @NotNull @Override protected String cacheName() { + return "SQL_PUBLIC_ENCRYPTED"; + } + + /** {@inheritDoc} */ + @Override protected String keystorePath() { + return IgniteUtils.resolveIgnitePath("modules/indexing/src/test/resources/tde.jks").getAbsolutePath(); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/AbstractIndexingCommonTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/AbstractIndexingCommonTest.java new file mode 100644 index 0000000000000..49bd8c0aca2d3 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/AbstractIndexingCommonTest.java @@ -0,0 +1,61 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.index; + +import java.util.Set; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.h2.engine.Session; +import org.h2.util.CloseWatcher; + +/** + * Base class for all indexing tests to check H2 connection management. 
+ */ +public class AbstractIndexingCommonTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected void afterTestsStopped() throws Exception { + stopAllGrids(); + + checkAllConnectionAreClosed(); + + super.afterTestsStopped(); + } + + /** + * Checks that all H2 connections are closed. + */ + void checkAllConnectionAreClosed() { + Set<Object> refs = GridTestUtils.getFieldValue(CloseWatcher.class, "refs"); + + if (!refs.isEmpty()) { + for (Object o : refs) { + if (o instanceof CloseWatcher + && ((CloseWatcher)o).getCloseable() instanceof Session) { + log.error("Session: " + ((CloseWatcher)o).getCloseable() + + ", open=" + !((Session)((CloseWatcher)o).getCloseable()).isClosed()); + } + } + + // Uncomment and use heap dump to investigate the problem if the test failed. + // GridDebug.dumpHeap("h2_conn_heap_dmp.hprof", true); + + fail("There are unclosed connections. See the log above."); + } + } + +} \ No newline at end of file diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/AbstractSchemaSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/AbstractSchemaSelfTest.java index 7f1e2e74e8ec1..e7ab35d9d6472 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/AbstractSchemaSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/AbstractSchemaSelfTest.java @@ -59,9 +59,6 @@ import org.apache.ignite.internal.util.typedef.internal.SB; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgniteBiTuple; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; @@ -70,9 +67,6 @@ */ 
@SuppressWarnings("unchecked") public abstract class AbstractSchemaSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cache. */ protected static final String CACHE_NAME = "cache"; @@ -125,8 +119,6 @@ public abstract class AbstractSchemaSelfTest extends GridCommonAbstractTest { protected IgniteConfiguration commonConfiguration(int idx) throws Exception { IgniteConfiguration cfg = super.getConfiguration(getTestIgniteInstanceName(idx)); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setMarshaller(new BinaryMarshaller()); DataStorageConfiguration memCfg = new DataStorageConfiguration().setDefaultDataRegionConfiguration( @@ -566,7 +558,7 @@ protected void dynamicIndexDrop(Ignite node, String cacheName, String idxName, b * @param node Node to create cache on. */ protected void destroySqlCache(Ignite node) throws IgniteCheckedException { - ((IgniteEx)node).context().cache().dynamicDestroyCache(CACHE_NAME, true, true, false).get(); + ((IgniteEx)node).context().cache().dynamicDestroyCache(CACHE_NAME, true, true, false, null).get(); } /** diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/BasicIndexMultinodeTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/BasicIndexMultinodeTest.java new file mode 100644 index 0000000000000..37c82c2156c84 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/BasicIndexMultinodeTest.java @@ -0,0 +1,28 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.index; + +/** + * A set of basic tests for caches with indexes in multi-node environment. + */ +public class BasicIndexMultinodeTest extends BasicIndexTest { + /** {@inheritDoc} */ + @Override protected int gridCount() { + return 3; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/BasicIndexTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/BasicIndexTest.java index feb330bcaef87..afd071d364d10 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/BasicIndexTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/BasicIndexTest.java @@ -25,8 +25,8 @@ import java.util.LinkedHashMap; import java.util.List; import java.util.Objects; +import java.util.stream.Collectors; import org.apache.ignite.IgniteCache; -import org.apache.ignite.cache.CacheKeyConfiguration; import org.apache.ignite.cache.QueryEntity; import org.apache.ignite.cache.QueryIndex; import org.apache.ignite.cache.query.SqlFieldsQuery; @@ -34,51 +34,43 @@ import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; import 
org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; +import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; /** * A set of basic tests for caches with indexes. */ public class BasicIndexTest extends GridCommonAbstractTest { /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** */ - private Collection indexes; + private Collection indexes = Collections.emptyList(); /** */ private Integer inlineSize; /** */ - private Boolean isPersistenceEnabled; + private boolean isPersistenceEnabled; /** */ - private String affKeyFieldName; + private int gridCount = 1; - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { - assertNotNull(indexes); - assertNotNull(inlineSize); - assertNotNull(isPersistenceEnabled); - - for (QueryIndex index : indexes) { + for (QueryIndex index : indexes) index.setInlineSize(inlineSize); - } IgniteConfiguration igniteCfg = super.getConfiguration(igniteInstanceName); - igniteCfg.setDiscoverySpi( - new TcpDiscoverySpi().setIpFinder(IP_FINDER) - ); + igniteCfg.setConsistentId(igniteInstanceName); LinkedHashMap fields = new LinkedHashMap<>(); fields.put("keyStr", String.class.getName()); @@ -98,13 +90,6 @@ public class BasicIndexTest extends GridCommonAbstractTest { )) .setSqlIndexMaxInlineSize(inlineSize); - if (affKeyFieldName != null) { - ccfg.setKeyConfiguration(new 
CacheKeyConfiguration() - .setTypeName(Key.class.getTypeName()) - .setAffinityKeyFieldName(affKeyFieldName) - ); - } - igniteCfg.setCacheConfiguration(ccfg); if (isPersistenceEnabled) { @@ -118,7 +103,9 @@ public class BasicIndexTest extends GridCommonAbstractTest { return igniteCfg; } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override protected void beforeTest() throws Exception { super.beforeTest(); @@ -127,47 +114,46 @@ public class BasicIndexTest extends GridCommonAbstractTest { cleanPersistenceDir(); } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override protected void afterTest() throws Exception { stopAllGrids(); cleanPersistenceDir(); - indexes = null; - - inlineSize = null; - - isPersistenceEnabled = null; - - affKeyFieldName = null; - super.afterTest(); } + /** + * @return Grid count used in test. + */ + protected int gridCount() { + return gridCount; + } + /** */ + @Test public void testNoIndexesNoPersistence() throws Exception { - indexes = Collections.emptyList(); - - isPersistenceEnabled = false; - - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); + startGridsMultiThreaded(gridCount()); populateCache(); checkAll(); - stopGrid(); + stopAllGrids(); } } /** */ + @Test public void testAllIndexesNoPersistence() throws Exception { indexes = Arrays.asList( new QueryIndex("keyStr"), @@ -178,39 +164,34 @@ public void testAllIndexesNoPersistence() throws Exception { new QueryIndex("valPojo") ); - isPersistenceEnabled = false; - - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); + startGridsMultiThreaded(gridCount()); populateCache(); checkAll(); - stopGrid(); + stopAllGrids(); } } /** */ + @Test public void testDynamicIndexesNoPersistence() throws Exception { - indexes 
= Collections.emptyList(); - - isPersistenceEnabled = false; - - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); + startGridsMultiThreaded(gridCount()); populateCache(); @@ -225,46 +206,42 @@ public void testDynamicIndexesNoPersistence() throws Exception { checkAll(); - stopGrid(); + stopAllGrids(); } } /** */ + @Test public void testNoIndexesWithPersistence() throws Exception { - indexes = Collections.emptyList(); - isPersistenceEnabled = true; - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); - - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); populateCache(); checkAll(); - stopGrid(); - - startGrid(); + stopAllGrids(); - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); checkAll(); - stopGrid(); + stopAllGrids(); cleanPersistenceDir(); } } /** */ + @Test public void testAllIndexesWithPersistence() throws Exception { indexes = Arrays.asList( new QueryIndex("keyStr"), @@ -277,51 +254,44 @@ public void testAllIndexesWithPersistence() throws Exception { isPersistenceEnabled = true; - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); - - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); populateCache(); checkAll(); - stopGrid(); - - startGrid(); + stopAllGrids(); - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); checkAll(); - stopGrid(); + stopAllGrids(); cleanPersistenceDir(); } } /** */ + @Test public void testDynamicIndexesWithPersistence() throws Exception { - indexes = Collections.emptyList(); - isPersistenceEnabled = true; - int[] inlineSizes = { 0, 10, 20, 50, 100 }; 
+ int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); - - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); populateCache(); @@ -336,66 +306,105 @@ public void testDynamicIndexesWithPersistence() throws Exception { checkAll(); - stopGrid(); + stopAllGrids(); + + startGridsMultiThreaded(gridCount()); + + checkAll(); + + stopAllGrids(); + + cleanPersistenceDir(); + } + } + + /** */ + @Test + public void testDynamicIndexesDropWithPersistence() throws Exception { + isPersistenceEnabled = true; - startGrid(); + int[] inlineSizes = {0, 10, 20, 50, 100}; - grid().cluster().active(true); + for (int i : inlineSizes) { + log().info("Checking inlineSize=" + i); + + inlineSize = i; + + startGridsMultiThreaded(gridCount()); + + populateCache(); + + String[] cols = { + "keyStr", + "keyLong", + "keyPojo", + "valStr", + "valLong", + "valPojo" + }; + + createDynamicIndexes(cols); + + checkAll(); + + dropDynamicIndexes(cols); + + checkAll(); + + stopAllGrids(); + + startGridsMultiThreaded(gridCount()); checkAll(); - stopGrid(); + stopAllGrids(); cleanPersistenceDir(); } } /** */ + @Test public void testNoIndexesWithPersistenceIndexRebuild() throws Exception { - indexes = Collections.emptyList(); - isPersistenceEnabled = true; - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); - - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); populateCache(); checkAll(); - Path idxPath = getIndexBinPath(); + List idxPaths = getIndexBinPaths(); // Shutdown gracefully to ensure there is a checkpoint with index.bin. // Otherwise index.bin rebuilding may not work. 
- grid().cluster().active(false); - - stopGrid(); + grid(0).cluster().active(false); - assertTrue(U.delete(idxPath)); + stopAllGrids(); - startGrid(); + idxPaths.forEach(idxPath -> assertTrue(U.delete(idxPath))); - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); - grid().cache(DEFAULT_CACHE_NAME).indexReadyFuture().get(); + grid(0).cache(DEFAULT_CACHE_NAME).indexReadyFuture().get(); checkAll(); - stopGrid(); + stopAllGrids(); cleanPersistenceDir(); } } /** */ + @Test public void testAllIndexesWithPersistenceIndexRebuild() throws Exception { indexes = Arrays.asList( new QueryIndex("keyStr"), @@ -408,61 +417,54 @@ public void testAllIndexesWithPersistenceIndexRebuild() throws Exception { isPersistenceEnabled = true; - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); - - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); populateCache(); checkAll(); - Path idxPath = getIndexBinPath(); + List idxPaths = getIndexBinPaths(); // Shutdown gracefully to ensure there is a checkpoint with index.bin. // Otherwise index.bin rebuilding may not work. 
- grid().cluster().active(false); - - stopGrid(); + grid(0).cluster().active(false); - assertTrue(U.delete(idxPath)); + stopAllGrids(); - startGrid(); + idxPaths.forEach(idxPath -> assertTrue(U.delete(idxPath))); - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); - grid().cache(DEFAULT_CACHE_NAME).indexReadyFuture().get(); + grid(0).cache(DEFAULT_CACHE_NAME).indexReadyFuture().get(); checkAll(); - stopGrid(); + stopAllGrids(); cleanPersistenceDir(); } } /** */ + @Test public void testDynamicIndexesWithPersistenceIndexRebuild() throws Exception { - indexes = Collections.emptyList(); - isPersistenceEnabled = true; - int[] inlineSizes = { 0, 10, 20, 50, 100 }; + int[] inlineSizes = {0, 10, 20, 50, 100}; for (int i : inlineSizes) { log().info("Checking inlineSize=" + i); inlineSize = i; - startGrid(); - - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); populateCache(); @@ -477,25 +479,23 @@ public void testDynamicIndexesWithPersistenceIndexRebuild() throws Exception { checkAll(); - Path idxPath = getIndexBinPath(); + List idxPaths = getIndexBinPaths(); // Shutdown gracefully to ensure there is a checkpoint with index.bin. // Otherwise index.bin rebuilding may not work. 
- grid().cluster().active(false); - - stopGrid(); + grid(0).cluster().active(false); - assertTrue(U.delete(idxPath)); + stopAllGrids(); - startGrid(); + idxPaths.forEach(idxPath -> assertTrue(U.delete(idxPath))); - grid().cluster().active(true); + startGridsMultiThreaded(gridCount()); - grid().cache(DEFAULT_CACHE_NAME).indexReadyFuture().get(); + grid(0).cache(DEFAULT_CACHE_NAME).indexReadyFuture().get(); checkAll(); - stopGrid(); + stopAllGrids(); cleanPersistenceDir(); } @@ -503,7 +503,7 @@ public void testDynamicIndexesWithPersistenceIndexRebuild() throws Exception { /** */ private void checkAll() { - IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); + IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); checkRemovePut(cache); @@ -520,7 +520,7 @@ private void checkAll() { /** */ private void populateCache() { - IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); + IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); // Be paranoid and populate first even indexes in ascending order, then odd indexes in descending // to check that inserting in the middle works. 
@@ -557,9 +557,9 @@ private void checkSelectAll(IgniteCache cache) { assertEquals(100, data.size()); for (List row : data) { - Key key = (Key)row.get(0); + Key key = (Key) row.get(0); - Val val = (Val)row.get(1); + Val val = (Val) row.get(1); long i = key.keyLong; @@ -616,9 +616,9 @@ private void checkSelectStringRange(IgniteCache cache) { assertEquals(10, data.size()); for (List row : data) { - Key key = (Key)row.get(0); + Key key = (Key) row.get(0); - Val val = (Val)row.get(1); + Val val = (Val) row.get(1); long i = key.keyLong; @@ -644,9 +644,9 @@ private void checkSelectLongRange(IgniteCache cache) { assertEquals(10, data.size()); for (List row : data) { - Key key = (Key)row.get(0); + Key key = (Key) row.get(0); - Val val = (Val)row.get(1); + Val val = (Val) row.get(1); long i = key.keyLong; @@ -658,29 +658,54 @@ private void checkSelectLongRange(IgniteCache cache) { } } - /** Must be called when the grid is up. */ - private Path getIndexBinPath() { - IgniteInternalCache cachex = grid().cachex(DEFAULT_CACHE_NAME); + /** + * Must be called when the grid is up. + */ + private List getIndexBinPaths() { + return G.allGrids().stream() + .map(grid -> (IgniteEx) grid) + .map(grid -> { + IgniteInternalCache cachex = grid.cachex(DEFAULT_CACHE_NAME); - assertNotNull(cachex); + assertNotNull(cachex); - FilePageStoreManager pageStoreMgr = (FilePageStoreManager)cachex.context().shared().pageStore(); + FilePageStoreManager pageStoreMgr = (FilePageStoreManager) cachex.context().shared().pageStore(); - assertNotNull(pageStoreMgr); + assertNotNull(pageStoreMgr); - File cacheWorkDir = pageStoreMgr.cacheWorkDir(cachex.configuration()); + File cacheWorkDir = pageStoreMgr.cacheWorkDir(cachex.configuration()); - return cacheWorkDir.toPath().resolve("index.bin"); + return cacheWorkDir.toPath().resolve("index.bin"); + }) + .collect(Collectors.toList()); } /** */ private void createDynamicIndexes(String... 
cols) { - IgniteCache cache = grid().cache(DEFAULT_CACHE_NAME); + IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); for (String col : cols) { + String indexName = col + "_idx"; + String schemaName = DEFAULT_CACHE_NAME; + + cache.query(new SqlFieldsQuery( + String.format("create index %s on \"%s\".Val(%s) INLINE_SIZE %s;", indexName, schemaName, col, inlineSize) + )).getAll(); + } + + cache.indexReadyFuture().get(); + } + + /** */ + private void dropDynamicIndexes(String... cols) { + IgniteCache cache = grid(0).cache(DEFAULT_CACHE_NAME); + + for (String col : cols) { + String indexName = col + "_idx"; + cache.query(new SqlFieldsQuery( - "create index on Val(" + col + ") INLINE_SIZE " + inlineSize - )); + String.format("drop index %s;", indexName) + )).getAll(); } cache.indexReadyFuture().get(); @@ -714,7 +739,9 @@ private Key(String str, long aLong, Pojo pojo) { keyPojo = pojo; } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public boolean equals(Object o) { if (this == o) return true; @@ -722,19 +749,23 @@ private Key(String str, long aLong, Pojo pojo) { if (o == null || getClass() != o.getClass()) return false; - Key key = (Key)o; + Key key = (Key) o; return keyLong == key.keyLong && Objects.equals(keyStr, key.keyStr) && Objects.equals(keyPojo, key.keyPojo); } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public int hashCode() { return Objects.hash(keyStr, keyLong, keyPojo); } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public String toString() { return S.toString(Key.class, this); } @@ -758,7 +789,9 @@ private Val(String str, long aLong, Pojo pojo) { valPojo = pojo; } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public boolean equals(Object o) { if (this == o) return true; @@ -766,19 +799,23 @@ private Val(String str, long aLong, Pojo pojo) { if (o == null || getClass() != o.getClass()) return false; - Val val = (Val)o; + Val val = (Val) o; return valLong == val.valLong && 
Objects.equals(valStr, val.valStr) && Objects.equals(valPojo, val.valPojo); } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public int hashCode() { return Objects.hash(valStr, valLong, valPojo); } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public String toString() { return S.toString(Val.class, this); } @@ -794,7 +831,9 @@ private Pojo(long pojoLong) { this.pojoLong = pojoLong; } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public boolean equals(Object o) { if (this == o) return true; @@ -802,17 +841,21 @@ private Pojo(long pojoLong) { if (o == null || getClass() != o.getClass()) return false; - Pojo pojo = (Pojo)o; + Pojo pojo = (Pojo) o; return pojoLong == pojo.pojoLong; } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public int hashCode() { return Objects.hash(pojoLong); } - /** {@inheritDoc} */ + /** + * {@inheritDoc} + */ @Override public String toString() { return S.toString(Pojo.class, this); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DuplicateKeyValueClassesSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DuplicateKeyValueClassesSelfTest.java index 6eb1bb982df7f..2b67214b90bed 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DuplicateKeyValueClassesSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DuplicateKeyValueClassesSelfTest.java @@ -22,11 +22,15 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.UUID; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Make sure that cache can start with multiple key-value classes of the same type. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class DuplicateKeyValueClassesSelfTest extends GridCommonAbstractTest { /** Cache name. 
*/ private static final String CACHE_NAME = "cache"; @@ -48,6 +52,7 @@ public class DuplicateKeyValueClassesSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testDuplicateKeyClass() throws Exception { CacheConfiguration ccfg = new CacheConfiguration() .setName(CACHE_NAME) @@ -61,6 +66,7 @@ public void testDuplicateKeyClass() throws Exception { * * @throws Exception If failed. */ + @Test public void testDuplicateValueClass() throws Exception { CacheConfiguration ccfg = new CacheConfiguration() .setName(CACHE_NAME) diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractConcurrentSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractConcurrentSelfTest.java index 072f1ab2d136a..9d03d2434cb69 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractConcurrentSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractConcurrentSelfTest.java @@ -59,6 +59,9 @@ import org.apache.ignite.internal.util.typedef.T3; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.lang.IgnitePredicate; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteClientReconnectAbstractTest.TestTcpDiscoverySpi; @@ -66,6 +69,7 @@ * Concurrency tests for dynamic index create/drop. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public abstract class DynamicColumnsAbstractConcurrentSelfTest extends DynamicColumnsAbstractTest { /** Test duration. */ private static final long TEST_DUR = 10_000L; @@ -155,6 +159,7 @@ public abstract class DynamicColumnsAbstractConcurrentSelfTest extends DynamicCo * * @throws Exception If failed. 
*/ + @Test public void testAddColumnCoordinatorChange() throws Exception { checkCoordinatorChange(true); } @@ -164,6 +169,7 @@ public void testAddColumnCoordinatorChange() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropColumnCoordinatorChange() throws Exception { checkCoordinatorChange(false); } @@ -241,6 +247,7 @@ public void checkCoordinatorChange(boolean addOrRemove) throws Exception { * * @throws Exception If failed. */ + @Test public void testOperationChaining() throws Exception { // 7 nodes * 2 columns = 14 latch countdowns. CountDownLatch finishLatch = new CountDownLatch(14); @@ -295,6 +302,7 @@ public void testOperationChaining() throws Exception { * * @throws Exception If failed. */ + @Test public void testNodeJoinOnPendingAddOperation() throws Exception { checkNodeJoinOnPendingOperation(true); } @@ -304,6 +312,7 @@ public void testNodeJoinOnPendingAddOperation() throws Exception { * * @throws Exception If failed. */ + @Test public void testNodeJoinOnPendingDropOperation() throws Exception { checkNodeJoinOnPendingOperation(false); } @@ -351,6 +360,7 @@ private void checkNodeJoinOnPendingOperation(boolean addOrRemove) throws Excepti * * @throws Exception If failed, */ + @Test public void testConcurrentPutRemove() throws Exception { CountDownLatch finishLatch = new CountDownLatch(4); @@ -481,6 +491,7 @@ private Object key(int id) { * * @throws Exception If failed. */ + @Test public void testAddConcurrentRebalance() throws Exception { checkConcurrentRebalance(true); } @@ -490,6 +501,7 @@ public void testAddConcurrentRebalance() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropConcurrentRebalance() throws Exception { checkConcurrentRebalance(false); } @@ -557,6 +569,7 @@ private void put(Ignite node, int startIdx, int endIdx) { * * @throws Exception If failed. 
*/ + @Test public void testAddConcurrentCacheDestroy() throws Exception { checkConcurrentCacheDestroy(true); } @@ -566,6 +579,7 @@ public void testAddConcurrentCacheDestroy() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropConcurrentCacheDestroy() throws Exception { checkConcurrentCacheDestroy(false); } @@ -626,6 +640,7 @@ private void checkConcurrentCacheDestroy(boolean addOrRemove) throws Exception { * * @throws Exception If failed. */ + @Test public void testQueryConsistencyMultithreaded() throws Exception { // Start complex topology. ignitionStart(serverConfiguration(1)); @@ -717,6 +732,7 @@ public void testQueryConsistencyMultithreaded() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnect() throws Exception { checkClientReconnect(false, true); } @@ -726,6 +742,7 @@ public void testClientReconnect() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnectWithCacheRestart() throws Exception { checkClientReconnect(true, true); } @@ -735,6 +752,7 @@ public void testClientReconnectWithCacheRestart() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnectWithNonDynamicCache() throws Exception { checkClientReconnect(false, false); } @@ -744,6 +762,7 @@ public void testClientReconnectWithNonDynamicCache() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnectWithNonDynamicCacheRestart() throws Exception { checkClientReconnect(true, false); } @@ -842,6 +861,7 @@ private void reconnectClientNode(final Ignite srvNode, final Ignite cliNode, fin * @throws Exception If failed. */ @SuppressWarnings("StringConcatenationInLoop") + @Test public void testConcurrentOperationsAndNodeStartStopMultithreaded() throws Exception { // Start several stable nodes. 
ignitionStart(serverConfiguration(1)); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractTest.java index 5ddafa355888d..e515e27416096 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicColumnsAbstractTest.java @@ -38,9 +38,6 @@ import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.internal.processors.query.QueryField; import org.apache.ignite.internal.processors.query.QueryUtils; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.h2.value.DataType; @@ -61,9 +58,6 @@ public abstract class DynamicColumnsAbstractTest extends GridCommonAbstractTest /** SQL to drop test table. */ final static String DROP_SQL = "DROP TABLE Person"; - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** * Check that given columns are seen by client. * @param node Node to check. 
@@ -181,8 +175,6 @@ protected IgniteConfiguration clientConfiguration(int idx) throws Exception { protected IgniteConfiguration commonConfiguration(int idx) throws Exception { IgniteConfiguration cfg = getConfiguration(getTestIgniteInstanceName(idx)); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - DataStorageConfiguration memCfg = new DataStorageConfiguration().setDefaultDataRegionConfiguration( new DataRegionConfiguration().setMaxSize(128L * 1024 * 1024)); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractBasicSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractBasicSelfTest.java index bd3b0939e92bd..7bc95c79e6c85 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractBasicSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractBasicSelfTest.java @@ -23,7 +23,6 @@ import java.util.concurrent.Callable; import javax.cache.CacheException; import org.apache.ignite.Ignite; -import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.Ignition; import org.apache.ignite.cache.CacheAtomicityMode; @@ -39,6 +38,9 @@ import org.apache.ignite.internal.processors.query.schema.SchemaOperationException; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; @@ -48,7 +50,8 @@ /** * Tests for dynamic index creation. 
*/ -@SuppressWarnings({"unchecked", "ThrowableResultOfMethodCallIgnored"}) +@SuppressWarnings({"unchecked"}) +@RunWith(JUnit4.class) public abstract class DynamicIndexAbstractBasicSelfTest extends DynamicIndexAbstractSelfTest { /** Node index for regular server (coordinator). */ protected static final int IDX_SRV_CRD = 0; @@ -78,7 +81,7 @@ public abstract class DynamicIndexAbstractBasicSelfTest extends DynamicIndexAbst /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { - node().context().cache().dynamicDestroyCache(CACHE_NAME, true, true, false).get(); + node().context().cache().dynamicDestroyCache(CACHE_NAME, true, true, false, null).get(); super.afterTest(); } @@ -137,6 +140,7 @@ private void loadInitialData() { * * @throws Exception If failed. */ + @Test public void testCreatePartitionedAtomic() throws Exception { checkCreate(PARTITIONED, ATOMIC, false); } @@ -146,6 +150,7 @@ public void testCreatePartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreatePartitionedAtomicNear() throws Exception { checkCreate(PARTITIONED, ATOMIC, true); } @@ -155,6 +160,7 @@ public void testCreatePartitionedAtomicNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreatePartitionedTransactional() throws Exception { checkCreate(PARTITIONED, TRANSACTIONAL, false); } @@ -164,6 +170,7 @@ public void testCreatePartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreatePartitionedTransactionalNear() throws Exception { checkCreate(PARTITIONED, TRANSACTIONAL, true); } @@ -173,6 +180,7 @@ public void testCreatePartitionedTransactionalNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateReplicatedAtomic() throws Exception { checkCreate(REPLICATED, ATOMIC, false); } @@ -182,6 +190,7 @@ public void testCreateReplicatedAtomic() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testCreateReplicatedTransactional() throws Exception { checkCreate(REPLICATED, TRANSACTIONAL, false); } @@ -221,6 +230,7 @@ private void checkCreate(CacheMode mode, CacheAtomicityMode atomicityMode, boole * * @throws Exception If failed. */ + @Test public void testCreateCompositePartitionedAtomic() throws Exception { checkCreateComposite(PARTITIONED, ATOMIC, false); } @@ -230,6 +240,7 @@ public void testCreateCompositePartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateCompositePartitionedAtomicNear() throws Exception { checkCreateComposite(PARTITIONED, ATOMIC, true); } @@ -239,6 +250,7 @@ public void testCreateCompositePartitionedAtomicNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateCompositePartitionedTransactional() throws Exception { checkCreateComposite(PARTITIONED, TRANSACTIONAL, false); } @@ -248,6 +260,7 @@ public void testCreateCompositePartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateCompositePartitionedTransactionalNear() throws Exception { checkCreateComposite(PARTITIONED, TRANSACTIONAL, true); } @@ -257,6 +270,7 @@ public void testCreateCompositePartitionedTransactionalNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateCompositeReplicatedAtomic() throws Exception { checkCreateComposite(REPLICATED, ATOMIC, false); } @@ -266,6 +280,7 @@ public void testCreateCompositeReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateCompositeReplicatedTransactional() throws Exception { checkCreateComposite(REPLICATED, TRANSACTIONAL, false); } @@ -297,6 +312,7 @@ private void checkCreateComposite(CacheMode mode, CacheAtomicityMode atomicityMo * * @throws Exception If failed. 
*/ + @Test public void testCreateIndexNoCachePartitionedAtomic() throws Exception { checkCreateNotCache(PARTITIONED, ATOMIC, false); } @@ -306,6 +322,7 @@ public void testCreateIndexNoCachePartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexNoCachePartitionedAtomicNear() throws Exception { checkCreateNotCache(PARTITIONED, ATOMIC, true); } @@ -315,6 +332,7 @@ public void testCreateIndexNoCachePartitionedAtomicNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexNoCachePartitionedTransactional() throws Exception { checkCreateNotCache(PARTITIONED, TRANSACTIONAL, false); } @@ -324,6 +342,7 @@ public void testCreateIndexNoCachePartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexNoCachePartitionedTransactionalNear() throws Exception { checkCreateNotCache(PARTITIONED, TRANSACTIONAL, true); } @@ -333,6 +352,7 @@ public void testCreateIndexNoCachePartitionedTransactionalNear() throws Exceptio * * @throws Exception If failed. */ + @Test public void testCreateIndexNoCacheReplicatedAtomic() throws Exception { checkCreateNotCache(REPLICATED, ATOMIC, false); } @@ -342,6 +362,7 @@ public void testCreateIndexNoCacheReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexNoCacheReplicatedTransactional() throws Exception { checkCreateNotCache(REPLICATED, TRANSACTIONAL, false); } @@ -383,6 +404,7 @@ private void checkCreateNotCache(CacheMode mode, CacheAtomicityMode atomicityMod * * @throws Exception If failed. */ + @Test public void testCreateNoTablePartitionedAtomic() throws Exception { checkCreateNoTable(PARTITIONED, ATOMIC, false); } @@ -392,6 +414,7 @@ public void testCreateNoTablePartitionedAtomic() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testCreateNoTablePartitionedAtomicNear() throws Exception { checkCreateNoTable(PARTITIONED, ATOMIC, true); } @@ -401,6 +424,7 @@ public void testCreateNoTablePartitionedAtomicNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateNoTablePartitionedTransactional() throws Exception { checkCreateNoTable(PARTITIONED, TRANSACTIONAL, false); } @@ -410,6 +434,7 @@ public void testCreateNoTablePartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateNoTablePartitionedTransactionalNear() throws Exception { checkCreateNoTable(PARTITIONED, TRANSACTIONAL, true); } @@ -419,6 +444,7 @@ public void testCreateNoTablePartitionedTransactionalNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateNoTableReplicatedAtomic() throws Exception { checkCreateNoTable(REPLICATED, ATOMIC, false); } @@ -428,6 +454,7 @@ public void testCreateNoTableReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateNoTableReplicatedTransactional() throws Exception { checkCreateNoTable(REPLICATED, TRANSACTIONAL, false); } @@ -459,6 +486,7 @@ private void checkCreateNoTable(CacheMode mode, CacheAtomicityMode atomicityMode * * @throws Exception If failed. */ + @Test public void testCreateIndexNoColumnPartitionedAtomic() throws Exception { checkCreateIndexNoColumn(PARTITIONED, ATOMIC, false); } @@ -468,6 +496,7 @@ public void testCreateIndexNoColumnPartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexNoColumnPartitionedAtomicNear() throws Exception { checkCreateIndexNoColumn(PARTITIONED, ATOMIC, true); } @@ -477,6 +506,7 @@ public void testCreateIndexNoColumnPartitionedAtomicNear() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testCreateIndexNoColumnPartitionedTransactional() throws Exception { checkCreateIndexNoColumn(PARTITIONED, TRANSACTIONAL, false); } @@ -486,6 +516,7 @@ public void testCreateIndexNoColumnPartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexNoColumnPartitionedTransactionalNear() throws Exception { checkCreateIndexNoColumn(PARTITIONED, TRANSACTIONAL, true); } @@ -495,6 +526,7 @@ public void testCreateIndexNoColumnPartitionedTransactionalNear() throws Excepti * * @throws Exception If failed. */ + @Test public void testCreateIndexNoColumnReplicatedAtomic() throws Exception { checkCreateIndexNoColumn(REPLICATED, ATOMIC, false); } @@ -504,6 +536,7 @@ public void testCreateIndexNoColumnReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexNoColumnReplicatedTransactional() throws Exception { checkCreateIndexNoColumn(REPLICATED, TRANSACTIONAL, false); } @@ -535,6 +568,7 @@ private void checkCreateIndexNoColumn(CacheMode mode, CacheAtomicityMode atomici * * @throws Exception If failed. */ + @Test public void testCreateIndexOnColumnWithAliasPartitionedAtomic() throws Exception { checkCreateIndexOnColumnWithAlias(PARTITIONED, ATOMIC, false); } @@ -544,6 +578,7 @@ public void testCreateIndexOnColumnWithAliasPartitionedAtomic() throws Exception * * @throws Exception If failed. */ + @Test public void testCreateIndexOnColumnWithAliasPartitionedAtomicNear() throws Exception { checkCreateIndexOnColumnWithAlias(PARTITIONED, ATOMIC, true); } @@ -553,6 +588,7 @@ public void testCreateIndexOnColumnWithAliasPartitionedAtomicNear() throws Excep * * @throws Exception If failed. 
*/ + @Test public void testCreateIndexOnColumnWithAliasPartitionedTransactional() throws Exception { checkCreateIndexOnColumnWithAlias(PARTITIONED, TRANSACTIONAL, false); } @@ -562,6 +598,7 @@ public void testCreateIndexOnColumnWithAliasPartitionedTransactional() throws Ex * * @throws Exception If failed. */ + @Test public void testCreateColumnWithAliasPartitionedTransactionalNear() throws Exception { checkCreateIndexOnColumnWithAlias(PARTITIONED, TRANSACTIONAL, true); } @@ -571,6 +608,7 @@ public void testCreateColumnWithAliasPartitionedTransactionalNear() throws Excep * * @throws Exception If failed. */ + @Test public void testCreateColumnWithAliasReplicatedAtomic() throws Exception { checkCreateIndexOnColumnWithAlias(REPLICATED, ATOMIC, false); } @@ -580,6 +618,7 @@ public void testCreateColumnWithAliasReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateColumnWithAliasReplicatedTransactional() throws Exception { checkCreateIndexOnColumnWithAlias(REPLICATED, TRANSACTIONAL, false); } @@ -621,6 +660,7 @@ private void checkCreateIndexOnColumnWithAlias(CacheMode mode, CacheAtomicityMod * * @throws Exception If failed. */ + @Test public void testCreateIndexWithInlineSizePartitionedAtomic() throws Exception { checkCreateIndexWithInlineSize(PARTITIONED, ATOMIC, false); } @@ -630,6 +670,7 @@ public void testCreateIndexWithInlineSizePartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexWithInlineSizePartitionedAtomicNear() throws Exception { checkCreateIndexWithInlineSize(PARTITIONED, ATOMIC, true); } @@ -639,6 +680,7 @@ public void testCreateIndexWithInlineSizePartitionedAtomicNear() throws Exceptio * * @throws Exception If failed. 
*/ + @Test public void testCreateIndexWithInlineSizePartitionedTransactional() throws Exception { checkCreateIndexWithInlineSize(PARTITIONED, TRANSACTIONAL, false); } @@ -648,6 +690,7 @@ public void testCreateIndexWithInlineSizePartitionedTransactional() throws Excep * * @throws Exception If failed. */ + @Test public void testCreateIndexWithInlineSizePartitionedTransactionalNear() throws Exception { checkCreateIndexWithInlineSize(PARTITIONED, TRANSACTIONAL, true); } @@ -657,6 +700,7 @@ public void testCreateIndexWithInlineSizePartitionedTransactionalNear() throws E * * @throws Exception If failed. */ + @Test public void testCreateIndexWithInlineSizeReplicatedAtomic() throws Exception { checkCreateIndexWithInlineSize(REPLICATED, ATOMIC, false); } @@ -666,6 +710,7 @@ public void testCreateIndexWithInlineSizeReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexWithInlineSizeReplicatedTransactional() throws Exception { checkCreateIndexWithInlineSize(REPLICATED, TRANSACTIONAL, false); } @@ -751,6 +796,7 @@ private void checkNoIndexIsCreatedForInlineSize(final int inlineSize, int ignite * * @throws Exception If failed. */ + @Test public void testCreateIndexWithParallelismPartitionedAtomic() throws Exception { checkCreateIndexWithParallelism(PARTITIONED, ATOMIC, false); } @@ -760,6 +806,7 @@ public void testCreateIndexWithParallelismPartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexWithParallelismPartitionedAtomicNear() throws Exception { checkCreateIndexWithParallelism(PARTITIONED, ATOMIC, true); } @@ -769,6 +816,7 @@ public void testCreateIndexWithParallelismPartitionedAtomicNear() throws Excepti * * @throws Exception If failed. 
*/ + @Test public void testCreateIndexWithParallelismPartitionedTransactional() throws Exception { checkCreateIndexWithParallelism(PARTITIONED, TRANSACTIONAL, false); } @@ -778,6 +826,7 @@ public void testCreateIndexWithParallelismPartitionedTransactional() throws Exce * * @throws Exception If failed. */ + @Test public void testCreateIndexWithParallelismPartitionedTransactionalNear() throws Exception { checkCreateIndexWithParallelism(PARTITIONED, TRANSACTIONAL, true); } @@ -787,6 +836,7 @@ public void testCreateIndexWithParallelismPartitionedTransactionalNear() throws * * @throws Exception If failed. */ + @Test public void testCreateIndexWithParallelismReplicatedAtomic() throws Exception { checkCreateIndexWithParallelism(REPLICATED, ATOMIC, false); } @@ -796,6 +846,7 @@ public void testCreateIndexWithParallelismReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testCreateIndexWithParallelismReplicatedTransactional() throws Exception { checkCreateIndexWithParallelism(REPLICATED, TRANSACTIONAL, false); } @@ -880,6 +931,7 @@ private void checkNoIndexIsCreatedForParallelism(final int parallel, int igniteQ * * @throws Exception If failed. */ + @Test public void testDropPartitionedAtomic() throws Exception { checkDrop(PARTITIONED, ATOMIC, false); } @@ -889,6 +941,7 @@ public void testDropPartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropPartitionedAtomicNear() throws Exception { checkDrop(PARTITIONED, ATOMIC, true); } @@ -898,6 +951,7 @@ public void testDropPartitionedAtomicNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropPartitionedTransactional() throws Exception { checkDrop(PARTITIONED, TRANSACTIONAL, false); } @@ -907,6 +961,7 @@ public void testDropPartitionedTransactional() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDropPartitionedTransactionalNear() throws Exception { checkDrop(PARTITIONED, TRANSACTIONAL, true); } @@ -916,6 +971,7 @@ public void testDropPartitionedTransactionalNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropReplicatedAtomic() throws Exception { checkDrop(REPLICATED, ATOMIC, false); } @@ -925,6 +981,7 @@ public void testDropReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropReplicatedTransactional() throws Exception { checkDrop(REPLICATED, TRANSACTIONAL, false); } @@ -976,6 +1033,7 @@ public void checkDrop(CacheMode mode, CacheAtomicityMode atomicityMode, boolean * * @throws Exception If failed. */ + @Test public void testDropNoIndexPartitionedAtomic() throws Exception { checkDropNoIndex(PARTITIONED, ATOMIC, false); } @@ -985,6 +1043,7 @@ public void testDropNoIndexPartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoIndexPartitionedAtomicNear() throws Exception { checkDropNoIndex(PARTITIONED, ATOMIC, true); } @@ -994,6 +1053,7 @@ public void testDropNoIndexPartitionedAtomicNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoIndexPartitionedTransactional() throws Exception { checkDropNoIndex(PARTITIONED, TRANSACTIONAL, false); } @@ -1003,6 +1063,7 @@ public void testDropNoIndexPartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoIndexPartitionedTransactionalNear() throws Exception { checkDropNoIndex(PARTITIONED, TRANSACTIONAL, true); } @@ -1012,6 +1073,7 @@ public void testDropNoIndexPartitionedTransactionalNear() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDropNoIndexReplicatedAtomic() throws Exception { checkDropNoIndex(REPLICATED, ATOMIC, false); } @@ -1021,6 +1083,7 @@ public void testDropNoIndexReplicatedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoIndexReplicatedTransactional() throws Exception { checkDropNoIndex(REPLICATED, TRANSACTIONAL, false); } @@ -1051,6 +1114,7 @@ private void checkDropNoIndex(CacheMode mode, CacheAtomicityMode atomicityMode, * * @throws Exception If failed. */ + @Test public void testDropNoCachePartitionedAtomic() throws Exception { checkDropNoCache(PARTITIONED, ATOMIC, false); } @@ -1060,6 +1124,7 @@ public void testDropNoCachePartitionedAtomic() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoCachePartitionedAtomicNear() throws Exception { checkDropNoCache(PARTITIONED, ATOMIC, true); } @@ -1069,6 +1134,7 @@ public void testDropNoCachePartitionedAtomicNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoCachePartitionedTransactional() throws Exception { checkDropNoCache(PARTITIONED, TRANSACTIONAL, false); } @@ -1078,6 +1144,7 @@ public void testDropNoCachePartitionedTransactional() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoCachePartitionedTransactionalNear() throws Exception { checkDropNoCache(PARTITIONED, TRANSACTIONAL, true); } @@ -1087,6 +1154,7 @@ public void testDropNoCachePartitionedTransactionalNear() throws Exception { * * @throws Exception If failed. */ + @Test public void testDropNoCacheReplicatedAtomic() throws Exception { checkDropNoCache(REPLICATED, ATOMIC, false); } @@ -1096,6 +1164,7 @@ public void testDropNoCacheReplicatedAtomic() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDropNoCacheReplicatedTransactional() throws Exception { checkDropNoCache(REPLICATED, TRANSACTIONAL, false); } @@ -1137,6 +1206,7 @@ private void checkDropNoCache(CacheMode mode, CacheAtomicityMode atomicityMode, * * @throws Exception If failed. */ + @Test public void testFailOnLocalCache() throws Exception { for (Ignite node : Ignition.allGrids()) { if (!node.configuration().isClientMode()) @@ -1165,6 +1235,7 @@ public void testFailOnLocalCache() throws Exception { * * @throws Exception If failed. */ + @Test public void testNonSqlCache() throws Exception { final QueryIndex idx = index(IDX_NAME_2, field(FIELD_NAME_1)); @@ -1178,6 +1249,7 @@ public void testNonSqlCache() throws Exception { /** * Test behavior depending on index name case sensitivity. */ + @Test public void testIndexNameCaseSensitivity() throws Exception { doTestIndexNameCaseSensitivity("myIdx", false); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractConcurrentSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractConcurrentSelfTest.java index bf98491c25381..33054670b23a7 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractConcurrentSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractConcurrentSelfTest.java @@ -51,6 +51,9 @@ import org.apache.ignite.internal.util.typedef.T3; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.IgniteClientReconnectAbstractTest.TestTcpDiscoverySpi; @@ -58,6 +61,7 @@ * Concurrency tests for dynamic index create/drop. 
*/ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public abstract class DynamicIndexAbstractConcurrentSelfTest extends DynamicIndexAbstractSelfTest { /** Test duration. */ private static final long TEST_DUR = 10_000L; @@ -136,6 +140,7 @@ public abstract class DynamicIndexAbstractConcurrentSelfTest extends DynamicInde * * @throws Exception If failed. */ + @Test public void testCoordinatorChange() throws Exception { // Start servers. Ignite srv1 = ignitionStart(serverConfiguration(1)); @@ -201,6 +206,7 @@ public void testCoordinatorChange() throws Exception { * * @throws Exception If failed. */ + @Test public void testOperationChaining() throws Exception { Ignite srv1 = ignitionStart(serverConfiguration(1)); @@ -253,6 +259,7 @@ public void testOperationChaining() throws Exception { * * @throws Exception If failed. */ + @Test public void testNodeJoinOnPendingOperation() throws Exception { Ignite srv1 = ignitionStart(serverConfiguration(1)); @@ -290,6 +297,7 @@ public void testNodeJoinOnPendingOperation() throws Exception { * * @throws Exception If failed, */ + @Test public void testConcurrentPutRemove() throws Exception { // Start several nodes. Ignite srv1 = ignitionStart(serverConfiguration(1)); @@ -387,6 +395,7 @@ public void testConcurrentPutRemove() throws Exception { * * @throws Exception If failed. */ + @Test public void testConcurrentRebalance() throws Exception { // Start cache and populate it with data. Ignite srv1 = ignitionStart(serverConfiguration(1)); @@ -434,6 +443,7 @@ public void testConcurrentRebalance() throws Exception { * * @throws Exception If failed. */ + @Test public void testConcurrentCacheDestroy() throws Exception { // Start complex topology. Ignite srv1 = ignitionStart(serverConfiguration(1)); @@ -479,6 +489,7 @@ public void testConcurrentCacheDestroy() throws Exception { * * @throws Exception If failed. */ + @Test public void testConcurrentOperationsMultithreaded() throws Exception { // Start complex topology. 
ignitionStart(serverConfiguration(1)); @@ -553,6 +564,7 @@ public void testConcurrentOperationsMultithreaded() throws Exception { * * @throws Exception If failed. */ + @Test public void testQueryConsistencyMultithreaded() throws Exception { // Start complex topology. ignitionStart(serverConfiguration(1)); @@ -631,6 +643,7 @@ public void testQueryConsistencyMultithreaded() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnect() throws Exception { checkClientReconnect(false); } @@ -640,6 +653,7 @@ public void testClientReconnect() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientReconnectWithCacheRestart() throws Exception { checkClientReconnect(true); } @@ -753,6 +767,7 @@ private void reconnectClientNode(final Ignite srvNode, final Ignite cliNode, fin * * @throws Exception If failed. */ + @Test public void testConcurrentOperationsAndNodeStartStopMultithreaded() throws Exception { // Start several stable nodes. ignitionStart(serverConfiguration(1)); @@ -878,6 +893,7 @@ public void testConcurrentOperationsAndNodeStartStopMultithreaded() throws Excep * * @throws Exception If failed. */ + @Test public void testConcurrentOperationsAndCacheStartStopMultithreaded() throws Exception { // Start complex topology. ignitionStart(serverConfiguration(1)); @@ -1097,7 +1113,7 @@ private static class BlockingIndexing extends IgniteH2Indexing { /** * Start a node. - * + * * @param cfg Configuration. * @return Ignite instance. 
*/ diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractSelfTest.java index 0bab750a063e2..158eac43557a4 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/DynamicIndexAbstractSelfTest.java @@ -46,18 +46,12 @@ import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; /** * Tests for dynamic index creation. */ @SuppressWarnings({"unchecked", "ThrowableResultOfMethodCallIgnored"}) public abstract class DynamicIndexAbstractSelfTest extends AbstractSchemaSelfTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Attribute to filter node out of cache data nodes. 
*/ protected static final String ATTR_FILTERED = "FILTERED"; @@ -142,8 +136,6 @@ protected IgniteConfiguration commonConfiguration(int idx) throws Exception { cfg.setFailureHandler(new StopNodeFailureHandler()); - cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(IP_FINDER)); - cfg.setMarshaller(new BinaryMarshaller()); DataStorageConfiguration memCfg = new DataStorageConfiguration().setDefaultDataRegionConfiguration( diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2ConnectionLeaksSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2ConnectionLeaksSelfTest.java index 7713004446c9f..14e5ac9d42c35 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2ConnectionLeaksSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2ConnectionLeaksSelfTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test for leaks JdbcConnection on SqlFieldsQuery execute. */ +@RunWith(JUnit4.class) public class H2ConnectionLeaksSelfTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -73,6 +77,7 @@ public class H2ConnectionLeaksSelfTest extends GridCommonAbstractTest { /** * @throws Exception On failed. */ + @Test public void testConnectionLeaks() throws Exception { final IgniteCache cache = grid(1).cache(CACHE_NAME); @@ -98,6 +103,7 @@ public void testConnectionLeaks() throws Exception { /** * @throws Exception On failed. 
*/ + @Test public void testConnectionLeaksOnSqlException() throws Exception { final CountDownLatch latch = new CountDownLatch(THREAD_CNT); final CountDownLatch latch2 = new CountDownLatch(1); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicColumnsAbstractBasicSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicColumnsAbstractBasicSelfTest.java index e74e9cdf11514..846e9f1e1694a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicColumnsAbstractBasicSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicColumnsAbstractBasicSelfTest.java @@ -33,27 +33,31 @@ import org.apache.ignite.internal.processors.query.QueryUtils; import org.apache.ignite.testframework.config.GridTestProperties; import org.h2.jdbc.JdbcSQLException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.testframework.config.GridTestProperties.BINARY_MARSHALLER_USE_SIMPLE_NAME_MAPPER; /** * Test to check dynamic columns related features. */ +@RunWith(JUnit4.class) public abstract class H2DynamicColumnsAbstractBasicSelfTest extends DynamicColumnsAbstractTest { /** * Index of coordinator node. */ - final static int SRV_CRD_IDX = 0; + static final int SRV_CRD_IDX = 0; /** * Index of non coordinator server node. */ - final static int SRV_IDX = 1; + static final int SRV_IDX = 1; /** * Index of client. */ - final static int CLI_IDX = 2; + static final int CLI_IDX = 2; /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -105,6 +109,7 @@ private int checkTableState(String schemaName, String tblName, QueryField... col /** * Test column addition to the end of the columns list. 
*/ + @Test public void testAddColumnSimple() throws SQLException { run("ALTER TABLE Person ADD COLUMN age int"); @@ -118,6 +123,7 @@ public void testAddColumnSimple() throws SQLException { /** * Test column addition to the end of the columns list. */ + @Test public void testAddFewColumnsSimple() throws SQLException { run("ALTER TABLE Person ADD COLUMN (age int, \"city\" varchar)"); @@ -130,6 +136,7 @@ public void testAddFewColumnsSimple() throws SQLException { /** * Test {@code IF EXISTS} handling. */ + @Test public void testIfTableExists() { run("ALTER TABLE if exists City ADD COLUMN population int"); } @@ -137,6 +144,7 @@ public void testIfTableExists() { /** * Test {@code IF NOT EXISTS} handling. */ + @Test public void testIfColumnNotExists() { run("ALTER TABLE Person ADD COLUMN if not exists name varchar"); } @@ -144,6 +152,7 @@ public void testIfColumnNotExists() { /** * Test {@code IF NOT EXISTS} handling. */ + @Test public void testDuplicateColumnName() { assertThrows("ALTER TABLE Person ADD COLUMN name varchar", "Column already exists: NAME"); } @@ -151,12 +160,14 @@ public void testDuplicateColumnName() { /** * Test behavior in case of missing table. */ + @Test public void testMissingTable() { assertThrows("ALTER TABLE City ADD COLUMN name varchar", "Table doesn't exist: CITY"); } /** */ @SuppressWarnings("unchecked") + @Test public void testComplexOperations() { IgniteCache cache = ignite(nodeIndex()) .cache(QueryUtils.createTableCacheName(QueryUtils.DFLT_SCHEMA, "PERSON")); @@ -229,6 +240,7 @@ public void testComplexOperations() { /** * Test that we can add columns dynamically to tables associated with non dynamic caches as well. 
*/ + @Test public void testAddColumnToNonDynamicCache() throws SQLException { run("ALTER TABLE \"idx\".PERSON ADD COLUMN CITY varchar"); @@ -243,6 +255,7 @@ public void testAddColumnToNonDynamicCache() throws SQLException { * Test that we can add columns dynamically to tables associated with non dynamic caches storing user types as well. */ @SuppressWarnings("unchecked") + @Test public void testAddColumnToNonDynamicCacheWithRealValueType() throws SQLException { CacheConfiguration ccfg = defaultCacheConfiguration().setName("City") .setIndexedTypes(Integer.class, City.class); @@ -289,6 +302,7 @@ public void testAddColumnToNonDynamicCacheWithRealValueType() throws SQLExceptio * @throws SQLException If failed. */ @SuppressWarnings("unchecked") + @Test public void testAddColumnUUID() throws SQLException { CacheConfiguration ccfg = defaultCacheConfiguration().setName("GuidTest") .setIndexedTypes(Integer.class, GuidTest.class); @@ -356,6 +370,7 @@ public void testAddColumnUUID() throws SQLException { /** * Test addition of column with not null constraint. */ + @Test public void testAddNotNullColumn() throws SQLException { run("ALTER TABLE Person ADD COLUMN age int NOT NULL"); @@ -369,6 +384,7 @@ public void testAddNotNullColumn() throws SQLException { /** * Test addition of column explicitly defined as nullable. */ + @Test public void testAddNullColumn() throws SQLException { run("ALTER TABLE Person ADD COLUMN age int NULL"); @@ -382,7 +398,8 @@ public void testAddNullColumn() throws SQLException { /** * Test that {@code ADD COLUMN} fails for non dynamic table that has flat value. 
*/ - @SuppressWarnings({"unchecked", "ThrowFromFinallyBlock"}) + @SuppressWarnings({"unchecked"}) + @Test public void testTestAlterTableOnFlatValueNonDynamicTable() { CacheConfiguration c = new CacheConfiguration("ints").setIndexedTypes(Integer.class, Integer.class) @@ -401,7 +418,7 @@ public void testTestAlterTableOnFlatValueNonDynamicTable() { /** * Test that {@code ADD COLUMN} fails for dynamic table that has flat value. */ - @SuppressWarnings({"unchecked", "ThrowFromFinallyBlock"}) + @Test public void testTestAlterTableOnFlatValueDynamicTable() { try { run("CREATE TABLE TEST (id int primary key, x varchar) with \"wrap_value=false\""); @@ -417,6 +434,7 @@ public void testTestAlterTableOnFlatValueDynamicTable() { * * @throws Exception if failed. */ + @Test public void testDropColumn() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, a INT, b CHAR)"); @@ -445,6 +463,7 @@ public void testDropColumn() throws Exception { * * @throws Exception if failed. */ + @Test public void testDroppedColumnMeta() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, a INT, b CHAR)"); @@ -467,6 +486,7 @@ public void testDroppedColumnMeta() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropMultipleColumns() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, a INT, b CHAR, c INT)"); @@ -492,6 +512,7 @@ public void testDropMultipleColumns() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropNonExistingColumn() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, a INT)"); @@ -507,6 +528,7 @@ public void testDropNonExistingColumn() throws Exception { * * @throws Exception if failed. 
*/ + @Test public void testDropColumnNonExistingTable() throws Exception { assertThrowsAnyCause("ALTER TABLE nosuchtable DROP COLUMN a", JdbcSQLException.class, "Table \"NOSUCHTABLE\" not found"); @@ -516,6 +538,7 @@ public void testDropColumnNonExistingTable() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropColumnIfTableExists() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, a INT, b CHAR)"); @@ -535,6 +558,7 @@ public void testDropColumnIfTableExists() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropColumnIfExists() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, a INT)"); @@ -555,6 +579,7 @@ public void testDropColumnIfExists() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropColumnIndexPresent() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, a INT, b INT)"); @@ -581,6 +606,7 @@ public void testDropColumnIndexPresent() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropColumnOnRealClassValuedTable() throws Exception { try { run("CREATE TABLE test (id INT PRIMARY KEY, x VARCHAR) with \"wrap_value=false\""); @@ -597,6 +623,7 @@ public void testDropColumnOnRealClassValuedTable() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropColumnThatIsPartOfKey() throws Exception { try { run("CREATE TABLE test(id INT, a INT, b CHAR, PRIMARY KEY(id, a))"); @@ -613,6 +640,7 @@ public void testDropColumnThatIsPartOfKey() throws Exception { * * @throws Exception if failed. */ + @Test public void testDropColumnThatIsKey() throws Exception { try { run("CREATE TABLE test(id INT PRIMARY KEY, a INT, b CHAR)"); @@ -629,6 +657,7 @@ public void testDropColumnThatIsKey() throws Exception { * * @throws Exception if failed. 
*/ + @Test public void testDropColumnThatIsValue() throws Exception { try { run("CREATE TABLE test(id INT PRIMARY KEY, a INT, b CHAR)"); @@ -648,6 +677,7 @@ public void testDropColumnThatIsValue() throws Exception { * @throws SQLException if failed. */ @SuppressWarnings("unchecked") + @Test public void testDropColumnFromNonDynamicCacheWithRealValueType() throws SQLException { CacheConfiguration ccfg = defaultCacheConfiguration().setName("City") .setIndexedTypes(Integer.class, City.class); @@ -713,6 +743,7 @@ public void testDropColumnFromNonDynamicCacheWithRealValueType() throws SQLExcep * * @throws Exception if failed. */ + @Test public void testDropColumnPriorToIndexedColumn() throws Exception { try { run("CREATE TABLE test(id INT PRIMARY KEY, a CHAR, b INT)"); @@ -760,7 +791,6 @@ private void doTestAlterTableOnFlatValue(String tblName) { * @param sql Statement. * @param msg Expected message. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") protected void assertThrows(final String sql, String msg) { assertThrows(grid(nodeIndex()), sql, msg); } @@ -772,7 +802,6 @@ protected void assertThrows(final String sql, String msg) { * @param cls Expected exception class. * @param msg Expected message. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") protected void assertThrowsAnyCause(final String sql, Class cls, String msg) { assertThrowsAnyCause(grid(nodeIndex()), sql, cls, msg); } @@ -787,7 +816,7 @@ protected List> run(String sql) { } /** City class. */ - private final static class City { + private static final class City { /** City id. 
*/ @QuerySqlField private int id; @@ -844,7 +873,7 @@ public void state(String state) { } /** */ - private final static class GuidTest { + private static final class GuidTest { /** */ @QuerySqlField private int id; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexAbstractSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexAbstractSelfTest.java index 10ef56fcd3b5e..90645f1396b90 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexAbstractSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexAbstractSelfTest.java @@ -34,10 +34,14 @@ import org.apache.ignite.configuration.NearCacheConfiguration; import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.util.typedef.F; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test that checks indexes handling on H2 side. */ +@RunWith(JUnit4.class) public abstract class H2DynamicIndexAbstractSelfTest extends AbstractSchemaSelfTest { /** Client node index. */ private final static int CLIENT = 2; @@ -75,6 +79,7 @@ public abstract class H2DynamicIndexAbstractSelfTest extends AbstractSchemaSelfT /** * Test that after index creation index is used by queries. */ + @Test public void testCreateIndex() throws Exception { IgniteCache cache = cache(); @@ -114,6 +119,7 @@ public void testCreateIndex() throws Exception { /** * Test that creating an index with duplicate name yields an error. */ + @Test public void testCreateIndexWithDuplicateName() { final IgniteCache cache = cache(); @@ -131,6 +137,7 @@ public void testCreateIndexWithDuplicateName() { /** * Test that creating an index with duplicate name does not yield an error with {@code IF NOT EXISTS}. 
*/ + @Test public void testCreateIndexIfNotExists() { final IgniteCache cache = cache(); @@ -144,6 +151,7 @@ public void testCreateIndexIfNotExists() { /** * Test that after index drop there are no attempts to use it, and data state remains intact. */ + @Test public void testDropIndex() { IgniteCache cache = cache(); @@ -179,6 +187,7 @@ public void testDropIndex() { /** * Test that dropping a non-existent index yields an error. */ + @Test public void testDropMissingIndex() { final IgniteCache cache = cache(); @@ -192,6 +201,7 @@ public void testDropMissingIndex() { /** * Test that dropping a non-existent index does not yield an error with {@code IF EXISTS}. */ + @Test public void testDropMissingIndexIfExists() { final IgniteCache cache = cache(); @@ -201,6 +211,7 @@ public void testDropMissingIndexIfExists() { /** * Test that changes in cache affect index, and vice versa. */ + @Test public void testIndexState() { IgniteCache cache = cache(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexingComplexAbstractTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexingComplexAbstractTest.java index 2c21e2319404e..fd6728faa2876 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexingComplexAbstractTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicIndexingComplexAbstractTest.java @@ -31,10 +31,14 @@ import org.apache.ignite.lang.IgniteCallable; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base class for testing work of combinations of DML and DDL operations. 
*/ +@RunWith(JUnit4.class) public abstract class H2DynamicIndexingComplexAbstractTest extends DynamicIndexAbstractSelfTest { /** Cache mode to test with. */ private final CacheMode cacheMode; @@ -49,16 +53,16 @@ public abstract class H2DynamicIndexingComplexAbstractTest extends DynamicIndexA private final int backups; /** Names of companies to use. */ - private final static List COMPANIES = Arrays.asList("ASF", "GNU", "BSD"); + private static final List COMPANIES = Arrays.asList("ASF", "GNU", "BSD"); /** Cities to use. */ - private final static List CITIES = Arrays.asList("St. Petersburg", "Boston", "Berkeley", "London"); + private static final List CITIES = Arrays.asList("St. Petersburg", "Boston", "Berkeley", "London"); /** Index of server node. */ - protected final static int SRV_IDX = 0; + protected static final int SRV_IDX = 0; /** Index of client node. */ - protected final static int CLIENT_IDX = 1; + protected static final int CLIENT_IDX = 1; /** * Constructor. @@ -88,7 +92,7 @@ public abstract class H2DynamicIndexingComplexAbstractTest extends DynamicIndexA } /** Do test. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testOperations() { executeSql("CREATE TABLE person (id int, name varchar, age int, company varchar, city varchar, " + "primary key (id, name, city)) WITH \"template=" + cacheMode.name() + ",atomicity=" + atomicityMode.name() + diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicTableSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicTableSelfTest.java index 6ed914c165937..1d7c0e8aee209 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicTableSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2DynamicTableSelfTest.java @@ -33,8 +33,6 @@ import java.util.Random; import java.util.UUID; import java.util.concurrent.Callable; -import javax.cache.CacheException; -import org.apache.ignite.Ignite; import org.apache.ignite.IgniteException; import org.apache.ignite.Ignition; import org.apache.ignite.binary.BinaryObject; @@ -44,7 +42,6 @@ import org.apache.ignite.cache.QueryEntity; import org.apache.ignite.cache.QueryIndex; import org.apache.ignite.cache.query.SqlFieldsQuery; -import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; @@ -69,11 +66,15 @@ import org.apache.ignite.testframework.GridTestUtils; import org.h2.jdbc.JdbcSQLException; import org.h2.value.DataType; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for CREATE/DROP TABLE. */ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class H2DynamicTableSelfTest extends AbstractSchemaSelfTest { /** Client node index. 
*/ private static final int CLIENT = 2; @@ -134,6 +135,7 @@ public class H2DynamicTableSelfTest extends AbstractSchemaSelfTest { * Test that {@code CREATE TABLE} actually creates new cache, H2 table and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testCreateTable() throws Exception { doTestCreateTable(CACHE_NAME, null, null, null); } @@ -142,6 +144,7 @@ public void testCreateTable() throws Exception { * Test that {@code CREATE TABLE} actually creates new cache, H2 table and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testCreateTableWithCacheGroup() throws Exception { doTestCreateTable(CACHE_NAME, "MyGroup", null, null); } @@ -150,6 +153,7 @@ public void testCreateTableWithCacheGroup() throws Exception { * Test that {@code CREATE TABLE} actually creates new cache, H2 table and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testCreateTableWithCacheGroupAndLegacyParamName() throws Exception { doTestCreateTable(CACHE_NAME, "MyGroup", null, null, true); } @@ -159,6 +163,7 @@ public void testCreateTableWithCacheGroupAndLegacyParamName() throws Exception { * H2 table and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testCreateTableWithWriteSyncMode() throws Exception { doTestCreateTable(CACHE_NAME + "_async", null, null, CacheWriteSynchronizationMode.FULL_ASYNC); } @@ -168,6 +173,7 @@ public void testCreateTableWithWriteSyncMode() throws Exception { * H2 table and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testCreateTableReplicated() throws Exception { doTestCreateTable("REPLICATED", null, CacheMode.REPLICATED, CacheWriteSynchronizationMode.FULL_SYNC); } @@ -177,6 +183,7 @@ public void testCreateTableReplicated() throws Exception { * H2 table and type descriptor on all nodes. * @throws Exception if failed. 
*/ + @Test public void testCreateTablePartitioned() throws Exception { doTestCreateTable("PARTITIONED", null, CacheMode.PARTITIONED, CacheWriteSynchronizationMode.FULL_SYNC); } @@ -186,6 +193,7 @@ public void testCreateTablePartitioned() throws Exception { * H2 table and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testCreateTableReplicatedCaseInsensitive() throws Exception { doTestCreateTable("replicated", null, CacheMode.REPLICATED, CacheWriteSynchronizationMode.FULL_SYNC); } @@ -195,6 +203,7 @@ public void testCreateTableReplicatedCaseInsensitive() throws Exception { * H2 table and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testCreateTablePartitionedCaseInsensitive() throws Exception { doTestCreateTable("partitioned", null, CacheMode.PARTITIONED, CacheWriteSynchronizationMode.FULL_SYNC); } @@ -204,6 +213,7 @@ public void testCreateTablePartitionedCaseInsensitive() throws Exception { * H2 table and type descriptor on all nodes, when no cache template name is given. * @throws Exception if failed. */ + @Test public void testCreateTableNoTemplate() throws Exception { doTestCreateTable(null, null, CacheMode.PARTITIONED, CacheWriteSynchronizationMode.FULL_SYNC); } @@ -211,6 +221,7 @@ public void testCreateTableNoTemplate() throws Exception { /** * Test behavior depending on table name case sensitivity. */ + @Test public void testTableNameCaseSensitivity() { doTestTableNameCaseSensitivity("Person", false); @@ -221,6 +232,7 @@ public void testTableNameCaseSensitivity() { * Test that {@code CREATE TABLE} with given write sync mode actually creates new cache as needed. * @throws Exception if failed. 
*/ + @Test public void testFullSyncWriteMode() throws Exception { doTestCreateTable(null, null, null, CacheWriteSynchronizationMode.FULL_SYNC, "write_synchronization_mode=full_sync"); @@ -230,6 +242,7 @@ public void testFullSyncWriteMode() throws Exception { * Test that {@code CREATE TABLE} with given write sync mode actually creates new cache as needed. * @throws Exception if failed. */ + @Test public void testPrimarySyncWriteMode() throws Exception { doTestCreateTable(null, null, null, CacheWriteSynchronizationMode.PRIMARY_SYNC, "write_synchronization_mode=primary_sync"); @@ -239,6 +252,7 @@ public void testPrimarySyncWriteMode() throws Exception { * Test that {@code CREATE TABLE} with given write sync mode actually creates new cache as needed. * @throws Exception if failed. */ + @Test public void testFullAsyncWriteMode() throws Exception { doTestCreateTable(null, null, null, CacheWriteSynchronizationMode.FULL_ASYNC, "write_synchronization_mode=full_async"); @@ -247,6 +261,7 @@ public void testFullAsyncWriteMode() throws Exception { /** * Test behavior only in case of cache name override. */ + @Test public void testCustomCacheName() { doTestCustomNames("cname", null, null); } @@ -254,6 +269,7 @@ public void testCustomCacheName() { /** * Test behavior only in case of key type name override. */ + @Test public void testCustomKeyTypeName() { doTestCustomNames(null, "keytype", null); } @@ -261,6 +277,7 @@ public void testCustomKeyTypeName() { /** * Test behavior only in case of value type name override. */ + @Test public void testCustomValueTypeName() { doTestCustomNames(null, null, "valtype"); } @@ -268,6 +285,7 @@ public void testCustomValueTypeName() { /** * Test behavior only in case of cache and key type name override. */ + @Test public void testCustomCacheAndKeyTypeName() { doTestCustomNames("cname", "keytype", null); } @@ -275,6 +293,7 @@ public void testCustomCacheAndKeyTypeName() { /** * Test behavior only in case of cache and value type name override. 
*/ + @Test public void testCustomCacheAndValueTypeName() { doTestCustomNames("cname", null, "valtype"); } @@ -282,6 +301,7 @@ public void testCustomCacheAndValueTypeName() { /** * Test behavior only in case of key and value type name override. */ + @Test public void testCustomKeyAndValueTypeName() { doTestCustomNames(null, "keytype", "valtype"); } @@ -289,6 +309,7 @@ public void testCustomKeyAndValueTypeName() { /** * Test behavior only in case of cache, key, and value type name override. */ + @Test public void testCustomCacheAndKeyAndValueTypeName() { doTestCustomNames("cname", "keytype", "valtype"); } @@ -298,6 +319,7 @@ public void testCustomCacheAndKeyAndValueTypeName() { * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testDuplicateCustomCacheName() throws Exception { client().getOrCreateCache("new"); @@ -317,6 +339,7 @@ public void testDuplicateCustomCacheName() throws Exception { * Test that {@code CREATE TABLE} with given write sync mode actually creates new cache as needed. * @throws Exception if failed. */ + @Test public void testPlainKey() throws Exception { doTestCreateTable(null, null, null, CacheWriteSynchronizationMode.FULL_SYNC); } @@ -507,6 +530,7 @@ private void doTestCreateTable(String tplCacheName, String cacheGrp, CacheMode c * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testBackups() throws Exception { String cacheName = "BackupTestCache"; @@ -628,6 +652,7 @@ private void doTestCreateTable(String tplCacheName, String cacheGrp, CacheMode c /** * Test that attempting to specify negative number of backups yields exception. */ + @Test public void testNegativeBackups() { assertCreateTableWithParamsThrows("bAckUPs = -5 ", "\"BACKUPS\" cannot be negative: -5"); } @@ -635,6 +660,7 @@ public void testNegativeBackups() { /** * Test that attempting to omit mandatory value of BACKUPS parameter yields an error. 
*/ + @Test public void testEmptyBackups() { assertCreateTableWithParamsThrows(" bAckUPs = ", "Parameter value cannot be empty: BACKUPS"); } @@ -642,6 +668,7 @@ public void testEmptyBackups() { /** * Test that attempting to omit mandatory value of ATOMICITY parameter yields an error. */ + @Test public void testEmptyAtomicity() { assertCreateTableWithParamsThrows("AtomicitY= ", "Parameter value cannot be empty: ATOMICITY"); } @@ -649,6 +676,7 @@ public void testEmptyAtomicity() { /** * Test that providing an invalid value of ATOMICITY parameter yields an error. */ + @Test public void testInvalidAtomicity() { assertCreateTableWithParamsThrows("atomicity=InvalidValue", "Invalid value of \"ATOMICITY\" parameter (should be either TRANSACTIONAL or ATOMIC): InvalidValue"); @@ -657,6 +685,7 @@ public void testInvalidAtomicity() { /** * Test that attempting to omit mandatory value of CACHEGROUP parameter yields an error. */ + @Test public void testEmptyCacheGroup() { assertCreateTableWithParamsThrows("cache_group=", "Parameter value cannot be empty: CACHE_GROUP"); } @@ -664,6 +693,7 @@ public void testEmptyCacheGroup() { /** * Test that attempting to omit mandatory value of WRITE_SYNCHRONIZATION_MODE parameter yields an error. */ + @Test public void testEmptyWriteSyncMode() { assertCreateTableWithParamsThrows("write_synchronization_mode=", "Parameter value cannot be empty: WRITE_SYNCHRONIZATION_MODE"); @@ -672,6 +702,7 @@ public void testEmptyWriteSyncMode() { /** * Test that attempting to provide invalid value of WRITE_SYNCHRONIZATION_MODE parameter yields an error. */ + @Test public void testInvalidWriteSyncMode() { assertCreateTableWithParamsThrows("write_synchronization_mode=invalid", "Invalid value of \"WRITE_SYNCHRONIZATION_MODE\" parameter " + @@ -683,6 +714,7 @@ public void testInvalidWriteSyncMode() { * contains {@code IF NOT EXISTS} clause. * @throws Exception if failed. 
*/ + @Test public void testCreateTableIfNotExists() throws Exception { execute("CREATE TABLE \"Person\" (\"id\" int, \"city\" varchar," + " \"name\" varchar, \"surname\" varchar, \"age\" int, PRIMARY KEY (\"id\", \"city\")) WITH " + @@ -698,6 +730,7 @@ public void testCreateTableIfNotExists() throws Exception { * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCreateExistingTable() throws Exception { execute("CREATE TABLE \"Person\" (\"id\" int, \"city\" varchar," + " \"name\" varchar, \"surname\" varchar, \"age\" int, PRIMARY KEY (\"id\", \"city\")) WITH " + @@ -719,6 +752,7 @@ public void testCreateExistingTable() throws Exception { * yields an error. * @throws Exception if failed. */ + @Test public void testCreateTableWithWrongColumnNameAsKey() throws Exception { GridTestUtils.assertThrows(null, new Callable() { @Override public Object call() throws Exception { @@ -735,6 +769,7 @@ public void testCreateTableWithWrongColumnNameAsKey() throws Exception { * Test that {@code DROP TABLE} actually removes specified cache and type descriptor on all nodes. * @throws Exception if failed. */ + @Test public void testDropTable() throws Exception { execute("CREATE TABLE IF NOT EXISTS \"Person\" (\"id\" int, \"city\" varchar," + " \"name\" varchar, \"surname\" varchar, \"age\" int, PRIMARY KEY (\"id\", \"city\")) WITH " + @@ -758,6 +793,7 @@ public void testDropTable() throws Exception { * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCacheSelfDrop() throws Exception { execute("CREATE TABLE IF NOT EXISTS \"Person\" (\"id\" int, \"city\" varchar," + " \"name\" varchar, \"surname\" varchar, \"age\" int, PRIMARY KEY (\"id\", \"city\")) WITH " + @@ -782,6 +818,7 @@ public void testCacheSelfDrop() throws Exception { * * @throws Exception if failed. 
*/ + @Test public void testDropMissingTableIfExists() throws Exception { execute("DROP TABLE IF EXISTS \"City\""); } @@ -791,6 +828,7 @@ public void testDropMissingTableIfExists() throws Exception { * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testDropMissingTable() throws Exception { GridTestUtils.assertThrows(null, new Callable() { @Override public Object call() throws Exception { @@ -806,6 +844,7 @@ public void testDropMissingTable() throws Exception { * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testDropNonDynamicTable() throws Exception { GridTestUtils.assertThrows(null, new Callable() { @Override public Object call() throws Exception { @@ -818,23 +857,17 @@ public void testDropNonDynamicTable() throws Exception { } /** - * Test that attempting to destroy via cache API a cache created via SQL yields an error. + * Test that attempting to destroy via cache API a cache created via SQL finishes successfully. * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testDestroyDynamicSqlCache() throws Exception { execute("CREATE TABLE \"Person\" (\"id\" int, \"city\" varchar," + " \"name\" varchar, \"surname\" varchar, \"age\" int, PRIMARY KEY (\"id\", \"city\")) WITH " + "\"template=cache\""); - GridTestUtils.assertThrows(null, new Callable() { - @Override public Object call() throws Exception { - client().destroyCache(cacheName("Person")); - - return null; - } - }, CacheException.class, - "Only cache created with cache API may be removed with direct call to destroyCache"); + client().destroyCache(cacheName("Person")); } /** @@ -843,6 +876,7 @@ public void testDestroyDynamicSqlCache() throws Exception { * @throws Exception if failed. 
*/ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testSqlFlagCompatibilityCheck() throws Exception { execute("CREATE TABLE \"Person\" (\"id\" int, \"city\" varchar, \"name\" varchar, \"surname\" varchar, " + "\"age\" int, PRIMARY KEY (\"id\", \"city\")) WITH \"template=cache\""); @@ -864,6 +898,7 @@ public void testSqlFlagCompatibilityCheck() throws Exception { * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testIndexNameConflictCheckDiscovery() throws Exception { execute(grid(0), "CREATE TABLE \"Person\" (id int primary key, name varchar)"); @@ -881,7 +916,7 @@ public void testIndexNameConflictCheckDiscovery() throws Exception { e.setValueType("City"); queryProcessor(client()).dynamicTableCreate("PUBLIC", e, CacheMode.PARTITIONED.name(), null, null, null, - null, CacheAtomicityMode.ATOMIC, null, 10, false); + null, CacheAtomicityMode.ATOMIC, null, 10, false, false); return null; } @@ -893,6 +928,7 @@ public void testIndexNameConflictCheckDiscovery() throws Exception { * @throws Exception if failed. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testTableNameConflictCheckSql() throws Exception { execute(grid(0), "CREATE TABLE \"Person\" (id int primary key, name varchar)"); @@ -908,6 +944,7 @@ public void testTableNameConflictCheckSql() throws Exception { /** * @throws Exception if failed. */ + @Test public void testAffinityKey() throws Exception { execute("CREATE TABLE \"City\" (\"name\" varchar primary key, \"code\" int) WITH wrap_key,wrap_value," + "\"affinity_key='name'\""); @@ -969,6 +1006,7 @@ public void testAffinityKey() throws Exception { * @throws Exception If failed. */ @SuppressWarnings({"ThrowableNotThrown", "unchecked"}) + @Test public void testDataRegion() throws Exception { // Empty region name. 
GridTestUtils.assertThrows(log, new Callable() { @@ -994,6 +1032,7 @@ public void testDataRegion() throws Exception { * Test various cases of affinity key column specification. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testAffinityKeyCaseSensitivity() { execute("CREATE TABLE \"A\" (\"name\" varchar primary key, \"code\" int) WITH wrap_key,wrap_value," + "\"affinity_key='name'\""); @@ -1056,6 +1095,7 @@ public void testAffinityKeyCaseSensitivity() { * Tests that attempting to specify an affinity key that actually is a value column yields an error. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testAffinityKeyNotKeyColumn() { // Error arises because user has specified case sensitive affinity column name GridTestUtils.assertThrows(null, new Callable() { @@ -1071,6 +1111,7 @@ public void testAffinityKeyNotKeyColumn() { * Tests that attempting to specify an affinity key that actually is a value column yields an error. */ @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testAffinityKeyNotFound() { // Error arises because user has specified case sensitive affinity column name GridTestUtils.assertThrows(null, new Callable() { @@ -1085,6 +1126,7 @@ public void testAffinityKeyNotFound() { /** * Tests behavior on sequential create and drop of a table and its index. */ + @Test public void testTableAndIndexRecreate() { execute("drop table if exists \"PUBLIC\".t"); @@ -1128,6 +1170,7 @@ public void testTableAndIndexRecreate() { /** * @throws Exception If test failed. */ + @Test public void testQueryLocalWithRecreate() throws Exception { execute("CREATE TABLE A(id int primary key, name varchar, surname varchar) WITH \"cache_name=cache," + "template=replicated\""); @@ -1161,6 +1204,7 @@ public void testQueryLocalWithRecreate() throws Exception { /** * Test that it's impossible to create tables with same name regardless of key/value wrapping settings. 
*/ + @Test public void testWrappedAndUnwrappedKeyTablesInteroperability() { { execute("create table a (id int primary key, x varchar)"); @@ -1226,6 +1270,7 @@ public void testWrappedAndUnwrappedKeyTablesInteroperability() { /** * Test that it's possible to create tables with matching key and/or value primitive types. */ + @Test public void testDynamicTablesInteroperability() { execute("create table a (id int primary key, x varchar) with \"wrap_value=false\""); @@ -1247,6 +1292,7 @@ public void testDynamicTablesInteroperability() { /** * Test that when key or value has more than one column, wrap=false is forbidden. */ + @Test public void testWrappingAlwaysOnWithComplexObjects() { assertDdlCommandThrows("create table a (id int, x varchar, c long, primary key(id, c)) with \"wrap_key=false\"", "WRAP_KEY cannot be false when composite primary key exists."); @@ -1259,6 +1305,7 @@ public void testWrappingAlwaysOnWithComplexObjects() { * Test behavior when neither key nor value should be wrapped. * @throws SQLException if failed. */ + @Test public void testNoWrap() throws SQLException { doTestKeyValueWrap(false, false, false); } @@ -1267,6 +1314,7 @@ public void testNoWrap() throws SQLException { * Test behavior when only key is wrapped. * @throws SQLException if failed. */ + @Test public void testKeyWrap() throws SQLException { doTestKeyValueWrap(true, false, false); } @@ -1275,6 +1323,7 @@ public void testKeyWrap() throws SQLException { * Test behavior when only value is wrapped. * @throws SQLException if failed. */ + @Test public void testValueWrap() throws SQLException { doTestKeyValueWrap(false, true, false); } @@ -1283,6 +1332,7 @@ public void testValueWrap() throws SQLException { * Test behavior when both key and value is wrapped. * @throws SQLException if failed. 
*/ + @Test public void testKeyAndValueWrap() throws SQLException { doTestKeyValueWrap(true, true, false); } @@ -1292,6 +1342,7 @@ public void testKeyAndValueWrap() throws SQLException { * Key and value are UUID. * @throws SQLException if failed. */ + @Test public void testUuidNoWrap() throws SQLException { doTestKeyValueWrap(false, false, true); } @@ -1301,6 +1352,7 @@ public void testUuidNoWrap() throws SQLException { * Key and value are UUID. * @throws SQLException if failed. */ + @Test public void testUuidKeyWrap() throws SQLException { doTestKeyValueWrap(true, false, true); } @@ -1310,6 +1362,7 @@ public void testUuidKeyWrap() throws SQLException { * Key and value are UUID. * @throws SQLException if failed. */ + @Test public void testUuidValueWrap() throws SQLException { doTestKeyValueWrap(false, true, true); } @@ -1319,6 +1372,7 @@ public void testUuidValueWrap() throws SQLException { * Key and value are UUID. * @throws SQLException if failed. */ + @Test public void testUuidKeyAndValueWrap() throws SQLException { doTestKeyValueWrap(true, true, true); } @@ -1512,6 +1566,7 @@ private void assertDdlCommandThrows(final String cmd, String expErrMsg) { * * @throws Exception if failed. */ + @Test public void testGetTablesForCache() throws Exception { try { execute("create table t1(id int primary key, name varchar)"); @@ -1608,7 +1663,7 @@ private IgniteConfiguration clientConfiguration(int idx) throws Exception { * @return Configuration. * @throws Exception If failed. 
*/ - protected IgniteConfiguration commonConfiguration(int idx) throws Exception { + @Override protected IgniteConfiguration commonConfiguration(int idx) throws Exception { IgniteConfiguration cfg = super.commonConfiguration(idx); DataRegionConfiguration dataRegionCfg = new DataRegionConfiguration().setName(DATA_REGION_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCachePageEvictionTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCachePageEvictionTest.java index ba5edc9f1125b..85769ef52abbb 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCachePageEvictionTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCachePageEvictionTest.java @@ -36,10 +36,14 @@ import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for H2RowCacheRegistry with page eviction. */ +@RunWith(JUnit4.class) public class H2RowCachePageEvictionTest extends GridCommonAbstractTest { /** Entries count. */ private static final int ENTRIES = 10_000; @@ -147,6 +151,7 @@ private void checkRowCacheOnPageEviction() { /** * @throws Exception On error. */ + @Test public void testEvictPagesWithDiskStorageSingleCacheInGroup() throws Exception { persistenceEnabled = true; @@ -160,6 +165,7 @@ public void testEvictPagesWithDiskStorageSingleCacheInGroup() throws Exception { /** * @throws Exception On error. */ + @Test public void testEvictPagesWithDiskStorageWithOtherCacheInGroup() throws Exception { persistenceEnabled = true; @@ -175,6 +181,7 @@ public void testEvictPagesWithDiskStorageWithOtherCacheInGroup() throws Exceptio /** * @throws Exception On error. 
*/ + @Test public void testEvictPagesWithoutDiskStorageSingleCacheInGroup() throws Exception { persistenceEnabled = false; @@ -186,6 +193,7 @@ public void testEvictPagesWithoutDiskStorageSingleCacheInGroup() throws Exceptio /** * @throws Exception On error. */ + @Test public void testEvictPagesWithoutDiskStorageWithOtherCacheInGroup() throws Exception { persistenceEnabled = false; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCacheSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCacheSelfTest.java index 5db8231394bc5..6a177f022f0ca 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCacheSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/H2RowCacheSelfTest.java @@ -39,11 +39,15 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jsr166.ConcurrentLinkedHashMap; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests H2RowCacheRegistry. */ @SuppressWarnings({"unchecked", "ConstantConditions"}) +@RunWith(JUnit4.class) public class H2RowCacheSelfTest extends GridCommonAbstractTest { /** Keys count. */ private static final int ENTRIES = 1_000; @@ -79,6 +83,7 @@ private CacheConfiguration cacheConfiguration(String name, boolean enableOnheapC /** */ + @Test public void testDestroyCacheCreation() { final String cacheName0 = "cache0"; final String cacheName1 = "cache1"; @@ -99,6 +104,7 @@ public void testDestroyCacheCreation() { /** * @throws IgniteCheckedException If failed. */ + @Test public void testDestroyCacheSingleCacheInGroup() throws IgniteCheckedException { checkDestroyCache(); } @@ -106,6 +112,7 @@ public void testDestroyCacheSingleCacheInGroup() throws IgniteCheckedException { /** * @throws IgniteCheckedException If failed. 
*/ + @Test public void testDestroyCacheWithOtherCacheInGroup() throws IgniteCheckedException { grid().getOrCreateCache(cacheConfiguration("cacheWithoutOnheapCache", false)); @@ -115,6 +122,7 @@ public void testDestroyCacheWithOtherCacheInGroup() throws IgniteCheckedExceptio /** * @throws Exception If failed. */ + @Test public void testDeleteEntryCacheSingleCacheInGroup() throws Exception { checkDeleteEntry(); } @@ -122,6 +130,7 @@ public void testDeleteEntryCacheSingleCacheInGroup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeleteEntryWithOtherCacheInGroup() throws Exception { grid().getOrCreateCache(cacheConfiguration("cacheWithoutOnheapCache", false)); @@ -131,6 +140,7 @@ public void testDeleteEntryWithOtherCacheInGroup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateEntryCacheSingleCacheInGroup() throws Exception { checkDeleteEntry(); } @@ -138,6 +148,7 @@ public void testUpdateEntryCacheSingleCacheInGroup() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateEntryWithOtherCacheInGroup() throws Exception { grid().getOrCreateCache(cacheConfiguration("cacheWithoutOnheapCache", false)); @@ -147,6 +158,7 @@ public void testUpdateEntryWithOtherCacheInGroup() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFixedSize() throws Exception { int maxSize = 100; String cacheName = "cacheWithLimitedSize"; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/IgniteDecimalSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/IgniteDecimalSelfTest.java index 96926ea5efbaf..aaeb030ecb7b1 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/IgniteDecimalSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/IgniteDecimalSelfTest.java @@ -33,13 +33,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; -import static java.math.RoundingMode.HALF_UP; import static java.util.Arrays.asList; /** * Test to check decimal columns. 
*/ +@RunWith(JUnit4.class) public class IgniteDecimalSelfTest extends AbstractSchemaSelfTest { /** */ private static final int PRECISION = 9; @@ -60,13 +63,13 @@ public class IgniteDecimalSelfTest extends AbstractSchemaSelfTest { private static final MathContext MATH_CTX = new MathContext(PRECISION); /** */ - private static final BigDecimal VAL_1 = new BigDecimal("123456789", MATH_CTX).setScale(SCALE, HALF_UP); + private static final BigDecimal VAL_1 = BigDecimal.valueOf(123456789); /** */ - private static final BigDecimal VAL_2 = new BigDecimal("12345678.12345678", MATH_CTX).setScale(SCALE, HALF_UP); + private static final BigDecimal VAL_2 = BigDecimal.valueOf(1.23456789); /** */ - private static final BigDecimal VAL_3 = new BigDecimal(".123456789", MATH_CTX).setScale(SCALE, HALF_UP); + private static final BigDecimal VAL_3 = BigDecimal.valueOf(.12345678); /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -123,6 +126,7 @@ public class IgniteDecimalSelfTest extends AbstractSchemaSelfTest { /** * @throws Exception If failed. */ + @Test public void testConfiguredFromDdl() throws Exception { checkPrecisionAndScale(DEC_TAB_NAME, VALUE, PRECISION, SCALE); } @@ -130,6 +134,7 @@ public void testConfiguredFromDdl() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConfiguredFromQueryEntity() throws Exception { checkPrecisionAndScale(SALARY_TAB_NAME, "amount", PRECISION, SCALE); } @@ -137,6 +142,7 @@ public void testConfiguredFromQueryEntity() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConfiguredFromQueryEntityInDynamicallyCreatedCache() throws Exception { IgniteEx grid = grid(0); @@ -152,6 +158,7 @@ public void testConfiguredFromQueryEntityInDynamicallyCreatedCache() throws Exce /** * @throws Exception If failed. 
*/ + @Test public void testConfiguredFromAnnotations() throws Exception { IgniteEx grid = grid(0); @@ -165,6 +172,7 @@ public void testConfiguredFromAnnotations() throws Exception { } /** */ + @Test public void testSelectDecimal() throws Exception { IgniteEx grid = grid(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/LongIndexNameTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/LongIndexNameTest.java index 3c2b7133bb217..f7063b8da8394 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/LongIndexNameTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/LongIndexNameTest.java @@ -33,10 +33,14 @@ import java.util.ArrayList; import java.util.LinkedHashMap; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class LongIndexNameTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -64,6 +68,7 @@ public class LongIndexNameTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testLongIndexNames() throws Exception { try { Ignite ignite = startGrid(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/MvccEmptyTransactionSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/MvccEmptyTransactionSelfTest.java new file mode 100644 index 0000000000000..0db97fcb20313 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/MvccEmptyTransactionSelfTest.java @@ -0,0 +1,108 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.index; + +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.Ignition; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.apache.ignite.transactions.Transaction; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.Statement; +import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Test for an empty transaction which is then enlisted with a real value. + */ +@RunWith(JUnit4.class) +public class MvccEmptyTransactionSelfTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + } + + /** + * @throws Exception If failed.
+ */ + @SuppressWarnings("unchecked") + @Test + public void testEmptyTransaction() throws Exception { + Ignition.start(config("srv", false)); + + Ignite cli = Ignition.start(config("cli", true)); + + try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10801")) { + try (Statement stmt = conn.createStatement()) { + stmt.execute("CREATE TABLE person (id BIGINT PRIMARY KEY, name VARCHAR) " + + "WITH \"atomicity=TRANSACTIONAL_SNAPSHOT, cache_name=PERSON_CACHE\""); + } + } + + IgniteCache cache = cli.cache("PERSON_CACHE"); + + try (Transaction tx = cli.transactions().txStart()) { + // This will cause empty near TX to be created and then rolled back. + cache.query(new SqlFieldsQuery("UPDATE person SET name=?").setArgs("Petr")).getAll(); + + // One more time. + cache.query(new SqlFieldsQuery("UPDATE person SET name=?").setArgs("Petr")).getAll(); + + // Normal transaction is created, and several updates are performed. + cache.query(new SqlFieldsQuery("INSERT INTO person VALUES (?, ?)").setArgs(1, "Ivan")).getAll(); + cache.query(new SqlFieldsQuery("UPDATE person SET name=?").setArgs("Sergey")).getAll(); + + // Another update with empty response. + cache.query(new SqlFieldsQuery("UPDATE person SET name=? WHERE name=?").setArgs("Vasiliy", "Ivan")).getAll(); + + // One more normal update. + cache.query(new SqlFieldsQuery("UPDATE person SET name=?").setArgs("Vsevolod")).getAll(); + + tx.commit(); + } + + List> res = cache.query(new SqlFieldsQuery("SELECT name FROM person")).getAll(); + + assert res.size() == 1; + assert res.get(0).size() == 1; + + assertEquals("Vsevolod", (String)res.get(0).get(0)); + } + + /** + * Create config. + * + * @param name Name. + * @param client Client flag. + * @return Config. 
+ */ + private static IgniteConfiguration config(String name, boolean client) { + IgniteConfiguration cfg = new IgniteConfiguration(); + + cfg.setIgniteInstanceName(name); + cfg.setClientMode(client); + + return cfg; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/OptimizedMarshallerIndexNameTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/OptimizedMarshallerIndexNameTest.java index 57a55f391b30b..fb5e2d0a019cf 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/OptimizedMarshallerIndexNameTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/OptimizedMarshallerIndexNameTest.java @@ -40,6 +40,9 @@ import java.io.ObjectOutput; import java.util.List; import java.util.UUID; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -50,6 +53,7 @@ * See IGNITE-6915 for details. */ +@RunWith(JUnit4.class) public class OptimizedMarshallerIndexNameTest extends GridCommonAbstractTest { /** Test name 1 */ private static final String TEST_NAME1 = "Name1"; @@ -111,6 +115,7 @@ protected static CacheConfiguration cacheConfiguration(String na * Verifies that BPlusTree are not erroneously shared between tables in the same cache * due to IGNITE-6915 bug. 
*/ + @Test public void testOptimizedMarshallerIndex() { // Put objects of different types into the same cache diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/QueryEntityValidationSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/QueryEntityValidationSelfTest.java index c8f670615d756..ace76354f1189 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/QueryEntityValidationSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/QueryEntityValidationSelfTest.java @@ -29,11 +29,14 @@ import java.util.LinkedHashMap; import java.util.List; import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for query entity validation. */ -@SuppressWarnings("ThrowableResultOfMethodCallIgnored") +@RunWith(JUnit4.class) public class QueryEntityValidationSelfTest extends GridCommonAbstractTest { /** Cache name. */ private static final String CACHE_NAME = "cache"; @@ -48,6 +51,7 @@ public class QueryEntityValidationSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testValueTypeNull() throws Exception { final CacheConfiguration ccfg = new CacheConfiguration().setName(CACHE_NAME); @@ -71,6 +75,7 @@ public void testValueTypeNull() throws Exception { * * @throws Exception If failed. */ + @Test public void testIndexTypeNull() throws Exception { final CacheConfiguration ccfg = new CacheConfiguration().setName(CACHE_NAME); @@ -113,6 +118,7 @@ public void testIndexTypeNull() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testIndexNameDuplicate() throws Exception { final CacheConfiguration ccfg = new CacheConfiguration().setName(CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SchemaExchangeSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SchemaExchangeSelfTest.java index c7709f2893955..9280d6b4af584 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SchemaExchangeSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SchemaExchangeSelfTest.java @@ -32,8 +32,10 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import java.util.Collections; import java.util.Map; @@ -44,13 +46,11 @@ /** * Tests for schema exchange between nodes. */ +@RunWith(JUnit4.class) public class SchemaExchangeSelfTest extends AbstractSchemaSelfTest { /** Node on which filter should be applied (if any). */ private static String filterNodeName; - /** */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { stopAllGrids(); @@ -65,6 +65,7 @@ public class SchemaExchangeSelfTest extends AbstractSchemaSelfTest { * * @throws Exception If failed. */ + @Test public void testEmptyStatic() throws Exception { checkEmpty(false); } @@ -74,6 +75,7 @@ public void testEmptyStatic() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testEmptyDynamic() throws Exception { checkEmpty(true); } @@ -114,6 +116,7 @@ private void checkEmpty(boolean dynamic) throws Exception { * * @throws Exception If failed. */ + @Test public void testNonEmptyStatic() throws Exception { checkNonEmpty(false); } @@ -123,6 +126,7 @@ public void testNonEmptyStatic() throws Exception { * * @throws Exception If failed. */ + @Test public void testNonEmptyDynamic() throws Exception { checkNonEmpty(true); } @@ -163,6 +167,7 @@ private void checkNonEmpty(boolean dynamic) throws Exception { * * @throws Exception If failed. */ + @Test public void testDynamicRestarts() throws Exception { IgniteEx node1 = start(1, KeyClass.class, ValueClass.class); IgniteEx node2 = startNoCache(2); @@ -245,6 +250,7 @@ public void testDynamicRestarts() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientJoinStatic() throws Exception { checkClientJoin(false); } @@ -254,6 +260,7 @@ public void testClientJoinStatic() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientJoinDynamic() throws Exception { checkClientJoin(true); } @@ -303,6 +310,7 @@ private void checkClientJoin(boolean dynamic) throws Exception { * * @throws Exception If failed. */ + @Test public void testClientCacheStartStatic() throws Exception { checkClientCacheStart(false); } @@ -312,6 +320,7 @@ public void testClientCacheStartStatic() throws Exception { * * @throws Exception If failed. */ + @Test public void testClientCacheStartDynamic() throws Exception { checkClientCacheStart(true); } @@ -386,6 +395,7 @@ private void checkClientCacheStart(boolean dynamic) throws Exception { * * @throws Exception If failed. */ + @Test public void testNodeFilter() throws Exception { filterNodeName = getTestIgniteInstanceName(1); @@ -411,6 +421,7 @@ public void testNodeFilter() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testServerRestartWithNewTypes() throws Exception { IgniteEx node1 = start(1, KeyClass.class, ValueClass.class); assertTypes(node1, ValueClass.class); @@ -457,6 +468,7 @@ public void testServerRestartWithNewTypes() throws Exception { * @throws Exception If failed. */ @SuppressWarnings("unchecked") + @Test public void testClientReconnect() throws Exception { final IgniteEx node1 = start(1, KeyClass.class, ValueClass.class); assertTypes(node1, ValueClass.class); @@ -545,7 +557,6 @@ private IgniteEx start(int idx, boolean client, Class... clss) throws Exception cfg.setClientMode(client); cfg.setLocalHost("127.0.0.1"); - cfg.setDiscoverySpi(new TestTcpDiscoverySpi().setIpFinder(IP_FINDER)); if (filterNodeName != null && F.eq(name, filterNodeName)) cfg.setUserAttributes(Collections.singletonMap("AFF_NODE", true)); @@ -595,8 +606,6 @@ private IgniteEx startNoCache(int idx, boolean client) throws Exception { cfg.setClientMode(client); cfg.setLocalHost("127.0.0.1"); - cfg.setDiscoverySpi(new TestTcpDiscoverySpi().setIpFinder(IP_FINDER)); - return (IgniteEx)Ignition.start(cfg); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsComandsWithMvccDisabledSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionCommandsWithMvccDisabledSelfTest.java similarity index 62% rename from modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsComandsWithMvccDisabledSelfTest.java rename to modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionCommandsWithMvccDisabledSelfTest.java index d2931ba826085..95fd2d14fa137 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsComandsWithMvccDisabledSelfTest.java +++ 
b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionCommandsWithMvccDisabledSelfTest.java @@ -18,14 +18,17 @@ package org.apache.ignite.internal.processors.cache.index; import java.util.concurrent.Callable; -import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ -public class SqlTransactionsComandsWithMvccDisabledSelfTest extends AbstractSchemaSelfTest { +@RunWith(JUnit4.class) +public class SqlTransactionCommandsWithMvccDisabledSelfTest extends AbstractSchemaSelfTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { super.beforeTestsStarted(); @@ -47,37 +50,32 @@ public class SqlTransactionsComandsWithMvccDisabledSelfTest extends AbstractSche /** * @throws Exception if failed. */ - public void testBeginWithMvccDisabledThrows() throws Exception { - checkMvccDisabledBehavior("BEGIN"); - } + @Test + public void testBeginWithMvccDisabled() throws Exception { + GridTestUtils.assertThrows(null, new Callable<Object>() { + @Override public Object call() throws Exception { + execute(grid(0), "BEGIN"); - /** - * @throws Exception if failed. - */ - public void testCommitWithMvccDisabledThrows() throws Exception { - checkMvccDisabledBehavior("COMMIT"); + return null; + } + }, IgniteSQLException.class, "MVCC must be enabled in order to start transaction."); } /** * @throws Exception if failed. */ - public void testRollbackWithMvccDisabledThrows() throws Exception { - checkMvccDisabledBehavior("rollback"); + @Test + public void testCommitWithMvccDisabled() throws Exception { + execute(grid(0), "COMMIT"); + // assert no exception } /** - * @param sql Operation to test. * @throws Exception if failed. 
*/ - private void checkMvccDisabledBehavior(String sql) throws Exception { - try (IgniteEx node = startGrid(commonConfiguration(1))) { - GridTestUtils.assertThrows(null, new Callable() { - @Override public Object call() throws Exception { - execute(node, sql); - - return null; - } - }, IgniteSQLException.class, "MVCC must be enabled in order to invoke transactional operation: " + sql); - } + @Test + public void testRollbackWithMvccDisabled() throws Exception { + execute(grid(0), "ROLLBACK"); + // assert no exception } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsComandsSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsComandsSelfTest.java deleted file mode 100644 index 8b3fbe35ad9fc..0000000000000 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsComandsSelfTest.java +++ /dev/null @@ -1,83 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.internal.processors.cache.index; - -import java.util.concurrent.Callable; -import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.internal.processors.query.IgniteSQLException; -import org.apache.ignite.testframework.GridTestUtils; - -/** - * - */ -public class SqlTransactionsComandsSelfTest extends AbstractSchemaSelfTest { - /** {@inheritDoc} */ - @Override protected void beforeTestsStarted() throws Exception { - super.beforeTestsStarted(); - - startGrid(commonConfiguration(0)); - - super.execute(grid(0), "CREATE TABLE INTS(k int primary key, v int) WITH \"wrap_value=false,cache_name=ints," + - "atomicity=transactional\""); - } - - /** {@inheritDoc} */ - @Override protected void afterTestsStopped() throws Exception { - stopAllGrids(); - - super.afterTestsStopped(); - } - - - /** - * @throws Exception if failed. - */ - public void testBeginWithMvccDisabledThrows() throws Exception { - checkMvccDisabledBehavior("BEGIN"); - } - - /** - * @throws Exception if failed. - */ - public void testCommitWithMvccDisabledThrows() throws Exception { - checkMvccDisabledBehavior("COMMIT"); - } - - /** - * @throws Exception if failed. - */ - public void testRollbackWithMvccDisabledThrows() throws Exception { - checkMvccDisabledBehavior("rollback"); - } - - /** - * @param sql Operation to test. - * @throws Exception if failed. 
- */ - private void checkMvccDisabledBehavior(String sql) throws Exception { - try (IgniteEx node = startGrid(commonConfiguration(1))) { - GridTestUtils.assertThrows(null, new Callable() { - @Override public Object call() throws Exception { - execute(node, sql); - - return null; - } - }, IgniteSQLException.class, "MVCC must be enabled in order to invoke transactional operation: " + sql); - } - } -} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsCommandsWithMvccEnabledSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsCommandsWithMvccEnabledSelfTest.java index 76f80138a5de5..29eaad6710b96 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsCommandsWithMvccEnabledSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsCommandsWithMvccEnabledSelfTest.java @@ -43,10 +43,15 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionState; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests to check behavior regarding transactions started via SQL. */ +@RunWith(JUnit4.class) public class SqlTransactionsCommandsWithMvccEnabledSelfTest extends AbstractSchemaSelfTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -68,6 +73,7 @@ public class SqlTransactionsCommandsWithMvccEnabledSelfTest extends AbstractSche /** * Test that BEGIN opens a transaction. */ + @Test public void testBegin() { execute(node(), "BEGIN"); @@ -79,6 +85,7 @@ public void testBegin() { /** * Test that COMMIT commits a transaction. 
*/ + @Test public void testCommit() { execute(node(), "BEGIN WORK"); @@ -98,6 +105,7 @@ public void testCommit() { /** * Test that COMMIT without a transaction yields nothing. */ + @Test public void testCommitNoTransaction() { execute(node(), "COMMIT"); } @@ -105,6 +113,7 @@ public void testCommitNoTransaction() { /** * Test that ROLLBACK without a transaction yields nothing. */ + @Test public void testRollbackNoTransaction() { execute(node(), "ROLLBACK"); } @@ -112,6 +121,7 @@ public void testRollbackNoTransaction() { /** * Test that ROLLBACK rolls back a transaction. */ + @Test public void testRollback() { execute(node(), "BEGIN TRANSACTION"); @@ -131,9 +141,9 @@ public void testRollback() { /** * Test that attempting to perform various SQL operations within non SQL transaction yields an exception. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9470") + @Test public void testSqlOperationsWithinNonSqlTransaction() { - fail("https://issues.apache.org/jira/browse/IGNITE-9470"); - assertSqlOperationWithinNonSqlTransactionThrows("COMMIT"); assertSqlOperationWithinNonSqlTransactionThrows("ROLLBACK"); @@ -263,33 +273,6 @@ else if (arg instanceof EntryProcessor) return arg.getClass(); } - /** - * Test that attempting to perform a cache PUT operation from within an SQL transaction fails. 
- */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") - public void testCacheOperationsFromSqlTransaction() { - checkCacheOperationThrows("invoke", 1, ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invoke", 1, CACHE_ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAsync", 1, ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAsync", 1, CACHE_ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAll", Collections.singletonMap(1, CACHE_ENTRY_PROC), X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAll", Collections.singleton(1), CACHE_ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAll", Collections.singleton(1), ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAllAsync", Collections.singletonMap(1, CACHE_ENTRY_PROC), - X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAllAsync", Collections.singleton(1), CACHE_ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - - checkCacheOperationThrows("invokeAllAsync", Collections.singleton(1), ENTRY_PROC, X.EMPTY_OBJECT_ARRAY); - } - /** */ private final static EntryProcessor ENTRY_PROC = new EntryProcessor() { diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsSelfTest.java index d93bdab7cbe6e..c3853881eff84 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/index/SqlTransactionsSelfTest.java @@ -43,10 +43,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionState; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * 
Tests to check behavior regarding transactions started via SQL. */ +@RunWith(JUnit4.class) public class SqlTransactionsSelfTest extends AbstractSchemaSelfTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { @@ -68,6 +72,7 @@ public class SqlTransactionsSelfTest extends AbstractSchemaSelfTest { /** * Test that BEGIN opens a transaction. */ + @Test public void testBegin() { execute(node(), "BEGIN"); @@ -79,6 +84,7 @@ public void testBegin() { /** * Test that COMMIT commits a transaction. */ + @Test public void testCommit() { execute(node(), "BEGIN WORK"); @@ -98,6 +104,7 @@ public void testCommit() { /** * Test that COMMIT without a transaction yields nothing. */ + @Test public void testCommitNoTransaction() { execute(node(), "COMMIT"); } @@ -105,6 +112,7 @@ public void testCommitNoTransaction() { /** * Test that ROLLBACK without a transaction yields nothing. */ + @Test public void testRollbackNoTransaction() { execute(node(), "ROLLBACK"); } @@ -112,6 +120,7 @@ public void testRollbackNoTransaction() { /** * Test that ROLLBACK rolls back a transaction. */ + @Test public void testRollback() { execute(node(), "BEGIN TRANSACTION"); @@ -131,6 +140,7 @@ public void testRollback() { /** * Test that attempting to perform various SQL operations within non SQL transaction yields an exception. */ + @Test public void testSqlOperationsWithinNonSqlTransaction() { assertSqlOperationWithinNonSqlTransactionThrows("COMMIT"); @@ -181,7 +191,6 @@ private void assertSqlOperationWithinNonSqlTransactionThrows(final String sql) { /** * Test that attempting to perform a cache API operation from within an SQL transaction fails. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") private void checkCacheOperationThrows(final String opName, final Object... args) { execute(node(), "BEGIN"); @@ -264,7 +273,7 @@ else if (arg instanceof EntryProcessor) /** * Test that attempting to perform a cache PUT operation from within an SQL transaction fails. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testCacheOperationsFromSqlTransaction() { checkCacheOperationThrows("get", 1); @@ -361,7 +370,7 @@ public void testCacheOperationsFromSqlTransaction() { } /** */ - private final static EntryProcessor ENTRY_PROC = + private static final EntryProcessor ENTRY_PROC = new EntryProcessor() { @Override public Object process(MutableEntry entry, Object... arguments) throws EntryProcessorException { @@ -370,7 +379,7 @@ public void testCacheOperationsFromSqlTransaction() { }; /** */ - private final static CacheEntryProcessor CACHE_ENTRY_PROC = + private static final CacheEntryProcessor CACHE_ENTRY_PROC = new CacheEntryProcessor() { @Override public Object process(MutableEntry entry, Object... arguments) throws EntryProcessorException { diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalFieldsQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalFieldsQuerySelfTest.java index 72d72903a1210..c494ca319c2e3 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalFieldsQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalFieldsQuerySelfTest.java @@ -21,12 +21,16 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractFieldsQuerySelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Tests for fields queries. 
*/ +@RunWith(JUnit4.class) public class IgniteCacheLocalFieldsQuerySelfTest extends IgniteCacheAbstractFieldsQuerySelfTest { // static { // System.setProperty(IgniteSystemProperties.IGNITE_H2_DEBUG_CONSOLE, "1"); @@ -45,8 +49,9 @@ public class IgniteCacheLocalFieldsQuerySelfTest extends IgniteCacheAbstractFiel /** * @throws Exception If failed. */ + @Test public void testInformationSchema() throws Exception { jcache(String.class, String.class).query( new SqlFieldsQuery("SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS").setLocal(true)).getAll(); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQueryCancelOrTimeoutSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQueryCancelOrTimeoutSelfTest.java index fc681a4f33009..d4f76425ff4fe 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQueryCancelOrTimeoutSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQueryCancelOrTimeoutSelfTest.java @@ -29,12 +29,16 @@ import org.apache.ignite.cache.query.QueryCancelledException; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Tests local query cancellations and timeouts. */ +@RunWith(JUnit4.class) public class IgniteCacheLocalQueryCancelOrTimeoutSelfTest extends GridCommonAbstractTest { /** Cache size. */ private static final int CACHE_SIZE = 10_000; @@ -92,6 +96,7 @@ private void loadCache(IgniteCache cache) { /** * Tests cancellation. 
*/ + @Test public void testQueryCancel() { testQuery(false, 1, TimeUnit.SECONDS); } @@ -99,6 +104,7 @@ public void testQueryCancel() { /** * Tests cancellation with zero timeout. */ + @Test public void testQueryCancelZeroTimeout() { testQuery(false, 1, TimeUnit.MILLISECONDS); } @@ -106,6 +112,7 @@ public void testQueryCancelZeroTimeout() { /** * Tests timeout. */ + @Test public void testQueryTimeout() { testQuery(true, 1, TimeUnit.SECONDS); } @@ -148,4 +155,4 @@ private void testQuery(boolean timeout, int timeoutUnits, TimeUnit timeUnit) { // Test must exit gracefully. } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQuerySelfTest.java index 2272f27d158dd..f5f171f4f3ad5 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/local/IgniteCacheLocalQuerySelfTest.java @@ -29,12 +29,16 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.SqlQuery; import org.apache.ignite.internal.processors.cache.IgniteCacheAbstractQuerySelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.LOCAL; /** * Tests local query. */ +@RunWith(JUnit4.class) public class IgniteCacheLocalQuerySelfTest extends IgniteCacheAbstractQuerySelfTest { /** {@inheritDoc} */ @Override protected int gridCount() { @@ -49,6 +53,7 @@ public class IgniteCacheLocalQuerySelfTest extends IgniteCacheAbstractQuerySelfT /** * @throws Exception If test failed. 
*/ + @Test public void testQueryLocal() throws Exception { // Let's do it twice to see how prepared statement caching behaves - without recompilation // check for cached prepared statements this would fail. @@ -98,6 +103,7 @@ public void testQueryLocal() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testLocalSqlQueryFromClient() throws Exception { try { Ignite g = startGrid("client"); @@ -127,6 +133,7 @@ public void testQueryLocal() throws Exception { } /** {@inheritDoc} */ + @Test @Override public void testLocalSqlFieldsQueryFromClient() throws Exception { try { Ignite g = startGrid("client"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractContinuousQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractContinuousQuerySelfTest.java new file mode 100644 index 0000000000000..468349432642f --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractContinuousQuerySelfTest.java @@ -0,0 +1,75 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryAbstractSelfTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + +/** + * + */ +@RunWith(JUnit4.class) +public abstract class CacheMvccAbstractContinuousQuerySelfTest extends GridCacheContinuousQueryAbstractSelfTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Override protected int gridCount() { + return 2; + } + + /** {@inheritDoc} */ + @Override protected NearCacheConfiguration nearConfiguration() { + return null; + } + + /** {@inheritDoc} */ + @Test + @Override public void testInternalKey() throws Exception { + // No-op. + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311") + @Test + @Override public void testExpired() throws Exception { + // No-op. + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7954") + @Test + @Override public void testLoadCache() throws Exception { + // No-op. + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-9321") + @Test + @Override public void testEvents() throws Exception { + // No-op. 
+ } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractSqlContinuousQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractSqlContinuousQuerySelfTest.java new file mode 100644 index 0000000000000..96fdf062b25dd --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractSqlContinuousQuerySelfTest.java @@ -0,0 +1,40 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryAbstractSelfTest; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + +/** + * Base class for MVCC continuous queries. 
+ */ +public abstract class CacheMvccAbstractSqlContinuousQuerySelfTest extends CacheMvccAbstractContinuousQuerySelfTest { + /** {@inheritDoc} */ + @Override protected void cachePut(IgniteCache cache, Integer key, Integer val) { + cache.query(new SqlFieldsQuery("MERGE INTO Integer (_key, _val) values (" + key + ',' + val + ')')).getAll(); + } + + /** {@inheritDoc} */ + @Override protected void cacheRemove(IgniteCache cache, Integer key) { + cache.query(new SqlFieldsQuery("DELETE FROM Integer WHERE _key=" + key)).getAll(); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractSqlCoordinatorFailoverTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractSqlCoordinatorFailoverTest.java index c449ee20e390b..e59018ec0b26f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractSqlCoordinatorFailoverTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccAbstractSqlCoordinatorFailoverTest.java @@ -17,6 +17,27 @@ package org.apache.ignite.internal.processors.cache.mvcc; +import java.util.concurrent.Callable; +import javax.cache.CacheException; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheServerNotFoundException; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteNodeAttributes; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtAffinityAssignmentResponse; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgnitePredicate; +import 
org.apache.ignite.testframework.GridTestUtils; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SCAN; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SQL; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.WriteMode.DML; @@ -26,11 +47,13 @@ /** * Mvcc SQL API coordinator failover test. */ -@SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public abstract class CacheMvccAbstractSqlCoordinatorFailoverTest extends CacheMvccAbstractBasicCoordinatorFailoverTest { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10750") + @Test public void testAccountsTxSql_Server_Backups0_CoordinatorFails() throws Exception { accountsTxReadAll(2, 1, 0, 64, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL, DML, DFLT_TEST_TIME, RestartMode.RESTART_CRD); @@ -39,6 +62,8 @@ public void testAccountsTxSql_Server_Backups0_CoordinatorFails() throws Exceptio /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10753") + @Test public void testAccountsTxSql_SingleNode_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -49,6 +74,7 @@ public void testAccountsTxSql_SingleNode_CoordinatorFails_Persistence() throws E /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_ClientServer_Backups0_RestartCoordinator_ScanDml() throws Exception { putAllGetAll(RestartMode.RESTART_CRD , 2, 1, 0, 64, new InitIndexing(Integer.class, Integer.class), SCAN, DML); @@ -57,6 +83,8 @@ public void testPutAllGetAll_ClientServer_Backups0_RestartCoordinator_ScanDml() /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10767") + @Test public void testPutAllGetAll_SingleNode_RestartCoordinator_ScanDml_Persistence() throws Exception { persistence = true; @@ -67,6 +95,8 @@ public void testPutAllGetAll_SingleNode_RestartCoordinator_ScanDml_Persistence() /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10753") + @Test public void testPutAllGetAll_ClientServer_Backups0_RestartCoordinator_SqlDml() throws Exception { putAllGetAll(RestartMode.RESTART_CRD, 2, 1, 0, DFLT_PARTITION_COUNT, new InitIndexing(Integer.class, Integer.class), SQL, DML); @@ -75,6 +105,7 @@ public void testPutAllGetAll_ClientServer_Backups0_RestartCoordinator_SqlDml() t /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_SingleNode_RestartCoordinator_SqlDml_Persistence() throws Exception { persistence = true; @@ -85,6 +116,8 @@ public void testPutAllGetAll_SingleNode_RestartCoordinator_SqlDml_Persistence() /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testUpdate_N_Objects_ClientServer_Backups0_Sql_Persistence() throws Exception { persistence = true; @@ -95,6 +128,8 @@ public void testUpdate_N_Objects_ClientServer_Backups0_Sql_Persistence() throws /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testUpdate_N_Objects_SingleNode_Sql_Persistence() throws Exception { updateNObjectsTest(3, 1, 0, 0, 1, DFLT_TEST_TIME, new InitIndexing(Integer.class, Integer.class), SQL, DML, RestartMode.RESTART_CRD); @@ -103,6 +138,7 @@ public void testUpdate_N_Objects_SingleNode_Sql_Persistence() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCoordinatorFailureSimplePessimisticTxSql() throws Exception { coordinatorFailureSimple(PESSIMISTIC, REPEATABLE_READ, SQL, DML); } @@ -110,6 +146,7 @@ public void testCoordinatorFailureSimplePessimisticTxSql() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTxInProgressCoordinatorChangeSimple_Readonly() throws Exception { txInProgressCoordinatorChangeSimple(PESSIMISTIC, REPEATABLE_READ, new InitIndexing(Integer.class, Integer.class), SQL, DML); @@ -118,6 +155,7 @@ public void testTxInProgressCoordinatorChangeSimple_Readonly() throws Exception /** * @throws Exception If failed. */ + @Test public void testReadInProgressCoordinatorFailsSimple_FromClient() throws Exception { readInProgressCoordinatorFailsSimple(true, new InitIndexing(Integer.class, Integer.class), SQL, DML); } @@ -125,6 +163,7 @@ public void testReadInProgressCoordinatorFailsSimple_FromClient() throws Excepti /** * @throws Exception If failed. */ + @Test public void testCoordinatorChangeActiveQueryClientFails_Simple() throws Exception { checkCoordinatorChangeActiveQueryClientFails_Simple(new InitIndexing(Integer.class, Integer.class), SQL, DML); } @@ -132,7 +171,153 @@ public void testCoordinatorChangeActiveQueryClientFails_Simple() throws Exceptio /** * @throws Exception If failed. */ + @Test public void testCoordinatorChangeActiveQueryClientFails_SimpleScan() throws Exception { checkCoordinatorChangeActiveQueryClientFails_Simple(new InitIndexing(Integer.class, Integer.class), SCAN, DML); } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testStartLastServerFails() throws Exception { + testSpi = true; + + startGrids(3); + + CacheConfiguration cfg = cacheConfiguration(cacheMode(), FULL_SYNC, 0, DFLT_PARTITION_COUNT) + .setIndexedTypes(Integer.class, Integer.class); + + cfg.setNodeFilter(new TestNodeFilter(getTestIgniteInstanceName(1))); + + Ignite srv1 = ignite(1); + + srv1.createCache(cfg); + + client = true; + + final Ignite c = startGrid(3); + + client = false; + + TestRecordingCommunicationSpi.spi(srv1).blockMessages(GridDhtAffinityAssignmentResponse.class, c.name()); + + IgniteInternalFuture fut = GridTestUtils.runAsync(new Callable() { + @Override public Void call() throws Exception { + c.cache(DEFAULT_CACHE_NAME); + + return null; + } + }, "start-cache"); + + U.sleep(1000); + + assertFalse(fut.isDone()); + + stopGrid(1); + + fut.get(); + + final IgniteCache clientCache = c.cache(DEFAULT_CACHE_NAME); + + for (int i = 0; i < 10; i++) { + final int k = i; + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.get(k); + + return null; + } + }, CacheServerNotFoundException.class, null); + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.put(k, k); + + return null; + } + }, CacheServerNotFoundException.class, null); + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.remove(k); + + return null; + } + }, CacheServerNotFoundException.class, null); + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.query(new SqlFieldsQuery("SELECT * FROM INTEGER")).getAll(); + + return null; + } + }, CacheException.class, "Failed to find data nodes for cache"); // TODO IGNITE-10377 should be CacheServerNotFoundException. 
+ + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.query(new SqlFieldsQuery("SELECT * FROM INTEGER ORDER BY _val")).getAll(); + + return null; + } + }, CacheException.class, "Failed to find data nodes for cache"); // TODO IGNITE-10377 should be CacheServerNotFoundException. + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.query(new SqlFieldsQuery("DELETE FROM Integer WHERE 1 = 1")).getAll(); + + return null; + } + }, CacheException.class, "Failed to find data nodes for cache"); // TODO IGNITE-10377 should be CacheServerNotFoundException. + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.query(new SqlFieldsQuery("INSERT INTO Integer (_key, _val) VALUES (1, 2)")).getAll(); + + return null; + } + }, CacheException.class, "Failed to get primary node"); // TODO IGNITE-10377 should be CacheServerNotFoundException. + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Void call() throws Exception { + clientCache.query(new SqlFieldsQuery("UPDATE Integer SET _val=42 WHERE _key IN (SELECT DISTINCT _val FROM INTEGER)")).getAll(); + + return null; + } + }, CacheException.class, "Failed to find data nodes for cache"); // TODO IGNITE-10377 should be CacheServerNotFoundException. + } + + startGrid(1); + + awaitPartitionMapExchange(); + + for (int i = 0; i < 100; i++) { + assertNull(clientCache.get(i)); + + clientCache.put(i, i); + + assertEquals(i, clientCache.get(i)); + } + } + + /** + * + */ + private static class TestNodeFilter implements IgnitePredicate<ClusterNode> { + /** */ + private final String includeName; + + /** + * @param includeName Node to include.
+ */ + public TestNodeFilter(String includeName) { + this.includeName = includeName; + } + + /** {@inheritDoc} */ + @Override public boolean apply(ClusterNode node) { + return includeName.equals(node.attribute(IgniteNodeAttributes.ATTR_IGNITE_INSTANCE_NAME)); + } + } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBackupsAbstractTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBackupsAbstractTest.java index 998cb766668c7..147562eab1e88 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBackupsAbstractTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBackupsAbstractTest.java @@ -46,6 +46,10 @@ import org.apache.ignite.lang.IgniteBiTuple; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.transactions.Transaction; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SQL; @@ -58,8 +62,8 @@ * Backups tests. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public abstract class CacheMvccBackupsAbstractTest extends CacheMvccAbstractTest { - /** Test timeout. */ private final long txLongTimeout = getTestTimeout() / 4; @@ -68,6 +72,7 @@ public abstract class CacheMvccBackupsAbstractTest extends CacheMvccAbstractTest * * @throws Exception If fails. */ + @Test public void testBackupsCoherenceSimple() throws Exception { disableScheduledVacuum = true; @@ -181,6 +186,8 @@ public void testBackupsCoherenceSimple() throws Exception { * * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10104") + @Test public void testBackupsCoherenceWithLargeOperations() throws Exception { disableScheduledVacuum = true; @@ -277,6 +284,7 @@ public void testBackupsCoherenceWithLargeOperations() throws Exception { * * @throws Exception If failed. */ + @Test public void testBackupsCoherenceWithInFlightBatchesOverflow() throws Exception { testSpi = true; @@ -387,6 +395,7 @@ public void testBackupsCoherenceWithInFlightBatchesOverflow() throws Exception { * * @throws Exception If failed. */ + @Test public void testBackupsCoherenceWithConcurrentUpdates2ServersNoClients() throws Exception { checkBackupsCoherenceWithConcurrentUpdates(2, 0); } @@ -396,6 +405,7 @@ public void testBackupsCoherenceWithConcurrentUpdates2ServersNoClients() throws * * @throws Exception If failed. */ + @Test public void testBackupsCoherenceWithConcurrentUpdates4ServersNoClients() throws Exception { checkBackupsCoherenceWithConcurrentUpdates(4, 0); } @@ -405,6 +415,7 @@ public void testBackupsCoherenceWithConcurrentUpdates4ServersNoClients() throws * * @throws Exception If failed. */ + @Test public void testBackupsCoherenceWithConcurrentUpdates3Servers1Client() throws Exception { checkBackupsCoherenceWithConcurrentUpdates(3, 1); } @@ -414,6 +425,7 @@ public void testBackupsCoherenceWithConcurrentUpdates3Servers1Client() throws Ex * * @throws Exception If failed. */ + @Test public void testBackupsCoherenceWithConcurrentUpdates5Servers2Clients() throws Exception { checkBackupsCoherenceWithConcurrentUpdates(5, 2); } @@ -461,6 +473,7 @@ private void checkBackupsCoherenceWithConcurrentUpdates(int srvs, int clients) t /** * @throws Exception If failed. */ + @Test public void testNoForceKeyRequestDelayedRebalanceNoVacuum() throws Exception { disableScheduledVacuum = true; @@ -470,6 +483,7 @@ public void testNoForceKeyRequestDelayedRebalanceNoVacuum() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testNoForceKeyRequestDelayedRebalance() throws Exception { doTestRebalanceNodeAdd(true); } @@ -477,6 +491,7 @@ public void testNoForceKeyRequestDelayedRebalance() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoForceKeyRequestNoVacuum() throws Exception { disableScheduledVacuum = true; @@ -486,6 +501,7 @@ public void testNoForceKeyRequestNoVacuum() throws Exception { /** * @throws Exception If failed. */ + @Test public void testNoForceKeyRequest() throws Exception { doTestRebalanceNodeAdd(false); } @@ -569,6 +585,7 @@ private void doTestRebalanceNodeAdd(boolean delayRebalance) throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalanceNodeLeaveClient() throws Exception { doTestRebalanceNodeLeave(true); } @@ -576,6 +593,7 @@ public void testRebalanceNodeLeaveClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRebalanceNodeLeaveServer() throws Exception { doTestRebalanceNodeLeave(false); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBasicContinuousQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBasicContinuousQueryTest.java new file mode 100644 index 0000000000000..c0ffa12725347 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBasicContinuousQueryTest.java @@ -0,0 +1,605 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicInteger; +import javax.cache.Cache; +import javax.cache.CacheException; +import javax.cache.event.CacheEntryEvent; +import javax.cache.event.CacheEntryUpdatedListener; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.affinity.Affinity; +import org.apache.ignite.cache.query.ContinuousQuery; +import org.apache.ignite.cache.query.QueryCursor; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareRequest; +import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager; +import org.apache.ignite.internal.processors.continuous.GridContinuousMessage; +import org.apache.ignite.internal.processors.continuous.GridContinuousProcessor; +import org.apache.ignite.internal.util.lang.GridAbsPredicate; +import org.apache.ignite.internal.util.typedef.G; 
+import org.apache.ignite.internal.util.typedef.PA; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.lang.IgniteBiPredicate; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.transactions.Transaction; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static java.util.concurrent.TimeUnit.MILLISECONDS; +import static java.util.concurrent.TimeUnit.SECONDS; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; +import static org.apache.ignite.internal.processors.cache.mvcc.MvccCachingManager.TX_SIZE_THRESHOLD; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; +import static org.apache.ignite.transactions.TransactionState.PREPARING; +import static org.apache.ignite.transactions.TransactionState.ROLLED_BACK; + +/** + * Basic continuous queries test with enabled mvcc. 
+ */ +@RunWith(JUnit4.class) +public class CacheMvccBasicContinuousQueryTest extends CacheMvccAbstractTest { + /** */ + private static final long LATCH_TIMEOUT = 5000; + + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.PARTITIONED; + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + // Wait for all routines are unregistered + GridTestUtils.waitForCondition(new PA() { + @Override public boolean apply() { + for (Ignite node : G.allGrids()) { + GridContinuousProcessor proc = ((IgniteEx)node).context().continuous(); + + if(((Map)U.field(proc, "rmtInfos")).size() > 0) + return false; + } + + return true; + } + }, 3000); + + for (Ignite node : G.allGrids()) { + GridContinuousProcessor proc = ((IgniteEx)node).context().continuous(); + + assertEquals(1, ((Map)U.field(proc, "locInfos")).size()); + assertEquals(0, ((Map)U.field(proc, "rmtInfos")).size()); + assertEquals(0, ((Map)U.field(proc, "startFuts")).size()); + assertEquals(0, ((Map)U.field(proc, "stopFuts")).size()); + assertEquals(0, ((Map)U.field(proc, "bufCheckThreads")).size()); + + CacheContinuousQueryManager mgr = ((IgniteEx)node).context().cache().internalCache(DEFAULT_CACHE_NAME).context().continuousQueries(); + + assertEquals(0, ((Map)U.field(mgr, "lsnrs")).size()); + + MvccCachingManager cachingMgr = ((IgniteEx)node).context().cache().context().mvccCaching(); + + assertEquals(0, ((Map)U.field(cachingMgr, "enlistCache")).size()); + assertEquals(0, ((Map)U.field(cachingMgr, "cntrs")).size()); + } + + super.afterTest(); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testAllEntries() throws Exception { + Ignite node = startGrids(3); + + final IgniteCache cache = node.createCache( + cacheConfiguration(cacheMode(), FULL_SYNC, 1, 2) + .setCacheMode(CacheMode.REPLICATED) + .setIndexedTypes(Integer.class, Integer.class)); + + ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>(); + + final Map<Integer, List<Integer>> map = new HashMap<>(); + final CountDownLatch latch = new CountDownLatch(5); + + qry.setLocalListener(new CacheEntryUpdatedListener<Integer, Integer>() { + @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends Integer>> evts) { + for (CacheEntryEvent<? extends Integer, ? extends Integer> e : evts) { + synchronized (map) { + List<Integer> vals = map.get(e.getKey()); + + if (vals == null) { + vals = new ArrayList<>(); + + map.put(e.getKey(), vals); + } + + vals.add(e.getValue()); + } + + latch.countDown(); + } + } + }); + + try (QueryCursor<Cache.Entry<Integer, Integer>> ignored = cache.query(qry)) { + + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + String dml = "INSERT INTO Integer (_key, _val) values (1,1),(2,2)"; + + cache.query(new SqlFieldsQuery(dml)).getAll(); + + tx.commit(); + } + + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + String dml1 = "MERGE INTO Integer (_key, _val) values (3,3)"; + + cache.query(new SqlFieldsQuery(dml1)).getAll(); + + String dml2 = "DELETE FROM Integer WHERE _key = 2"; + + cache.query(new SqlFieldsQuery(dml2)).getAll(); + + String dml3 = "UPDATE Integer SET _val = 10 WHERE _key = 1"; + + cache.query(new SqlFieldsQuery(dml3)).getAll(); + + tx.commit(); + } + + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + String dml = "INSERT INTO Integer (_key, _val) values (4,4),(5,5)"; + + cache.query(new SqlFieldsQuery(dml)).getAll(); + + tx.rollback(); + } + + assert latch.await(LATCH_TIMEOUT, MILLISECONDS); + + assertEquals(3, map.size()); + + List<Integer> vals = map.get(1); + + assertNotNull(vals); + assertEquals(2, vals.size()); + assertEquals(1, (int)vals.get(0)); + assertEquals(10, (int)vals.get(1)); + +
vals = map.get(2); + + assertNotNull(vals); + assertEquals(2, vals.size()); + assertEquals(2, (int)vals.get(0)); + assertEquals(2, (int)vals.get(1)); + + vals = map.get(3); + + assertNotNull(vals); + assertEquals(1, vals.size()); + assertEquals(3, (int)vals.get(0)); + } + } + + /** + * @throws Exception If failed. + */ + @Test + public void testCachingMaxSize() throws Exception { + Ignite node = startGrids(1); + + final IgniteCache cache = node.createCache( + cacheConfiguration(cacheMode(), FULL_SYNC, 1, 2) + .setCacheMode(CacheMode.PARTITIONED) + .setIndexedTypes(Integer.class, Integer.class)); + + ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>(); + + qry.setLocalListener(new CacheEntryUpdatedListener<Integer, Integer>() { + @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends Integer>> evts) { + // No-op. + } + }); + + GridTestUtils.assertThrows(log, new Callable() { + @Override public Object call() throws Exception { + try (QueryCursor<Cache.Entry<Integer, Integer>> ignored = cache.query(qry)) { + try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + for (int i = 0; i < TX_SIZE_THRESHOLD + 1; i++) + cache.query(new SqlFieldsQuery("INSERT INTO Integer (_key, _val) values (" + i + ", 1)")).getAll(); + + tx.commit(); + } + } + + return null; + } + }, CacheException.class, "Failed to run update. Transaction is too large. Consider reducing transaction size"); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10768") + @Test + public void testUpdateCountersGapClosedSimplePartitioned() throws Exception { + checkUpdateCountersGapIsProcessedSimple(CacheMode.PARTITIONED); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testUpdateCountersGapClosedSimpleReplicated() throws Exception { + checkUpdateCountersGapIsProcessedSimple(CacheMode.REPLICATED); + } + + /** + * @throws Exception if failed.
+ */ + private void checkUpdateCountersGapIsProcessedSimple(CacheMode cacheMode) throws Exception { + testSpi = true; + + int srvCnt = 4; + + startGridsMultiThreaded(srvCnt); + + client = true; + + IgniteEx nearNode = startGrid(srvCnt); + + IgniteCache cache = nearNode.createCache( + cacheConfiguration(cacheMode, FULL_SYNC, srvCnt - 1, srvCnt) + .setIndexedTypes(Integer.class, Integer.class)); + + IgniteEx primary = grid(0); + + List<Integer> keys = primaryKeys(primary.cache(DEFAULT_CACHE_NAME), 3); + + ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>(); + + List<CacheEntryEvent> arrivedEvts = new ArrayList<>(); + + CountDownLatch latch = new CountDownLatch(2); + + qry.setLocalListener(new CacheEntryUpdatedListener<Integer, Integer>() { + @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends Integer>> evts) { + for (CacheEntryEvent<? extends Integer, ? extends Integer> e : evts) { + arrivedEvts.add(e); + + latch.countDown(); + } + } + }); + + QueryCursor<Cache.Entry<Integer, Integer>> cur = nearNode.cache(DEFAULT_CACHE_NAME).query(qry); + + // Initial value. + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(keys.get(0))).getAll(); + + Transaction txA = nearNode.transactions().txStart(PESSIMISTIC, REPEATABLE_READ); + + // prevent first transaction prepare on backups + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(primary); + + spi.blockMessages(new IgniteBiPredicate<ClusterNode, Message>() { + private final AtomicInteger limiter = new AtomicInteger(); + + @Override public boolean apply(ClusterNode node, Message msg) { + if (msg instanceof GridDhtTxPrepareRequest) + return limiter.getAndIncrement() < srvCnt - 1; + + if (msg instanceof GridContinuousMessage) + return true; + + return false; + } + }); + + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(keys.get(1))).getAll(); + + txA.commitAsync(); + + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + return nearNode.context().cache().context().tm().activeTransactions().stream().allMatch(tx -> tx.state() == PREPARING); + } + },
3_000); + + GridTestUtils.runAsync(() -> { + try (Transaction txB = nearNode.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(keys.get(2))); + + txB.commit(); + } + }).get(); + + long primaryUpdCntr = getUpdateCounter(primary, keys.get(0)); + + assertEquals(3, primaryUpdCntr); // There were three updates. + + // drop primary + stopGrid(primary.name()); + + // Wait all txs are rolled back. + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + boolean allRolledBack = true; + + for (int i = 1; i < srvCnt; i++) { + boolean rolledBack = grid(i).context().cache().context().tm().activeTransactions().stream().allMatch(tx -> tx.state() == ROLLED_BACK); + + allRolledBack &= rolledBack; + } + + return allRolledBack; + } + }, 3_000); + + for (int i = 1; i < srvCnt; i++) { + IgniteCache backupCache = grid(i).cache(DEFAULT_CACHE_NAME); + + int size = backupCache.query(new SqlFieldsQuery("select * from Integer")).getAll().size(); + + long backupCntr = getUpdateCounter(grid(i), keys.get(0)); + + assertEquals(2, size); + assertEquals(primaryUpdCntr, backupCntr); + } + + assertTrue(latch.await(3, SECONDS)); + + assertEquals(2, arrivedEvts.size()); + assertEquals(keys.get(0), arrivedEvts.get(0).getKey()); + assertEquals(keys.get(2), arrivedEvts.get(1).getKey()); + + cur.close(); + nearNode.close(); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10756") + @Test + public void testUpdateCountersGapClosedPartitioned() throws Exception { + checkUpdateCountersGapsClosed(CacheMode.PARTITIONED); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testUpdateCountersGapClosedReplicated() throws Exception { + checkUpdateCountersGapsClosed(CacheMode.REPLICATED); + } + + /** + * @throws Exception If failed. 
+ */ + private void checkUpdateCountersGapsClosed(CacheMode cacheMode) throws Exception { + testSpi = true; + + int srvCnt = 4; + + startGridsMultiThreaded(srvCnt); + + IgniteEx nearNode = grid(srvCnt - 1); + + IgniteCache cache = nearNode.createCache( + cacheConfiguration(cacheMode, FULL_SYNC, srvCnt - 1, srvCnt) + .setIndexedTypes(Integer.class, Integer.class)); + + IgniteEx primary = grid(0); + + Affinity aff = nearNode.affinity(cache.getName()); + + int[] nearBackupParts = aff.backupPartitions(nearNode.localNode()); + + int[] primaryParts = aff.primaryPartitions(primary.localNode()); + + Collection<Integer> nearSet = new HashSet<>(); + + for (int part : nearBackupParts) + nearSet.add(part); + + Collection<Integer> primarySet = new HashSet<>(); + + for (int part : primaryParts) + primarySet.add(part); + + // We need backup partitions on the near node. + nearSet.retainAll(primarySet); + + List<Integer> keys = singlePartKeys(primary.cache(DEFAULT_CACHE_NAME), 20, nearSet.iterator().next()); + + int range = 3; + + ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>(); + + List<CacheEntryEvent> arrivedEvts = new ArrayList<>(); + + CountDownLatch latch = new CountDownLatch(range * 2); + + qry.setLocalListener(new CacheEntryUpdatedListener<Integer, Integer>() { + @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends Integer>> evts) { + for (CacheEntryEvent<? extends Integer, ? extends Integer> e : evts) { + arrivedEvts.add(e); + + latch.countDown(); + } + } + }); + + QueryCursor<Cache.Entry<Integer, Integer>> cur = nearNode.cache(DEFAULT_CACHE_NAME).query(qry); + + // prevent first transaction prepare on backups + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(primary); + + spi.blockMessages(new IgniteBiPredicate<ClusterNode, Message>() { + private final AtomicInteger limiter = new AtomicInteger(); + + @Override public boolean apply(ClusterNode node, Message msg) { + if (msg instanceof GridDhtTxPrepareRequest) + return limiter.getAndIncrement() < srvCnt - 1; + + return false; + } + }); + + Transaction txA = primary.transactions().txStart(PESSIMISTIC, REPEATABLE_READ); + + for (int i = 0; i < range; i++) +
primary.cache(DEFAULT_CACHE_NAME).put(keys.get(i), 2); + + txA.commitAsync(); + + GridTestUtils.runAsync(() -> { + try (Transaction tx = primary.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + for (int i = range; i < range * 2; i++) + primary.cache(DEFAULT_CACHE_NAME).put(keys.get(i), 1); + + tx.commit(); + } + }).get(); + + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + return primary.context().cache().context().tm().activeTransactions().stream().allMatch(tx -> tx.state() == PREPARING); + } + }, 3_000); + + GridTestUtils.runAsync(() -> { + try (Transaction txB = primary.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + for (int i = range * 2; i < range * 3; i++) + primary.cache(DEFAULT_CACHE_NAME).put(keys.get(i), 3); + + txB.commit(); + } + }).get(); + + long primaryUpdCntr = getUpdateCounter(primary, keys.get(0)); + + assertEquals(range * 3, primaryUpdCntr); + + // drop primary + stopGrid(primary.name()); + + // Wait all txs are rolled back. + GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + boolean allRolledBack = true; + + for (int i = 1; i < srvCnt; i++) { + boolean rolledBack = grid(i).context().cache().context().tm().activeTransactions().stream().allMatch(tx -> tx.state() == ROLLED_BACK); + + allRolledBack &= rolledBack; + } + + return allRolledBack; + } + }, 3_000); + + for (int i = 1; i < srvCnt; i++) { + IgniteCache backupCache = grid(i).cache(DEFAULT_CACHE_NAME); + + int size = backupCache.query(new SqlFieldsQuery("select * from Integer")).getAll().size(); + + long backupCntr = getUpdateCounter(grid(i), keys.get(0)); + + assertEquals(range * 2, size); + assertEquals(primaryUpdCntr, backupCntr); + } + + assertTrue(latch.await(5, SECONDS)); + + assertEquals(range * 2, arrivedEvts.size()); + + cur.close(); + } + + /** + * @param primaryCache Cache. + * @param size Number of keys. + * @return Keys belong to a given part. 
+ * @throws Exception If failed. + */ + private List singlePartKeys(IgniteCache primaryCache, int size, int part) throws Exception { + Ignite ignite = primaryCache.unwrap(Ignite.class); + + List res = new ArrayList<>(); + + final Affinity aff = ignite.affinity(primaryCache.getName()); + + final ClusterNode node = ignite.cluster().localNode(); + + assertTrue(GridTestUtils.waitForCondition(new GridAbsPredicate() { + @Override public boolean apply() { + return aff.primaryPartitions(node).length > 0; + } + }, 5000)); + + int cnt = 0; + + for (int key = 0; key < aff.partitions() * size * 10; key++) { + if (aff.partition(key) == part) { + res.add(key); + + if (++cnt == size) + break; + } + } + + assertEquals(size, res.size()); + + return res; + } + + /** + * @param node Node. + * @param key Key. + * @return Extracts update counter of partition which key belongs to. + */ + private long getUpdateCounter(IgniteEx node, Integer key) { + int partId = node.cachex(DEFAULT_CACHE_NAME).context().affinity().partition(key); + + GridDhtLocalPartition part = node.cachex(DEFAULT_CACHE_NAME).context().dht().topology().localPartition(partId); + + assert part != null; + + return part.updateCounter(); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBulkLoadTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBulkLoadTest.java index 98bbdfc400346..bff6e9781a844 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBulkLoadTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccBulkLoadTest.java @@ -28,6 +28,9 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static 
java.util.Arrays.asList; import static java.util.Collections.singletonList; @@ -35,6 +38,7 @@ /** * */ +@RunWith(JUnit4.class) public class CacheMvccBulkLoadTest extends CacheMvccAbstractTest { /** */ private IgniteCache sqlNexus; @@ -65,6 +69,7 @@ public class CacheMvccBulkLoadTest extends CacheMvccAbstractTest { /** * @throws Exception If failed. */ + @Test public void testCopyStoresData() throws Exception { String csvFilePath = new File(getClass().getResource("mvcc_person.csv").toURI()).getAbsolutePath(); stmt.executeUpdate("copy from '" + csvFilePath + "' into person (id, name) format csv"); @@ -81,6 +86,7 @@ public void testCopyStoresData() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCopyDoesNotOverwrite() throws Exception { sqlNexus.query(q("insert into person values(1, 'Old')")); String csvFilePath = new File(getClass().getResource("mvcc_person.csv").toURI()).getAbsolutePath(); @@ -98,6 +104,7 @@ public void testCopyDoesNotOverwrite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCopyLeavesPartialResultsInCaseOfFailure() throws Exception { String csvFilePath = new File(getClass().getResource("mvcc_person_broken.csv").toURI()).getAbsolutePath(); try { diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccClientReconnectContinuousQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccClientReconnectContinuousQueryTest.java new file mode 100644 index 0000000000000..33e09602b6f04 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccClientReconnectContinuousQueryTest.java @@ -0,0 +1,30 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.ClientReconnectContinuousQueryTest; + +/** + * + */ +public class CacheMvccClientReconnectContinuousQueryTest extends ClientReconnectContinuousQueryTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryBackupQueueTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryBackupQueueTest.java new file mode 100644 index 0000000000000..3a598a2ca3cee --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryBackupQueueTest.java @@ -0,0 +1,30 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryBackupQueueTest; + +/** + * + */ +public class CacheMvccContinuousQueryBackupQueueTest extends IgniteCacheContinuousQueryBackupQueueTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryClientReconnectTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryClientReconnectTest.java new file mode 100644 index 0000000000000..ec622217f987f --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryClientReconnectTest.java @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryClientReconnectTest; +import org.junit.Ignore; +import org.junit.Test; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + +/** + * Mvcc CQ client reconnect test. + */ +public class CacheMvccContinuousQueryClientReconnectTest extends IgniteCacheContinuousQueryClientReconnectTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicMode() { + return TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10537") + @Test + @Override public void testReconnectClient() throws Exception { + super.testReconnectClient(); + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10537") + @Test + @Override public void testReconnectClientAndLeftRouter() throws Exception { + super.testReconnectClientAndLeftRouter(); + } +} \ No newline at end of file diff --git a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticRepeatableReadSeltTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryClientTest.java similarity index 55% rename from modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticRepeatableReadSeltTest.java rename to 
modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryClientTest.java index 4aa693c29c1b3..21488779092b1 100644 --- a/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/CacheGetEntryPessimisticRepeatableReadSeltTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryClientTest.java @@ -14,27 +14,26 @@ * See the License for the specific language governing permissions and * limitations under the License. */ +package org.apache.ignite.internal.processors.cache.mvcc; -package org.apache.ignite.internal.processors.cache; - -import org.apache.ignite.transactions.TransactionConcurrency; -import org.apache.ignite.transactions.TransactionIsolation; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryClientTest; +import org.junit.Ignore; +import org.junit.Test; /** - * Test getEntry and getEntries methods. + * Mvcc CQ client test. 
*/ -public class CacheGetEntryPessimisticRepeatableReadSeltTest extends CacheGetEntryAbstractTest { +public class CacheMvccContinuousQueryClientTest extends IgniteCacheContinuousQueryClientTest { /** {@inheritDoc} */ - @Override protected TransactionConcurrency concurrency() { - return TransactionConcurrency.PESSIMISTIC; + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; } /** {@inheritDoc} */ - @Override protected TransactionIsolation isolation() { - return TransactionIsolation.REPEATABLE_READ; - } - - @Override public void testReplicatedTransactional() throws Exception { - super.testReplicatedTransactional(); + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10769") + @Test + @Override public void testNodeJoinsRestartQuery() throws Exception { + super.testNodeJoinsRestartQuery(); } } diff --git a/modules/web-console/frontend/app/modules/branding/header-title.directive.js b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryImmutableEntryTest.java similarity index 63% rename from modules/web-console/frontend/app/modules/branding/header-title.directive.js rename to modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryImmutableEntryTest.java index f67439c0a64da..bef9c70998161 100644 --- a/modules/web-console/frontend/app/modules/branding/header-title.directive.js +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryImmutableEntryTest.java @@ -14,31 +14,17 @@ * See the License for the specific language governing permissions and * limitations under the License. */ +package org.apache.ignite.internal.processors.cache.mvcc; -const template = ` -
-    {{::title.text}}
    -`; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryImmutableEntryTest; /** - * @param {import('./branding.service').default} branding + * */ -export default function factory(branding) { - function controller() { - const ctrl = this; - - ctrl.text = branding.headerText; +public class CacheMvccContinuousQueryImmutableEntryTest extends IgniteCacheContinuousQueryImmutableEntryTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; } - - return { - restrict: 'E', - template, - controller, - controllerAs: 'title', - replace: true - }; } - -factory.$inject = ['IgniteBranding']; - diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryMultiNodesFilteringTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryMultiNodesFilteringTest.java new file mode 100644 index 0000000000000..714e83410ef8d --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryMultiNodesFilteringTest.java @@ -0,0 +1,30 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryMultiNodesFilteringTest; + +/** + * + */ +public class CacheMvccContinuousQueryMultiNodesFilteringTest extends GridCacheContinuousQueryMultiNodesFilteringTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryPartitionedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryPartitionedSelfTest.java new file mode 100644 index 0000000000000..80b039d97dad3 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryPartitionedSelfTest.java @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheMode; + +/** + * Mvcc continuous query test for partitioned cache. + */ +public class CacheMvccContinuousQueryPartitionedSelfTest extends CacheMvccAbstractContinuousQuerySelfTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.PARTITIONED; + } +} \ No newline at end of file diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryPartitionedTxOneNodeTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryPartitionedTxOneNodeTest.java new file mode 100644 index 0000000000000..795932e26546c --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryPartitionedTxOneNodeTest.java @@ -0,0 +1,36 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedTxOneNodeTest; + +/** + * Mvcc continuous query test for one node. + */ +public class CacheMvccContinuousQueryPartitionedTxOneNodeTest extends GridCacheContinuousQueryReplicatedTxOneNodeTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.PARTITIONED; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryReplicatedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryReplicatedSelfTest.java new file mode 100644 index 0000000000000..c9adbf95183ad --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryReplicatedSelfTest.java @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheMode; + +/** + * Mvcc continuous query test for replicated cache. + */ +public class CacheMvccContinuousQueryReplicatedSelfTest extends CacheMvccAbstractContinuousQuerySelfTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.REPLICATED; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryReplicatedTxOneNodeTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryReplicatedTxOneNodeTest.java new file mode 100644 index 0000000000000..d522ee9b9af9d --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousQueryReplicatedTxOneNodeTest.java @@ -0,0 +1,37 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedTxOneNodeTest; + +/** + * Mvcc continuous query test for one node. + */ +public class CacheMvccContinuousQueryReplicatedTxOneNodeTest extends GridCacheContinuousQueryReplicatedTxOneNodeTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.REPLICATED; + } +} + diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerClientSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerClientSelfTest.java new file mode 100644 index 0000000000000..696b7e267d472 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerClientSelfTest.java @@ -0,0 +1,42 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerClientSelfTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheMvccContinuousWithTransformerClientSelfTest extends CacheContinuousWithTransformerClientSelfTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311") + @Test + @Override public void testExpired() throws Exception { + // No-op. + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerPartitionedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerPartitionedSelfTest.java new file mode 100644 index 0000000000000..a85b8dfcd3fa6 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerPartitionedSelfTest.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerReplicatedSelfTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheMvccContinuousWithTransformerPartitionedSelfTest extends CacheContinuousWithTransformerReplicatedSelfTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.PARTITIONED; + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311") + @Test + @Override public void testExpired() throws Exception { + // No-op. 
+ } +} + diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerReplicatedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerReplicatedSelfTest.java new file mode 100644 index 0000000000000..b05c878e61c34 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccContinuousWithTransformerReplicatedSelfTest.java @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerReplicatedSelfTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * + */ +@RunWith(JUnit4.class) +public class CacheMvccContinuousWithTransformerReplicatedSelfTest + extends CacheContinuousWithTransformerReplicatedSelfTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7311") + @Test + @Override public void testExpired() throws Exception { + // No-op. + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccDmlSimpleTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccDmlSimpleTest.java index 7f141ca36c4b1..dcd19a64aca4d 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccDmlSimpleTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccDmlSimpleTest.java @@ -28,12 +28,16 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.query.IgniteSQLException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.Arrays.asList; /** * */ +@RunWith(JUnit4.class) public class CacheMvccDmlSimpleTest extends CacheMvccAbstractTest { /** */ private IgniteCache cache; @@ -58,6 +62,7 @@ public class CacheMvccDmlSimpleTest extends CacheMvccAbstractTest { /** * @throws Exception if failed. 
*/ + @Test public void testInsert() throws Exception { int cnt = update("insert into Integer(_key, _val) values(1, 1),(2, 2)"); @@ -78,6 +83,7 @@ public void testInsert() throws Exception { /** * @throws Exception if failed. */ + @Test public void testMerge() throws Exception { { int cnt = update("merge into Integer(_key, _val) values(1, 1),(2, 2)"); @@ -97,6 +103,7 @@ public void testMerge() throws Exception { /** * @throws Exception if failed. */ + @Test public void testUpdate() throws Exception { { int cnt = update("update Integer set _val = 42 where _key = 42"); @@ -139,6 +146,7 @@ public void testUpdate() throws Exception { /** * @throws Exception if failed. */ + @Test public void testDelete() throws Exception { { int cnt = update("delete from Integer where _key = 42"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSelectForUpdateQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSelectForUpdateQueryTest.java index 12209abd4f389..3e849edfefb23 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSelectForUpdateQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSelectForUpdateQueryTest.java @@ -18,19 +18,24 @@ package org.apache.ignite.internal.processors.cache.mvcc; import org.apache.ignite.cache.CacheMode; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** */ +@RunWith(JUnit4.class) public class CacheMvccPartitionedSelectForUpdateQueryTest extends CacheMvccSelectForUpdateQueryAbstractTest { /** {@inheritDoc} */ - public CacheMode cacheMode() { + @Override public CacheMode cacheMode() { return PARTITIONED; } /** * */ + @Test public void testSelectForUpdateDistributedSegmented() throws Exception { 
doTestSelectForUpdateDistributed("PersonSeg", false); } @@ -38,6 +43,7 @@ public void testSelectForUpdateDistributedSegmented() throws Exception { /** * */ + @Test public void testSelectForUpdateLocalSegmented() throws Exception { doTestSelectForUpdateLocal("PersonSeg", false); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlCoordinatorFailoverTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlCoordinatorFailoverTest.java index 1362b4a6c7d1e..0cabf65d7b190 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlCoordinatorFailoverTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlCoordinatorFailoverTest.java @@ -18,6 +18,10 @@ package org.apache.ignite.internal.processors.cache.mvcc; import org.apache.ignite.cache.CacheMode; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SCAN; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SQL; @@ -28,6 +32,7 @@ /** * SQL Mvcc coordinator failover test for partitioned caches. */ +@RunWith(JUnit4.class) public class CacheMvccPartitionedSqlCoordinatorFailoverTest extends CacheMvccAbstractSqlCoordinatorFailoverTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -37,6 +42,8 @@ public class CacheMvccPartitionedSqlCoordinatorFailoverTest extends CacheMvccAbs /** * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10750") + @Test public void testAccountsTxSql_ClientServer_Backups2_CoordinatorFails() throws Exception { accountsTxReadAll(4, 2, 2, DFLT_PARTITION_COUNT, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL, DML, DFLT_TEST_TIME, RestartMode.RESTART_CRD); @@ -45,6 +52,8 @@ public void testAccountsTxSql_ClientServer_Backups2_CoordinatorFails() throws Ex /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testAccountsTxSql_Server_Backups1_CoordinatorFails_Persistence() throws Exception { persistence = true; @@ -55,6 +64,8 @@ public void testAccountsTxSql_Server_Backups1_CoordinatorFails_Persistence() thr /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testPutAllGetAll_ClientServer_Backups3_RestartCoordinator_ScanDml() throws Exception { putAllGetAll(RestartMode.RESTART_CRD , 5, 2, 3, DFLT_PARTITION_COUNT, new InitIndexing(Integer.class, Integer.class), SCAN, DML); @@ -63,6 +74,7 @@ public void testPutAllGetAll_ClientServer_Backups3_RestartCoordinator_ScanDml() /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_ClientServer_Backups1_RestartCoordinator_ScanDml_Persistence() throws Exception { persistence = true; @@ -73,6 +85,7 @@ public void testPutAllGetAll_ClientServer_Backups1_RestartCoordinator_ScanDml_Pe /** * @throws Exception If failed. */ + @Test public void testPutAllGetAll_ClientServer_Backups2_RestartCoordinator_SqlDml_Persistence() throws Exception { persistence = true; @@ -83,6 +96,7 @@ public void testPutAllGetAll_ClientServer_Backups2_RestartCoordinator_SqlDml_Per /** * @throws Exception If failed. 
*/ + @Test public void testPutAllGetAll_ClientServer_Backups1_RestartCoordinator_SqlDml() throws Exception { putAllGetAll(RestartMode.RESTART_CRD, 2, 1, 1, 64, new InitIndexing(Integer.class, Integer.class), SQL, DML); @@ -91,6 +105,55 @@ public void testPutAllGetAll_ClientServer_Backups1_RestartCoordinator_SqlDml() t /** * @throws Exception If failed. */ + @Test + public void testPutAllGetAll_ClientServer_Backups1_RestartRandomSrv_SqlDml() throws Exception { + putAllGetAll(RestartMode.RESTART_RND_SRV, 3, 1, 1, DFLT_PARTITION_COUNT, + new InitIndexing(Integer.class, Integer.class), SQL, DML); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutAllGetAll_ClientServer_Backups2_RestartRandomSrv_SqlDml() throws Exception { + putAllGetAll(RestartMode.RESTART_RND_SRV, 4, 1, 2, DFLT_PARTITION_COUNT, + new InitIndexing(Integer.class, Integer.class), SQL, DML); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutAllGetAll_Server_Backups2_RestartRandomSrv_SqlDml() throws Exception { + putAllGetAll(RestartMode.RESTART_RND_SRV, 4, 0, 2, DFLT_PARTITION_COUNT, + new InitIndexing(Integer.class, Integer.class), SQL, DML); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test + public void testPutAllGetAll_Server_Backups1_SinglePartition_RestartRandomSrv_SqlDml() throws Exception { + putAllGetAll(RestartMode.RESTART_RND_SRV, 4, 0, 1, 1, + new InitIndexing(Integer.class, Integer.class), SQL, DML); + } + + /** + * @throws Exception If failed. + */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test + public void testPutAllGetAll_ClientServer_Backups1_SinglePartition_RestartRandomSrv_SqlDml() throws Exception { + putAllGetAll(RestartMode.RESTART_RND_SRV, 3, 1, 1, 1, + new InitIndexing(Integer.class, Integer.class), SQL, DML); + } + + /** + * @throws Exception If failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testUpdate_N_Objects_ClientServer_Backups2_Sql() throws Exception { updateNObjectsTest(7, 3, 2, 2, DFLT_PARTITION_COUNT, DFLT_TEST_TIME, new InitIndexing(Integer.class, Integer.class), SQL, DML, RestartMode.RESTART_CRD); @@ -99,6 +162,8 @@ public void testUpdate_N_Objects_ClientServer_Backups2_Sql() throws Exception { /** * @throws Exception If failed. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10752") + @Test public void testUpdate_N_Objects_ClientServer_Backups1_Sql_Persistence() throws Exception { persistence = true; @@ -109,6 +174,7 @@ public void testUpdate_N_Objects_ClientServer_Backups1_Sql_Persistence() throws /** * @throws Exception If failed. */ + @Test public void testSqlReadInProgressCoordinatorFails() throws Exception { readInProgressCoordinatorFails(false, false, PESSIMISTIC, REPEATABLE_READ, SQL, DML, new InitIndexing(Integer.class, Integer.class)); } @@ -116,6 +182,7 @@ public void testSqlReadInProgressCoordinatorFails() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlReadInsideTxInProgressCoordinatorFails() throws Exception { readInProgressCoordinatorFails(false, true, PESSIMISTIC, REPEATABLE_READ, SQL, DML, new InitIndexing(Integer.class, Integer.class)); } @@ -123,6 +190,7 @@ public void testSqlReadInsideTxInProgressCoordinatorFails() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlReadInProgressCoordinatorFails_ReadDelay() throws Exception { readInProgressCoordinatorFails(true, false, PESSIMISTIC, REPEATABLE_READ, SQL, DML, new InitIndexing(Integer.class, Integer.class)); } @@ -130,6 +198,7 @@ public void testSqlReadInProgressCoordinatorFails_ReadDelay() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSqlReadInsideTxInProgressCoordinatorFails_ReadDelay() throws Exception { readInProgressCoordinatorFails(true, true, PESSIMISTIC, REPEATABLE_READ, SQL, DML, new InitIndexing(Integer.class, Integer.class)); } @@ -137,6 +206,7 @@ public void testSqlReadInsideTxInProgressCoordinatorFails_ReadDelay() throws Exc /** * @throws Exception If failed. */ + @Test public void testReadInProgressCoordinatorFailsSimple_FromServer() throws Exception { readInProgressCoordinatorFailsSimple(false, new InitIndexing(Integer.class, Integer.class), SQL, DML); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesTest.java index 199cfad017b56..02f091d74dff4 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesTest.java @@ -24,7 +24,7 @@ /** */ public class CacheMvccPartitionedSqlTxQueriesTest extends CacheMvccSqlTxQueriesAbstractTest { /** {@inheritDoc} */ - protected CacheMode cacheMode() { + @Override protected CacheMode cacheMode() { return PARTITIONED; } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesWithReducerTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesWithReducerTest.java index 03de543f307d3..86966c199f2f7 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesWithReducerTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccPartitionedSqlTxQueriesWithReducerTest.java @@ -17,14 +17,68 @@ 
package org.apache.ignite.internal.processors.cache.mvcc; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.apache.ignite.Ignite; import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** */ +@RunWith(JUnit4.class) public class CacheMvccPartitionedSqlTxQueriesWithReducerTest extends CacheMvccSqlTxQueriesWithReducerAbstractTest { /** {@inheritDoc} */ - protected CacheMode cacheMode() { + @Override protected CacheMode cacheMode() { return PARTITIONED; } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testQueryUpdateOnUnstableTopologyDoesNotCauseDeadlock() throws Exception { + ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) + .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class); + + testSpi = true; + + Ignite updateNode = startGrids(3); + + CountDownLatch latch = new CountDownLatch(1); + + TestRecordingCommunicationSpi spi = TestRecordingCommunicationSpi.spi(grid(1)); + + spi.blockMessages((node, msg) -> { + if (msg instanceof GridDhtPartitionsSingleMessage) { + latch.countDown(); + + return true; + } + + return false; + }); + + CompletableFuture.runAsync(() -> stopGrid(2)); + + assertTrue(latch.await(TX_TIMEOUT, TimeUnit.MILLISECONDS)); + + CompletableFuture queryFut = CompletableFuture.runAsync(() -> updateNode + .cache(DEFAULT_CACHE_NAME) + .query(new SqlFieldsQuery("INSERT INTO MvccTestSqlIndexValue (_key, idxVal1) VALUES (1,1),(2,2),(3,3)")) + .getAll()); + + Thread.sleep(300); + + spi.stopBlock(); + + queryFut.get(TX_TIMEOUT, TimeUnit.MILLISECONDS); + } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSelectForUpdateQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSelectForUpdateQueryTest.java index a45831942b889..b123e57fcacfc 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSelectForUpdateQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSelectForUpdateQueryTest.java @@ -24,7 +24,7 @@ /** */ public class CacheMvccReplicatedSelectForUpdateQueryTest extends CacheMvccSelectForUpdateQueryAbstractTest { /** {@inheritDoc} */ - public CacheMode cacheMode() { + @Override public CacheMode cacheMode() { return REPLICATED; } } diff --git 
a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSqlTxQueriesTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSqlTxQueriesTest.java index bde2c5dd6d9e3..4554a7f67c552 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSqlTxQueriesTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccReplicatedSqlTxQueriesTest.java @@ -28,6 +28,10 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.query.GridQueryProcessor; import org.apache.ignite.transactions.Transaction; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -36,9 +40,10 @@ import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; /** */ +@RunWith(JUnit4.class) public class CacheMvccReplicatedSqlTxQueriesTest extends CacheMvccSqlTxQueriesAbstractTest { /** {@inheritDoc} */ - protected CacheMode cacheMode() { + @Override protected CacheMode cacheMode() { return REPLICATED; } @@ -53,6 +58,7 @@ protected CacheMode cacheMode() { /** * @throws Exception If failed. */ + @Test public void testReplicatedJoinPartitionedClient() throws Exception { checkReplicatedJoinPartitioned(true); } @@ -60,6 +66,7 @@ public void testReplicatedJoinPartitionedClient() throws Exception { /** * @throws Exception If failed. */ + @Test public void testReplicatedJoinPartitionedServer() throws Exception { checkReplicatedJoinPartitioned(false); } @@ -144,6 +151,8 @@ public void checkReplicatedJoinPartitioned(boolean client) throws Exception { * * @throws Exception If failed. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10763") + @Test public void testReplicatedAndPartitionedUpdateSingleTransaction() throws Exception { ccfgs = new CacheConfiguration[] { cacheConfiguration(REPLICATED, FULL_SYNC, 0, DFLT_PARTITION_COUNT) diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSelectForUpdateQueryAbstractTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSelectForUpdateQueryAbstractTest.java index 00c748e68e088..b5631e228b0a1 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSelectForUpdateQueryAbstractTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSelectForUpdateQueryAbstractTest.java @@ -30,6 +30,7 @@ import org.apache.ignite.cache.query.FieldsQueryCursor; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.internal.util.typedef.X; @@ -38,6 +39,9 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest.connect; @@ -46,6 +50,7 @@ /** * Test for {@code SELECT FOR UPDATE} queries. 
*/ +@RunWith(JUnit4.class) public abstract class CacheMvccSelectForUpdateQueryAbstractTest extends CacheMvccAbstractTest { /** */ private static final int CACHE_SIZE = 50; @@ -57,7 +62,7 @@ public abstract class CacheMvccSelectForUpdateQueryAbstractTest extends CacheMvc disableScheduledVacuum = getName().equals("testSelectForUpdateAfterAbortedTx"); - startGrids(3); + IgniteEx grid = startGrid(0); CacheConfiguration seg = new CacheConfiguration("segmented*"); @@ -66,11 +71,9 @@ public abstract class CacheMvccSelectForUpdateQueryAbstractTest extends CacheMvc if (seg.getCacheMode() == PARTITIONED) seg.setQueryParallelism(4); - grid(0).addCacheConfiguration(seg); + grid.addCacheConfiguration(seg); - Thread.sleep(1000L); - - try (Connection c = connect(grid(0))) { + try (Connection c = connect(grid)) { execute(c, "create table person (id int primary key, firstName varchar, lastName varchar) " + "with \"atomicity=transactional_snapshot,cache_name=Person\""); @@ -90,14 +93,15 @@ public abstract class CacheMvccSelectForUpdateQueryAbstractTest extends CacheMvc tx.commit(); } } + + startGridsMultiThreaded(1, 2); } /** * */ + @Test public void testSelectForUpdateDistributed() throws Exception { - fail("https://issues.apache.org/jira/browse/IGNITE-9724"); - doTestSelectForUpdateDistributed("Person", false); } @@ -105,6 +109,7 @@ public void testSelectForUpdateDistributed() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSelectForUpdateLocal() throws Exception { doTestSelectForUpdateLocal("Person", false); } @@ -112,6 +117,7 @@ public void testSelectForUpdateLocal() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSelectForUpdateOutsideTxDistributed() throws Exception { doTestSelectForUpdateDistributed("Person", true); } @@ -119,6 +125,7 @@ public void testSelectForUpdateOutsideTxDistributed() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSelectForUpdateOutsideTxLocal() throws Exception { doTestSelectForUpdateLocal("Person", true); } @@ -162,6 +169,8 @@ void doTestSelectForUpdateLocal(String cacheName, boolean outsideTx) throws Exce * @throws Exception If failed. */ void doTestSelectForUpdateDistributed(String cacheName, boolean outsideTx) throws Exception { + awaitPartitionMapExchange(); + Ignite node = grid(0); IgniteCache cache = node.cache(cacheName); @@ -192,6 +201,7 @@ void doTestSelectForUpdateDistributed(String cacheName, boolean outsideTx) throw /** * */ + @Test public void testSelectForUpdateWithUnion() { assertQueryThrows("select id from person union select 1 for update", "SELECT UNION FOR UPDATE is not supported."); @@ -200,6 +210,7 @@ public void testSelectForUpdateWithUnion() { /** * */ + @Test public void testSelectForUpdateWithJoin() { assertQueryThrows("select p1.id from person p1 join person p2 on p1.id = p2.id for update", "SELECT FOR UPDATE with joins is not supported."); @@ -208,6 +219,7 @@ public void testSelectForUpdateWithJoin() { /** * */ + @Test public void testSelectForUpdateWithLimit() { assertQueryThrows("select id from person limit 0,5 for update", "LIMIT/OFFSET clauses are not supported for SELECT FOR UPDATE."); @@ -216,6 +228,7 @@ public void testSelectForUpdateWithLimit() { /** * */ + @Test public void testSelectForUpdateWithGroupings() { assertQueryThrows("select count(*) from person for update", "SELECT FOR UPDATE with aggregates and/or GROUP BY is not supported."); @@ -227,6 +240,7 @@ public void testSelectForUpdateWithGroupings() { /** * @throws Exception If failed. 
*/ + @Test public void testSelectForUpdateAfterAbortedTx() throws Exception { assert disableScheduledVacuum; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeTest.java index fe1304a7d8a9a..acb3d34f21f6b 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSizeTest.java @@ -34,12 +34,16 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; import org.apache.ignite.internal.processors.query.IgniteSQLException; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CachePeekMode.BACKUP; /** * */ +@RunWith(JUnit4.class) public class CacheMvccSizeTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -100,6 +104,7 @@ private void checkSizeModificationByOperation(Consumer> before /** * @throws Exception if failed. */ + @Test public void testSql() throws Exception { startGridsMultiThreaded(2); @@ -127,7 +132,7 @@ public void testSql() throws Exception { } } }, - true, 0); + false, 0); checkSizeModificationByOperation("merge into person(id, name) values(1, 'a')", true, 1); @@ -215,6 +220,7 @@ public void testSql() throws Exception { /** * @throws Exception if failed. */ + @Test public void testInsertDeleteConcurrent() throws Exception { startGridsMultiThreaded(2); @@ -263,6 +269,7 @@ private int update(SqlFieldsQuery qry, IgniteCache cache) { /** * @throws Exception if failed. 
*/ + @Test public void testWriteConflictDoesNotChangeSize() throws Exception { startGridsMultiThreaded(2); @@ -298,7 +305,7 @@ public void testWriteConflictDoesNotChangeSize() throws Exception { } catch (Exception e) { if (e.getCause().getCause() instanceof IgniteSQLException) - assertTrue(e.getMessage().toLowerCase().contains("version mismatch")); + assertTrue(e.getMessage().contains("Failed to finish transaction because it has been rolled back")); else { e.printStackTrace(); @@ -316,6 +323,7 @@ public void testWriteConflictDoesNotChangeSize() throws Exception { /** * @throws Exception if failed. */ + @Test public void testDeleteChangesSizeAfterUnlock() throws Exception { startGridsMultiThreaded(2); @@ -362,6 +370,7 @@ public void testDeleteChangesSizeAfterUnlock() throws Exception { /** * @throws Exception if failed. */ + @Test public void testDataStreamerModifiesReplicatedCacheSize() throws Exception { startGridsMultiThreaded(2); @@ -391,6 +400,7 @@ public void testDataStreamerModifiesReplicatedCacheSize() throws Exception { /** * @throws Exception if failed. */ + @Test public void testSizeIsConsistentAfterRebalance() throws Exception { IgniteEx ignite = startGrid(0); @@ -415,6 +425,7 @@ public void testSizeIsConsistentAfterRebalance() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testSizeIsConsistentAfterRebalanceDuringInsert() throws Exception { IgniteEx ignite = startGrid(0); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlConfigurationValidationTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlConfigurationValidationTest.java index 7e6c9e884fff5..b10a847e50503 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlConfigurationValidationTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlConfigurationValidationTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Configuration validation for SQL configured caches. */ +@RunWith(JUnit4.class) public class CacheMvccSqlConfigurationValidationTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -39,6 +43,7 @@ public class CacheMvccSqlConfigurationValidationTest extends CacheMvccAbstractTe /** * @throws Exception If failed. */ + @Test public void testCacheGroupAtomicityModeMismatch1() throws Exception { Ignite node = startGrid(); @@ -62,6 +67,7 @@ public void testCacheGroupAtomicityModeMismatch1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testCacheGroupAtomicityModeMismatch2() throws Exception { Ignite node = startGrid(); @@ -84,6 +90,7 @@ public void testCacheGroupAtomicityModeMismatch2() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testTxDifferentMvccSettingsTransactional() throws Exception { ccfg = defaultCacheConfiguration().setSqlSchema("PUBLIC"); Ignite node = startGrid(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlContinuousQueryPartitionedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlContinuousQueryPartitionedSelfTest.java new file mode 100644 index 0000000000000..cef553e654877 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlContinuousQueryPartitionedSelfTest.java @@ -0,0 +1,30 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheMode; + +/** + * Mvcc continuous query test for partitioned SQL cache. 
+ */ +public class CacheMvccSqlContinuousQueryPartitionedSelfTest extends CacheMvccAbstractSqlContinuousQuerySelfTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.PARTITIONED; + } +} + diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlContinuousQueryReplicatedSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlContinuousQueryReplicatedSelfTest.java new file mode 100644 index 0000000000000..948e6e17733f9 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlContinuousQueryReplicatedSelfTest.java @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.ignite.internal.processors.cache.mvcc; + +import org.apache.ignite.cache.CacheMode; + +/** + * Mvcc continuous query test for replicated SQL cache. 
+ */ +public class CacheMvccSqlContinuousQueryReplicatedSelfTest extends CacheMvccAbstractSqlContinuousQuerySelfTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + return CacheMode.REPLICATED; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlLockTimeoutTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlLockTimeoutTest.java index eae79a5db4c02..5cc6efebdb677 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlLockTimeoutTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlLockTimeoutTest.java @@ -34,6 +34,9 @@ import org.apache.ignite.configuration.TransactionConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheMode.REPLICATED; @@ -41,6 +44,7 @@ import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; /** */ +@RunWith(JUnit4.class) public class CacheMvccSqlLockTimeoutTest extends CacheMvccAbstractTest { /** */ private static final int TIMEOUT_MILLIS = 200; @@ -61,6 +65,7 @@ public class CacheMvccSqlLockTimeoutTest extends CacheMvccAbstractTest { /** * @throws Exception if failed. */ + @Test public void testLockTimeoutsForPartitionedCache() throws Exception { checkLockTimeouts(partitionedCacheConfig()); } @@ -68,6 +73,7 @@ public void testLockTimeoutsForPartitionedCache() throws Exception { /** * @throws Exception if failed. 
*/ + @Test public void testLockTimeoutsForReplicatedCache() throws Exception { checkLockTimeouts(replicatedCacheConfig()); } @@ -75,6 +81,7 @@ public void testLockTimeoutsForReplicatedCache() throws Exception { /** * @throws Exception if failed. */ + @Test public void testLockTimeoutsAfterDefaultTxTimeoutForPartitionedCache() throws Exception { checkLockTimeoutsAfterDefaultTxTimeout(partitionedCacheConfig()); } @@ -82,6 +89,7 @@ public void testLockTimeoutsAfterDefaultTxTimeoutForPartitionedCache() throws Ex /** * @throws Exception if failed. */ + @Test public void testLockTimeoutsAfterDefaultTxTimeoutForReplicatedCache() throws Exception { checkLockTimeoutsAfterDefaultTxTimeout(replicatedCacheConfig()); } @@ -89,6 +97,7 @@ public void testLockTimeoutsAfterDefaultTxTimeoutForReplicatedCache() throws Exc /** * @throws Exception if failed. */ + @Test public void testConcurrentForPartitionedCache() throws Exception { checkTimeoutsConcurrent(partitionedCacheConfig()); } @@ -96,6 +105,7 @@ public void testConcurrentForPartitionedCache() throws Exception { /** * @throws Exception if failed. 
      */
+    @Test
     public void testConcurrentForReplicatedCache() throws Exception {
         checkTimeoutsConcurrent(replicatedCacheConfig());
     }
@@ -343,7 +353,7 @@ private void mergeInRandomOrder(IgniteEx ignite, IgniteCache cache, List
+        IgniteCache nonMvccCache = node.createCache(new CacheConfiguration<>("no-mvcc-cache")
+            .setAtomicityMode(TRANSACTIONAL).setIndexedTypes(Integer.class, Integer.class));
+
+        nonMvccCache.put(1,1);
+
+        try (Transaction tx = node.transactions().txStart(OPTIMISTIC, READ_COMMITTED)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, READ_COMMITTED)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, SERIALIZABLE)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer").setLocal(true)).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, SERIALIZABLE)) {
+            nonMvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer").setLocal(true)).getAll();
+
+            tx.commit();
+        }
+    }
+
+    /**
+     * @throws Exception If failed
+     */
+    @Test
+    public void testSqlTransactionModesMvcc() throws Exception {
+        IgniteEx node = startGrid(0);
+
+        IgniteCache mvccCache = node.createCache(new CacheConfiguration<>("mvcc-cache")
+            .setAtomicityMode(TRANSACTIONAL_SNAPSHOT).setIndexedTypes(Integer.class, Integer.class));
+
+        mvccCache.put(1,1);
+
+        GridTestUtils.assertThrows(log, new Callable() {
+            @Override public Void call() throws Exception {
+                try (Transaction tx = node.transactions().txStart(OPTIMISTIC, READ_COMMITTED)) {
+                    mvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+                    tx.commit();
+                }
+
+                return null;
+            }
+        }, CacheException.class, "Only pessimistic transactions are supported when MVCC is enabled");
+
+        GridTestUtils.assertThrows(log, new Callable() {
+            @Override public Void call() throws Exception {
+                try (Transaction tx = node.transactions().txStart(OPTIMISTIC, REPEATABLE_READ)) {
+                    mvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+                    tx.commit();
+                }
+
+                return null;
+            }
+        }, CacheException.class, "Only pessimistic transactions are supported when MVCC is enabled");
+
+        GridTestUtils.assertThrows(log, new Callable() {
+            @Override public Void call() throws Exception {
+                try (Transaction tx = node.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
+                    mvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+                    tx.commit();
+                }
+
+                return null;
+            }
+        }, CacheException.class, "Only pessimistic transactions are supported when MVCC is enabled");
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, READ_COMMITTED)) {
+            mvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            mvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, SERIALIZABLE)) {
+            mvccCache.query(new SqlFieldsQuery("SELECT * FROM Integer")).getAll();
+
+            tx.commit();
+        }
+    }
+
+    /**
+     * @throws Exception If failed
+     */
+    @Test
+    public void testConsequentMvccNonMvccOperations() throws Exception {
+        IgniteEx node = startGrid(0);
+
+        IgniteCache mvccCache = node.createCache(new CacheConfiguration<>("mvcc-cache")
+            .setAtomicityMode(TRANSACTIONAL_SNAPSHOT).setIndexedTypes(Integer.class, Integer.class));
+
+        IgniteCache nonMvccCache = node.createCache(new CacheConfiguration<>("no-mvcc-cache")
+            .setAtomicityMode(TRANSACTIONAL).setIndexedTypes(Integer.class, Integer.class));
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            nonMvccCache.put(1, 1);
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            mvccCache.query(new SqlFieldsQuery("INSERT INTO Integer (_key, _val) VALUES (3,3)")).getAll();
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            nonMvccCache.put(2, 2);
+
+            tx.commit();
+        }
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            mvccCache.query(new SqlFieldsQuery("INSERT INTO Integer (_key, _val) VALUES (5,5)")).getAll();
+
+            tx.commit();
+        }
+    }
+}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesAbstractTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesAbstractTest.java
index 707636232f9b3..863df2eb529fa 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesAbstractTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesAbstractTest.java
@@ -50,6 +50,7 @@
 import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal;
 import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow;
 import org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter;
+import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode;
 import org.apache.ignite.internal.processors.cache.query.SqlFieldsQueryEx;
 import org.apache.ignite.internal.processors.query.IgniteSQLException;
 import org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException;
@@ -61,6 +62,10 @@
 import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.transactions.Transaction;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
 import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SQL;
@@ -74,10 +79,12 @@
 /**
  * Tests for transactional SQL.
  */
+@RunWith(JUnit4.class)
 public abstract class CacheMvccSqlTxQueriesAbstractTest extends CacheMvccAbstractTest {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAccountsTxDmlSql_SingleNode_SinglePartition() throws Exception {
         accountsTxReadAll(1, 0, 0, 1,
             new InitIndexing(Integer.class, MvccTestAccount.class), false, SQL, DML);
@@ -86,6 +93,7 @@ public void testAccountsTxDmlSql_SingleNode_SinglePartition() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAccountsTxDmlSql_WithRemoves_SingleNode_SinglePartition() throws Exception {
         accountsTxReadAll(1, 0, 0, 1,
             new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL, DML);
@@ -94,6 +102,7 @@ public void testAccountsTxDmlSql_WithRemoves_SingleNode_SinglePartition() throws
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAccountsTxDmlSql_SingleNode() throws Exception {
         accountsTxReadAll(1, 0, 0, 64,
             new InitIndexing(Integer.class, MvccTestAccount.class), false, SQL, DML);
@@ -102,6 +111,7 @@ public void testAccountsTxDmlSql_SingleNode() throws Exception {
     /**
      * @throws Exception If failed.
*/ + @Test public void testAccountsTxDmlSql_SingleNode_Persistence() throws Exception { persistence = true; @@ -111,6 +121,7 @@ public void testAccountsTxDmlSql_SingleNode_Persistence() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSumSql_SingleNode() throws Exception { accountsTxReadAll(1, 0, 0, 64, new InitIndexing(Integer.class, MvccTestAccount.class), false, SQL_SUM, DML); @@ -119,6 +130,7 @@ public void testAccountsTxDmlSumSql_SingleNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSumSql_WithRemoves_SingleNode() throws Exception { accountsTxReadAll(1, 0, 0, 64, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL_SUM, DML); @@ -127,6 +139,7 @@ public void testAccountsTxDmlSumSql_WithRemoves_SingleNode() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSumSql_WithRemoves__ClientServer_Backups0() throws Exception { accountsTxReadAll(4, 2, 0, 64, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL_SUM, DML); @@ -135,6 +148,7 @@ public void testAccountsTxDmlSumSql_WithRemoves__ClientServer_Backups0() throws /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSumSql_ClientServer_Backups2() throws Exception { accountsTxReadAll(4, 2, 2, 64, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL_SUM, DML); @@ -143,6 +157,7 @@ public void testAccountsTxDmlSumSql_ClientServer_Backups2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSql_WithRemoves_SingleNode() throws Exception { accountsTxReadAll(1, 0, 0, 64, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL, DML); @@ -151,6 +166,7 @@ public void testAccountsTxDmlSql_WithRemoves_SingleNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testAccountsTxDmlSql_WithRemoves_SingleNode_Persistence() throws Exception { persistence = true; @@ -160,6 +176,7 @@ public void testAccountsTxDmlSql_WithRemoves_SingleNode_Persistence() throws Exc /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSql_ClientServer_Backups0() throws Exception { accountsTxReadAll(4, 2, 0, 64, new InitIndexing(Integer.class, MvccTestAccount.class), false, SQL, DML); @@ -168,6 +185,7 @@ public void testAccountsTxDmlSql_ClientServer_Backups0() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups0() throws Exception { accountsTxReadAll(4, 2, 0, 64, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL, DML); @@ -176,6 +194,7 @@ public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups0() throws Exce /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups0_Persistence() throws Exception { persistence = true; @@ -185,6 +204,7 @@ public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups0_Persistence() /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSql_ClientServer_Backups1() throws Exception { accountsTxReadAll(3, 0, 1, 64, new InitIndexing(Integer.class, MvccTestAccount.class), false, SQL, DML); @@ -193,6 +213,7 @@ public void testAccountsTxDmlSql_ClientServer_Backups1() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups1() throws Exception { accountsTxReadAll(4, 2, 1, 64, new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL, DML); @@ -201,6 +222,7 @@ public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups1() throws Exce /** * @throws Exception If failed. 
      */
+    @Test
     public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups1_Persistence() throws Exception {
         persistence = true;
@@ -210,6 +232,7 @@ public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups1_Persistence()
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAccountsTxDmlSql_ClientServer_Backups2() throws Exception {
         accountsTxReadAll(4, 2, 2, 64,
             new InitIndexing(Integer.class, MvccTestAccount.class), false, SQL, DML);
@@ -218,6 +241,7 @@ public void testAccountsTxDmlSql_ClientServer_Backups2() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups2() throws Exception {
         accountsTxReadAll(4, 2, 2, 64,
             new InitIndexing(Integer.class, MvccTestAccount.class), true, SQL, DML);
@@ -226,9 +250,8 @@ public void testAccountsTxDmlSql_WithRemoves_ClientServer_Backups2() throws Exce
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testAccountsTxDmlSql_ClientServer_Backups2_Persistence() throws Exception {
-        fail("https://issues.apache.org/jira/browse/IGNITE-9292");
-
         persistence = true;
 
         testAccountsTxDmlSql_ClientServer_Backups2();
@@ -237,6 +260,55 @@ public void testAccountsTxDmlSql_ClientServer_Backups2_Persistence() throws Exce
     /**
      * @throws Exception If failed.
      */
+    @Test
+    public void testParsingErrorHasNoSideEffect() throws Exception {
+        ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 0, 4)
+            .setIndexedTypes(Integer.class, Integer.class);
+
+        IgniteEx node = startGrid(0);
+
+        IgniteCache cache = node.cache(DEFAULT_CACHE_NAME);
+
+        try (Transaction tx = node.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
+            tx.timeout(TX_TIMEOUT);
+
+            SqlFieldsQuery qry = new SqlFieldsQuery("INSERT INTO Integer (_key, _val) values (1),(2,2),(3,3)");
+
+            try {
+                try (FieldsQueryCursor<List<?>> cur = cache.query(qry)) {
+                    fail("We should not get there.");
+                }
+            }
+            catch (CacheException ex){
+                IgniteSQLException cause = X.cause(ex, IgniteSQLException.class);
+
+                assertNotNull(cause);
+                assertEquals(IgniteQueryErrorCode.PARSING, cause.statusCode());
+
+                assertFalse(tx.isRollbackOnly());
+            }
+
+            qry = new SqlFieldsQuery("INSERT INTO Integer (_key, _val) values (4,4),(5,5),(6,6)");
+
+            try (FieldsQueryCursor<List<?>> cur = cache.query(qry)) {
+                assertEquals(3L, cur.iterator().next().get(0));
+            }
+
+            tx.commit();
+        }
+
+        assertNull(cache.get(1));
+        assertNull(cache.get(2));
+        assertNull(cache.get(3));
+        assertEquals(4, cache.get(4));
+        assertEquals(5, cache.get(5));
+        assertEquals(6, cache.get(6));
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
     public void testQueryInsertStaticCache() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -282,6 +354,7 @@ public void testQueryInsertStaticCache() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryInsertStaticCacheImplicit() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -312,6 +385,7 @@ public void testQueryInsertStaticCacheImplicit() throws Exception {
     /**
      * @throws Exception If failed.
*/ + @Test public void testQueryDeleteStaticCache() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -358,6 +432,7 @@ public void testQueryDeleteStaticCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryFastDeleteStaticCache() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -403,6 +478,7 @@ public void testQueryFastDeleteStaticCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryFastUpdateStaticCache() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -448,6 +524,7 @@ public void testQueryFastUpdateStaticCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryFastDeleteObjectStaticCache() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, MvccTestSqlIndexValue.class); @@ -492,6 +569,7 @@ public void testQueryFastDeleteObjectStaticCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryFastUpdateObjectStaticCache() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, MvccTestSqlIndexValue.class); @@ -536,6 +614,7 @@ public void testQueryFastUpdateObjectStaticCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryDeleteStaticCacheImplicit() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -572,6 +651,7 @@ public void testQueryDeleteStaticCacheImplicit() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueryUpdateStaticCache() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -613,6 +693,7 @@ public void testQueryUpdateStaticCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryUpdateStaticCacheImplicit() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -649,6 +730,7 @@ public void testQueryUpdateStaticCacheImplicit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryDeadlockWithTxTimeout() throws Exception { checkQueryDeadlock(TimeoutMode.TX); } @@ -656,6 +738,7 @@ public void testQueryDeadlockWithTxTimeout() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryDeadlockWithStmtTimeout() throws Exception { checkQueryDeadlock(TimeoutMode.STMT); } @@ -739,6 +822,7 @@ private void checkQueryDeadlock(TimeoutMode timeoutMode) throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryDeadlockImplicit() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 0, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -807,6 +891,7 @@ public void testQueryDeadlockImplicit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryInsertClient() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -856,6 +941,7 @@ public void testQueryInsertClient() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueryInsertClientImplicit() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -890,6 +976,7 @@ public void testQueryInsertClientImplicit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryInsertSubquery() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class, Integer.class, MvccTestSqlIndexValue.class); @@ -933,6 +1020,7 @@ public void testQueryInsertSubquery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryInsertSubqueryImplicit() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class, Integer.class, MvccTestSqlIndexValue.class); @@ -971,6 +1059,7 @@ public void testQueryInsertSubqueryImplicit() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryUpdateSubquery() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class, Integer.class, MvccTestSqlIndexValue.class); @@ -1014,6 +1103,7 @@ public void testQueryUpdateSubquery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryUpdateSubqueryImplicit() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class, Integer.class, MvccTestSqlIndexValue.class); @@ -1052,6 +1142,8 @@ public void testQueryUpdateSubqueryImplicit() throws Exception { /** * @throws Exception If failed. 
      */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10764")
+    @Test
     public void testQueryInsertMultithread() throws Exception {
         final int THREAD_CNT = 8;
         final int BATCH_SIZE = 1000;
@@ -1119,9 +1211,9 @@ public void testQueryInsertMultithread() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-9470")
+    @Test
     public void testQueryInsertUpdateMultithread() throws Exception {
-        fail("https://issues.apache.org/jira/browse/IGNITE-9470");
-
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1228,6 +1320,7 @@ public Void process(MutableEntry entry,
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryInsertVersionConflict() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1287,12 +1380,13 @@ public void testQueryInsertVersionConflict() throws Exception {
         IgniteSQLException ex0 = X.cause(ex.get(), IgniteSQLException.class);
 
         assertNotNull("Exception has not been thrown.", ex0);
-        assertEquals("Mvcc version mismatch.", ex0.getMessage());
+        assertTrue(ex0.getMessage().startsWith("Cannot serialize transaction due to write conflict"));
     }
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInsertAndFastDeleteWithoutVersionConflict() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1323,6 +1417,7 @@ public void testInsertAndFastDeleteWithoutVersionConflict() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testInsertAndFastUpdateWithoutVersionConflict() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1353,9 +1448,9 @@ public void testInsertAndFastUpdateWithoutVersionConflict() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-9292")
+    @Test
     public void testInsertFastUpdateConcurrent() throws Exception {
-        fail("https://issues.apache.org/jira/browse/IGNITE-9292");
-
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1386,6 +1481,7 @@
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryInsertRollback() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1426,6 +1522,7 @@ public void testQueryInsertRollback() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryInsertUpdateSameKeys() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1467,6 +1564,7 @@ public void testQueryInsertUpdateSameKeys() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryInsertUpdateSameKeysInSameOperation() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1499,6 +1597,7 @@ public void testQueryInsertUpdateSameKeysInSameOperation() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryPendingUpdates() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1567,6 +1666,7 @@ public void testQueryPendingUpdates() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testSelectProducesTransaction() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, MvccTestSqlIndexValue.class);
@@ -1601,6 +1701,7 @@ public void testSelectProducesTransaction() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testRepeatableRead() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, MvccTestSqlIndexValue.class);
@@ -1652,6 +1753,32 @@ public void testRepeatableRead() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
+    public void testFastInsertUpdateConcurrent() throws Exception {
+        ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
+            .setIndexedTypes(Integer.class, Integer.class);
+
+        Ignite ignite = startGridsMultiThreaded(4);
+
+        IgniteCache cache = ignite.cache(DEFAULT_CACHE_NAME);
+
+        for (int i = 0; i < 1000; i++) {
+            int key = i;
+            CompletableFuture.allOf(
+                CompletableFuture.runAsync(() -> {
+                    cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, ?)").setArgs(key, key));
+                }),
+                CompletableFuture.runAsync(() -> {
+                    cache.query(new SqlFieldsQuery("update Integer set _val = ? where _key = ?").setArgs(key, key));
+                })
+            ).join();
+        }
+    }
+
+    /**
+     * @throws Exception If failed.
+     */
+    @Test
     public void testIterator() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, Integer.class);
@@ -1722,6 +1849,7 @@ public void testIterator() throws Exception {
     /**
      * @throws Exception If failed.
*/ + @Test public void testHints() throws Exception { persistence = true; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesWithReducerAbstractTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesWithReducerAbstractTest.java index a7cf292c18b53..684631a2b3ea4 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesWithReducerAbstractTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlTxQueriesWithReducerAbstractTest.java @@ -39,6 +39,10 @@ import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.testframework.GridTestUtils.assertThrowsWithCause; @@ -49,6 +53,7 @@ /** * Tests for transactional SQL. */ +@RunWith(JUnit4.class) public abstract class CacheMvccSqlTxQueriesWithReducerAbstractTest extends CacheMvccAbstractTest { /** */ private static final int TIMEOUT = 3000; @@ -64,6 +69,7 @@ public abstract class CacheMvccSqlTxQueriesWithReducerAbstractTest extends Cache /** * @throws Exception If failed. */ + @Test public void testQueryReducerInsert() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class); @@ -117,6 +123,7 @@ public void testQueryReducerInsert() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueryReducerInsertDuplicateKey() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class); @@ -165,6 +172,7 @@ public void testQueryReducerInsertDuplicateKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReducerMerge() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class); @@ -217,6 +225,7 @@ public void testQueryReducerMerge() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReducerMultiBatchPerNodeServer() throws Exception { checkMultiBatchPerNode(false); } @@ -224,6 +233,7 @@ public void testQueryReducerMultiBatchPerNodeServer() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReducerMultiBatchPerNodeClient() throws Exception { checkMultiBatchPerNode(true); } @@ -292,6 +302,7 @@ private void checkMultiBatchPerNode(boolean client) throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReducerDelete() throws Exception { ccfgs = new CacheConfiguration[] { cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) @@ -343,6 +354,7 @@ public void testQueryReducerDelete() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReducerUpdate() throws Exception { ccfgs = new CacheConfiguration[] { cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) @@ -395,6 +407,7 @@ public void testQueryReducerUpdate() throws Exception { /** * @throws Exception If failed. 
      */
+    @Test
     public void testQueryReducerImplicitTxInsert() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class);
@@ -444,6 +457,7 @@ public void testQueryReducerImplicitTxInsert() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryReducerRollbackInsert() throws Exception {
         ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
             .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class);
@@ -506,6 +520,8 @@ private List sqlGet(int key, IgniteCache cache) {
     /**
      * @throws Exception If failed.
      */
+    @Ignore("https://issues.apache.org/jira/browse/IGNITE-10763")
+    @Test
     public void testQueryReducerDeadlockInsertWithTxTimeout() throws Exception {
         checkQueryReducerDeadlockInsert(TimeoutMode.TX);
     }
@@ -513,6 +529,7 @@ public void testQueryReducerDeadlockInsertWithTxTimeout() throws Exception {
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryReducerDeadlockInsertWithStmtTimeout() throws Exception {
         checkQueryReducerDeadlockInsert(TimeoutMode.STMT);
     }
@@ -612,6 +629,7 @@ public void checkQueryReducerDeadlockInsert(TimeoutMode timeoutMode) throws Exce
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testQueryReducerInsertVersionConflict() throws Exception {
         ccfgs = new CacheConfiguration[] {
             cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT)
@@ -680,12 +698,13 @@ public void testQueryReducerInsertVersionConflict() throws Exception {
         IgniteSQLException ex0 = X.cause(ex.get(), IgniteSQLException.class);
 
         assertNotNull("Exception has not been thrown.", ex0);
-        assertEquals("Mvcc version mismatch.", ex0.getMessage());
+        assertTrue(ex0.getMessage().startsWith("Cannot serialize transaction due to write conflict"));
     }
 
     /**
      * @throws Exception If failed.
*/ + @Test public void testQueryReducerInsertValues() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class); @@ -723,6 +742,7 @@ public void testQueryReducerInsertValues() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReducerMergeValues() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class); @@ -763,6 +783,7 @@ public void testQueryReducerMergeValues() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueryReducerFastUpdate() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -804,6 +825,7 @@ public void testQueryReducerFastUpdate() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueryReducerFastDelete() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, CacheMvccSqlTxQueriesAbstractTest.MvccTestSqlIndexValue.class); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlUpdateCountersTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlUpdateCountersTest.java index 943f5a42afe52..3b5f8e447e1e1 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlUpdateCountersTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccSqlUpdateCountersTest.java @@ -36,6 +36,9 @@ import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SQL; @@ -46,6 +49,7 @@ * Test for MVCC caches update counters behaviour. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class CacheMvccSqlUpdateCountersTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -55,6 +59,7 @@ public class CacheMvccSqlUpdateCountersTest extends CacheMvccAbstractTest { /** * @throws Exception If failed. */ + @Test public void testUpdateCountersInsertSimple() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -93,6 +98,7 @@ public void testUpdateCountersInsertSimple() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testUpdateCountersDoubleUpdate() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -139,6 +145,7 @@ public void testUpdateCountersDoubleUpdate() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateCountersRollback() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, DFLT_PARTITION_COUNT) .setIndexedTypes(Integer.class, Integer.class); @@ -185,6 +192,7 @@ public void testUpdateCountersRollback() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeleteOwnKey() throws Exception { ccfg = cacheConfiguration(cacheMode(), FULL_SYNC, 2, 1) .setCacheMode(CacheMode.REPLICATED) @@ -320,6 +328,7 @@ public void testDeleteOwnKey() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateCountersMultithreaded() throws Exception { final int writers = 4; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccStreamingInsertTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccStreamingInsertTest.java index b07a187a579af..e0415b8c55a64 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccStreamingInsertTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccStreamingInsertTest.java @@ -29,12 +29,16 @@ import org.apache.ignite.cache.CacheMode; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.Arrays.asList; /** * */ +@RunWith(JUnit4.class) public class CacheMvccStreamingInsertTest extends CacheMvccAbstractTest { /** */ private IgniteCache sqlNexus; @@ -68,6 +72,7 @@ public class 
CacheMvccStreamingInsertTest extends CacheMvccAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStreamingInsertWithoutOverwrite() throws Exception { conn.createStatement().execute("SET STREAMING 1 BATCH_SIZE 2 ALLOW_OVERWRITE 0 " + " PER_NODE_BUFFER_SIZE 1000 FLUSH_FREQUENCY 100"); @@ -93,6 +98,7 @@ public void testStreamingInsertWithoutOverwrite() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUpdateWithOverwrite() throws Exception { conn.createStatement().execute("SET STREAMING 1 BATCH_SIZE 2 ALLOW_OVERWRITE 1 " + " PER_NODE_BUFFER_SIZE 1000 FLUSH_FREQUENCY 100"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxNodeMappingTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxNodeMappingTest.java new file mode 100644 index 0000000000000..6df21437a54d8 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxNodeMappingTest.java @@ -0,0 +1,219 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.cache.mvcc; + +import com.google.common.collect.ImmutableMap; +import com.google.common.collect.Sets; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; +import java.util.stream.Collectors; +import java.util.stream.IntStream; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.affinity.Affinity; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; +import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; +import org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; + +/** + * Test checks that transactions started on nodes collect all nodes participating in distributed transaction. + */ +@RunWith(JUnit4.class) +public class CacheMvccTxNodeMappingTest extends CacheMvccAbstractTest { + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + throw new RuntimeException("Is not supposed to be used"); + } + + /** + * @throws Exception if failed. 
+ */ + @Test + public void testAllTxNodesAreTrackedCli() throws Exception { + checkAllTxNodesAreTracked(false); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testAllTxNodesAreTrackedSrv() throws Exception { + checkAllTxNodesAreTracked(true); + } + + /** + * @param nearSrv {@code true} if the node that initiates the tx is a server node, {@code false} if it is a client. + * @throws Exception if failed. + */ + private void checkAllTxNodesAreTracked(boolean nearSrv) throws Exception { + int srvCnt = 4; + + startGridsMultiThreaded(srvCnt); + + IgniteEx ign; + + if (nearSrv) + ign = grid(0); + else { + client = true; + + ign = startGrid(srvCnt); + } + + IgniteCache cache = ign.createCache(basicCcfg().setBackups(2)); + + Affinity aff = ign.affinity(cache.getName()); + + Integer k1 = null, k2 = null; + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(grid(0).localNode(), i) + && aff.isBackup(grid(1).localNode(), i) + && aff.isBackup(grid(2).localNode(), i)) { + k1 = i; + break; + } + } + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(grid(1).localNode(), i) + && aff.isBackup(grid(0).localNode(), i) + && aff.isBackup(grid(2).localNode(), i)) { + k2 = i; + break; + } + } + + Integer key1 = k1, key2 = k2; + + assert key1 != null && key2 != null; + + // data initialization + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(key1)); + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(key2)); + + ImmutableMap> txNodes = ImmutableMap.of( + grid(0).localNode().id(), Sets.newHashSet(grid(1).localNode().id(), grid(2).localNode().id()), + grid(1).localNode().id(), Sets.newHashSet(grid(0).localNode().id(), grid(2).localNode().id()) + ); + + // cache put + checkScenario(ign, srvCnt, txNodes, () -> { + cache.put(key1, 42); + cache.put(key2, 42); + }); + + // fast update + checkScenario(ign, srvCnt, txNodes, () -> { + cache.query(new SqlFieldsQuery("merge into Integer(_key, _val) values(?, 42)").setArgs(key1)); + 
cache.query(new SqlFieldsQuery("merge into Integer(_key, _val) values(?, 42)").setArgs(key2)); + }); + + // cursor update + checkScenario(ign, srvCnt, txNodes, () -> { + cache.query(new SqlFieldsQuery("update Integer set _val = _val + 1 where _key = ?").setArgs(key1)); + cache.query(new SqlFieldsQuery("update Integer set _val = _val + 1 where _key = ?").setArgs(key2)); + }); + + // broadcast update + checkScenario(ign, srvCnt, txNodes, () -> { + cache.query(new SqlFieldsQuery("update Integer set _val = _val + 1").setArgs(key1)); + }); + + // select for update does not start remote tx on backup + ImmutableMap> sfuTxNodes = ImmutableMap.of( + grid(0).localNode().id(), Collections.emptySet(), + grid(1).localNode().id(), Collections.emptySet() + ); + + // cursor select for update + checkScenario(ign, srvCnt, sfuTxNodes, () -> { + cache.query(new SqlFieldsQuery("select _val from Integer where _key = ? for update").setArgs(key1)).getAll(); + cache.query(new SqlFieldsQuery("select _val from Integer where _key = ? 
for update").setArgs(key2)).getAll(); + }); + + // broadcast select for update + checkScenario(ign, srvCnt, sfuTxNodes, () -> { + cache.query(new SqlFieldsQuery("select _val from Integer for update").setArgs(key1)).getAll(); + }); + } + + /** */ + private void checkScenario(IgniteEx ign, int srvCnt, ImmutableMap> txNodes, Runnable r) + throws Exception { + try (Transaction userTx = ign.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + r.run(); + + GridNearTxLocal nearTx = ((TransactionProxyImpl)userTx).tx(); + + nearTx.prepareNearTxLocal().get(); + + List txs = IntStream.range(0, srvCnt) + .mapToObj(i -> txsOnNode(grid(i), nearTx.nearXidVersion())) + .flatMap(Collection::stream) + .collect(Collectors.toList()); + + assertFalse(txs.isEmpty()); + + txs.forEach(tx -> assertEquals(txNodes, repack(tx.transactionNodes()))); + } + } + + /** */ + private static CacheConfiguration basicCcfg() { + return new CacheConfiguration<>("test") + .setAtomicityMode(TRANSACTIONAL_SNAPSHOT) + .setCacheMode(PARTITIONED) + .setIndexedTypes(Integer.class, Integer.class); + } + + /** */ + private static Map> repack(Map> orig) { + ImmutableMap.Builder> builder = ImmutableMap.builder(); + + orig.forEach((primary, backups) -> { + builder.put(primary, new HashSet<>(backups)); + }); + + return builder.build(); + } + + /** */ + private static List txsOnNode(IgniteEx node, GridCacheVersion xidVer) { + return node.context().cache().context().tm().activeTransactions().stream() + .peek(tx -> assertEquals(xidVer, tx.nearXidVersion())) + .collect(Collectors.toList()); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxRecoveryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxRecoveryTest.java new file mode 100644 index 0000000000000..10d5d363f46bd --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/CacheMvccTxRecoveryTest.java @@ -0,0 
+1,671 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.cache.mvcc; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.Optional; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.IntPredicate; +import java.util.stream.Collectors; +import java.util.stream.IntStream; +import java.util.stream.StreamSupport; +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.affinity.Affinity; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.cluster.ClusterNode; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import org.apache.ignite.internal.TestRecordingCommunicationSpi; +import org.apache.ignite.internal.processors.cache.GridCacheContext; +import 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareRequest; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishResponse; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal; +import org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxPrepareRequest; +import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx; +import org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.util.lang.GridAbsPredicate; +import org.apache.ignite.internal.util.typedef.G; +import org.apache.ignite.lang.IgniteBiPredicate; +import org.apache.ignite.plugin.extensions.communication.Message; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.transactions.Transaction; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; +import static org.apache.ignite.cache.CacheMode.PARTITIONED; +import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTxRecoveryTest.NodeMode.CLIENT; +import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTxRecoveryTest.NodeMode.SERVER; +import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTxRecoveryTest.TxEndResult.COMMIT; +import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTxRecoveryTest.TxEndResult.ROLLBAK; +import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC; +import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ; +import static org.apache.ignite.transactions.TransactionState.COMMITTED; +import static 
org.apache.ignite.transactions.TransactionState.PREPARED; +import static org.apache.ignite.transactions.TransactionState.PREPARING; +import static org.apache.ignite.transactions.TransactionState.ROLLED_BACK; + +/** */ +@RunWith(JUnit4.class) +public class CacheMvccTxRecoveryTest extends CacheMvccAbstractTest { + /** */ + public enum TxEndResult { + /** */ COMMIT, + /** */ ROLLBAK + } + + /** */ + public enum NodeMode { + /** */ SERVER, + /** */ CLIENT + } + + /** {@inheritDoc} */ + @Override protected CacheMode cacheMode() { + throw new RuntimeException("Is not supposed to be used"); + } + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + cfg.setCommunicationSpi(new TestRecordingCommunicationSpi()); + + return cfg; + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRecoveryCommitNearFailure1() throws Exception { + checkRecoveryNearFailure(COMMIT, CLIENT); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRecoveryCommitNearFailure2() throws Exception { + checkRecoveryNearFailure(COMMIT, SERVER); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRecoveryRollbackNearFailure1() throws Exception { + checkRecoveryNearFailure(ROLLBAK, CLIENT); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRecoveryRollbackNearFailure2() throws Exception { + checkRecoveryNearFailure(ROLLBAK, SERVER); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRecoveryCommitPrimaryFailure1() throws Exception { + checkRecoveryPrimaryFailure(COMMIT, false); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRecoveryRollbackPrimaryFailure1() throws Exception { + checkRecoveryPrimaryFailure(ROLLBAK, false); + } + + /** + * @throws Exception if failed. 
+ */ + @Test + public void testRecoveryCommitPrimaryFailure2() throws Exception { + checkRecoveryPrimaryFailure(COMMIT, true); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testRecoveryRollbackPrimaryFailure2() throws Exception { + checkRecoveryPrimaryFailure(ROLLBAK, true); + } + + /** */ + private void checkRecoveryNearFailure(TxEndResult endRes, NodeMode nearNodeMode) throws Exception { + int gridCnt = 4; + int baseCnt = gridCnt - 1; + + boolean commit = endRes == COMMIT; + + startGridsMultiThreaded(baseCnt); + + // tweak client/server near + client = nearNodeMode == CLIENT; + + IgniteEx nearNode = startGrid(baseCnt); + + IgniteCache cache = nearNode.getOrCreateCache(basicCcfg() + .setBackups(1)); + + Affinity aff = nearNode.affinity(DEFAULT_CACHE_NAME); + + List keys = new ArrayList<>(); + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(grid(0).localNode(), i) && aff.isBackup(grid(1).localNode(), i)) { + keys.add(i); + break; + } + } + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(grid(1).localNode(), i) && aff.isBackup(grid(2).localNode(), i)) { + keys.add(i); + break; + } + } + + assert keys.size() == 2; + + TestRecordingCommunicationSpi nearComm + = (TestRecordingCommunicationSpi)nearNode.configuration().getCommunicationSpi(); + + if (!commit) + nearComm.blockMessages(GridNearTxPrepareRequest.class, grid(1).name()); + + GridTestUtils.runAsync(() -> { + // run in separate thread to exclude tx from thread-local map + GridNearTxLocal nearTx + = ((TransactionProxyImpl)nearNode.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)).tx(); + + for (Integer k : keys) + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(k)); + + List txs = IntStream.range(0, baseCnt) + .mapToObj(i -> txsOnNode(grid(i), nearTx.xidVersion())) + .flatMap(Collection::stream) + .collect(Collectors.toList()); + + IgniteInternalFuture prepareFut = nearTx.prepareNearTxLocal(); + + if (commit) + prepareFut.get(); + 
else + assertConditionEventually(() -> txs.stream().anyMatch(tx -> tx.state() == PREPARED)); + + // drop near + nearNode.close(); + + assertConditionEventually(() -> txs.stream().allMatch(tx -> tx.state() == (commit ? COMMITTED : ROLLED_BACK))); + + return null; + }).get(); + + if (commit) { + assertConditionEventually(() -> { + int rowsCnt = grid(0).cache(DEFAULT_CACHE_NAME) + .query(new SqlFieldsQuery("select * from Integer")).getAll().size(); + return rowsCnt == keys.size(); + }); + } + else { + int rowsCnt = G.allGrids().get(0).cache(DEFAULT_CACHE_NAME) + .query(new SqlFieldsQuery("select * from Integer")).getAll().size(); + + assertEquals(0, rowsCnt); + } + + assertPartitionCountersAreConsistent(keys, grids(baseCnt, i -> true)); + } + + /** */ + private void checkRecoveryPrimaryFailure(TxEndResult endRes, boolean mvccCrd) throws Exception { + int gridCnt = 4; + int baseCnt = gridCnt - 1; + + boolean commit = endRes == COMMIT; + + startGridsMultiThreaded(baseCnt); + + client = true; + + IgniteEx nearNode = startGrid(baseCnt); + + IgniteCache cache = nearNode.getOrCreateCache(basicCcfg() + .setBackups(1)); + + Affinity aff = nearNode.affinity(DEFAULT_CACHE_NAME); + + List keys = new ArrayList<>(); + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(grid(0).localNode(), i) && aff.isBackup(grid(1).localNode(), i)) { + keys.add(i); + break; + } + } + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(grid(1).localNode(), i) && aff.isBackup(grid(2).localNode(), i)) { + keys.add(i); + break; + } + } + + assert keys.size() == 2; + + int victim, victimBackup; + + if (mvccCrd) { + victim = 0; + victimBackup = 1; + } + else { + victim = 1; + victimBackup = 2; + } + + TestRecordingCommunicationSpi victimComm = (TestRecordingCommunicationSpi)grid(victim).configuration().getCommunicationSpi(); + + if (commit) + victimComm.blockMessages(GridNearTxFinishResponse.class, nearNode.name()); + else + victimComm.blockMessages(GridDhtTxPrepareRequest.class, 
grid(victimBackup).name()); + + GridNearTxLocal nearTx + = ((TransactionProxyImpl)nearNode.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)).tx(); + + for (Integer k : keys) + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(k)); + + List txs = IntStream.range(0, baseCnt) + .filter(i -> i != victim) + .mapToObj(i -> txsOnNode(grid(i), nearTx.xidVersion())) + .flatMap(Collection::stream) + .collect(Collectors.toList()); + + IgniteInternalFuture commitFut = nearTx.commitAsync(); + + if (commit) + assertConditionEventually(() -> txs.stream().allMatch(tx -> tx.state() == COMMITTED)); + else + assertConditionEventually(() -> txs.stream().anyMatch(tx -> tx.state() == PREPARED)); + + // drop victim + grid(victim).close(); + + awaitPartitionMapExchange(); + + assertConditionEventually(() -> txs.stream().allMatch(tx -> tx.state() == (commit ? COMMITTED : ROLLED_BACK))); + + assert victimComm.hasBlockedMessages(); + + if (commit) { + assertConditionEventually(() -> { + int rowsCnt = G.allGrids().get(0).cache(DEFAULT_CACHE_NAME) + .query(new SqlFieldsQuery("select * from Integer")).getAll().size(); + return rowsCnt == keys.size(); + }); + } + else { + int rowsCnt = G.allGrids().get(0).cache(DEFAULT_CACHE_NAME) + .query(new SqlFieldsQuery("select * from Integer")).getAll().size(); + + assertEquals(0, rowsCnt); + } + + assertTrue(commitFut.isDone()); + + assertPartitionCountersAreConsistent(keys, grids(baseCnt, i -> i != victim)); + } + + /** + * @throws Exception if failed. 
+ */ + @Test + public void testRecoveryCommit() throws Exception { + startGridsMultiThreaded(2); + + client = true; + + IgniteEx ign = startGrid(2); + + IgniteCache cache = ign.getOrCreateCache(basicCcfg()); + + AtomicInteger keyCntr = new AtomicInteger(); + + ArrayList keys = new ArrayList<>(); + + ign.cluster().forServers().nodes() + .forEach(node -> keys.add(keyForNode(ign.affinity(DEFAULT_CACHE_NAME), keyCntr, node))); + + GridTestUtils.runAsync(() -> { + // run in separate thread to exclude tx from thread-local map + Transaction tx = ign.transactions().txStart(PESSIMISTIC, REPEATABLE_READ); + + for (Integer k : keys) + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(k)); + + ((TransactionProxyImpl)tx).tx().prepareNearTxLocal().get(); + + return null; + }).get(); + + // drop near + stopGrid(2, true); + + IgniteEx srvNode = grid(0); + + assertConditionEventually( + () -> srvNode.cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("select * from Integer")).getAll().size() == 2 + ); + + assertPartitionCountersAreConsistent(keys, G.allGrids()); + } + + /** + * @throws Exception if failed. 
+ */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10766") + @Test + public void testCountersNeighborcastServerFailed() throws Exception { + int srvCnt = 4; + + startGridsMultiThreaded(srvCnt); + + client = true; + + IgniteEx ign = startGrid(srvCnt); + + IgniteCache cache = ign.getOrCreateCache(basicCcfg() + .setBackups(2)); + + ArrayList keys = new ArrayList<>(); + + int vid = 3; + + IgniteEx victim = grid(vid); + + Affinity aff = ign.affinity(DEFAULT_CACHE_NAME); + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(victim.localNode(), i) && !aff.isBackup(grid(0).localNode(), i)) { + keys.add(i); + break; + } + } + + for (int i = 0; i < 100; i++) { + if (aff.isPrimary(victim.localNode(), i) && !aff.isBackup(grid(1).localNode(), i)) { + keys.add(i); + break; + } + } + + assert keys.size() == 2 && !keys.contains(99); + + // prevent prepare on one backup + ((TestRecordingCommunicationSpi)victim.configuration().getCommunicationSpi()) + .blockMessages(GridDhtTxPrepareRequest.class, grid(0).name()); + + GridNearTxLocal nearTx = ((TransactionProxyImpl)ign.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)).tx(); + + for (Integer k : keys) + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(k)); + + List txs = IntStream.range(0, srvCnt) + .mapToObj(this::grid) + .filter(g -> g != victim) + .map(g -> txsOnNode(g, nearTx.xidVersion())) + .flatMap(Collection::stream) + .collect(Collectors.toList()); + + nearTx.commitAsync(); + + // await tx partially prepared + assertConditionEventually(() -> txs.stream().anyMatch(tx -> tx.state() == PREPARED)); + + CountDownLatch latch1 = new CountDownLatch(1); + CountDownLatch latch2 = new CountDownLatch(1); + + IgniteInternalFuture backgroundTxFut = GridTestUtils.runAsync(() -> { + try (Transaction ignored = ign.transactions().txStart()) { + boolean upd = false; + + for (int i = 100; i < 200; i++) { + if (!aff.isPrimary(victim.localNode(), i)) { + cache.put(i, 11); + upd = true; + 
break; + } + } + + assert upd; + + latch1.countDown(); + + latch2.await(); + } + + return null; + }); + + latch1.await(); + + // drop primary + victim.close(); + + // do all assertions before rebalance + assertConditionEventually(() -> txs.stream().allMatch(tx -> tx.state() == ROLLED_BACK)); + + List liveNodes = grids(srvCnt, i -> i != vid); + + assertPartitionCountersAreConsistent(keys, liveNodes); + + latch2.countDown(); + + backgroundTxFut.get(); + + assertTrue(liveNodes.stream() + .map(node -> node.cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("select * from Integer")).getAll()) + .allMatch(Collection::isEmpty)); + } + + /** + * @throws Exception if failed. + */ + @Test + public void testUpdateCountersGapIsClosed() throws Exception { + int srvCnt = 3; + + startGridsMultiThreaded(srvCnt); + + client = true; + + IgniteEx ign = startGrid(srvCnt); + + IgniteCache cache = ign.getOrCreateCache( + basicCcfg().setBackups(2)); + + int vid = 1; + + IgniteEx victim = grid(vid); + + ArrayList keys = new ArrayList<>(); + + Integer part = null; + + Affinity aff = ign.affinity(DEFAULT_CACHE_NAME); + + for (int i = 0; i < 2000; i++) { + int p = aff.partition(i); + if (aff.isPrimary(victim.localNode(), i)) { + if (part == null) part = p; + if (p == part) keys.add(i); + if (keys.size() == 2) break; + } + } + + assert keys.size() == 2; + + Transaction txA = ign.transactions().txStart(PESSIMISTIC, REPEATABLE_READ); + + // prevent first transaction prepare on backups + ((TestRecordingCommunicationSpi)victim.configuration().getCommunicationSpi()) + .blockMessages(new IgniteBiPredicate() { + final AtomicInteger limiter = new AtomicInteger(); + + @Override public boolean apply(ClusterNode node, Message msg) { + if (msg instanceof GridDhtTxPrepareRequest) + return limiter.getAndIncrement() < 2; + + return false; + } + }); + + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(keys.get(0))); + + txA.commitAsync(); + + GridCacheVersion aXidVer = 
((TransactionProxyImpl)txA).tx().xidVersion(); + + assertConditionEventually(() -> txsOnNode(victim, aXidVer).stream() + .anyMatch(tx -> tx.state() == PREPARING)); + + GridTestUtils.runAsync(() -> { + try (Transaction txB = ign.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) { + cache.query(new SqlFieldsQuery("insert into Integer(_key, _val) values(?, 42)").setArgs(keys.get(1))); + + txB.commit(); + } + }).get(); + + long victimUpdCntr = updateCounter(victim.cachex(DEFAULT_CACHE_NAME).context(), keys.get(0)); + + List backupNodes = grids(srvCnt, i -> i != vid); + + List backupTxsA = backupNodes.stream() + .map(node -> txsOnNode(node, aXidVer)) + .flatMap(Collection::stream) + .collect(Collectors.toList()); + + // drop primary + victim.close(); + + assertConditionEventually(() -> backupTxsA.stream().allMatch(tx -> tx.state() == ROLLED_BACK)); + + backupNodes.stream() + .map(node -> node.cache(DEFAULT_CACHE_NAME)) + .forEach(c -> { + assertEquals(1, c.query(new SqlFieldsQuery("select * from Integer")).getAll().size()); + }); + + backupNodes.forEach(node -> { + for (Integer k : keys) + assertEquals(victimUpdCntr, updateCounter(node.cachex(DEFAULT_CACHE_NAME).context(), k)); + }); + } + + /** */ + private static CacheConfiguration basicCcfg() { + return new CacheConfiguration<>(DEFAULT_CACHE_NAME) + .setAtomicityMode(TRANSACTIONAL_SNAPSHOT) + .setCacheMode(PARTITIONED) + .setIndexedTypes(Integer.class, Integer.class); + } + + /** */ + private static List txsOnNode(IgniteEx node, GridCacheVersion xidVer) { + List txs = node.context().cache().context().tm().activeTransactions().stream() + .peek(tx -> assertEquals(xidVer, tx.nearXidVersion())) + .collect(Collectors.toList()); + + assert !txs.isEmpty(); + + return txs; + } + + /** */ + private static void assertConditionEventually(GridAbsPredicate p) + throws IgniteInterruptedCheckedException { + if (!GridTestUtils.waitForCondition(p, 5_000)) + fail(); + } + + /** */ + private List grids(int cnt, IntPredicate p) { + 
return IntStream.range(0, cnt).filter(p).mapToObj(this::grid).collect(Collectors.toList()); + } + + /** */ + private void assertPartitionCountersAreConsistent(Iterable keys, Iterable nodes) { + for (Integer key : keys) { + long cntr0 = -1; + + for (Ignite n : nodes) { + IgniteEx node = ((IgniteEx)n); + + if (node.affinity(DEFAULT_CACHE_NAME).isPrimaryOrBackup(node.localNode(), key)) { + long cntr = updateCounter(node.cachex(DEFAULT_CACHE_NAME).context(), key); +// System.err.println(node.localNode().consistentId() + " " + key + " -> " + cntr); + if (cntr0 == -1) + cntr0 = cntr; + + assertEquals(cntr0, cntr); + } + } + } + } + + /** */ + private static long updateCounter(GridCacheContext cctx, Object key) { + return dataStore(cctx, key) + .map(IgniteCacheOffheapManager.CacheDataStore::updateCounter) + .get(); + } + + /** */ + private static Optional dataStore( + GridCacheContext cctx, Object key) { + int p = cctx.affinity().partition(key); + IgniteCacheOffheapManager offheap = cctx.offheap(); + return StreamSupport.stream(offheap.cacheDataStores().spliterator(), false) + .filter(ds -> ds.partId() == p) + .findFirst(); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadBulkOpsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadBulkOpsTest.java index 46aeaa1b3cacf..bf73db1551ca7 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadBulkOpsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadBulkOpsTest.java @@ -18,6 +18,8 @@ package org.apache.ignite.internal.processors.cache.mvcc; import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; import java.util.HashSet; import java.util.LinkedHashSet; import java.util.List; @@ -26,21 +28,35 @@ import java.util.concurrent.Callable; import 
java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; +import java.util.function.BiFunction; +import java.util.function.Function; +import java.util.stream.Collectors; +import javax.cache.processor.EntryProcessorException; +import javax.cache.processor.EntryProcessorResult; +import javax.cache.processor.MutableEntry; import java.util.function.Function; import java.util.stream.Collectors; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteTransactions; +import org.apache.ignite.cache.CacheEntryProcessor; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.cache.CacheMode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.GET; +import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.INVOKE; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SQL; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.WriteMode.DML; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.WriteMode.PUT; @@ -48,6 +64,7 @@ /** * Test basic mvcc bulk cache operations. 
*/ +@RunWith(JUnit4.class) public class MvccRepeatableReadBulkOpsTest extends CacheMvccAbstractTest { /** {@inheritDoc} */ @Override protected CacheMode cacheMode() { @@ -95,6 +112,7 @@ private int nodesCount() { /** * @throws Exception If failed. */ + @Test public void testRepeatableReadIsolationGetPut() throws Exception { checkOperations(GET, GET, PUT, true); checkOperations(GET, GET, PUT, false); @@ -103,6 +121,16 @@ public void testRepeatableReadIsolationGetPut() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testRepeatableReadIsolationInvoke() throws Exception { + checkOperations(GET, GET, WriteMode.INVOKE, true); + checkOperations(GET, GET, WriteMode.INVOKE, false); + } + + /** + * @throws Exception If failed. + */ + @Test public void testRepeatableReadIsolationSqlPut() throws Exception { checkOperations(SQL, SQL, PUT, true); checkOperations(SQL, SQL, PUT, false); @@ -111,6 +139,16 @@ public void testRepeatableReadIsolationSqlPut() throws Exception { /** * @throws Exception If failed. */ + @Test + public void testRepeatableReadIsolationSqlInvoke() throws Exception { + checkOperations(SQL, SQL, WriteMode.INVOKE, true); + checkOperations(SQL, SQL, WriteMode.INVOKE, false); + } + + /** + * @throws Exception If failed. + */ + @Test public void testRepeatableReadIsolationSqlDml() throws Exception { checkOperations(SQL, SQL, DML, true); checkOperations(SQL, SQL, DML, false); @@ -119,6 +157,7 @@ public void testRepeatableReadIsolationSqlDml() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRepeatableReadIsolationGetDml() throws Exception { checkOperations(GET, GET, DML, true); checkOperations(GET, GET, DML, false); @@ -127,22 +166,29 @@ public void testRepeatableReadIsolationGetDml() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testRepeatableReadIsolationMixedPut() throws Exception { checkOperations(SQL, GET, PUT, false); checkOperations(SQL, GET, PUT, true); + checkOperations(SQL, GET, WriteMode.INVOKE, false); + checkOperations(SQL, GET, WriteMode.INVOKE, true); } /** * @throws Exception If failed. */ + @Test public void testRepeatableReadIsolationMixedPut2() throws Exception { checkOperations(GET, SQL, PUT, false); checkOperations(GET, SQL, PUT, true); + checkOperations(GET, SQL, WriteMode.INVOKE, false); + checkOperations(GET, SQL, WriteMode.INVOKE, true); } /** * @throws Exception If failed. */ + @Test public void testRepeatableReadIsolationMixedDml() throws Exception { checkOperations(SQL, GET, DML, false); checkOperations(SQL, GET, DML, true); @@ -151,6 +197,7 @@ public void testRepeatableReadIsolationMixedDml() throws Exception { /** * @throws Exception If failed. */ + @Test public void testRepeatableReadIsolationMixedDml2() throws Exception { checkOperations(GET, SQL, DML, false); checkOperations(GET, SQL, DML, true); @@ -159,18 +206,77 @@ public void testRepeatableReadIsolationMixedDml2() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOperationConsistency() throws Exception { checkOperationsConsistency(PUT, false); checkOperationsConsistency(DML, false); + checkOperationsConsistency(WriteMode.INVOKE, false); checkOperationsConsistency(PUT, true); checkOperationsConsistency(DML, true); + checkOperationsConsistency(WriteMode.INVOKE, true); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testInvokeConsistency() throws Exception { + Ignite node = grid(/*requestFromClient ? 
nodesCount() - 1 :*/ 0); + + TestCache cache = new TestCache<>(node.cache(DEFAULT_CACHE_NAME)); + + final Set keys1 = new HashSet<>(3); + final Set keys2 = new HashSet<>(3); + + Set allKeys = generateKeySet(cache.cache, keys1, keys2); + + final Map map1 = keys1.stream().collect( + Collectors.toMap(k -> k, k -> new MvccTestAccount(k, 1))); + + final Map map2 = keys2.stream().collect( + Collectors.toMap(k -> k, k -> new MvccTestAccount(k, 1))); + + assertEquals(0, cache.cache.size()); + + updateEntries(cache, map1, WriteMode.INVOKE); + assertEquals(3, cache.cache.size()); + + updateEntries(cache, map1, WriteMode.INVOKE); + assertEquals(3, cache.cache.size()); + + getEntries(cache, allKeys, INVOKE); + assertEquals(3, cache.cache.size()); + + updateEntries(cache, map2, WriteMode.INVOKE); + assertEquals(6, cache.cache.size()); + + getEntries(cache, keys2, INVOKE); + assertEquals(6, cache.cache.size()); + + removeEntries(cache, keys1, WriteMode.INVOKE); + assertEquals(3, cache.cache.size()); + + removeEntries(cache, keys1, WriteMode.INVOKE); + assertEquals(3, cache.cache.size()); + + getEntries(cache, allKeys, INVOKE); + assertEquals(3, cache.cache.size()); + + updateEntries(cache, map1, WriteMode.INVOKE); + assertEquals(6, cache.cache.size()); + + removeEntries(cache, allKeys, WriteMode.INVOKE); + assertEquals(0, cache.cache.size()); + + getEntries(cache, allKeys, INVOKE); + assertEquals(0, cache.cache.size()); } /** * Checks SQL and CacheAPI operation isolation consistency. * * @param readModeBefore read mode used before value updated. - * @param readModeBefore read mode used after value updated. + * @param readModeAfter read mode used after value updated. * @param writeMode write mode used for update. * @throws Exception If failed. 
*/ @@ -206,20 +312,23 @@ private void checkOperations(ReadMode readModeBefore, ReadMode readModeAfter, @Override public Void call() throws Exception { updateStart.await(); + assertEquals(initialMap.size(), cache2.cache.size()); + try (Transaction tx = txs2.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { + tx.timeout(TX_TIMEOUT); updateEntries(cache2, updateMap, writeMode); removeEntries(cache2, keysForRemove, writeMode); - checkContains(cache2, true, updateMap.keySet()); - checkContains(cache2, false, keysForRemove); - assertEquals(updateMap, cache2.cache.getAll(allKeys)); tx.commit(); } + finally { + updateFinish.countDown(); + } - updateFinish.countDown(); + assertEquals(updateMap.size(), cache2.cache.size()); return null; } @@ -270,7 +379,7 @@ private void checkOperations(ReadMode readModeBefore, ReadMode readModeAfter, * @return All keys. * @throws IgniteCheckedException If failed. */ - protected Set generateKeySet(IgniteCache cache, Set keySet1, + protected Set generateKeySet(IgniteCache cache, Set keySet1, Set keySet2) throws IgniteCheckedException { LinkedHashSet allKeys = new LinkedHashSet<>(); @@ -302,50 +411,72 @@ private void checkOperationsConsistency(WriteMode writeMode, boolean requestFrom TestCache cache = new TestCache<>(node.cache(DEFAULT_CACHE_NAME)); - final Set keysForUpdate = new HashSet<>(3); - final Set keysForRemove = new HashSet<>(3); - final Set allKeys = generateKeySet(grid(0).cache(DEFAULT_CACHE_NAME), keysForUpdate, keysForRemove); + final Set keysForUpdate = new HashSet<>(3); + final Set keysForRemove = new HashSet<>(3); - int updCnt = 1; + final Set allKeys = generateKeySet(grid(0).cache(DEFAULT_CACHE_NAME), keysForUpdate, keysForRemove); - final Map initialVals = allKeys.stream().collect( - Collectors.toMap(k -> k, k -> new MvccTestAccount(k, 1))); + try { + int updCnt = 1; + + final Map initialVals = allKeys.stream().collect( + Collectors.toMap(k -> k, k -> new MvccTestAccount(k, 1))); + + 
updateEntries(cache, initialVals, writeMode); + + assertEquals(initialVals.size(), cache.cache.size()); + + IgniteTransactions txs = node.transactions(); + + Map updatedVals = null; - cache.cache.putAll(initialVals); + try (Transaction tx = txs.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { + Map vals1 = getEntries(cache, allKeys, GET); + Map vals2 = getEntries(cache, allKeys, SQL); + Map vals3 = getEntries(cache, allKeys, ReadMode.INVOKE); - IgniteTransactions txs = node.transactions(); + assertEquals(initialVals, vals1); + assertEquals(initialVals, vals2); + assertEquals(initialVals, vals3); - Map updatedVals = null; + assertEquals(initialVals.size(), cache.cache.size()); - try (Transaction tx = txs.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { - Map vals1 = getEntries(cache, allKeys, GET); - Map vals2 = getEntries(cache, allKeys, SQL); + for (ReadMode readMode : new ReadMode[] {GET, SQL, INVOKE}) { + int updCnt0 = ++updCnt; - assertEquals(initialVals, vals1); - assertEquals(initialVals, vals2); + updatedVals = allKeys.stream().collect(Collectors.toMap(Function.identity(), + k -> new MvccTestAccount(k, updCnt0))); - for (ReadMode readMode : new ReadMode[] {GET, SQL}) { - int updCnt0 = ++updCnt; + updateEntries(cache, updatedVals, writeMode); + assertEquals(allKeys.size(), cache.cache.size()); - updatedVals = keysForUpdate.stream().collect(Collectors.toMap(Function.identity(), - k -> new MvccTestAccount(k, updCnt0))); + removeEntries(cache, keysForRemove, writeMode); - updateEntries(cache, updatedVals, writeMode); - removeEntries(cache, keysForRemove, writeMode); + for (Integer key : keysForRemove) + updatedVals.remove(key); - assertEquals(String.valueOf(readMode), updatedVals, getEntries(cache, allKeys, readMode)); + assertEquals(String.valueOf(readMode), updatedVals, getEntries(cache, allKeys, readMode)); + } + + tx.commit(); } - tx.commit(); - } + try (Transaction tx = 
txs.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { + assertEquals(updatedVals, getEntries(cache, allKeys, GET)); + assertEquals(updatedVals, getEntries(cache, allKeys, SQL)); + assertEquals(updatedVals, getEntries(cache, allKeys, INVOKE)); - try (Transaction tx = txs.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) { - assertEquals(updatedVals, getEntries(cache, allKeys, GET)); - assertEquals(updatedVals, getEntries(cache, allKeys, SQL)); + tx.commit(); + } - tx.commit(); + assertEquals(updatedVals.size(), cache.cache.size()); + } + finally { + cache.cache.removeAll(keysForUpdate); } + + assertEquals(0, cache.cache.size()); } /** @@ -365,6 +496,18 @@ protected Map getEntries( return cache.cache.getAll(keys); case SQL: return getAllSql(cache); + case INVOKE: { + Map res = new HashMap<>(); + + CacheEntryProcessor ep = new GetEntryProcessor<>(); + + Map> invokeRes = cache.cache.invokeAll(keys, ep); + + for (Map.Entry> e : invokeRes.entrySet()) + res.put(e.getKey(), e.getValue().get()); + + return res; + } default: fail(); } @@ -395,6 +538,19 @@ protected void updateEntries( break; } + case INVOKE: { + CacheEntryProcessor ep = + new GetAndPutEntryProcessor(){ + /** {@inheritDoc} */ + @Override MvccTestAccount newValForKey(Integer key) { + return entries.get(key); + } + }; + + cache.cache.invokeAll(entries.keySet(), ep); + + break; + } default: fail(); } @@ -423,6 +579,13 @@ protected void removeEntries( break; } + case INVOKE: { + CacheEntryProcessor ep = new RemoveEntryProcessor<>(); + + cache.cache.invokeAll(keys, ep); + + break; + } default: fail(); } @@ -438,4 +601,52 @@ protected void removeEntries( protected void checkContains(TestCache cache, boolean expected, Set keys) { assertEquals(expected, cache.cache.containsKeys(keys)); } + + /** + * Applies get operation. 
+ */ + static class GetEntryProcessor implements CacheEntryProcessor { + /** {@inheritDoc} */ + @Override public V process(MutableEntry entry, + Object... arguments) throws EntryProcessorException { + return entry.getValue(); + } + } + + /** + * Applies remove operation. + */ + static class RemoveEntryProcessor implements CacheEntryProcessor { + /** {@inheritDoc} */ + @Override public R process(MutableEntry entry, + Object... arguments) throws EntryProcessorException { + entry.remove(); + + return null; + } + } + + /** + * Applies get and put operation. + */ + static class GetAndPutEntryProcessor implements CacheEntryProcessor { + /** {@inheritDoc} */ + @Override public V process(MutableEntry entry, + Object... args) throws EntryProcessorException { + V newVal = !F.isEmpty(args) ? (V)args[0] : newValForKey(entry.getKey()); + + V oldVal = entry.getValue(); + entry.setValue(newVal); + + return oldVal; + } + + /** + * @param key Key. + * @return New value. + */ + V newValForKey(K key){ + throw new UnsupportedOperationException(); + } + } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadOperationsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadOperationsTest.java index c782f9839ac5c..e6ed0a369b9ef 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadOperationsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/mvcc/MvccRepeatableReadOperationsTest.java @@ -22,12 +22,18 @@ import java.util.Map; import java.util.Set; import java.util.stream.Collectors; +import javax.cache.processor.EntryProcessorException; +import javax.cache.processor.MutableEntry; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteTransactions; +import org.apache.ignite.cache.CacheEntryProcessor; import 
org.apache.ignite.transactions.Transaction; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.GET; import static org.apache.ignite.internal.processors.cache.mvcc.CacheMvccAbstractTest.ReadMode.SQL; @@ -35,6 +41,7 @@ /** * Test basic mvcc cache operations. */ +@RunWith(JUnit4.class) public class MvccRepeatableReadOperationsTest extends MvccRepeatableReadBulkOpsTest { /** {@inheritDoc} */ @Override protected Map getEntries( @@ -55,8 +62,24 @@ public class MvccRepeatableReadOperationsTest extends MvccRepeatableReadBulkOpsT return res; } + case SQL: return getAllSql(cache); + + case INVOKE: { + Map res = new HashMap<>(); + + CacheEntryProcessor ep = new GetEntryProcessor(); + + for (Integer key : keys) { + MvccTestAccount val = cache.cache.invoke(key, ep); + + if (val != null) + res.put(key, val); + } + + return res; + } default: fail(); } @@ -65,7 +88,7 @@ public class MvccRepeatableReadOperationsTest extends MvccRepeatableReadBulkOpsT } /** {@inheritDoc} */ - protected void updateEntries( + @Override protected void updateEntries( TestCache cache, Map entries, WriteMode writeMode) { @@ -79,6 +102,7 @@ protected void updateEntries( break; } + case DML: { for (Map.Entry e : entries.entrySet()) { if (e.getValue() == null) @@ -88,13 +112,23 @@ protected void updateEntries( } break; } + + case INVOKE: { + GetAndPutEntryProcessor ep = new GetAndPutEntryProcessor<>(); + + for (final Map.Entry e : entries.entrySet()) + cache.cache.invoke(e.getKey(), ep, e.getValue()); + + break; + } + default: fail(); } } /** {@inheritDoc} */ - protected void removeEntries( + @Override protected void removeEntries( TestCache cache, Set keys, WriteMode writeMode) { @@ -111,6 +145,14 @@ protected void removeEntries( break; } + case 
INVOKE: { + CacheEntryProcessor ep = new RemoveEntryProcessor<>(); + + for (Integer key : keys) + cache.cache.invoke(key, ep); + + break; + } default: fail(); } @@ -128,6 +170,7 @@ protected void removeEntries( * * @throws IgniteCheckedException If failed. */ + @Test public void testGetAndUpdateOperations() throws IgniteCheckedException { Ignite node1 = grid(0); @@ -189,6 +232,7 @@ public void testGetAndUpdateOperations() throws IgniteCheckedException { * * @throws IgniteCheckedException If failed. */ + @Test public void testPutIfAbsentConsistency() throws IgniteCheckedException { Ignite node1 = grid(0); @@ -229,6 +273,7 @@ public void testPutIfAbsentConsistency() throws IgniteCheckedException { * * @throws IgniteCheckedException If failed. */ + @Test public void testReplaceConsistency() throws IgniteCheckedException { Ignite node1 = grid(0); @@ -273,4 +318,4 @@ public void testReplaceConsistency() throws IgniteCheckedException { assertEquals(updateMap, getEntries(cache1, allKeys, SQL)); assertEquals(updateMap, getEntries(cache1, allKeys, GET)); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IndexingMultithreadedLoadContinuousRestartTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IndexingMultithreadedLoadContinuousRestartTest.java new file mode 100644 index 0000000000000..2b5d882a54a1b --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/IndexingMultithreadedLoadContinuousRestartTest.java @@ -0,0 +1,282 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. 
+* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ +package org.apache.ignite.internal.processors.cache.persistence.db; + +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.LinkedHashMap; +import java.util.concurrent.ThreadLocalRandom; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.cache.QueryIndex; +import org.apache.ignite.cache.QueryIndexType; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.DataRegionConfiguration; +import org.apache.ignite.configuration.DataStorageConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.visor.verify.ValidateIndexesClosure; +import org.apache.ignite.internal.visor.verify.VisorValidateIndexesJobResult; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Tests that continuous non-graceful node stop under load doesn't break SQL indexes. + */ +public class IndexingMultithreadedLoadContinuousRestartTest extends GridCommonAbstractTest { + /** Cache name. 
*/ + private static final String CACHE_NAME = "test-cache-name"; + + /** Restarts. */ + private static final int RESTARTS = 5; + + /** Load threads. */ + private static final int THREADS = 5; + + /** Load loop cycles. */ + private static final int LOAD_LOOP = 5000; + + /** Key bound. */ + private static final int KEY_BOUND = 1000; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(gridName); + + QueryEntity qryEntity = new QueryEntity(); + qryEntity.setKeyType(UserKey.class.getName()); + qryEntity.setValueType(UserValue.class.getName()); + qryEntity.setTableName("USER_TEST_TABLE"); + qryEntity.setKeyFields(new HashSet<>(Arrays.asList("a", "b", "c"))); + + LinkedHashMap fields = new LinkedHashMap<>(); + fields.put("a", "java.lang.Integer"); + fields.put("b", "java.lang.Integer"); + fields.put("c", "java.lang.Integer"); + fields.put("x", "java.lang.Integer"); + fields.put("y", "java.lang.Integer"); + fields.put("z", "java.lang.Integer"); + qryEntity.setFields(fields); + + QueryIndex idx1 = new QueryIndex(); + idx1.setName("IDX_1"); + idx1.setIndexType(QueryIndexType.SORTED); + LinkedHashMap idxFields = new LinkedHashMap<>(); + idxFields.put("a", false); + idxFields.put("b", false); + idxFields.put("c", false); + idx1.setFields(idxFields); + + QueryIndex idx2 = new QueryIndex(); + idx2.setName("IDX_2"); + idx2.setIndexType(QueryIndexType.SORTED); + idxFields = new LinkedHashMap<>(); + idxFields.put("x", false); + idx2.setFields(idxFields); + + QueryIndex idx3 = new QueryIndex(); + idx3.setName("IDX_3"); + idx3.setIndexType(QueryIndexType.SORTED); + idxFields = new LinkedHashMap<>(); + idxFields.put("y", false); + idx3.setFields(idxFields); + + QueryIndex idx4 = new QueryIndex(); + idx4.setName("IDX_4"); + idx4.setIndexType(QueryIndexType.SORTED); + idxFields = new LinkedHashMap<>(); + idxFields.put("z", false); + idx4.setFields(idxFields); + + 
qryEntity.setIndexes(Arrays.asList(idx1, idx2, idx3, idx4)); + + cfg.setCacheConfiguration(new CacheConfiguration() + .setAffinity(new RendezvousAffinityFunction(false, 32)) + .setName(CACHE_NAME) + .setQueryEntities(Collections.singleton(qryEntity))); + + cfg.setDataStorageConfiguration( + new DataStorageConfiguration() + .setDefaultDataRegionConfiguration( + new DataRegionConfiguration() + .setPersistenceEnabled(true) + .setInitialSize(200L * 1024 * 1024) + .setMaxSize(200L * 1024 * 1024) + ) + ); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + cleanPersistenceDir(); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + cleanPersistenceDir(); + } + + /** + * Tests that continuous non-graceful node stop under load doesn't break SQL indexes. + * + * @throws Exception If failed. + */ + @Test + public void test() throws Exception { + for (int i = 0; i < RESTARTS; i++) { + IgniteEx ignite = startGrid(0); + + ignite.cluster().active(true); + + // Ensure that checkpoint isn't running - otherwise validate indexes task may fail. + forceCheckpoint(); + + // Validate indexes on start. 
+ ValidateIndexesClosure clo = new ValidateIndexesClosure(Collections.singleton(CACHE_NAME), 0, 0); + ignite.context().resource().injectGeneric(clo); + VisorValidateIndexesJobResult res = clo.call(); + + assertFalse(res.hasIssues()); + + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(new Runnable() { + @Override public void run() { + IgniteCache cache = ignite.cache(CACHE_NAME); + + int i = 0; + try { + for (; i < LOAD_LOOP; i++) { + ThreadLocalRandom r = ThreadLocalRandom.current(); + + Integer keySeed = r.nextInt(KEY_BOUND); + UserKey key = new UserKey(keySeed); + + if (r.nextBoolean()) + cache.put(key, new UserValue(r.nextLong())); + else + cache.remove(key); + } + + ignite.close(); // Intentionally stop grid while other loaders are still in progress. + } + catch (Exception e) { + log.warning("Failed to update cache after " + i + " loop cycles", e); + } + } + }, THREADS, "loader"); + + fut.get(); + + ignite.close(); + } + } + + /** + * User key. + */ + private static class UserKey { + /** A. */ + private int a; + + /** B. */ + private int b; + + /** C. */ + private int c; + + /** + * @param a A. + * @param b B. + * @param c C. + */ + public UserKey(int a, int b, int c) { + this.a = a; + this.b = b; + this.c = c; + } + + /** + * @param seed Seed. + */ + public UserKey(long seed) { + a = (int)(seed % 17); + b = (int)(seed % 257); + c = (int)(seed % 3001); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return "UserKey{" + + "a=" + a + + ", b=" + b + + ", c=" + c + + '}'; + } + } + + /** + * User value. + */ + private static class UserValue { + /** X. */ + private int x; + + /** Y. */ + private int y; + + /** Z. */ + private int z; + + /** + * @param x X. + * @param y Y. + * @param z Z. + */ + public UserValue(int x, int y, int z) { + this.x = x; + this.y = y; + this.z = z; + } + + /** + * @param seed Seed. 
+ */ + public UserValue(long seed) { + x = (int)(seed % 6991); + y = (int)(seed % 18679); + z = (int)(seed % 31721); + } + + /** {@inheritDoc} */ + @Override public String toString() { + return "UserValue{" + + "x=" + x + + ", y=" + y + + ", z=" + z + + '}'; + } + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryTest.java index c2b5dc2d382ef..2fe864af4557a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryTest.java @@ -18,8 +18,12 @@ package org.apache.ignite.internal.processors.cache.persistence.db.wal; import java.io.File; +import java.io.FilenameFilter; +import java.io.IOException; import java.nio.ByteBuffer; import java.nio.ByteOrder; +import java.nio.file.Files; +import java.nio.file.Paths; import java.util.Arrays; import java.util.HashMap; import java.util.HashSet; @@ -38,6 +42,7 @@ import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteCompute; import org.apache.ignite.IgniteException; +import org.apache.ignite.IgniteLogger; import org.apache.ignite.IgniteSystemProperties; import org.apache.ignite.cache.CacheAtomicityMode; import org.apache.ignite.cache.CacheMode; @@ -53,9 +58,11 @@ import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.configuration.WALMode; +import org.apache.ignite.internal.DiscoverySpiTestListener; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.IgniteInterruptedCheckedException; +import 
org.apache.ignite.internal.managers.discovery.IgniteDiscoverySpi; import org.apache.ignite.internal.pagemem.FullPageId; import org.apache.ignite.internal.pagemem.PageUtils; import org.apache.ignite.internal.pagemem.wal.WALIterator; @@ -67,15 +74,22 @@ import org.apache.ignite.internal.pagemem.wal.record.TxRecord; import org.apache.ignite.internal.pagemem.wal.record.WALRecord; import org.apache.ignite.internal.pagemem.wal.record.delta.PageDeltaRecord; +import org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor; import org.apache.ignite.internal.processors.cache.GridCacheSharedContext; +import org.apache.ignite.internal.processors.cache.IgniteInternalCache; import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager; +import org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry; +import org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntryType; +import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager; import org.apache.ignite.internal.processors.cache.persistence.filename.PdsConsistentIdProcessor; import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetaStorage; import org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx; import org.apache.ignite.internal.processors.cache.persistence.tree.io.TrackingPageIO; +import org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.GridUnsafe; import org.apache.ignite.internal.util.typedef.CA; +import org.apache.ignite.internal.util.lang.IgniteInClosureX; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.PA; import org.apache.ignite.internal.util.typedef.PAX; @@ -94,12 +108,18 @@ import org.apache.ignite.transactions.TransactionConcurrency; import 
org.apache.ignite.transactions.TransactionIsolation; import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import static org.apache.ignite.internal.IgniteNodeAttributes.ATTR_IGNITE_INSTANCE_NAME; +import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.CACHE_DATA_FILENAME; import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.DFLT_STORE_DIR; /** * */ +@RunWith(JUnit4.class) public class IgniteWalRecoveryTest extends GridCommonAbstractTest { /** */ private static final String HAS_CACHE = "HAS_CACHE"; @@ -116,6 +136,9 @@ public class IgniteWalRecoveryTest extends GridCommonAbstractTest { /** */ private static final String RENAMED_CACHE_NAME = "partitioned0"; + /** */ + private static final String CACHE_TO_DESTROY_NAME = "destroyCache"; + /** */ private static final String LOC_CACHE_NAME = "local"; @@ -221,6 +244,7 @@ public class IgniteWalRecoveryTest extends GridCommonAbstractTest { /** * @throws Exception if failed. */ + @Test public void testWalBig() throws Exception { IgniteEx ignite = startGrid(1); @@ -263,6 +287,7 @@ public void testWalBig() throws Exception { /** * @throws Exception if failed. */ + @Test public void testWalBigObjectNodeCancel() throws Exception { final int MAX_SIZE_POWER = 21; @@ -301,6 +326,7 @@ public void testWalBigObjectNodeCancel() throws Exception { /** * @throws Exception If fail. */ + @Test public void testSwitchClassLoader() throws Exception { try { final IgniteEx igniteEx = startGrid(1); @@ -342,6 +368,7 @@ public void testSwitchClassLoader() throws Exception { /** * @throws Exception if failed. */ + @Test public void testWalSimple() throws Exception { try { IgniteEx ignite = startGrid(1); @@ -419,6 +446,7 @@ else if (i % 2 == 0) /** * @throws Exception If fail. 
*/ + @Test public void testWalLargeValue() throws Exception { try { IgniteEx ignite = startGrid(1); @@ -466,9 +494,115 @@ public void testWalLargeValue() throws Exception { } } + /** + * Checks that binary recovery completes successfully when the node is stopped in the middle of a checkpoint. + * Destroys the cache_data.bin file for a particular cache to emulate a missing {@link DynamicCacheDescriptor} + * file (binary recovery should complete successfully in this case). + * + * @throws Exception if failed. + */ + @Test + public void testBinaryRecoverBeforePMEWhenMiddleCheckpoint() throws Exception { + startGrids(3); + + IgniteEx ig2 = grid(2); + + ig2.cluster().active(true); + + IgniteCache cache = ig2.cache(CACHE_NAME); + + for (int i = 1; i <= 4_000; i++) + cache.put(i, new BigObject(i)); + + BigObject objToCheck; + + ig2.getOrCreateCache(CACHE_TO_DESTROY_NAME).put(1, objToCheck = new BigObject(1)); + + GridCacheDatabaseSharedManager dbMgr = (GridCacheDatabaseSharedManager)ig2 + .context().cache().context().database(); + + IgniteInternalFuture cpFinishFut = dbMgr.forceCheckpoint("force checkpoint").finishFuture(); + + // Delete the checkpoint END file to emulate a node stopped in the middle of a checkpoint. + cpFinishFut.listen(new IgniteInClosureX() { + @Override public void applyx(IgniteInternalFuture fut0) throws IgniteCheckedException { + try { + CheckpointEntry cpEntry = dbMgr.checkpointHistory().lastCheckpoint(); + + String cpEndFileName = GridCacheDatabaseSharedManager.checkpointFileName(cpEntry, + CheckpointEntryType.END); + + Files.delete(Paths.get(dbMgr.checkpointDirectory().getAbsolutePath(), cpEndFileName)); + + log.info("Checkpoint marker removed [cpEndFileName=" + cpEndFileName + ']'); + } + catch (IOException e) { + throw new IgniteCheckedException(e); + } + } + }); + + // Resolve the cache directory. Emulates cache destroy in the middle of a checkpoint. 
+ IgniteInternalCache destroyCache = ig2.cachex(CACHE_TO_DESTROY_NAME); + + FilePageStoreManager pageStoreMgr = (FilePageStoreManager)destroyCache.context().shared().pageStore(); + + File destroyCacheWorkDir = pageStoreMgr.cacheWorkDir(destroyCache.configuration()); + + // Stop the whole cluster. + stopAllGrids(); + + // Delete cache_data.bin file for this cache. Binary recovery should complete successfully after it. + final File[] files = destroyCacheWorkDir.listFiles(new FilenameFilter() { + @Override public boolean accept(final File dir, final String name) { + return name.endsWith(CACHE_DATA_FILENAME); + } + }); + + assertTrue(files.length > 0); + + for (final File file : files) + assertTrue("Can't remove " + file.getAbsolutePath(), file.delete()); + + startGrids(2); + + // Prepare Ignite instance configuration with additional Discovery checks. + final String ig2Name = getTestIgniteInstanceName(2); + + final IgniteConfiguration onJoinCfg = optimize(getConfiguration(ig2Name)); + + // Check that restore is called before PME and before the node joins the cluster. + ((IgniteDiscoverySpi)onJoinCfg.getDiscoverySpi()) + .setInternalListener(new DiscoverySpiTestListener() { + @Override public void beforeJoin(ClusterNode locNode, IgniteLogger log) { + String nodeName = locNode.attribute(ATTR_IGNITE_INSTANCE_NAME); + + GridCacheSharedContext sharedCtx = ((IgniteEx)ignite(getTestIgniteInstanceIndex(nodeName))) + .context() + .cache() + .context(); + + if (nodeName.equals(ig2Name)) { + // Checkpoint history initialized on node start. + assertFalse(((GridCacheDatabaseSharedManager)sharedCtx.database()) + .checkpointHistory().checkpoints().isEmpty()); + } + + super.beforeJoin(locNode, log); + } + }); + + Ignite restoredIg2 = startGrid(ig2Name, onJoinCfg); + + awaitPartitionMapExchange(); + + assertEquals(restoredIg2.cache(CACHE_TO_DESTROY_NAME).get(1), objToCheck); + } + /** + * @throws Exception if failed. 
*/ + @Test public void testWalRolloverMultithreadedDefault() throws Exception { logOnly = false; @@ -478,6 +612,7 @@ public void testWalRolloverMultithreadedDefault() throws Exception { /** * @throws Exception if failed. */ + @Test public void testWalRolloverMultithreadedLogOnly() throws Exception { logOnly = true; @@ -487,6 +622,7 @@ public void testWalRolloverMultithreadedLogOnly() throws Exception { /** * @throws Exception if failed. */ + @Test public void testHugeCheckpointRecord() throws Exception { long prevFDTimeout = customFailureDetectionTimeout; @@ -573,6 +709,7 @@ private void checkWalRolloverMultithreaded() throws Exception { /** * @throws Exception If fail. */ + @Test public void testWalRenameDirSimple() throws Exception { try { IgniteEx ignite = startGrid(1); @@ -636,6 +773,7 @@ private File cacheDir(final String cacheName, final String consId) throws Ignite /** * @throws Exception if failed. */ + @Test public void testRecoveryNoCheckpoint() throws Exception { try { IgniteEx ctrlGrid = startGrid(0); @@ -689,6 +827,7 @@ public void testRecoveryNoCheckpoint() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRecoveryLargeNoCheckpoint() throws Exception { try { IgniteEx ctrlGrid = startGrid(0); @@ -744,6 +883,7 @@ public void testRecoveryLargeNoCheckpoint() throws Exception { /** * @throws Exception if failed. */ + @Test public void testRandomCrash() throws Exception { try { IgniteEx ctrlGrid = startGrid(0); @@ -782,6 +922,7 @@ public void testRandomCrash() throws Exception { /** * @throws Exception if failed. */ + @Test public void testLargeRandomCrash() throws Exception { try { IgniteEx ctrlGrid = startGrid(0); @@ -830,6 +971,7 @@ private static class RemoteNodeFilter implements IgnitePredicate { /** * @throws Exception If failed. 
*/ + @Test public void testDestroyCache() throws Exception { try { IgniteEx ignite = startGrid(1); @@ -855,6 +997,7 @@ public void testDestroyCache() throws Exception { /** * @throws Exception If fail. */ + @Test public void testEvictPartition() throws Exception { try { Ignite ignite1 = startGrid("node1"); @@ -899,6 +1042,7 @@ public void testEvictPartition() throws Exception { /** * @throws Exception If fail. */ + @Test public void testMetastorage() throws Exception { try { int cnt = 5000; @@ -964,6 +1108,7 @@ public void testMetastorage() throws Exception { /** * @throws Exception If fail. */ + @Test public void testMetastorageLargeArray() throws Exception { try { int cnt = 5000; @@ -1011,6 +1156,7 @@ public void testMetastorageLargeArray() throws Exception { /** * @throws Exception If fail. */ + @Test public void testMetastorageRemove() throws Exception { try { int cnt = 400; @@ -1064,6 +1210,7 @@ public void testMetastorageRemove() throws Exception { /** * @throws Exception If fail. */ + @Test public void testMetastorageUpdate() throws Exception { try { int cnt = 2000; @@ -1116,6 +1263,7 @@ public void testMetastorageUpdate() throws Exception { /** * @throws Exception If fail. */ + @Test public void testMetastorageWalRestore() throws Exception { try { int cnt = 2000; @@ -1172,6 +1320,7 @@ public void testMetastorageWalRestore() throws Exception { /** * @throws Exception if failed. */ + @Test public void testAbsentDeadlock_Iterator_RollOver_Archivation() throws Exception { try { walSegments = 2; @@ -1240,6 +1389,7 @@ public void testAbsentDeadlock_Iterator_RollOver_Archivation() throws Exception /** * @throws Exception if failed. */ + @Test public void testApplyDeltaRecords() throws Exception { try { IgniteEx ignite0 = (IgniteEx)startGrid("node0"); @@ -1377,6 +1527,7 @@ else if (rec instanceof PageDeltaRecord) { * * @throws Exception If fail. 
*/ + @Test public void testRecoveryOnTransactionalAndPartitionedCache() throws Exception { IgniteEx ignite = (IgniteEx) startGrids(3); ignite.cluster().active(true); @@ -1453,6 +1604,7 @@ public void testRecoveryOnTransactionalAndPartitionedCache() throws Exception { * * @throws Exception If any fail. */ + @Test public void testTxRecordsConsistency() throws Exception { System.setProperty(IgniteSystemProperties.IGNITE_WAL_LOG_TX_RECORDS, "true"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ttl/CacheTtlAbstractSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ttl/CacheTtlAbstractSelfTest.java index c9a5bf6ff9652..34663801f95c2 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ttl/CacheTtlAbstractSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/ttl/CacheTtlAbstractSelfTest.java @@ -38,10 +38,10 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteBiInClosure; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -56,10 +56,8 @@ /** * TTL test. 
*/ +@RunWith(JUnit4.class) public abstract class CacheTtlAbstractSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int MAX_CACHE_SIZE = 5; @@ -112,8 +110,6 @@ public abstract class CacheTtlAbstractSelfTest extends GridCommonAbstractTest { cfg.setCacheConfiguration(ccfg); - ((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - return cfg; } @@ -145,6 +141,7 @@ public abstract class CacheTtlAbstractSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testDefaultTimeToLiveLoadCache() throws Exception { IgniteCache cache = jcache(0); @@ -160,6 +157,7 @@ public void testDefaultTimeToLiveLoadCache() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDefaultTimeToLiveLoadAll() throws Exception { defaultTimeToLiveLoadAll(false); @@ -194,6 +192,7 @@ private void defaultTimeToLiveLoadAll(boolean replaceExisting) throws Exception /** * @throws Exception If failed. */ + @Test public void testDefaultTimeToLiveStreamerAdd() throws Exception { try (IgniteDataStreamer streamer = ignite(0).dataStreamer(DEFAULT_CACHE_NAME)) { for (int i = 0; i < SIZE; i++) @@ -223,6 +222,7 @@ public void testDefaultTimeToLiveStreamerAdd() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDefaultTimeToLivePut() throws Exception { IgniteCache cache = jcache(0); @@ -240,6 +240,7 @@ public void testDefaultTimeToLivePut() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDefaultTimeToLivePutAll() throws Exception { IgniteCache cache = jcache(0); @@ -260,6 +261,7 @@ public void testDefaultTimeToLivePutAll() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDefaultTimeToLivePreload() throws Exception { if (cacheMode() == LOCAL) return; @@ -285,6 +287,7 @@ public void testDefaultTimeToLivePreload() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTimeToLiveTtl() throws Exception { long time = DEFAULT_TIME_TO_LIVE + 2000; @@ -360,4 +363,4 @@ private void checkSizeAfterLive(int gridCnt) throws Exception { assertNull(cache.localPeek(key)); } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/ClientConnectorConfigurationValidationSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/ClientConnectorConfigurationValidationSelfTest.java index 8ce334ee0a44e..593952f8e5349 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/ClientConnectorConfigurationValidationSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/ClientConnectorConfigurationValidationSelfTest.java @@ -40,11 +40,15 @@ import java.sql.Statement; import java.util.concurrent.Callable; import java.util.concurrent.atomic.AtomicInteger; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Client connector configuration validation tests. */ @SuppressWarnings("deprecation") +@RunWith(JUnit4.class) public class ClientConnectorConfigurationValidationSelfTest extends GridCommonAbstractTest { /** Node index generator. */ private static final AtomicInteger NODE_IDX_GEN = new AtomicInteger(); @@ -62,6 +66,7 @@ public class ClientConnectorConfigurationValidationSelfTest extends GridCommonAb * * @throws Exception If failed. */ + @Test public void testDefault() throws Exception { check(new ClientConnectorConfiguration(), true); checkJdbc(null, ClientConnectorConfiguration.DFLT_PORT); @@ -72,6 +77,7 @@ public void testDefault() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testHost() throws Exception { check(new ClientConnectorConfiguration().setHost("126.0.0.1"), false); @@ -88,6 +94,7 @@ public void testHost() throws Exception { * * @throws Exception If failed. */ + @Test public void testPort() throws Exception { check(new ClientConnectorConfiguration().setPort(-1), false); check(new ClientConnectorConfiguration().setPort(0), false); @@ -107,6 +114,7 @@ public void testPort() throws Exception { * * @throws Exception If failed. */ + @Test public void testPortRange() throws Exception { check(new ClientConnectorConfiguration().setPortRange(-1), false); @@ -122,6 +130,7 @@ public void testPortRange() throws Exception { * * @throws Exception If failed. */ + @Test public void testSocketBuffers() throws Exception { check(new ClientConnectorConfiguration().setSocketSendBufferSize(-4 * 1024), false); check(new ClientConnectorConfiguration().setSocketReceiveBufferSize(-4 * 1024), false); @@ -138,6 +147,7 @@ public void testSocketBuffers() throws Exception { * * @throws Exception If failed. */ + @Test public void testMaxOpenCusrorsPerConnection() throws Exception { check(new ClientConnectorConfiguration().setMaxOpenCursorsPerConnection(-1), false); @@ -153,6 +163,7 @@ public void testMaxOpenCusrorsPerConnection() throws Exception { * * @throws Exception If failed. */ + @Test public void testThreadPoolSize() throws Exception { check(new ClientConnectorConfiguration().setThreadPoolSize(0), false); check(new ClientConnectorConfiguration().setThreadPoolSize(-1), false); @@ -166,6 +177,7 @@ public void testThreadPoolSize() throws Exception { * * @throws Exception If failed. */ + @Test public void testOdbcConnectorConversion() throws Exception { int port = ClientConnectorConfiguration.DFLT_PORT - 1; @@ -183,6 +195,7 @@ public void testOdbcConnectorConversion() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testSqlConnectorConversion() throws Exception { int port = ClientConnectorConfiguration.DFLT_PORT - 1; @@ -200,6 +213,7 @@ public void testSqlConnectorConversion() throws Exception { * * @throws Exception If failed. */ + @Test public void testIgnoreOdbcWhenSqlSet() throws Exception { int port = ClientConnectorConfiguration.DFLT_PORT - 1; @@ -218,6 +232,7 @@ public void testIgnoreOdbcWhenSqlSet() throws Exception { * * @throws Exception If failed. */ + @Test public void testIgnoreOdbcAndSqlWhenClientSet() throws Exception { int cliPort = ClientConnectorConfiguration.DFLT_PORT - 1; int sqlPort = ClientConnectorConfiguration.DFLT_PORT - 2; @@ -239,6 +254,7 @@ public void testIgnoreOdbcAndSqlWhenClientSet() throws Exception { * * @throws Exception If failed. */ + @Test public void testIgnoreOdbcWhenClientSet() throws Exception { int cliPort = ClientConnectorConfiguration.DFLT_PORT - 1; int odbcPort = ClientConnectorConfiguration.DFLT_PORT - 2; @@ -258,6 +274,7 @@ public void testIgnoreOdbcWhenClientSet() throws Exception { * * @throws Exception If failed. */ + @Test public void testIgnoreSqlWhenClientSet() throws Exception { int cliPort = ClientConnectorConfiguration.DFLT_PORT - 1; int sqlPort = ClientConnectorConfiguration.DFLT_PORT - 2; @@ -277,7 +294,7 @@ public void testIgnoreSqlWhenClientSet() throws Exception { * * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testDisabled() throws Exception { IgniteConfiguration cfg = baseConfiguration(); @@ -299,6 +316,7 @@ public void testDisabled() throws Exception { * * @throws Exception If failed. */ + @Test public void testJdbcConnectionEnabled() throws Exception { IgniteConfiguration cfg = baseConfiguration(); @@ -317,7 +335,7 @@ public void testJdbcConnectionEnabled() throws Exception { * * @throws Exception If failed. 
*/ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testJdbcConnectionDisabled() throws Exception { IgniteConfiguration cfg = baseConfiguration(); @@ -342,7 +360,7 @@ public void testJdbcConnectionDisabled() throws Exception { * * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") + @Test public void testJdbcConnectionDisabledForDaemon() throws Exception { final IgniteConfiguration cfg = baseConfiguration().setDaemon(true); @@ -395,7 +413,6 @@ private IgniteConfiguration baseConfiguration() throws Exception { * @param cliConnCfg Client connector configuration. * @param success Success flag. * @throws Exception If failed. */ - @SuppressWarnings("ThrowableResultOfMethodCallIgnored") private void check(ClientConnectorConfiguration cliConnCfg, boolean success) throws Exception { final IgniteConfiguration cfg = baseConfiguration(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/IgniteDataStreamerTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/IgniteDataStreamerTest.java index 1bfb02e192952..415c05b000d82 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/IgniteDataStreamerTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/client/IgniteDataStreamerTest.java @@ -28,11 +28,15 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteUuid; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CachePeekMode.ALL; /** */ +@RunWith(JUnit4.class) public class IgniteDataStreamerTest extends GridCommonAbstractTest { public static final String CACHE_NAME = "UUID_CACHE"; @@ -83,6 +87,7 @@ private CacheConfiguration cacheConfiguration(Class key, Class CacheConfiguration cacheConfig(String 
name, boolean partitioned, Class... idxTypes) { @@ -31,6 +35,7 @@ public class IgniteCacheGroupsSqlSegmentedIndexSelfTest extends IgniteSqlSegment /** * @throws Exception If failed. */ + @Test @Override public void testSegmentedPartitionedWithReplicated() throws Exception { log.info("Test is ignored"); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteCachelessQueriesSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteCachelessQueriesSelfTest.java index a7dae9e849421..15c33322dc054 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteCachelessQueriesSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteCachelessQueriesSelfTest.java @@ -31,18 +31,16 @@ import org.apache.ignite.internal.processors.cache.query.GridCacheTwoStepQuery; import org.apache.ignite.internal.processors.query.h2.H2TwoStepCachedQuery; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for behavior in various cases of local and distributed queries. 
*/ +@RunWith(JUnit4.class) public class IgniteCachelessQueriesSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private final static String SELECT = "select count(*) from \"pers\".Person p, \"org\".Organization o where p.orgId = o._key"; @@ -63,12 +61,6 @@ public class IgniteCachelessQueriesSelfTest extends GridCommonAbstractTest { cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -109,6 +101,7 @@ protected CacheConfiguration cacheConfig(String name, TestCacheMode /** * */ + @Test public void testDistributedQueryOnPartitionedCaches() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.PARTITIONED, false, false); @@ -118,6 +111,7 @@ public void testDistributedQueryOnPartitionedCaches() { /** * */ + @Test public void testDistributedQueryOnPartitionedAndReplicatedCache() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.REPLICATED, false, false); @@ -127,15 +121,17 @@ public void testDistributedQueryOnPartitionedAndReplicatedCache() { /** * */ + @Test public void testDistributedQueryOnReplicatedCaches() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.REPLICATED, false, false); - assertDistributedQuery(); + assertLocalQuery(); } /** * */ + @Test public void testDistributedQueryOnSegmentedCaches() { createCachesAndExecuteQuery(TestCacheMode.SEGMENTED, TestCacheMode.SEGMENTED, false, false); @@ -145,6 +141,7 @@ public void testDistributedQueryOnSegmentedCaches() { /** * */ + @Test public void testDistributedQueryOnReplicatedAndSegmentedCache() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.SEGMENTED, false, false); @@ -154,6 +151,7 @@ public void testDistributedQueryOnReplicatedAndSegmentedCache() { /** * */ + @Test public void 
testDistributedQueryOnPartitionedCachesWithReplicatedFlag() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.PARTITIONED, true, false); @@ -163,6 +161,7 @@ public void testDistributedQueryOnPartitionedCachesWithReplicatedFlag() { /** * */ + @Test public void testDistributedQueryOnPartitionedAndReplicatedCacheWithReplicatedFlag() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.REPLICATED, true, false); @@ -172,6 +171,7 @@ public void testDistributedQueryOnPartitionedAndReplicatedCacheWithReplicatedFla /** * */ + @Test public void testLocalQueryOnReplicatedCachesWithReplicatedFlag() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.REPLICATED, true, false); @@ -181,6 +181,7 @@ public void testLocalQueryOnReplicatedCachesWithReplicatedFlag() { /** * */ + @Test public void testDistributedQueryOnSegmentedCachesWithReplicatedFlag() { createCachesAndExecuteQuery(TestCacheMode.SEGMENTED, TestCacheMode.SEGMENTED, true, false); @@ -190,6 +191,7 @@ public void testDistributedQueryOnSegmentedCachesWithReplicatedFlag() { /** * */ + @Test public void testDistributedQueryOnReplicatedAndSegmentedCacheWithReplicatedFlag() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.SEGMENTED, true, false); @@ -199,6 +201,7 @@ public void testDistributedQueryOnReplicatedAndSegmentedCacheWithReplicatedFlag( /** * */ + @Test public void testLocalQueryOnPartitionedCachesWithLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.PARTITIONED, false, true); @@ -208,6 +211,7 @@ public void testLocalQueryOnPartitionedCachesWithLocalFlag() { /** * */ + @Test public void testLocalQueryOnPartitionedAndReplicatedCacheWithLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.REPLICATED, false, true); @@ -217,6 +221,7 @@ public void testLocalQueryOnPartitionedAndReplicatedCacheWithLocalFlag() { /** * */ + @Test public void 
testLocalQueryOnReplicatedCachesWithLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.REPLICATED, false, true); @@ -226,6 +231,7 @@ public void testLocalQueryOnReplicatedCachesWithLocalFlag() { /** * */ + @Test public void testLocalTwoStepQueryOnSegmentedCachesWithLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.SEGMENTED, TestCacheMode.SEGMENTED, false, true); @@ -235,6 +241,7 @@ public void testLocalTwoStepQueryOnSegmentedCachesWithLocalFlag() { /** * */ + @Test public void testLocalTwoStepQueryOnReplicatedAndSegmentedCacheWithLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.SEGMENTED, false, true); @@ -244,6 +251,7 @@ public void testLocalTwoStepQueryOnReplicatedAndSegmentedCacheWithLocalFlag() { /** * */ + @Test public void testLocalQueryOnPartitionedCachesWithReplicatedAndLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.PARTITIONED, false, true); @@ -253,6 +261,7 @@ public void testLocalQueryOnPartitionedCachesWithReplicatedAndLocalFlag() { /** * */ + @Test public void testLocalQueryOnPartitionedAndReplicatedCacheWithReplicatedAndLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.PARTITIONED, TestCacheMode.REPLICATED, true, true); @@ -262,6 +271,7 @@ public void testLocalQueryOnPartitionedAndReplicatedCacheWithReplicatedAndLocalF /** * */ + @Test public void testLocalQueryOnReplicatedCachesWithReplicatedAndLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.REPLICATED, true, true); @@ -271,6 +281,7 @@ public void testLocalQueryOnReplicatedCachesWithReplicatedAndLocalFlag() { /** * */ + @Test public void testLocalTwoStepQueryOnSegmentedCachesWithReplicatedAndLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.SEGMENTED, TestCacheMode.SEGMENTED, true, true); @@ -280,6 +291,7 @@ public void testLocalTwoStepQueryOnSegmentedCachesWithReplicatedAndLocalFlag() { /** * */ + @Test public void 
testLocalTwoStepQueryOnReplicatedAndSegmentedCacheWithReplicatedAndLocalFlag() { createCachesAndExecuteQuery(TestCacheMode.REPLICATED, TestCacheMode.SEGMENTED, true, true); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteQueryDedicatedPoolTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteQueryDedicatedPoolTest.java index b2f4e47ec2048..884ab77ac3a87 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteQueryDedicatedPoolTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteQueryDedicatedPoolTest.java @@ -40,20 +40,19 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.spi.IgniteSpiAdapter; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.spi.indexing.IndexingQueryFilter; import org.apache.ignite.spi.indexing.IndexingSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Ensures that SQL queries are executed in a dedicated thread pool. */ +@RunWith(JUnit4.class) public class IgniteQueryDedicatedPoolTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Name of the cache for test */ private static final String CACHE_NAME = "query_pool_test"; @@ -68,10 +67,6 @@ public class IgniteQueryDedicatedPoolTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi spi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setIndexedTypes(Integer.class, Integer.class); @@ -101,6 +96,7 @@ public class IgniteQueryDedicatedPoolTest extends GridCommonAbstractTest { * @throws Exception If failed. * @see GridCacheTwoStepQuery#isLocal() */ + @Test public void testSqlQueryUsesDedicatedThreadPool() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME); @@ -129,6 +125,7 @@ public void testSqlQueryUsesDedicatedThreadPool() throws Exception { * Tests that Scan queries are executed in dedicated pool * @throws Exception If failed. */ + @Test public void testScanQueryUsesDedicatedThreadPool() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME); @@ -152,6 +149,7 @@ public void testScanQueryUsesDedicatedThreadPool() throws Exception { * Tests that SPI queries are executed in dedicated pool * @throws Exception If failed. 
*/ + @Test public void testSpiQueryUsesDedicatedThreadPool() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlCreateTableTemplateTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlCreateTableTemplateTest.java new file mode 100644 index 0000000000000..ba0ef38560089 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlCreateTableTemplateTest.java @@ -0,0 +1,159 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.query; + +import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.binary.BinaryObjectBuilder; +import org.apache.ignite.cache.affinity.AffinityKeyMapper; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Ensures that SQL queries work for tables created dynamically based on a template. + */ +public class IgniteSqlCreateTableTemplateTest extends GridCommonAbstractTest { + /** {@inheritDoc} */ + @Override public IgniteConfiguration getConfiguration(String name) throws Exception { + IgniteConfiguration configuration = new IgniteConfiguration(); + configuration.setIgniteInstanceName(name); + + CacheConfiguration defaultCacheConfiguration = new CacheConfiguration(); + + defaultCacheConfiguration.setName("DEFAULT_TEMPLATE*"); + + CacheConfiguration customCacheConfiguration = new CacheConfiguration(); + + customCacheConfiguration.setName("CUSTOM_TEMPLATE*"); + + MockAffinityKeyMapper customAffinityMapper = new MockAffinityKeyMapper(); + + customCacheConfiguration.setAffinityMapper(customAffinityMapper); + + configuration.setCacheConfiguration(defaultCacheConfiguration, customCacheConfiguration); + + return configuration; + } + + /** {@inheritDoc} */ + @SuppressWarnings("deprecation") + @Override protected void beforeTestsStarted() throws Exception { + startGrid(); + } + + /** + * Tests that a select statement works on a table with a BinaryObject as a primary key. 
+ */ + @Test + public void testSelectForTableWithDataInsertedWithKeyValueAPI() { + Ignite ignite = grid(); + IgniteCache cache = ignite.getOrCreateCache("test"); + + createTable(cache, "PERSON", "DEFAULT_TEMPLATE"); + + createTable(cache, "ORGANIZATION", "DEFAULT_TEMPLATE"); + + BinaryObjectBuilder keyBuilder = ignite.binary().builder("PERSON_KEY"); + + keyBuilder.setField("ID", 1); + keyBuilder.setField("AFF_PERSON", 2); + + BinaryObjectBuilder valueBuilder = ignite.binary().builder("PERSON_VALUE"); + valueBuilder.setField("NAME", "test"); + + ignite.cache("PERSON_CACHE").withKeepBinary().put(keyBuilder.build(), valueBuilder.build()); + + keyBuilder = ignite.binary().builder("ORGANIZATION_KEY"); + + keyBuilder.setField("ID", 1); + keyBuilder.setField("AFF_ORGANIZATION", 2); + + valueBuilder = ignite.binary().builder("ORGANIZATION_VALUE"); + + valueBuilder.setField("NAME", "test"); + + ignite.cache("ORGANIZATION_CACHE").withKeepBinary().put(keyBuilder.build(), valueBuilder.build()); + + assertEquals(1, ignite.cache("PERSON_CACHE").query( + new SqlFieldsQuery("Select NAME from PERSON where ID = 1")).getAll().size() + ); + + assertEquals(1, ignite.cache("PERSON_CACHE").query( + new SqlFieldsQuery("Select NAME from PERSON where AFF_PERSON = 2")).getAll().size() + ); + + assertEquals(1, ignite.cache("ORGANIZATION_CACHE").query( + new SqlFieldsQuery("Select NAME from ORGANIZATION where AFF_ORGANIZATION = 2")).getAll().size() + ); + + } + + /** + * Creates table based on a template. + * + * @param cache Cache. 
+ * @param tableName Table name. + * @param template Template name. + */ + private void createTable(IgniteCache cache, String tableName, String template) { + String sql = String.format( + "CREATE TABLE IF NOT EXISTS %1$s(\n" + + " ID INT NOT NULL,\n" + + " AFF_%1$s INT NOT NULL,\n" + + " NAME VARCHAR2(100),\n" + + " PRIMARY KEY (ID, AFF_%1$s)\n" + + ") with \"TEMPLATE=%2$s,KEY_TYPE=%1$s_KEY, AFFINITY_KEY=AFF_%1$s, CACHE_NAME=%1$s_CACHE, " + + "VALUE_TYPE=%1$s_VALUE, ATOMICITY=TRANSACTIONAL\";", tableName, template); + + cache.query(new SqlFieldsQuery(sql).setSchema("PUBLIC")); + } + + /** + * When a template has a custom affinity mapper, + * the cache created via the CREATE TABLE command should have the same affinity mapper. + */ + @SuppressWarnings("unchecked") + @Test + public void testCustomAffinityKeyMapperIsNotOverwritten() { + Ignite ignite = grid(); + + IgniteCache cache = ignite.getOrCreateCache("test"); + + createTable(cache, "CUSTOM", "CUSTOM_TEMPLATE"); + + assertTrue(ignite.getOrCreateCache("CUSTOM_CACHE").getConfiguration( + CacheConfiguration.class).getAffinityMapper() instanceof MockAffinityKeyMapper); + } + + /** + * Mock affinity mapper implementation. 
+ */ + @SuppressWarnings("deprecation") + private static class MockAffinityKeyMapper implements AffinityKeyMapper { + /** {@inheritDoc} */ + @Override public Object affinityKey(Object key) { + return null; + } + + /** {@inheritDoc} */ + @Override public void reset() { + // no-op + } + } +} \ No newline at end of file diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDefaultValueTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDefaultValueTest.java index f4c05397960e4..4da1940661725 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDefaultValueTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDefaultValueTest.java @@ -29,16 +29,16 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ @SuppressWarnings("ThrowableNotThrown") +@RunWith(JUnit4.class) public class IgniteSqlDefaultValueTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Name of client node. 
*/ private static final String NODE_CLIENT = "client"; @@ -49,12 +49,7 @@ public class IgniteSqlDefaultValueTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration c = super.getConfiguration(gridName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - disco.setForceServerMode(true); - - c.setDiscoverySpi(disco); + ((TcpDiscoverySpi)c.getDiscoverySpi()).setForceServerMode(true); if (gridName.equals(NODE_CLIENT)) c.setClientMode(true); @@ -88,6 +83,7 @@ public class IgniteSqlDefaultValueTest extends GridCommonAbstractTest { /** */ + @Test public void testDefaultValueColumn() { sql("CREATE TABLE TEST (id int, val0 varchar DEFAULT 'default-val', primary key (id))"); sql("INSERT INTO TEST (id) VALUES (?)", 1); @@ -107,6 +103,7 @@ public void testDefaultValueColumn() { /** */ + @Test public void testDefaultValueColumnAfterUpdate() { sql("CREATE TABLE TEST (id int, val0 varchar DEFAULT 'default-val', val1 varchar, primary key (id))"); sql("INSERT INTO TEST (id, val1) VALUES (?, ?)", 1, "val-10"); @@ -138,6 +135,7 @@ public void testDefaultValueColumnAfterUpdate() { /** */ + @Test public void testEmptyValueNullDefaults() { sql("CREATE TABLE TEST (id int, val0 varchar, primary key (id))"); sql("INSERT INTO TEST (id) VALUES (?)", 1); @@ -155,6 +153,7 @@ public void testEmptyValueNullDefaults() { /** */ + @Test public void testAddColumnWithDefaults() { sql("CREATE TABLE TEST (id int, val0 varchar, primary key (id))"); @@ -169,6 +168,7 @@ public void testAddColumnWithDefaults() { /** */ + @Test public void testDefaultTypes() { assertEquals("Check tinyint", (byte)28, getDefaultObject("TINYINT", "28")); assertEquals("Check smallint", (short)28, getDefaultObject("SMALLINT", "28")); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDistributedJoinSelfTest.java 
b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDistributedJoinSelfTest.java index 4b993eca16ec1..f3968ad49ff7a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDistributedJoinSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlDistributedJoinSelfTest.java @@ -25,18 +25,16 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for correct distributed sql joins. */ +@RunWith(JUnit4.class) public class IgniteSqlDistributedJoinSelfTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static final int NODES_COUNT = 2; @@ -52,12 +50,6 @@ public class IgniteSqlDistributedJoinSelfTest extends GridCommonAbstractTest { cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -84,6 +76,7 @@ protected CacheConfiguration cacheConfig(String name, boolean partitioned, Class /** * @throws Exception If failed. 
*/ + @Test public void testNonCollocatedDistributedJoin() throws Exception { CacheConfiguration ccfg1 = cacheConfig("pers", true, String.class, Person.class); CacheConfiguration ccfg2 = cacheConfig("org", true, String.class, Organization.class); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlEntryCacheModeAgnosticTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlEntryCacheModeAgnosticTest.java index db7ca399d7d65..e87e72345dbe7 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlEntryCacheModeAgnosticTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlEntryCacheModeAgnosticTest.java @@ -23,12 +23,12 @@ import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL; import static org.apache.ignite.cache.CacheMode.LOCAL; @@ -38,10 +38,8 @@ /** * Test different cache modes for query entry */ +@RunWith(JUnit4.class) public class IgniteSqlEntryCacheModeAgnosticTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Host. 
*/ public static final String HOST = "127.0.0.1"; @@ -60,12 +58,6 @@ public class IgniteSqlEntryCacheModeAgnosticTest extends GridCommonAbstractTest c.setLocalHost(HOST); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - c.setCacheConfiguration(cacheConfiguration(LOCAL_CACHE_NAME), cacheConfiguration(REPLICATED_CACHE_NAME), cacheConfiguration(PARTITIONED_CACHE_NAME)); @@ -116,6 +108,7 @@ private CacheConfiguration cacheConfiguration(String cacheName) throws Exception /** * It should not matter what cache mode does entry cache use, if there is no join */ + @Test public void testCrossCacheModeQuery() throws Exception { Ignite ignite = startGrid(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatCollocatedTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatCollocatedTest.java index 05d29bfe65b66..36250e9044ecc 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatCollocatedTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatCollocatedTest.java @@ -30,15 +30,16 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for GROUP_CONCAT aggregate function in collocated mode. 
*/ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteSqlGroupConcatCollocatedTest extends GridCommonAbstractTest { /** */ private static final int CLIENT = 7; @@ -49,19 +50,10 @@ public class IgniteSqlGroupConcatCollocatedTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "cache"; - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration( new CacheConfiguration(CACHE_NAME) .setAffinity(new RendezvousAffinityFunction().setPartitions(8)) @@ -115,6 +107,7 @@ public class IgniteSqlGroupConcatCollocatedTest extends GridCommonAbstractTest { /** * */ + @Test public void testGroupConcatSimple() { IgniteCache c = ignite(CLIENT).cache(CACHE_NAME); @@ -139,6 +132,7 @@ public void testGroupConcatSimple() { /** * */ + @Test public void testGroupConcatOrderBy() { IgniteCache c = ignite(CLIENT).cache(CACHE_NAME); @@ -164,6 +158,7 @@ public void testGroupConcatOrderBy() { /** * */ + @Test public void testGroupConcatWithDistinct() { IgniteCache c = ignite(CLIENT).cache(CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatNotCollocatedTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatNotCollocatedTest.java index fbd38f7aeabfc..af40c763929a0 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatNotCollocatedTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlGroupConcatNotCollocatedTest.java @@ -28,16 +28,17 @@ import 
org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for GROUP_CONCAT aggregate function in not collocated mode. */ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteSqlGroupConcatNotCollocatedTest extends GridCommonAbstractTest { /** */ private static final int CLIENT = 7; @@ -45,19 +46,10 @@ public class IgniteSqlGroupConcatNotCollocatedTest extends GridCommonAbstractTes /** */ private static final String CACHE_NAME = "cache"; - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - cfg.setCacheConfiguration( new CacheConfiguration(CACHE_NAME) .setAffinity(new RendezvousAffinityFunction().setPartitions(8)) @@ -99,6 +91,7 @@ public class IgniteSqlGroupConcatNotCollocatedTest extends GridCommonAbstractTes /** * */ + @Test public void testGroupConcatSimple() { IgniteCache c = ignite(CLIENT).cache(CACHE_NAME); @@ -122,6 +115,7 @@ public void testGroupConcatSimple() { /** * */ + @Test public void testGroupConcatCountDistinct() { IgniteCache c = ignite(CLIENT).cache(CACHE_NAME); @@ -145,6 +139,7 @@ public void 
testGroupConcatCountDistinct() { /** * */ + @Test public void testGroupConcatDistributedException() { final IgniteCache c = ignite(CLIENT).cache(CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlKeyValueFieldsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlKeyValueFieldsTest.java index d63be7cf3624c..36a1cc0137341 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlKeyValueFieldsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlKeyValueFieldsTest.java @@ -1,392 +1,392 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.internal.processors.query; - -import org.apache.ignite.IgniteCache; -import org.apache.ignite.IgniteCheckedException; -import org.apache.ignite.cache.QueryEntity; -import org.apache.ignite.cache.QueryIndex; -import org.apache.ignite.cache.query.QueryCursor; -import org.apache.ignite.cache.query.SqlFieldsQuery; -import org.apache.ignite.configuration.CacheConfiguration; -import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.internal.IgniteEx; -import org.apache.ignite.internal.binary.BinaryMarshaller; -import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; -import org.apache.ignite.internal.util.typedef.F; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; -import org.apache.ignite.testframework.GridTestUtils; -import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -import java.util.ArrayList; -import java.util.Arrays; -import java.util.LinkedHashMap; -import java.util.List; -import java.util.concurrent.Callable; - -/** - * Test hidden _key, _val, _ver columns - */ -public class IgniteSqlKeyValueFieldsTest extends GridCommonAbstractTest { - - /** IP finder. 
*/ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - - /** */ - private static String NODE_BAD_CONF_MISS_KEY_FIELD = "badConf1"; - /** */ - private static String NODE_BAD_CONF_MISS_VAL_FIELD = "badConf2"; - /** */ - private static String NODE_CLIENT = "client"; - - /** */ - private static String CACHE_PERSON_NO_KV = "PersonNoKV"; - /** */ - private static String CACHE_INT_NO_KV_TYPE = "IntNoKVType"; - /** */ - private static String CACHE_PERSON = "Person"; - /** */ - private static String CACHE_JOB = "Job"; - - - /** {@inheritDoc} */ - @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { - IgniteConfiguration c = super.getConfiguration(gridName); - - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - - c.setMarshaller(new BinaryMarshaller()); - - List ccfgs = new ArrayList<>(); - CacheConfiguration ccfg = buildCacheConfiguration(gridName); - if (ccfg != null) - ccfgs.add(ccfg); - - ccfgs.add(buildCacheConfiguration(CACHE_PERSON_NO_KV)); - ccfgs.add(buildCacheConfiguration(CACHE_INT_NO_KV_TYPE)); - ccfgs.add(buildCacheConfiguration(CACHE_PERSON)); - ccfgs.add(buildCacheConfiguration(CACHE_JOB)); - - c.setCacheConfiguration(ccfgs.toArray(new CacheConfiguration[ccfgs.size()])); - if (gridName.equals(NODE_CLIENT)) - c.setClientMode(true); - - return c; - } - - /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { - super.beforeTest(); - - startGrid(0); - startGrid(NODE_CLIENT); - } - - /** {@inheritDoc} */ - @Override protected void afterTest() throws Exception { - super.afterTest(); - - stopAllGrids(); - } - - private CacheConfiguration buildCacheConfiguration(String name) { - if (name.equals(NODE_BAD_CONF_MISS_KEY_FIELD)) { - CacheConfiguration ccfg = new CacheConfiguration(NODE_BAD_CONF_MISS_KEY_FIELD); - QueryEntity qe = new QueryEntity(Object.class.getName(), Object.class.getName()); - 
qe.setKeyFieldName("k"); - qe.addQueryField("a", Integer.class.getName(), null); - ccfg.setQueryEntities(F.asList(qe)); - return ccfg; - } - else if (name.equals(NODE_BAD_CONF_MISS_VAL_FIELD)) { - CacheConfiguration ccfg = new CacheConfiguration(NODE_BAD_CONF_MISS_VAL_FIELD); - QueryEntity qe = new QueryEntity(Object.class.getName(), Object.class.getName()); - qe.setValueFieldName("v"); - qe.addQueryField("a", Integer.class.getName(), null); - ccfg.setQueryEntities(F.asList(qe)); - return ccfg; - } - else if (name.equals(CACHE_PERSON_NO_KV)) { - CacheConfiguration ccfg = new CacheConfiguration(CACHE_PERSON_NO_KV); - - QueryEntity entity = new QueryEntity(); - - entity.setKeyType(Integer.class.getName()); - entity.setValueType(Person.class.getName()); - - LinkedHashMap fields = new LinkedHashMap<>(); - fields.put("name", String.class.getName()); - fields.put("age", Integer.class.getName()); - - entity.setFields(fields); - - ccfg.setQueryEntities(Arrays.asList(entity)); - return ccfg; - } - else if (name.equals(CACHE_INT_NO_KV_TYPE)) { - CacheConfiguration ccfg = new CacheConfiguration(CACHE_INT_NO_KV_TYPE); - QueryEntity entity = new QueryEntity(); - - entity.setKeyType(null); - entity.setValueType(null); - - entity.setKeyFieldName("id"); - entity.setValueFieldName("v"); - - LinkedHashMap fields = new LinkedHashMap<>(); - fields.put("id", Integer.class.getName()); - fields.put("v", Integer.class.getName()); - - entity.setFields(fields); - - ccfg.setQueryEntities(Arrays.asList(entity)); - return ccfg; - } - else if (name.equals(CACHE_PERSON)) { - CacheConfiguration ccfg = new CacheConfiguration(CACHE_PERSON); - - QueryEntity entity = new QueryEntity(); - - entity.setKeyType(Integer.class.getName()); - entity.setValueType(Person.class.getName()); - - entity.setKeyFieldName("id"); - entity.setValueFieldName("v"); - - LinkedHashMap fields = new LinkedHashMap<>(); - fields.put("name", String.class.getName()); - fields.put("age", Integer.class.getName()); - - 
fields.put(entity.getKeyFieldName(), entity.getKeyType()); - fields.put(entity.getValueFieldName(), entity.getValueType()); - - entity.setFields(fields); - - ccfg.setQueryEntities(Arrays.asList(entity)); - return ccfg; - } - else if (name.equals(CACHE_JOB)) { - CacheConfiguration ccfg = new CacheConfiguration(CACHE_JOB); - ccfg.setIndexedTypes(Integer.class, Integer.class); - return ccfg; - } - return null; - } - - /** Test for setIndexedTypes() primitive types */ - public void testSetIndexTypesPrimitive() throws Exception { - IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_JOB); - - checkInsert(cache, "insert into Integer (_key, _val) values (?,?)", 1, 100); - - checkSelect(cache, "select * from Integer", 1, 100); - checkSelect(cache, "select _key, _val from Integer", 1, 100); - } - - /** Test configuration error : keyFieldName is missing from fields */ - public void testErrorKeyFieldMissingFromFields() throws Exception { - checkCacheStartupError(NODE_BAD_CONF_MISS_KEY_FIELD); - } - - /** Test configuration error : valueFieldName is missing from fields */ - public void testErrorValueFieldMissingFromFields() throws Exception { - checkCacheStartupError(NODE_BAD_CONF_MISS_VAL_FIELD); - } - - /** */ - private void checkCacheStartupError(final String name) { - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - startGrid(name); - - return null; - } - }, IgniteCheckedException.class, null); - } - - /** - * Check that it is allowed to leave QE.keyType and QE.valueType unset - * in case keyFieldName and valueFieldName are set and present in fields - */ - public void testQueryEntityAutoKeyValTypes() throws Exception { - IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_INT_NO_KV_TYPE); - - checkInsert(cache, "insert into Integer (_key, _val) values (?,?)", 1, 100); - - checkSelect(cache, "select * from Integer where id = 1", 1, 100); - - checkSelect(cache, "select * from Integer", 1, 100); - checkSelect(cache, "select 
_key, _val from Integer", 1, 100); - checkSelect(cache, "select id, v from Integer", 1, 100); - } - - /** Check that it is possible to not have keyFieldName and valueFieldName */ - public void testNoKeyValueAliases() throws Exception { - IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON_NO_KV); - - Person alice = new Person("Alice", 1); - checkInsert(cache, "insert into Person (_key, _val) values (?,?)", 1, alice); - - checkSelect(cache, "select * from Person", alice.name, alice.age); - checkSelect(cache, "select _key, _val from Person", 1, alice); - } - - /** Check keyFieldName and valueFieldName columns access */ - public void testKeyValueAlias() throws Exception { - //_key, _val, _ver | name, age, id, v - Person alice = new Person("Alice", 1); - Person bob = new Person("Bob", 2); - - IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON); - - checkInsert(cache, "insert into Person (_key, _val) values (?,?)", 1, alice); - checkInsert(cache, "insert into Person (id, v) values (?,?)", 2, bob); - - checkSelect(cache, "select * from Person where _key=1", alice.name, alice.age, 1, alice); - checkSelect(cache, "select _key, _val from Person where id=1", 1, alice); - - checkSelect(cache, "select * from Person where _key=2", bob.name, bob.age, 2, bob); - checkSelect(cache, "select _key, _val from Person where id=2", 2, bob); - - checkInsert(cache, "update Person set age = ? where id = ?", 3, 1); - checkSelect(cache, "select _key, age from Person where id=1", 1, 3); - - checkInsert(cache, "update Person set v = ? 
where id = ?", alice, 1); - checkSelect(cache, "select _key, _val from Person where id=1", 1, alice); - } - - /** Check _ver version field is accessible */ - public void testVersionField() throws Exception { - Person alice = new Person("Alice", 1); - Person bob = new Person("Bob", 2); - - IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON); - - checkInsert(cache, "insert into Person (id, v) values (?,?)", 1, alice); - assertNotNull(getVersion(cache, 1)); - - checkInsert(cache, "insert into Person (id, v) values (?,?)", 2, bob); - assertNotNull(getVersion(cache, 2)); - - GridCacheVersion v1 = getVersion(cache, 1); - - checkInsert(cache, "update Person set age = ? where id = ?", 3, 1); - - GridCacheVersion v2 = getVersion(cache, 1); - - assertFalse( v1.equals(v2) ); - } - - /** Check that joins are working on keyFieldName, valueFieldName columns */ - public void testJoinKeyValFields() throws Exception { - IgniteEx client = grid(NODE_CLIENT); - IgniteCache cache = client.cache(CACHE_PERSON); - IgniteCache cache2 = client.cache(CACHE_JOB); - - checkInsert(cache, "insert into Person (id, v) values (?, ?)", 1, new Person("Bob", 30)); - checkInsert(cache, "insert into Person (id, v) values (?, ?)", 2, new Person("David", 35)); - checkInsert(cache2, "insert into Integer (_key, _val) values (?, ?)", 100, 1); - checkInsert(cache2, "insert into Integer (_key, _val) values (?, ?)", 200, 2); - - QueryCursor> cursor = cache.query(new SqlFieldsQuery("select p.id, j._key from Person p, \""+ CACHE_JOB +"\".Integer j where p.id = j._val")); - List> results = cursor.getAll(); - assertEquals(2, results.size()); - assertEquals(1, results.get(0).get(0)); - assertEquals(100, results.get(0).get(1)); - assertEquals(2, results.get(1).get(0)); - assertEquals(200, results.get(1).get(1)); - } - - /** Check automatic addition of index for keyFieldName column */ - public void testAutoKeyFieldIndex() throws Exception { - IgniteEx client = grid(NODE_CLIENT); - IgniteCache cache = 
client.cache(CACHE_PERSON); - - QueryCursor> cursor = cache.query(new SqlFieldsQuery("explain select * from Person where id = 1")); - List> results = cursor.getAll(); - assertEquals(2, results.size()); - assertTrue(((String)results.get(0).get(0)).contains("\"_key_PK_proxy\"")); - - cursor = cache.query(new SqlFieldsQuery("explain select * from Person where _key = 1")); - results = cursor.getAll(); - assertEquals(2, results.size()); - assertTrue(((String)results.get(0).get(0)).contains("\"_key_PK\"")); - } - - /** */ - private GridCacheVersion getVersion(IgniteCache cache, int key) { - QueryCursor> cursor = cache.query(new SqlFieldsQuery("select _ver from Person where id = ?").setArgs(key)); - List> results = cursor.getAll(); - assertEquals(1, results.size()); - return ((GridCacheVersion) results.get(0).get(0)); - } - - /** */ - private void checkInsert(IgniteCache cache, String qry, Object ... args) throws Exception { - QueryCursor> cursor = cache.query(new SqlFieldsQuery(qry).setArgs(args)); - assertEquals(1, ((Number) cursor.getAll().get(0).get(0)).intValue()); - } - - /** */ - private void checkSelect(IgniteCache cache, String selectQry, Object ... 
expected) { - QueryCursor> cursor = cache.query(new SqlFieldsQuery(selectQry)); - - List> results = cursor.getAll(); - - assertEquals(1, results.size()); - - List row0 = results.get(0); - for(int col = 0; col < expected.length; ++col) - assertEquals(expected[col], row0.get(col)); - } - - /** */ - private static class Person { - /** */ - private String name; - - /** */ - private int age; - - /** */ - public Person(String name, int age) { - this.name = name; - this.age = age; - } - - /** */ - @Override public int hashCode() { - return name.hashCode() ^ age; - } - - /** */ - @Override public boolean equals(Object o) { - if (this == o) - return true; - if (!(o instanceof Person)) - return false; - Person other = (Person)o; - return name.equals(other.name) && age == other.age; - } - } -} +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.query; + +import org.apache.ignite.IgniteCache; +import org.apache.ignite.IgniteCheckedException; +import org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.cache.query.QueryCursor; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.binary.BinaryMarshaller; +import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; +import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.concurrent.Callable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +/** + * Test hidden _key, _val, _ver columns + */ +@RunWith(JUnit4.class) +public class IgniteSqlKeyValueFieldsTest extends GridCommonAbstractTest { + /** */ + private static String NODE_BAD_CONF_MISS_KEY_FIELD = "badConf1"; + /** */ + private static String NODE_BAD_CONF_MISS_VAL_FIELD = "badConf2"; + /** */ + private static String NODE_CLIENT = "client"; + + /** */ + private static String CACHE_PERSON_NO_KV = "PersonNoKV"; + /** */ + private static String CACHE_INT_NO_KV_TYPE = "IntNoKVType"; + /** */ + private static String CACHE_PERSON = "Person"; + /** */ + private static String CACHE_JOB = "Job"; + + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { + IgniteConfiguration c = super.getConfiguration(gridName); + + c.setMarshaller(new BinaryMarshaller()); + + List ccfgs = new ArrayList<>(); + CacheConfiguration ccfg = buildCacheConfiguration(gridName); + if (ccfg != null) + 
ccfgs.add(ccfg); + + ccfgs.add(buildCacheConfiguration(CACHE_PERSON_NO_KV)); + ccfgs.add(buildCacheConfiguration(CACHE_INT_NO_KV_TYPE)); + ccfgs.add(buildCacheConfiguration(CACHE_PERSON)); + ccfgs.add(buildCacheConfiguration(CACHE_JOB)); + + c.setCacheConfiguration(ccfgs.toArray(new CacheConfiguration[ccfgs.size()])); + if (gridName.equals(NODE_CLIENT)) + c.setClientMode(true); + + return c; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + startGrid(0); + startGrid(NODE_CLIENT); + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + super.afterTest(); + + stopAllGrids(); + } + + private CacheConfiguration buildCacheConfiguration(String name) { + if (name.equals(NODE_BAD_CONF_MISS_KEY_FIELD)) { + CacheConfiguration ccfg = new CacheConfiguration(NODE_BAD_CONF_MISS_KEY_FIELD); + QueryEntity qe = new QueryEntity(Object.class.getName(), Object.class.getName()); + qe.setKeyFieldName("k"); + qe.addQueryField("a", Integer.class.getName(), null); + ccfg.setQueryEntities(F.asList(qe)); + return ccfg; + } + else if (name.equals(NODE_BAD_CONF_MISS_VAL_FIELD)) { + CacheConfiguration ccfg = new CacheConfiguration(NODE_BAD_CONF_MISS_VAL_FIELD); + QueryEntity qe = new QueryEntity(Object.class.getName(), Object.class.getName()); + qe.setValueFieldName("v"); + qe.addQueryField("a", Integer.class.getName(), null); + ccfg.setQueryEntities(F.asList(qe)); + return ccfg; + } + else if (name.equals(CACHE_PERSON_NO_KV)) { + CacheConfiguration ccfg = new CacheConfiguration(CACHE_PERSON_NO_KV); + + QueryEntity entity = new QueryEntity(); + + entity.setKeyType(Integer.class.getName()); + entity.setValueType(Person.class.getName()); + + LinkedHashMap fields = new LinkedHashMap<>(); + fields.put("name", String.class.getName()); + fields.put("age", Integer.class.getName()); + + entity.setFields(fields); + + ccfg.setQueryEntities(Arrays.asList(entity)); + return ccfg; + } + else if 
(name.equals(CACHE_INT_NO_KV_TYPE)) { + CacheConfiguration ccfg = new CacheConfiguration(CACHE_INT_NO_KV_TYPE); + QueryEntity entity = new QueryEntity(); + + entity.setKeyType(null); + entity.setValueType(null); + + entity.setKeyFieldName("id"); + entity.setValueFieldName("v"); + + LinkedHashMap fields = new LinkedHashMap<>(); + fields.put("id", Integer.class.getName()); + fields.put("v", Integer.class.getName()); + + entity.setFields(fields); + + ccfg.setQueryEntities(Arrays.asList(entity)); + return ccfg; + } + else if (name.equals(CACHE_PERSON)) { + CacheConfiguration ccfg = new CacheConfiguration(CACHE_PERSON); + + QueryEntity entity = new QueryEntity(); + + entity.setKeyType(Integer.class.getName()); + entity.setValueType(Person.class.getName()); + + entity.setKeyFieldName("id"); + entity.setValueFieldName("v"); + + LinkedHashMap fields = new LinkedHashMap<>(); + fields.put("name", String.class.getName()); + fields.put("age", Integer.class.getName()); + + fields.put(entity.getKeyFieldName(), entity.getKeyType()); + fields.put(entity.getValueFieldName(), entity.getValueType()); + + entity.setFields(fields); + + ccfg.setQueryEntities(Arrays.asList(entity)); + return ccfg; + } + else if (name.equals(CACHE_JOB)) { + CacheConfiguration ccfg = new CacheConfiguration(CACHE_JOB); + ccfg.setIndexedTypes(Integer.class, Integer.class); + return ccfg; + } + return null; + } + + /** Test for setIndexedTypes() primitive types */ + @Test + public void testSetIndexTypesPrimitive() throws Exception { + IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_JOB); + + checkInsert(cache, "insert into Integer (_key, _val) values (?,?)", 1, 100); + + checkSelect(cache, "select * from Integer", 1, 100); + checkSelect(cache, "select _key, _val from Integer", 1, 100); + } + + /** Test configuration error : keyFieldName is missing from fields */ + @Test + public void testErrorKeyFieldMissingFromFields() throws Exception { + checkCacheStartupError(NODE_BAD_CONF_MISS_KEY_FIELD); + } + + /** 
Test configuration error : valueFieldName is missing from fields */
+    @Test
+    public void testErrorValueFieldMissingFromFields() throws Exception {
+        checkCacheStartupError(NODE_BAD_CONF_MISS_VAL_FIELD);
+    }
+
+    /** */
+    private void checkCacheStartupError(final String name) {
+        GridTestUtils.assertThrows(log, new Callable() {
+            @Override public Void call() throws Exception {
+                startGrid(name);
+
+                return null;
+            }
+        }, IgniteCheckedException.class, null);
+    }
+
+    /**
+     * Check that it is allowed to leave QE.keyType and QE.valueType unset
+     * in case keyFieldName and valueFieldName are set and present in fields
+     */
+    @Test
+    public void testQueryEntityAutoKeyValTypes() throws Exception {
+        IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_INT_NO_KV_TYPE);
+
+        checkInsert(cache, "insert into Integer (_key, _val) values (?,?)", 1, 100);
+
+        checkSelect(cache, "select * from Integer where id = 1", 1, 100);
+
+        checkSelect(cache, "select * from Integer", 1, 100);
+        checkSelect(cache, "select _key, _val from Integer", 1, 100);
+        checkSelect(cache, "select id, v from Integer", 1, 100);
+    }
+
+    /** Check that it is possible to not have keyFieldName and valueFieldName */
+    @Test
+    public void testNoKeyValueAliases() throws Exception {
+        IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON_NO_KV);
+
+        Person alice = new Person("Alice", 1);
+        checkInsert(cache, "insert into Person (_key, _val) values (?,?)", 1, alice);
+
+        checkSelect(cache, "select * from Person", alice.name, alice.age);
+        checkSelect(cache, "select _key, _val from Person", 1, alice);
+    }
+
+    /** Check keyFieldName and valueFieldName columns access */
+    @Test
+    public void testKeyValueAlias() throws Exception {
+        //_key, _val, _ver | name, age, id, v
+        Person alice = new Person("Alice", 1);
+        Person bob = new Person("Bob", 2);
+
+        IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON);
+
+        checkInsert(cache, "insert into Person (_key, _val) values (?,?)", 1, alice);
+        checkInsert(cache, "insert into Person (id, v) values (?,?)", 2, bob);
+
+        checkSelect(cache, "select * from Person where _key=1", alice.name, alice.age, 1, alice);
+        checkSelect(cache, "select _key, _val from Person where id=1", 1, alice);
+
+        checkSelect(cache, "select * from Person where _key=2", bob.name, bob.age, 2, bob);
+        checkSelect(cache, "select _key, _val from Person where id=2", 2, bob);
+
+        checkInsert(cache, "update Person set age = ? where id = ?", 3, 1);
+        checkSelect(cache, "select _key, age from Person where id=1", 1, 3);
+
+        checkInsert(cache, "update Person set v = ? where id = ?", alice, 1);
+        checkSelect(cache, "select _key, _val from Person where id=1", 1, alice);
+    }
+
+    /** Check _ver version field is accessible */
+    @Test
+    public void testVersionField() throws Exception {
+        Person alice = new Person("Alice", 1);
+        Person bob = new Person("Bob", 2);
+
+        IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON);
+
+        checkInsert(cache, "insert into Person (id, v) values (?,?)", 1, alice);
+        assertNotNull(getVersion(cache, 1));
+
+        checkInsert(cache, "insert into Person (id, v) values (?,?)", 2, bob);
+        assertNotNull(getVersion(cache, 2));
+
+        GridCacheVersion v1 = getVersion(cache, 1);
+
+        checkInsert(cache, "update Person set age = ? where id = ?", 3, 1);
+
+        GridCacheVersion v2 = getVersion(cache, 1);
+
+        assertFalse(v1.equals(v2));
+    }
+
+    /** Check that joins are working on keyFieldName, valueFieldName columns */
+    @Test
+    public void testJoinKeyValFields() throws Exception {
+        IgniteEx client = grid(NODE_CLIENT);
+        IgniteCache cache = client.cache(CACHE_PERSON);
+        IgniteCache cache2 = client.cache(CACHE_JOB);
+
+        checkInsert(cache, "insert into Person (id, v) values (?, ?)", 1, new Person("Bob", 30));
+        checkInsert(cache, "insert into Person (id, v) values (?, ?)", 2, new Person("David", 35));
+        checkInsert(cache2, "insert into Integer (_key, _val) values (?, ?)", 100, 1);
+        checkInsert(cache2, "insert into Integer (_key, _val) values (?, ?)", 200, 2);
+
+        QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery("select p.id, j._key from Person p, \"" + CACHE_JOB + "\".Integer j where p.id = j._val"));
+        List<List<?>> results = cursor.getAll();
+        assertEquals(2, results.size());
+        assertEquals(1, results.get(0).get(0));
+        assertEquals(100, results.get(0).get(1));
+        assertEquals(2, results.get(1).get(0));
+        assertEquals(200, results.get(1).get(1));
+    }
+
+    /** Check automatic addition of index for keyFieldName column */
+    @Test
+    public void testAutoKeyFieldIndex() throws Exception {
+        IgniteEx client = grid(NODE_CLIENT);
+        IgniteCache cache = client.cache(CACHE_PERSON);
+
+        QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery("explain select * from Person where id = 1"));
+        List<List<?>> results = cursor.getAll();
+        assertEquals(2, results.size());
+        assertTrue(((String)results.get(0).get(0)).contains("\"_key_PK_proxy\""));
+
+        cursor = cache.query(new SqlFieldsQuery("explain select * from Person where _key = 1"));
+        results = cursor.getAll();
+        assertEquals(2, results.size());
+        assertTrue(((String)results.get(0).get(0)).contains("\"_key_PK\""));
+    }
+
+    /** */
+    private GridCacheVersion getVersion(IgniteCache cache, int key) {
+        QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery("select _ver from Person where id = ?").setArgs(key));
+        List<List<?>> results = cursor.getAll();
+        assertEquals(1, results.size());
+        return ((GridCacheVersion) results.get(0).get(0));
+    }
+
+    /** */
+    private void checkInsert(IgniteCache cache, String qry, Object ... args) throws Exception {
+        QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(qry).setArgs(args));
+        assertEquals(1, ((Number) cursor.getAll().get(0).get(0)).intValue());
+    }
+
+    /** */
+    private void checkSelect(IgniteCache cache, String selectQry, Object ... expected) {
+        QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(selectQry));
+
+        List<List<?>> results = cursor.getAll();
+
+        assertEquals(1, results.size());
+
+        List row0 = results.get(0);
+        for(int col = 0; col < expected.length; ++col)
+            assertEquals(expected[col], row0.get(col));
+    }
+
+    /** */
+    private static class Person {
+        /** */
+        private String name;
+
+        /** */
+        private int age;
+
+        /** */
+        public Person(String name, int age) {
+            this.name = name;
+            this.age = age;
+        }
+
+        /** */
+        @Override public int hashCode() {
+            return name.hashCode() ^ age;
+        }
+
+        /** */
+        @Override public boolean equals(Object o) {
+            if (this == o)
+                return true;
+            if (!(o instanceof Person))
+                return false;
+            Person other = (Person)o;
+            return name.equals(other.name) && age == other.age;
+        }
+    }
+}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlNotNullConstraintTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlNotNullConstraintTest.java
index 3a90c990927f0..33660b24c1df6 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlNotNullConstraintTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlNotNullConstraintTest.java
@@ -50,20 +50,19 @@
 import org.apache.ignite.internal.util.typedef.X;
 import org.apache.ignite.lang.IgniteBiInClosure;
 import org.apache.ignite.lang.IgniteBiTuple;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 import org.apache.ignite.transactions.Transaction;
 import org.apache.ignite.transactions.TransactionConcurrency;
 import org.apache.ignite.transactions.TransactionIsolation;
 import org.jetbrains.annotations.Nullable;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /** */
+@RunWith(JUnit4.class)
 public class IgniteSqlNotNullConstraintTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** Name of client node. */
     private static String NODE_CLIENT = "client";
@@ -112,13 +111,6 @@
     @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(gridName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-        disco.setForceServerMode(true);
-
-        c.setDiscoverySpi(disco);
-
         List ccfgs = new ArrayList<>();
 
         ccfgs.addAll(cacheConfigurations());
@@ -250,6 +242,7 @@ private CacheConfiguration buildCacheConfigurationRestricted(String cacheName, b
     }
 
     /** */
+    @Test
     public void testQueryEntityGetSetNotNullFields() throws Exception {
         QueryEntity qe = new QueryEntity();
@@ -267,6 +260,7 @@ public void testQueryEntityGetSetNotNullFields() throws Exception {
     }
 
     /** */
+    @Test
     public void testQueryEntityEquals() throws Exception {
         QueryEntity a = new QueryEntity();
@@ -284,6 +278,7 @@ public void testQueryEntityEquals() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxPut() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -301,6 +296,7 @@ public void testAtomicOrImplicitTxPut() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxPutIfAbsent() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -318,6 +314,7 @@ public void testAtomicOrImplicitTxPutIfAbsent() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxGetAndPut() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -335,6 +332,7 @@ public void testAtomicOrImplicitTxGetAndPut() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxGetAndPutIfAbsent() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -352,6 +350,7 @@ public void testAtomicOrImplicitTxGetAndPutIfAbsent() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxReplace() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -370,6 +369,7 @@ public void testAtomicOrImplicitTxReplace() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxGetAndReplace() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -388,16 +388,19 @@ public void testAtomicOrImplicitTxGetAndReplace() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxPutAll() throws Exception {
         doAtomicOrImplicitTxPutAll(F.asMap(1, okValue, 5, badValue), 1);
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxPutAllForSingleValue() throws Exception {
         doAtomicOrImplicitTxPutAll(F.asMap(5, badValue), 0);
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxInvoke() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -413,6 +416,7 @@ public void testAtomicOrImplicitTxInvoke() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxInvokeAll() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -434,6 +438,7 @@ key1, new TestEntryProcessor(okValue),
     }
 
     /** */
+    @Test
     public void testTxPutCreate() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -456,6 +461,7 @@ public void testTxPutCreate() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxPutUpdate() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -479,6 +485,7 @@ public void testTxPutUpdate() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxPutIfAbsent() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -500,6 +507,7 @@ public void testTxPutIfAbsent() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxGetAndPut() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -521,6 +529,7 @@ public void testTxGetAndPut() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxGetAndPutIfAbsent() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -542,6 +551,7 @@ public void testTxGetAndPutIfAbsent() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxReplace() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -566,6 +576,7 @@ public void testTxReplace() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxGetAndReplace() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -590,16 +601,19 @@ public void testTxGetAndReplace() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxPutAll() throws Exception {
         doTxPutAll(F.asMap(1, okValue, 5, badValue));
     }
 
     /** */
+    @Test
     public void testTxPutAllForSingleValue() throws Exception {
         doTxPutAll(F.asMap(5, badValue));
     }
 
     /** */
+    @Test
     public void testTxInvoke() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -624,6 +638,7 @@ public void testTxInvoke() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxInvokeAll() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -650,6 +665,7 @@ key1, new TestEntryProcessor(okValue),
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxInvokeDelete() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -663,6 +679,7 @@ public void testAtomicOrImplicitTxInvokeDelete() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicOrImplicitTxInvokeAllDelete() throws Exception {
         executeWithAllCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -679,6 +696,7 @@ key1, new TestEntryProcessor(null),
     }
 
     /** */
+    @Test
     public void testTxInvokeDelete() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -696,6 +714,7 @@ public void testTxInvokeDelete() throws Exception {
     }
 
     /** */
+    @Test
     public void testTxInvokeAllDelete() throws Exception {
         executeWithAllTxCaches(new TestClosure() {
             @Override public void run() throws Exception {
@@ -716,6 +735,7 @@ key1, new TestEntryProcessor(null),
     }
 
     /** */
+    @Test
     public void testDynamicTableCreateNotNullFieldsAllowed() throws Exception {
         executeSql("CREATE TABLE test(id INT PRIMARY KEY, field INT NOT NULL)");
@@ -733,6 +753,7 @@ public void testDynamicTableCreateNotNullFieldsAllowed() throws Exception {
     }
 
     /** */
+    @Test
     public void testAlterTableAddColumnNotNullFieldAllowed() throws Exception {
         executeSql("CREATE TABLE test(id INT PRIMARY KEY, age INT)");
@@ -742,11 +763,13 @@ public void testAlterTableAddColumnNotNullFieldAllowed() throws Exception {
     }
 
     /** */
+    @Test
     public void testAtomicNotNullCheckDmlInsertValues() throws Exception {
         checkNotNullCheckDmlInsertValues(CacheAtomicityMode.ATOMIC);
     }
 
     /** */
+    @Test
     public void testTransactionalNotNullCheckDmlInsertValues() throws Exception {
         checkNotNullCheckDmlInsertValues(CacheAtomicityMode.TRANSACTIONAL);
     }
@@ -777,11 +800,13 @@ private void checkNotNullCheckDmlInsertValues(CacheAtomicityMode atomicityMode)
     }
 
     /** */
+    @Test
     public void testAtomicAddColumnNotNullCheckDmlInsertValues() throws Exception {
         checkAddColumnNotNullCheckDmlInsertValues(CacheAtomicityMode.ATOMIC);
     }
 
     /** */
+    @Test
     public void testTransactionalAddColumnNotNullCheckDmlInsertValues() throws Exception {
         checkAddColumnNotNullCheckDmlInsertValues(CacheAtomicityMode.TRANSACTIONAL);
     }
@@ -814,6 +839,7 @@ private void checkAddColumnNotNullCheckDmlInsertValues(CacheAtomicityMode atomic
     }
 
     /** */
+    @Test
     public void testNotNullCheckDmlInsertFromSelect() throws Exception {
         executeSql("CREATE TABLE test(id INT PRIMARY KEY, name VARCHAR, age INT)");
@@ -839,6 +865,7 @@ public void testNotNullCheckDmlInsertFromSelect() throws Exception {
     }
 
     /** */
+    @Test
     public void testNotNullCheckDmlUpdateValues() throws Exception {
         executeSql("CREATE TABLE test(id INT PRIMARY KEY, name VARCHAR NOT NULL)");
@@ -865,6 +892,7 @@ public void testNotNullCheckDmlUpdateValues() throws Exception {
     }
 
     /** */
+    @Test
     public void testNotNullCheckDmlUpdateFromSelect() throws Exception {
         executeSql("CREATE TABLE src(id INT PRIMARY KEY, name VARCHAR)");
         executeSql("CREATE TABLE dest(id INT PRIMARY KEY, name VARCHAR NOT NULL)");
@@ -912,6 +940,7 @@ public void testNotNullCheckDmlUpdateFromSelect() throws Exception {
     }
 
     /** Check QueryEntity configuration fails with NOT NULL field and read-through. */
+    @Test
     public void testReadThroughRestrictionQueryEntity() throws Exception {
         // Node start-up failure (read-through cache store).
         GridTestUtils.assertThrowsAnyCause(log, new Callable() {
@@ -930,6 +959,7 @@ public void testReadThroughRestrictionQueryEntity() throws Exception {
     }
 
     /** Check QueryEntity configuration fails with NOT NULL field and cache interceptor. */
+    @Test
     public void testInterceptorRestrictionQueryEntity() throws Exception {
         // Node start-up failure (interceptor).
         GridTestUtils.assertThrowsAnyCause(log, new Callable() {
@@ -948,6 +978,7 @@ public void testInterceptorRestrictionQueryEntity() throws Exception {
     }
 
     /** Check create table fails with NOT NULL field and read-through. */
+    @Test
     public void testReadThroughRestrictionCreateTable() throws Exception {
         GridTestUtils.assertThrowsAnyCause(log, new Callable() {
             @Override public Object call() throws Exception {
@@ -958,6 +989,7 @@ public void testReadThroughRestrictionCreateTable() throws Exception {
     }
 
     /** Check create table fails with NOT NULL field and cache interceptor. */
+    @Test
     public void testInterceptorRestrictionCreateTable() throws Exception {
         GridTestUtils.assertThrowsAnyCause(log, new Callable() {
             @Override public Object call() throws Exception {
@@ -968,6 +1000,7 @@ public void testInterceptorRestrictionCreateTable() throws Exception {
     }
 
     /** Check alter table fails with NOT NULL field and read-through. */
+    @Test
     public void testReadThroughRestrictionAlterTable() throws Exception {
         executeSql("CREATE TABLE test(id INT PRIMARY KEY, age INT) " +
             "WITH \"template=" + CACHE_READ_THROUGH + "\"");
@@ -980,6 +1013,7 @@ public void testReadThroughRestrictionAlterTable() throws Exception {
     }
 
     /** Check alter table fails with NOT NULL field and cache interceptor. */
+    @Test
     public void testInterceptorRestrictionAlterTable() throws Exception {
         executeSql("CREATE TABLE test(id INT PRIMARY KEY, age INT) " +
             "WITH \"template=" + CACHE_INTERCEPTOR + "\"");
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlParameterizedQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlParameterizedQueryTest.java
index b5039cd4cb9ea..2c1416ba64326 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlParameterizedQueryTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlParameterizedQueryTest.java
@@ -30,9 +30,10 @@
 import org.apache.ignite.cache.query.annotations.QuerySqlField;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test sql queries with parameters for all types.
@@ -41,10 +42,8 @@
  * @author Sergey Chernolyas &sergey_chernolyas@gmail.com&
  * @see IGNITE-6286
  */
+@RunWith(JUnit4.class)
 public class IgniteSqlParameterizedQueryTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final String CACHE_BOOKMARK = "Bookmark";
@@ -55,12 +54,6 @@ public class IgniteSqlParameterizedQueryTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(gridName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        c.setDiscoverySpi(disco);
-
         c.setCacheConfiguration(buildCacheConfiguration(CACHE_BOOKMARK));
 
         if (gridName.equals(NODE_CLIENT))
             c.setClientMode(true);
@@ -119,6 +112,7 @@ private Object columnValue(String field, Object val) {
      * testing parametrized query by field with supported type
      * @throws Exception if any error occurs
      */
+    @Test
     public void testSupportedTypes() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_BOOKMARK);
         Bookmark bookmark = new Bookmark();
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlQueryParallelismTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlQueryParallelismTest.java
index e4321086cbe2d..31f663366e113 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlQueryParallelismTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlQueryParallelismTest.java
@@ -28,20 +28,18 @@
 import org.apache.ignite.cache.query.annotations.QuerySqlField;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * A test against setting different values of query parallelism in cache configurations of the same cache.
 */
 @SuppressWarnings("unchecked")
+@RunWith(JUnit4.class)
 public class IgniteSqlQueryParallelismTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private boolean isClient = false;
@@ -61,12 +59,6 @@ public class IgniteSqlQueryParallelismTest extends GridCommonAbstractTest {
         cfg.setCacheConfiguration(ccfg1, ccfg2);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
         return cfg;
     }
@@ -97,6 +89,7 @@ private static CacheConfiguration cacheConfig(String name, Class... idxTypes)
     /**
      * @throws Exception If failed.
     */
+    @Test
     public void testIndexSegmentationOnClient() throws Exception {
         IgniteCache c1 = ignite(0).cache("org");
         IgniteCache c2 = ignite(0).cache("pers");
@@ -124,6 +117,7 @@ public void testIndexSegmentationOnClient() throws Exception {
 
     /**
      * @throws Exception If failed.
      */
+    @Test
     public void testIndexSegmentation() throws Exception {
         IgniteCache c1 = ignite(0).cache("org");
         IgniteCache c2 = ignite(0).cache("pers");
@@ -199,4 +193,4 @@ public Organization(String name) {
             this.name = name;
         }
     }
-}
\ No newline at end of file
+}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlRoutingTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlRoutingTest.java
index 4976ee8a04c96..7a70d7fc8055c 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlRoutingTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlRoutingTest.java
@@ -29,8 +29,6 @@
 import org.apache.ignite.events.Event;
 import org.apache.ignite.internal.binary.BinaryMarshaller;
 import org.apache.ignite.lang.IgnitePredicate;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
 
 import java.nio.ByteBuffer;
@@ -48,15 +46,16 @@
 import java.util.UUID;
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.atomic.AtomicInteger;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static org.apache.ignite.events.EventType.EVT_CACHE_QUERY_EXECUTED;
 
 /** Tests for query partitions derivation. */
+@RunWith(JUnit4.class)
 public class IgniteSqlRoutingTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static String NODE_CLIENT = "client";
@@ -79,12 +78,6 @@ public class IgniteSqlRoutingTest extends GridCommonAbstractTest {
     @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
         IgniteConfiguration c = super.getConfiguration(gridName);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(IP_FINDER);
-
-        c.setDiscoverySpi(disco);
-
         c.setMarshaller(new BinaryMarshaller());
 
         List ccfgs = new ArrayList<>();
@@ -176,6 +169,7 @@ private CacheConfiguration buildCacheConfiguration(String name) {
     }
 
     /** */
+    @Test
     public void testUnicastQuerySelectAffinityKeyEqualsConstant() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -189,6 +183,7 @@ public void testUnicastQuerySelectAffinityKeyEqualsConstant() throws Exception {
     }
 
     /** */
+    @Test
     public void testUnicastQuerySelectAffinityKeyEqualsParameter() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -203,6 +198,7 @@ public void testUnicastQuerySelectAffinityKeyEqualsParameter() throws Exception
     }
 
     /** */
+    @Test
     public void testUnicastQuerySelectKeyEqualsParameterReused() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON);
@@ -219,6 +215,7 @@ public void testUnicastQuerySelectKeyEqualsParameterReused() throws Exception {
     }
 
     /** */
+    @Test
     public void testUnicastQuerySelectKeyEqualsParameter() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -236,6 +233,7 @@ public void testUnicastQuerySelectKeyEqualsParameter() throws Exception {
     }
 
     /** Check group, having, ordering allowed to be unicast requests. */
+    @Test
     public void testUnicastQueryGroups() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -257,6 +255,7 @@ public void testUnicastQueryGroups() throws Exception {
     }
 
     /** */
+    @Test
     public void testUnicastQuerySelectKeyEqualAndFieldParameter() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -274,6 +273,7 @@ public void testUnicastQuerySelectKeyEqualAndFieldParameter() throws Exception {
     }
 
     /** */
+    @Test
     public void testUnicastQuerySelect2KeyEqualsAndFieldParameter() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -296,6 +296,7 @@ public void testUnicastQuerySelect2KeyEqualsAndFieldParameter() throws Exception
     }
 
     /** */
+    @Test
     public void testUnicastQueryKeyTypeConversionParameter() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON);
@@ -313,6 +314,7 @@ public void testUnicastQueryKeyTypeConversionParameter() throws Exception {
     }
 
     /** */
+    @Test
     public void testUnicastQueryKeyTypeConversionConstant() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_PERSON);
@@ -329,6 +331,7 @@ public void testUnicastQueryKeyTypeConversionConstant() throws Exception {
     }
 
     /** */
+    @Test
     public void testUnicastQueryAffinityKeyTypeConversionParameter() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -344,6 +347,7 @@ public void testUnicastQueryAffinityKeyTypeConversionParameter() throws Exceptio
     }
 
     /** */
+    @Test
     public void testUnicastQueryAffinityKeyTypeConversionConstant() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -358,6 +362,7 @@ public void testUnicastQueryAffinityKeyTypeConversionConstant() throws Exception
     }
 
     /** */
+    @Test
     public void testBroadcastQuerySelectKeyEqualsOrFieldParameter() throws Exception {
         IgniteCache cache = grid(NODE_CLIENT).cache(CACHE_CALL);
@@ -371,6 +376,7 @@ public void testBroadcastQuerySelectKeyEqualsOrFieldParameter() throws Exception
     }
 
     /** */
+    @Test
     public void testUuidKeyAsByteArrayParameter() throws Exception {
         String cacheName = "uuidCache";
@@ -411,6 +417,7 @@ public void testUuidKeyAsByteArrayParameter() throws Exception {
     }
 
     /** */
+    @Test
     public void testDateKeyAsTimestampParameter() throws Exception {
         String cacheName = "dateCache";
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSchemaIndexingTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSchemaIndexingTest.java
index 2dee617502424..534cf5becf9c1 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSchemaIndexingTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSchemaIndexingTest.java
@@ -31,32 +31,24 @@
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests {@link IgniteH2Indexing} support {@link CacheConfiguration#setSqlSchema(String)} configuration.
 */
 @SuppressWarnings("unchecked")
+@RunWith(JUnit4.class)
 public class IgniteSqlSchemaIndexingTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration getConfiguration() throws Exception {
         IgniteConfiguration cfg = super.getConfiguration();
 
         cfg.setPeerClassLoadingEnabled(false);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
         return cfg;
     }
@@ -85,10 +77,11 @@ private static CacheConfiguration cacheConfig(String name, boolean partitioned,
      *
      * @throws Exception If failed.
     */
+    @Test
     public void testCaseSensitive() throws Exception {
         //TODO rewrite with dynamic cache creation, and GRID start in #beforeTest after resolve of
-        //TODO https://issues.apache.org/jira/browse/IGNITE-1094
-        fail("https://issues.apache.org/jira/browse/IGNITE-1094");
+        //TODO IGNITE-1094
+        fail("https://issues.apache.org/jira/browse/IGNITE-10723");
 
         GridTestUtils.assertThrows(log, new Callable() {
             @Override public Object call() throws Exception {
@@ -114,11 +107,11 @@ public void testCaseSensitive() throws Exception {
      *
      * @throws Exception If failed.
     */
-    @SuppressWarnings("ThrowableResultOfMethodCallIgnored")
+    @Test
     public void testCustomSchemaMultipleCachesTablesCollision() throws Exception {
         //TODO: Rewrite with dynamic cache creation, and GRID start in #beforeTest after resolve of
-        //TODO: https://issues.apache.org/jira/browse/IGNITE-1094
-        fail("https://issues.apache.org/jira/browse/IGNITE-1094");
+        //TODO: IGNITE-1094
+        fail("https://issues.apache.org/jira/browse/IGNITE-10723");
 
         GridTestUtils.assertThrows(log, new Callable() {
             @Override public Object call() throws Exception {
@@ -144,6 +137,7 @@ public void testCustomSchemaMultipleCachesTablesCollision() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testCacheUnregistration() throws Exception {
         startGridsMultiThreaded(3, true);
@@ -182,6 +176,7 @@ public void testCacheUnregistration() throws Exception {
 
     /**
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSchemaEscapeAll() throws Exception {
         startGridsMultiThreaded(3, true);
@@ -239,7 +234,7 @@ private static void escapeCheckSchemaName(final IgniteCache cache
         }
     }
 
-    // TODO add tests with dynamic cache unregistration, after resolve of https://issues.apache.org/jira/browse/IGNITE-1094
+    // TODO add tests with dynamic cache unregistration - IGNITE-1094 resolved
 
     /** Test class as query entity */
     private static class Fact {
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSegmentedIndexSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSegmentedIndexSelfTest.java
index 389a1ab6529a0..4d786e4ce390f 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSegmentedIndexSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSegmentedIndexSelfTest.java
@@ -34,18 +34,16 @@
 import org.apache.ignite.cache.query.annotations.QuerySqlField;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests for correct distributed queries with index consisted of many segments.
 */
+@RunWith(JUnit4.class)
 public class IgniteSqlSegmentedIndexSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
-
     /** */
     private static final String ORG_CACHE_NAME = "org";
@@ -74,12 +72,6 @@ public class IgniteSqlSegmentedIndexSelfTest extends GridCommonAbstractTest {
         cfg.setPeerClassLoadingEnabled(false);
 
-        TcpDiscoverySpi disco = new TcpDiscoverySpi();
-
-        disco.setIpFinder(ipFinder);
-
-        cfg.setDiscoverySpi(disco);
-
         return cfg;
     }
@@ -118,6 +110,7 @@ protected CacheConfiguration cacheConfig(String name, boolean parti
     /**
     * @throws Exception If failed.
     */
+    @Test
     public void testSegmentedIndex() throws Exception {
         ignite(0).createCache(cacheConfig(PERSON_CAHE_NAME, true, Integer.class, Person.class));
         ignite(0).createCache(cacheConfig(ORG_CACHE_NAME, true, Integer.class, Organization.class));
@@ -135,6 +128,7 @@ public void testSegmentedIndex() throws Exception {
 
     /**
     * Check correct index snapshots with segmented indices.
     * @throws Exception If failed.
    */
+    @Test
     public void testSegmentedIndexReproducableResults() throws Exception {
         ignite(0).createCache(cacheConfig(ORG_CACHE_NAME, true, Integer.class, Organization.class));
@@ -160,6 +154,7 @@ public void testSegmentedIndexReproducableResults() throws Exception {
 
     /**
     * Checks correct select count(*) result with segmented indices.
     * @throws Exception If failed.
    */
+    @Test
     public void testSegmentedIndexSizeReproducableResults() throws Exception {
         ignite(0).createCache(cacheConfig(ORG_CACHE_NAME, true, Integer.class, Organization.class));
@@ -186,6 +181,7 @@ public void testSegmentedIndexSizeReproducableResults() throws Exception {
 
     /**
     *
     * @throws Exception If failed.
    */
+    @Test
     public void testSegmentedIndexWithEvictionPolicy() throws Exception {
         final IgniteCache cache = ignite(0).createCache(
             cacheConfig(ORG_CACHE_NAME, true, Integer.class, Organization.class)
@@ -209,6 +205,7 @@ public void testSegmentedIndexWithEvictionPolicy() throws Exception {
 
     /**
     *
     * @throws Exception If failed.
    */
+    @Test
     public void testSizeOnSegmentedIndexWithEvictionPolicy() throws Exception {
         final IgniteCache cache = ignite(0).createCache(
             cacheConfig(ORG_CACHE_NAME, true, Integer.class, Organization.class)
@@ -232,6 +229,7 @@ public void testSizeOnSegmentedIndexWithEvictionPolicy() throws Exception {
 
     /**
    *
     * @throws Exception If failed.
     */
+    @Test
     public void testSegmentedPartitionedWithReplicated() throws Exception {
         ignite(0).createCache(cacheConfig(PERSON_CAHE_NAME, true, Integer.class, Person.class));
         ignite(0).createCache(cacheConfig(ORG_CACHE_NAME, false, Integer.class, Organization.class));
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest.java
index 6d1b32b3b65bd..3b4c761d96f46 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest.java
@@ -32,17 +32,16 @@
 import org.apache.ignite.internal.IgniteEx;
 import org.apache.ignite.internal.processors.cache.query.SqlFieldsQueryEx;
 import org.apache.ignite.internal.util.typedef.F;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests
for {@link SqlFieldsQueryEx#skipReducerOnUpdate} flag. */ +@RunWith(JUnit4.class) public class IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static int NODE_COUNT = 4; @@ -71,12 +70,6 @@ public class IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest extends GridCommonAbstr @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration c = super.getConfiguration(gridName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - List ccfgs = new ArrayList<>(); ccfgs.add(buildCacheConfiguration(CACHE_ACCOUNT)); @@ -189,6 +182,7 @@ private CacheConfiguration buildCacheConfiguration(String name) { * * @throws Exception If failed. */ + @Test public void testUpdate() throws Exception { Map accounts = getAccounts(100, 1, 100); @@ -201,6 +195,7 @@ public void testUpdate() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdateFastKey() throws Exception { Map accounts = getAccounts(100, 1, 100); @@ -214,6 +209,7 @@ public void testUpdateFastKey() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdateLimit() throws Exception { Map accounts = getAccounts(100, 1, 100); @@ -227,6 +223,7 @@ public void testUpdateLimit() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdateWhereSubquery() throws Exception { Map accounts = getAccounts(100, 1, -100); @@ -245,6 +242,7 @@ public void testUpdateWhereSubquery() throws Exception { * * @throws Exception If failed. */ + @Test public void testUpdateSetSubquery() throws Exception { Map accounts = getAccounts(100, 1, 1000); Map trades = getTrades(100, 2); @@ -262,6 +260,7 @@ public void testUpdateSetSubquery() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testUpdateSetTableSubquery() throws Exception { Map accounts = getAccounts(100, 1, 1000); Map trades = getTrades(100, 2); @@ -279,6 +278,7 @@ public void testUpdateSetTableSubquery() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsertValues() throws Exception { String text = "INSERT INTO \"acc\".Account (_key, name, sn, depo)" + " VALUES (?, ?, ?, ?), (?, ?, ?, ?)"; @@ -291,6 +291,7 @@ public void testInsertValues() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsertFromSelect() throws Exception { Map accounts = getAccounts(100, 1, 1000); @@ -307,6 +308,7 @@ public void testInsertFromSelect() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsertFromSelectOrderBy() throws Exception { Map accounts = getAccounts(100, 1, 1000); @@ -324,6 +326,7 @@ public void testInsertFromSelectOrderBy() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsertFromSelectUnion() throws Exception { Map accounts = getAccounts(20, 1, 1000); @@ -342,6 +345,7 @@ public void testInsertFromSelectUnion() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsertFromSelectGroupBy() throws Exception { Map accounts = getAccounts(100, 1, 1000); Map trades = getTrades(100, 2); @@ -362,6 +366,7 @@ public void testInsertFromSelectGroupBy() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsertFromSelectDistinct() throws Exception { Map accounts = getAccounts(100, 2, 100); @@ -378,6 +383,7 @@ public void testInsertFromSelectDistinct() throws Exception { * * @throws Exception If failed. */ + @Test public void testInsertFromSelectJoin() throws Exception { Map accounts = getAccounts(100, 1, 100); Map stocks = getStocks(5); @@ -397,6 +403,7 @@ public void testInsertFromSelectJoin() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testDelete() throws Exception { Map accounts = getAccounts(100, 1, 100); @@ -412,6 +419,7 @@ public void testDelete() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeleteTop() throws Exception { Map accounts = getAccounts(100, 1, 100); @@ -427,6 +435,7 @@ public void testDeleteTop() throws Exception { * * @throws Exception If failed. */ + @Test public void testDeleteWhereSubquery() throws Exception { Map accounts = getAccounts(20, 1, 100); Map trades = getTrades(10, 2); @@ -445,6 +454,7 @@ public void testDeleteWhereSubquery() throws Exception { * * @throws Exception If failed. */ + @Test public void testMergeValues() throws Exception { Map accounts = getAccounts(1, 1, 100); @@ -459,6 +469,7 @@ public void testMergeValues() throws Exception { * * @throws Exception If failed. */ + @Test public void testMergeFromSelectJoin() throws Exception { Map accounts = getAccounts(100, 1, 100); Map stocks = getStocks(5); @@ -482,6 +493,7 @@ public void testMergeFromSelectJoin() throws Exception { * * @throws Exception If failed. */ + @Test public void testMergeFromSelectOrderBy() throws Exception { Map accounts = getAccounts(100, 1, 1000); @@ -503,6 +515,7 @@ public void testMergeFromSelectOrderBy() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testMergeFromSelectGroupBy() throws Exception { Map accounts = getAccounts(100, 1, 1000); Map trades = getTrades(100, 2); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlSelfTest.java index 12def671c6bea..f41d52f11c0ac 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSkipReducerOnUpdateDmlSelfTest.java @@ -50,10 +50,11 @@ import org.apache.ignite.internal.util.lang.GridAbsPredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.lang.IgnitePredicate; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.events.EventType.EVT_CACHE_QUERY_EXECUTED; @@ -61,11 +62,9 @@ /** * Tests for distributed DML. */ -@SuppressWarnings({"unchecked", "ThrowableResultOfMethodCallIgnored"}) +@SuppressWarnings({"unchecked"}) +@RunWith(JUnit4.class) public class IgniteSqlSkipReducerOnUpdateDmlSelfTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ private static int NODE_COUNT = 4; @@ -91,12 +90,6 @@ public class IgniteSqlSkipReducerOnUpdateDmlSelfTest extends GridCommonAbstractT @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration c = super.getConfiguration(gridName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - List ccfgs = new ArrayList<>(); ccfgs.add(buildCacheConfiguration(CACHE_ORG)); @@ -201,6 +194,7 @@ private CacheConfiguration buildCacheConfiguration(String name) { * * @throws Exception if failed. */ + @Test public void testSimpleUpdateDistributedReplicated() throws Exception { fillCaches(); @@ -220,6 +214,7 @@ public void testSimpleUpdateDistributedReplicated() throws Exception { * * @throws Exception if failed. */ + @Test public void testSimpleUpdateDistributedPartitioned() throws Exception { fillCaches(); @@ -236,6 +231,7 @@ public void testSimpleUpdateDistributedPartitioned() throws Exception { * * @throws Exception if failed. */ + @Test public void testDistributedUpdateFailedKeys() throws Exception { // UPDATE can produce failed keys due to concurrent modification fillCaches(); @@ -254,6 +250,7 @@ public void testDistributedUpdateFailedKeys() throws Exception { * * @throws Exception if failed. */ + @Test public void testDistributedUpdateFail() throws Exception { fillCaches(); @@ -272,6 +269,7 @@ public void testDistributedUpdateFail() throws Exception { * @throws Exception if failed. */ @SuppressWarnings("ConstantConditions") + @Test public void testQueryParallelism() throws Exception { String cacheName = CACHE_ORG + "x4"; @@ -294,6 +292,7 @@ public void testQueryParallelism() throws Exception { * * @throws Exception if failed. 
*/ + @Test public void testEvents() throws Exception { final CountDownLatch latch = new CountDownLatch(NODE_COUNT); @@ -332,6 +331,7 @@ public void testEvents() throws Exception { * * @throws Exception if failed. */ + @Test public void testSpecificPartitionsUpdate() throws Exception { fillCaches(); @@ -365,6 +365,7 @@ public void testSpecificPartitionsUpdate() throws Exception { * * @throws Exception if failed. */ + @Test public void testCancel() throws Exception { latch = new CountDownLatch(NODE_COUNT + 1); @@ -407,6 +408,7 @@ public void testCancel() throws Exception { * * @throws Exception if failed. */ + @Test public void testNodeStopDuringUpdate() throws Exception { startGrid(NODE_COUNT + 1); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSplitterSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSplitterSelfTest.java index df226b2fcdd6b..49ae546625b63 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSplitterSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/IgniteSqlSplitterSelfTest.java @@ -51,25 +51,24 @@ import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.SB; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.testsuites.IgniteIgnore; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.springframework.util.StringUtils; /** * Tests for correct distributed partitioned queries. 
*/ @SuppressWarnings("unchecked") +@RunWith(JUnit4.class) public class IgniteSqlSplitterSelfTest extends GridCommonAbstractTest { /** */ private static final int CLIENT = 7; - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); @@ -80,12 +79,6 @@ public class IgniteSqlSplitterSelfTest extends GridCommonAbstractTest { cfg.setPeerClassLoadingEnabled(false); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -120,6 +113,7 @@ private static CacheConfiguration cacheConfig(String name, boolean partitioned, * Tests offset and limit clauses for query. * @throws Exception If failed. */ + @Test public void testOffsetLimit() throws Exception { IgniteCache c = ignite(0).getOrCreateCache(cacheConfig("ints", true, Integer.class, Integer.class)); @@ -160,7 +154,9 @@ public void testOffsetLimit() throws Exception { /** */ - public void _testMergeJoin() { + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10199") + @Test + public void testMergeJoin() { IgniteCache c = ignite(CLIENT).getOrCreateCache(cacheConfig("org", true, Integer.class, Org.class)); @@ -187,6 +183,8 @@ public void _testMergeJoin() { } } + /** */ + @Test public void testPushDownSubquery() { IgniteCache c = ignite(CLIENT).getOrCreateCache(cacheConfig("ps", true, Integer.class, Person.class)); @@ -232,6 +230,7 @@ public void testPushDownSubquery() { /** */ + @Test public void testPushDown() { IgniteCache c = ignite(CLIENT).getOrCreateCache(cacheConfig("ps", true, Integer.class, Person.class)); @@ -278,6 +277,7 @@ public void testPushDown() { /** */ + @Test public void testPushDownLeftJoin() { IgniteCache c = ignite(0).getOrCreateCache(cacheConfig("ps", true, Integer.class, Person.class)); @@ 
-328,48 +328,56 @@ public void testPushDownLeftJoin() { /** */ + @Test public void testReplicatedTablesUsingPartitionedCache() { doTestReplicatedTablesUsingPartitionedCache(1, false, false); } /** */ + @Test public void testReplicatedTablesUsingPartitionedCacheSegmented() { doTestReplicatedTablesUsingPartitionedCache(5, false, false); } /** */ + @Test public void testReplicatedTablesUsingPartitionedCacheClient() { doTestReplicatedTablesUsingPartitionedCache(1, true, false); } /** */ + @Test public void testReplicatedTablesUsingPartitionedCacheSegmentedClient() { doTestReplicatedTablesUsingPartitionedCache(5, true, false); } /** */ + @Test public void testReplicatedTablesUsingPartitionedCacheRO() { doTestReplicatedTablesUsingPartitionedCache(1, false, true); } /** */ + @Test public void testReplicatedTablesUsingPartitionedCacheSegmentedRO() { doTestReplicatedTablesUsingPartitionedCache(5, false, true); } /** */ + @Test public void testReplicatedTablesUsingPartitionedCacheClientRO() { doTestReplicatedTablesUsingPartitionedCache(1, true, true); } /** */ + @Test public void testReplicatedTablesUsingPartitionedCacheSegmentedClientRO() { doTestReplicatedTablesUsingPartitionedCache(5, true, true); } @@ -413,18 +421,22 @@ private void doTestReplicatedTablesUsingPartitionedCache(int segments, boolean c } } + @Test public void testPartitionedTablesUsingReplicatedCache() { doTestPartitionedTablesUsingReplicatedCache(1, false); } + @Test public void testPartitionedTablesUsingReplicatedCacheSegmented() { doTestPartitionedTablesUsingReplicatedCache(7, false); } + @Test public void testPartitionedTablesUsingReplicatedCacheClient() { doTestPartitionedTablesUsingReplicatedCache(1, true); } + @Test public void testPartitionedTablesUsingReplicatedCacheSegmentedClient() { doTestPartitionedTablesUsingReplicatedCache(7, true); } @@ -458,6 +470,7 @@ private void doTestPartitionedTablesUsingReplicatedCache(int segments, boolean c /** */ + @Test public void testSubQueryWithAggregate() { 
CacheConfiguration ccfg1 = cacheConfig("pers", true, AffinityKey.class, Person2.class); @@ -488,6 +501,7 @@ public void testSubQueryWithAggregate() { /** * @throws InterruptedException If failed. */ + @Test public void testDistributedJoinFromReplicatedCache() throws InterruptedException { CacheConfiguration ccfg1 = cacheConfig("pers", true, Integer.class, Person2.class); @@ -515,6 +529,7 @@ public void testDistributedJoinFromReplicatedCache() throws InterruptedException } @SuppressWarnings("SuspiciousMethodCalls") + @Test public void testExists() { IgniteCache x = ignite(0).getOrCreateCache(cacheConfig("x", true, Integer.class, Person2.class)); @@ -558,6 +573,7 @@ public void testExists() { /** * @throws Exception If failed. */ + @Test public void testSortedMergeIndex() throws Exception { IgniteCache c = ignite(0).getOrCreateCache(cacheConfig("v", true, Integer.class, Value.class)); @@ -622,6 +638,7 @@ public void testSortedMergeIndex() throws Exception { /** * @throws Exception If failed. */ + @Test public void testGroupIndexOperations() throws Exception { IgniteCache c = ignite(0).getOrCreateCache(cacheConfig("grp", false, Integer.class, GroupIndexTestValue.class)); @@ -701,6 +718,7 @@ public void testGroupIndexOperations() throws Exception { /** */ + @Test public void testUseIndexHints() { CacheConfiguration ccfg = cacheConfig("pers", true, Integer.class, Person2.class); @@ -734,6 +752,7 @@ public void testUseIndexHints() { /** * @throws Exception If failed. */ + @Test public void testDistributedJoins() throws Exception { CacheConfiguration ccfg1 = cacheConfig("pers", true, Integer.class, Person2.class); @@ -765,6 +784,7 @@ public void testDistributedJoins() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testDistributedJoinsUnion() throws Exception { CacheConfiguration ccfg1 = cacheConfig("pers", true, Integer.class, Person2.class); CacheConfiguration ccfg2 = cacheConfig("org", true, Integer.class, Organization.class); @@ -819,6 +839,7 @@ public void testDistributedJoinsUnion() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedJoinsUnionPartitionedReplicated() throws Exception { CacheConfiguration ccfg1 = cacheConfig("pers", true, Integer.class, Person2.class); @@ -888,6 +909,7 @@ public void testDistributedJoinsUnionPartitionedReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedJoinsPlan() throws Exception { List> caches = new ArrayList<>(); @@ -1242,6 +1264,7 @@ public void testDistributedJoinsPlan() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDistributedJoinsEnforceReplicatedNotLast() throws Exception { List> caches = new ArrayList<>(); @@ -1289,24 +1312,28 @@ public void testDistributedJoinsEnforceReplicatedNotLast() throws Exception { /** */ + @Test public void testSchemaQuoted() { doTestSchemaName("\"ppAf\""); } /** */ + @Test public void testSchemaQuotedUpper() { doTestSchemaName("\"PPAF\""); } /** */ + @Test public void testSchemaUnquoted() { doTestSchemaName("ppAf"); } /** */ + @Test public void testSchemaUnquotedUpper() { doTestSchemaName("PPAF"); } @@ -1339,6 +1366,7 @@ public void doTestSchemaName(String schema) { /** * @throws Exception If failed. */ + @Test public void testIndexSegmentation() throws Exception { CacheConfiguration ccfg1 = cacheConfig("pers", true, Integer.class, Person2.class).setQueryParallelism(4); @@ -1370,6 +1398,7 @@ public void testIndexSegmentation() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testReplicationCacheIndexSegmentationFailure() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Void call() throws Exception { @@ -1386,6 +1415,7 @@ public void testReplicationCacheIndexSegmentationFailure() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIndexSegmentationPartitionedReplicated() throws Exception { CacheConfiguration ccfg1 = cacheConfig("pers", true, Integer.class, Person2.class).setQueryParallelism(4); @@ -1452,6 +1482,7 @@ public void testIndexSegmentationPartitionedReplicated() throws Exception { /** * @throws Exception If failed. */ + @Test public void testIndexWithDifferentSegmentationLevelsFailure() throws Exception { CacheConfiguration ccfg1 = cacheConfig("pers", true, Integer.class, Person2.class).setQueryParallelism(4); @@ -1628,6 +1659,7 @@ private void checkQueryPlan(IgniteCache cache, /** * Test HAVING clause. */ + @Test public void testHaving() { IgniteCache c = ignite(0).getOrCreateCache(cacheConfig("having", true, Integer.class, Integer.class)); @@ -1766,6 +1798,7 @@ private static List column(int idx, List> rows) { * */ @IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-1886", forceFailure = true) + @Test public void testFunctionNpe() { IgniteCache userCache = ignite(0).createCache( cacheConfig("UserCache", true, Integer.class, User.class)); @@ -1798,6 +1831,7 @@ public void testFunctionNpe() { /** * */ + @Test public void testImplicitJoinConditionGeneration() { IgniteCache p = ignite(0).createCache(cacheConfig("P", true, Integer.class, Person.class)); IgniteCache d = ignite(0).createCache(cacheConfig("D", true, Integer.class, Department.class)); @@ -1826,6 +1860,7 @@ public void testImplicitJoinConditionGeneration() { /** * @throws Exception If failed. 
*/ + @Test public void testJoinWithSubquery() throws Exception { IgniteCache c1 = ignite(0).createCache( cacheConfig("Contract", true, @@ -1855,6 +1890,7 @@ public void testJoinWithSubquery() throws Exception { } /** @throws Exception if failed. */ + @Test public void testDistributedAggregates() throws Exception { final String cacheName = "ints"; @@ -1902,6 +1938,7 @@ public void testDistributedAggregates() throws Exception { } /** @throws Exception if failed. */ + @Test public void testCollocatedAggregates() throws Exception { final String cacheName = "ints"; @@ -1948,6 +1985,7 @@ public void testCollocatedAggregates() throws Exception { * * @throws Exception If failed, */ + @Test public void testEmptyCacheAggregates() throws Exception { final String cacheName = "ints"; @@ -1978,6 +2016,7 @@ public void testEmptyCacheAggregates() throws Exception { * * @throws Exception If failed. */ + @Test public void testAvgVariousDataTypes() throws Exception { final String cacheName = "avgtypes"; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/LazyQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/LazyQuerySelfTest.java index d5cc0ebc0ee91..7f2f1d201912b 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/LazyQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/LazyQuerySelfTest.java @@ -35,10 +35,14 @@ import java.util.Iterator; import java.util.List; import java.util.concurrent.ThreadLocalRandom; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for lazy query execution. */ +@RunWith(JUnit4.class) public class LazyQuerySelfTest extends GridCommonAbstractTest { /** Keys ocunt. */ private static final int KEY_CNT = 200; @@ -62,6 +66,7 @@ public class LazyQuerySelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testSingleNode() throws Exception { checkSingleNode(1); } @@ -71,6 +76,7 @@ public void testSingleNode() throws Exception { * * @throws Exception If failed. */ + @Test public void testSingleNodeWithParallelism() throws Exception { checkSingleNode(4); } @@ -80,6 +86,7 @@ public void testSingleNodeWithParallelism() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultipleNodes() throws Exception { checkMultipleNodes(1); } @@ -89,6 +96,7 @@ public void testMultipleNodes() throws Exception { * * @throws Exception If failed. */ + @Test public void testMultipleNodesWithParallelism() throws Exception { checkMultipleNodes(4); } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/MemLeakOnSqlWithClientReconnectTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/MemLeakOnSqlWithClientReconnectTest.java new file mode 100644 index 0000000000000..236e5ae240072 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/MemLeakOnSqlWithClientReconnectTest.java @@ -0,0 +1,220 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.query; + +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheMode; +import org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction; +import org.apache.ignite.cache.query.FieldsQueryCursor; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.IgniteInternalFuture; +import org.apache.ignite.internal.processors.cache.index.AbstractIndexingCommonTest; +import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; +import org.apache.ignite.internal.util.typedef.internal.U; +import org.apache.ignite.logger.NullLogger; +import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Test; + +/** + * Tests for group reservation leaks at the PartitionReservationManager on unstable topology. + */ +public class MemLeakOnSqlWithClientReconnectTest extends AbstractIndexingCommonTest { + /** Keys count. */ + private static final int KEY_CNT = 10; + + /** Iterations count. */ + private static final int ITERS = 2000; + + /** Replicated cache schema name. */ + private static final String REPL_SCHEMA = "REPL"; + + /** Partitioned cache schema name. */ + private static final String PART_SCHEMA = "PART"; + + /** Client node. 
*/ + private IgniteEx cli; + + /** {@inheritDoc} */ + @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName); + + if (igniteInstanceName.startsWith("cli")) + cfg.setClientMode(true).setGridLogger(new NullLogger()); + + return cfg; + } + + /** {@inheritDoc} */ + @Override protected void beforeTest() throws Exception { + super.beforeTest(); + + startGrid(); + + cli = startGrid("cli-main"); + + IgniteCache partCache = cli.createCache(new CacheConfiguration() + .setName("PART") + .setSqlSchema("PART") + .setQueryEntities(Collections.singleton(new QueryEntity(Long.class, Long.class) + .setTableName("test") + .addQueryField("id", Long.class.getName(), null) + .addQueryField("val", Long.class.getName(), null) + .setKeyFieldName("id") + .setValueFieldName("val") + )) + .setAffinity(new RendezvousAffinityFunction(false, 10))); + + IgniteCache replCache = cli.createCache(new CacheConfiguration() + .setName("REPL") + .setSqlSchema("REPL") + .setQueryEntities(Collections.singleton(new QueryEntity(Long.class, Long.class) + .setTableName("test") + .addQueryField("id", Long.class.getName(), null) + .addQueryField("val", Long.class.getName(), null) + .setKeyFieldName("id") + .setValueFieldName("val"))) + .setCacheMode(CacheMode.REPLICATED)); + + for (long i = 0; i < KEY_CNT; ++i) { + partCache.put(i, i); + replCache.put(i, i); + } + } + + /** {@inheritDoc} */ + @Override protected void afterTest() throws Exception { + stopAllGrids(); + + super.afterTest(); + } + + /** + * Test partition group reservation leaks on partitioned cache. + * + * @throws Exception On error. + */ + @Test + public void testPartitioned() throws Exception { + checkReservationLeak(false); + } + + /** + * Test partition group reservation leaks on replicated cache. + * + * @throws Exception On error. 
+ */ + @Test + public void testReplicated() throws Exception { + checkReservationLeak(true); + } + + /** + * Check partition group reservation leaks. + * + * @param replicated Flag to run query on partitioned or replicated cache. + * @throws Exception On error. + */ + private void checkReservationLeak(boolean replicated) throws Exception { + final AtomicInteger cliNum = new AtomicInteger(); + final AtomicBoolean end = new AtomicBoolean(); + + IgniteInternalFuture fut = GridTestUtils.runMultiThreadedAsync(() -> { + String name = "cli_" + cliNum.getAndIncrement(); + + while (!end.get()) { + try { + startGrid(name); + + U.sleep(10); + + stopGrid(name); + } + catch (Exception e) { + fail("Unexpected exception on test client node start"); + } + } + }, + 10, "cli-restart"); + + try { + // Warm up. + runQuery(cli, ITERS, replicated); + + int baseReservations = reservationCount(grid()); + + // Run multiple queries on unstable topology. + runQuery(cli, ITERS * 10, replicated); + + int curReservations = reservationCount(grid()); + + assertTrue("Reservation leaks: [base=" + baseReservations + ", cur=" + curReservations + ']', + curReservations < baseReservations * 2); + + log.info("Reservations OK: [base=" + baseReservations + ", cur=" + curReservations + ']'); + } + finally { + end.set(true); + } + + fut.get(); + } + + /** + * @param ign Ignite. + * @param iters Number of times to run the query. + * @param repl Run on replicated or partitioned cache. + */ + private void runQuery(IgniteEx ign, int iters, boolean repl) { + for (int i = 0; i < iters; ++i) + sql(ign, repl ? REPL_SCHEMA : PART_SCHEMA, "SELECT * FROM test").getAll(); + } + + /** + * @param ign Ignite instance. + * @param schema Schema name. + * @param sql SQL query. + * @param args Query parameters. + * @return Results cursor. 
args) { + return ign.context().query().querySqlFields(new SqlFieldsQuery(sql) + .setSchema(schema) + .setArgs(args), false); + } + + /** + * @param ign Ignite instance. + * @return Count of reservations. + */ + private static int reservationCount(IgniteEx ign) { + IgniteH2Indexing idx = (IgniteH2Indexing)ign.context().query().getIndexing(); + + Map reservations = GridTestUtils.getFieldValue(idx.partitionReservationManager(), "reservations"); + + return reservations.size(); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/MultipleStatementsSqlQuerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/MultipleStatementsSqlQuerySelfTest.java index becd5865c642c..2c07349d8aefa 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/MultipleStatementsSqlQuerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/MultipleStatementsSqlQuerySelfTest.java @@ -25,10 +25,14 @@ import org.apache.ignite.internal.processors.cache.QueryCursorImpl; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for schemas. */ +@RunWith(JUnit4.class) public class MultipleStatementsSqlQuerySelfTest extends GridCommonAbstractTest { /** Node. */ private IgniteEx node; @@ -47,10 +51,9 @@ public class MultipleStatementsSqlQuerySelfTest extends GridCommonAbstractTest { /** * Test query without caches. - * - * @throws Exception If failed. */ - public void testQuery() throws Exception { + @Test + public void testQuery() { GridQueryProcessor qryProc = node.context().query(); SqlFieldsQuery qry = new SqlFieldsQuery( @@ -92,10 +95,9 @@ public void testQuery() throws Exception { /** * Test query without caches. - * - * @throws Exception If failed. 
*/ - public void testQueryWithParameters() throws Exception { + @Test + public void testQueryWithParameters() { GridQueryProcessor qryProc = node.context().query(); SqlFieldsQuery qry = new SqlFieldsQuery( @@ -137,9 +139,9 @@ public void testQueryWithParameters() throws Exception { } /** - * @throws Exception If failed. */ - public void testQueryMultipleStatementsFailed() throws Exception { + @Test + public void testQueryMultipleStatementsFailed() { final SqlFieldsQuery qry = new SqlFieldsQuery("select 1; select 1;").setSchema("PUBLIC"); GridTestUtils.assertThrows(log, @@ -151,4 +153,34 @@ public void testQueryMultipleStatementsFailed() throws Exception { } }, IgniteSQLException.class, "Multiple statements queries are not supported"); } + + /** + * Check cached two-step query. + */ + @Test + public void testCachedTwoSteps() { + List<FieldsQueryCursor<List<?>>> curs = sql("SELECT 1; SELECT 2"); + + assertEquals(2, curs.size()); + assertEquals(1, curs.get(0).getAll().get(0).get(0)); + assertEquals(2, curs.get(1).getAll().get(0).get(0)); + + curs = sql("SELECT 1; SELECT 2"); + + assertEquals(2, curs.size()); + assertEquals(1, curs.get(0).getAll().get(0).get(0)); + assertEquals(2, curs.get(1).getAll().get(0).get(0)); + } + + /** + * @param sql SQL query. + * @return Results.
+ */ + private List<FieldsQueryCursor<List<?>>> sql(String sql) { + GridQueryProcessor qryProc = node.context().query(); + + SqlFieldsQuery qry = new SqlFieldsQuery(sql).setSchema("PUBLIC"); + + return qryProc.querySqlFields(qry, true, false); + } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/RunningQueriesTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/RunningQueriesTest.java new file mode 100644 index 0000000000000..81b6006a15879 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/RunningQueriesTest.java @@ -0,0 +1,95 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package org.apache.ignite.internal.processors.query; + +import java.util.Collection; +import java.util.Collections; +import java.util.concurrent.CyclicBarrier; +import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.cache.query.SqlFieldsQuery; +import org.apache.ignite.cache.query.SqlQuery; +import org.apache.ignite.cache.query.annotations.QuerySqlFunction; +import org.apache.ignite.configuration.CacheConfiguration; +import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.testframework.GridTestUtils; +import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; + +/** + * Tests for running queries. + */ +public class RunningQueriesTest extends GridCommonAbstractTest { + /** + * + */ + @Test + public void testQueriesOriginalText() throws Exception { + IgniteEx ignite = startGrid(0); + + IgniteCache cache = ignite.getOrCreateCache(new CacheConfiguration() + .setName("cache") + .setQueryEntities(Collections.singletonList(new QueryEntity(Integer.class, Integer.class))) + .setSqlFunctionClasses(TestSQLFunctions.class) + ); + + cache.put(0, 0); + + GridTestUtils.runAsync(() -> cache.query(new SqlFieldsQuery( + "SELECT * FROM /* comment */ Integer WHERE awaitBarrier() = 0")).getAll()); + + GridTestUtils.runAsync(() -> cache.query(new SqlQuery(Integer.class, + "FROM /* comment */ Integer WHERE awaitBarrier() = 0")).getAll()); + + TestSQLFunctions.barrier.await(); + + Collection runningQueries = ignite.context().query().runningQueries(-1); + + TestSQLFunctions.barrier.await(); + + assertEquals(2, runningQueries.size()); + + for (GridRunningQueryInfo info : runningQueries) + assertTrue("Failed to find comment in query: " + info.query(), info.query().contains("/* comment */")); + } + + /** + * Utility class with custom SQL functions. + */ + public static class TestSQLFunctions { + /** Barrier. 
*/ + static CyclicBarrier barrier = new CyclicBarrier(3); + + /** + * Await cyclic barrier twice, first time to wait for enter method, second time to wait for collecting running + * queries. + */ + @QuerySqlFunction + public static long awaitBarrier() { + try { + barrier.await(); + barrier.await(); + } + catch (Exception ignored) { + // No-op. + } + + return 0; + } + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlIllegalSchemaSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlIllegalSchemaSelfTest.java index e56f8a2069b56..51dc4dc46a456 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlIllegalSchemaSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlIllegalSchemaSelfTest.java @@ -17,21 +17,28 @@ package org.apache.ignite.internal.processors.query; +import java.util.concurrent.Callable; +import java.util.function.Consumer; +import javax.cache.CacheException; import org.apache.ignite.Ignite; +import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.IgniteException; import org.apache.ignite.Ignition; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; - -import javax.cache.CacheException; -import java.util.concurrent.Callable; +import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for illegal SQL schemas in node and cache configurations. 
*/ @SuppressWarnings({"ThrowableNotThrown", "unchecked"}) +@RunWith(JUnit4.class) public class SqlIllegalSchemaSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void afterTest() throws Exception { @@ -41,6 +48,7 @@ public class SqlIllegalSchemaSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testBadCacheName() throws Exception { IgniteConfiguration cfg = getConfiguration(); @@ -60,23 +68,35 @@ public void testBadCacheName() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBadCacheNameDynamic() throws Exception { - Ignite node = startGrid(); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node.getOrCreateCache(new CacheConfiguration().setName(QueryUtils.SCHEMA_SYS)); - - return null; + doubleConsumerAccept( + (node)->{ + try { + node.getOrCreateCache(new CacheConfiguration().setName(QueryUtils.SCHEMA_SYS)); + } + catch (CacheException e) { + assertTrue(hasCause(e, IgniteCheckedException.class, + "SQL schema name derived from cache name is reserved (please set explicit SQL " + + "schema name through CacheConfiguration.setSqlSchema() or choose another cache name) [" + + "cacheName=IGNITE, schemaName=null]")); + + return; + } + catch (Throwable e) { + fail("Exception class is not as expected [expected=" + + CacheException.class + ", actual=" + e.getClass() + ']'); + } + + fail("Exception has not been thrown."); } - }, CacheException.class, "SQL schema name derived from cache name is reserved (please set explicit SQL " + - "schema name through CacheConfiguration.setSqlSchema() or choose another cache name) [" + - "cacheName=IGNITE, schemaName=null]"); + ); } /** * @throws Exception If failed. 
*/ + @Test public void testBadSchemaLower() throws Exception { IgniteConfiguration cfg = getConfiguration(); @@ -96,23 +116,35 @@ public void testBadSchemaLower() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBadSchemaLowerDynamic() throws Exception { - Ignite node = startGrid(); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node.getOrCreateCache( - new CacheConfiguration().setName("CACHE").setSqlSchema(QueryUtils.SCHEMA_SYS.toLowerCase()) - ); - - return null; + doubleConsumerAccept( + (node) -> { + try { + node.getOrCreateCache( + new CacheConfiguration().setName("CACHE").setSqlSchema(QueryUtils.SCHEMA_SYS.toLowerCase()) + ); + } + catch (CacheException e) { + assertTrue(hasCause(e, IgniteCheckedException.class, + "SQL schema name is reserved (please choose another one) [cacheName=CACHE, schemaName=ignite]")); + + return; + } + catch (Throwable e) { + fail("Exception class is not as expected [expected=" + + CacheException.class + ", actual=" + e.getClass() + ']'); + } + + fail("Exception has not been thrown."); } - }, CacheException.class, "SQL schema name is reserved (please choose another one) [cacheName=CACHE, schemaName=ignite]"); + ); } /** * @throws Exception If failed. */ + @Test public void testBadSchemaUpper() throws Exception { IgniteConfiguration cfg = getConfiguration(); @@ -132,24 +164,35 @@ public void testBadSchemaUpper() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBadSchemaUpperDynamic() throws Exception { - Ignite node = startGrid(); - - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node.getOrCreateCache( - new CacheConfiguration().setName("CACHE").setSqlSchema(QueryUtils.SCHEMA_SYS.toUpperCase()) - ); - - return null; + doubleConsumerAccept( + (node) -> { + try { + node.getOrCreateCache( + new CacheConfiguration().setName("CACHE").setSqlSchema(QueryUtils.SCHEMA_SYS.toUpperCase()) + ); + } + catch (CacheException e) { + assertTrue(hasCause(e, IgniteCheckedException.class, + "SQL schema name is reserved (please choose another one) [cacheName=CACHE, schemaName=IGNITE]")); + + return; + } + catch (Throwable e) { + fail("Exception class is not as expected [expected=" + + CacheException.class + ", actual=" + e.getClass() + ']'); + } + + fail("Exception has not been thrown."); } - }, CacheException.class, "SQL schema name is reserved (please choose another one) [cacheName=CACHE, " + - "schemaName=IGNITE]"); + ); } /** * @throws Exception If failed. */ + @Test public void testBadSchemaQuoted() throws Exception { IgniteConfiguration cfg = getConfiguration(); @@ -169,19 +212,78 @@ public void testBadSchemaQuoted() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testBadSchemaQuotedDynamic() throws Exception { + doubleConsumerAccept( + (node) -> { + try { + node.getOrCreateCache( + new CacheConfiguration().setName("CACHE") + .setSqlSchema("\"" + QueryUtils.SCHEMA_SYS.toUpperCase() + "\"") + ); + } + catch (CacheException e) { + assertTrue(hasCause(e, IgniteCheckedException.class, + "SQL schema name is reserved (please choose another one) [cacheName=CACHE, schemaName=\"IGNITE\"]")); + + return; + } + catch (Throwable e) { + fail("Exception class is not as expected [expected=" + + CacheException.class + ", actual=" + e.getClass() + ']'); + } + + fail("Exception has not been thrown."); + } + ); + } + + /** + * Executes double call of consumer's accept method with passed Ignite instance. + * + * @param cons Consumer. + * @throws Exception If failed. + */ + private void doubleConsumerAccept(Consumer cons) throws Exception { Ignite node = startGrid(); - GridTestUtils.assertThrows(log, new Callable() { - @Override public Void call() throws Exception { - node.getOrCreateCache( - new CacheConfiguration().setName("CACHE") - .setSqlSchema("\"" + QueryUtils.SCHEMA_SYS.toUpperCase() + "\"") - ); + cons.accept(node); - return null; + cons.accept(node); + } + + /** + * Checks if passed in {@code 'Throwable'} has given class in {@code 'cause'} hierarchy + * including that throwable itself and it contains passed message. + *

    + * Note that this method also includes {@link Throwable#getSuppressed()} + * into the check. + * + * @param t Throwable to check (if {@code null}, {@code false} is returned). + * @param cls Cause class to check (if {@code null}, {@code false} is returned). + * @param msg Message to check. + * @return {@code True} if one of the causing exceptions is an instance of the passed in class + * and it contains the passed message, {@code false} otherwise. + */ + private boolean hasCause(@Nullable Throwable t, Class cls, String msg) { + if (t == null) + return false; + + assert cls != null; + + for (Throwable th = t; th != null; th = th.getCause()) { + if (cls.isAssignableFrom(th.getClass()) && F.eq(th.getMessage(), msg)) + return true; + + for (Throwable n : th.getSuppressed()) { + if (hasCause(n, cls, msg)) + return true; } - }, CacheException.class, "SQL schema name is reserved (please choose another one) [cacheName=CACHE, " + - "schemaName=\"IGNITE\"]"); + + if (th.getCause() == th) + break; + } + + return false; } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlPushDownFunctionTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlPushDownFunctionTest.java index 9e7877003d27e..a36b4796f7e2b 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlPushDownFunctionTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlPushDownFunctionTest.java @@ -21,10 +21,14 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for schemas. */ +@RunWith(JUnit4.class) public class SqlPushDownFunctionTest extends GridCommonAbstractTest { /** Node.
*/ private IgniteEx node; @@ -43,6 +47,7 @@ public class SqlPushDownFunctionTest extends GridCommonAbstractTest { /** */ + @Test public void testPushDownFunction() { sql("CREATE TABLE Person(id INTEGER PRIMARY KEY, company_id INTEGER)"); sql("CREATE TABLE Company(id INTEGER PRIMARY KEY, name VARCHAR)"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSchemaSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSchemaSelfTest.java index b271d806d70c0..b02ef49ac377f 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSchemaSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSchemaSelfTest.java @@ -20,20 +20,28 @@ import java.util.Collections; import java.util.Iterator; import java.util.List; +import java.util.concurrent.Callable; import java.util.concurrent.atomic.AtomicInteger; +import javax.cache.CacheException; import org.apache.ignite.IgniteCache; import org.apache.ignite.cache.QueryEntity; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.internal.IgniteEx; +import org.apache.ignite.internal.processors.query.schema.SchemaOperationException; import org.apache.ignite.internal.util.typedef.F; +import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for schemas. */ +@RunWith(JUnit4.class) public class SqlSchemaSelfTest extends GridCommonAbstractTest { /** Person cache name. 
*/ private static final String CACHE_PERSON = "PersonCache"; @@ -61,6 +69,7 @@ public class SqlSchemaSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. */ + @Test public void testQueryWithoutCacheOnPublicSchema() throws Exception { GridQueryProcessor qryProc = node.context().query(); @@ -89,6 +98,7 @@ public void testQueryWithoutCacheOnPublicSchema() throws Exception { * * @throws Exception If failed. */ + @Test public void testQueryWithoutCacheOnCacheSchema() throws Exception { node.createCache(new CacheConfiguration() .setName(CACHE_PERSON) @@ -121,6 +131,7 @@ public void testQueryWithoutCacheOnCacheSchema() throws Exception { * * @throws Exception If failed. */ + @Test public void testSchemaChange() throws Exception { IgniteCache cache = node.createCache(new CacheConfiguration() .setName(CACHE_PERSON) @@ -161,6 +172,7 @@ public void testSchemaChange() throws Exception { * * @throws Exception If failed. */ + @Test public void testSchemaChangeOnCacheWithPublicSchema() throws Exception { IgniteCache cache = node.createCache(new CacheConfiguration() .setName(CACHE_PERSON) @@ -197,6 +209,7 @@ public void testSchemaChangeOnCacheWithPublicSchema() throws Exception { * * @throws Exception If failed. */ + @Test public void testCustomSchemaName() throws Exception { IgniteCache cache = registerQueryEntity("Person", CACHE_PERSON); @@ -208,6 +221,7 @@ public void testCustomSchemaName() throws Exception { * * @throws Exception If failed. */ + @Test public void testCustomSchemaMultipleCaches() throws Exception { for (int i = 1; i <= 3; i++) { String tbl = "Person" + i; @@ -229,6 +243,7 @@ public void testCustomSchemaMultipleCaches() throws Exception { * * @throws Exception If failed. */ + @Test public void testCustomSchemaConcurrentUse() throws Exception { final AtomicInteger maxIdx = new AtomicInteger(); @@ -292,19 +307,25 @@ private void testQueryEntity(IgniteCache cache, String tbl) { * * @throws Exception If failed. 
*/ - public void _testTypeConflictInPublicSchema() throws Exception { - // TODO: IGNITE-5380: uncomment work after fix. - fail("Hang for now, need to fix"); - + @Test + public void testTypeConflictInPublicSchema() throws Exception { node.createCache(new CacheConfiguration() .setName(CACHE_PERSON) .setIndexedTypes(PersonKey.class, Person.class) .setSqlSchema(QueryUtils.DFLT_SCHEMA)); - node.createCache(new CacheConfiguration() - .setName(CACHE_PERSON_2) - .setIndexedTypes(PersonKey.class, Person.class) - .setSqlSchema(QueryUtils.DFLT_SCHEMA)); + Throwable th = GridTestUtils.assertThrows(log, (Callable) () -> { + node.createCache(new CacheConfiguration() + .setName(CACHE_PERSON_2) + .setIndexedTypes(PersonKey.class, Person.class) + .setSqlSchema(QueryUtils.DFLT_SCHEMA)); + + return null; + }, CacheException.class, null); + + SchemaOperationException e = X.cause(th, SchemaOperationException.class); + + assertEquals(SchemaOperationException.CODE_TABLE_EXISTS, e.code()); } /** diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSystemViewsSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSystemViewsSelfTest.java index ccd07964fba29..5be574c354805 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSystemViewsSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/SqlSystemViewsSelfTest.java @@ -19,24 +19,36 @@ import java.sql.Time; import java.sql.Timestamp; +import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Random; import java.util.TimeZone; import java.util.UUID; import java.util.concurrent.Callable; +import javax.cache.Cache; +import javax.cache.configuration.Factory; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.cache.CacheMode; import 
org.apache.ignite.cache.QueryEntity; +import org.apache.ignite.cache.eviction.EvictableEntry; +import org.apache.ignite.cache.eviction.EvictionFilter; +import org.apache.ignite.cache.eviction.EvictionPolicy; import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.cluster.ClusterMetrics; +import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; +import org.apache.ignite.configuration.TopologyValidator; import org.apache.ignite.internal.ClusterMetricsSnapshot; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.IgniteNodeAttributes; import org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode; +import org.apache.ignite.internal.util.lang.GridNodePredicate; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.G; import org.apache.ignite.internal.util.typedef.X; @@ -44,10 +56,14 @@ import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for ignite SQL system views. */ +@RunWith(JUnit4.class) public class SqlSystemViewsSelfTest extends GridCommonAbstractTest { /** Metrics check attempts. */ private static final int METRICS_CHECK_ATTEMPTS = 10; @@ -109,10 +125,11 @@ private void assertSqlError(final String sql) { assertEquals(IgniteQueryErrorCode.UNSUPPORTED_OPERATION, sqlE.statusCode()); } - + /** * Test system views modifications. 
*/ + @Test public void testModifications() throws Exception { startGrid(getConfiguration()); @@ -140,6 +157,7 @@ public void testModifications() throws Exception { /** * Test different query modes. */ + @Test public void testQueryModes() throws Exception { Ignite ignite = startGrid(0); startGrid(1); @@ -168,6 +186,7 @@ public void testQueryModes() throws Exception { /** * Test that we can't use cache tables and system views in the same query. */ + @Test public void testCacheToViewJoin() throws Exception { Ignite ignite = startGrid(); @@ -193,6 +212,7 @@ private void assertColumnTypes(List rowData, Class ... colTypes) { * * @throws Exception If failed. */ + @Test public void testNodesViews() throws Exception { Ignite igniteSrv = startGrid(getTestIgniteInstanceName(), getConfiguration().setMetricsUpdateFrequency(500L)); @@ -459,6 +479,7 @@ public void testNodesViews() throws Exception { /** * Test baseline topology system view. */ + @Test public void testBaselineViews() throws Exception { cleanPersistenceDir(); @@ -504,6 +525,225 @@ public void testBaselineViews() throws Exception { return super.getConfiguration().setCacheConfiguration(new CacheConfiguration().setName(DEFAULT_CACHE_NAME)); } + /** + * Test caches system views. 
+ */ + @SuppressWarnings("ConstantConditions") + @Test + public void testCachesViews() throws Exception { + DataStorageConfiguration dsCfg = new DataStorageConfiguration() + .setDefaultDataRegionConfiguration(new DataRegionConfiguration().setName("def").setPersistenceEnabled(true)) + .setDataRegionConfigurations(new DataRegionConfiguration().setName("dr1"), + new DataRegionConfiguration().setName("dr2")); + + IgniteEx ignite0 = startGrid(getConfiguration().setDataStorageConfiguration(dsCfg)); + + Ignite ignite1 = startGrid(getConfiguration().setDataStorageConfiguration(dsCfg).setIgniteInstanceName("node1")); + + ignite0.cluster().active(true); + + Ignite ignite2 = startGrid(getConfiguration().setDataStorageConfiguration(dsCfg).setIgniteInstanceName("node2")); + + Ignite ignite3 = startGrid(getConfiguration().setDataStorageConfiguration(dsCfg).setIgniteInstanceName("node3") + .setClientMode(true)); + + ignite0.getOrCreateCache(new CacheConfiguration<>() + .setName("cache_atomic_part") + .setAtomicityMode(CacheAtomicityMode.ATOMIC) + .setCacheMode(CacheMode.PARTITIONED) + .setGroupName("cache_grp") + .setNodeFilter(new TestNodeFilter(ignite0.cluster().localNode())) + ); + + ignite0.getOrCreateCache(new CacheConfiguration<>() + .setName("cache_atomic_repl") + .setAtomicityMode(CacheAtomicityMode.ATOMIC) + .setCacheMode(CacheMode.REPLICATED) + .setDataRegionName("dr1") + .setTopologyValidator(new TestTopologyValidator()) + ); + + ignite0.getOrCreateCache(new CacheConfiguration<>() + .setName("cache_tx_part") + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setCacheMode(CacheMode.PARTITIONED) + .setGroupName("cache_grp") + .setNodeFilter(new TestNodeFilter(ignite0.cluster().localNode())) + ); + + ignite0.getOrCreateCache(new CacheConfiguration<>() + .setName("cache_tx_repl") + .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) + .setCacheMode(CacheMode.REPLICATED) + .setDataRegionName("dr2") + .setEvictionFilter(new TestEvictionFilter()) + 
.setEvictionPolicyFactory(new TestEvictionPolicyFactory()) + .setOnheapCacheEnabled(true) + ); + + execSql("CREATE TABLE cache_sql (ID INT PRIMARY KEY, VAL VARCHAR) WITH " + + "\"cache_name=cache_sql,template=partitioned,atomicity=atomic\""); + + awaitPartitionMapExchange(); + + List> resAll = execSql("SELECT NAME, CACHE_ID, CACHE_TYPE, GROUP_ID, GROUP_NAME, " + + "CACHE_MODE, ATOMICITY_MODE, IS_ONHEAP_CACHE_ENABLED, IS_COPY_ON_READ, IS_LOAD_PREVIOUS_VALUE, " + + "IS_READ_FROM_BACKUP, PARTITION_LOSS_POLICY, NODE_FILTER, TOPOLOGY_VALIDATOR, IS_EAGER_TTL, " + + "WRITE_SYNCHRONIZATION_MODE, IS_INVALIDATE, IS_EVENTS_DISABLED, IS_STATISTICS_ENABLED, " + + "IS_MANAGEMENT_ENABLED, BACKUPS, AFFINITY, AFFINITY_MAPPER, " + + "REBALANCE_MODE, REBALANCE_BATCH_SIZE, REBALANCE_TIMEOUT, REBALANCE_DELAY, REBALANCE_THROTTLE, " + + "REBALANCE_BATCHES_PREFETCH_COUNT, REBALANCE_ORDER, " + + "EVICTION_FILTER, EVICTION_POLICY_FACTORY, " + + "IS_NEAR_CACHE_ENABLED, NEAR_CACHE_EVICTION_POLICY_FACTORY, NEAR_CACHE_START_SIZE, " + + "DEFAULT_LOCK_TIMEOUT, CACHE_INTERCEPTOR, CACHE_STORE_FACTORY, " + + "IS_STORE_KEEP_BINARY, IS_READ_THROUGH, IS_WRITE_THROUGH, " + + "IS_WRITE_BEHIND_ENABLED, WRITE_BEHIND_COALESCING, WRITE_BEHIND_FLUSH_SIZE, " + + "WRITE_BEHIND_FLUSH_FREQUENCY, WRITE_BEHIND_FLUSH_THREAD_COUNT, WRITE_BEHIND_FLUSH_BATCH_SIZE, " + + "MAX_CONCURRENT_ASYNC_OPERATIONS, CACHE_LOADER_FACTORY, CACHE_WRITER_FACTORY, EXPIRY_POLICY_FACTORY, " + + "IS_SQL_ESCAPE_ALL, SQL_SCHEMA, SQL_INDEX_MAX_INLINE_SIZE, IS_SQL_ONHEAP_CACHE_ENABLED, " + + "SQL_ONHEAP_CACHE_MAX_SIZE, QUERY_DETAILS_METRICS_SIZE, QUERY_PARALLELISM, MAX_QUERY_ITERATORS_COUNT, " + + "DATA_REGION_NAME FROM IGNITE.CACHES"); + + assertColumnTypes(resAll.get(0), + String.class, Integer.class, String.class, Integer.class, String.class, + String.class, String.class, Boolean.class, Boolean.class, Boolean.class, + Boolean.class, String.class, String.class, String.class, Boolean.class, + String.class, Boolean.class, Boolean.class, 
Boolean.class, + Boolean.class, Integer.class, String.class, String.class, + String.class, Integer.class, Long.class, Long.class, Long.class, // Rebalance. + Long.class, Integer.class, + String.class, String.class, // Eviction. + Boolean.class, String.class, Integer.class, // Near cache. + Long.class, String.class, String.class, + Boolean.class, Boolean.class, Boolean.class, + Boolean.class, Boolean.class, Integer.class, // Write-behind. + Long.class, Integer.class, Integer.class, + Integer.class, String.class, String.class, String.class, + Boolean.class, String.class, Integer.class, Boolean.class, // SQL. + Integer.class, Integer.class, Integer.class, Integer.class, + String.class); + + assertEquals("cache_tx_part", execSql("SELECT NAME FROM IGNITE.CACHES WHERE " + + "CACHE_MODE = 'PARTITIONED' AND ATOMICITY_MODE = 'TRANSACTIONAL' AND NAME like 'cache%'").get(0).get(0)); + + assertEquals("cache_atomic_repl", execSql("SELECT NAME FROM IGNITE.CACHES WHERE " + + "CACHE_MODE = 'REPLICATED' AND ATOMICITY_MODE = 'ATOMIC' AND NAME like 'cache%'").get(0).get(0)); + + assertEquals(2L, execSql("SELECT COUNT(*) FROM IGNITE.CACHES WHERE GROUP_NAME = 'cache_grp'") + .get(0).get(0)); + + assertEquals("cache_atomic_repl", execSql("SELECT NAME FROM IGNITE.CACHES " + + "WHERE DATA_REGION_NAME = 'dr1'").get(0).get(0)); + + assertEquals("cache_tx_repl", execSql("SELECT NAME FROM IGNITE.CACHES " + + "WHERE DATA_REGION_NAME = 'dr2'").get(0).get(0)); + + assertEquals("PARTITIONED", execSql("SELECT CACHE_MODE FROM IGNITE.CACHES " + + "WHERE NAME = 'cache_atomic_part'").get(0).get(0)); + + assertEquals("USER", execSql("SELECT CACHE_TYPE FROM IGNITE.CACHES WHERE NAME = 'cache_sql'") + .get(0).get(0)); + + assertEquals(0L, execSql("SELECT COUNT(*) FROM IGNITE.CACHES WHERE NAME = 'no_such_cache'").get(0) + .get(0)); + + assertEquals(0L, execSql("SELECT COUNT(*) FROM IGNITE.CACHES WHERE NAME = 1").get(0).get(0)); + + assertEquals("TestNodeFilter", execSql("SELECT NODE_FILTER FROM 
IGNITE.CACHES WHERE NAME = " + + "'cache_atomic_part'").get(0).get(0)); + + assertEquals("TestEvictionFilter", execSql("SELECT EVICTION_FILTER FROM IGNITE.CACHES " + + "WHERE NAME = 'cache_tx_repl'").get(0).get(0)); + + assertEquals("TestEvictionPolicyFactory", execSql("SELECT EVICTION_POLICY_FACTORY " + + "FROM IGNITE.CACHES WHERE NAME = 'cache_tx_repl'").get(0).get(0)); + + assertEquals("TestTopologyValidator", execSql("SELECT TOPOLOGY_VALIDATOR FROM IGNITE.CACHES " + + "WHERE NAME = 'cache_atomic_repl'").get(0).get(0)); + + // Check quick count. + assertEquals(execSql("SELECT COUNT(*) FROM IGNITE.CACHES").get(0).get(0), + execSql("SELECT COUNT(*) FROM IGNITE.CACHES WHERE CACHE_ID <> CACHE_ID + 1").get(0).get(0)); + + // Check that caches are the same on BLT, BLT filtered by node filter, non BLT and client nodes. + assertEquals(5L, execSql("SELECT COUNT(*) FROM IGNITE.CACHES WHERE NAME like 'cache%'").get(0) + .get(0)); + + assertEquals(5L, execSql(ignite1, "SELECT COUNT(*) FROM IGNITE.CACHES WHERE NAME like 'cache%'") + .get(0).get(0)); + + assertEquals(5L, execSql(ignite2, "SELECT COUNT(*) FROM IGNITE.CACHES WHERE NAME like 'cache%'") + .get(0).get(0)); + + assertEquals(5L, execSql(ignite3, "SELECT COUNT(*) FROM IGNITE.CACHES WHERE NAME like 'cache%'") + .get(0).get(0)); + + // Check cache groups. 
+        resAll = execSql("SELECT ID, GROUP_NAME, IS_SHARED, CACHE_COUNT, " +
+            "CACHE_MODE, ATOMICITY_MODE, AFFINITY, PARTITIONS_COUNT, " +
+            "NODE_FILTER, DATA_REGION_NAME, TOPOLOGY_VALIDATOR, PARTITION_LOSS_POLICY, " +
+            "REBALANCE_MODE, REBALANCE_DELAY, REBALANCE_ORDER, BACKUPS " +
+            "FROM IGNITE.CACHE_GROUPS");
+
+        assertColumnTypes(resAll.get(0),
+            Integer.class, String.class, Boolean.class, Integer.class,
+            String.class, String.class, String.class, Integer.class,
+            String.class, String.class, String.class, String.class,
+            String.class, Long.class, Integer.class, Integer.class);
+
+        assertEquals(2, execSql("SELECT CACHE_COUNT FROM IGNITE.CACHE_GROUPS " +
+            "WHERE GROUP_NAME = 'cache_grp'").get(0).get(0));
+
+        assertEquals("cache_grp", execSql("SELECT GROUP_NAME FROM IGNITE.CACHE_GROUPS " +
+            "WHERE IS_SHARED = true AND GROUP_NAME like 'cache%'").get(0).get(0));
+
+        // Check index on ID column.
+        assertEquals("cache_tx_repl", execSql("SELECT GROUP_NAME FROM IGNITE.CACHE_GROUPS " +
+            "WHERE ID = ?", ignite0.cachex("cache_tx_repl").context().groupId()).get(0).get(0));
+
+        assertEquals(0, execSql("SELECT ID FROM IGNITE.CACHE_GROUPS WHERE ID = 0").size());
+
+        // Check join by indexed column.
+        assertEquals("cache_tx_repl", execSql("SELECT CG.GROUP_NAME FROM IGNITE.CACHES C JOIN " +
+            "IGNITE.CACHE_GROUPS CG ON C.GROUP_ID = CG.ID WHERE C.NAME = 'cache_tx_repl'").get(0).get(0));
+
+        // Check join by non-indexed column.
+        assertEquals("cache_grp", execSql("SELECT CG.GROUP_NAME FROM IGNITE.CACHES C JOIN " +
+            "IGNITE.CACHE_GROUPS CG ON C.GROUP_NAME = CG.GROUP_NAME WHERE C.NAME = 'cache_tx_part'").get(0).get(0));
+
+        // Check configuration equality for cache and cache group views.
+        assertEquals(3L, execSql("SELECT COUNT(*) FROM IGNITE.CACHES C JOIN IGNITE.CACHE_GROUPS CG " +
+            "ON C.NAME = CG.GROUP_NAME WHERE C.NAME like 'cache%' " +
+            "AND C.CACHE_MODE = CG.CACHE_MODE " +
+            "AND C.ATOMICITY_MODE = CG.ATOMICITY_MODE " +
+            "AND COALESCE(C.AFFINITY, '-') = COALESCE(CG.AFFINITY, '-') " +
+            "AND COALESCE(C.NODE_FILTER, '-') = COALESCE(CG.NODE_FILTER, '-') " +
+            "AND COALESCE(C.DATA_REGION_NAME, '-') = COALESCE(CG.DATA_REGION_NAME, '-') " +
+            "AND COALESCE(C.TOPOLOGY_VALIDATOR, '-') = COALESCE(CG.TOPOLOGY_VALIDATOR, '-') " +
+            "AND C.PARTITION_LOSS_POLICY = CG.PARTITION_LOSS_POLICY " +
+            "AND C.REBALANCE_MODE = CG.REBALANCE_MODE " +
+            "AND C.REBALANCE_DELAY = CG.REBALANCE_DELAY " +
+            "AND C.REBALANCE_ORDER = CG.REBALANCE_ORDER " +
+            "AND C.BACKUPS = CG.BACKUPS").get(0).get(0));
+
+        // Check quick count.
+        assertEquals(execSql("SELECT COUNT(*) FROM IGNITE.CACHE_GROUPS").get(0).get(0),
+            execSql("SELECT COUNT(*) FROM IGNITE.CACHE_GROUPS WHERE ID <> ID + 1").get(0).get(0));
+
+        // Check that cache groups are the same on different nodes.
+        assertEquals(4L, execSql("SELECT COUNT(*) FROM IGNITE.CACHE_GROUPS " +
+            "WHERE GROUP_NAME like 'cache%'").get(0).get(0));
+
+        assertEquals(4L, execSql(ignite1, "SELECT COUNT(*) FROM IGNITE.CACHE_GROUPS " +
+            "WHERE GROUP_NAME like 'cache%'").get(0).get(0));
+
+        assertEquals(4L, execSql(ignite2, "SELECT COUNT(*) FROM IGNITE.CACHE_GROUPS " +
+            "WHERE GROUP_NAME like 'cache%'").get(0).get(0));
+
+        assertEquals(4L, execSql(ignite3, "SELECT COUNT(*) FROM IGNITE.CACHE_GROUPS " +
+            "WHERE GROUP_NAME like 'cache%'").get(0).get(0));
+    }
+
     /**
      * Gets ignite configuration with persistence enabled.
      */
@@ -533,4 +773,70 @@ private long convertToMilliseconds(Object sqlTime) {
         return time0.getTime() + TimeZone.getDefault().getOffset(time0.getTime());
     }
+
+    /**
+     *
+     */
+    private static class TestNodeFilter extends GridNodePredicate {
+        /**
+         * @param node Node.
+         */
+        public TestNodeFilter(ClusterNode node) {
+            super(node);
+        }
+
+        /** {@inheritDoc} */
+        @Override public String toString() {
+            return "TestNodeFilter";
+        }
+    }
+
+    /**
+     *
+     */
+    private static class TestEvictionFilter implements EvictionFilter<Object, Object> {
+        /** {@inheritDoc} */
+        @Override public boolean evictAllowed(Cache.Entry<Object, Object> entry) {
+            return false;
+        }
+
+        /** {@inheritDoc} */
+        @Override public String toString() {
+            return "TestEvictionFilter";
+        }
+    }
+
+    /**
+     *
+     */
+    private static class TestEvictionPolicyFactory implements Factory<EvictionPolicy<Object, Object>> {
+        /** {@inheritDoc} */
+        @Override public EvictionPolicy<Object, Object> create() {
+            return new EvictionPolicy<Object, Object>() {
+                @Override public void onEntryAccessed(boolean rmv, EvictableEntry<Object, Object> entry) {
+                    // No-op.
+                }
+            };
+        }
+
+        /** {@inheritDoc} */
+        @Override public String toString() {
+            return "TestEvictionPolicyFactory";
+        }
+    }
+
+    /**
+     *
+     */
+    private static class TestTopologyValidator implements TopologyValidator {
+        /** {@inheritDoc} */
+        @Override public boolean validate(Collection<ClusterNode> nodes) {
+            return true;
+        }
+
+        /** {@inheritDoc} */
+        @Override public String toString() {
+            return "TestTopologyValidator";
+        }
+    }
 }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/DmlStatementsProcessorTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/DmlStatementsProcessorTest.java
new file mode 100644
index 0000000000000..63ae9000c23f7
--- /dev/null
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/DmlStatementsProcessorTest.java
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.processors.query.h2;
+
+import javax.cache.processor.MutableEntry;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.lang.IgniteInClosure;
+import org.apache.ignite.lang.IgniteProductVersion;
+import org.apache.ignite.testframework.GridTestNode;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Ensures that anonymous classes of entry modifiers are compatible with old versions.
+ */
+public class DmlStatementsProcessorTest {
+    /**
+     * Checks that remove-closure is available by anonymous class position (4).
+     * This is required for compatibility with versions < 2.7.0.
+     *
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testRemoveEntryModifierCompatibilityOld() throws Exception {
+        checkRemoveClosureByAnonymousPosition(4);
+    }
+
+    /**
+     * Checks that remove-closure is available by anonymous class position (5).
+     * This is required for compatibility with versions >= 2.7.0.
+     *
+     * @throws Exception If failed.
+     */
+    @Test
+    public void testRemoveEntryModifierCompatibilityNew() throws Exception {
+        checkRemoveClosureByAnonymousPosition(5);
+    }
+
+    /**
+     * Checks that the old remove-closure is used if the remote node version is less than 2.7.0.
+     */
+    @Test
+    public void testRemoveEntryModifierClassName() {
+        String oldClsName = DmlStatementsProcessor.class.getName() + "$" + 4;
+        String newClsName = DmlStatementsProcessor.class.getName() + "$" + 5;
+
+        checkRemoveEntryClassName("2.4.0", oldClsName);
+        checkRemoveEntryClassName("2.5.0", oldClsName);
+        checkRemoveEntryClassName("2.6.0", oldClsName);
+
+        checkRemoveEntryClassName("2.7.0", newClsName);
+        checkRemoveEntryClassName("2.8.0", newClsName);
+    }
+
+    /**
+     * Checks remove-closure class name.
+     *
+     * @param ver The version of the remote node.
+     * @param expClsName Expected class name.
+     */
+    private void checkRemoveEntryClassName(final String ver, String expClsName) {
+        ClusterNode node = new GridTestNode() {
+            @Override public IgniteProductVersion version() {
+                return IgniteProductVersion.fromString(ver);
+            }
+        };
+
+        IgniteInClosure<MutableEntry<Object, Object>> rmvC =
+            DmlStatementsProcessor.getRemoveClosure(node, 0);
+
+        Assert.assertNotNull("Check remove-closure", rmvC);
+
+        Assert.assertEquals("Check remove-closure class name for version " + ver,
+            expClsName, rmvC.getClass().getName());
+    }
+
+    /**
+     * Checks that remove-closure is available by anonymous class position.
+     */
+    @SuppressWarnings("unchecked")
+    private void checkRemoveClosureByAnonymousPosition(int position) throws Exception {
+        Class<?> cls = Class.forName(DmlStatementsProcessor.class.getName() + "$" + position);
+
+        IgniteInClosure<MutableEntry<Object, Object>> rmvC =
+            (IgniteInClosure<MutableEntry<Object, Object>>)cls.newInstance();
+
+        CustomMutableEntry<Object, Object> entry = new CustomMutableEntry<>();
+
+        rmvC.apply(entry);
+
+        Assert.assertTrue("Entry should be removed", entry.isRemoved());
+    }
+
+    /**
+     *
+     */
+    private static class CustomMutableEntry<K, V> implements MutableEntry<K, V> {
+        /** */
+        private boolean rmvd;
+
+        /**
+         * @return {@code true} if remove method was called.
+         */
+        private boolean isRemoved() {
+            return rmvd;
+        }
+
+        /** {@inheritDoc} */
+        @Override public boolean exists() {
+            return false;
+        }
+
+        /** {@inheritDoc} */
+        @Override public void remove() {
+            rmvd = true;
+        }
+
+        /** {@inheritDoc} */
+        @Override public void setValue(V v) {
+            // No-op.
+        }
+
+        /** {@inheritDoc} */
+        @Override public K getKey() {
+            return null;
+        }
+
+        /** {@inheritDoc} */
+        @Override public V getValue() {
+            return null;
+        }
+
+        /** {@inheritDoc} */
+        @Override public <T> T unwrap(Class<T> aCls) {
+            return null;
+        }
+    }
+}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildSelfTest.java
index c5f1441af5d69..5bbb4e28adef1 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildSelfTest.java
@@ -24,21 +24,27 @@
 import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteEx;
+import org.apache.ignite.internal.processors.cache.GridCacheContext;
 import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager;
 import org.apache.ignite.internal.processors.cache.IgniteInternalCache;
 import org.apache.ignite.internal.processors.cache.index.DynamicIndexAbstractSelfTest;
 import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow;
 import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager;
 import org.apache.ignite.internal.processors.query.GridQueryProcessor;
+import org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorClosure;
 import org.apache.ignite.internal.util.lang.GridCursor;
 import org.apache.ignite.internal.util.typedef.internal.U;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Index rebuild after node restart test.
  */
+@RunWith(JUnit4.class)
 public class GridIndexRebuildSelfTest extends DynamicIndexAbstractSelfTest {
     /** Data size. */
-    protected static final int AMOUNT = 300;
+    protected static final int AMOUNT = 50;
 
     /** Data size. */
     protected static final String CACHE_NAME = "T";
@@ -109,11 +115,12 @@ public class GridIndexRebuildSelfTest extends DynamicIndexAbstractSelfTest {
      *
      * @throws Exception if failed.
      */
+    @Test
     public void testIndexRebuild() throws Exception {
         IgniteEx srv = startServer();
 
         execute(srv, "CREATE TABLE T(k int primary key, v int) WITH \"cache_name=T,wrap_value=false," +
-            "atomicity=transactional_snapshot\"");
+            "atomicity=transactional\"");
 
         execute(srv, "CREATE INDEX IDX ON T(v)");
 
@@ -228,13 +235,14 @@ private static class BlockingIndexing extends IgniteH2Indexing {
         private boolean firstRbld = true;
 
         /** {@inheritDoc} */
-        @Override public void rebuildIndexesFromHash(String cacheName) throws IgniteCheckedException {
+        @Override protected void rebuildIndexesFromHash0(GridCacheContext cctx, SchemaIndexCacheVisitorClosure clo)
+            throws IgniteCheckedException {
             if (!firstRbld)
                 U.await(INSTANCE.rebuildLatch);
             else
                 firstRbld = false;
 
-            super.rebuildIndexesFromHash(cacheName);
+            super.rebuildIndexesFromHash0(cctx, clo);
         }
     }
 }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildWithMvccEnabledSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildWithMvccEnabledSelfTest.java
index cf685468bf198..15f8ca8e73b65 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildWithMvccEnabledSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexRebuildWithMvccEnabledSelfTest.java
@@ -29,13 +29,18 @@
 import org.apache.ignite.internal.processors.cache.mvcc.MvccVersion;
 import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow;
 import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager;
+import org.apache.ignite.internal.processors.cache.transactions.IgniteInternalTx;
 import org.apache.ignite.internal.util.lang.GridCursor;
 import org.apache.ignite.internal.util.typedef.internal.U;
 import org.apache.ignite.lang.IgniteBiTuple;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Index rebuild after node restart test.
  */
+@RunWith(JUnit4.class)
 public class GridIndexRebuildWithMvccEnabledSelfTest extends GridIndexRebuildSelfTest {
     /** {@inheritDoc} */
     @Override protected IgniteConfiguration serverConfiguration(int idx, boolean filter) throws Exception {
@@ -44,6 +49,7 @@ public class GridIndexRebuildWithMvccEnabledSelfTest extends GridIndexRebuildSel
     }
 
     /** {@inheritDoc} */
+    @Test
     public void testIndexRebuild() throws Exception {
         IgniteEx srv = startServer();
 
@@ -84,7 +90,7 @@ public void testIndexRebuild() throws Exception {
      * @throws IgniteCheckedException if failed.
      */
     private static void lockVersion(IgniteEx node) throws IgniteCheckedException {
-        node.context().coordinators().requestSnapshotAsync().get();
+        node.context().coordinators().requestSnapshotAsync((IgniteInternalTx)null).get();
     }
 
     /** {@inheritDoc} */
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexingSpiAbstractSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexingSpiAbstractSelfTest.java
index 6b76230590e85..716cd029da70d 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexingSpiAbstractSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/GridIndexingSpiAbstractSelfTest.java
@@ -17,20 +17,7 @@
 
 package org.apache.ignite.internal.processors.query.h2;
 
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.Iterator;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-import org.apache.ignite.IgniteCache;
-import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.IgniteLogger;
-import org.apache.ignite.binary.BinaryObject;
-import org.apache.ignite.binary.BinaryObjectBuilder;
 import org.apache.ignite.cache.QueryEntity;
 import org.apache.ignite.cache.QueryIndex;
 import org.apache.ignite.cache.QueryIndexType;
@@ -38,48 +25,37 @@
 import org.apache.ignite.configuration.IgniteConfiguration;
 import org.apache.ignite.internal.IgniteEx;
 import org.apache.ignite.internal.binary.BinaryMarshaller;
-import org.apache.ignite.internal.binary.BinaryObjectImpl;
-import org.apache.ignite.internal.processors.cache.CacheObject;
-import org.apache.ignite.internal.processors.cache.CacheObjectContext;
-import org.apache.ignite.internal.processors.cache.CacheObjectValueContext;
-import org.apache.ignite.internal.processors.cache.KeyCacheObject;
 import org.apache.ignite.internal.processors.query.GridQueryFieldsResult;
-import org.apache.ignite.internal.processors.query.GridQueryIndexDescriptor;
-import org.apache.ignite.internal.processors.query.GridQueryProperty;
-import org.apache.ignite.internal.processors.query.GridQueryTypeDescriptor;
-import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.U;
-import org.apache.ignite.lang.IgniteBiTuple;
-import org.apache.ignite.plugin.extensions.communication.MessageReader;
-import org.apache.ignite.plugin.extensions.communication.MessageWriter;
-import org.apache.ignite.spi.IgniteSpiCloseableIterator;
-import org.apache.ignite.spi.IgniteSpiException;
 import org.apache.ignite.testframework.GridStringLogger;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
-import org.h2.util.JdbcUtils;
-import org.jetbrains.annotations.Nullable;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.List;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Tests for all SQL based indexing SPI implementations.
 */
+@RunWith(JUnit4.class)
 public abstract class GridIndexingSpiAbstractSelfTest extends GridCommonAbstractTest {
-    /** */
-    private static final TextIndex textIdx = new TextIndex(F.asList("txt"));
-
     /** */
     private static final LinkedHashMap<String, String> fieldsAA = new LinkedHashMap<>();
 
     /** */
     private static final LinkedHashMap<String, String> fieldsAB = new LinkedHashMap<>();
 
-    /** */
-    private static final LinkedHashMap<String, String> fieldsBA = new LinkedHashMap<>();
-
     /** */
     private IgniteEx ignite0;
 
     /** {@inheritDoc} */
+    @SuppressWarnings("deprecation")
     @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
         IgniteConfiguration cfg = super.getConfiguration(gridName);
 
@@ -98,20 +74,8 @@ public abstract class GridIndexingSpiAbstractSelfTest extends GridCommonAbstract
         fieldsAB.putAll(fieldsAA);
         fieldsAB.put("txt", String.class.getName());
-
-        fieldsBA.putAll(fieldsAA);
-        fieldsBA.put("sex", Boolean.class.getName());
     }
 
-    /** */
-    private static TypeDesc typeAA = new TypeDesc("A", "A", "A", Collections.<String, Class<?>>emptyMap(), null);
-
-    /** */
-    private static TypeDesc typeAB = new TypeDesc("A", "A", "B", Collections.<String, Class<?>>emptyMap(), textIdx);
-
-    /** */
-    private static TypeDesc typeBA = new TypeDesc("B", "B", "A", Collections.<String, Class<?>>emptyMap(), null);
-
     /** {@inheritDoc} */
     @Override protected void beforeTest() throws Exception {
         ignite0 = startGrid(0);
@@ -144,81 +108,11 @@ private CacheConfiguration cacheACfg() {
         return cfg;
     }
 
-    /**
-     *
-     */
-    private CacheConfiguration cacheBCfg() {
-        CacheConfiguration cfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
-
-        cfg.setName("B");
-
-        QueryEntity eA = new QueryEntity(Integer.class.getName(), "A");
-        eA.setFields(fieldsBA);
-
-        cfg.setQueryEntities(Collections.singleton(eA));
-
-        return cfg;
-    }
-
     /** {@inheritDoc} */
     @Override protected void afterTest() throws Exception {
         stopAllGrids();
     }
 
-    /**
-     * @param id Id.
-     * @param name Name.
-     * @param age Age.
-     * @return AA.
-     */
-    private BinaryObjectBuilder aa(String typeName, long id, String name, int age) {
-        BinaryObjectBuilder aBuilder = ignite0.binary().builder(typeName)
-            .setField("id", id)
-            .setField("name", name)
-            .setField("age", age);
-
-        return aBuilder;
-    }
-
-    /**
-     * @param id Id.
-     * @param name Name.
-     * @param age Age.
-     * @param txt Text.
-     * @return AB.
-     */
-    private BinaryObjectBuilder ab(long id, String name, int age, String txt) {
-        BinaryObjectBuilder aBuilder = aa("B", id, name, age);
-
-        aBuilder.setField("txt", txt);
-
-        return aBuilder;
-    }
-
-    /**
-     * @param id Id.
-     * @param name Name.
-     * @param age Age.
-     * @param sex Sex.
-     * @return BA.
-     */
-    private BinaryObjectBuilder ba(long id, String name, int age, boolean sex) {
-        BinaryObjectBuilder builder = aa("A", id, name, age);
-
-        builder.setField("sex", sex);
-
-        return builder;
-    }
-
-    /**
-     * @param row Row
-     * @return Value.
-     * @throws IgniteSpiException If failed.
-     */
-    private BinaryObjectImpl value(IgniteBiTuple<Integer, BinaryObjectImpl> row) throws IgniteSpiException {
-        return row.get2();
-    }
-
     /**
      * @return Indexing.
     */
@@ -233,148 +127,13 @@ protected boolean offheap() {
         return false;
     }
 
-    /**
-     * @param key Key.
-     * @return Cache object.
-     */
-    private KeyCacheObject key(int key) {
-        return new TestCacheObject(key);
-    }
-
-    /**
-     * @throws Exception If failed.
-     */
-    public void testSpi() throws Exception {
-        IgniteH2Indexing spi = getIndexing();
-
-        IgniteCache<Integer, BinaryObject> cacheA = ignite0.createCache(cacheACfg());
-
-        IgniteCache<Integer, BinaryObject> cacheB = ignite0.createCache(cacheBCfg());
-
-        assertFalse(spi.queryLocalSql(spi.schema(typeAA.cacheName()), typeAA.cacheName(), "select * from A.A", null,
-            Collections.emptySet(), typeAA.name(), null, null).hasNext());
-
-        assertFalse(spi.queryLocalSql(spi.schema(typeAB.cacheName()), typeAB.cacheName(), "select * from A.B", null,
-            Collections.emptySet(), typeAB.name(), null, null).hasNext());
-
-        assertFalse(spi.queryLocalSql(spi.schema(typeBA.cacheName()), typeBA.cacheName(), "select * from B.A", null,
-            Collections.emptySet(), typeBA.name(), null, null).hasNext());
-
-        assertFalse(spi.queryLocalSql(spi.schema(typeBA.cacheName()), typeBA.cacheName(),
-            "select * from B.A, A.B, A.A", null, Collections.emptySet(), typeBA.name(), null, null).hasNext());
-
-        try {
-            spi.queryLocalSql(spi.schema(typeBA.cacheName()), typeBA.cacheName(),
-                "select aa.*, ab.*, ba.* from A.A aa, A.B ab, B.A ba",
-                null, Collections.emptySet(), typeBA.name(), null, null).hasNext();
-
-            fail("Enumerations of aliases in select block must be prohibited");
-        }
-        catch (IgniteCheckedException ignored) {
-            // all fine
-        }
-
-        assertFalse(spi.queryLocalSql(spi.schema(typeAB.cacheName()), typeAB.cacheName(), "select ab.* from A.B ab",
-            null, Collections.emptySet(), typeAB.name(), null, null).hasNext());
-
-        assertFalse(spi.queryLocalSql(spi.schema(typeBA.cacheName()), typeBA.cacheName(),
-            "select ba.* from B.A as ba", null, Collections.emptySet(), typeBA.name(), null, null).hasNext());
-
-        cacheA.put(1, aa("A", 1, "Vasya", 10).build());
-        cacheA.put(1, ab(1, "Vasya", 20, "Some text about Vasya goes here.").build());
-        cacheB.put(1, ba(2, "Petya", 25, true).build());
-        cacheB.put(1, ba(2, "Kolya", 25, true).build());
-        cacheA.put(2, aa("A", 2, "Valera", 19).build());
-        cacheA.put(3, aa("A", 3, "Borya", 18).build());
-        cacheA.put(4, ab(4, "Vitalya", 20, "Very Good guy").build());
-
-        // Query data.
-        Iterator<IgniteBiTuple<Integer, BinaryObjectImpl>> res = spi.queryLocalSql(spi.schema(typeAA.cacheName()),
-            typeAA.cacheName(), "from a order by age", null, Collections.emptySet(), typeAA.name(), null, null);
-
-        assertTrue(res.hasNext());
-        assertEquals(aa("A", 3, "Borya", 18).build(), value(res.next()));
-        assertTrue(res.hasNext());
-        assertEquals(aa("A", 2, "Valera", 19).build(), value(res.next()));
-        assertFalse(res.hasNext());
-
-        res = spi.queryLocalSql(spi.schema(typeAA.cacheName()), typeAA.cacheName(),
-            "select aa.* from a aa order by aa.age", null, Collections.emptySet(), typeAA.name(), null, null);
-
-        assertTrue(res.hasNext());
-        assertEquals(aa("A", 3, "Borya", 18).build(), value(res.next()));
-        assertTrue(res.hasNext());
-        assertEquals(aa("A", 2, "Valera", 19).build(), value(res.next()));
-        assertFalse(res.hasNext());
-
-        res = spi.queryLocalSql(spi.schema(typeAB.cacheName()), typeAB.cacheName(), "from b order by name", null,
-            Collections.emptySet(), typeAB.name(), null, null);
-
-        assertTrue(res.hasNext());
-        assertEquals(ab(1, "Vasya", 20, "Some text about Vasya goes here.").build(), value(res.next()));
-        assertTrue(res.hasNext());
-        assertEquals(ab(4, "Vitalya", 20, "Very Good guy").build(), value(res.next()));
-        assertFalse(res.hasNext());
-
-        res = spi.queryLocalSql(spi.schema(typeAB.cacheName()), typeAB.cacheName(),
-            "select bb.* from b as bb order by bb.name", null, Collections.emptySet(), typeAB.name(), null, null);
-
-        assertTrue(res.hasNext());
-        assertEquals(ab(1, "Vasya", 20, "Some text about Vasya goes here.").build(), value(res.next()));
-        assertTrue(res.hasNext());
-        assertEquals(ab(4, "Vitalya", 20, "Very Good guy").build(), value(res.next()));
-        assertFalse(res.hasNext());
-
-        res = spi.queryLocalSql(spi.schema(typeBA.cacheName()), typeBA.cacheName(), "from a", null,
-            Collections.emptySet(), typeBA.name(), null, null);
-
-        assertTrue(res.hasNext());
-        assertEquals(ba(2, "Kolya", 25, true).build(), value(res.next()));
-        assertFalse(res.hasNext());
-
-        // Text queries
-        Iterator<IgniteBiTuple<Integer, BinaryObjectImpl>> txtRes = spi.queryLocalText(spi.schema(typeAB.cacheName()),
-            typeAB.cacheName(), "good", typeAB.name(), null);
-
-        assertTrue(txtRes.hasNext());
-        assertEquals(ab(4, "Vitalya", 20, "Very Good guy").build(), value(txtRes.next()));
-        assertFalse(txtRes.hasNext());
-
-        // Fields query
-        GridQueryFieldsResult fieldsRes =
-            spi.queryLocalSqlFields(spi.schema("A"), "select a.a.name n1, a.a.age a1, b.a.name n2, " +
-            "b.a.age a2 from a.a, b.a where a.a.id = b.a.id ", Collections.emptySet(), null, false, false, 0, null);
-
-        String[] aliases = {"N1", "A1", "N2", "A2"};
-        Object[] vals = { "Valera", 19, "Kolya", 25};
-
-        IgniteSpiCloseableIterator<List<?>> it = fieldsRes.iterator();
-
-        assertTrue(it.hasNext());
-
-        List<?> fields = it.next();
-
-        assertEquals(4, fields.size());
-
-        int i = 0;
-
-        for (Object f : fields) {
-            assertEquals(aliases[i], fieldsRes.metaData().get(i).fieldName());
-            assertEquals(vals[i++], f);
-        }
-
-        assertFalse(it.hasNext());
-
-        // Remove
-        cacheA.remove(2);
-        cacheB.remove(1);
-    }
-
     /**
      * Test long queries write explain warnings into log.
      *
     * @throws Exception If failed.
      */
+    @SuppressWarnings({"unchecked", "deprecation"})
+    @Test
     public void testLongQueries() throws Exception {
         IgniteH2Indexing spi = getIndexing();
 
@@ -416,381 +175,4 @@ public void testLongQueries() throws Exception {
             GridTestUtils.setFieldValue(spi, "log", oldLog);
         }
     }
-
-    /**
-     * Index descriptor.
-     */
-    private static class TextIndex implements GridQueryIndexDescriptor {
-        /** */
-        private final Collection<String> fields;
-
-        /**
-         * @param fields Fields.
-         */
-        private TextIndex(Collection<String> fields) {
-            this.fields = Collections.unmodifiableCollection(fields);
-        }
-
-        /** {@inheritDoc} */
-        @Override public String name() {
-            return null;
-        }
-
-        /** {@inheritDoc} */
-        @Override public Collection<String> fields() {
-            return fields;
-        }
-
-        /** {@inheritDoc} */
-        @Override public boolean descending(String field) {
-            return false;
-        }
-
-        /** {@inheritDoc} */
-        @Override public QueryIndexType type() {
-            return QueryIndexType.FULLTEXT;
-        }
-
-        /** {@inheritDoc} */
-        @Override public int inlineSize() {
-            return 0;
-        }
-    }
-
-    /**
-     * Type descriptor.
-     */
-    private static class TypeDesc implements GridQueryTypeDescriptor {
-        /** */
-        private final String name;
-
-        /** */
-        private final String cacheName;
-
-        /** */
-        private final String schemaName;
-
-        /** */
-        private final Map<String, Class<?>> valFields;
-
-        /** */
-        private final GridQueryIndexDescriptor textIdx;
-
-        /**
-         * @param cacheName Cache name.
-         * @param schemaName Schema name.
-         * @param name Type name.
-         * @param valFields Fields.
-         * @param textIdx Fulltext index.
-         */
-        private TypeDesc(String cacheName, String schemaName, String name, Map<String, Class<?>> valFields, GridQueryIndexDescriptor textIdx) {
-            this.name = name;
-            this.cacheName = cacheName;
-            this.schemaName = schemaName;
-            this.valFields = Collections.unmodifiableMap(valFields);
-            this.textIdx = textIdx;
-        }
-
-        /** {@inheritDoc} */
-        @Override public String affinityKey() {
-            return null;
-        }
-
-        /** {@inheritDoc} */
-        @Override public String name() {
-            return name;
-        }
-
-        /** {@inheritDoc} */
-        @Override public String schemaName() {
-            return schemaName;
-        }
-
-        /** {@inheritDoc} */
-        @Override public String tableName() {
-            return null;
-        }
-
-        /**
-         * @return Cache name.
-         */
-        String cacheName() {
-            return cacheName;
-        }
-
-        /** {@inheritDoc} */
-        @Override public Map<String, Class<?>> fields() {
-            return valFields;
-        }
-
-        /** {@inheritDoc} */
-        @Override public GridQueryProperty property(final String name) {
-            return new GridQueryProperty() {
-                /** */
-                @Override public Object value(Object key, Object val) throws IgniteCheckedException {
-                    return TypeDesc.this.value(name, key, val);
-                }
-
-                /** */
-                @Override public void setValue(Object key, Object val, Object propVal) throws IgniteCheckedException {
-                    throw new UnsupportedOperationException();
-                }
-
-                /** */
-                @Override public String name() {
-                    return name;
-                }
-
-                /** */
-                @Override public Class<?> type() {
-                    return Object.class;
-                }
-
-                /** */
-                @Override public boolean key() {
-                    return false;
-                }
-
-                /** */
-                @Override public GridQueryProperty parent() {
-                    return null;
-                }
-
-                /** */
-                @Override public boolean notNull() {
-                    return false;
-                }
-
-                /** */
-                @Override public Object defaultValue() {
-                    return null;
-                }
-
-                /** */
-                @Override public int precision() {
-                    return -1;
-                }
-
-                /** */
-                @Override public int scale() {
-                    return -1;
-                }
-            };
-        }
-
-        /** {@inheritDoc} */
-        @SuppressWarnings("unchecked")
-        @Override public <T> T value(String field, Object key, Object val) throws IgniteSpiException {
-            assert !F.isEmpty(field);
-
-            assert key instanceof Integer;
-
-            Map<String, T> m = (Map<String, T>)val;
-
-            if (m.containsKey(field))
-                return m.get(field);
-
-            return null;
-        }
-
-        /** {@inheritDoc} */
-        @SuppressWarnings("unchecked")
-        @Override public void setValue(String field, Object key, Object val, Object propVal) throws IgniteCheckedException {
-            assert !F.isEmpty(field);
-
-            assert key instanceof Integer;
-
-            Map<String, Object> m = (Map<String, Object>)val;
-
-            m.put(field, propVal);
-        }
-
-        /** */
-        @Override public Map<String, GridQueryIndexDescriptor> indexes() {
-            return Collections.emptyMap();
-        }
-
-        /** */
-        @Override public GridQueryIndexDescriptor textIndex() {
-            return textIdx;
-        }
-
-        /** */
-        @Override public Class<?> valueClass() {
-            return Object.class;
-        }
-
-        /** */
-        @Override public Class<?> keyClass() {
-            return Integer.class;
-        }
-
-        /** */
-        @Override public String keyTypeName() {
-            return null;
-        }
-
-        /** */
-        @Override public String valueTypeName() {
-            return null;
-        }
-
-        /** */
-        @Override public boolean valueTextIndex() {
-            return textIdx == null;
-        }
-
-        /** */
-        @Override public int typeId() {
-            return 0;
-        }
-
-        /** {@inheritDoc} */
-        @Override public String keyFieldName() {
-            return null;
-        }
-
-        /** {@inheritDoc} */
-        @Override public String valueFieldName() {
-            return null;
-        }
-
-        /** {@inheritDoc} */
-        @Nullable @Override public String keyFieldAlias() {
-            return null;
-        }
-
-        /** {@inheritDoc} */
-        @Nullable @Override public String valueFieldAlias() {
-            return null;
-        }
-
-        /** {@inheritDoc} */
-        @Override public void validateKeyAndValue(Object key, Object value) throws IgniteCheckedException {
-            // No-op.
-        }
-
-        /** {@inheritDoc} */
-        @Override public void setDefaults(Object key, Object val) throws IgniteCheckedException {
-            // No-op.
-        }
-    }
-
-    /**
-     */
-    private static class TestCacheObject implements KeyCacheObject {
-        /** */
-        private Object val;
-
-        /** */
-        private int part;
-
-        /**
-         * @param val Value.
-         */
-        private TestCacheObject(Object val) {
-            this.val = val;
-        }
-
-        /** {@inheritDoc} */
-        @Override public void onAckReceived() {
-            // No-op.
-        }
-
-        /** {@inheritDoc} */
-        @Nullable @Override public <T> T value(CacheObjectValueContext ctx, boolean cpy) {
-            return (T)val;
-        }
-
-        /** {@inheritDoc} */
-        @Override public int partition() {
-            return part;
-        }
-
-        /** {@inheritDoc} */
-        @Override public void partition(int part) {
-            this.part = part;
-        }
-
-        /** {@inheritDoc} */
-        @Override public byte[] valueBytes(CacheObjectValueContext ctx) throws IgniteCheckedException {
-            return JdbcUtils.serialize(val, null);
-        }
-
-        /** {@inheritDoc} */
-        @Override public boolean putValue(ByteBuffer buf) throws IgniteCheckedException {
-            return false;
-        }
-
-        /** {@inheritDoc} */
-        @Override public int putValue(long addr) throws IgniteCheckedException {
-            return 0;
-        }
-
-        /** {@inheritDoc} */
-        @Override public boolean putValue(final ByteBuffer buf, final int off, final int len)
-            throws IgniteCheckedException {
-            return false;
-        }
-
-        /** {@inheritDoc} */
-        @Override public int valueBytesLength(CacheObjectContext ctx) throws IgniteCheckedException {
-            return 0;
-        }
-
-        /** {@inheritDoc} */
-        @Override public byte cacheObjectType() {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public boolean isPlatformType() {
-            return true;
-        }
-
-        /** {@inheritDoc} */
-        @Override public KeyCacheObject copy(int part) {
-            return this;
-        }
-
-        /** {@inheritDoc} */
-        @Override public CacheObject prepareForCache(CacheObjectContext ctx) {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public void finishUnmarshal(CacheObjectValueContext ctx, ClassLoader ldr) throws IgniteCheckedException {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public void prepareMarshal(CacheObjectValueContext ctx) throws IgniteCheckedException {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public boolean readFrom(ByteBuffer buf, MessageReader reader) {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public short directType() {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public byte fieldsCount() {
-            throw new UnsupportedOperationException();
-        }
-
-        /** {@inheritDoc} */
-        @Override public boolean internal() {
-            return false;
-        }
-    }
 }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2ResultSetIteratorNullifyOnEndSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2ResultSetIteratorNullifyOnEndSelfTest.java
index 31b0b97c5b0ed..02c5cba88ac6a 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2ResultSetIteratorNullifyOnEndSelfTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2ResultSetIteratorNullifyOnEndSelfTest.java
@@ -17,28 +17,26 @@
 
 package org.apache.ignite.internal.processors.query.h2;
 
-import java.util.Collections;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Objects;
-import javax.cache.Cache;
 import org.apache.ignite.IgniteCache;
-import org.apache.ignite.IgniteCheckedException;
 import org.apache.ignite.cache.query.QueryCursor;
 import org.apache.ignite.cache.query.SqlFieldsQuery;
-import org.apache.ignite.cache.query.SqlQuery;
 import org.apache.ignite.cache.query.annotations.QuerySqlField;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.internal.processors.cache.QueryCursorImpl;
 import org.apache.ignite.internal.processors.query.GridQueryCacheObjectsIterator;
-import org.apache.ignite.internal.processors.query.GridQueryProcessor;
-import org.apache.ignite.internal.util.lang.GridCloseableIterator;
 import org.apache.ignite.testframework.GridTestUtils;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
 
 /**
  * Test for iterator data link erasure after closing or completing
 */
+@RunWith(JUnit4.class)
 public class H2ResultSetIteratorNullifyOnEndSelfTest extends GridCommonAbstractTest {
     /** */
     private static final int NODES_COUNT = 2;
@@ -46,83 +44,13 @@ public class H2ResultSetIteratorNullifyOnEndSelfTest extends GridCommonAbstractT
     /** */
     private static final int PERSON_COUNT = 20;
 
-    /** */
-    private static final String SELECT_ALL_SQL = "SELECT p.* FROM Person p ORDER BY p.salary";
-
     /** */
     private static final String SELECT_MAX_SAL_SQLF = "select max(salary) from Person";
 
-    /**
-     * Non local SQL check nullification after close
-     */
-    public void testSqlQueryClose() {
-        SqlQuery<String, Person> qry = new SqlQuery<>(Person.class, SELECT_ALL_SQL);
-
-        QueryCursor<Cache.Entry<String, Person>> qryCurs = cache().query(qry);
-
-        qryCurs.iterator();
-
-        qryCurs.close();
-
-        H2ResultSetIterator h2It = extractIteratorInnerGridIteratorInnerH2ResultSetIterator(qryCurs);
-
-        checkIterator(h2It);
-    }
-
-    /**
-     * Non local SQL check nullification after complete
-     */
-    public void testSqlQueryComplete() {
-        SqlQuery<String, Person> qry = new SqlQuery<>(Person.class, SELECT_ALL_SQL);
-
-        QueryCursor<Cache.Entry<String, Person>> qryCurs = cache().query(qry);
-
-        qryCurs.getAll();
-
-        H2ResultSetIterator h2It = extractIteratorInnerGridIteratorInnerH2ResultSetIterator(qryCurs);
-
-        checkIterator(h2It);
-    }
-
-    /**
-     * Local SQL check nullification after close
-     */
-    public void testSqlQueryLocalClose() {
-        SqlQuery<String, Person> qry = new SqlQuery<>(Person.class, SELECT_ALL_SQL);
-
-        qry.setLocal(true);
-
-        QueryCursor<Cache.Entry<String, Person>> qryCurs = cache().query(qry);
-
-        qryCurs.iterator();
-
-        qryCurs.close();
-
-        H2ResultSetIterator h2It = extractIterableInnerH2ResultSetIterator(qryCurs);
-
-        checkIterator(h2It);
-    }
-
-    /**
-     * Local SQL check nullification after complete
-     */
-    public void testSqlQueryLocalComplete() {
-        SqlQuery qry = new
SqlQuery<>(Person.class, SELECT_ALL_SQL); - - qry.setLocal(true); - - QueryCursor> qryCurs = cache().query(qry); - - qryCurs.getAll(); - - H2ResultSetIterator h2It = extractIterableInnerH2ResultSetIterator(qryCurs); - - checkIterator(h2It); - } - /** * Non local SQL Fields check nullification after close */ + @Test public void testSqlFieldsQueryClose() { SqlFieldsQuery qry = new SqlFieldsQuery(SELECT_MAX_SAL_SQLF); @@ -140,6 +68,7 @@ public void testSqlFieldsQueryClose() { /** * Non local SQL Fields check nullification after complete */ + @Test public void testSqlFieldsQueryComplete() { SqlFieldsQuery qry = new SqlFieldsQuery(SELECT_MAX_SAL_SQLF); @@ -155,6 +84,7 @@ public void testSqlFieldsQueryComplete() { /** * Local SQL Fields check nullification after close */ + @Test public void testSqlFieldsQueryLocalClose() { SqlFieldsQuery qry = new SqlFieldsQuery(SELECT_MAX_SAL_SQLF); @@ -174,6 +104,7 @@ public void testSqlFieldsQueryLocalClose() { /** * Local SQL Fields check nullification after complete */ + @Test public void testSqlFieldsQueryLocalComplete() { SqlFieldsQuery qry = new SqlFieldsQuery(SELECT_MAX_SAL_SQLF); @@ -199,45 +130,6 @@ private void checkIterator(H2ResultSetIterator h2it){ fail(); } - /** - * Extract H2ResultSetIterator by reflection for non local SQL cases - * @param qryCurs source cursor - * @return target iterator or null of not extracted - */ - private H2ResultSetIterator extractIteratorInnerGridIteratorInnerH2ResultSetIterator( - QueryCursor> qryCurs) { - if (QueryCursorImpl.class.isAssignableFrom(qryCurs.getClass())) { - Iterator inner = GridTestUtils.getFieldValue(qryCurs, QueryCursorImpl.class, "iter"); - - GridQueryCacheObjectsIterator it = GridTestUtils.getFieldValue(inner, inner.getClass(), "val$iter0"); - - Iterator> h2RsIt = GridTestUtils.getFieldValue(it, GridQueryCacheObjectsIterator.class, "iter"); - - if (H2ResultSetIterator.class.isAssignableFrom(h2RsIt.getClass())) - return (H2ResultSetIterator)h2RsIt; - } - return null; - } - - 
/** - * Extract H2ResultSetIterator by reflection for local SQL cases. - * - * @param qryCurs source cursor - * @return target iterator or null of not extracted - */ - private H2ResultSetIterator extractIterableInnerH2ResultSetIterator( - QueryCursor> qryCurs) { - if (QueryCursorImpl.class.isAssignableFrom(qryCurs.getClass())) { - Iterable iterable = GridTestUtils.getFieldValue(qryCurs, QueryCursorImpl.class, "iterExec"); - - Iterator h2RsIt = GridTestUtils.getFieldValue(iterable, iterable.getClass(), "val$i"); - - if (H2ResultSetIterator.class.isAssignableFrom(h2RsIt.getClass())) - return (H2ResultSetIterator)h2RsIt; - } - return null; - } - /** * Extract H2ResultSetIterator by reflection for SQL Fields cases. * @@ -256,67 +148,6 @@ private H2ResultSetIterator extractGridIteratorInnerH2ResultSetIterator(QueryCur return null; } - /** - * "onClose" should remove links to data. - */ - public void testOnClose() { - try { - GridCloseableIterator it = indexing().queryLocalSql( - indexing().schema(cache().getName()), - cache().getName(), - SELECT_ALL_SQL, - null, - Collections.emptySet(), - "Person", - null, - null); - - if (H2ResultSetIterator.class.isAssignableFrom(it.getClass())) { - H2ResultSetIterator h2it = (H2ResultSetIterator)it; - - h2it.onClose(); - - assertNull(GridTestUtils.getFieldValue(h2it, H2ResultSetIterator.class, "data")); - } - else - fail(); - } - catch (IgniteCheckedException e) { - fail(e.getMessage()); - } - } - - /** - * Complete iterate should remove links to data. 
- */ - public void testOnComplete() { - try { - GridCloseableIterator it = indexing().queryLocalSql( - indexing().schema(cache().getName()), - cache().getName(), - SELECT_ALL_SQL, - null, - Collections.emptySet(), - "Person", - null, - null); - - if (H2ResultSetIterator.class.isAssignableFrom(it.getClass())) { - H2ResultSetIterator h2it = (H2ResultSetIterator)it; - - while (h2it.onHasNext()) - h2it.onNext(); - - assertNull(GridTestUtils.getFieldValue(h2it, H2ResultSetIterator.class, "data")); - } - else - fail(); - } - catch (IgniteCheckedException e) { - fail(e.getMessage()); - } - } - /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrids(NODES_COUNT); @@ -335,15 +166,6 @@ public void testOnComplete() { stopAllGrids(); } - /** - * @return H2 indexing instance. - */ - private IgniteH2Indexing indexing() { - GridQueryProcessor qryProcessor = grid(0).context().query(); - - return GridTestUtils.getFieldValue(qryProcessor, GridQueryProcessor.class, "idx"); - } - /** * @return Cache. */ diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2StatementCacheSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2StatementCacheSelfTest.java index 655d039632386..f3bd8d7f7a1c0 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2StatementCacheSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/H2StatementCacheSelfTest.java @@ -19,15 +19,20 @@ import java.sql.PreparedStatement; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class H2StatementCacheSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. 
*/ + @Test public void testEviction() throws Exception { H2StatementCache stmtCache = new H2StatementCache(1); H2CachedStatementKey key1 = new H2CachedStatementKey("", "1"); @@ -44,6 +49,7 @@ public void testEviction() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLruEvictionInStoreOrder() throws Exception { H2StatementCache stmtCache = new H2StatementCache(2); @@ -60,6 +66,7 @@ public void testLruEvictionInStoreOrder() throws Exception { /** * @throws Exception If failed. */ + @Test public void testLruEvictionInAccessOrder() throws Exception { H2StatementCache stmtCache = new H2StatementCache(2); @@ -80,4 +87,4 @@ public void testLruEvictionInAccessOrder() throws Exception { private static PreparedStatement stmt() { return new PreparedStatementExImpl(null); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlBigIntegerKeyTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlBigIntegerKeyTest.java index 366c61ac15f12..262beabeb1a88 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlBigIntegerKeyTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlBigIntegerKeyTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Ensures that BigInteger can be used as key */ +@RunWith(JUnit4.class) public class IgniteSqlBigIntegerKeyTest extends GridCommonAbstractTest { /** */ private static final String CACHE_NAME = "Mycache"; @@ -65,6 +69,7 @@ private IgniteCache getCache() { } /** */ + @Test public void testBigIntegerKeyGet() { IgniteCache cache = 
getCache(); @@ -82,6 +87,7 @@ public void testBigIntegerKeyGet() { } /** */ + @Test public void testBigIntegerKeyQuery() { IgniteCache cache = getCache(); @@ -90,6 +96,7 @@ public void testBigIntegerKeyQuery() { } /** */ + @Test public void testBigIntegerFieldQuery() { IgniteCache cache = getCache(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlQueryMinMaxTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlQueryMinMaxTest.java index e8403ec23a0c8..2d1409a304cdf 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlQueryMinMaxTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/IgniteSqlQueryMinMaxTest.java @@ -24,17 +24,16 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** Test for SQL min() and max() optimization */ +@RunWith(JUnit4.class) public class IgniteSqlQueryMinMaxTest extends GridCommonAbstractTest { - /** IP finder. 
*/ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Name of the cache for test */ private static final String CACHE_NAME = "intCache"; @@ -59,10 +58,6 @@ public class IgniteSqlQueryMinMaxTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); - TcpDiscoverySpi spi = (TcpDiscoverySpi)cfg.getDiscoverySpi(); - - spi.setIpFinder(IP_FINDER); - CacheConfiguration ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME); ccfg.setIndexedTypes(Integer.class, Integer.class); ccfg.setName(CACHE_NAME); @@ -80,6 +75,7 @@ public class IgniteSqlQueryMinMaxTest extends GridCommonAbstractTest { } /** Check min() and max() functions in queries */ + @Test public void testQueryMinMax() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME_2); @@ -118,6 +114,7 @@ public void testQueryMinMax() throws Exception { } /** Check min() and max() on empty cache */ + @Test public void testQueryMinMaxEmptyCache() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME_2); @@ -135,6 +132,7 @@ public void testQueryMinMaxEmptyCache() throws Exception { * Check min() and max() over _key use correct index * Test uses value object cache */ + @Test public void testMinMaxQueryPlanOnKey() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME_2); @@ -151,6 +149,7 @@ public void testMinMaxQueryPlanOnKey() throws Exception { * Check min() and max() over value fields use correct index. 
* Test uses value object cache */ + @Test public void testMinMaxQueryPlanOnFields() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME_2); @@ -167,6 +166,7 @@ public void testMinMaxQueryPlanOnFields() throws Exception { * Check min() and max() over _key uses correct index * Test uses primitive cache */ + @Test public void testSimpleMinMaxQueryPlanOnKey() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME); @@ -183,6 +183,7 @@ public void testSimpleMinMaxQueryPlanOnKey() throws Exception { * Check min() and max() over _val uses correct index. * Test uses primitive cache */ + @Test public void testSimpleMinMaxQueryPlanOnValue() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME); @@ -196,6 +197,7 @@ public void testSimpleMinMaxQueryPlanOnValue() throws Exception { } /** Check min() and max() over group */ + @Test public void testGroupMinMax() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME_2); @@ -225,6 +227,7 @@ public void testGroupMinMax() throws Exception { } /** Check min() and max() over group with having clause */ + @Test public void testGroupHavingMinMax() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME_2); @@ -261,6 +264,7 @@ public void testGroupHavingMinMax() throws Exception { } /** Check min() and max() over group with joins */ + @Test public void testJoinGroupMinMax() throws Exception { try (Ignite client = startGrid("client")) { IgniteCache cache = client.cache(CACHE_NAME); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/PreparedStatementExSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/PreparedStatementExSelfTest.java index 22bff3b2a9b9b..808413bce685f 100644 --- 
a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/PreparedStatementExSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/PreparedStatementExSelfTest.java @@ -19,14 +19,19 @@ import java.sql.PreparedStatement; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class PreparedStatementExSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testStoringMeta() throws Exception { PreparedStatement stmt = stmt(); @@ -40,6 +45,7 @@ public void testStoringMeta() throws Exception { /** * @throws Exception If failed. */ + @Test public void testStoringMoreMetaKeepsExisting() throws Exception { PreparedStatement stmt = stmt(); @@ -58,4 +64,4 @@ public void testStoringMoreMetaKeepsExisting() throws Exception { private static PreparedStatement stmt() { return new PreparedStatementExImpl(null); } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/ThreadLocalObjectPoolSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/ThreadLocalObjectPoolSelfTest.java index b7b7a3701c975..7a11261a2f0ca 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/ThreadLocalObjectPoolSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/ThreadLocalObjectPoolSelfTest.java @@ -20,10 +20,14 @@ import java.util.concurrent.CompletableFuture; import org.apache.ignite.internal.processors.query.h2.ThreadLocalObjectPool.Reusable; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class 
ThreadLocalObjectPoolSelfTest extends GridCommonAbstractTest { /** */ private ThreadLocalObjectPool pool = new ThreadLocalObjectPool<>(Obj::new, 1); @@ -31,6 +35,7 @@ public class ThreadLocalObjectPoolSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testObjectIsReusedAfterRecycling() throws Exception { Reusable o1 = pool.borrow(); o1.recycle(); @@ -43,6 +48,7 @@ public void testObjectIsReusedAfterRecycling() throws Exception { /** * @throws Exception If failed. */ + @Test public void testBorrowedObjectIsNotReturnedTwice() throws Exception { Reusable o1 = pool.borrow(); Reusable o2 = pool.borrow(); @@ -53,6 +59,7 @@ public void testBorrowedObjectIsNotReturnedTwice() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObjectShouldBeClosedOnRecycleIfPoolIsFull() throws Exception { Reusable o1 = pool.borrow(); Reusable o2 = pool.borrow(); @@ -65,6 +72,7 @@ public void testObjectShouldBeClosedOnRecycleIfPoolIsFull() throws Exception { /** * @throws Exception If failed. */ + @Test public void testObjectShouldNotBeReturnedIfPoolIsFull() throws Exception { Reusable o1 = pool.borrow(); Reusable o2 = pool.borrow(); @@ -81,6 +89,7 @@ public void testObjectShouldNotBeReturnedIfPoolIsFull() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testObjectShouldReturnedToRecyclingThreadBag() throws Exception { Reusable o1 = pool.borrow(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelperTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelperTest.java index 4c64264559ea6..3604248215eb6 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelperTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/database/InlineIndexHelperTest.java @@ -51,11 +51,17 @@ import org.h2.value.ValueTime; import org.h2.value.ValueTimestamp; import org.h2.value.ValueUuid; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.springframework.util.SerializationUtils; +import static org.apache.ignite.internal.processors.query.h2.database.InlineIndexHelper.CANT_BE_COMPARE; + /** * Simple tests for {@link InlineIndexHelper}. */ +@RunWith(JUnit4.class) public class InlineIndexHelperTest extends GridCommonAbstractTest { /** */ private static final int CACHE_ID = 42; @@ -70,6 +76,7 @@ public class InlineIndexHelperTest extends GridCommonAbstractTest { private static final Comparator ALWAYS_FAILS_COMPARATOR = new AlwaysFailsComparator(); /** Test utf-8 string cutting. */ + @Test public void testConvert() { // 8 bytes total: 1b, 1b, 3b, 3b. @@ -84,6 +91,7 @@ public void testConvert() { } /** */ + @Test public void testCompare1bytes() throws Exception { int maxSize = 3 + 2; // 2 ascii chars + 3 bytes header. 
@@ -93,12 +101,13 @@ public void testCompare1bytes() throws Exception { assertEquals(1, putAndCompare("bbb", "aaa", maxSize)); assertEquals(1, putAndCompare("aaa", "aa", maxSize)); assertEquals(1, putAndCompare("aaa", "a", maxSize)); - assertEquals(-2, putAndCompare("aaa", "aaa", maxSize)); - assertEquals(-2, putAndCompare("aaa", "aab", maxSize)); - assertEquals(-2, putAndCompare("aab", "aaa", maxSize)); + assertEquals(CANT_BE_COMPARE, putAndCompare("aaa", "aaa", maxSize)); + assertEquals(CANT_BE_COMPARE, putAndCompare("aaa", "aab", maxSize)); + assertEquals(CANT_BE_COMPARE, putAndCompare("aab", "aaa", maxSize)); } /** */ + @Test public void testCompare2bytes() throws Exception { int maxSize = 3 + 4; // 2 2-bytes chars + 3 bytes header. @@ -108,12 +117,13 @@ public void testCompare2bytes() throws Exception { assertEquals(1, putAndCompare("¢¢¢", "¡¡¡", maxSize)); assertEquals(1, putAndCompare("¡¡¡", "¡¡", maxSize)); assertEquals(1, putAndCompare("¡¡¡", "¡", maxSize)); - assertEquals(-2, putAndCompare("¡¡¡", "¡¡¡", maxSize)); - assertEquals(-2, putAndCompare("¡¡¡", "¡¡¢", maxSize)); - assertEquals(-2, putAndCompare("¡¡¢", "¡¡¡", maxSize)); + assertEquals(CANT_BE_COMPARE, putAndCompare("¡¡¡", "¡¡¡", maxSize)); + assertEquals(CANT_BE_COMPARE, putAndCompare("¡¡¡", "¡¡¢", maxSize)); + assertEquals(CANT_BE_COMPARE, putAndCompare("¡¡¢", "¡¡¡", maxSize)); } /** */ + @Test public void testCompare3bytes() throws Exception { int maxSize = 3 + 6; // 2 3-bytes chars + 3 bytes header. @@ -129,6 +139,7 @@ public void testCompare3bytes() throws Exception { } /** */ + @Test public void testCompare4bytes() throws Exception { int maxSize = 3 + 8; // 2 4-bytes chars + 3 bytes header. @@ -144,6 +155,7 @@ public void testCompare4bytes() throws Exception { } /** */ + @Test public void testCompareMixed() throws Exception { int maxSize = 3 + 8; // 2 up to 4-bytes chars + 3 bytes header. 
@@ -154,6 +166,7 @@ public void testCompareMixed() throws Exception { } /** */ + @Test public void testCompareMixed2() throws Exception { int strCnt = 1000; int symbCnt = 20; @@ -207,7 +220,7 @@ private int putAndCompare(String v1, String v2, int maxSize) throws Exception { int off = 0; - InlineIndexHelper ih = new InlineIndexHelper(Value.STRING, 1, 0, + InlineIndexHelper ih = new InlineIndexHelper("", Value.STRING, 1, 0, CompareMode.getInstance(null, 0)); ih.put(pageAddr, off, v1 == null ? ValueNull.INSTANCE : ValueString.get(v1), maxSize); @@ -218,11 +231,12 @@ private int putAndCompare(String v1, String v2, int maxSize) throws Exception { if (page != 0L) pageMem.releasePage(CACHE_ID, pageId, page); - pageMem.stop(); + pageMem.stop(true); } } /** Limit is too small to cut */ + @Test public void testStringCut() { // 6 bytes total: 3b, 3b. byte[] bytes = InlineIndexHelper.trimUTF8("\u20ac\u20ac".getBytes(Charsets.UTF_8), 2); @@ -231,8 +245,9 @@ public void testStringCut() { } /** Test on String values compare */ + @Test public void testRelyOnCompare() { - InlineIndexHelper ha = new InlineIndexHelper(Value.STRING, 0, SortOrder.ASCENDING, + InlineIndexHelper ha = new InlineIndexHelper("", Value.STRING, 0, SortOrder.ASCENDING, CompareMode.getInstance(null, 0)); // same size @@ -253,8 +268,9 @@ public void testRelyOnCompare() { } /** Test on Bytes values compare */ + @Test public void testRelyOnCompareBytes() { - InlineIndexHelper ha = new InlineIndexHelper(Value.BYTES, 0, SortOrder.ASCENDING, + InlineIndexHelper ha = new InlineIndexHelper("", Value.BYTES, 0, SortOrder.ASCENDING, CompareMode.getInstance(null, 0)); // same size @@ -275,8 +291,9 @@ public void testRelyOnCompareBytes() { } /** Test on Bytes values compare */ + @Test public void testRelyOnCompareJavaObject() { - InlineIndexHelper ha = new InlineIndexHelper(Value.JAVA_OBJECT, 0, SortOrder.ASCENDING, + InlineIndexHelper ha = new InlineIndexHelper("",Value.JAVA_OBJECT, 0, SortOrder.ASCENDING, 
CompareMode.getInstance(null, 0)); // different types @@ -299,6 +316,7 @@ public void testRelyOnCompareJavaObject() { } /** */ + @Test public void testStringTruncate() throws Exception { DataRegionConfiguration plcCfg = new DataRegionConfiguration().setInitialSize(1024 * MB) .setMaxSize(1024 * MB); @@ -323,7 +341,7 @@ public void testStringTruncate() throws Exception { int off = 0; - InlineIndexHelper ih = new InlineIndexHelper(Value.STRING, 1, 0, + InlineIndexHelper ih = new InlineIndexHelper("", Value.STRING, 1, 0, CompareMode.getInstance(null, 0)); ih.put(pageAddr, off, ValueString.get("aaaaaaa"), 3 + 5); @@ -345,11 +363,12 @@ public void testStringTruncate() throws Exception { finally { if (page != 0L) pageMem.releasePage(CACHE_ID, pageId, page); - pageMem.stop(); + pageMem.stop(true); } } /** */ + @Test public void testBytes() throws Exception { DataRegionConfiguration plcCfg = new DataRegionConfiguration().setInitialSize(1024 * MB) .setMaxSize(1024 * MB); @@ -374,7 +393,7 @@ public void testBytes() throws Exception { int off = 0; - InlineIndexHelper ih = new InlineIndexHelper(Value.BYTES, 1, 0, + InlineIndexHelper ih = new InlineIndexHelper("", Value.BYTES, 1, 0, CompareMode.getInstance(null, 0)); int maxSize = 3 + 3; @@ -403,11 +422,12 @@ public void testBytes() throws Exception { finally { if (page != 0L) pageMem.releasePage(CACHE_ID, pageId, page); - pageMem.stop(); + pageMem.stop(true); } } /** */ + @Test public void testJavaObject() throws Exception { DataRegionConfiguration plcCfg = new DataRegionConfiguration().setInitialSize(1024 * MB) .setMaxSize(1024 * MB); @@ -432,7 +452,7 @@ public void testJavaObject() throws Exception { int off = 0; - InlineIndexHelper ih = new InlineIndexHelper(Value.JAVA_OBJECT, 1, 0, + InlineIndexHelper ih = new InlineIndexHelper("", Value.JAVA_OBJECT, 1, 0, CompareMode.getInstance(null, 0)); int maxSize = 3 + 3; @@ -461,11 +481,12 @@ public void testJavaObject() throws Exception { finally { if (page != 0L) 
pageMem.releasePage(CACHE_ID, pageId, page); - pageMem.stop(); + pageMem.stop(true); } } /** */ + @Test public void testNull() throws Exception { testPutGet(ValueInt.get(-1), ValueNull.INSTANCE, ValueInt.get(3)); testPutGet(ValueInt.get(-1), ValueNull.INSTANCE, ValueInt.get(3)); @@ -475,41 +496,49 @@ public void testNull() throws Exception { } /** */ + @Test public void testBoolean() throws Exception { testPutGet(ValueBoolean.get(true), ValueBoolean.get(false), ValueBoolean.get(true)); } /** */ + @Test public void testByte() throws Exception { testPutGet(ValueByte.get((byte)-1), ValueByte.get((byte)2), ValueByte.get((byte)3)); } /** */ + @Test public void testShort() throws Exception { testPutGet(ValueShort.get((short)-32000), ValueShort.get((short)2), ValueShort.get((short)3)); } /** */ + @Test public void testInt() throws Exception { testPutGet(ValueInt.get(-1), ValueInt.get(2), ValueInt.get(3)); } /** */ + @Test public void testLong() throws Exception { testPutGet(ValueLong.get(-1), ValueLong.get(2), ValueLong.get(3)); } /** */ + @Test public void testFloat() throws Exception { testPutGet(ValueFloat.get(1.1f), ValueFloat.get(2.2f), ValueFloat.get(1.1f)); } /** */ + @Test public void testDouble() throws Exception { testPutGet(ValueDouble.get(1.1f), ValueDouble.get(2.2f), ValueDouble.get(1.1f)); } /** */ + @Test public void testDate() throws Exception { testPutGet(ValueDate.get(Date.valueOf("2017-02-20")), ValueDate.get(Date.valueOf("2017-02-21")), @@ -517,6 +546,7 @@ public void testDate() throws Exception { } /** */ + @Test public void testTime() throws Exception { testPutGet(ValueTime.get(Time.valueOf("10:01:01")), ValueTime.get(Time.valueOf("11:02:02")), @@ -524,6 +554,7 @@ public void testTime() throws Exception { } /** */ + @Test public void testTimestamp() throws Exception { testPutGet(ValueTimestamp.get(Timestamp.valueOf("2017-02-20 10:01:01")), ValueTimestamp.get(Timestamp.valueOf("2017-02-20 10:01:01")), @@ -531,6 +562,7 @@ public void testTimestamp() 
throws Exception { } /** */ + @Test public void testUUID() throws Exception { testPutGet(ValueUuid.get(UUID.randomUUID().toString()), ValueUuid.get(UUID.randomUUID().toString()), @@ -563,7 +595,7 @@ private void testPutGet(Value v1, Value v2, Value v3) throws Exception { int off = 0; int max = 255; - InlineIndexHelper ih = new InlineIndexHelper(v1.getType(), 1, 0, + InlineIndexHelper ih = new InlineIndexHelper("", v1.getType(), 1, 0, CompareMode.getInstance(null, 0)); off += ih.put(pageAddr, off, v1, max - off); @@ -582,7 +614,7 @@ private void testPutGet(Value v1, Value v2, Value v3) throws Exception { if (page != 0L) pageMem.releasePage(CACHE_ID, pageId, page); - pageMem.stop(); + pageMem.stop(true); } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/AbstractH2CompareQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/AbstractH2CompareQueryTest.java index b1d58421b255c..8e16ed022cf3b 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/AbstractH2CompareQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/AbstractH2CompareQueryTest.java @@ -41,9 +41,6 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.binary.BinaryMarshaller; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; @@ -52,9 +49,6 @@ * partitioned) which have the same data models and data content. 
*/ public abstract class AbstractH2CompareQueryTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** */ protected static Ignite ignite; @@ -69,12 +63,6 @@ public abstract class AbstractH2CompareQueryTest extends GridCommonAbstractTest @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - c.setDiscoverySpi(disco); - c.setMarshaller(new BinaryMarshaller()); return c; diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/BaseH2CompareQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/BaseH2CompareQueryTest.java index f9d25d67d8f0e..157dc3b61dae8 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/BaseH2CompareQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/BaseH2CompareQueryTest.java @@ -39,11 +39,15 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testsuites.IgniteIgnore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Base set of queries to compare query results from h2 database instance and mixed ignite caches (replicated and partitioned) * which have the same data models and data content. */ +@RunWith(JUnit4.class) public class BaseH2CompareQueryTest extends AbstractH2CompareQueryTest { /** Org count. 
*/ public static final int ORG_CNT = 30; @@ -223,6 +227,7 @@ public class BaseH2CompareQueryTest extends AbstractH2CompareQueryTest { /** * */ + @Test public void testSelectStar() { assertEquals(1, cachePers.query(new SqlQuery<AffinityKey<?>,Person>( Person.class, "\t\r\n select \n*\t from Person limit 1")).getAll().size()); @@ -239,6 +244,7 @@ public void testSelectStar() { /** * @throws Exception If failed. */ + @Test public void testInvalidQuery() throws Exception { final SqlFieldsQuery sql = new SqlFieldsQuery("SELECT firstName from Person where id <> ? and orgId <> ?"); @@ -255,6 +261,7 @@ public void testInvalidQuery() throws Exception { * @throws Exception */ @IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-705", forceFailure = true) + @Test public void testAllExamples() throws Exception { // compareQueryRes0("select ? limit ? offset ?"); @@ -424,6 +431,7 @@ public void testAllExamples() throws Exception { /** * @throws Exception If failed. */ + @Test public void testParamSubstitution() throws Exception { compareQueryRes0(cachePers, "select ? from \"pers\".Person", "Some arg"); } @@ -431,6 +439,7 @@ public void testParamSubstitution() throws Exception { /** * @throws SQLException If failed. */ + @Test public void testAggregateOrderBy() throws SQLException { compareOrderedQueryRes0(cachePers, "select firstName name, count(*) cnt from \"pers\".Person " + "group by name order by cnt, name desc"); @@ -439,6 +448,7 @@ public void testAggregateOrderBy() throws SQLException { /** * @throws Exception If failed. */ + @Test public void testNullParamSubstitution() throws Exception { List<List<?>> rs1 = compareQueryRes0(cachePers, "select ? from \"pers\".Person", null); @@ -449,6 +459,7 @@ public void testNullParamSubstitution() throws Exception { /** * */ + @Test public void testUnion() throws SQLException { String base = "select _val v from \"pers\".Person"; @@ -464,6 +475,7 @@ /** * @throws Exception If failed.
*/ + @Test public void testEmptyResult() throws Exception { compareQueryRes0(cachePers, "select id from \"pers\".Person where 0 = 1"); } @@ -471,6 +483,7 @@ public void testEmptyResult() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlQueryWithAggregation() throws Exception { compareQueryRes0(cachePers, "select avg(salary) from \"pers\".Person, \"org\".Organization " + "where Person.orgId = Organization.id and " + @@ -480,6 +493,7 @@ public void testSqlQueryWithAggregation() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlFieldsQuery() throws Exception { compareQueryRes0(cachePers, "select concat(firstName, ' ', lastName) from \"pers\".Person"); } @@ -487,6 +501,7 @@ public void testSqlFieldsQuery() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSqlFieldsQueryWithJoin() throws Exception { compareQueryRes0(cachePers, "select concat(firstName, ' ', lastName), " + "Organization.name from \"pers\".Person, \"org\".Organization where " @@ -496,6 +511,7 @@ public void testSqlFieldsQueryWithJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testOrdered() throws Exception { compareOrderedQueryRes0(cachePers, "select firstName, lastName" + " from \"pers\".Person" + @@ -505,6 +521,7 @@ public void testOrdered() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimpleJoin() throws Exception { // Have expected results. compareQueryRes0(cachePers, String.format("select id, firstName, lastName" + @@ -520,6 +537,7 @@ public void testSimpleJoin() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSimpleReplicatedSelect() throws Exception { compareQueryRes0(cacheProd, "select id, name from \"prod\".Product"); } @@ -527,6 +545,7 @@ public void testSimpleReplicatedSelect() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testCrossCache() throws Exception { compareQueryRes0(cachePers, "select firstName, lastName" + " from \"pers\".Person, \"purch\".Purchase" + @@ -962,4 +981,4 @@ private static class Address implements Serializable { return "Address [id=" + id + ", street=" + street + ']'; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java index de77150c51d34..a3a3a093f6c61 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java @@ -48,9 +48,6 @@ import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.internal.U; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.h2.command.Prepared; @@ -60,6 +57,9 @@ import org.h2.table.Column; import org.h2.value.Value; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheRebalanceMode.SYNC; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; @@ -67,10 +67,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridQueryParsingTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true); - /** */ private 
static Ignite ignite; @@ -79,12 +77,6 @@ public class GridQueryParsingTest extends GridCommonAbstractTest { @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration c = super.getConfiguration(igniteInstanceName); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(ipFinder); - - c.setDiscoverySpi(disco); - c.setCacheConfiguration( cacheConfiguration(DEFAULT_CACHE_NAME, "SCH1", String.class, Person.class), cacheConfiguration("addr", "SCH2", String.class, Address.class), @@ -135,6 +127,7 @@ private CacheConfiguration cacheConfiguration(@NotNull String name, String sqlSc /** * @throws Exception If failed. */ + @Test public void testParseSelectAndUnion() throws Exception { checkQuery("select 1 from Person p where addrIds in ((1,2,3), (3,4,5))"); checkQuery("select 1 from Person p where addrId in ((1,))"); @@ -328,6 +321,7 @@ public void testParseSelectAndUnion() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUseIndexHints() throws Exception { checkQuery("select * from Person use index (\"PERSON_NAME_IDX\")"); checkQuery("select * from Person use index (\"PERSON_PARENTNAME_IDX\")"); @@ -345,6 +339,7 @@ public void testUseIndexHints() throws Exception { * * @throws Exception If failed. */ + @Test public void testParseTableFilter() throws Exception { Prepared prepared = parse("select Person.old, p1.old, p1.addrId from Person, Person p1 " + "where exists(select 1 from sch2.Address a where a.id = p1.addrId)"); @@ -388,6 +383,7 @@ public void testParseTableFilter() throws Exception { } /** */ + @Test public void testParseMerge() throws Exception { /* Plain rows w/functions, operators, defaults, and placeholders. 
*/ checkQuery("merge into Person(old, name) values(5, 'John')"); @@ -434,6 +430,7 @@ public void testParseMerge() throws Exception { } /** */ + @Test public void testParseInsert() throws Exception { /* Plain rows w/functions, operators, defaults, and placeholders. */ checkQuery("insert into Person(old, name) values(5, 'John')"); @@ -476,6 +473,7 @@ public void testParseInsert() throws Exception { } /** */ + @Test public void testParseDelete() throws Exception { checkQuery("delete from Person"); checkQuery("delete from Person p where p.old > ?"); @@ -487,6 +485,7 @@ public void testParseDelete() throws Exception { } /** */ + @Test public void testParseUpdate() throws Exception { checkQuery("update Person set name='Peter'"); checkQuery("update Person per set name='Peter', old = 5"); @@ -503,6 +502,7 @@ public void testParseUpdate() throws Exception { /** * */ + @Test public void testParseCreateIndex() throws Exception { assertCreateIndexEquals( buildCreateIndex(null, "Person", "sch1", false, QueryIndexType.SORTED, @@ -561,6 +561,7 @@ public void testParseCreateIndex() throws Exception { /** * */ + @Test public void testParseDropIndex() throws Exception { // Schema that is not set defaults to default schema of connection which is sch1 assertDropIndexEquals(buildDropIndex("idx", "sch1", false), "drop index idx"); @@ -578,6 +579,7 @@ public void testParseDropIndex() throws Exception { /** * */ + @Test public void testParseDropTable() throws Exception { // Schema that is not set defaults to default schema of connection which is sch1 assertDropTableEquals(buildDropTable("sch1", "tbl", false), "drop table tbl"); @@ -593,6 +595,7 @@ public void testParseDropTable() throws Exception { } /** */ + @Test public void testParseCreateTable() throws Exception { assertCreateTableEquals( buildCreateTable("sch1", "Person", "cache", F.asList("id", "city"), @@ -636,6 +639,7 @@ false, c("id", Value.INT), c("city", Value.STRING), c("name", Value.STRING), } /** */ + @Test public void 
testParseCreateTableWithDefaults() { assertParseThrows("create table Person (id int primary key, age int, " + "ts TIMESTAMP default CURRENT_TIMESTAMP()) WITH \"template=cache\"", @@ -653,6 +657,7 @@ public void testParseCreateTableWithDefaults() { } /** */ + @Test public void testParseAlterTableAddColumn() throws Exception { assertAlterTableAddColumnEquals(buildAlterTableAddColumn("SCH2", "Person", false, false, c("COMPANY", Value.STRING)), "ALTER TABLE SCH2.Person ADD company varchar"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/H2CompareBigQueryTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/H2CompareBigQueryTest.java index 50b71b598216e..3c3a785e07198 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/H2CompareBigQueryTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/H2CompareBigQueryTest.java @@ -36,6 +36,9 @@ import org.apache.ignite.cache.query.annotations.QuerySqlField; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.X; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Executes one big query (and subqueries of the big query) to compare query results from h2 database instance and @@ -72,6 +75,7 @@ * * */ +@RunWith(JUnit4.class) public class H2CompareBigQueryTest extends AbstractH2CompareQueryTest { /** Root order count. */ private static final int ROOT_ORDER_CNT = 1000; @@ -282,6 +286,7 @@ protected boolean distributedJoins() { /** * @throws Exception If failed. 
*/ + @Test public void testBigQuery() throws Exception { X.println(); X.println(bigQry); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/CacheQueryMemoryLeakTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/CacheQueryMemoryLeakTest.java index 754504e14a990..8d801e768cba3 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/CacheQueryMemoryLeakTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/CacheQueryMemoryLeakTest.java @@ -32,22 +32,19 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** */ +@RunWith(JUnit4.class) public class CacheQueryMemoryLeakTest extends GridCommonAbstractTest { - /** */ - private static final TcpDiscoveryVmIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration igniteCfg = super.getConfiguration(igniteInstanceName); - ((TcpDiscoverySpi)igniteCfg.getDiscoverySpi()).setIpFinder(IP_FINDER); - if (igniteInstanceName.equals("client")) igniteCfg.setClientMode(true); @@ -64,6 +61,7 @@ public class CacheQueryMemoryLeakTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testResultIsMultipleOfPage() throws Exception { IgniteEx srv = (IgniteEx)startGrid("server"); Ignite client = startGrid("client"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheCauseRetryMessageSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheCauseRetryMessageSelfTest.java index 8c4358a7fd722..b9c846ad7a57c 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheCauseRetryMessageSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheCauseRetryMessageSelfTest.java @@ -32,6 +32,9 @@ import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SQL_RETRY_TIMEOUT; import static org.apache.ignite.internal.processors.query.h2.twostep.JoinSqlTestHelper.Organization; @@ -40,6 +43,7 @@ /** * Failed to reserve partitions for query (cache is not found on local node) Root cause test */ +@RunWith(JUnit4.class) public class DisappearedCacheCauseRetryMessageSelfTest extends GridCommonAbstractTest { /** */ private static final int NODES_COUNT = 2; @@ -51,6 +55,7 @@ public class DisappearedCacheCauseRetryMessageSelfTest extends GridCommonAbstrac private IgniteCache orgCache; /** */ + @Test public void testDisappearedCacheCauseRetryMessage() { SqlQuery qry = new SqlQuery(JoinSqlTestHelper.Person.class, JoinSqlTestHelper.JOIN_SQL).setArgs("Organization #0"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheWasNotFoundMessageSelfTest.java 
b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheWasNotFoundMessageSelfTest.java index 9928ed6ff2745..c54f2695f18c0 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheWasNotFoundMessageSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/DisappearedCacheWasNotFoundMessageSelfTest.java @@ -31,7 +31,11 @@ import org.apache.ignite.lang.IgniteInClosure; import org.apache.ignite.plugin.extensions.communication.Message; import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; +import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SQL_RETRY_TIMEOUT; import static org.apache.ignite.internal.processors.query.h2.twostep.JoinSqlTestHelper.Organization; @@ -40,6 +44,7 @@ /** * Grid cache context is not registered for cache id root cause message test */ +@RunWith(JUnit4.class) public class DisappearedCacheWasNotFoundMessageSelfTest extends GridCommonAbstractTest { /** */ private static final int NODES_COUNT = 2; @@ -51,6 +56,7 @@ public class DisappearedCacheWasNotFoundMessageSelfTest extends GridCommonAbstra private IgniteCache orgCache; /** */ + @Test public void testDisappearedCacheWasNotFoundMessage() { SqlQuery qry = new SqlQuery(Person.class, JoinSqlTestHelper.JOIN_SQL).setArgs("Organization #0"); @@ -62,7 +68,10 @@ public void testDisappearedCacheWasNotFoundMessage() { fail("No CacheException emitted."); } catch (CacheException e) { - assertTrue(e.getMessage(), e.getMessage().contains("Cache not found on local node")); + boolean exp = e.getMessage().contains("Cache not found on local node (was concurrently destroyed?)"); + + if (!exp) + throw e; } } @@ -70,6 
+79,8 @@ public void testDisappearedCacheWasNotFoundMessage() { @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(gridName); + cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(LOCAL_IP_FINDER)); + cfg.setCommunicationSpi(new TcpCommunicationSpi(){ /** {@inheritDoc} */ @Override public void sendMessage(ClusterNode node, Message msg, IgniteInClosure ackC) { diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/NonCollocatedRetryMessageSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/NonCollocatedRetryMessageSelfTest.java index c602225e8e30e..2b657b126d728 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/NonCollocatedRetryMessageSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/NonCollocatedRetryMessageSelfTest.java @@ -36,12 +36,16 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SQL_RETRY_TIMEOUT; /** * Failed to execute non-collocated query root cause message test */ +@RunWith(JUnit4.class) public class NonCollocatedRetryMessageSelfTest extends GridCommonAbstractTest { /** */ private static final int NODES_COUNT = 3; @@ -53,6 +57,7 @@ public class NonCollocatedRetryMessageSelfTest extends GridCommonAbstractTest { private IgniteCache personCache; /** */ + @Test public void testNonCollocatedRetryMessage() { SqlQuery qry = new SqlQuery(JoinSqlTestHelper.Person.class, JoinSqlTestHelper.JOIN_SQL).setArgs("Organization #0"); diff --git 
a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/RetryCauseMessageSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/RetryCauseMessageSelfTest.java index dbb2c59bc2fd7..0f33a15d1c067 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/RetryCauseMessageSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/RetryCauseMessageSelfTest.java @@ -43,6 +43,10 @@ import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.IgniteSystemProperties.IGNITE_SQL_RETRY_TIMEOUT; import static org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion.NONE; @@ -53,6 +57,7 @@ /** * Test for 6 retry cases */ +@RunWith(JUnit4.class) public class RetryCauseMessageSelfTest extends GridCommonAbstractTest { /** */ private static final int NODES_COUNT = 2; @@ -80,6 +85,7 @@ public class RetryCauseMessageSelfTest extends GridCommonAbstractTest { /** * Failed to reserve partitions for query (cache is not found on local node) */ + @Test public void testSynthCacheWasNotFoundMessage() { GridMapQueryExecutor mapQryExec = GridTestUtils.getFieldValue(h2Idx, IgniteH2Indexing.class, "mapQryExec"); @@ -121,16 +127,17 @@ public void testSynthCacheWasNotFoundMessage() { /** * Failed to reserve partitions for query (group reservation failed) */ + @Test public void testGrpReservationFailureMessage() { final GridMapQueryExecutor mapQryExec = GridTestUtils.getFieldValue(h2Idx, IgniteH2Indexing.class, "mapQryExec"); - final ConcurrentMap reservations = GridTestUtils.getFieldValue(mapQryExec, GridMapQueryExecutor.class, 
"reservations"); + final ConcurrentMap reservations = reservations(h2Idx); GridTestUtils.setFieldValue(h2Idx, IgniteH2Indexing.class, "mapQryExec", new MockGridMapQueryExecutor(null) { @Override public void onMessage(UUID nodeId, Object msg) { if (GridH2QueryRequest.class.isAssignableFrom(msg.getClass())) { - final MapReservationKey grpKey = new MapReservationKey(ORG, null); + final PartitionReservationKey grpKey = new PartitionReservationKey(ORG, null); reservations.put(grpKey, new GridReservable() { @@ -166,7 +173,11 @@ public void testGrpReservationFailureMessage() { /** * Failed to reserve partitions for query (partition of REPLICATED cache is not in OWNING state) */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-7039") + @Test public void testReplicatedCacheReserveFailureMessage() { + fail("https://issues.apache.org/jira/browse/IGNITE-7039"); + GridMapQueryExecutor mapQryExec = GridTestUtils.getFieldValue(h2Idx, IgniteH2Indexing.class, "mapQryExec"); final GridKernalContext ctx = GridTestUtils.getFieldValue(mapQryExec, GridMapQueryExecutor.class, "ctx"); @@ -189,7 +200,7 @@ public void testReplicatedCacheReserveFailureMessage() { aState.getAndSet(stateVal); } - else + else startedExecutor.onMessage(nodeId, msg); } }.insertRealExecutor(mapQryExec)); @@ -197,6 +208,7 @@ public void testReplicatedCacheReserveFailureMessage() { SqlQuery qry = new SqlQuery<>(Organization.class, ORG_SQL); qry.setDistributedJoins(true); + try { orgCache.query(qry).getAll(); } @@ -214,6 +226,7 @@ public void testReplicatedCacheReserveFailureMessage() { /** * Failed to reserve partitions for query (partition of PARTITIONED cache cannot be reserved) */ + @Test public void testPartitionedCacheReserveFailureMessage() { GridMapQueryExecutor mapQryExec = GridTestUtils.getFieldValue(h2Idx, IgniteH2Indexing.class, "mapQryExec"); @@ -264,16 +277,17 @@ public void testPartitionedCacheReserveFailureMessage() { /** * Failed to execute non-collocated query (will retry) */ + @Test public 
void testNonCollocatedFailureMessage() { final GridMapQueryExecutor mapQryExec = GridTestUtils.getFieldValue(h2Idx, IgniteH2Indexing.class, "mapQryExec"); - final ConcurrentMap reservations = GridTestUtils.getFieldValue(mapQryExec, GridMapQueryExecutor.class, "reservations"); + final ConcurrentMap reservations = reservations(h2Idx); GridTestUtils.setFieldValue(h2Idx, IgniteH2Indexing.class, "mapQryExec", new MockGridMapQueryExecutor(null) { @Override public void onMessage(UUID nodeId, Object msg) { if (GridH2QueryRequest.class.isAssignableFrom(msg.getClass())) { - final MapReservationKey grpKey = new MapReservationKey(ORG, null); + final PartitionReservationKey grpKey = new PartitionReservationKey(ORG, null); reservations.put(grpKey, new GridReservable() { @@ -285,6 +299,7 @@ public void testNonCollocatedFailureMessage() { } }); } + startedExecutor.onMessage(nodeId, msg); } @@ -354,20 +369,30 @@ public void testNonCollocatedFailureMessage() { stopAllGrids(); } + /** + * @param h2Idx Indexing. + * @return Current reservations. + */ + private static ConcurrentMap reservations(IgniteH2Indexing h2Idx) { + PartitionReservationManager partReservationMgr = h2Idx.partitionReservationManager(); + + return GridTestUtils.getFieldValue(partReservationMgr, PartitionReservationManager.class, "reservations"); + } /** * Wrapper around @{GridMapQueryExecutor} */ private abstract static class MockGridMapQueryExecutor extends GridMapQueryExecutor { + /** Wrapped executor */ + GridMapQueryExecutor startedExecutor; /** - * Wrapped executor + * @param startedExecutor Started executor. + * @return Mocked map query executor. 
*/ - GridMapQueryExecutor startedExecutor; + MockGridMapQueryExecutor insertRealExecutor(GridMapQueryExecutor startedExecutor) { + this.startedExecutor = startedExecutor; - /** */ - MockGridMapQueryExecutor insertRealExecutor(GridMapQueryExecutor realExecutor) { - this.startedExecutor = realExecutor; return this; } @@ -393,11 +418,6 @@ MockGridMapQueryExecutor insertRealExecutor(GridMapQueryExecutor realExecutor) { return startedExecutor.busyLock(); } - /** {@inheritDoc} */ - @Override public void onCacheStop(String cacheName) { - startedExecutor.onCacheStop(cacheName); - } - /** {@inheritDoc} */ @Override public void stopAndUnregisterCurrentLazyWorker() { startedExecutor.stopAndUnregisterCurrentLazyWorker(); diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/TableViewSubquerySelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/TableViewSubquerySelfTest.java index eaf4243018ff3..677344d719cf8 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/TableViewSubquerySelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/twostep/TableViewSubquerySelfTest.java @@ -24,10 +24,14 @@ import org.apache.ignite.cache.query.SqlFieldsQuery; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class TableViewSubquerySelfTest extends GridCommonAbstractTest { /** */ private static final int NODES_COUNT = 1; @@ -52,6 +56,7 @@ public class TableViewSubquerySelfTest extends GridCommonAbstractTest { } /** */ + @Test public void testSubqueryTableView() { final String cacheName = "a1"; diff --git 
a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedAtomicColumnConstraintsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedAtomicColumnConstraintsTest.java index 601090fa3faa1..2e7effef667f2 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedAtomicColumnConstraintsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedAtomicColumnConstraintsTest.java @@ -18,6 +18,7 @@ package org.apache.ignite.internal.processors.sql; import java.io.Serializable; +import java.math.BigDecimal; import java.util.Arrays; import java.util.Collections; import java.util.HashMap; @@ -37,8 +38,12 @@ import org.apache.ignite.internal.util.typedef.T2; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.NotNull; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; import static org.apache.ignite.internal.processors.query.QueryUtils.KEY_FIELD_NAME; @@ -46,6 +51,7 @@ import static org.apache.ignite.testframework.GridTestUtils.assertThrowsWithCause; /** */ +@RunWith(JUnit4.class) public class IgniteCachePartitionedAtomicColumnConstraintsTest extends GridCommonAbstractTest { /** */ private static final long FUT_TIMEOUT = 10_000L; @@ -55,12 +61,31 @@ public class IgniteCachePartitionedAtomicColumnConstraintsTest extends GridCommo /** */ private static final String STR_ORG_CACHE_NAME = "STR_ORG"; - + + /** */ private static final String STR_ORG_WITH_FIELDS_CACHE_NAME = "STR_ORG_WITH_FIELDS"; /** */ private 
static final String OBJ_CACHE_NAME = "ORG_ADDRESS"; + /** */ + private static final String DEC_CACHE_NAME_FOR_SCALE = "DEC_DEC_FOR_SCALE"; + + /** */ + private static final String OBJ_CACHE_NAME_FOR_SCALE = "ORG_EMPLOYEE_FOR_SCALE"; + + /** */ + private static final String DEC_EMPL_CACHE_NAME_FOR_SCALE = "DEC_EMPLOYEE_FOR_SCALE"; + + /** */ + private static final String DEC_CACHE_NAME_FOR_PREC = "DEC_DEC_FOR_PREC"; + + /** */ + private static final String OBJ_CACHE_NAME_FOR_PREC = "ORG_EMPLOYEE_FOR_PREC"; + + /** */ + private static final String DEC_EMPL_CACHE_NAME_FOR_PREC = "DEC_EMPLOYEE_FOR_PREC"; + /** */ private Consumer shouldFail = (op) -> assertThrowsWithCause(op, IgniteException.class); @@ -71,6 +96,15 @@ public class IgniteCachePartitionedAtomicColumnConstraintsTest extends GridCommo @Override protected void beforeTestsStarted() throws Exception { startGrid(0); + createCachesForStringTests(); + + createCachesForDecimalPrecisionTests(); + + createCachesForDecimalScaleTests(); + } + + /** @throws Exception If failed.*/ + private void createCachesForStringTests() throws Exception { Map strStrPrecision = new HashMap<>(); strStrPrecision.put(KEY_FIELD_NAME, 5); @@ -102,10 +136,89 @@ public class IgniteCachePartitionedAtomicColumnConstraintsTest extends GridCommo .setFieldsPrecision(strOrgPrecision)), STR_ORG_WITH_FIELDS_CACHE_NAME); } + /** @throws Exception If failed.*/ + private void createCachesForDecimalPrecisionTests() throws Exception { + Map decDecPrecision = new HashMap<>(); + + decDecPrecision.put(KEY_FIELD_NAME, 4); + decDecPrecision.put(VAL_FIELD_NAME, 4); + + jcache(grid(0), cacheConfiguration(new QueryEntity(BigDecimal.class.getName(), BigDecimal.class.getName()) + .setFieldsPrecision(decDecPrecision)), DEC_CACHE_NAME_FOR_PREC); + + Map orgEmployeePrecision = new HashMap<>(); + + orgEmployeePrecision.put("id", 4); + orgEmployeePrecision.put("salary", 4); + + jcache(grid(0), cacheConfiguration(new QueryEntity(DecOrganization.class.getName(), 
Employee.class.getName()) + .addQueryField("id", "java.math.BigDecimal", "id") + .addQueryField("salary", "java.math.BigDecimal", "salary") + .setFieldsPrecision(orgEmployeePrecision)), OBJ_CACHE_NAME_FOR_PREC); + + Map decEmployeePrecision = new HashMap<>(); + + decEmployeePrecision.put(KEY_FIELD_NAME, 4); + decEmployeePrecision.put("salary", 4); + + jcache(grid(0), cacheConfiguration(new QueryEntity(BigDecimal.class.getName(), Employee.class.getName()) + .addQueryField("salary", "java.math.BigDecimal", "salary") + .setFieldsPrecision(decEmployeePrecision)), DEC_EMPL_CACHE_NAME_FOR_PREC); + } + + /** @throws Exception If failed.*/ + private void createCachesForDecimalScaleTests() throws Exception { + Map decDecPrecision = new HashMap<>(); + + decDecPrecision.put(KEY_FIELD_NAME, 4); + decDecPrecision.put(VAL_FIELD_NAME, 4); + + Map decDecScale = new HashMap<>(); + + decDecScale.put(KEY_FIELD_NAME, 2); + decDecScale.put(VAL_FIELD_NAME, 2); + + jcache(grid(0), cacheConfiguration(new QueryEntity(BigDecimal.class.getName(), BigDecimal.class.getName()) + .setFieldsScale(decDecScale) + .setFieldsPrecision(decDecPrecision)), DEC_CACHE_NAME_FOR_SCALE); + + Map orgEmployeePrecision = new HashMap<>(); + + orgEmployeePrecision.put("id", 4); + orgEmployeePrecision.put("salary", 4); + + Map orgEmployeeScale = new HashMap<>(); + + orgEmployeeScale.put("id", 2); + orgEmployeeScale.put("salary", 2); + + jcache(grid(0), cacheConfiguration(new QueryEntity(DecOrganization.class.getName(), Employee.class.getName()) + .addQueryField("id", "java.math.BigDecimal", "id") + .addQueryField("salary", "java.math.BigDecimal", "salary") + .setFieldsScale(orgEmployeeScale) + .setFieldsPrecision(orgEmployeePrecision)), OBJ_CACHE_NAME_FOR_SCALE); + + Map decEmployeePrecision = new HashMap<>(); + + decEmployeePrecision.put(KEY_FIELD_NAME, 4); + decEmployeePrecision.put("salary", 4); + + Map decEmployeeScale = new HashMap<>(); + + decEmployeeScale.put(KEY_FIELD_NAME, 2); + 
decEmployeeScale.put("salary", 2); + + jcache(grid(0), cacheConfiguration(new QueryEntity(BigDecimal.class.getName(), Employee.class.getName()) + .addQueryField("salary", "java.math.BigDecimal", "salary") + .setFieldsPrecision(decEmployeePrecision) + .setFieldsScale(decEmployeeScale)), DEC_EMPL_CACHE_NAME_FOR_SCALE); + } + /** * @throws Exception If failed. */ - public void testPutTooLongValueFail() throws Exception { + @Test + public void testPutTooLongStringValueFail() throws Exception { IgniteCache cache = jcache(0, STR_CACHE_NAME); T2 val = new T2<>("3", "123456"); @@ -113,14 +226,15 @@ public void testPutTooLongValueFail() throws Exception { checkPutAll(shouldFail, cache, new T2<>("1", "1"), val); checkPutOps(shouldFail, cache, val); - + checkReplaceOps(shouldFail, cache, val, "1"); } /** * @throws Exception If failed. */ - public void testPutTooLongKeyFail() throws Exception { + @Test + public void testPutTooLongStringKeyFail() throws Exception { IgniteCache cache = jcache(0, STR_CACHE_NAME); T2 val = new T2<>("123456", "2"); @@ -133,7 +247,8 @@ public void testPutTooLongKeyFail() throws Exception { /** * @throws Exception If failed. */ - public void testPutTooLongValueFieldFail() throws Exception { + @Test + public void testPutTooLongStringValueFieldFail() throws Exception { IgniteCache cache = jcache(0, OBJ_CACHE_NAME); T2 val = new T2<>(new Organization("3"), new Address("123456")); @@ -148,7 +263,8 @@ public void testPutTooLongValueFieldFail() throws Exception { /** * @throws Exception If failed. */ - public void testPutTooLongKeyFieldFail() throws Exception { + @Test + public void testPutTooLongStringKeyFieldFail() throws Exception { IgniteCache cache = jcache(0, OBJ_CACHE_NAME); T2 val = new T2<>(new Organization("123456"), new Address("2")); @@ -161,19 +277,23 @@ public void testPutTooLongKeyFieldFail() throws Exception { /** * @throws Exception If failed. 
*/ - public void testPutTooLongKeyFail2() throws Exception { - doCheckPutTooLongKeyFail2(STR_ORG_CACHE_NAME); + @Test + public void testPutTooLongStringKeyFail2() throws Exception { + doCheckPutTooLongStringKeyFail2(STR_ORG_CACHE_NAME); } /** * @throws Exception If failed. */ - public void testPutTooLongKeyFail3() throws Exception { - doCheckPutTooLongKeyFail2(STR_ORG_WITH_FIELDS_CACHE_NAME); + @Test + public void testPutTooLongStringKeyFail3() throws Exception { + doCheckPutTooLongStringKeyFail2(STR_ORG_WITH_FIELDS_CACHE_NAME); } - - private void doCheckPutTooLongKeyFail2(String cacheName) { + /** + * @throws Exception If failed. + */ + private void doCheckPutTooLongStringKeyFail2(String cacheName) { IgniteCache cache = jcache(0, cacheName); T2 val = new T2<>("123456", new Organization("1")); @@ -186,7 +306,8 @@ private void doCheckPutTooLongKeyFail2(String cacheName) { /** * @throws Exception If failed. */ - public void testPutLongValue() throws Exception { + @Test + public void testPutLongStringValue() throws Exception { IgniteCache cache = jcache(0, STR_CACHE_NAME); T2 val = new T2<>("3", "12345"); @@ -201,7 +322,8 @@ public void testPutLongValue() throws Exception { /** * @throws Exception If failed. */ - public void testPutLongKey() throws Exception { + @Test + public void testPutLongStringKey() throws Exception { IgniteCache cache = jcache(0, STR_CACHE_NAME); T2 val = new T2<>("12345", "2"); @@ -214,7 +336,8 @@ public void testPutLongKey() throws Exception { /** * @throws Exception If failed. */ - public void testPutLongValueField() throws Exception { + @Test + public void testPutLongStringValueField() throws Exception { IgniteCache cache = jcache(0, OBJ_CACHE_NAME); T2 val = new T2<>(new Organization("3"), new Address("12345")); @@ -229,7 +352,8 @@ public void testPutLongValueField() throws Exception { /** * @throws Exception If failed. 
*/ - public void testPutLongKeyField() throws Exception { + @Test + public void testPutLongStringKeyField() throws Exception { IgniteCache cache = jcache(0, OBJ_CACHE_NAME); T2 val = new T2<>(new Organization("12345"), new Address("2")); @@ -242,18 +366,23 @@ public void testPutLongKeyField() throws Exception { /** * @throws Exception If failed. */ - public void testPutLongKey2() throws Exception { - doCheckPutLongKey2(STR_ORG_CACHE_NAME); + @Test + public void testPutLongStringKey2() throws Exception { + doCheckPutLongStringKey2(STR_ORG_CACHE_NAME); } /** * @throws Exception If failed. */ - public void testPutLongKey3() throws Exception { - doCheckPutLongKey2(STR_ORG_WITH_FIELDS_CACHE_NAME); + @Test + public void testPutLongStringKey3() throws Exception { + doCheckPutLongStringKey2(STR_ORG_WITH_FIELDS_CACHE_NAME); } - private void doCheckPutLongKey2(String cacheName) { + /** + * @throws Exception If failed. + */ + private void doCheckPutLongStringKey2(String cacheName) { IgniteCache cache = jcache(0, cacheName); T2 key2 = new T2<>("12345", new Organization("1")); @@ -263,6 +392,239 @@ private void doCheckPutLongKey2(String cacheName) { checkPutOps(shouldSucceed, cache, key2); } + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalValueFail() throws Exception { + IgniteCache cache = jcache(0, DEC_CACHE_NAME_FOR_PREC); + + T2 val = new T2<>(d(12.36), d(123.45)); + + checkPutAll(shouldFail, cache, new T2<>(d(12.34), d(12.34)), val); + + checkPutOps(shouldFail, cache, val); + + checkReplaceOps(shouldFail, cache, val, d(12.34)); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalKeyFail() throws Exception { + IgniteCache cache = jcache(0, DEC_CACHE_NAME_FOR_PREC); + + T2 val = new T2<>(d(123.45), d(12.34)); + + checkPutAll(shouldFail, cache, new T2<>(d(12.35), d(12.34)), val); + + checkPutOps(shouldFail, cache, val); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testPutTooLongDecimalKeyFail2() throws Exception { + IgniteCache cache = jcache(0, DEC_EMPL_CACHE_NAME_FOR_PREC); + + T2 val = new T2<>(d(123.45), new Employee(d(12.34))); + + checkPutAll(shouldFail, cache, new T2<>(d(12.35), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalValueFieldFail() throws Exception { + IgniteCache cache = jcache(0, OBJ_CACHE_NAME_FOR_PREC); + + T2 val = new T2<>(new DecOrganization(d(12.36)), new Employee(d(123.45))); + + checkPutAll(shouldFail, cache, new T2<>(new DecOrganization(d(12.34)), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + + checkReplaceOps(shouldFail, cache, val, new Employee(d(12.34))); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalValueFieldFail2() throws Exception { + IgniteCache cache = jcache(0, DEC_EMPL_CACHE_NAME_FOR_PREC); + + T2 val = new T2<>(d(12.36), new Employee(d(123.45))); + + checkPutAll(shouldFail, cache, new T2<>(d(12.34), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + + checkReplaceOps(shouldFail, cache, val, new Employee(d(12.34))); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalKeyFieldFail() throws Exception { + IgniteCache cache = jcache(0, OBJ_CACHE_NAME_FOR_PREC); + + T2 val = new T2<>(new DecOrganization(d(123.45)), new Employee(d(12.34))); + + checkPutAll(shouldFail, cache, new T2<>(new DecOrganization(d(12.35)), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testPutTooLongDecimalValueScaleFail() throws Exception { + IgniteCache cache = jcache(0, DEC_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(d(12.36), d(3.456)); + + checkPutAll(shouldFail, cache, new T2<>(d(12.34), d(12.34)), val); + + checkPutOps(shouldFail, cache, val); + + checkReplaceOps(shouldFail, cache, val, d(12.34)); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalKeyScaleFail() throws Exception { + IgniteCache cache = jcache(0, DEC_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(d(3.456), d(12.34)); + + checkPutAll(shouldFail, cache, new T2<>(d(12.35), d(12.34)), val); + + checkPutOps(shouldFail, cache, val); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalKeyScaleFail2() throws Exception { + IgniteCache cache = jcache(0, DEC_EMPL_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(d(3.456), new Employee(d(12.34))); + + checkPutAll(shouldFail, cache, new T2<>(d(12.35), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalValueFieldScaleFail() throws Exception { + IgniteCache cache = jcache(0, OBJ_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(new DecOrganization(d(12.36)), new Employee(d(3.456))); + + checkPutAll(shouldFail, cache, new T2<>(new DecOrganization(d(12.34)), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + + checkReplaceOps(shouldFail, cache, val, new Employee(d(12.34))); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testPutTooLongDecimalValueFieldScaleFail2() throws Exception { + IgniteCache cache = jcache(0, DEC_EMPL_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(d(12.36), new Employee(d(3.456))); + + checkPutAll(shouldFail, cache, new T2<>(d(12.34), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + + checkReplaceOps(shouldFail, cache, val, new Employee(d(12.34))); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutTooLongDecimalKeyFieldScaleFail() throws Exception { + IgniteCache cache = jcache(0, OBJ_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(new DecOrganization(d(3.456)), new Employee(d(12.34))); + + checkPutAll(shouldFail, cache, new T2<>(new DecOrganization(d(12.35)), new Employee(d(12.34))), val); + + checkPutOps(shouldFail, cache, val); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutValidDecimalKeyAndValue() throws Exception { + IgniteCache cache = jcache(0, DEC_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(d(12.37), d(12.34)); + + checkPutAll(shouldSucceed, cache, new T2<>(d(12.36), d(12.34)), val); + + checkPutOps(shouldSucceed, cache, val); + + checkReplaceOps(shouldSucceed, cache, val, d(12.34)); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testPutValidDecimalKeyAndValueField() throws Exception { + IgniteCache cache = jcache(0, OBJ_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(new DecOrganization(d(12.37)), new Employee(d(12.34))); + + checkPutAll(shouldSucceed, cache, new T2<>(new DecOrganization(d(12.36)), new Employee(d(12.34))), val); + + checkPutOps(shouldSucceed, cache, val); + + checkReplaceOps(shouldSucceed, cache, val, new Employee(d(12.34))); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testPutValidDecimalKeyAndValueField2() throws Exception { + IgniteCache cache = jcache(0, DEC_EMPL_CACHE_NAME_FOR_SCALE); + + T2 val = new T2<>(d(12.37), new Employee(d(12.34))); + + checkPutAll(shouldSucceed, cache, new T2<>(d(12.36), new Employee(d(12.34))), val); + + checkPutOps(shouldSucceed, cache, val); + + checkReplaceOps(shouldSucceed, cache, val, new Employee(d(12.34))); + } + + /** */ + private BigDecimal d(double val) { + return BigDecimal.valueOf(val); + } + /** */ private void checkReplaceOps(Consumer checker, IgniteCache cache, T2 val, V okVal) { K k = val.get1(); @@ -352,9 +714,11 @@ protected CacheConfiguration cacheConfiguration(QueryEntity qryEntity) { cache.setAtomicityMode(atomicityMode()); cache.setBackups(1); cache.setWriteSynchronizationMode(FULL_SYNC); - cache.setQueryEntities(Collections.singletonList(qryEntity)); + if (TRANSACTIONAL_SNAPSHOT.equals(atomicityMode())) + cache.setNearConfiguration(null); + return cache; } @@ -369,7 +733,6 @@ protected CacheConfiguration cacheConfiguration(QueryEntity qryEntity) { } /** */ - @SuppressWarnings("UnusedDeclaration") private static class Organization implements Serializable { /** Name. */ private final String name; @@ -383,7 +746,6 @@ private Organization(String name) { } /** */ - @SuppressWarnings("UnusedDeclaration") private static class Address implements Serializable { /** Name. */ private final String address; @@ -395,4 +757,32 @@ private Address(String address) { this.address = address; } } + + /** */ + @SuppressWarnings("UnusedDeclaration") + private static class DecOrganization implements Serializable { + /** Id. */ + private final BigDecimal id; + + /** + * @param id Id. + */ + private DecOrganization(BigDecimal id) { + this.id = id; + } + } + + /** */ + @SuppressWarnings("UnusedDeclaration") + private static class Employee implements Serializable { + /** Salary. */ + private final BigDecimal salary; + + /** + * @param salary Salary. 
+ */ + private Employee(BigDecimal salary) { + this.salary = salary; + } + } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedTransactionalColumnConstraintsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedTransactionalColumnConstraintsTest.java index cd5c979e9636f..c73c04ec24fee 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedTransactionalColumnConstraintsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedTransactionalColumnConstraintsTest.java @@ -21,10 +21,10 @@ import org.jetbrains.annotations.NotNull; /** */ -public class IgniteCachePartitionedTransactionalColumnConstraintsTest +public class IgniteCachePartitionedTransactionalColumnConstraintsTest extends IgniteCachePartitionedAtomicColumnConstraintsTest { /** {@inheritDoc} */ - @NotNull protected CacheAtomicityMode atomicityMode() { + @Override @NotNull protected CacheAtomicityMode atomicityMode() { return CacheAtomicityMode.TRANSACTIONAL; } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest.java new file mode 100644 index 0000000000000..a636a3c11cb1d --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest.java @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.sql; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.jetbrains.annotations.NotNull; + +/** */ +public class IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest + extends IgniteCachePartitionedAtomicColumnConstraintsTest { + /** {@inheritDoc} */ + @NotNull @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** */ + @Override public void testPutTooLongStringValueFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringKeyFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringValueFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringKeyFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringKeyFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringKeyFail3() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** 
*/ + @Override public void testPutTooLongDecimalKeyFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyScaleFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldScaleFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyFieldScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest.java new file mode 100644 index 0000000000000..2a7f6b5375aeb --- /dev/null +++ 
b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest.java @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.internal.processors.sql; + +import org.apache.ignite.cache.CacheAtomicityMode; +import org.jetbrains.annotations.NotNull; + +/** */ +public class IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest + extends IgniteCacheReplicatedAtomicColumnConstraintsTest { + /** {@inheritDoc} */ + @NotNull @Override protected CacheAtomicityMode atomicityMode() { + return CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + } + + /** */ + @Override public void testPutTooLongStringValueFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringKeyFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringValueFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringKeyFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void 
testPutTooLongStringKeyFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongStringKeyFail3() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyFieldFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyScaleFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalValueFieldScaleFail2() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } + + /** */ + @Override public void testPutTooLongDecimalKeyFieldScaleFail() { + fail("https://issues.apache.org/jira/browse/IGNITE-10066"); + } +} diff --git 
a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteSQLColumnConstraintsTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteSQLColumnConstraintsTest.java index 762743b636f59..0080bf0171a8c 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteSQLColumnConstraintsTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteSQLColumnConstraintsTest.java @@ -17,127 +17,365 @@ package org.apache.ignite.internal.processors.sql; +import java.math.BigDecimal; import java.util.List; +import java.util.Objects; import org.apache.ignite.cache.query.SqlFieldsQuery; -import org.apache.ignite.internal.processors.odbc.SqlStateCode; import org.apache.ignite.internal.processors.query.IgniteSQLException; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.internal.processors.odbc.SqlStateCode.CONSTRAINT_VIOLATION; +import static org.apache.ignite.internal.processors.odbc.SqlStateCode.INTERNAL_ERROR; /** */ +@RunWith(JUnit4.class) public class IgniteSQLColumnConstraintsTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected void beforeTestsStarted() throws Exception { startGrid(0); - execSQL("CREATE TABLE varchar_table(id INT PRIMARY KEY, str VARCHAR(5))"); + String mvccQry = mvccEnabled() ? 
" WITH \"atomicity=transactional_snapshot\"" : ""; + + runSQL("CREATE TABLE varchar_table(id INT PRIMARY KEY, str VARCHAR(5))" + mvccQry); execSQL("INSERT INTO varchar_table VALUES(?, ?)", 1, "12345"); - execSQL("CREATE TABLE char_table(id INT PRIMARY KEY, str CHAR(5))"); + checkSQLResults("SELECT * FROM varchar_table WHERE id = 1", 1, "12345"); + + runSQL("CREATE TABLE decimal_table(id INT PRIMARY KEY, val DECIMAL(4, 2))" + mvccQry); + + execSQL("INSERT INTO decimal_table VALUES(?, ?)", 1, 12.34); + + checkSQLResults("SELECT * FROM decimal_table WHERE id = 1", 1, BigDecimal.valueOf(12.34)); + + runSQL("CREATE TABLE char_table(id INT PRIMARY KEY, str CHAR(5))" + mvccQry); execSQL("INSERT INTO char_table VALUES(?, ?)", 1, "12345"); + + checkSQLResults("SELECT * FROM char_table WHERE id = 1", 1, "12345"); + + runSQL("CREATE TABLE decimal_table_4(id INT PRIMARY KEY, field DECIMAL(4, 2))" + mvccQry); + + runSQL("CREATE TABLE char_table_2(id INT PRIMARY KEY, field INTEGER)" + mvccQry); + + runSQL("CREATE TABLE decimal_table_2(id INT PRIMARY KEY, field INTEGER)" + mvccQry); + + runSQL("CREATE TABLE char_table_3(id INT PRIMARY KEY, field CHAR(5), field2 INTEGER)" + mvccQry); + + runSQL("CREATE TABLE decimal_table_3(id INT PRIMARY KEY, field DECIMAL(4, 2), field2 INTEGER)" + mvccQry); + + runSQL("CREATE TABLE char_table_4(id INT PRIMARY KEY, field CHAR(5))" + mvccQry); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testCreateTableWithTooLongCharDefault() throws Exception { + checkSQLThrows("CREATE TABLE too_long_default(id INT PRIMARY KEY, str CHAR(5) DEFAULT '123456')", + INTERNAL_ERROR); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testCreateTableWithTooLongScaleDecimalDefault() throws Exception { + checkSQLThrows("CREATE TABLE too_long_decimal_default_scale(id INT PRIMARY KEY, val DECIMAL(4, 2)" + + " DEFAULT 1.345)", INTERNAL_ERROR); + } + + @Test + public void testCreateTableWithTooLongDecimalDefault() throws Exception { + checkSQLThrows("CREATE TABLE too_long_decimal_default(id INT PRIMARY KEY, val DECIMAL(4, 2)" + + " DEFAULT 123.45)", INTERNAL_ERROR); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testInsertTooLongDecimal() throws Exception { + checkSQLThrows("INSERT INTO decimal_table VALUES(?, ?)", CONSTRAINT_VIOLATION, 2, 123.45); + + assertTrue(execSQL("SELECT * FROM decimal_table WHERE id = ?", 2).isEmpty()); + + checkSQLThrows("UPDATE decimal_table SET val = ? WHERE id = ?", CONSTRAINT_VIOLATION, 123.45, 1); + + checkSQLResults("SELECT * FROM decimal_table WHERE id = 1", 1, BigDecimal.valueOf(12.34)); + + checkSQLThrows("MERGE INTO decimal_table(id, val) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, 123.45); + + checkSQLResults("SELECT * FROM decimal_table WHERE id = 1", 1, BigDecimal.valueOf(12.34)); } /** * @throws Exception If failed. */ - public void testCreateTableWithTooLongDefault() throws Exception { - checkSQLThrows("CREATE TABLE too_long_default(id INT PRIMARY KEY, str CHAR(5) DEFAULT '123456')"); + @Test + public void testInsertTooLongScaleDecimal() throws Exception { + checkSQLThrows("INSERT INTO decimal_table VALUES(?, ?)", CONSTRAINT_VIOLATION, 3, 1.234); + + assertTrue(execSQL("SELECT * FROM decimal_table WHERE id = ?", 3).isEmpty()); + + checkSQLThrows("UPDATE decimal_table SET val = ? 
WHERE id = ?", CONSTRAINT_VIOLATION, 1.234, 1); + + checkSQLResults("SELECT * FROM decimal_table WHERE id = 1", 1, BigDecimal.valueOf(12.34)); + + checkSQLThrows("MERGE INTO decimal_table(id, val) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, 1.234); + + checkSQLResults("SELECT * FROM decimal_table WHERE id = 1", 1, BigDecimal.valueOf(12.34)); } /** * @throws Exception If failed. */ + @Test public void testInsertTooLongVarchar() throws Exception { - checkSQLThrows("INSERT INTO varchar_table VALUES(?, ?)", 2, "123456"); + checkSQLThrows("INSERT INTO varchar_table VALUES(?, ?)", CONSTRAINT_VIOLATION, 2, "123456"); + + assertTrue(execSQL("SELECT * FROM varchar_table WHERE id = ?", 2).isEmpty()); + + checkSQLThrows("UPDATE varchar_table SET str = ? WHERE id = ?", CONSTRAINT_VIOLATION, "123456", 1); + + checkSQLResults("SELECT * FROM varchar_table WHERE id = 1", 1, "12345"); - checkSQLThrows("UPDATE varchar_table SET str = ? WHERE id = ?", "123456", 1); + checkSQLThrows("MERGE INTO varchar_table(id, str) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, "123456"); - checkSQLThrows("MERGE INTO varchar_table(id, str) VALUES(?, ?)", 1, "123456"); + checkSQLResults("SELECT * FROM varchar_table WHERE id = 1", 1, "12345"); } /** * @throws Exception If failed. */ + @Test public void testInsertTooLongChar() throws Exception { - checkSQLThrows("INSERT INTO char_table VALUES(?, ?)", 2, "123456"); + checkSQLThrows("INSERT INTO char_table VALUES(?, ?)", CONSTRAINT_VIOLATION, 2, "123456"); + + assertTrue(execSQL("SELECT * FROM char_table WHERE id = ?", 2).isEmpty()); + + checkSQLThrows("UPDATE char_table SET str = ? WHERE id = ?", CONSTRAINT_VIOLATION, "123456", 1); + + checkSQLResults("SELECT * FROM char_table WHERE id = 1", 1, "12345"); - checkSQLThrows("UPDATE char_table SET str = ? 
WHERE id = ?", "123456", 1); + checkSQLThrows("MERGE INTO char_table(id, str) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, "123456"); - checkSQLThrows("MERGE INTO char_table(id, str) VALUES(?, ?)", 1, "123456"); + checkSQLResults("SELECT * FROM char_table WHERE id = 1", 1, "12345"); } /** * @throws Exception If failed. */ - public void testConstraintsAfterAlterTable() throws Exception { - execSQL("CREATE TABLE char_table_2(id INT PRIMARY KEY, field INTEGER)"); - + @Test + public void testCharConstraintsAfterAlterTable() throws Exception { execSQL("ALTER TABLE char_table_2 ADD COLUMN str CHAR(5) NOT NULL"); - + execSQL("INSERT INTO char_table_2(id, str) VALUES(?, ?)", 1, "1"); - checkSQLThrows("INSERT INTO char_table_2(id, str) VALUES(?, ?)", 2, "123456"); + checkSQLResults("SELECT * FROM char_table_2 WHERE id = 1", 1, null, "1"); + + checkSQLThrows("INSERT INTO char_table_2(id, str) VALUES(?, ?)", CONSTRAINT_VIOLATION, 2, "123456"); + + assertTrue(execSQL("SELECT * FROM char_table_2 WHERE id = ?", 2).isEmpty()); - checkSQLThrows("UPDATE char_table_2 SET str = ? WHERE id = ?", "123456", 1); + checkSQLThrows("UPDATE char_table_2 SET str = ? WHERE id = ?", CONSTRAINT_VIOLATION, "123456", 1); - checkSQLThrows("MERGE INTO char_table_2(id, str) VALUES(?, ?)", 1, "123456"); + checkSQLResults("SELECT * FROM char_table_2 WHERE id = 1", 1, null, "1"); + + checkSQLThrows("MERGE INTO char_table_2(id, str) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, "123456"); + + checkSQLResults("SELECT * FROM char_table_2 WHERE id = 1", 1, null, "1"); } /** * @throws Exception If failed.
*/ - public void testDropColumnWithConstraint() throws Exception { - execSQL("CREATE TABLE char_table_3(id INT PRIMARY KEY, field CHAR(5), field2 INTEGER)"); + @Test + public void testDecimalConstraintsAfterAlterTable() throws Exception { + execSQL("ALTER TABLE decimal_table_2 ADD COLUMN val DECIMAL(4, 2) NOT NULL"); + + execSQL("INSERT INTO decimal_table_2(id, val) VALUES(?, ?)", 1, 12.34); + + checkSQLResults("SELECT * FROM decimal_table_2 WHERE id = 1", 1, null, BigDecimal.valueOf(12.34)); + + checkSQLThrows("INSERT INTO decimal_table_2(id, val) VALUES(?, ?)", CONSTRAINT_VIOLATION, 2, 1234.56); + + assertTrue(execSQL("SELECT * FROM decimal_table_2 WHERE id = ?", 2).isEmpty()); + + checkSQLThrows("UPDATE decimal_table_2 SET val = ? WHERE id = ?", CONSTRAINT_VIOLATION, 1234.56, 1); + + checkSQLResults("SELECT * FROM decimal_table_2 WHERE id = 1", 1, null, BigDecimal.valueOf(12.34)); + + checkSQLThrows("MERGE INTO decimal_table_2(id, val) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, 12345.6); + + checkSQLResults("SELECT * FROM decimal_table_2 WHERE id = 1", 1, null, BigDecimal.valueOf(12.34)); + + checkSQLThrows("INSERT INTO decimal_table_2(id, val) VALUES(?, ?)", CONSTRAINT_VIOLATION, 3, 1.234); + checkSQLResults("SELECT * FROM decimal_table_2 WHERE id = 1", 1, null, BigDecimal.valueOf(12.34)); + + checkSQLThrows("UPDATE decimal_table_2 SET val = ? WHERE id = ?", CONSTRAINT_VIOLATION, 1.234, 1); + + checkSQLResults("SELECT * FROM decimal_table_2 WHERE id = 1", 1, null, BigDecimal.valueOf(12.34)); + + checkSQLThrows("MERGE INTO decimal_table_2(id, val) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, 1.234); + + checkSQLResults("SELECT * FROM decimal_table_2 WHERE id = 1", 1, null, BigDecimal.valueOf(12.34)); + } + + /** + * @throws Exception If failed. 
+ */ + @Test + public void testCharDropColumnWithConstraint() throws Exception { execSQL("INSERT INTO char_table_3(id, field, field2) VALUES(?, ?, ?)", 1, "12345", 1); - checkSQLThrows("INSERT INTO char_table_3(id, field, field2) VALUES(?, ?, ?)", 2, "123456", 1); + checkSQLResults("SELECT * FROM char_table_3 WHERE id = 1", 1, "12345", 1); + + checkSQLThrows("INSERT INTO char_table_3(id, field, field2) VALUES(?, ?, ?)", CONSTRAINT_VIOLATION, + 2, "123456", 1); + + assertTrue(execSQL("SELECT * FROM char_table_3 WHERE id = ?", 2).isEmpty()); execSQL("ALTER TABLE char_table_3 DROP COLUMN field"); execSQL("INSERT INTO char_table_3(id, field2) VALUES(?, ?)", 3, 3); + + checkSQLResults("SELECT * FROM char_table_3 WHERE id = 3", 3, 3); } - public void testSqlState() throws Exception { - execSQL("CREATE TABLE char_table_4(id INT PRIMARY KEY, field CHAR(5))"); + /** + * @throws Exception If failed. + */ + @Test + public void testDecimalDropColumnWithConstraint() throws Exception { + execSQL("INSERT INTO decimal_table_3(id, field, field2) VALUES(?, ?, ?)", 1, 12.34, 1); + + checkSQLResults("SELECT * FROM decimal_table_3 WHERE id = 1", 1, BigDecimal.valueOf(12.34), 1); + + checkSQLThrows("INSERT INTO decimal_table_3(id, field, field2) VALUES(?, ?, ?)", CONSTRAINT_VIOLATION, + 2, 12.3456, 1); + + assertTrue(execSQL("SELECT * FROM decimal_table_3 WHERE id = ?", 2).isEmpty()); + + execSQL("ALTER TABLE decimal_table_3 DROP COLUMN field"); - IgniteSQLException err = (IgniteSQLException) - checkSQLThrows("INSERT INTO char_table_4(id, field) VALUES(?, ?)", 1, "123456"); + execSQL("INSERT INTO decimal_table_3(id, field2) VALUES(?, ?)", 3, 3); - assertEquals(err.sqlState(), CONSTRAINT_VIOLATION); + checkSQLResults("SELECT * FROM decimal_table_3 WHERE id = 3", 3, 3); + } + + /** + * @throws Exception If failed.
+ */ + @Test + public void testCharSqlState() throws Exception { + checkSQLThrows("INSERT INTO char_table_4(id, field) VALUES(?, ?)", CONSTRAINT_VIOLATION, 1, "123456"); + + assertTrue(execSQL("SELECT * FROM char_table_4 WHERE id = ?", 1).isEmpty()); execSQL("INSERT INTO char_table_4(id, field) VALUES(?, ?)", 2, "12345"); - err = (IgniteSQLException) - checkSQLThrows("UPDATE char_table_4 SET field = ? WHERE id = ?", "123456", 2); + checkSQLResults("SELECT * FROM char_table_4 WHERE id = 2", 2, "12345"); + + checkSQLThrows("UPDATE char_table_4 SET field = ? WHERE id = ?", CONSTRAINT_VIOLATION, "123456", 2); + + checkSQLResults("SELECT * FROM char_table_4 WHERE id = 2", 2, "12345"); + + checkSQLThrows("MERGE INTO char_table_4(id, field) VALUES(?, ?)", CONSTRAINT_VIOLATION, 2, "123456"); + + checkSQLResults("SELECT * FROM char_table_4 WHERE id = 2", 2, "12345"); + } + + /** + * @throws Exception If failed. + */ + @Test + public void testDecimalSqlState() throws Exception { + checkSQLThrows("INSERT INTO decimal_table_4 VALUES(?, ?)", CONSTRAINT_VIOLATION, + 1, BigDecimal.valueOf(1234.56)); + + assertTrue(execSQL("SELECT * FROM decimal_table_4 WHERE id = ?", 1).isEmpty()); + + checkSQLThrows("INSERT INTO decimal_table_4 VALUES(?, ?)", CONSTRAINT_VIOLATION, + 1, BigDecimal.valueOf(1.345)); + + assertTrue(execSQL("SELECT * FROM decimal_table_4 WHERE id = ?", 1).isEmpty()); + + execSQL("INSERT INTO decimal_table_4 (id, field) VALUES(?, ?)", 2, 12.34); - assertEquals(err.sqlState(), CONSTRAINT_VIOLATION); + checkSQLResults("SELECT * FROM decimal_table_4 WHERE id = 2", 2, BigDecimal.valueOf(12.34)); - err = (IgniteSQLException) - checkSQLThrows("MERGE INTO char_table_4(id, field) VALUES(?, ?)", 2, "123456"); + checkSQLThrows("UPDATE decimal_table_4 SET field = ? 
WHERE id = ?", CONSTRAINT_VIOLATION, + BigDecimal.valueOf(1234.56), 2); - assertEquals(err.sqlState(), CONSTRAINT_VIOLATION); + checkSQLResults("SELECT * FROM decimal_table_4 WHERE id = 2", 2, BigDecimal.valueOf(12.34)); + + checkSQLThrows("MERGE INTO decimal_table_4(id, field) VALUES(?, ?)", CONSTRAINT_VIOLATION, + 2, BigDecimal.valueOf(1234.56)); + + checkSQLResults("SELECT * FROM decimal_table_4 WHERE id = 2", 2, BigDecimal.valueOf(12.34)); + + checkSQLThrows("UPDATE decimal_table_4 SET field = ? WHERE id = ?", CONSTRAINT_VIOLATION, + BigDecimal.valueOf(1.345), 2); + + checkSQLResults("SELECT * FROM decimal_table_4 WHERE id = 2", 2, BigDecimal.valueOf(12.34)); + + checkSQLThrows("MERGE INTO decimal_table_4(id, field) VALUES(?, ?)", CONSTRAINT_VIOLATION, + 2, BigDecimal.valueOf(1.345)); + + checkSQLResults("SELECT * FROM decimal_table_4 WHERE id = 2", 2, BigDecimal.valueOf(12.34)); } /** */ - private Throwable checkSQLThrows(String sql, Object... args) { - return GridTestUtils.assertThrowsWithCause(() -> { + protected void checkSQLThrows(String sql, String sqlStateCode, Object... args) { + IgniteSQLException err = (IgniteSQLException)GridTestUtils.assertThrowsWithCause(() -> { execSQL(sql, args); return 0; }, IgniteSQLException.class); + + assertEquals(sqlStateCode, err.sqlState()); } /** */ - private List execSQL(String sql, Object... args) { + protected List execSQL(String sql, Object... args) { + return runSQL(sql, args); + } + + /** */ + protected List runSQL(String sql, Object... args) { SqlFieldsQuery qry = new SqlFieldsQuery(sql) .setArgs(args); return grid(0).context().query().querySqlFields(qry, true).getAll(); } + + /** */ + protected void checkSQLResults(String sql, Object... 
args) { + List rows = execSQL(sql); + + assertNotNull(rows); + + assertFalse(rows.isEmpty()); + + assertEquals(1, rows.size()); + + List row = (List)rows.get(0); + + assertEquals(args.length, row.size()); + + for (int i = 0; i < args.length; i++) + assertTrue(args[i] + " != " + row.get(i), Objects.equals(args[i], row.get(i))); + } + + /** */ + protected boolean mvccEnabled() { + return false; + } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteTransactionSQLColumnConstraintTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteTransactionSQLColumnConstraintTest.java new file mode 100644 index 0000000000000..5239e1fb36165 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/IgniteTransactionSQLColumnConstraintTest.java @@ -0,0 +1,71 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements.  See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License.  You may obtain a copy of the License at + * + *      http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.ignite.internal.processors.sql; + +import java.util.List; +import org.apache.ignite.internal.processors.query.IgniteSQLException; +import org.apache.ignite.testframework.GridTestUtils; + +/** + */ +public class IgniteTransactionSQLColumnConstraintTest extends IgniteSQLColumnConstraintsTest { + /** {@inheritDoc} */ + @Override protected void checkSQLThrows(String sql, String sqlStateCode, Object... args) { + runSQL("BEGIN TRANSACTION"); + + IgniteSQLException err = (IgniteSQLException)GridTestUtils.assertThrowsWithCause(() -> { + runSQL(sql, args); + + return 0; + }, IgniteSQLException.class); + + runSQL("ROLLBACK TRANSACTION"); + + assertEquals(sqlStateCode, err.sqlState()); + } + + /** {@inheritDoc} */ + @Override protected List execSQL(String sql, Object... args) { + runSQL("BEGIN TRANSACTION"); + + List res = runSQL(sql, args); + + runSQL("COMMIT TRANSACTION"); + + return res; + } + + /** + * This test is ignored because the drop column operation is unsupported for MVCC tables. + */ + @Override public void testCharDropColumnWithConstraint() { + // No-op. + } + + /** + * This test is ignored because the drop column operation is unsupported for MVCC tables. + */ + @Override public void testDecimalDropColumnWithConstraint() { + // No-op. 
+ } + + /** {@inheritDoc} */ + @Override protected boolean mvccEnabled() { + return true; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/SqlConnectorConfigurationValidationSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/SqlConnectorConfigurationValidationSelfTest.java index 4d1b333739d84..19bc925a4a198 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/SqlConnectorConfigurationValidationSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/internal/processors/sql/SqlConnectorConfigurationValidationSelfTest.java @@ -36,11 +36,15 @@ import java.sql.Statement; import java.util.concurrent.Callable; import java.util.concurrent.atomic.AtomicInteger; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * SQL connector configuration validation tests. */ @SuppressWarnings("deprecation") +@RunWith(JUnit4.class) public class SqlConnectorConfigurationValidationSelfTest extends GridCommonAbstractTest { /** Node index generator. */ private static final AtomicInteger NODE_IDX_GEN = new AtomicInteger(); @@ -58,6 +62,7 @@ public class SqlConnectorConfigurationValidationSelfTest extends GridCommonAbstr * * @throws Exception If failed. */ + @Test public void testDefault() throws Exception { check(new SqlConnectorConfiguration(), true); assertJdbc(null, SqlConnectorConfiguration.DFLT_PORT); @@ -68,6 +73,7 @@ public void testDefault() throws Exception { * * @throws Exception If failed. */ + @Test public void testHost() throws Exception { check(new SqlConnectorConfiguration().setHost("126.0.0.1"), false); @@ -84,6 +90,7 @@ public void testHost() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testPort() throws Exception { check(new SqlConnectorConfiguration().setPort(-1), false); check(new SqlConnectorConfiguration().setPort(0), false); @@ -103,6 +110,7 @@ public void testPort() throws Exception { * * @throws Exception If failed. */ + @Test public void testPortRange() throws Exception { check(new SqlConnectorConfiguration().setPortRange(-1), false); @@ -118,6 +126,7 @@ public void testPortRange() throws Exception { * * @throws Exception If failed. */ + @Test public void testSocketBuffers() throws Exception { check(new SqlConnectorConfiguration().setSocketSendBufferSize(-4 * 1024), false); check(new SqlConnectorConfiguration().setSocketReceiveBufferSize(-4 * 1024), false); @@ -134,6 +143,7 @@ public void testSocketBuffers() throws Exception { * * @throws Exception If failed. */ + @Test public void testMaxOpenCusrorsPerConnection() throws Exception { check(new SqlConnectorConfiguration().setMaxOpenCursorsPerConnection(-1), false); @@ -149,6 +159,7 @@ public void testMaxOpenCusrorsPerConnection() throws Exception { * * @throws Exception If failed. 
*/ + @Test public void testThreadPoolSize() throws Exception { check(new SqlConnectorConfiguration().setThreadPoolSize(0), false); check(new SqlConnectorConfiguration().setThreadPoolSize(-1), false); diff --git a/modules/indexing/src/test/java/org/apache/ignite/spi/communication/tcp/GridOrderedMessageCancelSelfTest.java b/modules/indexing/src/test/java/org/apache/ignite/spi/communication/tcp/GridOrderedMessageCancelSelfTest.java index cfb56b0d4ea77..11e2e596d1d5d 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/spi/communication/tcp/GridOrderedMessageCancelSelfTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/spi/communication/tcp/GridOrderedMessageCancelSelfTest.java @@ -41,11 +41,11 @@ import org.apache.ignite.lang.IgniteFuture; import org.apache.ignite.lang.IgniteRunnable; import org.apache.ignite.plugin.extensions.communication.Message; -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; -import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder; -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.apache.ignite.cache.CacheMode.PARTITIONED; @@ -54,10 +54,8 @@ /** * */ +@RunWith(JUnit4.class) public class GridOrderedMessageCancelSelfTest extends GridCommonAbstractTest { - /** IP finder. */ - private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true); - /** Cancel latch. 
*/ private static CountDownLatch cancelLatch; @@ -80,12 +78,6 @@ public class GridOrderedMessageCancelSelfTest extends GridCommonAbstractTest { cfg.setCommunicationSpi(new CommunicationSpi()); - TcpDiscoverySpi disco = new TcpDiscoverySpi(); - - disco.setIpFinder(IP_FINDER); - - cfg.setDiscoverySpi(disco); - return cfg; } @@ -106,6 +98,7 @@ public class GridOrderedMessageCancelSelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testTask() throws Exception { Map map = U.field(((IgniteKernal)grid(0)).context().io(), "msgSetMap"); @@ -119,6 +112,7 @@ public void testTask() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTaskException() throws Exception { Map map = U.field(((IgniteKernal)grid(0)).context().io(), "msgSetMap"); @@ -236,4 +230,4 @@ private static class FailTask extends ComputeTaskSplitAdapter { return null; } } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/sqltests/BaseSqlTest.java b/modules/indexing/src/test/java/org/apache/ignite/sqltests/BaseSqlTest.java index 5827db1eccafc..034d363364f18 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/sqltests/BaseSqlTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/sqltests/BaseSqlTest.java @@ -58,10 +58,14 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Test base for test for sql features. */ +@RunWith(JUnit4.class) public class BaseSqlTest extends GridCommonAbstractTest { /** Number of all employees. */ public final static long EMP_CNT = 1000L; @@ -622,6 +626,7 @@ protected void testAllNodes(Consumer consumer) { /** * Check basic SELECT * query. 
*/ + @Test public void testBasicSelect() { testAllNodes(node -> { Result emps = executeFrom("SELECT * FROM Employee", node); @@ -637,6 +642,7 @@ public void testBasicSelect() { /** * Check SELECT query with projection (fields). */ + @Test public void testSelectFields() { testAllNodes(node -> { Result res = executeFrom("SELECT firstName, id, age FROM Employee;", node); @@ -654,6 +660,7 @@ public void testSelectFields() { /** * Check basic BETWEEN operator usage. */ + @Test public void testSelectBetween() { testAllNodes(node -> { Result emps = executeFrom("SELECT * FROM Employee e WHERE e.id BETWEEN 101 and 200", node); @@ -678,6 +685,7 @@ public void testSelectBetween() { /** * Check BETWEEN operator filters out all the result (empty result set is expected). */ + @Test public void testEmptyBetween() { testAllNodes(node -> { Result emps = executeFrom("SELECT * FROM Employee e WHERE e.id BETWEEN 200 AND 101", node); @@ -689,6 +697,7 @@ public void testEmptyBetween() { /** * Check SELECT IN with fixed values. */ + @Test public void testSelectInStatic() { testAllNodes(node -> { Result actual = executeFrom("SELECT age FROM Employee WHERE id IN (1, 256, 42)", node); @@ -708,6 +717,7 @@ public void testSelectInStatic() { /** * Check SELECT IN with simple subquery values. */ + @Test public void testSelectInSubquery() { testAllNodes(node -> { Result actual = executeFrom("SELECT lastName FROM Employee WHERE id in (SELECT id FROM Employee WHERE age < 30)", node); @@ -721,6 +731,7 @@ public void testSelectInSubquery() { /** * Check ORDER BY operator with varchar field. */ + @Test public void testBasicOrderByLastName() { testAllNodes(node -> { Result result = executeFrom("SELECT * FROM Employee e ORDER BY e.lastName", node); @@ -739,6 +750,7 @@ public void testBasicOrderByLastName() { /** * Check DISTINCT operator selecting not unique field. 
*/ + @Test public void testBasicDistinct() { testAllNodes(node -> { Result ages = executeFrom("SELECT DISTINCT age FROM Employee", node); @@ -752,6 +764,7 @@ public void testBasicDistinct() { /** * Check simple WHERE operator. */ + @Test public void testDistinctWithWhere() { testAllNodes(node -> { Result ages = executeFrom("SELECT DISTINCT age FROM Employee WHERE id < 100", node); @@ -765,6 +778,7 @@ public void testDistinctWithWhere() { /** * Check greater operator in where clause with both indexed and non-indexed field. */ + @Test public void testWhereGreater() { testAllNodes(node -> { Result idxActual = executeFrom("SELECT firstName FROM Employee WHERE age > 30", node); @@ -783,6 +797,7 @@ public void testWhereGreater() { /** * Check less operator in where clause with both indexed and non-indexed field. */ + @Test public void testWhereLess() { testAllNodes(node -> { Result idxActual = executeFrom("SELECT firstName FROM Employee WHERE age < 30", node); @@ -801,6 +816,7 @@ public void testWhereLess() { /** * Check equals operator in where clause with both indexed and non-indexed field. */ + @Test public void testWhereEq() { testAllNodes(node -> { Result idxActual = executeFrom("SELECT firstName FROM Employee WHERE age = 30", node); @@ -819,6 +835,7 @@ public void testWhereEq() { /** * Check GROUP BY operator with indexed field. */ + @Test public void testGroupByIndexedField() { testAllNodes(node -> { // Need to filter out only part of records (each one is a count of employees @@ -851,6 +868,7 @@ public void testGroupByIndexedField() { /** * Check GROUP BY operator with indexed field. */ + @Test public void testGroupByNonIndexedField() { testAllNodes(node -> { // Need to filter out only part of records (each one is a count of employees @@ -1040,6 +1058,7 @@ public void checkInnerJoinEmployeeDepartment(String depTab) { /** * Check INNER JOIN with collocated data. 
*/ + @Test public void testInnerJoinEmployeeDepartment() { checkInnerJoinEmployeeDepartment(DEP_TAB); } @@ -1138,6 +1157,7 @@ public void checkLeftJoinDepartmentEmployee(String depTab) { /** * Check LEFT JOIN with collocated data. */ + @Test public void testLeftJoin() { checkLeftJoinEmployeeDepartment(DEP_TAB); } @@ -1207,6 +1227,7 @@ public void checkRightJoinDepartmentEmployee(String depTab) { /** * Check RIGHT JOIN with collocated data. */ + @Test public void testRightJoin() { checkRightJoinEmployeeDepartment(DEP_TAB); } @@ -1215,6 +1236,7 @@ public void testRightJoin() { * Check that FULL OUTER JOIN (which is currently unsupported) causes valid error message. */ @SuppressWarnings("ThrowableNotThrown") + @Test public void testFullOuterJoinIsNotSupported() { testAllNodes(node -> { String fullOuterJoinQry = "SELECT e.id as EmpId, e.firstName as EmpName, d.id as DepId, d.name as DepName " + @@ -1235,6 +1257,7 @@ public void testFullOuterJoinIsNotSupported() { * Check that distributed FULL OUTER JOIN (which is currently unsupported) causes valid error message. */ @SuppressWarnings("ThrowableNotThrown") + @Test public void testFullOuterDistributedJoinIsNotSupported() { testAllNodes(node -> { String qry = "SELECT d.id, d.name, a.address " + diff --git a/modules/indexing/src/test/java/org/apache/ignite/sqltests/PartitionedSqlTest.java b/modules/indexing/src/test/java/org/apache/ignite/sqltests/PartitionedSqlTest.java index 83e70157d359d..8899b38290b36 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/sqltests/PartitionedSqlTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/sqltests/PartitionedSqlTest.java @@ -19,10 +19,14 @@ import java.util.Arrays; import java.util.List; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Includes all base sql test plus tests that make sense in partitioned mode. 
*/ +@RunWith(JUnit4.class) public class PartitionedSqlTest extends BaseSqlTest { /** {@inheritDoc} */ @Override protected void setupData() { @@ -34,6 +38,7 @@ public class PartitionedSqlTest extends BaseSqlTest { /** * Check distributed INNER JOIN. */ + @Test public void testInnerDistributedJoin() { Arrays.asList(true, false).forEach(forceOrder -> testAllNodes(node -> { final String qryTpl = "SELECT d.id, d.name, a.address " + @@ -57,6 +62,7 @@ public void testInnerDistributedJoin() { /** * Check that if required index is missed, correct exception will be thrown. */ + @Test public void testInnerDistJoinMissedIndex() { Arrays.asList(true, false).forEach(forceOrder -> testAllNodes(node -> { String qryTpl = "SELECT d.id, d.name, a.address " + @@ -74,6 +80,7 @@ public void testInnerDistJoinMissedIndex() { /** * Check distributed LEFT JOIN. */ + @Test public void testLeftDistributedJoin() { Arrays.asList(true, false).forEach(forceOrder -> testAllNodes(node -> { final String qryTpl = "SELECT d.id, d.name, a.depId, a.address " + @@ -97,6 +104,7 @@ public void testLeftDistributedJoin() { /** * Check that if required index is missed, correct exception will be thrown. */ + @Test public void testLeftDistributedJoinMissedIndex() { Arrays.asList(true, false).forEach(forceOrder -> testAllNodes(node -> { String qryTpl = "SELECT d.id, d.name, a.address " + @@ -114,6 +122,7 @@ public void testLeftDistributedJoinMissedIndex() { /** * Check distributed RIGHT JOIN. */ + @Test public void testRightDistributedJoin() { setExplain(true); @@ -140,6 +149,7 @@ public void testRightDistributedJoin() { /** * Check that if required index is missed, correct exception will be thrown. 
*/ + @Test public void testRightDistributedJoinMissedIndex() { Arrays.asList(true, false).forEach(forceOrder -> testAllNodes(node -> { String qryTpl = "SELECT d.id, d.name, a.address " + diff --git a/modules/indexing/src/test/java/org/apache/ignite/sqltests/ReplicatedSqlTest.java b/modules/indexing/src/test/java/org/apache/ignite/sqltests/ReplicatedSqlTest.java index a71f217ba1afc..e7fafdffb08ad 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/sqltests/ReplicatedSqlTest.java +++ b/modules/indexing/src/test/java/org/apache/ignite/sqltests/ReplicatedSqlTest.java @@ -19,10 +19,15 @@ import java.util.Arrays; import java.util.List; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Includes all base sql test plus tests that make sense in replicated mode. */ +@RunWith(JUnit4.class) public class ReplicatedSqlTest extends BaseSqlTest { /** Name of the department table created in partitioned mode. */ private String DEP_PART_TAB = "DepartmentPart"; @@ -45,6 +50,7 @@ public class ReplicatedSqlTest extends BaseSqlTest { /** * Checks distributed INNER JOIN of replicated and replicated tables. */ + @Test public void testInnerDistributedJoinReplicatedReplicated() { checkInnerDistJoinWithReplicated(DEP_TAB); } @@ -52,6 +58,7 @@ public void testInnerDistributedJoinReplicatedReplicated() { /** * Checks distributed INNER JOIN of partitioned and replicated tables. */ + @Test public void testInnerDistJoinPartitionedReplicated() { checkInnerDistJoinWithReplicated(DEP_PART_TAB); } @@ -90,6 +97,7 @@ private void checkInnerDistJoinWithReplicated(String depTab) { /** * Checks distributed INNER JOIN of replicated and partitioned tables. */ + @Test public void testMixedInnerDistJoinReplicatedPartitioned() { checkInnerDistJoinReplicatedWith(DEP_PART_TAB); } @@ -129,6 +137,7 @@ private void checkInnerDistJoinReplicatedWith(String depTab) { /** * Checks distributed LEFT JOIN of replicated and replicated tables. 
*/ + @Test public void testLeftDistributedJoinReplicatedReplicated() { checkLeftDistributedJoinWithReplicated(DEP_TAB); } @@ -136,6 +145,7 @@ public void testLeftDistributedJoinReplicatedReplicated() { /** * Checks distributed LEFT JOIN of partitioned and replicated tables. */ + @Test public void testLeftDistributedJoinPartitionedReplicated() { setExplain(true); checkLeftDistributedJoinWithReplicated(DEP_PART_TAB); @@ -144,9 +154,9 @@ public void testLeftDistributedJoinPartitionedReplicated() { /** * Checks distributed LEFT JOIN of replicated and partitioned tables. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8732") + @Test public void testLeftDistributedJoinReplicatedPartitioned() { - fail("https://issues.apache.org/jira/browse/IGNITE-8732"); - checkLeftDistributedJoinReplicatedWith(DEP_PART_TAB); } @@ -215,6 +225,7 @@ private void checkLeftDistributedJoinReplicatedWith(String depTab) { /** * Checks distributed RIGHT JOIN of replicated and replicated tables. */ + @Test public void testRightDistributedJoinReplicatedReplicated() { checkRightDistributedJoinWithReplicated(DEP_TAB); } @@ -222,15 +233,16 @@ public void testRightDistributedJoinReplicatedReplicated() { /** * Checks distributed RIGHT JOIN of partitioned and replicated tables. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8732") + @Test public void testRightDistributedJoinPartitionedReplicated() { - fail("https://issues.apache.org/jira/browse/IGNITE-8732"); - checkRightDistributedJoinWithReplicated(DEP_PART_TAB); } /** * Checks distributed RIGHT JOIN of replicated and partitioned tables. */ + @Test public void testRightDistributedJoinReplicatedPartitioned() { setExplain(true); checkRightDistributedJoinReplicatedWith(DEP_PART_TAB); @@ -304,6 +316,7 @@ public void checkRightDistributedJoinReplicatedWith(String depTab) { /** * Check INNER JOIN with collocated data of replicated and partitioned tables. 
*/ + @Test public void testInnerJoinReplicatedPartitioned() { checkInnerJoinEmployeeDepartment(DEP_PART_TAB); } @@ -311,6 +324,7 @@ public void testInnerJoinReplicatedPartitioned() { /** * Check INNER JOIN with collocated data of partitioned and replicated tables. */ + @Test public void testInnerJoinPartitionedReplicated() { checkInnerJoinDepartmentEmployee(DEP_PART_TAB); } @@ -318,15 +332,16 @@ public void testInnerJoinPartitionedReplicated() { /** * Check LEFT JOIN with collocated data of replicated and partitioned tables. */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8732") + @Test public void testLeftJoinReplicatedPartitioned() { - fail("https://issues.apache.org/jira/browse/IGNITE-8732"); - checkLeftJoinEmployeeDepartment(DEP_PART_TAB); } /** * Check LEFT JOIN with collocated data of partitioned and replicated tables. */ + @Test public void testLeftJoinPartitionedReplicated() { checkLeftJoinDepartmentEmployee(DEP_PART_TAB); } @@ -334,6 +349,7 @@ public void testLeftJoinPartitionedReplicated() { /** * Check RIGHT JOIN with collocated data of replicated and partitioned tables. */ + @Test public void testRightJoinReplicatedPartitioned() { checkRightJoinEmployeeDepartment(DEP_PART_TAB); } @@ -341,9 +357,9 @@ public void testRightJoinReplicatedPartitioned() { /** * Check RIGHT JOIN with collocated data of partitioned and replicated tables. 
*/ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-8732") + @Test public void testRightJoinPartitionedReplicated() { - fail("https://issues.apache.org/jira/browse/IGNITE-8732"); - checkRightJoinDepartmentEmployee(DEP_PART_TAB); } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite.java index b44ff2dbc3517..c62df2fb630c1 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite.java +++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite.java @@ -17,36 +17,505 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; +import org.apache.ignite.internal.processors.cache.AffinityKeyNameAndValueFieldNameConflictTest; +import org.apache.ignite.internal.processors.cache.BigEntryQueryTest; +import org.apache.ignite.internal.processors.cache.BinaryMetadataConcurrentUpdateWithIndexesTest; import org.apache.ignite.internal.processors.cache.BinarySerializationQuerySelfTest; import org.apache.ignite.internal.processors.cache.BinarySerializationQueryWithReflectiveSerializerSelfTest; +import org.apache.ignite.internal.processors.cache.CacheIteratorScanQueryTest; +import org.apache.ignite.internal.processors.cache.CacheLocalQueryDetailMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.CacheLocalQueryMetricsSelfTest; +import org.apache.ignite.internal.processors.cache.CacheOffheapBatchIndexingBaseTest; +import org.apache.ignite.internal.processors.cache.CacheOffheapBatchIndexingMultiTypeTest; +import org.apache.ignite.internal.processors.cache.CacheOffheapBatchIndexingSingleTypeTest; +import org.apache.ignite.internal.processors.cache.CachePartitionedQueryDetailMetricsDistributedSelfTest; +import 
org.apache.ignite.internal.processors.cache.CachePartitionedQueryDetailMetricsLocalSelfTest;
+import org.apache.ignite.internal.processors.cache.CachePartitionedQueryMetricsDistributedSelfTest;
+import org.apache.ignite.internal.processors.cache.CachePartitionedQueryMetricsLocalSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheQueryBuildValueTest;
+import org.apache.ignite.internal.processors.cache.CacheQueryEvictDataLostTest;
+import org.apache.ignite.internal.processors.cache.CacheQueryNewClientSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryDetailMetricsDistributedSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryDetailMetricsLocalSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryMetricsDistributedSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryMetricsLocalSelfTest;
+import org.apache.ignite.internal.processors.cache.CacheSqlQueryValueCopySelfTest;
+import org.apache.ignite.internal.processors.cache.DdlTransactionSelfTest;
+import org.apache.ignite.internal.processors.cache.GridCacheCrossCacheQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.GridCacheFullTextQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.GridCacheLazyQueryPartitionsReleaseTest;
+import org.apache.ignite.internal.processors.cache.GridCacheQueryIndexDisabledSelfTest;
+import org.apache.ignite.internal.processors.cache.GridCacheQueryIndexingDisabledSelfTest;
+import org.apache.ignite.internal.processors.cache.GridCacheQueryInternalKeysSelfTest;
+import org.apache.ignite.internal.processors.cache.GridCacheQuerySerializationSelfTest;
+import org.apache.ignite.internal.processors.cache.GridCacheQuerySqlFieldInlineSizeSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteBinaryObjectFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteBinaryObjectLocalQueryArgumentsTest;
+import org.apache.ignite.internal.processors.cache.IgniteBinaryObjectQueryArgumentsTest;
+import org.apache.ignite.internal.processors.cache.IgniteBinaryWrappedObjectFieldsQuerySelfTest;
 import org.apache.ignite.internal.processors.cache.IgniteCacheBinaryObjectsScanSelfTest;
 import org.apache.ignite.internal.processors.cache.IgniteCacheBinaryObjectsScanWithEventsSelfTest;
-import org.apache.ignite.internal.processors.cache.BigEntryQueryTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheCollocatedQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDeleteSqlQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinCollocatedAndNotTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinCustomAffinityMapper;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinNoIndexTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinPartitionedAndReplicatedTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinQueryConditionsTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheDuplicateEntityConfigurationSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheFieldsQueryNoDataSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheFullTextQueryNodeJoiningSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheInsertSqlQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheJoinPartitionedAndReplicatedCollocationTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheJoinPartitionedAndReplicatedTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheJoinQueryWithAffinityKeyTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheLargeResultSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheMergeSqlQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheMultipleIndexedTypesTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheNoClassQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapEvictQueryTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapIndexScanTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheP2pUnmarshallingQueryErrorTest;
+import org.apache.ignite.internal.processors.cache.IgniteCachePrimitiveFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheQueryH2IndexingLeakTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheQueryIndexSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheQueryLoadSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheSqlQueryErrorSelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheUnionDuplicatesTest;
+import org.apache.ignite.internal.processors.cache.IgniteCacheUpdateSqlQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.IgniteCheckClusterStateBeforeExecuteQueryTest;
+import org.apache.ignite.internal.processors.cache.IgniteClientReconnectCacheQueriesFailoverTest;
+import org.apache.ignite.internal.processors.cache.IgniteCrossCachesJoinsQueryTest;
+import org.apache.ignite.internal.processors.cache.IgniteDynamicSqlRestoreTest;
+import org.apache.ignite.internal.processors.cache.IgniteErrorOnRebalanceTest;
+import org.apache.ignite.internal.processors.cache.IncorrectQueryEntityTest;
+import org.apache.ignite.internal.processors.cache.IndexingCachePartitionLossPolicySelfTest;
+import org.apache.ignite.internal.processors.cache.QueryEntityCaseMismatchTest;
+import org.apache.ignite.internal.processors.cache.SqlFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.authentication.SqlUserCommandSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicNearEnabledFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicNearEnabledQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedPartitionQueryConfigurationSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedPartitionQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedQueryCancelSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedFieldsQueryP2PEnabledSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedQueryEvtsDisabledSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedQueryP2PDisabledSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedSnapshotEnabledQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNoRebalanceSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQueryP2PEnabledSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQueryROSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQueryEvtsDisabledSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQueryP2PDisabledSelfTest;
+import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.encryption.EncryptedSqlTableTest;
+import org.apache.ignite.internal.processors.cache.index.BasicIndexMultinodeTest;
+import org.apache.ignite.internal.processors.cache.index.BasicIndexTest;
+import org.apache.ignite.internal.processors.cache.index.DuplicateKeyValueClassesSelfTest;
+import org.apache.ignite.internal.processors.cache.index.DynamicIndexClientBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerCoordinatorBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerNodeFIlterBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerNodeFilterCoordinatorBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2ConnectionLeaksSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientAtomicPartitionedNoBackupsTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientTransactionalPartitionedNoBackupsTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerAtomicPartitionedNoBackupsTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerTransactionalPartitionedNoBackupsTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicColumnsClientBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicColumnsServerBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicColumnsServerCoordinatorBasicSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexAtomicPartitionedNearSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexAtomicPartitionedSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexAtomicReplicatedSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexTransactionalPartitionedNearSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexTransactionalPartitionedSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexTransactionalReplicatedSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientAtomicPartitionedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientAtomicReplicatedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientTransactionalPartitionedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientTransactionalReplicatedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerAtomicPartitionedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerAtomicReplicatedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerTransactionalPartitionedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerTransactionalReplicatedTest;
+import org.apache.ignite.internal.processors.cache.index.H2DynamicTableSelfTest;
+import org.apache.ignite.internal.processors.cache.index.H2RowCachePageEvictionTest;
+import org.apache.ignite.internal.processors.cache.index.H2RowCacheSelfTest;
+import org.apache.ignite.internal.processors.cache.index.IgniteDecimalSelfTest;
+import org.apache.ignite.internal.processors.cache.index.LongIndexNameTest;
+import org.apache.ignite.internal.processors.cache.index.OptimizedMarshallerIndexNameTest;
+import org.apache.ignite.internal.processors.cache.index.QueryEntityValidationSelfTest;
+import org.apache.ignite.internal.processors.cache.index.SchemaExchangeSelfTest;
+import org.apache.ignite.internal.processors.cache.index.SqlTransactionCommandsWithMvccDisabledSelfTest;
+import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalAtomicQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalQueryCancelOrTimeoutSelfTest;
+import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.query.CacheScanQueryFailoverTest;
+import org.apache.ignite.internal.processors.cache.query.GridCacheQueryTransformerSelfTest;
+import org.apache.ignite.internal.processors.cache.query.GridCircularQueueTest;
+import org.apache.ignite.internal.processors.cache.query.IgniteCacheQueryCacheDestroySelfTest;
+import org.apache.ignite.internal.processors.cache.query.IndexingSpiQuerySelfTest;
+import org.apache.ignite.internal.processors.cache.query.IndexingSpiQueryTxSelfTest;
+import org.apache.ignite.internal.processors.cache.query.IndexingSpiQueryWithH2IndexingSelfTest;
+import org.apache.ignite.internal.processors.client.ClientConnectorConfigurationValidationSelfTest;
+import org.apache.ignite.internal.processors.database.baseline.IgniteStableBaselineBinObjFieldsQuerySelfTest;
+import org.apache.ignite.internal.processors.query.IgniteCachelessQueriesSelfTest;
+import org.apache.ignite.internal.processors.query.IgniteQueryDedicatedPoolTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlDefaultValueTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlDistributedJoinSelfTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlEntryCacheModeAgnosticTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlGroupConcatCollocatedTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlGroupConcatNotCollocatedTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlKeyValueFieldsTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlNotNullConstraintTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlParameterizedQueryTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlQueryParallelismTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlRoutingTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlSchemaIndexingTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlSegmentedIndexMultiNodeSelfTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlSegmentedIndexSelfTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlSkipReducerOnUpdateDmlSelfTest;
+import org.apache.ignite.internal.processors.query.IgniteSqlSplitterSelfTest;
+import org.apache.ignite.internal.processors.query.LazyQuerySelfTest;
+import org.apache.ignite.internal.processors.query.MultipleStatementsSqlQuerySelfTest;
+import org.apache.ignite.internal.processors.query.SqlIllegalSchemaSelfTest;
+import org.apache.ignite.internal.processors.query.SqlPushDownFunctionTest;
+import org.apache.ignite.internal.processors.query.SqlSchemaSelfTest;
+import org.apache.ignite.internal.processors.query.SqlSystemViewsSelfTest;
+import org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessorTest;
+import org.apache.ignite.internal.processors.query.h2.GridH2IndexingInMemSelfTest;
+import org.apache.ignite.internal.processors.query.h2.GridH2IndexingOffheapSelfTest;
+import org.apache.ignite.internal.processors.query.h2.GridIndexRebuildSelfTest;
+import org.apache.ignite.internal.processors.query.h2.H2ResultSetIteratorNullifyOnEndSelfTest;
+import org.apache.ignite.internal.processors.query.h2.H2StatementCacheSelfTest;
+import org.apache.ignite.internal.processors.query.h2.IgniteSqlBigIntegerKeyTest;
+import org.apache.ignite.internal.processors.query.h2.IgniteSqlQueryMinMaxTest;
+import org.apache.ignite.internal.processors.query.h2.PreparedStatementExSelfTest;
+import org.apache.ignite.internal.processors.query.h2.ThreadLocalObjectPoolSelfTest;
+import org.apache.ignite.internal.processors.query.h2.sql.BaseH2CompareQueryTest;
+import org.apache.ignite.internal.processors.query.h2.sql.GridQueryParsingTest;
+import org.apache.ignite.internal.processors.query.h2.sql.H2CompareBigQueryDistributedJoinsTest;
+import org.apache.ignite.internal.processors.query.h2.sql.H2CompareBigQueryTest;
+import org.apache.ignite.internal.processors.sql.IgniteCachePartitionedAtomicColumnConstraintsTest;
+import org.apache.ignite.internal.processors.sql.IgniteCachePartitionedTransactionalColumnConstraintsTest;
+import org.apache.ignite.internal.processors.sql.IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest;
+import org.apache.ignite.internal.processors.sql.IgniteCacheReplicatedAtomicColumnConstraintsTest;
+import org.apache.ignite.internal.processors.sql.IgniteCacheReplicatedTransactionalColumnConstraintsTest;
+import org.apache.ignite.internal.processors.sql.IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest;
+import org.apache.ignite.internal.processors.sql.IgniteSQLColumnConstraintsTest;
+import org.apache.ignite.internal.processors.sql.IgniteTransactionSQLColumnConstraintTest;
+import org.apache.ignite.internal.processors.sql.SqlConnectorConfigurationValidationSelfTest;
+import org.apache.ignite.internal.sql.SqlParserBulkLoadSelfTest;
+import org.apache.ignite.internal.sql.SqlParserCreateIndexSelfTest;
+import org.apache.ignite.internal.sql.SqlParserDropIndexSelfTest;
+import org.apache.ignite.internal.sql.SqlParserSetStreamingSelfTest;
+import org.apache.ignite.internal.sql.SqlParserTransactionalKeywordsSelfTest;
+import org.apache.ignite.internal.sql.SqlParserUserSelfTest;
+import org.apache.ignite.spi.communication.tcp.GridOrderedMessageCancelSelfTest;
+import org.apache.ignite.sqltests.PartitionedSqlTest;
+import org.apache.ignite.sqltests.ReplicatedSqlTest;
+import org.apache.ignite.testframework.IgniteTestSuite;
 
 /**
- * Cache query suite with binary marshaller.
+ * Test suite for cache queries.
  */
 public class IgniteBinaryCacheQueryTestSuite extends TestSuite {
     /**
-     * @return Suite.
-     * @throws Exception In case of error.
+     * @return Test suite.
+     * @throws Exception If failed.
      */
     public static TestSuite suite() throws Exception {
-        TestSuite suite = IgniteCacheQuerySelfTestSuite.suite();
+        IgniteTestSuite suite = new IgniteTestSuite("Ignite Cache Queries Test Suite");
+
+        suite.addTest(new JUnit4TestAdapter(AffinityKeyNameAndValueFieldNameConflictTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(PartitionedSqlTest.class));
+        suite.addTest(new JUnit4TestAdapter(ReplicatedSqlTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(SqlParserCreateIndexSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(SqlParserDropIndexSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(SqlParserTransactionalKeywordsSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(SqlParserBulkLoadSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(SqlParserSetStreamingSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(SqlConnectorConfigurationValidationSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientConnectorConfigurationValidationSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(SqlSchemaSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(SqlIllegalSchemaSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(MultipleStatementsSqlQuerySelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(BasicIndexTest.class));
+        suite.addTest(new JUnit4TestAdapter(BasicIndexMultinodeTest.class));
+
+        // Misc tests.
+        suite.addTest(new JUnit4TestAdapter(QueryEntityValidationSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(DuplicateKeyValueClassesSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheLazyQueryPartitionsReleaseTest.class));
+
+        // Dynamic index create/drop tests.
+        suite.addTest(new JUnit4TestAdapter(SchemaExchangeSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(DynamicIndexServerCoordinatorBasicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(DynamicIndexServerBasicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(DynamicIndexServerNodeFilterCoordinatorBasicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(DynamicIndexServerNodeFIlterBasicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(DynamicIndexClientBasicSelfTest.class));
+
+        // H2 tests.
+        suite.addTest(new JUnit4TestAdapter(DmlStatementsProcessorTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridH2IndexingInMemSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridH2IndexingOffheapSelfTest.class));
+
+        // Parsing
+        suite.addTest(new JUnit4TestAdapter(GridQueryParsingTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheSqlQueryErrorSelfTest.class));
+
+        // Config.
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheDuplicateEntityConfigurationSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IncorrectQueryEntityTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteDynamicSqlRestoreTest.class));
+
+        // Queries tests.
+        suite.addTest(new JUnit4TestAdapter(LazyQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlSplitterSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(SqlPushDownFunctionTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlSegmentedIndexSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachelessQueriesSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlSegmentedIndexMultiNodeSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlSchemaIndexingTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheQueryIndexDisabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryLoadSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheLocalQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheLocalAtomicQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedQueryP2PDisabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedQueryEvtsDisabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedSnapshotEnabledQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicNearEnabledQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedQueryP2PDisabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedQueryEvtsDisabledSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheUnionDuplicatesTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheJoinPartitionedAndReplicatedCollocationTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectCacheQueriesFailoverTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteErrorOnRebalanceTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheQueryBuildValueTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheOffheapBatchIndexingMultiTypeTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryIndexSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheCollocatedQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheLargeResultSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheQueryInternalKeysSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2ResultSetIteratorNullifyOnEndSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlBigIntegerKeyTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheOffheapEvictQueryTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheOffheapIndexScanTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCacheCrossCacheQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheQuerySerializationSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteBinaryObjectFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteStableBaselineBinObjFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteBinaryWrappedObjectFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryH2IndexingLeakTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryNoRebalanceSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheQueryTransformerSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheScanQueryFailoverTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePrimitiveFieldsQuerySelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheJoinQueryWithAffinityKeyTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheJoinPartitionedAndReplicatedTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCrossCachesJoinsQueryTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheMultipleIndexedTypesTest.class));
+
+        // DML.
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheMergeSqlQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheInsertSqlQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheUpdateSqlQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheDeleteSqlQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlSkipReducerOnUpdateDmlSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteBinaryObjectQueryArgumentsTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteBinaryObjectLocalQueryArgumentsTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IndexingSpiQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IndexingSpiQueryTxSelfTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheMultipleIndexedTypesTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteSqlQueryMinMaxTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(GridCircularQueueTest.class));
+        suite.addTest(new JUnit4TestAdapter(IndexingSpiQueryWithH2IndexingSelfTest.class));
+
+        // DDL.
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexTransactionalReplicatedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexTransactionalPartitionedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexTransactionalPartitionedNearSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexAtomicReplicatedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexAtomicPartitionedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexAtomicPartitionedNearSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicTableSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicColumnsClientBasicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicColumnsServerBasicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicColumnsServerCoordinatorBasicSelfTest.class));
+
+        // DML+DDL.
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexClientAtomicPartitionedTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexClientAtomicPartitionedNoBackupsTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexClientAtomicReplicatedTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexClientTransactionalPartitionedTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexClientTransactionalPartitionedNoBackupsTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexClientTransactionalReplicatedTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexServerAtomicPartitionedTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexServerAtomicPartitionedNoBackupsTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexServerAtomicReplicatedTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexServerTransactionalPartitionedTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexServerTransactionalPartitionedNoBackupsTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2DynamicIndexingComplexServerTransactionalReplicatedTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(DdlTransactionSelfTest.class));
+
+        // Fields queries.
+        suite.addTest(new JUnit4TestAdapter(SqlFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheLocalFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedFieldsQueryROSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedFieldsQueryP2PEnabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheAtomicNearEnabledFieldsQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedFieldsQueryP2PEnabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheFieldsQueryNoDataSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheQueryIndexingDisabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridOrderedMessageCancelSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheQueryEvictDataLostTest.class));
+
+        // Full text queries.
+        suite.addTest(new JUnit4TestAdapter(GridCacheFullTextQuerySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheFullTextQueryNodeJoiningSelfTest.class));
+
+        // Ignite cache and H2 comparison.
+        suite.addTest(new JUnit4TestAdapter(BaseH2CompareQueryTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2CompareBigQueryTest.class));
+        suite.addTest(new JUnit4TestAdapter(H2CompareBigQueryDistributedJoinsTest.class));
+
+        // Cache query metrics.
+        suite.addTest(new JUnit4TestAdapter(CacheLocalQueryMetricsSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePartitionedQueryMetricsDistributedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePartitionedQueryMetricsLocalSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheReplicatedQueryMetricsDistributedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheReplicatedQueryMetricsLocalSelfTest.class));
+
+        // Cache query detail metrics.
+        suite.addTest(new JUnit4TestAdapter(CacheLocalQueryDetailMetricsSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePartitionedQueryDetailMetricsDistributedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CachePartitionedQueryDetailMetricsLocalSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheReplicatedQueryDetailMetricsDistributedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheReplicatedQueryDetailMetricsLocalSelfTest.class));
+
+        // Unmarshalling query test.
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheP2pUnmarshallingQueryErrorTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheNoClassQuerySelfTest.class));
+
+        // Cancellation.
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedQueryCancelSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheLocalQueryCancelOrTimeoutSelfTest.class));
+
+        // Distributed joins.
+ suite.addTest(new JUnit4TestAdapter(H2CompareBigQueryDistributedJoinsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedJoinCollocatedAndNotTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedJoinCustomAffinityMapper.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedJoinNoIndexTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedJoinPartitionedAndReplicatedTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedJoinQueryConditionsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedJoinTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlDistributedJoinSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlQueryParallelismTest.class)); + + // Other. + suite.addTest(new JUnit4TestAdapter(CacheIteratorScanQueryTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheQueryNewClientSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheOffheapBatchIndexingSingleTypeTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheSqlQueryValueCopySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryCacheDestroySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteQueryDedicatedPoolTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlEntryCacheModeAgnosticTest.class)); + suite.addTest(new JUnit4TestAdapter(QueryEntityCaseMismatchTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedPartitionQuerySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedPartitionQueryConfigurationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlKeyValueFieldsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlRoutingTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlNotNullConstraintTest.class)); + suite.addTest(new 
JUnit4TestAdapter(LongIndexNameTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheQuerySqlFieldInlineSizeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlParameterizedQueryTest.class)); + suite.addTest(new JUnit4TestAdapter(H2ConnectionLeaksSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCheckClusterStateBeforeExecuteQueryTest.class)); + suite.addTest(new JUnit4TestAdapter(OptimizedMarshallerIndexNameTest.class)); + suite.addTest(new JUnit4TestAdapter(SqlSystemViewsSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(GridIndexRebuildSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(SqlTransactionCommandsWithMvccDisabledSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteSqlDefaultValueTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteDecimalSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSQLColumnConstraintsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteTransactionSQLColumnConstraintTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedAtomicColumnConstraintsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedTransactionalColumnConstraintsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedAtomicColumnConstraintsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedTransactionalColumnConstraintsTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest.class)); + + // H2 Rows on-heap cache + suite.addTest(new JUnit4TestAdapter(H2RowCacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(H2RowCachePageEvictionTest.class)); + + // User operation SQL + suite.addTest(new JUnit4TestAdapter(SqlParserUserSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(SqlUserCommandSelfTest.class)); + suite.addTest(new 
JUnit4TestAdapter(EncryptedSqlTableTest.class)); + + suite.addTest(new JUnit4TestAdapter(ThreadLocalObjectPoolSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(H2StatementCacheSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(PreparedStatementExSelfTest.class)); - // Serialization. - suite.addTestSuite(BinarySerializationQuerySelfTest.class); - suite.addTestSuite(BinarySerializationQueryWithReflectiveSerializerSelfTest.class); - suite.addTestSuite(IgniteCacheBinaryObjectsScanSelfTest.class); - suite.addTestSuite(IgniteCacheBinaryObjectsScanWithEventsSelfTest.class); - suite.addTestSuite(BigEntryQueryTest.class); + // Partition loss. + suite.addTest(new JUnit4TestAdapter(IndexingCachePartitionLossPolicySelfTest.class)); - //Should be adjusted. Not ready to be used with BinaryMarshaller. - //suite.addTestSuite(GridCacheBinarySwapScanQuerySelfTest.class); + // GROUP_CONCAT + suite.addTest(new JUnit4TestAdapter(IgniteSqlGroupConcatCollocatedTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlGroupConcatNotCollocatedTest.class)); - //TODO: the following tests= was never tested with binary. Exclude or pass? 
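The hunks above apply one mechanical conversion throughout: each JUnit 3 style `suite.addTestSuite(X.class)` call becomes `suite.addTest(new JUnit4TestAdapter(X.class))`, which wraps an annotation-driven JUnit 4 test class so the legacy `junit.framework.TestSuite` can still execute it. A minimal self-contained sketch of the pattern (`SampleTest` is a hypothetical test class, not one from the patch):

```java
import junit.framework.JUnit4TestAdapter;
import junit.framework.TestResult;
import junit.framework.TestSuite;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

/**
 * Sketch of the JUnit 3 -> JUnit 4 suite migration pattern used in this patch.
 * SampleTest is a hypothetical test class, not one from the patch.
 */
public class AdapterExample {
    /** JUnit 4 style test: annotation-based, no TestCase inheritance. */
    public static class SampleTest {
        @Test
        public void twoPlusTwo() {
            assertEquals(4, 2 + 2);
        }
    }

    /** Legacy static entry point, still returning a junit.framework.TestSuite. */
    public static TestSuite suite() {
        TestSuite suite = new TestSuite("Adapter Demo Suite");

        // Old style worked only for junit.framework.TestCase subclasses:
        //     suite.addTestSuite(SampleTest.class);
        // New style wraps the JUnit 4 class so the JUnit 3 suite can run it:
        suite.addTest(new JUnit4TestAdapter(SampleTest.class));

        return suite;
    }

    public static void main(String[] args) {
        TestResult res = new TestResult();
        suite().run(res);
        System.out.println("runs=" + res.runCount() + ", failures=" + res.failureCount());
    }
}
```

The conversion is behavior-preserving: the adapter reports each `@Test` method as an individual test case to the enclosing suite, and `@Before`/`@After` hooks still run.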
-// suite.addTestSuite(IgniteSqlSchemaIndexingTest.class); + // Binary + suite.addTest(new JUnit4TestAdapter(BinarySerializationQuerySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BinarySerializationQueryWithReflectiveSerializerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheBinaryObjectsScanSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheBinaryObjectsScanWithEventsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(BigEntryQueryTest.class)); + suite.addTest(new JUnit4TestAdapter(BinaryMetadataConcurrentUpdateWithIndexesTest.class)); return suite; } diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite2.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite2.java index ce2a6669f621a..33b6273cfd701 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite2.java +++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinaryCacheQueryTestSuite2.java @@ -17,21 +17,112 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; +import org.apache.ignite.internal.processors.cache.CacheScanPartitionQueryFallbackSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheCrossCacheJoinRandomTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheObjectKeyIndexingSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCachePartitionedQueryMultiThreadedSelfTest; import org.apache.ignite.internal.processors.cache.IgniteCacheQueriesLoadTest1; +import org.apache.ignite.internal.processors.cache.IgniteCacheQueryEvictsMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheQueryMultiThreadedSelfTest; +import org.apache.ignite.internal.processors.cache.IgniteCacheSqlQueryMultiThreadedSelfTest; +import 
org.apache.ignite.internal.processors.cache.QueryJoinWithDifferentNodeFiltersTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheClientQueryReplicatedNodeRestartSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeFailTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartDistributedJoinSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartSelfTest2; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartTxSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.near.IgniteSqlQueryWithBaselineTest; +import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentAtomicPartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentAtomicReplicatedSelfTest; +import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentTransactionalPartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentTransactionalReplicatedSelfTest; +import org.apache.ignite.internal.processors.cache.index.DynamicIndexPartitionedAtomicConcurrentSelfTest; +import org.apache.ignite.internal.processors.cache.index.DynamicIndexPartitionedTransactionalConcurrentSelfTest; +import org.apache.ignite.internal.processors.cache.index.DynamicIndexReplicatedAtomicConcurrentSelfTest; +import org.apache.ignite.internal.processors.cache.index.DynamicIndexReplicatedTransactionalConcurrentSelfTest; 
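A second pattern applied across the suite classes in this patch is dropping `extends TestSuite` in favor of JUnit 4's `AllTests` runner: the class is annotated `@RunWith(AllTests.class)` and JUnit discovers the static `suite()` method reflectively, so the same class runs under both JUnit 4 tooling and the legacy suite machinery. A small sketch under the same assumption of a hypothetical `SampleTest`:

```java
import junit.framework.JUnit4TestAdapter;
import junit.framework.TestSuite;
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.RunWith;
import org.junit.runners.AllTests;

import static org.junit.Assert.assertTrue;

/**
 * Sketch of the AllTests runner pattern: the class no longer extends TestSuite;
 * JUnit 4 locates and runs the static suite() method reflectively.
 * SampleTest is a hypothetical test class, not one from the patch.
 */
@RunWith(AllTests.class)
public class AllTestsExample {
    /** A trivial JUnit 4 test wrapped into the suite below. */
    public static class SampleTest {
        @Test
        public void alwaysPasses() {
            assertTrue(true);
        }
    }

    /** The AllTests runner calls this method and runs the returned suite. */
    public static TestSuite suite() {
        TestSuite suite = new TestSuite("AllTests Demo Suite");
        suite.addTest(new JUnit4TestAdapter(SampleTest.class));
        return suite;
    }

    public static void main(String[] args) {
        Result res = JUnitCore.runClasses(AllTestsExample.class);
        System.out.println("success=" + res.wasSuccessful());
    }
}
```

This is why several hunks also delete `@throws Exception` Javadoc: once the class stops extending `TestSuite`, `suite()` can usually drop the checked exception from its signature as well.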
+import org.apache.ignite.internal.processors.cache.query.ScanQueryOffheapExpiryPolicySelfTest; +import org.apache.ignite.internal.processors.database.baseline.IgniteChangingBaselineCacheQueryNodeRestartSelfTest; +import org.apache.ignite.internal.processors.database.baseline.IgniteStableBaselineCacheQueryNodeRestartsSelfTest; +import org.apache.ignite.internal.processors.query.*; +import org.apache.ignite.internal.processors.query.h2.twostep.CacheQueryMemoryLeakTest; +import org.apache.ignite.internal.processors.query.h2.twostep.DisappearedCacheCauseRetryMessageSelfTest; +import org.apache.ignite.internal.processors.query.h2.twostep.DisappearedCacheWasNotFoundMessageSelfTest; +import org.apache.ignite.internal.processors.query.h2.twostep.NonCollocatedRetryMessageSelfTest; +import org.apache.ignite.internal.processors.query.h2.twostep.RetryCauseMessageSelfTest; +import org.apache.ignite.internal.processors.query.h2.twostep.TableViewSubquerySelfTest; +import org.apache.ignite.testframework.IgniteTestSuite; /** - * Cache query suite with binary marshaller. + * Test suite for cache queries. */ public class IgniteBinaryCacheQueryTestSuite2 extends TestSuite { /** - * @return Suite. - * @throws Exception In case of error. + * @return Test suite. + * @throws Exception If failed. */ public static TestSuite suite() throws Exception { - TestSuite suite = IgniteCacheQuerySelfTestSuite2.suite(); + TestSuite suite = new IgniteTestSuite("Ignite Cache Queries Test Suite 2"); - suite.addTestSuite(IgniteCacheQueriesLoadTest1.class); + // Dynamic index create/drop tests. 
+ suite.addTest(new JUnit4TestAdapter(DynamicIndexPartitionedAtomicConcurrentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DynamicIndexPartitionedTransactionalConcurrentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DynamicIndexReplicatedAtomicConcurrentSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DynamicIndexReplicatedTransactionalConcurrentSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(DynamicColumnsConcurrentAtomicPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DynamicColumnsConcurrentTransactionalPartitionedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DynamicColumnsConcurrentAtomicReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DynamicColumnsConcurrentTransactionalReplicatedSelfTest.class)); + + // Distributed joins. + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryNodeRestartDistributedJoinSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest.class)); + + // Other tests. 
+ suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryMultiThreadedSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryEvictsMultiThreadedSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(ScanQueryOffheapExpiryPolicySelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteCacheCrossCacheJoinRandomTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheClientQueryReplicatedNodeRestartSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryNodeFailTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryNodeRestartSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlQueryWithBaselineTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteChangingBaselineCacheQueryNodeRestartSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteStableBaselineCacheQueryNodeRestartsSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryNodeRestartSelfTest2.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueryNodeRestartTxSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheSqlQueryMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCachePartitionedQueryMultiThreadedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheScanPartitionQueryFallbackSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheObjectKeyIndexingSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteCacheGroupsCompareQueryTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheGroupsSqlSegmentedIndexSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheGroupsSqlSegmentedIndexMultiNodeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheGroupsSqlDistributedJoinSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(QueryJoinWithDifferentNodeFiltersTest.class)); + + suite.addTest(new 
JUnit4TestAdapter(CacheQueryMemoryLeakTest.class)); + + suite.addTest(new JUnit4TestAdapter(NonCollocatedRetryMessageSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(RetryCauseMessageSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DisappearedCacheCauseRetryMessageSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(DisappearedCacheWasNotFoundMessageSelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(TableViewSubquerySelfTest.class)); + + suite.addTest(new JUnit4TestAdapter(IgniteCacheQueriesLoadTest1.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSqlCreateTableTemplateTest.class)); return suite; } diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheQueryTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheQueryTestSuite.java index 109e244e0b5b7..0bf334a9e8519 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheQueryTestSuite.java +++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteBinarySimpleNameMapperCacheQueryTestSuite.java @@ -19,14 +19,16 @@ import junit.framework.TestSuite; import org.apache.ignite.testframework.config.GridTestProperties; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Cache query suite with binary marshaller. */ -public class IgniteBinarySimpleNameMapperCacheQueryTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteBinarySimpleNameMapperCacheQueryTestSuite { /** * @return Suite. - * @throws Exception In case of error. 
*/ public static TestSuite suite() throws Exception { GridTestProperties.setProperty(GridTestProperties.BINARY_MARSHALLER_USE_SIMPLE_NAME_MAPPER, "true"); diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheAffinityRunTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheAffinityRunTestSuite.java index e9c7b79c6f73d..9b470f779da3a 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheAffinityRunTestSuite.java +++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheAffinityRunTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest; import org.apache.ignite.internal.processors.cache.IgniteCacheLockPartitionOnAffinityRunTest; @@ -24,26 +25,28 @@ import org.apache.ignite.internal.processors.cache.IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest; import org.apache.ignite.internal.processors.database.baseline.IgniteBaselineLockPartitionOnAffinityRunAtomicCacheTest; import org.apache.ignite.internal.processors.database.baseline.IgniteBaselineLockPartitionOnAffinityRunTxCacheTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Compute and Cache tests for affinityRun/Call. These tests are extracted into a separate suite * because they take a lot of time. */ -public class IgniteCacheAffinityRunTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheAffinityRunTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Compute and Cache Affinity Run Test Suite"); - suite.addTestSuite(IgniteCacheLockPartitionOnAffinityRunTest.class); - suite.addTestSuite(IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest.class); - suite.addTestSuite(IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest.class); - suite.addTestSuite(IgniteBaselineLockPartitionOnAffinityRunAtomicCacheTest.class); - suite.addTestSuite(IgniteBaselineLockPartitionOnAffinityRunTxCacheTest.class); - suite.addTestSuite(IgniteCacheLockPartitionOnAffinityRunTxCacheOpTest.class); + suite.addTest(new JUnit4TestAdapter(IgniteCacheLockPartitionOnAffinityRunTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheLockPartitionOnAffinityRunAtomicCacheOpTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteBaselineLockPartitionOnAffinityRunAtomicCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteBaselineLockPartitionOnAffinityRunTxCacheTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheLockPartitionOnAffinityRunTxCacheOpTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheConfigVariationQueryTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheConfigVariationQueryTestSuite.java index 83ae27f584a9c..a8e9c1c4832e0 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheConfigVariationQueryTestSuite.java +++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheConfigVariationQueryTestSuite.java @@ -20,16 +20,18 @@ import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.IgniteCacheConfigVariationsQueryTest; import 
org.apache.ignite.testframework.configvariations.ConfigVariationsTestSuiteBuilder; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache queries. */ -public class IgniteCacheConfigVariationQueryTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheConfigVariationQueryTestSuite { /** * @return Test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { return new ConfigVariationsTestSuiteBuilder( "Cache Config Variations Query Test Suite", IgniteCacheConfigVariationsQueryTest.class) diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccSqlTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccSqlTestSuite.java index 21ab2e69fc46b..04189145dba35 100644 --- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccSqlTestSuite.java +++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheMvccSqlTestSuite.java @@ -17,9 +17,29 @@ package org.apache.ignite.testsuites; -import junit.framework.TestSuite; +import org.apache.ignite.cache.CacheAtomicityMode; +import org.apache.ignite.configuration.NearCacheConfiguration; +import org.apache.ignite.internal.processors.cache.distributed.dht.GridCacheColocatedTxPessimisticOriginatingNodeFailureSelfTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCachePartitionedNearDisabledPrimaryNodeFailureRecoveryTest; +import org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest; +import org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedTxPessimisticOriginatingNodeFailureSelfTest; +import org.apache.ignite.internal.processors.cache.index.MvccEmptyTransactionSelfTest; import 
org.apache.ignite.internal.processors.cache.index.SqlTransactionsCommandsWithMvccEnabledSelfTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccBasicContinuousQueryTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccBulkLoadTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccClientReconnectContinuousQueryTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryBackupQueueTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryClientReconnectTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryClientTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryImmutableEntryTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryMultiNodesFilteringTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryPartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryPartitionedTxOneNodeTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryReplicatedSelfTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousQueryReplicatedTxOneNodeTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousWithTransformerClientSelfTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousWithTransformerPartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccContinuousWithTransformerReplicatedSelfTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccDmlSimpleTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccIteratorWithConcurrentJdbcTransactionTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccLocalEntriesWithConcurrentJdbcTransactionTest; @@ -39,64 +59,147 @@ import 
org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSizeTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSizeWithConcurrentJdbcTransactionTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSqlConfigurationValidationTest; -import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSqlUpdateCountersTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSqlContinuousQueryPartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSqlContinuousQueryReplicatedSelfTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSqlLockTimeoutTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSqlTxModesTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccSqlUpdateCountersTest; import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccStreamingInsertTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTxNodeMappingTest; +import org.apache.ignite.internal.processors.cache.mvcc.CacheMvccTxRecoveryTest; import org.apache.ignite.internal.processors.cache.mvcc.MvccRepeatableReadBulkOpsTest; import org.apache.ignite.internal.processors.cache.mvcc.MvccRepeatableReadOperationsTest; import org.apache.ignite.internal.processors.query.h2.GridIndexRebuildWithMvccEnabledSelfTest; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Suite; -/** - * - */ -public class IgniteCacheMvccSqlTestSuite extends TestSuite { - /** - * @return Test suite. - */ - public static TestSuite suite() { - TestSuite suite = new TestSuite("IgniteCache SQL MVCC Test Suite"); - - // Simple tests. 
- suite.addTestSuite(CacheMvccSqlConfigurationValidationTest.class); - suite.addTestSuite(CacheMvccDmlSimpleTest.class); - suite.addTestSuite(SqlTransactionsCommandsWithMvccEnabledSelfTest.class); - suite.addTestSuite(CacheMvccSizeTest.class); - suite.addTestSuite(CacheMvccSqlUpdateCountersTest.class); - suite.addTestSuite(CacheMvccSqlLockTimeoutTest.class); - - suite.addTestSuite(GridIndexRebuildWithMvccEnabledSelfTest.class); - - // SQL vs CacheAPI consistency. - suite.addTestSuite(MvccRepeatableReadOperationsTest.class); - suite.addTestSuite(MvccRepeatableReadBulkOpsTest.class); - - // JDBC tests. - suite.addTestSuite(CacheMvccSizeWithConcurrentJdbcTransactionTest.class); - suite.addTestSuite(CacheMvccScanQueryWithConcurrentJdbcTransactionTest.class); - suite.addTestSuite(CacheMvccLocalEntriesWithConcurrentJdbcTransactionTest.class); - suite.addTestSuite(CacheMvccIteratorWithConcurrentJdbcTransactionTest.class); - - // Load tests. - suite.addTestSuite(CacheMvccBulkLoadTest.class); - suite.addTestSuite(CacheMvccStreamingInsertTest.class); - - suite.addTestSuite(CacheMvccPartitionedSqlQueriesTest.class); - suite.addTestSuite(CacheMvccReplicatedSqlQueriesTest.class); - suite.addTestSuite(CacheMvccPartitionedSqlTxQueriesTest.class); - suite.addTestSuite(CacheMvccReplicatedSqlTxQueriesTest.class); - - suite.addTestSuite(CacheMvccPartitionedSqlTxQueriesWithReducerTest.class); - suite.addTestSuite(CacheMvccReplicatedSqlTxQueriesWithReducerTest.class); - suite.addTestSuite(CacheMvccPartitionedSelectForUpdateQueryTest.class); - suite.addTestSuite(CacheMvccReplicatedSelectForUpdateQueryTest.class); - - // Failover tests. 
- suite.addTestSuite(CacheMvccPartitionedBackupsTest.class); - suite.addTestSuite(CacheMvccReplicatedBackupsTest.class); - - suite.addTestSuite(CacheMvccPartitionedSqlCoordinatorFailoverTest.class); - suite.addTestSuite(CacheMvccReplicatedSqlCoordinatorFailoverTest.class); - - return suite; +import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT; + +/** */ +@RunWith(Suite.class) +@Suite.SuiteClasses({ + // Simple tests. + MvccEmptyTransactionSelfTest.class, + CacheMvccSqlConfigurationValidationTest.class, + CacheMvccDmlSimpleTest.class, + SqlTransactionsCommandsWithMvccEnabledSelfTest.class, + CacheMvccSizeTest.class, + CacheMvccSqlUpdateCountersTest.class, + CacheMvccSqlLockTimeoutTest.class, + CacheMvccSqlTxModesTest.class, + GridIndexRebuildWithMvccEnabledSelfTest.class, + + CacheMvccTxNodeMappingTest.class, + + // SQL vs CacheAPI consistency. + MvccRepeatableReadOperationsTest.class, + MvccRepeatableReadBulkOpsTest.class, + + // JDBC tests. + CacheMvccSizeWithConcurrentJdbcTransactionTest.class, + CacheMvccScanQueryWithConcurrentJdbcTransactionTest.class, + CacheMvccLocalEntriesWithConcurrentJdbcTransactionTest.class, + CacheMvccIteratorWithConcurrentJdbcTransactionTest.class, + + // Load tests. + CacheMvccBulkLoadTest.class, + CacheMvccStreamingInsertTest.class, + + CacheMvccPartitionedSqlQueriesTest.class, + CacheMvccReplicatedSqlQueriesTest.class, + CacheMvccPartitionedSqlTxQueriesTest.class, + CacheMvccReplicatedSqlTxQueriesTest.class, + + CacheMvccPartitionedSqlTxQueriesWithReducerTest.class, + CacheMvccReplicatedSqlTxQueriesWithReducerTest.class, + CacheMvccPartitionedSelectForUpdateQueryTest.class, + CacheMvccReplicatedSelectForUpdateQueryTest.class, + + // Failover tests. + CacheMvccPartitionedBackupsTest.class, + CacheMvccReplicatedBackupsTest.class, + + CacheMvccPartitionedSqlCoordinatorFailoverTest.class, + CacheMvccReplicatedSqlCoordinatorFailoverTest.class, + + // Continuous queries. 
+ CacheMvccBasicContinuousQueryTest.class, + CacheMvccContinuousQueryPartitionedSelfTest.class, + CacheMvccContinuousQueryReplicatedSelfTest.class, + CacheMvccSqlContinuousQueryPartitionedSelfTest.class, + CacheMvccSqlContinuousQueryReplicatedSelfTest.class, + + CacheMvccContinuousQueryPartitionedTxOneNodeTest.class, + CacheMvccContinuousQueryReplicatedTxOneNodeTest.class, + + CacheMvccContinuousQueryClientReconnectTest.class, + CacheMvccContinuousQueryClientTest.class, + + CacheMvccContinuousQueryMultiNodesFilteringTest.class, + CacheMvccContinuousQueryBackupQueueTest.class, + CacheMvccContinuousQueryImmutableEntryTest.class, + CacheMvccClientReconnectContinuousQueryTest.class, + + CacheMvccContinuousWithTransformerClientSelfTest.class, + CacheMvccContinuousWithTransformerPartitionedSelfTest.class, + CacheMvccContinuousWithTransformerReplicatedSelfTest.class, + + // Transaction recovery. + CacheMvccTxRecoveryTest.class, + + IgniteCacheMvccSqlTestSuite.MvccPartitionedPrimaryNodeFailureRecoveryTest.class, + IgniteCacheMvccSqlTestSuite.MvccPartitionedTwoBackupsPrimaryNodeFailureRecoveryTest.class, + IgniteCacheMvccSqlTestSuite.MvccColocatedTxPessimisticOriginatingNodeFailureRecoveryTest.class, + IgniteCacheMvccSqlTestSuite.MvccReplicatedTxPessimisticOriginatingNodeFailureRecoveryTest.class +}) +public class IgniteCacheMvccSqlTestSuite { + /** */ + public static class MvccPartitionedPrimaryNodeFailureRecoveryTest + extends IgniteCachePartitionedNearDisabledPrimaryNodeFailureRecoveryTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL_SNAPSHOT; + } + } + + /** */ + public static class MvccPartitionedTwoBackupsPrimaryNodeFailureRecoveryTest + extends IgniteCachePartitionedTwoBackupsPrimaryNodeFailureRecoveryTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Override protected NearCacheConfiguration 
nearConfiguration() { + return null; + } + } + + /** */ + public static class MvccColocatedTxPessimisticOriginatingNodeFailureRecoveryTest + extends GridCacheColocatedTxPessimisticOriginatingNodeFailureSelfTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL_SNAPSHOT; + } + } + + /** */ + public static class MvccReplicatedTxPessimisticOriginatingNodeFailureRecoveryTest + extends GridCacheReplicatedTxPessimisticOriginatingNodeFailureSelfTest { + /** {@inheritDoc} */ + @Override protected CacheAtomicityMode atomicityMode() { + return TRANSACTIONAL_SNAPSHOT; + } + + /** {@inheritDoc} */ + @Ignore("https://issues.apache.org/jira/browse/IGNITE-10765") + @Test + @Override public void testManyKeysRollback() throws Exception { + super.testManyKeysRollback(); + } } } diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite.java deleted file mode 100644 index 7c8b2f86c5005..0000000000000 --- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite.java +++ /dev/null @@ -1,489 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.testsuites; - -import junit.framework.TestSuite; -import org.apache.ignite.internal.processors.cache.CacheIteratorScanQueryTest; -import org.apache.ignite.internal.processors.cache.CacheLocalQueryDetailMetricsSelfTest; -import org.apache.ignite.internal.processors.cache.CacheLocalQueryMetricsSelfTest; -import org.apache.ignite.internal.processors.cache.CacheOffheapBatchIndexingSingleTypeTest; -import org.apache.ignite.internal.processors.cache.CachePartitionedQueryDetailMetricsDistributedSelfTest; -import org.apache.ignite.internal.processors.cache.CachePartitionedQueryDetailMetricsLocalSelfTest; -import org.apache.ignite.internal.processors.cache.CachePartitionedQueryMetricsDistributedSelfTest; -import org.apache.ignite.internal.processors.cache.CachePartitionedQueryMetricsLocalSelfTest; -import org.apache.ignite.internal.processors.cache.CacheQueryEvictDataLostTest; -import org.apache.ignite.internal.processors.cache.CacheQueryNewClientSelfTest; -import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryDetailMetricsDistributedSelfTest; -import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryDetailMetricsLocalSelfTest; -import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryMetricsDistributedSelfTest; -import org.apache.ignite.internal.processors.cache.CacheReplicatedQueryMetricsLocalSelfTest; -import org.apache.ignite.internal.processors.cache.CacheSqlQueryValueCopySelfTest; -import org.apache.ignite.internal.processors.cache.DdlTransactionSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheCrossCacheQuerySelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheFullTextQuerySelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheLazyQueryPartitionsReleaseTest; -import 
org.apache.ignite.internal.processors.cache.GridCacheQueryIndexDisabledSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheQueryIndexingDisabledSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheQueryInternalKeysSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheQuerySerializationSelfTest; -import org.apache.ignite.internal.processors.cache.GridCacheQuerySqlFieldInlineSizeSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteBinaryObjectFieldsQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteBinaryObjectLocalQueryArgumentsTest; -import org.apache.ignite.internal.processors.cache.IgniteBinaryObjectQueryArgumentsTest; -import org.apache.ignite.internal.processors.cache.IgniteBinaryWrappedObjectFieldsQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheCollocatedQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheDeleteSqlQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinCollocatedAndNotTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinCustomAffinityMapper; -import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinNoIndexTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinPartitionedAndReplicatedTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinQueryConditionsTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheDistributedJoinTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheDuplicateEntityConfigurationSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheFieldsQueryNoDataSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheFullTextQueryNodeJoiningSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheInsertSqlQuerySelfTest; -import 
org.apache.ignite.internal.processors.cache.IgniteCacheJoinPartitionedAndReplicatedTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheJoinQueryWithAffinityKeyTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheLargeResultSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheMergeSqlQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheMultipleIndexedTypesTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheNoClassQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapEvictQueryTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapIndexScanTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheP2pUnmarshallingQueryErrorTest; -import org.apache.ignite.internal.processors.cache.IgniteCachePrimitiveFieldsQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheQueryH2IndexingLeakTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheQueryIndexSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheQueryLoadSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheSqlQueryErrorSelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCacheUpdateSqlQuerySelfTest; -import org.apache.ignite.internal.processors.cache.IgniteCheckClusterStateBeforeExecuteQueryTest; -import org.apache.ignite.internal.processors.cache.IgniteCrossCachesJoinsQueryTest; -import org.apache.ignite.internal.processors.cache.IgniteDynamicSqlRestoreTest; -import org.apache.ignite.internal.processors.cache.IncorrectQueryEntityTest; -import org.apache.ignite.internal.processors.cache.IndexingCachePartitionLossPolicySelfTest; -import org.apache.ignite.internal.processors.cache.QueryEntityCaseMismatchTest; -import org.apache.ignite.internal.processors.cache.SqlFieldsQuerySelfTest; -import org.apache.ignite.internal.processors.cache.authentication.SqlUserCommandSelfTest; 
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicFieldsQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicNearEnabledFieldsQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicNearEnabledQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheAtomicQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedPartitionQueryConfigurationSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedPartitionQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedQueryCancelSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedFieldsQueryP2PEnabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedFieldsQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedQueryEvtsDisabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedQueryP2PDisabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCachePartitionedSnapshotEnabledQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryAbstractDistributedJoinSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNoRebalanceSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQueryP2PEnabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQueryROSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedFieldsQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQueryEvtsDisabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQueryP2PDisabledSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.index.BasicIndexTest;
-import org.apache.ignite.internal.processors.cache.index.DuplicateKeyValueClassesSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexClientBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerCoordinatorBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerNodeFIlterBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexServerNodeFilterCoordinatorBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2ConnectionLeaksSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicColumnsClientBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicColumnsServerBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicColumnsServerCoordinatorBasicSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexAtomicPartitionedNearSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexAtomicPartitionedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexAtomicReplicatedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexTransactionalPartitionedNearSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexTransactionalPartitionedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexTransactionalReplicatedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientAtomicPartitionedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientAtomicReplicatedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientTransactionalPartitionedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexClientTransactionalReplicatedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerAtomicPartitionedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerAtomicReplicatedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerTransactionalPartitionedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicIndexingComplexServerTransactionalReplicatedTest;
-import org.apache.ignite.internal.processors.cache.index.H2DynamicTableSelfTest;
-import org.apache.ignite.internal.processors.cache.index.H2RowCachePageEvictionTest;
-import org.apache.ignite.internal.processors.cache.index.H2RowCacheSelfTest;
-import org.apache.ignite.internal.processors.cache.index.IgniteDecimalSelfTest;
-import org.apache.ignite.internal.processors.cache.index.LongIndexNameTest;
-import org.apache.ignite.internal.processors.cache.index.OptimizedMarshallerIndexNameTest;
-import org.apache.ignite.internal.processors.cache.index.SchemaExchangeSelfTest;
-import org.apache.ignite.internal.processors.cache.index.SqlTransactionsComandsSelfTest;
-import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalAtomicQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalFieldsQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalQueryCancelOrTimeoutSelfTest;
-import org.apache.ignite.internal.processors.cache.local.IgniteCacheLocalQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.query.CacheScanQueryFailoverTest;
-import org.apache.ignite.internal.processors.cache.query.GridCacheQueryTransformerSelfTest;
-import org.apache.ignite.internal.processors.cache.query.IgniteCacheQueryCacheDestroySelfTest;
-import org.apache.ignite.internal.processors.cache.query.IndexingSpiQuerySelfTest;
-import org.apache.ignite.internal.processors.cache.query.IndexingSpiQueryTxSelfTest;
-import org.apache.ignite.internal.processors.client.ClientConnectorConfigurationValidationSelfTest;
-import org.apache.ignite.internal.processors.database.baseline.IgniteStableBaselineBinObjFieldsQuerySelfTest;
-import org.apache.ignite.internal.processors.query.IgniteCachelessQueriesSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteQueryDedicatedPoolTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlDefaultValueTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlDistributedJoinSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlEntryCacheModeAgnosticTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlGroupConcatCollocatedTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlGroupConcatNotCollocatedTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlKeyValueFieldsTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlNotNullConstraintTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlParameterizedQueryTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlQueryParallelismTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlRoutingTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlSchemaIndexingTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlSegmentedIndexMultiNodeSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlSegmentedIndexSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlSkipReducerOnUpdateDmlSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteSqlSplitterSelfTest;
-import org.apache.ignite.internal.processors.query.LazyQuerySelfTest;
-import org.apache.ignite.internal.processors.query.MultipleStatementsSqlQuerySelfTest;
-import org.apache.ignite.internal.processors.query.SqlIllegalSchemaSelfTest;
-import org.apache.ignite.internal.processors.query.SqlPushDownFunctionTest;
-import org.apache.ignite.internal.processors.query.SqlSchemaSelfTest;
-import org.apache.ignite.internal.processors.query.SqlSystemViewsSelfTest;
-import org.apache.ignite.internal.processors.query.h2.GridH2IndexingInMemSelfTest;
-import org.apache.ignite.internal.processors.query.h2.GridH2IndexingOffheapSelfTest;
-import org.apache.ignite.internal.processors.query.h2.GridIndexRebuildSelfTest;
-import org.apache.ignite.internal.processors.query.h2.H2ResultSetIteratorNullifyOnEndSelfTest;
-import org.apache.ignite.internal.processors.query.h2.H2StatementCacheSelfTest;
-import org.apache.ignite.internal.processors.query.h2.IgniteSqlBigIntegerKeyTest;
-import org.apache.ignite.internal.processors.query.h2.IgniteSqlQueryMinMaxTest;
-import org.apache.ignite.internal.processors.query.h2.PreparedStatementExSelfTest;
-import org.apache.ignite.internal.processors.query.h2.ThreadLocalObjectPoolSelfTest;
-import org.apache.ignite.internal.processors.query.h2.sql.BaseH2CompareQueryTest;
-import org.apache.ignite.internal.processors.query.h2.sql.GridQueryParsingTest;
-import org.apache.ignite.internal.processors.query.h2.sql.H2CompareBigQueryDistributedJoinsTest;
-import org.apache.ignite.internal.processors.query.h2.sql.H2CompareBigQueryTest;
-import org.apache.ignite.internal.processors.sql.IgniteCachePartitionedAtomicColumnConstraintsTest;
-import org.apache.ignite.internal.processors.sql.IgniteCachePartitionedTransactionalColumnConstraintsTest;
-import org.apache.ignite.internal.processors.sql.IgniteCacheReplicatedAtomicColumnConstraintsTest;
-import org.apache.ignite.internal.processors.sql.IgniteCacheReplicatedTransactionalColumnConstraintsTest;
-import org.apache.ignite.internal.processors.sql.IgniteSQLColumnConstraintsTest;
-import org.apache.ignite.internal.processors.sql.SqlConnectorConfigurationValidationSelfTest;
-import org.apache.ignite.internal.sql.SqlParserBulkLoadSelfTest;
-import org.apache.ignite.internal.sql.SqlParserCreateIndexSelfTest;
-import org.apache.ignite.internal.sql.SqlParserDropIndexSelfTest;
-import org.apache.ignite.internal.sql.SqlParserSetStreamingSelfTest;
-import org.apache.ignite.internal.sql.SqlParserTransactionalKeywordsSelfTest;
-import org.apache.ignite.internal.sql.SqlParserUserSelfTest;
-import org.apache.ignite.spi.communication.tcp.GridOrderedMessageCancelSelfTest;
-import org.apache.ignite.sqltests.PartitionedSqlTest;
-import org.apache.ignite.sqltests.ReplicatedSqlTest;
-import org.apache.ignite.testframework.IgniteTestSuite;
-
-/**
- * Test suite for cache queries.
- */
-public class IgniteCacheQuerySelfTestSuite extends TestSuite {
-    /**
-     * @return Test suite.
-     * @throws Exception If failed.
-     */
-    public static TestSuite suite() throws Exception {
-        IgniteTestSuite suite = new IgniteTestSuite("Ignite Cache Queries Test Suite");
-
-        suite.addTestSuite(PartitionedSqlTest.class);
-        suite.addTestSuite(ReplicatedSqlTest.class);
-
-        suite.addTestSuite(SqlParserCreateIndexSelfTest.class);
-        suite.addTestSuite(SqlParserDropIndexSelfTest.class);
-        suite.addTestSuite(SqlParserTransactionalKeywordsSelfTest.class);
-        suite.addTestSuite(SqlParserBulkLoadSelfTest.class);
-        suite.addTestSuite(SqlParserSetStreamingSelfTest.class);
-
-        suite.addTestSuite(SqlConnectorConfigurationValidationSelfTest.class);
-        suite.addTestSuite(ClientConnectorConfigurationValidationSelfTest.class);
-
-        suite.addTestSuite(SqlSchemaSelfTest.class);
-        suite.addTestSuite(SqlIllegalSchemaSelfTest.class);
-        suite.addTestSuite(MultipleStatementsSqlQuerySelfTest.class);
-
-        suite.addTestSuite(BasicIndexTest.class);
-
-        // Misc tests.
-        // TODO: Enable when IGNITE-1094 is fixed.
-        // suite.addTest(new TestSuite(QueryEntityValidationSelfTest.class));
-        suite.addTest(new TestSuite(DuplicateKeyValueClassesSelfTest.class));
-        suite.addTest(new TestSuite(GridCacheLazyQueryPartitionsReleaseTest.class));
-
-        // Dynamic index create/drop tests.
-        suite.addTest(new TestSuite(SchemaExchangeSelfTest.class));
-
-        suite.addTest(new TestSuite(DynamicIndexServerCoordinatorBasicSelfTest.class));
-        suite.addTest(new TestSuite(DynamicIndexServerBasicSelfTest.class));
-        suite.addTest(new TestSuite(DynamicIndexServerNodeFilterCoordinatorBasicSelfTest.class));
-        suite.addTest(new TestSuite(DynamicIndexServerNodeFIlterBasicSelfTest.class));
-        suite.addTest(new TestSuite(DynamicIndexClientBasicSelfTest.class));
-
-        // H2 tests.
-
-        // TODO: IGNITE-4994: Restore mock.
-        // suite.addTest(new TestSuite(GridH2TableSelfTest.class));
-
-        suite.addTest(new TestSuite(GridH2IndexingInMemSelfTest.class));
-        suite.addTest(new TestSuite(GridH2IndexingOffheapSelfTest.class));
-
-        // Parsing
-        suite.addTestSuite(GridQueryParsingTest.class);
-        suite.addTestSuite(IgniteCacheSqlQueryErrorSelfTest.class);
-
-        // Config.
-        suite.addTestSuite(IgniteCacheDuplicateEntityConfigurationSelfTest.class);
-        suite.addTestSuite(IncorrectQueryEntityTest.class);
-        suite.addTestSuite(IgniteDynamicSqlRestoreTest.class);
-
-        // Queries tests.
-        suite.addTestSuite(LazyQuerySelfTest.class);
-        suite.addTestSuite(IgniteSqlSplitterSelfTest.class);
-        suite.addTestSuite(SqlPushDownFunctionTest.class);
-        suite.addTestSuite(IgniteSqlSegmentedIndexSelfTest.class);
-        suite.addTestSuite(IgniteCachelessQueriesSelfTest.class);
-        suite.addTestSuite(IgniteSqlSegmentedIndexMultiNodeSelfTest.class);
-        suite.addTestSuite(IgniteSqlSchemaIndexingTest.class);
-        suite.addTestSuite(GridCacheQueryIndexDisabledSelfTest.class);
-        suite.addTestSuite(IgniteCacheQueryLoadSelfTest.class);
-        suite.addTestSuite(IgniteCacheLocalQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheLocalAtomicQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedQueryP2PDisabledSelfTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedQueryEvtsDisabledSelfTest.class);
-        suite.addTestSuite(IgniteCachePartitionedQuerySelfTest.class);
-        suite.addTestSuite(IgniteCachePartitionedSnapshotEnabledQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheAtomicQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheAtomicNearEnabledQuerySelfTest.class);
-        suite.addTestSuite(IgniteCachePartitionedQueryP2PDisabledSelfTest.class);
-        suite.addTestSuite(IgniteCachePartitionedQueryEvtsDisabledSelfTest.class);
-
-        //suite.addTestSuite(IgniteCacheUnionDuplicatesTest.class);
-        //suite.addTestSuite(IgniteCacheConfigVariationsQueryTest.class);
-        //suite.addTestSuite(IgniteCacheJoinPartitionedAndReplicatedCollocationTest.class);
-        //suite.addTestSuite(IgniteClientReconnectCacheQueriesFailoverTest.class);
-        //suite.addTestSuite(IgniteErrorOnRebalanceTest.class);
-        //suite.addTestSuite(CacheQueryBuildValueTest.class);
-        //suite.addTestSuite(CacheOffheapBatchIndexingMultiTypeTest.class);
-        //suite.addTestSuite(CacheOffheapBatchIndexingBaseTest.class);
-
-        suite.addTestSuite(IgniteCacheQueryIndexSelfTest.class);
-        suite.addTestSuite(IgniteCacheCollocatedQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheLargeResultSelfTest.class);
-        suite.addTestSuite(GridCacheQueryInternalKeysSelfTest.class);
-        suite.addTestSuite(H2ResultSetIteratorNullifyOnEndSelfTest.class);
-        suite.addTestSuite(IgniteSqlBigIntegerKeyTest.class);
-        suite.addTestSuite(IgniteCacheOffheapEvictQueryTest.class);
-        suite.addTestSuite(IgniteCacheOffheapIndexScanTest.class);
-
-        suite.addTestSuite(IgniteCacheQueryAbstractDistributedJoinSelfTest.class);
-
-        suite.addTestSuite(GridCacheCrossCacheQuerySelfTest.class);
-        suite.addTestSuite(GridCacheQuerySerializationSelfTest.class);
-        suite.addTestSuite(IgniteBinaryObjectFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteStableBaselineBinObjFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteBinaryWrappedObjectFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheQueryH2IndexingLeakTest.class);
-        suite.addTestSuite(IgniteCacheQueryNoRebalanceSelfTest.class);
-        suite.addTestSuite(GridCacheQueryTransformerSelfTest.class);
-        suite.addTestSuite(CacheScanQueryFailoverTest.class);
-        suite.addTestSuite(IgniteCachePrimitiveFieldsQuerySelfTest.class);
-
-        suite.addTestSuite(IgniteCacheJoinQueryWithAffinityKeyTest.class);
-        suite.addTestSuite(IgniteCacheJoinPartitionedAndReplicatedTest.class);
-        suite.addTestSuite(IgniteCrossCachesJoinsQueryTest.class);
-
-        suite.addTestSuite(IgniteCacheMultipleIndexedTypesTest.class);
-
-        // DML.
-        suite.addTestSuite(IgniteCacheMergeSqlQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheInsertSqlQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheUpdateSqlQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheDeleteSqlQuerySelfTest.class);
-        suite.addTestSuite(IgniteSqlSkipReducerOnUpdateDmlSelfTest.class);
-        suite.addTestSuite(IgniteSqlSkipReducerOnUpdateDmlFlagSelfTest.class);
-
-        suite.addTestSuite(IgniteBinaryObjectQueryArgumentsTest.class);
-        suite.addTestSuite(IgniteBinaryObjectLocalQueryArgumentsTest.class);
-
-        suite.addTestSuite(IndexingSpiQuerySelfTest.class);
-        suite.addTestSuite(IndexingSpiQueryTxSelfTest.class);
-
-        suite.addTestSuite(IgniteCacheMultipleIndexedTypesTest.class);
-        suite.addTestSuite(IgniteSqlQueryMinMaxTest.class);
-
-        //suite.addTestSuite(GridCircularQueueTest.class);
-        //suite.addTestSuite(IndexingSpiQueryWithH2IndexingSelfTest.class);
-
-        // DDL.
-        suite.addTestSuite(H2DynamicIndexTransactionalReplicatedSelfTest.class);
-        suite.addTestSuite(H2DynamicIndexTransactionalPartitionedSelfTest.class);
-        suite.addTestSuite(H2DynamicIndexTransactionalPartitionedNearSelfTest.class);
-        suite.addTestSuite(H2DynamicIndexAtomicReplicatedSelfTest.class);
-        suite.addTestSuite(H2DynamicIndexAtomicPartitionedSelfTest.class);
-        suite.addTestSuite(H2DynamicIndexAtomicPartitionedNearSelfTest.class);
-        suite.addTestSuite(H2DynamicTableSelfTest.class);
-        suite.addTestSuite(H2DynamicColumnsClientBasicSelfTest.class);
-        suite.addTestSuite(H2DynamicColumnsServerBasicSelfTest.class);
-        suite.addTestSuite(H2DynamicColumnsServerCoordinatorBasicSelfTest.class);
-
-        // DML+DDL.
-        //suite.addTestSuite(H2DynamicIndexingComplexAbstractTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexClientAtomicPartitionedTest.class);
-        //suite.addTestSuite(H2DynamicIndexingComplexClientAtomicPartitionedNoBackupsTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexClientAtomicReplicatedTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexClientTransactionalPartitionedTest.class);
-        //suite.addTestSuite(H2DynamicIndexingComplexClientTransactionalPartitionedNoBackupsTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexClientTransactionalReplicatedTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexServerAtomicPartitionedTest.class);
-        //suite.addTestSuite(H2DynamicIndexingComplexServerAtomicPartitionedNoBackupsTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexServerAtomicReplicatedTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexServerTransactionalPartitionedTest.class);
-        //suite.addTestSuite(H2DynamicIndexingComplexServerTransactionalPartitionedNoBackupsTest.class);
-        suite.addTestSuite(H2DynamicIndexingComplexServerTransactionalReplicatedTest.class);
-
-        suite.addTestSuite(DdlTransactionSelfTest.class);
-
-        // Fields queries.
-        suite.addTestSuite(SqlFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheLocalFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedFieldsQueryROSelfTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedFieldsQueryP2PEnabledSelfTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedFieldsQueryJoinNoPrimaryPartitionsSelfTest.class);
-        suite.addTestSuite(IgniteCachePartitionedFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheAtomicFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheAtomicNearEnabledFieldsQuerySelfTest.class);
-        suite.addTestSuite(IgniteCachePartitionedFieldsQueryP2PEnabledSelfTest.class);
-        suite.addTestSuite(IgniteCacheFieldsQueryNoDataSelfTest.class);
-        suite.addTestSuite(GridCacheQueryIndexingDisabledSelfTest.class);
-        suite.addTestSuite(GridOrderedMessageCancelSelfTest.class);
-        suite.addTestSuite(CacheQueryEvictDataLostTest.class);
-
-        // Full text queries.
-        suite.addTestSuite(GridCacheFullTextQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheFullTextQueryNodeJoiningSelfTest.class);
-
-        // Ignite cache and H2 comparison.
-        suite.addTestSuite(BaseH2CompareQueryTest.class);
-        suite.addTestSuite(H2CompareBigQueryTest.class);
-        suite.addTestSuite(H2CompareBigQueryDistributedJoinsTest.class);
-
-        // Cache query metrics.
-        suite.addTestSuite(CacheLocalQueryMetricsSelfTest.class);
-        suite.addTestSuite(CachePartitionedQueryMetricsDistributedSelfTest.class);
-        suite.addTestSuite(CachePartitionedQueryMetricsLocalSelfTest.class);
-        suite.addTestSuite(CacheReplicatedQueryMetricsDistributedSelfTest.class);
-        suite.addTestSuite(CacheReplicatedQueryMetricsLocalSelfTest.class);
-
-        // Cache query metrics.
-        suite.addTestSuite(CacheLocalQueryDetailMetricsSelfTest.class);
-        suite.addTestSuite(CachePartitionedQueryDetailMetricsDistributedSelfTest.class);
-        suite.addTestSuite(CachePartitionedQueryDetailMetricsLocalSelfTest.class);
-        suite.addTestSuite(CacheReplicatedQueryDetailMetricsDistributedSelfTest.class);
-        suite.addTestSuite(CacheReplicatedQueryDetailMetricsLocalSelfTest.class);
-
-        // Unmarshalling query test.
-        suite.addTestSuite(IgniteCacheP2pUnmarshallingQueryErrorTest.class);
-        suite.addTestSuite(IgniteCacheNoClassQuerySelfTest.class);
-
-        // Cancellation.
-        suite.addTestSuite(IgniteCacheDistributedQueryCancelSelfTest.class);
-        suite.addTestSuite(IgniteCacheLocalQueryCancelOrTimeoutSelfTest.class);
-
-        // Distributed joins.
-        suite.addTestSuite(H2CompareBigQueryDistributedJoinsTest.class);
-        suite.addTestSuite(IgniteCacheDistributedJoinCollocatedAndNotTest.class);
-        suite.addTestSuite(IgniteCacheDistributedJoinCustomAffinityMapper.class);
-        suite.addTestSuite(IgniteCacheDistributedJoinNoIndexTest.class);
-        suite.addTestSuite(IgniteCacheDistributedJoinPartitionedAndReplicatedTest.class);
-        suite.addTestSuite(IgniteCacheDistributedJoinQueryConditionsTest.class);
-        suite.addTestSuite(IgniteCacheDistributedJoinTest.class);
-        suite.addTestSuite(IgniteSqlDistributedJoinSelfTest.class);
-        suite.addTestSuite(IgniteSqlQueryParallelismTest.class);
-
-        // Other.
-        suite.addTestSuite(CacheIteratorScanQueryTest.class);
-        suite.addTestSuite(CacheQueryNewClientSelfTest.class);
-        suite.addTestSuite(CacheOffheapBatchIndexingSingleTypeTest.class);
-        suite.addTestSuite(CacheSqlQueryValueCopySelfTest.class);
-        suite.addTestSuite(IgniteCacheQueryCacheDestroySelfTest.class);
-        suite.addTestSuite(IgniteQueryDedicatedPoolTest.class);
-        suite.addTestSuite(IgniteSqlEntryCacheModeAgnosticTest.class);
-        suite.addTestSuite(QueryEntityCaseMismatchTest.class);
-        suite.addTestSuite(IgniteCacheDistributedPartitionQuerySelfTest.class);
-        suite.addTestSuite(IgniteCacheDistributedPartitionQueryNodeRestartsSelfTest.class);
-        suite.addTestSuite(IgniteCacheDistributedPartitionQueryConfigurationSelfTest.class);
-        suite.addTestSuite(IgniteSqlKeyValueFieldsTest.class);
-        suite.addTestSuite(IgniteSqlRoutingTest.class);
-        suite.addTestSuite(IgniteSqlNotNullConstraintTest.class);
-        suite.addTestSuite(LongIndexNameTest.class);
-        suite.addTestSuite(GridCacheQuerySqlFieldInlineSizeSelfTest.class);
-        suite.addTestSuite(IgniteSqlParameterizedQueryTest.class);
-        suite.addTestSuite(H2ConnectionLeaksSelfTest.class);
-        suite.addTestSuite(IgniteCheckClusterStateBeforeExecuteQueryTest.class);
-        suite.addTestSuite(OptimizedMarshallerIndexNameTest.class);
-        suite.addTestSuite(SqlSystemViewsSelfTest.class);
-
-        suite.addTestSuite(GridIndexRebuildSelfTest.class);
-
-        suite.addTestSuite(SqlTransactionsComandsSelfTest.class);
-
-        suite.addTestSuite(IgniteSqlDefaultValueTest.class);
-        suite.addTestSuite(IgniteDecimalSelfTest.class);
-        suite.addTestSuite(IgniteSQLColumnConstraintsTest.class);
-
-        suite.addTestSuite(IgniteCachePartitionedAtomicColumnConstraintsTest.class);
-        suite.addTestSuite(IgniteCachePartitionedTransactionalColumnConstraintsTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedAtomicColumnConstraintsTest.class);
-        suite.addTestSuite(IgniteCacheReplicatedTransactionalColumnConstraintsTest.class);
-
-        // H2 Rows on-heap cache
-        suite.addTestSuite(H2RowCacheSelfTest.class);
-        suite.addTestSuite(H2RowCachePageEvictionTest.class);
-
-        // User operation SQL
-        suite.addTestSuite(SqlParserUserSelfTest.class);
-        suite.addTestSuite(SqlUserCommandSelfTest.class);
-
-        suite.addTestSuite(ThreadLocalObjectPoolSelfTest.class);
-        suite.addTestSuite(H2StatementCacheSelfTest.class);
-        suite.addTestSuite(PreparedStatementExSelfTest.class);
-
-        // Partition loss.
-        suite.addTestSuite(IndexingCachePartitionLossPolicySelfTest.class);
-
-        // GROUP_CONCAT
-        suite.addTestSuite(IgniteSqlGroupConcatCollocatedTest.class);
-        suite.addTestSuite(IgniteSqlGroupConcatNotCollocatedTest.class);
-
-        return suite;
-    }
-}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite2.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite2.java
deleted file mode 100644
index 4b91b4307869e..0000000000000
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite2.java
+++ /dev/null
@@ -1,127 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ignite.testsuites;
-
-import junit.framework.TestSuite;
-import org.apache.ignite.internal.processors.cache.CacheScanPartitionQueryFallbackSelfTest;
-import org.apache.ignite.internal.processors.cache.IgniteCacheCrossCacheJoinRandomTest;
-import org.apache.ignite.internal.processors.cache.IgniteCacheObjectKeyIndexingSelfTest;
-import org.apache.ignite.internal.processors.cache.IgniteCachePartitionedQueryMultiThreadedSelfTest;
-import org.apache.ignite.internal.processors.cache.IgniteCacheQueryEvictsMultiThreadedSelfTest;
-import org.apache.ignite.internal.processors.cache.IgniteCacheQueryMultiThreadedSelfTest;
-import org.apache.ignite.internal.processors.cache.IgniteCacheSqlQueryMultiThreadedSelfTest;
-import org.apache.ignite.internal.processors.cache.QueryJoinWithDifferentNodeFiltersTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheClientQueryReplicatedNodeRestartSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeFailTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartDistributedJoinSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartSelfTest2;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryNodeRestartTxSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest;
-import org.apache.ignite.internal.processors.cache.distributed.near.IgniteSqlQueryWithBaselineTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentAtomicPartitionedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentAtomicReplicatedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentTransactionalPartitionedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicColumnsConcurrentTransactionalReplicatedSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexPartitionedAtomicConcurrentSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexPartitionedTransactionalConcurrentSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexReplicatedAtomicConcurrentSelfTest;
-import org.apache.ignite.internal.processors.cache.index.DynamicIndexReplicatedTransactionalConcurrentSelfTest;
-import org.apache.ignite.internal.processors.cache.query.ScanQueryOffheapExpiryPolicySelfTest;
-import org.apache.ignite.internal.processors.database.baseline.IgniteChangingBaselineCacheQueryNodeRestartSelfTest;
-import org.apache.ignite.internal.processors.database.baseline.IgniteStableBaselineCacheQueryNodeRestartsSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteCacheGroupsCompareQueryTest;
-import org.apache.ignite.internal.processors.query.IgniteCacheGroupsSqlDistributedJoinSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteCacheGroupsSqlSegmentedIndexMultiNodeSelfTest;
-import org.apache.ignite.internal.processors.query.IgniteCacheGroupsSqlSegmentedIndexSelfTest;
-import org.apache.ignite.internal.processors.query.h2.twostep.CacheQueryMemoryLeakTest;
-import org.apache.ignite.internal.processors.query.h2.twostep.DisappearedCacheCauseRetryMessageSelfTest;
-import org.apache.ignite.internal.processors.query.h2.twostep.DisappearedCacheWasNotFoundMessageSelfTest;
-import org.apache.ignite.internal.processors.query.h2.twostep.NonCollocatedRetryMessageSelfTest;
-import org.apache.ignite.internal.processors.query.h2.twostep.RetryCauseMessageSelfTest;
-import org.apache.ignite.internal.processors.query.h2.twostep.TableViewSubquerySelfTest;
-import org.apache.ignite.testframework.IgniteTestSuite;
-
-/**
- * Test suite for cache queries.
- */
-public class IgniteCacheQuerySelfTestSuite2 extends TestSuite {
-    /**
-     * @return Test suite.
-     * @throws Exception If failed.
-     */
-    public static TestSuite suite() throws Exception {
-        TestSuite suite = new IgniteTestSuite("Ignite Cache Queries Test Suite 2");
-
-        // Dynamic index create/drop tests.
-        suite.addTestSuite(DynamicIndexPartitionedAtomicConcurrentSelfTest.class);
-        suite.addTestSuite(DynamicIndexPartitionedTransactionalConcurrentSelfTest.class);
-        suite.addTestSuite(DynamicIndexReplicatedAtomicConcurrentSelfTest.class);
-        suite.addTestSuite(DynamicIndexReplicatedTransactionalConcurrentSelfTest.class);
-
-        suite.addTestSuite(DynamicColumnsConcurrentAtomicPartitionedSelfTest.class);
-        suite.addTestSuite(DynamicColumnsConcurrentTransactionalPartitionedSelfTest.class);
-        suite.addTestSuite(DynamicColumnsConcurrentAtomicReplicatedSelfTest.class);
-        suite.addTestSuite(DynamicColumnsConcurrentTransactionalReplicatedSelfTest.class);
-
-        // Distributed joins.
-        suite.addTestSuite(IgniteCacheQueryNodeRestartDistributedJoinSelfTest.class);
-        suite.addTestSuite(IgniteCacheQueryStopOnCancelOrTimeoutDistributedJoinSelfTest.class);
-
-        // Other tests.
- suite.addTestSuite(IgniteCacheQueryMultiThreadedSelfTest.class); - - suite.addTestSuite(IgniteCacheQueryEvictsMultiThreadedSelfTest.class); - - suite.addTestSuite(ScanQueryOffheapExpiryPolicySelfTest.class); - - suite.addTestSuite(IgniteCacheCrossCacheJoinRandomTest.class); - suite.addTestSuite(IgniteCacheClientQueryReplicatedNodeRestartSelfTest.class); - suite.addTestSuite(IgniteCacheQueryNodeFailTest.class); - suite.addTestSuite(IgniteCacheQueryNodeRestartSelfTest.class); - suite.addTestSuite(IgniteSqlQueryWithBaselineTest.class); - suite.addTestSuite(IgniteChangingBaselineCacheQueryNodeRestartSelfTest.class); - suite.addTestSuite(IgniteStableBaselineCacheQueryNodeRestartsSelfTest.class); - suite.addTestSuite(IgniteCacheQueryNodeRestartSelfTest2.class); - suite.addTestSuite(IgniteCacheQueryNodeRestartTxSelfTest.class); - suite.addTestSuite(IgniteCacheSqlQueryMultiThreadedSelfTest.class); - suite.addTestSuite(IgniteCachePartitionedQueryMultiThreadedSelfTest.class); - suite.addTestSuite(CacheScanPartitionQueryFallbackSelfTest.class); - suite.addTestSuite(IgniteCacheDistributedQueryStopOnCancelOrTimeoutSelfTest.class); - suite.addTestSuite(IgniteCacheObjectKeyIndexingSelfTest.class); - - suite.addTestSuite(IgniteCacheGroupsCompareQueryTest.class); - suite.addTestSuite(IgniteCacheGroupsSqlSegmentedIndexSelfTest.class); - suite.addTestSuite(IgniteCacheGroupsSqlSegmentedIndexMultiNodeSelfTest.class); - suite.addTestSuite(IgniteCacheGroupsSqlDistributedJoinSelfTest.class); - - suite.addTestSuite(QueryJoinWithDifferentNodeFiltersTest.class); - - suite.addTestSuite(CacheQueryMemoryLeakTest.class); - - suite.addTestSuite(NonCollocatedRetryMessageSelfTest.class); - suite.addTestSuite(RetryCauseMessageSelfTest.class); - suite.addTestSuite(DisappearedCacheCauseRetryMessageSelfTest.class); - suite.addTestSuite(DisappearedCacheWasNotFoundMessageSelfTest.class); - - suite.addTestSuite(TableViewSubquerySelfTest.class); - - return suite; - } -} diff --git 
a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite3.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite3.java
index 08511d919d20f..b23e8529931e3 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite3.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite3.java
@@ -17,18 +17,22 @@
 package org.apache.ignite.testsuites;
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
-import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousBatchAckTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousBatchForceServerModeAckTest;
-import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryAsyncFilterListenerTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryConcurrentPartitionUpdateTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryCounterPartitionedAtomicTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryCounterPartitionedTxTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryCounterReplicatedAtomicTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryCounterReplicatedTxTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryEventBufferTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryExecuteInPrimaryTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFactoryAsyncFilterRandomOperationTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFactoryFilterRandomOperationTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverAtomicNearEnabledSelfSelfTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryLostPartitionTest;
@@ -37,121 +41,86 @@
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryOrderingEventTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTwoNodesTest;
-import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerFailoverTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerClientSelfTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerFailoverTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerLocalSelfTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerPartitionedSelfTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerRandomOperationsTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerReplicatedSelfTest;
-import org.apache.ignite.internal.processors.cache.query.continuous.CacheKeepBinaryIterationNearEnabledTest; -import org.apache.ignite.internal.processors.cache.query.continuous.CacheKeepBinaryIterationTest; -import org.apache.ignite.internal.processors.cache.query.continuous.CacheKeepBinaryIterationStoreEnabledTest; import org.apache.ignite.internal.processors.cache.query.continuous.ClientReconnectContinuousQueryTest; -import org.apache.ignite.internal.processors.cache.query.continuous.ContinuousQueryMarshallerTest; -import org.apache.ignite.internal.processors.cache.query.continuous.ContinuousQueryPeerClassLoadingTest; -import org.apache.ignite.internal.processors.cache.query.continuous.ContinuousQueryRemoteFilterMissingInClassPathSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.ContinuousQueryReassignmentTest; import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryAtomicNearEnabledSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryAtomicP2PDisabledSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryAtomicSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryConcurrentTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryLocalAtomicSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryLocalSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryMultiNodesFilteringTest; import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryNodesFilteringTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionAtomicOneNodeTest; import 
org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionTxOneNodeTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionedOnlySelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionedP2PDisabledSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionedSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedAtomicOneNodeTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedAtomicSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedP2PDisabledSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedTxOneNodeTest; -import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryTxSelfTest; -import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryBackupQueueTest; import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryClientReconnectTest; -import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryClientTest; import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryClientTxReconnectTest; -import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryImmutableEntryTest; import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryNoUnsubscribeTest; import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryReconnectTest; +import org.junit.runner.RunWith; +import 
org.junit.runners.AllTests;
 
 /**
  * Test suite for cache queries.
  */
-public class IgniteCacheQuerySelfTestSuite3 extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheQuerySelfTestSuite3 {
     /**
      * @return Test suite.
-     * @throws Exception If failed.
      */
-    public static TestSuite suite() throws Exception {
-        TestSuite suite = new TestSuite("Ignite Cache Queries Test Suite 3");
+    public static TestSuite suite() {
+        TestSuite suite = new TestSuite("Ignite Cache Continuous Queries Test Suite");
 
-        // Continuous queries.
-        suite.addTestSuite(GridCacheContinuousQueryLocalSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryLocalAtomicSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryReplicatedSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryReplicatedAtomicSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryReplicatedP2PDisabledSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryPartitionedSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryPartitionedOnlySelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryPartitionedP2PDisabledSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryTxSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryAtomicSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryAtomicNearEnabledSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryAtomicP2PDisabledSelfTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryReplicatedTxOneNodeTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryReplicatedAtomicOneNodeTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryPartitionTxOneNodeTest.class);
-        suite.addTestSuite(GridCacheContinuousQueryPartitionAtomicOneNodeTest.class);
-        suite.addTestSuite(IgniteCacheContinuousQueryClientTest.class);
-        suite.addTestSuite(IgniteCacheContinuousQueryClientReconnectTest.class);
-
suite.addTestSuite(IgniteCacheContinuousQueryClientTxReconnectTest.class); - suite.addTestSuite(CacheContinuousQueryRandomOperationsTest.class); - suite.addTestSuite(CacheContinuousQueryRandomOperationsTwoNodesTest.class); - suite.addTestSuite(GridCacheContinuousQueryConcurrentTest.class); - suite.addTestSuite(CacheContinuousQueryAsyncFilterListenerTest.class); - suite.addTestSuite(CacheContinuousQueryFactoryFilterRandomOperationTest.class); - suite.addTestSuite(CacheContinuousQueryFactoryAsyncFilterRandomOperationTest.class); - suite.addTestSuite(CacheContinuousQueryOrderingEventTest.class); - suite.addTestSuite(CacheContinuousQueryOperationFromCallbackTest.class); - suite.addTestSuite(CacheContinuousQueryOperationP2PTest.class); - suite.addTestSuite(CacheContinuousBatchAckTest.class); - suite.addTestSuite(CacheContinuousBatchForceServerModeAckTest.class); - suite.addTestSuite(CacheContinuousQueryExecuteInPrimaryTest.class); - suite.addTestSuite(CacheContinuousQueryLostPartitionTest.class); - suite.addTestSuite(ContinuousQueryRemoteFilterMissingInClassPathSelfTest.class); - suite.addTestSuite(GridCacheContinuousQueryNodesFilteringTest.class); - suite.addTestSuite(GridCacheContinuousQueryMultiNodesFilteringTest.class); - suite.addTestSuite(IgniteCacheContinuousQueryImmutableEntryTest.class); - suite.addTestSuite(CacheKeepBinaryIterationTest.class); - suite.addTestSuite(CacheKeepBinaryIterationStoreEnabledTest.class); - suite.addTestSuite(CacheKeepBinaryIterationNearEnabledTest.class); - suite.addTestSuite(IgniteCacheContinuousQueryBackupQueueTest.class); - suite.addTestSuite(IgniteCacheContinuousQueryNoUnsubscribeTest.class); - suite.addTestSuite(ClientReconnectContinuousQueryTest.class); - suite.addTestSuite(ContinuousQueryPeerClassLoadingTest.class); - suite.addTestSuite(ClientReconnectContinuousQueryTest.class); - suite.addTestSuite(ContinuousQueryMarshallerTest.class); + // Continuous queries 1. 
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryNodesFilteringTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryPartitionTxOneNodeTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryExecuteInPrimaryTest.class));
+        suite.addTest(new JUnit4TestAdapter(ClientReconnectContinuousQueryTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheContinuousQueryNoUnsubscribeTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheContinuousQueryClientTxReconnectTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheContinuousQueryClientReconnectTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryAtomicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryAtomicNearEnabledSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryReplicatedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFactoryAsyncFilterRandomOperationTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryPartitionedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousBatchForceServerModeAckTest.class));
+        suite.addTest(new JUnit4TestAdapter(ContinuousQueryReassignmentTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryConcurrentPartitionUpdateTest.class));
 
-        suite.addTestSuite(CacheContinuousQueryConcurrentPartitionUpdateTest.class);
-        suite.addTestSuite(CacheContinuousQueryEventBufferTest.class);
 
-        suite.addTestSuite(CacheContinuousWithTransformerReplicatedSelfTest.class);
-        suite.addTestSuite(CacheContinuousWithTransformerLocalSelfTest.class);
-        suite.addTestSuite(CacheContinuousWithTransformerPartitionedSelfTest.class);
-        suite.addTestSuite(CacheContinuousWithTransformerClientSelfTest.class);
-        suite.addTestSuite(CacheContinuousWithTransformerFailoverTest.class);
-        suite.addTestSuite(CacheContinuousWithTransformerRandomOperationsTest.class);
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerReplicatedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerLocalSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerPartitionedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerClientSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerFailoverTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerRandomOperationsTest.class));
 
-        //suite.addTestSuite(CacheContinuousQueryCounterPartitionedAtomicTest.class);
-        //suite.addTestSuite(CacheContinuousQueryCounterPartitionedTxTest.class);
-        //suite.addTestSuite(CacheContinuousQueryCounterReplicatedAtomicTest.class);
-        //suite.addTestSuite(CacheContinuousQueryCounterReplicatedTxTest.class);
-        //suite.addTestSuite(CacheContinuousQueryFailoverAtomicNearEnabledSelfSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryCounterPartitionedAtomicTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryCounterPartitionedTxTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryCounterReplicatedAtomicTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryCounterReplicatedTxTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFailoverAtomicNearEnabledSelfSelfTest.class));
 
-        //suite.addTestSuite(IgniteCacheContinuousQueryReconnectTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheContinuousQueryReconnectTest.class));
 
         return suite;
     }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite4.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite4.java
index 2aa3419764427..b07d16872617c 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite4.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite4.java
@@ -17,35 +17,44 @@
 package org.apache.ignite.testsuites;
+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryAsyncFailoverAtomicSelfTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryAsyncFailoverMvccTxSelfTest;
 import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryAsyncFailoverTxReplicatedSelfTest;
import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryAsyncFailoverTxSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverAtomicReplicatedSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverAtomicSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverMvccTxReplicatedSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverMvccTxSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverTxReplicatedSelfTest; import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverTxSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Test suite for cache queries. */ -public class IgniteCacheQuerySelfTestSuite4 extends TestSuite { +@RunWith(AllTests.class) +public class IgniteCacheQuerySelfTestSuite4 { /** * @return Test suite. - * @throws Exception If failed. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Ignite Cache Queries Test Suite 4"); // Continuous queries failover tests. 
- suite.addTestSuite(CacheContinuousQueryFailoverAtomicSelfTest.class); - suite.addTestSuite(CacheContinuousQueryFailoverAtomicReplicatedSelfTest.class); - suite.addTestSuite(CacheContinuousQueryFailoverTxSelfTest.class); - suite.addTestSuite(CacheContinuousQueryFailoverTxReplicatedSelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFailoverAtomicSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFailoverAtomicReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFailoverTxSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFailoverTxReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFailoverMvccTxSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFailoverMvccTxReplicatedSelfTest.class)); - suite.addTestSuite(CacheContinuousQueryAsyncFailoverAtomicSelfTest.class); - suite.addTestSuite(CacheContinuousQueryAsyncFailoverTxReplicatedSelfTest.class); - suite.addTestSuite(CacheContinuousQueryAsyncFailoverTxSelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryAsyncFailoverAtomicSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryAsyncFailoverTxReplicatedSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryAsyncFailoverTxSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryAsyncFailoverMvccTxSelfTest.class)); return suite; } diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite5.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite5.java new file mode 100644 index 0000000000000..c6e1550fbef10 --- /dev/null +++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite5.java @@ -0,0 +1,76 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryEventBufferTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFactoryFilterRandomOperationTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryLostPartitionTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryOperationFromCallbackTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTwoNodesTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerFailoverTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerLocalSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.ContinuousQueryPeerClassLoadingTest; +import org.apache.ignite.internal.processors.cache.query.continuous.ContinuousQueryRemoteFilterMissingInClassPathSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryAtomicP2PDisabledSelfTest; +import 
org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryConcurrentTest; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryLocalSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionedP2PDisabledSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedP2PDisabledSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedTxOneNodeTest; +import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryTxSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryBackupQueueTest; +import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryImmutableEntryTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; + +/** + * Test suite for cache queries. + */ +@RunWith(AllTests.class) +public class IgniteCacheQuerySelfTestSuite5 { + /** + * @return Test suite. + */ + public static TestSuite suite() { + TestSuite suite = new TestSuite("Ignite Cache Continuous Queries Test Suite 2"); + + // Continuous queries 2. 
+ suite.addTest(new JUnit4TestAdapter(IgniteCacheContinuousQueryImmutableEntryTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerLocalSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryEventBufferTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryReplicatedTxOneNodeTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryLocalSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerFailoverTest.class)); + suite.addTest(new JUnit4TestAdapter(ContinuousQueryRemoteFilterMissingInClassPathSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(ContinuousQueryPeerClassLoadingTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryAtomicP2PDisabledSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryTxSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryReplicatedP2PDisabledSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryPartitionedP2PDisabledSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryLostPartitionTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryConcurrentTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryRandomOperationsTwoNodesTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteCacheContinuousQueryBackupQueueTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryOperationFromCallbackTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryFactoryFilterRandomOperationTest.class)); + + return suite; + } +} diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite6.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite6.java new file mode 100644 index 0000000000000..9460c9e830591 --- /dev/null +++ 
b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite6.java @@ -0,0 +1,78 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.ignite.testsuites; + +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousBatchAckTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryAsyncFilterListenerTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryOperationP2PTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryOrderingEventTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryRandomOperationsTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerPartitionedSelfTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousWithTransformerRandomOperationsTest; +import org.apache.ignite.internal.processors.cache.query.continuous.CacheKeepBinaryIterationNearEnabledTest; +import 
org.apache.ignite.internal.processors.cache.query.continuous.CacheKeepBinaryIterationStoreEnabledTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.CacheKeepBinaryIterationTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.ContinuousQueryMarshallerTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryLocalAtomicSelfTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryMultiNodesFilteringTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionAtomicOneNodeTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryPartitionedOnlySelfTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedAtomicOneNodeTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.GridCacheContinuousQueryReplicatedAtomicSelfTest;
+import org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryClientTest;
+import org.apache.ignite.internal.processors.query.MemLeakOnSqlWithClientReconnectTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;
+
+/**
+ * Test suite for cache queries.
+ */
+@RunWith(AllTests.class)
+public class IgniteCacheQuerySelfTestSuite6 {
+    /**
+     * @return Test suite.
+     */
+    public static TestSuite suite() {
+        TestSuite suite = new TestSuite("Ignite Cache Continuous Queries Test Suite 3");
+
+        // Continuous queries 3.
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryPartitionAtomicOneNodeTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerPartitionedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryLocalAtomicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryReplicatedAtomicOneNodeTest.class));
+        suite.addTest(new JUnit4TestAdapter(ContinuousQueryMarshallerTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryReplicatedAtomicSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheKeepBinaryIterationTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryMultiNodesFilteringTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheKeepBinaryIterationStoreEnabledTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheKeepBinaryIterationNearEnabledTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheContinuousQueryPartitionedOnlySelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryOperationP2PTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousBatchAckTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryOrderingEventTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheContinuousQueryClientTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryAsyncFilterListenerTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousWithTransformerRandomOperationsTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheContinuousQueryRandomOperationsTest.class));
+        suite.addTest(new JUnit4TestAdapter(MemLeakOnSqlWithClientReconnectTest.class));
+
+        return suite;
+    }
+}
diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingAndPersistenceTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingAndPersistenceTestSuite.java
index 1bc60f65658c8..540d100184e0c 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingAndPersistenceTestSuite.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingAndPersistenceTestSuite.java
@@ -17,21 +17,26 @@
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
+import org.apache.ignite.internal.processors.cache.StartCachesInParallelTest;
 import org.apache.ignite.util.GridCommandHandlerIndexingTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;

 /**
  * Cache tests using indexing.
  */
-public class IgniteCacheWithIndexingAndPersistenceTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheWithIndexingAndPersistenceTestSuite {
     /**
      * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Ignite Cache With Indexing And Persistence Test Suite");

-        suite.addTestSuite(GridCommandHandlerIndexingTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCommandHandlerIndexingTest.class));
+        suite.addTest(new JUnit4TestAdapter(StartCachesInParallelTest.class));

         return suite;
     }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingTestSuite.java
index e351cb6f4a106..55d129bddd224 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingTestSuite.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheWithIndexingTestSuite.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.BinaryTypeMismatchLoggingTest;
 import org.apache.ignite.internal.processors.cache.CacheBinaryKeyConcurrentQueryTest;
@@ -27,6 +28,7 @@
 import org.apache.ignite.internal.processors.cache.CacheQueryFilterExpiredTest;
 import org.apache.ignite.internal.processors.cache.CacheRandomOperationsMultithreadedTest;
 import org.apache.ignite.internal.processors.cache.ClientReconnectAfterClusterRestartTest;
+import org.apache.ignite.internal.processors.cache.ClusterReadOnlyModeSqlTest;
 import org.apache.ignite.internal.processors.cache.GridCacheOffHeapSelfTest;
 import org.apache.ignite.internal.processors.cache.GridCacheOffheapIndexEntryEvictTest;
 import org.apache.ignite.internal.processors.cache.GridCacheOffheapIndexGetSelfTest;
@@ -41,54 +43,55 @@
 import org.apache.ignite.internal.processors.cache.ttl.CacheTtlTransactionalPartitionedSelfTest;
 import org.apache.ignite.internal.processors.client.IgniteDataStreamerTest;
 import org.apache.ignite.internal.processors.query.h2.database.InlineIndexHelperTest;
-import org.apache.ignite.testframework.junits.GridAbstractTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;

 /**
  * Cache tests using indexing.
  */
-public class IgniteCacheWithIndexingTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteCacheWithIndexingTestSuite {
     /**
      * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
-        System.setProperty(GridAbstractTest.PERSISTENCE_IN_TESTS_IS_ALLOWED_PROPERTY, "false");
-
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Ignite Cache With Indexing Test Suite");

-        suite.addTestSuite(InlineIndexHelperTest.class);
+        suite.addTest(new JUnit4TestAdapter(InlineIndexHelperTest.class));

-        suite.addTestSuite(GridIndexingWithNoopSwapSelfTest.class);
-        suite.addTestSuite(GridCacheOffHeapSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridIndexingWithNoopSwapSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheOffHeapSelfTest.class));

-        suite.addTestSuite(CacheTtlTransactionalLocalSelfTest.class);
-        suite.addTestSuite(CacheTtlTransactionalPartitionedSelfTest.class);
-        suite.addTestSuite(CacheTtlAtomicLocalSelfTest.class);
-        suite.addTestSuite(CacheTtlAtomicPartitionedSelfTest.class);
+        suite.addTest(new JUnit4TestAdapter(CacheTtlTransactionalLocalSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheTtlTransactionalPartitionedSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheTtlAtomicLocalSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheTtlAtomicPartitionedSelfTest.class));

-        suite.addTestSuite(GridCacheOffheapIndexGetSelfTest.class);
-        suite.addTestSuite(GridCacheOffheapIndexEntryEvictTest.class);
-        suite.addTestSuite(CacheIndexStreamerTest.class);
+        suite.addTest(new JUnit4TestAdapter(GridCacheOffheapIndexGetSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(GridCacheOffheapIndexEntryEvictTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheIndexStreamerTest.class));

-        suite.addTestSuite(CacheConfigurationP2PTest.class);
+        suite.addTest(new JUnit4TestAdapter(CacheConfigurationP2PTest.class));

-        suite.addTestSuite(IgniteCacheConfigurationPrimitiveTypesSelfTest.class);
-        suite.addTestSuite(IgniteClientReconnectQueriesTest.class);
-        suite.addTestSuite(CacheRandomOperationsMultithreadedTest.class);
-        suite.addTestSuite(IgniteCacheStarvationOnRebalanceTest.class);
-        suite.addTestSuite(CacheOperationsWithExpirationTest.class);
-        suite.addTestSuite(CacheBinaryKeyConcurrentQueryTest.class);
-        suite.addTestSuite(CacheQueryFilterExpiredTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheConfigurationPrimitiveTypesSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteClientReconnectQueriesTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheRandomOperationsMultithreadedTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheStarvationOnRebalanceTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheOperationsWithExpirationTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheBinaryKeyConcurrentQueryTest.class));
+        suite.addTest(new JUnit4TestAdapter(CacheQueryFilterExpiredTest.class));

-        suite.addTestSuite(ClientReconnectAfterClusterRestartTest.class);
+        suite.addTest(new JUnit4TestAdapter(ClientReconnectAfterClusterRestartTest.class));

-        suite.addTestSuite(CacheQueryAfterDynamicCacheStartFailureTest.class);
+        suite.addTest(new JUnit4TestAdapter(CacheQueryAfterDynamicCacheStartFailureTest.class));

-        suite.addTestSuite(IgniteCacheGroupsSqlTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteCacheGroupsSqlTest.class));

-        suite.addTestSuite(IgniteDataStreamerTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteDataStreamerTest.class));

-        suite.addTestSuite(BinaryTypeMismatchLoggingTest.class);
+        suite.addTest(new JUnit4TestAdapter(BinaryTypeMismatchLoggingTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(ClusterReadOnlyModeSqlTest.class));

         return suite;
     }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakWithIndexingTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakWithIndexingTestSuite.java
index 36cd10130da1d..9946f94db32e4 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakWithIndexingTestSuite.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteDbMemoryLeakWithIndexingTestSuite.java
@@ -17,23 +17,26 @@
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.database.IgniteDbMemoryLeakIndexedTest;
 import org.apache.ignite.internal.processors.database.IgniteDbMemoryLeakSqlQueryTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;

 /**
  * Page memory leaks tests using indexing.
  */
-public class IgniteDbMemoryLeakWithIndexingTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteDbMemoryLeakWithIndexingTestSuite {
     /**
      * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Ignite Db Memory Leaks With Indexing Test Suite");

-        suite.addTestSuite(IgniteDbMemoryLeakSqlQueryTest.class);
-        suite.addTestSuite(IgniteDbMemoryLeakIndexedTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteDbMemoryLeakSqlQueryTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteDbMemoryLeakIndexedTest.class));

         return suite;
     }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingCoreTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingCoreTestSuite.java
index 2989ccdddc150..fdd3a47fb6a31 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingCoreTestSuite.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingCoreTestSuite.java
@@ -16,6 +16,7 @@
  */
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsAtomicCacheHistoricalRebalancingTest;
 import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsAtomicCacheRebalancingTest;
@@ -27,8 +28,10 @@
 import org.apache.ignite.internal.processors.cache.persistence.IgnitePdsTxHistoricalRebalancingTest;
 import org.apache.ignite.internal.processors.cache.persistence.IgnitePersistentStoreCacheGroupsTest;
 import org.apache.ignite.internal.processors.cache.persistence.PersistenceDirectoryWarningLoggingTest;
+import org.apache.ignite.internal.processors.cache.persistence.db.IgniteLogicalRecoveryTest;
 import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsMultiNodePutGetRestartTest;
 import org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsPageEvictionTest;
+import org.apache.ignite.internal.processors.cache.persistence.db.IgniteSequentialNodeCrashRecoveryTest;
 import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsCacheDestroyDuringCheckpointTest;
 import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsCacheIntegrationTest;
 import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsDiskErrorsRecoveringTest;
@@ -40,49 +43,59 @@
 import org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteWalRecoveryWithCompactionTest;
 import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalPathsTest;
 import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalRecoveryTxLogicalRecordsTest;
+import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalRolloverRecordLoggingFsyncTest;
+import org.apache.ignite.internal.processors.cache.persistence.db.wal.WalRolloverRecordLoggingLogOnlyTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;

 /**
  * Test suite for tests that cover core PDS features and depend on indexing module.
  */
-public class IgnitePdsWithIndexingCoreTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgnitePdsWithIndexingCoreTestSuite {
     /**
      * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Ignite Persistent Store With Indexing Test Suite");

-        suite.addTestSuite(IgnitePdsCacheIntegrationTest.class);
-        suite.addTestSuite(IgnitePdsPageEvictionTest.class);
-        suite.addTestSuite(IgnitePdsMultiNodePutGetRestartTest.class);
-        suite.addTestSuite(IgnitePersistentStoreCacheGroupsTest.class);
-        suite.addTestSuite(PersistenceDirectoryWarningLoggingTest.class);
-        suite.addTestSuite(WalPathsTest.class);
-        suite.addTestSuite(WalRecoveryTxLogicalRecordsTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsCacheIntegrationTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsPageEvictionTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsMultiNodePutGetRestartTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePersistentStoreCacheGroupsTest.class));
+        suite.addTest(new JUnit4TestAdapter(PersistenceDirectoryWarningLoggingTest.class));
+        suite.addTest(new JUnit4TestAdapter(WalPathsTest.class));
+        suite.addTest(new JUnit4TestAdapter(WalRecoveryTxLogicalRecordsTest.class));
+        suite.addTest(new JUnit4TestAdapter(WalRolloverRecordLoggingFsyncTest.class));
+        suite.addTest(new JUnit4TestAdapter(WalRolloverRecordLoggingLogOnlyTest.class));

-        suite.addTestSuite(IgniteWalRecoveryTest.class);
-        suite.addTestSuite(IgniteWalRecoveryWithCompactionTest.class);
-        suite.addTestSuite(IgnitePdsNoActualWalHistoryTest.class);
-        suite.addTestSuite(IgniteWalRebalanceTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteWalRecoveryTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteWalRecoveryWithCompactionTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsNoActualWalHistoryTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteWalRebalanceTest.class));

-        suite.addTestSuite(IgnitePdsAtomicCacheRebalancingTest.class);
-        suite.addTestSuite(IgnitePdsAtomicCacheHistoricalRebalancingTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsAtomicCacheRebalancingTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsAtomicCacheHistoricalRebalancingTest.class));

-        suite.addTestSuite(IgnitePdsTxCacheRebalancingTest.class);
-        suite.addTestSuite(IgnitePdsTxHistoricalRebalancingTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsTxCacheRebalancingTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsTxHistoricalRebalancingTest.class));

-        suite.addTestSuite(IgniteWalRecoveryPPCTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteWalRecoveryPPCTest.class));

-        suite.addTestSuite(IgnitePdsDiskErrorsRecoveringTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsDiskErrorsRecoveringTest.class));

-        suite.addTestSuite(IgnitePdsCacheDestroyDuringCheckpointTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsCacheDestroyDuringCheckpointTest.class));

-        suite.addTestSuite(IgnitePdsBinaryMetadataOnClusterRestartTest.class);
-        suite.addTestSuite(IgnitePdsMarshallerMappingRestoreOnNodeStartTest.class);
-        suite.addTestSuite(IgnitePdsThreadInterruptionTest.class);
-        suite.addTestSuite(IgnitePdsBinarySortObjectFieldsTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsBinaryMetadataOnClusterRestartTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsMarshallerMappingRestoreOnNodeStartTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsThreadInterruptionTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsBinarySortObjectFieldsTest.class));

-        suite.addTestSuite(IgnitePdsCorruptedIndexTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsCorruptedIndexTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteLogicalRecoveryTest.class));
+
+        suite.addTest(new JUnit4TestAdapter(IgniteSequentialNodeCrashRecoveryTest.class));

         return suite;
     }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingTestSuite.java b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingTestSuite.java
index 67b9fad63cde2..af408978e6b03 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingTestSuite.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/testsuites/IgnitePdsWithIndexingTestSuite.java
@@ -17,6 +17,7 @@
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.cache.IgnitePdsSingleNodeWithIndexingAndGroupPutGetPersistenceSelfTest;
 import org.apache.ignite.internal.processors.cache.IgnitePdsSingleNodeWithIndexingPutGetPersistenceTest;
@@ -26,26 +27,30 @@
 import org.apache.ignite.internal.processors.database.IgnitePersistentStoreQueryWithMultipleClassesPerCacheTest;
 import org.apache.ignite.internal.processors.database.IgnitePersistentStoreSchemaLoadTest;
 import org.apache.ignite.internal.processors.database.IgniteTwoRegionsRebuildIndexTest;
+import org.apache.ignite.internal.processors.cache.persistence.db.IndexingMultithreadedLoadContinuousRestartTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;

 /**
  *
  */
-public class IgnitePdsWithIndexingTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgnitePdsWithIndexingTestSuite {
     /**
      * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
      */
-    public static TestSuite suite() throws Exception {
+    public static TestSuite suite() {
         TestSuite suite = new TestSuite("Ignite Db Memory Leaks With Indexing Test Suite");

-        suite.addTestSuite(IgniteDbSingleNodeWithIndexingWalRestoreTest.class);
-        suite.addTestSuite(IgniteDbSingleNodeWithIndexingPutGetTest.class);
-        suite.addTestSuite(IgniteDbMultiNodeWithIndexingPutGetTest.class);
-        suite.addTestSuite(IgnitePdsSingleNodeWithIndexingPutGetPersistenceTest.class);
-        suite.addTestSuite(IgnitePdsSingleNodeWithIndexingAndGroupPutGetPersistenceSelfTest.class);
-        suite.addTestSuite(IgnitePersistentStoreSchemaLoadTest.class);
-        suite.addTestSuite(IgnitePersistentStoreQueryWithMultipleClassesPerCacheTest.class);
-        suite.addTestSuite(IgniteTwoRegionsRebuildIndexTest.class);
+        suite.addTest(new JUnit4TestAdapter(IgniteDbSingleNodeWithIndexingWalRestoreTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteDbSingleNodeWithIndexingPutGetTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteDbMultiNodeWithIndexingPutGetTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsSingleNodeWithIndexingPutGetPersistenceTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePdsSingleNodeWithIndexingAndGroupPutGetPersistenceSelfTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePersistentStoreSchemaLoadTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgnitePersistentStoreQueryWithMultipleClassesPerCacheTest.class));
+        suite.addTest(new JUnit4TestAdapter(IgniteTwoRegionsRebuildIndexTest.class));
+        suite.addTest(new JUnit4TestAdapter(IndexingMultithreadedLoadContinuousRestartTest.class));

         return suite;
     }
diff --git a/modules/indexing/src/test/java/org/apache/ignite/util/GridCommandHandlerIndexingTest.java b/modules/indexing/src/test/java/org/apache/ignite/util/GridCommandHandlerIndexingTest.java
index 48e94c1e62cce..1be768b4ec5fc 100644
--- a/modules/indexing/src/test/java/org/apache/ignite/util/GridCommandHandlerIndexingTest.java
+++ b/modules/indexing/src/test/java/org/apache/ignite/util/GridCommandHandlerIndexingTest.java
@@ -17,6 +17,9 @@
 package org.apache.ignite.util;

+import java.io.File;
+import java.io.IOException;
+import java.io.RandomAccessFile;
 import java.io.Serializable;
 import java.util.ArrayList;
 import java.util.Iterator;
@@ -24,8 +27,8 @@
 import java.util.concurrent.ThreadLocalRandom;
 import javax.cache.Cache;
 import org.apache.ignite.Ignite;
-import org.apache.ignite.IgniteCache;
 import org.apache.ignite.IgniteCheckedException;
+import org.apache.ignite.IgniteDataStreamer;
 import org.apache.ignite.cache.CacheAtomicityMode;
 import org.apache.ignite.cache.CacheWriteSynchronizationMode;
 import org.apache.ignite.cache.QueryEntity;
@@ -39,65 +42,50 @@
 import org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
 import org.apache.ignite.internal.processors.cache.persistence.CacheDataRow;
 import org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager;
+import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager;
 import org.apache.ignite.internal.processors.cache.tree.SearchRow;
 import org.apache.ignite.internal.processors.query.GridQueryProcessor;
 import org.apache.ignite.internal.util.lang.GridIterator;
 import org.apache.ignite.internal.util.typedef.F;
 import org.apache.ignite.internal.util.typedef.internal.CU;
 import org.apache.ignite.internal.util.typedef.internal.U;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 import static org.apache.ignite.internal.commandline.CommandHandler.EXIT_CODE_OK;
+import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.INDEX_FILE_NAME;

 /**
  *
  */
+@RunWith(JUnit4.class)
 public class GridCommandHandlerIndexingTest extends GridCommandHandlerTest {
+    /** Test cache name. */
+    private static final String CACHE_NAME = "persons-cache-vi";
+
     /**
      * Tests that validation doesn't fail if nothing is broken.
      */
+    @Test
     public void testValidateIndexesNoErrors() throws Exception {
-        Ignite ignite = startGrids(2);
-
-        ignite.cluster().active(true);
-
-        Ignite client = startGrid("client");
-
-        String cacheName = "persons-cache-vi";
-
-        IgniteCache personCache = createPersonCache(client, cacheName);
-
-        ThreadLocalRandom rand = ThreadLocalRandom.current();
-
-        for (int i = 0; i < 10_000; i++)
-            personCache.put(i, new Person(rand.nextInt(), String.valueOf(rand.nextLong())));
+        prepareGridForTest();

         injectTestSystemOut();

-        assertEquals(EXIT_CODE_OK, execute("--cache", "validate_indexes", cacheName));
+        assertEquals(EXIT_CODE_OK, execute("--cache", "validate_indexes", CACHE_NAME));

-        assertTrue(testOut.toString().contains("validate_indexes has finished, no issues found"));
+        assertTrue(testOut.toString().contains("no issues found"));
     }

     /**
      * Tests that missing rows in CacheDataTree are detected.
      */
+    @Test
     public void testBrokenCacheDataTreeShouldFailValidation() throws Exception {
-        Ignite ignite = startGrids(2);
+        Ignite ignite = prepareGridForTest();

-        ignite.cluster().active(true);
-
-        Ignite client = startGrid("client");
-
-        String cacheName = "persons-cache-vi";
-
-        IgniteCache personCache = createPersonCache(client, cacheName);
-
-        ThreadLocalRandom rand = ThreadLocalRandom.current();
-
-        for (int i = 0; i < 10_000; i++)
-            personCache.put(i, new Person(rand.nextInt(), String.valueOf(rand.nextLong())));
-
-        breakCacheDataTree(ignite, cacheName, 1);
+        breakCacheDataTree(ignite, CACHE_NAME, 1);

         injectTestSystemOut();

@@ -105,11 +93,11 @@ public void testBrokenCacheDataTreeShouldFailValidation() throws Exception {
             execute(
                 "--cache",
                 "validate_indexes",
-                cacheName,
-                "checkFirst", "10000",
-                "checkThrough", "10"));
+                CACHE_NAME,
+                "--check-first", "10000",
+                "--check-through", "10"));

-        assertTrue(testOut.toString().contains("validate_indexes has finished with errors"));
+        assertTrue(testOut.toString().contains("issues found (listed above)"));

         assertTrue(testOut.toString().contains(
             "Key is present in SQL index, but is missing in corresponding data page."));
@@ -118,7 +106,49 @@ public void testBrokenCacheDataTreeShouldFailValidation() throws Exception {
     /**
      * Tests that missing rows in H2 indexes are detected.
      */
+    @Test
     public void testBrokenSqlIndexShouldFailValidation() throws Exception {
+        Ignite ignite = prepareGridForTest();
+
+        breakSqlIndex(ignite, CACHE_NAME);
+
+        injectTestSystemOut();
+
+        assertEquals(EXIT_CODE_OK, execute("--cache", "validate_indexes", CACHE_NAME));
+
+        assertTrue(testOut.toString().contains("issues found (listed above)"));
+    }
+
+    /**
+     * Tests that corrupted pages in the index partition are detected.
+     */
+    @Test
+    public void testCorruptedIndexPartitionShouldFailValidation() throws Exception {
+        Ignite ignite = prepareGridForTest();
+
+        forceCheckpoint();
+
+        File idxPath = indexPartition(ignite, CACHE_NAME);
+
+        stopAllGrids();
+
+        corruptIndexPartition(idxPath);
+
+        startGrids(2);
+
+        awaitPartitionMapExchange();
+
+        injectTestSystemOut();
+
+        assertEquals(EXIT_CODE_OK, execute("--cache", "validate_indexes", CACHE_NAME));
+
+        assertTrue(testOut.toString().contains("issues found (listed above)"));
+    }
+
+    /**
+     *
+     */
+    private Ignite prepareGridForTest() throws Exception{
         Ignite ignite = startGrids(2);

         ignite.cluster().active(true);
@@ -127,20 +157,52 @@ public void testBrokenSqlIndexShouldFailValidation() throws Exception {
         String cacheName = "persons-cache-vi";

-        IgniteCache personCache = createPersonCache(client, cacheName);
+        client.getOrCreateCache(new CacheConfiguration()
+            .setName(cacheName)
+            .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
+            .setAtomicityMode(CacheAtomicityMode.ATOMIC)
+            .setBackups(1)
+            .setQueryEntities(F.asList(personEntity(true, true)))
+            .setAffinity(new RendezvousAffinityFunction(false, 32)));

         ThreadLocalRandom rand = ThreadLocalRandom.current();

-        for (int i = 0; i < 10_000; i++)
-            personCache.put(i, new Person(rand.nextInt(), String.valueOf(rand.nextLong())));
+        try (IgniteDataStreamer streamer = client.dataStreamer(CACHE_NAME)) {
+            for (int i = 0; i < 10_000; i++)
+                streamer.addData(i, new Person(rand.nextInt(), String.valueOf(rand.nextLong())));
+        }

-        breakSqlIndex(ignite, cacheName);
+        return ignite;
+    }

-        injectTestSystemOut();
+    /**
+     * Get index partition file for specific node and cache.
+     */
+    private File indexPartition(Ignite ig, String cacheName) {
+        IgniteEx ig0 = (IgniteEx)ig;

-        assertEquals(EXIT_CODE_OK, execute("--cache", "validate_indexes", cacheName));
+        FilePageStoreManager pageStoreManager = ((FilePageStoreManager) ig0.context().cache().context().pageStore());

-        assertTrue(testOut.toString().contains("validate_indexes has finished with errors"));
+        return new File(pageStoreManager.cacheWorkDir(false, cacheName), INDEX_FILE_NAME);
+    }
+
+    /**
+     * Write some random trash in index partition.
+     */
+    private void corruptIndexPartition(File path) throws IOException {
+        assertTrue(path.exists());
+
+        ThreadLocalRandom rand = ThreadLocalRandom.current();
+
+        try (RandomAccessFile idx = new RandomAccessFile(path, "rw")) {
+            byte[] trash = new byte[1024];
+
+            rand.nextBytes(trash);
+
+            idx.seek(4096);
+
+            idx.write(trash);
+        }
     }

     /**
@@ -241,22 +303,6 @@ private void breakSqlIndex(Ignite ig, String cacheName) throws Exception {
         }
     }

-    /**
-     * Dynamically creates cache with SQL indexes.
-     *
-     * @param ig Client.
-     * @param cacheName Cache name.
-     */
-    private IgniteCache createPersonCache(Ignite ig, String cacheName) {
-        return ig.getOrCreateCache(new CacheConfiguration()
-            .setName(cacheName)
-            .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
-            .setAtomicityMode(CacheAtomicityMode.ATOMIC)
-            .setBackups(1)
-            .setQueryEntities(F.asList(personEntity(true, true)))
-            .setAffinity(new RendezvousAffinityFunction(false, 32)));
-    }
-
     /**
      * @param idxName Index name.
      * @param idxOrgId Index org id.
diff --git a/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest.java b/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest.java
index 9cb0f21562cfc..6a144fcfc6089 100644
--- a/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest.java
+++ b/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest.java
@@ -31,10 +31,14 @@
 import org.apache.ignite.cache.query.SqlFieldsQuery;
 import org.apache.ignite.cache.query.annotations.QuerySqlField;
 import org.apache.ignite.configuration.CacheConfiguration;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;

 /**
  * Tests queries against entities with JSR-310 Java 8 Date and Time API fields.
  */
+@RunWith(JUnit4.class)
 public class CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest extends CacheQueryJsr310Java8DateTimeApiAbstractTest {
     /**
      * Entity containing JSR-310 fields.
@@ -222,6 +226,7 @@ private static CacheConfiguration createCacheConfi
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testInsertEntityFields() throws Exception {
         cache.remove(entity.getId());

@@ -247,6 +252,7 @@ public void testInsertEntityFields() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testDateDiffForLocalDateTimeFieldAtMidnight() throws Exception {
         SqlFieldsQuery qry = new SqlFieldsQuery("select DATEDIFF('DAY', locDateTime, CURRENT_DATE ()) from EntityWithJsr310Fields");

@@ -262,6 +268,7 @@ public void testDateDiffForLocalDateTimeFieldAtMidnight() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSelectLocalTimeFieldReturnsTime() throws Exception {
         SqlFieldsQuery qry = new SqlFieldsQuery("select locTime from EntityWithJsr310Fields");

@@ -276,6 +283,7 @@ public void testSelectLocalTimeFieldReturnsTime() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSelectLocalDateFieldReturnsDate() throws Exception {
         SqlFieldsQuery qry = new SqlFieldsQuery("select locDate from EntityWithJsr310Fields");

@@ -290,6 +298,7 @@ public void testSelectLocalDateFieldReturnsDate() throws Exception {
      *
      * @throws Exception If failed.
      */
+    @Test
     public void testSelectLocalDateTimeFieldReturnsTimestamp() throws Exception {
         SqlFieldsQuery qry = new SqlFieldsQuery("select locDateTime from EntityWithJsr310Fields");

@@ -302,6 +311,7 @@ public void testSelectLocalDateTimeFieldReturnsTimestamp() throws Exception {
     /**
      * Tests selection of an entity by a {@link LocalTime} field.
      */
+    @Test
     public void testSelectByAllJsr310Fields() {
         SqlFieldsQuery qry = new SqlFieldsQuery(
             "select locDate from EntityWithJsr310Fields where locTime = ? and locDate = ? and locDateTime = ?"
@@ -316,6 +326,7 @@ public void testSelectByAllJsr310Fields() {
     /**
      * Tests updating of all JSR-310 fields.
      */
+    @Test
     public void testUpdateAllJsr310Fields() {
         EntityWithJsr310Fields expEntity = new EntityWithJsr310Fields(entity);

@@ -337,6 +348,7 @@ public void testUpdateAllJsr310Fields() {
     /**
      * Tests deleting by all JSR-310 fields.
      */
+    @Test
     public void testDeleteByAllJsr310Fields() {
         SqlFieldsQuery qry = new SqlFieldsQuery(
             "delete from EntityWithJsr310Fields where locTime = ? and locDate = ? and locDateTime = ?"
diff --git a/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryJsr310Java8DateTimeApiAbstractTest.java b/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryJsr310Java8DateTimeApiAbstractTest.java
index d0ca8f1b6d1a3..61c5d1d3eea35 100644
--- a/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryJsr310Java8DateTimeApiAbstractTest.java
+++ b/modules/indexing/src/test/java8/org/apache/ignite/internal/processors/query/h2/CacheQueryJsr310Java8DateTimeApiAbstractTest.java
@@ -25,18 +25,12 @@
 import org.apache.ignite.cache.CacheWriteSynchronizationMode;
 import org.apache.ignite.configuration.CacheConfiguration;
 import org.apache.ignite.configuration.IgniteConfiguration;
-import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
-import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

 /**
  * Base class for JSR-310 Java 8 Date and Time API queries tests.
  */
 public abstract class CacheQueryJsr310Java8DateTimeApiAbstractTest extends GridCommonAbstractTest {
-    /** IP finder. */
-    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);
-
     /** {@link LocalTime} instance. */
     protected static final LocalTime LOCAL_TIME = LocalTime.now().minusHours(10);

@@ -53,16 +47,6 @@ public abstract class CacheQueryJsr310Java8DateTimeApiAbstractTest extends GridC
     /** {@link LocalDateTime} instance. */
     protected static final LocalDateTime LOCAL_DATE_TIME = LocalDateTime.of(LOCAL_DATE, LocalTime.MIDNIGHT);

-    /** {@inheritDoc} */
-    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
-        IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
-        TcpDiscoverySpi discoverySpi = (TcpDiscoverySpi)cfg.getDiscoverySpi();
-
-        discoverySpi.setIpFinder(IP_FINDER);
-
-        return cfg;
-    }
-
     /**
      * Creates a cache configuration with the specified cache name
      * and indexed type key/value pairs.
diff --git a/modules/indexing/src/test/java8/org/apache/ignite/testsuites/CacheQueryJsr310Java8DateTimeApiSupportTestSuite.java b/modules/indexing/src/test/java8/org/apache/ignite/testsuites/CacheQueryJsr310Java8DateTimeApiSupportTestSuite.java
index aa7aed8a59c01..7999218dc1ed6 100644
--- a/modules/indexing/src/test/java8/org/apache/ignite/testsuites/CacheQueryJsr310Java8DateTimeApiSupportTestSuite.java
+++ b/modules/indexing/src/test/java8/org/apache/ignite/testsuites/CacheQueryJsr310Java8DateTimeApiSupportTestSuite.java
@@ -17,13 +17,17 @@
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.internal.processors.query.h2.CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;

 /**
  * Test suite for JSR-310 Java 8 Date and Time API queries.
  */
-public class CacheQueryJsr310Java8DateTimeApiSupportTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class CacheQueryJsr310Java8DateTimeApiSupportTestSuite {
     /**
      * @return Test suite.
      * @throws Exception If failed.
@@ -31,7 +35,7 @@ public class CacheQueryJsr310Java8DateTimeApiSupportTestSuite extends TestSuite
     public static TestSuite suite() throws Exception {
         TestSuite suite = new TestSuite("JSR-310 Java 8 Date and Time API Cache Queries Test Suite");

-        suite.addTestSuite(CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest.class);
+        suite.addTest(new JUnit4TestAdapter(CacheQueryEntityWithJsr310Java8DateTimeApiFieldsTest.class));

         return suite;
     }
diff --git a/modules/indexing/src/test/resources/tde.jks b/modules/indexing/src/test/resources/tde.jks
new file mode 100644
index 0000000000000..1bf532c292dec
Binary files /dev/null and b/modules/indexing/src/test/resources/tde.jks differ
diff --git a/modules/jcl/src/test/java/org/apache/ignite/logger/jcl/JclLoggerTest.java b/modules/jcl/src/test/java/org/apache/ignite/logger/jcl/JclLoggerTest.java
index f7b60ecdcc70c..0568e0ce8525a 100644
--- a/modules/jcl/src/test/java/org/apache/ignite/logger/jcl/JclLoggerTest.java
+++ b/modules/jcl/src/test/java/org/apache/ignite/logger/jcl/JclLoggerTest.java
@@ -17,21 +17,22 @@
 package org.apache.ignite.logger.jcl;

-import junit.framework.TestCase;
 import org.apache.commons.logging.LogFactory;
 import org.apache.ignite.IgniteLogger;
 import org.apache.ignite.testframework.junits.common.GridCommonTest;
+import org.junit.Test;

 /**
  * Jcl logger test.
  */
 @GridCommonTest(group = "Logger")
-public class JclLoggerTest extends TestCase {
+public class JclLoggerTest {
     /** */
     @SuppressWarnings({"FieldCanBeLocal"})
     private IgniteLogger log;

     /** */
+    @Test
     public void testLogInitialize() {
         log = new JclLogger(LogFactory.getLog(JclLoggerTest.class.getName()));

@@ -45,4 +46,4 @@ public void testLogInitialize() {
         assert log.getLogger(JclLoggerTest.class.getName()) instanceof JclLogger;
     }
-}
\ No newline at end of file
+}
diff --git a/modules/jcl/src/test/java/org/apache/ignite/testsuites/IgniteJclTestSuite.java b/modules/jcl/src/test/java/org/apache/ignite/testsuites/IgniteJclTestSuite.java
index 8c65cd92335f1..5736006bba8e6 100644
--- a/modules/jcl/src/test/java/org/apache/ignite/testsuites/IgniteJclTestSuite.java
+++ b/modules/jcl/src/test/java/org/apache/ignite/testsuites/IgniteJclTestSuite.java
@@ -17,22 +17,25 @@
 package org.apache.ignite.testsuites;

+import junit.framework.JUnit4TestAdapter;
 import junit.framework.TestSuite;
 import org.apache.ignite.logger.jcl.JclLoggerTest;
+import org.junit.runner.RunWith;
+import org.junit.runners.AllTests;

 /**
  * Commons logging test.
  */
-public class IgniteJclTestSuite extends TestSuite {
+@RunWith(AllTests.class)
+public class IgniteJclTestSuite {
     /**
      * @return Test suite.
-     * @throws Exception Thrown in case of the failure.
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Commons Logging Test Suite"); - suite.addTest(new TestSuite(JclLoggerTest.class)); + suite.addTest(new JUnit4TestAdapter(JclLoggerTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/jms11/src/test/java/org/apache/ignite/stream/jms11/IgniteJmsStreamerTest.java b/modules/jms11/src/test/java/org/apache/ignite/stream/jms11/IgniteJmsStreamerTest.java index caf8f9eb644bc..4b2e819c23258 100644 --- a/modules/jms11/src/test/java/org/apache/ignite/stream/jms11/IgniteJmsStreamerTest.java +++ b/modules/jms11/src/test/java/org/apache/ignite/stream/jms11/IgniteJmsStreamerTest.java @@ -57,8 +57,9 @@ import org.apache.ignite.lang.IgniteBiPredicate; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; -import org.junit.After; -import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; @@ -67,6 +68,7 @@ * * @author Raul Kripalani */ +@RunWith(JUnit4.class) public class IgniteJmsStreamerTest extends GridCommonAbstractTest { /** */ private static final int CACHE_ENTRY_COUNT = 100; @@ -99,9 +101,8 @@ public IgniteJmsStreamerTest() { /** * @throws Exception If failed. */ - @Before @SuppressWarnings("unchecked") - public void beforeTest() throws Exception { + @Override public void beforeTest() throws Exception { grid().getOrCreateCache(defaultCacheConfiguration()); broker = new BrokerService(); @@ -127,8 +128,7 @@ public void beforeTest() throws Exception { /** * @throws Exception If failed.
*/ - @After - public void afterTest() throws Exception { + @Override public void afterTest() throws Exception { grid().cache(DEFAULT_CACHE_NAME).clear(); broker.stop(); @@ -138,6 +138,7 @@ public void afterTest() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueFromName() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -167,6 +168,7 @@ public void testQueueFromName() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTopicFromName() throws JMSException, InterruptedException { Destination dest = new ActiveMQTopic(TOPIC_NAME); @@ -199,6 +201,7 @@ public void testTopicFromName() throws JMSException, InterruptedException { /** * @throws Exception If failed. */ + @Test public void testQueueFromExplicitDestination() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -228,6 +231,7 @@ public void testQueueFromExplicitDestination() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTopicFromExplicitDestination() throws JMSException, InterruptedException { Destination dest = new ActiveMQTopic(TOPIC_NAME); @@ -259,6 +263,7 @@ public void testTopicFromExplicitDestination() throws JMSException, InterruptedE /** * @throws Exception If failed. */ + @Test public void testInsertMultipleCacheEntriesFromOneMessage() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -287,6 +292,7 @@ public void testInsertMultipleCacheEntriesFromOneMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDurableSubscriberStartStopStart() throws Exception { Destination dest = new ActiveMQTopic(TOPIC_NAME); @@ -327,6 +333,7 @@ public void testDurableSubscriberStartStopStart() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testQueueMessagesConsumedInBatchesCompletionSizeBased() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -366,6 +373,7 @@ public void testQueueMessagesConsumedInBatchesCompletionSizeBased() throws Excep /** * @throws Exception If failed. */ + @Test public void testQueueMessagesConsumedInBatchesCompletionTimeBased() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -416,6 +424,7 @@ public void testQueueMessagesConsumedInBatchesCompletionTimeBased() throws Excep /** * @throws Exception If failed. */ + @Test public void testGenerateNoEntries() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -444,6 +453,7 @@ public void testGenerateNoEntries() throws Exception { /** * @throws Exception If failed. */ + @Test public void testTransactedSessionNoBatching() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -473,6 +483,7 @@ public void testTransactedSessionNoBatching() throws Exception { /** * @throws Exception If failed. */ + @Test public void testQueueMultipleThreads() throws Exception { Destination dest = new ActiveMQQueue(QUEUE_NAME); @@ -512,6 +523,7 @@ public void testQueueMultipleThreads() throws Exception { * * @throws Exception If fails. 
*/ + @Test public void testExceptionListener() throws Exception { // restart broker with auth plugin if (broker.isStarted()) @@ -696,4 +708,4 @@ private void produceStringMessages(Destination dest, boolean singleMsg) throws J } } -} \ No newline at end of file +} diff --git a/modules/jta/pom.xml b/modules/jta/pom.xml index 9c3bd6cc5dbac..b6b0b8ab1cb00 100644 --- a/modules/jta/pom.xml +++ b/modules/jta/pom.xml @@ -50,7 +50,7 @@ org.ow2.jotm jotm-core - 2.2.3 + ${jotm.version} test @@ -95,7 +95,7 @@ org.jboss.spec.javax.rmi jboss-rmi-api_1.0_spec - 1.0.6.Final + ${jboss.rmi.version} test diff --git a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/CacheJndiTmFactorySelfTest.java b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/CacheJndiTmFactorySelfTest.java index 494c5b726e196..0dae5f87f37e6 100644 --- a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/CacheJndiTmFactorySelfTest.java +++ b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/CacheJndiTmFactorySelfTest.java @@ -32,10 +32,14 @@ import org.apache.ignite.cache.jta.jndi.CacheJndiTmFactory; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * */ +@RunWith(JUnit4.class) public class CacheJndiTmFactorySelfTest extends GridCommonAbstractTest { /** */ private static final String TM_JNDI_NAME = "java:/comp/env/tm/testtm1"; @@ -87,6 +91,7 @@ public class CacheJndiTmFactorySelfTest extends GridCommonAbstractTest { /** * @throws Exception If failed. */ + @Test public void testFactory() throws Exception { CacheJndiTmFactory f = new CacheJndiTmFactory("wrongJndiName", NOT_TM_JNDI_NAME, TM_JNDI_NAME2, TM_JNDI_NAME); @@ -100,6 +105,7 @@ public void testFactory() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testFactoryException() throws Exception { final CacheJndiTmFactory f = new CacheJndiTmFactory("wrongJndiName", NOT_TM_JNDI_NAME, "wrongJndiName2"); diff --git a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaConfigurationValidationSelfTest.java b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaConfigurationValidationSelfTest.java index c7b7feddbb384..dc65ae5daf3b6 100644 --- a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaConfigurationValidationSelfTest.java +++ b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaConfigurationValidationSelfTest.java @@ -26,12 +26,16 @@ import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; /** * Configuration validation test. */ +@RunWith(JUnit4.class) public class GridCacheJtaConfigurationValidationSelfTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { @@ -53,6 +57,7 @@ public class GridCacheJtaConfigurationValidationSelfTest extends GridCommonAbstr * * @throws Exception If failed. 
*/ + @Test public void testAtomicWithTmLookup() throws Exception { GridTestUtils.assertThrows(log, new Callable() { @Override public Void call() throws Exception { diff --git a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaFactoryConfigValidationSelfTest.java b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaFactoryConfigValidationSelfTest.java index bb12d5e2b3de8..c63df3cdb90a3 100644 --- a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaFactoryConfigValidationSelfTest.java +++ b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheJtaFactoryConfigValidationSelfTest.java @@ -25,12 +25,16 @@ import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; /** * Configuration validation test. */ +@RunWith(JUnit4.class) public class GridCacheJtaFactoryConfigValidationSelfTest extends GridCommonAbstractTest { /** */ private Factory factory; @@ -53,6 +57,7 @@ public class GridCacheJtaFactoryConfigValidationSelfTest extends GridCommonAbstr /** * @throws Exception If failed. */ + @Test public void testNullFactory() throws Exception { factory = new NullTxFactory(); @@ -70,6 +75,7 @@ public void testNullFactory() throws Exception { /** * @throws Exception If failed. */ + @Test public void testWrongTypeFactory() throws Exception { factory = new IntegerTxFactory(); @@ -87,6 +93,7 @@ public void testWrongTypeFactory() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testExceptionFactory() throws Exception { factory = new ExceptionTxFactory(); diff --git a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaLifecycleAwareSelfTest.java b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaLifecycleAwareSelfTest.java index c10d11541ccd1..bd36a2593a452 100644 --- a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaLifecycleAwareSelfTest.java +++ b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaLifecycleAwareSelfTest.java @@ -31,12 +31,16 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.testframework.junits.common.GridAbstractLifecycleAwareSelfTest; import org.jetbrains.annotations.Nullable; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.cache.CacheMode.PARTITIONED; /** * Test for {@link LifecycleAware} support for {@link CacheTmLookup}. */ +@RunWith(JUnit4.class) public class GridJtaLifecycleAwareSelfTest extends GridAbstractLifecycleAwareSelfTest { /** */ private static final String CACHE_NAME = "cache"; @@ -140,11 +144,13 @@ public static class TestTxFactory extends GridAbstractLifecycleAwareSelfTest.Tes } /** {@inheritDoc} */ + @Test @Override public void testLifecycleAware() throws Exception { // No-op, see other tests.
} /** {@inheritDoc} */ + @Test public void testCacheLookupLifecycleAware() throws Exception { tmConfigurationType = TmConfigurationType.CACHE_LOOKUP; @@ -152,6 +158,7 @@ public void testCacheLookupLifecycleAware() throws Exception { } /** {@inheritDoc} */ + @Test public void testGlobalLookupLifecycleAware() throws Exception { tmConfigurationType = TmConfigurationType.GLOBAL_LOOKUP; @@ -159,6 +166,7 @@ public void testGlobalLookupLifecycleAware() throws Exception { } /** {@inheritDoc} */ + @Test public void testFactoryLifecycleAware() throws Exception { tmConfigurationType = TmConfigurationType.FACTORY; diff --git a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaTransactionManagerSelfTest.java b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaTransactionManagerSelfTest.java index a7bb78523646e..dff2aacb62856 100644 --- a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaTransactionManagerSelfTest.java +++ b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/GridJtaTransactionManagerSelfTest.java @@ -26,6 +26,9 @@ import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.transactions.TransactionConcurrency; import org.apache.ignite.transactions.TransactionIsolation; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.objectweb.jotm.Current; import org.objectweb.jotm.Jotm; @@ -35,6 +38,7 @@ /** * JTA Tx Manager test. */ +@RunWith(JUnit4.class) public class GridJtaTransactionManagerSelfTest extends GridCommonAbstractTest { /** Java Open Transaction Manager facade. */ private static Jotm jotm; @@ -70,6 +74,7 @@ public class GridJtaTransactionManagerSelfTest extends GridCommonAbstractTest { * * @throws Exception If failed. 
*/ + @Test public void testJtaTxContextSwitch() throws Exception { for (TransactionIsolation isolation : TransactionIsolation.values()) { TransactionConfiguration cfg = grid().context().config().getTransactionConfiguration(); @@ -144,6 +149,7 @@ public void testJtaTxContextSwitch() throws Exception { /** * @throws Exception If failed. */ + @Test public void testJtaTxContextSwitchWithExistingTx() throws Exception { for (TransactionIsolation isolation : TransactionIsolation.values()) { TransactionConfiguration cfg = grid().context().config().getTransactionConfiguration(); diff --git a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/AbstractCacheJtaSelfTest.java b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/AbstractCacheJtaSelfTest.java index 89fb72f2a98cf..1ea553db203c9 100644 --- a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/AbstractCacheJtaSelfTest.java +++ b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/AbstractCacheJtaSelfTest.java @@ -27,6 +27,9 @@ import org.apache.ignite.internal.processors.cache.GridCacheAbstractSelfTest; import org.apache.ignite.testframework.GridTestSafeThreadFactory; import org.apache.ignite.transactions.Transaction; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import org.objectweb.jotm.Jotm; import java.util.concurrent.Callable; import java.util.concurrent.CountDownLatch; @@ -39,6 +42,7 @@ /** * Abstract class for cache tests. */ +@RunWith(JUnit4.class) public abstract class AbstractCacheJtaSelfTest extends GridCacheAbstractSelfTest { /** */ private static final int GRID_CNT = 1; @@ -97,6 +101,7 @@ public abstract class AbstractCacheJtaSelfTest extends GridCacheAbstractSelfTest * * @throws Exception If failed. 
*/ + @Test public void testJta() throws Exception { UserTransaction jtaTx = jotm.getUserTransaction(); @@ -140,7 +145,7 @@ public void testJta() throws Exception { /** * @throws Exception If failed. */ - @SuppressWarnings("ConstantConditions") + @Test public void testJtaTwoCaches() throws Exception { UserTransaction jtaTx = jotm.getUserTransaction(); @@ -190,6 +195,7 @@ public void testJtaTwoCaches() throws Exception { /** * @throws Exception If failed. */ + @Test public void testAsyncOpAwait() throws Exception { final IgniteCache cache = jcache(); diff --git a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/GridPartitionedCacheJtaLookupClassNameSelfTest.java b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/GridPartitionedCacheJtaLookupClassNameSelfTest.java index 7357f8ea9af80..0429de8b2c466 100644 --- a/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/GridPartitionedCacheJtaLookupClassNameSelfTest.java +++ b/modules/jta/src/test/java/org/apache/ignite/internal/processors/cache/jta/GridPartitionedCacheJtaLookupClassNameSelfTest.java @@ -27,10 +27,14 @@ import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testsuites.IgniteIgnore; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Lookup class name based JTA integration test using PARTITIONED cache. 
*/ +@RunWith(JUnit4.class) public class GridPartitionedCacheJtaLookupClassNameSelfTest extends AbstractCacheJtaSelfTest { /** {@inheritDoc} */ @Override protected void configureJta(IgniteConfiguration cfg) { @@ -40,7 +44,8 @@ public class GridPartitionedCacheJtaLookupClassNameSelfTest extends AbstractCach /** * */ - @IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-1094", forceFailure = true) + @IgniteIgnore(value = "https://issues.apache.org/jira/browse/IGNITE-10723", forceFailure = true) + @Test public void testIncompatibleTmLookup() { final IgniteEx ignite = grid(0); diff --git a/modules/jta/src/test/java/org/apache/ignite/testsuites/IgniteJtaTestSuite.java b/modules/jta/src/test/java/org/apache/ignite/testsuites/IgniteJtaTestSuite.java index 3cc7935eb19a6..82350b282960f 100644 --- a/modules/jta/src/test/java/org/apache/ignite/testsuites/IgniteJtaTestSuite.java +++ b/modules/jta/src/test/java/org/apache/ignite/testsuites/IgniteJtaTestSuite.java @@ -17,6 +17,7 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.internal.processors.cache.CacheJndiTmFactorySelfTest; import org.apache.ignite.internal.processors.cache.GridCacheJtaConfigurationValidationSelfTest; @@ -30,35 +31,37 @@ import org.apache.ignite.internal.processors.cache.jta.GridReplicatedCacheJtaLookupClassNameSelfTest; import org.apache.ignite.internal.processors.cache.GridJtaLifecycleAwareSelfTest; import org.apache.ignite.testframework.IgniteTestSuite; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * JTA integration tests. */ -public class IgniteJtaTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteJtaTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new IgniteTestSuite("JTA Integration Test Suite"); - suite.addTestSuite(GridPartitionedCacheJtaFactorySelfTest.class); - suite.addTestSuite(GridReplicatedCacheJtaFactorySelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridPartitionedCacheJtaFactorySelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridReplicatedCacheJtaFactorySelfTest.class)); - suite.addTestSuite(GridPartitionedCacheJtaLookupClassNameSelfTest.class); - suite.addTestSuite(GridReplicatedCacheJtaLookupClassNameSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridPartitionedCacheJtaLookupClassNameSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridReplicatedCacheJtaLookupClassNameSelfTest.class)); - suite.addTestSuite(GridPartitionedCacheJtaFactoryUseSyncSelfTest.class); - suite.addTestSuite(GridReplicatedCacheJtaFactoryUseSyncSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridPartitionedCacheJtaFactoryUseSyncSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridReplicatedCacheJtaFactoryUseSyncSelfTest.class)); - suite.addTestSuite(GridJtaLifecycleAwareSelfTest.class); - suite.addTestSuite(GridCacheJtaConfigurationValidationSelfTest.class); - suite.addTestSuite(GridCacheJtaFactoryConfigValidationSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridJtaLifecycleAwareSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheJtaConfigurationValidationSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(GridCacheJtaFactoryConfigValidationSelfTest.class)); - suite.addTestSuite(GridJtaTransactionManagerSelfTest.class); + suite.addTest(new JUnit4TestAdapter(GridJtaTransactionManagerSelfTest.class)); // Factory - suite.addTestSuite(CacheJndiTmFactorySelfTest.class); + suite.addTest(new JUnit4TestAdapter(CacheJndiTmFactorySelfTest.class)); return suite; } diff --git a/modules/kafka/pom.xml b/modules/kafka/pom.xml index 18ffcaa201d69..5a93fb1803a94 
100644 --- a/modules/kafka/pom.xml +++ b/modules/kafka/pom.xml @@ -54,6 +54,14 @@ ${kafka.version} + + org.apache.kafka + kafka-clients + ${kafka.version} + test + test + + org.apache.kafka kafka_2.11 @@ -73,6 +81,13 @@ curator-test ${curator.version} test + + + + guava + com.google.guava + + diff --git a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/IgniteKafkaStreamerSelfTestSuite.java b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/IgniteKafkaStreamerSelfTestSuite.java index c8d413ab79021..f86f7a782278d 100644 --- a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/IgniteKafkaStreamerSelfTestSuite.java +++ b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/IgniteKafkaStreamerSelfTestSuite.java @@ -17,27 +17,30 @@ package org.apache.ignite.stream.kafka; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.stream.kafka.connect.IgniteSinkConnectorTest; import org.apache.ignite.stream.kafka.connect.IgniteSourceConnectorTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Apache Kafka streamers tests. */ -public class IgniteKafkaStreamerSelfTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteKafkaStreamerSelfTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Apache Kafka streamer Test Suite"); // Kafka streamer. - suite.addTest(new TestSuite(KafkaIgniteStreamerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(KafkaIgniteStreamerSelfTest.class)); // Kafka streamers via Connect API. 
- suite.addTest(new TestSuite(IgniteSinkConnectorTest.class)); - suite.addTest(new TestSuite(IgniteSourceConnectorTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSinkConnectorTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSourceConnectorTest.class)); return suite; } diff --git a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/KafkaIgniteStreamerSelfTest.java b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/KafkaIgniteStreamerSelfTest.java index 48d4a8d951d85..24fd8a33a5c76 100644 --- a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/KafkaIgniteStreamerSelfTest.java +++ b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/KafkaIgniteStreamerSelfTest.java @@ -31,19 +31,27 @@ import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.IgniteDataStreamer; +import org.apache.ignite.IgniteLogger; import org.apache.ignite.events.CacheEvent; +import org.apache.ignite.internal.IgniteEx; import org.apache.ignite.internal.util.typedef.internal.A; import org.apache.ignite.lang.IgniteBiPredicate; +import org.apache.ignite.resources.IgniteInstanceResource; +import org.apache.ignite.resources.LoggerResource; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.kafka.clients.consumer.ConsumerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import org.apache.kafka.common.serialization.StringDeserializer; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; /** * Tests {@link KafkaStreamer}. */ +@RunWith(JUnit4.class) public class KafkaIgniteStreamerSelfTest extends GridCommonAbstractTest { /** Embedded Kafka. */ private TestKafkaBroker embeddedBroker; @@ -92,6 +100,7 @@ public KafkaIgniteStreamerSelfTest() { * @throws TimeoutException If timed out. * @throws InterruptedException If interrupted. 
*/ + @Test public void testKafkaStreamer() throws TimeoutException, InterruptedException { embeddedBroker.createTopic(TOPIC_NAME, PARTITIONS, REPLICATION_FACTOR); @@ -201,9 +210,24 @@ record -> { final CountDownLatch latch = new CountDownLatch(CNT); IgniteBiPredicate locLsnr = new IgniteBiPredicate() { + @IgniteInstanceResource + private Ignite ig; + + @LoggerResource + private IgniteLogger log; + + /** {@inheritDoc} */ @Override public boolean apply(UUID uuid, CacheEvent evt) { latch.countDown(); + if (log.isInfoEnabled()) { + IgniteEx igEx = (IgniteEx)ig; + + UUID nodeId = igEx.localNode().id(); + + log.info("Received event=" + evt + ", nodeId=" + nodeId); + } + return true; } }; @@ -211,7 +235,8 @@ record -> { ignite.events(ignite.cluster().forCacheNodes(DEFAULT_CACHE_NAME)).remoteListen(locLsnr, null, EVT_CACHE_OBJECT_PUT); // Checks all events successfully processed in 10 seconds. - assertTrue(latch.await(10, TimeUnit.SECONDS)); + assertTrue("Failed to await latch completion, still waiting for " + latch.getCount() + " events", + latch.await(10, TimeUnit.SECONDS)); for (Map.Entry entry : keyValMap.entrySet()) assertEquals(entry.getValue(), cache.get(entry.getKey())); diff --git a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/TestKafkaBroker.java b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/TestKafkaBroker.java index 4f0d1d3fa018d..9b9b377d07261 100644 --- a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/TestKafkaBroker.java +++ b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/TestKafkaBroker.java @@ -27,7 +27,8 @@ import java.util.concurrent.TimeoutException; import kafka.server.KafkaConfig; import kafka.server.KafkaServer; -import kafka.utils.SystemTime$; +import kafka.zk.KafkaZkClient; +import org.apache.kafka.common.utils.SystemTime; import kafka.utils.TestUtils; import kafka.utils.ZkUtils; import org.I0Itec.zkclient.ZkClient; @@ -101,7 +102,9 @@ public void createTopic(String topic, int partitions, int
replicationFactor) servers.add(kafkaSrv); - TestUtils.createTopic(zkUtils, topic, partitions, replicationFactor, + KafkaZkClient client = kafkaSrv.zkClient(); + + TestUtils.createTopic(client, topic, partitions, replicationFactor, scala.collection.JavaConversions.asScalaBuffer(servers), new Properties()); } @@ -154,7 +157,7 @@ public void shutdown() { private void setupKafkaServer() throws IOException { kafkaCfg = new KafkaConfig(getKafkaConfig()); - kafkaSrv = TestUtils.createServer(kafkaCfg, SystemTime$.MODULE$); + kafkaSrv = TestUtils.createServer(kafkaCfg, new SystemTime()); kafkaSrv.startup(); } diff --git a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSinkConnectorTest.java b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSinkConnectorTest.java index 90306a7eabbad..24a605e77b27d 100644 --- a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSinkConnectorTest.java +++ b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSinkConnectorTest.java @@ -41,6 +41,7 @@ import org.apache.kafka.connect.runtime.Herder; import org.apache.kafka.connect.runtime.Worker; import org.apache.kafka.connect.runtime.WorkerConfig; +import org.apache.kafka.connect.runtime.isolation.Plugins; import org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo; import org.apache.kafka.connect.runtime.standalone.StandaloneConfig; import org.apache.kafka.connect.runtime.standalone.StandaloneHerder; @@ -48,7 +49,11 @@ import org.apache.kafka.connect.sink.SinkRecord; import org.apache.kafka.connect.storage.OffsetBackingStore; import org.apache.kafka.connect.util.Callback; +import org.apache.kafka.connect.util.ConnectUtils; import org.apache.kafka.connect.util.FutureCallback; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; import static org.easymock.EasyMock.mock; @@ -56,6 +61,7 @@ 
/** * Tests for {@link IgniteSinkConnector}. */ +@RunWith(JUnit4.class) public class IgniteSinkConnectorTest extends GridCommonAbstractTest { /** Number of input messages. */ private static final int EVENT_CNT = 10000; @@ -88,22 +94,22 @@ public class IgniteSinkConnectorTest extends GridCommonAbstractTest { private static Ignite grid; /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override protected void beforeTest() throws Exception { kafkaBroker = new TestKafkaBroker(); for (String topic : TOPICS) kafkaBroker.createTopic(topic, PARTITIONS, REPLICATION_FACTOR); - WorkerConfig workerCfg = new StandaloneConfig(makeWorkerProps()); + Map props = makeWorkerProps(); + WorkerConfig workerCfg = new StandaloneConfig(props); OffsetBackingStore offBackingStore = mock(OffsetBackingStore.class); offBackingStore.configure(workerCfg); - worker = new Worker(WORKER_ID, new SystemTime(), workerCfg, offBackingStore); + worker = new Worker(WORKER_ID, new SystemTime(), new Plugins(props), workerCfg, offBackingStore); worker.start(); - herder = new StandaloneHerder(worker); + herder = new StandaloneHerder(worker, ConnectUtils.lookupKafkaClusterId(workerCfg)); herder.start(); } @@ -125,7 +131,6 @@ public class IgniteSinkConnectorTest extends GridCommonAbstractTest { } /** {@inheritDoc} */ - @SuppressWarnings("unchecked") @Override protected void beforeTestsStarted() throws Exception { IgniteConfiguration cfg = loadConfiguration("modules/kafka/src/test/resources/example-ignite.xml"); @@ -134,6 +139,10 @@ public class IgniteSinkConnectorTest extends GridCommonAbstractTest { grid = startGrid("igniteServerNode", cfg); } + /** + * @throws Exception If failed. + */ + @Test public void testSinkPutsWithoutTransformation() throws Exception { Map sinkProps = makeSinkProps(Utils.join(TOPICS, ",")); @@ -142,6 +151,10 @@ public void testSinkPutsWithoutTransformation() throws Exception { testSinkPuts(sinkProps, false); } + /** + * @throws Exception If failed.
+ */ + @Test public void testSinkPutsWithTransformation() throws Exception { testSinkPuts(makeSinkProps(Utils.join(TOPICS, ",")), true); } @@ -288,7 +301,6 @@ private Map makeWorkerProps() { * Test transformer. */ static class TestExtractor implements StreamSingleTupleExtractor { - /** {@inheritDoc} */ @Override public Map.Entry extract(SinkRecord msg) { String[] parts = ((String)msg.value()).split("_"); diff --git a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSourceConnectorTest.java b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSourceConnectorTest.java index cc487aaf7a304..0d0612911e7fd 100644 --- a/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSourceConnectorTest.java +++ b/modules/kafka/src/test/java/org/apache/ignite/stream/kafka/connect/IgniteSourceConnectorTest.java @@ -47,18 +47,24 @@ import org.apache.kafka.connect.runtime.Herder; import org.apache.kafka.connect.runtime.Worker; import org.apache.kafka.connect.runtime.WorkerConfig; +import org.apache.kafka.connect.runtime.isolation.Plugins; import org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo; import org.apache.kafka.connect.runtime.standalone.StandaloneConfig; import org.apache.kafka.connect.runtime.standalone.StandaloneHerder; import org.apache.kafka.connect.storage.MemoryOffsetBackingStore; import org.apache.kafka.connect.util.Callback; +import org.apache.kafka.connect.util.ConnectUtils; import org.apache.kafka.connect.util.FutureCallback; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; /** * Tests for {@link IgniteSourceConnector}. */ +@RunWith(JUnit4.class) public class IgniteSourceConnectorTest extends GridCommonAbstractTest { /** Number of input messages. 
*/ private static final int EVENT_CNT = 100; @@ -98,15 +104,16 @@ public class IgniteSourceConnectorTest extends GridCommonAbstractTest { @Override protected void beforeTest() throws Exception { kafkaBroker = new TestKafkaBroker(); - WorkerConfig workerCfg = new StandaloneConfig(makeWorkerProps()); + Map props = makeWorkerProps(); + WorkerConfig workerCfg = new StandaloneConfig(props); MemoryOffsetBackingStore offBackingStore = new MemoryOffsetBackingStore(); offBackingStore.configure(workerCfg); - worker = new Worker(WORKER_ID, new SystemTime(), workerCfg, offBackingStore); + worker = new Worker(WORKER_ID, new SystemTime(), new Plugins(props), workerCfg, offBackingStore); worker.start(); - herder = new StandaloneHerder(worker); + herder = new StandaloneHerder(worker, ConnectUtils.lookupKafkaClusterId(workerCfg)); herder.start(); } @@ -133,6 +140,7 @@ public class IgniteSourceConnectorTest extends GridCommonAbstractTest { * * @throws Exception Thrown in case of the failure. */ + @Test public void testEventsInjectedIntoKafkaWithoutFilter() throws Exception { Map srcProps = makeSourceProps(Utils.join(TOPICS, ",")); @@ -146,6 +154,7 @@ public void testEventsInjectedIntoKafkaWithoutFilter() throws Exception { * * @throws Exception Thrown in case of the failure. 
*/ + @Test public void testEventsInjectedIntoKafka() throws Exception { doTest(makeSourceProps(Utils.join(TOPICS, ",")), true); } @@ -250,6 +259,7 @@ private void checkDataDelivered(final int expectedEventsCnt) throws Exception { props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-grp"); props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1); + props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 20000); props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10000); props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer"); diff --git a/modules/kubernetes/DEVNOTES.txt b/modules/kubernetes/DEVNOTES.txt index b2a8173587bf7..2e3d84d5a45bd 100644 --- a/modules/kubernetes/DEVNOTES.txt +++ b/modules/kubernetes/DEVNOTES.txt @@ -10,7 +10,7 @@ Building Apache Ignite ========================= Use the command below to assemble an Apache Ignite binary: - mvn clean package -Prelease -Dignite.edition=fabric-lgpl -DskipTests + mvn clean package -Prelease -Dignite.edition=apache-ignite-lgpl -DskipTests Note, if you alter the build instruction somehow make sure to update the files under 'config' folder if needed. diff --git a/modules/kubernetes/config/Dockerfile b/modules/kubernetes/config/Dockerfile index 2274535b5ccb2..634e18146895e 100644 --- a/modules/kubernetes/config/Dockerfile +++ b/modules/kubernetes/config/Dockerfile @@ -22,7 +22,7 @@ FROM java:8 ENV IGNITE_VERSION 2.0.0-SNAPSHOT # Set IGNITE_HOME variable. -ENV IGNITE_HOME /opt/ignite/apache-ignite-fabric-lgpl-${IGNITE_VERSION}-bin +ENV IGNITE_HOME /opt/ignite/apache-ignite-lgpl-${IGNITE_VERSION}-bin # Setting a path to an Apache Ignite configuration file. Used by run.sh script below. ENV CONFIG_URI ${IGNITE_HOME}/config/example-kube.xml @@ -40,11 +40,11 @@ RUN apt-get update && apt-get install -y --no-install-recommends unzip WORKDIR /opt/ignite # Copying local Apache Ignite build to the docker image. 
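The consumer properties added in IgniteSourceConnectorTest above pin session.timeout.ms explicitly next to heartbeat.interval.ms. Kafka rejects a consumer whose heartbeat interval is not strictly lower than its session timeout, so once the heartbeat was raised to 10 s the session timeout had to be set alongside it. A minimal stdlib sketch of that constraint — the property keys and the 20000/10000 values are the ones from the diff, while the `validTimeouts` helper is illustrative, not a Kafka API:

```java
import java.util.Properties;

public class ConsumerTimeouts {
    /** Kafka's rule: heartbeat.interval.ms must be strictly lower than session.timeout.ms. */
    static boolean validTimeouts(int sessionMs, int heartbeatMs) {
        return heartbeatMs < sessionMs;
    }

    public static void main(String[] args) {
        // The two settings the test now configures together.
        Properties props = new Properties();
        props.setProperty("session.timeout.ms", "20000");    // value added in the diff
        props.setProperty("heartbeat.interval.ms", "10000"); // value added in the diff

        int session = Integer.parseInt(props.getProperty("session.timeout.ms"));
        int heartbeat = Integer.parseInt(props.getProperty("heartbeat.interval.ms"));

        System.out.println(validTimeouts(session, heartbeat)); // true
    }
}
```

Kafka's documentation additionally recommends keeping the heartbeat at no more than a third of the session timeout; the test's 10 s / 20 s pair satisfies only the hard constraint, which is enough for an embedded single-broker test.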
-COPY ./apache-ignite-fabric-lgpl-${IGNITE_VERSION}-bin.zip apache-ignite-fabric-lgpl-${IGNITE_VERSION}-bin.zip +COPY ./apache-ignite-lgpl-${IGNITE_VERSION}-bin.zip apache-ignite-lgpl-${IGNITE_VERSION}-bin.zip # Unpacking the build. -RUN unzip apache-ignite-fabric-lgpl-${IGNITE_VERSION}-bin.zip -RUN rm apache-ignite-fabric-lgpl-${IGNITE_VERSION}-bin.zip +RUN unzip apache-ignite-lgpl-${IGNITE_VERSION}-bin.zip +RUN rm apache-ignite-lgpl-${IGNITE_VERSION}-bin.zip # Copying the executable file and setting permissions. COPY ./run.sh $IGNITE_HOME/ diff --git a/modules/kubernetes/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/kubernetes/TcpDiscoveryKubernetesIpFinderSelfTest.java b/modules/kubernetes/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/kubernetes/TcpDiscoveryKubernetesIpFinderSelfTest.java index fd3e2a35e432c..c12e18ad4225b 100644 --- a/modules/kubernetes/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/kubernetes/TcpDiscoveryKubernetesIpFinderSelfTest.java +++ b/modules/kubernetes/src/test/java/org/apache/ignite/spi/discovery/tcp/ipfinder/kubernetes/TcpDiscoveryKubernetesIpFinderSelfTest.java @@ -19,10 +19,14 @@ import org.apache.ignite.spi.IgniteSpiException; import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAbstractSelfTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * TcpDiscoveryKubernetesIpFinder test. 
*/ +@RunWith(JUnit4.class) public class TcpDiscoveryKubernetesIpFinderSelfTest extends TcpDiscoveryIpFinderAbstractSelfTest { /** @@ -45,6 +49,7 @@ public TcpDiscoveryKubernetesIpFinderSelfTest() throws Exception { } /* {@inheritDoc} */ + @Test @Override public void testIpFinder() throws Exception { TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder(); @@ -90,4 +95,4 @@ public TcpDiscoveryKubernetesIpFinderSelfTest() throws Exception { assertTrue(e.getMessage().startsWith("One or more configuration parameters are invalid")); } } -} \ No newline at end of file +} diff --git a/modules/kubernetes/src/test/java/org/apache/ignite/testsuites/IgniteKubernetesTestSuite.java b/modules/kubernetes/src/test/java/org/apache/ignite/testsuites/IgniteKubernetesTestSuite.java index 540657e8b2601..43b9b917164de 100644 --- a/modules/kubernetes/src/test/java/org/apache/ignite/testsuites/IgniteKubernetesTestSuite.java +++ b/modules/kubernetes/src/test/java/org/apache/ignite/testsuites/IgniteKubernetesTestSuite.java @@ -18,24 +18,27 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinderSelfTest; import org.apache.ignite.testframework.IgniteTestSuite; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Ignite Kubernetes integration test. */ -public class IgniteKubernetesTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteKubernetesTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. */ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new IgniteTestSuite("Kubernetes Integration Test Suite"); // Cloud Nodes IP finder. 
- suite.addTestSuite(TcpDiscoveryKubernetesIpFinderSelfTest.class); + suite.addTest(new JUnit4TestAdapter(TcpDiscoveryKubernetesIpFinderSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jConfigUpdateTest.java b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jConfigUpdateTest.java index c0d9591c4df59..e2dc02ba600ed 100644 --- a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jConfigUpdateTest.java +++ b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jConfigUpdateTest.java @@ -21,14 +21,17 @@ import java.nio.file.Files; import java.nio.file.StandardCopyOption; import java.util.Date; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.internal.U; import org.apache.log4j.helpers.FileWatchdog; +import org.junit.Test; + +import static junit.framework.Assert.assertFalse; +import static junit.framework.Assert.assertTrue; /** * Checking that Log4j configuration is updated when its source file is changed. */ -public class GridLog4jConfigUpdateTest extends TestCase { +public class GridLog4jConfigUpdateTest { /** Path to log4j configuration with INFO enabled. */ private static final String LOG_CONFIG_INFO = "modules/log4j/src/test/config/log4j-info.xml"; @@ -51,7 +54,7 @@ public class GridLog4jConfigUpdateTest extends TestCase { * Check that changing log4j config file causes the logger configuration to be updated. * String-accepting constructor is used. */ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testConfigChangeStringConstructor() throws Exception { checkConfigUpdate(new Log4JLoggerSupplier() { @Override public Log4JLogger get(File cfgFile) throws Exception { @@ -64,7 +67,7 @@ public void testConfigChangeStringConstructor() throws Exception { * Check that changing log4j config file causes the logger configuration to be updated. 
* String-accepting constructor is used. */ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testConfigChangeStringConstructorDefaultDelay() throws Exception { checkConfigUpdate(new Log4JLoggerSupplier() { @Override public Log4JLogger get(File cfgFile) throws Exception { @@ -77,7 +80,7 @@ public void testConfigChangeStringConstructorDefaultDelay() throws Exception { * Check that changing log4j config file causes the logger configuration to be updated. * File-accepting constructor is used. */ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testConfigChangeFileConstructor() throws Exception { checkConfigUpdate(new Log4JLoggerSupplier() { @Override public Log4JLogger get(File cfgFile) throws Exception { @@ -90,7 +93,7 @@ public void testConfigChangeFileConstructor() throws Exception { * Check that changing log4j config file causes the logger configuration to be updated. * File-accepting constructor is used. */ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testConfigChangeUrlConstructor() throws Exception { checkConfigUpdate(new Log4JLoggerSupplier() { @Override public Log4JLogger get(File cfgFile) throws Exception { diff --git a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jCorrectFileNameTest.java b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jCorrectFileNameTest.java index ac5b4983af799..f166c000e6313 100644 --- a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jCorrectFileNameTest.java +++ b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jCorrectFileNameTest.java @@ -20,7 +20,6 @@ import java.io.File; import java.util.Collections; import java.util.Enumeration; -import junit.framework.TestCase; import org.apache.ignite.Ignite; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.G; @@ -32,17 +31,24 @@ import org.apache.log4j.Logger; import 
org.apache.log4j.PatternLayout; import org.apache.log4j.varia.LevelRangeFilter; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; /** * Tests that several grids log to files with correct names. */ @GridCommonTest(group = "Logger") -public class GridLog4jCorrectFileNameTest extends TestCase { +public class GridLog4jCorrectFileNameTest { /** Appender */ private Log4jRollingFileAppender appender; - /** {@inheritDoc} */ - @Override protected void setUp() throws Exception { + /** */ + @Before + public void setUp() { Logger root = Logger.getRootLogger(); for (Enumeration appenders = root.getAllAppenders(); appenders.hasMoreElements(); ) { @@ -55,8 +61,9 @@ public class GridLog4jCorrectFileNameTest extends TestCase { root.addAppender(appender); } - /** {@inheritDoc} */ - @Override public void tearDown() { + /** */ + @After + public void tearDown() { if (appender != null) { Logger.getRootLogger().removeAppender(Log4jRollingFileAppender.class.getSimpleName()); @@ -69,6 +76,7 @@ public class GridLog4jCorrectFileNameTest extends TestCase { * * @throws Exception If error occurs. */ + @Test public void testLogFilesTwoNodes() throws Exception { checkOneNode(0); checkOneNode(1); @@ -102,9 +110,8 @@ private void checkOneNode(int id) throws Exception { * * @param igniteInstanceName Ignite instance name. * @return Grid configuration. - * @throws Exception If error occurred. */ - private static IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { + private static IgniteConfiguration getConfiguration(String igniteInstanceName) { IgniteConfiguration cfg = new IgniteConfiguration(); cfg.setIgniteInstanceName(igniteInstanceName); @@ -128,9 +135,8 @@ private static IgniteConfiguration getConfiguration(String igniteInstanceName) t * Creates new GridLog4jRollingFileAppender. * * @return GridLog4jRollingFileAppender. 
- * @throws Exception If error occurred. */ - private static Log4jRollingFileAppender createAppender() throws Exception { + private static Log4jRollingFileAppender createAppender() { Log4jRollingFileAppender appender = new Log4jRollingFileAppender(); appender.setLayout(new PatternLayout("[%d{ISO8601}][%-5p][%t][%c{1}] %m%n")); diff --git a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jInitializedTest.java b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jInitializedTest.java index 1fb9c34c0c21f..a8bb4fea91342 100644 --- a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jInitializedTest.java +++ b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jInitializedTest.java @@ -17,25 +17,27 @@ package org.apache.ignite.logger.log4j; -import junit.framework.TestCase; import org.apache.ignite.IgniteLogger; import org.apache.ignite.testframework.junits.common.GridCommonTest; import org.apache.log4j.BasicConfigurator; +import org.junit.Before; +import org.junit.Test; + +import static junit.framework.Assert.assertTrue; /** * Log4j initialized test. */ @GridCommonTest(group = "Logger") -public class GridLog4jInitializedTest extends TestCase { - - /** - * @throws Exception If failed. 
- */ - @Override protected void setUp() throws Exception { +public class GridLog4jInitializedTest { + /** */ + @Before + public void setUp() { BasicConfigurator.configure(); } /** */ + @Test public void testLogInitialize() { IgniteLogger log = new Log4JLogger(); @@ -57,4 +59,4 @@ public void testLogInitialize() { assert log.getLogger(GridLog4jInitializedTest.class.getName()) instanceof Log4JLogger; } -} \ No newline at end of file +} diff --git a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingFileTest.java b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingFileTest.java index d1b09d7542e0a..a6d683e68ea92 100644 --- a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingFileTest.java +++ b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingFileTest.java @@ -18,24 +18,28 @@ package org.apache.ignite.logger.log4j; import java.io.File; -import junit.framework.TestCase; import org.apache.ignite.IgniteLogger; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertTrue; /** * Grid Log4j SPI test. */ @GridCommonTest(group = "Logger") -public class GridLog4jLoggingFileTest extends TestCase { +public class GridLog4jLoggingFileTest { /** */ private IgniteLogger log; /** Logger config */ private File xml; - /** {@inheritDoc} */ - @Override protected void setUp() throws Exception { + /** */ + @Before + public void setUp() throws Exception { xml = GridTestUtils.resolveIgnitePath("modules/core/src/test/config/log4j-test.xml"); assert xml != null; @@ -47,6 +51,7 @@ public class GridLog4jLoggingFileTest extends TestCase { /** * Tests log4j logging SPI. 
*/ + @Test public void testLog() { System.out.println(log.toString()); @@ -62,4 +67,4 @@ public void testLog() { log.error("This is 'error' message."); log.error("This is 'error' message.", new Exception("It's a test error exception")); } -} \ No newline at end of file +} diff --git a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingPathTest.java b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingPathTest.java index 867efbac45c68..b4840a41925d1 100644 --- a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingPathTest.java +++ b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingPathTest.java @@ -17,29 +17,34 @@ package org.apache.ignite.logger.log4j; -import junit.framework.TestCase; import org.apache.ignite.IgniteLogger; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertTrue; /** * Grid Log4j SPI test. */ @GridCommonTest(group = "Logger") -public class GridLog4jLoggingPathTest extends TestCase { +public class GridLog4jLoggingPathTest { /** */ private IgniteLogger log; /** Logger config */ private String path = "modules/core/src/test/config/log4j-test.xml"; - /** {@inheritDoc} */ - @Override protected void setUp() throws Exception { + /** */ + @Before + public void setUp() throws Exception { log = new Log4JLogger(path).getLogger(getClass()); } /** * Tests log4j logging SPI. 
*/ + @Test public void testLog() { System.out.println(log.toString()); @@ -57,4 +62,4 @@ public void testLog() { log.error("This is 'error' message."); log.error("This is 'error' message.", new Exception("It's a test error exception")); } -} \ No newline at end of file +} diff --git a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingUrlTest.java b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingUrlTest.java index 1e2e8df82bf31..e76625adc5f57 100644 --- a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingUrlTest.java +++ b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jLoggingUrlTest.java @@ -19,24 +19,28 @@ import java.io.File; import java.net.URL; -import junit.framework.TestCase; import org.apache.ignite.IgniteLogger; import org.apache.ignite.testframework.GridTestUtils; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertTrue; /** * Grid Log4j SPI test. */ @GridCommonTest(group = "Logger") -public class GridLog4jLoggingUrlTest extends TestCase { +public class GridLog4jLoggingUrlTest { /** */ private IgniteLogger log; /** Logger config */ private URL url; - /** {@inheritDoc} */ - @Override protected void setUp() throws Exception { + /** */ + @Before + public void setUp() throws Exception { File xml = GridTestUtils.resolveIgnitePath("modules/core/src/test/config/log4j-test.xml"); assert xml != null; @@ -49,6 +53,7 @@ public class GridLog4jLoggingUrlTest extends TestCase { /** * Tests log4j logging SPI. 
*/ + @Test public void testLog() { System.out.println(log.toString()); @@ -64,4 +69,4 @@ public void testLog() { log.error("This is 'error' message."); log.error("This is 'error' message.", new Exception("It's a test error exception")); } -} \ No newline at end of file +} diff --git a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jNotInitializedTest.java b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jNotInitializedTest.java index d32e890686420..c230e7ef6ffcb 100644 --- a/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jNotInitializedTest.java +++ b/modules/log4j/src/test/java/org/apache/ignite/logger/log4j/GridLog4jNotInitializedTest.java @@ -17,16 +17,19 @@ package org.apache.ignite.logger.log4j; -import junit.framework.TestCase; import org.apache.ignite.IgniteLogger; import org.apache.ignite.testframework.junits.common.GridCommonTest; +import org.junit.Test; + +import static junit.framework.Assert.assertTrue; /** * Log4j not initialized test. 
*/ @GridCommonTest(group = "Logger") -public class GridLog4jNotInitializedTest extends TestCase { +public class GridLog4jNotInitializedTest { /** */ + @Test public void testLogInitialize() { IgniteLogger log = new Log4JLogger().getLogger(GridLog4jNotInitializedTest.class); @@ -48,4 +51,4 @@ public void testLogInitialize() { log.warning("This is 'warning' message."); log.error("This is 'error' message."); } -} \ No newline at end of file +} diff --git a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2ConfigUpdateTest.java b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2ConfigUpdateTest.java index d9f75176ee299..ae09b73d3b0ff 100644 --- a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2ConfigUpdateTest.java +++ b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2ConfigUpdateTest.java @@ -21,13 +21,16 @@ import java.nio.file.Files; import java.nio.file.StandardCopyOption; import java.util.Date; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.Test; + +import static junit.framework.Assert.assertFalse; +import static junit.framework.Assert.assertTrue; /** * Checking that Log4j2 configuration is updated when its source file is changed. */ -public class Log4j2ConfigUpdateTest extends TestCase { +public class Log4j2ConfigUpdateTest { /** Path to log4j2 configuration with INFO enabled. */ private static final String LOG_CONFIG_INFO = "modules/log4j2/src/test/config/log4j2-info.xml"; @@ -56,7 +59,7 @@ public class Log4j2ConfigUpdateTest extends TestCase { * Check that changing log4j2 config file causes the logger configuration to be updated. * String-accepting constructor is used. 
*/ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testConfigChangeStringConstructor() throws Exception { checkConfigUpdate(new Log4J2LoggerSupplier() { @Override public Log4J2Logger get(File cfgFile) throws Exception { @@ -69,7 +72,7 @@ public void testConfigChangeStringConstructor() throws Exception { * Check that changing log4j config file causes the logger configuration to be updated. * File-accepting constructor is used. */ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testConfigChangeFileConstructor() throws Exception { checkConfigUpdate(new Log4J2LoggerSupplier() { @Override public Log4J2Logger get(File cfgFile) throws Exception { @@ -82,7 +85,7 @@ public void testConfigChangeFileConstructor() throws Exception { * Check that changing log4j config file causes the logger configuration to be updated. * File-accepting constructor is used. */ - @SuppressWarnings("ResultOfMethodCallIgnored") + @Test public void testConfigChangeUrlConstructor() throws Exception { checkConfigUpdate(new Log4J2LoggerSupplier() { @Override public Log4J2Logger get(File cfgFile) throws Exception { diff --git a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerMarkerTest.java b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerMarkerTest.java index 672f5d960974b..22f8c5e3f6e23 100644 --- a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerMarkerTest.java +++ b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerMarkerTest.java @@ -18,13 +18,22 @@ package org.apache.ignite.logger.log4j2; import java.io.File; -import junit.framework.TestCase; import org.apache.ignite.internal.util.typedef.internal.U; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static 
org.junit.Assert.assertTrue; /** * Testing that markers are supported by log4j2 implementation. */ -public class Log4j2LoggerMarkerTest extends TestCase { +@RunWith(JUnit4.class) +public class Log4j2LoggerMarkerTest { /** Path to log4j configuration. */ private static final String LOG_CONFIG = "modules/log4j2/src/test/config/log4j2-markers.xml"; @@ -35,22 +44,21 @@ public class Log4j2LoggerMarkerTest extends TestCase { private static final String LOG_FILTERED = "work/log/filtered.log"; /** */ - @Override protected void setUp() throws Exception { - super.setUp(); - + @Before + public void setUp() { Log4J2Logger.cleanup(); deleteLogs(); } /** */ - @Override protected void tearDown() throws Exception { - super.tearDown(); - + @After + public void tearDown() { deleteLogs(); } /** */ + @Test public void testMarkerFiltering() throws Exception { // create log Log4J2Logger log = new Log4J2Logger(LOG_CONFIG); diff --git a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerSelfTest.java b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerSelfTest.java index 5f3207ec52e80..4ad4c65bf3237 100644 --- a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerSelfTest.java +++ b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerSelfTest.java @@ -21,7 +21,6 @@ import java.net.URL; import java.util.Collections; import java.util.UUID; -import junit.framework.TestCase; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteLogger; import org.apache.ignite.configuration.IgniteConfiguration; @@ -31,27 +30,36 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +import static org.junit.Assert.assertEquals; +import static 
org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; /** * Grid Log4j2 SPI test. */ -public class Log4j2LoggerSelfTest extends TestCase { +@RunWith(JUnit4.class) +public class Log4j2LoggerSelfTest { /** */ private static final String LOG_PATH_TEST = "modules/core/src/test/config/log4j2-test.xml"; /** */ private static final String LOG_PATH_MAIN = "config/ignite-log4j2.xml"; - /** - * @throws Exception If failed. - */ - @Override protected void setUp() throws Exception { + /** */ + @Before + public void setUp() { Log4J2Logger.cleanup(); } /** * @throws Exception If failed. */ + @Test public void testFileConstructor() throws Exception { File xml = GridTestUtils.resolveIgnitePath(LOG_PATH_TEST); @@ -73,6 +81,7 @@ public void testFileConstructor() throws Exception { /** * @throws Exception If failed. */ + @Test public void testUrlConstructor() throws Exception { File xml = GridTestUtils.resolveIgnitePath(LOG_PATH_TEST); @@ -95,6 +104,7 @@ public void testUrlConstructor() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPathConstructor() throws Exception { IgniteLogger log = new Log4J2Logger(LOG_PATH_TEST).getLogger(getClass()); @@ -126,6 +136,7 @@ private void checkLog(IgniteLogger log) { /** * @throws Exception If failed. */ + @Test public void testSystemNodeId() throws Exception { UUID id = UUID.randomUUID(); @@ -139,6 +150,7 @@ public void testSystemNodeId() throws Exception { * * @throws Exception If error occurs. 
*/ + @Test public void testLogFilesTwoNodes() throws Exception { checkOneNode(0); checkOneNode(1); @@ -196,4 +208,4 @@ private static IgniteConfiguration getConfiguration(String igniteInstanceName, S .setConnectorConfiguration(null) .setDiscoverySpi(disco); } -} \ No newline at end of file +} diff --git a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerVerboseModeSelfTest.java b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerVerboseModeSelfTest.java index c28108ce54914..5c4390838e389 100644 --- a/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerVerboseModeSelfTest.java +++ b/modules/log4j2/src/test/java/org/apache/ignite/logger/log4j2/Log4j2LoggerVerboseModeSelfTest.java @@ -21,7 +21,6 @@ import java.io.File; import java.io.PrintStream; import java.util.Collections; -import junit.framework.TestCase; import org.apache.ignite.Ignite; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.internal.util.typedef.G; @@ -29,20 +28,22 @@ import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.ignite.testframework.GridTestUtils; import org.apache.logging.log4j.Level; +import org.junit.Before; +import org.junit.Test; + +import static junit.framework.Assert.assertTrue; /** * Grid Log4j2 SPI test. */ -public class Log4j2LoggerVerboseModeSelfTest extends TestCase { +public class Log4j2LoggerVerboseModeSelfTest { /** */ private static final String LOG_PATH_VERBOSE_TEST = "modules/core/src/test/config/log4j2-verbose-test.xml"; - /** - * @throws Exception If failed. - */ - @Override protected void setUp() throws Exception { + /** */ + @Before + public void setUp() { Log4J2Logger.cleanup(); - } /** @@ -50,6 +51,7 @@ public class Log4j2LoggerVerboseModeSelfTest extends TestCase { * * @throws Exception If failed. 
*/ + @Test public void testVerboseMode() throws Exception { final PrintStream backupSysOut = System.out; final PrintStream backupSysErr = System.err; @@ -137,4 +139,4 @@ private static IgniteConfiguration getConfiguration(String igniteInstanceName, S .setConnectorConfiguration(null) .setDiscoverySpi(disco); } -} \ No newline at end of file +} diff --git a/modules/log4j2/src/test/java/org/apache/ignite/testsuites/IgniteLog4j2TestSuite.java b/modules/log4j2/src/test/java/org/apache/ignite/testsuites/IgniteLog4j2TestSuite.java index 0b8cefa50147e..665ae31baac87 100644 --- a/modules/log4j2/src/test/java/org/apache/ignite/testsuites/IgniteLog4j2TestSuite.java +++ b/modules/log4j2/src/test/java/org/apache/ignite/testsuites/IgniteLog4j2TestSuite.java @@ -17,28 +17,31 @@ package org.apache.ignite.testsuites; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.logger.log4j2.Log4j2ConfigUpdateTest; import org.apache.ignite.logger.log4j2.Log4j2LoggerMarkerTest; import org.apache.ignite.logger.log4j2.Log4j2LoggerSelfTest; import org.apache.ignite.logger.log4j2.Log4j2LoggerVerboseModeSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Log4j2 logging tests. */ -public class IgniteLog4j2TestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteLog4j2TestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Log4j2 Logging Test Suite"); - suite.addTestSuite(Log4j2LoggerSelfTest.class); - suite.addTestSuite(Log4j2LoggerVerboseModeSelfTest.class); - suite.addTestSuite(Log4j2LoggerMarkerTest.class); - suite.addTestSuite(Log4j2ConfigUpdateTest.class); + suite.addTest(new JUnit4TestAdapter(Log4j2LoggerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(Log4j2LoggerVerboseModeSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(Log4j2LoggerMarkerTest.class)); + suite.addTest(new JUnit4TestAdapter(Log4j2ConfigUpdateTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/mesos/pom.xml b/modules/mesos/pom.xml index a5c711ad22ae8..18416502a838f 100644 --- a/modules/mesos/pom.xml +++ b/modules/mesos/pom.xml @@ -37,7 +37,7 @@ 1.5.0 https://ignite.apache.org/latest - /ignite/%s/apache-ignite-fabric-%s-bin.zip + /ignite/%s/apache-ignite-%s-bin.zip https://www.apache.org/dyn/closer.cgi?as_json=1 diff --git a/modules/mesos/src/main/java/org/apache/ignite/mesos/resource/IgniteProvider.java b/modules/mesos/src/main/java/org/apache/ignite/mesos/resource/IgniteProvider.java index 4cd0c0a0add6c..5bd20105b28b0 100644 --- a/modules/mesos/src/main/java/org/apache/ignite/mesos/resource/IgniteProvider.java +++ b/modules/mesos/src/main/java/org/apache/ignite/mesos/resource/IgniteProvider.java @@ -49,7 +49,7 @@ public class IgniteProvider { // This constants are set by maven-ant-plugin. /** */ - private static final String DOWNLOAD_URL_PATTERN = "https://archive.apache.org/dist/ignite/%s/apache-ignite-fabric-%s-bin.zip"; + private static final String DOWNLOAD_URL_PATTERN = "https://archive.apache.org/dist/ignite/%s/apache-ignite-%s-bin.zip"; /** URL for request Ignite latest version. 
*/ private final static String IGNITE_LATEST_VERSION_URL = "https://ignite.apache.org/latest"; @@ -58,7 +58,7 @@ public class IgniteProvider { private static final String APACHE_MIRROR_URL = "https://www.apache.org/dyn/closer.cgi?as_json=1"; /** Ignite on Apache URL path. */ - private static final String IGNITE_PATH = "/ignite/%s/apache-ignite-fabric-%s-bin.zip"; + private static final String IGNITE_PATH = "/ignite/%s/apache-ignite-%s-bin.zip"; /** Version pattern. */ private static final Pattern VERSION_PATTERN = Pattern.compile("(?<=version=).*\\S+"); @@ -271,4 +271,4 @@ private static String fileName(String url) { return split[split.length - 1]; } -} \ No newline at end of file +} diff --git a/modules/mesos/src/test/java/org/apache/ignite/IgniteMesosTestSuite.java b/modules/mesos/src/test/java/org/apache/ignite/IgniteMesosTestSuite.java index af9a37db6a606..886bc9ebdc374 100644 --- a/modules/mesos/src/test/java/org/apache/ignite/IgniteMesosTestSuite.java +++ b/modules/mesos/src/test/java/org/apache/ignite/IgniteMesosTestSuite.java @@ -17,22 +17,25 @@ package org.apache.ignite; +import junit.framework.JUnit4TestAdapter; import junit.framework.TestSuite; import org.apache.ignite.mesos.IgniteSchedulerSelfTest; +import org.junit.runner.RunWith; +import org.junit.runners.AllTests; /** * Apache Mesos integration tests. */ -public class IgniteMesosTestSuite extends TestSuite { +@RunWith(AllTests.class) +public class IgniteMesosTestSuite { /** * @return Test suite. - * @throws Exception Thrown in case of the failure. 
*/ - public static TestSuite suite() throws Exception { + public static TestSuite suite() { TestSuite suite = new TestSuite("Apache Mesos Integration Test Suite"); - suite.addTest(new TestSuite(IgniteSchedulerSelfTest.class)); + suite.addTest(new JUnit4TestAdapter(IgniteSchedulerSelfTest.class)); return suite; } -} \ No newline at end of file +} diff --git a/modules/mesos/src/test/java/org/apache/ignite/mesos/IgniteSchedulerSelfTest.java b/modules/mesos/src/test/java/org/apache/ignite/mesos/IgniteSchedulerSelfTest.java index 4e485698aa4f7..64117f02989cf 100644 --- a/modules/mesos/src/test/java/org/apache/ignite/mesos/IgniteSchedulerSelfTest.java +++ b/modules/mesos/src/test/java/org/apache/ignite/mesos/IgniteSchedulerSelfTest.java @@ -22,22 +22,27 @@ import java.util.Collections; import java.util.List; import java.util.regex.Pattern; -import junit.framework.TestCase; import org.apache.ignite.mesos.resource.ResourceProvider; import org.apache.mesos.Protos; import org.apache.mesos.SchedulerDriver; +import org.junit.Before; +import org.junit.Test; + +import static junit.framework.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; /** * Scheduler tests. */ -public class IgniteSchedulerSelfTest extends TestCase { +public class IgniteSchedulerSelfTest { /** */ private IgniteScheduler scheduler; - /** {@inheritDoc} */ - @Override public void setUp() throws Exception { - super.setUp(); - + /** */ + @Before + public void setUp() throws Exception { ClusterProperties clustProp = new ClusterProperties(); scheduler = new IgniteScheduler(clustProp, new ResourceProvider() { @@ -62,6 +67,7 @@ public class IgniteSchedulerSelfTest extends TestCase { /** * @throws Exception If failed.
*/ + @Test public void testHostRegister() throws Exception { Protos.Offer offer = createOffer("hostname", 4, 1024); @@ -81,6 +87,7 @@ public void testHostRegister() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeclineByCpu() throws Exception { Protos.Offer offer = createOffer("hostname", 4, 1024); @@ -115,6 +122,7 @@ public void testDeclineByCpu() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeclineByMem() throws Exception { Protos.Offer offer = createOffer("hostname", 4, 1024); @@ -149,6 +157,7 @@ public void testDeclineByMem() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeclineByMemCpu() throws Exception { Protos.Offer offer = createOffer("hostname", 1, 1024); @@ -191,6 +200,7 @@ public void testDeclineByMemCpu() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeclineByCpuMinRequirements() throws Exception { Protos.Offer offer = createOffer("hostname", 8, 10240); @@ -211,6 +221,7 @@ public void testDeclineByCpuMinRequirements() throws Exception { /** * @throws Exception If failed. */ + @Test public void testDeclineByMemMinRequirements() throws Exception { Protos.Offer offer = createOffer("hostname", 8, 10240); @@ -231,6 +242,7 @@ public void testDeclineByMemMinRequirements() throws Exception { /** * @throws Exception If failed. */ + @Test public void testHosthameConstraint() throws Exception { Protos.Offer offer = createOffer("hostname", 8, 10240); @@ -258,6 +270,7 @@ public void testHosthameConstraint() throws Exception { /** * @throws Exception If failed. */ + @Test public void testPerNode() throws Exception { Protos.Offer offer = createOffer("hostname", 8, 1024); @@ -304,6 +317,7 @@ public void testPerNode() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testIgniteFramework() throws Exception { final String mesosUserValue = "userAAAAA"; final String mesosRoleValue = "role1"; @@ -330,6 +344,7 @@ public void testIgniteFramework() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMesosRoleValidation() throws Exception { List failedRoleValues = Arrays.asList("", ".", "..", "-testRole", "test/Role", "test\\Role", "test Role", null); @@ -511,4 +526,4 @@ public void clear() { return null; } } -} \ No newline at end of file +} diff --git a/modules/ml/pom.xml b/modules/ml/pom.xml index ad31da256645d..ebc5fb6c51d04 100644 --- a/modules/ml/pom.xml +++ b/modules/ml/pom.xml @@ -57,13 +57,18 @@ ${project.version} - org.apache.ignite ignite-spring ${project.version} + + org.apache.ignite + ignite-h2 + ${project.version} + + it.unimi.dsi fastutil diff --git a/modules/ml/src/main/java/org/apache/ignite/ml/math/exceptions/preprocessing/package-info.java b/modules/ml/src/main/java/org/apache/ignite/ml/math/exceptions/preprocessing/package-info.java new file mode 100644 index 0000000000000..77dd980c56a0d --- /dev/null +++ b/modules/ml/src/main/java/org/apache/ignite/ml/math/exceptions/preprocessing/package-info.java @@ -0,0 +1,22 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * + * Contains exceptions for preprocessing. + */ +package org.apache.ignite.ml.math.exceptions.preprocessing; \ No newline at end of file diff --git a/modules/ml/src/main/java/org/apache/ignite/ml/selection/scoring/evaluator/package-info.java b/modules/ml/src/main/java/org/apache/ignite/ml/selection/scoring/evaluator/package-info.java new file mode 100644 index 0000000000000..c2c7c4307ba5e --- /dev/null +++ b/modules/ml/src/main/java/org/apache/ignite/ml/selection/scoring/evaluator/package-info.java @@ -0,0 +1,22 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * + * Package for model evaluator classes. 
+ */ +package org.apache.ignite.ml.selection.scoring.evaluator; \ No newline at end of file diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/IgniteMLTestSuite.java b/modules/ml/src/test/java/org/apache/ignite/ml/IgniteMLTestSuite.java index 481e1faa6f993..3c98c1c9290e1 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/IgniteMLTestSuite.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/IgniteMLTestSuite.java @@ -17,6 +17,8 @@ package org.apache.ignite.ml; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; import org.apache.ignite.ml.clustering.ClusteringTestSuite; import org.apache.ignite.ml.common.CommonTestSuite; import org.apache.ignite.ml.composition.CompositionTestSuite; @@ -34,31 +36,38 @@ import org.apache.ignite.ml.svm.SVMTestSuite; import org.apache.ignite.ml.tree.DecisionTreeTestSuite; import org.junit.runner.RunWith; -import org.junit.runners.Suite; +import org.junit.runners.AllTests; /** * Test suite for all module tests. IMPL NOTE tests in {@code org.apache.ignite.ml.tree.performance} are not * included here because these are intended only for manual execution. */ -@RunWith(Suite.class) -@Suite.SuiteClasses({ - MathImplMainTestSuite.class, - RegressionsTestSuite.class, - SVMTestSuite.class, - ClusteringTestSuite.class, - DecisionTreeTestSuite.class, - KNNTestSuite.class, - MLPTestSuite.class, - DatasetTestSuite.class, - PipelineTestSuite.class, - PreprocessingTestSuite.class, - GAGridTestSuite.class, - SelectionTestSuite.class, - CompositionTestSuite.class, - EnvironmentTestSuite.class, - StructuresTestSuite.class, - CommonTestSuite.class -}) +@RunWith(AllTests.class) public class IgniteMLTestSuite { - // No-op. + /** */ + public static TestSuite suite() { + TestSuite suite = new TestSuite(IgniteMLTestSuite.class.getSimpleName()); + + /** JUnit 4 tests. 
*/ + suite.addTest(new JUnit4TestAdapter(MathImplMainTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(RegressionsTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(SVMTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(ClusteringTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(KNNTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(PipelineTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(PreprocessingTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(GAGridTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(CompositionTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(EnvironmentTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(StructuresTestSuite.class)); + suite.addTest(new JUnit4TestAdapter(CommonTestSuite.class)); + + /** JUnit 3 tests. */ + suite.addTest(DecisionTreeTestSuite.suite()); + suite.addTest(MLPTestSuite.suite()); + suite.addTest(DatasetTestSuite.suite()); + suite.addTest(SelectionTestSuite.suite()); + + return suite; + } } diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/DatasetTestSuite.java b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/DatasetTestSuite.java index babddfb168c00..8ab41c5ac2882 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/DatasetTestSuite.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/DatasetTestSuite.java @@ -17,6 +17,8 @@ package org.apache.ignite.ml.dataset; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; import org.apache.ignite.ml.dataset.feature.ObjectHistogramTest; import org.apache.ignite.ml.dataset.impl.cache.CacheBasedDatasetBuilderTest; import org.apache.ignite.ml.dataset.impl.cache.CacheBasedDatasetTest; @@ -28,24 +30,28 @@ import org.apache.ignite.ml.dataset.primitive.SimpleDatasetTest; import org.apache.ignite.ml.dataset.primitive.SimpleLabeledDatasetTest; import org.junit.runner.RunWith; -import org.junit.runners.Suite; +import org.junit.runners.AllTests; /** *
Test suite for all tests located in org.apache.ignite.ml.dataset.* package. */ -@RunWith(Suite.class) -@Suite.SuiteClasses({ - DatasetWrapperTest.class, - ComputeUtilsTest.class, - DatasetAffinityFunctionWrapperTest.class, - PartitionDataStorageTest.class, - CacheBasedDatasetBuilderTest.class, - CacheBasedDatasetTest.class, - LocalDatasetBuilderTest.class, - SimpleDatasetTest.class, - SimpleLabeledDatasetTest.class, - ObjectHistogramTest.class -}) +@RunWith(AllTests.class) public class DatasetTestSuite { - // No-op. + /** */ + public static TestSuite suite() { + TestSuite suite = new TestSuite(DatasetTestSuite.class.getSimpleName()); + + suite.addTest(new JUnit4TestAdapter(DatasetWrapperTest.class)); + suite.addTest(new JUnit4TestAdapter(DatasetAffinityFunctionWrapperTest.class)); + suite.addTest(new JUnit4TestAdapter(PartitionDataStorageTest.class)); + suite.addTest(new JUnit4TestAdapter(LocalDatasetBuilderTest.class)); + suite.addTest(new JUnit4TestAdapter(SimpleDatasetTest.class)); + suite.addTest(new JUnit4TestAdapter(SimpleLabeledDatasetTest.class)); + suite.addTest(new JUnit4TestAdapter(ObjectHistogramTest.class)); + suite.addTest(new JUnit4TestAdapter(ComputeUtilsTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheBasedDatasetBuilderTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheBasedDatasetTest.class)); + + return suite; + } } diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetBuilderTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetBuilderTest.java index 1cf6dbfb0f1b5..82d9259c133c3 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetBuilderTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetBuilderTest.java @@ -28,10 +28,14 @@ import org.apache.ignite.internal.util.IgniteUtils; import
org.apache.ignite.ml.dataset.UpstreamEntry; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link CacheBasedDatasetBuilder}. */ +@RunWith(JUnit4.class) public class CacheBasedDatasetBuilderTest extends GridCommonAbstractTest { /** Number of nodes in grid. */ private static final int NODE_COUNT = 10; @@ -61,6 +65,7 @@ public class CacheBasedDatasetBuilderTest extends GridCommonAbstractTest { /** * Tests that partitions of the dataset cache are placed on the same nodes as upstream cache. */ + @Test public void testBuild() { IgniteCache upstreamCache = createTestCache(100, 10); CacheBasedDatasetBuilder builder = new CacheBasedDatasetBuilder<>(ignite, upstreamCache); @@ -89,6 +94,7 @@ public void testBuild() { /** * Tests that predicate works correctly. */ + @Test public void testBuildWithPredicate() { CacheConfiguration upstreamCacheConfiguration = new CacheConfiguration<>(); upstreamCacheConfiguration.setAffinity(new RendezvousAffinityFunction(false, 1)); diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetTest.java index a89253036ed89..374449885bde7 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/CacheBasedDatasetTest.java @@ -40,10 +40,14 @@ import org.apache.ignite.lang.IgnitePredicate; import org.apache.ignite.ml.dataset.primitive.data.SimpleDatasetData; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link CacheBasedDataset}. 
*/ +@RunWith(JUnit4.class) public class CacheBasedDatasetTest extends GridCommonAbstractTest { /** Number of nodes in grid. */ private static final int NODE_COUNT = 4; @@ -75,6 +79,7 @@ public class CacheBasedDatasetTest extends GridCommonAbstractTest { * computations on dataset. Reservation means that partitions won't be unloaded from the node before computation is * completed. */ + @Test public void testPartitionExchangeDuringComputeCall() { int partitions = 4; @@ -130,6 +135,7 @@ public void testPartitionExchangeDuringComputeCall() { * computations on dataset. Reservation means that partitions won't be unloaded from the node before computation is * completed. */ + @Test public void testPartitionExchangeDuringComputeWithCtxCall() { int partitions = 4; diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/util/ComputeUtilsTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/util/ComputeUtilsTest.java index 952fc435cefde..e879f93f0a827 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/util/ComputeUtilsTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/dataset/impl/cache/util/ComputeUtilsTest.java @@ -34,6 +34,7 @@ import org.apache.ignite.internal.util.IgniteUtils; import org.apache.ignite.ml.dataset.UpstreamEntry; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; /** * Tests for {@link ComputeUtils}. @@ -57,7 +58,7 @@ public class ComputeUtilsTest extends GridCommonAbstractTest { } /** {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { + @Override protected void beforeTest() { /* Grid instance. */ ignite = grid(NODE_COUNT); ignite.configuration().setPeerClassLoadingEnabled(true); @@ -67,6 +68,7 @@ public class ComputeUtilsTest extends GridCommonAbstractTest { /** * Tests that in case two caches maintain their partitions on different nodes, affinity call won't be completed. 
*/ + @Test public void testAffinityCallWithRetriesNegative() { ClusterNode node1 = grid(1).cluster().localNode(); ClusterNode node2 = grid(2).cluster().localNode(); @@ -108,6 +110,7 @@ public void testAffinityCallWithRetriesNegative() { /** * Test that in case two caches maintain their partitions on the same node, affinity call will be completed. */ + @Test public void testAffinityCallWithRetriesPositive() { ClusterNode node = grid(1).cluster().localNode(); @@ -147,6 +150,7 @@ public void testAffinityCallWithRetriesPositive() { /** * Tests {@code getData()} method. */ + @Test public void testGetData() { ClusterNode node = grid(1).cluster().localNode(); @@ -205,6 +209,7 @@ public void testGetData() { /** * Tests {@code initContext()} method. */ + @Test public void testInitContext() { ClusterNode node = grid(1).cluster().localNode(); @@ -259,7 +264,7 @@ private static class TestPartitionData implements AutoCloseable { } /** {@inheritDoc} */ - @Override public void close() throws Exception { + @Override public void close() { // Do nothing, GC will clean up. 
} } diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/knn/LabeledVectorSetTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/knn/LabeledVectorSetTest.java index 2303e96f0de37..27c1192a5d2b1 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/knn/LabeledVectorSetTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/knn/LabeledVectorSetTest.java @@ -277,6 +277,7 @@ public void testSetLabelInvalid() { } /** */ + @Test @Override public void testExternalization() { double[][] mtx = new double[][] { diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTestSuite.java b/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTestSuite.java index 3f98ba51a95da..58eea8da605a5 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTestSuite.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTestSuite.java @@ -17,19 +17,25 @@ package org.apache.ignite.ml.nn; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; import org.junit.runner.RunWith; -import org.junit.runners.Suite; +import org.junit.runners.AllTests; /** * Test suite for multilayer perceptrons. */ -@RunWith(Suite.class) -@Suite.SuiteClasses({ - MLPTest.class, - MLPTrainerTest.class, - MLPTrainerIntegrationTest.class, - LossFunctionsTest.class -}) +@RunWith(AllTests.class) public class MLPTestSuite { - // No-op. 
+ /** */ + public static TestSuite suite() { + TestSuite suite = new TestSuite(MLPTestSuite.class.getSimpleName()); + + suite.addTest(new JUnit4TestAdapter(MLPTest.class)); + suite.addTest(new JUnit4TestAdapter(MLPTrainerTest.class)); + suite.addTest(new JUnit4TestAdapter(LossFunctionsTest.class)); + suite.addTest(new JUnit4TestAdapter(MLPTrainerIntegrationTest.class)); + + return suite; + } } diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTrainerIntegrationTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTrainerIntegrationTest.java index 3521cb6b1b444..ff6754abe2eca 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTrainerIntegrationTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/nn/MLPTrainerIntegrationTest.java @@ -39,10 +39,14 @@ import org.apache.ignite.ml.optimization.updatecalculators.SimpleGDParameterUpdate; import org.apache.ignite.ml.optimization.updatecalculators.SimpleGDUpdateCalculator; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link MLPTrainer} that require to start the whole Ignite infrastructure. */ +@RunWith(JUnit4.class) public class MLPTrainerIntegrationTest extends GridCommonAbstractTest { /** Number of nodes in grid */ private static final int NODE_COUNT = 3; @@ -64,7 +68,7 @@ public class MLPTrainerIntegrationTest extends GridCommonAbstractTest { /** * {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { + @Override protected void beforeTest() { /* Grid instance. */ ignite = grid(NODE_COUNT); ignite.configuration().setPeerClassLoadingEnabled(true); @@ -74,6 +78,7 @@ public class MLPTrainerIntegrationTest extends GridCommonAbstractTest { /** * Test 'XOR' operation training with {@link SimpleGDUpdateCalculator}. 
*/ + @Test public void testXORSimpleGD() { xorTest(new UpdatesStrategy<>( new SimpleGDUpdateCalculator(0.3), @@ -85,6 +90,7 @@ public void testXORSimpleGD() { /** * Test 'XOR' operation training with {@link RPropUpdateCalculator}. */ + @Test public void testXORRProp() { xorTest(new UpdatesStrategy<>( new RPropUpdateCalculator(), @@ -96,6 +102,7 @@ public void testXORRProp() { /** * Test 'XOR' operation training with {@link NesterovUpdateCalculator}. */ + @Test public void testXORNesterov() { xorTest(new UpdatesStrategy<>( new NesterovUpdateCalculator(0.1, 0.7), diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/nn/performance/MLPTrainerMnistIntegrationTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/nn/performance/MLPTrainerMnistIntegrationTest.java index bd31b196b120f..f8792a5cd42f6 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/nn/performance/MLPTrainerMnistIntegrationTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/nn/performance/MLPTrainerMnistIntegrationTest.java @@ -36,10 +36,14 @@ import org.apache.ignite.ml.optimization.updatecalculators.RPropUpdateCalculator; import org.apache.ignite.ml.util.MnistUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests {@link MLPTrainer} on the MNIST dataset that require to start the whole Ignite infrastructure. */ +@RunWith(JUnit4.class) public class MLPTrainerMnistIntegrationTest extends GridCommonAbstractTest { /** Number of nodes in grid */ private static final int NODE_COUNT = 3; @@ -61,7 +65,7 @@ public class MLPTrainerMnistIntegrationTest extends GridCommonAbstractTest { /** * {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { + @Override protected void beforeTest() { /* Grid instance. 
*/ ignite = grid(NODE_COUNT); ignite.configuration().setPeerClassLoadingEnabled(true); @@ -69,6 +73,7 @@ public class MLPTrainerMnistIntegrationTest extends GridCommonAbstractTest { } /** Tests on the MNIST dataset. */ + @Test public void testMNIST() throws IOException { int featCnt = 28 * 28; int hiddenNeuronsCnt = 100; diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/selection/SelectionTestSuite.java b/modules/ml/src/test/java/org/apache/ignite/ml/selection/SelectionTestSuite.java index 21c605b737fd5..e2f8feb9fee92 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/selection/SelectionTestSuite.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/selection/SelectionTestSuite.java @@ -17,6 +17,8 @@ package org.apache.ignite.ml.selection; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; import org.apache.ignite.ml.selection.cv.CrossValidationTest; import org.apache.ignite.ml.selection.paramgrid.ParameterSetGeneratorTest; import org.apache.ignite.ml.selection.scoring.cursor.CacheBasedLabelPairCursorTest; @@ -29,25 +31,29 @@ import org.apache.ignite.ml.selection.split.TrainTestDatasetSplitterTest; import org.apache.ignite.ml.selection.split.mapper.SHA256UniformMapperTest; import org.junit.runner.RunWith; -import org.junit.runners.Suite; +import org.junit.runners.AllTests; /** * Test suite for all tests located in org.apache.ignite.ml.selection.* package. */ -@RunWith(Suite.class) -@Suite.SuiteClasses({ - CrossValidationTest.class, - EvaluatorTest.class, - ParameterSetGeneratorTest.class, - CacheBasedLabelPairCursorTest.class, - LocalLabelPairCursorTest.class, - AccuracyTest.class, - PrecisionTest.class, - RecallTest.class, - FmeasureTest.class, - SHA256UniformMapperTest.class, - TrainTestDatasetSplitterTest.class -}) +@RunWith(AllTests.class) public class SelectionTestSuite { - // No-op. 
+ /** */ + public static TestSuite suite() { + TestSuite suite = new TestSuite(SelectionTestSuite.class.getSimpleName()); + + suite.addTest(new JUnit4TestAdapter(CrossValidationTest.class)); + suite.addTest(new JUnit4TestAdapter(ParameterSetGeneratorTest.class)); + suite.addTest(new JUnit4TestAdapter(LocalLabelPairCursorTest.class)); + suite.addTest(new JUnit4TestAdapter(AccuracyTest.class)); + suite.addTest(new JUnit4TestAdapter(PrecisionTest.class)); + suite.addTest(new JUnit4TestAdapter(RecallTest.class)); + suite.addTest(new JUnit4TestAdapter(FmeasureTest.class)); + suite.addTest(new JUnit4TestAdapter(SHA256UniformMapperTest.class)); + suite.addTest(new JUnit4TestAdapter(TrainTestDatasetSplitterTest.class)); + suite.addTest(new JUnit4TestAdapter(EvaluatorTest.class)); + suite.addTest(new JUnit4TestAdapter(CacheBasedLabelPairCursorTest.class)); + + return suite; + } } diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/cursor/CacheBasedLabelPairCursorTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/cursor/CacheBasedLabelPairCursorTest.java index 8d020770a70cc..33b0eed351451 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/cursor/CacheBasedLabelPairCursorTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/cursor/CacheBasedLabelPairCursorTest.java @@ -24,10 +24,14 @@ import org.apache.ignite.ml.math.primitives.vector.VectorUtils; import org.apache.ignite.ml.selection.scoring.LabelPair; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link CacheBasedLabelPairCursor}. */ +@RunWith(JUnit4.class) public class CacheBasedLabelPairCursorTest extends GridCommonAbstractTest { /** Number of nodes in grid. 
*/ private static final int NODE_COUNT = 4; @@ -55,6 +59,7 @@ public class CacheBasedLabelPairCursorTest extends GridCommonAbstractTest { } /** */ + @Test public void testIterate() { IgniteCache data = ignite.createCache(UUID.randomUUID().toString()); diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/evaluator/EvaluatorTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/evaluator/EvaluatorTest.java index 6f7aa366a50e0..58e30b2d51284 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/evaluator/EvaluatorTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/selection/scoring/evaluator/EvaluatorTest.java @@ -47,6 +47,9 @@ import org.apache.ignite.ml.tree.DecisionTreeNode; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; import org.apache.ignite.thread.IgniteThread; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.junit.Assert.assertArrayEquals; @@ -54,6 +57,7 @@ * Tests for {@link Evaluator} that require to start the whole Ignite infrastructure. IMPL NOTE based on * Step_8_CV_with_Param_Grid example. 
*/ +@RunWith(JUnit4.class) public class EvaluatorTest extends GridCommonAbstractTest { /** Number of nodes in grid */ private static final int NODE_COUNT = 3; @@ -83,6 +87,7 @@ public class EvaluatorTest extends GridCommonAbstractTest { } /** */ + @Test public void testBasic() throws InterruptedException { AtomicReference actualAccuracy = new AtomicReference<>(null); AtomicReference actualAccuracy2 = new AtomicReference<>(null); @@ -129,6 +134,7 @@ public void testBasic() throws InterruptedException { } /** */ + @Test public void testBasic2() throws InterruptedException { AtomicReference actualAccuracy = new AtomicReference<>(null); AtomicReference actualAccuracy2 = new AtomicReference<>(null); @@ -162,6 +168,7 @@ public void testBasic2() throws InterruptedException { } /** */ + @Test public void testBasic3() throws InterruptedException { AtomicReference actualAccuracy = new AtomicReference<>(null); AtomicReference actualAccuracy2 = new AtomicReference<>(null); diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeClassificationTrainerIntegrationTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeClassificationTrainerIntegrationTest.java index aadc8a76a0d33..f7f13f25b1f20 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeClassificationTrainerIntegrationTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeClassificationTrainerIntegrationTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.internal.util.IgniteUtils; import org.apache.ignite.ml.math.primitives.vector.VectorUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link DecisionTreeClassificationTrainer} that require to start the whole Ignite infrastructure. 
*/ +@RunWith(JUnit4.class) public class DecisionTreeClassificationTrainerIntegrationTest extends GridCommonAbstractTest { /** Number of nodes in grid */ private static final int NODE_COUNT = 3; @@ -51,7 +55,7 @@ public class DecisionTreeClassificationTrainerIntegrationTest extends GridCommon /** * {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { + @Override protected void beforeTest() { /* Grid instance. */ ignite = grid(NODE_COUNT); ignite.configuration().setPeerClassLoadingEnabled(true); @@ -59,6 +63,7 @@ public class DecisionTreeClassificationTrainerIntegrationTest extends GridCommon } /** */ + @Test public void testFit() { int size = 100; diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeRegressionTrainerIntegrationTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeRegressionTrainerIntegrationTest.java index a190685a8475b..9b25ce075c0f7 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeRegressionTrainerIntegrationTest.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeRegressionTrainerIntegrationTest.java @@ -26,10 +26,14 @@ import org.apache.ignite.internal.util.IgniteUtils; import org.apache.ignite.ml.math.primitives.vector.VectorUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests for {@link DecisionTreeRegressionTrainer} that require to start the whole Ignite infrastructure. */ +@RunWith(JUnit4.class) public class DecisionTreeRegressionTrainerIntegrationTest extends GridCommonAbstractTest { /** Number of nodes in grid */ private static final int NODE_COUNT = 3; @@ -51,7 +55,7 @@ public class DecisionTreeRegressionTrainerIntegrationTest extends GridCommonAbst /** * {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { + @Override protected void beforeTest() { /* Grid instance. 
*/ ignite = grid(NODE_COUNT); ignite.configuration().setPeerClassLoadingEnabled(true); @@ -59,6 +63,7 @@ public class DecisionTreeRegressionTrainerIntegrationTest extends GridCommonAbst } /** */ + @Test public void testFit() { int size = 100; diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeTestSuite.java b/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeTestSuite.java index 2cbb486c3df5f..7c984c1002fc3 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeTestSuite.java +++ b/modules/ml/src/test/java/org/apache/ignite/ml/tree/DecisionTreeTestSuite.java @@ -17,6 +17,8 @@ package org.apache.ignite.ml.tree; +import junit.framework.JUnit4TestAdapter; +import junit.framework.TestSuite; import org.apache.ignite.ml.tree.data.DecisionTreeDataTest; import org.apache.ignite.ml.tree.impurity.gini.GiniImpurityMeasureCalculatorTest; import org.apache.ignite.ml.tree.impurity.gini.GiniImpurityMeasureTest; @@ -25,24 +27,29 @@ import org.apache.ignite.ml.tree.impurity.util.SimpleStepFunctionCompressorTest; import org.apache.ignite.ml.tree.impurity.util.StepFunctionTest; import org.junit.runner.RunWith; -import org.junit.runners.Suite; +import org.junit.runners.AllTests; /** * Test suite for all tests located in {@link org.apache.ignite.ml.tree} package. 
*/ -@RunWith(Suite.class) -@Suite.SuiteClasses({ - DecisionTreeClassificationTrainerTest.class, - DecisionTreeRegressionTrainerTest.class, - DecisionTreeClassificationTrainerIntegrationTest.class, - DecisionTreeRegressionTrainerIntegrationTest.class, - DecisionTreeDataTest.class, - GiniImpurityMeasureCalculatorTest.class, - GiniImpurityMeasureTest.class, - MSEImpurityMeasureCalculatorTest.class, - MSEImpurityMeasureTest.class, - StepFunctionTest.class, - SimpleStepFunctionCompressorTest.class -}) +@RunWith(AllTests.class) public class DecisionTreeTestSuite { + /** */ + public static TestSuite suite() { + TestSuite suite = new TestSuite(DecisionTreeTestSuite.class.getSimpleName()); + + suite.addTest(new JUnit4TestAdapter(DecisionTreeClassificationTrainerTest.class)); + suite.addTest(new JUnit4TestAdapter(DecisionTreeRegressionTrainerTest.class)); + suite.addTest(new JUnit4TestAdapter(DecisionTreeDataTest.class)); + suite.addTest(new JUnit4TestAdapter(GiniImpurityMeasureCalculatorTest.class)); + suite.addTest(new JUnit4TestAdapter(GiniImpurityMeasureTest.class)); + suite.addTest(new JUnit4TestAdapter(MSEImpurityMeasureCalculatorTest.class)); + suite.addTest(new JUnit4TestAdapter(MSEImpurityMeasureTest.class)); + suite.addTest(new JUnit4TestAdapter(StepFunctionTest.class)); + suite.addTest(new JUnit4TestAdapter(SimpleStepFunctionCompressorTest.class)); + suite.addTest(new JUnit4TestAdapter(DecisionTreeRegressionTrainerIntegrationTest.class)); + suite.addTest(new JUnit4TestAdapter(DecisionTreeClassificationTrainerIntegrationTest.class)); + + return suite; + } } diff --git a/modules/ml/src/test/java/org/apache/ignite/ml/tree/performance/DecisionTreeMNISTIntegrationTest.java b/modules/ml/src/test/java/org/apache/ignite/ml/tree/performance/DecisionTreeMNISTIntegrationTest.java index ca513ed432adb..e0cdc96348f25 100644 --- a/modules/ml/src/test/java/org/apache/ignite/ml/tree/performance/DecisionTreeMNISTIntegrationTest.java +++ 
b/modules/ml/src/test/java/org/apache/ignite/ml/tree/performance/DecisionTreeMNISTIntegrationTest.java @@ -31,11 +31,15 @@ import org.apache.ignite.ml.tree.impurity.util.SimpleStepFunctionCompressor; import org.apache.ignite.ml.util.MnistUtils; import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; /** * Tests {@link DecisionTreeClassificationTrainer} on the MNIST dataset that require to start the whole Ignite * infrastructure. For manual run. */ +@RunWith(JUnit4.class) public class DecisionTreeMNISTIntegrationTest extends GridCommonAbstractTest { /** Number of nodes in grid */ private static final int NODE_COUNT = 3; @@ -57,7 +61,7 @@ public class DecisionTreeMNISTIntegrationTest extends GridCommonAbstractTest { /** * {@inheritDoc} */ - @Override protected void beforeTest() throws Exception { + @Override protected void beforeTest() { /* Grid instance. */ ignite = grid(NODE_COUNT); ignite.configuration().setPeerClassLoadingEnabled(true); @@ -65,6 +69,7 @@ public class DecisionTreeMNISTIntegrationTest extends GridCommonAbstractTest { } /** Tests on the MNIST dataset. For manual run. 
*/ + @Test public void testMNIST() throws IOException { CacheConfiguration trainingSetCacheCfg = new CacheConfiguration<>(); trainingSetCacheCfg.setAffinity(new RendezvousAffinityFunction(false, 10)); diff --git a/modules/mqtt/src/test/java/org/apache/ignite/stream/mqtt/IgniteMqttStreamerTest.java b/modules/mqtt/src/test/java/org/apache/ignite/stream/mqtt/IgniteMqttStreamerTest.java index d8c15eae0256f..029ff6e5c7e04 100644 --- a/modules/mqtt/src/test/java/org/apache/ignite/stream/mqtt/IgniteMqttStreamerTest.java +++ b/modules/mqtt/src/test/java/org/apache/ignite/stream/mqtt/IgniteMqttStreamerTest.java @@ -53,14 +53,16 @@ import org.eclipse.paho.client.mqttv3.MqttException; import org.eclipse.paho.client.mqttv3.MqttMessage; import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence; -import org.junit.After; -import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT; /** * Test for {@link MqttStreamer}. */ +@RunWith(JUnit4.class) public class IgniteMqttStreamerTest extends GridCommonAbstractTest { /** The test data. */ private static final Map TEST_DATA = new HashMap<>(); @@ -105,9 +107,8 @@ public IgniteMqttStreamerTest() { /** * @throws Exception If failed. */ - @Before @SuppressWarnings("unchecked") - public void beforeTest() throws Exception { + @Override public void beforeTest() throws Exception { grid().getOrCreateCache(defaultCacheConfiguration()); // find an available local port @@ -153,8 +154,7 @@ public void beforeTest() throws Exception { /** * @throws Exception If failed. */ - @After - public void afterTest() throws Exception { + @Override public void afterTest() throws Exception { try { streamer.stop(); } @@ -173,6 +173,7 @@ public void afterTest() throws Exception { /** * @throws Exception If failed. 
*/ + @Test public void testConnectDisconnect() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); @@ -194,6 +195,7 @@ public void testConnectDisconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testConnectionStatusWithBrokerDisconnection() throws Exception { fail("https://issues.apache.org/jira/browse/IGNITE-2255"); @@ -225,6 +227,7 @@ public void testConnectionStatusWithBrokerDisconnection() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingleTopic_NoQoS_OneEntryPerMessage() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); @@ -247,6 +250,7 @@ public void testSingleTopic_NoQoS_OneEntryPerMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleTopics_NoQoS_OneEntryPerMessage() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); @@ -273,6 +277,7 @@ public void testMultipleTopics_NoQoS_OneEntryPerMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingleTopic_NoQoS_MultipleEntriesOneMessage() throws Exception { // configure streamer streamer.setMultipleTupleExtractor(multipleTupleExtractor()); @@ -295,6 +300,7 @@ public void testSingleTopic_NoQoS_MultipleEntriesOneMessage() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleTopics_NoQoS_MultipleEntriesOneMessage() throws Exception { // configure streamer streamer.setMultipleTupleExtractor(multipleTupleExtractor()); @@ -321,6 +327,7 @@ public void testMultipleTopics_NoQoS_MultipleEntriesOneMessage() throws Exceptio /** * @throws Exception If failed. 
*/ + @Test public void testSingleTopic_NoQoS_ConnectOptions_Durable() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); @@ -362,6 +369,7 @@ public void testSingleTopic_NoQoS_ConnectOptions_Durable() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingleTopic_NoQoS_Reconnect() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); @@ -408,6 +416,7 @@ public void testSingleTopic_NoQoS_Reconnect() throws Exception { /** * @throws Exception If failed. */ + @Test public void testSingleTopic_NoQoS_RetryOnce() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); @@ -447,6 +456,7 @@ public void testSingleTopic_NoQoS_RetryOnce() throws Exception { /** * @throws Exception If failed. */ + @Test public void testMultipleTopics_MultipleQoS_OneEntryPerMessage() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); @@ -474,6 +484,7 @@ public void testMultipleTopics_MultipleQoS_OneEntryPerMessage() throws Exception /** * @throws Exception If failed. */ + @Test public void testMultipleTopics_MultipleQoS_Mismatch() throws Exception { // configure streamer streamer.setSingleTupleExtractor(singleTupleExtractor()); diff --git a/modules/platforms/cpp/common/include/ignite/common/utils.h b/modules/platforms/cpp/common/include/ignite/common/utils.h index 29e36f525d90b..cbcc10ce6ab58 100644 --- a/modules/platforms/cpp/common/include/ignite/common/utils.h +++ b/modules/platforms/cpp/common/include/ignite/common/utils.h @@ -548,6 +548,62 @@ namespace ignite return BoundInstance(instance, mfunc); } + /** + * Method guard class template. + * + * Upon destruction calls provided method on provided class instance. + * + * @tparam T Value type. + */ + template + class MethodGuard + { + public: + /** Value type. */ + typedef T ValueType; + + /** Method type.
*/ + typedef void (ValueType::*MethodType)(); + + /** + * Constructor. + * + * @param val Instance to call the method on. + * @param method Method to call. + */ + MethodGuard(ValueType* val, MethodType method) : + val(val), + method(method) + { + // No-op. + } + + /** + * Destructor. + */ + ~MethodGuard() + { + if (val && method) + (val->*method)(); + } + + /** + * Release control over object. + */ + void Release() + { + val = 0; + method = 0; + } + + private: + /** Instance to call the method on. */ + ValueType* val; + + /** Method to call. */ + MethodType method; + }; + /** * Get dynamic library full name. * @param name Name without extension. diff --git a/modules/platforms/cpp/core-test/src/continuous_query_test.cpp b/modules/platforms/cpp/core-test/src/continuous_query_test.cpp index a5136d26d3019..1168bba21ee60 100644 --- a/modules/platforms/cpp/core-test/src/continuous_query_test.cpp +++ b/modules/platforms/cpp/core-test/src/continuous_query_test.cpp @@ -137,8 +137,9 @@ class Listener : public CacheEntryEventListener * @param key Key. * @param oldVal Old value. * @param val Current value. + * @param eType Event type.
*/ - void CheckNextEvent(const K& key, boost::optional oldVal, boost::optional val) + void CheckNextEvent(const K& key, boost::optional oldVal, boost::optional val, CacheEntryEventType::T eType) { CacheEntryEvent event; bool success = eventQueue.Pull(event, boost::chrono::seconds(1)); @@ -148,6 +149,7 @@ class Listener : public CacheEntryEventListener BOOST_CHECK_EQUAL(event.GetKey(), key); BOOST_CHECK_EQUAL(event.HasOldValue(), oldVal.is_initialized()); BOOST_CHECK_EQUAL(event.HasValue(), val.is_initialized()); + BOOST_CHECK_EQUAL(event.GetEventType(), eType); if (oldVal && event.HasOldValue()) BOOST_CHECK_EQUAL(event.GetOldValue().value, oldVal->value); @@ -357,16 +359,16 @@ struct ContinuousQueryTestSuiteFixture void CheckEvents(Cache& cache, Listener& lsnr) { cache.Put(1, TestEntry(10)); - lsnr.CheckNextEvent(1, boost::none, TestEntry(10)); + lsnr.CheckNextEvent(1, boost::none, TestEntry(10), CacheEntryEventType::CREATE); cache.Put(1, TestEntry(20)); - lsnr.CheckNextEvent(1, TestEntry(10), TestEntry(20)); + lsnr.CheckNextEvent(1, TestEntry(10), TestEntry(20), CacheEntryEventType::UPDATE); cache.Put(2, TestEntry(20)); - lsnr.CheckNextEvent(2, boost::none, TestEntry(20)); + lsnr.CheckNextEvent(2, boost::none, TestEntry(20), CacheEntryEventType::CREATE); cache.Remove(1); - lsnr.CheckNextEvent(1, TestEntry(20), TestEntry(20)); + lsnr.CheckNextEvent(1, TestEntry(20), TestEntry(20), CacheEntryEventType::REMOVE); } IGNITE_EXPORTED_CALL void IgniteModuleInit0(ignite::IgniteBindingContext& context) @@ -707,14 +709,14 @@ BOOST_AUTO_TEST_CASE(TestFilterSingleNode) cache.Put(150, TestEntry(1502)); cache.Remove(150); - lsnr.CheckNextEvent(100, boost::none, TestEntry(1000)); - lsnr.CheckNextEvent(101, boost::none, TestEntry(1010)); + lsnr.CheckNextEvent(100, boost::none, TestEntry(1000), CacheEntryEventType::CREATE); + lsnr.CheckNextEvent(101, boost::none, TestEntry(1010), CacheEntryEventType::CREATE); - lsnr.CheckNextEvent(142, boost::none, TestEntry(1420)); - 
lsnr.CheckNextEvent(142, TestEntry(1420), TestEntry(1421)); - lsnr.CheckNextEvent(142, TestEntry(1421), TestEntry(1421)); + lsnr.CheckNextEvent(142, boost::none, TestEntry(1420), CacheEntryEventType::CREATE); + lsnr.CheckNextEvent(142, TestEntry(1420), TestEntry(1421), CacheEntryEventType::UPDATE); + lsnr.CheckNextEvent(142, TestEntry(1421), TestEntry(1421), CacheEntryEventType::REMOVE); - lsnr.CheckNextEvent(149, boost::none, TestEntry(1490)); + lsnr.CheckNextEvent(149, boost::none, TestEntry(1490), CacheEntryEventType::CREATE); } BOOST_AUTO_TEST_CASE(TestFilterMultipleNodes) @@ -757,14 +759,14 @@ BOOST_AUTO_TEST_CASE(TestFilterMultipleNodes) for (int i = 200; i < 250; ++i) cache2.Put(i, TestEntry(i * 10)); - lsnr.CheckNextEvent(100, boost::none, TestEntry(1000)); - lsnr.CheckNextEvent(101, boost::none, TestEntry(1010)); + lsnr.CheckNextEvent(100, boost::none, TestEntry(1000), CacheEntryEventType::CREATE); + lsnr.CheckNextEvent(101, boost::none, TestEntry(1010), CacheEntryEventType::CREATE); - lsnr.CheckNextEvent(142, boost::none, TestEntry(1420)); - lsnr.CheckNextEvent(142, TestEntry(1420), TestEntry(1421)); - lsnr.CheckNextEvent(142, TestEntry(1421), TestEntry(1421)); + lsnr.CheckNextEvent(142, boost::none, TestEntry(1420), CacheEntryEventType::CREATE); + lsnr.CheckNextEvent(142, TestEntry(1420), TestEntry(1421), CacheEntryEventType::UPDATE); + lsnr.CheckNextEvent(142, TestEntry(1421), TestEntry(1421), CacheEntryEventType::REMOVE); - lsnr.CheckNextEvent(149, boost::none, TestEntry(1490)); + lsnr.CheckNextEvent(149, boost::none, TestEntry(1490), CacheEntryEventType::CREATE); } BOOST_AUTO_TEST_SUITE_END() \ No newline at end of file diff --git a/modules/platforms/cpp/core/include/ignite/cache/event/cache_entry_event.h b/modules/platforms/cpp/core/include/ignite/cache/event/cache_entry_event.h index 14fa18591df83..967a16e4cddb1 100644 --- a/modules/platforms/cpp/core/include/ignite/cache/event/cache_entry_event.h +++ 
b/modules/platforms/cpp/core/include/ignite/cache/event/cache_entry_event.h @@ -30,6 +30,41 @@ namespace ignite { namespace cache { + /** + * Cache entry event type. + */ + struct CacheEntryEventType + { + enum T + { + /** Event type - Create. */ + CREATE = 0, + + /** Event type - Update. */ + UPDATE = 1, + + /** Event type - Remove. */ + REMOVE = 2, + }; + + static T FromInt8(int8_t val) + { + switch (val) + { + case CREATE: + case UPDATE: + case REMOVE: + return static_cast(val); + + default: + { + IGNITE_ERROR_FORMATTED_1(IgniteError::IGNITE_ERR_BINARY, + "Unsupported CacheEntryEventType", "val", val); + } + } + } + }; + /** * Cache entry event class template. * @@ -48,7 +83,8 @@ namespace ignite CacheEntryEvent() : CacheEntry(), oldVal(), - hasOldValue(false) + hasOldValue(false), + eventType(CacheEntryEventType::CREATE) { // No-op. } @@ -61,7 +97,8 @@ namespace ignite CacheEntryEvent(const CacheEntryEvent& other) : CacheEntry(other), oldVal(other.oldVal), - hasOldValue(other.hasOldValue) + hasOldValue(other.hasOldValue), + eventType(other.eventType) { // No-op. } @@ -88,6 +125,7 @@ namespace ignite oldVal = other.oldVal; hasOldValue = other.hasOldValue; + eventType = other.eventType; } return *this; @@ -113,6 +151,18 @@ namespace ignite return hasOldValue; } + /** + * Get event type. + * + * @see CacheEntryEventType::T for details on possible types of events. + * + * @return Event type. + */ + CacheEntryEventType::T GetEventType() const + { + return eventType; + } + /** * Reads cache event using provided raw reader. * @@ -124,6 +174,9 @@ namespace ignite this->hasOldValue = reader.TryReadObject(this->oldVal); this->hasValue = reader.TryReadObject(this->val); + + int8_t eventTypeByte = reader.ReadInt8(); + this->eventType = CacheEntryEventType::FromInt8(eventTypeByte); } private: @@ -132,6 +185,9 @@ namespace ignite /** Indicates whether old value exists */ bool hasOldValue; + + /** Event type. 
*/ + CacheEntryEventType::T eventType; }; } } diff --git a/modules/platforms/cpp/core/namespaces.dox b/modules/platforms/cpp/core/namespaces.dox index 60e8cf743a108..eccfc824a890e 100644 --- a/modules/platforms/cpp/core/namespaces.dox +++ b/modules/platforms/cpp/core/namespaces.dox @@ -18,7 +18,7 @@ /** * \mainpage Apache Ignite C++ * - * Apache Ignite In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for + * Apache Ignite In-Memory Database and Caching Platform is a high-performance, integrated and distributed in-memory platform for * computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with * traditional disk-based or flash-based technologies. */ diff --git a/modules/platforms/cpp/core/src/ignition.cpp b/modules/platforms/cpp/core/src/ignition.cpp index 7d90a52c77682..087d42d04e2c7 100644 --- a/modules/platforms/cpp/core/src/ignition.cpp +++ b/modules/platforms/cpp/core/src/ignition.cpp @@ -75,8 +75,7 @@ namespace ignite * Constructor. */ JvmOptions() : - size(0), - opts(0) + opts() { // No-op. } @@ -100,39 +99,43 @@ { Deinit(); - size = 3 + static_cast(cfg.jvmOpts.size()); + const size_t REQ_OPTS_CNT = 4; + const size_t JAVA9_OPTS_CNT = 6; - if (!home.empty()) - ++size; - - // Brackets '()' here guarantee for the array to be zeroed. - // Important to avoid crash in case of exception. - opts = new char*[size](); - - int idx = 0; + opts.reserve(cfg.jvmOpts.size() + REQ_OPTS_CNT + JAVA9_OPTS_CNT); // 1. Set classpath. std::string cpFull = "-Djava.class.path=" + cp; - opts[idx++] = CopyChars(cpFull.c_str()); + opts.push_back(CopyChars(cpFull.c_str())); // 2. Set home. if (!home.empty()) { std::string homeFull = "-DIGNITE_HOME=" + home; - opts[idx++] = CopyChars(homeFull.c_str()); + opts.push_back(CopyChars(homeFull.c_str())); } // 3. Set Xms, Xmx.
std::string xmsStr = JvmMemoryString("-Xms", cfg.jvmInitMem); std::string xmxStr = JvmMemoryString("-Xmx", cfg.jvmMaxMem); - opts[idx++] = CopyChars(xmsStr.c_str()); - opts[idx++] = CopyChars(xmxStr.c_str()); + opts.push_back(CopyChars(xmsStr.c_str())); + opts.push_back(CopyChars(xmxStr.c_str())); // 4. Set the rest options. for (std::list::const_iterator i = cfg.jvmOpts.begin(); i != cfg.jvmOpts.end(); ++i) - opts[idx++] = CopyChars(i->c_str()); + opts.push_back(CopyChars(i->c_str())); + + // Adding options for Java 9 or later + if (IsJava9OrLater()) { + opts.push_back(CopyChars("--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED")); + opts.push_back(CopyChars("--add-exports=java.base/sun.nio.ch=ALL-UNNAMED")); + opts.push_back(CopyChars("--add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED")); + opts.push_back(CopyChars("--add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED")); + opts.push_back(CopyChars("--add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED")); + opts.push_back(CopyChars("--illegal-access=permit")); + } } /** @@ -140,13 +143,10 @@ namespace ignite */ void Deinit() { - if (opts) - { - for (int i = 0; i < size; ++i) - ReleaseChars(opts[i]); + for (size_t i = 0; i < opts.size(); ++i) + ReleaseChars(opts[i]); - delete[] opts; - } + opts.clear(); } /** @@ -154,9 +154,9 @@ namespace ignite * * @return Built options */ - char** GetOpts() const + char** GetOpts() { - return opts; + return &opts[0]; } /** @@ -166,15 +166,12 @@ namespace ignite */ int GetSize() const { - return size; + return static_cast(opts.size()); } private: - /** Size */ - int size; - /** Options array. 
*/ - char** opts; + std::vector opts; }; Ignite Ignition::Start(const IgniteConfiguration& cfg) diff --git a/modules/platforms/cpp/examples/Makefile.am b/modules/platforms/cpp/examples/Makefile.am index 29b140bb09866..d7a49b7b86b1a 100644 --- a/modules/platforms/cpp/examples/Makefile.am +++ b/modules/platforms/cpp/examples/Makefile.am @@ -23,5 +23,6 @@ SUBDIRS = \ query-example \ continuous-query-example \ compute-example \ + thin-client-put-get-example \ include diff --git a/modules/platforms/cpp/examples/configure.ac b/modules/platforms/cpp/examples/configure.ac index 8cf7cf71d6f89..9c1acdd15b7d2 100644 --- a/modules/platforms/cpp/examples/configure.ac +++ b/modules/platforms/cpp/examples/configure.ac @@ -58,6 +58,7 @@ AC_CONFIG_FILES([ \ query-example/Makefile \ continuous-query-example/Makefile \ compute-example/Makefile \ + thin-client-put-get-example/Makefile \ ]) AC_OUTPUT diff --git a/modules/platforms/cpp/examples/project/vs/ignite-examples.sln b/modules/platforms/cpp/examples/project/vs/ignite-examples.sln index bf837433b3b10..aeff805f1bd2d 100644 --- a/modules/platforms/cpp/examples/project/vs/ignite-examples.sln +++ b/modules/platforms/cpp/examples/project/vs/ignite-examples.sln @@ -11,6 +11,8 @@ Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "put-get-example", "..\..\pu EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "compute-example", "..\..\compute-example\project\vs\compute-example.vcxproj", "{18BB0A18-8213-472A-81A0-9D9753697135}" EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "thin-client-put-get-example", "..\..\thin-client-put-get-example\project\vs\thin-client-put-get-example.vcxproj", "{8F045A49-A1C8-45B5-B9E4-FFB323AD1060}" +EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Release|x64 = Release|x64 @@ -37,6 +39,10 @@ Global {18BB0A18-8213-472A-81A0-9D9753697135}.Release|x64.Build.0 = Release|x64 {18BB0A18-8213-472A-81A0-9D9753697135}.Release|x86.ActiveCfg = Release|Win32 
{18BB0A18-8213-472A-81A0-9D9753697135}.Release|x86.Build.0 = Release|Win32 + {8F045A49-A1C8-45B5-B9E4-FFB323AD1060}.Release|x64.ActiveCfg = Release|x64 + {8F045A49-A1C8-45B5-B9E4-FFB323AD1060}.Release|x64.Build.0 = Release|x64 + {8F045A49-A1C8-45B5-B9E4-FFB323AD1060}.Release|x86.ActiveCfg = Release|Win32 + {8F045A49-A1C8-45B5-B9E4-FFB323AD1060}.Release|x86.Build.0 = Release|Win32 EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE diff --git a/modules/platforms/cpp/examples/thin-client-put-get-example/Makefile.am b/modules/platforms/cpp/examples/thin-client-put-get-example/Makefile.am new file mode 100644 index 0000000000000..f9f73453037d7 --- /dev/null +++ b/modules/platforms/cpp/examples/thin-client-put-get-example/Makefile.am @@ -0,0 +1,53 @@ +## +## Licensed to the Apache Software Foundation (ASF) under one or more +## contributor license agreements. See the NOTICE file distributed with +## this work for additional information regarding copyright ownership. +## The ASF licenses this file to You under the Apache License, Version 2.0 +## (the "License"); you may not use this file except in compliance with +## the License. You may obtain a copy of the License at +## +## http://www.apache.org/licenses/LICENSE-2.0 +## +## Unless required by applicable law or agreed to in writing, software +## distributed under the License is distributed on an "AS IS" BASIS, +## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +## See the License for the specific language governing permissions and +## limitations under the License. 
+## + +ACLOCAL_AMFLAGS =-I m4 + +noinst_PROGRAMS = ignite-thin-client-put-get-example + +AM_CPPFLAGS = \ + -I@top_srcdir@/include \ + -I@top_srcdir@/../thin-client/include \ + -I@top_srcdir@/../thin-client/os/linux/include \ + -I@top_srcdir@/../common/include \ + -I@top_srcdir@/../common/os/linux/include \ + -I@top_srcdir@/../binary/include \ + -D__STDC_LIMIT_MACROS \ + -D__STDC_CONSTANT_MACROS + +AM_CXXFLAGS = \ + -Wall \ + -std=c++03 + +ignite_thin_client_put_get_example_LDADD = \ + @top_srcdir@/../thin-client/libignite-thin-client.la \ + -lpthread + +ignite_thin_client_put_get_example_LDFLAGS = \ + -static-libtool-libs + +ignite_thin_client_put_get_example_SOURCES = \ + src/thin_client_put_get_example.cpp + +run-check: check + ./ignite-thin-client-put-get-example -p + +clean-local: clean-check + $(RM) *.gcno *.gcda + +clean-check: + $(RM) $(ignite_thin_client_put_get_example_OBJECTS) diff --git a/modules/platforms/cpp/examples/thin-client-put-get-example/project/vs/thin-client-put-get-example.vcxproj b/modules/platforms/cpp/examples/thin-client-put-get-example/project/vs/thin-client-put-get-example.vcxproj new file mode 100644 index 0000000000000..12c8a9871c0d0 --- /dev/null +++ b/modules/platforms/cpp/examples/thin-client-put-get-example/project/vs/thin-client-put-get-example.vcxproj @@ -0,0 +1,107 @@ + + + + + Release + Win32 + + + Release + x64 + + + + {8F045A49-A1C8-45B5-B9E4-FFB323AD1060} + Win32Proj + igniteexamples + + + + Application + false + v100 + true + MultiByte + + + Application + false + v100 + true + MultiByte + + + + + + + + + + + + + false + + + false + + + + Level3 + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions) + ..\..\..\include;..\..\..\..\common\os\win\include;..\..\..\..\common\include;..\..\..\..\binary\include;..\..\..\..\thin-client\os\win\include;..\..\..\..\thin-client\include;%(AdditionalIncludeDirectories) + + + Console + true + true + true + 
ignite.binary.lib;ignite.thin-client.lib;%(AdditionalDependencies) + ..\..\..\..\project\vs\$(Platform)\$(Configuration)\;%(AdditionalLibraryDirectories) + + + copy "$(ProjectDir)..\..\..\..\project\vs\$(Platform)\$(Configuration)\ignite.common.dll" "$(OutDir)" +copy "$(ProjectDir)..\..\..\..\project\vs\$(Platform)\$(Configuration)\ignite.thin-client.dll" "$(OutDir)" + + + + + Level3 + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions) + ..\..\..\include;..\..\..\..\common\os\win\include;..\..\..\..\common\include;..\..\..\..\binary\include;..\..\..\..\thin-client\os\win\include;..\..\..\..\thin-client\include;%(AdditionalIncludeDirectories) + + + Console + true + true + true + ignite.binary.lib;ignite.thin-client.lib;%(AdditionalDependencies) + ..\..\..\..\project\vs\$(Platform)\$(Configuration)\;%(AdditionalLibraryDirectories) + + + copy "$(ProjectDir)..\..\..\..\project\vs\$(Platform)\$(Configuration)\ignite.common.dll" "$(OutDir)" +copy "$(ProjectDir)..\..\..\..\project\vs\$(Platform)\$(Configuration)\ignite.thin-client.dll" "$(OutDir)" + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/modules/platforms/cpp/examples/thin-client-put-get-example/project/vs/thin-client-put-get-example.vcxproj.filters b/modules/platforms/cpp/examples/thin-client-put-get-example/project/vs/thin-client-put-get-example.vcxproj.filters new file mode 100644 index 0000000000000..41d4998e2eb1c --- /dev/null +++ b/modules/platforms/cpp/examples/thin-client-put-get-example/project/vs/thin-client-put-get-example.vcxproj.filters @@ -0,0 +1,35 @@ + + + + + Header Files + + + Header Files + + + Header Files + + + + + {4FC737F1-C7A5-4376-A066-2A32D752A2FF} + + + {93995380-89BD-4b04-88EB-625FBE52EBFB} + + + {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} + + + + + Source Files + + + + + Config + + + \ No newline at end of file diff --git a/modules/platforms/cpp/examples/thin-client-put-get-example/src/thin_client_put_get_example.cpp 
b/modules/platforms/cpp/examples/thin-client-put-get-example/src/thin_client_put_get_example.cpp new file mode 100644 index 0000000000000..252ad3d90a8b6 --- /dev/null +++ b/modules/platforms/cpp/examples/thin-client-put-get-example/src/thin_client_put_get_example.cpp @@ -0,0 +1,127 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include +#include + +#include +#include + +#include "ignite/examples/organization.h" + +using namespace ignite; +using namespace thin; +using namespace cache; + +using namespace examples; + +/* + * Execute individual Put and Get operations. + * + * @param cache Cache instance. + */ +void PutGet(CacheClient& cache) +{ + // Create new Organization to store in cache. + Organization org("Microsoft", Address("1096 Eddy Street, San Francisco, CA", 94109)); + + // Put organization to cache. + cache.Put(1, org); + + // Get recently created organization as a strongly-typed, fully de-serialized instance. + Organization orgFromCache = cache.Get(1); + + std::cout << ">>> Retrieved organization instance from cache: " << std::endl; + std::cout << orgFromCache.ToString() << std::endl; + std::cout << std::endl; +} + +/* + * Execute bulk Put and Get operations. + * + * @param cache Cache instance.
+ */ +void PutGetAll(CacheClient& cache) +{ + // Create new Organizations to store in cache. + Organization org1("Microsoft", Address("1096 Eddy Street, San Francisco, CA", 94109)); + Organization org2("Red Cross", Address("184 Fidler Drive, San Antonio, TX", 78205)); + + // Put created data entries to cache. + std::map vals; + + vals[1] = org1; + vals[2] = org2; + + cache.PutAll(vals); + + // Get recently created organizations as a strongly-typed fully de-serialized instances. + std::set keys; + + keys.insert(1); + keys.insert(2); + + std::map valsFromCache; + cache.GetAll(keys, valsFromCache); + + std::cout << ">>> Retrieved organization instances from cache: " << std::endl; + + for (std::map::iterator it = valsFromCache.begin(); it != valsFromCache.end(); ++it) + std::cout << it->second.ToString() << std::endl; + + std::cout << std::endl; +} + +int main() +{ + IgniteClientConfiguration cfg; + + cfg.SetEndPoints("127.0.0.1"); + + try + { + // Start a client. + IgniteClient client = IgniteClient::Start(cfg); + + std::cout << std::endl; + std::cout << ">>> Cache put-get example started." << std::endl; + std::cout << std::endl; + + // Get cache instance. + CacheClient cache = client.GetOrCreateCache("PutGetExample"); + + // Clear cache. + cache.Clear(); + + PutGet(cache); + PutGetAll(cache); + } + catch (IgniteError& err) + { + std::cout << "An error occurred: " << err.GetText() << std::endl; + + return err.GetCode(); + } + + std::cout << std::endl; + std::cout << ">>> Example finished, press 'Enter' to exit ..." 
<< std::endl; + std::cout << std::endl; + + std::cin.get(); + + return 0; +} \ No newline at end of file diff --git a/modules/platforms/cpp/jni/Makefile.am b/modules/platforms/cpp/jni/Makefile.am index 56eaa6c844f11..da107a2bdadf8 100644 --- a/modules/platforms/cpp/jni/Makefile.am +++ b/modules/platforms/cpp/jni/Makefile.am @@ -39,6 +39,7 @@ AM_CXXFLAGS = \ libignite_jni_la_LIBADD = \ -L$(JAVA_HOME)/jre/lib/amd64/server \ + -L$(JAVA_HOME)/lib/server \ @top_srcdir@/common/libignite-common.la libignite_jni_la_LDFLAGS = \ diff --git a/modules/platforms/cpp/jni/include/ignite/jni/java.h b/modules/platforms/cpp/jni/include/ignite/jni/java.h index c713e811ad733..196d941e49d03 100644 --- a/modules/platforms/cpp/jni/include/ignite/jni/java.h +++ b/modules/platforms/cpp/jni/include/ignite/jni/java.h @@ -114,6 +114,13 @@ namespace ignite typedef long long(JNICALL *InLongOutLongHandler)(void* target, int type, long long val); typedef long long(JNICALL *InLongLongLongObjectOutLongHandler)(void* target, int type, long long val1, long long val2, long long val3, void* arg); + /** + * Checks whether Java 9 or later is used. + * + * @return true if Java 9 or later is in use. + */ + bool IGNITE_IMPORT_EXPORT IsJava9OrLater(); + + /** + * JNI handlers holder.
*/ diff --git a/modules/platforms/cpp/jni/os/linux/src/utils.cpp b/modules/platforms/cpp/jni/os/linux/src/utils.cpp index 52e4097b67f51..0fda52dbed061 100644 --- a/modules/platforms/cpp/jni/os/linux/src/utils.cpp +++ b/modules/platforms/cpp/jni/os/linux/src/utils.cpp @@ -37,7 +37,8 @@ namespace ignite namespace jni { const char* JAVA_HOME = "JAVA_HOME"; - const char* JAVA_DLL = "/jre/lib/amd64/server/libjvm.so"; + const char* JAVA_DLL1 = "/jre/lib/amd64/server/libjvm.so"; + const char* JAVA_DLL2 = "/lib/server/libjvm.so"; const char* IGNITE_HOME = "IGNITE_HOME"; @@ -311,7 +312,12 @@ namespace ignite if (!javaEnv.empty()) { - std::string javaDll = javaEnv + JAVA_DLL; + std::string javaDll = javaEnv + JAVA_DLL1; + + if (FileExists(javaDll)) + return javaDll; + + javaDll = javaEnv + JAVA_DLL2; if (FileExists(javaDll)) return javaDll; diff --git a/modules/platforms/cpp/jni/src/java.cpp b/modules/platforms/cpp/jni/src/java.cpp index ac4ba6345d5c2..9a07a5f3af6c8 100644 --- a/modules/platforms/cpp/jni/src/java.cpp +++ b/modules/platforms/cpp/jni/src/java.cpp @@ -23,10 +23,15 @@ #include #include -#include "ignite/jni/utils.h" -#include "ignite/common/concurrent.h" -#include "ignite/jni/java.h" +#include +#include +#include #include +#include + +#ifndef JNI_VERSION_9 +#define JNI_VERSION_9 0x00090000 +#endif // JNI_VERSION_9 #define IGNITE_SAFE_PROC_NO_ARG(jniEnv, envPtr, type, field) { \ JniHandlers* hnds = reinterpret_cast(envPtr); \ @@ -94,7 +99,18 @@ namespace ignite { namespace java { - namespace gcc = ignite::common::concurrent; + namespace icc = ignite::common::concurrent; + + bool IGNITE_IMPORT_EXPORT IsJava9OrLater() + { + JavaVMInitArgs args; + + memset(&args, 0, sizeof(args)); + + args.version = JNI_VERSION_9; + + return JNI_GetDefaultJavaVMInitArgs(&args) == JNI_OK; + } /* --- Startup exception. --- */ class JvmException : public std::exception { @@ -114,26 +130,6 @@ namespace ignite } }; - /** - * Heloper function to copy characters. - * - * @param src Source. 
- * @return Result. - */ - char* CopyChars(const char* src) - { - if (src) - { - size_t len = strlen(src); - char* dest = new char[len + 1]; - strcpy(dest, src); - *(dest + len) = 0; - return dest; - } - else - return NULL; - } - JniErrorInfo::JniErrorInfo() : code(IGNITE_JNI_ERR_SUCCESS), errCls(NULL), errMsg(NULL) { // No-op. @@ -141,14 +137,14 @@ namespace ignite JniErrorInfo::JniErrorInfo(int code, const char* errCls, const char* errMsg) : code(code) { - this->errCls = CopyChars(errCls); - this->errMsg = CopyChars(errMsg); + this->errCls = common::CopyChars(errCls); + this->errMsg = common::CopyChars(errMsg); } JniErrorInfo::JniErrorInfo(const JniErrorInfo& other) : code(other.code) { - this->errCls = CopyChars(other.errCls); - this->errMsg = CopyChars(other.errMsg); + this->errCls = common::CopyChars(other.errCls); + this->errMsg = common::CopyChars(other.errMsg); } JniErrorInfo& JniErrorInfo::operator=(const JniErrorInfo& other) @@ -159,17 +155,9 @@ namespace ignite JniErrorInfo tmp(other); // 2. Swap with temp. - int code0 = code; - char* errCls0 = errCls; - char* errMsg0 = errMsg; - - code = tmp.code; - errCls = tmp.errCls; - errMsg = tmp.errMsg; - - tmp.code = code0; - tmp.errCls = errCls0; - tmp.errMsg = errMsg0; + std::swap(code, tmp.code); + std::swap(errCls, tmp.errCls); + std::swap(errMsg, tmp.errMsg); } return *this; @@ -177,11 +165,8 @@ namespace ignite JniErrorInfo::~JniErrorInfo() { - if (errCls) - delete[] errCls; - - if (errMsg) - delete[] errMsg; + delete[] errCls; + delete[] errMsg; } /** @@ -255,8 +240,8 @@ namespace ignite JniMethod M_PLATFORM_IGNITION_STOP_ALL = JniMethod("stopAll", "(Z)V", true); /* STATIC STATE. 
*/ - gcc::CriticalSection JVM_LOCK; - gcc::CriticalSection CONSOLE_LOCK; + icc::CriticalSection JVM_LOCK; + icc::CriticalSection CONSOLE_LOCK; JniJvm JVM; bool PRINT_EXCEPTION = false; std::vector<ConsoleWriteHandler> consoleWriteHandlers; @@ -725,7 +710,7 @@ namespace ignite } void JniContext::Detach() { - gcc::Memory::Fence(); + icc::Memory::Fence(); if (JVM.GetJvm()) { JNIEnv* env; diff --git a/modules/platforms/cpp/odbc-test/include/odbc_test_suite.h b/modules/platforms/cpp/odbc-test/include/odbc_test_suite.h index 7a33c6abe51d7..2381130385af0 100644 --- a/modules/platforms/cpp/odbc-test/include/odbc_test_suite.h +++ b/modules/platforms/cpp/odbc-test/include/odbc_test_suite.h @@ -44,7 +44,16 @@ namespace ignite void Prepare(); /** - * Establish connection to node. + * Establish connection to node using provided handles. + * + * @param conn Connection. + * @param statement Statement to allocate. + * @param connectStr Connection string. + */ + void Connect(SQLHDBC& conn, SQLHSTMT& statement, const std::string& connectStr); + + /** + * Establish connection to node using default handles. * * @param connectStr Connection string. */ diff --git a/modules/platforms/cpp/odbc-test/include/test_utils.h b/modules/platforms/cpp/odbc-test/include/test_utils.h index a1de23e7eb47a..58a4347d222dd 100644 --- a/modules/platforms/cpp/odbc-test/include/test_utils.h +++ b/modules/platforms/cpp/odbc-test/include/test_utils.h @@ -29,6 +29,12 @@ #include "ignite/ignition.h" +#define ODBC_THROW_ON_ERROR(ret, type, handle) \ + if (!SQL_SUCCEEDED(ret)) \ + { \ + throw ignite_test::GetOdbcError(type, handle); \ + } + #define ODBC_FAIL_ON_ERROR(ret, type, handle) \ if (!SQL_SUCCEEDED(ret)) \ { \ @@ -43,12 +49,65 @@ BOOST_FAIL(ignite_test::GetOdbcErrorMessage(type, handle) + ", msg = " + msg); \ } +/** + * Client ODBC error. + */ +class OdbcClientError : public std::exception +{ +public: + /** + * Constructor. + * + * @param sqlstate SQL state. + * @param message Error message.
+ */ + OdbcClientError(const std::string& sqlstate, const std::string& message) : + sqlstate(sqlstate), + message(message) + { + // No-op. + } + + /** + * Destructor. + */ + virtual ~OdbcClientError() IGNITE_NO_THROW + { + // No-op. + } + + /** + * Implementation of the standard std::exception::what() method. + * Synonym for GetText() method. + * + * @return Error message string. + */ + virtual const char* what() const IGNITE_NO_THROW + { + return message.c_str(); + } + + /** SQL state. */ + std::string sqlstate; + + /** Error message. */ + std::string message; +}; namespace ignite_test { /** Read buffer size. */ enum { ODBC_BUFFER_SIZE = 1024 }; + /** + * Extract error. + * + * @param handleType Type of the handle. + * @param handle Handle. + * @return Error. + */ + OdbcClientError GetOdbcError(SQLSMALLINT handleType, SQLHANDLE handle); + /** * Extract error state. * diff --git a/modules/platforms/cpp/odbc-test/src/meta_queries_test.cpp b/modules/platforms/cpp/odbc-test/src/meta_queries_test.cpp index 82dbf3a46261e..1edda7c124ba0 100644 --- a/modules/platforms/cpp/odbc-test/src/meta_queries_test.cpp +++ b/modules/platforms/cpp/odbc-test/src/meta_queries_test.cpp @@ -341,6 +341,40 @@ BOOST_AUTO_TEST_CASE(TestDdlTablesMeta) BOOST_REQUIRE_EQUAL(ret, SQL_NO_DATA); } +BOOST_AUTO_TEST_CASE(TestDdlTablesMetaTableTypeList) +{ + Connect("DRIVER={Apache Ignite};ADDRESS=127.0.0.1:11110;SCHEMA=PUBLIC"); + + SQLCHAR createTable[] = "create table TestTable(id int primary key, testColumn varchar)"; + SQLRETURN ret = SQLExecDirect(stmt, createTable, SQL_NTS); + + if (!SQL_SUCCEEDED(ret)) + BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + + SQLCHAR empty[] = ""; + SQLCHAR table[] = "TestTable"; + SQLCHAR typeList[] = "TABLE,VIEW"; + + ret = SQLTables(stmt, empty, SQL_NTS, empty, SQL_NTS, table, SQL_NTS, typeList, SQL_NTS); + + if (!SQL_SUCCEEDED(ret)) + BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + + ret = SQLFetch(stmt); + + if (!SQL_SUCCEEDED(ret)) + 
BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + + CheckStringColumn(stmt, 1, ""); + CheckStringColumn(stmt, 2, "\"PUBLIC\""); + CheckStringColumn(stmt, 3, "TESTTABLE"); + CheckStringColumn(stmt, 4, "TABLE"); + + ret = SQLFetch(stmt); + + BOOST_REQUIRE_EQUAL(ret, SQL_NO_DATA); +} + BOOST_AUTO_TEST_CASE(TestDdlColumnsMeta) { Connect("DRIVER={Apache Ignite};ADDRESS=127.0.0.1:11110;SCHEMA=PUBLIC"); @@ -384,4 +418,47 @@ BOOST_AUTO_TEST_CASE(TestDdlColumnsMeta) BOOST_REQUIRE_EQUAL(ret, SQL_NO_DATA); } +BOOST_AUTO_TEST_CASE(TestDdlColumnsMetaEscaped) +{ + Connect("DRIVER={Apache Ignite};ADDRESS=127.0.0.1:11110;SCHEMA=PUBLIC"); + + SQLCHAR createTable[] = "create table ESG_FOCUS(id int primary key, TEST_COLUMN varchar)"; + SQLRETURN ret = SQLExecDirect(stmt, createTable, SQL_NTS); + + if (!SQL_SUCCEEDED(ret)) + BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + + SQLCHAR empty[] = ""; + SQLCHAR table[] = "ESG\\_FOCUS"; + + ret = SQLColumns(stmt, empty, SQL_NTS, empty, SQL_NTS, table, SQL_NTS, empty, SQL_NTS); + + if (!SQL_SUCCEEDED(ret)) + BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + + ret = SQLFetch(stmt); + + if (!SQL_SUCCEEDED(ret)) + BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + + CheckStringColumn(stmt, 1, ""); + CheckStringColumn(stmt, 2, "\"PUBLIC\""); + CheckStringColumn(stmt, 3, "ESG_FOCUS"); + CheckStringColumn(stmt, 4, "ID"); + + ret = SQLFetch(stmt); + + if (!SQL_SUCCEEDED(ret)) + BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + + CheckStringColumn(stmt, 1, ""); + CheckStringColumn(stmt, 2, "\"PUBLIC\""); + CheckStringColumn(stmt, 3, "ESG_FOCUS"); + CheckStringColumn(stmt, 4, "TEST_COLUMN"); + + ret = SQLFetch(stmt); + + BOOST_REQUIRE_EQUAL(ret, SQL_NO_DATA); +} + BOOST_AUTO_TEST_SUITE_END() diff --git a/modules/platforms/cpp/odbc-test/src/odbc_test_suite.cpp b/modules/platforms/cpp/odbc-test/src/odbc_test_suite.cpp index 4f5be916d3666..b79d13ae88b46 100644 --- 
a/modules/platforms/cpp/odbc-test/src/odbc_test_suite.cpp +++ b/modules/platforms/cpp/odbc-test/src/odbc_test_suite.cpp @@ -56,6 +56,34 @@ namespace ignite BOOST_REQUIRE(dbc != NULL); } + void OdbcTestSuite::Connect(SQLHDBC& conn, SQLHSTMT& statement, const std::string& connectStr) + { + // Allocate a connection handle + SQLAllocHandle(SQL_HANDLE_DBC, env, &conn); + + BOOST_REQUIRE(conn != NULL); + + // Connect string + std::vector<SQLCHAR> connectStr0(connectStr.begin(), connectStr.end()); + + SQLCHAR outstr[ODBC_BUFFER_SIZE]; + SQLSMALLINT outstrlen; + + // Connecting to ODBC server. + SQLRETURN ret = SQLDriverConnect(conn, NULL, &connectStr0[0], static_cast<SQLSMALLINT>(connectStr0.size()), + outstr, sizeof(outstr), &outstrlen, SQL_DRIVER_COMPLETE); + + if (!SQL_SUCCEEDED(ret)) + { + BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_DBC, conn)); + } + + // Allocate a statement handle + SQLAllocHandle(SQL_HANDLE_STMT, conn, &statement); + + BOOST_REQUIRE(statement != NULL); + } + void OdbcTestSuite::Connect(const std::string& connectStr) { Prepare(); diff --git a/modules/platforms/cpp/odbc-test/src/sql_get_info_test.cpp b/modules/platforms/cpp/odbc-test/src/sql_get_info_test.cpp index d8ed0879aec4d..96e6164e5f21b 100644 --- a/modules/platforms/cpp/odbc-test/src/sql_get_info_test.cpp +++ b/modules/platforms/cpp/odbc-test/src/sql_get_info_test.cpp @@ -165,7 +165,7 @@ BOOST_AUTO_TEST_CASE(TestValues) CheckIntInfo(SQL_POS_OPERATIONS, 0); CheckIntInfo(SQL_SQL92_DATETIME_FUNCTIONS, SQL_SDF_CURRENT_DATE | SQL_SDF_CURRENT_TIMESTAMP); CheckIntInfo(SQL_SQL92_VALUE_EXPRESSIONS, SQL_SVE_CASE | SQL_SVE_CAST | SQL_SVE_COALESCE | SQL_SVE_NULLIF); - CheckIntInfo(SQL_STATIC_CURSOR_ATTRIBUTES1, SQL_CA1_NEXT); + CheckIntInfo(SQL_STATIC_CURSOR_ATTRIBUTES1, SQL_CA1_NEXT | SQL_CA1_ABSOLUTE); CheckIntInfo(SQL_STATIC_CURSOR_ATTRIBUTES2, 0); CheckIntInfo(SQL_PARAM_ARRAY_ROW_COUNTS, SQL_PARC_BATCH); CheckIntInfo(SQL_PARAM_ARRAY_SELECTS, SQL_PAS_NO_SELECT); diff --git 
a/modules/platforms/cpp/odbc-test/src/test_utils.cpp b/modules/platforms/cpp/odbc-test/src/test_utils.cpp index fc8cbd3aaf08b..68bd78756ea9a 100644 --- a/modules/platforms/cpp/odbc-test/src/test_utils.cpp +++ b/modules/platforms/cpp/odbc-test/src/test_utils.cpp @@ -23,6 +23,21 @@ namespace ignite_test { + OdbcClientError GetOdbcError(SQLSMALLINT handleType, SQLHANDLE handle) + { + SQLCHAR sqlstate[7] = {}; + SQLINTEGER nativeCode; + + SQLCHAR message[ODBC_BUFFER_SIZE]; + SQLSMALLINT reallen = 0; + + SQLGetDiagRec(handleType, handle, 1, sqlstate, &nativeCode, message, ODBC_BUFFER_SIZE, &reallen); + + return OdbcClientError( + std::string(reinterpret_cast<char*>(sqlstate)), + std::string(reinterpret_cast<char*>(message), reallen)); + } + std::string GetOdbcErrorState(SQLSMALLINT handleType, SQLHANDLE handle) { SQLCHAR sqlstate[7] = {}; diff --git a/modules/platforms/cpp/odbc-test/src/transaction_test.cpp b/modules/platforms/cpp/odbc-test/src/transaction_test.cpp index 73e54b838a0f3..ed1b054813cfe 100644 --- a/modules/platforms/cpp/odbc-test/src/transaction_test.cpp +++ b/modules/platforms/cpp/odbc-test/src/transaction_test.cpp @@ -75,51 +75,56 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite */ void InsertTestValue(int64_t key, const std::string& value) { - SQLCHAR insertReq[] = "INSERT INTO TestType(_key, strField) VALUES(?, ?)"; + InsertTestValue(stmt, key, value); + } - SQLRETURN ret; + /** + * Insert test string value in cache and make all the necessary checks. + * + * @param stmt Statement. + * @param key Key. + * @param value Value.
+ */ + static void InsertTestValue(SQLHSTMT stmt, int64_t key, const std::string& value) + { + SQLCHAR insertReq[] = "INSERT INTO TestType(_key, strField) VALUES(?, ?)"; - ret = SQLPrepare(stmt, insertReq, SQL_NTS); + SQLRETURN ret = SQLPrepare(stmt, insertReq, SQL_NTS); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); char strField[1024] = { 0 }; SQLLEN strFieldLen = 0; ret = SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, sizeof(strField), sizeof(strField), &strField, sizeof(strField), &strFieldLen); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); strncpy(strField, value.c_str(), sizeof(strField)); strFieldLen = SQL_NTS; ret = SQLExecute(stmt); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); SQLLEN affected = 0; ret = SQLRowCount(stmt, &affected); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); BOOST_CHECK_EQUAL(affected, 1); ret = SQLMoreResults(stmt); if (ret != SQL_NO_DATA) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); - ResetStatement(); + ResetStatement(stmt); } /** @@ -130,14 +135,23 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite */ void UpdateTestValue(int64_t key, const std::string& value) { - SQLCHAR insertReq[] = "UPDATE TestType SET strField=? 
WHERE _key=?"; + UpdateTestValue(stmt, key, value); + } - SQLRETURN ret; + /** + * Update test string value in cache and make all the necessary checks. + * + * @param stmt Statement. + * @param key Key. + * @param value Value. + */ + static void UpdateTestValue(SQLHSTMT stmt, int64_t key, const std::string& value) + { + SQLCHAR insertReq[] = "UPDATE TestType SET strField=? WHERE _key=?"; - ret = SQLPrepare(stmt, insertReq, SQL_NTS); + SQLRETURN ret = SQLPrepare(stmt, insertReq, SQL_NTS); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); char strField[1024] = { 0 }; SQLLEN strFieldLen = 0; @@ -145,36 +159,32 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite ret = SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, sizeof(strField), sizeof(strField), &strField, sizeof(strField), &strFieldLen); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); strncpy(strField, value.c_str(), sizeof(strField)); strFieldLen = SQL_NTS; ret = SQLExecute(stmt); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); SQLLEN affected = 0; ret = SQLRowCount(stmt, &affected); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); BOOST_CHECK_EQUAL(affected, 1); ret = SQLMoreResults(stmt); if (ret != SQL_NO_DATA) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); - ResetStatement(); + ResetStatement(stmt); } /** @@ -184,41 +194,45 @@ struct
TransactionTestSuiteFixture : public odbc::OdbcTestSuite */ void DeleteTestValue(int64_t key) { - SQLCHAR insertReq[] = "DELETE FROM TestType WHERE _key=?"; + DeleteTestValue(stmt, key); + } - SQLRETURN ret; + /** + * Delete test string value. + * + * @param stmt Statement. + * @param key Key. + */ + static void DeleteTestValue(SQLHSTMT stmt, int64_t key) + { + SQLCHAR insertReq[] = "DELETE FROM TestType WHERE _key=?"; - ret = SQLPrepare(stmt, insertReq, SQL_NTS); + SQLRETURN ret = SQLPrepare(stmt, insertReq, SQL_NTS); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLExecute(stmt); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); SQLLEN affected = 0; ret = SQLRowCount(stmt, &affected); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); BOOST_CHECK_EQUAL(affected, 1); ret = SQLMoreResults(stmt); if (ret != SQL_NO_DATA) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); - ResetStatement(); + ResetStatement(stmt); } - /** * Selects and checks the value. * @@ -226,6 +240,18 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite * @param expect Expected value. */ void CheckTestValue(int64_t key, const std::string& expect) + { + CheckTestValue(stmt, key, expect); + } + + /** + * Selects and checks the value. + * + * @param stmt Statement. + * @param key Key. + * @param expect Expected value. 
+ */ + static void CheckTestValue(SQLHSTMT stmt, int64_t key, const std::string& expect) { // Just selecting everything to make sure everything is OK SQLCHAR selectReq[] = "SELECT strField FROM TestType WHERE _key = ?"; @@ -235,23 +261,19 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite SQLRETURN ret = SQLBindCol(stmt, 1, SQL_C_CHAR, &strField, sizeof(strField), &strFieldLen); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLExecDirect(stmt, selectReq, sizeof(selectReq)); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLFetch(stmt); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); BOOST_CHECK_EQUAL(std::string(strField, strFieldLen), expect); @@ -262,9 +284,9 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite ret = SQLMoreResults(stmt); if (ret != SQL_NO_DATA) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); - ResetStatement(); + ResetStatement(stmt); } /** @@ -273,6 +295,17 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite * @param key Key. */ void CheckNoTestValue(int64_t key) + { + CheckNoTestValue(stmt, key); + } + + /** + * Selects and checks that value is absent. + * + * @param stmt Statement. + * @param key Key. 
+ */ + static void CheckNoTestValue(SQLHSTMT stmt, int64_t key) { // Just selecting everything to make sure everything is OK SQLCHAR selectReq[] = "SELECT strField FROM TestType WHERE _key = ?"; @@ -282,48 +315,55 @@ struct TransactionTestSuiteFixture : public odbc::OdbcTestSuite SQLRETURN ret = SQLBindCol(stmt, 1, SQL_C_CHAR, &strField, sizeof(strField), &strFieldLen); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLExecDirect(stmt, selectReq, sizeof(selectReq)); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLFetch(stmt); BOOST_CHECK_EQUAL(ret, SQL_NO_DATA); - if (ret != SQL_NO_DATA && !SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + if (ret != SQL_NO_DATA) + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLMoreResults(stmt); + BOOST_CHECK_EQUAL(ret, SQL_NO_DATA); + if (ret != SQL_NO_DATA) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); - ResetStatement(); + ResetStatement(stmt); } /** * Reset statement state. */ void ResetStatement() + { + ResetStatement(stmt); + } + + /** + * Reset statement state. + * + * @param stmt Statement. 
+ */ + static void ResetStatement(SQLHSTMT stmt) { SQLRETURN ret = SQLFreeStmt(stmt, SQL_RESET_PARAMS); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); ret = SQLFreeStmt(stmt, SQL_UNBIND); - if (!SQL_SUCCEEDED(ret)) - BOOST_FAIL(GetOdbcErrorMessage(SQL_HANDLE_STMT, stmt)); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_STMT, stmt); } /** Node started during the test. */ @@ -738,4 +778,72 @@ BOOST_AUTO_TEST_CASE(TransactionEnvironmentTxModeCommit) CheckTestValue(42, "Some"); } +BOOST_AUTO_TEST_CASE(TransactionVersionMismatchError) +{ + Connect("DRIVER={Apache Ignite};address=127.0.0.1:11110;schema=cache"); + + InsertTestValue(1, "test_1"); + + SQLRETURN ret = SQLSetConnectAttr(dbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF, 0); + + ODBC_FAIL_ON_ERROR(ret, SQL_HANDLE_DBC, dbc); + + CheckTestValue(1, "test_1"); + + SQLHDBC dbc2; + SQLHSTMT stmt2; + + Connect(dbc2, stmt2, "DRIVER={Apache Ignite};address=127.0.0.1:11110;schema=cache"); + + ret = SQLSetConnectAttr(dbc2, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF, 0); + + ODBC_FAIL_ON_ERROR(ret, SQL_HANDLE_DBC, dbc2); + + InsertTestValue(stmt2, 2, "test_2"); + + ret = SQLEndTran(SQL_HANDLE_DBC, dbc2, SQL_COMMIT); + + ODBC_FAIL_ON_ERROR(ret, SQL_HANDLE_DBC, dbc2); + + CheckTestValue(stmt2, 1, "test_1"); + CheckTestValue(stmt2, 2, "test_2"); + + try + { + InsertTestValue(2, "test_2"); + + BOOST_FAIL("Exception is expected"); + } + catch (OdbcClientError& err) + { + BOOST_CHECK(err.message.find("Cannot serialize transaction due to write conflict") != err.message.npos); + BOOST_CHECK_EQUAL(err.sqlstate, "40001"); + + ResetStatement(stmt); + } + + try + { + CheckTestValue(1, "test_1"); + + BOOST_FAIL("Exception is expected"); + } + catch (OdbcClientError& err) + { + BOOST_CHECK(err.message.find("Transaction is already completed") != err.message.npos); + BOOST_CHECK_EQUAL(err.sqlstate, "25000"); + + ResetStatement(stmt); + } + + ret = 
SQLEndTran(SQL_HANDLE_DBC, dbc, SQL_ROLLBACK); + ODBC_THROW_ON_ERROR(ret, SQL_HANDLE_DBC, dbc); + + SQLFreeHandle(SQL_HANDLE_STMT, stmt2); + + SQLDisconnect(dbc2); + + SQLFreeHandle(SQL_HANDLE_DBC, dbc2); +} + BOOST_AUTO_TEST_SUITE_END() diff --git a/modules/platforms/cpp/odbc/Makefile.am b/modules/platforms/cpp/odbc/Makefile.am index 5a8ed6decf943..5230a2c5d5993 100644 --- a/modules/platforms/cpp/odbc/Makefile.am +++ b/modules/platforms/cpp/odbc/Makefile.am @@ -57,7 +57,7 @@ libignite_odbc_la_SOURCES = \ src/config/config_tools.cpp \ src/config/configuration.cpp \ src/config/connection_info.cpp \ - src/config/connection_string_parser.cpp \ + src/config/connection_string_parser.cpp \ src/connection.cpp \ src/cursor.cpp \ src/diagnostic/diagnosable_adapter.cpp \ @@ -81,6 +81,7 @@ libignite_odbc_la_SOURCES = \ src/ssl/ssl_gateway.cpp \ src/ssl/secure_socket_client.cpp \ src/ssl/ssl_mode.cpp \ + src/ssl/ssl_api.cpp \ src/sql/sql_lexer.cpp \ src/sql/sql_parser.cpp \ src/sql/sql_set_streaming_command.cpp \ diff --git a/modules/platforms/cpp/odbc/include/Makefile.am b/modules/platforms/cpp/odbc/include/Makefile.am index be6e059edfcfc..a3f6995512b12 100644 --- a/modules/platforms/cpp/odbc/include/Makefile.am +++ b/modules/platforms/cpp/odbc/include/Makefile.am @@ -53,7 +53,7 @@ noinst_HEADERS = \ ignite/odbc/diagnostic/diagnosable.h \ ignite/odbc/diagnostic/diagnosable_adapter.h \ ignite/odbc/ssl/ssl_mode.h \ - ignite/odbc/ssl/ssl_bindings.h \ + ignite/odbc/ssl/ssl_api.h \ ignite/odbc/ssl/secure_socket_client.h \ ignite/odbc/ssl/ssl_gateway.h \ ignite/odbc/sql/sql_command.h \ diff --git a/modules/platforms/cpp/odbc/include/ignite/odbc/common_types.h b/modules/platforms/cpp/odbc/include/ignite/odbc/common_types.h index 29bb022399588..9241bbc4da8ab 100644 --- a/modules/platforms/cpp/odbc/include/ignite/odbc/common_types.h +++ b/modules/platforms/cpp/odbc/include/ignite/odbc/common_types.h @@ -95,9 +95,15 @@ namespace ignite /** Invalid cursor state. 
*/ S24000_INVALID_CURSOR_STATE, + /** Invalid transaction state. */ + S25000_INVALID_TRANSACTION_STATE, + /** Invalid schema name. */ S3F000_INVALID_SCHEMA_NAME, + /** Serialization failure. */ + S40001_SERIALIZATION_FAILURE, + /** Syntax error or access violation. */ S42000_SYNTAX_ERROR_OR_ACCESS_VIOLATION, @@ -371,7 +377,13 @@ namespace ignite ENTRY_PROCESSING = 4005, /** Cache not found. */ - CACHE_NOT_FOUND = 4006 + CACHE_NOT_FOUND = 4006, + + /** Transaction is already completed. */ + TRANSACTION_COMPLETED = 5004, + + /** Transaction serialization error. */ + TRANSACTION_SERIALIZATION_ERROR = 5005 }; }; diff --git a/modules/web-console/frontend/app/modules/navbar/Navbar.provider.js b/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_api.h similarity index 78% rename from modules/web-console/frontend/app/modules/navbar/Navbar.provider.js rename to modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_api.h index 9b905d7285aa8..69c7ab540d04d 100644 --- a/modules/web-console/frontend/app/modules/navbar/Navbar.provider.js +++ b/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_api.h @@ -15,16 +15,18 @@ * limitations under the License. */ -export default function() { - const items = []; +#ifndef _IGNITE_ODBC_SSL_SSL_API +#define _IGNITE_ODBC_SSL_SSL_API - this.push = function(data) { - items.push(data); - }; - - this.$get = function() { - return items; - }; - - return this; +namespace ignite +{ + namespace odbc + { + namespace ssl + { + bool EnsureSslLoaded(); + } + } } + +#endif //_IGNITE_ODBC_SSL_SSL_API \ No newline at end of file diff --git a/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_bindings.h b/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_bindings.h deleted file mode 100644 index 9a1740dc41f7c..0000000000000 --- a/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_bindings.h +++ /dev/null @@ -1,357 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#ifndef _IGNITE_ODBC_SSL_SSL_BINDINGS -#define _IGNITE_ODBC_SSL_SSL_BINDINGS - -#include -#include -#include - -#include "ignite/odbc/ssl/ssl_gateway.h" - -namespace ignite -{ - namespace odbc - { - namespace ssl - { - // Declaring constant used by OpenSSL for readability. - enum { OPERATION_SUCCESS = 1 }; - - inline SSL_CTX *SSL_CTX_new(const SSL_METHOD *meth) - { - typedef SSL_CTX*(FuncType)(const SSL_METHOD*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_CTX_new); - - return fp(meth); - } - - inline void SSL_CTX_free(SSL_CTX *ctx) - { - typedef void(FuncType)(SSL_CTX*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_CTX_free); - - fp(ctx); - } - - inline void SSL_CTX_set_verify(SSL_CTX *ctx, int mode, int(*callback) (int, X509_STORE_CTX *)) - { - typedef void(FuncType)(SSL_CTX*, int, int(*)(int, X509_STORE_CTX*)); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_set_verify); - - fp(ctx, mode, callback); - } - - inline void SSL_CTX_set_verify_depth(SSL_CTX *ctx, int depth) - { - typedef void(FuncType)(SSL_CTX*, int); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_set_verify_depth); - - fp(ctx, depth); - } - - inline 
int SSL_CTX_load_verify_locations(SSL_CTX *ctx, const char *cAfile, const char *cApath) - { - typedef int(FuncType)(SSL_CTX*, const char*, const char*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_load_verify_locations); - - return fp(ctx, cAfile, cApath); - } - - inline int SSL_CTX_use_certificate_chain_file(SSL_CTX *ctx, const char *file) - { - typedef int(FuncType)(SSL_CTX*, const char*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_use_certificate_chain_file); - - return fp(ctx, file); - } - - inline int SSL_CTX_use_RSAPrivateKey_file(SSL_CTX *ctx, const char *file, int type) - { - typedef int(FuncType)(SSL_CTX*, const char*, int); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_use_RSAPrivateKey_file); - - return fp(ctx, file, type); - } - - inline int SSL_CTX_set_cipher_list(SSL_CTX *ctx, const char *str) - { - typedef int(FuncType)(SSL_CTX*, const char*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_set_cipher_list); - - return fp(ctx, str); - } - - inline long SSL_CTX_ctrl(SSL_CTX *ctx, int cmd, long larg, void *parg) - { - typedef long(FuncType)(SSL_CTX*, int, long, void*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_CTX_ctrl); - - return fp(ctx, cmd, larg, parg); - } - - inline long SSL_get_verify_result(const SSL *s) - { - typedef long(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_get_verify_result); - - return fp(s); - } - - inline int SSL_library_init() - { - typedef int(FuncType)(); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_library_init); - - return fp(); - } - - inline void SSL_load_error_strings() - { - typedef void(FuncType)(); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_load_error_strings); 
- - fp(); - } - - inline X509 *SSL_get_peer_certificate(const SSL *s) - { - typedef X509*(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_get_peer_certificate); - - return fp(s); - } - - inline long SSL_ctrl(SSL *s, int cmd, long larg, void *parg) - { - typedef long(FuncType)(SSL*, int, long, void*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_ctrl); - - return fp(s, cmd, larg ,parg); - } - - inline long SSL_set_tlsext_host_name_(SSL *s, const char *name) - { - return ssl::SSL_ctrl(s, SSL_CTRL_SET_TLSEXT_HOSTNAME, - TLSEXT_NAMETYPE_host_name, const_cast(name)); - } - - inline void SSL_set_connect_state_(SSL* s) - { - typedef void(FuncType)(SSL*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_set_connect_state); - - return fp(s); - } - - inline int SSL_connect_(SSL* s) - { - typedef int(FuncType)(SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_connect); - - return fp(s); - } - - inline int SSL_get_error_(const SSL *s, int ret) - { - typedef int(FuncType)(const SSL*, int); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_get_error); - - return fp(s, ret); - } - - inline int SSL_want_(const SSL *s) - { - typedef int(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_want); - - return fp(s); - } - - inline int SSL_write_(SSL *s, const void *buf, int num) - { - typedef int(FuncType)(SSL*, const void*, int); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_write); - - return fp(s, buf, num); - } - - inline int SSL_read_(SSL *s, void *buf, int num) - { - typedef int(FuncType)(SSL*, void*, int); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_read); - - return fp(s, buf, num); - } - - inline int SSL_pending_(const SSL *ssl) - { - typedef 
int(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_pending); - - return fp(ssl); - } - - inline int SSL_get_fd_(const SSL *ssl) - { - typedef int(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_get_fd); - - return fp(ssl); - } - - inline void SSL_free_(SSL *ssl) - { - typedef void(FuncType)(SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_free); - - fp(ssl); - } - - inline const SSL_METHOD *SSLv23_client_method_() - { - typedef const SSL_METHOD*(FuncType)(); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSLv23_client_method); - - return fp(); - } - - inline void OPENSSL_config(const char *configName) - { - typedef void(FuncType)(const char*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpOPENSSL_config); - - fp(configName); - } - - inline void X509_free(X509 *a) - { - typedef void(FuncType)(X509*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpX509_free); - - fp(a); - } - - inline BIO *BIO_new_ssl_connect(SSL_CTX *ctx) - { - typedef BIO*(FuncType)(SSL_CTX*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpBIO_new_ssl_connect); - - return fp(ctx); - } - - inline void BIO_free_all(BIO *a) - { - typedef void(FuncType)(BIO*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpBIO_free_all); - - fp(a); - } - - inline long BIO_ctrl(BIO *bp, int cmd, long larg, void *parg) - { - typedef long(FuncType)(BIO*, int, long, void*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpBIO_ctrl); - - return fp(bp, cmd, larg, parg); - } - - inline long BIO_get_fd_(BIO *bp, int *fd) - { - return ssl::BIO_ctrl(bp, BIO_C_GET_FD, 0, reinterpret_cast(fd)); - } - - inline long BIO_get_ssl_(BIO *bp, SSL** ssl) - { - return ssl::BIO_ctrl(bp, 
BIO_C_GET_SSL, 0, reinterpret_cast(ssl)); - } - - inline long BIO_set_nbio_(BIO *bp, long n) - { - return ssl::BIO_ctrl(bp, BIO_C_SET_NBIO, n, NULL); - } - - inline long BIO_set_conn_hostname_(BIO *bp, const char *name) - { - return ssl::BIO_ctrl(bp, BIO_C_SET_CONNECT, 0, const_cast(name)); - } - - inline unsigned long ERR_get_error_() - { - typedef unsigned long(FuncType)(); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpERR_get_error); - - return fp(); - } - - inline void ERR_error_string_n_(unsigned long e, char *buf, size_t len) - { - typedef void(FuncType)(unsigned long, char*, size_t); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpERR_error_string_n); - - fp(e, buf, len); - } - } - } -} - -#endif //_IGNITE_ODBC_SSL_SSL_BINDINGS \ No newline at end of file diff --git a/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_gateway.h b/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_gateway.h index 4b102ad28b972..02761b2dc5f32 100644 --- a/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_gateway.h +++ b/modules/platforms/cpp/odbc/include/ignite/odbc/ssl/ssl_gateway.h @@ -18,6 +18,10 @@ #ifndef _IGNITE_ODBC_SSL_SSL_LIBRARY #define _IGNITE_ODBC_SSL_SSL_LIBRARY +#include +#include +#include + #include "ignite/common/concurrent.h" #include "ignite/common/dynamic_load_os.h" @@ -32,6 +36,7 @@ namespace ignite */ struct SslFunctions { + void *fpSSLeay_version; void *fpSSL_CTX_new; void *fpSSL_CTX_free; void *fpSSL_CTX_set_verify; @@ -63,6 +68,12 @@ namespace ignite void *fpBIO_ctrl; void *fpERR_get_error; void *fpERR_error_string_n; + void *fpERR_print_errors_fp; + + void *fpOpenSSL_version; + void *fpSSL_CTX_set_options; + void *fpOPENSSL_init_ssl; + void *fpTLS_client_method; }; /** @@ -103,6 +114,88 @@ namespace ignite return inited; } + char* SSLeay_version_(int type); + + int OPENSSL_init_ssl_(uint64_t opts, const void* settings); + + long SSL_CTX_set_options_(SSL_CTX* ctx, long 
options); + + long SSL_CTX_ctrl_(SSL_CTX* ctx, int cmd, long larg, void* parg); + + SSL_CTX* SSL_CTX_new_(const SSL_METHOD* meth); + + void SSL_CTX_free_(SSL_CTX* ctx); + + void SSL_CTX_set_verify_(SSL_CTX* ctx, int mode, int (*callback)(int, X509_STORE_CTX*)); + + void SSL_CTX_set_verify_depth_(SSL_CTX* ctx, int depth); + + int SSL_CTX_load_verify_locations_(SSL_CTX* ctx, const char* cAfile, const char* cApath); + + int SSL_CTX_use_certificate_chain_file_(SSL_CTX* ctx, const char* file); + + int SSL_CTX_use_RSAPrivateKey_file_(SSL_CTX* ctx, const char* file, int type); + + int SSL_CTX_set_cipher_list_(SSL_CTX* ctx, const char* str); + + long SSL_get_verify_result_(const SSL* s); + + int SSL_library_init_(); + + void SSL_load_error_strings_(); + + X509* SSL_get_peer_certificate_(const SSL* s); + + long SSL_ctrl_(SSL* s, int cmd, long larg, void* parg); + + long SSL_set_tlsext_host_name_(SSL* s, const char* name); + + void SSL_set_connect_state_(SSL* s); + + int SSL_connect_(SSL* s); + + int SSL_get_error_(const SSL* s, int ret); + + int SSL_want_(const SSL* s); + + int SSL_write_(SSL* s, const void* buf, int num); + + int SSL_read_(SSL* s, void* buf, int num); + + int SSL_pending_(const SSL* ssl); + + int SSL_get_fd_(const SSL* ssl); + + void SSL_free_(SSL* ssl); + + const SSL_METHOD* SSLv23_client_method_(); + + const SSL_METHOD* TLS_client_method_(); + + void OPENSSL_config_(const char* configName); + + void X509_free_(X509* a); + + BIO* BIO_new_ssl_connect_(SSL_CTX* ctx); + + void BIO_free_all_(BIO* a); + + long BIO_ctrl_(BIO* bp, int cmd, long larg, void* parg); + + long BIO_get_fd_(BIO* bp, int* fd); + + long BIO_get_ssl_(BIO* bp, SSL** ssl); + + long BIO_set_nbio_(BIO* bp, long n); + + long BIO_set_conn_hostname_(BIO* bp, const char* name); + + unsigned long ERR_get_error_(); + + void ERR_error_string_n_(unsigned long e, char* buf, size_t len); + + void ERR_print_errors_fp_(FILE *fd); + private: /** * Constructor. 
@@ -114,6 +207,11 @@ namespace ignite */ ~SslGateway(); + /** + * Unload all SSL symbols. + */ + void UnloadAll(); + /** * Load SSL library. * @param name Name. @@ -126,9 +224,23 @@ namespace ignite */ bool LoadSslLibraries(); + /** + * Load mandatory SSL methods. + * + * @throw IgniteError if can not load one of the functions. + */ + void LoadMandatoryMethods(); + + /** + * Try load SSL method. + * + * @param name Name. + * @return Method pointer. + */ + void* TryLoadSslMethod(const char* name); + /** * Load SSL method. - * @param mod Module. * @param name Name. * @return Method pointer. */ @@ -146,6 +258,9 @@ namespace ignite /** ssleay32 module. */ common::dynamic::Module ssleay32; + /** libcrypto module. */ + common::dynamic::Module libcrypto; + /** libssl module. */ common::dynamic::Module libssl; diff --git a/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj b/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj index 69318ef1e496a..b583c111031df 100644 --- a/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj +++ b/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj @@ -201,6 +201,7 @@ + @@ -263,7 +264,7 @@ - + diff --git a/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj.filters b/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj.filters index f0e49b576e4c1..5fe3bd367e54f 100644 --- a/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj.filters +++ b/modules/platforms/cpp/odbc/project/vs/odbc.vcxproj.filters @@ -196,6 +196,9 @@ Code\streaming + + Code\ssl + @@ -344,9 +347,6 @@ Code\ssl - - Code\ssl - Code\config @@ -392,5 +392,8 @@ Code\streaming + + Code\ssl + \ No newline at end of file diff --git a/modules/platforms/cpp/odbc/src/common_types.cpp b/modules/platforms/cpp/odbc/src/common_types.cpp index 10c4f775bcd6c..72b3cc6602309 100644 --- a/modules/platforms/cpp/odbc/src/common_types.cpp +++ b/modules/platforms/cpp/odbc/src/common_types.cpp @@ -28,7 +28,7 @@ namespace ignite { switch (result) { - case SqlResult::AI_SUCCESS: + case SqlResult::AI_SUCCESS: 
                    return SQL_SUCCESS;

                case SqlResult::AI_SUCCESS_WITH_INFO:
@@ -157,6 +157,12 @@ namespace ignite
                case ResponseStatus::COLUMN_ALREADY_EXISTS:
                    return SqlState::S42S21_COLUMN_ALREADY_EXISTS;

+                case ResponseStatus::TRANSACTION_COMPLETED:
+                    return SqlState::S25000_INVALID_TRANSACTION_STATE;
+
+                case ResponseStatus::TRANSACTION_SERIALIZATION_ERROR:
+                    return SqlState::S40001_SERIALIZATION_FAILURE;
+
                case ResponseStatus::CACHE_NOT_FOUND:
                case ResponseStatus::NULL_TABLE_DESCRIPTOR:
                case ResponseStatus::CONVERSION_FAILED:
diff --git a/modules/platforms/cpp/odbc/src/config/connection_info.cpp b/modules/platforms/cpp/odbc/src/config/connection_info.cpp
index 5885381b0532b..fde8ca53c710f 100644
--- a/modules/platforms/cpp/odbc/src/config/connection_info.cpp
+++ b/modules/platforms/cpp/odbc/src/config/connection_info.cpp
@@ -1103,7 +1103,7 @@ namespace ignite
             // Bitmask that describes the attributes of a static cursor that are supported by the driver. This
             // bitmask contains the first subset of attributes; for the second subset, see
             // SQL_STATIC_CURSOR_ATTRIBUTES2.
-            intParams[SQL_STATIC_CURSOR_ATTRIBUTES1] = SQL_CA1_NEXT;
+            intParams[SQL_STATIC_CURSOR_ATTRIBUTES1] = SQL_CA1_NEXT | SQL_CA1_ABSOLUTE;
 #endif // SQL_STATIC_CURSOR_ATTRIBUTES1

 #ifdef SQL_STATIC_CURSOR_ATTRIBUTES2
diff --git a/modules/platforms/cpp/odbc/src/connection.cpp b/modules/platforms/cpp/odbc/src/connection.cpp
index b580f12804fea..1953a162620da 100644
--- a/modules/platforms/cpp/odbc/src/connection.cpp
+++ b/modules/platforms/cpp/odbc/src/connection.cpp
@@ -30,7 +30,7 @@
 #include "ignite/odbc/connection.h"
 #include "ignite/odbc/message.h"
 #include "ignite/odbc/ssl/ssl_mode.h"
-#include "ignite/odbc/ssl/ssl_gateway.h"
+#include "ignite/odbc/ssl/ssl_api.h"
 #include "ignite/odbc/ssl/secure_socket_client.h"
 #include "ignite/odbc/system/tcp_socket_client.h"
 #include "ignite/odbc/dsn_config.h"
@@ -155,7 +155,7 @@ namespace ignite

             if (sslMode != SslMode::DISABLE)
             {
-                bool loaded = ssl::SslGateway::GetInstance().LoadAll();
+                bool loaded = ssl::EnsureSslLoaded();

                 if (!loaded)
                 {
diff --git a/modules/platforms/cpp/odbc/src/diagnostic/diagnostic_record.cpp b/modules/platforms/cpp/odbc/src/diagnostic/diagnostic_record.cpp
index 7b82b4090cd93..d3a369242f998 100644
--- a/modules/platforms/cpp/odbc/src/diagnostic/diagnostic_record.cpp
+++ b/modules/platforms/cpp/odbc/src/diagnostic/diagnostic_record.cpp
@@ -79,9 +79,15 @@ namespace
     /** SQL state 24000 constant. */
     const std::string STATE_24000 = "24000";

+    /** SQL state 25000 constant. */
+    const std::string STATE_25000 = "25000";
+
     /** SQL state 3F000 constant. */
     const std::string STATE_3F000 = "3F000";

+    /** SQL state 40001 constant. */
+    const std::string STATE_40001 = "40001";
+
     /** SQL state 42000 constant.
 */
     const std::string STATE_42000 = "42000";
@@ -305,9 +311,15 @@ namespace ignite
             case SqlState::S24000_INVALID_CURSOR_STATE:
                 return STATE_24000;

+            case SqlState::S25000_INVALID_TRANSACTION_STATE:
+                return STATE_25000;
+
             case SqlState::S3F000_INVALID_SCHEMA_NAME:
                 return STATE_3F000;

+            case SqlState::S40001_SERIALIZATION_FAILURE:
+                return STATE_40001;
+
             case SqlState::S42000_SYNTAX_ERROR_OR_ACCESS_VIOLATION:
                 return STATE_42000;
diff --git a/modules/platforms/cpp/odbc/src/message.cpp b/modules/platforms/cpp/odbc/src/message.cpp
index 587d2535ce203..6398964948a0c 100644
--- a/modules/platforms/cpp/odbc/src/message.cpp
+++ b/modules/platforms/cpp/odbc/src/message.cpp
@@ -84,9 +84,10 @@ namespace ignite
             {
                 utility::WriteString(writer, config.GetUser());
                 utility::WriteString(writer, config.GetPassword());
+            }
+
+            if (version >= ProtocolVersion::VERSION_2_7_0)
                 writer.WriteInt8(config.GetNestedTxMode());
-            }
         }

         QueryExecuteRequest::QueryExecuteRequest(const std::string& schema, const std::string& sql,
diff --git a/modules/platforms/cpp/odbc/src/ssl/secure_socket_client.cpp b/modules/platforms/cpp/odbc/src/ssl/secure_socket_client.cpp
index 84eb1e8867f82..f7811fb7925b6 100644
--- a/modules/platforms/cpp/odbc/src/ssl/secure_socket_client.cpp
+++ b/modules/platforms/cpp/odbc/src/ssl/secure_socket_client.cpp
@@ -22,13 +22,15 @@
 #include "ignite/common/concurrent.h"
 #include "ignite/odbc/system/tcp_socket_client.h"
 #include "ignite/odbc/ssl/secure_socket_client.h"
-#include "ignite/odbc/ssl/ssl_bindings.h"
+#include "ignite/odbc/ssl/ssl_gateway.h"
 #include "ignite/common/utils.h"

 #ifndef SOCKET_ERROR
 #   define SOCKET_ERROR (-1)
 #endif // SOCKET_ERROR

+enum { OPERATION_SUCCESS = 1 };
+
 namespace ignite
 {
     namespace odbc
@@ -52,13 +54,15 @@ namespace ignite
             CloseInteral();

             if (context)
-                ssl::SSL_CTX_free(reinterpret_cast<SSL_CTX*>(context));
+                SslGateway::GetInstance().SSL_CTX_free_(reinterpret_cast<SSL_CTX*>(context));
         }

         bool SecureSocketClient::Connect(const char* hostname, uint16_t port, int32_t,
diagnostic::Diagnosable& diag) { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (!context) { @@ -77,52 +81,52 @@ namespace ignite if (!ssl0) return false; - int res = ssl::SSL_set_tlsext_host_name_(ssl0, hostname); + int res = sslGateway.SSL_set_tlsext_host_name_(ssl0, hostname); if (res != OPERATION_SUCCESS) { + sslGateway.SSL_free_(ssl0); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not set host name for secure connection: " + GetSslError(ssl0, res)); - ssl::SSL_free_(ssl0); - return false; } - ssl::SSL_set_connect_state_(ssl0); + sslGateway.SSL_set_connect_state_(ssl0); bool connected = CompleteConnectInternal(ssl0, DEFALT_CONNECT_TIMEOUT, diag); if (!connected) { - ssl::SSL_free_(ssl0); + sslGateway.SSL_free_(ssl0); return false; } // Verify a server certificate was presented during the negotiation - X509* cert = ssl::SSL_get_peer_certificate(ssl0); + X509* cert = sslGateway.SSL_get_peer_certificate_(ssl0); if (cert) - ssl::X509_free(cert); + sslGateway.X509_free_(cert); else { + sslGateway.SSL_free_(ssl0); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Remote host did not provide certificate: " + GetSslError(ssl0, res)); - ssl::SSL_free_(ssl0); - return false; } // Verify the result of chain verification // Verification performed according to RFC 4158 - res = ssl::SSL_get_verify_result(ssl0); + res = sslGateway.SSL_get_verify_result_(ssl0); if (X509_V_OK != res) { + sslGateway.SSL_free_(ssl0); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Certificate chain verification failed: " + GetSslError(ssl0, res)); - ssl::SSL_free_(ssl0); - return false; } @@ -138,7 +142,9 @@ namespace ignite int SecureSocketClient::Send(const int8_t* data, size_t size, int32_t timeout) { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (!ssl) { @@ -149,14 +155,16 @@ namespace 
ignite SSL* ssl0 = reinterpret_cast(ssl); - int res = ssl::SSL_write_(ssl0, data, static_cast(size)); + int res = sslGateway.SSL_write_(ssl0, data, static_cast(size)); return res; } int SecureSocketClient::Receive(int8_t* buffer, size_t size, int32_t timeout) { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (!ssl) { @@ -169,7 +177,7 @@ namespace ignite int res = 0; - if (!blocking && ssl::SSL_pending_(ssl0) == 0) + if (!blocking && sslGateway.SSL_pending_(ssl0) == 0) { res = WaitOnSocket(ssl, timeout, true); @@ -177,7 +185,7 @@ namespace ignite return res; } - res = ssl::SSL_read_(ssl0, buffer, static_cast(size)); + res = sslGateway.SSL_read_(ssl0, buffer, static_cast(size)); return res; } @@ -190,30 +198,11 @@ namespace ignite void* SecureSocketClient::MakeContext(const std::string& certPath, const std::string& keyPath, const std::string& caPath, diagnostic::Diagnosable& diag) { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); - static bool sslLibInited = false; - static common::concurrent::CriticalSection sslCs; + assert(sslGateway.Loaded()); - if (!sslLibInited) - { - common::concurrent::CsLockGuard lock(sslCs); - - if (!sslLibInited) - { - LOG_MSG("Initializing SSL library"); - - (void)SSL_library_init(); - - SSL_load_error_strings(); - - OPENSSL_config(0); - - sslLibInited = true; - } - } - - const SSL_METHOD* method = ssl::SSLv23_client_method_(); + const SSL_METHOD* method = sslGateway.SSLv23_client_method_(); if (!method) { diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not get SSL method."); @@ -221,7 +210,7 @@ namespace ignite return 0; } - SSL_CTX* ctx = ssl::SSL_CTX_new(method); + SSL_CTX* ctx = sslGateway.SSL_CTX_new_(method); if (!ctx) { diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not create new SSL context."); @@ -229,57 +218,56 @@ namespace ignite return 0; } - 
ssl::SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, 0); + sslGateway.SSL_CTX_set_verify_(ctx, SSL_VERIFY_PEER, 0); - ssl::SSL_CTX_set_verify_depth(ctx, 8); + sslGateway.SSL_CTX_set_verify_depth_(ctx, 8); - const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION; - ssl::SSL_CTX_ctrl(ctx, SSL_CTRL_OPTIONS, flags, NULL); + sslGateway.SSL_CTX_set_options_(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION); const char* cCaPath = caPath.empty() ? 0 : caPath.c_str(); - long res = ssl::SSL_CTX_load_verify_locations(ctx, cCaPath, 0); + long res = sslGateway.SSL_CTX_load_verify_locations_(ctx, cCaPath, 0); if (res != OPERATION_SUCCESS) { + sslGateway.SSL_CTX_free_(ctx); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not set Certificate Authority path for secure connection."); - ssl::SSL_CTX_free(ctx); - return 0; } - res = ssl::SSL_CTX_use_certificate_chain_file(ctx, certPath.c_str()); + res = sslGateway.SSL_CTX_use_certificate_chain_file_(ctx, certPath.c_str()); if (res != OPERATION_SUCCESS) { + sslGateway.SSL_CTX_free_(ctx); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not set client certificate file for secure connection."); - ssl::SSL_CTX_free(ctx); - return 0; } - res = ssl::SSL_CTX_use_RSAPrivateKey_file(ctx, keyPath.c_str(), SSL_FILETYPE_PEM); + res = sslGateway.SSL_CTX_use_RSAPrivateKey_file_(ctx, keyPath.c_str(), SSL_FILETYPE_PEM); if (res != OPERATION_SUCCESS) { + sslGateway.SSL_CTX_free_(ctx); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not set private key file for secure connection."); - ssl::SSL_CTX_free(ctx); - return 0; } const char* const PREFERRED_CIPHERS = "HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4"; - res = ssl::SSL_CTX_set_cipher_list(ctx, PREFERRED_CIPHERS); + res = sslGateway.SSL_CTX_set_cipher_list_(ctx, PREFERRED_CIPHERS); if (res != OPERATION_SUCCESS) { + sslGateway.SSL_CTX_free_(ctx); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not set ciphers list for secure 
connection."); - ssl::SSL_CTX_free(ctx); - return 0; } @@ -289,7 +277,11 @@ namespace ignite void* SecureSocketClient::MakeSsl(void* context, const char* hostname, uint16_t port, bool& blocking, diagnostic::Diagnosable& diag) { - BIO* bio = ssl::BIO_new_ssl_connect(reinterpret_cast(context)); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + + BIO* bio = sslGateway.BIO_new_ssl_connect_(reinterpret_cast(context)); if (!bio) { diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not create SSL connection."); @@ -298,7 +290,7 @@ namespace ignite } blocking = false; - long res = ssl::BIO_set_nbio_(bio, 1); + long res = sslGateway.BIO_set_nbio_(bio, 1); if (res != OPERATION_SUCCESS) { blocking = true; @@ -312,23 +304,23 @@ namespace ignite std::string address = stream.str(); - res = ssl::BIO_set_conn_hostname_(bio, address.c_str()); + res = sslGateway.BIO_set_conn_hostname_(bio, address.c_str()); if (res != OPERATION_SUCCESS) { - diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not set SSL connection hostname."); + sslGateway.BIO_free_all_(bio); - ssl::BIO_free_all(bio); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not set SSL connection hostname."); return 0; } SSL* ssl = 0; - ssl::BIO_get_ssl_(bio, &ssl); + sslGateway.BIO_get_ssl_(bio, &ssl); if (!ssl) { - diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not get SSL instance from BIO."); + sslGateway.BIO_free_all_(bio); - ssl::BIO_free_all(bio); + diag.AddStatusRecord(SqlState::SHY000_GENERAL_ERROR, "Can not get SSL instance from BIO."); return 0; } @@ -338,16 +330,20 @@ namespace ignite bool SecureSocketClient::CompleteConnectInternal(void* ssl, int timeout, diagnostic::Diagnosable& diag) { + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + SSL* ssl0 = reinterpret_cast(ssl); while (true) { - int res = ssl::SSL_connect_(ssl0); + int res = sslGateway.SSL_connect_(ssl0); if (res == OPERATION_SUCCESS) return 
true; - int sslError = ssl::SSL_get_error_(ssl0, res); + int sslError = sslGateway.SSL_get_error_(ssl0, res); LOG_MSG("wait res=" << res << ", sslError=" << sslError); @@ -359,7 +355,7 @@ namespace ignite return false; } - int want = ssl::SSL_want_(ssl0); + int want = sslGateway.SSL_want_(ssl0); res = WaitOnSocket(ssl, timeout, want == SSL_READING); @@ -386,9 +382,13 @@ namespace ignite std::string SecureSocketClient::GetSslError(void* ssl, int ret) { + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + SSL* ssl0 = reinterpret_cast(ssl); - int sslError = ssl::SSL_get_error_(ssl0, ret); + int sslError = sslGateway.SSL_get_error_(ssl0, ret); LOG_MSG("ssl_error: " << sslError); @@ -407,11 +407,11 @@ namespace ignite return std::string("SSL error: ") + common::LexicalCast(sslError); } - long error = ssl::ERR_get_error_(); + long error = sslGateway.ERR_get_error_(); char errBuf[1024] = { 0 }; - ssl::ERR_error_string_n_(error, errBuf, sizeof(errBuf)); + sslGateway.ERR_error_string_n_(error, errBuf, sizeof(errBuf)); return std::string(errBuf); } @@ -435,11 +435,13 @@ namespace ignite void SecureSocketClient::CloseInteral() { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (ssl) { - ssl::SSL_free_(reinterpret_cast(ssl)); + sslGateway.SSL_free_(reinterpret_cast(ssl)); ssl = 0; } @@ -447,13 +449,17 @@ namespace ignite int SecureSocketClient::WaitOnSocket(void* ssl, int32_t timeout, bool rd) { + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + int ready = 0; int lastError = 0; SSL* ssl0 = reinterpret_cast(ssl); fd_set fds; - int fd = ssl::SSL_get_fd_(ssl0); + int fd = sslGateway.SSL_get_fd_(ssl0); if (fd < 0) { diff --git a/modules/platforms/cpp/odbc/src/ssl/ssl_api.cpp b/modules/platforms/cpp/odbc/src/ssl/ssl_api.cpp new file mode 100644 index 0000000000000..4a1d6926c2765 --- /dev/null +++ 
b/modules/platforms/cpp/odbc/src/ssl/ssl_api.cpp
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ignite/odbc/ssl/ssl_gateway.h"
+#include "ignite/odbc/ssl/ssl_api.h"
+
+namespace ignite
+{
+    namespace odbc
+    {
+        namespace ssl
+        {
+            bool EnsureSslLoaded()
+            {
+                return SslGateway::GetInstance().LoadAll();
+            }
+        }
+    }
+}
diff --git a/modules/platforms/cpp/odbc/src/ssl/ssl_gateway.cpp b/modules/platforms/cpp/odbc/src/ssl/ssl_gateway.cpp
index 308302a069f8a..6116a767a7e69 100644
--- a/modules/platforms/cpp/odbc/src/ssl/ssl_gateway.cpp
+++ b/modules/platforms/cpp/odbc/src/ssl/ssl_gateway.cpp
@@ -17,9 +17,6 @@

 #include

-#include
-#include
-
 #include "ignite/common/utils.h"

 #include "ignite/odbc/ssl/ssl_gateway.h"
@@ -29,6 +26,18 @@
 # define ADDITIONAL_OPENSSL_HOME_ENV "OPEN_SSL_HOME"
 #endif // ADDITIONAL_OPENSSL_HOME_ENV

+#ifndef SSL_CTRL_OPTIONS
+# define SSL_CTRL_OPTIONS 32
+#endif // SSL_CTRL_OPTIONS
+
+#ifndef OPENSSL_INIT_LOAD_SSL_STRINGS
+# define OPENSSL_INIT_LOAD_SSL_STRINGS 0x00200000L
+#endif // OPENSSL_INIT_LOAD_SSL_STRINGS
+
+#ifndef OPENSSL_INIT_LOAD_CRYPTO_STRINGS
+# define OPENSSL_INIT_LOAD_CRYPTO_STRINGS 0x00000002L
+#endif // OPENSSL_INIT_LOAD_CRYPTO_STRINGS
+
 namespace ignite
{ namespace odbc @@ -39,7 +48,7 @@ namespace ignite inited(false), functions() { - // No-op. + memset(&functions, 0, sizeof(functions)); } SslGateway::~SslGateway() @@ -47,41 +56,61 @@ namespace ignite // No-op. } + void SslGateway::UnloadAll() + { + libeay32.Unload(); + ssleay32.Unload(); + libssl.Unload(); + libcrypto.Unload(); + + memset(&functions, 0, sizeof(functions)); + } + common::dynamic::Module SslGateway::LoadSslLibrary(const char* name) { using namespace common; using namespace dynamic; - std::string fullName = GetDynamicLibraryName(name); - - Module libModule = LoadModule(fullName); - - if (libModule.IsLoaded()) - return libModule; - std::string home = GetEnv(ADDITIONAL_OPENSSL_HOME_ENV); if (home.empty()) home = GetEnv("OPENSSL_HOME"); - if (home.empty()) - return libModule; + std::string fullName = GetDynamicLibraryName(name); + + if (!home.empty()) + { + std::stringstream constructor; + + constructor << home << Fs << "bin" << Fs << fullName; - std::stringstream constructor; + std::string fullPath = constructor.str(); - constructor << home << Fs << "bin" << Fs << fullName; + Module mod = LoadModule(fullPath); - std::string fullPath = constructor.str(); + if (mod.IsLoaded()) + return mod; + } - return LoadModule(fullPath); + return LoadModule(fullName); } bool SslGateway::LoadSslLibraries() { - libeay32 = LoadSslLibrary("libeay32"); - ssleay32 = LoadSslLibrary("ssleay32"); libssl = LoadSslLibrary("libssl"); + if (!libssl.IsLoaded()) + { + libcrypto = LoadSslLibrary("libcrypto-1_1-x64"); + libssl = LoadSslLibrary("libssl-1_1-x64"); + } + + if (!libssl.IsLoaded()) + { + libeay32 = LoadSslLibrary("libeay32"); + ssleay32 = LoadSslLibrary("ssleay32"); + } + if (!libssl.IsLoaded() && (!libeay32.IsLoaded() || !ssleay32.IsLoaded())) { if (!libeay32.IsLoaded()) @@ -93,41 +122,29 @@ namespace ignite if (!libssl.IsLoaded()) LOG_MSG("Can not load libssl."); - libeay32.Unload(); - ssleay32.Unload(); - libssl.Unload(); - return false; } return true; } - SslGateway& 
SslGateway::GetInstance() - { - static SslGateway self; - - return self; - } - - bool SslGateway::LoadAll() + void SslGateway::LoadMandatoryMethods() { - using namespace common::dynamic; + functions.fpSSLeay_version = TryLoadSslMethod("SSLeay_version"); - if (inited) - return true; + if (!functions.fpSSLeay_version) + functions.fpOpenSSL_version = LoadSslMethod("OpenSSL_version"); - common::concurrent::CsLockGuard lock(initCs); + functions.fpSSL_library_init = TryLoadSslMethod("SSL_library_init"); + functions.fpSSL_load_error_strings = TryLoadSslMethod("SSL_load_error_strings"); - if (inited) - return true; + if (!functions.fpSSL_library_init || !functions.fpSSL_load_error_strings) + functions.fpOPENSSL_init_ssl = LoadSslMethod("OPENSSL_init_ssl"); - if (!LoadSslLibraries()) - { - LOG_MSG("Can not load neccessary OpenSSL libraries."); + functions.fpSSLv23_client_method = TryLoadSslMethod("SSLv23_client_method"); - return false; - } + if (!functions.fpSSLv23_client_method) + functions.fpTLS_client_method = LoadSslMethod("TLS_client_method"); functions.fpSSL_CTX_new = LoadSslMethod("SSL_CTX_new"); functions.fpSSL_CTX_free = LoadSslMethod("SSL_CTX_free"); @@ -139,13 +156,11 @@ namespace ignite functions.fpSSL_CTX_set_cipher_list = LoadSslMethod("SSL_CTX_set_cipher_list"); functions.fpSSL_get_verify_result = LoadSslMethod("SSL_get_verify_result"); - functions.fpSSL_library_init = LoadSslMethod("SSL_library_init"); - functions.fpSSL_load_error_strings = LoadSslMethod("SSL_load_error_strings"); + functions.fpSSL_get_peer_certificate = LoadSslMethod("SSL_get_peer_certificate"); functions.fpSSL_ctrl = LoadSslMethod("SSL_ctrl"); functions.fpSSL_CTX_ctrl = LoadSslMethod("SSL_CTX_ctrl"); - functions.fpSSLv23_client_method = LoadSslMethod("SSLv23_client_method"); functions.fpSSL_set_connect_state = LoadSslMethod("SSL_set_connect_state"); functions.fpSSL_connect = LoadSslMethod("SSL_connect"); functions.fpSSL_get_error = LoadSslMethod("SSL_get_error"); @@ -165,67 +180,521 @@ 
namespace ignite functions.fpERR_get_error = LoadSslMethod("ERR_get_error"); functions.fpERR_error_string_n = LoadSslMethod("ERR_error_string_n"); + } + + SslGateway& SslGateway::GetInstance() + { + static SslGateway self; + + return self; + } + + bool SslGateway::LoadAll() + { + using namespace common::dynamic; + + if (inited) + return true; + + common::concurrent::CsLockGuard lock(initCs); - bool allLoaded = - functions.fpSSL_CTX_new != 0 && - functions.fpSSL_CTX_free != 0 && - functions.fpSSL_CTX_set_verify != 0 && - functions.fpSSL_CTX_set_verify_depth != 0 && - functions.fpSSL_CTX_load_verify_locations != 0 && - functions.fpSSL_CTX_use_certificate_chain_file != 0 && - functions.fpSSL_CTX_use_RSAPrivateKey_file != 0 && - functions.fpSSL_CTX_set_cipher_list != 0 && - functions.fpSSL_get_verify_result != 0 && - functions.fpSSL_library_init != 0 && - functions.fpSSL_load_error_strings != 0 && - functions.fpSSL_get_peer_certificate != 0 && - functions.fpSSL_ctrl != 0 && - functions.fpSSL_CTX_ctrl != 0 && - functions.fpSSLv23_client_method != 0 && - functions.fpSSL_set_connect_state != 0 && - functions.fpSSL_connect != 0 && - functions.fpSSL_get_error != 0 && - functions.fpSSL_want != 0 && - functions.fpSSL_write != 0 && - functions.fpSSL_read != 0 && - functions.fpSSL_pending != 0 && - functions.fpSSL_get_fd != 0 && - functions.fpSSL_free != 0 && - functions.fpBIO_new_ssl_connect != 0 && - functions.fpOPENSSL_config != 0 && - functions.fpX509_free != 0 && - functions.fpBIO_free_all != 0 && - functions.fpBIO_ctrl != 0 && - functions.fpERR_get_error != 0 && - functions.fpERR_error_string_n != 0; - - if (!allLoaded) + if (inited) + return true; + + common::MethodGuard guard(this, &SslGateway::UnloadAll); + + if (!LoadSslLibraries()) { - libeay32.Unload(); - ssleay32.Unload(); - libssl.Unload(); + LOG_MSG("Can not load necessary OpenSSL libraries."); + + return false; } - inited = allLoaded; + LoadMandatoryMethods(); + + functions.fpSSL_CTX_set_options = 
TryLoadSslMethod("SSL_CTX_set_options"); + functions.fpERR_print_errors_fp = TryLoadSslMethod("ERR_print_errors_fp"); + + (void)SSL_library_init_(); - return inited; + SSL_load_error_strings_(); + + OPENSSL_config_(0); + + guard.Release(); + + inited = true; + + return true; } - void* SslGateway::LoadSslMethod(const char* name) + void* SslGateway::TryLoadSslMethod(const char* name) { void* fp = libeay32.FindSymbol(name); if (!fp) fp = ssleay32.FindSymbol(name); + if (!fp) + fp = libcrypto.FindSymbol(name); + + if (!fp) fp = libssl.FindSymbol(name); + return fp; + } + + void* SslGateway::LoadSslMethod(const char* name) + { + void* fp = TryLoadSslMethod(name); + if (!fp) LOG_MSG("Can not load function " << name); return fp; } + + char* SslGateway::SSLeay_version_(int type) + { + typedef char* (FuncType)(int); + + FuncType* fp = 0; + + if (functions.fpSSLeay_version) + fp = reinterpret_cast<FuncType*>(functions.fpSSLeay_version); + else + fp = reinterpret_cast<FuncType*>(functions.fpOpenSSL_version); + + assert(fp != 0); + + return fp(type); + } + + int SslGateway::OPENSSL_init_ssl_(uint64_t opts, const void* settings) + { + assert(functions.fpOPENSSL_init_ssl != 0); + + typedef int (FuncType)(uint64_t, const void*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpOPENSSL_init_ssl); + + return fp(opts, settings); + } + + long SslGateway::SSL_CTX_set_options_(SSL_CTX* ctx, long options) + { + if (functions.fpSSL_CTX_set_options) + { + typedef long (FuncType)(SSL_CTX*, long); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_set_options); + + return fp(ctx, options); + } + + return SSL_CTX_ctrl_(ctx, SSL_CTRL_OPTIONS, options, NULL); + } + + long SslGateway::SSL_CTX_ctrl_(SSL_CTX* ctx, int cmd, long larg, void* parg) + { + assert(functions.fpSSL_CTX_ctrl != 0); + + typedef long (FuncType)(SSL_CTX*, int, long, void*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_ctrl); + + return fp(ctx, cmd, larg, parg); + } + + SSL_CTX* SslGateway::SSL_CTX_new_(const SSL_METHOD* meth) + { + 
assert(functions.fpSSL_CTX_new != 0); + + typedef SSL_CTX*(FuncType)(const SSL_METHOD*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_new); + + return fp(meth); + } + + void SslGateway::SSL_CTX_free_(SSL_CTX* ctx) + { + assert(functions.fpSSL_CTX_free != 0); + + typedef void (FuncType)(SSL_CTX*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_free); + + fp(ctx); + } + + void SslGateway::SSL_CTX_set_verify_(SSL_CTX* ctx, int mode, int (* callback)(int, X509_STORE_CTX*)) + { + assert(functions.fpSSL_CTX_set_verify != 0); + + typedef void (FuncType)(SSL_CTX*, int, int (*)(int, X509_STORE_CTX*)); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_set_verify); + + fp(ctx, mode, callback); + } + + void SslGateway::SSL_CTX_set_verify_depth_(SSL_CTX* ctx, int depth) + { + assert(functions.fpSSL_CTX_set_verify_depth != 0); + + typedef void (FuncType)(SSL_CTX*, int); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_set_verify_depth); + + fp(ctx, depth); + } + + int SslGateway::SSL_CTX_load_verify_locations_(SSL_CTX* ctx, const char* cAfile, const char* cApath) + { + assert(functions.fpSSL_CTX_load_verify_locations != 0); + + typedef int (FuncType)(SSL_CTX*, const char*, const char*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_load_verify_locations); + + return fp(ctx, cAfile, cApath); + } + + int SslGateway::SSL_CTX_use_certificate_chain_file_(SSL_CTX* ctx, const char* file) + { + assert(functions.fpSSL_CTX_use_certificate_chain_file != 0); + + typedef int (FuncType)(SSL_CTX*, const char*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_use_certificate_chain_file); + + return fp(ctx, file); + } + + int SslGateway::SSL_CTX_use_RSAPrivateKey_file_(SSL_CTX* ctx, const char* file, int type) + { + assert(functions.fpSSL_CTX_use_RSAPrivateKey_file != 0); + + typedef int (FuncType)(SSL_CTX*, const char*, int); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_use_RSAPrivateKey_file); + + return fp(ctx, file, type); + } + + 
int SslGateway::SSL_CTX_set_cipher_list_(SSL_CTX* ctx, const char* str) + { + assert(functions.fpSSL_CTX_set_cipher_list != 0); + + typedef int (FuncType)(SSL_CTX*, const char*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_CTX_set_cipher_list); + + return fp(ctx, str); + } + + long SslGateway::SSL_get_verify_result_(const SSL* s) + { + assert(functions.fpSSL_get_verify_result != 0); + + typedef long (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_get_verify_result); + + return fp(s); + } + + int SslGateway::SSL_library_init_() + { + typedef int (FuncType)(); + + if (functions.fpSSL_library_init) + { + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_library_init); + + return fp(); + } + + return OPENSSL_init_ssl_(0, NULL); + } + + void SslGateway::SSL_load_error_strings_() + { + typedef void (FuncType)(); + + if (functions.fpSSL_load_error_strings) + { + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_load_error_strings); + + fp(); + + return; + } + + OPENSSL_init_ssl_(OPENSSL_INIT_LOAD_SSL_STRINGS | OPENSSL_INIT_LOAD_CRYPTO_STRINGS, NULL); + } + + X509* SslGateway::SSL_get_peer_certificate_(const SSL* s) + { + assert(functions.fpSSL_get_peer_certificate != 0); + + typedef X509*(FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_get_peer_certificate); + + return fp(s); + } + + long SslGateway::SSL_ctrl_(SSL* s, int cmd, long larg, void* parg) + { + assert(functions.fpSSL_ctrl != 0); + + typedef long (FuncType)(SSL*, int, long, void*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_ctrl); + + return fp(s, cmd, larg, parg); + } + + long SslGateway::SSL_set_tlsext_host_name_(SSL* s, const char* name) + { + return SSL_ctrl_(s, SSL_CTRL_SET_TLSEXT_HOSTNAME, + TLSEXT_NAMETYPE_host_name, const_cast<char*>(name)); + } + + void SslGateway::SSL_set_connect_state_(SSL* s) + { + assert(functions.fpSSL_set_connect_state != 0); + + typedef void (FuncType)(SSL*); + + FuncType* fp = 
reinterpret_cast<FuncType*>(functions.fpSSL_set_connect_state); + + return fp(s); + } + + int SslGateway::SSL_connect_(SSL* s) + { + assert(functions.fpSSL_connect != 0); + + typedef int (FuncType)(SSL*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_connect); + + return fp(s); + } + + int SslGateway::SSL_get_error_(const SSL* s, int ret) + { + assert(functions.fpSSL_get_error != 0); + + typedef int (FuncType)(const SSL*, int); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_get_error); + + return fp(s, ret); + } + + int SslGateway::SSL_want_(const SSL* s) + { + assert(functions.fpSSL_want != 0); + + typedef int (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_want); + + return fp(s); + } + + int SslGateway::SSL_write_(SSL* s, const void* buf, int num) + { + assert(functions.fpSSL_write != 0); + + typedef int (FuncType)(SSL*, const void*, int); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_write); + + return fp(s, buf, num); + } + + int SslGateway::SSL_read_(SSL* s, void* buf, int num) + { + assert(functions.fpSSL_read != 0); + + typedef int (FuncType)(SSL*, void*, int); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_read); + + return fp(s, buf, num); + } + + int SslGateway::SSL_pending_(const SSL* ssl) + { + assert(functions.fpSSL_pending != 0); + + typedef int (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_pending); + + return fp(ssl); + } + + int SslGateway::SSL_get_fd_(const SSL* ssl) + { + assert(functions.fpSSL_get_fd != 0); + + typedef int (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_get_fd); + + return fp(ssl); + } + + void SslGateway::SSL_free_(SSL* ssl) + { + assert(functions.fpSSL_free != 0); + + typedef void (FuncType)(SSL*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpSSL_free); + + fp(ssl); + } + + const SSL_METHOD* SslGateway::SSLv23_client_method_() + { + if (functions.fpSSLv23_client_method) + { + typedef const SSL_METHOD*(FuncType)(); + + FuncType* 
fp = reinterpret_cast<FuncType*>(functions.fpSSLv23_client_method); + + return fp(); + } + + return TLS_client_method_(); + } + + const SSL_METHOD* SslGateway::TLS_client_method_() + { + assert(functions.fpTLS_client_method != 0); + + typedef const SSL_METHOD*(FuncType)(); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpTLS_client_method); + + return fp(); + } + + void SslGateway::OPENSSL_config_(const char* configName) + { + assert(functions.fpOPENSSL_config != 0); + + typedef void (FuncType)(const char*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpOPENSSL_config); + + fp(configName); + } + + void SslGateway::X509_free_(X509* a) + { + assert(functions.fpX509_free != 0); + + typedef void (FuncType)(X509*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpX509_free); + + fp(a); + } + + BIO* SslGateway::BIO_new_ssl_connect_(SSL_CTX* ctx) + { + assert(functions.fpBIO_new_ssl_connect != 0); + + typedef BIO*(FuncType)(SSL_CTX*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpBIO_new_ssl_connect); + + return fp(ctx); + } + + void SslGateway::BIO_free_all_(BIO* a) + { + assert(functions.fpBIO_free_all != 0); + + typedef void (FuncType)(BIO*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpBIO_free_all); + + fp(a); + } + + long SslGateway::BIO_ctrl_(BIO* bp, int cmd, long larg, void* parg) + { + assert(functions.fpBIO_ctrl != 0); + + typedef long (FuncType)(BIO*, int, long, void*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpBIO_ctrl); + + return fp(bp, cmd, larg, parg); + } + + long SslGateway::BIO_get_fd_(BIO* bp, int* fd) + { + return BIO_ctrl_(bp, BIO_C_GET_FD, 0, reinterpret_cast<void*>(fd)); + } + + long SslGateway::BIO_get_ssl_(BIO* bp, SSL** ssl) + { + return BIO_ctrl_(bp, BIO_C_GET_SSL, 0, reinterpret_cast<void*>(ssl)); + } + + long SslGateway::BIO_set_nbio_(BIO* bp, long n) + { + return BIO_ctrl_(bp, BIO_C_SET_NBIO, n, NULL); + } + + long SslGateway::BIO_set_conn_hostname_(BIO* bp, const char* name) + { + return BIO_ctrl_(bp, BIO_C_SET_CONNECT, 0, const_cast<char*>(name)); + } + + 
unsigned long SslGateway::ERR_get_error_() + { + assert(functions.fpERR_get_error != 0); + + typedef unsigned long (FuncType)(); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpERR_get_error); + + return fp(); + } + + void SslGateway::ERR_error_string_n_(unsigned long e, char* buf, size_t len) + { + assert(functions.fpERR_error_string_n != 0); + + typedef void (FuncType)(unsigned long, char*, size_t); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpERR_error_string_n); + + fp(e, buf, len); + } + + void SslGateway::ERR_print_errors_fp_(FILE* fd) + { + if (!functions.fpERR_print_errors_fp) + return; + + typedef void (FuncType)(FILE*); + + FuncType* fp = reinterpret_cast<FuncType*>(functions.fpERR_print_errors_fp); + + fp(fd); + } } } } diff --git a/modules/platforms/cpp/thin-client-test/src/cache_client_test.cpp b/modules/platforms/cpp/thin-client-test/src/cache_client_test.cpp index 1bbb0b8b59fc7..4031d4611ef91 100644 --- a/modules/platforms/cpp/thin-client-test/src/cache_client_test.cpp +++ b/modules/platforms/cpp/thin-client-test/src/cache_client_test.cpp @@ -361,6 +361,56 @@ BOOST_AUTO_TEST_CASE(CacheClientContainsComplexKey) BOOST_CHECK(cache.ContainsKey(key)); } +BOOST_AUTO_TEST_CASE(CacheClientReplaceBasicKey) +{ + IgniteClientConfiguration cfg; + + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = client.GetCache<int32_t, std::string>("local"); + + int32_t key = 42; + std::string valIn = "Lorem ipsum"; + + cache.Put(key, valIn); + + BOOST_REQUIRE(!cache.Replace(1, "Test")); + BOOST_REQUIRE(cache.Replace(42, "Test")); + + BOOST_REQUIRE_EQUAL(cache.Get(1), ""); + BOOST_REQUIRE_EQUAL(cache.Get(42), "Test"); +} + +BOOST_AUTO_TEST_CASE(CacheClientReplaceComplexKey) +{ + IgniteClientConfiguration cfg; + + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<ignite::ComplexType, int32_t> cache = client.GetCache<ignite::ComplexType, int32_t>("local"); + + ignite::ComplexType key; + + key.i32Field = 123; + key.strField = "Test 
value"; + key.objField.f1 = 42; + key.objField.f2 = "Inner value"; + + int32_t valIn = 42; + + cache.Put(key, valIn); + + BOOST_REQUIRE(!cache.Replace(ignite::ComplexType(), 2)); + BOOST_REQUIRE(cache.Replace(key, 13)); + + BOOST_REQUIRE_EQUAL(cache.Get(key), 13); + BOOST_REQUIRE_EQUAL(cache.Get(ignite::ComplexType()), 0); +} + BOOST_AUTO_TEST_CASE(CacheClientPartitionsInt8) { NumPartitionTest<int8_t>(100); @@ -764,4 +814,390 @@ BOOST_AUTO_TEST_CASE(CacheClientDefaultDynamicCache) } } +BOOST_AUTO_TEST_CASE(CacheClientDefaultDynamicCacheThreeNodes) +{ + StartNode("node1"); + StartNode("node2"); + + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110..11120"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<std::string, int64_t> cache = + client.CreateCache<std::string, int64_t>("defaultdynamic3"); + + cache.RefreshAffinityMapping(); + + for (int64_t i = 1; i < 1000; ++i) + cache.Put(ignite::common::LexicalCast<std::string>(i * 39916801), i * 5039); + + for (int64_t i = 1; i < 1000; ++i) + { + int64_t val; + LocalPeek(cache, ignite::common::LexicalCast<std::string>(i * 39916801), val); + + BOOST_CHECK_EQUAL(val, i * 5039); + } +} + +BOOST_AUTO_TEST_CASE(CacheClientGetAllContainers) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); + values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + std::map<int32_t, std::string> res; + + cache.GetAll(keys, res); + + BOOST_REQUIRE_EQUAL(res.size(), keys.size()); + + for (size_t i = 0; i < keys.size(); ++i) + BOOST_REQUIRE_EQUAL(values[i], res[keys[i]]); +} + +BOOST_AUTO_TEST_CASE(CacheClientGetAllIterators) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = 
IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); + values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + std::map<int32_t, std::string> res; + + cache.GetAll(keys.begin(), keys.end(), std::inserter(res, res.end())); + + BOOST_REQUIRE_EQUAL(res.size(), keys.size()); + + for (size_t i = 0; i < keys.size(); ++i) + BOOST_REQUIRE_EQUAL(values[i], res[keys[i]]); +} + +BOOST_AUTO_TEST_CASE(CacheClientPutAllContainers) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::map<int32_t, std::string> toPut; + + toPut[1] = "first"; + toPut[2] = "second"; + toPut[3] = "third"; + + cache.PutAll(toPut); + + for (std::map<int32_t, std::string>::const_iterator it = toPut.begin(); it != toPut.end(); ++it) + BOOST_REQUIRE_EQUAL(cache.Get(it->first), it->second); +} + +BOOST_AUTO_TEST_CASE(CacheClientPutAllIterators) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::map<int32_t, std::string> toPut; + + toPut[1] = "first"; + toPut[2] = "second"; + toPut[3] = "third"; + + cache.PutAll(toPut.begin(), toPut.end()); + + for (std::map<int32_t, std::string>::const_iterator it = toPut.begin(); it != toPut.end(); ++it) + BOOST_REQUIRE_EQUAL(cache.Get(it->first), it->second); +} + +BOOST_AUTO_TEST_CASE(CacheClientRemoveAllContainers) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); 
+ values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 3); + + for (size_t i = 0; i < keys.size(); ++i) + BOOST_REQUIRE_EQUAL(cache.Get(keys[i]), values[i]); + + cache.RemoveAll(keys); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 0); +} + +BOOST_AUTO_TEST_CASE(CacheClientRemoveAllIterators) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); + values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 3); + + for (size_t i = 0; i < keys.size(); ++i) + BOOST_REQUIRE_EQUAL(cache.Get(keys[i]), values[i]); + + cache.RemoveAll(keys.begin(), keys.end()); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 0); +} + +BOOST_AUTO_TEST_CASE(CacheClientClearAllContainers) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); + values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 3); + + for (size_t i = 0; i < keys.size(); ++i) + BOOST_REQUIRE_EQUAL(cache.Get(keys[i]), values[i]); + + cache.ClearAll(keys); + + 
BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 0); +} + +BOOST_AUTO_TEST_CASE(CacheClientClearAllIterators) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); + values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 3); + + for (size_t i = 0; i < keys.size(); ++i) + BOOST_REQUIRE_EQUAL(cache.Get(keys[i]), values[i]); + + cache.ClearAll(keys.begin(), keys.end()); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 0); +} + +BOOST_AUTO_TEST_CASE(CacheClientContainsKeysContainers) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); + values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 3); + + BOOST_REQUIRE(cache.ContainsKeys(keys)); + + std::set<int32_t> check; + + BOOST_REQUIRE(cache.ContainsKeys(check)); + + check.insert(1); + BOOST_REQUIRE(cache.ContainsKeys(check)); + + check.insert(2); + BOOST_REQUIRE(cache.ContainsKeys(check)); + + check.insert(3); + BOOST_REQUIRE(cache.ContainsKeys(check)); + + check.insert(4); + BOOST_REQUIRE(!cache.ContainsKeys(check)); + + check.erase(2); + BOOST_REQUIRE(!cache.ContainsKeys(check)); + + check.erase(4); + BOOST_REQUIRE(cache.ContainsKeys(check)); +} + 
+BOOST_AUTO_TEST_CASE(CacheClientContainsKeysIterators) +{ + IgniteClientConfiguration cfg; + cfg.SetEndPoints("127.0.0.1:11110"); + + IgniteClient client = IgniteClient::Start(cfg); + + cache::CacheClient<int32_t, std::string> cache = + client.CreateCache<int32_t, std::string>("test"); + + std::vector<int32_t> keys; + + keys.push_back(1); + keys.push_back(2); + keys.push_back(3); + + std::vector<std::string> values; + + values.push_back("first"); + values.push_back("second"); + values.push_back("third"); + + for (size_t i = 0; i < keys.size(); ++i) + cache.Put(keys[i], values[i]); + + BOOST_REQUIRE_EQUAL(cache.GetSize(cache::CachePeekMode::PRIMARY), 3); + + BOOST_REQUIRE(cache.ContainsKeys(keys)); + + std::set<int32_t> check; + + BOOST_REQUIRE(cache.ContainsKeys(check.begin(), check.end())); + + check.insert(1); + BOOST_REQUIRE(cache.ContainsKeys(check.begin(), check.end())); + + check.insert(2); + BOOST_REQUIRE(cache.ContainsKeys(check.begin(), check.end())); + + check.insert(3); + BOOST_REQUIRE(cache.ContainsKeys(check.begin(), check.end())); + + check.insert(4); + BOOST_REQUIRE(!cache.ContainsKeys(check.begin(), check.end())); + + check.erase(2); + BOOST_REQUIRE(!cache.ContainsKeys(check.begin(), check.end())); + + check.erase(4); + BOOST_REQUIRE(cache.ContainsKeys(check.begin(), check.end())); +} + + BOOST_AUTO_TEST_SUITE_END() diff --git a/modules/platforms/cpp/thin-client-test/src/ssl_test.cpp b/modules/platforms/cpp/thin-client-test/src/ssl_test.cpp index 6bfda493a3005..eaadd9b26c59f 100644 --- a/modules/platforms/cpp/thin-client-test/src/ssl_test.cpp +++ b/modules/platforms/cpp/thin-client-test/src/ssl_test.cpp @@ -96,7 +96,7 @@ BOOST_AUTO_TEST_CASE(SslConnectionReject2) cfg.SetEndPoints("127.0.0.1:11110"); cfg.SetSslMode(SslMode::DISABLE); - + BOOST_CHECK_THROW(IgniteClient::Start(cfg), ignite::IgniteError); } diff --git a/modules/platforms/cpp/thin-client/Makefile.am b/modules/platforms/cpp/thin-client/Makefile.am index f8d919012fade..1efff8ea44341 100644 --- a/modules/platforms/cpp/thin-client/Makefile.am +++ 
b/modules/platforms/cpp/thin-client/Makefile.am @@ -63,6 +63,7 @@ libignite_thin_client_la_SOURCES = \ src/impl/data_router.cpp \ src/impl/ssl/ssl_gateway.cpp \ src/impl/ssl/secure_socket_client.cpp \ + src/impl/ssl/ssl_api.cpp \ src/ignite_client.cpp clean-local: diff --git a/modules/platforms/cpp/thin-client/include/Makefile.am b/modules/platforms/cpp/thin-client/include/Makefile.am index 79816fe8f8d73..7af4377e883bb 100644 --- a/modules/platforms/cpp/thin-client/include/Makefile.am +++ b/modules/platforms/cpp/thin-client/include/Makefile.am @@ -17,11 +17,12 @@ ACLOCAL_AMFLAGS =-I m4 -noinst_HEADERS = \ +nobase_include_HEADERS = \ ignite/thin/ssl_mode.h \ ignite/thin/ignite_client.h \ ignite/thin/ignite_client_configuration.h \ ignite/thin/cache/cache_client.h \ + ignite/thin/cache/cache_peek_mode.h \ ignite/impl/thin/writable_key.h \ ignite/impl/thin/readable.h \ ignite/impl/thin/writable.h \ diff --git a/modules/platforms/cpp/thin-client/include/ignite/impl/thin/cache/cache_client_proxy.h b/modules/platforms/cpp/thin-client/include/ignite/impl/thin/cache/cache_client_proxy.h index 349a1dc861b1c..cda2faeac0bc2 100644 --- a/modules/platforms/cpp/thin-client/include/ignite/impl/thin/cache/cache_client_proxy.h +++ b/modules/platforms/cpp/thin-client/include/ignite/impl/thin/cache/cache_client_proxy.h @@ -76,6 +76,14 @@ namespace ignite */ void Put(const WritableKey& key, const Writable& value); + /** + * Stores given key-value pairs in cache. + * If write-through is enabled, the stored values will be persisted to store. + * + * @param pairs Writable key-value pair sequence. + */ + void PutAll(const Writable& pairs); + /** * Get value from cache. * @@ -84,6 +92,31 @@ namespace ignite */ void Get(const WritableKey& key, Readable& value); + /** + * Retrieves values mapped to the specified keys from cache. + * If some value is not present in cache, then it will be looked up from swap storage. 
If + * it's not present in swap, or if swap is disabled, and if read-through is allowed, value + * will be loaded from persistent store. + * + * @param keys Writable key sequence. + * @param pairs Readable key-value pair sequence. + */ + void GetAll(const Writable& keys, Readable& pairs); + + /** + * Stores given key-value pair in cache only if there is a previous mapping for it. + * If cache previously contained value for the given key, then this value is returned. + * In case of PARTITIONED or REPLICATED caches, the value will be loaded from the primary node, + * which in its turn may load the value from the swap storage, and consecutively, if it's not + * in swap, from the underlying persistent storage. + * If write-through is enabled, the stored value will be persisted to store. + * + * @param key Key to store in cache. + * @param value Value to be associated with the given key. + * @return True if the value was replaced. + */ + bool Replace(const WritableKey& key, const Writable& value); + + /** * Check if the cache contains a value for the specified key. * @@ -92,6 +125,14 @@ namespace ignite */ bool ContainsKey(const WritableKey& key); + /** + * Check if cache contains mapping for these keys. + * + * @param keys Keys. + * @return True if cache contains mapping for all these keys. + */ + bool ContainsKeys(const Writable& keys); + /** * Gets the number of all entries cached across all nodes. * @note This operation is distributed and will query all participating nodes for their cache sizes. @@ -125,17 +166,23 @@ * If the returned value is not needed, method removex() should always be used instead of this * one to avoid the overhead associated with returning of the previous value. * If write-through is enabled, the value will be removed from store. - * This method is transactional and will enlist the entry into ongoing transaction if there is one. * * @param key Key whose mapping is to be removed from cache. 
* @return False if there was no matching key. */ bool Remove(const WritableKey& key); + /** + * Removes given key mappings from cache. + * If write-through is enabled, the value will be removed from store. + * + * @param keys Keys whose mappings are to be removed from cache. + */ + void RemoveAll(const Writable& keys); + /** * Removes all mappings from cache. * If write-through is enabled, the value will be removed from store. - * This method is transactional and will enlist the entry into ongoing transaction if there is one. */ void RemoveAll(); @@ -152,6 +199,14 @@ */ void Clear(); + /** + * Clear entries from the cache and swap storage, without notifying listeners or CacheWriters. + * Entry is cleared only if it is not currently locked, and is not participating in a transaction. + * + * @param keys Keys to clear. + */ + void ClearAll(const Writable& keys); + /** * Get from CacheClient. * Use for testing purposes only. diff --git a/modules/platforms/cpp/thin-client/include/ignite/impl/thin/readable.h b/modules/platforms/cpp/thin-client/include/ignite/impl/thin/readable.h index 458da2592de3e..87f4cc3b4a2ab 100644 --- a/modules/platforms/cpp/thin-client/include/ignite/impl/thin/readable.h +++ b/modules/platforms/cpp/thin-client/include/ignite/impl/thin/readable.h @@ -18,7 +18,7 @@ #ifndef _IGNITE_IMPL_THIN_READABLE #define _IGNITE_IMPL_THIN_READABLE -#include +#include namespace ignite { @@ -91,6 +91,74 @@ namespace ignite /** Data router. */ ValueType& value; }; + + /** + * Implementation of Readable interface for map. + * + * @tparam T1 Type of the first element in the pair. + * @tparam T2 Type of the second element in the pair. + * @tparam I Out iterator. + */ + template<typename T1, typename T2, typename I> + class ReadableMapImpl : public Readable + { + public: + /** Type of the first element in the pair. */ + typedef T1 ElementType1; + + /** Type of the second element in the pair. */ + typedef T2 ElementType2; + + /** Type of the iterator. 
*/ + typedef I IteratorType; + + /** + * Constructor. + * + * @param iter Iterator. + */ + ReadableMapImpl(IteratorType iter) : + iter(iter) + { + // No-op. + } + + /** + * Destructor. + */ + virtual ~ReadableMapImpl() + { + // No-op. + } + + /** + * Read value using reader. + * + * @param reader Reader to use. + */ + virtual void Read(binary::BinaryReaderImpl& reader) + { + using namespace ignite::binary; + + int32_t cnt = reader.ReadInt32(); + + for (int32_t i = 0; i < cnt; ++i) + { + std::pair<ElementType1, ElementType2> pair; + + reader.ReadTopObject(pair.first); + reader.ReadTopObject(pair.second); + + iter = pair; + + ++iter; + } + } + + private: + /** Iterator type. */ + IteratorType iter; + }; } } } diff --git a/modules/platforms/cpp/thin-client/include/ignite/impl/thin/writable.h b/modules/platforms/cpp/thin-client/include/ignite/impl/thin/writable.h index 5d5eefe203515..d97fc19a07519 100644 --- a/modules/platforms/cpp/thin-client/include/ignite/impl/thin/writable.h +++ b/modules/platforms/cpp/thin-client/include/ignite/impl/thin/writable.h @@ -18,7 +18,7 @@ #ifndef _IGNITE_IMPL_THIN_WRITABLE #define _IGNITE_IMPL_THIN_WRITABLE -#include +#include namespace ignite { @@ -50,6 +50,8 @@ namespace ignite /** * Implementation of the Writable class for a concrete type. + * + * @tparam T Value type. */ template<typename T> class WritableImpl : public Writable @@ -91,6 +93,157 @@ namespace ignite /** Value. */ const ValueType& value; }; + + /** + * Implementation of the Writable class for a set of values. + * + * @tparam T Value type. + * @tparam I Iterator type. + */ + template<typename T, typename I> + class WritableSetImpl : public Writable + { + public: + /** Element type. */ + typedef T ElementType; + + /** Iterator type. */ + typedef I IteratorType; + + /** + * Constructor. + * + * @param begin Begin of the sequence. + * @param end Sequence end. + */ + WritableSetImpl(IteratorType begin, IteratorType end) : + begin(begin), + end(end) + { + // No-op. + } + + /** + * Destructor. 
+ */ + virtual ~WritableSetImpl() + { + // No-op. + } + + /** + * Write sequence using writer. + * + * @param writer Writer to use. + */ + virtual void Write(binary::BinaryWriterImpl& writer) const + { + using namespace ignite::binary; + + interop::InteropOutputStream* out = writer.GetStream(); + + int32_t cntPos = out->Reserve(4); + + out->Synchronize(); + + int32_t cnt = 0; + for (IteratorType it = begin; it != end; ++it) + { + writer.WriteObject(*it); + + ++cnt; + } + + out->WriteInt32(cntPos, cnt); + + out->Synchronize(); + } + + private: + /** Sequence begin. */ + IteratorType begin; + + /** Sequence end. */ + IteratorType end; + }; + + /** + * Implementation of the Writable class for a map. + * + * @tparam K Key type. + * @tparam V Value type. + * @tparam I Iterator type. + */ + template<typename K, typename V, typename I> + class WritableMapImpl : public Writable + { + public: + /** Key type. */ + typedef K KeyType; + + /** Value type. */ + typedef V ValueType; + + /** Iterator type. */ + typedef I IteratorType; + + /** + * Constructor. + * + * @param begin Begin of the sequence. + * @param end Sequence end. + */ + WritableMapImpl(IteratorType begin, IteratorType end) : + begin(begin), + end(end) + { + // No-op. + } + + /** + * Destructor. + */ + virtual ~WritableMapImpl() + { + // No-op. + } + + /** + * Write sequence using writer. + * + * @param writer Writer to use. + */ + virtual void Write(binary::BinaryWriterImpl& writer) const + { + using namespace ignite::binary; + + interop::InteropOutputStream* out = writer.GetStream(); + + int32_t cntPos = out->Reserve(4); + + out->Synchronize(); + + int32_t cnt = 0; + for (IteratorType it = begin; it != end; ++it) + { + writer.WriteObject(it->first); + writer.WriteObject(it->second); + + ++cnt; + } + + out->WriteInt32(cntPos, cnt); + + out->Synchronize(); + } + + private: + /** Sequence begin. */ + IteratorType begin; + + /** Sequence end. 
*/ + IteratorType end; + }; } } } diff --git a/modules/platforms/cpp/thin-client/include/ignite/thin/cache/cache_client.h b/modules/platforms/cpp/thin-client/include/ignite/thin/cache/cache_client.h index 39e12698299bc..1fcf4f5479727 100644 --- a/modules/platforms/cpp/thin-client/include/ignite/thin/cache/cache_client.h +++ b/modules/platforms/cpp/thin-client/include/ignite/thin/cache/cache_client.h @@ -105,6 +105,33 @@ namespace ignite proxy.Put(wrKey, wrValue); } + /** + * Stores given key-value pairs in cache. + * If write-through is enabled, the stored values will be persisted to store. + * + * @param begin Iterator pointing to the beginning of the key-value pair sequence. + * @param end Iterator pointing to the end of the key-value pair sequence. + */ + template + void PutAll(InIter begin, InIter end) + { + impl::thin::WritableMapImpl wrSeq(begin, end); + + proxy.PutAll(wrSeq); + } + + /** + * Stores given key-value pairs in cache. + * If write-through is enabled, the stored values will be persisted to store. + * + * @param vals Key-value pairs to store in cache. + */ + template + void PutAll(const Map& vals) + { + PutAll(vals.begin(), vals.end()); + } + /** * Get value from the cache. * @@ -134,6 +161,60 @@ namespace ignite return value; } + /** + * Retrieves values mapped to the specified keys from cache. + * If some value is not present in cache, then it will be looked up from swap storage. If + * it's not present in swap, or if swap is disabled, and if read-through is allowed, value + * will be loaded from persistent store. + * + * @param begin Iterator pointing to the beginning of the key sequence. + * @param end Iterator pointing to the end of the key sequence. + * @param dst Output iterator. Should dereference to std::pair or CacheEntry. 
+ */ + template + void GetAll(InIter begin, InIter end, OutIter dst) + { + impl::thin::WritableSetImpl wrSeq(begin, end); + impl::thin::ReadableMapImpl rdSeq(dst); + + proxy.GetAll(wrSeq, rdSeq); + } + + /** + * Retrieves values mapped to the specified keys from cache. + * If some value is not present in cache, then it will be looked up from swap storage. If + * it's not present in swap, or if swap is disabled, and if read-through is allowed, value + * will be loaded from persistent store. + * + * @param keys Keys. + * @param res Map of key-value pairs. + */ + template + void GetAll(const Set& keys, Map& res) + { + return GetAll(keys.begin(), keys.end(), std::inserter(res, res.end())); + } + + /** + * Stores given key-value pair in cache only if there is a previous mapping for it. + * If cache previously contained value for the given key, then this value is returned. + * In case of PARTITIONED or REPLICATED caches, the value will be loaded from the primary node, + * which in its turn may load the value from the swap storage, and consecutively, if it's not + * in swap, from the underlying persistent storage. + * If write-through is enabled, the stored value will be persisted to store. + * + * @param key Key to store in cache. + * @param value Value to be associated with the given key. + * @return True if the value was replaced. + */ + bool Replace(const K& key, const V& value) + { + impl::thin::WritableKeyImpl wrKey(key); + impl::thin::WritableImpl wrValue(value); + + return proxy.Replace(wrKey, wrValue); + } + /** * Check if the cache contains a value for the specified key. * @@ -147,6 +228,33 @@ namespace ignite return proxy.ContainsKey(wrKey); } + /** + * Check if cache contains mapping for these keys. + * + * @param keys Keys. + * @return True if cache contains mapping for all these keys. + */ + template + bool ContainsKeys(const Set& keys) + { + return ContainsKeys(keys.begin(), keys.end()); + } + + /** + * Check if cache contains mapping for these keys. 
+ * + * @param begin Iterator pointing to the beginning of the key sequence. + * @param end Iterator pointing to the end of the key sequence. + * @return True if cache contains mapping for all these keys. + */ + template + bool ContainsKeys(InIter begin, InIter end) + { + impl::thin::WritableSetImpl wrSeq(begin, end); + + return proxy.ContainsKeys(wrSeq); + } + /** * Gets the number of all entries cached across all nodes. * @note This operation is distributed and will query all participating nodes for their cache sizes. @@ -169,7 +277,6 @@ namespace ignite * If the returned value is not needed, method removex() should always be used instead of this * one to avoid the overhead associated with returning of the previous value. * If write-through is enabled, the value will be removed from store. - * This method is transactional and will enlist the entry into ongoing transaction if there is one. * * @param key Key whose mapping is to be removed from cache. * @return False if there was no matching key. @@ -181,6 +288,33 @@ namespace ignite return proxy.Remove(wrKey); } + /** + * Removes given key mappings from cache. + * If write-through is enabled, the value will be removed from store. + * + * @param keys Keys whose mappings are to be removed from cache. + */ + template + void RemoveAll(const Set& keys) + { + RemoveAll(keys.begin(), keys.end()); + } + + /** + * Removes given key mappings from cache. + * If write-through is enabled, the value will be removed from store. + * + * @param begin Iterator pointing to the beginning of the key sequence. + * @param end Iterator pointing to the end of the key sequence. + */ + template + void RemoveAll(InIter begin, InIter end) + { + impl::thin::WritableSetImpl wrSeq(begin, end); + + proxy.RemoveAll(wrSeq); + } + /** * Removes all mappings from cache. * If write-through is enabled, the value will be removed from store. 
@@ -212,6 +346,33 @@ namespace ignite proxy.Clear(); } + /** + * Clear entries from the cache and swap storage, without notifying listeners or CacheWriters. + * Entry is cleared only if it is not currently locked, and is not participating in a transaction. + * + * @param keys Keys to clear. + */ + template + void ClearAll(const Set& keys) + { + ClearAll(keys.begin(), keys.end()); + } + + /** + * Clear entries from the cache and swap storage, without notifying listeners or CacheWriters. + * Entry is cleared only if it is not currently locked, and is not participating in a transaction. + * + * @param begin Iterator pointing to the beginning of the key sequence. + * @param end Iterator pointing to the end of the key sequence. + */ + template + void ClearAll(InIter begin, InIter end) + { + impl::thin::WritableSetImpl wrSeq(begin, end); + + proxy.ClearAll(wrSeq); + } + /** * Refresh affinity mapping. * diff --git a/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj b/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj index 3221268d37f1f..9c0091380cafa 100644 --- a/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj +++ b/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj @@ -166,6 +166,7 @@ + @@ -195,7 +196,7 @@ - + @@ -210,4 +211,4 @@ - + \ No newline at end of file diff --git a/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj.filters b/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj.filters index 2d20106f35572..a5cad74bcb705 100644 --- a/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj.filters +++ b/modules/platforms/cpp/thin-client/project/vs/thin-client.vcxproj.filters @@ -67,6 +67,9 @@ Code\impl\cache + + Code\impl\ssl + @@ -123,9 +126,6 @@ Code\impl\ssl - - Code\impl\ssl - Code\impl\ssl @@ -153,5 +153,8 @@ Code\impl\cache + + Code\impl\ssl + - + \ No newline at end of file diff --git a/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.cpp 
b/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.cpp index fe37a9cc09e12..78d0e13379fc4 100644 --- a/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.cpp +++ b/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.cpp @@ -61,39 +61,78 @@ namespace ignite router.Get()->SyncMessage(req, rsp, endPoints); } + + if (rsp.GetStatus() != ResponseStatus::SUCCESS) + throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); + } + + template + void CacheClientImpl::SyncMessage(const ReqT& req, RspT& rsp) + { + router.Get()->SyncMessage(req, rsp); + + if (rsp.GetStatus() != ResponseStatus::SUCCESS) + throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); } void CacheClientImpl::Put(const WritableKey& key, const Writable& value) { - CachePutRequest req(id, binary, key, value); + CacheKeyValueRequest req(id, binary, key, value); Response rsp; SyncCacheKeyMessage(key, req, rsp); - - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); } void CacheClientImpl::Get(const WritableKey& key, Readable& value) { - CacheKeyRequest req(id, binary, key); - CacheGetResponse rsp(value); + CacheValueRequest req(id, binary, key); + CacheValueResponse rsp(value); SyncCacheKeyMessage(key, req, rsp); + } - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); + void CacheClientImpl::PutAll(const Writable & pairs) + { + CacheValueRequest req(id, binary, pairs); + Response rsp; + + SyncMessage(req, rsp); + } + + void CacheClientImpl::GetAll(const Writable& keys, Readable& pairs) + { + CacheValueRequest req(id, binary, keys); + CacheValueResponse rsp(pairs); + + SyncMessage(req, rsp); + } + + bool CacheClientImpl::Replace(const WritableKey& key, const Writable& value) + { + CacheKeyValueRequest req(id, binary, key, value); + BoolResponse rsp; + + SyncCacheKeyMessage(key, req, 
rsp); + + return rsp.GetValue(); } bool CacheClientImpl::ContainsKey(const WritableKey& key) { - CacheKeyRequest req(id, binary, key); + CacheValueRequest req(id, binary, key); BoolResponse rsp; SyncCacheKeyMessage(key, req, rsp); - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); + return rsp.GetValue(); + } + + bool CacheClientImpl::ContainsKeys(const Writable& keys) + { + CacheValueRequest req(id, binary, keys); + BoolResponse rsp; + + SyncMessage(req, rsp); return rsp.GetValue(); } @@ -103,47 +142,43 @@ namespace ignite CacheGetSizeRequest req(id, binary, peekModes); Int64Response rsp; - router.Get()->SyncMessage(req, rsp); - - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); + SyncMessage(req, rsp); return rsp.GetValue(); } bool CacheClientImpl::Remove(const WritableKey& key) { - CacheKeyRequest req(id, binary, key); + CacheValueRequest req(id, binary, key); BoolResponse rsp; - router.Get()->SyncMessage(req, rsp); - - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); + SyncMessage(req, rsp); return rsp.GetValue(); } + void CacheClientImpl::RemoveAll(const Writable& keys) + { + CacheValueRequest req(id, binary, keys); + Response rsp; + + SyncMessage(req, rsp); + } + void CacheClientImpl::RemoveAll() { CacheRequest req(id, binary); Response rsp; - router.Get()->SyncMessage(req, rsp); - - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); + SyncMessage(req, rsp); } void CacheClientImpl::Clear(const WritableKey& key) { - CacheKeyRequest req(id, binary, key); + CacheValueRequest req(id, binary, key); Response rsp; - router.Get()->SyncMessage(req, rsp); - - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, 
rsp.GetError().c_str()); + SyncMessage(req, rsp); } void CacheClientImpl::Clear() @@ -151,21 +186,23 @@ namespace ignite CacheRequest req(id, binary); Response rsp; - router.Get()->SyncMessage(req, rsp); + SyncMessage(req, rsp); + } - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); + void CacheClientImpl::ClearAll(const Writable& keys) + { + CacheValueRequest req(id, binary, keys); + Response rsp; + + SyncMessage(req, rsp); } void CacheClientImpl::LocalPeek(const WritableKey& key, Readable& value) { - CacheKeyRequest req(id, binary, key); - CacheGetResponse rsp(value); + CacheValueRequest req(id, binary, key); + CacheValueResponse rsp(value); SyncCacheKeyMessage(key, req, rsp); - - if (rsp.GetStatus() != ResponseStatus::SUCCESS) - throw IgniteError(IgniteError::IGNITE_ERR_CACHE, rsp.GetError().c_str()); } void CacheClientImpl::RefreshAffinityMapping() diff --git a/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.h b/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.h index a28c4ddd5d4d1..f9555a2e63578 100644 --- a/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.h +++ b/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_impl.h @@ -74,6 +74,14 @@ namespace ignite */ void Put(const WritableKey& key, const Writable& value); + /** + * Stores given key-value pairs in cache. + * If write-through is enabled, the stored values will be persisted to store. + * + * @param pairs Writable key-value pair sequence. + */ + void PutAll(const Writable& pairs); + /** * Get value from cache. * @@ -82,6 +90,31 @@ namespace ignite */ void Get(const WritableKey& key, Readable& value); + /** + * Retrieves values mapped to the specified keys from cache. + * If some value is not present in cache, then it will be looked up from swap storage. 
If + * it's not present in swap, or if swap is disabled, and if read-through is allowed, value + * will be loaded from persistent store. + * + * @param keys Writable key sequence. + * @param pairs Readable key-value pair sequence. + */ + void GetAll(const Writable& keys, Readable& pairs); + + /** + * Stores given key-value pair in cache only if there is a previous mapping for it. + * If cache previously contained value for the given key, then this value is returned. + * In case of PARTITIONED or REPLICATED caches, the value will be loaded from the primary node, + * which in its turn may load the value from the swap storage, and consecutively, if it's not + * in swap, rom the underlying persistent storage. + * If write-through is enabled, the stored value will be persisted to store. + * + * @param key Key to store in cache. + * @param value Value to be associated with the given key. + * @return True if the value was replaced. + */ + bool Replace(const WritableKey& key, const Writable& value); + /** * Check if the cache contains a value for the specified key. * @@ -90,6 +123,14 @@ namespace ignite */ bool ContainsKey(const WritableKey& key); + /** + * Check if cache contains mapping for these keys. + * + * @param keys Keys. + * @return True if cache contains mapping for all these keys. + */ + bool ContainsKeys(const Writable& keys); + /** * Gets the number of all entries cached across all nodes. * @note This operation is distributed and will query all @@ -108,17 +149,23 @@ namespace ignite * If the returned value is not needed, method removex() should always be used instead of this * one to avoid the overhead associated with returning of the previous value. * If write-through is enabled, the value will be removed from store. - * This method is transactional and will enlist the entry into ongoing transaction if there is one. * * @param key Key whose mapping is to be removed from cache. * @return False if there was no matching key. 
*/ bool Remove(const WritableKey& key); + /** + * Removes given key mappings from cache. + * If write-through is enabled, the value will be removed from store. + * + * @param keys Keys whose mappings are to be removed from cache. + */ + void RemoveAll(const Writable& keys); + /** * Removes all mappings from cache. * If write-through is enabled, the value will be removed from store. - * This method is transactional and will enlist the entry into ongoing transaction if there is one. */ void RemoveAll(); @@ -135,6 +182,14 @@ namespace ignite */ void Clear(); + /** + * Clear entries from the cache and swap storage, without notifying listeners or CacheWriters. + * Entry is cleared only if it is not currently locked, and is not participating in a transaction. + * + * @param keys Keys to clear. + */ + void ClearAll(const Writable& keys); + /** * Peeks at in-memory cached value using default optional * peek mode. This method will not load value from any @@ -164,6 +219,16 @@ namespace ignite template void SyncCacheKeyMessage(const WritableKey& key, const ReqT& req, RspT& rsp); + /** + * Synchronously send message and receive response. + * + * @param req Request message. + * @param rsp Response message. + * @throw IgniteError on error. + */ + template + void SyncMessage(const ReqT& req, RspT& rsp); + /** Data router. 
*/ SP_DataRouter router; diff --git a/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_proxy.cpp b/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_proxy.cpp index 1fc8ac983211e..3d37255baf854 100644 --- a/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_proxy.cpp +++ b/modules/platforms/cpp/thin-client/src/impl/cache/cache_client_proxy.cpp @@ -17,7 +17,8 @@ #include -#include "impl/cache/cache_client_impl.h" +#include +#include using namespace ignite::impl::thin; using namespace cache; @@ -55,11 +56,31 @@ namespace ignite GetCacheImpl(impl).Get(key, value); } + void CacheClientProxy::PutAll(const Writable& pairs) + { + GetCacheImpl(impl).PutAll(pairs); + } + + void CacheClientProxy::GetAll(const Writable & keys, Readable & pairs) + { + GetCacheImpl(impl).GetAll(keys, pairs); + } + + bool CacheClientProxy::Replace(const WritableKey& key, const Writable& value) + { + return GetCacheImpl(impl).Replace(key, value); + } + bool CacheClientProxy::ContainsKey(const WritableKey & key) { return GetCacheImpl(impl).ContainsKey(key); } + bool CacheClientProxy::ContainsKeys(const Writable & keys) + { + return GetCacheImpl(impl).ContainsKeys(keys); + } + int64_t CacheClientProxy::GetSize(int32_t peekModes) { return GetCacheImpl(impl).GetSize(peekModes); @@ -80,6 +101,11 @@ namespace ignite return GetCacheImpl(impl).Remove(key); } + void CacheClientProxy::RemoveAll(const Writable & keys) + { + return GetCacheImpl(impl).RemoveAll(keys); + } + void CacheClientProxy::RemoveAll() { GetCacheImpl(impl).RemoveAll(); @@ -94,6 +120,11 @@ namespace ignite { GetCacheImpl(impl).Clear(); } + + void CacheClientProxy::ClearAll(const Writable& keys) + { + GetCacheImpl(impl).ClearAll(keys); + } } } } diff --git a/modules/platforms/cpp/thin-client/src/impl/data_channel.cpp b/modules/platforms/cpp/thin-client/src/impl/data_channel.cpp index 6b40d811b64f0..592dba331fc9e 100644 --- a/modules/platforms/cpp/thin-client/src/impl/data_channel.cpp +++ 
b/modules/platforms/cpp/thin-client/src/impl/data_channel.cpp @@ -23,7 +23,7 @@ #include #include "impl/message.h" -#include "impl/ssl/ssl_gateway.h" +#include "impl/ssl/ssl_api.h" #include "impl/ssl/secure_socket_client.h" #include "impl/net/tcp_socket_client.h" #include "impl/net/remote_type_updater.h" @@ -81,7 +81,7 @@ namespace ignite if (sslMode != SslMode::DISABLE) { - ssl::SslGateway::GetInstance().LoadAll(); + ssl::EnsureSslLoaded(); socket.reset(new ssl::SecureSocketClient(config.GetSslCertFile(), config.GetSslKeyFile(), config.GetSslCaFile())); diff --git a/modules/platforms/cpp/thin-client/src/impl/message.cpp b/modules/platforms/cpp/thin-client/src/impl/message.cpp index 7a991362e70b3..5cf8fb8eae3ba 100644 --- a/modules/platforms/cpp/thin-client/src/impl/message.cpp +++ b/modules/platforms/cpp/thin-client/src/impl/message.cpp @@ -73,20 +73,6 @@ namespace ignite reader.ReadString(error); } - CachePutRequest::CachePutRequest(int32_t cacheId, bool binary, const Writable& key, const Writable& value) : - CacheKeyRequest(cacheId, binary, key), - value(value) - { - // No-op. - } - - void CachePutRequest::Write(binary::BinaryWriterImpl& writer, const ProtocolVersion& ver) const - { - CacheKeyRequest::Write(writer, ver); - - value.Write(writer); - } - ClientCacheNodePartitionsResponse::ClientCacheNodePartitionsResponse( std::vector& nodeParts): nodeParts(nodeParts) @@ -111,18 +97,18 @@ namespace ignite nodeParts[i].Read(reader); } - CacheGetResponse::CacheGetResponse(Readable& value) : + CacheValueResponse::CacheValueResponse(Readable& value) : value(value) { // No-op. } - CacheGetResponse::~CacheGetResponse() + CacheValueResponse::~CacheValueResponse() { // No-op. 
} - void CacheGetResponse::ReadOnSuccess(binary::BinaryReaderImpl& reader, const ProtocolVersion&) + void CacheValueResponse::ReadOnSuccess(binary::BinaryReaderImpl& reader, const ProtocolVersion&) { value.Read(reader); } diff --git a/modules/platforms/cpp/thin-client/src/impl/message.h b/modules/platforms/cpp/thin-client/src/impl/message.h index 2f839c06e624f..2d0df6f3f7324 100644 --- a/modules/platforms/cpp/thin-client/src/impl/message.h +++ b/modules/platforms/cpp/thin-client/src/impl/message.h @@ -375,12 +375,12 @@ namespace ignite }; /** - * Cache key request. + * Cache value request. * - * Request to cache containing single key. + * Request to cache containing writable value. */ template - class CacheKeyRequest : public CacheRequest + class CacheValueRequest : public CacheRequest { public: /** @@ -388,11 +388,11 @@ namespace ignite * * @param cacheId Cache ID. * @param binary Binary cache flag. - * @param key Key. + * @param value Value. */ - CacheKeyRequest(int32_t cacheId, bool binary, const Writable& key) : + CacheValueRequest(int32_t cacheId, bool binary, const Writable& value) : CacheRequest(cacheId, binary), - key(key) + value(value) { // No-op. } @@ -400,7 +400,7 @@ namespace ignite /** * Destructor. */ - virtual ~CacheKeyRequest() + virtual ~CacheValueRequest() { // No-op. } @@ -414,18 +414,19 @@ namespace ignite { CacheRequest::Write(writer, ver); - key.Write(writer); + value.Write(writer); } private: /** Key. */ - const Writable& key; + const Writable& value; }; /** - * Cache put request. + * Cache key value request. */ - class CachePutRequest : public CacheKeyRequest + template + class CacheKeyValueRequest : public CacheValueRequest { public: /** @@ -436,12 +437,17 @@ namespace ignite * @param key Key. * @param value Value. 
*/ - CachePutRequest(int32_t cacheId, bool binary, const Writable& key, const Writable& value); + CacheKeyValueRequest(int32_t cacheId, bool binary, const Writable& key, const Writable& value) : + CacheValueRequest(cacheId, binary, key), + value(value) + { + // No-op. + } /** * Destructor. */ - virtual ~CachePutRequest() + virtual ~CacheKeyValueRequest() { // No-op. } @@ -451,7 +457,12 @@ namespace ignite * @param writer Writer. * @param ver Version. */ - virtual void Write(binary::BinaryWriterImpl& writer, const ProtocolVersion& ver) const; + virtual void Write(binary::BinaryWriterImpl& writer, const ProtocolVersion& ver) const + { + CacheValueRequest::Write(writer, ver); + + value.Write(writer); + } private: /** Value. */ @@ -621,9 +632,9 @@ namespace ignite }; /** - * Cache get response. + * Cache value response. */ - class CacheGetResponse : public Response + class CacheValueResponse : public Response { public: /** @@ -631,12 +642,12 @@ namespace ignite * * @param value Value. */ - CacheGetResponse(Readable& value); + CacheValueResponse(Readable& value); /** * Destructor. */ - virtual ~CacheGetResponse(); + virtual ~CacheValueResponse(); /** * Read data if response status is ResponseStatus::SUCCESS. 
diff --git a/modules/platforms/cpp/thin-client/src/impl/ssl/secure_socket_client.cpp b/modules/platforms/cpp/thin-client/src/impl/ssl/secure_socket_client.cpp index e7e60b6e55e61..5a9b7f4afd24c 100644 --- a/modules/platforms/cpp/thin-client/src/impl/ssl/secure_socket_client.cpp +++ b/modules/platforms/cpp/thin-client/src/impl/ssl/secure_socket_client.cpp @@ -24,12 +24,15 @@ #include "impl/net/tcp_socket_client.h" #include "impl/ssl/secure_socket_client.h" -#include "impl/ssl/ssl_bindings.h" + +#include "impl/ssl/ssl_gateway.h" #ifndef SOCKET_ERROR # define SOCKET_ERROR (-1) #endif // SOCKET_ERROR +enum { OPERATION_SUCCESS = 1 }; + namespace ignite { namespace impl @@ -55,12 +58,14 @@ namespace ignite CloseInteral(); if (context) - ssl::SSL_CTX_free(reinterpret_cast(context)); + SslGateway::GetInstance().SSL_CTX_free_(reinterpret_cast(context)); } bool SecureSocketClient::Connect(const char* hostname, uint16_t port, int32_t timeout) { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (!context) { @@ -77,34 +82,34 @@ namespace ignite if (!ssl0) return false; - int res = ssl::SSL_set_tlsext_host_name_(ssl0, hostname); + int res = sslGateway.SSL_set_tlsext_host_name_(ssl0, hostname); if (res != OPERATION_SUCCESS) { - ssl::SSL_free_(ssl0); + sslGateway.SSL_free_(ssl0); std::string err = "Can not set host name for secure connection: " + GetSslError(ssl0, res); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, err.c_str()); } - ssl::SSL_set_connect_state_(ssl0); + sslGateway.SSL_set_connect_state_(ssl0); bool connected = CompleteConnectInternal(ssl0, timeout); if (!connected) { - ssl::SSL_free_(ssl0); + sslGateway.SSL_free_(ssl0); return false; } // Verify a server certificate was presented during the negotiation - X509* cert = ssl::SSL_get_peer_certificate(ssl0); + X509* cert = sslGateway.SSL_get_peer_certificate_(ssl0); if (cert) - ssl::X509_free(cert); + sslGateway.X509_free_(cert); 
else { - ssl::SSL_free_(ssl0); + sslGateway.SSL_free_(ssl0); // std::string err = "Remote host did not provide certificate: " + GetSslError(ssl0, res); @@ -113,10 +118,10 @@ namespace ignite // Verify the result of chain verification // Verification performed according to RFC 4158 - res = ssl::SSL_get_verify_result(ssl0); + res = sslGateway.SSL_get_verify_result_(ssl0); if (X509_V_OK != res) { - ssl::SSL_free_(ssl0); + sslGateway.SSL_free_(ssl0); // std::string err = "Certificate chain verification failed: " + GetSslError(ssl0, res); @@ -135,7 +140,9 @@ namespace ignite int SecureSocketClient::Send(const int8_t* data, size_t size, int32_t timeout) { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (!ssl) { @@ -145,14 +152,16 @@ namespace ignite SSL* ssl0 = reinterpret_cast(ssl); - int res = ssl::SSL_write_(ssl0, data, static_cast(size)); + int res = sslGateway.SSL_write_(ssl0, data, static_cast(size)); return res; } int SecureSocketClient::Receive(int8_t* buffer, size_t size, int32_t timeout) { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (!ssl) { @@ -164,7 +173,7 @@ namespace ignite int res = 0; - if (!blocking && ssl::SSL_pending_(ssl0) == 0) + if (!blocking && sslGateway.SSL_pending_(ssl0) == 0) { res = WaitOnSocket(ssl, timeout, true); @@ -172,7 +181,7 @@ namespace ignite return res; } - res = ssl::SSL_read_(ssl0, buffer, static_cast(size)); + res = sslGateway.SSL_read_(ssl0, buffer, static_cast(size)); return res; } @@ -185,76 +194,58 @@ namespace ignite void* SecureSocketClient::MakeContext(const std::string& certPath, const std::string& keyPath, const std::string& caPath) { - assert(SslGateway::GetInstance().Loaded()); - - static bool sslLibInited = false; - static common::concurrent::CriticalSection sslCs; + SslGateway &sslGateway = SslGateway::GetInstance(); - if (!sslLibInited) - 
{ - common::concurrent::CsLockGuard lock(sslCs); - - if (!sslLibInited) - { - (void)SSL_library_init(); + assert(sslGateway.Loaded()); - SSL_load_error_strings(); - - OPENSSL_config(0); - - sslLibInited = true; - } - } - - const SSL_METHOD* method = ssl::SSLv23_client_method_(); + const SSL_METHOD* method = sslGateway.SSLv23_client_method_(); if (!method) throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not get SSL method"); - SSL_CTX* ctx = ssl::SSL_CTX_new(method); + SSL_CTX* ctx = sslGateway.SSL_CTX_new_(method); if (!ctx) throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not create new SSL context"); - ssl::SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, 0); + sslGateway.SSL_CTX_set_verify_(ctx, SSL_VERIFY_PEER, 0); - ssl::SSL_CTX_set_verify_depth(ctx, 8); + sslGateway.SSL_CTX_set_verify_depth_(ctx, 8); - const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION; - ssl::SSL_CTX_ctrl(ctx, SSL_CTRL_OPTIONS, flags, NULL); + sslGateway.SSL_CTX_set_options_(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION); const char* cCaPath = caPath.empty() ? 
0 : caPath.c_str(); - long res = ssl::SSL_CTX_load_verify_locations(ctx, cCaPath, 0); + long res = sslGateway.SSL_CTX_load_verify_locations_(ctx, cCaPath, 0); if (res != OPERATION_SUCCESS) { - ssl::SSL_CTX_free(ctx); + sslGateway.SSL_CTX_free_(ctx); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not set Certificate Authority path for secure connection"); } - res = ssl::SSL_CTX_use_certificate_chain_file(ctx, certPath.c_str()); + res = sslGateway.SSL_CTX_use_certificate_chain_file_(ctx, certPath.c_str()); if (res != OPERATION_SUCCESS) { - ssl::SSL_CTX_free(ctx); + sslGateway.SSL_CTX_free_(ctx); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not set client certificate file for secure connection"); } - res = ssl::SSL_CTX_use_RSAPrivateKey_file(ctx, keyPath.c_str(), SSL_FILETYPE_PEM); + res = sslGateway.SSL_CTX_use_RSAPrivateKey_file_(ctx, keyPath.c_str(), SSL_FILETYPE_PEM); if (res != OPERATION_SUCCESS) { - ssl::SSL_CTX_free(ctx); + sslGateway.SSL_CTX_free_(ctx); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not set private key file for secure connection"); } const char* const PREFERRED_CIPHERS = "HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4"; - res = ssl::SSL_CTX_set_cipher_list(ctx, PREFERRED_CIPHERS); + res = sslGateway.SSL_CTX_set_cipher_list_(ctx, PREFERRED_CIPHERS); if (res != OPERATION_SUCCESS) { - ssl::SSL_CTX_free(ctx); + sslGateway.SSL_CTX_free_(ctx); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not set ciphers list for secure connection"); @@ -265,30 +256,34 @@ namespace ignite void* SecureSocketClient::MakeSsl(void* context, const char* hostname, uint16_t port, bool& blocking) { - BIO* bio = ssl::BIO_new_ssl_connect(reinterpret_cast(context)); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + + BIO* bio = sslGateway.BIO_new_ssl_connect_(reinterpret_cast(context)); if (!bio) throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not create SSL connection."); - blocking = 
ssl::BIO_set_nbio_(bio, 1) != OPERATION_SUCCESS; + blocking = sslGateway.BIO_set_nbio_(bio, 1) != OPERATION_SUCCESS; std::stringstream stream; stream << hostname << ":" << port; std::string address = stream.str(); - long res = ssl::BIO_set_conn_hostname_(bio, address.c_str()); + long res = sslGateway.BIO_set_conn_hostname_(bio, address.c_str()); if (res != OPERATION_SUCCESS) { - ssl::BIO_free_all(bio); + sslGateway.BIO_free_all_(bio); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not set SSL connection hostname."); } SSL* ssl = 0; - ssl::BIO_get_ssl_(bio, &ssl); + sslGateway.BIO_get_ssl_(bio, &ssl); if (!ssl) { - ssl::BIO_free_all(bio); + sslGateway.BIO_free_all_(bio); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, "Can not get SSL instance from BIO."); } @@ -298,21 +293,25 @@ namespace ignite bool SecureSocketClient::CompleteConnectInternal(void* ssl, int timeout) { + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + SSL* ssl0 = reinterpret_cast(ssl); while (true) { - int res = ssl::SSL_connect_(ssl0); + int res = sslGateway.SSL_connect_(ssl0); if (res == OPERATION_SUCCESS) return true; - int sslError = ssl::SSL_get_error_(ssl0, res); + int sslError = sslGateway.SSL_get_error_(ssl0, res); if (IsActualError(sslError)) return false; - int want = ssl::SSL_want_(ssl0); + int want = sslGateway.SSL_want_(ssl0); res = WaitOnSocket(ssl, timeout, want == SSL_READING); @@ -326,9 +325,13 @@ namespace ignite std::string SecureSocketClient::GetSslError(void* ssl, int ret) { + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + SSL* ssl0 = reinterpret_cast(ssl); - int sslError = ssl::SSL_get_error_(ssl0, ret); + int sslError = sslGateway.SSL_get_error_(ssl0, ret); switch (sslError) { @@ -345,11 +348,11 @@ namespace ignite return std::string("SSL error: ") + common::LexicalCast(sslError); } - long error = ssl::ERR_get_error_(); + long error = sslGateway.ERR_get_error_(); char errBuf[1024] = { 0 
}; - ssl::ERR_error_string_n_(error, errBuf, sizeof(errBuf)); + sslGateway.ERR_error_string_n_(error, errBuf, sizeof(errBuf)); return std::string(errBuf); } @@ -373,11 +376,13 @@ namespace ignite void SecureSocketClient::CloseInteral() { - assert(SslGateway::GetInstance().Loaded()); + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); if (ssl) { - ssl::SSL_free_(reinterpret_cast(ssl)); + sslGateway.SSL_free_(reinterpret_cast(ssl)); ssl = 0; } @@ -385,13 +390,17 @@ namespace ignite int SecureSocketClient::WaitOnSocket(void* ssl, int32_t timeout, bool rd) { + SslGateway &sslGateway = SslGateway::GetInstance(); + + assert(sslGateway.Loaded()); + int ready = 0; int lastError = 0; SSL* ssl0 = reinterpret_cast(ssl); fd_set fds; - int fd = ssl::SSL_get_fd_(ssl0); + int fd = sslGateway.SSL_get_fd_(ssl0); if (fd < 0) { diff --git a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_api.cpp b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_api.cpp new file mode 100644 index 0000000000000..a1da21524d6f1 --- /dev/null +++ b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_api.cpp @@ -0,0 +1,36 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include "impl/ssl/ssl_gateway.h" +#include "impl/ssl/ssl_api.h" + +namespace ignite +{ + namespace impl + { + namespace thin + { + namespace ssl + { + void EnsureSslLoaded() + { + SslGateway::GetInstance().LoadAll(); + } + } + } + } +} diff --git a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_api.h b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_api.h new file mode 100644 index 0000000000000..8d6b72dbd00f6 --- /dev/null +++ b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_api.h @@ -0,0 +1,36 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#ifndef _IGNITE_IMPL_THIN_SSL_SSL_API +#define _IGNITE_IMPL_THIN_SSL_SSL_API + + +namespace ignite +{ + namespace impl + { + namespace thin + { + namespace ssl + { + void EnsureSslLoaded(); + } + } + } +} + +#endif //_IGNITE_IMPL_THIN_SSL_SSL_API \ No newline at end of file diff --git a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_bindings.h b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_bindings.h deleted file mode 100644 index 05ad00d9354ce..0000000000000 --- a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_bindings.h +++ /dev/null @@ -1,360 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#ifndef _IGNITE_IMPL_THIN_SSL_SSL_BINDINGS -#define _IGNITE_IMPL_THIN_SSL_SSL_BINDINGS - -#include -#include -#include - -#include "impl/ssl/ssl_gateway.h" - -namespace ignite -{ - namespace impl - { - namespace thin - { - namespace ssl - { - // Declaring constant used by OpenSSL for readability. 
- enum { OPERATION_SUCCESS = 1 }; - - inline SSL_CTX *SSL_CTX_new(const SSL_METHOD *meth) - { - typedef SSL_CTX*(FuncType)(const SSL_METHOD*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_CTX_new); - - return fp(meth); - } - - inline void SSL_CTX_free(SSL_CTX *ctx) - { - typedef void(FuncType)(SSL_CTX*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_CTX_free); - - fp(ctx); - } - - inline void SSL_CTX_set_verify(SSL_CTX *ctx, int mode, int(*callback) (int, X509_STORE_CTX *)) - { - typedef void(FuncType)(SSL_CTX*, int, int(*)(int, X509_STORE_CTX*)); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_set_verify); - - fp(ctx, mode, callback); - } - - inline void SSL_CTX_set_verify_depth(SSL_CTX *ctx, int depth) - { - typedef void(FuncType)(SSL_CTX*, int); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_set_verify_depth); - - fp(ctx, depth); - } - - inline int SSL_CTX_load_verify_locations(SSL_CTX *ctx, const char *cAfile, const char *cApath) - { - typedef int(FuncType)(SSL_CTX*, const char*, const char*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_load_verify_locations); - - return fp(ctx, cAfile, cApath); - } - - inline int SSL_CTX_use_certificate_chain_file(SSL_CTX *ctx, const char *file) - { - typedef int(FuncType)(SSL_CTX*, const char*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_use_certificate_chain_file); - - return fp(ctx, file); - } - - inline int SSL_CTX_use_RSAPrivateKey_file(SSL_CTX *ctx, const char *file, int type) - { - typedef int(FuncType)(SSL_CTX*, const char*, int); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_use_RSAPrivateKey_file); - - return fp(ctx, file, type); - } - - inline int SSL_CTX_set_cipher_list(SSL_CTX *ctx, const char *str) - { - typedef 
int(FuncType)(SSL_CTX*, const char*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_CTX_set_cipher_list); - - return fp(ctx, str); - } - - inline long SSL_CTX_ctrl(SSL_CTX *ctx, int cmd, long larg, void *parg) - { - typedef long(FuncType)(SSL_CTX*, int, long, void*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_CTX_ctrl); - - return fp(ctx, cmd, larg, parg); - } - - inline long SSL_get_verify_result(const SSL *s) - { - typedef long(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_get_verify_result); - - return fp(s); - } - - inline int SSL_library_init() - { - typedef int(FuncType)(); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_library_init); - - return fp(); - } - - inline void SSL_load_error_strings() - { - typedef void(FuncType)(); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_load_error_strings); - - fp(); - } - - inline X509 *SSL_get_peer_certificate(const SSL *s) - { - typedef X509*(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_get_peer_certificate); - - return fp(s); - } - - inline long SSL_ctrl(SSL *s, int cmd, long larg, void *parg) - { - typedef long(FuncType)(SSL*, int, long, void*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_ctrl); - - return fp(s, cmd, larg ,parg); - } - - inline long SSL_set_tlsext_host_name_(SSL *s, const char *name) - { - return ssl::SSL_ctrl(s, SSL_CTRL_SET_TLSEXT_HOSTNAME, - TLSEXT_NAMETYPE_host_name, const_cast(name)); - } - - inline void SSL_set_connect_state_(SSL* s) - { - typedef void(FuncType)(SSL*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSL_set_connect_state); - - return fp(s); - } - - inline int SSL_connect_(SSL* s) - { - typedef int(FuncType)(SSL*); - - FuncType* fp 
= reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_connect); - - return fp(s); - } - - inline int SSL_get_error_(const SSL *s, int ret) - { - typedef int(FuncType)(const SSL*, int); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_get_error); - - return fp(s, ret); - } - - inline int SSL_want_(const SSL *s) - { - typedef int(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_want); - - return fp(s); - } - - inline int SSL_write_(SSL *s, const void *buf, int num) - { - typedef int(FuncType)(SSL*, const void*, int); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_write); - - return fp(s, buf, num); - } - - inline int SSL_read_(SSL *s, void *buf, int num) - { - typedef int(FuncType)(SSL*, void*, int); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_read); - - return fp(s, buf, num); - } - - inline int SSL_pending_(const SSL *ssl) - { - typedef int(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_pending); - - return fp(ssl); - } - - inline int SSL_get_fd_(const SSL *ssl) - { - typedef int(FuncType)(const SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_get_fd); - - return fp(ssl); - } - - inline void SSL_free_(SSL *ssl) - { - typedef void(FuncType)(SSL*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpSSL_free); - - fp(ssl); - } - - inline const SSL_METHOD *SSLv23_client_method_() - { - typedef const SSL_METHOD*(FuncType)(); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpSSLv23_client_method); - - return fp(); - } - - inline void OPENSSL_config(const char *configName) - { - typedef void(FuncType)(const char*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpOPENSSL_config); - - fp(configName); - } - - 
inline void X509_free(X509 *a) - { - typedef void(FuncType)(X509*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpX509_free); - - fp(a); - } - - inline BIO *BIO_new_ssl_connect(SSL_CTX *ctx) - { - typedef BIO*(FuncType)(SSL_CTX*); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpBIO_new_ssl_connect); - - return fp(ctx); - } - - inline void BIO_free_all(BIO *a) - { - typedef void(FuncType)(BIO*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpBIO_free_all); - - fp(a); - } - - inline long BIO_ctrl(BIO *bp, int cmd, long larg, void *parg) - { - typedef long(FuncType)(BIO*, int, long, void*); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpBIO_ctrl); - - return fp(bp, cmd, larg, parg); - } - - inline long BIO_get_fd_(BIO *bp, int *fd) - { - return ssl::BIO_ctrl(bp, BIO_C_GET_FD, 0, reinterpret_cast(fd)); - } - - inline long BIO_get_ssl_(BIO *bp, SSL** ssl) - { - return ssl::BIO_ctrl(bp, BIO_C_GET_SSL, 0, reinterpret_cast(ssl)); - } - - inline long BIO_set_nbio_(BIO *bp, long n) - { - return ssl::BIO_ctrl(bp, BIO_C_SET_NBIO, n, NULL); - } - - inline long BIO_set_conn_hostname_(BIO *bp, const char *name) - { - return ssl::BIO_ctrl(bp, BIO_C_SET_CONNECT, 0, const_cast(name)); - } - - inline unsigned long ERR_get_error_() - { - typedef unsigned long(FuncType)(); - - FuncType* fp = reinterpret_cast(SslGateway::GetInstance().GetFunctions().fpERR_get_error); - - return fp(); - } - - inline void ERR_error_string_n_(unsigned long e, char *buf, size_t len) - { - typedef void(FuncType)(unsigned long, char*, size_t); - - FuncType* fp = reinterpret_cast( - SslGateway::GetInstance().GetFunctions().fpERR_error_string_n); - - fp(e, buf, len); - } - } - } - } -} - -#endif //_IGNITE_IMPL_THIN_SSL_SSL_BINDINGS \ No newline at end of file diff --git a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.cpp 
b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.cpp index 380fe0f1843fd..42ed88dc7d47c 100644 --- a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.cpp +++ b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.cpp @@ -26,6 +26,18 @@ # define ADDITIONAL_OPENSSL_HOME_ENV "OPEN_SSL_HOME" #endif // ADDITIONAL_OPENSSL_HOME_ENV +#ifndef SSL_CTRL_OPTIONS +# define SSL_CTRL_OPTIONS 32 +#endif // SSL_CTRL_OPTIONS + +#ifndef OPENSSL_INIT_LOAD_SSL_STRINGS +# define OPENSSL_INIT_LOAD_SSL_STRINGS 0x00200000L +#endif // OPENSSL_INIT_LOAD_SSL_STRINGS + +#ifndef OPENSSL_INIT_LOAD_CRYPTO_STRINGS +# define OPENSSL_INIT_LOAD_CRYPTO_STRINGS 0x00000002L +#endif // OPENSSL_INIT_LOAD_CRYPTO_STRINGS + namespace ignite { namespace impl @@ -38,7 +50,7 @@ namespace ignite inited(false), functions() { - // No-op. + memset(&functions, 0, sizeof(functions)); } SslGateway::~SslGateway() @@ -46,41 +58,61 @@ namespace ignite // No-op. } + void SslGateway::UnloadAll() + { + libeay32.Unload(); + ssleay32.Unload(); + libssl.Unload(); + libcrypto.Unload(); + + memset(&functions, 0, sizeof(functions)); + } + common::dynamic::Module SslGateway::LoadSslLibrary(const char* name) { using namespace common; using namespace dynamic; - std::string fullName = GetDynamicLibraryName(name); - - Module libModule = LoadModule(fullName); - - if (libModule.IsLoaded()) - return libModule; - std::string home = GetEnv(ADDITIONAL_OPENSSL_HOME_ENV); if (home.empty()) home = GetEnv("OPENSSL_HOME"); - if (home.empty()) - return libModule; + std::string fullName = GetDynamicLibraryName(name); + + if (!home.empty()) + { + std::stringstream constructor; - std::stringstream constructor; + constructor << home << Fs << "bin" << Fs << fullName; - constructor << home << Fs << "bin" << Fs << fullName; + std::string fullPath = constructor.str(); - std::string fullPath = constructor.str(); + Module mod = LoadModule(fullPath); - return LoadModule(fullPath); + if (mod.IsLoaded()) + return mod; + } + + 
return LoadModule(fullName); } void SslGateway::LoadSslLibraries() { - libeay32 = LoadSslLibrary("libeay32"); - ssleay32 = LoadSslLibrary("ssleay32"); libssl = LoadSslLibrary("libssl"); + if (!libssl.IsLoaded()) + { + libcrypto = LoadSslLibrary("libcrypto-1_1-x64"); + libssl = LoadSslLibrary("libssl-1_1-x64"); + } + + if (!libssl.IsLoaded()) + { + libeay32 = LoadSslLibrary("libeay32"); + ssleay32 = LoadSslLibrary("ssleay32"); + } + if (!libssl.IsLoaded() && (!libeay32.IsLoaded() || !ssleay32.IsLoaded())) { if (!libssl.IsLoaded()) @@ -89,44 +121,37 @@ namespace ignite std::stringstream ss; - ss << "Can not load neccessary OpenSSL libraries: "; + ss << "Can not load neccessary OpenSSL libraries:"; if (!libeay32.IsLoaded()) - ss << "libeay32"; + ss << " libeay32"; if (!ssleay32.IsLoaded()) ss << " ssleay32"; - libeay32.Unload(); - ssleay32.Unload(); - libssl.Unload(); - std::string res = ss.str(); throw IgniteError(IgniteError::IGNITE_ERR_GENERIC, res.c_str()); } } - SslGateway& SslGateway::GetInstance() + void SslGateway::LoadMandatoryMethods() { - static SslGateway self; + functions.fpSSLeay_version = TryLoadSslMethod("SSLeay_version"); - return self; - } - - void SslGateway::LoadAll() - { - using namespace common::dynamic; + if (!functions.fpSSLeay_version) + functions.fpOpenSSL_version = LoadSslMethod("OpenSSL_version"); - if (inited) - return; + functions.fpSSL_library_init = TryLoadSslMethod("SSL_library_init"); + functions.fpSSL_load_error_strings = TryLoadSslMethod("SSL_load_error_strings"); - common::concurrent::CsLockGuard lock(initCs); + if (!functions.fpSSL_library_init || !functions.fpSSL_load_error_strings) + functions.fpOPENSSL_init_ssl = LoadSslMethod("OPENSSL_init_ssl"); - if (inited) - return; + functions.fpSSLv23_client_method = TryLoadSslMethod("SSLv23_client_method"); - LoadSslLibraries(); + if (!functions.fpSSLv23_client_method) + functions.fpTLS_client_method = LoadSslMethod("TLS_client_method"); functions.fpSSL_CTX_new = 
LoadSslMethod("SSL_CTX_new"); functions.fpSSL_CTX_free = LoadSslMethod("SSL_CTX_free"); @@ -138,13 +163,11 @@ namespace ignite functions.fpSSL_CTX_set_cipher_list = LoadSslMethod("SSL_CTX_set_cipher_list"); functions.fpSSL_get_verify_result = LoadSslMethod("SSL_get_verify_result"); - functions.fpSSL_library_init = LoadSslMethod("SSL_library_init"); - functions.fpSSL_load_error_strings = LoadSslMethod("SSL_load_error_strings"); + functions.fpSSL_get_peer_certificate = LoadSslMethod("SSL_get_peer_certificate"); functions.fpSSL_ctrl = LoadSslMethod("SSL_ctrl"); functions.fpSSL_CTX_ctrl = LoadSslMethod("SSL_CTX_ctrl"); - functions.fpSSLv23_client_method = LoadSslMethod("SSLv23_client_method"); functions.fpSSL_set_connect_state = LoadSslMethod("SSL_set_connect_state"); functions.fpSSL_connect = LoadSslMethod("SSL_connect"); functions.fpSSL_get_error = LoadSslMethod("SSL_get_error"); @@ -164,60 +187,67 @@ namespace ignite functions.fpERR_get_error = LoadSslMethod("ERR_get_error"); functions.fpERR_error_string_n = LoadSslMethod("ERR_error_string_n"); + } - bool allLoaded = - functions.fpSSL_CTX_new != 0 && - functions.fpSSL_CTX_free != 0 && - functions.fpSSL_CTX_set_verify != 0 && - functions.fpSSL_CTX_set_verify_depth != 0 && - functions.fpSSL_CTX_load_verify_locations != 0 && - functions.fpSSL_CTX_use_certificate_chain_file != 0 && - functions.fpSSL_CTX_use_RSAPrivateKey_file != 0 && - functions.fpSSL_CTX_set_cipher_list != 0 && - functions.fpSSL_get_verify_result != 0 && - functions.fpSSL_library_init != 0 && - functions.fpSSL_load_error_strings != 0 && - functions.fpSSL_get_peer_certificate != 0 && - functions.fpSSL_ctrl != 0 && - functions.fpSSL_CTX_ctrl != 0 && - functions.fpSSLv23_client_method != 0 && - functions.fpSSL_set_connect_state != 0 && - functions.fpSSL_connect != 0 && - functions.fpSSL_get_error != 0 && - functions.fpSSL_want != 0 && - functions.fpSSL_write != 0 && - functions.fpSSL_read != 0 && - functions.fpSSL_pending != 0 && - functions.fpSSL_get_fd 
!= 0 && - functions.fpSSL_free != 0 && - functions.fpBIO_new_ssl_connect != 0 && - functions.fpOPENSSL_config != 0 && - functions.fpX509_free != 0 && - functions.fpBIO_free_all != 0 && - functions.fpBIO_ctrl != 0 && - functions.fpERR_get_error != 0 && - functions.fpERR_error_string_n != 0; - - if (!allLoaded) - { - libeay32.Unload(); - ssleay32.Unload(); - libssl.Unload(); - } + SslGateway& SslGateway::GetInstance() + { + static SslGateway self; - inited = allLoaded; + return self; } - void* SslGateway::LoadSslMethod(const char* name) + void SslGateway::LoadAll() + { + using namespace common::dynamic; + + if (inited) + return; + + common::concurrent::CsLockGuard lock(initCs); + + if (inited) + return; + + common::MethodGuard guard(this, &SslGateway::UnloadAll); + + LoadSslLibraries(); + + LoadMandatoryMethods(); + + functions.fpSSL_CTX_set_options = TryLoadSslMethod("SSL_CTX_set_options"); + functions.fpERR_print_errors_fp = TryLoadSslMethod("ERR_print_errors_fp"); + + (void)SSL_library_init_(); + + SSL_load_error_strings_(); + + OPENSSL_config_(0); + + guard.Release(); + + inited = true; + } + + void* SslGateway::TryLoadSslMethod(const char* name) { void* fp = libeay32.FindSymbol(name); if (!fp) fp = ssleay32.FindSymbol(name); + if (!fp) + fp = libcrypto.FindSymbol(name); + if (!fp) fp = libssl.FindSymbol(name); + return fp; + } + + void* SslGateway::LoadSslMethod(const char* name) + { + void* fp = TryLoadSslMethod(name); + if (!fp) { std::stringstream ss; @@ -231,6 +261,448 @@ namespace ignite return fp; } + + char* SslGateway::SSLeay_version_(int type) + { + typedef char* (FuncType)(int); + + FuncType* fp = 0; + + if (functions.fpSSLeay_version) + fp = reinterpret_cast(functions.fpSSLeay_version); + else + fp = reinterpret_cast(functions.fpOpenSSL_version); + + assert(fp != 0); + + return fp(type); + } + + int SslGateway::OPENSSL_init_ssl_(uint64_t opts, const void* settings) + { + assert(functions.fpOPENSSL_init_ssl != 0); + + typedef int (FuncType)(uint64_t, 
const void*); + + FuncType* fp = reinterpret_cast(functions.fpOPENSSL_init_ssl); + + return fp(opts, settings); + } + + long SslGateway::SSL_CTX_set_options_(SSL_CTX* ctx, long options) + { + if (functions.fpSSL_CTX_set_options) + { + typedef long (FuncType)(SSL_CTX*, long); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_set_options); + + return fp(ctx, options); + } + + return SSL_CTX_ctrl_(ctx, SSL_CTRL_OPTIONS, options, NULL); + } + + long SslGateway::SSL_CTX_ctrl_(SSL_CTX* ctx, int cmd, long larg, void* parg) + { + assert(functions.fpSSL_CTX_ctrl != 0); + + typedef long (FuncType)(SSL_CTX*, int, long, void*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_ctrl); + + return fp(ctx, cmd, larg, parg); + } + + SSL_CTX* SslGateway::SSL_CTX_new_(const SSL_METHOD* meth) + { + assert(functions.fpSSL_CTX_new != 0); + + typedef SSL_CTX*(FuncType)(const SSL_METHOD*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_new); + + return fp(meth); + } + + void SslGateway::SSL_CTX_free_(SSL_CTX* ctx) + { + assert(functions.fpSSL_CTX_free != 0); + + typedef void (FuncType)(SSL_CTX*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_free); + + fp(ctx); + } + + void SslGateway::SSL_CTX_set_verify_(SSL_CTX* ctx, int mode, int (* callback)(int, X509_STORE_CTX*)) + { + assert(functions.fpSSL_CTX_set_verify != 0); + + typedef void (FuncType)(SSL_CTX*, int, int (*)(int, X509_STORE_CTX*)); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_set_verify); + + fp(ctx, mode, callback); + } + + void SslGateway::SSL_CTX_set_verify_depth_(SSL_CTX* ctx, int depth) + { + assert(functions.fpSSL_CTX_set_verify_depth != 0); + + typedef void (FuncType)(SSL_CTX*, int); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_set_verify_depth); + + fp(ctx, depth); + } + + int SslGateway::SSL_CTX_load_verify_locations_(SSL_CTX* ctx, const char* cAfile, const char* cApath) + { + assert(functions.fpSSL_CTX_load_verify_locations != 0); + + typedef int (FuncType)(SSL_CTX*, 
const char*, const char*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_load_verify_locations); + + return fp(ctx, cAfile, cApath); + } + + int SslGateway::SSL_CTX_use_certificate_chain_file_(SSL_CTX* ctx, const char* file) + { + assert(functions.fpSSL_CTX_use_certificate_chain_file != 0); + + typedef int (FuncType)(SSL_CTX*, const char*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_use_certificate_chain_file); + + return fp(ctx, file); + } + + int SslGateway::SSL_CTX_use_RSAPrivateKey_file_(SSL_CTX* ctx, const char* file, int type) + { + assert(functions.fpSSL_CTX_use_RSAPrivateKey_file != 0); + + typedef int (FuncType)(SSL_CTX*, const char*, int); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_use_RSAPrivateKey_file); + + return fp(ctx, file, type); + } + + int SslGateway::SSL_CTX_set_cipher_list_(SSL_CTX* ctx, const char* str) + { + assert(functions.fpSSL_CTX_set_cipher_list != 0); + + typedef int (FuncType)(SSL_CTX*, const char*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_CTX_set_cipher_list); + + return fp(ctx, str); + } + + long SslGateway::SSL_get_verify_result_(const SSL* s) + { + assert(functions.fpSSL_get_verify_result != 0); + + typedef long (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_get_verify_result); + + return fp(s); + } + + int SslGateway::SSL_library_init_() + { + typedef int (FuncType)(); + + if (functions.fpSSL_library_init) + { + FuncType* fp = reinterpret_cast(functions.fpSSL_library_init); + + return fp(); + } + + return OPENSSL_init_ssl_(0, NULL); + } + + void SslGateway::SSL_load_error_strings_() + { + typedef void (FuncType)(); + + if (functions.fpSSL_load_error_strings) + { + FuncType* fp = reinterpret_cast(functions.fpSSL_load_error_strings); + + fp(); + + return; + } + + OPENSSL_init_ssl_(OPENSSL_INIT_LOAD_SSL_STRINGS | OPENSSL_INIT_LOAD_CRYPTO_STRINGS, NULL); + } + + X509* SslGateway::SSL_get_peer_certificate_(const SSL* s) + { + 
assert(functions.fpSSL_get_peer_certificate != 0); + + typedef X509*(FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_get_peer_certificate); + + return fp(s); + } + + long SslGateway::SSL_ctrl_(SSL* s, int cmd, long larg, void* parg) + { + assert(functions.fpSSL_ctrl != 0); + + typedef long (FuncType)(SSL*, int, long, void*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_ctrl); + + return fp(s, cmd, larg, parg); + } + + long SslGateway::SSL_set_tlsext_host_name_(SSL* s, const char* name) + { + return SSL_ctrl_(s, SSL_CTRL_SET_TLSEXT_HOSTNAME, + TLSEXT_NAMETYPE_host_name, const_cast(name)); + } + + void SslGateway::SSL_set_connect_state_(SSL* s) + { + assert(functions.fpSSL_set_connect_state != 0); + + typedef void (FuncType)(SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_set_connect_state); + + return fp(s); + } + + int SslGateway::SSL_connect_(SSL* s) + { + assert(functions.fpSSL_connect != 0); + + typedef int (FuncType)(SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_connect); + + return fp(s); + } + + int SslGateway::SSL_get_error_(const SSL* s, int ret) + { + assert(functions.fpSSL_get_error != 0); + + typedef int (FuncType)(const SSL*, int); + + FuncType* fp = reinterpret_cast(functions.fpSSL_get_error); + + return fp(s, ret); + } + + int SslGateway::SSL_want_(const SSL* s) + { + assert(functions.fpSSL_want != 0); + + typedef int (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_want); + + return fp(s); + } + + int SslGateway::SSL_write_(SSL* s, const void* buf, int num) + { + assert(functions.fpSSL_write != 0); + + typedef int (FuncType)(SSL*, const void*, int); + + FuncType* fp = reinterpret_cast(functions.fpSSL_write); + + return fp(s, buf, num); + } + + int SslGateway::SSL_read_(SSL* s, void* buf, int num) + { + assert(functions.fpSSL_read != 0); + + typedef int (FuncType)(SSL*, void*, int); + + FuncType* fp = reinterpret_cast(functions.fpSSL_read); + + return fp(s, buf, num); 
+ } + + int SslGateway::SSL_pending_(const SSL* ssl) + { + assert(functions.fpSSL_pending != 0); + + typedef int (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_pending); + + return fp(ssl); + } + + int SslGateway::SSL_get_fd_(const SSL* ssl) + { + assert(functions.fpSSL_get_fd != 0); + + typedef int (FuncType)(const SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_get_fd); + + return fp(ssl); + } + + void SslGateway::SSL_free_(SSL* ssl) + { + assert(functions.fpSSL_free != 0); + + typedef void (FuncType)(SSL*); + + FuncType* fp = reinterpret_cast(functions.fpSSL_free); + + fp(ssl); + } + + const SSL_METHOD* SslGateway::SSLv23_client_method_() + { + if (functions.fpSSLv23_client_method) + { + typedef const SSL_METHOD*(FuncType)(); + + FuncType* fp = reinterpret_cast(functions.fpSSLv23_client_method); + + return fp(); + } + + return TLS_client_method_(); + } + + const SSL_METHOD* SslGateway::TLS_client_method_() + { + assert(functions.fpTLS_client_method != 0); + + typedef const SSL_METHOD*(FuncType)(); + + FuncType* fp = reinterpret_cast(functions.fpTLS_client_method); + + return fp(); + } + + void SslGateway::OPENSSL_config_(const char* configName) + { + assert(functions.fpOPENSSL_config != 0); + + typedef void (FuncType)(const char*); + + FuncType* fp = reinterpret_cast(functions.fpOPENSSL_config); + + fp(configName); + } + + void SslGateway::X509_free_(X509* a) + { + assert(functions.fpX509_free != 0); + + typedef void (FuncType)(X509*); + + FuncType* fp = reinterpret_cast(functions.fpX509_free); + + fp(a); + } + + BIO* SslGateway::BIO_new_ssl_connect_(SSL_CTX* ctx) + { + assert(functions.fpBIO_new_ssl_connect != 0); + + typedef BIO*(FuncType)(SSL_CTX*); + + FuncType* fp = reinterpret_cast(functions.fpBIO_new_ssl_connect); + + return fp(ctx); + } + + void SslGateway::BIO_free_all_(BIO* a) + { + assert(functions.fpBIO_free_all != 0); + + typedef void (FuncType)(BIO*); + + FuncType* fp = 
reinterpret_cast(functions.fpBIO_free_all); + + fp(a); + } + + long SslGateway::BIO_ctrl_(BIO* bp, int cmd, long larg, void* parg) + { + assert(functions.fpBIO_ctrl != 0); + + typedef long (FuncType)(BIO*, int, long, void*); + + FuncType* fp = reinterpret_cast(functions.fpBIO_ctrl); + + return fp(bp, cmd, larg, parg); + } + + long SslGateway::BIO_get_fd_(BIO* bp, int* fd) + { + return BIO_ctrl_(bp, BIO_C_GET_FD, 0, reinterpret_cast(fd)); + } + + long SslGateway::BIO_get_ssl_(BIO* bp, SSL** ssl) + { + return BIO_ctrl_(bp, BIO_C_GET_SSL, 0, reinterpret_cast(ssl)); + } + + long SslGateway::BIO_set_nbio_(BIO* bp, long n) + { + return BIO_ctrl_(bp, BIO_C_SET_NBIO, n, NULL); + } + + long SslGateway::BIO_set_conn_hostname_(BIO* bp, const char* name) + { + return BIO_ctrl_(bp, BIO_C_SET_CONNECT, 0, const_cast(name)); + } + + unsigned long SslGateway::ERR_get_error_() + { + assert(functions.fpERR_get_error != 0); + + typedef unsigned long (FuncType)(); + + FuncType* fp = reinterpret_cast(functions.fpERR_get_error); + + return fp(); + } + + void SslGateway::ERR_error_string_n_(unsigned long e, char* buf, size_t len) + { + assert(functions.fpERR_error_string_n != 0); + + typedef void (FuncType)(unsigned long, char*, size_t); + + FuncType* fp = reinterpret_cast(functions.fpERR_error_string_n); + + fp(e, buf, len); + } + + void SslGateway::ERR_print_errors_fp_(FILE* fd) + { + if (!functions.fpERR_print_errors_fp) + return; + + typedef void (FuncType)(FILE*); + + FuncType* fp = reinterpret_cast(functions.fpERR_print_errors_fp); + + fp(fd); + } } } } diff --git a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.h b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.h index 440aaa867c6f9..0908c103581c1 100644 --- a/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.h +++ b/modules/platforms/cpp/thin-client/src/impl/ssl/ssl_gateway.h @@ -18,9 +18,14 @@ #ifndef _IGNITE_IMPL_THIN_SSL_SSL_LIBRARY #define _IGNITE_IMPL_THIN_SSL_SSL_LIBRARY +#include +#include 
+#include + #include #include + namespace ignite { namespace impl @@ -34,6 +39,7 @@ namespace ignite */ struct SslFunctions { + void *fpSSLeay_version; void *fpSSL_CTX_new; void *fpSSL_CTX_free; void *fpSSL_CTX_set_verify; @@ -65,6 +71,12 @@ namespace ignite void *fpBIO_ctrl; void *fpERR_get_error; void *fpERR_error_string_n; + void *fpERR_print_errors_fp; + + void *fpOpenSSL_version; + void *fpSSL_CTX_set_options; + void *fpOPENSSL_init_ssl; + void *fpTLS_client_method; }; /** @@ -104,6 +116,88 @@ namespace ignite return inited; } + char* SSLeay_version_(int type); + + int OPENSSL_init_ssl_(uint64_t opts, const void* settings); + + long SSL_CTX_set_options_(SSL_CTX* ctx, long options); + + long SSL_CTX_ctrl_(SSL_CTX* ctx, int cmd, long larg, void* parg); + + SSL_CTX* SSL_CTX_new_(const SSL_METHOD* meth); + + void SSL_CTX_free_(SSL_CTX* ctx); + + void SSL_CTX_set_verify_(SSL_CTX* ctx, int mode, int (*callback)(int, X509_STORE_CTX*)); + + void SSL_CTX_set_verify_depth_(SSL_CTX* ctx, int depth); + + int SSL_CTX_load_verify_locations_(SSL_CTX* ctx, const char* cAfile, const char* cApath); + + int SSL_CTX_use_certificate_chain_file_(SSL_CTX* ctx, const char* file); + + int SSL_CTX_use_RSAPrivateKey_file_(SSL_CTX* ctx, const char* file, int type); + + int SSL_CTX_set_cipher_list_(SSL_CTX* ctx, const char* str); + + long SSL_get_verify_result_(const SSL* s); + + int SSL_library_init_(); + + void SSL_load_error_strings_(); + + X509* SSL_get_peer_certificate_(const SSL* s); + + long SSL_ctrl_(SSL* s, int cmd, long larg, void* parg); + + long SSL_set_tlsext_host_name_(SSL* s, const char* name); + + void SSL_set_connect_state_(SSL* s); + + int SSL_connect_(SSL* s); + + int SSL_get_error_(const SSL* s, int ret); + + int SSL_want_(const SSL* s); + + int SSL_write_(SSL* s, const void* buf, int num); + + int SSL_read_(SSL* s, void* buf, int num); + + int SSL_pending_(const SSL* ssl); + + int SSL_get_fd_(const SSL* ssl); + + void SSL_free_(SSL* ssl); + + const SSL_METHOD* 
SSLv23_client_method_(); + + const SSL_METHOD* TLS_client_method_(); + + void OPENSSL_config_(const char* configName); + + void X509_free_(X509* a); + + BIO* BIO_new_ssl_connect_(SSL_CTX* ctx); + + void BIO_free_all_(BIO* a); + + long BIO_ctrl_(BIO* bp, int cmd, long larg, void* parg); + + long BIO_get_fd_(BIO* bp, int* fd); + + long BIO_get_ssl_(BIO* bp, SSL** ssl); + + long BIO_set_nbio_(BIO* bp, long n); + + long BIO_set_conn_hostname_(BIO* bp, const char* name); + + unsigned long ERR_get_error_(); + + void ERR_error_string_n_(unsigned long e, char* buf, size_t len); + + void ERR_print_errors_fp_(FILE *fd); + private: /** * Constructor. @@ -115,6 +209,11 @@ namespace ignite */ ~SslGateway(); + /** + * Unload all SSL symbols. + */ + void UnloadAll(); + /** * Load SSL library. * @param name Name. @@ -127,10 +226,28 @@ namespace ignite */ void LoadSslLibraries(); + /** + * Load mandatory SSL methods. + * + * @throw IgniteError if can not load one of the functions. + */ + void LoadMandatoryMethods(); + + /** + * Try load SSL method. + * + * @param name Name. + * @return Method pointer. + */ + void* TryLoadSslMethod(const char* name); + /** * Load SSL method. + * * @param name Name. * @return Method pointer. + * + * @throw IgniteError if the method is not present. */ void* LoadSslMethod(const char* name); @@ -146,6 +263,9 @@ namespace ignite /** ssleay32 module. */ common::dynamic::Module ssleay32; + /** libcrypto module. */ + common::dynamic::Module libcrypto; + /** libssl module. 
*/ common::dynamic::Module libssl; diff --git a/modules/platforms/dotnet/Apache.Ignite.AspNet.Tests/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.AspNet.Tests/Properties/AssemblyInfo.cs index 17a9bbb80cf41..4c0b668787a69 100644 --- a/modules/platforms/dotnet/Apache.Ignite.AspNet.Tests/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.AspNet.Tests/Properties/AssemblyInfo.cs @@ -27,7 +27,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.AspNet/Apache.Ignite.AspNet.nuspec b/modules/platforms/dotnet/Apache.Ignite.AspNet/Apache.Ignite.AspNet.nuspec index 1bc204ab28f5a..e380b91784e29 100644 --- a/modules/platforms/dotnet/Apache.Ignite.AspNet/Apache.Ignite.AspNet.nuspec +++ b/modules/platforms/dotnet/Apache.Ignite.AspNet/Apache.Ignite.AspNet.nuspec @@ -45,7 +45,7 @@ Session State Store Provider: stores session state data in a distributed in-memo More info: https://apacheignite-net.readme.io/ - Copyright 2018 + Copyright 2020 OutputCacheProvider Apache Ignite In-Memory Distributed Computing SQL NoSQL Grid Map Reduce Cache diff --git a/modules/platforms/dotnet/Apache.Ignite.AspNet/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.AspNet/Properties/AssemblyInfo.cs index 25646bd53a14f..fba4047ee7f16 100644 --- a/modules/platforms/dotnet/Apache.Ignite.AspNet/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.AspNet/Properties/AssemblyInfo.cs @@ -25,7 +25,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: 
AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.Benchmarks/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.Benchmarks/Properties/AssemblyInfo.cs index 369f4df64959d..191bba9854476 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Benchmarks/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Benchmarks/Properties/AssemblyInfo.cs @@ -23,7 +23,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/Apache.Ignite.Core.Tests.DotNetCore.csproj b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/Apache.Ignite.Core.Tests.DotNetCore.csproj index e27f8b779fe0c..2ec0bdb905951 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/Apache.Ignite.Core.Tests.DotNetCore.csproj +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/Apache.Ignite.Core.Tests.DotNetCore.csproj @@ -42,6 +42,7 @@ + @@ -107,7 +108,7 @@ - + @@ -184,7 +185,7 @@ - + diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/tde.jks b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/tde.jks new file mode 100644 index 0000000000000..1bf532c292dec Binary files /dev/null and b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/tde.jks differ diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests.NuGet/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.NuGet/Properties/AssemblyInfo.cs index 3b49386bf27ab..b14d47c71fa05 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests.NuGet/Properties/AssemblyInfo.cs +++ 
b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.NuGet/Properties/AssemblyInfo.cs @@ -23,7 +23,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests.TestDll/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.TestDll/Properties/AssemblyInfo.cs index 5a18026911dae..bd0469e7c98e6 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests.TestDll/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests.TestDll/Properties/AssemblyInfo.cs @@ -23,7 +23,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Apache.Ignite.Core.Tests.csproj b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Apache.Ignite.Core.Tests.csproj index aa58afc110e68..77b68baec172d 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Apache.Ignite.Core.Tests.csproj +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Apache.Ignite.Core.Tests.csproj @@ -94,6 +94,7 @@ + @@ -115,6 +116,7 @@ + diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/CacheParityTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/CacheParityTest.cs index 7548740c04711..d0a103fabd6a8 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/CacheParityTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/CacheParityTest.cs @@ -57,7 +57,10 @@ public 
class CacheParityTest "sizeLongAsync", // IGNITE-6563 "localSizeLong", // IGNITE-6563 "enableStatistics", // IGNITE-7276 - "clearStatistics" // IGNITE-9017 + "clearStatistics", // IGNITE-9017 + "preloadPartition", // IGNITE-9998 + "preloadPartitionAsync", // IGNITE-9998 + "localPreloadPartition", // IGNITE-9998 }; /// diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/IgniteConfigurationParityTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/IgniteConfigurationParityTest.cs index 5b4106a6b142b..a50d48a64a32c 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/IgniteConfigurationParityTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/IgniteConfigurationParityTest.cs @@ -64,7 +64,8 @@ public class IgniteConfigurationParityTest "CacheStoreSessionListenerFactories", "PlatformConfiguration", "ExecutorConfiguration", - "CommunicationFailureResolver" + "CommunicationFailureResolver", + "EncryptionSpi" }; /** Properties that are missing on .NET side. */ @@ -81,8 +82,7 @@ public class IgniteConfigurationParityTest "TimeServerPortRange", "IncludeProperties", "isAutoActivationEnabled", // IGNITE-7301 - "MvccVacuumFrequency", //TODO: IGNITE-9390: Remove when Mvcc support will be added. - "MvccVacuumThreadCount" //TODO: IGNITE-9390: Remove when Mvcc support will be added. 
+ "NetworkCompressionLevel" }; + /// @@ -99,4 +99,4 @@ public void TestIgniteConfiguration() KnownMappings); } } -} \ No newline at end of file +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/TcpCommunicationSpiParityTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/TcpCommunicationSpiParityTest.cs new file mode 100644 index 0000000000000..be8bd11af6487 --- /dev/null +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/TcpCommunicationSpiParityTest.cs @@ -0,0 +1,84 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +namespace Apache.Ignite.Core.Tests.ApiParity +{ + using System.Collections.Generic; + using Apache.Ignite.Core.Cache.Configuration; + using Apache.Ignite.Core.Communication.Tcp; + using NUnit.Framework; + + /// <summary> + /// Tests that .NET has all properties from Java configuration APIs. + /// </summary> + public class TcpCommunicationSpiParityTest + { + /** Known property name mappings. */ + private static readonly Dictionary<string, string> KnownMappings = new Dictionary<string, string>() + { + {"SocketReceiveBuffer", "SocketReceiveBufferSize"}, + {"SocketSendBuffer", "SocketSendBufferSize"} + }; + + /** Properties that are not needed on .NET side.
*/ + private static readonly string[] UnneededProperties = + { + // Java-specific. + "AddressResolver", + "Listener", + "run", + "ReceivedMessagesByType", + "ReceivedMessagesByNode", + "SentMessagesByType", + "SentMessagesByNode", + "SentMessagesCount", + "SentBytesCount", + "ReceivedMessagesCount", + "ReceivedBytesCount", + "OutboundMessagesQueueSize", + "resetMetrics", + "dumpStats", + "boundPort", + "SpiContext", + "simulateNodeFailure", + "cancel", + "order", + "onTimeout", + "endTime", + "id", + "connectionIndex", + "NodeFilter" + }; + + /** Properties that are missing on .NET side. */ + private static readonly string[] MissingProperties = {}; + + /// <summary> + /// Tests the cache configuration parity. + /// </summary> + [Test] + public void TestTcpCommunicationSpi() + { + ParityTest.CheckConfigurationParity( + @"modules\core\src\main\java\org\apache\ignite\spi\communication\tcp\TcpCommunicationSpi.java", + typeof(TcpCommunicationSpi), + UnneededProperties, + MissingProperties, + KnownMappings); + } + } +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/AdvancedSerializationTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/AdvancedSerializationTest.cs index 4a1092236e4a9..d697e17085223 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/AdvancedSerializationTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/AdvancedSerializationTest.cs @@ -96,7 +96,7 @@ private static void CheckTask(IIgnite grid, object arg) Assert.AreEqual(expectedRes, jobResult.InnerXml); } -#if !NETCOREAPP2_0 // AppDomains are not supported in .NET Core +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 // AppDomains are not supported in .NET Core /// /// Tests custom serialization binder.
/// @@ -111,7 +111,7 @@ public void TestSerializationBinder() for (var i = 0; i < count; i++) { dynamic val = Activator.CreateInstance(GenerateDynamicType()); - + val.Id = i; val.Name = "Name_" + i; @@ -143,7 +143,7 @@ private static Type GenerateDynamicType() TypeAttributes.Class | TypeAttributes.Public | TypeAttributes.Serializable); typeBuilder.DefineField("Id", typeof (int), FieldAttributes.Public); - + typeBuilder.DefineField("Name", typeof (string), FieldAttributes.Public); return typeBuilder.CreateType(); @@ -258,4 +258,4 @@ public void Cancel() // No-op. } } -} \ No newline at end of file +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/SqlDmlTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/SqlDmlTest.cs index 0ffd06807d923..005488ae31fb5 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/SqlDmlTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Binary/Serializable/SqlDmlTest.cs @@ -38,7 +38,7 @@ public class SqlDmlTest { /** */ private IIgnite _ignite; - + /** */ private StringBuilder _outSb; @@ -109,7 +109,7 @@ public void TestSimpleSerializable() var guid = Guid.NewGuid(); var insertRes = cache.Query(new SqlFieldsQuery( "insert into SimpleSerializable(_key, Byte, Bool, Short, Int, Long, Float, Double, " + - "Decimal, Guid, String) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)", + "Decimal, Guid, String) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)", 3, 45, true, 43, 33, 99, 4.5f, 6.7, 9.04m, guid, "bar33")).GetAll(); Assert.AreEqual(1, insertRes.Count); @@ -163,7 +163,7 @@ public void TestDotNetSpecificSerializable() Assert.AreEqual("Value was either too large or too small for a UInt32.", ex.Message); } -#if !NETCOREAPP2_0 // Console redirect issues on .NET Core +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 // Console redirect issues on .NET Core /// /// Tests the log warning. 
/// @@ -196,25 +196,25 @@ private class SimpleSerializable : ISerializable [QuerySqlField] public short Short { get; set; } - + [QuerySqlField] public int Int { get; set; } - + [QuerySqlField] public long Long { get; set; } - + [QuerySqlField] public float Float { get; set; } - + [QuerySqlField] public double Double { get; set; } - + [QuerySqlField] public decimal Decimal { get; set; } - + [QuerySqlField] public Guid Guid { get; set; } - + [QuerySqlField] public string String { get; set; } diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CacheAbstractTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CacheAbstractTest.cs index 02ed39d4014ea..3c74ce4baba35 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CacheAbstractTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CacheAbstractTest.cs @@ -39,7 +39,7 @@ namespace Apache.Ignite.Core.Tests.Cache /// Base cache test. /// [SuppressMessage("ReSharper", "UnusedVariable")] - public abstract class CacheAbstractTest + public abstract class CacheAbstractTest { /// /// Fixture setup. 
@@ -91,7 +91,7 @@ public void BeforeTest() /// [TearDown] public void AfterTest() { - for (int i = 0; i < GridCount(); i++) + for (int i = 0; i < GridCount(); i++) Cache(i).WithKeepBinary().RemoveAll(); for (int i = 0; i < GridCount(); i++) @@ -213,7 +213,7 @@ public void TestContainsKey() Assert.IsTrue(cache.ContainsKey(key)); Assert.IsFalse(cache.ContainsKey(-1)); } - + [Test] public void TestContainsKeys() { @@ -229,7 +229,7 @@ public void TestContainsKeys() Assert.IsFalse(cache.ContainsKeys(keys.Concat(new[] {int.MaxValue}))); } - + [Test] public void TestPeek() { @@ -260,11 +260,11 @@ public void TestGet() Assert.AreEqual(1, cache.Get(1)); Assert.AreEqual(2, cache.Get(2)); - + Assert.Throws(() => cache.Get(3)); int value; - + Assert.IsTrue(cache.TryGet(1, out value)); Assert.AreEqual(1, value); @@ -282,7 +282,7 @@ public void TestGetAsync() cache.Put(1, 1); cache.Put(2, 2); - + Assert.AreEqual(1, cache.Get(1)); Assert.AreEqual(2, cache.Get(2)); Assert.IsFalse(cache.ContainsKey(3)); @@ -376,11 +376,11 @@ public void TestGetAndRemove() Assert.AreEqual(1, cache.Get(1)); Assert.IsFalse(cache.GetAndRemove(0).Success); - + Assert.AreEqual(1, cache.GetAndRemove(1).Value); Assert.IsFalse(cache.GetAndRemove(1).Success); - + Assert.IsFalse(cache.ContainsKey(1)); } @@ -653,10 +653,9 @@ public void TestPutAll([Values(true, false)] bool async) /// Expiry policy tests. /// [Test] + [Ignore("https://issues.apache.org/jira/browse/IGNITE-8983")] public void TestWithExpiryPolicy() { - Assert.Fail("https://issues.apache.org/jira/browse/IGNITE-8983"); - TestWithExpiryPolicy((cache, policy) => cache.WithExpiryPolicy(policy), true); } @@ -680,11 +679,11 @@ public void TestCacheConfigurationExpiryPolicy() /// /// Expiry policy tests. 
/// - private void TestWithExpiryPolicy(Func, IExpiryPolicy, ICache> withPolicyFunc, + private void TestWithExpiryPolicy(Func, IExpiryPolicy, ICache> withPolicyFunc, bool origCache) { ICache cache0 = Cache(0); - + int key0; int key1; @@ -698,7 +697,7 @@ private void TestWithExpiryPolicy(Func, IExpiryPolicy, ICache cache = withPolicyFunc(cache0, new ExpiryPolicy(null, null, null)); cache0 = origCache ? cache0 : cache; @@ -787,7 +786,7 @@ private void TestWithExpiryPolicy(Func, IExpiryPolicy, ICache /// Expiry policy tests for zero and negative expiry values. /// @@ -796,7 +795,7 @@ private void TestWithExpiryPolicy(Func, IExpiryPolicy, ICache cache0 = Cache(0); - + int key0; int key1; @@ -838,7 +837,7 @@ public void TestWithExpiryPolicyZeroNegative() cache0.RemoveAll(new List { key0, key1 }); // Test negative expiration. - cache = cache0.WithExpiryPolicy(new ExpiryPolicy(TimeSpan.FromMilliseconds(-100), + cache = cache0.WithExpiryPolicy(new ExpiryPolicy(TimeSpan.FromMilliseconds(-100), TimeSpan.FromMilliseconds(-100), TimeSpan.FromMilliseconds(-100))); cache.Put(key0, key0); @@ -1714,7 +1713,7 @@ public void TestThreadLocalLeak() Assert.IsNull(err); } - + /** * Test tries to provoke garbage collection for .Net future before it was completed to verify * futures pinning works. 
@@ -2041,7 +2040,7 @@ public void TestNearKeys() foreach (var nearKey in nearKeys.Take(3)) Assert.AreNotEqual(0, cache.Get(nearKey)); } - + [Test] public void TestSerializable() { @@ -2180,7 +2179,7 @@ private void TestInvokeAll(bool async) var results = res.OrderBy(x => x.Key).Select(x => x.Result); var expectedResults = entries.OrderBy(x => x.Key).Select(x => x.Value + arg); - + Assert.IsTrue(results.SequenceEqual(expectedResults)); var resultEntries = cache.GetAll(entries.Keys); @@ -2197,20 +2196,20 @@ private void TestInvokeAll(bool async) res = cache.InvokeAll(entries.Keys, new T {Exists = false}, arg); Assert.IsTrue(res.All(x => x.Result == arg)); - Assert.IsTrue(cache.GetAll(entries.Keys).All(x => x.Value == arg)); + Assert.IsTrue(cache.GetAll(entries.Keys).All(x => x.Value == arg)); // Test exceptions var errKey = entries.Keys.Reverse().Take(5).Last(); TestInvokeAllException(cache, entries, new T { ThrowErr = true, ThrowOnKey = errKey }, arg, errKey); - TestInvokeAllException(cache, entries, new T { ThrowErrBinarizable = true, ThrowOnKey = errKey }, + TestInvokeAllException(cache, entries, new T { ThrowErrBinarizable = true, ThrowOnKey = errKey }, arg, errKey); TestInvokeAllException(cache, entries, new T { ThrowErrNonSerializable = true, ThrowOnKey = errKey }, arg, errKey, "ExpectedException"); } - private static void TestInvokeAllException(ICache cache, Dictionary entries, + private static void TestInvokeAllException(ICache cache, Dictionary entries, T processor, int arg, int errKey, string exceptionText = null) where T : AddArgCacheEntryProcessor { var res = cache.InvokeAll(entries.Keys, processor, arg); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CachePartitionedTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CachePartitionedTest.cs index 68546b9ebb1b1..c9100b223aaa3 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CachePartitionedTest.cs +++ 
b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/CachePartitionedTest.cs @@ -17,6 +17,9 @@ namespace Apache.Ignite.Core.Tests.Cache { + using Apache.Ignite.Core.Cache; + using Apache.Ignite.Core.Cache.Configuration; + using Apache.Ignite.Core.Transactions; using NUnit.Framework; [Category(TestUtils.CategoryIntensive)] @@ -41,5 +44,39 @@ protected override int Backups() { return 1; } + + /// <summary> + /// Test MVCC transaction. + /// </summary> + [Test] + public void TestMvccTransaction() + { + IIgnite ignite = GetIgnite(0); + + ICache<int, int> cache = ignite.GetOrCreateCache<int, int>(new CacheConfiguration + { + Name = "mvcc", + AtomicityMode = CacheAtomicityMode.TransactionalSnapshot + }); + + ITransaction tx = ignite.GetTransactions().TxStart(); + + cache.Put(1, 1); + cache.Put(2, 2); + + tx.Commit(); + + Assert.AreEqual(1, cache.Get(1)); + Assert.AreEqual(2, cache.Get(2)); + + tx = ignite.GetTransactions().TxStart(); + + Assert.AreEqual(1, cache.Get(1)); + Assert.AreEqual(2, cache.Get(2)); + + tx.Commit(); + + ignite.DestroyCache("mvcc"); + } } } diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/DataRegionMetricsTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/DataRegionMetricsTest.cs index 15cf4dbe42759..474a867107338 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/DataRegionMetricsTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/DataRegionMetricsTest.cs @@ -46,7 +46,7 @@ public void TestMemoryMetrics() // Verify metrics. var metrics = ignite.GetDataRegionMetrics().OrderBy(x => x.Name).ToArray(); - Assert.AreEqual(3, metrics.Length); // two defined plus system. + Assert.AreEqual(4, metrics.Length); // two defined plus system and plus TxLog.
var emptyMetrics = metrics[0]; Assert.AreEqual(RegionNoMetrics, emptyMetrics.Name); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/MemoryMetricsTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/MemoryMetricsTest.cs index 92f9b4d979145..352c1e9dfaa61 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/MemoryMetricsTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/MemoryMetricsTest.cs @@ -44,7 +44,7 @@ public void TestMemoryMetrics() // Verify metrics. var metrics = ignite.GetMemoryMetrics().OrderBy(x => x.Name).ToArray(); - Assert.AreEqual(3, metrics.Length); // two defined plus system. + Assert.AreEqual(4, metrics.Length); // two defined plus system and plus TxLog. var emptyMetrics = metrics[0]; Assert.AreEqual(MemoryPolicyNoMetrics, emptyMetrics.Name); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/CacheQueriesTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/CacheQueriesTest.cs index ceeeb374707f8..ef5623ea8bc27 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/CacheQueriesTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/CacheQueriesTest.cs @@ -350,7 +350,9 @@ public void TestSqlQuery([Values(true, false)] bool loc, [Values(true, false)] b var qry = new SqlQuery(typeof(QueryPerson), "age < 50", loc) { EnableDistributedJoins = distrJoin, +#pragma warning disable 618 ReplicatedOnly = false, +#pragma warning restore 618 Timeout = TimeSpan.FromSeconds(3) }; @@ -381,7 +383,9 @@ public void TestSqlFieldsQuery([Values(true, false)] bool loc, [Values(true, fal EnableDistributedJoins = distrJoin, EnforceJoinOrder = enforceJoinOrder, Colocated = !distrJoin, +#pragma warning disable 618 ReplicatedOnly = false, +#pragma warning restore 618 Local = loc, Timeout = TimeSpan.FromSeconds(2), Lazy = lazy diff --git 
a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/CacheQueriesWithRestartServerTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/CacheQueriesWithRestartServerTest.cs new file mode 100644 index 0000000000000..1e64667bb596a --- /dev/null +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/CacheQueriesWithRestartServerTest.cs @@ -0,0 +1,154 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +namespace Apache.Ignite.Core.Tests.Cache.Query +{ + using System.Linq; + using System.Threading; + using Apache.Ignite.Core.Binary; + using Apache.Ignite.Core.Cache; + using Apache.Ignite.Core.Cache.Query; + using NUnit.Framework; + + /// + /// Tests queries behavior with client reconnect and server restart. + /// + public sealed class CacheQueriesRestartServerTest + { + /** */ + private IIgnite _client; + + /** */ + private IIgnite _server; + + /// + /// Sets up the fixture. + /// + [TestFixtureSetUp] + public void FixtureSetUp() + { + _server = StartGrid(0); + _client = StartGrid(0, true); + + TestUtils.WaitForCondition(() => _server.GetCluster().GetNodes().Count == 2, 1000); + } + + /// + /// Tears down the fixture. 
+ /// + [TestFixtureTearDown] + public void FixtureTearDown() + { + Ignition.StopAll(true); + } + + /// + /// Tests that Scan query works after client reconnect with full cluster restart. + /// + [Test] + public void Test_ScanQueryAfterClientReconnect_ReturnsResults([Values(true, false)] bool emptyFilterObject) + { + var cache = _client.GetOrCreateCache("Test"); + cache.Put(1, new Item { Id = 20, Title = "test" }); + + Ignition.Stop(_server.Name, false); + + var evt = new ManualResetEventSlim(false); + + _client.ClientReconnected += (sender, args) => evt.Set(); + + _server = StartGrid(0); + + var restarted = evt.Wait(10000); + Assert.IsTrue(restarted); + + cache = _client.GetOrCreateCache("Test"); + cache.Put(1, new Item { Id = 30, Title = "test" }); + + var filter = emptyFilterObject + ? (ICacheEntryFilter) new TestFilter() + : new TestFilterWithField {TestValue = 9}; + + var cursor = cache.Query(new ScanQuery(filter)); + var items = cursor.GetAll(); + + Assert.AreEqual(30, items.Single().Value.Id); + } + + /// + /// Starts the grid. + /// + private static IIgnite StartGrid(int i, bool client = false) + { + return Ignition.Start(new IgniteConfiguration(TestUtils.GetTestConfiguration()) + { + ClientMode = client, + IgniteInstanceName = client ? "client-" + i : "grid-" + i + }); + } + + /// + /// Test filter. + /// + private class TestFilter : ICacheEntryFilter + { + /** */ + public bool Invoke(ICacheEntry entry) + { + return entry.Value.Id > 10; + } + } + + /// + /// Test filter with field. 
+ /// + private class TestFilterWithField : ICacheEntryFilter + { + /** */ + public int TestValue { get; set; } + + /** */ + public bool Invoke(ICacheEntry entry) + { + return entry.Value.Id > TestValue; + } + } + + private class Item : IBinarizable + { + /** */ + public int Id { get; set; } + + /** */ + public string Title { get; set; } + + /** */ + public void WriteBinary(IBinaryWriter writer) + { + writer.WriteInt("Id", Id); + writer.WriteString("Title", Title); + } + + /** */ + public void ReadBinary(IBinaryReader reader) + { + Id = reader.ReadInt("Id"); + Title = reader.ReadString("Title"); + } + } + } +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Introspection.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Introspection.cs index f5b5baa4f3569..77f79f47d9c60 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Introspection.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Introspection.cs @@ -52,7 +52,9 @@ public void TestIntrospection() PageSize = 999, EnforceJoinOrder = true, Timeout = TimeSpan.FromSeconds(2.5), +#pragma warning disable 618 ReplicatedOnly = true, +#pragma warning restore 618 Colocated = true, Lazy = true }).Where(x => x.Key > 10).ToCacheQueryable(); @@ -76,7 +78,9 @@ public void TestIntrospection() Assert.AreEqual(999, fq.PageSize); Assert.IsFalse(fq.EnableDistributedJoins); Assert.IsTrue(fq.EnforceJoinOrder); +#pragma warning disable 618 Assert.IsTrue(fq.ReplicatedOnly); +#pragma warning restore 618 Assert.IsTrue(fq.Colocated); Assert.AreEqual(TimeSpan.FromSeconds(2.5), fq.Timeout); Assert.IsTrue(fq.Lazy); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Misc.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Misc.cs index 8aaa7b8a80002..0cc6e5d158bce 100644 --- 
a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Misc.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Misc.cs @@ -30,6 +30,7 @@ namespace Apache.Ignite.Core.Tests.Cache.Query.Linq using System.Linq; using Apache.Ignite.Core.Cache; using Apache.Ignite.Core.Cache.Configuration; + using Apache.Ignite.Core.Common; using Apache.Ignite.Linq; using NUnit.Framework; @@ -342,7 +343,7 @@ public void TestTimeout() }); // ReSharper disable once ReturnValueOfPureMethodIsNotUsed - var ex = Assert.Throws(() => + var ex = Assert.Throws(() => persons.SelectMany(p => GetRoleCache().AsCacheQueryable()).ToArray()); Assert.IsTrue(ex.ToString().Contains("QueryCancelledException: The query was cancelled while executing.")); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Strings.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Strings.cs index cb89a5ba06d97..628f35cf6e711 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Strings.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Cache/Query/Linq/CacheLinqTest.Strings.cs @@ -66,7 +66,7 @@ public void TestStrings() CheckFunc(x => x.Trim(), strings); -#if !NETCOREAPP2_0 // Trim is not supported on .NET Core +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 // Trim is not supported on .NET Core CheckFunc(x => x.Trim('P'), strings); var toTrim = new[] { 'P' }; CheckFunc(x => x.Trim(toTrim), strings); @@ -89,9 +89,9 @@ public void TestStrings() CheckFunc(x => Regex.Replace(x, @"son.\d", "kele!", RegexOptions.None), strings); CheckFunc(x => Regex.Replace(x, @"person.\d", "akele!", RegexOptions.IgnoreCase), strings); CheckFunc(x => Regex.Replace(x, @"person.\d", "akele!", RegexOptions.Multiline), strings); - CheckFunc(x => Regex.Replace(x, @"person.\d", "akele!", RegexOptions.IgnoreCase | RegexOptions.Multiline), + CheckFunc(x => Regex.Replace(x, 
@"person.\d", "akele!", RegexOptions.IgnoreCase | RegexOptions.Multiline), strings); - var notSupportedException = Assert.Throws<NotSupportedException>(() => CheckFunc(x => + var notSupportedException = Assert.Throws<NotSupportedException>(() => CheckFunc(x => Regex.IsMatch(x, @"^person\d", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant), strings)); Assert.AreEqual("RegexOptions.CultureInvariant is not supported", notSupportedException.Message); @@ -100,7 +100,7 @@ public void TestStrings() CheckFunc(x => Regex.IsMatch(x, @"^person_9\d", RegexOptions.IgnoreCase), strings); CheckFunc(x => Regex.IsMatch(x, @"^Person_9\d", RegexOptions.Multiline), strings); CheckFunc(x => Regex.IsMatch(x, @"^person_9\d", RegexOptions.IgnoreCase | RegexOptions.Multiline), strings); - notSupportedException = Assert.Throws<NotSupportedException>(() => CheckFunc(x => + notSupportedException = Assert.Throws<NotSupportedException>(() => CheckFunc(x => Regex.IsMatch(x, @"^person_9\d",RegexOptions.IgnoreCase | RegexOptions.CultureInvariant), strings)); Assert.AreEqual("RegexOptions.CultureInvariant is not supported", notSupportedException.Message); @@ -114,4 +114,4 @@ public void TestStrings() CheckFunc(x => x + 10, strings); } } -} \ No newline at end of file +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/CacheTestSsl.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/CacheTestSsl.cs index 0f55ce5e71b2b..f6742320df581 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/CacheTestSsl.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/CacheTestSsl.cs @@ -54,7 +54,7 @@ protected override IgniteClientConfiguration GetClientConfiguration() CertificatePassword = "123456", SkipServerCertificateValidation = true, CheckCertificateRevocation = true, -#if !NETCOREAPP2_0 +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 SslProtocols = SslProtocols.Tls #else SslProtocols = SslProtocols.Tls12 diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/ScanQueryTest.cs
b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/ScanQueryTest.cs index 71c8f0fdf65f8..18ced28812af7 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/ScanQueryTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/Cache/ScanQueryTest.cs @@ -124,7 +124,7 @@ public void TestWithFilter() var single = clientCache.Query(new ScanQuery<int, Person>(new PersonKeyFilter(3))).Single(); Assert.AreEqual(3, single.Key); -#if !NETCOREAPP2_0 // Serializing delegates is not supported on this platform. +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 // Serializing delegates is not supported on this platform. // Multiple results. var res = clientCache.Query(new ScanQuery<int, Person>(new PersonFilter(x => x.Name.Length == 1))) .ToList(); @@ -157,7 +157,7 @@ public void TestWithFilterBinary() } -#if !NETCOREAPP2_0 // Serializing delegates and exceptions is not supported on this platform. +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 // Serializing delegates and exceptions is not supported on this platform. /// <summary> /// Tests the exception in filter. /// </summary> diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/IgniteClientConfigurationTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/IgniteClientConfigurationTest.cs index 3d55f4c896fbd..a1131939fbcbf 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/IgniteClientConfigurationTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Client/IgniteClientConfigurationTest.cs @@ -196,7 +196,7 @@ public void TestStartFromAppConfig() } } -#if !NETCOREAPP2_0 +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 /// /// Tests the schema validation. 
/// diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Compute/ComputeApiTest.JavaTask.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Compute/ComputeApiTest.JavaTask.cs index 1f5c3a3c82e6b..be0a203747a16 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Compute/ComputeApiTest.JavaTask.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Compute/ComputeApiTest.JavaTask.cs @@ -316,7 +316,7 @@ public void TestEchoTaskBinarizable() Assert.AreEqual(val, binRes.GetField("Field")); -#if !NETCOREAPP2_0 +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 var dotNetBin = _grid1.GetBinary().ToBinary(res); Assert.AreEqual(dotNetBin.Header.HashCode, ((BinaryObject)binRes).Header.HashCode); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Config/full-config.xml b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Config/full-config.xml index b091a497b4ddd..4aa5190e2ce3d 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Config/full-config.xml +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Config/full-config.xml @@ -23,7 +23,7 @@ workDirectory='c:' JvmMaxMemoryMb='1024' MetricsLogFrequency='0:0:10' isDaemon='true' isLateAffinityAssignment='false' springConfigUrl='c:\myconfig.xml' autoGenerateIgniteInstanceName='true' peerAssemblyLoadingMode='CurrentAppDomain' longQueryWarningTimeout='1:2:3' isActiveOnStart='false' - consistentId='someId012' redirectJavaConsoleOutput='false' authenticationEnabled='true'> + consistentId='someId012' redirectJavaConsoleOutput='false' authenticationEnabled='true' mvccVacuumFrequency='10000' mvccVacuumThreadCount='4'> 127.1.1.1 diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ConsoleRedirectTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ConsoleRedirectTest.cs index 2e389c3e7af13..a0727e472abaa 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ConsoleRedirectTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ConsoleRedirectTest.cs @@ -80,7 
+80,7 @@ public void TestStartupOutput() { using (Ignition.Start(TestUtils.GetTestConfiguration())) { - Assert.IsTrue(_outSb.ToString().Contains("[ver=1, servers=1, clients=0,")); + Assert.AreEqual(1, Regex.Matches(_outSb.ToString(), "ver=1, locNode=[a-fA-F0-9]{8,8}, servers=1, clients=0,").Count); } } @@ -152,7 +152,7 @@ public void TestMultipleDomains() { using (var ignite = Ignition.Start(TestUtils.GetTestConfiguration())) { - Assert.IsTrue(_outSb.ToString().Contains("[ver=1, servers=1, clients=0,")); + Assert.AreEqual(1, Regex.Matches(_outSb.ToString(), "ver=1, locNode=[a-fA-F0-9]{8,8}, servers=1, clients=0,").Count); // Run twice RunInNewDomain(); @@ -166,10 +166,10 @@ public void TestMultipleDomains() Assert.AreEqual(4, Regex.Matches(outTxt, ">>> Ignite instance name: newDomainGrid").Count); // Both domains produce the topology snapshot on node enter - Assert.AreEqual(2, Regex.Matches(outTxt, "ver=2, servers=2, clients=0,").Count); - Assert.AreEqual(1, Regex.Matches(outTxt, "ver=3, servers=1, clients=0,").Count); - Assert.AreEqual(2, Regex.Matches(outTxt, "ver=4, servers=2, clients=0,").Count); - Assert.AreEqual(1, Regex.Matches(outTxt, "ver=5, servers=1, clients=0,").Count); + Assert.AreEqual(2, Regex.Matches(outTxt, "ver=2, locNode=[a-fA-F0-9]{8,8}, servers=2, clients=0,").Count); + Assert.AreEqual(1, Regex.Matches(outTxt, "ver=3, locNode=[a-fA-F0-9]{8,8}, servers=1, clients=0,").Count); + Assert.AreEqual(2, Regex.Matches(outTxt, "ver=4, locNode=[a-fA-F0-9]{8,8}, servers=2, clients=0,").Count); + Assert.AreEqual(1, Regex.Matches(outTxt, "ver=5, locNode=[a-fA-F0-9]{8,8}, servers=1, clients=0,").Count); } } diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/FailureHandlerTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/FailureHandlerTest.cs index 7c447b180942b..a8aff71c63804 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/FailureHandlerTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/FailureHandlerTest.cs 
@@ -71,6 +71,7 @@ public void TearDown() /// Tests /// [Test] + [Ignore("IGNITE-10364")] public void TestStopNodeFailureHandler() { TestFailureHandler(typeof(StopNodeFailureHandler)); @@ -80,6 +81,7 @@ public void TestStopNodeFailureHandler() /// Tests /// [Test] + [Ignore("IGNITE-10364")] public void TestStopNodeOrHaltFailureHandler() { TestFailureHandler(typeof(StopNodeOrHaltFailureHandler)); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationSerializerTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationSerializerTest.cs index e2ece20a66f37..ec9d4fdbcc890 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationSerializerTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationSerializerTest.cs @@ -100,6 +100,9 @@ public void TestPredefinedXml() Assert.IsFalse(cfg.IsActiveOnStart); Assert.IsTrue(cfg.AuthenticationEnabled); + Assert.AreEqual(10000, cfg.MvccVacuumFrequency); + Assert.AreEqual(4, cfg.MvccVacuumThreadCount); + Assert.IsNotNull(cfg.SqlSchemas); Assert.AreEqual(2, cfg.SqlSchemas.Count); Assert.IsTrue(cfg.SqlSchemas.Contains("SCHEMA_1")); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationTest.cs index a03d09cd1cfef..f0f3b7cf956c5 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/IgniteConfigurationTest.cs @@ -36,6 +36,7 @@ namespace Apache.Ignite.Core.Tests using Apache.Ignite.Core.Discovery.Tcp; using Apache.Ignite.Core.Discovery.Tcp.Multicast; using Apache.Ignite.Core.Discovery.Tcp.Static; + using Apache.Ignite.Core.Encryption.Keystore; using Apache.Ignite.Core.Events; using Apache.Ignite.Core.Impl.Common; using Apache.Ignite.Core.PersistentStore; @@ -81,6 +82,7 @@ public void TestDefaultValueAttributes() CheckDefaultValueAttributes(new 
IgniteConfiguration()); CheckDefaultValueAttributes(new BinaryConfiguration()); CheckDefaultValueAttributes(new TcpDiscoverySpi()); + CheckDefaultValueAttributes(new KeystoreEncryptionSpi()); CheckDefaultValueAttributes(new CacheConfiguration()); CheckDefaultValueAttributes(new TcpDiscoveryMulticastIpFinder()); CheckDefaultValueAttributes(new TcpCommunicationSpi()); @@ -132,6 +134,14 @@ public void TestAllConfigurationProperties() Assert.AreEqual(disco.ThreadPriority, resDisco.ThreadPriority); Assert.AreEqual(disco.TopologyHistorySize, resDisco.TopologyHistorySize); + var enc = (KeystoreEncryptionSpi) cfg.EncryptionSpi; + var resEnc = (KeystoreEncryptionSpi) resCfg.EncryptionSpi; + + Assert.AreEqual(enc.MasterKeyName, resEnc.MasterKeyName); + Assert.AreEqual(enc.KeySize, resEnc.KeySize); + Assert.AreEqual(enc.KeyStorePath, resEnc.KeyStorePath); + Assert.AreEqual(enc.KeyStorePassword, resEnc.KeyStorePassword); + var ip = (TcpDiscoveryStaticIpFinder) disco.IpFinder; var resIp = (TcpDiscoveryStaticIpFinder) resDisco.IpFinder; @@ -176,9 +186,11 @@ public void TestAllConfigurationProperties() var com = (TcpCommunicationSpi) cfg.CommunicationSpi; var resCom = (TcpCommunicationSpi) resCfg.CommunicationSpi; Assert.AreEqual(com.AckSendThreshold, resCom.AckSendThreshold); + Assert.AreEqual(com.ConnectionsPerNode, resCom.ConnectionsPerNode); Assert.AreEqual(com.ConnectTimeout, resCom.ConnectTimeout); Assert.AreEqual(com.DirectBuffer, resCom.DirectBuffer); Assert.AreEqual(com.DirectSendBuffer, resCom.DirectSendBuffer); + Assert.AreEqual(com.FilterReachableAddresses, resCom.FilterReachableAddresses); Assert.AreEqual(com.IdleConnectionTimeout, resCom.IdleConnectionTimeout); Assert.AreEqual(com.LocalAddress, resCom.LocalAddress); Assert.AreEqual(com.LocalPort, resCom.LocalPort); @@ -187,13 +199,18 @@ public void TestAllConfigurationProperties() Assert.AreEqual(com.MessageQueueLimit, resCom.MessageQueueLimit); Assert.AreEqual(com.ReconnectCount, resCom.ReconnectCount); 
Assert.AreEqual(com.SelectorsCount, resCom.SelectorsCount); + Assert.AreEqual(com.SelectorSpins, resCom.SelectorSpins); + Assert.AreEqual(com.SharedMemoryPort, resCom.SharedMemoryPort); Assert.AreEqual(com.SlowClientQueueLimit, resCom.SlowClientQueueLimit); Assert.AreEqual(com.SocketReceiveBufferSize, resCom.SocketReceiveBufferSize); Assert.AreEqual(com.SocketSendBufferSize, resCom.SocketSendBufferSize); + Assert.AreEqual(com.SocketWriteTimeout, resCom.SocketWriteTimeout); Assert.AreEqual(com.TcpNoDelay, resCom.TcpNoDelay); Assert.AreEqual(com.UnacknowledgedMessagesBufferSize, resCom.UnacknowledgedMessagesBufferSize); - + Assert.AreEqual(com.UsePairedConnections, resCom.UsePairedConnections); + Assert.AreEqual(cfg.FailureDetectionTimeout, resCfg.FailureDetectionTimeout); + Assert.AreEqual(cfg.SystemWorkerBlockedTimeout, resCfg.SystemWorkerBlockedTimeout); Assert.AreEqual(cfg.ClientFailureDetectionTimeout, resCfg.ClientFailureDetectionTimeout); Assert.AreEqual(cfg.LongQueryWarningTimeout, resCfg.LongQueryWarningTimeout); @@ -242,6 +259,9 @@ public void TestAllConfigurationProperties() AssertExtensions.ReflectionEqual(cfg.DataStorageConfiguration, resCfg.DataStorageConfiguration); + Assert.AreEqual(cfg.MvccVacuumFrequency, resCfg.MvccVacuumFrequency); + Assert.AreEqual(cfg.MvccVacuumThreadCount, resCfg.MvccVacuumThreadCount); + Assert.IsNotNull(resCfg.SqlSchemas); Assert.AreEqual(2, resCfg.SqlSchemas.Count); Assert.IsTrue(resCfg.SqlSchemas.Contains("SCHEMA_3")); @@ -498,6 +518,8 @@ private static void CheckDefaultProperties(IgniteConfiguration cfg) cfg.ClientConnectorConfigurationEnabled); Assert.AreEqual(IgniteConfiguration.DefaultRedirectJavaConsoleOutput, cfg.RedirectJavaConsoleOutput); Assert.AreEqual(IgniteConfiguration.DefaultAuthenticationEnabled, cfg.AuthenticationEnabled); + Assert.AreEqual(IgniteConfiguration.DefaultMvccVacuumFrequency, cfg.MvccVacuumFrequency); + Assert.AreEqual(IgniteConfiguration.DefaultMvccVacuumThreadCount, cfg.MvccVacuumThreadCount); 
// Thread pools. Assert.AreEqual(IgniteConfiguration.DefaultManagementThreadPoolSize, cfg.ManagementThreadPoolSize); @@ -679,6 +701,13 @@ private static IgniteConfiguration GetCustomConfig() ThreadPriority = 6, TopologyHistorySize = 1234567 }, + EncryptionSpi = new KeystoreEncryptionSpi() + { + KeySize = 192, + KeyStorePassword = "love_sex_god", + KeyStorePath = "tde.jks", + MasterKeyName = KeystoreEncryptionSpi.DefaultMasterKeyName + }, IgniteInstanceName = "gridName1", IgniteHome = IgniteHome.Resolve(null), IncludedEventTypes = EventType.DiscoveryAll, @@ -727,9 +756,16 @@ private static IgniteConfiguration GetCustomConfig() TcpNoDelay = false, SlowClientQueueLimit = 98, SocketSendBufferSize = 2045, - UnacknowledgedMessagesBufferSize = 3450 + UnacknowledgedMessagesBufferSize = 3450, + ConnectionsPerNode = 12, + UsePairedConnections = true, + SharedMemoryPort = 1234, + SocketWriteTimeout = 2222, + SelectorSpins = 12, + FilterReachableAddresses = true }, FailureDetectionTimeout = TimeSpan.FromSeconds(3.5), + SystemWorkerBlockedTimeout = TimeSpan.FromSeconds(8.5), ClientFailureDetectionTimeout = TimeSpan.FromMinutes(12.3), LongQueryWarningTimeout = TimeSpan.FromMinutes(1.23), IsActiveOnStart = true, @@ -802,6 +838,7 @@ private static IgniteConfiguration GetCustomConfig() ConcurrencyLevel = 1, PageSize = 8 * 1024, WalAutoArchiveAfterInactivity = TimeSpan.FromMinutes(5), + CheckpointReadLockTimeout = TimeSpan.FromSeconds(9.5), DefaultDataRegionConfiguration = new DataRegionConfiguration { Name = "reg1", @@ -836,6 +873,8 @@ private static IgniteConfiguration GetCustomConfig() } }, AuthenticationEnabled = false, + MvccVacuumFrequency = 20000, + MvccVacuumThreadCount = 8, SqlSchemas = new List { "SCHEMA_3", "schema_4" } }; diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Properties/AssemblyInfo.cs index 32b7702ab55d0..5b02649710d4a 100644 --- 
a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Properties/AssemblyInfo.cs @@ -23,7 +23,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ReconnectTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ReconnectTest.cs index 5d40408c2bee7..e8aa60a5dfd4e 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ReconnectTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/ReconnectTest.cs @@ -17,11 +17,14 @@ namespace Apache.Ignite.Core.Tests { + using System; using System.Threading; + using System.Threading.Tasks; using Apache.Ignite.Core.Cache; using Apache.Ignite.Core.Cache.Configuration; using Apache.Ignite.Core.Common; using Apache.Ignite.Core.Lifecycle; + using Apache.Ignite.Core.Tests.Client.Cache; using Apache.Ignite.Core.Tests.Process; using NUnit.Framework; @@ -63,9 +66,9 @@ public void TestClusterRestart() client.ClientReconnected += (sender, args) => { eventArgs = args; }; - var cache = client.GetCache<int, int>(CacheName); + var cache = client.GetCache<int, Person>(CacheName); - cache[1] = 1; + cache[1] = new Person(1); Ignition.Stop(server.Name, true); @@ -91,14 +94,14 @@ public void TestClusterRestart() Assert.IsTrue(eventArgs.HasClusterRestarted); // Refresh the cache instance and check that it works. - var cache1 = client.GetCache<int, int>(CacheName); + var cache1 = client.GetCache<int, Person>(CacheName); Assert.AreEqual(0, cache1.GetSize()); - cache1[1] = 2; - Assert.AreEqual(2, cache1[1]); + cache1[1] = new Person(2); + Assert.AreEqual(2, cache1[1].Id); // Check that old cache instance still works. 
- Assert.AreEqual(2, cache.Get(1)); + Assert.AreEqual(2, cache.Get(1).Id); } /// @@ -163,6 +166,60 @@ public void TestFailedConnection() } } + /// <summary> + /// Tests writer structure cleanup after client reconnect with full cluster restart. + /// </summary> + [Test] + public void TestClusterRestart_ResetsCachedMetadataAndWriterStructures() + { + var serverCfg = new IgniteConfiguration(TestUtils.GetTestConfiguration()) + { + CacheConfiguration = new[] {new CacheConfiguration(CacheName)} + }; + + var clientCfg = new IgniteConfiguration(TestUtils.GetTestConfiguration()) + { + IgniteInstanceName = "client", + ClientMode = true + }; + + var server = Ignition.Start(serverCfg); + var client = Ignition.Start(clientCfg); + + Assert.AreEqual(2, client.GetCluster().GetNodes().Count); + + var evt = new ManualResetEventSlim(false); + client.ClientReconnected += (sender, args) => evt.Set(); + + var cache = client.GetCache<int, Person>(CacheName); + cache[1] = new Person(1); + + Task.Factory.StartNew(() => + { + while (!evt.IsSet) + { + try + { + cache[1] = new Person(1); + } + catch (Exception) + { + // Ignore exceptions while disconnected, keep on trying to populate writer structure cache. + } + } + }); + + Ignition.Stop(server.Name, true); + var server2 = Ignition.Start(serverCfg); + evt.Wait(); + + // Verify that we can deserialize on server (meta is resent properly). + cache[2] = new Person(2); + + var serverCache = server2.GetCache<int, Person>(CacheName); + Assert.AreEqual(2, serverCache[2].Id); + } + /// /// Starts the server process. /// @@ -173,7 +230,6 @@ private static IgniteProcess StartServerProcess(IgniteConfiguration cfg) { "-J-DIGNITE_QUIET=false"); } - /// /// Test set up. 
/// diff --git a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Services/ServicesTest.cs b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Services/ServicesTest.cs index 81c36522272c4..6bea9b2d8e489 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Services/ServicesTest.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Services/ServicesTest.cs @@ -1018,7 +1018,7 @@ private static void CheckServiceStarted(IIgnite grid, int count = 1, string svcN /// private IgniteConfiguration GetConfiguration(string springConfigUrl) { -#if !NETCOREAPP2_0 +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 if (!CompactFooter) { springConfigUrl = Compute.ComputeApiTestFullFooter.ReplaceFooterSetting(springConfigUrl); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.Schema.nuspec b/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.Schema.nuspec index de4e47230a6c9..37810d12a75f0 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.Schema.nuspec +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.Schema.nuspec @@ -42,7 +42,7 @@ XSD file describes the structure of IgniteConfigurationSection and enables Intel More info on Apache Ignite.NET: https://apacheignite-net.readme.io/ - Copyright 2018 + Copyright 2020 Apache Ignite XSD Intellisense diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.csproj b/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.csproj index 57357da81699c..f5897c8ec40b7 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.csproj +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.csproj @@ -161,6 +161,10 @@ + + + + @@ -600,4 +604,4 @@ --> - \ No newline at end of file + diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.nuspec b/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.nuspec index d5a0c682ff7a3..e131494b16134 100644 --- 
a/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.nuspec +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Apache.Ignite.Core.nuspec @@ -46,7 +46,7 @@ Supports .NET 4+ and .NET Core 2.0+. More info: https://apacheignite-net.readme.io/ - Copyright 2018 + Copyright 2020 Apache Ignite In-Memory Distributed Computing SQL NoSQL Grid Map Reduce Cache linqpad-samples diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheAtomicityMode.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheAtomicityMode.cs index 8c36a77d36b4e..49e5d120854b6 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheAtomicityMode.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheAtomicityMode.cs @@ -26,6 +26,13 @@ public enum CacheAtomicityMode { /// /// Specifies fully ACID-compliant transactional cache behavior. + /// + /// Note! In this mode, transactional consistency is guaranteed for key-value API operations only. + /// To enable ACID capabilities for SQL transactions, use TRANSACTIONAL_SNAPSHOT mode. + /// + /// Note! This atomicity mode is not compatible with the other atomicity modes within the same transaction. + /// If a transaction is executed over multiple caches, all caches must have the same atomicity mode, + /// either TRANSACTIONAL_SNAPSHOT or TRANSACTIONAL. /// Transactional, @@ -49,6 +56,27 @@ public enum CacheAtomicityMode /// Also note that all data modifications in mode are guaranteed to be atomic /// and consistent with writes to the underlying persistent store, if one is configured. /// - Atomic + Atomic, + + /// + /// Specifies fully ACID-compliant transactional cache behavior for both key-value API and SQL transactions. + /// + /// This atomicity mode enables multiversion concurrency control (MVCC) for the cache. In MVCC-enabled caches, + /// when a transaction updates a row, it creates a new version of that row instead of overwriting it. 
+ /// Other users continue to see the old version of the row until the transaction is committed. + /// In this way, readers and writers do not conflict with each other and always work with a consistent dataset. + /// The old version of data is cleaned up when it's no longer accessed by anyone. + /// + /// With this mode enabled, one node is elected as an MVCC coordinator. This node tracks all in-flight transactions + /// and queries executed in the cluster. Each transaction or query executed over the cache with + /// TRANSACTIONAL_SNAPSHOT mode works with a current snapshot of data generated for this transaction or query + /// by the coordinator. This snapshot ensures that the transaction works with a consistent database state + /// during its execution period. + /// + /// Note! This atomicity mode is not compatible with the other atomicity modes within the same transaction. + /// If a transaction is executed over multiple caches, all caches must have the same atomicity mode, + /// either TRANSACTIONAL_SNAPSHOT or TRANSACTIONAL. + /// + TransactionalSnapshot, } } \ No newline at end of file diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheConfiguration.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheConfiguration.cs index a8925ad97e8f6..2e0e1a3d127f0 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheConfiguration.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Configuration/CacheConfiguration.cs @@ -161,6 +161,9 @@ public class CacheConfiguration : IBinaryRawWriteAwareEx /// Default value for . public const int DefaultQueryParallelism = 1; + /// Default value for . + public const bool DefaultEncryptionEnabled = false; + /// /// Gets or sets the cache name. 
/// @@ -214,6 +217,7 @@ public CacheConfiguration(string name) RebalanceBatchesPrefetchCount = DefaultRebalanceBatchesPrefetchCount; MaxQueryIteratorsCount = DefaultMaxQueryIteratorsCount; QueryParallelism = DefaultQueryParallelism; + EncryptionEnabled = DefaultEncryptionEnabled; } /// @@ -329,6 +333,7 @@ private void Read(BinaryReader reader, ClientProtocolVersion srvVer) QueryDetailMetricsSize = reader.ReadInt(); QueryParallelism = reader.ReadInt(); SqlSchema = reader.ReadString(); + EncryptionEnabled = reader.ReadBoolean(); QueryEntities = reader.ReadCollectionRaw(r => new QueryEntity(r, srvVer)); @@ -427,6 +432,7 @@ internal void Write(BinaryWriter writer, ClientProtocolVersion srvVer) writer.WriteInt(QueryDetailMetricsSize); writer.WriteInt(QueryParallelism); writer.WriteString(SqlSchema); + writer.WriteBoolean(EncryptionEnabled); writer.WriteCollectionRaw(QueryEntities, srvVer); @@ -919,5 +925,12 @@ public string MemoryPolicyName /// [DefaultValue(DefaultQueryParallelism)] public int QueryParallelism { get; set; } + + /// + /// Gets or sets encryption flag. + /// Default is false. + /// + [DefaultValue(DefaultEncryptionEnabled)] + public bool EncryptionEnabled { get; set; } } } diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlFieldsQuery.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlFieldsQuery.cs index a93e00dd84d65..060321bdc7d63 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlFieldsQuery.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlFieldsQuery.cs @@ -115,6 +115,7 @@ public SqlFieldsQuery(string sql, bool loc, params object[] args) /// Gets or sets a value indicating whether this query contains only replicated tables. /// This is a hint for potentially more effective execution. 
/// + [Obsolete("No longer used as of Apache Ignite 2.8.")] public bool ReplicatedOnly { get; set; } /// @@ -161,7 +162,9 @@ public override string ToString() return string.Format("SqlFieldsQuery [Sql={0}, Arguments=[{1}], Local={2}, PageSize={3}, " + "EnableDistributedJoins={4}, EnforceJoinOrder={5}, Timeout={6}, ReplicatedOnly={7}" + ", Colocated={8}, Schema={9}, Lazy={10}]", Sql, args, Local, +#pragma warning disable 618 PageSize, EnableDistributedJoins, EnforceJoinOrder, Timeout, ReplicatedOnly, +#pragma warning restore 618 Colocated, Schema, Lazy); } } diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlQuery.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlQuery.cs index 7d8e8fba7c67d..4c979ed0b8804 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlQuery.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Cache/Query/SqlQuery.cs @@ -119,6 +119,7 @@ public SqlQuery(string queryType, string sql, bool local, params object[] args) /// Gets or sets a value indicating whether this query contains only replicated tables. /// This is a hint for potentially more effective execution. 
/// + [Obsolete("No longer used as of Apache Ignite 2.8.")] public bool ReplicatedOnly { get; set; } /** */ @@ -140,7 +141,9 @@ internal override void Write(BinaryWriter writer, bool keepBinary) writer.WriteBoolean(EnableDistributedJoins); writer.WriteInt((int) Timeout.TotalMilliseconds); +#pragma warning disable 618 writer.WriteBoolean(ReplicatedOnly); +#pragma warning restore 618 } /** */ @@ -161,7 +164,9 @@ public override string ToString() return string.Format("SqlQuery [Sql={0}, Arguments=[{1}], Local={2}, PageSize={3}, " + "EnableDistributedJoins={4}, Timeout={5}, ReplicatedOnly={6}]", Sql, args, Local, +#pragma warning disable 618 PageSize, EnableDistributedJoins, Timeout, ReplicatedOnly); +#pragma warning restore 618 } } diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Communication/Tcp/TcpCommunicationSpi.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Communication/Tcp/TcpCommunicationSpi.cs index d272906a39bf8..b070f9aadf804 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Communication/Tcp/TcpCommunicationSpi.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Communication/Tcp/TcpCommunicationSpi.cs @@ -40,6 +40,9 @@ public class TcpCommunicationSpi : ICommunicationSpi /// Default value of property. public const int DefaultAckSendThreshold = 16; + /// Default value of property. + public const int DefaultConnectionsPerNode = 1; + /// Default value of property. public static readonly TimeSpan DefaultConnectTimeout = TimeSpan.FromSeconds(5); @@ -49,6 +52,9 @@ public class TcpCommunicationSpi : ICommunicationSpi /// Default value of property. public const bool DefaultDirectSendBuffer = false; + /// Default value of property. + public const bool DefaultFilterReachableAddresses = false; + /// Default value of property. public static readonly TimeSpan DefaultIdleConnectionTimeout = TimeSpan.FromSeconds(30); @@ -70,31 +76,49 @@ public class TcpCommunicationSpi : ICommunicationSpi /// Default value of property. 
public static readonly int DefaultSelectorsCount = Math.Min(4, Environment.ProcessorCount); + /// Default value of property. + public const long DefaultSelectorSpins = 0; + + /// Default value of property. + public const int DefaultSharedMemoryPort = -1; + /// Default socket buffer size. public const int DefaultSocketBufferSize = 32 * 1024; + /// Default value of property. + public const long DefaultSocketWriteTimeout = 2000; + /// Default value of property. public const bool DefaultTcpNoDelay = true; + /// Default value of property. + public const bool DefaultUsePairedConnections = false; + /// /// Initializes a new instance of the class. /// public TcpCommunicationSpi() { AckSendThreshold = DefaultAckSendThreshold; + ConnectionsPerNode = DefaultConnectionsPerNode; ConnectTimeout = DefaultConnectTimeout; DirectBuffer = DefaultDirectBuffer; DirectSendBuffer = DefaultDirectSendBuffer; + FilterReachableAddresses = DefaultFilterReachableAddresses; IdleConnectionTimeout = DefaultIdleConnectionTimeout; LocalPort = DefaultLocalPort; LocalPortRange = DefaultLocalPortRange; MaxConnectTimeout = DefaultMaxConnectTimeout; MessageQueueLimit = DefaultMessageQueueLimit; ReconnectCount = DefaultReconnectCount; + SharedMemoryPort = DefaultSharedMemoryPort; SelectorsCount = DefaultSelectorsCount; + SelectorSpins = DefaultSelectorSpins; SocketReceiveBufferSize = DefaultSocketBufferSize; SocketSendBufferSize = DefaultSocketBufferSize; + SocketWriteTimeout = DefaultSocketWriteTimeout; TcpNoDelay = DefaultTcpNoDelay; + UsePairedConnections = DefaultUsePairedConnections; } /// @@ -104,9 +128,11 @@ public TcpCommunicationSpi() internal TcpCommunicationSpi(IBinaryRawReader reader) { AckSendThreshold = reader.ReadInt(); + ConnectionsPerNode = reader.ReadInt(); ConnectTimeout = reader.ReadLongAsTimespan(); DirectBuffer = reader.ReadBoolean(); DirectSendBuffer = reader.ReadBoolean(); + FilterReachableAddresses = reader.ReadBoolean(); IdleConnectionTimeout = reader.ReadLongAsTimespan(); 
LocalAddress = reader.ReadString(); LocalPort = reader.ReadInt(); @@ -115,15 +141,19 @@ internal TcpCommunicationSpi(IBinaryRawReader reader) MessageQueueLimit = reader.ReadInt(); ReconnectCount = reader.ReadInt(); SelectorsCount = reader.ReadInt(); + SelectorSpins = reader.ReadLong(); + SharedMemoryPort = reader.ReadInt(); SlowClientQueueLimit = reader.ReadInt(); SocketReceiveBufferSize = reader.ReadInt(); SocketSendBufferSize = reader.ReadInt(); + SocketWriteTimeout = reader.ReadLong(); TcpNoDelay = reader.ReadBoolean(); UnacknowledgedMessagesBufferSize = reader.ReadInt(); + UsePairedConnections = reader.ReadBoolean(); } /// - /// Gets or sets the number of received messages per connection to node + /// Gets or sets the number of received messages per connection to node /// after which acknowledgment message is sent. /// [DefaultValue(DefaultAckSendThreshold)] @@ -136,14 +166,14 @@ internal TcpCommunicationSpi(IBinaryRawReader reader) public TimeSpan ConnectTimeout { get; set; } /// - /// Gets or sets a value indicating whether to allocate direct (ByteBuffer.allocateDirect) + /// Gets or sets a value indicating whether to allocate direct (ByteBuffer.allocateDirect) /// or heap (ByteBuffer.allocate) buffer. /// [DefaultValue(DefaultDirectBuffer)] public bool DirectBuffer { get; set; } /// - /// Gets or sets a value indicating whether to allocate direct (ByteBuffer.allocateDirect) + /// Gets or sets a value indicating whether to allocate direct (ByteBuffer.allocateDirect) /// or heap (ByteBuffer.allocate) send buffer. /// [DefaultValue(DefaultDirectSendBuffer)] @@ -156,7 +186,7 @@ internal TcpCommunicationSpi(IBinaryRawReader reader) public TimeSpan IdleConnectionTimeout { get; set; } /// - /// Gets or sets the local host address for socket binding. Note that one node could have + /// Gets or sets the local host address for socket binding. Note that one node could have /// additional addresses beside the loopback one. This configuration parameter is optional. 
/// public string LocalAddress { get; set; } @@ -193,7 +223,7 @@ internal TcpCommunicationSpi(IBinaryRawReader reader) /// /// Gets or sets the message queue limit for incoming and outgoing messages. /// - /// When set to positive number send queue is limited to the configured value. + /// When set to positive number send queue is limited to the configured value. /// 0 disables the limitation. /// [DefaultValue(DefaultMessageQueueLimit)] @@ -216,11 +246,11 @@ internal TcpCommunicationSpi(IBinaryRawReader reader) /// /// Gets or sets slow client queue limit. /// - /// When set to a positive number, communication SPI will monitor clients outbound message queue sizes + /// When set to a positive number, communication SPI will monitor clients outbound message queue sizes /// and will drop those clients whose queue exceeded this limit. /// /// Usually this value should be set to the same value as which controls - /// message back-pressure for server nodes. The default value for this parameter is 0 + /// message back-pressure for server nodes. The default value for this parameter is 0 /// which means unlimited. /// public int SlowClientQueueLimit { get; set; } @@ -230,7 +260,7 @@ internal TcpCommunicationSpi(IBinaryRawReader reader) /// [DefaultValue(DefaultSocketBufferSize)] public int SocketReceiveBufferSize { get; set; } - + /// /// Gets or sets the size of the socket send buffer. /// @@ -250,21 +280,66 @@ internal TcpCommunicationSpi(IBinaryRawReader reader) public bool TcpNoDelay { get; set; } /// - /// Gets or sets the maximum number of stored unacknowledged messages per connection to node. - /// If number of unacknowledged messages exceeds this number + /// Gets or sets the maximum number of stored unacknowledged messages per connection to node. + /// If number of unacknowledged messages exceeds this number /// then connection to node is closed and reconnect is attempted. 
/// public int UnacknowledgedMessagesBufferSize { get; set; } + /// + /// Gets or sets the number of connections per node. + /// + [DefaultValue(DefaultConnectionsPerNode)] + public int ConnectionsPerNode { get; set; } + + /// + /// Gets or sets a value indicating whether separate connections should be used for incoming and outgoing data. + /// Set this to true to maintain separate connections for outgoing + /// and incoming messages. In this case the total number of connections between the local and each remote + /// node is equal to * 2. + /// + public bool UsePairedConnections { get; set; } + + /// + /// Gets or sets a local port to accept shared memory connections. + /// + [DefaultValue(DefaultSharedMemoryPort)] + public int SharedMemoryPort { get; set; } + + /// + /// Gets or sets the socket write timeout for TCP connections. If a message cannot be written to + /// the socket within this time, then the connection is closed and a reconnect is attempted. + /// + /// Default value is . + /// + [DefaultValue(DefaultSocketWriteTimeout)] + public long SocketWriteTimeout { get; set; } + + /// + /// Gets or sets a value that defines how many non-blocking selector spins should be made. + /// Can be set to so selector threads will never block. + /// + /// Default value is . + /// + public long SelectorSpins { get; set; } + + /// + /// Gets or sets a value indicating whether a filter for reachable addresses + /// should be enabled when creating a TCP client. + /// + public bool FilterReachableAddresses { get; set; } + /// /// Writes this instance to the specified writer.
/// internal void Write(IBinaryRawWriter writer) { writer.WriteInt(AckSendThreshold); + writer.WriteInt(ConnectionsPerNode); writer.WriteLong((long) ConnectTimeout.TotalMilliseconds); writer.WriteBoolean(DirectBuffer); writer.WriteBoolean(DirectSendBuffer); + writer.WriteBoolean(FilterReachableAddresses); writer.WriteLong((long) IdleConnectionTimeout.TotalMilliseconds); writer.WriteString(LocalAddress); writer.WriteInt(LocalPort); @@ -273,11 +348,15 @@ internal void Write(IBinaryRawWriter writer) writer.WriteInt(MessageQueueLimit); writer.WriteInt(ReconnectCount); writer.WriteInt(SelectorsCount); + writer.WriteLong(SelectorSpins); + writer.WriteInt(SharedMemoryPort); writer.WriteInt(SlowClientQueueLimit); writer.WriteInt(SocketReceiveBufferSize); writer.WriteInt(SocketSendBufferSize); + writer.WriteLong(SocketWriteTimeout); writer.WriteBoolean(TcpNoDelay); writer.WriteInt(UnacknowledgedMessagesBufferSize); + writer.WriteBoolean(UsePairedConnections); } } } diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Configuration/DataStorageConfiguration.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Configuration/DataStorageConfiguration.cs index 0a010b4a75c6d..8771c7795d788 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Configuration/DataStorageConfiguration.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Configuration/DataStorageConfiguration.cs @@ -234,6 +234,7 @@ internal DataStorageConfiguration(IBinaryRawReader reader) PageSize = reader.ReadInt(); ConcurrencyLevel = reader.ReadInt(); WalAutoArchiveAfterInactivity = reader.ReadLongAsTimespan(); + CheckpointReadLockTimeout = reader.ReadTimeSpanNullable(); var count = reader.ReadInt(); @@ -286,6 +287,7 @@ internal void Write(IBinaryRawWriter writer) writer.WriteInt(PageSize); writer.WriteInt(ConcurrencyLevel); writer.WriteTimeSpanAsLong(WalAutoArchiveAfterInactivity); + writer.WriteTimeSpanAsLongNullable(CheckpointReadLockTimeout); if (DataRegionConfigurations != null) { @@ -488,6 +490,11 @@ 
internal void Write(IBinaryRawWriter writer) [DefaultValue(typeof(TimeSpan), "-00:00:00.001")] public TimeSpan WalAutoArchiveAfterInactivity { get; set; } + /// + /// Gets or sets the timeout for checkpoint read lock acquisition. + /// + public TimeSpan? CheckpointReadLockTimeout { get; set; } + /// /// Gets or sets the data region configurations. /// diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/IEncryptionSpi.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/IEncryptionSpi.cs new file mode 100644 index 0000000000000..c0ea4754fc2cd --- /dev/null +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/IEncryptionSpi.cs @@ -0,0 +1,34 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +namespace Apache.Ignite.Core.Encryption +{ + using System.Diagnostics.CodeAnalysis; + using Apache.Ignite.Core.Encryption.Keystore; + + /// + /// Encryption SPI. + /// + /// Only predefined implementations are supported: + /// + /// + [SuppressMessage("Microsoft.Design", "CA1040:AvoidEmptyInterfaces")] + public interface IEncryptionSpi + { + // No-op. 
+ } +} \ No newline at end of file diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Keystore/KeystoreEncryptionSpi.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Keystore/KeystoreEncryptionSpi.cs new file mode 100644 index 0000000000000..e1866f8acd326 --- /dev/null +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Keystore/KeystoreEncryptionSpi.cs @@ -0,0 +1,84 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +namespace Apache.Ignite.Core.Encryption.Keystore +{ + using System.ComponentModel; + using Apache.Ignite.Core.Binary; + + /// + /// IEncryptionSpi implementation based on JDK-provided cipher algorithm implementations. + /// + public class KeystoreEncryptionSpi : IEncryptionSpi + { + /// + /// Default master key name. + /// + public const string DefaultMasterKeyName = "ignite.master.key"; + + /// + /// Default encryption key size. + /// + public const int DefaultKeySize = 256; + + /// + /// Name of master key in key store. + /// + [DefaultValue(DefaultMasterKeyName)] + public string MasterKeyName { get; set; } + + /// + /// Size of encryption key. + /// + [DefaultValue(DefaultKeySize)] + public int KeySize { get; set; } + + /// + /// Path to key store.
+ /// + public string KeyStorePath { get; set; } + + /// + /// Key store password. + /// + public string KeyStorePassword { get; set; } + + /// + /// Empty constructor. + /// + public KeystoreEncryptionSpi() + { + MasterKeyName = DefaultMasterKeyName; + KeySize = DefaultKeySize; + } + + /// + /// Initializes a new instance of the class. + /// + /// The reader. + public KeystoreEncryptionSpi(IBinaryRawReader reader) + { + MasterKeyName = reader.ReadString(); + KeySize = reader.ReadInt(); + KeyStorePath = reader.ReadString(); + + var keyStorePassword = reader.ReadCharArray(); + + KeyStorePassword = keyStorePassword == null ? null : new string(keyStorePassword); + } + } +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Keystore/Package-Info.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Keystore/Package-Info.cs new file mode 100644 index 0000000000000..8df8b34f9a328 --- /dev/null +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Keystore/Package-Info.cs @@ -0,0 +1,26 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ + +#pragma warning disable 1587 // invalid XML comment + +/// +/// Encryption API based on the standard Java keystore.
+/// +namespace Apache.Ignite.Core.Encryption.Keystore +{ + // No-op. +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Package-Info.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Package-Info.cs new file mode 100644 index 0000000000000..37cafdb782d99 --- /dev/null +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Encryption/Package-Info.cs @@ -0,0 +1,26 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ + +#pragma warning disable 1587 // invalid XML comment + +/// +/// Encryption API. +/// +namespace Apache.Ignite.Core.Encryption +{ + // No-op. 
+} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfiguration.cs b/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfiguration.cs index bd531186276ac..63bf7948d971e 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfiguration.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfiguration.cs @@ -39,6 +39,8 @@ namespace Apache.Ignite.Core using Apache.Ignite.Core.Deployment; using Apache.Ignite.Core.Discovery; using Apache.Ignite.Core.Discovery.Tcp; + using Apache.Ignite.Core.Encryption; + using Apache.Ignite.Core.Encryption.Keystore; using Apache.Ignite.Core.Events; using Apache.Ignite.Core.Failure; using Apache.Ignite.Core.Impl; @@ -163,6 +165,9 @@ public class IgniteConfiguration /** */ private TimeSpan? _clientFailureDetectionTimeout; + /** */ + private TimeSpan? _sysWorkerBlockedTimeout; + /** */ private int? _publicThreadPoolSize; @@ -205,6 +210,12 @@ public class IgniteConfiguration /** Map from user-defined listener to it's id. */ private Dictionary _localEventListenerIds; + /** MVCC vacuum frequency. */ + private long? _mvccVacuumFreq; + + /** MVCC vacuum thread count. */ + private int? _mvccVacuumThreadCnt; + /// /// Default network retry count. /// @@ -230,6 +241,16 @@ public class IgniteConfiguration /// public const bool DefaultAuthenticationEnabled = false; + /// + /// Default value for property. + /// + public const long DefaultMvccVacuumFrequency = 5000; + + /// + /// Default value for property. + /// + public const int DefaultMvccVacuumThreadCount = 2; + /// /// Initializes a new instance of the class. 
/// @@ -309,6 +330,9 @@ internal void Write(BinaryWriter writer, ClientProtocolVersion srvVer) writer.WriteTimeSpanAsLongNullable(_longQueryWarningTimeout); writer.WriteBooleanNullable(_isActiveOnStart); writer.WriteBooleanNullable(_authenticationEnabled); + writer.WriteLongNullable(_mvccVacuumFreq); + writer.WriteIntNullable(_mvccVacuumThreadCnt); + writer.WriteTimeSpanAsLongNullable(_sysWorkerBlockedTimeout); if (SqlSchemas == null) writer.WriteInt(-1); @@ -355,6 +379,26 @@ internal void Write(BinaryWriter writer, ClientProtocolVersion srvVer) else writer.WriteBoolean(false); + var enc = EncryptionSpi; + + if (enc != null) + { + writer.WriteBoolean(true); + + var keystoreEnc = enc as KeystoreEncryptionSpi; + + if (keystoreEnc == null) + throw new InvalidOperationException("Unsupported encryption SPI: " + enc.GetType()); + + writer.WriteString(keystoreEnc.MasterKeyName); + writer.WriteInt(keystoreEnc.KeySize); + writer.WriteString(keystoreEnc.KeyStorePath); + writer.WriteCharArray( + keystoreEnc.KeyStorePassword == null ? null : keystoreEnc.KeyStorePassword.ToCharArray()); + } + else + writer.WriteBoolean(false); + // Communication config var comm = CommunicationSpi; @@ -675,6 +719,9 @@ private void ReadCore(BinaryReader r, ClientProtocolVersion srvVer) _longQueryWarningTimeout = r.ReadTimeSpanNullable(); _isActiveOnStart = r.ReadBooleanNullable(); _authenticationEnabled = r.ReadBooleanNullable(); + _mvccVacuumFreq = r.ReadLongNullable(); + _mvccVacuumThreadCnt = r.ReadIntNullable(); + _sysWorkerBlockedTimeout = r.ReadTimeSpanNullable(); int sqlSchemasCnt = r.ReadInt(); @@ -707,6 +754,9 @@ private void ReadCore(BinaryReader r, ClientProtocolVersion srvVer) // Discovery config DiscoverySpi = r.ReadBoolean() ? new TcpDiscoverySpi(r) : null; + EncryptionSpi = (srvVer.CompareTo(ClientSocket.Ver120) >= 0 && r.ReadBoolean()) ? + new KeystoreEncryptionSpi(r) : null; + // Communication config CommunicationSpi = r.ReadBoolean() ? 
new TcpCommunicationSpi(r) : null; @@ -1035,6 +1085,12 @@ public string GridName /// Null for default communication. /// public ICommunicationSpi CommunicationSpi { get; set; } + + /// + /// Gets or sets the encryption service provider. + /// Null for disabled encryption. + /// + public IEncryptionSpi EncryptionSpi { get; set; } /// /// Gets or sets a value indicating whether node should start in client mode. @@ -1323,6 +1379,15 @@ public TimeSpan FailureDetectionTimeout set { _failureDetectionTimeout = value; } } + /// + /// Gets or sets the timeout for blocked system workers detection. + /// + public TimeSpan? SystemWorkerBlockedTimeout + { + get { return _sysWorkerBlockedTimeout; } + set { _sysWorkerBlockedTimeout = value; } + } + /// /// Gets or sets the failure detection timeout used by /// and for client nodes. @@ -1549,6 +1614,26 @@ public bool AuthenticationEnabled set { _authenticationEnabled = value; } } + /// + /// Time interval between MVCC vacuum runs in milliseconds. + /// + [DefaultValue(DefaultMvccVacuumFrequency)] + public long MvccVacuumFrequency + { + get { return _mvccVacuumFreq ?? DefaultMvccVacuumFrequency; } + set { _mvccVacuumFreq = value; } + } + + /// + /// Number of MVCC vacuum threads. + /// + [DefaultValue(DefaultMvccVacuumThreadCount)] + public int MvccVacuumThreadCount + { + get { return _mvccVacuumThreadCnt ?? DefaultMvccVacuumThreadCount; } + set { _mvccVacuumThreadCnt = value; } + } + /// /// Gets or sets predefined failure handlers implementation. 
/// A failure handler handles critical failures of Ignite instance accordingly: diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfigurationSection.xsd b/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfigurationSection.xsd index ebbef67334ca9..5f4a439f80683 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfigurationSection.xsd +++ b/modules/platforms/dotnet/Apache.Ignite.Core/IgniteConfigurationSection.xsd @@ -862,6 +862,11 @@ Desired query parallelism within a single node. + + + Flag indicating whether cache encryption is enabled. + + @@ -905,6 +910,38 @@ + + + Encryption SPI. Null for disabled encryption. + + + + + Master key name. + + + + + Encryption key size. + + + + + Key store path. + + + + + Key store password. + + + + + Assembly-qualified type name. + + + + Discovery service provider. Null for default discovery. @@ -999,7 +1036,7 @@ - Whether TcpDiscoverySpi is started in server mode regardless of IgniteConfiguration.ClientMode setting. + Whether TcpDiscoverySpi is started in server mode regardless of IgniteConfiguration.ClientMode setting. @@ -1885,6 +1922,13 @@ Inactivity time after which to run WAL segment auto archiving. + + + + Timeout for checkpoint read lock acquisition. + + + @@ -2191,6 +2235,13 @@ + + + + Timeout for blocked system workers detection. + + + @@ -2265,6 +2316,17 @@ Whether user authentication is enabled for the cluster. + + + Time interval between MVCC vacuum runs in milliseconds. + + + + + + Number of MVCC vacuum threads. + + Whether client connector should be enabled (allow thin clients, ODBC and JDBC drivers to work with Ignite).
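[Editorial aside, not part of the patch: the XSD additions above mirror the new IgniteConfiguration members (EncryptionSpi, SystemWorkerBlockedTimeout, MvccVacuumFrequency, MvccVacuumThreadCount). A minimal app.config sketch using them could look as follows; the element and attribute names are assumptions based on the usual Ignite.NET convention of camelCasing the C# property names, and the keystore path/password values are placeholders — verify against the generated IgniteConfigurationSection.xsd before use.]

```xml
<!-- Hypothetical fragment; names assumed from camelCased property names. -->
<igniteConfiguration mvccVacuumFrequency="5000"
                     mvccVacuumThreadCount="2"
                     systemWorkerBlockedTimeout="0:0:30">
  <!-- type is the assembly-qualified type name, as described in the XSD. -->
  <encryptionSpi type="Apache.Ignite.Core.Encryption.Keystore.KeystoreEncryptionSpi, Apache.Ignite.Core"
                 masterKeyName="ignite.master.key"
                 keySize="256"
                 keyStorePath="keystore.jks"
                 keyStorePassword="secret" />
</igniteConfiguration>
```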
diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryFullTypeDescriptor.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryFullTypeDescriptor.cs index 50c8c275b4fc2..c4003448e90fa 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryFullTypeDescriptor.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryFullTypeDescriptor.cs @@ -245,6 +245,17 @@ public void UpdateWriteStructure(int pathIdx, IList updat } } + /// + /// Resets writer structure. + /// + public void ResetWriteStructure() + { + lock (this) + { + _writerTypeStruct = null; + } + } + /** */ public void UpdateReadStructure(int pathIdx, IList updates) { diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryReaderExtensions.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryReaderExtensions.cs index b13318c0e909c..5fca62dfbf159 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryReaderExtensions.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryReaderExtensions.cs @@ -78,6 +78,14 @@ public static TimeSpan ReadLongAsTimespan(this IBinaryRawReader reader) return reader.ReadBoolean() ? reader.ReadInt() : (int?) null; } + /// + /// Reads the nullable long. + /// + public static long? ReadLongNullable(this IBinaryRawReader reader) + { + return reader.ReadBoolean() ? reader.ReadLong() : (long?) null; + } + /// /// Reads the nullable bool. /// diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryWriterExtensions.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryWriterExtensions.cs index e504d75af0068..b965bcad9df4b 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryWriterExtensions.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/BinaryWriterExtensions.cs @@ -44,7 +44,7 @@ public static void WriteBooleanNullable(this IBinaryRawWriter writer, bool? 
value) } /// - /// Writes the nullable boolean. + /// Writes the nullable int. /// public static void WriteIntNullable(this IBinaryRawWriter writer, int? value) { @@ -57,6 +57,20 @@ public static void WriteIntNullable(this IBinaryRawWriter writer, int? value) writer.WriteBoolean(false); } + /// + /// Writes the nullable long. + /// + public static void WriteLongNullable(this IBinaryRawWriter writer, long? value) + { + if (value != null) + { + writer.WriteBoolean(true); + writer.WriteLong(value.Value); + } + else + writer.WriteBoolean(false); + } + /// /// Writes the timespan. /// diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Marshaller.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Marshaller.cs index 0a7b54ce9aeff..e1cc98fb8556f 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Marshaller.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Marshaller.cs @@ -268,7 +268,7 @@ public BinaryReader StartUnmarshal(IBinaryStream stream, BinaryMode mode = Binar { return new BinaryReader(this, stream, mode, null); } - + /// /// Gets metadata for the given type ID. /// @@ -297,7 +297,7 @@ public void PutBinaryType(IBinaryTypeDescriptor desc) { Debug.Assert(desc != null); - GetBinaryTypeHandler(desc); // ensure that handler exists + GetBinaryTypeHandler(desc); // ensure that handler exists if (Ignite != null) { @@ -505,7 +505,7 @@ private BinaryFullTypeDescriptor AddUserType(Type type, int typeId, string typeN desc = desc == null ? new BinaryFullTypeDescriptor(type, typeId, typeName, true, _cfg.NameMapper, - _cfg.IdMapper, ser, false, AffinityKeyMappedAttribute.GetFieldNameFromAttribute(type), + _cfg.IdMapper, ser, false, AffinityKeyMappedAttribute.GetFieldNameFromAttribute(type), BinaryUtils.IsIgniteEnum(type), registered) : new BinaryFullTypeDescriptor(desc, type, ser, registered); @@ -576,8 +576,8 @@ private BinaryFullTypeDescriptor AddUserType(BinaryTypeConfiguration typeCfg, Ty // Type is found.
var typeName = GetTypeName(type, nameMapper); int typeId = GetTypeId(typeName, idMapper); - var affKeyFld = typeCfg.AffinityKeyFieldName - ?? AffinityKeyMappedAttribute.GetFieldNameFromAttribute(type); + var affKeyFld = typeCfg.AffinityKeyFieldName + ?? AffinityKeyMappedAttribute.GetFieldNameFromAttribute(type); var serializer = GetSerializer(_cfg, typeCfg, type, typeId, nameMapper, idMapper, _log); return AddType(type, typeId, typeName, true, keepDeserialized, nameMapper, idMapper, serializer, @@ -655,7 +655,7 @@ private BinaryFullTypeDescriptor AddType(Type type, int typeId, string typeName, ThrowConflictingTypeError(typeName, conflictingType.TypeName, typeId); } - var descriptor = new BinaryFullTypeDescriptor(type, typeId, typeName, userType, nameMapper, idMapper, + var descriptor = new BinaryFullTypeDescriptor(type, typeId, typeName, userType, nameMapper, idMapper, serializer, keepDeserialized, affKeyFieldName, isEnum); if (RegistrationDisabled) @@ -783,6 +783,30 @@ public string GetTypeName(Type type, IBinaryNameMapper mapper = null) return GetTypeName(type.AssemblyQualifiedName, mapper); } + /// + /// Called when local client node has been reconnected to the cluster. + /// + /// Cluster restarted flag. + public void OnClientReconnected(bool clusterRestarted) + { + if (!clusterRestarted) + return; + + // Reset all binary structures. Metadata must be sent again. + // _idToDesc enumerator is thread-safe (returns a snapshot). + // If there are new descriptors added concurrently, they are fine (we are already connected). + + // Race is possible when serialization is started before reconnect (or even before disconnect) + // and finished after reconnect, meta won't be sent to cluster because it is assumed to be known, + // but operation will succeed. + // We don't support this use case. Users should handle reconnect events properly when cluster is restarted. + // Supporting this very rare use case will complicate the code a lot with little benefit. 
+ foreach (var desc in _idToDesc) + { + desc.Value.ResetWriteStructure(); + } + } + /// /// Gets the name of the type. /// diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Structure/BinaryStructureTracker.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Structure/BinaryStructureTracker.cs index ee2e7e17047df..bf3ea6a6ddab8 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Structure/BinaryStructureTracker.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Binary/Structure/BinaryStructureTracker.cs @@ -97,6 +97,7 @@ public void UpdateWriterStructure(BinaryWriter writer) { if (_curStructUpdates != null) { + // The following line assumes that cluster meta update will succeed (BinaryProcessor.PutBinaryTypes). _desc.UpdateWriteStructure(_curStructPath, _curStructUpdates); var marsh = writer.Marshaller; @@ -146,4 +147,4 @@ private int GetNewFieldId(string fieldName, byte fieldTypeId, int action) return fieldId; } } -} \ No newline at end of file +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Cache/CacheImpl.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Cache/CacheImpl.cs index 9e99967872b09..ab16f67e182f6 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Cache/CacheImpl.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Cache/CacheImpl.cs @@ -1129,7 +1129,9 @@ private IPlatformTargetInternal QueryFieldsInternal(SqlFieldsQuery qry) writer.WriteBoolean(qry.EnforceJoinOrder); writer.WriteBoolean(qry.Lazy); // Lazy flag. 
writer.WriteInt((int) qry.Timeout.TotalMilliseconds); +#pragma warning disable 618 writer.WriteBoolean(qry.ReplicatedOnly); +#pragma warning restore 618 writer.WriteBoolean(qry.Colocated); writer.WriteString(qry.Schema); // Schema }); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Client/Cache/CacheClient.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Client/Cache/CacheClient.cs index 8cc2741cfe740..d930b51834c6c 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Client/Cache/CacheClient.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Client/Cache/CacheClient.cs @@ -680,7 +680,9 @@ private static void WriteSqlQuery(IBinaryRawWriter writer, SqlQuery qry) QueryBase.WriteQueryArgs(writer, qry.Arguments); writer.WriteBoolean(qry.EnableDistributedJoins); writer.WriteBoolean(qry.Local); +#pragma warning disable 618 writer.WriteBoolean(qry.ReplicatedOnly); +#pragma warning restore 618 writer.WriteInt(qry.PageSize); writer.WriteTimeSpanAsLong(qry.Timeout); } @@ -705,7 +707,9 @@ private static void WriteSqlFieldsQuery(IBinaryRawWriter writer, SqlFieldsQuery writer.WriteBoolean(qry.EnableDistributedJoins); writer.WriteBoolean(qry.Local); +#pragma warning disable 618 writer.WriteBoolean(qry.ReplicatedOnly); +#pragma warning restore 618 writer.WriteBoolean(qry.EnforceJoinOrder); writer.WriteBoolean(qry.Colocated); writer.WriteBoolean(qry.Lazy); diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Common/CopyOnWriteConcurrentDictionary.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Common/CopyOnWriteConcurrentDictionary.cs index cd26af7b74522..4d3d1f82150e9 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Common/CopyOnWriteConcurrentDictionary.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Common/CopyOnWriteConcurrentDictionary.cs @@ -19,6 +19,7 @@ namespace Apache.Ignite.Core.Impl.Common { using System; + using System.Collections; using System.Collections.Generic; using 
System.Diagnostics.CodeAnalysis; @@ -27,7 +28,8 @@ namespace Apache.Ignite.Core.Impl.Common /// Good for frequent reads / infrequent writes scenarios. /// [SuppressMessage("Microsoft.Naming", "CA1711:IdentifiersShouldNotHaveIncorrectSuffix")] - public class CopyOnWriteConcurrentDictionary + [SuppressMessage("Microsoft.Naming", "CA1710:IdentifiersShouldHaveCorrectSuffix")] + public class CopyOnWriteConcurrentDictionary : IEnumerable> { /** */ private volatile Dictionary _dict = new Dictionary(); @@ -88,13 +90,17 @@ public void Set(TKey key, TValue value) _dict = dict0; } } + + /** */ + public IEnumerator> GetEnumerator() + { + return _dict.GetEnumerator(); + } - /// - /// Determines whether the specified key exists in the dictionary. - /// - public bool ContainsKey(TKey key) + /** */ + IEnumerator IEnumerable.GetEnumerator() { - return _dict.ContainsKey(key); + return GetEnumerator(); } } } \ No newline at end of file diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Ignite.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Ignite.cs index 42d9ed65e8889..ecec7684d3cf7 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Ignite.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Ignite.cs @@ -1002,6 +1002,8 @@ internal void OnClientDisconnected() /// Cluster restarted flag. internal void OnClientReconnected(bool clusterRestarted) { + _marsh.OnClientReconnected(clusterRestarted); + _clientReconnectTaskCompletionSource.TrySetResult(clusterRestarted); var handler = ClientReconnected; diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/AppDomains.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/AppDomains.cs index e98796b1899cc..64c366549e14c 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/AppDomains.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/AppDomains.cs @@ -15,7 +15,7 @@ * limitations under the License. 
*/ -#if !NETCOREAPP2_0 +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 namespace Apache.Ignite.Core.Impl.Unmanaged.Jni { using System; @@ -65,12 +65,12 @@ public static _AppDomain GetDefaultAppDomain() { throw new IgniteException("Failed to get default AppDomain. Cannot create meta host: " + hr); } - + var host = (ICLRMetaHost) objHost; var vers = Environment.Version; var versString = string.Format("v{0}.{1}.{2}", vers.Major, vers.Minor, vers.Build); var runtime = (ICLRRuntimeInfo) host.GetRuntime(versString, ref IID_CLRRuntimeInfo); - + bool started; uint flags; runtime.IsStarted(out started, out flags); @@ -133,4 +133,4 @@ public static extern int CLRCreateInstance(ref Guid clsid, ref Guid iid, } } } -#endif \ No newline at end of file +#endif diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/Jvm.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/Jvm.cs index 5c781b50ec44d..4e04a8084ecb4 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/Jvm.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/Jvm.cs @@ -107,7 +107,7 @@ private Jvm(IntPtr jvmPtr) /// private static Callbacks GetCallbacksFromDefaultDomain() { -#if !NETCOREAPP2_0 +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 // JVM exists once per process, and JVM callbacks exist once per process. // We should register callbacks ONLY from the default AppDomain (which can't be unloaded). // Non-default appDomains should delegate this logic to the default one. diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/JvmDll.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/JvmDll.cs index ef161f4604784..682226dd42d5c 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/JvmDll.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Impl/Unmanaged/Jni/JvmDll.cs @@ -172,8 +172,8 @@ public JniResult GetCreatedJvms(out IntPtr pvm, int size, out int size2) /// /// Gets the default JVM init args. 
- /// Before calling this function, native code must set the vm_args->version field to the JNI version - /// it expects the VM to support. After this function returns, vm_args->version will be set + /// Before calling this function, native code must set the vm_args->version field to the JNI version + /// it expects the VM to support. After this function returns, vm_args->version will be set /// to the actual JNI version the VM supports. /// public unsafe JniResult GetDefaultJvmInitArgs(JvmInitArgs* args) @@ -305,7 +305,7 @@ private static IEnumerable> GetJvmDllPaths(string c /// private static IEnumerable> GetJvmDllPathsWindows() { -#if !NETCOREAPP2_0 +#if !NETCOREAPP2_0 && !NETCOREAPP2_1 if (!Os.IsWindows) { yield break; @@ -446,4 +446,4 @@ internal static extern JniResult JNI_GetCreatedJavaVMs(out IntPtr pvm, int size, internal static extern JniResult JNI_GetDefaultJavaVMInitArgs(JvmInitArgs* args); } } -} \ No newline at end of file +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Core/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.Core/Properties/AssemblyInfo.cs index c403c1f543711..1e3872053e631 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Core/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Core/Properties/AssemblyInfo.cs @@ -25,7 +25,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Apache.Ignite.EntityFramework.nuspec b/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Apache.Ignite.EntityFramework.nuspec index e9b778c02eb24..e791ae4b7f0cb 100644 --- a/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Apache.Ignite.EntityFramework.nuspec +++ 
b/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Apache.Ignite.EntityFramework.nuspec @@ -47,7 +47,7 @@ More info: https://apacheignite-net.readme.io/ Apache Ignite EntityFramework Integration - Copyright 2018 + Copyright 2020 EntityFramework Second-Level Apache Ignite In-Memory Distributed Computing SQL NoSQL Grid Map Reduce Cache diff --git a/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Properties/AssemblyInfo.cs index 1e5c565381b0c..427243acd19f2 100644 --- a/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.EntityFramework/Properties/AssemblyInfo.cs @@ -25,7 +25,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.Linq/Apache.Ignite.Linq.nuspec b/modules/platforms/dotnet/Apache.Ignite.Linq/Apache.Ignite.Linq.nuspec index 0a4f99971a84d..4295824853d91 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Linq/Apache.Ignite.Linq.nuspec +++ b/modules/platforms/dotnet/Apache.Ignite.Linq/Apache.Ignite.Linq.nuspec @@ -49,7 +49,7 @@ Supports .NET 4+ and .NET Core 2.0+. 
More info: https://apacheignite-net.readme.io/ - Copyright 2018 + Copyright 2020 Apache Ignite In-Memory Distributed Computing SQL NoSQL LINQ Grid Map Reduce Cache linqpad-samples diff --git a/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/CacheFieldsQueryExecutor.cs b/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/CacheFieldsQueryExecutor.cs index 9d0edd7f516ca..866edff8fd0e8 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/CacheFieldsQueryExecutor.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/CacheFieldsQueryExecutor.cs @@ -204,7 +204,9 @@ internal SqlFieldsQuery GetFieldsQuery(string text, object[] args) PageSize = _options.PageSize, EnforceJoinOrder = _options.EnforceJoinOrder, Timeout = _options.Timeout, +#pragma warning disable 618 ReplicatedOnly = _options.ReplicatedOnly, +#pragma warning restore 618 Colocated = _options.Colocated, Local = _options.Local, Arguments = args, diff --git a/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/MethodVisitor.cs b/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/MethodVisitor.cs index 84bd98f43e29e..375c7a8e6fe5a 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/MethodVisitor.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Linq/Impl/MethodVisitor.cs @@ -54,7 +54,7 @@ internal static class MethodVisitor { GetStringMethod("ToLower", new Type[0], GetFunc("lower")), GetStringMethod("ToUpper", new Type[0], GetFunc("upper")), - GetStringMethod("Contains", del: (e, v) => VisitSqlLike(e, v, "'%' || ? || '%'")), + GetStringMethod("Contains", new[] {typeof (string)}, (e, v) => VisitSqlLike(e, v, "'%' || ? || '%'")), GetStringMethod("StartsWith", new[] {typeof (string)}, (e, v) => VisitSqlLike(e, v, "? 
|| '%'")), GetStringMethod("EndsWith", new[] {typeof (string)}, (e, v) => VisitSqlLike(e, v, "'%' || ?")), GetStringMethod("IndexOf", new[] {typeof (string)}, GetFunc("instr", -1)), @@ -72,7 +72,7 @@ internal static class MethodVisitor GetStringMethod("PadRight", "rpad", typeof (int), typeof (char)), GetRegexMethod("Replace", "regexp_replace", typeof (string), typeof (string), typeof (string)), - GetRegexMethod("Replace", "regexp_replace", typeof (string), typeof (string), typeof (string), + GetRegexMethod("Replace", "regexp_replace", typeof (string), typeof (string), typeof (string), typeof(RegexOptions)), GetRegexMethod("IsMatch", "regexp_like", typeof (string), typeof (string)), GetRegexMethod("IsMatch", "regexp_like", typeof (string), typeof (string), typeof(RegexOptions)), @@ -205,7 +205,7 @@ private static VisitMethodDelegate GetFunc(string func, params int[] adjust) /// /// Visits the instance function. /// - private static void VisitFunc(MethodCallExpression expression, CacheQueryExpressionVisitor visitor, + private static void VisitFunc(MethodCallExpression expression, CacheQueryExpressionVisitor visitor, string func, string suffix, params int[] adjust) { visitor.ResultBuilder.Append(func).Append("("); @@ -358,7 +358,7 @@ private static KeyValuePair GetRegexMethod(stri private static KeyValuePair GetParameterizedTrimMethod(string name, string sqlName) { - return GetMethod(typeof(string), name, new[] {typeof(char[])}, + return GetMethod(typeof(string), name, new[] {typeof(char[])}, (e, v) => VisitParameterizedTrimFunc(e, v, sqlName)); } @@ -379,4 +379,4 @@ private static KeyValuePair GetMathMethod(strin return GetMathMethod(name, name, argTypes); } } -} \ No newline at end of file +} diff --git a/modules/platforms/dotnet/Apache.Ignite.Linq/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.Linq/Properties/AssemblyInfo.cs index 0e90d60985eb9..3902c43f6f92e 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Linq/Properties/AssemblyInfo.cs 
+++ b/modules/platforms/dotnet/Apache.Ignite.Linq/Properties/AssemblyInfo.cs @@ -24,7 +24,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.Linq/QueryOptions.cs b/modules/platforms/dotnet/Apache.Ignite.Linq/QueryOptions.cs index c727e1cdbc635..994baf2648340 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Linq/QueryOptions.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Linq/QueryOptions.cs @@ -99,6 +99,7 @@ public QueryOptions() /// Gets or sets a value indicating whether this query contains only replicated tables. /// This is a hint for potentially more effective execution. /// + [Obsolete("No longer used as of Apache Ignite 2.8.")] public bool ReplicatedOnly { get; set; } /// diff --git a/modules/platforms/dotnet/Apache.Ignite.Log4Net/Apache.Ignite.Log4Net.nuspec b/modules/platforms/dotnet/Apache.Ignite.Log4Net/Apache.Ignite.Log4Net.nuspec index 1ae2b93ee85e9..e726cea60bda1 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Log4Net/Apache.Ignite.Log4Net.nuspec +++ b/modules/platforms/dotnet/Apache.Ignite.Log4Net/Apache.Ignite.Log4Net.nuspec @@ -40,7 +40,7 @@ Creating NuGet package: false log4net Logger for Apache Ignite - Copyright 2018 + Copyright 2020 Apache Ignite In-Memory Distributed Computing SQL NoSQL LINQ Grid Map Reduce Cache log4net logger diff --git a/modules/platforms/dotnet/Apache.Ignite.Log4Net/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.Log4Net/Properties/AssemblyInfo.cs index f29a30366b394..8162104fa46f8 100644 --- a/modules/platforms/dotnet/Apache.Ignite.Log4Net/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.Log4Net/Properties/AssemblyInfo.cs @@ -24,7 +24,7 @@ [assembly: 
AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite.NLog/Apache.Ignite.NLog.nuspec b/modules/platforms/dotnet/Apache.Ignite.NLog/Apache.Ignite.NLog.nuspec index f5e9a6959bf85..413ddf9340d6d 100644 --- a/modules/platforms/dotnet/Apache.Ignite.NLog/Apache.Ignite.NLog.nuspec +++ b/modules/platforms/dotnet/Apache.Ignite.NLog/Apache.Ignite.NLog.nuspec @@ -40,7 +40,7 @@ Creating NuGet package: false NLog Logger for Apache Ignite - Copyright 2018 + Copyright 2020 Apache Ignite In-Memory Distributed Computing SQL NoSQL LINQ Grid Map Reduce Cache NLog logger diff --git a/modules/platforms/dotnet/Apache.Ignite.NLog/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite.NLog/Properties/AssemblyInfo.cs index 7a7a555750180..6cb85362ed5a9 100644 --- a/modules/platforms/dotnet/Apache.Ignite.NLog/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite.NLog/Properties/AssemblyInfo.cs @@ -24,7 +24,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/Apache.Ignite/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/Apache.Ignite/Properties/AssemblyInfo.cs index 195285fcd594f..a206960532460 100644 --- a/modules/platforms/dotnet/Apache.Ignite/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/Apache.Ignite/Properties/AssemblyInfo.cs @@ -23,7 +23,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: 
AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/docfx/Apache.Ignite.docfx.json b/modules/platforms/dotnet/docfx/Apache.Ignite.docfx.json index 1b2214ef5dccd..fd67bcfbb6dfc 100644 --- a/modules/platforms/dotnet/docfx/Apache.Ignite.docfx.json +++ b/modules/platforms/dotnet/docfx/Apache.Ignite.docfx.json @@ -81,7 +81,7 @@ "_appTitle": "Apache Ignite.NET", "_appFaviconPath": "images/favicon.ico", "_appLogoPath": "images/logo_ignite_32_32.png", - "_appFooter": "© 2015 - 2018 The Apache Software Foundation", + "_appFooter": "© 2015 - 2020 The Apache Software Foundation", "_disableContribution": true } } diff --git a/modules/platforms/dotnet/examples/Apache.Ignite.Examples/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/examples/Apache.Ignite.Examples/Properties/AssemblyInfo.cs index a3d63e1c5615b..1a557e743ec09 100644 --- a/modules/platforms/dotnet/examples/Apache.Ignite.Examples/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/examples/Apache.Ignite.Examples/Properties/AssemblyInfo.cs @@ -23,7 +23,7 @@ [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/dotnet/examples/Apache.Ignite.ExamplesDll/Properties/AssemblyInfo.cs b/modules/platforms/dotnet/examples/Apache.Ignite.ExamplesDll/Properties/AssemblyInfo.cs index d58e3fd34ec5d..ae8f270e3de86 100644 --- a/modules/platforms/dotnet/examples/Apache.Ignite.ExamplesDll/Properties/AssemblyInfo.cs +++ b/modules/platforms/dotnet/examples/Apache.Ignite.ExamplesDll/Properties/AssemblyInfo.cs @@ -23,7 +23,7 @@ [assembly: 
AssemblyConfiguration("")] [assembly: AssemblyCompany("Apache Software Foundation")] [assembly: AssemblyProduct("Apache Ignite.NET")] -[assembly: AssemblyCopyright("Copyright 2018")] +[assembly: AssemblyCopyright("Copyright 2020")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] diff --git a/modules/platforms/nodejs/lib/EnumItem.js b/modules/platforms/nodejs/lib/EnumItem.js index 1d1725e3d5908..5e80da9dee2b5 100644 --- a/modules/platforms/nodejs/lib/EnumItem.js +++ b/modules/platforms/nodejs/lib/EnumItem.js @@ -17,6 +17,7 @@ 'use strict'; +const Util = require('util'); const ArgumentChecker = require('./internal/ArgumentChecker'); const Errors = require('./Errors'); @@ -157,14 +158,18 @@ class EnumItem { * @ignore */ async _write(communicator, buffer) { + const type = await this._getType(communicator, this._typeId); + if (!type || !type._isEnum) { + throw Errors.IgniteClientError.enumSerializationError( + true, Util.format('enum type id "%d" is not registered', this._typeId)); + } buffer.writeInteger(this._typeId); if (this._ordinal !== null) { buffer.writeInteger(this._ordinal); return; } else if (this._name !== null || this._value !== null) { - const type = await this._getType(communicator, this._typeId); - if (type._isEnum && type._enumValues) { + if (type._enumValues) { for (let i = 0; i < type._enumValues.length; i++) { if (this._name === type._enumValues[i][0] || this._value === type._enumValues[i][1]) { @@ -185,8 +190,12 @@ class EnumItem { this._typeId = buffer.readInteger(); this._ordinal = buffer.readInteger(); const type = await this._getType(communicator, this._typeId); - if (!type._isEnum || !type._enumValues || type._enumValues.length <= this._ordinal) { - throw new Errors.IgniteClientError('EnumItem can not be deserialized: type mismatch'); + if (!type || !type._isEnum) { + throw Errors.IgniteClientError.enumSerializationError( + false, Util.format('enum type id "%d" is not registered', this._typeId)); + } + else if 
(!type._enumValues || type._enumValues.length <= this._ordinal) { + throw Errors.IgniteClientError.enumSerializationError(false, 'type mismatch'); } this._name = type._enumValues[this._ordinal][0]; this._value = type._enumValues[this._ordinal][1]; diff --git a/modules/platforms/nodejs/lib/Errors.js b/modules/platforms/nodejs/lib/Errors.js index 57a7a8c291306..89baf386ca0e8 100644 --- a/modules/platforms/nodejs/lib/Errors.js +++ b/modules/platforms/nodejs/lib/Errors.js @@ -83,6 +83,18 @@ class IgniteClientError extends Error { } return new IgniteClientError(msg); } + + /** + * EnumItem serialization/deserialization errors. + * @ignore + */ + static enumSerializationError(serialize, message = null) { + let msg = serialize ? 'Enum item can not be serialized' : 'Enum item can not be deserialized'; + if (message) { + msg = msg + ': ' + message; + } + return new IgniteClientError(msg); + } } /** diff --git a/modules/platforms/nodejs/lib/internal/BinaryUtils.js b/modules/platforms/nodejs/lib/internal/BinaryUtils.js index 2619df7961fd7..fe1e4034a9958 100644 --- a/modules/platforms/nodejs/lib/internal/BinaryUtils.js +++ b/modules/platforms/nodejs/lib/internal/BinaryUtils.js @@ -497,6 +497,10 @@ class BinaryUtils { expectedTypeCode === BinaryUtils.TYPE_CODE.COMPLEX_OBJECT) { return; } + else if (expectedTypeCode === BinaryUtils.TYPE_CODE.ENUM && + actualTypeCode === BinaryUtils.TYPE_CODE.BINARY_ENUM) { + return; + } else if (actualTypeCode !== expectedTypeCode) { throw Errors.IgniteClientError.typeCastError(actualTypeCode, expectedTypeCode); } diff --git a/modules/platforms/nodejs/spec/TestingHelper.js b/modules/platforms/nodejs/spec/TestingHelper.js index 79a53361dce8e..78df0cb37915f 100644 --- a/modules/platforms/nodejs/spec/TestingHelper.js +++ b/modules/platforms/nodejs/spec/TestingHelper.js @@ -232,6 +232,13 @@ class TestingHelper { TestingHelper.checkError(error, Errors.IgniteClientError, done) } + static checkEnumItemSerializationError(error, done) { + if (!(error 
instanceof Errors.IgniteClientError) || + error.message.indexOf('Enum item can not be serialized') < 0) { + done.fail('unexpected error: ' + error); + } + } + static checkError(error, errorType, done) { if (!(error instanceof errorType)) { done.fail('unexpected error: ' + error); diff --git a/modules/platforms/nodejs/spec/cache/CachePutGetDiffTypes.spec.js b/modules/platforms/nodejs/spec/cache/CachePutGetDiffTypes.spec.js index 28a9ae3825f43..a6e1bba8c64a6 100644 --- a/modules/platforms/nodejs/spec/cache/CachePutGetDiffTypes.spec.js +++ b/modules/platforms/nodejs/spec/cache/CachePutGetDiffTypes.spec.js @@ -543,6 +543,45 @@ describe('cache put get test suite >', () => { catch(error => done.fail(error)); }); + it('put enum items', (done) => { + Promise.resolve(). + then(async () => { + const fakeTypeId = 12345; + const enumItem1 = new EnumItem(fakeTypeId); + enumItem1.setOrdinal(1); + await putEnumItem(enumItem1, null, done); + await putEnumItem(enumItem1, ObjectType.PRIMITIVE_TYPE.ENUM, done); + const enumItem2 = new EnumItem(fakeTypeId); + enumItem2.setName('name'); + await putEnumItem(enumItem2, null, done); + await putEnumItem(enumItem2, ObjectType.PRIMITIVE_TYPE.ENUM, done); + const enumItem3 = new EnumItem(fakeTypeId); + enumItem3.setValue(2); + await putEnumItem(enumItem3, null, done); + await putEnumItem(enumItem3, ObjectType.PRIMITIVE_TYPE.ENUM, done); + }). + then(done). + catch(error => done.fail(error)); + }); + + async function putEnumItem(value, valueType, done) { + const cache = igniteClient.getCache(CACHE_NAME). + setKeyType(null). 
+ setValueType(valueType); + const key = new Date(); + // Enum registration is not supported by the client, therefore putting an EnumItem must throw IgniteClientError + try { + await cache.put(key, value); + done.fail('put EnumItem must throw IgniteClientError'); + } + catch (err) { + TestingHelper.checkEnumItemSerializationError(err, done); + } + finally { + await cache.removeAll(); + } + } + async function putGetPrimitiveValues(keyType, valueType, key, value, modificator) { const cache = await igniteClient.getCache(CACHE_NAME). setKeyType(keyType). diff --git a/modules/platforms/php/.gitignore b/modules/platforms/php/.gitignore new file mode 100644 index 0000000000000..4f4acd356ffd7 --- /dev/null +++ b/modules/platforms/php/.gitignore @@ -0,0 +1,2 @@ +vendor/ +composer.lock \ No newline at end of file diff --git a/modules/platforms/php/README.md b/modules/platforms/php/README.md new file mode 100644 index 0000000000000..d6cd51dd7d550 --- /dev/null +++ b/modules/platforms/php/README.md @@ -0,0 +1,37 @@ +# PHP Thin Client # + +## Installation ## + +The client requires PHP version 7.2 or higher (http://php.net/manual/en/install.php) and the Composer Dependency Manager (https://getcomposer.org/download/). + +The client additionally requires the PHP Multibyte String extension. Depending on your PHP configuration, you may need to additionally install/configure it (http://php.net/manual/en/mbstring.installation.php). + +### Installation from the PHP Package Repository ### + +Run from your application root: +``` +composer require apache/apache-ignite-client +``` + +To use the client in your application, include the `vendor/autoload.php` file generated by Composer in your source code, e.g. +``` +require_once __DIR__ . '/vendor/autoload.php'; +``` + +### Installation from Sources ### + +1. Download Ignite sources to `local_ignite_path` +2. Go to the `local_ignite_path/modules/platforms/php` folder +3. 
Execute the `composer install --no-dev` command + +```bash +cd local_ignite_path/modules/platforms/php +composer install --no-dev +``` + +To use the client in your application, include the `vendor/autoload.php` file generated by Composer in your source code, e.g. +``` +require_once "/vendor/autoload.php"; +``` + +For more information, see [Apache Ignite PHP Thin Client documentation](https://apacheignite.readme.io/docs/php-thin-client). diff --git a/modules/platforms/php/api_docs/Doxyfile b/modules/platforms/php/api_docs/Doxyfile new file mode 100644 index 0000000000000..23924633eecc9 --- /dev/null +++ b/modules/platforms/php/api_docs/Doxyfile @@ -0,0 +1,2487 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements.  See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License.  You may obtain a copy of the License at +# +#      http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Doxyfile 1.8.14 + +# This file describes the settings to be used by the documentation system +# doxygen (www.doxygen.org) for a project. +# +# All text after a double hash (##) is considered a comment and is placed in +# front of the TAG it is preceding. +# +# All text after a single hash (#) is considered a comment and will be ignored. +# The format is: +# TAG = value [value, ...] +# For lists, items can also be appended using: +# TAG += value [value, ...] +# Values that contain spaces should be placed between quotes (\" \"). 
+ +#--------------------------------------------------------------------------- +# Project related configuration options +#--------------------------------------------------------------------------- + +# This tag specifies the encoding used for all characters in the config file +# that follow. The default is UTF-8 which is also the encoding used for all text +# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv +# built into libc) for the transcoding. See +# https://www.gnu.org/software/libiconv/ for the list of possible encodings. +# The default value is: UTF-8. + +DOXYFILE_ENCODING = UTF-8 + +# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by +# double-quotes, unless you are using Doxywizard) that should identify the +# project for which the documentation is generated. This name is used in the +# title of most generated pages and in a few other places. +# The default value is: My Project. + +PROJECT_NAME = "PHP Client for Apache Ignite" + +# The PROJECT_NUMBER tag can be used to enter a project or revision number. This +# could be handy for archiving the generated documentation or if some version +# control system is used. + +PROJECT_NUMBER = + +# Using the PROJECT_BRIEF tag one can provide an optional one line description +# for a project that appears at the top of each page and should give viewer a +# quick idea about the purpose of the project. Keep the description short. + +PROJECT_BRIEF = + +# With the PROJECT_LOGO tag one can specify a logo or an icon that is included +# in the documentation. The maximum height of the logo should not exceed 55 +# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy +# the logo to the output directory. + +PROJECT_LOGO = + +# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path +# into which the generated documentation will be written. 
If a relative path is +# entered, it will be relative to the location where doxygen was started. If +# left blank the current directory will be used. + +OUTPUT_DIRECTORY = api_docs + +# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub- +# directories (in 2 levels) under the output directory of each output format and +# will distribute the generated files over these directories. Enabling this +# option can be useful when feeding doxygen a huge amount of source files, where +# putting all generated files in the same directory would otherwise causes +# performance problems for the file system. +# The default value is: NO. + +CREATE_SUBDIRS = NO + +# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII +# characters to appear in the names of generated files. If set to NO, non-ASCII +# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode +# U+3044. +# The default value is: NO. + +ALLOW_UNICODE_NAMES = NO + +# The OUTPUT_LANGUAGE tag is used to specify the language in which all +# documentation generated by doxygen is written. Doxygen will use this +# information to generate all constant output in the proper language. +# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, +# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), +# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, +# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), +# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, +# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, +# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, +# Ukrainian and Vietnamese. +# The default value is: English. 
+ +OUTPUT_LANGUAGE = English + +# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member +# descriptions after the members that are listed in the file and class +# documentation (similar to Javadoc). Set to NO to disable this. +# The default value is: YES. + +BRIEF_MEMBER_DESC = YES + +# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief +# description of a member or function before the detailed description +# +# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the +# brief descriptions will be completely suppressed. +# The default value is: YES. + +REPEAT_BRIEF = YES + +# This tag implements a quasi-intelligent brief description abbreviator that is +# used to form the text in various listings. Each string in this list, if found +# as the leading text of the brief description, will be stripped from the text +# and the result, after processing the whole list, is used as the annotated +# text. Otherwise, the brief description is used as-is. If left blank, the +# following values are used ($name is automatically replaced with the name of +# the entity):The $name class, The $name widget, The $name file, is, provides, +# specifies, contains, represents, a, an and the. + +ABBREVIATE_BRIEF = "The $name class" \ + "The $name widget" \ + "The $name file" \ + is \ + provides \ + specifies \ + contains \ + represents \ + a \ + an \ + the + +# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then +# doxygen will generate a detailed section even if there is only a brief +# description. +# The default value is: NO. + +ALWAYS_DETAILED_SEC = NO + +# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all +# inherited members of a class in the documentation of that class as if those +# members were ordinary class members. Constructors, destructors and assignment +# operators of the base classes will not be shown. +# The default value is: NO. 
+ +INLINE_INHERITED_MEMB = NO + +# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path +# before files name in the file list and in the header files. If set to NO the +# shortest path that makes the file name unique will be used +# The default value is: YES. + +FULL_PATH_NAMES = YES + +# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. +# Stripping is only done if one of the specified strings matches the left-hand +# part of the path. The tag can be used to show relative paths in the file list. +# If left blank the directory from which doxygen is run is used as the path to +# strip. +# +# Note that you can specify absolute paths here, but also relative paths, which +# will be relative from the directory where doxygen is started. +# This tag requires that the tag FULL_PATH_NAMES is set to YES. + +STRIP_FROM_PATH = + +# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the +# path mentioned in the documentation of a class, which tells the reader which +# header file to include in order to use a class. If left blank only the name of +# the header file containing the class definition is used. Otherwise one should +# specify the list of include paths that are normally passed to the compiler +# using the -I flag. + +STRIP_FROM_INC_PATH = + +# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but +# less readable) file names. This can be useful is your file systems doesn't +# support long names like on DOS, Mac, or CD-ROM. +# The default value is: NO. + +SHORT_NAMES = NO + +# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the +# first line (until the first dot) of a Javadoc-style comment as the brief +# description. If set to NO, the Javadoc-style will behave just like regular Qt- +# style comments (thus requiring an explicit @brief command for a brief +# description.) +# The default value is: NO. 
+ +JAVADOC_AUTOBRIEF = NO + +# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first +# line (until the first dot) of a Qt-style comment as the brief description. If +# set to NO, the Qt-style will behave just like regular Qt-style comments (thus +# requiring an explicit \brief command for a brief description.) +# The default value is: NO. + +QT_AUTOBRIEF = NO + +# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a +# multi-line C++ special comment block (i.e. a block of //! or /// comments) as +# a brief description. This used to be the default behavior. The new default is +# to treat a multi-line C++ comment block as a detailed description. Set this +# tag to YES if you prefer the old behavior instead. +# +# Note that setting this tag to YES also means that rational rose comments are +# not recognized any more. +# The default value is: NO. + +MULTILINE_CPP_IS_BRIEF = NO + +# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the +# documentation from any documented member that it re-implements. +# The default value is: YES. + +INHERIT_DOCS = YES + +# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new +# page for each member. If set to NO, the documentation of a member will be part +# of the file/class/namespace that contains it. +# The default value is: NO. + +SEPARATE_MEMBER_PAGES = NO + +# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen +# uses this value to replace tabs by spaces in code fragments. +# Minimum value: 1, maximum value: 16, default value: 4. + +TAB_SIZE = 4 + +# This tag can be used to specify a number of aliases that act as commands in +# the documentation. An alias has the form: +# name=value +# For example adding +# "sideeffect=@par Side Effects:\n" +# will allow you to put the command \sideeffect (or @sideeffect) in the +# documentation, which will result in a user-defined paragraph with heading +# "Side Effects:". 
You can put \n's in the value part of an alias to insert +# newlines (in the resulting output). You can put ^^ in the value part of an +# alias to insert a newline as if a physical newline was in the original file. + +ALIASES = + +# This tag can be used to specify a number of word-keyword mappings (TCL only). +# A mapping has the form "name=value". For example adding "class=itcl::class" +# will allow you to use the command class in the itcl::class meaning. + +TCL_SUBST = + +# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources +# only. Doxygen will then generate output that is more tailored for C. For +# instance, some of the names that are used will be different. The list of all +# members will be omitted, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_FOR_C = NO + +# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or +# Python sources only. Doxygen will then generate output that is more tailored +# for that language. For instance, namespaces will be presented as packages, +# qualified scopes will look different, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_JAVA = NO + +# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran +# sources. Doxygen will then generate output that is tailored for Fortran. +# The default value is: NO. + +OPTIMIZE_FOR_FORTRAN = NO + +# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL +# sources. Doxygen will then generate output that is tailored for VHDL. +# The default value is: NO. + +OPTIMIZE_OUTPUT_VHDL = NO + +# Doxygen selects the parser to use depending on the extension of the files it +# parses. With this tag you can assign which parser to use for a given +# extension. Doxygen has a built-in mapping, but you can override or extend it +# using this tag. 
The format is ext=language, where ext is a file extension, and
+# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
+# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
+# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
+# Fortran. In the latter case the parser tries to guess whether the code is fixed
+# or free formatted code, this is the default for Fortran type files), VHDL. For
+# instance to make doxygen treat .inc files as Fortran files (default is PHP),
+# and .f files as C (default is Fortran), use: inc=Fortran f=C.
+#
+# Note: For files without extension you can use no_extension as a placeholder.
+#
+# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
+# the files are not read by doxygen.
+
+EXTENSION_MAPPING      =
+
+# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
+# according to the Markdown format, which allows for more readable
+# documentation. See http://daringfireball.net/projects/markdown/ for details.
+# The output of markdown processing is further processed by doxygen, so you can
+# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
+# case of backward compatibility issues.
+# The default value is: YES.
+
+MARKDOWN_SUPPORT       = YES
+
+# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up
+# to that level are automatically included in the table of contents, even if
+# they do not have an id attribute.
+# Note: This feature currently applies only to Markdown headings.
+# Minimum value: 0, maximum value: 99, default value: 0.
+# This tag requires that the tag MARKDOWN_SUPPORT is set to YES.
+
+TOC_INCLUDE_HEADINGS   = 0
+
+# When enabled doxygen tries to link words that correspond to documented
+# classes, or namespaces to their corresponding documentation. 
Such a link can
+# be prevented in individual cases by putting a % sign in front of the word or
+# globally by setting AUTOLINK_SUPPORT to NO.
+# The default value is: YES.
+
+AUTOLINK_SUPPORT       = YES
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should set this
+# tag to YES in order to let doxygen match function declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string);
+# versus func(std::string) {}). This also makes the inheritance and collaboration
+# diagrams that involve STL classes more complete and accurate.
+# The default value is: NO.
+
+BUILTIN_STL_SUPPORT    = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+# The default value is: NO.
+
+CPP_CLI_SUPPORT        = NO
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
+# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
+# will parse them like normal C++ but will assume all classes use public instead
+# of private inheritance when no explicit protection keyword is present.
+# The default value is: NO.
+
+SIP_SUPPORT            = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate
+# getter and setter methods for a property. Setting this option to YES will make
+# doxygen replace the get and set methods by a property in the documentation.
+# This will only work if the methods are indeed getting or setting a simple
+# type. If this is not the case, or you want to show the methods anyway, you
+# should set this option to NO.
+# The default value is: YES.
+
+IDL_PROPERTY_SUPPORT   = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. 
By default +# all members of a group must be documented explicitly. +# The default value is: NO. + +DISTRIBUTE_GROUP_DOC = NO + +# If one adds a struct or class to a group and this option is enabled, then also +# any nested class or struct is added to the same group. By default this option +# is disabled and one has to add nested compounds explicitly via \ingroup. +# The default value is: NO. + +GROUP_NESTED_COMPOUNDS = NO + +# Set the SUBGROUPING tag to YES to allow class member groups of the same type +# (for instance a group of public functions) to be put as a subgroup of that +# type (e.g. under the Public Functions section). Set it to NO to prevent +# subgrouping. Alternatively, this can be done per class using the +# \nosubgrouping command. +# The default value is: YES. + +SUBGROUPING = YES + +# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions +# are shown inside the group in which they are included (e.g. using \ingroup) +# instead of on a separate page (for HTML and Man pages) or section (for LaTeX +# and RTF). +# +# Note that this feature does not work in combination with +# SEPARATE_MEMBER_PAGES. +# The default value is: NO. + +INLINE_GROUPED_CLASSES = NO + +# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions +# with only public data fields or simple typedef fields will be shown inline in +# the documentation of the scope in which they are defined (i.e. file, +# namespace, or group documentation), provided this scope is documented. If set +# to NO, structs, classes, and unions are shown on a separate page (for HTML and +# Man pages) or section (for LaTeX and RTF). +# The default value is: NO. + +INLINE_SIMPLE_STRUCTS = NO + +# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or +# enum is documented as struct, union, or enum with the name of the typedef. So +# typedef struct TypeS {} TypeT, will appear in the documentation as a struct +# with name TypeT. 
When disabled the typedef will appear as a member of a file, +# namespace, or class. And the struct will be named TypeS. This can typically be +# useful for C code in case the coding convention dictates that all compound +# types are typedef'ed and only the typedef is referenced, never the tag name. +# The default value is: NO. + +TYPEDEF_HIDES_STRUCT = NO + +# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This +# cache is used to resolve symbols given their name and scope. Since this can be +# an expensive process and often the same symbol appears multiple times in the +# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small +# doxygen will become slower. If the cache is too large, memory is wasted. The +# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range +# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 +# symbols. At the end of a run doxygen will report the cache usage and suggest +# the optimal cache size from a speed point of view. +# Minimum value: 0, maximum value: 9, default value: 0. + +LOOKUP_CACHE_SIZE = 0 + +#--------------------------------------------------------------------------- +# Build related configuration options +#--------------------------------------------------------------------------- + +# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in +# documentation are documented, even if no documentation was available. Private +# class members and static file members will be hidden unless the +# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. +# Note: This will also disable the warnings about undocumented members that are +# normally produced when WARNINGS is set to YES. +# The default value is: NO. + +EXTRACT_ALL = NO + +# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will +# be included in the documentation. +# The default value is: NO. 
+
+EXTRACT_PRIVATE        = NO
+
+# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
+# scope will be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PACKAGE        = NO
+
+# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
+# included in the documentation.
+# The default value is: NO.
+
+EXTRACT_STATIC         = NO
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
+# locally in source files will be included in the documentation. If set to NO,
+# only classes defined in header files are included. Does not have any effect
+# for Java sources.
+# The default value is: YES.
+
+EXTRACT_LOCAL_CLASSES  = YES
+
+# This flag is only useful for Objective-C code. If set to YES, local methods,
+# which are defined in the implementation section but not in the interface are
+# included in the documentation. If set to NO, only methods in the interface are
+# included.
+# The default value is: NO.
+
+EXTRACT_LOCAL_METHODS  = NO
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base name of
+# the file that contains the anonymous namespace. By default anonymous namespaces
+# are hidden.
+# The default value is: NO.
+
+EXTRACT_ANON_NSPACES   = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
+# undocumented members inside documented classes or files. If set to NO these
+# members will be included in the various overviews, but no documentation
+# section is generated. This option has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_MEMBERS     = YES
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy. If set
+# to NO, these classes will be included in the various overviews. 
This option +# has no effect if EXTRACT_ALL is enabled. +# The default value is: NO. + +HIDE_UNDOC_CLASSES = NO + +# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend +# (class|struct|union) declarations. If set to NO, these declarations will be +# included in the documentation. +# The default value is: NO. + +HIDE_FRIEND_COMPOUNDS = NO + +# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any +# documentation blocks found inside the body of a function. If set to NO, these +# blocks will be appended to the function's detailed documentation block. +# The default value is: NO. + +HIDE_IN_BODY_DOCS = NO + +# The INTERNAL_DOCS tag determines if documentation that is typed after a +# \internal command is included. If the tag is set to NO then the documentation +# will be excluded. Set it to YES to include the internal documentation. +# The default value is: NO. + +INTERNAL_DOCS = NO + +# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file +# names in lower-case letters. If set to YES, upper-case letters are also +# allowed. This is useful if you have classes or files whose names only differ +# in case and if your file system supports case sensitive file names. Windows +# and Mac users are advised to set this option to NO. +# The default value is: system dependent. + +CASE_SENSE_NAMES = NO + +# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with +# their full class and namespace scopes in the documentation. If set to YES, the +# scope will be hidden. +# The default value is: NO. + +HIDE_SCOPE_NAMES = NO + +# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will +# append additional text to a page's title, such as Class Reference. If set to +# YES the compound reference will be hidden. +# The default value is: NO. 
+ +HIDE_COMPOUND_REFERENCE= NO + +# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of +# the files that are included by a file in the documentation of that file. +# The default value is: YES. + +SHOW_INCLUDE_FILES = YES + +# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each +# grouped member an include statement to the documentation, telling the reader +# which file to include in order to use the member. +# The default value is: NO. + +SHOW_GROUPED_MEMB_INC = NO + +# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include +# files with double quotes in the documentation rather than with sharp brackets. +# The default value is: NO. + +FORCE_LOCAL_INCLUDES = NO + +# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the +# documentation for inline members. +# The default value is: YES. + +INLINE_INFO = YES + +# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the +# (detailed) documentation of file and class members alphabetically by member +# name. If set to NO, the members will appear in declaration order. +# The default value is: YES. + +SORT_MEMBER_DOCS = YES + +# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief +# descriptions of file, namespace and class members alphabetically by member +# name. If set to NO, the members will appear in declaration order. Note that +# this will also influence the order of the classes in the class list. +# The default value is: NO. + +SORT_BRIEF_DOCS = NO + +# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the +# (brief and detailed) documentation of class members so that constructors and +# destructors are listed first. If set to NO the constructors will appear in the +# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. +# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief +# member documentation. 
+# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting +# detailed member documentation. +# The default value is: NO. + +SORT_MEMBERS_CTORS_1ST = NO + +# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy +# of group names into alphabetical order. If set to NO the group names will +# appear in their defined order. +# The default value is: NO. + +SORT_GROUP_NAMES = NO + +# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by +# fully-qualified names, including namespaces. If set to NO, the class list will +# be sorted only by class name, not including the namespace part. +# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. +# Note: This option applies only to the class list, not to the alphabetical +# list. +# The default value is: NO. + +SORT_BY_SCOPE_NAME = NO + +# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper +# type resolution of all parameters of a function it will reject a match between +# the prototype and the implementation of a member function even if there is +# only one candidate or it is obvious which candidate to choose by doing a +# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still +# accept a match between prototype and implementation in such cases. +# The default value is: NO. + +STRICT_PROTO_MATCHING = NO + +# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo +# list. This list is created by putting \todo commands in the documentation. +# The default value is: YES. + +GENERATE_TODOLIST = YES + +# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test +# list. This list is created by putting \test commands in the documentation. +# The default value is: YES. + +GENERATE_TESTLIST = YES + +# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug +# list. This list is created by putting \bug commands in the documentation. 
+# The default value is: YES. + +GENERATE_BUGLIST = YES + +# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO) +# the deprecated list. This list is created by putting \deprecated commands in +# the documentation. +# The default value is: YES. + +GENERATE_DEPRECATEDLIST= YES + +# The ENABLED_SECTIONS tag can be used to enable conditional documentation +# sections, marked by \if ... \endif and \cond +# ... \endcond blocks. + +ENABLED_SECTIONS = + +# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the +# initial value of a variable or macro / define can have for it to appear in the +# documentation. If the initializer consists of more lines than specified here +# it will be hidden. Use a value of 0 to hide initializers completely. The +# appearance of the value of individual variables and macros / defines can be +# controlled using \showinitializer or \hideinitializer command in the +# documentation regardless of this setting. +# Minimum value: 0, maximum value: 10000, default value: 30. + +MAX_INITIALIZER_LINES = 30 + +# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at +# the bottom of the documentation of classes and structs. If set to YES, the +# list will mention the files that were used to generate the documentation. +# The default value is: YES. + +SHOW_USED_FILES = YES + +# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This +# will remove the Files entry from the Quick Index and from the Folder Tree View +# (if specified). +# The default value is: YES. + +SHOW_FILES = YES + +# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces +# page. This will remove the Namespaces entry from the Quick Index and from the +# Folder Tree View (if specified). +# The default value is: YES. 
+
+SHOW_NAMESPACES        = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that
+# doxygen should invoke to get the current version for each file (typically from
+# the version control system). Doxygen will invoke the program by executing (via
+# popen()) the command <command> <input-file>, where <command> is the value of
+# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
+# provided by doxygen. Whatever the program writes to standard output is used as
+# the file version. For an example see the documentation.
+
+FILE_VERSION_FILTER    =
+
+# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
+# by doxygen. The layout file controls the global structure of the generated
+# output files in an output format independent way. To create the layout file
+# that represents doxygen's defaults, run doxygen with the -l option. You can
+# optionally specify a file name after the option, if omitted DoxygenLayout.xml
+# will be used as the name of the layout file.
+#
+# Note that if you run doxygen from a directory containing a file called
+# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
+# tag is left empty.
+
+LAYOUT_FILE            =
+
+# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
+# the reference definitions. This must be a list of .bib files. The .bib
+# extension is automatically appended if omitted. This requires the bibtex tool
+# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
+# For LaTeX the style of the bibliography can be controlled using
+# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
+# search path. See also \cite for info how to create references. 
+ +CITE_BIB_FILES = + +#--------------------------------------------------------------------------- +# Configuration options related to warning and progress messages +#--------------------------------------------------------------------------- + +# The QUIET tag can be used to turn on/off the messages that are generated to +# standard output by doxygen. If QUIET is set to YES this implies that the +# messages are off. +# The default value is: NO. + +QUIET = NO + +# The WARNINGS tag can be used to turn on/off the warning messages that are +# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES +# this implies that the warnings are on. +# +# Tip: Turn warnings on while writing the documentation. +# The default value is: YES. + +WARNINGS = YES + +# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate +# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag +# will automatically be disabled. +# The default value is: YES. + +WARN_IF_UNDOCUMENTED = YES + +# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for +# potential errors in the documentation, such as not documenting some parameters +# in a documented function, or documenting parameters that don't exist or using +# markup commands wrongly. +# The default value is: YES. + +WARN_IF_DOC_ERROR = YES + +# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that +# are documented, but have no documentation for their parameters or return +# value. If set to NO, doxygen will only warn about wrong or incomplete +# parameter documentation, but not about the absence of documentation. +# The default value is: NO. + +WARN_NO_PARAMDOC = NO + +# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when +# a warning is encountered. +# The default value is: NO. + +WARN_AS_ERROR = NO + +# The WARN_FORMAT tag determines the format of the warning messages that doxygen +# can produce. 
The string should contain the $file, $line, and $text tags, which +# will be replaced by the file and line number from which the warning originated +# and the warning text. Optionally the format may contain $version, which will +# be replaced by the version of the file (if it could be obtained via +# FILE_VERSION_FILTER) +# The default value is: $file:$line: $text. + +WARN_FORMAT = "$file:$line: $text" + +# The WARN_LOGFILE tag can be used to specify a file to which warning and error +# messages should be written. If left blank the output is written to standard +# error (stderr). + +WARN_LOGFILE = + +#--------------------------------------------------------------------------- +# Configuration options related to the input files +#--------------------------------------------------------------------------- + +# The INPUT tag is used to specify the files and/or directories that contain +# documented source files. You may enter file names like myfile.cpp or +# directories like /usr/src/myproject. Separate the files or directories with +# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING +# Note: If this tag is empty the current directory is searched. + +INPUT = src + +# This tag can be used to specify the character encoding of the source files +# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses +# libiconv (or the iconv built into libc) for the transcoding. See the libiconv +# documentation (see: https://www.gnu.org/software/libiconv/) for the list of +# possible encodings. +# The default value is: UTF-8. + +INPUT_ENCODING = UTF-8 + +# If the value of the INPUT tag contains directories, you can use the +# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and +# *.h) to filter out the source-files in the directories. +# +# Note that for custom extensions or not directly supported extensions you also +# need to set EXTENSION_MAPPING for the extension otherwise the files are not +# read by doxygen. 
+# +# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp, +# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, +# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, +# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08, +# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf. + +FILE_PATTERNS = *.php \ + *.php4 \ + *.php5 \ + *.phtml + +# The RECURSIVE tag can be used to specify whether or not subdirectories should +# be searched for input files as well. +# The default value is: NO. + +RECURSIVE = YES + +# The EXCLUDE tag can be used to specify files and/or directories that should be +# excluded from the INPUT source files. This way you can easily exclude a +# subdirectory from a directory tree whose root is specified with the INPUT tag. +# +# Note that relative paths are relative to the directory from which doxygen is +# run. + +EXCLUDE = src/Apache/Ignite/Internal + +# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or +# directories that are symbolic links (a Unix file system feature) are excluded +# from the input. +# The default value is: NO. + +EXCLUDE_SYMLINKS = NO + +# If the value of the INPUT tag contains directories, you can use the +# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude +# certain files from those directories. +# +# Note that the wildcards are matched against the file with absolute path, so to +# exclude all test directories for example use the pattern */test/* + +EXCLUDE_PATTERNS = + +# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names +# (namespaces, classes, functions, etc.) that should be excluded from the +# output. The symbol name can be a fully qualified name, a word, or if the +# wildcard * is used, a substring. 
Examples: ANamespace, AClass,
+# AClass::ANamespace, ANamespace::*Test
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories use the pattern */test/*
+
+EXCLUDE_SYMBOLS        =
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or directories
+# that contain example code fragments that are included (see the \include
+# command).
+
+EXAMPLE_PATH           =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories. If left blank all
+# files are included.
+
+EXAMPLE_PATTERNS       = *
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude commands
+# irrespective of the value of the RECURSIVE tag.
+# The default value is: NO.
+
+EXAMPLE_RECURSIVE      = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or directories
+# that contain images that are to be included in the documentation (see the
+# \image command).
+
+IMAGE_PATH             =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command:
+#
+# <filter> <input-file>
+#
+# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
+# name of an input file. Doxygen will then use the output that the filter
+# program writes to standard output. If FILTER_PATTERNS is specified, this tag
+# will be ignored.
+#
+# Note that the filter must not add or remove lines; it is applied before the
+# code is scanned, but not when the output code is generated. If lines are added
+# or removed, the anchors will not be placed correctly. 
+# +# Note that for custom extensions or not directly supported extensions you also +# need to set EXTENSION_MAPPING for the extension otherwise the files are not +# properly processed by doxygen. + +INPUT_FILTER = + +# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern +# basis. Doxygen will compare the file name with each pattern and apply the +# filter if there is a match. The filters are a list of the form: pattern=filter +# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how +# filters are used. If the FILTER_PATTERNS tag is empty or if none of the +# patterns match the file name, INPUT_FILTER is applied. +# +# Note that for custom extensions or not directly supported extensions you also +# need to set EXTENSION_MAPPING for the extension otherwise the files are not +# properly processed by doxygen. + +FILTER_PATTERNS = + +# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using +# INPUT_FILTER) will also be used to filter the input files that are used for +# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES). +# The default value is: NO. + +FILTER_SOURCE_FILES = NO + +# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file +# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and +# it is also possible to disable source filtering for a specific pattern using +# *.ext= (so without naming a filter). +# This tag requires that the tag FILTER_SOURCE_FILES is set to YES. + +FILTER_SOURCE_PATTERNS = + +# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that +# is part of the input, its contents will be placed on the main page +# (index.html). This can be useful if you have a project on for instance GitHub +# and want to reuse the introduction page also for the doxygen output. 
+ +USE_MDFILE_AS_MAINPAGE = + +#--------------------------------------------------------------------------- +# Configuration options related to source browsing +#--------------------------------------------------------------------------- + +# If the SOURCE_BROWSER tag is set to YES then a list of source files will be +# generated. Documented entities will be cross-referenced with these sources. +# +# Note: To get rid of all source code in the generated output, make sure that +# also VERBATIM_HEADERS is set to NO. +# The default value is: NO. + +SOURCE_BROWSER = NO + +# Setting the INLINE_SOURCES tag to YES will include the body of functions, +# classes and enums directly into the documentation. +# The default value is: NO. + +INLINE_SOURCES = NO + +# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any +# special comment blocks from generated source code fragments. Normal C, C++ and +# Fortran comments will always remain visible. +# The default value is: YES. + +STRIP_CODE_COMMENTS = YES + +# If the REFERENCED_BY_RELATION tag is set to YES then for each documented +# function all documented functions referencing it will be listed. +# The default value is: NO. + +REFERENCED_BY_RELATION = NO + +# If the REFERENCES_RELATION tag is set to YES then for each documented function +# all documented entities called/used by that function will be listed. +# The default value is: NO. + +REFERENCES_RELATION = NO + +# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set +# to YES then the hyperlinks from functions in REFERENCES_RELATION and +# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will +# link to the documentation. +# The default value is: YES. 
+
+REFERENCES_LINK_SOURCE = YES
+
+# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
+# source code will show a tooltip with additional information such as prototype,
+# brief description and links to the definition and documentation. Since this
+# will make the HTML file larger and loading of large files a bit slower, you
+# can opt to disable this feature.
+# The default value is: YES.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+SOURCE_TOOLTIPS        = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code will
+# point to the HTML generated by the htags(1) tool instead of doxygen built-in
+# source browser. The htags tool is part of GNU's global source tagging system
+# (see https://www.gnu.org/software/global/global.html). You will need version
+# 4.8.6 or higher.
+#
+# To use it do the following:
+# - Install the latest version of global
+# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
+# - Make sure the INPUT points to the root of the source tree
+# - Run doxygen as normal
+#
+# Doxygen will invoke htags (and that will in turn invoke gtags), so these
+# tools must be available from the command line (i.e. in the search path).
+#
+# The result: instead of the source browser generated by doxygen, the links to
+# source code will now point to the output of htags.
+# The default value is: NO.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+USE_HTAGS              = NO
+
+# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
+# verbatim copy of the header file for each class for which an include is
+# specified. Set to NO to disable this.
+# See also: Section \class.
+# The default value is: YES.
+
+VERBATIM_HEADERS       = YES
+
+# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
+# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the
+# cost of reduced performance. 
This can be particularly helpful with template +# rich C++ code for which doxygen's built-in parser lacks the necessary type +# information. +# Note: The availability of this option depends on whether or not doxygen was +# generated with the -Duse-libclang=ON option for CMake. +# The default value is: NO. + +CLANG_ASSISTED_PARSING = NO + +# If clang assisted parsing is enabled you can provide the compiler with command +# line options that you would normally use when invoking the compiler. Note that +# the include paths will already be set by doxygen for the files and directories +# specified with INPUT and INCLUDE_PATH. +# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES. + +CLANG_OPTIONS = + +# If clang assisted parsing is enabled you can provide the clang parser with the +# path to the compilation database (see: +# http://clang.llvm.org/docs/HowToSetupToolingForLLVM.html) used when the files +# were built. This is equivalent to specifying the "-p" option to a clang tool, +# such as clang-check. These options will then be passed to the parser. +# Note: The availability of this option depends on whether or not doxygen was +# generated with the -Duse-libclang=ON option for CMake. +# The default value is: 0. + +CLANG_COMPILATION_DATABASE_PATH = 0 + +#--------------------------------------------------------------------------- +# Configuration options related to the alphabetical class index +#--------------------------------------------------------------------------- + +# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all +# compounds will be generated. Enable this if the project contains a lot of +# classes, structs, unions or interfaces. +# The default value is: YES. + +ALPHABETICAL_INDEX = YES + +# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in +# which the alphabetical index list will be split. +# Minimum value: 1, maximum value: 20, default value: 5. 
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES. + +COLS_IN_ALPHA_INDEX = 5 + +# In case all classes in a project start with a common prefix, all classes will +# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag +# can be used to specify a prefix (or a list of prefixes) that should be ignored +# while generating the index headers. +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES. + +IGNORE_PREFIX = + +#--------------------------------------------------------------------------- +# Configuration options related to the HTML output +#--------------------------------------------------------------------------- + +# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output +# The default value is: YES. + +GENERATE_HTML = YES + +# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of +# it. +# The default directory is: html. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_OUTPUT = html + +# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each +# generated HTML page (for example: .htm, .php, .asp). +# The default value is: .html. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_FILE_EXTENSION = .html + +# The HTML_HEADER tag can be used to specify a user-defined HTML header file for +# each generated HTML page. If the tag is left blank doxygen will generate a +# standard header. +# +# To get valid HTML, the header file must include any scripts and style sheets +# that doxygen needs, which is dependent on the configuration options used (e.g. +# the setting GENERATE_TREEVIEW). It is highly recommended to start with a +# default header using +# doxygen -w html new_header.html new_footer.html new_stylesheet.css +# YourConfigFile +# and then modify the file new_header.html.
See also section "Doxygen usage" +# for information on how to generate the default header that doxygen normally +# uses. +# Note: The header is subject to change so you typically have to regenerate the +# default header when upgrading to a newer version of doxygen. For a description +# of the possible markers and block names see the documentation. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_HEADER = + +# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each +# generated HTML page. If the tag is left blank doxygen will generate a standard +# footer. See HTML_HEADER for more information on how to generate a default +# footer and what special commands can be used inside the footer. See also +# section "Doxygen usage" for information on how to generate the default footer +# that doxygen normally uses. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_FOOTER = + +# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style +# sheet that is used by each HTML page. It can be used to fine-tune the look of +# the HTML output. If left blank doxygen will generate a default style sheet. +# See also section "Doxygen usage" for information on how to generate the style +# sheet that doxygen normally uses. +# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as +# it is more robust and this tag (HTML_STYLESHEET) will in the future become +# obsolete. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_STYLESHEET = + +# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined +# cascading style sheets that are included after the standard style sheets +# created by doxygen. Using this option one can overrule certain style aspects. +# This is preferred over using HTML_STYLESHEET since it does not replace the +# standard style sheet and is therefore more robust against future updates. 
+# Doxygen will copy the style sheet files to the output directory. +# Note: The order of the extra style sheet files is of importance (e.g. the last +# style sheet in the list overrules the setting of the previous ones in the +# list). For an example see the documentation. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_EXTRA_STYLESHEET = + +# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or +# other source files which should be copied to the HTML output directory. Note +# that these files will be copied to the base HTML output directory. Use the +# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these +# files. In the HTML_STYLESHEET file, use the file name only. Also note that the +# files will be copied as-is; there are no commands or markers available. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_EXTRA_FILES = + +# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen +# will adjust the colors in the style sheet and background images according to +# this color. Hue is specified as an angle on a colorwheel, see +# https://en.wikipedia.org/wiki/Hue for more information. For instance the value +# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300 +# purple, and 360 is red again. +# Minimum value: 0, maximum value: 359, default value: 220. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_HUE = 220 + +# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors +# in the HTML output. For a value of 0 the output will use grayscales only. A +# value of 255 will produce the most vivid colors. +# Minimum value: 0, maximum value: 255, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_SAT = 100 + +# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the +# luminance component of the colors in the HTML output. 
Values below 100 +# gradually make the output lighter, whereas values above 100 make the output +# darker. The value divided by 100 is the actual gamma applied, so 80 represents +# a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not +# change the gamma. +# Minimum value: 40, maximum value: 240, default value: 80. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_GAMMA = 80 + +# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML +# page will contain the date and time when the page was generated. Setting this +# to YES can help to show when doxygen was last run and thus if the +# documentation is up to date. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_TIMESTAMP = NO + +# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML +# documentation will contain a main index with vertical navigation menus that +# are dynamically created via Javascript. If disabled, the navigation index will +# consists of multiple levels of tabs that are statically embedded in every HTML +# page. Disable this option to support browsers that do not have Javascript, +# like the Qt help browser. +# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_DYNAMIC_MENUS = YES + +# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML +# documentation will contain sections that can be hidden and shown after the +# page has loaded. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_DYNAMIC_SECTIONS = NO + +# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries +# shown in the various tree structured indices initially; the user can expand +# and collapse entries dynamically later on. 
Doxygen will expand the tree to +# such a level that at most the specified number of entries are visible (unless +# a fully collapsed tree already exceeds this amount). So setting the number of +# entries 1 will produce a full collapsed tree by default. 0 is a special value +# representing an infinite number of entries and will result in a full expanded +# tree by default. +# Minimum value: 0, maximum value: 9999, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_INDEX_NUM_ENTRIES = 100 + +# If the GENERATE_DOCSET tag is set to YES, additional index files will be +# generated that can be used as input for Apple's Xcode 3 integrated development +# environment (see: https://developer.apple.com/tools/xcode/), introduced with +# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a +# Makefile in the HTML output directory. Running make will produce the docset in +# that directory and running make install will install the docset in +# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at +# startup. See https://developer.apple.com/tools/creatingdocsetswithdoxygen.html +# for more information. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_DOCSET = NO + +# This tag determines the name of the docset feed. A documentation feed provides +# an umbrella under which multiple documentation sets from a single provider +# (such as a company or product suite) can be grouped. +# The default value is: Doxygen generated docs. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_FEEDNAME = "Doxygen generated docs" + +# This tag specifies a string that should uniquely identify the documentation +# set bundle. This should be a reverse domain-name style string, e.g. +# com.mycompany.MyDocSet. Doxygen will append .docset to the name. +# The default value is: org.doxygen.Project. 
+# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_BUNDLE_ID = org.doxygen.Project + +# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify +# the documentation publisher. This should be a reverse domain-name style +# string, e.g. com.mycompany.MyDocSet.documentation. +# The default value is: org.doxygen.Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_ID = org.doxygen.Publisher + +# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher. +# The default value is: Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_NAME = Publisher + +# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three +# additional HTML index files: index.hhp, index.hhc, and index.hhk. The +# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop +# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on +# Windows. +# +# The HTML Help Workshop contains a compiler that can convert all HTML output +# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML +# files are now used as the Windows 98 help format, and will replace the old +# Windows help format (.hlp) on all Windows platforms in the future. Compressed +# HTML files also contain an index, a table of contents, and you can search for +# words in the documentation. The HTML workshop also contains a viewer for +# compressed HTML files. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_HTMLHELP = NO + +# The CHM_FILE tag can be used to specify the file name of the resulting .chm +# file. You can add a path in front of the file if the result should not be +# written to the html output directory. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. 
+ +CHM_FILE = + +# The HHC_LOCATION tag can be used to specify the location (absolute path +# including file name) of the HTML help compiler (hhc.exe). If non-empty, +# doxygen will try to run the HTML help compiler on the generated index.hhp. +# The file has to be specified with full path. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +HHC_LOCATION = + +# The GENERATE_CHI flag controls if a separate .chi index file is generated +# (YES) or that it should be included in the master .chm file (NO). +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +GENERATE_CHI = NO + +# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc) +# and project file content. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +CHM_INDEX_ENCODING = + +# The BINARY_TOC flag controls whether a binary table of contents is generated +# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it +# enables the Previous and Next buttons. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +BINARY_TOC = NO + +# The TOC_EXPAND flag can be set to YES to add extra items for group members to +# the table of contents of the HTML help documentation and to the tree view. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +TOC_EXPAND = NO + +# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and +# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that +# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help +# (.qch) of the generated HTML documentation. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_QHP = NO + +# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify +# the file name of the resulting .qch file. 
The path specified is relative to +# the HTML output folder. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QCH_FILE = + +# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help +# Project output. For more information please see Qt Help Project / Namespace +# (see: http://doc.qt.io/qt-4.8/qthelpproject.html#namespace). +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_NAMESPACE = org.doxygen.Project + +# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt +# Help Project output. For more information please see Qt Help Project / Virtual +# Folders (see: http://doc.qt.io/qt-4.8/qthelpproject.html#virtual-folders). +# The default value is: doc. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_VIRTUAL_FOLDER = doc + +# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom +# filter to add. For more information please see Qt Help Project / Custom +# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_NAME = + +# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the +# custom filter to add. For more information please see Qt Help Project / Custom +# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_ATTRS = + +# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this +# project's filter section matches. Qt Help Project / Filter Attributes (see: +# http://doc.qt.io/qt-4.8/qthelpproject.html#filter-attributes). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_SECT_FILTER_ATTRS = + +# The QHG_LOCATION tag can be used to specify the location of Qt's +# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the +# generated .qhp file. 
+# This tag requires that the tag GENERATE_QHP is set to YES. + +QHG_LOCATION = + +# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be +# generated, together with the HTML files, they form an Eclipse help plugin. To +# install this plugin and make it available under the help contents menu in +# Eclipse, the contents of the directory containing the HTML and XML files needs +# to be copied into the plugins directory of eclipse. The name of the directory +# within the plugins directory should be the same as the ECLIPSE_DOC_ID value. +# After copying Eclipse needs to be restarted before the help appears. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_ECLIPSEHELP = NO + +# A unique identifier for the Eclipse help plugin. When installing the plugin +# the directory name containing the HTML and XML files should also have this +# name. Each documentation set should have its own identifier. +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES. + +ECLIPSE_DOC_ID = org.doxygen.Project + +# If you want full control over the layout of the generated HTML pages it might +# be necessary to disable the index and replace it with your own. The +# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top +# of each HTML page. A value of NO enables the index and the value YES disables +# it. Since the tabs in the index contain the same information as the navigation +# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +DISABLE_INDEX = NO + +# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index +# structure should be generated to display hierarchical information. 
If the tag +# value is set to YES, a side panel will be generated containing a tree-like +# index structure (just like the one that is generated for HTML Help). For this +# to work a browser that supports JavaScript, DHTML, CSS and frames is required +# (i.e. any modern browser). Windows users are probably better off using the +# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can +# further fine-tune the look of the index. As an example, the default style +# sheet generated by doxygen has an example that shows how to put an image at +# the root of the tree instead of the PROJECT_NAME. Since the tree basically has +# the same information as the tab index, you could consider setting +# DISABLE_INDEX to YES when enabling this option. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_TREEVIEW = NO + +# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that +# doxygen will group on one line in the generated HTML documentation. +# +# Note that a value of 0 will completely suppress the enum values from appearing +# in the overview section. +# Minimum value: 0, maximum value: 20, default value: 4. +# This tag requires that the tag GENERATE_HTML is set to YES. + +ENUM_VALUES_PER_LINE = 4 + +# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used +# to set the initial width (in pixels) of the frame in which the tree is shown. +# Minimum value: 0, maximum value: 1500, default value: 250. +# This tag requires that the tag GENERATE_HTML is set to YES. + +TREEVIEW_WIDTH = 250 + +# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to +# external symbols imported via tag files in a separate window. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +EXT_LINKS_IN_WINDOW = NO + +# Use this tag to change the font size of LaTeX formulas included as images in +# the HTML documentation. 
When you change the font size after a successful +# doxygen run you need to manually remove any form_*.png images from the HTML +# output directory to force them to be regenerated. +# Minimum value: 8, maximum value: 50, default value: 10. +# This tag requires that the tag GENERATE_HTML is set to YES. + +FORMULA_FONTSIZE = 10 + +# Use the FORMULA_TRANSPARENT tag to determine whether or not the images +# generated for formulas are transparent PNGs. Transparent PNGs are not +# supported properly for IE 6.0, but are supported on all modern browsers. +# +# Note that when changing this option you need to delete any form_*.png files in +# the HTML output directory before the changes have effect. +# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. + +FORMULA_TRANSPARENT = YES + +# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see +# https://www.mathjax.org) which uses client side Javascript for the rendering +# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX +# installed or if you want the formulas to look prettier in the HTML output. +# When enabled you may also need to install MathJax separately and configure the +# path to it using the MATHJAX_RELPATH option. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +USE_MATHJAX = NO + +# When MathJax is enabled you can set the default output format to be used for +# the MathJax output. See the MathJax site (see: +# http://docs.mathjax.org/en/latest/output.html) for more details. +# Possible values are: HTML-CSS (which is slower, but has the best +# compatibility), NativeMML (i.e. MathML) and SVG. +# The default value is: HTML-CSS. +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_FORMAT = HTML-CSS + +# When MathJax is enabled you need to specify the location relative to the HTML +# output directory using the MATHJAX_RELPATH option.
The destination directory +# should contain the MathJax.js script. For instance, if the mathjax directory +# is located at the same level as the HTML output directory, then +# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax +# Content Delivery Network so you can quickly see the result without installing +# MathJax. However, it is strongly recommended to install a local copy of +# MathJax from https://www.mathjax.org before deployment. +# The default value is: https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/. +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_RELPATH = https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/ + +# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax +# extension names that should be enabled during MathJax rendering. For example +# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_EXTENSIONS = + +# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces +# of code that will be used on startup of the MathJax code. See the MathJax site +# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an +# example see the documentation. +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_CODEFILE = + +# When the SEARCHENGINE tag is enabled doxygen will generate a search box for +# the HTML output. The underlying search engine uses javascript and DHTML and +# should work on any modern browser. Note that when using HTML help +# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET) +# there is already a search function so this one should typically be disabled. +# For large projects the javascript based search engine can be slow, then +# enabling SERVER_BASED_SEARCH may provide a better solution. 
It is possible to +# search using the keyboard; to jump to the search box use <access key> + S +# (what the <access key> is depends on the OS and browser, but it is typically +# <CTRL>, <ALT>/<option>